There are many great questions about the future of AI. When will we achieve general intelligence? How will AI agents transform industries? Or even simpler ones: what lies beyond transformers? While I do not have the answers to those questions, one thing is certain: they will all remain academic unless we dramatically improve the unit economics of compute. As it turns out, enabling the future of AI requires an entirely new compute fabric, and with it comes the most unreasonable task list in the history of technology. That is exactly what we are doing at Eva: building the future of AI from the atoms up.

Over the past week, markets have reacted dramatically to DeepSeek's release of a high-performance AI model trained at remarkably low cost. Many have begun asking whether it signals the obsolescence of high-performance AI compute, or, by extension, the beginning of the end for NVIDIA. The answer is simply no. Here are some thoughts on why there will never be enough compute in the world, and how Eva is building the next-generation compute fabric to enable the development of advanced AI applications far beyond current reach.
