Every time you use ChatGPT, a tree dies
AI is an energy hog. Training models like GPT burns through enough energy to power a small town, and running them isn't much better. Yet we keep pumping money into AI infrastructure while largely ignoring the possibility that these models could run better with less. It's brute-force progress, and China's DeepSeek just called America out on it.
Sequoia's David Cahn calls it AI's $600 billion question. It costs an absurd amount to build a data center fitted with H100s, only to tear it all down a couple of years later and drop another $200 billion because NVIDIA just shipped Blackwell. One of two things is bound to happen: either compute gets cheaper, or our models get more efficient. Big tech is pumping hundreds of millions into companies like Crusoe that promise sustainable AI infrastructure, but is that really going to solve the problem? AI efficiency isn't just a climate issue; it's also about accessibility.
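To see where a number that big comes from, here's a rough back-of-the-envelope in the spirit of Cahn's argument. The revenue run-rate and the two 2x multipliers below are illustrative assumptions of mine, not his exact figures.

```python
# Back-of-the-envelope on AI infrastructure payback.
# Every number here is an illustrative assumption, not a quoted figure.

nvidia_dc_revenue = 150e9                      # assumed annualized NVIDIA data center run-rate
datacenter_buildout = nvidia_dc_revenue * 2    # GPUs are roughly half the build cost;
                                               # the rest is energy, land, networking, cooling
required_ai_revenue = datacenter_buildout * 2  # buyers need ~50% gross margin to justify the spend

print(f"AI revenue needed to break even: ${required_ai_revenue / 1e9:.0f}B per year")
# ~$600B per year, and the clock resets every time a new GPU generation ships
```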
I don't think it's a money problem; I see it as a math problem. Marc Andreessen called the DeepSeek-fueled tech stock plunge AI's "Sputnik moment". That got me excited, because it proved two things. First, OpenAI and NVIDIA don't have it all figured out: this space is still in its infancy, and there's plenty of room for innovation. Second, efficiency is innovation. Bigger no longer means better if it comes at the cost of sustainability and scalability.
I might have a bit of a bias towards Edge AI, but I see trillion-dollar potential. The limitations of edge deployment (restricted compute, tight energy budgets, and real-time requirements) demand that we innovate smarter, not harder. It's not just about fitting these massive models on smaller chips; it's about rethinking how we design, train, and run them altogether. The race to make AI leaner and faster is on. DeepSeek dealt the first blow. Your move, American Big Tech. Optimization isn't just a side quest anymore; it's becoming the whole game.
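To make "leaner" concrete, here's a minimal sketch of one such technique: post-training dynamic quantization in PyTorch. The toy MLP and the sizes it prints are stand-ins for illustration, not any particular production model.

```python
import io

import torch
import torch.nn as nn

def size_mb(model: nn.Module) -> float:
    """Approximate serialized size of a model's weights in megabytes."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return buffer.getbuffer().nbytes / 1e6

# Toy stand-in for a "big" model: a plain float32 MLP.
model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
)

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly at inference time. No retraining needed.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(f"fp32 model: {size_mb(model):.1f} MB")
print(f"int8 model: {size_mb(quantized):.1f} MB")  # roughly 4x smaller on disk
```

Quantization alone won't put a frontier model on a phone, but it's the kind of performance-per-watt thinking the edge forces on you.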
01/22/25