Why Hardware Companies Are Missing AI's Biggest Market
While everyone focuses on training chips, the real opportunity lies in specialized inference hardware that any company can build.

Hardware companies are missing a big opportunity in AI inference chips. The market is huge, and the technical barrier is far lower than most people think.
Training AI models is difficult: it requires cutting-edge chips and sophisticated software. For training, CUDA is essentially the only game in town, which is why NVIDIA dominates that market. But inference is fundamentally just matrix multiplication. You don’t need CUDA or exotic architectures.
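To make that concrete, here’s a minimal NumPy sketch (the dimensions and random weights are made up for illustration): a transformer-style feed-forward layer at inference time reduces to two matrix multiplies and a ReLU, exactly the kind of fixed, frozen workload a specialized chip can accelerate.

```python
import numpy as np

# Hypothetical toy dimensions; weights are frozen at inference time.
rng = np.random.default_rng(0)
d_model, d_ff = 512, 2048

x = rng.standard_normal((1, d_model))      # one token's activations
W1 = rng.standard_normal((d_model, d_ff))  # pretrained weights (fixed)
b1 = np.zeros(d_ff)
W2 = rng.standard_normal((d_ff, d_model))
b2 = np.zeros(d_model)

h = np.maximum(x @ W1 + b1, 0.0)           # matrix multiply + ReLU
y = h @ W2 + b2                            # matrix multiply
print(y.shape)                             # (1, 512)
```

An inference chip only has to make those matrix multiplies fast and cheap; it never has to support backpropagation or the flexibility that training demands.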
Think of it like what happened with Bitcoin mining. People started with GPUs, but GPUs were built for graphics rendering, not mining. Then ASICs replaced them, because ASICs were faster and more cost-effective at that one fixed workload.
Google proved this with TPUs. Apple is already halfway there with its M-series; people are buying Mac minis by the thousands and clustering them to run open-source models. But there’s room for many more players in this space.
I’m surprised more hardware companies aren’t aggressively pursuing this market. The demand is clearly there: every company wants to run AI models. The first company to ship affordable, fast inference chips for common AI workloads could capture enormous market share.
This feels like a classic case where the infrastructure hasn’t caught up to the demand yet. But that gap creates opportunities for companies willing to invest in the right solutions.