Essay · 4 min read

Why Now Is the Perfect Time for Enterprise LLM Adoption

As frontier model improvements slow down and trust increases, enterprises have a unique window to adopt LLMs without the risk of rapid obsolescence.

Why Now?

I would argue that the best time to introduce LLMs to your enterprise is now, and the case is even stronger than it was a year ago. Start today to multiply your team’s productivity and gain a competitive edge; waiting longer may leave your business lagging behind. Here is why.

Slower Frontier Gains

Even though billions of dollars keep pouring into training frontier LLMs, there are signs of diminishing returns. Yesterday (7 Aug 2025), OpenAI released its new flagship model, GPT-5, and the public reception was lukewarm. There are improvements in quality, but none of them are groundbreaking. For example, its SWE-Bench Verified score (a standard benchmark for coding ability) is 74.9%, only marginally ahead of the previous state of the art, Claude Opus 4.1, at 74.5%.

One or two years ago, a company might have spent significant effort building a sophisticated system around LLMs, only to find the next model release could do the same thing in a single click. Today, improvements look incremental rather than leap-forward, so you can integrate LLMs into your workflow without worrying that months of effort will be made obsolete by the next release.

Less Hallucination, More Trust

OpenAI also reports that GPT-5 reduces hallucinations roughly six-fold compared with o3, which is reassuring. Hallucination is still one of the biggest concerns when businesses consider LLMs: the financial and reputational cost of an AI mistake can easily outweigh the benefit. As raw benchmark gains slow, AI labs are turning their attention to trust and safety, where there is still plenty of room for progress. The lower the hallucination rate, the wider the set of tasks you can safely automate.

A note of caution: LLMs still carry risks, such as data privacy, compliance with sector-specific regulations, and the danger of over-relying on an early prototype. Keep a human in the loop, audit and verify outputs regularly, and fence off any sensitive data.
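To make those mitigations concrete, here is a minimal sketch of a human-in-the-loop guardrail. The patterns, function names, and confidence threshold are illustrative assumptions, not a specific product’s API; a real deployment would use a proper PII detector and a review queue.

```python
import re

# Illustrative patterns only: a production system would use a dedicated
# PII-detection library rather than a couple of regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def redact(text: str) -> str:
    """Mask sensitive tokens before the text leaves your boundary."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def route_output(answer: str, confidence: float, threshold: float = 0.8):
    """Auto-approve high-confidence answers; queue the rest for a human."""
    answer = redact(answer)
    if confidence < threshold:
        return ("needs_human_review", answer)
    return ("auto_approved", answer)
```

The key design choice is that redaction happens unconditionally, while the human review branch is gated on confidence: even approved outputs never carry raw sensitive data.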

Open-Source and In-House Options

LLMs are already strong at many business tasks, particularly excelling at handling messy, unstructured input. They hold a vast amount of knowledge and can search the web or your private docs for the right answer. They can also call tools, working like an eager intern but far faster, especially on repetitive, boring work, potentially making teams two to three times more productive than before.

Many enterprises want, or are required, to keep their data in-house, so they turn to open-source models rather than external APIs. Since the DeepSeek moment earlier this year, open-source models have largely kept pace with proprietary flagships: in the past month alone, three Chinese labs released top-tier open-source models. You give up very little frontier capability by choosing this route.

Agentic search, the ability of a model to choose the right tool or action on its own, is rising fast and already makes LLMs a lot better at finding the information you need. I expect model sizes to keep shrinking as static knowledge moves back to the internet and is fetched only when needed. Soon we may see high-quality open-source models small enough to run on commodity hardware, cutting both cost and the need for deep infrastructure expertise.
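To make the idea concrete, here is a minimal sketch of the tool-selection loop behind agentic search. The tools and the keyword-based router are invented stand-ins for illustration; in a real agent, the LLM itself emits the tool choice and the loop feeds the observation back to the model.

```python
from typing import Callable, Dict

# Tool registry: each tool is a plain function the agent can invoke.
# These stubs stand in for real web and document search backends.
TOOLS: Dict[str, Callable[[str], str]] = {
    "web_search": lambda q: f"top web results for {q!r}",
    "doc_search": lambda q: f"matching passages in private docs for {q!r}",
}

def choose_tool(query: str) -> str:
    """Stand-in for the model's decision; a real agent lets the LLM pick."""
    return "doc_search" if "policy" in query.lower() else "web_search"

def agent_step(query: str) -> str:
    """One agent step: pick a tool, call it, return the observation."""
    tool_name = choose_tool(query)
    observation = TOOLS[tool_name](query)
    return f"[{tool_name}] {observation}"
```

Because the static knowledge lives behind the tools rather than inside the model’s weights, the model itself can shrink: it only needs to know how to pick and use tools, not to memorize everything they can retrieve.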

Don’t Fall Behind

Many companies are already rolling out AI; if you wait, you could fall six months to a year behind on the learning curve. Applying LLMs takes experimentation to discover what works and what doesn’t, and large firms need even more time to bring every stakeholder on board. The good news is that onboarding AI is cheaper than before: instead of a team of data scientists training models for months, you can let an LLM do most of the heavy lifting.

Key Takeaways

  • Start now: The performance curve is flattening; the risk of being leap-frogged has dropped.
  • Trust is improving: Hallucinations are falling, opening up more mission-critical use-cases.
  • Open-source is enough: In-house deployment no longer means second-rate models.
  • Smaller, smarter future: Agentic search will shrink models and hardware bills.
  • Delay costs: Every month you wait is time competitors spend learning and iterating.

How enterprises should leverage LLMs to get the most benefit while minimizing risk, what kinds of projects to start with, and what best practices look like: I’ll discuss all of this in another post.
