Generative AI vs Traditional Enterprise AI Systems

How the builders of foundational LLMs think

Cassie Kozyrkov
4 min read · May 31


Enterprises tend to err on the side of caution when it comes to releasing automation solutions at scale. When enterprises rely on an AI system, one of the following tends to be true [1][2][3]:

  • The AI system’s use case is so benign that it poses no threat to the business. This usually means it won’t be deployed to automate a task of strategic importance, since AI systems make mistakes. In practice, this scenario is rare: AI systems are costly to build, deploy, and maintain, so it’s hard to make a business case for one when the use case is fundamentally unimportant.
  • The AI system’s use case is strategically important, but its performance justifies the associated cost and increased risk from mistakes.
Image by the author

Most of the time, the traditional enterprise-scale AI systems of the previous decade fall into the latter category, which means the decision to deploy hinges on performance and on whether the mistakes are worth it. That’s why, as a rule of thumb, if you see that a corporation has deployed an AI system at scale (as opposed to in a tiny experimental sandbox), you can expect the following characteristics:

Ask any trust and safety professional and I’m sure they’ll agree that each of these makes it easier to do their job. I’ve devoted a whole blog post to explaining why the third bullet point is particularly important, so I’ll just summarize it here:

It’s a lot easier to protect a varied group of users from a single-purpose system than to protect the same group from a multi-purpose system.

Okay, so that covers traditional enterprise AI systems. How does all that compare with…


