Making Friends with AI
Generative AI vs Traditional Enterprise AI Systems
How the builders of foundational LLMs think
Enterprises tend to err on the side of caution when it comes to releasing automation solutions at scale. When enterprises rely on an AI system, one of the following tends to be true [1][2][3]:
- The AI system’s use case is so benign that it cannot pose a threat to the business. This usually means it is not deployed to automate a task of strategic importance, since AI systems make mistakes. In practice this scenario is rare: AI systems are costly to build, deploy, and maintain, so it is hard to make a business case for one whose use case is fundamentally unimportant.
- The AI system’s use case is strategically important, and its performance justifies both the associated cost and the increased risk of mistakes.
Most traditional enterprise-scale AI systems of the previous decade fall into the latter category, which means the decision to deploy them hinges on performance and on whether the mistakes are worth it. That’s why, as a rule of thumb, if you see that a corporation has deployed an AI system at scale (as opposed to in a tiny experimental sandbox), you can expect the following characteristics: