Generative AI vs Traditional Enterprise AI Systems
- The AI system’s use case is so benign that it cannot pose a threat to the business. This usually means it won’t be deployed to automate a task of strategic importance, since AI systems make mistakes. This scenario is rare in practice because AI systems are costly to build, deploy, and maintain, so it’s hard to make a business case for one whose use case is fundamentally unimportant.
- The AI system’s use case is strategically important, but its performance justifies the associated cost and increased risk from mistakes.
Most of the time, the traditional enterprise-scale AI systems of the previous decade fall into the latter category, which means the decision to deploy them hinges on their performance and on whether their mistakes are worth the risk. That’s why, as a rule of thumb, if you see that a corporation has deployed an AI system at scale (as opposed to in a tiny experimental sandbox), you can expect the following characteristics:
- a relatively straightforward use case statement
- a vision of what the intended “good behavior” for the system looks like
- a clear, monolithic objective
- measurable performance
- well-defined testing criteria
- relative clarity about what could go wrong and thus which safety nets are needed
Ask any trust and safety professional and I’m sure they’ll agree that each of these makes their job easier. I’ve devoted a whole blog post to explaining why the third bullet point is particularly important, and I’ll summarize it here:
It’s a lot easier to protect a varied group of users from a single-purpose system than to protect the same group from a multi-purpose system.
Okay, so that covers traditional enterprise AI systems. How does all that compare with…