AI Bias: Good intentions can lead to nasty results
AI isn’t magic. Whatever “good judgment” it appears to have is either pattern recognition or a safety net built in by the people who programmed it, because AI is not a person; it’s a pattern-finding thing-labeler. When you build an AI solution, always remember that if it passes your launch tests, you’ll get what you asked for, not what you hoped you were asking for. AI systems are made entirely out of patterns in the examples we feed them, and they optimize for the behaviors we tell them to optimize for.
So don’t be surprised when the system uses the patterns sitting right there in your data. Even if you tried your best to hide them…
Policy layers and reliability
If you care about AI safety, you’ll insist that every AI-based system should have policy layers built on top of it. Think of policy layers as the AI version of human etiquette.
I happen to be aware of some very pungent words across several languages, but you don’t hear me uttering them in public. That’s not because they fail to occur to me. It’s because I’m filtering myself. Society has taught me good(ish) manners. Luckily for all of us, there’s an equivalent fix for AI… that’s exactly what policy layers are. A policy layer is a separate layer of logic that sits on top of the ML/AI system. It’s a must-have AI safety net that checks the output, filters it, and determines what to do with it.
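In code, a policy layer can be as small as one function wrapped around the model’s output. Here’s a minimal sketch in Python; the model, the thresholds, and the word list are all hypothetical placeholders standing in for whatever your real system uses:

```python
def toxicity_model(text: str) -> float:
    """Stand-in for a trained model: returns a made-up toxicity score in [0, 1]."""
    banned = {"pungent_word"}  # hypothetical blocklist for the sketch
    words = text.lower().split()
    hits = sum(word in banned for word in words)
    return min(1.0, 5 * hits / max(len(words), 1))

def policy_layer(text: str, block_above: float = 0.8, review_above: float = 0.4) -> str:
    """Separate logic on top of the model: check the output, then decide what to do."""
    score = toxicity_model(text)
    if score >= block_above:
        return "BLOCKED"         # never show this output
    if score >= review_above:
        return "SENT_TO_REVIEW"  # route borderline cases to a human
    return text                  # safe to pass through unchanged
```

The point isn’t the scoring logic, which here is a toy; it’s that the check lives outside the model, so you can tune the thresholds or swap the rules without retraining anything.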
In my previous article, policy layers (and Batman) were the heroes of a discussion centered on AI system reliability. They’re a simple first line of defense between you and foreseeable errors… and they come with the added bonus of being easy enough to modify (without breaking or retraining a complex system!) to give you relatively quick reaction times in the face of mistakes you didn’t see coming.