AI Bias: Good intentions can lead to nasty results
Why fairness through unawareness is a pretty idea with ugly consequences
AI isn’t magic. Whatever “good judgment” it appears to have is either pattern recognition or safety nets built in by the people who programmed it, because AI is not a person; it’s a pattern-finding thing-labeler. When you build an AI solution, always remember that if it passes your launch tests, you’ll get what you asked for, not what you hoped you were asking for. AI systems are made entirely out of patterns in the examples we feed them, and they optimize for the behaviors we tell them to optimize for.
So don’t be surprised when the system uses the patterns sitting right there in your data, even if you tried your best to hide them. That’s the trap of fairness through unawareness: dropping the protected attribute from your features doesn’t drop the proxies that still encode it.
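Here’s a minimal sketch of that failure mode, using synthetic data and scikit-learn. The variables (group, proxy, skill) and every number in it are illustrative assumptions, not a real dataset; the point is only that a model which never sees the protected attribute can still split its decisions along group lines, because a correlated proxy carries the signal.

```python
# A minimal sketch of why "fairness through unawareness" fails.
# All data here is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# The protected attribute we intend to "hide" from the model.
group = rng.integers(0, 2, size=n)

# A proxy feature that happens to correlate with group
# (think zip code, school attended, browser choice...).
proxy = group + rng.normal(0, 0.5, size=n)

# A genuinely relevant feature, independent of group.
skill = rng.normal(0, 1, size=n)

# Historical labels were biased against group 1.
y = (skill + 0.8 * (1 - group) + rng.normal(0, 0.5, size=n) > 0.8).astype(int)

# "Unaware" model: the protected attribute is NOT in the feature matrix.
X = np.column_stack([skill, proxy])
preds = LogisticRegression().fit(X, y).predict(X)

# The model reconstructs the bias anyway, via the proxy.
print("positive rate, group 0:", preds[group == 0].mean())
print("positive rate, group 1:", preds[group == 1].mean())
```

Run it and the two positive rates come out noticeably different, even though the column you were worried about never entered the model.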
Policy layers and reliability
If you care about AI safety, you’ll insist that every AI-based system has policy layers built on top of it: a separate set of hand-written rules that sits between the model’s raw output and whatever action gets taken, vetoing or rerouting outputs that violate your standards (sketched in code below). Think of policy layers as the AI version of human etiquette.
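Here’s one way a policy layer can look in code. It’s a hedged sketch assuming the model emits a score in [0, 1] for something like a loan application; the thresholds and the specific rules are illustrative assumptions, not recommendations.

```python
# A minimal sketch of a policy layer: hand-written rules sitting between
# the model's raw score and the action taken. Thresholds and rule choices
# are illustrative assumptions, not recommendations.
from dataclasses import dataclass

@dataclass
class Decision:
    approve: bool
    reason: str
    needs_human_review: bool = False

def policy_layer(score: float) -> Decision:
    """Turn a raw model score (assumed to be in [0, 1]) into an action."""
    # Rule 1: the model is never allowed to auto-reject on its own.
    if score < 0.3:
        return Decision(False, "low score", needs_human_review=True)
    # Rule 2: auto-approve only when the model is very confident.
    if score > 0.9:
        return Decision(True, "high-confidence auto-approval")
    # Rule 3: everything in between goes to a person.
    return Decision(False, "ambiguous score", needs_human_review=True)

print(policy_layer(0.95))  # auto-approved
print(policy_layer(0.05))  # flagged for human review, not silently rejected
```

The model still does the pattern-finding; the policy layer is where your organization’s manners live, and it’s the part a human can read, audit, and change without retraining anything.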