AI Bias: Good intentions can lead to nasty results

Why fairness through unawareness is a pretty idea with ugly consequences

Cassie Kozyrkov
10 min read · Sep 30, 2023


AI isn’t magic. Whatever “good judgment” it appears to have is either pattern recognition or a safety net built in by the people who programmed it, because AI is not a person, it’s a pattern-finding thing-labeler. When you build an AI solution, remember: if it passes your launch tests, you’ll get what you asked for, not what you hoped you were asking for. AI systems are made entirely of patterns in the examples we feed them, and they optimize for the behaviors we tell them to optimize for.

AI is not a person, it’s a pattern-finding thing-labeler.

So don’t be surprised when the system uses the patterns sitting right there in your data. Even if you tried your best to hide them…
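Here’s a minimal sketch of why “fairness through unawareness” fails in practice. The data, column names, and records below are all made up for illustration: we “hide” a sensitive attribute by never feeding it to the model, but a correlated proxy column (here, a zip code) still encodes it, and even the crudest pattern-finder, a per-zip majority vote, recovers it.

```python
# Hypothetical illustration: dropping a sensitive column does not hide
# the pattern if a proxy column (zip_code) correlates with it.
from collections import Counter, defaultdict

# Each record: (zip_code, sensitive_group). The group column is "hidden"
# from the model; zip_code is not.
records = [
    ("90001", "A"), ("90001", "A"), ("90001", "A"), ("90001", "B"),
    ("10001", "B"), ("10001", "B"), ("10001", "B"), ("10001", "A"),
]

# A majority-vote "predictor" per zip stands in for what any
# pattern-finding model would learn from the proxy alone.
by_zip = defaultdict(list)
for zip_code, group in records:
    by_zip[zip_code].append(group)
majority = {z: Counter(groups).most_common(1)[0][0]
            for z, groups in by_zip.items()}

correct = sum(majority[z] == g for z, g in records)
accuracy = correct / len(records)
print(f"hidden attribute recovered with {accuracy:.0%} accuracy")
# → hidden attribute recovered with 75% accuracy
```

The point isn’t the toy numbers: it’s that deleting the column deletes nothing the model can’t reconstruct from what remains.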

Policy layers and reliability

If you care about AI safety, you’ll insist that every AI-based system should have policy layers built on top of it. Think of policy layers as the AI version of human etiquette.

Policy layers are the AI equivalent to human etiquette.
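One way to picture a policy layer is as a thin wrapper of explicit, human-written rules sitting between the model and the user. Everything below is a hypothetical sketch: the model stub, the blocked-topic list, and the refusal message are all invented for illustration, not any real system’s API.

```python
# Minimal sketch of a policy layer: hand-written rules that gate a
# model's raw output. All names here are illustrative, not a real API.

def raw_model(prompt: str) -> str:
    # Stand-in for any pattern-finding model.
    return f"model answer to: {prompt}"

# Policy is authored by humans, not learned from data.
BLOCKED_TOPICS = {"medical dosage", "legal advice"}

def policy_layer(prompt: str) -> str:
    # Like etiquette on top of raw impulse: the policy layer vetoes
    # requests the rules disallow, and passes everything else through.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that. Please consult a professional."
    return raw_model(prompt)

print(policy_layer("What's a safe medical dosage of X?"))
print(policy_layer("Summarize this article"))
```

The design choice worth noticing: the rules live outside the model, so they can be audited, versioned, and changed without retraining anything.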


Written by Cassie Kozyrkov

Chief Decision Scientist, Google. ❤️ Stats, ML/AI, data, puns, art, theatre, decision science. All views are my own. twitter.com/quaesita
