
Is AI a fad?

Three reasons people think AI is a passing craze

Cassie Kozyrkov


Every time some genius decides to apply AI where it doesn’t belong, the world collectively rolls its eyes and puts another ballot in the AI-Is-A-Fad box.


If your dictionary defines AI as magic or robots (or magical robots), of course you’ll be disappointed when it doesn’t deliver the cure to all that ails you. Let’s look at three common gripes using simple examples everyone can grasp.

Fad gripe #1: “AI is a waste of time”

A respectable software engineer once asked me with a straight face, “Can AI know that Canada is a country?”

Hold your horses there, cowboy. Let’s take a moment to think about how you know that Canada is a country. Someone told you that fact when you were little, you memorized it, and now you’re looking it up in your memory.


We can write the code that does that without AI—record the data in a table, then if someone asks about Canada, the program looks the word up and outputs the answer. What do you need AI for here?

Nothing, that’s what.
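To make that concrete, here’s a minimal sketch in Python of the no-AI approach (the table contents and function name are made up for illustration):

```python
# No AI anywhere: record the fact in a table and look it up on demand.

FACTS = {
    "canada": "country",
    "hippopotamus": "animal",
}

def what_is(word):
    # Plain retrieval, exactly like recalling a memorized fact.
    return FACTS.get(word.lower(), "no idea")

print(what_is("Canada"))  # -> country
```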

If you expect that AI is magic, you’ll try to use it for everything. When your bosses find out how much effort you’ve wasted coming up with a complicated solution to a simple problem, it’s hard to blame them for thinking that AI is hype and nonsense.

How to avoid trap #1

If you can solve a problem without AI, then don’t use AI. Seriously, no matter what my employer’s marketing team says to you.

AI is like medicine — it can be a life-changer to those who need it, but everyone else should know better than to snack on it out of boredom.

Don’t use AI to learn things you already know the rules for, especially things that are defined by human-made rules in the first place. Examples: How do we convert dollars to cents? Is the one with the superhero cape the male or female toilet? How do I indent C++ code? What’s the sales tax in Hawaii? Which boxing weight class may I compete in? Should I wear a balaclava into a bank?

Pick your use case and think about it carefully before you collect any data or hire any PhD gurus. If you feel the need to apply AI somewhere — anywhere! — just because all your friends are doing it, you’re setting yourself up for failure.

Instead, start with a use case you care about, and if you can make it work without AI, so much the better.

Fad gripe #2: “AI doesn’t work”

Canada, I’m not done with you yet.


In our quest to determine whether AI can know that Canada is a country, we established that a computer can store and retrieve such information without any fancy-pants AI. Our engineer friend wants to kick it up a notch: “Can a machine learn that all by itself?”

Whoa, what do you mean by “learn” and “all by itself”? Those words mean different things to different people. Let’s answer this version: “Can we expect a machine to reliably output the conclusion that Canada is a country if it never had access to the word Canada before?”

Hey people-who-can’t-read-Chinese, is 香蕉 a country? How about 英国? No, don’t go looking up the answer, that’s cheating. You have to learn this all by yourself, remember?

When you have no additional information, how could you possibly know the answer? Similarly, common sense should force you to suspect that AI can’t learn things if there’s no information to learn from. You’d be correct.


AI is all about extracting patterns from information and using those patterns to automatically make a recipe for turning your next input (Canada) into an output (country). So let’s ask ourselves: what relevant patterns could our computer possibly use if it has never seen the word before?

If there is nothing to learn from, learning is impossible.

Even if we have some data, our algorithm might pull out spurious patterns that give us a stupid recipe. Let’s imagine these are our training data: South Africa-country, hippopotamus-animal, frog-animal, Russian Federation-country, United States-country, cat-animal, United Kingdom-country, raccoon-animal, South Korea-country, New Zealand-country, butterfly-animal, giraffe-animal.

Before you’ve even finished reading the first pair, your AI algorithm has already digested them. It gives a satisfied burp and invites you to input your noun. Any guesses what it does when you show it Canada?

There are two loud patterns in these data. One is that all the country names are capitalized, while the animal names aren’t. If that’s the basis for the AI’s recipe, then “Canada” would be labeled correctly, but “canada” would not.

What if the recipe were based on a different pattern?


Did you notice all the countries have two-word names, while animals are single words? Well, your algorithm did. It says that Canada is clearly an animal. Oh deer.
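If you’d like to see that failure spelled out, here’s a deliberately naive sketch (a toy, not any real learning system) of a recipe built on the two-word pattern:

```python
# Training data from above: every country happens to have a two-word name.
training_data = [
    ("South Africa", "country"), ("hippopotamus", "animal"),
    ("frog", "animal"), ("Russian Federation", "country"),
    ("United States", "country"), ("cat", "animal"),
    ("United Kingdom", "country"), ("raccoon", "animal"),
    ("South Korea", "country"), ("New Zealand", "country"),
    ("butterfly", "animal"), ("giraffe", "animal"),
]

def learned_recipe(noun):
    # The spurious pattern that perfectly separates the training labels:
    # two-word names are countries, one-word names are animals.
    return "country" if len(noun.split()) == 2 else "animal"

# The recipe scores 100% on the training data...
assert all(learned_recipe(noun) == label for noun, label in training_data)

# ...and is confidently wrong on the input we actually care about.
print(learned_recipe("Canada"))  # -> animal
```

A perfect training score is exactly what makes a spurious recipe look trustworthy, right up until you ask about Canada.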

How to avoid trap #2

Yes, machine learning doesn’t work… if there’s nothing to learn from. AI is not for people who can’t get their hands on relevant data. If you want your solution to work well for all countries, your dataset can’t only have the two-word ones in it.

Garbage in, garbage out.

It’s important not to lose your grip on common sense here; the basics of learning and teaching that apply to human students also apply to AI. Datasets are textbooks and you’re the teacher. If you give your students garbage textbooks, expect them to learn some garbage.

Fad gripe #3: “AI is untrustworthy”

If AI isn’t magical robots, what is it? It’s just a tool that helps you write code for tasks you’d struggle to express as explicit instructions.

Why would a good programmer have difficulty with giving a computer instructions? Isn’t that the job? Sure, but some tasks require extremely complicated instructions. If they’re too complicated for the human mind to wrap itself around, you won’t be able to come up with them… unless you can communicate the task a different way.

AI lets you make your wishes known with examples (data) instead of explicit instructions. That means you have a shot at automating tasks that you can’t write instructions for.
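Here’s a hedged sketch of what “wishes expressed as examples” looks like in practice, handing the country/animal task to scikit-learn (assuming it’s installed); the dataset is far too tiny for real use, but notice there are no hand-written rules anywhere:

```python
# Examples in, recipe out: nowhere do we spell out what makes a country.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = ["South Africa", "hippopotamus", "frog", "Russian Federation",
            "United States", "cat", "United Kingdom", "raccoon",
            "South Korea", "New Zealand", "butterfly", "giraffe"]
labels = ["country", "animal", "animal", "country", "country", "animal",
          "country", "animal", "country", "country", "animal", "animal"]

# The model distills whatever character patterns it finds in the examples.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)),
    LogisticRegression(),
)
model.fit(examples, labels)

# No promises about this answer: with data this thin, the learned
# recipe may well be one of the spurious patterns from earlier.
print(model.predict(["Canada"]))
```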

Simple solutions don’t work for tasks that need complicated solutions. So AI comes to the rescue with — surprise! — complicated solutions.

It also means that you should start expecting a tangle of complicated instructions when the AI algorithm distills patterns into code for one of those headache tasks. When you read the recipe it came up with for you… it’s unreadable.

Many people have a gut reaction to mystery and ambiguity: “Get rid of it! Simple or I don’t want it! I can’t trust it.”

Wishing complex things could be simple doesn’t make them so.

It looks like you’re stuck with two bad options: live your life solving nothing but the simplest problems, or progress beyond the low-hanging fruit but give up trust. Luckily, there’s another way.

Imagine choosing between two spaceships. Spaceship 1 comes with exact equations explaining how it works, but has never been flown. How Spaceship 2 flies is a mystery, but it has undergone extensive testing, with years of successful flights like the one you’re going on. Which spaceship would you choose?

How to avoid trap #3

We need a better basis for trusting those machines than reading unreadable things. The good news is that there is one: testing.

You don’t need to understand how it works to check that it does work.

Proper testing isn’t trivial, but it’s a lot easier than making sense of something so huge it makes you dizzy. It’s also a principle that we practice often — for example with medicine. Do you know how that headache pill works? Neither does science. The reason we trust it is that we carefully check that it does work. (Here’s my deeper discussion of testing as a basis for trust.)
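Here’s a minimal, self-contained sketch of that principle using scikit-learn’s built-in iris dataset (just an illustration; your data and model will differ):

```python
# Trust through testing: don't read the model's unreadable innards;
# measure how it performs on examples it never saw during training.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold some data back; the model never sees it while learning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The verdict comes from held-out data, not from reading coefficients.
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.1%}")
```

The number itself isn’t the point; the point is that the verdict comes from unseen examples, the same way a headache pill earns trust through trials rather than through everyone understanding its biochemistry.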

If you’re content with only solving easy problems, it’s okay to sit this AI thing out.

AI isn’t a fad, it’s the way to progress

The problems of the future will only get harder. After you automate the simple tasks, you’ll want to move on to bigger challenges. Once you reach past the low-hanging fruit, you’ll run into a task you can’t solve using your old tricks and the brute force of raw imagination. You’ll realize you can only communicate what you want with examples, not instructions… Welcome to AI.

Next steps? Try my quick reality check(list) to see if ML/AI is a good idea for you.

Thanks for reading! How about an AI course?

If you had fun here and you’re looking for an applied AI course designed to be fun for beginners and experts alike, here’s one I made for your amusement:

Enjoy the entire course playlist here: bit.ly/machinefriend
