Statistical Thinking

Why do we trust scientists?

Now’s a good time to rethink our assumptions about fact and fiction

Cassie Kozyrkov


In my previous article, I explained why you shouldn’t look to statistical inference for truth. Given the prevalence of statistical techniques in scientific research, what does this mean for science?

Image from an xkcd t-shirt.

(For those who insist that you need credentials to have an opinion about science, this jerk of an author holds graduate degrees in neuroscience and mathematical statistics. Glad we got that out of the way.)

Scientific theory

A hypothesis is a description or explanation, but it needn’t be true. If it amuses me, I can hypothesize that no human is taller than five feet. (I’d better not look in the mirror, since the 5'9" person in there will promptly falsify my hypothesis.)

When you have incomplete data, what does it really take to disprove something?

The scientific method revolves around falsifiability. If you buy into this philosophical concept, you won’t take a hypothesis seriously if there’s no way to disprove it. Then you’ll make honest attempts at dismantling it and if it survives your siege, you’ll tentatively accept it as a theory.

A theory is what a hypothesis becomes when it grows up.

A theory is the best thing science can make. It’s what a hypothesis becomes when it grows up. (People confuse the two, thinking that a theory is something less than a hypothesis. It isn’t.)

“Just” a theory? A theory is the best thing science can make.


The more thoroughly you kick the tires, the better the theory. In theory. Unfortunately, when there’s uncertainty involved (as in most scientific inquiry), two problems tend to rear their ugly heads:

  1. Known unknowns. Besides the issue of whether you have enough statistical power for your approach not to be a joke, you also have to contend with the fundamental subjectivity of statistical testing: When your sample only covers a fraction of the population, you’re forced to make assumptions to go from what you know to what you don’t.
  2. Unknown unknowns. To truly do your homework in proving a theory, you have to think of everything during your attempts at falsification. Can you be trusted to think of everything? Can the cleverest human?

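To make the first problem concrete, here’s a toy simulation of statistical power: how often a simple two-sample test detects a real but modest effect. Every number here is made up purely for illustration. With a small sample, even a genuinely true effect usually goes undetected, which is part of why a single study’s verdict deserves skepticism.

```python
import random
import statistics

def run_experiment(n, true_effect, trials=2000, seed=0):
    """Estimate the power of a crude two-sample z-test by simulation.
    All settings are hypothetical, chosen only to illustrate the idea."""
    random.seed(seed)
    rejections = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        # standard error of the difference in means
        se = (statistics.pvariance(control) / n
              + statistics.pvariance(treated) / n) ** 0.5
        if abs(diff / se) > 1.96:  # two-sided test at roughly alpha = 0.05
            rejections += 1
    return rejections / trials

# Same true effect, different sample sizes: the small study
# usually misses it, while the large one usually finds it.
p_small = run_experiment(n=20, true_effect=0.3)
p_large = run_experiment(n=200, true_effect=0.3)
```

The assumptions (normal data, known-ish variance, a z-threshold of 1.96) are exactly the kind of subjective scaffolding the first point is about: swap them out and the numbers change.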
This might be why I’ve never heard an actual scientist refer to something as a “scientific fact.” Scientists know that when you have uncertainty, you aren’t making truths.

Instead, scientists use statistics to create “temporary but durable” theories, and they’ll advocate acting on them cautiously. Time will tell how “temporary” or “durable” each one turns out to be, so it’s a good idea to keep an analytical eye out for more data and course-correct if a better theory comes along.

The wobbly shoulders of giants

Science is blessedly incremental, which means that researchers work in a culture that gives a thumbs up to trusting the conclusions of reputable predecessors and plugging these in as assumptions. Naturally, if published “canonical results” are later discovered to be fashionable nonsense (often because people unsuccessfully try to replicate them), the whole house of cards comes tumbling down and the scientific establishment has to re-investigate or denounce all the impacted studies.

“If I have seen further it is by standing on the shoulders of Giants.” — Isaac Newton, 1675.

These upsets happen fairly often in science, especially when results are statistical (mistakes are possible due to random chance), but the scientific community treats this as part of the cost of doing business. If they denied themselves the expediency of trusting findings from reputable sources, they’d have to reinvent every wheel. Science would move too slowly.

Scientific conclusions

So, what does it mean when a scientist uses statistics to come to a conclusion? Simply that they’ve formed an opinion and have made the decision to share it with the world. That’s not a bad thing — it’s a scientist’s job to form opinions reluctantly, which makes me feel better about assuming that they’re worth listening to.

It’s a scientist’s job to form opinions reluctantly, which makes me feel better about assuming that they’re worth listening to.

The role of science in society

Society funds scientists on the belief that they refuse to be convinced easily. If there’s a nit to pick, we expect them to pick it. The implicit expectation is that if scientists voice an opinion, those who didn’t participate directly can trust it as having higher quality than some random tweet.

Without scientists, it would be impractical to practice empiricist epistemology — everyone would have to, er, “science” for themselves. We’d all have to stick with rationalism or reinvent our own empirical wheels from scratch. Since each individual lifetime is limited, we wouldn’t get very far.

Like other forms of teamwork, investing in science is a smart move by a civilized species, so, despite their inability to create facts from statistical inference, I’m glad we have scientists.

Should we trust scientists?

I’m lucky to know many scientists whose opinions I would trust. I admire these folks because they are very good at understanding the limitations of what they know — if they’re convinced, it means something to me.

Unfortunately, there are plenty of “scientists” who are in it for the career and find themselves in a rough job market with incentives towards ignoble behavior. Reluctance to form opinions might be in conflict with getting enough publications to win that coveted professorship — so not every scientist is trustworthy. When I spot these charlatans, my mind immediately adds air-quotes around their job title so I don’t make the mistake of trusting them.

Not every scientist is trustworthy.

Additionally, science-as-a-career means there’s pressure to publish on certain topics and report certain results over others, often dictated by fashion and funding. Science is haunted by funding bias and publication bias.

  • Publication bias occurs when the outcome of a study determines the likelihood that it is published. Not all high-quality scientific inquiry is published.
  • Funding bias occurs when the interests of financial sponsors affect the direction of inquiry. Some findings would be awfully inconvenient if you want to keep that research grant.

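Publication bias is easy to see in a toy simulation (a sketch with made-up settings, not a model of any real field): generate many studies of an effect that is truly zero, “publish” only the statistically significant ones, and compare what the full record says with what the published record says.

```python
import random
import statistics

def simulate_publication_bias(n=30, studies=1000, seed=1):
    """Simulate many studies of a truly null effect, then keep only the
    'significant' ones. All settings are hypothetical, for illustration."""
    random.seed(seed)
    all_effects, published = [], []
    for _ in range(studies):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # true effect: zero
        mean = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        all_effects.append(mean)
        if abs(mean / se) > 1.96:  # roughly p < 0.05: the "publishable" result
            published.append(abs(mean))
    return statistics.mean(all_effects), statistics.mean(published)

overall, published_only = simulate_publication_bias()
# 'overall' (every study) hovers near zero; 'published_only' (the studies
# a reader actually sees) reports a sizeable effect, purely because
# statistical significance was the filter for publication.
```

The full set of studies averages out to roughly nothing, while the published subset looks like a real finding. That gap is the bias, and no individual author needs to have done anything dishonest to produce it.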
And it’s not just the careerists you need to watch out for. While many scientists are well-versed in working with probability, I’ve seen other “scientists” make enough statistical mess to last several lifetimes.

I’m a sucker for fun with language, so I enjoyed these playful names for scientist “diseases” that John Antonakis came up with:

  • Significosis — an inordinate focus on statistically significant results.
  • Neophilia — an excessive appreciation for novelty.
  • Theorrhea — a mania for new theory.
  • Arigorium — a deficiency of rigor in theoretical and empirical work.
  • Disjunctivitis — a proclivity to produce large quantities of redundant, trivial, and incoherent works.

Reporting bias

That said, don’t let the bad apples spoil your opinion of competent scientists for whom the pursuit of incremental discovery is a calling. They perform labors of love to replace society’s ignorance with reasonable theories.

The trouble is that in doing what their profession demands — being reluctant to form opinions — these clever and competent folk take stock of the limitations of their conclusions. If you read a good scientific article, you’re likely to see reams of caveats… which only make for a riveting read if you’re a fellow scientist. To a Twitter-weaned attention span, those publications can be a prescription-grade sleeping pill.

To a Twitter-weaned attention span, scientific publications can be a prescription-grade sleeping pill.

Can you guess what happens when reporters are tasked with spicing those findings up for the public? The first order of business is to cut the boring bits. It’s more of a story if the conclusion sounds like a… scientific fact! Welcome to a phenomenon called reporting bias.

Reporting bias occurs when people come to a conclusion other than the one they would have made if given all the information their source had.

This is one of my favorite ironies: Society agrees to trust scientists because their job is to form opinions reluctantly — which is why name-dropping a scientist carries media weight — but professional humility doesn’t make for an exciting story. Readers demand facts, not caveats. Where there’s demand, expect attempts at supply. Someone will gladly glue a horn to a horse to sell you a unicorn.


What should you do?

I’m a huge fan of taking advice from those who have more expertise and information than I do, but I never let myself confuse their opinions with facts. I hope you won’t either.

I’m a huge fan of taking advice from those who have more expertise and information than I do, but I never let myself confuse their opinions with facts.

My advice to you — take it or leave it — is to thank the good scientists for their hard work. Of all the voices with an opinion on a topic, I’d pick the person who spent the most effort competently nitpicking it before grudgingly acknowledging that it seems more reasonable than the alternatives.

However, blind faith in science and scientists makes you look ignorant. Scientists are only human; they can be as fallible and greedy as anyone. There are enough bad apples in that batch to keep us on our toes. I wish we’d stop using “because science” to justify whatever harebrained schemes we’ve got going.

As a statistician, I’m painfully aware of how easy it is to lie with numbers. If you can’t trust the person, you can’t trust their data either. When your trust is at an all-time low, the only way forward is to do the research yourself.

When you pick whom to trust, remember to think about competence and incentives.

Alas, when I don’t have the time and resources to do the research myself and I must act, I’m forced to trust someone. I choose to bet on the sturdiest shoulders to stand on. If I see competent reasoning and I trust their incentives, I value scientists’ advice. Theirs may be only an opinion, but it’s usually better than what I have without them.

Thanks for reading! Liked the author?

If you’re keen to read more of my writing, most of the links in this article take you to my other musings. Can’t choose? Try this one:

Connect with Cassie Kozyrkov

Let’s be friends! You can find me on Twitter, YouTube, Substack, and LinkedIn. Interested in having me speak at your event? Use this form to get in touch.

How about an AI course?

If you had fun here and you’re looking for an applied AI course designed to be fun for beginners and experts alike, here’s one I made for your amusement:




Cassie Kozyrkov

Chief Decision Scientist, Google. ❤️ Stats, ML/AI, data, puns, art, theatre, decision science. All views are my own.