We all want to make well-informed decisions, especially when there are valuable outcomes at stake.
Having specific information and knowledge about likely outcomes can be incredibly useful—but only if we actually know how to interpret the evidence.
A classic example
Imagine that you go to see the doctor. The doctor administers a medical assessment for a rare disease that you might have.
You test positive.
The doctor informs you that it’s a very accurate assessment: it correctly flags 99% of people who have the disease, and correctly clears 99% of people who don’t.
“Uh-oh,” you might think. “There’s a 99% chance that I have this disease.”
In fact you probably don’t have the disease.
Keep in mind that, in this example, we are talking about a rare disease.
Let’s say that only 1 out of every 1,000 people has the disease.
Then, out of every 1,000 people tested, about 1 person will be correctly diagnosed, while about 10 people (1% of the other 999) will test positive even though they don’t actually have the disease.
So even if you test positive, you’re still roughly 10 times more likely not to have the disease than to have it.
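The arithmetic above can be checked with Bayes’ rule. Here’s a minimal sketch, assuming (as the example implies) a hypothetical test with 99% sensitivity and 99% specificity, and a prevalence of 1 in 1,000:

```python
# Hypothetical numbers matching the example above.
prevalence = 1 / 1000    # 1 in 1,000 people has the disease
sensitivity = 0.99       # P(positive test | disease)
specificity = 0.99       # P(negative test | no disease)

# Bayes' rule: P(disease | positive test)
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")  # → 9.0%
```

In other words, a positive result raises your chance of having the disease from 0.1% to about 9%, which is still far more likely to be a false alarm than a true diagnosis.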
The disturbing thing is that even most doctors don’t seem to understand how this actually works. For example, this study found that up to 82% of doctors did not account for disease prevalence when making a diagnosis.
So now what?
The assessment said that you were positive for a disease. And it was probably wrong.
Should you adjust your beliefs based on this experience?
Is the assessment actually useless? Does it do more harm than good?
We’re so glad that you kept reading!
The truth is that assessments can be very useful, and doctors can provide you with very valuable medical advice.
Seriously, if you need medical advice, please go see a doctor. Don’t take medical advice from a blog, pretty much ever.
Apologies to anyone unfortunately named “Dr. Blog”
The problem here is that we (everyone, including medical doctors) expect assessments to be more accurate than they actually can be.
Right in front of you, the assessment said one thing, and then it was probably wrong (because it was predicting a rare outcome).
If you actually watch that wrong prediction play out, the whole thing can end up looking like it’s useless.
But let’s look at this a different way.
Would you generally want to bet for or against the medical assessment from the above example?
Let’s say you get $1 every time it’s correct, and lose $1 every time it’s incorrect.
If you bet for the assessment, then congratulations! You just (hypothetically) won a ton of money!
When we focus only on one person’s results, or on one small group’s results, we can easily miss the much bigger picture.
Remember that, for the vast majority of people (those who didn’t have the disease), the test results were accurate. They weren’t subjected to further expensive examinations or unnecessary medical procedures. That’s a lot of time and money saved, and many other risks averted entirely.
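To see why, here’s a rough tally of the bet per 1,000 tests, using the same assumed numbers as before (1 person with the disease, who the test catches, plus a 1% false-positive rate among the healthy 999):

```python
people = 1000
sick = 1                                         # 1 in 1,000 has the disease
false_positives = round(0.01 * (people - sick))  # 1% of the 999 healthy people
correct = people - false_positives               # assume the sick person is caught
wrong = false_positives

# $1 won per correct result, $1 lost per wrong one
winnings = correct * 1 - wrong * 1
print(winnings)  # → 980
```

Betting for the test nets about $980 per 1,000 results, even though any individual positive result is probably a false alarm.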
Now that you know all this, what can we conclude? Seeing the bigger picture is incredibly difficult, especially when we are relying on our own intuitions. Your individual experience can be drastically different from what you expect, and from how things actually work on a broader scale.
Whenever you look at the evidence in front of you, it’s equally important to consider how well that particular piece of evidence actually represents the broader world of possibilities relevant to the decisions you want to make.