First: Slate: What Does It Really Mean When a Headline Says “75 Percent of Cases Occurred in Vaccinated People”?, subtitled, “A simple math lesson to calm some of the panic around breakthrough cases.”
Yes, there are unscrupulous media sources (or perhaps merely headline writers) who publish misleading claims like this for the sake of drawing readers, especially alarmists. Such claims are easily dismantled with a bit of mathematical savvy.
(Just the latest example of “innumeracy,” which afflicts the vast majority of the population. See John Allen Paulos.)
The headline (cited from Reuters and MarketWatch) carries the inevitable implication that vaccines make you worse off rather than better: that they make you more susceptible to new Covid infections!
No. This is a case of basic misunderstanding of rates and population sizes.
The best way to understand why these kinds of statements are misleading is a technique that every mathematician learns, and that is just as easy and useful for nonmathematicians: come up with an extreme hypothetical and check whether the statement still makes sense. This is a basic tool for critical thinking.
In this case, the extreme might be this: if there are far more vaccinated people than unvaccinated (the actual example cited is Singapore), the gross number of new infections among the former population might easily be greater than the number in the latter, even when the latter are in fact more susceptible, proportionally.
You have to look at the sizes of the two populations and calculate the rates. Citing simple percentages (like an increase of 100%, which might just mean a change from 1% to 2% and still be very low) can easily be misleading, sometimes intentionally so.
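The arithmetic behind the extreme hypothetical can be made concrete. Here is a minimal sketch with entirely hypothetical numbers (not Singapore's actual figures): a population that is 90% vaccinated, where the vaccine cuts each person's infection risk fivefold. Even so, most raw cases occur among the vaccinated, simply because there are so many more of them.

```python
# Hypothetical numbers for illustration only (not real data):
# 90% of a population of 1,000,000 is vaccinated, and vaccination
# cuts the infection rate from 1% (unvaccinated) to 0.2% (vaccinated).

population = 1_000_000
vaccinated = int(0.90 * population)         # 900,000 vaccinated
unvaccinated = population - vaccinated      # 100,000 unvaccinated

rate_unvaccinated = 0.01     # 1% of unvaccinated people get infected
rate_vaccinated = 0.002      # 0.2% of vaccinated people (5x lower risk)

cases_vaccinated = int(vaccinated * rate_vaccinated)        # 1,800 cases
cases_unvaccinated = int(unvaccinated * rate_unvaccinated)  # 1,000 cases

total_cases = cases_vaccinated + cases_unvaccinated
share_vaccinated = cases_vaccinated / total_cases

print(f"Cases among vaccinated:   {cases_vaccinated}")
print(f"Cases among unvaccinated: {cases_unvaccinated}")
print(f"Share of cases in vaccinated people: {share_vaccinated:.0%}")
# About 64% of all cases are in vaccinated people, even though each
# vaccinated person is 5x less likely to be infected.
```

The headline statistic ("most cases are in vaccinated people") and the relevant statistic (each vaccinated person's risk) point in opposite directions here, which is exactly why raw case counts without population sizes are misleading.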
From last week, Jordan Ellenberg (my favorite mathematician/writer currently) in Slate: If They Say They Know, They Don’t Know, subtitled “A principle for understanding which experts to trust, including the CDC.”
This isn’t precisely a mathematical concept, but one about the nature of science and even of expertise. Very rarely, *except* in mathematics, is there anything like certainty. Science is always provisional; scientists can and should change their minds based on changing evidence. This is something many find difficult to understand, and even use to discredit science and expertise, in a simple-minded way.
The point here is that the more certain people sound, the less likely they are to be right, or at least to deserve trust. If, like a police detective, an expert claims only that “this is the provisional conclusion based on the evidence gathered so far,” that claim is more likely to be correct, and to be trusted.
Nondeference to experts is the spirit of the moment. Who are those pinheads, those elites, those people who say they know, to demand our obedience?
Those who do know say that they don’t know. If you pooh-pooh the credentialed authorities, but then hop into obedience to someone else who says they know [e.g. someone on YouTube], you’re totally missing the point…
This is because these critics think the experts are making claims of absolute certainty. And they’re not. It’s the conspiracy theorists who claim absolute certainty.
I suspect these critics of expertise and science come from a mindset, and a worldview, in which everything has a definite, black-and-white answer that cannot be questioned. (Religious verities.) Whereas science and expertise are about questioning answers, which can always be revised.
And so we’re attracted to the sound of uncomplicated confidence. But it’s an attraction to be resisted. There are the people who say they know and there are the people who know. When the stakes are high, which do you want to listen to?