Links and Comments: About Thinking; About Research; About Orientations

First, from yesterday’s New York Times Science section: How to Think Like an Epidemiologist. Subtitle: “Don’t worry, a little Bayesian analysis won’t hurt you.”

As Marc Lipsitch, an infectious disease epidemiologist at Harvard, noted on Twitter, Bayesian reasoning comes awfully close to his working definition of rationality. “As we learn more, our beliefs should change,” Dr. Lipsitch said in an interview. “One extreme is to decide what you think and be impervious to new information. Another extreme is to over-privilege the last thing you learned. In rough terms, Bayesian reasoning is a principled way to integrate what you previously thought with what you have learned and come to a conclusion that incorporates them both, giving them appropriate weights.”

(The first extreme is that of the religious fundamentalist; the second extreme is what psychologists call the availability heuristic.)

A few posts ago I discussed how risk analysis is done, and done in great detail, within industries that depend on bringing a project in as close as possible to on time and on budget. (As opposed to going with your gut.) Similarly, there is a rigorous, if idealized, process for how a scientist, or anyone else, updates their (always) provisional conclusions as new evidence comes in. It’s called Bayesian analysis (Wikipedia calls it Bayesian inference), and it boils down to the couple of equations in the linked NYT article, though examples make it seem simple and intuitive. It’s analogous to how a detective revises his likely suspect as new evidence is discovered.
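To make the idea concrete, here is a toy sketch of Bayesian updating. This is my own illustration, not taken from the NYT article, and the numbers (a disease with 1% prevalence, a test with a 90% detection rate and a 5% false-positive rate) are invented purely for demonstration: start with a prior belief, observe evidence, and compute the updated (posterior) belief with Bayes’ theorem.

```python
# Toy illustration of Bayesian updating (hypothetical numbers).
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)

def bayes_update(prior, likelihood, false_positive_rate):
    """Return the posterior probability of hypothesis H after observing evidence E.

    prior               -- P(H), belief before seeing the evidence
    likelihood          -- P(E | H), chance of the evidence if H is true
    false_positive_rate -- P(E | not H), chance of the evidence if H is false
    """
    evidence = likelihood * prior + false_positive_rate * (1 - prior)  # P(E)
    return likelihood * prior / evidence

# Example: a disease with 1% prevalence, a test that catches 90% of true cases
# but also fires on 5% of healthy people.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(f"Posterior after one positive test: {posterior:.3f}")    # ~0.154

# Feed the posterior back in as the new prior: a second independent positive
# test raises the belief further -- "a principled way to integrate what you
# previously thought with what you have learned."
posterior2 = bayes_update(prior=posterior, likelihood=0.90, false_positive_rate=0.05)
print(f"Posterior after two positive tests: {posterior2:.3f}")  # ~0.766
```

The point of the toy example is the middle ground Lipsitch describes: neither ignoring the test result nor treating it as definitive, but weighting it against what you believed before.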

Sadly, this is not only not understood by many, but willfully misunderstood by those who dislike scientific conclusions and want to find reasons to reject them. Their take is that if a scientist says one thing today and another thing a month later, it’s because scientists can’t be trusted and science itself is discredited. (Peter Navarro’s attempted take-down of Dr. Fauci was like this.) This attitude is prevalent, it seems, among the religious faithful, for whom all questions have definitive answers that admit no revision. The real world isn’t like that, any more than a detective who shows up at the scene of a crime is expected to be completely correct at once, before further evidence is gathered.

An example just from today: a couple of lines of evidence about the age of the universe don’t exactly line up. And so, as Paul Fidalgo puts it (scroll down):

There is some slight disagreement among scientists about the age of the Universe, which means of course that science is wrong and God made everything like a few days ago.

\\

Second: Forbes: You Must Not ‘Do Your Own Research’ When It Comes To Science.

The writer, Ethan Siegel, is a Ph.D. astrophysicist.

“Research both sides and make up your own mind.” It’s simple, straightforward, common sense advice. And when it comes to issues like vaccinations, climate change, and the novel coronavirus SARS-CoV-2, it can be dangerous, destructive, and even deadly. The techniques that most of us use to navigate most of our decisions in life — gathering information, evaluating it based on what we know, and choosing a course of action — can lead to spectacular failures when it comes to a scientific matter.

The reason is simple: most of us, even those of us who are scientists ourselves, lack the relevant scientific expertise needed to adequately evaluate that research on our own. In our own fields, we are aware of the full suite of data, of how those puzzle pieces fit together, and what the frontiers of our knowledge are. When laypersons espouse opinions on those matters, it’s immediately clear to us where the gaps in their understanding are and where they’ve misled themselves in their reasoning. When they take up the arguments of a contrarian scientist, we recognize what they’re overlooking, misinterpreting, or omitting. Unless we start valuing the actual expertise that legitimate experts have spent lifetimes developing, “doing our own research” could lead to immeasurable, unnecessary suffering.

With this description of what many people actually do when they say they’re “doing research”:

There’s an old saying that I’ve grown quite fond of recently: you can’t reason someone out of a position they didn’t reason themselves into. When most of us “research” an issue, what we are actually doing is:

• formulating an initial opinion the first time we hear about something,
• evaluating everything we encounter after that through that lens of our gut instinct,
• finding reasons to think positively about the portions of the narrative that support or justify our initial opinion,
• and finding reasons to discount or otherwise dismiss the portions that detract from it.

Of course, that’s not what we think we’re doing. We think of ourselves as the heroes of our stories: cutting through misinformation and digging up the real truth on the matter. We think that, just by applying our brainpower and our critical reasoning skills, we can discern whose expert opinions are trustworthy and responsible. We think that we can see through who’s a charlatan and a fraud, and we can tell what’s safe and effective from what’s dangerous and ineffective.

Except, for almost all of us, we can’t. Even those of us with excellent critical thinking skills and lots of experience trying to dig up the truth behind a variety of claims are lacking one important asset: the scientific expertise necessary to understand any finds or claims in the context of the full state of knowledge of your field. It’s part of why scientific consensus is so remarkably valuable: it only exists when the overwhelming majority of qualified professionals all hold the same consistent professional opinion. It truly is one of the most important and valuable types of expertise that humanity has ever developed.

My Facebook friend who linked this article prefaced it by saying, “For too many people, _research_ means ‘google stuff until you’re bored or emotionally satisfied’. I think we’re going to have to make up a new word to use when we’re talking about actual research.”

\\

Finally, a bit of fun. A minute-and-a-half YouTube video asks why, in Star Trek, when two starships approach each other, they are always level with each other. My answer: it’s intuitive physics at work, a naive mapping of the human experience of water-going ships, which move two-dimensionally, onto space-going ships, which can move three-dimensionally. The video’s answer is cute, but incomplete; the matter of people beaming down from a starship to a planet is still an issue.

SF writers have occasionally dealt with a related issue: if you could time travel to the past or future, how would you catch up to where the Earth will be, or was? Wouldn’t it move out from underneath you? Some of these writers develop workarounds or rationales; most just ignore the issue.
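To give a sense of the scale involved (my own back-of-the-envelope figures, not taken from any of the stories): Earth orbits the Sun at roughly 30 km/s, and the Sun itself orbits the galactic center at roughly 230 km/s, so even a short hop through time leaves “here” a very long way from where the planet has moved to.

```python
# Back-of-the-envelope: how far Earth travels while a time traveler "skips" ahead.
# Speeds are approximate, rounded figures.

EARTH_ORBITAL_SPEED_KM_S = 30    # Earth around the Sun, ~30 km/s
SUN_GALACTIC_SPEED_KM_S = 230    # Sun around the galactic center, ~230 km/s

def distance_km(hours, speed_km_s):
    """Distance traveled along the path at a constant speed over the given hours."""
    return speed_km_s * hours * 3600

for hours in (1, 24, 24 * 7):
    orbital = distance_km(hours, EARTH_ORBITAL_SPEED_KM_S)
    galactic = distance_km(hours, SUN_GALACTIC_SPEED_KM_S)
    print(f"{hours:>4} h: ~{orbital:,.0f} km (orbital) + ~{galactic:,.0f} km (galactic)")

# Even a one-hour jump leaves "here" more than 100,000 km from the planet,
# unless the time machine is somehow anchored to Earth's reference frame.
```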

(Also, as I recall from last seeing the film decades ago, the second Star Trek film, The Wrath of Khan, made a plot point of having the Enterprise approach the enemy ship from below, rather than straight on, and thought itself very clever for coming up with this ploy.)
