About Motivated Reasoning

(rev. 8jul20)

This isn’t so much a Notes for the Book post as a refinement of a portion of my Principles page, which compiles what I think are crucial guidelines for understanding the world, in particular how to evaluate claims made in politics, by advertising and the news media, and by religion, science, and pseudo-science (as it says there).

A couple of things came to mind recently. One is the distinction between “can I believe” and “must I believe,” highlighted by Jonathan Haidt (here, http://www.markrkelly.com/Blog/2015/12/09/jonathan-haidts-the-righteous-mind-why-good-people-are-divided-by-politics-and-religion-3/), citing earlier work by Thomas Gilovich. I’ll quote a passage from that link, summarizing my take on a portion of his book The Righteous Mind:

When we *want* to believe something, we ask, *Can* I believe it? For this you need only a single piece of pseudo-evidence. Whereas if you’re not inclined to believe something, you ask, *Must* I believe it? And then no matter how much supporting evidence you find, if you find a single reason to doubt the claim, you dismiss it. This is the essence of motivated reasoning, and Haidt illustrates it by observing that conspiracy theories operate on the former strategy (*can* I believe it? give me one example), while science operates on the latter (if all the evidence supports an idea, you must believe it), though non-scientists are adept at finding some reason to quibble. p85.6:

Whatever you want to believe about the causes of global warming or whether a fetus can feel pain, just Google your belief. You’ll find partisan websites summarizing and sometimes distorting relevant scientific studies. Science is a smorgasbord, and Google will guide you to the study that’s right for you.

(Don’t get your news, or do your research, from social media or Google!)

This recalls various takes on the idea of “motivated reasoning.” The simplex take is that people pay attention to evidence that supports their predetermined views and ignore evidence that doesn’t. A complex take is that people rationalize both of these moves: they find reasons to dismiss evidence that doesn’t support their position (mostly invalid reasons, e.g. ad hominem attacks) and reasons to accept evidence that does support it (by ignoring any counter-evidence). The multiplex take is the one described above: your standards for belief change depending on whether you want to believe, or not.

(I’ll resist describing examples, for now.)

Another thing that came to mind is the idea that a rational person should be able to evaluate their beliefs and, if only in principle, imagine possible evidence that would cause them to change their mind. If there’s *nothing* that could possibly change your mind about some commitment, then you are not being rational, and there’s no reason to hold a conversation with you on that subject. (Other subjects, sure; humans are masters at compartmentalizing.) Scientific theories, almost by definition, involve statements that are, in principle, subject to disproof. Religious and political positions do not. That’s why these subjects famously cannot be discussed over dinner; people’s opinions on these matters are seldom (politics) or never (religion) based on rational arguments.

And a famous quote from Christopher Hitchens: “What can be asserted without evidence can also be dismissed without evidence.”
