Levitin: A FIELD GUIDE TO LIES

Daniel J. Levitin’s A FIELD GUIDE TO LIES: Critical Thinking in the Information Age (Dutton, 2016) is a nice complement to the book previously reviewed. Levitin is an academic at McGill University and has written three previous books, including This Is Your Brain on Music: The Science of a Human Obsession (2006).

Most of the content here is familiar from readily available material about how statistics can mislead and about very basic epistemology, i.e. how to evaluate the world and know what is likely to be so. Though he uses the phrase “believing things that aren’t so,” he makes no mention of or reference to Thomas Gilovich or Michael Shermer, who have written books on related topics.

The book is fairly casual, with many good examples of the points it summarizes. It easily could have been longer. It covers many issues that I first read about in a college textbook, Logic and Contemporary Rhetoric, that remains for me the granddaddy of all books about logical fallacies and mental biases in politics and culture. (Blog post about that book)

Levitin’s book includes a glossary, notes, and index. Rather than summarize generally, I’ll just post my chapter-by-chapter notes, even though some topics are only mentioned here in passing.

Part One, Evaluating Numbers

Quoting Mark Twain: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

Page 3, Plausibility

Examples of statistics that fail basic plausibility checks, e.g. a claim that a salesperson made 1,000 sales a day. It pays to stop and think whether a claim is even remotely plausible, let alone correct.
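
A quick back-of-the-envelope check makes the point; the eight-hour day and the numbers below are my own illustrative assumptions, not the book’s:

    # Plausibility check: could a salesperson really make 1,000 sales in a day?
    # (Illustrative assumption: an 8-hour working day with no breaks.)
    sales_claimed = 1000
    seconds_available = 8 * 3600
    print(seconds_available / sales_claimed)  # ~29 seconds per sale, nonstop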

Counting pregnancies not births. A Fox News chart that adds up to way over 100%.

P11, Fun with Averages

Mean, median, mode. Examples of meaningless averages. The ecological and exception fallacies, p18. Examples involving life expectancy, and shifting baselines.
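
A tiny sketch of my own, with made-up ages at death rather than the book’s figures, shows how the choice of average matters for something like life expectancy; a few infant deaths drag the mean far below the median:

    # Mean vs. median with skewed data (made-up ages at death).
    from statistics import mean, median

    ages_at_death = [0, 1, 2, 70, 72, 75, 78, 80, 82, 85]
    print(mean(ages_at_death))    # 54.5 -- the "average" life expectancy
    print(median(ages_at_death))  # 73.5 -- closer to what a typical adult reached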

P26, Axis Shenanigans

Axes that are unlabeled, or truncated to exaggerate a claim by hiding the context. Example: another Fox News chart, p29. Examples of crime rates, home prices, double Y axes. The notorious chart put up in Congress about Planned Parenthood, to imply abortions far outnumber cancer screenings (the villain is a Republican of course), p40.

P43, Hijinks with How Numbers are Reported

Cola sales by themselves may not tell the story; it’s more about market share. Or if sales drop, show a chart of *cumulative* sales instead—Apple did this, p48.

One can find unrelated correlations, e.g. drownings and Nicolas Cage movies, p49. Deceptive illustrations, p52.

Framing issues, e.g. water usage should be reported per acre (or per some other unit); better to use proportions.

Beware extrapolations that lead to nonsensical results, e.g. coffee cooling.

Precision v accuracy. Comparing apples to oranges. How to display birth rates by state, using different sized bins, p68ff.

P75, How Numbers Are Collected

People collect numbers; they don’t just appear.

Sampling: must be representative. Example of how to sample pedestrians in San Francisco. People aren’t always honest. Be aware of margin of error.
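
For a simple random sample, the usual 95% margin of error for a reported proportion is roughly z * sqrt(p(1-p)/n); the n=1000 poll below is my own illustrative example, not one from the book:

    # Approximate 95% margin of error for a sample proportion.
    from math import sqrt

    def margin_of_error(p, n, z=1.96):
        """Half-width of the ~95% confidence interval for a proportion."""
        return z * sqrt(p * (1 - p) / n)

    print(round(margin_of_error(0.50, 1000) * 100, 1))  # ~3.1 percentage points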

Sampling bias: it led to the Literary Digest’s famously wrong prediction that Landon would defeat Roosevelt in 1936, and thus to the rise of the Gallup poll.

Sometimes it helps to disguise the purpose of a poll; to account for non-responses; to account for biases in reporting (what people say isn’t necessarily what they do).

Standardization, measurement error, definitions: of rain, of the homeless.

P95, how to word political poll questions; everyone will gripe. And how to recognize that some things are simply unknowable, e.g. how many suicides involved gay people, or how many readers a magazine actually has, p96. [[ points like these are much more crucial than most items here ]]

P97, Probabilities

Different kinds: classical, as with a die with six sides; frequentist, as in an experiment with a drug; and subjective, as when a person estimates his own likelihood of doing something. These can be confused, e.g. in weather forecasts. Combining independent probabilities involves multiplication. Some probabilities are conditional, and can be confused by prosecuting attorneys and juries.
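
A minimal sketch, using a two-dice example of my own rather than the book’s, of multiplying independent probabilities and of a conditional probability:

    # Multiplying independent probabilities, and one conditional probability,
    # checked by brute force over all 36 rolls of two dice.
    from itertools import product

    rolls = list(product(range(1, 7), repeat=2))

    p_two_sixes = sum(1 for a, b in rolls if a == 6 and b == 6) / len(rolls)
    print(p_two_sixes, (1/6) * (1/6))   # both ~0.0278: multiply the independent probabilities

    # P(sum >= 10 | first die is 6):
    first_six = [(a, b) for a, b in rolls if a == 6]
    print(sum(1 for a, b in first_six if a + b >= 10) / len(first_six))  # 0.5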

Visualizing: a fourfold table can help portray conditional probabilities – this is Bayesian thinking, seeing how results change as conditions change, e.g. about breast cancer screening. These conditionals do not work backwards, e.g. p115.
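
A sketch of the fourfold-table arithmetic for a screening test; the base rate, sensitivity, and false-positive rate below are illustrative assumptions, not the book’s exact figures:

    # Fourfold-table (Bayesian) reasoning for a screening test, per 10,000 people.
    base_rate = 0.01         # P(disease)
    sensitivity = 0.90       # P(positive | disease)
    false_pos_rate = 0.09    # P(positive | no disease)

    population = 10_000
    sick = population * base_rate                     # 100 people
    true_pos = sick * sensitivity                     # 90
    false_pos = (population - sick) * false_pos_rate  # 891

    print(true_pos / (true_pos + false_pos))  # ~0.09: P(disease | positive), far below
                                              # P(positive | disease); it doesn't work backwards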

People are uncomfortable with statistics and graphs, and some information is presented confusingly, deliberately or not.

Part Two, Evaluating Words

P123, How do we know?

We discover information ourselves, or acquire it implicitly, or are taught it explicitly. There are skills we can learn to help analyze claims, skills that should be taught to 14-year-olds, p124.2.

Recall Twain epigram again, p125. Did Twain really say it? Author relates the details of tracking it down.

P129, identifying expertise

First ask what their authority is. Experts can be wrong, but they are more likely to be right than nonexperts, 131.4; definition of expertise, p130. Expertise is often quite narrow. Experts’ work is peer reviewed, and experts are recognized by prizes and grants.

Be aware of the source hierarchy – some sources are more reliable than others, e.g. major newspapers compared to certain websites like TMZ.

P137, website domains: .gov, .edu, or .org are likely more reliable than .com. Ask who’s behind a site; some are deliberately misleading (another Republican example, p140).

Institutions can be biased. One can also look at who links to a page, p143.

Certain journals exercise rigorous peer review, as do textbooks and encyclopedias; not so much the claims made by food companies, say.

Beware out of date webpages, or stories that have been discredited but remain available. E.g. Trump’s discredited claims.

Some people copy information and claim it as their own.

Some make citations in footnotes that don’t actually support their claims (most people won’t look).

And beware confusing terminology, e.g. incidence vs. prevalence of a disease, p149.

P152, Overlooked, Undervalued Alternative Explanations

Beware of assuming a cause, and of outrageous claims that might have ordinary explanations. This applies to magicians, fortune-tellers, and so on, p153.

And claims about ancient astronauts, etc.; ask what is more likely.

Some claims are missing control groups, e.g. that listening to Mozart increases a baby’s IQ. The real explanation was that boredom temporarily decreases it, 158.

Cherry-picking and selective windowing bias the data toward a particular hypothesis. Beware the gambler’s fallacy…

Small samples are usually not representative, and statistical illiteracy can mislead, as in tricks with red/white cards, p167.
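
A quick simulation of my own (a fair coin, with arbitrarily chosen sample sizes) shows how much more small-sample proportions bounce around:

    # Spread of sample proportions at small vs. large sample sizes.
    import random

    random.seed(1)

    def sample_proportions(n, trials=10):
        return [sum(random.random() < 0.5 for _ in range(n)) / n for _ in range(trials)]

    print(sample_proportions(10))     # wide spread around 0.5
    print(sample_proportions(1000))   # tight spread around 0.5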

P168, Counterknowledge

This is misinformation packaged to look like fact, such as celebrity gossip or pseudo-history, including many conspiracy theories. Consider 9/11: what’s the probability of the various claims that it was a conspiracy? A handful of unexplained anomalies does not discredit thousands of other pieces of evidence.

Reporters can mislead; some report what one expert says, while better ones interview more than one. This is different from breaking-news mode, where news is gathered from eyewitnesses; the two modes can be confused.

Perception of risk can be skewed when ordinary risks are not reported, e.g. drownings, while unusual ones are in the news. [[ the standard availability bias ]]

And association can mislead, as in an argument about bottled water, 176.

Part Three: Evaluating the World

P181, How science works

We are all human with imperfect brains, and there are some scientists who are frauds – examples include Andrew Wakefield.

It’s a myth that science is neat and tidy and that scientists never disagree, and a myth that a single experiment ever settles anything. What counts is meta-analysis, pooling the results of many experiments to reach a consensus, with attendant caveats about sampling.

Deduction and induction; the syllogism; how a syllogism can be valid even if a premise is false. ‘Modus ponens,’ p186, with three variations: the contrapositive, the converse, and the inverse, p188-89.
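
A brute-force truth-table check, my own sketch rather than the book’s, confirms which of these forms are valid: modus ponens and the contrapositive (modus tollens) are, while the converse (affirming the consequent) and the inverse (denying the antecedent) are not:

    # Truth-table check of argument forms built on "if P then Q".
    from itertools import product

    def implies(p, q):
        return (not p) or q

    def valid(premises, conclusion):
        """Valid iff the conclusion holds in every case where all premises hold."""
        return all(conclusion(p, q)
                   for p, q in product([True, False], repeat=2)
                   if all(prem(p, q) for prem in premises))

    print(valid([implies, lambda p, q: p], lambda p, q: q))          # modus ponens: True
    print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # contrapositive (modus tollens): True
    print(valid([implies, lambda p, q: q], lambda p, q: p))          # converse (affirming the consequent): False
    print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # inverse (denying the antecedent): False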

Sherlock Holmes did *abduction*, making clever guesses from specifics to conclude another specific, to a degree of likelihood—but not logical certainty.

What we call ‘arguments’ are premises, or evidence, together with conclusions. Example of a deduction about maternity wards (Semmelweis): mortality rates went down not because of the initial hypotheses, but when doctors washed their hands.

P198, Logical fallacies

Illusory correlations, as when you get a phone call from someone after just thinking of them. You don’t consider all the times that didn’t happen, or how many people there are in the world…
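
A rough calculation of my own (all the numbers are invented for illustration) shows why such coincidences are unremarkable once you count all the chances for them to happen:

    # How likely is at least one "spooky" call over a year of idle thoughts?
    p_single = 1 / 10_000        # assumed chance any one thought-of person calls that day
    chances_per_year = 5 * 365   # assume you idly think of ~5 people per day

    p_at_least_one = 1 - (1 - p_single) ** chances_per_year
    print(round(p_at_least_one, 2))   # ~0.17 for one person in one year; near-certain across millions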

Framing probabilities: chance encounters on a vacation.

Framing risk: need to look at rates, not absolute numbers, about plane crashes or immigrant risk. [[ politicians play up specific anecdotes to undermine this ]]
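
A toy comparison with invented counts (not the book’s data) shows why the rate, not the raw total, is the measure of risk:

    # Rates vs. absolute numbers: hypothetical deaths and trips for two travel modes.
    deaths_a, trips_a = 500, 10_000_000   # many total deaths, vastly more trips
    deaths_b, trips_b = 50, 100_000       # fewer total deaths, far fewer trips

    rate_a = deaths_a / trips_a           # 0.00005 deaths per trip
    rate_b = deaths_b / trips_b           # 0.0005 deaths per trip
    print(rate_b / rate_a)                # 10.0: mode B is riskier despite fewer deaths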

Belief perseverance: we have a hard time letting go of a belief despite evidence to the contrary; we maintain allegiance to low-fat diets, or to the supposed link between autism and vaccines, p207.

P211, Knowing What You Don’t Know

Rumsfeld’s known unknowns. Uncovering unknown unknowns is the principal job of scientists: “The B-movie characterization of the scientist who clings to his pet theory to his last breath doesn’t apply to any scientist I know; real scientists know that they only learn when things don’t turn out the way they thought they would.” p213.2

Fourfold table, p214. It’s the unknown unknowns that are the most dangerous.

P216, Bayesian Thinking in Science and in Court

Scientists update their confidence in ideas based on new evidence; they move from the prior probability of a hypothesis to the posterior probability. Unlikely claims require stronger evidence. Examples from forensics.

Technically, the presumption of innocence doesn’t mean a prior probability of guilt of 0; the prior is always some tiny number, based on the number of possible perpetrators in a city, say. Example of a fourfold table of guilt based on a blood match…
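
A sketch, with invented numbers in the spirit of the book’s fourfold-table example rather than its actual figures, of moving from that tiny prior to a posterior after a blood-type match:

    # Prior-to-posterior updating for a blood-type match (all numbers invented).
    possible_perpetrators = 500_000          # adults in the city who could have done it
    prior_guilt = 1 / possible_perpetrators

    p_match_if_guilty = 1.0                  # the true perpetrator's blood matches
    p_match_if_innocent = 0.05               # 5% of the population shares the type

    posterior = (p_match_if_guilty * prior_guilt) / (
        p_match_if_guilty * prior_guilt + p_match_if_innocent * (1 - prior_guilt))
    print(posterior)   # ~0.00004: a match alone is nowhere near proof of guilt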

P222, Four Case Studies

Rather lengthy examples of:

  • The author’s dog, who may have had cancer;
  • Whether the moon landing was faked, p229, with reference to Rocketdyne;
  • Whether David Blaine’s stunts are real, p231, with mentions of James Randi and Peter Popoff;
  • And the universe, its layers of particles, and whether the Higgs boson is really the end.

P251, Conclusion: Discovering Your Own

Recalls Orwell’s 1984. Experts vs. the anti-science bias in public discourse, when we need expertise to make critical decisions. Also an anti-skepticism bias, in which people figure that if it’s on the internet, it must be true. We all must apply careful thinking… We’re better off knowing a moderate number of things with certainty than a large number of things that might not be so, p254.
