Naomi Oreskes, a professor at Harvard, has just published a book, Why Trust Science?, which has gotten a fair amount of coverage in various review and interview venues. Her main point, I gather, is that science isn't so much about the (perhaps simplistic) idea of the 'scientific method' as about the process by which results and conclusions are rigorously vetted.
Time Magazine just published a short essay in which she expands on this premise: Science Isn’t Always Perfect—But We Should Still Trust It.
(The headline in the print magazine is “Put Your Faith in Science” which is terrible — the point is, it’s not about ‘faith’; it’s about confidence, or trust, based on past results. Also, the print version is trimmed slightly from this online version.)
The answer is not the methods by which scientists generate claims, but the methods by which those claims are evaluated. The common element in modern science, regardless of the specific field or the particular methods being used, is the critical scrutiny of claims. It’s this process—of tough, sustained scrutiny—that works to ensure that faulty claims are rejected and that accepted claims are likely to be right.
She describes the process from initial claim, to casual discussion with colleagues, to submission to a scientific journal, which then engages in rigorous peer review before publication; and even then, "if serious errors are detected after publication, journals may issue errata or even retractions."
A key aspect of scientific judgment is that it is not done individually; it is done collectively. It’s a cliché that two heads are better than one: in modern science, no claim gets accepted until it has been vetted by dozens, if not hundreds of heads. In areas that have been contested, like climate science and vaccine safety, it’s thousands. This is why we are generally justified in not worrying too much if a single individual scientist, even a very famous one, dissents from the consensus. There are many reasons why an individual might dissent: he might be disappointed that his own theory didn’t work out, bear a personal grudge, or have an ideological ax to grind. She might be stuck on a detail that just doesn’t change the big picture, or enjoy the attention she gets for promoting a contrarian view. Or he might be an industry shill. The odds that the lone dissenter is right, and everyone else is wrong, are not zero, but so long as there has been adequate opportunity for the full vetting of his and everyone else’s claims they are probably in most cases close to zero. This is why diversity in science is important: the more people looking at a claim from different angles, the more likely they are to identify errors and blind-spots. It’s also why we should have a healthy skepticism towards brand-new claims: it takes years or sometimes decades for this process to unfold.
Final paragraph, cut from the print article:
Modern society relies on trust in experts, be they dentists, plumbers, car mechanics, or professors. If trust were to come to a halt, society would come to a halt, too. Like all people, scientists make mistakes, but they have knowledge and skills that make them useful to the rest of us. They can do things that we can’t. And just as we wouldn’t go to a plumber to fix our teeth or a dentist to fix our car, we shouldn’t go to actresses or politicians, much less industries with a vested interest or ideologically-driven think-tanks, for answers to scientific questions. If we need scientific information, we should go to the scientists who have dedicated their lives to learning about the matters at stake. On scientific matters, we should trust science.
This idea, that modern civilization relies on experts because no one person can know everything (unless he's a primitive pioneer living off the land, whose 'everything' is limited to basic survival skills), has been recognized for decades; I've observed it in commentaries about science and science fiction, as documented here in many posts.