A recent recalibration of past climate models has revealed that we appear to have been underestimating the rate of increase in global temperatures, with the recorded change in the world’s ocean temperatures off by a tenth of a degree Celsius. The new figure stems from the ongoing recalibration of earlier, error-prone temperature recording methods, but a recent essay published in Scientific American also asks whether the continuing trend of underestimating climate change might be due to a form of bias on the part of climatologists.
Modern researchers have the luxury of access to large amounts of high-quality climate data gathered by a vast network of high-tech submersible buoys, but this now-standardized practice only began in the early 1990s. Before then, sea surface temperatures were measured using a variety of error-prone methods, such as open buckets, lamb’s wool–wrapped thermometers, and canvas bags. This pre-1990s lack of standardized data collection has forced climatologists to develop techniques that compensate for the errors introduced by the old methods, so that an accurate picture of the Earth’s past environment can be reliably compared with our present situation and used to better forecast what the future might hold.
Needless to say, the evolution of these correction techniques is an ongoing process, one that continually sharpens our picture of past measurements. Toward that end, the UK’s Met Office Hadley Centre has released the latest iteration of its Sea Surface Temperature data set (HadSST), dubbed “HadSST4”, and the news isn’t good: the latest calibrations to the temperature model show that past sea surface temperatures were 0.1°C (0.18°F) cooler than had previously been assumed, meaning that global warming has been proceeding faster than we previously thought.
In an article published in Scientific American, authors Naomi Oreskes, Michael Oppenheimer, and Dale Jamieson write that in addition to the corrected data set indicating that we’re that much closer to 1°C (1.8°F) above the pre-industrial average, “it was reported recently that in the one place where it was carefully measured, the underwater melting that is driving disintegration of ice sheets and glaciers is occurring far faster than predicted by theory—as much as two orders of magnitude faster—throwing current model projections of sea level rise further in doubt.”
To be clear, “two orders of magnitude” means that the ice sheets and glaciers in question are melting one hundred times faster than previously thought—a massive underestimation by scientists, and a potentially dangerous one if sea levels rise much faster than we’re anticipating. But where did such a discrepancy come from?
Newly discovered factors affecting the climate are an obvious source—these paradigm-shifting changes are the ones that make headlines—but the Scientific American article also examines the political bias that creeps into scientific publications.
Contrary to the “climate alarmist” label that global warming deniers use to dismiss forecasts of impending climate chaos, scientists are, in reality, prone to pulling their punches when it comes to making climate forecasts. Oreskes, Oppenheimer and Jamieson state that this habit of underestimating climate change trends is an ongoing one, and is “consistent with observations that we and other colleagues have made identifying a pattern in assessments of climate research of underestimation of certain key climate indicators, and therefore underestimation of the threat of climate disruption.
“When new observations of the climate system have provided more or better data, or permitted us to reevaluate old ones, the findings for ice extent, sea level rise and ocean temperature have generally been worse than earlier prevailing views,” they continue.
This unintended bias appears to have grown out of political pressure from the climate-skeptic community, which has fostered the scientific community’s need to present a single, unified voice to the public on the issue—“the perceived need for consensus, or what we label univocality: the felt need to speak in a single voice,” according to the article. “Many scientists worry that if disagreement is publicly aired, government officials will conflate differences of opinion with ignorance and use this as justification for inaction.”
The easiest way to illustrate how this perceived need for consensus results in underestimation is through simple math: “Consider a case in which most scientists think that the correct answer to a question is in the range 1-10, but some believe that it could be as high as 100,” the article illustrates.
“In such a case, everyone will agree that it is at least 1-10, but not everyone will agree that it could be as high as 100. Therefore, the area of agreement is 1-10, and this is reported as the consensus view. Wherever there is a range of possible outcomes that includes a long, high-end tail of probability, the area of overlap will necessarily lie at or near the low end.”
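The article’s overlap logic can be sketched in a few lines of code. This is an illustrative example (not from the article): each scientist’s estimate is treated as a range, and the “consensus” is the interval every range agrees on, which necessarily drops the high-end tail.

```python
def consensus_range(estimates):
    """Return the interval all estimates agree on: the overlap of the ranges."""
    low = max(lo for lo, hi in estimates)   # everyone agrees it's at least this
    high = min(hi for lo, hi in estimates)  # everyone agrees it's at most this
    return (low, high)

# Most scientists think the answer lies in 1-10; a minority thinks it could
# reach 100. The overlap -- and thus the reported consensus -- is just 1-10.
estimates = [(1, 10), (1, 10), (1, 10), (1, 100)]
print(consensus_range(estimates))  # (1, 10)
```

Because the overlap can never extend beyond the most conservative upper bound, any long high-end tail of possible outcomes is excluded from the reported figure by construction.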
The article also points out that individual scientists may hold a “mental model” in which, if their conclusions stray too far from what the rest of the scientific community believes—the previously mentioned “as high as 100” factor—the data they present may be “viewed as opinions rather than facts and dismissed not only by hostile critics but even by friendly forces,” prompting the publishing scientists to low-ball their findings to make them more palatable.
Oreskes, Oppenheimer and Jamieson came upon these insights while researching their book, Discerning Experts, which explores the history of the study of climate change. Regarding the overall integrity of modern climatological research, the authors’ own research brought them to the conclusion that there’s no reason to doubt what scientists are saying when it comes to global warming.