When Galileo invented a device to measure temperature changes in 1593, he began a quest that continues to this day: obtaining objective information about how heat moves around, both locally and globally. The first attempt to create and maintain a consistent temperature record for a specific location was begun in 1699 by William Derham, a clergyman and gifted amateur scientist, just outside Upminster Church near London. By 1880, enough consistent temperature records existed globally to make a reasonably precise estimate of global temperature changes possible. These records have only improved since then, to the point that climate scientists can talk about the ups and downs of global temperatures with high confidence. Of course, we’ve had to spend most of our time talking about the ups.
There are six global temperature records, maintained by the NASA Goddard Institute for Space Studies (GISS); the National Oceanic and Atmospheric Administration (NOAA); the U.K. Met Office Hadley Centre together with the Climatic Research Unit (CRU) of the University of East Anglia; the European Centre for Medium-Range Weather Forecasts (ECMWF); the Japan Meteorological Agency; and the non-profit Berkeley Earth. The methodologies differ (for a good primer on the temperature records, click here), mostly in how they account for regions where direct thermometer measurements are limited. The records were also established for different reasons. The GISS temperature record, for example, started when the agency recognized a need to extract mean temperature trends from the data. Berkeley Earth, by contrast, was the brainchild of Richard A. Muller, author of the 2013 book Energy for Future Presidents (an excellent book on energy and the energy sector for its time, but like all energy books from over a decade ago, it could use an update). Muller was initially quite skeptical about global warming and wanted to address concerns he had about potential measurement biases in the other records. But Berkeley’s results barely differed from those of the other groups. In fact, the differences in the end results across all the records are essentially negligible, as Figure 2 shows: they display the same short-term variability and the same long-term trend. This consistency, despite the differences in methodology, suggests that all of the records would hold up to objective scrutiny.
But how do we know what the long-term trend is? Here’s where you need to be a little bit careful. The results of a linear regression analysis depend on which year you start with and which year you end with. For more than a decade following a massive El Niño event that produced a significant temperature spike in 1998, any linear regression that chose that year as a starting point did not produce a statistically significant positive slope. Plenty of climate skeptics used this to crow that climate change had stopped, and a fair number of credible scientists who should have known better published papers about the “pause,” or “hiatus,” in global warming. But those analyses were skewed by focusing on a year that was as strongly influenced by natural variability as any non-volcanic year on the record.
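To see how much the choice of start year can matter, here is a minimal Python sketch using an invented anomaly series (the numbers are placeholders for illustration, not real data and not any group's actual analysis): a steady underlying trend with noise, plus an artificial spike standing in for a 1998-style El Niño year. The fit that begins at the spike year should come out noticeably flatter than the fit over the full series.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1990, 2015)

# Hypothetical anomalies: a steady 0.018 C/yr underlying trend plus noise,
# with an artificial spike added at 1998 as a stand-in for a strong El Nino year.
anoms = 0.018 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
anoms[years == 1998] += 0.35

# Fit a linear trend from two different start years and compare the slopes.
for start in (1990, 1998):
    mask = years >= start
    slope, _ = np.polyfit(years[mask], anoms[mask], 1)
    print(f"start {start}: fitted trend = {slope * 10:+.2f} C per decade")
```

The underlying trend is identical in both cases; only the cherry-picked starting point changes the answer.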
There are ways to smooth out the natural variability in order to focus on the long-term trend. Many of the temperature analyses led by James Hansen (former director of the NASA Goddard Institute for Space Studies, now at the Columbia Climate School) employ an eleven-year running mean (see Figure 1 above), in which the data for a given month are averaged with the data extending 5 1/2 years back and 5 1/2 years forward. This cleanly smooths out the oscillations due to the 11-year solar cycle, and mostly smooths out the oscillations due to the more dominant, shorter-term El Niño cycle. Between 1970 and 2014, the trend in the running mean shown in Figure 1 is remarkably linear. This means that, once natural variability is smoothed out, global mean temperatures increased at a very steady rate (about 0.18ºC per decade) for nearly half a century. There was no pause or hiatus, or even a slowdown. Unfortunately, subsequent temperature data put us noticeably above that trend line: the trend since 2010 is in fact about 0.37ºC per decade. This has led Hansen to conclude, very emphatically, that the rate of global warming has accelerated.
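For readers who want to try such a smoothing themselves, here is a minimal sketch of a centered eleven-year (132-month) running mean in Python with pandas; the `monthly_anoms` series below is a hypothetical placeholder, not any of the actual records.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly anomaly series (placeholder values, not real data).
months = pd.date_range("1970-01", periods=600, freq="MS")
monthly_anoms = pd.Series(np.random.default_rng(0).normal(0.0, 0.1, 600), index=months)

# An eleven-year window is 132 months; center=True averages roughly
# 5.5 years back and 5.5 years forward around each month.
running_mean = monthly_anoms.rolling(window=132, center=True).mean()
```

Note that with a centered window the smoothed value is undefined for the first and last 66 months of the series, which is why a centered running mean cannot be extended all the way to the most recent data.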
But it’s still possible to explain what the temperature record is showing, or at least it should be. Gavin Schmidt (the current head of NASA/GISS) and Zeke Hausfather (of Berkeley Earth) recently contributed a section to the World Meteorological Organization (WMO)’s State of the Global Climate 2024 that looks more closely at the individual factors contributing to the Earth’s energy balance. These factors include the unexpectedly quick transition from La Niña back to El Niño, the early onset of a new solar cycle, reduced emissions of sulfate aerosols due to shipping regulations that went into effect in 2020 and to tighter emissions controls in Chinese industry, and a 2022 volcanic eruption in the South Pacific. The results (see Figure 12b and 12d on page 23 of the publication) indicate that the sum of the median contributions of these factors does not suffice to explain the observed warming. However, the uncertainties (especially for the El Niño contribution) are too large to draw that conclusion definitively.
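As a back-of-the-envelope illustration of why the size of the uncertainties matters, the sketch below adds up a set of invented median contributions and combines their (assumed independent) uncertainties in quadrature. None of these numbers are the WMO, GISS, or Berkeley Earth figures, and the real attribution work is considerably more involved; this only shows the arithmetic of the comparison.

```python
import math

# Hypothetical (median, one-sigma) warming contributions in degrees C.
# Invented placeholders only -- NOT the published values.
contributions = {
    "El Nino transition":     (0.10, 0.06),
    "early solar cycle":      (0.02, 0.01),
    "reduced aerosols":       (0.05, 0.03),
    "2022 volcanic eruption": (0.01, 0.01),
}
observed_extra_warming = 0.25  # hypothetical observed anomaly above the long-term trend

total_median = sum(m for m, _ in contributions.values())
# Combine independent uncertainties in quadrature (root-sum-square).
total_sigma = math.sqrt(sum(s ** 2 for _, s in contributions.values()))
gap = observed_extra_warming - total_median

print(f"sum of median contributions: {total_median:.2f} +/- {total_sigma:.2f} C")
print(f"unexplained gap: {gap:.2f} C (about {gap / total_sigma:.1f} sigma)")
```

If the combined uncertainty is comparable to the gap between the observed warming and the summed medians, the apparent shortfall is not statistically compelling, which is the sense in which large uncertainties blunt the conclusion.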
So what does this all mean? It’s possible that, if further research reduces the known uncertainties, the scientific community will be able to explain the observed warming of the last couple of years with what we already know. But it’s also possible that we’re missing something. Regardless, there is room for active debate, at least if the people who hold the purse strings allow for it. In my next post, I will look at some of the possibilities being discussed.