On November 10, 1942, after British and Commonwealth forces defeated the Germans and Italians at the Second Battle of El Alamein, Winston Churchill told the British Parliament, “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”
In The Hinge of Fate, volume 3 of his marvelous 6-volume history of World War II, he reflected, “It may almost be said, ‘Before Alamein we never had a victory. After Alamein we never had a defeat’.”
The publication of Nicholas Lewis and Judith Curry’s newest paper in the Journal of Climate reminds me of that. For years the two authors have focused much of their work on figuring out how much warming should come from adding carbon dioxide to the atmosphere. In this paper they conclude that it is at least 30%, and probably 50%, less than climate alarmists have claimed for the last forty years.
In fact, there are reasons to think the alarmists’ error is even greater than 50 percent. And if that is true, then all the reasons for drastic policies to cut carbon dioxide emissions – by replacing coal, oil and natural gas with wind and solar as dominant energy sources – simply disappear. Here’s another important point.
For the last 15 years or more, at least until a year or two ago, it would have been inconceivable that the Journal of Climate would publish their article. That this staunch defender of climate alarmist “consensus science” does so now could mean the alarmist dam has cracked, the water is pouring through, and the crack will spread until the whole dam collapses.
Is this the beginning of the end of climate alarmists’ hold on climate science and policy, or the end of the beginning? Is it the Second Battle of El Alamein, or is it D-Day? I don’t know, but it is certainly significant. It may well be that henceforth the voices of reason and moderation will never suffer a defeat.
Shattered Consensus: The True State of Global Warming was edited 13 years ago by climatologist Patrick J. Michaels, then Research Professor of Environmental Sciences at the University of Virginia and the State Climatologist of Virginia; now Senior Fellow in Environmental Studies at the Cato Institute. Its title was at best premature.
The greatly exaggerated “consensus” – that unchecked human emissions of carbon dioxide and other “greenhouse” gases would cause potentially catastrophic global warming – wasn’t shattered then, and it hasn’t shattered since then. At least, that’s the case if the word “shattered” means what happens when you drop a piece of fine crystal on a granite counter top: instantaneous disintegration into tiny shards.
However, although premature and perhaps a bit hyperbolic, the title might have been prophetic.
From 1979 (when the National Academy of Sciences published “Carbon Dioxide and Climate: A Scientific Assessment”) until 2013 (when the Intergovernmental Panel on Climate Change published its “5th Assessment Report” or AR5), “establishment” climate-change scientists claimed that – if the concentration of carbon dioxide (or its equivalent in other “greenhouse” gases) doubled – global average surface temperature would rise by 1.5–4.5 degrees C, with a “best estimate” of about 3 degrees. (That’s 2.7–8.1 degrees F, with a “best” of 5.4 degrees F.)
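The Celsius-to-Fahrenheit figures above are easy to verify: a temperature *difference* converts by the factor 9/5 alone (the +32 offset applies only to absolute temperatures, not to changes). A minimal check in Python:

```python
# Convert a temperature *difference* from Celsius to Fahrenheit.
# No +32 offset: that applies only to absolute temperatures.
def delta_c_to_f(delta_c):
    return delta_c * 9 / 5

for dc in (1.5, 3.0, 4.5):
    print(f"{dc} C -> {delta_c_to_f(dc):.1f} F")
# 1.5 C -> 2.7 F, 3.0 C -> 5.4 F, 4.5 C -> 8.1 F, matching the text
```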
But late in the first decade of this century, spurred partly by the atmosphere’s failure to warm as rapidly as the “consensus” predicted, various studies began challenging that conclusion, saying “equilibrium climate sensitivity” (ECS) was lower than claimed. As the Cornwall Alliance reported four years ago:
“The IPCC estimates climate sensitivity at 1.5˚C to 4.5˚C, but that estimate is based on computer climate models that failed to predict the absence of warming since 1995 and predicted, on average, four times as much warming as actually occurred from 1979 to the present. It is therefore not credible. Newer, observationally based estimates have ranges like 0.3˚C to 1.0˚C (NIPCC 2013a, p. 7) or 1.25˚C to 3.0˚C – with a best estimate of 1.75˚C (Lewis and Crok 2013, p. 9). Further, “No empirical evidence exists to support the assertion that a planetary warming of 2°C would be net ecologically or economically damaging” (NIPCC 2013a, p. 10).” [Abbreviated references are identified here.]
However, most of the lower estimates of equilibrium climate sensitivity were published in places that are not controlled by “consensus” scientists and thus were written off or ignored.
Now, though, a journal dead center in the “consensus” – the American Meteorological Society’s Journal of Climate – has accepted a new paper, “The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity,” by Nicholas Lewis and Judith Curry. It concludes that ECS is very likely just 50–70% as high as the “consensus” range. (Lewis is an independent climate science researcher in the UK. Curry was Professor and Chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology and now is President of the Climate Forecast Applications Network.)
Here’s how Lewis and Curry summarize their findings in their abstract, with the takeaways emphasized:
“Energy budget estimates of equilibrium climate sensitivity (ECS) and transient climate response (TCR) [increase in global average surface temperature at time of doubling of atmospheric CO2 concentration, i.e., 70 years assuming 1% per annum increase in concentration] are derived based on the best estimates and uncertainty ranges for forcing provided in the IPCC Fifth Assessment Scientific Report (AR5).
“Recent revisions to greenhouse gas forcing and post-1990 ozone and aerosol forcing estimates are incorporated and the forcing data extended from 2011 to 2016. Reflecting recent evidence against strong aerosol forcing, its AR5 uncertainty lower bound is increased slightly. Using an 1869–1882 base period and a 2007−2016 final period, which are well-matched for volcanic activity and influence from internal variability, medians are derived for ECS of 1.50 K (5−95%: 1.05−2.45 K) and for TCR of 1.20 K (5−95%: 0.9−1.7 K). These estimates both have much lower upper bounds than those from a predecessor study using AR5 data ending in 2011.
“Using infilled, globally-complete temperature data gives slightly higher estimates; a median of 1.66 K for ECS (5−95%: 1.15−2.7 K) and 1.33 K for TCR (5−95%:1.0−1.90 K). These ECS estimates reflect climate feedbacks over the historical period, assumed time-invariant.
“Allowing for possible time-varying climate feedbacks increases the median ECS estimate to 1.76 K (5−95%: 1.2−3.1 K), using infilled temperature data. Possible biases from non-unit forcing efficacy, temperature estimation issues and variability in sea-surface temperature change patterns are examined and found to be minor when using globally-complete temperature data. These results imply that high ECS and TCR values derived from a majority of CMIP5 climate models are inconsistent with observed warming during the historical period.”
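The energy-budget method behind these numbers can be sketched with the standard formulas ECS = F₂ₓ·ΔT/(ΔF − ΔQ) and TCR = F₂ₓ·ΔT/ΔF, where F₂ₓ is the forcing from doubled CO2, ΔT the warming between the base and final periods, ΔF the change in total forcing, and ΔQ the change in ocean heat uptake. The sketch below uses round illustrative inputs, not the paper’s actual data, so its outputs only roughly approximate the published medians:

```python
import math

# Illustrative energy-budget sketch; the four inputs are round assumed
# values, NOT Lewis and Curry's exact data.
F2X = 3.8   # W/m^2, forcing from a doubling of CO2 (assumed)
D_T = 0.8   # K, warming between base and final periods (assumed)
D_F = 2.5   # W/m^2, change in total forcing (assumed)
D_Q = 0.5   # W/m^2, change in ocean heat uptake (assumed)

ecs = F2X * D_T / (D_F - D_Q)   # equilibrium climate sensitivity
tcr = F2X * D_T / D_F           # transient climate response

# The "70 years" in the TCR definition: 1% per annum compounds to a
# doubling in about 70 years, since 1.01**70 is close to 2.
years_to_double = math.log(2) / math.log(1.01)

print(f"ECS ~ {ecs:.2f} K, TCR ~ {tcr:.2f} K, doubling ~ {years_to_double:.0f} yr")
```

With these assumed inputs the sketch lands near the paper’s 1.50 K / 1.20 K medians, which is the point of the method: sensitivity follows directly from observed warming, forcing, and heat uptake rather than from model simulation.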
A press release from the Global Warming Policy Forum quoted Lewis as saying, “Our results imply that, for any future emissions scenario, future warming is likely to be substantially lower than the central computer model-simulated level projected by the IPCC, and highly unlikely to exceed that level.”
Veteran environmental science writer Ronald Bailey commented on the new paper in Reason, saying: “How much lower? Their median ECS estimate of 1.66°C (5–95% uncertainty range: 1.15–2.7°C) is derived using globally complete temperature data. The comparable estimate for 31 current generation computer climate simulation models cited by the IPCC is 3.1°C. In other words, the models are running almost two times hotter than the analysis of historical data suggests that future temperatures will be.
“In addition, the high-end estimate of Lewis and Curry’s uncertainty range is 1.8°C below the IPCC’s high-end estimate.” [emphasis added]
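Bailey’s two comparisons can be checked with quick arithmetic on the figures quoted in the text (a sketch, not part of either analysis):

```python
# Figures as quoted in the text.
model_median = 3.1   # K, CMIP5 model ECS cited by the IPCC
lc_median = 1.66     # K, Lewis and Curry median ECS (infilled data)
print(f"ratio ~ {model_median / lc_median:.2f}")  # ~ 1.87, "almost two times"

ipcc_high = 4.5      # K, top of the IPCC likely range
lc_high = 2.7        # K, top of Lewis and Curry's 5-95% range
print(round(ipcc_high - lc_high, 2))              # 1.8 K below the IPCC high end
```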
Cornwall Alliance Senior Fellow Dr. Roy W. Spencer (Principal Research Scientist in Climatology at the University of Alabama-Huntsville and U.S. Science Team Leader for NASA’s satellite global temperature monitoring program) commented on the paper, arguing that even Lewis and Curry’s figures rest on several assumptions that are at best unknown and quite likely false. He noted:
“I’d like to additionally emphasize overlooked (and possibly unquantifiable) uncertainties: (1) the assumption in studies like this that the climate system was in energy balance in the late 1800s in terms of deep ocean temperatures; and (2) that we know the change in radiative forcing that has occurred since the late 1800s, which would mean we would have to know the extent to which the system was in energy balance back then.
“We have no good reason to assume the climate system is ever in energy balance, although it is constantly readjusting to seek that balance. For example, the historical temperature (and proxy) record suggests the climate system was still emerging from the Little Ice Age in the late 1800s. The oceans are a nonlinear dynamical system, capable of their own unforced chaotic changes on century to millennial time scales, that can in turn alter atmospheric circulation patterns, thus clouds, thus the global energy balance. For some reason, modelers sweep this possibility under the rug (partly because they don’t know how to model unknowns).
“But just because we don’t know the extent to which this has occurred in the past doesn’t mean we can go ahead and assume it never occurs.
“Or at least if modelers assume it doesn’t occur, they should state that up front.
“If indeed some of the warming since the late 1800s was natural, the ECS would be even lower.”
With regard to that last sentence, Spencer’s University of Alabama research colleague Dr. John Christy and co-authors Dr. Joseph D’Aleo and Dr. James Wallace published a paper in the fall of 2016 (revised in the spring of 2017). It argued that solar, volcanic and ocean current variations are sufficient to explain all the global warming over the period of allegedly anthropogenic warming, leaving no global warming to blame on carbon dioxide.
At the very least, this suggests that indeed “some of the warming since the late 1800s was natural” – which means the ECS would be even lower than Lewis and Curry’s estimate.
All of this has important policy implications.
Wisely or not, the global community agreed in the 2015 Paris climate accords to try to limit global warming to at most 2 degrees C – preferably 1.5 degrees – above pre-industrial (pre-1850) levels.
If Lewis and Curry are right, and the warming effect of CO2 is only 50–70% of what the “consensus” has said, cuts in CO2 emissions need not be as drastic as previously thought. That’s good news for the billions of people living in poverty and without affordable, reliable electricity. Their hope for electricity is seriously compromised by efforts to impose a rapid transition from abundant, affordable, reliable fossil fuels to diffuse, expensive, unreliable wind and solar (and other renewables) as chief electricity sources.
Moreover, if Spencer (like many others who agree with him) is right that the assumptions behind ECS calculations are themselves mistaken … and Christy (like many others who agree with him) is right that some or all of the modern warming has been naturally driven – then ECS is even lower than Lewis and Curry thought. That would mean there is even less justification for the punitive, job-killing, poverty-prolonging energy policies sought by the “climate consensus” community.
Regardless, we’re coming closer and closer to fulfilling the prophecy in Michaels’ 2005 book. The alarmist “consensus” on anthropogenic global warming is about to be shattered – or at least eroded and driven into a clear minority status.
The Biggest Deception in the Human Caused Global Warming Deception
By Dr. Tim Ball – WUWT
It is likely that the year-to-year variance in the amount of water vapour in the atmosphere exceeds the warming effect of human CO2. I can’t prove it, but nobody can disprove it with any reasonable measure of evidence, because there is insufficient data and insufficient understanding of natural processes. Yet it is likely true, and that alone destroys the human-caused global warming (AGW) narrative. This is one reason why AGW is the biggest, most pervasive, and longest-lasting ‘fake news’ story to date. It is also a ‘deep state’ story, created and perpetuated by and through the bureaucracies.
Part of the reason the deception persists is the failure of skeptics to explain the scientific problems with the AGW claim in a way people can understand. As I have written, most people, that is, the 85% who lack science skills, find the science arguments of most skeptics too arcane.
However, there are problems on both sides of the debate that preclude, or at least seriously limit, the possibility of clear understanding and explanation. Chief among them is the lack of data. There is so much speculation without facts that it is time to consider the lesson of problem solving identified by Sir Arthur Conan Doyle through his detective Sherlock Holmes:
“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”
It is telling that we must use a fictional source for this reminder, because in the real world facts and data are no longer a prerequisite. There is virtually no real weather or climate data, yet people on both sides of the debate build computer models and speculate endlessly. They end up doing precisely what Holmes warned against. The number of people who are so certain of the AGW hypothesis yet know virtually nothing is frightening.
Another story from Holmes identifies two other problems created by the lack of data and speculation. The first is ignoring variables.
Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”
Sherlock Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”
Why are the IPCC and proponents of AGW ignoring and even deliberately omitting major variables in the complexity that is weather and climate? How are they allowed to claim the validity of their predictions when virtually everything is omitted?
The second involves getting so wrapped up in the complexity that you ignore the obvious. This story did not originate with Conan Doyle but uses the Sherlock approach of keeping calm and not losing perspective.
Sherlock Holmes and Dr. Watson go on a camping trip. After a good dinner and a bottle of wine, they retire for the night, and go to sleep. Some hours later, Holmes wakes up and nudges his faithful friend. “Watson, look up at the sky and tell me what you see.”
“I see millions and millions of stars, Holmes” replies Watson.
“And what do you deduce from that?”
Watson ponders for a minute. “Well, astronomically, it tells me that there are millions of galaxies and potentially billions of planets. Astrologically, I observe that Saturn is in Leo. Horologically, I deduce that the time is approximately a quarter past three. Meteorologically, I suspect that we will have a beautiful day tomorrow. Theologically, I can see that God is all powerful, and that we are a small and insignificant part of the universe.”
“What does it tell you, Holmes?”
Holmes is silent for a moment. “Watson, you idiot!” he says. “Someone has stolen our tent!”
A multitude of variables is overlooked or ignored. Almost all are unmeasured or only minimally measured. Much synthetic data is created by models and then used as if it were real data in other models; the results of those models show the inbreeding. Even the most extreme claim for the global warming effect of human CO2 is within the error of the estimate of almost every single variable. Albedo varies from year to year, creating an energy variation that likely exceeds the estimated impact of human CO2. Go and look at the work of Kukla and Kukla for early awareness of this issue. The work on the natural variability of albedo due to snow cover continued. More recently we learned,
Because of its large seasonal variability and distinctive physical properties, snow plays a major role in the climate system through strong positive feedbacks related to albedo [e.g., Groisman et al., 1994a] and other weaker feedbacks related to moisture storage, latent heat, and insulation of the underlying surface [Stieglitz et al., 2003].
The Intergovernmental Panel on Climate Change (IPCC) provides this assessment of the situation in Assessment Report 5 (p.359).
In addition to reductions in snow cover extent, which will reduce the mean reflectivity of particular regions, the reflectivity (albedo) of the snow itself may also be changing in response to human activities.
How do they know? It is pure speculation. Then they send a very confused message.
However, spatially comprehensive surveys of impurities in Arctic snow in the late 2000s and mid-1980s suggested that impurities decreased between those two periods (Doherty et al., 2010) and hence albedo changes have probably not made a significant contribution to recent reductions in Arctic ice and snow.
The balance of the entry discusses general conditions under the title, “Interactions of Snow within the Cryosphere.” In the climate models chapter, they say,
There is a strong linear correlation between Northern-Hemisphere spring snow cover extent and annual mean surface air temperature in the models, consistent with available observations. The recent negative trend in spring snow cover is underestimated by the CMIP5 (and CMIP3) models (Derksen and Brown, 2012), which is associated with an underestimate of the boreal land surface warming (Brutel-Vuilmet et al., 2013).
They don’t know and what they use underestimates reality, which they also don’t know. Despite that, in the AR5 Synthesis Report, they state,
There is very high confidence that the extent of Northern Hemisphere snow cover has decreased since the mid-20th century by 1.6 [0.8 to 2.4] % per decade for March and April, and 11.7% per decade for June, over the 1967 to 2012 period.
They fail to tell us how much of that decrease was due to human-caused warming. They can’t, because they don’t know the natural variability for any time prior to satellite data, or even afterward, because, as they acknowledge, full and accurate data is unavailable. Remember, this is just one variable among a myriad of variables.
I will focus on water vapour because it is the least measured, least understood, and yet most critical variable in the entire greenhouse-gas basis of the claim of warming from human interference. The IPCC was able to essentially ignore it as a cause of warming through a definition of climate change that includes only human causes. Ironically, as I will explain, they use and manipulate it to bolster their deception.
The obsessive political objective was to isolate and demonize CO2 from human sources as the cause of global warming. This was achieved primarily by directing a controlled group of unaccountable people, mostly bureaucrats, to consider only human causes of climate change. That eliminates the Sun because, as King Canute showed, there are things no leader can control.
Despite this, the IPCC included a category “sun” in their list of “forcing” variables. Why? Humans don’t and can’t vary solar insolation. The most they can argue is that humans add particulates to the atmosphere and that these filter insolation. The problem is we have no idea how much particulate matter is in the atmosphere or how it varies over time or space. We saw an example of this when AGW proponents claimed the cooling from 1940 to 1980 was due to increased sulfate levels from humans. How did they know? They simply added enough sulfates to the models to approximate the cooling. The problem was that after 1980 warming resumed despite no change in the sulfate levels.
The decisions were more difficult with regard to greenhouse gases (GHG) because humans produce all of them in varying quantities. Worse, the one they wanted to demonize was, at the start in the 1980s, less than 4% of the total. Water vapour was 95% of the total, and humans added it to the atmosphere. The IPCC acknowledged the human production but then said the amount was so small relative to the total volume that they excluded it from their calculations. They did what early computer models did with evaporation from the oceans: having no measurements, they assumed what was called a ‘swamp’ approach, in which evaporation was 100 percent all the time.
With CO2 they assumed – incorrectly, as the OCO-2 satellite later disclosed – that it is evenly distributed throughout the atmosphere. Water vapour varies more in volume and distribution throughout the atmosphere than any other gas. That is why meteorology developed four different measures – mixing ratio, specific humidity, absolute humidity, and relative humidity – to try to understand water vapour and its role in the atmosphere. The last is the best known but the least meaningful from a scientific perspective. The amount of water vapour in the air can vary from almost zero to about 4%. This raises the question: how much water is in the atmosphere, and how does it vary over time?
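The four measures named above can be illustrated with a short sketch. It uses the common Magnus approximation for saturation vapour pressure, and the input temperature and vapour pressure are arbitrary illustrative values:

```python
import math

def saturation_vapour_pressure_hpa(t_c):
    """Magnus approximation: saturation vapour pressure (hPa) at t_c Celsius."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_measures(t_c, e_hpa, p_hpa=1013.25):
    """Return the four standard humidity measures for vapour pressure e_hpa."""
    e_s = saturation_vapour_pressure_hpa(t_c)
    rh = 100.0 * e_hpa / e_s                  # relative humidity, %
    w = 0.622 * e_hpa / (p_hpa - e_hpa)       # mixing ratio, kg vapour / kg dry air
    q = w / (1.0 + w)                         # specific humidity, kg vapour / kg moist air
    # Absolute humidity: vapour density from the ideal gas law (R_v = 461.5 J/kg/K).
    rho_v = (e_hpa * 100.0) / (461.5 * (t_c + 273.15))
    return rh, w, q, rho_v

# Illustrative conditions: 25 C air with 15 hPa vapour pressure.
rh, w, q, rho_v = humidity_measures(t_c=25.0, e_hpa=15.0)
print(f"RH ~ {rh:.0f}%, mixing ratio ~ {w*1000:.1f} g/kg, "
      f"specific humidity ~ {q*1000:.1f} g/kg, absolute ~ {rho_v*1000:.1f} g/m^3")
```

The point of the exercise: the same parcel of air yields four different-looking numbers, and only the first (relative humidity, the one the public hears) changes when temperature changes with no change in actual water content.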
The United States Geological Survey (USGS) has a Water Science School. They say:
One estimate of the volume of water in the atmosphere at any one time is about 3,100 cubic miles (mi3) or 12,900 cubic kilometers (km3). That may sound like a lot, but it is only about 0.001 percent of the total Earth’s water volume of about 332,500,000 mi3 (1,385,000,000 km3), as shown in the table below. If all of the water in the atmosphere rained down at once, it would only cover the globe to a depth of 2.5 centimeters, about 1 inch.
Notice how they downplay its atmospheric significance by comparing it to the total water volume on the planet. They are also talking only about water in its liquid phase, yet from a weather perspective water is just as important as a gas and as a solid. What percentage of the water in the atmosphere is in each of the three phases, and how does that vary over time? Nobody knows, or even has a crude estimate, as the failure of computer models to simulate clouds and their effects proves. Not only that, but phase changes can occur in large volumes in a matter of seconds.
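The USGS figures themselves check out arithmetically, taking Earth’s total surface area as roughly 510 million km² (a standard round value not given in the quotation):

```python
# Spreading the quoted atmospheric water volume over Earth's surface.
atmos_water_km3 = 12_900
earth_surface_km2 = 510_000_000   # approximate total surface area of Earth

depth_km = atmos_water_km3 / earth_surface_km2
depth_cm = depth_km * 1e5         # 1 km = 100,000 cm
print(f"depth ~ {depth_cm:.2f} cm")   # ~ 2.53 cm, about one inch, as quoted

total_water_km3 = 1_385_000_000
print(f"fraction ~ {atmos_water_km3 / total_water_km3:.6f}")  # ~ 0.000009, i.e. ~0.001%
```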
The IPCC set water vapour aside as a GHG variable by assuming it remained constant. They had to, because they don’t know how much it varies over time. They concentrated on CO2 but soon discovered that there is an upper limit to the warming effect of a CO2 increase. I called this the ‘black paint’ problem. If you want to block light passing through a window, apply a layer of paint; it will block most of the light. A second layer reduces the remaining light only fractionally. The current level of CO2 is like the first layer of paint: doubling the level has only a fractional effect. How little is understood about this effect is reflected in the widely differing estimates of it (Figure 1). The problem continues, as evidenced by the ongoing decline of published estimates of CO2 climate sensitivity.
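The ‘black paint’ analogy corresponds to the standard logarithmic forcing approximation of Myhre et al. (1998), ΔF = 5.35 ln(C/C₀) W/m², under which each successive increment of CO2 adds less forcing than the one before. A sketch with illustrative concentrations:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Logarithmic CO2 forcing approximation (Myhre et al. 1998), W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Forcing added by each successive 100 ppm increment shrinks every time,
# which is the 'second layer of paint' effect in quantitative form.
levels = [280, 380, 480, 580, 680]
for lo, hi in zip(levels, levels[1:]):
    print(f"{lo} -> {hi} ppm: +{co2_forcing(hi) - co2_forcing(lo):.2f} W/m^2")
```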
The response to this problem was to posit a positive feedback: a CO2 increase causes a temperature increase, which increases evaporation and puts more water vapour in the atmosphere, which keeps temperature rising beyond the capability of CO2 alone. I accept that this sounds theoretically plausible, but it is not supported by empirical evidence, and it does not allow for negative feedbacks, for example, more clouds forming and changing the albedo. Regardless, there is no empirical data; the only data they have is generated in computer models. The outcome is determined by the data used to construct the model, but there is no meaningful data or even good estimates. The sequence, then, is that data is produced using models for which there is no data, and the outcome is used in further models for which there is no data. The amount of the water vapour increase suddenly becomes important in their narrative. But how much increased evaporation was necessary to create a positive feedback? How can they determine the amount if they don’t know what the original volume was or how it changes over time? Let me put a number on my opening claim: it is probable that even a 1% variation in atmospheric water vapour equals or exceeds all the effects of human-sourced CO2.
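The feedback loop described above has a standard linear-feedback form: a no-feedback warming ΔT₀ amplified by feedback fraction f sums the geometric series (warming begets evaporation begets warming) to ΔT = ΔT₀/(1 − f). A sketch with illustrative, not measured, numbers:

```python
def amplified_warming(dt0, f):
    """Linear feedback: dT = dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f)."""
    assert 0 <= f < 1, "the series only converges for f < 1"
    return dt0 / (1.0 - f)

dt0 = 1.1   # K, illustrative no-feedback warming for doubled CO2 (assumed)
for f in (0.0, 0.3, 0.5):
    print(f"feedback fraction {f}: dT = {amplified_warming(dt0, f):.2f} K")
```

The formula makes the article’s point concrete: the headline warming numbers depend entirely on the assumed value of f, and f cannot be derived from observation without knowing how much water vapour there was to begin with.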
So, not only have Sherlock and Watson lost their tent, but they are now exposed to precipitation. Unfortunately, the IPCC and their models will not know what form it will take. Sherlock would know why. It is because they have no data and are theorizing and speculating.