Homogenization of temperature data gives Cape Town, South Africa a warmer climate record

Playing around with my hometown's data, I was horrified when I found what NASA had done to it. Even the production of GISTEMP Ver. 2 was counterfactual.


Guest essay by Philip Lloyd

The raw data that is fed to NASA in order to develop the global temperature series is subjected to “homogenization” to ensure that it does not suffer from such things as changes in the method of measuring the mean temperature, or changes in readings because of changes in location. The process is supposed to be supported by metadata – i.e. the homogenizers are supposed to document the basis for any modification of the raw data.
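For readers unfamiliar with the mechanics, homogenization algorithms typically search a station's record for step changes (breakpoints) relative to neighbouring stations, and then adjust the segments on either side. The Python sketch below is a heavily simplified illustration of that idea only: the data, the margin, and the scoring rule are invented for the example and are not NASA's actual procedure.

```python
import numpy as np

def find_breakpoint(candidate: np.ndarray, neighbours: np.ndarray) -> int:
    """Locate the most likely step change in a candidate station's record.

    Compares the candidate series against the mean of its neighbours and
    returns the index at which the mean of the difference series shifts most.
    Purely illustrative; operational schemes use formal statistical tests.
    """
    diff = candidate - neighbours.mean(axis=0)   # remove shared regional signal
    margin = 5                                   # years required on each side
    scores = [
        abs(diff[:i].mean() - diff[i:].mean())
        for i in range(margin, diff.size - margin)
    ]
    return margin + int(np.argmax(scores))

# Toy usage: 60 years of data with an artificial 0.8 degree step at year 30.
rng = np.random.default_rng(0)
neighbours = 16.0 + 0.3 * rng.standard_normal((5, 60))
candidate = neighbours.mean(axis=0).copy()
candidate[30:] += 0.8                            # simulate a station move
print(find_breakpoint(candidate, neighbours))    # prints an index near 30
```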

For example, the raw data for my home city, Cape Town, goes back to 1880:

[Figure: raw GISS annual temperature record for Cape Town, 1880 onwards]

http://data.giss.nasa.gov/tmp/gistemp/STATIONS/tmp_141688160000_0_0/station.txt

The warmest years were in the 1930s, as they were in many other parts of the globe. There was then a fairly steep decline into the 1970s before the temperature recovered to today’s levels, close to the hottest years of the 1930s.

In NASA’s hands, the pre-1909 data was discarded; the 1910–1939 data was adjusted downwards by 1.1 °C; the 1940–1959 data was adjusted downwards by about 0.8 °C on average; and the 1969–1995 data was adjusted upwards by about 0.2 °C, with the end result that GISS Ver. 2 became:

[Figure: GISS Ver. 2 homogenized temperature record for Cape Town]
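To make the arithmetic of those adjustments concrete, here is a minimal Python sketch that applies period-wise offsets of the kind described above to a hypothetical annual-mean series. The temperatures are randomly generated placeholders, and the offsets simply restate the figures given in the text; this is an illustration, not NASA's algorithm.

```python
import numpy as np

# Hypothetical annual mean temperatures, one value per year for 1880-2016.
# These are randomly generated placeholders, not the actual Cape Town record.
years = np.arange(1880, 2017)
rng = np.random.default_rng(1)
raw = 17.0 + 0.3 * rng.standard_normal(years.size)

# Apply period-wise offsets of the kind described above (illustrative only):
adjusted = raw.copy()
adjusted[years < 1910] = np.nan                       # discard pre-1909 data
adjusted[(years >= 1910) & (years <= 1939)] -= 1.1    # cool the 1930s warmth
adjusted[(years >= 1940) & (years <= 1959)] -= 0.8
adjusted[(years >= 1969) & (years <= 1995)] += 0.2

# Net effect: the early record is cooled and the late record warmed slightly,
# steepening the apparent warming trend across the series.
```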

Being curious, I asked for the metadata. Eventually I got a single line, most of which was obvious – latitude, longitude, height above mean sea level – followed by four or five alphanumerics. This was no basis for the “adjustments” to the raw data.

Which should I believe? The raw data showed a marked drop from the 1940s to the 1970s, which echoed similar drops elsewhere. Time magazine covers showed the 1970s were indeed cold.

The raw data is probably accurate. The homogenized data is certainly not. It is difficult to avoid the conclusion that “homogenization” means “revise the story line” and “anthropogenic global warming” really means “humans changed the figures”.


Prof Philip Lloyd, Energy Institute, CPUT, SARETEC, Sacks Circle, Bellville

Ref.: https://wattsupwiththat.com/2017/01/28/homogenization-of-temperature-data-makes-capetown-south-africa-have-a-warmer-climate-record/

IPCC Objectives and Methods Mandated Elimination, Reduction, Manipulation of Inadequate Real Data and Creation of False Data.

Guest opinion: Dr. Tim Ball

Intergovernmental Panel on Climate Change (IPCC) computer model projections are unfailingly wrong. Projections for three scenarios, High, Medium and Low, are consistently high compared to the actual temperature. There is something fundamentally wrong with their work and claims. They should not be the basis of any policy, public or private. The following statement from the Fourth Assessment Report (AR4) is untenable given the projection results.

There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.

This is like saying that a soap box car is a good approximation of a Rolls Royce or a Ferrari. Their proof is that the soap box car appears to share some basic characteristics and moves down the road in the same direction – provided it is on a hill.


Figure 1: Soap-Box-Car basic kit

Even a simple systems diagram of the atmosphere (Figure 2) is a thousand times more complicated than the soap box kit shown.


Figure 2: After Kellogg and Schneider (1974)

In the list of variables in the diagram, and the many excluded, there are no meaningful data. By meaningful, I mean that even where the data exist, they are inadequate in length, coverage, or accuracy. They are insufficient as the basis for a computer model, and the model results are completely unrepresentative of reality and inadequate as the basis for any policy. Proof of that claim is in the failure to validate the models, except by adding or adjusting variables until the output appears to recreate known conditions. The failure of that sleight of hand shows in the failed projections. The only lesson they yield is the need for a total focus on data collection, because climate science is already fulfilling Sherlock Holmes’s warning:

“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”

I have written about the data problem often, but it is so fundamental that it requires constant repetition. It appears the Trump government is acting to stop the twisting of facts to suit theories. They may choose, justifiably considering the misuse of funds to date, to cancel funding for further climate change research. However, if they choose to go forward, any approach will founder on the lack of data.

Hubert Lamb, who probably gathered more climate data than any person before or since, explained in his autobiography (1997) that he created the Climatic Research Unit (CRU) because

“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”

In our personal communications, he described the problems that lack of data would create and regretted that it was occurring at the CRU. As he wrote,

“My immediate successor, Professor Tom Wigley, was chiefly interested in the prospects of world climates being changed as result of human activities, primarily through the burning up of wood, coal, oil and gas reserves…” “After only a few years almost all the work on historical reconstruction of past climate and weather situations, which first made the Unit well known, was abandoned.”

Wigley was the grandfather of the IPCC, the ‘go-to person,’ the arbiter among the central players during the debacle that the CRU became as it prepared and controlled the 2001 Third Assessment Report (TAR). Read the released Climategate emails and notice how often they seek his input to resolve disputes. It is a surreal experience because, invariably, his comments are restricted to warning about issues that might threaten their AGW objective, not to seeking the truth.

Many commentators pithily describe some home truths about data and statistics.

“If you torture the data enough, nature will always confess” – Ronald Coase.

“Facts are stubborn things, but statistics are more pliable.” – Anonymous.

“All of us are exposed to huge amounts of material, consisting of data, ideas, and conclusions — much of it wrong or misunderstood or just plain confused. There is a crying need for more intelligent commentary and review.” – Murray Gell-Mann

“Science is built of facts the way a house is built of bricks; but an accumulation of facts is no more science than a pile of bricks is a house.” – Henri Poincaré

 

I have written about the lack of data several times on this site and elsewhere.

https://wattsupwiththat.com/2014/03/21/ipcc-scientists-knew-data-and-science-inadequacies-contradicted-certainties-presented-to-media-public-and-politicians-but-remained-silent/

https://wattsupwiththat.com/2015/09/27/approximately-92-or-99-of-ushcn-surface-temperature-data-consists-of-estimated-values/

https://wattsupwiththat.com/2013/10/22/lack-of-data-for-all-phases-of-water-guarantees-failed-ipcc-projections/

https://wattsupwiththat.com/2013/10/02/ipcc-climate-a-product-of-lies-damn-lies-and-statistics-built-on-inadequate-data/

https://wattsupwiththat.com/2016/04/24/particulates-aerosols-and-climate-the-more-important-story/

https://wattsupwiththat.com/2015/11/04/more-ipcc-inadequacies-and-failures-precipitation/

In one article, I pointed out that the IPCC and key players in the AGW deception knew there was no data.

In 1993, Stephen Schneider, a primary player in the anthropogenic global warming hypothesis and in the use of models, went beyond doubt to certainty when he said,

“Uncertainty about important feedback mechanisms is one reason why the ultimate goal of climate modeling – forecasting reliably the future of key variables such as temperature and rainfall patterns – is not realizable.”

A February 3, 1999, US National Research Council Report said,

Deficiencies in the accuracy, quality, and continuity of the records place serious limitations on the confidence that can be placed in the research results.

To which Kevin Trenberth responded,

It’s very clear we do not have a climate observing system….This may come as a shock to many people who assume that we do know adequately what’s going on with the climate, but we don’t.

Two CRU Directors, Tom Wigley, and Phil Jones said,

Many of the uncertainties surrounding the causes of climate change will never be resolved because the necessary data are lacking.

They didn’t hide the fact, because the lack of data allowed them to produce the data they needed to prove their hypothesis and to present the models as representative of the real world. They also knew, as with most of climate science, that the public didn’t know the data was inadequate.

This article was triggered by listening to a powerful advocate of the anthropogenic global warming (AGW) hypothesis talk about “synthetic” data as if it were real data. Most people don’t know that most data are synthetic. To offset the dearth of data for global climate models, it is common practice to create synthetic data in one model and use it as ‘real’ data in another model. Since there is almost no data for any of the multitude of variables that combine to create weather and climate, they create a virtual reality.
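As a concrete illustration of that practice, the sketch below passes the output of one toy model to a second toy model as if it were observed data. Both ‘models’, their names, and every number in them are invented for the example; the point is only the data flow, in which no instrument ever appears.

```python
import numpy as np

def model_a_surface_flux(n_cells: int, seed: int = 1) -> np.ndarray:
    """Toy 'model A': generate a synthetic surface flux field (W/m^2)."""
    rng = np.random.default_rng(seed)
    return 100.0 + 10.0 * rng.standard_normal(n_cells)

def model_b_temperature_response(flux: np.ndarray) -> np.ndarray:
    """Toy 'model B': ingest the synthetic flux as if it were observed forcing."""
    sensitivity = 0.01  # deg C per W/m^2 -- an arbitrary illustrative number
    return sensitivity * (flux - flux.mean())

# No instrument is involved at any point in this chain:
synthetic_flux = model_a_surface_flux(n_cells=1000)    # model output...
temps = model_b_temperature_response(synthetic_flux)   # ...treated as 'data'
print(temps[:3])
```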

The IPCC has frozen progress in climate science since its inception in 1990 by deliberately turning the focus onto human causes. They then contradicted the scientific method by setting out to prove, rather than disprove, the Anthropogenic Global Warming (AGW) hypothesis. This required actions that created several dilemmas.

· National Weather Agencies appointed the IPCC members and controlled most climate funding in their countries.

· This essentially put the IPCC beyond political or normal scientific control. Skeptics were eliminated.

· Most funding was directed to theories and research that ‘proved’ the AGW hypothesis.

· In many cases, money was diverted from data collection as the political focus intensified. Here are comments by Ken Green from an investigation into what was happening at Environment Canada.

Properly supporting the contention of scientific dishonesty has been difficult, however, due to the confidential nature of much of the government’s internal communications and the obvious reluctance of civil servants to speak out. However, as a result of a recent Access to Information request, scientific dishonesty and “conclusion rigging” has been discovered in one of Canada’s largest ever science and technology projects – the effort to “manage” climate change and the science that supposedly “supports” the Kyoto Accord. This is revealed by the following analysis of the attached contract report by the consulting company, “The Impact Group”, and related internal communications by Environment Canada’s (EC) Meteorological Service of Canada (MSC) and others on the department’s “ADM Committee on Climate Change Science.”

· Weather stations were closed across the world, ostensibly to be replaced by satellite data. NASA GISS graphed these changes, showing two distinct declines in the 1960s and the 1990s.

[Figure: NASA GISS graph of weather station counts, showing declines in the 1960s and 1990s]

· The problem is even greater as data is lost, records terminated or truncated, and proxy reconstructions perverted by rewriting history for a political agenda.

I heard the comments about synthetic data following my involvement in Australia with Senator Malcolm Roberts and Tony Heller. I went to Australia to support Senator Roberts’ demand for empirical data as proof of the AGW claims made by the bureaucrats of the Commonwealth Scientific and Industrial Research Organization (CSIRO), the organization that advises the government on climate change. We sought actual data with accompanying evidence of cause and effect, not computer model generations. They responded with a Report that provided nothing but the computer-generated claims of the Intergovernmental Panel on Climate Change (IPCC). But that was no surprise, because they were using data provided for them by IPCC member the Australian Bureau of Meteorology.

But it isn’t just about empirical data and evidence of AGW. The amount of real, directly measured data used to create the computer models and their policy-directing evidence is completely inadequate. The synthetic data the person spoke about was created in a computer model of one part of the vast atmospheric/ocean system and then input as real data into a larger model.

One of the biggest deliberate deceptions is the difference between the certainties of the IPCC Summary for Policymakers (SPM) and the frightening list of inadequacies in the Physical Science Report of Working Group I. The following quotes are directly from that Report, but very few people ever read them. This is because the SPM is released to great fanfare months before the Science report is published. As IPCC expert reviewer David Wojick wrote

Glaring omissions are only glaring to experts, so the “policymakers”—including the press and the public—who read the SPM will not realize they are being told only one side of a story. But the scientists who drafted the SPM know the truth, as revealed by the sometimes artful way they conceal it.

What is systematically omitted from the SPM are precisely the uncertainties and positive counter evidence that might negate the human interference theory. Instead of assessing these objections, the Summary confidently asserts just those findings that support its case. In short, this is advocacy, not assessment.

The Assessment Report 5 (AR5) SPM claims

Anthropogenic greenhouse gas emissions have increased since the pre-industrial era, driven largely by economic and population growth, and are now higher than ever. This has led to atmospheric concentrations of carbon dioxide, methane and nitrous oxide that are unprecedented in at least the last 800,000 years. Their effects, together with those of other anthropogenic drivers, have been detected throughout the climate system and are extremely likely to have been the dominant cause of the observed warming since the mid-20th century.

The designation “extremely likely” is 95-100%. Consider that assessment in the context of the following data limitations from the Science Report. They begin by acknowledging the serious limitations in a general statement.

Uncertainty in Observational Records

The vast majority of historical (and modern) weather observations were not made explicitly for climate monitoring purposes. Measurements have changed in nature as demands on the data, observing practices and technologies have evolved. These changes almost always alter the characteristics of observational records, changing their mean, their variability or both, such that it is necessary to process the raw measurements before they can be considered useful for assessing the true climate evolution. This is true of all observing techniques that measure physical atmospheric quantities. The uncertainty in observational records encompasses instrumental/recording errors, effects of representation (e.g., exposure, observing frequency or timing), as well as effects due to physical changes in the instrumentation (such as station relocations or new satellites). All further processing steps (transmission, storage, gridding, interpolating, averaging) also have their own particular uncertainties. Because there is no unique, unambiguous, way to identify and account for non-climatic artefacts in the vast majority of records, there must be a degree of uncertainty as to how the climate system has changed. The only exceptions are certain atmospheric composition and flux measurements whose measurements and uncertainties are rigorously tied through an unbroken chain to internationally recognized absolute measurement standards (e.g., the CO2 record at Mauna Loa; Keeling et al., 1976a).

Uncertainty in data set production can result either from the choice of parameters within a particular analytical framework—parametric uncertainty, or from the choice of overall analytical framework—structural uncertainty. Structural uncertainty is best estimated by having multiple independent groups assess the same data using distinct approaches. More analyses assessed now than in AR4 include published estimates of parametric or structural uncertainty. It is important to note that the literature includes a very broad range of approaches. Great care has been taken in comparing the published uncertainty ranges as they almost always do not constitute a like-for-like comparison. In general, studies that account for multiple potential error sources in a rigorous manner yield larger uncertainty ranges. This yields an apparent paradox in interpretation as one might think that smaller uncertainty ranges should indicate a better product. However, in many cases this would be an incorrect inference as the smaller uncertainty range may instead reflect that the published estimate considered only a subset of the plausible sources of uncertainty. Within the time series figures, where this issue would be most acute, such parametric uncertainty estimates are therefore not generally included. Consistent with AR4, HadCRUT4 uncertainties in GMST are included in Figure 2.19, which in addition includes structural uncertainties in GMST.

To conclude, the vast majority of the raw observations used to monitor the state of the climate contain residual non-climatic influences. Removal of these influences cannot be done definitively and neither can the uncertainties be unambiguously assessed. Therefore, care is required in interpreting both data products and their stated uncertainty estimates. Confidence can be built from: redundancy in efforts to create products; data set heritage; and cross-comparisons of variables that would be expected to co-vary for physical reasons, such as LSATs and SSTs around coastlines. Finally, trends are often quoted as a way to synthesize the data into a single number.

Why isn’t this placed at the front of the SPM?

The following quotes are directly from AR5 and were selected because they acknowledge the data problems. The Report uses various terms that indicate its measure of the availability of the evidence – that is, the amount, extent, and quality – while a second measure indicates confidence in the knowledge available for predictions. I have underlined and made bold their assessments, inserted percentages where appropriate, and commented on the inadequate, misleading analysis and phraseology.

In this Report, the following summary terms are used to describe the available evidence: limited, medium, or robust; and for the degree of agreement: low, medium, or high. A level of confidence is expressed using five qualifiers: very low, low, medium, high, and very high, and typeset in italics, e.g., medium confidence. For a given evidence and agreement statement, different confidence levels can be assigned, but increasing levels of evidence and degrees of agreement are correlated with increasing confidence (see Section 1.4 and Box TS.1 for more details).

In this Report, the following terms have been used to indicate the assessed likelihood of an outcome or a result: Virtually certain 99–100% probability, Very likely 90–100%, Likely 66–100%, About as likely as not 33–66%, Unlikely 0–33%, Very unlikely 0–10%, Exceptionally unlikely 0–1%. Additional terms (Extremely likely: 95–100%, More likely than not >50–100%, and Extremely unlikely 0–5%) may also be used when appropriate. Assessed likelihood is typeset in italics, e.g., very likely (see Section 1.4 and Box TS.1 for more details).
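For reference while reading the quotes that follow, the likelihood vocabulary just quoted can be written as a simple lookup table of probability ranges. This is a minimal sketch using only the ranges stated above; the function name is ours.

```python
# The IPCC likelihood vocabulary quoted above, as (lower, upper) probability
# ranges expressed as fractions.
LIKELIHOOD = {
    "virtually certain":      (0.99, 1.00),
    "extremely likely":       (0.95, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "more likely than not":   (0.50, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "extremely unlikely":     (0.00, 0.05),
    "exceptionally unlikely": (0.00, 0.01),
}

def likelihood_range(term: str) -> tuple[float, float]:
    """Return the probability range the IPCC attaches to a likelihood term."""
    return LIKELIHOOD[term.lower()]

print(likelihood_range("extremely likely"))   # (0.95, 1.0)
```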

Because of large variability and relatively short data records, confidence in stratospheric H2O vapour trends is low. (This is important because ice crystals, visible in the form of Noctilucent and Polar Stratospheric Clouds, are especially important in the levels of ozone, which is likely why they have problems in the next item.)

Confidence is medium in large-scale increases of tropospheric ozone across the Northern Hemisphere (NH) since the 1970s.

Confidence is low in ozone changes across the Southern Hemisphere (SH) owing to limited measurements. (The public think we are fully informed on ozone and that the Montreal Protocol resolved all the issues.)

Satellite records of top of the atmosphere radiation fluxes have been substantially extended since AR4, and it is unlikely (0-33%) that significant trends exist in global and tropical radiation budgets since 2000.

Surface solar radiation likely (66-100%) underwent widespread decadal changes after 1950, with decreases (‘dimming’) until the 1980s and subsequent increases (‘brightening’) observed at many land-based sites. There is medium confidence for increasing downward thermal and net radiation at land-based observation sites since the early 1990s.

While trends of cloud cover are consistent between independent data sets in certain regions, substantial ambiguity and therefore low confidence remains in the observations of global-scale cloud variability and trends.

It is likely (66-100%) that since about 1950 the number of heavy precipitation events over land has increased in more regions than it has decreased. (A meaningless comment).

Confidence is low for a global-scale observed trend in drought or dryness (lack of rainfall) since the middle of the 20th century, owing to lack of direct observations, methodological uncertainties and geographical inconsistencies in the trends. (The precipitation data is far more limited in every way than temperature data and it is inadequate. Despite this they tell the public the likelihood of drought is significantly increased because of AGW).

Confidence remains low for long-term (centennial) changes in tropical cyclone activity, after accounting for past changes in observing capabilities. (Does this mean it is useless before accounting for past changes?)

Confidence in large-scale trends in storminess or storminess proxies over the last century is low owing to inconsistencies between studies or lack of long-term data in some parts of the world (particularly in the SH). (It is not just the SH, although that is half the planet).

Because of insufficient studies and data quality issues confidence is also low for trends in small-scale severe weather events such as hail or thunderstorms. (Storms referred to here and above are a major mechanism for transfer of greenhouse gases and latent heat throughout the atmosphere.)

It is likely (66-100%) that circulation features have moved poleward since the 1970s, involving a widening of the tropical belt, a poleward shift of storm tracks and jet streams, and a contraction of the northern polar vortex. (It is probably at the low end of “likely” because there are insufficient surface stations to determine extent).

Large variability on inter-annual to decadal time scales hampers robust conclusions on long-term changes in atmospheric circulation in many instances. (What does “robust” mean? The atmosphere, weather, and climate are all about circulation. This comment is so broad and vague that it implies they don’t know what is going on.)

Confidence in the existence of long-term changes in remaining aspects of the global circulation is low owing to observational limitations or limited understanding. (This combines with the last to confirm they don’t know.)

Uncertainties in air–sea heat flux data sets are too large (How large is too large?) to allow detection of the change in global mean net air–sea heat flux, of the order of 0.5 W m⁻² since 1971, required for consistency with the observed ocean heat content increase. The products cannot yet be reliably used to directly identify trends in the regional or global distribution of evaporation or precipitation over the oceans on the time scale of the observed salinity changes since 1950. (These are massive engines of latent heat transfer and alone are sufficient to say that any results of their work are meaningless.)

Basin-scale wind stress trends at decadal to centennial time scales have been observed in the North Atlantic, Tropical Pacific and Southern Ocean with low to medium confidence. (Wind is the almost forgotten variable and with the least data, yet essential to accurate measurements of evaporation and energy transfer.)

Observed changes in water mass properties likely (66-100%) reflect the combined effect of long-term trends in surface forcing (e.g., warming of the surface ocean and changes in E – P) and inter- annual-to-multi-decadal variability related to climate modes. (Water mass properties determine the sea level, so this comment makes a mockery of the claims about AGW sea level increase.)

It is likely (66-100%) that the annual period of surface melt on Arctic perennial sea ice lengthened by 5.7 ± 0.9 days per decade over the period 1979–2012. (The sea ice data was not reliable until 1981, and 30 years is a completely inadequate sample size for any climate-related variable, despite the WMO-created 30-year Normal.)
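As an aside on how a figure like “5.7 ± 0.9 days per decade” is typically produced, the sketch below fits an ordinary least-squares trend to a synthetic melt-season series and reports the slope with its standard error. The data are invented placeholders, not the actual satellite record, and a least-squares fit is only one common way such trends are estimated.

```python
import numpy as np

# Synthetic melt-season lengths (days) for 1979-2012 -- placeholders only,
# built with a 0.57 day/year trend plus noise, not the actual satellite data.
years = np.arange(1979, 2013)
rng = np.random.default_rng(2)
melt_days = 100.0 + 0.57 * (years - years[0]) + 3.0 * rng.standard_normal(years.size)

# Ordinary least-squares fit y = a + b*x, with the parameter covariance matrix.
(slope, intercept), cov = np.polyfit(years, melt_days, deg=1, cov=True)
trend_per_decade = 10.0 * slope
stderr_per_decade = 10.0 * np.sqrt(cov[0, 0])

print(f"trend: {trend_per_decade:.1f} +/- {stderr_per_decade:.1f} days per decade")
```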

After almost one decade of stable CH4 concentrations since the late 1990s, atmospheric measurements have shown renewed CH4 concentrations growth since 2007. The drivers of this renewed growth are still debated. (Apparently not. The media is full of stories about the growing threat of methane from human sources.)

Many of the cloudiness and humidity changes simulated by climate models in warmer climates are now understood as responses to large-scale circulation changes that do not appear to depend strongly on sub-grid scale model processes, increasing confidence in these changes. (But they just told us they don’t understand large scale circulation changes.) For example, multiple lines of evidence now indicate positive feedback contributions from circulation-driven changes in both the height of high clouds and the latitudinal distribution of clouds (medium to high confidence). However, some aspects of the overall cloud response vary substantially among models, and these appear to depend strongly on sub-grid scale processes in which there is less confidence. (How much less? In fact, they don’t know.)

Climate-relevant aerosol processes are better understood, and climate-relevant aerosol properties better observed, than at the time of AR4 (But they were not well understood or observed then, so this is a relative and meaningless statement.). However, the representation of relevant processes varies greatly in global aerosol and climate models and it remains unclear what level of sophistication is required to model their effect on climate. Globally, between 20 and 40% of aerosol optical depth (medium confidence) and between one quarter and two thirds of cloud condensation nucleus concentrations (low confidence) are of anthropogenic origin. (This is akin to saying 20/20 vision is like seeing 20% of the things 20% of the time. Look at the next quote.)

The quantification of cloud and convective effects in models, and of aerosol–cloud interactions, continues to be a challenge. Climate models are incorporating more of the relevant processes than at the time of AR4, but confidence in the representation of these processes remains weak. (Another relative statement that is meaningless. It is like saying I was useless before but I think I am slightly better now.)

Aerosol–climate feedbacks occur mainly through changes in the source strength of natural aerosols or changes in the sink efficiency of natural and anthropogenic aerosols; a limited number of modelling studies have bracketed the feedback parameter within ±0.2 W m⁻² °C⁻¹ with low confidence. (What is a limited number? How many would make it meaningful?)

Climate and Earth System models are based on physical principles, and they reproduce many important aspects of observed climate (Many, but not all, aspects.). Both aspects (This is misleading. They are talking about their aspects, not those of the observed climate.) contribute to our confidence in the models’ suitability for their application in detection and attribution studies (Chapter 10) and for quantitative future predictions and projections (Chapters 11 to 14). In general, there is no direct means of translating quantitative measures of past performance into confident statements about fidelity of future climate projections. (This is a purely political, meaningless statement. Are they saying that their past performance is no measure or predictor of their future performance? So please let us keep stumbling forward, wasting billions of dollars, when the entire problem is insoluble because there is no data.)

The projected change in global mean surface air temperature will likely (66-100%) be in the range 0.3 to 0.7°C (medium confidence) (Does this mean they may be 66% certain they are 50% correct?)

Climate models have continued to be developed and improved since the AR4, and many models have been extended into Earth System models by including the representation of biogeochemical cycles important to climate change. (They have not improved since AR4). These models allow for policy-relevant calculations such as the carbon dioxide (CO2) emissions compatible with a specified climate stabilization target. (They do not allow for “policy-relevant calculations” and they have not improved because they have failed in every projection made.)

The ability of climate models to simulate surface temperature has improved in many, though not all, important aspects relative to the generation of models assessed in the AR4. (How many models improved and how many are needed to be meaningful? This confirms what is wrong with the preceding statement.)

The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature. (There is virtually no improvement. This is an enormous understatement.)

The simulation of clouds in climate models remains challenging. There is very high confidence that uncertainties in cloud processes explain much of the spread in modelled climate sensitivity. (A classic example of Orwellian double talk. They are very certain that they are uncertain.)

Models are able to capture the general characteristics of storm tracks and extratropical cyclones, and there is some evidence of improvement since the AR4. Storm track biases in the North Atlantic have improved slightly, but models still produce a storm track that is too zonal and underestimate cyclone intensity. (Well, which is it? Have they improved or not? I assume they have not. Otherwise, they would have said so.)

Many important modes of climate variability and intra-seasonal to seasonal phenomena are reproduced by models, with some improvements evident since the AR4. (That meaningless relative measure, improving on virtually nothing is still virtually nothing.) The statistics of the global monsoon, the North Atlantic Oscillation, the El Niño-Southern Oscillation (ENSO), the Indian Ocean Dipole and the Quasi-Biennial Oscillation are simulated well by several models, although this assessment is tempered by the limited scope of analysis published so far, or by limited observations. (More doublespeak: they have improved, except where they haven’t.)

Promoters of AGW and members of the IPCC lead the public to believe that they have a vast amount of data to support their analysis, and claim that they are 95 percent certain that human CO2 is causing global warming. They also promote the notion that 97 percent of scientists agree with their conclusion. They promote it by specific statements, by failing to investigate the accuracy of the data, or by failing to speak out when they know it is incorrect.

Most people, probably at least 97 percent, have never read the SPM, including scientists, politicians, and the media. Probably 99 percent of people have never read the Science Report. How many of them would change their minds if they considered the information shown above? Maybe that is too much. Maybe all that is necessary is to learn that every projection the IPCC ever made was wrong.

This brief and limited look at what the IPCC are saying gives credence, on its own, to Emeritus Professor Hal Lewis’s charge in his October 2010 resignation letter from the American Physical Society:

“It (the global warming scam) is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist.”

It is a pseudoscientific fraud because there was no data as the basis for any of their work. The scientists, determined to achieve the objective of the IPCC – that is, to prove ‘scientifically’ that human CO2 was causing global warming – had to modify or eliminate the inadequate real data and create false data. Even if, under the new regime, the fraud is exposed and proper science and scientific methods are applied, it will take a very long time to gather the minimum data required. Until that occurs, it is all just hand-waving. However, there is enough evidence to know that the precautionary principle is not applicable. The little evidence we have indicates we are safer doing nothing.

Ref.: https://wattsupwiththat.com/2017/01/29/ipcc-objectives-and-methods-mandated-elimination-reduction-manipulation-of-inadequate-real-data-and-creation-of-false-data/

 


Why does CO2 lag behind temperature?

71% of the Earth is covered by ocean; water is about 1,000 times denser than air, and the mass of the oceans is roughly 360 times that of the atmosphere. Small temperature changes in the oceans don’t just modulate air temperature; they also affect the CO2 level, according to Henry’s Law.
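For the curious, Henry’s Law can be made quantitative with a short calculation. The sketch below uses standard textbook values for CO2 in water – a Henry constant of about 0.034 mol/(L·atm) at 25 °C and a van ’t Hoff temperature coefficient of roughly 2400 K – to show that warmer water holds less dissolved CO2. The script and its output format are ours; the constants are ordinary reference figures.

```python
import math

# Textbook reference values for CO2 dissolving in water:
KH_298 = 0.034        # Henry constant, mol/(L*atm), at 298.15 K (25 C)
VANT_HOFF_C = 2400.0  # temperature coefficient, K

def henry_constant(temp_k: float) -> float:
    """Henry's law constant for CO2 in water at the given temperature (K)."""
    return KH_298 * math.exp(VANT_HOFF_C * (1.0 / temp_k - 1.0 / 298.15))

p_co2 = 400e-6  # atm -- approximate atmospheric CO2 partial pressure

for t_celsius in (5, 15, 25):
    dissolved = henry_constant(t_celsius + 273.15) * p_co2  # mol/L
    print(f"{t_celsius:>2} C: {dissolved:.2e} mol CO2 per litre")

# The equilibrium concentration falls as the water warms, i.e. a warming
# ocean holds less CO2 and outgasses it -- the mechanism described above.
```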

The reason it is called a “Law” is that it has been “proven”!

“.. scientific laws describe phenomena that the scientific community has found to be provably true ..”

That means the graph proves CO2 does not control temperature, which again proves that (Man Made) Global Warming – now called “Climate Change” due to the lack of … warming – is, once again, debunked!