Half the results published in peer-reviewed scientific journals are probably wrong. John Ioannidis, now a professor of medicine at Stanford, made headlines with that claim in 2005. Since then, researchers have confirmed his skepticism by trying—and often failing—to reproduce many influential journal articles. Slowly, scientists are internalizing the lessons of this irreproducibility crisis. But what about government, which has been making policy for generations without confirming that the science behind it is valid?
The biggest newsmakers in the crisis have involved psychology. Consider three findings: Striking a “power pose” can improve a person’s hormone balance and increase tolerance for risk. Invoking a negative stereotype, such as by telling black test-takers that an exam measures intelligence, can measurably degrade performance. Playing a sorting game that involves quickly pairing faces (black or white) with good and bad words (“happy” or “death”) can reveal “implicit bias” and predict discrimination.
All three of these results received massive media attention, but independent researchers haven’t been able to reproduce any of them properly. It seems as if there’s no end of “scientific truths” that just aren’t so. For a 2015 article in Science, independent researchers tried to replicate 100 prominent psychology studies and succeeded with only 39% of them.
Further from the spotlight is a lot of equally flawed research that is often more consequential. In 2012 the biotechnology firm Amgen tried to reproduce 53 “landmark” studies in hematology and oncology. The company could only replicate six. Are doctors basing serious decisions about medical treatment on the rest? Consider the financial costs, too. A 2015 study estimated that American researchers spend $28 billion a year on irreproducible preclinical research.
The chief cause of irreproducibility may be that scientists, whether wittingly or not, are fishing fake statistical significance out of noisy data. If a researcher looks long enough, he can turn any fluke correlation into a seemingly positive result. But other factors compound the problem: Scientists can make arbitrary decisions about research techniques, even changing procedures partway through an experiment. They are susceptible to groupthink and aren’t as skeptical of results that fit their biases. Negative results typically go into the file drawer. Exciting new findings are a route to tenure and fame, and there’s little reward for replication studies.
American science has begun to face up to these problems. The National Institutes of Health has strengthened its reproducibility standards. Scientific journals have reduced the incentives and opportunities to publish bad research. Private philanthropies have put serious money behind groups like the Meta-Research Innovation Center at Stanford, led in part by Dr. Ioannidis, and the Center for Open Science in Charlottesville, Va.
There’s more to be done, and the National Association of Scholars has made some recommendations. Before conducting a study, scientists should “preregister” their research protocols by posting the intended methodology online, which eliminates opportunities for changing the rules in the middle of the experiment. High schools, colleges and graduate schools need to improve science education, particularly in statistics. Universities and journals should create incentives for researchers to publish negative results. Scientific associations should seek to disrupt disciplinary groupthink by putting their favored ideas up for review by experts in other sciences.
A deeper issue is that the irreproducibility crisis has remained largely invisible to the general public and policy makers. That’s a problem given how often the government relies on supposed scientific findings to inform its decisions. Every year the U.S. adds more laws and regulations that could be based on nothing more than statistical manipulations.
All government agencies should review the scientific justifications for their policies and regulations to ensure they meet strict reproducibility standards. The economics research that steers decisions at the Federal Reserve and the Treasury Department needs to be rechecked. The social psychology that informs education policy could be entirely irreproducible. The whole discipline of climate science is a farrago of unreliable statistics, arbitrary research techniques and politicized groupthink.
The process of policy-making also needs to be overhauled. Federal agencies that give out research grants should immediately adopt the NIH’s new standards for funding reproducible research. Congress should pass a law—call it the Reproducible Science Reform Act—to ensure that all future regulations are based on similar high standards.
Each scientific discipline needs to accept responsibility for its share of the irreproducibility crisis and incorporate strict standards into its procedures. The goal must be to reinvigorate the tradition of scientific inquiry. What the crisis teaches is that the scientific spirit lies with those who constantly test for that fundamental requirement of truth—that a result can be reproduced.
Thirty Years Of The James Hansen Clown Show
It has been thirty years since CO2 hit 350 PPM and NASA’s James Hansen warned that the Midwest was going to burn up and dry up.
In the 30 years since Hansen predicted heat and drought for the Midwest, the region has had above-normal precipitation almost every year.
Maximum temperatures and the occurrence of heatwaves in the Midwest have plummeted to record lows.
Hansen predicted that global warming would lower the water level in the Great Lakes.
Great Lakes water levels are near record highs.
The Midwest is having its coldest April on record.
Michigan gives James Hansen the big thumbs up for promoting some of the worst junk science ever dreamed up.
Climate prophet Hansen predicted that the Arctic would be ice-free and Lower Manhattan would be underwater by this year.
James Hansen has an incredible record of misprediction and junk science, which is why Democrats love him so much. To his credit, though, he did get one thing right.
From The Hockey Schtick (2015)
Why a new paper does not provide evidence of an increased CO2 greenhouse effect
1. As stevengoddard.com points out in “Junk Science Award For The Evening”:
“Over the decade the authors examined (2000 to 2010), the average level of the gas (CO2) in the atmosphere went up by 22 parts-per-million. And the time series shows a steadily rising trend in its impact, layered on top of the seasonal changes. By the end of that period, the gas was retaining an extra 0.2 Watts for every square meter of the Earth’s surface compared to the start.
Still, it seems worth noting that the continued increase in greenhouse energy retention measured during this time coincides with a period where the Earth’s surface temperatures did not change dramatically. All that energy must have been going somewhere. [i.e. to space] “
The authors started in the 2000 La Nina, and ended at the 2010 El Nino – when troposphere temperatures were half a degree warmer. Then they noticed that there was slightly more downwelling long wave radiation [DWLR], which they blamed on increased absorption from the increase in CO2.
The increase in DWLR was due to the warmer troposphere during the El Nino. Warmer air emits more longwave radiation. The higher concentration of CO2 will also emit more DWLR, but that is not due to increased absorption. I don’t know how scientists can get any more clueless than that.
2. The authors claim CO2 was retaining an extra 0.2 Watts for every square meter of the Earth’s surface compared to the start (over a period of one decade).
Thus even if one believes the IPCC formula and this new paper’s assumptions (including extensive computer modeling in the new paper), the IPCC formula exaggerates CO2 surface radiative forcing by 45% over the observations.
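The simplified IPCC expression at issue here is dF = 5.35 × ln(C/C0) W/m². The endpoint concentrations in the sketch below (369 → 391 ppm, matching the quoted 22 ppm rise over 2000–2010) are assumptions for illustration only; the post does not state which values it used, and the exact percentage by which the formula overshoots the observed 0.2 W/m² depends on those endpoints.

```python
import math

ALPHA = 5.35  # W/m^2, coefficient in the simplified IPCC CO2 forcing expression

def co2_forcing(c_ppm: float, c0_ppm: float) -> float:
    """Simplified IPCC expression: dF = ALPHA * ln(C / C0), in W/m^2."""
    return ALPHA * math.log(c_ppm / c0_ppm)

# Assumed endpoints, for illustration only: a 22 ppm rise over 2000-2010.
df_formula = co2_forcing(391.0, 369.0)
df_observed = 0.2  # W/m^2, the measured surface forcing quoted above
print(f"formula: {df_formula:.2f} W/m^2 vs observed: {df_observed:.1f} W/m^2")
```

With these assumed endpoints the formula gives roughly 0.31 W/m² against the observed 0.2 W/m², which is the kind of comparison behind the 45% figure, though the precise number moves with the concentrations chosen.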
3. The peak emission spectrum of CO2 is at 15 microns, which by Wien’s displacement law is equivalent to a blackbody radiating at -80C. Per the second law of thermodynamics, a low temperature/frequency/energy body at -80C cannot warm a higher temperature/frequency/energy body at 15C (Earth).
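The -80C figure follows from Wien’s displacement law, lambda_max × T = b. A quick arithmetic check, using the standard value of Wien’s constant:

```python
# Wien's displacement law: lambda_max * T = b.
B_WIEN_UM_K = 2897.8  # Wien's displacement constant, in micrometre-kelvins

def blackbody_temp_for_peak(wavelength_um: float) -> float:
    """Temperature (K) of a blackbody whose emission peaks at wavelength_um."""
    return B_WIEN_UM_K / wavelength_um

t_k = blackbody_temp_for_peak(15.0)  # CO2's 15-micron emission band
print(f"{t_k:.1f} K = {t_k - 273.15:.1f} C")  # -> 193.2 K = -80.0 C
```

So a blackbody peaking at 15 microns sits near 193 K, i.e. about -80C, which is the number the point above relies on.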
4. Rather, the entire 33K greenhouse effect is entirely explained by the Maxwell/Carnot/Clausius atmospheric mass/gravity/pressure theory and the ‘greenhouse equation.’ Increased CO2 instead facilitates loss of outgoing IR radiation to space, as has been observed by an increase in OLR (Outgoing Longwave Radiation) over the past 60+ years, opposite to the predictions of the alternative radiative forcing greenhouse theory.
The claim that the warming of 2000-2010 is from CO2 confuses cause with effect. Warming of the atmosphere due to internal variability, ocean oscillations, cloud-cover changes, solar amplification mechanisms, etc. secondarily warms the CO2 in the atmosphere, increasing the 15-micron IR radiation observed from increased levels of CO2.
UPDATE: Rog Tallbloke has even more fun with the above study than I did. He points out that in Alaska over the study period, the average temperature actually FELL by four degrees. So rising CO2 must cause cooling, right?
Another point I did not mention because I saw no point in beating a dead horse concerns the graph below. It appeared with the original story.
It shows two nicely matching curves, does it not? But what are the quantities being graphed? One is CO2 but the other is NOT temperature. It is a theoretically derived construct called forcing. Not so impressive.
Junk science, and conduct by criminal activists pretending to do science, have consequences, and not only for your wallet. For example: