by Richard Peachey

Recently I gave a presentation on how our society’s “faith” in science is being badly shaken.

My presentation’s basic point was this:

If experimental science, being done right now, can hardly be relied upon . . . then why should we feel compelled to trust claims of theory-driven scientists regarding what may have happened in the invisible, inaccessible, distant past?

Of course, I’m not saying that scientists are generally incompetent or crooked. I’m a science graduate myself, and I love good science. But these articles certainly raise significant issues of concern!

Below is a selection of articles worth perusing, from my (thick!) file titled “Questionable Science.”

(1) “How Science Goes Wrong” (The Economist, Oct. 19, 2013, p. 13)

(2) “Trouble at the lab” (The Economist, Oct. 19, 2013, pp. 26-30) [If this link doesn’t work for you, just google “Trouble at the lab”.]

NOTE: If you like the above articles, save them as PDF or web archive files. The Economist apparently allows only a limited number of free visits to its website.

(3) The front cover of the above issue of The Economist: [cover image]

(4) John Ioannidis, “An Epidemic of False Claims” (Scientific American, Vol. 304 No. 6, June 2011, p. 16)

(5) Jennifer Crocker and M. Lynne Cooper, “Addressing Scientific Fraud” (Science, Vol. 334, Dec. 2, 2011, p. 1182)

(6) (Editorial), “Through the gaps” (Nature, Vol. 489, Sept. 20, 2012, p. 335) [Sub-heading: “A 20-year campaign of scientific fraud says as much about the research community as it does about the perpetrator. The system that allowed such deception to continue must be reformed.”]

(7) Brian C. Martinson, Melissa S. Anderson and Raymond de Vries, “Scientists behaving badly” (Nature, Vol. 435, June 9, 2005, pp. 737f.)

(8) Jill Neimark, “Line of Attack” (Science, Vol. 347 No. 6225, Feb. 27, 2015, pp. 938-940) [Sub-heading: “Christopher Korch is adding up the costs of contaminated cell lines.”]

(9) (Announcement), “Time to tackle cells’ mistaken identity” (Nature, Vol. 520, Apr. 16, 2015, p. 264)

(10) Tina Hesman Saey, “12 reasons research goes wrong” (Science News, Vol. 187 No. 2, Jan. 24, 2015, pp. 24f.) [This is a worthwhile sidebar from a longer article which is not accessible for free.]

(11) Jim Giles, “Breeding cheats” (Nature, Vol. 445, Jan. 18, 2007, pp. 242f.) [This link also provides several other articles from a Nature “Misconduct Special.”]

(12) Colin Macilwain, “Scientific Misconduct: More Cops, More Robbers?” (Cell, Vol. 149, June 22, 2012, pp. 1417-1419)

(13) Sandra L. Titus, James A. Wells and Lawrence J. Rhoades, “Repairing research integrity” (Nature, Vol. 453, June 19, 2008, pp. 980-982)

(14) Richard Stone and Barbara Jasny, “Scientific Discourse: Buckling at the Seams” (Science, Vol. 342, Oct. 4, 2013, p. 57) [From this webpage, you can access several other articles in a Science special section on “Communication in Science: Pressures and Predators,” including the next article below.]

(15) John Bohannon, “Who’s Afraid of Peer Review?” (Science, Vol. 342, Oct. 4, 2013, pp. 60-65)

(16) (Editorial), “Con Men in Lab Coats” (Scientific American, Vol. 294 No. 3, Mar. 2006, p. 10)

(17) Michael Balter, “Data in Key Papers Cannot Be Reproduced” (Science, Vol. 283, Mar. 26, 1999, pp. 1987, 1989)

(18) Jocelyn Kaiser, “Forty-Four Researchers Broke NIH Consulting Rules” (Science, Vol. 309, July 22, 2005, p. 546) [NOTE: You’ll need to scroll down to find this title nested within a collection of news items.]

(19) Monya Baker, “Blame it on the Antibodies” (Nature, Vol. 521, May 21, 2015, pp. 274-276)

(20) John Ioannidis, “Why Most Published Research Findings Are False” (PLoS Medicine, Vol. 2, No. 8, Aug. 30, 2005)

(21) Bruce Bower, “Psych studies fail replication test” (Science News, Vol. 188 No. 7, Oct. 3, 2015, p. 8)

(22) John Bohannon, “Many psychology papers fail replication test” (Science, Vol. 349 No. 6251, Aug. 28, 2015, pp. 910f.) [NOTE: The link for this article appears to be fickle, so I’ve provided the full text below.]


Many psychology papers fail replication test

by John Bohannon
Science, 28 Aug 2015: Vol. 349, Issue 6251, pp. 910-911
DOI: 10.1126/science.349.6251.910


The largest effort yet to replicate psychology studies has yielded both good and bad news. On the down side, of the 100 prominent papers analyzed, only 39% could be replicated unambiguously, as a group of 270 researchers describes on page 943. On the up side, despite the sobering results, the effort seems to have drawn little of the animosity that greeted a similar replication effort last year (Science, 23 May 2014, p. 788). This time around, many of the original authors are praising the replications as a useful addition to their own research.
“This is how science works,” says Joshua Correll, a psychologist at the University of Colorado, Boulder, and one of the authors whose results could not be replicated. “How else will we converge on the truth? Really, the surprising thing is that this kind of systematic attempt at replication is not more common.”
That’s encouraging news to Brian Nosek, a psychologist at the University of Virginia in Charlottesville who led the effort. “I don’t know if replication is ‘entirely ordinary’ yet, but it is certainly more ordinary than it was [a few] years ago,” he says. In that time, major psychology journals have started publishing replications alongside original research. “The change is pretty remarkable.”
The mass replication effort began in 2011 with the goal of putting psychological science on more rigorous experimental footing. The strategy was to replicate a sample of published studies using an approach that Nosek has popularized through the Center for Open Science, a nonprofit he founded in 2013: publish your experimental design first, receive open peer review on it, and only then carry out the experiment and share the results, no matter the outcome. That should reduce the number of papers that report statistically significant results that are actually false positives.
In the Open Science Collaboration, 270 psychologists from around the world signed up to replicate studies; they did not receive any funding. The group selected the studies to be replicated based on the feasibility of the experiment, choosing from those published in 2008 in three journals: Psychological Science, the Journal of Personality and Social Psychology (JPSP), and the Journal of Experimental Psychology: Learning, Memory, and Cognition. Not only were all 100 replications preregistered, but the authors of the original studies were invited to collaborate in the design of the replication.
The results lend support to the idea that scientists and journal editors are biased—consciously or not—in what they publish. For example, even in studies that could be replicated, the size of the effect—a measure of how much of a difference there was between the experiment groups—was on average only half as big as the original studies. The bias could be due to scientists throwing out negative results, for example, or journal editors preferentially accepting papers with bigger effects. Some of the replications even found the opposite effect compared with the original paper. “This very well done study shows that psychology has nothing to be proud of when it comes to replication,” says Charles Gallistel, president of the Association for Psychological Science.
“Their data are sobering and present a clear challenge to the field,” says Lynne Cooper, a psychologist at the University of Missouri, Columbia, who became one of the editors of JPSP in January. Already, she says, the journal is instituting reforms. In order to encourage researchers to test published results, JPSP will publish more replications, Cooper says. The journal is also launching new policies that will encourage “authors, editors, and reviewers … to reexamine and recalibrate basic notions about what constitutes good scholarship,” she says. The details have not yet been announced.
“Some will be tempted to conclude that psychology is a bad apple,” says Charles Carver, a psychologist at the University of Miami who was one of the editors of JPSP in 2008. He insists this is not the case. “This is a problem of science, medical science no less than behavioral science.” Replication efforts in other fields are equally low, says John Ioannidis, a biologist at Stanford University in Palo Alto, California, who suspects the true proportion of psychology papers that are not false positives is “something like 25% … [which] seems to be in the same ballpark as what we see in many biomedical disciplines.”
Like most researchers contacted by Science, Correll says the exercise was “worth the effort,” no matter the outcome. That’s not to say Correll is disavowing his earlier results. In his 2008 JPSP study, his team asked subjects to identify images of weapons while simultaneously showing them images of people of different races. The goal was to test the idea that differences in the reaction times indicate a person’s implicit racial bias. They found that the variation in people’s response times had a nonrandom signature known as 1/f noise that predicted their bias.
But when Etienne LeBel’s lab at the University of Western Ontario in Canada repeated the experiment, they found 1/f noise, but the prediction did not hold. Correll says the failure “does not convince me that my original effects were [a] fluke. I know that other researchers have found similar patterns … and my lab has additional data that supports our basic claim.” He is already working on the follow-up study.