By James D. Agresti
March 29, 2018
In a recent New York Times op-ed, two former EPA officials criticize a Trump administration plan that would require the EPA to reveal the details of studies used to craft environmental regulations. In this piece, Obama-era EPA administrator Gina McCarthy and assistant administrator Janet McCabe claim that:
- current EPA director Scott Pruitt and “some conservative members of Congress are setting up a nonexistent problem in order to prevent the EPA from using the best available science.”
- EPA’s studies “adhere to all professional standards and meet every expectation of the scientific community in terms of peer review and scientific integrity.”
- the process of “peer review ensures that the analytic methodologies underlying studies funded by the agency are sound.”
A broad array of scientific facts and literature proves all of those claims to be false. This has important ramifications, for as explained in the book Molecular Biology and Biotechnology: A Guide for Teachers, “there are risks in misperceiving” environmental risks, because “the experiences or products you avoid because of faulty assumptions and misinformation affect the quality of your life and the lives of those around you.”
Transparency is Essential to Science
Since at least 1994, various scientists, including those on the EPA’s Clean Air Scientific Advisory Committee, have asked the EPA to “make available the primary data” used in studies for “regulatory decisions” that “have multibillion dollar impacts on society” so that “others can validate the analyses.” The EPA has often resisted and refused these requests using the same arguments as McCarthy and McCabe.
Such arguments, however, are contradicted by numerous scholarly works about scientific integrity. In opposition to McCarthy and McCabe’s claim that EPA studies based on undisclosed data “adhere to all professional standards” for “scientific integrity”:
- The Handbook of Social Research Ethics states that:
- “any hindrance to the collection, analysis, or publication of data, such as inaccessible findings from refusal to share data or not publishing a study, should also be corrected for science to fully function.”
- “scientific theories must be testable and precise enough to be capable of falsification,” and “to be so, science, including social science, must be essentially a public endeavor, in which all findings should be published and exposed to scrutiny by the entire scientific community.”
- the Handbook of Data Analysis states that “the techniques of analysis should be sufficiently transparent that other researchers familiar with the area can recognize how the data are being collected and tested, and can replicate the outcomes of the analysis procedure. (Journals are now requesting that authors provide copies of their data files when a paper is published so that other researchers can easily reproduce the analysis and then build on or dispute the conclusions of the paper.)”
- the book Quantifying Research Integrity states that:
- “when data are not available, researchers must either trust past published results, or they must recreate the data as best they can based on descriptions in the published works, which often turn out to be too cryptic.”
- “descriptions are no substitute for the data itself.”
Transparency is especially important when it comes to matters that broadly impact the public and are a matter of dispute. This is the case with the issue that led to this transparency debate, which was the EPA’s decision to widely regulate PM2.5, or tiny particles of dust and other substances that are 2.5 micrometers or smaller. As detailed in a 2017 paper about this subject published by the journal Regulatory Toxicology and Pharmacology:
- “Many published studies are difficult or impossible to reproduce because of lack of access to confidential data sources.”
- “Here we make publically available a dataset containing daily air quality levels, PM2.5 and ozone, daily temperature levels, minimum and maximum and daily maximum relative humidity levels for the eight most populous California air basins, thirteen years, >2M deaths, over 37,000 exposure days.”
- “We were unable to find a consistent and meaningful relationship between air quality and acute death in any of the eight California air basins considered.”
In other words, this 2017 study that uses transparent data finds opposing results to earlier studies that use undisclosed data. This does not mean that the newer study is correct and the others are wrong, but it does highlight the importance of transparency so that scientists can sort out the reasons for the differences.
Peer Review Doesn’t Ensure Sound Science
Perhaps the most hollow of McCarthy and McCabe’s claims is that “peer review ensures” EPA studies “are sound.” This naive notion is belied by reams of facts about peer-reviewed publications and candid statements from people involved with them. In merely the past seven years:
- the journal Nature published a study that attempted to confirm the findings of 53 prominent peer-reviewed papers that present results of lab experiments related to cancer drugs. The scientists were unable to reproduce 94% of these results, despite the fact that “when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors’ direction, occasionally even in the laboratory of the original investigator.”
- the Proceedings of the National Academy of Sciences published a “detailed review” of “2,047 biomedical and life-science research articles” that have been retracted. It found that “21.3% of retractions were attributable to error” and “67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).” The authors also noted that “incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic.”
- BioMed Central announced that it had “identified 43 articles” in its peer-reviewed journals “that were published on the basis of reviews from fabricated reviewers.”
- the journal Tumor Biology retracted more than 100 papers because the editors had “strong reason to believe that the peer review process was compromised.”
- Phil Hurst, a publisher for the Royal Society, wrote that “traditional peer review is confidential, with research papers scrutinized by a small number of anonymous experts. Although publishers are vigilant, this secrecy provides the opportunity for fraud.”
- Austin L. Hughes, a professor of biological sciences at the University of South Carolina, wrote that “the high confidence in funding and peer-review panels should seem misplaced to anyone who has served on these panels and witnessed the extent to which preconceived notions, personal vendettas, and the like can torpedo even the best proposals.”
- The journal PLOS ONE published an analysis of peer-review practices that states:
- “Peer review is the main process by which scientists communicate their work, and is widely regarded as a gatekeeper of the quality of published research. However, its effectiveness remains largely assumed rather than demonstrated.”
- Peer review “has limited tools to safeguard the efficiency of the process.”
- “Reviewers are typically protected by anonymity, and are not rewarded for an accurate and fair job nor held accountable for a sloppy or biased one. Reviewers are thus under little incentive to act in the best interest of science as opposed to their own best interest.”
- “We find that the biggest hazard to the quality of published literature is not selfish rejection of high-quality manuscripts but indifferent acceptance of low-quality ones.”
- Andy Farke, a vertebrate paleontologist and editor for the scientific journals PLOS ONE and PeerJ, wrote, “I have seen errors or editorial/reviewer lapses in pretty much every journal I have read.”
- the journal Nature published an analysis of peer-reviewed papers, conducted by “a group of researchers working on obesity, nutrition and energetics.” They found:
- “In the course of assembling weekly lists of articles in our field, we began noticing more peer-reviewed articles containing what we call substantial or invalidating errors.”
- “After attempting to address more than 25 of these errors with letters to authors or journals, and identifying at least a dozen more, we had to stop—the work took too much of our time.”
- “Our efforts revealed invalidating practices that occur repeatedly … and showed how journals and authors react when faced with mistakes that need correction.”
- Drummond Rennie, former deputy editor of the New England Journal of Medicine and of the Journal of the American Medical Association, affirmed there “are scarcely any bars to eventual publication” in peer-reviewed journals. Emphasizing the point, he added, “There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.”
Given all of the above, McCarthy and McCabe’s blind confidence in peer review is misplaced or dishonest.
The Privacy Farce
McCarthy and McCabe allege that the EPA cannot possibly release the data for some studies used to create regulations, because they “rely on medical records that by law are confidential because of patient privacy policies.” This is a transparent farce, because common practices exist to protect the privacy of such records when study data are released.
As detailed in an academic book about medical research, “Researchers have a responsibility to protect the anonymity of subjects and to maintain the confidentiality of data collected during a study. You can protect anonymity by giving each subject a code number.”
This is a technique in which identifying information about individuals, such as names, complete addresses, and patient numbers, is converted to a meaningless name or number in a consistent manner.
The Boston Globe has reported that some authors of EPA regulatory studies “contend that, even if names and addresses are removed, it would be possible for someone to determine the identities of many subjects based on their age, hometown, and date of death.” This claim also fails to stand up to scrutiny, because the above-cited resources about health data privacy provide ways to protect such information, including “group analysis,” distorting “certain details,” and using a “pseudonymisation algorithm” to ensure that “statistical data linkers have no need to ever see the direct record identifiers.”
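As a rough illustration (not the EPA’s or any particular study’s actual procedure), the code-number technique described above can be sketched in Python. The record fields and patient data here are hypothetical, and the salted-hash approach is one common way to make the codes consistent yet meaningless to outsiders:

```python
import hashlib
import secrets

def pseudonymize(records, fields=("name", "address", "patient_id")):
    """Replace direct identifiers with consistent, meaningless codes.

    The same identifier always maps to the same code, so records can
    still be linked across the dataset, but the salt stays with the
    data holder and is never released, so outsiders cannot reverse
    the codes back to identities.
    """
    salt = secrets.token_hex(16)  # secret value retained by the data holder

    def code(value):
        return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

    released = []
    for rec in records:
        clean = dict(rec)
        for f in fields:
            if f in clean:
                clean[f] = code(str(clean[f]))
        released.append(clean)
    return released

# Hypothetical example records (invented, not real patient data)
patients = [
    {"name": "Jane Doe", "patient_id": "A123", "pm25_exposure": 14.2},
    {"name": "Jane Doe", "patient_id": "A123", "pm25_exposure": 9.8},
]
out = pseudonymize(patients)
# Both records receive the same code, preserving linkage across the
# dataset while removing the direct identifiers.
```

The same idea extends to the mitigations the article mentions, such as grouping records or coarsening dates and locations before release.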
Quoting CBO Out of Context
McCarthy and McCabe also write that a Congressional Budget Office analysis found that requiring data transparency from the EPA would “reduce by half the number of studies it relies on in developing policies and regulations” and “the quality of the agency’s work would be compromised if that work relies on a significantly smaller collection of scientific studies.”
That is a bald misrepresentation of what CBO actually wrote. In the quote below from CBO’s analysis, take special note of how McCarthy and McCabe changed the phrase “could be compromised” to “would be compromised”:
CBO expects that EPA would modify its practices, at least to some extent, and would base its future work on fewer scientific studies, and especially those studies that have easily accessible or transparent data. Any such modification of EPA practices would also have to take into consideration the concern that the quality of the agency’s work could be compromised if that work relies on a significantly smaller collection of scientific studies; we expect that the agency would seek to reduce its reliance on numerous studies without sacrificing the quality of the agency’s covered actions related to research and development.
The Dose Makes the Poison
McCarthy and McCabe also use their Times op-ed to spread this sophomoric fallacy: “When people are exposed to mercury, lead or other air- and waterborne pollutants, we know their health is affected, whether or not EPA is allowed to use the scientific studies that confirm those health impacts.”
In reality, a central principle of toxicology is that the dose makes the poison. As explained in the scientific literature:
Anything is toxic at a high enough dose. … Even water, drunk in very large quantities, may kill people by disrupting the osmotic balance in the body’s cells. … Potatoes make the insecticide, solanine. But to ingest a lethal dose of solanine would require eating 100 pounds (45.4 kg) of potatoes at one sitting. However, certain potato varieties—not on the market—make enough solanine to be toxic to human beings. Generally, potentially toxic substances are found in anything that we eat or drink.
In fact, naturally occurring substances like radon sometimes do more harm than man-made ones.
An Oxford University Press book about “nature’s building blocks” notes that the human body contains “traces of all of the elements that exist on earth.” Scientific studies allow the EPA to determine the levels at which these substances become harmful, and federal law requires the EPA administrator to set standards that “protect the public health” with “an adequate margin of safety….” Hence, the EPA has specified safe levels for even highly toxic substances like lead, carbon monoxide, and sulfur dioxide.
Paracelsus, a Swiss physician who reformed the practice of medicine in the 16th century, said it best: “All substances are poisons, there is none which is not a poison. The dose differentiates a poison and a remedy.”
That a former head of the EPA would write a statement at odds with this core scientific fact, among many others, demonstrates the need for transparency and accountability in government. And the fact that the New York Times would provide a platform for such falsehoods has implications as well.