Should you believe the findings of scientific studies? Amid current concerns about the public’s trust in science, old arguments are resurfacing that can sow confusion.
As a statistician involved in research for many years, I know the care that goes into designing a good study capable of coming up with meaningful results. Understanding what the results of a particular study are and are not saying can help you sift through what you see in the news or on social media.
Let me walk you through the scientific process, from investigation to publication. The research results you hear about crucially depend on the way scientists formulate the questions they’re investigating.
The scientific method and the null hypothesis
Researchers in all kinds of fields use the scientific method to investigate the questions they’re interested in.
First, a scientist formulates a new claim – what’s called a hypothesis. For example: Are certain mutations in the BRCA genes associated with a higher risk of breast cancer? Then they gather data relevant to the hypothesis and decide, based on the data, whether the initial claim holds up.
It’s intuitive to think that this decision is cleanly dichotomous – that the researcher decides the hypothesis is either true or false. But of course, just because you decide something doesn’t mean you’re right.
If the claim is really false but the researcher decides, based on the evidence, that it’s true – a false-positive conclusion – they commit what’s called a Type 1 error. If the claim is really true but the researcher fails to see that – a false-negative conclusion – they commit a Type 2 error.
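One way to keep the two error types straight is a simple decision table:

```
                            Claim actually true        Claim actually false
Researcher says "true"      correct decision           false positive (Type 1 error)
Researcher says "false"     false negative (Type 2)    correct decision
```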
Moreover, in the real world, it gets a little messier. It’s really hard to judge the truth or falsity of a claim based only on what’s observed.
For that reason, most scientists employ what is called the null hypothesis significance testing framework. Here’s how it works: A researcher first states a “null hypothesis,” something that’s contrary to what they want to prove. For instance, in our example the null hypothesis is that BRCA genetic mutations are not associated with increased breast cancer occurrence.
The scientist still gathers data and makes a decision, but the decision is not about whether the null is true. Instead, a researcher decides whether there’s enough evidence to reject the null hypothesis or not.
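As a concrete sketch, here is roughly what that decision looks like in code. This is a minimal Python example with made-up counts chosen purely for illustration – none of these numbers come from real BRCA research:

```python
# A minimal sketch of a null hypothesis significance test.
# The 2x2 counts below are made up for illustration only.
from scipy.stats import chi2_contingency

# Rows: mutation carriers, non-carriers.
# Columns: developed breast cancer, did not.
table = [[40, 160],    # carriers:     40 of 200 developed cancer
         [25, 475]]    # non-carriers: 25 of 500 developed cancer

# Null hypothesis: mutation status and cancer occurrence are unrelated.
chi2, p_value, dof, expected = chi2_contingency(table)

alpha = 0.05  # Type 1 error rate, fixed before looking at the data
if p_value < alpha:
    print(f"p = {p_value:.4g}: reject the null; evidence of an association")
else:
    print(f"p = {p_value:.4g}: fail to reject; the result is inconclusive")
```

Note that the code never declares the null true: the only two outcomes are rejection or an inconclusive result.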
What rejecting the null does and doesn’t mean
Understanding this distinction is crucial. Rejecting the null is equivalent in practice to acting as though it is false – in the example, rejecting the null means claiming that those with some BRCA gene mutations do have a higher risk of breast cancer. Along with other evidence, such as the size of the increased risk, this outcome can justify recommending early breast cancer screening for people with the identified BRCA mutations.
But failing to reject the null hypothesis doesn’t imply that it’s true – in this case, it doesn’t mean there is no association between the BRCA mutations and breast cancer. Rather, such a result is inconclusive; there’s not enough evidence to claim there is an association. A negative result – inadequate evidence to say the null is false – does not necessarily invite the researcher to believe the null is true.
This is because null hypothesis significance testing is set up to control the Type 1 error rate (false positives) at a level the researcher defines in advance, at the cost of having less control over the Type 2 error rate (false negatives).
A researcher’s chances of correctly rejecting the null if there is increased risk can depend on how much data they have, how complex the design of the study is and, most importantly, how large the effect actually is. It’s much easier to reject the null if BRCA mutations truly increase cancer risk many times than it is if the risk is only slightly elevated. A researcher can end up with a result that is not statistically significant but cannot rule out the possibility of an increased risk that is too small for the study to detect.
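A small simulation makes this concrete. The sketch below estimates the probability of rejecting the null – the study’s power – at different true relative risks. The baseline risk, group sizes and effect sizes are assumptions chosen only for illustration:

```python
# Monte Carlo sketch: the chance of correctly rejecting the null
# grows with the true effect size. All parameters are illustrative
# assumptions, not real epidemiology.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def power(baseline_risk, relative_risk, n_per_group,
          n_sims=2000, alpha=0.05):
    rejections = 0
    for _ in range(n_sims):
        # Simulate one study: case counts in each group.
        cases_exposed = rng.binomial(n_per_group, baseline_risk * relative_risk)
        cases_control = rng.binomial(n_per_group, baseline_risk)
        table = [[cases_exposed, n_per_group - cases_exposed],
                 [cases_control, n_per_group - cases_control]]
        _, p, _, _ = chi2_contingency(table)
        rejections += p < alpha
    return rejections / n_sims

for rr in (1.2, 2.0, 5.0):
    print(f"relative risk {rr}: estimated power = {power(0.05, rr, 500):.2f}")
```

Run with these assumptions, the simulation detects a fivefold risk nearly every time, while a 20% risk increase is usually missed – the inconclusive-but-not-exonerating situation described above.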
Which results are more often publicized
Once researchers have their results and want to disseminate their work, they typically do so through peer-reviewed publication. Journal publishers consider a researcher’s write-up of their study, send it out for other scientists to review, and then decide whether to publish it.
In this process, the publishers tend to favor studies that rejected their null hypothesis over those that failed to reject it. This is called positive publication bias.
It is natural for publishers to prefer studies that support new claims since they objectively carry more information than studies that failed to reject their null hypothesis. Journals want to publish something new and noteworthy.
Many sources flag this phenomenon as “bad science,” but is it really? Remember, the framework used to make decisions about scientific claims is intentionally only capable of either rejecting the null hypothesis – in other words, supporting the claim – or alternatively declaring inconclusive results.
The framework isn’t designed to be able to prove the null hypothesis. That said, researchers can reverse the design of a scientific investigation so that a previous claim becomes the null hypothesis in a new study with fresh data.
For instance, rather than a null hypothesis that there is no association between BRCA mutations and breast cancer, the null hypothesis becomes that the increased breast cancer risk from BRCA mutations is equal to or greater than some value the researcher settles on before gathering fresh data.
Rejecting the null this time would mean the increased risk is smaller than that set value, thus supporting a conclusion consistent with what had previously been the null hypothesis. In the example, rejecting the new null means the effect of the BRCA mutations is small enough to be practically negligible for developing breast cancer.
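In code, this reversed design amounts to a one-sided test against a margin fixed in advance. The sketch below uses the standard formula for the standard error of a log relative risk, but the margin of 1.25 and all of the counts are hypothetical assumptions for illustration:

```python
# Sketch of the reversed design: test H0 "relative risk >= margin"
# against H1 "relative risk < margin", with the margin chosen before
# seeing the data. All counts and the margin are hypothetical.
import numpy as np
from scipy.stats import norm

margin = 1.25  # risk increases below 25% treated as practically negligible

# Hypothetical fresh data: cases and totals in each group.
cases_exposed, n_exposed = 28, 500
cases_control, n_control = 25, 500

rr_hat = (cases_exposed / n_exposed) / (cases_control / n_control)
# Standard error of the log relative risk (standard delta-method formula).
se_log_rr = np.sqrt(1 / cases_exposed - 1 / n_exposed
                    + 1 / cases_control - 1 / n_control)

# One-sided z test: z far below 0 is evidence that RR < margin.
z = (np.log(rr_hat) - np.log(margin)) / se_log_rr
p_value = norm.cdf(z)

print(f"estimated RR = {rr_hat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null: risk increase appears smaller than the margin.")
else:
    print("Inconclusive: cannot claim the increase is below the margin.")
```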
It’s critical for a researcher to structure their study so that what they’re interested in proving is aligned with rejection of the null. Publishers are naturally less inclined to consider studies that failed to reject their null hypothesis – not because they do not want to publish studies that support negative statements, but because null hypothesis significance testing cannot actually support negative statements. Failure to reject the null just means your results are inconclusive, and that may seem less newsworthy.
What positive publication bias does
So what does the practice of preferring to publish studies that reject their null hypothesis do?
While we can’t know for certain, we can see how this plays out under different circumstances. You can explore the scenarios in this app I made.
If scientists are acting in good faith, using null hypothesis significance testing appropriately, it turns out that positive publication bias on the part of scientific journal publishers will increase the proportion of true discoveries in their pages much more than it will increase the proportion of false positives.
If editors did not exercise any positive publication bias, journals would be almost entirely full of studies with inconclusive results.
Of course, if scientists are not acting in good faith and are just interested in getting published while ignoring proper use of statistical tests, that can lead to false-positive rates as high as or higher than the rate of true discoveries. But that risk exists even without positive publication bias.
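A rough good-faith simulation illustrates the first scenario. Every parameter below – the share of studies chasing a real effect, the tests’ power – is an assumption for illustration, not an empirical estimate:

```python
# A rough simulation of positive publication bias, assuming scientists
# act in good faith. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_studies = 10_000
share_real = 0.3          # assumed fraction of studies where the null is truly false
alpha, power = 0.05, 0.8  # preset Type 1 error rate; assumed power

# Is each study's effect real?
real = rng.random(n_studies) < share_real

# A study rejects its null with probability `power` when the effect is
# real, and with probability `alpha` (a false positive) when it is not.
rejects = np.where(real, rng.random(n_studies) < power,
                         rng.random(n_studies) < alpha)

true_discoveries = real & rejects
false_positives = ~real & rejects

print(f"studies rejecting their null: {rejects.mean():.2%}")
print(f"true discoveries among 'published' (rejecting) studies: "
      f"{true_discoveries.sum() / rejects.sum():.2%}")
print(f"false positives among 'published' studies: "
      f"{false_positives.sum() / rejects.sum():.2%}")
```

Under these assumptions, only about a quarter of studies reject their null at all, so a journal that published everything would indeed be dominated by inconclusive results, while among the rejecting studies, true discoveries outnumber false positives by a wide margin.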
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Mark Louie Ramos, Penn State
Read more:
- Why a study claiming vaccines cause chronic illness is severely flawed – a biostatistician explains the biases and unsupported conclusions
- What is peer review? The role anonymous experts play in scrutinizing research before it gets published
- The equivalence test: A new way for scientists to tackle so-called negative results
Mark Louie Ramos does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

