In scientific research, negative results can get a bad rap. What exactly do I mean when I say “negative results”? At its core, scientific research is the process of developing a research question, formulating a hypothesis (an educated prediction) about the outcome of that question, then testing the hypothesis through a system of measurement and observation.
For example, a scientist may want to know whether drinking alcohol in the evening leads to louder snoring during sleep that night. In other words, she is proposing a possible relationship between two variables: the independent variable (what the researcher manipulates, i.e., alcohol consumption in the evening) and the dependent variable (what the researcher measures, i.e., the volume of snoring during the following night).
In practice, what the scientist is actually testing is the null hypothesis, which states that there is no relationship between the two variables; in this case, that alcohol does not lead to louder snoring. Let’s suppose that after performing a well-designed study with proper statistical analysis in a group of subjects, the scientist finds that subjects who drank alcohol snored at a significantly higher volume than those who didn’t drink alcohol. She can then reject the null hypothesis and conclude that drinking alcohol in the evening leads to louder snoring that night. If, on the other hand, there is no significant difference between the two groups, then the observed snoring is likely unrelated to drinking alcohol. The scientist would regard this as a negative result.
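To make that statistical comparison concrete, here is a minimal sketch in Python of how the two groups might be compared with a two-sample t-test. The snoring volumes and the 0.05 significance threshold are invented for illustration; they are not data from any actual study.

```python
# Hypothetical illustration of null-hypothesis testing with a two-sample t-test.
# All numbers below are made up for demonstration purposes only.
from scipy import stats

alcohol_group = [62, 58, 65, 70, 61, 67, 64, 69]     # snoring volume (dB), drank alcohol in the evening
no_alcohol_group = [55, 53, 60, 57, 52, 58, 56, 54]  # snoring volume (dB), no alcohol

# Null hypothesis: both groups have the same mean snoring volume.
t_stat, p_value = stats.ttest_ind(alcohol_group, no_alcohol_group)

alpha = 0.05  # conventional significance threshold (an assumption for this sketch)
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null hypothesis (a 'positive' result)")
else:
    print(f"p = {p_value:.3f}: cannot reject the null hypothesis (a 'negative' result)")
```

The key point is that a p-value above the chosen threshold does not prove the null hypothesis is true; it simply means the data do not provide enough evidence to reject it, which is exactly the kind of outcome reported as a negative result.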
Negative results are a disappointing but necessary part of the scientific method, and the lessons they teach can help refine a hypothesis or open other avenues of study. Unfortunately, negative results do not always get the attention they deserve. Competition for funding puts scientists under ever-increasing pressure to publish high-impact, positive findings in scientific journals, and negative results can fall by the wayside. An analysis recently published in the journal PLoS ONE concluded that “papers are less likely to be published and to be cited if they report ‘negative’ results”. This alarming trend, dubbed publication bias, can discourage many researchers from publishing data that lack a positive outcome, leading them instead to “file” that data away indefinitely.
Why do negative results have such a bad reputation? A common misconception is that negative results are the product of a poorly designed or executed study, particularly when those results cast doubt on the prevailing hypothesis in a particular field. This leads many to equate “negative results” with “bad studies”. In fact, any study, whether its outcomes are positive or negative, can be susceptible to methodological flaws, and although the peer-review system is designed to catch and correct such studies, some invariably slip through the cracks. More to the point, negative results can lead to positive outcomes down the road, and it’s absolutely essential that negative results be reported so that experiments aren’t needlessly repeated, wasting valuable time, effort, and resources.
Increasingly, there has been a concerted push to give negative results an equal platform, notably by journals created expressly to tackle this problem, including the Journal of Negative Results in Biomedicine, The All Results Journal, and the Missing Pieces Collection recently launched by PLoS ONE. As long as submitted studies pass rigorous peer review, meaning they are properly executed, ethical, and based on sound scientific principles, these journals welcome papers regardless of the outcome of the experiment.
So what does this mean for multiple sclerosis research? While there have been substantial and exciting breakthroughs in MS research over the past few decades, negative results have also been reported. For example, last December Novartis announced the results of a phase III clinical trial evaluating the therapeutic potential of the drug Gilenya* in people with primary progressive MS, reporting that the drug did not improve various disability measures compared to placebo. Similarly, the CombiRx study, which investigated whether combining interferon β-1a and glatiramer acetate improved therapeutic outcomes in individuals with relapsing-remitting MS compared to either drug alone, did not demonstrate a significant clinical benefit of the combination therapy. Results like these may seem like disappointing roadblocks, but they are just as important as positive results: they reveal critical information that allows clinicians to select appropriate treatments and rule out ones that aren’t effective.
Sometimes, a negative result can even lead serendipitously to an encouraging finding. This was the case with the OLYMPUS trial, which set out to test the efficacy of rituximab, a drug that targets certain immune cells, in halting the progression of disability in people with primary progressive MS. Although the drug failed to stop disability progression overall, further analysis revealed that a subset of participants with primary progressive MS did in fact show improvement, in turn paving the way for a landmark study that identified a possible biomarker in certain people with primary progressive MS who could benefit from immune-directed treatments.
I’m encouraged to hear that negative results have been getting more and more attention lately, with a greater recognition among researchers and the public alike of their valuable contribution to the advancement of research. At the end of the day, all results, both positive and negative, help to fill in the missing pieces of the scientific puzzle.
*Gilenya® is a registered trademark of Novartis
Image credits: © Cornelius20 | Dreamstime.com – Brain Maze Photo