
Negative publishing and publication-bias

  • 13-03-2015 8:44pm
    #1
    Registered Users Posts: 16,931 ✭✭✭✭challengemaster


    It's a bit dead in here lately.... so how about something everyone can discuss. I'm sure everyone is more than aware of the publication bias that exists towards positive findings (and probably have experienced it first hand). It must be one of the few things common amongst all disciplines.

    It seems like we've come full circle, moving from an era of researchers reaching the same conclusions on opposite sides of the globe and being oblivious that someone else is even working on it, to one where the same duplication happens, except this time it's negative results and failures rather than positive ones.

    It does seem like the tide is turning though. With the likes of PLOS ONE already accepting negative results and the Journal of Negative Results in BioMedicine dedicated to them, it's becoming easier to publish negative results, which is a step towards reducing publication bias.

    What effects will this have overall? Will it make life easier for PhD students? Particularly with the publish or perish attitude - the pressure can mount when studies just don't work.

    Will it be abused like low-hanging fruit, leading to publications on a CV becoming devalued? Will literature reviews of failed studies become a thing?

    Will it improve the field, or just make it harder to determine what research has already been carried out? In some fields, with over 1,000 publications daily, it's already impossible to keep up with what research is happening. Now with negative publications (which will likely outnumber the positive), how is anyone supposed to keep up?

    I feel that removing negative publication bias is ultimately a good thing and definitely necessary for real progress to be made in certain areas.


Comments

  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,226 CMod ✭✭✭✭Black Swan


    Good topic CM.
    I'm sure everyone is more than aware of the publication bias that exists towards positive findings (and probably have experienced it first hand). It must be one of the few things common amongst all disciplines.
    Yes, statistical significance and rejecting the null hypotheses rules.
    ...moving from an era of researchers reaching the same conclusions on opposite sides of the globe...
    Provides intersubjectivity of results that we cite in our lit reviews, given they are published and sourced by search engines and web.
    and being oblivious that someone else is even working on it...
    Scary part of doing original research that's intended to add new knowledge and justify PhD, while worrying that someone goes to press before you, and forces you back to the drawing board.
    It does seem like the tide is turning though. With the likes of PLOS ONE already accepting negative results and the Journal of Negative Results in BioMedicine dedicated to them, it's becoming easier to publish negative results, which is a step towards reducing publication bias.
    Do you mean "negative" results that refute earlier discoveries, or insignificant results that can be due to any number of factors, including poor research design, measurement problems, sampling error and bias, type I and II errors, etc.?


  • Registered Users Posts: 16,931 ✭✭✭✭challengemaster


    Black Swan wrote: »
    Provides intersubjectivity of results that we cite in our lit reviews, given they are published and sourced by search engines and web.

    Do you mean "negative" results that refute earlier discoveries, or insignificant results that can be due to any number of factors, including poor research design, measurement problems, sampling error and bias, type I and II errors, etc.?

    Negative results (IMO) can be either those that refute earlier studies or those that could be considered failures. For the most part I mean the latter; however, both are intrinsically linked in impeding progress. Studies that refute earlier discoveries are often not published, forced into low-grade journals, or take a different approach to getting published (methods papers, etc.) to avoid going against the grain. In the same vein as unpublished failed studies, researchers never get a true full picture.

    What I was more referring to was along the lines of multiple discovery - something that was all too common in the 19th and 20th centuries. Since the dawn of the internet and internet publishing, it has largely become a thing of the past... at least for positive results which get published. I believe it's still prominent in the world of unpublished negative results.

    Worldwide, researchers have detailed lab books with failed experiments in them.
    Nowadays it's very simple for anyone to obtain papers or reviews and find out what research people have done, or at least what research they've done that worked. It's nigh on impossible to figure out what failed research is in those lab books.
    Those who cannot remember the past are condemned to repeat it.

    Inevitably, different research groups scattered across the globe ultimately (and obliviously) reach the same conclusions to the same failed experiment time and time again - for the sole reason that the experiment failed and was not published.
    Scary part of doing original research that's intended to add new knowledge and justify PhD, while worrying that someone goes to press before you, and forces you back to the drawing board.
    The scary part about it relating to negative results is that the same PhD may have already been done and found no positive outcome. What you think is the drawing board is actually someone else's tippex. Countless research hours, resources, and grants are potentially wasted due to not having access to the full picture.


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,226 CMod ✭✭✭✭Black Swan


    Inevitably, different research groups scattered across the globe ultimately (and obliviously) reach the same conclusions to the same failed experiment time and time again - for the sole reason that the experiment failed and was not published.
    It's rare that such failed experiments are available for review, and I doubt that researchers, teams, and labs would want to make such information available, especially if knowledge of such failures may make them less competitive for grants, etc. When we complete RFPs, we don't include our failures, only our successes and related publications. RFPs don't ask for failures, and in this seriously competitive research marketplace, we would be foolish to volunteer such information.

    So what incentives can you actually offer us to share our research failures that may overcome any adverse impacts when competing for research monies?


  • Registered Users Posts: 16,931 ✭✭✭✭challengemaster


    Black Swan wrote: »
    It's rare that such failed experiments are available for review, and I doubt that researchers, teams, and labs would want to make such information available, especially if knowledge of such failures may make them less competitive for grants, etc. When we complete RFPs, we don't include our failures, only our successes and related publications. RFPs don't ask for failures, and in this seriously competitive research marketplace, we would be foolish to volunteer such information.

    So what incentives can you actually offer us to share our research failures that may overcome any adverse impacts when competing for research monies?

    I think the problem is that negative results are seen as research failures, which goes back to the bias towards positive result publishing. Negative results are still results and are just as important as positive ones. That's something most people learn at undergraduate level, so why does it change so much when it comes to publishing?

    Here's a very good article by the Guardian on the matter.

    This is key:
    Their value lies in mapping out blind alleys, warning other investigators not to waste their time or at least to tread carefully. The only trouble is, it can be hard to get them published.

    I believe a serious change is needed in the mentality towards negative results. As you say, only successes are included, and you feel it would be foolish to volunteer failures. Realistically though, those failures are whole bodies of research that researchers and teams never get recognition for. Furthermore, negative results do contribute to the field in quite a significant way. As the perception of negative results changes, it will allow researchers and teams to publish the full extent of their work, and ultimately that will speak to the experience of a group when applying for grants/funding.


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,226 CMod ✭✭✭✭Black Swan


    I think the problem is that negative results are seen as research failures, which goes back to the bias towards positive result publishing. Negative results are still results and are just as important as positive ones.
    Once again it depends upon how you define "negative results." If negative results are those that significantly challenge prior findings, perhaps suggesting alternative results and conclusions, then they merit publication. I'm 2nd author on a peer-reviewed publication that refuted prior findings as one of its contributions.

    But if negative results are due to spurious theoretical and conceptual frameworks, poor research design, invalid variable operationalisations and measurements, problematic sample size, poor sampling methods, type I and II errors, etc., then their merit is questionable and they should not be published, except in methods texts that suggest how to avoid the conduct of poor research.
    That's something that most people learn at undergraduate level, so why does it change so much when it comes to publishing?
    It's heartening to hear you believe that "most people learn at undergraduate level" the importance of negative results in research, but I do not share your confidence in undergrad student research comprehension and understanding; i.e., quite to the contrary.


  • Registered Users Posts: 13,104 ✭✭✭✭djpbarry


    What I was more referring to was along the lines of multiple discovery - something that was all too common in the 19th and 20th centuries. Since the dawn of the internet and internet publishing, it has largely become a thing of the past... at least for positive results which get published.
    I wouldn’t necessarily agree with that. For example, the Nobel Prize in Chemistry in 2014 was shared among three different research groups who “independently” described what was essentially the same method. There was at least one more lab that developed a very similar technique at around the same time.


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,226 CMod ✭✭✭✭Black Swan


    I think the problem is that negative results are seen as research failures, which goes back to the bias towards positive result publishing. Negative results are still results and are just as important as positive ones.
    Once again, good topic CM.

    Are you defining "negative results" as those that produce statistical insignificance for all or most variable relationships in primary data analysis; or results that, while statistically significant, are so small in magnitude that they may be attributed to random variation; or those that likewise fail to reject all or most of the null hypotheses? Or, when doing secondary data analysis (e.g., data mining), results that are mostly statistically insignificant or spurious and fail to produce empirical generalisations?


  • Registered Users Posts: 78,278 ✭✭✭✭Victor


    Black Swan wrote: »
    Yes, statistical significance and rejecting the null hypotheses rules.
    All very important in statistics, but the rest of the world wants to know if it is practically different.

    An existing drug cures 50%±0.1% of patients; a new drug cures 51%±0.1%, but will take €1 billion to bring to market and has unknown side effects. The new drug might not be worthwhile.
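
    To put rough numbers on that (a hypothetical sketch with my own figures, not data from any real trial): with enough patients, the 1-point difference is overwhelmingly "significant", while a patient-level measure like number needed to treat shows how small the practical gain is.

    ```python
    # Hypothetical illustration of statistical vs practical significance.
    # The cure rates echo the example above; the trial size is invented.
    import math

    p1, p2 = 0.50, 0.51      # cure rates: existing drug vs new drug
    n = 100_000              # patients per arm (assumed trial size)

    # Two-proportion z-test (normal approximation, pooled variance).
    p_pool = (p1 + p2) / 2
    se = math.sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = (p2 - p1) / se
    p_value = math.erfc(z / math.sqrt(2))    # two-sided p-value

    # Practical significance: absolute risk reduction and number
    # needed to treat (patients switched per one extra cure).
    arr = p2 - p1
    nnt = 1 / arr

    print(f"z = {z:.2f}, p = {p_value:.2g}")    # z ~ 4.5, p ~ 8e-06
    print(f"ARR = {arr:.1%}, NNT = {nnt:.0f}")  # ARR = 1.0%, NNT = 100
    ```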


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,226 CMod ✭✭✭✭Black Swan


    Victor wrote: »
    All very important in statistics, but the rest of the world wants to know if it is practically different.
    Agree Victor. What you raise is the all-important "So what?" question that is all too often overlooked or not addressed in peer-reviewed publications, and in consequent editorial decisions to publish or not.


  • Registered Users Posts: 16,931 ✭✭✭✭challengemaster


    Black Swan wrote: »
    But if negative results are due to spurious theoretical and conceptual frameworks, poor research design, invalid variable operationalisations and measurements, problematic sample size, poor sampling methods, type I and II errors, etc., then their merit is questionable and they should not be published, except in methods texts that suggest how to avoid the conduct of poor research.
    TBH I don't think any study that failed for those reasons would even get past peer review, even in OA journals. Unless, as you said, it's framed as a methods paper to avoid the same mistakes being made in future.
    It's heartening to hear you believe that "most people learn at undergraduate level" the importance of negative results in research, but I do not share your confidence in undergrad student research comprehension and understanding; i.e., quite to the contrary.
    I'd agree, I don't think undergrad research comprehension and understanding is up to scratch at all. There's a difference between teaching/learning and understanding, and it's really only when exposed to actual research that the latter comes.
    djpbarry wrote: »
    I wouldn’t necessarily agree with that. For example, the Nobel Prize in Chemistry in 2014 was shared among three different research groups who “independently” described what was essentially the same method. There was at least one more lab that developed a very similar technique at around the same time.
    There's somewhat of a truth to that... but Nobel Prizes are never awarded in a timely fashion. Those discoveries were made in 1994 and 2000 by the different groups, when internet publishing was a long way from what it is today. :) I believe it happens much less now because, as Black Swan said, in the race to publish someone has to lose... and go back to the drawing board.
    Black Swan wrote: »

    Are you defining "negative results" as those that produce statistical insignificance for all or most variable relationships in primary data analysis; or results that, while statistically significant, are so small in magnitude that they may be attributed to random variation; or those that likewise fail to reject all or most of the null hypotheses? Or, when doing secondary data analysis (e.g., data mining), results that are mostly statistically insignificant or spurious and fail to produce empirical generalisations?

    Perhaps it's easier if I quote the aims and scope of negative results journals?
    Journal of Negative Results in BioMedicine aims to encourage scientists and physicians of all fields to publish results that challenge current models, tenets or dogmas. The journal invites scientists and physicians to submit work that illustrates how commonly used methods and techniques are unsuitable for studying a particular phenomenon. Journal of Negative Results in BioMedicine strongly promotes and invites the publication of clinical trials that fall short of demonstrating an improvement over current treatments. The aim of the journal is to provide scientists and physicians with responsible and balanced information in order to improve experimental designs and clinical decisions.

    Articles published in traditional journals frequently provide insufficient evidence regarding negative data. They hardly allow a rigorous evaluation of the quality of these results. In addition, controversial results that refute a current model or simply negative results within a current dogma, frequently meet considerable resistance before they are acknowledged. This is particularly the case if current techniques and technologies are too crude to shed further light on the findings. As more sophisticated techniques become available such findings may turn out to have been groundbreaking only decades later.

    Not every unexpected observation, controversial conclusion or proposed model will turn out to be of such groundbreaking significance. Nor will they even be confirmed by subsequent scientific progress. However, we strongly believe that such "negative" observations and conclusions, based on rigorous experimentation and thorough documentation, ought to be published in order to be discussed, confirmed or refuted by others. In addition, publishing well documented failures may reveal fundamental flaws and obstacles in commonly used methods, drugs or reagents such as antibodies or cell lines, ultimately leading to improvements in experimental designs and clinical decisions.
    http://www.nature.com/jcbfm/journal/v30/n7/full/jcbfm201051a.html

    We invite authors to submit data that did not substantiate their alternative hypotheses (i.e., a difference between experimental groups) and/or did not reproduce published findings. A common criticism of the publication of negative results is that the experimentation involved may not have been as extensive as in research with positive results, which are often further complemented by additional, mechanistic experiments. A survey of the existing literature exposes this as wishful thinking, as most experimental studies are grossly underpowered. Importantly, the quality of the data submitted to our Negative Results section must meet the same rigorous standards that our journal applies to all other submissions. In fact, it may be said that the standards must even exceed those applied currently, as type II error (false negatives) considerations need to be included. Of note, in clinical studies, a priori sample-size calculations (at given levels for type I and II error, α and β) are mandatory. Experimental medicine has deplorably escaped this requirement, at least partially explaining why experimental results often have a very low positive predictive value.

    The Negative Results section of the Journal of Cerebral Blood Flow and Metabolism will provide a platform and raise awareness of a problem with a proven negative impact on scientific progress as well as bench-to-bedside translation. Now researchers must step up to this platform. It is an experiment, but, if successful, it may serve as a role model for other journals and other research fields and thus help to reduce publication bias.
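
    On the editorial's point that a priori sample-size calculations at given α and β are mandatory in clinical studies, here's a minimal sketch of what that calculation looks like (my own illustration using the standard normal-approximation formula for two proportions, not code from the journal):

    ```python
    # A priori sample size per arm for a two-sided two-proportion
    # z-test, given type I error alpha and power = 1 - beta.
    # Illustration only; the example cure rates are invented.
    import math
    from statistics import NormalDist

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
        z_b = NormalDist().inv_cdf(power)          # quantile for 1 - beta
        var = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

    print(n_per_group(0.50, 0.60))  # ~385 patients per arm
    print(n_per_group(0.50, 0.51))  # ~39,000 per arm for a 1-point effect
    ```

    A study run with far fewer patients than that is underpowered, and a "negative" result from it says very little, which is exactly the type II error problem the editorial raises.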


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,226 CMod ✭✭✭✭Black Swan


    Perhaps it's easier if I quote the aims and scope of negative results journals?
    Learning vicariously from the published failures of others would have some utility, as would follow-up reviews of said failures suggesting improvements in research design, sampling methods, etc., so that future failures may be mitigated and not repeated.

    Online peer-reviewed journals may allow such an alternative approach to exist, but those still under the profit-and-loss business demands of expensive paper and print may continue to be more selective, seeking breakthrough articles, or those that produce statistically significant results that attract readership more so than the less colourful reiteration of failed research.

    These negative-results journals should continue to experiment with this alternative idea, but I am a bit skeptical that, when faculty are reviewed for tenure and promotion at a flagship university, publication of a negative-results article will carry as much weight as one with statistically significant results; rather, publication of significant results will continue to rule in our publish-or-perish environment.

