
Psychological research - not so great at replication?

  • 07-09-2015 8:42pm
    #1


    Recent article in The Atlantic, with a whole bunch of links. I don't have time to study them in detail just now, but I thought I'd post it as it was floating around.
    No one is entirely clear on how Brian Nosek pulled it off, including Nosek himself. Over the last three years, the psychologist from the University of Virginia persuaded some 270 of his peers to channel their free time into repeating 100 published psychological experiments to see if they could get the same results a second time around. There would be no glory, no empirical eurekas, no breaking of fresh ground. Instead, this initiative—the Reproducibility Project—would be the first big systematic attempt to answer questions that have been vexing psychologists for years, if not decades. What proportion of results in their field are reliable?

    ~

    Like many sciences, psychology suffers from publication bias, where journals tend to only publish positive results (that is, those that confirm the researchers’ hypothesis), and negative results are left to linger in file drawers. On top of that, several questionable practices have become common, even accepted. A researcher might, for example, check to see if they had a statistically significant result before deciding whether to collect more data. Or they might only report the results of “successful” experiments. These acts, known colloquially as p-hacking, are attempts to torture positive results out of ambiguous data. They may be done innocuously, but they flood the literature with snazzy but ultimately false “discoveries.”

    ~

    In the last few years, psychologists have become increasingly aware of, and unsettled by, these problems. Some have created an informal movement to draw attention to the “reproducibility crisis” that threatens the credibility of their field. Others have argued that no such crisis exists, and accused critics of being second-stringers and bullies, and of favoring joyless grousing over important science.
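
    To make that bit about checking for significance before deciding whether to collect more data a little more concrete, here's a rough simulation sketch I put together (my own toy example, not anything from the article; the numbers - starting sample of 20, batches of 10, cap of 100, alpha of 0.05 - are just assumptions for illustration). It runs lots of two-group "studies" where there is genuinely no effect, with the researcher peeking at the t-test after every extra batch of participants and stopping as soon as it looks significant:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def one_study(n_start=20, n_max=100, step=10, alpha=0.05):
            """One two-group 'study' with a true effect of zero, peeking at
            the p-value after every extra batch of participants."""
            a = list(rng.normal(size=n_start))
            b = list(rng.normal(size=n_start))
            while True:
                p = stats.ttest_ind(a, b).pvalue
                if p < alpha:
                    return True    # "significant" -> write it up
                if len(a) >= n_max:
                    return False   # give up; into the file drawer it goes
                a.extend(rng.normal(size=step))
                b.extend(rng.normal(size=step))

        n_sims = 2000
        hits = sum(one_study() for _ in range(n_sims))
        print(f"False-positive rate with optional stopping: {hits / n_sims:.1%}")

    With a fixed sample size you'd expect roughly 5% of these null studies to come up "significant"; with the peeking, the rate comes out noticeably higher, which is the p-hacking point in a nutshell.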

    I don't read enough research regularly to have a full grasp of what's discussed in the piece. Some of the stuff I do read is a bit 'Outcomes for treatment group B are similar to Bloggs et al 2007, yet differ from Ahern 2008', etc, etc, but that's not really replication, eh?

    I'm interested in learning more about the rigours involved, though. Any initial thoughts?

