How common in academia is this: getting the data first and then making a "prediction"

  • 21-05-2013 7:17pm
    #1
    Registered Users Posts: 437 ✭✭


    A prediction that--oh look--matches the data results?

    It could be something simple like a survey: get people's attitudes to Issue X, and then make a post hoc "prediction" that validates those results.

    Generally when writing a thesis you don't lose any marks for your prediction not coming true, but I wonder how often people get an interesting result from their data and then retrofit their theory to accommodate it.


Comments

  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,223 CMod ✭✭✭✭Black Swan


    Not sure what you are asking.

    Researchers may have an idea of what may occur based upon their education and experience, analysis of theory, literature, prior studies, and existing data. But to use or seek only data that would support their prediction (and to exclude potentially contrary evidence) would violate ethics and scientific method conventions, and may also lead to missed unanticipated findings or spurious results.

    The trial-and-error method may be informed by prior understandings, as well as those acquired as the process progresses. SPC (Statistical Process Control) in manufacturing (action research) proceeds with the desired outcome in mind before it is reached; i.e., reducing variation from given standards and specifications through interventions in the process over time.
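
    To make the SPC idea concrete, here is a minimal control-chart sketch in Python. The measurements are invented, and the 3-sigma limits are simply the usual Shewhart convention; real SPC work uses rational subgroups and tabulated control constants:

    ```python
    # Minimal Shewhart-style control chart on hypothetical numbers:
    # estimate limits from an in-control baseline, then flag new
    # measurements that fall outside them.
    baseline = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 10.1, 9.9]  # known-good history
    incoming = [10.0, 10.2, 12.5]                              # new process output

    n = len(baseline)
    mean = sum(baseline) / n
    std = (sum((x - mean) ** 2 for x in baseline) / (n - 1)) ** 0.5

    ucl = mean + 3 * std  # upper control limit
    lcl = mean - 3 * std  # lower control limit

    for i, x in enumerate(incoming):
        status = "OK" if lcl <= x <= ucl else "out of control -- intervene"
        print(f"measurement {i}: {x} ({status})")
    ```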

    There are inductive data mining studies that look for emergent patterns in data, where empirical generalisations may be made to inform theory and future deductive studies (See Wallace's Wheel of Science).


  • Closed Accounts Posts: 31,967 ✭✭✭✭Sarky


    "If we knew what we were doing, it wouldn't be called research" springs to mind.

    Happens all the time in a certain way. In my job it tends to go something like:

    "Based on past experiments, we wanted to see if X was true. We suspect it is, but we're the first people to do the research. Here's a load of data from our experiments, be a dear and see what you can find, k?"

    [several days, computer crashes and cups of coffee later]

    "Oh, interesting. Looks like we were partly right, anyway, there's a definite link between X and Y, but look at Z over there, that looks like it's affecting things too. Hey, based on this guy's past experiments, I wonder if Z does this. Let's research it!"

    And so on (as long as there's funding).

    That's microbiology/genetics, anyway; I can't speak for other fields. Research on a topic is headed by someone with an intimate knowledge of it, so they'll have plenty of hunches as to how everything works. A good chunk of research is confirming what someone already suspected. You can't write a paper on a suspicion, though; you need hard data and analysis.

    In my experience the reliable data usually corroborate the suspicion (low-quality data might disagree, but then they're not reliable), and on top of that provide a load of tangential clues to other phenomena. Head researchers are better acquainted with all the disparate threads than those working on a specific project, so they're better placed to decide which avenues are more likely to yield results, and to have a guess at what the results might be. They don't always get it right, of course, but I think they tend to be more hit than miss, especially when funding is hard to come by.

    Worth mentioning the COBE project. Scientists made predictions about the black body curve of the universe's cosmic microwave background radiation, and the data matched up almost perfectly, with absolutely tiny margins of error. Fantastic piece of research, that.


  • Registered Users Posts: 81,223 ✭✭✭✭biko


    It's not a prediction if you already have some data; it's deduction.

    A lot of science starts the same way: something is observed and an assumption is made. Data is collected that strengthens or weakens the assumption. If the data weakens the assumption, it's discarded or changed accordingly, and the process starts again.

    Karl Popper did a lot of work on the scientific method; well worth a read.


  • Closed Accounts Posts: 34 jackpension


    I think this is a really interesting issue, and an important debate.

    According to deductive logic, we start with a theory and a set of hypotheses, which we then test against data. In doing so we test whether the null hypothesis can be rejected.
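
    As a concrete sketch of that deductive step (the data are simulated, scipy's standard two-sample t-test does the work, and the 0.05 threshold is just the usual convention):

    ```python
    # Hypothesis first, data second: simulate a treatment effect, then
    # test whether the null (no group difference) can be rejected.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=0.0, scale=1.0, size=50)
    treated = rng.normal(loc=0.5, scale=1.0, size=50)  # effect built in

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("null rejected" if p_value < 0.05 else "null upheld")
    ```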

    However, academic journals rarely publish tests of hypotheses where the null is upheld. This is a widely known bias in the social sciences. If you don't believe me on this point, then find me an example of a social scientific study published in a reputable journal where none of the hypotheses are borne out.

    So what are you to do in such circumstances? One option is to develop a new theory and repeat the cycle - that is, if you can survive long enough to do so. In the US, and to an extent in Asia, there is an up-or-out policy. So what do you do? You revise your theory and hypotheses and test them again - and maybe even once more.

    Some may view this as the art of alchemy - but it is part of the scientific process, and it is necessary for scientific survival.

    This practice must be distinguished from fishing: examining the data and coming up with hypotheses that fit it. That is not appropriate, and it is abhorrent to the scientific virtues of most fields. But it happens.
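
    A quick simulation shows why fishing is so tempting and so misleading. The data below are pure noise; the 20 predictors and the seed are arbitrary choices:

    ```python
    # Test enough post-hoc hypotheses against pure noise and some will
    # look "significant" at p < 0.05 by chance alone.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    outcome = rng.normal(size=100)        # the variable we want to "explain"

    hits = 0
    for _ in range(20):                   # 20 candidate predictors...
        predictor = rng.normal(size=100)  # ...none actually related to outcome
        r, p = stats.pearsonr(predictor, outcome)
        if p < 0.05:
            hits += 1

    print(f"{hits} of 20 noise predictors looked 'significant'")
    # Roughly 1 in 20 is expected at the 5% level, so a persistent
    # fisher will nearly always land something to write up.
    ```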

    In the end, there are multiple shades of grey between the purely inductive and the purely deductive, and that is where most of us live.


  • Closed Accounts Posts: 34 jackpension


    A prediction that--oh look--matches the data results?

    It could be something simple like a survey: get people's attitudes to Issue X, and then make a post hoc "prediction" that validates those results.

    Generally when writing a thesis you don't lose any marks for your prediction not coming true, but I wonder how often people get an interesting result from their data and then retrofit their theory to accommodate it.

    To build on my last point, and relate it more directly to yours, I think it happens a lot. I guess what I was trying to get at in my last point was that there are two sides to this coin - a necessary side, that I maintain is scientifically beneficial, and a malevolent side, wherein the theory entirely follows the data through speculative processes.


  • Closed Accounts Posts: 34 jackpension


    biko wrote: »
    It's not a prediction if you already have some data; it's deduction.

    A lot of science starts the same way: something is observed and an assumption is made. Data is collected that strengthens or weakens the assumption. If the data weakens the assumption, it's discarded or changed accordingly, and the process starts again.

    Karl Popper did a lot of work on the scientific method; well worth a read.

    OK, but bear in mind, Biko, that you are talking about paradigms, not individual hypotheses. Paradigms that govern assumptions transcending multiple theories do evolve in the fashion you describe.

    The process of deriving and testing hypotheses is quite different. It is less cyclical and more cumulative.


  • Moderators, Category Moderators, Science, Health & Environment Moderators, Society & Culture Moderators Posts: 47,223 CMod ✭✭✭✭Black Swan


    Certainly much research does not proceed in textbook style. We've had clients approach us with a problem and a pile of secondary data, asking us to make sense of it.


  • Closed Accounts Posts: 34 jackpension


    That's another shade of grey, Black Swan.

    BTW I think those opportunities are great.


  • Closed Accounts Posts: 2,980 ✭✭✭Kevster


    Getting data first and then making a hypothesis ('prediction') is not a bad thing, really. I worked in cancer research, and in this area we simply don't know what the entire human genome does in relation to a particular cancer. So it's all about discovery and seeing what's different between the 'cancer' genome and a healthy genome. Once you see the differences, you can hypothesise about what could be driving the cancer in the first place.
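
    A toy sketch of that discovery-first workflow; the gene names, expression values, and the 2x fold-change cut-off are all invented for illustration:

    ```python
    # Compare hypothetical expression levels between tumour and healthy
    # samples, then flag the biggest differences as hypothesis candidates.
    healthy = {"GENE_A": 5.1, "GENE_B": 2.0, "GENE_C": 7.3}
    tumour = {"GENE_A": 5.0, "GENE_B": 9.8, "GENE_C": 7.1}

    for gene, base in healthy.items():
        fold_change = tumour[gene] / base
        if fold_change > 2 or fold_change < 0.5:  # crude screening cut-off
            print(f"{gene}: {fold_change:.1f}x difference -- a candidate to "
                  f"hypothesise about and then test in a follow-up experiment")
    ```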

    An issue - as I see it - is when someone initially has a hypothesis and then rejects data/results that do not fit it. That goes on all the time in research, like you wouldn't believe. There are so many substandard researchers out there.

    Kevin

