
Limitations of Science?


Comments

  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    And as I said above, when the selection criteria are such that the studies are selected to get the results you want, then there IS a problem. And those criteria were NOT made explicit beforehand in this case. In fact they were quite deliberately obfuscated. I already said I agree that criteria have to be implemented in a Meta Analysis. But the criterion should never be "Select the studies which give the results that I want to get".

    And again when a similar study was performed using 30 examples from the same Ganzfeld pool, no significant results were found at all. This is no small thing.
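    For concreteness, "significant results" in a ganzfeld study usually means beating the 25% chance hit rate of the standard four-choice design by more than luck allows. A minimal sketch of that check, with made-up hit counts (the numbers below are illustrative, not from any of the papers under discussion):

```python
from math import comb

def binom_sf(hits: int, n: int, p: float = 0.25) -> float:
    """One-sided exact binomial p-value: P(X >= hits) for X ~ Bin(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits, n + 1))

# Made-up session counts: 30 hits in 100 sessions is unremarkable against a
# 25% chance baseline, while 40 in 100 would be hard to put down to luck.
weak_p = binom_sf(30, 100)     # comfortably above the 0.05 threshold
strong_p = binom_sf(40, 100)   # well below it
```

    In the standard four-choice design p = 0.25; the same function covers other designs by changing p.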

    Again, given that the meta analysis you first linked to was so poor and has all the problems I outlined, and your last link was broken and did not go anywhere, I repeat my earlier invitation for how best to progress the discussion. Drill down into one of the meta analyses, cite one of the studies and what you think it is claiming, and we can work through the methodology and interpretations involved together.

    Outside that I can only say that, like Blackmore's piece I linked to, I am not seeing any significant results out there that one can draw any conclusions on at all. Like Blackmore I would be very happy indeed to see more studies done... studies which explicitly and carefully address the issues found in previous studies... and see where that leads us. One very good way to do that would be to include the people who found the methodological problems with the old studies IN the new studies as full co-authors.


  • Registered Users Posts: 7,182 ✭✭✭Genghiz Cohen


    And as I said above, when the selection criteria are such that the studies are selected to get the results you want, then there IS a problem. And those criteria were NOT made explicit beforehand in this case. In fact they were quite deliberately obfuscated. I already said I agree that criteria have to be implemented in a Meta Analysis. But the criterion should never be "Select the studies which give the results that I want to get".

    And again when a similar study was performed using 30 examples from the same Ganzfeld pool, no significant results were found at all. This is no small thing.

    Again, given that the meta analysis you first linked to was so poor and has all the problems I outlined, and your last link was broken and did not go anywhere, I repeat my earlier invitation for how best to progress the discussion. Drill down into one of the meta analyses, cite one of the studies and what you think it is claiming, and we can work through the methodology and interpretations involved together.

    Outside that I can only say that, like Blackmore's piece I linked to, I am not seeing any significant results out there that one can draw any conclusions on at all. Like Blackmore I would be very happy indeed to see more studies done... studies which explicitly and carefully address the issues found in previous studies... and see where that leads us. One very good way to do that would be to include the people who found the methodological problems with the old studies IN the new studies as full co-authors.

    Atta boy Nozz, reinforce my beliefs!

    to-give-a-pat-on-the-back-t13471.jpg


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    As I have repeatedly said, other meta-analyses have taken into account Wiseman's and other supposed skeptics' criticisms and still found a psi effect.

    Which "other" meta studies are you referring to, exactly? As I said, you already provided one broken link and a vague reference to Storm and Ertel from 2001 and 2002.

    What S&E-2001/2 did was to perform another Meta Analysis of around 80 studies between the early 70s and the late 90s. In doing so they clearly ignored the well known issues with the earlier studies in that set. They also implemented inconsistent methods for calculating outcomes and criteria for inclusion of papers.

    Milton and Wiseman responded quite quickly to S&E-2001/2, outlining these issues. They showed how some of the studies included in S&E-2001/2 were rubbished by Hyman in 1985. At that time, in response, Honorton, who had actually carried out many of those studies, conceded that the studies contained methodological flaws and reporting issues. For example, only one third of the studies dealt with the issue of "duplicate target sets".

    Both Hyman (1985) and Honorton (1985) extensively documented the methodological problems in the database, but despite that S&E-2001/2 claimed there was no such problem with quality. They declared this without offering any argument or evidence to dispute the assessments of Hyman and Honorton.

    S&E also weighted 11 of their studies but left the rest un-weighted. As Wiseman says: "When producing a quantitative summary of a group of studies, it makes no sense to distort their effect sizes by applying quality weights, nor does it make sense to apply quality weights to some studies in a database but not to others. This practice is simply bizarre and cannot be the basis for making strong claims that the statistically significant cumulated outcome represents strong evidence for psi."
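    To see why selective weighting matters, here is a toy sketch (all numbers invented): pooling the same five effect sizes gives noticeably different answers depending on whether quality weights are applied to every study, to none, or only to a favourable subset.

```python
def pooled_effect(effects, weights=None):
    """Weighted mean effect size; plain mean when no weights are given."""
    if weights is None:
        weights = [1.0] * len(effects)
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

effects = [0.30, 0.25, 0.05, 0.02, 0.01]   # invented effect sizes
quality = [3.0, 3.0, 2.0, 2.0, 2.0]        # invented quality ratings

unweighted = pooled_effect(effects)                    # every study counts equally
consistent = pooled_effect(effects, quality)           # weights applied to ALL studies
selective = pooled_effect(effects, [3, 3, 1, 1, 1])    # only the two largest effects up-weighted
# selective > consistent > unweighted: the pooled "evidence" grows with
# each departure from an even-handed procedure.
```

    The point is not that weighting is wrong in itself, but that whoever chooses which studies get weighted can steer the pooled result.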

    So not only was your first link rife with issues that I have laid out, your second reference to S&E-2001/2 is worse again. Not only does it have many of the same issues, it has MORE issues again and on top of that there appears to be wanton “Bizarre” dishonesty included too.

    Was there another meta-analysis, aside from these two, that you wished to add to the fray and that I may have missed? We appear to be jumping from study to study with a frequency that has left both of us wondering which comments apply to which author/study.


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    Atta boy Nozz, reinforce my beliefs!

    Now now, that picture is a misrepresentation. That is not where you touched me last time at all :-)

    But yes, there is certainly something wrong when someone in these parts beefs up their arguments by declaring everyone here is just engaged in confirmation bias, back patting and are ignorant of the arguments and evidence. When the discussion turns to that level of rhetoric I think it becomes pretty clear the writer of such comments is projecting their own failings on to others.


  • Registered Users Posts: 7,182 ✭✭✭Genghiz Cohen


    But yes, there is certainly something wrong when someone in these parts beefs up their arguments by declaring everyone here is just engaged in confirmation bias, back patting and are ignorant of the arguments and evidence. When the discussion turns to that level of rhetoric I think it becomes pretty clear the writer of such comments is projecting their own failings on to others.

    Yeah! Good one!


  • Closed Accounts Posts: 2,019 ✭✭✭nagirrac


    Another issue with Meta Analysis is that biased cherry picking of results is also possible. A Meta Analysis is rendered useless if, out of 100 studies, you pick the 25 that best indicate the results you want. For a Meta Analysis to be good, the studies used should share methodology and be picked randomly. This did NOT happen in the case of Bem, as is outlined in this excerpt from a larger article by Susan Blackmore. The bolding and highlighting is my own:

    You are referring to the "file drawer problem" which is a valid area of discussion. The file drawer problem (how unpublished studies might impact the overall results) is a potential problem in all psychology studies and indeed much of science. The concern is a legitimate one but there is strong reason to believe it is less of a concern in parapsychology. In 1975 parapsychology became the first and only science to adopt an explicit policy against the file drawer effect. Publishing negative results has become routine in parapsychology, unlike other areas of study. Dean Radin covers this in great detail in both "The Conscious Universe" and "Entangled Minds".
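    One reason the file drawer worry can be quantified at all is Rosenthal's (1979) "fail-safe N": an estimate of how many unpublished null studies would be needed to drag a set of significant results back below the p = .05 threshold. A rough sketch under Stouffer's combined-z method (the z-scores below are invented for illustration):

```python
def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: the number of zero-effect (z = 0) studies
    that, added to the set, would pull the Stouffer combined z down to
    z_crit (1.645 corresponds to one-sided p = .05)."""
    total_z = sum(z_scores)
    return (total_z / z_crit) ** 2 - len(z_scores)

# Ten invented, modestly significant studies (z = 2.0 each): it would take
# on the order of 138 unpublished null studies to cancel them out.
n_needed = fail_safe_n([2.0] * 10)
```

    A large fail-safe N is the usual argument that a file drawer full of nulls is implausible; critics note it says nothing about selective reporting within published studies.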

    The core issue here is whether parapsychology is a valid area of study conducted by proper scientific method. The American Association for the Advancement of Science has recognized parapsychology as a bona fide scientific discipline since the mid 1960s, almost 50 years ago. No "professional skeptic" organization, such as Randi's, which also conducts lots of research, is an affiliate of the AAAS. Randi and others (including lots of posters on this forum) paint all of parapsychology with the "pseudoscience" label, which is unfortunate and unscientific given the serious scientific work in the field. Most scientists (including Wiseman) who have seriously looked at the data nowadays state that the psi effect is real; the discussion is largely about trying to explain the effect. In this respect it is no different from quantum entanglement, something that was first observed almost 100 years ago and today has dozens of interpretations but no clear agreed understanding of the cause of the effect.


  • Registered Users Posts: 44 Heres Your Future


    Which "other" meta studies are you referring to, exactly? As I said, you already provided one broken link

    what link is broken? they all work fine for me
    What S&E-2001/2 did was to perform another Meta Analysis of around 80 studies between the early 70s and the late 90s. In doing so they clearly ignored the well known issues with the earlier studies in that set. They also implemented inconsistent methods for calculating outcomes and criteria for inclusion of papers.

    You haven't read the full debate between Storm & Ertel and Milton & Wiseman so. Storm & Ertel replied to the criticisms in their 2002 article in the Journal of Parapsychology.
    They showed how some of the studies included in S&E-2001/2 were rubbished by Hyman in 1985.
    This would be the same Hyman who, as I pointed out above, wrote an article in 1996 in which he claimed he could not find methodological flaws in the experiments but still wouldn't accept their results.


    S&E also weighted 11 of their studies but left the rest un-weighted. As Wiseman says: "When producing a quantitative summary of a group of studies, it makes no sense to distort their effect sizes by applying quality weights, nor does it make sense to apply quality weights to some studies in a database but not to others. This practice is simply bizarre and cannot be the basis for making strong claims that the statistically significant cumulated outcome represents strong evidence for psi."

    Again I would refer you to the links I have cited already. They work fine for me, but if for some reason you can't get them to work then go to Professor Utts homepage, you will find plenty there. http://www.ics.uci.edu/~jutts/.
    So not only was your first link rife with issues that I have laid out, your second reference to S&E-2001/2 is worse again. Not only does it have many of the same issues, it has MORE issues again and on top of that there appears to be wanton “Bizarre” dishonesty included too.

    You would want to be very careful about making claims about academics being 'bizarrely dishonest'. Not even those supposedly skeptical scientists who disagreed with their conclusions called them 'bizarrely dishonest', and I doubt the researchers themselves would take too kindly to it.

    Funnily enough Jessica Utts, who is Professor of Statistics at the University of California, doesn't seem to share many of your concerns about the quality of meta analyses of the parapsychological literature. At the end of the abstract of the article I linked to above (which for some reason you can't get to work) she says 'Finally, meta-analyses from several areas of parapsychology are summarized. It is concluded that the overall evidence indicates that there is an anomalous effect in need of an explanation.'

    But then I guess she must be 'bizarrely dishonest' too; they are all in on it, those damn statisticians and parapsychologists :rolleyes:


  • Registered Users Posts: 4,930 ✭✭✭Jimoslimos


    In debates like this I'm reminded of that quote (oft unfinished) by Kekulé;

    "Let us learn to dream, gentlemen, then perhaps we shall find the truth... But let us beware of publishing our dreams till they have been tested by waking understanding."

    Parapsychology is a pseudoscience, and until I see such evidence, along with detailed materials and methods, published in a reputable journal that allows me to replicate their findings, it will remain so.


  • Registered Users Posts: 44 Heres Your Future


    Jimoslimos wrote: »
    Parapsychology is a pseudoscience, and until I see such evidence, along with detailed materials and methods, published in a reputable journal that allows me to replicate their findings, it will remain so.

    What makes you think parapsychology is a pseudoscience? What is unscientific about it?

    Have you been reading the thread? Such evidence has already been published in reputable journals. And yet some people here say I am wrong to think that most have no familiarity with the actual literature they propose to condemn... jeez


  • Registered Users Posts: 44 Heres Your Future


    Now now, that picture is a misrepresentation. That is not where you touched me last time at all :-)

    But yes, there is certainly something wrong when someone in these parts beefs up their arguments by declaring everyone here is just engaged in confirmation bias,

    Lol, I specifically said in my post that many posters, not all, congregate here to make snide uninformed remarks. That was based on my observation of earlier posts in the thread. Yet this has somehow morphed in your post into 'everyone here is engaged in confirmation bias'.

    I would suggest you are the one who is misrepresenting things. I presumed it was because you were unfamiliar with the relevant evidence, but you have repeatedly said that you are familiar with it, in which case you must be doing so deliberately. A bit like Hyman in his '96 article, where he admitted he couldn't find flaws in the parapsychology experiments he was examining but wasn't going to believe the findings anyway since he didn't like the results!


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    nagirrac wrote: »
    You are referring to the "file drawer problem" which is a valid area of discussion. The file drawer problem (how unpublished studies might impact the overall results) is a potential problem in all psychology studies and indeed much of science.

    That is not what I was talking about at all. I never mentioned anything like unpublished studies or anything of the sort. I am talking about something ENTIRELY different, such as:

    1) The cherry picking of the studies that best reflect the result you want to find.
    2) The weighting of those studies unevenly, or at all, to massage the results.
    3) The inclusion of studies that have been shown to be flawed even by the people who conducted them.


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    what link is broken? they all work fine for me

    This link is not working for me at all from my work and home computers. If you have a different link to the same file I could do with it. However if it is the paper I think it is I have already read it and it is not actually a meta analysis of the topic here being discussed so it is probably not that important.
    You havent read the full debate between Storm & Ertel & Milton & Wiseman so. Storm & Ertel replied to the criticisms in their 2002 article

    I know. Do you even bother to read what I write before you reply? I very clearly wrote "a vague reference to Storm and Ertel from 2001 and 2002" and then continued to refer to this debate as "S&E-2001/2". If you cannot even be bothered to read the things I write, why are you even replying to me???
    You would want to be very careful about making claims about academics being 'bizarrely dishonest'

    If they have an issue with me they can take it up with me. I have no time to pander to you doing so vicariously on their behalf. As I quoted, Milton and Wiseman found the practice of weighting the articles at all... let alone weighting 11 of them and not the rest... "bizarre". I find it also dishonest, as I find it dishonest to cherry pick 9 articles and then be vague about their source. Dishonesty, or at least obfuscation, appears to be at play throughout this debate.

    Again: These things are NOT good:

    1) The cherry picking of the studies that best reflect the result you want to find.
    2) The weighting of those studies unevenly, or at all, to massage the results.
    3) The inclusion of studies that have been shown to be flawed even by the people who conducted them.
    Lol, I specifically said in my post that many posters not all

    I was not referencing you directly in that quote but speaking generally. However, yes, I do feel the problem is not mine when someone I am talking to resorts to the kind of rhetoric suggesting people do not know the material, are ignorant or uneducated, are biased, are just massaging each other's positions and back patting, and so on. When that rhetoric creeps into your prose it is just baseless ad hominem and does not add to the discussion at hand.


  • Registered Users Posts: 44 Heres Your Future


    The link is still working fine for me so I am not sure why it won't work for you.
    The name of the article I was referencing is "Replication and Meta-Analysis in Parapsychology", J. Utts (1991), published in Statistical Science. I also gave the link to Professor Utts' homepage where you can find further info.

    Here is a more recent article by Lance Storm (one of those pesky 'bizarrely dishonest' guys that seem to annoy you so much because of their research findings)

    http://www.aiprinc.org/para-ac01_Storm_2006a.pdf


  • Closed Accounts Posts: 2,019 ✭✭✭nagirrac


    That is not what I was talking about at all. I never mentioned anything like unpublished studies or anything of the sort. I am talking about something ENTIRELY different, such as:

    1) The cherry picking of the studies that best reflect the result you want to find.
    2) The weighting of those studies unevenly, or at all, to massage the results.
    3) The inclusion of studies that have been shown to be flawed even by the people who conducted them.

    The "file drawer problem", or publication bias, is the biggest issue by far in terms of cherry picking results in many areas of science. At least parapsychologists are being more intellectually honest by publishing negative data sets, which is more than can be said for the broader psychology field.


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    This would be the same Hyman who as I pointed out above wrote in an article in 1996 in which he claimed he could not find methodological flaws in the experiments but still wouldn't accept their results.

    There are quite a few problems with just this piece of text, so it warrants a separate reply.

    Firstly he never once said he would not accept their results in the part you pasted. You are just making that up entirely. In the quote you posted in post #146 all he said was that he could not claim to be 100% certain of a lack of flaws. No more. You are deliberately putting words into his mouth here! And not just you either: it is a quote often lifted out of that paper and posted on sites supporting PSI and the like. He actually goes on to say:
    At this point, a parapsychologist might protest that such "in principle" arguments can always be raised against any findings, no matter how well conceived was the study from which they emerged. Such a response is understandable, but I believe my caution is reasonable in this particular case. Historically, many cases of evidence for psi were proffered on the grounds that they came from experiments of impeccable methodological design. Only subsequently, sometimes by fortunate accident, did the possibility of a serious flaw or alternative explanation of the results become available

    ... before then going on to give examples, such as how Soal managed to cheat the system and make it look like PSI was present where it was not, even though people thought the experiment was sound and flawless!

    Pasting this quote out of context and just saying "the experiments" when replying to my comments is also somewhat disingenuous. Here is why:

    1) You presented a study which was a meta analysis of 80 studies.
    2) I showed how both Hyman AND Honorton found many problems with THAT study for many reasons INCLUDING the fact they included very problematic studies from before 1985.
    3) You post, out of context, a quote from Hyman which was not talking about the pre-1985 studies AT ALL, as if it somehow negated the issue of the pre-1985 studies. It does not. When he said he could not find flaws in the experiments he was talking about RECENT experiments. Not the ones that make S&E-2001/2 so problematic. In the conclusions of the report you are talking about he even says in 1995:
    Although, I cannot point to any obvious flaws in the experiments, the experimental program is too recent and insufficiently evaluated to be sure that flaws and biases have been eliminated.

    So let no one make the mistake of thinking the quote you pasted has any bearing on the studies that make the 2001/2 paper problematic. Actually I recommend reading the entire paper for anyone interested in following this thread. He ends the paper, which was about remote viewing, with some interesting ideas:
    1. Both Professor Utts and I agree that the first step should be to have the SAIC protocols rejudged by independent judges who are blind to the actual target.

    2. Assuming that such independent judging confirms the extra-chance matchings, the findings should be replicated in independent laboratories. Replication could take several forms. Some of the original viewers from the SAIC experiments could be used. However, it seems desirable to use a new target set and several independent judges.

    Operational Implications

    1. The current default assessment of the operational effectiveness of remote viewing is fraught with hazards. Subjective validation is well known to generate compelling, but false, convictions that a description matches a target in striking ways. Better, double blind, ways of assessing operational effectiveness can be used. I suggest at least one way in the report.

    2. The ultimate assessment of the potential utility of remote viewing for intelligence gathering cannot be separated from the findings of laboratory research.


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    nagirrac wrote: »
    The "file drawer problem" or publication bias is the biggest issue by far in terms of cherry picking results in many areas of science. At least parapsychologists are being more intellectually honest by publishing negative data sets, more than can be said for the broader Psychology field.

    It is great if they are taking steps to be honest. I cannot fault that at all. But even if they are taking SOME steps to be honest, that does not magically negate the issues with S&E-2001/2 that I mentioned. Those issues are still real and present even if they are doing their best to be honest elsewhere.


  • Registered Users Posts: 44 Heres Your Future


    There are quite a few problems with just this piece of text, so it warrants a separate reply.

    Firstly he never once said he would not accept their results in the part you pasted. You are just making that up entirely. In the quote you posted in post #146 all he said was that he could not claim to be 100% certain of a lack of flaws. No more. You are deliberately putting words into his mouth here! And not just you either: it is a quote often lifted out of that paper and posted on sites supporting PSI and the like. He actually goes on to say:

    Yes, Hyman most definitely did imply that he would not accept the results. He was examining a series of ganzfeld studies for flaws; he did not find any, but still said he couldn't rule out other unknown methodological flaws as being responsible for the results. This is a ridiculous approach: if we were to follow it, very few scientific experiments would ever get done at all, since someone could always claim the results aren't accurate because of some possible methodological flaw that might exist but which we can't identify.

    Re your mention of Soal: yes, he engaged in fraud. However, studies have shown that parapsychology as a field has the same or a lower incidence of fraud than other areas of experimental psychology. And before you ask, yes, I can cite a study to back this up but need to go and find the reference first.

    Re your criticisms of meta analysis and the authors of these studies being intellectually dishonest, I will again, for the nth time, suggest you read some of the articles actually dealing with these criticisms, such as Utts or the 2006 Storm article I cited. No offence, but when deciding whether the totality of psi evidence to date is convincing or not, I will take the word of a Professor of Statistics at the University of California over yours.


  • Registered Users Posts: 44 Heres Your Future


    It is great if they are taking steps to be honest. I cannot fault that at all. But even if they are taking SOME steps to be honest, that does not magically negate the issues with S&E-2001/2 that I mentioned. Those issues are still real and present even if they are doing their best to be honest elsewhere.

    These supposed 'issues' that you are so obsessed with are dealt with in a number of places, including Storm's 2006 article which I have now given you a link to. For feck's sake, will you just read the bloody thing, or is that link not working for you either?


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    No. He did not say he would not accept the results. Anyone falling for this lie, please read the paper yourselves and do not take HYF's word for it. He just said he would not say with 100% certainty that there were no flaws. Just that he could not find any. That is a perfectly valid, honest and useful thing to say.

    He explained his reluctance by giving examples of studies that had been thought to be sound at the time but turned out later to be flawed. Soal was one of those examples.

    He also points out how it takes TIME to uncover flaws often and the time frame he was given for writing this paper was too short.

    You are just putting words in his mouth he never actually said.

    You also ignored my second concern with you posting that quote out of context. I of course cannot prove it was your intention, but pasting it out of context like that makes it look like Hyman was retracting his claims about the methodological issues with the pre-1985 studies. He was doing no such thing.

    As I said, I cannot prove that you intentionally tried that trick, but whether you did or not, the 1995 quote you pasted certainly looks like it was retracting his 1985 work. It was not.

    And I quoted Hyman ONLY to show that he, and another person actually involved in performing them, found many methodological problems with the studies that were included in the 2001 meta analysis YOU cited as being relevant to this discussion.


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    Actually, an amusing anecdote for anyone else who is still following the thread and has not been bored into wandering off. Apologies to those who actually DO know the material that is out there if you have heard this one before. There was an interesting study performed (Kanthamani and Broughton 1994) which was a very large scale Ganzfeld study indeed.

    It was a massive undertaking as the experiments were run over a 6 year period with over 350 sessions. In the end the studies yielded... you guessed it.... squat.

    Rather than concluding that the study showed no evidence for PSI however the writers concluded the fault was in the materials used to perform the study and that “it is probably safe to say that static picture targets remain a less than ideal choice for ganzfeld experiments."

    Yeah guys, it is not that your experiments failed to produce a positive result... it is just that the magic world of PSI is rendered silly by the power of static picture targets. Right.

    Though in fairness Willin (1996) tried it all again using music instead of static pictures. Good idea, especially if he had read the conclusion above. Trouble is, he found squat too. His excuse? The people being tested must have lost interest!


  • Closed Accounts Posts: 3,371 ✭✭✭Obliq


    Yeah guys, it is not that your experiments failed to produce a positive result... it is just that the magic world of PSI is rendered silly by the power of static picture targets. Right.

    Though in fairness Willin (1996) tried it all again using music instead of static pictures. Good idea, especially if he had read the conclusion above. Trouble is, he found squat too. His excuse? The people being tested must have lost interest!

    Hmmm. Getting interested, yay! Goin off to all them links you people put up... but in the meantime keeping an open mind. Would hate to *blindly* follow my parents' belief that there is nothing more than the eye can see, nothing more to be proven, nothing but the dimension we can experience. Have no problem with these limitations if they can be proven, but I equally have no problem with people trying to prove the limitations to be limiting.


  • Registered Users Posts: 2 theexorcist


    The word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honorable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can (for me) change this. :p

    nagirrac, did you read the above quote from Einstein's letter? Read it slowly: he did not believe in God :rolleyes:


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    Not sure what you are saying there Obliq but thanks for the thanks :) I am not sure I look at life as us being "limited" though. We are alive against all the odds, we are here despite us not knowing any good reason why we should be. We are talking, living, breathing, loving and more. I do not feel limited. I feel alive despite all the limits that are possible and all the reasons I should not be alive.

    It is a shame many people look at life and the human condition as limiting. I see it entirely the opposite. Then again I am often told I have an overly optimistic outlook on life :)


  • Closed Accounts Posts: 2,019 ✭✭✭nagirrac


    The word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honorable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can (for me) change this. :p

    nagirrac, did you read the above quote from Einstein's letter? Read it slowly: he did not believe in God :rolleyes:

    As I said before I hate when religious and atheist zealots alike claim Einstein.
    Einstein did not believe in a personal God i.e. a theist God. Einstein was a deist agnostic, look it up.


  • Registered Users Posts: 26,578 ✭✭✭✭Turtwig


    This may seem strange, but this has to be the most depressing thread I have read in A&A for quite some time.

    Is this the 20th or 21st century?
    Attack science all you like; I do it all the time. (Most folks here might be surprised by that.) But you know what, this thread so far hasn't actually discussed the limitations of science, and there are many.

    Jeez,
    Depressing.


    *gets down off high horse and apologises to regulars, lurkers and everyone for the outburst.*


  • Registered Users Posts: 9,340 ✭✭✭nozzferrahhtoo


    Science is of course going to have limitations. It is a human construct and humans, and their constructs, are not perfect by any definition of "perfect" I have ever read.

    I would be the first to admit to it being "limited" but that does not change the fact it is the best we have got. It gives us answers and the scientific method is the best we have found so far. Religion is not giving people back their sight, informing us where we came from, restoring mobility to people who lost their limbs, curing disease, allowing us to communicate across vast distances with each other, bringing us to space, improving crop yields, curing and moderating pain, allowing sterile people to reproduce and much much more.

    While limited therefore it IS the best we have and I am happy to settle for that until something better comes along.


  • Closed Accounts Posts: 13,993 ✭✭✭✭recedite


    Jernal wrote: »
    Attack science? I do it all the time. (Most folks here might be surprised by that.) But you know what, this thread so far hasn't actually discussed the limitations of science, and there are many.

    Jeez,
    Depressing.
    Bear in mind the thread only got that title a few hours ago after splitting off from another one :D


  • Registered Users Posts: 26,578 ✭✭✭✭Turtwig


    recedite wrote: »
    Bear in mind the thread only got that title a few hours ago after splitting off from another one :D

    Oh, I er, didn't know that. :o


  • Closed Accounts Posts: 3,371 ✭✭✭Obliq


    Not sure what you are saying there Obliq but thanks for the thanks :) I am not sure I look at life as us being "limited" though. We are alive against all the odds, we are here despite us not knowing any good reason why we should be. We are talking, living, breathing, loving and more. I do not feel limited. I feel alive despite all the limits that are possible and all the reasons I should not be alive.

    It is a shame many people look at life and the human condition as limiting. I see it entirely the opposite. Then again I am often told I have an overly optimistic outlook on life :)

    Oops, one glass of wine and I make not a bit of sense. Should probably answer this tomorrow....suffice it to say I was not talking about limits to life or the human condition, but (present) limits to our observational capabilities. Might pick myself up off the ground tomorrow with a better answer, if I'm feeling clever :)
    Night!
    Ps. Taking full credit here for the interesting new direction on limits, even if I didn't mean it :-) !


  • Advertisement
  • Registered Users Posts: 533 ✭✭✭Michael OBrien


    nagirrac wrote: »
    Yes, certain knowledge is passed on via the written and oral traditions. However, that is not what I am talking about. A few examples:

    There is a species of wasp that lays its eggs in mud flats. They build an inverted funnel (with slippery surfaces to keep predators out) to get in and out while making the nest, and then when complete they lay their eggs, fill their tunnel with food, break off the funnel and seal the entrance. It is very sophisticated. There are obviously thousands of examples like this in nature, but the question is where did the individual wasp building the nest get this detailed knowledge (from a book? a guide to wasp nest building for wasps). There is no evidence they learned it from older wasps, as wasps bred in a controlled environment do exactly the same thing.

    I have chickens in my backyard. They were all procured as day old chicks, so have had no adults to teach them anything. One night I forgot to lock them into their coop and they roosted on branches about 15ft off the ground. How did they know to do this to avoid predators? The obvious response is instinct, but what the hell is instinct?

    Genetic reactions to stimuli, recorded in the DNA of a species over time from simple beginnings. Why do beavers build dams? Mostly instinct. Why do we have a fear of the dark? Mostly instinct.
    Those that did not have sufficiently motivated responses to stimuli fared less well than those that did.
    I think it's no different to a moth having elaborate camouflage. We see the END result of hundreds of thousands of generations. Small steps along the way and incremental benefits lead up to that point.

    Why roost high up? Because their DNA instructs them to find height more comfortable for sleeping at night. Why? Because over the many generations, chickens and protochickens were prey to nocturnal hunters, and those that were more comfortable in high places survived to have more offspring.
    There is no free will; it's all cause and effect. Identify the cause, and you understand the effect.


Advertisement