
Origins


Comments

  • Registered Users Posts: 6 WrathOfNothing


    Fourier wrote: »
    Not really. In the standard view of QM what's really down there is not amenable to scientific analysis or mathematical description because it is not "mechanical" or algorithmic, so you probably can't know.
    How can there not be a scientific explanation?


  • Moderators, Society & Culture Moderators Posts: 24,395 Mod ✭✭✭✭robindch


    How can there not be a scientific explanation?
    You're aware that humans don't know everything there is to know?


  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    robindch wrote: »
    You're aware that humans don't know everything there is to know?
    Perhaps you're encompassing this when you say "don't know", but this goes a bit further, into "unknowable in principle". Microscopic systems don't seem to be quantifiable. Whatever is responsible for the behaviour of the sub-molecular world, it seems to lie outside of spacetime and not to obey quantifiable rules.

    That's not "obey scientific laws we don't know" it's literally "their behaviour is not governed by mathematical rules or laws". This is related to why reductionism breaks down in quantum mechanics.

    I'm trying to think of how to explain it briefly though.


  • Moderators, Society & Culture Moderators Posts: 24,395 Mod ✭✭✭✭robindch


    Fourier wrote: »
    Microscopic systems don't seem to be quantifiable.
    Is 'quantifiable' the right word here? I'd have said that the world of the very small is quantifiable - inasmuch as it's amenable to statistical calculation. That's different, however, from 'predictable', which seems to be what the wider population expects to derive from "scientific knowledge".


  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    robindch wrote: »
    Is 'quantifiable' the right word here? I'd have said that the world of the very small is quantifiable - inasmuch as it's amenable to statistical calculation. That's different, however, from 'predictable', which seems to be what the wider population expects to derive from "scientific knowledge".
    What's amenable to statistical calculations is macroscopic responses to the microscopic world. So for instance QM doesn't say there is a (let's say) 40% chance of a particle being located in a certain region, it says that if there is a device (with the correct properties) placed in that region it will light up 40% of the time. However that's not a statistical statement of the subatomic world, just responses from it.

    If the positions, momenta, spins, etc. we measured were actual real properties of the stuff down there, then you'd have a statistical account. However, since they're actually properties of our devices, we don't even have that.


  • Moderators, Society & Culture Moderators Posts: 15,704 Mod ✭✭✭✭smacl


    Fourier wrote: »
    What's amenable to statistical calculations is macroscopic responses to the microscopic world. So for instance QM doesn't say there is a (let's say) 40% chance of a particle being located in a certain region, it says that if there is a device (with the correct properties) placed in that region it will light up 40% of the time. However that's not a statistical statement of the subatomic world, just responses from it.

    If the positions, momenta, spins, etc. we measured were actual real properties of the stuff down there, then you'd have a statistical account. However, since they're actually properties of our devices, we don't even have that.

    Excuse the total ignorance of QM, but how does that differ from the use of transducers in other passive or active forms of measurement such as LIDAR, interferometry, tomography, etc.?


  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    smacl wrote: »
    Excuse the total ignorance of QM, but how does that differ from the use of transducers in other passive or active forms of measurement such as LIDAR, interferometry, tomography, etc.?
    Just explain what seems to you to be the similarity first in more detail. This is just to frame my answer without me rambling.


  • Registered Users Posts: 794 ✭✭✭moonage


    robindch wrote: »
    The most plausible of the current theories concerning the early universe suggest that time itself "began" at the Big Bang, so the concept of "what happened before" doesn't really apply.

    Also, externalizing the creation of the universe to a deity doesn't achieve anything anyway, since the question then moves to well, who or what created the deity?

    The problem with the Big Bang theory is, as Terence McKenna used to say, that modern science is based on the principle, 'Give us one free miracle, and we'll explain the rest'. And the one free miracle is the appearance of all the matter and energy in the universe, and all the laws that govern it, from nothing, in a single instant.



  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    moonage wrote: »
    The problem with the Big Bang theory is, as Terence McKenna used to say, that modern science is based on the principle, 'Give us one free miracle, and we'll explain the rest'. And the one free miracle is the appearance of all the matter and energy in the universe, and all the laws that govern it, from nothing, in a single instant.
    The Big Bang model as it exists in cosmology is not an attempt to explain the origins of reality. It's an attempt to explain the large-scale features of the universe from a hypothesised earlier state. It seems to do that pretty well. It's not sensible to say it has a problem because it doesn't explain everything.


  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 48,184 CMod ✭✭✭✭magicbastarder


    explaining *that* something happened, explaining *how* it happened, and explaining *why* it happened can be three different things.


  • Moderators, Society & Culture Moderators Posts: 15,704 Mod ✭✭✭✭smacl


    Fourier wrote: »
    Just explain what seems to you to be the similarity first in more detail. This is just to frame my answer without me rambling.

    Ok, so say you have a system like LIDAR. You fire a laser burst in a given direction, you have a light-sensitive receptor that looks for a reflection of that laser, you compare the time you fired the laser against the time you get the return, then multiply by the speed of light and divide by two to get a distance. You haven't measured the distance to the object directly; you've measured the effect, direct reflection in this case, of placing the object in the path of a laser beam. This usually works but is subject to various interference factors such that it yields a wrong result or no result, e.g. trying to measure to a refractive surface such as water doesn't work well. To improve the result you might take a very large number of repeated measurements and use statistics to get the most likely distance after removing outliers.
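
    The time-of-flight arithmetic and outlier trimming described above can be sketched in a few lines of Python. The timing values here are invented for illustration, not real measurements:

```python
import statistics

C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_seconds):
    # Half the round trip, times the speed of light, gives the one-way distance.
    return round_trip_seconds * C / 2.0

# Hypothetical round-trip times for one target; the last is a bad return.
times = [6.67e-7, 6.68e-7, 6.66e-7, 6.67e-7, 9.90e-7]
dists = [lidar_distance(t) for t in times]

# Trim readings more than two median absolute deviations from the median,
# then take the median of what survives as the most likely distance.
med = statistics.median(dists)
mad = statistics.median(abs(d - med) for d in dists)
kept = [d for d in dists if abs(d - med) <= 2 * mad]
print(round(statistics.median(kept)))  # 100 (metres), bad return discarded
```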


  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    smacl wrote: »
    Ok, so say you have a system like LIDAR. You fire a laser burst in a given direction....To improve the result you might take a very large number of repeated measurements and use statistics to get the most likely distance after removing outliers.
    Okay, well, I'll go with two examples from quantum mechanics: the Kochen-Specker theorem and Bell's theorem. I'll have to do this in stages rather than all in one go.

    First Bell's theorem.

    Say you have two particles and you can measure their spin in two directions X and Z. That means how much they are spinning about that axis. The spins can be either -1 or 1, clockwise or anticlockwise.

    I'll use the notation XX to denote that you measured the X-direction spin of the first and the X-direction spin of the second. Similarly XZ means X-direction spin of the first and Z-direction spin of the second.

    If I do the measurements I find the following results:
    Measurement | Comparison
    XX          | =
    XZ          | =
    ZX          | =
    ZZ          | ≠


    This means for example that when I measure X-direction spins for both they are always equal.

    If you look at this, there is no possible assignment of values for X and Z for both particles that can make this table true: the first three rows force the X value of the first particle, the X value of the second and the Z value of the second to all be equal, and the Z value of the first to equal them too, which contradicts the last row.

    If that makes sense I'll proceed with what this demonstrates and compare it to what you are talking about.
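
    The impossibility is small enough to check by brute force. A quick Python sketch, where x1, z1 are hypothetical pre-existing spin values for the first particle and x2, z2 for the second:

```python
from itertools import product

# Try every assignment of +1/-1 to the four hypothetical pre-existing
# spin values and test it against all four rows of the table above:
# XX equal, XZ equal, ZX equal, ZZ unequal.
solutions = [
    (x1, z1, x2, z2)
    for x1, z1, x2, z2 in product([1, -1], repeat=4)
    if x1 == x2 and x1 == z2 and z1 == x2 and z1 != z2
]
print(solutions)  # [] -- no assignment reproduces the table
```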


  • Moderators, Society & Culture Moderators Posts: 24,395 Mod ✭✭✭✭robindch


    Fourier wrote: »
    However that's not a statistical statement of the subatomic world, just responses from it.
    Now that you say that, I'm reminded of the double-slit experiment and the multiple interpretations coming from a single lower-level phenomenon - what you say makes sense, once some head furniture is moved about a bit.


  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    robindch wrote: »
    Now that you say that, I'm reminded of the double-slit experiment and the multiple interpretations coming from a single lower-level phenomenon - what you say makes sense, once some head furniture is moved about a bit.
    What's happening in the double-slit experiment is that if you don't place a detector at either of the slits, then there is no "fact of the matter" about which slit the particle went through. Many popular expositions say it went through both, but really it's that the concept of "which slit the particle went through" is non-applicable. Only concepts you measure have truth values.

    And since there is no fact about how it went through the slits, the screen beyond the slits can develop a pattern that makes no sense in terms of it going through either or both. It's not constrained by a fact about which slit the particle went through.
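
    A toy numerical illustration of that last point (the phases and units here are invented, not a model of any real apparatus): adding the two slit amplitudes before squaring gives fringes, while adding the squared probabilities, as you must once a detector fixes which slit was used, gives a flat pattern.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)   # screen position, arbitrary units
phase = 2.0 * x                    # assumed path-difference phase across the screen
a1 = np.exp(1j * phase / 2)        # amplitude for the path through slit 1
a2 = np.exp(-1j * phase / 2)       # amplitude for the path through slit 2

# No fact about which slit: amplitudes add, then square -> interference.
fringes = np.abs(a1 + a2) ** 2     # 4*cos(x)^2, oscillates between 0 and 4
# Detector at a slit: probabilities add -> no interference.
flat = np.abs(a1) ** 2 + np.abs(a2) ** 2   # constant 2 everywhere
```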


  • Registered Users Posts: 3,457 ✭✭✭Morbert


    robindch wrote: »
    The most plausible of the current theories concerning the early universe suggest that time itself "began" at the Big Bang, so the concept of "what happened before" doesn't really apply.

    Also, externalizing the creation of the universe to a deity doesn't achieve anything anyway, since the question then moves to well, who or what created the deity?

    Hmm, I think the traditional cosmological argument, which attempts to deduce god from cosmology (in the philosophical sense of the word), makes a distinction between contingent truths and necessary truths, with only the former having explanations. An interlocutor would say God exists necessarily, while the universe does not.

    They also wouldn't agree that "no time before the big bang" would exempt the universe from warranting an explanation. E.g. Even if some iteration of the Hartle-Hawking no-boundary proposal* eventually turns out to yield a description of our universe, an interlocutor might insist you explain why our universe is in line with this proposal, or any proposal at all.

    My suspicion is the strongest response is to question whether the universe is thoroughly intelligible, and hence whether the principle of sufficient reason is universal.

    *e.g. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.201302


  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    smacl wrote: »
    Ok, so say you have a system like LIDAR. You fire a laser burst in a given direction, you have a light-sensitive receptor that looks for a reflection of that laser, you compare the time you fired the laser against the time you get the return, then multiply by the speed of light and divide by two to get a distance. You haven't measured the distance to the object directly; you've measured the effect, direct reflection in this case, of placing the object in the path of a laser beam. This usually works but is subject to various interference factors such that it yields a wrong result or no result, e.g. trying to measure to a refractive surface such as water doesn't work well. To improve the result you might take a very large number of repeated measurements and use statistics to get the most likely distance after removing outliers.
    If you want a very short answer: the statistics you'll see are consistent with there being a pre-existent value for your measurement, i.e. regardless of whether your measurement was direct or indirect, it was going to produce some specific effect in your equipment.

    The statistics you get out of quantum mechanical measurements are not compatible with this. You're forced to conclude that your apparatus created the value and the specific nature of your apparatus is important to what that value is.

    Thus the results of quantum measurements are not you learning something about the microscopic system, they're just reactions in your device.

