
A Consistent Quantum Ontology

Options
  • 25-09-2017 12:30am
    #1
    Registered Users Posts: 3,457 ✭✭✭


    Posting a philosophy of physics paper I particularly like here.

    https://arxiv.org/abs/1105.3932

    It explores the interpretation of quantum mechanics put forth by James Hartle (of Hartle-Hawking fame) and Murray Gell-Mann, one of the founding fathers of quantum chromodynamics (the theory of the strong interaction that binds quarks into protons, neutrons, and nuclei). It's called the decoherent histories interpretation.

    Their interpretation is simple: it is effectively the Copenhagen interpretation applied to possible histories of the universe, instead of just experimental outcomes. It is sometimes called the neo-Copenhagen interpretation for this reason. It's also sometimes called the neo-Everettian interpretation, because it resembles the many-worlds interpretation of QM (the major difference being that only one history is postulated to be real).

    I like it because it is sharp, unambiguous, and simple. Existence is not contingent on observers. Cats are not "dead and alive", and classical physics is not artificially injected at some Heisenberg cut.

    The paper I've linked to answers ontological questions (questions about existence) in the context of this interpretation. It is fairly non-technical, and some of the more technical parts can be skimmed over.


Comments

  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    Thanks for that. It's actually a very clear exposition. There are a few things I'm still wondering about, if you're able to clarify.

    So the basic ontology here is that there are mutually exclusive views of the world. An analogy in the spirit of the paper: I can take altitude and colour measurements of a hill independently, but I can't combine them into a single 3D image of a coloured hill the way I could for a classical hill, i.e. different sets of measurements cannot be composed into a single narrative.
    I think we see this reflected in how quantum logic does not permit reasoning based on the distributive law, which I think is fascinating.
    e.g., if I know:
    (i) That outside a house it is raining (outside, determined state)
    (ii) Inside the house there is either a man wearing a coat or a man not wearing a coat (inside, indeterminate state)

    In classical logic I can compose these inside and outside facts to obtain the following:

    (I) It is either raining with a man inside wearing a coat, or it is raining with a man inside not wearing a coat (indeterminate world state)

    However, quantum mechanically I cannot; I just have knowledge about the outside and the inside separately, with no way to combine them into a picture (even a probabilistic/indeterminate picture) of the world.

    So I think it makes non-commutativity very clear.
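
    To make this concrete, here's a small numpy sketch of my own (collapsing the two-location example onto a single qubit for brevity, so it's an illustration rather than anything from the paper). It implements the quantum-logical AND (intersection of subspaces) and OR (span of subspaces) for projectors and checks the distributive law, with the coat/no-coat alternatives played by an incompatible basis:

        import numpy as np

        def join(P, Q, tol=1e-10):
            # Quantum-logical OR: projector onto the span of ran(P) + ran(Q).
            U, s, _ = np.linalg.svd(np.hstack([P, Q]))
            B = U[:, :np.sum(s > tol)]
            return B @ B.conj().T

        def meet(P, Q):
            # Quantum-logical AND: projector onto ran(P) intersect ran(Q),
            # obtained from join via De Morgan.
            I = np.eye(P.shape[0])
            return I - join(I - P, I - Q)

        ket0 = np.array([[1.0], [0.0]])                 # "it is raining" analogue
        plus = np.array([[1.0], [1.0]]) / np.sqrt(2)    # "coat", incompatible basis
        minus = np.array([[1.0], [-1.0]]) / np.sqrt(2)  # "no coat"

        P = ket0 @ ket0.T
        Q = plus @ plus.T
        R = minus @ minus.T

        lhs = meet(P, join(Q, R))           # P AND (Q OR R)
        rhs = join(meet(P, Q), meet(P, R))  # (P AND Q) OR (P AND R)

        print(np.round(lhs, 3))  # equals P, since Q OR R is the identity
        print(np.round(rhs, 3))  # the zero projector: distributivity fails

    Both conjuncts on the right-hand side come out empty, so the classical composition in (I) simply has no quantum counterpart.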

    What I am wondering about is the other aspect of QM, interference. Even taking commuting observables, there can be interference between the different outcomes, what is the consistent histories take on this?


  • Registered Users Posts: 3,457 ✭✭✭Morbert


    Fourier wrote: »
    What I am wondering about is the other aspect of QM, interference. Even taking commuting observables, there can be interference between the different outcomes, what is the consistent histories take on this?

    An interesting question. At the risk of misunderstanding it: are you referring to phenomena like those observed in the double-slit experiment, where different possible paths through the slits interfere with each other, resulting in a distinctive pattern when particles strike the detector screen?

    Consistent histories would treat descriptions of which slit the particle went through as incompatible with the description of where on the detector screen the particle lands, but would not privilege one description as more correct than the other.

    If we decide on a framework that describes the particle as landing on the detector screen in some localised interval, we must also describe the particle travelling through "both" slits in some delocalised manner.

    Similarly, if we use a framework that describes the particle as passing through one slit or the other in a localised manner, we must also describe the particle landing on the detector screen with a delocalised wavefunction.

    Neither of these alternative descriptions is more correct. But experimentalists are typically interested in predicting where on the detector screen the particle will land, so they use the descriptive framework in which the particle does not pass through one slit or the other but "both".

    Robert Griffiths says as much in his book "Consistent Quantum Theory". In the chapter on quantum interference he states:
    The analog for two-slit interference is a consistent family F in which the particle passes through the slit system in a delocalized state, but arrives at a definite location in the diffraction zone. It is F which lies behind conventional discussions of two-slit interference, which emphasize (correctly) that in such circumstances it is meaningless to discuss which slit the particle passed through. However, there is also another consistent family G ... in which the particle passes through one or the other of the two slits, and is described in the diffraction zone by one of two delocalized wave packets ... The families F and G are incompatible, and hence the descriptions they provide cannot be combined. Attempting to do so by assuming that the particle goes through a definite slit and arrives at a definite location in the diffraction zone gives rise to inconsistencies ... From the perspective of fundamental quantum theory there is no reason to prefer one of these two families to the other. Each has its use for addressing certain types of physical question. If one wants to know the location of the particle when it reaches the diffraction zone, F must be used in preference to G, because it is only in F that this location makes sense. On the other hand, if one wants to know which slit the particle passed through, G must be employed, for in F the concept of passing through a particular slit makes no sense.
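
    If it helps, here's a quick numerical illustration of the incompatibility (my own toy model, not anything from Griffiths): in family F the two slit amplitudes add before squaring and fringes appear, while the inconsistent hybrid "definite slit AND definite landing point" adds probabilities and washes the fringes out:

        import numpy as np

        # Toy two-slit setup; all numbers are made up for illustration.
        wavelength = 1.0
        k = 2 * np.pi / wavelength
        d, L = 5.0, 100.0                   # slit separation, screen distance
        x = np.linspace(-40.0, 40.0, 1001)  # screen coordinate

        # Path lengths from each slit to the screen point x.
        r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
        r2 = np.sqrt(L**2 + (x + d / 2) ** 2)
        amp1 = np.exp(1j * k * r1) / np.sqrt(r1)
        amp2 = np.exp(1j * k * r2) / np.sqrt(r2)

        # Family F: delocalised passage, amplitudes add -> fringes.
        p_F = np.abs(amp1 + amp2) ** 2
        # Inconsistent hybrid: definite slit AND definite landing point,
        # probabilities add -> fringes vanish.
        p_mix = np.abs(amp1) ** 2 + np.abs(amp2) ** 2

        def visibility(p):
            return (p.max() - p.min()) / (p.max() + p.min())

        print(f"fringe visibility, family F: {visibility(p_F):.2f}")   # near 1
        print(f"fringe visibility, hybrid:   {visibility(p_mix):.2f}") # near 0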


  • Registered Users Posts: 3,457 ✭✭✭Morbert


    P.S. I should also mention Gell-Mann and Hartle take it to a whole other level with their "EPE-DH" formalism.

    Phys Rev A link
    https://journals.aps.org/pra/abstract/10.1103/PhysRevA.85.062120

    The arxiv link
    https://arxiv.org/abs/1106.0767

    Under this formalism (if I understand it correctly), you could describe a particle as travelling through one slit and landing on the detector screen in one location, provided you concede such a history cannot be correlated with some measurement apparatus.


  • Registered Users Posts: 10,558 ✭✭✭✭Fourier


    Fascinating read; I wasn't aware of that interpretation. So they are essentially saying that:

    1. Our world literally follows a trajectory/history which is one of those in the path integral.
    2. Beings (or really anything) in such a world fundamentally cannot resolve their history beyond a certain point, so certain questions are unanswerable in principle; this is modeled by negative probabilities, giving you QM's noncommutative probability.

    As mentioned in the Feynman-Hibbs book on the subject, path-integral paths are continuous but nowhere differentiable (jagged like Brownian motion), and hence an average velocity can only easily be measured by taking "a long time" over the measurement (and hence smoothing out the irregularities). Getting an accurate reading at smaller times requires absurd energies. However, taking this long means the particle will have jumped around a lot in the meantime, resulting in a poor position measurement.

    Hence the uncertainty principle arises from the jaggedness of our history.
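
    To see the scaling concretely, here's a toy numerical sketch of my own (arbitrary units, nothing from the papers): model a typical path as a random walk whose increments scale like sqrt(dt), then estimate average velocities over measurement windows of different lengths. The rms velocity blows up like 1/sqrt(window) as the window shrinks:

        import numpy as np

        rng = np.random.default_rng(0)

        # Caricature of a path-integral path: a random walk whose increments
        # scale like sqrt(dt), so it is continuous but nowhere smooth.
        dt = 1e-6
        n = 1_000_000
        path = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

        for window in (1e-1, 1e-3, 1e-5):
            m = int(round(window / dt))
            # Average velocity over each window of length `window`.
            v = (path[m:] - path[:-m]) / window
            print(f"window {window:.0e}: rms velocity {v.std():8.1f}")

        # The rms velocity grows like 1/sqrt(window): shorter measurements
        # see wilder apparent velocities, while long ones smear out the
        # position, mirroring the Feynman-Hibbs discussion.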

    It's kind of bizarre to think the history of the world would be nowhere smooth, i.e. physical observables would jitter arbitrarily fast rather than changing at a well-defined rate, but at least it's a "realist" world!

    I'd have to think more about what this means for QFT, where the histories are not even jagged functions but distributions, e.g. objects like the Dirac delta. What does it mean for the history of our world to be a distribution?


  • Registered Users Posts: 3,457 ✭✭✭Morbert


    I found some interesting papers on consistent histories and quantum field theory/quantum cosmology. I'll try and find them again (the formalism discussed was similar to the formalism used by Hawking and Hartle in their famous paper, where a wavefunction is recast as a "wavefunctional" over field and spacetime configurations). They might be relevant to your thoughts about QFT.

    Fourier wrote: »
    It's kind of bizarre to think the history of the world would be nowhere smooth, i.e. physical observables would jitter arbitrarily fast rather than changing at a well-defined rate, but at least it's a "realist" world!

    Yeah, what I like about the consistent histories formalism is that, even if you don't want to commit to an ontology as ambitious as Gell-Mann's and Hartle's, you can still make sense of QM from a realist framework, insofar as you can understand measurement as a process whereby your apparatus becomes correlated with a pre-existing property of the system being measured, and not a process where properties are created at the moment of measurement.
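
    As a toy version of that last point (my own sketch of a standard von Neumann pre-measurement, not anything specific to the consistent histories papers): a CNOT-style interaction correlates an apparatus pointer with the system's basis states, so the apparatus ends up recording a value rather than conjuring one:

        import numpy as np

        # System qubit in a superposition; apparatus pointer in a "ready" state.
        alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
        system = np.array([alpha, beta])
        ready = np.array([1.0, 0.0])
        psi = np.kron(system, ready)  # joint state |system> (x) |apparatus>

        # Von Neumann pre-measurement: a CNOT flips the pointer exactly when
        # the system is in |1>, correlating apparatus with system.
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=complex)

        out = CNOT @ psi
        print(np.round(out, 3))
        # -> alpha|00> + beta|11>: the pointer is now perfectly correlated
        # with the system's value, and reading it reveals that value.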

    Of course "realism" isn't used univocally. Consistent histories is local and "realist", but not in the sense used in Bell's theorem.

