Are The Odds Regarding Particles Definitely Independent Events?


David Sklansky
05-09-2007, 09:45 PM
I might be exposing my ignorance here, but I have often wondered whether the randomness associated with subatomic particles, 50% chance they will decay in 13 microseconds, 50% chance they spin up rather than down, etc. etc. is independent. I know there is no "cause" for this randomness. But does that also mean that the randomness is totally independent? Or might it be like a deck with a few octillion cards in it. So that if we find one particle up, there is a teeny extra chance that the next one is down. Because half of all particles are up and half are down. Cards rather than coin flips.

Does Bell's Theorem or something else prove this idea wrong?

Duke
05-09-2007, 10:27 PM
Locality is the first issue that I see.

jason1990
05-10-2007, 12:01 AM
I think it would be fair to say that the standard notion of independence does not typically apply to quantum phenomena. The standard notion is that A and B are independent if

P(A and B) = P(A)P(B).

But in a quantum setting, P(A and B) is not well-defined in general. The uncertainty principle prevents us, in many cases, from measuring quantum properties simultaneously. Moreover, the statistical properties of sequential measurements depend, in general, on the order in which the measurements are done.
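
To make the order-dependence concrete, here's a toy numerical check (my own sketch in Python/numpy, not anything standard): start a spin-1/2 particle spin-up along z, then compare the joint outcome statistics of measuring Sz first and Sx second against measuring Sx first and Sz second.

[ CODE ]
import numpy as np

# Pauli matrices (factors of hbar/2 dropped; the eigenvalues +1/-1 label the outcomes)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def projectors(op):
    """Return {eigenvalue: projector} for a Hermitian 2x2 operator."""
    vals, vecs = np.linalg.eigh(op)
    return {int(round(v.real)): np.outer(vecs[:, i], vecs[:, i].conj())
            for i, v in enumerate(vals)}

def joint_probs(state, first, second):
    """Joint outcome probabilities for a projective measurement of `first` then `second`."""
    probs = {}
    for a, Pa in projectors(first).items():
        for b, Pb in projectors(second).items():
            amp = Pb @ Pa @ state                      # Born rule, applied sequentially
            probs[(a, b)] = round(float(np.vdot(amp, amp).real), 3)
    return probs

psi = np.array([1, 0], dtype=complex)                  # spin up along z

print(joint_probs(psi, sz, sx))   # z first: z = +1 with certainty, x is 50/50
print(joint_probs(psi, sx, sz))   # x first: all four outcomes occur with probability 1/4
[/ CODE ]

Measured first, the z outcome is certain; measured after x, it's a 50/50 coin flip. So the statistics of "A and B" really do depend on the order.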

inlemur
05-10-2007, 12:01 AM
This seems to me like a philosophical question rather than one that can be proven or shown empirically. A similar situation occurs in statistical mechanics, where the fundamental postulate is that all microstates occur with equal probability. This assumption leads to models that accurately describe real physical systems, but the postulate itself is unproven.

Duke
05-10-2007, 12:28 AM
[ QUOTE ]
This seems to me like a philosophical question rather than one that can be proven or shown empirically. A similar situation occurs in statistical mechanics, where the fundamental postulate is that all microstates occur with equal probability. This assumption leads to models that accurately describe real physical systems, but the postulate itself is unproven.

[/ QUOTE ]

There are big differences between "can't be proven," "can't be proven yet," and "can't be proven because we currently have a fundamental misunderstanding." I see absolutely no reason to cite likely ignorance as a reason to relegate something to long discussions about nothing. I spend my life trying to avoid that trap.

oe39
05-10-2007, 12:32 AM
it seems like a global hidden variable theory would be hard to disprove?

chezlaw
05-10-2007, 12:43 AM
[ QUOTE ]
[ QUOTE ]
This seems to me like a philosophical question rather than one that can be proven or shown empirically. A similar situation occurs in statistical mechanics, where the fundamental postulate is that all microstates occur with equal probability. This assumption leads to models that accurately describe real physical systems, but the postulate itself is unproven.

[/ QUOTE ]

There are big differences between "can't be proven," "can't be proven yet," and "can't be proven because we currently have a fundamental misunderstanding." I see absolutely no reason to cite likely ignorance as a reason to relegate something to long discussions about nothing. I spend my life trying to avoid that trap.

[/ QUOTE ]
This is a simple "can't be proven". We can't prove that apparent randomness isn't in fact deterministic, and it immediately follows that apparently random events could be dependent.

chez

inlemur
05-10-2007, 12:56 AM
[ QUOTE ]
[ QUOTE ]
This seems to me like a philosophical question rather than one that can be proven or shown empirically. A similar situation occurs in statistical mechanics, where the fundamental postulate is that all microstates occur with equal probability. This assumption leads to models that accurately describe real physical systems, but the postulate itself is unproven.

[/ QUOTE ]

There are big differences between "can't be proven," "can't be proven yet," and "can't be proven because we currently have a fundamental misunderstanding." I see absolutely no reason to cite likely ignorance as a reason to relegate something to long discussions about nothing. I spend my life trying to avoid that trap.

[/ QUOTE ]

When I say can't be proven, I mean that to the best of my understanding it is non-falsifiable and indistinguishable from a simpler model. Using the previous example, it could be that different microstates occur with different probabilities. However, if this is the case, the variation in probability with which they occur is so slight that systems can be modeled as though each microstate occurs with an equal probability. I don't exclude the possibility that either of these models could be more representative of the actual physical phenomenon, but one of them is sufficient to describe any system we have yet encountered and is simpler than the other.

All that aside, as we all know, nothing can be truly proven; the best we can do is develop models which most accurately predict physical phenomena. When a model makes a prediction that is unobservable (and we're kidding ourselves if we think that we can observe the effect of a single particle's quantum state on every other particle in the universe) in addition to all the predictions of some other model, we stick with the simpler model, for reasons that should be obvious.

Metric
05-10-2007, 02:45 AM
If two systems are quantum mechanically entangled, then probabilities will be dependent on one another. If two systems are "separable" (not entangled), then there exist at least some observables for which probabilities will be independent.

For increasingly large (multi-part) systems, an increasingly large fraction of the state space is dominated by entangled states. The universe as a whole is certainly described by an entangled state.

yukoncpa
05-10-2007, 03:03 AM
[ QUOTE ]
If two systems are quantum mechanically entangled, then probabilities will be dependent on one another. If two systems are "separable" (not entangled), then there exist at least some observables for which probabilities will be independent.

For increasingly large (multi-part) systems, an increasingly large fraction of the state space is dominated by entangled states. The universe as a whole is certainly described by an entangled state.

[/ QUOTE ]
Doesn’t entanglement entail action at a distance? If so, Metric, I thought this was something that you are arguing against.

Metric
05-10-2007, 03:12 AM
[ QUOTE ]
Doesn’t entanglement entail action at a distance? If so, Metric, I thought this was something that you are arguing against.

[/ QUOTE ]
Entanglement is fine and good. Conditional probabilities are fine and good. Non-unitary "wave function collapse" over extended distances is bad and wrong, but I'm willing to pretend such things exist at an "intro to QM" level, since such notions are widely in use.

flipdeadshot22
05-10-2007, 04:15 AM
[ QUOTE ]
This seems to me like a philosophical question rather than one that can be proven or shown empirically. A similar situation occurs in statistical mechanics, where the fundamental postulate is that all microstates occur with equal probability. This assumption leads to models that accurately describe real physical systems, but the postulate itself is unproven.

[/ QUOTE ]

You should probably attempt to learn a little basic QM before handing off such a question to philosophy. Determining probabilities for quantum mechanical systems to undergo a given set of transitions has a fundamental connection to the commutativity of the operators that define such transitions, or "events" as DS put it. Commutativity is basically a mathematical statement that expresses our ability to 'observe independent quantum events' (none of this is standard QM terminology, since I'm trying to keep this simple and avoid making my post 5 pages long and filled with LaTeX coding).

A good example of this is the non-commutativity of the momentum (p) and position (x) operators, which can be expressed mathematically as [x, p] = i*hbar. The fact that the right side of the equation is nonzero means that the two operators do not commute, and that once you have performed a measurement and determined the exact position of the particle, you cannot subsequently measure its momentum with any certainty. So getting back to what David was asking, the odds regarding the measurement of position and, after that, the measurement of momentum are not independent.

Contrast this with the measurement of the spin of a particle. Spin is a three-dimensional property of many quantum systems, and has the property that a measurement of the spin along the x, y, or z direction, followed by another measurement along a different direction, CAN BE considered independent, or, stated mathematically, for instance [Sz, Sx] = 0. This shows the commutativity of the spin in the z and x directions (this can be generalized to all directions). This means that if we measure the spin of a particle to be up with 100% certainty, we can follow this up with another measurement to see what the spin is in the x direction and obtain a 0% probability that it exists in this state.

flipdeadshot22
05-10-2007, 06:41 AM
Just to correct myself in my above post before Metric jumps on me: the commutation relation for the Sz, Sx spin operators, [Sz, Sx] = 0, is incorrect (these operators actually anti-commute).

Also, in case my post was tl;dr, it applies only to scenarios in which we measure two distinct properties of a system, rather than running multiple trials of measuring a single property (such as the spin in David's OP). In the case where we are determining whether a system is spin up or spin down with a 50/50 chance, then yes, this is your basic coin flip with no "hidden variables" or deterministic causes.
The usual form of QM does not say anything about any actual deterministic causes that might lie behind the probabilistic quantum phenomena. This fact is often used to claim that QM implies that nature is fundamentally random. Of course, if the usual form of QM is really the ultimate truth, then it is true that nature is fundamentally random. But who says that the usual form of QM really is the ultimate truth? (A serious scientist will never claim that for any current theory.) A priori, one cannot exclude the existence of some hidden variables (not described by the usual form of QM) that provide a deterministic cause for all seemingly random quantum phenomena. I think a good example of this is the Bohm interpretation (check out wiki).
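
For anyone who wants to check the algebra behind that correction, here's a quick numerical sanity check (my own sketch in Python/numpy, with hbar set to 1): the spin-1/2 components don't commute -- [Sz, Sx] = i*hbar*Sy -- but they do anti-commute.

[ CODE ]
import numpy as np

hbar = 1.0
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

comm = sz @ sx - sx @ sz        # commutator [Sz, Sx]
anti = sz @ sx + sx @ sz        # anti-commutator {Sz, Sx}

print(np.allclose(comm, 1j * hbar * sy))   # True: [Sz, Sx] = i*hbar*Sy, which is nonzero
print(np.allclose(anti, np.zeros((2, 2)))) # True: for spin-1/2 the two components anti-commute
[/ CODE ]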

PairTheBoard
05-10-2007, 08:18 AM
[ QUOTE ]
A priori, one cannot exclude the existence of some hidden variables (not described by the usual form of QM) that provide a deterministic cause for all seemingly random quantum phenomena.

[/ QUOTE ]

I thought that was the whole point to Bell's Theorem. That if you assume such a hidden variable it leads to contradictions that can be observed.

PairTheBoard

Metric
05-10-2007, 12:27 PM
[ QUOTE ]
[ QUOTE ]
A priori, one cannot exclude the existence of some hidden variables (not described by the usual form of QM) that provide a deterministic cause for all seemingly random quantum phenomena.

[/ QUOTE ]

I thought that was the whole point to Bell's Theorem. That if you assume such a hidden variable it leads to contradictions that can be observed.

PairTheBoard

[/ QUOTE ]
Bell's theorem basically says that perfectly classical, local, hidden variable theories "aren't good enough" to reproduce experimentally confirmed predictions of QM. That's slightly different from saying there are no hidden variables. There might be hidden variables. But by themselves they cannot be a replacement for QM -- you would have to add some more ingredients (usually to violate locality or something else somewhat disturbing).
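
To put a number on "aren't good enough": the CHSH form of Bell's inequality bounds any local hidden variable model by |S| <= 2, while an entangled two-qubit state reaches 2*sqrt(2). A little sketch of my own (Python/numpy, not from any particular reference) that just evaluates the quantum expectation values:

[ CODE ]
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane (eigenvalues +/-1)."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Bell state (|00> + |11>)/sqrt(2)
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)

def E(ta, tb):
    """Correlation <A(ta) x B(tb)> in the Bell state."""
    return np.vdot(phi, np.kron(spin(ta), spin(tb)) @ phi).real

a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)   # ~2.828 = 2*sqrt(2), beyond the bound of 2 that any local hidden variable model must obey
[/ CODE ]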

Metric
05-10-2007, 01:40 PM
Let's do a little example to see things explicitly.

Let's consider a two-qubit system. A single qubit is a system spanned by the states |0> and |1>.

I will call the first qubit "a" and the 2nd qubit "b."

Consider the following state: 1/2 (|0> + |1>)_a (|0> + |1>)_b

The probability to measure "a" in state |0> is 1/2, completely independent of whether "b" was measured to be in |0> or |1>, or not measured at all. This is due to the fact that the state is separable -- I can express the state as PSI_a PSI_b.

Now consider the following state:
1/root2 (|0>_a |0>_b + |1>_a |1>_b)

Now, the probability to measure "a" in state |0> is still 1/2 if I don't measure "b". BUT, if I do measure "b" to be in state |0>, then there is 100% chance that "a" will also be in |0>. And likewise if "b" was measured to be in state |1>, then "a" will also be discovered to be in state |1> with certainty. Correlations like this are the hallmark of entanglement -- the fact that I can't write this state as PSI_a PSI_b.

The relation to Bell's theorem is that all entangled (pure) states of a system like this one can also be used to violate a Bell inequality, which can't be done classically (modulo truly bizarre nonlocal theories).
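
In case anyone wants to poke at the numbers, here's the same comparison done numerically (a throwaway sketch of mine in Python/numpy): for the product state, conditioning on the "b" outcome leaves P(a=0) at 1/2; for the entangled state, it jumps to 1.

[ CODE ]
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Separable state: 1/2 (|0> + |1>)_a (|0> + |1>)_b
sep = np.kron(ket0 + ket1, ket0 + ket1) / 2
# Entangled state: 1/sqrt(2) (|0>_a |0>_b + |1>_a |1>_b)
ent = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

I2 = np.eye(2)
P0 = np.outer(ket0, ket0.conj())    # projector onto |0>

def prob(state, proj):
    amp = proj @ state
    return float(np.vdot(amp, amp).real)

for name, psi in [("separable", sep), ("entangled", ent)]:
    p_a0 = prob(psi, np.kron(P0, I2))      # P(a=0)
    p_b0 = prob(psi, np.kron(I2, P0))      # P(b=0)
    p_a0_b0 = prob(psi, np.kron(P0, P0))   # P(a=0 and b=0)
    print(name, "P(a=0) =", round(p_a0, 3), " P(a=0 | b=0) =", round(p_a0_b0 / p_b0, 3))
# separable: P(a=0) = 0.5, P(a=0 | b=0) = 0.5  (independent)
# entangled: P(a=0) = 0.5, P(a=0 | b=0) = 1.0  (perfectly correlated)
[/ CODE ]

Since the two local projectors commute, the joint probability P(a=0 and b=0) is well-defined here, which is what lets the usual independence test even be posed (per jason1990's caveat above).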

gumpzilla
05-10-2007, 02:42 PM
[ QUOTE ]
[ QUOTE ]

I thought that was the whole point to Bell's Theorem. That if you assume such a hidden variable it leads to contradictions that can be observed.

PairTheBoard

[/ QUOTE ]
Bell's theorem basically says that perfectly classical, local, hidden variable theories "aren't good enough" to reproduce experimentally confirmed predictions of QM. That's slightly different from saying there are no hidden variables. There might be hidden variables. But by themselves they cannot be a replacement for QM -- you would have to add some more ingredients (usually to violate locality or something else somewhat disturbing).

[/ QUOTE ]

Yeah. Something like Bohm/De Broglie's pilot wave trick works just fine as a hidden variable theory, but it's no longer local (meaning it allows for superluminal influences.) At least that's my understanding; I've been meaning to read Bohm's papers on this subject to actually learn what it is he says, but right now I'm just sort of quoting the conventional wisdom.

Regarding conditional probabilities, I've been reading some interesting stuff recently on weak measurement in quantum mechanics. Essentially, the idea is that if you have an ensemble of states that are both pre and post selected (meaning we know the state of the particle at time t_1 and t_2), applying conditional probability to the results of measurements at time t_1 < t < t_2 allows for some really weird, counterintuitive stuff. When I understand it better, I might write up a post on it.

Metric
05-10-2007, 03:12 PM
[ QUOTE ]
Yeah. Something like Bohm/De Broglie's pilot wave trick works just fine as a hidden variable theory, but it's no longer local (meaning it allows for superluminal influences.) At least that's my understanding; I've been meaning to read Bohm's papers on this subject to actually learn what it is he says, but right now I'm just sort of quoting the conventional wisdom.

[/ QUOTE ]
Yeah -- the Bohm-De Broglie stuff is really weird. It works (at least in the nonrelativistic case -- not sure how far they've gotten with field theory), but it has an extreme feel of jury-rigging about it. And at the end of the day the entire motivation is simply that you don't need to give up the traditional concept of an "always localized" particle -- a concept I'm very happy to live without anyway.

[ QUOTE ]
Regarding conditional probabilities, I've been reading some interesting stuff recently on weak measurement in quantum mechanics. Essentially, the idea is that if you have an ensemble of states that are both pre and post selected (meaning we know the state of the particle at time t_1 and t_2), applying conditional probability to the results of measurements at time t_1 < t < t_2 allows for some really weird, counterintuitive stuff. When I understand it better, I might write up a post on it.

[/ QUOTE ]
Are you talking, by chance, about the delayed choice quantum eraser experiment?

http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser

gumpzilla
05-10-2007, 04:40 PM
[ QUOTE ]

[ QUOTE ]
Regarding conditional probabilities, I've been reading some interesting stuff recently on weak measurement in quantum mechanics. Essentially, the idea is that if you have an ensemble of states that are both pre and post selected (meaning we know the state of the particle at time t_1 and t_2), applying conditional probability to the results of measurements at time t_1 < t < t_2 allows for some really weird, counterintuitive stuff. When I understand it better, I might write up a post on it.

[/ QUOTE ]
Are you talking, by chance, about the delayed choice quantum eraser experiment?

http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser

[/ QUOTE ]

It was not what I had in mind, but it is certainly possible that similar ideas are in play. I have heard of this experiment before but I am not very familiar with it; I'll print out the PRL and add that to my list of reading on this subject.

I don't have my notes on the relevant papers with me, and I'm finding that without them it's hard for me to reconstruct enough of the arguments to really explain them, so I'll try and write more about them in the near future. But, the general idea is that in a situation where you both pre and postselect ensembles (the key thing I'm missing in trying to reconstruct it is why this is necessary), if you employ a von Neumann measuring scheme with a weak enough coupling Hamiltonian, the value that you measure for various observables corresponds to a particular average value for these "generalized states" that consist of a ket evolving forward from t_1 and a bra evolving backwards from t_2.

From this, weird stuff follows. Among other things pointed out:

- The result of the weak measurement of an operator can be greater than the largest eigenvalue of that operator (!)

- To quote one of the section headings from one of these papers, "Two noncommuting observables have definite values in the time period between two measurements." I think this is special behavior based on what your starting and ending states are; intuitively, the idea is supposed to be that the measurements are "weak" enough that making a measurement of B doesn't perturb the measurement you made of the noncommuting operator A.

- In another one of the papers, a specially constructed state (where a particle is in one of N+1 boxes) is given such that opening any among N of the N+1 boxes will find the particle with certainty. Apparently this is compensated for with some kind of "negative probability." I don't even know what the hell this is supposed to mean yet, and it sounds like complete quackery. But, I was at a talk where some experimental data was shown (by a pretty sharp guy) that apparently could be interpreted in this way. This is what sparked my interest in it.

Like I said, when I understand it more I'll try and write something lengthy up. If you're interested right now, the papers I'm drawing the theoretical statements from are:

Aharonov and Vaidman, Phys. Rev. A 41, 11

Aharonov and Vaidman, J. Phys. A 24, 2315
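
If anyone wants to play with the first bullet point, the quantity those papers work with is (as I understand it) the weak value A_w = <phi|A|psi> / <phi|psi>, with |psi> the pre-selected state and <phi| the post-selected one. A quick sketch of my own in Python/numpy (not taken from the papers) showing it land well outside the eigenvalue range when the two states are nearly orthogonal:

[ CODE ]
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)    # eigenvalues are +1 and -1

def weak_value(pre, post, A):
    """Weak value <post|A|pre> / <post|pre> for pre- and post-selected states."""
    return np.vdot(post, A @ pre) / np.vdot(post, pre)

def qubit(theta):
    """A real qubit state cos(theta)|0> + sin(theta)|1>."""
    return np.array([np.cos(theta), np.sin(theta)], dtype=complex)

pre, post = qubit(1.4), qubit(-0.1)     # nearly orthogonal pre/post selection
print(weak_value(pre, post, sz).real)   # ~3.8, well outside the eigenvalue range [-1, +1]
[/ CODE ]

The claim, as I understand it, is that a weak enough von Neumann coupling really does leave something close to this number on the measuring pointer, on average, for the pre- and post-selected subensemble.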

PairTheBoard
05-10-2007, 04:58 PM
[ QUOTE ]
Are you talking, by chance, about the delayed choice quantum eraser experiment?

http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser

[/ QUOTE ]

I have a question about the Quantum Eraser. In this link,

Quantum Eraser Experiment (http://strangepaths.com/the-quantum-eraser-experiment/2007/03/20/en/)

where they show the diagrams, they say this
--------------------
At time T0 when D0 is triggered no interference appears, since the which-way information is contained in the system at that time. At time T1, which in the experiment is some nanoseconds later but could be in principle any time later, when D1/D2/D3/D4 are triggered, we find interference in the correlated subsets of past D0 records undergoing future erasure of the which-way information.
----------------------

If the principle is that Erasing the Which-Slit information allows an Interference Pattern as if no Which-Slit information was ever gathered, then why couldn't you just send all the idler photons directly to the BS splitter and erase the information for all the photons? If the idea is as they say, "At time T0 when D0 is triggered no interference appears, since the which-way information is contained in the system at that time", why couldn't you just move the Erasing Apparatus closer so that the Which-Way information is erased by the time the photon reaches D0?

Of course, if the Timing of the Erasure were to affect the Interference pattern at D0 for All photons it would be bad because you could then move the Erasure Apparatus closer and farther away to send faster than light messages.

So I assume it doesn't work that way. Which is funny because it seems like there's no reason why it shouldn't except for the fact that it would then violate faster than light communication.

PairTheBoard

Metric
05-10-2007, 05:05 PM
Thanks for the refs! I'd certainly be interested in a distilled summary if you get the chance -- those papers seem to be cited a lot in the context of "retrocausation." This makes me a bit nervous, but I don't know -- there could be something really profound here...

Metric
05-10-2007, 06:15 PM
Yes, what you're describing is the difference between the standard quantum eraser and the delayed choice quantum eraser (if I read you correctly). And as you seem to be anticipating, there is no change in the outcome of the experiment determined by whether the erasure happened before or after observation of the interfering photons.

This is a very non-trivial result. It's not so easy to see how this is possible in the standard "projective collapse" formalism -- it is somewhat easier to see using a formalism without collapse. This makes me wonder how Scully et al. predicted the outcome in advance, unless they simply appealed to "no superluminal signaling" type arguments.

PairTheBoard
05-10-2007, 07:05 PM
[ QUOTE ]
Yes, what you're describing is the difference between the standard quantum eraser and the delayed choice quantum eraser (if I read you correctly). And as you seem to be anticipating, there is no change in the outcome of the experiment determined by whether the erasure happened before or after observation of the interfering photons.

This is a very non-trivial result. It's not so easy to see how this is possible in the standard "projective collapse" formalism -- it is somewhat easier to see using a formalism without collapse. This makes me wonder how Scully et al. predicted the outcome in advance, unless they simply appealed to "no superluminal signaling" type arguments.

[/ QUOTE ]

I think I understand that. What I don't understand is why you can't bring back the interference pattern for All the photons by applying the Eraser to All the idler photons?

PairTheBoard

Metric
05-10-2007, 07:15 PM
[ QUOTE ]
I think I understand that. What I don't understand is why you can't bring back the interference pattern for All the photons by applying the Eraser to All the idler photons?

[/ QUOTE ]
I think you actually can bring back the interference pattern for all photons with a slightly different experimental setup (one that doesn't involve sorting out coincidence data). However, I haven't studied this particular experiment in that much detail, so it's possible that I'm just wrong here.

PairTheBoard
05-10-2007, 07:36 PM
[ QUOTE ]
[ QUOTE ]
I think I understand that. What I don't understand is why you can't bring back the interference pattern for All the photons by applying the Eraser to All the idler photons?

[/ QUOTE ]
I think you actually can bring back the interference pattern for all photons with a slightly different experimental setup (one that doesn't involve sorting out coincidence data). However, I haven't studied this particular experiment in that much detail, so it's possible that I'm just wrong here.

[/ QUOTE ]

The thing is, if you could bring back the interference pattern for all the photons by applying the Eraser to all the idler photons then you could signal someone sitting at the Screen. You apply the Eraser on the Left and someone sitting on the Right sees the Interference for All the Photons. You Disconnect the Eraser and the person on the Right sees the interference go away. You are in instantaneous communication with him.

PairTheBoard

Metric
05-10-2007, 10:20 PM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
I think I understand that. What I don't understand is why you can't bring back the interference pattern for All the photons by applying the Eraser to All the idler photons?

[/ QUOTE ]
I think you actually can bring back the interference pattern for all photons with a slightly different experimental setup (one that doesn't involve sorting out coincidence data). However, I haven't studied this particular experiment in that much detail, so it's possible that I'm just wrong here.

[/ QUOTE ]

The thing is, if you could bring back the interference pattern for all the photons by applying the Eraser to all the idler photons then you could signal someone sitting at the Screen. You apply the Eraser on the Left and someone sitting on the Right sees the Interference for All the Photons. You Disconnect the Eraser and the person on the Right sees the interference go away. You are in instantaneous communication with him.

PairTheBoard

[/ QUOTE ]
I see what you're saying -- let me work this out in some detail and get back in a bit. I'm used to thinking about the quantum eraser experiment on a single photon, where the polarization wave function is entangled with the position wave function (hence no FTL possibilities). In this particular setup we're entangling the polarization and position wave functions of two different photons, which throws another element into the mix and at least allows you to talk about communication. However, there is always some way out of these things -- no signaling is guaranteed the moment you invoke quantum field theory (which describes quantum photons).

Piers
05-11-2007, 09:48 PM
[ QUOTE ]
but I have often wondered whether the randomness associated with subatomic particles, 50% chance they will decay in 13 microseconds, 50% chance they spin up rather than down, etc. etc. is independent.

[/ QUOTE ]

I think it's clear that no one knows.

However, I think the universe is much more holistic than is normally assumed. We tend to put boundaries around things to make them easy for us to model. However, in the real world I don't think such boundaries exist. I would be amazed if such randomness was independent of the rest of the universe. I fully expect there are more accurate models of the universe, currently beyond us, where the random factor completely drops out.

Metric
05-15-2007, 06:27 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
I think I understand that. What I don't understand is why you can't bring back the interference pattern for All the photons by applying the Eraser to All the idler photons?

[/ QUOTE ]
I think you actually can bring back the interference pattern for all photons with a slightly different experimental setup (one that doesn't involve sorting out coincidence data). However, I haven't studied this particular experiment in that much detail, so it's possible that I'm just wrong here.

[/ QUOTE ]

The thing is, if you could bring back the interference pattern for all the photons by applying the Eraser to all the idler photons then you could signal someone sitting at the Screen. You apply the Eraser on the Left and someone sitting on the Right sees the Interference for All the Photons. You Disconnect the Eraser and the person on the Right sees the interference go away. You are in instantaneous communication with him.

PairTheBoard

[/ QUOTE ]
I checked up on this... You can indeed restore the interference pattern for all photons. However, the interference pattern associated with detection of idler photon at "D1" is different than the interference pattern associated with detection of idler photon at "D2." The troughs of one pattern correspond to the peaks in the other, and the total pattern still shows no interference -- you still need coincidence information to see the interference pattern, even if every photon is interfering. So since coincidence info only travels at c, there are no ftl communication possibilities.
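
A toy way to see the bookkeeping (my own back-of-the-envelope sketch in Python/numpy, not the actual analysis from the paper): treat the D1- and D2-conditioned patterns as complementary fringes. Each subset interferes, but their sum is flat, so the observer at D0 alone sees nothing change no matter what happens to the idlers.

[ CODE ]
import numpy as np

x = np.linspace(-5, 5, 1001)          # position on the D0 screen (arbitrary units)
k = 2.0                               # fringe spatial frequency (arbitrary)

p_d1 = 0.5 * (1 + np.cos(k * x))      # toy pattern conditioned on the idler reaching D1
p_d2 = 0.5 * (1 - np.cos(k * x))      # D2-conditioned pattern: peaks where the D1 pattern has troughs

total = p_d1 + p_d2                   # what D0 records without any coincidence information
print(np.allclose(total, 1.0))        # True: the fringes cancel, so nothing observable changes at D0
[/ CODE ]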

PairTheBoard
05-15-2007, 10:13 AM
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
[ QUOTE ]
I think I understand that. What I don't understand is why you can't bring back the interference pattern for All the photons by applying the Eraser to All the idler photons?

[/ QUOTE ]
I think you actually can bring back the interference pattern for all photons with a slightly different experimental setup (one that doesn't involve sorting out coincidence data). However, I haven't studied this particular experiment in that much detail, so it's possible that I'm just wrong here.

[/ QUOTE ]

The thing is, if you could bring back the interference pattern for all the photons by applying the Eraser to all the idler photons then you could signal someone sitting at the Screen. You apply the Eraser on the Left and someone sitting on the Right sees the Interference for All the Photons. You Disconnect the Eraser and the person on the Right sees the interference go away. You are in instantaneous communication with him.

PairTheBoard

[/ QUOTE ]
I checked up on this... You can indeed restore the interference pattern for all photons. However, the interference pattern associated with detection of idler photon at "D1" is different than the interference pattern associated with detection of idler photon at "D2." The troughs of one pattern correspond to the peaks in the other, and the total pattern still shows no interference -- you still need coincidence information to see the interference pattern, even if every photon is interfering. So since coincidence info only travels at c, there are no ftl communication possibilities.

[/ QUOTE ]

Aha. That makes sense. Although, saying you've restored the interference pattern for "All" the photons is a little ambiguous. There is no interference seen for "All" the photons by the observer at the screen. What's seen at the screen on the Right by an observer looking at All the photons is the same as what he would see if no Erasure were taking place on the Left. The interference is only seen for subgroups of All photons identified by their corresponding idler photons reaching D1 and D2. The Erasure mechanism on the Left has no effect on what the observer sitting on the Right sees. At least not until he is told which subgroup of photons to look at by the observer on the Left. So no FTL communication.

It looks like "Erasing" the Which-Slit information requires a splitting into subgroups with interference restored only for individual subgroups. It's tempting to try to pass the D1,D2 idler photons on to a Final F detector in such a way as to make it impossible to tell which D1,D2 pass-through they came from. But to do that you would have to "Erase" the D1,D2 Which-Way information which would require more splitting into subgroups. You can't bring All the idler photons together into one Final F detector in such a way that all Which-Way information has been Erased.

It's a fascinating experiment. Another one of those where you get the feeling that you aren't really grasping all the implications.

PairTheBoard

m_the0ry
05-15-2007, 12:40 PM
Yeah this is the locality debate and believe me a lot of words can be said about it.

From what I've been reading there is increasing evidence showing that locality is in fact false and that action at a distance is possible. There have been a few experiments devised to test this hypothesis. You'll never guess what they're waiting on - the LHC. (/joke)

m_the0ry
05-15-2007, 12:44 PM
[ QUOTE ]
[ QUOTE ]
but I have often wondered whether the randomness associated with subatomic particles, 50% chance they will decay in 13 microseconds, 50% chance they spin up rather than down, etc. etc. is independent.

[/ QUOTE ]

I think it's clear that no one knows.

However, I think the universe is much more holistic than is normally assumed. We tend to put boundaries around things to make them easy for us to model. However, in the real world I don't think such boundaries exist. I would be amazed if such randomness was independent of the rest of the universe. I fully expect there are more accurate models of the universe, currently beyond us, where the random factor completely drops out.

[/ QUOTE ]

Hidden variable theory. This is also a hot topic of research and experimentation, also waiting for proof/disproof via the LHC.

Personally - and all we can really say at this point are 'I' statements - I don't mind interpreting quantum mechanics as the final tier of understanding nature. It seems fitting to me that the final level be completely and undeniably random. This is the only solution that leaves no room for determinism which I think is mathematically and naturally an ugly concept.