My friend Thomas Metcalf (Spring Hill College) has a nice three-part series on the basics of quantum mechanics and some of their potential philosophical implications up at 1000-Word Philosophy. I very much encourage readers to check it out, as it's an accessible introduction to some of the main physical and philosophical puzzles that quantum mechanics generates. Because I have explored and provided a very different picture of these puzzles in my own work, I'd like to add a bit to Tom's discussion.

As Tom points out to begin the series, some of the basic puzzles that quantum phenomena raise have to do with:

- *Paths*: quantum objects can take multiple paths simultaneously.
- *Watching*: whenever we observe a quantum object, we always observe it taking *one* path only.
- *Observation*: observing a quantum object (probabilistically) affects where it will be observed.
- *Knowledge*: we cannot know all of the properties of a quantum object (location + velocity, etc.) simultaneously.
- *Non-locality*: observing a quantum object affects other quantum objects instantaneously, without observable information transfer.
- *Superposition*: quantum objects exist in a superposition (having many different, logically inconsistent values simultaneously).

These aren't the only funky features of quantum mechanics. As I detail here, in quantum mechanics there is:

- *Wave-particle duality*: quantum objects simultaneously have properties of points (located at one point in space-time) and properties of a wave (spread out over space-time).
- *Minimal space-time distance* (Planck length): below roughly 1.6 x 10^{-35} meters, the notions of space, time, and causality cease to make sense.

And of course this is just quantum mechanics. In addition, we know that our universe is *relativistic*, where space and time (viz. simultaneity) are relative to reference frames.

Now that we have all that out of the way, let us turn to the three dominant interpretations of quantum phenomena, along with Metcalf's nice brief presentation of the main problems with each:

#### A. Copenhagen

When we look for particles, we don’t seem to find them in superpositions. But of course when we *don’t* look for them, they seem to stay in superpositions. So there must be something *special* about measurement; it must cause superposed things to *stop* being superposed. Copenhagen-theorists say that observation “collapses” superpositions, and as noted, this collapse is indeterministic; nothing predicts or can predict whether the particle will be found *here* or *there*.^{10}

A nice thing about this interpretation is that it seems very much like classical physics...

A not-so-nice thing about this interpretation is that there is simply no direct experimental evidence that collapse ever actually happens.^{11} Indeed, collapse is incompatible with the Schrödinger equation. Copenhagen-theorists conclude that collapse *must* have happened (since otherwise, we’d see the particles in superpositions), but we actually don’t have a mathematical or a physical story that tells us how or why it happens.

This interpretation also makes measurement mysterious. How does the coin “know” I’m looking at it? Could a cat’s observation “cause” this collapse? A bacterium’s?^{12} It would be better overall if we didn’t have to say that observation itself causes physical changes in the thing observed.

#### B. Many-Worlds

Roughly speaking, the Many-Worlds interpretation says that superpositions remain after observation. When you look at the coin, the world evolves into a superposition of you observing ‘Heads’ and you observing ‘Tails.’

A nice thing about this interpretation is that the mathematical side is completely straightforward...

A not-so-nice thing about this interpretation is that it’s incompatible with our experience unless we say that the *universe itself* is branching into an outcome for every observation. We don’t ever *see* superpositions, so it must be that each branch of the universe gets its own “outcome” of the observation. This conclusion seems very strange to many people.

As it happens, this interpretation also makes probability very mysterious.^{14}

#### C. Bohm

The third interpretation to consider is the most “classical” of the lot. According to David Bohm and his followers, the coin was definitely ‘Heads’ or definitely ‘Tails’ before you measured it. The reason is that in addition to the coin, there was also another *thing*: a sort of guiding probability-wave that caused the coin to land on ‘Heads’ or ‘Tails.’ The world evolves deterministically, and superpositions, in a sense, aren’t real.^{15} Particles just *seem* to behave in a “superposition” way because we don’t have a way of monitoring everything about them.

A nice thing about this interpretation is, as mentioned, that it’s very classical...

A not-so-nice thing about this interpretation is that in the details, it turns out to need *nonlocality*.^{16} Basically, that means that things can affect each other at faster than the speed of light, even if they’re nowhere near each other. I observe ‘Heads’ on a coin here, and *instantly*, somehow, a coin ten light years away “becomes” ‘Tails.’ And there’s no obvious particle or mechanism to convey that causal signal, if it is a causal signal.

Another thing some people don’t like about this interpretation is that it seems to require the existence of an object we have no way of empirically detecting: the “pilot wave” that guides the particles to do what they do.^{17}

In short, none of the standard interpretations of quantum phenomena appear to work. So, what should we do? In my 2013 article, "A New Theory of Free Will", my 2014 article "A Unified Interpretation of Quantum Phenomena?...", as well as here, I discuss these problems and argue that we should approach them like any other question in science: namely, we should look for a *simple, concrete, unified mechanism* through which worlds might produce all of these bizarre results. To use a very rough example I give in "A New Theory of Free Will", if you wanted to understand how a bridge supports itself, one good way to do it is to make a *model of the bridge* and see how well it works (note: it was just brought to my attention that architects now do just this with virtual reality simulations of buildings, bridges, etc., which they use to test their designs before making them reality). Anyway, how might we do this with quantum mechanics? My suggestion is simple: we should try to figure out the simplest way to *make a world* that has quantum features. Fortunately, or so I've argued, we already know how to do that: in fact, **we've already done it**. Allow me to explain.

First, and most obviously, we already know how to make "worlds": they are called computer simulations. Here's a rather cool one: an entire universe that creates itself and simulates every feature--including every blade of grass--in minute detail.

What then does it take to get quantum phenomena out of a world? Surprisingly, not much. All you need to do is utilize peer-to-peer networking: a network of *parallel interacting simulations*, with each computer on the network running its own simulation and interacting with every other parallel simulation, without any central computer.

Turns out, here's what you get when you have multiple simulations running in a peer-to-peer network without perfect error-correction:

- A peer-to-peer simulation just is a **superposition** of different parallel representations of the simulated environment on different computers on the network (viz. each individual computer has its *own* ever-so-slightly different representation of where things in the simulation are, such that the union of the different representations of "reality" is a giant superposition of alternate states),
- "The" location of any object or property in a P2P simulation is therefore also **indeterminate**, given that each computer on the network has its *own* representation of where "the" object or property is, and there is no dedicated server on the network to represent where the object or property "really" is (any object or property "really" is represented at *many different* positions on the network, thanks to slightly different representations on many computers all operating in parallel),
- Any measurement taken by any single measurement device on a P2P network also thereby affects the network as a whole (since what one computer measures will affect what other computers on the network are *likely* to measure at any given instant), giving rise to a massive **measurement problem** (one can only measure where an object is on the network by *disturbing the entire network*, thereby altering where the object will be measured to be at any given instant),
- Because different machines on the network represent the same object in slightly *different* positions at any given instant (with some number *n* of machines representing a given object at position P, some other number *n*\* of machines representing it at position P\*, etc.), where any object *probably is* in the environment will have **features of a wave** (viz. an amplitude equivalent to the *number* of computers representing the object at a given position, and a wavelength equivalent to the dynamical change in how many computers represent the object at a given point at the next instant), while at the same time
- Any *particular* measurement on any *particular* computer will result in the observation of the object as located at a specific **point** (thus embodying wave-particle duality), such that
- Any particular measurement on any particular computer will result in the appearance of a **"collapse" of the wave-like dynamics** of the simulation into a single, determinate measurement (thus modeling wave-function collapse),
- It is a natural result of a peer-to-peer network that single objects can "split in two", becoming **entangled** (in a peer-to-peer network multiple computers can, in a manner of speaking, get slightly out of phase, with one or more computers on the network coding for the particle passing through a boundary while one or more other computers code for the particle bouncing backwards; if the coding is right, all of the computers on the network will treat the "two" resulting objects as simply later continuants of what was previously a single object),
- All time measurements in a P2P simulation are **relative to observers**: each measurement device on a P2P simulation (e.g. a game console) has its *own* internal clock, and there is no universal clock that all machines share, and
- Finally, because the quantized data comprising the physical information of a P2P simulation must be separated/non-continuous (much as there are "spaces" between pits of data on a CD/DVD/Blu-ray disc), there must be within any such simulation some **minimum space-time distance akin to the Planck length**.
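The list above can be sketched in code. Here is a minimal toy model in Python (all names, numbers, and the update rule are my own illustrative choices, not any real P2P protocol): each peer holds its own slightly different copy of an object's position, the network-wide tally of positions plays the role of a wave's amplitude, and a measurement on any one peer returns a single determinate point while shifting every other peer's representation, disturbing the network as a whole:

```python
import random
from collections import Counter

class Node:
    """One peer: holds its own, slightly different copy of an object's position."""
    def __init__(self, shared_hint, jitter, rng):
        # Each peer's representation drifts a little from every other peer's.
        self.position = shared_hint + rng.gauss(0, jitter)

class P2PSimulation:
    """A network of peers with no central server holding 'the' position."""
    def __init__(self, n_nodes=1000, jitter=1.0, seed=0):
        rng = random.Random(seed)
        self.nodes = [Node(0.0, jitter, rng) for _ in range(n_nodes)]

    def superposition(self, precision=1):
        """The network-wide state: how many peers put the object at each
        (rounded) position. The counts play the role of a wave's amplitude."""
        return Counter(round(n.position, precision) for n in self.nodes)

    def spread(self):
        """How widely the peers' representations disagree."""
        positions = [n.position for n in self.nodes]
        return max(positions) - min(positions)

    def measure(self, node_index=0, coupling=0.5):
        """A measurement on ONE peer yields a single determinate point
        ('collapse' from that peer's perspective); propagating the result
        pulls every other peer toward it, disturbing the whole network."""
        outcome = self.nodes[node_index].position
        for node in self.nodes:
            node.position += coupling * (outcome - node.position)
        return outcome

sim = P2PSimulation()
amplitude = sim.superposition()   # many positions at once: the 'superposition'
spread_before = sim.spread()
outcome = sim.measure()           # one peer reads off a single point...
spread_after = sim.spread()      # ...and the whole network has narrowed
```

In this sketch a measurement always returns one determinate value, yet it cannot be taken without shifting every peer's representation, a crude analogue of the measurement problem described above.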

In other words, you get a concrete mechanism whereby the Copenhagen interpretation and other strange phenomena begin to make sense: all objects in a peer-to-peer simulation are *always* in a superposition (viz. the network as a whole), but also *always* measured as "collapsed" down to a determinate value because each individual computer on the network will always render the object at a particular point.
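That dual aspect, always superposed network-wide yet always determinate peer-by-peer, fits in a few lines. This toy snippet (seed and numbers are purely illustrative) shows a set of peers whose union of representations contains many distinct values at once, while any single peer renders exactly one:

```python
import random

random.seed(1)

# A toy network of 500 peers, each holding its own slightly different
# copy of where one object is.
peers = [10.0 + random.gauss(0, 0.05) for _ in range(500)]

# The network as a whole holds many distinct values at once:
# across the network, the object is "in a superposition".
distinct_values = {round(p, 2) for p in peers}

# But any single peer, asked to render the object, gives ONE point.
rendered = peers[42]
```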

Anyway, I don't claim that the P2P model explains all of quantum mechanics or relativity (I'm still working on making sense of some puzzles, and have been in contact with a few computer scientists and mathematicians who claim to be making progress with the model--though they haven't published their results yet). What I do claim is that the model provides a tantalizing picture of how a variety of quantum-type phenomena emerge naturally and inevitably from a very common form of computer networking in existence today.
