
Back to Reality: Summary of a New Synthesis

Paul J. Werbos

National Science Foundation

Room 675, Arlington, VA 22230, USA

This summary will try to be as simple as possible, but will build up to the most recent mathematical technicalities, and to a new insight which - curiously enough - came to me on the day 6/6/6, on a flight home from China. (Do things like 6/6/6 or objective reality really worry you? Are you afraid of what you might see if you turned on the light in the darkness? If so, I recommend Arthur C. Clarke's book Childhood's End, or, if you have more time, www.werbos.com.) Towards the end, I will have to get into very precise technicalities, without which none of this is truly real.

Decades ago, Albert Einstein defied emerging conventional wisdom by claiming that objective reality still does exist, despite Heisenberg's mystical belief that this world is only a passing illusion, and despite the great success of the new computational tools, pioneered by Heisenberg, which seem to support that belief. Einstein proposed that the complicated wave functions and density matrices used in modern quantum mechanics are really just statistical tools for calculating the average, emergent dynamics of a Universe governed by "Classical Field Theory" (CFT). In Einstein's version of Classical Field Theory, everything in the Universe is made up of smooth, continuous fields ("force fields") that are governed by a specific kind of beautiful, simple local dynamical law called "Lagrange-Euler equations". He once said: "God does not throw dice with the Universe".
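As a reminder, in standard textbook notation (with the fields written as φ_i and the Lagrangian density as L), the Lagrange-Euler equations of a classical field theory are simply the field equations obtained by requiring the action to be stationary:

\[
\partial_{\mu}\!\left(\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\varphi_{i})}\right)
- \frac{\partial \mathcal{L}}{\partial \varphi_{i}} \;=\; 0 .
\]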

Einstein also claimed that there is nothing magical or metaphysical about the act of measurement; the statistics of measurement can all be derived, in principle, by working out the physics of how objects like human brains and computers interact with other parts of a physical experiment. He disagreed with Heisenberg's theory of quantum measurement, in which the wave function of the entire Universe is instantly changed whenever a qualified metaphysical "observer" simply looks at it. (The weirdness of Heisenberg's idea was explained clearly by Schrödinger; anyone who has not yet heard of Schrödinger's Cat should Google it or glance at any decent popular book on physics.) My claim in this paper is that Einstein was right after all, despite decades and decades of work which assumed the opposite and seemed to prove what was assumed.

There are three legitimate reasons why most well-informed physicists turned away from Einstein's position. Those who want to get straight to the new idea may jump to the equations in section (3b) of this paper.

In this paper, I have tried to convey and specify the essential new ideas with a minimum of unnecessary detail. However, there are certain connections to other areas which are important, even at this general level of discussion.

First, in discussing Quantum Field Theory (QFT), I have glossed over the fact that there are actually four major mainstream versions of what a QFT actually is, mathematically (not counting issues of interpretation).

There is the original canonical version of QFT, described in the text by Mandl and Shaw. That is the point of departure for the usual many-worlds versions of QFT, for the vast bulk of the work using QFT in device design and condensed matter physics, and for the discussion above.

There is a second version more popular in theoretical physics, based on functional integration, which emerged from Feynman's work on path integrals and Schwinger's work on source theory. This is considered equivalent to the first version, for reasons discussed in many standard texts; however, these discussions remind me of a paper by Coleman (discussing a different pair of theories) where he says, in effect, "I have proven these two theories are equivalent, provided that they both exist, which no one begins to know how to prove". There are mathematical subtleties here which have yet to be nailed down.
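For orientation, this second formulation is usually written schematically as a functional integral over field configurations, with S[φ] the classical action and J an external source; the functional measure Dφ is precisely where the unresolved mathematical subtleties live:

\[
Z[J] \;=\; \int \mathcal{D}\varphi \;
\exp\!\left(\frac{i}{\hbar}\left(S[\varphi] + \int d^{4}x\, J(x)\,\varphi(x)\right)\right).
\]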

There is a third version, by Streater and Wightman, which appears far more elegant than the others at first. It is often cited as the gold standard for mathematical rigor in proving that a QFT is well-defined in an axiomatic sense. That is the formulation for which the spin-statistics theorem has been proven. (Some would cite that theorem as a basis for dismissing my suggestion about the neutrino a bit too hastily - but the issue is not so simple, especially since no one has actually seen exchange statistics for neutrinos.)

But no one has actually shown that any specific QFT over three-dimensional space and time can actually fit that formulation, let alone the standard model. By contrast, if we propose that the Universe is governed by CFT, it is enough to show that the CFT itself is well enough defined; there is a huge literature on the existence of solutions to nonlinear PDEs which can be used.

There is a fourth version based on Wick transformations, which I will not discuss here.

Within the canonical version, QFT is actually made up of two parts: (1) quantum dynamics, which, in its modern form, is expressed as the von Neumann equation for the evolution of the density matrix; and (2) quantum measurement, which is usually calculated by the standard measurement-operator "collapse of the wave function" procedure. This paper mainly deals with quantum dynamics.
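For reference, the von Neumann equation in its standard form, with ρ the density matrix and H the Hamiltonian operator, is

\[
i\hbar\,\frac{\partial \rho}{\partial t} \;=\; \left[H,\rho\right].
\]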

The "Bell's Theorem" experiments, similar paradoxes, and much of today's quantum computing are based on quantum measurement more than quantum dynamics. For that reason, my papers at arxiv.org (and other previous published papers) focused very heavily on quantum measurement. (IJBC also discusses computing). My personal web page (www.werbos.com)goes further, and discusses specific experiments related to quantum measurement, and links to the work of Yanhua Shih and Huw Price - perhaps the two other people who understand the realities here the best. But in the end, I argue that quantum measurement is derived from quantum dynamics; for that reason, in recent years, I have put more effort into nailing down the situation with dynamics.

Discussions with Richard Holt crystallized my understanding of the Backwards Time Interpretation, but they aren't where it started. For many years, I had believed - like most of the world - that we should keep working with the "Newtonian" view of physics as a kind of time-forwards progression. That view does seem natural to our brain, and I did not yet see any concrete, experimentally based reason to go beyond it. But then, in a course on nuclear physics, I spent many hours pondering certain curves for nuclear exchange reactions. In these interactions, a proton which is sent whizzing past a resting neutron will often turn into a neutron itself, even when the nuclear interaction is very weak and distant. It seems that a neutron "knows" that it is OK to emit a charged "virtual pion" when there will be a proton there to receive it, in its future. There is no nonlocal model proceeding forwards in time which really fits this in a natural way - but it is all quite simple if we accept that the neutron "sees the future," that the charged virtual pion is a kind of excitation which depends on boundary conditions both in the past and in the future.

More recently, a new field of basic physics has arisen which revolutionizes our understanding of some similar phenomena. Once again, empirical reality has quietly changed the rules, in a way which the introductory textbooks have not yet assimilated.

It's not just a matter of needing to use density matrices. Cavity QED has shown that the traditional form of quantum electrodynamics simply does not work at all when confronted with a wide variety of practical physical systems. Traditional QED appears local in spirit, because it is all based on an interaction term ψ̄(x)γ^μA_μ(x)ψ(x) which takes place at a single point in space; an electron "knows" it can emit a photon when the information it has available says that it can. The many-worlds version of this obeys at least a restricted kind of locality. But it doesn't actually work that way. In actuality, an electron at the edge of a cavity "senses" the entire cavity; its "decision" is based on the shape of things far away; it "senses" whether the cavity can accommodate a certain amount of light energy resonating in a certain pattern across the entire cavity, and emits that energy only if the boundary conditions are there to receive it in the future. This is not an obscure phenomenon, like the T violations of superweak nuclear interactions; it is a large and useful phenomenon, which is the basis of a new generation of advanced video display systems using Vertical Cavity Surface-Emitting Lasers - but it has many other applications, and cavity QED is a well-developed fundamental alternative to traditional QED, fully consistent with BTI.
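For reference, the local interaction alluded to here is the standard QED interaction term (up to sign and normalization conventions, with e the electron charge):

\[
\mathcal{L}_{\mathrm{int}}(x) \;=\; -\,e\,\bar{\psi}(x)\,\gamma^{\mu}\,\psi(x)\,A_{\mu}(x).
\]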

There is still a lot of work to be done to evaluate various forms of the hypothesis presented here. For example, what are the possible mechanisms for "thermalizing" the Universe? Logic suggests two possibilities - boundary conditions, leading to something like a Boltzmann relation, or stochastic sources and sinks operating to perturb the soliton dynamics. The analysis is complicated by the need to use (local) Markov Random Field methods over Minkowski space-time, so as to avoid the inappropriate asymmetries of the usual time-forwards local Markov models. The assumption of stochastic sources and sinks results in a theory which is 100% phenomenologically consistent with special relativity, whereas a traditional Boltzmann relation inserts a kind of preferred direction in space (i.e., a temperature vector which multiplies the energy and momentum operators in the grand canonical ensemble); however, since no one has measured the size of zero-temperature decoherence effects in objects moving near the speed of light, I do not yet know enough to rule out either possibility. Of course, it is well known that a Boltzmann distribution about a local minimum of energy can be approximated as a kind of Gaussian distribution (as assumed in the Q hypothesis) in the right units, and that the approximation is good when the temperatures are relatively small (compared with the masses of the electrons, quarks, etc.).
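As a rough sketch of these two points (in illustrative notation only: β_μ a temperature four-vector, P^μ the energy-momentum operators, φ_0 a local minimum of the energy functional E, and δφ the deviation of the field from φ_0), the covariant Boltzmann weight and its Gaussian approximation take the form

\[
\rho \;\propto\; \exp\!\left(-\beta_{\mu}P^{\mu}\right),
\qquad
\exp\!\left(-\frac{E(\varphi_{0}+\delta\varphi)}{kT}\right)
\;\approx\;
\exp\!\left(-\frac{E(\varphi_{0})}{kT}\right)
\exp\!\left(-\frac{1}{2kT}\,
\delta\varphi^{\mathsf T}
\left.\frac{\partial^{2}E}{\partial\varphi\,\partial\varphi}\right|_{\varphi_{0}}
\delta\varphi\right),
\]

which is a Gaussian distribution in δφ.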

Another possible direction for follow-on work: if the original "P" classical hypothesis and the new "Q" hypothesis provide upper and lower bounds for energy levels which are very hard to calculate directly in standard QFT (e.g. for strong nuclear forces), they might have some practical value even if P, Q and QFT all turn out to be approximations.

 



