Thanks for coming to this site. Please hit the sidebar links to reach the pages for The many worlds of probability, reality and cognition.
Or go to the table of contents here:
http://randompaulr.blogspot.com/2013/11/the-many-worlds-of-probability-reality_6543.html
If you run into any difficulties with these pages, please write me at krypto78 a.t.g.m.a.i.l.d.o.t.c.o.m. and I will send you an email with the entire article, including latest revisions.
RandomPaul
Wednesday, November 20, 2013
Part VI: Many worlds...
The many worlds of probability, reality and cognition
Noumena I: Spacetime and its discontents
Newton with his belief in absolute space and time considers motion a proof of the creation of the world out of God's arbitrary will, for otherwise it would be inexplicable why matter moves in this [relative to a fixed background frame of reference] rather than any other direction. -- Hermann Weyl (60).
Weyl, a mathematician with a strong comprehension of physics, had quite a lot to say about spacetime. For example, he argued that Mach's principle, as adopted by Einstein, was inconsistent with general relativity.
Background on Weyl
http://plato.stanford.edu/entries/weyl/#LawMotMacPriWeyCosPos
Weyl's book 'Symmetry' online
https://archive.org/details/Symmetry_482
See also my paper,
Einstein, Sommerfeld and the Twin Paradox
http://paulpages.blogspot.com/2013/10/einstein-sommerfeld-and-twin-paradox.html
Einstein had hoped to deal only with "observable facts," in accord with Mach's empiricist (and logical positivist) program, and hence to reduce spacetime motions to relative actions among bodies, but Weyl found that such a level of reduction left logical holes in general relativity. One cannot, I suggest, escape the background frame, even if it is not a strictly Newtonian background frame. Sometimes this frame is construed to be a four-dimensional spacetime block.
So how would one describe the "activity" of a four-dimensional spacetime block? Something must be going on, we think, yet, from our perspective looking "out," that something "transcends" space and time.
Popper, in his defense of phenomenal realism, objected that the spacetime block interpretation of relativity theory implies that time and motion are somehow frozen, or not quite real. While not directly challenging relativity theory, he objected to such a manifold and ruled it out as not in accord with reality as he thought reality ought to be. Still, we should note Popper's trenchant criticism of the logical positivism of most scientists.
My thought: "Laws" of nature, such as Einstein's law of universal gravitation, are often thought of in a causative sense, as in "the apple drops at 9.81 meters per second squared by cause of the law of gravity."
Actually, the law describes a range of phenomena which are found to be predictable via mathematical formulas. We have a set of observable relations "governed" by the equations. If something has mass or momentum, we predict that it will follow a calculated trajectory. But Newton knew that, although he had an algorithm for representing actions in nature, he had not got at the world beneath superficial appearances. How does gravity exercise action at a distance? If you say, via curved spacetime fields, one may ask, how does spacetime "know" to curve?
We may say that gravity is a principal cause of the effect of a rock falling. But, in truth, no one knows what gravity is. "Gravity" is a word used to represent behavior of certain phenomena, and that behavior is predictable and calculable, though such predictability remains open to Hume's criticism.
On this line, it should be noted that Einstein at first resisted basing what became his General Theory of Relativity on geometrical (or, topological) representations of space and time. He thought that physical insight should accompany his field equations, but eventually he settled on spacetime curvature as insight enough. His competition with David Hilbert may well have spurred him to drop that proviso. Of course, we all know of his inability to accept the lack of "realism" implied by quantum mechanics, which got the mathematics right but dispensed with certain givens of phenomenal realism. On this point, we note that he once said that he favored the idea that general relativity's mathematics gave correct answers without accepting the notion that space and time were "really" curved.
Newton had the same problem: There was, to him, an unsatisfactory physical intuition for action at a distance. Some argue that this difficulty has been resolved through the use of "fields," which act as media for wave motion. The electromagnetic field is invoked as a replacement for the ether that Einstein ejected from physics as a useless concept. Still, Einstein saw that the field model was intuitively unsatisfactory.
As demonstrated by the "philosophical" issues raised by quantum theory, the problem is the quantization of energies needed to account for chains of causation. When the energy reaches the quantum level, there are "gaps" in the chain. Hence the issue of causation can't easily be dismissed as a problem of "metaphysics" but is in truth a very important area of discussion on what constitutes "good physics."
One can easily visualize pushing an object, but it is impossible to visualize pulling an object. In everyday experience, when one "pulls," one is in fact pushing. Yet, at the particle level, push and pull are complementary properties associated with charge sign. This fact is now sufficiently familiar as not to seem occult or noumenal. Action at a distance doesn't seem so mysterious, especially if we invoke fields, which are easy enough to describe mathematically, but does anyone really know what a field is? The idea that gravitation is a macro-effect from the actions of gravitons may one day enhance our understanding of nature. But that doesn't mean we really know what's going on at the graviton level.
Gott (61), for example, is representative of numerous physicists who see time as implying many strange possibilities. And Goedel had already argued in the 1940s that time cannot exist at all, implying it is some sort of illusion. Goedel had found a solution to Einstein's field equations of general relativity for a rotating universe in which closed time loops exist, meaning a rocket might travel far enough to find itself in its past. Einstein shrugged off this finding of his good friend, arguing that it does not represent physical reality. But Goedel countered that if such a solution exists at all, then time cannot be what we take it to be and doesn't actually exist (62).
These days, physicists are quite willing to entertain multiple dimension theories of cosmology, as in the many dimensions of string theory and M theory.
We have Penrose's cyclic theory of the cosmos (63), which differs from previous Big Bang-Big Crunch cyclic models. Another idea comes from Paul J. Steinhardt, who proposes an "ekpyrotic universe" model. He writes that his model is based on the idea that our hot big bang universe was formed from the collision of two three-dimensional worlds moving along a hidden, extra dimension. "The two three-dimensional worlds collide and 'stick,' the kinetic energy in the collision is converted to quarks, electrons, photons, etc., that are confined to move along three dimensions. The resulting temperature is finite, so the hot big bang phase begins without a singularity."
Steinhardt on the ekpyrotic universe
http://wwwphy.princeton.edu/~steinh/npr/
The real point here is that spacetime, whatever it is, is rather strange stuff. If space and time "in the extremes" hold strange properties, should we not be cautious about assigning probabilities based on absolute Newtonian space and equably flowing time? It is not necessarily a safe assumption that what is important "in the extremes" has no relevance locally.
And yet, here we are, experiencing "time," or something. The difficulty of coming to grips with the meaning of time suggests that beyond the phenomenal world of appearances is a noumenal world that operates along the lines of Bohm's implicate order, or -- in his metaphor -- of a "holographic universe."
But time is even more mind-twisting in the arena of quantum phenomena (as discussed in Noumena II, below).
The "anthropic cosmological principle" has been a continuing vexation for cosmologists (64). Why is it that the universe seems to be so acutely fine-tuned to permit and encourage human life? One answer is that perhaps we are in a multiverse, or collection of noninteracting or weakly interacting cosmoses. The apparent miniscule probability that the laws and constants are so well suited for the appearance of humans might be answered by increasing the number and variety of cosmoses and hence increasing the distribution of probabilities for cosmic constants.
The apparent improbability of life is not the only reason physicists have for multiverse conjectures. But our concern here is that physicists have used probabilistic reasoning on a question of the existence of life. This sort of reasoning is strongly reminiscent of Pascal's wager and I would argue that the question is too great for the method of probability analysis. The propensity information is far too iffy, if not altogether zero. Yet, that doesn't mean the problem is without merit. To me, it shows that probability logic cannot be applied universally and that it is perforce incomplete. It is not only technically incomplete in Goedel's sense, it is incomplete because it fundamentally rests on the unknowable.
Paul Davies, in the Guardian, wrote: "The multiverse comes with a lot of baggage, such as an overarching space and time to host all those bangs, a universe-generating mechanism to trigger them, physical fields to populate the universes with material stuff, and a selection of forces to make things happen. Cosmologists embrace these features by envisaging sweeping 'meta-laws' that pervade the multiverse and spawn specific bylaws on a universe-by-universe basis. The meta-laws themselves remain unexplained -- eternal, immutable transcendent entities that just happen to exist and must simply be accepted as given. In that respect the meta-laws have a similar status to an unexplained transcendent god." Davies concludes, "Although cosmology has advanced enormously since the time of Laplace, the situation remains the same: there is no compelling need for a supernatural being or prime mover to start the universe off. But when it comes to the laws that explain the big bang, we are in murkier waters."
Davies on the multiverse
http://www.theguardian.com/commentisfree/belief/2010/sep/04/stephen-hawking-big-bang-gap
Noumena II: Quantum weirdness
The double-slit experiment
The weird results of quantum experiments have been known since the 1920s and are what led Werner Heisenberg to his breakthrough mathematical systematization of quantum mechanics.
An example of quantum weirdness is the double-slit experiment, which can be performed with various elementary particles. Consider the case of photons, in which the intensity of the beam is reduced to the point that only one photon at a time is fired at the screen with the slits. In the case where only one slit is open, the photo-plate detector on the other side of the screen will record basically one spot where the photons that make it through the slit arrive in what one takes to be a straight line from source to detector.
However, when two slits are open the photons are detected at different places on the plate. The positions are not fully predictable, and so are random within constraints. After a sufficient number of detections, the trained observer notices a pattern. The spots form an interference pattern of the kind one would expect if a wave passing through the two slits separated into two subwaves that then interacted, showing regions of constructive and destructive wave interference. However, the components of the "waves" are the isolated detection events. From this effect, Max Born described the action of the particles in terms of probability amplitudes, or, that is, waves of probability.
This is weird because there seems to be no way, in terms of classical causality, for the individual detection events to signal to the incoming photons where they ought to land. It also hints that the concept of time isn't what we typically take it to be. That is, one might interpret this result to mean that once the pattern is noticed, one cannot ascribe separate time units to each photon (which seems to be what Popper, influenced by Landé, was advocating). Rather, it might be argued that after the fact the experiment must be construed as an irreducible whole. This bizarre result occurs whether the particles are fired off at the velocity of light or well below it.
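To make the Born-rule bookkeeping concrete, here is a minimal sketch, in Python and with entirely arbitrary units, of an idealized two-slit setup: each slit contributes a complex amplitude at every point on the plate, the probability of a detection at a point is the squared magnitude of the summed amplitudes, and the one-at-a-time photon hits are drawn from that distribution. None of the numbers correspond to any real apparatus; the point is only how fringes arise from adding amplitudes before squaring.

```python
import numpy as np

# Toy far-field model of the two-slit setup, in arbitrary units. Each slit
# contributes a complex amplitude at screen position x; the relative phase
# depends on the path-length difference. Probability of detection at x is
# the squared magnitude of the SUMMED amplitudes (the Born rule).
wavelength = 1.0
slit_separation = 5.0
screen_distance = 100.0
x = np.linspace(-40.0, 40.0, 2001)             # positions on the photo-plate

delta = slit_separation * x / screen_distance  # path-length difference
phase = 2 * np.pi * delta / wavelength

amp1 = np.ones_like(x, dtype=complex)          # amplitude via slit 1
amp2 = np.exp(1j * phase)                      # amplitude via slit 2

p_both = np.abs(amp1 + amp2) ** 2              # both slits open: fringes
p_one_at_a_time = np.abs(amp1) ** 2 + np.abs(amp2) ** 2   # no interference term

p_both /= p_both.sum()                         # normalize to a distribution
hits = np.random.choice(x, size=5000, p=p_both)  # 5000 one-at-a-time detections
print("histogram of 'hits' shows fringes; the sum-of-intensities curve does not")
```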
Schroedinger's cat
In 1935, Erwin Schroedinger proposed what has come to be known as the Schroedinger's cat thought experiment in an attempt to refute the idea that a quantum property in many experimental cases cannot be predicted precisely, but can only be known probabilistically prior to measurement. Exactly what does one mean by measurement? In the last analysis, isn't a measurement an activity of the observer's brain?
To underscore how ludicrous he thought the probability amplitude idea is, Schroedinger gave this scenario: Suppose we place a cat in a box which contains a poison gas pellet rigged to a Geiger counter that measures radioactive decay. The radioactive substance has some suitable half-life, meaning there is some probability that a decay is detected within a given time interval, and some probability that it is not.
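As a side note on what "some suitable half-life" amounts to numerically, here is a minimal sketch (Python, with a hypothetical one-hour half-life chosen purely for illustration) of the standard exponential-decay law giving the probability that a decay has been registered by a given time -- the quantity to which the cat's fate is rigged.

```python
# Probability that at least one decay has been registered by time t for a
# substance with half-life T_half. The one-hour half-life is a hypothetical
# choice for illustration; the law itself is standard exponential decay.
def p_detected(t_hours, T_half=1.0):
    return 1.0 - 2.0 ** (-t_hours / T_half)

for t in (0.5, 1.0, 2.0):
    print(f"after {t} hour(s): P(decay detected) = {p_detected(t):.3f}")
# After exactly one half-life the odds are 50/50 -- the cat scenario's limbo point.
```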
Now in the standard view of quantum theory, there is no routine causation that can be accessed that gives an exact time the detection will occur. So then, the property (in this case, the time of the measurement) does not exist prior to detection but exists in some sort of limbo in which the quantum possibilities -- detection at time interval x and non-detection at time interval x -- are conceived of as a wave of probabilities, with the potential outcomes superposed on each other.
So then, demanded Schroedinger, does not this logically require that the cat is neither alive nor dead prior to the elapse of the specified time interval?! Of course, once we open the box, the "wave function collapses" and the cat's condition -- dead or alive -- tells us whether the quantum event has been detected. The cat's condition is just as much of a detection event as a photo-plate showing a bright spot.
Does this not then mean that history must be observer-centric? No one has been able to find a way out of this dilemma, despite many attempts (see Toward). Einstein conceded that such a model was consistent, but rejected it on philosophical grounds. You don't really suppose the moon is not there when you aren't looking, he said.
The EPR scenario
In fact, also in 1935, Einstein and two coauthors unveiled another attack on quantum weirdness known as the Einstein-Podolsky-Rosen (EPR) thought experiment, in which the authors pointed out that quantum theory implies what Einstein called "spooky action at a distance," which would violate C, the velocity of light in a vacuum, an anchor of his theory of relativity. Later John Bell found a way to apply a test to see whether statistical correlation would uphold the spooky quantum theory. Experiments by Alain Aspect in the 1980s and by others have confirmed, to the satisfaction of most experts, that quantum "teleportation" occurs.
So we may regard a particle as carrying a potential for some property or state that is only revealed upon detection. That is, the experiment "collapses the wave function" in accordance with the property of interest. Curiously, it is possible to "entangle" two particles of the same type at some source. The quantum equations require that each particle carries the complement property of the other particle -- even though one cannot in a proper experiment predict which property will be detected first.
Bohm's version of EPR is easy to follow: An electron has a property called "spin." Just as a screw may rotate left or right, so an electron's spin is given as "up" or "down," which is where it will be detected in a Stern-Gerlach device. There are only two possibilities, because the electron's rotational motion is quantized into halves -- as if the rotation jumps immediately to its mirror position without any transition, just as the Bohr electron has specific discontinuous "shells" around a nucleus.
Concerning electron spin
http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html
So if we entangle electrons at a source and send them in different directions, quantum theory declares that if we detect spin "up" at detector A, then detector B must read spin "down."
In that case, as Einstein and his coauthors pointed out, doesn't that mean that the detection at A required a signal to reach B faster than the velocity of light?
For decades, EPR remained a thought experiment only. A difficulty was that detectors and their related measuring equipment tend to be slightly cranky, giving false positives and false negatives. It may be that error correction codes might have reduced the problem, but it wasn't until Bell introduced his statistical inequalities that the possibility arose of conducting actual tests of correlation.
In the early 1980s Aspect arranged photon experiments that tested for Bell's inequalities and made the sensational discovery that the correlation showed that Einstein was wrong and that detection of one property strongly implied that its "co-particle" would show the complementary property. (We should tip our hats both to Schroedinger and Einstein for the acuity of their thought experiments.) Further, Aspect did experiments in which monitors were arranged so that any signal from one particle to another would have to exceed the velocity of light. Even so, the results held.
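For readers who want to see the shape of the statistical test involved, the following is a rough Monte Carlo sketch, not a description of Aspect's actual apparatus: it assumes idealized, noise-free polarization measurements on entangled photon pairs, with the joint statistics quantum theory predicts, and computes the CHSH combination of correlations that Bell-type inequalities bound at 2 for any local hidden-variable account.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation(angle_a, angle_b, trials=200_000):
    """Monte Carlo estimate of E(a,b) for polarization-entangled photon pairs.
    Quantum theory predicts the two outcomes agree with probability
    cos^2(a - b), so E(a,b) = cos(2(a - b)). Detectors are assumed ideal."""
    same = rng.random(trials) < np.cos(angle_a - angle_b) ** 2
    products = np.where(same, 1.0, -1.0)       # +1 when outcomes agree, -1 otherwise
    return products.mean()

# Standard CHSH analyzer angles (radians)
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = (correlation(a, b) - correlation(a, b2)
     + correlation(a2, b) + correlation(a2, b2))
print(f"CHSH S ~ {S:.3f}  (any local hidden-variable model: |S| <= 2; QM: 2*sqrt(2) ~ 2.83)")
```

Run as written, S comes out near 2.83, the quantum-mechanical maximum, which is the sort of excess over the classical bound of 2 that the actual experiments reported.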
This property of entanglement is being introduced into computer security regimens because if, say, the NSA or another party is looking at the data stream, entangled particles can be used to tip off the sender that the stream is being observed.
Hidden variables
John Von Neumann, contradicting Einstein, published a proof that quantum theory was complete in Heisenberg's sense and that "hidden variables" could not be used to devise causal machinery to explain quantum weirdness. Intuitively, one can apprehend this by noting that if one thinks of causes as undetected force vectors, then Planck's constant means that there is a minimum on the amount of force (defined in terms of energy) that can exist insofar as detection or observation is concerned. If we think of causes in terms of rows of dominos fanning out and at points interacting, we see there is nothing smaller than the "Planck domino." So there are bound to be gaps in what we think of as "real world causation."
Popper objected to Von Neumann's claim on grounds that after it was made, discoveries occurred in the physics of the nucleus that required "new" variables. Yet if hidden variables are taken to mean the forces of quantum chromodynamics and the other field theories, these have no direct influence on the behaviors of quantum mechanics (now known as quantum field theory). Also, these other theories are likewise subject to quantum weirdness, so if we play this game, we end up with a level where the "variables" run out.
We should note that by "hidden variable," Von Neumann evidently had in mind the materialist viewpoint of scientists like Bohm whose materialism led him to reject the minimalist ideas of the "Copenhagen interpretation" whereby what one could not in principle observe simply doesn't count. Instead, Bohm sought what might be called a pseudo-materialist reality in which hidden variables are operative if one concedes the bi-locality inherent in entanglement. In fact, I tend to agree with Bohm's view of some hidden order, as summarized by his "holographic universe" metaphor. On the other hand, I do not agree that he succeeded in his ambition to draw a sharp boundary between the "real external world" and subjective perception.
Bohm quotes John Archibald Wheeler:
"No phenomenon is a phenomenon until it is an observed phenomenon" so that "the universe does not exist 'out there' independently of all acts of observation. It is in some strange sense a participatory universe. The present choice of mode of observation... should influence what we see about the past... the past is undefined and undefinable without the observation" (65).
"We can agree with Wheeler that no phenomenon is a phenomenon until it is observed, as by definition, a phenomenon is what appears. Therefore it evidently cannot be a phenomenon unless it is the content of an observation," Bohm says, adding, "The key point in an ontological interpretation such as ours is to ask the question as to whether there is an underlying reality that exists independently of observation" (66).
Bohm argues that a "many minds" interpretation of quantum effects removes "many of the difficulties with the interpretation of [Hugh] Everett and [Bryce] DeWitt (67), but requires making a theory of mind basically to account for the phenomena of physics. At present we have no foundations for such a theory..." He goes on to find fault with this idea.
And yet, Bohm sees that "ultimately our overall world view is neither absolutely deterministic nor absolutely indeterministic," adding: "Rather it implies that these two extremes are abstractions which constitute different views or aspects of the overall set of appearances" (68).
So perhaps the thesis of determinism and the antithesis of indeterminism resolve in the synthesis of the noumenal world. In fact, Bohm says observables have no fundamental significance and prefers an entity dubbed a "be-able," again showing his "implicate order" has something in common with our "noumenal world." And yet our conceptualization is at root more radical than is his.
One specialist in relativity theory, Kip S. Thorne (69), has expressed a different take. Is it possible that the spacetime continuum, or spacetime block, is multiply connected? After all, if, as relativity holds, spacetime is described by a Riemannian geometry, then naive Euclidean space is not operative, except locally, on scales where curvature effects are vanishingly small. So in that case, it shouldn't be all that surprising that spacetime might have "holes" connecting one region to another. Where would such wormholes be most plausible? In black holes, Thorne says. By this, the possibility of a "naked singularity" is addressed. The singularity is the point at which Einstein's field equations cease to be operative; the presumed infinitely dense point at the center of mass doesn't exist because the wormhole ensures that the singularity never occurs; it smooths out spacetime (70).
One can see an analog of this by considering a sphere, which is the surface of a ball. A wormhole would be analogous to a straight-line tunnel connecting Berlin and London by bypassing the curvature of the Earth. So on this analogy, one can think of such tunnels connecting different regions of spacetime. The geodesic -- analogous to a great circle on a sphere -- yields the shortest distance between points in Einstein spacetime. But if we posit a manifold, or cosmic framework, of at least five dimensions then one finds shortcuts, topologically, connecting distinct points on the spacetime "surface." Does this accord with physical reality? The answer is not yet in.
Such wormholes could connect different points in time without connecting different regions of space, thereby setting up a time travel scenario, though he is quoted as arguing that his equation precludes time travel paradoxes.
Thorne's ideas on black holes and wormholes
https://en.wikipedia.org/wiki/Kip_Thorne
The standard many-worlds conjecture is an interpretation of quantum mechanics that asserts that a universal wave function represents objective phenomenal reality. So there is no intrinsically random "collapse of the wave function" when a detection occurs. The idea is to be rid of the Schroedinger cat scenario by requiring that in one world the cat is alive and in another it is dead. The observer's world is determined by whether he detects cat dead or cat alive. These worlds are continually unfolding.
The key point here is the attempt to return to a fully deterministic universe, a modern Laplacian clockwork model. However, as the observer is unable to foretell which world he will end up in, his ignorance (stemming from randomness1 and randomness2) is tantamount to intrinsic quantum randomness (randomness3).
In fact, I wonder how much of a gain there is in saying Schroedinger's cat was alive in one world and dead in another prior to observation as opposed to saying the cat was in two superposed states relative to the observer.
On the other hand it seems likely that Hawking favors the notion of a universal wave function because it implies that information represents hard, "external" reality. But even so, the information exists in superposed states as far as a human observer is concerned.
At present, there is no means of calculating which world the cat's observer will find himself in. He can only apply the usual quantum probability methods.
Time-bending implications
What few have understood about Aspect's verification of quantum results is that time itself is subject to quantum weirdness.
A logical result of the entanglement finding is this scenario:
We have two detectors: A, which is two meters from the source, and B, which is one meter distant. You are positioned at detector A and cannot observe B. Detector A goes off and registers, say, spin "down." You know immediately that Detector B must read spin "up" (assuming no equipment-generated error). That is, from your position, the detector at B went off before your detector at A. If you like, you may in principle greatly increase the scale of the distances to the detectors. It makes no difference. B seems to have received a signal before you even looked at A. It's as if time is going backward with respect to B, as far as you are concerned.
Now it is true that a great many physicists agree with Einstein in disdaining such scenarios, and assume that the picture is incomplete. However, incomplete or not, the fact is that the observer's sense of time is stood on its head. And this logical implication is validated by Aspect's results.
Now, let's extend this experimentally doable scenario with a thought experiment reminiscent of Schroedinger's cat. Suppose you have an assistant stationed at detector B, at X kilometers from the source. You are at X/2 kilometers from the source. Your assistant is to record the detection as soon as it goes off, but wait for your call to report the property. As soon as you look at A, you know his property will be the complement of yours. So was he in a superposed state with respect to you? Obnoxious as many find this, the logical outcome, based on the Aspect experiments and quantum rules, is yes.
True, you cannot in relativity theory receive the information from your assistant faster than C, thus presenting the illusion of time linearity. And yet, I suggest, neither time nor our memories are what we suppose them to be.
The amplituhedron
When big particle accelerators were introduced, it was found that Richard Feynman's diagrams, though conceptually useful, were woefully inadequate for calculating actual particle interactions. As a result, physicists have introduced a remarkable calculational tool called the "amplituhedron." This is a topological object that exists in higher-dimensional space. Particle interactions are assumed to follow the rules encoded in this object, and not the rules of mechanistic or pseudo-mechanistic, continuous Newtonian and Einsteinian spacetime.
Specifically, it was found that the scattering amplitude equals the volume of this object. The details of a particular scattering process dictate the dimensionality and facets of the corresponding amplituhedron.
It has been suggested that the amplituhedron, or a similar geometric object, could help resolve the perplexing lack of commensurability of particle theory and relativity theory by removing two deeply rooted principles of physics: locality and unitarity.
“Both are hard-wired in the usual way we think about things,” according to Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study in Princeton. “Both are suspect.”
Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature.
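Unitarity can be stated in one line of arithmetic. The following toy sketch (Python; the particular matrix and amplitudes are arbitrary choices made only for illustration) shows the axiom in miniature: a unitary transformation acting on a vector of quantum amplitudes leaves the sum of their squared magnitudes, the total probability, at exactly 1.

```python
import numpy as np

# Unitarity in miniature: a unitary matrix U acting on a vector of quantum
# amplitudes never changes the total probability, i.e. the sum of |amplitude|^2.
# The matrix and amplitudes below are arbitrary illustrative choices.
U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)           # U @ U.conj().T is the identity

state = np.array([0.6, 0.8j])                  # |0.6|^2 + |0.8i|^2 = 1
print("total probability before:", np.sum(np.abs(state) ** 2))   # 1.0
state = U @ state
print("total probability after :", np.sum(np.abs(state) ** 2))   # still 1.0
```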
At this point I interject that an axiom of nearly all probability theories is that the probabilities of the outcome set must sum to 1. So if, at a fundamental, noumenal level, this axiom does not hold, what does this bode for the whole concept of probability? At the very least, we sense some sort of nonlinearity here. (At this point we must acknowledge that quantum physicists have for decades used negative probabilities with respect to the situation before the "collapse of the wave function," but overall unity -- the requirement that the probabilities sum to 1 -- is preserved.)
Mark Burgin on negative probabilities
http://arxiv.org/ftp/arxiv/papers/1008/1008.1287.pdf
Wikipedia article on negative probabilities
https://en.wikipedia.org/wiki/Negative_probability
According to the article linked below, scientists have also found a “master amplituhedron” with an infinite number of facets, analogous to a circle in 2-D, which has an infinite number of sides. This amplituhedron's volume represents, in theory, the total amplitude of all physical processes. Lower-dimensional amplituhedra, which correspond to interactions between finite numbers of particles, are conceived of as existing on the faces of this master structure.
“They are very powerful calculational techniques, but they are also incredibly suggestive,” said one scientist. “They suggest that thinking in terms of space-time was not the right way of going about this.”
“We can’t rely on the usual familiar quantum mechanical spacetime pictures of describing physics,” said Arkani-Hamed. “We have to learn new ways of talking about it. This work is a baby step in that direction.”
So it indeed looks as though time and space are in fact some sort of illusion.
In my estimate, the amplituhedron is a means of detecting the noumenal world that is beyond the world of appearances or phenomena. Quantum weirdness implies that interactions are occurring in a way and place that do not obey our typical perceptual conceits. It's as if, in our usual perceptual state, we are encountering the "shadows" of "projections" from another "manifold."
Simons Foundation article on the amplituhedron
https://www.simonsfoundation.org/quanta/20130917-a-jewel-at-the-heart-of-quantum-physics/
The spacetime block of relativity theory likewise suggests that there is a realm that transcends ordinary energy, time and motion.
Zeno's paradox returns
Motion is in the eye of the beholder.
When an object is lifted to height n, it has a specific potential energy definable in terms of Planck's constant. Hence, only the potential energies associated with multiples of Planck's constant are permitted. In that case, only heights associated with those potential energies are permitted. When the object is released and falls, its kinetic energy increases with the acceleration. But the rule that only multiples of Planck's constant are permitted means that there is a finite number of transition heights before the object hits the ground. So what happens between quantum height y and quantum height y - 1?
No doubt Zeno would be delighted with the answer:
The macro-object can't cross these quantum "barriers" via what we think of as motion. The macro-object makes a set of quantum jumps across each "barrier," exactly like electrons in an atom jumping from one orbital probability shell to another.
Here we have a clear-cut instance of the "macro-world" deceiving us, when in fact "motion" must occur in quantum jumps. This is important for not only what it says about motion, but also because it shows that the macro-world is highly -- not minimally -- interactive with the "quantum world." Or that is, that both are highly interactive with some noumenal world that can only be apprehended indirectly.
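To put a rough number on how fine these jumps would have to be, here is a back-of-envelope sketch that takes the section's premise loosely: it assumes the permitted potential energies differ by an energy step of Planck's constant times a frequency of 1 Hz. That frequency is purely an assumption made to get a figure; whatever granularity one picks in this neighborhood, the height steps for an everyday mass come out absurdly far below anything measurable, which is why the jumps pass for continuous motion.

```python
# Back-of-envelope scale check for the quantum-jump picture sketched above.
# Assumption (made only to get a number): the permitted potential energies
# differ by an energy step of Planck's constant times a frequency of 1 Hz.
h = 6.626e-34       # Planck's constant, J*s
g = 9.81            # gravitational acceleration, m/s^2
m = 1.0             # an everyday 1 kg object

energy_step = h * 1.0                 # joules, under the 1 Hz assumption
height_step = energy_step / (m * g)   # spacing between "permitted" heights
print(f"height step ~ {height_step:.1e} m")   # ~6.8e-35 m
# Whatever granularity one assumes near this scale, the steps are so far below
# anything measurable that the jumps are indistinguishable from smooth motion.
```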
Even in classical physics, notes Popper in his attack on the Copenhagen interpretation, if acceleration is measured too finely, one finds one gets an indeterminate value, as in a = 0/0 (71).
Even on a cosmic scale, quantum weirdness is logically required.
Cosmic weirdness
Suppose we had a theory of everything (ToE) algorithm. Then at proper time ta we will be able to get a snapshot of the ToE waveform -- obtained from the evolving net ToE vector -- from ta to tb. It is pointless to decompose the waveform below the threshold set by Planck's constant. So the discrete superpositions of the ToE, which might be used to describe the evolution of the cosmos, cannot be reduced to some continuum level. If they could be reduced infinitely, then the cosmic waveform would in effect represent a single history. But, the fact that the waveform is composed of quantum harmonics means that more than one history (and future) is in superposition.
In this respect, we see that quantum theory requires "many universes," though not necessarily in the sense of Hugh Everett or of those who posit "bubble" universes.
Many will object that what we have is simply an interpretation of the meaning of quantum theory. But, I reply that once hidden variables are eliminated, and granted the success of the Aspect experiments, quantum weirdness logically follows from quantum theory.
EPR, action at a distance, special relativity and the fuzziness of motion and time and of the cosmos itself, all suggest that our reality process only reflects but cannot represent noumenal reality. That is, what we visualize and see is not what's actually behind what we visualize and see. Quantum theory gives us some insight into how noumena are mapped into three- and four-dimensional phenomena, but much remains uncharted.
So if phenomenon A correlates with phenomenon B, we may be able to find some algorithm that predicts this and other outcomes with a probability near 1. But if A and B are phenomena with a relation determined in a noumenal "world," then what is to prevent all sorts of oddities that make no sense to phenomenalist physicists? Answer: If so, it might be difficult to plumb such a world, just as a shadow only discloses some information about the object between the projection and the light source.
Physicists are, I would say, somewhat more likely to accept a nonlinearity in causality than are scientists in general. For example, Brian Josephson, a Nobel laureate in physics, favors a radical overhaul of physical knowledge by taking into account such peculiarities as outlined by John A. Wheeler, who proposes a "participatory universe." Josephson believes C.S. Peirce's semiotics combined with a new approach to biology may help resolve the impasses of physics, such as the evident incommensurability of the standard model of particle physics with the general theory of relativity.
Josephson on a 'participatory universe'
http://arxiv.org/pdf/1108.4860v4.pdf
And Max Tegmark argues that the cosmos has virtually zero algorithmic information content, despite the assumption that "an accurate description of the state of the universe appears to require a mind-bogglingly large and perhaps even infinite amount of information, even if we restrict our attention to a small subsystem such as a rabbit."
But, he says that if the Schroedinger equation is universally valid, then "decoherence together with the standard chaotic behavior of certain non-linear systems will make the universe appear extremely complex to any self-aware subsets that happen to inhabit it now, even if it was in a quite simple state shortly after the big bang."
Tegmark's home page
http://space.mit.edu/home/tegmark/home.html
Roger Penrose has long been interested in the "huge gap" in the understanding of physics posed by the Schroedinger's cat scenario. He sees this issue as strongly suggestive of a quantum influence in consciousness -- consciousness being crucial to the collapse of Schroedinger's wave function.
He and Stuart Hameroff, an anesthesiologist, propose that microtubules in the brain are where the relevant quantum activities occur.
Even though Penrose is attempting to expunge the problem of superposed realities from physics with his novel proposal, the point to notice here is that he argues that the quantum enigma is indicative of something beyond current physical knowledge that must be taken into account. The conscious mind, he claims, is not at root doing routine calculations. That chore is handled by the unconscious autonomic systems, he says.
In our terms, he is pointing to the existence of a noumenal world that does not operate in the routine "cause-effect" mode of a calculational model.
The 'Orch OR' model for consciousness
http://www.quantumconsciousness.org/penrose-hameroff/orchOR.html
Penrose talk on quantum activity in consciousness
https://www.youtube.com/watch?v=3WXTX0IUaOg
On the other hand, there has always been a strong belief in non-illusional reality among physicists. We have Einstein and Popper as notable examples. Popper was greatly influenced by Alfred Landé, whose strong opposition to the Copenhagen interpretation is spelled out in books published well before Aspect's experiments had confirmed bilocality to the satisfaction of most physicists (72).
Yet, the approach of many probability theorists has been to ignore these sorts of implications. Carnap's attitude is typical. In his Logical Foundations of Probability (73), Carnap mentions a discussion by James Jeans of the probability waves of quantum mechanics, which Jeans characterizes as "waves of knowledge," implying "a pronounced step in the direction of mentalism" (74).
But Carnap breezes right past Jeans's point, an omission that I hazard to guess calls into question the logical foundation of Carnap's whole system -- though I note that I have not attempted to plow through the dense forest of mathematical logic symbols in Carnap's book.
I have tried to address some of the issues in need of exploration in my paper Toward, which discusses the reality construction process and its implications. Many worlds of probability is intended as a companion to that paper:
Toward a signal model of perception
http://paulpages.blogspot.com/2013/11/edited-version-posted-march-3-2013-i_7324.html
We have two opposing tendencies: On the one hand, our experiments require that detections occur according to a frequency-style "probability wave," in which the probability of a detection is constrained by the square of the wave amplitude. If numerous trials are done, the law of large numbers will come into effect in, say, the correlation found in an Aspect experiment. So our sense is that quantum probabilities are intrinsic, and that quantum randomness is fundamental. That is, quantum propensities verify an "objective external reality."
On the other hand, the logical implication of such randomness -- as demonstrated in the double-slit and Aspect experiments -- is that what we call reality must be more subjective than usually supposed, that various histories (and hence futures) are superpositions of potential outcomes and do not actualize until observation in the form of cognitive focus (which may be, for all we know, partly unconscious). So one's mental state and train of thought must influence -- after the manner of the science fiction film The Matrix -- one's perceived world. This is the asymmetric three-sink vector field problem. Where will the fixed point (as in center of mass or gravity) be found?
So then assignment of probabilities may seem to make sense, but only if one neglects the influence of one's mind on the perceived outcomes. As long as you stay in your assumed "world," probability calculations may work well enough -- as you inhabit a world that "goes in that direction" (where many people micromanage "the future" in terms of probabilities).
This logical outcome of course is what Popper and many others have objected to. But, despite a great many attempts to counter this line of thought, none have succeeded (as I argue in Toward). At the end of his career, Popper, faced with the Aspect results, was reduced to a statement of faith: even if bi-locality holds, his belief in classical realism would not be shaken.
He points to the horror of Hiroshima and Nagasaki and the "real suffering" of the victims as a moral reason to uphold his objectivism. That is, he was arguing that we must not trivialize their pain by saying our perception of what happened is a consequence of some sort of illusion (75).
Einstein, it seems clear, implicitly believed in a phenomenal world only, though his own progress with relativity required the ditching of such seemingly necessary phenomena as the ether as a medium for not only light waves but gravitational waves. In his more mature period, he conceded that the spacetime continuum devised by himself and Minkowski was in effect an ether. My estimate is that Einstein mildly conceded a noumenal world, but resisted a strong dependence on such a concept. Bohm, who favored a form of "realism," settled on a noumenal world with the analogies of a holographic universe and of the "implicate" order shown by an ink blob that unravels when spun in a viscous fluid and is pretty much exactly restored to its original state when the spin is reversed. Phenomena are observed because of some noumenal relation.
So we say there is some physical process going on which we can only capture in part. By probability wave, we mean that we may use a wave model to represent what we can know about the unfolding of the process. The probability wave on the one hand implies an objective reality but on the other a reality unfolding in a feedback loop within one's brain-mind.
Waves imply change. But whatever is going on in some noumenal realm is partly predictable in terms of probability of observed properties. That is, the probability wave function is a means of partly predicting event observations but we cannot say it models the noumenal process precisely or at all.
As Jeans put it:
"Heisenberg attacked the enigma of the physical universe by giving up the main enigma -- the nature of the physical universe -- as insoluble, and concentrating on the minor puzzle of co-ordinating our observations of the universe. Thus it is not surprising that the wave picture which finally emerged should prove to be concerned solely with our knowledge of the universe as obtained from our observations (76)."
This pragmatic idea of ignoring the noumenal, or as some might prefer, sub-phenomenal, world has been largely adopted by practicing scientists, who also adopt a background assumption that discussion of interpretation is philosophy and hence outside science. They accept Popper's view that there exists a line dividing science from meta-science and similarly his view that interpretations are not falsifiable. And yet, a counterexample to that belief is the fact that Einstein's interpretation barring bi-localism was falsified, in the minds of most physicists, by the experiments of Aspect.
The importance of brain teasers
The Monty Hall problem
The scenario's opener: The contestant is shown three curtains and told that behind one is a new car and behind each of the others is an old boot. She is told to choose one of the three curtains, which we will label from left to right as A, B and C.
She chooses B.
Monty opens curtain A and reveals a boot. He then surprises her with the question: "Do you want to stick with your choice of B, or do you want to switch to curtain C?"
The problem is: Should she switch?
The counterintuitive answer, according to numerous probabilists, is yes. When the problem, and its answer, first appeared in the press there were howls of protest, including from mathematicians and statisticians.
Here is the reasoning: When she chose B she had a 1/3 chance of winning a car. Hence her choice of B had a 2/3 chance of being wrong. Once Monty opened curtain A, her choice still carried a 2/3 probability of error. Hence a switch to C gives her a 2/3 probability of being right!
Various experiments were done and it was found that a decision to switch tended to "win a car" in two out of three trials.
A few points:
The contestant starts out with complete observer ignorance. She has no idea whether a "common" permutation is in effect and so she might as well assume a randomization process has established the permutation.
Once Monty opens curtain A, the information available to her increases and this affects the probabilities in an unanticipated way. The typical reaction is to say that whether one switches or not is immaterial because the odds are now 50/50. It seems quite bothersome that her mental state can affect the probabilities. After all, when she chose B, she wasn't in the standard view actually making anything happen. So why should the disclosure of the boot at A make any difference as to what actually happens? Hence the thought that we have a new trial which ought to be independent from the previous, making a probability of 1/2.
However, the new information creates conditions for application of conditional probability. This experiment then tends to underscore that Bayesian reasoning has "real world" applications. (There exist Monty Hall proofs that use the Bayesian formula, by the way.)
The initial possible permutations, in terms of car or boot, are:
bcb
bbc
cbb
where bcb means, for example, boot behind curtain A, car behind curtain B, boot behind curtain C.
Once curtain A is raised, the contestant has the information that two orderings remain, bcb and bbc, leading to the thought that the probability of guessing correctly is 1/2. But before Monty asks her if she wishes to switch, she has made an estimate based upon the initial information. In that case, her probability of guessing wrong is 2/3. If she switches, her probability of winning the car becomes 2/3.
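Those who distrust the verbal argument can check it by brute force. Below is a minimal Monte Carlo sketch (Python) of the game exactly as described: the car is hidden at random, the contestant picks a curtain, Monty opens an unchosen curtain hiding a boot, and the contestant either sticks or switches. The "experiments" mentioned earlier amount to the same thing -- sticking wins about one time in three, switching about two times in three.

```python
import random

def play(switch, trials=100_000):
    """Monte Carlo of the game described above: the car is hidden at random,
    the contestant picks a curtain, Monty opens an unchosen curtain hiding a
    boot, and the contestant sticks or switches. Returns the win fraction."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # curtain hiding the car
        pick = random.randrange(3)     # contestant's first choice
        # Monty opens a curtain that is neither the pick nor the car.
        # (When he has two options, the arbitrary tiebreak below does not
        # affect the overall stick-versus-switch win rates.)
        opened = next(c for c in range(3) if c != pick and c != car)
        if switch:
            pick = next(c for c in range(3) if c != pick and c != opened)
        wins += (pick == car)
    return wins / trials

print("stick :", play(switch=False))   # ~0.33
print("switch:", play(switch=True))    # ~0.67
```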
Part of the perplexity stems from the types of randomness and probability at hand. A modern American tends to relate probabilities to assumed random forces in the external world, rather than only to mental state (the principle of insufficient reason).
And yet, if we grant that the possible permutations entail superposed histories, we then must consider a negative feedback control process that might affect the probabilities, as suggested above and described in Toward. The brain's method of "selecting" and "constructing" reality may help explain why a few people are consistently well above or well below the mean in such low-skill games of chance.
This point of course raises a serious difficulty: solipsism. I have addressed that issue, however inadequately, in Toward.
There are a number of other probability brain teasers, with attendant controversy over proper methods and quantifications. A significant issue in these controversies is the usefulness of the probabilistic process in making decisions. As Paul Samuelson observed, the St. Petersburg paradox is a poser that would never happen in actuality because no sane person would make such an offer.
Samuelson on paradoxes
http://www.jstor.org/stable/2722712
Keynes argued similarly that simple expectation value is not always valid, as when the risk is unacceptable no matter how great the payoff.
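A short simulation brings out why Samuelson and Keynes balked at the bare expectation value. The sketch below (Python) plays the St. Petersburg game many times: the mathematical expectation diverges, yet the average payoff over any realistic run of plays stays unimpressively small.

```python
import random

def st_petersburg_payoff():
    """One play: toss a fair coin until the first head. The pot starts at 2
    and doubles on each tail, so the payoff is 2**(number of tosses)."""
    pot = 2
    while random.random() < 0.5:       # tails -- keep doubling
        pot *= 2
    return pot

trials = 100_000
mean_payoff = sum(st_petersburg_payoff() for _ in range(trials)) / trials
print(f"average payoff over {trials} plays: {mean_payoff:.1f}")
# The theoretical expectation 2*(1/2) + 4*(1/4) + 8*(1/8) + ... diverges,
# yet the simulated average stays modest and grows only slowly with more plays.
```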
In the case of the Sleeping Beauty problem, we could notionally run a series of trials to find the limiting relative frequency. But such an experiment is likely to encounter ethics barriers and, even more likely, to be seen as too frivolous for the time and expense.
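The notional frequency trial can at least be run in software, sidestepping the ethics and the expense. The sketch below (Python) assumes the standard protocol -- one awakening if the coin lands heads, two if tails, with no memory between awakenings -- and counts the fraction of awakenings at which the coin shows heads. It converges near 1/3, the "thirder" relative frequency, though what that frequency means for Beauty's credence is exactly what the dispute is about.

```python
import random

# The notional frequency trial, run in software. Standard protocol assumed:
# heads -> Beauty is awakened once, tails -> awakened twice, no memory between.
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_awakenings += 1

print("fraction of awakenings at which the coin shows heads:",
      round(heads_awakenings / total_awakenings, 3))   # ~0.333, the "thirder" frequency
```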
These and other posers are legitimate questions for those inclined to logical analysis. But note that such scenarios all assume a "linear" background randomness. Such an assumption may serve in many instances, but what of potential exceptions? For example, Sleeping Beauty, from her orientation, may have several superposed "histories" upon awakening. Which history "happens" is, from the experimenter's orientation, partly guided by quantum probabilities. So to ask for the "linear" probability solution to the Sleeping Beauty problem is to ignore the "reality wave" probabilities that affect any solution.
The St. Petersburg paradox
https://en.wikipedia.org/wiki/St._Petersburg_paradox
http://books.google.com/books?id=vNvXkFUbfM8C&pg=PA267&lpg=PA267&dq=robert+martin+st+petersburg+paradox+dictionary&source=bl&ots=x5NVz53Ggc&sig=JrlvGjnw5tcX9SEbgaA1CbKGW9U&hl=en&sa=X&ei=OdzmUdWXItK24AOxqoHQAQ&ved=0CDYQ6AEwAQ#v=onepage&q=robert%20martin%20st%20petersburg%20paradox%20dictionary&f=false
The Sleeping Beauty problem
https://en.wikipedia.org/wiki/Sleeping_Beauty_problem
http://www.u.arizona.edu/~thorgan/papers/other/Beauty.htm
A noumenal world
Ludwig Wittgenstein's Tractatus does something very similar to what Goedel proved rigorously, and reflects the paradoxes of Bertrand Russell and Georg Cantor. That is, philosophy is described by statements, or propositions, but cannot get at "the problems of life." I.e., philosophy uses a mechanistic structure which cannot apprehend directly what others have called the noumenal world. Hence, the propositions used in Tractatus are themselves nonsense. Again, the self-referencing dilemma.
Interestingly, later on Wittgenstein was unable to follow Goedel and dropped work on the philosophy of mathematics.
The phenomenal versus the noumenal is reflected in experiments in which participants at a console are urged to inflict pain on a subject allegedly connected to an electroshock device, but who is in fact an actor simulating pain responses. Here we see Hannah Arendt's "banality of evil" among those who obey the experimenter's commands to inflict greater and greater pain. Those who passively obey criminal orders are responding to the social cues suggested by the situation, and are rather easily persuaded when they see others carrying out obnoxious deeds under "legal" circumstances. They accept rationalizations because, essentially, they see their immediate interest in conforming to settings controlled by some authority. In some cases, they may also have a psychological need to express the primitive predator within (which for most people is expressed during sporting events or entertainment of other sorts). These persons are close to the phenomenal world accepted by Darwinists.
Yet there are those who resist criminal orders or cajoling, whatever the setting. Is this only the Freudian superego that has been programmed to bar such behavior (the internalization of parental strictures)? If so, one would suspect that such inhibitions would be weakened over time by consistent exposure to the actions of the herd. Yet, there are those who do not respond well to the blandishments that come from herd leaders and who strenuously resist being pushed into criminal (even if "legalized") behavior. Very often such persons cite religious convictions. Still, as shown by the horrific history of religious warfare, it is possible for a person to have a set of religious ideas that do not work against the stampede effect.
So such peculiar individuals point to an interior moral compass not found in others, or if some others do possess that quality, it has been greatly repressed. The idea that such a moral compass is a consequence of random physical forces is, by today's standards, plausible. But another possibility is connection with a noumenal world, which holds the source of the resistance of banal evil.
Types of intuition
Consider Type 1 intuition:
Let us consider mathematical intuition, in which we have what is really an informed guess as to a particular mathematical truth.
Such intuition is wrong often enough that the word "counterintuitive" is common among mathematicians. Such informed guesswork is based on one's experience with similar sets of relations. So such intuition can improve with experience, thus the respect given to experts.
There is also intuition based on subtle clues, which may tip one off to imminent danger.
Then we have someone who is vexed by a scientific (or other) problem and, no matter how much he spins his mental wheels, is unable to solve it. But while asleep or in a reverie, he suddenly grasps an answer.
The following scenario is plausible:
He has the intuition, based on experience, that the problem is solvable (though he may be wrong about this). In many cases he has quieted the left-brain analytic function in order to permit the right brain to make associations at a more "primitive" level. It is noteworthy that the left brain will call to consciousness the precise left-right, or time sequence, order of a telephone number. When fatigue dims the analytic function, the right brain will call to consciousness the digits, but generally not in left-right order. So one can see how some set of ideas might similarly be placed in an unexpected arrangement by the right brain, leading possibly to a new insight recognized by the integrated mind.
The analytic functions are in some people "closer to consciousness," being centered in the frontal lobes, which represent the most recent major adaptation of the human species. The more basic associative functions are often regarded as an expression of an earlier, in an evolutionary sense, segment of the brain, and so further into the unconscious. This is the region expressed by artists of all sorts.
Mental relaxation means curtailing the analytic function so as to let the associative region express itself. When one is dreaming, it may be said that one's analytic function and executive control is almost shut down -- though the dream censorship shows that the mental monitor is still active.
So we might say that, at times, what is meant by intuition is that the brain's executive function is refereeing the analytic and associative processes and integrating them into an insight that may prove fruitful.
We regard this form of intuition as belonging to the phenomenal, and not the noumenal world -- though the noumenal world's influence is felt, I daresay, at all times.
[Another view of intuition is that of Henri Bergson.
Discussion of Bergson's ideas
http://plato.stanford.edu/entries/bergson/ ]
But Type 2 intuition is the direct knowledge of something without recourse to the phenomenal world associated with the senses (of which there are many, and not five). This form of communication (though who's to say what is doing the communicating) bypasses or transcends the phenomenal world, as when an individual turns about upon being gazed at from a distance. I realize that such a phenomenon doesn't get much support in the available literature; however, I have on many occasions looked at people from behind from a distance -- say while on public transport -- and noticed them turning about and scanning the middle distance, often with a quizzical look on their faces. Of course such an effect can be "explained," but it seems quite apparent that the person -- more often a woman than a man -- is not turning around for identifiable reasons.
The person who turns about may not even be conscious of what prompted her. Part of her brain has "intuitive" or direct knowledge of another's presence. One can view this effect in terms of a Darwinistic survival advantage. That is, one may say that conscious life forms interact with an unknown world, which, by its nature is immaterial and apparently not subject to the laws of physics as they apply in the phenomenal world.
Sometimes, and perhaps always, Type 1 intuition would seem to have at its core Type 2 intuition.
This is to say there is something other than digital and analog reasoning, whether unconscious or not. Hence, one would not expect an artificial intelligence program, no matter how advanced, to have Type 2 intuition (77).
Of course, it is to be expected that some will disparage such ideas as constituting a "revival of vitalism." However, the anti-vitalist must do more than wave hands against "paranormal" events; he must make serious attempts to exclude the likelihood of what I term noumenal effects.
A confusion here is the claim that, because vitalism can't be tested or falsified in a Popperian sense, the idea is hence unscientific and must be ignored by scientifically minded people. True, there has been an ongoing battle of statistics between the yea-sayers and the naysayers. But I wonder about attempts to use repeated trials, because it seems unlikely that the independence criterion will work. Here is a case where Bayesian methods might be more appropriate.
In the Newtonian-Laplacian era, prior to the quantum mechanics watershed, the concept of randomness was tied to the belief in a fully deterministic cosmos, in which humans are players on a cosmic stage. The Laplacian clockwork model of the cosmos forbids intrinsic randomness among the cogs, wheels and pulleys. The only thing that might, from a human viewpoint, be construed as random would have been the actions of the élan vital. Of course the devout did not consider the vital spirit to be random at all, but rather saw it as stemming from a direct influence of God.
Curiously, in the minds of some, a clockwork cosmos seemed to imply a need for God. Otherwise, there would be no free will. Humans would be reduced to delusional automatons.
So uncertainty, in the clockwork model, was viewed as simply a lack of sufficient knowledge for making a prediction. In the famous conceit of Laplace, it was thought that a grand robot would be able to calculate every trajectory in the entire universe to any extent forward or backward in time. It was lack of computing power, not inherent randomness that was thought to be behind the uncertainty in gambling systems.
In the early 20th century, R.A. Fisher introduced random selection as a means of minimizing bias. Or, a better way to express this is that he sought ways to screen out unwanted extraneous biases. The methods chosen for filtering out bias were then seen as means of ensuring randomness, and this perspective is still in common use. So one might then define randomness as a consequence of low (in the ideal case zero) bias in sampling. In the 1930s, however, some prominent probabilists and statisticians, influenced by the new quantum theory, accepted the notion of intrinsic background randomness, leading them to dispense with the idea that probability measures a degree of belief. They thought there was an objective discipline of probability that did not require "subjectivism." To them, quantum mechanics justified the idea that a properly calculated probability result yields a "concrete" truth that is true regardless of the observer.
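As a small illustration of the Fisher-style idea (all numbers invented), random assignment does not remove an unmeasured trait from the subjects, but it balances that trait between groups on average -- which is the sense in which extraneous bias is "screened out":

# Random assignment of 200 subjects, each carrying a hidden trait the
# experimenter cannot measure, into treatment and control groups.
import random

random.seed(1)
subjects = [{"hidden_trait": random.gauss(0, 1)} for _ in range(200)]

random.shuffle(subjects)
treatment, control = subjects[:100], subjects[100:]

mean = lambda grp: sum(s["hidden_trait"] for s in grp) / len(grp)
print("treatment mean of hidden trait:", round(mean(treatment), 3))
print("control   mean of hidden trait:", round(mean(control), 3))
# Over many repetitions of the shuffle, the difference between the two
# group means averages out to zero.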
However, physicists do not tend to see quantum logic as an easy way to dispose of subjectivism. In fact, a number take quite the opposite tack, acknowledging a strong logical case for a "spooky" interface between subject and object. Such a noumenal world -- where space and time are "transcended" -- should indeed interact with the phenomenal world in "weird" ways, reminiscent of the incident in the science fiction film The Matrix when the hero observes a cat move oddly, as in a quick film rewind and replay. The example may be silly, but the concept is not.
Now, as a great many reports of "paranormal" events are subjective first-person accounts, it is easy to dismiss them all under the rubric "anecdotal." Clearly, many scientists want nothing to do with the paranormal because it attracts so many starry eyed "true believers" who have very little scientific background. Such notoriety can be a career destroyer for a young academic.
Bruce Hood, who sits on the board of The Skeptic magazine, is a psychologist who takes a neuroscience view of cognition. To Hood, the fact that the "self" is an integrated composite of physical functions implies that consciousness is an epiphenomenon. Hence, religion, faith and assorted superstitions are delusions; there is no self in need of being saved and no evidence of a soul, which is viewed as paranormal nonsense (78).
Hood on 'the self illusion'
http://www.psychologytoday.com/blog/the-self-illusion/201205/what-is-the-self-illusion
While I agree that phenomenal reality, including in part the reality of self, is interwoven with the perception-cognition apparatus, my point is that if we look closely enough, we apprehend something beyond our usual set of conceits and conceptions. The observer has much to do with forming phenomenal reality, and to me this of itself points to a component of cognition which is non-phenomenal or, as we say, noumenal.
Hence, it is not unreasonable after all to think in terms of a noumenal world in which transactions occur that are beyond our immediate ken. It is safe to say that for quite some time a great many men of high caliber knew as self-evident that the world was flat. And yet there were clues, such as sailing ship masts sinking below the horizon, that suggested a revolutionary way of conceiving of the world, one that at first makes no sense: if the world is round, why don't people fall off the underside? And if this round world is spinning, why isn't everyone hurled off?
So I would say that for the flat-earthers, the reality of a round world was hidden, part of an "implicate order" in need of unfolding.
Writing prior to the development of thermonuclear bombs, J.D. Stranathan gives this account of the discovery of deuterium (79):
F.W. Aston in 1927 had obtained a value of 1.00778 ± 0.00015 for the atomic weight of hydrogen, in close agreement with the accepted chemical value of 1.00777 ± 0.00002. The figures were so close that no heavy isotope of hydrogen seemed necessary.
But, the discovery of the two heavier isotopes of oxygen forced a reconsideration because their existence meant that the physically derived and chemically derived scales of atomic weight were slightly, but importantly, different. This meant that Aston's value, when converted to the chemical scale, was 1.00750, and this was appreciably smaller than the chemically determined atomic weight. The alleged close agreement was adjudged to be false.
That discrepancy spurred Harold C. Urey, Ferdinand G. Brickwedde and George M. Murphy to hunt for deuterium, which they found and which became a key component in the development of the atomic bomb.
But, this discrepancy turned out to have been the result of a small experimental error. It was shown that the 1927 mass spectrograph value was slightly low, in spite of having been carefully confirmed by Kenneth T. Bainbridge. When the new spectrograph value of 1.0081 was converted to the chemical scale, there was no longer a substantive disagreement. Hence, there was no implication of the existence of deuterium.
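To make the numbers concrete, here is a rough check of the arithmetic in Python, assuming the commonly quoted conversion factor of roughly 1.000275 between the physical (O-16 = 16) and chemical (natural oxygen = 16) scales; the exact factor used varied slightly from author to author.

# Converting the mass-spectrograph (physical-scale) values to the chemical scale.
scale = 1.000275                   # assumed physical-to-chemical conversion factor

print(round(1.00778 / scale, 5))   # ~1.00750: well below the chemical 1.00777
print(round(1.0081 / scale, 5))    # ~1.00782: no substantive disagreement remains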
Though the chemical and physical scales were revealed to have been slightly different, that revelation, without the 1927 error, would have yielded no reason to expend a great deal of effort searching for heavy hydrogen.
Had heavy water been unknown, would Allied scientists have feared German development of atomic fission weapons (British commandos wrecked Germany's heavy water production in occupied Norway), and would that fear have spurred the British and American governments into action?
Even had the Manhattan Project been inevitable, it is conceivable that, at the outset of World War II, the existence of heavy water would have remained unknown and might have remained unknown for years to come, thus obviating postwar fulfillment of Edward Teller's dream of a fusion bomb. By the time of deuterium's inevitable (?) discovery, the pressure for development of thermonuclear weapons might well have subsided.
That is, looking back, the alleged probability of the discovery of heavy water was minuscule, and one is tempted to wonder about some noumenal influence that fated humanity with this almost apocalyptic power.
At the least, we have the butterfly effect on steroids.
At any rate, the idea here is not to idolize paranormal phenomena, but rather to urge that there is no sound epistemological reason to justify the "Darwinistic" (or, perhaps, Dawkinsistic) edict of ruling out any noumenal world (or worlds) and the related prohibition of consideration of any interaction between phenomenal and noumenal worlds.
In fact, our attempt to get a feel for the noumenal world is somewhat analogous to the work of Sigmund Freud and others in examining the unconscious world of the mind in order to find better explanations of superficially cognized behaviors. (Yet I hasten to add that, though Carl Jung's brilliance and his concern with what I term the noumenal world cannot be gainsaid, I find that he has often wandered too far from the beaten path even for my tastes.)
A note on telepathy
There is a great deal of material that might be explored on "paranormal" phenomena and their relation to a noumenal world. But, we will simply give one psychologist's thoughts on one "noumenal" subject. Freud was quite open-minded about the possibility of extra-normal thought transference.
In New Introductory Lectures on Psycho-Analysis, he writes: "One is led to the suspicion that this is the original, archaic method of communication between individuals and in the course of phylogenetic evolution it has been replaced with the better method of giving information via signals which are picked up by the sense organs."
He relates a report of Dorothy Burlingham, a psychoanalyst and "trustworthy witness." (She and colleague Anna Freud did pioneering work in child psychology.)
A mother and child were in analysis together. One day the mother spoke during analysis of a gold coin that had played a particular part in one of her childhood experiences. On her return home, the woman's son, who was about 10, came to her room and gave her a gold coin which he asked her to keep for him. Astonished, she asked him where he had got it. It turned out that it had been given him as a birthday present a few months previously, but there was no obvious reason why he had chosen that time to bring her the coin.
Freud sees this report as potential evidence of telepathy. One might also suspect it as an instance of Jungian "synchronicity" or of the reality construction process as discussed in Toward.
At any rate, a few weeks later the woman, on her analyst's instructions, sat down to write an account of the gold coin incident. Just then her child approached her and asked for his coin back, as he wanted to show it during his analysis session.
Freud argues that there is no need for science to fear telepathy (though his collaborator, Ernest Jones, certainly seems to have feared the ridicule the subject might bring); Freud, who never renounced his atheism, remained open-minded not only about telepathy, but about the possibility of other extra-normal phenomena.
From our perspective, we argue that reports of "paranormal" communication and other such phenomena tip us off to an interaction with a noumenal world that is the reality behind appearances -- appearances being phenomena generally accepted as ordinary, whether or not unusual.
See my post:
Freud and telepathy
http://randompaulr.blogspot.com/2013/10/freud-on-telepathy.html
Freud, of course, was no mathematician and could only give what seemed to him a reasonable assessment of what was going on. Keynes's view was similar to Freud's. He was willing to accept the possibility of telepathy but rejected the "logical limbo" of explaining that and other "psychic phenomena" with other-worldly spirits.
Many scientists, of course, are implacably opposed to the possibility of telepathy in any form, and there has been considerable controversy over the validity of statistical studies for and against such an effect.
On the other hand, the Nobel laureate Brian Josephson has taken on the "scientific system" and upheld the existence of telepathy, seeing it as a consequence of quantum effects.
Josephson's page of psychic phenomena links
http://www.tcm.phy.cam.ac.uk/~bdj10/psi.html
In the name of Science
The tension between Bayesian reasoning and the intrinsic background randomness imputed to quantum physics perforce implies Wheeler's "participatory universe" in which perception and "background reality" (the stage on which we act, with the props) merge to an extent far greater than has previously been suspected in the halls of academia -- despite herculean efforts to exorcise this demon from the Realm of Science. In other words, we find that determinism and indeterminism are inextricably entangled at the point where consciousness meets "reality."
Nevertheless, one cannot avoid the self-referencing issue. In fact, if we suspend the continuity assumptions of space and time, which quantum theory tells us we should, we arrive at infinite regress. But even with continuity assumptions, one can see infinite regress in, say, an asymmetric three-sink vector field. Where is the zero point at time Tx? In a two-sink field, the symmetry guarantees that the null point can be exactly determined. But in a three-sink field that is not symmetric, one always faces something analogous to quantum uncertainty -- which also points to problems of infinite regress.
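A toy numerical illustration of the contrast, with invented sink positions and weights: each "sink" pulls a test point toward itself with inverse-square strength. In the two-sink symmetric case the null point is fixed at the midpoint by symmetry; in the asymmetric three-sink case there is in general no simple closed form, and the null point can only be pinned down by ever finer numerical search.

# Locating the null (zero-field) point of a field produced by point sinks.
import itertools

def field(x, y, sinks):
    """Net pull at (x, y) from point sinks, each with inverse-square strength."""
    fx = fy = 0.0
    for sx, sy, w in sinks:
        dx, dy = sx - x, sy - y
        r3 = (dx * dx + dy * dy) ** 1.5
        if r3 == 0.0:                      # avoid dividing by zero on a sink
            return float("inf"), float("inf")
        fx += w * dx / r3
        fy += w * dy / r3
    return fx, fy

def null_point(sinks, lo=-2.0, hi=2.0, n=400):
    """Crude grid search for the point where the field magnitude is smallest."""
    best = None
    for i, j in itertools.product(range(n), repeat=2):
        x = lo + (hi - lo) * i / (n - 1)
        y = lo + (hi - lo) * j / (n - 1)
        fx, fy = field(x, y, sinks)
        m = fx * fx + fy * fy
        if best is None or m < best[0]:
            best = (m, x, y)
    return round(best[1], 3), round(best[2], 3)

# Two equal sinks: symmetry fixes the null point at the midpoint, (0, 0);
# the grid search lands on the nearest grid node.
print(null_point([(-1, 0, 1.0), (1, 0, 1.0)]))
# Three unequal sinks: no symmetry to appeal to; the answer is only ever
# an approximation at the chosen resolution.
print(null_point([(-1, 0, 1.0), (1, 0, 2.0), (0, 1, 0.7)]))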
We can think of this in terms of a nonlinear feedback control system. Some such systems maintain an easily understood homeostasis. The thermostat is a case in point. But others need not follow such a simple path to homeostasis. A particular input value may yield a highly unpredictable output within the constraint of homeostasis. In such systems, we tend to find thresholds and properties to be the best we can do in the way of useful information. Probabilities may help us in estimating properties, as we find in the behavior of idealized gas systems.
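A minimal sketch of that contrast, for illustration only: a thermostat's bang-bang feedback settles into a narrow, predictable band, while a simple nonlinear feedback rule (here the logistic map, standing in for the more complicated systems meant above) stays bounded -- a kind of homeostasis -- yet its individual values are effectively unpredictable.

# Two toy feedback rules.
# 1. Thermostat-style (bang-bang) control: output settles into a narrow band.
# 2. Logistic-map feedback: output stays bounded in (0, 1), but individual
#    values are effectively unpredictable.

def thermostat(temp, heater_on, low=19.0, high=21.0):
    if temp < low:
        heater_on = True
    elif temp > high:
        heater_on = False
    temp += 0.3 if heater_on else -0.3
    return temp, heater_on

def logistic(x, r=3.9):
    return r * x * (1 - x)

temp, on = 15.0, False
x = 0.2
for step in range(50):
    temp, on = thermostat(temp, on)
    x = logistic(x)

print("thermostat temperature after 50 steps:", round(temp, 2))  # near the 19-21 band
print("logistic value after 50 steps:", round(x, 4))              # bounded but erratic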
However, these probabilities cannot really be frequency based, except in the classical sense based on the binomial distribution. Trials can't be done. E.T. Jaynes thought that the Shannon approach of simply expropriating what I call the classical approach sufficed for molecular physics. However, I add that when Einstein used probabilities to establish that Brownian motion conformed to the behavior of jostling atoms, he was not only implicitly using the classical approach, but also what we might call a propensity approach in which the presumed probabilities were assigned in accordance with system start-up information, which in this case was given by Newtonian and Maxwellian mechanics.
The above considerations suggest that it is a mistake to assume that human affairs are correctly portrayed in terms of intrinsic randomness played out in some background framework that is disentangled from the observer's consciousness.
In fact, we may see some kind of malleable interconnectedness that transcends the phenomenal world.
This also suggests that linear probability reckoning works well enough within limits. We use the word linear to mean that the influences among events are small enough so as to be negligible, permitting us the criterion of independence. (Even conditional probabilities rest on an assumption of independence at some point.) The limits are not so easily defined, as we have no system of nonlinear differential equations to represent the sharing of "reality" among minds or the balance between the brain's reality construction versus "external" reality.
Certainly in the extremes, probability assessments do not seem terribly satisfactory within a well-wrought metaphysical system, and should not be so used, even though "linear" phenomenal randomness is viewed as a component of the Creed of Science, being a basic assumption of many a latter day atheist, whether or not scientifically trained.
"Everyone knows" that some phenomena are considered to be phantasms of the mind, whether they be optical or auditory illusions or delusions caused by temporary or permanent brain impairment, and that, otherwise, these phenomena are objective, meaning that there is wide agreement that such phenomena exist independently of any observer, especially if such phenomena have been tested and verified by an accepted scientific process. However, the underlying assumptions are much fuzzier than the philosophical advocates of "hard science" would have us believe.
So this suggests there exists some holistic "uber force," or organizing principle. Certainly we would not expect an atheist to believe this uber force is conscious, though he or she might, like Einstein, accept the existence of such an entity in Spinoza's pan-natural sense. On the other hand, neither Einstein, nor other disciples of Spinoza, had a logical basis for rejecting the possibility that this uber force is conscious (and willing to intervene in human affairs). This uber force must transcend the laws of physics of this universe (and any clonelike cosmoses "out there"). Here is deep mystery; "dark energy" is a term that comes to mind.
I have not formalized the claim for such an uber force. However, we do have Goedel's ontological proof of God's existence, though I am unsure that such a method is valid. An immediate thought is that the concept of "positive" requires a subjective interpretation. On the other hand, we have shown that the human brain/mind is a major player in the construction of so-called "concrete" phenomenal reality.
Goedel's ontological proof of God's existence
http://math.stackexchange.com/questions/248548/godels-ontological-proof-how-does-it-work
Background of god theorem
https://en.wikipedia.org/wiki/G%C3%B6del's_ontological_proof
Formalization, mechanization and automation of Gödel's proof of god's existence
http://arxiv.org/abs/1308.4526
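For readers who want the shape of the argument, here is the version usually attributed to Dana Scott's notes (and reproduced in the links above), where P(φ) reads "φ is a positive property," G(x) reads "x is Godlike," "ess" denotes an essence, and NE denotes necessary existence. I give it only as a reference point for the objections that follow; the numbering is one common convention.

\begin{align*}
&\text{Ax. 1: } \bigl(P(\varphi) \wedge \Box\,\forall x\,(\varphi(x) \rightarrow \psi(x))\bigr) \rightarrow P(\psi)\\
&\text{Ax. 2: } P(\neg\varphi) \leftrightarrow \neg P(\varphi)\\
&\text{Th. 1: } P(\varphi) \rightarrow \Diamond\,\exists x\,\varphi(x)\\
&\text{Df. 1: } G(x) \leftrightarrow \forall\varphi\,\bigl(P(\varphi) \rightarrow \varphi(x)\bigr)\\
&\text{Ax. 3: } P(G)\\
&\text{Th. 2: } \Diamond\,\exists x\,G(x)\\
&\text{Df. 2: } \varphi\ \text{ess}\ x \leftrightarrow \varphi(x) \wedge \forall\psi\,\bigl(\psi(x) \rightarrow \Box\,\forall y\,(\varphi(y) \rightarrow \psi(y))\bigr)\\
&\text{Ax. 4: } P(\varphi) \rightarrow \Box\,P(\varphi)\\
&\text{Th. 3: } G(x) \rightarrow G\ \text{ess}\ x\\
&\text{Df. 3: } NE(x) \leftrightarrow \forall\varphi\,\bigl(\varphi\ \text{ess}\ x \rightarrow \Box\,\exists y\,\varphi(y)\bigr)\\
&\text{Ax. 5: } P(NE)\\
&\text{Th. 4: } \Box\,\exists x\,G(x)
\end{align*}

Everything turns on what counts as a "positive" property, which is where the following objection bites.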
In a private communication, a mathematician friend responded thus:
"For example, BMW is a good car. BMW produces nitrous oxide pollution. Therefore nitrous oxide pollution is good."
My friend later added: "But maybe the point of the ontological proof is not 'good' but 'perfect.' God is supposed to be perfect. A perfect car would not pollute."
Again, the property of goodness requires more attention; though I remain dubious, I am not fully unpersuaded by Goedel's offering.
In this respect, we may ponder Tegmark's mathematical universe hypothesis, which he takes to imply that all computable mathematical structures exist.
Tegmark's mathematical universe paper
http://arxiv.org/pdf/gr-qc/9704009v2.pdf
Tegmark's mathematical universe hypothesis has been stated thus: Our external physical reality is a mathematical structure. That is, the physical universe is mathematics in a well-defined sense. So in worlds "complex enough to contain self-aware substructures," these entities "will subjectively perceive themselves as existing in a physically 'real' world." The hypothesis suggests that worlds corresponding to different sets of initial conditions, physical constants, or altogether different equations may be considered equally real. Tegmark elaborates his conjecture into the computable universe hypothesis, which posits that all computable mathematical structures exist.
Here I note my paper:
On Hilbert's sixth problem
http://kryptograff.blogspot.com/2007/06/on-hilberts-sixth-problem.html
which argues against the notion that the entire cosmos can be modeled as a Boolean circuit or Turing machine.
60. Symmetry by Hermann Weyl (Princeton, 1952).
61. Time Travel in Einstein's Universe by J. Richard Gott III (Houghton Mifflin, 2001).
62. Kurt Goedel in Albert Einstein: Philosopher-Scientist, edited by Paul Arthur Schilpp (Library of Living Philosophers, 1949)
63. Cycles of Time: An extraordinary new view of the universe by Roger Penrose (The Bodley Head, 2010).
64. The Anthropic Cosmological Principle by John D. Barrow and Frank J. Tipler (Oxford, 1988).
65. Wheeler quoted in The Undivided Universe: An Ontological Interpretation of Quantum Theory by David Bohm, Basil James Hiley (Routledge, Chapman & Hall, Incorporated, 1993). The quotation is from Wheeler in Mathematical Foundations of Quantum Mechanics, A.R. Marlow, editor (Academic Press, 1978).
66. Undivided Universe, Bohm.
67. Bohm (see above) is referring to The Many-Worlds Interpretation of Quantum Mechanics by B.S. DeWitt and N. Graham (Princeton University Press 1973).
68. Undivided Universe, Bohm.
69. Gravitation by Charles W. Misner, Kip S. Thorne and John Archibald Wheeler (W.H. Freeman, 1970, 1971).
70. Black Holes and Time Warps: Einstein's Outrageous Legacy by Kip S. Thorne (W.W. Norton, 1994).
71. The Open Universe (Postscript Volume II) by Karl Popper (Routledge, 1988. Hutchinson, 1982).
72. New Foundations of Quantum Mechanics by Alfred Landé (Cambridge University Press, 1965). Cited by Popper in Schism.
73. Logical Foundations of Probability by Rudolf Carnap (University of Chicago, 1950).
74. Physics and Philosophy by James Jeans (Cambridge, Macmillan, 1943).
75. Quantum Theory and the Schism in Physics (Postscript Volume III) by Karl Popper (Routledge, 1989. Hutchinson, 1982).
76. The New Background of Science by James Jeans (Cambridge, 1933, 1934).
77. B. Alan Wallace, a Buddhist scholar, tackles the disconnect between the scientific method and consciousness in this video from the year 2000.
B. Alan Wallace on science and consciousness
http://www.youtube.com/watch?v=N0IotYndKfg
77aa. Space, Time and Gravitation: An Outline of the General Relativity Theory by Arthur Eddington (Cambridge, 1920; Harper and Row reprint, 1959).
77a. Taken from excerpts of the scientists' writings found in Quantum Questions: Mystical Writings of the World's Great Physicists, edited by Ken Wilber (Shambhala Publications, 1984). Wilber says the book's intent is not to marshal scientific backing for a New Age agenda.
77bb. From "Autobiographical Notes" appearing in Albert Einstein: Philosopher-Scientist, Paul Arthur Schilpp, editor (Library of Living Philosophers 1949).
77xa. Science and Information Theory, Second Edition, by Leon Brillouin (Dover 2013 reprint of Academic Press 1962 edition; first edition, 1956).
77b. The Meaning of Relativity by Albert Einstein (fifth edition, Princeton, 1956).
78. The Self Illusion: how the social brain creates identity by Bruce Hood (Oxford, 2012).
79. The "Particles" of Modern Physics by J.D. Stranathan (Blakison, 1942).
Newton with his belief in absolute space and time considers motion a proof of the creation of the world out of God's arbitrary will, for otherwise it would be inexplicable why matter moves in this [relative to a fixed background frame of reference] rather than any other direction. -- Hermann Weyl (60).
Weyl, a mathematician with a strong comprehension of physics, had quite a lot to say about spacetime. For example, he argued that Mach's principle, as adopted by Einstein, was inconsistent with general relativity.
Background on Weyl
http://plato.stanford.edu/entries/weyl/#LawMotMacPriWeyCosPos
Weyl's book 'Symmetry' online
https://archive.org/details/Symmetry_482
See also my paper,
Einstein, Sommerfeld and the Twin Paradox
http://paulpages.blogspot.com/2013/10/einstein-sommerfeld-and-twin-paradox.html
Einstein had hoped to deal only with "observable facts," in accord with Mach's empiricist (and logical positivist) program, and hence to reduce spacetime motions to relative actions among bodies, but Weyl found that such a level of reduction left logical holes in general relativity. One cannot, I suggest, escape the background frame, even if it is not a strictly Newtonian background frame. Sometimes this frame is construed to be a four-dimensional spacetime block.
So how would one describe the "activity" of a four-dimensional spacetime block? Something must be going on, we think, yet, from our perspective looking "out," that something "transcends" space and time.
Popper, in his defense of phenomenal realism, objected that the spacetime block interpretation of relativity theory implies that time and motion are somehow frozen, or not quite real. While not directly challenging relativity theory, he objected to such a manifold and ruled it out as not in accord with reality as he thought reality ought to be. But, we hasten to make note of Popper's trenchant criticism of the logical positivism of most scientists.
My thought: "Laws" of nature, such as Einstein's law of universal gravitation, are often thought of in a causative sense, as in "the apple drops at 9.81 meters per second squared by cause of the law of gravity."
Actually, the law describes a range of phenomena which are found to be predictable via mathematical formulas. We have a set of observable relations "governed" by the equations. If something has mass or momentum, we predict that it will follow a calculated trajectory. But, as Newton knew, he had an algorithm for representing actions in nature, but he had not got to the world beneath superficial appearances. How does gravity exercise action at a distance? If you say, via curved spacetime fields, one may ask, how does spacetime "know" to curve?
We may say that gravity is a principle cause of the effect of a rock falling. But, in truth, no one knows what gravity is. "Gravity" is a word used to represent behavior of certain phenomena, and that behavior is predictable and calculable, though such predictability remains open to Hume's criticism.
On this line, it should be noted that Einstein at first resisted basing what became his General Theory of Relativity on geometrical (or topological) representations of space and time. He thought that physical insight should accompany his field equations, but eventually he settled on spacetime curvature as insight enough. His competition with David Hilbert may well have spurred him to drop that proviso. Of course, we all know of his inability to accept the lack of "realism" implied by quantum mechanics, which got the mathematics right but dispensed with certain givens of phenomenal realism. On this point, we note that he once said that he favored the idea that general relativity's mathematics gave correct answers without accepting the notion that space and time were "really" curved.
Newton had the same problem: There was, to him, an unsatisfactory physical intuition for action at a distance. Some argue that this difficulty has been resolved through the use of "fields," which act as media for wave motion. The electromagnetic field is invoked as a replacement for the ether that Einstein ejected from physics as a useless concept. Still, Einstein saw that the field model was intuitively unsatisfactory.
As demonstrated by the "philosophical" issues raised by quantum theory, the problem is the quantization of energies needed to account for chains of causation. When the energy reaches the quantum level, there are "gaps" in the chain. Hence the issue of causation can't easily be dismissed as a problem of "metaphysics" but is in truth a very important area of discussion on what constitutes "good physics."
One can easily visualize pushing an object, but it is impossible to visualize pulling an object. In everyday experience, when one "pulls," one is in fact pushing. Yet, at the particle level, push and pull are complementary properties associated with charge sign. This fact is now sufficiently familiar as not to seem occult or noumenal. Action at a distance doesn't seem so mysterious, especially if we invoke fields, which are easy enough to describe mathematically, but does anyone really know what a field is? The idea that gravitation is a macro-effect from the actions of gravitons may one day enhance our understanding of nature. But that doesn't mean we really know what's going on at the graviton level.
Gott (61), for example, is representative of numerous physicists who see time as implying many strange possibilities. And Goedel had already argued in the 1940s that time may not exist at all -- that it is some sort of illusion. Goedel had found a solution to Einstein's field equations of general relativity for a rotating universe in which closed time loops exist, meaning a rocket might travel far enough to find itself in its own past. Einstein shrugged off this finding of his good friend, arguing that it does not represent physical reality. But Goedel countered that if such a solution exists at all, then time cannot be what we take it to be and does not, in the usual sense, exist (62).
These days, physicists are quite willing to entertain multiple dimension theories of cosmology, as in the many dimensions of string theory and M theory.
We have Penrose's cyclic theory of the cosmos (63), which differs from previous Big Bang-Big Crunch cyclic models. Another idea comes from Paul J. Steinhardt, who proposes an "ekpyrotic universe" model. He writes that his model is based on the idea that our hot big bang universe was formed from the collision of two three-dimensional worlds moving along a hidden, extra dimension. "The two three-dimensional worlds collide and 'stick,' the kinetic energy in the collision is converted to quarks, electrons, photons, etc., that are confined to move along three dimensions. The resulting temperature is finite, so the hot big bang phase begins without a singularity."
Steinhardt on the ekpyrotic universe
http://wwwphy.princeton.edu/~steinh/npr/
The real point here is that spacetime, whatever it is, is rather strange stuff. If space and time "in the extremes" hold strange properties, should we not be cautious about assigning probabilities based on absolute Newtonian space and equably flowing time? It is not necessarily a safe assumption that what is important "in the extremes" has no relevance locally.
And yet, here we are, experiencing "time," or something. The difficulty of coming to grips with the meaning of time suggests that beyond the phenomenal world of appearances is a noumenal world that operates along the lines of Bohm's implicate order, or -- in his metaphor -- of a "holographic universe."
But time is even more mind-twisting in the arena of quantum phenomena (as discussed in Noumena II, below).
The "anthropic cosmological principle" has been a continuing vexation for cosmologists (64). Why is it that the universe seems to be so acutely fine-tuned to permit and encourage human life? One answer is that perhaps we are in a multiverse, or collection of noninteracting or weakly interacting cosmoses. The apparent miniscule probability that the laws and constants are so well suited for the appearance of humans might be answered by increasing the number and variety of cosmoses and hence increasing the distribution of probabilities for cosmic constants.
The apparent improbability of life is not the only reason physicists have for multiverse conjectures. But our concern here is that physicists have used probabilistic reasoning on a question of the existence of life. This sort of reasoning is strongly reminiscent of Pascal's wager and I would argue that the question is too great for the method of probability analysis. The propensity information is far too iffy, if not altogether zero. Yet, that doesn't mean the problem is without merit. To me, it shows that probability logic cannot be applied universally and that it is perforce incomplete. It is not only technically incomplete in Goedel's sense, it is incomplete because it fundamentally rests on the unknowable.
Paul Davies, in the Guardian, wrote: "The multiverse comes with a lot of baggage, such as an overarching space and time to host all those bangs, a universe-generating mechanism to trigger them, physical fields to populate the universes with material stuff, and a selection of forces to make things happen. Cosmologists embrace these features by envisaging sweeping 'meta-laws' that pervade the multiverse and spawn specific bylaws on a universe-by-universe basis. The meta-laws themselves remain unexplained -- eternal, immutable transcendent entities that just happen to exist and must simply be accepted as given. In that respect the meta-laws have a similar status to an unexplained transcendent god." Davies concludes, "Although cosmology has advanced enormously since the time of Laplace, the situation remains the same: there is no compelling need for a supernatural being or prime mover to start the universe off. But when it comes to the laws that explain the big bang, we are in murkier waters."
Davies on the multiverse
http://www.theguardian.com/commentisfree/belief/2010/sep/04/stephen-hawking-big-bang-gap
Noumena II: Quantum weirdness
The double-slit experiment
The weird results of quantum experiments have been known since the 1920s and are what led Werner Heisenberg to his breakthrough mathematical systemization of quantum mechanics.
An example of quantum weirdness is the double-slit experiment, which can be performed with various elementary particles. Consider the case of photons, in which the intensity of the beam is reduced to the point that only one photon at a time is fired at the screen with the slits. In the case where only one slit is open, the photo-plate detector on the other side of the screen will record basically one spot where the photons that make it through the slit arrive in what one takes to be a straight line from source to detector.
However, when two slits are open, the photons are detected at different places on the plate. The positions are not fully predictable, and so are random within constraints. After a sufficient number of detections, the trained observer notices a pattern: the spots form the diffraction pattern one would expect from a wave passing through both slits, separating into two subwaves that interact, with regions of constructive and destructive interference. Yet the components of these "waves" are the isolated detection events. From this effect, Max Born described the action of the particles in terms of probability amplitudes -- that is, waves of probability.
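In the standard formalism the point can be put in one line. If psi_1(x) and psi_2(x) are the amplitudes for reaching a point x on the plate via slit 1 and slit 2, the detection probability with both slits open is governed by the sum of the amplitudes, not the sum of the probabilities:

\[
P_{12}(x) = \lvert \psi_1(x) + \psi_2(x) \rvert^2
          = \lvert \psi_1(x) \rvert^2 + \lvert \psi_2(x) \rvert^2
            + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right]
\]

The cross term is the interference pattern; it would vanish if each photon simply went through one slit or the other with ordinary probabilities.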
This is weird because there seems to be no way, in terms of classical causality, for the individual detection events to signal to the incoming photons where they ought to land. It also hints that the concept of time isn't what we typically take it to be. That is, one might interpret this result to mean that once the pattern is noticed, one cannot ascribe separate time units to each photon (which seems to be what Popper, influenced by Landé, was advocating). Rather, it might be argued that after the fact the experiment must be construed as an irreducible whole. This bizarre result occurs whether the particles are fired off at the velocity of light or well below it.
Schroedinger's cat
In 1935, Erwin Schroedinger proposed what has come to be known as the Schroedinger's cat thought experiment in an attempt to refute the idea that a quantum property in many experimental cases cannot be predicted precisely, but can only be known probabilistically prior to measurement. Exactly what does one mean by measurement? In the last analysis, isn't a measurement an activity of the observer's brain?
To underscore how ludicrous he thought the probability amplitude idea was, Schroedinger gave this scenario: Suppose we place a cat in a box that contains a poison gas pellet rigged to a Geiger counter that measures radioactive decay. The radioactive substance has some suitable half-life, meaning that within a given time interval there is some probability that a decay is detected and some probability that it is not.
Now in the standard view of quantum theory, there is no routine causation that can be accessed that gives an exact time the detection will occur. So then, the property (in this case, the time of the measurement) does not exist prior to detection but exists in some sort of limbo in which the quantum possibilities -- detection at time interval x and non-detection at time interval x -- are conceived of as a wave of probabilities, with the potential outcomes superposed on each other.
So then, demanded Schroedinger, does not this logically require that the cat is neither alive nor dead prior to the elapse of the specified time interval?! Of course, once we open the box, the "wave function collapses" and the cat's condition -- dead or alive -- tells us whether the quantum event has been detected. The cat's condition is just as much of a detection event as a photo-plate showing a bright spot.
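In the notation of the theory, the rigged box just before observation is assigned a superposed state. At the moment when the decay probability is one-half, a sketch of that state (suppressing the details of the apparatus) is

\[
\lvert \Psi \rangle = \tfrac{1}{\sqrt{2}}\Bigl(\lvert \text{no decay} \rangle \otimes \lvert \text{cat alive} \rangle
                    + \lvert \text{decay} \rangle \otimes \lvert \text{cat dead} \rangle\Bigr),
\]

and it is precisely the claim that this line describes the cat itself, rather than merely our knowledge of it, that Schroedinger found absurd.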
Does this not then mean that history must be observer-centric? However, no one was able to find a way out of this dilemma, despite many attempts (see Toward). Einstein conceded that such a model was consistent, but rejected it on philosophical grounds. You don't really suppose the moon is not there when you aren't looking, he said.
The EPR scenario
In fact, also in 1935, Einstein and two coauthors unveiled another attack on quantum weirdness known as the Einstein-Podolsky-Rosen (EPR) thought experiment, in which the authors pointed out that quantum theory implies what Einstein called "spooky action at a distance" that violated C, the velocity of light in a vacuum, which is an anchor of his theory of relativity. Later John Bell found a way to apply a test to see whether statistical correlation would uphold the spooky quantum theory. Experiments by Alain Aspect in the 1980s and by others have confirmed, to the satisfaction of most experts, that quantum "teleportation" occurs.
So we may regard a particle as carrying a potential for some property or state that is only revealed upon detection. That is, the experiment "collapses the wave function" in accordance with the property of interest. Curiously, it is possible to "entangle" two particles of the same type at some source. The quantum equations require that each particle carry the complement of the other particle's property -- even though one cannot in a proper experiment predict which property will be detected first.
Bohm's version of EPR is easy to follow: An electron has a property called "spin." Just as a screw may rotate left or right, so an electron's spin is given as "up" or "down," which is where it will be detected in a Stern-Gerlach device. There are only two possibilities, because the electron's rotational motion is quantized into halves -- as if the rotation jumps immediately to its mirror position without any transition, just as the Bohr electron has specific discontinuous "shells" around a nucleus.
Concerning electron spin
http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html
So if we entangle electrons at a source and send them in different directions, quantum theory declares that if we detect spin "up" at detector A, then detector B must read spin "down."
In that case, as Einstein and his coauthors pointed out, doesn't that mean that the detection at A required a signal to reach B faster than the velocity of light?
For decades, EPR remained a thought experiment only. A difficulty was that detectors and their related measuring equipment tend to be slightly cranky, giving false positives and false negatives. It may be that error correction codes might have reduced the problem, but it wasn't until Bell introduced his statistical inequalities that the possibility arose of conducting actual tests of correlation.
In the early 1980s Aspect arranged photon experiments that tested for Bell's inequalities and made the sensational discovery that the correlations showed Einstein to be wrong: detection of one property strongly implied that the "co-particle" would be detected with the complement property. (We should tip our hats to both Schroedinger and Einstein for the acuity of their thought experiments.) Further, Aspect did experiments in which the monitors were arranged so that any signal from one particle to the other would necessarily have to exceed the velocity of light. Even so, the results held.
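A sketch of the statistical test involved. For entangled spin-1/2 pairs in the singlet state, quantum mechanics predicts a correlation E(a, b) = -cos(a - b) between outcomes measured along directions a and b. Bell-type (CHSH) reasoning shows that any local hidden-variable account must keep the combination S below 2 in absolute value, while the quantum prediction reaches 2*sqrt(2), which is what the experiments bear out.

# CHSH quantity for singlet-state pairs, using the quantum prediction
# E(a, b) = -cos(a - b) for the correlation of +1/-1 outcomes.
import math

def E(a, b):
    return -math.cos(a - b)

# Standard choice of measurement angles (radians) for maximal violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("CHSH value |S| =", round(abs(S), 4), " (classical bound 2, quantum 2*sqrt(2) ~ 2.828)")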
This property of entanglement is being introduced into computer security regimens because if, say, the NSA or some other party is looking at the data stream, entangled particles can be used to tip off the sender that the stream is being observed.
Hidden variables
John Von Neumann, contradicting Einstein, published a proof that quantum theory was complete in Heisenberg's sense and that "hidden variables" could not be used to devise causal machinery to explain quantum weirdness. Intuitively, one can apprehend this by noting that if one thinks of causes as undetected force vectors, then Planck's constant means that there is a minimum on the amount of force (defined in terms of energy) that can exist as far as detection or observation is concerned. If we think of causes in terms of rows of dominos fanning out and at points interacting, we see there is nothing smaller than the "Planck domino." So there are bound to be gaps in what we think of as "real world causation."
Popper objected to Von Neumann's claim on grounds that after it was made, discoveries occurred in the physics of the nucleus that required "new" variables. Yet if hidden variables are taken to mean the forces of quantum chromodynamics and the other field theories, these have no direct influence on the behaviors of quantum mechanics (now known as quantum field theory). Also, these other theories are likewise subject to quantum weirdness, so if we play this game, we end up with a level where the "variables" run out.
We should note that by "hidden variable," Von Neumann evidently had in mind the materialist viewpoint of scientists like Bohm whose materialism led him to reject the minimalist ideas of the "Copenhagen interpretation" whereby what one could not in principle observe simply doesn't count. Instead, Bohm sought what might be called a pseudo-materialist reality in which hidden variables are operative if one concedes the bi-locality inherent in entanglement. In fact, I tend to agree with Bohm's view of some hidden order, as summarized by his "holographic universe" metaphor. On the other hand, I do not agree that he succeeded in his ambition to draw a sharp boundary between the "real external world" and subjective perception.
Bohm quotes John Archibald Wheeler:
"No phenomenon is a phenomenon until it is an observed phenomenon" so that "the universe does not exist 'out there' independently of all acts of observation. It is in some strange sense a participatory universe. The present choice of mode of observation... should influence what we see about the past... the past is undefined and undefinable without the observation" (65).
"We can agree with Wheeler that no phenomenon is a phenomenon until it is observed, as by definition, a phenomenon is what appears. Therefore it evidently cannot be a phenomenon unless it is the content of an observation," Bohm says, adding, "The key point in an ontological interpretation such as ours is to ask the question as to whether there is an underlying reality that exists independently of observation" (66).
Bohm argues that a "many minds" interpretation of quantum effects removes "many of the difficulties with the interpretation of [Hugh] Everett and [Bryce] DeWitt (67), but requires making a theory of mind basically to account for the phenomena of physics. At present we have no foundations for such a theory..." He goes on to find fault with this idea.
And yet, Bohm sees that "ultimately our overall world view is neither absolutely deterministic nor absolutely indeterministic," adding: "Rather it implies that these two extremes are abstractions which constitute different views or aspects of the overall set of appearances" (68).
So perhaps the thesis of determinism and the antithesis of indeterminism resolve in the synthesis of the noumenal world. In fact, Bohm says observables have no fundamental significance and prefers an entity dubbed a "be-able," again showing his "implicate order" has something in common with our "noumenal world." And yet our conceptualization is at root more radical than is his.
One specialist in relativity theory, Kip S. Thorne (69), has expressed a different take. Is it possible that the spacetime continuum, or spacetime block, is multiply connected? After all, if, as relativity holds, spacetime is described by a Riemannian geometry, then naive Euclidean space is not operative except on scales small compared with the curvature. So in that case, it shouldn't be all that surprising that spacetime might have "holes" connecting one region to another. Where would such wormholes be most plausible? In black holes, Thorne says. By this, the possibility of a "naked singularity" is addressed. The singularity is the point at which Einstein's field equations cease to be operative; the presumed infinitely dense point at the center of mass doesn't exist because the wormhole ensures that the singularity never occurs; it smooths out spacetime (70).
One can see an analog of this by considering a sphere, which is the surface of a ball. A wormhole would be analogous to a straight-line tunnel connecting Berlin and London by bypassing the curvature of the Earth. So on this analogy, one can think of such tunnels connecting different regions of spacetime. The geodesic -- analogous to a great circle on a sphere -- yields the shortest distance between points in Einstein spacetime. But if we posit a manifold, or cosmic framework, of at least five dimensions then one finds shortcuts, topologically, connecting distinct points on the spacetime "surface." Does this accord with physical reality? The answer is not yet in.
Such wormholes could connect different points in time without connecting different regions of space, thereby setting up a time travel scenario, though he is quoted as arguing that his equation precludes time travel paradoxes.
Thorne's ideas on black holes and wormholes
https://en.wikipedia.org/wiki/Kip_Thorne
The standard many-worlds conjecture is an interpretation of quantum mechanics that asserts that a universal wave function represents objective phenomenal reality. So there is no intrinsically random "collapse of the wave function" when a detection occurs. The idea is to be rid of the Schroedinger cat scenario by requiring that in one world the cat is alive and in another it is dead. The observer's world is determined by whether he detects cat dead or cat alive. These worlds are continually unfolding.
The key point here is the attempt to return to a fully deterministic universe, a modern Laplacian clockwork model. However, as the observer is unable to foretell which world he will end up in, his ignorance (stemming from randomness1 and randomness2) is tantamount to intrinsic quantum randomness (randomness3).
In fact, I wonder how much of a gain there is in saying Schroedinger's cat was alive in one world and dead in another prior to observation as opposed to saying the cat was in two superposed states relative to the observer.
On the other hand it seems likely that Hawking favors the notion of a universal wave function because it implies that information represents hard, "external" reality. But even so, the information exists in superposed states as far as a human observer is concerned.
At present, there is no means of calculating which world the cat's observer will find himself in. He can only apply the usual quantum probability methods.
Time-bending implications
What few have understood about Aspect's verification of quantum results is that time itself is subject to quantum weirdness.
A logical result of the entanglement finding is this scenario:
We have two detectors: A, which is two meters from the source, and B, which is one meter distant. You are positioned at detector A and cannot observe B. Detector A goes off and registers, say, spin "down." You know immediately that detector B must read spin "up" (assuming no equipment-generated error). That is, from your position, the detector at B went off before your detector at A. If you like, you may in principle greatly increase the scale of the distances to the detectors. It makes no difference. B seems to have received a signal before you even looked at A. It's as if time is going backward with respect to B, as far as you are concerned.
Now it is true that a great many physicists agree with Einstein in disdaining such scenarios, and assume that the picture is incomplete. However, incomplete or not, the fact is that the observer's sense of time is stood on its head. And this logical implication is validated by Aspect's results.
Now, let's extend this experimentally doable scenario with a thought experiment reminiscent of Schroedinger's cat. Suppose you have an assistant stationed at detector B, at X kilometers from the source. You are at X/2 kilometers from the source. Your assistant is to record the detection as soon as it goes off, but wait for your call to report the property. As soon as you look at A, you know his property will be the complement of yours. So was he in a superposed state with respect to you? Obnoxious as many find this, the logical outcome, based on the Aspect experiments and quantum rules, is yes.
True, you cannot in relativity theory receive the information from your assistant faster than C, thus presenting the illusion of time linearity. And yet, I suggest, neither time nor our memories are what we suppose them to be.
The amplituhedron
When big particle accelerators were introduced, it was found that Richard Feynman's diagrams, though conceptually useful, were woefully inadequate for calculating actual particle interactions. As a result, physicists have introduced a remarkable calculational tool called the "amplituhedron." This is a topological object that exists in higher-dimensional space. Particles are assumed to follow the rules in this object, and not the rules of mechanistic or pseudo-mechanistic and continuous Newtonian and Einsteinian spacetime.
Specifically, it was found that the scattering amplitude equals the volume of this object. The details of a particular scattering process dictate the dimensionality and facets of the corresponding amplituhedron.
It has been suggested that the amplituhedron, or a similar geometric object, could help resolve the perplexing lack of commensurability of particle theory and relativity theory by removing two deeply rooted principles of physics: locality and unitarity.
“Both are hard-wired in the usual way we think about things,” according to Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study in Princeton. “Both are suspect.”
Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature.
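For reference, "unitarity" here is just the requirement that the quantum probabilities exhaust the outcome space: if S is the operator carrying an initial state |i> into the possible final states |f>, then

\[
S^{\dagger} S = I \quad\Longrightarrow\quad \sum_{f}\,\bigl|\langle f \mid S \mid i \rangle\bigr|^{2} = 1 ,
\]

which is the quantum counterpart of the classical axiom, taken up next, that the probabilities of all possible outcomes sum to one.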
At this point I interject that an axiom of nearly all probability theories is that the probabilities of the outcome set must sum to 1. So if, at a fundamental, noumenal level, this axiom does not hold, what does this bode for the whole concept of probability? At the very least, we sense some sort of nonlinearity here. (At this point we must acknowledge that quantum physicists have for decades used negative probabilities with respect to the situation before the "collapse of the wave function," though the requirement that the total come to unity is preserved.)
Mark Burgin on negative probabilities
http://arxiv.org/ftp/arxiv/papers/1008/1008.1287.pdf
Wikipedia article on negative probabilities
https://en.wikipedia.org/wiki/Negative_probability
According to the article linked below, scientists have also found a “master amplituhedron” with an infinite number of facets, analogous to a circle in 2-D, which has an infinite number of sides. This amplituhedron's volume represents, in theory, the total amplitude of all physical processes. Lower-dimensional amplituhedra, which correspond to interactions between finite numbers of particles, are conceived of as existing on the faces of this master structure.
“They are very powerful calculational techniques, but they are also incredibly suggestive,” said one scientist. “They suggest that thinking in terms of space-time was not the right way of going about this.”
“We can’t rely on the usual familiar quantum mechanical spacetime pictures of describing physics,” said Arkani-Hamed. “We have to learn new ways of talking about it. This work is a baby step in that direction.”
So it indeed looks as though time and space are in fact some sort of illusion.
In my estimate, the amplituhedron is a means of detecting the noumenal world that is beyond the world of appearances or phenomena. Quantum weirdness implies that interactions are occurring in a way and place that do not obey our typical perceptual conceits. It's as if, in our usual perceptual state, we are encountering the "shadows" of "projections" from another "manifold."
Simons Foundation article on the amplituhedron
https://www.simonsfoundation.org/quanta/20130917-a-jewel-at-the-heart-of-quantum-physics/
The spacetime block of relativity theory likewise suggests that there is a realm that transcends ordinary energy, time and motion.
Zeno's paradox returns
Motion is in the eye of the beholder.
When an object is lifted to height n, it has a specific potential energy definable in terms of Planck's energy constant. Hence, only the potential energies associated with multiples of Planck's constant are permitted. In that case, only heights associated with those potential energies are permitted. When the object is released and falls, its kinetic energy increases with the acceleration. But the rule that only multiples of Planck's constant are permitted means that there is a finite number of transition heights before the object hits the ground. So what happens between quantum height y and quantum height y - 1?
No doubt Zeno would be delighted with the answer:
The macro-object can't cross these quantum "barriers" via what we think of as motion. The macro-object makes a set of quantum jumps across each "barrier," exactly like electrons in an atom jumping from one orbital probability shell to another.
Here we have a clear-cut instance of the "macro-world" deceiving us, when in fact "motion" must occur in quantum jumps. This is important for not only what it says about motion, but also because it shows that the macro-world is highly -- not minimally -- interactive with the "quantum world." Or that is, that both are highly interactive with some noumenal world that can only be apprehended indirectly.
Even in classical physics, notes Popper in his attack on the Copenhagen interpretation, if acceleration is measured too finely, one finds one gets an indeterminate value, as in a = 0/0 (71).
Even on a cosmic scale, quantum weirdness is logically required.
Cosmic weirdness
Suppose we had a theory of everything (ToE) algorithm. Then at proper time ta we would be able to get a snapshot of the ToE waveform -- obtained from the evolving net ToE vector -- from ta to tb. It is pointless to decompose the waveform below the threshold set by Planck's constant. So the discrete superpositions of the ToE, which might be used to describe the evolution of the cosmos, cannot be reduced to some continuum level. If they could be reduced infinitely, then the cosmic waveform would in effect represent a single history. But the fact that the waveform is composed of quantum harmonics means that more than one history (and future) is in superposition.
In this respect, we see that quantum theory requires "many universes," though not necessarily in the sense of Hugh Everett or of those who posit "bubble" universes.
Many will object that what we have is simply an interpretation of the meaning of quantum theory. But, I reply that once hidden variables are eliminated, and granted the success of the Aspect experiments, quantum weirdness logically follows from quantum theory.
EPR, action at a distance, special relativity and the fuzziness of motion and time and of the cosmos itself, all suggest that our reality process only reflects but cannot represent noumenal reality. That is, what we visualize and see is not what's actually behind what we visualize and see. Quantum theory gives us some insight into how noumena are mapped into three- and four-dimensional phenomena, but much remains uncharted.
So if phenomenon A correlates with phenomenon B, we may be able to find some algorithm that predicts this and other outcomes with a probability near 1. But if A and B are phenomena with a relation determined in a noumenal "world," then what is to prevent all sorts of oddities that make no sense to phenomenalist physicists? Answer: If so, it might be difficult to plumb such a world, just as a shadow only discloses some information about the object between the projection and the light source.
Physicists are, I would say, somewhat more likely to accept a nonlinearity in causality than are scientists in general. For example, Brian Josephson, a Nobel laureate in physics, favors a radical overhaul of physical knowledge by taking into account such peculiarities as outlined by John A. Wheeler, who proposes a "participatory universe." Josephson believes C.S. Peirce's semiotics combined with a new approach to biology may help resolve the impasses of physics, such as the evident incommensurability of the standard model of particle physics with the general theory of relativity.
Josephson on a 'participatory universe'
http://arxiv.org/pdf/1108.4860v4.pdf
And Max Tegmark argues that the cosmos has virtually zero algorithmic information content, despite the assumption that "an accurate description of the state of the universe appears to require a mind-bogglingly large and perhaps even infinite amount of information, even if we restrict our attention to a small subsystem such as a rabbit."
But, he says that if the Schroedinger equation is universally valid, then "decoherence together with the standard chaotic behavior of certain non-linear systems will make the universe appear extremely complex to any self-aware subsets that happen to inhabit it now, even if it was in a quite simple state shortly after the big bang."
Tegmark's home page
http://space.mit.edu/home/tegmark/home.html
Roger Penrose has long been interested in the "huge gap" in the understanding of physics posed by the Schroedinger's cat scenario. He sees this issue as strongly suggestive of a quantum influence in consciousness -- consciousness being crucial to the collapse of Schroedinger's wave function.
He and Stuart Hameroff, an anesthesiologist, propose that microtubules in the brain are where the relevant quantum activities occur.
Even though Penrose is attempting to expunge the problem of superposed realities from physics with his novel proposal, the point to notice here is that he argues that the quantum enigma is indicative of something beyond current physical knowledge that must be taken into account. The conscious mind, he claims, is not at root doing routine calculations. That chore is handled by the unconscious autonomic systems, he says.
In our terms, he is pointing to the existence of a noumenal world that does not operate in the routine "cause-effect" mode of a calculational model.
The 'Orch OR' model for consciousness
http://www.quantumconsciousness.org/penrose-hameroff/orchOR.html
Penrose talk on quantum activity in consciousness
https://www.youtube.com/watch?v=3WXTX0IUaOg
On the other hand, there has always been a strong belief in non-illusional reality among physicists. We have Einstein and Popper as notable examples. Popper was greatly influenced by Alfred Landé, whose strong opposition to the Copenhagen interpretation is spelled out in books published well before Aspect's experiments had confirmed bi-locality to the satisfaction of most physicists (72).
Yet, the approach of many probability theorists has been to ignore these sorts of implications. Carnap's attitude is typical. In his Logical Foundations of Probability (73), Carnap mentions a discussion by James Jeans of the probability waves of quantum mechanics, which Jeans characterizes as "waves of knowledge," implying "a pronounced step in the direction of mentalism" (74).
But Carnap breezes right past Jeans's point, an omission that, I hazard to guess, calls into question the logical foundation of Carnap's whole system -- though I note that I have not attempted to plow through the dense forest of mathematical logic symbols in Carnap's book.
I have tried to address some of the issues in need of exploration in my paper Toward, which discusses the reality construction process and its implications. Many worlds of probability is intended as a companion to that paper:
Toward a signal model of perception
http://paulpages.blogspot.com/2013/11/edited-version-posted-march-3-2013-i_7324.html
We have two opposing tendencies: On the one hand, our experiments require that detections occur according to a frequency-style "probability wave," in which the probability of a detection is constrained by the square of the wave amplitude. If numerous trials are done, the law of large numbers will come into effect in, say, the correlation found in an Aspect experiment. So our sense is that quantum probabilities are intrinsic, and that quantum randomness is fundamental. That is, quantum propensities verify an "objective external reality."
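(In standard notation this amplitude constraint is the Born rule; a minimal statement, with $\psi$ the wave amplitude associated with outcome $x$:

$$P(x) = |\psi(x)|^2, \qquad \sum_x |\psi(x)|^2 = 1,$$

so that, over many trials, the observed frequencies converge on the squared-amplitude values by the law of large numbers.)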
On the other hand, the logical implication of such randomness -- as demonstrated in the double-slit and Aspect experiments -- is that what we call reality must be more subjective than usually supposed, that various histories (and hence futures) are superpositions of potential outcomes and do not actualize until observation in the form of cognitive focus (which may be, for all we know, partly unconscious). So one's mental state and train of thought must influence -- after the manner of the science fiction film The Matrix -- one's perceived world. This is the asymmetric three-sink vector field problem. Where will the fixed point (as in center of mass or gravity) be found?
So then assignment of probabilities may seem to make sense, but only if one neglects the influence of one's mind on the perceived outcomes. As long as you stay in your assumed "world," probability calculations may work well enough -- as you inhabit a world that "goes in that direction" (where many people micromanage "the future" in terms of probabilities).
This logical outcome of course is what Popper and many others have objected to. But, despite a great many attempts to counter this line of thought, none have succeeded (as I argue in Toward). At the end of his career, Popper, faced with the Aspect results, was reduced to a statement of faith: even if bi-locality holds, his belief in classical realism would not be shaken.
He points to the horror of Hiroshima and Nagasaki and the "real suffering" of the victims as a moral reason to uphold his objectivism. That is, he was arguing that we must not trivialize their pain by saying our perception of what happened is a consequence of some sort of illusion (75).
Einstein, it seems clear, implicitly believed in a phenomenal world only, though his own progress with relativity required ditching such a seemingly necessary phenomenon as the ether, long assumed to mediate not only light waves but gravitational waves. In his more mature period, he conceded that the spacetime continuum devised by himself and Minkowski was in effect an ether. My estimate is that Einstein mildly conceded a noumenal world, but resisted a strong dependence on such a concept. Bohm, who favored a form of "realism," settled on a noumenal world with the analogies of a holographic universe and of the "implicate" order shown by an ink blob that unravels when spun in a viscous fluid and is almost exactly restored to its original state when its spin is reversed. Phenomena are observed because of some noumenal relation.
So we say there is some physical process going on which we can only capture in part. By probability wave, we mean that we may use a wave model to represent what we can know about the unfolding of the process. The probability wave on the one hand implies an objective reality but on the other a reality unfolding in a feedback loop within one's brain-mind.
Waves imply change. But whatever is going on in some noumenal realm is partly predictable in terms of probability of observed properties. That is, the probability wave function is a means of partly predicting event observations but we cannot say it models the noumenal process precisely or at all.
As Jeans put it:
"Heisenberg attacked the enigma of the physical universe by giving up the main enigma -- the nature of the physical universe -- as insoluble, and concentrating on the minor puzzle of co-ordinating our observations of the universe. Thus it is not surprising that the wave picture which finally emerged should prove to be concerned solely with our knowledge of the universe as obtained from our observations (76)."
This pragmatic idea of ignoring the noumenal, or as some might prefer, sub-phenomenal, world has been largely adopted by practicing scientists, who also adopt a background assumption that discussion of interpretation is philosophy and hence outside science. They accept Popper's view that there exists a line dividing science from meta-science and similarly his view that interpretations are not falsifiable. And yet, a counterexample to that belief is the fact that Einstein's interpretation barring bi-localism was falsified, in the minds of most physicists, by the experiments of Aspect.
The importance of brain teasers
The Monty Hall problem
The scenario's opener: The contestant is shown three curtains and told that behind one is a new car and behind each of the others an old boot. She is told to choose one of the three curtains, which we will label from left to right as A, B and C.
She chooses B.
Monty opens curtain A and reveals a boot. He then surprises her with the question: "Do you want to stick with your choice of B, or do you want to switch to curtain C?"
The problem is: Should she switch?
The counterintuitive answer, according to numerous probabilists, is yes. When the problem, and its answer, first appeared in the press there were howls of protest, including from mathematicians and statisticians.
Here is the reasoning: When she chose B she had a 1/3 chance of winning a car. Hence her choice of B had a 2/3 chance of being wrong. Once Monty opened curtain A, her choice still carried a 2/3 probability of error. Hence a switch to C gives her a 2/3 probability of being right!
Various experiments were done and it was found that a decision to switch tended to "win a car" in two out of three trials.
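Those trials are easy to reproduce in silico. Here is a minimal Monte Carlo sketch (the code and names are my own, not drawn from any source cited here) comparing the "stick" and "switch" policies:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one game; return True if the contestant wins the car."""
    doors = ['car', 'boot', 'boot']
    random.shuffle(doors)                      # hide the car at random
    choice = random.randrange(3)               # contestant's initial pick
    # Monty opens a door that is neither the pick nor the car.
    opened = next(i for i in range(3) if i != choice and doors[i] != 'car')
    if switch:
        choice = next(i for i in range(3) if i not in (choice, opened))
    return doors[choice] == 'car'

def win_rate(switch: bool, trials: int = 100_000) -> float:
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

print("stick :", win_rate(switch=False))   # hovers near 1/3
print("switch:", win_rate(switch=True))    # hovers near 2/3
```

The switching policy wins whenever the initial pick was wrong, which happens two times in three -- exactly the reasoning given above.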
A few points:
The contestant starts out with complete observer ignorance. She has no idea whether a "common" permutation is in effect and so she might as well assume a randomization process has established the permutation.
Once Monty opens curtain A, the information available to her increases and this affects the probabilities in an unanticipated way. The typical reaction is to say that whether one switches or not is immaterial because the odds are now 50/50. It seems quite bothersome that her mental state can affect the probabilities. After all, when she chose B she wasn't, on the standard view, actually making anything happen. So why should the disclosure of the boot at A make any difference as to what actually happens? Hence the thought that we have a new trial which ought to be independent of the previous one, making the probability 1/2.
However, the new information creates conditions for application of conditional probability. This experiment then tends to underscore that Bayesian reasoning has "real world" applications. (There exist Monty Hall proofs that use the Bayesian formula, by the way.)
The initial possible permutations, in terms of car or boot, are:
bcb
bbc
cbb
where bcb means, for example, boot behind curtain A, car behind curtain B, boot behind curtain C.
By raising curtain A, the contestant has the information that two orderings remain, bcb and bbc, leading to the thought that the probability of guessing correctly is 1/2. But before Monty asks her if she wishes to switch, she has made an estimate based upon the initial information. In that case, her probability of guessing wrong is 2/3. If she switches, her probability of winning the car becomes 2/3.
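One of the Bayesian-formula proofs mentioned above can be written out in a few lines (my notation: the contestant has chosen B, $H_X$ means "the car is behind curtain $X$," and Monty is assumed to pick at random between two boots when he has a choice):

$$
P(H_C \mid \text{opens } A)
= \frac{P(\text{opens } A \mid H_C)\,P(H_C)}{\sum_{X} P(\text{opens } A \mid H_X)\,P(H_X)}
= \frac{1 \cdot \tfrac13}{0 \cdot \tfrac13 + \tfrac12 \cdot \tfrac13 + 1 \cdot \tfrac13}
= \frac{2}{3}.
$$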
Part of the perplexity stems from the types of randomness and probability at hand. A modern American tends to relate probabilities to assumed random forces in the external world, rather than only to mental state (the principle of insufficient reason).
And yet, if we grant that these permutations entail superposed histories, we then must consider a negative feedback control process that might affect the probabilities, as suggested above and described in Toward. The brain's method of "selecting" and "constructing" reality may help explain why a few people are consistently well above or well below the mean in such low-skill games of chance.
This point of course raises a serious difficulty: solipsism. I have addressed that issue, however inadequately, in Toward.
There are a number of other probability brain teasers, with attendant controversy over proper methods and quantifications. A significant issue in these controversies is the usefulness of the probabilistic process in making decisions. As Paul Samuelson observed, the St. Petersburg paradox is a poser that would never happen in actuality because no sane person would make such an offer.
Samuelson on paradoxes
http://www.jstor.org/stable/2722712
Keynes argued similarly that simple expectation value is not always valid, as when the risk is unacceptable no matter how great the payoff.
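For readers unfamiliar with the poser: in one standard formulation a fair coin is tossed until the first head, and a head on toss $n$ pays $2^n$ ducats. The naive expectation that Samuelson and Keynes are objecting to diverges:

$$
E = \sum_{n=1}^{\infty} \left(\tfrac12\right)^{n} 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,
$$

so expected-value reasoning alone says any finite entry fee is a bargain -- an offer, as Samuelson notes, that no sane house would ever make.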
In the case of the Sleeping Beauty problem, we could notionally run a series of trials to find the limiting relative frequency. But such an experiment is likely to encounter ethics barriers and, even more likely, to be seen as too frivolous for the time and expense.
These and other posers are legitimate questions for those inclined to logical analysis. But note that such scenarios all assume a "linear" background randomness. Such an assumption may serve in many instances, but what of potential exceptions? For example, Sleeping Beauty, from her orientation, may have several superposed "histories" upon awakening. Which history "happens" is, from the experimenter's orientation, partly guided by quantum probabilities. So to ask for the "linear" probability solution to the Sleeping Beauty problem is to ignore the "reality wave" probabilities that affect any solution.
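For what it is worth, the notional frequency trial can at least be run in silico, under the "linear" background-randomness assumption the text is questioning. A minimal sketch (my own construction): heads means one awakening, tails means two, and we count the fraction of awakenings at which the coin shows heads.

```python
import random

def awakening_frequency(trials: int = 100_000) -> float:
    """Fraction of awakenings at which the coin shows heads."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2   # heads: Monday only; tails: Monday and Tuesday
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(awakening_frequency())   # tends toward 1/3 under these assumptions
```

Counting per awakening yields the "thirder" answer; counting per experiment yields the "halfer" answer, which is one reason the puzzle resists a single limiting relative frequency.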
The St. Petersburg paradox
https://en.wikipedia.org/wiki/St._Petersburg_paradox
http://books.google.com/books?id=vNvXkFUbfM8C&pg=PA267&lpg=PA267&dq=robert+martin+st+petersburg+paradox+dictionary&source=bl&ots=x5NVz53Ggc&sig=JrlvGjnw5tcX9SEbgaA1CbKGW9U&hl=en&sa=X&ei=OdzmUdWXItK24AOxqoHQAQ&ved=0CDYQ6AEwAQ#v=onepage&q=robert%20martin%20st%20petersburg%20paradox%20dictionary&f=false
The Sleeping Beauty problem
https://en.wikipedia.org/wiki/Sleeping_Beauty_problem
http://www.u.arizona.edu/~thorgan/papers/other/Beauty.htm
A noumenal world
Ludwig Wittgenstein's Tractatus does something very similar to what Goedel proved rigorously, and reflects the paradoxes of Bertrand Russell and Georg Cantor. That is, philosophy is described by statements, or propositions, but cannot get at "the problems of life." I.e., philosophy uses a mechanistic structure which cannot apprehend directly what others have called the noumenal world. Hence, the propositions used in Tractatus are themselves nonsense. Again, the self-referencing dilemma.
Interestingly, later on Wittgenstein was unable to follow Goedel and dropped work on the philosophy of mathematics.
The phenomenal versus the noumenal is reflected in experiments in which participants at a console are urged to inflict pain on a subject allegedly connected to an electroshock device, but who is in fact an actor simulating pain responses. Here we see Hannah Arendt's "banality of evil" among those who obey the experimenter's commands to inflict greater and greater pain. Those who passively obey criminal orders are responding to the social cues suggested by the situation, and are rather easily persuaded when they see others carrying out obnoxious deeds under "legal" circumstances. They accept rationalizations because, essentially, they see their immediate interest in conforming to settings controlled by some authority. In some cases, they may also have a psychological need to express the primitive predator within (which for most people is expressed during sporting events or entertainment of other sorts). These persons are close to the phenomenal world accepted by Darwinists.
Yet there are those who resist criminal orders or cajoling, whatever the setting. Is this only the Freudian superego that has been programmed to bar such behavior (the internalization of parental strictures)? If so, one would suspect that such inhibitions would be weakened over time by consistent exposure to the actions of the herd. Yet, there are those who do not respond well to the blandishments that come from herd leaders and who strenuously resist being pushed into criminal (even if "legalized") behavior. Very often such persons cite religious convictions. Still, as shown by the horrific history of religious warfare, it is possible for a person to have a set of religious ideas that do not work against the stampede effect.
So such peculiar individuals point to an interior moral compass not found in others, or, if some others do possess that quality, it has been greatly repressed. The idea that such a moral compass is a consequence of random physical forces is, by today's standards, plausible. But another possibility is connection with a noumenal world, which holds the source of the resistance to banal evil.
Types of intuition
Consider Type 1 intuition:
Let us consider mathematical intuition, in which we have what is really an informed guess as to a particular mathematical truth.
Such intuition is wrong often enough that the word "counterintuitive" is common among mathematicians. Such informed guesswork is based on one's experience with similar sets of relations. So such intuition can improve with experience, thus the respect given to experts.
There is also intuition based on subtle clues, which may tip one off to imminent danger.
Then we have someone who is vexed by a scientific (or other) problem and, no matter how much he spins his mental wheels, is unable to solve it. But while asleep or in a reverie, he suddenly grasps an answer.
The following scenario is plausible:
He has the intuition, based on experience, that the problem is solvable (though he may be wrong about this). In many cases he has quieted the left-brain analytic function in order to permit the right brain to make associations at a more "primitive" level. It is noteworthy that the left brain will call to consciousness the precise left-right, or time sequence, order of a telephone number. When fatigue dims the analytic function, the right brain will call to consciousness the digits, but generally not in left-right order. So one can see how some set of ideas might similarly be placed in an unexpected arrangement by the right brain, leading possibly to a new insight recognized by the integrated mind.
The analytic functions are in some people "closer to consciousness," being centered in the frontal lobes which represent the most recent major adaptation of the human species. The more basic associative functions are often regarded as an expression of an earlier, in an evolution sense, segment of the brain, and so further into the unconscious. This is the region expressed by artists of all sorts.
Mental relaxation means curtailing the analytic function so as to let the associative region express itself. When one is dreaming, it may be said that one's analytic function and executive control is almost shut down -- though the dream censorship shows that the mental monitor is still active.
So we might say that, at times, what is meant by intuition is that the brain's executive function is refereeing the analytic and associative processes and integrating them into an insight that may prove fruitful.
We regard this form of intuition as belonging to the phenomenal, and not the noumenal world -- though the noumenal world's influence is felt, I daresay, at all times.
[Another view of intuition is that of Henri Bergson.
Discussion of Bergson's ideas
http://plato.stanford.edu/entries/bergson/ ]
But Type 2 intuition is the direct knowledge of something without recourse to the phenomenal world associated with the senses (of which there are many, and not five). This form of communication (though who's to say what is doing the communicating) bypasses or transcends the phenomenal world, as when an individual turns about upon being gazed at from a distance. I realize that such a phenomenon doesn't get much support in the available literature; however, I have on many occasions looked at people from behind from a distance -- say while on public transport -- and noticed them turning about and scanning the middle distance, often with a quizzical look on their faces. Of course such an effect can be "explained," but it seems quite apparent that the person -- more often a woman than a man -- is not turning around for identifiable reasons.
The person who turns about may not even be conscious of what prompted her. Part of her brain has "intuitive" or direct knowledge of another's presence. One can view this effect in terms of a Darwinistic survival advantage. That is, one may say that conscious life forms interact with an unknown world, which, by its nature is immaterial and apparently not subject to the laws of physics as they apply in the phenomenal world.
Sometimes, and perhaps always, Type 1 intuition would seem to have at its core Type 2 intuition.
This is to say there is something other than digital and analog reasoning, whether unconscious or not. Hence, one would not expect an artificial intelligence program, no matter how advanced, to have Type 2 intuition (77).
Of course, it is to be expected that some will disparage such ideas as constituting a "revival of vitalism." However, the anti-vitalist must do more than wave hands against "paranormal" events; he must make serious attempts to exclude the likelihood of what I term noumenal effects.
A confusion here is the claim that, because vitalism can't be tested or falsified in a Popperian sense, the idea is unscientific and must be ignored by scientifically minded people. True, there has been an ongoing battle of statistics between the yea-sayers and the naysayers. But I wonder about attempts to use repeated trials, because it seems unlikely that the independence criterion will work. Here is a case where Bayesian methods might be more appropriate.
In the Newtonian-Laplacian era, prior to the quantum mechanics watershed, the concept of randomness was tied to the belief in a fully deterministic cosmos, in which humans are players on a cosmic stage. The Laplacian clockwork model of the cosmos forbids intrinsic randomness among the cogs, wheels and pulleys. The only thing that might, from a human viewpoint, be construed as random would have been the actions of the élan vital. Of course the devout did not consider the vital spirit to be random at all, but rather saw it as stemming from a direct influence of God.
Curiously, in the minds of some, a clockwork cosmos seemed to imply a need for God. Otherwise, there would be no free will. Humans would be reduced to delusional automatons.
So uncertainty, in the clockwork model, was viewed as simply a lack of sufficient knowledge for making a prediction. In the famous conceit of Laplace, it was thought that a grand robot would be able to calculate every trajectory in the entire universe to any extent forward or backward in time. It was lack of computing power, not inherent randomness, that was thought to be behind the uncertainty in gambling systems.
In the early 20th century, R.A. Fisher introduced random selection as a means of minimizing bias. Or, a better way to express this is that he sought ways to screen out unwanted extraneous biases. The methods chosen for filtering out bias were then seen as means of ensuring randomness, and this perspective is still in common use. So one might then define randomness as a consequence of low (in the ideal case zero) bias in sampling. In the 1930s, however, some prominent probabilists and statisticians, influenced by the new quantum theory, accepted the notion of intrinsic background randomness, leading them to dispense with the idea that probability measures a degree of belief. They thought there was an objective discipline of probability that did not require "subjectivism." To them, quantum mechanics justified the idea that a properly calculated probability result yields a "concrete" truth that is true regardless of the observer.
However, physicists do not tend to see quantum logic as an easy way to dispose of subjectivism. In fact, a number take quite the opposite tack, acknowledging a strong logical case for a "spooky" interface between subject and object. Such a noumenal world -- where space and time are "transcended" -- should indeed interact with the phenomenal world in "weird" ways, reminiscent of the incident in the science fiction film The Matrix when the hero observes a cat move oddly, as in a quick film rewind and replay. The example may be silly, but the concept is not.
Now, as a great many reports of "paranormal" events are subjective first-person accounts, it is easy to dismiss them all under the rubric "anecdotal." Clearly, many scientists want nothing to do with the paranormal because it attracts so many starry eyed "true believers" who have very little scientific background. Such notoriety can be a career destroyer for a young academic.
Bruce Hood, who sits on the board of The Skeptic magazine, is a psychologist who takes a neuroscience view of cognition. To Hood, the fact that the "self" is an integrated composite of physical functions implies that consciousness is an epiphenomenon. Hence, religion, faith and assorted superstitions are delusions; there is no self in need of being saved and no evidence of a soul, which is viewed as paranormal nonsense (78).
Hood on 'the self illusion'
http://www.psychologytoday.com/blog/the-self-illusion/201205/what-is-the-self-illusion
While I agree that phenomenal reality, including in part the reality of self, is interwoven with the perception-cognition apparatus, my point is that if we look closely enough, we apprehend something beyond our usual set of conceits and conceptions. The observer has much to do with forming phenomenal reality, and to me this of itself points to a component of cognition which is non-phenomenal or, as we say, noumenal.
Hence, it is not unreasonable after all to think in terms of a noumenal world in which transactions occur that are beyond our immediate ken. It is safe to say that for quite some time a great many men of high caliber took it as self-evident that the world was flat. And yet there were clues, such as sailing ship masts sinking below the horizon, that suggested a revolutionary way of conceiving of the world, one that at first makes no sense: if the world is round, why don't people fall off the underside? If this round world is spinning, why isn't everyone hurled off?
So I would say that for the flat-earthers, the reality of a round world was hidden, part of an "implicate order" in need of unfolding.
Writing prior to the development of thermonuclear bombs, J.D. Stranathan gives this account of the discovery of deuterium (79):
F.W. Aston in 1927 had obtained a value of 1.00778 +- 0.00015 for the atomic weight of hydrogen, which barely differed from the accepted chemical value of 1.00777 +- 0.00002. The figures were so close that no hydrogen isotope seemed necessary.
But, the discovery of the two heavier isotopes of oxygen forced a reconsideration because their existence meant that the physically derived and chemically derived scales of atomic weight were slightly, but importantly, different. This meant that Aston's value, when converted to the chemical scale, was 1.00750, and this was appreciably smaller than the chemically determined atomic weight. The alleged close agreement was adjudged to be false.
That discrepancy spurred Harold C. Urey, Ferdinand G. Brickwedde and George M. Murphy to hunt for deuterium, which they found and which became a key component in the development of the atomic bomb.
But, this discrepancy turned out to have been the result of a small experimental error. It was shown that the 1927 mass spectrograph value was slightly low, in spite of having been carefully confirmed by Kenneth T. Bainbridge. When the new spectrograph value of 1.0081 was converted to the chemical scale, there was no longer a substantive disagreement. Hence, the corrected figures carried no implication of the existence of deuterium.
Though the chemical and physical scales were revealed to have been slightly different, that revelation, without the 1927 error, would have yielded no reason to expend a great deal of effort searching for heavy hydrogen.
Had heavy water been unknown, would allied scientists have been fearful of German development of atomic fission weapons (British commandos wrecked Germany's heavy water production in occupied Norway) and have spurred the British and American governments into action?
Even had the Manhattan Project been inevitable, it is conceivable that, at the outset of World War II, the existence of heavy water would have remained unknown and might have remained unknown for years to come, thus obviating postwar fulfillment of Edward Teller's dream of a fusion bomb. By the time of deuterium's inevitable (?) discovery, the pressure for development of thermonuclear weapons might well have subsided.
That is, looking back, the alleged probability of the discovery of heavy water was miniscule, and one is tempted to wonder about some noumenal influence that fated humanity with this almost apocalyptic power.
At the least, we have the butterfly effect on steroids.
At any rate, the idea here is not to idolize paranormal phenomena, but rather to urge that there is no sound epistemological reason to justify the "Darwinistic" (or, perhaps, Dawkinsistic) edict of ruling out any noumenal world (or worlds) and the related prohibition of consideration of any interaction between phenomenal and noumenal worlds.
In fact, our attempt to get a feel for the noumenal world is somewhat analogous to the work of Sigmund Freud and others in examining the unconscious world of the mind in order to find better explanations of superficially cognized behaviors. (Yet I hasten to add that, though Carl Jung's brilliance and his concern with what I term the noumenal world cannot be gainsaid, I find that he has often wandered too far from the beaten path even for my tastes.)
A note on telepathy
There is a great deal of material that might be explored on "paranormal" phenomena and their relation to a noumenal world. But, we will simply give one psychologist's thoughts on one "noumenal" subject. Freud was quite open-minded about the possibility of extra-normal thought transference.
In New Introductory Lectures on Psycho-Analysis, he writes: "One is led to the suspicion that this is the original, archaic method of communication between individuals and in the course of phylogenetic evolution it has been replaced with the better method of giving information via signals which are picked up by the sense organs."
He relates a report of Dorothy Burlingham, a psychoanalyst and "trustworthy witness." (She and colleague Anna Freud did pioneering work in child psychology.)
A mother and child were in analysis together. One day the mother spoke during analysis of a gold coin that had played a particular part in one of her childhood experiences. On her return home, the woman's son, who was about 10, came to her room and gave her a gold coin which he asked her to keep for him. Astonished, she asked him where he had got it. It turned out that it had been given him as a birthday present a few months previously, but there was no obvious reason why he had chosen that time to bring her the coin.
Freud sees this report as potential evidence of telepathy. One might also suspect it as an instance of Jungian "synchronicity" or of the reality construction process as discussed in Toward.
At any rate, a few weeks later the woman, on her analyst's instructions, sat down to write an account of the gold coin incident. Just then her child approached her and asked for his coin back, as he wanted to show it during his analysis session.
Freud argues that there is no need for science to fear telepathy (though his collaborator, Ernest Jones, certainly seems to have feared the ridicule the subject might bring); Freud, who never renounced his atheism, remained open-minded not only about telepathy, but about the possibility of other extra-normal phenomena.
From our perspective, we argue that reports of "paranormal" communication and other such phenomena tip us off to an interaction with a noumenal world that is the reality behind appearances -- appearances being phenomena generally accepted as ordinary, whether or not unusual.
See my post:
Freud and telepathy
http://randompaulr.blogspot.com/2013/10/freud-on-telepathy.html
Freud, of course, was no mathematician and could only give what seemed to him a reasonable assessment of what was going on. Keynes's view was similar to Freud's. He was willing to accept the possibility of telepathy but rejected the "logical limbo" of explaining that and other "psychic phenomena" with other-worldly spirits.
Many scientists, of course, are implacably opposed to the possibility of telepathy in any form, and there has been considerable controversy over the validity of statistical studies for and against such an effect.
On the other hand, Nobelist Josephson has taken on the "scientific system" and upheld the existence of telepathy, seeing it as a consequence of quantum effects.
Josephson's page of psychic phenomena links
http://www.tcm.phy.cam.ac.uk/~bdj10/psi.html
In the name of Science
The tension between Bayesian reasoning and the intrinsic background randomness imputed to quantum physics perforce implies Wheeler's "participatory universe" in which perception and "background reality" (the stage on which we act, with the props) merge to an extent far greater than has previously been suspected in the halls of academia -- despite herculean efforts to exorcise this demon from the Realm of Science. In other words, we find that determinism and indeterminism are inextricably entangled at the point where consciousness meets "reality."
Nevertheless, one cannot avoid the self-referencing issue. In fact, if we suspend the continuity assumptions of space and time, which quantum theory tells us we should, we arrive at infinite regress. But even with continuity assumptions, one can see infinite regress in say an asymmetric three-sink vector field. Where is the zero point at time Tx? In a two-sink field, the symmetry guarantees that the null point can be exactly determined. But in a three-sink field that is not symmetric, one always faces something analogous to quantum uncertainty -- and which also points to problems of infinite regress.
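To make the vector-field analogy a little more concrete, here is a minimal numerical sketch (my own construction; the text does not specify the field, so inverse-square point sinks are assumed). With two equal sinks the null point is pinned to the midpoint by symmetry; with three unequal sinks there is no symmetry to appeal to, and different starting guesses can settle on different stagnation points, or on none.

```python
import numpy as np
from scipy.optimize import fsolve

def sink_field(x, sinks, strengths):
    """Velocity at point x due to inverse-square point sinks (flow toward each sink)."""
    x = np.asarray(x, dtype=float)
    v = np.zeros(2)
    for p, k in zip(sinks, strengths):
        d = x - np.asarray(p, dtype=float)
        v -= k * d / np.linalg.norm(d) ** 3
    return v

# Two equal sinks: symmetry fixes the null point at the midpoint.
print(fsolve(sink_field, x0=[0.4, 0.1], args=([(0, 0), (1, 0)], [1.0, 1.0])))  # ~[0.5, 0.0]

# Three unequal sinks: the zeros must be hunted numerically,
# and the answer found depends on where the search starts.
sinks, strengths = [(0, 0), (1, 0), (0.3, 0.9)], [1.0, 2.0, 0.7]
print(fsolve(sink_field, x0=[0.5, 0.3], args=(sinks, strengths)))
print(fsolve(sink_field, x0=[0.2, 0.6], args=(sinks, strengths)))
```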
We can think of this in terms of a nonlinear feedback control system. Some such systems maintain an easily understood homeostasis. The thermostat is a case in point. But others need not follow such a simple path to homeostasis. A particular input value may yield a highly unpredictable output within the constraint of homeostasis. In such systems, we tend to find thresholds and properties to be the best we can do in the way of useful information. Probabilities may help us in estimating properties, as we find in the behavior of idealized gas systems.
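A toy contrast (again my own sketch, not from the text): a bang-bang thermostat settles into an easily predicted band around its setpoint, while a nonlinear update rule such as the logistic map stays strictly bounded yet hops unpredictably within its bounds.

```python
def thermostat(temp=15.0, setpoint=20.0, steps=30):
    """Bang-bang control: heat when below the setpoint, drift down when above."""
    history = []
    for _ in range(steps):
        temp += 0.8 if temp < setpoint else -0.5
        history.append(round(temp, 2))
    return history          # settles into a narrow band near 20

def logistic_map(x=0.2, r=3.9, steps=30):
    """Bounded in (0, 1) but effectively unpredictable from step to step."""
    history = []
    for _ in range(steps):
        x = r * x * (1 - x)
        history.append(round(x, 3))
    return history

print(thermostat())
print(logistic_map())
```

For systems of the second kind, as noted above, thresholds, properties and probabilities are about the best information we can extract.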
However, these probabilities cannot really be frequency based, except in the classical sense based on the binomial distribution. Trials can't be done. E.T. Jaynes thought that the Shannon approach of simply expropriating what I call the classical approach sufficed for molecular physics. However, I add that when Einstein used probabilities to establish that Brownian motion conformed to the behavior of jostling atoms, he was not only implicitly using the classical approach, but also what we might call a propensity approach in which the presumed probabilities were assigned in accordance with system start-up information, which in this case was given by Newtonian and Maxwellian mechanics.
The above considerations suggest that it is a mistake to assume that human affairs are correctly portrayed in terms of intrinsic randomness played out in some background framework that is disentangled from the observer's consciousness.
In fact, we may see some kind of malleable interconnectedness that transcends the phenomenal world.
This also suggests that linear probability reckoning works well enough within limits. We use the word linear to mean that the influences among events are small enough so as to be negligible, permitting us the criterion of independence. (Even conditional probabilities rest on an assumption of independence at some point.) The limits are not so easily defined, as we have no system of nonlinear differential equations to represent the sharing of "reality" among minds or the balance between the brain's reality construction versus "external" reality.
Certainly in the extremes, probability assessments do not seem terribly satisfactory within a well-wrought metaphysical system, and should not be so used, even though "linear" phenomenal randomness is viewed as a component of the Creed of Science, being a basic assumption of many a latter day atheist, whether or not scientifically trained.
"Everyone knows" that some phenomena are considered to be phantasms of the mind, whether they be optical or auditory illusions or delusions caused by temporary or permanent brain impairment, and that, otherwise, these phenomena are objective, meaning that there is wide agreement that such phenomena exist independently of any observer, especially if such phenomena have been tested and verified by an accepted scientific process. However, the underlying assumptions are much fuzzier than the philosophical advocates of "hard science" would have us believe.
So this suggests there exists some holistic "uber force," or organizing principle. Certainly we would not expect an atheist to believe this uber force is conscious, though he or she might, like Einstein, accept the existence of such an entity in Spinoza's pan-natural sense. On the other hand, neither Einstein, nor other disciples of Spinoza, had a logical basis for rejecting the possibility that this uber force is conscious (and willing to intervene in human affairs). This uber force must transcend the laws of physics of this universe (and any clonelike cosmoses "out there"). Here is deep mystery; "dark energy" is a term that comes to mind.
I have not formalized the claim for such an uber force. However we do have Goedel's ontological proof of God's existence, though I am unsure I agree that such a method is valid. An immediate thought is that the concept of "positive" requires a subjective interpretation. On the other hand, we have shown that the human brain/mind is a major player in the construction of so-called "concrete" phenomenal reality.
Goedel's ontological proof of God's existence
http://math.stackexchange.com/questions/248548/godels-ontological-proof-how-does-it-work
Background of god theorem
https://en.wikipedia.org/wiki/G%C3%B6del's_ontological_proof
Formalization, mechanization and automation of Gödel's proof of god's existence
http://arxiv.org/abs/1308.4526
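For orientation, here is a compressed (and necessarily rough) rendering of the proof's skeleton, with $P(\varphi)$ read as "$\varphi$ is a positive property" -- the undefined primitive on which my friend's objection below turns; several of Goedel's axioms, including those concerning essences and necessary existence, are omitted:

$$
\begin{aligned}
&\textbf{Def.}\quad G(x) \;\leftrightarrow\; \forall \varphi\,[\,P(\varphi) \to \varphi(x)\,]
&&\text{($x$ is God-like: it has every positive property)}\\
&\textbf{Ax.}\quad P(G), \qquad P(\varphi) \to \Box P(\varphi)
&&\text{(God-likeness is positive; positivity holds necessarily)}\\
&\textbf{Th.}\quad \Diamond\,\exists x\, G(x) \;\;\Longrightarrow\;\; \Box\,\exists x\, G(x)
&&\text{(possible existence of a God-like being entails its necessary existence)}
\end{aligned}
$$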
In a private communication, a mathematician friend responded thus:
"For example, BMW is a good car. BMW produces nitrous oxide pollution. Therefore nitrous oxide pollution is good."
My friend later added: "But maybe the point of the ontological proof is not 'good' but 'perfect.' God is supposed to be perfect. A perfect car would not pollute."
Again, the property of goodness requires more attention; though I remain dubious, I am not fully unpersuaded by Goedel's offering.
In this respect, we may ponder Tegmark's mathematical universe hypothesis, which he takes to imply that all mathematical structures exist.
Tegmark's mathematical universe paper
http://arxiv.org/pdf/gr-qc/9704009v2.pdf
Tegmark's mathematical universe hypothesis has been stated thus: Our external physical reality is a mathematical structure. That is, the physical universe is mathematics in a well-defined sense. So in worlds "complex enough to contain self-aware substructures," these entities "will subjectively perceive themselves as existing in a physically 'real' world." The hypothesis suggests that worlds corresponding to different sets of initial conditions, physical constants, or altogether different equations may be considered equally real. Tegmark elaborates his conjecture into the computable universe hypothesis, which posits that all computable mathematical structures exist.
Here I note my paper:
On Hilbert's sixth problem
http://kryptograff.blogspot.com/2007/06/on-hilberts-sixth-problem.html
which argues against the notion that the entire cosmos can be modeled as a Boolean circuit or Turing machine.
An amusing aside:
1. Assuming the energy resources of the universe are finite, there is a greatest expressible integer.
2. (The greatest expressible number) + 1.
3. Therefore God exists.
Arthur Eddington once observed that biologists were more likely to be in the camp of strict materialists than physicists. Noteworthy examples are Sigmund Freud, who considered himself a biologist, and J.B.S. Haldane, a pioneer in population genetics. Both came of age during the first wave of the Darwinian revolution of the 19th century, a paradigm that captured many minds as a model that successfully screened out God just as, it was thought, the clockwork cosmos of Laplace had disposed of the need for the God hypothesis. Freud and Haldane were convinced atheists, and it is safe to say that Freud's view of extraordinary communication was thoroughly materialist. Haldane's severe materialism can be seen in the context of his long-term involvement in communism.
Though physicists often remained reticent about their views on God, those who understood the issues of quantum theory were inclined toward some underlying transcendency. This situation remains as true today as it was in the 1930s, as we see with the effectively phenomenalist/materialist world view of the biologist Richard Dawkins, who is conducting a Lennonist crusade against belief in God.
A previous generation witnessed Bertrand Russell, the logician, in the role of atheist crusader. Russell, with his colleague Alfred North Whitehead, had tried in their Principia Mathematica to show that formal knowledge could be described completely and consistently. One can see that such an achievement would have bolstered the cause of atheism. If, at least in principle, the universe can be "tamed" by human knowledge, then one can explain every step of every process without worry about God, or some transcendental entity. God, like the ether, would have been consigned to the rubbish heap of history.
Of course, in 1931 Goedel proved this goal an illusion, using Principia Mathematica to demonstrate his proof. Goedel's incompleteness theorem caught the inrushing tide of the quantum revolution, which called the traditional scientific picture of external reality into question. The revolution had in part been touched off by experimental confirmation of Louis de Broglie's proposed matter waves, an idea built on Einstein's energy/matter relation. So the doctrine of materialism was not only technically in question but, because matter waves obeyed probability amplitudes, the very existence of matter had become a very strange puzzle, a situation that continues today.
Even before this second quantum revolution, the astrophysicist Arthur Eddington had used poetic imagery to put into perspective Einstein's spacetime weirdness (77aa).
Perhaps to move
His laughter at their quaint opinions wide
Hereafter, when they come to model Heaven
And calculate the stars, how they will wield
The mighty frame, how build, unbuild, contrive
To save appearances.
-- John Milton, Paradise Lost
Quantum weirdness only strengthened Eddington's belief in a noumenal realm (77a).
"A defence of the mystic might run something like this: We have acknowledged that the entities of physics can from their very nature form only a partial aspect of the reality. How are we to deal with the other part?" Not with the view that "the whole of consciousness is reflected in the dance of electrons in the brain" and that "quasi-metrical aspects" suffice.
Eddington, in countering Russell on what Eddington said was Russell's charge of an attempt to "prove" distinctive beliefs of religion, takes aim at loose usage of the word "reality," warning that it is possible to employ that word as a talisman providing "magic comfort." And, we add that cognitive dissonance with internal assumptions and rationalizations usually provokes defensive reactions.
"We all know that there are regions of the human spirit untrammeled by the world of physics" and that are associated with an "Inner Light proceeding from a greater power than ours."
Another English astrophysicist, James Jeans, also inclined toward some noumenal presence (77a).
Jeans writes that the surprising results of the theories of relativistic and quantum physics lead to "the general recognition that we are not yet in contact with ultimate reality." We are only able to see the shadows of that reality. Adopting John Locke's assertion that "the real essence of substances is unknowable," Jeans argues that scientific inquiry can only study the laws of the changes of substances, which "produce the phenomena of the external world."
In a chapter entitled, "In the Mind of Some Eternal Spirit," Jeans writes: "The essential fact is simply that all the pictures which science now draws of nature, and which alone seem capable of according with observational fact, are mathematical pictures." Or, I would say, the typical human brain/mind's empirically derived expectations of physical reality are inapplicable. In a word, the pictures we use for our existence are physically false, delusional -- though that is not to say the delusional thinking imparted via the cultic group mind and by the essentials of the brain/mind system (whatever they are) is easily dispensed with, or even safe to dispense with absent something reliably superior.
In a play on the old epigram that the cosmos had been designed by a "Great Architect," Jeans writes of the cosmos "appearing to have been designed by a pure mathematician."
He adds: "Our remote ancestors tried to interpret nature in terms of anthropomorphic concepts and failed. The efforts of our nearer ancestors to interpret nature on engineering lines proved equally inadequate. Nature refused to accommodate herself to either of these man-made moulds. On the other hand, our efforts to interpret nature in terms of the concepts of pure mathematics have, so far, proved brilliantly successful."
Further, "To my mind the laws which nature obeys are less suggestive of those which a machine obeys in its motion than those which a musician obeys in writing a fugue, or a poet in composing a sonnet."
Remarks:
Materialism, writes Heisenberg, is a concept that at root is found wanting. "For the smallest units of matter are, in fact, not physical objects in the ordinary sense of the word; they are forms, structures or -- in Plato's sense -- Ideas, which can be unambiguously spoken of in the language of mathematics."
Heisenberg relates the paradox of Parmenides: "Only being is; non-being is not. But if only being is, there cannot be anything outside this being that articulates it or could bring about changes. Hence being will have to be conceived of as eternal, uniform, and unlimited in space and time. The changes we experience can thus be only an illusion."
Though initially unnerved that his wave mechanics could not resolve the "quantum jump" problem, Erwin Schroedinger's concept of reality evolved.
Schroedinger did not care for the idea he attributes to another quantum pioneer, Pascual Jordan, that quantum indeterminacy is at the basis of free will, an idea echoed in some ways by Penrose. If free will steps in to "fill the gap of indeterminacy," writes Schroedinger, the quantum statistics will change, thus disrupting the laws of nature.
In the same article, Schroedinger talks about how scientific inquiry can't cope very well, if at all, with what we have called noumena:
"The scientific picture of the real world around me is very deficient. It gives a lot of factual information, puts all our experience in a magnificently consistent order, but it is ghastly silent about all and sundry that is really near to our heart, that really matters to us. It cannot tell us a word about red and blue, bitter and sweet, physical pain and physical delight; it knows nothing of beautiful and ugly, good or bad, God and eternity. Science sometimes pretends to answer questions in these domains, but the answers are very often so silly that we are not inclined to take them seriously."
Further, "The scientific world-picture vouchsafes a very complete understanding of all that happens -- it makes it just a little too understandable. It allows you to imagine the total display as that of a mechanical clockwork which, for all that science knows, could go on just the same as it does, without there being consciousness, will, endeavour, pain and delight and responsibility connected with it -- though they actually are."
Hence, "this is the reason why the scientific worldview contains of itself no ethical values, no aesthetical values, not a word about our own ultimate scope or destination, and no God, if you please" (77a).
Elsewhere, he argues that Science is repeatedly buffeted by the unjust reproach of atheism. When we use the clockwork model of the cosmos, "we have used the greatly simplifying device of cutting our own personality out, removing it; hence it is gone, it has evaporated, it is ostensibly not needed” (77a).
My thought is that such a method of depersonalization has strong advantages, if the scope of inquiry is limited, as in Shannon's depersonalized information. But depersonalizing information for specific purposes does not imply, of course, that information can exist with no person at all for whom it is information.
"No personal god," says Schroedinger, "can form part of a world model that has only become accessible at the cost of removing everything personal from it. We know, when God is experienced, this is an event as real as an immediate sense perception or as one’s own personality. Like them he must be missing in the space-time picture."
Though Schroedinger does not think that physics is a good vehicle for religion, that fact does not make him irreligious, even if he incurs blame from those who believe that "God is spirit.”
And yet Schroedinger favored Eastern philosophy: “Looking and thinking in that manner you may suddenly come to see, in a flash, the profound rightness of the basic conviction in Vedanta: it is not possible that this unity of knowledge, feeling and choice which you call your own should have sprung into being from nothingness at a given moment not so long ago; rather this knowledge, feeling, and choice are essentially eternal and unchangeable and numerically one in all men, nay in all sensitive beings.”
Schroedinger upheld "the doctrine of the Upanishads" of the "unification of minds or consciousnesses" despite an illusion of multiplicity.
In a similar vein, Wolfgang Pauli, another quantum pioneer, relates quantum weirdness to the human means of perception:
"For I suspect that the alchemistical attempt at a unitary psychophysical language miscarried only because it was related to a visible concrete reality. But in physics today we have an invisible reality (of atomic objects) in which the observer intervenes with a certain freedom (and is thereby confronted with the alternatives of "choice" or "sacrifice"); in the psychology of the unconscious we have processes which cannot always be unambiguously ascribed to a particular subject. The attempt at a psychophysical monism seems to me now essentially more promising, given the relevant unitary language (unknown as yet, and neutral in regard to the psychophysical antithesis) would relate to a deeper invisible reality. We should then have found a mode of expression for the unity of all being, transcending the causality of classical physics as a form of correspondence (Bohr); a unity of which the psychophysical interrelation, and the coincidence of a priori instinctive forms of ideation with external perceptions, are special cases. On such a view, traditional ontology and metaphysics become the sacrifice, but the choice falls on the unity of being" (77a).
Pauli appears here to be sympathetic with the notion of Jungian archetype, along with possibly something like the Jungian collective unconscious. Though he relates quantum weirdness to the human means of perception, one cannot be sure of any strong support of Jung's synchronicity theory.
Pauli, says his friend Heisenberg, was very fussy about clear thinking in physics and arrived at the idea of a psychophysical interrelation only after painstaking reflection. Even so, it should be noted that Pauli had been a patient of Jung and remained on good terms with Jung for many years.
Einstein made strict physical causality an article of faith, an outlook that underlies his Spinoza-style atheism. That view, however, did not make him irreligious, he argues. Despite church-state persecution of innovative thinkers, "I maintain that the cosmic religious feeling is the strongest and noblest motive for scientific research" (77a).
Yet he makes plain his belief in strict causality, which meant there is no need for a personal God to interfere in what is already a done deal. Consider the "phenomenological complex" of a weather system, he says. The complex is so large so that in most cases of prediction "scientific method fails us." Yet "no one doubts that we are confronted with a causal connection whose causal components are in the main known to us."
Leon Brillouin, on the other hand, says bluntly that because exact predictability is virtually impossible, a statement such as Einstein's was an assertion of faith that was not the proper province of science (77xa). It is curious that Brillouin uses the logical positivist viewpoint to banish full causality from the province of science, just as Einstein used the same philosophical viewpoint to cast the luminiferous ether into outer darkness.
Einstein, of course, was swept up in the Darwinistic paradigm of his time, which, I believe, is reflected in his point that early Jewish religion was a means of dealing with fear, but that it evolved into something evincing a sense of morality as civilization advanced. He believed that spiritual savants through the ages tended to have a Buddhist-style outlook, in which a personal, anthropomorphic God is not operative. Einstein did on occasion, however, refer to a "central order" in the cosmos, though he plainly did not have in mind Bohm's implicate order, which accepts quantum bilocalism.
Einstein: “I see on the one hand the totality of sense-experiences, and, on the other, the totality of the concepts and propositions which are laid down in books. The relations between concepts and propositions among themselves and each other are of a logical nature, and the business of logical thinking is strictly limited to the achievement of the connection between concepts and propositions among each other according to firmly laid down rules, which are the concern of logic. The concepts and propositions get 'meaning,' viz., 'content,' only through their connection with sense-experiences. The connection of the latter with the former is purely intuitive, not itself of a logical nature. The degree of certainty with which this relation, viz., intuitive connection, can be undertaken, and nothing else, differentiates empty fantasy from scientific 'truth'" (77bb).
The idea that abstract concepts draw meaning from the content and context of sense experiences is a core belief of many. But is it true? It certainly is an unprovable, heuristic allegation. What about the possibility that meaning is imparted via and from a noumenal realm? If you say, this is a non-falsifiable, non-scientific speculation, you must concede the same holds for Einstein's belief that meaning arises via the sensory apparatus. We note further that "meaning" and "consciousness" are intertwined concepts. Whence consciousness? There can never be a scientific answer to that question in the Einsteinian philosophy. At least, the concept of a noumenal world, or Bohmian implicate order, leaves room for an answer.
In buttressing his defense of strict causality, Einstein lamented the "harmful effect upon the progress of scientific thinking in removing certain fundamental concepts from the domain of empiricism, where they are under our control, to the intangible heights of the a priori" (77b). Logic be damned, is my take on this remark.
Louis de Broglie, another pioneer of quantum physics, at first accepted matter-wave duality, but later was excited by David Bohm's idea of what might be termed "saving most appearances" by conceding quantum bilocalism.
What, de Broglie asks, is the "mysterious attraction acting on certain men that urges them to dedicate their time and labours to works from which they themselves often hardly profit?" Here we see the dual nature of man, he says. Certain people seek to escape the world of routine by aiming toward the ideal. Yet, this isn't quite enough to explain the spirit of scientific inquiry. Even when scientific discoveries are given a utilitarian value, one can still sense the presence of an "ontological order."
We are nowhere near a theory of everything, de Broglie says. Yet "it is not impossible that the advances of science will bring new data capable" of clarifying "great problems of philosophy." Already, he writes, new ideas about space and time, the various aspects of quantum weirdness and "the profound realities which conceal themselves behind natural appearances" provide plenty of philosophical fodder (77a).
Scientific inquiry yields technology which "enlarges" the body by amplifying the power of brawn and perhaps brain. But, such vast amplification has resulted in massive misery as well as widespread social improvements. "Our enlarged body clamours for an addition to the spirit," says de Broglie, quoting Henri Bergson.
The man who ignited the quantum revolution, Max Planck, warned that because science can never solve the ultimate riddle of nature, science "cannot really take the place of religion" (77a). If one excludes "nihilistic" religion, "there cannot be any real opposition between religion and science," he writes. "The greatest thinkers of all ages were deeply religious" even if quiet about their religious thoughts.
"Anybody who has been seriously engaged in scientific work of any kind realizes that over the entrance to the temple of science are written the words: Ye must have faith."
This last sentiment seems boringly conventional. But how would a person, even a strict determinist, proceed without a strong conviction that the goal is achievable? As Eddington jokes, "Verily it is easier for a camel to pass through the eye of a needle than for a scientific man to pass through a door. And whether the door be barn door or church door it might be wiser that he should consent to be an ordinary man and walk in rather than wait till all the difficulties involved in a really scientific ingress are resolved."
Would, for example, the atheist Alan Turing have achieved so much had he not believed that his initial ideas would bear fruit? Would an AI program that passes the Turing test encounter an idea that provokes it to highly focused effort in anticipation of some reward, such as the satisfaction of solving a problem? Can qualia, or some equivalent, be written into a computer program?
In The Grammar of Science, Pearson has an annoying way of personifying Science, almost as if it is some monolithic God. I realize that Pearson is using a common type of metaphorical shorthand, but nevertheless he gives the impression that he and Science are One, a criticism that is applicable to various thinkers today.
Let us digress for a bit and consider what is meant by the word "science."
At a first approximation, one might say the word encompasses the publications and findings of people who interrogate nature in a "rational" manner.
That is, the scientific methods attempt to establish relations among various phenomena that do not contradict currently accepted major theories. The scientific investigator has much in common with the police detective. He or she often uses a process of elimination, coupled with the sketch of a narrative provided by various leads or "facts." The next step is to fill out the narrative to the degree necessary to establish the "truth" of a particular finding. It is often the case that more than one narrative is possible. The narratives that are crowned with the title "theory" (in the scientific sense of the word) are those which seem to be most internally consistent and which also are not dissonant with the background framework "reality" (and if it is, that theory will encounter strong resistance).
So "science" is a word used to describe the activities of a certain group of people who interrogate nature in accord with certain group norms -- norms concerning process and norms concerning philosophy or metaphysics (denial of such interests nevertheless implies a metaphysical belief set).
One idea of Popper's widely accepted among scientists is that if a statement is not potentially falsifiable via advances in experimental technology, then that statement is not scientific and not a proper focus of scientists (though light-hearted speculations may be tolerated).
Hence, the entire scientific enterprise is not scientific, as its underlying assumptions, which are metaphysical, cannot be falsified by experimental or logical means.
At any rate, many will agree that science as a monolithic entity does not exist. Science does not do anything. Science does not prove anything. The word is really a convenient handle that by itself cannot properly summarize a complex set of human activities. Scientists of course all know that there is no great being named Science. And yet when various thinkers, including scientists, employ this anthropomorphism, they often tend to give this being certain qualities, such as rationality, as distinct from, say, irrational Religion, a straw man which is also an anthropomorphism for a wide range of human activities.
In other words, we should beware scientists propagating their faith in the god Science. They may say they don't mean to do that, but, mean it or not, that is in fact what quite a few of them do.
Though physicists often remained reticent about their views on God, those who understood the issues of quantum theory were inclined toward some underlying transcendency. That remains as true today as it was in the 1930s, in contrast with the effectively phenomenalist/materialist world view of the biologist Richard Dawkins, who conducts a Lennonist crusade against belief in God.
A previous generation witnessed Bertrand Russell, the logician, in the role of atheist crusader. Russell, with his colleague Alfred North Whitehead, had tried in their Principia Mathematica to show that formal knowledge could be set out completely and consistently. One can see that such an achievement would have bolstered the cause of atheism. If, at least in principle, the universe can be "tamed" by human knowledge, then one can explain every step of every process without worrying about God, or some other transcendental entity. God, like the ether, would have been consigned to the rubbish heap of history.
Of course, in 1931 Goedel proved this goal an illusion, framing his proof in terms of the system of Principia Mathematica itself. Goedel's incompleteness theorem caught the inrushing tide of the quantum revolution, which called traditional scientific notions of external reality into question. The revolution had in part been touched off by experimental confirmation of Louis de Broglie's proposed matter waves, an idea that drew on Einstein's energy/matter relation. So the doctrine of materialism was not only technically in question; because matter waves obeyed probability amplitudes, the very existence of matter had become a very strange puzzle, a situation that continues today.
Even before this second quantum revolution, the astrophysicist Arthur Eddington had used poetic imagery to put into perspective Einstein's spacetime weirdness (77aa).
Perhaps to move
His laughter at their quaint opinions wide
Hereafter, when they come to model Heaven
And calculate the stars, how they will wield
The mighty frame, how build, unbuild, contrive
To save appearances.
-- John Milton, Paradise Lost
Quantum weirdness only strengthened Eddington's belief in a noumenal realm (77a).
"A defence of the mystic might run something like this: We have acknowledged that the entities of physics can from their very nature form only a partial aspect of the reality. How are we to deal with the other part?" Not with the view that "the whole of consciousness is reflected in the dance of electrons in the brain" and that "quasi-metrical aspects" suffice.
Eddington, responding to what he took to be Russell's charge that he had attempted to "prove" the distinctive beliefs of religion, takes aim at loose usage of the word "reality," warning that it is possible to employ that word as a talisman providing "magic comfort." And, we add, cognitive dissonance with internal assumptions and rationalizations usually provokes defensive reactions.
"We all know that there are regions of the human spirit untrammeled by the world of physics" and that are associated with an "Inner Light proceeding from a greater power than ours."
Another English astrophysicist, James Jeans, also inclined toward some noumenal presence (77a).
Jeans writes that the surprising results of relativity and quantum physics lead to "the general recognition that we are not yet in contact with ultimate reality." We are only able to see the shadows of that reality. Adopting John Locke's assertion that "the real essence of substances is unknowable," Jeans argues that scientific inquiry can only study the laws of the changes of substances, which "produce the phenomena of the external world."
In a chapter entitled "In the Mind of Some Eternal Spirit," Jeans writes: "The essential fact is simply that all the pictures which science now draws of nature, and which alone seem capable of according with observational fact, are mathematical pictures." Or, I would say, the typical human brain/mind's empirically derived expectations of physical reality are inapplicable. In a word, the pictures we use to navigate our existence are physically false, delusional. That is not to say the delusional thinking imparted via the cultic group mind and by the essentials of the brain/mind system (whatever they are) is easily dispensed with, or even safe to dispense with absent something reliably superior.
In a play on the old epigram that the cosmos had been designed by a "Great Architect," Jeans writes of the cosmos "appearing to have been designed by a pure mathematician."
He adds: "Our remote ancestors tried to interpret nature in terms of anthropomorphic concepts and failed. The efforts of our nearer ancestors to interpret nature on engineering lines proved equally inadequate. Nature refused to accommodate herself to either of these man-made moulds. On the other hand, our efforts to interpret nature in terms of the concepts of pure mathematics have, so far, proved brilliantly successful."
Further, "To my mind the laws which nature obeys are less suggestive of those which a machine obeys in its motion than those which a musician obeys in writing a fugue, or a poet in composing a sonnet."
Remarks:
- We are unsure whether Jeans believed in a God who intervenes in human affairs. Often, the denial of "anthropomorphism" includes denial of the basis of Christianity. But what does he mean when he says that, contrary to Kant's idea that the "mathematical universe" was a consequence of wearing mathematical eyeglasses, "the mathematics enters the universe from above instead of from below"? He seems to mean that the mathematics of physics corresponds to an objective reality capable of being discerned by the human mind in terms of mathematics.
- Though Jeans saw the "engineering" paradigm as fundamentally flawed, that message has failed to make much headway among the rank and file of scientific activity, where full determinism is still accepted as a practical article of faith, based on the incorrect assumption that physical indeterminism is relevant only in very limited areas.
- In another book, his enthusiasm for the mathematics is tempered: he criticizes Werner Heisenberg for focusing on the lesser problem of the mathematics of quantum relations while ignoring the greater problem of observer influence.
Materialism, writes Heisenberg, is a concept that at root is found wanting. "For the smallest units of matter are, in fact, not physical objects in the ordinary sense of the word; they are forms, structures or -- in Plato's sense -- Ideas, which can be unambiguously spoken of in the language of mathematics."
Heisenberg relates the paradox of Parmenides: "Only being is; non-being is not. But if only being is, there cannot be anything outside this being that articulates it or could bring about changes. Hence being will have to be conceived of as eternal, uniform, and unlimited in space and time. The changes we experience can thus be only an illusion."
Though Erwin Schroedinger was initially unnerved that his wave mechanics could not resolve the "quantum jump" problem, his concept of reality evolved.
Schroedinger did not care for the idea he attributes to another quantum pioneer, Pascual Jordan, that quantum indeterminacy is at the basis of free will, an idea echoed in some ways by Penrose. If free will steps in to "fill the gap of indeterminacy," writes Schroedinger, the quantum statistics will change, thus disrupting the laws of nature.
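A toy simulation may help show why Schroedinger thought so. The sketch below is my illustration only, not anything drawn from Schroedinger or Jordan; the bias_fraction parameter is a made-up stand-in for a hypothetical "free choice" overriding individual outcomes. A two-outcome event is given the quantum probability 1/2; if the will overrides even a small fraction of outcomes, the long-run frequency drifts away from the value the statistics prescribe.

```python
import random

def run_trials(n: int, bias_fraction: float = 0.0) -> float:
    """Simulate n two-outcome events whose quantum probability is 1/2.

    bias_fraction is the share of events in which a hypothetical 'free choice'
    forces the outcome to 1; any nonzero value shows up as a deviation from
    the 0.5 long-run frequency that the statistics prescribe."""
    hits = 0
    for _ in range(n):
        if random.random() < bias_fraction:
            hits += 1                       # outcome forced by the 'will'
        else:
            hits += random.random() < 0.5   # outcome left to chance
    return hits / n

print(run_trials(100_000))        # close to 0.50, as the statistics demand
print(run_trials(100_000, 0.02))  # close to 0.51, a detectable disturbance
```

Which is just Schroedinger's complaint in miniature: a will that actually did anything would leave statistical fingerprints.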
In the same article, Schroedinger talks about how scientific inquiry can't cope very well, if at all, with what we have called noumena:
"The scientific picture of the real world around me is very deficient. It gives a lot of factual information, puts all our experience in a magnificently consistent order, but it is ghastly silent about all and sundry that is really near to our heart, that really matters to us. It cannot tell us a word about red and blue, bitter and sweet, physical pain and physical delight; it knows nothing of beautiful and ugly, good or bad, God and eternity. Science sometimes pretends to answer questions in these domains, but the answers are very often so silly that we are not inclined to take them seriously."
Further, "The scientific world-picture vouchsafes a very complete understanding of all that happens -- it makes it just a little too understandable. It allows you to imagine the total display as that of a mechanical clockwork which, for all that science knows, could go on just the same as it does, without there being consciousness, will, endeavour, pain and delight and responsibility connected with it -- though they actually are."
Hence, "this is the reason why the scientific worldview contains of itself no ethical values, no aesthetical values, not a word about our own ultimate scope or destination, and no God, if you please" (77a).
Elsewhere, he argues that Science is repeatedly buffeted by the unjust reproach of atheism. When we use the clockwork model of the cosmos, "we have used the greatly simplifying device of cutting our own personality out, removing it; hence it is gone, it has evaporated, it is ostensibly not needed” (77a).
My thought is that such a method of depersonalization has strong advantages when the scope of inquiry is limited, as in Shannon's depersonalized notion of information. Depersonalizing information for specific purposes does not imply, of course, that no persons are needed to justify the existence of information.
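To make concrete what "depersonalized" means here, consider a minimal sketch of the Shannon measure (my illustration, not drawn from Shannon's own text): the quantity depends only on symbol statistics, never on what the message means to anyone.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol, computed from symbol frequencies alone."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# The measure is blind to meaning: scrambling the letters leaves the
# frequencies, and hence the entropy, unchanged.
print(shannon_entropy("in the beginning"))
print(shannon_entropy("".join(sorted("in the beginning"))))  # same value: order and sense are irrelevant
```

A prayer and a stock quotation with the same letter statistics carry the same number of bits; the personal significance has been cut out by design, which is precisely why the measure is so useful within its limited scope.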
"No personal god," says Schroedinger, "can form part of a world model that has only become accessible at the cost of removing everything personal from it. We know, when God is experienced, this is an event as real as an immediate sense perception or as one’s own personality. Like them he must be missing in the space-time picture."
Though Schroedinger does not think that physics is a good vehicle for religion, that fact does not make him irreligious, even if he incurs blame from those who believe that "God is spirit.”
And yet Schroedinger favored Eastern philosophy: “Looking and thinking in that manner you may suddenly come to see, in a flash, the profound rightness of the basic conviction in Vedanta: it is not possible that this unity of knowledge, feeling and choice which you call your own should have sprung into being from nothingness at a given moment not so long ago; rather this knowledge, feeling, and choice are essentially eternal and unchangeable and numerically one in all men, nay in all sensitive beings.”
Schroedinger upheld "the doctrine of the Upanishads" of the "unification of minds or consciousnesses" despite an illusion of multiplicity.
In a similar vein, Wolfgang Pauli, another quantum pioneer, relates quantum weirdness to the human means of perception:
"For I suspect that the alchemistical attempt at a unitary psychophysical language miscarried only because it was related to a visible concrete reality. But in physics today we have an invisible reality (of atomic objects) in which the observer intervenes with a certain freedom (and is thereby confronted with the alternatives of "choice" or "sacrifice"); in the psychology of the unconscious we have processes which cannot always be unambiguously ascribed to a particular subject. The attempt at a psychophysical monism seems to me now essentially more promising, given the relevant unitary language (unknown as yet, and neutral in regard to the psychophysical antithesis) would relate to a deeper invisible reality. We should then have found a mode of expression for the unity of all being, transcending the causality of classical physics as a form of correspondence (Bohr); a unity of which the psychophysical interrelation, and the coincidence of a priori instinctive forms of ideation with external perceptions, are special cases. On such a view, traditional ontology and metaphysics become the sacrifice, but the choice falls on the unity of being" (77a).
Pauli appears here to be sympathetic with the notion of Jungian archetype, along with possibly something like the Jungian collective unconscious. Though he relates quantum weirdness to the human means of perception, one cannot be sure of any strong support of Jung's synchronicity theory.
Pauli, says his friend Heisenberg, was very fussy about clear thinking in physics and arrived at the idea of a psychophysical interrelation only after painstaking reflection. Even so, it should be noted that Pauli had been a patient of Jung and remained on good terms with Jung for many years.
Einstein made strict physical causality an article of faith, an outlook that underlies his Spinoza-style atheism. That view, however, did not make him irreligious, he argues. Despite church-state persecution of innovative thinkers, he writes: "I maintain that the cosmic religious feeling is the strongest and noblest motive for scientific research" (77a).
Yet he makes plain his belief in strict causality, which meant there is no need for a personal God to interfere in what is already a done deal. Consider the "phenomenological complex" of a weather system, he says. The complex is so large that in most cases of prediction "scientific method fails us." Yet "no one doubts that we are confronted with a causal connection whose causal components are in the main known to us."
Leon Brillouin, on the other hand, says bluntly that because exact predictability is virtually impossible, a statement such as Einstein's is an assertion of faith that is not the proper province of science (77xa). It is curious that Brillouin uses the logical positivist viewpoint to banish full causality from the province of science, just as Einstein used the same philosophical viewpoint to cast the luminiferous ether into outer darkness.
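The practical side of Brillouin's point is easy to see with a standard toy example (mine, not Brillouin's): in even a very simple deterministic system, a microscopic uncertainty in the initial state swamps the forecast after a modest number of steps.

```python
def logistic_orbit(x0: float, steps: int, r: float = 4.0) -> float:
    """Iterate the logistic map x -> r*x*(1-x), a textbook example of deterministic chaos."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two initial states that agree to nine decimal places...
a, b = 0.600000000, 0.600000001
for n in (10, 30, 50):
    print(n, abs(logistic_orbit(a, n) - logistic_orbit(b, n)))
# ...disagree utterly within a few dozen iterations, even though every step
# is strictly causal. 'In principle' causality buys no working predictability.
```

The weather complex Einstein invokes is incomparably larger than this one-line map, which is the practical force of Brillouin's objection.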
Einstein, of course, was swept up in the Darwinistic paradigm of his time, which, I believe, is reflected in his point that early Jewish religion was a means of dealing with fear, but that it evolved into something evincing a sense of morality as civilization advanced. He believed that spiritual savants through the ages tended to have a Buddhist-style outlook, in which a personal, anthropomorphic God is not operative. Einstein did, however, on occasion refer to a "central order" in the cosmos, though he plainly did not have in mind Bohm's implicate order, which accepts quantum bilocalism.
Einstein: “I see on the one hand the totality of sense-experiences, and, on the other, the totality of the concepts and propositions which are laid down in books. The relations between concepts and propositions among themselves and each other are of a logical nature, and the business of logical thinking is strictly limited to the achievement of the connection between concepts and propositions among each other according to firmly laid down rules, which are the concern of logic. The concepts and propositions get 'meaning,' viz., 'content,' only through their connection with sense-experiences. The connection of the latter with the former is purely intuitive, not itself of a logical nature. The degree of certainty with which this relation, viz., intuitive connection, can be undertaken, and nothing else, differentiates empty fantasy from scientific 'truth'" (77bb).
The idea that abstract concepts draw meaning from the content and context of sense experiences is a core belief of many. But is it true? It certainly is an unprovable, heuristic allegation. What about the possibility that meaning is imparted via and from a noumenal realm? If you say that this is a non-falsifiable, non-scientific speculation, you must concede that the same holds for Einstein's belief that meaning arises via the sensory apparatus. We note further that "meaning" and "consciousness" are intertwined concepts. Whence consciousness? There can never be a scientific answer to that question in the Einsteinian philosophy. At least the concept of a noumenal world, or Bohmian implicate order, leaves room for an answer.
In buttressing his defense of strict causality, Einstein lamented the "harmful effect upon the progress of scientific thinking in removing certain fundamental concepts from the domain of empiricism, where they are under our control, to the intangible heights of the a priori" (77b). Logic be damned, is my take on this remark.
Louis de Broglie, another pioneer of quantum physics, at first accepted matter-wave duality, but later was excited by David Bohm's idea of what might be termed "saving most appearances" by conceding quantum bilocalism.
What, de Broglie asks, is the "mysterious attraction acting on certain men that urges them to dedicate their time and labours to works from which they themselves often hardly profit?" Here we see the dual nature of man, he says. Certain people aim to escape the world of routine by reaching toward the ideal. Yet this isn't quite enough to explain the spirit of scientific inquiry. Even when scientific discoveries are given a utilitarian value, one can still sense the presence of an "ontological order."
We are nowhere near a theory of everything, de Broglie says. Yet "it is not impossible that the advances of science will bring new data capable" of clarifying "great problems of philosophy." Already, he writes, new ideas about space and time, the various aspects of quantum weirdness and "the profound realities which conceal themselves behind natural appearances" provide plenty of philosophical fodder (77a).
Scientific inquiry yields technology which "enlarges" the body by amplifying the power of brawn and perhaps brain. But, such vast amplification has resulted in massive misery as well as widespread social improvements. "Our enlarged body clamours for an addition to the spirit," says de Broglie, quoting Henri Bergson.
The man who ignited the quantum revolution, Max Planck, warned that because science can never solve the ultimate riddle of nature, science "cannot really take the place of religion" (77a). If one excludes "nihilistic" religion, "there cannot be any real opposition between religion and science," he writes. "The greatest thinkers of all ages were deeply religious" even if quiet about their religious thoughts.
"Anybody who has been seriously engaged in scientific work of any kind realizes that over the entrance to the temple of science are written the words: Ye must have faith."
This last sentiment seems boringly conventional. But how would a person, even a strict determinist, proceed without a strong conviction that the goal is achievable? As Eddington jokes, "Verily it is easier for a camel to pass through the eye of a needle than for a scientific man to pass through a door. And whether the door be barn door or church door it might be wiser that he should consent to be an ordinary man and walk in rather than wait till all the difficulties involved in a really scientific ingress are resolved."
Would, for example, the atheist Alan Turing have achieved so much had he not believed that his initial ideas would bear fruit? Would an AI program that passes the Turing test encounter an idea that provokes it to highly focused effort in anticipation of some reward, such as the satisfaction of solving a problem? Can qualia, or some equivalent, be written into a computer program?
In The Grammar of Science, Pearson has an annoying way of personifying Science, almost as if it is some monolithic God. I realize that Pearson is using a common type of metaphorical shorthand, but nevertheless he gives the impression that he and Science are One, a criticism that is applicable to various thinkers today.
Let us digress for a bit and consider what is meant by the word "science."
At a first approximation, one might say the word encompasses the publications and findings of people who interrogate nature in a "rational" manner.
That is, the methods of science attempt to establish relations among various phenomena that do not contradict currently accepted major theories. The scientific investigator has much in common with the police detective. He or she often uses a process of elimination, coupled with the sketch of a narrative provided by various leads or "facts." The next step is to fill out the narrative to the degree necessary to establish the "truth" of a particular finding. It is often the case that more than one narrative is possible. The narratives that are crowned with the title "theory" (in the scientific sense of the word) are those which seem to be most internally consistent and which are also not dissonant with the background framework of "reality" (and if a narrative is dissonant, it will encounter strong resistance).
So "science" is a word used to describe the activities of a certain group of people who interrogate nature in accord with certain group norms -- norms concerning process and norms concerning philosophy or metaphysics (denial of such interests nevertheless implies a metaphysical belief set).
One idea of Popper's that is widely accepted among scientists is that if a statement is not potentially falsifiable via advances in experimental technology, then it is not scientific and not a proper focus of scientists (though light-hearted speculations may be tolerated).
Hence, the entire scientific enterprise is not scientific, as its underlying assumptions, which are metaphysical, cannot be falsified by experimental or logical means.
At any rate, many will agree that science as a monolithic entity does not exist. Science does not do anything. Science does not prove anything. The word is really a convenient handle that by itself cannot properly summarize a complex set of human activities. Scientists of course all know that there is no great being named Science. And yet when various thinkers, including scientists, employ this anthropomorphism, they often tend to give this being certain qualities, such as rationality, as distinct from, say, irrational Religion, a straw man which is also an anthropomorphism for a wide range of human activities.
In other words, we should beware scientists propagating their faith in the god Science. They may say they don't mean to do that, but, mean it or not, that is in fact what quite a few of them do.
60. Symmetry by Hermann Weyl (Princeton, 1952).
61. Time Travel in Einstein's Universe by J. Richard Gott III (Houghton Mifflin, 2001).
62. Kurt Goedel in Albert Einstein: Philosopher-Scientist, edited by Paul Arthur Schilpp (Library of Living Philosophers, 1949).
63. Cycles of Time: An extraordinary new view of the universe by Roger Penrose (The Bodley Head, 2010).
64. The Anthropic Cosmological Principle by John D. Barrow and Frank J. Tipler (Oxford, 1988).
65. Wheeler quoted in The Undivided Universe: An Ontological Interpretation of Quantum Theory by David Bohm, Basil James Hiley (Routledge, Chapman & Hall, Incorporated, 1993). The quotation is from Wheeler in Mathematical Foundations of Quantum Mechanics, A.R. Marlow, editor (Academic Press, 1978).
66. Undivided Universe, Bohm.
67. Bohm (see above) is referring to The Many-Worlds Interpretation of Quantum Mechanics by B.S. DeWitt and N. Graham (Princeton University Press 1973).
68. Undivided Universe, Bohm.
69. Gravitation by Charles W. Misner, Kip S. Thorne and John Archibald Wheeler (W.H. Freeman, 1973).
70. Black Holes and Time Warps: Einstein's Outrageous Legacy by Kip Thorne (W.W. Norton, 1994).
71. The Open Universe (Postscript Volume II) by Karl Popper (Routledge, 1988. Hutchinson, 1982).
72. New Foundations of Quantum Mechanics by Alfred Landé (Cambridge University Press, 1965). Cited by Popper in Schism.
73. Logical Foundations of Probability by Rudolf Carnap (University of Chicago, 1950).
74. Physics and Philosophy by James Jeans (Cambridge, Macmillan, 1943).
75. Quantum Theory and the Schism in Physics (Postscript Volume III) by Karl Popper (Routledge, 1989. Hutchinson, 1982).
76. The New Background of Science by James Jeans (Cambridge, 1933, 1934).
77. B. Alan Wallace, a Buddhist scholar, tackles the disconnect between the scientific method and consciousness in this video from the year 2000.
B. Alan Wallace on science and consciousness
http://www.youtube.com/watch?v=N0IotYndKfg
77aa. Space, Time and Gravitation: An Outline of the General Relativity Theory by Arthur Eddington (Cambridge, 1920; Harper and Row reprint, 1959).
77a. Taken from excerpts of the scientists' writings found in Quantum Questions: Mystical Writings of the World's Great Physicists, edited by Ken Wilber (Shambhala Publications, 1984). Wilber says the book's intent is not to marshal scientific backing for a New Age agenda.
77bb. From "Autobiographical Notes" appearing in Albert Einstein: Philosopher-Scientist, Paul Arthur Schilpp, editor (Library of Living Philosophers 1949).
77xa. Science and Information Theory, Second Edition, by Leon Brillouin (Dover 2013 reprint of Academic Press 1962 edition; first edition, 1956).
77b. The Meaning of Relativity by Albert Einstein (fifth edition, Princeton, 1956).
78. The Self Illusion: how the social brain creates identity by Bruce Hood (Oxford, 2012).
79. The "Particles" of Modern Physics by J.D. Stranathan (Blakison, 1942).