Thursday, November 7, 2013

Part IV: Many worlds...

The many worlds of probability, reality and cognition

Equiprobability and the empirico-inductive framework
We return to the scenario in which system information is zero. That is, there is no algorithm known to the observer for obtaining systemic information that can be used to assign an initial probability.

Let us do a thought experiment in which a ball is to be drawn from an urn containing n black or white balls (where "or" is the inclusive "or"). We have no idea of the proportion of white to black balls. So, based on ignorance, we regard every ratio as equiprobable. Let n = 2.

We have of course

BB WW BW WB

By symmetry, then, for a two-ball (white or black) test, the probability of a first draw of black from an urn containing between 0 and 2 black balls and between 0 and 2 white balls is 1/2. This symmetry, under the stated condition, holds for any number of balls in the urn, meaning it holds as n approaches infinity.
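
A minimal sketch (my own, in Python, not part of the original post) of this symmetry argument: average the first-draw probability over every composition b = 0..n of black balls, each weighted equally because we have no reason to prefer one over another. Counting ordered arrangements instead, as in the BB WW BW WB listing above, gives the same 1/2 by symmetry.

```python
from fractions import Fraction

def p_first_black(n):
    """P(first draw is black) when every composition b = 0..n of black
    balls is treated as equiprobable (zero prior knowledge)."""
    comps = range(n + 1)                     # b black balls, n - b white
    return sum(Fraction(b, n) for b in comps) / len(comps)

print(p_first_black(2))    # 1/2
print(p_first_black(100))  # 1/2 -- independent of n
```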

One might assume the urn has an enumerable infinitude of balls if one has zero knowledge of how many balls are inside the urn. If n is infinite, a "long" run of k black draws cannot affect the probability of the (k+1)th draw, since k/n --> 0 as n --> infinity. Note that we are not assuming the absence of bias in how the balls are arranged, but rather that we simply have no knowledge in that area.

(Similar reasoning applies to multinomial examples.)

Does this scenario justify the concept of independence? It may be so used, but this description is not airtight. Just because, based on ignorance, we regard any ratio as equiprobable, that does not entitle us to assume that every ratio is in fact equiprobable. This presentation, however, suffices for a case of zero input information where we are dealing with potential ratios.

An important point here is that in this case we can find some rational basis for determining the "probability of a probability." But this holds only when we know that a single ratio exists for k colors of balls. If we don't know how many colors of balls are potentially available, we must then sum over the rationals. Even if we exclude the pure integers, the series sum exceeds that of the harmonic series from n = 3 onward, so the sum over all ratios is unbounded -- that is to say, a uniform assignment is undefined, or, one might say, any one ratio in an infinitude carries probability 0.

So empirico-inductive methods imply a Bayesian way of thinking, as with approaches such as the rule of succession or, possibly, nonparametric tests. Though these approaches may help us answer the question of the probability of an observer encountering a "long" run, we can never be certain a counterexample won't occur. That is, we believe it quite probable that a probability method tends to establish "objective" facts that are to be used within some scientific discipline. We feel this confidence because it has been our experience that what we conceive of as identical events often show patterns, such as a periodicity of 1 (as with a string of sun-risings).
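
As a hedged illustration of the rule of succession just mentioned (my own sketch, not from the original post), Laplace's formula gives the posterior probability that the next draw repeats a run, assuming a uniform prior over the unknown ratio; it approaches, but never reaches, certainty.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: probability the next trial succeeds,
    given a uniform prior over the unknown success ratio."""
    return Fraction(successes + 1, trials + 2)

# After a "long" run of 10 black draws in 10 tries:
print(rule_of_succession(10, 10))   # 11/12 -- high, but not certainty
```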

Now an empirico-inductive method generally assumes effective independence. Again, utter intrinsic physical randomness is not assumed, but rather ignorance of physical relations, excepting the underlying idea that the collective mass of "causes" behind any event is mostly neutral and can be ignored. So we must also account for the probability of non-occurrence of a long run: on the assumption of independence, the probability of a specified run not occurring within n steps is about (1 - p)^n.

For example, consider the number 1321 (assuming equiprobability of digit selection based on some randomization procedure). Its probability at any given position is 10^(-4); taking logs to solve (1 - 10^(-4))^n = 1/2, we find that the run 1321 has a probability of 1/2 of occurring within a string of length 6932.
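
The figure can be reproduced directly (a sketch of my own, using the text's approximation, which treats positions as independent and ignores overlap corrections):

```python
import math

def half_life_length(p):
    """Smallest n with 1 - (1 - p)**n >= 1/2: the string length at which a
    pattern of per-position probability p has an even chance of appearing."""
    return math.ceil(math.log(2) / -math.log1p(-p))

print(half_life_length(1e-4))   # 6932 -- the length quoted for '1321'
print(half_life_length(1e-7))   # ~6.93 million -- for a 7-digit pattern
```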

The runs test uses the independence assumption as the null hypothesis against which randomness is tested; this is not the approach of the rule of succession, though the rule of succession might be said to have anticipated nonparametric tests, such as the runs test.

The runs test uses n/2 as the mean number of runs in a randomized experiment. And we have shown elsewhere that, as n increases, a string of two or more identical runs (a periodic run) has a probability approaching that of a sole permutation, which, in base 2, is 2^(-n), as I discuss in my paper (a sketch of the runs test follows the link below)

Note on the probability of periodicity
http://kryptograff5.blogspot.com/2013/08/draft-1-please-let-me-know-of-errors.html
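
Here is a minimal sketch of the standard Wald-Wolfowitz runs test (my own illustration, not taken from the paper linked above). The mean and variance are the textbook formulas; the mean reduces to roughly n/2 when the two symbols are balanced, as noted above.

```python
import math

def runs_test(bits):
    """Wald-Wolfowitz runs test: z-score of the observed number of runs
    against the count expected if the bits were exchangeable (independent)."""
    n1, n2 = bits.count(1), bits.count(0)
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    mean = 2 * n1 * n2 / n + 1                     # ~ n/2 when n1 == n2
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n * n * (n - 1))
    return (runs - mean) / math.sqrt(var)

print(runs_test([0, 1] * 50))           # strongly positive: too many runs
print(runs_test([0] * 50 + [1] * 50))   # strongly negative: two long runs
```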

We may also ask: how many runs of length m or greater can be expected within a string of length n? One might use this information to obtain the probability of a run of length m. For example, for n = 6932, any specified string of length 4 has a probability of about 1/2 of appearing somewhere in the overall string.

For a specified substring of length 7, we require a string of about 6,931,000 digits for a probability of 1/2 of encountering it.

Over a string of length 6932, such a length-7 substring has a probability of non-occurrence of about 0.999, giving a probability of occurrence of roughly 0.001. Modern computing methods greatly increase our capacity for such calculations, of course.

If we stick with an urn model, we are suggesting zero bias, or, at any rate, zero input knowledge, excluding the knowledge of how many elements (possible colors) are in use. We can then match the outcome to the potential probabilities and argue that the string 1321 is reasonable to expect in a string of length 6932, but that it is not reasonable to bet on the string 1321578 turning up within 6932 draws.

In other words, we are not arguing for complete axiomatic justification of an empirico-inductive approach, but we are suggesting that such an approach can be justified if we think it acceptable to assume lack of bias in selection of balls from an urn.

The above tends to underscore my point earlier that probabilities are best viewed in terms of logically explicable -- by some line of reasoning -- inequalities that assist in the decision-making process.

Of course, the standard idea is to say that the probability of the number 1321 appearing over the first four trials is 10^(-4). Implicit here is the simplifying assumption of equiprobability, an assumption that serves well in many cases. What we have tried to do, however, is posit some credible reason for such an assumption. We may suspect bias if we called the number 1321578 in advance and it turned up as a substring over 6932 draws. The urn model often permits a fair amount of confidence that the draws ought to be unbiased. In my opinion, that model is superior to the coin-toss model, which can always be criticized on specific physical grounds, such as the virtual impossibility of positioning the center of mass exactly at the coin's centroid.

Note that in our scenario positing independence, we begin with the assumption of nearly complete ignorance; we have no propensity information if the urn model is used to represent a typical dynamical system. Again, the propensity information in the urn system is that we have no reason to think that the urn's contents are not at maximum entropy, or homogeneity. By specifying the number 1321 before doing 6932 draws, we are saying that such a test is of little value in estimating bias, as the probability is 1/2. However, if we test other longer numbers, with the cutoff at substring length 7, we find that calling the number 1321578 provides a fairly good test for bias. Determining the kind of bias would require further refinements.
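
The bias test described above can be checked by simulation. The sketch below (my own; the pattern and draw count are taken from the text) estimates how often a pattern called in advance appears somewhere in 6932 unbiased digit draws.

```python
import random

def hit_rate(pattern, draws=6932, trials=2000, seed=1):
    """Monte Carlo estimate of how often `pattern` appears somewhere in
    `draws` unbiased decimal digits."""
    rng = random.Random(seed)
    digits = "0123456789"
    hits = sum(pattern in "".join(rng.choices(digits, k=draws))
               for _ in range(trials))
    return hits / trials

print(hit_rate("1321"))   # about 0.5: a hit tells us little about bias
# A 7-digit call such as "1321578" has probability ~0.0007 of appearing,
# so an advance call that then turns up is fair evidence of bias (estimating
# that rate by simulation would need far more trials than used here).
```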

As noted, of course, we may find it extraordinarily unlikely that a physical experiment really has zero bias; what we are usually after is negligible bias. That is, if one flips a coin and finds an experimental bias of 0.5 ± 10^(-10), in most cases we would very likely ignore the bias and accept the coin as fair.
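
To see why such a tiny bias is negligible in practice, a rough calculation (my own sketch; 0.5/sqrt(n) is the usual binomial standard error) shows how many tosses would be needed before a bias of 10^(-10) could even be detected:

```python
def flips_to_detect(bias, z=3.0):
    """Rough number of tosses before a bias of 0.5 + `bias` stands out from
    a fair coin at about z standard errors (binomial sd ~ 0.5 / sqrt(n))."""
    return (z * 0.5 / bias) ** 2

print(f"{flips_to_detect(1e-10):.2e}")  # ~2e20 tosses -- negligible in practice
```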

Equiprobability and propensity
In his book Symmetry, Weyl wrote, "We may be sure that in casting dice which are perfect cubes, each side has the same chance, 1/6. Because of symmetry sometimes we are thus able to make predictions a priori on account of the symmetry of the special cases, while the general case, as for instance the law of equilibrium for scales with arms of different lengths [found by Archimedes], can only be settled by experience or by physical principles ultimately based on experience. As far as I see, all a priori statements in physics have their origins in symmetry" (44).

By symmetry in the die case, Weyl means that the center of mass and the geometric centroid coincide and that all sides are congruent. Hence, he believes it obvious that equiprobability follows. Yet he has neglected here to talk about the extreme sensitivity to initial conditions that yields effectively random outcomes, based on the force and position of the throw, along with background fluctuations. These variations in the evolving net force vector are held to be either undetectable or not worth bothering about. But, at any rate, I interpret his remarks to mean that symmetry here implies equiprobability on account of the assumption that the laws of physics are invariant under spatial translation. Specifically, a Newtonian system of forces is implied.

In the perfect die scenario, we would say that the system's propensity information is essentially that the laws of physics remain the same under spatial translation. The propensity of the system is summarized as zero bias or equiprobability of six outcomes.

Consider a perfectly equiprobable gambling machine.

1. It is impossible or, if possible, extremely difficult to physically test the claim of true equiprobability, as it is notionally possible for bias to be out of range of the computational power -- although we grant that one might find that bias beyond a certain decimal can't occur because the background perturbations would never be strong enough.

2. If one uses enough resources (not infinite), one comes asymptotically close to an error-free message, though one which moves very slowly. But, if the output is random (true noise), there is no signal. Yet an anti-signal contains information (if we have enough bits for analysis). If the redundancy is too low and we have no reason to suspect encryption, then we have the information that the bit stream is noise. (As Shannon demonstrated, language requires sufficient structure, or redundancy -- implying conditional probabilities among letters -- to be able to make crossword puzzles.)

So if we determine that a bit stream is pure noise, can we then be sure that the bit samples are normally, or randomly, distributed? No, because one can think of a "pseudo-message" in which an identifiable pattern, such as a period, is sent that does not contain sufficient redundancy at a relevant scale. On the other hand, if the scale of the bit string is large enough, the redundancy inherent in the period will show (the entropy sketch after this list illustrates the scale effect). But there are other possibilities that might escape redundancy testing, though one parametric test or another is liable to detect the presence of apparent structure.

3. We are also entitled to wonder whether considerations of entropy rule out such a device.
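
The point about redundancy showing up only at a sufficient scale can be illustrated with a block-entropy estimate (my own sketch; the period-4 "pseudo-message" is an assumed example):

```python
import math
import random
from collections import Counter

def block_entropy(bits, k):
    """Empirical entropy, in bits per symbol, of overlapping length-k blocks."""
    blocks = [tuple(bits[i:i + k]) for i in range(len(bits) - k + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    h = -sum(c / total * math.log2(c / total) for c in counts.values())
    return h / k

noise = [random.getrandbits(1) for _ in range(10000)]
period = [0, 1, 1, 0] * 2500          # a periodic "pseudo-message"

for k in (1, 4):
    print(k, round(block_entropy(noise, k), 3), round(block_entropy(period, k), 3))
# Noise stays near 1 bit/symbol (no redundancy) at both scales; the periodic
# stream looks the same at k = 1, but its redundancy shows once the block
# length reaches the period.
```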

Popper cites Alfred Landé's concern with the "law of large numbers," in which Landé shows that the idea that hidden "errors" cancel is a statistical result derived from something that can be accounted for deterministically (45).

So one might say that there exists a huge number of "minor" hidden force vectors relative to the experiment in the environment and that these tend to cancel -- over time summing to some constant, such as 0. For purposes of the experiment, the hidden net force vectors tend to be of the same order of magnitude (the available orders of magnitude would be normally distributed).

Popper's point is: Why are the vectors distributed relatively smoothly, whether normally or uniformly?

The answer: That is our empirical experience. And the paradigm of normal distribution of hidden vectors may only be partly accurate, as I try to show in my paper Toward.

Of course, the phenomena represented by dynamical systems would not have been systematized had there not been some provisional input information, and so it is difficult to say what "equiprobable" really means. There are a number of reasons for questioning the concept of equiprobability. Among those are the fact that a dynamical system would rarely be free of at least a slight bias. But even when we talk about bias, we are forced to define that word as the complement of equiprobable.

Another issue is what is meant by repeatable event. The whole of probability theory and inferential statistics is grounded on the notions of repeatable and yet distinguishable events -- what Pearson called "regularities" in nature. But as I show in Toward and as others have shown, regularities are a product of the interface of the brain-mind and the "external" environment.

The concept of equiprobability essentially axiomatically accepts that some sets are composed of events that do not influence one another. Yet, if events are, in ways not fully understood, products of mental activity, it is quite bold to assume such an axiom. However, one might accept that, under certain circumstances, it may be possible to neglect apparently very slight influences attributable to the fact that one event may be construed as an "effect" of the entire set of "causes" inherent in the world, including a previous event under consideration. There are so many such "causes" that, we suspect, they strongly tend to cancel out, just as the net electric charge of our planet is close to neutral because positive and negative charges are very nearly evenly distributed. (As you see, it is pretty much impossible to avoid circularities.)

One idea might be to employ an urn model, whereby we avoid the issues of causation and instead focus on what the observer doesn't know. If, let's say, all we know is that the urn contains only black or white balls, then, as shown above, we may encapsulate that knowledge with the ratio 1/2. This however doesn't tell us anything about the actual proportion, other than what we don't know about it. So perhaps in this case we may accept the idea of equiprobability.

Yet if we do know the proportion -- say 2 black balls and 3 white -- then we like to assert that the chance of drawing a black ball on the first draw is 2/5. But even in this case, we mean that our ignorance can be encapsulated with the number 2/5 (without reference to the "law of large numbers"). We don't actually know what the "true" probability is in terms of propensity. It's possible that the balls have been so arranged that it is highly likely a white ball will come out first.

So in this example, an assurance that the game is fair gives the player some information about the system; to wit, he knows that homogeneous mixing has occurred, or "well shuffling," or "maximum entropy." That is, the urn has been "well shaken" to assure that the balls are "randomly" distributed. The idea is to reduce a component of the propensity information to the minimum.

Now we are saying that the observer's ignorance is in accord with "maximum" or equilibrium entropy of the system. (We have much more to say about entropy in Part V.)

Now suppose we posit an urn with an enumerably infinite number of balls. Such a model might be deployed to approximate a physical system. But, as n gets large or infinite, how can we be sure of maximum entropy, or well mixing? What if the mixing is only strong at a certain scale, below which clusters over a specific size are highly common, perhaps more so than small runs? We can't really be sure. So we are compelled to fall back to observer ignorance; that is, if one is sufficiently ignorant, then one may as well conjecture well mixing at all scales because there is insufficient reason not to do so.

Thus far in our analysis, the idea of equiprobability is difficult to apprehend, other than as a way of evaluating observer ignorance.

If we disregard randomness3 -- standard quantum randomness -- and randomness4, associated with Popper's version of propensity, we then have nothing but "insufficient reason," which is to say partial observer ignorance, as a basis of probability determinations.

Consider a perfectly balanced plank on a fulcrum. Disregarding background vibrations, if we add a weight x to the left end of the plank, what is the probability the plank will tilt down?

In Newtonian mechanics, the probability is 1 that the left end descends, even if the tilt is so tiny as to be beyond the means of detection. The plank's balance requires two net vectors of equal force. The tiniest change in one force vector destabilizes the balancing act. So, in this respect, we have an easily visualizable "tipping point" such that even one infinitesimal bit of extra mass is tantamount to the "cause" of the destabilization.

So if we assign a probability to the plank sinking on the left or the right, we can only be speaking of observer ignorance as to which end will receive an unseen addition to its force vector. In such a Laplacian clockwork, there is no intrinsic randomness in "objective" reality. This notion is different metaphysically, but similar conceptually, to the older idea that "there are no mere coincidences" because God controls every action.

We cannot, of course, dispense with randomness3 (unless we are willing to accept Popperian randomness4). Infinitesimals of force below the limit set by Planck's constant have no effect on the "macro" system. And, when attempting to deal with single quanta of energy, we then run into intrinsic randomness. Where and how will that quantum interact with the balance system? The answer requires a calculation associated with the intrinsic randomness inherent in Heisenberg's uncertainty principle.

Popperian a priori propensity (probability8) requires its own category of randomness (randomness4). From what I can gather, randomness4 is a mixture of randomness1, randomness2 and randomness3.

Popper, I believe, wanted to eat his cake and have it too. He wanted to get rid of the anomalies of randomness3 while coming up with another, not very well justified, form of intrinsic randomness.

How are probability amplitudes conceptualized in physics? Without getting into the details of calculation, we can get a sense of this by thinking of the single-photon-at-a-time double-slit experiment. Knowing where one photon is detected tells us very little about where the next photon will be detected. We do know the probability is near zero for detection to occur at what will become a diffraction line, where we have the effect of waves destructively interfering. After enough photons have been fired, one at a time, a diffraction pattern builds up that looks exactly like the interference pattern of a wave going through both slits simultaneously.

So our empirical experience gives this result. With this evidence at hand, Werner Heisenberg, Erwin Schroedinger, Paul Dirac and others devised the quantum formalism that gives, in Max Born's words, a "probability wave" conceptualization. So this formalism gives what I term quantum propensity, which is the system information used to calculate outcomes or potential outcomes. Once the propensities are known and corroborated with an effective calculation method, then frequency ratios come into play. In the case of a Geiger counter's detection of radioactive decay events, an exponential distribution (of the waiting times between decays) works best.
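
The single-photon build-up just described can be mimicked numerically: the Born-rule density from the two summed path amplitudes is sampled one detection at a time (a sketch of my own; the slit separation, wavelength and screen size are illustrative choices, and the single-slit envelope is ignored).

```python
import math
import random

def detect_photons(n, d=1e-4, wavelength=5e-7, L=1.0, half=0.01, seed=0):
    """Sample n single-photon detection points on a screen at distance L from
    two slits separated by d, using the two-slit Born-rule density
    p(x) ~ cos^2(pi * d * x / (wavelength * L)) via rejection sampling."""
    rng = random.Random(seed)
    hits = []
    while len(hits) < n:
        x = rng.uniform(-half, half)
        if rng.random() < math.cos(math.pi * d * x / (wavelength * L)) ** 2:
            hits.append(x)
    return hits

# Crude screen: 40 bins across 2 cm. Any single detection is unpredictable
# beyond the density itself; the fringes emerge only in the aggregate.
hits = detect_photons(20000)
counts = [0] * 40
for x in hits:
    counts[min(39, int((x + 0.01) / 0.02 * 40))] += 1
print("\n".join("#" * (c // 50) for c in counts))
```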

We also have ordinary propensity -- probability7 -- associated with the biases built into a gambling machine or system. Generally, this sort of propensity is closely associated with randomness1 and randomness2. However, modern electronic gambling systems could very well incorporate quantum fluctuations into the game design.

To summarize, equiprobability in classical physics requires observer ignorance of two or more possible outcomes, in which hidden force vectors play a role. However, in quantum physics, a Stern-Gerlach device can be set up such that the probability of detecting spin up is 1/2 as is the probability of detecting spin down. No further information is available -- even in principle -- according to the standard view.

Popper and propensity
Resetting a penny-dropping machine changes the measure of the possibilities. We might have a slight shift to the left that favors the possibility of a head. That is, such an "objective probability" is the propensity inherent in the system that comes with the first trial -- and so does not depend on frequency or subjective classical reasoning. (Popper eschews the term "a priori" because of the variety of ways it is used and, I suspect, because it is associated with subjective Bayesian reasoning.)

Popper asserts, "Thus, relative frequencies can be considered as the result, or the outward expression, or the appearance, of a hidden and not directly observable physical disposition or tendency or propensity; and a hypothesis concerning the strength of this physical disposition or tendency may be tested by statistical tests, that is to say, by observations of relative frequencies."

He adds, "I now regard the frequency interpretation as an attempt to do without the hidden physical reality..." (46).

Propensities are singular insofar as they are inherent in the experimental setup which is assumed to be the same for each experiment. (Thus we obtain independence, or freedom from aftereffects, for the elements of the sequence.)

"Force," observes Popper, is an "occult" term used to account for a relation between phenomena. So, he argues, why can't we have another occult term? Even if we agree with him, this seems to be exchanging one occult idea (quantum weirdness) for another (propensity). Popper wants his propensity to represent a physical property that implies that any trial will tend to be verified by the law of large numbers. Or, one might say he means spooky action at a micro-level.

The propensity idea is by no means universally accepted. At the very least, it has been argued, a propensity conceptualization is not sufficient to encompass all of probability.

Mauricio Suarez and others on propensities
http://www.academia.edu/2975215/Four_Theses_on_Probabilities_Causes_Propensities

Humphreys' paradox and propensity
http://www.jstor.org/stable/20117533

Indeterminism in Popper's sense
It is not terribly surprising to learn that the "publish or perish" mantra is seen as a factor in the observer bias that seems evident in a number of statistical studies. Examination of those studies has found that their results cannot be corroborated. The suspect here is not outright scientific fraud, but rather a tendency to wish to present "significant" results. Few journals are much interested in studies that yield null results (even though null -- or statistically insignificant -- results are often considered to be important).

I would however point out that another suspect is rarely named: the fact that probabilities may be a consequence of feedback processes. For example, it has been shown that a user's environment affects drug and alcohol tolerance -- which is to say that the actual probabilities of the addiction syndrome don't stay fixed. And further, this implies that any person's perceptions and entanglement with the environment are malleable. So it becomes a question as to whether perceived "regularities" on which probability ideas are based are, in fact, all that regular.

Alcoholics and Pavlov's dogs have much in common

http://cdp.sagepub.com/content/14/6/296.abstract

http://www.apa.org/monitor/sep02/drug.aspx

http://www.apa.org/monitor/mar04/pavlovian.aspx

Another point: non-replicability is a charge leveled against statistico-probabilistic studies favoring paranormal phenomena. However, it seems plausible that such phenomena are in general not amenable to experiments of many trials in that we do not understand the noumenal interactions that may be involved. In fact, the same can be said for studies of what are assumed to be non-paranormal phenomena. The ordinary and the paranormal, however, may be different ends of a spectrum, in which "concrete reality" is far more illusory than we tend to realize. (Part VI, see sidebar; also see Toward, link in sidebar.)

Popper, who spent a lifetime trying to reduce quantum weirdness to something amenable to classical reasoning, came to the belief that "the doctrine of indeterminism is true and that determinism is completely baseless." He objected to Heisenberg's use of the word "uncertainty" as reflecting subjective knowledge, when what occurs, Popper thought, is the scattering of particles in accord with a propensity that cannot be further reduced. (It is not clear whether Popper was aware of experiments showing single particles detected over time as an interference pattern.)

What does he mean by indeterministic? Intrinsically random?

Take the creation of a new work, Popper suggests, such as Mozart's G minor symphony; it cannot be predicted in all its detail by the careful study of Mozart's brain. However, some neuroscientists very likely do believe that the day will come when MRI imagery and other technology will in fact be able to make such a prediction. However, I am skeptical, because it seems to me that the flash of creative insight is not purely computational, that some noumenal effect is involved that is more basic than computationalism. This is a position strongly advocated by Roger Penrose in The Emperor's New Mind (47).

What Popper is getting at, when talking of indeterminism, is the issue of effective computational irreversibility, along with the problem of chaos. He wants to equate the doctrine of determinism with an idealistic form of universal predictability and hence kill off the ideal form of universal determinism.

But, is he saying that we cannot, even in principle, track all the domino chains of "causation" and hence we are left with "indeterminism"? Or, does he mean that sometimes links in the domino chains are missing and yet life goes on? The former, I think.

My thought is that he is saying that we can only model the world with an infinite sequence of approximations, analogous to what we do when computing a representation of an irrational number. So there is no subjective universal domino theory. Yet, we are left to wonder: Does his theory allow for missing dominos and, if so, would not that notion require something beyond his "propensity"?

"Causality has to be distinguished from determinism," Popper wrote, adding that he finds "it crucially important to distinguish between the determined past and the open future." Evidently he favored de facto free will, a form of vitalism, that was to be somehow explained by his concept of indeterminism.

"The determinist doctrine -- according to which the future is also completely determined by what has happened, wantonly destroys a fundamental asymmetry in the structure of experience" such as the fact that one never observes waves propagating back toward the point at which a stone is dropped into a pool. Popper's stance here is related to his idea that though the past is closed, the future is open. Despite propensity concepts, this idea doesn't seem properly justified. But, if we look at past and future in terms of superpositions of possible realities, an idea abhorrent to Popper, then one might agree that in some sense the future, and the universe, is "open" (48). (Part VI, see sidebar; also see Toward, link in sidebar.)

Popper, however, seems to have been talking about limits on knowledge imposed by the light cones of special relativity, as well as those imposed by Alan Turing's result in the halting problem.

Scientific determinism, said Popper, is the "doctrine that the structure of the world is such that any event can be rationally predicted, with any desired degree of precision, if we are given a sufficiently precise description of past events, together with all the laws of nature." In other words, the Laplacian clockwork universe model, a mechanistic theory which I characterize as the domino theory.

Though Popper's Open Universe was published in 1982, most of it was written in the early 1950s, and so we wouldn't expect Popper to have been abreast of the developments in chaos theory spurred by computerized attempts to solve previously intractable problems.

But at one point, Popper makes note of a theorem by Jacques Hadamard from 1898 discussing extreme sensitivity to initial conditions, implying that, though not up to date on chaos theory, Popper had at least some inkling of that issue. In fact, after all the locution, this -- plus his propensity idea -- seems to be what he meant by indeterminism. "For, as Hadamard points out, no finite degree of precision of the initial conditions" will permit us to learn whether an n-body system is stable in Laplace's sense, Popper writes (48).

Hadamard and chaos theory
http://www.wolframscience.com/reference/notes/971c

Popper not only faulted Heisenberg, he also took on Einstein, writing that "Einstein's famous objection to the dice-playing God is undoubtedly based on his view that probability is a stopgap based on a lack of knowledge, or human fallibility; in other words, ... his belief in the subjective interpretation of probability theory, a view that is clearly linked with determinism" (50).

Correct. In a mechanistic view, or "domino theory," there is no intrinsic randomness; external and internal realities are "really" separate and so what you don't know is knowable by someone with more knowledge of the system, perhaps some notional super-intellect in the case of the cosmos at large. So then deploying propensities as a means of talking about indeterminism seems to be an attempt to say that the universe is non-mechanistic (or acausal). Yet, by giving a name to a scenario -- "propensity" -- have we actually somehow restored "realism" to its "rightful place"?

A related point concerning prediction possibilities: Kolmogorov-Chaitin information, which is related to chaos theory results, says that one can have a fully deterministic system and still only be able to compute some output value or "final state" with as many steps or as much work as the actual readout. And, in the discussion of entropy below, I bring up a high-computation algorithm for a simple result that is meaningful in terms of what it is possible to know.


I interject here that an unattainable upper limit on computation has been discussed by Gregory Chaitin in terms of the busy beaver function.

Computing the busy beaver function
http://www.cs.auckland.ac.nz/~chaitin/bellcom.pdf

At any rate, it is in principle true -- excepting the case of the busy beaver function as described by Chaitin -- that with enough input information, one can obtain an exact readout for a fully deterministic system, assuming all forces and other determinants are exactly described. Still, in the chaotic (asymmetric) three-body gravitational problem, we may find that our computing power must rise toward infinity (a vertical asymptote) as we push our prediction point into the future. So then, how does a mortal simulate reality here?

At some point, we must begin approximating values, which very often gives rise to Lorenz's "butterfly effect": two input values differing only, say, in the fifth decimal place may, after an interval of "closeness," produce wildly different trajectories. Hence, we expect that many attempts at computation of Lorenz-style trajectories will fail to come anywhere near the true trajectory after some initial interval.
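
This divergence is easy to see numerically. The sketch below (my own; a crude forward-Euler step with the standard Lorenz parameters) follows two trajectories whose starting points differ only in the fifth decimal place:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations -- crude, but enough
    to exhibit sensitive dependence on initial conditions."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.00001, 1.0, 20.0)            # differs only in the fifth decimal place
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(step, round(gap, 5))
# The separation grows from 1e-5 to the size of the attractor itself; after
# some initial interval the two trajectories are effectively unrelated.
```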

Note: there are three intertwined definitions of "butterfly effect":

1. As Wolfram MathWorld puts it: "Due to nonlinearities in weather processes, a butterfly flapping its wings in Tahiti can, in theory, produce a tornado in Kansas. This strong dependence of outcomes on very slightly differing initial conditions is a hallmark of the mathematical behavior known as chaos."

Weather system 'butterfly effect'
http://mathworld.wolfram.com/ButterflyEffect.html

2. The Lorenz attractor, derived from application of Edward N. Lorenz's set of differential equations that he devised for modeling weather systems, has the appearance of a butterfly (51).

On the Lorenz attractor
http://www.wolframalpha.com/input/?i=Lorenz+attractor

3. The "butterfly catastrophe" is named for the appearance of its graph. Such a catastrophe is produced by the following equation:

F(x, u, v, w, t) = x^6 + ux^4 + vx^3 + wx^2 + tx.

The word "catastrophe" is meant to convey the idea of a sudden, discrete jump from one dynamical state to another, akin to a phase shift. The use of the term "unfoldment parameters" echoes Bohm's concept of implicate order. We see directly that, in this sense, the dynamical system's post-catastrophe state is covertly implicit in its pre-catastrophe state. Note the nonlinearity of the equation.

The 'butterfly catastrophe'
http://mathworld.wolfram.com/ButterflyCatastrophe.html
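
To make the jump concrete, the sketch below (my own; the choice u = -4 and the sweep over t are illustrative, not from the source) tracks the minimizing x of the butterfly potential as one unfolding parameter is varied. The equilibrium shifts smoothly and then jumps discontinuously near t = 0.

```python
def F(x, u, v, w, t):
    """The butterfly-catastrophe potential quoted above."""
    return x**6 + u * x**4 + v * x**3 + w * x**2 + t * x

def argmin_x(u, v, w, t):
    """Brute-force minimizer over a grid -- enough to watch the equilibrium
    state jump as a control parameter is swept."""
    xs = [i / 1000.0 for i in range(-3000, 3001)]
    return min(xs, key=lambda x: F(x, u, v, w, t))

for t in [i / 10.0 for i in range(-20, 21, 5)]:
    print(round(t, 1), round(argmin_x(u=-4.0, v=0.0, w=0.0, t=t), 3))
# The minimizing x sits in one well of the potential, then abruptly hops to
# the other as t crosses zero: a discrete jump from a continuous change.
```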

So even if we were to grant a domino theory of nature, unless one possesses godlike powers, it is in principle impossible to calculate every trajectory to some indefinite point in a finite time period.

Further, and importantly, we learn something about so-called "causes." A fine "tipping point," such as when a coin bounces and lands heads or tails up, or on an edge, shows that a large set of small force vectors does indeed nearly cancel, but not quite. That "not quite" is the remaining net force vector. However, we may also see that the tipping-point net force vector is composed of a large set of "coherent" sub-vectors. That is, "causes" may pile up on the brink of a significant event. The word "coherent" is appropriate. If we map the set of constituent vectors onto a Fourier sine wave map, it is evident that a number of small waves (force vectors) have cohered into a wave form of sufficient amplitude to "tip the balance."
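
A phasor toy model (my own sketch, not from the source) makes the contrast between cancellation and coherence vivid: summing many unit "force" waves with random phases leaves a residue of order sqrt(n), while aligned phases produce a net of order n.

```python
import cmath
import math
import random

def resultant(n, coherent, seed=0):
    """Magnitude of the sum of n unit phasors: random phases nearly cancel
    (net ~ sqrt(n)); aligned phases cohere (net = n)."""
    rng = random.Random(seed)
    phases = [0.0 if coherent else rng.uniform(0, 2 * math.pi)
              for _ in range(n)]
    return abs(sum(cmath.exp(1j * p) for p in phases))

print(round(resultant(10000, coherent=False), 1))  # ~100: near-cancellation
print(round(resultant(10000, coherent=True), 1))   # 10000.0: waves piled up
```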

LINK TO PART V IN SIDEBAR


44. Symmetry by Hermann Weyl (Princeton, 1952).
45. New Foundations of Quantum Mechanics by Alfred Landé (Cambridge University Press, 1965). Cited by Popper in Quantum Theory and the Schism in Physics (Postscript to the Logic of Scientific Discovery, Volume III; Routledge, 1989. Hutchinson, 1982).
46. Popper's evolving view on probability shows up in material added to the English language edition of Logic.
The Logic of Scientific Discovery by Karl Popper. Published as Logik der Forschung in 1935; English version published by Hutchinson in 1959.
47. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics by Roger Penrose (Oxford University Press, 1989).
48. Logic, Popper.
50. The Open Universe (Postscript to the Logic of Scientific Discovery, Volume II) by Karl Popper (Routledge, 1988. Hutchinson, 1982).
51. The Essence of Chaos by Edward N. Lorenz (University of Washington Press, 1996).
