I’m currently dealing with a philosophical problem about nihilism, idealism and phenomenology. While trying to better understand the aspects that are most obscure to me, I found this good book, suggested by Sean Carroll on Twitter.

While skimming through this interesting book I stumbled on a page about “chaos”, which has been one of the topics I was dealing with, and now I’m bumping my head against it and its deceitful wording.

Chaos

The great power of science lies in the ability to relate cause and effect. On the basis of the laws of gravitation, for example, eclipses can be predicted thousands of years in advance. There are other natural phenomena that are not as predictable. Although the movements of the atmosphere obey the laws of physics just as much as the movements of the planets do, weather forecasts are still stated in terms of probabilities. The weather, the flow of a mountain stream, the roll of the dice all have unpredictable aspects. Since there is no clear relation between cause and effect, such phenomena are said to have random elements. Yet until recently there was little reason to doubt that precise predictability could in principle be achieved. It was assumed that it was only necessary to gather and process a sufficient amount of information.

Such a viewpoint has been altered by a striking discovery: simple deterministic systems with only a few elements can generate random behavior. The randomness is fundamental; gathering more information does not make it go away. Randomness generated in this way has come to be called chaos.

The result is a revolution that is affecting many different branches of science.

Okay, so this is the thesis. The randomness is fundamental. Information won’t make it go away. And this is what we now call “chaos.”
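To make sure I’m actually following what they mean, here’s a minimal sketch of my own (not from the book; the function name and numbers are mine): the logistic map, a one-line deterministic rule whose output nevertheless looks random.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). A fully deterministic rule:
# nothing random is ever fed in, yet with r = 4 the sequence looks like noise.

def logistic_orbit(x0, r=4.0, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_orbit(0.2))  # looks erratic, but is entirely determined by x0
```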

The discovery of chaos has created a new paradigm in scientific modeling. On one hand, it implies new fundamental limits on the ability to make predictions.

But then:

On the other hand, the determinism inherent in chaos implies that many random phenomena are more predictable than had been thought.

Wait. That’s an oxymoron. The correct phrase would be: these phenomena are predictable, because they aren’t as random as was previously thought.

If a phenomenon is “random” then you cannot predict it. And if we can predict it, that’s because it only APPEARS random.

But here’s the real contradiction:

A speck of dust observed through a microscope is seen to move in a continuous and erratic jiggle. This is owing to the bombardment of the dust particle by the surrounding water molecules in thermal motion. Because the water molecules are unseen and exist in great number, the detailed motion of the dust particle is thoroughly unpredictable.

But they opened the article by saying the exact opposite:

It was assumed that it was only necessary to gather and process a sufficient amount of information.

Such a viewpoint has been altered

Cause => water molecules => if these molecules are absent from the model, then this absence IS a lack of information.

You’re getting unpredictability because your model doesn’t include everything that takes part in the process. Your model is partial, so it produces prediction errors. Your model LACKS the information necessary to make the prediction you want.

If “sufficient information” were provided, then there wouldn’t be any errors, because the model would be complete and so could fully predict the evolution of the system it’s meant to model.
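To put my own point in code (a toy sketch; the system, the numbers and the function names are made up for illustration): a model that leaves out one small force the real system actually has will drift away from it, and the prediction error is nothing but that missing piece of information.

```python
# A "real" system with a small extra force versus a model that doesn't know
# about it. The prediction error comes entirely from the missing information.

def step(pos, vel, extra_force=0.0, dt=0.01):
    acc = -pos + extra_force          # F = -x (a spring) plus the unmodeled bit
    return pos + vel * dt, vel + acc * dt

real = model = (1.0, 0.0)
for _ in range(1000):
    real = step(*real, extra_force=0.05)   # the world, with the extra force
    model = step(*model)                   # the partial model, without it
print("prediction error:", abs(real[0] - model[0]))
```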

What makes the motion of the atmosphere so much harder to anticipate than the motion of the solar system? Both are made up of many parts, and both are governed by Newton’s second law, F = ma, which can be viewed as a simple prescription for predicting the future. If the forces F acting on a given mass m are known, then so is the acceleration a. It then follows from the rules of calculus that if the position and velocity of an object can be measured at a given instant, they are determined forever. This is such a powerful idea that the 18th-century French mathematician Pierre Simon de Laplace once boasted that given the position and velocity of every particle in the universe, he could predict the future for the rest of time. Although there are several obvious practical difficulties to achieving Laplace’s goal, for more than 100 years there seemed to be no reason for his not being right, at least in principle. The literal application of Laplace’s dictum to human behavior led to the philosophical conclusion that human behavior was completely predetermined: free will did not exist.
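Since the article calls F = ma “a simple prescription for predicting the future”, here is that prescription spelled out as a crude Euler-integration sketch of my own (names and numbers are illustrative): given the force law and the exact initial position and velocity, every later state follows mechanically, with no randomness anywhere.

```python
# F = ma as a prescription for predicting the future (crude Euler integration).
# Feed it the exact initial position and velocity and the whole future follows.

def predict(pos, vel, force, mass=1.0, dt=0.001, steps=10_000):
    for _ in range(steps):
        acc = force(pos) / mass       # a = F / m
        vel += acc * dt
        pos += vel * dt
    return pos, vel

# A unit mass on a spring, F = -x, started at x = 1, v = 0. Run it twice with
# the same inputs and you get exactly the same future: that's Laplace's point.
print(predict(1.0, 0.0, force=lambda x: -x))
print(predict(1.0, 0.0, force=lambda x: -x))
```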

This is the basic argument, clearly explained. But then:

Twentieth-century science has seen the downfall of Laplacian determinism, for two very different reasons. The first reason is quantum mechanics.

And okay. We know quantum mechanics is weird and that it introduces a true, fundamental randomness. But the theory is incomplete, and so it doesn’t make a lot of sense to pit it against the hypothesis of determinism, at least until we get a clearer, definite formulation of it.

What I care about, then, is the second reason. And it takes another couple of pages to get to the point.

It is the exponential amplification of errors due to chaotic dynamics that provides the second reason for Laplace’s undoing.
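In fairness, the phenomenon itself is real and easy to show (my own toy demonstration, using the logistic map from above as a stand-in for “chaotic dynamics”): two starting points that differ by one part in a billion end up nowhere near each other after a few dozen steps.

```python
# Exponential amplification of errors: two starting points differing by 1e-9
# are macroscopically different after ~60 iterations of the logistic map.

r = 4.0
a, b = 0.2, 0.2 + 1e-9
for _ in range(60):
    a, b = r * a * (1 - a), r * b * (1 - b)
print("difference after 60 steps:", abs(a - b))  # typically order 0.1-1, not 1e-9
```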

That’s not Laplace, though. That’s a very obvious straw man.

Laplace’s true position, copy/pasting from above, was:

given the position and velocity of every particle in the universe

WHERE THE FUCK DID YOU SEE **ERRORS** IN LAPLACE’S HYPOTHESIS?

If you know the position and velocity of every particle in the universe, then there is no room for “errors”, because errors are caused, as in the example above, by tiny stuff that interferes with the model. And this external interference is in fact what the text points to:

A simple example serves to illustrate just how sensitive some physical systems can be to external influences.

Which raises the question: what can be “external” to knowing the position and velocity of every particle in the universe?

The text even acknowledged that the problem wasn’t the PRINCIPLE, but the feasibility of that principle:

there are several obvious practical difficulties to achieving Laplace’s goal, for more than 100 years there seemed to be no reason for his not being right, at least in principle

But your thesis is that Laplace’s thesis is wrong *in principle*, and not just in practice.

This article states again and again that there has been a revolution in science, but when it comes to explaining why, it falls flat on its face.

On one hand there’s the thing about quantum mechanics, and okay, I accept that. But on the other hand the article reduces the essence of chaos to computational errors, then pits these computational errors against the deterministic conclusion: human behavior is completely predetermined, free will does not exist.

And this conclusion would be false because WE MAKE COMPUTATIONAL ERRORS?

Chaos can be two things:
– variables not modeled that interfere with the prediction (external interference)
– computational approximations/errors within the model that have exponential effects

BOTH ARE ABOUT IMPERFECT KNOWLEDGE, not imperfect determinism.
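To make my own point as concrete as I can (same toy logistic map as before, names and numbers mine): feed the rule its exact state and it reproduces the trajectory perfectly, every time; round that state ever so slightly, i.e. throw information away, and the “unpredictability” appears. The rule never stopped being deterministic; my knowledge of its state did.

```python
# Imperfect knowledge versus imperfect determinism, in one toy example.

def iterate(x, steps=60, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x0 = 0.123456789
exact   = iterate(x0)
again   = iterate(x0)                 # same information in -> same result out
rounded = iterate(round(x0, 6))       # slightly less information in

print(exact == again)                 # True: the determinism is intact
print(abs(exact - rounded))           # large: the error is mine, not nature's
```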

It’s completely ridiculous. It confuses a subjective level, where you work with imperfect, approximate models and hence imperfect predictions, with an objective level, where the ideas of “errors” and external interference don’t make any sense. External to what, reality? Is it the hand of god that meddles with physics?

In Laplace’s hypothesis, without any straw man, there couldn’t be any external factors, because the thesis postulates that every particle is part of the model, so no particle can arrive from outside reality to produce an interference. And of course the principle, in the same way it presumes something impossible like knowing every particle, would also assume that the computational model is accurate enough to properly handle that data. The thesis is that the future is computable in theory: fundamentally possible even if not possible in practice, simply because we don’t and won’t have that computational power and accuracy.

Nitpicking: Laplace is impermeable even to quantum mechanics. “Given the position and velocity… then…” Quantum mechanics undermines the knowability of the initial conditions, but not the validity of the hypothesis itself, which remains intact. This makes Laplace’s hypothesis irrelevant, but not wrong.

The argument in defense of free will doesn’t hold up either: quantum mechanics says that the fundamental nature of reality is “random”, and therefore unpredictable. But “free will”, in the sense of human free will, implies that human beings are IN CONTROL.

How does a reality that is merely random, as opposed to determined, make human beings any more in control? Huh? Whether the cause is determined or random, human agency remains out of the picture just the same.

WTF is wrong with these people who write these articles?

(Of course I’m not implying that they are all idiots and I’m the smart one. I’m just emphasizing the subjective struggle I go through, the way it unfolds in my mind. And that’s why I keep trying to figure out where and why I’m wrong.)
