**PART 1: CERTAINTY AND UNCERTAINTY**

Most people
prefer certainty over uncertainty in most aspects of their lives. We like to
have some idea of what is coming so we can prepare for it. Can we ever have
absolute certainty? In this physical world that we happen to live in, the
answer is no. The co-existence of conditional certainty and uncertainty may be
the engine that drives the dynamic processes of the universe. In any finite physical
system, the amount of certainty, i.e. relative predictability, depends upon the
relative stability of the system and patterns of change that can be identified
within the system.

When I was a
mathematical modeler and systems analyst in the US government, my supervisor,
Dr. Nicholas Matalas, and I had many interesting discussions about determinism
versus probabilism. Dr. Matalas held a PhD from Harvard specializing in
probabilistic analysis, and I took the position of my hero, Albert Einstein,
who was a determinist. Briefly, a determinist believes that we can model the quantifiable details of a physical system, write equations describing the effects of those details, and predict changes within the system and the outcome to be expected as the result of forces acting within and upon the system.
of forces acting within and upon the system. The determinist sees statistics
and probability as tools of estimation to be used only when we don’t know much
about some part of the system. The probabilist, on the other hand, believes
that randomness is a fundamental aspect of reality.

This difference of opinion about the nature of reality was at the heart of a famous debate in the 1930s, with Einstein, Podolsky, and Rosen on one side, and Bohr and Heisenberg on the other. The determinists did not like the
Heisenberg uncertainty principle which defined the uncertainty of observations
at the quantum scale of measurement in probabilistic terms as follows: If the
measurement of the location of a moving particle is taken to be exact, then the
measurement of the particle’s momentum can only be determined approximately
within a predictable range of uncertainty, and vice versa. This prompted
Einstein’s famous statement, “God does not play dice with the universe”. As a
determinist, Einstein believed that when we know more detail about quantum
physics, this uncertainty will disappear. His position was later generalized as
a form of “hidden-variables theory”.
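In symbols, the position-momentum form of the principle described above is conventionally written as a bound on the product of the two measurement uncertainties (this is the standard textbook statement, added here for reference):

$$\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}$$

where $\Delta x$ is the uncertainty in position, $\Delta p$ is the uncertainty in momentum, and $\hbar$ is the reduced Planck constant. Shrinking one factor forces the other to grow, exactly the tradeoff described in the paragraph above.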

Einstein, Podolsky, and Rosen (EPR) published a paper describing a conceptual experiment (also known as a thought experiment), based on assumptions generally accepted by particle physicists, involving a complementary pair of quantum particles produced in a well-known subatomic process. Its conclusion clearly contradicted the Heisenberg uncertainty principle, even though all known experimental measurements fell within Heisenberg's predicted range of uncertainty. Thus it
became known as the EPR paradox. Bohr and Heisenberg argued that the conclusion
of the EPR thought experiment was wrong because it was based on the unwarranted
assumption of the physical continuity of the particles that were observed at the
beginning and end of the experiment. In other words, elementary particles in
motion do not behave like tiny baseballs, as the EPR thought experiment
assumed.

A physicist named John Bell devised a way to test whether the EPR thought experiment was correct. His mathematical expression, actually an inequality involving probabilities, became known as Bell's theorem. This label is arguably a misnomer, because it functioned less as a pure mathematical theorem than as a probabilistic hypothesis that could only be confirmed or refuted by conducting a very delicate physical experiment. The technology needed to conduct the experiment with enough accuracy to produce indisputable evidence was not developed until more than a quarter century after Einstein's death. When the experiment was finally performed by a team of physicists headed by Alain Aspect in France, it indicated that Bohr was right and Einstein was wrong!
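As an illustration (not from the original text), the CHSH form of Bell's inequality can be checked numerically. For a spin-singlet pair, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between detectors set at angles a and b; any local hidden-variable model requires the CHSH combination |S| ≤ 2, while the quantum prediction reaches 2√2, which is what the Aspect experiments confirmed.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for a spin-singlet pair
    # measured at detector angles a and b (radians)
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: local hidden-variable theories require |S| <= 2
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # ~2.828, i.e. 2*sqrt(2) > 2: the classical bound is violated
```

The point of the sketch is that the quantum prediction exceeds the bound that any "tiny baseballs" model must obey, which is why the experimental results decided between the two camps.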

Many people,
including most mainstream physicists I think, jump to the conclusion that
Einstein was wrong about the nature of reality and that a certain amount of uncertainty
is a fundamental feature of reality, based on the experimental evidence from
the Aspect experiment and other experiments that demonstrated violations of
Bell’s inequality. This conclusion, however, does not necessarily follow. These
experimental results only prove that EPR’s basic assumptions about the nature
and behavior of elementary particles in motion were wrong, not that uncertainty
is a fundamental aspect of reality. I believe it is still possible that
Einstein was right, and that our inability to measure both the location (position in
space) of an elementary particle and the particle's momentum (mass movement in
time) with equal accuracy is an artifact of errors in our model of reality, not
because of an intrinsic uncertainty in reality itself.

One of the things I learned modeling environmental systems is that even a well-known part
of the environment, like rainfall and runoff for a given river basin, is
affected by a number of measurable input variables that can have a wide range
of effects on the output, some major, and some minor. Some of the effects of these
variables may be modeled using simple equations, some by more complex
equations, and for some, we may not be able to write any equations at all,
either because the functions are too complex, or because we don’t have enough
data to define them. The relative importance of the input of these variables can
be determined by sensitivity analysis, which simply means varying these inputs
incrementally and seeing what effect it has on the output.
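The one-at-a-time procedure just described can be sketched in a few lines of Python. The model, variable names, and coefficients below are invented purely for illustration; a real rainfall-runoff model would be far more elaborate.

```python
# Toy illustration of one-at-a-time sensitivity analysis on a
# hypothetical rainfall-runoff model (all coefficients invented).
def runoff(rainfall, soil_moisture, slope):
    # Made-up relationship: output grows with each input
    return 0.6 * rainfall + 0.3 * soil_moisture + 0.1 * slope * rainfall

base = {"rainfall": 50.0, "soil_moisture": 20.0, "slope": 0.05}

def sensitivity(model, inputs, step=0.01):
    """Bump each input by +1% in turn and record the change in output."""
    y0 = model(**inputs)
    effects = {}
    for name, value in inputs.items():
        bumped = dict(inputs, **{name: value * (1 + step)})
        effects[name] = model(**bumped) - y0
    return effects

for name, delta in sensitivity(runoff, base).items():
    print(f"{name}: output change {delta:+.4f}")
```

Ranking the resulting output changes shows which inputs have major effects and which are minor, exactly the purpose of sensitivity analysis described above.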

In order to
produce a usefully predictive model within a reasonable amount of time, a mostly
deterministic model can be augmented by using “stochastic” elements to
represent parts of the model about which we do not have enough data to define a
pattern of cause and effect. "Stochastic" in this context means that the values of
such elements' input variables are determined by statistical or
probabilistic analysis of the available data, not by a descriptive cause and
effect equation.
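A minimal sketch of such a hybrid deterministic-stochastic model follows; the coefficient, distribution, and its parameters are all assumptions made for illustration, standing in for values that would come from fitting historical data.

```python
import random

random.seed(42)  # reproducible illustration

def deterministic_runoff(rainfall):
    # The part we can describe with a cause-and-effect equation
    # (invented coefficient)
    return 0.6 * rainfall

def stochastic_loss():
    # Losses we cannot write an equation for, so we draw them from a
    # distribution fitted to available data (parameters assumed here)
    return random.gauss(3.0, 1.0)

def runoff(rainfall):
    # Deterministic core augmented by a stochastic element
    return deterministic_runoff(rainfall) - stochastic_loss()

# Run the hybrid model many times to characterize the output range
samples = [runoff(50.0) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(f"mean runoff over 10,000 runs: {mean:.2f}")
```

Repeated runs yield a distribution of outputs rather than a single number, which is how a mostly deterministic model produces usefully predictive results despite incomplete data.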

What does all
this have to do with YOU, a living, breathing, conscious human being? Perhaps a
lot more than you may think! First of all, you are a modeler and systems analyst
yourself, whether you realize it or not. Your modeling and analysis is being
done mostly automatically after the first few years of your life by processes
built up by the habitual repetition of patterns in your brain corresponding to the
interaction of your consciousness with what you perceive to be things existing outside
of your consciousness. This has been traditionally thought of as the problem of
the interaction of mind and matter.

You are carrying around with you a model composed of many parts, created as images in your mind by electrical and chemical activities in your brain triggered by sensory input. Those images are constantly being compared with new input data. The very existence and continuation of the
organism that you think of as your body depends on how well your model
corresponds with reality, especially those parts of reality that can harm,
disable, or destroy the physical organism that acts as a temporary vehicle for your
conscious mind.

You may think
that you are experiencing reality directly in your day-to-day life, but you are
not! What you think of as reality is nothing more than a model that your mind
has constructed, and the model you carry around inside your head is incomplete.
If you don’t believe that, just look through a telescope or a microscope. They
reveal a lot more complex details of the world beyond the range of receptivity
of the senses of our physical bodies. Our senses are very limited, and telescopes
and microscopes are limited too, only extending our perceptions a little bit. Are
our models of reality doomed to be forever incomplete? To answer this question,
we will need to explore the concepts of consistency and completeness as they
relate to the components of the logical systems that make up our models of
reality.

I will continue this discussion in this post as time permits.

ERC - May 30, 2021

**PART 2: CONSISTENCY AND COMPLETENESS**

Before we get into the concepts of consistency and completeness, so important to the modeling of reality, and also of critical importance in the logical structure of mathematics, i.e. the formalized symbolic representation of the logical structure of reality, it will be helpful to discuss the conscious process of modeling a little further. First, we must recognize that every valid model of any part of reality, such as Maxwell's wave equation, Einstein's E = mc², or one of the quantum calculus conveyance equations of TDVP, is a piece of the puzzle generally referred to as a "Theory Of Everything" (TOE), the Holy Grail of modern science.

The idea of a theory of everything grew out of Einstein's quest for a unified field theory that would describe how all of the forces of the universe are related in one complete set of consistent equations. David Hilbert, one of the most brilliant and influential mathematicians of the late 19th and early 20th centuries, saw this as part of a quest for a complete axiomatic system of mathematics. However, these dreams of a TOE were doomed to failure for two reasons: 1) their conceptual model did not include consciousness, so by definition it was not a TOE, and 2) Gödel's incompleteness theorems. To see why Gödel's incompleteness theorems eliminate the TOE physicists dreamt of, we have to look into the proofs of Kurt Gödel's incompleteness theorems. They prove that no consistent system of logic rich enough to express arithmetic, and therefore no physical TOE, will ever be complete.

Gödel's proofs are difficult to follow for anyone without a considerable amount of formal training in pure mathematics or symbolic logic, but what they imply about the conceptual modeling of reality is not hard to understand. However, before we discuss what Gödel's proofs imply about the modeling of reality in general, and a TOE in particular, a brief discussion of exactly what logical consistency and completeness mean is necessary to avoid confusion. A logical system defined by a finite number of basic symbols, axioms, and rules is consistent when statements constructed from the basic symbols used in the axioms and rules do not contradict any of the axioms. And a logical system is complete if, and only if, every meaningful statement that can be constructed using the basic symbols can be reduced to one of the axioms by applying one or more of the operational rules a finite number of times.

A key step in Gödel's proof is showing that any consistent logical system, as defined above, can be modeled in the field of integers by assigning a unique whole number to each and every element of the system of symbols, axioms, and rules, so that any statement that can be constructed in the system is represented by a unique finite whole number. This demonstration that any internally consistent logical system can be translated from any symbolic language (including English) into a purely arithmetic code makes it possible to define absolute consistency and generalize the incompleteness theorem without reference to truth or existence in reality. In this way, any model of reality can be represented by a string of "Gödel numbers".
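A toy version of this prime-factorization encoding can make the idea concrete. The tiny symbol alphabet and example statement below are invented for illustration; Gödel's actual coding covers a full formal language, but the principle (each symbol gets a code, and the i-th symbol's code becomes the exponent of the i-th prime) is the same, so distinct statements always get distinct integers.

```python
# Toy Gödel numbering: encode a string of symbols as a single integer
# via prime factorization. Alphabet and statement are illustrative only.

def primes(n):
    """Return the first n primes by trial division (fine for short strings)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Invented mini-alphabet with a numeric code for each symbol
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def godel_number(statement):
    """Encode the i-th symbol's code as the exponent of the i-th prime."""
    ps = primes(len(statement))
    n = 1
    for p, sym in zip(ps, statement):
        n *= p ** SYMBOLS[sym]
    return n

# Encode the statement "S0=0+S0" (read: 1 = 0 + 1) as a unique integer
print(godel_number("S0=0+S0"))
```

Because prime factorizations are unique, the original statement can always be recovered from its number, which is what lets arithmetic statements talk about other statements.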

The Triadic Dimensional Vortical Paradigm (TDVP) is a logical system representing reality that can be translated into Gödel numbers. In TDVP, however, reality is defined as everything that exists, and logically demonstrable truth is identified with existence by defining the basic unit of observation and measurement as a quantum equivalence unit with the mass and volume of the smallest elementary quantum of reality, the electron. With reality as the ultimate absolutely consistent logical system, Gödel's theorems of incompleteness apply to reality and all consistent models of reality constructed in human consciousness. The outcome is proof that any logically consistent model of reality ever constructed will never be complete. Does this mean that the dream of a TOE is a fantasy of the finite minds of theoretical physicists? No, not necessarily. To understand how a real theory of everything is still possible, we must investigate the nature of the dimensional domains of space, time and consciousness and their contents.

Albert Einstein's conceptual model of a significant part of physical reality, known as the theory of relativity, was translated into a four-dimensional mathematical model independently by Hermann Minkowski, David Hilbert, and a few other mathematicians. Then, notably, Gunnar Nordström in Finland, Theodor Kaluza in Germany, and Oskar Klein in Denmark extended the Minkowski 4-D model {3 spatial dimensions and 1 dimension of time (3S-1T)} to a 5-D model, and were able to derive the Maxwell wave equation in five dimensions as part of the 5-D logical system. Wolfgang Pauli even extended the model to six dimensions. I find it very interesting that, as a mainstream physicist, he did not publish the 6-D model because it indicated the existence of non-physical particles, which he called "phantom particles". Was the world just not ready yet for the non-physical content we call gimmel?

Einstein recognized the potential value of multi-dimensional mathematical modeling and encouraged Kaluza and Klein to pursue their research. So now, the relevant question becomes: Why has multi-dimensional modeling not been more successful? The answer lies in the limited nature and use of applied mathematics over the past 300 plus years. By relegating mathematics to nothing more than a set of tools for macro-scale problem solving, contemporary science has failed to realize the real power of mathematical modeling. As a result, mathematical application has been disconnected from the greater reality that pure mathematical structure reflects. That greater reality includes everything from the finite quantum reality of elementary phenomena to the infinite continuity of the cosmos and Primary Consciousness.

With the foregoing discussions of certainty, uncertainty, consistency, completeness, incompleteness, and hyper-dimensional modeling as background, and new clues produced by the application of the calculus of dimensional distinctions in the theoretical framework, we can envision a TOE that models quantum and non-quantum reality as follows:

*The TDVP Model of Reality is a Multi-Dimensional Self-Referential Logical System of Interacting Fields Exhibiting Triadic Content as Mass, Energy, and Consciousness Existing in 3 Orthogonal Three-Dimensional Domains of Space, Time, and Conscious Extent. A Model incorporating these characteristics is Absolutely Consistent and Complete if Time is Three-Dimensional. Therefore: TDVP qualifies as a Theory of Everything.*

Definition of terms:

**Reality**: Everything that exists, has existed, or will ever exist.

**Multi-Dimensional**: Multi-: a prefix meaning many, specifically more than three. Dimension: a measurable variable of extent.

**Self-Referential**: Something that cannot be referenced to, compared with, or equated with anything other than itself and things contained within it. This follows from the definition of Reality as everything.

**Three-Dimensional Domains**: Dimensions are measured in variables of extent, and space, time, and consciousness have extent and logical structure.

**Orthogonal Domains**: Space, time, and consciousness dimensions are mutually orthogonal {oriented at 90-degree angles, consistent with Occam's Razor (the law of parsimony)}.

Science will begin to appreciate the powerful potential of mathematical modeling when scientists realize that the logical systems of consciousness, mind, and pure mathematics as defined in TDVP reflect the same elegant dimensional structure and meaningful content that is displayed at all levels of reality from the smallest quantum of the universe to the entire expanse of the infinite cosmos.

ERC - June 4, 2021