**THE CRAZY IDEA WHOSE TIME HAS COME**

**© Edward R. Close, July 15, 2016**

**INTRODUCTION**

Almost
sixty years ago, in the winter of 1956, when I was an undergraduate student
pursuing a degree in physics and mathematics, I voiced the following concern:
“I think there is more to reality than just matter and energy interacting in
time and space, and no one is looking outside the box of materialism.”

I
believed that the general objective of science was to gain an ever-deeper
understanding of the nature of reality. What I saw in higher education was that
increasing specialization was the focus. Students were learning more and more
about less and less. Increasingly, academic physicists, chemists, and other
scientists weren’t able to communicate with each other because each
specialized field was creating its own vernacular. But it seemed that no one
shared or had any interest in even discussing my concerns.

An
explosion of academic publications started then and continues to flood the
world with data and information, creating a situation in the academic world
where very little original thinking is encouraged beyond pushing the boundaries
of what is currently accepted. In my opinion, this is, at least in part, why
there has been no real scientific paradigm shift since 1935.

**How Do We Get OUT of this Self-Imposed Box that has resulted in a**

**State of INTELLECTUAL STAGNATION?**

**We Have to Question Basic Assumptions!**

**Why? Because some of the basic assumptions behind the current scientific paradigm are simply WRONG.**

Modern science
has successfully explained most of the physical phenomena that we observe and
experience through the physical senses. But when it comes to understanding the deeper
nature of reality, mainstream scientists have been sliding down the
Reductionist/Materialist hole for a long time. The really good news is that
this pursuit has illuminated many of the weaknesses of the current scientific
paradigm, with more and more scientists from every discipline calling for a
change in one of the most basic assumptions upon which our current scientific
paradigm is built.

__What We Know__

In the late 19th century, the laws of ‘Natural Science’, including classical physics with Newton’s laws of motion and the laws of electricity, magnetism, and thermodynamics, seemed sufficient to explain mid-scale observations, but they could not explain certain very large-scale and very small-scale phenomena, like the orbit of the planet Mercury around the sun and the orbits of electrons in discrete shells around the nuclei of atoms.
In the first part of the 20th century, the classical view of the universe was shaken by Planck’s discovery that mass and energy are quantized, and Einstein’s discovery that the speed of light is the upper limit of accelerated velocity. These two surprising discoveries provided explanations of the orbit of Mercury, the quantized orbits of electrons, and other astronomical-scale and quantum-scale phenomena. These discoveries also drastically changed our basic understanding of the universe and provided a broad range of new technologies, including educational and entertaining electronic devices that have had an enormous impact on our lives. They also provided the first empirical hints that the universe was more than matter and energy interacting in time and space.
Theoretical physicists and astrophysicists
began applying the principles of relativity and quantum mechanics to cosmology
and particle physics, completing the shift from classical science to the
scientific paradigm now known as the Standard Model (SM). About 50 years after this 20th-century paradigm shift, we had determined that a seamless merger of quantum mechanics and relativity was probably not possible. Conflicts and paradoxes perplexed scientists trying to flesh out the new paradigm. There were a number of unexplained problems, clearly indicating that something was wrong with the SM paradigm. It appeared that it was either incomplete, partially incorrect, or both.
In the late 1960s and early 70s, two theorists, Stephen Hawking and Roger Penrose,
became famous for proving mathematical singularity theorems that implied that
black holes predicted by the equations of general relativity might be real. The
discovery of the first astronomical object that fit the expected physical
profile of a black hole, a massive object known as Cygnus X-1, provided
empirical evidence confirming their findings. What caught the attention of the
public was the claim that it proved that the universe exploded from a
mathematical singularity, a dimensionless point. This was seen as an exciting
confirmation of the popular notion of the big-bang creation of everything from
*nothing*.
This popular notion has held sway into the beginning of the 21st century, even though physicists, including Hawking and Penrose, have since moved on to String Theory, M-brane Theory, and multiple other theories. Yet they have never provided proof of any kind, not even a mathematical proof that is any more than an internally consistent, self-referential complex. Why?

__What is wrong with the current scientific paradigm?__
1. The calculus of
Newton and Leibniz is being misused by applying it beyond its legitimate range
of applicability.

2. Even though Planck,
Bohm, Wigner, and others have indicated that they believe an Infinite Consciousness
or Mind is behind physical reality, mainstream science has virtually excluded
consciousness and spirituality from scientific study. There may have been valid
reasons for this in the past, but now, there is empirical evidence that consciousness
may be just as fundamental as matter and energy.

3. The fallacy of
nothingness: Mainstream science confuses the concepts of zero and nothing,
leading to logically absurd conclusions.

In 1986
I applied a mathematical tool I call the Calculus of Distinctions to the
big-bang theory and found it to contain unresolved paradoxes. Having been a
devout follower of Einstein’s work, I was at first surprised to find that these
paradoxes could be resolved by taking quantum theory seriously and avoiding the
fallacies listed above. By doing this, I concluded that the Hawking and Penrose
‘proof’ of a singularity origin for the universe was most likely a mathematical
abstraction, with no existential counterpart in the evolution of the physical
universe. The Hawking-Penrose model of reality was four-dimensional, while the
mathematical logic of the Calculus of Distinctions requires a nine-dimensional
quantized reality embedded in an infinitely continuous substrate.

Among
the conclusions I reached were that some form of consciousness must be present
with the mass and energy of the physical universe to form reality as we know
it, and that time, like space, is three-dimensional, implying that, consistent
with the law of conservation of mass and energy, our dynamic reality has no
absolute beginning or end, only change.

In 1987,
I undertook the daunting job of preparing my findings for publication. I
submitted an early manuscript to Stephen Hawking in late 1987. After about
three months, I received a reply from his student/interpreter, saying that
Prof. Hawking was very busy and that he was not interested in hyper-dimensional
(more than 4-D) models. I finished my write-up in 1988, and submitted the
manuscript to an eminent physics professor at Berkeley, who wrote notes in the
margins, commenting on what he believed were ‘crackpot’ ideas. After a few
pages, he vowed to “read no further” but nevertheless continued to the
end, making a number of notes that were actually very helpful. Based on his
comments, and additional research, I made some adjustments and corrections, and
published my findings in a book titled: “Infinite Continuity”, in 1990.

At
about the same time as “Infinite Continuity” was published, Prof. Hawking
stated his opinion that consciousness has no direct involvement in the forming
of physical reality, in a public lecture in California, and he further stated
that “someone” had suggested that time is three dimensional, but that he could
not imagine that. In later publications, however, Prof. Hawking began to
consider the extra dimensions of string-theory models, and after a serious
attempt to reconcile general relativity with quantum theory, he abandoned the
proof of a mathematical-singularity origin of the physical universe, for which he
and Penrose had been widely recognized earlier, saying that we should probably
not be talking about absolute beginnings and endings, only change.

And in this I wholeheartedly agree with Prof. Hawking. If we accept the axiom that
finite volumetric distinctions, like atoms, stones, plants, animals, human
beings, planets, stars and galaxies do in fact exist, then the universal law of
conservation of mass and energy implies that if there is *something* at any point in time, a state of nothingness can never have existed.
My personal research using and applying the Calculus of Distinctions in an
existential quantized 9-D universe implies that there is no such thing as
nothingness on *any* of the infinite number of possible timelines. Why is this important? It is important because it means that the universe cannot *ever* have exploded from nothingness. Something has always existed, and always will. There can be no existential state of absolute nothingness, no absolute beginning, no absolute end; only change.

__How TDVP fixes the current scientific paradigm:__

If we
live in a quantized universe, then the calculus of Newton & Leibniz is
being applied beyond its legitimate range of applicability, and this understanding
may require a completely new approach and a new unit of measurement.

There
are four primary variables in any mathematical model describing physical
reality: mass, energy, space and time. Planck’s discovery that mass and energy
are only meted out in multiples of very small units, coupled with Einstein’s discovery that they are related mathematically by the equation E = mc², means that neither mass nor energy can be divided infinitely; there is a finite smallest equivalence unit; there is a bottom to what we can measure in a quantized universe.
You can reduce a given amount of mass and/or energy to smaller and smaller amounts by
removing units of mass and/or energy one at a time, but you can only end up
with one unit or none, not anything in between, because Planck’s discovery
means that *there can be no fractional quanta.* **Thus the variables used to measure mass and energy cannot approach *nothingness* infinitely closely, meaning that the basic assumption of differential and integral calculus is invalid for application to quantum mass and energy.**
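The counting argument here can be made concrete with a toy sketch (my illustration only; the function name and quantum counts are invented for the example, and this is not the Calculus of Distinctions itself). Removing whole quanta one at a time steps a quantity down to one and then to none, with no intermediate values that a variable could approach:

```python
# Toy model: reducing a quantized quantity by whole quanta.
# Illustrative only -- the names and numbers are not from any
# published formalism.

def reduce_by_quanta(total_quanta: int) -> list[int]:
    """Remove one quantum at a time; record each remaining amount."""
    amounts = []
    while total_quanta > 0:
        total_quanta -= 1
        amounts.append(total_quanta)
    return amounts

steps = reduce_by_quanta(5)
print(steps)  # [4, 3, 2, 1, 0]
# The sequence jumps directly from 1 to 0: there is no state "between"
# one quantum and none, so the variable never approaches zero smoothly.
```

The jump from 1 to 0 is the whole point: there is no sequence of ever-smaller fractional amounts for a limit process to ride on.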
What
about space-time? Can space and/or time be divided infinitely? It might seem
so, but a closer look reveals that such divisions are meaningless because
space-time is a derivative of mass-energy.

To understand this, notice that the equivalence expression E = mc² involves not just mass and energy, but also space and time. The speed of light, represented by ‘c’, is the distance travelled by light in a unit of time. We can measure it in miles per hour, kilometers per second, etc. But in order to normalize the units of mass and energy in keeping with the empirical evidence of quantization, i.e. the results of Planck’s black-body radiation experiments, we must also normalize the units of space and time. We do this by defining the speed of light as the movement of light across one unit of space in one unit of time. In this normalized system of units,

c = Δx/Δt = 1/1 = 1,

as it does in the ‘natural’ units known as Planck units.
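This normalization can be sanity-checked numerically. The sketch below uses the CODATA SI values of the Planck length and Planck time (these specific numbers are my addition, not from the text) to confirm that light travels one Planck length per Planck time, which is exactly what makes c = 1 in Planck units:

```python
# Check that one Planck length per one Planck time is the speed of
# light, so c = delta-x / delta-t = 1/1 = 1 in Planck units.
# CODATA 2018 SI values (my addition for illustration):
PLANCK_LENGTH = 1.616255e-35    # metres
PLANCK_TIME = 5.391247e-44      # seconds
SPEED_OF_LIGHT = 299_792_458    # metres per second (exact by definition)

c_from_planck = PLANCK_LENGTH / PLANCK_TIME
print(c_from_planck)  # approximately 2.9979e8 m/s

# Counting distance in Planck lengths and time in Planck times,
# the same ratio becomes 1 unit of space per 1 unit of time:
c_normalized = (PLANCK_LENGTH / PLANCK_LENGTH) / (PLANCK_TIME / PLANCK_TIME)
print(c_normalized)  # 1.0
```

The second ratio is trivially 1 by construction; that triviality is the content of the normalization: choosing the quantum units of space and time so that light crosses one of the former in one of the latter.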

This is
consistent with Einstein’s final appendix to his book on relativity, suggesting
that space and time are derivative of mass and energy, and have no independent
existence of their own. Considered this way, empty space and empty time have no
meaning. Space and time have meaning only in relation to mass, energy and
observation by a conscious entity.

Thus
the variables of space and time, like mass and energy, cannot meaningfully approach
nothing infinitesimally as assumed in the application of Newton’s differential
and integral calculus.

*The ‘fix’ for this, while somewhat difficult to accomplish, is really easy to understand: We must simply replace the calculus of Newton and Leibniz with a quantum calculus, a calculus in which the variables approach something, a finite quantum limit, and not non-existing nothingness.*

**QUANTUM CALCULUS**

Just like
the SM paradigm, the calculus of Newton and Leibniz has been very successful
when applied to its proper domain. That domain is the macro-scale of measurement
where quantum effects are not directly detectable. But when applied to reality
on the quantum scale, Newtonian calculus leads to erroneous results. The reason
this is true is easy to understand: when Newtonian calculus is applied to a mid-scale
problem, the numerical values of expressions describing an object with
measurable variables like mass, energy, space, or time are determined, as one or more of the
variables approaches absolute nothingness, to accuracies within the limits of our ability to measure.

For
mid-scale projects, like building a bridge or firing a rocket, assuming that
physical objects are continuous, so that a measurement can be made at any point
from the size of the entire object to nothing, causes no problems, and is
consistent with our sense-based experience of physical objects, space and time.

At the
quantum scale, however, if, for example, we are trying to determine the exact
location and velocity of an elementary particle consisting of one quantum, or a
few quanta, in a system of such particles, the assumption of continuity is not
valid, and no variable of measurement can approach nothingness infinitely
closely, because the measurement of quantized phenomena by successive division
stops at one quantum. Beyond that, there is no phenomenon to measure.

Clearly, the actual value of an expression describing a quantum state will be different
from the value obtained by applying Newtonian calculus. A quantum
calculus is needed, and the fundamental operations and procedures of the quantum
calculus are necessarily different from those of Newtonian calculus.
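The divergence between the two values can be illustrated with a minimal numerical sketch (my own toy example, not the Calculus of Distinctions): for f(x) = x², the Newtonian derivative at x = 3 is exactly 6, but if the measurement step is frozen at one whole quantum (h = 1 in these illustrative units), the difference quotient gives 7:

```python
# Illustrative contrast between the Newtonian limit and a quantized step.
# Newtonian calculus defines the derivative as the limit of the
# difference quotient as the step h -> 0.  If measurement is quantized,
# h can never shrink below one quantum, so the smallest-step difference
# quotient is the best-defined "rate of change".

def f(x: float) -> float:
    return x * x

def difference_quotient(x: float, h: float) -> float:
    """Rate of change of f over a finite step h."""
    return (f(x + h) - f(x)) / h

newtonian_derivative = 2 * 3              # d/dx of x^2 is 2x: exactly 6 at x = 3
one_quantum_rate = difference_quotient(3, 1)  # step frozen at one quantum

print(newtonian_derivative)  # 6
print(one_quantum_rate)      # 7.0 -- the discrete rate differs from the limit
```

As h shrinks, the difference quotient approaches the Newtonian value; the point of the sketch is that in a quantized domain h cannot shrink below one quantum, so the two values never converge.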

Most
scientists are not aware of the fact that the calculus of Leibniz and Newton is
only one of a number of calculi that can be developed based on arithmetic and
geometric axioms. This is primarily because of academic specialization, the
calculus’ many successes in describing physical reality on the macro-scale, and
just the fact that it has been known as “*the* calculus” for more than 300 years. For a good description of what a calculus is and the step-by-step development of a more comprehensive calculus with applications to logic, see George Spencer Brown’s “Laws of Form”, The Julian Press, 1972.
The appropriate calculus for quantum phenomena is a calculus with one quantum as its basic unit
of measurement. I developed such a calculus in 1986, called it the Calculus of
Distinctions, and published the derivation in “Infinite Continuity”, the Paradigm
Press, 1990. I applied it to the processes of consciousness in
“Transcendental Physics”, Paradigm Press, 1997, later republished by toExcel
Press, 2000. The Calculus of Distinctions was applied to the analysis of
intelligence in “The Calculus of Dimensional Distinction”, in “Elements of mathematical theory of intellect”,
Brandin V, Close ER, Moscow, Interphysica Lab, 2003. With the encouragement
and assistance of Dr. Vernon Neppe, the Calculus of Distinctions was further
developed and published in articles such as “The Calculus of Distinctions: A Workable Model
across Dimensions and Consciousness”, *Dynamic Journal of Exceptional Creative Achievement (DJECA)* 1210:1210; 2387-2397, 2012, Close ER, Neppe VM, and in “Reality Begins with Consciousness”, an e-book, www.BrainVoyage.com, Neppe VM and Close ER, 2012.
As early as 1986, I reasoned that,
if the natural elementary particle with the smallest mass also had the smallest
volume, then it would be the logical candidate for the unitary distinction of
the Calculus of Distinctions for application to quantum mechanics. I also
realized that proof of Fermat’s Last Theorem for n = 3 might explain why quarks
combine in threes to form protons and neutrons. I developed the concept and
published a brief description of it in “Infinite Continuity”, pp. 68-71 and
192, in 1990. Infinite Continuity received little attention at the time, and
was quickly out of print. Many of the ideas in Infinite Continuity, including
the Calculus of Distinctions, were further developed in “Transcendental
Physics”, first published in 1997.

In 2010, twenty years after
publishing Infinite Continuity, when Dr. Vernon Neppe and I first met in person
in Amsterdam, I confided to him that I believed that I could explain why
up-quarks and down-quarks only combine in threes. In 2011, using particle
collider data, I was able to demonstrate that the fact that up-and down-quarks
only combine in threes proves that they combine volumetrically. That is, quarks
are not just held together by elementary forces like a cluster of grapes, they
merge to form symmetrically stable protons and neutrons.

In 2012, I applied the principles
of relativity and quantum mechanics to the Hydrogen atom and its isotopes, and
used the mass and volume of the free electron to define a new basic unit of
measurement at the quantum scale. Normalizing the collider data for quarks to
multiples of this unit, which I named the Triadic Rotational Unit of
Equivalence (the TRUE quantum unit), I was able to show that there would be no
stable atomic structure in the universe without TRUE quantum units of a third
form of the substance of reality. These units of the third form, while
occupying equivalent volumes in the same way mass and energy do, have no
measurable mass or energy.

After some amount of discussion,
Dr. Neppe and I decided to call this third form of the substance of reality
‘gimmel’. Applying TRUE analysis to the natural elements, we found that the
most stable atoms of the Periodic Table having this basic symmetry provided by
gimmel, are the elements that support life. Furthermore, gaps that occur in the
progressive symmetry of the Periodic Table, are filled by compounds that are
part of the RNA and DNA molecules that make up the physical structure of
organic life. These facts strongly suggest that the universe is designed
specifically for conscious life as we know it.

If there were a big-bang explosive
origin event as the current paradigm suggests, there would be no stable physical
structure without the third form existing at, or before, the explosion. Without
it, any particles that might somehow come together randomly in a
universe where particles are flying away from a violent explosion would soon
fly apart, and such a big-bang expanding universe would return to maximum
entropy, the same nothingness assumed to exist before the explosion. The
absurdity of ‘everything from nothing’ is apparent when one realizes that it
arises from the inappropriate application of the Newtonian calculus, with
vanishing infinitesimals, to quantum reality.

**FINALLY, THE SCIENCE OF THE FUTURE IS HERE!**

When I made those observations
nearly 60 years ago, and voiced concerns about the direction science was headed
and the need to include consciousness and spiritual experience as legitimate
parts of reality, almost no one was interested. The public was fascinated with
the Bridey Murphy reincarnation story and UFO sightings, while mainstream
science spent more time and effort trying to debunk such stories than studying
them.

Relativity and quantum physics
were still new, and mainstream scientists were ignoring the efforts of a few,
like David Bohm and Eugene Wigner, who saw evidence in quantum experiments that
consciousness could be actively participating in the formation of reality at
the quantum level. Most mainstream scientists justified their decidedly
unscientific closed-mindedness as necessary to keep science from
slipping into what they considered to be ‘pseudoscience’.

Continental drift, later known as
plate tectonics, was considered to be a crackpot idea, psychology was
considered to be a ‘fringe’ science, and most medical school graduates
considered hypnosis, acupuncture and chiropractic to be medical quackery. Consciousness
was believed to be a recent evolutionary development, emerging only after
primitive life forms developed brains with a sufficient level of complexity,
and anyone thinking outside the box of scientific orthodoxy was in danger of
being ridiculed and shunned by ‘real’ scientists. The only major university
with a parapsychology department was Duke, where Dr. J.B. Rhine, his wife
Louisa Rhine, and a handful of grad students were attempting to apply the
scientific method and statistical analysis to the study of extra-sensory
perception (ESP), and Duke University severed its relationship with
parapsychology a few years later.

Thankfully, things have changed some since then.

Recently, in a TED talk video
published July 14, 2014, David Chalmers asked the question “How do you explain
consciousness?” He has been asking this question for at least 20 years, and an
increasing number of scientists, like Henry Stapp, Roger Penrose, Stuart
Hameroff, David Peat, Peter Russell, Fred Alan Wolf, Dean Radin, Menas Kafatos,
John Hagelin, and Deepak Chopra, are finally seeing consciousness as a
legitimate subject for scientific investigation, and yet, few even agree on a
comprehensive definition of consciousness.

David Chalmers asks: “How do we
accommodate consciousness in science?” No one knows. Finally, he says: “Maybe
it is time to consider a crazy idea: Maybe consciousness itself is fundamental
and universal in reality.” Based on this TED presentation, it appears that, in
his quest to answer the ‘hard question’ of why and how we experience the
amazing qualia of consciousness, Chalmers favors the ‘crazy idea’ that
consciousness is fundamental; but he is less certain about whether or not
consciousness is universal.

Researchers like Penrose and
Hameroff are making headway in linking consciousness to neurological
structures, and Stuart Hameroff and others like Chalmers, Kafatos and Hagelin
are raising the age-old mind-body, or mind-matter question, but in a slightly
different form. They recognize that what is missing is a scientifically
reproducible, and mathematically provable connection between the laws of
physics and the qualia of conscious experience.

And that is what we (Close and
Neppe) have provided. That is what TDVP is all about. The mathematics of TRUE
quantum analysis *is* the scientifically reproducible, and mathematically provable, connection between the laws of physics and the qualia of conscious experience.
Proof of this is the fact that
TRUE quantum analysis has explained, and continues to explain an increasing
number of phenomena inexplicable in the current paradigm. And, of course, TDVP turns the current
paradigm upside down:

*It proves that consciousness is fundamental and universal. The physical universe is an emergent feature of consciousness, not the other way round.*
In conclusion, I believe that TDVP
is the science of the future, and I predict that, in the not-too-distant
future, nearly every thinking person alive will realize that the paradigm of
scientific materialism was actually the crazy theory, and they will wonder how
anyone could have ever thought that reality could possibly exclude the
fundamental truth that reality begins with consciousness.
