Monday, March 19, 2018


©Edward R. Close 2018 

I’m borrowing the subtitle above from Albert Einstein’s book “Relativity: The Special and General Theory” because that’s what I am going to try to do: provide a clear explanation that anyone can understand. A clear explanation of what? Why, of gimmel, of course. In my opinion, the discovery of the existence of gimmel, the third form of reality that is neither mass nor energy, is the most important scientific discovery since Max Planck discovered quantum physics, Albert Einstein discovered relativity, and Theodor Kaluza discovered that a five-dimensional model of reality works better than the Einstein-Minkowski 4-D spacetime model.

What I want to do is explain gimmel in clear terms, using analogies with things I hope most readers will find familiar. I believe the concepts are simple enough that, indeed, anyone can understand them. But first, a few words about language. [I hope the reader will not be put off by, and might in fact enjoy, my natural tendency to explore relevant side issues, of which this side-bar is one; I think such side issues will enhance a narrative that some readers might otherwise find tedious.]

There are hundreds of languages in the world, many of which are related to each other in interestingly complex ways, but most people are fluent in only one or a few of them. English, like most modern languages, is a mixture of several of them. The basic grammatical structures and most of the basic words in English, like mother, father, house, and home, are Teutonic (Germanic), but many words in common English usage are borrowed from other languages. This makes English non-phonetic (many words are not pronounced the way they are spelled). That makes it confusing for a non-native speaker, and it also makes English much weaker grammatically. Because of its mixed origin, English is capable of conveying subtle nuances, and that can be a good thing, but it can also lead to misunderstandings. Somewhat purer languages, like German and French, are much more precise. It may surprise some readers to learn that some of the world’s most ancient languages are more precise and more sophisticated than English or any other modern-day language. Examples include Sanskrit, Hebrew and Chinese.

This brings to mind an amusing experience I’d like to share. [A relevant side-bar.] While working and living in the Middle East a few years ago, I was invited to a wedding feast. Several of the attendees were American or European, but many were natives of the area. While waiting for one of the courses to be served, the chit-chat turned to languages. An Englishman asked a young Saudi accountant seated next to me how many languages he spoke. The young man said:

 “I speak three languages.”

“Oh, very good!” the English gentleman said. “What are they?”

“I speak Arabic, French and Italian,” the young man replied.

“But what about English?” The Englishman raised his eyebrows. “You forgot about English!”

“Oh, that’s not a language!” The young man’s reply was unexpected.

I tried not to laugh at the shocked look on the Englishman’s face. But, of course, the young man was right. English is not a pure language. It’s a mixture of Germanic languages, missing much of their grammar, and is laced with horribly mispronounced words borrowed from other languages.

Back to the task at hand: Mathematics is (are?) also a language. [Here’s a good example of mixed language: to the British, the word mathematics is plural (that’s why it has an ‘s’ at the end) and it is treated as such. They talk about the different “maths,” such as algebra, trigonometry, and calculus, while in American English we treat it as a singular noun.]

 Mathematics, like English, is a symbolic language, where the symbols convey logical units.

Mathematical expressions form an actual written language, a language designed to express basic logical concepts like quantification, enumeration, equivalence, and transformational operational processes (addition, subtraction, multiplication, division, roots, differentiation, integration, etc.). When I was teaching high-school algebra, I liked to make the point that math is a language and an equation is a sentence in that language. When you diagram a sentence, the subject and its modifiers form the left-hand side of the equation, the verb is the equals sign, and the object and its modifiers form the right-hand side of the equation. This helped some students, who had learned to diagram sentences to identify the parts of speech, to set up and solve word problems using algebra.

The challenge of communication for mathematicians and physicists is that of trying to translate complex mathematical and physical concepts into a language with which the reader is familiar. Some theoretical concepts must be communicated by analogy because the exact mechanism or process being discussed may not be something with which the reader is familiar.

I agree with Richard Feynman when he said: “If we can’t explain it so that a first-year student can understand it, then we don’t really understand it ourselves!” So, my purpose here is two-fold: First, I hope to explain gimmel in a way that anyone can understand, and second, by doing so, demonstrate that I really do understand it myself. Before I get into explaining gimmel, it may be helpful to explore the origin of the attitude of quantum weirdness held by most physicists.

I think Erwin Schrödinger may have been the one who coined the term quantum weirdness because, even though he is cited as one of the most important pioneers in the field, he certainly didn’t like the way quantum theory evolved. In fact, he said: “I don’t like it, and I’m sorry I ever had anything to do with it.” Source:
Most, if not all, of the founders of modern quantum theory, including Niels Bohr, were physicists who were unable to explain why things like quantum uncertainty, quantum entanglement and quantum jumps should exist. Modern mainstream physicists have adopted the “quantum weirdness” mantra and, like Richard Feynman, tell their students: “Don’t try to understand it; just accept the fact that it’s weird and do the calculations, because it works!”

I shouldn’t have to point out that this is not a scientific attitude. Likewise, the idea that there is one set of rules for macro-scale phenomena (big stuff) and a different set of rules for quantum-scale phenomena (little-bitty, unbelievably teeny-weenie stuff) is scientifically unacceptable, because the rules, as currently formulated, are incompatible. Reality is not incompatible with itself. What happens on the macro scale depends on what is going on at the quantum scale. John von Neumann showed this elegantly in his book “Mathematical Foundations of Quantum Mechanics”. But there shouldn’t ever be any question about this point; it’s just common sense: if everything is composed of quanta of mass and energy, then everything is a quantum phenomenon, period.

This brings us to the discovery of gimmel. You may well ask: “How in the world was it ever possible to discover something which is neither matter nor energy?” We can measure the mass of an object in terms of its weight or inertia (its resistance to any attempt to move it), and its energy in terms of the force it exerts upon impact with another object. In the case of elementary particles, which are what we will be talking about here soon, the only things measured in “atom smashers” like the Large Hadron Collider (LHC) are mass and energy, or the compound effects of mass and energy inferred through statistical analyses. If something exists that is neither matter nor energy, then how is it detected? Gimmel shows up in something physicists call angular momentum. I could explain what angular momentum is most easily with a mathematical equation, but in keeping with my desire to explain everything so that anyone can understand it, I will use an analogy.

Imagine you are on a children’s playground, standing on a merry-go-round near its center, and it is spinning very fast at a constant rate. Also imagine that there is a large tree growing near the merry-go-round, and some of its branches overhang the merry-go-round so that the smallest ends of several limbs hang down to the level of your face. When you are standing near the center and the end of a branch brushes your face, you’ll hardly notice it; you’ll probably just wave it aside with your hand. Now imagine that you move slowly toward the outer edge of the merry-go-round, and you encounter the end of a branch of the same size near the edge. It will sting your face like a whip.

The difference between the gentle brush and the whip-like lashing is the difference in angular momentum. Your mass, indicated by your weight, hasn’t changed in any measurable way; you weigh the same whether you are standing at the center of the merry-go-round or at the edge. The mass of the two limbs is the same, and the mass and the motion of the merry-go-round haven’t changed. The difference is a result of increased angular momentum.
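For readers comfortable with a few lines of code, the merry-go-round numbers can be made concrete. The spin rate, rider mass, and radii below are made-up values chosen only for illustration; the point is that the angular momentum of a point mass (L = m·v·r, with v = ω·r) grows with the square of the distance from the center, even though the mass itself never changes.

```python
# Tangential speed and angular momentum on a spinning merry-go-round.
# The spin rate, rider mass, and radii are illustrative assumptions.
import math

omega = 2 * math.pi * 0.5   # angular velocity: half a revolution per second, in rad/s
mass = 40.0                 # rider's mass in kg (the same wherever they stand)

for radius in (0.3, 1.8):   # near the center, then near the edge (meters)
    v = omega * radius      # tangential speed: v = omega * r
    L = mass * v * radius   # angular momentum of a point mass: L = m * v * r
    print(f"r = {radius} m: speed = {v:.2f} m/s, angular momentum = {L:.1f} kg·m²/s")
```

Moving from 0.3 m to 1.8 m multiplies the radius by six, but multiplies the angular momentum by thirty-six, which is why the branch near the edge stings like a whip.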

Next, I must talk about mathematics a little bit, in particular about “the calculus”; but, again, I’ll stay away as much as I can from the mathematical symbols and operations that make up the language of mathematics, in which the average person is not fluent, and talk instead about the underlying concepts. I was fortunate to have an excellent calculus teacher in the prime of his tenure. His name was Dr. Floyd Helton. He made sure I understood the basic concepts, not just how to plug in numbers and get answers.

When the first grades were posted for Dr. Helton’s class, I saw that I had received a B+. I had been told by upperclassmen that Dr. Helton had awarded only two A grades in sophomore calculus in over 10 years of teaching, but because I had the highest test-score average in the class by several points, I had expected an A. Math was an easy subject for me and I was accustomed to earning A’s. When I went to his office to ask about the grade, before I could complain, he said: “You did very well, Ed. If you apply yourself, you might be able to raise your grade to an A – during the next quarter-semester!” He went on to tell me that, if I were a math major (I was a physics major at the time), he would be concerned about my unorthodox methods of solving problems. (I often found quicker, easier ways to solve some of the problems he gave us, and by doing so, by-passed some of the problem-solving techniques he wanted his students to learn.)

In those days, most math and physics departments in US colleges and universities had two or three levels of basic math and physics courses in their curricula: introductory-level classes for students not majoring in math or physics; a second, more stringent level for students bound for medical school, called “pre-med” math or physics; and a third, more rigorous level of classes for students majoring in the math and/or physics department. At the time, many students saw this as elitism, and I tended to agree, but after witnessing the gradual dumbing-down of American institutions of higher learning over the past 50 years, I realize that the process of multiple career tracks is not elitism, but a way to ensure that when a student is awarded a degree in math or physics, he or she actually has some in-depth knowledge of the subject. Having been responsible for hiring professionals for many years, I know that, increasingly, this is too often not the case.

Enough of my soap-box lecturing on what I see as a serious problem in modern education. Back to the concepts behind the mathematics used by mainstream scientists: There are two points I want to stress, because they will be important in getting you to the point where you can understand gimmel. First, the calculus of Newton and Leibniz, generally known as “the calculus” (TC), is only one of several possible calculi; we will talk about others. Second, the successful application of TC depends upon the concepts of variables, functions, continuity and limits. A variable is a measurable feature of an object, like length, weight, or energy. A function is a mathematical description of an object that depends on the values of variables. Continuity means that the measure of something (like mass, energy, space, or time) can be divided indefinitely, and a limit is a specific mathematical value that a function approaches as one or more of its variables is repeatedly diminished, approaching but never reaching zero.

At the macro-scale, i.e., the everyday size of the things we can see and feel, everything seems to be continuous, that is, infinitely divisible. But, at the quantum scale, below the sensitivity of our physical senses, this is not true. So, while TC works on the macro-scale, it fails at the quantum scale because the requirements of continuity of variables are not met. The application of TC at the quantum level leads to incorrect results and much of the quantum “weirdness” proclaimed by physicists. Because the physical universe is quantized, we need to recognize this fact and develop quantum mathematics accordingly.

The obviously inappropriate application of TC to quantum phenomena has been overlooked by mainstream science for more than 100 years. The reason it has been overlooked is easy to understand: in a quantized world, no variable can approach zero arbitrarily closely, as it must for TC to apply. It must always be a whole-number multiple of one quantum. The smallest it can possibly be, and still describe an object, is one quantum. Max Planck discovered that every object in the physical universe is made up of quanta of mass and energy. Imagine a physical object as a stack of blocks. If you remove one block at a time, the stack becomes smaller and smaller, all the way down to one block. If you remove that final block, you no longer have an object. Planck discovered that there are no fractional quanta; mass and energy always occur in multiples of whole quantum units. Thus, variables measuring physical objects cannot approach zero arbitrarily closely, as they must for TC to apply.
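The stack-of-blocks picture can be sketched in a few lines of code. The quantum size here is an arbitrary stand-in of my own; the point is simply that a quantity built from whole quanta bottoms out at one quantum and can never approach zero, which is exactly the continuity requirement TC depends on.

```python
# Why TC needs continuity: in a quantized world a variable can shrink
# only to one whole quantum, never arbitrarily close to zero.
QUANTUM = 1  # smallest allowed amount: one quantum (illustrative assumption)

def shrink(amount):
    """Halve a quantity, but never below one whole quantum."""
    half = amount // 2           # whole quanta only: no fractional quanta exist
    return max(half, QUANTUM)

amount = 64
seen = []
while True:
    seen.append(amount)
    nxt = shrink(amount)
    if nxt == amount:
        break                    # stuck at one quantum -- zero is unreachable
    amount = nxt

print(seen)  # 64, 32, 16, 8, 4, 2, 1 -- the sequence bottoms out at 1, not 0
```

Contrast this with the difference quotient above, where h could be made as small as we pleased: that freedom is precisely what a quantized variable lacks.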

Newton’s calculus (TC) has served us wonderfully well for more than 300 years because the quanta that make up reality are so small that they are far below our ability to observe, weigh and measure directly. For the problems of classical and relativistic physics, and practical engineering problems dealing with macro-scale objects like houses, bridges and rockets, the assumption of the continuity of variables and the application of TC yields results that are correct within the margin of measurement error. In terms of our macro-units of measurement like inches, centimeters, pounds and grams, etc., a quantum of mass or energy can be considered to be an infinitesimal, that is, a virtually dimensionless point. That’s why TC works for macro-scale objects, but it does not work for quantum objects. Until Planck discovered that we exist in a quantized reality, and John von Neumann proved that all objects are quantum objects in his mathematically elegant book “Mathematical Foundations of Quantum Mechanics”, we had no way of knowing that our math would not work for subatomic phenomena like quarks.

The way to rectify this problem is clear: a quantum mathematics with a quantum calculus is needed. I will proceed to discuss conceptually how this was done with the foundational mathematics of the Triadic Dimensional Vortical Paradigm (TDVP) model of reality developed by Dr. Vernon Neppe and this author over the past ten years.


Quantum Units
Max Planck understood the need for quantum units and attempted to provide them. He reasoned that the known universal constants normalized to 1 should provide a set of basic quantum units. The term “normalization of units” may sound mysterious and complicated to some readers, but it is really quite simple. We are so used to thinking in terms of units like inches, feet, miles, and pounds, etc., or centimeters, meters, kilometers, grams, etc., that we forget that the length, distance, weight, etc. they describe were chosen quite arbitrarily. Inches, feet and pounds, e.g., were based on certain physical characteristics of an English king. It makes no difference how we define a unit of measurement, as long as we agree on a standard definition of the unit.
Planck’s units, sometimes called natural units or normalized units, are physical units of measurement defined by normalizing the five known universal physical constants c, G, ħ, Ke, and kB to one. The universal constants normalized to 1 to define Planck units are:

The speed of light in a vacuum, c,
The gravitational constant, G,
The reduced Planck constant, ħ,
The Coulomb constant, Ke, and
The Boltzmann constant, kB.
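For the curious, these constants combine into Planck’s natural units of length, time, and mass. A quick computation, using rounded CODATA values in SI units and the standard formulas (for example, the Planck length is √(ħG/c³)):

```python
# Planck's natural units, built from the constants listed above.
# Constant values are rounded CODATA figures in SI units.
import math

c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3/(kg*s^2)
hbar = 1.054571817e-34  # reduced Planck constant, J*s

planck_length = math.sqrt(hbar * G / c**3)   # about 1.6e-35 m
planck_time = planck_length / c              # about 5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)        # about 2.2e-8 kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")
```

The sheer smallness of the first two numbers shows why these quanta lie far below anything our senses, or even our best instruments, can resolve directly.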

Setting these constants equal to one provides a quantized basis for calculation, but there are two problems with this normalized system of measurement: first, it does not result in whole-number descriptions of physical objects, and second, Planck did not identify the problem with using TC in quantum calculations.

In 1989, I realized that the first problem with Planck units could be resolved by normalizing the units of measurement of the four primary variables of physics (mass, energy, space and time) instead of normalizing universal constants, and the second problem was then eliminated by defining a quantum equivalence unit for the primary variables as the basis of a quantum calculus. That quantum equivalence unit is the Triadic Rotational Unit of Equivalence (TRUE), and the quantum calculus is the calculus that uses the TRUE as the basic unit of measurement and calculation. In 1990, I published my second book, “Infinite Continuity,” in which I presented these concepts and applied them to some physics and cosmology paradoxes and problems.

After publishing these concepts in 1990, and again in my third book, “Transcendental Physics,” in 2000, I filled in the details, defining the quantum equivalence unit (TRUE) by starting with a very accurately measured quantity, the smallest mass in particle-physics data, the mass of the electron, which I normalized to 1, and then determining the normalized masses of the up-quark and down-quark as multiples of that natural unit. I chose to develop the quantum calculus by focusing on electrons and quarks because the normal matter of the physical universe is made up almost entirely of combinations of electrons, up-quarks and down-quarks, and I reasoned that any other objects that exist in the universe would arise naturally as results of applying the laws of physics to various combinations of these particles.
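As a rough, back-of-the-envelope illustration of normalizing the electron mass to 1, here are commonly quoted current-mass estimates for the quarks divided by the electron’s mass-energy. The MeV figures are outside estimates I have supplied purely for illustration; the TRUE values Dr. Neppe and I published are derived by a different, volumetric route and should not be read off from this sketch.

```python
# Rough check of the "electron mass normalized to 1" idea.
# Quark masses are not directly measurable; the MeV figures below are
# commonly quoted current-quark estimates, used here only for illustration.
electron_mev = 0.511    # electron rest mass-energy, MeV
up_mev = 2.2            # up-quark current-mass estimate, MeV
down_mev = 4.7          # down-quark current-mass estimate, MeV

print(f"up quark   ≈ {up_mev / electron_mev:.1f} electron masses")
print(f"down quark ≈ {down_mev / electron_mev:.1f} electron masses")
```

Even with these rough inputs, the ratios come out close to small whole numbers, which is the kind of whole-number description a quantum calculus is after.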

Adding to what Planck discovered, Einstein taught us that mass, energy and space-time are very precisely related mathematically. That relationship is expressed by the equation E = mc². It is not necessary for the reader to understand the math and physics behind this equation. The point I want to make here is that there is a well-defined equivalence of mass and energy related to space-time. With this knowledge, I used relativity to define a volumetric space-time equivalence unit, the Triadic Rotational Unit of Equivalence (TRUE). I call it this because the elementary particles that make up reality are rotating, or spinning, very rapidly, and the unit used to describe them includes the equivalence of mass and energy by normalization. Taking this unit as the basic distinction of calculation, I proceeded to develop the Calculus of Dimensional Distinctions (CoDD). Why do I call it this? Because it is a calculus dealing with dimensional distinctions (TRUE units) as the basic quantum units of quantized reality.

In plain English, this simplifies the measurement and description of reality tremendously by using a calculus with a natural quantum unit as its basic unit of measurement. It simplifies everything because all descriptions of reality in this calculus are multiples of whole numbers of quantum equivalence units. This is why I am so bold as to think that I can explain TDVP in terms that anyone can understand!

In the CoDD, the equations that describe the combinations of electrons and quarks, namely protons, neutrons and all the atoms of the Periodic Table, are whole-number equations called Diophantine equations, after the Greek mathematician Diophantus, who studied them. Diophantine equations are simply equations whose constants are whole numbers and for which only whole-number solutions are allowed; there are no fractional variables or constants.

My intention in writing this article was, and still is, to explain gimmel in a way that anyone can understand, and I intend to do that by avoiding the use of mathematical equations and complex calculations. But I think most readers will forgive me for referring to E = mc² because the equation is so well known. I’m hoping readers will also accept another overt display of abstract mathematical symbolism in the same vein. Like E = mc², it is a brief abstract expression that has far-reaching implications.

It is the general summation expression:

(X₁)ᵐ + (X₂)ᵐ + … + (Xₙ)ᵐ = Zᵐ
This expression, with X, Z, n, and m defined as positive integers (whole numbers) is the expression in conventional mathematical symbols that represents the CoDD combination of all possible distinctions made of multiples of the TRUE unitary distinction. In other words, with the right choice of positive whole-number values for the variables X, Z, n, and m, this expression can represent any combination of elementary particles, including electrons, quarks, protons, neutrons and atoms. I call it the Conveyance Expression because it conveys the logic of combination from quantum equivalence units to observable and measurable objects in the everyday world of our common experience.

This expression is a bridge in many ways: It is a bridge from the finite to the infinite because, with the variables ranging from one to positive infinity, it represents an infinite number of equations. It is a bridge between dimensions because m represents the number of dimensions in the domain of each specific equation. For example, the integer solutions when n = 2 and m = 2 are the triples of the Pythagorean Theorem, and the lack of integer solutions when n = 2 and m ≥ 3 represents Fermat’s Last Theorem. It also provides a bridge between integers and complex numbers by determining the roots of the quantum equivalence unit that act as location indicators in n-dimensional domains, where n can be any positive integer: 1, 2, 3, 4, 5, ….
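A brute-force search makes the n = 2 case tangible: for m = 2 the whole-number solutions are the familiar Pythagorean triples, while for m = 3 the search comes up empty, just as Fermat’s Last Theorem guarantees. This is only an illustrative sketch of the summation expression, not part of the CoDD itself.

```python
# Brute-force look at the summation expression with n = 2:
#   X1^m + X2^m = Z^m, all variables positive whole numbers.
def solutions(m, limit):
    """Find whole-number solutions with X1 <= X2 < limit."""
    found = []
    for x1 in range(1, limit):
        for x2 in range(x1, limit):
            for z in range(x2 + 1, 2 * limit):
                if x1**m + x2**m == z**m:
                    found.append((x1, x2, z))
    return found

print(solutions(2, 15))  # includes (3, 4, 5) and (5, 12, 13): Pythagorean triples
print(solutions(3, 15))  # an empty list: no whole-number solutions for cubes
```

The m = 2 list grows without bound as the search limit grows; the m = 3 list stays empty forever, which is Fermat’s Last Theorem in miniature.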

OK, enough about the mathematical concepts. How does this help get us to gimmel? It reveals that, for protons and neutrons to be as stable as they are, there must be something occupying volumes of specific numbers of quantum equivalence units in up- and down-quarks that is neither mass nor energy. Dr. Neppe and I have chosen to call this something gimmel because there is no word for it in the current scientific or mathematical lexicons. Describing the combinations of up-quarks and down-quarks as distinctions that combine to form stable protons and neutrons also explains why they are made up of three quarks each: 2 up-quarks and 1 down-quark for protons, and 1 up-quark and 2 down-quarks for neutrons. They cannot combine in twos to form symmetric objects.

I think it will help to talk about the stability of spinning objects here. Exactly why the elementary entities that make up physical reality are spinning is another very interesting question, but it would take a lot more discussion to explain, so I’ll skip that for now and simply state that we know they are spinning at high rates of angular velocity. In the process of defining the quantum equivalence unit, for example, it was necessary to calculate the angular velocity of the electron after it is stripped from a hydrogen atom in the process called ionization, and we found that the free electron spins at, or very near, light speed.

Protons are very stable. They may, in fact, be the most stable compound entities in the universe. Estimates of their half-life exceed the age of the big-bang universe. This means that, because they are spinning, they must be perfectly symmetrical. As an analogy, think of a lump of clay on a potter’s wheel. As the angular velocity of the wheel increases, if the lump isn’t quickly guided into a symmetrical shape, it will fly apart. The same is true for the proton. The conveyance equation describing the combination of two up-quarks and one down-quark to form a stable proton shows us that, in order for the proton, composed of quarks that are in turn comprised of normalized quantum equivalence units, to be symmetrical, there has to be something that is neither mass nor energy occupying some of the volume of the up- and down-quarks. The details of the math proving this have been published in several journal articles and in the book “Reality Begins with Consciousness,” published in 2011. See

Returning to the merry-go-round analogy: The mass of the quarks in a proton affects the total angular momentum and mass of the proton differently than it would on its own, because of where that mass sits in relation to the proton’s center of rotation. When we account for this, it explains why the mass of the proton is much greater than the combined masses of two up-quarks and one down-quark. The same type of analysis also explains the mass of the neutron.
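A rough calculation shows just how large the effect being explained is. Using commonly quoted mass-energy estimates (my own illustrative figures, in MeV, not TRUE units), the rest masses of the proton’s three quarks account for only about one percent of the proton’s mass:

```python
# The proton "mass surplus" the merry-go-round analogy points at:
# the three quarks' rest masses account for only about 1% of the proton's mass.
# The MeV figures are commonly quoted estimates, assumed here for illustration.
proton_mev = 938.3      # proton rest mass-energy, MeV
up_mev = 2.2            # up-quark current-mass estimate, MeV
down_mev = 4.7          # down-quark current-mass estimate, MeV

quark_sum = 2 * up_mev + down_mev   # a proton is 2 up-quarks + 1 down-quark
fraction = quark_sum / proton_mev
print(f"sum of quark rest masses: {quark_sum:.1f} MeV")
print(f"fraction of proton mass:  {fraction:.1%}")
```

The other ninety-nine percent is what any account of the proton, mainstream or otherwise, has to explain; in the text above, it is attributed to where the mass sits relative to the center of rotation.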

It is important to understand why this something called gimmel cannot be mass or energy: If it were, the object detected could not be identified in LHC data as an up-quark or down-quark; it would have the wrong mass-energy equivalence. Gimmel cannot be detected as mass or energy and only shows up in its contribution to the total angular momentum of particles like electrons, protons, neutrons, and atoms. Having no mass or energy, gimmel has to be classified as non-physical, because it does not fit the definition of a physical object. A physical object, by definition, is composed of matter and energy and occupies space.

In addition to explaining why quarks combine in threes and explaining away “quantum weirdness”, the existence of non-physical gimmel explains many things not explained by the current mainstream paradigm, including why particles spin, why fermions like protons and neutrons have an intrinsic spin of 1/2, the Cabibbo angle, what dark matter and dark energy really are, and much more.

The question for further discussion is: If gimmel is not matter or energy, what is it?
