Three Fundamental Limitations of Modern Science (Part 2)

APPLICATION OF MATHEMATICS: Undated portrait of Albert Einstein (1879-1955) who was awarded the Nobel Prize for Physics in 1921. (AFP/Getty Images)
5/29/2010 | Updated: 10/1/2015

Some of the greatest thinkers wanted to determine the nature of mathematical reasoning in order to improve their understanding of the notion of “proof” in mathematics. To that end, they attempted to codify the thought process of human reasoning, as it applies to mathematics. They surmised that logic and mathematics are interrelated and that mathematics can be a branch of logic, or vice versa. They thought that the kind of logical deductive method of geometry may be employed for mathematics, where all true statements of a system can be derived from the basis of a small set of axioms.

“The axiomatic development of geometry made a powerful impression upon thinkers throughout the ages; for the relatively small number of axioms carry the whole weight of the inexhaustibly numerous propositions derivable from them,” philosopher Dr. Ernest Nagel and mathematician Dr. James R. Newman wrote in their book Gödel’s Proof. “The axiomatic form of geometry appeared to many generations of outstanding thinkers as the model of scientific knowledge at its best.”

Persistent Contradictions in Logic

However, inherent paradoxes were known to exist in logic, and a variety of paradoxes were also discovered in set theory, such as Russell’s paradox. Those paradoxes all have two things in common: self-reference and contradiction. A simple and well-known paradox is the liar paradox, expressed as “I always lie.” From such a statement it follows that if I am lying, then I am telling the truth; and if I am telling the truth, then I am lying. The statement can be neither true nor false. It simply does not make sense. From the discovery of paradoxes in set theory, mathematicians suspected that there might be serious imperfections in other branches of mathematics.
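The self-defeating structure of the liar paradox can be made concrete with a short check. The sketch below is only an illustration written for this discussion: it treats the liar statement as a Boolean value that must equal its own negation, and shows that no truth assignment satisfies it.

```python
# The liar statement S asserts "S is false", so a consistent truth
# value v for S would have to satisfy: v == (not v).
def is_consistent(v):
    return v == (not v)

# Try both possible truth values; neither one works.
consistent_values = [v for v in (True, False) if is_consistent(v)]
print(consistent_values)  # prints [] -- S is neither true nor false
```

The empty result is the paradox in miniature: within two-valued logic, the self-referential statement simply has no consistent truth value.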

In his book Gödel, Escher, Bach: An Eternal Golden Braid, Dr. Douglas Hofstadter, professor of cognitive science at Indiana University in Bloomington, wrote, “These types of issues in the foundations of mathematics were responsible for the high interest in codifying human reasoning methods which was present in the early part of [the 20th century]. Mathematicians and philosophers had begun to have serious doubts about whether even the most concrete of theories, such as the study of whole numbers (number theory), were built on solid foundations. If paradoxes could pop up so easily in set theory—a theory whose basic concept, that of a set, is surely very intuitively appealing—then might they not also exist in other branches of mathematics?”

Logicians and mathematicians tried to work around these issues. One of the most famous of these efforts was conducted by Alfred North Whitehead and Bertrand Russell in their mammoth work Principia Mathematica. They realized that all paradoxes involve self-reference and contradiction, and devised a hierarchical system to disallow both. Principia Mathematica basically had two goals: to provide a complete formal method of deriving all of mathematics from a finite set of axioms, and to be consistent, with no paradoxes.

At the time, it was unclear whether or not Russell and Whitehead really achieved their goals. A lot was at stake. The very foundation of logic and mathematics seemed to be on shaky ground. And there was a great effort, involving leading mathematicians of the world, to verify the work of Russell and Whitehead.

Hofstadter wrote in Gödel, Escher, Bach: “[German mathematician Dr. David Hilbert] set before the world community of mathematicians (and metamathematicians) this challenge: to demonstrate rigorously—perhaps following the very methods outlined by Russell and Whitehead—that the system defined in Principia Mathematica was both consistent (contradiction-free), and complete (i.e. that every true statement of number theory could be derived within the framework drawn up in [Principia Mathematica]).”

Gödel’s Incompleteness Theorem

In 1931, the hope in that great effort was destroyed by Austrian mathematician and logician Dr. Kurt Gödel with the publication of his paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Gödel demonstrated an inherent limitation, not just in Principia Mathematica, but in any conceivable axiomatic formal system that attempts to model the power of arithmetic. Arithmetic, the theory of whole numbers and operations such as addition and multiplication, is the oldest and most basic part of mathematics, and one of great practical importance.
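The engine of Gödel’s proof is “Gödel numbering”: every formula of the formal system is encoded as a unique whole number, so statements about numbers can also be read as statements about formulas, including themselves. The toy encoding below is a simplified sketch of the idea; the four-symbol alphabet and its codes are invented for this illustration and are not Gödel’s actual scheme.

```python
# Toy Goedel numbering: assign each symbol a code, then encode a formula
# as a product of prime powers. Unique factorization guarantees that the
# number can be decoded back into exactly one formula.
SYMBOL_CODES = {'0': 1, '=': 2, 'S': 3, '+': 4}  # invented mini-alphabet
CODE_SYMBOLS = {c: s for s, c in SYMBOL_CODES.items()}

def first_primes(n):
    """Return the first n primes by trial division."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(formula):
    """Map a formula to its Goedel number: 2^c1 * 3^c2 * 5^c3 * ..."""
    codes = [SYMBOL_CODES[ch] for ch in formula]
    number = 1
    for p, c in zip(first_primes(len(codes)), codes):
        number *= p ** c
    return number

def decode(number):
    """Recover the formula by reading off prime exponents in order."""
    symbols, k = [], 1
    while number > 1:
        p = first_primes(k)[-1]
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        symbols.append(CODE_SYMBOLS[exponent])
        k += 1
    return ''.join(symbols)

print(encode('0=0'))  # 2^1 * 3^2 * 5^1 = 90
print(decode(90))     # prints 0=0
```

Because the encoding is reversible, assertions about formulas become assertions about their numbers; this is what allows arithmetic to “talk about” its own statements, and ultimately to express a statement that refers to its own unprovability.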

Gödel proved that an axiomatic formal system that attempts to model arithmetic cannot be both complete and consistent at the same time. This proof is known as Gödel’s Incompleteness Theorem. There are only two possibilities for such a formal system:

(1) If the formal system is complete, then it cannot be consistent: the system will contain a contradiction analogous to the liar paradox.

(2) If the formal system is consistent, then it cannot be complete: there will be true statements of the system that the system cannot prove.

For very simple formal systems, the limitation does not exist. Ironically, as a formal system becomes more powerful, at least powerful enough to model arithmetic, the limitation of Gödel’s Incompleteness Theorem becomes unavoidable.

Some scientists say that Gödel’s proof has little importance in actual practice. However, English mathematical physicist Dr. Roger Penrose pointed out that another theorem, Goodstein’s theorem, is actually a Gödel theorem that demonstrates the limitation of mathematical induction in proving certain mathematical truths. Mathematical induction is a purely deductive method that can be very useful in proving an infinite series of cases with finite steps of deduction.
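Goodstein’s theorem concerns “Goodstein sequences”: write a number in hereditary base-2 notation, replace every 2 with 3 and subtract 1, then repeat with bases 4, 5, and so on. The sketch below, written for this discussion rather than taken from Penrose, computes the first terms of such a sequence. Remarkably, every Goodstein sequence eventually reaches 0, yet that fact cannot be proved by ordinary mathematical induction within the standard axioms of arithmetic.

```python
def bump_base(n, base, new_base):
    """Rewrite n in hereditary base-`base` notation, then replace every
    occurrence of `base` (including inside exponents) with `new_base`."""
    total, exponent = 0, 0
    while n > 0:
        n, digit = divmod(n, base)
        if digit:
            total += digit * new_base ** bump_base(exponent, base, new_base)
        exponent += 1
    return total

def goodstein(n, max_steps):
    """First terms of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    for _ in range(max_steps):
        if seq[-1] == 0:
            break
        seq.append(bump_base(seq[-1], base, base + 1) - 1)
        base += 1
    return seq

print(goodstein(3, 10))  # prints [3, 3, 3, 2, 1, 0] -- reaches 0 quickly
```

Starting at 3 the sequence dies out almost immediately, but starting at 4 it climbs for an astronomically long number of steps before it finally descends to 0, which hints at why finite inductive reasoning cannot capture the theorem.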

Inherent Limitation of Formal Deductive Methods

There was a deeper motivation behind Gödel’s efforts beyond the issues of Principia Mathematica and other more practical formal methods. Like other great mathematicians and logicians of his time, Gödel wanted a better understanding of basic questions about mathematics and logic: What is mathematical truth, and what does it mean to prove it? These questions still remain largely unresolved. Part of the answer came with the discovery that some true statements in mathematical systems cannot be proved by formal deductive methods. An important revelation of Gödel’s achievement is that the notion of proof is weaker than the notion of truth.

Gödel’s proof seems to demonstrate that the human mind can understand certain truths that axiomatic formal systems can never prove. From this, some scientists and philosophers claim that the human mind can never be fully mechanized.

Although Gödel’s Incompleteness Theorem is not well known by the public, it is regarded by scientists and philosophers as one of the greatest discoveries of modern times. The profound importance of Gödel’s work was recognized many years after its publication, as mentioned in Gödel’s Proof: “Gödel was at last recognized by his peers and presented with the first Albert Einstein Award in 1951 for achievement in the natural sciences—the highest honor of its kind in the United States. The award committee, which included Albert Einstein and J. Robert Oppenheimer, described his work as ‘one of the greatest contributions to the sciences in recent times.’”