Three Fundamental Limitations of Modern Science (Part 2)

May 29, 2010 Updated: October 1, 2015
APPLICATION OF MATHEMATICS: Undated portrait of Albert Einstein (1879-1955) who was awarded the Nobel Prize for Physics in 1921. (AFP/Getty Images)

Some of the greatest thinkers wanted to determine the nature of mathematical reasoning in order to improve their understanding of the notion of “proof” in mathematics. To that end, they attempted to codify the thought process of human reasoning as it applies to mathematics. They surmised that logic and mathematics are interrelated and that mathematics may be a branch of logic, or vice versa. They thought that the logical, deductive method of geometry might be employed for mathematics as a whole, so that all true statements of a system could be derived from a small set of axioms.
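
To make the idea concrete, a derivation in such an axiomatic system proceeds purely by applying fixed rules of inference to the axioms. As a minimal illustrative sketch (the axioms p and p → q here are assumed for the example, not drawn from any particular system), with modus ponens as the only rule of inference one obtains:

\[
\begin{aligned}
&\text{1. } p && \text{(axiom)}\\
&\text{2. } p \rightarrow q && \text{(axiom)}\\
&\text{3. } q && \text{(modus ponens applied to 1 and 2)}
\end{aligned}
\]

Every theorem of the system is simply a statement reachable from the axioms by finitely many such steps.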

“The axiomatic development of geometry made a powerful impression upon thinkers throughout the ages; for the relatively small number of axioms carry the whole weight of the inexhaustibly numerous propositions derivable from them,” philosopher Dr. Ernest Nagel and mathematician Dr. James R. Newman wrote in their book Gödel’s Proof. “The axiomatic form of geometry appeared to many generations of outstanding thinkers as the model of scientific knowledge at its best.”

Persistent Contradictions in Logic

However, inherent paradoxes were known to exist in logic. And a variety of paradoxes were also discovered in set theory, such as Russell’s paradox. Those paradoxes all have two things in common: self-reference and contradiction. A simple and well-known example is the liar paradox, as in the statement “I always lie.” From such a statement it follows that if I am lying, then I am telling the truth; and if I am telling the truth, then I am lying. The statement can be neither true nor false. It simply does not make sense. After the discovery of paradoxes in set theory, mathematicians suspected that there might be serious imperfections in other branches of mathematics as well.
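
Russell’s paradox has exactly the same self-referential shape. As a sketch in modern notation, write R for the collection of all sets that are not members of themselves; asking whether R belongs to itself then yields a contradiction:

\[
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \;\Longleftrightarrow\; R \notin R.
\]

Just as with the liar, neither answer can stand, so the definition of R cannot be admitted as it is.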

In his book Gödel, Escher, Bach: An Eternal Golden Braid, Dr. Douglas Hofstadter, professor of cognitive science at Indiana University in Bloominton, wrote, “These types of issues in the foundations of mathematics were responsible for the high interest in codifying human reasoning methods which was present in the early part of [the 20th century]. Mathematicians and philosophers had begun to have serious doubts about whether even the most concrete of theories, such as the study of whole numbers (number theory), were built on solid foundations. If paradoxes could pop up so easily in set theory—a theory whose basic concept, that of a set, is surely very intuitively appealing—then might they not also exist in other branches of mathematics?”

Logicians and mathematicians tried to work around these issues. One of the most famous of these efforts was carried out by Alfred North Whitehead and Bertrand Russell in their mammoth work Principia Mathematica. They realized that all of the paradoxes involve self-reference and contradiction, and devised a hierarchical system meant to rule out both. Principia Mathematica had essentially two goals: to provide a complete formal method for deriving all of mathematics from a finite set of axioms, and to be consistent, that is, free of paradoxes.
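
In modern terminology, and as a rough sketch rather than Whitehead and Russell’s own wording, those two goals can be stated as precise properties of a formal system S: S is complete if every statement expressible in it can be settled from the axioms, and consistent if no statement can be both proved and refuted:

\[
\text{Completeness:}\;\; \forall \varphi\;\big( S \vdash \varphi \;\text{ or }\; S \vdash \neg\varphi \big)
\qquad\qquad
\text{Consistency:}\;\; \neg\exists \varphi\;\big( S \vdash \varphi \;\text{ and }\; S \vdash \neg\varphi \big)
\]

These are the properties that the verification effort described below sought to establish.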

At the time, it was unclear whether Russell and Whitehead had really achieved their goals. A lot was at stake. The very foundation of logic and mathematics seemed to be on shaky ground. And there was a great effort, involving leading mathematicians around the world, to verify their work.