This is a followup to an earlier post where I talked about Gödel’s First Incompleteness Theorem. Here, I discuss the Second Incompleteness Theorem, and further implications.
Could you remind me what the theorem was?
The theorem states that a consistent formal system cannot prove its own consistency.
As previously discussed, there are a couple qualifiers. The formal system must include some amount of arithmetic, and must have a computable set of axioms.
What does consistency mean?
A system is consistent if it cannot prove any contradictions. A system is inconsistent if it can prove a contradiction.
Contradictions sound bad. Are they bad?
Yes. The Explosion Principle states that if you can prove a direct contradiction, then you can prove absolutely any statement.
Here’s how the Explosion Principle works. Suppose A and not-A are both provable, and consider any statement B. “(A implies B) or (not-A implies B)” is a tautology. Since we can prove both A and not-A, either disjunct lets us conclude B: if “A implies B” holds, we combine it with A; if “not-A implies B” holds, we combine it with not-A. Following the same procedure, we can also prove not-B.
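To see that the key step really is a tautology, here is a quick brute-force check in Python. This is only an illustration of the truth table, not part of any formal proof:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Check "(A implies B) or (not-A implies B)" under all four truth assignments.
for A, B in product([True, False], repeat=2):
    assert implies(A, B) or implies(not A, B)

print("tautology confirmed for all truth assignments")
```

Since the disjunction holds no matter what truth values A and B take, a system that proves both A and not-A can use it to reach any B whatsoever.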
I once heard Gödel’s theorem stated as “A system of logic can prove its consistency if and only if it is inconsistent.” Is that right?
Yeah, that’s another equivalent way to state the theorem. If a system of logic is inconsistent, then by the Explosion Principle, it can prove everything, including its own consistency. If it is consistent, then it cannot prove its own consistency.
So how do you prove Gödel’s Second Incompleteness Theorem?
The First Incompleteness Theorem says that if a logical system is consistent, then Gödel’s statement (“This statement cannot be proven”) is true.
Now suppose that the logical system can prove its own consistency. The argument above can itself be formalized within the system, so the system can then prove Gödel’s statement. But Gödel’s statement says precisely that it cannot be proven, so the system also proves that it cannot prove Gödel’s statement, which is a contradiction. Therefore, a consistent logical system cannot prove its own consistency.
Does this mean we will never prove the consistency of math?
Well, it’s not quite as bad as it sounds. A consistent system cannot prove its own consistency. But you can prove the consistency of a system by using a second, larger system. For example, the consistency of arithmetic has been proven by set theory. The issue is that now set theory cannot prove its own consistency. You need a yet larger system to prove the consistency of set theory.
Can I add an axiom asserting that the system is consistent?
Yes, but it backfires. If the new axiom asserts the consistency of the original system, the enlarged system still cannot prove its own consistency. And if the axiom self-referentially asserts the consistency of the enlarged system itself, then by Löb’s theorem the system actually becomes inconsistent, so by the Explosion Principle you could then prove anything, including that the system is inconsistent.
Going back to an earlier point, you said you would discuss what would happen if we added an axiom stating Gödel’s unprovable statement is false.
Right. So if Gödel’s unprovable statement is false, that means there exists a proof of the unprovable statement. How can that be?
To understand this, we need to understand how “provability” is defined. We have some function Proof(x,y), which means that Statement x is a proof of Statement y. Statement y is “provable” if there exists some x such that Proof(x,y).
So, let g be the Gödel number of Gödel’s statement. We know that Proof(0,g) is false, Proof(1,g) is false, Proof(2,g) is false, and so on. And yet, according to our new axiom, there exists some x such that Proof(x,g) is true. This x cannot be any ordinary natural number, so the axiom in effect asserts that there is a number greater than all of the ordinary natural numbers.
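This existential reading of provability can be sketched in Python. Everything here is a toy: check_proof is a made-up stand-in for the real Proof(x,y) relation, and the search limit exists only so the sketch terminates.

```python
from itertools import count

def check_proof(x, y):
    """Made-up stand-in for Proof(x, y): does the object coded by x
    prove the statement coded by y? Faked here with a tiny lookup table."""
    fake_proofs = {(7, 3): True}   # pretend 7 encodes a proof of statement 3
    return fake_proofs.get((x, y), False)

def provable(y, search_limit=1000):
    """Statement y is provable iff there EXISTS some x with Proof(x, y).
    A real search has no upper bound, which is why provability is only
    semi-decidable: finding a proof confirms provability, but no finite
    amount of searching can rule a proof out."""
    for x in count():
        if x > search_limit:
            return False   # a real searcher could never take this step
        if check_proof(x, y):
            return True

print(provable(3))   # True: the search finds the (fake) proof coded by 7
print(provable(4))   # False, but only because we gave up at the limit
```

The new axiom asserts that the search for a proof of g succeeds even though it fails at 0, at 1, at 2, and at every other ordinary number, which is why it forces a witness beyond all of them.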
Doesn’t that contradict the axiom of induction?
The axiom of induction states that for some predicate P(x), if you can prove P(0) and that P(k) implies P(k+1), then you can prove P(x) for all x. Basically, the axiom of induction says that proof by induction works.
But there are actually multiple forms of the axiom of induction. You could say “For all predicates P, proof by induction works” as a single axiom (the second-order Peano axioms). Or you could take “For P, proof by induction works” as a separate axiom for each predicate P (the first-order Peano axioms, where induction is an axiom schema). It actually makes a difference, because any formal proof is finite in length, and therefore can only refer to a finite number of axioms.
Using the first-order axioms, you can produce non-standard number systems. Using the second-order axioms, my understanding is you can produce a formally consistent logic, but it does not correspond to any number system at all.
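The induction principle itself can be seen in action in a proof assistant. Here is a minimal sketch in Lean 4 (the name zero_add' is just a label chosen to avoid clashing with the standard library): it proves 0 + n = n, which, unlike n + 0 = n, is not true by definition in Lean and genuinely requires induction.

```lean
-- Proof by induction that 0 + n = n, written as structural recursion:
-- the case for n + 1 appeals to the case for n, exactly the
-- "P(k) implies P(k+1)" step of the induction axiom.
theorem zero_add' : ∀ n : Nat, 0 + n = n
  | 0 => rfl                                        -- base case: P(0)
  | Nat.succ k => congrArg Nat.succ (zero_add' k)   -- step: P(k) implies P(k+1)
```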
If Gödel’s statement is false, isn’t the system inconsistent?
The First Incompleteness Theorem says that if the system is consistent, then Gödel’s statement is true. But if we have an axiom saying Gödel’s statement is false, then this appears to require that the system is inconsistent.
And indeed, the theory we get by adding that axiom is inconsistent, in a certain sense. There are at least two distinct notions of consistency. Regular consistency means you can’t prove a direct contradiction. ω-consistency means that you also can’t prove a certain class of indirect contradictions. For example, suppose a system proves that P(0) is false, that P(1) is false, that P(2) is false, and so on for every individual number, and yet also proves that there exists some x such that P(x) is true. That is an indirect contradiction.
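What such an indirect contradiction looks like can be sketched with a toy model in Python. This is purely illustrative: the domain here is the ordinary numbers 0 through 9 plus one made-up non-standard element, whereas a real non-standard model contains infinitely many standard and non-standard numbers.

```python
# Toy stand-in for a non-standard model. "omega" plays the role of a
# hypothetical number larger than every ordinary natural number.
standard = list(range(10))   # stand-in for the ordinary numbers 0, 1, 2, ...
omega = float("inf")         # made-up non-standard element
domain = standard + [omega]

def P(x):
    """Toy predicate, true only of the non-standard element."""
    return x == omega

# P(0), P(1), P(2), ... are all false...
assert not any(P(n) for n in standard)

# ...and yet "there exists an x such that P(x)" holds over the full domain.
assert any(P(x) for x in domain)

print("omega-inconsistency pattern demonstrated")
```

No single false instance P(n) ever clashes with the existential claim, which is why this counts as only an indirect contradiction rather than a direct one.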
Most of my sources aren’t clear about which kind of consistency they’re talking about at any given moment. I myself have lots more questions about how this works exactly.
So let me get this straight. There might be numbers greater than all natural numbers?
We need to sort out the difference between a “theory” and a “model”. In math, a theory is a set of axioms and inference rules, from which you can prove many theorems. A model is an interpretation that makes those axioms true. In a model, every statement is simply true or false; there is no distinction between “axioms” and “theorems”.
Natural numbers are a model, and in this model, we have only the numbers we want to have, no more or less. We wish to come up with a theory that describes that model. Unfortunately, the Incompleteness Theorems say that there is no theory that can completely describe the natural numbers. We run into at least one of the following problems:
1. The theory is inconsistent. The theory proves every true fact about the natural numbers, but also proves every false fact.
2. The theory is incomplete. Some facts can’t be proven, and the theory has non-standard interpretations.
3. The theory is incomputable, aka “ineffective”. The set of axioms is not computable, so some things can be “proven” even though there is no algorithm to verify the proofs.
Of these three problems, #2 seems to be the least bad. Unfortunately the Second Incompleteness Theorem says you can’t really determine if that’s the problem you have.