Higher order logic is a logic of total functions, which just means that if you apply a function f : A → B to a value of its argument type A, it always produces a value of its result type B. There is no notion of definedness built into the logic, which can cause some confusion when using higher order logic formalizations of mathematical functions that are naturally partial. For example, all of the following are well-typed higher order logic terms:
0 - 1 : natural
1 mod 0 : natural
1 / 0 : real
head [] : A
Informally, we call these undefined terms because they don’t have a defined mathematical value, but formally speaking they must all have a value in higher order logic. It is tempting to ask what undefined terms evaluate to, but since we are working in a theorem prover rather than a programming language there isn’t a notion of evaluation. A better question is to ask what properties are true of undefined terms, and the answer is that you can prove many theorems about them:
0 * (0 - 1) = 0
(1 mod 0) - (1 mod 0) = 0
(0 - 1) + (1 mod 0) = (1 mod 0) + (0 - 1)
2 * (1 / 0) = (1 / 0) + (1 / 0)
head [] = head []
Can you spot the common pattern in all the above theorems involving undefined terms? They are all true of any element of the same type as the undefined term:
∀n : natural. 0 * n = 0
∀n : natural. n - n = 0
∀m n : natural. m + n = n + m
∀x : real. 2 * x = x + x
∀x : A. x = x
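As a sketch of how mechanical this is, here are three of the generalizations stated and proved in Lean 4 (an assumption of this sketch: the HOL type natural becomes Lean's Nat, the type variable A becomes α, and the lemma names are taken from Lean's standard library):

```lean
-- Each theorem about an undefined term is an instance of a theorem
-- that holds for every element of its type, and so tells us nothing
-- about the undefined term beyond its type.
example (n : Nat) : 0 * n = 0 := Nat.zero_mul n
example (m n : Nat) : m + n = n + m := Nat.add_comm m n
example {α : Type} (x : α) : x = x := rfl
```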
In other words, proving theorems like this about undefined terms gives us no more information than is already provided by their type. However, depending on how we have formalized the partial mathematical function, we may also be able to prove accidental theorems about undefined terms that give us more information about them. For example, consider formalizing natural number subtraction with the following defining equations:
∀m. m - 0 = m
∀m n. suc m - suc n = m - n
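On defined inputs the two equations compute subtraction by rewriting. For example, writing 3 as suc 2 and 1 as suc 0:

```latex
\begin{align*}
3 - 1 &= \mathsf{suc}\,2 - \mathsf{suc}\,0 \\
      &= 2 - 0 && \text{second equation, with } m := 2,\ n := 0 \\
      &= 2     && \text{first equation, with } m := 2
\end{align*}
```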
This seems harmless enough: we cannot use these equations to prove that an undefined subtraction term such as 0 - 1 is equal to a particular natural number. However, we can instantiate the second equation with m := 0 and n := 1 to prove that two undefined subtraction terms are equal:
1 - 2 = 0 - 1
This is an accidental theorem, which we can check by replacing the undefined terms with arbitrary natural numbers:
∀m n : natural. m = n
Since this generalized theorem is clearly false, it demonstrates that the accidental theorem reveals more information about the undefined subtraction terms than is provided by their type alone.
Fortunately, it is usually straightforward to eliminate accidental theorems. In the case of subtraction we simply replace the second defining equation with the following:
∀m n. n ≤ m ⇒ suc m - suc n = m - n
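As a sketch, the guarded equation can be stated over Lean's built-in natural subtraction (assumptions of this sketch: Lean writes suc m as m + 1, and Lean itself resolves the partiality differently, by truncating subtraction at zero, so the hypothesis n ≤ m is not actually needed there; it mirrors the HOL formalization discussed here):

```lean
-- The guarded defining equation: the side-condition n ≤ m ensures
-- that both subtraction terms are defined.
example (m n : Nat) (h : n ≤ m) : (m + 1) - (n + 1) = m - n :=
  Nat.succ_sub_succ m n
```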
After this change there can be no more accidental theorems about undefined subtraction terms, and we can derive no information about them other than that they have natural number type.
Suppose we have carefully formalized our partial mathematical functions to avoid any accidental theorems. Then we expect every theorem about the mod function to have a side-condition that ensures the second argument is non-zero, such as the following:
∀n a b. ¬(n = 0) ⇒ (a * b) mod n = ((a mod n) * (b mod n)) mod n
∀n a. a < n ⇒ a mod n = a
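As a sketch, here are the two theorems stated in Lean 4 with their definedness side-conditions (an assumption of this sketch: in Lean the hypothesis is not actually required, because Lean makes mod total by defining a % 0 = a; the guard mirrors the HOL formalization discussed here, and the lemma names are from Lean's standard library):

```lean
-- The mod theorems, guarded by the side-condition that rules out
-- an undefined second argument.
example (n a b : Nat) (h : ¬(n = 0)) :
    (a * b) % n = ((a % n) * (b % n)) % n := Nat.mul_mod a b n
example (n a : Nat) (h : a < n) : a % n = a := Nat.mod_eq_of_lt h
```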
The reason for this side-condition is that by design we have no information about the value of undefined mod terms, and so to prove properties of mod we have to rule out the case where the second argument is zero. This is satisfying, because the mathematical versions of these theorems have the same side-condition to ensure that all terms are defined.
However, there are some surprising theorems in the formalization that do not have this side-condition, such as the following:
∀n a b. (a * n + b) mod n = b mod n
Why doesn’t this theorem need a side-condition that n is non-zero to avoid undefined mod terms? Well, if n is zero, then
a * n + b = a * 0 + b = 0 + b = b
reducing the left-hand side of the surprising theorem to exactly the same form as the right-hand side:
b mod 0 = b mod 0
Since this is true of every natural number, we can prove it for these undefined mod terms even though we have no information about their value.
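As a sketch, the surprising theorem goes through in Lean 4 with no side-condition at all (the lemma names are taken from Lean's standard library, modulo exact naming):

```lean
-- No hypothesis n ≠ 0 is needed: when n = 0 both sides collapse
-- to b % 0, exactly as argued above.
example (n a b : Nat) : (a * n + b) % n = b % n := by
  rw [Nat.add_comm, Nat.add_mul_mod_self_right]
```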
Is This Shocking?
The existence of surprising theorems is a consequence of formalizing partial mathematical functions in higher order logic, in which there is no notion of definedness and every function is total. Even taking care to avoid accidental theorems, this fundamental logical mismatch leads to differences between mathematical theorems and their formalized versions.
In most situations this does not cause any problems, and it is merely a curiosity that certain theorems do not have the expected side-conditions. But in some circumstances, such as the need to export formalized proofs to environments that do have a notion of definedness, it may be critical to maintain a close correspondence between the mathematical theorems and their formalized versions. Happily, since the surprising feature of surprising theorems is that they are stronger than expected, we can simply choose to formalize deliberately weaker versions with logically unnecessary side-conditions, such as the following:
∀n a b. ¬(n = 0) ⇒ (a * n + b) mod n = b mod n
In this way we can maintain a close correspondence between the syntax of mathematical theorems and their formalized versions, if that be necessary.