
Surprising Theorems

Higher order logic is a logic of total functions, which just means that if you apply a function f : A → B to a value of its argument type A, it always produces a value of its result type B. There is no notion of definedness built into the logic, which can cause some confusion when using higher order logic formalizations of mathematical functions that are naturally partial. For example, all of the following are well-typed higher order logic terms:

  • 0 - 1 : natural
  • 1 mod 0 : natural
  • 1 / 0 : real
  • head [] : A

Informally, we call these undefined terms because they don’t have a defined mathematical value, but formally speaking they must all have a value in higher order logic. It is tempting to ask what undefined terms evaluate to, but since we are working in a theorem prover rather than a programming language there isn’t a notion of evaluation. A better question is to ask what properties are true of undefined terms, and the answer is that you can prove many theorems about them:

  • 0 * (0 - 1) = 0
  • (1 mod 0) - (1 mod 0) = 0
  • (0 - 1) + (1 mod 0) = (1 mod 0) + (0 - 1)
  • 2 * (1 / 0) = (1 / 0) + (1 / 0)
  • head [] = head []
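
As an aside, the contrast with a programming language is instructive: in Haskell, where partiality is real, the analogous expressions raise exceptions rather than denoting unknown values. A quick sketch of this (my example, not from the original post):

import Control.Exception (SomeException, evaluate, try)
import Numeric.Natural (Natural)

-- Each expression below throws an exception in Haskell, whereas the
-- corresponding higher order logic term denotes some (unknown) value
-- of its type. (1 / 0 :: Double is different again: it is IEEE
-- Infinity, not an exception.)
main :: IO ()
main = do
  r1 <- try (evaluate (0 - 1 :: Natural)) :: IO (Either SomeException Natural)
  print r1  -- Left arithmetic underflow
  r2 <- try (evaluate (1 `mod` 0 :: Natural)) :: IO (Either SomeException Natural)
  print r2  -- Left divide by zero
  r3 <- try (evaluate (head [])) :: IO (Either SomeException Natural)
  print r3  -- Left Prelude.head: empty list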

Accidental Theorems

Can you spot the common pattern in all the above theorems involving undefined terms? They are all true of any element of the same type as the undefined term:

  • ∀n : natural. 0 * n = 0
  • ∀n : natural. n - n = 0
  • ∀m n : natural. m + n = n + m
  • ∀x : real. 2 * x = x + x
  • ∀x : A. x = x

In other words, proving theorems like this about undefined terms gives us no more information than is already provided by their type. However, depending on how we have formalized the partial mathematical function, we may also be able to prove accidental theorems about undefined terms that give us more information about them. For example, consider formalizing natural number subtraction with the following defining equations:

  • ∀m. m - 0 = m
  • ∀m n. suc m - suc n = m - n

This seems harmless enough: we cannot use these equations to prove that an undefined subtraction term such as 0 - 1 is equal to a particular natural number. However, we can instantiate the second equation to prove that two undefined subtraction terms are equal:

1 - 2 = 0 - 1

This is an accidental theorem, which we can check by generalizing the undefined terms to arbitrary natural numbers:

∀m n : natural. m = n

Since this generalized theorem is clearly false, the accidental theorem must reveal more information about the undefined subtraction terms than is provided by their type.
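
To see concretely where the accidental theorem comes from, here is a rough Haskell transcription of the two defining equations (my sketch, not from the post): the recursive clause relates 1 - 2 to 0 - 1 without ever needing to know what 0 - 1 is.

-- Unary naturals with subtraction defined by the two equations above.
data Nat = Zero | Suc Nat

sub :: Nat -> Nat -> Nat
sub m Zero = m                  -- ∀m. m - 0 = m
sub (Suc m) (Suc n) = sub m n   -- ∀m n. suc m - suc n = m - n
-- No clause covers sub Zero (Suc n): in Haskell that is a runtime
-- pattern-match failure, but in higher order logic sub is total and
-- sub Zero (Suc n) is simply some unspecified Nat. Unfolding the
-- second clause once gives
--   sub (Suc Zero) (Suc (Suc Zero)) = sub Zero (Suc Zero)
-- i.e. 1 - 2 = 0 - 1, whatever those unspecified values are.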

Fortunately, it is usually straightforward to eliminate accidental theorems. In the case of subtraction we simply replace the second defining equation with the following:

∀m n. n ≤ m ⇒ suc m - suc n = m - n

After this change there can be no more accidental theorems about undefined subtraction terms, and we can derive no information about them other than that they have natural number type.

Surprising Theorems

Suppose we have carefully formalized our partial mathematical functions to avoid any accidental theorems. Then we expect every theorem about the mod function to have a side-condition that ensures the second argument is non-zero, such as the following:

  • ∀n a b. ¬(n = 0) ⇒ (a * b) mod n = ((a mod n) * (b mod n)) mod n
  • ∀n a. a < n ⇒ a mod n = a

The reason for this side-condition is that by design we have no information about the value of undefined mod terms, and so to prove properties of mod we have to rule out the case where the second argument is zero. This is satisfying, because the mathematical versions of these theorems have the same side-condition to ensure that all terms are defined.
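
For what it's worth, these side-conditioned theorems transcribe directly into QuickCheck properties, where the same side-condition is also needed to keep the Haskell mod total (a sketch, assuming the Arbitrary instance for Natural provided by recent QuickCheck versions):

import Numeric.Natural (Natural)
import Test.QuickCheck

-- ∀n a b. ¬(n = 0) ⇒ (a * b) mod n = ((a mod n) * (b mod n)) mod n
prop_modMul :: Natural -> Natural -> Natural -> Property
prop_modMul n a b =
  n /= 0 ==> (a * b) `mod` n == ((a `mod` n) * (b `mod` n)) `mod` n

-- ∀n a. a < n ⇒ a mod n = a
prop_modLt :: Natural -> Natural -> Property
prop_modLt n a = a < n ==> a `mod` n == a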

However, there are some surprising theorems in the formalization that do not have this side-condition, such as the following:

∀n a b. (a * n + b) mod n = b mod n

Why doesn’t this theorem need a side-condition that n is non-zero to avoid undefined mod terms? Well, if n is zero, then

a * n + b = a * 0 + b = 0 + b = b

reducing the left-hand side of the surprising theorem to exactly the same term as the right-hand side:

b mod 0 = b mod 0

Since x = x holds for every natural number x, we can prove it for these undefined mod terms even though we have no information about their value.

Is This Shocking?

The existence of surprising theorems is a consequence of formalizing partial mathematical functions in higher order logic, in which there is no notion of definedness and every function is total. Even if we take care to avoid accidental theorems, this fundamental logical mismatch leads to differences between mathematical theorems and their formalized versions.

In most situations this does not cause any problems, and it is merely a curiosity that certain theorems do not have the expected side-conditions. But in some circumstances, such as the need to export formalized proofs to environments that do have a notion of definedness, it may be critical to maintain a close correspondence between the mathematical theorems and their formalized versions. Happily, since the surprising feature of surprising theorems is that they are stronger than expected, we can simply choose to formalize deliberately weaker versions with logically unnecessary side-conditions, such as the following:

∀n a b. ¬(n = 0) ⇒ (a * n + b) mod n = b mod n

In this way we can maintain a close correspondence between the syntax of mathematical theorems and their formalized versions, if that be necessary.
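
As a small check, the deliberately weakened version is exactly the form that can be tested as a QuickCheck property in Haskell, where mod throws on a zero second argument (same assumptions as the earlier property sketch):

import Numeric.Natural (Natural)
import Test.QuickCheck

-- ∀n a b. ¬(n = 0) ⇒ (a * n + b) mod n = b mod n
prop_weakened :: Natural -> Natural -> Natural -> Property
prop_weakened n a b =
  n /= 0 ==> (a * n + b) `mod` n == b `mod` n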


The Slowest Software Development Methodology in the World

For some time now I’ve been practicing what can only be described as the slowest software development methodology in the world: a three-step waltz of Prototyping, Verification and Export.

Prototyping: I begin quite normally by writing some code in Haskell, testing out different ways of solving some problem that I’m interested in. In this step I’m mainly focused on achieving a clean design with reasonable performance, and wherever possible I use existing Haskell packages (but restricted to those that have been developed with the same software development methodology). I also write QuickCheck tests to check various properties of the functions in the design. At this point I’m essentially done with the traditional coding phase, though usually I spend what might be perceived as an excessive amount of time polishing to make the design and properties as simple and clear as possible.
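
As a hypothetical illustration of this step (not the author's actual code), a prototype function together with one of its QuickCheck properties might look like:

import Numeric.Natural (Natural)
import Test.QuickCheck

-- A candidate design: Euclid's algorithm on natural numbers.
euclid :: Natural -> Natural -> Natural
euclid a 0 = a
euclid a b = euclid b (a `mod` b)

-- A property pinning the design to the Prelude's gcd; run it with
-- quickCheck prop_euclid.
prop_euclid :: Natural -> Natural -> Bool
prop_euclid a b = euclid a b == gcd a b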

Verification: Next I use the HOL Light interactive theorem prover to develop a logical theory that (i) defines the types and functions in my Haskell design, and (ii) proves the QuickCheck properties as theorems that follow from the definitions. If the Haskell package depends on other packages that follow the same software development methodology then their contents must also be formalized in theories, and the logical definitions can build on each other in the same way as the Haskell definitions. This step is the most labour-intensive, and often some degree of creativity is required to formulate properties of recursive functions that can be proved by induction. This is where the effort spent polishing the prototype repays itself, since any unnecessary complexity in the design will inevitably thread its way through the proof of many properties.

Export: The final step is to use the opentheory tool to export the logical theory as a Haskell package. I first package up the HOL Light theory in OpenTheory format, by rendering its proof in a low-level article format and adding some meta-data such as the package name and the list of OpenTheory packages it depends on. There is also some Haskell-specific meta-data, such as the mapping from OpenTheory symbol names to Haskell names, and selecting which theorems should be exported as QuickCheck properties. Once that is complete the opentheory tool can automatically export the OpenTheory package as a Haskell package (in a form ready to be uploaded to Hackage).

Why Would You Do That?

At this point you might reasonably ask: why develop a Haskell package, throw it away, and then develop a logical theory only to export the same Haskell package again? Excellent question, and the rest of this post is devoted to answering it, but let’s begin with a depressing truth: it’s not the proof. Despite the countless hours and myriad creative insights you poured into that final Q.E.D., no one wants to see or even hear about your proof. Formal verification proofs are write-only. Sorry about that.

Specification: The proofs might be irrelevant, but the proved theorems constitute a precise specification of the Haskell package, grounded all the way down to standard mathematical objects formalized in higher order logic. I view Haskell as a computational platform for higher order logic (rather than higher order logic as a verification platform for Haskell), a perspective that emphasizes the consistent set of logical theories underlying the Haskell package, which define its semantics. A practical consequence of this is that any differences in the behaviour of the Haskell package and its higher order logic semantics (for example, due to bugs in the export code) must be fixed by changing the Haskell computational platform to conform to its higher order logic semantics.

Let’s take a look at an example specification for an extended GCD function egcd defined in the Haskell package opentheory-divides:

egcd :: Natural -> Natural -> (Natural, (Natural, Natural))

The following theorem about egcd is one of many proved in the corresponding logical theory natural-divides:

∀a b g s t. ¬(a = 0) ∧ egcd a b = (g,(s,t)) ⇒ t * b + g = s * a

This says that the three components of the egcd result (g, s and t) will satisfy a particular equation with its arguments (a and b), so long as the first argument a is non-zero. Side-conditions like this don’t fit naturally into the Haskell type system, and so the burden falls on programmers to discover them and code around them when necessary. This may be one reason why programmers prefer to write their own software components (with side-conditions tailored to the application) rather than reuse existing components (with unknown and/or inconvenient side-conditions). Using this software development methodology I shift the burden of discovering side-conditions to HOL Light, which will force me to prove that the first argument of egcd is non-zero in any code that relies on the above property of the result. If the code just uses the property that the result g of egcd is the greatest common divisor then HOL Light will understand that no side-condition proof is necessary, because another theorem in the logical theory proves that this is true for all arguments to egcd:

∀a b. fst (egcd a b) = gcd a b
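
Both of these theorems transcribe into QuickCheck properties against the exported package (a sketch: the module name OpenTheory.Natural.Divides is my assumption about where opentheory-divides exposes egcd):

import Numeric.Natural (Natural)
import Test.QuickCheck
import OpenTheory.Natural.Divides (egcd)  -- assumed module name

-- ∀a b g s t. ¬(a = 0) ∧ egcd a b = (g,(s,t)) ⇒ t * b + g = s * a
prop_egcdEquation :: Natural -> Natural -> Property
prop_egcdEquation a b =
  a /= 0 ==> let (g, (s, t)) = egcd a b in t * b + g == s * a

-- ∀a b. fst (egcd a b) = gcd a b
prop_egcdGcd :: Natural -> Natural -> Bool
prop_egcdGcd a b = fst (egcd a b) == gcd a b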

Proof: It may be surprising to learn that verified Haskell packages can be useful as building blocks for formalizing mathematics in higher order logic. Once we have defined Haskell types and functions in the logic and proved that they satisfy their specification, we can build on this formal development to prove mathematical theorems that do not refer to any Haskell symbols. To continue the egcd example, suppose we have two natural numbers a and b (with b greater than 1) that are coprime (i.e., gcd a b = 1). Note that a must be non-zero, since gcd 0 b = b, which is greater than 1; so the side-condition of the first property of egcd is satisfied. We can then reduce both sides of that property modulo b to show the existence of a multiplicative inverse for a modulo b:

∀a b. 1 < b ∧ gcd a b = 1 ⇒ ∃s. (s * a) `mod` b = 1

This is a non-trivial higher order logic theorem which refers only to standard mathematical concepts, but its proof rests on the properties of the verified Haskell egcd function.
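
The same reasoning yields a computable witness for s: instantiating t * b + g = s * a with g = 1 gives s * a ≡ 1 (mod b). A sketch of a hypothetical helper built on egcd (same assumed module name as above):

import Numeric.Natural (Natural)
import OpenTheory.Natural.Divides (egcd)  -- assumed module name

-- Multiplicative inverse of a modulo b, for coprime a and b with b > 1.
modInverse :: Natural -> Natural -> Natural
modInverse a b =
  let (_, (s, _)) = egcd a b  -- t * b + 1 = s * a, so s * a ≡ 1 (mod b)
  in s `mod` b

For example, modInverse 3 7 = 5, since 3 * 5 = 15 ≡ 1 (mod 7) and the inverse modulo 7 is unique in the range 0 to 6.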

Maintenance: Software maintenance is a pain, and it is here that this software development methodology really shines (and who knows, maybe it will eventually help the tortoise pass the hare?). Formally verified software offers a new capability: by examining the logical theory of a Haskell package, we can see exactly which properties of its dependencies it relies on. We can use this information to automatically check whether a new version of a dependency will preserve functional correctness: if so then extend the acceptable version range; and if not then report an error to the maintainer. The opentheory tool uses this technique to automatically generate maximal acceptable version ranges for dependencies whenever it exports Haskell packages, removing another cognitive burden from the programmer. Earlier I said that formal verification proofs were write-only, but really all I meant is that humans won’t read them: as this version analysis shows, machines are quite happy to read them and extract useful information.

Fun: Actually I retract the whole depressing truth: it is totally about the proof. Not for showing to people (humans still won’t read formal verification proofs) but rather for the pure fun of writing them. Remember those myriad creative insights you made over the course of countless hours? For me formal verification is as immersive as a video game, and my high scores come with a new artefact to play with and build upon. Free toy inside!