Hoare logic (also known as Floyd–Hoare logic or Hoare rules) is a formal system with a set of logical rules for reasoning rigorously about the correctness of computer programs. It was proposed in 1969 by the British computer scientist and logician C. A. R. Hoare, and subsequently refined by Hoare and other researchers.^{[1]} The original ideas were seeded by the work of Robert Floyd, who had published a similar system^{[2]} for flowcharts.
Hoare triple
The central feature of Hoare logic is the Hoare triple. A triple describes how the execution of a piece of code changes the state of the computation. A Hoare triple is of the form

{P} C {Q}
where P and Q are assertions and C is a command.^{[note 1]} P is named the precondition and Q the postcondition: when the precondition is met, executing the command establishes the postcondition. Assertions are formulae in predicate logic.
Hoare logic provides axioms and inference rules for all the constructs of a simple imperative programming language. In addition to the rules for the simple language in Hoare's original paper, rules for other language constructs have been developed since then by Hoare and many other researchers. There are rules for concurrency, procedures, jumps, and pointers.
Partial and total correctness
Using standard Hoare logic, only partial correctness can be proven, while termination needs to be proved separately. Thus the intuitive reading of a Hoare triple is: Whenever P holds of the state before the execution of C, then Q will hold afterwards, or C does not terminate. In the latter case, there is no "after", so Q can be any statement at all. Indeed, one can choose Q to be false to express that C does not terminate.
Total correctness can also be proven with an extended version of the While rule.
In his 1969 paper, Hoare used a narrower notion of termination which also entailed absence of any runtime errors: "Failure to terminate may be due to an infinite loop; or it may be due to violation of an implementation-defined limit, for example, the range of numeric operands, the size of storage, or an operating system time limit."^{[3]}
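The partial-correctness reading of a triple can be tested mechanically on small examples. The following sketch (not from the original sources; the helper name `holds_partially` is ours) models commands as functions on states and checks a triple by brute force over a finite range of integer states:

```python
# A minimal sketch: checking a Hoare triple {P} C {Q} by brute force
# over a small set of integer states. Partial correctness means: if P
# holds before and C terminates, then Q holds afterwards. Here every
# modeled command terminates, so the check is exhaustive on the states.

def holds_partially(pre, command, post, states):
    """Check {pre} command {post} on every tested state where pre holds."""
    for s in states:
        if pre(s):
            result = command(dict(s))  # command returns the new state
            if not post(result):
                return False
    return True

# Example triple: { x + 1 = 43 }  y := x + 1  { y = 43 }
states = [{"x": x, "y": y} for x in range(50) for y in range(50)]
ok = holds_partially(
    pre=lambda s: s["x"] + 1 == 43,
    command=lambda s: {**s, "y": s["x"] + 1},   # y := x + 1
    post=lambda s: s["y"] == 43,
    states=states,
)
print(ok)  # True on all tested states
```

Such a check is of course no proof; the rules below establish triples for all states at once.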
Rules
Empty statement axiom schema
The empty statement rule asserts that the skip statement does not change the state of the program, thus whatever holds true before skip also holds true afterwards.^{[note 2]}

/{P} skip {P}
Assignment axiom schema
The assignment axiom states that, after the assignment, any predicate that was previously true for the right-hand side of the assignment now holds for the variable. Formally, let P be an assertion in which the variable x is free. Then:

/{P[E/x]} x := E {P}
where P[E/x] denotes the assertion P in which each free occurrence of x has been replaced by the expression E.
The assignment axiom scheme means that the truth of P[E/x] is equivalent to the after-assignment truth of P. Thus, if P[E/x] was true prior to the assignment, then by the assignment axiom P will be true afterwards. Conversely, if P[E/x] was false (i.e. ¬P[E/x] true) prior to the assignment, P must be false afterwards.
Examples of valid triples include:


{ x+1 = 43 } y := x + 1 { y = 43 }

{ x + 1 ≤ N } x := x + 1 { x ≤ N }
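The substitution P[E/x] can be computed mechanically: evaluating P in a state where x is replaced by the value of E yields exactly the precondition of the axiom. A small illustrative sketch (the helper name `substitute` is ours, not standard terminology):

```python
# Sketch: the precondition P[E/x] of an assignment x := E, obtained by
# evaluating P in a state where x is replaced by the value of E.

def substitute(P, x, E):
    """Return the assertion P[E/x]: P with x replaced by expression E."""
    return lambda s: P({**s, x: E(s)})

# Postcondition P:  x <= N   (with N fixed to 10 for the check)
N = 10
P = lambda s: s["x"] <= N
E = lambda s: s["x"] + 1          # right-hand side of  x := x + 1
pre = substitute(P, "x", E)       # computes  x + 1 <= N

# {P[E/x]} x := E {P} holds: check on a range of states.
for x in range(-5, 20):
    s = {"x": x}
    if pre(s):                     # precondition x + 1 <= 10
        s2 = {**s, "x": E(s)}      # execute x := x + 1
        assert P(s2)               # postcondition x <= 10
print("assignment axiom verified on tested states")
```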
The assignment axiom scheme is equivalent to saying that, to find the precondition, first take the postcondition and replace all occurrences of the left-hand side of the assignment with the right-hand side of the assignment. Be careful not to apply this backwards, as in the incorrect scheme {P} x:=E {P[E/x]}; this rule leads to nonsensical examples like:

{ x = 5 } x := 3 { 3 = 5 }
Another incorrect rule, tempting at first glance, is {P} x:=E {P and x=E}; it leads to nonsensical examples like:

{ x = 5 } x := x + 1 { x = 5 and x = x + 1 }
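The unsoundness of the backwards rule can be seen by simply running the first nonsensical example. A sketch (ours, for illustration only):

```python
# Why {P} x:=E {P[E/x]} is unsound: instantiating it with P as x = 5 and
# E as 3 yields the triple { x = 5 } x := 3 { 3 = 5 }. The command
# terminates from a state satisfying the precondition, yet the claimed
# postcondition is unsatisfiable.

s = {"x": 5}
assert s["x"] == 5        # precondition x = 5 holds
s["x"] = 3                # execute x := 3
post = (3 == 5)           # claimed postcondition 3 = 5
print(post)               # False: the claimed triple does not hold
```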
While a given postcondition P uniquely determines the precondition P[E/x], the converse is not true. For example:


{ 0 ≤ y*y ∧ y*y ≤ 9 } x := y * y { 0 ≤ x ∧ x ≤ 9 } ,

{ 0 ≤ y*y ∧ y*y ≤ 9 } x := y * y { 0 ≤ x ∧ y*y ≤ 9 } ,

{ 0 ≤ y*y ∧ y*y ≤ 9 } x := y * y { 0 ≤ y*y ∧ x ≤ 9 } , and

{ 0 ≤ y*y ∧ y*y ≤ 9 } x := y * y { 0 ≤ y*y ∧ y*y ≤ 9 }
are valid instances of the assignment axiom scheme.
The assignment axiom proposed by Hoare does not apply when more than one name may refer to the same stored value. For example,

{ y = 3 } x := 2 { y = 3 }
is wrong if x and y refer to the same variable (aliasing), although it is a proper instance of the assignment axiom scheme (with both {P} and {P[2/x]} being {y=3}).
Rule of composition
Hoare's rule of composition applies to sequentially executed programs S and T, where S executes prior to T and is written S;T (Q is called the mid-condition):^{[4]}

{P} S {Q} , {Q} T {R}/ {P} S;T {R}
For example, consider the following two instances of the assignment axiom:

{ x + 1 = 43 } y := x + 1 { y = 43 }
and

{ y = 43 } z := y { z = 43 }
By the sequencing rule, one concludes:

{ x + 1 = 43 } y := x + 1; z := y { z = 43 }
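The sequencing step can be replayed on concrete states. A sketch of the article's example (the helper `seq` is our name for sequential composition):

```python
# Composition rule on the article's example: from
# {x+1=43} y:=x+1 {y=43} and {y=43} z:=y {z=43}, conclude
# {x+1=43} y:=x+1; z:=y {z=43}.

def seq(S, T):
    """Sequential composition: run S, then T on the resulting state."""
    return lambda s: T(S(s))

S = lambda s: {**s, "y": s["x"] + 1}   # y := x + 1
T = lambda s: {**s, "z": s["y"]}       # z := y

for x in range(100):
    s = {"x": x, "y": 0, "z": 0}
    if s["x"] + 1 == 43:               # precondition
        out = seq(S, T)(s)
        assert out["z"] == 43          # postcondition
print("composition rule verified on tested states")
```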
Conditional rule

{B ∧ P} S {Q} , {¬B ∧ P } T {Q}/ {P} if B then S else T endif {Q}
The conditional rule states that a postcondition Q common to the then and else parts is also a postcondition of the whole if...endif statement. In the then and the else part, the unnegated and negated condition B can be added to the precondition P, respectively. The condition B must not have side effects. An example is given in the next section.
This rule was not contained in Hoare's original publication.^{[1]} However, since a statement

if B then S else T endif
has the same effect as a one-time loop construct

bool b:=true; while B∧b do S; b:=false done; b:=true; while ¬B∧b do T; b:=false done
the conditional rule can be derived from the other Hoare rules. In a similar way, rules for other derived program constructs, like the for loop, do...until loop, switch, break, and continue, can be reduced by program transformation to the rules from Hoare's original paper.
Consequence rule

P_{1} → P_{2} , {P_{2}} S {Q_{2}} , Q_{2} → Q_{1}/ {P_{1}} S {Q_{1}}
This rule allows one to strengthen the precondition and/or to weaken the postcondition. It is used, for example, to achieve literally identical postconditions for the then and the else parts.
For example, a proof of

{0 ≤ x ≤ 15 } if x < 15 then x := x + 1 else x := 0 endif {0 ≤ x ≤ 15 }
needs to apply the conditional rule, which in turn requires proving

{ 0 ≤ x ≤ 15 ∧ x < 15 } x:=x+1 { 0 ≤ x ≤ 15 }, or simplified

{0 ≤ x < 15 } x:=x+1 {0 ≤ x ≤ 15 }
for the then part, and

{0 ≤ x ≤ 15 ∧ x ≥ 15} x:=0 {0 ≤ x ≤ 15}, or simplified

{x=15} x:=0 {0 ≤ x ≤ 15 }
for the else part.
However, the assignment rule for the then part requires choosing P as 0 ≤ x ≤ 15; rule application hence yields

{0 ≤ x+1 ≤ 15} x:=x+1 {0 ≤ x ≤ 15}, which is logically equivalent to

{−1 ≤ x < 15} x:=x+1 {0 ≤ x ≤ 15}.
The consequence rule is needed to strengthen the precondition {−1 ≤ x < 15} obtained from the assignment rule to {0 ≤ x < 15} required for the conditional rule.
Similarly, for the else part, the assignment rule yields

{0 ≤ 0 ≤ 15} x:=0 {0 ≤ x ≤ 15}, or equivalently

{true} x:=0 {0 ≤ x ≤ 15},
hence the consequence rule has to be applied with P_{1} and P_{2} being {x=15} and {true}, respectively, to strengthen the precondition again. Informally, the effect of the consequence rule is to "forget" that x=15 is known at the entry of the else part, since the assignment rule used for the else part doesn't need that information.
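The overall proof goal of this section can be cross-checked by exhaustive execution on a small range, a sanity check rather than a proof (the function name `program` is ours):

```python
# Exhaustive check of the section's proof goal:
# {0 <= x <= 15} if x < 15 then x := x + 1 else x := 0 endif {0 <= x <= 15}

def program(x):
    """The conditional statement from the example, as a function on x."""
    return x + 1 if x < 15 else 0

for x in range(-5, 25):
    if 0 <= x <= 15:                   # precondition
        assert 0 <= program(x) <= 15   # postcondition
print("conditional example verified on tested states")
```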
While rule

{P ∧ B} S {P}/ {P} while B do S done {¬B ∧ P}
Here P is the loop invariant, which is to be preserved by the loop body S. After the loop is finished, this invariant P still holds, and moreover ¬B must have caused the loop to end. As in the conditional rule, B must not have side effects.
For example, a proof of

{x ≤ 10} while x < 10 do x := x + 1 done {¬x < 10 ∧ x ≤ 10}
by the while rule requires proving

{x ≤ 10 ∧ x < 10} x := x + 1 {x ≤ 10 }, or simplified

{x < 10} x := x + 1 {x ≤ 10},
which is easily obtained by the assignment rule. Finally, the postcondition {¬x<10 ∧ x≤10} can be simplified to {x=10}.
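The two ingredients of this while-rule application, invariant preservation and the final state, can be checked by running the loop (a sketch, ours):

```python
# Checking the while-rule obligations for the loop
# {x <= 10} while x < 10 do x := x + 1 done {not(x < 10) and x <= 10}.

# Obligation: the invariant x <= 10 is preserved by the body whenever
# the guard x < 10 also holds, i.e. {x <= 10 and x < 10} x:=x+1 {x <= 10}.
for x in range(-5, 11):
    if x <= 10 and x < 10:
        assert x + 1 <= 10

# Sanity check (not itself a rule premise): running the loop from any
# state satisfying the invariant ends with x = 10.
for x0 in range(-5, 11):
    x = x0
    while x < 10:
        x = x + 1
    assert not (x < 10) and x <= 10   # simplifies to x = 10
print("while example verified on tested states")
```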
For another example, the while rule can be used to formally verify the following strange program to compute the exact square root x of an arbitrary number a, even if x is an integer variable and a is not a square number:

{true} while x*x ≠ a do skip done {x * x = a ∧ true}
After applying the while rule with P being true, it remains to prove

{true ∧ x*x ≠ a} skip {true},
which follows from the skip rule and the consequence rule.
In fact, the strange program is partially correct: if it happens to terminate, it is certain that x must have contained (by chance) the value of a's square root. However, it is not totally correct, since it obviously will not terminate under almost all circumstances.
While rule for total correctness
If the above ordinary while rule is replaced by the following one, the Hoare calculus can also be used to prove total correctness, i.e. termination^{[note 3]} as well as partial correctness. Commonly, square brackets are used here instead of curly braces to indicate the different notion of program correctness.

< is a well-founded ordering on the set D , [P ∧ B ∧ t ∈ D ∧ t = z] S [P ∧ t ∈ D ∧ t < z] / [P ∧ t ∈ D] while B do S done [¬B ∧ P ∧ t ∈ D]
In this rule, in addition to maintaining the loop invariant, one also proves termination by way of an expression t, called the loop variant, whose value strictly decreases with respect to a well-founded relation < on some domain set D during each iteration. Since < is well-founded, a strictly decreasing chain of members of D can have only finite length, so t cannot keep decreasing forever. (For example, the usual order < is well-founded on the set of positive integers, but neither on the set of all integers nor on the set of positive real numbers; all these sets are meant in the mathematical, not in the computing, sense; in particular, they are all infinite.)
Given the loop invariant P, the condition B must imply that t is not a minimal element of D, for otherwise the body S could not decrease t any further, i.e. the premise of the rule would be false. (This rule is one of various formulations of total correctness.)^{[note 4]}
Resuming the first example of the previous section, for a totalcorrectness proof of

[x ≤ 10] while x < 10 do x:=x+1 done [¬ x < 10 ∧ x ≤ 10]
the while rule for total correctness can be applied with e.g. D being the non-negative integers with the usual order, and the expression t being 10 − x, which then in turn requires proving

[x ≤ 10 ∧ x < 10 ∧ 10−x ≥ 0 ∧ 10−x = z] x:= x+1 [x ≤ 10 ∧ 10−x ≥ 0 ∧ 10−x < z]
Informally speaking, we have to prove that the distance 10−x decreases in every loop cycle, while it always remains non-negative; this process can go on only for a finite number of cycles.
The previous proof goal can be simplified to

[x < 10 ∧ 10−x = z] x:=x+1 [x ≤ 10 ∧ 10−x < z],
which can be proven as follows:

[x+1 ≤ 10 ∧ 10−x−1 < z] x:=x+1 [x ≤ 10 ∧ 10−x < z] is obtained by the assignment rule, and

[x+1 ≤ 10 ∧ 10−x−1 < z] can be strengthened to [x < 10 ∧ 10−x = z] by the consequence rule.
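The variant obligation, that t = 10 − x strictly decreases and stays in the domain, can likewise be checked by running the body on all states satisfying the invariant and the guard (an illustrative sketch, ours):

```python
# Checking the total-correctness obligation for the loop with variant
# t = 10 - x over the non-negative integers: each execution of the body
# x := x + 1 strictly decreases t and keeps it in the domain.

for x in range(-5, 10):        # tested states satisfying the guard x < 10
    if x <= 10 and x < 10:     # invariant and guard
        z = 10 - x             # variant value before the body (t = z)
        x2 = x + 1             # execute x := x + 1
        t2 = 10 - x2           # variant value after the body
        assert x2 <= 10        # invariant maintained
        assert t2 >= 0         # variant stays in the domain
        assert t2 < z          # variant strictly decreased
print("variant obligation verified on tested states")
```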
For the second example of the previous section, of course no expression t can be found that is decreased by the empty loop body, hence termination cannot be proved.
Notes

^ Hoare originally wrote "P {C} Q" rather than "{P} C {Q}".

^ This article uses a natural deduction style notation for rules. For example, α , β/φ informally means "If both α and β hold, then also φ holds"; α and β are called antecedents of the rule, φ is called its succedent. A rule without antecedents is called an axiom, and written as / φ .

^ "Termination" here is meant in the broader sense that computation will eventually be finished; it does not imply that no limit violation (e.g. zero divide) can stop the program prematurely.

^ Hoare's 1969 paper didn't provide a total-correctness rule; cf. his discussion on p. 579 (top left). For example, Reynolds' textbook (John C. Reynolds (2009). Theory of Programming Languages. Cambridge University Press.), Sect. 3.4, p. 64, gives the following version of a total-correctness rule: P ∧ B ⇒ 0≤t , [P ∧ B ∧ t=z] S [P ∧ t<z] / [P] while B do S done [P ∧ ¬B], where z is an integer variable that doesn't occur free in P, B, S, or t, and t is an integer expression (Reynolds' variables renamed to fit this article's settings).
References

^ ^{a} ^{b} Hoare, C. A. R. (October 1969). "An axiomatic basis for computer programming". Communications of the ACM. 12 (10): 576–580.

^ R. W. Floyd. "Assigning meanings to programs." Proceedings of the American Mathematical Society Symposia on Applied Mathematics. Vol. 19, pp. 19–31. 1967.

^ Hoare (1969), p. 579 upper left.

^ Huth, Michael; Ryan, Mark. Logic in Computer Science (second ed.). CUP. p. 276.
Further reading

Robert D. Tennent. Specifying Software (a textbook that includes an introduction to Hoare logic, written in 2002). ISBN 0-521-00401-2