




PREREQUISITE: It is recommended that you watch Part 03 before watching this video!

Index:

  • Clip 01 (1:13)
    • Intro
      • [Hello - but no audience photo!]
  • Clip 02 (3:15)
    • Intro
      • Upgrading from PROP to FOLEQ
  • Clip 03 (15:00)
    • First-order logic with equality - Syntax
  • Clip 04 (24:25)
    • First-order logic with equality - Semantics
  • Clip 05 (21:23)
    • First-order logic with equality - Proof rules
  • Clip 06 (2:50) 
    • First-order logic with equality - Soundness & completeness
  • Clip 07 (9:27)
    • First-order logic with equality - Theorem-proving
  • Clip 08 (1:57)
    • Outro
      • Homework!

Clip 01:


Transcript 01:

Aaron Turner:

So, hi, I'm Daddy Bear, or you can call me Aaron. I am surprisingly nervous, even though there is literally nobody here. Normally I'd take an audience photo at this point, but because of the Coronavirus, I'm unable to book the room in the school that I normally hire. So I'm making this in my illustrious office, and I need to keep up the schedule of one talk per month. So I'm going to try to make a video every month, regardless of what happens. So this is a very long presentation. I've changed this slide to say up to 20 or 30 hours, over 20 talks, because I realize now that's how long it's going to take. And that's why I need to keep up with the schedule. Armageddon notwithstanding. So, okay. Normally people would interrupt me, but there is no audience, no studio audience.



Clip 02:


Transcript 02:

Aaron Turner:

Okay, so here we go. In the last talk we looked at propositional logic, PROP, and that has given us a simple way to represent Big Mother's beliefs as wffs A, B, C. You'll see on this diagram of the machine architecture wffs A, B1 to Bn, and C. They are all wffs of some logic; at the moment it's PROP, and so we can represent the machine's belief system using PROP wffs. We also now have a way to perform deduction. That's our first cognitive process. So last time we looked at the natural deduction proof system and natural deduction theorem-proving. So yeah, we've basically got the first two very basic components of our machine: a representation mechanism for beliefs, and a method of implementing our first cognitive process, which is deduction.

Aaron Turner:
So, I'm just getting comfy [kicking my flip flops off!] Right now, as a representation mechanism, propositional logic is an extremely blunt tool. We can form conjunctions over finite sets of wffs, w1, w2, to wn. So we can "and" all those together. We can form disjunctions over finite sets of wffs. So we can "or" all those together. But we can't extend such assertions to infinite sets of wffs, which means we can't make universal statements about infinite sets. So for example, what if we wanted to assert this: for all real numbers x and y, x plus y equals y plus x? This is a standard mathematical result. It's the commutativity of addition. So it doesn't matter which order you put the x and y in (the two parameters to addition), but we can't say that. We can't express that in PROP because we can't talk about infinite sets.

Aaron Turner:

The set of real numbers is an infinite set. So we're going to remedy this deficiency by adding universal and existential quantifiers which are written as this upside down A and backwards E and they will denote "for all" and "there exists". This should become clearer as we proceed. Now, things are going to get considerably more complex as a result of these enhancements, but in the end it'll be totally worth it due to the expressive power that we will gain.



Clip 03:


Transcript 03:

Aaron Turner:

So this is the outline of today's talk. As we did with PROP, we're going to look first at the syntax of FOLEQ (first order logic with equality). Then we're going to look at its semantics. Then we're going to look at its proof rules. We're going to look at soundness and completeness. Finally, we're going to end talking a little bit about theorem proving in FOLEQ rather than in PROP. So again as before, as we did with propositional logic, we're going to look at the alphabet. Now, PROP's alphabet comprises these symbols. This is exactly the same as last month's talk. The lower case Latin letters a to z, the underscore, the digits nought to nine, left and right parentheses, the comma, these strange symbols for and [∧], or [∨], not [¬], implies [⇒], and equivalence [⇔]. Now FOLEQ's alphabet is just an extension. So we start with the alphabet for PROP and we add the universal and existential quantifiers (that funny upside down A [∀] and backwards E [∃]), the colon, and the equality symbol.

Aaron Turner:
Because this is first order logic with equality that we're looking at. Analogous to PROP, a formula of FOLEQ is a possibly empty finite sequence of symbols taken from the above (combined, extended) alphabet. So to define the wffs of FOLEQ, we start with the set of all PROP wffs, the propositional logic wffs there. That's how we described them recursively last month. So we replace the proposition symbol with the relation symbol, and I'll just go back and show you. So look at the top one here. Proposition symbol, and we just replace it with relation symbol, and because the relation symbol is followed by a comma separated list of terms in parentheses, we also have to define what terms are and what variables are.

Aaron Turner:
So yeah that's part of it. We also add the universal and existential quantifiers. So we have two more wff forms at the bottom. Then finally we add equality as the last wff form. So the set of all FOLEQ wffs is defined recursively using that syntax. It's exactly the same as PROP, conceptually, it's just a little bit more complicated. But the underlying concepts are the same.

Aaron Turner:
So here's some examples, function symbols and relation symbols. So these are all lowercase names and actually whenever you use a lower case name in a FOLEQ wff, we'll be able to tell which it is. We'll be able to tell if it's a variable, a function symbol, or a relation symbol just by the context and hopefully that will become clear. Some example terms. There are two forms of terms, the variable form and the function symbol form. So this is the variable form and again, they're just lower case names. The function symbol form is again a lowercase name, but this time you can tell the difference because it's followed by a parenthesis that actually encloses a list of terms. So there's some simple examples there. [EDIT: Note that a function symbol may be followed by an empty set of parentheses. Such "nullary" functions (i.e. taking zero arguments) effectively denote constants. So sometimes we will talk about "the constant a()" or "the constant k()" etc. But these are just nullary function symbols.]

Aaron Turner:
Okay. Some example wffs. The relation symbol form, which again just looks like a propositional symbol [i.e. a lowercase name], but again there's a comma separated list of terms in parentheses after it. [EDIT: And again, relation symbols may be nullary, i.e. followed by an empty set of parentheses.] You can see there that they're nested. The universal quantifier looks like this. This actually looks quite complicated, but it's not. It's just ∀ and then some name and then a colon and then a wff, all in brackets. And the existential quantification form is similar. We just replace the ∀ symbol with a ∃ symbol. And finally the equality form is actually really simple. It is just one term, which you can see there it's quite big on the left, equals another term, and there's an even more complicated term on the right. So it's term = term. And we can do combinations of these things. So here we have ∀ x, ∀ y: if x is real and y is real, then x + y = y + x. Now this has exactly the form that we need in order to express the commutativity of addition that we mentioned earlier. So even though it's only syntax at the moment, we haven't defined any meaning, you can see we're going in the right direction in that respect.
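
[EDIT: For any programmers watching, here's a minimal Python sketch of one way to represent FOLEQ syntax as data. The tagged-tuple encoding, and symbol names like real and add, are my own inventions for illustration, not anything from the slides:]

    # Terms:  ("var", name)            a variable
    #         ("fun", name, [t, ...])  a function symbol applied to terms
    # Wffs:   ("rel", name, [t, ...])  a relation symbol applied to terms
    #         ("and", [w, ...]), ("or", [w, ...]), ("not", w),
    #         ("implies", w1, w2), ("iff", w1, w2),
    #         ("forall", v, w), ("exists", v, w), ("eq", t1, t2)

    x, y = ("var", "x"), ("var", "y")

    # (∀ x : (∀ y : (∧(real(x), real(y)) ⇒ (add(x, y) = add(y, x)))))
    commutativity = ("forall", "x",
                     ("forall", "y",
                      ("implies",
                       ("and", [("rel", "real", [x]), ("rel", "real", [y])]),
                       ("eq", ("fun", "add", [x, y]),
                              ("fun", "add", [y, x])))))

    # A nullary function symbol, i.e. a constant, is just ("fun", "a", [])
    a = ("fun", "a", [])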

Aaron Turner:
So there are some complications with FOLEQ wffs that we didn't have with PROP because FOLEQ wffs have much more structure. So every instance of a variable in a FOLEQ wff or term is either free or bound. Variables are bound by the quantifiers [∀ and ∃]. Any instance of a variable that is not bound by a quantifier is deemed free. There you go, free and bound. So if V is a variable and P is a wff, then in the wff (∀ V : P), V is called a dummy variable, and every free instance of V in P (if you just look at P by itself to see what the free variables are), every free instance of V in P then becomes bound, basically by that ∀ quantifier. Similarly, it's exactly the same for the existential quantifier [∃]. So here's examples. Okay? So in the wff p(y), p is a relation symbol and y is a free variable.

Aaron Turner:
Okay? There's no quantifier binding it [y]. So in the wff (∀ y : p(y)), the first y after the ∀ symbol is a dummy variable. The second y (in the p(y) wff) is bound, bound by that instance of ∀. This is a little bit more complicated. So in this wff, if we go from left to right looking at the variables, the first x after the ∀ is dummy. That's a dummy variable attached to that ∀ quantifier. The next x (in p(x, y)) is bound by that ∀ quantifier, that universal quantifier. Then the y (in p(x, y)) is free because it's not bound by anything. If we carry on after the comma (because, you see, this is all part of a conjunction, so this is actually a list of wffs).

Aaron Turner:
So after the comma we have ∀ x again. Again, that's a dummy x, and then there's a sub-wff ∃ y. So that y is a dummy y attached to that existential quantifier [∃]. Then in the sub-sub-wff (I guess), q(x, y, z), the x is bound by the enclosing universal quantifier [∀]. The y is bound, but by the enclosing existential quantifier [∃]. Finally the occurrence of z is free. So that's a slightly more complicated example, showing free and bound variables in the same wff, and nested quantifiers. So, a wff or term with no free variables is called closed. I will sometimes write that as a cwff or a cterm.
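
[EDIT: Continuing the illustrative Python sketch from above (same made-up encoding), here's how the free variables of a wff or term might be computed. The quantifier case is the interesting one: the dummy variable is subtracted from the free variables of the body:]

    def free_vars(e):
        """Variables occurring free in a term or wff."""
        tag = e[0]
        if tag == "var":
            return {e[1]}
        if tag in ("fun", "rel"):
            out = set()
            for arg in e[2]:
                out |= free_vars(arg)
            return out
        if tag in ("and", "or"):
            out = set()
            for w in e[1]:
                out |= free_vars(w)
            return out
        if tag == "not":
            return free_vars(e[1])
        if tag in ("implies", "iff", "eq"):
            return free_vars(e[1]) | free_vars(e[2])
        if tag in ("forall", "exists"):
            return free_vars(e[2]) - {e[1]}   # the quantifier binds its dummy variable
        raise ValueError(tag)

    # (∀ x : p(x, y)): x is bound, y is free; a wff/term with no free variables is closed
    w = ("forall", "x", ("rel", "p", [("var", "x"), ("var", "y")]))
    print(free_vars(w))   # {'y'}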

Aaron Turner:
Incidentally, in the literature closed wffs and terms are sometimes also called "ground". It just means no free variables. Okay. So there are two methods of substitution we can apply to wffs. So for our purposes, and this is my own syntax I've made up here, just to make it easier to understand: if P is a wff, V is a variable, and T is a closed term (remember, with no free variables) then this expression P[T for all V] denotes the wff obtained by substituting T (the closed term, T) for all free occurrences of V in P. So there's some examples. So here we've got p(y, x, y) and we're substituting f(g()) [Note: g() is a constant, not a variable!] for all y, and both instances of the y in p(y, x, y) are free.

Aaron Turner:
So the result is p(f(g()), x, f(g())) because both instances of y have been replaced by f(g()). In this one we're doing the same substitution, f(g()) for all y, but in the wff to the left, (∀ y : p(y)), because of the quantifier [∀], the y is not free. So this substitution doesn't make any difference, it doesn't change the wff it's being applied to at all. So the result is (∀ y : p(y)), unchanged. This is a more complicated one. So, okay, let's go through it. It's the same substitution, f(g()) for all y, but some instances of y are free and some are bound, so some get substituted and some don't. If you look through it, the first instance of y, which is in (∀ x : p(x, y)), is free, so that's substituted. That's straightforward. The next instance of y is bound by the existential quantifier [∃], so that is not substituted. So there you go.
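
[EDIT: And here, in the same illustrative Python encoding, is a sketch of the first kind of substitution, P[T for all V]. Note how it stops at any quantifier that re-binds V, which is exactly why the (∀ y : p(y)) example came back unchanged:]

    def subst_all(e, T, V):
        """P[T for all V]: substitute closed term T for all *free*
        occurrences of variable V in wff/term e."""
        tag = e[0]
        if tag == "var":
            return T if e[1] == V else e
        if tag in ("fun", "rel"):
            return (tag, e[1], [subst_all(a, T, V) for a in e[2]])
        if tag in ("and", "or"):
            return (tag, [subst_all(w, T, V) for w in e[1]])
        if tag == "not":
            return (tag, subst_all(e[1], T, V))
        if tag in ("implies", "iff", "eq"):
            return (tag, subst_all(e[1], T, V), subst_all(e[2], T, V))
        if tag in ("forall", "exists"):
            if e[1] == V:
                return e            # V is re-bound here: nothing inside is free in V
            return (tag, e[1], subst_all(e[2], T, V))
        raise ValueError(tag)

    fg = ("fun", "f", [("fun", "g", [])])       # the closed term f(g())
    print(subst_all(("rel", "p", [("var", "y")]), fg, "y"))
    # ('rel', 'p', [('fun', 'f', [('fun', 'g', [])])])  i.e. p(f(g()))
    print(subst_all(("forall", "y", ("rel", "p", [("var", "y")])), fg, "y"))
    # unchanged: the y is bound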

Aaron Turner:

Okay. Different type of substitution now. In this case, for our purposes, if P is a wff and T1 and T2 are both closed terms (remember, closed terms) then [P with T1 for some T2] denotes any wff obtained by substituting T1 for some occurrences of T2 in P. Basically one or more, and if there's more than one substitution possible, it's up to the context to decide which ones you want to change or not. We'll see how that works. This is specifically for handling equality, so we'll see how that works later. So here's some examples. Yes, what we've done is p(y(), x, y()) [Note: y() is a nullary function, i.e. a constant] and we're substituting f(g()) for some y() [again, y() is a function, actually a constant]. Now that may denote p(f(g()), x, y()), where we've replaced the first occurrence of y(). In this one we've replaced the second occurrence, but not the first. So in the first example, we replaced the first occurrence, not the second. In the second example, we replaced the second occurrence and not the first. What do you know? In the third example, we replaced both occurrences.
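
[EDIT: The "for some" substitution is nondeterministic: it denotes any one of several possible results. One way to make that concrete in the Python sketch is simply to enumerate all of them (again, the encoding and function names are mine):]

    from itertools import product

    def variants(e, T1, T2):
        """All expressions obtainable by replacing any subset (possibly
        empty) of the occurrences of closed term T2 in e with T1."""
        out = []
        if e == T2:
            out.append(T1)          # replace this occurrence of T2 outright
        tag = e[0]
        if tag == "var":
            out.append(e)
        elif tag in ("fun", "rel"):
            for args in product(*(variants(a, T1, T2) for a in e[2])):
                out.append((tag, e[1], list(args)))
        elif tag in ("and", "or"):
            for ws in product(*(variants(w, T1, T2) for w in e[1])):
                out.append((tag, list(ws)))
        elif tag == "not":
            out.extend(("not", b) for b in variants(e[1], T1, T2))
        elif tag in ("implies", "iff", "eq"):
            for l, r in product(variants(e[1], T1, T2), variants(e[2], T1, T2)):
                out.append((tag, l, r))
        elif tag in ("forall", "exists"):
            out.extend((tag, e[1], b) for b in variants(e[2], T1, T2))
        return out

    def subst_some(P, T1, T2):
        """[P with T1 for some T2]: one or more occurrences replaced."""
        return [v for v in variants(P, T1, T2) if v != P]

    yc = ("fun", "y", [])                        # the constant y()
    fg = ("fun", "f", [("fun", "g", [])])        # the closed term f(g())
    P  = ("rel", "p", [yc, ("var", "x"), yc])    # p(y(), x, y())
    print(len(subst_some(P, fg, yc)))            # 3: first, second, or both replaced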



Clip 04:


Transcript 04:

Aaron Turner:

Hoo, okay. So next. That's syntax, and wffs are just strings of symbols, so now we're going to give them some meaning. Now, the semantics. As you can see, the syntax of FOLEQ is more complicated than that of PROP, and unsurprisingly the semantics of FOLEQ is also more complicated.

Aaron Turner:
Don't panic though because this is not actually a formal course on logic. All I'm trying to do, if you remember, over this series of talks... I want to communicate sufficient intuition about each piece of the Big Mother jigsaw puzzle that by the end of the entire sequence of talks, you should be able to see in your mind's eye a concrete path from where we are now to an operational machine having the target behavior that I've previously talked about. I'm just trying to communicate an intuitive grasp of things so I don't have to be fully formal.

Aaron Turner:
So I'm going to try to describe the semantics of First Order Logic with Equality informally. In truth, it's sort of semi-formal, but I'm trying to limit how formal it is. Here, let's have another look at propositional logic, the semantics of propositional logic. The semantic domain for PROP wffs, remember, was this set of strange symbols, which we call top [⟙] and bottom [⟘], but we could think of as representing "true" and "false". And we can give meaning to (i.e. we can evaluate to either true or false) the wff forms "and" [∧], "or" [∨], "not" [¬], "implies" [⇒] and "equivalence" [⇔] just by using the truth tables that we looked at last time. Proposition symbols are uninterpreted. So in order to give a meaning to proposition symbols, we had to use the concept of an interpretation, and an interpretation maps proposition symbols to top [⟙] and bottom [⟘] (true and false). And so given a PROP interpretation we can now evaluate any PROP wff to either ... true or false.

Aaron Turner:
And also for PROP we defined the concept of semantic consequence. ... The consequent w is a semantic consequence of the antecedents w1 to wn if and only if, for all possible interpretations J (all possible PROP interpretations J), if all of w1 to wn evaluate to true in J then w also evaluates to true in J. Okay?
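
[EDIT: Because PROP interpretations are finite, semantic consequence in PROP really can be checked by brute force, exactly as described. A small Python sketch, with my own made-up encoding of PROP wffs as tagged tuples:]

    from itertools import product

    def eval_prop(w, J):
        """Evaluate a PROP wff in interpretation J (a dict mapping
        proposition symbols to True/False)."""
        tag = w[0]
        if tag == "prop":    return J[w[1]]
        if tag == "and":     return all(eval_prop(c, J) for c in w[1])
        if tag == "or":      return any(eval_prop(c, J) for c in w[1])
        if tag == "not":     return not eval_prop(w[1], J)
        if tag == "implies": return (not eval_prop(w[1], J)) or eval_prop(w[2], J)
        if tag == "iff":     return eval_prop(w[1], J) == eval_prop(w[2], J)
        raise ValueError(tag)

    def props(w):
        """All proposition symbols occurring in w."""
        if w[0] == "prop":        return {w[1]}
        if w[0] in ("and", "or"): return set().union(*(props(c) for c in w[1]))
        if w[0] == "not":         return props(w[1])
        return props(w[1]) | props(w[2])

    def semantic_consequence(antecedents, consequent):
        """w1..wn ⊨ w: in every interpretation where all antecedents
        evaluate to true, the consequent evaluates to true."""
        syms = sorted(set().union(props(consequent), *(props(a) for a in antecedents)))
        for values in product([True, False], repeat=len(syms)):
            J = dict(zip(syms, values))
            if all(eval_prop(a, J) for a in antecedents) and not eval_prop(consequent, J):
                return False     # found a counterexample interpretation
        return True

    p, q = ("prop", "p"), ("prop", "q")
    print(semantic_consequence([p, ("implies", p, q)], q))   # True: modus ponens
    print(semantic_consequence([("or", [p, q])], p))         # False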

Aaron Turner:
We're going to build on this when we now look at First Order Logic. Here, the semantics of First Order Logic. So the semantic domain for First Order Logic with Equality wffs: we're going to use the same two symbols, top [⟙] and bottom [⟘]. This time we're going to think of them as meaning "satisfied" and "unsatisfied". Because in First Order Logic, truth is defined slightly differently.

Aaron Turner:
So the First Order Logic forms "and" [∧], "or" [∨], "not" [¬], "implies" [⇒] and "equivalence" [⇔] can be evaluated to top [⟙] or bottom [⟘] using the truth tables, exactly the same truth tables as we used in PROP (although we should probably call them "satisfaction tables" in this context). So function symbols and relation symbols are uninterpreted, and we will need more sophisticated mechanisms than simple truth tables to assign meanings to free and bound variables, terms, quantifiers and equality. But first we've got to define a few concepts from set theory, simple concepts: the concept of an n-tuple, of a relation, and of a function. So n-tuples first... An n-tuple, where n is a natural number (which means a whole number: zero, one, two, three, etc., so no negative numbers), is an ordered list of n values. Okay?

Aaron Turner:
So this is a 0-tuple. You see we bracket the list using these...well you can think of them as "less than" [<] and "greater than" [>] symbols. So this is a 0-tuple, it's got nothing in it, basically. These are all 1-tuples because each one of them contains a single item, [in this case] a single number. So the tuple containing just 2, the tuple containing just 1729, and the tuple containing just that very big number which I can't see... 87,539,319. Those are all 1-tuples. (And incidentally these are the first three taxicab numbers if you want to look that up. That's how nerdy I am!)

Aaron Turner:
Right. So these are 2-tuples. So each of these has two elements separated by a comma. So the first one is <1, 1> and the second one is <2, 1> and the third one is <2, 2>. They're all 2-tuples, and these are 3-tuples. Incidentally, the first one is <1, 1, <1, 0>>: its third element is another 2-tuple, <1, 0>. And similarly for the other examples. So you can see... Yep, these are 3-tuples, but n-tuples can also be nested inside each other, so they can have a lot of structure, and we can use that structure to represent things. Surprisingly complicated things. So you should get the idea from that.

Aaron Turner:

Now a relation, what's a relation in mathematics? So given some set D, which we'll call the domain, a relation R over D to the n (which just means it's n-place [or n-ary, i.e. takes n arguments]) is (or may be represented by) a set of n-tuples, each element of each tuple being a member of D. Sometimes D is called the carrier, I think, but we'll call it the domain. Now this is just a way of representing [relationships] between individual members of a set of mathematical objects. And all we do is we enumerate every single case explicitly. [EDIT: Note that, yet again, relations may be nullary, i.e. with n = 0. Such relations effectively denote a boolean constant, i.e. the empty set denotes False, and the set containing the empty tuple (and nothing else) denotes True.]

Aaron Turner:
So, for example, the "divides" relation for natural numbers might be represented as follows. And if you look here... So they're all in a set, so the outside, we've got curly braces on the outside. So we're defining a set. The first element is the 2-tuple <1,0> which basically says "one divides zero", that's what we're trying to represent. One also divides one, one divides two, etcetera. We've got an infinite number of those. And then we also list... We do the same for the number two. So two divides zero, two divides two, two divides four, etcetera. Three divides three... Sorry, three divides zero, three divides three, three divides six, etcetera. So obviously this is an infinite set, a very, very large set and conceptually it's representing the "divides" relation for natural numbers.

Aaron Turner:
Similarly, the "less than or equal to" relation might be represented like this, see? So it's basically the same thing. Zero is less than or equal to zero. Zero is less than or equal to one. Zero is less than or equal to two. One is less than or equal to one. One is less than or equal to two. One is less than or equal to three, etcetera. And if D is infinite, as in this case, then relations over D to the n may also be infinite.

Aaron Turner:
Now almost any mathematical relation may be notionally represented in this way. I say almost because there are some collections of objects which are so large they can't be represented as sets. They're called proper classes. We will meet them in the next talk next month on set theory. But don't worry about those things now.

Aaron Turner:
So now we need to define what a function is. Now we already know what a relation is. A function is just a special type of relation. It's a relation that has the property of being univocal. Formally, a function from D to the n to D is a univocal relation over D to the n+1. So they're represented as n+1-tuples. And what we do is we put the n arguments to the function (i.e. the inputs to the function) in the first n elements of the tuple, and then the result we put in the n+1th element of each n+1-tuple. [EDIT: Yet again, a function may be nullary, i.e. with n = 0, in which case it effectively denotes a constant, i.e. a specific element of D] Now a relation (remember, a relation is a set of tuples) is univocal if, given the first n elements of a tuple, the n+1th element is unique. Meaning it unambiguously defines the result of the function for that particular set of inputs.

Aaron Turner:
So we'll give some examples now. The following relation is not univocal and is therefore not a function. It's a relation, yes, but not a function. So here we have ... given the inputs one and two, the output is three, but given the inputs one and two, the output is also four. Well, that's not uniquely defined. So it's a relation but it's not a function. And the following relation is univocal and therefore is a function. So here ... given 1,2 the answer is 3, given 2,1 the answer is 3, given 3,1 the answer is 4. So that's unambiguous and defines this tiny little finite function.
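
[EDIT: In Python terms (my own illustration), a relation really is just a set of tuples, and the univocal test is a short dictionary check:]

    def is_univocal(rel, n):
        """rel is a set of (n+1)-tuples.  It represents an n-argument
        function iff the first n elements always determine the last one."""
        seen = {}
        for t in rel:
            args, result = t[:n], t[n]
            if seen.setdefault(args, result) != result:
                return False            # same inputs, two different outputs
        return True

    # a finite fragment of the (infinite) "divides" relation on the naturals
    divides = {(m, k) for m in range(1, 6) for k in range(0, 20) if k % m == 0}

    print(is_univocal({(1, 2, 3), (1, 2, 4)}, 2))             # False: not a function
    print(is_univocal({(1, 2, 3), (2, 1, 3), (3, 1, 4)}, 2))  # True: a tiny function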

Aaron Turner:
Now you should be able to see how even functions, for example the arithmetic operators over the real numbers, can be notionally represented as very large univocal relations. Remember, they'll be infinite because the real numbers are infinite, so the set of 3-tuples for each of those operators will be infinite. But notionally that's a way to represent those functions. And again, almost any mathematical function may be notionally represented this way.

Aaron Turner:
So moving now to interpretations, remember we used interpretations as part of propositional logic. So the interpretations in FOLEQ are just a little bit more complicated. So a FOLEQ interpretation comprises a non-empty domain of discourse D (which is a set of otherwise unspecified objects). And each function symbol ... remember when we use [a function symbol] it is followed by a comma-separated list of terms, which is of finite length. In other words ... for any particular instance of a function symbol, it's going to be followed by n arguments in brackets. So if we just think of any particular n, the interpretation J provides a mapping FSn from n-ary function symbols [i.e. taking n arguments] to functions from D to the n to D. And similarly for all n, the interpretation provides a mapping RSn from any n-ary relation symbol [taking n arguments] to relations over D to the n. So...

Aaron Turner:
Oh, and finally... Given an interpretation J, so we know what D is, a variable assignment VA maps variables to objects in D. So given an interpretation J and a variable assignment VA, we can now assign a meaning to function symbols, relation symbols, and to free [and bound!] variables. Yes. So basically we assign a function from Dn to D to function symbols, a relation over Dn to relation symbols and an element of D to variables.

Aaron Turner:
So this is the syntax of FOLEQ, First Order Logic with Equality. Given an interpretation J and variable assignment VA we can evaluate any term T (which is the top three things). We can now evaluate any term T to determine which specific object in D it denotes. Okay? And given an interpretation J and a variable assignment VA we can also evaluate the relation symbol wff form. And we can also evaluate the straightforward "and" [∧], "or" [∨], "not" [¬], "implies" [⇒] and "equivalence" [⇔] wff forms using the truth tables from PROP. And finally given an interpretation J and variable assignment VA, we can evaluate the equality wff form. Okay? And all this leaves is the universal [∀] and existential [∃] quantifier forms. Given what we've talked about so far, we can give all of those syntactic clauses, all of those things meaning except for the quantifiers [∀ and ∃] which we're going to move on to now.

Aaron Turner:
So intuitively, (∀ V : P) evaluates to top [⟙] i.e. satisfied if and only if P evaluates to satisfied for all possible values of V. And similarly (∃ V : P) evaluates to satisfied if and only if P evaluates to satisfied for some possible value of V. At least one, okay, possible value of V. So yeah, this is actually quite hard to express formally but in order to evaluate a quantifier wff form in respect of a given interpretation J and variable assignment VA, we need to use the given variable assignment VA to assign some object in D to each free variable occurring in P and we effectively use another variable assignment, VA', to assign some object in D to every occurrence of the bound variable V in P.

Aaron Turner:
So (∀ V : P) evaluates to satisfied in the interpretation J and variable assignment VA if and only if P evaluates to satisfied in (J, VA') for all variable assignments VA' that differ from VA in respect of at most V.

Aaron Turner:
I know that's a bit of a mouthful but if you think about it hard enough it does actually work. And similarly (∃ V : P) evaluates to satisfied in the interpretation J and variable assignment VA if and only if P evaluates to satisfied in (J, VA') for some variable assignment (i.e. at least one variable assignment) VA' that differs from VA in respect of at most V.
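
[EDIT: Over a *finite* domain of discourse, the semantics above can actually be executed. Here's a Python sketch (continuing my illustrative encoding): the interpretation J supplies the domain D plus denotations for the function and relation symbols, the variable assignment VA is a dict, and the quantifier cases implement "VA' differs from VA in respect of at most V" directly:]

    def eval_term(t, J, VA):
        """Which object of D does term t denote?"""
        if t[0] == "var":
            return VA[t[1]]
        name, args = t[1], t[2]
        return J["fun"][name](*[eval_term(a, J, VA) for a in args])

    def satisfied(w, J, VA):
        """Evaluate wff w to True (⟙, satisfied) or False (⟘, unsatisfied)."""
        tag = w[0]
        if tag == "rel":
            return tuple(eval_term(a, J, VA) for a in w[2]) in J["rel"][w[1]]
        if tag == "eq":
            return eval_term(w[1], J, VA) == eval_term(w[2], J, VA)
        if tag == "and":     return all(satisfied(c, J, VA) for c in w[1])
        if tag == "or":      return any(satisfied(c, J, VA) for c in w[1])
        if tag == "not":     return not satisfied(w[1], J, VA)
        if tag == "implies": return (not satisfied(w[1], J, VA)) or satisfied(w[2], J, VA)
        if tag == "iff":     return satisfied(w[1], J, VA) == satisfied(w[2], J, VA)
        if tag in ("forall", "exists"):
            V, P = w[1], w[2]
            # VA' agrees with VA everywhere except (at most) at V
            results = (satisfied(P, J, {**VA, V: d}) for d in J["D"])
            return all(results) if tag == "forall" else any(results)
        raise ValueError(tag)

    # D = {0, 1, 2}, s = successor mod 3, le = the usual <= on D
    J = {"D": [0, 1, 2],
         "fun": {"s": lambda v: (v + 1) % 3},
         "rel": {"le": {(a, b) for a in range(3) for b in range(3) if a <= b}}}
    w = ("forall", "x", ("rel", "le", [("var", "x"), ("fun", "s", [("var", "x")])]))
    print(satisfied(w, J, {}))   # False: le(2, s(2)) = le(2, 0) fails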

Aaron Turner:
Phew. Okay. So when we looked at PROP we defined a few things, satisfiability and validity and things. We're going to do the same for FOLEQ which is a little bit more complicated. In PROP, in order to evaluate a wff to top [⟙] or bottom [⟘], all we needed was a PROP interpretation J which is a fairly simple thing. In FOLEQ, in order to evaluate a not necessarily closed wff w to top [⟙] or bottom [⟘] we need both a FOLEQ interpretation J, which we know is a little bit more complicated, and a variable assignment VA.

Aaron Turner:
So analogous to PROP we can now state the following: wff w is called "unsatisfiable" if it evaluates to unsatisfied in every combination of interpretation and variable assignment. A wff w is called "invalid" if it evaluates to unsatisfied in at least one combination of interpretation and variable assignment. A wff w is called "satisfiable" if it evaluates to satisfied in at least one combination of interpretation and variable assignment. And finally a wff w is called "valid" if it evaluates to satisfied in every combination of interpretation and variable assignment.

Aaron Turner:
There you go. Straightforward.
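
[EDIT: Using the satisfied() sketch from above, you can test these definitions against every interpretation over one small, fixed domain, e.g. every possible denotation of a unary relation symbol p over D = {0, 1}. One caveat, which matters shortly: finding a falsifying interpretation proves invalidity, but no amount of finite checking proves validity, because interpretations range over *all* domains, including infinite ones:]

    from itertools import combinations

    D = [0, 1]
    tuples = [(0,), (1,)]
    all_p = [set(c) for r in range(3) for c in combinations(tuples, r)]

    # (∃ x : p(x)) ∨ (∀ x : ¬p(x)) is satisfied under every choice of p here
    w = ("or", [("exists", "x", ("rel", "p", [("var", "x")])),
                ("forall", "x", ("not", ("rel", "p", [("var", "x")])))])
    print(all(satisfied(w, {"D": D, "fun": {}, "rel": {"p": R}}, {})
              for R in all_p))   # True: consistent with (but not a proof of) validity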

Aaron Turner:
Okay. This is analogous to PROP, but a semantic sequent, w1, w2 to wn and then the double turnstile (you see I've put the subscript FOLEQ on the end of it) and w as the consequent means... w is a semantic consequence of wffs w1 to wn. Yes, we know this. This is called the double turnstile and w1 to wn are called antecedents, assumptions or hypotheses and w is called the consequent or succeedent.

Aaron Turner:
Now in FOLEQ, the definition is a little bit more complicated. The consequent w is a semantic consequence of the antecedents w1 to wn if and only if for all possible interpretations J and variable assignments VA, if all of the antecedents w1 to wn evaluate to satisfied in (J, VA), then the consequent w also evaluates to satisfied in (J, VA). If you think about it enough times ... intuitively it's just a little bit more complicated than PROP but the intent is the same. Whenever everything to the left of the turnstile evaluates to top [⟙], the consequent to the right of the turnstile also has to evaluate to top [⟙]. If that holds in every combination of interpretation and variable assignment, then the semantic consequence applies.

Aaron Turner:
So from a semantics point of view, the biggest difference between PROP and FOLEQ is that in PROP everything is finite, but in FOLEQ it is not. If we want to evaluate a semantic consequence in PROP then we can do so in finite time, because in practice there are only a finite number of possible interpretations and we can go through them all one at a time. However, in FOLEQ that's not the case. If we want to evaluate a semantic consequence in FOLEQ, then we need to consider an infinite number of possible interpretations, because the domain of discourse D in an interpretation can be any set, including any infinite set. And so it's just impossible. We can't enumerate them one at a time. It would take literally forever. So given that, how can we possibly evaluate a semantic consequence (w is a semantic consequence of w1 to wn)? How can we evaluate whether that holds or not?

Aaron Turner:

So I'm going to put a pin in that for a minute and we're going to have a look at First Order Logic's proof rules.



Clip 05:


Transcript 05:

Aaron Turner:

So, here we go. All right. Now, we're looking at syntactic consequence. We've just looked at semantic consequence. We're going to look at syntactic consequence. So, again analogously to PROP, this expression w1 to wn, single turnstile, w means wff w is a syntactic consequence of wffs w1 to wn. Yes, that's a single turnstile. Again, these are called antecedents, assumptions, or hypotheses, and w is called the consequent or succeedent. Now, in first order logic, as in PROP, theorems are syntactic sequents, not wffs, and a syntactic sequent is a theorem if and only if it can be derived using the proof rules, which we are now going to describe. A proof is a non-empty finite sequence of theorems, each of which may be derived from theorems appearing earlier in the proof. So, no circularity allowed.

Aaron Turner:
In first order logic, the Hyp, Weaken, and introduction and elimination rules for "and" [∧], "or" [∨], "not" [¬], "implies" [⇒] and "equivalence" [⇔] are the same as for PROP. So we just inherit those from PROP exactly the same. There they all are. So in order to complete our natural deduction proof rules for first order logic with equality, we need to add the following proof rules: ∀ introduction, ∀ elimination, ∃ introduction, ∃ elimination, equality [=] introduction (or equals [=] introduction), and = elimination. We're going to do those one at a time now. So ∀ elimination, this is the ∀ elimination proof rule and you can see that there's a side condition that T must be closed. The term T must be closed in order to apply this rule and it's very important. So if we have (∀ V : P) on the top and T is closed, then we can deduce P[T for all V] with the assumptions unchanged. Here's an example. So we have (∀ x : p(x)) as an assumption and also as a consequent by the Hyp rule. Then we can simply apply ∀ elimination to it [with constant a() as term T].

Aaron Turner:
Yes, we've replaced the variable V, which is x, with the closed term a() [i.e. the constant a()]. Okay. That's ∀ elimination. So yeah, ∀ introduction. If we have on the top line P[T for all V], where T is closed and T is not in the assumptions As or in P (in other words, the term T must be completely arbitrary), then we can derive (∀ V : P). Here's an example. So again, we use Hyp to get us started with (∀ x : p(x)) on the right hand side. We can use ∀ elim as we've just shown to derive p(a()), which has eliminated the universal quantifier [∀]. Then, because the term a() [Edit: not p(a()), which is a wff!] is closed and it's not in As or P, we can apply ∀ introduction and deduce (∀ y : p(y)). You see, what we've actually ended up doing is renaming the bound variable from x to y, using ∀ elim followed by ∀ intro.
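
[EDIT: In the running Python sketch, a sequent can be a pair (As, w), and each proof rule becomes a function that checks the rule's side conditions and builds the conclusion. Here are the two ∀ rules, reusing free_vars and subst_all from the earlier sketches; all of this scaffolding is my own, purely for illustration:]

    def is_closed(e):
        return not free_vars(e)          # free_vars: see the earlier sketch

    def occurs_in(T, e):
        """Does term T occur anywhere inside e?"""
        if e == T:
            return True
        tag = e[0]
        if tag == "var":          return False
        if tag in ("fun", "rel"): return any(occurs_in(T, a) for a in e[2])
        if tag in ("and", "or"):  return any(occurs_in(T, w) for w in e[1])
        if tag == "not":          return occurs_in(T, e[1])
        if tag in ("implies", "iff", "eq"):
            return occurs_in(T, e[1]) or occurs_in(T, e[2])
        return occurs_in(T, e[2])        # forall / exists

    def forall_elim(sequent, T):
        """As ⊢ (∀ V : P), T closed   ==>   As ⊢ P[T for all V]"""
        As, w = sequent
        assert w[0] == "forall" and is_closed(T)
        return (As, subst_all(w[2], T, w[1]))    # subst_all: earlier sketch

    def forall_intro(sequent, T, V, P):
        """As ⊢ P[T for all V], T closed, T in neither As nor P
           ==>  As ⊢ (∀ V : P)"""
        As, Q = sequent
        assert is_closed(T) and Q == subst_all(P, T, V)
        assert not occurs_in(T, P) and not any(occurs_in(T, a) for a in As)
        return (As, ("forall", V, P))

    # the bound-variable renaming example from the talk:
    px = ("forall", "x", ("rel", "p", [("var", "x")]))
    a  = ("fun", "a", [])
    s1 = ([px], px)                              # Hyp
    s2 = forall_elim(s1, a)                      # (∀ x : p(x)) ⊢ p(a())
    s3 = forall_intro(s2, a, "y", ("rel", "p", [("var", "y")]))
    print(s3)                                    # i.e. (∀ x : p(x)) ⊢ (∀ y : p(y))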

Aaron Turner:
So a slightly more complicated example. (Starting to wish I hadn't put these in now!) So in this example, there's the rules at the top that we're going to use, the main rules. In this example, b(x) is supposed to mean x is a barber, h(y) is supposed to mean y needs a haircut and c(x, y) means x can cut y's hair. So these are quite complicated. So we start off with the Hyp rule introducing that ... ∀ x and ∀ y if x is a barber and y needs a haircut, then x can cut y's hair, and we use ∀ elimination to remove the [enclosing!] ∀ x, and we replace x with the constant n(). We do the same again with the second quantifier, ∀ y, and on this occasion we replace the y with the constant n() again.

Aaron Turner:
So we're going to put the universal quantifiers [∀] back in again. ∀ z if z is a barber and z needs a haircut, then z can cut their own hair. Okay, so those were the ∀ rules. Now we're going to look at the existential [∃] rules. So the ∃ intro proof rule is as follows. So again, if we start off with P[T for all V] and T is closed, then we can deduce (∃ V : P). Again here we'll give an example. So if we start off again with Hyp introducing something on the right hand side. If we start off with p(a) ... I might've made a mistake here [Edit: yup!] Yes, the p(a) was supposed to be a constant, so that should be p(a()) ... everywhere you see it. So if you look what I've done now, p(a), a is a variable there and therefore if we tried to use "a" as the term T, it's not closed. We wouldn't be able to apply ∃ intro.

Aaron Turner:
So what it should be, I'll do it all again. So step one should be p(a()) as an assumption. Then the conclusion should be p(a()), meaning the constant a(). Then we'll be able to apply at step two ∃ intro and we'd get, again, the assumptions unchanged, p(a()), and [on the right hand side] (∃ x : p(x)). Basically we're saying ... if p(a()) is true for some arbitrary a(), then there is an x for which p(x) is true. Okay. Sorry about that. Schoolboy error. They're really easy to make [Edit: computers have infinitely better precision of thought than humans, and are therefore much better suited to this level of formal reasoning than humans are!] So yes, the ∃ elimination proof rule is, as we can see, quite a bit more complicated. It's got two premises above the top line, and a side condition. So on the top line, if we've got (∃ V : P) and we've also got (∀ V : (P ⇒ Q)), then we can deduce Q, as long as V is not free in Q.

Aaron Turner:
If you remember from PROP, ∃ elimination is a generalization of ∨ elimination. So look at ∨ elimination: say we have the disjunction of Cs, so C1 or C2 or C3 or whatever, and that's true. Let's just say it's got two disjuncts. So if C1 or C2 is true, but we know that both C1 implies D and C2 implies D, then we can deduce D, because whichever C is true, D may be implied from it. So the ∃ elimination rule (or the way I've written it anyway) is exactly that. All we've done is we've extended the disjunction from the finite case to an infinite case, represented by the existential quantifier [∃], and we've extended the conjunction [of the implication premises] from the finite case to an infinite case, represented by the universal quantifier [∀].

Aaron Turner:
So I hope that makes sense. So yes, we've got an example here. So again, we have to use Hyp a couple of times to get us going. So we've got now there (∃ x : p(x)) and we've got (∀ x : (p(x) ⇒ q(a()))). We can apply ∃ elim to those two steps because it matches the pattern, but most importantly Q, as you can see, Q is [q applied to the constant a(), i.e. ...] q(a()), and so the variable V, which is x, is not free in Q, which is q(a()). So we can deduce q(a()). Okay. Oh, we have an even more complicated example now. So we've got an example now of the ∃ rules and in this case tt(x) is intended to mean x has developed time travel, and sf(x) is intended to mean x is a fan of science fiction.

Aaron Turner:
This is quite a long one. So again, Hyp gets us going. ∃ x such that x has developed time travel. ∀ x if x has developed time travel, then x is a fan of science fiction and we use ∀ elimination on step two to derive that if the constant k(), we'll call him Kevin, if Kevin has developed time travel, then Kevin is a fan of science fiction. Now this is an interesting one. We've added another assumption on the left of the turnstile and we've assumed that Kevin has developed time travel, and now we've applied the Weaken rule to step three basically so that it has the same set of assumptions on the left side of the turnstile as step four where we assumed that Kevin had developed time travel. We apply ⇒ elimination (modus ponens) to four and five because we've basically got Kevin has developed time travel and if Kevin has developed time travel then Kevin is a fan of science fiction. From those two things we can deduce that Kevin is a fan of science fiction.

Aaron Turner:
All this is predicated, remember, on the additional assumption that we added. So we can now apply ∃ intro to step six, giving us step seven: ∃ y such that y is a fan of science fiction (namely Kevin, because we know that Kevin is a fan of science fiction). Kevin, as a term, is a closed term [k()], which is exactly what ∃ intro requires. Now we can use ⇒ introduction to basically get rid of that extra assumption on the left hand side of the turnstile [not turntable!] It effectively moves over from the left side to the right side, and we have: if Kevin has developed time travel, then ∃ somebody who is a fan of science fiction. Then finally ... (I think this is finally). No, there's one more.

Aaron Turner:
∀ x if x has developed time travel, then ∃ y such that y is a fan of science fiction. Finally, using the very, very first step and the very, very last but one step ... one and nine ... we can apply ∃ elimination and we can deduce (from the assumptions, remember, on the left hand side of the turnstile [not turntable!]) that ∃ y such that y is a fan of science fiction. There you go. First order logic at its best. So right. We've done the universal quantifier [∀], we've done the existential quantifier [∃]. We've just got = to do. So we start with the =-intro proof rule, which is called the reflexivity of =, and is defined as follows. Basically it's an axiom. There's nothing on the top line, and for any closed T we can write T = T. There's an example: a() = a(). Next, =-elim, which is called the substitutivity of equality, is defined as follows.

Aaron Turner:

So there are two premises above the line. So if T1 and T2 are closed terms and we've already derived that T1 = T2 and we've derived Q, then we can derive Q with T1 substituted for some instances of T2 [because we know they're equal, right?] This is the only reason we had that additional "for some" substitution rule. So here's an example [which I've fixed in the transcript!] If a() = b() and q(b()), then we know that q(a()). All we've done is we've substituted a() for b(), according to the bottom line of the =-elim rule.
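
[EDIT: And the two = rules in the same illustrative sketch style, reusing subst_some (the "for some" substitution) and is_closed from the earlier sketches:]

    def eq_intro(T):
        """=-intro (reflexivity): for any closed term T,  ⊢ T = T."""
        assert is_closed(T)
        return ([], ("eq", T, T))

    def eq_elim(seq_eq, seq_q, Q_new):
        """=-elim (substitutivity): from As ⊢ T1 = T2 and As ⊢ Q,
        conclude As ⊢ [Q with T1 for some T2]."""
        (As1, e), (As2, Q) = seq_eq, seq_q
        assert e[0] == "eq" and As1 == As2
        T1, T2 = e[1], e[2]
        assert Q_new in subst_some(Q, T1, T2)    # subst_some: earlier sketch
        return (As1, Q_new)

    # the example from the talk: from a() = b() and q(b()), conclude q(a())
    a, b = ("fun", "a", []), ("fun", "b", [])
    eq = ([("eq", a, b)], ("eq", a, b))          # Hyp
    qb = ([("eq", a, b)], ("rel", "q", [b]))     # assume q(b()) already derived
    print(eq_elim(eq, qb, ("rel", "q", [a])))    # ... ⊢ q(a())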



Clip 06:


Transcript 06:

Aaron Turner:

Okay. So finally, soundness and completeness. If you recall, propositional logic was both sound and complete. We've now defined the semantics and proof rules for first-order logic with equality. And just as [with] PROP, the two are very different, as well as quite complicated, and yet these different, quite complicated things turn out to be equivalent. So FOLEQ has the property of being strongly sound. There's a slight qualification here. For all sets of closed wffs As and all closed wffs C, if C is a syntactic consequence of As then C is a semantic consequence of As. And FOLEQ is also strongly (semantically) complete. So again, ... for all sets of closed wffs As and all closed wffs C, if C is a semantic consequence of As then C is a syntactic consequence of As, which basically means the two things are equivalent.

Aaron Turner:

So for all sets of closed wffs As and all closed wffs C, C is a semantic consequence of As if and only if ... C is a syntactic consequence of As. Now if you remember earlier on, we didn't have a way of evaluating ... [whether or not] C is a semantic consequence of As because of the infinite nature of the semantics. But now, we don't have to consider an infinite number of interpretations. ... if we need to try to evaluate whether C is a semantic consequence of As, then we can try to evaluate ... if C is a syntactic consequence of As, and if it is then we know from soundness that C is a semantic consequence of As. Which brings us to theorem-proving, i.e. the practical problem of proving that C is a syntactic consequence of As.



Clip 07:


Transcript 07:

Aaron Turner:

Right, remember we're building an AGI here, and there's our architecture. And first of all, we use PROP as our representation mechanism. So if I just go back and forwards there, you'll see by the double turnstile [⊨], we need some representation mechanism to represent our belief system, A, B, C, etc. And we were using PROP to do that, and we were using natural deduction to implement our deduction cognitive process.

Aaron Turner:
And in last month's lecture we had a little look at the process of finding a proof mechanically. But now we've upgraded the representation mechanism from PROP to FOLEQ. See, watch, PROP, FOLEQ, PROP, FOLEQ.

Aaron Turner:
Okay, now our representation mechanism for A, B and C is first-order logic with equality which we know is much more expressive, and we're using natural deduction theorem-proving extended with the new proof rules for our deduction cognitive process.

Aaron Turner:
What are the pros and cons of FOLEQ compared to PROP? Well, on the plus side, FOLEQ is infinitely more expressive as a representation mechanism. And it's actually foundational: it's able to represent all of mathematics. That might not be quite clear at the moment, but it should become clear over the next two talks.

Aaron Turner:
Next month, we're going to look at NBG set theory, and then the month after that, apocalypse notwithstanding, we're going to look at metamathematics. Wow, how exciting is that. So anyway, FOLEQ is able to represent all of mathematics. There it is, in all its glory.

Aaron Turner:
Now on the minus side, theorem proving just got a whole lot harder. Theorem proving with first-order logic is harder than theorem proving with PROP. So PROP is decidable, which means there is an effective procedure, i.e. an algorithm, such as the truth table method, for determining whether or not C is a syntactic consequence of As for arbitrary As and Cs.

Aaron Turner:
However, first-order logic with equality is only semi-decidable. So there is an effective procedure for confirming that C is a syntactic consequence of As, but there isn't a procedure for confirming that C is not a syntactic consequence of As. In other words, if As ⊢ C is a theorem, we can confirm it, but if As ⊢ C is not a theorem, then we can't confirm that fact by using the rules of FOLEQ.

Aaron Turner:
In practice, imagine if we've got a FOLEQ theorem-prover and we give it the sequent As ⊢ C and we ask it: is this a theorem of FOLEQ or not? And it goes off and searches the infinite tree of all theorems looking for a proof. Then only two things can happen. [(a)] If As ⊢ C is a theorem, it might find a proof of that after some finite time t < T. [(b)] However, time T might elapse without it having found any proof, at which point we'll just give up, and we don't know. We don't know at that point whether it's a theorem or not, and it hasn't produced any information.
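
[EDIT: Here's the shape of that situation in code, as a purely illustrative Python sketch, with a toy one_step function standing in for the real natural deduction rules:]

    import time

    def prove(goal, axioms, one_step, time_limit):
        """Forward search over the tree of derivable theorems.
        (a) returns the derived theorems if goal is found in time;
        (b) returns None on timeout, which tells us *nothing*: FOLEQ is
        only semi-decidable, so absence of a proof is not disproof."""
        deadline = time.monotonic() + time_limit
        known, seen = list(axioms), set(map(repr, axioms))
        while time.monotonic() < deadline:
            if goal in known:
                return known                          # outcome (a)
            new = [t for t in one_step(known) if repr(t) not in seen]
            if not new:
                break                                 # search space exhausted
            seen.update(map(repr, new))
            known.extend(new)
        return None                                   # outcome (b): don't know

    # toy "logic": theorems are numbers, the only rule is doubling
    double = lambda ts: [t * 2 for t in ts]
    print(prove(8, [1], double, 0.1))   # (a): [1, 2, 4, 8]
    print(prove(7, [1], double, 0.1))   # (b): None after the time limit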

Aaron Turner:
Nevertheless, such a theorem prover will provide economic utility, as long as outcome (a) occurs in a reasonable proportion of cases relative to the resource limit T. In other words, sometimes it will give us useful information and so it has economic value.

Aaron Turner:
A human mathematician, a decent one, we'll call him H, might be able to do a little bit better. So again, [(a)] H might find a proof of As ⊢ C after some time t < T. [(b)] H might not have found a proof after time T, at which time we give up and we don't know whether it's a theorem or not. So those are exactly the same possible outcomes as with our theorem prover. But with a human there is a third possibility, which is [(c)] the human H might find a meta-proof after some time t < T that [As ⊢ C] is in fact a non-theorem. H won't be able to confirm that *in* the theory of FOLEQ. H won't be able to come to that conclusion by thinking *in* the system FOLEQ. He, or she, will have to think *about* the system FOLEQ, and that's what metamathematics is.

Aaron Turner:
You can ... reason *inside* the rules of a [formal] system. You can use the proof rules of FOLEQ and you're reasoning *inside* the system, or you can actually reason *about* the proof rules and *about* the semantics, whatever. And that's basically one mathematics being used to reason about another mathematics, and the higher level mathematics is called the metamathematics and the mathematics that's being reasoned about is called the object mathematics or the object language.

Aaron Turner:
At this point, we're assuming that our theorem-prover ... is a naive theorem-prover and it can only reason *inside* the rules that we've given it, inside the rules of FOLEQ. Whereas a human can think *about* FOLEQ, and might be able to, in some cases, derive this extra piece of information, that a naive theorem-prover won't be able to. And so in case (c), a human mathematician will have provided slightly greater economic utility. Actually, I would say probably considerably greater economic utility, than a naive mechanical theorem-prover.

Aaron Turner:

We're building an AGI here, remember. Now we have to ask the question, how can we get a mechanical theorem-prover to sometimes return (c), just like a human. You know, if the human can do it, we can do it, or our machine can do it. How can we get a mechanical theorem prover to do that? How is the human doing that, or equivalently, how can we get a theorem-prover to think meta-mathematically? And this is a question to which we will be returning in subsequent talks, like I said. So next month is about set theory, NBG set theory, and then the month after that will be about metamathematics and how to make a theorem-prover think meta-mathematically. Exciting.



Clip 08:


Transcript 08:

Aaron Turner:

So in anticipation, a little bit of homework. Pawel, I know you're watching! So this is for you. Last month's homework was actually quite complicated ("please write a program"), which is a lot of work. This month's is easy, so don't worry about it.

Aaron Turner:
According to this book, Introduction to Mathematical Logic by Mendelson, which is a little bit of a bible of mine, if we have a theory with equality, the equals relation has to be reflexive, symmetric, and transitive. But in FOLEQ we've only got the =-intro and =-elim rules. So the homework is, use the proof rules to derive the following theorems. Okay?

Aaron Turner:
Basically, these theorems express the reflexivity of equality: (∀ x : (x = x)), the symmetry of equality: (∀ x : (∀ y : ((x = y) ⇒ (y = x)))). That's symmetry. And finally, the transitivity of equality: (∀ x : (∀ y : (∀ z : (∧((x = y), (y = z)) ⇒ (x = z))))).

Aaron Turner:
So, try to prove those using the proof rules of first-order logic with equality. They are actually really easy. So, don't be scared. Right. Next talk, like I say, we will build NBG set theory on top of first-order logic with equality.

Aaron Turner:

Given the global pandemic, stay safe, and thank you for your attention. Thank you.


