

A global collaboration of engineers, scientists, and other domain experts, funded by philanthropists,

building the gold standard of Artificial General Intelligence, for all of mankind.

(Well, that's the plan!)



Index:

  • Clip 01 (4:02)
    • Intro
      • [Hello & audience photo]
      • Realising the AI dream
      • Maximising human happiness
  • Clip 02 (20:30)
    • Intro
      • Big Mother
        • [Inductive-Deductive-Abductive architecture]
        • Key points
        • Making it happen
  • Clip 03 (32:41)
    • AI
      • What is AI?
      • What is AGI?
        • [The seven dimensions of AGI (7DimAGI)]
      • 7DimAGI
        • Industrial robots
        • Neural networks
        • Autonomous vehicles
        • Humans
        • Big Mother
        • The AI spectrum
  • Clip 04 (18:44)
    • AGI
      • AGI-by-stealth
      • Zero human employment
      • The AGI spectrum
      • The AGI endgame
        • Safety and benevolence
        • Door #1 - Private ownership
        • Door #2 - Public ownership
        • Wealth Without Work
        • Other AGI projects
        • Obstacles to maximally benevolent AGI
  • Clip 05 (3:56)
    • Summary
    • Just one more thing...
  • Clip 06 (15:24)
    • The remainder of this presentation is organised as follows...
      • Puzzle pieces [designing an AGI is like doing a very large jigsaw puzzle]
        • Preliminaries
        • Deduction
        • Abduction
        • Induction
      • History [of the Big Mother project, from 1983 to present]
      • Technical Plan
        • Phases
        • Roadmap
          • Global
          • Deduction
          • Abduction
          • Induction
      • Organisational Plan
        • Support
        • Cross-group committees

Clip 01:

Back to index

Transcript 01:

Aaron Turner:

So I'm Daddy Bear. That's been my nickname for about 15 years now. Or you can call me Aaron. I am really nervous. That's why I put this on the slide. Right, I forgot. I have to do an audience photo. Is everybody okay with that?


Brier:

Can I just hide my [inaudible]?

Aaron Turner:

That's fine. That's fine. That's fine. Okay. Luckily I have an ultra wide angle lens so I can get everybody in. [This was intended as a joke, as there were so few people there!] Brilliant. Okay.

Aaron Turner:

Yeah. Like I was saying, this really is a very long presentation. It might take 6 or 10 talks over the next year to deliver. So don't be shy, interrupt me, throw your beer or whatever, ask any question you want at any time you want. It doesn't matter how long the presentation takes. If we run out of time, we'll just pick up next time. All right?

Aaron Turner:

Okay. The goal of this project literally is to maximize the happiness of all mankind for all eternity.

Aaron Turner:

Now can you all see that guy standing in the corner? No? Oh, because I have been called crazy on more than one occasion, because of this type of thing. Right? So yes, it does seem crazy. A crazy ambition. And without AI it would be, but with AI, as soon as you start thinking about AI and its potential, and project it forward to its logical conclusion, this is where you end up. The possibility of near utopia. Sorry I'm going to turn these mics down, hang on. Okay.

Aaron Turner:

This is literally how I think about this. If you watch kids playing, and they're in a park or something, and they're really happy, and they're just laughing, and playing, and dancing, and things like that, and they're looked after by, in their case, grownups. So guardians, parents, teachers, or whatever, they're super-protected. Everything they need within reason is magically provided for them. Okay? I believe that's what life should be like for all humans for their entire lives. In other words, not having to work. Okay? Just like you were kids being looked after by your parents.

James:

Can I ask something?

Aaron Turner:

Sure. Yeah.

James:

Do you think that's what people would want?

Aaron Turner:

Well, I guess we'll see. In fact, I do have a slide that addresses that later on.

Back to index

Clip 02:

Back to index

Transcript 02:

Aaron Turner:
Okay. I'm here to talk about Big Mother, which is a proposed autonomous, goal-directed, superintelligent, artificial general intelligence. On the left there's a simple diagram of the top level logical architecture. Shall I go through it now? Yeah, might as well. Don't worry because over the course of these presentations, I will be explaining it all in detail, but just roughly speaking, this red box here is the machine. The blue box here is the universe that it's interacting with. So the machine observes the universe through sensors, TV cameras, [and billions of other types of input devices that I couldn't think of on the spot!], whatever. Each observation generates a percept, and so there's a sequence of observations, so the machine ends up with a finite [but continually growing] percept history [sequence of percepts]. From that percept history, this [induction] is a cognitive process (and there are three main cognitive processes).

Aaron Turner:
Induction is really the proper name for learning [OK, maybe not quite - but it's what I call it!] Okay. The cognitive process called induction basically observes the universe, and from those observations synthesizes a belief system, it's B1 to B whatever it is, can't see, which is what it believes about the universe. It's like a model, it's an internal model [i.e. internal to the machine] of the universe [which is external to the machine]. Once the machine has a belief system, because the beliefs are represented in a particular way, essentially using a logic, then another cognitive process, deduction, [becomes] possible and the machine is able to infer other beliefs, say here C, from B1 to Bn via deduction. There is another cognitive process called abduction, which is sort of like the reverse [of deduction]. It's like reverse deduction. I'll be explaining that later.

Aaron Turner:
But abduction is very, very powerful, very, very general. They're all general. But, in particular, planning is an abductive process. Remember the machine is autonomous and goal-directed. If you're goal-directed, you have to (a) have a goal, and (b) you have to have some mechanism of planning to achieve that goal. So it's abduction which will generate the plans, and then the plans get executed, and the machine then effects these actions, which affect the external universe. And that is basically the sense-effect loop from sensors through these cognitive processes and out again. That is just very simplistically how the architecture works. I call it an inductive-deductive-abductive hybrid.
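To make that loop a little more concrete, here is a minimal Python sketch of the sense-effect cycle just described. Everything in it (the stub functions induce, deduce and abduce, and the toy "universe") is a hypothetical placeholder invented for illustration, not an actual Big Mother design or API.

```python
# Minimal, purely illustrative sketch of the inductive-deductive-abductive
# sense-effect loop. Every name here is a hypothetical placeholder.

def induce(percept_history):
    """Induction: guess a belief system (an internal model) from the percept history."""
    return {f"B{i}": p for i, p in enumerate(percept_history, start=1)}

def deduce(beliefs):
    """Deduction: infer a further belief (here just one, 'C') from B1..Bn."""
    return {"C": len(beliefs)}

def abduce(beliefs, goal):
    """Abduction: hypothesise a plan whose execution should achieve the goal."""
    return [f"action chosen to further '{goal}' given {len(beliefs)} beliefs"]

def sense_effect_loop(observe, effect, goal, steps=3):
    percept_history = []                        # finite but continually growing
    for _ in range(steps):
        percept_history.append(observe())       # sense: one observation -> one percept
        beliefs = induce(percept_history)       # induction: build the internal model
        beliefs.update(deduce(beliefs))         # deduction: extend it with inferred beliefs
        for action in abduce(beliefs, goal):    # abduction: plan towards the goal
            effect(action)                      # effect: act on the external universe

# Toy "universe": observations are just numbers, actions are just printed.
counter = iter(range(100))
sense_effect_loop(observe=lambda: next(counter),
                  effect=print,
                  goal="maximise human happiness")
```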

Aaron Turner:
The other thing to realize is when you actually build a machine like this... Okay, it's not going to be a humanoid robot like on TV. Okay? Literally when we build this machine, and I say when, okay, the first one will be a supercomputer because you basically need all the compute you can get, so the first machine will necessarily be some very large supercomputer. It looks very boring, I know, but that's realistically what it will look like. Yeah. Subsequent machines, the machine will design itself, so the machine can design different architectures.

Aaron Turner:
The main points are: the machine is maximally safe, maximally benevolent, and maximally trustworthy. Now, before the machine is deployed, these are essentially design goals. These are the design goals that we're setting ourselves. They all equate to roughly the same thing, which is goal alignment, but we can be specific about it. Maximally safe, maximally benevolent, and maximally trustworthy. Now, if the machine is goal-directed, it has to have... Sorry, yes. Question.

Johannes:
Actually, again, this is not specific enough to me. It's still... Maximally safe, it seems like it's very difficult to foresee in which ways the machine will be safe and exactly what that means.

Aaron Turner:
It's not fully defined, is that what you're saying?

Johannes:
Yes.

Aaron Turner:
Yeah. Okay. Agreed. But just think of it for now as an ideal that we're aiming for. Okay? And we will establish work groups, which will explore the very question that you've just asked, what is safe? When is it safe enough? When is it benevolent enough, et cetera. Okay? But the important thing is to stress how important these things, these aspects are, because it's very, very easy to design a machine that is not safe, that is not benevolent, and that is not trustworthy. Very, very easy. Okay? If you start cutting corners. We're just being explicit here about how important those ideal goals are that we will work towards. Okay?

Aaron Turner:
Yes, because the machine is goal-directed, it needs to have a goal, and again, this is a very big question. What should the goal be? To me, having thought about it, I think everything ultimately boils down to maximizing happiness. Some people might disagree and there's plenty of time to discuss these things, but this is a starting point.

Aaron Turner:
Maximize human happiness both individually, and as a population, so we don't get sort of "tyranny of the majority" type things happening. At the same time, another aspect of the goal should be to minimize the standard deviation of happiness over the human population, which basically means that happiness is fairly distributed rather than some people being very happy and others being [grossly] less happy. It's sort of intended to avoid [any similarity with] the gross wealth inequality that we have now.
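As a purely illustrative sketch of that goal: the snippet below scores a population as "maximise the mean, minimise the spread". The reduction of happiness to a single number per person, and the weighting lambda_, are assumptions made only for this example, not part of any actual goal specification.

```python
# Illustrative only: "maximise average happiness, minimise its spread".
# The per-person happiness scores and the weight lambda_ are assumptions
# invented for this example.
from statistics import mean, stdev

def population_objective(happiness, lambda_=1.0):
    """Higher is better: high average happiness, low inequality of happiness."""
    return mean(happiness) - lambda_ * stdev(happiness)

print(population_objective([0.9, 0.9, 0.9]))   # uniformly high -> good
print(population_objective([1.0, 1.0, 0.1]))   # high for some, low for one -> scores worse
```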

Aaron Turner:
Yes, again, this is important. I'm proposing that the machine should be publicly owned by all mankind. It's not owned by a country, it's not owned by a private organization. It is owned by mankind for mankind. Okay? Because... Sorry, yes, sure. Questions.

Guy:
I'm just wondering, how do you envisage the machine weighing up the short-term happiness of people and the long-term happiness. I mean, I'd be much happier in two seconds if I had a donut. [crosstalk 00:12:10].

Aaron Turner:
Yeah, no, I understand. Yeah, there are all kinds of trade-offs. There're all types of... they're called goal conflicts. Okay? You have to trade off the happiness of one person, and the happiness of another person, and again, like you say, short-term happiness versus long-term happiness. To be honest, at the moment I haven't [specified any mechanism for resolving such trade-offs]... That's just an open question to be resolved. Exactly how do you resolve those different trade-offs, which will occur in all kinds of different ways? But the same is true of [any organisation that has to manage finite resources, e.g. a company, or a government, etc]. It's not just something that pertains to intelligent machines, but you're right, it is something we will have to address at some point as part of the design process.

Aaron Turner:
Okay. An AGI by definition, and you'll see later, has maximal cognition, and is able therefore to generate all value, meaning goods and services, which means that humans are no longer slaves to work because the machines generate everything, and humans can enjoy what I call wealth without work. I know it's very, very different from the world we have today. Again, they're free to devote their time to leisure and pleasure.

Aaron Turner:
Now, although I do tend to focus a little bit on this idea of the machines generating all wealth, all value. But... Sorry, did you have a question, Pawel? No?

Pawel:
It's okay.

Aaron Turner:
Okay. The other benefits of AI, which people talk about all the time are basically implied by the dominant goal, because whatever makes humans happy becomes part of the machine's goal. Curing cancer, solving climate problems, solving ecological problems. Yeah. Yes?

Johannes:
Will this AGI, will it essentially be a slave?

Aaron Turner:
Well, the AGI, will it be a slave?

Johannes:
It will not have free will? You've given it a goal?

Aaron Turner:
Well, I'm not sure it will be a slave in the sense that we've had human slavery. I'm not sure it will have free will anymore than humans have free will, which is open to question. Okay, we're not building a human here we're building a machine, so it operates differently, but the machine will be motivated. Its motivation will be to maximize humans' happiness. It's not really being forced to do anything like a slave is forced, it's doing it... it's even wrong to say that's what it wants. That's what it exists to do. That's what it's built to do. And as it functions, that's what it will do.

[AMT - editor]:
[Let's try to explain this a little better! Big Mother is basically a computer (comprising CPU, memory, etc). It might be a massively parallel computer, with lots of (things that look like) CPUs, but for our present purposes we only need to consider a single CPU. Every CPU is driven by a clock signal (e.g. a 1 GHz square wave). Simply stated, on every (say) falling edge of the clock, the CPU will load from memory the program instruction currently pointed to by its program counter, execute that instruction, and then (by default) increment the program counter, which (as a result of having been incremented) then points to the next instruction in memory to be executed. (And then on the next falling edge of the clock signal it all happens all over again, ad infinitum.) This is how computer programs actually get executed. The CPU doesn't "want" to execute the program, neither is it being "forced" to execute the program. It is simply the clock signal (plus the laws of physics, which determine how transistors etc work) that causes the computer program (held in the computer's memory) to make "forward progress", one clock cycle at a time. And so Big Mother will execute its program not because it "wants" to or because it is being "forced" to, but because, by virtue of the way that it is constructed, that's what the laws of physics cause it to do. And (again, simply stated) we will have designed Big Mother's program such that when it is executed (as a consequence of the laws of physics) its net effect will be to maximise human happiness, etc. Obviously, in order to achieve that net effect, the program in question will need to be quite complicated, and very carefully constructed, which is what we will be discussing for (most of) the remainder of this presentation. But, once given such a program, the net effect of the machine's behaviour will be to maximise human happiness (etc). No "free will" required, no "slavery" required. This is simply the behaviour that the machine is designed to implement by virtue of its programming (and other design features), which the operation of the laws of physics will cause to be executed.]
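A toy fetch-execute loop makes the point above concrete: the "program" below makes forward progress simply because the loop runs, with no wanting or forcing anywhere. The miniature instruction set is invented purely for illustration.

```python
# Toy fetch-execute loop. Nothing here "wants" to run or is "forced" to run:
# the program advances because the loop advances. The instruction set is
# invented for the example.
memory = [
    ("LOAD", 7),      # put 7 in the accumulator
    ("ADD", 35),      # add 35
    ("PRINT", None),  # output the accumulator
    ("HALT", None),
]

accumulator = 0
program_counter = 0

while True:                                    # one iteration per "clock cycle"
    opcode, operand = memory[program_counter]  # fetch the instruction pointed at
    program_counter += 1                       # by default, step to the next one
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)                     # prints 42
    elif opcode == "HALT":
        break
```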

Aaron Turner:
This will all become clearer, the more we investigate the actual architecture of the machine, so you can see inside it and see how it actually works, the clearer these things will become, because while it's still an abstract concept, it's hard to see some of these things.

Aaron Turner:
Okay. Again, this is a purely philanthropic project intended to benefit mankind as a whole, which is obviously in contrast to any project that is owned by a single state, or a profit-motivated corporation. Ideally... well, it's very hard: if you think the machine should be publicly owned by mankind, how do you actually do that in practice? The closest I could get was to start a UK community interest company, but that's obviously not far enough. In the limit, the ideal place for this machine is to be under the administration of the United Nations because that is genuinely a global organization. Okay?

[AMT - editor: see https://www.itu.int/pub/S-GEN-UNACT-2019-1
"A joint-effort between ITU and over 35 other UN agencies and bodies, and all partners of 2019's AI for Good Global Summit, this report provides information about the diverse and innovative activities related to artificial intelligence (AI) across the UN system to meet many of our world's increasingly urgent challenges, from responding to humanitarian crises to tackling climate change."]

Aaron Turner:
Right. The goal of this project is to actually build the machine. Now, a lot of other people that are thinking about AGI, and investigating AGI, I'm not sure anybody yet has said, "Okay, we're actually going to build a machine." OpenAI might have, I'm not sure. Anyway, that's the goal of this project. Building a maximally safe, maximally benevolent and trustworthy AGI will require: you've got to have a technical plan, so an architecture; you've got to have an organizational plan because it takes lots of people, and you have to organize them. Okay? It will also require quite a bit of money, 100 million a year for 50 to a hundred years. This is what I'm scoping it at. Okay? It's not like a 10-year project, it's not like a 20-year project.

Aaron Turner:
Realistically speaking, it will take 50 to a hundred years. This is less money really, I mean, $5 billion is not a lot of money, relative to the benefit. This is less money than people might expect and more time than some people might hope. A lot of people in the AGI world will tell you, yeah, we can do it in 10 years, we can do it in 20 years, whatever. Maybe they can develop something in 30 years maybe, but it's not necessarily going to be safe. That's the really hard thing. Okay? Making it safe, benevolent, and trustworthy is hard, and for safety purposes you just have to let it take as long as it takes.

Aaron Turner:
There is a certain point in the Big Mother roadmap called C4 where the safety concerns really become prevalent, and to be honest, if it needs to take another 50 years beyond the hundred years, or if it needs to take an extra 100 years, we should just let it take that long rather than cut corners, because the consequences of getting it wrong are just too great. Also, strangely enough, the project is primarily about funding, not about technology. I know it's easy to... What's happening with the lights. It's easy to focus on the technology, I think. Look at all these difficult problems we've got to solve. Okay, fine. Thanks Pawel.

Pawel:
Where's the...

Aaron Turner:
They're to the right of the door. Okay. Nevermind. It doesn't matter, as long as you can all... That's brilliant too. Just make sure everybody can see the screen.

Speaker 1:
[inaudible 00:20:22].

Aaron Turner:
No. Okay.

Pawel:
I'm not sure I've done anything. I didn't do anything.

Aaron Turner:
Oh, you didn't do anything. Okay. Switch...

Pawel:
Actually, there's a motion sensor here. I activated it.

Aaron Turner:
Okay. Let's switch some of them off so that the screen is not quite so... Okay?

Brier:
[crosstalk 00:20:39]. That's better.

Aaron Turner:
Cool. Too many. Okay. Okay. As long as people can see the screen, that's... Oh God.

Pawel:
Is that enough?

Aaron Turner:
Okay. No, that's fine, Pawel, just...

Brier:
Yeah, that's good. That's good.

Aaron Turner:
That's good.

Aaron Turner:
Okay. If you can't see the screen-

Brier:
Yeah, we can.

Aaron Turner:
... just tell me and we'll send Pawel down to the light switch again.

Aaron Turner:
Okay. Yeah, the project is really about funding, not the technology. The technical problems, although they are considerable, are all solvable and the skills, the required skills are already there. I mean, you don't have to be in Cambridge for that long before you've met enough people to realize the skills, the expertise is out there to solve this problem. But most of the AI world are not really coordinated. They're not working in any kind of coordinated fashion, it's more like Brownian motion, all going off in their own little directions, which is the same for all of science really. Anyway, the key problem in this project is funding, and we actually have a funding work group whose job is to find the funding and it will be the funding that's the bottleneck in respect of progress. Okay?

Aaron Turner:
Right? So why are we here today? Infotainment. It's fine if people just want to see an interesting presentation, that's not a problem, but also, we're trying to get the collaborative ball rolling. We need volunteers, technical and nontechnical. We need donors of all types and sizes because like I just said, the rate of progress will be determined by funding. However much money we have in any particular year, that will determine what we can do in that year. And if it's twice as much the next year, we'll be able to do twice as much technical work. So it's all about funding in the end.

Aaron Turner:
Now, we also need what I call vertical domain experts. These are, as you'll see when I show you the roadmap, there's a 19-step roadmap, a technical roadmap to get us from where we are now to a deployed machine. Each step of the roadmap requires certain expertise. There are people in the universities or whatever who are experts in each particular area, whether it's logic, or robotics, or natural language, or whatever. And I call those people vertical domain experts. Okay? It is literally impossible to be a vertical domain expert in all 19 roadmap [areas]. Okay? I'm a horizontal expert because I know just enough about each domain to be able to stitch it all together, so that when you integrate it, it works the way it's supposed to work. Yeah.

Aaron Turner:
We get onto some slightly more technical questions now.

Back to index

Clip 03:

Back to index

Transcript 03:

Aaron Turner:
What is AI? In all the books... Can you read that, honestly?

Brier:
Mm-hmm (affirmative).

Aaron Turner:
Yeah. Okay. In all the books you will see definitions of AI and they're all different. Every single one. I have my own way of thinking about it. It's all about cognition, and cognition in humans is what happens here. What happens in your head. It's got nothing to do with peripheral things like vision, or speech, or walking, whatever. The cognition is what happens in the mind, and the other things, the peripheral things are added on outside. There's two ways of thinking about it. You can design something like this from the outside in, starting with the peripherals, senses and things. Or you can design it from the inside out, starting with the sort of cognition that's at the center of it all. So I think from the inside out. Okay?

[AMT - editor]:
[Apologies, let's try again! If you look at many of the historically classic AI projects, you will see that they typically incorporate some aspect of the kind of "I/O" that an intelligent machine needs in order to interact with the physical universe - natural language, vision, robotics, etc. This is of course perfectly natural and reasonable. But then, in order to implement the intelligent behaviour that they're aiming for, the AI researchers in question necessarily have to implement (internal to their system) some kind of knowledge representation (tailored to their particular I/O-focussed application), some kind of reasoning mechanism (tailored to their particular I/O focussed application), some kind of planning mechanism (tailored to their particular I/O-focussed application), some kind of learning mechanism (tailored to their particular I/O-focussed application), etc. And so they all end up implementing I/O-specific versions of (very broadly speaking) the same cognitive infrastructure. So I reasoned (and this was way back when I started, around '84-85) that it would be much better instead to implement a generic, *I/O-neutral* cognitive infrastructure comprising (what eventually morphed into, as I thought about it a little more) representation (i.e. some kind of formal logic), deduction (reasoning), abduction (planning etc), and induction (learning). Given this, any particular "I/O device / data" such as audio (hearing, speech), video (vision), robotics etc could then be attached to this generic, I/O-neutral, "core" cognition. In this way, not only do you only have to implement this generic core cognition once, but you end up with a "multi-modal" knowledge representation mechanism that accurately relates percepts / concepts / beliefs pertaining to many different I/O devices, which is how (IMO) it should be if world knowledge (about the entire physical universe) is to be properly represented (and if it's not represented properly, then you have *zero* chance of ever being able to process it properly!) Cue (I suspect) lots of counter-arguments re I/O-specific representations and processes being more time/space efficient etc. Yes, granted this is probably the case, but, for me, getting the information represented and flowing around the system correctly comes first, speedups come second (via various mechanisms which I will be describing later on in the presentation). So, I hope now I have explained myself a little better. "Peripheral things" are anything "I/O" related, i.e. attached to an external sensor or effector that interacts with the physical universe. "Core cognition" is internal, generic, I/O-neutral cognition that represents and processes information received from / destined for peripheral I/O in an I/O-neutral way.]

Aaron Turner:
What is cognition? Cognition is multidimensional. For any machine M, the most important dimension is [what I call] the universe of discourse. For some machines it might be just numbers. This is like if you were machine M, and in fact you are a machine M, but imagine you are machine M and you're looking out at the universe, your universe, what do you see? Right? So for some machines they will look out and all they will see is numbers, because that's their universe. Some machines will look out and they will see games of chess; some machines will look out and they will see set theory; some machines will look out and they will see the physical universe. This concept of what exactly the machine's universe of discourse is is very important because it makes a massive difference to the rest of the machine.

Aaron Turner:
The other dimensions are [mostly] cognitive processes, which act on an internal model of whatever your universe of discourse is. For example, if your universe of discourse is numbers then your cognitive processes might be something just as simple as multiplication or addition. If it's games of chess then it might be game moves; set theory, it might be something called witness synthesis; the physical universe, one of your cognitive processes might be induction. There are some additional dimensions, for example the amount of compute that you have, and the amount of experience that you have. Basically, if you have more compute, you'll have more intelligence. If you have more experience, more knowledge of the universe, you'll have greater intelligence. So superintelligence, by my definition, the way I think about it, in respect of a particular cognitive dimension means greater-than-human capability along that cognitive dimension.

Aaron Turner:
With my definition, computers have already been superintelligent since the 1940s. Okay? So as far as the, for example, the universe of discourse of numbers is concerned, computers are already superintelligent, have been for a long time, and basically with each year and decade that passes, they will become superintelligent [in respect of] more and more general universes of discourse, and more and more complex and general cognitive processes. I personally don't think there's going to be a singularity that happens all of a sudden [some Tuesday afternoon, about 3 o'clock ;-)]

Aaron Turner:
What is AGI? If you've got all these cognitive processes, sorry, cognitive dimensions, imagine how far you can take the capability in each dimension. If you've gone as far as you can in each dimension, then you've got an AGI basically. I think in terms of seven dimensions of AGI, and particularly the universe of discourse for any machine is going to be some subset of all of mathematics and the entire physical universe. In the limit, of course, it will be both of those things. Again, when you're talking about AGI, which is maximally general, the cognitive processes are induction (which we looked at earlier) - induction observes the universe of discourse and guesses an internal model of it. And there's deduction, so for example, as an example of deduction: all men are mortal, X is a man, therefore X is mortal. That's fairly straightforward classical logic.

Aaron Turner:
Abduction is sort of reverse deduction. If you know something like X is mortal, you can try to find something, hypothesize something that explains why X is mortal. If X is mortal, maybe X is a man, but that's actually a guess because X could be a tortoise or whatever. There's lots of possible explanations. When you're going in the other direction, you're guessing.
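As a toy illustration of the difference (the rule base and function names below are invented for the example, and grossly oversimplify both processes):

```python
# Toy contrast between deduction and abduction. The rule base is invented
# purely for illustration.
rules = {                       # "if X is a <key> then X is <value>"
    "man": "mortal",
    "tortoise": "mortal",
}

def deduce(fact):
    """Forward: from 'X is a man' safely conclude 'X is mortal'."""
    return rules.get(fact)

def abduce(observation):
    """Reverse: from 'X is mortal' guess possible explanations - unverified!"""
    return [premise for premise, conclusion in rules.items()
            if conclusion == observation]

print(deduce("man"))       # 'mortal'  (a sound conclusion)
print(abduce("mortal"))    # ['man', 'tortoise']  (competing hypotheses, mere guesses)
```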

Aaron Turner:
Again, experience is another dimension. If you've got induction, which is your learning cognitive process, where you observe the universe and guess a model of it, the more, the larger your history of observations, the better your internal model of the universe is going to be. Right? If you're a 50 year old, you're going to have a better understanding of the universe than if you're a five year old basically.

Aaron Turner:
Again, compute; all other things being equal, if you've got 10 orders of magnitude more information processing, and memory and everything, than another machine, you are essentially going to have effectively greater intelligence. Now, I've also put this seventh one in, which is benevolence, which is so important to this application, because not all machines are going to be as benevolent as others. Some will be... Sorry, benevolent with respect to humans. So some will be [malevolent!]... Okay, I'm not even going to try to say it. Some will be indifferent and some will be benevolent. It's a sort of a scale.

Aaron Turner:
Now, I've deliberately excluded consciousness and emotion from this model. We will be looking at consciousness later. Like I say, it's a very long presentation. We'll be looking at every relevant aspect of this problem.

Aaron Turner:
We can now try to apply this idea, these seven dimensions of AGI, to different machines. Okay. So here are some contemporary industrial robot arms which most people will be familiar with. So if we think about these robot arms, as far as these seven dimensions are concerned, the universe of discourse of these robot arms will be sequences of end effector positions with six degrees of freedom. [They have a] very, very specific universe of discourse. Induction - they don't really learn, they are typically programmed by a human operator just led by the nose. The guy will pick up the arm, and he'll just move it here and say weld here. And he'll move it here and he'll say whatever. Okay, so there's minimal induction going on. There's essentially no deduction and no abduction. Their experience, they will literally have one training example. Their compute, I don't know exactly. These machines might have a few PIC processors or whatever, but broadly speaking, they don't have a lot of compute. And they're completely indifferent when it comes to benevolence.

Aaron Turner:
They don't even understand what a human is. If you search online, you will find lots of examples of humans being crushed by these machines. You know, they don't even know that humans are there. So to all intents and purposes, yes, they're called robots, but they have zero cognition. All right, so let's go up a notch.
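Before moving on to the next example, the robot-arm profile just described could be captured informally as a record like the one below. The field names and the coarse values are a paraphrase of the talk, invented for illustration, not a formal definition of the seven dimensions.

```python
# Informal record of the "seven dimensions of AGI", filled in for the
# industrial robot arm example above. Field names and the rough 0-1 scale
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class CognitiveProfile:
    universe_of_discourse: str
    induction: float      # 0 = none, 1 = logico-mathematical ideal
    deduction: float
    abduction: float
    experience: str
    compute: str
    benevolence: str      # e.g. "malevolent", "indifferent", "benevolent"

industrial_robot_arm = CognitiveProfile(
    universe_of_discourse="sequences of end-effector positions (6 degrees of freedom)",
    induction=0.01,       # "led by the nose" by a human operator
    deduction=0.0,
    abduction=0.0,
    experience="literally one training example",
    compute="a few small embedded processors",
    benevolence="indifferent - doesn't even know humans are there",
)
```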

Aaron Turner:
Neural networks. Not going to describe what these are in detail here [check out the Fast.AI course: https://course.fast.ai], but the universe of discourse for a neural network is typically some very, very small subset of the entire physical universe like, I don't know, radiological scans or MRI scans or something like that. Relative to the entire universe, it's a very, very tiny aspect of it.

Aaron Turner:
Induction. Yes. These neural networks guess their weights and biases using back propagation and gradient descent. So there is some induction going on. Is there deduction? Yes. Once they're trained, they can make statistical inferences. If they get a new case that they haven't seen, they can guess what the relevant classification is or whatever. Abduction - they essentially don't have any abduction. Experience - when they were being trained by the inductive process. Yeah. They might have 10,000 or a million training examples. They might have 10 GPUs, quite a lot of compute, at the training stage, just one maybe at the inference stage. And again, they are indifferent as far as the benevolence to humans [is] concerned; they don't understand what a human is.
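For anyone who hasn't seen it, here is roughly what that inductive weight-guessing looks like in miniature. This toy fits a single weight by gradient descent; real networks have millions of weights and use backpropagation, but the principle is the same. The data and learning rate are invented for the example.

```python
# Toy "induction" by gradient descent: guess a single weight w so that
# y ~= w * x fits the observed data.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]    # observations of roughly y = 2x

w = 0.0                                        # initial guess
learning_rate = 0.05
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                  # descend the loss gradient

print(round(w, 2))   # close to 2 - the "belief" induced from the data
```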

Aaron Turner:
But you can see we've progressed from the previous, you know, robot arms. Okay. Now broadly speaking a typical neural net has the cognition of a flock of pigeons. You can look up this paper where this person trained pigeons essentially to recognize cancerous cells or whatever. But that's roughly the level of cognition because the flock of pigeons have no understanding what ... well, no understanding of what they're looking at. They just do simple pattern recognition. And that's essentially what a standard neural net is.

Aaron Turner:
Now, autonomous vehicles, probably the most sophisticated contemporary AI application there is. Some of you might think otherwise, but roughly there's a lot of money being put into these things at the moment. So the universe of discourse, again, it's larger than the previous example. It's that subset of the universe that is relevant to driving. These cars typically will have multiple neural nets, quite sophisticated ones.

Aaron Turner:
Again, they can make statistical inferences once they've been trained. And navigation is an instance of planning, which is an instance of abduction. So yes, you could say that they have all three of these cognitive processes to some degree. And experience - usually they'll have millions of training examples, and the companies developing these systems will throw money at their training in terms of the number of GPUs that they use. Benevolence - this is actually an improvement on the previous example because they do actually try not to kill people; although, they have an extremely shallow understanding of what a "person" is. Okay. But it's still more than the previous machines. So they have considerably more substantive cognition than straightforward machine learning. But there's still a very, very long way from maximally cognitive, artificial general intelligence.

Aaron Turner:
Right! We can go up a notch to humans, and we can actually apply these seven dimensions to humans. Now, humans are carbon-based biological machines. The universe of discourse is the entire physical universe. Induction, learning. Sorry, yes?

Johannes:
What about mathematics? What about abstract learning? [inaudible 00:38:20].

Aaron Turner:
Yeah. You're probably right. Sorry.

Johannes:
I barely think [inaudible 00:38:29].

Aaron Turner:
Well, yeah, you're right. Although, humans have been designed by evolution to deal with the physical universe. They were not designed by evolution to explore abstract mathematical concepts. That's almost like a side effect of their construction, so-

[AMT - editor]:
[Later on we will see that Big Mother is structured as an "inner cognition" (whose UoD is explicitly "all of mathematics") on top of which we then build an "outer cognition" (which connects the inner cognition to the physical universe and whose UoD is thus "the entire physical universe"). And so, for Big Mother, "all of mathematics" is essentially "built in" in a way that is fundamentally different from the way in which humans (for whom mathematics is definitely not "built in") have subsequently discovered and developed mathematics etc as a byproduct of the inquisitive behaviour that has been bestowed upon us by evolution.]

Johannes:
Wouldn't you say that universities are kind of part of evolution? What would you say to that?

Aaron Turner:
Sorry?

Johannes:
I think universities and math are part of evolution.

Aaron Turner:
Okay. All right, fine. No, I take your point. Fine. Maybe I've been a little bit unfair to humans here. [Back to] cognitive processes. Induction. Sure, humans are good at learning, but humans are pro ...this learning is not perfect. Humans are prone to forming beliefs that are unsupported by evidence. [This is] called weak induction rather than strong induction, and just look around you. Just look on the internet. Okay. You will find many, many examples where this is the case.

Aaron Turner:
Deduction. Again, humans can deduce things. It's not really built in to their cognition. It's something that [mostly] has to be learned, taught, but again, people are ... it's not perfect in humans. Humans are prone to reaching invalid conclusions via logical fallacy. Again, just go down the pub, and listen to people talking about whatever. And you will find many, many, many examples of this.

Aaron Turner:
Abduction. I know this is a more complicated topic, but basically people again are prone to believing ... sorry, I have to say ... yes, humans can perform abduction. It's actually extremely common and extremely important, probably much less well understood than the other two mechanisms. But humans, again, are prone to believing unverified, speculative hypotheses. So yes, an abductive hypothesis, basically, if you make an observation and you're trying to explain that observation, you can construct an abductive hypothesis. But without confirming, without verifying the abductive hypothesis, say with an experiment like a scientist would do, it is unsafe to believe that abductive hypothesis, to assume that it's true, for want of a better word. But again, many humans are prone to believing unverified, speculative hypotheses. And again, if you look around, it's maybe a little bit harder to spot, but you will see many, many examples of that. So again, yes, humans can perform induction. They can perform deduction. They can perform abduction, but it's imperfect, far from perfect in fact.

Aaron Turner:
Okay. Experience. So when humans are observing the universe, they are basically limited to one human lifetime from one set of senses. So ... Compute. How much compute do humans have? The human brain weighs about three pounds, consumes about 20 watts. It is incredibly energy efficient, about four orders of magnitude, so 10,000 times, more energy efficient than current computing circuits. Humans have very poor memory. Okay. They also have other cognitive limitations. If you look up a paper called "The Magical Number Seven, Plus or Minus Two" [https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two] which is a very old paper, written in I think 1956, about the limits of human cognition, you'll see what I mean.

Aaron Turner:
There's also another book called "Thinking, Fast and Slow" [https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow] which compares two different systems of reasoning that humans use, called system one and system two. Anyway. Again, I'm just pointing out that human cognition along this particular dimension is not perfect by any stretch of the imagination, and I have to say humans, when it comes to their benevolence towards other humans, are overwhelmingly focused on short-term self-interest. I don't want to sound like an old cynic, but again, just look around you. What motivates people and collections of people, whether it's individuals, [politicians], corporations, [political parties], countries, whatever? They generally look after their own self interests rather than mankind's as a whole, and they generally have a very short event horizon. This leads to a lot of problems. So what I'm saying is human cognition is significantly flawed. It's not perfect. It's very easy to think that, yes, humans are the most intelligent species on the planet, but you cannot conclude from that that human cognition is perfect because it simply is not. It is a long way from being perfect.

Aaron Turner:
Now, if it's not perfect, that means it can be improved upon. Okay? There's plenty of scope along each of these dimensions for super-intelligent capability. We've just got to design it. Okay? All right. Okay.

Aaron Turner:
We can apply these seven dimensions to Big Mother, which is a silicon-based electronic machine. And again, here the universe of discourse is all of mathematics plus the entire physical universe. Later on you'll see, I have specifically designed ... I specifically start with all of mathematics (with basically set theory). So it's kind of designed in rather than something which happens later, and that's the distinction I made really between humans [and Big Mother]. But it's arguable, debatable.

Aaron Turner:
So again, this is a design goal. It's okay. Yeah.

Pawel:
Good?

Aaron Turner:
Okay. There you go. See?

Pawel:
Wow.

Aaron Turner:
So what we want to do is to approach the logico-mathematical ideal when it comes to induction, when it comes to designing our learning cognitive process. And we do that by thinking about induction, not from a sort of cognitive science or neuroscience perspective, but simply from a mathematical perspective. We try to understand it mathematically. We try to imagine the ideal way of performing induction, and we try to implement that. Similarly for deduction ... some of you might have heard of first-order logic and things like that. That is approaching the logico-mathematical ideal of deduction. And similarly for abduction.

Aaron Turner:
Now when it comes to experience, the machine might live for a hundred years, a thousand years, a million years. Who knows? So it's not limited, and it might be attached to a million cameras [and other input devices of many different kinds] or whatever. So it has the potential to gain many more observations of the universe than an individual human could, and therefore, potentially can build a better model, a more accurate model of the universe than humans can.

Aaron Turner:
Compute. Okay. So when you're designing your first AGI, you need all the compute you can get your hands on, and that's why Big Mother 1.0 will be a supercomputer. Supercomputers weigh about 400 tons currently. They consume about four megawatts of power. They do have perfect memory, et cetera. Now there's a particular computer scientist, Uncle Bob ... What's his name? Robert Martin, who, in one of his recent texts on object-oriented programming ["Clean Code: A Handbook of Agile Software Craftsmanship", Prentice-Hall, 2009], described how over 50 years of programming he had seen 22 orders of magnitude improvement in processing power per buck. 22 orders of magnitude!

Aaron Turner:
Now it is not necessary to know where it's going to come from exactly. But if you consider that this project is 50 to 100 years long, it would not be unreasonable for us to expect maybe 10 orders of magnitude, maybe even 20 orders of magnitude [improvement] over the course of the project. Okay. There is precedent for it, and we're going to need it. Let me tell you. Induction, I'll tell you now. Induction is what is going to require all that compute. We need many orders of magnitude more compute than we have now.
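As a back-of-the-envelope check on what those figures imply as a sustained annual rate of improvement (nothing more than arithmetic):

```python
# Back-of-the-envelope: what sustained annual improvement do these
# orders-of-magnitude figures imply over a 50-year span?
years = 50
for orders in (22, 10):
    factor_per_year = 10 ** (orders / years)
    print(f"{orders} orders of magnitude over {years} years "
          f"= roughly {factor_per_year:.1f}x better every year")
# 22 orders over 50 years -> ~2.8x per year (Robert Martin's historical figure)
# 10 orders over 50 years -> ~1.6x per year (about the classic 18-month-doubling pace)
```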

[AMT edit: From Max Tegmark's 2017 book, Life 3.0: "My MIT colleague Seth Lloyd has worked out what [the] fundamental limit [on computing power] is, [and it's] a whopping 33 orders of magnitude beyond today's state of the art for how much computing a clump of matter can do."]

Aaron Turner:
It will come, I'm sure. Induction applied to large data sets: imagine an intelligent machine with thousands of cameras and things like that trying to do pattern matching over this massive volume of data. It requires a lot of compute. So benevolence - again, because this is a design goal, the machine will be focused entirely on maximizing human happiness, which is again an improvement on humans, which are ... entirely self-interested. The only thing that the machine "cares about" is its goal. And if its goal is to maximize human happiness, then that's all it would "care about". So given that, Big Mother will be super-intelligent along all of these dimensions. I put a little asterisk here because, believe it or not, it's compute which will be the bottleneck. The other things, the technical problems, yes, they will be difficult to solve, but compute will be, I believe, the practical bottleneck, especially when we get further down the roadmap. Okay. So the Big Mother roadmap describes how such a machine may be built.

Aaron Turner:
So our ultimate goal now ... these, again, are design objectives. These are design criteria. Okay. So as we're designing this machine, this is what we want. We want it to be ... we want it to really be super-intelligent compared to any human in a highly nuanced way, not in some clunky, disappointing way. We want it to really possess sufficient knowledge to be [genuinely] wise. And because it's super-intelligent, super-knowledgeable and super-wise, by definition, it will be able to perform any economic task better than any human. Yes?

Johannes:
But then, why is funding the problem? On the way to building such a thing, you should build things that make a lot of money. [inaudible 00:51:14].

Aaron Turner:
Yeah. Okay. Because [if you were to fund or part-fund the project via private equity] you would have a conflict of interest there. So yes. There's this massive dilemma if you're trying to do this, if you're trying to implement AGI. Do you do it with private money, private equity? Or do you try to do it philanthropically where people are just giving their money for the benefit of future mankind?

Aaron Turner:
Yes?

Johannes:
Is there actually any money available, philanthropically? Is there such a choice?

Aaron Turner:
Is there philanthropic money available? Yes, there is. It's just harder to get a hold of. But [crosstalk 00:51:50]. Sorry?

Johannes:
A hundred million per year?

Aaron Turner:
Oh my God. Easily, yes. Easily. Yeah. Especially once you start to get governments involved. Okay. I mean, it's not a lot of money, but-

Johannes:
Are governments on the whole philanthropic [inaudible 00:52:04]?

Aaron Turner:
No, because they're all self interested. Okay. It's a difficult problem. But the alternative of private money, private equity ... and they'll be frothing at the bit because there's so much money to be made from-

Johannes:
No, if it can along the line do some economic tasks, eventually it will do any economic task.

Aaron Turner:
Yeah.

Johannes:
But if you can do some economic tasks, you don't even need private equity. You can just have it do [inaudible 00:20:34]-

Aaron Turner:
Maybe, yes. And in fact there is, if you'll notice, in the organizational plan ... there is a work group called Exploitation, and their task is actually to commercially exploit the technology as it's developed along the roadmap. Private equity, private investment is another possibility to accelerate things. But that comes with great potential for conflict of interest because they will then start to push the project in a direction that maximizes their return, which is not necessarily the best long term for the project.

Johannes:
Private equity will kill you [inaudible 00:53:22] all human kind.

Aaron Turner:
Yeah, I think so. You know, one AGI project, I forget which one it is, is it OpenAI, who just got $1 billion from Microsoft because they were facing this exact same dilemma. And we will have to confront it, but at the moment, I'm going down the philanthropic route with no conflicts of interest, with the option to exploit technology commercially as it emerges along the way, like you say.

Brier:
If you pair with a university, could you get research funding that way?

Aaron Turner:
Sorry?

Brier:
If you pair with a university. [inaudible 00:54:08].

Aaron Turner:
We will be ... the whole idea is we will be doing this collaboratively. So we will be pairing with basically anybody who will pair with us. Okay. Sorry we're running out of time. We can actually go over or we can just stop at any time and carry on next time.

Aaron Turner:
So yes, the machine will be able to perform any economic task. And again, this is a design goal. The machine will have to genuinely be able to tell with great accuracy when you're happy or sad or whatever because that is ... it needs to do that in order to be able to maximize happiness if that is indeed the dominant goal that we give it. And again, of course it has to be motivated by its dominant goal of maximizing happiness and nothing else. Finally, I just added this recently. The machine, I believe, needs to be more trustworthy, which means deserving of trust, than any human. So at the moment, these are design goals. They sound very lofty and very difficult. But as we progress through the roadmap, and it starts to become more concrete, hopefully it will start to become clearer.

Aaron Turner:
Okay.

Aaron Turner:
So cognition is multidimensional, like I said, and if you have something that's multidimensional, it's very difficult to order. It's not necessarily possible to order things, but we've seen by our previous examples that there is maybe a broad order. You can put these things into a broad order, and it's probably actually the universe of discourse that is the dominant factor in this regard. So, sorry, I've been a bit nerdy here. This symbol, this backwards E, means there exists. So there exists an AI spectrum from weak AI to strong AI, which is AGI. So there are the examples that I've used. Industrial robots right at the left, neural networks, autonomous vehicles, humans, and then Big Mother. Obviously it's not to scale or anything like that, but broadly speaking, you can see that there is this spectrum from zero cognition on one end to maximal cognition at the other. Weak AI, which I call tactical AI, which is roughly where we are now, moving into strong AI as the machines spread out along those cognitive dimensions.

Back to index

Clip 04:

Back to index

Transcript 04:

Aaron Turner:
Okay, now weak AI will gradually evolve into strong AI as follows. The potential value generated by AI is unprecedented in all of history. And the primary source of value generated by any AI system is cognition. So if you look at any AI system and take the cognition away, you've basically got nothing. It's cognition that makes it valuable. Really. Okay.

Aaron Turner:
Now contemporary AI, neural nets, yes, they [generate] some value, but they have minimal cognition, really, as we've seen relative to the full spectrum. They don't really have particularly ... a flock of pigeons - that's how much cognition they have. Now, at any given level of cognition, the value that it will generate will ultimately plateau because it's derived from the cognition. So whenever this happens, the owners of the AI, they want more value because they want growth every year, every five years, or whatever.

Aaron Turner:
And the way for them to achieve more growth, more value is to extend the cognition of their underlying systems. So yes, they may make their induction cognition more capable, or they may start adding other dimensions like deduction onto their system or whatever, or add more compute or whatever. So they will gradually, because they want more and more and more value, they will gradually build systems with more and more and more cognition along these various dimensions because that's where the value is. Okay. And these corporations typically think five years [ahead]. Five years used to be a typical investment cycle. It also just happens to be a political, an electoral cycle, roughly. Although, I have to say a lot of companies now are thinking less than five years ahead, especially in the tech sector.

Aaron Turner:
So anyway, after some number of these investment cycles, AI will naturally morph into roughly AGI. It's kind of unstoppable. And AGI, just by my rough calculation, will amplify gross world product [per capita] by at least an order of magnitude that's worth $800 trillion a year. So there is so much money in not just AI, but especially when you start getting stronger AI, and that's what will push it forward. Actually it's not just money. It's competitive advantage. So nation states ... Sorry?

Johannes:
What, per capita?

Aaron Turner:
Yeah.

Johannes:
So per person, $800 trillion?

Aaron Turner:
No, no, no, no. Sorry. Yeah, no, you're right. You're right. You found a mistake there. At the moment, world GDP ... not per capita, but world GDP is about 80 trillion. So multiply it by about an order of magnitude. Anyway, it's a lot of money. And as far as sovereign states are concerned, they're motivated by a different type of competitive advantage. They're motivated by strategic advantage, because we're basically tribal. Okay. So AGI is unstoppable. And it will happen by stealth. Even without this project, it will happen. If you just project it forward enough, it will happen.

Aaron Turner:
So given enough time, AGI is unstoppable, but then what happens? What's the end game? So by definition, AGI has maximal cognition in all cognitive dimensions and will therefore be able to perform any economic task better than any human. And eventually it will be able to do any economic task, not just better than any human, but more cheaply as well. It's just the natural progression. Everything improves every year, next year computers are going to be faster than they are now, et cetera, et cetera. Just look in the research departments of the various universities. You will see technology which will be emerging in 10 years or whatever. And it's always an improvement on what we currently have. So this will eventually happen. Now, no employer will ever freely choose to employ a human to do a worse job for more money than a machine.

Aaron Turner:
So if you project it far enough forward ... I know this is a very big difference from the world we have today, but if you project it far enough forward, zero human employment is the ultimate logical AI end game. There will obviously be great resistance to that, and governments will try to redistribute wealth and things like that. But nevertheless the pressure will always be there for the machines to do economic tasks rather than humans because they can do a better job for less money. Okay. So if you just project it forward, that's where it ultimately ends up. So pre-AGI, if you think of the global value chain, this massive nexus of producers and consumers that covers the entire planet, pre-AGI humans both produce into and consume from this global value chain. And in fact, if you want to consume, you have to produce.

Aaron Turner:
That's why we're basically slaves to work. If you want to consume from the global value chain, you have to produce into it. There's no choice there, okay? But post-AGI, machines will do all the production, and humans will just consume. Okay? Now, AGI is unstoppable. Zero human employment is unstoppable, but mass unemployment does not necessarily mean mass poverty. It all depends on how the money that's generated by AGI is distributed. Now, this is not just me. If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared. Or most people can end up miserably poor if the machine owners successfully lobby against wealth redistribution. So far, the trend seems to be towards the second option, with technology driving ever-increasing inequality. Now who said that? Stephen Hawking.

[AMT - editor - This might also be of interest:

18 Nobel Laureates gather in Saudi Arabia to discuss climate change, societal shifts and AI

"Professor Muhammad Yunus, chairman of Grameen Bank, who became a Nobel laureate in 2006 for his pioneering approach to microfinance for Bangladesh’s poor told attendees that the world is, in his mind, facing three major issues: climate change, the distribution of wealth and AI. Professor Yunus was impassioned when talking about AI’s threat to jobs, dismissing the suggestion that a ‘universal basic income’ is the answer. “After all this history of human beings, is it the fate of human beings to become beggars? That’s not the life I’d want to live. Artificial intelligence could be a blessing, or a curse. My feeling is it’s moving sharply in the cursed direction.”"

(https://www.thenational.ae/arts-culture/18-nobel-laureates-gather-in-saudi-arabia-to-discuss-climate-change-societal-shifts-and-ai-1.972493)]

Aaron Turner:
Okay? So if Stephen Hawking says it, it must be true. If I say it, people are like, "Who the hell are you?" But Stephen Hawking [perceived] this to be essentially the default. Just [as] there is an AI spectrum that goes from the narrow AI that we have now to the strong AI, from weaker to stronger AI, at the end when we get to AGI, there is a spectrum of possible AGI end games as well. So here we go.

Aaron Turner:
Worst case scenario, we've all seen Terminator. Some of us may have read the next book, Superintelligence by Bostrom [https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies], which outlines a number of unhappy scenarios. The next [report] along is by Brundage et al [https://arxiv.org/pdf/1802.07228.pdf]. It's about the malicious use of AI [and, slightly more recently, the AI Now 2019 Report explores various ways in which AI might cause societal harm: https://ainowinstitute.org/AI_Now_2019_Report.pdf]. I'm going to have to speak to Anton about this. Don't worry about... Yeah, see. The next picture is actually men queuing outside a soup kitchen during the [1930s Great] Depression. Then of course there's what we want.

Aaron Turner:
Oh right, here we go. AGIs, because they are super-intelligent, super-knowledgeable... They know everything we know, including how they're built. So any AGI is going to be able to self-replicate. So whatever we end up with, that's it. You will never put it back in its box if you build the wrong thing. No pressure. So it really would be stupid of us if we end up with the worst case, Terminator end game. Similarly, Bostrom's Superintelligence or the Malicious Use of Artificial Intelligence. They're all well known now by the AI and AGI community. It would be immensely irresponsible if we allowed any of these scenarios to occur, or at least the worst of them. Some of them will occur, because people will cut corners.

Aaron Turner:
So we mandate Big Mother must be maximally safe, etc. We also want Big Mother to be maximally benevolent, but this is not simply a technical design issue. How the AGI is deployed is a critical determining factor. In the AGI period, where the means of production is AGI, whoever owns the AGI... the AI owners will control the means of production. The major tech companies now are positioning themselves as the AI owners of the future. They are hoovering up all the PhDs. They are filing thousands of patents. Project it forwards. They will end up owning the AI and then ultimately the AGI. By default, AGI will be owned by profit-motivated private enterprise.

Aaron Turner:
So, simply because of the way the world works now, by default an increasing majority of the world's wealth will be owned by a decreasing minority of its population, i.e. the people who own the AI, and non-AI-owners will compete for the steadily shrinking pool of jobs which can't be done better and more cheaply by AI-based machines. Remember, in the limit, there simply won't be any jobs that can't be done better and more cheaply by AI-based machines.

Aaron Turner:
So, the alternative to private ownership is that AGI is publicly owned by all mankind, as per Big Mother, and it therefore generates wealth on behalf of all mankind, and the wealth that's so generated is shared. Now, it can't be shared on the basis of individual contribution by each person because there isn't any. I mean humans aren't contributing in this scenario, so it is instead shared on the basis of need, which I measure by their happiness or unhappiness. So if you're unhappy, you get a little bit more [attention, resource etc] to make you happy.

Aaron Turner:
That's the rationale behind the proposal that the Big Mother artificial general intelligence should be publicly owned by all mankind, the proposal that happiness should be roughly equally distributed and the idea of wealth without work. Something [James] said earlier, do people really want what you're proposing? Well, people quite often ask, "How will I be productive or find personal fulfillment if I no longer need to work?" Now, everybody will have a different view on this, but I'm pretty sure that the vast majority of people, if they were given the opportunity, would happily not work if they didn't need to.

Aaron Turner:
If you could just be paid anyway, but not go in to work, how many people would choose to go into work? I mean virtually nobody.

James:
I would.

Aaron Turner:
Okay. You're very, very lucky.

James:
I would. Otherwise, I don't understand. Right? Like you would just be asleep all day and just, you know, eat, and that'll be-

Aaron Turner:
No. It's a matter of choice. It's a matter of choice. You can do what you choose to do, not what you need to do in order to put a roof over your head or whatever. You can basically do whatever you want. If you want to learn the piano or learn gymnastics or just socialize and go out to lunch with your friends and things like that, you can choose to do that. If you choose... If you want to do something... Or climb mountains, all kinds of things, join a band, whatever. But if you choose to do something akin to what we would now [call] work, that is up to you. Fine. If you want to do that, do that.

Aaron Turner:
The point is you're not forced to do it. I believe most people would choose to do the more social things [etc], just to do other things. Not necessarily everybody. There are other ways to find fulfillment other than through work. I know it's difficult to imagine maybe because people get used to whatever they're born into, and this has been the way the world is for, I don't know, 10,000 years, 100,000 years, whatever. It's difficult to imagine an alternative, but there are ways to find fulfillment other than through work.

Aaron Turner:
I mean there are, believe it or not, plenty of people who are wealthy enough who don't have to work. Trust me, the last thing they want to do... The last thing they want ever to happen in their lives is for them to lose all their money and have to go to work. And of course we are talking about a future where a super-intelligent maximally-benevolent machine genuinely wants you to be happy. So if there's somebody who can't be fulfilled or happy or whatever without working or who doesn't want to be nannied, then the machine, when it's generating its plans, can devise an individualized plan for that particular person in order to maximize their happiness.

Aaron Turner:
Don't worry. I'm not trying to force these particular solutions on anybody. There's 50 to 100 years of collaborative effort, where we can talk about these things, but this is my current position on these issues. The target [AGI] end game, as far as we're concerned, is to get as far to the right of the spectrum, as close to the maximally-benevolent utopia as possible. But we are not going to be the only ones in the world building AGIs. There will be the big tech firms, and there will be national governments who will pursue AGI for their militaries and things like that.

Aaron Turner:
We can't control what they do. We can only control what we do. In this wider context, our overall objective is to push the final aggregate [AGI] end game as far to the right as possible. Just [the] fact that our project exists will influence other projects because we're trying to be the gold standard, maximally safe, maximally benevolent, maximally trustworthy. Hopefully, we will, as a minimum, push the final end game further to the right, further towards utopia or near utopia than it would have been without us. Practically, maybe that's the best that we'll be able to achieve.

Aaron Turner:
There are some obstacles to achieving this objective. Now, what is the common factor between all of these: World War I, World War II, [the] Holocaust, every conflict/war/genocide in human history, global warming, climate change, ozone depletion, ocean acidification, the Great Pacific Garbage Patch, deforestation, the Holocene extinction, imminent ecological collapse, global wealth inequality, global debt crises? They're all man-made problems, all of them.

Aaron Turner:
Now, it was humans that made the sequence of decisions that led to all those things. In other words, those problems, every man-made problem in history, are a result of essentially the flaws in human cognition, which I mentioned earlier. Human cognition is seriously flawed relative to the logico-mathematical ideal: humans are prone to forming beliefs unsupported by evidence, prone to reaching invalid conclusions via logical fallacy, prone to believing unverified [abductive] hypotheses, and overwhelmingly focused on short-term self-interest. You put all those things together, you end up with World War I, World War II, yada yada yada.

Aaron Turner:
It's humans that made those things happen, and it's specifically human cognition that made those things happen. So by far the dominant obstacle to the humanitarian application of AGI is humanity itself, particularly short-term self interest - individual self interest, organizational self interest, and national self-interest. Mankind has, I'm afraid to say, an appalling track record, but we've got to overcome the self-interest obstacle when it comes to AGI because if we don't, the consequences are unimaginable.

Back to index

Clip 05:

Back to index

Transcript 05:

Aaron Turner:
[Hopefully] my strategy has been clarified, the Big Mother strategy has been clarified. There is ample scope for super-intelligent cognition along all these seven dimensions. Contemporary AI will evolve into a strong AGI by stealth. AGI is therefore unstoppable, and so is zero human employment. It may be hard to imagine, but you just project it far enough forward. That's the logical conclusion. Post-AGI, machines will produce everything and humans will consume. A maximally-benevolent AGI, if we can mitigate self-interest, [one that] is publicly owned by all mankind, working for the many, not the few...

Aaron Turner:
Now, I made that line up. Some political party, I believe, plagiarized that. So yes, the maximally-benevolent AGI will also do all the work, so humans are free to simply enjoy life, and the wealth would be shared equally on the basis of need. Need is determined by whatever allocation maximizes happiness.

Aaron Turner:
Everything I've [just] told you is fantasy, unless you actually know how to build an AGI. If you don't know how to build an AGI, then it's just a pipe dream. I mean we're just sitting there talking about things that will never ever happen, but if you do know how to build an AGI, if you can see a path to it, then it all becomes real. All of those long-term potential benefits, near utopia, wealth without work, maximizing happiness, they all become genuinely plausible possibilities if you know how to build the machine.

Aaron Turner:
Now, in my mind's eye, I can see a 19-step path to AGI. It took me 35 years to work out and, like I said, this is a very long presentation. It might be six... It's not going to be six. It might be six talks like this. It might be 10. It might be 15. I don't know, but over the remainder of this presentation, I will attempt to describe this road map in sufficient detail that by the end of it, if you're determined enough, you will be able to see, in your mind's eye, a plausible path from where we are now to the machine, Big Mother. Then, of course, you'll be able to see the potential for the benefits that I've described.

Aaron Turner:
Then maybe some of you will help me build it. Pawel, I'm looking at you. Now, I'm 58. I don't expect to live long enough to see the machine built. Realistically, it's going to take 50 to 100 years. I'm unlikely to see it finished. My goal now is to take the project as far as I can. It will be up to you, AI younglings out there, to actually complete it. Shall we begin?

Aaron Turner:
We have five minutes left. Yes, that is my "you have five minutes left" alarm. We can actually go over a little bit if we want, but what I've got now is a...

Back to index

Clip 06:

Back to index

Transcript 06:

Aaron Turner:
I just want to end with an overview of the rest of the presentation, which will probably take me a year to deliver [AMT edit: at the rate that things are going (as of March 2020), this might be a slightly optimistic estimate!] I'll be looking at the various pieces of the AGI puzzle, and it really is like doing a jigsaw puzzle. You basically search for pieces of your jigsaw puzzle, and you say like... Gaussian processes, I was looking into the other week. Is it potentially a piece of my jigsaw puzzle? Will it lead to a piece of my jigsaw... Don't know. You're just searching for these pieces of the jigsaw puzzle. When you find one, maybe you modify it a little bit, and then it becomes part of your system.

Aaron Turner:
I'll also be talking a little bit about the history of the project from 1983 to now. I'll be talking about the technical plan and the organizational plan, how we will collaborate to actually build it. Just expanding on those a little bit, I'll just go through it. When I actually come to write these slides, I may obviously change things a little bit, but this is roughly how it's going to be.

Aaron Turner:
There are lots of different ways of building an AGI. I'm just proposing one particular method, which I have been led to by logic. I just literally started with a blank piece of paper. I tried to work it out logically, systematically, and this is where I ended up. Other people have got different ideas, they followed different paths. By the way, there's a very famous computer science paper called the Next 700 Programming Languages, and that's sort of a pun on that.

Aaron Turner:
We are not building a human. There's already a mechanism for that. I will describe the top-down design process, how you actually build, from an engineering perspective, something this complex. I'll describe something called the unbreakable iron triangle, which is the tension between quality, functionality and cost.

Aaron Turner:
Remember, the three cognitive processes are deduction... Well, they're induction, deduction and abduction. I'm going to describe each one, and I've written the slides assuming essentially an intelligent, but not necessarily technical, audience, because when we build this machine, it won't just take AI PhDs, it will involve an awful lot of people: support people, donors, government policymakers and things. They will need to understand it as well to a degree. So I've rather gone back to basics.
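
[Editor's note - purely as an illustration of the terminology (this is Peirce's classic "bean" example, not anything specific to Big Mother), the way the three inference types rearrange a rule, a case and a result can be sketched in a few lines of Python:]

    # Peirce's bean example: the three inference types rearrange a rule,
    # a case, and a result.
    rule   = "all beans from this bag are white"
    case   = "these beans are from this bag"
    result = "these beans are white"

    inferences = {
        "deduction": (rule, case, result),    # certain: truth-preserving
        "induction": (case, result, rule),    # probable: generalising from observations
        "abduction": (rule, result, case),    # plausible: inferring an explanation
    }

    for name, (premise_1, premise_2, conclusion) in inferences.items():
        print(name + ": " + premise_1 + "; " + premise_2 + "; therefore " + conclusion)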

Aaron Turner:
I will be describing formal systems, propositional logic, the 4,000-year development of first-order logic... It's probably more like two and a half thousand years, but I'm just exaggerating a little... then a logic called universal logic, which is basically an extension. It's first-order logic plus definitions. I'll be looking at infinities, different sizes of infinities, and NBG set theory, which can basically be built on top of first-order logic.
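
[Editor's note - as a purely illustrative sketch of the kind of formal-logic machinery being referred to here (propositional logic only, and in no way the project's actual "universal logic"), here is a toy Python representation of formulas, with validity checked by brute-force truth tables:]

    # A toy representation of propositional formulas as nested tuples,
    # with validity ("tautology") checked by brute-force truth tables.
    from itertools import product

    def variables(formula):
        """Collect the variable names occurring in a formula."""
        if formula[0] == "var":
            return {formula[1]}
        return set().union(*(variables(sub) for sub in formula[1:]))

    def evaluate(formula, assignment):
        """Evaluate a formula under a truth assignment {name: bool}."""
        operator = formula[0]
        if operator == "var":
            return assignment[formula[1]]
        if operator == "not":
            return not evaluate(formula[1], assignment)
        if operator == "and":
            return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
        if operator == "or":
            return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
        if operator == "implies":
            return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
        raise ValueError("unknown operator: " + operator)

    def is_tautology(formula):
        """True if the formula holds under every possible truth assignment."""
        names = sorted(variables(formula))
        return all(evaluate(formula, dict(zip(names, values)))
                   for values in product([False, True], repeat=len(names)))

    # Modus ponens, ((p -> q) and p) -> q, is a tautology:
    p, q = ("var", "p"), ("var", "q")
    print(is_tautology(("implies", ("and", ("implies", p, q), p), q)))   # True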

Aaron Turner:
There are lots of different set theories. There's Zermelo–Fraenkel, Morse–Kelley, whatever. I chose NBG, von Neumann–Bernays–Gödel. I'll be looking at metamathematics, which is... Yeah, I'll be describing metamathematics a little bit, because there is essentially another version of universal logic, which is essentially an infinite stack of logics, each one serving as the metamathematics of the next.

Aaron Turner:
Then, once the machine is interacting with the real world, the physical universe, everything's uncertain, everything's probabilistic. So we'll be looking at adding uncertainty on top of NBG set theory, which is relatively straightforward, but it's a sort of complex topic. We'll be looking at abduction, or particular types of abduction: program verification, i.e. mathematical proof of the correctness of programs, which will be an important aspect from a safety perspective; program synthesis, which is the automatic generation of programs along with their proofs of correctness; hardware synthesis; and systems synthesis, the synthesis of mixed hardware-and-software systems.
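
[Editor's note - to make "program synthesis" slightly more concrete, here is a deliberately naive Python sketch that searches for a small arithmetic expression matching a specification given as input/output examples. A real synthesiser of the kind described here would work from a formal specification and would also produce a machine-checkable proof of correctness, which this toy does not:]

    # Toy synthesis-by-enumeration: find a small arithmetic expression over x
    # that matches every (input, output) example.  (eval is fine in a toy.)
    def synthesise(examples, max_depth=2):
        def expressions(depth):
            yield "x"                          # the variable
            for constant in range(4):          # small integer constants 0..3
                yield str(constant)
            if depth == 0:
                return
            subexpressions = list(expressions(depth - 1))
            for left in subexpressions:
                for right in subexpressions:
                    for operator in ("+", "-", "*"):
                        yield "(" + left + " " + operator + " " + right + ")"

        for expression in expressions(max_depth):
            if all(eval(expression, {"x": x}) == y for x, y in examples):
                return expression
        return None

    # Specification by examples: the intended behaviour is x*x + 1.
    print(synthesise([(0, 1), (1, 2), (2, 5), (3, 10)]))   # prints (1 + (x * x))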

Aaron Turner:
We'll be looking at a generalization [that I call] witness synthesis. Oops, sorry. And we'll also be looking at different methods of speeding everything up because, like I say, computation is going to be the bottleneck. I don't know whether I shall write about [PSDSML]. I'm not sure. This stands for probability, statistics, data science and machine learning. I'm not sure if I'm sufficiently expert to write about that, but it is an important aspect. We'll also be looking at ethics because we're getting to some quite philosophical things now the closer we get to actually interacting with the physical universe.

Aaron Turner:
We'll be looking at classical ethics. We'll be looking at consciousness, specifically a theory called Integrated Information Theory 3.0 [https://en.wikipedia.org/wiki/Integrated_information_theory], if you want to do a little bit of reading up. We'll be looking at belief synthesis, a process [that] I call Unified Belief Theory (UBT), which is the mechanism through which the machine will observe the physical universe and synthesize a belief system from those observations. Then we'll look at ethics again, but from a slightly different perspective. Then we will consider what the dominant goal should be, and all the pros and cons of that.
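
[Editor's note - UBT itself is described much later in the presentation; purely as a generic illustration of what "synthesising beliefs from observations" means in the simplest possible setting (ordinary Bayesian updating, not a description of UBT), a few lines of Python:]

    # Generic Bayesian belief updating over rival hypotheses.
    def update_beliefs(prior, likelihood, observation):
        """prior: {hypothesis: probability}; likelihood[h][obs] = P(obs | h)."""
        unnormalised = {h: prior[h] * likelihood[h][observation] for h in prior}
        total = sum(unnormalised.values())
        return {h: p / total for h, p in unnormalised.items()}

    # Two rival hypotheses about a coin, updated after observing heads three times.
    beliefs = {"fair coin": 0.5, "two-headed coin": 0.5}
    likelihood = {"fair coin": {"heads": 0.5, "tails": 0.5},
                  "two-headed coin": {"heads": 1.0, "tails": 0.0}}
    for observation in ["heads", "heads", "heads"]:
        beliefs = update_beliefs(beliefs, likelihood, observation)
    print(beliefs)   # the two-headed hypothesis now carries ~89% of the belief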

Aaron Turner:
A little bit of the history: the project started at Sinclair Research. Then there was a company called Man-Made Minions, and now finally we're essentially a nonprofit, bigmother.ai, a community-interest company. Now we're getting close to the actual roadmap. So there'll be three phases. There'll be a prototyping and evaluation phase, people knocking stuff up in Python and things like that. There'll be a working implementation. So we'll take the best ideas from the earlier stage and implement them, perhaps in the cloud, so that our fellow researchers can play with things. We have to be a little bit careful that we don't allow people to use our emerging technology maliciously, if you look at the Brundage report.

Aaron Turner:
Then finally, we would build the actual machine, the actual supercomputer. There are a couple of global work groups. We're now looking at the work groups. There's a quality [work group], which is basically the work group [that] is responsible for making sure that the project achieves its objectives, in other words, that the machine is maximally safe, benevolent and trustworthy. Some engineers think in terms of ensuring that good things happen and bad things don't, so there are different ways of looking at the problem. These people will be working in collaboration with all the various people who are looking at the potential problems of AGI and all the things that can go wrong, but they will also be working with the safety-critical community and looking for solutions to these problems.

Aaron Turner:
Facilities, this is just ... somebody's got to build the supercomputer and the development tools. Deduction, so we have to implement universal logic. We have to prove that it is both sound and complete. This is just a quality issue. We have to build a theorem prover for that logic. We have to build an NBG toolkit on top of it. We have to extend it to the stack of universal logics. Then we have to add uncertainty. Then we build witness synthesis, program synthesis, hardware synthesis, systems synthesis or basically as much of those as we can.
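
[Editor's note - "theorem prover" can sound mysterious; as a purely illustrative toy (propositional Horn clauses only, nothing like a prover for universal logic or an NBG toolkit), here is forward chaining in a few lines of Python:]

    # Toy forward-chaining prover: repeatedly apply rules (premises => conclusion)
    # to the known facts until no new facts appear, then test the goal.
    def proves(facts, rules, goal):
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and set(premises) <= known:
                    known.add(conclusion)
                    changed = True
        return goal in known

    facts = ["man(socrates)"]
    rules = [(["man(socrates)"], "mortal(socrates)"),
             (["mortal(socrates)"], "will_die(socrates)")]
    print(proves(facts, rules, "will_die(socrates)"))   # True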

Aaron Turner:
Finally, auto-refactoring: we basically use the program and hardware synthesis tools that we've just developed to auto-refactor the machine itself, so the machine redesigns its own implementation using the tools we've just developed. This is because, like I say, compute is going to be the bottleneck. If you've ever seen these synthesis systems working, they can generate programs, etc. that no human would ever think of [AMT edit: plus, importantly from a safety perspective, their proofs-of-correctness]. They are just far more sophisticated because they have [a] precision of thought that humans simply don't have.

Aaron Turner:
Finally, we're adding the inductive layer, where we're connecting the machine [i.e. its inner cognition] to the real world [the physical universe]. So we implement belief synthesis (Unified Belief Theory), which is where the machine observes the universe to construct its internal belief system. And so now we have basically a superintelligent machine [including the ability to learn, c/o UBT] but that doesn't know anything, doesn't have any knowledge [yet]. So then there's a very long process, which might be decades, might be 30 years of education. We have to teach the machine how to speak, how to see, get it up to like a five-year-old human level.

Aaron Turner:
Then we have to educate the machine in absolutely everything, because one of the hazards is that if we build a machine that doesn't know things, it won't have common sense. Basically, in order to be safe, the machine has to have as much knowledge as we can possibly give it, which can only be done [safely] by us devising lesson plans for it and teaching it. This will take a long time. This is why I spec'd it at 50 to 100 years. This C03 itself might take 30 years. It's not a difficult, complicated thing to imagine, but it takes a lot of time to actually do.

Aaron Turner:
C04; now up until this point, the machine has been what we call an oracle. It's not... You just give it a job to do and it does it (like generating a program or something like that). It's not been autonomous [i.e. it doesn't yet generate and execute its own plans in respect of some goal]. C04 is the point where we actually give it its dominant goal [of maximising human happiness etc] and we give it the ability to generate its own plans, which it will then execute. So at this point it's autonomous, and this is the point... in a nuclear analogy, this is where it goes critical [when you put the two hemispheres of plutonium together]. This is where the real safety problems occur.

Aaron Turner:
Up to this point, the machine has been pretty benign, and at this point is where all the [potential] hazards occur, and that's why we have to solve all those [safety etc] problems. That's why we had the quality work group right at the beginning. This is where the actual problems arise, the safety problems. After the machine's [safely] autonomous then, to be honest, you just attach the various devices to it that you need in order for it to perform its function. It needs to be able to, I don't know, raise livestock and grow crops and whatever. That's just attaching different devices to it.

Aaron Turner:
Then finally you deploy it. Again, this is a massive task because the social and economic changes will be profound. We have to plan for that, but that's basically it. C06 is the 19th step of the roadmap. We also have obviously an organizational plan. We have project management. We have governance, communications, marketing. Fundraising, absolutely the most important work group because the more money they raise, the faster we can progress all the other aspects, all the other work groups.

Aaron Turner:
Exploitation, this is what we were saying earlier. As technology arises, it makes sense, if we can, to make some money commercially from our technology in addition to the money we're raising from donors or grants or whatever. Obviously, if we have money, we need accounting, we need legal people, we need human resources. If you've got a quality system, you have to have staff training. These are all fairly straightforward things. I know it sounds boring, but you actually have to do them if you have a real project.

Aaron Turner:
Finally, sorry, this is really boring, but when you design a machine like this, it's all about interfaces. If you have all these different groups pursuing different things, you have to very carefully design the interfaces between them. It's something called "design by contract": the interfaces are the contracts; one person is designing to that interface, and the other person is assuming that interface has been implemented; but of course you have to actually design the interfaces. You have to make sure that nothing falls down the cracks, so we will have to organize cross-group committees to make sure that that doesn't happen. There are some obvious committees that we can establish.
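
[Editor's note - a minimal sketch of "design by contract" in Python, using plain assertions (the actual project would presumably use far heavier machinery, e.g. formally verified interfaces). The point is that callers rely only on the stated pre/postconditions, never on the implementation behind them:]

    def integer_square_root(n):
        """Contract:
           precondition:  n is a non-negative integer
           postcondition: result*result <= n < (result+1)*(result+1)"""
        assert isinstance(n, int) and n >= 0, "precondition violated by the caller"
        result = 0
        while (result + 1) * (result + 1) <= n:
            result += 1
        assert result * result <= n < (result + 1) * (result + 1), \
            "postcondition violated by the implementer"
        return result

    print(integer_square_root(10))   # 3: callers may rely on 3*3 <= 10 < 4*4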

Aaron Turner:
Sorry to finish on such a boring note, but that is basically the end of the introduction. Sorry, we've gone 10 minutes over, but thank you very much. Like I say, next time, bring two friends each. Thank you very much. Thank you for your attention. Thank you.

Back to index
