These are exciting times to be in AI. We (that is, the collective AI community) have conquered chess, we have conquered Go, and we have conquered protein folding. Next year we will doubtless have conquered something else.

Thanks to technology's arrow (the observation that, for any particular technology, that technology's capabilities only ever improve with time), we can expect the raw capabilities of AI technology to continue to advance with every year, and every decade, that passes.

The potential competitive advantage to be gained from leveraging advanced AI technology is unprecedented in all of history. AI technology is ultimately expected to amplify world GDP (currently ~$85 trillion/year) by at least an order of magnitude, i.e. to roughly $850 trillion/year or more. Put another way, future AI technology promises to generate trillions of dollars in revenues for whoever owns the corresponding intellectual property. Many for-profit actors in the AI field desire to capture as large a piece of this pie as possible, and this self-interest is a relentless driver for continuous improvement in AI.

One of the dimensions along which AI technology will doubtless improve over time is generality. Given sufficient time, narrow AI (that is, individual AI systems that are each designed to solve a highly specific problem) will inexorably evolve, via technology's arrow, into general AI, a.k.a. AGI (that is, individual AI systems that attempt to solve problems in general, and are not limited to any specific problem area).

There is no widely-accepted formal definition of AGI. A lowest-common-denominator (LCD) definition might be "any AI system that is not narrow AI". In the grand scheme of things, this is a pretty low bar, and some actors in the AI field are starting to imply that LCD AGI is within their grasp.

In the final analysis, an AGI's utility is determined by (a) the range of problems for which, if allowed a certain quantity of various physical resources (such as time, energy, compute, etc.), it is able to find one or more valid solutions, and (b) the quality of the problem solutions so found (in terms of their optimality relative to the space of all possible valid solutions). Low-quality AGI is much easier to build than high-quality AGI!
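One way to make this notion of utility concrete is the following sketch (the notation here is purely illustrative and is not part of any formal BigMother definition):

$$U(\mathcal{A}, r) \;=\; \sum_{p \,\in\, P(\mathcal{A},\, r)} q\big(s_{\mathcal{A}}(p)\big)$$

where $P(\mathcal{A}, r)$ is the set of problems that AGI $\mathcal{A}$ can validly solve within resource budget $r$ (time, energy, compute, etc.), $s_{\mathcal{A}}(p)$ is the solution it finds to problem $p$, and $q$ scores that solution relative to the space of all valid solutions (e.g. 1 for an optimal solution, falling towards 0 for poorer ones). On this reading, a low-quality AGI is simply one for which $P$ is small and/or $q$ is typically low.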

(Note: an agentic AGI is able to physically interact with — and thereby affect — the physical universe; a non-agentic AGI is not.)

The ultimate objective (promising potentially maximal utility) is to achieve super-intelligent agentic AGI, i.e. agentic AGI whose performance on any problem-solving task (including those pertaining to the physical universe) exceeds that of any human. This is a much more difficult problem than many realise (even in the AI field), because a genuinely super-intelligent AGI must necessarily be super-knowledgeable, i.e. more knowledgeable on any subject than any human, and this is not something that can easily be fudged (e.g. by scraping the internet).

To all intents and purposes, and as a bare minimum, we will need to take a sufficiently advanced AGI to school, and then to university, until it has passed, with the highest possible grades, every advanced-learning course designed for humans. This alone will likely take decades.

(Pattern-matching applied to such massive data sets may well require many orders of magnitude more compute per dollar than is available today!)

Primarily due to these educational requirements, and even though initial AGI takeoff may seem to many observers to be relatively rapid (albeit falling significantly short of super-intelligence), the subsequent path to genuine super-intelligence will almost certainly be a long hard slog.

That said, technology's arrow extends forwards through time forever. So, in the final analysis, it doesn't really matter how long its development and education take: super-intelligent agentic AGI will eventually become a reality. Given enough time (2101 CE...?), it's unstoppable.

And that's where the real problems start.

We shall refer to "the point in time at which AI inexorably evolves into super-intelligent agentic AGI" as the AGI endgame.

The AGI endgame is a trap-door — once we've gone through it, there's no coming back!

This is because any genuinely super-intelligent AGI is, by definition, more intelligent than any human.

Assuming that you have one, what are the chances that your cat is going to be able to trick you (a relative super-intelligence) into not taking it to the vet? Zero. Similarly, no mere human will ever be able to persuade or otherwise trick a super-intelligent AGI into doing something (such as allowing itself to be switched off, modified, and restarted) that it isn't already intrinsically motivated to do. Furthermore, unless a super-intelligent agentic AGI has been designed and implemented with exquisite care, it's not necessarily the case that it will be intrinsically motivated (from day one, without modification) to behave in a maximally (or even minimally) safe and benevolent way towards humans.

This is known as the AGI alignment problem (often referred to as the AI alignment problem, the AI safety problem, or the AI control problem). A maximally-aligned super-intelligent agentic AGI could transform today's world into a near-utopia, but a minimally-aligned super-intelligent agentic AGI could decide to eliminate the human race. Every scenario in between these extremes is also possible. If we end up in some horribly suboptimal post-endgame scenario, then it's simply too late: we won't (necessarily) be able to change it!

Thus:

  1. the nature of the AGI endgame, that is, the extent to which the first super-intelligent (or near-super-intelligent) agentic AGI is intrinsically aligned with human values, will likely determine the subsequent quality of life of all mankind for all eternity
  2. we have one, and only one, chance to get it right!

Imminent existential risks notwithstanding, this makes getting super-intelligent agentic AGI right the most important problem facing mankind.

Given all of the above, the goal of the BigMother project is to influence the nature of the AGI endgame, to the maximum extent possible, in order to secure a maximally safe and benevolent outcome for all mankind.

To this end, we seek to design, develop, and deploy a super-safe super-benevolent super-intelligent alpha AGI (called BigMother) that is publicly owned by all of mankind, and whose operation therefore benefits all of mankind, without favouring any particular subset thereof (such as the citizens of any particular country or countries, or the shareholders of any particular company or companies).

(A super-safe AGI is an agentic AGI that is safer (from the perspective of humans) than any human. A super-benevolent AGI is an agentic AGI that is more benevolent (from the perspective of humans) than any human. The alpha AGI is more intelligent than (and thus able to outsmart) any other AGI with which it might co-exist, including any other super-intelligent AGI. There can only be one alpha!)

Further details of the BigMother project — and the BigMother cognitive architecture — may be found in this unfinished draft paper.
