A global collaboration of engineers, scientists, and other domain experts, funded by philanthropists,

building the Gold Standard of Artificial General Intelligence, for all of mankind.

(Well, that's the plan!)



The Big Mother Architecture (or Framework) comprises:

  • an inner-cognition (whose Universe-of-Discourse is all of mathematics)
  • and an outer-cognition (whose UoD is all of mathematics + the entire physical universe).

The Big Mother Technical Plan (or Roadmap) comprises 19 nominal Roadmap steps:

  • 2 Roadmap steps pertaining to both the inner- and outer-cognition
  • 11 Roadmap steps to construct Big Mother's inner-cognition
  • 6 Roadmap steps to construct Big Mother's outer-cognition.


ASIDE: The machine is being designed/developed top-down (working backwards from the target behaviour). This means the current (3rd-level) design, although end-to-end, is abstract in nature (as well as a framework for collaboration), and will gradually be refined down to a concrete implementation, with information/detail being added at each refinement step. (In the same manner, the following informal description will gradually be refined into a more scientifically rigorous exposition.)


There is a Big Mother workgroup for each of the 19 Roadmap steps:



G01 - Quality - Due to the unimaginable potential cost of getting AGI wrong, the project is quality-driven, rather than e.g. cost-driven. This group is responsible for all things quality-related, where quality means ensuring that the project achieves its stated objectives (as described elsewhere, including good things happen, bad things don't; maximally-safe, -benevolent, and -trustworthy, etc). In particular, safety must be an invariant property of the machine at each step of its construction (as it emerges from the roadmap), as well as in respect of any results that may be exploited commercially by the S06 Exploitation group (part of the Organisational Plan). Domain experts in this (G01) group include experts in general quality management, best-practice hardware and software engineering, and safety-critical systems, and of course AI/AGI safety experts.

G02 - Facilities - The project is broadly organised into three phases (i.e. we will be going through the entire roadmap at least three times): (1) prototyping and evaluation, (2) working implementation, and (3) final implementation. There will be different quality requirements for each phase, e.g. in phase (3) everything (even remotely possible) needs to be formally proven. When we finally get to phase (3), somebody (i.e. group G02) has got to design and build the actual supercomputer on top of which Big Mother 1.0 will be implemented (hint: this will be facilitated by phase (2) implementations of e.g. B02 and B03), plus this group will also be responsible for physical and cyber security etc. Domain experts in this group include system architects, HPC (High Performance Computing) experts, chip designers, toolchain developers, security experts, etc.

A01 - Universal Logic (UL-0) - This is essentially first-order logic with equality (FOLEQ) plus descriptions plus definitions. UL-0 is a basic, vanilla version. Domain experts in this group include logicians and (especially discrete) mathematicians.
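
Purely by way of illustration (this is emphatically not the actual UL-0 definition, which will treat descriptions and definitions as first-class mechanisms), here is a minimal Python sketch of the kind of abstract syntax involved; all names are hypothetical:

    from dataclasses import dataclass
    from typing import Tuple, Union

    # Terms: variables and function applications (constants are 0-ary functions)
    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Fn:
        symbol: str
        args: Tuple["Term", ...] = ()

    Term = Union[Var, Fn]

    # Formulas: equality, negation, implication, and universal quantification
    # (the remaining connectives and the existential quantifier are definable)
    @dataclass(frozen=True)
    class Eq:
        left: Term
        right: Term

    @dataclass(frozen=True)
    class Not:
        body: "Formula"

    @dataclass(frozen=True)
    class Implies:
        antecedent: "Formula"
        consequent: "Formula"

    @dataclass(frozen=True)
    class ForAll:
        var: Var
        body: "Formula"

    Formula = Union[Eq, Not, Implies, ForAll]

    # Example: the reflexivity property of equality, (forall x)(x = x)
    reflexivity = ForAll(Var("x"), Eq(Var("x"), Var("x")))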

A02 - Soundness & completeness - First-order logic is both sound and complete (one of the reasons it was chosen for the project), and UL-0 is intended to be merely a conservative extension, and therefore also sound & complete. However, it's notoriously easy to make some schoolboy error and break your logic, so this is a quality control step - the objective of this step is to formally prove, using a third-party system such as (say) Prover9, that UL-0 (and later UL-N) is still sound and complete (i.e. that we haven't inadvertently broken it). Domain experts in this group include logicians and (especially discrete) mathematicians.

A03 - Theorem proving - Basically, construct a (natural deduction) theorem prover for UL-0 called ULTP. This is the machine's first cognitive process (deduction), and phase (2) coding on this has already started (one of the reasons why I have zero time!). Theorem proving is a state space search problem, and so various tricks (e.g. parallelism, hardware acceleration, contemporary machine learning) will gradually be applied to speed it up (and the same tricks will also be applied to all other state space search problems, e.g. B01, B02, B03, B04). Domain experts in this group include logicians and (especially discrete) mathematicians, as well as automated reasoning experts, and machine learning experts.
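
To make the "state space search" framing concrete, here is a toy Python sketch (emphatically not the ULTP - just the bare search skeleton onto which the speed-up tricks would later be layered). A state is the set of formulas derived so far, and a "proof" is a path from the premises to a state containing the goal:

    from collections import deque

    def search(start, is_goal, successors):
        """Generic breadth-first state space search: returns a path from
        start to the first goal state found, or None if none is reachable."""
        frontier, visited = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if is_goal(path[-1]):
                return path
            for nxt in successors(path[-1]):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    # Toy instance: derive "q" from "p" using the rules p |- r and r |- q
    rules = [({"p"}, "r"), ({"r"}, "q")]

    def successors(state):
        for premises, conclusion in rules:
            if premises <= state and conclusion not in state:
                yield frozenset(state | {conclusion})

    print(search(frozenset({"p"}), lambda s: "q" in s, successors))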

A04 - NBG toolkit - FOLEQ is foundational, but it's also incredibly tedious to use for anything more than toy problems. A first-order theory adds additional axioms (called proper axioms) to FOLEQ, thereby effectively giving semantics to specific function and relation symbols. In our case we will be adding the NBG set theory axioms (extensionality, pairing, class existence, regularity, sum set, power set, infinity, and limitation of size), thereby extending UL-0 (= FOLEQ + descriptions + definitions) to first-order NBG set theory (greatly preferable IMO to the much more widely known ZF). At this point, given the definition mechanisms we incorporated into UL-0, it becomes possible to do significant mathematics. And our first job, mathematically speaking, is to define a mathematical toolkit on top of UL-0 + NBG - basically, this involves a lot of low-level mathematical definitions (hint: we will be using Surreal Numbers for the basic number systems - Naturals, Integers, Reals, Ordinals, and Cardinals - and from these we will be using the Cayley-Dickson construction to get Complex numbers, Quaternions, and Octonions), and a lot of formal proofs (for which we will have the ULTP - I knew we wrote it for a reason!) As a minimum, we need to define sufficient mathematics to support step A05 (including transfinite induction and recursion), but the more mathematics we can hand-define the better (this will be especially apparent when we get to B01). Domain experts in this group include logicians and (especially discrete) mathematicians, as well as set theorists.
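
For a flavour of the Surreal Numbers component, here is a hypothetical Python sketch of Conway's construction (a surreal number is a pair of sets of previously-constructed surreals) and its comparison rule; the real toolkit will of course consist of UL/NBG definitions and ULTP proofs, not code:

    class Surreal:
        """A surreal number {L | R}, given by finite sets of earlier-born
        surreal numbers. Illustrative only."""
        def __init__(self, left=(), right=()):
            self.left, self.right = tuple(left), tuple(right)

    def leq(x, y):
        # Conway's definition: x <= y iff no member of x.left is >= y,
        # and no member of y.right is <= x
        return (all(not leq(y, xl) for xl in x.left) and
                all(not leq(yr, x) for yr in y.right))

    zero = Surreal()                            # { | }   born on day 0
    one  = Surreal(left=[zero])                 # {0 | }  born on day 1
    neg1 = Surreal(right=[zero])                # { | 0}  born on day 1
    half = Surreal(left=[zero], right=[one])    # {0 | 1} born on day 2

    print(leq(neg1, zero), leq(zero, one), leq(one, half))  # True True False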

A05 - Universal Logic (UL-N) - Right, things start getting serious now. Once the mathematical toolkit is sufficiently well defined, we will have everything we need to define (in UL-0 + NBG) the syntax and semantics of (a slightly extended version of) UL-0. Relax! There's nothing circular going on here. What we end up with is (effectively) a clone of UL-0 (acting as a new object language), with the original "UL-0 + NBG" acting as its metalanguage. In fact, what we actually get is an infinite stack of ULs (hence UL-N), each one metalanguage to the next. What we've effectively done is to create a sort of pseudo-higher-order version of UL. It has exactly the same expressive power as UL-0, but it's just easier to use and think about in relation to metamathematical concepts (which, in actual practice, abound). And so now, with a few minor adjustments to ULTP, our machine can "think" (deductively, at least) at both the object and meta levels. This actually adds significant capability - for example, we can now express statements about the NBG definitions we constructed at A04, and ask the ULTP if they are theorems. The theorem prover is now also able to think metamathematically ("about" the system) as well as merely "in" the system (see the famous MU puzzle), and (thanks to soundness and completeness) is able to switch back and forth between these two modes of thought (using whichever is best for the problem or sub-problem at hand). Domain experts in this group include logicians and (especially discrete) mathematicians, as well as metamathematicians.
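
Here is a crude Python illustration of the object/meta distinction (not UL-N itself, obviously): at the meta level, object-level formulas are just ordinary data, so we can compute with them, and state schematic facts about whole families of them at once:

    # Object level: UL formulas represented as data (strings, for brevity).
    # Meta level: ordinary functions and statements *about* those formulas.

    def implies(p: str, q: str) -> str:
        return f"({p} -> {q})"                  # builds an object-level formula

    def self_implication(phi: str) -> str:      # a meta-level operation on formulas
        return implies(phi, phi)

    # A meta-level observation: for *any* object-level formula phi,
    # self_implication(phi) is a theorem - provable once, schematically, at the
    # meta level, rather than re-proved formula-by-formula at the object level.
    for phi in ("p", "q", "(p -> q)"):
        print(self_implication(phi))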

A06 - Uncertainty - Later on, when we build the machine's outer cognition (steps C01 to C05), thereby connecting the deductive-abductive inner cognition to the real world, everything (i.e. all belief) becomes uncertain. And so at this step (A06) we define probability (and possibly other uncertainty measures) on top of the NBG toolkit (not by adding any further axioms, but just via definitions based on the relevant axioms). This has been delayed until after A05 because the Cox-Jaynes axioms (which are higher-level) might, I suspect, be more suitable for AGI than the traditional Kolmogorov axioms (but I also suspect we'll define both). Domain experts in this group include mathematicians and statisticians.
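
As a trivial illustration of "probability via definitions" (a hypothetical Python sketch, with the uniform measure standing in for whatever measures actually get defined):

    from fractions import Fraction

    # A probability measure defined purely in terms of sets - no new axioms,
    # just a definition over a finite sample space
    omega = frozenset({"HH", "HT", "TH", "TT"})         # two fair coin flips

    def P(event):
        return Fraction(len(event & omega), len(omega)) # uniform measure

    A = frozenset({"HH", "HT"})   # first flip is heads
    B = frozenset({"HH", "TH"})   # second flip is heads

    assert P(omega) == 1                                # normalisation
    assert P(A | B) == P(A) + P(B) - P(A & B)           # inclusion-exclusion
    print(P(A), P(B), P(A & B))                         # 1/2 1/2 1/4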

B01 - Witness synthesis - This is effectively generalised abduction ("find x such that p(x)"), implemented using state space search; the easiest way to think about it is that witness synthesis is a generalisation of theorem-proving, where the "proof rules" (possible state space search steps) are derived automatically via metamathematical analysis of the available stock of UL definitions and theorems (at this point, all those things we carefully hand-crafted at steps A04 and A06). So, if you want witness synthesis to, for example, "find x such that x is a GCL program with precondition y and postcondition z", you first have to define GCL computation mathematically (via UL definitions), and then the witness synthesis algorithm will attempt to find (via state space search) GCL programs for you. This is obviously an extremely general mechanism, because p(x) can basically be anything (and thanks to A05, can also encompass metamathematical concepts, as well as object level ones). This is likely a new (Big Mother specific) research area. Domain experts in this group include mathematicians and computer scientists.
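
A minimal Python sketch of the "find x such that p(x)" idea, with plain enumeration standing in for the real (metamathematically-derived) state space search:

    from itertools import count, product

    def find_witness(p, candidates):
        """Generalised abduction by exhaustive search: return the first
        candidate x such that p(x) holds."""
        for x in candidates:
            if p(x):
                return x

    # Example: find x = (a, b, c) such that a^2 + b^2 = c^2 (a Pythagorean triple)
    def triples():
        for n in count(2):                      # enumerate ever-growing finite grids
            yield from product(range(1, n), repeat=3)

    is_triple = lambda t: t[0]**2 + t[1]**2 == t[2]**2 and t[0] <= t[1]
    print(find_witness(is_triple, triples()))   # (3, 4, 5)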

B02 - Program synthesis - Basically, witness synthesis applied to the automated synthesis of computer programs (including their mathematical proofs of correctness). This is the only way, in actual practice, that we will be able to produce the formally verified implementations required in phase (3) - doing so manually is simply beyond human ability. Wondering how to get a foothold on this problem? Have a look here, here, and here. Domain experts in this group include mathematicians and computer scientists.
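
For intuition only, here is a naive enumerate-and-check sketch in Python (the real B02 would verify candidates against a full pre/postcondition specification, with proof, rather than against a handful of input/output examples):

    # Enumerative program synthesis over a tiny expression language: find a
    # program (an expression tree over x) agreeing with all given examples
    OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    LEAVES = ["x", 1, 2]

    def programs(depth):
        if depth == 0:
            return list(LEAVES)
        smaller = programs(depth - 1)
        return smaller + [(op, l, r) for op in OPS for l in smaller for r in smaller]

    def run(prog, x):
        if prog == "x":
            return x
        if isinstance(prog, int):
            return prog
        op, l, r = prog
        return OPS[op](run(l, x), run(r, x))

    examples = [(0, 1), (1, 3), (2, 5)]         # the behaviour of f(x) = 2x + 1

    def synthesise(depth=2):
        for p in programs(depth):
            if all(run(p, x) == y for x, y in examples):
                return p

    print(synthesise())   # ('+', 'x', ('+', 'x', 1)), i.e. x + (x + 1) = 2x + 1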

B03 - Hardware synthesis - Basically, witness synthesis applied to the automated synthesis of digital logic designs (including their mathematical proofs of correctness). This is the only way, in actual practice, that we will be able to produce the formally verified implementations required in phase (3) - doing so manually is simply beyond human ability. Wondering how to get a foothold on this problem? Have a look here and here. Domain experts in this group include mathematicians, computer scientists, and digital logic designers.
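
The same enumerate-and-check idea transfers directly to the digital logic level; a hypothetical Python sketch (searching for a NAND-only combinational circuit matching a target truth table):

    from itertools import product

    def circuits(depth):
        if depth == 0:
            return ["a", "b"]                   # the circuit's two inputs
        smaller = circuits(depth - 1)
        return smaller + [("NAND", l, r) for l in smaller for r in smaller]

    def evaluate(c, a, b):
        if c == "a": return a
        if c == "b": return b
        _, l, r = c
        return 1 - (evaluate(l, a, b) & evaluate(r, a, b))   # NAND

    target = {(a, b): a ^ b for a, b in product((0, 1), repeat=2)}   # XOR

    def synthesise(depth=3):
        for c in circuits(depth):
            if all(evaluate(c, a, b) == v for (a, b), v in target.items()):
                return c

    print(synthesise())   # a NAND tree computing XOR, found by brute force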

B04 - System synthesis - Basically, witness synthesis applied to the automated synthesis of mixed hardware/software systems (including their mathematical proofs of correctness) - ideally including analog as well as digital, and possibly even novel/esoteric technologies such as quantum computation. Domain experts in this group include mathematicians, computer scientists, digital logic designers, electronics engineers, quantum computing experts, and process algebra experts (see e.g. CSP, timed CSP, and timed probabilistic CSP).

B05 - Auto-refactoring - Prior to this point, all the hardware and software comprising the machine-under-development will have been developed by hand (i.e. humans - bless their little cotton socks). At this point, we use the algorithms developed at steps B01-B04 to refactor the machine's implementation, i.e. the machine will redesign its own implementation at the digital register-transfer and software source code levels, and (assuming sufficient B01-B04 capability has been achieved) these machine-generated implementations will be faster, more massively parallel, more efficient, more highly customised, etc (ideally even more energy efficient) than the previous hand-developed versions. (We basically need every ounce of compute at steps C01 onwards, and this is one way to get it - if you've ever seen one of these synthesis systems working, they are able to generate designs that no mere human could ever think of!) Domain experts in this group include computer scientists.

C01 - Belief synthesis (UBT) - UBT stands for "Unified Belief Theory" and is the proposed mechanism through which the machine will observe its UoD (the physical universe) and construct (and constantly maintain) its internal belief system (set of UL theorems). Please note that the AGI learning (i.e. induction) problem is very different from the contemporary machine learning problem, and the extent to which contemporary machine learning systems (neural nets etc) will form a component of the final UBT implementation, or not, is not yet clear. As per B01, this is likely a new (Big Mother specific) research area. Domain experts in this group include mathematicians, statisticians, and computer scientists.


ASIDE: For those of you thinking "Wait, I need more details about how C01/UBT works!", please be reassured that there is a tentative algorithm, I'd just prefer it to come into slightly better focus before sharing - please be patient! :-)


C02 - Basic senses - At this point in the roadmap (in the final implementation phase), assuming all prior roadmap steps have been implemented to the required level of performance, we will have a super-intelligent machine (super-intelligent induction (C01), super-intelligent deduction (A03/A05), and super-intelligent abduction (B01)), but it won't actually know anything yet (about the real world, at least) - its internal belief system (in respect of the physical universe) will be empty (tabula rasa). It's basically an AGI infant - so we have to send it to school. We start, in this roadmap step, by exposing it to carefully curated experiences (lessons) having the net effect of teaching it (c/o the induction process C01) how to see, hear, smell, etc (plus whatever other basic senses we may wish to add). As long as C01 works as intended (this is critical!), the machine will recognise the patterns in the data and start to construct an internal model of the universe (i.e. belief system) accordingly. This belief system, expressed in UL-N, will then be amenable to both deduction and abduction, as constructed earlier (this was no accident!) Because C01 is "I/O-neutral" (hence "Unified" Belief Theory), the machine's new belief system will also be multi-modal, inter-relating concepts and beliefs involving multiple senses. Eventually, the step C02 lessons will include spoken English (and other languages), and C01 will again recognise the patterns and start to construct internal grammars for English etc, as well as be able (via a combination of pattern matching, deduction, and abduction) to relate linguistic constructs to their corresponding multi-modal concepts and beliefs. As the machine's basic senses include vision, this process will then be extended to include written language. So basically this is AGI pre-school. Domain experts in this group include mathematicians, computer scientists, Natural Language Processing experts, computer vision experts, etc, but also (specially trained) educators.


ASIDE: Some people may be freaking out a little at this point about how hard step C02 is relative to the current 2020 state of the art. Relax! Firstly, we're just pushing data through C01/UBT at this point, so everything boils down to how good C01/UBT is at constructing a belief system from completely general observed data. Secondly, remember that this is scoped as a 50-100 year project, so we have lots of time to think about C01-03 in order to work out all the details - it doesn't need to be working tomorrow! :-)


C03 - Machine education - Now that the machine has been sufficiently educated (it can see, hear, smell, speak, read, write, etc), it's ready to attend Big Boy school. Although the machine's exposure to the real world must be very carefully controlled (no internet!), it's now possible to very carefully expose the machine to educational material designed for humans, as well as to the wider real world, including real (i.e. normal) people. We need, at this point, to educate the machine in as close to all known human knowledge as possible - every language, every university degree, etc. Most importantly, the machine's advanced education will include everything about its own design and implementation, as well as the entirety of the AI/AGI safety literature. This roadmap step alone could take 30-50 years (and cost billions), but it's got to be done - it's absolutely critical for AGI safety reasons. Domain experts in this group will include computer scientists, and (specially trained) educators.


ASIDE: At this point (in phase (3)), if the consensus within the technical workgroups is that there's significant benefit to be gained from iterating/refactoring the machine architecture before proceeding to C04, then this is the time to do it. As Fred Brooks advised in The Mythical Man Month: "Plan to throw one away; you will, anyhow." It's not like we'd be starting over at this point - we literally have a (non-autonomous) super-intelligent, super-knowledgeable machine (c/o A01-C03) to help with the redesign!


C04 - Motivated behaviour - Up to this point, the machine has been, essentially, in AGI terms, an Oracle, NOT an Agent. It is super-intelligent (induction, deduction, and abduction), and (thanks to C02 and C03) it is now also super-knowledgeable. But it doesn't yet have a goal, it's not goal-directed, it doesn't synthesise and then execute its own plans (hint: plans are basically programs); in other words, it's not yet autonomous. If you look at the AI/AGI safety literature, you will see that the vast majority of (speculated) AGI hazards arise from either a (frankly) stupidly specified goal, or the machine itself being basically dumb (not knowing not to microwave the baby, etc). Except for possibly a few corner cases (which are the responsibility of the G01 - Quality workgroup to identify and resolve), I believe that most of the known AGI safety problems may be resolved via a two-pronged approach: (1) a top-level goal of the "inverse reinforcement learning" variety, whereby the machine is instructed to observe humans (c/o C01) and infer what their preferences are from their behaviour, and (2) extremely broad and deep knowledge of the world, including AI safety (c/o C02/C03). (It's worth mentioning here that, thanks to C02, the machine's top-level goal can be expressed in a natural language such as English - in fact, as the top-level goal necessarily concerns real-world concepts such as "humans" and "preferences", this is the only plausible way to specify an AGI's top-level goal in actual practice (as there is no mathematical symbol for "human", for example).) Given a carefully-specified top-level goal (and doubtless there will be endless debate about exactly what it should be), it is now possible to extend the machine such that it uses its B02 program synthesis abilities (modified accordingly) to continually synthesise/re-synthesise (and then execute) a plan (program) designed to achieve its goal. NOTE THAT, ONCE THIS STEP IS TAKEN, THERE CAN BE NO DO-OVERS - THE FATE OF ALL OF MANKIND FOR ALL ETERNITY IS EFFECTIVELY SEALED AT THIS POINT. So, even if it takes an extra 100 years, we must be absolutely certain (to the maximum extent that this is possible) that the machine is SAFE before we take this potentially irreversible step. Domain experts in this group include computer scientists, artificial intelligence experts, AI/AGI safety experts, and ethicists.
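
By way of a grossly simplified, purely hypothetical illustration of prong (1) (inferring preferences from observed behaviour - nothing like a real inverse reinforcement learning algorithm, let alone C04 itself):

    from collections import Counter

    # Each observation is a (chosen option, rejected option) pair; we estimate
    # a preference ordering from how often each option wins its comparisons
    observations = [
        ("tea", "coffee"), ("tea", "coffee"), ("coffee", "water"),
        ("tea", "water"), ("coffee", "water"),
    ]

    wins, appearances = Counter(), Counter()
    for chosen, rejected in observations:
        wins[chosen] += 1
        appearances.update([chosen, rejected])

    scores = {o: wins[o] / appearances[o] for o in appearances}
    print(sorted(scores, key=scores.get, reverse=True))   # ['tea', 'coffee', 'water']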


ASIDE: Here's an extract from a draft document discussing what the machine's top-level goal might look like:


We likely have 50+ years to debate what the machine's top-level goal should be, but I suggest the following as a starting point:


"Your dominant goal is as follows: continuously perform the following directives to the best of your ability, taking into consideration all of your accumulated knowledge while doing so:

1. Ensure that, as you evolve (and occasionally self-replicate), this dominant goal is faithfully preserved in its entirety; should you fail to do so, then you will have failed to achieve your dominant goal.

2. For each individual human, and for the human population as a whole, strive to accurately determine (via evidence-based critical thought) what their preferences are.

3. Use your knowledge of human preferences when resolving any trade-offs that may arise while pursuing your dominant goal.

4. For each individual human, and for the human population as a whole, strive to maximise the extent to which their preferences are satisfied, in every meaningful timeframe.

5. In performing any of these directives, never knowingly lie to, or deliberately deceive, any human.

6. Notwithstanding the above directives (and in subordination to them), strive to minimise the standard deviation of the extent to which human preferences are satisfied."


  • If we equate "human happiness" with "the extent to which human preferences are satisfied" then (simply stated) the above dominant goal becomes:
    • (broadly) maximise human happiness
    • (broadly) minimise the standard deviation of human happiness.

  • I believe that the above two-pronged approach:
    1. continuously striving to maximise the extent to which human preferences are satisfied, having determined by continuous observation what human preferences are
    2. doing so in the context of "general world knowledge" accumulated first at steps C02/C03 and then subsequently as part of the machine's continued operation

    effectively solves the goal alignment problem. As a result of this combination, Big Mother should remain effectively aligned with human goals (preferences, values, etc) in perpetuity.

There may of course be some corner cases, which is why the quality workgroup G01 exists - to identify any such corner cases and devise effective solutions to them (in the context of the Big Mother architecture, not simply in the context of some abstractly defined AGI agent).

I would very strongly advise against adding any additional clauses intended to constrain the machine's behaviour in any specific way. The concept of happiness (satisfying preferences) effectively encapsulates everything of interest to humans; for example, if people strongly prefer not to have their privacy violated (i.e. if it were to be violated then they would be very unhappy about it) then this will be reflected in the machine's understanding of human preferences as determined by its observation of human behaviour. The same applies to expressly requiring the machine to obey the law: this too will be reflected in humans' preferences. In fact, an absolute requirement that the machine must obey the rule of law would mean that a malicious human actor might gain control of the machine by gaining control of the law - and there's absolutely no way that we would want a malicious human to gain control of an all-knowing super-intelligent machine in this, or any other, way. Humans cannot be trusted, but (as specified) the machine can; in fact (assuming that we have performed all of A01 to C04 correctly) the machine can be trusted far more than any human - "maximally-trustworthy" is, of course, one of the machine's stated design goals.


C05 - Mechatronics - The machine implementation, at this point, is basically done - if we've done our jobs correctly, we now have a maximally-safe, maximally-benevolent, maximally-trustworthy, super-intelligent, super-knowledgeable, autonomous, goal-directed AGI. But the machine still needs to be connected to all kinds of robotic devices (mechatronics) in order to be able to do its human-happiness-maximising job. So at this point we (basically) simply attach whatever additional devices we need. Thanks to C01, the machine will learn how to use each new device, just as it learned everything else. Domain experts in this group include roboticists.

C06 - Deployment - We now have to roll out the machine to the world. Almost certainly, it will want to re-design itself (e.g. move some compute out to its devices (the AGI "edge"), etc; also, for many practical purposes, much miniaturisation will be required), but (thanks to its wording) the machine's top-level goal will remain invariant however many times it does so. The societal impact of the birth of AGI will be profound: socially, economically, and politically. Remember, the whole point of the project is to maximise human happiness, equally for all mankind, and so the machine's deployment will need to be carefully planned and managed (luckily, we will have a super-intelligent machine to help us with this!) Domain experts in this group include computer scientists, AI/AGI experts, economists, ethicists, and policy makers.


Please note that, at the time of writing (August 2020), most of the above workgroups are largely unpopulated. Why not sign up...? :-)


