
Testimonials

Professor Leslie Smith

(University of Stirling, Associate Editor of the Journal of Artificial General Intelligence)

“Much of what passes for AI is simply neural networks, basically the same ideas as in the 1980s but upgraded with various bells and whistles like new activation functions, summarising levels, and feedback. Even the most sophisticated systems (like the GPT systems) lack novel ideas. Aaron’s work aims to push beyond this, to set an agenda for actual intelligent systems (so called artificial general intelligence, AGI) that considers more than pattern recognition and synthetic language construction. This is quite different from what is pursued by companies, and most computing departments. The work is important, and may underlie the next steps forward in artificial intelligence.”

TTQ - Preprint


About

BigMother.AI CIC is a non-profit AGI (Artificial General Intelligence) lab based in Cambridge (UK), focussing in particular on superintelligent AGI, superalignment, and the global governance of superintelligent AGI. Our ultimate objective is to maximise the net benefit of AGI for all humanity, without favouring any subset thereof.


Whoever owns reliable human-level AGI will own the global means of production for all goods and services — superintelligent AGI has been estimated to have a net present value of at least $15 quadrillion! Accordingly, the major equity-funded-and-therefore-profit-motivated AI labs (and their associated sovereign states), being aggressively competitive by nature, are currently engaged in an AGI arms race, each in pursuit of their own short-term self-interest, seemingly oblivious to the long-term best interest of the human species as a whole.


Unfortunately, due to competitive race dynamics and the trapdoor nature of superintelligence, a tribal race to AGI is most likely to be, at best, hugely sub-optimal for all humanity for all eternity, and, at worst, catastrophic.

 

A far more attractive alternative is to pursue AGI collectively in order to achieve an AGI Endgame that is (as close as possible to) maximally-beneficent for all humanity (i.e. to do it properly), irrespective of how long it may take.


The BigMother approach is to try to imagine the ideal AGI Endgame (from the perspective of the human species as a whole), and to work backwards (top-down) from there in order to make it (or something close to it) actually happen. This is largely equivalent to imagining the ideal (or "Gold-Standard") superintelligent AGI — maximally-aligned, maximally-validated, and maximally-superintelligent — and then working backwards to actually build it.

 

Accordingly, we seek to design, develop, and deploy a Gold-Standard (maximally-aligned, maximally-validated, and maximally-superintelligent) AGI called BigMom that is ultimately owned by all humanity (e.g. via the UN) as a global public good, and whose operation benefits all humanity, without favouring any subset thereof (such as the citizens of any particular country or countries, or the shareholders of any particular company or companies).

Our paper "TTQ: An Implementation-Neutral Solution to the Outer AGI Superalignment Problem" is step 1.

