

SPEEDUPS

At the current level of abstraction, some people may be concerned about how "inefficient" or "intractable" some part of the design is, citing AI war stories about approaches that were tried, and failed, back in the 1960s, 70s, and 80s.

At the present (3rd) level of abstraction, what we mostly care about is that information flows through the machine in such a way that it behaves functionally in the way that we want (maximally-safe, -benevolent, and -trustworthy), ignoring time, space, and other resource costs for the time being.

As we refine each roadmap step down towards a concrete implementation, the physical resources required will become substantial. This is to be expected: AI/AGI is computationally undecidable, semi-decidable, or intractable whichever way you try to achieve it. (In fact, this is where the unprecedented value generated by AI/AGI comes from: any specific case in which you can solve a problem that is "provably impossible in the general case" will often generate much greater real-world value than a general solution to a fully-decidable problem.)
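
To make that point concrete, here is a minimal sketch (not part of the roadmap; the loop representation is invented purely for illustration): the halting problem is undecidable in general, yet termination is trivially decidable for the restricted class of countdown loops modelled below, and answers for such special cases can still be valuable.

```python
# Minimal illustration: the halting problem is undecidable in general,
# but termination is decidable for a restricted class of loops.
# The CountdownLoop representation is invented purely for this sketch.

from dataclasses import dataclass

@dataclass
class CountdownLoop:
    """Models 'while x > 0: x -= step' for integer x and step."""
    start: int
    step: int

def terminates(loop: CountdownLoop) -> bool:
    """Decides termination for this restricted class:
    the loop variant is x, which strictly decreases iff step > 0."""
    return loop.start <= 0 or loop.step > 0

# A general halting decider cannot exist, but within this class the
# question is answered instantly.
print(terminates(CountdownLoop(start=1_000_000, step=3)))   # True
print(terminates(CountdownLoop(start=1_000_000, step=0)))   # False (loops forever)
```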

Thus, as we refine the present design down to a concrete implementation, we will gradually apply various speedup techniques, for example:

  1. Algorithmic acceleration - For example, some data representations allow the variant of a loop (the quantity that strictly decreases on each iteration) to be decreased by a greater amount per iteration, corresponding to more computational work being performed per step, thereby allowing the computation to progress more quickly (a sketch appears after this list).
  2. Statistical acceleration - For example: (a) statistical information about the data to which an algorithm is applied in actual use may inform the decision as to which of several competing algorithms is optimal; (b) a complex decision point within an algorithm (such as state space search) may be facilitated using a neural net or other machine learning model trained on data gleaned from many prior executions (e.g. AlphaGo). Note that using e.g. a neural net to speed up a state space search algorithm such as theorem proving or witness synthesis is broadly analogous to Kahneman's System 1 vs System 2 ("Thinking, Fast and Slow"). A lower-level, intuitive, pattern-matching mechanism (System 1) - massively parallel, and trained on a lifetime's experience of problem examples - takes a very fast, but not necessarily correct, "quick-and-dirty guess"; that guess is then double-checked, and applied as appropriate, by a much more sequential, deliberate, and above all precise, higher-level algorithm (System 2), so that the two systems working together produce something greater than the sum of their parts (a guess-then-verify sketch appears after this list).
  3. Parallel acceleration - Some algorithms may be partitioned into sub-tasks which are then distributed over many cores executing in parallel (a sketch appears after this list).
  4. Hardware acceleration - If profiling reveals a program hotspot, then that specific functionality may be implemented in hardware (e.g. FPGA).
  5. Targeted hardware - Hardware designed for a specific algorithm will implement that algorithm more quickly than general-purpose hardware.
  6. Auto-refactoring - Once the program, hardware, and/or system synthesis tools developed at roadmap steps B02 to B04 have achieved greater-than-human capability in respect of any of 1-5 above, then the machine will be able to synthesise a faster implementation of itself.
  7. Uncle Bob's Law - In the preface to his 2018 book Clean Architecture, veteran software engineer Robert C. Martin describes how computer technology improved by 22 orders of magnitude over his first 50 years as a programmer. Given that ours is a 50-100 year project, it would not be unreasonable to anticipate a further 10-20 orders of magnitude improvement over the next 50-100 years. I say this despite the fact that Moore's Law is coming to an end - given the economic drivers, it would be unwise to underestimate the ingenuity of the hardware industry!
  8. Attention - In principle, given sufficient compute, an intelligent system (such as an AGI) should be able to process an arbitrarily large number of distinct tasks simultaneously. Nevertheless, however much compute is available, situations will always arise in which some tasks need to be prioritised (given disproportionate attention) over others. Thus any sensible AGI design will assume that compute is always a scarce resource and incorporate an attention mechanism, effectively rationing compute over tasks (a compute-rationing sketch appears after this list).
  9. Approximation - It is often faster to compute an approximate result than a precise one, and such techniques are often touted as indispensable for practical AI/AGI. Nevertheless, we must not forget that it is primarily the digital computer's precision of thought that makes super-intelligence (far exceeding that of humans) - and the corresponding practical utility that comes with it - possible. Should approximation be overly or inappropriately applied in an AI/AGI system, merely in order to preserve scarce compute (which, thanks to Uncle Bob's Law, we can be pretty sure will become less scarce with every year that passes), then we run a severe risk of throwing the baby out with the bathwater. Consequently, whenever the necessary compute is available, an AGI should always perform the precise calculation, only falling back on approximation as a last resort: precision should be the default, and approximation the fallback (a sketch appears after this list).
  10. Backtracking - As a last resort, at any time during the top-down-recursive-decomposition process (for example, between steps C03 and C04), any component of the design, or even the entire design itself, may be replaced by a functionally-equivalent (but faster) alternative (a sketch appears after this list).
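
On item 1 (algorithmic acceleration), a minimal sketch of how a change of data representation lets the loop variant shrink faster: over an unsorted list the variant (the number of remaining candidates) decreases by 1 per iteration, whereas over a sorted list it roughly halves, so the same membership question is answered in O(log n) rather than O(n) steps. The functions below are illustrative, not part of the roadmap.

```python
# Illustration of item 1: the loop variant is the number of remaining candidates.
# Linear search shrinks it by 1 per iteration; binary search halves it,
# so each iteration does more work and the computation finishes sooner.

def linear_search(items, target):
    """Variant = number of unexamined items; decreases by 1 each iteration."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Variant = size of the remaining interval; roughly halves each iteration."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1_000_000))          # already sorted, enabling the faster variant
assert linear_search(data, 765_432) == binary_search(data, 765_432) == 765_432
```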
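
On item 2 (statistical acceleration), the System 1 / System 2 pairing can be sketched as a guess-then-verify loop: a fast, possibly wrong, learned heuristic proposes an ordering over candidates, and a slow, precise checker validates them. The toy "heuristic" below is a stand-in for a trained model; everything here is illustrative.

```python
# Illustration of item 2: a fast, fallible "System 1" proposes candidates,
# and a slow, precise "System 2" verifies them. The heuristic is a stand-in
# for a trained model (e.g. a neural net scoring search moves).

import time

def system1_rank(candidates):
    """Fast, approximate scoring - here just a toy ordering heuristic."""
    return sorted(candidates, key=lambda n: n % 97)   # pretend this is a learned model

def system2_check(n):
    """Slow, exact verification - here an exhaustive primality check."""
    time.sleep(0.001)                                  # simulate deliberate computation
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def first_verified(candidates):
    """Try candidates in the order System 1 suggests; accept only what System 2 confirms."""
    for c in system1_rank(candidates):
        if system2_check(c):
            return c
    return None

print(first_verified(range(100, 200)))   # first candidate that survives the precise check
```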
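
On item 3 (parallel acceleration), a standard sketch using only the Python standard library: the work is partitioned into independent sub-tasks and mapped over a pool of worker processes.

```python
# Illustration of item 3: partition independent sub-tasks and distribute
# them over many cores using a process pool.

from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """One independent sub-task: count primes in [lo, hi)."""
    lo, hi = bounds
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Partition [0, 200_000) into 8 chunks and fan them out over the cores.
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(8)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)
```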
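
On item 8 (attention), a minimal sketch of rationing a fixed compute budget over tasks in proportion to priority. The Task structure and the proportional policy are invented for illustration; they are not the roadmap's actual attention mechanism.

```python
# Illustration of item 8: treat compute as a scarce resource and ration it
# over tasks in proportion to their priority. The Task type and the
# proportional policy are invented purely for this sketch.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: float      # higher = more urgent
    demand: int          # compute units the task would ideally like

def allocate(tasks, budget):
    """Split a fixed compute budget in proportion to priority,
    never giving a task more than it asked for."""
    total_priority = sum(t.priority for t in tasks)
    allocation = {}
    for t in tasks:
        share = int(budget * t.priority / total_priority)
        allocation[t.name] = min(share, t.demand)
    return allocation

tasks = [
    Task("monitor-safety-invariants", priority=10.0, demand=500),
    Task("refine-roadmap-step",       priority=3.0,  demand=400),
    Task("background-housekeeping",   priority=1.0,  demand=300),
]
print(allocate(tasks, budget=600))
# {'monitor-safety-invariants': 428, 'refine-roadmap-step': 128, 'background-housekeeping': 42}
```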
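
On item 9 (approximation), a minimal sketch of "precision by default, approximation as fallback": the exact computation is attempted whenever the stated budget permits it, and an approximation is used only when it does not. The cost model here is a crude stand-in for real resource accounting.

```python
# Illustration of item 9: compute the exact answer whenever the available
# budget permits, and only fall back on an approximation when it does not.

import math
import random

def exact_mean_distance(points):
    """Exact mean pairwise distance - O(n^2) work, precise."""
    n = len(points)
    total = sum(math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:])
    return total / (n * (n - 1) / 2)

def approx_mean_distance(points, samples=1_000):
    """Monte Carlo estimate - cheap but inexact."""
    return sum(math.dist(*random.sample(points, 2)) for _ in range(samples)) / samples

def mean_distance(points, budget_ops=10_000_000):
    exact_cost = len(points) ** 2              # rough cost estimate of the exact path
    if exact_cost <= budget_ops:
        return exact_mean_distance(points)     # precision is the default...
    return approx_mean_distance(points)        # ...approximation only as the fallback

pts = [(random.random(), random.random()) for _ in range(500)]
print(mean_distance(pts))                      # 500^2 is within budget, so the exact path runs
```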
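
On item 10 (backtracking), a minimal sketch of swapping a design component for a functionally-equivalent but faster alternative, with a randomised equivalence check standing in for a real proof of equivalence. Both implementations and the check are illustrative only.

```python
# Illustration of item 10: a slow reference component is replaced by a faster
# alternative only after checking functional equivalence on sampled inputs.
# (A real system would prove equivalence rather than merely test it.)

import random

def slow_sum_of_squares(n):
    """Reference implementation: O(n) loop."""
    return sum(i * i for i in range(1, n + 1))

def fast_sum_of_squares(n):
    """Candidate replacement: closed-form formula, O(1)."""
    return n * (n + 1) * (2 * n + 1) // 6

def equivalent_on_samples(f, g, trials=1_000):
    """Cheap stand-in for an equivalence proof: agreement on random inputs."""
    return all(f(k) == g(k) for k in (random.randrange(0, 10_000) for _ in range(trials)))

# Backtrack: adopt the faster component only if it is (apparently) equivalent.
component = (fast_sum_of_squares
             if equivalent_on_samples(slow_sum_of_squares, fast_sum_of_squares)
             else slow_sum_of_squares)
print(component(1_000_000))
```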
