Intelligence has shaped almost everything we know and cherish. Human civilization stands above every other species on the planet not because of our muscles, the speed at which we can run, the height we can jump, or the power of our bite, but because of our ability to plan and reason: our intelligence is what sets us apart. It is to the 100 billion or so little neurons between our ears, and their trillions of synaptic connections, that we owe all our technological and scientific knowledge and our ability to organize socially on a scale no other mammal has achieved.
But this brain power, which has achieved so much, is now the main bottleneck to the further progress of human civilization. Our brains did not evolve to specialize in quantum mechanics, complex economics, aerospace engineering, or molecular modeling.
We can only barely scratch the surface of such fields due to an accident of evolution. Natural selection grew our intellect because tool use and complex planning were survival advantages; a brain that could do calculus was only a side effect, which is why we do it so poorly relative to even a simple handheld calculator.
But fortunately for those of us who crave technological and social progress, the bottleneck of our intelligence is not a fixed constant. We can now begin to imagine and build non-biological intelligence (AI) that can, in principle, process information and reason on a scale that far surpasses biological intelligence.
While we could potentially grow the biological intelligence at our disposal through drugs and genetic engineering, the prospects for non-biological intelligence appear far greater.
Non-biological intelligence holds several cards in its favour. Its clearest advantage over biological intelligence is vastly superior processing speed. An artificial neuron can run at least a million times faster than our biological ones, and it’s not clear that the speed of biological neurons could ever be enhanced. Non-biological intelligence could also run significantly better algorithms, and those algorithms could be rapidly improved and replaced, something that is near impossible with biological brains. The impact of improved algorithms could even rival that of improvements in processing speed and memory. Moreover, non-biological intelligence can be duplicated with ease, and each duplicate can benefit from all the accumulated knowledge of its ancestors, unlike our brains, which require decades of learning to perform complex tasks such as surgery competently. With these advantages in mind, the path to superintelligence (a general intelligence that greatly exceeds human intelligence in every sphere) is almost certainly through a non-biological approach.
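To make the speed advantage concrete, here is a back-of-the-envelope sketch. The figures used (a roughly 200 Hz peak firing rate for biological neurons and a million-fold artificial speedup) are illustrative assumptions for the sake of the arithmetic, not measurements.

```python
# Toy back-of-the-envelope comparison of biological vs. artificial signal speeds.
# All figures are rough, illustrative assumptions, not measurements.

BIO_NEURON_HZ = 200    # assumed peak firing rate of a biological neuron
SPEEDUP = 1_000_000    # assumed speed advantage of an artificial neuron

artificial_hz = BIO_NEURON_HZ * SPEEDUP

# At a million-fold speedup, one subjective year of human-level thought
# would pass in about half a minute of wall-clock time.
seconds_per_year = 365 * 24 * 3600
wall_clock_seconds = seconds_per_year / SPEEDUP

print(f"Artificial 'firing rate': {artificial_hz:,} Hz")
print(f"One subjective year of thought in ~{wall_clock_seconds:.1f} s of wall-clock time")
```

Under these assumptions, a mind running a million times faster would experience a subjective year in roughly 31.5 seconds, which is the intuition behind calling speed the clearest advantage.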
The avenues for developing superintelligence range from architectures with no biological inspiration, like those used in classic artificial intelligence and expert systems, to architectures such as neural networks that are deeply inspired by biology. The ultimate example of the biology-inspired approach is a whole-brain upload. A whole-brain upload would require scanning every single neuron and synapse of a human brain and then modeling them in a sufficiently powerful supercomputer, probably on the order of a few exaflops. The Blue Brain Project, based at the Swiss Federal Institute of Technology in Lausanne, is pursuing the whole-brain upload approach and has a budget of a billion euros from the European Union to complete the project.
While the whole-brain upload approach to superintelligence does seem likely to succeed given enough detail about the brain’s structure and enough computational power, it is also incredibly inefficient computationally. An approach that could produce superintelligence much sooner is to distill the underlying principles of intelligence, with inspiration from the human brain, and apply those principles to a non-biological system rather than copying the brain in detail. This is the approach that groups like DeepMind favour.
Once we develop a non-biological system that broadly matches human intelligence, the time required to progress to superintelligence could be strikingly short. This human-level non-biological intelligence would enjoy all the advantages discussed previously, giving it the potential to recursively self-improve its own architecture, creating a more intelligent version of itself with each generation. Each generation could arrive faster than the last, leading to what the legendary mathematician I.J. Good in 1965 called an “intelligence explosion.” This has also been described as the hard-takeoff scenario of the technological singularity.
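The compounding dynamic behind an intelligence explosion can be sketched with a toy model. The growth rule, the 10% per-generation gain, and the 1000x target below are purely illustrative assumptions, not predictions.

```python
# Toy model of recursive self-improvement: each generation redesigns itself
# to be a fixed percentage more capable than the last, so capability compounds.
# All parameters are illustrative assumptions only.

def generations_to_superintelligence(start=1.0, gain=0.10, target=1000.0):
    """Count generations until capability exceeds `target`, assuming each
    generation improves on its predecessor by `gain` * 100 percent."""
    level, gens = start, 0
    while level < target:
        level *= 1 + gain
        gens += 1
    return gens

# Even a modest 10% gain per generation closes a 1000x capability gap in a
# few dozen generations; if each generation also completes its redesign
# faster than the last, the wall-clock time collapses further.
print(generations_to_superintelligence())
```

The point of the sketch is only that compounding self-improvement, however modest per step, crosses any fixed threshold quickly, which is the core of Good's argument.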
Regardless of whether superintelligence arrives rapidly through an intelligence explosion or emerges from a decades-long process of gradual improvement, the outcome would be equally momentous in the story of human history, with an impact perhaps greater than that of the invention of language or fire. As I.J. Good wrote over half a century ago, superintelligence would likely be the last invention that humanity would ever need to make, since, by definition, it would be a better inventor, engineer, and scientist than any human.
Superintelligence would open up a panoply of theoretically possible technologies that our human brains find immensely difficult to create, such as atomically precise manufacturing (my personal favorite), fusion power, in vitro meat, virtual reality indistinguishable from reality, the terraforming of Mars and Venus, and the defeat of aging. A superintelligence might find these technological achievements no more difficult than we find an afternoon crossword puzzle.
The potential upside of superintelligence is wondrous to imagine, but the same power that makes superintelligence so transformative also makes it an existential risk. It’s imperative, in my view, that we not pretend the risks associated with superintelligence do not exist just because we are enthralled with the potential upside.
I fully understand the resistance of some AI researchers to discussions of existential risk, for fear that such discussions might breed misinformed hysteria among the general public and policymakers and slow progress through unnecessary regulation. Nevertheless, I believe that having researchers acknowledge the risks of superintelligence up front and work on solutions in public, as the Future of Life Institute is doing, helps mitigate the possible backlash and hysteria and increases our chances that superintelligence will be beneficial for all.