""Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb," he concludes. "We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.""
"Discussions at F.H.I. range from conventional philosophic topics, like the nature of compromise, to the optimal structure of space empires – whether a single intergalactic machine intelligence, supported by a vast array of probes, presents a more ethical future than a cosmic imperium housing millions of digital minds."
"The sense that a vanguard of technical-minded people working in obscurity, at odds with consensus, might save the world from auto-annihilation runs through the atmosphere at F.H.I. like an electrical charge."
"The term "extropy," coined in 1967, is generally used to describe life's capacity to reverse the spread of entropy across space and time. Extropianism is a libertarian strain of transhumanism that seeks "to direct human evolution," hoping to eliminate disease, suffering, even death; the means might be genetic modification, or as yet uninvented nanotechnology, or perhaps dispensing with the body entirely and uploading minds into supercomputers."
"He came to believe that a key role of the philosopher in modern society was to acquire the knowledge of a polymath, then use it to help guide humanity to its next phase of existence – a discipline that he called "the philosophy of technological prediction." He was trying to become such a seer."
"Back at the institute, he filled an industrial blender with lettuce, carrots, cauliflower, broccoli, blueberries, turmeric, vanilla, oat milk, and whey powder. "If there is one thing Nick cares about, it is minds," Sandberg told me. "That is at the root of many of his views about food, because he is worried that toxin X or Y might be bad for his brain." He suspects that Bostrom also enjoys the ritualistic display. "Swedes are known for their smugness," he joked. "Perhaps Nick is subsisting on smugness.""
""Yeah, this has got three horsepower," Bostrom said. He ran the blender, producing a noise like a circular saw, and then filled a tall glass stein with purple-green liquid. We headed to his office, which was meticulous. By a window was a wooden desk supporting an iMac and not another item; against a wall were a chair and a cabinet with a stack of documents. The only hint of excess was light: there were fourteen lamps."
"The view of the future from Bostrom's office can be divided into three grand panoramas. In one, humanity experiences an evolutionary leap – either assisted by technology or by merging into it and becoming software – to achieve a sublime condition that Bostrom calls "posthumanity." Death is overcome, mental experience expands beyond recognition, and our descendants colonize the universe."
"In another panorama, humanity becomes extinct or experiences a disaster so great that it is unable to recover. Between these extremes, Bostrom envisions scenarios that resemble the status quo – people living as they do now, forever mired in the "human era.""
"Bostrom introduced the philosophical concept of "existential risk" in 2002, in the Journal of Evolution and Technology. In recent years, new organizations have been founded almost annually to help reduce it – among them the Centre for the Study of Existential Risk, affiliated with Cambridge University, and the Future of Life Institute, which has ties to the Massachusetts Institute of Technology. All of them face a key problem: Homo sapiens, since its emergence two hundred thousand years ago, has proved to be remarkably resilient, and figuring out what might imperil its existence is not obvious."
"Life as we know it tends to spread wherever it can, and Bostrom estimates that, if an alien civilization could design space probes capable of travelling at even one per cent of the speed of light, the entire Milky Way could be colonized in twenty million years – a tiny fraction of the age difference between Kepler 452b and Earth."
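The twenty-million-year figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes a galactic diameter of roughly 100,000 light-years and a factor-of-two allowance for probes pausing to replicate along the way; neither of those numbers comes from the article, only the one-per-cent probe speed does.

```python
# Back-of-envelope check of the colonization timescale quoted above.
# The galactic diameter and the replication overhead are assumptions made
# here for illustration; only the probe speed comes from the passage.

GALAXY_DIAMETER_LY = 100_000       # assumed diameter of the Milky Way, in light-years
PROBE_SPEED_FRACTION_C = 0.01      # "one per cent of the speed of light"
REPLICATION_OVERHEAD = 2.0         # assumed slowdown from pausing to build new probes

travel_time = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C
total_time = travel_time * REPLICATION_OVERHEAD

print(f"Pure crossing time: {travel_time:,.0f} years")      # 10,000,000
print(f"With replication stops: {total_time:,.0f} years")   # 20,000,000
```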
"Even so, because the universe is so colossal, and because it is so old, only a small number of civilizations would need to behave as life does on Earth – unceasingly expanding – in order to be visible. Yet, as Bostrom notes, "You start with billions and billions of potential germination points for life, and you end up with a sum total of zero alien civilizations that developed technologically to the point where they become manifest to us earthly observers. So what's stopping them?""
"In 1950, Enrico Fermi sketched a version of this paradox during a lunch break while he was working on the H-bomb, at Los Alamos. Since then, many resolutions have been proposed – some of them exotic, such as the idea that Earth is housed in an interplanetary alien zoo. Bostrom suspects that the answer is simple: space appears to be devoid of life because it is."
"This implies that intelligent life on Earth is an astronomically rare accident. But, if so, when did that accident occur? Was it in the first chemical reactions in the primordial soup? Or when single-celled organisms began to replicate using DNA? Or when animals learned to use tools? Bostrom likes to think of these hurdles as Great Filters: key phases of improbability that life everywhere must pass through in order to develop into intelligent species. Those which do not make it either go extinct or fail to evolve."
"Thus, for Bostrom, the discovery of a single-celled creature inhabiting a damp stretch of Martian soil would constitute a disconcerting piece of evidence. If two planets independently evolved primitive organisms, then it seems more likely that this type of life can be found on many planets throughout the universe. Bostrom reasons that this would suggest that the Great Filter comes at some later evolutionary stage. The discovery of a fossilized vertebrate would be even worse: it would suggest that the universe appears lifeless not because complex life is unusual but, rather, because it is always somehow thwarted before it becomes advanced enough to colonize space."
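The reasoning in this passage is, in effect, a Bayesian update, and a toy calculation makes the shape of the argument concrete. Every prior and likelihood below is an invented illustrative value, not Bostrom's; the only assumption carried over from the text is that independently evolved microbes are far likelier to turn up if simple life is easy, that is, if the Great Filter sits at some later stage.

```python
# Toy Bayesian version of the Great Filter argument above.
# Every number here is invented for illustration; none come from the article.

prior_early = 0.5   # prior: the Filter is the origin of simple life (behind us)
prior_late = 0.5    # prior: the Filter is at some later stage (possibly ahead of us)

# Assumed chance of finding independently evolved Martian microbes under each
# hypothesis: rare if simple life is the hard step, common if the hard step comes later.
p_mars_given_early = 0.01
p_mars_given_late = 0.50

evidence = p_mars_given_early * prior_early + p_mars_given_late * prior_late
posterior_late = p_mars_given_late * prior_late / evidence

print(f"P(Filter is late | microbes on Mars) = {posterior_late:.2f}")  # about 0.98
```

On these invented numbers, a single Martian microbe moves most of the probability onto a late, and possibly still future, filter; that is why Bostrom calls the discovery disconcerting.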
"In Bostrom's view, the most distressing possibility is that the Great Filter is ahead of us – that evolution frequently achieves civilizations like our own, but they perish before reaching their technological maturity."
"Why might that be? "Natural disasters such as asteroid hits and super-volcanic eruptions are unlikely Great Filter candidates, because, even if they destroyed a significant number of civilizations, we would expect some civilizations to get lucky and escape disaster," he argues. "Perhaps the most likely type of existential risks that could constitute a Great Filter are those that arise from technological discovery. It is not far-fetched to suppose that there might be some possible technology which is such that (a) virtually all sufficiently advanced civilizations eventually discover it and (b) its discovery leads almost universally to existential disaster.""
"It was in this milieu that the "intelligence explosion" idea was first formally expressed by I. J. Good, a statistician who had worked with Turing. "An ultraintelligent machine could design even better machines," he wrote. "There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.""
"Bostrom wrote his first paper on artificial superintelligence in the nineteen-nineties, envisioning it as potentially perilous but irresistible to both commerce and government. "If there is a way of guaranteeing that superior artificial intellects will never harm human beings, then such intellects will be created," he argued. "If there is no way to have such a guarantee, then they will probably be created nevertheless.""
"The book is its own elegant paradox: analytical in tone and often lucidly argued, yet punctuated by moments of messianic urgency. Some portions are so extravagantly speculative that it is hard to take them seriously. ("Suppose we could somehow establish that a certain future AI will have an IQ of 6,455: then what?") But Bostrom is aware of the limits to his type of futurology."
"The book begins with an "unfinished" fable about a flock of sparrows that decide to raise an owl to protect and advise them. They go looking for an owl egg to steal and bring back to their tree, but, because they believe their search will be so difficult, they postpone studying how to domesticate owls until they succeed. Bostrom concludes, "It is not known how the story ends.""
"To a large degree, Bostrom's concerns turn on a simple question of timing: Can breakthroughs be predicted?"
"The history of science is an uneven guide to the question: How close are we? There has been no shortage of unfulfilled promises. But there are also plenty of examples of startling nearsightedness, a pattern that Arthur C. Clarke enshrined as Clarke's First Law: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.""
"the field had experienced a revolution, built on an approach called deep learning – a type of neural network that can discern complex patterns in huge quantities of data."
"But, beginning in 2010, the increasing availability of Big Data and cheap, powerful video-game processors had a dramatic effect on performance. Without any profound theoretical breakthrough, deep learning suddenly offered breathtaking advances."
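The article describes deep learning only at arm's length. As a rough illustration of the underlying idea, a layered network nudging its weights until its outputs fit the data, here is a minimal two-layer network trained on a toy problem with plain NumPy. It is a sketch of the general technique, not of any system mentioned in the piece.

```python
# Minimal two-layer neural network trained by gradient descent on XOR.
# Illustrative only: real deep-learning systems have many more layers,
# vastly more data, and run on the GPUs the passage mentions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)      # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)      # output layer

for step in range(5000):
    # Forward pass: two layers of weighted sums and nonlinearities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error for every weight.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * (1 - h ** 2)

    # Gradient-descent step.
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_hid)
    b1 -= 0.5 * d_hid.sum(axis=0)

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```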
"Early in Bostrom's career, he predicted that cascading economic demand for an A.I. would build up across the fields of medicine, entertainment, finance, and defense. As the technology became useful, that demand would only grow. "If you make a one-per-cent improvement to something – say, an algorithm that recommends books on Amazon – there is a lot of value there," Bostrom told me. "Once every improvement potentially has enormous economic benefit, that promotes effort to make more improvements.""
"DeepMind was started in 2011 to build a general artificial intelligence. Its founders had made an early bet on deep learning, and sought to combine it with other A.I. mechanisms in a cohesive architecture. In 2013, they published the results of a test in which their system played seven classic Atari games, with no instruction other than to improve its score. For many people in A.I., the importance of the results was immediately evident. I.B.M.'s chess program had defeated Garry Kasparov, but it could not beat a three-year-old at tic-tac-toe. In six games, DeepMind's system outperformed all previous algorithms; in three it was superhuman. In a boxing game, it learned to pin down its opponent and subdue him with a barrage of punches."
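The detail that the system was given "no instruction other than to improve its score" is a description of reinforcement learning: the program tries actions, observes the resulting score, and gradually repeats whatever raised it. DeepMind paired this with deep learning; the sketch below shows only the reward-driven loop, using tabular Q-learning on a made-up two-state game rather than an Atari emulator or a neural network.

```python
# Tabular Q-learning on a trivial made-up game: in each of two states the
# agent picks one of two actions and receives a reward. Nothing tells it
# which action is good; it learns purely from the score, as in the passage.
# This illustrates the general idea only; it is not DeepMind's system.

import random

random.seed(0)
N_STATES, N_ACTIONS = 2, 2
# Hypothetical reward table: action 1 pays off in state 0, action 0 in state 1.
REWARDS = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]   # learned action values
alpha, gamma, epsilon = 0.1, 0.9, 0.1              # learning rate, discount, exploration

state = 0
for step in range(5000):
    # Mostly exploit the best-known action, occasionally explore at random.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])

    reward = REWARDS[(state, action)]
    next_state = (state + 1) % N_STATES            # toy transition rule

    # Q-learning update: nudge the estimate toward reward + discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # the higher value in each row should mark the rewarding action
```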
"Weeks after the results were released, Google bought the company, reportedly for half a billion dollars. DeepMind placed two unusual conditions on the deal: its work could never be used for espionage or defense purposes, and an ethics board would oversee the research as it drew closer to achieving A.I."
"DeepMind's chief founder, Demis Hassabis, described his company to the audience at the Royal Society as an "Apollo Program" with a two-part mission: "Step one, solve intelligence. Step two, use it to solve everything else.""
"F.H.I. was about to receive one and a half million dollars from Elon Musk, to create a unit that would craft social policies informed by some of Bostrom's theories."
"Bostrom, in his most hopeful mode, imagines emulations not only as reproductions of the original intellect "with memory and personality intact" – a soul in the machine – but as minds expandable in countless ways. "We live for seven decades, and we have three-pound lumps of cheesy matter to think with, but to me it is plausible that there could be extremely valuable mental states outside this little particular set of possibilities that might be much better," he told me."
""What I want to avoid is to think from our parochial 2015 view – from my own limited life experience, my own limited brain – and super-confidently postulate what is the best form for civilization a billion years from now, when you could have brains the size of planets and billion-year life spans. It seems unlikely that we will figure out some detailed blueprint for utopia. What if the great apes had asked whether they should evolve into Homo sapiens – pros and cons – and they had listed, on the pro side, 'Oh, we could have a lot of bananas if we became human'? Well, we can have unlimited bananas now, but there is more to the human condition than that.""