What Kurzweil did was to chart a number of proxies for technological development. As with Moore's Law, they are all following geometric progressions that are becoming completely apparent, particularly as we track the present cascade of knowledge production.
Thus certain obvious objectives are now in sight. Recall that in 1970, others and I had no difficulty anticipating the desktop by 1980, and the internet, if not the form it would take, before the turn of the century. I understood the latter would lead immediately to an explosion in research results as the knowledge industry suddenly stopped duplicating effort and became near instantaneous. We are living through that now.
For me, the pleasant surprise is that we have not broken pace since 1970 and are today closer to 2045 than to 1970. I no longer think we will break pace. The next twenty years will see a doubling of human talent applied to technology in general. By then we will have machine-supported research underway.
It is reasonable to presume that by 2045 we will have the GOD machine in place and that it will be communicating with its equivalent operating outside of Earth. Long before that we will have our own magnetic field exclusion vessels (MFEV) with which to visit already existing space habitats in the Solar System.
Read the following for some sense of just how fast this is all happening. It is a must-read if you expect to be alive thirty-five years from now.
2045: The Year Man Becomes Immortal
By LEV GROSSMAN, Thursday, Feb. 10, 2011
Technologist Raymond Kurzweil has a radical vision for humanity's immortal future
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.
True? True.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.
Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.
People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers interdisciplinary courses of study for graduate students and executives, is hosted by NASA; Google was a founding sponsor.
The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."
By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1 — and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.
But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?
Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting — you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.
As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond — the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
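To make that distinction concrete, here is a minimal Python sketch of the kind of extrapolation described above: a metric such as MIPS per $1,000 that doubles on a fixed schedule, compared against a straight-line trend. The starting value and the two-year doubling period are illustrative assumptions, not Kurzweil's actual data.

```python
def projected_value(start_value: float, doubling_period_years: float, years_ahead: float) -> float:
    """Exponential growth: start_value * 2**(years_ahead / doubling_period_years)."""
    return start_value * 2 ** (years_ahead / doubling_period_years)


if __name__ == "__main__":
    mips_per_1000_dollars = 10_000.0   # hypothetical starting figure, not Kurzweil's data
    doubling_period = 2.0              # assumed Moore's-law-style doubling time, in years

    for years in (10, 20, 40):
        # a "straight line" trend adds the same increment each period instead of multiplying
        linear = mips_per_1000_dollars * (1 + years / doubling_period)
        exponential = projected_value(mips_per_1000_dollars, doubling_period, years)
        print(f"{years:>2} years out: linear ~{linear:,.0f}  vs  exponential ~{exponential:,.0f} MIPS per $1,000")
```

Over 40 years the linear trend grows about twentyfold while the exponential one grows about a millionfold, which is the gap Kurzweil's argument turns on.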
Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."
Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he's not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
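As a rough sanity check on that figure, a billionfold increase is about 30 doublings (2^30 is roughly 1.07 billion), so the date you arrive at depends heavily on the doubling time you assume. The sketch below uses assumed doubling times of one to two years purely for illustration; they are not Kurzweil's published estimates.

```python
import math

# "About a billion times" is roughly 30 doublings, since 2**30 ≈ 1.07e9.
doublings_needed = math.log2(1e9)   # ≈ 29.9

# Assumed price-performance doubling times, for illustration only.
for doubling_time_years in (1.0, 1.5, 2.0):
    years = doublings_needed * doubling_time_years
    print(f"doubling every {doubling_time_years:.1f} yr -> a billionfold gain in ~{years:.0f} years")
```

With a one-year doubling time the billionfold mark lands about 30 years out; with two years it takes about 60, which shows how sensitive any target date is to the assumed rate.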
The Singularity isn't just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.
Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.
At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading — the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters — handed out pamphlets. An android chatted with visitors in one corner.
After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.
For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.
Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."
Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.
But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity — that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."
Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.
The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies — HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.
Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.
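Bray's combinatorial point can be illustrated with a toy calculation: if one protein molecule has several modification sites and each site can sit in one of a few chemical states, the number of distinct overall states multiplies with every site. The site and state counts below are made-up round numbers, not figures from his talk.

```python
# Hypothetical round numbers, not measurements from Bray's talk.
modification_sites = 10   # sites on one protein molecule that can be chemically modified
states_per_site = 3       # e.g. unmodified / phosphorylated / methylated (assumed)

protein_states = states_per_site ** modification_sites   # 3**10 = 59,049
print(f"one such protein molecule: {protein_states:,} distinct states")
print("one bit in a digital computer: 2 distinct states")
```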
Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being — in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness — a machine with no ghost in it? And how would we know?
Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?
Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."
This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene supercomputer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)
By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."
In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.
Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.
But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?
Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.
A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.
But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
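Taking the article's round numbers at face value, that comparison implies roughly a billionfold gain in computing per dollar over 40 years, or a doubling about every 1.3 years. The quick calculation below, using only those illustrative figures, shows what carrying the same trend forward another 40 years would mean.

```python
import math

# The article's round figures: a millionth the price, a thousand times the power, over 40 years.
price_factor = 1e6
power_factor = 1e3
years = 40

improvement = price_factor * power_factor        # ~1e9 gain in computing per dollar
doubling_time = years / math.log2(improvement)   # ~1.3 years per doubling

print(f"computing per dollar: ~{improvement:.0e}x over {years} years")
print(f"implied doubling time: ~{doubling_time:.1f} years")
print(f"the same trend for another {years} years would multiply it by another ~{improvement:.0e}x")
```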