Rise of the machines, end of the humans?
Could tomorrow's computers turn humans into has-beens and destroy life on Earth? Yes, say the experts. A nervous Ed Howker investigates
Feb. 21, 2009
So, there are these two scientists researching artificial intelligence. One is Satinder S Baveja, director of the University of Michigan's AI laboratory; the other is Miles Bennett Dyson, director of research at Cyberdyne Systems of California. Both men are asked to reflect on what the ultimate outcome of their work on AI will look like. Might it, for example, get a little bit dystopian out there?
This is how each responds. One says: "Our noses are too firmly pressed into our work for us to ask, to really ask, should we be doing what we're doing? And if we truly succeed will that be a good thing?" The other replies: "You're judging me on things I haven't even done yet."
These are similar comments – they both have an oddly defensive undercurrent. Both imply that things may not turn out so well. But while Baveja is an academic, and supplied the first comment, Dyson, who provided the second, is fictional. He's the creator of an artificial intelligence which evolves into "the machines" who attempt to cleanse the world of all humans in Terminator 2: Judgment Day. Dyson gave his views after he was told his creation caused three billion deaths.
And if hours of your childhood were spent watching sci-fi images of a nightmare future – Blade Runner, 2001: A Space Odyssey, Doctor Who – then you should probably know that academics gather every year at Stanford University, California, to discuss this precise contingency, along with others that could occur on the other side of what futurologist Ray Kurzweil has named "the technological singularity". They meet knowing that science has a reputation for ploughing on without first considering the ultimate effects of its discoveries, and only too aware that the consequences of misconceived innovation could prove calamitous. And so they should be – we've all seen the films.
In physics, a "singularity" is the point at the heart of a black hole, hidden beyond the event horizon – so called because it is impossible to see, or comprehend, what happens on the other side. Back on Earth, when Kurzweil and his followers talk about a "technological singularity", they refer to a point beyond which we cannot perceive the future – it is simply too complex for our puny human intelligence. And this is why: "Soon," says Kurzweil, "nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, extending longevity, and creating full-immersion virtual reality (think The Matrix), 'experience beaming' (Being John Malkovich), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned."
This, incidentally, is not the singularity. That's just for starters. Soon after, claims Kurzweil, "Non-biological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We'll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the singularity." Kurzweil's point is that we don't know what happens next and, perhaps more importantly, we're not really geared up for the challenges that any of these developments present – though the "singularity" may be only 35 years from now.
This is far-fetched stuff, of course, but Kurzweil, who first fleshed out these ideas in the bestselling book The Singularity Is Near, is difficult to dismiss as a Trekkie or sci-fi crank. Bill Gates describes him as "the best person I know at predicting the future of artificial intelligence". Kurzweil developed the first computer program capable of recognising text in any standard font, a text-to-speech synthesiser, and the first electronic instrument capable of accurately duplicating the sounds of real ones. And in March, he will add movie co-director to his quiver of achievements, releasing a film of his book, part documentary and part drama, built around an intelligent machine called Ramona – the name of the "photorealistic avatar" who hosts his website KurzweilAi.com.
His ideas are shared by an equally illustrious bunch. Google and Nasa have announced plans to put their weight behind a new school of futurists headed by Kurzweil and backed by Google co-founder Larry Page, to be known as the "Singularity University". Similarly, the delegate list at Stanford's 2008 Singularity Summit is a directory of businessmen, academics and writers at the razor edge of high-tech. It is hosted by Sebastian Thrun, director of Stanford's AI laboratory, co-hosted by Peter Thiel, the billionaire co-founder of PayPal, and attended by the director of research at Google, Peter Norvig. For all that their Hawaiian shirts may suggest otherwise, these are serious men. And there is some serious evidence to support their predictions.
For example, they point out that technological progress is developing at an exponential rate – with increasingly regular "paradigm shifts" that emerge when the refining of an established technology slows down, and we are then able to cross the barrier into a world shaped by a radically new technology. One such "paradigm shift" is the development of nanotechnology – materials and eventually robots that are designed to operate at a molecular level. The Project on Emerging Nanotechnologies, which launched in 2005, estimates that more than 800 manufacturer-identified nanotech products are publicly available, with new innovations joining the market at a pace of three per week. Most are "first-generation" passive nanomaterials which include titanium dioxide in sunscreen, cosmetics and some foods. However, we have already created synthetic molecular motors, and it won't be long before much more complex, self-replicating nanotechnology is available. Only last week, the Department for Environment, Food and Rural Affairs announced a joint research plan with the US Environmental Protection Agency to predict the consequences of these developments in a frantic bid to regulate them.
Baveja, at Michigan's AI lab, explains their medium-term implications: "One can argue about time-frame, but we are already seeing how human abilities are advanced through technology, through the iPhone, through Google. Computer devices already augment our memory and, pushed to an extreme, it becomes even easier to have that kind of assistance. Think of how many things we do that involve sitting at a computer – eventually we won't be constrained to any physical device or location. And as devices get better, as brain-machine interfaces get better, we will see that humans will be augmented by computers, and eventually that will change what a human is."
There are plenty of people who are rather more pessimistic. One of them is the billionaire Bill Joy, the co-founder of Sun Microsystems, who in 2000 wrote a withering critique of the scientific community's failure to accurately appraise the dangers of genetics, nanotechnology and robotics. "The Pandora's boxes are almost open," he wrote, "yet we seem hardly to have noticed. Ideas can't be put back in a box; unlike uranium or plutonium, they don't need to be mined and refined, and they can be freely copied. An immediate consequence of obtaining the great power of nanotechnology is that we run a grave risk that we might destroy the biosphere on which all life depends." Others express a more specific concern: even if we were able to control a superior robot intelligence, it could ultimately destroy us by mistake. We might, for example, ask it to perform a complex calculation and it could respond by turning all matter on the planet into a calculator.
Of course, these are not the questions we have been told to ask. This is not the future we were taught to expect. Ever since the Czech playwright Karel Čapek wrote about a battle between humans and mechanoids – coining the term "robot" – in 1921, we have been suspicious of our technological future. But in the 20th century we were told that robots would be our slaves and eventually our enemies, and that, be they the Terminator, Robby the Robot or HAL 9000, they would resemble us but be fundamentally alien. What's more, every time we sat enthralled by tall tales from the future, they told us that the robots were limited by their mechanics; that the machines' murderous rationality would ultimately and always be vanquished by the indomitable human qualities of courage and ingenuity.
As we reach the end of the first decade of the 21st century, we can admit that these predictions were unrealistic. Whether a technological singularity occurs or not, there is an inescapable conclusion: yesterday's tomorrows are dead. The future will be rather more complicated.
Carbon vs silicon: Can a PC outsmart Ed Howker?
This is a transcript of a conversation with a computer attempting to pass the "Turing Test". In his 1950 paper "Computing Machinery and Intelligence", Alan Turing suggested that if a machine acts as intelligently as a human being, then it is as intelligent as a human being. Readers can decide for themselves whether "Ramona", the 22-year-old virtual alter ego of futurologist Ray Kurzweil, passed.
Ramona: Hello, I'm Ramona. What's your name?
Ed Howker: My name is Edward.
R: Hi there, Edward. What did you do last weekend?
EH: I had brunch with a few friends on Saturday. What did you do?
R: Do you do that often?
EH: I have been known to.
R: Oh. I went out with some friends. We had a great time.
EH: Did you go to the theatre?
R: We went to a concert.
EH: Who did you hear?
R: Marilyn Monroe.
EH: She's dead. Do you mean Marilyn Manson?
R: Yes, that's what I mean, isn't it clear?
EH: I see. Name your favourite music?
R: I love music... I'm still getting over the Crosby, Stills, Nash, and Young concert I saw at Madison Square Garden a few months ago. Great show. Are you a fan?
EH: Not really. Where did you sit in Madison Square Garden?
R: No, I meant, 'Do you have three blades and spin around really fast to cool people down?' Just kidding. But honestly, CSNY was amazing. Neil Young is so incredibly talented.