Technology that would just as soon kill you.
Is the day coming when your toaster might try to kill you?
That may depend on how smart the toaster is. One day, it makes your breakfast, and the next it decides it wants to sleep in, or else.
Do toasters dream of electric sheep?
At two conferences held recently in California, researchers in computing and artificial intelligence speculated about a future in which our computers are as smart as we are — or possibly much smarter.
"These are powerful technologies that could be used in good ways or scary ways," said Eric Horvitz, quoted in The Sunday Times of London. Horvitz should know all about scary technologies. He's a principal researcher at Microsoft, which gave us Windows Vista.
We can't say we weren't warned. From HAL 9000 to the replicants of "Blade Runner" to the Cylons to Skynet, science fiction is full of super-intelligent computers, robots and androids who rebel against their human creators.
Then there is the 1970 film "Colossus: The Forbin Project," in which a supercomputer decides to take over the world, for humanity's own good, naturally.
"In time," Colossus says, "you will come to regard me not only with respect and awe, but with love."
With computers growing ever faster and more complex, such doomsday fantasies could become real. According to Moore's law, named for Intel co-founder Gordon E. Moore, the processing power of computers doubles about every two years. Whether that trend will continue is a subject of debate, but some futurists think it will.
Futurist and inventor Ray Kurzweil believes we are about 30 years away from creating a human-level artificial intelligence, a computer just as smart as we are.
If both Kurzweil and Moore are correct, things could get interesting, and soon. In his 2008 book "Future Imperfect: Technology and Freedom in an Uncertain World," David D. Friedman writes, "In forty years, that makes them (computers) something like 100 times as smart as we are. We are now chimpanzees — perhaps gerbils — and had better hope that our new masters like pets."
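The compounding here is worth making explicit. A back-of-the-envelope sketch, assuming a clean two-year doubling (real hardware only roughly follows the trend):

```python
# Back-of-the-envelope Moore's law arithmetic: processing power
# doubling every two years (an empirical trend, not a physical law).
def moores_law_multiplier(years, doubling_period=2):
    """Return the raw processing-power multiplier after `years`."""
    return 2 ** (years / doubling_period)

# Kurzweil's ~30-year horizon: 2**15, about 32,000 times today's power.
print(moores_law_multiplier(30))   # 32768.0
# Friedman's 40-year horizon: 2**20, roughly a million-fold.
print(moores_law_multiplier(40))   # 1048576.0
```

Note that raw processing power is a crude proxy for intelligence, which may be why Friedman's "100 times as smart" is far more conservative than the million-fold raw-power figure the doubling alone would suggest.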
Science-fiction writer Vernor Vinge coined a name for that tipping point: the singularity. It's the moment at which superhuman intelligences, rather than humans, are driving technological advancement. Each generation of machines creates another that's even smarter.
In a 1993 article, Vinge writes, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
That's all well and good if the future's super-intelligent robots look like Tricia Helfer, Grace Park and Lucy Lawless — the "Battlestar Galactica" scenario — but there's still the danger they'll decide to nuke us from orbit (because it's the only way to be sure).
Kurzweil is an optimist. He thinks we can beat the robots at their own game. Find a way for human brains to connect to machines, and humanity can take advantage of Moore's law, too.
Yes, mankind's fate may hinge on us becoming cyborgs. But why stop there? We could, as science-fiction author Ken MacLeod has speculated, upload our minds into cyberspace, leaving our flesh to go the way of all flesh, while achieving technological immortality, at least until the universe reaches heat death. Then the lights go out permanently.
Stopping technological advancement isn't an option, but if becoming a cyborg seems too extreme, we could try to build something like Isaac Asimov's Three Laws of Robotics into our super-intelligent machines.
The only problem with that, as anyone who has read Asimov's stories knows, is the Three Laws often cause as many problems as they solve.
Maybe it's best just to hope we end up ruled by androids who look like Lucy Lawless. I, for one, welcome our new Cylon overlords.