I started writing a post for this blog yesterday. The first three words were "horses aren't human." That should have warned me. But I went on writing - and ended by posting it in another blog. ("Space Aliens and Killer Monster Robots - From Outer Space; or Pittsburgh," Apathetic Lemming of the North (January 25, 2010))
I want to take another crack at some of the ideas from that post. I've got more time to work with today, so this post may be briefer.1
Anyway, I'm cutting out the discussion of space aliens: the squishy kind, at least.
Robots From Outer Space
The 'Menace of the Monster Killer Robots From Pittsburgh' thing (the author does not use that phrase) started with a valid point made in the article.
The first space aliens we meet may be machines.
That's likely enough. If there were Martians, their first contact with humanity would have been one of the landers we've sent. Thinking machines - even a world of "robots" - aren't a particularly new idea. ("Men, Martians and Machines," Eric Frank Russell (1955), for example - and that built on established conventions)
The idea that humanity's first encounter with extraterrestrial intelligence may involve robots from another planet is a valid point. I think the author of "What Will Aliens Really Look Like?," Space.com (July 16, 2009) was right to discuss the possibility of machine intelligence in the article.
On the other hand, here's how the author - SETI Institute's Senior Astronomer, Seth Shostak - leads into his discussion of machines as people.
"...Well, using our own experience as a guide, consider a human development that seems likely to take place sometime in the 21st century: we'll invent machine intelligence. Some futurists figure this dismaying development will take place before 2050. Maybe it will take twice that long. It doesn't matter. By 2100, our descendants will note that this was the century in which we spawned our successors...."
I don't know how old Seth Shostak is. If he's even close to my age, he really should know better.
Repeating the last sentence in that paragraph:
"By 2100, our descendants will note that this was the century in which we spawned our successors."
Artificial Intelligence is (Still) Just Around the Corner
I remember when "2001: A Space Odyssey" hit the silver screen. The year was 1968. Even then, the HAL 9000 computer was a science fiction staple: an intelligent, sentient, self-aware computer. Who was insane. Homicidally so.
Tasks like designing a computer that actually thinks on its own, or a visual system that handles information in real time the way a human being's visual cortex does, seemed fairly straightforward.
Until scientists and technicians tried making one.
42 years later, we've got dancing robots and a robot that solves sudoku puzzles.
But I'm no more concerned that the next desktop computer I buy will enslave me, than I think it's likely that Google is planting subliminal commands in that plain white background of theirs. (Now that would make a story!)
Ooooh! Scary! Or, not.
Now I read that we'll have devices like the HAL 9000 computer and C-3PO in fifty years. Or so.
And that they'll take over.
A Person Can Learn a Lot From the Movies
I'm quite sure that the SETI Institute's Senior Astronomer didn't learn astronomy from watching the movies. On the other hand, it looks like he grew up in the American culture. Western culture, anyway. And assimilated at least some of its beliefs and attitudes.
Optimism hasn't been fashionable for quite a while now.
Why Mention the Movies?
English-language motion pictures have been a major part of American culture for generations. Whether they shape the culture, or the culture shapes them, or both, is a debatable point.
Either way, I think movies are a pretty good indicator for what the culture was like when they were made: for everything from haircuts to attitudes and assumptions.
I've learned that people feared biological warfare and killer bees. And that the bees were likely to make a nuclear reactor explode. Even if we survived that, we'd probably be a handful of desperate survivors in an apocalyptic post-nuclear-holocaust wasteland, beset by monster frogs.
Or be stuck in a high-end resort and have to shoot it out with a robot gunslinger.
It wasn't all doom and gloom in the movies, of course. There was "Star Wars" in 1977: but that was merely escapist entertainment. Not serious at all. And "Hell Comes to Frogtown" was? Never mind. I don't think anyone took that one seriously.
The message from the movies - including movies with a message - has been that the future, if any, is bleak. Also that computers, robots - just about anything invented after maybe 1930 - is dangerous and malevolent.
Here's a short list of what's been on the minds of America, in the movies:
- "2001: A Space Odyssey" (1968)
- A big black artifact
- Insane computer kills crewmates
- One of my favorite films
- Not much of a plot
- But the sets were well-thought-out
- Up to that time, a rarity in science fiction films
- "Colossus: The Forbin Project" (1970)
- Massive supercomputer is built
- Takes over the world
- Somewhat plausible, given a willing suspension of disbelief
- "Westworld" (1974)
- Killer robots
- A whole resort full of killer robots
- Moderately well-thought-out, in my opinion
- "Logan's Run" (1976)
- A nice, neat, orderly society
- Where life is groovy
- Until you hit 30
- Then you die
- Well, you can't have everything
- Then a crazed cop kills the master computer
- "Star Wars" (1977)
- Comedy-relief robots
- C-3PO, human-cyborg relations
- R2-D2, astromech droid
- Light sabers
- Evil emperor
- Farm kid
- "The Terminator" (1984)
- Evil computer mastermind
- Determined killer cyborg
- Threat of nuclear apocalypse
- "The Matrix" (1999)
- Humanity makes an artificial intelligence
- That takes over the world
These movies were drawing on a venerable tradition that included "Robot Monster" (1953) and "The Phantom Creeps."2
Back to the topic at hand.
HAL 9000, Skynet, and The Matrix
Remember the Luddites? They were a charming lot of 19th-century English workmen who broke machines. Give me a running start, and I'll use my college education to show them as downtrodden workers striving heroically to maintain their livelihoods.
I'll grant that they were under a lot of stress. And stressed-out people can do odd things. Luddites made such an impression that "Luddite" now means "any opponent of technological progress." (Princeton's WordNet)
It's hard not to see Luddite feelings in the way technology is perceived these days.
I mean to say: "By 2100, our descendants will note that this was the century in which we spawned our successors."
Sure: Colossus took over the world, with a little help; HAL killed all but one astronaut on the Discovery; and Skynet was a really scary mastermind in the Terminator movies.
The premise of "The Matrix" may not be all that daft: that an artificial intelligence took over the world, and kept human beings alive because we make pretty good batteries.
Don't get me wrong: I'm not putting down any of these movies. 2001 and Star Wars are favorites of mine. Anyway, I don't object to entertainment on principle.
But I try to make distinctions between what makes for a good story, and what's plausible. And AI that takes over the world - or wants to - seems a little far-fetched. Worse, from a writer's point of view, malevolent artificial intelligence is showing at least as much wear and tear as post-nuclear-apocalypse settings. In my opinion.
I suppose the "familiarity breeds contempt" principle applies here. I've been working with computers for over two decades now. The iron idiots can be frustrating, fascinating, and fast. But a threat? Not really.
It might be different, if artificial intelligence hadn't been 'just around the corner' for the last four decades: and if the existing artificial intelligence, like language translation tools, was more - intelligent.
What's So Strange About Cyborgs?
Maybe I don't have the culturally-appropriate fear and dread of machine intelligence because I'm not entirely human. Sort of.
- A few of my teeth are still original equipment
- But a fair portion of what I chew with is artificial
- There's metal and plastic where my hip joints used to be
- A plastic mesh held my belly together after some work was done in there
- I'm focusing on my computer's monitor with a clip-on set of lenses
All of that's nothing unusual at all. Now.
Which is my point. I look as human as my ancestors, a thousand years back: providing I take my glasses off and keep my mouth closed. But important parts of me are machinery of one sort or another.
Even my brain's been altered, chemically.
I was diagnosed with major depression a few years ago. Thanks to medication, I don't have to constantly fight the controls to think clearly - for the first time in over 45 years.
And I have no problem with that.
My distant ancestors, some of them, might have been freaked out to learn what has been done to me: but I like being able to walk without pain, see clearly, chew my food and have the insides of my abdomen stay where they belong - inside. I see the artificial parts of me as repairs and enhancements.
Certainly not as being "taken over" by machinery.
I'm not a cyborg. Not in the sense of "a human being whose body has been taken over in whole or in part by electromechanical devices" (Princeton's WordNet). But partly artificial? Yes.
And, these days, there's nothing unusual about that. Think of all the people with pacemakers in their chests.
Intel says it's coming out with brain chips and neural interfaces in about ten years. I believe them. This isn't "artificial intelligence," by the way, not unless you think human beings with artificial parts aren't human any more.
When those brain chips hit the market, and people with missing limbs control their prosthetics (and, I trust, get feedback) through their own nervous system, they'll be cyborgs. So will stroke victims whose damaged or destroyed circuits are replaced with artificial ones.
Scary? Maybe. So is the thought of driving without my glasses on.
I don't think technology is bad, by itself. What we do with it, that's something else:
Will Brain Implants Be Misused?
"What is that, a trick question? Of course they'll be misused. People misuse things. People have killed other people with rocks. That doesn't make the rocks bad.
"Direct neural interfaces are a new technology, and there'll almost certainly be an awkward period while we learn how to use them, and set up rules so that everybody's more-or-less on the same page about how they should be used.
"But, I'm looking forward to the things...."
(Apathetic Lemming of the North (December 2, 2009))
I don't think that Science and Technology (capitalized, of course) will Solve All Our Problems. But I'm not afraid of science and technology.
Artificial Intelligence, C-3PO and Fido
Another reason I'm not worried about 'spawning our successors' is that we've already done something like developing artificial intelligence.
You've heard the saying: 'Dogs are man's best friend.' With a few psychotic exceptions, there's something to it. Dogs, as a rule - at least the ones I've known - like people.
Wolves? Not so much. A (very) few people have kept wolves as pets. It helps that wolves are pack animals. I'll be back to that idea.
We recently discovered that dogs are mutant wolves. Something happened, probably over a hundred thousand years ago, to the genes of a few wolves. Their offspring weren't as bright as your average wolf. And they just simply doted on human beings.
Coincidence? Maybe. But I doubt it.
Sure, people were "primitive" back then. White lab coats, test tubes, and electron microscopes wouldn't be invented for maybe 5,000 generations. Even so, it's hard for me to imagine that a breed of stupid wolves that love human beings 'just happened' to pop into existence and start following my ancestors around.
I think we made dogs. "Domesticated," if you prefer.
I also think that it helped - a lot - that wolf packs and human families work (very roughly) the same way. A wolf cub could bond with a human family in very nearly the same way that he or she would have bonded with the pack.
But I think we're the reason dogs are so, well: dog-like. Eager to help. Willing to chase things for us. Intensely protective of the family. Not overly bright, maybe: but with humans around they don't need to be.
We won't develop artificial intelligence that's like Fido. We've already got dogs, and they're really good at what they do.
When (or if) we develop artificial intelligence, I think we'll do it for a reason. There will be jobs that can be done better - faster, with fewer errors and greater precision - with AI than with human beings. Flying aircraft and some management jobs come to mind.
And AI takes over the world? I don't think so. Some airline pilots and managers won't like it: those who don't recognize the change as an opportunity to learn new skills and explore another facet of human potential.
AI, when (or if) it goes online, will - I think - be more like C-3PO than Colossus. And, again I think, be about as likely to attempt world domination as C-3PO would be to plot taking over the restored Republic.
There will be problems, of course. There are always problems. Taking dogs as an example: you'd think that an ancient technology like that would have every last bug worked out of it by now. Remember the problem with pit bulls, a decade or so back? I think the problem was more with the owners, and less with the breed: but that's another topic.
When (if) we get 'real' AI, some of the units will act up: at least, I'd be astonished if they didn't. But, on the whole, I don't think humanity will regret developing AI, any more than we (with a few whack exceptions) regret developing agriculture.
Cyborgs? We've been developing that sort of technology, ever since someone started hobbling around on a crutch or peg leg; or strapped on a hook to replace a severed hand. It's a little late to stop, now: and I don't think many would really want to.
But we're not 'losing our humanity.' We're still as human as ever: for good or ill.
- "Science Fiction in the Movies: 'The Satan Bug' to 'The Matrix' "
(January 26, 2010)
- " 'God Created Man in His Image' wasn't Written by An American"
A Catholic Citizen in America (January 25, 2010)
- "'All We Want is Peace and Quiet' - Another Look at the Mind of the Alien"
(December 9, 2009)
- "Beautiful Space Princesses, Almost Certainly Not: Flying Whales, Maybe"
(December 8, 2009)
- "Next-Generation Prosthetic Hand - and Intel Says Direct Neural Interface Brain Chips by 2020"
Apathetic Lemming of the North (December 2, 2009)
- "Tofu Turkeys, Genetically Altered Foods, and the Evil Eye"
Apathetic Lemming of the North (November 14, 2009)
- "Hard Science Fiction, Cultural Blinders and Laban's Sheep"
Drifting at the Edge of Time and Space (October 29, 2009)
- "The Sentient Cities are Coming: Park Benches With Attitude Coming"
Apathetic Lemming of the North (October 24, 2009)
- "Timing is Everything"
Drifting at the Edge of Time and Space (October 7, 2009)
- "Good News, Neural Devices Connect Brain, Computers: Bad News, Same Thing"
(July 11, 2009)
- "- - - 'And We're All Gonna Die!' "
(June 30, 2009)
Background (from "Hard Science Fiction, Cultural Blinders and Laban's Sheep" (October 29, 2009)):
1 Somebody said "I made this letter very long, because I did not have time to make it shorter." There seems to be a difference of opinion about who said that: Bartlett's "Familiar Quotations," sixteenth edition, says Blaise Pascal. The quote's from Lettres Provinciales [1656-1657], No. 16.
2 Tales of Future Past has two decent still photos from "The Phantom Creeps." The movie was a dramatic account of a mad scientist: "With the power of a radioactive meteor he discovered, his invisibility belt, ray gun, and killer robot spiders he plans to conquer the world."