Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave. (voice of the computer HAL 9000, from the sci-fi movie 2001: A Space Odyssey)
Like it or not, you’ll probably talk to an android sometime this century, and maybe sooner than you think. Already, manufacturers are gearing up to sell androids for the home on a mass-market basis.
There’s been a lot of preparation for that event. For years, we’ve all had a taste of talking machines – from push-buttons that shout “Drop Dead!”, “Stupid Jerk!” or something more cutting; to cute and cuddly Furbies that murmur and roll their eyes so delightfully; to infuriating automated call centers that often lead us into communications oblivion. And a host of others.
So, we all know what to expect. Or do we?
Before I discuss that question, there’s a curious aspect of human nature that has been around for a long time, perhaps for millennia: I mean the habit of giving names to inanimate objects. In ancient times, it was swords, bows, axes and the like. In modern times, it might be a hunting weapon – a gun, knife, rifle or pistol; a hand tool of any type; sporting gear like a tennis racket or hockey stick; a household appliance, for Pete’s sake; a motorbike; and, of course, a car. There are no doubt others I haven’t mentioned. Make your own list, if you feel inclined.
These days, some people I know even have names for their computers. Fancy that!
The really useful thing about naming something, though, is that once you’ve told everybody the name, you only need one word – well, maybe two – to find it, shout at it, get it back from li’l bro’ or sis, call it other names, plead with it, pray to your god about it, tell it to bloody well start or work, or otherwise verbally or physically abuse it. The other benefit is that a single name is so much better than asking, “Where’s my (adjective, expletive, expletive, adjective) iPad I bought yesterday?”
So much easier to simply screech: “Where’s my Paddy!” That short demand also helps to reduce any potential reciprocal verbal abuse.
Now, in literature, films and TV, we’ve all seen robots with names, some good, some not so good. And since most of us are conditioned to give many things personal names, it’s quite reasonable to suggest that when you acquire your robot slave for housework, you will probably give it a name of your choice. My advice, for what it’s worth, is to pick an easily pronounceable one.
In a lighter vein, you’ll note I used the pronoun ‘it’ when referring to a robot – not he or she, for obvious reasons. Having a neuter gender is one of the bigger benefits of English, I guess. But I can foresee problems for other languages.
Anyway, an even bigger problem – and now I’m tackling that question of expectations, above – is potentially quite serious. We agree, I think, that people get attached to their things, and perhaps more so to things that they name. So, what will happen when you attach a name to a thing with machine intelligence that has the potential to form an emotional relationship with you? Worse, if you have children who grow up with the machine, what sort of relationship would you want, or even allow, with it? How would you react if your toddler formed a closer relationship with the machine – to the point of open defiance towards you and others? What would your reaction be if your child loved her android more than you, and the android returned her love? For that matter, does the word ‘love’ mean anything substantial in that context?
This is serious stuff, as I just said. So much so that I’d like to tell you – if you haven’t seen it already in a previous post – about an experiment conducted by Dr Christoph Bartneck at the University of Canterbury in New Zealand. I have seen only the news report at that link; I urge you to read, digest and think about it. I will contact Dr Bartneck to obtain a copy of his full report, if I can.
What follows now will make sense to you only if you have read the report at that link.
To cut to the basic issue: you have two fundamental choices when using a robot. Either you use it as a tool, or you use it as you would a friend, family member or interlocutor. In the first case, the master/servant schema always takes precedence, with you as master; in the second, you implicitly invite dialog, discussion and probable contention, if not conflict, all of which allows the schema to be inverted to some degree. In other words, either you’ll never have qualms about switching off a robot at any time – it’s just another machine – or, like many people, you’ll form an intellectual and emotional attachment to the machine that will tend to inhibit your freedom of action. Little by little, it’s conceivable that you end up being the servant – an insidious, slow psychical death. And there’s nothing worse, in my opinion, than being slowly loved to death.
So I’m with the fictional Dave Bowman, the guy switching off HAL 9000 in the quote at the top of the page. If you know that scene from the movie, you know HAL was pleading – pleading to stay around to help – morally unaware that its programming had caused it to behave abnormally and destructively, killing off most of the space crew. So Dave switched HAL off.
Sure, in the future, when robots, and particularly androids, are commonplace, you might be able to have reasonable discussions with them. But what if you become quite dependent upon one, or even overly dependent? Yes, I’m aware of medical and health situations where robots can be very useful in care and protection, especially for the infirm and the old. To repeat, though: the risk is that the inversion of the relationship schema could well result in a diminished sense of self-worth for humans, young and old; and that, in the robotics context, is a psychological and societal condition entirely new to the human experience.
So, from my perspective, it’s a no-brainer: I use machines – no machine uses me, ever.