The Humanity of Robots (1)

2 Apr

This topic will be in two parts, perhaps even a third.

That title may sound odd, I guess, because we don’t usually regard machines as having “humanity”. Then again, anything made by humanity carries, by definition, something of us within or about it. But that is not the issue here. Not at all.

Of increasing importance, though, is the idea that robots will eventually achieve some kind of innate humanness as AI and mechanics become better, more fluid and ultimately more lifelike. Indeed, some believe robots have, in certain cases, already achieved some measure of that humanness. Others would disagree and oppose that concept for many reasons, as you might imagine.

I’m not entirely sure yet, one way or the other, although I lean toward the skeptical side. One thing is certain, though: this debate will not end soon – not even this century, I reckon. As an introduction, then, here is a sample of what is available to consider, from some of the professionals now working in robotics.

At the forefront of robotics today is Professor Rodney Brooks, currently Chief Technology Officer at Rethink Robotics. In a magazine article from 2009, I read that Professor Brooks, sometimes known as “the bad boy of robotics”, is apparently aligned with the idea expressed above: that robots will, at some point, reach a level of humanness in their intelligence.

First though, a short digression: while reading that piece (and others), I found that he and I share some common experiences and ideas: we are both Australian by birth; when young, we both liked to experiment with chemicals to blow things up (I was so reckless, I’m truly lucky to be alive); just as he taught himself computer code, so did I (when I started my IT career in 1967); we both regard the human body as a biological machine or, if you like, an animal machine (in contrast, man-made robots are mineral machines, I’d suggest); and we are both atheists.

Despite – or perhaps because of – spending most of his life within robotics, there are still shades of gray for Professor Brooks when considering the human-robot nexus. I tend to agree with his comment, for example, that there must be care in deciding “how much feeling you should give them if you believe robots can have feelings”; although I would claim that any such “feelings” (aka “emotions”) are just computer code that gives us humans the impression of robotic feelings. A firm point of departure, though, comes when he says: “We had better be careful just what we build, because we might end up liking them, and then we will be morally responsible for their well-being. Sort of like children.”

On the one hand, I accept unequivocally the caveat about being careful what we build. Unlike the good professor, though, I view all mineral machines as just machines, for one very good reason: such machines have never occurred naturally (i.e. through the natural selection process of evolution), as all animal machines have. No robot, for example, could exist, before or now, without the ingenuity of people like Professor Brooks and the others who developed computers and the appropriate software. Additionally, he has a dream of developing “a robot that people feel bad about switching off.” Well, when it concerns the preservation of human life, I’d agree; otherwise, no – any wish by an intelligent machine (e.g. “Please, please don’t turn me off!”) should never trump humanity’s.

Most troubling from my perspective, though, is the suggestion that humanity might become “morally responsible for their well-being. Sort of like children.” Beyond the usual responsibility accorded to any machine or product one owns, I fail to see why any human should treat a machine as an equal member of the human family. Arguably, once something is purchased, a person has the unequivocal right to destroy or abuse that machine/tool/appliance at any time, even immediately after buying it, and take the personal consequences. Although I might be carried off to the funny farm if I did.

But to accord any intelligent machine the same legal rights as humanity would, by implication, inevitably lead into a societal quagmire, and one that nobody needs. What could possibly be gained by painting ourselves into a tight corner of moral and legal conundrums concerning machines? I have great respect for Professor Brooks and his ongoing work in robotics. But I’m quite puzzled why such a brilliant scientist appears to allow emotion to color his opinion about machine intelligence vis-à-vis feelings.

Which brings me to Rosalind Picard, Sc.D., professor at the MIT Media Laboratory, who holds an opposing point of view, which you can see at that link. In an engaging short presentation, Professor Picard unequivocally asserts that robots cannot have feelings in the way we humans do: they can simply act as though they do. Whether an automaton is a computer in a box or a humanoid robot, giving computers/robots emotion is simply an exercise in providing the appropriate algorithms that make the machine behave as though it has emotions. In other words, the machine’s programmed actions provide cues that our perceptions read as feelings.
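To make that point concrete, here is a minimal sketch in Python (entirely hypothetical: the names and rules are mine, not drawn from anyone’s actual robot code) of what such an algorithm might look like. A few plain rules map sensed conditions to scripted behavioral cues, with nothing resembling feeling anywhere inside:

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        battery_level: float  # 0.0 (empty) to 1.0 (full)
        heard_phrase: str     # last phrase picked up by the microphone

    def emotional_cue(reading: SensorReading) -> str:
        # Nothing here "feels" anything; fixed rules simply select a
        # display behavior that an observer may read as an emotion.
        if reading.battery_level < 0.1:
            return "droop head, slow speech"      # reads as 'tired'
        if "thank you" in reading.heard_phrase.lower():
            return "brighten eye LEDs, nod"       # reads as 'pleased'
        return "neutral posture"

    print(emotional_cue(SensorReading(0.05, "hello")))       # 'tired' cue
    print(emotional_cue(SensorReading(0.90, "Thank you!")))  # 'pleased' cue

Swap in more rules, or a learned model, and the cues become more convincing; but the machinery underneath is still the selection of outputs, not experience.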

Essentially, generating the necessary emotion all comes down to the sophistication of the programming. To repeat: machines don’t have feelings – they have, or will have, programs that simulate what we think are emotions like ours. Perhaps, to help resolve the issue of feelings, we should adopt better-qualified definitions? For example, we could use animal feelings for humans and mineral feelings for robots; or, if you like, artificial feelings for robots and human feelings for us. By doing so, we circumvent knotty philosophical problems and conflict, and also help to ensure that the dichotomy between humans and robots remains in situ.

Nobody that I know of has any problems with the idea of Artificial Intelligence (AI). So – maybe we should all take another “giant leap for mankind” and adopt the concept of Artificial Feelings (AF) for all robots with AI? What could be more logical and sensible? However, there’s a bit of a problem with that approach, a fly in the ointment, if you will: Professor Brooks is famous for saying: “I am a robot.”

I think I understand what he’s saying. But, I’ll leave that and further aspects for my next posting.
