
The Humanity of Robots (1)

2 Apr

This topic will be in two parts, perhaps even a third.

That title above sounds like an odd statement, I guess, because we don’t usually regard machines as having “humanity”. However, anything made by humanity carries, by definition, something of us within or about it. But the issue here is not what is human. Not at all.

What is becoming of increasing importance, though, is the idea that robots will eventually achieve some kind of innate humanness as AI and mechanics become better, more fluid and ultimately more real. Indeed, some believe robots have, in some cases, already achieved some measure of that humanness. Others would disagree and oppose that concept for many reasons, as you might imagine.

I’m not entirely sure yet, one way or the other; although I lean toward the skeptical side of things. One thing is certain, though: this debate will not end soon, not even this century, I reckon. As an intro then, here is a sample of what is available to consider – from some of the professionals now involved in the robotics field.

In the forefront of robotics today is Professor Rodney Brooks, currently Chief Technology Officer at Rethink Robotics. In a magazine article from 2009, I read that Professor Brooks, sometimes known as “the bad boy of robotics”, is apparently aligned with the idea, expressed above, that robots will, at some time, reach a level of humanness in intelligence.

First though, a short digression: while reading that piece (and others), I found that he and I share some common experiences and ideas: we are both Australian by birth; when young, we both liked to experiment with chemicals to blow things up (I was so reckless, I’m truly lucky to be alive); just as he taught himself computer code, so did I (when I started my IT career in 1967); we both regard the human body as a biological machine or, if you like, an animal machine (in contrast, man-made robots are mineral machines, I’d suggest); and we are both atheists.

Despite – or perhaps because of – most of his life within robotics, there are still shades of gray for Professor Brooks when considering the human-robot nexus. I tend to agree with his comment, for example, that there must be care in deciding “how much feeling you should give them if you believe robots can have feelings”; although, I would claim that any such “feelings” (aka “emotions”) are just computer code that gives us humans the impression of robotic feelings. A firm point of departure though is when he says: “We had better be careful just what we build, because we might end up liking them, and then we will be morally responsible for their well-being. Sort of like children.”

On the one hand I accept, unequivocally, the caveat regarding being careful about what is built. Unlike the good professor, though, I view all mineral machines as just machines, for one very good reason: such machines obviously have never occurred naturally (i.e. through the natural selection process of evolution), as all animal machines have. No robot, for example, could exist, then or now, without the ingenuity of people like Professor Brooks, and others who developed computers and appropriate software. Additionally, he has a dream of developing “a robot that people feel bad about switching off.” Well, when it concerns preservation of human life, I’d agree; otherwise, no: any wish by an intelligent machine (e.g. “Please, please don’t turn me off!”) should never trump humanity’s.

Most troubling from my perspective though is the suggestion that humanity might become “morally responsible for their well-being. Sort of like children.” Other than the usual responsibility accorded to all machines and products owned, I fail to see why any human should treat a machine as an equal member of the human family. Arguably, once something is purchased, any person has the unequivocal right to destroy or abuse any machine/tool/appliance at any time, even immediately after it is bought, and take the personal consequences. Although, I might be carried off to the funny farm if I did do that.

But to accord any intelligent machine the same legal rights as humanity will, by implication, inevitably lead into a social and societal quagmire; and one that nobody needs. What could possibly be gained by painting ourselves into a tight corner of moral and legal conundrums concerning machines? I have great respect for Professor Brooks and his ongoing work in robotics. But I’m quite puzzled why such a brilliant scientist appears to allow emotion to color his opinion about machine intelligence vis-à-vis feelings.

Which brings me to Rosalind Picard, Sc.D., professor at the MIT Media Laboratory, who holds an opposing point of view, which you can watch at that link. In an engaging short presentation, Professor Picard unequivocally asserts that robots cannot have feelings in the same way that we humans do: they can simply act as though they do. Whether an automaton is a computer in a box or a humanoid robot, giving computers/robots emotion is simply an exercise in providing the appropriate algorithms that allow the machine to behave as though we think it has emotions. In other words, the machine’s programmed actions provide cues that lead our perceptions to ascribe feelings to it.

Essentially, generating the necessary emotion all comes down to the sophistication of the programming. To repeat: machines don’t have feelings – they have, or will have, programs that simulate what we think are emotions like ours. Perhaps, to help resolve the issue of feelings, we should adopt better qualified definitions? For example, we could use animal feelings for humans and mineral feelings for robots; or, if you like, artificial feelings for robots and human feelings for us. By doing so, we circumvent knotty philosophical problems and conflict, and also help to ensure that the dichotomy between humans and robots remains in situ.
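Professor Picard’s point, that machine “emotion” is just programmed behavior providing cues, can be illustrated with a toy sketch. Everything below is hypothetical, invented purely for illustration, not any real robot’s software:

```python
# A toy illustration of "artificial feelings": the machine feels nothing;
# it merely maps inputs to pre-programmed emotional cues, which human
# observers then interpret as feelings. (Hypothetical example.)

RESPONSES = {
    "praise": ("happy", "Thank you! That makes my day."),
    "insult": ("sad", "Oh... that is hard to hear."),
    "danger": ("afraid", "Warning! I should move to safety."),
}

def artificial_feeling(stimulus):
    """Return an (emotion-label, behavioral-cue) pair for a stimulus.

    The 'emotion' here is nothing more than a table lookup: a simulation
    that gives us the impression of robotic feelings.
    """
    return RESPONSES.get(stimulus, ("neutral", "I have no response to that."))

print(artificial_feeling("praise")[0])  # happy
```

However sophisticated the machinery that replaces that lookup table, the principle stays the same: the behavior simulates emotion without the machine experiencing anything.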

Nobody that I know of has any problems with the idea of Artificial Intelligence (AI). So – maybe we should all take another “giant leap for mankind” and adopt the concept of Artificial Feelings (AF) for all robots with AI? What could be more logical and sensible? However, there’s a bit of a problem with that approach, a fly in the ointment, if you will: Professor Brooks is famous for saying: “I am a robot.”

I think I understand what he’s saying. But, I’ll leave that and further aspects for my next posting.

Gaining Humanity’s Trust

21 Mar

When meeting a new acquaintance, we like to know who we’re dealing with, right? For millennia, humanity has generally had little difficulty in that regard.

That’s all set to change, however.

In the process of watching and commenting upon the effects of the continuing robot revolution, I’ve been researching an aspect that appears to be inexorable: the push by some manufacturers to construct a Perfect Android (PA). For the record, my position is unequivocal: no robot should ever be made in humanity’s real image.

Fortunately, I’ve not found one PA ready for mass marketing yet; although, there are a number of Humanoid Robots (HR) either on the market now, or coming later in 2013. With few exceptions, I have no qualms with the use and integration of HR within society; they, like the humble computer, should prove to be of positive benefit to society as a whole; although, as with any technological change, there are always unforeseen consequences. When they happen – and they will – one fixes the fixable, dumps the dreck and moves on.

Constructing and marketing HR is a big job. The task for PA manufacturers, however, is much bigger, more complex and definitely more expensive. Hence, given what I’ve seen so far (and what you can read about already at this blog), I don’t expect to see any PA on the market soon. But I’m confident that such a machine will be doable by mid-century (even so, I still don’t want to see that machine manufactured).

So, what’s so bad about any PA? Let me count the reasons…

First, a PA is a man-made machine masquerading as a real human. If it is a perfect image, you cannot distinguish it from a real human. In other words, it’s a con job. Who likes being conned?

Second, because a PA is a machine, composed of various metals, it is more durable than humans and capable of physically outperforming most, if not all, humans. Such a PA, then, is definitely stronger than humanity in many ways and thus moves into being a super con job.

Third, one of the goals within robotics is to develop human-like artificial intelligence (AI). AI, in and of itself, is a positive step because such a robot – be it in a mobile computer box or in a humanoid shape – would be able to solve problems more quickly than most humans, depending upon the elegance of the AI software. There is still a long way to go, however. (In fact, producing a robot with AI equivalent to or better than humanity’s is the biggest challenge in robotics.) Nevertheless, a PA with the best AI installed would then become a superior con job.

Quite simply, humanity doesn’t need that kind of triple whammy.

Granted, those three potential situations are just that: potential. But there is nothing to stop a manufacturer from producing a HR with similar physical and AI qualities as a PA – apart from appearance. Indeed, that is more likely to be the case, fortunately, because my research has already shown that the very idea of a PA has caused much debate and concern. And that’s a good outcome because any HR is designed to be always visibly non-human; think Asimo, C3P0, Robbie, Nestor etc.

Some will disagree, no doubt, about the need for some form of control. However, I would question the motivations of those who would deliberately seek to manufacture machines that, by design and appearance, will con unsuspecting, unaware humans.

Looking forward to mid-century and beyond, however, I think none of the foregoing will prevent some manufacturer developing a PA, eventually. Where there’s money involved, there are always those wanting to profit, regardless of consequences for humanity and society. Hence what’s needed now, or very soon, is a process to make sure any PA can be identified easily for what it is categorically: non-human.

Let me suggest a few ideas:

  • Every PA must be programmed to introduce itself as a robot (e.g. Hi, I’m Jack, and I’m a robot) in the first instance whenever it meets a new human. To cater for those humans with hearing problems, every PA must also show a clearly visible label with the word ROBOT (in the appropriate language of its location), followed by a number, name or combination of both; OR
  • The external temperature of every PA must be maintained at 10C (50F), roughly half the normal human body temperature on the Fahrenheit scale. Hence, when touching a PA, the robot’s relative coldness will signify its robotic underpinnings. Moreover, if the cooling fails, thus allowing the temperature to rise, the programming must immobilize the machine; OR
  • When asked by any human, vocally or with text (e.g. Are you a robot?), a PA must admit its origin. Failure to admit that truth will result immediately in programmed immobility; OR
  • Every PA must be dressed in clothes that clearly identify its robotic origins. This rule could be merged with any one of the above rules.

All of the above can apply to HR also, but not all are necessary for those machines, for obvious reasons. To allow for all contingencies, though, some combination of those ideas must be included in all robotic operating systems. Moreover, once programmed, the software must be locked in and rendered impenetrable to all viral and hacking attacks – no easy task, granted; but crucial for marketing success and reduced risk of looming legal issues.
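The rules above could, hypothetically, be combined in a robot’s operating software along these lines. This is a minimal sketch under my own assumptions; the class and method names are invented for illustration, not any real robot API:

```python
# Hypothetical sketch of the PA identification rules -- illustrative only.

class PerfectAndroid:
    MAX_SHELL_TEMP_C = 10.0  # rule: shell held at 10C so touch reveals a machine

    def __init__(self, name):
        self.name = name
        self.immobilized = False
        self.shell_temp_c = 10.0

    def introduce(self):
        # Rule 1: always self-identify as a robot when meeting a new human.
        return f"Hi, I'm {self.name}, and I'm a robot."

    def answer(self, question):
        # Rule 3: must truthfully admit its origin when asked.
        if "are you a robot" in question.lower():
            return "Yes, I am a robot."
        return "I cannot answer that."

    def check_cooling(self):
        # Rule 2: if cooling fails and the shell warms up, immobilize.
        if self.shell_temp_c > self.MAX_SHELL_TEMP_C:
            self.immobilized = True


jack = PerfectAndroid("Jack")
print(jack.introduce())      # Hi, I'm Jack, and I'm a robot.
jack.shell_temp_c = 25.0     # simulate a cooling failure
jack.check_cooling()
print(jack.immobilized)      # True
```

In practice, of course, such logic would only matter if it were locked into the machine and protected from tampering, as noted above.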

Right now, the many legal and moral issues surrounding the whole robotics revolution are under debate. Technology always moves quickly, though resolution of legal and moral issues often tends to lag. Sure, we already have Asimov’s Three Laws of Robotics, widely cited as working guidelines for robot behavior vis-à-vis humanity’s welfare.

But humanity needs to know that all PA and HR are easily identifiable, one way or another, and with behaviors consistent with that need. We humans have necessarily developed means of mutual identification and acceptable behaviors over many years; it’s thus common sense to program all robots in a similarly appropriate fashion. Hence, I’d suggest a formal adoption of at least one or more of the ideas expressed above within the framework for all laws governing the manufacture and use of robots.

We’ve reached a point of no return with robotics. Society cannot function properly without trust and confidence in the machines produced and used. And considering the history of the mobility revolution that the automobile began, followed by the emergence of industrial automation and its effects, this third revolution in robotics will arguably surpass the transformative effects of the previous two combined, as it explodes globally.

We’d better be ready for it. Soon.