Archive | April, 2013

The Humanity of Robots (3)

27 Apr

In fact and fiction alike, humanity has perpetrated many horrors; on the other hand, a greater measure of good has been born of our efforts (after all, we’re still alive and kicking after a million years or so of evolution). Are we just lucky, or have we been doing some things right to survive thus far?

Whatever the case, it could all easily change for the worse.

Because if current trends in robotics continue for the foreseeable future, humanity is now embarking upon the planned obsolescence of our own species. Not only embarking upon it, but embracing it with open and willing arms – at least for the present.

Understand: I’m not talking about the eradication of humanity. I’m simply claiming that the rise of the machines – robots of all kinds – will, given enough time, eventually jeopardize the social and economic usefulness of most humans.  In the future that is coming, only a relatively small core of humanity will be required for key political, social, economic and military tasks.

To be sure, there will be great benefits from the use of pervasive robots throughout society: by and large, humanity will be freed of the tiresome daily duties required of the body and will be able to concentrate most of its energy on the creativity of the mind – the world of ideas. On the other hand, we might simply evolve into spherical eating machines pampered by a 24/7 direct connection into the glabble (aka global babble). Either way, robots will eventually (be allowed to) take over the manual operation of the entire planet. Shoot, machine intelligence already operates and maintains much of the world anyway, in all types of electronic communications.

We won’t see perfect androids, though, for a long time: not in my lifetime and not in that of my grandchildren. The AI software now available is probably only at the level of a five-year-old child. Will the equivalent of Moore’s Law apply to AI as the years roll on? Well, since the law, so called, applies to computer hardware only, the major effect would be that the robot brain would simply process data and instructions more quickly; Moore’s Law says nothing about improving AI software per se, which is the crux of robot intelligence.
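To put a rough number on that hardware-only effect, here is a back-of-the-envelope sketch (my own illustration, not drawn from any source) of what a doubling every two years buys you: raw speed, and nothing else.

```python
# Back-of-the-envelope: Moore's Law doubles hardware capacity roughly
# every two years. That runs the same software faster; it does not make
# the software any smarter.

def moores_law_speedup(years, doubling_period=2.0):
    """Relative hardware capacity after `years`, assuming one doubling
    every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 30):
    print(f"After {years} years: ~{moores_law_speedup(years):,.0f}x the hardware,"
          f" running the same five-year-old-level AI - only faster.")
```

Thirty years of doublings gives you a brain that is some thirty thousand times faster at being a five-year-old.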

However, if humanity continues with robot applications such as this possible example, so that robots eventually become a clear and common presence, we are then on the proverbial slippery slope to societal oblivion.

In the first place, humanity appears to have lost its way when it devotes so much time and resources to the perpetual promulgation of war. So, it’s understandable and appropriate – in fact, necessary – that we should educate ourselves about efforts to prevent the cancer-like growth of robots into all of war’s facets.

For example, marine and robotics engineers are developing robotic jellyfish (Seabot? Aquabot?) and other marine machines that could be used to patrol the seven seas – a collaborative effort “funded by U.S. Naval Undersea Warfare Center and the Office of Naval Research.” Roll on funding for the Pentagon, no?

More urgent, perhaps: there are now calls warning that killer robots are coming, and even suggestions from Human Rights Watch (HRW) that killer robots should be banned. If you want to read the full report from HRW, you can download it here. In addition, there is now a growing global campaign to stop the production and use of killer robots; access this link if you are interested in joining that effort.

None of the foregoing, and much more, should be surprising, though. Ever since ancient humans began to use tools, humanity has been developing new and more efficient ways to kill. So, developing autonomous robots to do the job of searching, spying, surveilling, monitoring, fighting, killing and so on does have benefits from the military perspective. And compared to using human soldiers, the cost of maintaining an android army is much less, for obvious reasons. However, not until AI software reaches the sophistication of the lowest-level human grunt in any army will autonomous killer robots (killbots) become a reality.

That particular day truly is a long, long way off – for which we should be truly thankful.

What is more likely in the near future, realistically, is a hybrid killbot: just as drones (UAVs) in Afghanistan are controlled by jockeys located far from harm, it’s conceivable and practical to use a similar type of human handler to remotely operate a killbot on the ground. Such a machine would function basically as a remote-controlled, ultra-sophisticated, electromechanical bipedal device: Terminator on an electronic leash, if you will. Moreover, problematic aspects of “the humanity of robots” would thereby be sidelined: all responsibility and accountability would remain with each handler/soldier, and up the chain of command. From what I’ve seen on video from DARPA, such a hybrid could be a reality within twenty years.
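As a purely illustrative sketch of that “electronic leash” (my own toy code – every name and command here is hypothetical, and no real control system is implied): the machine executes nothing unless an order arrives from an identified human handler, so responsibility and accountability stay with that handler and the chain of command.

```python
# Toy sketch of a human-in-the-loop 'electronic leash'. Every order must
# name an accountable human handler; the machine never acts on its own.
# All classes, names and commands are hypothetical.

from dataclasses import dataclass

ALLOWED_ACTIONS = {"advance", "halt", "observe", "return_to_base"}

@dataclass
class Order:
    handler_id: str   # the human who issued, and owns, this order
    action: str

class LeashedRobot:
    def __init__(self):
        self.audit_log = []   # the responsibility trail, up the chain of command

    def execute(self, order: Order):
        if order.action not in ALLOWED_ACTIONS:
            # Unknown or unauthorized orders are refused and logged.
            self.audit_log.append((order.handler_id, order.action, "REFUSED"))
            return
        # No autonomous decision is made here: the machine merely relays
        # the handler's order to its actuators.
        self.audit_log.append((order.handler_id, order.action, "EXECUTED"))

robot = LeashedRobot()
robot.execute(Order(handler_id="operator_7", action="advance"))
robot.execute(Order(handler_id="operator_7", action="self_destruct"))  # refused
print(robot.audit_log)
```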

So much, then, for robots with some sort of humanity? Well, not quite: there are examples of machine intelligence that are intriguing. I’ve mentioned aspects in other posts, but this article is one more that shows how roboticists are slowly beginning to understand how to get robots to relate to humans. That sort of progress gives hope that the future is not as bleak as some might think.

And with some tongue-in-cheek pizzazz, these writers show how AI is indeed clever in a number of ways, including art, music and even being “self-aware”. However, there is nothing intrinsically human about such intelligence: we’ve had chimpanzee and elephant art around for many years. All that machine intelligence shows is evidence of exceedingly thorough system design and programming, based upon what we humans learn as we grow.

Finally, as if to answer some of the issues I’ve broached above, watch this TED video of four academics and experts in robotics and communications. Let them explain why our humanity must always take precedence over anything we construct and, most of all, over the increasingly intelligent machines with which we are enmeshed in a symbiotic, neo-Frankensteinian struggle of potentially epic proportions. And keep this in mind: if that symbiosis ever withers and dies when robots are fully autonomous, humanity had better have a fail-safe Plan B.

If that isn’t enough for you, there is still another aspect to address: the physical hybrid of human and machine called the cyborg – either fully integrated or using an exoskeleton. Until next time…

Robot News Roundup – April 14th, 2013

14 Apr

Want to know where it’s all heading?

Read this ‘roadmap’ from the U.S. Congress. At 137 pages, it’s not light reading; but it does shine a light on where the field is planned to be in 5, 10, 15 years and beyond. It’s fascinating, if somewhat tedious at times.

Humanoid robot from India.

Not to be outdone, this is the first piece of robot news I’ve come across from India since I started this blog a few weeks back. The small humanoid is programmed to play the ‘rock-paper-scissors’ game, and its creator sees it as a means to investigate how robots can assist humans.

A new book – Robot Futures – from an MIT professor

Everybody, of course, has an opinion about what robots are. Sure, they are machines, but this professor of robotics thinks they are “a new species”. Well, I’m not prepared to accept that – yet. Are you?

A humanoid robot with positive attitude

From an Italian project team, here’s an interesting and detailed look at a bipedal robot designed to be fully compliant with humans, i.e. it won’t harm anything or anybody in a collision or physical encounter. Makes you feel better about teams that take that sort of care over the impact of robotics on society, don’t you think?

Take in a slide show about some of the latest humanoid robots

From the cute and cuddly, I guess, to the more practical, get used to seeing robots like these more often in the news and on TV. I’m waiting for one that can act as my 24/7 servant. But, how long will I have to wait…?

A robot’s got to know its limitations

And while on the topic of mundane housework, read this piece about the difficulties in the home; it discusses the complexity of getting a robot to do even such a job. Thankfully, the software – and hardware – will develop over time to surmount most problems. Just remember, though, that Murphy’s First Law lurks about all the time.


The Humanity of Robots (2)

7 Apr

There are three basic constituents to the world we know and live in: animal, vegetable and mineral. Generally, we know how it all works together as a sustainable, evolutionary entity – ignoring, for this discussion, the man-made effects upon this planet’s ecosystems. All life, as we know it, goes through a growth stage, reaches some form of maturity, ages over time, and eventually dies, one way or another. That applies to all naturally occurring life forms on this planet and, arguably, on other suitable planets in this universe.

Clearly, robots do not die in the above sense. Moreover, no robot goes through the stages that all animal life on this planet does, and no robot has ever spontaneously evolved in nature. But robots do cease to function when the power supply is removed, depleted, stopped or interrupted.

So, how can we square that with Professor Brooks’s claim that he is a robot? Without further input from Professor Brooks, I can only speculate…

Well, as already mentioned in my prior posting, the professor and I regard the human body as a biological (or animal) machine. And we are not alone: the 17th-century philosopher René Descartes also suggested “the human body could be regarded as a machine”. For sure, there is a crude analogy between the multitude of wires, struts, servos, joints, appendages and so on that make up a humanoid robot and the various bones, muscles, sinews, etc. that help us all get along in daily life. I could suggest, though, that the same analogy applies to the chimpanzee or gorilla. Moreover, man-made machines don’t require any of the animal relief we all need more or less daily, e.g. sleep, food, exercise, sex and so on. Hence, although humans and robots are both machines in this loose sense, the analogy is thin at best and should not be taken seriously as an argument for “I am a robot.”

On the other hand, it’s clear that at birth we have animal instincts, and as we grow we learn to become the person we each are through socialization. In addition, according to Noam Chomsky’s theory of Universal Grammar, we are innately hard-wired to learn and use language, although some social scientists still disagree. So, perhaps those human attributes can be regarded as a form of natural programming that develops from inception?

In contrast, a fully constructed robot requires some form of introduced programming, either firmware or software, before it can do anything. So, in my opinion, the programming similarities indicate only that both humans and robots can learn; they are not sufficient to persuade me to agree that “I am a robot” also.

What’s left? Well, there are those who sincerely claim that robotics promises a new kind of life, especially when AI reaches a human-like maturity. If a robot ever reaches the sophistication of the fictional entities we can all recall (think Terminator, Ash, Bishop et al), then the claim of new life must be considered and analyzed. The discussion then hinges, first, on what is meant by ‘life’, ‘living being’ or something similar; and once we go down that path, the term ‘robot’ for such an entity becomes problematic.

A comprehensive discussion on the topic of ‘life’ itself is found here, with a list of life forms. There you will find seventeen different categories of life subsumed under Artificial and Engineered, only three of which are directly relevant to robotics: Android (Robot), Cyborg and Robot. None of those, however, provides a clear definition of the term ‘life’ in relation to robotics. Dictionaries and encyclopedias, of course, are useful but not categorical: some definitions do change over time. So, is any one definition of ‘life’ as good as another? If there is no consensus on a single suitable definition, whose definition do you choose?

To cut through to something more helpful on this topic, I’d suggest considering this definition from one of the world’s greatest thinkers: “One can define living beings as complex systems of limited size that are stable and that reproduce themselves.” You’ll find that claim in the last chapter of The Grand Design by Stephen Hawking (Bantam Books, ISBN 978055381929, p. 224).

That definition covers many forms of life, including humanity, but not robots: no robot reproduces itself, and obviously no robot procreates as humans do. Moreover, recall that Professor Rosalind Picard stated that scientists cannot yet provide four human elements for robots: (1) feelings (or morals); (2) conscious experience; (3) soul, spirit, the ‘juice of life’; and (4) free will – free choice. The last is the most important, in my opinion, because that attribute specifically excludes the term ‘robot’, which by definition is an entirely predictable machine that cannot have free will. Hence, man-made machines that do eventually incorporate those four attributes – if that day ever comes – will require, I’d suggest, a new descriptive word; even the term ‘android’ is insufficient, because it, too, is defined as a robot.
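To illustrate what I mean by ‘entirely predictable’, here is a trivial sketch of my own (purely illustrative): even a machine’s apparently random choices repeat exactly once you know its program and inputs – predictability, not free will.

```python
# A machine's 'choices' are fully determined by its program and inputs.
# Even pseudo-random behaviour repeats exactly given the same seed.

import random

def robot_choice(seed):
    rng = random.Random(seed)   # a deterministic generator
    return rng.choice(["fight", "flee", "freeze"])

# Run it a thousand times with the same input: the 'decision' never varies.
assert all(robot_choice(42) == robot_choice(42) for _ in range(1000))
print(robot_choice(42))  # always the same answer for seed 42
```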

At present, though, assessing all of the above, and specifically the ‘I am a robot’ statement by Professor Brooks, I can only conclude that he may have been suggesting, obliquely, that machine intelligence in itself truly represents a new form of life – one that is not yet fully recognized or appreciated. In other words, I think he is saying, essentially, that the level of intelligence that exists in your head can certainly exist in a man-made machine, regardless of its structure and mobility. Any other interpretation of the claim fails the scrutiny discussed here.

The implications of that conclusion are quite profound, and beyond my knowledge and experience to fully comprehend, or even accept, at this time. I must, though, concede that the possibility is real; therefore, the probability must be greater than zero.

Appropriately then and as a concluding piece, my next posting will explore some of the clever – or not so clever – stuff currently being done with machine intelligence.

The Humanity of Robots (1)

2 Apr

This topic will be in two parts, perhaps even a third.

That title above sounds like an odd statement, I guess, because we don’t usually regard machines as having “humanity”. However, anything made by humanity carries, by definition, something of us within or about it. But the issue here is not what is human. Not at all.

What is becoming increasingly important, though, is the idea that robots will eventually achieve some kind of innate humanness as AI and mechanics get better, more fluid and ultimately more real. Indeed, some believe robots have, in some cases, already achieved some measure of that humanness. Others would disagree and oppose that concept for many reasons, as you might imagine.

I’m not entirely sure yet, one way or the other; although I lean toward the skeptical side of things. One thing is certain, though: this debate will not end soon, not even this century, I reckon. As an intro then, here is a sample of what is available to consider – from some of the professionals now involved in the robotics field.

At the forefront of robotics today is Professor Rodney Brooks, currently Chief Technology Officer at Rethink Robotics. In a magazine article from 2009, I read that Professor Brooks, sometimes known as “the bad boy of robotics”, is apparently aligned with the idea, expressed above, that robots will, at some time, reach a level of humanness in intelligence.

First though, a short digression: while reading that piece (and others), I found that he and I share some common experiences and ideas: we are both Australian by birth; when young, we both liked to experiment with chemicals to blow things up (I was so reckless, I’m truly lucky to be alive); just as he taught himself computer code, so did I (when I started my IT career in 1967); we both regard the human body as a biological machine or, if you like, an animal machine (in contrast, man-made robots are mineral machines, I’d suggest); and we are both atheists.

Despite – or perhaps because of – having spent most of his life within robotics, Professor Brooks still sees shades of gray when considering the human-robot nexus. I tend to agree with his comment, for example, that there must be care in deciding “how much feeling you should give them if you believe robots can have feelings”; although I would claim that any such “feelings” (aka “emotions”) are just computer code that gives us humans the impression of robotic feelings. A firm point of departure, though, comes when he says: “We had better be careful just what we build, because we might end up liking them, and then we will be morally responsible for their well-being. Sort of like children.”

On the one hand, I accept, unequivocally, the caveat regarding being careful about what is built. Unlike the good professor, though, I view all mineral machines as just machines, for one very good reason: such machines have never occurred naturally (i.e. within the natural selection process of evolution), as all animal machines have. No robot, for example, could exist, then or now, without the ingenuity of people like Professor Brooks and the others who developed computers and appropriate software. Additionally, he has a dream of developing “a robot that people feel bad about switching off.” Well, when it concerns the preservation of human life, I’d agree; otherwise, no – any wish by an intelligent machine (e.g. “Please, please don’t turn me off!”) should never trump humanity’s.

Most troubling from my perspective, though, is the suggestion that humanity might become “morally responsible for their well-being. Sort of like children.” Other than the usual responsibility accorded to all machines and products owned, I fail to see why any human should treat a machine as an equal member of the human family. Arguably, once something is purchased, any person has the unequivocal right to destroy or abuse that machine/tool/appliance at any time, even immediately after buying it, and take the personal consequences. Although I might be carried off to the funny farm if I did.

But to accord any intelligent machine the same legal rights as humanity would, by implication, inevitably lead to a social and legal quagmire – one that nobody needs. What could possibly be gained by painting ourselves into a tight corner of moral and legal conundrums concerning machines? I have great respect for Professor Brooks and his ongoing work in robotics. But I’m quite puzzled that such a brilliant scientist appears to allow emotion to color his opinion about machine intelligence vis-à-vis feelings.

Which brings me to Rosalind Picard, Sc.D., professor at the MIT Media Laboratory, who has an opposing point of view, which you can watch at that link. In an engaging short presentation, Professor Picard unequivocally asserts that robots cannot have feelings in the same way that we humans do: they can simply act as though they do. Whether an automaton is a computer in a box or a humanoid robot, giving computers/robots emotion is simply an exercise in providing the appropriate algorithms that allow the machine to behave as though we think it has emotions. In other words, the machine’s programmed actions provide us with cues that lead us to perceive it as having feelings.
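In the spirit of that point, here is a deliberately crude sketch of my own (purely illustrative, not Professor Picard’s work): ‘giving’ a machine emotion can be as simple as a lookup table that maps stimuli to displayed cues. The machine feels nothing; the program merely emits the outward signs we read as feelings.

```python
# 'Artificial feelings' as nothing more than a lookup table: stimuli in,
# emotional cues out. There is no inner experience here -- only code
# producing the signals we interpret as emotion.

EMOTION_CUES = {
    "greeting":   ("smile", "Hello! Lovely to see you."),
    "insult":     ("frown", "That hurts my feelings."),   # it doesn't
    "compliment": ("blush", "You're too kind!"),
}

def respond(stimulus):
    expression, utterance = EMOTION_CUES.get(stimulus, ("neutral", "..."))
    return f"[{expression}] {utterance}"

print(respond("insult"))   # [frown] That hurts my feelings.
```

A real system would use far more elaborate algorithms, of course, but the principle is the same: sophistication of programming, not genuine feeling.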

Essentially, generating the necessary emotion all comes down to the sophistication of the programming. To repeat: machines don’t have feelings – they have, or will have, programs that simulate what we think are emotions like ours. Perhaps, to help resolve the issue of feelings, we should adopt better-qualified definitions? For example, we could use animal feelings for humans and mineral feelings for robots; or, if you like, artificial feelings for robots and human feelings for us. By doing so, we circumvent knotty philosophical problems and conflict, and also help to ensure that the dichotomy between humans and robots remains in situ.

Nobody that I know of has any problems with the idea of Artificial Intelligence (AI). So – maybe we should all take another “giant leap for mankind” and adopt the concept of Artificial Feelings (AF) for all robots with AI? What could be more logical and sensible? However, there’s a bit of a problem with that approach, a fly in the ointment, if you will: Professor Brooks is famous for saying: “I am a robot.”

I think I understand what he’s saying. But, I’ll leave that and further aspects for my next posting.