
The Humanity of Robots (3)

27 Apr

In the realm of fact and fiction there have been many horrors perpetrated by humanity; on the other hand, there has been a greater measure of good born of our efforts (after all, we’re still alive and kicking after a million years or so of evolution). Are we just lucky, or have we been doing some things right to survive thus far?

Whatever the case, it could all easily change for the worse.

Because if current trends in robotics continue, humanity is now embarking upon the planned obsolescence of our own species. Not only embarking upon it, but embracing it with open and willing arms – at least for the present.

Understand: I’m not talking about the eradication of humanity. I’m simply claiming that the rise of the machines – robots of all kinds – will eventually jeopardize the social and economic usefulness of most humans. In the future that is coming, only a relatively small core of humanity will be required for key political, social, economic and military tasks.

To be sure, there will be great benefits with the use of pervasive robots throughout society: by and large, humanity will be freed of the tiresome, daily duties required of the body and will be able to concentrate most energy into the creativity of the mind – the world of ideas. On the other hand, we might simply evolve into spherical eating machines pampered by a 24/7 direct connect into the glabble (aka global babble). Either way, robots will eventually (be allowed to) take over the manual operation of the entire planet. Shoot, machine intelligence automatically operates and maintains most of the world now anyway, in all types of electronic communications.

We won’t see perfect androids for a long time, though: not in my lifetime, nor in that of my grandchildren. The AI software now available is probably only at the level of a five-year-old child. Will an equivalent of Moore’s Law apply to AI as the years roll on? Well, since that law applies to computer hardware only, the main effect would be that a robot brain simply processes data and instructions more quickly; Moore’s Law says nothing about improving AI software per se, the crux of robot intelligence.

However, if humanity continues with robot applications such as this one, so that such machines eventually become a clear and common presence, we are then on the proverbial slippery slope to societal oblivion.

In the first place, humanity appears to have lost its way when it devotes so much time and resources to the perpetual promulgation of war. So, it’s understandable and appropriate – in fact, necessary – that we should educate ourselves about efforts to prevent the cancer-like growth of robots into all of war’s facets.

For example, marine and robotics engineers are developing robotic jellyfish (Seabot? Aquabot?) and other marine machines that could be used to patrol the seven seas, a collaborative effort “funded by U.S. Naval Undersea Warfare Center and the Office of Naval Research.” Roll on funding for the Pentagon, no?

More urgently, perhaps, there are now calls warning that killer robots are coming, and even suggestions from Human Rights Watch (HRW) that killer robots should be banned. If you want to read the full HRW report, you can download it here. In addition, there is now a growing global campaign to stop the production and use of killer robots; access this link if you are interested in joining that effort.

None of the foregoing, and much more, should be surprising though. Ever since ancient humans began to use tools, humanity has been developing new and more efficient ways to kill. So, developing autonomous robots to do the job of searching, spying, surveilling, monitoring, fighting, killing and so on, does have benefits from the military perspective. And when compared to using human soldiers, the cost of maintaining an android army is much less, for obvious reasons. However, not until AI software reaches the sophistication of the lowest level human grunt in any army will autonomous killer robots (killbots) become a reality.

That particular day truly is a long, long way off – for which we should be truly thankful.

What is more likely in the near future, realistically, is a hybrid killbot: just as drones (UAVs) in Afghanistan are controlled by jockeys located far from harm, it’s conceivable and practical to use a similar type of human handler to remotely operate a killbot on the ground. Such a machine would function basically as a remote-controlled, ultra-sophisticated, electromechanical bipedal device: Terminator on an electronic leash, if you will. Moreover, problematic aspects of “the humanity of robots” would thereby be sidelined: all responsibility and accountability would remain with each handler/soldier, and up the chain of command. From what I’ve seen on video from DARPA, such a hybrid could be a reality within twenty years.

So much then for robots with some sort of humanity? Well, not quite: there are examples of machine intelligence that are intriguing. I’ve mentioned aspects in other posts, but this article is one more that shows how roboticists are slowly beginning to understand how to get robots to relate to humans. That sort of progress gives hope that the future to come is not as bleak as some might think.

And with some tongue-in-cheek pizzazz, these writers show how AI is indeed clever in a number of ways, including art, music and even being “self-aware”. However, there is nothing intrinsically human about such intelligence: we’ve had chimpanzee and elephant art around for many years, after all. All that machine intelligence shows is evidence of exceedingly thorough system design and programming, based upon what we humans learn as we grow.

Finally, as if to answer some of the issues I’ve broached above, watch this TED video of four academics and experts in robotics and communications. Let them explain why our humanity must always take precedence over anything we construct, and most of all, over the increasingly intelligent machines with which we are enmeshed in a symbiotic, neo-Frankensteinian struggle of potentially epic proportions. And keep this in mind: if that symbiosis ever withers and dies once robots are fully autonomous, humanity had better have a fail-safe Plan B.

If that isn’t enough for you, there is still another aspect to address: the physical hybrid of human and machine called a cyborg, either fully integrated or using an exoskeleton. Until next time…

The Humanity of Robots (2)

7 Apr

There are three basic constituents to the world we know and live in: animal, vegetable and mineral. Generally, we know how it all works together as a sustainable, evolutionary entity – ignoring, for this discussion, the man-made effects upon this planet’s ecosystems. All life, as we know it, goes through a growth stage, reaches some form of maturity, ages over time, and eventually dies, one way or another. That applies to all naturally occurring life forms on this planet and, arguably, on other suitable planets in this universe.

Clearly, robots do not die, in the above sense. Moreover, no robot goes through similar stages as for all animal life on this planet. And no robot has ever spontaneously evolved in nature. But they do cease to function when the power supply is removed, depleted, stopped or interrupted.

So, how can we square that with Professor Brooks’s claim that he is a robot? Without further input from Professor Brooks, I can only speculate…

Well, as already mentioned in my prior posting, the professor and I regard the human body as a biological (or animal) machine. And we are not alone: the 17th-century philosopher René Descartes also suggested “the human body could be regarded as a machine”. For sure, there is a crude analogy between the multitude of wires, struts, servos, joints, appendages and so on that make up a humanoid robot and the various types of bones, muscles, sinews, etc. that help us all get along in daily life. The same analogy, though, could be applied to the chimpanzee or gorilla. Moreover, man-made machines don’t require any of the animal relief we all need more or less daily (sleep, food, exercise, sex and so on). Hence, although humans and robots are both machines in this loose sense, the analogy is thin at best and should not be taken seriously as an argument for “I am a robot.”

On the other hand, it’s clear that at birth we have animal instincts, and as we grow, we learn to become the person we each are through socialization. In addition, according to Noam Chomsky’s theory of Universal Grammar, we are innately hard-wired to learn and use language, although some social scientists still disagree. So, perhaps those human attributes can be regarded as a form of natural programming that develops from inception?

In contrast, a fully constructed robot requires some form of introduced programming, either firmware or software, before it can do anything. So, in my opinion, the programming similarities indicate only that both humans and robots can learn; they are not sufficient to persuade me to agree that “I am a robot” as well.

What’s left? Well, there are those who sincerely claim that robotics promises a new kind of life, especially when AI reaches a human-like maturity. If a robot does ever reach the sophistication of fictional entities we can all recall (think Terminator, Ash, Bishop et al), then the claim of new life must be considered and analyzed. The discussion then hinges first on what is meant by ‘life’, ‘living being’ or something similar; and in doing so, the term ‘robot’ for such an entity then becomes problematic.

A comprehensive discussion on the topic of ‘life’ itself is found here, with a list of life forms. There you will find seventeen different categories of life subsumed under Artificial and Engineered, only three of which are directly relevant to robotics: Android (Robot), Cyborg and Robot. None of those, however, provide a clear definition of the term ‘life’ in relation to robotics. Dictionaries and encyclopedias, of course, are useful but not categorical: some definitions do change, over time. So, is any one definition for ‘life’ as good as another? If there is no consensus about a single suitable definition, whose definition do you choose?

To cut through to something more helpful on this topic, I’d suggest considering this definition from one of the world’s greatest thinkers: “One can define living beings as complex systems of limited size that are stable and that reproduce themselves.” You’ll find that claim in the last chapter of The Grand Design by Stephen Hawking (Bantam Books, 2010, p. 224).

That definition covers many forms of life, including humanity, but not robots: no robot reproduces itself. Obviously, no robot procreates as humans do. Moreover, recall that Professor Rosalind Picard stated that scientists cannot yet provide four human elements for robots: (1) feelings (or morals); (2) conscious experience; (3) soul, spirit, the ‘juice of life’; and (4) free will – free choice. The last is the most important, in my opinion, because that attribute specifically excludes the term ‘robot’, which, by definition, denotes an entirely predictable machine that cannot have free will. Hence, man-made machines that do eventually incorporate those four attributes – if that day ever occurs – will require, I’d suggest, a new descriptive word; even the term ‘android’ is insufficient, because it is defined as ‘robot’ also.

At present though, when assessing all of the above, and specifically the ‘I am a robot’ statement by Professor Brooks, I can only conclude that he may have been suggesting, obliquely, that machine intelligence in itself truly represents a new form of life, and one that is not yet fully recognized or appreciated. In other words, I think he is saying, essentially, the level of intelligence that exists in your head can certainly exist in a man-made machine, regardless of its structure and mobility. Any other interpretation of the claim fails the scrutiny discussed here.

The implications of that conclusion are quite profound, and beyond my knowledge and experience to fully comprehend, or even accept, at this time. I must, though, concede that the possibility is real; therefore, the probability must be greater than zero.

Appropriately then and as a concluding piece, my next posting will explore some of the clever – or not so clever – stuff currently being done with machine intelligence.