Tag Archives: robotics

Robot News Roundup – April 14th, 2013

14 Apr

Want to know where it’s all heading?

Read this ‘roadmap’ from the U.S. Congress. At 137 pages, it’s not light reading, but it does shine a light on where the field is planned to be in 5, 10, 15 years and beyond. It’s fascinating, if somewhat tedious at times.

Humanoid robot from India.

Not to be outdone, here is the first piece of robot news I’ve come across from India since I started this blog a few weeks back. Its creator, who programmed the small humanoid to play rock-paper-scissors, sees it as a means to investigate how robots can assist humans.

A new book – Robot Futures – from MIT professor

Everybody, of course, has an opinion about what robots are. Sure, they are machines, but this professor of robotics thinks they are “a new species”. Well, I’m not prepared to accept that – yet. Are you?

A humanoid robot with positive attitude

From an Italian project team, here’s an interesting and detailed look at a bipedal robot designed to be fully compliant with humans, i.e., it won’t harm anything or anybody in a collision or physical encounter. Makes you feel better about companies that take that sort of care and concern over the impact of robotics on society, don’t you think?

Take in a slide show about some of the latest humanoid robots

From the cute and cuddly, I guess, to the more practical, get used to seeing robots like these more often in the news and on TV. I’m waiting for one that can act as my 24/7 servant. But, how long will I have to wait…?

A robot’s got to know its limitations

And while on the topic of mundane housework, read this piece, which looks at the difficulties robots face in the home and discusses the complexity of getting a robot to do even such a simple job. Thankfully, the software – and hardware – will develop over time to surmount most problems. In my opinion, though, just remember that Murphy’s First Law lurks about all the time.

The Humanity of Robots (1)

2 Apr

This topic will be in two parts, perhaps even a third.

That title above sounds like an odd statement, I guess, because we don’t usually regard machines as having “humanity”. However, anything made by humanity carries, by definition, something of us within or about it. But the issue here is not what is human. Not at all.

What is becoming of increasing importance, though, is an idea that robots will eventually achieve some kind of innate humanness as AI and mechanics just get better, more fluid and ultimately more real. Indeed some believe robots have, in some cases, already achieved some measure of that humanness. Others would disagree and oppose that concept for many reasons, as you might imagine.

I’m not entirely sure yet, one way or the other; although I lean toward the skeptical side of things. One thing is certain, though: this debate will not end soon, not even this century, I reckon. As an intro then, here is a sample of what is available to consider – from some of the professionals now involved in the robotics field.

In the forefront of robotics today is Professor Rodney Brooks, currently Chief Technology Officer at Rethink Robotics. In a magazine article from 2009, I read that Professor Brooks, sometimes known as “the bad boy of robotics”, is apparently aligned with the idea, expressed above, that robots will, at some time, reach a level of humanness in intelligence.

First though, a short digression: while reading that piece (and others), I found that he and I share some common experiences and ideas: we are both Australian by birth; when young, we both liked to experiment with chemicals to blow things up (I was so reckless, I’m truly lucky to be alive); just as he taught himself computer code, so did I (when I started my IT career in 1967); we both regard the human body as a biological machine or, if you like, an animal machine (in contrast, man-made robots are mineral machines, I’d suggest); and we are both atheists.

Despite – or perhaps because of – most of his life within robotics, there are still shades of gray for Professor Brooks when considering the human-robot nexus. I tend to agree with his comment, for example, that there must be care in deciding “how much feeling you should give them if you believe robots can have feelings”; although, I would claim that any such “feelings” (aka “emotions”) are just computer code that gives us humans the impression of robotic feelings. A firm point of departure though is when he says: “We had better be careful just what we build, because we might end up liking them, and then we will be morally responsible for their well-being. Sort of like children.”

On the one hand I accept, unequivocally, the caveat regarding being careful about what is built. Unlike the good professor, though, I view all mineral machines as just machines, for one very good reason: such machines obviously have never occurred naturally (i.e. within the natural selection process of evolution), as do all animal machines. No robot, for example, could exist before or now without the ingenuity of people like Professor Brooks, and others who developed computers and appropriate software. Additionally, he has a dream of developing “a robot that people feel bad about switching off.” Well, when it concerns preservation of human life, I’d agree; otherwise, no – any wish by an intelligent machine (e.g. “Please, please don’t turn me off!”) should never trump humanity’s.

Most troubling from my perspective, though, is the suggestion that humanity might become “morally responsible for their well-being. Sort of like children.” Other than the usual responsibility accorded to all machines and products owned, I fail to see why any human should treat a machine as an equal member of the human family. Arguably, once something is purchased, any person has the unequivocal right to destroy or abuse any machine/tool/appliance at any time, even immediately after it is bought, and take the personal consequences. Although, I might be carried off to the funny farm if I did.

But to accord any intelligent machine the same legal rights as humanity will, by implication, inevitably lead into a social and societal quagmire; and one that nobody needs. What could possibly be gained by painting ourselves into a tight corner of moral and legal conundrums concerning machines? I have great respect for Professor Brooks and his ongoing work in robotics. But I’m quite puzzled why such a brilliant scientist appears to allow emotion to color his opinion about machine intelligence vis-à-vis feelings.

Which brings me to Rosalind Picard, Sc.D., professor at the MIT Media Laboratory, whose opposing point of view you can watch at that link. In an engaging short presentation, Professor Picard unequivocally asserts that robots cannot have feelings in the same way that we humans do: they can simply act as though they do. Whether an automaton is a computer in a box or a humanoid robot, giving it emotion is simply an exercise in providing the appropriate algorithms that allow the machine to behave as though it has emotions. In other words, the machine’s programmed actions provide cues that lead us to perceive feelings in it.

Essentially, generating the necessary emotion all comes down to the sophistication of the programming. To repeat: machines don’t have feelings – they have, or will have, programs that simulate what we think are emotions like ours. Perhaps, to help resolve the issue of feelings, we should adopt better-qualified definitions? For example, we could use animal feelings for humans and mineral feelings for robots; or, if you like, artificial feelings for robots and human feelings for us. By doing so, we circumvent knotty philosophical problems and conflict, and also help to ensure that the dichotomy between humans and robots remains in situ.
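To make Professor Picard’s point concrete, here is a minimal sketch of what “programmed emotion” amounts to: a lookup from events to behavioral cues. Nothing is felt; the table just tells the machine how to act so that we perceive an emotion. All the names and cues below are hypothetical, invented purely for illustration.

```python
# Simple event -> displayed-cue mapping. The machine "has" an emotion
# only in the sense that this table tells it how to behave.
EMOTION_CUES = {
    "greeting_detected": ("happy", "smile and wave"),
    "obstacle_collision": ("startled", "step back, widen eyes"),
    "task_failed": ("sad", "lower head, slow speech"),
}

def display_emotion(event: str) -> str:
    """Return the behavioral cue for an event, or a neutral default."""
    label, cue = EMOTION_CUES.get(event, ("neutral", "idle posture"))
    return f"[{label}] {cue}"

print(display_emotion("greeting_detected"))  # [happy] smile and wave
print(display_emotion("unknown_event"))      # [neutral] idle posture
```

A real system would use far more sophisticated models, but the principle is the same: the “feeling” is just code mapping inputs to outward behavior.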

Nobody that I know of has any problems with the idea of Artificial Intelligence (AI). So – maybe we should all take another “giant leap for mankind” and adopt the concept of Artificial Feelings (AF) for all robots with AI? What could be more logical and sensible? However, there’s a bit of a problem with that approach, a fly in the ointment, if you will: Professor Brooks is famous for saying: “I am a robot.”

I think I understand what he’s saying. But, I’ll leave that and further aspects for my next posting.

The Real McCoy

5 Mar

This week, I’ve extended my reading into two companies that appear to be leaders in the development of androids and related technology: Hanson Robotics and Boston Dynamics.

Hanson Robotics, in a nutshell, eventually wants to produce androids that look like real humans. One of the key features of their robots is the synthetic skin called Frubber, a patented product that is truly impressive. For some views about that product, CEO David Hanson and the company, I picked up two articles online that are pertinent:

‘You, Robot’: Personal Robots For The Masses


Humanoid Robots Put Best Faces Forward.

Generally, David Hanson is a man dedicated to bringing androids to the broad market to assist humanity’s efforts to make the world a better place. That’s a laudable goal. On the other hand, he sees no insurmountable problems with the concept of androids that look like real humans.

In contrast, while Hanson Robotics leads in human-like characteristics for androids, Boston Dynamics appears to concentrate on quadrupedal and bipedal locomotion, with a view to producing autonomous four-legged “pack animals” and two-legged androids. The former, called Big Dog, can outrun a normal human.

In videos at the company website, the sure-footed ability of Big Dog is nothing short of astounding, especially on ice, even at this stage of development. One of the bipeds, Petman, shows good balance on a treadmill but there is no information yet on how it fares on rough terrain, on ice, through water etc. Perhaps the field testing for the other biped, Atlas, later this year (2013) will include such aspects. The military application of such robots is obvious; but, there are other, equally obvious possibilities.

Companies such as these represent the present and the unavoidable future. Over time, I’ll research all such companies, globally. In the process, it’s with some satisfaction that I’ve discovered others who are just as concerned as I am about the rise of the machines – if you’ll pardon the hyperbole – and particularly the development of perfect androids. The above links provide further information in that regard.

Moreover, if you’ve been following this blog, then you know I’m not prejudiced against the idea of androids per se, though I think it’s crucial that humanity maintains its distance from robots when it comes to physical appearance: in this context, this matter truly is a case of Us and Them.

Not in competition, but in a mutually beneficial association and with the understanding and knowledge that real humans always assume the dominant role in the human-android nexus. To do otherwise is to court disaster in many forms at some future date. However, I don’t mean that androids should be programmed to regard humanity as infallible; obviously, we are not.

The crucial point within my argument is this: all robot software includes built-in rules, according to the functions each machine performs. For example, a future robot fire-fighter would have sets of rules governing its use of equipment to put out fires, rescue humans, treat wounded/burn victims and so on. A robot doctor would have even more complex sets of rules. Androids acting as police, should that ever come to pass (and I hope not), would necessitate even more complexity.

Regardless of the application-specific software, though, all developers and manufacturers must ensure that all machines – android or otherwise – are programmed to conform to the Three Laws of Robotics. That should not present any problem to any relevant company – other than the difficulty of getting the software right. And that’s a prodigious issue for developers, because they know only too well the difficulty of developing perfect, bug-free software.
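The architecture I’m arguing for above can be sketched in a few lines: application-specific rules (fire-fighting, medicine, whatever the machine’s function) sit on top of a non-negotiable safety layer modeled on the Three Laws, and no action executes without passing that layer first. This is a toy illustration only; the action descriptions and checks are invented, and a real implementation would be vastly more complex.

```python
def violates_first_law(action: dict) -> bool:
    # First Law: a robot may not injure a human being, or through
    # inaction allow a human being to come to harm.
    return action.get("harms_human", False)

def violates_second_law(action: dict, ordered_by_human: bool) -> bool:
    # Second Law: obey human orders, except where that would conflict
    # with the First Law (the First Law is checked before this one).
    return not ordered_by_human and action.get("requires_order", False)

def approve(action: dict, ordered_by_human: bool = True) -> bool:
    """Safety layer: every proposed action passes the Laws before execution."""
    if violates_first_law(action):
        return False
    if violates_second_law(action, ordered_by_human):
        return False
    return True

# Application layer: a hypothetical fire-fighter robot's task rules
# run only if the safety layer approves the action.
spray_water = {"harms_human": False}
breach_wall_near_person = {"harms_human": True}

print(approve(spray_water))              # True
print(approve(breach_wall_near_person))  # False
```

Even this trivial sketch hints at the real difficulty: deciding whether an action “harms a human” is exactly the kind of judgment that is hard to encode, which is why getting such software right – and bug-free – is the prodigious issue it is.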

For example, in all of my forty-seven years dealing with computers and programs, I’ve not encountered any complex program that’s bug free. The more complex, the more risk of bugs: that’s the reality, that’s the challenge, that’s the danger.

In the final analysis, though, it’s obviously in each company’s interest to get the software/firmware right: any product sold to the consumer must be certified as safe to use, in theory and practice. And that’s the catch: otherwise the consumer will go to the competition – arguably the worst fate for any company, and as it should be.