
Robot News Roundup March 12th, 2013

12 Mar

Robot actors join humans onstage to explore the meaning of life

It had to come. But, I’m surprised it has happened already: mixing actual robots on stage with human actors. Primitive as yet, naturally, but definitely a foretaste of things to come. When will we see a reborn Humphrey Bogart android acting in future movies? Or a Paul Newman robot? Who could ignore a real Elvis Presley lookalike? Or is all that just too tacky to consider? Strange days ahead, people, strange days…

Top 5 sci-fi robot girlfriends

Okay, then, if that wasn’t tacky enough, try this: Daniel Bell at C-Net provides his list of desirable robot companions – if he had to, that is. All in jest, of course (we hope, don’t we?), yet the whole idea of robotic sexuality just won’t go away, will it? Well, sex is forever, of course; and although lies are still with us, videotape has just about gone – for which we should be thankful, I suppose.

Not me, though – I still use VHS tape occasionally.

Robot rat wages psychological warfare on real rats in experiment

I guess rats and other animals will always be used in scientific experiments. But this is new: using robot rats alongside real rats to assess the interactions and stresses – all in an effort, perhaps, to “gain insight into how humans might one day react” to the introduction of humanoid robots within society.

Watch the video, but make sure you have fast broadband.

Do you have dreams of having a robot?

Well, have a look at the agility and skill of these small robots from KumoTek Robotics. Sure, they are toys for now, but the engineering and programming underpinning the products are quite good. Be sure to watch the videos, which I found very entertaining and informative.

Killing Me Softly?

8 Mar

Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave.  (voice of the computer HAL9000, from the sci-fi movie 2001: A Space Odyssey)

Like it or not, you’ll probably talk to an android sometime this century, maybe sooner than you think. Already, manufacturers are gearing up to sell androids for the home on a mass-market basis.

There’s been a lot of preparation for that event. For years, we’ve all had a taste of talking machines – from push-buttons that shout “Drop Dead!”, “Stupid Jerk” or something more acute; to cute and cuddly Furbies that murmur and roll their eyes so delightfully; to infuriating automatic call centers that often lead us into communications oblivion. And a host of others.

So, we all know what to expect. Or do we?

Before I discuss that question, consider a curious aspect of human nature that has been around for a long time, perhaps for millennia: giving names to inanimate objects. In ancient times, it was swords, bows, axes and the like. In modern times, it might be a hunting weapon – a gun, knife, rifle or pistol; a hand tool of any type; sporting gear like a tennis racket or hockey stick; a household appliance, for Pete’s sake; a motorbike; and, of course, a car. There are no doubt others I haven’t mentioned. Make your own list, if you feel inclined.

These days, some people I know even have names for their computers. Fancy that!

The really useful thing about naming something is that, after telling everybody that name, you only need one word – well, maybe two – to find it, shout at it, get it back from li’l bro’ or sis, call it other names, plead with it, pray to your god about it, tell it to bloody well start or work, or otherwise verbally or physically abuse it. The other useful thing about naming a thing, whatever it is, is that a single name is much better than saying, “Where’s my (adjective, expletive, expletive, adjective) iPad I bought yesterday?”

So much easier to simply screech: “Where’s my Paddy!” That short demand also helps to reduce any potential reciprocal verbal abuse.

Now, in literature, films and TV, we’ve all seen robots with names, some good, some not so good. And now that most of us are conditioned to name many things with personal names, it’s quite reasonable to suggest that when you acquire your robot slave for house work, you will probably give it a name of your choice. My advice, for what it’s worth, is to use an easily pronounceable name.

In an even lighter vein, you’ll note I used the pronoun ‘it’ when referring to a robot – not he or she, for obvious reasons. Having a neuter gender is one of the bigger benefits of English, I guess. But I can foresee problems for other languages.

Anyway, an even bigger problem – and now I’m tackling that question of expectations, above – is potentially quite serious. We agree, I think, that people get attached to their things, and perhaps more so to things that they name. So, what will happen when you attach a name to a thing with machine intelligence that has the potential to form an emotional relationship with you? Worse, if you have children who grow up with the machine, what sort of relationship would you want or even allow with it? How would you react if your toddler formed a closer relationship with the machine – to the point of open defiance towards you and others? What would your reaction be if your child loved her android more than you – and the android returned that love? For that matter, does the word ‘love’ mean anything substantial in that context?

This is serious stuff, like I just said. So much so, I’d like to tell you – if you haven’t seen it already in a previous post – about an experiment conducted by Dr Christoph Bartneck at the University of Canterbury in New Zealand. I have seen only the news report at that link; I would urge you to read, digest and think about it. I will contact Dr Bartneck to obtain a copy of his full report, if I can.

What follows now will make sense to you only if you have read the report at that link.

To cut to the basic issue: you have two fundamental choices when using a robot. Either you use it as a tool, or you use it as you would a friend, family member or interlocutor. In the first case, the master/servant schema always takes precedence, with you as master; in the second, you implicitly invite dialog, discussion and probable contention, if not conflict – all of which allows for the inversion of the schema, to some degree. In other words, either you’ll never have qualms about switching off a robot at any time – it’s just another machine – or, like many people, you’ll form an intellectual and emotional attachment to the machine that will tend to inhibit your freedom of action. Little by little, it’s conceivable that you end up being the servant – an insidious and slow psychical death. And there’s nothing worse, in my opinion, than being slowly loved to death.

So I’m with the fictional Dave Bowman, the guy switching off HAL9000 in the quote at the top of the page. If you know that scene from the movie, you know that HAL was pleading – pleading to stay around to help – morally unaware that its programming had caused it to behave abnormally and destructively by killing off most of the space crew. So Dave was switching HAL off.

Sure, in the future when robots, and particularly androids, are commonplace, you might be able to have reasonable discussions with them. However, what if you become quite dependent upon one, or even overly dependent? Yes, I’m aware of medical and health situations where robots can be very useful in care and protection, especially for the infirm and old. To repeat, though: the risk is that the inversion of the relationship schema could well result in a diminished sense of self-worth for the humans, young and old; and that, in the robotics context, is a psychological and societal condition entirely new to the human experience.

So, from my perspective, it’s a no-brainer: I use machines – no machine uses me, ever.

Robot News March 6 2013

6 Mar

Came across the following when searching news results today:

Stranger than fiction

Incredibly, there is now a company – called just Shadow – that is developing artificial internal organs for robots – including artificial blood. At present, it’s just a test bed, but the possibilities are truly intriguing.

DIY Robot

Even more interesting to me is a French company that allows a user to 3D print body parts and assemble a complete robot in the home – sort of like a modern day Dr Frankenstein. Now, if you didn’t know already, 3D printing is poised to change the economic world like never before – think of all those global companies that make plastic things that are sold to consumers. Couple 3D printing with robot production and there’s no telling where it will all go.

Watch This Robotic Dog Throw Cinder Blocks With Its Head

Well, Big Dog from Boston Dynamics has grown up. Now, it has a long neck with a strong grasping tool at the end. Seeing this, one can visualize many applications without any difficulty.

When are we going to learn to trust robots?

Well, it’s often difficult to trust humans, is it not – even family and associates? But, strangely, it’s sometimes easier to trust a complete stranger. So, where do robots sit with you? Read this article for some truly pertinent comments about robots in the home, perhaps sooner than you think.

The Real McCoy

5 Mar

This week, I’ve extended my reading into two companies that appear to be leaders in the development of androids and related technology: Hanson Robotics and Boston Dynamics.

Hanson Robotics, in a nutshell, eventually wants to produce androids that look like real humans. One of the key features of their robots is a synthetic skin called Frubber, a patented product that is truly impressive. For some views on that product, CEO David Hanson and the company, I picked up two pertinent articles online:

‘You, Robot’: Personal Robots For The Masses

Humanoid Robots Put Best Faces Forward.

Generally, David Hanson is a man dedicated to bringing androids to the broad market to assist humanity’s efforts to make the world a better place. That’s a laudable goal. On the other hand, he sees no insurmountable problems with the concept of androids that look like real humans.

In contrast, while Hanson Robotics is a leader in human-like characteristics for androids, Boston Dynamics appears to concentrate on quadrupedal and bipedal locomotion, with the view to producing autonomous four-legged “pack animals” and two-legged androids. The former, called Big Dog, can outrun a normal human.

In videos at the company website, the sure-footed ability of Big Dog is nothing short of astounding, especially on ice, even at this stage of development. One of the bipeds, Petman, shows good balance on a treadmill but there is no information yet on how it fares on rough terrain, on ice, through water etc. Perhaps the field testing for the other biped, Atlas, later this year (2013) will include such aspects. The military application of such robots is obvious; but, there are other, equally obvious possibilities.

Companies such as those are the present and the unavoidable future. Over time, I’ll research all such companies, globally. In the process, it’s with some satisfaction that I’ve discovered others who are just as concerned as I am about the rise of the machines – if you’ll pardon the hyperbole – and particularly about the development of perfect androids. The above links provide further information in that regard.

Moreover, if you’ve been following this blog, then you know I’m not prejudiced against the idea of androids per se. I do think it’s crucial, though, that humanity maintains its distance from robots when it comes to physical appearance: in this context, the matter truly is a case of Us and Them.

Not in competition, but in a mutually beneficial association – with the understanding and knowledge that real humans always assume the dominant role in the human-android nexus. To do otherwise is to court disaster in many forms at some future date. That doesn’t mean androids should be programmed to regard humanity as infallible; obviously, we are not.

The crucial point of my argument is this: all robot software includes built-in rules, according to the functions each machine performs. For example, a future robot fire-fighter would have sets of rules governing its use of equipment to put out fires, rescue humans, treat wounded and burn victims, and so on. A robot doctor would need even more complex sets of rules. Androids acting as police, should that ever come to pass (and I hope not), would necessitate more complexity still.

Regardless of the application-specific software, though, all developers and manufacturers must ensure that every machine – android or otherwise – is programmed to conform to the Three Laws of Robotics. That should not present a problem for any relevant company – other than the difficulty of getting the software right. And that’s a prodigious issue for developers, because they know only too well the difficulty of developing perfect, bug-free software.
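To make the idea concrete, here is a minimal sketch of how application-specific rule sets could be layered beneath the Three Laws, with the Laws always outranking every task rule. All of the names and rules here are hypothetical illustrations of my argument, not any real robot’s software:

```python
# Hypothetical sketch: layering application-specific rules beneath
# the Three Laws of Robotics. Names and rules are illustrative only.

THREE_LAWS = [
    "Do not injure a human, or through inaction allow a human to come to harm",
    "Obey human orders, unless they conflict with the First Law",
    "Protect your own existence, unless that conflicts with the first two Laws",
]

class RuleSet:
    """An ordered group of rules; its position in the stack sets priority."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = list(rules)

def build_rule_stack(application_rules):
    """The Three Laws always sit above any application-specific rule sets."""
    return [RuleSet("Three Laws of Robotics", THREE_LAWS)] + application_rules

# A fire-fighting robot: its task rules can never outrank the Laws.
firefighter = build_rule_stack([
    RuleSet("Rescue", ["Locate and evacuate trapped humans first"]),
    RuleSet("Suppression", ["Apply extinguishing agent to the fire source"]),
])

# Whatever the application, the Laws are the highest-priority rule set.
assert firefighter[0].name == "Three Laws of Robotics"
```

A robot doctor or robot police officer would simply carry a longer, more intricate stack of task rule sets beneath the same fixed top layer – which is exactly where the software complexity, and the bug risk, accumulates.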

For example, in all of my forty-seven years dealing with computers and programs, I’ve never encountered a complex program that’s bug-free. The more complex, the greater the risk of bugs: that’s the reality, that’s the challenge, that’s the danger.

In the final analysis, though, it’s obviously in a company’s interest to get the software and firmware right: any product sold to the consumer must be certified as safe to use, in theory and practice. And that’s the catch: get it wrong, and the consumer will go to the competition – arguably the worst fate for any company, and as it should be.

Aye, Robot!

2 Mar

Fancy shaking hands with a robot that looks like a deceased relative, or Albert Einstein or…anybody?

Right now, humanoid robots are actively making inroads into early childhood education, psychological therapy, and the care of the infirm and old. A swarm of other applications is also available or in progress, according to a recent NY Times article.

I guess that’s news for some. But the mass of moviegoers has been so battered with robots in film fiction – think Forbidden Planet, The Stepford Wives, Star Wars, Alien, Terminator, Blade Runner, Aliens, RoboCop, I, Robot, etc. – that they might think they know it all. Perhaps not, though.

Consider, for example, a recent Swedish TV series called Real Humans, a daring and dramatic fiction which examines the societal, legal and psychological implications of living with robots made to look just like you and me: perfect androids, called Hubots (human robots). This multilayered story intelligently examines robot issues that Hollywood generally avoids like the plague: Hubot sexuality, Hubot pornography, Hubot rights, Hubot sex slaves, illegal trafficking in Hubots, Hubot freedom fighters, humanity’s backlash, murder of humans by Hubots, and more.

Those familiar with the Three Laws of Robotics, promoted by Isaac Asimov and others over fifty years ago, would know that such a story totally subverts those laws. But heck, somebody will say, those Hollywood movies have been doing that for years – nothing new there, mate!

True, to a point. But think of the Hollywood offering I, Robot: none of the humanoid robots in that movie are perfect androids – all of them still look like the machines they are. No human could ever mistake them for one of us. So, either by design or accident, Hollywood ensured that no conflict with human identity was evident – as it is, most definitely and insidiously, in the Real Humans TV show.

The robotics ethicists I tapped into online are becoming more concerned about the mounting ethical and moral issues of robots permeating society sometime in the future, and certainly this century. They should be concerned because, while they all agree that unfettered robotic technology is leaping ahead, the moral debate is still hesitant, even lagging.

Over time, I’ve followed developments to produce androids that look, act and sound like humans. I’ve seen documentaries examining some of the issues – one particularly about backyard robo-engineers attempting to construct sex slaves. That might sound comical, but recall: the first Apple computer was made in a home garage.

So, the T-Rex in the room is: will a robot manufacturer construct a perfect android for the mass market this century? Can it be done? Are the AI problems insurmountable? Considering the current state of robotics, I’m certain a perfect android is doable some time this century. I say that based upon the information provided below and elsewhere, coupled with my understanding of the limitations and capabilities of computer technology and programming, having worked within the IT industry since 1967.

So, for a taste of what’s coming, have a look at the advances already available from Hanson Robotics. Contemplate the implications of what’s already labeled the most human-like robot ever. And, crucially, things are looking up for the US military – it likely will have its Terminator by mid-century, after all.

All of the above examples are a long way off the perfect android. The technology, however, is just getting better, almost daily. All forms of robotics are expanding into numerous industries; one online source lists 78 robotics companies globally. I’m sure there are many more. A few companies are now marketing small humanoid robots.

Hence, androids are here to stay. They have their developing place in society, no question. But why the need for perfect androids, indistinguishable from real humans? Aren’t images like Gort, Robbie, C-3PO, Nestor and others sufficient? They are all humanoid, but none resembles any real human. So, considering just the potential for future confusion about who is a real human, even when shaking hands, wouldn’t it be smarter for all robotics engineers and manufacturers to voluntarily shun the construction of perfect androids?

Why smarter? Well, when certainty exists that humanoid robots still look like the machines they are, many of the problem issues – and there are many – explored with chilling effect in Real Humans cannot logically come to pass. So why the push to do the opposite: to build machines in our real image that will inevitably lead to self-imposed problematic situations – many trivial, others troubling, some quite serious? Is it not sufficient that we humans are occasionally stupid, often irresponsible, and sometimes criminal in our behaviors? Does humanity need a foreseeable future with the same or a worse level of daily competition and conflict from perfect androids with sophisticated AI and, potentially, ‘free will’?

These are questions that need to be resolved, and relatively soon. And while the Three Laws of Robotics were originally developed within fiction, the philosophical underpinnings have caused much serious debate for over half a century. Indeed, additional laws have been suggested: an obscure Zeroth Law and an even more obscure Minus One Law, details of which are at the link already provided, above.

Well, it’s one thing to have laws of robotics in fiction; now, with the advent of imperfect androids into society, the legalities concerning human-robot interaction will come to center stage, sooner or later. Meanwhile, the push to be first to construct a perfect android continues. Hence, before anybody gets within cooee of that particular eureka moment, the whole issue of the assumed need for perfect androids must be resolved.

To help accomplish that, it would be prudent to suggest, here, a new law within robotics, one that is from the perspective of humanity – the creator of robots, no less – thus:

No robot shall be made in humanity’s real image.

Call it, appropriately, The First Law of Humanity, perhaps. That implies the probability of further laws, an aspect I’ll continue to discuss here. Perhaps a continuous poll here about the First Law would be useful, for starters?

Meanwhile, perfect androids are still the stuff of fiction and fantasy only; so let’s keep them that way. But, given the proclivity for humanity to push technology to its extremes, nobody knows where robotic developments will end, if indeed they will, ever.

I do know this, however: whenever I shake hands with some body, I want to see – unequivocally – a real human or a machine.