October 2, 2018 By Joseph P. Farrell

Yesterday I blogged about the latest gimmick coming down the pike, "spray-on antennae." There's another thing coming down the pike; indeed, it's already here: robotics. It is already transforming culture, finance, labor, and politics. Imagine for a moment the problem that robotics poses for a country like China, with over a billion mouths to feed and an increasingly roboticized industrial base. In the USA, some restaurants are already heavily roboticized, and the trend will only continue. More recently, the trend has been toward automated robotic weapons systems, with the USA and Russia leading the way.

With that in mind, consider this article shared by Ms. K.M.:

Robot manufacturers warned 'bug in AI code' will lead to MURDER SPREES

There are two things that leapt off the page here:

"At the same time they would pressure parliament or other legislative bodies to give them exemption from liability.

"There could be a bug in the code of the robot that promotes such behaviour."

His comments came after he previously told us: "Killer robots could easily go wrong.

"They may be used by crazed individuals or religious extremists to terrorise and kill people.

"They could go wrong due to a bug, or an unknown coding flaw that showed up as response to an unforeseen or unanticipated environment or situation.

"Or they could be hacked." (Emphasis added)

Ponder that first admission: the robotics corporations want to limit or altogether exempt themselves from liability if, or rather, when, their products cause bodily harm to humans, or even death. This, at the same time that some in Europe and elsewhere are proposing to give robots legal personhood. But such a step, in my opinion, is meant to sidestep a real possibility: if robotics corporations or, for that matter, manufacturers of "autonomous weapons systems" were criminally and civilly liable for the actions of their creations, they might think twice before releasing a shoddy, or worse, an insecure product.

It's that "insecurity" of all cyber products, however, that gives me real pause, and the article hints, but only hints, at yet another disturbing possibility. Indulge a bit of high-octane speculation for a moment: you and your family go into your favorite McBurger joint and place your order with the McBurgerRobot. Unbeknownst to you, the McBurgerRobot has been hacked by a "crazed individual" and its program has been altered to turn it into a killing machine.

In other words, the possibility of hacking means that any robot could be turned into an "autonomous weapons system," and if I can think of it, and you can think of it, rest assured, "they" have already thought of it, and are busily engaged in figuring out both how to do it to an enemy and how to protect against it.

Isaac Asimov's robot wars may have, and probably already have, begun.

See you on the flip side...