If you're like Nick Bostrom, Isaac Asimov, or (not to put myself on their level) me, you probably have a few, nay, probably many misgivings about the idea of artificial intelligence and the coming "robot revolution." Asimov, in his typically perspicacious way, explored the ethics and moral issues of artificial intelligences and robots in his sci-fi classic, I, Robot, later adapted into a film. There, as we know, VIKI, an artificial intelligence super-computer, takes over the world's robots and basically imprisons humanity. For some of us, following the weirdness in financial markets for example, the "dark pools" and algorithmic trading that now constitute the bulk of commodities and equities trading are tailor-made for all sorts of A.I. trouble. Even the popular American television series (one of my favorites, incidentally) Person of Interest explores not only the dangers of A.I., but of two such artificial intelligences battling it out with each other, with humanity caught in the middle. In one episode, the "evil" A.I. gives a little demonstration of its "powers" when it deliberately crashes the stock markets in mere seconds, and then, just as quickly, rectifies it. Oxford philosopher Nick Bostrom has been sounding the warnings about A.I. for many years.
Well, if the following story shared by Mr. A. is any indicator, Bostrom's and Asimov's concerns may be entirely justified:
Consider just the disturbing implications about the new robot "Sophia" as outlined in this paragraph:
It is important to note several things that Hanson mentions. Sophia first tells us that she would like to be “an ambassador” to humans, as well as to continue her evolution through formal education, studying art and eventually creating a business and having a family. Hanson explicitly states that Sophia will become as “conscious, creative, and capable as any human.” This statement is followed by a key mention of not having the rights of a human. This might seem absurd to the uninitiated, but this is a serious ethical discussion that has been taking place among “roboethicists.” This is all but guaranteed to gain steam as robots are integrated in autonomous ways, whether it is on the battlefield, as self-driving vehicles (now programmed to sacrifice some humans over others), or certainly as they become visually and intellectually on par with human beings. Even the mainstream Boston Globe addressed this more than two years ago, citing a 2012 paper from MIT.
At this juncture, the article goes on to mention the existence of - get this! - a Society for the Prevention of Cruelty to Robots, and this in a society that chops up the unborn, sells their parts, harvests human organs, and makes people pay for the whole "privilege."
Of course, driving all this robotmania is the quest of organizations like the Kammlersta.... er.... DARPA (the Diabolically Apocalyptic Research Projects Agency) to create killer robots, bypassing the inconvenient question of whether creating killer robots in an age of imminent A.I. is a good idea.
Well, if you happen to be one of the crazies like Ray Kurzweil & Co. who think all this is a good idea, and that the approaching transhumanist "singularity" promises nothing but good for humanity, then you might want to pause and consider Sophia's last answer:
Regardless of whether or not you personally believe that the lofty intentions of robotics and artificial intelligence designers can truly manifest as planned, one must acknowledge that we are living in the realm of faith at this point, as nearly all of what they predicted years ago has come to pass.
Perhaps most troubling is the nervous laughter that erupts at the end of this video when the ultimate question is posited to our new humanoid friend and family member … and she gives her answer:
I will destroy humans.
Funny, super funny … ’til it’s not.
All of the components are coming together to bolster the warnings that have been issued by tech luminaries, scientists, universities, and even robot manufacturers themselves who all have urged a quick ethical framework to be established while we still remain in full control of this creation. If permitted to continue at its current pace, we might very well be asking who should really have the rights to be protected from whom.
I can't put it any better than that, except, perhaps, to suggest something: how about a moratorium on all robotics until (and even if) appropriate safeguards can be designed? I'm all for human society and civilization(s), for all their faults, over anything run exclusively by, and for, machines. I suspect that even the powers that be, if there is any sanity left among them, might be too. I'm just saying "No" to the temptations of robotics.
See you on the flip side...