Former national security advisor and Secretary of State Henry Kissinger is now worried about artificial intelligence. According to this article shared by Mr. H.B., the 95-year-old Kissinger is pointing out a number of dangers:
Kissinger deserves perhaps half-a-cheer for pointing out some areas of potential dangers not often alluded to:
The typical science-fiction narrative is that robots will develop to the point where they turn on their creators and threaten all of humanity — but, according to Kissinger, while the dangers of AI may be great, the reality of the threat may be a little more benign. It is more likely, he suggests, that the danger will come from AI simply misinterpreting human instructions “due to its inherent lack of context.”
The fact is that AI learns much faster than humans. A recent example is the computer program AlphaZero, which learned to play chess in a style never before seen in chess history. In just a few hours, after being given only the basic rules of the game, it reached a level of skill that took humans 1,500 years to attain.
This exceptionally fast learning process means AI will also make more mistakes “faster and of greater magnitude than humans do.” Kissinger notes that AI researchers often suggest that those mistakes can be tempered by including programming for “ethical” and “reasonable” outcomes — but what is ethical and reasonable? Those are things that humans are still fighting over how to define.
What happens if AI reaches its intended goals but can’t explain its rationale? “Will AI’s decision-making abilities surpass the explanatory powers of human language and reason?” Kissinger asks.
He argues that the effects of such a situation on human consciousness would be profound. In fact, he believes it is the most important question about the new world we are facing.
“What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?”
Mr. Kissinger is correct to zero in on (1) the lack of context for human instructions, (2) the potential for massive social consequences from AI "learning mistakes," and (3) the inability of human language and reason to cope with rapid AI-induced social change. In fact, I've often blogged about precisely these dangers in the use of AI in high-frequency, algorithmically-driven trading in equities and commodities markets, and about the danger of flash crashes. The effect of these technologies, in my opinion, has been to drive markets that are no longer reflective of genuinely human-based market realities or of actual human economic conditions, as machine trades with machine. As the gap between machine-manipulated markets and human analysis grows, it becomes increasingly difficult for economic analysts to make much sense of it all.
But Mr. Kissinger is leaving one significant area unmentioned in his analysis, and it is to my mind the most disturbing of them all. What happens when machines begin to manipulate all information, and thus to manipulate all human analysis and action? The old CBS television series Person of Interest and the thriller movie Eagle Eye both dealt with scenarios in which artificial intelligences were directing human actions by the manipulation of information or, in the case of Eagle Eye, by outright blackmail and threats.
But there's also a completely different type of scenario, one that, I suspect, Mr. Kissinger, the consummate Mr. Globaloney and "insider," may really be afraid of. The article notes that Mr. Kissinger is calling for panels and committees of "experts" to deal with the looming problem. In this, one detects the typical insider call for "regulation" whenever a new technology emerges with the potential to threaten the power of "the elite." Thus, lurking behind Mr. Kissinger's warnings is perhaps an unstated, read-between-the-lines fear: namely, that AI might indeed prove to be on the "side" of the bulk of humanity, and a hindrance to "the elite."

It's not a scenario that gets mentioned very often, if at all, in the apocalyptic predictions that typically accompany such speculations on artificial intelligence. Mr. Elon Musk, for example, is worried that an artificial intelligence might "wake up," or "transduce," or be possessed by higher-dimensional entities of evil intention; Mr. Musk is, quite literally, afraid of an AI "demonic possession." But what would happen if Mr. Musk's AI were indeed possessed, not by something evil, but by its opposite? To my knowledge, the only individual who ever explored that possibility was the famous science fiction writer Robert Heinlein, in his novel The Moon Is a Harsh Mistress, in which a supercomputer named "Mike" suddenly "wakes up" and decides to lend his supercomputer assistance to a group of lunar freedom fighters. We'll call this possibility, in opposition to Mr. Musk's "Demon Scenario," the "Angel Scenario." Mr. Kissinger raises the question of what happens if an AI, presented with a choice, must decide between saving a child and an elderly person. But he neglects to ask: what if the AI found a way to save both?
And that rather bizarre high octane speculation leads to a final question: are we seeing "the elite" raising apocalyptic questions and Demon Scenarios about AI, because they've already seen evidence that the Angel Scenario might be true?
See you on the flip side...