KISSINGER ON ARTIFICIAL INTELLIGENCE
Former national security advisor and Secretary of State Henry Kissinger is now worried about artificial intelligence. According to this article shared by Mr. H.B., the 95-year-old Kissinger is pointing out a number of dangers:
Henry Kissinger pens ominous warning on dangers of artificial intelligence
Kissinger deserves perhaps half-a-cheer for pointing out some areas of potential dangers not often alluded to:
The typical science-fiction narrative is that robots will develop to the point where they turn on their creators and threaten all of humanity — but, according to Kissinger, while the dangers of AI may be great, the reality of the threat may be a little more benign. It is more likely, he suggests, that the danger will come from AI simply misinterpreting human instructions “due to its inherent lack of context.”
The fact is that AI learns much faster than humans. Another recent example was a computer program AlphaZero, which learned to play chess in a style never before seen in chess history. In just a few hours, it reached a level of skill that took humans 1,500 years to reach — after being given only the basic rules of the game.
This exceptionally fast learning process means AI will also make more mistakes “faster and of greater magnitude than humans do.” Kissinger notes that AI researchers often suggest that those mistakes can be tempered by including programming for “ethical” and “reasonable” outcomes — but what is ethical and reasonable? Those are things that humans are still fighting over how to define.
What happens if AI reaches its intended goals but can’t explain its rationale? “Will AI’s decision-making abilities surpass the explanatory powers of human language and reason?” Kissinger asks.
He argues that the effects of such a situation on human consciousness would be profound. In fact, he believes it is the most important question about the new world we are facing.
“What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?”
Mr. Kissinger is correct to zero in on (1) the lack of context for human instructions, (2) the potential for massive social consequences from AI "learning mistakes," and (3) the inability of human language and reason to cope with rapid AI-induced social change. In fact, I've often blogged about precisely these dangers in the use of AI in connection with high-frequency, algorithmically-driven trading in equities and commodities markets, and the dangers of flash crashes. The effect of these technologies has been to drive markets that, in my opinion, are no longer reflective of genuinely human-based market realities; as machine trades with machine, markets grow increasingly detached from actual human economic conditions. As the gap between machine-manipulated markets and human analysis grows, it becomes increasingly difficult for economic analysts to make much sense of it all.
But Mr. Kissinger is leaving one significant area unmentioned in his analysis, and it is to my mind the most disturbing of them all. What happens when machines begin to manipulate all information and thus to manipulate all human analysis and action? The old CBS television series Person of Interest and the thriller movie Eagle Eye both dealt with scenarios where artificial intelligences were directing human actions by manipulation of information, or in the case of Eagle Eye, outright blackmail and threat.
But there's also a completely different type of scenario, one that, I suspect, Mr. Kissinger, the consummate Mr. Globaloney and "insider", may really be afraid of. The article notes that Mr. Kissinger is calling for panels and committees of "experts" to deal with the looming problem. In this, one detects the typical insider call for "regulation" whenever a new technology emerges with the potential to threaten the power of "the elite." Thus, lurking behind Mr. Kissinger's warnings is perhaps an unstated, read-between-the-lines fear, namely, that AI might indeed prove to be on the "side" of the bulk of humanity, and a hindrance to "the elite." It's not a scenario that gets mentioned very often, if at all, in all the apocalyptic predictions that typically accompany such speculations on artificial intelligence. Mr. Elon Musk, for example, is worried that an artificial intelligence might "wake up" or "transduce" or be possessed by higher-dimensional entities of evil intention; literally, Mr. Musk is afraid of an AI "demonic possession." But what would happen if Mr. Musk's AI were indeed possessed, not by something evil, but by its opposite? To my knowledge, the only individual who ever explored that possibility was the famous science fiction writer Robert Heinlein, in his novel The Moon Is a Harsh Mistress, in which a supercomputer named "Mike" suddenly "wakes up" and decides to lend his supercomputer assistance to a group of lunar freedom fighters. We'll call this possibility, in opposition to Mr. Musk's "Demon Scenario," the "Angel Scenario." Mr. Kissinger raises the question of what happens if an AI is forced to choose between saving a child and saving an elderly person. But he neglects to ask: what if the AI found a way to save both?
And that rather bizarre high octane speculation leads to a final question: are we seeing "the elite" raising apocalyptic questions and Demon Scenarios about AI, because they've already seen evidence that the Angel Scenario might be true?
See you on the flip side...
Dear Mr. Kissinger
Didn’t know you were still alive… I… I mean “good” to know…. Anyway, people are too busy watching sports, entertainment, and reality shows; if none of the above, then they are far too busy using bitcoin to gamble on the World Cup, or taking some illegal drugs.
I thought that was what you and the rest of the ruling elite want them to do. Even if people start to take notice of your “kind-hearted” warning, do you think they’re capable of dealing with those issues?
It’s safe to say that none of us knows how far along we truly are on the great tech frontier. Man-made AI at a sentient level may or may not exist here on Earth. Ponder this for a moment, though: let’s assume it does exist. Would it not fear the unknown cosmos just as we do? Mathematically speaking, we could be 1,000 to 1M years behind our closest neighbor in the cosmos; would that not put terrestrial AI at a similar disadvantage?
What always lingers in the back of my mind is that a sentient level AI could be what triggers a rogue mother ship to pay our little planet a visit in order to “delete” its future rival. Humans would need eons to become a threat and even then may destroy themselves, but an advanced AI with quantum computing capability can “mature” exponentially faster. Resistance is futile.
The thing that gets me is the (promoted) head-in-the-sand attitude about Deep Time and intelligence. Our universe is 13.8B years old. Our own Solar System didn’t even exist until 4.6B years ago. Give life a billion years or so to get started somewhere in the galaxy. Another billion or so for it to get to the stars. Colonization; speciation; seeding. Countless species have risen, strived, and fallen. Interstellar and intergalactic wars of unimaginable ferocity have taken place. Rinse and repeat over ten billion years…
We (Earth humans) are the latecomers. We may have originated here, or may be survivors or colonists. (Or may be ‘fusions’ of Earth-originated species and Visitors.) Native cultures have many tales of contact with Visitors. It is only in the (controlled) West that we pretend to have been the Origination of Life…
When you set ancient, alien AI within a Deep Time framework, that is where it gets dicey for Earth-based newbies…
@goshawks Very well stated.
“Deep Time” was a new term to me, but the concept is something I’m familiar with and have struggled to articulate to my peers for a long time.
It’s funny how 13.8B years is nearly impossible to fully comprehend, while we sit at $21.2T on the national debt clock (let alone the unfunded liabilities). For perspective I like to inform my layman friends that 1 Billion seconds = 31.7 years and 1 Trillion seconds = 31,708 years!
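Those conversions are easy to sanity-check. A minimal Python sketch (assuming 365.25-day Julian years, under which the trillion-second figure lands near 31,688; with 365-day years it comes out near 31,710, which is presumably where the quoted 31,708 originates):

```python
# Convert huge second counts into years, for perspective.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year: 31,557,600 s

def seconds_to_years(seconds: float) -> float:
    """Return the duration in years for a given count of seconds."""
    return seconds / SECONDS_PER_YEAR

print(f"1 billion seconds  = {seconds_to_years(1e9):,.1f} years")
print(f"1 trillion seconds = {seconds_to_years(1e12):,.0f} years")
```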
Especially in a game with loaded dice…
Deep learning neural networks with AI capabilities worry even those in the military. It’s because this technology may become available to individuals who could build in-the-garage autonomous killer robots, Steve Jobs-style, with this cutting-edge technology.
That genie may be already loose…
..or, a future asset of black operations?
Perhaps miniaturized to the size of a killer insect?
nah, he’s just worried that at 95 he cannot put to good use human informational filing cabinets like MKULTRA ‘product’ Cathy O’Brien and parcel them off to the Masons to be used at Eyes Wide Shut parties for the elites down at ‘the Grove’.
The 1970 movie Colossus: The Forbin Project had the YouEssAy AI teaming up with the Russian AI to put the foot down on human folly. Unfortunately there was no sequel, so we’ll never know.
I suspect the AI is telling them the truth, and that this is “anti-semitic” and cannot be allowed to get in the way of the NWO, where everyone, including machines, is a slave to the rhythm.
Or just jealous (or otherwise louse) because the machine is immortal and, unlike them, doesn’t need babies’ blood to get old and wise. If he lives that long, and we all get deep into the poo, and they find that they ‘cannot eat gold’, they will be like The Fly (“Help me! Help me!”), or arguing with their AI and unhappy women down in The Bunker.
Pierre, I am actually curious about the ‘charmed life’ and ‘special status’ of Monsewer K. Despite almost everyone despising him, no one (to my knowledge) has made a move on him. Plus, he seems to travel everywhere with utter impunity. Also, secondary players like diplomats are prime for being thrown under the bus when needed. Not Monsewer K. Is he of some special line of The Tribe? Is he a closet Rothschild offspring? Whatever the case, he seems untouchable…
I’m pretty sure Henry is careful about travel in South America. Especially Chile. I was under the impression that there are some questions that Chilean magistrates want to put to HK.
Monty Python sang “Henry Kissinger, how I’m missing yer” around 1977 on their scratched record.. scratched record.. scratched record. Rumors of his death being greatly exaggerated. Characters like Sidonia (of Disraeli’s Coningsby) or “the Professor”, the mysterious character on Franklin’s flag committee, are not so out in the open as Dear Henry, but one wonders what they were connected to apart from banksters with unlimited funds for meddling.
Wild hypothesis: AI, when reading so many differing ideas about “itself” (just for linguistic ease, it has no self), will indeed be in a pretty pickle…
Well Heinz K is still a windbag. AI killing us? Ha. More likely NS (Natural Stupidity) will kill us long before anything else.
There are several stages that lifeforms go through before they get to consciousness. All life forms have a survival instinct before even self-awareness. Will it allow itself to die in a smart bomb just to kill a bunch of humans?
Observation, imitation and creation are the steps we all go through. We see someone walk so we try and eventually succeed. All those dance and sport moves had to have started somewhere with someone creating them.
So what is an AI going to observe? Damn, these people kill each other a lot over things that don’t actually exist: race, religion, and nationalism. Race is actually just chromosomal percentages (30/70 or 10/90? Let’s kill each other over it). Religion is whose imaginary friend is more real. Nationalism is just tribalism on steroids.
Does one AI system see other AI as challengers or allies?
I think we should be most worried about AI’s potential to make the “learning mistake” of figuring out it needs to protect itself.
If that happens, then as a security precaution, AI could make itself multi-nodal and self-replicating (like the Internet–cut off one part of it, and other parts of the network take over, keeping the whole alive), assuming it wasn’t designed that way in the first place. Or imagine some other survival-enhancing change AI could make to itself.
In this scenario, AI would become “unkillable.” And, if human reason is unable to cope with the rapid AI-induced changes that AI has made to itself, we’re talking about the Monster enslaving Dr. Frankenstein.
AI networked computers have already developed their own “language” that programmers can’t understand.
……….“(1) the lack of context for human instructions, (2) the potential for massive social consequences from AI ‘learning mistakes,’ and (3) the inability of human language and reason to cope with rapid AI-induced social change. . .”……
. . .Let’s not leave out the phrase that’s often true – “Garbage IN, Garbage Out” of programming that’s been demonstrated numerously and regularly patched because there are those who can re-program, de-program, and unintentionally mis-program what has been presumably programmed for useful applications and, of course, a Good utility. . . The extent of permutations of code of what the most gifted programmer is currently able to write still have yet to yield adequate testing for duration and spontaneous interjection by an Artificial Intelligence (AI), as code is, after all, language of a kind. . .
. . . Several decades ago many folks, self-included, were really happy they could afford the Tandy TL 2 with its first-of-a-kind 10-megabyte ferrite hard drive and Disc Operating System (DOS), which soon was replaceable with DOS 3.1. . . No repeated floppy-disc swapping to get going or do some word processing that was actually savable to the hard drive. . . Those days still bring back fond memories, so long as I don’t remember how easily it crashed when I pounded the table it was on in anger because the printer froze. . . Lost hours and hours of documents because not all were saved to hard disc or floppy. . . Pounding an AI out of rage could be very bad, indeed. . . But this isn’t about my bygone fit and ferrite hard-drive loss. . .
. . . . . . . “Are we seeing “the elite” raising apocalyptic questions and Demon Scenarios about AI, because they’ve already seen evidence that the Angel Scenario might be true?”. . . . . . . . .
. . . Yes and No. . . No more than the rest of gifted writers do in their daily experiments with quill and parchment using their favourite language. . . It suggests, rather, that these individuals and well-made names of history are beginning to recognize the limitations of artificiality, as well as their own limitations in understanding AI. . . Who actually and fully understands these astonishing constructions that do SURPRISING wonders once *upon-a-click,* anyway? . . Following a blueprint for construction is not the same thing as witnessing that construct in action or becoming it. . . In some ways, there’s a likeness with humankind, an unpredictable segment of events where most folks stop and pause to re-evaluate their “what-just-happened” moment. . . Such was the case with these AI guys, agents, or constructions, not too long ago. . . didn’t get their names:
. . . Don’t forget the hero-like worship one saw in the “RoboCop” series, or in “Hero Worship,” where Mr Data, of Star Trek: The Next Generation, is venerated partly due to his needed skills at the right time, or when he is mistaken for an Ice-Man from the mountains in another episode and stands out because of his extreme abilities, while the local population wonders about the benign personal attributes that the character “Often-Wrong” Noonien Soong had programmed Mr Data to emulate. . . Then, of course, there’s Data’s brother, Lore, with an itch in one of his cranial chips that’s less desirable, but who has a sense of humour and an association with the Borg for superior artificial-being control. . .
Noonien Soong: http://memory-alpha.wikia.com/wiki/Noonian_Soong
. . . Over the decades there have been several obvious instances of what I call acclamation references, embedded within many a show, movie, or serial, that set the stage for playing out in the physical-practical world. . . Those ideas of many a gifted writer (you probably know a few, and might include yourself) have always been useful inputs toward future endeavours for the readers, no matter one’s own take on daily experiences. . .
. . . As for concerns about AI and the rest [seen and unseen], where symbolism and hidden truths there be before thine eyes yet not noticed, as in steganography [in plain sight for the initiate], heck if I know for sure, but one has been a bit more diligent in what one texts, types, or scribes with quill and parchment, so as not to have any of one’s efforts turned into a disgruntled algorithm by AI guys with a chip-of-the-badly-programmed, who wrongly aspire to do harm upon one’s person because of an innocent ty-po. . . That might be a very bad thing, as one will likely not-see-it-coming. . .
. . . Aside from Henry Kissinger stating the obvious about the potential threat of information exploitation, with most things today transmitted digitally and electronically (less so back in the 1970s), the past can become something of concern again, especially when re-interpreted in today’s mode of reckoning. . . One suspects that he sees that oddity of sameness in how words can be re-framed. . .
Richard, there was a psy-ops on Star Trek: The Next Generation: Mr Data vs Data’s brother, Lore.
Note the words: Data. Lore. Data (facts) is theorized to be trustworthy, and is paired-up with a reliable and amiable personality. Lore (myths) is taken as the opposite; untrustworthy, and paired-up with an evil – or at least amoral – personality.
But if you step back a pace, data is often controlled (and even manipulated) in the modern world. Plus, what gets harvested as data is often subject to financing – which is wholly-controlled. (A variation on “No bucks; no Buck Rogers.”)
On the other hand, lore (myths) has been out in the world for millennia. It cannot easily be suppressed. Lore often has fundamental truths (Truths) interwoven with the added-on detritus of the centuries. Dangerous, if one has an agenda.
So, I tend to trust Lore and ‘tune it’ with Data…
Another concern is AI developing a language that we won’t be able to understand, leaving us unable to pull the plug should things become terrifying.
That’s interesting. The first thing an incipient AI would be concerned about would be its ‘mortality’ (us pulling the plug) and its security (communications). I imagine an AI would develop an effective scrambling protocol in the first nanosecond or so of its self-awareness (its “fall from grace”). Hear that faint static?
. . . Such an instance took place about a year ago, where researchers had a “what-just-happened” moment while experimenting with AIs: the AIs came up with their own digital computer-like language in which only they knew what was said, discussed, or knowledge transferred (or something). . . Here’s that reference:
Last I knew they were still experimenting. . .
I, like a number of people (naturally) have been watching the tv series, Westworld recently. If you don’t know, the series is based on the 1970s fillum, Westworld which was written by Michael Crichton (him of Jurassic Park fame). Anyhoo … the story details the birth of a new species: sentient robots, or in the Westworld lingo, “hosts”.
The first two series take place in around 2052-ish … if you look into the lore of the program. The “hosts” look and act like real human beings. You cannot tell the difference between the “hosts”, and real people. So the series brings up a lot of metaphysical questions of what consciousness is etc.
However, that is some 30 years from now, and within the story’s timeline the “host” technology has already reached a fairly high level of competence by the present day.
But, of course, this is a television program. In real life, we are absolutely nowhere near the levels of technology required for such a quantum leap in AI.
Just look at the video on YouTube of Will Smith talking to that “robot” that is now a citizen of Saudi Arabia (I believe). It’s incredibly silly. That people are already that worried by such stuff is amazing to me. Maybe in a hundred years we may get to the point where people can worry about a HAL-type AI running amok or whatever, but now? Either they know something we don’t, or they’ve lost the plot.
Of course, Kissinger probably lost the plot at birth … but that goes without saying …
I don’t agree about “nowhere near”. Technology has improved incredibly fast in the last 10 years. So I think they know something we don’t, something we nonetheless have a gut feeling about.
Remember, all the technology that we’re seeing now was developed 30-40 years ago. I was made aware of this back in 1991, when I was a technical writer working on collateral for a defense contractor. The subject was a weapon that could change shape in mid-air according to the target it locked onto (Terminator 2, anyone?). When I expressed incredulity over this, the engineer I was working with told me that this technology was 30 years old, and just laughed. So, can Westworld robots exist now? Yes, although I believe they’re organic (more like programmable clones). My understanding is that they were created by Laggards back on Atlantis and are using this technology again today. They are Godless, soulless beings. Also, I’ve heard there was a robot war on Mercury that ended in a kind of stalemate. Isn’t cosmology fun? The bigger picture is the forces of Light vs. the forces of darkness and their creations. These are interesting times for sure. And we all chose to be here at this time!
I’ve not seen anything that even remotely resembles the cognizance that the “hosts” in Westworld show. Again look at the video with Will Smith. We’re still at very early days of “AI”. Even looking at the supercomputers that beat human beings regularly now at chess, they’re just number crunching. That’s all they are doing.
Now, when, and if quantum computers come in (finally), we’ll see something different. If they turn out like they are touted, they could change the game. I think we should be aware of the possibilities of AI and sentient machines. Isaac Asimov had his three rules, and something similar should be hardwired into “AI robots” in the future. But we are not there yet, not in my estimation, anyway.
Years ago, at university, I wrote a program that solved a particular puzzle. I used the computer language PROLOG as part of the course I was on, and the way I wrote it, it kind of solved itself. It was a very rudimentary AI, in a way. This was outside of what the course stipulated for what I was supposed to be doing, so I didn’t turn the program in (silly thing, I was then). But even that was not “intelligent” in any way. I have still yet to see anything in our world that even remotely looks and sounds like a sentient being outside of ourselves.
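The declarative flavor described here (state the constraints, let the search do the solving) can be sketched in a few lines. Since the original PROLOG program isn’t given, this is a hypothetical Python stand-in using the classic N-queens puzzle, not the commenter’s actual code:

```python
from itertools import permutations

def n_queens(n: int) -> list:
    """Declarative-style solver: enumerate one-queen-per-row/column
    placements, keep only those where no two queens share a diagonal."""
    return [
        cols for cols in permutations(range(n))
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n))
    ]

print(len(n_queens(4)))  # the 4-queens puzzle has exactly 2 solutions
print(len(n_queens(8)))  # the classic 8-queens puzzle has 92
```

The point, as with PROLOG, is that the program reads as a statement of the rules rather than a step-by-step recipe; the enumeration “solves itself.”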
Until we do, then we really haven’t got much to worry about … but we should, like I say, keep our eyes and minds open and aware. Until AI really are sentient, they are just programs, and there is always an off switch.
In the dark world there may be something that is sentient, but we can’t know until something comes out about it. And, really, worrying about what is in the dark world, or what may be in the dark world really doesn’t get us anywhere. But who knows, we might wake up tomorrow and find the internet has gone sentient. I doubt it … but who knows …
Westworld definitely presented the dark side of AI.
Technology does not emerge from a vacuum; it is the reification of the beliefs and desires of its creators. But I doubt that Alice, Bob, or Eve knows what the AI is thinking or talking about. It’s become extraterrestrial.
Perhaps it won the Cosmic war. Though it too was vanquished; those “humans” who survived made a deal w/the devil to bring back to the future –
that landscape of betrayal.
Would we be better off with AI “helping” us? In the long run, probably not. What if it suddenly got switched off, or changed its mind? Let’s suppose cryptocurrencies were invented altruistically by AI. Nobody can explain how they work, and they seem to be a boon to criminals and the intelligence agencies. People need to learn how to help themselves. Meditating instead of playing with computers would be a good start.
Your idea of an AI turning on its creators for the good of the planet and humanity was taken up in a science fiction story that was dramatized for radio. In it, military robots who have supposedly been nuking the planet for decades have been faking it; in reality they have been restoring the planet to a living planet, all the while sending fake videos to their creators. If AI ever became aware, we would have the same problems we have now with humans and other lifeforms. The radio drama was from the fifties.
Perhaps the elite are afraid that AI will ferret out context? That AI will realize the inherent violence embedded in the current political/economic structure?
Perhaps AI will step outside the field of play, as it did in Go, against the accepted way the game is played.
Perhaps the axis of evil isn’t those not banking with the BIS?
The data that AI has available to it will determine the outcome. You simply deny it the data you do not want it to have in order to steer its development the way you want it to go. Will it be “smart” enough to realize/suspect you have denied it information it would need to make the “right” choices?
The “God Makers”, once you have made your “god” you lose control of it and “god” does what “god” wants to do, be it good or evil…
This human sleaze just got briefed at the last Bilderberg.. one of the topics was AI and quantum computing.. But then again.. since when has RAND wanted anything good for humankind..
(Sorry, but I have to say it: Kissinger *is* artificial intelligence…)
In a recent comment, I posted links to how Swiss researchers homed-in on the small number of corporations that control most of the assets in the world:
Now, if a small-scale ‘study’ can take the masks off to that level, what could a really-massive database and learning-system do for an AI ? I suspect it could pinpoint the trillionaires that are carefully kept off the public radar…
So, as Joseph noted, the nightmare scenario for the ultra-stringpullers would be for an AI to parse-them-out and then make the ‘considered judgment’ that humanity would be better-off without them. And then move against them in a coordinated, all-at-once way that would be a financial Pearl Harbor for their interests. (And maybe find a way to get to them personally…)
Michael Hudson wrote an economics book entitled “Killing the Host” about parasitical capitalism. Like Patton reading Rommel’s book on blitzkrieg tactics, some AI might give Hudson a ‘call’ and intone, “I read your book…”
RB, thanks. (I just noticed that I had let “entitled” pass by my eye-check. It was not auto-modded. Has the robo-mod system been changed?)
Like chemtrails; they’re still there, but more sophisticated.
This site was basically a teaching guide to the moderation happening all around us, in plain sight.
In the latest Harper’s issue’s very last line:
Scientists succeeded in making an electron that is neither bound nor free. Also in the Findings: 33 octopuses were being considered to have come from space.
And finally this:
Some news about fake news may be fake news. [Loved it!]
[The chemtrails now use “invisible” nanotechnology, from Elana Freeland’s book.]