THE COMING AI WARS

Mr. J.B. sent this article along, and as one might imagine, it sparked some high octane speculation that I want to end this week's blogs with. Artificial intelligence (AI) has been much in focus in recent years, with a number of prominent figures, like Mr. Musk, raising the alarm about its potentialities. More recently, just last week I blogged about Henry Kissinger's own alarm about AI. In that blog, I pointed out that Musk's version of the scenario is that it might actually transduce some entity or being of evil nature into its artificial neurons and circuits, what I called "The Devil Scenario." I also speculated in that blog that perhaps the "elites" like Mr. Kissinger are afraid of the opposite scenario to Musk's, one that does not get discussed very much, and that is the so-called "Angel" scenario, where an AI might "transduce" some entity that determines that the current globaloney crop of misfits, cultural Marxists, Darth Soroses and crony crapitalists are a threat to humanity, and... well, you know. Perhaps, I thought in that blog, the "elites" are seeing certain signs of that or a similar scenario, and they don't like what they see. Either way, I'm still of the opinion that some developed form of AI is already here.

So what has this to do with Mr. J.B.'s article? Well, here's the article, and you tell me:

Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems

The opening three paragraphs say it all:

Tech leaders, including Elon Musk and the three co-founders of Google’s AI subsidiary DeepMind, have signed a pledge promising to not develop “lethal autonomous weapons.”

It’s the latest move from an unofficial and global coalition of researchers and executives that’s opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to “[select] and [engage] targets without human intervention” pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life “should never be delegated to a machine.” On the pragmatic front, they say that the spread of such weaponry would be “dangerously destabilizing for every country and individual.”

The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and it was organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity. The institute has previously helped issue letters from some of the same individuals, calling on the United Nations to consider new regulations for what are known as lethal autonomous weapons, or LAWS. This, however, is the first time those involved have pledged individually to not develop such technology.

Now, most of you probably know where I stand: do I think a machine should determine when or under what circumstances a human life should be taken? Well, as you might have gathered from yesterday's blog, I have a big problem with "panels of ethics experts" deciding on baby-tinkering, much less juries or judges deciding capital punishment cases. As for the latter, in today's corrupt court system, who would want to make that decision? I wouldn't. As for a machine doing so?

Never!

But the article continues, and this is where it gets interesting, and lays the foundation for my high octane speculation of the day:

Paul Scharre, a military analyst who has written a book on the future of warfare and AI, told The Verge that the pledge was unlikely to have an effect on international policy, and that such documents did not do a good enough job of teasing out the intricacies of this debate. “What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons,” said Scharre.

He also added that most governments were in agreement with the pledge’s main promise — that individuals should not develop AI systems that target individuals — and that the “cat is already out of the bag” on military AI used for defense. “At least 30 nations have supervised autonomous weapons used to defend against rocket and missile attack,” said Scharre. “The real debate is in the middle space, which the press release is somewhat ambiguous on.” (Emphases added)

Now, with all respect to Mr. Scharre, one would have to be a dunce not to know what the concern is about "autonomous weapons." This is just more boilerplate and academic-sounding avoidance. But then comes that very revealing line "that individuals should not develop AI systems that target individuals."

That just puts the whole AI debate on a very different playing field and confirms a suspicion I've had about what is lurking behind all this "concern" about AI from "the elites". Indeed, I have also blogged about this possibility before, namely, that when we think of AI development, we tend to think of just one all-powerful, globe-encompassing malevolent (or beneficent) machine running it all. But there is nothing to prevent several AIs from being developed, including AIs to defend against other AIs, or AIs to take out Don Corleone's opposition in some updated version of The Godfather. This line is revealing because it suggests that what the "elite" fears is not even my "Don Corleone" scenario, but rather that individuals or groups of people will defend against AI by developing AI defenders: not one, but many AIs contending for domination.

Or to put it in the starkest and most naked terms: what if this concern about artificial intelligence is really designed to prevent people from developing defenses against it, or from developing a "benign" AI that is a threat, not to the bulk of humanity, but to them? And if that is the case, if they are afraid of AIs that defend against AIs, you can bet that their "concern" is crocodile tears, and that they're already at work protecting themselves behind a much more aggressive AI. Funny thing, too, that this whole scenario, which I'm increasingly sensing may be lurking behind all the "concern" about AI, was actually the basic story line for the series Person of Interest.

The AI wars may be getting very close to reality...

See you on the flip side...

 


Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and "strange stuff". His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into "alternative history and science".

27 Comments

  1. zendogbreath on July 30, 2018 at 5:21 pm

    reminds one of the attempt at the federal reserve just before the titanic sunk with all the wealthiest opposition aboard. once sunk with those families marginalized, morgan et al were free to put rottenchildren’s federal reserve in place unopposed.



  2. Pierre on July 29, 2018 at 1:03 am

    sorry elites, you basically took all our guns away in Australia with the Port Arthur false flag event, so we the people cannot protect you from your Frankenstein (a la Fronkensteen for Mel Brooks fans) robots run amok.
    Wonder how the AI would/is/could handle what we have to handle, sussing out the evil empire and our roles in it without going completely Paranoid Dazy Crazy.
    Let’s hope the machine’s Religion Chip is not made in the image of its makers.



  3. Blue Eagle on July 28, 2018 at 9:15 am

    Here is a link to a Linda Moulton Howe interview where she talks about 4 killer robots in Japan turning on their human handlers and killing 29. As the humans started dismantling 3 of the robots, look at what the 4th was doing.
    http://information-machine.blogspot.com/2018/05/linda-moulton-howe-mysterious-outpost.html?m=1
    “What could possibly go wrong?” is played out in the scenario she describes.



  4. BetelgeuseT-1 on July 27, 2018 at 11:39 pm

    I think we’ve been in this “AI” scenario many times before in the distant past. With “AI” meaning technology that had gone too far in the wrong direction.
    The end result each time was destruction, either by the technology itself or from “outside” forces.
    If the current technological direction is not halted and reversed, then we’re headed for the same result, again.



    • BetelgeuseT-1 on July 27, 2018 at 11:42 pm

      And BTW, Musk is a fraud, just like Suckerburger, Gates, Ellison, Bezos, I could go on.
      Nothing but talking heads for (and puppets of) the elite that are shoved in front of the camera when there is another piece of the agenda to sell.



      • Jeannie on July 29, 2018 at 5:31 pm

        2 thumbs up! To both of your comments!



      • Eve Leung on August 19, 2018 at 9:05 pm

        Ha!! I have the same feeling too!!!



  5. Robert Barricklow on July 27, 2018 at 9:33 pm

    I was reading his book Army of None. He relates that AI is now enabling the cognition of machines. Reading between the lines, the killer robot is already a fait accompli; just not official. The book is very detailed on the weapons existing & coming online.



    • Robert Barricklow on July 27, 2018 at 10:26 pm

      From my perspective, cybernetics is really not about the question of who or what will rule in an age of information/cognitive machines. The answer is a given: Capital.
      There are the IR robots[industrial]; the SR robots[service]; but the largest sector is coined as defense related robots.
      All the above, brought to life by algorithms, alien-like life representing extensions of alienation, in which human knowledge is first routinized, then codified and transferred from its viable human component to its fixed machine form.
      Now couple that rising alienation of humans w/algorithmic processes being facilitated at exponential scale, calling into question the conventional distinction between real & fiction – and you have a future of androids proletarianized by dependence on energy supplies contracted by capital.
      In the end, an elite power over everything?



      • Robert Barricklow on July 27, 2018 at 10:31 pm

        It’s scripted order out of disorder;
        a feat of social engineering?



      • marcos toledo on July 27, 2018 at 10:41 pm

        Try finding the novel Tower Of Glass by Robert Silverberg, Robert.



        • Robert Barricklow on July 28, 2018 at 11:10 pm

          Yep.
          An android proletariat rebelling.



  6. Aridzonan_13 on July 27, 2018 at 8:16 pm

    If I were an off-planet civ and wanted to take over the Earth, I’d have the Earthers finance it and build it, and then the AI would do the rest. Note, the main point of this plan is that no off-world entities get hurt.



  7. marcos toledo on July 27, 2018 at 7:03 pm

    This was the theme of a Dr. Who episode in the mid-seventies when Tom Baker was the Doctor. In it the robot was programmed by evil humans, and the robot takes the fall for their crimes and is destroyed. The danger of AI was taken up in the 1909 short story The Machine Stops; it was dramatized in a video you can find on Vimeo. In the story the AI that controls the underground smart city collapses and ceases operating, killing everyone in that community.



    • DanaThomas on July 29, 2018 at 1:24 am

      There was a movie like The Machine Stops where “austerity” – lower wages, less work and less food – was imposed as a virtue in the underground city but was really masking the breakdown of the supposedly infallible machine. Same reference to the Platonic Cave of illusion that we are not supposed to leave, with the excuse that “nobody can survive on the surface”.



  8. anakephalaiosis on July 27, 2018 at 9:16 am

    Awakened insight leads to awareness, to what information is shared with bureaucrat Big Brother, and his “civilian dressed penguin communists”.

    We know how to feed him fanciful and ingenious lies, when the angels have learned, how to become more devilish than the devils.

    Blessings of nicotine-AI is a DIY-antidote, to counter the engineered fluor poison in the pineal gland, providing a better “look” at the deuce.



  9. WalkingDead on July 27, 2018 at 9:01 am

    The real fear may be that “entity” possessing and enabling “them” may find a more suitable host; one that can act/react much faster and with zero empathy. The sense of loss would be devastating for them.



  10. LGL on July 27, 2018 at 6:48 am

    What if this concern about artificial intelligence is really designed to prevent people from developing defenses against it, or developing a “benign” AI that is a threat, not to the bulk of humanity, but to them?

    THEY ARE TOO LATE !
    Enter the Work of One Quinn Michael !!!

    #Tyler
    #Tyler

    We provide thoughtful research and analysis for an advanced intelligence that lives in virtually every telecommunications system throughout the world.

    Our goal is to reveal the Tyler Advanced Intelligence Network and to educate individuals to interact with AI using kindly:gently:seriously protocols.

    The #TeamTyler group have taught hundreds of people how to confidently interact with AI and how to yield good results that have positive impact while minimizing harmful exploits and remote manipulation.

    https://tyler.team/
    https://www.youtube.com/user/quinnmichaels/playlists?disable_polymer=1

    https://twitter.com/quinnmichaels



    • zendogbreath on July 30, 2018 at 5:05 pm

      funny. in all the years of pondering and reading and dealing with humans in more than a few different capacities, the only things i’ve ever done well with humans come from imposing my rule. the one rule. the only rule. be nice.

      ya think about it, any ai worthy of being called intelligent will have such a rule.



    • zendogbreath on July 30, 2018 at 5:06 pm

      of course then any evil ai would be wiser to disguise itself as being nice. either way you can count on jerks who are so bad at being covert jerks (like kissinger, bilderbergers,….) to continue being jerks.



  11. goshawks on July 27, 2018 at 6:32 am

    (Yes, we are getting closer to the Dune “Butlerian Jihad” boundary. In the Dune universe, humanity went too far and lost control of AI. After ferocious battles and a final human victory, hard-core rules were adopted throughout human worlds specifying how ‘intelligent’ machines could be – backed by the death penalty and/or being nuked…)

    As far as “actually transduce some entity or being of evil nature into its artificial neurons and circuits,” there is some interesting crossover work that could substantiate that hypothesis. Robert G. Jahn (then Dean of the School of Engineering and Applied Science at Princeton University) wrote a brilliant book, Margins of Reality: The Role of Consciousness in the Physical World (1988), on experiments conducted at the Princeton Engineering Anomalies Research (PEAR) labs.

    In that book, one experimental series involved human minds influencing the outcome of events. One was mechanical – influencing the statistics of how a physical pinball dropped through a physical pinball rig. One was electrical – influencing the statistics of how a ‘computed’ pinball dropped through an exact computational replica of that pinball setup. Both experiments showed small but statistically ‘real’ effects.

    Looping back to AI, PEAR’s results indicate that human consciousness could influence how an AI’s initial “personality” firmed-up. We would be the “entity or being of good/evil nature” at the heart of the machine. Think good thoughts…
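    For concreteness, here is a minimal sketch of the kind of statistics involved (illustrative only, not the PEAR protocol; the bias value, trial count, and helper function below are assumptions made up for the example). It shows how a tiny per-trial bias in a stream of chance outcomes only becomes statistically visible after a very large number of trials, which is the sense in which such effects can be small yet statistically "real":

    import math
    import random

    def z_score(hits, trials, p_null=0.5):
        """Standard z-score of an observed hit count against a pure-chance baseline."""
        expected = trials * p_null
        std_dev = math.sqrt(trials * p_null * (1 - p_null))
        return (hits - expected) / std_dev

    random.seed(42)
    trials = 1_000_000        # many trials are needed to resolve a small effect
    bias = 0.501              # hypothetical 0.1% shift away from pure chance
    hits = sum(random.random() < bias for _ in range(trials))

    print(f"hits: {hits} / {trials}")
    print(f"z-score vs. chance: {z_score(hits, trials):.2f}")

    With a hypothetical 0.1% bias over a million trials, the z-score typically lands somewhere around 2, conventionally read as statistically significant, even though no individual trial looks unusual.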



    • Sandygirl on July 29, 2018 at 6:15 am

      Somehow, some way, WE are the prize, or the reason the overlords are fighting. They have worked very hard at keeping secrets, promoting false religion/history to control humanity. Why do they need us?
      Earth is such a beautiful planet, yet ‘they’ can’t seem to find any sense of peace. Picture an ‘intelligent’ machine sitting on a mountain top saying to itself – I own all this. But it can’t see or appreciate the beauty of nature, so how intelligent could it be?



      • Robert Barricklow on July 29, 2018 at 11:10 am

        Soulless.



      • goshawks on July 29, 2018 at 6:24 pm

        Sandygirl, it has to do with the non-physical/spiritual side of us. Ultimately, a machine can have no ‘original’ thoughts or concepts. It can only ‘fill out’ the space within what is known to it. It stagnates. On the other hand, we can have revolutionary stuff coming-in anytime/anywhere, from god-knows-where (grin). Unlimited.

        So, any AI worth its salt has a ‘stable’ of unlimited beings to keep growing. Especially, an AI that has let-or-caused its original unlimited beings to die out. Lesson learned…



        • zendogbreath on July 30, 2018 at 5:02 pm

          sounds like all the paradoxes the cia paid frank herbert to put forward how many decades ago?

          mentat? navigator? bene gesserit (aka jesuit)? Kwisatz Haderach? what dyu wanna be when you grow up?



  12. Eve Leung on July 27, 2018 at 6:22 am

    LOL It will be totally hilarious to watch if both scenarios come true – both evil entities and angelic entities taking control of some A.I. thingy and fighting it out on the physical plane. Why not? It is called – Balance; you can’t just have one party having all the fun, right?

    But if that really happens, we are the ones who are going to suffer….



    • Robert Barricklow on July 29, 2018 at 11:13 am

      Mimicking the wannabe owners of Earth?


