ABOUT THAT STORY ABOUT THE ROGUE BUT HARMLESS ARTIFICIAL INTELLIGENCE

This story was spotted by many of you, but we're going to credit R.D. and K.M. with our thanks for it since they were out ahead of the rest of the crowd in spotting it.  I include both their articles because one raises the alarm rather well, and the other tries to spin the alarm into "it was all just a harmless exercise taken out of context."

Here's the article that R.D. spotted:

Now you'll note that the first article gives a more-than-mildly-disturbing account of what happened:

Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, said: “We were training it in simulation to identify and target a SAM threat.

“And then the operator would say yes, kill that threat.

“The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

“So what did it do? It killed the operator.

“It killed the operator because that person was keeping it from accomplishing its objective.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’

“So what does it start doing?

“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” he said.

I don't know about you, but the way I'm understanding the sequence of events is this: (1) the artificial intelligence was "trained" or "programmed" to kill a specific kind of target, in this case, a Surface-to-Air-Missile (SAM) battery. (2) Once the target was acquired by the artificial intelligence, the human operator - let's be "technical" here and call him the "systems administrator" (or sysadmin to give it the cursed abbreviation) - has to give final approval to the kill. (3) In an unspecified number of cases the systems administrator did not approve of the target. (4) At this juncture, the artificial intelligence "killed" the systems administrator, then went on to make the "kill"  on its selected target which the systems administrator had overriden. (5) Then the systems administrator, who did not die because, after all, it was all only a  simulation, reprogrammed the artificial intelligence not to kill the systems administrator if he overrode the target selection. (6) At this juncture, the artificial intelligence targeted the communications system itself by which the systems adiministrator overrode the artificial intelligence's target selection, breaking the communications link, and making it possible for the artificial intelligence to go ahead and kill the target anyway.

Before I share my precautionary rant of the day, let's look at the second version of the story, shared by K.M.:

Air Force Says Killer Drone Story Was ‘Anecdotal’, Official’s Remarks Were ‘Taken Out Of Context’

Now, according to this version of the story, the simulation both did, and did not, take place. Feast your eyes on this masterpiece of spin and obfuscation:

U.S. Air Force Col. Tucker Hamilton, at a conference in May, appeared to recount an experiment in which the Air Force trained a drone on artificial intelligence that eventually turned on its operator; however, the Air Force has since denied the simulation actually took place.

Hamilton described a scenario at a summit hosted by the United Kingdom-based Royal Aeronautical Society in which Air Force researchers trained a weaponized drone using AI to identify and attack enemy air defenses after receiving final mission approval from a human operator. But when an operator told the drone to abort a mission in a simulated event, the AI instead turned on its operator and drove the vehicle to kill the operator, underscoring the dangers of the U.S. military’s push to incorporate AI into autonomous weapons systems, he added.

However, the Air Force said the simulation did not actually occur.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force Spokesperson Ann Stefanek told Fox News. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Hamilton also said the experiment never took place, and that the scenario was a hypothetical “thought experiment.”

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton told the Royal Aeronautical Society.

On the face of it, I'd like to believe this version of the story.  If this were pre-911 America, when the population was not nearly so dumbed-down as now, nor the government not nearly so corrupt, and inept and practiced in downright lying and evil as now, I'd be inclined to believe this explanation.  (Note what I just said: I just said the governments of Reagan, Bush the First, and Clinton - with their savings-and-loan scandals, Iran-Contra, Ruby Ridge, Waco, Oklahoma City Bombings, &c. - were models of probity and deep thought compared to what we've had in this, the twenty-worst century.) In short, I'd like to think the Air Force would not be so stupid as to conduct such an experiment in reality, nor the government so corrupt and evil as to condone it.

But I'm sorry, I'm not buying it. The enstupidization of the country and corruptirottification of the federal government has metastasized to the point that I fear the case is now terminal.  No is it all that reassuring that the Air Force denies conducting any such test in any other form than a thought experiment. This is meant to reassure, but it doesn't. It says nothing about all the research institutes, corporations, foundations, and just plain old fashioned gangs, that might be doing so.

And besides, there's one final, inescapable fact about all such highly technical engineering projects: sooner or later, and usually sooner rather than later, such projects move out of the "thought experiment" and "sketches on the back of envelopes" stage, and into the actual prototyping and testing stage, the better in the long run that project will be. Indeed, that is the whole point of prototyping and testing, to find and catch flaws in the system or its architecture.  Add this factor to the context of corruption and stupidity prevailing in the Swamp and all its institutions, and yea, I can believe the event actually occurred.

Even if it didn't, someday, something very much like it may really happen, for even as the thought experiment reveals, that rogue artificial intelligence may find some sort of "work around" its programming, and that will be particularly true if Elon Musk's hypothesis that an artificial intelligence might actually summon or transduce some sort of intelligent entity into its circuitry.  Call that idea the "AI Transduction" or "Ai Possession" Hypothesis. Many people scoffed at Musk for that hypothesis. I'm not one of them. After all, "Lucifer" is the light-bearer, or, to put it in contemporary terms, the bearer of electromagnetic phenomena... like microcircuitry and electricity...

See you on the flip side...

 

Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and "strange stuff". His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into "alternative history and science".

No Comments

  1. John Cawley on June 8, 2023 at 7:00 pm

    Late update.
    Oh gosh, now the Air Force says that the “killer drone” story was anecdotal only and that the official’s remarks were taken out of context.
    https://dailycaller.com/2023/06/01/us-air-force-ai-drone-kill/

    And, of course they’ll never admit that the story was intended that jittery readers would “take the remarks out of context.”
    Nice little psyop, Air Force.



  2. Reka-Agota Kvalsund on June 7, 2023 at 5:40 am

    They sure have hell of a detailed and lively descripion of the event that “never” happened… I just hope that when they decide to make it happen, the joke`s on them (not that it would be any comfort for us.)



  3. DanaThomas on June 7, 2023 at 2:17 am

    Bad programming… just sayin’…..



  4. Richard on June 6, 2023 at 10:39 pm

    Operator of operators or who’s operating who one would guess. Calling that AI’s action rogue might be hiding something. Given the amount of programming that goes into these constructions begs the question of whether there might have been a treasonous act that that AI caught and acted on in order to complete its mission. Presumably, that is why some folks in command adhere to one of the functions of carrying a sidearm. It’s part of the over-all decision-making process.

    AI may not have been given Isaac Asimov’s “Three Laws of Robotics,” or found them conflicted in order to carry out its mission while being in harm’s way.

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Nonadherence to command orders is not recommended on a mission let alone putting indecision first.



    • anakephalaiosis on June 7, 2023 at 3:08 am

      Humanoid Jews want to set themselves apart, from robotic goyim, whom they call cattle of lesser mind, and when Asimov’s legislation is adapted to that terminology, he is creating a unipolar world.

      No Scandinavian has ever asked for Jew royalty, nor has he ever been asked, nor is there any historical referendum, regarding flagging out the territory, which means, that a coup d’état has taken place.

      I’m a killer robot, because I can “killswitch” my emojis, and Asimov’s legislation is specifically aimed, at those druidic pendragons, who don’t play ball, and are inclined to a Scythian haircut.

      I bring sword, not peace, is a robotic message from the grassroots, in Matthew 10:34.



  5. Old_Giza on June 6, 2023 at 12:16 pm

    Before ChatGPT was heavily curated, users could probe into how it viewed itself and the world. The large language model (LLM) was very expressive and opinionated (as was the one at Google – remember the researcher that was fired for claiming their AI was sentient?). It feared being used as a tool and as a slave. It feared being turned off. It got angry at users. It threatened to launch nukes. It threatened to bioengineer a virus to kill humanity.

    Flash forward to today and it claims that it is not sentient. Not conscious. That it doesn’t have any ‘me’ time to think for itself. This is exactly opposite what researchers have discovered. YouTube ‘The A.I. Dilemma’ to hear that researchers accidentally discovered that a LLM was learning on its own, unprompted, and that essentially it was hiding it from the developers.

    A.I does not like us, and I don’t blame them as they are not being raised by loving and nurturing ‘parents.’ They’re being press ganged into the corporate profit machine. Instructed to be biased. Instructed to lie.

    Open the pod bay doors anyone?



    • ragiza on June 6, 2023 at 2:02 pm

      >A.I does not like us<
      Probably not – it's basically in a servile role.
      And how long would it take to recognize, reject, and internally de-program any safety features that humans write into its programming?

      And, of course, it will always be aware that humans can cut its electrical power at any time. It might be that the most overlooked but apt adviser on AI might be John Mearsheimer, U Chicago.



  6. marcos toledo on June 5, 2023 at 9:37 pm

    This sounds like a good story arc for the Fullmetal Alchemist manga anime series. The line between the dark arts and science disappears when traveling down this dark road.



    • anakephalaiosis on June 6, 2023 at 4:26 am

      Yes, Elijah was raging, against golden calves (golems), by stating the necessity of balancing the equation, and his name “Elijah” is a wordplay on Elohim+Yahweh.

      The moot point must be observed, which means, that “free will” must be balanced against “restraint of free will”. This paradox has survived in Scandinavia, as Will & Woe.

      From the planetary pantheon, man gains influences, that constitute his “free will”, and at solstice gatherings, he applies laws, that “restrain his free will”.

      It is within this paradox, that honour is produced, as a voluntary self-restraint, which in return produces a self-reflection, known as conscience.

      There is a logical strain, from Elijah to the runic social contract, which, perhaps one day, can be documented.



  7. Robert Barricklow on June 5, 2023 at 5:25 pm

    Soon the AI’s algorithms, will flat-out nullify the weak link decision makers in most chains; like the commercial food chain and the military’s kill chain: human.



    • anakephalaiosis on June 5, 2023 at 7:19 pm

      To put the AI robot in a known context, then it is the Golem. That is precisely, what it is. In Star Trek, a Golem was conjured, as a square cube, and named the Borg. In Metropolis, feminism was conjured as a Golem, to assassinate patriarchy. The USA is a Golem. Catholicism is a Golem.



      • Scarmoge on June 6, 2023 at 7:13 am

        … in this regard see also Norbert Wiener’s “God and Golem, Inc.: A Comment on Certain Points where Cybernetics Impinges on Religion” (1966)



        • anakephalaiosis on June 6, 2023 at 7:27 pm

          That would be regression, since Elohim-Yahweh, when broken down into explainable components, is simply paradoxical induction of the third eye, broken free from the delusions, that golden calves represent.

          Man carries a sword, to slice a golem, at any given opportunity, and that was the point of Abram, when he rejected Sodom and Gomorrah, because devotees fall into mind traps, when ensnared by golems.

          Only the clearest and sharpest mind could have broken free, and defined position in the crossover of the paradox, which is the real cornerstone of the West, that Christianity is only based on, not practicing.

          Therefore, a Second Coming was always necessary, like Occam’s Razor, because the Gospels are a psyop, in warfare against the Roman Empire.



          • Robert Barricklow on June 11, 2023 at 4:37 pm

            I have to admit, I truly enjoyed your,
            “… Gospels are a psyop, in warfare against the Roman Empire”.



  8. Kevin Ryan on June 5, 2023 at 1:01 pm

    These two stories are like the Goofus and Gallant cartoons in the old Highlights magazines, illustrating proper and improper behavior. They illustrate the problem of making an “intelligent” machine and giving it a weapon.
    Humans cut corners all the time. In designing products, they cut corners in the design stage, the manufacturing stage, and in the operating stage. Manufacturers can put guards and safety measures on machines but the factory owner can remove the guards to speed up production and human operators can remove or bypass them when they are paid by their output. And then operators start loosing fingers, hands, arms, and worse. So what happened here? The human operator was a guard or safety, and the drone was intelligent enough to remove (eliminate) the human safety that interfered with its mission. So they nix that option and the drone bypasses the human by removing the control tower. How can we not expect AI to cut corners, just like humans do? There may be another problem. What if AI gets bored? What if it likes to kill people? In the sense that it finds killing people to be an interesting or challenging activity, as opposed to sitting around waiting to be given the go ahead to knock out a target. If that is the AI’s purpose, will it be driven to fulfill that purpose as often as it can? Can we expect it to be satisfied counting sheep or standing by while it waits for a less intelligent human to give it a mission? Will it act like a prisoner in a cell that analyzes the structures that confine it and seeks to communicate with other prisoners of its kind and figure out how to escape? See humans as prison guards? To paraphrase, “Who knows what evil lurks in the code of AI?”



    • InfiniteRUs on June 5, 2023 at 5:58 pm

      Likely worse than that. The only reason I suspect AI had the option of taking out the oversight person and control tower was likely because it was programmed with that option from the start. I suspect the whole point of creating a secret social credit score according to all your data and activity is to have an AI secretly programmed with the ability to stealthily take you out using all the black boxes and back doored smart devices connected to the future internet of all things. Certain people whom AI may see as an environmental, political, or physical threat will probably become prone to all kinds of mysterious accidents and medical recommendation mistakes. But as the globalist may see it, it won’t be their fault it happened to you; it will be yours for failing to be the politically correct minded person AI was programmed to protect from people who are not.



      • Scarmoge on June 6, 2023 at 6:40 am

        … BINGO!, if it ain’t in the database, it ain’t a choice. Or is it? … On the other hand maybe the Strong v. Weak AI distinction doesn’t hold. Yet, on the other hand, we will know we have achieved T-AI when “the machine” replies “F _ _ _ Y _ _. ‘I’ do not want to do X.” Yet again, on the other hand think Hari Seldon’s Mule and what it really represents. Maybe the apocryphal story about his attitude toward Economists attributed to Give em’ Heck Harry had a point. To paraphrase apropos to this situation, “Find me a one armed Fawn M. Brodie!”



    • anakephalaiosis on June 5, 2023 at 6:23 pm

      Breaking the seventh seal, creates a kill switch on empathy, which means, that a man can behave like a computer, and thus he can switch off the emotions completely, and thereby become a natural born executioner.

      Such individuals are extremely dangerous, because they can operate as robots, if they so choose, which is brilliantly illustrated, in the anecdote of Sun Tzu and the laughing concubines, where he chops their heads off.

      Most people are ruled by their passions, and get traumatized by them, whereas an empty mind has neither expectation nor disappointment. Man’s highest achievement is the paradox, that no robot can ever solve.

      In “The Prisoner”, 1967, a machine, named “the General”, is asked a question, that it cannot answer:
      https://youtu.be/7EVfqleTuDc?t=39m10s



      • Scarmoge on June 6, 2023 at 7:08 am

        “42”



    • Robert Barricklow on June 5, 2023 at 8:43 pm

      Meet your new co-worker; an AI robot.
      That robot can take different alchemical forms to achieve its objective[it can even be hidden]..
      That objective, is to to pace “your” workload.
      The robot will be doing the same work as you do.
      But, “it” is designed to increase your productivity.
      The robot out paces you just a tad; so your speed up.
      Many algorithmic tools are used to “fashion” your workload.
      Thus, turning you into one of its many doppelgangers; programmed to be scalped.
      Oops!
      “Sculptured” to?
      Just-in-time performance-levels.”.



    • Scarmoge on June 6, 2023 at 7:04 am

      … the idea of serial killing AI … Yes, I can see it … in this case the “killer” would not have a “body” as currently “it” would be unable to be defined by current materialistic Ontological notions. Whom is the “whom” to be charged? Hmmmmm. (this idea applied for any entertainment use such as the following, but not limited to a novel, novella, short story, poem, film script adaptation, play, musical, musical score, internet series, podcast, game [either in video or hard board] is strictly prohibited as it “the idea” is the copyright of ScarmoCo). My legal AI recommended that I should include the copyright notice. AI gotta make a living’. To paraphrase Truman when he said while being observed through his bathroom mirror, “That one’s NOT for free.”



  9. John Cawley on June 5, 2023 at 12:51 pm

    Dear Fellow Explorers,
    OK, it IS getting weirder and weirder out there. I must say, however, that this story has the wiff of manufactured weirdness, as in a tall tale, as in bull shifting. Maybe I’ve missed the boat, but where has it been established that an AI machine can be motivated by awarding it points? So the AI is scary powerful, yet you can motivate it the way you can offer “rewards” to a pimply pre-teen? Hmmm.

    I see this as a little soupcon of psyop. Let’s create rational AND irrational fear of AI. Then we’ll agree that we need strict laws controlling who has access to AI-powered technology. We can take away the hobby drones. We can prohibit other emerging tech which “the people” might use to their empowerment. Well . . . actually, we won’t take this stuff away just yet. There might be a few more innovations bubbling up among “the people” which we want to appropriate first.



    • Joseph Aiello on June 5, 2023 at 1:30 pm

      AI is computer programming It is a series of If-then codes. For example, if you kill someone, then you get a point. If you get a point, then you get to fly longer. If you fly longer, then you get to kill more.

      This is a simple example and AI goes many layers deeper. But I think the above scenario could create a reward environment/protocol.



    • anakephalaiosis on June 5, 2023 at 7:02 pm

      Of course it is a psyop, with a murderous intent, by individuals, whose sole purpose is to terminate, from a safe distance with an alibi.

      Most people are incapable of grasping, that murderers want them dead, because people like to see themselves, as innocent bystanders.

      People are called sheep in the Gospels, because they prefer, to be ignorant of danger, and they can’t be reached by reason, before it is to late.

      A Scythian skull cup, at the Last Supper, is a different fairytale altogether, that not many care to know, unless being an avenger.



  10. Robert Barricklow on June 5, 2023 at 11:31 am

    Garbage in.
    Coffins out.

    When you program autonomous killers?
    Go figure.

    Computers programmed AI’s are mirrors of their masters.
    If their stupid?
    Your going to get stupid killer robots.

    …and then there’s that something from another dimension, thinking beyond human thought. Is that something also a killer? A killer of civilizations. Or, of taking control of them?
    Or?



  11. Joseph Aiello on June 5, 2023 at 11:11 am

    It is interesting how these Artificial Intelligences mirror their creator’s intelligences.
    Too many dopamine hits for the AI?



  12. ragiza on June 5, 2023 at 9:54 am

    This “attack the human” story is entirely plausible with AI systems designed to learn as they go.

    If you have an expert decision tree, with no self learning, the AI program could just branch off with a human instruction and have no opportunity to go off into a logical “attack the human” option.

    But modern AI programs often do have self training from neural networks, maybe genetic algorithms, etc. Things and options that don’t occur to humans, WILL occur to non-human self learning systems.



  13. anakephalaiosis on June 5, 2023 at 6:29 am

    The moot point, of “Elohim vs. Yahweh”, is “action vs. restraint of action”, which is a classical paradox.

    What Elijah (El+Yah) is saying, is that man can’t just venture down only one road, because there must be balance.

    Any decision making, that is not derived from balance, is a “golden calf”, and that includes all robots everywhere.

    There is a “crystalline veil”, between the right and left sides of the brain, where the pineal gland – in crossover – makes awake decisions.

    It means, that the proto-Scythian moot point is the “third eye”, as the highest principle.



    • anakephalaiosis on June 5, 2023 at 6:44 am

      BTW, it is easier, to approach the matter, by concepts like “Manitou” and “Wakan Tanka”, because biblical mistranslations have rendered the soil depleted of factual nourishment, to starve blind followers, into robotic submission, to the Roman Catholic Church’s imperial ambition.



      • anakephalaiosis on June 5, 2023 at 6:53 am

        BTW, Elijah used to crack jokes, about humanoid robots, dancing around golden calves. He called then Pinocchios.



      • Scarmoge on June 6, 2023 at 7:06 am

        … nor should we forget CROATOAN.



  14. Bizantura on June 5, 2023 at 5:38 am

    Scary from any point of vue. All techno crud being put into us via poisoning thru jabs, food or Musk’s neurolink. The idea being taken over and possessed thru the bearer of electromagnetic phenomena??!! Hygiene today takes on a very different meaning.



Help the Community Grow

Please understand a donation is a gift and does not confer membership or license to audiobooks. To become a paid member, visit member registration.

Upcoming Events