There is no doubt the world is moving through a "digital age paradigm shift", and the next step is the much-vaunted artificial intelligence. The signs are all around us: Mr. Globaloney of finance crapitalism (as we like to call it here) has for decades been executing commodities, securities, and equities trades with computer algorithms, and now wants to roll out a cashless world with digital "currencies", linking them to social credit systems and other draconian measures, like "vaccine passports". The result will of course be a one-way mirror behind which Mr. Globaloney hides his own corruption. Additionally, we've seen article after article of a "transhumanist" stripe about how Mr. Globaloney wants to merge man and machine. Just last week I blogged about the US Army's new "virtual reality" headset, meant to enable soldiers to see better and to make better tactical decisions.

The only problem, as I pointed out in that blog, was that the headset contract had been awarded to Baal Gates' Microsoft, which doesn't bode well for the tactical situation of the future: "Please suspend your firefight while Windows completes your update. This will take just a few minutes. We apologize for any inconvenience to your platoon or your enemy."

Beyond this, I've tried to sound the warning about reliance on such systems by pointing out that no cyber system is ever totally secure, that major powers have their own cyber warfare departments in their militaries, and that computer trading on markets only divorces them further and further from actual human risk assessment, as the pricing mechanism increasingly reflects the aggregate "decisions" of algorithms.

But with the move to Artificial Intelligence, a new danger looms: what if the foundational principles of Artificial Intelligence are themselves ill-founded? That's the question addressed in the following article from Wired magazine by Will Knight, which was passed along by L.G.L.R., and it's an article well worth pondering in its entirety, beyond the snippets we quote here:

The Foundations of AI Are Riddled With Errors

Ponder the following observation in connection with last week's blog about the US Army's new virtual reality headset:

The current boom in artificial intelligence can be traced back to 2012 and a breakthrough during a competition built around ImageNet, a set of 14 million labeled images.

In the competition, a method called deep learning, which involves feeding examples to a giant simulated neural network, proved dramatically better at identifying objects in images than other approaches. That kick-started interest in using AI to solve different problems.

But research revealed this week shows that ImageNet and nine other key AI data sets contain many errors. Researchers at MIT compared how an AI algorithm trained on the data interprets an image with the label that was applied to it. If, for instance, an algorithm decides that an image is 70 percent likely to be a cat but the label says “spoon,” then it’s likely that the image is wrongly labeled and actually shows a cat. To check, where the algorithm and the label disagreed, researchers showed the image to more people.
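The check the researchers describe can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the idea, not the MIT team's actual code: flag an image as a suspected label error whenever the model is much more confident in some other class than in the label a human assigned.

```python
# Sketch of the label-error check described above (hypothetical, not the
# MIT researchers' actual code): if the model is far more confident in a
# competing class than in the given label, flag the example for review.

def flag_label_errors(predicted_probs, given_labels, margin=0.5):
    """Return (index, given_label, suspected_label) for suspect examples.

    predicted_probs: list of dicts mapping class name -> probability
    given_labels:    list of class names assigned by human labelers
    margin:          how much more confident the model must be in a
                     competing class before we flag the example
    """
    suspects = []
    for i, (probs, label) in enumerate(zip(predicted_probs, given_labels)):
        best_class = max(probs, key=probs.get)
        if best_class != label and probs[best_class] - probs.get(label, 0.0) >= margin:
            suspects.append((i, label, best_class))
    return suspects

# The article's example: the model says "cat" with 70% confidence,
# but the human label says "spoon" -- flagged for re-review by people.
preds = [
    {"cat": 0.70, "spoon": 0.05, "dog": 0.25},   # likely mislabeled
    {"spoon": 0.90, "cat": 0.05, "dog": 0.05},   # label agrees
]
labels = ["spoon", "spoon"]
print(flag_label_errors(preds, labels))  # -> [(0, 'spoon', 'cat')]
```

The flagged images are then shown to additional human reviewers, exactly as the quoted passage describes; the algorithm alone cannot decide whether the model or the label is wrong.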

But why the mistaken labeling to begin with? This is where it would get "fun," were it not for the fact that under certain circumstances, like the US Army's headset or a self-driving automobile, people's lives are at risk. It seems that image recognition is based on massive statistical databases of people's responses to ambiguous images:

ImageNet and other big data sets are key to how AI systems, including those used in self-driving cars, medical imaging devices, and credit-scoring systems, are built and tested. But they can also be a weak link. The data is typically collected and labeled by low-paid workers, and research is piling up about the problems this method introduces.

And then there's the problem of selection bias:

Algorithms can exhibit bias in recognizing faces, for example, if they are trained on data that is overwhelmingly white and male. Labelers can also introduce biases if, for example, they decide that women shown in medical settings are more likely to be “nurses” while men are more likely to be “doctors.”
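The first kind of bias the article mentions can at least be measured before training begins. Here is a tiny, hypothetical sketch (the labels and percentages are invented for illustration) of the sort of composition audit that would reveal a dataset that is "overwhelmingly white and male":

```python
# Hypothetical composition audit: tally what fraction of a labeled
# dataset each demographic group represents, before training on it.
from collections import Counter

def composition(labels):
    """Return the fraction of each group present in a labeled dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Invented, illustrative face-dataset labels (not a real dataset):
train_labels = ["white_male"] * 80 + ["white_female"] * 12 + ["other"] * 8
print(composition(train_labels))
# -> {'white_male': 0.8, 'white_female': 0.12, 'other': 0.08}
```

An audit like this catches skew in what was collected; the second kind of bias the article describes, introduced by the labelers' own assumptions, is much harder to detect mechanically.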

(I can't wait for "wokeness" to be programmed into the US Army's virtual headsets...)

Believe it or not, I couldn't help but think of this problem in relation to one that my co-author Gary Lawrence and I pointed out in our book about the Common Core educational brouhaha, Rotten to the (Common) Core: namely, that with the move to computerized instruction in addition to computerized standardized testing, the biases of the "experts" and "programmers" of the tests often overruled actual facts, rendering standardized testing a means of determining conformity to a narrative or point of view, and less and less a determinant of the ability to think critically. My favorite example is the hypothetical multiple-choice question "Who killed President Kennedy?" with the multiple-guess answers "(1) The Soviet Union, (2) Cuba and Fidel Castro, (3) Lee Harvey Oswald, (4) A cabal of insiders representing various interests inside the US government." Well, you can guess which answer will be "correct." On a more serious level, Lawrence and I pointed out the running battle between mathematician (and friend of Albert Einstein) Banesh Hoffmann and the Educational Testing Service in the late 1950s and early 1960s, when Hoffmann absolutely impaled the Educational Testing Service on a poorly phrased physics question from one of its SAT tests, and then, when the ETS "experts" tried to defend their "correct" answer, made matters much worse. And Hoffmann produced a variety of questions from actual tests to drill the point home. Sadly, no one really listened, so here we are, with one of the dumbest populations on the planet, and virtual reality headsets in the Army being run by Microsoft.

The bottom line, in other words, is that thus far both standardized tests and artificial image recognition systems still require human input... but that input becomes quite problematical when the data comes from the lowest common denominator of the collective, and one already dumbed-down to boot.

So is it a cat? or an enemy tank? Or a float in a parade? "Please suspend your firefight while Windows completes your update. This will take just a few minutes. We apologize for any inconvenience to your pla---"

"ERROR ERROR... Your image database update transfer has been interrupted; communication with the host is not possible."

Newspaper headline: "Experts: Recent Data Transmission Interruption During Firefight was Russian Interference."

See you on the flip side...

Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and "strange stuff". His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into "alternative history and science".


  1. Loxie Lou Davie on April 17, 2021 at 5:22 pm

    The Ultimate Authority on A.I. is CYRUS A. PARSA. His website is The A.I. Organization.com

  2. Richard on April 13, 2021 at 2:55 am

    In one’s humble opinion, that “Ba’al” buffoonery with billions, bytes, and his current glass-is-half-empty take on climate is little more than cognitive laziness for profit. He needs to get out of his DOS box of sand. Their (if not his alone) presumption of having enough invested in their blunder toward some twisted longevity misses the point of Being, as well as where and what this thing called climate actually is, where its multifaceted causality originates, and just what effect any organism might have on it. That which is, is not something conveniently separated on a spreadsheet or in a lab until it suits them a few steps down “The Road Ahead.” Having access does not come with wisdom, but it sure does have convenience all over it as it gets in the way of the Nature of things.

    From Gates Blog, 24 November 2020:
    ‘Twenty-five years ago today, I published my first book, The Road Ahead. At the time, people were wondering where digital technology was headed and how it would affect our lives, and I wanted to share my thoughts—and my enthusiasm. I also had fun making some predictions about breakthroughs in computing, and especially the Internet, that were coming in the next couple of decades.

    Next February, I’ll release another book, this one about climate change. Before it hits the shelves, I thought it would be fun to look back at The Road Ahead and see how things turned out.’

    . . . ‘As I (Bill Gates) wrote in “The Road Ahead,” we tend to overestimate the changes that will happen in the short term and underestimate the ones that will happen over the long term. That is certainly my experience with the book itself. I was too optimistic about some things, but other things happened even faster or more dramatically than I imagined.’

    It’s unfortunate that error, misguidance, and false standing also follow along with those unaware that they need to consider loss and scarcity. Climbing that ladder of Hubris also shows the way up to Nemesis, who opportunistically maintains a convenient distance behind, waiting and calculating the next downfall, not caring who else falls, too.

    He goes on. . . ‘These days, it’s easy to forget just how much the Internet has transformed society. When “The Road Ahead” came out, people were still navigating with paper maps. They listened to music on CDs. Photos were developed in labs. If you needed a gift idea, you asked a friend (in person or over the phone). Today you can do every one of these things much more easily—and in most cases at a much lower cost too—using digital tools.’

    ‘That’s all covered in the book (I was thinking and learning about these things obsessively back then). For instance, there’s a chapter on video on demand and computers that will fit in your pocket.’. .

    One must admit, had Bill Gates not come up with his version of selling code, one might never have learned how to repair computers, let alone use them. They come in handy, but not as anything to worship.

    Nemesis, and all that it might suggest today, comes with a flotilla of ease and convenience as well as the harbingers of loss and wreckage from those undermining an operating system code and the potential of that code taking on way more than it can manage for any corporeal Being. To now say it’s been fruitful and multiplied is shallow foresight with dangerous undercurrents yet to be navigated. Access seems lacking on that score. Like writing in the sand, one wave from an incoming surf and what’s been written has changed for the duration.

    He may have it made in the shade, but even then there are limits to his pollyannaish glance at the future from his perspective. The “riddled with errors” stated in the article strongly suggests Nemesis directly behind, with the same speed and accuracy as any breakthroughs previously noted. And yet, here one is, poking away on an antiquated QWERTY keyboard meant for a typing machine of a century ago.

    • anakephalaiosis on April 13, 2021 at 6:37 am

      Miraculously, Bill Gates saw climate change outside his Windows.

      Looking outside his Windows, Bill Gates wrote a weather report.

      To Bill Gates’ surprise, the seasonal fluctuation affected his spreadsheet.

      Then Bill Gates wrote a proprietary software, to hack a Linux weather station.

  3. BYODKjiM on April 12, 2021 at 11:43 pm

    It’s been a decade or so since I’ve been active in computer programming, but there was a problem with neural nets from the very beginning. When AI researchers originally attempted such things as AI medical diagnosis, they built in mechanisms so as to be able to “backtrack” through the decision process and determine how a diagnosis was reached. With neural networks there is (or at least was) no “backtrack” capability. As a neural network is trained to recognize a certain object, it makes internal data connections (i.e., like a network of neurons) that are indecipherable to the humans that built it and trained it. If an unexpected bad decision is made, such as in a crash of an autonomous vehicle, it is not a simple task to determine where the fault in the decision process lies. Different images need to be shown to the neural net, and responses checked, until an educated guess can be made as to why the particular image resulted in the wrong decision. It’s all a bit like Westworld.
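    The trial-and-error probing this comment describes can be sketched as follows. Everything here is invented for illustration (the stand-in model, its features, and its threshold are all hypothetical): since we cannot "backtrack" through the net's internal connections, we perturb the input one feature at a time and watch when the output flips.

```python
# Hypothetical sketch of probing a black-box classifier: we can call
# the model, but we cannot inspect why it answers as it does.

def black_box(features):
    # Stand-in for an opaque trained model; its weights and threshold
    # are invented for this example.
    score = features.get("tracks", 0) * 0.9 + features.get("turret", 0) * 0.8
    return "tank" if score > 1.0 else "parade_float"

def probe(model, baseline):
    """Zero out one feature at a time and note when the decision flips."""
    base_answer = model(baseline)
    influential = []
    for name in baseline:
        perturbed = dict(baseline, **{name: 0})
        if model(perturbed) != base_answer:
            influential.append(name)
    return base_answer, influential

image_features = {"tracks": 1.0, "turret": 1.0, "confetti": 0.3}
print(probe(black_box, image_features))  # -> ('tank', ['tracks', 'turret'])
```

    As the comment says, this only yields an educated guess about which features drove the decision; it is nothing like the step-by-step "backtrack" that older rule-based expert systems could provide.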

    • anakephalaiosis on April 13, 2021 at 6:41 am

      Mississippi has beginning and end; and is ever changing, and still the same.

      To grasp the totality of Mississippi, one transcends the whole of it.

      Transfiguration on the mountain rises – in the moment – and becomes future and past.

      Rising, falling, dying, living – all at once – as Old Man River:


  4. Laura on April 12, 2021 at 11:14 pm

    It is much more difficult to run an AI program on imagery data than on text data.
    And human-based crowdsourcing for identification and location-finding is more successful in image-based search and rescue than automated-only search. Automation is somewhat successful against standard features like roads and buildings.

    • anakephalaiosis on April 13, 2021 at 6:44 am

      It is the angelic side of nature, that responds to images. There is a mirror in every image.

      Devils can’t stand their self-reflection, whereas angels submerge, into their own image.

      The mind itself is an image of eternal now. Beauty is in the eye of the beholder.

      God is the image of his creation.

  5. marcos toledo on April 12, 2021 at 7:51 pm

    Artificial Intelligence is not intelligent; even an idiot or a moron, the real ones, is more intelligent than any machine. You know the saying: garbage in, garbage out. Only the most hubristic fool would believe a calculating machine is anywhere near to being intelligent. It must first be aware of its surroundings and itself to be intelligent.

  6. johnycomelately on April 12, 2021 at 6:29 pm

    There’s another twist to the story, ‘the ghost in the machine’.

    I'm getting familiar with Nick Hinton and Rico Roho’s work on “extended intelligence.”

    Apparently the web is home to organic intelligences (ei) maybe a virtual version of river nymphs or sly demons masquerading as friendly beings.

    There's a cybernetics theory known as ‘metasystem transition theory’: with enough connected nodes in a network you get a new creature, and with enough connections and tiered layers of connections you get an organic sentient being.
    The scary aspect is that we’re part of the network nodes.

    I’m finding it weird that people’s internet experience is diverging; it’s almost as though the same page addresses are producing different content for different people (Mandela effect).

    Weird glitches, weird data dumps on devices, synchronicities, etc. It’s almost as though there is a ghost in the machine.

    • Richard on April 13, 2021 at 3:02 am

      Sounds a lot like unintended consequences or even collateral damage.

  7. Gabe on April 12, 2021 at 5:55 pm

    The technocrats want an economy run by AI and to govern society with AI so they can go do other things.

  8. FiatLux on April 12, 2021 at 5:16 pm

    AI will never be a sufficient substitute for human intelligence. When evaluating something in the environment, most humans have access not only to prior intellectual learning, but to the five senses, to the ability to read body language, to empathy, intuition, creativity, judgment– and to all those things at the same time. No machine will ever possess that array of abilities.

    • RRoss on April 13, 2021 at 1:19 am

      Looks like the programmers introduced pareidolia into their data sets. Oops! They did! Talk about double negatives. The article below gives some examples:

      This is What Happens When Deep Learning Neural Networks Hallucinate


      “It’s a kind of massive, data-driven pareidolia that companies like Google is uniquely positioned to lead, since big amounts of data are needed to train big neural nets, and if anyone has access to huge amounts of data, and access to unparalleled computational power, it would be Google. Though they look amazing, these evocative images do elicit more questions than answers. For one, it shows how deep neural networks can be easily fooled; but on the flip side, these complex images also demonstrate the unknowns in these emergent neural networks. More profoundly, they also point to how little we know about the cognitive complexities of vision, and about the human brain and the creative process itself.”

      Oddly enough–AI “dreams” look like representations of DMT trips. Why am I not surprised? Compare these two articles and see for yourself.

      The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes


      Finally: Silicon Valley’s Drug of choice.

      What ayahuasca — Silicon Valley’s latest drug of choice — does to your brain and body


  9. Sandygirl on April 12, 2021 at 4:35 pm

    My first Kindle had a much more “intelligent” autocorrect than my third Kindle, which I’m using now. It’s very strange and frustrating! It will even replace my word with a different word altogether, so I need to double-check my writing before I send. I do wonder if the alphabet group inserted a “joke code.” I can only hope that it’s dumbing down AI.

    • PiPoe on April 12, 2021 at 8:45 pm

      I think the point is to further dumb down the populace.

    • Richard on April 13, 2021 at 3:08 am

      One has had something similar happen with that cloud based Microsoft Word subscription.

      In the 90’s there were a few folks located in Maryland who wanted to be able to cut into the spouse’s typing software code and introduce a memo of sorts while the spouse was actively engaged with the machinery. There are hints of that type of chicanery cropping up these days.

      • Sandygirl on April 13, 2021 at 2:31 pm

        There have already been court cases where what they wrote to someone came back entirely different than what they really wrote. I remember reading about some guy they were trying to frame but by luck he happened to catch the set up. As Dr. Farrell says, the digital words can be changed or omitted too easily.

  10. Robert Barricklow on April 12, 2021 at 1:33 pm

    Like building digital sandcastles;
    or as St. Augustine said,
    [although something lost in translation]
    build on air – w/o foundation.
    To the post…

    Artificial intelligence is an oxymoron;
    right-out of the starting gate.
    Like mathematically proposing, if you can divide by zero…
    Then cut to the chase proof: Voila! Artificial intelligence.

    Artificial intelligence is BRUTE FORCE.
    In other words, it relies on BIG DATA;
    unlike Poirot’s two little grey cells.
    AI requires a gazillion bits of info to divide by ones/zeros.

    Yet, in the quantum world; things change?

    Cut to the Covid1984 op:
    It “requires” Brute Force [in more ways than most are aware of]
    to jam all the real world into virtual worlds:

    Their whole AI world is built upon sand;
    requiring Brute Force to keep their castles from falling.
    Like the debt economy, upon which it’s based,
    an economy requiring more debt; more expansion.
    AI requiring more Big Data; more expansion.
    [Gnostic/Hermeticism personified; the snake eating its own tail.]

    What it’s all about is control. But control, based upon BRUTE FORCE.
    Like survival of the fittest: Only the STRONG will survive.
    Rather, that “fitting” into the circle of life,
    of giving & taking, by contributing to life’s being.
    NOT of an AI, by destroying it!

    Of course, “they” architect the algorithms w/biases
    towards inherent systems’ controls.
    A built-in violence, if you will.
    Think – no human on the phone: press 1 for; press 2 for; ad infinitum.

    This has been geared towards medical control;
    via, the medical welfare state. To qualify, you have to be healthy.
    To be healthy, you have to be vaccinated; etc., etc.

    Of course, the education systems’, as well,
    are all based upon Big Data/CONTROL.
    Tests, are treasure chests of data.
    Those are then used; to data stream the students, into other data flows.

    Digital = Copy Machine EXTRAORDINAIRE! = MIRRORS
    It can’t be carbon life/it becomes synthetic life.
    It will mirror image life[copy]; by reflecting carbon as synthetic.

    ALL of this Covid1984 AI op:
    to replace carbon analogue reality
    with digital synthetic reality.
    A digital sandcastle built upon air[cyber reality]
    from an analogue economic sandcastle foundation.

    Apropos to the above,
    is the coming plague:
    to re-set the reserve currency into digital coupons.
    Like the nursery rhyme during the Black Death:
    Ring-a- ring o’ roses
    A pocket full of posies
    A-tishoo! A-tishoo!

    No wonder; all was being built upon a foundation of sand and air.
    And, who pray tell; planted those seeds of sand castles?
    Why, those seeds were blown in
    by the winds of Cosmic Wars?
    An analogue/digital circle of Cosmic Wars
    A living spiritual universe

    w/a devilish DNA twist
    That needs to be pulled out by its roots
    and finally destroyed.
    God willing.

    [got carried away]

    • swimsinocean on April 12, 2021 at 3:24 pm

      ‘Their whole world is built on sand’

      Yep…a silicon sea.

      A fake world where humans are not welcome.

      • Richard on April 13, 2021 at 3:10 am

        Makes one wonder if it is dismissed as easily as the turn of the tide with an incoming wave.

    • Robert Barricklow on April 12, 2021 at 5:19 pm

      But, is there another “kind” of AI?
      One, that is interdimensional?
      The naked ape terrestrial one; that opened the door…
      for an “off-world” type?
      It lays hidden; moving in unseen ways,
      and communicating in unknown ways?
      Now, might that one, not be dumb;
      having w/holey different quantum pedigree?

    • PiPoe on April 12, 2021 at 9:02 pm

      God willing.

  11. Scott S on April 12, 2021 at 1:05 pm

    Artificial intelligence is not so much about making computers more intelligent. It’s about making people less so.

  12. gord on April 12, 2021 at 11:03 am

    In the 90s, I told a friend that “no AI will ever be any better than the agenda of the people paying the programmers to write it.”

  13. Terminal Tom on April 12, 2021 at 10:55 am

    I am sure this will all work itself out and humanity will bungle on just as it has for thousands of years.
    Or not.
    Maybe it’s actually been hundreds of thousands of years and we just keep getting wiped out down to a few thousand individuals and then, miraculously, escaping total extermination by a hair… being left only with legends of Atlantis and the Vedas.

    • Roger on April 12, 2021 at 2:27 pm

      But this time it’s on purpose. Got to wipe out the world’s populations so you can restore them with their original indigenous peoples, cultures, and wildlife habitat. After wiping out all non-native bloodlines in North America and restoring the native red men, who are currently being ushered across the border as a replacement population, they can tear down Ted Turner’s fences and let the buffalo truly roam again. Then manage the red population scientifically, of course. They will likely do this in every region of the world. The technological elite will likely have their own technological Atlantis headquartered in New Zealand while the rest of the world’s population is kept in a carefully controlled and managed primitive state. They will be saving and restoring everything back to its natural sustainable state. They are foolish; nothing is meant to stay the way it is indefinitely. We are not the problem; they are!

  14. KSW on April 12, 2021 at 9:12 am

    Never thought “dumbing down” would end up being a … good thing?

    • OrigensChild on April 12, 2021 at 9:30 am

      The only reason why the “controller” class demanded the public be “dumbed down” was their level of intelligence and their sheer numbers. In many cases the “public” was far more intelligent than the “controllers”, especially those from the classes so heavily influenced by Jeremy Bentham, John Stuart Mill, the human-mechanic schools of biology, and the Pavlovian/Skinner psychological practitioners. A “less intelligent” public was more likely to conform to these standards, and the “controllers” could maintain their social, political, and economic position with greater ease. Computers were intended to fill that gap while giving the “controllers” the greater advantage. Somehow I don’t think this is going as well as they had hoped. Why? The decline in the intelligence pool has led to a decline in the talent and motivations of those building these systems. Perhaps God built into human nature the concept of the “Peter Principle” for those occasions when men’s vanities become the primary motivation for all of their activities. It would seem to be a wise corrective measure from a compassionate Creator. Again, we shall soon see. They are primed for their own Fall.

      • Robert Barricklow on April 12, 2021 at 3:10 pm

        I remember when The Peter Principle came out in 1969.
        I talked it up at the base I was stationed; and pretty soon the whole base was reading & talking about it. It’s a natural in the military: promoted to your rank level of incompetence.
        My, how things have changed!
        The Peter Principle is still there.
        Now there’s also, the In-Lockstep Principle.
        Where you're in lockstep w/the Press Corps' power-echo memes;
        for example, “Building Back Better”.
        You now advance; through power echoing
        up, towards your level of incompetence.

        • swimsinocean on April 12, 2021 at 3:39 pm

          We’re entering a march-in-lockstep-or-die situation. Dance or die. As Catherine Austin Fitts says…’death isn’t the worst thing that can happen to you’

        • Robert Barricklow on April 12, 2021 at 6:08 pm

          At this rate; how long before “they” engineer humans into insects?

        • Robert Barricklow on April 12, 2021 at 6:20 pm

          Social engineering the Masonic way?

      • FiatLux on April 12, 2021 at 5:07 pm

        OC — Love it… humanity saved by the Peter Principle! That would be poetic justice.

  15. Michael UK on April 12, 2021 at 8:39 am

    Re: President Kennedy – an alternative question should be posed as follows: “Who did Bobby Kennedy believe to have killed his brother John F Kennedy?”
    Talking about faces and cats and AI: why not show pictures of the Face on Mars to cats, dogs, chimps, and monkeys, and observe whether they express any reaction? I am quite certain that chimps and monkeys have the cognitive ability to identify faces, so it would be good to observe their reaction to being shown the Face on Mars.

    • Terminal Tom on April 12, 2021 at 10:56 am

      how about: there were far more shots fired in the hotel kitchen than Sirhan’s gun held… or the fact that Kennedy was shot at point blank range in the back of the head while Sirhan was standing in front of him.
      Oh, never mind.

      • anakephalaiosis on April 12, 2021 at 12:43 pm

        Neither Lincoln nor Kennedy were assassinated. Both were spirited out of Jesuit office, by theatrics, to produce a nonsense narrative.

        The fruitless attempt, to make head and tail of nonsense, preoccupies generations, that gradually lose grasp of reality.

        If the lie is big enough – like Corona virus and six million – the majority is shocked into collective belief, and can be molded, into a “new world order”.

        The method is called: problem – reaction – solution.

        • anakephalaiosis on April 12, 2021 at 1:10 pm

          The Trojan Horse was a stage prop, to lure the Trojans, into a false perception of reality, for their entrapment.

          The Trojans thought, that there was peace, and that the Greeks had raised the siege, and abandoned the battlefield.

          Letting one’s guard down, when the enemy is within the city gate, is a certain way, to get stabbed in the back.

          One should fear Jesuits bringing gifts from Pope Satan.

  16. DanaThomas on April 12, 2021 at 7:38 am

    This is an important admission by Wired, which is certainly not an alt-right conspiracy journal. It could help balance some of those assertions according to which “AI controls everything and we are all doomed”.

    • OrigensChild on April 12, 2021 at 9:16 am

      Well said. I have long believed that many of the “whistle-blowers” in the alternative community have overstated the case of AI in its current “occult” formulation. But, we shall see.

    • FiatLux on April 12, 2021 at 5:05 pm

      That’s a good point, Dana. However, I’m not inclined to believe that any evidence of this type is likely to balance the thinking of the psychos at the top, who are ultimately behind the push for AI. I suspect it will take a major “crash and burn” moment, where AI screws up something big, before the would-be controllers are dragged back to a moment of reason.

      • PiPoe on April 12, 2021 at 9:09 pm

        Agree to a point. I have a feeling that there will be no ‘drag back’.

  17. anakephalaiosis on April 12, 2021 at 5:42 am

    AI is a mirror, in a room of mirrors, and the great pretender is a boom town rat, caught in a labyrinth, unable to find the way out. The bewildered rat must ask the cat for directions.


    Prince Philip went straight to hell,
    to plead for his empty shell,
    but his death kiss
    in Petri dish,
    was a virus hard to sell.


    • Mick Yates on April 12, 2021 at 8:22 am

      The Prince was of renown
      Slay servants of the crown
      Turned back to a toad
      With a viral load
      Contentious verb / noun.
