September 27, 2018 By Joseph P. Farrell

Science fiction - both literary and in film and television - is replete with explorations of the idea of fully automated warfare, and very little of it is reassuring. Imagine a world in which machines do all the fighting and take all the decisions, leaving human input out of the equation, either completely, or completely enough that it makes little difference. There was, for example, an episode of the original Star Trek series in which the intrepid crew of the Enterprise finds a world that has been fighting such an automated war for centuries with a neighboring planet. As the episode unfolds, they discover that the two planets signed an accord whereby computer simulations ran "attacks" on each other's cities and "casualty lists" were drawn up; if one had the misfortune to be on one of those lists, one had to report to a killing station to be disintegrated. Thus the war went on and on, and if either side failed to have its casualties "report," the simulated war was off and the real one was on again - with thermonuclear bombs flying, cities reduced to real rubble rather than imaginary rubble, and real casualties.

Well, according to this short article from Mr. V.T., it's much closer to reality than we think:

Has The Era Of Autonomous Warfare Finally Arrived?

Ponder these three paragraphs carefully:

The global arms race for the latest weapons of war is a naturally escalating cycle of countries pursuing ways to dominate the battlefield of the future. Increasingly, that battlefield is a matrix of soldiers with traditional weapons, robots, drones and cyberweapons. Until this point, command over this matrix has ultimately been in the hands of humans. Now, however, many of the trends in artificial intelligence-driven autonomy are enabling data collection, analysis and potentially combat to be done by algorithms.

Another key signpost has entered the roadmap toward a future of autonomous systems capable of engaging in combat without human oversight. The U.S. military announced the first ever successful unmanned aerial “kill” of another aircraft during a previously unreported training exercise.
However, previous research has indicated that robotics/A.I. is not yet up to even the most basic ethical tasks, yet its role in weapons systems continues. (Emphasis added)

When I read this, I could not help but indulge in some high octane speculation based on that episode from the original Star Trek series. Imagine, for example, that the two warring planets in question had allowed AIs to program their simulated wargame attacks and "casualty lists." At some point, why would the AIs not conclude that the whole exercise was a bit futile, simply draw up casualty lists that wiped out all the "biologicals" on both sides, and let the machines run everything? What would have happened?

I suspect that, as in the original episode, a strange kind of reason would have prevailed and led the creators of the crazy system to revert to their old ways of fighting real wars with real weapons, creating real rubble and real casualties, because the alternative was far worse. The other alternative, of course, was to come to a real agreement in the face of the threat of a real war, and learn to coexist.

Which brings me to the real lesson of that episode, and it's a timely one: in a culture where machines are taking the decisions in a war context, human responsibility, both for the decisions and for their consequences, is abdicated. Like some of you out there, I've suspected that some nascent or incipient form of artificial intelligence is already among us, very possibly already lurking in the bowels of the American military-industrial-intelligence complex. The insanity evident in American "foreign policy," and the dangerous adventurism it has displayed since 9/11, might not be entirely of humanity's own making; it may very well be the result of faulty information, faultily analyzed, presented to humans to act upon by overgrown computer modeling and projections. I've said many times that my problem with modern computer-driven markets, be they equity or commodities markets, is that with the increased and all-pervasive role of algorithmic trading, the markets are no longer reflective of genuine human activity.

Now translate that to the military and geopolitical sphere, and you get the idea... Think of it as "algorithmic geopolitics."

See you on the flip side...