THE OPACITY OF AI AND THOSE BOEING PLANE CRASHES

March 14, 2019 By Joseph P. Farrell

Many people have been trying to warn of the dangers of artificial intelligence, my own small voice among them. And now there is a strange article from Zero Hedge that draws attention to those dangers in connection with the Boeing 737 Max 8 airplanes, two of which have crashed - with fatalities - in recent months, prompting several airlines either to ground their fleets or to cancel orders. Mr. E.G. and Ms. K.M. spotted this article, and as one might expect, it prompts some high octane speculation, or rather, perhaps a revisiting of some of my earlier warnings about the increasing reliance on AI:

Is The Boeing 737 Max Crisis An Artificial Intelligence Event?

The problem, according to the article, is the "MCAS patch", an anti-stall safety program on the aircraft, and it's worth noting what the article says about it:

I think the problem is that the Boeing anti-stall patch MCAS is poorly configured for pilot use: it is not intuitive, and opaque in its consequences.

By way of full disclosure, I have held my opinion since the first Lion Air crash in October, and ran it past a test pilot who, while not responsible for a single word here, did not argue against it. He suggested that MCAS characteristics should have been in a special directive and drawn to the attention of pilots.

And there's another problem:

Boeing had a problem with fitting larger and heavier engines to their tried and trusted 737 configuration, meaning that the engines had to be higher on the wing and a little forwards, and that made the 737 Max have different performance characteristics, which in turn led to the need for an anti-stall patch to be put into the control systems.

So the patch was put into the system. But then there's this at the end of the article:

After Lion Air I believed that pilots had been warned about the system, but had not paid sufficient attention to its admittedly complicated characteristics, but now it is claimed that the system was not in the training manual anyway. It was deemed a safety system that pilots did not need to know about.

This farrago has an unintended consequence, in that it may be a warning about artificial intelligence. Boeing may have rated the correction factor as too simple to merit human attention, something required mainly to correct a small difference in pitch characteristics unlikely to be encountered in most commercial flying, which is kept as smooth as possible for passenger comfort.

It would be terrible if an apparently small change in automated safety systems designed to avoid a stall turned out to have given us a rogue plane, killing us to make us safe.

I don't know about you, but I find all this profoundly disturbing, because what is being alleged in the article is something like the following:

1) The MCAS "patch" was "too opaque" for humans; it was "not intuitive";

2) It was therefore kept out of the pilots' training manuals because of this; and

3) The cumulative effect of these decisions means pilots were essentially not in control of their aircraft, and the system itself may have crashed them.
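
To make that opacity complaint concrete, here is a minimal, purely hypothetical sketch of what an MCAS-style anti-stall routine amounts to in logic. The sensor name, threshold, and trim step below are my own illustrative assumptions, not Boeing's actual values or code; the point is only that a routine like this re-trims the aircraft on its own, with no step that tells the pilot what it is doing or why.

# A hypothetical, simplified sketch (in Python) of an MCAS-style anti-stall routine.
# All names, thresholds, and trim increments are illustrative assumptions.

AOA_TRIGGER_DEG = 12.0   # assumed angle-of-attack threshold
TRIM_STEP_DEG = 0.6      # assumed nose-down stabilizer step per control cycle

def anti_stall_step(angle_of_attack_deg, flaps_retracted, autopilot_engaged, stabilizer_trim_deg):
    """Return the stabilizer trim for the next control cycle.

    The routine reacts to a single sensor reading and re-trims the aircraft
    on its own; nothing in it informs the pilot of what it is doing or why.
    """
    if flaps_retracted and not autopilot_engaged and angle_of_attack_deg > AOA_TRIGGER_DEG:
        # Command nose-down trim automatically; a faulty sensor reading
        # would drive this branch again on every cycle.
        return stabilizer_trim_deg - TRIM_STEP_DEG
    return stabilizer_trim_deg

if __name__ == "__main__":
    # With a stuck sensor reporting 20 degrees, each cycle trims further nose-down.
    trim = 0.0
    for _ in range(5):
        trim = anti_stall_step(20.0, True, False, trim)
    print(trim)  # roughly -3.0 degrees after five cycles

Nothing in a loop like that is exotic; the trouble the article points to is that the pilots flying behind it were, allegedly, never told it was running.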

Most people are aware that I don't fly at all. I used to, but even then I was never comfortable doing so. These days, I simply and flatly refuse, and in part it's because of stories like this. And we've already seen similar stories with "self-driving cars". Automated semi-trucks are reportedly already on the road, though we've yet to hear of any accidents involving them or failures of their AI. But whether or not it's true that automated AI trucks are driving down the road, I can say that there's something else that has happened. Not too far from where I live, the local Burlington Northern Santa Fe railroad does run freight trains with no engineer in the cab of the locomotives. There's even a little sign to this effect at a prominent railroad crossing that I often cross, or rather, used to cross until the impact of the sign warning about the automated train finally sank in. Now I drive a block or two out of my way to cross over the tracks on a bridge.

As I've written before about this increasing distance between humans and the machine society, I'd like to apply the article author's template to a completely different area that I've written about before in conjunction with artificial intelligence: computer trading. I've blogged about this phenomenon, and talked about it occasionally on Catherine Austin Fitts' Solari Report quarterly wrap-ups. My concern there has always been that with computers doing most of the trading - often in mere fractions of a second - commodities, equities, and securities markets are no longer genuinely reflective of human trading activity. Apply that template to currency speculation and trading and one has, in my opinion, a recipe for disaster. Here too, the article author's template - the lack of intuitive human characteristics, the opacity of the system, and so on - would seem to apply. The occasional "flash crashes" in markets are a testament to the fact that price no longer functions adequately as a measure by which to evaluate and make decisions, since it can literally crash or spike in a matter of mere seconds, and in this regard I cannot help but wonder if the article's author is not on to something, namely, flash crashes of a very different, literal sort.
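
To make the parallel concrete, here is a toy sketch of the kind of thing I mean; the rule, the numbers, and the scenario are assumptions for illustration, not a description of any real trading system. The point is only that when many automated agents follow the same simple rule at machine speed, a small dip can become a cascade in a single tick - one common account of how "flash crash" behaviour arises.

# A toy illustration: many agents running the same assumed momentum rule.

def shares_to_sell(last_price, current_price, position, drop_trigger=0.02):
    """Dump the whole position if the price fell by more than drop_trigger (assumed 2%)."""
    if last_price > 0 and (last_price - current_price) / last_price > drop_trigger:
        return position   # sell everything, adding to the selling pressure
    return 0

def one_tick_cascade(start_price=100.0, agents=1000, position_each=100):
    """A 3% dip hits; every agent reacts within the same fraction of a second."""
    dipped_price = start_price * 0.97
    return sum(shares_to_sell(start_price, dipped_price, position_each)
               for _ in range(agents))

if __name__ == "__main__":
    # 1,000 identical algorithms generate 100,000 shares of sell pressure in one
    # tick, with no human in the loop to ask whether the move makes any sense.
    print(one_tick_cascade())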

See you on the flip side...