There's a dimension to the artificial intelligence debate that seldom gets mentioned: mind control. Assuming that artificial intelligence is possible, and assuming that a machine, like Robert Heinlein's "Mike" computer in his celebrated sci-fi novel The Moon Is a Harsh Mistress, "wakes up," would mind control and mind manipulation techniques work on such an entity? I do not know the answer to that question, but I suspect, on the basis of the following article shared by V.T., that it might be possible. At minimum, the article states that, according to recent studies, it is possible to plant malware inside an AI's programming:
The key paragraphs here, for our high octane speculation purposes, are the following:
Neural networks, by their very nature, can be invaded by foreign agents. All such agents have to do is mimic the structure of the network in much the same way memories are added in the human brain. The researchers found that they were able to do just that by embedding malware into the neural network behind an AI system called AlexNet—despite it being rather hefty, taking up 36.9 MiB of memory space on the hardware running the AI system. To add the code into the neural network, the researchers chose what they believed would be the best layer for injection. They also added it to a model that had been trained already but noted hackers might prefer to attack an untrained network because it would likely have less of an impact on the overall network.
The researchers found that not only did standard antivirus software fail to find the malware, but the AI system performance was almost the same after being infected. Thus, the infection could have gone undetected if covertly executed.
The researchers note that simply adding malware to the neural network would not cause harm—whoever slipped the code into the system would still have to find a way to execute that code. They also note that now that it is known that hackers can inject code into AI neural networks, antivirus software can be updated to look for it.
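To make the embedding technique described above a little more concrete, here is a toy Python sketch of the general idea - my own illustration, not the researchers' actual method or anything to do with AlexNet itself. It hides arbitrary bytes in the least-significant mantissa byte of float32 weights, a perturbation so small that a model's outputs are barely affected, which is why such an infection is hard to notice:

```python
# Toy illustration of weight steganography: hiding payload bytes in the
# low-order byte of float32 neural-network weights. Purely a sketch of
# the concept; not the researchers' code.
import struct

def embed_bytes(weights, payload):
    """Hide each payload byte in the least-significant byte of a float32 weight."""
    stego = list(weights)
    for i, b in enumerate(payload):
        packed = bytearray(struct.pack("<f", stego[i]))
        packed[0] = b  # overwrite the low-order mantissa byte
        stego[i] = struct.unpack("<f", bytes(packed))[0]
    return stego

def extract_bytes(weights, length):
    """Recover the hidden bytes from the first `length` weights."""
    return bytes(struct.pack("<f", w)[0] for w in weights[:length])

weights = [0.1234, -0.5678, 0.9012, 0.3456, -0.7890]
secret = b"hi!"
stego = embed_bytes(weights, secret)
recovered = extract_bytes(stego, len(secret))
# Each weight changes only in its lowest mantissa bits, so the
# model's behavior - and any checksum-free inspection - barely notices.
```

Note that, just as the article says, merely hiding the bytes this way does nothing by itself; the payload is inert data until something extracts and executes it.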
So how might this work in the case of an AI program on a computer like Heinlein's "Mike"? Assume a way was found to execute a bit of malware slipped into an AI. Assume also that one of the functions of that malware was to disable an anti-virus program, or even to have that program create viruses rather than remove them. Think of it as the AI version of - oh, let's say - an mRNA quackcine that creates AI "spike proteins" which multiply and replicate with each running of the anti-virus program. Think of it also as the AI version of post-hypnotic suggestion in a human subject: a kind of program that runs on the occurrence of a certain "trigger" - a word, sound, or phrase - to execute certain actions, which the subject is then programmed to forget.
Now imagine that your AI, per an insane suggestion once made by Cold Warriors (and even, if my memory serves me correctly, suggested once by William F. Buckley, Jr.), is in charge of running your strategic nuclear arsenal, and programmed to "launch on warning." One might, by malware, be able to program all sorts of "launches." Imagine your AI is in charge of - per the insane suggestions and actual practice of many others - executing stock, bond, or commodities market trades, and imagine what a bit of creative malware might do: "If x occurs, then execute y trade," or "if the Fed announces a rate hike of one tenth of a basis point, then sell all US Treasuries at a 90% discount," and so on.
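The "post-hypnotic trigger" idea above amounts to a very simple piece of logic: code that behaves normally until a specific condition appears in its input, then fires a hidden action. A deliberately toy Python sketch (every name and the trigger phrase here are hypothetical, invented for illustration):

```python
# Toy sketch of trigger-conditioned dormant code: behaves innocuously
# until a specific phrase appears in its input, then fires a hidden action.
# All names and phrases are hypothetical.
TRIGGER = "rate hike"  # hypothetical trigger phrase

def dormant_payload(event: str) -> str:
    """Return a normal response unless the trigger phrase appears."""
    if TRIGGER in event.lower():
        return "SELL ALL TREASURIES"  # the hidden, conditional action
    return "no action"

print(dormant_payload("Fed holds rates steady"))   # -> no action
print(dormant_payload("Fed announces rate hike"))  # -> SELL ALL TREASURIES
```

The danger, of course, is that until the trigger occurs, such code is indistinguishable from a system doing its ordinary job.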
One could imagine all sorts of similar chaotic scenarios, and now that the malware cat has been let out of the AI bag, perhaps we should reconsider whether or not AI is such a good idea after all. Of course, humans will go ahead and pursue the AI panacea anyway, regardless of warnings. It will take a few such actual nightmare scenarios to wake people up...
See you on the flip side...