AI AND FAKE NEWS (AND “Q”)
August 30, 2019
The Fake News is in the Fake News once again. Or rather, artificial intelligence may be in the Fake News. This shouldn't be surprising, as we've seen articles about new technologies that can simulate virtually anyone saying or doing anything. But this story, shared by T.S., should sober even the most skeptical:
A couple of days ago I blogged about robots replacing humans, even as clergy in a kind of "robo-religion", and how robots are already being quietly and indirectly taxed. But imagine the news being written by an artificial intelligence (and being read by a robo-"anchor-person" on camera). It's not too far off:
An artificial intelligence system that generates realistic stories, poems and articles has been updated, with some claiming it is now almost as good as a human writer.
The text generator, built by research firm OpenAI, was originally considered "too dangerous" to make public because of the potential for abuse.
But now a new, more powerful version of the system - that could be used to create fake news or abusive spam on social media - has been released.
The BBC, along with some AI experts, decided to try it out.
The model, called GPT-2, was trained on a dataset of eight million web pages, and is able to adapt to the style and content of the initial text given to it.
It can finish a Shakespeare poem as well as write articles and epithets.
At the time, the firm said: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with."
As the article also indicates, the BBC decided to test the public version of the program, and came up with this:
The BBC decided to test the system by inputting the first few sentences of a recent story, appropriately one about AI.
We began with the first sentence of the real story:
Chinese search engine Sogou is creating artificial intelligence lookalikes to read popular novels in authors' voices.
Here's what the text generator came up with to complete it:
According to the researchers, this method of reading is not just the brainchild of Amazon Kindle users. Users of Android devices are also a notable market for fake reading. There are several theories to explain this.
Firstly, the device in a user's hand is different from the tablet or computer they are reading on. Secondly, the device used to access the Internet might be tampered with in some way. A third reason is the user may just copy and paste content directly into their virtual characters.
However, another explanation is that the device does not have voice recognition capabilities at all. If this is the case then voice recognition is more advanced than Google's translation engine. As such, these users are tricked into thinking they are reading from a real author - the book is made to sound authentic as soon as they try and read. (Emphasis in the original)
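The BBC's experiment amounts to seeding a statistical model with a few sentences and letting it continue in the same vein. GPT-2 itself is a large neural network, but the basic idea of continuing a seed text from learned word statistics can be sketched with a toy Markov chain. This is a simplified illustration, not OpenAI's method or API; all function names here are hypothetical:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def continue_text(chain, seed, length=20, order=2):
    """Extend the seed by repeatedly sampling a statistically likely next word."""
    words = seed.split()
    for _ in range(length):
        key = tuple(words[-order:])
        candidates = chain.get(key)
        if not candidates:
            break  # the continuation wandered off the training data
        words.append(random.choice(candidates))
    return " ".join(words)

# Train on a tiny corpus, then complete a seed phrase:
corpus = ("the cat sat on the mat and the cat ran off the mat "
          "and the dog sat on the mat too")
chain = build_chain(corpus)
print(continue_text(chain, "the cat", length=8))
```

A model trained on a dozen words produces the kind of "laughable results" Sharkey describes; GPT-2's leap was training essentially this same next-word guessing game on eight million web pages.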
As the article also notes, however, a professor of computer science at the University of Sheffield, Noel Sharkey, tested the program, and wasn't too impressed:
" I tested the software with numerous headlines and got laughable results. For example, for 'Turkeys hate Christmas', I got that 'Turkey is the only nation in the world that doesn't celebrate Christmas' and a number of unrelated sentences.
"For 'Boris Johnson loves the backstop', it produced incoherent gibberish and some stuff about AI and sport. When I input the statement that 'Boris Johnson hates the backstop', I got a more coherent story that appears to have been pulled off a Google search."
Dave Coplin, founder of AI consultancy the Envisioners, also had a play with the system, inputting the first line of a classic joke: A man walks into a bar...
The suggestion from the AI was not what he was expecting: "...And ordered two pints of beer and two scotches. When he tried to pay the bill, he was confronted by two men - one of whom shouted "This is for Syria". The man was then left bleeding and stabbed in the throat".
But it should be noted once again that these results were obtained with the publicly released, smaller version of the program. We've no idea what the private tests yielded, but with a vastly expanded set of parameters (including search parameters), it's a safe bet that the results were somewhat better. Indeed, there have been stories of AI-generated "science" papers that, despite being pure gibberish, were accepted for publication by serious journals because they "sounded" authentic.
So where am I going with all of this? What's the high octane speculation? The problem, it would seem, is this: in this era of everyone claiming that such-and-such a story is "fake news," one wonders how much of that news actually is being generated, in part, by AIs writing and planting fake stories. Consider only the story of President Trump supposedly saying he wanted to disrupt hurricanes by nuking them. He has denied it, of course, and frankly, when I heard the story I thought it was fake. (What's not fake about the story are the 1950s studies of weather modification using nuclear weapons, but that's another blog for another day.) But whether the story is fake or not, or whether Trump's denials are fake or not, it raises a crucial issue: with enough search parameters and computing power, the sky may be the limit. One possibility is that AIs become the editors of humanly-generated stories, modifying texts according to preset (and probably ideological) parameters and filters. As I pointed out in a blog just a few days ago, some ebooks have already been marketed containing texts not written by their purported original authors. Could it be that the "updaters" of these texts were not humans at all? I suspect the answer to that question is a qualified "yes."
Indeed, in private discussions with a few people I've entertained the idea that the whole "Q" phenomenon represents not only a clever psyop by a team, but one that had to have access to sophisticated, not-publicly-available search techniques and computing power. As a result, a powerful "narrative" can be (and has been) created, complete with its own following. What I believe we've been watching is a kind of "beta test" of sophisticated social engineering techniques to be rolled out in a much more general and dangerous way, one requiring - again - sophisticated search parameters and a lot of computing power to generate carefully crafted propaganda.
So how to counter it? Like everything else, I suspect that the answer is in part analogue, in this case, real research by real humans using antiquated things like card catalogues, books, newspaper articles, cross-checking, indices, and so on.
See you on the flip side...