
CHINA’S AI OUTSCORES HUMANS IN READING COMPREHENSION

January 22, 2018 By Joseph P. Farrell

Judging from today's blog headline, you may be thinking you're in for another rant on Amairikuhn edgykayshun. Well... maybe. I haven't decided yet. But chances are you're more likely to get, not a rant, but "vague rumblings and complaints" about the consistent way our society, and the corporate-controlled media from whence today's article springs, manages to get almost everything wrong. For example, judging by the title of this article from Newsweek - shared, incidentally, by my friend S.d.H. - the problem is that the USSA is losing the war for artificial intelligence to China:

China Winning Artificial Intelligence War Against U.S.

What's the story, according to Newsweek? The story is this:

A neural network model created by Chinese e-commerce giant Alibaba beat its flesh-and-blood competition on a 100,000-question Stanford University test that's considered the world's top measure of machine reading. The model, developed by Alibaba's Institute of Data Science and Technologies, scored 82.44, while humans scored an 82.304.

Microsoft’s artificial intelligence also beat humans, scoring 82.65 on the exam. But its results came in a day after Alibaba’s, meaning China holds the title as first country to create automation that outranks humans in written language comprehension.

There you have it: the USSA is losing the war because Microsoft was a day late and only a fraction of a point ahead of China's AI, which in turn was only a fraction of a point ahead of humans.
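Just to underline how thin those margins are, here is a quick back-of-the-envelope check in Python, using nothing but the scores as Newsweek reports them (the variable names are mine, not Alibaba's or Microsoft's):

# Scores on the Stanford reading test, as reported by Newsweek
human_score = 82.304
alibaba_score = 82.44
microsoft_score = 82.65

print(f"Alibaba's margin over humans:    {alibaba_score - human_score:.3f} points")
print(f"Microsoft's margin over Alibaba: {microsoft_score - alibaba_score:.3f} points")
# Prints roughly 0.136 and 0.210: fractions of a single point either way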

Ahh... but wait! There's hope, particularly for politicians, for AI could be the panacea for genuine "bipartisanship" and peaceful "cooperation":

But experts believe such technology can also analyze complex social and political issues, like conflict over limited resources and disagreement over policy. It’s possible, in other words, to imagine a world where politicians from opposing parties use AI to create perfectly negotiated and compromised bills.

There are those pesky "experts" again, ready to hand, confirming with their unnamed presence the unique kind of stupidity that has become the mainstream corporate media's newest twist on the fallacy of argument from authority: the argument from anonymous authority. (It's at least nice to know that they're catching up with the alternative research community in this respect.)

And how perfectly wonderful and archly Hegelian it all is! No need to actually read Hegel any more and attempt to understand all that dialectic stuff: just punch the right keys and voila! The Perfect Synthesis and Compromise to end all Syntheses and Compromises will come spitting out the other end in bipartisan nanoseconds. Why, AI could potentially solve all of our arguments, and its omniscient programmers and designers could lead us into a bright and sunny future of the perfect Soviet man of machine-like perfection; imagine, if only Trotsky and Stalin had had an AI around back in the day! Why, they could have been shown the solution and all their petty grievances could have been laid aside.

And I freely admit, the temptation is there: we could do away with Congress/Parliament/(fill in the name of your country's ineffectual and highly corrupt legislative body here) and all the narcissistic swamp dwellers that go with it; we wouldn't have to put up with American/Canadian/British/French/German/Spanish (fill in your country here) life having been reduced to a constant political commentary on the next election; we could turn off our cable and never listen to those people again.

Or we could do the obvious... we could actually opt for a human future and a human and humanizing society: we could actually teach in the classroom and teach kids how to read, and to enjoy it; we could give them the humanizing pleasure and joy of having their own thoughts and their own creativity, and teach them how to reasonably argue their positions. We could turn off the idiot box (I've done so for years and don't miss it one little bit).

If you've not read Isaac Asimov's Foundation series of books, especially its original trilogy, do so, because he described the dystopian technocratic transhumanist future almost perfectly. It's a future where humans have "opted out" of responsibility for a highly advanced society run largely by machines and the few technocrats who understand them. Gradually, of course, people - including the technocrats - become lazy and, more importantly, stupid. They forget how things work, and are unable to learn how to do so because they've forgotten how to learn. Gradually, society breaks down as the machines come grinding to a halt, because humans have opted out of responsibility.

No, the problem isn't that AI reads better, Newsweek. The problem is that humans - particularly in America - read so poorly that they become staffers at Newsweek and write codswallop about "experts" ruminating on AI's ability to solve human conflict and humanize society.

See you on the flip side...