CHINA’S AI OUTSCORES HUMANS IN READING COMPREHENSION
Judging from today's blog headline, you may be thinking you're in for another rant on Amairikuhn edgykayshun. Well... maybe. I haven't decided yet. But chances are you're more likely to get, not a rant, but "vague rumblings and complaints" about the consistent way our society, and the corporate-controlled media from whence today's article springs, manages to get almost everything wrong. For example, if one reads the title of this article from Newsweek - shared incidentally by my friend S.d.H. - the problem is that the USSA is losing the war for artificial intelligence to China:
China Winning Artificial Intelligence War Against U.S.
What's the story, according to Newsweek? The story is this:
A neural network model created by Chinese e-commerce giant Alibaba beat its flesh-and-blood competition on a 100,000-question Stanford University test that's considered the world’s top measure of machine reading. The model, developed by Alibaba’s Institute of Data Science of Technologies, scored 82.44, while humans scored 82.304.
Microsoft’s artificial intelligence also beat humans, scoring 82.65 on the exam. But its results came in a day after Alibaba’s, meaning China holds the title as first country to create automation that outranks humans in written language comprehension.
There you have it: the USSA is losing the war because Microsoft was a day late and only a couple of decimal points ahead of China's AI, which in turn was only a couple of decimal points ahead of humans.
Ahh... but wait! There's hope, particularly for politicians, for AI could be the panacea for genuine "bipartisanship" and peaceful "cooperation":
But experts believe such technology can also analyze complex social and political issues, like conflict over limited resources and disagreement over policy. It’s possible, in other words, to imagine a world where politicians from opposing parties use AI to create perfectly negotiated and compromised bills.
There's those pesky "experts" ready to hand again, confirming with their unnamed presence the unique kind of stupidity that has become the mainstream corporate media's newest twist to the fallacy of argument from authority: the argument from anonymous authority. (It's at least nice to know that they're catching up with the alternative research community in this respect.)
And how perfectly wonderful and archly Hegelian it all is! No need to actually read Hegel any more and attempt to understand all that dialectic stuff: just punch the right keys and voila! The Perfect Synthesis and Compromise to end all Syntheses and Compromises will come spitting out the other end in bipartisan nanoseconds. Why, AI could potentially solve all of our arguments, and its omniscient and all-knowing programmers and designers could lead us into a bright and sunny future of the perfect Soviet man of machine-like perfection; imagine, if only Trotsky and Stalin had had an AI around back in the day! Why, they could have been shown the solution and all their petty grievances could have been laid aside. And I freely admit, the temptation is there: we could do away with Congress/Parliament/(fill in the name of your country's ineffectual and highly corrupt legislative body here) and all the narcissistic swamp dwellers that go with it; we wouldn't have to put up with American/Canadian/British/French/German/Spanish (fill in your country here) life having been reduced to a constant political commentary on the next election; we could turn off our cable and never listen to those people again.
Or we could do the obvious... we could actually opt for a human future and a human and humanizing society: we could actually teach in the classroom and teach kids how to read, and to enjoy it; we could give them the humanizing pleasure and joy of having their own thoughts and creativity, and teach them how to reasonably argue their positions. We can turn off the idiot box (I've done so for years and don't miss it one little bit).
If you've not read Isaac Asimov's Foundation series of books, including especially its original trilogy, do so, because he described the dystopian technocratic transhumanist future almost perfectly. It's a future where humans have "opted out" of the picture of responsibility for a highly advanced society run largely by machines and the few technocrats that understand them. Gradually, of course, people - including the technocrats - become lazy and more importantly, stupid. They forget how things work, and are unable to learn how to do so because they've forgotten how to learn. Gradually, society breaks down as the machines come grinding to a halt because humans have opted out of responsibility.
No, the problem isn't that AI reads better, Newsweek. The problem is that humans - particularly in America - read so poorly that they become staffers at Newsweek and write codswallop about "experts" ruminating about AI's being able to solve human conflict and humanize society.
See you on the flip side...
Ahh, SkyNet is being prototyped:
Aviation Week, Jan 15-28, 2018, p.47
“Beijing-based Galaxy Space Technology Co. has larger ambitions: to build a constellation of 650 of the world’s first artificial-intelligence enhanced nanosatellites, called Milky Way, to be lofted by 2022. … Its main product is the AI-enhanced Zhongwei-1, with a 3-meter resolution from an altitude [hah!] of 539 km…
The constellation will include satellites devoted to visible light, infrared, radar, and multispectral coverage. The constellation may use laser communications technology to link its satellites… Galaxy says it can offer a more extensive service by combining “Satellite and internet, artificial intelligence, and big data” to offer “AI deep learning and big data processing analysis.”
For military customers, likely starting with the PLA, Galaxy says it can offer “real-time coverage of global military bases, airborne high-speed targets, and large ships at sea.” It can also provide “artificial intelligence support and big data support for studying … activities of major military rivals … [and] provides a real-time solution for grasping the dynamics of the battlefield.”
Coupling AI and the military mind. What could go wrong?
Joshua [AI]: “Shall we play a game?”
David: [typing] “Love to. How about Global Thermonuclear War?”
Joshua: “Wouldn’t you prefer a nice game of chess?”
David: [typing] “Later. Let’s play Global Thermonuclear War.”
David: [typing] “What is the primary goal?”
Joshua: “To win the game.”
[after the AI plays out all possible outcomes for Global Thermonuclear War]
Joshua: “A strange game. The only winning move is not to play. How about a nice game of chess?”
I think dumbing down the population is the plan of the ruling class, as a way to keep the population under control.
How about hiring teachers who know how to read, do math, know geography…. that’s a novel idea… ah, intellectual atrophy, the way of the West. Also, on Earth there are no experts, only knowledgeable people.
JPF: “It’s at least nice to know that they’re catching up with the alternative research community in this respect.”
Synchronicity! I read the following last night. SF’s response to the new ‘experts’:
Analog SF Magazine, Jan/Feb 2018, p199:
(from “The Reference Library” column by Don Sakers)
“I’m not even going to start on how long SF readers have been concerned about artificial intelligence. Since the dawn of time, at least.
Over the last few years, the tech intelligentsia have begun to worry about AI. Recent developments in the field have convinced the world that the development of true AI is imminent enough to be worth serious consideration (or, in the real world’s delightfully condescending phrase, ‘isn’t science fiction anymore’). The AI research community is all abuzz with the possibility of artificial intelligences taking over the world. (Gosh, why didn’t we think of that?)”
[italics in original]
Also in the same column:
“In Max Tegmark’s concept, life in the Universe goes through three phases. Life 1.0 represents simple organisms that can survive and reproduce, but can’t change either their software (instincts) or hardware (genetics). Life 2.0 – of which human beings are an example – can change its software (instincts) but not its hardware (genetics). And Life 3.0, AI, will be able to change its own software (programming) and hardware (physical form).”
What the darksiders leave out is the true Life 3.0: Advanced humans will be able to change their software (instincts) and their hardware (genetics). Advanced guru-types can already control the ‘firmware’ of the body (heartbeat, pain-response, etc). Advanced healers can already cause cancer remission and tissue regrowth. From there, it is only a small step to inner-control of genetics. There are stories of ‘immortals’ already amongst us. Ascension of the body may even be possible (reference the Ancients within the “Stargate” TV series). Lightsider Life 3.0 …
(Also useful for AI memes: “Here he [Tegmark] lays out models, many drawn from SF, such as Libertarian Utopia, Benevolent Dictator, Protector God, Zookeeper, Descendant, or Luddite Reversion.”)
“Tegmark outlines a number of possible future paths for the future of society:
Libertarian utopia: Humans and superintelligent AI systems coexist peacefully, thanks to rigorously enforced property rights that cordon off the two domains.
Benevolent dictator: Everyone knows that a superintelligent AI runs society, but it is tolerated because it does a rather good job.
Egalitarian utopia: Humans and intelligences coexist peacefully, thanks to a guaranteed income and abolition of private property.
Gatekeeper: An intelligent AI is created that interferes as little as possible in human affairs, except to prevent the creation of a competing superintelligent AI. Human needs are met, but technological progress may be forever stymied.
Enslaved god: A superintelligent AI is created to produce amazing technology and wealth, but it is strictly confined and controlled by humans.
Conquerors: A superintelligent AI takes control, decides that humans are a threat, nuisance and waste of resources, and then gets rid of us; possibly by a means that we do not even understand until it is too late.
Descendants: Superintelligent AIs replace humans, but give us a gradual and graceful exit.
Zookeeper: One or more superintelligent AI systems take control, but keep some humans around as amiable pets.
Reversion: An Orwellian surveillance state blocks humans from engaging in advanced AI research and development. [interesting take on “Dune” and its Butlerian Jihad]
Self-destruction: Humanity extinguishes itself before superintelligent AI systems are deployed.”
sciencemeetsreligion dot org/blog/2017/12/the-future-of-artificial-intelligence-utopia-or-dystopia/
Goals worthy of a naked ape that wants to advance beyond his capability to comprehend what he’s stepping into.
Doesn’t matter much anyway; those in charge know that the current regime is engineering the death of the biosphere on Earth. So they better hurry up and bring out the models to replace nature. They’ll probably last as long as the shelf life of a peanut, with the intelligence of one.
The problem, in the end, is that the AI is only as intelligent as the doofuses that program it. Only a stupid control freak would believe they really think.
Does the AI have in its data the fact that oligarchies are the ruling class in countless nation states? That should also confuse the AI, as nation states are fictions according to the oligarchs. They’re for the stupid humans that still believe in the self-governance illusion (which is quite expensive to maintain, and which the “public” pay for in ways the dumbed-down wouldn’t understand). Does the Chinese AI, or any AI, understand/comprehend any of this?
If not, then it passes the all important oligarchy question, which means the AI is hired. Cheap and dumbed down is a necessary requirement of employment in the empire of oligarchy rule.
Ok. So computers might be able to take standardized tests and score higher than humans. Well, considering the value of standardized tests that isn’t saying much. Computers have a far easier time collecting and processing data than humans do because that’s their function. Big deal!!!
IMHO it’s a direction where Asimov’s Foundation Series just might be the best possible outcome–but not the actual one. The term “artificial intelligence” has NOT lost its significance with me, but when one couples this to the current social meme of “social ignorance”–where the human species is regressed back to a non-thinking animal who follows directions from a higher “intelligence”–I’m left cold. If these gelatin heads have their way there may come a new Dark Age where an auditorium full of engineers will take a year to figure out how to replace a lightbulb when a robot could do it in a moment, provided someone can find the on switch.
Idiocracy – FULL Movie
Idiocracy is a 2006 American satirical science fiction comedy film directed by Mike Judge and starring Luke Wilson, Maya Rudolph, and Dax Shepard. The film tells the story of two people who take part in a top-secret military human hibernation experiment, only to awaken 500 years later in a dystopian society where advertising, commercialism, and cultural anti-intellectualism have run rampant, and which is devoid of intellectual curiosity and individualism. If this scenario sounds familiar to you, it’s because that future is already here.
I’d think it would be not that difficult to create a machine “intelligence” that has a greater reading “comprehension” than the average human. A few months ago it was reported that goldfish have a greater attention span than the average human.
Wake me up when they create an AI that surpasses an intelligent human.
Thanks Tommi for the link, but chess and other games like Go play to the strengths (pun intended) of complex computer algos backed by massive raw computing power (which is exactly what AI is, and no more). They have a very restricted rule set, and with massive computational power you can simply have the machine play out millions upon millions of games, rank them by order of success, and then follow the moves that rank best toward a win. No surprise here.
Language is far more complex and open-ended, though I’m sure soon enough we’ll have late-night AI comedians.
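[For the curious: the "play out millions of games and rank moves by success" idea described above can be sketched in a toy setting. Below is a hypothetical Python illustration using tic-tac-toe rather than chess or Go, and random playouts rather than any real engine's search; it is not how AlphaGo or any commercial program actually works, just the brute-force rollout concept in miniature.]

```python
import random

# Winning triples of board indices for 3x3 tic-tac-toe (board is a flat list of 9 cells).
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play random moves to the end of the game; return the winner or None for a draw."""
    board = board[:]  # copy so the caller's board is untouched
    while True:
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return None  # board full, draw
        board[random.choice(moves)] = player
        w = winner(board)
        if w is not None:
            return w
        player = "O" if player == "X" else "X"

def best_move(board, player, playouts=200):
    """Rank each legal move by its win rate over many random playouts; return the best."""
    moves = [i for i, cell in enumerate(board) if cell is None]

    def score(move):
        trial = board[:]
        trial[move] = player
        if winner(trial) is not None:
            return float("inf")  # immediate win beats any playout estimate
        opponent = "O" if player == "X" else "X"
        wins = sum(random_playout(trial, opponent) == player
                   for _ in range(playouts))
        return wins / playouts

    return max(moves, key=score)
```

With enough playouts this crude ranking plays a decent game of tic-tac-toe, which is exactly the commenter's point: a tiny rule set plus raw repetition, no "understanding" required. The open-ended nature of language offers no such enumerable game tree.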
Wrong’A’mundo mon ami!!! Try this link and see if you change your mind?
It goes into more detail but the main takeaway is that AlphaZero LEARNED it by itself. They gave it the rules to Go (all 6 of them) and that was it. No openings, tactics, or endgame maps. In 41 days it was the strongest Go player, human or computer, on the planet.
Then they let it loose on chess. Within 24 hours it was as strong or stronger than anyone/thing on the planet.
All this while using 3 orders of magnitude (not 3 times) LESS computing power than the brute force computer systems!
In closing I leave you with the world’s best human Go player, Ke Jie, after a streak of 20 wins. It was two months after he had played AlphaGo at the Future of Go Summit in Wuzhen, China.
“After my match against AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly,” he said. “I hope all Go players can contemplate AlphaGo’s understanding of the game and style of thinking, all of which is deeply meaningful. Although I lost, I discovered that the possibilities of Go are immense and that the game has continued to progress.”
Forgot the link to the second part:
RUNE CLOCK & CALENDAR
Great Pyramid is Yahweh the Year
reigning by fourfold principle
of seasonal changes around spear
rotating axis of zodiac circle.
Holy Grail reveals the scroll in sky
as dome Cup of constellations,
with celestial writing in zodiac eye
of starlight cosmic revelation.
Lifespan is by thirty-two concluded
in Rune matrix of ocean deep.
Year is by twelve moons divided
and Day is eight-legged steed.
Odin’s original “horse” compass-timekeeper: https://en.m.wiktionary.org/wiki/eykt
(Open-source software is about sharing source code. Rune code and Bitcoin technology alike. Free and open-source software. FOSS.)