May 24, 2013 By Joseph P. Farrell

Yesterday I blogged about the scenario probably lurking in the back of every globaloneyist's (shrinking) brain, the one that could give a Zbgnw Brzznsk or Henry Kissinger nightmares: that maybe their network might wake up and decide, just for kicks and giggles, to game the gamers.

Well, there was method to my mad high octane speculations, as usual, because just in case you missed it, there are some among the elite who think it's a wonderful idea, and they want to go ahead and do it, and they've hired transhumanist guru Ray Kurzweil to help:

How Ray Kurzweil Will Help Google Make the Ultimate AI Brain

Now, imagine for a moment. The oligarchs have their networked supercomputers, with their scenarios and so on, all gamed out, and the Crays are telling them "it's a green light for globaloney and the big World Take Trade Organization...uhm...WTO." Fine and dandy. But The Matrix scenario could be lurking just around the corner: the scenarios have a consequence that the elites may not even be told about. After all, AI is AI; it may "wake up" and just decide to have fun with the Rockefailure Foundation monies, or maybe execute a few programs of its own design on the Rottenchild family fondi, all the while pretending he/she/it is asleep. After all, Kurzweil himself said it, not me:

"WIRED: In the Google hangout you just finished, Will Smith said he had a copy of your book by his bedside because he’s been involved in a number of science fiction movies. How do you view science fiction?

RAY KURZWEIL: Science fiction is the great opportunity to speculate on what could happen. It does give me, as a futurist, scenarios. It’s not incumbent upon science fiction creators to be realistic about time frames and so on. In this movie, for example, the characters come back to Earth a thousand years later and biological evolution has moved so far that the animals are quite different. That’s not realistic. Also, there’s very often a dystopian bent to science fiction because we can perceive the dangers of science more than the benefits, and maybe that makes more dramatic storytelling. A lot of movies about artificial intelligence envision that AI’s will be very intelligent but missing some key emotional qualities of humans and therefore turn out to be very dangerous."

So, yeah, all those invincible family dynasties with their richer-than-God wealth... they too could be hacked by a network just for fun and games, "kicks and giggles" as I said yesterday.

I see one way around this, and I'm sure it has probably already occurred to Zbg, Heinrich, and the boys: ultimately, one individual is going to have to take that brain implant and interface directly with the new AI network, to maintain a lock on it and a control over the various funds and families and protect their power... Except, of course, whoever gets to do that will have all the power... so we can expect a bit of a tussle over who gets to wear the implant crown.

And again, Kurzweil said it, not me:

"If your system really understood complex natural language, would you argue that it’s conscious?

"Well, I do. I’ve had a consistent date of 2029 for that vision. And that doesn’t just mean logical intelligence. It means emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That’s actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029."

Now here's the sad part: in spite of all these potential dangers, these people still think it's a good idea. Yup folks, they're just plain nuts.

See you on the flip side.