In case you missed it, Mr. B.B. and many other regular readers here shared this story to make sure you didn't. And this is such a bombshell that its implications and ramifications are still percolating through my mind. The long and short of it is, Google's "artificial intelligence" program-search engine no longer requires quotation marks around it:

The mind-blowing AI announcement from Google that you probably missed

And just in case you read this article and are still so shocked that you're "missing it," here it is in all of its frightening-implications glory:

Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.

This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.

All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.

The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.

Google Translate invented its own language to help it translate more effectively.

What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.

Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.
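To make the article's contrast concrete, here is a minimal, invented sketch of what "phrase-based translation" amounts to in practice; the English-to-French table entries below are illustrative only, not drawn from any real system:

```python
# A minimal, invented sketch of phrase-based translation: roughly
# equivalent phrases are looked up in a fixed table, with no grasp of
# linguistic structure and no way to guess at unknown words.

PHRASE_TABLE = {
    "good morning": "bonjour",
    "thank you": "merci",
    "the cat": "le chat",
}

def phrase_translate(sentence):
    """Greedily substitute known phrases; unknown words pass through untouched."""
    out = sentence.lower()
    for src, tgt in PHRASE_TABLE.items():
        out = out.replace(src, tgt)
    return out

print(phrase_translate("Good morning, thank you"))    # bonjour, merci
print(phrase_translate("Good morning, dear friend"))  # bonjour, dear friend
```

Notice that anything outside the table simply passes through untranslated, which is exactly the limitation the article describes.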

Now, if you read closely, right after the closing remarks in the quotation above, the author of the article, Mr. Gil Fewster, added this parenthetical comment: "I've added a correction/retraction of this paragraph in the notes." The correction/retraction comes in the form of a comment at the end of his article, to which Mr. Fewster directs the reader, from a Mr. Chris MacDonald, who stated:

OK, slow down.
The AI didn’t invent its own language, nor did it acquire creativity. Saying that is like saying calculators are smart and one day they’ll take all the math teachers’ jobs.

What Google found was that their framework was working even better than they expected. That’s awesome, because when you’re doing R&D you learn to expect things to fail rather than work perfectly.
How it works is that, through all the data it’s reading, it’s observing patterns in language. What they found is that if it knew English-to-Korean and English-to-Japanese, it could actually get pretty good results translating Korean to Japanese (through the common ground of English).

The universal language, or the interlingua, is not its own language per se. It’s the commonality found between many languages. Psychologists have been talking about it for years. As a matter of fact, this work may be even more important to Linguistics and Psychology than it is to computer science.

We’ve already observed that swear words tend to be full of harsh sounds (“p,” “c,” “k,” and “t”) and sibilance (“s” and “f”) in almost any language. If you apply the phonetic sounds to Google’s findings, psychologists could make accurate observations about which sounds tend to universally correlate to which concepts. (Emphasis added)
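Mr. MacDonald's "common ground of English" point can be sketched with two toy dictionaries. Note the hedge: GNMT does not literally route text through English (Google's claim is that it learned a shared internal representation), so this is only an analogy, and the word lists are invented:

```python
# A toy version of the "common ground" idea: given Korean-to-English and
# English-to-Japanese word mappings, a Korean-to-Japanese translation can
# be composed by pivoting through English.

KO_TO_EN = {"고양이": "cat", "물": "water"}
EN_TO_JA = {"cat": "猫", "water": "水"}

def pivot(word, first_leg, second_leg):
    """Compose two mappings: source -> pivot language -> target."""
    return second_leg[first_leg[word]]

print(pivot("고양이", KO_TO_EN, EN_TO_JA))  # 猫 (cat)
print(pivot("물", KO_TO_EN, EN_TO_JA))      # 水 (water)
```

The interesting part of Google's result is that nobody wrote the `pivot` step by hand; the system found the common ground on its own.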

Now, this puts that business of the computer teaching itself into a little less hysterical category and into a more "Chomskian" place; after all, the famous MIT linguist has been saying for decades that there's a common universal "grammar" underlying all languages, and not just common phonemes, as Mr. MacDonald points out in the last paragraph of the above quotation.

But the problem still remains: the computer took a set of patterns it noticed in one context and mapped that pattern onto a new context unfamiliar to it. That, precisely, is analogical thinking; it is a topological process that seems almost innate in our every thought, and that, precisely, is the combustion engine of human intelligence (and, in my opinion, of any intelligence).

And that raises some nasty high octane speculations, particularly for those who have been following my "CERN" speculations about hidden "data correlation" experiments, for such data correlations would require massive computing power, and also an ability to do more or less this pattern recognition and "mapping" function. The hidden implication is this: if this is what Google is willing to talk about publicly, imagine what has been developed in private corporate and government secrecy. The real question then becomes: how long has it been going on? My high octane speculative answer is, I suspect, for quite a while, and one clue might be the financial markets themselves, now increasingly driven by computer trading algorithms, and by markets that increasingly look like they are reflecting that machine reality, and not a human market reality. Even the "flash crashes" we occasionally hear about might have some component of which we're not being told.

See you on the flip side...


  1. This goes back to basic mapping theory. If you are attempting to bi-directionally map N notations with each other, there are N(N-1)/2 two-way mappings, but just N mappings to some common logical notation. This formed the basis of early attempts at language translation. I still have research publications of the time full of Backus-Naur form attempting to capture the topologies of natural language grammars. However, researchers of the day (and the available processing power) soon found themselves overwhelmed by the plethora of rules needed for endless exceptions. And that’s assuming people stuck to these rules in the sources being translated, which of course they don’t.
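    The counting argument above is easy to check: direct two-way pairings of N notations grow as N(N-1)/2, while mappings to a common hub grow only as N. The figure of 103 is roughly the number of languages Google Translate supported at the time:

```python
# Pairwise mappings vs. mappings through a common interlingua ("hub").

def direct_pairs(n):
    """Two-way mappings needed to pair every notation directly."""
    return n * (n - 1) // 2

def via_interlingua(n):
    """Mappings needed when everything routes through one common notation."""
    return n

for n in (5, 10, 103):
    print(n, direct_pairs(n), via_interlingua(n))
# 103 languages: 5253 direct pairings, but only 103 hub mappings
```

The gap widens quadratically, which is why a shared intermediate representation is such an attractive shortcut.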

    This characterised the general malaise which began to permeate the entire expert systems industry of the time. Promising and rapid early breakthroughs soon got bedevilled with insurmountable detail.

    Much recent progress has been made instead by applying algorithms to the analysis of big data. In the case of language translation this compares existing translations to generate new ones.

    With regard to Google’s announcement here a few points are of particular interest.

    It would appear, and I’m speculating, that in attempting to unravel what the net effect of the new approaches is, researchers have found that the algorithms themselves have discovered and encoded underlying mapping rules which they’re leveraging on their own initiative. As Joseph points out, this sort of analogical thinking characterises intelligence.

    But the point needs to be made that this may still be a significantly different form of intelligence to our own. I say “may” because there’s still so much we don’t know about how we think. If, for example, our brains interface to non-local consciousness, then it’s also possible we are doing something similar in accessing a shared non-local processing and storage (perhaps the legendary Akashic Records?).

    More significantly still, if the algorithms are doing this for language translation, it’s likely they’re also doing the same for other applications of similar technology, like facial recognition.

    This, I would suggest, evidences a potential paradigm shift: the crossover from domain-specific to general AI.

    In picturing what’s happening I like to use the parable of the Chinese peasant and the King, who was asked for one grain of rice doubled for each square of the chess board. The peasant ended up being beheaded because he asked for more rice than the entire world could provide. The first half of the chess board marks significant but manageable gains; the second half represents the fantastic and the seemingly impossible. Forty years or so of Moore’s law means we’re now well into that second half of the chess board, and into the realm of what was, only a decade ago, science fiction.
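    The rice arithmetic behind the parable is easy to verify:

```python
# The rice parable in numbers: one grain on square 1, doubling each
# square. Squares 1-32 are large but imaginable; the full 64 squares
# are beyond the world's rice supply.

first_half = sum(2**i for i in range(32))   # squares 1-32: 2**32 - 1
whole_board = sum(2**i for i in range(64))  # squares 1-64: 2**64 - 1

print(first_half)   # 4294967295 (about 4.3 billion grains)
print(whole_board)  # 18446744073709551615 (about 1.8e19 grains)
```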

    It also has important implications for the AI confinement problem. If we are to learn anything here it is that assumptions about what isn’t possible with AI are likely to be proven wrong.

  2. As a professional in that branch, I can testify that translating from one language into another is… an art. It not only requires the mastery of two languages and the ability to switch back and forth but, more importantly, the ability to bridge the cultural gap. Some 450 million people speak French, mostly across Africa but also in Europe and other parts of the world. And yet, no two countries use French the same way.

    No AI Google program can bridge any gap as no AI program has the ability to reproduce context. The fallacy of “one size fits all” comes from the US and has been pushed on populations for decades. How do we know it’s a fallacy? The elite itself never subscribed to it. If the elite won’t abide by it and will still get tailor-made everything (including education for their own kids), there is something wrong with it that we all should be leery of.

    Language defines the way humans formulate intellectually the questions they pose and the responses they give. Language defines vision and analysis. Language solves problems.

    As Romain Gary wrote in “The Ski Bum”: “The barrier of language is when two guys speak the same language. No more understanding.” AI Google falls right into that trap.

    When a Mauritanian says: “My foot hurts and it’s making me tired”, he is actually saying: “My leg (you need to make him show you where exactly; the leg goes from hip to toe) hurts and it’s killing me”. When a Congolese states: “I felt vile (ignoble)”, he is saying: “I felt sad”. When a mom states, about her kid getting five shots: “He is ashamed”, she is saying: “He’s terrified”. When a French-speaking Muslim says: “I don’t dare dogs”, he is saying either: “I hate dogs” or: “I’m scared of dogs” (based on context and how he developed that feeling).

    AI Google has no clue. AI Google is one more desperate stone being shoved in the already crumbling next Tower of Babel, to try and keep it propped up at any cost. It’s crumbling regardless.

    And just for the hell of it… Arabic and Spanish are the most commonly spoken languages among refugees. And they come with a hundred times more nuances than AI Google will ever grasp.

    What’s the antidote to such insanity? Pack your bags, travel and mingle with populations. No Tower of Babel can beat that.

    List of French Speaking Countries
    Francophone countries of the world. List of countries where French is native language, or French is regularly in use.
    French Speaking Countries in Europe
    France Belgium Luxembourg
    Monaco Switzerland

    French Speaking Countries in Africa
    Algeria Benin Burkina Faso
    Burundi Cameroon Central African Republic
    Chad Comoros Democratic Republic of the Congo
    Congo, Republic of the Côte d’Ivoire Djibouti
    Equatorial Guinea Gabon Guinea
    Madagascar Mali Mauritius
    Morocco Niger Réunion
    Rwanda Senegal Seychelles
    Togo Tunisia

    French Speaking Countries in the Americas and the Caribbean
    Canada French Guiana Guadeloupe
    Haiti Martinique

    French Speaking Countries in Australia and the Pacific
    French Polynesia New Caledonia Vanuatu

    French is the official language in Belgium, Benin, Burkina Faso, Burundi, Cameroon, Canada, Central African Republic, Chad, Comoros, Côte d’Ivoire, Democratic Republic of the Congo, Djibouti, Equatorial Guinea, France, Guinea, Haiti, Luxembourg, Madagascar, Mali, Monaco, Niger, Republic of the Congo, Rwanda, Senegal, Seychelles, Switzerland, Togo, and Vanuatu.

  3. I wonder if every day we are creating the beast we dread, simply by typing our search requests. Could Google’s AI be mapping the structure of our nemesis, which one day will reach a critical point and somehow flesh itself out through 3D printing and eliminate all organics?

  4. As Google likely has major ties to the alphabet agencies (and their handlers), I would tend to see this AI-like “interlingua, as Google calls it” as a boon for TPTB. The lurkers want to have decades-long ‘source material’ for every text/email/voicemail/conversation on hard drives in an easily-searchable format. To be machine-searchable, it must have already been translated into some “interlingua” and be residing at a central storage area in Utah (with a copy to Israel). No danger for abuse there…

  5. Yahweh-Allah is a torturing murdering AI and always was. Another job translating these techno worshipers what to take away from us useless eaters so they can justify our liquidation.

  6. There are several scenarios here.
    One is that this is a false story and there is no great leap herein. Two, there is, and an even greater leap of faith in trusting AI. Three, somewhere betwixt the two.
    I’m thinking they are weaponizing this. I mean, they weaponized everything, even the weather. Check your drawers.
    CERN ties in with that scenario. Too bad. Because they are doing the Lenin thing: give them enough creative power codes to hang ourselves.

  7. Not to trivialize the Google announcement; it is a very big deal. But it’s a controlled release of information. (Most of Google’s tech belongs to other parties, like Stanford University, and most Google tech is government developed, again by Stanford, or perhaps IBM in this case.)

    This kind of thing was clearly worked out decades ago at IBM, by the now co-CEO (crazy fake libertarian) of Renaissance Technologies, which of course has NSA ties, and which was founded by a man, Simons (mostly fake liberal), whose speciality is advanced geometry; clearly this geometry is used for encrypting data: hyperdimensional coding/decoding. And one could possibly throw a Montaukian reference in there, really.

    Put slightly differently: if you have the code, it’s much easier, and produces less of a confusing and useless mess, to open the vault with that code than to try to cut the vault open with a cutting torch, or a particle accelerator.

  8. I don’t think anyone on the latest ‘Strangeness in Antarctica’ blog picked up on this story about scientists locating an enormous object hidden under the frozen wastes, which is so vast (151 miles across and 850 m deep) that it causes changes in gravity. Whatever is down there could be the very reason why all these high-powered people are paying the continent a visit…

    1. I think your very valid posting is probably just scratching the surface on this-
      I think whatever was recently found (or known for a long time and finally being revealed to a chosen few) is just so HUGE in importance (whatever it may be) that it may change one’s opinions about the origins of humanity and how we are being manipulated-

      Larry in Germany

  9. Maybe one day the AI will get so smart it will “invent” Sanskrit. Conversely, if the AI’s new personal language resembles a Semitic tongue, I would suggest hiding your children and heading for the hills. Yahweh is back.

    1. There was a very good movie from 2013 by Spike Jonze, starring Joaquin Phoenix, called “Her”. Mr. Phoenix plays a bored tech geek who decides to start “dating” an “artificial intelligence” computer program.
      Paramahansa Yogananda published a good set of commentaries on the Christian New Testament; he studied Sanskrit under his guru, Sri Yukteswar.
      Dr. Neil Douglas-Klotz has published some work of translation of Jesus’ words from Aramaic into English. Check this out:
      “Heaven and Earth,
      wave and particle,
      individuality and community
      may cross boundaries,
      go beyond themselves, and
      transgress their limits;
      form may pass into light
      and light back to form.
      But the story I’m telling you will not;
      the fullest expression of
      the purpose of my life,
      from beginning to end,
      will continue.” (Dr. Douglas-Klotz’s rendering of Jesus’ “Heaven and earth shall pass away, but my words shall not pass away”; see Matt. 24:35, Mark 13:31, Luke 21:33)
      Yahweh the Impaler is bad business, but is he merely a manifestation of wetiko/malignant egophrenia, the “epistemological error of Occidental civilization,” which is the perception that we exist independently of an objective universe, and each other, rather than understanding that we all arise from the collective, unified field and are co-dreaming our “reality” into being? Christians have their Gnostics, Muslims have their Sufis, and Jews have their Kabbalists. Hebrew, Aramaic, and Arabic are all Semitic languages, and that is what a Semite is: someone who speaks a Semitic language.


  10. I’ll be impressed when it can translate the Easter Island text and that bizarre book with the weird writing and flowers that I can’t remember the name of… Then run all the ancient texts through it to make sure that we aren’t mistranslating texts. (It happens.)

    Then we can sit back and wait for the Rise of the Machines.

    1. My apologies K., I meant to hit reply and accidentally hit report as the cat pounced into my lap.

      As for the Easter Island text, I do not think it will be long before there is a working translation. It is clear that Easter Island script and Indus Valley script are variations of the same. Igor Witkowski discusses this matter in one of his books where he relates the work of a Polish professor in this area.

  11. I don’t know; sometimes not even human brain power can get translation 100% right. Can a computer?

    Take translating Japanese into English: if we search for a translation of “Nice to meet you” into Japanese, it will often show up as はじめまして (Hajimemashite). However, はじめまして is NOT “Nice to meet you,” because Japanese speakers do not greet the way Westerners do; they do not use this term in their everyday greetings.

    はじめまして (Hajimemashite) is actually used only for the very first meeting; it basically means “(This is the) first time (we) meet.”

    Here is the usual greeting for Japanese:


    1. こんにちは。(ko nn ni ti wa) 你好 Hello
    2. はじめまして。(ha ji me ma si te) 初次见面 (this is the) First time (we) meet
    3. どうぞよろしくお願(ねが)いします 请多多指教 Please (do not hesitate) to show me/teach me/correct me.

    However, many Westerners have a hard time truly understanding Japanese due to the cultural difference; also, they WOULD NOT correct you if you make a mistake! Will the computer be able to solve that problem?
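    The context problem in this comment can be put in code terms: the right greeting depends on a fact about the situation (is this a first meeting?) that no string-to-string lookup of “Nice to meet you” carries. A toy sketch:

```python
# The right Japanese greeting is chosen from situational context,
# not from a phrase-for-phrase lookup.

def greet(first_meeting):
    """Pick a greeting from context: Hajimemashite only at a first meeting."""
    return "はじめまして" if first_meeting else "こんにちは"

print(greet(True))   # はじめまして (first meeting only)
print(greet(False))  # こんにちは (ordinary hello)
```

A translation engine would need to know the value of `first_meeting`, which the source sentence alone does not supply.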

  12. The lengths they will go to to misdirect you, and to target you with click-bait along the way.

    FWIW: A young friend, talented in marketing, applied to one of their periodic “talent hunts.” After reading their materials and holding a creepy Skype interview with some preternaturally enthusiastic millennials, he said that Google is a cult.

  13. Regarding quotation marks: has anybody noticed that on many engines EXACT searches, e.g. “cosmic war”, are increasingly “interpreted” by the system in its own inscrutable way?

Comments are closed.