OLD PAPERS, NEW DISCOVERIES, AND AI

July 17, 2019 By Joseph P. Farrell

If you're familiar with the work of Lt. Col. Tom Bearden (US Army, Ret.), this story will sound eerily familiar in its basic outlines. But for those of you who are not, Col. Bearden maintained in various books and papers that the post-war Soviet Union undertook a most unusual form of black-projects science research. According to Bearden, Stalin, faced with an overwhelming American numerical superiority in nuclear weapons, lashed the Soviet Academy of Sciences into finding some technological breakthrough that would do an end run around that superiority. In effect, Stalin was searching for the next generation of strategic offensive and defensive weapons, a technological edge that would render nuclear and thermonuclear weapons obsolescent if not obsolete. According to Bearden, vast and secret research bureaus were established inside the Soviet Union, whose sole task was to comb through the West's vast scientific literature of the previous decades and to "pull" any papers containing undeveloped or interesting ideas for further study. In Bearden's argument, this effort led to the creation of the vast Soviet program in what he came to call "scalar" physics - a term whose current currency is owed in large part to Bearden's efforts - and with it, the whole world of secret Soviet research into everything from the torsion experiments of astrophysicist Nikolai Kozyrev to the massive research into the "paranormal" (which I review in my book Microcosm and Medium).

With that in mind, J.E. (and many others) spotted this story at Zero Hedge, and it's worth careful consideration, for it appears as if the "Stalin-Bearden" model (with apologies to the colonel!) has been adopted and "updated":

AI Pores Over Old Scientific Papers, Makes Discoveries Overlooked By Humans

Notably, not only is the "Stalin-Bearden" model operating here, but the project is centered at a well-known American "black projects" research facility:

Researchers from Lawrence Berkeley National Laboratory trained an AI called Word2Vec on scientific papers to see if there was any "latent knowledge" that humans weren't able to grok on first pass.

In other words, rather than Stalin's armies of bureaucrats in research institutions sitting at their desks, reading paper after paper in the scientific journals, and pulling the interesting or anomalous ones for further study, the search for "the odd and forgotten" has been turned over to an artificial intelligence program, which reads and scans millions of papers and then, most importantly, draws connections between them by a process of analogical mapping based on technical words and their context (for those following my "Analogical Calculus/Topological Metaphor of the Medium" idea, that should sound very familiar; a minimal code sketch of the mechanic follows the excerpt below):

The algorithm didn’t know the definition of thermoelectric, though. It received no training in materials science. Using only word associations, the algorithm was able to provide candidates for future thermoelectric materials, some of which may be better than those we currently use. -Motherboard

"It can read any paper on material science, so can make connections that no scientists could," said researcher Anubhav Jain. "Sometimes it does what a researcher would do; other times it makes these cross-discipline associations."

The algorithm was designed to assess the language in 3.3 million abstracts from material sciences, and was able to build a vocabulary of around half-a-million words. Word2Vec used machine learning to analyze relationships between words.

"The way that this Word2vec algorithm works is that you train a neural network model to remove each word and predict what the words next to it will be," said Jain, adding that "by training a neural network on a word, you get representations of words that can actually confer knowledge."

...

The algorithm linked words that were found close together, creating vectors of related words that helped define concepts. In some cases, words were linked to thermoelectric concepts but had never been written about as thermoelectric in any abstract they surveyed. This gap in knowledge is hard to catch with a human eye, but easy for an algorithm to spot.

After showing its capacity to predict future materials, researchers took their work back in time, virtually. They scrapped recent data and tested the algorithm on old papers, seeing if it could predict scientific discoveries before they happened. Once again, the algorithm worked. -Motherboard (Boldface emphasis in the original, italicized emphasis added)
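
For the technically inclined, here is a minimal sketch of that training-and-similarity mechanic, using the open-source gensim implementation of Word2vec. The toy corpus, parameters, and material names below are my own illustrative assumptions, not the Berkeley team's actual pipeline, and a corpus this small will produce noise; the point is the shape of the computation, which only becomes meaningful at the scale of their 3.3 million abstracts.

```python
# A minimal sketch of the Word2vec mechanic described above, using the
# open-source gensim library (gensim >= 4.0 API assumed). The toy
# "abstracts" and parameters are illustrative assumptions only.
from gensim.models import Word2Vec

# Stand-in abstracts, tokenized into lowercase word lists. Note that the
# last one never uses the word "thermoelectric": at real scale, the model
# can still pull a material like "pbte" toward that concept via shared
# context words like "seebeck".
abstracts = [
    "bi2te3 is a well known thermoelectric material with high efficiency".split(),
    "thermoelectric devices convert a temperature gradient into voltage".split(),
    "the seebeck coefficient of bi2te3 governs thermoelectric conversion".split(),
    "pbte exhibits a large seebeck coefficient and low thermal conductivity".split(),
]

# Skip-gram training (sg=1): for each word, the network learns to predict
# its neighbors within the context window -- the "remove each word and
# predict what the words next to it will be" step Jain describes.
model = Word2Vec(
    sentences=abstracts,
    vector_size=50,  # dimensionality of each word vector
    window=5,        # context window size
    min_count=1,     # keep every word (acceptable only for a toy corpus)
    sg=1,            # skip-gram rather than CBOW
    epochs=200,      # many passes to compensate for the tiny corpus
    seed=42,
)

# Rank vocabulary words by cosine similarity to "thermoelectric". At real
# scale, this is where materials never described as thermoelectric in any
# abstract can surface -- the "gap in knowledge" the article mentions.
for word, score in model.wv.most_similar("thermoelectric", topn=5):
    print(f"{word:12s} cosine similarity = {score:.3f}")
```

The skip-gram setting mirrors the "remove each word and predict its neighbors" description in the quote; setting sg=0 instead would give the CBOW variant, which predicts a word from its neighbors.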

Needless to say, that "testing" of the algorithm "on old papers" is where my high octane speculation of the day comes in. There's no doubt in my mind that the activity and experiment at Lawrence Berkeley represents only the publicly-revealed tip of the iceberg, and that this sort of artificial intelligence scanning of old science papers has probably been going on for a long time, and at various institutes, analyzing not only the metadata of such papers, but also - pace the Lawrence Berkeley "update" to the Stalin-Bearden model - looking for the anomalous concept or bit of data, or the "holes" in areas of research that were forgotten in the flood of scientific and engineering research. The veil has been lifted and pulled back a little on one technique by which the breakaway group is able to pull ahead of public science. I've proposed such a scheme of data mining and "pulling" of anomalous data for the operations at CERN, positing a secretive algorithm that allows select, and secret, committees of scientists to review that anomalous data. So I'm taking this Lawrence Berkeley story as a bit of broad and general confirmation of that speculation. Once such concepts or data are pulled by the first computer pass, they can then be turned over to a "steering committee" - think the JASON Group here, folks, or the RAND Corporation, DARPA, or corporations like MITRE or SAIC - which will then assign the discovery to a particular team for further research and development.
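
And that "testing on old papers," for what it's worth, is conceptually simple to sketch. Assuming each abstract carries a publication year (the dated corpus, cutoff year, and material names below are hypothetical placeholders, not the actual study's data), the back-test amounts to: train only on papers published before a cutoff, then ask whether the model already places a later-confirmed material near the target concept.

```python
# A rough sketch of the "test on old papers" back-test: train only on
# abstracts published before a cutoff year, then check whether the model
# already ranks a later-confirmed material near the thermoelectric
# concept. All data below is hypothetical placeholder content.
from gensim.models import Word2Vec

dated_abstracts = [
    (1994, "bi2te3 is a classic thermoelectric material".split()),
    (1996, "the seebeck coefficient of pbte suggests conversion applications".split()),
    (1998, "cosb3 skutterudites show low lattice thermal conductivity".split()),
    (2005, "cosb3 is now a leading thermoelectric compound".split()),  # held out
]

cutoff = 2000
training_set = [tokens for year, tokens in dated_abstracts if year < cutoff]

model = Word2Vec(training_set, vector_size=50, window=5, min_count=1,
                 sg=1, epochs=200, seed=42)

# Was the 2005 "discovery" already latent in the pre-2000 literature?
print(model.wv.similarity("cosb3", "thermoelectric"))
```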

All of this means that, with the addition of AI to the Stalin-Bearden model, the growth of scientific knowledge in the black research community will become (if it hasn't already) exponentially faster. And if this has been going on for some time, as I suspect, then it lends even more credence to the hypothesis, held by many in the alternative research field, that the black projects research world may be several decades in advance of the public in its science and technology.

And notably, in Lawrence Berkeley's case it was made possible by mimicking the processes of analogical thinking...

See you on the flip side...