Yesterday I outlined a sweeping "high octane speculation" context involving high frequency trading, predictive algorithms, the electric surveillance state, information or photon entanglement, and, yes, interplanetary commerce. The reason why such a context was necessary to outline - to my mind at least - was an article shared with us by Ms. P.H., a regular reader and contributor of article material here. The article in question is this:
The question is why DARPA would be exploring these types of possibilities, beyond the obvious and justifiable purpose of protecting national electronic communications of all sorts, government, corporate, private, and personal.
In today's high octane speculation, I suggest that there are deeper long term objectives in play in DARPA's project, objectives implicit first in some statements within the above-linked article, and implicit within the wider context that I outlined yesterday, which wider context is implicated in other statements that DARPA has recently made.
First, the statements contained within the article:
"Instead, the STAC program is looking at new program analysis techniques that could allow analysts to find those vulnerabilities and predict where leaks and denial of service might be possible. These new techniques and tools would enable a methodical search for vulnerabilities in critical government, military and economic software.
"For those tools, the program is looking for scale and speed. With regard to scale, the techniques would have to be capable of handling larger software, ranging from hundreds of thousands to several million lines of source code. They will also have to work quickly by increasing precision and reducing the need for manual annotation."
There are three key concepts here, and I rather suspect the reader has already seen the huge implications and connections to the HFT (high frequency trading) phenomenon. To put the high octane speculation succinctly, it appears that DARPA is searching for ways to turn the data-mining and surveillance programs - with whatever predictive algorithms they entail (think PROMIS for a moment here) - loose on the programs themselves, to predict where, how, when, and at what scale cyber attacks might come. But notice the reference to "reducing the need for manual annotation," i.e., reducing the need for a human to sit at a keyboard and manually type in code alterations, additions, and so on. In other words, they are looking for an adaptive program that can adapt to its own predictions. Such programs already exist in rudimentary forms, but in DARPA's case one always needs to think in grand (or perhaps grandiose) terms. As the article suggests and hints, the key concern here would seem to be financial and economic, since so much trading and financial clearing is now electronic and computer-reliant in nature.
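To make the idea a bit more concrete, here is a deliberately toy sketch in Python of what "searching software for vulnerabilities without manual annotation" might look like at its most primitive: a scanner that flags risky constructs in source text on its own. The pattern list and function names here are my own illustrative assumptions, not anything from DARPA; real STAC-style tools would rely on far more sophisticated program analysis (dataflow, complexity, and side-channel analysis) rather than simple pattern matching.

```python
import re

# Illustrative assumption: a tiny catalog of "known risky" constructs.
# Real analysis tools reason about program behavior, not text patterns.
RISKY_PATTERNS = {
    r"\bgets\s*\(": "unbounded read (classic buffer overflow)",
    r"\bstrcpy\s*\(": "unchecked string copy",
    r"\beval\s*\(": "dynamic code execution",
}

def scan_source(source: str):
    """Flag lines matching known-risky patterns -- no human annotation needed."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, reason))
    return findings

sample = 'buf = gets()\nprint("ok")\nresult = eval(user_input)\n'
for lineno, reason in scan_source(sample):
    print(f"line {lineno}: {reason}")
```

The point of the sketch is the scale argument: once the rules are encoded, the same scan runs unchanged over hundreds of thousands or millions of lines, which is precisely the capability the article says the program is after.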
But what's the long term implication of this? Recall this article and blog:
Just for completeness' sake, here is the crucial paragraph from the second article for our purposes here:
"Last September, a few hundred scientists, engineers and space enthusiasts gathered at the Hyatt Hotel in downtown Houston for the second public meeting of 100 Year Starship. The group is run by former astronaut Mae Jemison and funded by DARPA. Its mission is to "make the capability of human travel beyond our solar system to another star a reality within the next 100 years." (Emphasis added)
And if such travel - and the implication of interplanetary (if not interstellar) commerce implicit in it - is to become reality, then by the same token, a means must be found to communicate with those explorers, and to conduct any commerce and secure any communications from "cyber attack." And when you're traveling at warp speed or trading via entanglement, a predictive adaptive algorithm able to anticipate and head off cyber attacks before they happen is a handy tool to have around.
High octane speculation? To be sure!
But no more so than achieving warp capability in a mere century.
Welcome to the brave new world.
See you on the flip side...