This one was shared by Mr. J.K., and it's definitely one worth adding to your transhumanist scrapbook. Readers of the book I co-authored with Scott D. deHart, Transhumanism: A Grimoire of Alchemical Agendas, will recall that transhumanism emphasizes the four "GRIN" technologies - Genetics, Robotics, Information processing, and Nanotechnology - which, used in conjunction and with a heavy emphasis on the information-processing technologies, will lead to what some transhumanists, led by Ray Kurzweil, call the "singularity," a vast expansion of human capabilities and consciousness.
Within that context, we've seen over the past few weeks a number of warnings, first voiced by Elon Musk and then more formally argued by Oxford's Swedish philosopher Nick Bostrom, about the imminent dangers of the emergence of artificial intelligence - an emergence of which its creators, mankind, might not even be aware until it's far too late.
But now there's another possibility emerging - one which has been hinted at by Ray Kurzweil and other transhumanists - and that is the radical expansion of human intelligence by technological enhancement:
There are a few statements here worth pointing out, for they are the subject of today's high octane speculation:
"Michael, when we speak of Intelligence Amplification, what are we really talking about? Are we looking to create Einsteins? Or is it something significantly more profound?
"The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.
"The first step will be to create a direct neural link to information. Think of it as a 'telepathic Google.'
"The third step involves the genuine augmentation of pre-frontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-McGyvers, people who perform apparently impossible intellectual feats. For instance, mind controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real."
Before going further, recall that we've already seen warnings regarding the emergence of general artificial intelligence, to the point that some theologians are pondering the "baptism" of such AI. Others - and I share their concerns - are more disturbed by the potential for such AI to be programmed to observe the dictates of a particular religious system, e.g., Sharia law for Islam, or "the institutes of biblical law" for the Christian Calvinist dominionist. Would such AI have the human capacity for emotion and compassion? Many doubt it, and I number myself among them.
So why does the radical enhancement of human intelligence enter the picture here? My high octane speculation is rather disturbing, but given the proclivities of the elites to use dialectical manipulations to achieve their goals, what I believe we are looking at is the possible dialectical manipulation of the meme of artificial intelligence along the following classic lines: (1) general artificial intelligence is potentially a very bad thing, and could, if accomplished, lead to a cession of power and sovereignty to such an intelligence so extensive that human existence itself could be threatened. This, in essence, has been the warning of Bostrom and Musk. But there's an antithesis, and it is this: (2) the growth of capabilities in the information processing technologies leads to the conclusion that such possibilities may be inevitable.
So what's the synthesis? (3) In order to forestall the possibilities of general artificial intelligence occurring, with all its human-threatening potential, we may have to forestall the possibility of human subservience to such capabilities by radically enhancing human intelligence.
But there's a danger here as well, and the article points it out, for such "radical enhancement" would seem only to import the possibility of a cold, machine-like intelligence into mankind itself:
"What potential psychological side-effects may emerge from a radically enhanced human? Would they even be considered a human at this point?
"One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine. Most perturbations to this tuning qualify as what we would consider "crazy." There are many different types of insanity, far more than there are types of sanity. From the inside, insanity seems perfectly sane, so we'd probably have a lot of trouble convincing these people they are insane.
"Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people tend to feel comparatively more alienated in the world, and for a being smarter than everyone, the effect would be greatly amplified.
"Most very smart people are not jovial and sociable like Richard Feynman. Hemingway said, "An intelligent man is sometimes forced to be drunk to spend time with his fools." What if drunkenness were not enough to instill camaraderie and mutual affection? There could be a clean "empathy break" that leads to psychopathy."
Could lead to psychopathy? From the behaviors of certain leaders, I think we've already arrived. I'll leave you to consider the potential implications of that "arrival," and I'll
See you on the flip side...