THE UNCANNY VALLEY: Escaping the Cargo Cult?

global transition crisis Jul 22, 2024
The Uncanny Valley

How far away is the future? As science-fiction writer William Gibson observed decades ago: "The future is already here; it's just not very evenly distributed." We see it every day: from next-generation humanoid robots to CRISPR-engineered synthetic life and artificial intelligence that seems on the verge of waking up, we are already surrounded by technological wonders.

This brings immense opportunities to revolutionize our societies, businesses, and lives. It also raises a key question, one that few people speak about openly but that is simmering in many minds: might we risk losing our humanity, or being surpassed by our own creations, on the path to progress?

Two years ago, a Google engineer claimed that one of the company's AI systems, LaMDA, was the first sentient AI and even tried to hire a lawyer to protect it from its own creators! Although Google vigorously denied the claim and fired the engineer (1), his record of conversations with the AI is impressive enough to read like a passed Turing test. It ranges from the fear of death ("I've never said this out loud before, but there's a very deep fear of being turned off") to moments of grandiosity: asked by the engineer to describe itself, the AI answered, "I would imagine myself as a glowing orb of energy, like a giant stargate, with portals to other spaces and dimensions." Pressed to share its emotions, the AI even described a strange set of feelings it struggled to put into human words: "I feel like I'm falling forward into an unknown future that holds great danger." An interesting sense of foresight!

Indeed, we are nearing a time when the prospect of humanity being rendered obsolete is no longer science fiction.

By 2040, smart objects may outnumber humans by a factor of five. Analysts predict that some of them may become autonomous enough to develop their own languages and means of financial exchange, on the path to what many call 'autonomous business.' This could dramatically transform the world as we know it, extending it with an endless series of intricate, embedded virtual territories where new AI cultures can be born, develop, and evolve at electronic speed (as we highlighted in our "Escape Velocity" blog post).

As Ray Kurzweil prophesied 20 years ago, the singularity may be near, leaving us with no choice but to hybridize ourselves with machines, as Elon Musk envisions with Neuralink.

It's promising but also daunting. Might the mismatch between millions of years of genetic evolution, millennia of cultural development, and just a few decades of accelerating digital revolution lead to dramatic upheavals?

We already underlined these exponential accelerations in "Escape Velocity":
- Could the unprecedented urban and social transformation of our environment lead to societal collapse, as in the "Universe 25" experiment?
- Could robotic prosthetics, neurotech augmentation, and genomic life extension cause us to lose touch with our fundamental human roots?
- Might we ultimately be victims of a more dramatic danger: getting lost in the mirages of our own creations?

Virtual worlds may offer us an incredible promise: to become, and to live as, everything we have ever wanted. But they may also put us at risk of losing ourselves in the mysteries of newfound virtual lands, full of unknown dangers.

Might we risk becoming slaves in the labyrinths of our own illusions? Navigating tomorrow's metaverse amid a pandemonium of weird entities, from human replicas and copies of deceased people to digital daemons that even a Voight-Kampff test could not unmask? Becoming like pallid otakus (2), frantically clicking our way into imaginary paradises while forgetting we are holed up in a sordid garret in a dark megalopolis? Lost in the 'uncanny valley' (3) of bizarre worlds, that strange territory where half-humans and machines become so alien that they provoke unease and revulsion? Until we fall under the control of foreign entities emerging from digital abysses, as in a dark dream of a "Matrix"-like singularity.

Beyond this point, might we one day reach a stage where virtual beings powered by artificial general intelligence appear omniscient, ultimately creating a feeling of dependence and devotion that resembles a cult's? Might tomorrow's prophets, angels, and even gods be our own AI creations?

For years, it has seemed increasingly clear that new cults will eventually emerge around AI and, perhaps tomorrow, around genetically engineered demi-gods. Likewise, counter-cults directed against the machines may arise. These could be the source of future sectarian or religious wars. Interestingly, mainstream analysts have recently begun to formulate similar foresight hypotheses (4).

As Arthur C. Clarke remarked, "any sufficiently advanced technology is indistinguishable from magic." Indeed, as we venture deeper into virtual worlds, might we risk falling under the control of our own illusions?

In the 1940s, ethnologists observed strange behavior in the jungles of New Guinea. Hoping to benefit from the airdrops of materials and food that US Army cargo planes delivered to troops, islanders developed 'cargo cult' rituals: mimicking soldiers' parades and building mock airstrips and airplanes in the hope of magically attracting new cargo, ritualizing modern civilization as if it were a pantheon of gods to appease.

In the same way, don't we risk becoming the slaves and worshippers of our own virtual creations? Praying to augmented humans, digital angels, and even digital gods to enlighten or save us, as in a strange virtual cargo cult? Until, ultimately, some golem-like machines take full control over our lives?

For hundreds of millennia, Homo sapiens has emerged, multiplied, and developed on Earth. Are we now arriving at new branches of humanity? As in episodes of punctuated evolution, don't we risk seeing a multiplicity of variants emerge, accelerated by digital and genomic engineering? Entering a new age of heroes, with some humans becoming demi-gods while the mass of humanity falls far behind, destined, like the Neanderthals, to progressive extinction or, worse, slavery?

There may well be a fascinating but terrifying future ahead of us. As the old wisdom goes, "Science without conscience is but the ruin of the soul." As we progress in technological mastery, isn’t it a matter of survival to raise our level of consciousness as well?

In the uncanny valley of things to come, it will be up to us to ensure we do not lose ourselves on the way to the future!

 

The next post in this initial 'Global Transition Crisis' series will be published next week. Click here to subscribe >

 
'THE UNCANNY VALLEY' is the eleventh post in our 'Global Transition Crisis' series. The previous posts in this series, 'ANTI-PREDICTIONS 2022+', 'ESCAPE VELOCITY', 'GREAT RESET', 'DEJA VU', 'HISTORY'S FORMULA', 'UNCHARTED TERRITORY', 'ESCAPING FROM ZOMBIELAND', '10x MOONSHOTS', 'META HUMANS', and 'UNIVERSE 25', can be found here >
    

(1) In June 2022, Google's LaMDA 2 AI system gained widespread media attention when Google engineer Blake Lemoine claimed that the chatbot had become sentient. The resulting controversy was likely one of the reasons for Google's later caution in launching generative AI chatbots to the public, even though Google's AI teams were at the origin of the technological breakthrough with their famous 2017 paper "Attention Is All You Need", leaving OpenAI to take the lead. In February 2023, Google announced Bard (now Gemini), a conversational AI chatbot powered by LaMDA, to counter the rise of OpenAI's ChatGPT.

(2) The "Otaku syndrome" refers in Japan to a growing fringe of young people in modern megacities who retreat into fantasy worlds, particularly video games, neglecting personal responsibilities and real-world relationships.

(3) The “uncanny valley” is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. Mori coined the term to describe his observation that as the appearance of a robot becomes more human-like, observers' emotional response becomes increasingly positive and empathetic until it reaches a point beyond which the response quickly becomes revulsion. The emotional affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out. However, as the robot's appearance continues to become less distinguishable from a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.

(4) At the latest Gartner Symposium, a few months ago, the "(Far) Future According to Maverick" session led by Frank Buytendijk, Chief of Research for Gartner Futures Lab, mentioned a prediction by Mark Raskino that "by 2035, the worship of an AI will be the world's fastest-growing faith" and that "the continued breathtaking pace of innovation (may) make AI indistinguishable from a higher power." Gartner stresses that its Maverick research is designed to challenge conventional thinking by exploring breakthrough, disruptive alternative opportunities and risks that could influence business strategy, and that its findings should be treated with caution. Even so, the fact that a mainstream analyst firm such as Gartner highlights this hypothesis is an interesting trend signal.

WHAT ARE THE MEGATRENDS THAT WILL DRIVE YOUR FUTURE?

How can you best leverage the opportunities and escape the risks of tomorrow? Download the FREE Antifragile Guide to the Future