Sunday, July 28, 2019

Alternative Intelligence

Standing in a queue at the Louvre museum in Paris, as it snaked towards arguably the most famous painting in the world, I was startled when an irreverent tourist out of my line of sight muttered, "It is rather small, isn’t it?" Granted, the Mona Lisa, measuring 30in by 21in, seemed unimposing; nevertheless, five hundred years of separation could not mask the lady’s elegance and magnetic sheen. When, in early 2019, I saw the ubiquitous image of the Mona Lisa babbling animatedly in a WhatsApp clip, the surreal head and facial movements appeared clever but harmless.


Months later, another lady of Italian lineage, Nancy Pelosi, the first female Speaker of the US House of Representatives, featured in an online video that portrayed her in a drunken state. Apparently, the footage had been digitally doctored to make Pelosi appear to stammer and slur her words. This occurred about two years after an aide to the incumbent US President deadpanned the insidious phrase "alternative facts" in a clumsy attempt to deflect the truth.

Welcome to the age of deepfake technology in the so-called post-truth world. In the wake of the contentious 2016 US presidential election, when pernicious fake news infected social media, it has become increasingly difficult to separate accurate, factual information from the chaff of falsehood. For the record, deepfake (a portmanteau of "deep learning" and "fake") denotes an image synthesis process, driven by artificial intelligence (AI), that many fear could become weaponised in the near future to undermine global institutions. Given today’s toxic environment, the word "artificial" in AI is rather unfortunate: it could be presumed to mean contrived, exaggerated or phony, which to an uncritical mind further erodes the legitimacy of digital platforms.
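
For the technically curious, one widely reported recipe behind the early face-swap deepfakes is surprisingly compact: two autoencoders that share a single encoder, with one decoder per identity. The sketch below, in PyTorch, is a heavily simplified illustration of that idea; the 64x64 resolution, layer sizes and training loop are my own assumptions, not anyone's production pipeline.

    import torch
    import torch.nn as nn

    # One shared encoder learns pose and expression; each identity gets
    # its own decoder that learns to paint that person's face.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU())
    decoder_a = nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())
    decoder_b = nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())

    def reconstruct(faces, decoder):
        # Encode with the shared encoder, decode with a per-identity decoder.
        return decoder(encoder(faces)).view(-1, 3, 64, 64)

    params = (list(encoder.parameters()) + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    optimiser = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.MSELoss()

    def train_step(faces_a, faces_b):
        # Each decoder learns to rebuild its own identity from the shared code.
        optimiser.zero_grad()
        loss = (loss_fn(reconstruct(faces_a, decoder_a), faces_a)
                + loss_fn(reconstruct(faces_b, decoder_b), faces_b))
        loss.backward()
        optimiser.step()
        return loss.item()

    # The trick: encode a video frame of person A, then decode it with B's
    # decoder, so B's face inherits A's pose and expression.
    # fake_frame = reconstruct(frame_of_a, decoder_b)

The unsettling part is how little code the core idea requires; the hard work lies in collecting face data and polishing the output.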

How did we stumble into fake media, and where could we possibly be heading? As a premise, there were about 26 million software engineers worldwide in 2019, representing roughly 0.34% of the global population. On the subject of AI, therefore, the depth of technical knowledge accessible to over 99% of us is superficial at best. That said, consider the incredible fact that a smartphone, essentially a hand-held computer, is unimaginably more powerful than the guidance computer on board the Apollo 11 spacecraft! Fifty years after that historic launch, and after innumerable stops and starts, AI technology has finally reached late-stage testing in natural language processing, facial recognition, and self-driving cars. For better or worse, ready or not, and in the absence of common ethical and regulatory guidelines, AI is poised to reshape our future.
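
Incidentally, that 0.34% figure is easy to sanity-check with a few lines of Python, assuming rough 2019 figures of 26 million engineers and a world population of about 7.7 billion:

    # Back-of-the-envelope check (2019 figures assumed).
    engineers = 26_000_000
    population = 7_700_000_000              # roughly 7.7 billion people in 2019
    print(f"{engineers / population:.2%}")  # -> 0.34%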

In retrospect, “artificial intelligence” was coined in 1955 not by a committee of experts but by John McCarthy, an inquisitive computer scientist. Although the field is sometimes referred to as machine intelligence, the term AI has stuck and seems embedded in our vocabulary. McCarthy may have envisioned a machine that would mimic the brain’s neural processes; in practice, AI has progressed far enough to defeat the world champions of two strategic games, chess and Go - remarkable feats by any measure. However, AI faces huge challenges in replicating the cognitive perception and dexterity reflective of human consciousness, which neuroscience still cannot explain. For instance, AI struggles to recognise the same object across high- and low-resolution images, let alone infer meaning and derive understanding from what it "sees".

Coincidentally, the Bank of England this month honoured the British computing pioneer Alan Turing by announcing that his image will feature on the new £50 note. Turing famously devised the Turing Test in 1950: a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Without delving too deeply into Turing’s landmark challenge, AI capabilities have definitely grown in leaps and bounds, built on ground-breaking algorithms that let autonomous systems teach themselves by tapping into massive datasets.
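
For readers who like to see ideas in code, here is a minimal sketch of Turing's "imitation game" in Python. The callables ask, human_reply, machine_reply and judge are hypothetical stand-ins for a real interrogator and respondents; the point is simply the blind, labelled exchange at the heart of the test.

    import random

    def imitation_game(ask, human_reply, machine_reply, judge, rounds=5):
        # Hide the two respondents behind anonymous labels, in random order.
        replies = [human_reply, machine_reply]
        random.shuffle(replies)
        labelled = dict(zip("AB", replies))

        # The interrogator questions both labels without knowing who is who.
        transcript = []
        for _ in range(rounds):
            question = ask(transcript)
            for label, reply in labelled.items():
                transcript.append((label, question, reply(question)))

        accused = judge(transcript)                # "A" or "B"
        return labelled[accused] is machine_reply  # True: machine unmasked

    # Toy run: a parrot "machine" versus a judge who guesses at random.
    caught = imitation_game(
        ask=lambda t: "What does the Mona Lisa's smile mean?",
        human_reply=lambda q: "Honestly, nobody knows.",
        machine_reply=lambda q: q,   # echoes the question - easily caught
        judge=lambda t: random.choice("AB"),
    )

If judges can do no better than a coin flip over many such rounds, the machine passes - a bar that, as of 2019, conversational AI clears only in narrow, carefully staged settings.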

So, should we be worried? Yes and No.

Yes, because I believe that AI technology has already developed to the point of being able to fool human beings; fake news and deepfake images are prime examples. Furthermore, authoritarian governments and malevolent non-state actors are deploying AI-driven bots designed to confound and cause havoc in cyberspace. Whether liberal democracies will be able to repel this assault is an open question. Beyond that, who can predict what uninhibited, pimpled 17-year-old programmers around the world might forge next?

No, because it is doubtful whether the agency exhibited by AI will ever match the human cognition that could enable a machine to, for example, tell a "true" lie. Irony aside, I believe that there is a gulf between a successful bluff and a whopping lie. Bluffing suggests a sleight-of-hand craft, practised most deftly by spies and poker players in order to deceive and conquer. Lying, on the other hand, connotes a deep-seated and visceral human trait, implying that the perpetrator has something at stake worth protecting or hiding. Well, if machines ever develop contextual faculty to the degree that they routinely and convincingly lie to humans and other machines, then it could be curtains for the human species.

Could AI possibly decrypt Mona Lisa’s wry smile, and might we inadvertently sleepwalk into a situation whereby the tail begins to wag the dog?

You can download the More from Less app from the Google Play Store.

Later!
