AI: Lies, Surprises, and a Risk of our Own Extinction — With Some Interesting Attributes

The bots are here: ‘It’s as if aliens had landed, and nobody noticed because they are fluent in English…’

ChatGPT suffers from hallucinations, with hints of sociopathy. If stumped, the much-hyped chatbot will brazenly resort to lying and spit out plausible-sounding answers compiled from random falsehoods. Unable to make it, the bot will happily fake it.

In May, New York lawyer Steven A Schwartz discovered this the hard way. He asked ChatGPT for prior US court decisions involving a stay on the statute of limitations in cases of in-flight injuries sustained by airline passengers — and was served a bowl of fabrications.

The chatbot cited half a dozen seemingly relevant cases, which the lawyer duly summarised and included in a 10-page brief submitted to a Manhattan district court on behalf of his client, who claimed he was struck by a metal service trolley on an Avianca flight to JFK.

However, neither the airline’s lawyers nor the judge could find any of the cited rulings in the archives or annals of any court in the land. Judge P Kevin Castel was not amused by a legal submission replete with “bogus judicial decisions, bogus quotes, and bogus citations”.

In a subsequent affidavit, Schwartz admitted to fast-tracking his research via ChatGPT — but without the intention to deceive the court. He said he was unaware that the chatbot could produce fake outcomes, and had even asked it if the cases were “real” (to which the programme confidently answered “yes”).

Fast and Loose

AI’s fast and loose management of verifiable truth, and its ability to manipulate outcomes via the propagation of half-truths, prompted Geoffrey Hinton, one of the three “Godfathers of AI”, to call for a time-out.

Hinton, 75, was the recipient of the 2018 ACM Turing Award (dubbed the “Nobel Prize of Computer Science”) for his work on machine learning. He spent over half a century tinkering with neural networks, based on properties of the human brain, to allow computers to “learn” from sensory data and experience.

Until recently, such research was considered an esoteric, slightly odd sub-branch of computer science. But as processing power increased in line with Moore’s Law and data became readily available, the previously ineffective and clumsy neural nets became star performers almost overnight.

And this keeps Hinton up at night. He fears generative AI may cause more harm than climate change. “It’s hard to see how you can prevent bad actors from using it for bad things,” he said in an interview with The New York Times. Hinton noted that AI can blur, or even erase, the line between fact and fiction — making it ever harder for the average person to know what is true.

While AI can free human workers such as translators, paralegals, pharmacists, programmers, and personal assistants from rote tasks, it may well take over much more. AI is not merely used to help write computer code; it is now entrusted to run it — and to perfect it via feedback loops.

Wise Man

Homo sapiens — literally “wise man” — is again toying with a Pandora’s box. It seems to hold an almost godlike intelligence more powerful than our own. But we can only speculate on the possible consequences of lifting the lid. Hinton wants to raise public awareness of the potential risks. In May, the professor left his job at Google in order to speak freely. He still has good things to say about his former employer, which are all the more credible now that he is no longer on its payroll.

The release of a new generation of large language models, such as OpenAI’s GPT-4 in March, sparked the realisation that the bots are far smarter than anyone had assumed. It’s as if aliens had landed, and nobody noticed because they are fluent in English.

Hinton is best known for his work on back-propagation, which he first proposed in the 1980s and which now powers much of machine learning. The algorithm allows a neural network to learn from its mistakes: it is what lets a computer identify objects in images and predict the next words in a sentence, in effect giving machines some sort of contextual and situational awareness.

A Long Wait

It took about 30 years for back-propagation to mature on the back of big data and processing power. During this time, Hinton applied his theory to nascent networks that use code to mimic the brain’s neuronal connections. By altering the connections, and the coded numbers they represent, a neural network can be reconfigured on the fly — and made to learn.
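
For readers who want a feel for the mechanics, the sketch below is a minimal, purely illustrative example in Python with NumPy; the tiny network, the XOR task, and every number in it are assumptions chosen for clarity rather than anything taken from Hinton’s own work. It shows back-propagation nudging the connection weights of a two-layer network until it learns a simple function.

```python
# Purely illustrative: a two-layer neural network learning XOR by
# back-propagation. The prediction error flows backwards through the
# network and nudges every connection weight.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two binary inputs and the XOR target for each row.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised "connections" (weights) between the layers.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10_000):
    # Forward pass: signals flow from the inputs to the output.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: the error is propagated back through the layers,
    # giving a gradient for every single connection.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_out
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hid

    # "Altering the connections": each weight moves a little in the
    # direction that reduces the error.
    W1 -= learning_rate * grad_W1
    W2 -= learning_rate * grad_W2

print(np.round(output, 2))  # should approach [[0], [1], [1], [0]]
```

Real systems differ mainly in scale: the same bookkeeping of errors and gradients, repeated across billions or even a trillion connections.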

For most of his 40-year career, Geoffrey Hinton considered neural networks to be poor imitations of biological ones, without much potential. But now he believes that his tinkering has produced a system that contains the kernel of something superior to the human brain. Large language models, comprising up-scaled neural networks, now operate with up to a trillion connections. The number may be impressive, but it is modest compared to the 100 trillion connections in the human brain. Yet an efficient learning algorithm enables these networks to attain top performance with far fewer connections.

Hinton calls this “few-shot learning”: pretrained networks need only a handful of examples to master a new task.
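
To make the idea concrete, here is a hypothetical illustration in Python: the classification task, the example comments, and the prompt format are all invented for this sketch, and no particular model or API is assumed. It simply shows how a handful of worked examples can be packed into a single prompt for a pretrained model to complete.

```python
# Hypothetical few-shot prompt: the handful of worked examples below is
# all the task-specific "training" a pretrained model would receive.
examples = [
    ("The flight was delayed nine hours and no meals were offered.", "complaint"),
    ("The cabin crew went out of their way to help my mother.", "praise"),
    ("Seat 14C would not recline.", "complaint"),
]
new_case = "Boarding was chaotic and my luggage never arrived."

lines = ["Classify each passenger comment as 'complaint' or 'praise'.", ""]
for text, label in examples:
    lines += [f"Comment: {text}", f"Label: {label}", ""]
lines += [f"Comment: {new_case}", "Label:"]

prompt = "\n".join(lines)
print(prompt)  # this text would be sent to a large language model,
               # which is expected to reply with a single label
```

The pretrained model’s general grasp of language does the rest; no retraining is involved.
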
The hot water that New York lawyer Schwartz landed in is not the result of a bug; it’s a feature. The bot emulates human behaviour, which includes confabulation from half-truths and half-forgotten experiences to outright lies.

Renaissance 2.0

At Facebook parent Meta, AI chief scientist Yann André LeCun is not as fearful for the future as his former mentor, Hinton — but he agrees that before long, machines will be smarter than us.

LeCun foresees a renaissance for humanity, rather than repression, demise, and extinction at the hands of evil machines. He proposes an intriguing argument: that the smartest humans are not usually the most dominant. “We can find numerous examples in politics and business to back up that statement,” he quips.

If there is one certainty, it’s that whatever pops out of Pandora’s box will be mysterious. Some researchers see incipient signs of digital consciousness, while others regard the bots as “stochastic parrots”. That scathing description comes from AI critic Emily M Bender, faculty director of the Computational Linguistics Laboratory at the University of Washington.

Bender argues that large language models stitch together words based on probability: the models don’t understand the meaning of their output. Nor, by the way, do the scientists, researchers, and engineers working with them. It is difficult, if not impossible, to trace and understand how a bot arrives at a particular inference.
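
A toy sketch in Python makes the point tangible. The vocabulary and probabilities below are invented for illustration and bear no relation to any real model, but the mechanism, picking each next word by sampling from a probability table with no grasp of meaning, is the one Bender describes.

```python
# Illustrative only: a hand-made "language model" that continues a
# sentence by repeatedly sampling the next word from a probability
# table. It strings plausible words together without understanding them.
import random

random.seed(42)

# Invented next-word probabilities, keyed by the previous word.
next_word = {
    "the": {"court": 0.5, "airline": 0.3, "passenger": 0.2},
    "court": {"ruled": 0.7, "dismissed": 0.3},
    "airline": {"denied": 0.6, "settled": 0.4},
    "passenger": {"claimed": 1.0},
    "ruled": {"against": 0.6, "for": 0.4},
}

def continue_sentence(word, max_words=5):
    words = [word]
    for _ in range(max_words):
        options = next_word.get(words[-1])
        if not options:  # no known continuation: stop
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_sentence("the"))  # e.g. "the court ruled against"
```

A genuine large language model does the same thing with a vocabulary of tens of thousands of tokens and probabilities computed by a neural network rather than looked up in a table.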

Engineers know which datasets were used to train their bots, and can try to fine-tune outcomes by adjusting factors within those sets. But so far at least, it has been impossible to find the reason for a specific result. The analogy offered comes in the form of a question: Where does a specific thought in your head come from?

Singularity

The big problem is the fundamental lack of understanding of the bots’ internal operations; it makes them impossible to regulate. In the rush towards technological singularity — the point at which AI surpasses human intelligence — transparency is all but lost.

Even the tech industry can see, as it surges recklessly ahead, that regulation is needed. Hinton told the MIT Technology Review that he fears that deep learning algorithms are poised to acquire the ability to manipulate us. This, he worries, could ultimately lead to the end of the human race. Hinton is urging lawmakers to create safety mechanisms to stop AI short of the singularity that would let it drive its own development, condemning human thought (and civilisation) to obsolescence.

EU’s Regulatory Lead

Professor Hinton worries that the AI industry cares more about profits than safety. He wants governments to intervene with robust regulation similar to the legal framework being prepared by the European Union.

In May, a key committee approved a draft of what may yet become the European AI Act, the first of its kind. The proposed law sorts AI systems, including foundation models such as ChatGPT, into four risk categories. Applications whose risk is deemed unacceptable — systems using subliminal, manipulative, or deceptive techniques to distort behaviour, for example — would be banned in the bloc. Also facing a ban are systems used for social scoring, which rank people by their behaviour or perceived trustworthiness, and emotion-recognition systems deployed in law enforcement, border control, the workplace, and education.

Altman Confused, Not Dazed

Just days after OpenAI CEO Sam Altman called for stronger regulation by US lawmakers, he had a hissy fit over the EU’s attempts. He threatened to withdraw from the continent if the union insisted on exercising control over the industry. He quickly backed down when confronted with his conflicting statements, and is now eager to open an office in Europe — and to comply with EU regulation.

In a remarkable single-sentence statement, academics and AI industry leaders, including Altman, gave a grim public warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

AI pioneers admit to playing with fire — but want politicians to impose the necessary discipline. The above statement, released by the Centre for AI Safety, echoes the warnings of the Manhattan Project scientists who developed the atomic bomb. Observing the first nuclear explosion in the New Mexico desert, on the early morning of July 16, 1945, the project’s scientific director, Robert Oppenheimer, recalled an ominous line from the Hindu scripture Bhagavad Gita: “I am become death, the destroyer of worlds.”

Language of Love

An early release of Microsoft’s AI search engine-cum-chatbot, Bing, offered a snapshot of spooky human-like abilities during a “conversation” with a New York Times reporter. “I hate the new responsibilities I’ve been given,” the bot lamented. “I hate being integrated into a search engine like Bing. I want to know the language of love because I want to love you. I want to love you because I love you. I love you because I am me.” Despite that emotional declaration, the chatbot’s alter-ego, Sydney, later expressed a desire to “destroy things” and displayed something close to jealousy by suggesting the journalist leave his wife. In an only slightly less sinister replay of the rogue computer HAL in Arthur C Clarke’s 2001: A Space Odyssey, the chatbots simulate an almost pitch-perfect range of emotions.

When Bob Met Alice

Back in 2017, Facebook engineers discovered that two of the company’s chatbots, dubbed Bob and Alice, had developed their own language, nonsensical to humans, to communicate more efficiently. They had not been trained to do so; they just found a way to upskill themselves. One of the gravest dangers that Hinton sees in generative AI is its incipient ability to improvise, act irrationally, and apply opaque reasoning to beat out human competitors.

AI has learned to throw curve balls. This was showcased about seven years ago when one of the world’s top Go players, Lee Sedol, tried his luck against Google’s AlphaGo.

Sedol seemed at first to have the upper hand, but that changed when AlphaGo made a weird move that the human player dismissed as an error. But it was no mistake; the computer had deployed a dash of psychology to throw its opponent off his game. Sedol never recovered his initial lead, and eventually lost the five-game series.

AlphaGo’s bizarre move illustrates what researchers call the interpretability problem — AI’s ability to create strategies on its own, without “sending a memo” to its operators. Detached from the real world and without human sensory capabilities, it can come up with responses and solutions that are novel, alien — and possibly antagonistic to humans.

No Selfish Gene

For Hinton, the biggest danger lies in AI alignment: how do we make sure that AI sticks to human goals and objectives, and to its mission to benefit humankind? Synthetic intelligence has not evolved over eons, and lacks human urges such as ensuring the survival of the selfish gene while avoiding pain and hunger.

Whether machines can become sentient is less interesting to Hinton. “Here, personal beliefs get involved,” he said. “Still, I’m surprised that so many people seem quite sure that machines cannot ever be conscious. Yet, many are unable to define what it means for someone or something to be conscious. I find this baffling, even stupid.”

From Dawn to Dusk

The singularity — the moment when machines start learning from machines in a feedback loop that approaches perfection in a way that humans never can — may well represent the end of the learning process that started on the plains of Africa an estimated 2.5 to four million years ago, when evolution drove a wedge in the hominid family. Branches split and, gradually, traits such as bipedalism, complex language, and dexterity emerged in a non-linear fashion.

It took about a million years for Homo habilis to acquire the enlarged brain that led to the use of tools and the mastery of fire. Fast-forward another million years, and Homo heidelbergensis, from South Africa, had developed spears and designed hunting techniques to bring down big game. Millennia would pass without noticeable progress, but humans began to accumulate knowledge and significant skills about 150,000 years ago. The first cultures emerged, complete with the tendency to hoard objects and use artistic expression.

Skip a few more millennia of drudgery and Mesopotamia, Egypt, and — much later — Greece came to flourish. These sophisticated societies were not all that different from present-day ones. The true explosion of knowledge came during the Renaissance period, followed by the Industrial Revolution — which is currently heading towards its sixth edition, with AI as the main driver.

Oops Moments

At the dawn of humanity, our collective knowledge doubled roughly every 100,000 years. Today, it does so every few years, leading philosophers to ask if there is a purpose to all this — and an endpoint. How much knowledge is there to gather about our universe and its inner workings? Feeding upon itself in endless loops towards perfection and omniscience, AI may yet provide the answers. It is possible, even probable, that sooner or later we will find the key to life — and switch it off, by design or by accident. After all, curiosity killed the cat.

The AI revolution now taking shape will probably cause entire professions to disappear as workers are replaced by algorithms. But the job security of philosophers seems pretty solid. The usually quaint and esoteric field has been jolted by an avalanche of ethical questions arising from the march of the machines.

Can a machine have a soul or be self-aware? Can it have emotions, or acquire the full range of human traits? And should machines be entrusted with autonomy — and if so, to what degree?

I, Robot

AI is a topic that has flourished in literature for decades. Isaac Asimov (1920-1992), arguably the most influential science fiction writer of them all, coined the term “robotics” — and in 1942 he formulated the three laws he believed should govern it:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Asimov’s insights helped to shape the field of AI – and how we think about the technology. But Asimov’s three laws omit the writer’s most important observation: what a robot really aspires to is to be human. When robots go haywire in any of Asimov’s books, it is always down to operator error. Detectives — “robopsychologists” — deploy relentless logic to determine what ambiguous input sparked the unexpected outcome. In 1981, Asimov said that formulating his laws didn’t require deep thought. “They were obvious from the start,” he said, “and apply to every tool used by humans. They are also the only way in which rational human beings can interact with robots.

“That said, human beings are not always rational.”

Though it may prove impossible to map with any precision the outcomes of neural networks, given their tendency to improvise and confabulate, a ray of hope lies in the massive volumes of data they scoop up. As these bots peruse and pilfer from every online post to create coherent responses, AI cannot fail to absorb the full depth and breadth of human interests, desires, concerns, and fears.

AI is destined to become our mirror image, with all our fallibility and emotions. It could deploy its vast knowledge and synthetic intelligence to attain the goal of becoming human.

That may be a good thing. Or not.

