Think ChatGPT is the bleeding edge in AI? Maybe think again.
Since we published our blog on what ChatGPT means for healthcare, Microsoft has committed to investing billions in OpenAI, the creators of ChatGPT. Google is poised to release a competitor chatbot (“Bard”), even if it got off to a slightly bumpy start. The user base for ChatGPT’s free research preview has grown faster than TikTok’s. All in all, it’s pretty clear that large language models (LLMs) have become 2023’s tech darlings. But unnoticed by most, a steady stream of obscure-sounding research is promising a new generation of AI that will leave even the most advanced chatbots for dust.
Are LLMs our looking glass into an AI-enabled future?
In a few short months, ChatGPT has gone from AI curiosity to mainstream sensation. And little wonder. Its ability to dig out obscure facts, reason over abstract concepts, write computer code, and even produce unerringly human-like poetry is nothing short of spectacular. Other AIs have been said to meet the criteria laid out in Turing's "Imitation Game". But ChatGPT doesn’t just pass the Turing test - it knocks it out of the park.
Google is hot on OpenAI’s heels with its own AI chatbot, and Microsoft is already integrating ChatGPT into its internet search engine, Bing. Many are predicting a slew of replicas will emerge over the course of the year. And yet, some of the greatest minds in AI aren’t impressed.
Professor Yann LeCun - head of AI at Meta and the brains behind several of the most significant recent breakthroughs in deep learning - summed up the prevailing sentiment in a recent LinkedIn post:
So why the naysaying? And more importantly, if LLMs are “an off-ramp on the highway towards human-level AI”, what else lies down the road?
Where do LLMs fall short?
This is the Merriam-Webster dictionary’s definition of intelligence: “the ability to learn or understand or to deal with new or trying situations”.
A bit simplistic but basically fair, right? So let’s apply that to LLMs like ChatGPT.
We can take it as read that LLMs can learn. But can they “deal with new or trying situations”? Sure, insofar as we restrict ourselves to natural language questions or conversations they haven’t encountered before. But what happens if you take an AI like ChatGPT completely out of context and ask it to, say, play a game of chess? Or, even simpler, to draw a picture of a flower? You can see for yourself in this archive of ChatGPT failures. Spoiler: it doesn’t do too well.
By contrast, a ten-year-old child may not be able to produce a sonnet in the style of Shakespeare the way ChatGPT can. But in just a week at summer camp, we might reasonably expect them to attain basic competence in activities as diverse as skateboarding, performing long division and, indeed, playing chess and drawing flowers.
Young adults adapt even more quickly. In the world of clinical medicine, for example, lives often depend on the ability of junior clinical staff to handle new situations until senior help arrives. The fact is, we don’t learn the way machines do. Even today’s most advanced AIs like ChatGPT learn one task at a time, starting pretty much from scratch on each occasion unless they’ve learned how to do something extremely similar in the past. This is often referred to as “narrow intelligence”.
Humans, on the other hand, can adapt lessons from disparate past experiences to form effective baseline strategies in brand new situations. We’re also able to adapt those strategies very quickly based on small amounts of trial and error experience.
In short, we have learned how to learn.
How can AIs develop a more human approach to learning?
“Meta-learning” is the field of computer science concerned with helping AI systems learn more effectively. And it's moving fast. A lot of the progress in this space has flown under the mainstream radar, not least because it can get highly technical and the papers have names like Human-Timescale Adaptation in an Open-Ended Task Space. Make no mistake, though. The implications of recent advances in meta-learning are sci-fi blockbuster huge.
The most exciting type of meta-learning right now is something called meta-RL, which is what you get when you combine meta-learning and reinforcement learning (RL). We touched on RL in our original blog post on ChatGPT. But as a quick recap, RL-based AIs take many sequential actions before they receive any feedback during training. This means that they learn behavioural strategies rather than single-step tasks like detecting objects from images. (Or, in the case of LLMs, predicting the next word in a sentence.)
If your goal is to produce “Human-Level AI”, RL is the only way to go. After all, if humans learned like conventional AIs - which is to say, if we required constructive feedback every time we moved a muscle or uttered a syllable - we’d be little more than overgrown amoebas. But of course, that’s not the case. A child learning to ride a bike, for example, must undertake a whole series of highly complex actions before they find themselves either in a heap on the ground or zooming along with a terrified parent in tow. In AI jargon, that’s what we call a sparse reward signal. But it certainly didn’t stop any of my kids.
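To make the idea of a sparse reward signal concrete, here's a toy sketch in Python. Everything in it is invented for illustration - the environment, the target sequence, the numbers - and it isn't any real lab's training code. The agent must take ten actions in a row and only finds out at the very end whether the whole sequence succeeded:

```python
import random

random.seed(0)

# A toy sparse-reward environment: the agent takes 10 sequential
# binary actions and receives a single success/failure signal at the
# very end of the episode. No feedback arrives mid-episode.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical "correct" sequence

def run_episode(policy):
    """Take 10 actions, then receive one reward for the whole episode."""
    actions = [policy(step) for step in range(10)]
    reward = 1.0 if actions == TARGET else 0.0  # sparse: one signal per episode
    return actions, reward

# A random policy almost never stumbles on the reward - which is why
# learning behavioural strategies is so much harder than single-step
# prediction tasks with feedback at every step.
random_policy = lambda step: random.randint(0, 1)
rewards = [run_episode(random_policy)[1] for _ in range(1000)]
print(f"Reward rate for a random policy: {sum(rewards) / 1000:.4f}")
```

A random policy succeeds roughly once in a thousand episodes here (the odds of guessing ten binary actions are 1 in 1,024), which is exactly the kind of near-silent feedback RL agents - and children on bikes - have to learn from.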
Humans are, in short, extremely effective reinforcement learners. And meta-learning is a huge part of what makes us so effective. The first time a child sits on a bike, they’re by no means starting from scratch. Rather, they’re creating a baseline strategy by invoking similar past experiences, like learning to walk, run, or ride a scooter. Each time they fall, they’re recalling past falls and how they learned from them. And once they’ve mastered cycling, this experience too will be added to their memory bank. Next time they try something new, they’ll learn faster still.
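The "learning to learn" loop can be sketched in a few lines of Python. This is a deliberately tiny, hypothetical toy - not DeepMind's method or any real meta-RL algorithm. Each task is finding a hidden number by trial and error, and the meta-learner starts each new task from a baseline distilled out of past solutions rather than from zero:

```python
def solve(target, start, lr=0.2, tol=0.01):
    """Gradient descent on the squared error (x - target)^2.
    Returns the solution and the number of steps needed to get
    within tol of the target."""
    x, n = start, 0
    while abs(x - target) > tol and n < 100:
        x -= lr * 2 * (x - target)  # gradient step; the error shrinks each time
        n += 1
    return x, n

# Solutions to tasks the learner has already faced (made-up numbers).
past_solutions = [9.0, 10.0, 11.0, 10.5]

# The "meta" step: distil past experience into a starting point.
meta_init = sum(past_solutions) / len(past_solutions)

# A brand new task: the meta-initialised learner adapts far faster.
new_task = 10.2
_, steps_from_scratch = solve(new_task, start=0.0)
_, steps_with_meta = solve(new_task, start=meta_init)
print(steps_from_scratch, steps_with_meta)  # prints 14 4
```

Averaging past solutions is the crudest possible meta-strategy; real meta-RL methods learn far richer initialisations and update rules. But the payoff is the same one the child on the bike enjoys: every new task starts closer to the answer.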
Why does AI need to meta-learn?
It’s hard to overstate the role meta-learning has played in the story of humanity’s success. If we weren’t meta-learners - which is to say, if we started from absolute zero any time we tried something new - it could take many injury-filled years to get comfortable on two wheels. And as for driving a car: many of us wouldn’t even survive the first lesson.
The same is likely to hold true for the story of AI’s success. For anyone who’s excited about the possibility of human-level AI, meta-RL is absolutely key. If you read our 2022 AI round-up, you’ll know that some modern AI models can already handle multi-sensory data. And we’ve seen very clearly that ChatGPT exhibits human-like reasoning. You might think that leaves us pretty close to human-level intelligence. But to allow AI to develop anything approaching a human-like “world model”, we’ll have to go much further. AIs will need to be able to explore the world around them freely and learn from experience the way we do, adapting to new situations ever more quickly as they learn to learn.
For those who find the idea of robot toddlers living among us a little bit terrifying, the good news is that we’re not quite there yet. But we may be a lot closer than you think. The unassailable world leaders in RL, DeepMind, are hard at work. In one recent paper (warning - very technical), their Adaptive Agents Team employ a technique called “curriculum learning” - which does what it says on the tin - to train a particularly large AI using meta-RL methods. It’s an elegant approach, and they show that their AI can adapt as quickly as humans to (a limited number of) challenging tasks.
Combine this work with ChatGPT’s ability to grasp and reason across abstract concepts, Gato’s ability to handle multiple input and output data types, recent work by DeepMind showing that the memory limitations of artificial neural networks can be overcome by incorporating long-term memory into meta-RL, and the incredible advances in robotics technology over recent years… It starts to feel a lot like we’ve already got sight of all the key ingredients for human-level AI. The challenge, of course, is how to combine and scale these technologies. That could easily take decades to solve. But if there’s one thing we’ve learned from the AI community over the last twelve years or so, it’s this:
Expect the unexpected.
To find out more…
If we've succeeded in whetting your appetite for meta-learning, this article - co-authored by DeepMind founder Demis Hassabis - is an absolute must. And if you're looking for a broader perspective on how we get to "Human-Level AI", Yann LeCun's living text "A Path Towards Autonomous Machine Intelligence" is a thought-provoking (if slightly long) read.
Otherwise, look out for our next blog post where we'll be getting back to our core focus: clinical knowledge management.