Is There a Clear Path to General AI?
People frequently mix up two pairs of terms when talking about artificial intelligence: Strong vs. Weak AI, and General vs. Narrow AI. The key to understanding the difference lies in which perspective we want to take: are we aiming for a holy grail that, once found, will mean solving one of mankind’s biggest questions … or are we merely aiming to build a tool to make us more efficient at a task?
The Strong vs. Weak AI dichotomy is largely a philosophical one, made prominent in 1980 by American philosopher John Searle. Philosophers like Searle are looking to answer the question of whether we can — theoretically and practically — build machines that truly think and experience cognitive states, such as understanding, believing, wanting, hoping. As part of that endeavor, some of them examine the relationship between these states and any possibly corresponding physical states in the observable world of the human body: when we are in the state of believing something, how does that physically manifest itself in the brain or elsewhere?
Searle concedes that computers, the most prominent form of such machines today, are powerful tools that can help us study certain aspects of human thought processes. However, he calls that “Weak AI,” as it’s not “the real thing.” He contrasts that with “Strong AI” as follows: “But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”
Which AI Do You Mean? It's a Matter of Perspective
While this philosophical perspective is fascinating in and of itself, it remains largely removed from the practical, day-to-day work in the field of AI. Philosophers are thinkers, meant to raise the right questions at the right time to help us think through the implications of what we do. They are rarely builders. The builders among us, the engineers, seek to solve practical problems in the physical world. Note that this is not a question of whose aims are more noble, but merely a question of perspective.
Engineers seeking to build systems that are of practical use today are more interested in the distinction between General and Narrow AI. That distinction concerns how broadly a given system can be applied. We call something “Narrow AI” if it is built to perform one function, or a set of functions in a particular domain, and that alone. In reality, that is the only form of AI we have at our disposal today: all currently available systems are built for one task alone.
The biggest revelation for any non-expert here is that an AI system's performance on one task does not generalize. If you've built a system that has learned to play chess, that system cannot play the ancient Chinese game of Go, not even with additional modifications. And if you have a system that plays Go better than any human, no matter how hard that task seemed before such a program was finally built in 2017, that system will NOT generalize to any other task. Just because a system performs one task well does not mean it will “soon” (a favorite word of people who write and talk about technology) perform seemingly related tasks well, too. Each new task that is different in nature (and there are many of those “different natures”) means tedious, laborious work for the engineers and designers who build these systems.
So if the opposite of Narrow AI is General AI, you’re essentially talking about a system that can perform any task you throw at it. The original idea behind General AI was to build a system that could learn any kind of task through self-training, without requiring examples pre-labeled by humans. (Note that this is still different from Searle’s notion of Strong AI, in that you could theoretically build General AI without building “true thinking” — it could still just be a simulation of the “real thing.”)
Is It Possible to Jerry-Rig General AI?
Let’s do a thought experiment (a common tool of any philosopher who wants to think through an idea or theory). What if we interconnected each and every narrow AI solution ever built on planet Earth? What if we essentially built an IoA, an Internet of AIs? There are companies out there that have built:
- A system that helps you schedule meetings by taking you out of the typical email ping-pong of finding a time that works for everyone.
- A system that recognizes cats, bridges, trees, bicycles, towers, … in images (in fact, your iPhone offers that through the Search feature in the Photos app).
- A system to translate text from any language to any other language, or to summarize text.
- A system to recognize skin patterns to help diagnose cancer or other diseases.
- A system to find inconsistencies in legal contracts.
- A system to find the best-matching partners for dating.
- A system to determine the best time to send a marketing email given a particular audience.
- A system to identify the title and artist of a song from a recording.
- A system to help control air traffic.
- A system to weed out job applicants based on their CVs.
- A system to answer basic knowledge questions.
- A system to tell whether a website visitor is interested in buying soon, or still early in their discovery.
- And so on and so forth …
If we standardized the interfaces for all of these solutions, and those for the hundreds and thousands of other tasks we face in our lives, wouldn’t we then essentially have built General AI? One AI system of systems that can solve whatever you throw at it?
Certainly not. A hodgepodge of backend systems that each accomplish one task in a proprietary way is not the same as one system that is equipped with general learning capabilities and can thus teach itself any skill it needs. It is also far from being the sort of Strong AI that philosophers have in mind, as humans are definitely not a conglomerate of differently built subcomponents for each and every task we can perform.
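To make that contrast concrete, here is a minimal sketch in Python, purely for illustration, of what such an “Internet of AIs” would really amount to. The service names, task labels and interface are all made up; the point is only that a registry-plus-router built out of narrow systems can answer exactly the requests its builders wired in, and nothing else.

```python
# An illustrative sketch (not a real system) of the "Internet of AIs" idea:
# every narrow AI exposes the same interface, and a dispatcher routes tasks
# to whichever registered system claims that task. All names are hypothetical.

from typing import Protocol


class NarrowAI(Protocol):
    """The standardized interface every narrow system would have to expose."""

    task: str

    def solve(self, payload: dict) -> dict:
        ...


class MeetingScheduler:
    """Stand-in for a purpose-built scheduling engine behind a proprietary API."""

    task = "schedule_meeting"

    def solve(self, payload: dict) -> dict:
        return {"slot": "Tuesday 10:00", "attendees": payload.get("attendees", [])}


class SongIdentifier:
    """Stand-in for an audio-fingerprinting service trained for this one job."""

    task = "identify_song"

    def solve(self, payload: dict) -> dict:
        return {"title": "unknown", "artist": "unknown"}


class InternetOfAIs:
    """Looks like one general system, but it only ever dispatches to fixed skills."""

    def __init__(self, systems: list) -> None:
        self._by_task = {s.task: s for s in systems}

    def solve(self, task: str, payload: dict) -> dict:
        system = self._by_task.get(task)
        if system is None:
            # The crucial limitation: an unknown task is not learned on the fly;
            # it has to wait for humans to build and register yet another narrow AI.
            raise NotImplementedError(f"no narrow AI registered for task: {task}")
        return system.solve(payload)


ioa = InternetOfAIs([MeetingScheduler(), SongIdentifier()])
print(ioa.solve("schedule_meeting", {"attendees": ["Ada", "Alan"]}))
print(ioa.solve("identify_song", {"audio": b""}))
# ioa.solve("play_go", {})  # raises NotImplementedError -- no self-teaching here
```

However elegant the shared interface, the dispatcher is only ever as capable as the fixed list of narrow systems registered with it; no general learning happens anywhere in the loop.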
But then again — does it matter? Wouldn't such a readily available system of systems essentially give us an omnipotent tool to help us with any imaginable task we face? It certainly would! And to someone oblivious to its inner structure, it would even appear to be that long-sought magical AI we’ve been shown in books and movies for decades.
The problem is this: such an Internet of AIs will never become reality. Our world’s capitalist nature essentially prohibits the sharing of intellectual property at the scale needed for such an endeavor. For any of the systems mentioned above, there are probably dozens of firms out there making money by re-solving the same problem over and over again. Google’s translation engine does a fine job, but so too do Facebook’s, Microsoft’s, IBM’s, DeepL’s, SysTran’s, Yandex’s, Babylon’s, Apertium’s ... Some of them use a common foundation that academic circles have produced over the years, but many don’t. Humans are not wired to combine their forces toward a common greater good of such majestic proportions; we are observing that fateful trait of ours in matters both short-term (the coronavirus) and long-term (global warming).
So until our very DNA changes, and with it our societal systems, we are stuck with Narrow AI. It will continue to bring meaningful innovation and make us more efficient over time in each of the domains it tackles, but the holy grails of Strong and General AI will remain a dream.
About the Author
Tobias Goebel has almost 2 decades of enterprise software experience, with roles spanning product management, sales engineering, and product marketing. As a product marketing principal at Twilio IoT, he now works on defining and evangelizing technology solutions that leverage the potential of connecting the physical world to the Internet.