Forget ANI, AGI, and ASI: think about ABI and ASlI

If you read about artificial intelligence research and AI safety these days, you'll likely come across some acronyms:

ANI: "Artificial Narrow Intelligence" is the type of artificial intelligence we already have.  Machine learning has been used for many years for a variety of tasks.  For instance, Netflix has an algorithm to notice what kinds of movies and shows you watch, and to recommend more entertainment based on what you've previously enjoyed.  This is "narrow" intelligence: it is very very good at a very, very specific task.  We have programs that are better than any human at playing chess, at playing go, at playing a wide variety of video games and board games.  But outside of that specific domain, the software is useless.

AGI: "Artificial General Intelligence" is, in theory, a form of AI that can do everything a human can do.  It can adapt itself to many kinds of tasks and therefore has a wide or unlimited range of skills and expertise.  This probably doesn't exist yet - although some (a minority of researchers) would argue that with the dawn of GPT-3, we are witnessing the dawn of AGI.  The exact boundary between ANI and AGI is is a matter of some dispute: just how narrow is "narrow"?  There's an old joke that goes, "AGI is whatever computers can't do yet."  The goalposts keep moving.  At one time, it was thought (even by brilliant minds like Douglas Hofstadter) that a computer wouldn't beat a grandmaster at chess until it had learned a lot more about what human life and human culture were like.  Various tests have been proposed - most famously, the "Turing Test," (or the "Imitation Game" as Alan Turing himself called it).  Some chatbots are already capable of tricking some people into thinking that they're human (not just chatGPT), though experts (and many other people) can see the difference fairly easily.  But it's not too hard to imagine that truly convincing chatbots are coming very soon.  Then there's my favorite, Steve Wozniak's test for AGI, the Coffee Test: a piece of software has developed AGI when you can put it in someone's house, and it can figure out where the kitchen is and make coffee.  

ASI: "Artificial Super Intelligence" is what science fiction speculators have imagined: an intelligence that is far beyond any human intelligence.  But many people have pointed out: wouldn't AGI already, more or less instantly, be ASI?  It would do everything a human can do, plus much more: it would be able to calculate (at least) as fast as a calculator, it would process information very quickly, it would never have to sleep, it would have focus and memory retention and recall far surpassing humans, and so on.

So these acronyms are unsatisfying in various ways.  I therefore propose a couple of new ones:

ABI: "Artificial Broad Intelligence."  This is somewhere between "narrow" and "general" intelligence.  I think this is where we are now, especially with OpenAI's GPT research.  We have machine learning algorithms trained on datasets to the point that they have quite a wide, and quickly growing, range of skills.  They are remarkably adaptable - they are certainly miles beyond the algorithm in Netflix that can predict your movie preferences (badly, in my opinion).  And yet they cannot do many things that most humans can do.  They cannot run, jump, walk, see, smell, play tennis, find the kitchen or make coffee.  I think we will have ABI for a long time, and ABIs will be considerably different from humans in a variety of ways, yet they will be astonishingly powerful and will totally transform our civilization.

...and I have one more proposal for an acronym:

ASlI: "Artificial Slender Intelligence."  Baked into the distinction between ANI and AGI is the implication that AGI is what we're seeking.  But should we really be striving for AGI?  I think the opposite may be the case: we should be figuring out ways to keep AI narrow.  What I call "slender" intelligence should be our goal, NOT AGI or ASI.  This is intelligence "on a diet," so to speak.  It is a variation on ABI, but in this case, we are actively working to prevent AIs from having certain human capabilities.  I think we should actively work to prevent an AI from feeling things like greed, rage, envy, and the will to power.  Thus we should actively work to prevent AIs from being capable of passing the Turing Test.
