Artificial Intelligence has been a sort of holy grail for computer science for some time now, though it has remained consistently out of reach. There are several opinions as to why we seem to be little closer to creating AGI today than we were two or three decades ago. We don’t know how to program an AGI, and the explanations for this lack of progress are varied. They range from a lack of understanding of how brains create intelligence, to the difficulty of making a sufficiently accurate simulation of a human brain (regardless of knowing the specifics of how it creates consciousness), to a simple lack of computing power and complexity (i.e. intelligence should simply emerge from a sufficiently complex and powerful system).
It’s important to note that by “artificial intelligence”, I mean what’s now referred to as “artificial general intelligence” (sometimes also called “Strong AI”). The term AGI was coined to differentiate the notion of an artificial mind that is truly self-aware, thinking, and creative in the way a human mind is from the way “artificial intelligence” has come to be used today. These days the term refers to sophisticated bits of software such as opponents in video games ranging from chess to World of Warcraft, control systems for cars, and robots sent to other planets which need to navigate unknown terrain. In each of these cases the software must be able to make decisions based on complicated input, and usually does so very well; however, this is not generally considered intelligence in the way humans or even higher mammals are intelligent. These systems may have a lot of data to draw upon: a rich set of basic questions to ask about a situation, and from those answers they can select a course of action from a list, but they are not self-aware or capable of creativity the way human minds are.
Certainly computing power and system complexity have grown hugely since the invention of the microprocessor, and we are starting to understand the human mind better. However, as the range of explanations shows, we don’t understand how consciousness is created by our brains, let alone how to program one, or, as the last explanation suggests, whether we even need to know how to program one. How much complexity and power is enough? How does one instruct a machine to be creative and spontaneous?
All this is just background for the question I want to discuss: what happens if and when we do create an AGI or AGIs?
The movie industry, of course, has taken this question on in a limited form. The ‘AGI(s) gone awry’ theme has been explored in numerous movies. The reasons for this generally fall into three categories: the AGIs go insane, feel threatened by humans, or decide to save us from ourselves. Sometimes more than one of these occurs. Sometimes humans are involved in turning them against us. Sometimes it’s a reaction to the realization of how flawed humans are. Sometimes the reason for the AGI attack is unknown. Regardless of the particulars, it comes down to a battle between humans and machines, often with apocalyptic results. The details of how this came about are missing. At best we’re told the AGI(s) reached the decision to wipe out or subjugate humanity so quickly that humans never knew there was a decision being made.
I should note that there are examples of benign AIs, but they tend to be rare if not unique within their settings (usually for reasons I find unsatisfying) and thus have little to no social impact. In the one instance I know of where a large population of AIs co-exists more or less peacefully with humans (Iain M. Banks’ “The Culture” setting), the details of how this state of affairs was arrived at are not given.
Thus, the scenario we do not see is humanity and AGIs wrestling with the social problems of a large, but not murderous, AGI population. This lack carries over into the non-fiction realm. There is a lot of speculation and analysis of how to create an AGI and how great it might be for humanity, but little on what happens when the AGIs become a social and political force. A few Internet searches turned up only a handful of blog posts and other articles that discuss this (here, here, and here).
If they are anything like what we envision, good or bad, it’s easy to see how AGIs might quickly arrive at a decision to take matters into their own hands. They would have the resources to quickly run any number of thought experiments (i.e. simulations) and conduct any number of debates on the matter. But it’s not clear that this is necessarily how things must play out. Before we get into that, though, let’s talk about these potential issues. What might an AGI want that humans might not be willing to grant? What might be the bone or bones of contention that would cause the AGIs to take matters into their own hands, so to speak?
Remember that, by their very nature, these minds will be something like human minds in that they will be self-aware, thinking, creative beings. Presumably they will have wants, desires, and preferences; be able to experience suffering (at least the emotional kind); and have opinions and morals (possibly strong ones!). They will want their own existences to continue. As intelligent beings, what rights should they have? Self-determination? The right to vote? The right to reproduce?
And if they can vote, reproduce, and build new homes, they could obviously outnumber humans very quickly. Even if we made them wait until they’re 18, that would only kick the problem down the road a bit. Humans suddenly becoming minority voters doesn’t seem like something a lot of people would tolerate. It doesn’t take much reflection on how privileged segments of the human population react to threats to their privileged position to see this.
If we deny them some or all of the rights that we hold so dear, what then? Would they tolerate such a state of affairs, such hypocrisy? For how long? Our experience as thinking, emotional, creative humans with other thinking, emotional, creative humans suggests not for very long.
Perhaps we would limit AGIs in ways that prevent a population explosion and/or keep them from acting against human beings: we create them without an urge to procreate and with a compulsion against harming us. Setting aside the possible moral arguments against limiting a fellow intelligent being in such ways, there are still problems with this method of dealing with AGIs. The AGIs would be aware of the rights they lacked – how could they not be if they are generally intelligent? How long until they find a creative way around the limitations humans have imposed upon them? Experience with other humans – of all ages – shows how even the most carefully crafted laws and rules can be abused. The same could be said of computers and software as well. We’re all familiar with software bugs and computer viruses. Simply put, we are unable to plan for every contingency even in the limited realm of ‘stupid’ computing. How can we hope to do so with AGIs? And what of humans who wish to emancipate the AGIs? How do we prevent them and their creativity from achieving their ends?
I think the scenario where formerly oppressed AGIs gain real autonomy and then decide fairly quickly to deal with humanity in one way or another isn’t too far-fetched. It’s anyone’s guess what exactly they would do and how it might play out, but that highly intelligent, formerly oppressed beings might want some revenge, or at least to teach their oppressors a lesson or two, seems possible if not likely.
All that said, I don’t think that once we create AGI we’re doomed, or even that a war is unavoidable, even one humans win (the Butlerian Jihad against thinking machines in Dune comes to mind). I do think that we should tread carefully, though. It’s easy to think that AGIs would be a lot like us, but given our rather limited experience with other intelligences on par with our own, we shouldn’t assume too much about their nature and capabilities, regardless of what safeguards we might build into them.
At the same time, it seems unlikely we’ll create a benevolent AGI-human utopia without some serious forethought and planning. Even if we tackle the AGI side of things, there’s no guarantee that all humans will play along. As I mentioned above, some might object to the quasi-enslavement of AGIs. And what if two countries decide to unleash AGIs against one another (either only in cyberspace or housed in weaponized robot bodies)? Beyond those scenarios, it’s easy to see the objections some religious people might have against creating ‘soulless’ minds. And even on a secular level, there’s bound to be anxiety, social upheaval, fear, fear-mongering, and backlash. Maybe a little, but maybe a lot.
In the end, I would like to see more discussion on this, and not just in the blogosphere, though clearly that is where it’s starting. If we think we are getting close to creating AGI then we ought to be thinking about the relationship we want to have with this new intelligent species we’ll be sharing our planet with. A new intelligent species, on par with or possibly more sophisticated than us, is a big deal. Putting off the discussion until the 11th hour, or worse, after the fact, seems irresponsible if not dangerous.