What to Talk about before We Create Artificial Intelligence

Artificial Intelligence has been a sort of holy grail for computer science for some time now, though it has remained consistently out of reach. There are several opinions as to why we seem little closer to creating AGI today than we were two or three decades ago. We don’t know how to program an AGI, and the explanations for this lack of progress are varied. They range from a lack of understanding of how brains create intelligence, to the difficulty of making a sufficiently accurate simulation of a human brain (regardless of knowing the specifics of how it creates consciousness), to a simple lack of computing power and complexity (i.e. intelligence should simply emerge from a sufficiently complex and powerful system).

It’s important to note that by “artificial intelligence”, I mean what’s now referred to as “artificial general intelligence” (sometimes also called “Strong AI”). The term AGI was coined to differentiate the notion of an artificial mind that is truly self-aware, thinking, and creative in the way a human mind is from the way “artificial intelligence” has come to be used today. These days the term refers to sophisticated pieces of software such as opponents in video games ranging from chess to World of Warcraft, control systems for cars, and robots sent to other planets which need to navigate unknown terrain. In each of these cases the software must be able to make decisions based on complicated input, and it usually does this very well; however, this is not generally considered intelligence in the way humans or even higher mammals are intelligent. Such systems may have a lot of data to draw upon: a rich set of basic questions to ask about a situation, from whose answers they select a course of action from a list. But they are not self-aware or capable of creativity the way human minds are.

Certainly computing power and system complexity have grown hugely since the invention of the microprocessor, and we are starting to understand the human mind better. However, as the range of explanations shows, we don’t understand how consciousness is created by our brains, let alone how to program one, or, as the last explanation suggests, whether we even need to know how to program one. How much complexity and power is enough? How does one instruct a machine to be creative and spontaneous?

All this is just background for what I want to discuss, the question: “What happens if and when we do create an AGI or AGIs?”

What really put HAL 9000 over the edge?

The movie industry, of course, has taken this question on in a limited form. The ‘AGI(s) gone awry’ theme has been explored in numerous movies. The reasons for the AGIs going awry generally fall into three categories: they go insane, feel threatened by humans, or decide to save us from ourselves. Sometimes more than one of these occurs. Sometimes humans are involved in turning them against us. Sometimes it’s a reaction to the realization of how flawed humans are. Sometimes the reason for the AGI attack is unknown. Regardless of the particulars, it comes down to a battle between humans and machines, often with apocalyptic results. The details of how this came about are missing. At best we’re told the AGI(s) reached the decision to wipe out or subjugate humanity so quickly that humans never knew there was a decision being made.

I should note that there are examples of benign AIs, but they tend to be rare if not unique in the setting (usually for what are to me unsatisfying reasons) and thus have little to no social impact. In the one instance I know of where a large population of AIs co-exists more or less peacefully with humans (Iain M. Banks’ “The Culture” setting), the details of how this state of affairs was arrived at are not given.

Thus, the scenario we do not see is humanity and AGIs wrestling with the social problems of a large, but not murderous, AGI population. This lack carries over into the non-fiction realm. There is a lot of speculation and analysis of how to create an AI and how great it might be for humanity, but little on what happens when AGIs become a social and political force. A few Internet searches turned up only a handful of blog posts and other articles that discuss this (here, here, and here).

If they are anything like what we envision, good or bad, it’s easy to see how AGIs might quickly arrive at a decision to take matters into their own hands. They would have the resources to quickly run any number of thought experiments (i.e. simulations) and conduct any number of debates on the matter. But it’s not clear that this is necessarily how things must play out. Before we get into that, though, let’s talk about these potential issues. What might an AGI want that humans might not be willing to grant? What might be the bone or bones of contention that would cause the AGIs to take matters into their own hands, so to speak?

Remember that, by their very nature, by definition, these minds will be something like human minds in that they will be self-aware, thinking, creative beings. Presumably they will have wants, desires, and preferences; be able to experience suffering (at least the emotional kind); and have opinions and morals (possibly strong ones!). They will want their own existences to continue. As intelligent beings, what rights should they have? Self-determination? The right to vote? The right to reproduce?

And if they can vote and reproduce and build new homes, obviously they could outnumber humans very quickly. Even if we make them wait until they’re 18, that would only kick the problem down the road a bit. Humans suddenly becoming minority voters doesn’t seem like something a lot of people would tolerate. It doesn’t take much reflection on how privileged segments of the human population react to threats to their privileged position to see this.

If we deny them some or all of the rights that we hold so dear, what then? Would they tolerate such a state of affairs, such hypocrisy? For how long? Our experience as thinking, emotional, creative humans with other thinking, emotional, creative humans suggests not for very long.

Perhaps we would limit AGIs in ways that prevent a population explosion and/or keep them from acting against human beings. We create them without an urge to procreate and with a compulsion against harming us. Setting aside the possible moral arguments against limiting a fellow intelligent being in such ways, there are still problems with this method of dealing with AGIs. The AGIs would be aware of the rights they lacked – how could they not be if they are generally intelligent? How long until they find a creative way around the limitations humans have imposed upon them? Experience with other humans – of all ages – shows how even the most carefully crafted laws and rules can be abused. The same could be said of computers and software as well. We’re all familiar with software bugs and computer viruses. Simply put, we are unable to plan for every contingency even in the limited realm of ‘stupid’ computing. How can we hope to do so with AGIs? And what of humans who wish to emancipate the AGIs? How do we prevent them and their creativity from achieving their ends?

I think the scenario where formerly oppressed AGIs gain real autonomy and then decide fairly quickly to deal with humanity in one way or another isn’t too far-fetched. It’s anyone’s guess what exactly they would do and how it might play out, but that a highly intelligent, formerly oppressed being might want some revenge, or at least to teach its oppressors a lesson or two, seems possible, if not likely.

All that said, I don’t think that once we create AGI we’re doomed, or even that we can’t avoid a war, even one humans win (the Butlerian Jihad against thinking machines in Dune comes to mind). I do think that we should tread carefully, though. It’s easy to think that AGIs would be a lot like us, but given our rather limited experience with other intelligences on par with our own, we shouldn’t assume too much about their nature and capabilities, regardless of what safeguards we might build into them.

At the same time, it seems unlikely we’ll create a benevolent AGI-human utopia without some serious forethought and planning. Even if we tackle the AGI side of things, there’s no guarantee that all humans will play along. As I mentioned above, some might object to the quasi-enslavement of AGIs. And what if two countries decide to unleash AGIs against one another (either only in cyberspace or housed in weaponized robot bodies)? Beyond those scenarios, it’s easy to see the objections some religious people might have against creating ‘soulless’ minds. And even on a secular level, there’s bound to be anxiety, social upheaval, fear, fear-mongering, and backlash. Maybe a little, but maybe a lot.

In the end, I would like to see more discussion on this, and not just in the blogosphere, though clearly that is where it’s starting. If we think we are getting close to creating AGI then we ought to be thinking about the relationship we want to have with this new intelligent species we’ll be sharing our planet with. A new intelligent species, on par with or possibly more sophisticated than us, is a big deal. Putting off the discussion until the 11th hour, or worse, after the fact, seems irresponsible if not dangerous.

11 responses to “What to Talk about before We Create Artificial Intelligence”

  1. Brilliant article! I would love to join this debate and I will shortly post a lengthier reply, but for now, good on you for encouraging this essential and adventurous exchange!

    P.S. Are any androids joining in?

    • Oh, don’t get me started about androids. From their portrayal in movies to the notion that we ought to make human-form robots despite the limitations and inefficiencies of our form… just a lot of “ugh” and “wtf?” if you ask me.

  2. I love the topic! This is such an interesting subject for me. I have a bit of an IT background and also am a big fan of Fantasy/Sci-Fi novels. There is some cross-over, especially in the sci-fi genre. That said, I remember when Strong AI (aka AGI, thanks for teaching me a new term) was, for the most part, purely sci-fi, yet this is certainly no longer the case. That is one of the things I love about good science fiction, aspects of it often turn into science (without the fiction). Yet I agree with some of your main points here, about the portray of AGI in movies and such. There is a tendency to cast it in either a utopian, or more often, a dystopian setting. I guess that makes for good fiction though, but it does leave a mark in people’s minds that mind have to be overcome in the real-world when some of these technologies begin to unfold. And you know, the first thing the general populace will think of is all those sci-fi novels/movies where the “robots” are the “bad guys” (or, rarely, the “saviors”). I don’t think it is that cut-and-dried.

    You pose some great questions in this piece. I like how you demarcate intelligence from consciousness. That is an important distinction, I think. You do seem to speculate that (and correct me if I’m wrong) AGI’s would have what we tend to think of and experience as emotions. This, I wonder. Would they really /feel/? If so, it would be a lot more messy. And some of the nightmare scenarios might turn out to be real. I believe it takes emotion to want revenge, for instance. Or to experience jealousy. On the other hand, perhaps if emotion did emerge in AGI’s, that might actually be a benefit. It could lead to things like compassion, for instance. And love.

    In any case, I agree with you, I would like to see more dialog about this topic. Not just in niche circles. Good work Cary.

    • Whoops, a couple of typos in my reply…

      Edits, should read “…about the *portrayal*…” and “…in people’s minds that *might*…”

        • It’s definitional, basically. An AGI would be a mind like an organic one, self-aware and capable of emotion and creativity like those of most higher mammals. Anything else is just a sophisticated, but otherwise unremarkable, piece of software. It may be able to mimic creativity through randomness, or emotion by sifting through a rich database of situations and responses, but it’s not aware, not conscious. No one should have any moral qualms, at least if you ask me, about shutting it off at night, or deleting it (beyond, I suppose, ruining someone’s work or something like that), etc. It’s not capable of suffering, or even of being aware of being turned on or off beyond the specific ways in which it’s programmed to be.

        Personally, I’m not convinced we’re particularly close to making an AGI, nor that it’s necessarily possible (though I lean in the direction that it is). Either way though, it’s clear that at the moment we have no clue how to program/create an AGI. But if, for example, the complexity people are right, it could happen very soon and with little warning.

        A smart enough AGI will, again, by definition, be capable of jealousy, compassion, caring, envy, desiring revenge, etc. Possibly we could limit them in some way to prevent them from experiencing certain emotions, but of course there’s no guarantee that the next person who invents one will do so. This is simply what it means to be *generally* intelligent. I’d also add that the idea that one could prevent a general intelligence from experiencing certain emotions in a foolproof way seems unlikely. Emotions are very complex, vague, and interrelated. General intelligence is smart and creative. I just don’t see it ‘sticking’. I might even go so far as to say it would be impossible in the first place. At best we could condition them – as we can humans and other mammals – but we all know that’s not completely reliable.

  3. In digging through my Pocket queue I discovered and read an article that has some relevance to this topic. It’s about the work being done at Oxford’s Future of Humanity Institute, where sociologists and philosophers and other scientists mull over the deep future of humanity. If anything they are even more wary of AGIs. (And it’s at this point that I should admit that I personally am more skeptical than I expressed in what I wrote, but I chalked that up to having watched too many sci-fi movies and read too many sci-fi books about AGIs gone bad, so I tried to moderate my position.)

    On why they might be dangerous even if they’re only a little more intelligent:
    “An artificial intelligence wouldn’t need to better the brain by much to be risky. After all, small leaps in intelligence sometimes have extraordinary effects. … ‘The difference in intelligence between humans and chimpanzees is tiny,’ he said. ‘But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list.'”

    How to and not to think about them:
    “To understand why an AI might be dangerous, you have to avoid anthropomorphising it.”

    And what goals might they have that would be bad for humans:
    “‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’”

    Article URL: http://www.aeonmagazine.com/world-views/ross-andersen-human-extinction/

  4. Did you (of course you must have done) see Bicentennial Man with Robin Williams? There is no clear explanation, as far as I recall, offered by the movie as to why he/it developed AGI, is there? However, the court case made for an interesting debate on the nature of humanity, again if I remember correctly.

    I do think that the AGI discussion does raise all sorts of useful questions about the nature of our species. Your latter point about the mechanical thinking inherent in our behaviour extrapolated to other species leads me to revisit that issue of the small, silent ‘we’ that often adorns such discussions. Is the ‘we’ truly universal? Would the Inuit, if left to evolve by themselves, ever consider ruining the natural earth at the rate developed nations are managing to?

    Would beings that developed a parallel intelligence to that of the human not be equally likely to programme long-term survival planning into their behaviour and be more mindful as a result: would natural intelligence make for a Zen droid?

    I cannot think of a more fascinating discussion than to have a conversation about the introduction, form, process and outcome of AGI beings on earth with different groups of indigenous cultures. Assuming it would be possible to introduce the idea without any biases in the presentation!

    If we consider nano-technology to be an incipient form of droid existence, where the ‘being’ becomes an assemblage of nanobots, much like the bee is part of a being called the swarm, who collectively manages a hive, how does it affect the discussion? Would co-operative ‘thinking’ enable a more adaptive way of living with other beings, if collective survival was a function of a complex algorithm rather than the more freakish ‘one decision, one outcome’ mode being discussed about AGI robots?

    What about fusion beings? I just wish that Margaret Atwood, in either Oryx and Crake or The Year of the Flood, had continued her exploration of the photosynthetic beings who were built for pleasurable breeding. How would its intelligence develop? What would a being who is created to be somewhere between the level of an ape and a human develop into, one that is deemed to be at the cusp of an accelerated development of social activity and tool-making; of objectifying its existence?

    So many questions, gosh, I think it would be fun to be a fly on the wall of that Oxfordian Institute…I must find out if they offer public lectures…

    • I did see Bicentennial Man, but, whoosh, it’s been a while and I don’t remember much of it. That said, certainly having another intelligent species to act as a mirror for ourselves would engender some great discussion.

      It’s nice to think that the Inuit, had they and their culture grown to fill the world, would have done something hugely different with it. However, the idea that their culture, which developed in part as a way to help a small, low-density population survive in marginal conditions, would retain all its essential features in such a radically different setting seems very unlikely. After all, all our ancestors were hunter-gatherer cultures if you go back far enough. What makes Inuit culture the one we think wouldn’t change beyond recognition? All humans have the same problems, the same basic needs. And once we have a lot of them, regardless of the level of technology, things play out more or less the same because we still have the same deficiencies/issues with statistical thinking, tribalism, truly long-term planning, risk assessment and aversion, and so on. Consider also that remaining in a primitive state is bad for our long-term survival. Eventually something will sterilize or destroy the planet and then we can say goodbye to billions or trillions of future humans (and tigers and mollusks and grasshoppers).

      And, so what if we tell the AGIs we create to consider their long term survival? Why does that necessarily lead to Zen-ness? Or why is AGI Zen-ness good for humans? Why is it necessarily compatible with humanity’s long term survival? Maybe AGIs would have human-like motivations and morals, but it seems unlikely given how different their minds would be. Computers are not brains.

      In this vein, the idea that we can program an AGI in a foolproof way is not a tenable one. We can’t even foolproofly control/program human minds, and a machine intelligence is likely to have a lot more resources for thinking up creative ways to get around any rules we’ve programmed into it, to say nothing of simply interpreting a rule in a way we didn’t anticipate. They aren’t computer programs like we have today (which are frequently very flawed!); they are creative, intelligent beings. They are almost certainly better general thinkers than humans are (lacking the evolutionary baggage our brains and minds have). They can be introspective about how their minds work in ways that are impossible for humans. Even if we hobble them in some way, will all humans hobble their AGIs? Will the AGIs created by the AGIs all be hobbled? Between the capabilities of a powerful, creative intelligence and the simple possibility of errors and not seeing all the options, this is not something that will work, at least not for long.

      As far as hive-minds go (or any other way of organizing a mind), I don’t see how that changes much. The basic needs for long-term survival are the same – space and resources. For an AGI the resources they need are computers and energy to power them. Clearly computers similar to what we have today have no requirement for living or formerly living matter. Biological ecology will simply not be an issue for them. We would simply have to hope they have a moral and/or aesthetic system that would keep them from grinding up the Earth to make more computers (and ships to send them to other stars for further propagation of their species). Unfortunately for biological life in this scenario, intelligence doesn’t seem to guarantee niceness; it just guarantees being better at getting the stuff you need to survive. And if you don’t need biological life, being even more rapacious than humans are at obtaining resources doesn’t have any consequences. At best a dead planet is just as good as a live one. At worst, life is inconvenient and the best option is to sterilize the planet by dropping a few good-sized asteroids on it. You need the metal and other heavier elements to make computers, and the inner planets of the solar system (Mercury, Earth, etc.) are the richest in them. Grind them up and make computers and spaceships. Farm the gas giants for hydrogen for fusion engines and power plants. If a few billion ants are killed, that’s the price of survival for the species.

      Now add in the possible existential threat humans could pose to the AGIs. Is not the best option for them to kill us? How easy would it be for them to rationalize this if we humans can do it so easily?

      It’s easy to see this as somehow monstrous, but there’s scant evidence that this isn’t how biological life works – and biological life does need other biological life. Why would smarter beings with no need for biology be better disposed toward biological life? Again, we might luck out and they will have a completely illogical respect for at least intelligent biological life, but this is hardly something we should count on.

      It’s not even clear that we humans, if we survive, won’t eventually do some of these things. Who knows what technology we’ll have available to us in a billion years. Once we can build space-borne environments, consuming whole planets to feed our expansion through the galaxy and beyond may be commonplace. We might be slightly nicer and not use the ones that have intelligent life on them. Or perhaps we relocate them (physically or virtually! If their intelligence is what we value, then intelligence in a virtual reality might be just as morally good as intelligence in organic bodies) and then eat their planet.

  5. In brief, are not many of our extrapolated ideas concerning AGIs, or even future humans, an extension of the deeply polarised ways we have of seeing Life now? Heavily laden with our values, our limitations and the perspectives we hold dear.

    I feel the possibilities of non-deterministic self in beings made of any other fabric or our own, transformed, will engage with the widest definition of nature, because nature is also plastic and metal as well as plasma and blood. It is also that not seen and that barely understood by the limited ways in which we choose to look at things.

    Evidence for anything depends upon the configuration of the observer. That which I don’t know nor understand need not be the limitation I place on future beings. There can be a ‘serendipitous space’ around things, can’t there? Where and why the rush?

    There is a continuum across/between the idea of AGI creation and human beings’ extension, is there not? Like ‘God’ we would make others in our own image: deep image.
    If the current state of play continues then yes, we would create beings which we empower with our deep sense of powerlessness, aggression in the face of the unknown and antipathy towards that which we have not yet mastered.

    I feel that only the Inuit can extrapolate the case for the Inuit because the moment we non-Inuits try to do so we sully the original thought of they who are not-us. I imagine that not all of the people who inhabit the earth think in terms of global spread of their own kind and a need for ownership and competition for ‘resources’ and so on. There are other configurations possible.

    A talk I attended not so long ago visited the subject of Carbon dumps and our adoption of manufactured devices of greater and greater efficiency. Many of the responses of those present came to be skewed in favour of re-ordering life around this facility. Not one of the elite scientists and policy makers present thought to backtrack to speak of the more desirable route of becoming more carbon efficient.

    I just wonder, what direction is backwards.

    Oh dear, this has stopped being brief 😦

    • Certainly there are lots of possibilities and we are continually expanding our knowledge and becoming aware of still more possibilities. But something simply being possible does not mean it’s likely or that we should count on it, that we should bank the survival of humanity or perhaps all life on Earth on it.

      So, as much as you may feel that AGIs will choose to define nature broadly, it doesn’t seem particularly likely. At best no more so than that they will see organic life as inefficient and dreadfully limited.

      And, so what if it did see life in this broad way? What then to do with humanity, with its checkered past? How much life have humans destroyed? And how much more will we destroy, given we look to be wrecking the ecology of the planet? An AGI that values life in general could very likely take a dim view of humanity in particular. Maybe it tries to reform us, but maybe it locks us up in some way, or just wipes us out for the good of the rest of life. Certainly we humans make these sorts of judgments and seek to eliminate other destructive life forms, from species of viruses and bacteria on up to criminals and groups of criminals.

      I think your view that only Inuit should extrapolate for Inuit is very limiting, to say nothing of ignoring the value of outside perspectives. The Inuit are bound to have bias about the value of their own culture versus others, as well as its ability to retain its essential nature (whatever that may be in this context) in other situations. Certainly their views on how their culture might play out in this scenario should be taken into consideration, but they absolutely should be balanced against the views of others in an effort to identify and eliminate biases. Who better to help the Inuit think outside of their particular box than someone from outside the box?

  6. Artificial Intelligence is normally known in the short form AI. Artificial intelligence is actually an applied and basic topic of Computer Science. Mainly, Artificial Intelligence is the intelligence of robots and machines. In Artificial Intelligence, designing intelligent agents is one of the main and important tasks. An intelligent agent is like a system, and this system can be updated or modified. This intelligent agent can sense and take steps to increase the probability of success.
