Thursday, April 16, 2015

10 Questions about Conscious Machines

How will we handle the rights of AI?


For thousands of years, people have watched the skies and wondered if our species is alone in the universe — and what it would mean if we’re not. But lately, we have reason to think that we will build the answer long before we get a call from the stars. What happens to our institutions when the machines wake up? Max Borders raises ten questions we’ll need to answer. -Ed.

In the past year or so, there have been a lot of films about artificial intelligence: Her, Chappie, and now there's Ex Machina.

These films are good for us.

They stretch our thinking. They prompt us to ask serious questions about the possibility and prospects of conscious machines — questions we may need to answer if we must someday coexist with newly sentient beings. Some of these questions may sound far out, but they force us to think critically about important principles for the coming age of AI.

Ten come to mind.

  1. Can conscious awareness arise from causal-physical stuff — like material assembled (or grown) in a laboratory — to yield a sentient being?

  2. If such beings become conscious and aware and acquire volition, does that mean they could also experience pain, pleasure, and emotion?

  3. If these beings have human-like emotions, as well as volition, does that mean they are owed humane and ethical treatment?

  4. If these beings ought to be treated humanely and ethically, does that also confer certain rights upon them — and are they equal to the rights that humans have come to expect from each other? Does the comparison even make sense?

  5. If these beings have rights, is it wrong to program them for the specific task of serving us? What if they derive pleasure from serving us, or are programmed to do so?

  6. If these beings have rights by virtue of their consciousness and volition, does that offer a philosophical basis for rights in general?

  7. If these beings do not have rights that people need to respect, could anything at all grant rights to them?

  8. If these beings have nothing that grants them ethical treatment or rights, what makes humans distinct in this respect?

  9. If we were able to combine human intelligence with AI — a hybrid, if you will, in which the brain was a mix of biological material and sophisticated circuitry — what would be the ethical/legal status of this being?

  10. If it turns out that humans are not distinct in any meaningful sense from robots, at least in terms of justifying rights, does that mean that rights are a social construct? 

These questions might make some people uncomfortable. They should. I merely raise them; I do not purport to answer them here.

I invite your invective and your reasoned, tentative answers in the comments.


  • Max Borders is the author of The Social Singularity. He is the founder and Executive Director of Social Evolution—a non-profit organization dedicated to liberating humanity through innovation—and co-founder of the Voice & Exit event, as well as a former editor at the Foundation for Economic Education (FEE). Max is a futurist, a theorist, a published author, and an entrepreneur.