Friday, August 15, 2008

Murder, or not murder? A question about AI and Moral Theory

Imagine a computer that runs a program which generates a mind. The program runs with no input at all from the external world.

Shutting down that computer would be tantamount to murder, I think.

Now imagine another computer somewhere else that is running the exact same program. Since the execution is deterministic, the AI in this computer has the exact same sequence of thoughts and qualia as the AI in the original computer, the same phenomenal consciousness. This seems redundant, in a way... is shutting down this second computer as bad as shutting down the first one? Is it bad, but not as bad? Maybe it is not bad at all? What if the computers are not running at the same time, and the original execution of the program happened years ago?
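The claim that deterministic execution yields the exact same sequence of states can be sketched in a few lines of code. This is a toy model, not a mind: a seeded pseudo-random update (the seed value and the update rule are arbitrary choices of mine) stands in for the program's internal dynamics, and since nothing external feeds into it, two runs trace identical trajectories step by step.

```python
import random

def run_mind(steps, seed=42):
    """A toy 'mind' with no input from the external world:
    its entire state trajectory is fixed by the program text
    and the seed, so every run is identical."""
    rng = random.Random(seed)  # internal noise source, deterministically seeded
    state = 0
    trajectory = []
    for _ in range(steps):
        state = (state + rng.randint(0, 9)) % 100  # deterministic update rule
        trajectory.append(state)
    return trajectory

# Two "computers" running the exact same program:
first_computer = run_mind(1000)
second_computer = run_mind(1000)
assert first_computer == second_computer  # same sequence of states, step by step
```

In this toy setting the second run really is redundant in the information-theoretic sense: it produces nothing the first run did not.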

If you think that shutting down the second computer is murder:

Imagine that the computers are Turing machines whose parts (the tape, the head...) are very thin and positioned on a two-dimensional plane. You could place two of these machines in parallel, separated by only a few millimeters, executing the same program in unison, the exact same pieces moving at the exact same time. Now imagine that you close the millimeter-wide gap, gluing corresponding pieces of each machine together using Loctite. Now we have only one computer. But wait a minute... there were two AIs a moment ago! This is surely as bad as shutting down one of the computers... and yet the material components of both are still there, working as before, and doing exactly the same things. Murder, or not murder?

If you think that shutting down the second computer is not murder:

Imagine that the scheduled lifespan of the AIs is forty years. We modify the program so that no interaction with the external world is allowed for the first thirty years but, after that, previously dormant input-output devices on both machines are activated and, from that time on, the AIs can chat with people outside their computers. Since their respective environments are bound to be different, program execution will start to diverge at that point. If, knowing this, you meet the second AI when "he" is only ten years old and you shut it down, do you still think that your action wouldn't be murder?
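The divergence scenario can be sketched the same way. In this toy version (my own construction, with an arbitrary update rule), external input is ignored before a cutoff step and folded into the state afterward: the two executions are bit-for-bit identical for the first thirty "years" and separate the moment their environments differ.

```python
def run_mind(lifespan, inputs, input_start):
    """Toy mind: ignores external input before `input_start`;
    after that, inputs are folded into the state, so different
    environments make the executions diverge."""
    state = 0
    trajectory = []
    for t in range(lifespan):
        state = (state * 31 + 7) % 1000  # deterministic internal update
        if t >= input_start:
            state = (state + inputs[t]) % 1000  # external influence, once I/O wakes up
        trajectory.append(state)
    return trajectory

environment_a = [0] * 40
environment_b = [0] * 30 + [5] * 10  # environments differ only after "year" 30

a = run_mind(40, environment_a, input_start=30)
b = run_mind(40, environment_b, input_start=30)
assert a[:30] == b[:30]  # identical for the first thirty years
assert a[30:] != b[30:]  # divergence once the input devices are activated
```

At year ten the two trajectories are still guaranteed to be identical, yet only one of them will go on to become a distinct individual; that is the tension the thought experiment turns on.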

Edit. I have just found Daniel C. Dennett's essay Where Am I?, which touches on similar themes (only, in his story, one of the brains is biological). Dennett mentions the extreme difficulty of keeping two minds perfectly synchronized:

Since several of the most remarkable features of "Where am I?" hinge on the supposition of independent synchronic processing in Yorick and Hubert, it is important to note that this supposition is truly outrageous--in the same league as the supposition that somewhere there is another planet just like Earth, with an atom-for-atom duplicate of you and all your friends and surroundings,* or the supposition that the universe is only five days old (it only seems to be much older because when God made it five days ago, He made lots of instant "memory"-laden adults, libraries full of apparently ancient books, mountains full of brand-new fossils, and so forth).

I would say that, if we assume that the two minds are not biological, but instead programs running on digital computers, the supposition of independent synchronic processing becomes much less outrageous. One can conceive of supercomputers specifically designed for extremely reliable and reproducible execution of programs.
