Battlestar Galactica, Humanity, and Pokerbotting

A spirited debate has been raging on my LiveJournal about whether it is immoral to operate a pokerbot online, prompted by the fact that I have been working on one for many years. See:

Are bots against online casino terms of service?
Pokerbot Poll II
Morality Cage Match: Blackjack vs. Pokerbotting
Deception

Here is my latest contribution, which I will crosspost here for a broader audience.

Let's take a step back here, and meta-analyze this pokerbot morality issue. I think it's great that we are discussing this, because it is a morality question about the future. As computers move towards sentience, these issues are going to become more and more important, and more and more common. This is not a big discussion about a small issue; this is a big discussion about a big issue, one of the biggest issues of the future: What does it mean to be human?

Is it cheating to use an implanted computer to play blackjack? What about roulette - you are still just using public information, but a computer can analyze the ball's trajectory in a way that is (arguably) qualitatively different from what a person can do. Does it matter if the casino has posted a sign that says "Unaided humans only"? When playing a game, does it matter who the other people involved in a contest want to be playing against? Does it matter what they expect? Is it OK to use augmentation when it isn't against the rules just because people haven't thought of it yet? Is it not rude if people don't realize it? Should you respect other people's biases against augmentation, or sneer at them as old-fashioned?

Is a computer playing poker actually playing the game of poker, or is human psychology an essential part of the game even though it is never mentioned in the rules? Is a chatbot that pretends to be a person a fascinating exploration of the meaning of "person," or a rude sham? What if it's a fully sentient AI chatting? What if the AI is pretending to be a human because it is illegal for AIs to talk to people unmonitored? What if the AI is pretending to be a human because it wants to talk to children, and parents won't let their kids talk to machines?

Pretty quickly this gets us to the big ones: Can computers own property? Is it murder to kill them? Are Cylons people or toasters?

It is a fact of modern life that technological change is confronting us with new moral questions and casting doubt on answers to the old ones (I'll have more on that in another post). For an exploration of the (related) legal aspects of technological change, see David Friedman's Future Imperfect.

I am a radical and a dynamist - I embrace change and am uncomfortable with stasis. So it is no surprise that I am deeply bothered by the idea that it is wrong for a computer to engage in a widespread, publicly accessible, internet-based activity in which it can fully participate according to the rules of the game. It reminds me of xenophobia and anti-Cylon prejudice.

Of course, I am also very biased on the subject of pokerbots, because this is my brainchild, and because I stand to profit if silicon can out-think grey matter. So I am not making a moral argument here, I am not claiming that there is a slippery slope from saying that pokerbots are immoral to rioting against AI rights. I just wanted to point out that Human vs. Machine is an old and emotionally-laden conflict, and our culture is full of stories on the subject - John Henry, Terminator, Battlestar Galactica - because it resonates so powerfully. While I am biased in one direction, many people have strong biases in the other, and clashes between the two are only going to become more common.
