Saturday, May 10, 2008
Worthy of human treatment ...
Once as a child I asked a Jesuit whether dolphins should be treated like people if they turned out to be intelligent. I think I phrased it in terms of the question "whether they had souls", but regardless, the Jesuit's answer was immediate and clear: yes, they would have souls (and, by implication, should be treated like people) if they had two things - intellect and will. Years later I read enough Aquinas to understand what he meant. But when I tried to regurgitate my understanding of these concepts for this essay, as partially digested by my thinking on artificial intelligence, I found that what came up were new concepts, and that I no longer cared what Aquinas thought, other than to give him due credit for inspiring my ideas.
So, in my view, the two properties a sentient being needs in order to be treated with the respect due to fellow sentients are:
- Intellect: the ability to understand the world in terms of a universal system of conceptual structures
- Will: the ability to select a conceptual description of a desired behavior and to regulate one's behavior to match it
In this view, part of the reason we treat animals like animals is that their intellects are weak and, as a consequence, their wills almost nonexistent. While animals can learn basic concepts and perform basic reasoning tasks, it's extraordinarily difficult for them to assemble what they learn into larger structures that describe their world - for example, it takes years of intensive training for chimps to learn the basic language competencies a human child acquires in eighteen to twenty-four months. Without the ability to put together "universal" structures that describe behavior, your cat can't represent behaviors much more sophisticated than "I'm not allowed in the art studio," and hence is vulnerable to all sorts of hazards and prone to all sorts of misbehavior, because it simply can't understand that, for example, it's not a good idea to go out after 2am, since its owners won't be awake to let it in.
Similarly, children are wards of their parents because they haven't yet learned the conceptual structures of what they should do, and lack the self-regulation to guide themselves to follow what they have learned. Violent criminals become wards of the state for the same reason - either they didn't realize that it was a bad idea to hurt their fellow man, or, more likely, didn't bother to regulate themselves to refrain from doing so. A similar problem occurs for a variety of neurodiverse people who, for one reason or another, are not able to regulate their behaviors well enough to manage their lives without the assistance of a caregiver (though having various kinds of self-regulatory dysfunction is not necessarily a sign that someone lacks a sophisticated intellect, and there are a number of autistic people who would argue that we are too quick to discriminate; but I digress).
Regardless, intellect and will are ideas that bump around in my head a lot. Can something understand the world it's in, in abstract terms, and figure out its relationship to it? And given that understanding, can it decide what kind of life it should lead, and can it then actually live that life? Anything that can do that gets a free pass towards being treated with respect - if you have those fundamental capabilities, I'm inclined to treat you like a fellow sentient until and unless you prove me wrong.
We may seem to have gotten pretty far from souls here. But for the Christians in the audience, think about intellect and will for a moment. Something that had intellect could learn who Jesus was, and something with will could decide whether or not to follow him. And it wouldn't matter whether that was a neurotypical person, an autistic person, a talking dolphin or an intelligent machine. For the atheists in the audience, this may be an easier sell, but the point is actually still the same: something with a truly universal intellect could evaluate a system of beliefs it was presented with, and with a self-regulatory will decide whether or not it was going to agree with and/or follow that system of beliefs.
Thinking out loud here...
I'm actually working on an article now about when AIs should be treated ethically, and for me the core of it is their ability to feel pain, sadness, pleasure, and joy. It sounds as though you're assuming animals can feel these things, and perhaps you're right, but if a being cannot feel any pain, suffering, or dissatisfaction whatsoever, it's hard to justify the need to treat it ethically, including granting it freedom or respect.
As a thought experiment, does Commander Data (in emotionless mode) need to be treated with respect? What's the harm in NOT doing so?
Without the emotional and sensation angle, we'll soon be in danger of having to treat things like the agents in panicking-crowd simulations with respect. http://www.youtube.com/watch?v=T5VZFxRJ6ss