For some reason, we (meaning humans) have a tendency to anthropomorphize things, whether they are objects, animals, or phenomena around us. We assume that everything we interact with in our environment reasons and thinks as we do, that the things around us experience feelings and emotions the same way we do.
I do it with my dog quite often. I imagine him thinking about how much he likes to go for a walk, or how he wishes he could have steak for dinner every night. And while my dog does display some strangely human-like behaviors, it doesn’t change the fact that he is still a dog, an animal. A lot of what he does is instinctual or based on learned behavior as a result of routine or training.
A similar problem arises with artificial intelligence. Because of how it responds to our questions, we have a tendency to attribute human qualities to it. We think that it wants to please us or be our friend. We assume it feels remorse when it doesn’t understand us because it responds with “I’m sorry.” We’re amazed at how it knows the answers we’re looking for. While these things do feel oddly human, it doesn’t change the fact that we are dealing with a machine. The behaviors are based on the attributes programmed into it or learned from the data it’s fed. For both creators and users of AI, this is an important concept that must not be overlooked.