For some reason, we (meaning humans) have a tendency to anthropomorphize things, whether they are objects, animals, or phenomena around us. We assume that everything we interact with in our environment reasons and thinks like us, that the things around us experience feelings and emotions the same way we do.
I do it with my dog quite often. I imagine him thinking about how much he likes to go for a walk, or how he wishes he could have steak for dinner every night. And while my dog does display some strangely human-like behaviors, it doesn’t change the fact that he is still a dog, an animal. A lot of what he does is instinctual or based on learned behavior as a result of routine or training.
A similar problem arises with artificial intelligence. Because of how it responds to our questions, we have a tendency to attribute human qualities to it. We think that it wants to please us or be our friend. We assume it feels remorse when it doesn’t understand us because it responds with “I’m sorry.” We’re amazed at how it knows the answers we’re looking for. While these things do feel oddly human, it doesn’t change the fact that we are dealing with a machine. The behaviors are based on the attributes programmed into it or learned from the data it’s fed. For both creators and users of AI, this is an important concept that must not be overlooked.
In my first post about AI, I touched on the opportunities for AI to improve our productivity and efficiency. There are real opportunities to use machines to improve our quality of life. However, we need to be careful about assuming the machines are naturally inclined to act in our best interest. AI is not a sentient being; it’s a tool that we use to accomplish tasks.
It’s like using a hammer. A hammer doesn’t choose to hammer a nail into a board. We choose to use and direct a hammer to accomplish the task. AI must be thought of the same way. It’s something we use to aid us in our work or to achieve an outcome. The machine is not human. Without guidance and direction, it is fundamentally useless.
At least today it is.
In my second post on AI, I discussed the concerns about runaway artificial intelligence. Numerous very intelligent people, far smarter than me, have raised warnings about the machines taking over. It raises a question in my mind: can machines become sentient beings capable of feeling emotions? Can machines exhibit consciousness and self-awareness?
I don’t know the long-term answer to these questions, but it’s clear that today the answer is no.
However, researchers are trying to understand the source of human emotions. They are trying to figure out how to recreate human consciousness outside of a human body. I’ve read plenty of science fiction books where the robots exhibit self-awareness, where they develop and learn faster than the human mind can comprehend. Maybe it’s not possible today, but that doesn’t mean it won’t ever be possible. There are plenty of things we take for granted today – air travel, computers, the internet, mobile phones – that humans 100 years ago would never have dreamed possible.
When will AI be conscious and self-aware? I don’t have the answer. I doubt anyone does. In the meantime, we need to understand that machines aren’t sentient beings. They don’t have a conscience or a soul; they don’t understand ethics, have morals, or know right from wrong. The machines have to be programmed with these rules or provided data to “learn” these rules. Basically, just because today’s machines may act like humans and display human characteristics doesn’t mean they are human. AI is a reflection of the algorithms used to create it and the data the creators give it to “learn” from.
So the next time you interact with Siri, Alexa, or any other AI, remember that you are using a tool to do a job. Today’s AI is no more human than my dog. If anything, it’s less so.