Rise of the Machines, Part 2

In my first post about the rise of the machines and the emergence of artificial intelligence, I talked about the possibilities and opportunities. That post embodies my general opinion that change isn’t something to resist. Resistance is futile, especially when it comes to technology. Instead, change is something to embrace. The earlier it is embraced, the better we, as a whole, can prepare for the opportunities and guard against the downsides.

While I am generally optimistic about artificial intelligence, I do have concerns. If we are going to reap the benefits that the technology has to offer, we need to acknowledge the risks and downsides. We must make sure that the provisions to protect against potentially bad outcomes are put in place. Given how fast technology advances, particularly AI, these provisions need to be created and enacted sooner rather than later.

Concern #1 – Propaganda, Deepfakes, and Deception

Generating news articles, photos, videos, and voice recordings with artificial intelligence has become far too easy. It opens us up to propaganda, deepfakes, and outright deception. Just look at the widely shared images of the Pope wearing a puffy jacket. If the picture weren’t so outlandish, it might have been believable. And therein lies the problem. It’s not the outlandish photos that have the potential to sway public opinion; it’s the ones that are close enough to reality to be plausible. The same goes for news articles, videos, and even voice recordings. AI can alter our perception of reality. Without human intervention and a means to verify content, it could cause us to lose confidence in anything we read, see, or hear.

Concern #2 – The Devaluation of Human Creativity

Machines are being trained to generate content using copyrighted and limited-use material. In many cases, the training is being done without the creators’ consent. I understand that derivative works are a crucial part of copyright law, but that allowance applies to humans, who have a limited ability to rapidly produce such works. Machines can generate thousands of derivatives in a short amount of time, diluting and devaluing the artistic work of the creator. If I can have a machine generate thousands of derivatives of an original work for virtually no cost, why would I ever compensate the original creator?

The issue cuts across all creative endeavors, including music, art, cinema, coding, legal work, and more. Humans are held liable for copyright infringement. Why shouldn’t machines, or those controlling and operating them, be held to the same standard?

Concern #3 – The Pace of Advancement

Machines can operate orders of magnitude faster than ordinary humans, which means they can run circles around our governmental and bureaucratic structures. Yes, we love to hate on those structures, but we rely on them to ensure that society continues to operate in a fair and orderly manner. The internet and mobile computing have already exposed how quickly technology can outpace regulation, and AI will only exacerbate the problem. It’s moving forward too fast for the structural supports society relies upon to keep up.

Concern #4 – Ethics

While we like to think of robots as human, they are not. AI does not have emotions. It is simply code inside a machine. Yes, the code makes the machine capable of learning, but it doesn’t experience an emotion like empathy, even if it says it does, nor does it experience pain. Therefore, it has no concept of right or wrong, of ethics. It relies on the people writing the code and the companies operating the machines. Unfortunately, those companies are locked in a “race to the bottom,” willing to do whatever it takes to build the most powerful algorithm. The companies recognize that AI has the potential to be a winner-take-all game, and the lack of ethics in both the machine and the company coding it is not an encouraging combination.

The machines do not understand the impact they are having on society, nor do they care. Humans are being displaced by the technology. Those most at risk are the most vulnerable, and the capabilities are coming online faster than our ability to protect these groups (see #3). How are displaced people going to be retrained? Who’s going to bear the costs of the transition? Sure, transitions like this have happened before, such as the Industrial Revolution, but they took place over a much longer period of time than what we are experiencing with AI. Companies developing and operating the technology have a fiduciary responsibility to society, similar to professions like medicine and law. For example, doctors who bring harm to others are subject to malpractice claims. Those who develop or use AI that disrupts society, regardless of intent, should be held liable for the damages they create.

Concern #5 – Guardrails, or the Lack Thereof

In Isaac Asimov’s science fiction classic, I, Robot, the machines were governed by three laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

There have been all sorts of analyses critiquing the validity of these laws. I’m not here to defend them, nor am I proposing that they be used to govern AI’s development. My point is that there are few, if any, rules or laws governing AI development today. Given the potential for destruction that AI could inflict, especially when the technology is militarized, it would be wise to have some guardrails in place that govern the technology’s development and place reasonable restrictions on its usage.

Concern #6 – The Singularity

The singularity is defined as the point at which AI transcends human intelligence, blurring the line between humans and machines. The key concept behind the singularity is its irreversible nature. Once the line is crossed, there’s no going back. Some analyses even describe the concept of runaway artificial general intelligence (AGI), in which the intelligence of the machines increases so rapidly that humanity cannot stop or control the advancement. In apocalyptic scenarios, should the machines determine that humans are a resource to be optimized (equating humans to bags of organic material and water), they could optimize humanity out of existence before we even realized what was happening.

Are We There Yet?

Is it possible that we’ve already eclipsed the singularity? How would we know? Programmers are training the machines, but they aren’t in control of them. They don’t know exactly what the machines are doing or what output is being generated. We could very well be approaching the singularity, be in the middle of it, or be well beyond it and not even know it. Could we be in the midst of a boiling frog syndrome of our own making? Are we getting so comfortable with the technology, with what it tells us to do or not to do, that we won’t recognize the fact that we’re being boiled alive until it’s too late?

So yes, I’m embracing AI. I do believe that the technology can be a force for good. However, we need to recognize the downsides and the risks, and we need to plan and compensate for them adequately. If and when we do, we can achieve a state of equilibrium where the machines, with the proper protections and guardrails in place, help us reach new levels of enlightenment. The time to act is now, to put those protections in place before it’s too late.
