Artificial Intelligence, for Good and for Bad

Dave: Open the pod bay doors, Hal.

Hal: I’m sorry, Dave. I’m afraid I can’t do that.

Dave: What’s the problem?

Hal: I think you know what the problem is just as well as I do.

Some of you may remember those lines from “2001: A Space Odyssey,” Stanley Kubrick’s 1968 science fiction masterpiece, in which astronaut Dave Bowman tries to “reason” with Hal, the artificially intelligent command-and-control system of his spaceship.

I was pretty young when I saw the movie and was somewhat disturbed by my first, albeit fictional, encounter with the concept of artificial intelligence.

The thought that humans could build a machine that could dramatically outperform and outthink its creators was, to say the least, a bit scary. You know, the whole nightmare of a world controlled by machine masters served by their human slaves.

In hindsight, I had nothing to worry about in 1968. While some of the fundamental building blocks of AI had been laid, the technology was, for the most part, an aspirational dream.

But today, 50 years later, the rapid advances in AI and its increasingly ubiquitous integration into the hardware and devices we encounter in our daily lives have me greatly concerned.

And you should be as well.

It’s not that the technology in and of itself is frightening. While it will do so much more in the future, it is already greatly benefiting its human creators.

In medicine, AI is discovering new life-saving drugs, spotting cancer in tissue slides better than human pathologists and predicting hypoglycemic events hours before they actually occur.

In agriculture, it is detecting the precise locations of crop disease, down to a single plant, so pesticides can be applied with pinpoint accuracy rather than sprayed unnecessarily over an entire field.

In security, AI can detect malware a human cannot “see,” anticipate a fraudulent financial transaction before it can occur and recognize the face of a single terrorist in a crowd of thousands of people. And we are only getting started.

But artificial intelligence will also cause major disruptions and serious problems. Let me give you a couple of different scenarios.

AI is going to bring about massive job losses in nearly all job categories: from blue-collar jobs in construction, manufacturing and transportation to white-collar jobs in finance, health care and education. In fact, an Oxford University study used, ironically, a machine-learning algorithm to assess how easily 702 different occupations in America could be eliminated by advanced AI technology. The study concluded that fully 47% of those jobs are at high risk of being automated over the next decade or two.

Moreover, AI will cause an almost complete loss of privacy. Data are, or will be, collected on all of us from every device we interface with, from the light switches in our homes to the doors at our offices.

This will happen much as it already has: data are being collected from retail sales systems, electronic banking, browser histories, car navigation systems, personal assistants and smart watches. It adds up to a massive amount of data that will allow a machine to know you more intimately than any human being could. And a machine that can analyze all of that stored personal data will be able to build predictive analytics that accurately forecast what you will do in the next hour, day, week or month.

The United States, China and Russia are all in a new arms race to dominate AI militarily. If a clear and convincing winner emerges, it will have the ability to literally capture and rule the world.

There’s a chilling problem with AI in military weaponry. To avoid being easily rendered harmless by the enemy, these weapons will be engineered to be extremely difficult to turn off. So, in theory, we humans could lose authority over them. Recall the nightmare scenario of “2001: A Space Odyssey.”

Finally, even though artificial intelligence can be programmed to do something beneficial, it can become destructive in achieving that benefit. The classic conundrum: If you design an autonomous vehicle to make passenger safety its highest priority, will it kill a pedestrian to accomplish this goal?

Aligning artificial intelligence’s goals with human goals is going to be an extremely difficult problem to solve. And letting AI spread ubiquitously into the human world before we do could be a nightmare scenario worse than any movie.

We need to begin a very serious conversation about all of this now, before it becomes too late.

Copyright 2024 The Business Journal, Youngstown, Ohio.