By Stacia Erdos Littleton
YOUNGSTOWN, Ohio – In the late 1980s there was a short-lived television show featuring the first computer-generated news presenter, “Max Headroom.” Although he was played by a real-life actor, I remember joking that eventually all of us in broadcast journalism would be replaced by Max Headrooms. The thing is – back then, there was no question Max wasn’t real, with his clunky hand gestures, jerky head moves and glitchy speech.
Fast forward 30 years and I’m sitting in a Youngstown Press Club luncheon being asked which of two videos of a woman speaking is real and which is AI (artificial intelligence) generated. Easy peasy, right?
I got the answer wrong. (So did my Business Journal colleague seated next to me, Dan O’Brien. So I didn’t feel that bad!)
The featured speaker was Nikita Roy – an award-winning podcaster and expert on AI in the media. The topic: how the news industry can use, and already is using, AI – more than we know.
Here’s a synopsis:
Some 400 newsrooms are using it to generate headlines. Put in your article and voila, AI spits out a bunch of headlines to choose from. Well, that would certainly save time.
You can use it to transcribe an interview and then have it write an article from it. Say you’re assigned to read and report on a newly released 70-page document. Enter it into an AI model and it will spit out everything you need: key findings, analysis, a summary and even trends. Definitely a time saver.
Not an Excel whiz? You can input data and ask AI to create a bar chart for your next presentation.
In Adobe Firefly, you can find b-roll for your video story and have it generate captions. You can also alter the video – place your subject in a completely different setting, or add an object or a person who wasn’t even there!
A media company can use AI to personalize the news. As AI “learns” what you like, it will then offer you more articles geared toward your reading preferences.
But the more I listened, the more something nagged at me. I realized, intentional or not, it was the personification of these large AI models. Especially when Nikita Roy talked about the risks – and pointed to the “hallucinations and their ability to make things up.” Yikes!
“I call them an insecure intern. They never want to tell you they don’t know something,” she explained to the group. “If you ask it a random question, it won’t tell you it doesn’t know. It will just make it up. And that’s where it gets really scary if you are not having human judgment, a human in the loop in these AI systems …” Uh, yeah!
And to my non-high-tech mind, her remedy for this seemed a tad simplistic.
“You have to tell them: If they don’t know something, it’s OK. Just leave it blank.”
I mean we wouldn’t want to make the insecure intern feel bad, right?
Another concern – “they” memorize. They can memorize a New York Times article and later reproduce it verbatim in an article you ask them to write for you! Plagiarism, in other words. And the liability is on you!
Roy repeatedly emphasized that AI does NOT generate knowledge. It generates language – which can be wrong – meaning you need to double-check pretty much everything it tells you against another source.
Another red flag – she says we don’t know how these models will respond even a year from now because the technology is evolving so quickly.
Then why are we risking it? I realize we can’t put the genie back in the bottle but maybe we shouldn’t have let it out until we understood it better.
It’s apparently documented that AI can be used to make us more productive and more creative. Roy cited a study of some 750 consultants at the Boston Consulting Group. Half were given ChatGPT to use. Those using AI completed 12% more tasks, worked 25% faster and produced 40% higher-quality work than those without it. It also helped the typically low performers level the playing field in productivity.
All this has me wondering – what does this mean for our education system and our livelihoods? Apparently, AI doesn’t do so well on math tests. But those of us who write for a living might want to pay attention as AI can pass bar exams and English tests, ranking in the 80th percentile.
I recently watched a YouTube video about a Wall Street Journal reporter who made an avatar of herself to see if it could fill in and give her a day off. Even her sister couldn’t tell it wasn’t her on the phone.
There were some glitches, to be sure. But The Journal reporter remarked it was so nice not to have to get dressed up for work and go in for a presentation when she could just have her AI-generated self be there virtually at the meeting.
I mean, would we even know if NBC’s Lester Holt were sick one day and NBC just used an AI Lester to deliver the news that night? A world leader could be kidnapped, and the captors could use an AI-generated avatar to have him or her say whatever they want – even declare war. We already have trouble agreeing on facts and truth as it is. What happens when we have absolutely no trust in anything we hear or see?
On the ‘60s TV show “I Dream of Jeannie,” when she was upset, Jeannie could turn into a puff of smoke and disappear into her bottle where she would lament on her Arabian couch.
But I fear Max Headroom is here to stay. For while we aging news anchors know our days are numbered, Max just keeps getting better (or more real) every day.