In the second part of the discussion, “Can Designers Survive the AI Threat?”, renowned infographics specialist Charles Apple shares his thoughts on AI.
By Charles Apple
Several years ago — back during the time when I blogged about newspaper design and visuals — a design consultant began advertising a package of newspaper design templates. His marketing campaign urged buyers to “Just add content.”
No, this consultant was NOT Mario Garcia. But it was another consultant whom I respected greatly.
The campaign ruffled my feathers. In my blog work and in the sessions I frequently taught at the time, I had been trying to get editors to stop thinking in terms of content and then visuals. Design IS content, I said, over and over again. We can tell stories more effectively with good, thoughtful design. And if the design doesn’t support the content or help tell the story — well, then, you’re not doing it right.
Design, illustration and my specific specialty, infographics, are not afterthoughts, I taught for the better part of two decades. They ARE content, too.
The problem was that I blogged about this. Which didn’t please that consultant one little bit. For that, I felt very bad. A smarter journalist would have found a more diplomatic way of making my point. One that didn’t involve angering one of the giants of the industry.
Well, as I’m so fond of writing in my current work: History repeats itself.
I’m a huge fan of Mario Garcia: of the work he’s done educating editors and designers about the visual side of journalism, and of his urging us all to bring ourselves into the modern era of mobile devices, interactive design and coding. Lots and lots of coding.
But while I trust Mario implicitly, I find myself troubled by what appears to be his latest crusade: the use of Artificial Intelligence in journalism in general and in news design in particular.
I trust Mario. But I don’t trust AI. In the least.
Why don’t I trust AI?
Simple: Because Artificial Intelligence comes from a sketchy background and produces sketchy results.
The Artificial Intelligence that designers are currently dealing with, and will be dealing with over the next year or two, has been “trained” on other published work. This results in output that is essentially an unlicensed derivative of previously published work — work presumably done by real, live humans who received paychecks for it.
If you were a journalism design manager and you caught a new hire doing work the way that AI is doing work, you’d be compelled to fire them. That’s plagiarism. Period.
If you’re claiming it’s not a close enough copy to be plagiarism but instead is “derivative” of previously published content, then credit the previous work. That might also require payment of some sort. Especially if the previously published work is copyrighted. And in our business, it often is.
Last year, one of the leading AI firms, OpenAI, signed a contract with the Associated Press to use AP news stories to train its algorithm. So AP is fair game for OpenAI. However, that same firm also approached the New York Times about a similar arrangement. The Times declined.
But the Times alleges OpenAI used its content anyway. How is this fair? How is this allowed? Again, we wouldn’t allow a human to do this. Is the standard for AI that much lower?
Yes, there’s a lawsuit out there over exactly this. It’s one of many:
https://www.plagiarismtoday.com/2024/01/02/why-the-new-york-times-ai-case-is-different/
This one is about a lawsuit by authors:
https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/93170-more-authors-sue-ai-developers-over-copyright.html
And here’s a story from the Harvard Business Review:
https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
There are plenty more. You can Google them as well as I can. Or, hell, just ask AI to find stories about AI plagiarism lawsuits.
I think it’s irresponsible of journalists and journalistic institutions to use AI while these legal matters are still unsettled.
A number of news outlets have already begun using AI. Many of these attempts have met with disaster.
The Associated Press began using AI a decade ago for corporate earnings stories and for sports briefs. But these are basically pre-formatted stories. AI is simply filling in blanks, like an electronic “Mad Libs” game.
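To make that concrete, here’s a minimal sketch of what fill-in-the-blanks story automation looks like, written in Python. This is a hypothetical illustration, not AP’s actual system; the template wording, the field names and the earnings figures are all invented. Note what happens when a data field is missing: the blank survives into the “story,” which is how bracketed placeholders like the ones in the Gannett recaps described below can end up in print.

```python
# A minimal, hypothetical sketch of "Mad Libs"-style story automation.
# Nothing here reflects AP's actual system; the template wording, field
# names and numbers are invented for illustration.
from string import Template

# A pre-formatted story shell with named blanks. "$$" renders as a
# literal dollar sign; "${...}" marks a blank to fill from data.
EARNINGS_TEMPLATE = Template(
    "$company reported quarterly earnings of $$${eps} per share, "
    "compared with analyst expectations of $$${expected_eps}."
)

def write_brief(data: dict) -> str:
    """Fill the story shell from structured data.

    safe_substitute() quietly leaves any missing field in place as a
    raw placeholder instead of raising an error -- which is how an
    unfilled blank can slip into print if nobody reads the output.
    """
    return EARNINGS_TEMPLATE.safe_substitute(data)

if __name__ == "__main__":
    # Complete data: a clean, if formulaic, brief.
    print(write_brief({"company": "Acme Corp", "eps": "1.42",
                       "expected_eps": "1.37"}))
    # Missing data: the placeholder survives into the "story."
    print(write_brief({"company": "Acme Corp", "eps": "1.42"}))
```

The point of the sketch: there’s no language model here at all, just slot-filling from structured data, which is why this sort of automation has been reliable in ways generative AI has not.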
But when you ask AI to do something a little more complex — like, y’know, write actual sentences and deal with real, live content — things go off the rails.
Last fall, a handful of Ohio-area Gannett papers used AI to write high school football recaps. The experiment was halted after readers found a number of spectacular errors lurking on the sports pages.
One story referred to a football game as a “close encounter of the athletic kind.” Another failed to fill in the names of the winning and losing teams, leaving bracketed placeholders in their place:
https://www.axios.com/local/columbus/2023/08/28/dispatch-gannett-ai-newsroom-tool
Clearly, failures like this could be caught by careful — or even casual — editing. But some outlets have cut way back on their copy desk personnel. Some appear to have eliminated copy desks entirely. The last thing an overworked, overstressed copy editor needs is AI gleefully making even more errors, faster than anyone can catch or fix them.
The tech site CNET quietly began using AI to write stories in late 2022. The results seemed impressive … at first. But then editors began making corrections to the stories. Tons and tons of corrections. “It turns out the bots are no better at journalism – and perhaps a bit worse – than their would-be human masters,” wrote the Washington Post:
https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/
Google’s Gemini-powered AI Overviews have produced grotesquely bizarre results, such as suggesting cooks add glue to pizza so the toppings stick to the pie rather than to the box it’s packaged in. Or drinking 2 quarts of urine every 24 hours to help pass kidney stones. Or eating a small rock every day to improve one’s health.
A reporter backtracked Gemini’s reasoning on that rock tip and found the info originated with the satirical website The Onion. A reporter for CNET, that is. HOPEFULLY, it was a real, live reporter …
https://www.cnet.com/tech/computing/googles-ai-overviews-fail-at-fact-finding-but-excel-at-entertaining-and-other-ai-news/
And then there was the big Sports Illustrated incident last year. The owners of that magazine denied using AI at first. But it was happening:
https://chicago.suntimes.com/2023/12/11/23990714/sports-illustrated-artificial-intelligence-ai-newsrooms-journalism-fake-news-futurism-editorial
So there you are: AI is “trained” in ways that modern journalists find unethical. It produces results that are unacceptably error-ridden. And the news organizations that use AI have failed to give their ‘bots proper oversight or even rudimentary copy editing.
I’m failing to see an upside to this.
Other stray notes
– Yes, I have mostly used examples of writing and written content here. But the same applies to AI visuals. AI rips off existing artwork and photography from creators who would normally be paid for that content. Those creators are not compensated for their work.
– Yes, AI can quickly supply a nice illustration for a magazine or newspaper or web story. But using AI for this takes a job away from a staffer or a freelancer.
– Yes, use of AI in this way might save the owners of our publications some money. But at the cost of their credibility and their ethics.
– Yes, AI is coming whether we like it or not. But that doesn’t mean I’m not going to fight it.
And finally…
– Yes. I still love Mario. And I trust him. I freely acknowledge he’s a lot smarter than I am.
So, I suppose we’ll see.
If AI does take over journalism and if it does so without fouling up facts or content, then swell. I guess those of us who are still clinging to visual-oriented jobs in the newspaper world can apply for jobs sweeping floors at fast food joints.
But you won’t read about that trend in the news media. Unless AI writes about it.
AI in journalism is basically like the Tesla Cybertruck. A questionable product. Created with questionable goals. That performs questionable actions. And one that clearly was nowhere near ready at launch.
–
EDITOR’S NOTE: AI was most definitely NOT used in the production of this essay.
CHARLES APPLE is the Further Review editor for The Spokesman-Review of Spokane, Washington. He researches, writes, designs and creates infographics for four full-page stories a week, which are then distributed to papers across the U.S.
PART 1: Can Designers Survive the AI Threat?
Renowned design consultant Dr. Mario Garcia is the first to share his insights.