15 AI-Generated Content Meets the SPJ Code of Ethics

Decorative illustration above created with Microsoft Bing’s Dall-E Image Creator.


Artificial intelligence will no doubt play a role in the future evolution of journalism. Much like social media, AI in journalism can be reviled and embraced at the same time.

Already, some media professionals are using AI tools such as ChatGPT, Google Gemini, Bing, Dall-E, and Midjourney to assist in generating news and sports stories, headlines, graphic art, illustrations, scripts, email newsletters and video.

In a statement published in 2023, SPJ National President Claire Regan said, “While there is no need for a ban on artificial intelligence in journalism, its use is best limited and considered on a case-by-case basis.”

In this chapter, we’ll explore how the SPJ Code of Ethics can help shape guidelines for appropriate and limited use of AI-generated content.


SEEK TRUTH AND REPORT IT

AI-generated text is not always truthful. A 2023 New York Times article stated that generative A.I. “relies on a complex algorithm that analyzes the way humans put words together on the internet. It does not decide what is true and what is not.”

Furthermore, some AI chatbot responses fit the definition of fabrication. As the Times article noted, “Because the internet is filled with untruthful information, the technology learns to repeat the same untruths. And sometimes the chatbots make things up.”

The New York Times also chronicled an extreme example of AI hallucination when a New York lawyer used ChatGPT to prepare a court filing, but “no one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.”

In 2023, the startup company Vectara, which was founded by former Google employees, shared research on the reliability of AI chatbots. A New York Times summary of the Vectara research reported that “even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent.”

In order to use AI tools ethically, media professionals must prioritize fact-checking. Otherwise, they become part of the problem by lending credibility to inaccurate information.


MINIMIZING HARM

Manipulated image of the pope, with disclaimer

Because today’s AI tools are prone to hallucination and fabrication, editors and publishers may not always foresee potential harm. This creates an even clearer imperative to fact-check AI content thoroughly.

Let’s consider the first draft of an AI-generated story about stock market trading. Editors and publishers have an ethical duty to verify content thoroughly before the story is published or broadcast. Careful fact-checking minimizes potential harm to investors who may make financial decisions based on that story.

Beyond text, AI tools give all users, not just journalists, pathways to create or alter audio, video and photos. Verification should become an even more essential part of the routine workflow for editors and producers, especially to check for manipulated multimedia content.

When publishing visuals that illustrate AI manipulation, journalists must clearly label the manipulated content. Without a clear label, other online users can recycle a manipulated photo out of context, thus perpetuating its misinformation. The example included in this section — a manipulated image of the pope that spread through social media — has been appropriately labeled to minimize future misuse of the image.

Finally, because some AI software relies on automated data scraping, the next decade will likely bring many ethical and legal questions about measuring the harm inflicted by AI-generated content. Some media outlets and journalists may have legitimate reasons for not wanting their published content to be included in AI queries. They may envision financial harm or loss of ownership if their content becomes part of AI-generated responses by default.

Do individuals and media companies have a right to minimize potential harm by controlling who or what accesses their content and data? Also, do media organizations have an ethical duty to minimize their use of artificial intelligence and prioritize work done by actual humans, many of whom need a secure job, a steady paycheck and a professional purpose in their lives? Let’s hope we gain more clarity in the future, for both law and ethics.


ACT INDEPENDENTLY

Ethical media professionals need to focus on authenticity in their use of AI content. As the technology evolves, there will no doubt be debates about the extent to which some AI content resembles plagiarism because it is not generated independently.

One case example from Vanderbilt University — an email sent to students — illustrates how audiences negatively perceive content that appears insincere, mainly because of its generic messaging.

The email was an attempt to provide comfort and support for Vanderbilt students in the wake of a shooting at Michigan State University. In this case, the use of ChatGPT to generate the message was transparent. As a USA Today article about the incident noted, a sentence in parentheses at the bottom of the email said, “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.”

However, instead of crafting a more independent and specific message tailored to Vanderbilt students, those who sent the email depended on ChatGPT to do the heavy lifting. Although this example strays from the more typical SPJ interpretation of acting independently with regard to financial or political interests, it nonetheless illustrates how over-reliance on AI tools can prevent media professionals from delivering independent content focused on the best interests of the intended audience.

New York Times Publisher A.G. Sulzberger wrote that “independent journalism also rests on the bedrock conviction that those seeking to change the world must first understand it — that a fully informed society not only makes better decisions but operates with more trust, more empathy, and greater care.”

Journalists who use AI tools may not fully understand how those tools work, and they may not be aware of potential biases within the algorithms. This can prevent them from being fully independent.


BE ACCOUNTABLE AND TRANSPARENT

Perhaps the most worrisome aspect of AI-generated content involves transparency. To follow the SPJ Code of Ethics, media professionals should clearly acknowledge the use of artificial intelligence.

What do those credits and bylines look like? That’s an evolving debate. Below is a screenshot example from Bankrate:

Screenshot of Bankrate.com attribution for AI content: https://www.bankrate.com/investing/what-is-arbitrage/

In early 2023, CNET was criticized for not being transparent with audiences about its use of AI to generate online content. In the CNET case, the generic word “Staff” was used for the byline with no additional disclosure, and in some cases the content contained significant errors.

An Axios summary of the controversy included this observation:

Now that the technology has become so advanced and accessible, it’s become harder for newsrooms to draw the line between leveraging AI and over-relying on it.


A 2023 Medium article described that platform’s approach to AI disclosure and also included initial guidelines from other publications.

The most extreme policy example cited in the Medium article is Fanfare’s official position, with a clear line in the sand saying that any aspiring writers who submit AI content “will be barred at the gates like the uncivilized barbarians they are.”

Transparency in the use of AI-generated artwork is similarly problematic, as shown below. A book cover for the UK edition of a prominent fantasy novel used a colorful illustration of a wolf. On the left is the book cover as it appeared on an Amazon online sales page in May 2023. On the right is the Adobe Stock image (with accompanying AI credit) used to create the book cover.

Comparison of book cover to Adobe Stock art

There is also a broader debate about whether most or all AI-generated art is unethical.

CONCLUSION

In academia, much of the focus has been on AI-generated text as a tool for students to bypass the traditional demands of academic research assignments. For now, though, much AI text is by nature boring and predictable because algorithms determine word usage based on already existing text.

Thus, AI isn’t well suited to write the first rough draft of history. As SPJ President Claire Regan noted, “Humans are best at connecting intimately with humans to tell their stories, which is what hyperlocal journalism is all about.”

A better source of support here may be AI itself. When Google Bard (now Gemini) was asked, “Why is AI-generated text often boring?” it responded:

All models are trained on large datasets of texts, but they do not have the same understanding of the world as humans do. This means that they can generate text that is grammatically correct and factually accurate, but it may not be interesting or engaging.

There is no one-size-fits-all policy across all media platforms. Some real estate agents say they can’t imagine working without ChatGPT, possibly because there are only so many adjectives to describe a typical three-bedroom, two-bathroom suburban house.

The more immediate and deeper concern may involve multimedia content. Although fact-checking text is an established skill set that can be taught to aspiring journalists, verifying the authenticity of audio, video and photos will likely require constant training and re-training as the AI tools for manipulation evolve and improve. Some analysts compare this to an arms race.

As this AI arms race evolves, the SPJ Code of Ethics should remain a valuable and time-tested resource for journalists and other media professionals.


WRITE ABOUT IT

This writing task is for the adventurous. It comes in two parts.

PART A – CREATION (assisted by artificial intelligence)

Using AI tools as a starting point, craft a very brief essay or script (approximately 200 words) about the pros and cons of AI-generated content in journalism and public relations. Generate an image or audio (or both) to place at the top of Part A. Provide clear credit for the AI platforms used in generating your text and image/audio.
(Note – if you generate audio, you can just include a link to your audio instead of embedding it in your submitted document.)

For Part A, you may use any of the AI tools mentioned earlier in this chapter (such as ChatGPT, Google Gemini, Bing, Dall-E or Midjourney), although you are certainly not limited to those options.

As much as possible, use multiple prompts to refine and improve your text and artwork, as if you are an editor making improvements to a first draft. Don’t just use the first text response or art that the AI tool offers.

Unless you want to spend money, I suggest you first experiment with free versions of AI tools. Even for free versions, though, you may be required to register with an email address.

PART B – REFLECTION (your own work)

Write at least 300 words of personal reflection about your work in Part A. Feel free to use first-person pronouns (I, me, my) as needed. Do NOT use any AI tools for your writing in Part B.

  • Describe the process you used, including initial and follow-up prompts. How did you refine your work?
  • Next, assess the quality of the essay and art. How engaging and professional is the content? Now that you’ve read this chapter, what do you perceive as the strengths and weaknesses of AI-generated content?
  • Finally, based on this chapter and your exploration for this assignment, discuss the ways you might use AI tools ethically in your career path. Or you can explain why you will probably NOT use AI tools professionally in the future.

 

License


Ethics in Journalism and Strategic Media Copyright © by Dave Bostwick is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
