AI generated content: Why agencies and brands need to tread cautiously

As in numerous other industries, artificial intelligence (AI) is poised to make mass communications more effective and efficient. While it is a wonderful tool, it is important for companies, communicators and public relations (PR) agencies to understand the risks the technology presents, and the most prudent and ethical ways of managing them.

A core responsibility of a PR agency is to help clients manage their reputations and to flag risks as soon as they appear, in order to head off a potential crisis. The increased use of AI is a risk to which all brands are exposed, because it touches the most fundamental aspect of marketing and communications: generating content.

Breaking trust

PR content relies heavily on the media for its efficacy, and reputable publications and their publishers guard their readers’ trust and their influence dearly. Good PR specialists build strong, meaningful relationships with editors and journalists at these publications. Those relationships are built on trust: editors trust that they are being sent accurate, truthful, original and relevant content for their audiences.

Journalists are increasingly running editorial articles through AI content detectors, and if the result comes back as “mostly AI generated”, brands face reputational risk and potential backlash from the media and beyond. Sports Illustrated in the USA was recently brought to its knees for using AI to develop editorial content. Here is an excerpt from a CBS News report published on December 12, 2023:

Sports Illustrated publisher Arena Group fires CEO following AI controversy

The publisher of Sports Illustrated has ousted its chief executive officer following public backlash over the sports magazine’s alleged use of AI to write stories. The decision comes after Sports Illustrated became steeped in public controversy over allegations it used AI to generate content and fictitious author bios for its website.

Why would a publication do this? One answer might be that if the tools exist, why not use them? That approach is mistaken, however. Readers are becoming more familiar with the style and tone of AI-generated content, and experience shows that the moment they sense content may not be original, they stop reading.

Mitigating reputational risk

As we head deeper into the digital future, agencies have to counsel clients against presenting AI-generated content as genuine human-written work. Audiences want to hear the expertise of thought leaders, and a brand exposed for faking that expertise can suffer devastating reputational damage. Credibility is built on authenticity and trust, and is easily broken by such practices.

Brands also use digital platforms to build their online presence, drive referral website traffic and potentially generate leads, and the same concerns apply here. While it might seem quick and convenient to use AI to generate the messaging needed for a website, this can result in generic content that lacks what adds real value: the viewpoints of industry and domain experts.

It also does nothing for organic search rankings. Indeed, Google has long warned against this tactic, stating that the “use of automation, including generative AI, is spam if the primary purpose is manipulating ranking in search results”.

While Google does not explicitly penalise blog posts written with generative AI tools such as ChatGPT, it does have a policy against using automated content generators to produce duplicate or low-quality content. The search giant adds that a recently released core update will reduce “low-quality, unoriginal content in search results by 40%”. Instead, the company calls for the creation of “helpful, reliable, people-first content that provides original information, reporting, research or analysis”.

Generative AI to assist, not replace

Beyond this, agencies should guide clients and their own staff to use generative AI cautiously and not as a source of truth, because the technology is not yet mature and its output is in some instances completely factually incorrect. Another hazard is that many of the datasets used to train generative AI platforms contain copyrighted material. In addition, the content may not read smoothly, and AI often leans on US-English metaphors and figures of speech – red flags for readers. If ever in doubt, run the text through multiple AI detection tools, such as ZeroGPT or QuillBot, and compare the results.

Rather than turning to AI, agencies should look to position their clients’ own business insights instead of simply recycling research already available online. This gives clients an advantage in the media by offering a completely unique point of view and a far better chance of being placed in credible publications.

Ultimately, though, technology such as automation and AI is there to support and enhance human expertise, never to replace it. Credibility and trust are earned, and as such there is no substitute for your own insights and expertise in thought leadership content.