Now that the second AI hype cycle is starting to die down, it’s a great time to scrutinise the claims being made about ChatGPT.
One of them suggests that AI can replace professional writers. It’s a tempting idea: no more scheduling in-person briefings, waiting for feedback, asking for rewrites, managing approval processes or tolerating creative divas. Everything from advertising copy to pillar posts, video scripts to keynote speeches can be generated by machine — faster and cheaper, and without sacrificing quality.
Not so fast.
Tech startups chasing unicorn-level valuations are bound to over-egg their claims, and it turns out that ChatGPT and Generative AI peers like Google Bard and Bing Chat have, shall we say, trust issues. High-profile successes, such as computer-generated biblical texts that read eerily as though they were part of the original, must be balanced against all the times AI tools got it wrong: made obvious errors, dreamt up ‘facts’, or even libelled real people.
AI reality is starting to sink in. Whatever promise the current software cohort holds for content creation, using it to replace human writers is a risky business.
It’s probably worth pointing out that the term ‘artificial intelligence’ (AI) is frequently misused. A true AI, sometimes called artificial general intelligence, would be a machine powerful enough to replicate human brain function. That’s still a long way off. Today, the term covers a broad category of technologies, including machine learning. That’s what ChatGPT is: an advanced machine learning system known as a Large Language Model (LLM), trained on a massive dataset of text. Data scientists train ChatGPT to respond to prompts in ways that are human-like.
LLMs are big: trained on multiple terabytes of text and containing billions of parameters. The ability to generate sophisticated, grammatically correct text quickly from a model of that scale is clearly impressive.
The problem is, you can train a data model to output almost anything you want. The flexibility in how generative AI tools are taught to define truth — or arrive at it independently — can bring troubling results.
There’s an undeniable ‘wow’ factor to ChatGPT’s apparent depth of knowledge and its ability to mimic the style and tone of certain writers, but one problem has critics worried: it hallucinates. In practical terms, it sometimes inserts statements that are syntactically and semantically plausible but, on closer inspection, turn out to be plain wrong.
Any ChatGPT-written marketing asset needs to be carefully proofread and double-checked for erroneous faux-facts that could chip away at your credibility.
Generative AI errors happen so frequently that there’s now a sub-genre of social media comment (#chatGPT #fails) devoted to them.
Here are a few that have made the hall of shame.
You can see the potential for AI-authored texts to harm a brand’s credibility or reputation, but there are two immediate business risks marketers should be aware of:
Writers in the US, including John Grisham, Jodi Picoult and Game of Thrones originator George R.R. Martin, have launched a major lawsuit accusing ChatGPT’s owners, OpenAI, of plagiarising their work by using it to train the software’s LLM. If they’re successful, it could destroy the system’s essential asset — its dataset — which is full of copyrighted content that can’t be unpicked.
What that will mean for generative AI’s future efficacy is anyone’s guess. And if OpenAI ends up being liable for damages, or required to pay royalties, how will that affect ChatGPT users who generated content using a plagiarism-compromised dataset?
There are already AI tools that can detect AI-generated content. How long will it be before Google, Bing and Facebook push robot-written copy down in organic rankings — much the way plagiarised, duplicated and ‘thin’ (low-value) content is penalised now?
While it’s not clear if a specific ranking penalty is in the works, Google has said that using automated tools to generate content is against its guidelines. It wants web publishers to create ‘helpful, reliable, people-first’ content.
Reliability is the keyword here. Google’s ranking algorithm is already set to punish inaccurate or misleading content, while Facebook is aggressively fact checking, calling out and sometimes blocking social media posts that contain ‘misinformation.’
Is generative AI any good at writing copy? Beyond the high-profile success stories, ask ChatGPT to generate an article or blog post on a random topic and what you get is… OK.
Fine, but not great. Worthy, but bland. Lacking in colour, personality or depth. In short, anodyne prose that probably isn’t right for communicating your brand’s unique voice and values. We’re not yet at the stage where generative AI can comprehend the complexities of language or the shifting ambiguities of human conversation.
That’s a problem that could actually get worse. AI writing tools are now being trained on AI-written content. A study by Canadian and British researchers calls this the ‘curse of recursion’: a feedback loop with the potential to collapse AI models entirely.
Imagine a planet-sized photocopier making a sea of photocopies, then photocopying that sea of photocopies to create a new sea of diluted photocopies, and then photocopying that diluted sea of photocopies to fill an even more diluted ocean of content… continuously, forever, ad infinitum.
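For the statistically curious, the photocopier effect can be sketched in a few lines of code. This is a toy illustration, not how LLMs are actually trained: we repeatedly fit a simple Gaussian ‘model’ to samples drawn from the previous generation’s fit, and watch the spread of the data shrink generation by generation. All the names and parameters here are invented for the demo.

```python
import random
import statistics

def fit_and_resample(data, n):
    """Fit a Gaussian 'model' to the data, then generate n synthetic samples from it."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

def simulate_collapse(generations=500, n=50, seed=0):
    """Train each generation on the previous generation's output; track the spread."""
    random.seed(seed)
    data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: 'human' data
    stds = []
    for _ in range(generations):
        data = fit_and_resample(data, n)  # each refit loses a little of the tails
        stds.append(statistics.stdev(data))
    return stds

stds = simulate_collapse()
print(f"spread after generation 1: {stds[0]:.3f}")
print(f"spread after generation {len(stds)}: {stds[-1]:.5f}")
```

Each refit can only capture what survived the previous round of sampling, so the rare, distinctive material in the tails of the distribution is gradually lost and the output converges on an ever-narrower average — the photocopier’s sea of diluted copies, in miniature.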
Our take: generative AI tools are finding niche use cases and some writers report that they can be a helpful research tool. When it comes to creating original, engaging, on-brand copy, however, they aren’t (yet) up to the task.
It’s still a case of ‘watch this space’. For now, marketers need to consider the potential risks.