Ready or not, we’re rapidly heading into a world where generative AI tools like ChatGPT, Dall-E, and others do more of marketers’ day-to-day work. That transition is not much in question. The only question is how well your organization makes that transition.
To make that transition as smoothly as possible, we recommend taking a cautious and realistic approach toward the here-and-now of these technologies, while keeping an optimistic long-term view. That starts with having a solid understanding of how these technologies work, including what they’re capable of and what they’re not.
Having that understanding will help you deploy generative AI in ways that will help your business grow, and either avoid or safeguard against situations where AI leads to costly errors or embarrassing customer experiences.
Because the issues and use cases around generative AI vary depending on whether you’re outputting images, code, or text, we’re addressing each of those separately. In this post, we’re focusing on generative AI for text. Let’s start by explaining…
How AI-Generated Text Works
Generative AI engines like ChatGPT and Bard build sentences one word at a time. Given the question being asked of them, these large language models (LLMs) use the massive array of websites, articles, books, and other sources they've been trained on to score the likelihood of every word they could use next. They then choose the one with the highest score and move on to the next word, and the next, until they've completed a sentence, and then the next sentence, if necessary.
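That word-by-word loop can be sketched in a few lines of code. This is a toy illustration only: the hand-built score table below is hypothetical, whereas a real LLM computes these likelihoods from billions of learned parameters (and usually samples from the scores rather than always taking the top one).

```python
# Toy sketch of next-word selection. The score table is invented for
# illustration; a real LLM derives these probabilities from its training.
NEXT_WORD_SCORES = {
    "<start>": {"the": 0.5, "a": 0.3, "it": 0.2},
    "the": {"dog": 0.6, "cat": 0.3, "idea": 0.1},
    "dog": {"barked": 0.7, "ran": 0.2, "slept": 0.1},
    "barked": {".": 0.9, "loudly": 0.1},
}

def generate(max_words=10):
    words = []
    current = "<start>"
    while len(words) < max_words:
        candidates = NEXT_WORD_SCORES.get(current, {})
        if not candidates:
            break
        # Score every candidate and pick the highest, as described above.
        current = max(candidates, key=candidates.get)
        words.append(current)
        if current == ".":  # stop once the sentence is complete
            break
    return " ".join(words)

print(generate())  # → "the dog barked ."
```

The key point of the sketch: nothing in the loop checks whether the sentence is *true*, only which word scores highest next, which is exactly why the weaknesses discussed later arise.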
In the broadest sense, that’s how these engines work, but in many cases brands can adapt these tools to their purposes. For example, a brand may be able to supplement a model’s training with its own content, or add guardrails that constrain what the model can say.
For marketers, these guardrails and supplemental training are key to unlocking substantial value from these tools, while also minimizing risks, which we’ll talk about next.
4 Modes of Use
A lot of the buzz around generative AI is about how it can produce brand new content from nothing. However, that’s just one way of using AI-generated text.
It’s important to keep these modes of use in mind as we discuss the weaknesses of generative AI.
Weaknesses & Safeguards for Use
When creating text using a generative AI engine, it’s critical to be aware of several major weaknesses, so you can protect yourself against them.
1. Conveys inaccuracies and biases confidently. As much as these tools have wowed people with their speed and versatility, their inaccuracies have also stunned people.
For example, Men’s Journal had incorrect information in an AI-generated article about low testosterone. CNET had errors about interest calculations and about mortgage rates in their AI-generated stories, which were also less than transparent about being written by a machine. And, in perhaps the most costly mistake to date, Google lost $100 billion in market capitalization after running an ad for Bard that included erroneous AI-generated information about which satellite telescope took pictures of the first exoplanet.
There are two core issues here. First, these engines are largely trained on content from the public internet, which is full of inaccuracies and even outright lies. And second, these engines have no ability to judge whether anything they’re saying is true. They only know whether the words make sense next to each other syntactically and on a probabilistic basis. Their goal is simply coherence: to make people say, That sounds like a good answer. It’s because of those two factors that AI text-generating engines are prone to “hallucinations,” a polite way of saying that they make things up.
For those same reasons, AI-generated text also has the potential to include racial, ethnic, cultural, and other biases and prejudices.
To safeguard against this weakness, be sure to fact-check AI-generated copy, paying particular attention to dates, stats, and details about people, places, organizations, products, and other entities.
2. Potential for plagiarism & copyright infringement. Because it’s pulling from existing content, generative AI also has the potential to plagiarize. For example, CNET has been accused of plagiarism stemming from its use of AI-generated articles.
To safeguard against this weakness, start by understanding that the risk of plagiarism is greatest when pulling from the internet or other very broad public datasets. Risk is lessened somewhat when the AI is trained on a substantial amount of brand-created copy, and reduced much further when the AI is prompted to rewrite original copy created by the brand, such as changing the copy’s tone. Legal risks are minimized when the AI is restricted to brand-created copy and works that are in the public domain.
Of course, this risk is also reduced when humans substantively rewrite or adapt the copy created by a generative AI tool.
Separately, it’s worth noting that you shouldn’t input any proprietary information or trade secrets into these AI engines, as Samsung employees were caught doing.
3. Voice can be off-brand. ChatGPT and other LLMs are trained on the internet, so their responses generally feel quite generic in terms of voice (unless you ask them to respond in the voice of a particular well-known author, which can heighten your exposure to plagiarism, copyright infringement, and other legal risks).
To safeguard against this weakness, train the generative AI engine your company is using on plenty of content you’ve created so that its responses can be much more on brand. Platforms like Writer even include brand style guides, where you can stipulate your preferences for the writing grade level, styling of heads and subheads, whether to use Oxford commas, and much more.
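Many of these style-guide rules are mechanical enough to check automatically. As a hypothetical illustration (not how Writer or any particular platform actually implements it), a minimal check for lists missing an Oxford comma might look like this:

```python
import re

# Hypothetical brand style rule: flag "A, B and C" lists that are
# missing the Oxford comma before the conjunction.
MISSING_OXFORD = re.compile(r"\b\w+, \w+ (?:and|or) \w+")

def flag_missing_oxford_comma(copy):
    """Return the offending phrases found in the copy."""
    return [m.group(0) for m in MISSING_OXFORD.finditer(copy)]

print(flag_missing_oxford_comma("We sell shoes, hats and scarves."))
# → ['shoes, hats and scarves']
```

A copy with the Oxford comma in place (“shoes, hats, and scarves”) produces no flags, since the pattern requires a bare space before the conjunction.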
4. Isn’t up on the latest trends. The core training of a large language model takes a long time because of the sheer amount of information involved, so that training tends to happen all at once and isn’t updated very often. Because of that, these tools generally aren’t able to talk about recent developments, like the latest sports season or the most recent Oscar-winning movies. For example, the GPT-3.5 model behind the original ChatGPT doesn’t have access to any information or news from after 2021.
However, as previously mentioned, these tools may not admit that they lack timely information about your request, instead simply presenting you with information that’s incorrect or dated.
To safeguard against this weakness, avoid asking these tools about current events or trends, such as current search trends or the best keyword terms to use in a blog post. You’ll need to use traditional search engines to do research on those.
5. Struggles with new topics. Relatedly, generative AI tools are weak when there’s little content on a subject from which to draw—and weakest when the topic has never been written about.
The bottom line: these engines can’t come up with an idea that no one has ever had before. They are mimic machines.
To safeguard against this weakness, either use generative AI to write about well-trod topics or provide it with additional content to draw upon or modify. But don’t expect it to help you become a thought leader in your field. In fact, it may do the opposite.
6. Can undermine your authority. For the better part of the past two decades, brands have worked hard to humanize themselves. B2B brands have built publishing empires that collectively rival traditional media outlets. And B2C brands have leaned heavily into influencer marketing, user-generated content, consumer polls and surveys, and giving their employees a voice in their marketing, among other ways of bringing more humanity to their marketing efforts.
A reversal of that trend could undermine consumer trust and your company’s image, especially if it becomes obvious or known that use of generative AI is the cause. Most Americans are highly skeptical of AI, with only 9% convinced that AI will do more good than harm, according to a Monmouth University poll. That negative outlook has already sparked a backlash against AI that will almost certainly grow as it displaces more and more workers across the economy.
For B2B marketers with large content marketing operations, the risks here seem particularly high. News outlets, academic institutions, industry groups, and other high-authority sources will never knowingly quote or cite AI-written copy. Similarly, AIs won’t be invited to speak at industry conferences, contribute guest articles, participate in roundups, or take advantage of other earned media opportunities.
To safeguard against this weakness, limit AI to a supporting role for any content that you’d currently put a person’s byline on or have a person speak about on a webinar, podcast, or news show.
7. Doesn’t necessarily boost performance. Some marketers assume that whatever an AI writes will perform better than whatever they’d write. But we’re a long way off from that being true on a regular basis.
That’s because these engines currently aren’t trained on the past performance of your content, which means their recommendations aren’t based on any understanding of your audience’s interests or preferences. They also don’t know your brand as well as you do.
Generative AI tools stand in sharp contrast to AI-powered copywriting tools like Phrasee and Persado, which have been around for many years. Trained on the historical performance of your brand’s campaigns, these older tools are all about improving performance through word choices tuned to the preferences of your audience.
That said, the experiences of our clients with those tools over the years have been mixed. The performance increases they deliver often don’t translate well to bottom-of-the-funnel activity, which results in relatively low returns after licensing fees are paid. If performance-based AI tools aren’t delivering big returns, that should make everyone deeply skeptical of the performance opportunity with generative AI, which primarily offers time-savings.
To safeguard against this weakness, respect the experience and customer knowledge that your marketers have and use A/B testing to prove out the most effective copy.
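That A/B testing can be as simple as a two-proportion z-test comparing the conversion rates of two copy variants. A minimal sketch (the visitor and conversion numbers below are purely illustrative):

```python
import math

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: how many standard errors apart are
    the conversion rates of variants A and B?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B perform equally
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Example: human-written copy (A) vs. AI-assisted copy (B)
z = ab_test_z(120, 5000, 150, 5000)
print(round(z, 2))  # → 1.85
```

Here |z| falls short of 1.96, the threshold for significance at the 95% confidence level, so despite B’s higher raw conversion rate you wouldn’t yet declare a winner; you’d keep the test running.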
8. Doesn’t understand the nuances of some channels. Given that AI tools that are explicitly designed to write subject lines struggle to deliver bottom-of-the-funnel improvements, it shouldn’t be a shock that broad-based generative AI tools don’t do a great job of understanding the intricacies of marketing either—and of email marketing in particular.
For example, generative AI tools don’t seem to understand the difference between B2C promotional emails and the emails that business folks send each other. They don’t understand the need to front-load keywords in subject lines, or the need to align subject lines with body content so that the subscribers who are most likely to click are the ones who open the email. And on and on.
To safeguard against this weakness, marketers should not only be mindful of their prompts for generative AI, but be ready to optimize the resulting copy for the channel based on their channel knowledge.
Best Marketing Use Cases
Given all of those weaknesses, how can brands wisely use AI-generated text in ways that play to their strengths? We see several clear use cases.
Brainstorming. Generative AI is great at conquering the blank page. For example, you might prompt it by saying…
With its suggestions in hand, you can then select the most promising ideas, refine them, and make them your own.
Summarize and cross-purpose. When you have existing content, generative AI is great at writing a summary of it or adapting it to different channels. For example, you could ask it to write a 4-sentence teaser of a blog post to use in your upcoming newsletter or copy for a tweet to promote it.
Similar to using generative AI for brainstorming, it’s best to ask it for multiple teasers of each format so you have options and plenty of inspiration to then tweak.
Copy adaptation and improvement. Generative AI can be highly effective at helping you tailor your content to a variety of audiences, such as people in different industries or different professions or roles, and customers with different motivations or needs.
For example, you could have the marketing copy for your accounting software modified to speak more pointedly to small business owners or to larger companies, or to people in the retail, consumer services, and restaurant industries. You could also have generative AI lower the reading level of your copy, making it easier to read.
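If you do ask generative AI to lower the reading level of your copy, it’s worth verifying the result rather than taking the tool’s word for it. A rough sketch using the standard Flesch-Kincaid grade formula, with a crude vowel-group heuristic standing in for a real syllable counter:

```python
import re

def count_syllables(word):
    # Crude heuristic: count contiguous vowel groups, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Approximate Flesch-Kincaid grade level of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    total_syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (total_syllables / len(words)) - 15.59)

simple = "The cat sat on the mat."
dense = "Organizational prioritization methodologies facilitate evaluation."
print(fk_grade(simple) < fk_grade(dense))  # → True
```

Comparing the score before and after an AI rewrite gives you a quick, objective check that the copy actually got easier to read.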
In all of those cases, generative AI plays only a partial role in the overall writing process, either at the beginning to help with ideation or at the end to help adapt copy to other audiences or channels; it isn’t doing the whole thing. Given the current state of generative AI, we think this is a reasonable and effective role for it to play today.
That role also doesn’t deprive your organization of the self-discovery that comes from the writing process. Writing isn’t the last thing you do after you’ve had all the ideas you’re going to have. Writing is the thing you do to really start thinking seriously, because the act of writing forces you to grapple with the structure, details, and other elements of your content in a deep way. If over-relied upon, generative AI truncates the thinking process along with the writing process.
The Future of AI-Generated Text
Despite all of our skepticism, we’re optimistic about generative AI long term. We’re just in the messy early adolescence of its development.
In the years ahead, as tens of billions of dollars of investments are made, we anticipate that not only will LLMs become much more advanced, more accurate, and less risky, but organizations will have lots of ways to tailor these tools to their brands and to their marketing programs. The market will consolidate as leaders emerge, and mature pricing models will be established that allow the leaders to recoup the tens of billions they’ve invested, plus healthy margins.
In the meantime, now is the time to put these new technologies at the forefront of your mind and begin the process of once again adapting to a new era of technology in the workplace. Start slow and keep in mind that change management is hard. However, also remember that the modern office has changed dramatically many times in the last 50 years, each time creating new and interesting roles and opportunities. Now is the time to explore and start to reimagine your workflows and processes, always with the goal of improving the customer experience.
(Editor’s note: None of this post was written by AI. Not a single word.)
Need help exploring how generative AI and other AI tools can improve your digital marketing? Oracle Marketing Consulting has more than 500 of the leading marketing minds ready to help you to achieve more with the leading marketing cloud, including Analytic & Strategic Services and Creative Services teams that can help you use AI tools wisely and safely.
Talk to your Oracle account manager, visit us online, or reach out to us at CXMconsulting_ww@Oracle.com
Alex Stegall is the Director of Analytic & Strategic Services at Oracle Marketing Consulting. He has worked in digital marketing and ecommerce for over 10 years, both in-house and as a consultant, specializing in bringing a data-driven analytic approach to decision-making and design. His background in data science and creative ecommerce development give him a unique perspective on how companies can present and think about their data in ways that engage and inform.
Clint Kaiser is the Head of the Analytic & Strategic Services team at Oracle Marketing Consulting. His background in the email marketing space includes 20 years of experience with ESPs and digital agencies. His analytical approach to driving change in digital marketing is reflected in his quantitative approach to improving clients' business outcomes.
Chad S. White is the Head of Research at Oracle Marketing Consulting and the author of four editions of Email Marketing Rules and nearly 4,000 posts about digital and email marketing. A former journalist, he’s been featured in more than 100 publications, including The New York Times, The Wall Street Journal, and Advertising Age. Chad was named the ANA's 2018 Email Marketer Thought Leader of the Year. Follow him on LinkedIn, Twitter, and Mastodon.