Attention Marketers: safeguard your brand against this AI issue.

Welcome to CXA's newsletter.

What is this newsletter? It's about marketing, consumer behaviour and AI, and it will always include a useful marketing chart. We're getting this out about once a week. The 'in summary' section lets you scan the main points in less than 60 seconds.

If that sounds like something you're interested in, please subscribe.

In summary:

  • Trust in AI is falling globally.
  • Usage of generative AI by marketers is increasing rapidly.
  • Just 16% of customers trust AI-generated content from brands.

Generative AI: usage up, trust falling.

The data is in on consumer trust in generative AI, and it makes for complicated reading for marketers and business leaders.

Marketers are using AI more than ever to generate content across social media, websites, blogs and long-form content. In fact, according to HubSpot's latest marketing survey:

  • 81% of marketers say content created with generative AI performs the same or better than fully human-created content.
  • 85% of marketers say it has changed the way they create content.
  • 62% say it is important to their marketing strategy.

However, while marketers and brands are racing to use generative AI in many areas of their businesses, one group that isn’t necessarily on board with this usage is consumers.

Over the last five years, consumer trust in AI has fallen from 61% to 53% globally, according to Edelman's Trust Barometer (2024).

Research published by CXM magazine in 2024 found that just 16% of consumers trust websites written completely by AI, and only 22% trust AI-generated product photos on websites. Marketers and business leaders face a significant challenge balancing two seemingly aligned but ultimately discordant forces:

  • Generative AI can produce more content than ever before, but
  • when customers know a brand's content is generated by AI, they trust it, and the brand, less.

Mistrust has been brewing for years

Mobile apps have enabled photo manipulation for years - manually and automatically - to make the sky bluer, tummies more toned and smiles whiter. This ability to warp reality has been endlessly discussed and we don't intend to re-interrogate it here, except to say that the rot set in long ago when it comes to trusting what we see at face value. Trust in photos, particularly on platforms that encourage living one's best life, has eroded to such an extent that online communities now exist solely to highlight 'Instagram vs reality'.

To users, a few improvements to a photo seem harmless. After all, a quick tweak to the lighting of a precious memory can make a significant difference to a photo, and to the ensuing likes, hearts and thumbs-ups one may receive from family and friends.

But it is a slippery slope: what may seem largely harmless on a one-to-one level can be interpreted very differently when adopted by a brand to sell a belief, an ethos, or products and services.

Is the availability of generative AI the problem?

One reason consumers may be less likely to trust AI-generated content is precisely because they know how easy that content is to generate.

Generative AI usage is exploding in personal and business lives. Its rapid adoption by technology companies, which have embedded it into computing and mobile devices, exposes more people faster than ever to the valuable but variable outputs and the generic, whitewashed language generative AI produces. (See generative AI usage in scientific papers.)

With the ability to see 'how the sausage is made', and with regular reporting of generative AI going wrong, consumers are likely to be even more sceptical of content from brands that feels generic.

Is banning generative AI the solution?

Trying to ban generative AI is likely to be about as effective as building a sand wall to hold back the tide. But we encourage you and our clients to see the debate about banning generative AI as a red herring. Banning generative AI usage is not really the point; the point is that, as humans, we do not value AI outputs to the extent we value human effort.

Millennials and Gen Z are the most educated generations ever, and attempting to convince them both to trust a brand and to part with their dollars using low-effort, AI-generated content is a recipe unlikely to win praise or deliver results.

Indeed, the renewed focus on authenticity in marketing reinforces the need for brands to make it a central tenet of their marketing strategy and brand persona.

Being honest about generative AI use is important

The rise of generative AI has also quickly given rise to 'AI checkers': websites and tools, often freely available, that check whether content was generated by AI.

QuillBot, for example, has an excellent free online tool that enables anyone to see whether content has likely been generated by AI and, if so, to what extent. (NB: QuillBot isn't infallible; it only highlights that content is likely AI-generated, not definitively so.)

The availability of these checkers will make it easier than ever for consumers (and journalists) to gauge how authentic a brand's published content really is. The risk is that if a consumer, influencer, journalist or regulator spots heavy AI usage, they could make quite a noise about it, which could be very damaging for a brand using AI to generate content.

What we at CXA can say with certainty is that, no matter how carefully brands blend generative AI with human input, some brand is going to be the first to get called out in the media for using AI-generated content with its customers.

In these moments, brands would do well to have a thorough understanding of where, when, how and why generative AI is or isn’t being used. It may be best for brands, particularly those in highly regulated industries, to ringfence where AI is and isn’t acceptable to use.

Which technology company will build in authenticity checkers first?

As generative AI seeps into even more areas of our lives, here at CXA we forecast that some companies will find the right balance between offering generative AI and giving consumers the ability to quickly check how much of the content in front of them has been generated by it.

Apple, which has taken a strong stance on consumer privacy and data, could easily take such a position and be rewarded for doing so.

The final word

Holding back generative AI from increased usage seems like an impossibility at this point. It is simply too convenient for companies and marketers of all sizes to generate scripts, social media content, blog posts, long- and short-form video, and more.

But brands, and marketers in particular, would do well to remember that their job is to build trust with consumers to drive the growth they seek.

The potential damage to a brand's reputation from 'churning out' AI-generated content with little to no human input won’t be worth the efficiencies it hoped to gain if the brand finds itself on the wrong side of that story.

