“That’s AI”: Dealing with negative public perceptions of AI
Campaigns that use AI are facing increasing amounts of online ridicule. How should adland respond?
“That’s AI,” my dad said with an air of surprised disappointment. We were sitting in a pub, staring blankly at a TV, having watched a heavy West Ham defeat. Football-induced low mood to one side, however, I was struck not only by the fact that he had noticed the AI-generated ad but also by the fact that he was bothered by it. My dad, who does not spend his spare time (or career) analysing advertising, proceeded to comment on the ad’s quality and cheapness. “How much did they save making that crap?” he asked.
Here lies the great paradox: my dad uses ChatGPT religiously. And he is not alone. The tech is booming in popularity, yet many people remain extremely cautious - and critical - of its use in the media they consume.
At the moment (and this is perhaps the key phrase), most AI-generated videos and ads are easy for most people to spot, ridicule and label as ‘lazy.’
Beyond aesthetics, audiences are aware that these productions may affect creative jobs, feeding a growing unease around automation in the creative industries and in employment more widely.
It is not just anecdotal. Much of the online sentiment toward AI-generated media is negative and distrustful. Perhaps it is because AI content often feels inauthentic or bland, or because it sometimes triggers the ‘uncanny valley’ effect - those almost-human visuals that feel eerily and subtly off, making the artificial nature hard to ignore. Apps like OpenAI’s Sora have taken social media by storm, letting users generate viral videos of celebrities - mostly deceased ones - doing things that feel strikingly real. While viral, these videos reinforce some of the public’s unease about AI’s power to manipulate reality.
Even in text-based content, AI missteps are easy to spot. A post from Nottingham Forest FC announcing the appointment of Sean Dyche in October went viral on X after fans realised it was written by AI. One post, viewed 2.1 million times, bluntly stated that football clubs using AI in this way “should be absolutely ashamed of themselves.” Similar reactions appear across platforms whenever AI-generated copy or video fails to hit the mark. Audiences notice, judge and often mock what has been coined ‘AI slop’.
More recently, this backlash has moved beyond isolated social posts and into real-world brand decisions. During the festive period - when the eyes of the world are firmly fixed on advertising - McDonald’s Netherlands pulled an end-to-end AI campaign titled ‘The Most Terrible Time of the Year’ after just one week of heavy online criticism. In a statement, the brand addressed the tone of the ad rather than the use of AI itself: “The Christmas commercial was intended to show the stressful moments during the holidays in the Netherlands. However, we notice - based on the social comments and international media coverage - that for many guests this period is ‘the most wonderful time of the year’.”
Coca-Cola’s AI-generated Christmas ad - the second fully AI-generated festive spot in a row - arguably received the most attention online this festive season, and not entirely for good reasons. IGN reports that the company used even fewer people to make the ad - 20, down from 50 - and Jason Zada, founder and CCO of Secret Level, the AI studio that created the work, said: “We need to keep moving forward and pushing the envelope … the genie is out of the bottle, and you’re not going to put it back in.” Members of the public were quick to respond to the campaign, writing on X:
"This is disgusting. You’re a multiBILLION dollar company. Pay REAL animators,’ and ‘Flexing that you put even more people out of a job is CRAZY, especially when this isn't some metaphorical genie and is a technology forcing slop onto people."
These posts alone received 300,000 and 400,000 views respectively.
Yet while social media outrage is loud and immediate, it may not necessarily reflect the broader audience. Both of Coca-Cola’s headline-grabbing AI campaigns over the past two years have scored highly for predicted long- and short-term effectiveness among general consumers, according to Andrew Tindall of System1. Writing in the Financial Times, columnist Sarah O’Connor also noted that Coca-Cola’s decision to repeat its AI approach suggests last year’s effort was commercially successful. “The Coca-Cola Christmas advert is really just a regular refresh of a popular idea which was originally generated by humans,” she wrote. “It taps into nostalgia.”
System1’s latest analysis of generative-AI advertising provides further clarity on audience reactions. Across 18 AI-generated long-form ads tested in the UK, US and Australia with 2,700 participants, only 24% of viewers spontaneously identified any ad as “digitally enhanced,” and most described them as “like a typical, professionally produced ad.” Even when asked directly whether AI had been used, 62% said yes - yet emotional engagement remained high, with AI ads averaging 3.4 stars (out of five) compared with 2.3 stars for typical ads. In other words, audiences respond emotionally to AI content even if they think they dislike it, and AI involvement does not appear to reduce effectiveness.
“The fear that audiences will reject AI advertising simply isn’t supported by data,”
Tindall concludes, highlighting that perception is more nuanced than the social media outcry suggests.
Wider research beyond the advertising industry paints a similar picture. The most recent government research from the Department for Science, Innovation and Technology found that 43% of adults think AI will have a positive impact on society, compared with 33% who expect a negative one. On a personal level, optimism narrows: 39% see AI benefiting them directly, while 29% expect harm. Unsurprisingly, among digitally disengaged adults the anxiety grows sharper, with over half (51%) believing AI will have a negative impact on their lives.
The picture, then, is not one of outright rejection but cautious uncertainty. The loudest critics dominate the feed, but many people are still forming their views. For advertisers, this means there is space to shape perception - yet also a clear risk if the industry gets it wrong.
This all presents a major challenge for advertising. AI can accelerate content creation and personalise campaigns at scale, but if the resulting media is detectable, dull or untrustworthy, it risks eroding the very credibility brands are trying to build - in part because, as Tindall points out, while AI can generate visuals and copy effectively, it still struggles with storytelling, character and the emotional nuance that drives long-term brand engagement.
The AI conversation is no longer just about novelty and efficiency - it is about quality, authenticity and trust. These are not new topics of discussion or problems for the advertising industry, but AI reframes them in sharper terms. Ultimately, the question that lies beneath all this is: how can human creativity remain the differentiator between effective and forgettable AI-produced campaigns?
We posed two questions to a group of advertising industry leaders and their perspectives shaped this article on AI perceptions as well as our piece on what’s next for AI. We’re grateful to them for openly sharing their insights and experiences.
Nemanja Pantelic
AI transformation lead
VML
If you want to create content with AI, you still need to know what good content is.
Without that human sense of quality and meaning, AI is just noise at scale.
That’s why we use AI to scale production, not to replace creativity. We rely on clear style guides, glossaries, and tone-of-voice rules to make sure every piece still sounds like the brand people know. We combine that with our clients’ goals and aspirations, so the content actually serves a purpose. And we keep humans involved to judge what’s good and what isn’t. When you combine that with being open about how AI is used, you can stay efficient without losing authenticity or trust.
Ingrid Olmesdahl
AI Transformation Director, EMEA
Ogilvy
While consumers will accept some AI-generated content, it should feel invisible. Our approach has always been about augmenting how we think, create and execute, not replacing it.
The core of creativity still lies in the authentic power of human creativity.
A great example of this is Hellmann's “Brat” stunt - a real-time cultural hack from Ogilvy UK that turned a banned Charli XCX tour poster into an unmissable moment. When Charli joked on TikTok that her banned poster was "just a sandwich bag," we showed up at her Birmingham show with actual mayo-filled sandwiches in plastic bags that mirrored the tour poster. This earned a place in culture through creativity that's hard to imagine an LLM generating.
But from insights to execution, AI is a powerful tool to help inspiration, idea generation, audience targeting and more - when used by capable human hands. That's why we are embedding AI across the entire agency. We want every colleague to become an AI superuser so that they can augment their human and creative skills.
Ravi Pau
Head of AI Operations
Havas
The backlash against AI content runs deeper than quality. It comes down to intent. People can tell the difference between something made for them versus something simply pushed at them. When brands use AI without empathy, it doesn’t feel clever. It feels careless. Lazy, even. If the brand can’t be bothered, why should the consumer?
That’s why human oversight is crucial. The best ideas need someone to sense when something feels off, to bring back the nuance, the humour, the bit of judgement that a machine can’t fake. When that layer is missing, the work might look perfect, but it rarely feels it.
This is against a background of understandably deepening scepticism. Deepfakes have continued to blur the line between what’s real and what’s believable. Audiences don’t care how slick something looks if it doesn’t ring true and are quick to call out perceived inauthenticity. This only stands to fray the relationship between brand and consumer.
I’m currently being stalked around the internet by work that’s beautifully produced but oddly hollow. The words land, the pictures shine… yet it doesn’t quite connect. It’s a reminder of how easily intent can get lost in execution, and how fine the line is between innovation and imitation.
We know the work that sticks is built on emotion. When it feels genuine, people stop worrying about how it was made. They respond to care, not convenience. Machines can copy, but they can’t care.
The challenge isn’t hiding our use of AI. It’s about using it to make the work feel more human. Not less.
Lawrence Dodds
Managing Partner
UM London
People do not inherently dislike AI. Research shows audiences respond negatively only when content feels inauthentic, automated, or lacking human effort. The issue is perception—AI seen as a shortcut triggers discomfort, akin to the uncanny valley effect. Consumers notice when brands rely too heavily on automation at the expense of craft; they still seek work that feels thoughtful, emotional, and made with care. At UM, we view AI as an enabler of creativity: humans build ideas, AI enhances the process.
AI functions best as a copilot, not an originator. It accelerates ideation, iteration, and analysis, but creativity remains a human responsibility. The creative act involves selection, reflection, and judgment. Without oversight, AI outputs risk uniformity, replicating familiar patterns and favouring expected answers—a “race to the mean” that stifles full-colour creativity.
In practice, AI enhances media and creative workflows through intelligent automation and adaptive learning. It supports faster drafting, dynamic testing, and performance optimisation. At UM, we use AI to augment media ideas, model audience intent, test creative across formats, analyse performance, and identify under-leveraged cultural moments. These applications boost productivity without replacing human instinct. AI accelerates the process; humans define the purpose.
AI helps teams plan at the pace of culture, sketching, testing, and adapting faster than ever. But divergent thinking, emotional intelligence, and contextual awareness remain uniquely human strengths. Agencies that invest in talent while integrating AI achieve better outcomes. UM ensures technology amplifies imagination rather than replaces it.
Transparency and ethical governance are central to our approach. Generative AI introduces risks, including misinformation and trust erosion. Industry frameworks emphasise accountability and verification. UM supports trusted ecosystems, ensuring media partners deliver environments where AI-generated material is transparent, traceable, and responsibly filtered.
The goal is not to hide AI or treat it as novelty, but to use it intelligently and honestly.
When humans remain in control of meaning and emotion, work feels genuine.
Agencies must balance efficiency with imagination, removing friction while amplifying creativity, ultimately producing work in full colour.
UM stands for vibrancy over uniformity, imagination over imitation, and human difference as the ultimate source of value.
Kahmen Lai
Senior Vice President, Integrated Media Strategy
Weber Shandwick UK
It’s an unfortunate time for people who naturally like em-dashes, bold statements followed by explanatory text, or ‘this, not that’ framing.
Accusations that content is AI-generated are routinely tossed around online. It seems to carry a neat threefold insult: that the creator is somehow unoriginal, that they are inauthentic, and that they are also incompetent – lacking the discernment to be able to edit the product into a non-AI-detectable form.
While that isn’t necessarily true, what it says is that despite our modern scepticism, despite the content overload, despite our critical thinking fatigue, we still hunger for authenticity and credibility in what we consume.
And we can achieve that, even when we use AI to accelerate and simplify certain processes in our workflows.
Our responsibility as agencies is firstly to start from a position of transparency. Use of AI shouldn’t be and isn’t inherently shameful; we are advanced enough to understand the need to optimise certain processes without compromising the quality of the product. Disclosing use is critical to maintaining trust with our clients and (if public-facing) with audiences.
Secondly, human intelligence is what powers artificial intelligence. Humans have the first, and last, say.
This is largely why AI can drive the insights engine, or first drafts of content, but typically doesn’t take control of the strategic or creative process. Knowing how to implement AI is only half the battle; knowing when to implement it is the other half.
Troy Farnworth
Executive Creative Director
Leith
Are audiences rejecting AI? Yes, there have been quite a few missteps with brands getting it wrong. The sea of AI content is only growing on social platforms, but I don’t hear or see a mass exodus from the likes of TikTok, Instagram et al. I read a stat recently that said, for the first time, more social content is created by AI than by humans. And from what I’ve seen, that’s still only going one way.
I think brands must tread a very careful line though.
Any brand that looks like it’s doing things cheaply or taking short-cuts looks bad. That’s not a new AI problem. It’s the motive behind decisions that is being evaluated, rather than the specific content.
As an example, in Scotland the train platform announcements were replaced by an AI voice. The human voiceover artist quite rightly said they’d just replicated her voice and put her out of a job. It became big news and was even debated in parliament. Rightly, the train operator reversed the decision and went back to using a human voice. But here's the rub: most people on the platforms didn’t care who read out ‘train approaching on platform 5 is…’ A small proportion do, but most don’t. In fact, most didn’t notice any change whatsoever. So this is a moral debate, not an executional one. Rejection of AI won’t be executional, so it will come down to what resonates and what doesn’t. Sometimes things that are deeply human resonate. Sometimes Marvel films with 90% CGI resonate. Both will have a place. Knowing where that place is will be the skill.
Konrad Shek
Director of Public Policy and Regulation
Advertising Association
Research suggests that the public remains wary about the role of AI in creating advertisements and appears to strongly favour transparency through clear labelling. Although there is an apparent preference for labelling, we also need to be mindful of risks such as the ‘implied truth effect’, whereby consumers may perceive unlabelled content as more trustworthy, or overlabelling, whereby excessive disclosure just becomes background noise that consumers ignore.
This points to a more fundamental challenge: how agencies can harness AI's creative potential whilst preserving authenticity. Successful approaches might include using AI as a creative catalyst rather than replacement - for instance, employing AI for initial concept generation or A/B testing creative variants whilst ensuring human strategists shape the final creative direction. Agencies could also focus AI applications on behind-the-scenes optimisation (media buying, audience insights) rather than front-facing creative execution where authenticity concerns are most acute.
What's clear is the broader trust issues that agencies must actively address. Successful agencies will need to invest time and resources in understanding the risks, how they undermine trust in AI, and developing effective strategies to mitigate them. There is also a compelling case for industry to invest more resources in media literacy, especially as some of the evidence suggests that greater familiarity with AI and its perceived benefits correlates with more open-mindedness towards its use.
As AI continues to evolve, maintaining public trust through transparency and ethical implementation and demonstrating human creative leadership will be crucial for its successful adoption in advertising.
Helen James
CEO
The Gate, and Fergus Dyer-Smith, M3 Labs
The real question isn't "should we disclose AI?" It's "why does this feel different?"
People aren't rejecting AI content because it's AI. They're rejecting the lack of authenticity or understanding of the audience. Transparency alone won't save lazy work. The human touch isn't about whether a person tapped the keyboard. It's about whether a human decided what to say, understood why it mattered, and gave a monkey's about getting it right. AI can write the sentence, but it can't care whether the sentence is true, or kind, or necessary.
So agencies should use AI to get to the work faster: the research, the variations, the grunt work. But the creative leap? What makes someone feel seen? That still requires a human who knows what they're looking at. As AI gets better, the gap between "this was made" and "this was meant" will matter more than ever.
"Tagging is one way to go, but much of the evidence points to the fact that doing so actually induces a penalty. Given the speed of change, though, this may be short-lived. Audiences adapt fast when the value is clear. Photoshop moved from scandal to standard once ethics matured, Auto-Tune was once "cheating," and native ads settled once disclosure norms did. I’d expect the same arc here.
The question won't be "was AI used," it will be "does this respect the audience and make the work better?"
Agencies should keep humans in the driver's seat for authenticity and judgment and use AI as the production engine everywhere else. Put AI on repeatable, low-risk work such as variations or personalisation. Graduate it to bigger creative moments only when it beats human control and clears hard tests for accuracy, bias and legal, and ensure a named editor is accountable. The result: audiences get human judgment where it counts, and brands get AI speed where it helps.
As AI becomes embedded in the creative process, agencies will need to think about integrity in the same way brands once did with sustainability. Ethical sourcing will become a new marker of credibility. Being able to say “our AI doesn’t steal” will matter just as much as saying a product is responsibly made.

This also exposes a bigger challenge coming down the track: ownership, copyright and IP are entering a kind of global Wild West. While the UK market is approaching the issue with caution, brands creating content for international audiences face a patchwork of regulations - with laws around AI authorship, data use and copyright protection differing widely across territories. The result is growing complexity and uncertainty around who truly owns an idea or image. As AI matures, this will become one of the defining debates for the industry.

Agencies will need to develop new forms of craft - curating and fine-tuning inputs responsibly, understanding which data to include, exclude or protect - while also helping to shape the rules that govern creative ownership in this new era. Those who lead that conversation won’t just protect their reputation; they’ll help define the future economics of creativity itself. The solution lies in creative provenance - building transparent systems that can trace and credit the origins of an idea, giving both audiences and creators confidence in the integrity of the work.
Sean Betts
Chief AI & Innovation Officer
OMG UK
Transparency shouldn’t be a reaction to audience scepticism; it's an ethical responsibility. Agencies must be open about when, how, and why AI is used. Declaring its role builds trust not because audiences demand it, but because integrity in the creative process matters.
AI can remove friction, surface insights, and expand possibility, but the emotional intelligence, storytelling, and judgement that connect with consumers should always remain human-led. AI should enhance, not replace, human creativity.
The fundamentals of great advertising will never change. It should be useful, interesting, and entertaining.
However, clients and agencies need to walk a fine line. Public sentiment towards AI varies greatly by sector, and transparency alone won't ensure acceptance. The backlash to campaigns from Toys "R" Us and Coca-Cola shows that timing, tone, and execution matter as much as intent.
Ultimately, we need to lead by example and reward work that uses AI tools to enhance creativity whilst protecting the human insight, cultural awareness, and bravery that leads to great advertising.
David Morgan
Head of Creative and Production
What's Possible Group
Public perception of AI-generated content is going through a fascinating shift. On one hand, audiences are still impressed by what these models can create. Every time a new system launches, it brings a wave of viral clips like hyper-real Sora tests, celebrity deepfakes, or that infamous AI-generated Drake track that spiralled across the internet. People enjoy the spectacle.
But once AI enters the world of advertising, the reaction changes. Consumers now have a sharp instinct for spotting AI in brand comms, and when something feels cheaply made, generic, or like a shortcut, trust crashes instantly. The perception at the moment is: AI = cutting corners.
And yet, the industry’s behaviour contradicts that assumption. Coca-Cola has proudly run fully AI-enabled Christmas campaigns two years in a row, making it clear this is a creative direction, not a compromise.
For me, the difference between acceptance and rejection comes down to intention and transparency.
AI becomes a credibility problem when a brand tries to pass it off as traditional filmmaking or tries to “hide” the technique. But when a campaign leans into stylisation, heightened worlds, or playful visual logic (as Sixt did so well recently), audiences are more than happy to engage. They don’t feel tricked; they feel entertained.
My view is simple: AI is a tool. It works best when you embrace what it is, not when you pretend it’s something else.
If you use AI to create stylised, imaginative, slightly surreal worlds, spaces where suspension of disbelief is part of the fun, the audience goes with you. What they reject is inauthenticity, not the technology itself.
Used thoughtfully, AI doesn’t remove the human touch. It simply expands the palette creative teams can work with. At What’s Possible Creative, that principle guides everything we do; AI isn’t a shortcut, it’s another craft tool we use to deepen the storytelling, not flatten it.
The future belongs to agencies that use AI to expand their imagination, not replace it, teams that keep human judgement, craft and emotional intelligence at the heart of the process.
Jasleen Carroll
Director of AI and Product
Alex Calder
Head of Education and Innovation
Anything Is Possible
The idea that audiences instinctively reject AI content feels increasingly outdated - people aren’t anti-AI; they’re anti-inauthenticity. Most consumers already interact with AI daily through Spotify, Netflix, or Google without blinking. When brands use AI to invite creativity rather than conceal it, the response is overwhelmingly positive.
The backlash we see against AI-generated content isn’t really about AI; it’s a backlash against lazy marketing. Audiences are savvy and can spot low-quality, generic content a mile away - what many are referring to as ‘AI slop’.
Take Coca-Cola’s “Create Real Magic” campaign, for instance: it let fans co-create art with AI and was met with enthusiasm, not backlash. Similarly, Maybelline’s viral “sky-high mascara train” activation using synthetic video generated delight and reach, not skepticism.
The truth is, audiences reward honesty and imagination — they just want to feel a human spark behind the tech.
For agencies, credibility doesn’t come from hiding AI, but from showing it’s being used with purpose, care, and creativity - taking your customers along with you on your AI enablement journey.
The smart approach is to define your human:AI ratio. Use AI for the heavy lifting it excels at: analysing data, summarising research, or generating initial ideas. But the final strategic thinking, the emotional nuance, and the authentic brand storytelling must come from a human. For example, we see great results when AI is used for conversion rate optimisation, a task that benefits from rapid, data-led iteration. The goal shouldn’t be to hide the machine, but to use it to free up your best people to do their most human, creative work.
Ollie Sloan
Head of Strategy
Arke Agency
The audience isn’t anti-AI. They’re anti-fakery. The backlash often happens when brands hide behind it, when AI is used as a shortcut instead of a tool. The starting point has to be transparency, not just saying “we use AI,” but explaining why. If it’s there to improve efficiency, to reduce costs, or to free up creative teams to focus on deeper conceptual work, that’s something audiences respect. When AI is part of the process but not the product, the authenticity isn’t compromised.
The real credibility killer is pretending AI can replace the human insight that gives creative work meaning.
The role of an agency is to make sure AI amplifies that human instinct and doesn’t imitate it. So the discipline is in how we use it: creative intelligence, not artificial replacement. That means showing the human rationale behind the machine outputs, and keeping a clear voice that belongs to people with experience and real-life execution, not prompts.
Key takeaways:
- Transparency and rationale behind use.
- AI as an enhancer, not a substitute.
- Protecting creative judgment and voice.
- Human accountability in every stage of the process.
- Audiences value honesty over perfection.
Sam Green
Chief Technology Officer
Croud
The backlash against AI-generated content isn’t really about the technology; it’s about lazy implementation. Audiences can spot generic content that lacks human insight instantly, and more importantly, generic or lazy AI work actively erodes a brand’s salience. A brand communication is only as valuable as its ability to be easily remembered at the point of purchase, and indistinct AI outputs risk blurring the very distinctiveness and differentiation brands rely on to stay top-of-mind.
The solution? Use AI more thoughtfully. Agencies should use AI to streamline processes and scale output, while keeping human creativity and strategic thinking at the core.
A strict ‘human-in-the-loop’ approach to AI automation ensures strategic context is preserved and brand authenticity remains intact.
Transparency is fast becoming a competitive edge. Agencies that use AI in a responsible, client-first way build trust, not mistrust. This means establishing robust quality-control workflows where AI outputs are thoroughly vetted by humans. The winners in this new era view AI not as a shortcut to cheap content, but as a powerful amplifier of human creativity - one that elevates brand salience rather than diluting it.
Alistair Hague
Senior AI and Automation Partner
Open Partners
Audiences are quick to spot when something feels artificial, and they’ll switch off the moment they sense it. The answer isn’t to hide AI, it’s to use it with purpose.
Keep people in the loop, use AI to enhance (not replace) human creativity, and be open about how it’s being used when it matters.
The sweet spot is using AI to handle scale and speed, while humans focus on emotion, storytelling and brand truth. If your audience feels the intent behind the work is genuine, they’ll stay with you. Accepting that ChatGPT is not a dirty word is something we all need to get used to.
Adam Cleaver
Founding Partner and Executive Creative Director
Collective
Audiences don’t hate AI. They just hate feeling like they’ve been cheated - out of craft, honesty, or just something that feels human.
There are three different reactions at play, and they often get muddled.
First, poor quality. When the work’s just not that good or leaves you with nothing. Like the ‘look we used AI’ ads that took thousands of AI rolls to find the shots that looked okay together. That’s not delivering on a creative vision, it’s roulette, because the machine has more control than the humans do.
For the intended audience, maybe that’s fine. But for anyone who still believes in emotional storytelling (especially those in the industry, who are the most annoyed by it) it’s not cool.
No one would have cared if they’d quietly used digital twins to speed up the 3D, or AI to extend or animate scenes, if it was all in service of the vision. The problem wasn’t the tool. It was the lack of control, taste and authorship.
Then there’s trust. Nottingham Forest letting AI write a message that should have come from a human voice. That’s not innovation, that’s outsourcing care, and nothing’s going to wind people up more.
And finally, fear. The Sora reels that look uncannily real. But you can’t watch them without wondering where that power leads in terms of changing the course of elections etc. It’s not awe, it’s unease. And that’s not even mentioning the fear we all rightly have of AI impacting jobs.
So yes, sometimes people just think the output’s crap.
But deeper down, they’re reacting to something else: a loss of control, of intent, of care.
How do we fix it? Keep the human vision at the centre and use AI when it helps you get closer to that vision, and leave it alone when it doesn’t.
Transparency? Yes, be transparent when the tech is the story, otherwise, just make the best work you can using the tools at your disposal. As long as they are used ethically you don’t owe anyone an asterisk.
Good creative systems already do this. They build around human judgement, taste, empathy, direction, and let AI do the heavy lifting where it makes sense: exploration, testing, refining or iterating/scaling. That’s how you get quality and speed without losing ‘soul’.
In short: Use AI to raise the floor, not lower the ceiling. Let it do the work, not the thinking. Set the human standard, the vision, and make the machines meet it. If they can’t, what’s the point of using them?
Eventually AI tools will be so integrated into workflows that no-one will care, as long as what you create is good.
Alistair Hornsby
Strategist
Craft Media London
Audiences are becoming quick to spot—and reject—AI-generated content. From backlash over AI-written posts to scepticism around synthetic imagery, trust and authenticity are now major challenges.
Coca-Cola’s 2025 all-AI Christmas ad made this clear: viewers said they preferred ads with real people. The issue wasn’t quality—it looked expensive—but that it looked AI-generated.
People’s reactions aren’t just about aesthetics; they’re responses to a broader unease with a world where everything looks cheaply, instantly made. The more AI-looking content fills feeds, the more people crave signs of human involvement. Authenticity has become the antithesis of AI.
AI can create realistic content—some studies show people can’t reliably tell AI faces from real ones. But as quality improves, audiences often like AI work less. The closer content gets to faking human effort, the more it’s rejected. Realism without human input feels creepy. When creative work feels low-effort or corner-cutting, people disengage. Coca-Cola’s ad, for instance, signalled cost-saving, not Christmas cheer.
Computer-generated work can connect when audiences sense human labour and intent—like Pixar films or video games that require years of craft. What audiences reward is not the absence of technology, but the presence of care, taste, and storytelling. Work that feels made with intent earns trust.
Audience expectations matter. Larger or luxury brands are held to higher standards, while smaller brands can use AI more freely for efficiency—versioning, copy, or design—without risking credibility. The danger lies in using AI to create content meant to express brand values or emotion; that’s where it risks feeling hollow. Neil Patel’s data shows AI-written blogs get less traffic over time—reinforcing that audiences value human input.
Even OpenAI knows this. Its recent campaign used 35mm film and real people, not AI tools. The result felt authentic because it looked like effort was spent.
Authentic effort signals brand integrity—suggesting you also care about your product and service.
AI will help smaller brands make and scale better content, just as social ads once did. But using AI well requires knowing your brand, your audience, and your strategy. Skilled creative professionals who understand what “good” looks like will get the most from it.
AI can make production cheaper, but if it devalues the sense of care and creativity audiences connect with, it risks weakening the very industry it promises to enhance.
Will Lion
CSO
BBH London
It’s like great service in restaurants: you don’t quite notice it when it’s good, but you definitely notice it when it goes wrong. The same applies to CGI - if you don’t notice it, it’s great; if you notice it, it’s failed. Maybe the same principle applies to AI in entertainment: if it feels natural, enhances the story and builds the world, it works. I’m sure even the next versions of Avatar have embraced this, as James Cameron is always at the edge of technological innovation.
On set, before AI, we applied the same thinking we do now. Does that feel cheap? Did laziness come through there? The public will probably spot that too. Maybe, for certain brands, it doesn’t matter as much; they can just show up and be entertaining. But for Paddy Power, for example, using Danny Dyer, telling a good story and having humour really makes a difference. It comes down to human attention - you filter out what you’ve seen a thousand times and the fresh stuff rises to the top. For AI it’s the same principles, but a different game.
On TikTok, for example, when you see a creator label saying “AI-generated,” it almost raises the bar. It’s so easy to generate a video of dropping someone out of a plane onto Earth - that’s not interesting. What is interesting is seeing Bigfoot creep up on people in the woods and then vlog about what he’s going to do to those campers later. The originality of the concept, the writing, the effort - that’s what stands out.
I think we’ll be quicker to punish the lazy, even if it looks glossy and perfect, and quicker to reward those who put in the time, effort, and good writing.
If you look at Sora, there’s a reason videos are going viral even if it’s ridiculous AI-generated content. I guess it is that creative element - different ideas “having sex,” right? Like the Queen and hip hop coming together, doing rap battles on a corner - that’s a good concept. You see that in a creative room and you know it’ll stand out. A strong process can elevate ideas way, way up.
Elbert Hubbard has a line: “One machine could do the work of fifty ordinary men. No machine can do the work of one extraordinary man.”
Obviously, it's older language and gendered, but the point stands. We’re in the same situation: machines can do so much, but one or two extraordinary people often make the difference on any project.
We here at BBH still believe in that difference people make.
Sara Chapman
Executive Experience Strategy Director
adam&eveDDB London
Consumers aren't rejecting AI content; they're reacting to brands that use it without considering context and connection.
Leaders in AI show a duality: they push forward, experimenting with how to blend creative and technology to create connection. But they also stand firm, knowing and showing their beliefs and values in how and when they use AI.
Instead of letting what's technically possible dictate the process, they ask where GenAI works in synergy with their brand's values and where human connection matters most. While the content may be synthetic, the impact is deeply authentic.
This brand-first approach fosters connection and trust.
Data backs this up. Gartner research shows that only 13% of consumers now totally reject AI. In fact, they expect GenAI use in marketing, but care deeply about how and where it's used. They’re open to its use for a simple update from the brand or information about a product, but when they're looking for connection or need to resolve a complaint, the human factor is what they want and expect.
As different brands experiment, new rules are emerging: be useful, amusing or creative and consumers will accept GenAI. Use it to shortcut true emotion or connection and expect disdain.
To protect creativity, our agency teams must also embrace that duality, finding a new definition of authentic creative craft that embraces the potential of technology in a way that nurtures and augments the emotional spark great storytelling delivers.
Rik Moore
Managing Partner, Strategy
The Kite Factory
I like the idea of baking the human touch into every AI output. At the bare minimum that should be checking and verifying the generated outputs. Nothing should go out without that.
Unchecked AI is at the very least lazy and at its worst, dangerous.
However, building on that, AI really comes into its own when the human and the machine collaborate.
Using AI as an ideation launch pad, then using human creative flair to push beyond what’s in front of you. It’s digitised ‘riffing’: building, refining and iterating ideas.
By pushing beyond the AI-iterated start point in front of you, I see real benefits:
- Pushing yourself and your abilities to go beyond it challenges you to be creative, which keeps your skills sharp and avoids any atrophy in your personal creative abilities (see the recent MIT study on that topic).
- Breaking new ground that is unique to you, which resolves the audience disengagement that comes with lazy AI.
Fergal O’Connor
CEO and Founder
Buymedia
AI should be used to support humans, rather than doing it all. Agencies that use the right AI tools in collaboration with humans, in the most efficient way, will stay ahead of the competition. Agencies will have an advantage with both humans and AI working together, allowing for increased efficiency without losing creativity.
AI’s real value is in clearing the grunt work, automating the repetitive, freeing up time for the interesting, genuinely creative thinking that only people can do.
Tasks where AI can speed up the more mundane and administrative processes, or where AI helps lay the foundation of an idea or campaign, mean the creative teams have their time freed up for what they do best: adding the sparkle, originality and magic on top.
Agencies not using AI will fall behind. Brands will go for what’s most efficient from a cost and time perspective, and the agencies that are competitive on price and have a faster turnaround, without losing the creative edge, will ultimately win.
Ben Perez Usher
Creative Director
AMV BBDO
Audiences aren’t rejecting AI because of quality. The technology’s improving rapidly and it’s becoming harder to separate from reality. They’re rejecting it when they feel they are being lied to. The flood of viral videos that turn out to be fake. Big brands not admitting they’re using it until the outcry forces them to. People have an unerring bullshit radar, and we need to respect that.
If you’re honest with people, they’ll judge the work on its merits. Coca-Cola openly said they used AI in their latest ‘Holidays Are Coming’ ad, and apart from a few harrumphs from industry types on LinkedIn, consumers seem fine with it. Autotrader launched their transparently AI-driven (pardon the pun) ‘It’s time for Autotrader’ campaign, and as far as I can tell there were no protests in the streets.
The answer to harnessing AI’s creative power without eroding credibility is authenticity.
Being open with your audience about how you’re using AI. Use it to elevate, not eliminate, human creativity.
That’s what we tried to do with our recent Intuit QuickBooks campaign. QuickBooks’ business platform combines Human Intelligence with Artificial Intelligence. So we thought, “why not try and make a campaign the same way?”
AI-Verts showcases real businesses and real humans, with AI bringing their stories to life in crazily disruptive ways. We didn’t hide the use of AI, we put it in the first frame of every film, a title saying, ‘Made with Human & Artificial Intelligence’. We made AI a key part of the idea, in a way that felt true to the brand.
With AI, the possibilities are endless, but it takes human minds to choose the right one for the brief.
The biggest lesson we’re learning as we use AI? Welcome uncertainty. Obsess over every detail you can control, set the destination, and then let the AI run wild. Hold the creative reins lightly to enjoy the unexpected, then bring your human taste, intuition, and experience to bear in order to shape the final result.
If our final audience is human (and for the foreseeable future that will still be the case, sorry robot overlords), humanity needs to be at the heart of the work. And that’s how adland raises the standard: by letting AI expand the creative canvas, while humans guide the brush.
Dr Alexandra Dobra-Kiel
Innovation and Strategy Director
Behave
Three Nobel-winning economists recently issued a silent alarm: only disruptive innovation can reverse our economic decline. For those of us in the creative fields, this is our moment of reckoning. Yet, we often dress our hesitation in the robes of principle, fearing that automation will erase the human trace. This is a misdiagnosis. Avoiding AI doesn't safeguard creativity; it sentences it to the periphery.
Audiences don't consume content; they detect presence. They can feel the void where human care should be. Their rejection is not of the tool, but of the emptiness behind it.
The final frontier is not technical prowess, but emotional resonance, the capacity for empathy that remains our exclusive domain.
Authenticity is not a brand; it is a stance. In this new era, trust is forged not in the sterile glow of perfection, but in the honest soil of the process. Teams that work transparently, that claim ownership of their stumbles as well as their triumphs, build a different kind of bond with their audience - one that is fracture-resistant.
The true peril is not obsolescence, but apathy. When AI becomes a bypass instead of a lens, we generate a surplus of meaninglessness. We need creative cultures that are ecosystems of courage, where curiosity is the currency and compassion is the protocol. This requires the bravery to take deliberate, felt risks.
Our research at Behave points to three pillars for such cultures:
- Growth to enable anticipation: The goal is not just to learn, but to actively seek the unknown. This is a culture that questions its own core assumptions, treating every project as a prototype for a deeper understanding. Here, AI becomes a compass for uncharted territory, not merely a faster horse on a well-worn path.
- Belonging to enable friction: True creativity is not streamlined; it is ignited by the clash of diverse, dissenting voices. We must build arenas where creative tension is harnessed, not suppressed – this intellectual grit is the antibody to AI's sterile uniformity.
- Resilience to enable digging: This is not about bouncing back, but about leaning in. Resilient teams metabolise setbacks into insight. They cultivate spaces where experiments can fail loudly and constructively, producing work that is both bold and worthy of trust.
These are not abstract ideals. They are the essential conditions for creating work that resonates in an automated age. Teams anchored in this ethos make decisions an audience can feel viscerally. They are the guardians of the irreplicable: human judgment, intuition, and the messy, brilliant insight that breathes life into code.
Audiences do not oppose the artificial; they resist the thoughtless. The creative ventures that will thrive will not be the most efficient content factories, but the most perceptive architects of human feeling. Technology was never a substitute for creativity. It is a partner, a provocateur, a means to magnify our own humanity. Wielded with this clarity, AI becomes more than a tool - it becomes a threshold. It challenges us to work with greater care, to shape ideas with deeper empathy, and to fortify the crucial connection between the creator and the world.
Mark Fawcett
CEO
We Are Futures
Young consumers aren’t rejecting AI - they’re redefining authenticity around it. For Gen Z and Gen Alpha, technology isn’t the issue; trust is.
They use AI every day - to study, create, plan and play - yet instinctively sense when a brand’s voice feels synthetic. They can spot an AI-written caption, a generic image or a brand voice that’s lost its human edge. And when they do, they disengage fast.
Research from the National Literacy Trust (2025) shows two-thirds of 13–18-year-olds already use generative AI, but almost half double-check what it produces. That critical mindset extends to brands too. They want real stories, not polished prompts.
For agencies and marketers, the lesson is that audiences don’t mind that AI is in the mix; they mind when it replaces creativity instead of enhancing it. The challenge isn’t whether to use AI, but how to use it in ways that still feel unmistakably human.
So, show your workings. Let audiences see the collaboration between human and machine - through behind-the-scenes content, co-creation, or creative process storytelling. Use AI to open up imagination and speed, not to automate connection.
By being transparent about where AI plays a role, and keeping people’s humour, imperfections and intent front and centre, brands can earn credibility rather than lose it.
The most powerful youth brands in the next decade won’t hide their use of AI; they’ll humanise it.
Because in an era where everything can be generated, what feels genuinely made is what earns attention and, more importantly, trust.
Gen Z can spot AI a mile off - and they’re calling it out. Don’t be the brand that gets caught faking it. Be the one that shows how real creativity evolves.
Gerard Crichlow
SVP, Global Social Strategy Director
McCann
I love this question. I got into this industry to make better work that serves people. While AI is an amazing tool, we shouldn't forget why we're using it.
If agencies use AI purely for efficiency without layering in human insight and cultural relevance, they've already lost the authenticity battle.
AI is most useful when it helps us better understand and serve our audiences. Instead of pushing more content at folks, we should get better at listening, curating and discerning what communities actually want. Instead of hiding it, we should be honest about using it to enhance our creativity to create something more culturally meaningful. That's where we could unlock real co-creativity and deepen relationships with people in ways we haven't been able to before.
To me, the backlash against AI isn't just about the technology, it's about brands failing to create value for their audiences.
If I were to reframe the question, it isn't "how do we sneak AI past skeptical audiences?" It's "how do we use AI to create more culturally relevant, authentic experiences that actually add value to people's lives?" If we can't answer that, we shouldn't be using AI at all.
Karen Boswell
Global CEO, Consulting, Experience and Media; Group Lead for AI and IP Development
M+C Saatchi
The key to navigating AI scepticism is a human-first philosophy. AI should amplify human creativity, not replace it. At our agency, we've found success using AI as a collaborator in the ideation phase - generating multiple creative territories quickly - then letting human judgement, cultural intuition, and craft shape the final work.
Transparency also builds trust. When AI plays a significant role in content creation, being upfront about it (where appropriate) demonstrates confidence rather than deception. More importantly, AI should handle the scaffolding - research, asset variations, production efficiency - while humans focus on cultural power: the insights, emotional resonance, and authentic storytelling that machines can't replicate.
The most successful work we've seen uses AI to work at the pace of cultural production, allowing teams to respond to trends in real-time while maintaining strategic depth. This means using AI for speed and scale - testing multiple brand ideas and executions, personalising at volume, adapting quickly - but always with human editors ensuring every output feels genuine and culturally relevant.
Ultimately, audiences don't reject AI content because it's AI-generated; they reject it when it feels hollow, generic, or disconnected from human experience.
Keep humans at the heart of the creative process, and AI becomes a powerful tool rather than a credibility risk.
Aditya Basole
Strategy Lead
T&P
Spotting AI-generated content isn’t the problem. As agencies and creators, we hold the responsibility of transparency. We must help audiences discern that, like most things, it isn’t black and white; it's a spectrum - from human-generated → human-assisted → AI-assisted → AI-generated, and a lot more in between.
Audiences reject content (irrespective of human or AI made) as an immune response to feeling duped. They feel duped when the “tells” of AI-generated content pull the audience out of the actual story being told. That is when trust breaks - when inauthentic, low-effort-low-quality, or deceptive content breaks the immersiveness of the experience (text or visual).
For agencies and creators, it is important to remember that craft - skill and expertise honed over years - is distinct from creation.
Inauthenticity happens when craft is scrimped on in the process of creation.
Upholding the principles of good craft is crucial to avoiding inauthenticity and maintaining trust with audiences.
In practice, we make sure that every asset is reviewed through the team’s unique craft lens (one that comes from diverse experiences) to check whether it delivers the message authentically.
Matt Muse
Creative Technology Director
T&P
As a Creative Technology Director in the advertising world, I’ve seen the magic that AI can bring to the table. But let’s be honest, public perception right now is a bit of a minefield. Audiences are getting savvy, quickly spotting AI-generated brand content and often giving it the cold shoulder. The backlash – especially around AI-written social posts and synthetic imagery – highlights just how crucial trust and authenticity are in our industry.
Take our recent campaign featuring an AI avatar of the legendary José Mourinho. We let fans receive personalised videos of him playfully giving their mates a dressing-down for their little slip-ups. It was a brilliant way to engage people, fully branded and, importantly, brand-safe. This kind of user/brand co-creation would have been a pipe dream without AI, proving just how far we’ve come already.
But here’s the rub: the challenge lies in how we use this incredible tool. I often see AI being deployed in a rather lazy way, leading to content that’s, well, less than inspiring. Like any tool, AI can be wielded to create stunning, engaging work or, on the flip side, it can churn out utter rubbish. When we transitioned from drawing boards to Photoshop, that didn’t mean we compromised on craft; it actually elevated our game. The same should apply to AI.
Now, let’s talk about those viral AI “slop” videos of cats cooking or tackling life’s dramas. People can’t get enough of them! But when brands make slop, they often get a frosty reception. Why? Because brands want something in return – whether it’s a sale, a click, or a follow. They need to put in the effort to create something worthwhile in exchange. In contrast, those cat videos are pure entertainment; they’re just about people giving creatively without any strings attached.
In advertising, there’s always an exchange. Sure, we have vouchers and sales, but if we’re not offering something tangible, we need to entertain or provide our audience with something to chat about with their mates. We need to tap into the common vernacular and make our content relatable. Simples.
Moving forward, we need to focus on using AI to create highly personalised, groundbreaking campaigns that truly resonate with our audiences. Let’s not get sucked into a race to the bottom of the slop barrel, churning out less-than-mediocre content. Instead, let’s aim for excellence - ensuring that every piece we create reflects the authenticity and creativity that people are hungry for and that our industry has always been famous for.

By embracing AI in a thoughtful and imaginative way, we can elevate our work and redefine what’s possible in advertising. This is the part of AI work that I’m really excited about!
