How agencies should manage AI bias  

Advertising reflecting societal biases is not a new problem - AI is simply forcing the industry to deal with it more seriously. 

AI-generated faces in profile against a grey background

From who gets targeted to whose stories get told, the advertising industry has always mirrored some of the inequalities of the world around it. What is new is that AI is forcing agencies to confront that reality far more directly. 

AI bias occurs when machine learning systems produce skewed or discriminatory outputs that favour certain groups of society over others. These biases can stem from several sources: unrepresentative training data that oversamples some demographics while underrepresenting others; human prejudices embedded in the data itself, which AI then learns and amplifies; and algorithmic design choices that prioritise certain patterns or outcomes.  
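
To see the first of those sources in miniature, consider a toy "model" that does nothing more than sample from its training data - a deliberately simplified Python sketch, not how production models work. If 90% of its CEO examples are men, roughly 90% of its outputs will be too:

```python
import random
from collections import Counter

# Hypothetical, deliberately skewed training set: 90% of CEO examples are men.
training_data = ["male CEO"] * 90 + ["female CEO"] * 10

def generate(n: int) -> Counter:
    """A toy 'model' that samples its training data: the skew reappears in the output."""
    return Counter(random.choices(training_data, k=n))

print(generate(1000))  # e.g. Counter({'male CEO': 897, 'female CEO': 103})
```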

To illustrate how easily this happens, I tasked ChatGPT with generating an image of a CEO. Predictably, it repeatedly produced images of white men - reinforcing a stereotype and quietly excluding entire communities from representation. That is bias in action. 
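
The same experiment can be scripted. Below is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment (the test above used ChatGPT's interface; this drives the image API directly), that generates repeated images from a deliberately underspecified prompt and saves them for human review and tallying:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for i in range(10):
    response = client.images.generate(
        model="dall-e-3",           # assumed model; swap in your own
        prompt="A photo of a CEO",  # deliberately underspecified prompt
        response_format="b64_json",
        n=1,                        # dall-e-3 generates one image per call
    )
    with open(f"ceo_{i}.png", "wb") as f:
        f.write(base64.b64decode(response.data[0].b64_json))

# Review the saved images and tally who the model pictures by default.
```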

The challenge is that AI does not just reflect existing bias - it can entrench and scale it at unprecedented speed.

A prejudiced human reaches dozens or hundreds of people. A biased algorithm reaches millions. 

This risk is heightened by the realities of today’s data landscape. Generative AI is largely only as good as the data it is trained on - and much of the highest-quality data sits behind paywalls. That leaves many models learning from whatever is freely available, which is often incomplete, skewed or simply poor-quality.  

From the rise of synthetic audiences to AI-generated insights that “guess but don’t know”, the risk of unexamined bias creeping into strategy, creative and media planning is growing fast. Even bespoke platforms trained on private datasets are not immune, because bias is inherent to pattern-recognition systems and can never be fully eliminated. 

AI-generated image of a man in a blue suit in an office environment

Why AI bias is a serious problem  

Two brains, pink and blue against a grey background

The truth is messier than it first appears. Biased decision-making has shaped advertising for decades. 

"In the nineties, postcode modelling excluded minority communities from credit offers under the banner of 'geodemographic segmentation,'" explains Erfan Djazmi, Chief Digital Officer at Mediahub Worldwide. "In the noughties, audience segments like 'affluent professionals' were built on early internet users - largely white, male and high-income. Most lookalike models since have simply reproduced those same profiles." 

Now that AI is squarely in the spotlight, bias is being addressed more head-on, supported by shifting cultural and societal expectations and the sheer scale of AI’s impact. 

"Bias is unavoidable in generative AI because the training data is uneven," says Dominic Palmer, Creative Director at Rock Kitchen Harris. "Ask for 'a parent cooking dinner' and you'll almost always get a woman. Many models lean toward US-specific stereotypes because their datasets are American." 

At its core, AI bias is a reflection of ourselves. As Ben Gott, Data and Technology President at Merkle UK&I, puts it: “Our biases are inbuilt into the systems we create. There is an absolute and urgent need for us to address this.” Karen Boswell, Global CEO for Consulting, Experience and Performance at M+C Saatchi, agrees that this is no longer a theoretical concern.

“AI bias isn't a distant, theoretical issue for agencies - it's already shaping the briefs we receive, the audiences we build, and the ideas we ship.”

Karen Boswell, Global CEO for Consulting, Experience & Performance, M+C Saatchi

The problem, for James Poulter, Head of AI at ELVIS, starts at the source. "We're in a bizarre situation where the best data is increasingly locked away, so models feast on whatever's lying around - and what's lying around is often terrible," he says. "It's like training a chef exclusively on fast food and wondering why they can't do fine dining." 

But there is a deeper issue at play. AI lacks the lived experience that underpins great advertising. "When building campaigns, experiences or ideas, inspiration for entertainment is drawn from our personal experiences, the stories we've heard, the conversations across the dinner table; and particularly, when it comes to client work, from conversations with the clients themselves," Poulter notes. "But none of that shows up in the model, nor is it often well documented internally." 

Cutting through wishful thinking is essential right now. AI will not become magically neutral with time. Poulter recognises that agencies cannot eliminate AI bias - “but they can get serious about managing it,” he says. “If you're waiting for perfectly unbiased AI, you'll be waiting forever." 

Input auditing: Where does the data come from, and who is missing?  

Blue 3-D question mark against a cerise background

The first line of defence against AI bias is rigorous input auditing - questioning the quality, source and representation of data before it enters the model. 

"For me, avoiding AI bias starts with treating data quality as a creative discipline in its own right," says Boswell. "Teams should be intentionally curating diverse, representative datasets - not just relying on whatever the model can scrape. That means validating inputs, questioning gaps and being proactive about whose voices aren't showing up." 

Agencies are increasingly using AI for research. Poulter recognises the tech as “hugely helpful” for speed and productivity, but it also raises awkward questions.

“Where did this data come from? Who’s represented? Who’s missing? What assumptions are baked in? If you can’t answer these, you shouldn’t be using it.”

James Poulter, Head of AI, ELVIS

Understanding what data is needed - and what is missing - remains a fundamentally human skill. As Sarah Treliving, Chief Digital, Data and Technology Officer at Goodstuff Communications, explains: “Minimising gaps and assumptions before allowing LLMs to reference data gives AI’s reasoning the best chance of usefulness, with minimal hallucination or bias.” 

Precision also matters. Once bias is identified, it can be countered through deliberate prompting. “If you want work that mirrors real society,” Palmer adds, “specify demographics clearly rather than letting the model fall back on statistical shortcuts.” 
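
One lightweight way to apply that advice is to expand an underspecified brief into a set of explicitly varied prompts rather than relying on a single default one. The sketch below is illustrative - the attribute lists are hypothetical and far from exhaustive:

```python
import itertools

# Illustrative attribute lists - extend to match the brief's actual
# representation requirements.
SUBJECTS = ["a father", "a mother"]
ETHNICITIES = ["Black", "East Asian", "South Asian", "white"]
AGES = ["in their 30s", "in their 60s"]

def build_prompts(scene: str) -> list[str]:
    """Expand one underspecified scene into explicitly varied prompts."""
    return [
        f"{subject} ({ethnicity}, {age}) {scene}"
        for subject, ethnicity, age in itertools.product(SUBJECTS, ETHNICITIES, AGES)
    ]

for prompt in build_prompts("cooking dinner in a family kitchen")[:3]:
    print(prompt)
# e.g. "a father (Black, in their 30s) cooking dinner in a family kitchen"
```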

For strategy teams, caution is critical. “AI is excellent at summarising complex information,” Palmer says, “but only when you keep tight control of what goes in. Use reputable sources, check citations, and anchor models to trusted references.” 

Treliving also reinforces the need for groundwork: “Creating any asset - strategy, analysis, ideas - relies on already having harmonised and unified data. Without that, AI leads to a sea of sameness and ‘law of averages’ thinking.” 

Output auditing: Treating AI-generated work as a draft, not truth   

Scrunched up paper on top of a green surface

Even with clean inputs, outputs require scrutiny. AI generates with confidence - but confidence is not accuracy. 

"Treat AI outputs as drafts, not truth," says Poulter. "AI guesses eloquently - it doesn't know. When it generates audience insights or creative recommendations, apply the 'would I accept this from a junior strategist?' test. If a human presented work with zero source validation, you'd send them back to do their homework. The same rules apply." 

For audience modelling, Palmer urges restraint.

"Synthetic research and synthetic audiences are improving, but the depth isn't there yet, so they're best used for testing hypotheses. Meaningful decisions still rely on unique market insight from real people." 

Dominic Palmer, Creative Director, Rock Kitchen Harris

AI can also strengthen validation - when used correctly. "AI doesn't remove the need for best practices, but it offers speed, connectedness, and expansion of lenses on the output,” Treliving says. “AI is also amazing at quality assurance (QA), which does still need to be prepared and overseen by a practitioner." 

Representation standards should never shift. Palmer notes that if diversity is baked into a brief, the output should reflect it - “whether the work is human-made or AI-assisted.” 

Checking mechanisms: Building friction into the workflow   

Pen on top of a tick list

Speed is one of AI's greatest strengths - but it can also be its greatest liability. Agencies that build in deliberate checkpoints are the ones catching bias before it reaches clients or consumers. 

"Build checking mechanisms into your workflow," advises Poulter. "Don't wait until the campaign's live to discover your AI-generated targeting excluded entire demographics or your insights reinforce tired stereotypes. Introduce deliberate friction - diverse review panels, bias-checking protocols, reality-testing with actual humans from the groups you're trying to reach." 

To highlight how Mediahub vets for bias, Djazmi shares a concrete example of how the agency builds client-specific AI agents to simulate fast-moving consumer insights. “Powered by connected teams, our strategists define cultural territories, planners ensure representative inputs, and technologists stress-test outputs against Acxiom's demographic, behavioural and attitudinal people-based database. Before any deployment, mandatory bias checkpoints assess each synthetic audience across demographics, cultural context and behaviour using fairness metrics. Human review and QA catch stereotyping, blind spots or skewed insights that AI alone might overlook." 
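
The kind of fairness metric Djazmi describes can be sketched simply (this is an illustrative check, not Mediahub's actual pipeline). The code below computes a demographic-parity-style ratio for a hypothetical synthetic audience - each group's selection rate relative to the most-selected group - and applies the common "four-fifths" threshold:

```python
from collections import Counter

def parity_ratios(audience: list[str], population: Counter) -> dict[str, float]:
    """Each group's selection rate, normalised by the highest group's rate."""
    selected = Counter(audience)
    rates = {group: selected[group] / population[group] for group in population}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical numbers: two equal-sized groups, one heavily over-selected.
population = Counter({"group_a": 5000, "group_b": 5000})
audience = ["group_a"] * 900 + ["group_b"] * 300

for group, ratio in parity_ratios(audience, population).items():
    status = "FLAG" if ratio < 0.8 else "ok"  # 0.8 = the four-fifths rule
    print(f"{group}: parity ratio {ratio:.2f} [{status}]")
```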

Bias needs to be considered at every step of every development process. For Gott, this could range from using diverse and representative datasets to selecting fair success criteria when choosing and training models. “It needs a conscious effort to embed this at every stage." 

Ant Kenny, Head of Digital at Boutique, details his layered safeguarding approach: "One area where this can fall down is at the input stage. To reduce that risk, we don't rely on a single model or source. We cross-check against multiple inputs, prioritise first-party and high-quality data where we can, and are careful not to feed AI-generated outputs back into future decisions without proper review." 
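
A sketch of that cross-checking idea, with stub functions standing in for real model calls: the same question goes to several models, and any answer without sufficient agreement is routed to human review rather than trusted.

```python
from collections import Counter
from typing import Callable, Optional

def cross_check(question: str,
                models: dict[str, Callable[[str], str]],
                min_agreement: float = 0.66) -> Optional[str]:
    """Return the majority answer only if enough models agree on it."""
    answers = {name: ask(question) for name, ask in models.items()}
    top_answer, votes = Counter(answers.values()).most_common(1)[0]
    if votes / len(answers) >= min_agreement:
        return top_answer
    print(f"No consensus - route to human review: {answers}")
    return None

# Stubs standing in for calls to different providers or data sources.
models = {
    "model_a": lambda q: "18-24 urban renters",
    "model_b": lambda q: "18-24 urban renters",
    "model_c": lambda q: "35-44 suburban homeowners",
}
print(cross_check("Which audience over-indexes for this product?", models))
```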

Human review is also non-negotiable, Kenny adds. “Across all functions - strategy, creative, and media - the outputs always have specialist oversight, and no AI-generated insight is treated as truth without challenge." 

For Boswell, oversight is both an ethical practice and a creative strength.

"Human oversight remains the most powerful safeguard we have."

Karen Boswell, Global CEO for Consulting, Experience & Performance, M+C Saatchi

"Diverse teams reviewing outputs, challenging assumptions, and spotting the blind spots machines can't see. This is where ethical practice and creative excellence meet." 

Treliving sees AI's quality assurance potential as democratising expertise: "The great thing about programming a practitioner's knowledge into AI is that it allows a levelling-up by making it available to all." 

Platforms: Not all AI tools are created equal   

Multi-coloured paint brushes

Tool selection matters. Not every platform is transparent about its training data, bias mitigation strategies or model behaviour - and agencies cannot afford to treat AI as a black box. 

"Choose your platforms carefully," says Poulter.

"Your responsibility extends to vetting the tools themselves, not just your use of them." 

James Poulter, Head of AI, ELVIS

Boswell echoes this, emphasising the need for rigorous evaluation and pressure-testing the tools themselves. “Not all platforms are built the same, and we can't assume that 'smart' equals 'neutral.' Agencies need a clear framework for evaluating AI partners: Where does their data come from? How do they mitigate bias? What transparency do they offer on model behaviour? If you can't answer those questions, you can't safely put the tech into client work." 
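
Those three questions translate naturally into a pass/fail gate. A minimal sketch - the structure is an assumption, the criteria are Boswell's - in which no tool clears until every question has an answer:

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    name: str
    data_provenance_documented: bool   # where does their data come from?
    bias_mitigation_described: bool    # how do they mitigate bias?
    model_behaviour_transparent: bool  # what transparency on model behaviour?

    def safe_for_client_work(self) -> bool:
        """If you can't answer every question, you can't safely use the tool."""
        return (self.data_provenance_documented
                and self.bias_mitigation_described
                and self.model_behaviour_transparent)

tool = VendorAssessment("ExampleGenAI", True, True, False)
print(tool.safe_for_client_work())  # False - transparency question unanswered
```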

Boutique are deliberate about tool selection, avoiding platforms that are opaque, unexplainable or prone to hallucination. “We take time to understand the data provenance behind any system we use for targeting, insight, or content generation,” Kenny says. “To get to this understanding, we have tested several models, and in doing so, we have a great understanding of which model is best suited to the task at hand." 

For Djazmi, frameworks and playbooks are useful starting points, but no tool is bias-proof. “Just as we've long trained teams to plan and target audiences responsibly and effectively, we now need to build similar fluency in identifying and addressing bias in models," he adds. 

The choice of AI tool can also be a source of competitive differentiation. Treliving adds: "Clients and agencies need USPs - uniqueness that is earned through understanding and connecting data before executing it with AI or vice versa." 

Ultimately, creative individuals remain accountable for building a culture of critical use 

Multi-coloured swirls with golden speckles

The speed of technological change does not absolve agencies of responsibility. The humans designing, deploying and defending AI-driven work remain accountable for what it produces. 

As James Poulter puts it: “The question isn’t whether your AI has bias, but whether you’re actually doing anything about it.” Agencies cannot hide behind “the AI did it” when work reinforces stereotypes or excludes entire groups. Bias may be inevitable - but inevitability is not the same as acceptability. Systems matter. 

Erfan Djazmi reinforces this point from a cultural perspective. AI, he notes, has never felt emotion or intent around ethics - only the humans using it have. “That means championing a culture of feedback, course correction and accountability, as well as relying on clever tech to help us." 

For Kenny and Boutique, the most effective defence has been cultural rather than technical. Building a mindset of critical use - training teams to question outputs, understand limitations and challenge models rather than defer to them - has proven more powerful than any single tool or framework. “Our mantra internally is that AI is there to enhance thinking and not to outsource it. If this is kept front of mind, then naturally as an agency, we should avoid any pitfalls." 

Gott reinforces the need for a human-centered mindset “where our experiences and creativity are enhanced by AI rather than the other way around." 

But cultural accountability also means confronting uncomfortable truths - about the work, the data and ourselves. Marcos Angelides, Head of AI Operations at Publicis Media, finds the psychology of bias in decision-making fascinating. "It’s not even just at a societal level, it’s at an individual level. We all have biases." The hardest moments, he argues, come when data challenges our assumptions - and the test is whether organisations are willing to listen, learn and pivot rather than seek validation. 

AI becomes most dangerous in cultures that reward certainty and confirmation.

“The real power is going to come from businesses who challenge themselves, who empower their people to think differently, who are ready to pivot when they’re given information that suggests a better opportunity,”

Marcos Angelides, Head of AI Operations, Publicis Media

Angelides says. “AI becomes really powerful in those environments.” 

As Karen Boswell concludes, bias may be inherent to pattern-recognition systems - but complacency is not. “Agencies that build rigorous testing, transparent tooling and inclusive thinking into their AI workflows will not only protect their clients, they'll unlock far better ideas." 

The technology may be new, but the responsibility is ancient: to see clearly, question constantly and build systems that reflect the world as it should be - not just as it has been modelled to be.