AI hasn’t pressed delete on creative and responsible advertising  

How ethical AI use can amplify creativity, build trust and keep humans at the heart of advertising.



Having worked as a creative leader for the likes of BBDO, Karmarama, Cheil, Meta and now Dentsu, Ryan has guided teams through multiple waves of cultural and technological change. And, lo and behold, the core ethical responsibilities remain the same: “don’t create harm and be honest.”

Fear and apprehension still surround the use of generative AI in the marketing industry. Where to invest? How to keep up? What about IP and copyright issues? And how can it be used ethically while building trust? In many ways, AI has simply brought longstanding challenges - cost, growth, ethics and trust - into sharper focus.  

We all remember the image of the late Pope Francis wearing a Balenciaga puffer jacket. It became a defining symbol of the tension around AI. For some, it raised ethical questions about accountability; for others, it sparked excitement about creative possibility. Caitlin Ryan, creative partner at Dentsu Creative, says the image marked the early “Wild West” phase of AI - but it also revealed, in striking clarity, the technology’s creative potential. 


In its early phase, AI experimentation was not commercially viable, but creatively it hinted at what was coming. “It was exactly the same when Photoshop was first introduced,” Ryan says. “The time taken to reach a result shortened, which meant experimentation exploded.” 

Fear has always existed in the early days of technological revolutions. But what will help the industry move forward, Ryan says, is not “throwing out the guardrails” but maintaining commercial responsibility - “just as agencies always have” - while embracing AI’s creative potential.

It’s our role and responsibility to be transparent


AI can feel so intelligent and efficient that it sometimes seems to have a value system we can rely on, Ryan admits. But because the technology often optimises for a performance metric rather than a human one, it can be tempting to defer to its decisions. This is where Ryan feels creative leaders really need to lean in.

“You can’t abdicate responsibility. Humans still have to be involved. There’s a temptation to think, ‘The machine came up with it, so I don’t need to intervene’ - and that’s precisely when human involvement matters most.”

Ryan believes the industry has a role and responsibility to be transparent and avoid misleading and over-claiming. “Responsibility doesn’t disappear just because AI comes into the process.”  

In every major innovation - from early transport to modern automation - safety has had to be built in. Understanding where safeguards are needed still matters, Ryan says. “Don’t trust that the machine or model you’re using has already put those safety measures in - you, as a human, still need to stay responsible and use your judgment about whether something is the right thing to do.”  

Transparency also plays a key role in how brands connect with consumers. The creative veteran calls for agencies to openly acknowledge - rather than quietly conceal - their use of AI agents. “Stepping ahead of it and admitting it's part of your workflow shows strength, not weakness. But at the moment, we all go, ‘Oh, is it cheating?’” 

Ryan builds her own agents, rather than using ones that are part of a proprietary tool. “I build agents all the time, and you have a lot of control. You see the output, you realise it's not what you wanted, and you go back in and rewrite the prompts. You are the architect of that experimentation and that process - you're not divorced from it.” 

Whether agents are custom-built or pre-trained, Ryan is clear: accountability never shifts to the technology. Users must still govern what the system interrogates, how it behaves and whether it adds value. “You still have to ask: ‘Does this look right? And is this good for society if I put this ad out into the world?’” 

With regard to Dentsu Creative’s use of AI, the agency’s partnerships with Adobe, Google and Microsoft’s Copilot focus on transparency - and on clearly understanding what each tool does and does not do. 

“Adobe Express is a commercially clean environment because it draws from Adobe Stock. You know you're not using somebody else’s work or failing to credit the right artist. With something more Wild West, like Midjourney, you're not always sure where the commercial licence starts and ends. That’s a pretty important part of Dentsu’s process.” 

Ryan stresses the importance of embedding AI use into the workflow rather than allowing it to happen in isolation. “That’s where you can get into trouble,” she says. “Our creatives use tools like Firefly collaboratively - with clients and teams - not in silos. We look at the workflow end-to-end, with check-ins, sign-offs and the same legal and brand reviews we’ve always had.” 

Being human in how you use AI and protect consumers 


AI can replicate cultural biases because it is trained on human behaviour, which already contains bias. To counter this, Ryan emphasises that leaders must stay aware and not accept outputs at face value. While early “hallucination” issues - like distorted hands or faces - are becoming rarer, subtler risks remain, particularly around reinforcing stereotypes.

“You are in control. If your prompt isn’t specific, there’s a risk that the machine - which is already looking for patterns - will replicate human stereotypes, because that’s exactly what stereotypes do: they shorten the cognitive load by offering the thing most people shortcut to. Being aware of this and designing to avoid it is entirely in the user’s control.”

Hyper-personalisation is another area where AI-powered advertising risks crossing ethical lines. Ryan is particularly excited by the power AI has given to personalisation but she also recognises that it has potential brand-damaging effects. “I’m a big fan of relevance,” Ryan adds. “You can suddenly make brands super relevant to different audiences in a meaningful way. But when does that become exploitation?” 

Advertising crosses that line when data used without permission is exploited to create false relevance or a false need for the sole aim of financial gain - whether by trading on someone’s race, gender, marital status or search history.

“It becomes incredibly important to protect data, and to make sure that when you're creating relevance between a brand and a customer, you're doing it in a way that adds value rather than stress, guilt or emotional distress.” For Ryan, being human about how you use the tool, and being protective of the end users or viewers, is incredibly important. 

AI can get us closer to creating value for brands


While much of the onus falls on values and social purpose, advertising, of course, exists to creatively capture the imagination in order to sell products and services. Ryan believes creatives have drifted from that commercial outcome - but that AI can help reconnect the two.

“Creating hyper-relevant content that presents a brand in a way that is meaningful to somebody - not just shoving it down their throat - is far better than the idea of something chasing you around the internet based on demographics or psychographics and trying to sell you a product you have no interest in. That’s not good for the advertising industry. 

“If we can turn that on its head and make sure that when a brand appears in someone’s space, it does so in a meaningful, honest and transparent way - and genuinely fits that person - then they’re far more likely to buy. That’s a good outcome.” 

Ryan’s enthusiasm for AI stems from its ability to accelerate experimentation - a core part of creativity that allows ideas to be tested, pushed and reshaped quickly.

While many creatives use AI to generate a single, often unoriginal solution, Ryan encourages her team to “go way out.” They experiment with temperature - from precise outputs to letting the AI “go wild” and form unexpected connections. 

“It’s a useful way to think about it. Are you drilling down to one answer, as many people do with ChatGPT? That’s not great for creativity. But exploring multiple directions - ‘What might this look like? What might it sound like? Where could it go?’ - that’s where it becomes useful.” 
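For readers unfamiliar with the term, “temperature” is the sampling parameter in generative models that controls how sharply the model commits to its most likely option. A minimal Python sketch (illustrative only - the candidate scores are invented, and this is not a depiction of Dentsu’s tools) shows why a low temperature “drills down” to one answer while a high one lets the model “go wild”:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities, scaled by temperature.

    Low temperature sharpens the distribution (the model locks onto its
    top choice); high temperature flattens it, making unlikely,
    surprising options more probable.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical "next ideas", with raw scores favouring the first.
logits = [2.0, 1.0, 0.1]

precise = softmax_with_temperature(logits, 0.2)  # near-certain top pick
wild = softmax_with_temperature(logits, 5.0)     # options nearly even
```

At temperature 0.2 the top candidate takes almost all of the probability; at 5.0 the three options become nearly even, which is what makes unexpected connections more likely to surface.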

AI also enables creative “mash-ups” - combining unrelated genres or ideas to spark invention. “That’s where new ideas come from. AI lets you do it affordably and efficiently. The Pope and Balenciaga moment worked because it surprised people emotionally - two worlds colliding in an unexpected way.”

Ryan advises agency leaders that above all, they must have the mindset that AI can be used to create distinctive, authored and culturally intelligent work - “rather than just commoditising and creating a whole load of shit,” she jokes. “I think AI can actually provide the opposite.” 

“Don’t worry about the speed at which AI is evolving. It advances so quickly: one week Firefly can’t do something and Gemini can, the next week Firefly can. So don’t worry too much about your expertise on the different tools. It’s a good idea to just play around with lots of different models. 

“Less emphasis needs to be placed on knowing every tool inside out. In a lot of the work I do, the biggest barrier isn’t technical understanding - it’s mindset,” Ryan says.

“No matter what the platform is, the question should always be: how do we use this responsibly, creatively, and with intent?” 

For Ryan, that intent is clear. AI is not here to replace creative judgment or ethical responsibility - it is here to sharpen them. Used well, it can empower creatives to experiment more freely, connect more meaningfully, and build work that serves both brands and people. The guardrails have not changed - but the opportunity has expanded.