‘With AI, my legal advice is the same now as it’s always been: apply common sense.’
Vicky Brown, General Counsel, Commercial and Chief Privacy Officer at WPP, discusses the ways in which agencies should navigate the legal questions generative AI raises
Vicky Brown has been supporting WPP agencies on a multitude of issues for 13 years - Intellectual Property (IP), data privacy, data ethics, dispute resolution, client and vendor negotiation, and now, of course, AI. One would assume that her role has changed significantly in the past three years, but within the first few minutes of our conversation, Brown was quick to admit that AI has not created entirely new legal issues for advertising - it has simply brought existing ones into sharper focus.
The way legal teams at WPP collaborate with agency professionals and clients has remained largely consistent. “We were already pretty well integrated with our agency colleagues, and we were already providing a lot of day-to-day advice on the day-to-day business of advertising,” Brown says.
“Where it’s gotten interesting is that AI, and generative AI in particular, has brought to the forefront all of the classic issues around IP that agencies have actually grappled with all the time - you’re just looking at them far more closely now.”
Brown recognises the excitement that surrounds what agencies can create with AI models, but she regularly reminds agencies that they cannot escape the traditional rules of clearance or the fundamental question at the heart of advertising regulation: is the advertising legal, decent, honest and truthful?
“Generative AI has brought this core principle in the CAP code [the Code of Non-broadcast Advertising and Direct and Promotional Marketing] into much sharper focus, particularly around how you ensure you’re managing IP issues in the right way.”
The traditional disciplines of interrogating outputs, applying common sense and ensuring that advertisements are not infringing, still apply. “If you’re using generative AI to create a new slogan for a client, you’ve still got to run trademark searches. You’ve still got to put that slogan into Google and see what other brands are out there using it. You still need to understand the market.”
‘WPP has been thinking about AI risk and governance from day one’
The legal team at WPP, which has a strong relationship with its technologists, enterprise tech and creatives, had a seat at the table when the holding company’s proprietary AI tool WPP Open was envisaged. “WPP Open really is a product of what I call ‘AI by design,’” Brown adds. “When it was being put together, the designers wanted to understand what the legal team needed to see in the architecture to make it appropriate to use.”
AI risk, governance and IP considerations were embedded from the start, including the decision to work with Bria, whose models are trained exclusively on licensed, rights-cleared content. Brown also ensured the platform functioned as a “walled garden,” meaning client information is secure, segregated and never used to train external LLMs.
“The bedrock of WPP Open is understanding the enormous potential of AI and generative AI, but looking pragmatically so you can innovate while managing risk.”
New AI vendors and tools are put through a traffic light risk system by a committee made up of security, enterprise tech, privacy, legal and the CTO team. “We don’t just take vendors’ word for it; we road-test everything. All tools in WPP Open must be green. That means we can explain to clients and their legal teams why a tool is approved, safe and fit for purpose.”
Brown affirms that WPP has made these commitments in all of its client contracts. “If something goes wrong, clients could walk - that’s the real risk. With long-standing clients like Coca-Cola and Colgate-Palmolive, the relationship of trust is key.”
Transparency
Clients are constantly concerned with brand safety, and AI has exacerbated this. For Brown, brands must not only think about the tools creating the work and the security environment in which it is created, but also about disclosure. “There aren’t any legal rules governing disclosure yet,” Brown admits. “Obviously, the EU AI Act is coming, and it will cover the labelling of deepfakes quite rightly. But in the meantime, clients need to grapple with AI disclosure themselves, and we have lots of discussions to help guide them on that journey.”
While some platforms already require the disclosure of AI-generated content, Brown feels the industry will need a standard and a consistent approach, especially when the EU AI Act comes into force. “In the next 18 months, regulation will be enforced differently across member states, so we will need a standard for transparency with AI, where everyone in the industry leans in.”
WPP is a member of the Content Authenticity Initiative (CAI), an Adobe-led community that develops open-source tools for digital content provenance and transparency - an example of the kind of consistent approach to transparency Brown is calling for.
“Our mantra at WPP is transparency, transparency, transparency - but with meaning. Not everything needs to be disclosed. The real ethical questions arise when people are represented or affected.”
Advice for agencies who do not possess the breadth of a holdco
When considering the legality of AI use, Brown urges agencies to return to first principles. “If I had to give one piece of advice, it would be this: apply common sense. Everyone says, ‘Let’s use AI,’ but you need to step back and ask whether it’s appropriate for the use case.”
She cautions against reflexively generating synthetic people. “If you’re working with a brand that serves a diverse audience, the backlash risk is real.”
Addressing bias created through AI is as much a common-sense ethical issue as it is a legal one. For Brown, responsibility starts long before an AI-generated image, script or idea reaches a client. Agencies need to assess the models they use with the same rigour they would apply to any other strategic or creative partner.
“You need to assess the models the way we do,” she says. “We get comfortable - as much as we can with the information available from third-party vendors - that the model itself isn’t biased, or is as unbiased as possible. But beyond that, there’s a huge ethical training piece. You have to ask whether AI is actually the right tool for the project. Are you creating content that could introduce bias?”
That question becomes particularly acute when AI is used to generate people or audiences, Brown warns. Mindful common sense should continue to be baked into the DNA of agencies. “We’re heading toward a place where people will be as aware of AI as they are of their personal data - understanding what feels appropriate and what doesn’t.”
Training, Brown stresses, is essential to building that judgement. “We train all of our people on AI, including the risks and the judicious use of AI,” she says. “Just because you can do something and create it doesn’t mean you should.”
Standing behind the use of tools
Brown is all for investing in enterprise-grade tools and wholeheartedly against free-to-use tools - where client information can be compromised and reused to train models. “I think that there’s a general misconception that using tools is much riskier than it actually is. That said, I still see people downloading and using widely available tools, and that carries real risk.”
Enterprise tools, contained within a walled garden, allow agencies to stand behind their governance and outputs for clients. “We give indemnity and copyright protection, so we have to be able to stand behind our product,” Brown says.
“We built our own tool so we could fully govern it. For advertising agencies, it has always been about creating original work and standing behind fundamental client commitments. It cannot be any different with generative AI.”
For smaller, independent agencies, Brown points to WPP Open Pro itself, as it is designed to be licensed and to provide the same standards of protection applied across WPP. “Any client information put in would never go back to train an LLM globally. We’ve also built contracting agents, NDA agents, data protection agents, policy agents and more.”
Even when using enterprise-grade tools or working within a walled garden, Brown stresses that traditional ad clearance remains essential. Because generative AI feeds on vast amounts of publicly available data, agencies must still interrogate outputs carefully - to ensure a mouse does not look like Mickey Mouse.
For Brown, the broader challenge for agencies is not about individual vendors but about balancing innovation with the protection of creators’ rights. “Everybody wants to innovate - whether it’s LLM providers, tech companies or agencies - but you have to respect creators’ rights and strike the right balance,” she says.
That balance increasingly shapes how WPP works with technology partners. The company prioritises vendors whose models are trained on licensed and rights-cleared content and expects them to stand behind their products. “Vendors have to give us indemnity protection if there’s an issue,” Brown explains. “They can’t keep offering contractual indemnities unless they’re confident in how their models are built.”
This is where Brown believes agencies can help raise standards across the ecosystem. “We respect creators’ rights, we want to innovate, and we want to work with a broad range of partners,” she says. “That’s why WPP Open is ‘open’ - but we ask for clarity on IP, which forces partners to be more mindful of creators’ rights.”
Closing argument
Brown is clear that much of this work is happening in a legal grey area. “The law hasn’t caught up with innovation yet,” she says, pointing to recent cases such as Getty Images’ UK action against Stability AI as evidence that the legal framework around generative AI is still evolving.
That gap makes governance essential. “You need a clear risk appetite and AI principles that balance gaps in the law with common sense and the requirement that advertising remains legal, decent, honest and truthful.”
Those principles build on WPP’s long-standing adherence to the CAP Code and the UK’s model of advertising self-regulation - an approach Brown believes the industry should continue to lean into. “We police ourselves effectively, and that’s central to how we operate.”
It is the same mindset WPP applies to AI: acknowledge where the law is still catching up, apply common sense, and return repeatedly to first principles. “You have to keep asking: what are my principles? That’s how you grapple with innovation responsibly.”
