AI due diligence framework for advertising agencies

A smart, structured guide to help agencies navigate risk, protect clients and make informed AI decisions.

A great many organisations now offer AI tools and services. While the IPA can’t review or recommend individual providers for your agency’s specific needs, we can share practical guidance on what to look out for before signing up to any AI solution. Below are the key questions to ask before adopting an AI service, to help you avoid common pitfalls.

1. Purpose & Strategic Alignment

  • What specific tasks or workflows will the tool support?
  • Who will use it (creative, media, strategy, production, ops, new business)?
  • What outcomes are expected (efficiency, quality, speed, innovation)?
  • How will its success be measured (KPIs, SLAs, quality benchmarks)?

2. Data Governance & Privacy

  • What data was used to train the model, and did the provider have the rights to use that data?
  • What data does the tool ingest, store, process or transmit?
  • Do we own the data provided to the tool? If it is client data, do we have the client’s permission to use it with AI?
  • Can we disable training or enforce “no data retention”? Will the data we input into the tool feed back into training the underlying AI model?
  • Where were the models trained, and where does the platform store input and output data?
  • Is the data secure: encrypted both in transit and at rest? (A quick transit check is sketched below.)
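
Encryption at rest can only be confirmed through the vendor’s contract and security documentation, but encryption in transit is easy to spot-check yourself. Below is a minimal Python sketch; the hostname is a hypothetical placeholder for the platform under assessment.

```python
import socket
import ssl

# Hypothetical vendor API hostname; replace with the platform you are assessing.
VENDOR_HOST = "api.example-ai-vendor.com"

def check_tls(host: str, port: int = 443) -> None:
    """Connect to the vendor endpoint and report the negotiated TLS version and cipher."""
    context = ssl.create_default_context()  # enforces certificate validation by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()          # e.g. "TLSv1.3"
            cipher, _, bits = tls.cipher()   # cipher name, protocol, key length in bits
            print(f"{host}: {version}, cipher={cipher} ({bits}-bit)")

check_tls(VENDOR_HOST)
```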

3. Intellectual Property & Licensing

  • Who owns all outputs created by the tool?
  • Are outputs licensed for commercial use?
  • Are training datasets fully licensed or proprietary?
  • What warranties and indemnities, if any, does the tool offer over the generated output? If the tool offers indemnification, what does it cover, are there significant exclusions, and is there a cap on the platform’s liability and indemnity?
  • Are there restrictions on redistributing, editing or reselling outputs?
  • If we needed to switch providers, would our processes and outputs be exportable in a format that could be used with other platforms?
  • What rights are granted to the AI tool over any of our input materials? If we did agree to grant such rights, do we have the necessary permission or consent to do so?

4. Model Transparency & Safety Controls

  • Which AI model(s) and version(s) does the tool rely on?
  • Are there guardrails to prevent:
    inaccurate or misleading content,
    sensitive content,
    harmful outputs,
    hallucinations,
    misuse (deepfakes, impersonation),
    content that infringes third-party rights?
  • Is there monitoring or bias testing? (A quick way to probe a provider’s moderation guardrail is sketched below.)
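
One guardrail you can sometimes probe directly is a provider’s moderation endpoint. The sketch below uses OpenAI’s moderation API via its official Python SDK as one example; if the tool under assessment builds on a different provider, ask for the equivalent.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

client = OpenAI()

# Submit a test string to the provider's moderation endpoint and inspect the result.
result = client.moderations.create(
    model="omni-moderation-latest",
    input="Sample ad copy to screen before publication.",
)

moderation = result.results[0]
print("Flagged:", moderation.flagged)        # True if any category tripped
print("Categories:", moderation.categories)  # per-category booleans
```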

5. Compliance

  • Do we have an internal policy on the use of AI or generative AI in the workplace? If so, does the AI tool meet the requirements of our AI policy?
  • Does it support or provide:
    audit trails (a minimal audit-trail sketch follows this list),
    central management of your agency personnel’s accounts,
    visibility of take-up and usage of the AI tool,
    content provenance or watermarking?
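
If the platform cannot provide audit trails itself, the agency may need to keep its own. The following is a minimal, hypothetical sketch of an internal audit log; the file name and record fields are assumptions, and prompts are stored only as hashes to limit data exposure.

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_trail.jsonl")  # hypothetical location; adapt to your systems

def log_ai_usage(user: str, tool: str, prompt: str) -> None:
    """Append one audit record per AI interaction (prompt stored as a hash, not plaintext)."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_usage("jane.doe", "image-generator-x", "moodboard for client pitch")
```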

The IPA has an example Generative AI in the workplace policy (available to IPA members), which sets out an agency’s expectations and standards regarding the use of generative AI.

6. Security & Reliability

  • Backup and disaster recovery procedures?
  • Uptime commitments? (A worked comparison of SLA tiers follows this list.)
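
Uptime percentages are easier to compare once translated into allowed downtime. A quick worked example in Python:

```python
# Translate an uptime percentage into allowed downtime, to compare vendor SLAs.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

for sla in (99.0, 99.9, 99.99):
    downtime = MINUTES_PER_MONTH * (1 - sla / 100)
    print(f"{sla}% uptime allows up to {downtime:.1f} minutes of downtime per month")

# 99.0%  -> 432.0 minutes (7.2 hours)
# 99.9%  ->  43.2 minutes
# 99.99% ->   4.3 minutes
```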

7. Integrations

  • Does the AI tool offer APIs for automations, DAMs, CRMs, cloud storage, or workflow tools? (A hypothetical API call is sketched after this list.)
  • Compatibility with existing agency systems (e.g., Adobe, Google Workspace, Microsoft 365, etc.)?
  • Does the tool support:
    custom workflows,
    fine-tuned models,
    multi-client separation,
    Bring Your Own Model (BYOM), allowing you to plug in your own AI model?
  • Does it allow local or private deployments if needed?
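
If the vendor offers an API, a short proof-of-concept call is a cheap way to test integration claims before committing. Everything in the sketch below, including the endpoint and the client_workspace field, is a hypothetical stand-in for the vendor’s documented API.

```python
import requests

# Hypothetical endpoint and API key; the vendor's real API will differ.
API_URL = "https://api.example-ai-vendor.com/v1/generate"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Three taglines for a sustainable trainers launch",
        "client_workspace": "client-a",  # hypothetical multi-client separation field
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```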

8. Ethics & Reputational Risk

  • Does the tool allow creation of synthetic humans, voice clones, or deepfakes?
  • Are synthetic assets automatically watermarked or traceable? (A provenance check is sketched after this list.)
  • How does the vendor manage bias or stereotyping?
  • Could misuse of the tool expose the agency or clients to reputational risk?
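
Watermarking and traceability claims can sometimes be checked with the open-source C2PA command-line tool (c2patool), which reads embedded provenance manifests. The sketch below assumes c2patool is installed and on the PATH; output and exit behaviour may vary between versions.

```python
import subprocess

# Assumes the open-source C2PA command-line tool (c2patool) is installed and on PATH.
# Pointing it at a generated asset shows whether a provenance manifest is embedded.
result = subprocess.run(
    ["c2patool", "generated_image.jpg"],  # hypothetical file name
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print(result.stdout)  # manifest found: provenance details
else:
    # Exact error output and exit codes may vary between c2patool versions.
    print("No C2PA provenance manifest detected:", result.stderr.strip())
```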

9. Commercial Terms & Scalability

  • Pricing model (per seat, per token, usage tiers, overage fees)?
  • Cost forecast under realistic agency use? (An illustrative forecast follows this list.)
  • Contract length, notice periods and exit penalties?
  • Support tiers, customer success plan, or training packages?
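
A back-of-envelope forecast makes it easier to compare per-seat pricing against usage-based pricing. All figures in the sketch below are illustrative assumptions, not vendor rates.

```python
# Rough monthly cost forecast under assumed usage; every figure is an assumption.
SEATS = 40                      # agency staff with access
PROMPTS_PER_SEAT_PER_DAY = 25
TOKENS_PER_PROMPT = 2_000       # input + output combined
PRICE_PER_1K_TOKENS = 0.01      # USD, assumed blended rate
WORKING_DAYS_PER_MONTH = 21

monthly_tokens = SEATS * PROMPTS_PER_SEAT_PER_DAY * TOKENS_PER_PROMPT * WORKING_DAYS_PER_MONTH
monthly_cost = monthly_tokens / 1_000 * PRICE_PER_1K_TOKENS
print(f"~{monthly_tokens:,} tokens/month -> ~${monthly_cost:,.2f}/month")
# ~42,000,000 tokens/month -> ~$420.00/month under these assumptions
```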

10. Vendor Stability & Roadmap

  • Who owns the company, and how financially stable is it?
  • Does the vendor have a clear roadmap?
  • Is the platform/service dependent on one underlying AI provider (OpenAI, Anthropic, Google etc.)?

11. Exit Strategy

  • If the agency needs to leave the service/platform, how will it extract:
    assets,
    workflows,
    prompts,
    fine-tuned models,
    user logs,
    datasets?
  • Are exports in open, interoperable formats? (See the export round-trip sketch after this list.)
  • What happens to historical user data upon termination?
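
Before signing, it is worth testing an export round-trip end to end. The sketch below assumes the vendor can export prompts as JSON (the file name and structure are hypothetical) and converts them to CSV to confirm the data remains usable outside the platform.

```python
import csv
import json

# Illustrative only: convert a hypothetical JSON export of prompts into CSV,
# the kind of open-format round-trip worth testing before signing.
with open("vendor_export.json") as f:  # hypothetical export file name
    records = json.load(f)             # expected shape: a list of flat dicts

with open("prompts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=sorted(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

print(f"Exported {len(records)} records to prompts.csv")
```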