
The State of Ethical AI in Marketing: 2025 Survey Findings

A survey of 100 marketers shows AI adoption is outpacing
ethical frameworks.

AI-powered marketing campaigns are running at scale across the industry, but the ethical frameworks governing them are failing. A 2025 survey of 100 marketing professionals reveals a concerning reality: while AI adoption accelerates, ethical frameworks struggle to keep pace. Most concerning? A Net Promoter Score of -26 indicates that even marketers who implement AI are dissatisfied with their ethical approaches.

The Confidence-Competence Divide

This survey exposes a dangerous gap between technical skill and ethical awareness. An overwhelming 81% of marketing professionals rate themselves as advanced or expert AI users. Yet only 50% of organizations have implemented employee training on ethical AI principles.

81%: Rate themselves as advanced/expert AI users
50%: Have ethical AI training

This gap causes teams to move forward confidently without fully considering the ethical impact. Strong technical
skills don’t always mean strong ethical awareness. Since nearly half of organizations lack formal AI ethics training, many
marketers are making decisions in a complex area without clear guidance.

What Marketers Are Using AI For

Understanding current AI adoption helps contextualize the ethical challenges ahead:

[Graph: AI technology currently in use]
[Graph: Primary AI applications in marketing]

Computer vision and generative AI dominate current trends, yet these same technologies raise the greatest ethical challenges around authenticity, disclosure, and intellectual property. Professionals are adopting the tools with the highest ethical risk at the fastest rate.

The Transparency Problem

While 90% of marketers claim some level of transparency about AI usage, the reality is more
complicated. Only 43% are “very transparent,” meaning they openly communicate both AI usage and its
implications to consumers and clients.

43%: Very transparent
47%: “Transparency theater”
10%: Not transparent

Nearly half (47%) provide only general information without specific details—we call this “transparency theater” and the -26
NPS suggests it’s not working. Another 10% aren’t transparent at all. Combined, that’s 57% of marketers
whose transparency practices range from inadequate to nonexistent.

What this means: Surface-level disclosure may satisfy immediate requirements but fails to provide
the understanding consumers need to make informed decisions. With 51% of marketers using generative AI and 33%
primarily using AI for content creation, the potential for undisclosed AI-generated content is widespread.

The Sports Illustrated scandal serves as a cautionary example. The publication faced major backlash for using AI-generated content—not because it used AI, but because it failed to disclose it. Our survey shows this isn’t an isolated issue, but a widespread industry risk.

Ethical Safeguards: Inconsistent and Insufficient

Measures for ethical AI usage exist, but they are neither universal nor satisfactory:

[Graph: Current ethical safeguards]
[Graph: Dedicated oversight]

While 66% have dedicated oversight, 34% don’t—either integrating ethics into existing roles or ignoring it entirely. The relatively low oversight percentages across all measures suggest that marketing professionals consider ethical AI optional instead of foundational.

The most telling finding: 58% use computer vision for targeting and analysis, but only 39% audit for bias. That means the majority
are deploying sophisticated AI-powered targeting without checking whether their algorithms discriminate.
The risks aren’t hypothetical—biased algorithms can trigger lawsuits, regulatory fines, and lasting brand damage.

The Satisfaction Crisis: Even Implementers Are Dissatisfied

The survey’s most revealing finding comes from asking marketers if they would recommend their own ethical AI approaches:

[Graph: Net Promoter Score]
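For readers unfamiliar with the metric: NPS is the percentage of promoters (scores of 9–10 on a 0–10 recommendation scale) minus the percentage of detractors (scores of 0–6). A minimal sketch of the calculation, using a hypothetical distribution of 100 responses that happens to yield the survey's -26 (the actual response breakdown was not published):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but neither add nor subtract.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical example: 20 promoters, 34 passives, 46 detractors
scores = [10] * 20 + [7] * 34 + [3] * 46
print(nps(scores))  # 20% promoters - 46% detractors = -26
```

Because passives are excluded from both sides, even a majority of lukewarm responses cannot offset a large detractor share, which is why a score of -26 signals broad dissatisfaction.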

An NPS of -26 is considered poor in any industry. This fundamental dissatisfaction suggests:

1. Current ethical frameworks feel inadequate for the pace of AI advancement.
2. Marketers recognize gaps between their aspirations and reality.
3. There’s no clear industry standard, leaving organizations to figure it out alone.
4. Existing approaches may be creating compliance burdens without a meaningful impact.

This isn’t just a problem for those without ethical measures in place. Even marketers who have implemented safeguards are dissatisfied. The industry knows something isn’t working.

Four Critical Gaps Putting Marketers at Risk

1. Confidence Without Competence 

81% rate themselves as advanced AI users, but only 50% have ethical training. The more confidently AI is deployed, the less likely ethical implications are questioned.

2. Transparency in Name Only 

47% provide only “general information” about AI usage. In an era where generative AI creates marketing content, emails, and ad creative, vague transparency is insufficient—especially as regulations evolve.

3. Adoption Outpacing Ethics

Generative AI and computer vision carry the most complex ethical implications, yet marketers adopt them faster than ethical frameworks can keep up. Organizations are learning ethics by trial and error in production environments.

4. The Satisfaction Deficit

An NPS of -26 indicates that marketers themselves lack trust in their current ethical approaches. This risks regulatory scrutiny, consumer backlash, or competitive pressure that could quickly expose inadequate practices.

Why Ethical AI Matters: Privacy, Bias, and Trust

Data Privacy Concerns

AI marketing tools increasingly train models on web-scraped content—sometimes incorporating people’s photos and text without explicit permission. While personalization drives results, the privacy implications are significant. Organizations must balance performance with clear consent, transparent data usage, and genuine consumer control.

Algorithm Bias

With 58% using computer vision for targeting but only 39% auditing for bias, the industry is exposed. Biased algorithms don’t just harm marginalized communities—they create legal liability, trigger regulatory action, and inflict lasting reputational damage. The risk isn’t theoretical; it’s operational.

Consumer Trust

Transparency about data practices is essential for maintaining consumer trust. When consumers understand how AI uses their data, they’re more likely to engage comfortably with AI-powered marketing. When they haven’t consented or, worse, discover undisclosed AI usage, trust erodes rapidly.

What Marketing Leaders Must Do Now

The survey data points to clear action steps:

1. Close the Training Gap

Don’t assume technical proficiency includes ethical awareness. Implement mandatory ethical AI training that
covers more than compliance, and develop a decision-making framework that lets your team apply AI ethics consistently in novel situations.

2. Move Beyond Transparency Theater

Elevate standards from “we use AI” to “here’s how AI influenced this specific outcome.” With generative AI now creating marketing content at scale, vague disclosures create liability as regulations tighten and consumer expectations rise.

3. Establish Dedicated Oversight

Without dedicated roles or teams, ethics becomes no one’s priority. Create accountability
structures with actual authority.

4. Implement Regular Bias Audits

Regular bias audits for AI systems involved in targeting, recommendation, and customer service should be standard practice, not optional. This is especially critical for computer vision applications that make targeting decisions.

5. Evaluate Your Current Practices

Assume your current approach has gaps. Conduct honest audits of where you use AI, how you make decisions,
and whether transparency claims match reality.

What the Industry Needs

Individual company action isn’t enough. This survey suggests the industry needs collaborative frameworks, not just isolated policies.

Develop Shared Standards

Industry associations should lead standardization efforts to create common ethical frameworks. The current approach, in which every organization figures it out on its own, isn’t working.

Address Generative AI Specifically

The industry needs clear standards for AI-generated content—when to disclose, how to attribute, and what constitutes authentic versus misleading use.

Make Ethics Foundational, Not Optional

The low percentages across all ethical safeguards indicate that ethical AI is still considered optional. This must change.

The Choice Ahead

This survey reveals the marketing industry is at an inflection point. AI adoption is widespread and accelerating, but ethical frameworks struggle to keep pace. This has created a dangerous gap where confidence outpaces competence.

The Opportunity

Organizations that move beyond “transparency theater” and compliance checklists to build robust, authentic ethical AI practices will build trust and gain a competitive advantage as consumers, regulators, and partners increasingly scrutinize AI use.

The Risk

Continuing with surface-level measures while confidence outpaces competence could expose the industry to
significant regulatory, reputational, and business risks.

The Bottom Line

AI adoption isn’t up for debate—it’s happening at scale. The real question: will organizations build ethical practices that match their technical sophistication, or continue down the path of confidence without competence?

Businesses that get this right won’t just avoid regulatory fines and reputational crises. They’ll gain a competitive advantage as
scrutiny intensifies. Ethical AI isn’t just good practice—it’s strategic differentiation.

Advanced AI capabilities without
the ethical risks.

Contact SMA Marketing today!
