Hot Topics

Advertisers Flee YouTube Over Offensive Ad Placements

Several top U.S. advertisers — including AT&T, Verizon and Johnson & Johnson — this week pulled their advertising from YouTube after their ads appeared alongside videos advocating extremism or other offensive content.

Such placements represent violations of their agreements with Google, according to the companies.

“We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate,” AT&T said in a statement provided to the E-Commerce Times by spokesperson Fletcher Cook. “Until Google can ensure this won’t happen again, we are removing our ads from Google’s non-search platforms.”

Johnson & Johnson posted a statement on its media page saying it would pause all YouTube digital advertising globally to ensure that its ads don’t appear on sites containing offensive content.

“Once we were notified that our ads were appearing on non-sanctioned websites, we took immediate action to suspend this type of ad placement and launched an investigation,” Verizon said in a statement provided to the E-Commerce Times by spokesperson Sanette Chao. “We are working with all of our digital advertising partners to understand the weak links so we can prevent this from happening in the future.”

UK Furor

The exodus comes on the heels of a boycott that major UK companies launched against YouTube last week.

“Recently, we had a number of cases where brands’ ads appeared on content that was not aligned with their values. For this, we deeply apologize,” Google Chief Business Officer Philipp Schindler said in a Tuesday post.

“We know that this is unacceptable to the advertisers and agencies who put their trust in us,” he continued. “That’s why we’ve been conducting an extensive review of our advertising policies and tools, and why we made a public commitment last week to put in place changes that would give brands more control over where their ads appear.”

Google’s UK Managing Director Ronan Harris delivered that commitment last week, after the controversy erupted in the UK.

“We recognize the need to have strict policies that define where Google ads should appear,” Harris said in an online post. “The intention of these policies is to prohibit ads from appearing on pages or videos with hate speech, gory or offensive content.”

The company spent millions to crack down on offensive and misleading content in 2016, he noted, removing nearly 2 billion bad ads from its network, blocking ads from appearing on 300 million YouTube videos, and removing 100,000 publishers from its AdSense program.

Closer Look

Google outlined several steps designed to raise the bar for its ad policies.

“Starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories,” Schindler said.

Google will tighten its policies to make sure ads show up only against legitimate creators participating in its YouTube Partner Program — a program that allows creators to monetize content through ads, subscriptions or merchandise — as opposed to those that violate community guidelines or impersonate other channels, he added.

It will deploy additional tools to help companies maintain greater control over where their ads appear on YouTube and on the Web generally, Schindler noted, including safer default settings for brands to exclude objectionable content; new account level controls to exclude specific sites from AdWords for Video and Google Display Network campaigns; and new controls for advertisers to exclude higher-risk content.

In addition, Google will hire “significant numbers of people” and use new artificial intelligence and machine learning tools to increase its capacity to better screen questionable content, he said.

The company also will establish a policy that lets advertisers escalate questions about the placement of their ads within a few hours.

Less Algorithm, More People

Managing ad placements to keep them off extremist sites is similar to the problem social media companies face in keeping ads away from fake news, observed Tim Mulligan, senior analyst at Midia Research.

“Algorithms struggle to effectively screen out extremist content because, paradoxically, they both lack human oversight and they also reflect the narrow parameters of their human coders,” he told the E-Commerce Times.

YouTube and its peers in the social media world focus on minimizing human overhead and rely on technology to resolve content curation issues that cannot be managed effectively with existing technology, Mulligan maintained.

“Facebook woke up to this reality in the backlash from the fake news controversy around their failure to screen out inaccurate news articles during the most recent presidential election,” he pointed out. “YouTube is now in a similar position, having to deal with the fallout from not sufficiently investing in human screening teams.”

David Jones is a freelance writer based in Essex County, New Jersey. He has written for Reuters, Bloomberg, Crain's New York Business and The New York Times.
