
Big advertisers suspend YouTube campaigns after reports of videos sexualizing children

November 27, 2017 at 5:02 a.m. EST

Several major companies suspended their advertising campaigns on YouTube on Friday after learning their ads were displayed on videos that appeared to sexualize children.

In distancing themselves from YouTube, the companies cited the platform’s apparent inability to police its content and keep their ads off offensive videos. The companies included Deutsche Bank, German supermarket chain Lidl, sportswear company Adidas, candy makers Mars and Cadbury, and alcohol company Diageo, which produces Smirnoff vodka, Captain Morgan rum and Crown Royal whiskey.

The suspensions came in response to an article published in the Times of London last week, which said the companies’ advertisements appeared on videos showing children in various states of undress, according to the Wall Street Journal. Some of these videos, for example, featured “young girls filming themselves in underwear, doing the splits, brushing their teeth or rolling around in bed,” according to the Times.

While some of the videos appeared to be uploaded by the children themselves, the comments sections were filled with sexual remarks — including statements encouraging the children to perform sexual acts on camera. In addition, YouTube’s algorithm suggested similar videos, such as “toddlers naked in a bath,” the Times reported.

A Mars spokesman told Business Insider the company was “shocked and appalled” that its advertising appeared with “such exploitative and inappropriate content.” Likewise, a spokesperson for Lidl told Reuters such content is “completely unacceptable” and that YouTube’s policies were “ineffective.”

The video platform, which is owned by Google, says that it forbids videos or comments that sexualize children. Its official policy states that posting such content “will immediately result in an account termination.” Nevertheless, one video showing a prepubescent girl in a nightgown racked up more than 6.5 million views and drew numerous lewd and sexual comments, the Times reported. Advertisements for several large brands ran with this video.

“There shouldn’t be any ads running on this content and we are working urgently to fix this,” a YouTube spokesman said on Friday, according to Reuters.

Johanna Wright, YouTube’s vice president of product management, said in a statement that the company would take an “even more aggressive stance” against videos aimed at sexualizing or harming minors.

But policing content and ensuring that advertising doesn’t run with offensive clips has been a long-running problem for the video platform.

YouTube released a similar statement in March, when several companies including Coca-Cola, PepsiCo, Walmart, Dish Network, Starbucks and General Motors stopped advertising on the platform after learning that their ads were running alongside videos featuring racist and anti-Semitic content.

YouTube also issued a statement in June, when Britain’s major political parties pulled their commercials from the platform after they appeared with videos that promoted “extremist ideology,” the Wall Street Journal reported.

The problem YouTube faces is twofold.

First is the overwhelming amount of content constantly being generated. Users watch one billion hours of video each day on the site. The Guardian reported that 300 hours of video are uploaded every minute.

YouTube uses a combination of human and automated watchdogs to suss out offensive content, but much of it slips through. There simply aren’t enough human reviewers to monitor so much video, and critics say the protective algorithms often don’t work.

“They work by correlating patterns within the content — such as the use of particular word combinations or image elements — that have previously been flagged by human content moderators as being violations of the platform content policies,” Ansgar Koene, senior research fellow at the University of Nottingham’s School of Computer Science, told Wired. “The algorithms are therefore incapable of detecting novel types of violations.”
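As a rough illustration of the pattern-correlation approach Koene describes, consider the Python sketch below, which flags a comment only when it repeats a word combination that a human moderator has already marked. The pattern list and function names are invented for this example, and real moderation systems are far more sophisticated, but the blind spot is the same: a novel violation, phrased in a way no moderator has flagged before, scores as clean.

    # Hypothetical, greatly simplified sketch of pattern-correlation moderation.
    # The pattern list and names below are invented for illustration and are
    # not YouTube's actual system.

    # Word combinations that human moderators previously flagged as violations.
    FLAGGED_PATTERNS = [
        frozenset({"known", "bad", "phrase"}),
        frozenset({"another", "flagged", "combination"}),
    ]

    def is_violation(comment: str) -> bool:
        """Flag a comment only if it repeats an already-flagged word combination."""
        words = set(comment.lower().split())
        return any(pattern <= words for pattern in FLAGGED_PATTERNS)

    print(is_violation("a known bad phrase"))         # True: matches a seen pattern
    print(is_violation("a novel kind of violation"))  # False: nothing to correlate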

The second problem is how the ads are disseminated. Companies have three choices when placing their advertisements, according to the Wall Street Journal. They can be paired with a specific type of content, a particular set of keywords or a certain demographic profile. YouTube then automatically plays the ads with the corresponding videos.

But these categories can be misleading. The videos of young girls that attracted sexualized comments were not, on their face, sexual. So if a company asked for its ads to run with family-friendly content, for example, there’s a good chance they ended up on one of these videos.
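To make that matching logic concrete, here is a minimal Python sketch of the three targeting modes the Journal describes. The class names, fields and sample data are invented for illustration and do not reflect YouTube’s actual ad system; the point is only that once a video carries a family-friendly label, nothing in the matching step looks past that label.

    # Hypothetical sketch of the three ad-targeting modes described above.
    # Class names, fields and sample data are invented for illustration.
    from dataclasses import dataclass
    from typing import List, Optional, Set

    @dataclass
    class Video:
        title: str
        category: str            # label inferred from the video's surface content
        keywords: List[str]
        audience: str            # demographic profile of its typical viewers

    @dataclass
    class AdCampaign:
        brand: str
        target_category: Optional[str] = None
        target_keywords: Optional[Set[str]] = None
        target_audience: Optional[str] = None

    def matches(ad: AdCampaign, video: Video) -> bool:
        """Run the ad on any video satisfying whichever criterion was chosen."""
        if ad.target_category is not None:
            return video.category == ad.target_category
        if ad.target_keywords is not None:
            return bool(ad.target_keywords & set(video.keywords))
        return video.audience == ad.target_audience

    # A video that is not sexual on its face gets a family-friendly label...
    video = Video("gymnastics practice", "family-friendly",
                  ["kids", "gymnastics"], "parents")
    # ...so a brand targeting family-friendly content lands on it automatically.
    ad = AdCampaign("CandyCo", target_category="family-friendly")
    print(matches(ad, video))  # True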

“We have to accept that under the current model of rapid, instant publishing, content moderation will never be completely perfect,” Koene told Wired. “If we really want to block all content that violates the platform rules, then we would have to move to a model where platform users submit content they want to publish to an editor for approval, as we do when publishing in journals. This would transform the current Web 2.0 platforms into traditional media channels.”
