YouTube will have its Cambridge Analytica moment: Commentary

This week, after the pedophile comments scandal, BuzzFeed reported that the company had removed two cartoons from the YouTube Kids app that had been spliced with footage of a man explaining how to commit self-harm. In response, YouTube said it “work[s] hard to ensure YouTube is not used to encourage dangerous behavior and we have strict policies that prohibit videos which promote self-harm.”

You get the idea.

To be fair, YouTube has taken concrete steps to fix some problems. Over the last couple of years, major news events have become targets for scammers posting misleading videos, like claims that shootings such as the one in Parkland, Florida, were staged by crisis actors. In January, the company said it would stop recommending such videos, effectively burying them. It also favors “authoritative” sources, like mainstream media organizations, in search results around major news events.

And YouTube is not alone in struggling to fight inappropriate content that users upload to its platform. Pinterest took steps last year to block misinformation about vaccines, but it was fairly easy for CNBC to find some search terms that the company had missed. Facebook and Twitter have been raked over the coals repeatedly for allowing their platforms to be used to spread everything from suicide videos to misinformation meant to sway elections or spur genocidal behavior.

The problem isn’t really about YouTube, Facebook or any single company.

The problem is the entire business model around user-generated content, and the whack-a-mole game of trying to stay one step ahead of people who abuse it.

Companies like Google and Facebook upended the traditional media business by giving regular people a friction-free way to upload and share whatever they wanted. As users uploaded masses of words and links and hours of video, these platform companies amassed huge audiences, then sold ads against them. When people can share whatever they want, these platforms turn into a mirror image of the human psyche — including the ugly parts.

These companies have human screeners who try to keep on top of the grossest material and take it down before it spreads too far. But it’s not practical — and may be physically impossible — to hire enough screeners to catch every violation, or to screen every piece of content before it’s posted instead of after. They are investing in computer algorithms and artificial intelligence as well, and these programs do work — there’s almost no porn or nudity on YouTube or Facebook, for instance — but they’re not 100 percent effective, especially for altered videos or political content.

TV networks, newspapers and other traditional media can be sued or fined by the government if they publish this kind of material.

But tech platforms that rely on user-generated content are protected by Section 230 of the 1996 Communications Decency Act, which says platform providers cannot be held liable for material users post on them. It made sense at the time — the internet was young, and forcing start-ups to monitor their comments sections (remember comments sections?) would have exploded their expenses and stopped growth before it started.

Even now, when some of these companies are worth hundreds of billions of dollars, holding them liable for user-generated content would blow up these companies’ business models. They’d disappear, reduce services or have to charge fees for them. Voters might not be happy if Facebook went out of business or they suddenly had to start paying $20 a month to use YouTube.

Similarly, advertiser boycotts tend to be short-lived — advertisers go where they get the best return on their investment, and as long as billions of people keep watching YouTube videos, they’ll keep advertising on the platform.

So the only way things will change is if users get turned off so badly that they tune out.

We started to see some hints of this with Facebook after the Cambridge Analytica scandal last year. While Facebook had been caught violating users’ privacy dozens of times before, the mere hint that a political consultancy might have used Facebook data to help elect Trump (although this is far from proven) set people off. Congress conducted hearings. Subsequent privacy scandals got attention they never used to. People noisily deleted their accounts. Growth has largely stalled in the U.S., and younger users are abandoning the platform, although that might have more to do with changing fashions than with any reaction to the scandals. (Anyway, kids are still flocking to Facebook-owned Instagram.)

YouTube has so far skated free of any similar scandal. But people are paying closer attention than ever before, and it’s only a matter of time before a scandal hits that is big enough to actually start driving users away.

In the meantime, if you post videos of your kids to YouTube, set them to private.
