YouTube’s failure to stop the spread of conspiracy theories related to last week’s school shooting in Florida highlights a problem that has long plagued the platform: It is far better at recommending videos that appeal to users than at stanching the flow of lies.
The company for years has poured resources into better tuning its recommendation algorithm to the tastes of individual viewers. But its weakness in detecting misinformation was on stark display this week as demonstrably false videos rose to the top of YouTube’s rankings.
One clip that mixed authentic news images with misleading context earned more than 200,000 views before YouTube yanked it Wednesday for breaching its rules on harassment.
The failures of this past week – which also happened on Facebook, Twitter and other social media – make clear that some of the richest, most technically sophisticated companies in the world are losing against people pushing content rife with untruths.
“I think tragically the proliferation and spread of these videos attacking the victims of the shooting in Parkland are a pretty clear indication the technology companies have a long way to go to deal with this problem,” said Rep. Adam Schiff, D-Calif., the top Democrat on the House Intelligence Committee.
YouTube has apologised for the prominence of the misleading videos, which claimed that survivors featured in news reports were “crisis actors” merely appearing to grieve for political gain.
YouTube removed several videos and said the people who posted them outsmarted the platform’s safeguards by using portions of real news reports about the Parkland, Florida, shooting as the basis for their conspiracy videos. These fake reports often contain photos, videos and memes that repurpose authentic content.
YouTube said in a statement Thursday that its algorithm looks at a wide variety of factors when deciding a video’s placement and promotion. “While we sometimes make mistakes with what appears in the Trending Tab, we actively work to filter out videos that are misleading, clickbaity or sensational,” the statement said.
The company is expanding the fields its algorithm scans, including a video’s description, to ensure that clips alleging hoaxes do not appear in the trending tab, said a person familiar with internal deliberations at the company, speaking on the condition of anonymity to discuss matters not yet announced publicly.
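YouTube hasn’t said how the expanded scan works. As a rough sketch only – the field names, phrase list and logic below are assumptions for illustration, not YouTube’s system – a filter of this kind might check a clip’s metadata against hoax-alleging language before the video becomes eligible for the trending tab:

```python
# Hypothetical sketch: screening video metadata, including the description,
# for hoax-alleging language before a clip can be considered for trending.
# The phrase list, field names and logic are illustrative assumptions.
HOAX_PHRASES = ["crisis actor", "false flag", "staged", "hoax"]

def eligible_for_trending(video: dict) -> bool:
    """Return False when the title or description contains a flagged phrase."""
    text = f"{video.get('title', '')} {video.get('description', '')}".lower()
    return not any(phrase in text for phrase in HOAX_PHRASES)

video = {
    "title": "BREAKING: student is a crisis actor",
    "description": "Proof the interviews were staged...",
}
print(eligible_for_trending(video))  # False: metadata contains flagged phrases
```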
The auto-complete feature on Google search also seems to have fallen victim to falsehoods, as it did after previous mass shootings. When users type the name of one prominent Parkland student, David Hogg, the word “actor” often appears among the suggested completions, a feature that drives traffic to a subject.
Such problems are endemic across social media platforms, researchers say.
“It’s not getting better. It’s not even really slowing down. It’s accelerating,” said Jonathan Albright, a Columbia University social media researcher. “If you can game Google and YouTube, we’re in a dark hour.”
However, Google has tuned its search software to elevate more reliable results than its video platform does: News stories debunking the Parkland conspiracies dominated Google’s results.
Five months ago, YouTube promised to improve its search functions after a wave of falsehoods overwhelmed the platform in the aftermath of a Las Vegas shooting that left 58 people dead. The company also has pledged on several occasions to hire thousands more humans to monitor trending videos for deception because its software is not advanced enough to understand some nuance and context.
But experts say the massive volume of uploaded content – 400 hours per minute on YouTube alone – makes routine human review of the vast majority of videos implausible.
One of the most significant algorithmic changes to YouTube came in 2012, when the site shifted from recommending content based largely on what other users clicked after a video ended to making suggestions intended to maximise “watch time,” a metric closely monitored by advertisers because it shows how long people stay on the site. The site’s viewership has climbed dramatically in the years since.
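Neither objective has been published in detail. The toy comparison below, with invented numbers, shows why the shift matters: a video tuned to win clicks can lose once the ranking target becomes how long it keeps viewers watching.

```python
# Invented numbers contrasting the two ranking objectives described above.
videos = [
    # (video_id, click_through_rate, avg_minutes_watched)
    ("calm_explainer", 0.02, 9.5),
    ("shock_thumbnail", 0.08, 1.2),
]

by_clicks = max(videos, key=lambda v: v[1])      # pre-2012-style objective
by_watch_time = max(videos, key=lambda v: v[2])  # post-2012 "watch time" objective

print(by_clicks[0])      # shock_thumbnail: wins on clicks
print(by_watch_time[0])  # calm_explainer: wins on minutes kept on the site
```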
On Wednesday, a video from a YouTube creator with few subscribers suddenly rocketed to the top of its trending module, which occupies the site’s most prominent real estate and is often the first thing viewers see. YouTube says the module runs on an algorithm that “aims to surface videos that are appealing to a wide range of viewers; are not misleading, clickbaity or sensational; and capture the breadth of what’s happening on YouTube and in the world.” The company said the trending videos “ideally, are surprising or novel.”
The company hasn’t detailed exactly what elements its algorithm considers before surfacing trending videos. But it says the algorithm weighs a video’s view count, how quickly that count grew and how “fresh,” or young, the video is, among other factors. New videos do better than old ones, and videos with major spikes in viewership do better than those with more measured growth.
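Taken together, those factors suggest why a brand-new clip with a sharp spike in viewership can leap past far larger channels. The sketch below combines them into a toy score; the weights and decay curve are assumptions for illustration, since the real formula is unpublished.

```python
# Toy trending score built from the factors the article names: total views,
# speed of growth and freshness. Weights and the decay curve are invented.
import math

def trending_score(views: int, views_last_hour: int, age_hours: float) -> float:
    velocity = views_last_hour / max(age_hours, 1.0)   # spike in viewership
    freshness = math.exp(-age_hours / 24.0)            # newer videos score higher
    return 0.2 * math.log1p(views) + 0.6 * velocity + 0.2 * freshness * 100

# A brand-new clip with a sharp spike outranks an older, far larger video,
# which is how a small channel's conspiracy video could top the module.
print(trending_score(views=200_000, views_last_hour=150_000, age_hours=3))
print(trending_score(views=5_000_000, views_last_hour=20_000, age_hours=72))
```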
It isn’t just Silicon Valley that’s struggling to stop the spread of misinformation online: Congress is vexed, too. To Schiff, lawmakers generally “know far too little to be ready to propose anything” that might address the proliferation of conspiracy theories, hoaxes and other troubling content through legislation, he said in an interview Thursday.
For now, they have held a few hearings to study the matter, including a trio of sessions in which they grilled Facebook, Google and Twitter executives about Russian propaganda during the 2016 presidential election. In the aftermath of the Parkland shooting, however, the leaders of key congressional committees that oversee the tech industry have stopped short of announcing plans to question them again.
“All Americans should be protected from unfair, deceptive and abusive behaviours online. Clearly, these companies need to step up their efforts,” said Rep. Greg Walden, R-Ore., chairman of the House Energy and Commerce Committee, which oversees many technology issues. In a statement, Walden said the panel would “continue to scrutinise how the tech industry works to keep consumers safe.”
Sen. Richard Blumenthal, D-Conn., said in a statement, “The poisonous rumours promoted by bots, trolls, and fake accounts in the wake of the Parkland shooting exposed major tech companies’ continued inability to address rampant disinformation spread during critical emergencies. YouTube, Twitter, and all online platforms must do more to identify harmful and misleading content, inform users exposed to it, or prevent it from spreading in the first place.”