One year after Zuckerberg’s testimony about violent content on Facebook, has anything changed?

The live-streaming of New Zealand’s mosque shootings shows how difficult it is to put a stop to such content

At least 49 people were killed in a mass shooting at two mosques in Christchurch, New Zealand, on Friday. The perpetrator broadcast live footage of the attack on Facebook.

Experts say thwarting this kind of social-media broadcast is difficult because of the lack of legal oversight, the invasive nature of technology and how people instinctively react to such images.

People often share such videos and become desensitized to the violence.

Rewind to April 2018. That’s when Facebook founder and CEO Mark Zuckerberg testified on Capitol Hill about the serious strides the social-media company was making to thwart the spread of hateful, violent content.

‘What has happened here is an extraordinary and unprecedented act of violence.’

—New Zealand Prime Minister Jacinda Ardern

“We’re developing A.I. tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook,” he told U.S. senators.

“Today, as we sit here, 99% of the ISIS and al-Qaeda content that we take down on Facebook, our A.I. systems flag before any human sees it,” he added. Facebook intended to put more than 20,000 people on security and content review, Zuckerberg said.

Fast forward to March 2019. On Friday, Facebook and other tech giants like Twitter and YouTube were under scrutiny because of what people saw and shared through their sites: The 17-minute livestreamed mass shooting, not to mention links to a manifesto apparently inspired by white nationalism.

The three social-media companies told MarketWatch they had taken down the content, suspended accounts, were working with authorities and were on guard to remove further posts. Facebook “quickly removed” the video when New Zealand police alerted the company, a company spokeswoman said.

New Zealand Prime Minister Jacinda Ardern tied the attack to people with “extremist views,” calling the shooting “one of New Zealand’s darkest days,” adding, “What has happened here is an extraordinary and unprecedented act of violence.”

Facebook, Twitter and YouTube have computers to remove violent content. Humans too — a reportedly difficult job.

Violence broadcast online isn’t unprecedented. Four people pleaded guilty to charges in connection with the livestreamed 2017 beating of a Chicago teen with special needs. A 74-year-old Cleveland man was shot dead in 2017, and video of the murder was then posted on Facebook. That year, BuzzFeed did its own count and found that violence had been broadcast over Facebook Live at least 45 times since the feature’s December 2015 launch.

These sites insist they don’t idly stand by, even with the sheer amount of posting and sharing that goes on. Facebook, Twitter and YouTube all have posting policies and computers tasked with spotting and removing content that violates them. Humans do that work too, a reportedly difficult job.

The companies will also kick off people who run afoul of their rules, like when Twitter banned now-convicted “pharma bro” Martin Shkreli, and YouTube, Facebook and Twitter all booted conspiracy theorist Alex Jones, the creator of Infowars, from their platforms.

But, as Friday showed, violent material still surfaces. The question is how to clamp down.

‘I don’t think it’s an impossible task. It’s a hard task.’

—Danielle Citron, law professor at University of Maryland Francis King Carey School of Law

“I don’t think it’s an impossible task. It’s a hard task, and it depends on the defaults we want to live with,” said Danielle Citron, a University of Maryland law professor specializing in online free speech and privacy issues.

That could mean delays and filters that let platforms inspect content that may be violent or depict non-consensual sex, Citron said.

Zuckerberg recently announced Facebook’s effort to build up user privacy. The company did not reply to a MarketWatch question about whether livestreaming would be included in the shift toward more user privacy.

Legal liability

Months after Zuckerberg’s trip to Washington, D.C., Facebook’s lawyers urged federal appellate judges in Manhattan to affirm the dismissal of a case claiming the site enabled Hamas to fan anti-Semitic violence in Israel.

“Facebook empathizes with all victims of terrorism and takes steps every day to rid Facebook of terrorist content,” the company’s court papers said. Still, the lawsuit was “meritless,” it said.

One of the company’s arguments focused on the Communications Decency Act (CDA). Facebook said the law shields it and similar service providers from liability for what users put on their platforms.

The case is pending.

‘Facebook empathizes with all victims of terrorism and takes steps every day to rid Facebook of terrorist content.’

—Facebook court papers

When Twitter was unsuccessfully sued in San Francisco federal court over ISIS tweets, the company invoked the same law.

The case is being appealed.

The CDA wording “is the absolute go-to defense and it is usually granted,” Citron said. Tech companies have used it to fight off suits over all kinds of content, from violence to hate speech to revenge porn, she said.

“We have the internet we have today because of it,” Citron said.

Still, the prospect of more legal liability might make tech companies police their content more intensely, Citron said. If it were up to her, she’d keep companies immune from lawsuits over content so long as they showed “reasonable practices” to halt illegal material.

Robert Tolchin, a lawyer for the plaintiffs in the Facebook and Twitter cases, said court rulings have wrongly built up protections for internet companies on the issue.

“It’s taken on a life of its own,” he said.

Technology gaps

Human moderators still play a major role in removing objectionable social-media content, said Jennifer Golbeck, a professor at the College of Information Studies at the University of Maryland.

“Automated approaches to understand video and images just aren’t good enough to rely on at this point,” Golbeck said.

She pointed to the issues websites have faced using algorithms to weed out pornography. Telling software to flag anything with nudity as pornography led companies to block content that wasn’t problematic, such as breastfeeding or sexual-health education videos.

‘Computers are so much better at understanding the meaning of words than images.’

—Jennifer Golbeck, a professor at the College of Information Studies at the University of Maryland

“Computers are so much better at understanding the meaning of words than images,” Golbeck said. “Words are defined, there are rules about how we put them together, and there’s a lot of examples of things other people have said that computers can learn from. Understanding what is in a picture, though, is very hard, and understanding the context of that is even harder.”
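
As a rough illustration of that asymmetry, here is a hypothetical Python sketch, not any platform’s actual filter, with an invented block list: matching text against defined phrases takes a few lines, while there is no comparably short rule for deciding what raw pixels show.

```python
# Hypothetical block list and post, purely for illustration.
BANNED_PHRASES = {"example banned phrase", "another banned phrase"}

def flag_text(post: str) -> bool:
    """Words are defined and easy to match: flag a post that contains any
    phrase on the (made-up) block list."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

def flag_image(pixels: bytes) -> bool:
    """No short rule maps raw pixels to 'violent' or 'safe'; real systems
    need trained models plus human review to judge context."""
    raise NotImplementedError("image understanding is the hard part")

print(flag_text("This post repeats an EXAMPLE BANNED PHRASE."))  # True
```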

Another issue is the regulatory grey area for images or videos showing violence or terrorism.

Companies like Facebook and YouTube have developed stronger algorithms to weed out content that they could be held liable for in a court of law, such as copyrighted materials and child pornography, said Kalev Leetaru, a senior fellow at the Center for Cyber and Homeland Security at Auburn University.

“Facebook actually faces real consequences if a Hollywood blockbuster is shared on their platform,” he said. “It’s never going to face any consequences for having facilitated a live stream of terrorism.”

Politics also plays a role

Here’s how Facebook’s algorithm works: It compares potentially objectionable posts to a cache of content that has already been deemed to represent terrorist or violent actions. But most of that cache relates to acts by groups like ISIS or al-Qaeda.

As a result, Leetaru said, it can fall short of identifying acts of violence perpetrated by other terrorist groups such as Boko Haram or white supremacists.
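
To make that gap concrete, here is a minimal, hypothetical Python sketch of lookup-based matching. It is not Facebook’s production system, which reportedly relies on perceptual hashing and machine-learned classifiers; an exact SHA-256 digest and an invented fingerprint cache stand in for the real thing.

```python
import hashlib

# Hypothetical cache of fingerprints for clips already judged to be terrorist
# or violent content -- per the article, mostly ISIS and al-Qaeda material.
KNOWN_VIOLENT_FINGERPRINTS = {
    "9c56cc51b374c3ba189210d5b6d4bf57790d351c96c47c02190ecf1e430635ab",  # placeholder
}

def fingerprint(video_bytes: bytes) -> str:
    """Compute a fingerprint for an upload. An exact SHA-256 digest stands in
    for the perceptual hashes real systems use to survive re-encoding."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(video_bytes: bytes) -> bool:
    """Block only when the upload matches something already in the cache."""
    return fingerprint(video_bytes) in KNOWN_VIOLENT_FINGERPRINTS

# The shortfall described above: a never-before-seen livestream has no entry in
# the cache, so a lookup-based filter lets it through until humans review it or
# its fingerprint is added after the fact.
new_footage = b"bytes of a brand-new stream"
print(should_block(new_footage))  # False -- nothing in the cache to match
```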

Some violent imagery might be worth keeping out there, Leetaru argued. “If it’s a video of Venezuelan police opening fire on protestors, that stream of violence you may want to permit,” he said.

Another complication: News organizations may also share articles or videos that contain stills from the violent videos. That’s what some Australian outlets did with the Christchurch shooting.

An algorithm would similarly flag those, even though the content would be protected under freedom of press laws, Leetaru said.

Why people share such images

The jury’s out on the psychology behind sharing images or videos of violence and atrocities. But it’s clear people don’t respond to this content in the same way they would in the real world, said Desmond Upton Patton, associate professor of social work at Columbia University. “We have an ability to look with and engage with highly traumatic and violent content,” he said.

‘The screen and the technology provides a distance.’

—Desmond Upton Patton, associate professor of social work at Columbia University  

“Technology provides a distance,” Patton said. “There is something that happens in the technology space to not feel and empathize with the trauma that they’re seeing.”

Because people don’t always have the same emotional reaction to this content when they view it on social media, they are more willing to share or re-tweet it, he added.

Research has long associated depictions of violence in electronic media with an increased propensity for aggressive or violent behavior. While evidence has not yet suggested the same link with social media, Patton said the dissociation people feel when using these platforms may promote violent activity.

“People feel freer to say or do things on social media that they would not say or do in reality,” he said. “Once those things have been articulated on social media, for some people it becomes a need to follow through because you’ve put it out there.”
