To report live video, a user must know to click on a small set of three gray dots on the right side of the post. A user who clicks on “report live video” gets a choice of objectionable content types to select from, including violence, bullying and harassment. Users are also told to contact law enforcement if someone is in immediate danger.

Facebook also doesn’t appear to post any public information instructing law enforcement how to report dangerous or criminal video. The company does have a page titled “information for law enforcement authorities,” but it merely outlines procedures for making legal requests for user account records. Facebook didn’t immediately respond to a request for comment and questions about its communications with police.

Facebook uses artificial intelligence to detect objectionable material, while relying on the public to flag content that violates its standards. Those reports are then sent to human reviewers, the company said in a November video. The video also outlined how it uses “computer vision” to detect 97 percent of graphic violence before anyone reports it.

However, it’s less clear how these systems apply to Facebook’s live streaming. Experts say live video poses unique challenges, and complaints about live streaming suicides, murders and beatings regularly come up. Nonetheless, they say Facebook cannot deflect responsibility.

“If they cannot handle the responsibility, then it’s their fault for continuing to provide that service,” said Mary Anne Franks, a law professor at the University of Miami. She calls it “incredibly offensive and inappropriate” to pin responsibility on users subjected to traumatic video.

In some cases, it’s not clear at the outset whether a video or other post violates Facebook’s standards, especially on a service with a range of languages and cultural norms. Indecision didn’t seem to be the case here, though. Facebook simply didn’t know about it in time.

Facebook’s systems failed to detect the livestreamed video of the shootings. Facebook announced in a blog post Tuesday that it would work with law enforcement to train its artificial intelligence systems to recognize videos of violent events as part of a wider crackdown on extremist content.

Facebook’s Sonderby said in Tuesday’s blog post that the company “designated both shootings as terror attacks, meaning that any praise, support and representation of the events” are violations.

Vaidhyanathan said Facebook’s live video feature has turned into a beast that Facebook can do little about “short of flipping the switch.” Though Facebook has hired more moderators to supplement its machine detection and user reports, “you cannot hire enough people” to police a service with 2.3 billion users.

The company said that the incident “strongly influenced” the company’s updates to its policies and enforcement. “The attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content,” Facebook said.

Platforms increasingly face a battle with bad actors organizing on forums such as 8chan to circumvent their detection systems. Viewers of the mosque shootings recorded, repackaged and reposted the original video in a range of formats, creating a gruesome game of whack-a-mole.

In the first 24 hours after the attack, Facebook blocked or removed 1.5 million versions of the video from the platform. After NBC News provided links to two of the new videos discovered on the platform, Facebook took them down and added them to its database to ensure that future instances couldn’t be uploaded.

“When we identify isolated instances of differently edited versions of the video, we take it down and add it to our database to prevent future uploads of the same version being shared,” she said.