
Adedayo

WHEN YOUTUBE REMOVES VIOLENT VIDEOS, IT HINDERS JUSTICE

Oct 07 2017 at 02:24pm


AMONG THE HALF-MILLION hours of video uploaded to YouTube on March 20, 2017, was a disturbing 18-second clip showing a gunman executing three people — bound and facing a wall — on a dusty street, purportedly in Benghazi, Libya.

Though the scene was truly shocking, the short video was similar to many others posted from conflict zones around the world.

But then on June 9, another video was posted of the same gunman apparently supervising the execution of four kneeling captives. About a month later, when yet another video of the execution of 20 captives surfaced, the gunman — allegedly Mahmoud al-Werfelli — was again visible, a literal commanding presence.
When the International Criminal Court issued an arrest warrant for Mahmoud al-Werfelli in August for the war crime of murder in Libya, it marked a watershed moment for open-source investigations. For those of us who embrace the promise of the digital landscape for justice and accountability, it came as welcome validation that content found on Facebook and YouTube forms a good deal of the evidence before the Court.
But this relatively new path to justice is at risk of becoming a dead-end.

The explosion of content shared on various digital platforms in recent years has given human rights investigators the unprecedented ability to research grave abuses like the probable war crimes documented in these videos. But there are significant challenges to relying on this material in investigations. Footage is often attributed to the wrong time, place, or people. Tracing a chain of custody for the content often dead-ends at the platform to which the materials were posted. In addition, such platforms serve not only as repositories for materials but also as mediums for advancing narratives, occasionally with the intention of spreading misinformation, inciting hatred, or mobilizing for violence.

For years, companies like Facebook, Google, and Twitter have been challenged by governments and the general public to combat hate speech, incitement, and recruitment by extremist groups. To do so, these platforms rely on a mix of algorithms and human judgment to flag and remove content, with a tendency toward either false negatives or false positives: flagging too little or too much by some values-based standard.

In June, Google announced four steps intended to fight terrorism online, among them more rigorous detection and faster removal of content related to violent extremism and terrorism. The quick flagging and removal of content appears successful. Unfortunately, we know this because of devastating false positives: the removal of content used or uploaded by journalists, investigators, and organizations curating materials from conflict zones and human rights crises for use in reporting, potential future legal proceedings, and the historical record.

Just as a victim or witness can be coerced into silence, information critical to effective investigations may disappear from platforms before any competent authority has the chance to preserve it. Investigators already know about this risk. Video documentation of the alleged murder of the four kneeling captives by al-Werfelli was quickly removed from YouTube, as were thousands of videos documenting the conflict in Syria since YouTube revamped its flagging system. With effort, some curators of content challenged YouTube on the removals, and some of the deleted channels and videos have been restored.

However, it's impossible to know how much material posted by human rights activists or others seeking to share evidence was or will be lost, including what could amount to key pieces of evidence for prosecutors to build their cases. Though YouTube may store removed material on its servers, the company cannot know the evidentiary or public interest value of the content, which remains in digital purgatory, undiscoverable by the investigators and researchers equipped to make those assessments. And since the only people who can challenge these removals are the original content owners or curators, some are placed at a profound disadvantage, and even at great personal risk, if they try to safeguard access to their flagged content by providing further information and logging additional risky keystrokes.
The people posting the most valuable content for pursuing justice and accountability, the civilian under siege, the citizen journalist facing death threats, are often the least able to contest its removal.

Content platforms are not in the business of ensuring preservation of evidence for use in war crimes investigations. In fact, they are not in the business of fighting terrorism and hate speech. These are, however, among the obligations that such companies have to the greater public good.

While YouTube's content removal systems have been most recently in the news for erroneously removing content of public interest, other platforms face the same pressures to dampen the echo of certain types of degrading or extremist content.
As the public good is negotiated in the digital space, and systems are adjusted to better represent that good, it will be crucial to minimize the risk laid bare by the acknowledged YouTube failure. The new flagging system, while addressing a need, was implemented without due consultation with the civil society that has come to depend on platforms like YouTube for sharing and accessing information, some of it shared at great personal peril. The company, for its part, has said it will work with NGOs to distinguish between violent propaganda and material that's newsworthy. Such consultations may yield guidelines for human reviewers on the legitimate use of materials that — though they appear on their face to glorify violence or hatred — have public value.
The tension between the desire to keep information unfettered and the need to prevent the abuse of channels by violent or hate groups isn't one that can readily be settled with a consultation. But better outreach to human rights investigators can minimize harm and potential loss of evidence. As other platforms move to deal with these challenges, a little consultation will go a long way in ensuring the digital landscape of the future — though maybe not pretty — is contoured to the pursuit of justice and accountability.

Last edited 07 Oct 2017
