FACEBOOK REPORT CONCLUDES COMPANY CENSORSHIP VIOLATED PALESTINIAN HUMAN RIGHTS


The report, expected to be published tomorrow, concludes that Facebook and Instagram discriminated against Palestinians during Israel’s aggressive assault on the Gaza Strip last May.

According to a report commissioned by Meta, the parent company of the two social media platforms, Facebook and Instagram’s speech policies violated the fundamental human rights of Palestinian users amid repeated Israeli assaults on the Gaza Strip in May 2021.

The long-awaited report, which was obtained by The Intercept ahead of publication, states that “Meta’s actions in May 2021 appear to have had an adverse human rights impact… on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.”

The research, commissioned by Meta last year and produced by the independent consultancy Business for Social Responsibility, or BSR, focuses on the company’s censorship practices and allegations of bias amid Israeli forces’ violent crackdowns on Palestinians in the spring of 2021.

Israeli police cracked down on protesters in Israel and the West Bank following the forcible eviction of Palestinian families from the Sheikh Jarrah neighborhood of occupied East Jerusalem, and the Israeli military launched airstrikes against Gaza that injured thousands of Palestinians and killed 256 people, including 66 children, according to the U.N. Many Palestinians who tried to use Facebook and Instagram to document and condemn the violence found that their posts abruptly vanished without warning; the BSR probe attempts to explain this phenomenon.

More than a dozen civil society and human rights organizations published an open letter last month criticizing Meta for delaying the report, which the company had originally promised to publish in the “first quarter” of the year.

While acknowledging that Meta has made improvements to its policies, BSR also criticizes “a lack of control at Meta that permitted content policy errors to arise with substantial implications.”

Though BSR makes clear that the censorship system Meta alone built harmed Palestinian rights, the report clears the company of “intentional bias,” focusing instead on instances where “Meta policy and practice, combined with broader external dynamics, does lead to different human rights impacts on Palestinian and Arabic speaking users,” a nod to the fact that these systemic flaws are by no means exclusive to the Arabic-speaking community.

Meta responded to the BSR report in a document to be distributed alongside it. In a footnote to the response, which The Intercept also obtained, the company stated: “Meta’s publication of this response should not be construed as an admission, agreement with, or acceptance of any of the findings, conclusions, opinions, or viewpoints identified by BSR, nor should the implementation of any suggested reforms be taken as admission of wrongdoing.” (Meta did not respond to The Intercept’s request for comment about the report by publication time.)

The findings of BSR’s analysis echo long-standing allegations of unequal speech enforcement in the Palestinian-Israeli conflict: Meta removed Arabic content about the violence at a far higher rate than Hebrew-language posts. The investigation found that the discrepancy persisted whether posts were reviewed by human moderators or by automated software.

According to the report, Arabic content was subject to greater over-enforcement on a per-user basis (e.g., the erroneous removal of Palestinian voices). In the data BSR examined, proactive detection rates for potentially violating Arabic content were far higher than those for potentially violating Hebrew content.

BSR attributes the starkly different treatment of Palestinian and Israeli posts to the same institutional problems that rights organizations, whistleblowers, and scholars have blamed for the company’s prior humanitarian failures: Meta, a firm with over $24 billion in cash on hand, lacks personnel who know the relevant cultures, languages, and histories, and relies on deficient algorithmic technology to police speech around the world.

An “Arabic hostile speech classifier,” which uses machine learning to flag potential policy violations, has no Hebrew equivalent, subjecting Palestinian users to algorithmic screening that Israeli users are spared. The report also notes that the Arabic system is flawed: “Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects, both because the dialect is less common, and because the training data — which is based on the assessment …”

Human staff appear to have compounded the skewed results of Meta’s speech-policing algorithms. According to the report, potentially violating Arabic content may not have been routed to reviewers fluent in or familiar with the particular dialect used, and Meta lacked enough Arabic- and Hebrew-speaking staff to handle the surge in posts.

 


RELATED: FACEBOOK’S SECRET RULES ABOUT THE WORD “ZIONIST” IMPEDE CRITICISM OF ISRAEL

https://theintercept.com/2021/05/14/facebook-israel-zionist-moderation/

The report goes on to say that these mistakes had a cascading effect that further suppressed speech. According to BSR’s analysis of tickets and feedback from internal stakeholders, a crucial over-enforcement problem in May 2021 arose when users accrued “false” strikes after posts were mistakenly removed for breaking content policies, strikes that then hurt the visibility and engagement of what they posted. Given a context in which rights like freedom of expression, freedom of association, and safety were of particular importance, especially for activists and journalists, the report notes, “the human rights implications… of these errors were more severe.”

The “Dangerous Individuals and Organizations” policy, or “DOI,” is a list of thousands of people and groups that Meta’s billions of users cannot “praise,” “support,” or “represent,” according to the BSR report. The full list, obtained and published by The Intercept last year, showed that the policy focuses primarily on Muslim and Middle Eastern entities, which critics described as a racist practice.

Meta asserts that it is legally required to suppress mention of organizations sanctioned by the United States government, but legal experts dispute the company’s interpretation of federal anti-terrorism law. Responding to The Intercept’s publication of the list, the Brennan Center for Justice called the company’s claims of legal obligation a “fiction.”

BSR concurs that the policy is fundamentally biased: “Legal designations of terrorist organizations around the world have a disproportionate focus on individuals and organizations that have identified as Muslim. As a result, Meta’s DOI policy and the list are more likely to have an impact on Palestinian and Arabic-speaking users, both of which are based on Meta’s incorrect interpretation of legal obligations.”

According to the report, Palestinians are especially vulnerable to the blacklist’s negative impacts: “Palestinians are more likely to breach Meta’s DOI policy because of the existence of Hamas as a governing entity in Gaza and political candidates linked with listed organizations. As a result of the exceptionally harsh penalties for DOI infractions, Palestinians are more likely to suffer the consequences of both proper and improper policy enforcement.”

The document concludes with a list of 21 non-binding policy recommendations, including increasing staff capacity to accurately review Arabic posts, developing a working Hebrew-language classifier, strengthening oversight of outsourced moderators, and reforming, and improving transparency around, the “Dangerous Individuals and Organizations” policy.

In its response to the report, Meta vaguely commits to implementing or considering aspects of 20 of the 21 recommendations. The sole exception is a suggestion to “Fund public study into the ideal relationship between legally required counterterrorism commitments and the policies and practices of social media platforms,” which the company declines on the grounds that it does not want to offer legal advice to other businesses; Meta instead advises concerned experts to contact federal authorities.


By: Miss Cherry May Timbol – Independent Reporter

 
