Meta’s Oversight Board issued 20 decisions in its first year. Is that enough?

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company. (AFP)
Updated 02 July 2022


  • Board shows commitment to bringing about positive change, and to lobbying Meta to do the same. But is this enough?
  • The first annual report from the independent review body, which is funded by Meta, explains the reasoning behind its 20 rulings and the 86 recommendations it has made

DUBAI: Meta’s Oversight Board has published its first annual report. Covering the period from October 2020 to December 2021, it describes the work the board has carried out in relation to how Meta, the company formerly known as Facebook, treats its users and their content, and the work that remains to be done.

The board is an independent body set up and funded by Meta to review content and content-moderation policies on Facebook and Instagram. It considers concerns raised by Meta itself and by users who have exhausted the company’s internal appeals process. It can recommend policy changes and make decisions that overrule the company’s decisions.

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company.

“Through our first Annual Report, we’re able to demonstrate the significant impact the board has had on pushing Meta to become more transparent in its content policies and fairer in its content decisions,” Thomas Hughes, the board’s director, told Arab News.


One of the cases the board considered concerns a post that appeared on media organization Al Jazeera Arabic’s verified page in May 2021, and which was subsequently shared by a Facebook user in Egypt. It consisted of Arabic text and a photo showing two men, their faces covered, who were wearing camouflage and headbands featuring the insignia of the Palestinian Al-Qassam Brigades.

The text read: “The resistance leadership in the common room gives the occupation a respite until 6 p.m. to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighborhood, otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman.”

The user who shared the post commented on it in Arabic by adding the word “ooh.”

Meta initially removed the post because Al-Qassam Brigades and its spokesperson, Abu Ubaida, are designated under Facebook’s Dangerous Individuals and Organizations community standard. However, it restored the post based on a ruling by the board.

The board said in its report that while the community standard policy clearly prohibits “channeling information or resources, including official communications, on behalf of a designated entity,” it also noted there is an exception to this rule for content that is published as “news reporting.” It added that the content in this case was a “reprint of a widely republished news report” by Al Jazeera and did not include any major changes other than the “addition of the non-substantive comment, ‘ooh.’”

Meta was unable to explain why two of its reviewers judged the content to be in violation of the platform’s content policies but noted that moderators are not required to record their reasoning for individual content decisions.





According to the report, the case also highlights the board’s objective of ensuring users are treated fairly because “the post, consisting of a republication of a news item from a legitimate outlet, was treated differently from content posted by the news organization itself.”

Based on allegations that Facebook was censoring Palestinian content, the board asked the platform a number of questions, including whether it had received any requests from Israel to remove content related to the 2021 Israeli-Palestinian conflict.

In response, Facebook said that it had not received any valid, legal requests from a government authority related to the user’s content in this case. However, it declined to provide any other requested information.

The board therefore recommended an independent review of these issues, as well as greater transparency about how Facebook responds to government requests.

“Following recommendations we issued after a case decision involving Israel/Palestine, Meta is conducting a review, using an independent body, to determine whether Facebook’s content-moderation community standards in Arabic and Hebrew are being applied without bias,” said Hughes.

In another case, the Oversight Board overturned Meta’s decision to remove an Instagram post by a public account that allows the discussion of queer narratives in Arabic culture. The post consisted of a series of pictures with a caption, in Arabic and English, explaining how each picture illustrated a different word that can be used in a derogatory way in the Arab world to describe men with “effeminate mannerisms.”


Meta removed the content for violating its hate speech policies but restored it when the user appealed. However, it later removed the content a second time for violating the same policies, after other users reported it.

According to the board, this was a “clear error, which was not in line with Meta’s hate speech policy.” It said that while the post does contain terms that are considered slurs, it is covered by an exception covering speech that is “used self-referentially or in an empowering way,” and also an exception that allows the quoting of hate speech to “condemn it or raise awareness.”

Each time the post was reported, a different moderator reviewed it. The board was, therefore, “concerned that reviewers may not have sufficient resources in terms of capacity or training to prevent the kind of mistake seen in this case.”

Hughes said: “As demonstrated in this report, we have a track record of success in getting Meta to consider how it handles posts in Arabic.

“We’ve succeeded in getting Meta to ensure its community standards are translated into all relevant languages, prioritizing regions where conflict or unrest puts users at most risk of imminent harm. Meta has also agreed to our call to ensure all updates to its policies are translated into all languages.”

These cases illustrate the board’s commitment to bringing about positive change, and to lobbying Meta to do the same, whether that means restoring an improperly deleted post or agreeing to an independent review of a case. But is this enough?

This month, Facebook once again failed a test of its ability to detect obviously unacceptable violent hate speech. The test was carried out by the nonprofit groups Global Witness and Foxglove, which created 12 text-based adverts featuring dehumanizing hate speech that called for the murder of people belonging to Ethiopia’s three main ethnic groups: the Amhara, the Oromo and the Tigrayans. They submitted the adverts to the platform and, despite the clearly objectionable content, Facebook’s systems approved them for publication.

In March, Global Witness ran a similar test using adverts about Myanmar that used similar hate speech. Facebook also failed to detect those. The ads were not actually published on Facebook because Global Witness alerted Meta to the test and the violations the platform had failed to detect.

In another case, the Oversight Board upheld Meta’s initial decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities carried out in the Amhara region of Ethiopia. Meta had restored the post after a user appealed to the board, however, so the board’s ruling meant the company had to remove the content from the platform once again.

In November 2021, Meta announced that it had removed a post by Ethiopia’s prime minister, Abiy Ahmed Ali, in which he urged citizens to rise up and “bury” rival Tigray forces who threatened the country’s capital. His verified Facebook page remains active, however, and has 4.1 million followers.

In addition to its failures over content relating to Myanmar and Ethiopia, Facebook has long been accused by rights activists of suppressing posts by Palestinians.

“Facebook has suppressed content posted by Palestinians and their supporters speaking out about human rights issues in Israel and Palestine,” said Deborah Brown, a senior digital rights researcher and advocate at Human Rights Watch.

During the May 2021 Israeli-Palestinian conflict, Facebook and Instagram removed content posted by Palestinians and posts that expressed support for Palestine. HRW documented several instances of this, including one in which Instagram removed a screenshot of the headlines and photos from three New York Times op-ed articles, to which the user had added a caption that urged Palestinians to “never concede” their rights.

In another instance, Instagram removed a post that included a picture of a building and the caption: “This is a photo of my family’s building before it was struck by Israeli missiles on Saturday, May 15, 2021. We have three apartments in this building.”

Digital rights group Sada Social said that in May 2021 alone it documented more than 700 examples of social media networks removing or restricting access to Palestinian content.

According to HRW, Meta’s acknowledgment of errors that were made and attempts to correct some of them are insufficient and do not address the scale and scope of reported content restrictions, nor do they adequately explain why they occurred in the first place.

Hughes acknowledged that some of the commitments to change made by Meta will take time to implement but added that it is important to ensure that they are “not kicked into the long grass and forgotten about.”

Meta admitted this year in its first Quarterly Update on the Oversight Board that it takes time to implement recommendations “because of the complexity and scale associated with changing how we explain and enforce our policies, and how we inform users of actions we’ve taken and what they can do about it.”

In the meantime, Hughes added: “The Board will continue to play a key role in the collective effort by companies, governments, academia and civil society to shape a brighter, safer digital future that will benefit people everywhere.”

However, the Oversight Board only reviews cases reported by users or by Meta itself. According to some experts, the issues with Meta go far beyond the current scope of the board’s mandate.

“For an oversight board to address these issues (Russian interference in the US elections), it would need jurisdiction not only over personal posts but also political ads,” wrote Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School.

“Beyond that, it would need to be able to not only take down specific pieces of content but also to halt the flow of American consumer data to Russian operatives and change the ways that algorithms privilege contentious content.”

He went on to suggest that the board’s authority should be expanded from content takedowns to include “more critical concerns” such as the company’s data practices and algorithmic decision-making because “no matter where we set the boundaries, Facebook will always want to push them. It knows no other way to maintain its profit margins.”


Getty Images, Shutterstock gear up for AI challenge with $3.7bn merger

Updated 08 January 2025

  • Deal faces potential antitrust scrutiny
  • Merger aims to cut costs and unlock new revenue streams as companies grapple with the rise of generative AI tools

LONDON: Getty Images said on Tuesday it would merge with rival Shutterstock to create a $3.7 billion stock-image powerhouse geared for the artificial intelligence era, in a deal likely to draw antitrust scrutiny.
The companies, two of the largest players in the licensed visual content industry, are betting that the combination will help them cut costs and grow their business by unlocking more revenue opportunities at a time when the growing use of generative AI tools such as Midjourney poses a threat to the industry.
Shutterstock shareholders can opt to receive either $28.80 per share in cash, or 13.67 shares of Getty, or a combination of 9.17 shares of Getty and $9.50 in cash for each Shutterstock share they own. The offer represents a deal value of more than $1 billion, according to Reuters calculations.
Shutterstock’s shares jumped 22.7 percent, while Getty was up 39.7 percent. Stocks of both companies have declined for at least the past four years, as the rising use of mobile cameras drives down demand for stock photography.
Getty CEO Craig Peters will lead the combined company, which will have annual revenues of nearly $2 billion and stands to benefit from Getty’s large library of visual content and the strong community on Shutterstock’s platform.
Peters downplayed the impact of AI on Tuesday and said that he was confident the merger would receive antitrust approval both in the United States and Europe.
“We don’t control the timing of (the approval), but we have a high confidence. This has been a situation where customers have not had choice. They’ve always had choice,” he said.
Some experts say US President-elect Donald Trump’s recent appointments to the Department of Justice Antitrust Division signal that there would be little change to the tough scrutiny that has come to define the regulator in recent years.
“With Gail Slater at the helm, the antitrust division is going to be a lot more aggressive under this Trump administration than it was under the first one,” said John Newman, professor of law at the University of Miami.
Regulators will examine how the deal impacts the old-school business model of selling images to legacy media customers, as well as the new business model of offering copyright-compliant generative-AI applications to the public.
The deal is expected to generate up to $200 million in cost savings three years after its close. Getty investors will own about 54.7 percent of the combined company, while Shutterstock stockholders will own the rest.
Getty competes with Reuters and the Associated Press in providing photos and videos for editorial use.


Israel extends closure of Al Jazeera’s West Bank office

Updated 07 January 2025

  • Israel suspended Al Jazeera’s Ramallah office for 45 days in September on charges of “incitement to and support for terrorism”
  • Announcement comes days after Palestinian Authority also suspended the network’s broadcasts for four months

RAMALLAH, Palestinian Territories: Israeli authorities renewed a closure order for Al Jazeera’s Ramallah office in the occupied West Bank on Tuesday, days after the Palestinian Authority suspended the network’s broadcasts for four months.
An AFP journalist reported that Israeli soldiers posted the extension order Tuesday morning on the entrance of the building housing Al Jazeera’s offices in central Ramallah, a city under full Palestinian Authority security control.
The extension applies from December 22 and lasts 45 days.
In September, Israeli forces raided the Ramallah office and issued an initial 45-day closure order.
At the time, staff were instructed to leave the premises and take their personal belongings.
The move came months after Israel’s government approved a decision in May to ban Al Jazeera from broadcasting from Israel, also closing its offices for an initial 45-day period, which was extended for a fourth time by a Tel Aviv court in September.
Later in September, Israel’s government announced it was revoking the press credentials of Al Jazeera journalists in the country.
Prime Minister Benjamin Netanyahu’s government has long been at odds with Al Jazeera, a dispute that has escalated since the Gaza war began following Hamas’s attack on southern Israel on October 7.
The Israeli army has repeatedly accused the network’s reporters in Gaza of being “terrorist operatives” affiliated with Hamas or Islamic Jihad.
The Qatari channel denies the accusations, and says Israel systematically targets its staff in Gaza.


Meta replaces fact-checking with X-style community notes

Updated 07 January 2025

  • Meta cited bias and excessive content reviews as key factors in ending fact-checking program
  • The social media company also announced plans to allow “more speech” by easing restrictions on discussions of mainstream topics like immigration and gender

LONDON: Facebook and Instagram owner Meta said Tuesday it’s scrapping its third-party fact-checking program and replacing it with a Community Notes program written by users similar to the model used by Elon Musk’s social media platform X.
Starting in the US, Meta will end its fact-checking program with independent third parties. The company said it decided to end the program because expert fact checkers had their own biases and too much content ended up being fact checked.
Instead, it will pivot to a Community Notes model that uses crowdsourced fact-checking contributions from users.
“We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context,” Meta’s Chief Global Affairs Officer Joel Kaplan said in a blog post.
The social media company also said it plans to allow “more speech” by lifting restrictions on some topics that are part of mainstream discussion, in order to focus enforcement on illegal and “high severity violations” like terrorism, child sexual exploitation and drugs.
Meta said that its approach of building complex systems to manage content on its platforms has “gone too far” and has made “too many mistakes” by censoring too much content.
CEO Mark Zuckerberg acknowledged that the changes are in part sparked by political events including Donald Trump’s presidential election victory.
“The recent elections also feel like a cultural tipping point toward once again prioritizing speech,” Zuckerberg said in an online video.
Meta’s quasi-independent Oversight Board, which was set up to act as a referee on controversial content decisions, said it welcomed the changes and looked forward to working with the company “to understand the changes in greater detail, ensuring its new approach can be as effective and speech-friendly as possible.”


India press watchdog demands journalist murder probe

Freelance journalist Mukesh Chandrakar. (Supplied)
Updated 06 January 2025

  • Chandrakar’s body was found on January 3 after police tracked his mobile phone records following his family reporting him missing

NEW DELHI: India’s media watchdog has demanded a thorough investigation after a journalist’s battered body was found stuffed in a septic tank covered with concrete.
Freelance journalist Mukesh Chandrakar, 28, had reported widely on corruption and a decades-old Maoist insurgency in India’s central Chhattisgarh state, and ran a popular YouTube channel “Bastar Junction.”
The Press Council of India expressed “concern” over the suspected murder of Chandrakar, calling for a report on the “facts of the case” in a statement late Saturday.
Chandrakar’s body was found on January 3 after police tracked his mobile phone records following his family reporting him missing.
Three people have been arrested.
More than 10,000 people have died in the decades-long insurgency waged by Naxalite rebels, who say they are fighting for the rights of marginalized indigenous people in India’s resource-rich central regions.
Vishnu Deo Sai, chief minister of Chhattisgarh from the ruling Bharatiya Janata Party (BJP), called Chandrakar’s death “heartbreaking” and promised the “harshest punishment” for those found responsible.
India was ranked 159 last year on the World Press Freedom Index, run by Reporters Without Borders.


Washington Post cartoonist quits after paper rejects sketch of Bezos bowing to Trump

Updated 05 January 2025

  • Ann Telnaes said that she’s never before had a cartoon rejected because of its inherent messaging and that such a move is dangerous for a free press
  • Washington Post editor says the cartoon was rejected only to avoid repetition, because the paper had just published a column on the same topic

A cartoonist has decided to quit her job at the Washington Post after an editor rejected her sketch of the newspaper’s owner and other media executives bowing before President-elect Donald Trump.
Ann Telnaes posted a message Friday on the online platform Substack saying that she drew a cartoon showing a group of media executives bowing before Trump while offering him bags of money, including Post owner and Amazon founder Jeff Bezos.
Telnaes wrote that the cartoon was intended to criticize “billionaire tech and media chief executives who have been doing their best to curry favor with incoming President-elect Trump.” Several executives, Bezos among them, have been spotted at Trump’s Florida club Mar-a-Lago. She accused them of having lucrative government contracts and working to eliminate regulations.
Telnaes said that she’s never before had a cartoon rejected because of its inherent messaging and that such a move is dangerous for a free press.
“As an editorial cartoonist, my job is to hold powerful people and institutions accountable,” Telnaes wrote. “For the first time, my editor prevented me from doing that critical job. So I have decided to leave the Post. I doubt my decision will cause much of a stir and that it will be dismissed because I’m just a cartoonist. But I will not stop holding truth to power through my cartooning, because as they say ‘Democracy dies in darkness.’”
The Association of American Editorial Cartoonists issued a statement Saturday accusing the Post of “political cowardice” and asking other cartoonists to post Telnaes’ sketch with the hashtag #StandWithAnn in a show of solidarity.
“Tyranny ends at pen point,” the association said. “It thrives in the dark, and the Washington Post simply closed its eyes and gave in like a punch-drunk boxer.”
The Post’s communications director, Liza Pluto, provided The Associated Press on Saturday with a statement from David Shipley, the newspaper’s editorial page editor. Shipley said in the statement that he disagrees with Telnaes’ “interpretation of events.”
He said he decided to nix the cartoon because the paper had just published a column on the same topic as the cartoon and was set to publish another.
“Not every editorial judgment is a reflection of a malign force. ... The only bias was against repetition,” Shipley said.