Meta’s Oversight Board issued 20 decisions in its first year. Is that enough?

Updated 02 July 2022

  • Board shows commitment to bringing about positive change, and to lobbying Meta to do the same. But is this enough?
  • The first annual report from the independent review body, which is funded by Meta, explains the reasoning behind its 20 rulings and the 86 recommendations it has made

DUBAI: Meta’s Oversight Board has published its first annual report. Covering the period from October 2020 to December 2021, it describes the work the board has carried out in relation to how Meta, the company formerly known as Facebook, treats its users and their content, and the work that remains to be done.

The board is an independent body set up and funded by Meta to review content and content-moderation policies on Facebook and Instagram. It considers concerns raised by Meta itself and by users who have exhausted the company’s internal appeals process. It can recommend policy changes and make decisions that overrule the company’s decisions.

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company.

“Through our first Annual Report, we’re able to demonstrate the significant impact the board has had on pushing Meta to become more transparent in its content policies and fairer in its content decisions,” Thomas Hughes, the board’s director, told Arab News.

One of the cases the board considered involved a post that appeared on media organization Al Jazeera Arabic’s verified page in May 2021 and was subsequently shared by a Facebook user in Egypt. It consisted of Arabic text and a photo showing two men, their faces covered, wearing camouflage and headbands featuring the insignia of the Palestinian Al-Qassam Brigades.

The text read: “The resistance leadership in the common room gives the occupation a respite until 6 p.m. to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighborhood, otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman.”

The user who shared the post commented on it in Arabic by adding the word “ooh.”

Meta initially removed the post because Al-Qassam Brigades and its spokesperson, Abu Ubaida, are designated under Facebook’s Dangerous Individuals and Organizations community standard. However, it restored the post based on a ruling by the board.

The board noted in its report that while the community standard clearly prohibits “channeling information or resources, including official communications, on behalf of a designated entity,” there is an exception to this rule for content published as “news reporting.” It added that the content in this case was a “reprint of a widely republished news report” by Al Jazeera and did not include any major changes beyond the “addition of the non-substantive comment, ‘ooh.’”

Meta was unable to explain why two of its reviewers judged the content to be in violation of the platform’s content policies but noted that moderators are not required to record their reasoning for individual content decisions.

According to the report, the case also highlights the board’s objective of ensuring users are treated fairly because “the post, consisting of a republication of a news item from a legitimate outlet, was treated differently from content posted by the news organization itself.”

Based on allegations that Facebook was censoring Palestinian content, the board asked the platform a number of questions, including whether it had received any requests from Israel to remove content related to the 2021 Israeli-Palestinian conflict.

In response, Facebook said that it had not received any valid legal requests from a government authority related to the user’s content in this case. However, it declined to provide any other requested information.

The board therefore recommended an independent review of these issues, as well as greater transparency about how Facebook responds to government requests.

“Following recommendations we issued after a case decision involving Israel/Palestine, Meta is conducting a review, using an independent body, to determine whether Facebook’s content-moderation community standards in Arabic and Hebrew are being applied without bias,” said Hughes.

In another case, the Oversight Board overturned Meta’s decision to remove an Instagram post by a public account that allows the discussion of queer narratives in Arabic culture. The post consisted of a series of pictures with a caption, in Arabic and English, explaining how each picture illustrated a different word that can be used in a derogatory way in the Arab world to describe men with “effeminate mannerisms.”

Meta removed the content for violating its hate speech policies but restored it when the user appealed. However, it later removed the content a second time for violating the same policies, after other users reported it.

According to the board, this was a “clear error, which was not in line with Meta’s hate speech policy.” It said that while the post does contain terms that are considered slurs, it falls under an exception for speech that is “used self-referentially or in an empowering way,” and also under an exception that allows hate speech to be quoted in order to “condemn it or raise awareness.”

Each time the post was reported, a different moderator reviewed it. The board was, therefore, “concerned that reviewers may not have sufficient resources in terms of capacity or training to prevent the kind of mistake seen in this case.”

Hughes said: “As demonstrated in this report, we have a track record of success in getting Meta to consider how it handles posts in Arabic.

“We’ve succeeded in getting Meta to ensure its community standards are translated into all relevant languages, prioritizing regions where conflict or unrest puts users at most risk of imminent harm. Meta has also agreed to our call to ensure all updates to its policies are translated into all languages.”

These cases illustrate the board’s commitment to bringing about positive change, and to lobbying Meta to do the same, whether that means restoring an improperly deleted post or agreeing to an independent review of a case. But is this enough?

This month, Facebook once again failed a test of its ability to detect obviously unacceptable violent hate speech. The test was carried out by the nonprofit groups Global Witness and Foxglove, which created 12 text-based adverts featuring dehumanizing hate speech that called for the murder of people belonging to Ethiopia’s three main ethnic groups — the Amhara, the Oromo and the Tigrayans — and submitted them to the platform. Despite the clearly objectionable content, Facebook’s systems approved the adverts for publication.

In March, Global Witness ran a similar test using adverts about Myanmar that contained comparable hate speech, which Facebook also failed to detect. The ads were never actually published on Facebook because Global Witness alerted Meta to the test and to the violations the platform had failed to detect.

In another case, the Oversight Board upheld Meta’s original decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities carried out in the Amhara region of Ethiopia. Meta had restored the post after a user appealed to the board, so the board’s ruling meant the company had to remove the content from the platform once again.

In November 2021, Meta announced that it had removed a post by Ethiopia’s prime minister, Abiy Ahmed Ali, in which he urged citizens to rise up and “bury” rival Tigray forces who threatened the country’s capital. His verified Facebook page remains active, however, and has 4.1 million followers.

In addition to its failures over content relating to Myanmar and Ethiopia, Facebook has long been accused by rights activists of suppressing posts by Palestinians.

“Facebook has suppressed content posted by Palestinians and their supporters speaking out about human rights issues in Israel and Palestine,” said Deborah Brown, a senior digital rights researcher and advocate at Human Rights Watch.

During the May 2021 Israeli-Palestinian conflict, Facebook and Instagram removed content posted by Palestinians and posts that expressed support for Palestine. HRW documented several instances of this, including one in which Instagram removed a screenshot of the headlines and photos from three New York Times op-ed articles, to which the user had added a caption that urged Palestinians to “never concede” their rights.

In another instance, Instagram removed a post that included a picture of a building and the caption: “This is a photo of my family’s building before it was struck by Israeli missiles on Saturday, May 15, 2021. We have three apartments in this building.”

Digital rights group Sada Social said that in May 2021 alone it documented more than 700 examples of social media networks removing or restricting access to Palestinian content.

According to HRW, Meta’s acknowledgment of errors and its attempts to correct some of them are insufficient: they do not address the scale and scope of the reported content restrictions, nor do they adequately explain why the restrictions occurred in the first place.

Hughes acknowledged that some of the commitments to change made by Meta will take time to implement but added that it is important to ensure that they are “not kicked into the long grass and forgotten about.”

Meta admitted this year in its first Quarterly Update on the Oversight Board that it takes time to implement recommendations “because of the complexity and scale associated with changing how we explain and enforce our policies, and how we inform users of actions we’ve taken and what they can do about it.”

In the meantime, Hughes added: “The Board will continue to play a key role in the collective effort by companies, governments, academia and civil society to shape a brighter, safer digital future that will benefit people everywhere.”

However, the Oversight Board only reviews cases reported by users or by Meta itself. According to some experts, the issues with Meta go far beyond the current scope of the board’s mandate.

“For an oversight board to address these issues (Russian interference in the US elections), it would need jurisdiction not only over personal posts but also political ads,” wrote Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School.

“Beyond that, it would need to be able to not only take down specific pieces of content but also to halt the flow of American consumer data to Russian operatives and change the ways that algorithms privilege contentious content.”

He went on to suggest that the board’s authority should be expanded from content takedowns to include “more critical concerns” such as the company’s data practices and algorithmic decision-making because “no matter where we set the boundaries, Facebook will always want to push them. It knows no other way to maintain its profit margins.”


‘Offensive’ Muslim fintech ads banned in UK for showing burning banknotes

Updated 08 January 2025

  • Posters by Wahed Invest were banned by Advertising Standards Authority after agency received 75 complaints

LONDON: Adverts by Muslim fintech company Wahed Invest have been banned in the UK for featuring burning banknotes, which the country’s advertising watchdog deemed “offensive.”

The New York-based investment platform, which targets the Muslim community, ran a series of posters across London’s transport system in September and October.

The ads showed US dollar and euro banknotes on fire alongside slogans such as “Join the money revolution” and “Withdraw from Riba” — a term referring to the Islamic prohibition of interest.

The Advertising Standards Authority said it received 75 complaints that the ads were offensive.

“The ads represented the expression that viewers’ money was ‘going up in flames’ and that images of burning money were commonly encountered,” the ASA said in a statement.

“However, regardless of whether viewers would have understood that message or understood it as a defiant act designed to show a challenge to financial institutions, the currencies which were burned in all of the ads were clearly visible as US dollar and euro banknotes.”

The ads also featured images of Muslim preacher Ismail ibn Musa Menk and the Russian former professional mixed martial artist Khabib Abdulmanapovich Nurmagomedov.

Three of the posters showed Menk holding an open briefcase filled with US dollar and euro banknotes on fire, with two of them stating “Withdraw from Exploitation.”

Wahed defended the campaign, explaining that the burning banknotes symbolized money “going up in flames” due to inflation outpacing savings growth.

The company, which describes itself as an investment platform that allows predominantly Muslim consumers to invest in a manner aligned with their faith and values, launched in the US in 2017 and is backed by the oil company Saudi Aramco and the French footballer Paul Pogba.

Wahed acknowledged that the currencies depicted in the ads could be viewed as symbols of national identity but argued that the imagery of burning money was a powerful reference to hyperinflation, a concept often depicted in popular culture through film and television.

A spokesperson added: “We understand that visuals like those included in our campaign can elicit strong reactions.

“While our intention was to spark thought and awareness, we recognize the importance of ensuring that messaging resonates positively with the diverse audiences that may consume them.”

The ASA said that the adverts would have been seen by many people, including people from the US and eurozone countries, who “would have viewed their nation’s currency as being culturally significant.

“Although we acknowledged Wahed Invest’s view that they had not directly criticized a specific group, and that depictions of burning banknotes were commonly encountered, we considered the burning of banknotes would have caused serious offense to some viewers,” the regulator said.

“We therefore concluded that the ads were likely to cause serious offense.”


Jailed Italian reporter in Tehran freed, says Italy

Updated 08 January 2025

ROME: An Italian journalist arrested in Iran and jailed for three weeks has been freed and is returning to Italy, Prime Minister Giorgia Meloni’s office said on Wednesday.
“The plane taking journalist Cecilia Sala home took off from Tehran a few minutes ago” following “intense work through diplomatic and intelligence channels,” Meloni’s office said in a statement.
“Our compatriot has been released by the Iranian authorities and is on her way back to Italy. Prime Minister Giorgia Meloni expresses her gratitude to all those who helped make Cecilia’s return possible, allowing her to re-embrace her family and colleagues,” her office said.
Meloni personally informed Sala’s parents of her release by telephone, it added.
Sala, 29, was arrested on December 19, soon after the United States and Italy arrested two Iranian nationals over export violations linked to a deadly attack on American servicemen.
The journalist, who writes for the Italian daily Il Foglio and is the host of a news podcast produced by Chora Media, was kept in isolation in Tehran’s Evin prison.
Sala told her family she was forced to sleep on the floor in a cell with the lights permanently on.
Italy and Iran summoned each other’s ambassadors last week after Rome warned that efforts to secure her release were complicated.
Sala traveled to Iran on December 13 on a journalist’s visa. She was arrested six days later for “violating the law of the Islamic Republic of Iran,” said the country’s culture ministry, which oversees and accredits foreign journalists.
She had been due to return home the following day.
On Monday, Iran denied any link between Sala’s arrest and that of Iranian national Mohammad Abedini, detained in Italy in December at the behest of the United States over export violations linked to a deadly attack on US servicemen.


Surge in Telegram user data passed to French authorities

Updated 08 January 2025

  • Pavel Durov was arrested in Paris in August, where he was held for four days before being charged with various crimes, mostly linked to control of criminal content on Telegram

PARIS: Messaging service Telegram passed vastly more data on its users to French authorities in the second half of 2024 following founder Pavel Durov’s arrest in Paris, figures published by the platform showed.
The company said it handed over IP addresses or telephone numbers that Paris asked for in 210 cases in July-September and 673 in October-December.
That was up from just four in the first quarter and six in the second.
Some 2,072 users were affected by French requests for user data — again massively weighted toward the second half of 2024, with more than half in the fourth quarter alone.
Pavel Durov was arrested in Paris in August, where he was held for four days before being charged with various crimes, mostly linked to control of criminal content on Telegram.
He and his supporters have claimed that most French and European authorities’ requests for user data were simply not being sent to the right department at the company and therefore received no response.
Durov, who holds Russian, French and United Arab Emirates passports, has been barred from leaving French soil since he was charged.
That has not stopped Telegram from issuing updates to its moderation rules intended to boost cooperation with investigators.
A source familiar with Durov’s case told AFP in December that the platform was responding more frequently to requests from the judicial system from both France and other countries.


Getty Images, Shutterstock gear up for AI challenge with $3.7bn merger

Updated 08 January 2025

  • Deal faces potential antitrust scrutiny
  • Merger aims to cut costs and unlock new revenue streams as companies grapple with the rise of generative AI tools

LONDON: Getty Images said on Tuesday it would merge with rival Shutterstock to create a $3.7 billion stock-image powerhouse geared for the artificial intelligence era, in a deal likely to draw antitrust scrutiny.
The companies, two of the largest players in the licensed visual content industry, are betting that the combination will help them cut costs and grow their business by unlocking more revenue opportunities at a time when the growing use of generative AI tools such as Midjourney poses a threat to the industry.
Shutterstock shareholders can opt to receive either $28.80 per share in cash, or 13.67 shares of Getty, or a combination of 9.17 shares of Getty and $9.50 in cash for each Shutterstock share they own. The offer represents a deal value of more than $1 billion, according to Reuters calculations.
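
The three options are intended to be roughly equivalent in value. As a quick illustrative check, using only the per-share figures quoted above (this back-of-the-envelope calculation is not part of the companies’ announcement), the Getty share value implied by equating the cash and all-stock options also prices the mixed option at close to the $28.80 cash figure:

    # Back-of-the-envelope check of the Shutterstock consideration options.
    # Figures are taken from the terms quoted in this article; the implied
    # Getty share value is an assumption derived by equating the first two options.
    cash_only = 28.80                      # all-cash option, $ per Shutterstock share
    stock_only = 13.67                     # all-stock option, Getty shares per Shutterstock share
    mixed_shares, mixed_cash = 9.17, 9.50  # mixed option

    implied_getty_price = cash_only / stock_only            # roughly $2.11 per Getty share
    mixed_value = mixed_shares * implied_getty_price + mixed_cash

    print(f"Implied Getty share value: ${implied_getty_price:.2f}")
    print(f"Mixed option value:        ${mixed_value:.2f} (vs ${cash_only:.2f} in cash)")

On those numbers the mixed option works out to about $28.82 per Shutterstock share, broadly in line with the cash alternative.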
Shutterstock’s shares jumped 22.7 percent, while Getty was up 39.7 percent. Stocks of both companies have declined for at least the past four years, as the rising use of mobile cameras drives down demand for stock photography.
Getty CEO Craig Peters will lead the combined company, which will have annual revenues of nearly $2 billion and stands to benefit from Getty’s large library of visual content and the strong community on Shutterstock’s platform.
Peters downplayed the impact of AI on Tuesday and said that he was confident the merger would receive antitrust approval both in the United States and Europe.
“We don’t control the timing of (the approval), but we have a high confidence. This has never been a situation where customers have not had choice. They’ve always had choice,” he said.
Some experts say US President-elect Donald Trump’s recent appointments to the Department of Justice Antitrust Division signal that there would be little change to the tough scrutiny that has come to define the regulator in recent years.
“With Gail Slater at the helm, the antitrust division is going to be a lot more aggressive under this Trump administration than it was under the first one,” said John Newman, professor of law at the University of Miami.
Regulators will examine how the deal impacts the old-school business model of selling images to legacy media customers, as well as the new business model of offering copyright-compliant generative-AI applications to the public.
The deal is expected to generate up to $200 million in cost savings three years after its close. Getty investors will own about 54.7 percent of the combined company, while Shutterstock stockholders will own the rest.
Getty competes with Reuters and the Associated Press in providing photos and videos for editorial use.


Israel extends closure of Al Jazeera’s West Bank office

Updated 07 January 2025

  • Israel suspended Al Jazeera’s Ramallah office for 45 days in September on charges of “incitement to and support for terrorism”
  • Announcement comes days after Palestinian Authority also suspended the network’s broadcasts for four months

RAMALLAH, Palestinian Territories: Israeli authorities renewed a closure order for Al Jazeera’s Ramallah office in the occupied West Bank on Tuesday, days after the Palestinian Authority suspended the network’s broadcasts for four months.
An AFP journalist reported that Israeli soldiers posted the extension order Tuesday morning on the entrance of the building housing Al Jazeera’s offices in central Ramallah, a city under full Palestinian Authority security control.
The extension applies from December 22 and lasts 45 days.
In September, Israeli forces raided the Ramallah office and issued an initial 45-day closure order.
At the time, staff were instructed to leave the premises and take their personal belongings.
The move came months after Israel’s government approved a decision in May to ban Al Jazeera from broadcasting from Israel, also closing its offices for an initial 45-day period, which was extended for a fourth time by a Tel Aviv court in September.
Later in September, Israel’s government announced it was revoking the press credentials of Al Jazeera journalists in the country.
Prime Minister Benjamin Netanyahu’s government has long been at odds with Al Jazeera, a dispute that has escalated since the Gaza war began following Hamas’s attack on southern Israel on October 7.
The Israeli army has repeatedly accused the network’s reporters in Gaza of being “terrorist operatives” affiliated with Hamas or Islamic Jihad.
The Qatari channel denies the accusations, and says Israel systematically targets its staff in Gaza.