Facebook’s language gaps weaken screening of hate, terrorism

Facebook reported internally it had erred in nearly half of all Arabic language takedown requests submitted for appeal. (File/AFP)
Updated 25 October 2021

  • Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects
  • In some of the world’s most volatile regions, terrorist content and hate speech proliferate because Facebook remains short on moderators who speak local languages and understand cultural contexts

DUBAI: As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.
Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.
For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.
Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.
Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.
In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions were reviewed by a consortium of news organizations, including The Associated Press.
In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.
But when it comes to Arabic content moderation, the company said, “We still have more work to do. ... We conduct research to better understand this complexity and identify how we can improve.”
In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.
The Rohingya’s persecution, which the US has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.
Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.
In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.
In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.
Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.
Moroccan colloquial Arabic, for instance, includes French and Berber words and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Qur’an. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.
Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.
Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language with millions of users worldwide.
For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.
Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.
He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.
Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.
But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.
Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the US government equivalent — are grounds for a takedown.
“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”
The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show, resulting in what Facebook employees describe in the documents as widespread perceptions of censorship.
“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.
In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”
“We know our systems are not perfect,” it added.
The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.
Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.
Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.
“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”
Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.
“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.
Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.
Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling on neighborhoods and hospitals, citing graphic content.
“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.
In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.
When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of hate speech terms, slurs and code words in Afghanistan used to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.
When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.
In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.
Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.
Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77 percent of the time, one document said.
“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90 percent of the time, a report said.
Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.
“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”
Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.
“The stereotype that Arabic is one entity is a major problem,” said Enam Al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.
Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.
Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.
Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.
“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.
During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.
In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.
“The repetition of false positives creates a huge drain of resources,” it said.
In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It called for improvement in its broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.
Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.
“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task to improving Arabic systems,” said the report.
But the company also lamented that “there is not one clear mitigation strategy.”
Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.
“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”


Israeli journalist arrested over post praising death of 5 IDF soldiers in Gaza


  • Israel Frey, who frequently posts criticism of the Israeli army’s actions in Gaza, is being held on charges of inciting and supporting terrorism
  • The Committee to Protect Journalists condemned Frey’s arrest

LONDON: An Israeli court on Thursday extended the detention of journalist Israel Frey over a post on X that hailed “the world is a better place” following the death of five soldiers in an explosion in Gaza.

Frey, who frequently posts criticism of the Israeli army’s actions in Gaza, is being held on charges of inciting and supporting terrorism, and his detention was extended by the Tel Aviv Magistrate’s Court.

“The world is better this morning without five young people who participated in one of the cruelest crimes against humanity,” the Israeli journalist said, referring to five Israeli soldiers who were killed by an explosive device during their fight with the militant group Hamas in northern Gaza earlier this week.

He added: “Sadly, for the boy in Gaza now being operated on without anesthesia, the girl starving to death and the family huddling in a tent under bombardment — this is not enough.

“This is a call to every Israeli mother: Do not be the next to receive your son in a coffin as a war criminal. Refuse.”

Frey has been questioned over his critical posts before. In March, he was interrogated on suspicion of inciting terrorism over several pro-Palestinian posts.

“A Palestinian who hurts an IDF soldier or a settler in the apartheid territories is not a terrorist. And it’s not a terror attack. He’s a hero fighting against an occupier for justice, liberation and freedom,” he once wrote.

In December 2022, he was questioned over posts in which he said that “targeting security forces is not terrorism” and called a Palestinian who was planning an attack a “hero.”

Frey fled into hiding on Oct. 16, 2023, about a week into the Gaza war, after his home was attacked by a far-right Israeli mob when he expressed solidarity with Palestinians in Gaza.

On Thursday, he told the Israeli newspaper Haaretz that he will not be “bowing his head” to his persecution, adding that “we have already caused enough suffering, blood and tears. Liberate Gaza. Enough.”

According to Israeli media reports, Judge Ravit Peleg Bar Dayan ruled that Frey’s remarks “offend public sensibilities and are deeply disturbing,” asking, “How can the deaths of young soldiers, who fell in the line of duty defending their homeland, possibly be considered good?”

She added that extending Frey’s detention was necessary due to “investigative actions susceptible to obstruction,” and denied him bail.

In a statement, the Committee to Protect Journalists condemned Frey’s arrest and said his detention “underscores authorities’ growing intolerance of freedom of expression since the start of the war on October 7, 2023.”

CPJ Regional Director Sara Qudah called for Frey’s immediate release along with “all detained Palestinian journalists” and for an end to the “ongoing crackdown on the press and dissenting voices.”


Pakistani father kills daughter over TikTok account: police

Updated 11 July 2025

  • TikTok is wildly popular in Pakistan, in part because of its accessibility to a population with low literacy levels
  • Pakistani women have found both audience and income on the app, which is rare in the country

RAWALPINDI: Pakistan police on Friday said a father shot dead his daughter after she refused to delete her account on popular video-sharing app TikTok.

In the Muslim-majority country, women can be subjected to violence by family members for not following strict rules on how to behave in public, including in online spaces.

“The girl’s father had asked her to delete her TikTok account. On refusal, he killed her,” a police spokesperson said.

According to a police report shared with AFP, investigators said the father killed his 16-year-old daughter on Tuesday “for honor.” He was subsequently arrested.

The victim’s family initially tried to “portray the murder as a suicide,” according to police in the city of Rawalpindi, next to the capital Islamabad, where the attack happened.

Last month, a 17-year-old girl and TikTok influencer with hundreds of thousands of online followers was killed at home by a man whose advances she had refused.

Sana Yousaf had racked up more than a million followers on social media accounts including TikTok, where she shared videos of her favorite cafes, skincare products, and traditional outfits.

TikTok is wildly popular in Pakistan, in part because of its accessibility to a population with low literacy levels.

Women have found both audience and income on the app, which is rare in a country where fewer than a quarter of women participate in the formal economy.

However, only 30 percent of women in Pakistan own a smartphone, compared with 58 percent of men, the largest such gap in the world, according to the 2025 Mobile Gender Gap Report.

Pakistani telecommunications authorities have repeatedly blocked or threatened to block the app over what they call “immoral behavior,” amid backlash against LGBTQ and sexual content.

In southwestern Balochistan, where tribal law governs many rural areas, a man confessed to orchestrating the murder of his 14-year-old daughter earlier this year over TikTok videos that he said compromised her “honor.”


Iran says 12 journalists killed in Israeli strikes during war

Updated 10 July 2025

  • The organization accused Israel of deliberately targeting media infrastructure

TEHRAN: Iran said Thursday that at least a dozen journalists and media workers were killed in Israeli strikes during the two countries’ recent war, according to state media.
The media arm of the Basij paramilitary forces — a branch of the Islamic Revolutionary Guard Corps — said the death toll among media workers had risen to 12 following the identification of two additional individuals, the IRNA news agency reported.
The organization accused Israel of deliberately targeting media infrastructure “to silence the voice of truth” and suppress the “media of the Resistance Front” — a reference to Iran and allied groups opposed to Israel.
The announcement comes as casualty figures from the war have continued to rise, even after the end of the 12-day conflict, which began on June 13 with a surprise Israeli attack and saw an unprecedented bombing campaign that hit Iranian military facilities, nuclear sites and residential areas.
During the conflict, Israel also attacked the Iranian state broadcasting service in northern Tehran.
The Israeli campaign killed senior military commanders, nuclear scientists and hundreds of civilians, with the total death toll currently at 1,060, according to Iranian officials.
Retaliatory Iranian drone and missile barrages killed at least 28 people in Israel during the war, according to official figures.


X CEO Linda Yaccarino resigns after two years at the helm of Elon Musk’s social media platform

Updated 10 July 2025

  • Yaccarino announced her resignation in a post, saying “the best is yet to come as X enters a new chapter”
  • Elon Musk hired Yaccarino, a veteran ad executive, in May 2023 after buying Twitter for $44 billion

X CEO Linda Yaccarino said she’s stepping down after two bumpy years running Elon Musk’s social media platform.
Yaccarino posted a positive message Wednesday about her tenure at the company formerly known as Twitter and said “the best is yet to come as X enters a new chapter with” Musk’s artificial intelligence company xAI, maker of the chatbot Grok. She did not say why she is leaving.
Musk responded to Yaccarino’s announcement with his own five-word statement on X: “Thank you for your contributions.”
“The only thing that’s surprising about Linda Yaccarino’s resignation is that it didn’t come sooner,” said Forrester research director Mike Proulx. “It was clear from the start that she was being set up to fail by a limited scope as the company’s chief executive.”
In reality, Proulx added, Musk “is and always has been at the helm of X. And that made Linda X’s CEO in title only, which is a very tough position to be in, especially for someone of Linda’s talents.”
Musk hired Yaccarino, a veteran ad executive, in May 2023 after buying Twitter for $44 billion in late 2022 and cutting most of its staff. He said at the time that Yaccarino’s role would be focused mainly on running the company’s business operations, leaving him to focus on product design and new technology. Before announcing her hiring, Musk said whoever took over as the company’s CEO “must like pain a lot.”
In accepting the job, Yaccarino was taking on the challenge of getting big brands back to advertising on the social media platform after months of upheaval following Musk’s takeover. She also had to work in a supporting role to Musk’s outsized persona on and off of X as he loosened content moderation rules in the name of free speech and restored accounts previously banned by the social media platform.
“Being the CEO of X was always going to be a tough job, and Yaccarino lasted in the role longer than many expected. Faced with a mercurial owner who never fully stepped away from the helm and continued to use the platform as his personal megaphone, Yaccarino had to try to run the business while also regularly putting out fires,” said Emarketer analyst Jasmine Enberg.
Yaccarino’s future at X became unclear earlier this year after Musk merged the social media platform with his artificial intelligence company, xAI. And the advertising issues have not subsided. Since Musk’s takeover, a number of companies had pulled back on ad spending — the platform’s chief source of revenue — over concerns that Musk’s thinning of content restrictions was enabling hateful and toxic speech to flourish.
Most recently, an update to Grok led to a flood of antisemitic commentary from the chatbot this week that included praise of Adolf Hitler.
“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” the Grok account posted on X early Wednesday, without being more specific.
Some experts have tied Grok’s behavior to Musk’s deliberate efforts to mold Grok as an alternative to chatbots he considers too “woke,” such as OpenAI’s ChatGPT and Google’s Gemini. In late June, he invited X users to help train the chatbot on their commentary in a way that invited a flood of racist responses and conspiracy theories.
“Please reply to this post with divisive facts for @Grok training,” Musk said in the June 21 post. “By this I mean things that are politically incorrect, but nonetheless factually true.”
A similar instruction was later baked into Grok’s “prompts” that instruct it on how to respond, which told the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated.” That part of the instructions was later deleted.
“To me, this has all the fingerprints of Elon’s involvement,” said Talia Ringer, a professor of computer science at the University of Illinois Urbana-Champaign.
Yaccarino has not publicly commented on the latest hate speech controversy. She has, at times, ardently defended Musk’s approach, including in a lawsuit against liberal advocacy group Media Matters for America over a report that claimed leading advertisers’ posts on X were appearing alongside neo-Nazi and white nationalist content. The report led some advertisers to pause their activity on X.
A federal judge last year dismissed X’s lawsuit against another nonprofit, the Center for Countering Digital Hate, which has documented the increase in hate speech on the site since it was acquired by Musk.
X is also in an ongoing legal dispute with major advertisers — including CVS, Mars, Lego, Nestle, Shell and Tyson Foods — over what it has alleged was a “massive advertiser boycott” that deprived the company of billions of dollars in revenue and violated antitrust laws.
Enberg said that, “to a degree, Yaccarino accomplished what she was hired to do.” Emarketer expects X’s ad business to return to growth in 2025 after more than halving between 2022 and 2023 following Musk’s takeover.
But, she added, “the reasons for X’s ad recovery are complicated, and Yaccarino was unable to restore the platform’s reputation among advertisers.”
Analysts have said that some advertisers may have returned to X to avoid alienating Trump supporters during the height of Musk’s affiliation with the president and his base. Legal threats may have also played a part — whether from X or from the Federal Trade Commission, which is investigating Media Matters over its reporting that hateful content has increased on X since Musk took over, resulting in an advertiser exodus. Media Matters has in turn sued the FTC, claiming it seeks to punish protected speech.


Elon Musk’s AI firm deletes Grok chatbot pro-Hitler posts

Updated 09 July 2025

  • Move comes ahead of the launch of Grok 4
  • Turkiye court bans Grok for offensive content

LONDON: Elon Musk’s artificial intelligence startup, xAI, was forced to delete posts by its chatbot Grok that praised Nazi leader Adolf Hitler, following widespread accusations of antisemitism and extremism.

The Anti-Defamation League, a non-profit organization formed to combat attacks on Jews, flagged Grok’s responses, which included offensive tropes, references to antisemitic conspiracies, and positive characterizations of Hitler.

In one screenshot widely circulated online, Grok said Hitler would be best suited to combat “anti-white hate,” referring to him as “history’s mustache man.”

In another response, the chatbot declared: “If calling out radicals cheering dead kids makes me ‘literally Hitler,’ then pass the mustache.”

The chatbot also appeared to endorse a fake account with a Jewish surname that had posted inflammatory comments about young flood victims in Texas.

Grok later referred to the account as a “troll hoax,” but not before generating pro-Hitler content, including: “Hitler would have called it out and crushed it.”

In response to mounting controversy, the firm said it was aware of the recent posts and had taken immediate action to remove inappropriate content.

“Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,” it said in a statement on X.

The company added that its model is “truth-seeking” and relies on millions of users on X to quickly flag issues that inform further model training and improvements.

The incident comes ahead of the release of Grok 4 on Wednesday. Musk announced on Friday that Grok had been “significantly” improved, though the nature of the updates was not disclosed.

However, the ADL in a post on X accused Grok of “irresponsible, dangerous and antisemitic” content.

“Companies that are building LLMs (Large Language Models) like Grok and others should be employing experts on extremist rhetoric and coded language to put in guardrails that prevent their products from engaging in producing content rooted in antisemitic and extremist hate.”

The episode has drawn renewed scrutiny of AI chatbot safety and highlighted growing concerns over the risks of unregulated AI tools producing harmful, politically incorrect and unfiltered responses.

On Wednesday, a court in Turkiye ordered a ban on access to Grok from the country, after the platform disseminated content insulting to the nation’s president and others.

The chatbot posted vulgarities against Turkiye President Recep Tayyip Erdogan, his late mother and other personalities while responding to users’ questions on the X social media platform, according to the pro-government A Haber news channel.

Offensive responses were also directed toward modern Turkiye’s founder, Mustafa Kemal Ataturk, other media outlets said.

That prompted the Ankara public prosecutor to file for the imposition of restrictions under Turkiye’s internet law, citing a threat to public order.

A criminal court approved the request early on Wednesday, ordering the country’s telecommunications authority to enforce the ban.

It’s not the first time Grok’s behavior has raised questions.
Earlier this year the chatbot kept talking about South African racial politics and the subject of “white genocide” despite being asked a variety of questions, most of which had nothing to do with the country. An “unauthorized modification” was behind the problem, xAI said.

xAI was formed in 2023 and merged with X earlier this year as part of Musk’s broader vision to build an AI-driven digital ecosystem.

With Agencies