Were Facebook and Twitter partners in the Christchurch massacre?

People stand across the road from one of the targeted mosques in Christchurch. (AP)
Updated 16 March 2019

  • Social media slammed for providing platform for New Zealand terror
  • Facebook, Twitter say they acted quickly but critics say they aren't doing enough

DUBAI: Social media giants Twitter and Facebook have responded to criticism in the wake of Friday’s mass shootings at mosques in New Zealand, after the deadly terrorist attacks were live-streamed on platforms that collectively have billions of users worldwide.

On Friday, 49 people were killed in shootings at two mosques in central Christchurch, in an attack that saw one of the perpetrators film himself firing at worshippers and live-stream the attack in a 17-minute video on Facebook, in addition to posting a lengthy manifesto on a Twitter account detailing the racist motivations for the attack.

Social media platforms scrambled to remove video of the shootings from Facebook, Twitter and Instagram in the wake of the attack, described as “an extraordinary and unprecedented act of violence” by the country’s Prime Minister Jacinda Ardern.

A spokesperson for Twitter told Arab News that it had suspended the account in question and was “proactively working to remove the video content from the service.” Both, it said, violate the platform’s strict policies.

"We are deeply saddened to hear of the shootings in Christchurch,” the spokesperson said. "Twitter has rigorous processes and a dedicated team in place for managing emergency situations such as this. We will also cooperate with law enforcement to facilitate their investigations as required.”

Facebook also said in a statement that it had removed the footage and was pulling down posts expressing "praise or support" for the shootings. It also said it alerts authorities to threats of violence, or to violence itself, as soon as it becomes aware of them through reports or its own tools. The gunman who opened fire inside one of the New Zealand mosques appeared to live-stream his attack on Facebook in a video that looked to have been recorded on a helmet camera.

"New Zealand Police alerted us to a video on Facebook shortly after the live stream commenced and we removed both the shooter's Facebook account and the video," said Mia Garlick, a Facebook representative in New Zealand. "We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware. We will continue working directly with New Zealand Police as their response and investigation continues. Our hearts go out to the victims, their families and the community affected by this horrendous act." 

In a tweet sent from its official account, YouTube also committed to removing all footage. "Our hearts are broken over today's terrible tragedy in New Zealand," read the statement. "Please know we are working vigilantly to remove any violent footage."

Following the attack, New Zealand police also warned against sharing footage of the shooting online, saying in a Twitter post: "Police are aware there is extremely distressing footage relating to the incident in Christchurch circulating online.

"We would strongly urge that the link not be shared. We are working to have any footage removed.”

Despite the response, the video is out there, and experts say this is a chilling example of how social media sites are increasingly becoming a platform for terrorists to spread their hate-fueled ideology.   

Following the shootings, Mosharraf Zaidi, an ex-government adviser, columnist and seasoned policy analyst who works for the policy think tank Tabadlab, tweeted: “Unbelievable that both @facebook and @twitter have failed to remove (the) video of the terrorist attack in #Christchurch. Every single view of those videos is a potential contribution to future acts of violence. These platforms have a responsibility they are failing to live up to.”

While the Facebook account that posted the video was no longer available shortly after the shooting and the Twitter account of the same name was quickly suspended, Zaidi, speaking to Arab News, said social media giants need to do more to stop their sites being platforms for terrorists. 

"The quality of content filtering and management is a tricky and delicate issue. Governments routinely demand posts be taken down, which these platforms comply with. But often, when they comply, rights activists bemoan the negation of people’s freedoms. 

"One of the most complex global governance challenges confronting the international community is the norms of how social media is to be regulated – with the added complexity that the objects of such norms are no longer sovereign states, but private businesses with platforms larger than most countries by population.

"I think these platforms need to spend much more of their R&D (research and development) on harm prevention and protecting their product, which is my time and your time on their platform.” 

The terrorist attack, which Prime Minister Ardern described as “one of New Zealand’s darkest days,” is the worst mass shooting in the country’s history and led to the arrest of four suspects – three men and a woman. One person was later released. Another, a man in his late 20s, has since been charged with murder. Australian Prime Minister Scott Morrison said one of the suspects in the “right-wing extremist attack” was an Australian-born citizen.

The director of the national Islamophobia monitoring service, Iman Atta of Tell MAMA (Measuring Anti-Muslim Attacks), condemned the attack, saying: "We are appalled to hear about the mass casualties in New Zealand. The killer appears to have put out a 'manifesto' based on white supremacist rhetoric which includes references to anti-Islamic comments. He mentions 'mass immigration' and 'an assault on our civilization' and makes repeated references to his 'white identity.’

"The killer also seems to have filmed the murders, adding a further cold ruthlessness to his actions. We have said time and time again that far-right extremism is a growing problem and we have been citing this for over six years now. That rhetoric is wrapped within anti-migrant and anti-Muslim sentiment. 

"Anti-Muslim hatred is fast-becoming a global issue and a binding factor for extremist far-right groups and individuals. It is a threat that needs to be taken seriously.”

Zahed Amanullah, a resident senior fellow at the Institute for Strategic Dialogue, said some people who see such videos “may be inspired” to commit a similar act of terrorism. 

Facebook Live, which the shooter appeared to use, is an “extremely difficult hole to plug,” said Amanullah. The problem with such content appearing on social media, he pointed out, is that it feeds the curiosity of online viewers. “People are curious and want to look at forbidden fruit, no matter the content,” said Amanullah. “Even people who are horrified are curious.”

Such exploitation of social media platforms by terrorists has been seen before in recent years, said Amanullah.

“Look at ISIS. Groups such as these have live-streamed first-person perspectives on terrorism; such extremists are producing this type of mentality online.”

Often, social media platforms struggle to contain such content online, particularly in real time. “We work very closely with companies such as Twitter and Facebook on these issues, and we have worked with them on identifying extremist content. I think they are taking it seriously and are reacting as quickly as they can. In this instance, the video was removed in minutes. The challenge of intercepting something being live-streamed is extremely difficult, whether it is a terrorist attack or other incidents we have seen, such as suicides.”

While people “want the bad guys and extremists offline,” Amanullah said, at present, the only way to completely prevent history repeating itself is to step up surveillance.

“This is a product of a social media age where it is so easy to broadcast what you are doing – and we might have to accept this will happen again.”
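
As Amanullah’s comments suggest, stopping a live broadcast in real time is far harder than catching re-uploads, which platforms typically match against fingerprints of footage already known to be violent. The sketch below is a minimal, illustrative version of that fingerprint-matching idea in Python, using a toy average hash; it is not any platform’s actual system, and the file names are hypothetical.

```python
# Illustrative sketch only: a toy perceptual "average hash" for matching
# frames from a flagged upload against a blocklist of footage already known
# to be violent. Real platform systems (and shared industry hash databases)
# are far more sophisticated. Assumes the Pillow imaging library; file names
# are hypothetical.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to a small grayscale grid and set a bit for every
    pixel brighter than the grid's mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hashes of frames from known violent footage (hypothetical file).
BLOCKLIST = {average_hash("known_violent_frame.jpg")}


def is_known_footage(frame_path: str, threshold: int = 10) -> bool:
    """A small Hamming distance suggests the same footage, re-encoded,
    cropped or slightly edited."""
    h = average_hash(frame_path)
    return any(hamming_distance(h, known) <= threshold for known in BLOCKLIST)
```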


BBC rolls out paid subscriptions for US users

Updated 26 June 2025

  • US visitors will have to pay $49.99 per year or $8.99 per month for unlimited access to news articles, feature stories, and a 24-hour livestream of its news programs
  • Move is part of broadcaster’s efforts to explore new revenue streams amid negotiations with the British government over its funding

LONDON: The BBC is rolling out paid subscriptions in the United States, it said on Thursday, as the publicly-funded broadcaster explores new revenue streams amid negotiations with the British government over its funding.
The BBC has in recent years seen a fall in the number of people paying the license fee, a charge of 174.50 pounds ($239.76) a year levied on all households that watch live TV, as viewers have increasingly turned to online content.
From Thursday, frequent US visitors to the BBC’s news website will have to pay $49.99 per year or $8.99 per month for unlimited access to news articles, feature stories, and a 24-hour livestream of its news programs.
While its services will remain free to British users as part of its public service remit, the BBC’s news website operates commercially and reaches 139 million users worldwide, including nearly 60 million in the US.
The new pay model uses an engagement-based system, the corporation said in a statement, allowing casual readers to access free content.
“Over the next few months, as we test and learn more about audience needs and habits, additional long-form factual content will be added to the offer for paying users,” said Rebecca Glashow, CEO of BBC Global Media & Streaming.
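The BBC has not published the rules behind its engagement-based model, so the sketch below is only a generic illustration of how a metered paywall of this kind can work, written in Python with invented thresholds and field names; the prices are the ones quoted above.

```python
# Hypothetical sketch of an engagement-based paywall: casual readers keep
# free access, while frequent US visitors are asked to subscribe. The
# metering threshold and field names are invented for illustration; only
# the prices come from the article.
from dataclasses import dataclass

FREE_ARTICLES_PER_MONTH = 3   # assumed threshold, not a BBC figure
ANNUAL_PRICE_USD = 49.99
MONTHLY_PRICE_USD = 8.99      # 12 monthly payments total 107.88, so the
                              # annual plan is the cheaper option


@dataclass
class Visitor:
    country: str
    is_subscriber: bool
    articles_read_this_month: int


def can_read_article(visitor: Visitor) -> bool:
    """Gate unlimited access behind a subscription for heavy US readers."""
    if visitor.country != "US":   # assumed: only US visitors are metered
        return True
    if visitor.is_subscriber:
        return True
    return visitor.articles_read_this_month < FREE_ARTICLES_PER_MONTH
```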
The British government said last November it would review the BBC’s Royal Charter, which sets out the broadcaster’s terms and funding model, with the aim of ensuring a sustainable and fair system beyond 2027.
To give the corporation financial certainty until then, the government said it was committed to keeping the license fee in its current form and would increase it in line with inflation.


Israeli minister walks back claim of antisemitism after clash with Piers Morgan

Updated 26 June 2025

  • Israel’s Minister Amichai Chikli accused Morgan in a previous social media post of ‘sharp and troubling descent into overt antisemitism’
  • Following heated interview, Chikli later denied ever calling Morgan antisemitic, despite earlier post

LONDON: Israeli Minister for Diaspora Affairs and Combating Antisemitism Amichai Chikli has denied accusing British broadcaster Piers Morgan of antisemitism following a heated exchange during a recent episode of “Piers Morgan Uncensored,” despite a post on his official X account that said Morgan’s rhetoric marked “a sharp and troubling descent into overt antisemitism.”

The confrontation aired on Tuesday during an episode focused on Israel’s escalating conflicts with Iran and Hamas and featured appearances from both Chikli and former Israeli Prime Minister Ehud Barak.

Tensions erupted as Morgan repeatedly pressed Chikli to explain his public accusations.

“You did, you implied it,” Morgan said, adding that Chikli’s accusations led to “thousands of people calling me antisemitic and (a) Jew-hater” on social media. He demanded evidence, ultimately calling the minister “pathetic” and “an embarrassment” when none was offered.

The row stemmed from a June 4 post by Chikli, who shared a clip of a prior interview between Morgan and British barrister Jonathan Hausdorff, a member of the pro-Israel group UK Lawyers for Israel.

In the post, viewed over 1.3 million times by the time of Tuesday’s broadcast, Chikli claimed Morgan had hosted “every Israel hater he can find” and treated Hausdorff with “vile condescension and bullying arrogance — revealing his true face, one he had long tried to conceal.”

The post also referenced an unverified claim by American commentator Tucker Carlson that Morgan had said he “hates Israel with every fiber of his being” — a statement Morgan has firmly denied.

During Tuesday’s interview, Morgan challenged Chikli to cite a single antisemitic remark or action.

“Is it because I dare to criticize Israeli actions in Gaza?” Morgan asked Chikli.

According to Israeli outlet Haaretz, Chikli later denied ever calling Morgan antisemitic, despite his earlier post.

The episode reflects Morgan’s shifting stance on the war in Gaza. Once a vocal supporter of Israel’s right to self-defense in the immediate aftermath of the Oct. 7 attacks, Morgan has since adopted a more critical view as the civilian toll in Gaza has mounted and international outrage has grown.

The show has become a flashpoint for debate since the conflict began, hosting polarizing guests from both sides, including controversial American Rabbi Shmuley Boteach, a staunch defender of Israel, and influencer Dan Bilzerian, who has faced accusations of Holocaust denial.

Chikli, meanwhile, has faced criticism for blurring the lines between genuine antisemitism and political criticism of Israel. He recently sparked controversy by inviting members of far-right European parties — some with antisemitic histories — to a conference on antisemitism in Jerusalem, raising questions about his credibility.


Iraq arrests commentator over online post on Iran-Israel war

Updated 25 June 2025

  • Iraqi forces arrested Abbas Al-Ardawi for sharing content online that included incitement intended to insult and defame the security institution

BAGHDAD: Iraqi authorities said they arrested a political commentator on Wednesday over a post alleging that a military radar system struck by a drone had been used to help Israel in its war against Iran.

After a court issued a warrant, the defense ministry said that Iraqi forces arrested Abbas Al-Ardawi for sharing content online that included “incitement intended to insult and defame the security institution.”

In a post on X, which was later deleted but has circulated on social media as a screenshot, Ardawi told his more than 90,000 followers that “a French radar in the Taji base served the Israeli aggression” and was eliminated.

Early Tuesday, hours before a ceasefire ended the 12-day Iran-Israel war, unidentified drones struck radar systems at two military bases in Taji, north of Baghdad, and in southern Iraq, officials have said.

The Taji base hosted US troops several years ago and was a frequent target of rocket attacks.

There has been no claim of responsibility for the latest drone attacks, which also struck radar systems at the Imam Ali air base in Dhi Qar province.

A source close to Iran-backed groups in Iraq told AFP that the armed factions have nothing to do with the attacks.

Ardawi is seen as a supporter of Iran-aligned armed groups that have launched attacks on US forces in the region in the past, and of the pro-Tehran Coordination Framework, a powerful political coalition that holds a parliamentary majority.

The Iraqi defense ministry said that Ardawi’s arrest was made on the instructions of the prime minister, who also serves as the commander-in-chief of the armed forces, “not to show leniency toward anyone who endangers the security and stability of the country.”

It added that while “the freedom of expression is a guaranteed right... it is restricted based on national security and the country’s top interests.”

Iran-backed groups have criticized US deployment in Iraq as part of an anti-jihadist coalition, saying the American forces allowed Israel to use Iraq’s airspace.

The US-led coalition also includes French troops, who have been training Iraqi forces. There is no known French deployment at the Taji base.

The Iran-Israel war had forced Baghdad to close its airspace, which reopened on Tuesday shortly after US President Donald Trump announced a ceasefire.


Grok shows ‘flaws’ in fact-checking Israel-Iran war: study

Updated 25 June 2025

  • “Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims”

WASHINGTON: Elon Musk’s AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots — including xAI’s Grok — in search of reliable information, but their responses are often themselves prone to misinformation.
“The investigation into Grok’s performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot’s ability to provide accurate, reliable, and consistent information during times of crisis,” said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
“Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims.”
The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, finding that Grok was “struggling to authenticate AI-generated media.”
Following Iran’s retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated — sometimes within the same minute — between denying the airport’s destruction and confirming it had been damaged by strikes, the study said.
In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran.
When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.
The Israel-Iran conflict, which led to US air strikes against Tehran’s nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts.
AI chatbots also amplified falsehoods.
As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support.
When users asked the AI-operated X accounts of Perplexity and Grok about the claim’s validity, both wrongly responded that it was true, according to disinformation watchdog NewsGuard.
Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles.
Last month, Grok came under renewed scrutiny for inserting “white genocide” in South Africa, a far-right conspiracy theory, into unrelated queries.
Musk’s startup xAI blamed an “unauthorized modification” for the unsolicited response.
Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.
Musk himself blasted Grok after it cited Media Matters — a liberal media watchdog he has targeted in multiple lawsuits — as a source in some of its responses about misinformation.
“Shame on you, Grok,” Musk wrote on X. “Your sourcing is terrible.”


Tech-fueled misinformation distorts Iran-Israel fighting

Updated 24 June 2025

  • It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation

WASHINGTON: AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods — such tech-enabled misinformation is distorting the Israel-Iran conflict, fueling a war of narratives across social media.
The information warfare unfolding alongside ground combat — sparked by Israel’s strikes on Iran’s nuclear facilities and military leadership — underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.
The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.
After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport.
The videos were widely shared across Facebook, Instagram and X.
Using a reverse image search, AFP’s fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.
There has been a “surge in generative AI misinformation, specifically related to the Iran-Israel conflict,” Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP.
“These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication.”
GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict.
The company linked the visually compelling videos — depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer — to Google’s Veo 3 AI generator, known for hyper-realistic visuals.
The Veo watermark is visible at the bottom of an online video posted by the news outlet Tehran Times that claims to show “the moment an Iranian missile” struck Tel Aviv.
“It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion,” said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.
Farid offered one tip to spot such deepfakes: the Veo 3 videos were normally eight seconds in length or a combination of clips of a similar duration.
“This eight-second limit obviously doesn’t prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share,” he said.
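Farid’s eight-second observation can be turned into a rough “pause and check” filter. The Python sketch below measures a clip’s duration with the ffprobe command-line tool (part of FFmpeg, which it assumes is installed) and flags durations close to a multiple of eight seconds; it is a prompt to fact-check before re-sharing, not a deepfake detector.

```python
# Rough heuristic inspired by the observation that Veo 3 clips run about
# eight seconds, or are stitched from clips of similar length: flag videos
# whose duration is close to a multiple of eight seconds for a manual check.
# This proves nothing on its own. Assumes the ffprobe tool is installed.
import subprocess


def video_duration_seconds(path: str) -> float:
    """Read a video's duration in seconds using ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout.strip())


def worth_a_second_look(path: str, tolerance: float = 0.75) -> bool:
    """True if the duration sits near a multiple of eight seconds."""
    duration = video_duration_seconds(path)
    nearest = round(duration / 8) * 8
    return nearest > 0 and abs(duration - nearest) <= tolerance


# Example: a 23.9-second clip is flagged (close to 3 x 8 seconds) and should
# be reverse-searched or provenance-checked before being re-shared.
```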
The falsehoods are not confined to social media.
Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims — ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.
Sources spreading these false narratives include Iranian military-linked Telegram channels and state media sources affiliated with the Islamic Republic of Iran Broadcasting (IRIB), sanctioned by the US Treasury Department, NewsGuard said.
“We’re seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience,” McKenzie Sadeghi, a researcher with NewsGuard, told AFP.
Sadeghi described Iranian citizens as “trapped in a sealed information environment,” where state media outlets dominate in a chaotic attempt to “control the narrative.”
Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women’s protests and urging people to take to the streets.
Adding to the information chaos were online clips lifted from war-themed video games.
AFP’s fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3.
Israel’s military has rejected Iranian media reports claiming its fighter jets were downed over Iran as “fake news.”
Chatbots such as xAI’s Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.
“This highlights a broader crisis in today’s online information landscape: the erosion of trust in digital content,” BitMindAI’s Miyachi said.
“There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse.”