Russian ‘disinformation’ campaign on US immigration woes finds its way into American voter platforms

An American holds a sign during a "Take Our Border Back" rally on Feb. 3, 2024, in Quemado, Texas. Online actors tied to the Kremlin have begun pushing misleading and incendiary claims about US immigration in an apparent bid to target American voters ahead of the 2024 election. (AP)
Updated 02 March 2024

  • Disinformation is worse on X and TikTok, given their lack of controls, says Logically, a tech company that tracks disinformation campaigns
  • Russia and other disinformation spreaders also use encrypted messaging sites or websites that masquerade as legitimate news outlets

WASHINGTON: For Vladimir Putin, victory in Ukraine may run through Texas’ Rio Grande Valley.
In recent weeks, Russian state media and online accounts tied to the Kremlin have spread and amplified misleading and incendiary content about US immigration and border security. The campaign seems crafted to stoke outrage and polarization before the 2024 election for the White House, and experts who study Russian disinformation say Americans can expect more to come as Putin looks to weaken support for Ukraine and cut off a vital supply of aid.
In social media posts, online videos and stories on websites, these accounts misstate the impact of immigration, highlight stories about crimes committed by immigrants, and warn of dire consequences if the US doesn’t crack down at its border with Mexico. Many are misleading, filled with cherry-picked data or debunked rumors.
The pivot toward the United States comes after two years in which Russia’s vast disinformation apparatus was busy pushing propaganda and disinformation about its invasion of Ukraine. Experts who study how authoritarian states use the Internet to spread disinformation say eroding support for Ukraine remains Russia’s top priority — and that the Kremlin is just finding new ways to do it.
“Things have shifted, even in the last few days,” said Kyle Walter, head of research at Logically, a tech company that tracks disinformation campaigns. While experts and government officials have long warned of Russia’s intentions, Walter said the content spotted so far this year “is the first indication that I’ve seen that Russia is actually going to focus on US elections.”
This month Logically identified dozens of pro-Russian accounts posting about immigration in the US, with a particular interest in promoting recent anti-immigration rallies in Texas. A recent Logically assessment concluded that after two years spent largely dedicated to the war in Ukraine, Russia’s disinformation apparatus has “started 2024 with a focus on the US.”
Many posts highlight crimes allegedly committed by recent immigrants or suggest migrants are a burden on local communities. Some claims were posted by accounts with tiny audiences; others were made by state media sites with millions of followers.
This week the accounts seized on the recent death of a Georgia nursing student and the arrest of a Venezuelan man who had entered the US illegally and was allowed to stay to pursue his immigration case. The killing quickly became a rallying cry for former President Donald Trump and other Republicans who suggest that migrants commit crimes more often than do US citizens. The evidence does not support those claims.
The content, crafted in English, has quickly found its way to websites and platforms popular with American voters. Footage of a recent anti-immigration protest broadcast by Russian outlet RT, for example, was racking up thousands of views this week on X, the platform formerly known as Twitter, and prompting angry replies from other users.
The Russian outlet Sputnik ran a story this week about growing calls to build a US-Mexico border wall, a priority for Trump, who failed to complete the job as president. An analysis of other sites that later linked to the Sputnik piece shows more than half were in the US, according to data from the online analytics firm Semrush.com. Overall, Americans make up the English-language Sputnik’s largest audience.
US officials have warned that Russia could seek to meddle in the elections of dozens of countries in 2024, when more than 50 nations accounting for half of the world’s population are scheduled to hold national votes. While Russia has a strategic interest in the outcome of many of them — the European Parliament, for one — few offer the opportunity and the prize that America does.
For Russia’s bid to conquer Ukraine, this year’s US election stakes couldn’t be higher. President Joe Biden has pledged to fully back Ukraine. Republicans have been far less supportive. Trump has openly praised Putin and the former president has suggested he would encourage Russia to attack America’s NATO allies if they don’t pay their fair share for the military alliance.
More than half of Republicans believe the US is spending too much on Ukraine, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research that found Democrats to be much more supportive of additional aid.
Soon after the war started, Russia mounted a disinformation campaign designed to cut into support for Ukraine. Claims included wild stories about secret US germ warfare labs or Nazi conspiracies or that Ukrainian refugees were committing crimes and taking jobs from people who had welcomed them.
That effort continues, but Russia has also shifted its attention to issues with no obvious tie to Moscow that are more likely to create cracks in the unity of its adversaries — for example, immigration and inflation, high-profile topics in the US and Europe.
“They’re very savvy and understand the right buttons to push,” said Bret Schafer, senior fellow and head of the information manipulation team at the Alliance for Securing Democracy, a Washington-based nonprofit. “If your ultimate objective is to reduce support for Ukraine, your inroad might be talking about how bad things are on the southern border. Their path to win this thing is to get the US and the E.U. to stop sending weapons and aid to Ukraine.”
A message left with the Russian Embassy in Washington wasn’t immediately returned.
America’s election may also be a tempting target for other authoritarian nations such as China and Iran that, like Russia, have shown a willingness to use online propaganda and disinformation to further their objectives.
The online landscape has dramatically shifted since Russia sought to meddle in America’s 2016 presidential race won by Trump. Platforms such as Facebook and Instagram have banned many Russian state accounts and built new safeguards aimed at preventing anyone from exploiting their sites. In one recent example, Meta, the owner of Facebook, announced last fall that it had identified and stopped a network of thousands of fake accounts created in China in an apparent effort to fool American voters.
Other platforms, including X, have taken a different approach, rolling back or even eliminating content moderation and rules designed to stop disinformation. Then there is TikTok, whose ties to China and popularity with young people have set off alarms in several state capitals and Washington.
Artificial intelligence is another concern. The technology now makes it easier than ever to create audio or video that is lifelike enough to fool voters.
Social media is no longer the only battleground either. Increasingly, Russia and other disinformation spreaders use encrypted messaging sites or websites that masquerade as legitimate news outlets.
“A lot of their activity has moved off the major platforms to places where they can operate more freely,” said John Hultquist, chief analyst at Mandiant Intelligence, a cybersecurity firm monitoring Russian disinformation.
Walter, Logically’s research director, said he is most concerned about disinformation on X and TikTok this year, given their lack of controls and their popularity, especially with young voters. TikTok’s ties to China have raised national security concerns.
He said that while election years tend to highlight the dangers of disinformation, the most effective information operations are launched years in advance. America’s adversaries have spent a long time studying its politics, building online networks and cultivating domestic divisions.
Now comes the payoff.
“They don’t need to put a ton of effort into causing disinformation,” Walter said. “They’ve already laid the groundwork leading up to 2024.”


Israeli strike on Gaza hospital kills wounded journalist, Hamas says

Updated 13 May 2025

  • Hamas said the strike killed a journalist and wounded a number of civilians
  • The CPJ says at least 178 journalists and media workers have been killed in Gaza

JERUSALEM: The Israeli military said it struck a Gaza hospital housing Hamas militants in a raid Tuesday that, according to the Palestinian group, killed a journalist wounded in an Israeli attack last month.

The strike, which Hamas said happened at dawn, ended a brief pause in fighting to allow the release of a US-Israeli hostage.

The military said in a Telegram post that “significant Hamas terrorists” had been “operating from within a command and control center” at Nasser Hospital in Khan Yunis, southern Gaza’s main city.

“The compound was used by the terrorists to plan and execute terrorist attacks against Israeli civilians and IDF (army) troops,” it said.

In a statement, Hamas said the strike killed a journalist and wounded a number of civilians.

“The Israeli army bombed the surgeries building at Nasser Hospital in Khan Yunis at dawn on Tuesday, killing journalist Hassan Aslih,” said Gaza civil defense spokesman Mahmud Bassal.

Aslih, head of the Alam24 news outlet, had been at the hospital for treatment after being wounded in a strike on April 7, he told AFP.

Two other journalists, Ahmed Mansur and Hilmi Al-Faqaawi, were killed in that bombing, according to reports at the time.

The Israeli military said the April strike had targeted Aslih, alleging he operated for Hamas “under the guise of a journalist.”

It said Aslih had “infiltrated Israeli territory and participated in the murderous massacre carried out by the Hamas terrorist organization” on October 7, 2023.

The Committee to Protect Journalists condemned the strike.

It said Aslih had worked for international media outlets until 2023, when the pro-Israeli watchdog HonestReporting published a photo of him being kissed by then-Hamas leader Yahya Sinwar.

The CPJ says at least 178 journalists and media workers have been killed in Gaza, the occupied West Bank, Israel and Lebanon since the start of the war.

Israel had paused military operations in Gaza to allow for the release of Edan Alexander, a 21-year-old US-Israeli soldier who had been held hostage since October 2023.

Alexander, believed to be the last surviving hostage with US citizenship, was released Monday ahead of a Middle East visit by US President Donald Trump.

Israel resumed its military offensive in Gaza on March 18 after a two-month truce in its war against Hamas, which was triggered by the Palestinian group’s October 7 attack.

The attack on southern Israel resulted in the deaths of 1,218 people, mostly civilians, according to an AFP tally based on official figures.

The health ministry in Hamas-run Gaza said Monday at least 2,749 people have been killed since Israel resumed its campaign, bringing the overall death toll since the war broke out to 52,862.


600+ film and media insiders sign open letter demanding BBC airs delayed Gaza documentary

Updated 13 May 2025

  • Actors Susan Sarandon, Indira Varma, Miriam Margolyes, Maxine Peake and Juliet Stevenson among those calling for immediate broadcast of ‘Gaza: Medics Under Fire’
  • The film was delayed pending an investigation into another documentary, “Gaza: How to Survive a Warzone,” after it emerged the narrator of that film is the son of a Hamas official

DUBAI: More than 600 prominent figures from the film and media industries have signed an open letter urging the BBC to broadcast the delayed documentary “Gaza: Medics Under Fire.”

The signatories include actors such as Susan Sarandon, Indira Varma, Miriam Margolyes, Maxine Peake and Juliet Stevenson, along with journalists, filmmakers and other industry professionals. One hundred and thirty of them chose to remain anonymous; at least 12 were said to be BBC staff members.

The letter, addressed to BBC Director General Tim Davie, states: “Every day this film is delayed, the BBC fails in its commitment to inform the public, fails in its journalistic responsibility to report the truth, and fails in its duty of care to these brave contributors.

“No news organization should quietly decide behind closed doors whose stories are worth telling.”

The film was originally scheduled to air in January. BBC bosses said they decided to delay it while an investigation is carried out into another documentary, “Gaza: How to Survive a Warzone,” which was pulled from the schedules when it emerged that the narrator of that film is the son of a Hamas official.

Samir Shah, chairperson of the BBC, said this revelation was “a dagger to the heart of the BBC’s claim to be impartial and to be trustworthy” and that was why he and fellow board members were “determined to ask the questions.”

The writers of the letter said: “This is not editorial caution. It’s political suppression. The BBC has provided no timeline, no transparency. Such decisions reinforce the systemic devaluation of Palestinian lives in our media.”

“Gaza: Medics Under Fire” production company Basement Films said in the letter that it was “desperate for a confirmed release date in order to be able to tell the surviving doctors and medics when their stories will be told.”

The document concluded with a demand for the film to be released “NOW.”

A spokesperson for the BBC told Variety magazine the documentary will be broadcast “as soon as possible,” but the organization had taken “an editorial decision not to do so” while there was an “ongoing review” of the other Gaza-related film.


Former UK PM Theresa May to speak at Most Powerful Women summit in Riyadh

Updated 12 May 2025

  • Former Conservative Party leader will give her views on fractured trade ties, the erosion of multilateralism, and the race toward a greener economy
  • The event on May 20 and 21 is Fortune’s first international summit in the region and aims to gather 125 of the world’s most influential female business leaders

DUBAI: The speakers and special guests at the Fortune Most Powerful Women International Summit in Riyadh on May 20 and 21 will include the UK’s former prime minister, Theresa May, organizers revealed on Monday.

She will close the summit, at the St. Regis, with a talk moderated by Ellie Austin, the editorial director of Fortune Most Powerful Women, an invitation-only community of leaders from a wide range of industries worldwide.

The former leader of the Conservative Party, who was Britain’s prime minister from 2016 until 2019, will give her views on fractured trade ties, the erosion of multilateralism, and the race toward a greener economy. She will also talk about navigating globalization, seizing opportunities in the energy transition, and the need for bold and creative leadership in a volatile world.

The summit, which has the theme “A New Era for Business: Partnering for Global Prosperity,” is Fortune’s first international summit in the Middle East region. It aims to bring together 125 of the world’s most influential female business leaders.

The full lineup of speakers features public-sector leaders including: Ambassador Haifa Al-Jedea, Saudi Arabia’s permanent representative to the EU; Yuriko Koike, the governor of Tokyo; Neema Lugangira, a member of Tanzania’s parliament; and Silvana Koch-Mehrin, president and founder of non-profit organization Women Political Leaders.

Representatives from the private sector include: Amel Chadli, Gulf cluster president of Schneider Electric; Leah Cotterill, Cigna Healthcare’s CEO for the Middle East and Africa (excluding Saudi Arabia); Shazia Syed, general manager of the Personal Care Business Group with Unilever Arabia (GCC), Turkey, Pakistan and Bangladesh, and head of Unilever Arabia; and Julie Sweet, the CEO of Accenture.


French-Lebanese Saade holding company buys stake in Pathe cinemas

Updated 12 May 2025

  • Acquisition of a stake in one of the world’s oldest film companies was announced on Monday for an undisclosed amount
  • Investment will focus on expanding Pathé’s international presence and modernizing its movie theaters, Saade said

PARIS: French-Lebanese billionaire Rodolphe Saade’s family holding Merit France has acquired a 20 percent stake in French cinema chain Pathe for an undisclosed amount, privately owned Pathe said on Monday.

WHY IT’S IMPORTANT
Saade and his shipping company CMA CGM have already become leading players in the French media landscape in recent years, buying up several newspapers as well as Altice Media, which owns 24-hour news channel BFM TV.
The investment in Pathe builds on the family’s interests in media and culture.

CONTEXT
Pathe, owned by French businessman Jerome Seydoux, is one of the world’s oldest film companies, and is a leading producer as well as movie theater operator in Europe and Africa.
The investment will help Pathe accelerate its development in producing films and series with an international reach, and to modernize its movie theaters, the statement said.

KEY QUOTE
“We are committed to contributing to the development of the sector and promoting French film culture around the world,” said Rodolphe Saade.


‘Tool for grifters’: AI deepfakes push bogus sexual cures

Updated 12 May 2025

  • The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia, a deception-filled online universe designed to manipulate unsuspecting users into buying dubious products

WASHINGTON: Holding an oversized carrot, a brawny, shirtless man promotes a supplement he claims can enlarge male genitalia — one of countless AI-generated videos on TikTok peddling unproven sexual treatments.

The rise of generative AI has made it easy — and financially lucrative — to mass-produce such videos with minimal human oversight, often featuring fake celebrity endorsements of bogus and potentially harmful products.

In some TikTok videos, carrots are used as a euphemism for male genitalia, apparently to evade content moderation policing sexually explicit language.

“You would notice that your carrot has grown up,” the muscled man says in a robotic voice in one video, directing users to an online purchase link.

“This product will change your life,” the man adds, claiming without evidence that the herbs used as ingredients boost testosterone and send energy levels “through the roof.”

The video appears to be AI-generated, according to a deepfake detection service recently launched by the Bay Area-headquartered firm Resemble AI, which shared its results with AFP.

“As seen in this example, misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers’ health at risk,” Zohaib Ahmed, Resemble AI’s chief executive and co-founder, told AFP.

“We’re seeing AI-generated content weaponized to spread false information.”

The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia, a deception-filled online universe designed to manipulate unsuspecting users into buying dubious products.

They include everything from unverified — and in some cases, potentially harmful — dietary supplements to weight loss products and sexual remedies.

“AI is a useful tool for grifters looking to create large volumes of content slop for a low cost,” misinformation researcher Abbie Richards told AFP.

“It’s a cheap way to produce advertisements,” she added.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, has observed a surge of “AI doctor” avatars and audio tracks on TikTok that promote questionable sexual remedies.

Some of these videos, many with millions of views, peddle testosterone-boosting concoctions made from ingredients such as lemon, ginger and garlic.

More troublingly, rapidly evolving AI tools have enabled the creation of deepfakes impersonating celebrities such as actress Amanda Seyfried and actor Robert De Niro.

“Your husband can’t get it up?” Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, appears to ask in a TikTok video promoting a prostate supplement.

But the clip is a deepfake, using Fauci’s likeness.

Many manipulated videos are created from existing ones, modified with AI-generated voices and lip-synced to match what the altered voice says.

“The impersonation videos are particularly pernicious as they further degrade our ability to discern authentic accounts online,” Mantzarlis said.

Last year, Mantzarlis discovered hundreds of ads on YouTube featuring deepfakes of celebrities — including Arnold Schwarzenegger, Sylvester Stallone, and Mike Tyson — promoting supplements branded as erectile dysfunction cures.

The rapid pace of generating short-form AI videos means that even when tech platforms remove questionable content, near-identical versions quickly reappear — turning moderation into a game of whack-a-mole.

Researchers say this creates unique challenges for policing AI-generated content, requiring novel solutions and more sophisticated detection tools.

AFP’s fact checkers have repeatedly debunked scam ads on Facebook promoting treatments — including erectile dysfunction cures — that use fake endorsements by Ben Carson, a neurosurgeon and former US cabinet member.

Yet many users still consider the endorsements legitimate, illustrating the appeal of deepfakes.

“Scammy affiliate marketing schemes and questionable sex supplements have existed for as long as the Internet and before,” Mantzarlis said.

“As with every other bad thing online, generative AI has made this abuse vector cheaper and quicker to deploy at scale.”