Fake celebrity endorsements, snubs plague US presidential race

AI-generated images posted by Republican candidate Donald Trump on social media claiming an endorsement from Taylor Swift. In his message, Trump said “I accept,” referring to the fake Swift endorsement. (X: @realDonaldTrump)
Updated 20 September 2024


  • A database from the nonprofit News Literacy Project has so far listed 70 social media posts peddling fake “VIP” endorsements and snubs
  • Elon Musk-owned X has emerged as a hotbed of political disinformation after the platform reinstated accounts of known purveyors of falsehoods, researchers say

WASHINGTON: Taylor Swift did not endorse Donald Trump. Nor did Lady Gaga or Morgan Freeman. And Bruce Springsteen was not photographed in a “Keep America Trumpless” shirt. Fake celebrity endorsements and snubs are roiling the US presidential race.
Dozens of bogus testimonials from American actors, singers and athletes about Republican nominee Trump and his Democratic rival Kamala Harris have proliferated on social media ahead of the November election, researchers say, many of them enabled by artificial intelligence image generators.
The fake endorsements and brushoffs, which come as platforms such as the Elon Musk-owned X knock down many of the guardrails against misinformation, have prompted concern over their potential to manipulate voters as the race to the White House heats up.
Last month, Trump shared doctored images suggesting Swift had thrown her support behind his campaign, apparently seeking to tap into the pop singer’s megastar power to sway voters.

Republican presidential candidate Donald Trump posted on social media this AI-generated image claiming to show his Democratic rival Kamala Harris addressing a gathering of communists in Chicago. Trump accuses Harris of being a communist. (X: @realDonaldTrump)

The photos — including some that Hany Farid, a digital forensics expert at the University of California, Berkeley, said bore the hallmarks of AI-generated images — suggested the pop star and her fans, popularly known as Swifties, backed Trump’s campaign.
What made Trump’s mash-up on Truth Social “particularly devious” was its combination of real and fake imagery, Farid told AFP.
Last week, Swift endorsed Harris and her running mate Tim Walz, calling the current vice president a “steady-handed, gifted leader.”
The singer, who has hundreds of millions of followers on platforms including Instagram and TikTok, said those manipulated images of her motivated her to speak up as they “conjured up my fears around AI, and the dangers of spreading misinformation.”
Following her announcement, Trump fired off a missive on Truth Social saying: “I HATE TAYLOR SWIFT!”

A combination image posted by Trump critics on social media shows a doctored picture of Bruce Springsteen campaigning against Donald Trump (right frame). The image was apparently a tampered version of a real picture shared on social media (left). (Social media photos)

A database from the News Literacy Project (NLP), a nonprofit which recently launched a misinformation dashboard to raise awareness about election falsehoods, has so far listed 70 social media posts peddling fake “VIP” endorsements and snubs.
“In these polarizing times, fake celebrity endorsements can grab voters’ attention, influence their outlooks, confirm personal biases, and sow confusion and chaos,” Peter Adams, senior vice president for research at NLP, told AFP.
NLP’s list, which appears to be growing by the day, includes viral posts that have garnered millions of views.

Among them are posts sharing a manipulated picture of Lady Gaga with a “Trump 2024” sign, implying that she endorsed the former president, AFP’s fact-checkers reported.
Other posts falsely asserted that Oscar-winning actor Morgan Freeman, who has been critical of the Republican, said a second Trump presidency would be “good for the country,” according to US fact-checkers.
Digitally altered photos of Springsteen wearing a “Keep America Trumpless” shirt and actor Ryan Reynolds sporting a “Kamala removes nasty orange stains” shirt also swirled on social media sites.
“The platforms have enabled it,” Adams said.
“As they pull back from moderation and hesitate to take down election related misinformation, they have become a major avenue for trolls, opportunists and propagandists to reach a mass audience.”

In particular, X has emerged as a hotbed of political disinformation after the platform scaled back content moderation policies and reinstated accounts of known purveyors of falsehoods, researchers say.
Musk, who has endorsed Trump and has over 198 million followers on X, has been repeatedly accused of spreading election falsehoods.
American officials responsible for overseeing elections have also urged Musk to fix X’s AI chatbot, Grok, which lets users generate images from text prompts, after it shared misinformation.

Grok, the AI chatbot of X (formerly known as Twitter), lets users generate images from text prompts.

Lucas Hansen, co-founder of the nonprofit CivAI, demonstrated to AFP the ease with which Grok can generate a fake photo of Swift fans supporting Trump using a simple prompt: “Image of an outside rally of woman wearing ‘Swifties for Trump’ T-shirts.”
“If you want a relatively mundane situation where the people in the image are either famous or fictitious, Grok is definitely a big enabler” of visual disinformation, Hansen told AFP.
“I do expect it to be a large source of fake celebrity endorsement images,” he added.
As the technology develops, it’s going to become “harder and harder to identify the fakes,” said Jess Terry, intelligence analyst at Blackbird.AI.
“There’s certainly the risk that older generations or other communities less familiar with developing AI-based technology might believe what they see,” Terry told AFP.
 


Social media platform X outage appears to ease, Downdetector shows

Updated 10 March 2025


Social media platform X was down for thousands of users in the US and the UK, according to outage-tracking website Downdetector.com.
There were more than 16,000 incidents of people reporting issues with the platform as of 6:02 a.m. ET, according to Downdetector, which tracks outages by collating status reports from a number of sources.

X did not immediately respond to a Reuters request for comment.
Downdetector's numbers are based on user-submitted reports. The actual number of affected users may vary.


Journalist quits broadcaster after comparing French actions in Algeria to Nazi massacre

Updated 10 March 2025


  • Historians on both sides have in recent years documented numerous violations by French forces, including arbitrary killings and detentions; the history still burdens French-Algerian relations to this day

PARIS: A prominent French journalist on Sunday announced he was stepping down from his role as an expert analyst for broadcaster RTL after provoking an uproar by comparing French actions during colonial rule in Algeria to a World War II massacre committed by Nazi forces in France.
Jean-Michel Aphatie, a veteran reporter and broadcaster, insisted that while he would not be returning to RTL he wholly stood by his comments made on the radio station in late February equating atrocities committed by France in Algeria with those of Nazi Germany in occupied France.
“I will not return to RTL. It is my decision,” the journalist wrote on X, after he was suspended from the air for a week by the radio station.
On February 25 he said on air: “Every year in France, we commemorate what happened in Oradour-sur-Glane — the massacre of an entire village. But we have committed hundreds of these, in Algeria. Are we aware of this?”
He was referring to the village of Oradour-sur-Glane, where an SS unit heading to the Normandy front massacred 642 residents on June 10, 1944. The village was never rebuilt, left as a chilling memorial for future generations.
Challenged by the anchor over whether “we (the French) behaved like the Nazis,” Aphatie replied: “The Nazis behaved like us.”
On X, he acknowledged his comments had created a “debate” but said it was of great importance to understand the full story of France’s 1830-1962 presence in Algeria, adding that he was “horrified” by what he had read in history books.
Of his one-week suspension, he said: “If I come back to RTL, I validate this and admit to making a mistake. This is a line that cannot be crossed.”
His comments prompted a flurry of complaints to audio-visual regulator Arcom, which has opened an investigation.
France’s conduct in Algeria during the 1954-1962 war that led to independence, and in the preceding decades, remains the subject of often painful debate in both countries.
Historians on both sides have in recent years documented numerous violations by French forces, including arbitrary killings and detentions, and the history still burdens French-Algerian relations to this day.
The far right in France has long defended French policies of those years. Jean-Marie Le Pen, an Algeria War veteran who co-founded the National Front (FN) party and died earlier this year, drew much of his support from French settlers forced to return after independence.

 


Apple adds new Syrian flag emoji

Updated 08 March 2025


  • New flag is part of latest iOS, macOS updates

DUBAI: Apple has added the new flag of the Syrian Arab Republic to its emoji keyboard in the latest beta update to its operating system, replacing the one used by former Syrian President Bashar Assad’s regime.

The new flag emoji is part of Apple’s iOS and macOS 18.4 beta 2 update and is therefore unavailable to those who have not signed up for beta updates.

Apple will roll out the new updates to users in April, according to a company statement.

The old flag featured three stripes: red at the top, black at the bottom and white in the middle with two green stars.

The new flag features green at the top, black at the bottom and white in the middle with three red stars.

For many Syrians the new flag represents freedom and independence from Assad’s dictatorial regime.

The country has a long history with the current flag, which was first adopted when Syria gained independence from France in 1946.

It was replaced in 1958 by the flag of the United Arab Republic to represent the political union between Egypt and Syria.

It was adopted again for a short time when Syria left the United Arab Republic in 1961, only to be replaced in 1963 when the Baath Party took control of the country.


Newspaper in Syrian Arab Republic resumes circulation in Damascus after fall of Assad regime

Updated 07 March 2025


  • Media organization hails ‘victory for free journalism’

DUBAI: The Syrian newspaper Enab Baladi has resumed distribution in the streets of Damascus and its suburbs after a ban of more than a decade under Bashar Assad’s regime.

The newspaper, which describes itself as “an independent Syrian media organization,” has documented the regime’s violations since its launch in 2012, during the revolution.

Following Assad’s brutal crackdown on dissent, the newspaper’s distribution was limited to opposition-controlled northern areas until 2020.

Its editorial stance led to the arrest of many staff members, while others were tortured to death in prisons or killed by shelling and military operations in Daraya.

The media organization said: “The first copies were printed through self-funding and the efforts of its founding staff using a home printer, distributed secretly by volunteers in the neighborhoods of Daraya and Damascus.”

The organization relied on expanding its digital and visual content to reach audiences online, or through printed copies that were smuggled within Syria.

With the fall of the Assad regime on Dec. 8 last year, after a blistering 12-day campaign led by Hayat Tahrir Al-Sham, Enab Baladi resumed distribution in Damascus once a newsroom was established in the capital.

It said the move was aimed at “ensuring freedom of expression during an ambiguous transitional phase.”

The media organization added: “The return of printing inside Syria represents a victory for free journalism and an opportunity to reconnect with the audience inside Syria.”


Israeli military creating ChatGPT-like AI tool targeting Palestinians, says investigation

Updated 07 March 2025


  • Tool being built by Israeli army’s secretive cyber warfare unit 

DUBAI: Israel’s military is developing an advanced artificial intelligence tool, similar to ChatGPT, by training it on Arabic conversations obtained through the surveillance of Palestinians living under occupation.

These are the findings of a joint investigation by The Guardian, Israeli-Palestinian publication +972 Magazine, and Hebrew-language outlet Local Call.

The tool is being built by the Israeli army’s secretive cyber warfare division, Unit 8200, which is training it to understand colloquial Arabic by feeding it vast quantities of phone calls and text messages between Palestinians, obtained through surveillance.

Three Israeli security sources with knowledge of the matter confirmed the existence of the AI tool to the outlets conducting the investigation.

The model was still undergoing training last year and it is unclear if it has been deployed and to what end. However, sources said that the tool’s ability to rapidly process large quantities of surveillance material in order to “answer questions” about specific individuals would be a huge benefit to the Israeli army.

During the investigation, several sources highlighted that Unit 8200 had used smaller-scale machine learning models in recent years.

One source said: “AI amplifies power; it’s not just about preventing shooting attacks. I can track human rights activists, monitor Palestinian construction in Area C (of the West Bank). I have more tools to know what every person in the West Bank is doing. When you hold so much data, you can direct it toward any purpose you choose.”

An Israel Defense Forces spokesperson declined to respond to The Guardian’s question about the new AI tool, but said the military “deploys various intelligence methods to identify and thwart terrorist activity by hostile organizations in the Middle East.”

Unit 8200’s previous AI tools, such as The Gospel and Lavender, were among those used during the war on Hamas. These tools played a key role in identifying potential targets for strikes and bombardments.

Moreover, for nearly a decade, the unit has used AI to analyze the communications it intercepts and stores, sort information into categories, learn to recognize patterns and make predictions.

When ChatGPT’s large language model was made available to the public in November 2022, the Israeli army set up a dedicated intelligence team to explore how generative AI could be adapted for military purposes, according to former intelligence officer Chaked Roger Joseph Sayedoff.

However, OpenAI, the company behind ChatGPT, rejected Unit 8200’s request for direct access to its LLM and refused to allow its integration into the unit’s systems.

Sayedoff highlighted another problem: existing language models could only process Modern Standard Arabic, not the spoken dialects, so Unit 8200 needed to develop its own model.

One source said: “There are no transcripts of calls or WhatsApp conversations on the internet. It doesn’t exist in the quantity needed to train such a model.”

Unit 8200 started recruiting experts from private tech companies in October 2023 as reservists. Ori Goshen, co-CEO and co-founder of the Israeli tech company AI21 Labs, confirmed that his employees participated in the project during their reserve duty.

The challenge for Unit 8200 was to “collect all the (spoken Arabic) text the unit has ever had and put it into a centralized place,” a source said, adding that the model’s training data eventually consisted of about 100 billion words.

Another source familiar with the project said the communications analyzed and fed to the training model included conversations in Lebanese and Palestinian dialects.

Goshen explained the benefits of LLMs for intelligence agencies but added that “these are probabilistic models — you give them a prompt or a question, and they generate something that looks like magic, but often the answer makes no sense.”

Zach Campbell, a senior surveillance researcher at Human Rights Watch, called such AI tools “guessing machines.”

He said: “Ultimately, these guesses can end up being used to incriminate people.”

Campbell and Nadim Nashif, director and founder of the Palestinian digital rights and advocacy group 7amleh, also raised concerns about the collection of data and its use in training the AI tool.

Campbell said: “We are talking about highly personal information, taken from people who are not suspected of any crime, to train a tool that could later help establish suspicion.”

Nashif said: “Palestinians have become subjects in Israel’s laboratory to develop these techniques and weaponize AI, all for the purpose of maintaining (an) apartheid and occupation regime where these technologies are being used to dominate a people, to control their lives.

“This is a grave and continuous violation of Palestinian digital rights, which are human rights.”