Lawsuit against Meta asks if Facebook users have right to control their feeds using external tools

Updated 06 May 2024

  • The tool, called Unfollow Everything 2.0, is a browser extension that would let Facebook users unfollow friends, groups and pages and empty their newsfeed — the stream of posts, photos and videos that can keep them scrolling endlessly

Do social media users have the right to control what they see — or don’t see — on their feeds?
A lawsuit filed against Facebook parent Meta Platforms Inc. is arguing that a federal law often used to shield Internet companies from liability also allows people to use external tools to take control of their feed — even if that means shutting it off entirely.
The Knight First Amendment Institute at Columbia University filed a lawsuit Wednesday against Meta Platforms on behalf of an Amherst professor who wants to release a tool that enables users to unfollow all the content fed to them by Facebook’s algorithm.
The tool, called Unfollow Everything 2.0, is a browser extension that would let Facebook users unfollow friends, groups and pages and empty their newsfeed — the stream of posts, photos and videos that can keep them scrolling endlessly. The idea is that without this constant, addictive stream of content, people might use Facebook less. If the past is any indication, Meta will not be keen on the idea.
A UK developer, Louis Barclay, released a similar tool, called Unfollow Everything, but he took it down in 2021, fearing a lawsuit after receiving a cease-and-desist letter and a lifetime Facebook ban from Meta, then called Facebook Inc.
With Wednesday’s lawsuit, Ethan Zuckerman, a professor at the University of Massachusetts at Amherst, is trying to beat Meta to the legal punch and head off a suit by the social media giant over the browser extension.
“The reason it’s worth challenging Facebook on this is that right now we have very little control as users over how we use these networks,” Zuckerman said in an interview. “We basically get whatever controls Facebook wants. And that’s actually pretty different from how the Internet has worked historically.” Just think of email, where people can choose among different clients, or the web, where they can pick their browser or install anti-tracking software if they don’t want to be tracked.
Meta declined to comment.
The lawsuit filed in federal court in California centers on a provision of Section 230 of the 1996 Communications Decency Act, which is often used to protect Internet companies from liability for things posted on their sites. A separate clause, though, provides immunity to software developers who create tools that “filter, screen, allow, or disallow content that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
The lawsuit, in other words, asks the court to determine whether Facebook users’ news feed falls into the category of objectionable material that they should be able to filter out in order to enjoy the platform.
“Maybe CDA 230 provides us with this right to build tools to make your experience of Facebook or other social networks better and to give you more control over them,” said Zuckerman, who teaches public policy, communication and information at Amherst. “And you know what? If we’re able to establish that, that could really open up a new sphere of research and a new sphere of development. You might see people starting to build tools to make social networks work better for us.”
While Facebook does allow users to manually unfollow everything, the process can be cumbersome for people who follow hundreds or even thousands of friends, groups and businesses.
Zuckerman also wants to study how turning off the news feed affects people’s experience on Facebook. Users would have to agree to take part in the study — using the browser tool does not automatically enroll participants.
“Social media companies can design their products as they want to, but users have the right to control their experience on social media platforms, including by blocking content they consider to be harmful,” said Ramya Krishnan, senior staff attorney at the Knight Institute. “Users don’t have to accept Facebook as it’s given to them. The same statute that immunizes Meta from liability for the speech of its users gives users the right to decide what they see on the platform.”


Tech-fueled misinformation distorts Iran-Israel fighting

Updated 23 June 2025

  • It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation

WASHINGTON: AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods — such tech-enabled misinformation is distorting the Israel-Iran conflict, fueling a war of narratives across social media.
The information warfare unfolding alongside ground combat — sparked by Israel’s strikes on Iran’s nuclear facilities and military leadership — underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.
The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.
After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport.
The videos were widely shared across Facebook, Instagram and X.
Using a reverse image search, AFP’s fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.
There has been a “surge in generative AI misinformation, specifically related to the Iran-Israel conflict,” Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP.
“These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication.”
GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict.
The company linked the visually compelling videos — depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer — to Google’s Veo 3 AI generator, known for hyper-realistic visuals.
The Veo watermark is visible at the bottom of an online video posted by the news outlet Tehran Times, which claims to show “the moment an Iranian missile” struck Tel Aviv.
“It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion,” said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.
Farid offered one tip for spotting such deepfakes: Veo 3 videos are typically eight seconds long, or stitched together from clips of a similar duration.
“This eight-second limit obviously doesn’t prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share,” he said.
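Farid’s heuristic lends itself to a rough first-pass screen. The sketch below is purely illustrative (it is not a tool used by GetReal Security or AFP) and assumes FFmpeg’s ffprobe utility is installed; it simply flags clips whose running time sits near eight seconds or a multiple of it, as a prompt to pause and fact-check before resharing.

```python
# Illustrative sketch: flag video clips whose duration is close to the
# roughly eight-second length typical of Veo 3 output. A matching duration
# does not prove a clip is AI-generated; it is only a cue to fact-check.
# Requires the ffprobe tool from FFmpeg to be installed and on PATH.
import subprocess
import sys


def clip_duration_seconds(path: str) -> float:
    """Return the duration of a video file in seconds using ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())


def worth_a_second_look(path: str, target: float = 8.0, tolerance: float = 0.5) -> bool:
    """True if the clip's length sits near eight seconds or a multiple of it."""
    duration = clip_duration_seconds(path)
    remainder = duration % target
    return min(remainder, target - remainder) <= tolerance


if __name__ == "__main__":
    for video in sys.argv[1:]:
        note = "check before sharing" if worth_a_second_look(video) else "no duration cue"
        print(f"{video}: {note}")
```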
The falsehoods are not confined to social media.
Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims — ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.
Sources spreading these false narratives include Iranian military-linked Telegram channels and state media outlets affiliated with the Islamic Republic of Iran Broadcasting (IRIB), which is sanctioned by the US Treasury Department, NewsGuard said.
“We’re seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience,” McKenzie Sadeghi, a researcher with NewsGuard, told AFP.
Sadeghi described Iranian citizens as “trapped in a sealed information environment,” where state media outlets dominate in a chaotic attempt to “control the narrative.”
Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women’s protests and urging people to take to the streets.
Adding to the information chaos were online clips lifted from war-themed video games.
AFP’s fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3.
Israel’s military has rejected Iranian media reports claiming its fighter jets were downed over Iran as “fake news.”
Chatbots such as xAI’s Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.
“This highlights a broader crisis in today’s online information landscape: the erosion of trust in digital content,” BitMindAI’s Miyachi said.
“There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse.”


BBC shelves Gaza documentary over impartiality concerns, sparking online outrage

Updated 22 June 2025

  • The film, titled “Gaza: Doctors Under Attack,” had been under editorial consideration by the broadcaster for several months

LONDON: The BBC has decided not to air a highly anticipated documentary about medics in Gaza, citing concerns over maintaining its standards of impartiality amid the ongoing Israel-Gaza conflict.

The film, titled “Gaza: Doctors Under Attack” (also known as “Gaza: Medics Under Fire”), was produced by independent company Basement Films and had been under editorial consideration by the broadcaster for several months.

In a statement issued on June 20, the BBC said it had concluded that broadcasting the documentary “risked creating a perception of partiality that would not meet the BBC’s editorial standards.” The rights have since been returned to the filmmakers, allowing them to seek distribution elsewhere.

The decision comes in the wake of growing scrutiny over how the BBC is covering the Israel-Gaza war. Earlier this year, the broadcaster faced backlash after airing “Gaza: How to Survive a War Zone,” a short film narrated by a 13-year-old boy later revealed to be the son of a Hamas official. The segment triggered nearly 500 complaints, prompting an internal review and raising questions about vetting, translation accuracy, and the use of sources in conflict zones.

BBC insiders report that portions of “Gaza: Doctors Under Attack” had been considered for integration into existing news programming. However, concerns reportedly emerged during internal reviews that even limited broadcast could undermine the BBC’s reputation for neutrality, particularly given the politically charged context of the ongoing war.

Filmmaker Ben de Pear and journalist Ramita Navai, who worked on the documentary, have expressed disappointment at the decision. They argue that the film provided a necessary and unfiltered look at the conditions medical workers face in Gaza. “This is a documentary about doctors — about the reality of trying to save lives under bombardment,” said Navai. “To shelve this is to silence those voices.”

Critics of the BBC’s decision have been vocal on social media and online forums, accusing the broadcaster of yielding to political pressure and censoring Palestinian perspectives. One commenter wrote, “Sorry, supporters of the Israeli government would get very offended if we demonstrated the consequences … so we shelved it.” Others, however, defended the move, citing the importance of neutrality in public service broadcasting.

A BBC spokesperson said the decision was made independently of political influence and reflected long-standing editorial guidelines. “We are committed to reporting the Israel-Gaza conflict with accuracy and fairness. In this case, we concluded the content, in its current form, could compromise audience trust.”

With the rights now returned, Basement Films is expected to seek other avenues for release. Whether the documentary will reach the public via another broadcaster or platform remains to be seen.


Iran’s Internet blackout leaves public in dark, creates uneven picture of war with Israel

Updated 20 June 2025

  • Civilians are left unaware of when and where Israel will strike next, despite Israeli forces issuing warnings
  • Activists see it as a form of psychological warfare

DUBAI: As the war between Israel and Iran hits the one-week mark, Iranians have spent nearly half of the conflict in a near-communication blackout, unable to connect not only with the outside world but also with their neighbors and loved ones across the country.
Civilians are left unaware of when and where Israel will strike next, despite Israeli forces issuing warnings through their Persian-language online channels. When the missiles land, disconnected phone and web services mean people can go hours or days without knowing whether family or friends are among the victims. That has left many scrambling across various social media apps to see what’s happening, though only glimpses of life in a nation of over 80 million people make it onto the Internet.
Activists see it as a form of psychological warfare for a nation all too familiar with state information controls and targeted Internet shutdowns during protests and unrest.
“The Iranian regime controls the information sphere really, really tightly,” Marwa Fatafta, the Berlin-based policy and advocacy director for digital rights group Access Now, said in an interview with The Associated Press. “We know why the Iranian regime shuts down. It wants to control information. So their goal is quite clear.”
War with Israel tightens information space
But this time, it’s happening during a deadly conflict that erupted on June 13 with Israeli airstrikes targeting nuclear and military sites, top generals and nuclear scientists. At least 657 people, including 263 civilians, have been killed in Iran and more than 2,000 wounded, according to a Washington-based group called Human Rights Activists.
Iran has retaliated by firing 450 missiles and 1,000 drones at Israel, according to Israeli military estimates. Most have been shot down by Israel’s multitiered air defenses, but at least 24 people in Israel have been killed and hundreds of others wounded. Guidance from Israeli authorities, as well as round-the-clock news broadcasts, flows freely and consistently to Israeli citizens, creating over the last seven days an uneven picture of the death and destruction brought by the war.
The Iranian government contended Friday that it was Israel who was “waging a war on truth and human conscience.” In a post on X, a social media platform blocked for many of its citizens, Iran’s Foreign Ministry asserted Israel banned foreign media from covering missile strikes.
The statement added that Iran would organize “global press tours to expose Israel’s war crimes” in the country. Iran is one of the world’s top jailers of journalists, according to the Committee to Protect Journalists, and in the best of times, reporters face strict restrictions.
Internet-access advocacy group NetBlocks.org reported on Friday that Iran had been disconnected from the global Internet for 36 hours, with its live metrics showing that national connectivity remained at only a few percentage points of normal levels. The group said a handful of users have been able to maintain connectivity through virtual private networks.
Few avenues exist to get information
Those lucky few have become lifelines for Iranians left in the dark. In recent days, those who have gained access to mobile Internet for a limited time describe using that fleeting opportunity to make calls on behalf of others, check in on elderly parents and grandparents, and locate those who have fled Tehran.
What information Iranians can access is largely limited to websites inside the Islamic Republic. Meanwhile, Iran’s state-run television and radio stations offer only irregular updates on what’s happening inside the country, focusing instead on the damage wrought by Iranian strikes on Israel.
The lack of information going in or out of Iran is stunning, considering how technological advances in recent decades have brought far-flung conflicts in Ukraine, the Gaza Strip and elsewhere directly to people’s phones anywhere in the world.
Experts see that direct line as a powerful tool for shifting public opinion about an ongoing conflict and potentially forcing the international community to take a side. It has also translated into real action from world leaders facing public and online pressure to intervene or use their power to bring an end to the fighting.
But Mehdi Yahyanejad, a key figure in promoting Internet freedom in Iran, said that the Islamic Republic is seeking to “purport an image” of strength, one that depicts only the narrative that Israel is being destroyed by sophisticated Iranian weapons that include ballistic missiles with multiple warheads.
“I think most likely they’re just afraid of the Internet getting used to cause mass unrest in the next phase of whatever is happening,” Yahyanejad said. “I mean, some of it could be, of course, planned by the Israelis through their agents on the ground, and some of this could be just a spontaneous unrest by the population once they figure out that the Iranian government is badly weakened.”


BBC threatens legal action against AI startup Perplexity over content scraping

Updated 20 June 2025

  • Perplexity has faced accusations from media organizations, including Forbes and Wired, of plagiarizing their content

LONDON: The BBC has threatened legal action against Perplexity, accusing the AI startup of training its “default AI model” using BBC content, the Financial Times reported on Friday, making the British broadcaster the latest news organization to accuse the AI firm of content scraping.

The BBC may seek an injunction unless Perplexity stops scraping its content, deletes existing copies used to train its AI systems, and submits “a proposal for financial compensation” for the alleged misuse of its intellectual property, FT said, citing a letter sent to Perplexity CEO Aravind Srinivas.

The broadcaster confirmed the FT report on Friday.

Perplexity has faced accusations from media organizations, including Forbes and Wired, of plagiarizing their content but has since launched a revenue-sharing program to address publisher concerns.

Last October, the New York Times sent it a “cease and desist” notice, demanding the firm stop using the newspaper’s content for generative AI purposes.

Since the introduction of ChatGPT, publishers have raised alarms about chatbots that comb the internet to find information and create paragraph summaries for users.

The BBC said that parts of its content had been reproduced verbatim by Perplexity and that links to the BBC website had appeared in search results, according to the FT report.

Perplexity called the BBC’s claims “manipulative and opportunistic” in a statement to Reuters, adding that the broadcaster had “a fundamental misunderstanding of technology, the internet and intellectual property law.”

Perplexity provides information by searching the internet, similar to ChatGPT and Google’s Gemini, and is backed by Amazon.com (AMZN.O) founder Jeff Bezos, AI giant Nvidia (NVDA.O), and Japan’s SoftBank Group (9984.T).

The startup is in advanced talks to raise $500 million in a funding round that would value it at $14 billion, the Wall Street Journal reported last month.


Streaming platform Deezer starts flagging AI-generated music

Updated 20 June 2025

  • French streaming service Deezer is now alerting users when they come across music identified as completely generated by artificial intelligence, the company told AFP on Friday

PARIS: French streaming service Deezer is now alerting users when they come across music identified as completely generated by artificial intelligence, the company told AFP on Friday in what it called a global first.
The announcement by chief executive Alexis Lanternier follows repeated statements from the platform that a torrent of AI-generated tracks is being uploaded daily — a challenge Deezer shares with other streaming services including Swedish heavyweight Spotify.
Deezer said in January that it was receiving uploads of 10,000 AI tracks a day, doubling to over 20,000 in an April statement — or around 18 percent of all music added to the platform.
The company “wants to make sure that royalties supposed to go to artists aren’t being taken away” by tracks generated from a brief text prompt typed into a music generator like Suno or Udio, Lanternier said.
AI tracks are not being removed from Deezer’s library, but instead are demonetised to avoid unfairly reducing human musicians’ royalties.
Albums containing tracks suspected of being created in this way are now flagged with a notice reading “content generated by AI,” a move Deezer says is a global first for a streaming service.
Lanternier said Deezer’s home-grown detection tool was able to spot markers of AI provenance with 98 percent accuracy.
“An audio signal is an extremely complex bundle of information. When AI algorithms generate a new song, there are little sounds that only they make which give them away... that we’re able to spot,” he said.
“It’s not audible to the human ear, but it’s visible in the audio signal.”
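Deezer has not published how its detector works, and real AI-music identification is considerably more involved than anything that fits in a few lines. Purely as an illustration of Lanternier’s point, that patterns inaudible to the ear can still be visible in the audio signal, the hedged sketch below computes a spectrogram of a WAV file and compares spectral energy above and below 16 kHz; it is not Deezer’s tool, and an unusual ratio proves nothing on its own.

```python
# Illustrative sketch only: Deezer's detector is proprietary and far more
# sophisticated. This simply demonstrates that structure inaudible to the ear
# can still be measured in the signal, by comparing energy in the
# near-ultrasonic band against the audible band of a WAV file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram


def band_energy_ratio(path: str, split_hz: float = 16000.0) -> float:
    """Ratio of spectral energy above split_hz to energy below it."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    freqs, _, spec = spectrogram(samples, fs=rate, nperseg=2048)
    power = spec.sum(axis=1)                  # total power per frequency bin
    high = power[freqs >= split_hz].sum()
    low = power[freqs < split_hz].sum()
    return float(high / max(low, 1e-12))


if __name__ == "__main__":
    import sys
    for track in sys.argv[1:]:
        print(f"{track}: high/low band energy ratio = {band_energy_ratio(track):.6f}")
```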
With 9.7 million subscribers worldwide, most of them in France, Deezer is a relative minnow compared to Spotify, which has 268 million.
In January, the Swedish firm signed a deal with the world’s biggest label, Universal Music Group, intended to better remunerate artists and other rights holders.
But Spotify has not taken the same path as Deezer of demonetising AI content.
It has pointed to the lack of a clear definition of completely AI-generated audio, as well as the absence of any legal framework setting such works apart from human-created ones.