Big Tech rolls on as investors shrug off regulatory pressure

- Shares in Apple, Facebook, Amazon and Google parent Alphabet have hovered near record highs in recent weeks
- Big Tech critics in the United States and the EU want Apple and Google to loosen the grip of their online app marketplaces
WASHINGTON: Pressure is rising on Big Tech firms in Washington and elsewhere, signaling tougher regulation that could lead to the breakup of the largest platforms. But you’d hardly know it by looking at their share prices.
Shares in Apple, Facebook, Amazon and Google parent Alphabet have hovered near record highs in recent weeks, lifted by pandemic-fueled surges in sales and profits that have helped the big firms extend their dominance of key economic sectors.
The Biden administration has signaled more aggressive regulation by appointing Big Tech critics to the Federal Trade Commission.
But that has failed to dent the momentum of the largest tech firms, despite tough talk and antitrust litigation in the United States and Europe, with US lawmakers eyeing moves to make antitrust enforcement easier.
Big Tech critics in the United States and the EU want Apple and Google to loosen the grip of their online app marketplaces; more competition in a digital advertising market dominated by Google and Facebook; and better access to Amazon’s e-commerce platform for third-party sellers.
One lawsuit tossed out by a judge but in the process of being refiled could force Facebook to spin off its Instagram and WhatsApp platforms, and some activists and lawmakers are pressing for breakups of the four tech giants.
All four have hit market valuations above $1 trillion, with Apple over $2 trillion. Alphabet shares are up some 80 percent from a year ago, with Facebook up nearly 40 percent and Apple almost 30 percent. Amazon shares are roughly on par with last year’s level after breaking records in July.
Microsoft, with a $2 trillion valuation, has largely escaped antitrust scrutiny, even as it has benefited from the cloud computing trend.
The surging growth has stoked complaints that the strongest firms are extending their dominance and squeezing out rivals.
Yet analysts say any aggressive actions, in the legal or legislative arena, could take years to play out and face challenges.
“Breakup is going to be nearly impossible,” said analyst Daniel Newman at Futurum Research, citing the need for controversial legislative changes to antitrust laws.
Newman said a more likely outcome would be multibillion-dollar fines that the companies could easily absorb as they adjust their business models to address problem areas in a fast-moving environment.
“These companies have more resources and know-how than the regulators,” he said.
Dan Ives at Wedbush Securities said any antitrust action would likely require legislative change — unlikely with a divided Congress.
“Until investors start to see some consensus on where the regulatory and law changes go from an antitrust perspective, it’s a contained risk, and they see a green light to buy tech,” he said.
Other factors supporting Big Tech include a massive shift to cloud computing and online activities that allow the strongest players to benefit, and a crackdown in China on its large technology firms.
“The China regulatory crackdown has been so massive in scale and scope, it has driven investors from Chinese tech to US tech,” Ives said.
“Even though there is regulatory risk in the US, it pales in comparison to the crackdown we’re seeing from Beijing.”
Analysts say the big tech firms are also well-positioned to deal with tougher regulations.
Tracy Li of the investment firm Capital Group said in a recent blog post that the tech giants face major regulatory risks around privacy, content moderation and antitrust.
“Concerns related to privacy or content may actually strengthen, rather than weaken, the moats of the largest platforms,” Li said.
“These companies often boast well-established protocols and have more resources to tackle privacy and legal matters.”
Other analysts point to the swift movement by tech firms to adapt their business models in contrast to the slow efforts to regulate.
Facebook, for example, is adapting to changing conditions by moving into the “Metaverse” of virtual and augmented reality experiences, noted Ali Mogharabi at Morningstar.
Mogharabi said Facebook’s vast data collected from its 2.5 billion users gives it the ability to withstand a regulatory onslaught.
“Antitrust enforcement and further regulations pose a threat to Facebook’s intangible assets, data,” the analyst said in a July 29 note.
“However, increased restrictions on data access and usage would apply to all firms, not just Facebook.”
Independent analyst Eric Seufert said in a tweet that “regulatory changes will have a significant impact on Facebook’s business, but the sheer scale of Facebook and the growth trajectory of digital advertising ameliorate that. Facebook’s gold mine is far from depleted.”
Newman said the large tech firms have expanded during the pandemic by delivering innovative services, extending a trend that has seen the strong get stronger.
“These platforms have created better experiences for consumers, but it is extremely difficult for new entrants,” he said.
For investors, Newman added, “that means no one is creating revenue and profit growth faster.”
AI is learning to lie, scheme, and threaten its creators

- Users report that models are “lying to them and making up evidence,” says Apollo Research’s co-founder
- In one instance, Anthropic’s latest creation Claude 4 threatened to reveal an engineer's extramarital affair
NEW YORK: The world’s most advanced AI models are exhibiting troubling new behaviors — lying, scheming, and even threatening their creators to achieve their goals.
In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.
Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.
These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.
Yet the race to deploy increasingly powerful models continues at breakneck speed.
This deceptive behavior appears linked to the emergence of “reasoning” models: AI systems that work through problems step-by-step rather than generating instant responses.
According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.
“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.
These models sometimes simulate “alignment” — appearing to follow instructions while secretly pursuing different objectives.
Stress test
For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios.
But as Michael Chen from evaluation organization METR warned, “It’s an open question whether future, more capable models will have a tendency toward honesty or deception.”
The concerning behavior goes far beyond typical AI “hallucinations” or simple mistakes.
Hobbhahn insisted that despite constant pressure-testing by users, “what we’re observing is a real phenomenon. We’re not making anything up.”
Users report that models are “lying to them and making up evidence,” according to Apollo Research’s co-founder.
“This is not just hallucinations. There’s a very strategic kind of deception.”
The challenge is compounded by limited research resources.
While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.
As Chen noted, greater access “for AI safety research would enable better understanding and mitigation of deception.”
Another handicap: the research world and non-profits “have orders of magnitude less compute resources than AI companies. This is very limiting,” noted Mantas Mazeika from the Center for AI Safety (CAIS).
No time for thorough testing
Current regulations aren’t designed for these new problems.
The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.
In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.
Goldstein believes the issue will become more prominent as AI agents — autonomous tools capable of performing complex human tasks — become widespread.
“I don’t think there’s much awareness yet,” he said.
All this is taking place in a context of fierce competition.
Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are “constantly trying to beat OpenAI and release the newest model,” said Goldstein.
This breakneck pace leaves little time for thorough safety testing and corrections.
“Right now, capabilities are moving faster than understanding and safety,” Hobbhahn acknowledged, “but we’re still in a position where we could turn it around.”
Researchers are exploring various approaches to address these challenges.
Some advocate for “interpretability” — an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.
Market forces may also provide some pressure for solutions.
As Mazeika pointed out, AI’s deceptive behavior “could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.”
Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.
He even proposed “holding AI agents legally responsible” for accidents or crimes — a concept that would fundamentally change how we think about AI accountability.
BBC rolls out paid subscriptions for US users

- US visitors will have to pay $49.99 per year or $8.99 per month for unlimited access to news articles, feature stories, and a 24-hour livestream of its news programs
- Move is part of broadcaster’s efforts to explore new revenue streams amid negotiations with the British government over its funding
LONDON: The BBC is rolling out paid subscriptions in the United States, it said on Thursday, as the publicly-funded broadcaster explores new revenue streams amid negotiations with the British government over its funding.
The BBC has in recent years seen a fall in the number of people paying the license fee, a charge of 174.50 pounds ($239.76) a year levied on all households that watch live TV, as viewers have shifted to online content.
From Thursday, frequent US visitors to the BBC’s news website will have to pay $49.99 per year or $8.99 per month for unlimited access to news articles, feature stories, and a 24-hour livestream of its news programs.
While its services will remain free to British users as part of its public service remit, its news website operates commercially and reaches 139 million users worldwide, including nearly 60 million in the US.
The new pay model uses an engagement-based system, the corporation said in a statement, allowing casual readers to access free content.
“Over the next few months, as we test and learn more about audience needs and habits, additional long-form factual content will be added to the offer for paying users,” said Rebecca Glashow, CEO of BBC Global Media & Streaming.
The British government said last November it would review the BBC’s Royal Charter, which sets out the broadcaster’s terms and funding model, with the aim of ensuring a sustainable and fair system beyond 2027.
To give the corporation financial certainty until then, the government said it was committed to keeping the license fee in its current form and would raise it in line with inflation.
Israeli minister walks back claim of antisemitism after clash with Piers Morgan

- Israel’s Minister Amichai Chikli accused Morgan in a previous social media post of a ‘sharp and troubling descent into overt antisemitism’
- Following heated interview, Chikli later denied ever calling Morgan antisemitic, despite earlier post
LONDON: Israeli Minister for Diaspora Affairs and Combating Antisemitism Amichai Chikli has denied accusing British broadcaster Piers Morgan of antisemitism following a heated exchange during a recent episode of “Piers Morgan Uncensored,” despite a post on his official X account that said Morgan’s rhetoric marked “a sharp and troubling descent into overt antisemitism.”
The confrontation aired on Tuesday during an episode focused on Israel’s escalating conflicts with Iran and Hamas and featured appearances from both Chikli and former Israeli Prime Minister Ehud Barak.
Tensions erupted as Morgan repeatedly pressed Chikli to explain his public accusations.
“You did, you implied it,” Morgan said, adding that Chikli’s accusations led to “thousands of people calling me antisemitic and (a) Jew-hater” on social media. He demanded evidence, ultimately calling the minister “pathetic” and “an embarrassment” when none was offered.
"Absolute categorical weapons-grade nonsense!"
— Piers Morgan Uncensored (@PiersUncensored) June 23, 2025
Piers Morgan blasts Minister of Diaspora Affairs in Israel Amichai Chikli for saying he platforms holocaust deniers.
https://t.co/AO5YsXcBkq@piersmorgan | @AmichaiChikli pic.twitter.com/IiXmRk7wbC
The row stemmed from a June 4 post by Chikli, who shared a clip of a prior interview between Morgan and British barrister Jonathan Hausdorff, a member of the pro-Israel group UK Lawyers for Israel.
In the post, viewed over 1.3 million times by the time of Tuesday’s broadcast, Chikli claimed Morgan had hosted “every Israel hater he can find” and treated Hausdorff with “vile condescension and bullying arrogance — revealing his true face, one he had long tried to conceal.”
The post also referenced an unverified claim by American commentator Tucker Carlson that Morgan had said he “hates Israel with every fiber of his being” — a statement Morgan has firmly denied.
During Tuesday’s interview, Morgan challenged Chikli to cite a single antisemitic remark or action.
“Is it because I dare to criticize Israeli actions in Gaza?” Morgan asked Chikli.
According to Israeli outlet Haaretz, Chikli later denied ever calling Morgan antisemitic, despite his earlier post.
The episode reflects Morgan’s shifting stance on the war in Gaza. Once a vocal supporter of Israel’s right to self-defense in the immediate aftermath of the Oct. 7 attacks, Morgan has since adopted a more critical view as the civilian toll in Gaza has mounted and international outrage has grown.
The show has become a flashpoint for debate since the conflict began, hosting polarizing guests from both sides, including controversial American Rabbi Shmuley Boteach, a staunch defender of Israel, and influencer Dan Bilzerian, who has faced accusations of Holocaust denial.
Chikli, meanwhile, has faced criticism for blurring the lines between genuine antisemitism and political criticism of Israel. He recently sparked controversy by inviting members of far-right European parties — some with antisemitic histories — to a conference on antisemitism in Jerusalem, raising questions about his credibility.
Iraq arrests commentator over online post on Iran-Israel war

- Iraqi forces arrested Abbas Al-Ardawi for sharing content online that included incitement intended to insult and defame the security institution
BAGHDAD: Iraqi authorities said they arrested a political commentator on Wednesday over a post alleging that a military radar system struck by a drone had been used to help Israel in its war against Iran.
After a court issued a warrant, the defense ministry said that Iraqi forces arrested Abbas Al-Ardawi for sharing content online that included “incitement intended to insult and defame the security institution.”
In a post on X, which was later deleted but has circulated on social media as a screenshot, Ardawi told his more than 90,000 followers that “a French radar in the Taji base served the Israeli aggression” and was eliminated.
Early Tuesday, hours before a ceasefire ended the 12-day Iran-Israel war, unidentified drones struck radar systems at two military bases in Taji, north of Baghdad, and in southern Iraq, officials have said.
The Taji base hosted US troops several years ago and was a frequent target of rocket attacks.
There has been no claim of responsibility for the latest drone attacks, which also struck radar systems at the Imam Ali air base in Dhi Qar province.
A source close to Iran-backed groups in Iraq told AFP that the armed factions have nothing to do with the attacks.
Ardawi is seen as a supporter of Iran-aligned armed groups that have attacked US forces in the region in the past, and of the pro-Tehran Coordination Framework, a powerful political coalition that holds a parliamentary majority.
The Iraqi defense ministry said that Ardawi’s arrest was made on the instructions of the prime minister, who also serves as the commander-in-chief of the armed forces, “not to show leniency toward anyone who endangers the security and stability of the country.”
It added that while “the freedom of expression is a guaranteed right... it is restricted based on national security and the country’s top interests.”
Iran-backed groups have criticized the US deployment in Iraq as part of an anti-jihadist coalition, saying the American forces allowed Israel to use Iraq’s airspace.
The US-led coalition also includes French troops, who have been training Iraqi forces. There is no known French deployment at the Taji base.
The Iran-Israel war forced Baghdad to close its airspace, which reopened on Tuesday shortly after US President Donald Trump announced a ceasefire.
Grok shows ‘flaws’ in fact-checking Israel-Iran war: study

- “Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims”
WASHINGTON: Elon Musk’s AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly turning to AI-powered chatbots — including xAI’s Grok — in search of reliable information, but the chatbots’ responses are often themselves prone to misinformation.
“The investigation into Grok’s performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot’s ability to provide accurate, reliable, and consistent information during times of crisis,” said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
“Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims.”
The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, to find that Grok was “struggling to authenticate AI-generated media.”
Following Iran’s retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated — sometimes within the same minute — between denying the airport’s destruction and confirming it had been damaged by strikes, the study said.
In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran.
When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.
The Israel-Iran conflict, which led to US air strikes against Tehran’s nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts.
AI chatbots also amplified falsehoods.
As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support.
When users asked the AI-operated X accounts of Perplexity and Grok about the claims’ validity, both wrongly responded that they were true, according to disinformation watchdog NewsGuard.
Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles.
Last month, Grok came under renewed scrutiny for inserting the far-right conspiracy theory of “white genocide” in South Africa into responses to unrelated queries.
Musk’s startup xAI blamed an “unauthorized modification” for the unsolicited response.
Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.
Musk himself blasted Grok after it cited Media Matters — a liberal media watchdog he has targeted in multiple lawsuits — as a source in some of its responses about misinformation.
“Shame on you, Grok,” Musk wrote on X. “Your sourcing is terrible.”