Runaway growth of AI chatbots portends a future poised between utopia and dystopia

Updated 18 April 2023

  • Engineers who had been slogging away for years in academia and industry are finally having their day in the sun
  • Job displacements and social upheavals are nothing compared to the extreme risks posed by advancing AI tech

DUBAI: It was way back in the late 1980s that I first encountered the expressions “artificial intelligence,” “pattern recognition” and “image processing.” I was completing the final semester of my undergrad college studies, while also writing up my last story for the campus magazine of the Indian Institute of Technology at Kharagpur.

Never having come across these technical terms during the four years I majored in instrumentation engineering, I was surprised to discover that the smartest professors and the brightest postgrad students of the electronics and computer science and engineering departments of my own college were neck-deep in research and development work involving AI technologies. All while I was blissfully preoccupied with the latest Madonna and Billy Joel music videos and Time magazine stories about glasnost and perestroika.

Now that the genie is out of the bottle, the question is whether Big Tech is willing, or even able, to address the issues raised by the runaway growth of AI. (Supplied)

More than three decades on, William Faulkner’s oft-quoted saying, “The past is never dead. It’s not even past,” rings resoundingly true to me, albeit for reasons more mundane than sublime. Terms I seldom bumped into as a newspaperman and editor since leaving campus — “artificial intelligence,” “machine learning” and “robotics” — have sneaked back into my life, this time not as semantic curiosities but as man-made creations, for good or ill, with the power to make me redundant.

Indeed, an entire cottage industry that did not exist just six months ago has sprung up to both feed and whet a ravenous global public appetite for information on, and insights into, ChatGPT and other AI-powered web tools.

Teachers are seen behind a laptop during a workshop on the ChatGPT bot organized by the School Media Service (SEM) of the public education department of the Swiss canton of Geneva on February 1, 2023. (AFP)

The initial questions about what kinds of jobs would be created and how many professions would be affected have given way to far more profound discussions. Can conventional religions survive the challenges that will spring from artificial intelligence in due course? Will humans ever need to rack their brains to write fiction, compose music or paint masterpieces? How long will it take before a definitive cure for cancer is found? Can public services and government functions be performed by vastly more efficient and cheaper chatbots in the future?

Until October last year, few of us employed outside the arcane world of AI could have anticipated an explosion of existential questions of this magnitude in our lifetime. The speed with which they have moved from the fringes of public discourse to center stage reflects both the severely disruptive nature of these developments and their potentially unsettling impact on the future of civilization. Like it or not, we are all engineers and philosophers now.

Attendees watch a demonstration on artificial intelligence during the LEAP Conference in Riyadh last February. (Supplied)

By most accounts, as yet no jobs have been eliminated and no collapse of the post-Impressionist art market has occurred as a result of the adoption of AI-powered web tools, but if the past (as well as Ernest Hemingway’s famous phrase) is any guide, change will happen at first “gradually, then suddenly.”

In any event, the world of work has been evolving almost imperceptibly but steadily since automation disrupted the settled rhythms of manufacturing and service industries that were essentially byproducts of the First Industrial Revolution.

For people of my age group, a visit to a bank today bears little resemblance to one undertaken in the 1980s and 1990s, when withdrawing cash meant standing in an orderly line first for a metal token, then waiting patiently in a different queue to receive a wad of hand-counted currency notes, each process involving the signing of multiple counterfoils and the spending of precious hours.

Although the level of efficiency likely varied from country to country, the workflow required to dispense cash to bank customers before the advent of automated teller machines was more or less the same.

Similarly, a visit to a supermarket in any modern city these days feels rather different from the experience of the late 1990s. The row upon row of checkout staff have all but disappeared, leaving behind a lean-and-mean mix with the balance tilted decidedly in favor of self-service lanes equipped with bar-code scanners, contactless credit-card readers and thermal receipt printers.

Whatever one may call these endangered jobs in retrospect, minimum-wage drudgery or decent livelihood, society seems to have accepted that there is no turning the clock back on technological advances whose benefits outweigh the costs, at least from the point of view of business owners and shareholders of banks and supermarket chains.

Likewise, with the rise of generative AI (GenAI) a new world order (or disorder) is bound to emerge, perhaps sooner rather than later, but of what kind, only time will tell.

Just four months after ChatGPT was launched, OpenAI’s conversational chatbot is facing at least two complaints before a regulatory body in France over the use of personal data. (AFP)

In theory, ChatGPT could tell too. To this end, many a publication, including Arab News, has carried interviews with the chatbot, hoping to get the truth from the machine’s mouth, so to speak, instead of relying on the thoughts and prescience of mere humans.

But the trouble with ChatGPT is that the answers it punches out depend on the “prompts” or questions it is asked. The answers will also vary with every update of its training data and the lessons it draws from these data sets’ internal patterns and relationships. Put simply, what ChatGPT or GPT-4 says about its destructive powers today is unlikely to remain unchanged a few months from now.

Meanwhile, tantalizing though the tidbits have been, the occasional interview with the CEO of OpenAI, Sam Altman, or the CEO of Google, Sundar Pichai, has shed little light on the ramifications of rapid GenAI advances for humanity.

OpenAI CEO Sam Altman, left, and Microsoft CEO Satya Nadella. (AFP)

With multibillion-dollar investments at stake and competition for market share intensifying between Silicon Valley companies, these chief executives, along with Microsoft CEO Satya Nadella, can hardly be expected to answer objectively the many burning questions, starting with whether Big Tech ought to declare “a complete global moratorium on the development of AI.”

Unfortunately for a large swathe of humanity, the great debates of the day, featuring polymaths who can talk without fear or favor about a huge range of intellectual and political trends, are raging mostly out of reach, behind the strict paywalls of publications such as Bloomberg, the Wall Street Journal, the Financial Times and Time.

An essay by Niall Ferguson, the pre-eminent historian of the ideas that define our time, published in Bloomberg on April 9, offers a peek into the deepest worries of philosophers and futurists, implying that the fears of large-scale job displacements and social upheavals are nothing compared to the extreme risks posed by galloping AI advancements.

“Most AI does things that offer benefits not threats to humanity … The debate we are having today is about a particular branch of AI: the large language models (LLMs) produced by organizations such as OpenAI, notably ChatGPT and its more powerful successor GPT-4,” Ferguson wrote before going on to unpack the downsides.

In sum, he said: “The more I read about GPT-4, the more I think we are talking here not about artificial intelligence … but inhuman intelligence, which we have designed and trained to sound convincingly like us. … How might AI off us? Not by producing (Arnold) Schwarzenegger-like killer androids (of the 1984 film “The Terminator”), but merely by using its power to mimic us in order to drive us insane and collectively into civil war.”

Intellectually ready or not, behemoths such as Microsoft, Google and Meta, together with not-so-well-known startups like Adept AI Labs, Anthropic, Cohere and Stable Diffusion API, have had greatness thrust upon them by virtue of having developed their own LLMs with the aid of advances in computational power and mathematical techniques that have made it possible to train AI on ever larger data sets than before.

Just like in Hindu mythology, where Shiva, as the Lord of Dance Nataraja, takes on the persona of a creator, protector and destroyer, in the real world tech giants and startups (answerable primarily to profit-seeking shareholders and venture capitalists) find themselves playing what many regard as the combined role of creator, protector and potential destroyer of human civilization.

Microsoft is the “exclusive” provider of cloud computing services to OpenAI, the developer of ChatGPT. (AFP file)

While it does seem that a science-fiction future is closer than ever before, no technology exists as of now to turn back time to 1992 and enable me to switch from instrumentation engineering to computer science instead of a vulnerable occupation like journalism. Jokes aside, it would be disingenuous of me to claim that I have not been pondering the “what-if” scenarios of late.

Not because I am terrified of being replaced by an AI-powered chatbot in the near future and compelled to sign up for retraining as a food-delivery driver. Journalists are certainly better psychologically prepared for such a drastic reversal of fortune than the bankers and property owners in Thailand who overnight had to learn to sell food on the footpaths of Bangkok to make a living in the aftermath of the 1997 Asian financial crisis.

The regret I have is more philosophical than material: We are living in a time when engineers who had been slogging away for years in the forgotten groves of academe and industry, pushing the boundaries of AI and machine learning one autocorrect code at a time, are finally getting their due as the true masters of the universe. It would have felt good to be one of them, no matter how relatively insignificant one’s individual contribution.

There is a vicarious thrill, though, in tracking the achievements of a man by the name of P. Sundararajan, who won admission to my alma mater to study metallurgical engineering one year after I graduated.

Google Inc. CEO Sundar Pichai (C) is applauded as he arrives to address students during a forum at The Indian Institute of Technology in Kharagpur, India, on January 5, 2017. (AFP file)

Now 50 years old, he has a big responsibility in shaping the GenAI landscape, although he probably had no inkling of what fate had in store for him when he was focused on his electronic materials project in the final year of his undergrad studies. That person is none other than Sundar Pichai, whose path to the office of Google CEO went via IIT Kharagpur, Stanford University and Wharton business school.

Now, just as in the final semester of my engineering studies, I have no illusions about the exceptionally high IQ required to be even a writer of code for sophisticated computer programs. In an age of increasing specialization, “horses for courses” is not only a rational approach, it is practically the only game in town.

I am perfectly content with the knowledge that in the pre-digital 1980s, well before the internet as we know it had even been created, I had got a glimpse of the distant exciting future while reporting on “artificial intelligence,” “pattern recognition” and “image processing.” Only now do I fully appreciate how great a privilege it was.



Iraq arrests commentator over online post on Iran-Israel war

Updated 25 June 2025

  • Iraqi forces arrested Abbas Al-Ardawi for sharing content online that included incitement intended to insult and defame the security institution

BAGHDAD: Iraqi authorities said they arrested a political commentator on Wednesday over a post alleging that a military radar system struck by a drone had been used to help Israel in its war against Iran.

After a court issued a warrant, the defense ministry said that Iraqi forces arrested Abbas Al-Ardawi for sharing content online that included “incitement intended to insult and defame the security institution.”

In a post on X, which was later deleted but has circulated on social media as a screenshot, Ardawi told his more than 90,000 followers that “a French radar in the Taji base served the Israeli aggression” and was eliminated.

Early Tuesday, hours before a ceasefire ended the 12-day Iran-Israel war, unidentified drones struck radar systems at two military bases in Taji, north of Baghdad, and in southern Iraq, officials have said.

The Taji base hosted US troops several years ago and was a frequent target of rocket attacks.

There has been no claim of responsibility for the latest drone attacks, which also struck radar systems at the Imam Ali air base in Dhi Qar province.

A source close to Iran-backed groups in Iraq told AFP that the armed factions have nothing to do with the attacks.

Ardawi is seen as a supporter of Iran-aligned armed groups that have attacked US forces in the region in the past, and of the pro-Tehran Coordination Framework, a powerful political coalition that holds a parliamentary majority.

The Iraqi defense ministry said that Ardawi’s arrest was made on the instructions of the prime minister, who also serves as the commander-in-chief of the armed forces, “not to show leniency toward anyone who endangers the security and stability of the country.”

It added that while “the freedom of expression is a guaranteed right... it is restricted based on national security and the country’s top interests.”

Iran-backed groups have criticized the US deployment in Iraq as part of an anti-jihadist coalition, saying the American forces allowed Israel to use Iraq’s airspace.

The US-led coalition also includes French troops, who have been training Iraqi forces. There is no known French deployment at the Taji base.

The Iran-Israel war had forced Baghdad to close its airspace, before reopening on Tuesday shortly after US President Donald Trump announced a ceasefire.


Grok shows ‘flaws’ in fact-checking Israel-Iran war: study

Updated 25 June 2025

  • “Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims”

WASHINGTON: Elon Musk’s AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots — including xAI’s Grok — in search of reliable information, but their responses are often themselves prone to misinformation.
“The investigation into Grok’s performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot’s ability to provide accurate, reliable, and consistent information during times of crisis,” said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
“Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims.”
The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, to find that Grok was “struggling to authenticate AI-generated media.”
Following Iran’s retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated — sometimes within the same minute — between denying the airport’s destruction and confirming it had been damaged by strikes, the study said.
In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran.
When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.
The Israel-Iran conflict, which led to US air strikes against Tehran’s nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts.
AI chatbots also amplified falsehoods.
As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support.
When users asked the AI-operated X accounts of Perplexity and Grok about the claims’ validity, both wrongly responded that they were true, according to disinformation watchdog NewsGuard.
Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles.
Last month, Grok came under renewed scrutiny for inserting “white genocide” in South Africa, a far-right conspiracy theory, into replies to unrelated queries.
Musk’s startup xAI blamed an “unauthorized modification” for the unsolicited response.
Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.
Musk himself blasted Grok after it cited Media Matters — a liberal media watchdog he has targeted in multiple lawsuits — as a source in some of its responses about misinformation.
“Shame on you, Grok,” Musk wrote on X. “Your sourcing is terrible.”


Tech-fueled misinformation distorts Iran-Israel fighting

Updated 24 June 2025

  • It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation

WASHINGTON: AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods — such tech-enabled misinformation is distorting the Israel-Iran conflict, fueling a war of narratives across social media.
The information warfare unfolding alongside ground combat — sparked by Israel’s strikes on Iran’s nuclear facilities and military leadership — underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.
The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.
After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport.
The videos were widely shared across Facebook, Instagram and X.
Using a reverse image search, AFP’s fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.
There has been a “surge in generative AI misinformation, specifically related to the Iran-Israel conflict,” Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP.
“These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication.”
GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict.
The company linked the visually compelling videos — depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer — to Google’s Veo 3 AI generator, known for hyper-realistic visuals.
The Veo watermark is visible at the bottom of an online video posted by the news outlet Tehran Times, which claims to show “the moment an Iranian missile” struck Tel Aviv.
“It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion,” said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.
Farid offered one tip to spot such deepfakes: the Veo 3 videos were normally eight seconds in length or a combination of clips of a similar duration.
“This eight-second limit obviously doesn’t prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share,” he said.
The falsehoods are not confined to social media.
Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims — ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.
Sources spreading these false narratives include Iranian military-linked Telegram channels and state media sources affiliated with the Islamic Republic of Iran Broadcasting (IRIB), sanctioned by the US Treasury Department, NewsGuard said.
“We’re seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience,” McKenzie Sadeghi, a researcher with NewsGuard, told AFP.
Sadeghi described Iranian citizens as “trapped in a sealed information environment,” where state media outlets dominate in a chaotic attempt to “control the narrative.”
Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women’s protests and urging people to take to the streets.
Adding to the information chaos were online clips lifted from war-themed video games.
AFP’s fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3.
Israel’s military has rejected Iranian media reports claiming its fighter jets were downed over Iran as “fake news.”
Chatbots such as xAI’s Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.
“This highlights a broader crisis in today’s online information landscape: the erosion of trust in digital content,” BitMindAI’s Miyachi said.
“There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse.”


BBC shelves Gaza documentary over impartiality concerns, sparking online outrage

Updated 22 June 2025

  • The film, titled “Gaza: Doctors Under Attack,” had been under editorial consideration by the broadcaster for several months

LONDON: The BBC has decided not to air a highly anticipated documentary about medics in Gaza, citing concerns over maintaining its standards of impartiality amid the ongoing Israel-Gaza conflict.

The film, titled “Gaza: Doctors Under Attack” (also known as “Gaza: Medics Under Fire”), was produced by independent company Basement Films and had been under editorial consideration by the broadcaster for several months.

In a statement issued on June 20, the BBC said it had concluded that broadcasting the documentary “risked creating a perception of partiality that would not meet the BBC’s editorial standards.” The rights have since been returned to the filmmakers, allowing them to seek distribution elsewhere.

The decision comes in the wake of growing scrutiny over how the BBC is covering the Israel-Gaza war. Earlier this year, the broadcaster faced backlash after airing “Gaza: How to Survive a War Zone,” a short film narrated by a 13-year-old boy later revealed to be the son of a Hamas official. The segment triggered nearly 500 complaints, prompting an internal review and raising questions about vetting, translation accuracy, and the use of sources in conflict zones.

BBC insiders report that portions of “Gaza: Doctors Under Attack” had been considered for integration into existing news programming. However, concerns reportedly emerged during internal reviews that even limited broadcast could undermine the BBC’s reputation for neutrality, particularly given the politically charged context of the ongoing war.

Filmmaker Ben de Pear and journalist Ramita Navai, who worked on the documentary, have expressed disappointment at the decision. They argue that the film provided a necessary and unfiltered look at the conditions medical workers face in Gaza. “This is a documentary about doctors — about the reality of trying to save lives under bombardment,” said Navai. “To shelve this is to silence those voices.”

Critics of the BBC’s decision have been vocal on social media and online forums, accusing the broadcaster of yielding to political pressure and censoring Palestinian perspectives. One commenter wrote, “Sorry, supporters of the Israeli government would get very offended if we demonstrated the consequences … so we shelved it.” Others, however, defended the move, citing the importance of neutrality in public service broadcasting.

A BBC spokesperson said the decision was made independently of political influence and reflected long-standing editorial guidelines. “We are committed to reporting the Israel-Gaza conflict with accuracy and fairness. In this case, we concluded the content, in its current form, could compromise audience trust.”

With the rights now returned, Basement Films is expected to seek other avenues for release. Whether the documentary will reach the public via another broadcaster or platform remains to be seen.


Iran’s Internet blackout leaves public in dark, creates uneven picture of war with Israel

Updated 20 June 2025

  • Civilians are left unaware of when and where Israel will strike next, despite Israeli forces issuing warnings
  • Activists see it as a form of psychological warfare

DUBAI: As the war between Israel and Iran hits the one-week mark, Iranians have spent nearly half of the conflict in a near-communication blackout, unable to connect not only with the outside world but also with their neighbors and loved ones across the country.
Civilians are left unaware of when and where Israel will strike next, despite Israeli forces issuing warnings through their Persian-language online channels. When the missiles land, disconnected phone and web services mean not knowing for hours or days whether family or friends are among the victims. That has left many scrambling on various social media apps to see what is happening, with only glimpses of life able to reach the Internet in a nation of more than 80 million people.
Activists see it as a form of psychological warfare against a nation all too familiar with state information controls and targeted Internet shutdowns during protests and unrest.
“The Iranian regime controls the information sphere really, really tightly,” Marwa Fatafta, the Berlin-based policy and advocacy director for digital rights group Access Now, said in an interview with The Associated Press. “We know why the Iranian regime shuts down. It wants to control information. So their goal is quite clear.”
War with Israel tightens information space
But this time, it’s happening during a deadly conflict that erupted on June 13 with Israeli airstrikes targeting nuclear and military sites, top generals and nuclear scientists. At least 657 people, including 263 civilians, have been killed in Iran and more than 2,000 wounded, according to a Washington-based group called Human Rights Activists.
Iran has retaliated by firing 450 missiles and 1,000 drones at Israel, according to Israeli military estimates. Most have been shot down by Israel’s multitiered air defenses, but at least 24 people in Israel have been killed and hundreds others wounded. Guidance from Israeli authorities, as well as round-the-clock news broadcasts, flows freely and consistently to Israeli citizens, creating in the last seven days an uneven picture of the death and destruction brought by the war.
The Iranian government contended Friday that it was Israel that was “waging a war on truth and human conscience.” In a post on X, a social media platform blocked for many of its citizens, Iran’s Foreign Ministry asserted that Israel had banned foreign media from covering missile strikes.
The statement added that Iran would organize “global press tours to expose Israel’s war crimes” in the country. Iran is one of the world’s top jailers of journalists, according to the Committee to Protect Journalists, and even in the best of times reporters there face strict restrictions.
Internet-access advocacy group NetBlocks.org reported on Friday that Iran had been disconnected from the global Internet for 36 hours, with its live metrics showing that national connectivity remained at only a few percentage points of normal levels. The group said a handful of users have been able to maintain connectivity through virtual private networks.
Few avenues exist to get information
Those lucky few have become lifelines for Iranians left in the dark. In recent days, those who have gained access to mobile Internet for a limited time describe using that fleeting opportunity to make calls on behalf of others, checking in on elderly parents and grandparents, and locating those who have fled Tehran.
What access to information Iranians do have is largely limited to websites inside the Islamic Republic. Meanwhile, Iran’s state-run television and radio stations offer only irregular updates on what is happening inside the country, focusing instead on the damage wrought by Iran’s strikes on Israel.
The lack of information going in or out of Iran is stunning, considering that the advancement of technology in recent decades has only brought far-flung conflicts in Ukraine, the Gaza Strip and elsewhere directly to a person’s phone anywhere in the world.
That direct line has been seen by experts as a powerful tool to shift public opinion about any ongoing conflict and potentially force the international community to take a side. It has also turned into real action from world leaders under public and online pressure to act or use their power to bring an end to the fighting.
But Mehdi Yahyanejad, a key figure in promoting Internet freedom in Iran, said that the Islamic Republic is seeking to “purport an image” of strength, one that depicts only the narrative that Israel is being destroyed by sophisticated Iranian weapons that include ballistic missiles with multiple warheads.
“I think most likely they’re just afraid of the Internet getting used to cause mass unrest in the next phase of whatever is happening,” Yahyanejad said. “I mean, some of it could be, of course, planned by the Israelis through their agents on the ground, and some of this could be just a spontaneous unrest by the population once they figure out that the Iranian government is badly weakened.”