Two journalists killed in Iraq drone strike: officials

The mother (C) and father (2-R) of one of two women journalists killed in a drone strike walk past Kurdish Asayish security forces as relatives and journalists gather in front of the forensic department where the bodies of the two women were brought, in Sulaimaniyah in Iraq's autonomous Kurdistan region on August 23, 2024. (AFP)
Updated 24 August 2024


  • The director of Kurdish media production house CHATR, Kamal Hama Ridha, said he employed the journalists, saying one was a resident of Sulaimaniyah province while the other was a Kurd from Turkiye

SULAIMANIYAH, Iraq: A drone strike killed two women journalists in Iraq’s autonomous Kurdish region on Friday, officials said, blaming Turkiye whose military operates against Kurdish fighters in the area.
The counter-terrorism service in regional capital Irbil said the dead were fighters of the Kurdistan Workers’ Party (PKK), but officials in the region’s second city Sulaimaniyah said they were journalists.
An Iraqi security official told AFP on condition of anonymity that a “drone likely belonging to the Turkish army struck a vehicle carrying journalists” in Sayyid Sadik, east of Sulaimaniyah.
When contacted by AFP, the defense ministry in Ankara said it was “not the Turkish army” that carried out the strike.
The counter-terrorism service in Irbil reported a strike by “a Turkish army drone against a vehicle of fighters of the Kurdistan Workers’ Party in Sayyid Sadik district.”
“A PKK official, his driver and a fighter were killed” in the bombing, it added.
But the head of the Sulaimaniyah journalists’ union, Karouan Anwar, told reporters that the two women killed were “known to work in the world of journalism and the media.”
The director of Kurdish media production house CHATR, Kamal Hama Ridha, said he employed the journalists, saying one was a resident of Sulaimaniyah province while the other was a Kurd from Turkiye.
The Kurdish region’s deputy prime minister, Qubad Talabani, described the strike as an “unjustifiable crime” and a “flagrant violation of Iraqi sovereignty.”
“The victims of the drone attack... were two journalists and not members of an armed force and did not represent a threat to the security and stability of any country or the region,” he said.
The PKK, which has fought a decades-long insurgency against the Turkish state, has rear-bases in the mountains of northern Iraq.
The Turkish army maintains a network of bases in the region to fight the Kurdish militant group, which is blacklisted as a “terrorist organization” by the European Union and the United States.
Following a visit to Baghdad by Turkish officials, the federal government declared the PKK a “banned organization” in March.
Earlier this month, Turkiye agreed a military cooperation pact with Iraq that will see joint training and command centers to fight the Kurdish militants.


Musk calls Australian government ‘fascists’ over misinformation law

Updated 59 sec ago

  • Australia’s Labor government unveils legislation which could fine Internet platforms up to 5 percent of their global revenue
  • The proposed law would require tech platforms to set regulator-approved codes of conduct to stop dangerous falsehoods from spreading

SYDNEY: Elon Musk, owner of social media platform X, on Friday called Australia’s center-left government “fascists” over proposed legislation to slap fines on social media firms for failing to prevent the spread of misinformation online.
Australia’s Labor government on Thursday unveiled legislation which could fine Internet platforms up to 5 percent of their global revenue for enabling misinformation, joining a worldwide push to rein in borderless tech giants.
The proposed law would require tech platforms to set codes of conduct, to be approved by a regulator, to stop dangerous falsehoods from spreading. If a platform failed to do so, the regulator would set its own standard and fine firms for non-compliance.
Musk, who views himself as a champion of free speech, responded to a post by an X user linking to the Reuters story about the misinformation law with one word: “Fascists.”
A spokesperson for Communications Minister Michelle Rowland said by email that companies operating in Australia must comply with Australian laws.
“This bill improves the transparency and accountability of platforms for users and the Australian people,” Rowland said.
Musk’s comment on the push against misinformation drew criticism and ridicule from other government lawmakers.
“Elon Musk’s had more positions on free speech than the Kama Sutra. When it’s in its commercial interests, he is the champion of free speech and when he doesn’t like it ... he’s going to shut it all down,” Government Services Minister Bill Shorten told Channel Nine’s breakfast show.
Social media platforms should not publish scam content, deepfake materials and livestream violence in the name of free speech, Assistant Treasurer Stephen Jones told ABC television.
In a previous clash with the Australian government, X in April went to court to challenge a cyber regulator’s order for the removal of some posts about the stabbing of a bishop in Sydney, prompting Prime Minister Anthony Albanese to call Musk an “arrogant billionaire.”
The regulator later dropped its challenge against X after a setback in the federal court.
X had blocked Australian users from viewing the posts about the stabbing but refused to remove them globally on the grounds that one country’s rules should not control the Internet.


Seeing isn’t believing: AI Summit’s warning on deepfakes

Updated 12 September 2024

RIYADH: The use of deepfakes — images or recordings that have been altered or manipulated to depict real or nonexistent people — raised worrying ethical questions, but could also bring positive, life-changing benefits, the Global AI Summit in Riyadh was told on Thursday. 
In a panel titled “Deepfakes: Navigating Ethical, Technical, and Social Implications,” Thomas Schneider, ambassador and director of international affairs at the Swiss Federal Office of Communications, said that definitions of a “deepfake” can have a political, technical, or legal basis.

The technology has deep ethical implications depending on “whether it’s a piece of art or (if it is) insulting someone.” 

On the same panel, Jennifer Williams, assistant professor at the University of Southampton, said that deepfakes could have “minuscule edits, minor alterations.” 

“It only takes three to five seconds of your voice in order to create a deepfake that can bypass automatic speaker verification systems that we use, for example in the UK,” she said.

Williams said that deepfakes are not always designed to be misleading or illicit.  

The rise of speech synthesis technology, a form of deepfake, is driving growing demand among a number of companies, she said.  

Benefits of the technology include use in voice reconstruction; voice privacy, which concerns the biometric information carried in the speech signal; and screen readers for the visually impaired, built on speech synthesis technology. 

Williams warned that deepfake technology could also lead to a rise in mental health issues due to fake media consumption.  

“One of my concerns is that we might see various new types of mental health issues surrounding how we question our reality. I don’t know if the younger generation is going to come up to speed as quickly as we have with the changing world and how that might impact how they handle information or just each other,” she said.

Walid Magdy, co-founder and CTO at Alan AI, said that deepfakes have been put to negative use on social media platforms, such as a TikTok account operated by a man impersonating US actor Tom Cruise.  

The slow response in removing deepfakes from platforms poses problems, he said. 

“To convince the people to remove the harm will take even longer. It’s the same with fake news — it propagates among people, and they think it’s true and act on it. 

“Don’t believe everything you see; you have to double check many times.” 

At the end of the panel, Schneider discussed the future of AI, saying: “We need to learn and step up. It’s a mix of measures. We need to technically develop standards that help us. We need to legally develop standards that help us. We need to develop our own skills and common sense. We have no choice.” 


Israeli army investigates claims of fabricated stories in The Jewish Chronicle

Updated 13 September 2024

  • Experts and media organizations debunked stories about Gaza that appeared in the London-based newspaper under the byline Elon Perry
  • Claims about Perry’s reported experience with the Israeli military and as a former professor at Tel Aviv University have also been called into question

DUBAI: The Israel Defense Forces are investigating stories published by The Jewish Chronicle, amid claims that they were not true.

The London-based newspaper published an exclusive story on Sept. 5 that said Hamas leader Yahya Sinwar was planning to flee to Iran through the Philadelphi Corridor, a narrow strip of land along the border between the Gaza Strip and Egypt, along with other leaders of his organization and Israeli hostages.

The claim was said to be based on details provided by “intelligence sources,” and information obtained “during the interrogation of a captured, senior Hamas official” and “documents seized” on Aug. 29.

The story gained traction online but was quickly debunked by several experts, media organizations and other critics.

Israeli broadcaster Channel 12 said that “all of the relevant sources in the security establishment are unaware of the supposed intelligence.”

Israeli news website Ynet published a story in which four sources from the country’s intelligence community and the army’s prisoners and missing persons division were quoted as saying that the newspaper’s claims were “100 percent lies” and a “wild fabrication.”

The story is not the only one published by The Jewish Chronicle that has been called into question. Several other reports carrying the same byline, Elon Perry, are now alleged to have been fabricated.

Perry’s biographical information on the newspaper’s website states he is “a former commando soldier of the elite Golani Brigade of the Israel Defense Forces, which he served in for 28 years” and he has been “a journalist for 25 years covering wars and terrorist attacks.”

The Guardian newspaper found no record of any significant published stories written by Perry as a reporter in English or Hebrew, other than the recent series of articles in The Jewish Chronicle.

On his personal website, Perry said that during his military service he was part of the Israeli army’s mista’arvim units, the members of which go undercover as Palestinians. He also said he participated in Operation Entebbe, the 1976 Israeli military raid in Uganda to rescue the passengers of a plane that had been hijacked by Palestinian and German militants.

Perry is also a published author. Pen and Sword Books, the publisher of his second book, “Children in War,” describes him as having been a political science and history professor at Tel Aviv University between 1998 and 2014.

These claims are false, Israeli broadcaster Channel 13 reported on Sept. 9 in its program “Hazinor.” It said it could find no record of Perry working at the university or participating in Operation Entebbe.

During an interview with a Channel 13 reporter, Perry confirmed he had not been a professor at Tel Aviv University or an undercover Israeli army operative.

Other publications and journalists, including Israeli-Palestinian magazine +972 and Simi Spolter, a tech journalist with Israel business newspaper The Marker, also highlighted inconsistencies and outright falsehoods in Perry’s claims.

The Jewish Chronicle published a statement on Thursday in which it said it was “aware of allegations concerning a freelance journalist, which we take very seriously.”

It added: “The Jewish Chronicle is the oldest Jewish newspaper in the world and has always maintained the highest standards of reporting and integrity. An investigation is underway and there will be an update in due course.”


IBM deal with Saudi Arabia ‘brings a piece of HQ to KSA,’ regional VP says

Updated 12 September 2024

  • SDAIA and IBM announce that their AI models, ALLAM and Watsonx, are available together on Deem

RIYADH: The latest IBM collaboration with the Saudi Data and AI Authority means a major part of the computing giant’s headquarters has made its way to Saudi Arabia, IBM’s regional vice president said.

“I am happy to share that the majority of the employees in there (software development lab) are actually Saudis that have already filed patents and those patents are already being used, and one of those products is Watsonx,” Ayman Al-Rashed told Arab News at the main hall of the Global AI Summit in Riyadh on Thursday.

AI was expected to contribute $135 billion to the Kingdom’s economy by 2030, the equivalent of 12.5 percent of GDP, Al-Rashed said.

“That’s massive, and when you have that much impact, usually innovation is going to follow. So what we think is that you’re going to have a lot of breakthroughs. We believe that joint effort will accelerate the breakthrough.”

On Tuesday, SDAIA and IBM announced that their respective AI models, ALLAM and Watsonx, were available together on Deem, a government cloud-computing platform.

Deem provides “software as a service” offerings such as email, file sharing, video meetings and data rights management, as well as “infrastructure as a service” offerings such as virtual data centers, backup, cloud storage and domain hosting.

Al-Rashed told Arab News that Watsonx, IBM’s cloud-based commercial platform for generative AI and scientific data, which offers a studio, data store and governance toolkit, is being developed in Riyadh.

ALLAM, the generative AI platform serving Saudi Arabia and Arabic speakers around the world, was added in its pilot phase to IBM’s Watsonx data platform at the IBM Think 2024 conference as one of the world’s best Arabic generative models.

He said that with this collaboration, IBM “wanted to bring a piece of the headquarters to KSA.”

Al-Rashed highlighted IBM’s plan to announce future projects at their “IBM Think” event next week.


AI can bridge gap between industry and academia, Riyadh summit told

Updated 12 September 2024

RIYADH: AI could be the key to breaking the long-running rivalry between industry and academia, experts have told the Global AI Summit in Riyadh.

Long viewed as having competing interests, the two sectors could work in harmony, opening major opportunities for both, panelists said on the final day of the summit.

Ahmed Serag, professor and director of AI Innovation Lab at Weill Cornell Medicine in Qatar, said that though academia and industry often operate in parallel, their differing “incentives and priorities” can create challenges for collaboration.

“They both have different incentives and priorities. Academia has been, most of the time, if not all the time, about advancing knowledge — which seems to take long time frames.

“(The outcome) also gets measured in publications and peer recognition. Looking at industry, on the other hand, they tend to build products that will provide return on investment — basically, generating profit,” he said.

Serag attributed the delay in applying academic research to industry’s domination of resources.

This is evident in the AI field, where talent, data and infrastructure are heavily concentrated in the private sector, he said.

However, Chuck Yoo, executive vice president for research affairs at Korea University, said that there is potential to reverse the trend.

“These days with the AI era, I’m seeing a huge change in how academia and industry collaborate,” he said, adding that AI’s rapid development is the key to bridging the gap between academia and industry.

Serag highlighted the importance of effective communication in solving the issue.

“One of the solutions to this (communication problem) is, for example, to have programs or fellowships where interns or Ph.D. students could spend some time in the industry,” he said.

This would “expand their perspectives and give them a taste of how their work could apply in the real world,” he added.

A common trap in academia is falling into what academics call an “endless loop of research,” a problem that industry rarely faces due to financial incentives, Serag said.

More collaboration could prevent the issue by giving researchers a clearer picture of targets, he added.

“There have also been very good initiatives like building joint research centers and research labs,” Serag said, highlighting facilities formed between the Saudi Data and Artificial Intelligence Authority, King Abdullah University of Science and Technology, and King Fahd University of Petroleum and Minerals.

Establishing shared intellectual property agreements at an early stage is key to bridging the gap, the panelists said.

“This is a fundamental part of why the industry, the company, wants to protect the rights to use this technology, and on the other hand the university wants to publish, to get recognition, which is why we call publications ‘the currency of academia’,” Serag added.

One solution is to “have a buffer where you agree on a patent on this (technology) first, and then for the university just to publish that after,” he said.

Abdulmuhsen Al-Ajaji, vice president of cloud software and services at Ericsson Saudi Arabia, said that more and more academics are taking examples from the industrial world.

“Universities are now launching their own accelerators, their own incubators and VCs (venture capital funds), and investing directly in companies and startups to not only be part of the research, but also more toward owning that IP, commercializing that IP and just launching it for the public,” he said.

But industry’s exploitation of academic research is a long-running trend that will prove difficult to break, Serag said.

“The first leap that happened in 2012, where we managed to get most of the advanced algorithms for AI vision based on neural networks, actually started from academia; with the ImageNet competition that was organized by researchers, and then Google took over and scaled it up with their resources, and it is now part of all of the models we use for self-driving cars, medical diagnosing and so on.”