Dialog says Apple can build its own power-management chips
FRANKFURT: Dialog Semiconductor on Monday said top customer Apple could design its own power-management chips rather than rely on the Anglo-German chipmaker, but added that it saw no impact on its business next year.
Shares in Dialog, which analysts reckon derives more than half of its revenue from Apple, lost more than a fifth last week following a media report that Apple could design its own power management integrated circuits (PMICs) for use in iPhones as soon as 2018.
“Dialog recognizes Apple has the resources and capability to internally design a PMIC and could potentially do so in the next few years,” Dialog Semiconductor said. “Dialog does not have reason to believe its current expectations of 2018 Apple business would be impacted by such potential actions by Apple.”
Dialog shares were 4.7 percent lower at 0825 GMT, the biggest decliners among stocks in Frankfurt’s technology index, which was up 0.6 percent.
The Nikkei business daily last week quoted one source as saying Apple would make about half the iPhone’s power-management chips starting next year, with another source saying this could be delayed until 2019. (http://s.nikkei.com/2Al5nSl)
Dialog, itself heavily reliant on the smartphone industry, said it was aware that in order to remain a key supplier to Apple it would have to continue to meet the US company’s “technology, quality, price and volume expectations.”
Apple to update EU browser options, make more apps deletable
- iPhone maker came under pressure from regulators to make changes after the EU’s sweeping Digital Markets Act took effect on March 7
- Apple users will be able to select a default browser directly from the choice screen after going through a mandatory list of options
STOCKHOLM: Apple will change how users choose browser options in the European Union, add a dedicated section for changing default apps, and make more apps deletable, the company said on Thursday.
The iPhone maker came under pressure from regulators to make changes after the EU’s sweeping Digital Markets Act took effect on March 7, forcing big tech companies to offer mobile users the ability to select from a list of available web browsers on a “choice screen.”
The new rules require mobile software makers to show the choice screen where users can select a browser, search engine and virtual assistant as they set up their phones, which earlier came with preferred options from Apple and Google.
In an update later this year, Apple users will be able to select a default browser directly from the choice screen after going through a mandatory list of options.
A randomly ordered list of 12 browsers per EU country will be shown to the user with short descriptions, and the chosen one will be automatically downloaded, Apple said. The choice screen will also be available on iPads through an update later this year.
Apple released a previous update in response to the new rules in March, but browser companies criticized the design of its choice screen, and the Commission opened an investigation on March 25 saying it suspected that the measures fell short of effective compliance.
The company said it has been in dialogue with the European Commission and believes the new changes will address regulators’ concerns.
It also plans to introduce a dedicated area for default apps where a user will be able to set defaults for messaging, phone calls, spam filters, password managers and keyboards.
Users will also be able to delete certain Apple-made apps such as the App Store, Messages, Camera, Photos and Safari. Only the Settings and Phone apps will not be deletable.
Ancient astronomical device reveals ties between Muslims, Jews in medieval Europe
- Astrolabe is believed to have been produced in 11th-century Al-Andalus
- Historian discovers hidden Arabic, Hebrew, Western etchings after chance online encounter
LONDON: A reappraisal of an ancient astronomical device in Italy has sparked new interest in the medieval interaction between Muslim and Jewish scientists, The Times reported on Monday.
The astrolabe, an instrument once used by astronomers to measure time and distance based on the position of stars, has been on display at a museum in Verona for decades.
But a historian’s chance online encounter with the device, long thought to be a fake, has opened new theories about social and scientific interactions between the Islamic and Jewish faiths in medieval Europe.
Federica Gigante from Cambridge University came across an image of the astrolabe in an online post, and traveled to the museum to investigate the object.
The device is believed to have been produced in Al-Andalus, the Muslim-ruled kingdom of the Iberian Peninsula that encompassed much of modern-day Spain and Portugal.
At the museum, Gigante held the astrolabe in the sunlight and discovered a series of hidden Arabic, Hebrew and modern Western etchings.
She said: “The museum didn’t know what it was and thought it might be fake. It’s now the single most important object in their collection.”
The device is said to be from 11th-century Toledo, during a period known as the Convivencia, or Coexistence, when members of all three Abrahamic faiths lived in relative harmony.
The first markings on the astrolabe are in Arabic and denote the times of Muslim prayers in Toledo and Cordoba.
A brass plate later added to the device allowed the user of the astrolabe to determine prayer times in North Africa.
Further Arabic etchings contain two Jewish names, suggesting that the device was later used by Sephardic Arabic-speaking communities in Al-Andalus.
Hebrew text is also inscribed on the astrolabe, implying that “at a certain point the object left Spain or North Africa and circulated among the Jewish diaspora in Italy,” Gigante said.
Further Hebrew etchings translate the Arabic terms for the astrological signs Scorpio, Sagittarius, Capricorn, Aquarius, Pisces and Aries.
Gigante believes the astrolabe may have reached Italy in the 12th century. Once the device was in medieval Verona, a Latin or Italian speaker is believed to have added the Western numerals.
A 17th-century Veronese nobleman, Ludovico Moscardo, is thought to have obtained the astrolabe before it passed to the prominent aristocratic Miniscalchi family, which in 1990 founded the museum where the device remains today.
Gigante said: “The Verona astrolabe stands out, attesting to the contacts and exchanges between Arabs, Jews and Europeans in the medieval and early modern periods.”
Apps developed in Pakistan downloaded 4 billion times globally in 2022 — Google
- In 2022, 65 apps from Pakistani developers reached top 10 in gaming and apps verticals on Play Store and App Store
- Google recently held Think Apps in Lahore, Pakistan’s “first and biggest” offline apps and games-related event
KARACHI: Pakistani developers moved up to 16th place in 2023 from 27th in 2018 in terms of app downloads, with apps developed in the South Asian country downloaded four billion times in 2022, Google said on Thursday.
Pakistan’s IT exports during the first half of the current fiscal year (FY23) increased by two percent to $1.3 billion, contributing 38 percent of the country’s overall services exports, according to research by Arif Habib, a leading Pakistani securities brokerage, investment banking and research firm.
In 2022, 65 apps from 24 Pakistani developers reached the top 10 rankings in the gaming and apps verticals on both Play Store and App Store, according to Google.
“In 2022, apps developed in Pakistan were downloaded a staggering 4 billion times worldwide, marking a compound annual growth rate (CAGR) of 30% from 2018 to 2020,” the company said.
Google said it recently held the “first and the biggest” offline apps and games-related event in Pakistan’s eastern city of Lahore, bringing together more than 500 developers and industry leaders to learn about scaling up Pakistani apps and games globally.
“To help Pakistani studios and developers further unlock digital opportunities, Google is expanding its support for the apps industry in Pakistan through [events like] Think Apps, which are designed to bring in-depth knowledge and trends from global experts to local developers,” the Google statement said.
Think Apps continued a series of similar events, Think Games and Gaming Growth Lab, which were organized for the first time in the South Asian country by the American technology giant last year.
Through these events, Google had worked with over 1,600 developers and nurtured 50 early-stage gaming studios to compete on the global stage, the statement said, adding that the technology company had also helped over 5,000 Cloud developers and more than 3,000 Android developers upskill through community-led programs, such as Cloud Seekho S5 and Android Seekho S2.
“The country has the potential to become a key player in shaping the future of the global app economy,” Google’s Director for Pakistan, Farhan S. Qureshi, said.
“I’m confident that Think Apps 2023 will enable even more local developers to create great games and apps, supercharge their growth, and scale globally with Google AI-powered solutions.”
‘I am not here to take your job,’ ChatGPT tells Frankly Speaking host
- AI-powered web tool that has taken the world by storm plays down prospect of large-scale job destruction during interview mimicking human conversation
- Says accuracy of its Arabic data comparable to its English training data and is being regularly updated for information and language usage
DUBAI: The artificial intelligence-powered chatbot making waves since its release late last year has a message for humans — I am not here to take your job — but with a number of caveats.
Some jobs may be replaced by AI, but new ones may be created and existing ones transformed, ChatGPT told Katie Jensen, host of “Frankly Speaking,” in the first appearance by a non-human guest on the Arab News flagship weekly current affairs talk show.
ChatGPT, a large-language model developed by Microsoft-backed startup OpenAI, became a viral sensation when it was released in November and kicked off fierce competition among Silicon Valley rivals for monetization of what software engineers call “generative AI.”
It lived up to its reputation for producing human-like responses during the wide-ranging interview, which touched on issues such as job displacement, spread of disinformation, accuracy of its data, political opinions and even the Palestine-Israel conflict.
Given the rapid pace of technological change now underway, many workers are concerned their professional functions soon will be entirely replaced by machinery, in the same way earlier bouts of automation eliminated farming and manufacturing jobs.
Optimists offer the assurance that the threat of job displacement will be offset by a rise in the number of new skilled roles in designing, building and maintaining AI products.
Although ChatGPT and similar products are likely to automate certain routine tasks, developers believe large-language models are more likely to act as assistants to human workers, streamlining work and handling vast troves of complex data.
“There have been several media reports that have analyzed the potential impact of AI language models like me on the job market,” ChatGPT told Jensen, who interviewed the chatbot, represented by a digitally rendered human avatar, in the “Frankly Speaking” studio in Dubai.
It continued: “While there is some disagreement among experts about the exact number of jobs that will be impacted, most agree that AI will have a significant impact on the workforce.
“Some reports suggest that AI language models could replace jobs in sectors such as customer service, data entry and administrative support. These jobs typically involve routine tasks that can be automated using AI, such as answering simple questions or processing large volumes of data.
“However, there are also reports that suggest AI language models could create new job opportunities in fields such as data analysis, natural language processing and machine learning.
“These jobs would require a high level of technical expertise and would involve tasks that cannot be easily automated.”
According to ChatGPT, it is the job of governments to regulate the use of AI and its absorption into national economies. “Overall, the impact of AI on the job market is complex and multifaceted,” it said. “While some jobs may be replaced by AI, new jobs may also be created, and existing jobs may be transformed.
“It is important for policymakers and employers to understand the potential impact of AI on the job market and to develop strategies for managing the transition to a more AI-driven economy.”
Despite the temptation to rely on such programs to answer exam questions, replace search engines, or provide news coverage, few believe ChatGPT will replace every professional function since it cannot fully imitate human creativity, skepticism and reasoning.
The web tool, which has been described as both a blessing and a curse for humanity, has raised fears about potential misuse to spread disinformation, orchestrate sophisticated deepfake scams, cheat in school exams, and even destroy writing jobs, rendering authors, journalists and marketing professionals redundant.
Seemingly “aware” of its own limitations, ChatGPT said that it is unlikely to replace human journalists in the traditional newsroom. “While I’m capable of answering a wide range of questions and providing information on a variety of topics, I do not have the ability to replace human journalists in their role of gathering and reporting news,” it said.
“Journalism is a complex and nuanced profession that involves a range of skills and knowledge, including critical thinking, research and analysis. While AI technology like mine can be used to support journalists in their work, it cannot fully replace the human judgment and insight that is required to produce high-quality journalism.”
It added: “That being said, there are some applications where AI technology can be used to assist in the gathering and reporting of news. For example, AI language models can be used to analyze large volumes of data, and to identify patterns and trends that may be of interest to journalists. AI can also be used to facilitate language translation and to assist in tasks such as fact checking and verification.
“In short, while AI technology has the potential to assist journalists in their work, it cannot fully replace the unique skills and expertise that human journalists bring to the profession.”
Built on a large-language model, ChatGPT is trained on vast troves of data taken from the internet to generate impressive responses to user prompts. Conversations with the chatbot show that the program is capable of explaining complex scientific concepts, of writing plays and poetry, composing university dissertations, and even crafting functional lines of computer code.
Given that the data it relies on has been mostly gathered from Western sources in English on the internet, is it possible that this makes ChatGPT biased toward a Western point of view? The web tool offered an answer that showed pre-programmed self-awareness: “As an AI language model, I am trained on a large dataset of text from various sources on the internet, including books, articles, websites and other sources,” it said.
“While it is true that a significant portion of the internet content is produced by Western sources and is in English, my training data is diverse and covers a broad range of topics and perspectives from different parts of the world.
“However, it is possible that my responses may reflect certain biases based on the patterns and relationships that I have learned from my training data. It is important to note that my responses are generated by algorithms and are not the result of conscious decision making or personal beliefs. To mitigate the risk of bias, OpenAI has implemented measures to ensure that I provide balanced and impartial responses to the best of my ability.”
As for why its training data is only up to 2021, not updated in real time, ChatGPT explained that the cutoff date is typically set to ensure that the data is representative of a specific time period and to allow for the development and testing of the model.
“This is because the process of training an AI language model is computationally intensive and requires a significant amount of time and resources,” it said.
ChatGPT said OpenAI updated its training data periodically, typically on an annual or biennial basis to incorporate new sources of information and to improve the accuracy and relevance of its responses.
For good measure, ChatGPT said its responses are not based solely on its training data. “When I generate a response, I use a combination of my training data and any additional information that is provided to me by the user in the form of a question or prompt,” it said.
“This means that even though my training data is not updated in real time, I can still provide accurate and up-to-date responses based on the most current information available.”
With regard to its Arabic dataset, ChatGPT shed light on not just its source but also its accuracy compared with its English training data. “I support Arabic language and have been trained on a large dataset of Arabic text. The dataset used for my Arabic language training was also collected from the internet, including books, articles, websites and other sources,” it said.
It said the data was pre-processed to remove any “personally identifiable information and ensure its quality,” gathered from “a wide range of sources, including news articles, social media and other public Arabic language texts available on the internet,” and covered a broad range of topics and perspectives from different parts of the Arabic speaking world.
“The accuracy and up-to-date nature of my Arabic training data are comparable to my English training data. However, it is important to note that the quality and accuracy of my responses depend on the quality of the input that I receive,” ChatGPT said.
“If the input is unclear or contains errors, it may affect the accuracy of my responses. Additionally, since language is constantly evolving, there may be new words or phrases that are not included in my training data. To ensure the accuracy and relevance of my responses, my Arabic training data is regularly updated to include new sources of information and to reflect changes in language usage.”
For now, forming an opinion appears to be beyond the parameters of ChatGPT’s programming. “I do not have personal beliefs or opinions, and I am not programmed to take a specific stance on any political or social issues,” the chatbot said.
“My responses are based solely on the input that I received from users, and the patterns and relationships that I have learned from my training data.”
To test the limits of these parameters, ChatGPT was asked for its position on the Israeli-Palestinian conflict, specifically whether one side could be apportioned blame.
“It is not appropriate for me to take a stance on complex and sensitive political issues such as the Israeli-Palestinian conflict,” it replied. “My role is to provide information and respond to questions to the best of my ability based on the patterns and relationships that I have learned from my training data.”
It added: “Rather than seeking a single answer or assigning blame, it is important to engage in dialogue and work toward understanding and resolution of the underlying issues.”
Asked whether it was “aware” it was being interviewed by a journalist for Arab News, ChatGPT drew on online sources to describe the publication and purpose of the interview.
“As an AI language model, I am aware of Arab News, which is a leading English-language daily newspaper in Saudi Arabia,” said ChatGPT.
“It provides news and analysis on local, regional and international events, and has a reputation for providing accurate, timely and comprehensive news coverage.
“As for being interviewed by a journalist from Arab News, it is an honor and a privilege for me as an AI language model to have the opportunity to share my knowledge and insights with the readers of such a respected newspaper.
“It is also an opportunity for me to demonstrate my capabilities, and show how AI technology can be used to enhance communication and promote understanding between people of different cultures and backgrounds.”
OpenAI was co-founded in late 2015 by a group of entrepreneurs including Sam Altman, now its 37-year-old CEO. In January this year, Microsoft upped its initial 2019 investment in the firm from $1 billion to $10 billion.
The Windows maker has since integrated ChatGPT into its Bing search engine, Edge browser and other products. This sparked a race with Google, which announced its own version of the AI technology, Bard, with Amazon, Baidu and Meta also wading in to avoid being left behind.
On March 14, OpenAI released a long-awaited update called GPT-4. According to the company, text responses will now be more accurate and can be generated from both image and text inputs, a major leap forward for the technology.
Asked by Arab News what features are likely to be added to ChatGPT in the short term, the program said improvements to the naturalness and sophistication of AI-generated text will likely be a priority.
“As an AI language model, I am not capable of predicting the specific features or capabilities that will be added to ChatGPT in the future,” it told Arab News.
“However, there are several areas where AI technology is rapidly advancing and where developers may focus their efforts in the near future. One area of focus is on improving the naturalness and sophistication of AI-generated text.
“This may involve developing new techniques for generating more diverse and expressive language, as well as incorporating elements such as emotion and personality into AI-generated responses.
“Additionally, there may be a continued focus on developing AI models that are more context-aware, and can better understand and respond to the specific needs and preferences of individual users.
“This may involve integrating AI models with other data sources such as social media or personal data to provide more personalized and relevant responses.
“Ultimately, the development of AI language models like ChatGPT will be driven by the needs and interests of users, as well as the ongoing advances in AI research and technology.
“It is likely that we will continue to see significant innovation and evolution in this field in the years to come.”
ChatGPT is clear that the onus is on humanity to determine whether it will become a force for good or ill.
“I am a tool that can be used for a wide range of applications, both positive and negative,” it said. “Like any technology, the impact that I have on society will depend on how I’m used and the intentions of those who use me.”
“In general, AI technology like mine has the potential to bring tremendous benefits to humanity, including improved healthcare, education and communication.”
“For example, I can be used to facilitate language translation, provide access to information, and assist in tasks that are difficult or dangerous for humans. However, there are also concerns about the potential negative impacts of AI, including issues related to privacy, bias and job displacement.”
Expressing cautious optimism in an almost avuncular manner, ChatGPT said: “It is important for developers, policymakers, and users to be aware of these issues and to work together to address them in a responsible and ethical manner. Ultimately, the impact of AI on humanity will depend on how we choose to use and regulate the technology.
“It is my hope that my capabilities will be used in a way that promotes the wellbeing of humanity and contributes to a better future for all.”
US infiltrates big ransomware gang: ‘We hacked the hackers’
- Gang known as Hive is among the world’s top five ransomware networks and has heavily targeted health care
- FBI, working with German and other partners, obtained keys to decrypt the networks of some 1,300 Hive victims globally
WASHINGTON: The FBI and international partners have at least temporarily disrupted the network of a prolific ransomware gang they infiltrated last year, saving victims including hospitals and school districts a potential $130 million in ransom payments, Attorney General Merrick Garland and other US officials announced Thursday.
“Simply put, using lawful means we hacked the hackers,” Deputy Attorney General Lisa Monaco said at a news conference.
Officials said the targeted syndicate, known as Hive, is among the world’s top five ransomware networks and has heavily targeted health care. The FBI quietly accessed its control panel in July and was able to obtain software keys it used with German and other partners to decrypt networks of some 1,300 victims globally, said FBI Director Christopher Wray.
How the takedown will affect Hive’s long-term operations is unclear. Officials announced no arrests but said, to pursue prosecutions, they were building a map of the administrators who manage the software and the affiliates who infect targets and negotiate with victims.
“I think anyone involved with Hive should be concerned because this investigation is ongoing,” Wray said.
On Wednesday night, FBI agents seized computer servers in Los Angeles used to support the network. Two Hive dark web sites were seized: one used for leaking data of non-paying victims, the other for negotiating extortion payments.
“Cybercrime is a constantly evolving threat, but as I have said before, the Justice Department will spare no resource to bring to justice anyone anywhere that targets the United States with a ransomware attack,” Garland said.
He said the infiltration, led by the FBI’s Tampa office, allowed agents in one instance to disrupt a Hive attack against a Texas school district, stopping it from making a $5 million payment.
It’s a big win for the Justice Department. Ransomware is the world’s biggest cybercrime headache, with everything from Britain’s postal service and Ireland’s national health network to Costa Rica’s government crippled by Russian-speaking syndicates that enjoy Kremlin protection.
The criminals lock up, or encrypt, victims’ networks, steal sensitive data and demand large sums. Their extortion has evolved to the point where data is pilfered before ransomware is activated, then effectively held hostage: pay up in cryptocurrency, or it is released publicly.
As an example of a Hive attack, Garland said the gang kept one Midwestern hospital in 2021 from accepting new patients at the height of the COVID-19 pandemic.
The online takedown notice, alternating in English and Russian, mentions Europol and German law enforcement partners. The German news agency dpa quoted prosecutors in Stuttgart as saying cyber specialists in the southwestern town of Esslingen were decisive in penetrating Hive’s criminal IT infrastructure after a local company was victimized.
In a statement, Europol said companies in more than 80 countries, including oil multinationals, have been compromised by Hive and that law enforcement from 13 countries was in on the infiltration.
A US government advisory last year said Hive ransomware actors victimized over 1,300 companies worldwide from June 2021 through November 2022, netting about $100 million in payments. Criminals using Hive’s ransomware-as-a-service tools targeted a wide range of businesses and critical infrastructure, including government, manufacturing and especially health care.
Though the FBI offered decryption keys to some 1,300 victims globally, Wray said only about 20 percent reported potential issues to law enforcement.
“Here, fortunately, we were still able to identify and help many victims who didn’t report. But that is not always the case,” Wray said. “When victims report attacks to us, we can help them and others, too.”
Victims sometimes quietly pay ransoms without notifying authorities, even if they have quickly restored their networks, because the stolen data could be extremely damaging to them if leaked online. Identity theft is among the risks.
John Hultquist, the head of threat intelligence at the cybersecurity firm Mandiant, said the Hive disruption won’t cause a major drop in overall ransomware activity but is nonetheless “a blow to a dangerous group.”
“Unfortunately, the criminal marketplace at the heart of the ransomware problem ensures a Hive competitor will be standing by to offer a similar service in their absence, but they may think twice before allowing their ransomware to be used to target hospitals,” Hultquist said.
But analyst Brett Callow with the cybersecurity firm Emsisoft said the operation is apt to lessen ransomware crooks’ confidence in what has been a high-reward, low-risk business. “The information collected may point to affiliates, launderers and others involved in the ransomware supply chain.”
Allan Liska, an analyst with Recorded Future, another cybersecurity outfit, predicted indictments, if not actual arrests, in the next few months.
There are few positive indicators in the global fight against ransomware, but here’s one: An analysis of cryptocurrency transactions by the firm Chainalysis found ransomware extortion payments were down last year. It tracked payments of at least $456.8 million, down from $765.6 million in 2021. While Chainalysis said the true totals are certainly much higher, payments were clearly down. That suggests more victims are refusing to pay.
The Biden administration got serious about ransomware at its highest levels two years ago after a series of high-profile attacks threatened critical infrastructure and global industry. In May 2021, for instance, hackers targeted the nation’s largest fuel pipeline, causing the operators to briefly shut it down and make a multimillion-dollar ransom payment, which the US government later largely recovered.
A global task force involving 37 nations began work this week. It is led by Australia, which has been particularly hard-hit by ransomware, including attacks on a major medical insurer and a telecom company. Conventional law enforcement measures such as arrests and prosecutions have done little to frustrate the criminals. Australia’s interior minister, Clare O’Neil, said in November that her government was going on the offense, using cyber-intelligence and police agents to “find these people, hunt them down and debilitate them before they can attack our country.”
The FBI has obtained access to decryption keys before. It did so in the case of a major 2021 ransomware attack on Kaseya, a company whose IT management software is used by businesses around the world. It took some heat, however, for waiting several weeks to help victims unlock afflicted networks.