New Frontiers in AI: Lessons from Riyadh

Seldom does a day go by when one doesn’t hear about a new breakthrough in AI. Terms like “artificial general intelligence” or “the alignment problem,” once known only to domain experts, have now largely become part of our everyday vocabulary.

AI is said to be on the cusp of revolutionizing our lives. It is a truly “disruptive” technology in the sense that it will alter existing markets and industries and give rise to new sectors and business models.

To be sure, AI’s impact will not be confined to the economy. It will significantly shape politics and our increasingly fraught societies. Some argue it is already doing just that.

Not long ago, visionary experts, academics and policymakers converged in Riyadh for the Global AI Summit (GAIN) 2024 to take stock of developments at the cutting edge of AI. Among the many pathbreaking sessions, including one that featured Deepak Chopra, the famous new age guru, one specifically took up “hallucination” in AI, a problem that AI experts are desperate to get a handle on.

In the context of AI, hallucination refers to the generation of incorrect or misleading results. It comes in many forms. AI may forecast an unlikely event, like rain when there is almost no probability of it taking place. Another example is the citing of studies that do not exist. Contradictions in text, when one sentence does not follow from the previous one, are also a form of hallucination, as are completely random answers that have no connection to the input prompt.

Why does AI hallucinate? The answer lies partly in the nature of “intelligence” within AI. Leading AI tools like large language models (LLMs), GPT-4 for instance, are not intelligent the way humans are. LLMs are, at best, pattern-spotting engines: they predict the next word in a sentence without knowing what the word actually means. For this reason, when outputs go awry, LLMs have no way of realizing their mistakes.
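To make the point concrete, here is a minimal sketch of next-word prediction in Python. The word table and probabilities are invented purely for illustration; a real LLM learns billions of such statistics from text, but the principle is the same: it picks a statistically likely continuation without any grasp of meaning or truth.

```python
import random

# Toy next-word table: for each two-word context, a probability
# distribution over possible next words. These numbers are invented
# for illustration only.
NEXT_WORD = {
    ("the", "sky"): {"is": 0.8, "was": 0.15, "fell": 0.05},
    ("sky", "is"): {"blue": 0.7, "clear": 0.2, "green": 0.1},
}

def next_word(context):
    """Sample a next word given the last two words of context."""
    dist = NEXT_WORD.get(tuple(context[-2:]), {"<unknown>": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

sentence = ["the", "sky"]
for _ in range(2):
    sentence.append(next_word(sentence))

# Output is fluent-looking, e.g. "the sky is blue", but occasionally
# "the sky is green": a confident continuation with no notion of truth.
print(" ".join(sentence))
```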

AI hallucination also has to do with the nature and quality of training data. Researchers at Rice University have shown that training AI models on synthetic data carries significant risks, including complete “model collapse.” AI models are also prone to hallucination if the quality of data is poor, that is, if it contains errors, biases or inconsistencies.

Many leading AI experts argue that AI’s generation of incorrect results should not be termed hallucination. The claim certainly has merit: for an entity to hallucinate, it has to perceive a non-existent thing as real. Since LLMs do not possess cognition, they do not perceive and thus cannot hallucinate. In a sense, LLMs are “zombies” that, lacking an inner self, cannot be expected to hallucinate.

Instead of hallucination, these experts argue, “confabulation” is a more appropriate term for the erroneous outputs generated by LLMs. In human psychology, confabulation refers to the fabrication of explanations when events in someone’s long-term memory are hard to retrieve due to brain injury or dementia. In a sense, confabulation occurs when the brain is forced to fill in the blanks without a word list to choose from. Confabulation is not lying, because it is involuntary and not intended to deceive others.

A lot of work is being done on advanced hallucination/confabulation detection models like HHEM. One way we can potentially deal with the problem is by borrowing tools from the field of quality assurance. For instance, if an AI output scores more than three standard deviations from the mean, the model should immediately prompt the user to seek alternative sources of information.
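As a sketch of how such a three-sigma check might look in practice, the Python below flags a response whose score falls more than three standard deviations from the historical mean. What exactly gets scored is an open design choice; the scores here are hypothetical, standing in for something like a detector’s per-response factuality score.

```python
import statistics

def flag_outlier(history, new_score, k=3.0):
    """Classic quality-assurance rule: flag a value more than k standard
    deviations away from the mean of past observations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_score - mean) > k * stdev

# Hypothetical per-response factuality scores (e.g. from a detector
# such as HHEM); these numbers are illustrative, not real measurements.
past_scores = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.87, 0.94]

if flag_outlier(past_scores, new_score=0.41):
    print("Warning: this response may be unreliable; consult other sources.")
```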

As AI pioneers grapple with challenges at the frontiers of AI, it is heartening to see the giant strides taken by the Kingdom of Saudi Arabia, such as the launch of ALLaM, an Arabic language model considered one of the best generative AI models for Arabic worldwide.

Pakistan is also hoping to replicate the Kingdom’s AI successes, as evidenced by the announcement of plans to develop Pakistan’s first Urdu-language LLM. To keep moving forward, Pakistan’s policymakers will require vision, talent, resources and dedication. But given other nations’ head start in AI, Pakistan’s journey appears long and daunting.

– The writer completed his doctorate in economics on a Fulbright scholarship.

X: @AqdasAfzal 

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point-of-view