Reading time: 3 minutes

Despite its flaws, the artificial intelligence industry is attracting millions of dollars in investment, and the level of trust people place in it is worrying, said AI expert, computer scientist and author Dr Emmanuel Maggiori, speaking about the drawbacks and limits of AI at Vibe Festival on 7 July. He was joined in discussion by Ákos Péter, a computer science student and member of the MCC University Program.

The speaker drew attention to one of the major, and so far seemingly intractable, flaws of artificial intelligence: hallucination. AI presents false or inaccurate information in a plausible way, as if it were giving a clear, factual answer. As an example, he recalled that ChatGPT once told him an anaconda would not fit in a shopping mall. Asking a question for fun, or spotting an obvious error straight away, is harmless, but there have been cases where a lawyer relied on ChatGPT and cited non-existent court cases, and in self-driving cars such errors can be particularly spectacular, even dangerous.

But why does AI hallucinate in the first place? While humans have a comprehensive understanding of how the world works, current AI learns to perform tasks from examples: translation from documents written by humans, predictive text input from sentences found on the internet, and driving by copying the decisions of human drivers. This shortens the learning process, but because the system merely imitates, it never grasps how the world works and cannot deal with unexpected situations.
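The learning-from-examples idea can be made concrete with a toy sketch (mine, not the speaker's): a predictive-text system built purely by counting which word follows which in example sentences. It produces plausible continuations without any notion of what the words mean, and it has nothing to say about situations absent from its examples.

```python
from collections import Counter, defaultdict

# Hypothetical miniature training corpus standing in for
# "sentences found on the internet".
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased a mouse",
]

# Count which word follows which in the example sentences.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the examples."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # "cat": the most common follower in the corpus
print(predict_next("sat"))   # "on": it followed "sat" in every example
print(predict_next("mars"))  # None: never seen, so no answer at all
```

The predictions look sensible only because the examples were; the system has no knowledge beyond the statistics of its training data, which is the gap the speaker points to.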

Unfortunately, there is no solution to hallucinations for the time being. They are not fixable bugs but an expected by-product of machine learning, and they cause problems in real, running projects. According to Emmanuel Maggiori, it is futile to expect the hallucination problem to solve itself any time soon; overcoming it would require a new AI approach, closer to Artificial General Intelligence (AGI).

The speaker advised that AI can be very useful in easing our work, but it is important to assess the needs: how complex is the task, what does the client want, and how disruptive would a possible hallucination be? A perfect translation is not required for a hotel review, for example, but it is essential for a legal or medical document. A meaningless clickbait article can be written by AI, but an excellent, exciting and valuable text requires a journalist's experience and research, and the atmosphere and uniqueness they convey. So there are cases where acceptable quality is enough, and these are the jobs that AI can and will replace.

Emmanuel Maggiori's book, Smart Until It's Dumb: Why artificial intelligence keeps making epic mistakes (and why the AI bubble will burst), was published last year. In it, the author exposes unreasonable expectations, dubious practices and misinformation about AI. The London-based software engineer has worked in the AI industry for 10 years and is on a mission to raise awareness of the deceptively simple mechanisms behind AI and the dangers of overconfidence in it.