How to reduce hallucinations of large language models

03 April 2024 Consultancy.eu

The use of Large Language Models (LLMs) is taking off in the business landscape. While most of the output of LLMs is accurate, such models still provide responses that seem correct but are, in fact, inaccurate – known as hallucinations.

LLM hallucinations occur when a model such as GPT-4 produces output that sounds plausible but is factually wrong. They can range from subtle discrepancies, detectable only by experts, to glaring errors that anyone will notice.

For any business leveraging LLMs, it is key to keep hallucinations to an absolute minimum. Tristan van Thielen, AI expert at Devoteam, outlines four ways in which artificial intelligence engineers and machine learning experts can reduce the volume and impact of LLM hallucinations.


1) Retrieval Augmented Generation (RAG)
One emerging trend in addressing this challenge is Retrieval Augmented Generation (RAG). This method involves re-grounding LLMs in knowledge bases, ensuring they have access to relevant context before generating answers.

Different companies maintain diverse knowledge bases, such as product catalogs, customer support documentation, or internal business processes. By passing both the question and the pertinent information to the LLM, RAG produces answers that stay closer to the available information, minimizing the risk of hallucinations.
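As a rough illustration, the sketch below grounds answers in a small in-memory knowledge base using TF-IDF retrieval. The knowledge-base passages and the `call_llm` function are placeholders rather than part of any specific product, and a production setup would more likely use embedding-based vector search over a real document store.

```python
# Minimal RAG sketch: retrieve the most relevant passages from a small
# in-memory knowledge base and prepend them to the prompt before calling
# the model. All content below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KNOWLEDGE_BASE = [
    "Product X ships with a 2-year warranty covering manufacturing defects.",
    "Support tickets are answered within one business day.",
    "Returns are accepted within 30 days of purchase with the original receipt.",
]

def call_llm(prompt: str) -> str:
    """Placeholder: swap in the completion API of your choice."""
    raise NotImplementedError

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank knowledge-base passages by TF-IDF cosine similarity to the question."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(KNOWLEDGE_BASE + [question])
    question_vec = vectors[len(KNOWLEDGE_BASE)]
    doc_vecs = vectors[: len(KNOWLEDGE_BASE)]
    scores = cosine_similarity(question_vec, doc_vecs).flatten()
    return [KNOWLEDGE_BASE[i] for i in scores.argsort()[::-1][:top_k]]

def answer_with_rag(question: str) -> str:
    """Re-ground the model by injecting retrieved context into the prompt."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The instruction to answer "using only the context below" is what keeps the model close to the retrieved information; without it, the model is free to fall back on whatever it memorized during training.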

2) Reinforcement Learning from Human Feedback
When trying to detect hallucinations, the question arises: how can we effectively monitor and validate the output of these LLMs?

The most reliable and practical approach is to introduce human oversight into the process, a concept referred to as 'Reinforcement Learning from Human Feedback' (RLHF). This approach, notably adopted by Google, hinges on the input provided by human reviewers: LLM-generated outputs are presented to human evaluators, who judge whether they are correct and amend any inaccuracies they identify, thereby fine-tuning the LLM’s performance.

This reinforcement learning method becomes indispensable when a 100% accuracy rate is imperative. For instance, in customer support, one can opt for a setup where customers interact with chatbots. If the responses from the LLM fall short in accuracy, the issue can be escalated to a human agent, who then ensures the correct answer is provided.
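A minimal sketch of this human-in-the-loop step, reusing a hypothetical `call_llm` placeholder for the completion API, could look as follows: a reviewer accepts or amends each draft, and the corrected pairs are stored so they can later feed fine-tuning or reward-model training. This is a simplification of full RLHF, which additionally trains a reward model on such comparisons.

```python
# Human-in-the-loop sketch: a reviewer judges each LLM answer and can amend
# it; corrected (prompt, draft, correction) triples are kept as training
# signal for later fine-tuning. `call_llm` is a placeholder, as above.
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    # Each entry is (prompt, model_answer, corrected_answer).
    corrections: list[tuple[str, str, str]] = field(default_factory=list)

    def record(self, prompt: str, model_answer: str, corrected: str) -> None:
        self.corrections.append((prompt, model_answer, corrected))

def answer_with_review(prompt: str, store: FeedbackStore) -> str:
    draft = call_llm(prompt)  # placeholder for your LLM client
    print(f"Model draft:\n{draft}\n")
    verdict = input("Is this answer correct? [y/n] ").strip().lower()
    if verdict == "y":
        return draft
    # Escalate: the human provides the correct answer, and the case is logged
    # so the model can later be improved on exactly the examples where it failed.
    corrected = input("Enter the corrected answer: ")
    store.record(prompt, draft, corrected)
    return corrected
```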

Another prevalent use case for this approach is legal advice. Here, LLMs can produce boilerplate text that is subsequently verified and refined by paralegals, making the process significantly more efficient. Likewise, in the domain of medical advice, LLMs can assist patients by providing recommendations based on shared symptoms, with a doctor responsible for validating these suggestions.

Furthermore, LLMs can summarize medical patient records and make predictions based on the available information.

3) Putting in place Automated Alert Systems
In most organizations, the agile way of working is now the norm. Teams have shifted their perspective from the “what” of work to the “way” of work: it is all about establishing feedback loops, improving teamwork and continuously improving. Automated alert systems bring that same feedback loop to LLM deployments by continuously monitoring model output and raising an alert when suspect responses are detected, so issues are caught early rather than discovered by end users.
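A hypothetical monitor along these lines could track how often responses are flagged and raise an alert once the failure rate crosses a threshold. The `notify` hook and the window and threshold values are illustrative; in practice the alert would be sent to whatever channel the team uses, such as Slack, e-mail or an incident tool.

```python
# Illustrative alert sketch: count flagged LLM responses over a rolling
# window and raise an alert when the failure rate crosses a threshold.
from collections import deque

class HallucinationMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.results = deque(maxlen=window)  # True = response flagged as suspect
        self.threshold = threshold

    def record(self, flagged: bool) -> None:
        self.results.append(flagged)
        if len(self.results) < self.results.maxlen:
            return  # wait until the window is full before alerting
        rate = sum(self.results) / len(self.results)
        if rate > self.threshold:
            self.notify(rate)

    def notify(self, rate: float) -> None:
        # Stand-in for a real alerting channel (Slack, e-mail, incident tooling).
        print(f"ALERT: {rate:.1%} of recent responses were flagged for review")
```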

4) Using Topic Extraction Models
Another key aspect of reducing hallucinations is employing topic extraction models. These models analyze both the input and the output text of the LLM and can, for example, scrutinize them for references to sensitive topics such as sexism or racism. When such references are detected, they trigger alerts, indicating potential issues with the content.

Additionally, these models can be configured to search for and identify blacklisted words or phrases and require specific terms to be present, further fortifying the validation process. This approach significantly contributes to the task of validating the correctness and appropriateness of LLM-generated content.
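As a simplified stand-in for a trained topic extraction model, the sketch below applies keyword rules: it flags blacklisted phrases, checks that required terms are present, and reports which checks failed so an alert can be raised. The word lists are purely illustrative; a real system would typically combine such rules with a classifier.

```python
# Rule-based validation sketch standing in for a topic extraction model:
# flag blacklisted phrases, require certain terms, and flag sensitive topics.
import re

BLACKLIST = ["guaranteed returns", "definitely safe"]            # illustrative
REQUIRED_TERMS = ["warranty"]                                    # e.g. for product answers
SENSITIVE_TOPICS = {"sexism": ["sexist"], "racism": ["racist"]}  # toy keyword map

def validate_output(text: str) -> list[str]:
    """Return a list of issues; a non-empty list should trigger an alert."""
    issues = []
    lowered = text.lower()
    for phrase in BLACKLIST:
        if phrase in lowered:
            issues.append(f"blacklisted phrase: '{phrase}'")
    for term in REQUIRED_TERMS:
        if term not in lowered:
            issues.append(f"missing required term: '{term}'")
    for topic, keywords in SENSITIVE_TOPICS.items():
        if any(re.search(rf"\b{re.escape(k)}\b", lowered) for k in keywords):
            issues.append(f"sensitive topic detected: {topic}")
    return issues
```

The same check can be run on the input text as well as on the output, so that problematic prompts are flagged before they ever reach the model.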

Conclusion 

As the adoption of AI and LLMs continues to pick up, hallucinations will continue to impact the success of data-driven ways of working. Even with methods like Retrieval Augmented Generation (RAG) and Reinforcement Learning from Human Feedback, the problem of hallucinations remains unsolved and therefore deserves continued priority and attention.
