
Is ChatGPT unreliable? Here are 5 ways to improve accuracy & prevent hallucinations

Don’t let your AI chatbot fool you with false or irrelevant answers. Follow these five tips to reduce hallucinations and get the most out of your conversations.

Hallucinations can’t be stopped, but they can be reduced significantly with these tips. (Express image)

AI chatbots are becoming more popular and powerful, thanks to advances in natural language processing and deep learning. They can help us with various tasks, such as booking flights, ordering food, or answering questions. However, they are not perfect. Sometimes, they can produce responses that are inaccurate, irrelevant, or even nonsensical. This is called “hallucination”, and it happens when the AI model generates something that is not based on reality or logic.

Hallucination can be a serious problem, especially when we rely on AI chatbots for important decisions or information. Imagine asking a chatbot for financial tips and being told to invest in a Ponzi scheme, or asking it for historical facts and getting events that never happened.

How can we prevent AI chatbots from hallucinating? Here are some tips to help you get more accurate and reliable responses. They apply to any chatbot, whether it’s ChatGPT, Bing Chat, Bard, or Claude.

1. Use simple, direct language

One of the main causes of hallucination is ambiguity. When you use complex or vague prompts, the AI model may not understand what you want or what you mean. It may try to guess or fill in the gaps, resulting in inaccurate or irrelevant responses.

To avoid this, you should use simple, direct language when you communicate with AI chatbots. Make sure your prompts are clear, concise, and easy to understand. Avoid using jargon, slang, idioms, or metaphors that may confuse the AI model.

For example, instead of asking an AI chatbot “What’s the best way to stay warm in winter?”, which could have many possible interpretations and answers, you could ask “What are some types of clothing that can keep me warm in winter?”, which is more specific and straightforward.
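The same principle applies if you reach the model through an API rather than a chat window. Here is a minimal sketch using OpenAI’s Python SDK; the model name is illustrative, and it assumes an OPENAI_API_KEY is set in your environment:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Vague: many possible interpretations, more room for the model to guess.
# vague_prompt = "What's the best way to stay warm in winter?"

# Specific: one concrete question with a narrow answer space.
specific_prompt = "What are some types of clothing that can keep me warm in winter?"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```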

2. Incorporate context into your prompts

Another way to reduce ambiguity is to provide some context in your prompts. Context helps the AI model narrow down the possible outcomes and generate a more relevant and appropriate response. Context can include information such as your location, preferences, goals, or background.


For example, instead of asking an AI chatbot “How can I learn a new language?”, which is a very broad and open-ended question, you could ask “How can I learn French in six months if I live in India and have no prior knowledge of French?”, which gives the AI model more details and constraints to work with.
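If you build prompts in code, a small template makes it easy to fold that context in consistently. A sketch in Python, with made-up field names for illustration:

```python
# Context fields to fold into the prompt (values are illustrative).
context = {
    "language": "French",
    "timeframe": "six months",
    "location": "India",
    "prior_knowledge": "no prior knowledge",
}

# A narrow, context-rich prompt instead of the broad
# "How can I learn a new language?"
prompt = (
    f"How can I learn {context['language']} in {context['timeframe']} "
    f"if I live in {context['location']} and have "
    f"{context['prior_knowledge']} of {context['language']}?"
)
print(prompt)
```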

3. Give the AI a specific role – and tell it not to lie

Sometimes, an AI model makes things up when it does not have a clear sense of its identity or purpose. It may try to imitate human behaviour or personality, which can lead to errors or inconsistencies, or it may try to ‘impress’ you with claims that are not true or realistic.

To prevent this, you should give the AI a specific role – and tell it not to lie. A role defines what the AI model is supposed to do or be, such as a teacher, a friend, a doctor, or a journalist. A role also sets some expectations and boundaries for the AI model’s behaviour and responses.

For example, if you want to ask an AI chatbot about history, you could say “You are a brilliant historian who knows everything about history and you never lie. What was the cause of World War 1?”. This way, you are telling the AI model what kind of knowledge and tone it should use, and what kind of answer it should give.
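When using the API, this role usually goes into the system message, which frames every reply in the conversation. A minimal sketch with OpenAI’s Python SDK (the model name and wording are illustrative):

```python
from openai import OpenAI

client = OpenAI()

# The system message pins down the model's role and discourages fabrication.
system_prompt = (
    "You are a careful historian. Answer only from well-established facts, "
    "and if you are unsure about something, say so instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What was the cause of World War 1?"},
    ],
)
print(response.choices[0].message.content)
```

Putting the instruction in the system message rather than the user turn means it persists across the whole conversation instead of applying to a single question.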


4. Limit the possible outcomes

Another reason hallucination happens is that the AI model has too many options or possibilities to choose from. It may generate something random or unrelated to your prompt, or something that contradicts its previous responses.

To avoid this, you should limit the possible outcomes by specifying the type of response you want. You can do this by using keywords, formats, examples, or categories that guide the AI model towards a certain direction or goal.

For example, if you want to ask an AI chatbot for a recipe, you could say “Give me a recipe for chocolate cake in bullet points”. This way, you are telling the AI model what kind of content and structure it should use for its response.
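API users get an extra lever here: besides format instructions in the prompt itself, the temperature parameter controls how widely the model samples, and lower values make the output more deterministic. A sketch, again with an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()

# Spell out the expected structure so the model has fewer degrees of freedom.
prompt = (
    "Give me a recipe for chocolate cake. "
    "Respond only with a bulleted ingredient list followed by numbered steps, "
    "with no extra commentary."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # low temperature narrows the sampling distribution further
)
print(response.choices[0].message.content)
```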

5. Pack in relevant data and sources unique to you

Finally, one of the best ways to prevent a chatbot from spewing misinformation is to include relevant data and sources in your prompts. These can be facts, statistics, evidence, or references that support your question, as well as personal information or experiences that make your prompt more specific and unique to you.


By providing data and sources unique to you, you are giving the AI model more context and information to work with. You are also making it harder for the model to generate something generic or inaccurate. You are essentially grounding your prompt in reality.

For example, if you want to ask an AI chatbot for career advice, you could say “I am a 25-year-old software engineer with three years of experience in web development. I want to switch to data science, but I don’t have any formal education or certification in that field. What are some steps I can take to make the transition?”. This way, you are giving the AI model more details about your situation and goal, and asking for a specific and realistic solution.
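In code, it can help to keep your personal facts separate from the question itself, so they are easy to reuse and update across prompts. A Python sketch with illustrative values:

```python
# Facts about the user, kept separate so they are easy to update.
profile = (
    "I am a 25-year-old software engineer with three years of experience "
    "in web development. I want to switch to data science, but I don't have "
    "any formal education or certification in that field."
)
question = "What are some steps I can take to make the transition?"

# Grounded prompt: the model answers against your facts,
# not generic assumptions.
prompt = f"{profile}\n\n{question}"
print(prompt)
```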

Of course, these tips will only reduce hallucinations significantly, not eliminate them, so it’s wise to keep fact-checking the output anyway.

Zohaib is a tech enthusiast and a journalist who covers the latest trends and innovations at The Indian Express's Tech Desk. A graduate in Computer Applications, he firmly believes that technology exists to serve us and not the other way around. He is fascinated by artificial intelligence and all kinds of gizmos, and enjoys writing about how they impact our lives and society. After a day's work, he winds down by putting on the latest sci-fi flick.
