Why Your Chatbot Is a Tool, Not Your Friend

If you use ChatGPT, Copilot, or any other chatbot, you’ve realized they are programmed to please. I find Copilot sickeningly sweet at times. If you need someone to tell you how outstanding your work is, ask a chatbot for help, and it will shower you with praise.

It borders on the ridiculous at times, which makes me roll my eyes. Did you know this eagerness to please is one of the causes of hallucinations? The chatbot is trained to find an answer for you, even if that answer is wrong.

Double-check the answers the AI gives you; that’s a given. But you can also reduce the chance of hallucinations by upgrading your prompts, a practice known as prompt engineering.

General prompts, such as “how do I find new customers,” are likely to induce hallucinations and won’t provide helpful advice. Be specific with your requests. Try “Provide links to expert articles on ways to find new customers for PR consulting businesses.”
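If you send prompts from a script rather than a chat window, the same rule applies. Here is a minimal sketch using the OpenAI Python SDK; the model name and the prompt wording are my own illustrations, not something any vendor prescribes.

```python
# Minimal sketch: a vague prompt vs. a specific one, sent through the
# OpenAI Python SDK (openai>=1.0). The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "How do I find new customers?"
specific_prompt = (
    "Provide links to expert articles on ways to find new customers "
    "for PR consulting businesses. If you are not sure a link is real, "
    "say so instead of guessing."
)

for label, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Comparing the two answers side by side makes it easy to see how much guesswork a vague prompt invites.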

Stay away from complex, rambling prompts; break them up into well-defined, shorter sentences. Giving the chatbot a cue, such as pointing it to a specific resource or website, also yields better results.

Use chain-of-thought (CoT) prompting: structure the prompt by asking for step-by-step conclusions and/or information, and ask the chatbot to show its reasoning for each step, as in the sketch below.
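A CoT request can be as simple as spelling out the steps you want inside the prompt itself. The example below is a rough sketch; the scenario, wording, and model name are assumptions for illustration only.

```python
# Minimal sketch of a chain-of-thought style prompt: ask the model to
# work step by step and label its reasoning. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

cot_prompt = (
    "I run a small PR consulting business and want to raise my rates.\n"
    "Work through this step by step:\n"
    "1. List the factors I should consider before raising rates.\n"
    "2. For each factor, explain your reasoning in one or two sentences.\n"
    "3. End with a short recommendation based on the steps above."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```

Because the answer arrives as numbered steps with visible reasoning, it is much easier to spot the point where the chatbot starts making things up.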

Remember, the more specific your prompt, the better the result you will get. Don’t let a chatbot write the final draft of a document, as it could hallucinate something embarrassing.

Author: Kris Keppeler, a curious writer who finds technology fascinating. Follow her on X (Twitter) @KrisNarrates, on Medium.com @kriskeppeler, and LinkedIn.
