I recently attended a high-profile tech conference where artificial intelligence was the talk of the week. After listening to the hype, the promises and the long-term visions from heavily funded and deeply invested big tech companies, my takeaway is that AIs still aren’t very smart. And yes, there are already many AI engines out there, and many more are on the horizon.
It’s true that AI has reached significant milestones, and the technology continues to develop at a remarkable pace. But instead of self-aware super geniuses, today’s AIs are, in many cases, more like confused toddlers.
Humans who want to use AI to work faster, produce better results and innovate at high speed not only have to be good at their jobs; they also have to be skilled in the art of writing AI prompts. A prompt is the text or other input you use to ask an AI for a response. If you don’t know how to write a strong prompt, one that includes examples, gives direction, specifies formatting and builds on previous prompts, you’re not really doing it right, and that can cause the AI to hallucinate.
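To make the pieces of a strong prompt concrete, here is a minimal sketch in Python. The `build_prompt` helper and its field names are purely illustrative, not part of any real AI product’s interface; it simply assembles the elements described above (direction, a format to follow, and a worked example, sometimes called “few-shot” prompting) into one block of text you could paste into a chatbot.

```python
def build_prompt(task, format_hint, examples, question):
    """Assemble a prompt from direction, format advice and worked examples.

    This is an illustrative helper, not a real AI library function.
    """
    parts = [f"Task: {task}", f"Answer format: {format_hint}"]
    for example_q, example_a in examples:
        parts.append(f"Example question: {example_q}\nExample answer: {example_a}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)


# A hypothetical legal-research prompt in the spirit of the Schwartz episode:
# it gives direction, a format, and one example before asking the question.
prompt = build_prompt(
    task="Answer legal research questions. If you are not certain a case exists, say so.",
    format_hint="One short paragraph, citing only cases you are sure are real.",
    examples=[(
        "Does Miranda v. Arizona concern police interrogation?",
        "Yes. Miranda v. Arizona (1966) requires that suspects be informed of their rights.",
    )],
    question="What might limit a court's patience with fabricated citations?",
)
print(prompt)
```

The point is not the code itself but the habit it encodes: stating the task, the expected format and at least one example up front gives the AI far more to work with than a bare one-line question.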
What’s an AI hallucination, you ask? It’s the wrong answer. It’s false information. It’s a plausible lie. AI hallucinations can be silly, nonsensical or dangerous, particularly when you don’t realize one is happening. Just ask attorney Steven A. Schwartz, who was recently sanctioned for submitting a legal brief to a federal district court that cited more than half a dozen non-existent court decisions ChatGPT made up to support the prompt he entered.
Hallucinations can be great if you’re using AI to be creative; for example, they can lead to inventive storytelling, creative poems and unique images. But fact-finding and decision-making are risky when left up to a delusional AI engine.
Hallucinations happen because AIs don’t understand the real-world context that language describes. They use statistical patterns to generate text that is fluent and plausible within the context of the prompt, whether or not it is true. Hallucinations can range from a minor inconsistency to a flat-out fabrication, and they can happen for any number of reasons, including poor source data, built-in biases, skewed training and unclear prompts.
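A toy model makes this “statistical patterns, no understanding” point vivid. The sketch below is a bigram generator, vastly simpler than a real AI engine, but it illustrates the same failure mode: it picks each next word only by looking at which words followed which in its training text, so its output can look grammatical while asserting things that were never true.

```python
import random
from collections import defaultdict

# Tiny "training" text. A real system trains on billions of words,
# but the principle is the same: count which word follows which.
corpus = (
    "the court ruled in favor of the plaintiff . "
    "the court cited six decisions . "
    "the plaintiff cited the court ."
).split()

# follows["the"] holds every word seen after "the", with repeats,
# so common continuations are proportionally more likely to be chosen.
follows = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[prev_word].append(next_word)


def generate(start, length, seed=0):
    """Chain statistically likely next words; no notion of truth involved."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)


print(generate("the", 8))
```

Every adjacent word pair in the output occurred somewhere in the training text, so the sentence reads plausibly, yet the model may cheerfully assert that the court cited the plaintiff, or anything else its counts allow. Scaled up enormously, that is roughly how a chatbot can produce a confident, fluent citation to a case that does not exist.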
But have no fear, there are already guides, classes and certificate programs ready to teach you how to become a “prompt engineer,” so you can learn how to coax better answers from AI. In other words, you can spend your time and money learning how to write questions and instructions in a way that makes it easier for the AI engine to understand and process, so you can — hopefully — get the results you’re looking for. For me, this defeats the entire reason for using AI in the first place. It should make my job easier, not harder.
If you’re like me, and you don’t want to spend a lot of time and money learning how to get better at prompting AI, two tips will take you most of the way.
First, when it comes to AI, fact-checking is always critical. Second, prompt carefully: prompting is nuanced and easy to get wrong, and a sloppy prompt invites hallucinations. Prompt writing remains more of an art than a science. That may change over time, but until AI gets a lot smarter, we need to be very careful about how much faith we put in its answers.
April Godwin is an IT specialist. She lives in Lakebay.
UNDERWRITTEN BY NEWSMATCH/MIAMI FOUNDATION, THE ANGEL GUILD, ROTARY CLUB OF GIG HARBOR, ADVERTISERS, DONORS AND PEOPLE WHO SUPPORT LOCAL, INDEPENDENT NONPROFIT NEWS