
10 Mistakes to Avoid When Writing AI Prompts

Updated: 11-07-2024

When you’re new to crafting AI prompts, you can easily make mistakes. Using AI tools the right way makes you more productive and efficient, but if you aren’t careful, you may develop bad habits while you’re still learning. In this article, we clue you in to 10 mistakes you should avoid from the start.

Not Spending Enough Time Crafting and Testing Prompts

One common mistake when using AI tools is not putting in the effort to carefully craft your prompts. You may be tempted — very tempted — to quickly type out a prompt and get a response back from the AI, but hurried prompts usually produce mediocre results. Taking the time to compose your prompt using clear language increases your chances of getting the response you want. A poor response signals that you need to evaluate the prompt to see where you can clarify or improve it. It’s an iterative process, so don’t be surprised if you have to refine your prompt several times. Like any skill, learning to design effective prompts takes practice and patience. The key is to resist the urge to take shortcuts and to put in the work needed to guide the AI to a great response.
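
To make this concrete, here’s a minimal sketch in Python (the prompt wording is our own, purely for illustration) contrasting a hurried prompt with a refined version of the same request:

    # A hurried prompt leaves the AI guessing about audience, length, and focus.
    rushed_prompt = "Write about email marketing."

    # A refined prompt spells out format, audience, and constraints.
    refined_prompt = (
        "Write a 300-word introduction to email marketing for small-business "
        "owners. Use a friendly tone, include three practical tips, and avoid jargon."
    )

    print(refined_prompt)

If the response still misses the mark, change one element at a time (tone, length, audience) and test again rather than rewriting the whole prompt from scratch.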

Assuming the AI Understands Context or Subtext

It’s easy to overestimate the capabilities of AI tools and assume they understand the meaning of language the way humans do. Current AI tools take things literally; they don’t actually understand the context of a conversation. An AI assistant trained to identify patterns and connections may recognize things like norms, emotions, or sarcasm as concepts, all of which rely on context, but it struggles to identify them reliably.

Humans can read between the lines and understand meaning beyond what’s actually written. An AI interprets instructions and prompts in a very literal sense — it doesn’t understand the meaning behind them. You can’t assume an AI understands concepts it hasn’t been trained for.

Asking Overly Broad or Vague Questions

When interacting with an AI, avoid overly broad or vague questions. The AI works best when you give it clear, specific prompts. Prompts like “Tell me about human history” or “Explain consciousness” are like asking the AI to search the entire internet. The response will probably be unfocused because the AI has no sense of which information is relevant or important, so you’ll need to refocus the prompt and try again.

Good prompts are more direct. You can start with a prompt such as “Summarize this research paper in two paragraphs” or “Write a 500-word article on summer plants that require shade.” The prompt should give the AI boundaries and context to shape its response. Going from broad to increasingly narrow questions also helps.

You can start by asking generally about a topic and then follow up with focused requests on specific details. Providing concrete examples also guides the AI. The key is to give the AI precise prompts centered directly on the information you want instead of posing a vague, boundless question. Sharp, specific questioning produces the best AI results.
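
As a rough illustration (the topic and wording here are our own), this Python snippet sketches the broad-to-narrow approach as a sequence of follow-up prompts:

    # Each prompt narrows the focus of the one before it.
    prompt_sequence = [
        "Give me a brief overview of container gardening.",
        "Focus on vegetables that grow well in containers on a shaded balcony.",
        "List five shade-tolerant vegetables, with pot size and watering needs for each.",
    ]

    for step, prompt in enumerate(prompt_sequence, start=1):
        print(f"Prompt {step}: {prompt}")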

Not Checking Outputs for Errors and Biases

A common mistake when using AI apps is taking the results at face value without double-checking them. AI systems may reflect bias, or generate text that seems right but has errors. Just because the content came from an AI doesn’t mean it’s necessarily accurate. Reviewing AI responses rather than blindly trusting the technology is critical. Look for instances of bias where specific demographics are negatively characterized or tropes (clichés) are reinforced.

Always check facts and figures against other sources. Look for faulty logic that indicates the AI was “confused.” Providing feedback when the AI makes a mistake can further enhance its training. The key is to approach responses skeptically instead of assuming that the AI always generates perfect results. As with any human team member’s work, reviewing it before using it is essential. Careful oversight of AI tools mitigates risks.

Using Offensive, Unethical, or Dangerous Prompts

A primary concern when working with AI is that the apps can inadvertently amplify harmful biases if users write offensive, unethical, or dangerous prompts. The AI will generate text for almost any input, but it may respond that you’re asking for something harmful and refuse to comply. Prompting an AI with inappropriate language or potentially discriminatory requests can reinforce biases from the data the model was trained on.

Being cautious when formulating prompts helps steer the technology toward more thoughtful responses. Remember that AI can be subject to the whims of bad actors.

Expecting Too Much Originality or Creativity from the AI

One common mistake when using AI apps is expecting too much original thought or creativity. AI tools can generate unique mixes of text, imagery, and other media, but there are limits. As of this writing, AI apps are only capable of remixing existing information and patterns into new combinations. They can’t really create responses that break new ground. An AI has no natural creative flair like human artists or thinkers. Its training data consists only of past and present works. So, although an AI can generate new work, expecting a “masterpiece” is unrealistic.

Copying Generated Content Verbatim

A big mistake people make when first using AI tools is taking the generated text and using it verbatim, without any edits or revisions. AI can often produce text that appears to be well written, but the output is more likely to be a bit rough and require a good edit. Mindlessly copying the unedited output can result in unclear, generic work. (Also, plagiarizing or passing the writing off as your own is unethical.)

A best practice is to use the suggestions as a starting point that you build upon with your own words and edits to polish the final product. Keep the strong parts and shape them into something original. The key is that the AI app should support your work, not replace it. With the right editing and polishing, you can produce something you’ll be proud of.

Providing Too Few Examples and Use Cases

When you’re training an AI app to handle a new task, a common mistake is providing too few input examples. Humans can usually extrapolate from a few samples, but AI apps can’t. An AI must be shown enough examples to grasp the full scope of the task, so you need to feed it varied use cases to help it generalize effectively.

Similarly, limiting prompts to just a couple of instances produces equally poor results because the AI has little indication of the boundaries of the task. Providing diverse examples helps the AI form an understanding about how to respond. Having patience and supplying many examples lets the AI respond appropriately.
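
One common way to supply examples is to include them directly in the prompt, a technique often called few-shot prompting. Here’s a minimal Python sketch, with made-up review/label pairs, that assembles such a prompt:

    # Varied input/output examples show the AI the boundaries of the task.
    examples = [
        ("The package arrived two weeks late and the box was crushed.", "Negative"),
        ("Setup took five minutes and everything worked perfectly.", "Positive"),
        ("The product is fine, but shipping cost more than I expected.", "Mixed"),
    ]

    prompt = "Classify the sentiment of each customer review.\n\n"
    for review, label in examples:
        prompt += f"Review: {review}\nSentiment: {label}\n\n"
    prompt += "Review: The manual was confusing, but support was helpful.\nSentiment:"

    print(prompt)

The more varied the examples, the better the AI can infer where the boundaries of the task lie.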

Not Customizing Prompts for Different Use Cases

One common mistake when working with AI tools is attempting to use the same generic prompt to handle all your use cases. Creating a one-size-fits-all prompt is easier, but it will deliver disappointing results. Each use case and application has its own unique goals and information that need to be conveyed, as discussed throughout this book. For example, a prompt for a creative nonfiction story should be designed differently than a prompt for a medical article.

An inventory of prompts designed for various use cases allows the AI to adapt quickly to different needs. The key is customization. Building a library of specialized prompts is an investment that pays dividends.
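
As a sketch of what such an inventory might look like (the template text and placeholder names are our own), here’s a small Python dictionary of use-case-specific prompt templates:

    # Each template carries the goals and constraints of its own use case.
    prompt_library = {
        "creative_story": (
            "Write a {word_count}-word creative nonfiction story about {topic}. "
            "Use vivid sensory detail and a first-person voice."
        ),
        "medical_article": (
            "Write a {word_count}-word article on {topic} for a general audience. "
            "Cite reputable health organizations and avoid giving medical advice."
        ),
    }

    prompt = prompt_library["medical_article"].format(
        word_count=500, topic="seasonal allergies"
    )
    print(prompt)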

Becoming Overly Reliant on AI for Tasks Better Suited to Humans

Almost everyone is excited about using AI tools to make their job easier. But it’s important to avoid becoming too dependent on them. AI is great for tasks like automation and personalization, but applying ethics and conveying empathy are still human strengths.

About This Article

This article is from the book Generative AI For Dummies.

About the book authors:

Stephanie Diamond is a marketing professional and author or coauthor of more than two dozen books, including Digital Marketing All-in-One For Dummies and Facebook Marketing For Dummies.

Jeffrey Allan is the Director of the Institute for Responsible Technology and Artificial Intelligence (IRT) at Nazareth University.