
How to Write Effective AI Prompts for Different Real World Uses

Updated: 12-06-2024 | Generative AI For Dummies

As you delve deeper into the realm of prompt engineering, you find out that the effectiveness of a prompt can vary greatly depending on the application. Whether you’re using AI for creative writing, data analysis, customer service, or any other specific use, the prompts you use need to be tailored to fit the task at hand.

The art in prompt engineering is matching your form of communication to the nature of the task. If you succeed, you’ll unlock the vast potential of AI.

For instance, when engaging with AI for creative writing, your prompts should be open-ended and imaginative, encouraging the AI to generate original and diverse ideas. A prompt like “Write a story about a lost civilization discovered by a group of teenagers” sets the stage for a creative narrative.

In contrast, data analysis requires prompts that are precise and data-driven. Here, you might need to guide the AI with specific instructions or questions, such as “Analyze the sales data from the last quarter and identify the top-performing products.” You may need to include that data in the prompt itself if the model can’t already reach it through its training data, a retrieval-augmented generation (RAG) system, system or custom messages, or a specialized GPT. In any case, this type of prompt helps the AI focus on the exact task, ensuring that the output is relevant and actionable.
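
If you’re working through an API rather than a chat window, here is a minimal sketch of what “including the data in the prompt” can look like. It assumes the OpenAI Python client; the model name, CSV columns, and sales figures are placeholders, not real data.

```python
# A minimal sketch of embedding your own data in a data-analysis prompt.
# Assumes the OpenAI Python client (openai>=1.0); the model name and the
# sales figures below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Data the model can't know on its own, pasted directly into the prompt.
sales_data = """product,units_sold,revenue
Widget A,1200,24000
Widget B,950,33250
Widget C,400,8000"""

prompt = (
    "Analyze the sales data from the last quarter and identify the "
    "top-performing products.\n\n"
    f"Sales data (CSV):\n{sales_data}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same idea applies in a chat window: paste the data beneath the instruction so the model analyzes your numbers rather than guessing.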

The key to designing effective prompts lies in understanding the domain you’re addressing. Each field has its own set of terminologies, expectations, and objectives. For example, legal prompts require a different structure and language than those used in entertainment or education. It’s essential to incorporate domain-specific knowledge into your prompts to guide the AI in generating the desired output.

Following are some examples across various industries that illustrate how prompts can be tailored for domain-specific applications:

  • Legal domain: In the legal industry, precision and formality are paramount. Prompts must be crafted to reflect the meticulous nature of legal language and reasoning. For instance, a prompt for contract analysis might be, “Identify and summarize the obligations and rights of each party as per the contract clauses outlined in Sections 2.3 and 4.1.” This prompt is structured to direct the AI to focus on specific sections, reflecting the detail-oriented nature of legal work.
  • Healthcare domain: In healthcare, prompts must be sensitive to medical terminology and patient privacy. A prompt for medical diagnosis might be, “Given the following anonymized patient symptoms and test results, what are the potential differential diagnoses?” This prompt respects patient confidentiality while leveraging the AI’s capability to process medical data.
  • Education domain: Educational prompts often aim to engage and instruct. A teacher might use a prompt like, “Create a lesson plan that introduces the concept of photosynthesis to 5th graders using interactive activities.” This prompt is designed to generate educational content that is age-appropriate and engaging.
  • Finance domain: In finance, prompts need to be data-driven and analytical. A financial analyst might use a prompt such as, “Analyze the historical price data of XYZ stock over the past year and predict the trend for the next quarter based on the moving average and standard deviation.” This prompt asks the AI to apply specific financial models to real-world data.
  • Marketing domain: Marketing prompts often focus on creativity and audience engagement. A marketing professional could use a prompt like, “Generate a list of catchy headlines for our new eco-friendly product line that will appeal to environmentally conscious consumers.” This prompt encourages the AI to produce creative content that resonates with a target demographic.
  • Software development domain: In software development, prompts can be technical and require an understanding of programming languages. A prompt might be, “Debug the following Python code snippet and suggest optimizations for increasing its efficiency.” This prompt is technical, directing the AI to engage with code directly (see the sketch after this list for one way to package the code with the prompt).
  • Customer service domain: For customer service, prompts should be empathetic and solution-oriented. A prompt could be, “Draft a response to a customer complaint about a delayed shipment, expressing understanding and offering a compensatory solution.” This prompt guides the AI to handle a delicate situation with care.
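
To make the software development example concrete, here is a minimal sketch of how a debugging prompt might carry the code it refers to. The buggy snippet and the wording are illustrative only; the resulting string can be sent to any GenAI tool that accepts text prompts.

```python
# A minimal sketch of a debugging prompt that carries the code to be reviewed.
# The buggy snippet below is illustrative only.
buggy_snippet = '''def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)  # fails with ZeroDivisionError on an empty list
'''

prompt = (
    "Debug the following Python code snippet and suggest optimizations "
    "for increasing its efficiency.\n\n"
    "Code:\n" + buggy_snippet
)

print(prompt)  # paste this into your GenAI tool or send it through an API
```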

By understanding the unique requirements and language of each domain, you can craft prompts to effectively guide AI in producing the desired outcomes. It’s not just about giving commands; it’s about framing them in a way that aligns with the goals, terms, and practices of the industry in question. As AI continues to evolve, the ability to engineer precise and effective prompts becomes an increasingly valuable skill across all sectors.

15 tips and tricks for better AI prompting

Although GenAI may seem like magic, it takes knowledge and practice to write effective prompts that will generate the content you’re looking for. The following list provides some insider tips and tricks to help you optimize your prompts to get the most out of your interactions with GenAI tools:

  • Know your goal. Decide what you want from the AI — like a simple how-to or a bunch of ideas — before you start asking.
  • Get specific. The clearer you are, the better the AI can help. Ask “How do I bake a beginner's chocolate cake?” instead of just “How do I make a cake?”
  • Keep it simple. Use easy language unless you’re in a special field like law or medicine where using the right terms is necessary.
  • Add context. Give some background if it's a special topic, like tips for small businesses on social media.
  • Play pretend. Tell the AI to act like someone, like a fitness coach, to get answers that fit that role.
  • Try again. If the first answer isn't great, change your question a bit and ask again.
  • Show examples. If you want something creative, show the AI an example to follow, like asking for a poem like one by Robert Frost.
  • Don't overwhelm. Keep your question focused. If it's too packed with info, it gets messy.
  • Mix it up. Try asking in different ways, like with a question or a command, to see what works best.
  • Embrace multimodal functionality. Multimodal functionality means that the GenAI model you’re working with can accept more than one kind of prompt input. Typically, that means it can accept both text and images in the input (see the sketch after this list).
  • Understand the model’s limitations. GenAI is not infallible and can still produce errors or “hallucinate” responses. Always approach the AI’s output with a critical eye and use it as a starting point rather than the final word on any subject.
  • Leverage the enhanced problem-solving abilities. GenAI’s enhanced problem-solving skills mean that you can tackle more complex prompts. Use this to your advantage when crafting prompts that require a deep dive into a topic.
  • Keep prompts aligned with AI training. For example, remember that GPT-4, like its predecessors, is trained on a vast dataset up to a certain point in time (April 2023 at the time of this writing). It doesn’t know about anything that happened after that date. If you need to reference more recent events or data, provide that context within your prompt.
  • Experiment with different prompt lengths. Short prompts can be useful for quick answers, while longer, more detailed prompts can provide more context and yield more comprehensive responses.
  • Incorporate feedback loops. After receiving a response from your GenAI application, assess its quality and relevance. If it hit — or is close to — the mark, click the thumbs-up icon. If it’s not quite what you were looking for, click the thumbs-down icon and spell out in your next prompt what you’d like changed. This iterative process can help refine the AI’s understanding of your requirements and improve the quality of future responses.
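
As a follow-up to the multimodal tip above, here is a minimal sketch of a text-plus-image prompt sent through an API. It assumes the OpenAI Python client and a vision-capable model; the model name and image URL are placeholders.

```python
# A minimal sketch of a multimodal prompt (text plus an image).
# Assumes the OpenAI Python client and a vision-capable model; the image URL
# is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any vision-capable model works
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this product photo and suggest three "
                         "caption ideas for a social media post."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/product-photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

In a chat interface, the equivalent is simply attaching or dragging the image into the prompt box alongside your text.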

By keeping these tips in mind and staying informed about the latest developments in the capabilities of various GenAI models and applications, you’ll be able to craft prompts that are not only effective but also responsible and aligned with the AI’s strengths and limitations.

How to use prompts to fine-tune the AI model

The point of prompt engineering is to carefully compose a prompt that can shape the AI’s learning curve and fine-tune its responses to perfection. In this section, you dive into the art of using prompts to refine the GenAI model, ensuring that it delivers the most accurate and helpful answers possible. In other words, you discover how to use prompts to also teach the model to perform better for you over time. Here are some specific tactics:

  • When you talk to the AI and it gives you answers, tell it if you liked the answer or not. Do this by clicking the thumbs up or thumbs down, or the + or – icons above or below the output. The model will learn how to respond better to you and your prompts over time if you do this consistently.
  • If the AI gives you a weird answer, there's a “do-over” button you can press. It's like asking your friend to explain something again if you didn't get it the first time. Look for “Regenerate Response” or some similar wording (term varies among models) near the output. Click on that and you’ll instantly get the AI’s second try!
  • Think of different ways to ask the AI the same or related questions. It's like using magic words to get the best answers. If you're really good at it, you can make a list of prompts that others can use to ask good questions too. Prompt libraries are very helpful to all. It’s smart to look at prompt libraries for ideas when you’re stumped on how or what to prompt.
  • Share your successful prompts. If you find a super good way to ask something, you can share it online (at sites like GitHub) with other prompt engineers and use prompts others have shared there too.
  • Instead of teaching the AI everything from scratch (retraining the model), you can teach it new things through your prompting. Ask for the new task in different ways, or include a few worked examples of what you want directly in your prompt (see the sketch after this list). And with some models, what it learns from your prompts will be stored in its memory. This will improve the outputs it gives you too!
  • Redirect AI biases. If the AI says something that seems mean or unfair, rate it a thumbs down and state why the response was unacceptable in your next prompt. Also, change the way you ask questions going forward to redirect the model away from this tendency.
  • Be transparent and accountable when you work with AI. Tell people why you're asking the AI certain questions and what you hope to get from it. If something goes wrong, try to make it right. It's like being honest about why you borrowed your friend's toy and fixing it if it breaks.
  • Keep learning. The AI world changes a lot, and often. Keep up with new models, features, and tactics, talk to others, and always try to get better at making the AI do increasingly more difficult things.
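
Here is a minimal sketch of the “teach it through your prompting” idea, often called few-shot prompting: you place a couple of worked examples in the prompt and let the model continue the pattern. The reviews and labels are illustrative only.

```python
# A minimal sketch of teaching the model a new task through the prompt itself
# (often called few-shot prompting). The example reviews are illustrative only.
examples = [
    ("The package arrived two weeks late and the box was crushed.", "Negative"),
    ("Setup took five minutes and it works perfectly.", "Positive"),
]

new_review = "The manual was confusing, but support sorted it out quickly."

prompt_lines = ["Classify each customer review as Positive or Negative.\n"]
for review, label in examples:
    prompt_lines.append(f"Review: {review}\nSentiment: {label}\n")
prompt_lines.append(f"Review: {new_review}\nSentiment:")

prompt = "\n".join(prompt_lines)
print(prompt)  # paste this into your GenAI tool or send it through an API
```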

The more you help GenAI learn, the better it gets at helping you!

What to do when AI goes wrong

When you engage with AI through your prompts, be aware of common pitfalls that can lead to biased or undesirable outcomes. Following are some strategies to avoid these pitfalls, ensuring that your interactions with AI are both effective and ethically sound.

  • Recognize and mitigate biases. Biases in AI can stem from the data it was trained on or the way prompts are structured. For instance, a healthcare algorithm in the United States inadvertently favored white patients over people of color because it used healthcare cost history as a proxy for health needs, which correlated with race. To avoid such biases, carefully consider the variables and language used in your prompts. Ensure they do not inadvertently favor one group over another or perpetuate stereotypes.
  • Question assumptions. Wrong or flawed assumptions can lead to misguided AI behavior. For example, Amazon’s hiring algorithm developed a bias against women because it was trained on resumes predominantly submitted by men. Regularly review the assumptions behind your prompts and be open to challenging and revising them as needed.
  • Avoid overgeneralization. AI can make sweeping generalizations based on limited data. To prevent this, provide diverse and representative examples in your prompts. This helps the AI understand the nuances and variations within the data, leading to more accurate and fair outcomes.
  • Keep your purpose in sight. Losing sight of the purpose of your interaction with AI can result in irrelevant or unhelpful responses. Always align your prompts with the intended goal and avoid being swayed by the AI’s responses into a direction that deviates from your original objective.
  • Diversify information sources. Relying on too narrow a set of information can skew AI responses. Ensure that the data and examples you provide cover a broad spectrum of scenarios and perspectives. This helps the AI develop a well-rounded understanding of the task at hand. For example, if the AI is trained to find causes of helicopter crashes and the only dataset it has covers events when helicopters crashed, it will deduce that all helicopters crash, which in turn will render skewed outputs that could be costly or even dangerous. Add data on flights when helicopters did not crash, and you’ll get better outputs because the model has more diverse and more complete information to analyze.
  • Encourage open debate. AI can sometimes truncate debate by providing authoritative-sounding answers. Encourage open-ended prompts that allow for multiple viewpoints and be critical of the AI’s responses. This fosters a more thoughtful and comprehensive exploration of the topic.
  • Be wary of consensus. Defaulting to consensus can be tempting, especially when AI confirms our existing beliefs. However, it’s important to challenge the AI and yourself by considering alternative viewpoints and counterarguments. This helps in uncovering potential blind spots and biases.
  • Check your work. Always review the AI’s responses for accuracy and bias. As with the healthcare algorithm that skewed resources toward white patients, unintended consequences can arise from seemingly neutral variables. Rigorous checks and balances are necessary to ensure the AI’s outputs align with ethical standards.

About This Article

This article is from the book: Generative AI For Dummies

About the book author:

Pam Baker is a veteran business analyst, speaker, and journalist whose work is focused on big data, artificial intelligence, machine learning, business intelligence, and data analysis. She is the author of Data Divination – Big Data Strategies and ChatGPT For Dummies.