Boost AI Collaboration: Mastering the Art of Prompt Engineering
“Any sufficiently advanced technology is indistinguishable from magic.” Arthur C. Clarke, Science Fiction Writer
This article is a snippet of the book “The AI-Boosted Startup Vol.1”, chapter 3, by Cesar S. Cesar. Hyperboost Global, referenced in the article, is a partner advisory company that supplies some frameworks used in the Masterminds AI™ platform.
Please read the previous article of the “Mastering AI” series here.
The valuable tips in this article work when you are working with any Large Language Model (LLM), and with the Masters and Minds in the Masterminds AI Platform.
Picture yourself stepping into a conversation with a remarkably intelligent intern who is eager to help you with anything you need but has never worked before. This intern requires clear instructions on what to do and what you expect. When they come back with something unexpected, you provide more specific guidance or change your approach, conversing back and forth until they are trained to deliver what you need now and in the future. They may deliver much more than you could possibly imagine!
Imagine giving them a task to research trends in renewable energy. If you don't impose limits and bounds—mostly based on your own limitations—not only do they return with a detailed report, but they also provide a predictive model using AI to forecast future developments, complete with graphs and data visualizations. This intern isn't just meeting your expectations—they're exceeding them in ways you hadn't even considered. This is what working with an LLM can feel like.
LLM TIP: ATTENTION! LLMs sometimes act like rebellious teenagers—they won’t always do what you ask! Other times, they won’t give you a complete answer, and they might even give you the wrong one. I’m sorry to say, but as the ‘adult human’ in this relationship, it’s your responsibility to guide them to provide the answer you need, at the quality you need, and to check their accuracy (most times they are accurate). Follow the tips in this chapter closely to help you deal with these ‘teens’—at least until they grow up. :)
The Human In The Loop
Your really smart intern can answer any question for you, but sometimes they might not understand exactly what you're asking. They might get things a little wrong, or give you an answer that's not quite right.
That's where you come in! You're the "human in the loop." You're like the helper or the editor, making sure your smart friend understands your question correctly.
You ask your friend, "What's the capital of France?" Your friend might say, "Paris, France." That's a good answer! But if they said just "France," you'd ask, "What is missing in your response?" You coach them, giving them more information or rephrasing your question until they understand what you want to know.
You keep editing your prompt or giving them feedback until they give you the perfect answer. So, the human in the loop is the part where you work with your helpful AI tool and guide it to give you the best possible answer.
Crucial Conversations with your LLM
Let's dive a little deeper into the art of prompt engineering—the way you converse with LLMs—which is essential for maximizing your collaboration with your AI assistant. Don't panic, though! We've already engineered all you need to create your product. These tips and tricks are helpful for dealing with LLM "hiccups" and driving follow-up conversations from those prompts. Of course, they'll also come in handy when you start creating your own prompts for the countless other tasks you might need along the way.
LLM TIP: Please practice the prompt engineering techniques below in an LLM like ChatGPT or Gemini while you read, to fix them in your memory. Trust me: you will need them!
1. Set The Stage with Kindness: Just as you would with a human colleague, polite and respectful interactions with your AI assistant tend to elicit more helpful and nuanced responses. Starting a prompt with, "Could you please provide a summary of the latest trends in AI? Thank you!" sets a positive tone for the interaction.
2. Embrace the Conversation: Remember, LLMs excel in dialogue. It might take a few rounds of back-and-forth to refine your prompts and get the perfect response. Don’t be afraid to ask follow-up questions or request clarifications. Think of it as a collaborative brainstorming session where each interaction builds upon the last. For example, you might start with, "Can you clarify what you meant by the 'future of work' in your last response? Could you give some specific examples?" This encourages a more detailed and nuanced answer.
3. Adopt a Coaching Approach: One highly effective way to accelerate your LLM's learning is to adopt a coaching approach, using the Socratic method. When the LLM provides an output that doesn't align with your instructions or expectations, instead of simply correcting it, ask probing questions that encourage it to reflect on its process and identify areas for improvement. For example, you might ask: "What did you do wrong in this response?" "What specific part of my instructions did you miss?" "How can you ensure that you'll interpret my requests correctly in the future?" This type of questioning helps the LLM identify its own errors and develop strategies for avoiding them in subsequent interactions.
4. Impersonate Experts or Authors: Leveraging your LLM's persona power can be incredibly effective. For instance, instructing it to adopt a specific role—"You are a senior strategy consultant who has worked for McKinsey and Bain over the last 25 years and knows everything about problem-solving frameworks and the banking industry. I want you to explain the impact of digital banking on traditional financial institutions."—can yield highly tailored and insightful responses.
5. Create The Prompt for You: If you're unsure how to structure a prompt to achieve your objective, you can ask the LLM to create the prompt for you. Simply provide a clear explanation of what you want to achieve and the type of information you need. For example, you could ask, "I want to compare the features of different project management software. Could you create a prompt that will help me gather this information in a table format?"
6. Experiment with Re/Phrasing: Don't be afraid to try different phrasings, keywords, and angles in your prompts. This will help you discover what works best. You can even edit the previous prompt for clarity. For instance, asking "What are some innovative approaches to reducing healthcare costs?" might yield a variety of creative solutions.
7. Editing vs. Refining Prompts: While the conversational nature of LLMs allows for refining outputs through a series of interactions, most of the time it's more efficient to go back and edit the previous prompt itself (maybe more than once!). This is especially true if the LLM generated an output that significantly deviates from your instructions.
LLM TIP: Prioritize editing your prompts instead of refining with follow-up questions and additional information. This helps eliminate "waste" that could lead the LLM down an unexpected path.
8. Additional Tips For Editing Prompts: If the AI still doesn't produce the desired result after editing the prompt, analyze the new error and consider adding even more specific instructions.
9. Be Precise with Keywords: Use relevant keywords and phrases that directly align with your desired topic. Instead of vague prompts like "Tell me about robots," use specific terms like "artificial intelligence," "ethics of automation," or "robotic surgery."
10. Think Small and Specific: Break down complex questions into smaller, focused ones. For example, rather than asking a broad question like "What are the social trends shaping the future of work?", ask something more specific like "What are Gen Z's attitudes towards remote work based on recent social media discussions?"
11. Focus on the Idea, Not Typos: There's no need to obsess over perfect spelling or grammar in your prompts. Your AI assistant can understand your intent even with errors.
12. Confirming Comprehension: For complex or lengthy prompts, ask your AI assistant to summarize its understanding. This simple step ensures you're both on the same page.
13. Illustrate Your Intent with Examples: Like humans, LLMs benefit from clear and concrete examples. When asking for a very specific response, it's helpful to provide references to illustrate your intent.
14. Specify Response Format: Illustrate your expectations with references or examples. For instance, you could say, "Compare healthcare costs between the United States, Canada, and the United Kingdom. Provide a response in a table format with these columns: X, Y, Z."
15. Long-Form Content Made Easy: For comprehensive responses, end your prompt with something like, "I need a 1000-word response, please divide your answer into several messages to accomplish this."
16. Transform Data into Visuals: LLMs can transform raw data into clear and informative tables or charts.
17. Analyze Spreadsheets: Feed your AI assistant data directly from a CSV file for powerful analysis.
18. Get Creative with Data: Don’t limit yourself to traditional charts and tables. Ask your AI assistant to create unique visualizations.
19. Beware of 'Hallucinations': LLMs are powerful but not perfect. Always cross-check important information.
20. Increasing Confidence in LLM Outputs: When dealing with responses that require the highest level of statistical confidence, it's beneficial to run the same prompt 3-5 times and compare the results.
21. Dealing with conversation flow problems: Includes techniques like regenerating responses, continuing stalled conversations, and restarting conversations if they become buggy.
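The repeat-and-compare idea from tip 20 can be sketched in code as a simple majority vote over several runs of the same prompt. The `ask_llm` function below is a hypothetical stand-in for a real API call (it returns canned answers so the sketch is self-contained); in practice you would replace its body with your provider's client call.

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.
    Replace the body with e.g. an OpenAI or Gemini client call."""
    canned = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
    ask_llm.calls = getattr(ask_llm, "calls", 0) + 1
    return canned[(ask_llm.calls - 1) % len(canned)]

def most_consistent_answer(prompt: str, runs: int = 5) -> tuple[str, float]:
    """Run the same prompt several times and keep the majority answer,
    along with the share of runs that agreed on it."""
    answers = [ask_llm(prompt) for _ in range(runs)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / runs

answer, confidence = most_consistent_answer("What is the capital of France?")
print(answer, confidence)
```

A low agreement ratio is a useful warning sign: if five runs produce five different answers, treat the output as a guess and verify it elsewhere.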
Understanding the Technicalities
- Navigating Processing Errors: Break prompts into smaller messages.
- Navigating Memory Limitations: Ask for regular summaries.
- Understanding Memory Capacities: Comparison of Gemini 1.5 Pro (1.5M+ tokens), ChatGPT 4o (128k), Claude 3.5 Sonnet (200k), etc.
- Understanding Usage Limits: Explanation of tokens and computational resources.
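A rough way to check whether a long prompt will fit a model's context window is to estimate its token count. The four-characters-per-token ratio below is only a common heuristic for English text, not an exact tokenizer, and the window sizes and 4,000-token response headroom are the approximate figures assumed here; real limits vary by model and provider.

```python
# Approximate context-window sizes (tokens), as cited above.
CONTEXT_WINDOWS = {
    "gemini-1.5-pro": 1_500_000,
    "gpt-4o": 128_000,
    "claude-3.5-sonnet": 200_000,
}

def estimate_tokens(text: str) -> int:
    """Rough heuristic: about 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(text: str, model: str) -> bool:
    """Check the estimate against a model's window, leaving assumed
    headroom of 4,000 tokens for the response."""
    return estimate_tokens(text) + 4_000 <= CONTEXT_WINDOWS[model]

prompt = "Summarize this report. " * 1000
print(estimate_tokens(prompt), fits_context(prompt, "gpt-4o"))
```

If the check fails, the fixes from the bullets above apply: break the prompt into smaller messages, or ask for a summary and continue from that.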
Masterminds AI™: Your Very Own Product Master Team Members
Our Master AIs focus on chained steps and iterative refinement, anticipating a future where the limitations of token windows in LLMs become a thing of the past. Recent breakthroughs suggest the possibility of achieving practically "infinite" token windows. This framework lays the groundwork for a world where your AI Product Master can truly become a long-term partner in innovation.
The Path Forward
We've explored the art of conversing with AI, learning prompt engineering as a powerful tool for directing our AI assistant's capabilities. This experience will be invaluable when you collaborate with the Masters and Minds of the Masterminds AI™ Platform, which guides you from idea to MVP concept in 10 days!
