This is part 2 of a series on how to write better prompts for AI assistants. Click here to read part 1.
Crafting a great prompt is like taking an X-ray of your request. You need to see all the essential parts lined up correctly. A well-structured prompt typically includes several key components that work together to guide the AI towards a high-quality response. In this section, we’ll dissect a prompt into its core sections step by step, explain the purpose of each part, and explore how to optimize them. We’ll also cover how long each section should be, common pitfalls to avoid, why repeating crucial info can help, and some extra tricks to make your prompts even better. Let’s put a prompt under the X-ray and examine its “bones”!
A robust prompt often contains a few main sections, each serving a specific purpose in steering the AI. Not every prompt will need all of these, but understanding them helps you assemble prompts more effectively. The common components are:
Role/context: This section sets the scene or persona for the AI. You might define who the AI is or provide background info. For example: “You are a senior Python developer and database expert.” Establishing a role or context helps the model adopt the right tone and knowledge base for the task. It gives the AI a frame of reference for its response (e.g. an expert, a friendly advisor, a specific context or scenario).
Instruction (directive): The instruction is the core task you want the AI to perform. It should be a clear, explicit command or question. For instance: “Write a function that calculates the average of a list of numbers.” This directive tells the model exactly what output or action is expected. A well-defined instruction is critical. Without it, the AI may respond irrelevantly or wander off-topic.
Input data/examples: If the task involves some data or examples, include them in the prompt. This is the information the model needs to refer to in order to perform the task. For example, you might provide a piece of text to summarize, a JSON snippet to parse, or a few example QA pairs to demonstrate a format. Clearly delimit this input (e.g. with quotes, code blocks, or separators) so the AI knows it’s reference material and not part of the instruction. Including relevant data or examples helps ground the model’s response in facts or desired patterns.
Expected output/format: A high-quality prompt often specifies what form the answer should take. This section describes the desired output format or style. For example: “Provide the answer as a JSON object.” or “Output a bulleted list of three points.” By telling the AI how to format its answer, you reduce ambiguity and get results that are easier to use. This acts as an output indicator guiding the model on how to present the answer.
Constraints/guidelines: Constraints are additional rules or caveats for the response. They might include length limits, style guidelines, or things to avoid. For instance: “The explanation should be no more than two sentences and use simple language.” or “Do not mention any internal variable names in the output.” Constraints fine-tune the response, ensuring it meets specific requirements (like brevity, tone, or avoiding certain content). They keep the AI from going off-track or giving undesired output.
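If you assemble prompts in code, these sections map naturally onto a small template. Here's a minimal sketch; the `build_prompt` helper and its section labels are our own convention, not any standard API:

```python
def build_prompt(role, data, task, output_format, constraints):
    """Assemble the common prompt sections into a single string."""
    return (
        f"{role}\n\n"                        # role/context
        f"Data:\n{data}\n\n"                 # input data, clearly set apart
        f"Task: {task}\n"                    # the core instruction
        f"Output format: {output_format}\n"  # expected output/format
        f"Constraints: {constraints}"        # rules the answer must follow
    )

prompt = build_prompt(
    role="You are an expert Python developer and data analyst.",
    data="[2, 4, 6, 8, 10]",
    task="Write a short Python function to calculate the average of these numbers.",
    output_format="Only the code and a one-sentence explanation.",
    constraints="Do not use any external libraries.",
)
print(prompt)
```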
To see these sections in action, let’s examine a sample prompt a developer might write:
```
You are an expert Python developer and data analyst.

Below is a list of numbers: [2, 4, 6, 8, 10]

Task: Write a short Python function to calculate the average of these numbers.
Provide only the code and a one-sentence explanation of how it works.
The code should not use any external libraries.
```
This prompt contains all the key sections:
Role/context: “You are an expert Python developer and data analyst.” Sets the expertise and context for the AI.
Input data: The list [2, 4, 6, 8, 10] is provided as the data the function will use. It’s clearly separated and gives the model something concrete to work with.
Instruction: “Write a short Python function to calculate the average of these numbers.” Tells the AI exactly what to do (the main task).
Expected output/format: “Provide only the code and a one-sentence explanation of how it works.” Specifies the format: the answer should include the code solution and a brief explanation, nothing more.
Constraints: “The code should not use any external libraries.” Adds a rule the solution must follow, in this case limiting which tools can be used.
By structuring the prompt this way, we’ve guided the AI with context, given it data to act on, explicitly stated the task, described the required output format, and set constraints. Each section plays a role in reducing uncertainty and focusing the model’s efforts on what we actually want.
How much detail should you include in each part of the prompt? It’s important to find a balance: enough detail to be clear, but not so much that you drown the model in noise. There’s a tension between providing necessary context and keeping the prompt concise. Here are some guidelines on length for each section:
Role/context: Usually 1-2 sentences are enough for setting the role or context. A short phrase can do the job (“You are a helpful customer support agent…”). If you need to include background info or a scenario, keep it to a brief paragraph at most, focusing only on details that will influence the answer. Avoid lengthy lore or backstory that isn’t directly relevant. Extraneous context can confuse the model or distract it.
Instruction: Aim to state the task in one clear sentence or a single question if possible. In more complex cases, you might use a couple of sentences or break the task into sub-bullets, but brevity is generally better. Overly long or compound instructions can be hard to parse. If you find your instruction running on and on, consider that it might be doing too much at once. Try splitting the task into smaller steps (we’ll cover this in “Other tricks” below).
Input data/examples: Include as much input data as needed for the task, but only what’s needed. If the model has to analyze or transform provided text/code, you may need to paste it in full (even if it’s long). Use delimiters (like triple backticks ``` or a clearly labeled section) to separate this input from the rest of the prompt for clarity. If the input data is huge (thousands of words), consider whether a summary or a smaller excerpt would suffice, as very long inputs can exhaust the token limit or cause the model to lose focus (a token-counting sketch follows this list). Essentially, give the model enough information to work with, but no more than necessary.
Expected output/format: This usually can be expressed in a short phrase or sentence. You might even format it as a bullet list of criteria. For example: “Output format:\n- JSON with keys name and age”. Being concise here is fine, as long as the requirement is unmistakably clear. If the output needs multiple specifications (e.g. style, length, format), it’s often better to list them as separate bullet points for readability.
Constraints/guidelines: These should be brief and specific. It’s common to list constraints as a few bullets or sentences at the end of the prompt. Each constraint might be just a clause (e.g. “Max 100 words,” “Explain in layman’s terms,” “No references to the prompt itself”). Too many constraints can over-complicate things or even conflict with each other, so don’t include needless rules. Stick to the critical ones that ensure the response meets your goals. If you have more than 3-4 constraints, ask yourself if they’re all truly necessary.
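On the input-data point above: if you want a rough programmatic check before pasting in a large document, a tokenizer library helps. Here's a sketch assuming the `tiktoken` package and the `cl100k_base` encoding; match the encoding to your model, and note the 3000-token budget is invented for illustration:

```python
import tiktoken  # pip install tiktoken

def token_count(text: str) -> int:
    # cl100k_base matches several recent OpenAI models; use the
    # encoding that corresponds to whatever model you're calling.
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

long_input = "some long reference document... " * 500
if token_count(long_input) > 3000:  # illustrative budget, not a hard rule
    print("Consider summarizing or excerpting the input data.")
```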
In summary, be as detailed as needed, but as succinct as possible for each part. The prompt should give the AI just enough information to do the job and no excess fat. If any section of your prompt can be shorter without loss of clarity, shorten it. Conversely, if making something a bit longer would prevent ambiguity, add those few extra words. Finding the sweet spot comes with practice and sometimes a bit of trial and error.
Even with a good structure, prompts can fail due to some common mistakes. Here are a few prompt “gotchas” to watch out for, and how to fix them:
Vague phrasing: If your prompt is vague, the AI might produce a broad or irrelevant answer. For example, asking “Tell me about technology.” is so open-ended that you could get practically anything back. Vague or generic prompts often lead to generic or off-target responses. Fix: Be specific about what you want. Identify the particular aspect or angle you’re interested in. Instead of “tell me about technology,” you might ask, “Explain three ways quantum computing could impact cybersecurity.” The more precise your wording, the more focused the answer will be.
Over-explaining or prompt bloat: On the flip side, stuffing the prompt with unnecessary detail, long-winded context, or irrelevant information can confuse the model. If you bury a simple question inside a paragraph of fluff, the model might miss the point or latch onto the wrong detail. Overly complex, convoluted prompts can lead to convoluted responses. Fix: Trim the fat. Remove details that don’t directly affect the task. Keep context short and pertinent. You can certainly provide context, but don’t turn your prompt into a novel unless you actually need the model to consider all that information. Aim for clarity and simplicity over sheer length.
Ambiguous instructions: Ambiguity is the enemy of helpful AI output. For instance, “Draw the bank card” is unclear: it could call for a financial diagram or for literally sketching a credit card. Similarly, “Write about Python” doesn’t specify whether you mean the programming language or the snake. Ambiguity often yields answers that don’t address your actual need. Fix: Identify ambiguous terms or double meanings and clarify them. If a word could be interpreted in multiple ways, rephrase it or add detail. In our example, you’d do better to say “Generate an image of a credit card” rather than “draw,” or “Write about the Python programming language’s history.” Provide enough detail so there’s only one reasonable interpretation of the request.
Unrealistic ask: Sometimes prompts fail because they ask for the impossible or the model misinterprets feasibility. For example, “Predict the stock market next week with 100% accuracy.” The model will either produce a poor guess or refuse. While not exactly a structural issue, it’s a pitfall in what you’re asking. Fix: Keep requests within the model’s capabilities and knowledge. If you need prediction or external data, reframe the task (e.g., “List factors that could influence stock prices” instead). Also, avoid contradictory instructions (e.g., “Give a detailed answer in one word” sets an impossible task). Ensure your constraints and asks align logically.
Ignoring format guidelines: A subtle pitfall is when the prompt does specify a format, but the instruction is buried or not emphasized, so the model might ignore it. If you said in the middle of a long prompt, “answer must be JSON,” the model might miss it and give a narrative answer instead. Fix: Make format instructions stand out (e.g., as the final line or in a list) and consider reiterating them (more on repetition next). Clearly separating format requirements (using formatting or keywords like “Output format:”) helps the model register that constraint. Many providers also offer native JSON or structured-output modes, which are worth using when your project needs machine-readable responses.
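To make that last pitfall concrete, compare a buried format rule with one surfaced at the end (the wording here is invented for the example):

```python
# Buried: the JSON requirement is easy to miss mid-sentence.
buried = (
    "Analyze the customer feedback below, which should be answered as JSON, "
    "considering tone, key complaints, and suggested fixes.\n"
    "Feedback: 'The app crashes on login.'"
)

# Surfaced: the same rule restated as a standout final line.
surfaced = (
    "Analyze the customer feedback below for tone, key complaints, "
    "and suggested fixes.\n"
    "Feedback: 'The app crashes on login.'\n\n"
    "Output format: valid JSON only."
)
```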
By being mindful of these pitfalls, you can debug prompts that aren’t working well. If you get a weird or wrong output, reread your prompt and check: Was I vague? Did I include extraneous info? Is my request clear and achievable? Oftentimes, a quick rewrite addressing these issues will fix the model’s response in the next try.
When it comes to crucial details in your prompt, it can pay off to be a bit repetitive (in a smart way!). Large language models can be influenced by recency bias, meaning they pay more attention to the last things said in the prompt. This implies that restating key instructions or important information at the end of your prompt can reinforce those points and make the model more likely to follow them.
Why repeat? Reinforcement helps ensure the model doesn’t “forget” important constraints or context, especially in longer prompts. For example, if your prompt includes a long background and then your question, the model might lose track of a detail mentioned only once in the beginning. By repeating or summarizing that critical detail in the instruction or at the end (“Remember, use the data above in your analysis”), you remind the model of what matters most. Think of it as highlighting the main points for the AI.
What to repeat: You should repeat the key requirements or nuances that are absolutely essential to get right. This could be the desired output format, the main topic, or a do/do-not rule. For instance, if it’s vital that the answer is in bullet points, you might state at the top and bottom, “The answer should be a bulleted list.” If the tone must be professional, you might weave that into the role and also say “(in a professional tone)” again in the prompt.
Reinforcement vs. redundancy: Be careful to repeat strategically, not aimlessly. You don’t want to confuse the model with contradictions or lots of noise. It’s usually best to repeat by paraphrasing or summarizing the crucial instruction. For example, “Explain the code in one sentence.” at the end echoes the earlier constraint of brevity. This consistency makes the instruction hard to miss. On the other hand, avoid repeatedly emphasizing something trivial or hammering on words you don’t want to see. (If you keep saying “Don’t mention X” several times, you’re actually fixating the model on “X,” which can backfire by making it more likely to mention it!) So favor repeating positive instructions about what you do want.
Recency matters: As noted, models tend to give weight to the last part of the prompt. A practical tip is to end your prompt on a strong note: either end with the main question/instruction itself, or a quick recap of the most important constraint. For example: “Provide the three insights as JSON. Remember: output only valid JSON.” The final reminder (“output only valid JSON”) sits at the end, leveraging recency to nudge the model in the right direction.
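If you assemble prompts in code, a tiny helper makes the end-of-prompt reminder a habit (a sketch; the function name is our own):

```python
def with_final_reminder(prompt: str, critical_rule: str) -> str:
    # Restate the must-follow rule at the very end of the prompt,
    # where recency gives it the most weight.
    return f"{prompt.rstrip()}\n\nRemember: {critical_rule}"

prompt = with_final_reminder(
    "Provide the three insights from the report above as JSON.",
    "output only valid JSON.",
)
```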
In summary, a bit of deliberate echoing in your prompt can significantly improve reliability. If there’s something you absolutely need in the answer, don’t shy away from reinforcing it. The model is more likely to comply when it’s seen that requirement multiple times in clear terms. Just make sure those repetitions are clear, consistent, and focused on your goal.
Once you have the basic prompt structure down, you can employ various techniques to make your prompt even more effective. Here are some extra tricks and optimizations, especially handy for developers and advanced prompt crafters:
Use clear formatting and delimiters: Structure your prompt so that each part is unambiguous. You can use separators like --- or triple backticks to isolate different sections (e.g. one for context/data, one for instructions). For instance, if you include a block of text or code for the AI to act on, wrap it in triple quotes or a code block. This way, the model knows exactly what text is the data or example and what is the actual question. Clear formatting prevents the AI from mixing up instructions vs. input. It also improves readability for you and the model. Many prompt guides recommend delimiters because they reduce confusion and even help avoid inadvertent prompt injection.
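For instance, a programmatically built review prompt might fence the data with --- markers (a sketch; the task and variable names are invented):

```python
user_code = "def avg(nums): return sum(nums) / len(nums)"

prompt = (
    "You are a code reviewer. Review the code between the --- markers "
    "and list any bugs you find.\n\n"
    f"---\n{user_code}\n---\n\n"
    "Respond as a bulleted list."
)
```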
Ask for structured output: If you need the answer in a particular structure (JSON, XML, a table, bullet points, etc.), explicitly ask for it. You can even provide a template or an example of the desired format. For example: “Answer in JSON with keys status and message.” or “Respond in a markdown table with columns X, Y, Z.” When you specify the format, the model will usually try to match it. This is extremely useful for developers who plan to parse the output, since it saves time cleaning up the answer. If necessary, demonstrate the format (e.g., give a dummy JSON or a partial example) to eliminate any guesswork. Many providers also support structured output natively, typically by accepting a JSON schema the response must conform to. Leverage that when it makes sense for your project.
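As one concrete option, the OpenAI Python SDK offers a JSON mode via the `response_format` parameter. The sketch below assumes SDK v1+, an API key in the environment, and a JSON-mode-capable model; other providers expose similar options under different names, so check your provider's docs:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model that supports JSON mode
    messages=[
        {"role": "system", "content": "Reply in JSON with keys status and message."},
        {"role": "user", "content": "Did the nightly build pass?"},
    ],
    response_format={"type": "json_object"},  # constrain output to valid JSON
)
print(response.choices[0].message.content)
```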
Break complex tasks into steps: Don’t hesitate to guide the AI through a multi-step reasoning process. LLMs often do better when they’re instructed to tackle a problem step-by-step. You can prompt this in a couple of ways. One way is to explicitly enumerate subtasks: “1. First, summarize the user input. 2. Then, check for any contradictions. 3. Finally, output a conclusion.” By listing the steps in the prompt, you help the model organize its approach. Another way is to ask the model to “think aloud” or reason before the final answer (e.g., “Explain your reasoning, then give the answer”). This is related to chain-of-thought prompting, where the model’s intermediate reasoning leads to a more accurate final result. In any case, breaking down the task can prevent the model from getting overwhelmed or making leaps of logic.
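Here's what an enumerated-steps prompt might look like (the triage task is invented for illustration):

```python
prompt = (
    "You are a support triage assistant.\n"
    "User message: 'The export button does nothing and I lost my draft.'\n\n"
    "1. First, summarize the user's problem in one sentence.\n"
    "2. Then, list each distinct issue mentioned.\n"
    "3. Finally, suggest one next step for each issue."
)
```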
Control verbosity with explicit cues: You can influence how verbose or concise the model is by explicitly stating your preference. If you want a brief answer, say so: “(Answer in one sentence.)” or “Keep the explanation under 50 words.” On the other hand, if you want a detailed answer, encourage depth: “Provide a step-by-step analysis… elaborate on each point.” You can also prime the output by phrasing a cue at the end of the prompt. For example, ending the prompt with “Answer in a single sentence:” will cue the model to be concise (it sees the colon and likely fills in one sentence). Conversely, “Explain in detail:” suggests a longer answer. Being direct about length and detail can greatly help the model hit the target response length. Remember, the model doesn’t inherently know if you want a summary or an essay unless you tell it.
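For example, the same question primed two ways (illustrative strings only):

```python
question = "What does Python's GIL do?"

concise_prompt = f"{question} Answer in a single sentence:"
detailed_prompt = (
    f"{question} Provide a step-by-step explanation, "
    "elaborating on threads, the interpreter, and CPU-bound work:"
)
```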
Iterative prompt refinement: Treat prompt-writing as an iterative process. Rarely will a complex task be perfect on the first try. Developers often try a prompt, see how the AI responds, and then tweak the prompt to fix any issues. For example, if the output wasn’t in the right format, you can add a line to your prompt explicitly instructing that format. If the answer was off-topic, you may need to add a clarifying detail or constraint. Each iteration is a chance to sharpen the prompt. A good workflow is: test the prompt with the AI, examine the response, adjust the prompt, and repeat. Over a few iterations, you’ll converge on a prompt that consistently yields high-quality results. This practice is essentially using the AI as your collaborator to zero in on the best phrasing. Don’t be afraid to experiment.
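In code, that refinement loop might look like the sketch below, where `complete()` is a stand-in for your provider's client call (not a real library function) and its canned reply exists only so the example runs:

```python
import json

def complete(prompt: str) -> str:
    """Placeholder for your model call; swap in your provider's SDK."""
    return '["stale data", "privacy leaks", "cache poisoning"]'  # canned reply

prompt = "List three risks of caching user data. Output format: a valid JSON array."
for attempt in range(3):  # cap the retries
    answer = complete(prompt)
    try:
        json.loads(answer)  # did the model follow the format?
        break
    except json.JSONDecodeError:
        # Tighten the prompt and try again.
        prompt += "\nRemember: respond with a valid JSON array and nothing else."
```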
Leverage bullet points and lists: When asking for multiple items or providing multiple criteria, use bullet points or numbering in your prompt. If you ask a question in paragraph form with several questions embedded, the model might skip one. But if you format it as “1. Do X, 2. Do Y, 3. Do Z,” the model is more likely to address each part in order. Likewise, if you want an answer in list form, literally say “Give the answer as a list:” or provide a template like “- First insight\n- Second insight\n- …”. Models respond well to list structures and will often mirror them in the output. This trick not only improves completeness but also readability.
Use delimiters or tags for dynamic content: If your prompt includes changing content (like user input in a larger system prompt), consider tagging it with identifiers, for example wrapping the user’s query in tags such as `<user_query>` … `</user_query>` within the system prompt. While the AI doesn’t literally require XML tags, clearly marking sections can help if your prompt is programmatically constructed. It’s an organizational tool that can prevent the model from, say, confusing your system notes with the user’s query. Some developers use comments or labels like “Context:” and “Question:” to similar effect.
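A sketch of that tagging pattern (the tag name is arbitrary; any clear, consistent marker works):

```python
def wrap_user_query(system_notes: str, user_query: str) -> str:
    # Tags make the boundary between our instructions and
    # user-supplied text explicit in programmatically built prompts.
    return (
        f"{system_notes}\n\n"
        f"<user_query>\n{user_query}\n</user_query>\n\n"
        "Answer only the question inside the user_query tags."
    )
```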
Test edge cases: As a final optimization, think of possible misunderstandings and test your prompt against them. If your prompt could be interpreted in two ways, try phrasing it both ways to see how the model reacts. If the task is critical, test slightly varied prompts or add explicit clarifications to handle those edge cases. It’s easier to adjust the prompt before deployment than to get a surprise later. This goes hand-in-hand with iterative refinement as you’re essentially stress-testing your prompt to make sure it’s foolproof.
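A lightweight way to stress-test phrasings is to run each variant and compare outputs side by side (again using a hypothetical `complete()` stand-in for your model call):

```python
def complete(prompt: str) -> str:
    """Placeholder for your model call; swap in your provider's SDK."""
    return f"(model output for: {prompt})"

variants = [
    "Summarize the report in three bullets.",
    "Give a three-bullet summary of the report.",
    "Summarize the report. Output: exactly three bullet points.",
]

for variant in variants:
    print("---", variant)
    print(complete(variant))  # compare how each phrasing lands
```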
Good techniques help transform a decent prompt into a highly effective one. Keep in mind that not every trick is necessary for every prompt. Use the ones that make sense for your situation. Over time, you’ll develop an intuition for which prompt optimizations yield the best results for the task at hand.