Ever feel like you’re trying to reason with a teenager who’s only half-listening when you use AI? You’re not alone. Some colleagues recently showed me prompts they tried, along with the off-the-mark responses they got back. The results were often frustrating—more confusing than helpful, and definitely not what they asked for.
It’s as if you said, “Clean your room,” and they stuffed everything under the bed.
A lot of people jump into generative AI and end up feeling more frustrated than helped. But here’s the good news: it’s probably not the AI’s fault.
In most cases, the key to better results is how you ask.
Think of it like ordering coffee. If you just say, “coffee,” you’ll get something. If you ask for “a grande, non-fat, caramel macchiato,” you’ll get exactly what you want.
Prompting AI works the same way—and this post will help you learn how to order your perfect AI coffee.
Two basic prompting styles
Dr. Philippa Hardman offers a helpful way to think about prompting styles. She suggests two main approaches: structured and conversational.
An easy way to remember them is to picture the difference between following a recipe and having a chat about baking.
Structured prompting: Like giving AI a recipe
Structured prompting is when you give AI detailed, step-by-step instructions—like a recipe for cake.
Dr. Hardman calls this approach CIDI (Context, Instruction, Detail, Input).
Key elements:
- Context – Background information the AI should know
- Instruction – Clear and specific steps
- Details – Must-haves and must-nots
- Input – Data, documents, or content to work with
(Note: Dr. Hardman’s CIDI doesn’t list “role” separately, but it’s usually wrapped into the context.)
When to use structured prompting:
- Task Type: Content creation, data analysis, summaries—anything that needs precision
- Tool Type: Great with ChatGPT and research-oriented tools like Perplexity.ai, Scite.ai, and Consensus.app
Anatomy of a strong structured prompt
Use this simple framework:
- Goal: What do you want?
- Format: How do you want it delivered?
- Context: What should the AI know?
- Constraints: What should it avoid?
- Voice and tone: What style should it use?
- Examples or input: Can you show it what you mean?
- Role: Whose voice or perspective should it use?
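To make the framework concrete, here is a rough sketch of it as a small prompt builder. The function name, parameter names, and field order are my own choices for illustration; this is just a plain string template, not an official tool or part of Dr. Hardman’s CIDI.

```python
def build_prompt(goal, format_, context, constraints, tone,
                 examples=None, role=None):
    """Assemble a structured prompt from the framework's parts.

    Every field maps to one item in the checklist above. The role and
    examples are optional, since not every task needs them.
    """
    parts = []
    if role:
        parts.append(f"Role: {role}")
    parts.append(f"Goal: {goal}")
    parts.append(f"Format: {format_}")
    parts.append(f"Context: {context}")
    parts.append(f"Constraints: {constraints}")
    parts.append(f"Voice and tone: {tone}")
    if examples:
        parts.append(f"Examples or input: {examples}")
    return "\n".join(parts)

prompt = build_prompt(
    goal="Summarize the key points of this article.",
    format_="Bulleted list.",
    context="For grade 6 students.",
    constraints="No personal opinions.",
    tone="Friendly and encouraging.",
    role="You are a science teacher who works with grade 6 students.",
)
print(prompt)
```

You would paste the resulting text into your AI tool of choice; the value of the builder is simply that it stops you from forgetting a part of the framework.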
Simple example
Goal: Summarize the key points of this article.
Format: Bulleted list.
Context: For grade 6 students.
Constraints: No personal opinions.
Voice and tone: Friendly and encouraging.
Examples or input: Here’s a sample summary. [Insert example]
Role: You are a science teacher who works with grade 6 students.
More Advanced Example
Goal: Draft a 3-day workshop outline on ‘Facilitating Effective Online Collaboration.’
Format: Table with columns for ‘Day,’ ‘Topic,’ ‘Learning Objectives,’ and ‘Activities.’
Context: Adult learners with intermediate experience using online collaboration tools.
Constraints: No made-up information. State “I don’t know” if unsure. Avoid bias.
Voice and tone: Professional yet engaging.
Examples or input: Use the attached past workshop outline and audience brief.
Role: You are a senior learning science professor specializing in workplace learning.
Why assign a role? Setting a high-expertise role (like a senior professor or policy advisor) nudges the AI to “think” with deeper knowledge and produce insights that go beyond surface-level answers. It steers the AI toward a specific slice of its knowledge rather than the entire body of it. I suspect this also helps keep it from leaning on too many Reddit sources!
Formatting your prompts: Use visual cues
Clear formatting helps AI follow your instructions more accurately.
Research on prompt formatting suggests that a clear, structured layout improves how reliably models interpret your instructions.
Easy ways to format:
- Bold headings – Label each part clearly
- Markup formats – Use XML or Markdown for advanced prompts
Example with Bold Headings:
Goal: Draft a workshop outline…
Format: Table format…
Context: Audience details…
Constraints: Avoid made-up facts…
Voice and tone: Professional and engaging…
Examples or input: Include outline and briefing note…
Role: Speak as an experienced facilitator…
Example with XML Markup:
<task>Draft a course outline for a 3-day workshop on online collaboration</task>
<format>Table: Day, Topic, Learning Objectives, Activities</format>
<context>Adult learners with diverse tech skills</context>
<constraints>No made-up info; state unknowns; avoid bias</constraints>
<tone>Professional and engaging</tone>
<role>Experienced facilitator with digital learning background</role>
Conversational prompting: Like brainstorming with a friend
Sometimes you don’t know exactly what you need yet. That’s where conversational prompting comes in—more flexible and exploratory.
Dr. Hardman calls this style OPRO (Optimization by Prompting), aka “iterative optimization.”
Key elements:
- Start with a broad prompt
- Refine and adjust based on the AI’s responses
- Layer in details gradually
- Encourage creativity and exploration
When to use conversational prompting:
- Task Type: Brainstorming, strategy, creative development
- Tool Type: Works great with ChatGPT, Claude, Pi (Inflection AI), and Character.ai
Advanced Prompting Techniques
These techniques build on both structured and conversational prompting:
Prompt chaining
Building step-by-step, using previous outputs.
Great for: Complex projects (like building a course outline)
Not ideal for: Simple, one-off tasks
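As a runnable sketch of the idea: prompt chaining just means feeding each step’s output into the next prompt. The `ask_model` function below is a stand-in for whatever AI tool you actually use (a chat window or an API call), not a real library function.

```python
def ask_model(prompt):
    # Placeholder for a real AI call. It just echoes its input here,
    # so the chaining logic stays visible and runnable.
    return f"[model response to: {prompt}]"

# Step 1: get an outline.
outline = ask_model(
    "List the main topics for a 3-day workshop on online collaboration."
)

# Step 2: reuse the previous output inside the next prompt.
objectives = ask_model(
    f"For each topic in this outline, write one learning objective:\n{outline}"
)

# Step 3: build on both earlier outputs.
activities = ask_model(
    f"Suggest one activity per learning objective:\n{objectives}"
)
```

Each step stays small and checkable, which is exactly why chaining shines for bigger projects like a full course outline.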
Chain of thought prompting
Asking the AI to explain its reasoning step-by-step.
Great for: Analytical tasks like summarizing research or analyzing feedback
Not ideal for: Straightforward facts or short answers
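In practice, the simplest version is just appending a reasoning instruction to an ordinary prompt. The wording below is one common phrasing, not a fixed formula:

```python
def with_chain_of_thought(prompt):
    # Ask the model to show its work before committing to an answer.
    return (
        prompt
        + "\n\nThink through this step by step, explaining your "
        "reasoning before giving a final answer."
    )

print(with_chain_of_thought(
    "Summarize the main themes in this research abstract."
))
```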
Tree of thought prompting
Exploring multiple options and solutions.
Great for: Strategy planning, brainstorming project ideas
Not ideal for: Tasks needing a single, clear answer
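Here is a very loose sketch of the branching idea: generate several candidate approaches, then have the model compare them. This is an illustration of the concept, not the formal tree-of-thoughts algorithm, and `ask` is any function that sends a prompt to a model and returns its reply.

```python
def explore_options(ask, problem, n=3):
    """Loose tree-of-thought-style branching: fan out, then evaluate."""
    # Branch: generate several distinct candidate approaches.
    branches = [
        ask(f"Approach {i + 1}: propose one distinct way to {problem}.")
        for i in range(n)
    ]
    # Evaluate: have the model compare the branches and pick one.
    return ask(
        "Compare these approaches and choose the most promising:\n"
        + "\n".join(branches)
    )

# Demo with a stub "model" that just echoes its prompt.
stub = lambda prompt: f"[reply to: {prompt}]"
print(explore_options(stub, "teach online collaboration skills"))
```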
Research: Prompting is complicated and context-dependent
According to Prompt Engineering is Complicated and Contingent (Meincke, Mollick, Mollick, Shapiro, 2025):
Usually good practices:
- Use clear formatting and structure.
- Adapt prompts to the context—no one-size-fits-all rules.
- Set appropriate evaluation criteria (define what success looks like).
- Experiment and iterate—try several versions of a prompt.
- Balance your tone depending on the situation.
- Give clear, specific instructions whenever possible.
Bottom line: Good prompting is an experiment, not a formula.
Prompting deep research models: A different approach
With newer reasoning and deep research models like OpenAI’s o1, Claude Opus, and Google’s Gemini 2.5:
- Chain-of-Thought prompting often hurts performance.
- Zero-shot prompting (asking directly) usually works better.
- Fewer examples, more clarity: a detailed prompt with specific expectations beats a pile of examples.
Tips for Deep Research Models:
- Be specific about your needs (e.g., “focus on randomized controlled trials”).
- Invite clarifying questions.
- Prioritize clarity over controlling the thought process.
For more on this, check out Ross Stevenson’s Steal These Thoughts video:
How to Prompt AI Reasoning Models (7 Proven Tips)—highly recommended! Seriously, follow Ross everywhere. One of the best explainers ever!
Tips and tricks for beginners
Gen AI is like a six-year-old genius: it needs very clear instructions, and it may make stuff up!
- Use clear labels, brackets, quotation marks, or special characters.
- Start simple. Build complexity step-by-step.
- Try different versions of a prompt.
- Be specific about what you want.
- Ask: “What else do you need to know to complete this task?”
- Request examples when unsure.
- Remember: Deep thinking models work differently.
Final thoughts
Prompting AI is part art, part science.
It’s like learning to cook or speak a new language—it takes some trial and error.
But once you get the hang of it, AI becomes an incredibly powerful partner in your work and creativity.
So next time you’re staring at that blank prompt box, don’t settle for just “coffee.”
Ask for the grande, non-fat, caramel macchiato.
You’re much more likely to get what you really want.
Crossposted to LinkedIn