In Episode 3 of The Morning Prompt, Kyle and Will dig into one of the most misunderstood parts of AI: prompting.
This episode isn’t about memorizing clever phrases or copying templates. It’s about learning to think with AI: refining your ideas, providing better context, and treating the tool as a thinking partner rather than a vending machine.
Kyle shares how experimenting with prompts dramatically improved the quality of his results, while Will breaks down a simple framework anyone can use to get clearer, more useful outcomes. They explore why prompting is really about problem framing, why refining beats restarting, and how speaking your prompt out loud, using voice mode, unlocks better thinking.
Whether you’re brand new to AI or already using it daily, this episode helps you move from frustration to confidence by showing how better questions lead to better answers.
This episode covers:
- 00:00 – Welcome
- 00:26 – What’s Brewing?
- 06:17 – The Deep Pour
- 12:21 – Our Sponsor (Guest Surprise)
- 13:04 – The Deep Pour, continued
- 19:07 – Something to Sip On
- 20:32 – Tune into the Next Episode (AI Agents)
A great starting point for listeners who want clarity, confidence, and a real-world map of what’s out there.
Transcript below…
Will Clevenger (00:26)
Welcome back, folks. If you listened to episode two, we dove into how families can use AI in their daily lives. For today’s episode, we’re brewing up practical tips on how to use AI and getting hands-on with one of the most misunderstood parts of AI: prompting.
Kyle Kelin (00:44)
Okay, Will, so before we get into prompting, I wanted to talk about what’s new in the world of AI. I’m sure you’ve seen Gemini’s newest release, and I wanted to see if you’ve had a chance to play with it yet.
Will Clevenger (01:00)
Yeah, the newest drop was earlier this week, and you’ve probably seen a lot of news on it. People are trying to figure out whether they should switch tools. It seems like a good model, but this is really a cycle of competition; there will always be a next best model. You have to determine how it fits into how you work. That said, there are a lot of great updates.
Kyle Kelin (01:28)
It’s funny. All the YouTube videos made it seem like a massive step change. But honestly, I used it and didn’t notice a huge difference compared to ChatGPT or other tools. For the average user, what advice would you give?
Will Clevenger (01:58)
If you’ve already invested in something like ChatGPT, there’s comfort in that. I still think people should test new tools. You can even ask ChatGPT directly to compare tools. Say, “I’m doing X, how does Gemini compare?” It can give you a side-by-side like Consumer Reports. Use the tools to evaluate other tools.
Kyle Kelin (02:36)
I had a funny moment in my consulting business. I created a deliverable for a client and sent it over. For the first time, the client used ChatGPT to review my work and give feedback.
Will Clevenger (03:08)
That’s a great example. This is where we’re headed. AI will be used for both creation and review. Now I’m curious, what gaps did it find?
Kyle Kelin (03:23)
This is where it got interesting. We ended up debating the prompt he used. I had agreed to provide initial thoughts in a bulleted document. But his prompt was something like, “Review this full go-to-market strategy and identify gaps.” I had to push back and say that wasn’t the scope.
Will Clevenger (04:05)
That’s important. When people talk about “human in the loop,” what they really mean is this isn’t an easy button. You need to read the prompt and the response. Did you ask the right question? Did it respond the way you expected?
Kyle Kelin (04:33)
It’s hard to agree on the output if you don’t agree on the prompt. The last thing I wanted to mention was Oboe.ai. I sent you that last week. Did you check it out?
Will Clevenger (05:04)
Yeah, it went from a text you sent me to my favorites bar. I’m already using it. Whether you’re learning cooking, algebra, or something advanced, it adapts to how you learn: lecture style, podcast, quizzes. It’s powerful.
Kyle Kelin (05:39)
For those who haven’t seen it, you go to Oboe.ai, describe what you want to learn and your learning style, and it builds a full learning plan. It’s pretty incredible.
Will Clevenger (06:03)
Turns out that’s a prompt.
Kyle Kelin (06:06)
Exactly. So let’s move into the deep dive.
Kyle Kelin (06:17)
Let’s talk about prompting. I’ll guide us through key things people should keep in mind. Sound good?
Will Clevenger (06:32)
Let’s do it.
Kyle Kelin (06:39)
A few months ago, I sent you a manufacturing article I wrote with AI. Your feedback was critical but helpful. You gave me a totally different way to think about prompting.
Will Clevenger (07:21)
First, prompting is just talking. The mental block that it’s a technical skill is wrong; you’re already doing it. Second, what you get back isn’t always deep enough. Engage with it. Ask follow-ups. Say “tell me more.” Just like a conversation.
Kyle Kelin (08:14)
You also pushed back on my using a single prompt. You were debating each part of the content instead of just generating it.
Will Clevenger (09:24)
Exactly. Think like a five-year-old asking “why.” Challenge the output. Ask for proof. Ask for clarification. Shape it. Like cooking. Taste, adjust, refine.
Kyle Kelin (10:29)
I now treat AI like a collaborator instead of something I hand work to.
Will Clevenger (11:03)
Here’s the framework. Context is king. Tell it what you’re doing, why, who it’s for, and what success looks like.
Kyle Kelin (12:11)
It’s like onboarding an intern. You need to set them up to succeed.
Will Clevenger (13:05)
Exactly. Treat it like a person or assistant. That helps remove the barrier.
Kyle Kelin (13:34)
What’s next?
Will Clevenger (13:36)
Constraints. Tell it what not to do. Don’t use outdated sources. Don’t pull from certain platforms. That improves quality.
Kyle Kelin (14:15)
That’s where reliability improves. You can refine sources and even have tools fact-check each other.
Will Clevenger (14:51)
And you’ll learn over time. Stay engaged. If something’s off, adjust and try again.
Kyle Kelin (15:33)
What’s next?
Will Clevenger (15:36)
Output. Define what you want. Don’t leave it up to the tool.
Kyle Kelin (15:58)
I break things into steps. Outline first, then full doc, then maybe a presentation.
Will Clevenger (16:36)
Exactly. Think about ingredients and steps. You refine over time.
Kyle Kelin (17:47)
I don’t think of it as failure. It’s iteration.
Will Clevenger (18:35)
If your prompt doesn’t sound like how you’d naturally ask for feedback, it’s probably not a good prompt.
Kyle Kelin (19:09)
Big takeaway: AI is a thinking partner, not a vending machine.
Will Clevenger (19:40)
The quality of output reflects the quality of your thinking. Speak it out loud if needed. That helps uncover missing context.
Will Clevenger (20:34)
That’s it for today’s Morning Prompt. The best way to understand AI is to use it. Try things. Refine your prompts. Get started.
Kyle Kelin (20:57)
Next time, we’ll talk about AI agents and how they can make life easier. Thanks for starting your day with us.
