AI and Families: Raising the Next Generation of Learners

The Morning Prompt

In Episode 2, Kyle and Will bring the AI conversation home. Building on Episode 1’s AI 101, they explore how families can begin using AI in practical, healthy ways that support learning, communication, and day-to-day organization.

Kyle shares real examples of how his own family is using AI at home — from learning and problem-solving to everyday life — while Will introduces a simple framework for families: Explore, Experiment, and Establish Boundaries. Together, they unpack how parents can lead by example, help kids build strong digital literacy, and treat AI as a tool that supports curiosity rather than replacing critical thinking.

This episode is about progress, not perfection — and how families can grow their relationship with AI together.

This episode covers:

  • 00:00 – Welcome
  • 00:26 – What’s Brewing?
  • 04:06 – The Deep Pour
  • 08:54 – Our Sponsor, GTMIFY.IO
  • 09:36 – The Deep Pour, continued
  • 17:46 – Something to Sip On
  • 19:09 – Tune into the Next Episode

A great starting point for listeners who want clarity, confidence, and a real-world map of what’s out there.

Transcript: The Morning Prompt – Episode 2: AI at Home

Will Clevenger (00:26)
Last episode, we talked about AI 101 and the tools that make life easier.

I talked about a series of tools from the major AI companies in the first episode: Manus for research, NotebookLM for learning and even creating your own podcast, Claude, Copilot from Microsoft, Grok from X, ChatGPT, Gemini from Google, Perplexity, and so many more.

Today, we’re taking that conversation home. And Kyle, you and I were just talking the other day about an experience that you had. I’d love for you to share that because it really rings true in this conversation.

Kyle Kelin (01:05)
I mean, it was right after we had recorded episode one and I had some friends over for dinner, another couple, and yeah, we just kind of got on the topic of AI and they were super confused about which tools to use. And, you know, probably didn’t realize how interchangeable some of these were and how similar the features are. But we had a really good conversation about it — I told them, “Hey, stay tuned. I’ll send you episode one when it drops.”

And, you know, the wife had mentioned she had felt like AI was kind of cheating — that, you know, it wasn’t really her work. And we’ll talk about that more as well. But I just had told her like, “Hey, do you spell check?” And she’s like, “Yeah.” And I’m like, “Well, it’s kind of a similar thing. It’s a tool that assists you to write better.”

And then the very next day, I went to lunch with a buddy of mine from high school who owns a construction company. And we got on the topic as well and similar conversation of “I don’t really — I think I should be doing more in my business with AI and even more at home and don’t quite know where to start. I’m overwhelmed by all the different options and different tools out there.”

So, you know, similar advice that we talked about on episode one is just like — start. Start with whatever you’re more comfortable with and just start.

Will Clevenger (02:23)
Apprehension is natural. That’s human. I think fear and uncertainty and doubt in this space is fair.

So I think what they’re articulating is this — does this really work? And she talked about cheating, and it reminds me a little bit of the calculator. We had to learn how to do algebra first before we got the calculator. And there are all kinds of calculators. Which one do I use, and how do I make sure that I know how to use it correctly? That creates a lot of apprehension. But I think we can actually help you learn those basics so that you can choose the right calculator for you and execute the math the way you want to — or whatever the construction owner wanted to do, or whatever she wanted to do at home.

Think of it like teaching your child to drive. You wouldn’t just hand over the keys without showing them how to steer and the basics. AI is no different. Parents need to be in the driver’s seat first.

So today we’ll walk through a simple approach that those individuals or families or anyone can take and we’re gonna talk about how can you explore in these tools, how can you experiment with them and what you do personally, and then how do you establish boundaries. Because you still have to think about maybe what you did with the internet or mobile with your children and yourself. We still want to think about what’s a healthy way to use these in the right way. And it serves as a framework that we can all start.

Kyle Kelin (03:47)
Wait, so explore, experiment, establish. So that sounds like three E’s. Yeah, I know we’ve been doing consulting between the two of us for about 40 years, but I guess we can’t help ourselves creating three-letter acronyms and frameworks on the fly. All right, all right. Dive into the deep pour.

Kyle Kelin (04:07)
Okay, well, so for today’s pour, I’m gonna throw out a few examples of how I’ve integrated AI into my family’s everyday life. And Will, it’s gonna be important for you to chime in and let the listeners know your point of view, especially with how you can help them break into their journey with AI through the 3Es framework — explore, experiment, and establish boundaries. Are you ready to go?

Will Clevenger (04:35)
Let’s do it.

Kyle Kelin (04:36)
Alright, so I think the first one — for anybody that has kids in sports, you know how horrible the sports and the league websites are. And so for me, when I need to get the schedule, in past years I have to go to the website. Usually it’s not a printable version. I have to create one of those to print it. And then I also have to create calendar invites for every single practice and game.

So it’s quite time consuming. And so this year I decided to explore a little bit and see — could I get AI to create that printable version for me and an ICS file to upload to my Google Calendar? And well, I got to tell you, it did it within about 10 minutes. I think it took me two versions of the prompt to get it right. But it did it, added all the calendar invites to my calendar, copied my wife, and saved me an hour. It wasn’t a huge amount of time, but it’s definitely a painful, monotonous hour.
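For listeners curious what the AI actually produced, here’s a rough sketch of the kind of ICS (calendar) file it generates from a schedule. The team names, dates, and filename below are made up for illustration — your league’s schedule would obviously differ:

```python
# Hypothetical practice/game schedule: (start, end, summary),
# with times in basic iCalendar format (YYYYMMDDTHHMMSS).
events = [
    ("20250901T170000", "20250901T183000", "Soccer practice"),
    ("20250906T090000", "20250906T103000", "Soccer game vs. Hawks"),
]

def build_ics(events):
    """Assemble a minimal iCalendar (.ics) file as a string."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//Family Schedule//EN"]
    for start, end, summary in events:
        lines += [
            "BEGIN:VEVENT",
            f"DTSTART:{start}",
            f"DTEND:{end}",
            f"SUMMARY:{summary}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)

# Save it, then import into Google Calendar (or any calendar app).
with open("season.ics", "w") as f:
    f.write(build_ics(events))
```

The point isn’t to write this yourself — the AI did it from a pasted-in schedule — but seeing the format makes it clear why the prompt only took two tries.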

Will Clevenger (05:41)
Well, I think you called out two things. One is that you got it done in two iterations. It could have been ten. When we talk about exploration and experimenting, I don’t care if it’s one or a hundred — the journey of learning you went through is what gets reapplied. And then that hour — I know you called it a pain point, but if I asked everybody, “If I could give you one hour back, even just one a week, would you take it?” And I think you’re gonna give other examples, but each of these is a pain point that gives time back. Think about how else you might apply your human time to your kids, your family, and yourself. There’s a lot of benefit in that.

Kyle Kelin (06:21)
And next season, I’m going to teach my daughter the prompt and delegate the entire thing to her.

So I think one that’s a little more experimentation — in the past, I’ve used an online service to do blood work. So I go into a local lab, they send it to this online service, and then, you know, twice a year, I meet with a physician’s assistant who goes over the results and the biomarkers for me and tells me, basically, eat less sugar and drink less alcohol. But we go through that.

And so they switched their business model this year to use AI. And so the first round of that blood work review now is with this AI agent avatar. And I was super skeptical at first. I’m like, “Really?” And I got to tell you, Will, it was a fantastic experience. It had all the lab work, it had all the previous lab work I’ve done, answered all the questions I had, and did a really great readout. And what I also found was I could ask it questions that I may normally not have wanted to ask the physician’s assistant.

Will Clevenger (07:35)
Yeah, I think — talk about that more, because I think your perception is going to be common with a lot of people. Is AI correct? Is it accurate? It feels weird, it’s technology. I think the fact that you broke down that fear, but then the conversation you had with it — talk about that a little bit more, because I think that’s important for the people listening to understand your journey, just even in that micro moment.

Kyle Kelin (08:01)
Yeah, I think there’s obviously some questions that are more personal that you feel a little weird maybe asking the physician’s assistant. No qualms about asking an AI agent that.

And then the other cool thing is the next morning I had another question. And so normally I’d have to go book another meeting or another appointment. And I just fired up my app and started talking to it again. It knew exactly where we left off. And it answered the question and it took me about 10 minutes.

So I think obviously there’s always the fallback that if I need an appointment with a person because I have some real major issue with the blood work or something very confusing, I can get an appointment with the physician’s assistant. They’re there. But this first readout is an AI agent, which is super cool. And they did lower the price a little bit overall of the service, so.

Will Clevenger (09:35)
Well, and I think — can you talk about boundaries too? Because I think there’s — do we get through the fear of it to use it, but then also understanding a little bit of what we’re talking about in this regard is our health, those conversations, privacy. So think about the boundaries that you started to think about as part of that.

Kyle Kelin (09:51)
Yeah, I think it’s a little bit of a closed service, right? This one — so I don’t get to write the prompts. But if I was writing the prompts myself, I would say something like, “Hey, only refer to academically supported papers. Don’t go out on the Internet and any random forum and use that as evidence.”

And so I think you could build that prompt. And you’ve showed me how to do this in the past — of really being careful what sources you allow it to leverage. And then, you know, for this one, again, I couldn’t write the prompts, but as I was asking the questions, I would say, “Are you sure about that? Have you checked that through multiple sources?” So just thinking critically and debating it.

Which funny enough, like for me, I would do that with the physician’s assistant as well. I don’t really trust them without having a debate. But some people will trust their doctor verbatim, whatever they say. I would say, you know, with the AI agent, you got to be a little different. You got to think, “Hey, I’m collaborating with it and I’m going to test it and push back on it and ask it — hey, you recommended I do this. Are you sure about that? Did you verify that with three different sources?”

Will Clevenger (11:09)
Yeah, I think the trust but verify is absolute for sure.

And I think the other thing to me, and we’ve heard about some of the issues with people just going into some of these general GPTs like ChatGPT and looking for health guidance or some of the other implications that it’s had — in fact, they’ve actually removed the ability for you to get health advice in some of those within the last week or two. I think that’s the other part where in this instance, you’re actually engaging with an avatar, an agent that they created and has the protections for us.

Where I think it also, you know, as you think about the folks listening — your children, yourself and others — you have to understand too, going into the open space versus something that is guided and protected and has a little bit of a boundary to it already. I think those are the other things that we have to think about — when you do that, where you do that, what you trust, what you talk about. There’s a safe space and there’s a safer space.

So doing it — you would never just go into the open mall and have a conversation about your personal health, right? Well, that’s a little like ChatGPT. You certainly wouldn’t walk up to somebody randomly and ask them for feedback. That’s a little like GPT. So I think this is the other part about when you do it, how that applies.

Kyle Kelin (12:20)
And so going back to how I use this with my family and everyday life — we definitely use it now for a lot of trip planning and making reservations and so forth. You know, my wife and I have a Thanksgiving trip coming up; we’re going up to the mountains. And so most of the itinerary now is being built with ChatGPT, and my wife and I are collaborating on it.

It’s all — those types of trips to me always feel like a jigsaw puzzle of, well, you want to do this thing, but you can only do it on this day. And then this other thing is on this day. So now you got to pick and choose. And it really runs through all that really quickly for us. And then we can kind of focus on, you know, moments that we really want to have and what matters.

Will Clevenger (13:08)
Yeah, I love the learning exercise because yours was trip planning. Mine was building a fire pit in my backyard. And can I do the math on it, right? Pi R squared for the area, and the circumference to figure out how many stones fit around it. And I could measure each of the stones.

You could go look at where you could go and when they’re available and “hey, we want to eat.” But maybe I didn’t think about the travel time and that part of the day and that part of Colorado. And so for me to go do the fire pit was — I just told it what the diameter was and I told it to go look at the rock and to figure out how many I needed. And so all of the planning around it — for me to do that, and again, it’s to your point, I could have done it, but it saved me an hour. It actually probably thought about things I didn’t think about. And I think the other part for both you and I is it’s a continued learning and skill development. It solves a problem and I continue to learn. That’s a key part of what we’re doing in this journey too.
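Will’s fire-pit math can be sketched as a quick back-of-the-envelope calculation. The diameter and stone width below are hypothetical numbers, not what he actually measured:

```python
import math

diameter_ft = 4.0        # hypothetical pit diameter in feet
stone_width_ft = 1.0     # hypothetical face width of one retaining stone

# Stones needed for one ring: distance around the pit / width of a stone,
# rounded up since you can't buy a fraction of a stone.
circumference = math.pi * diameter_ft
stones_per_ring = math.ceil(circumference / stone_width_ft)

# Footprint area (pi * r^2) — useful if you're also buying gravel or sand.
area_sq_ft = math.pi * (diameter_ft / 2) ** 2

print(f"Stones per ring: {stones_per_ring}")   # 13 for these numbers
print(f"Footprint: {area_sq_ft:.1f} sq ft")    # 12.6 for these numbers
```

This is the kind of arithmetic the AI handled in one prompt — and, as Will notes, it also surfaced details like the gaps between stones that a quick mental estimate would miss.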

Kyle Kelin (14:02)
Yeah, you brought the point there of thinking of things I wouldn’t have thought of, right? And so same thing — obviously, I could go to Google and plan out a trip, but what I like is one, some of the basics it takes off our plate really quick and then we’re focused on finding those things off the beaten path or that are maybe more unique or memorable about the trip. And so it suggests a lot of those things too, like “Have you thought about this? Have you thought about this?” And it’s all based on the prompt that I build about me and my family in order to create that.

Will Clevenger (14:36)
Yeah, they’re all fun exercises and I don’t think there is a wrong exercise and I don’t think there’s a combination that you can’t do. I think this goes back to how do you explore, how do you experiment and then make sure that you establish boundaries around that. We’re going to keep going back to those three E’s.

You know, my father was a PhD and a teacher forever and so was my grandfather. So I’m the first one in three generations not to have a PhD, but I have a little bit of that spirit. I know it. I know it.

Kyle Kelin (15:00)
Slacker, slacker.

Let’s talk about the boundaries more. I know next episode we’re gonna get into prompting more. But this is one that you really taught me a lot around — how to be smart about the prompt you write to create a more reliable result.

Will Clevenger (15:22)
Yeah, I think that’s part of the journey. But I do think as you do it, one is going back to how would you actually ask — remove yourself from the screen. Because I think this again goes back to — we talked about typing a conversation. I think we’re much better at that if we just did it vocally.

But there’s a trust but verify. And you know, when you prompt, there’s the curiosity in the prompt. If you don’t know, ask it. I think that’s the other big part. “I don’t know how to do this prompt.” And again, these prompting styles have changed. If you remember when we started, chain of thought didn’t exist. And you think about — maybe you kind of talk about your first prompt and then talk about how you do it now with the reasoning. You just talked about doing it with the travel. I bet that was different from what you did six months ago.

Kyle Kelin (16:08)
Just for anyone who doesn’t know, what’s chain of thought?

Will Clevenger (16:11)
So chain of thought is reasoning. Think about how we started prompting — and I’ll have to put my $5 in the jar for using a term we haven’t explained yet — but we’ve moved on from models that just used their context to respond without reasoning or thinking first.

Chain of thought now really will take what you’re asking for, and it will start to think about how would I unravel or unpack that? And again, for both of us, that’s where some of those thoughts of “Well, did you think about travel? Did you think about travel time or weather? Or did you think about the space between the rock?” “Oh crap, I didn’t account for that.” Simply thinking — it’s just reasoning and doing a chain of thought about how it would solve whatever problem you’ve given it.

And that came out earlier this year. I think ChatGPT was around January. So when we started, we were having to give it our thought to guide it. And now it actually can do a whole lot of that. And you can watch and see what it’s doing as a way to understand that process.

Kyle Kelin (17:16)
Yeah, I think it’s very cool. It continues to evolve there. Well, I think that’s good for the examples as well. So I appreciate you chiming in and helping to kind of deep dive into some of those.

Will Clevenger (17:31)
Yeah, I think keep it simple. Find a place to start. Focus on your three E’s and keep learning. And whatever you learn, please feel free to share it back. This is a community kind of engagement.

Kyle Kelin (17:45)
So here’s how I recommend you get started. I would think of AI as my executive assistant for the household. Any time I have a task that requires a fair amount of time, my first question would be, “Can AI do this for me?” And then I’m going to fire up one of these tools and experiment a little bit and see if I can get it done. That’s how I would go about it — just really one task at a time that I kind of hate.

Will Clevenger (18:23)
Mine would be — you might use the tool to figure out where you use the tool. And I know we’ve talked about it, but I might describe my family and situations of what we do and have it ask me questions. There’s actually a prompt pattern where you can say, “I don’t know what I want to do with my family, but can you help ask me questions to figure out how you might best be able to help?” And it will sit there and ask me questions because maybe I don’t know what questions to ask.

So again, you might have the thought process like you did where you were very thoughtful and able to go, “I know exactly where my pain is at.” I might be going, “I don’t know what pain is most applicable to it.” So I think either one of those work and I think you should maybe do both.

Kyle Kelin (19:04)
Absolutely. I love that take on it. Like just ask the tool.

Will Clevenger (19:09)
That’s it for today’s Morning Prompt. Thanks for skipping the jargon with us and joining the conversation on how AI is changing our homes and our families.

Kyle Kelin (19:18)
And remember, the best way to understand AI is to use it. And we hope this episode sparked new ideas for you and your family to explore AI in a very practical way. So tune in next time as we dive into how to talk — or prompt — to AI.
