Approaching AI: The Daily Practice Extended

I. Introduction: Following My Own Advice

Here’s a note from my journal, written as advice to a colleague:

“Make a small, high-ROI investment: Pay for a premium AI subscription. ChatGPT Plus, Claude Pro, or HyperWrite. The gap between free and paid is now representative of a year of progress, and may be accelerating. You cannot effectively assess the opportunities or the threats with obsolete tools. Then, block 90 minutes every day for the next six months. Don’t just chat. Give the tools a real problem and drive them to produce something usable. Then you will understand what’s coming better than 99% of the people around you.”

I wrote that six weeks ago. Then I had an uncomfortable realization: I wasn’t following my own prescription. I was observing AI development from the sidelines, reading about it, thinking about it, and writing about it…but not truly practicing with the level of discipline I was advocating. I was committing the classic mistake—confusing awareness with competence.

So I decided to practice what I preach. This article is both a record of that decision and a refined version of the advice I gave. Because if there’s one thing I’ve learned from years of education work, it’s this: advice-givers must also be practitioners. Theory without practice is just performance.

What follows are two specific commitments that will build genuine AI fluency over six months. Not casual familiarity. Not theoretical understanding. Fluency.

II. The Professional Divide Is Accelerating

Eighteen months from now, the professional landscape will have two distinct groups: those who developed practical competence with AI, and those who didn’t. The gap between them won’t be subtle. It will be life-changing.

Those with competence will have redesigned significant portions of their workflows, and will have adapted their thinking to deal with an increasingly AI-mediated world. They’ll know which tasks to augment, which to automate, and, most importantly, which to leave alone. They’ll have developed critical judgment about AI outputs and outcomes, and a visceral sense of when to trust, when to verify, and when to discard. They’ll be able to assess new AI capabilities quickly because they’ve built pattern recognition through hundreds of hours of hands-on work.

The other group will still be figuring out if AI is “real” or “hype.” They’ll have opinions, certainly. But those opinions will be based on headlines, demos, and other people’s experiences. When asked to actually use AI to solve a problem, they’ll struggle. Not because they’re incapable, but because they never built the muscle.

The divide opens slowly, then becomes dangerous. You can’t close it quickly once it widens. This isn’t like learning a new software package over a weekend. AI fluency requires extended deliberate practice because you’re not just learning to operate a tool. You’re developing judgment about a moving target. You’re building pattern recognition about where current AI excels and where it fails. You’re learning to distinguish genuine capability from marketing hype. That takes volume and repetition.

Those who start building this competence now have a compounding advantage. Every week of practice makes the next week more productive. Every pattern you recognize makes the next one easier to spot. Every failure you recognize and adjust to makes future failures less costly.

Those who wait six months will find themselves trying to learn what’s current while the frontier has moved again. They’ll be learning on tools that early practitioners have already mastered and moved beyond.

“Waiting to see” feels like a neutral position. It’s not. It’s a decision with consequences.

There’s also an epistemological problem here that many don’t recognize: You cannot accurately assess whether AI is transformative or overhyped without hands-on experience. Reading about capabilities doesn’t give you the ground truth. Watching demos doesn’t reveal the failure modes. Listening to experts doesn’t build your own judgment.

The only way to know what’s real is to use it on real problems until you develop a gut sense of its boundaries. And the only way to develop informed intuition about what’s coming is to understand the current frontier deeply enough to extrapolate responsibly.

That requires practice. Daily, deliberate practice.

III. First Commitment: Invest in Your Tools

This isn’t about upgrading to a better product. It’s about investing in your own professional ecosystem. The same way a serious photographer invests in quality lenses or a developer invests in better hardware, you’re investing in the tools that will build a critical career skill.

The ROI on this investment is asymmetric. You’ll spend roughly $240-480 per year, depending on which platform you choose. In exchange, you get access to the technological frontier and the ability to develop competence that will compound over your entire career.

Compare that to professional obsolescence. What’s it worth to be on the wrong side of the divide eighteen months from now? What’s it worth to lack fluency in a technology that’s reshaping knowledge work across domains?

The capability gap is a year of progress

When I say the gap between free and paid versions represents “a year of progress,” I’m not being hyperbolic. The free tiers are typically running models that were frontier 12-18 months ago. That sounds close. It isn’t.

A year of AI development currently means:

  • Dramatically better reasoning on complex problems
  • Longer context windows (ability to work with more information simultaneously)
  • Improved instruction following and reduced hallucination rates
  • Better code generation and debugging
  • More reliable structured outputs
  • Enhanced multimodal capabilities

These aren’t incremental improvements. They represent qualitative shifts in what’s possible.

This matters for learning because you need to see what’s currently possible to understand where this technology is actually going. If you practice on last year’s capabilities, you’re building pattern recognition about outdated constraints. You’re learning to work around limitations that no longer exist. You’re developing intuitions that will be wrong. At the current pace of technology, you’re learning about a world that has moved on.

This isn’t about getting fancier outputs. It’s about understanding trajectory. Every breakthrough you witness on the current frontier teaches you something about the next frontier. Every limitation you encounter tells you where research energy is likely focused. You’re not just learning to use a tool—you’re learning to think about a rapidly evolving technology.

Which platform?

The honest answer: it matters less than you think, but consider this when you choose.

ChatGPT Plus is the most widely adopted and has the strongest general-purpose capabilities across the broadest range of tasks. Good default choice.

Claude Pro excels at longer-form analysis, has strong coding capabilities, and tends to produce more nuanced reasoning on complex problems. Better for deep analytical work.

Other platforms (Perplexity Pro, Gemini Advanced, etc.) have specific strengths, but the two above are the most robust for sustained practice.

My recommendation: Pick one and go deep for the full six months. The cost of platform-hopping is lost continuity. You want to develop fluency with a specific tool’s patterns, quirks, and capabilities. Depth beats breadth for learning.

If you’re genuinely uncertain, flip a coin between ChatGPT Plus and Claude Pro. Both are more than capable of supporting serious practice. Start tomorrow. You can always switch after six months if you have a good reason.

Addressing “I’ll wait for prices to drop”

They might. Probably will eventually. But that’s irrelevant.

Six months from now, the free tier will likely be where the paid tier is today. You’ll still be behind. Those who started today will have moved forward with the frontier. They’ll have six months of pattern recognition, six months of deliberate practice, six months of compounding knowledge.

You’ll be starting from zero, learning what they already mastered, while they’re learning what’s next.

The knowledge you build now has multiplier effects. Every hour of practice makes the next hour more productive. Every pattern you recognize makes future patterns easier to spot. This isn’t a linear investment; it is a compounding one.

Waiting for cheaper access means permanently losing six months of that compounding growth. That’s the actual cost.

Pay the $20-40/month. Start building fluency today.

IV. Second Commitment: Deliberate Practice

Block out two 45-minute sessions every day for the next six months. Two sessions are better than one 90-minute block. This isn’t a scheduling preference—it’s grounded in how humans actually learn complex skills.

The learning science foundation

Deliberate practice is the term psychologist Anders Ericsson used to describe the specific kind of practice that builds expertise. It’s not just repetition. It requires:

  1. Focused attention on improvement – You’re working on getting better, not just getting things done
  2. Operating at the edge of current capability – Tasks should be challenging but achievable
  3. Immediate feedback and iteration – You assess outputs, refine approaches, try again
  4. Specific goals per session – You know what you’re trying to accomplish

This is fundamentally different from casual use. Casual use is asking ChatGPT to write a fun poem or explain a concept. Deliberate practice is bringing a real problem from your work, pushing the tool to produce something genuinely usable, then critically evaluating what worked and what didn’t.

Spaced repetition is one of the most robust findings in cognitive psychology. Learning distributed across multiple sessions with spacing in between produces better long-term retention and transfer than the same total time spent in a single block.

Why? Your brain needs time to consolidate. Between your morning and evening sessions, your neural networks are processing patterns, integrating insights, and strengthening connections. The spacing isn’t dead time—it’s when learning actually happens at the neural level.

There’s also a practical cognitive load argument: Deliberate practice with AI is mentally taxing. You’re not just using a tool. You’re simultaneously:

  • Formulating precise problems
  • Evaluating outputs for accuracy and utility
  • Developing critical judgment
  • Building pattern recognition about where AI excels and fails
  • Learning to iterate effectively

Ninety minutes of sustained high-quality attention on this is difficult. Two 45-minute blocks with spacing allow you to maintain quality throughout both sessions.

Interleaving opportunities

Two sessions also create natural variation in your practice:

Session 1 (typically morning or early afternoon):

  • Exploration and experimentation
  • Taking on new problem types
  • Pushing into unfamiliar territory
  • Discovering what’s possible

Session 2 (typically evening):

  • Refinement of morning’s work
  • Application to variations
  • Documentation of what worked
  • Critical reflection on failures

This interleaving—switching between exploration and refinement, between pushing boundaries and consolidating gains—supports learning transfer. You’re not just practicing a single skill; you’re building flexible expertise.

Why six months?

This timeline is deliberate.

Skill acquisition literature suggests that genuine fluency in a complex domain requires hundreds of hours of deliberate practice. Six months at 90 minutes a day gives you roughly 195-270 hours, depending on whether you practice five or seven days a week. That's enough to move from novice to intermediate competence—where you have reliable intuitions, can solve novel problems, and can distinguish good outputs from poor ones automatically.
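The arithmetic behind that range is worth making explicit. A minimal sketch, assuming two 45-minute sessions per day over a 26-week (six-month) window; the 5-day and 7-day schedules are my own illustrative cases:

```python
# Rough practice-hours arithmetic for the six-month commitment.
# Assumptions (illustrative): two 45-minute sessions per day,
# a 26-week window, and either 5 or 7 practice days per week.

MINUTES_PER_SESSION = 45
SESSIONS_PER_DAY = 2
WEEKS = 26

def total_hours(days_per_week: int) -> float:
    """Total deliberate-practice hours over the six-month window."""
    minutes = MINUTES_PER_SESSION * SESSIONS_PER_DAY * days_per_week * WEEKS
    return minutes / 60

print(total_hours(5))  # weekdays only -> 195.0 hours
print(total_hours(7))  # every day -> 273.0 hours
```

Either schedule clears the "hundreds of hours" threshold the skill-acquisition literature points to; missing the occasional day doesn't change the conclusion.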

Habit formation takes longer than the popular “21 days” myth suggests. Research on habit formation (Lally et al., 2010) found it takes an average of 66 days for a new behavior to become automatic, with significant individual variation. Six months provides enough runway to move through the difficult early period into sustained practice that feels natural.

Pattern recognition requires volume. You need to see hundreds of AI outputs across diverse problem types to develop reliable intuitions about capabilities and failure modes. The first month builds surface familiarity. The second and third months start revealing patterns. By months four through six, you’re operating with genuine fluency.

There are also progression markers:

Month 1-2: You’re learning the basics—how to prompt effectively, how to iterate, what “good enough” looks like in your domain.

Month 3-4: Patterns emerge. You start predicting where AI will excel before you even try. You develop shortcuts. Your prompts become more sophisticated.

Month 5-6: Fluency. You’re working at a qualitatively different level. Problems that took your entire first session now take 15 minutes. You’re tackling significantly more complex work. Your judgment about outputs has become reliable.

This is why six months. Anything less and you’re still in the learning-the-basics phase. You want to get to fluency.

Structuring effective sessions

What does good practice look like?

Session 1 (45 minutes):

Minutes 1-5: Problem selection

  • Choose a real problem from your actual work
  • Not hypothetical, not generic—something with stakes
  • It should be at the edge of your current capability with AI
  • Write down your specific goal for the session

Minutes 6-35: Active iteration

  • First prompt and output
  • Critical evaluation: what worked, what didn’t
  • Refined prompt
  • Second output and evaluation
  • Continue iterating until you have something usable
  • Push for deployment-quality, not just “interesting”

Minutes 36-45: Documentation

  • What patterns did you notice?
  • Where did the AI excel? Where did it fail?
  • What would you do differently next time?
  • Record this in your practice journal (more on this below)

Break period (minimum 2-4 hours, ideally more):

Let your brain consolidate. This isn’t wasted time. Neural consolidation during rest periods is when pattern recognition actually forms. Do other work. Live your life. Your brain is processing in the background.

Session 2 (45 minutes):

Minutes 1-5: Review and variation

  • Look at what you produced in Session 1
  • Identify a variation or extension
  • Or tackle a related problem that builds on morning’s patterns

Minutes 6-35: Refinement and application

  • Apply morning’s insights to a new problem
  • Or push the morning’s output to higher quality
  • Test whether the patterns you noticed actually generalize
  • Deliberately try to break what worked—understand boundaries

Minutes 36-45: Meta-reflection

  • What did you learn today about AI capabilities?
  • What did you learn about your domain’s vulnerability to AI?
  • What surprised you?
  • Update your practice journal

The practice journal

This is non-negotiable if you want to maximize learning. Use whichever format works for you—digital doc, notebook, note-taking app—but record:

  • Date and session number
  • Problem attempted
  • What worked (prompt strategies, approaches, iterations)
  • What failed (be specific about failure modes)
  • Patterns noticed
  • Questions raised
  • Predictions about what would work next time

Over six months, this becomes an invaluable resource. You’ll notice meta-patterns across dozens of problems. You’ll see your own learning trajectory. You’ll have a personalized knowledge base specific to your domain and problem types.

This is also where the spacing between sessions becomes powerful. Session 2 often reveals that what you thought worked in Session 1 actually has limitations. Recording both perspectives captures learning you’d miss in a single session.
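The journal checklist above can be captured with a tiny helper script, if a digital format suits you. This is a sketch, not prescribed tooling: the field names mirror the list above, while the filename and entry layout are my own assumptions.

```python
# Minimal practice-journal helper: appends one structured entry per session
# to a plain-text file. Field names mirror the journal checklist in the
# article; the filename and layout are illustrative assumptions.
from datetime import date

JOURNAL_PATH = "practice_journal.md"  # hypothetical location

FIELDS = [
    "Problem attempted",
    "What worked",
    "What failed",
    "Patterns noticed",
    "Questions raised",
    "Predictions for next time",
]

def format_entry(session_number: int, responses: dict) -> str:
    """Render one journal entry; unanswered fields stay blank to fill in later."""
    lines = [f"## {date.today().isoformat()} | Session {session_number}"]
    for field in FIELDS:
        lines.append(f"- {field}: {responses.get(field, '')}")
    return "\n".join(lines) + "\n\n"

def log_session(session_number: int, responses: dict) -> None:
    """Append a formatted entry to the journal file."""
    with open(JOURNAL_PATH, "a", encoding="utf-8") as f:
        f.write(format_entry(session_number, responses))
```

The blank-field behavior is deliberate: an empty "Patterns noticed" line staring at you is a prompt to reflect, which is the point of the journal.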

What “real problems” actually means

This is where most people fail. They confuse using AI with practicing AI.

Not real problems:

  • “Write me a blog post about leadership”
  • “Explain quantum computing”
  • “Generate creative team names”
  • “Summarize this article”

These are casual requests. They might produce useful outputs, but they don’t build fluency because there’s no genuine constraint, no stakes, and insufficient complexity to force you to develop real skill.

Real problems have:

  • Specific constraints (“within 500 words,” “using only these sources,” “matching this brand voice”)
  • Context that matters (“for this specific audience,” “building on this existing work”)
  • Quality thresholds you can actually evaluate (“good enough to send to client,” “accurate enough to present to leadership”)
  • Consequences (“I will actually use this output in my work”)

Examples across domains:

Business/Strategy:

  • Analyze this market segment using these three competitor reports and our internal data
  • Draft talking points for next week’s board presentation on Q4 performance
  • Create a decision framework for choosing between these three vendor proposals
  • Write the first draft of our response to this RFP, incorporating our past winning proposals

Creative/Communications:

  • Write a client brief response that matches our brand guidelines and addresses these three specific objections
  • Draft three variations of this email campaign, each optimized for different audience segments
  • Develop a content outline for a webinar that bridges our technical capabilities and client pain points
  • Rewrite this technical documentation for a non-technical executive audience

Technical/Analytical:

  • Debug this code that’s failing in production with these specific error messages
  • Optimize this SQL query that’s timing out on our production database
  • Analyze this dataset to identify patterns related to customer churn
  • Generate test cases for this new feature based on our existing test framework

Research/Learning:

  • Synthesize these five academic papers on [topic] and identify contradictions in their methodology
  • Create a structured comparison of these three theoretical frameworks
  • Extract key insights from this 100-page report that are relevant to our specific question
  • Build a research timeline showing how thinking evolved on this topic over the past decade

Notice the specificity. Notice the constraints. Notice that in every case, you have enough context to evaluate whether the output is actually good.

Progressive difficulty

Don’t start with the hardest problems. Build up.

Week 1-2: Foundational

  • Single-step problems with clear success criteria
  • Tasks you could do yourself but want to do faster
  • Focus on learning basic prompting and iteration

Week 3-4: Intermediate

  • Multi-step problems requiring AI to synthesize information
  • Tasks where you need to evaluate quality, not just completion
  • Start pushing into areas where you’d need significant time to do manually

Week 5-8: Advanced

  • Complex problems requiring multiple iterations and refinement
  • Tasks where the AI’s output becomes the foundation for further work
  • Problems where you’re genuinely uncertain if AI can help

Week 9+: Expert

  • Novel problems you haven’t seen AI tackle before
  • Combinations of capabilities (analysis + writing + structuring)
  • Tasks where failure is likely but learning is high
  • Problems where you’re testing the boundaries of what’s possible

This progression ensures you’re always operating at the edge of your capability—the sweet spot for deliberate practice—without getting overwhelmed early or plateauing later.

The iteration loop

Deliberate practice is fundamentally about iteration. The cycle looks like this:

1. Formulate the problem precisely

  • What exactly are you asking for?
  • What context does the AI need?
  • What constraints matter?
  • What does success look like?

2. Generate initial output

  • Your first prompt
  • Review what comes back
  • Resist the urge to accept “good enough” on iteration one

3. Critical evaluation

  • Where is it accurate? Where does it hallucinate?
  • What’s missing? What’s extraneous?
  • Does it match your quality threshold?
  • Would you actually use this?

4. Refine your approach

  • Adjust the prompt (more specific, different framing, added context)
  • Or try a completely different strategy
  • Don’t make tiny tweaks—make meaningful changes

5. Iterate until deployment-ready

  • Keep going until you’d actually use the output
  • “Interesting” isn’t enough
  • “Close” isn’t enough
  • It should meet your professional standards

6. Deploy and document

  • Actually use the output (or decide why you won’t)
  • Record what worked in your practice journal
  • Note what you’d do differently next time

Most people stop after steps 1-2. They get an initial output, think “neat,” and move on. That’s not practice. That’s browsing.

Deliberate practice means pushing through 4-6 iterations until you have something you’d stake your professional reputation on. That’s where the learning happens: in the gap between the first attempt and “deployment ready.”
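The cycle above can be sketched as a loop. Treat this as a thinking aid, not an automation recipe: `generate`, `evaluate`, and `refine` stand in for your model call, your own critical judgment, and your prompt revision; all three are hypothetical hooks, not a real API.

```python
# Sketch of the formulate -> generate -> evaluate -> refine cycle.
# The three callables are placeholders for a model call, your own
# judgment, and your prompt revision; none is a real library API.

def iterate_until_deployable(prompt, generate, evaluate, refine, max_rounds=6):
    """Iterate until the output meets your deployment threshold,
    or the round budget runs out."""
    for round_number in range(1, max_rounds + 1):
        output = generate(prompt)
        verdict = evaluate(output)       # your critical evaluation
        if verdict == "deployment-ready":
            return output, round_number
        prompt = refine(prompt, output)  # meaningful change, not a tiny tweak
    return None, max_rounds              # a failure worth documenting in your journal

# Toy usage: "deployment-ready" here just means the output contains
# the context we kept adding.
result, rounds = iterate_until_deployable(
    "draft",
    generate=str.upper,
    evaluate=lambda out: "deployment-ready" if "CONTEXT" in out else "revise",
    refine=lambda p, out: p + " context",
)
print(result, rounds)
```

Note that the loop returns `None` when the budget runs out rather than the best attempt so far: declining to deploy is itself a documented outcome, per step 6.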

The feedback mechanisms you need

AI outputs are seductive. They look polished. They sound confident. They’re often wrong in subtle ways that require expertise to catch.

You need reliable feedback mechanisms to develop good judgment:

1. Ground truth comparison

For factual claims, verify against authoritative sources. Don’t assume accuracy. Check specific facts, especially:

  • Statistics and data points
  • Dates and timelines
  • Attributions and quotes
  • Technical specifications
  • Causal claims

2. Expert evaluation when possible

If you’re in a domain with clear expertise:

  • Compare AI output to expert-produced work
  • Have colleagues review AI-assisted outputs
  • Test outputs against professional standards you already know

3. Deployment testing

The ultimate feedback: does it work in the real world?

  • Send the email—does it get the response you wanted?
  • Present the analysis—does it hold up to questions?
  • Run the code—does it execute correctly?
  • Use the strategy—does it produce results?

4. Failure analysis

When outputs fail, dissect why:

  • Was the prompt unclear?
  • Did the AI lack necessary context?
  • Did it hallucinate specific types of information?
  • Was the task actually beyond current capabilities?

Record these failures. They’re more valuable for learning than successes.

5. Calibrating your “good enough” threshold

Over time, you’ll develop an internal sensor for output quality. This sensor needs calibration:

  • Too permissive: You accept outputs that need significant revision. You’re not getting real value, and you’re not learning where the boundaries are.
  • Too strict: You demand perfection and never use AI-assisted work. You’re creating make-work and missing genuine productivity gains.
  • Well-calibrated: You quickly distinguish between “deploy as-is,” “needs minor revision,” and “scrap and restart.” You trust the good outputs and catch the bad ones reflexively.

This calibration only comes from volume. You need to evaluate hundreds of outputs and see which ones hold up under scrutiny and which ones don’t. There’s no shortcut.

6. Build your personal rubric

After the first month, start formalizing what you evaluate:

For written outputs:

  • Factual accuracy
  • Logical coherence
  • Appropriate tone and voice
  • Completeness (addresses all parts of the prompt)
  • Usefulness (can you actually deploy this?)

For analytical outputs:

  • Correct methodology
  • Sound reasoning
  • Acknowledges limitations
  • Cites sources appropriately
  • Actionable insights

For code:

  • Executes without errors
  • Follows best practices
  • Handles edge cases
  • Readable and maintainable
  • Secure and efficient

Your rubric will be domain-specific. Build it consciously over time. This becomes your quality filter—the difference between using AI productively and being misled by confident-sounding nonsense.
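One way to make the rubric operational is a triage function that maps pass/fail judgments to the three-way verdict described under calibration. A minimal sketch, assuming the written-output criteria above; the criteria names and the "three of five" threshold are illustrative assumptions you would tune from your own journal:

```python
# Sketch of a personal rubric as a triage function. Criteria names are
# drawn from the written-output list above; the pass threshold is an
# illustrative assumption, not a fixed rule.

WRITTEN_RUBRIC = [
    "factually accurate",
    "logically coherent",
    "appropriate tone",
    "complete",
    "deployable as-is",
]

def triage(passed: set) -> str:
    """Map the set of criteria an output passed to a three-way verdict."""
    score = sum(1 for criterion in WRITTEN_RUBRIC if criterion in passed)
    if score == len(WRITTEN_RUBRIC):
        return "deploy as-is"
    if score >= 3:
        return "needs minor revision"
    return "scrap and restart"
```

The value isn't in the code; it's in being forced to name your criteria and your threshold explicitly, which is exactly the calibration work described above.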

V. What You’ll Actually Learn

Six months of deliberate practice will give you two distinct types of knowledge: empirical understanding of what’s real and informed intuition about what’s coming. These are different epistemological categories, and it’s important to distinguish them.

Empirical understanding (what’s real)

This is knowledge grounded in direct experience. You’ve done it. You’ve debugged it. You’ve felt it fail. You know it the way you know how to ride a bicycle—in your body, in your gut, beyond verbal explanation.

Pattern recognition about capabilities:

After hundreds of sessions, you’ll instantly recognize:

  • Tasks where AI consistently excels (structured analysis, synthesis of large documents, code generation within familiar patterns, format conversion)
  • Tasks where it’s unreliable (novel creative thinking, precise numerical reasoning, understanding nuanced context, maintaining consistency across very long outputs)
  • Edge cases where success depends on prompt quality (complex instructions requiring multiple constraints, domain-specific terminology, stylistic matching)

This isn’t theoretical knowledge. It’s pattern recognition developed through volume. You see a problem and immediately know whether AI will help, how much effort it will take, and what failure modes to watch for.

Your domain’s vulnerability map:

You’ll develop a detailed internal model of which aspects of your work are genuinely transformable by AI and which aren’t. This is incredibly valuable strategic knowledge.

For instance, you might discover:

  • Initial draft generation: AI is excellent, saves you 60% of time
  • Data synthesis: AI is good but requires careful verification
  • Strategic judgment: AI provides useful perspectives but can’t replace expertise
  • Client relationship management: AI can draft communications but can’t substitute for personal connection

This map is specific to your domain and your role. Generic advice can’t give this to you. Only practice can.

Workflow redesign intuitions:

You’ll naturally start restructuring how you work:

  • Which tasks to fully automate
  • Which to augment (you + AI is better than either alone)
  • Which to leave entirely to human judgment
  • How to sequence work to maximize AI leverage

This happens organically through practice. You don’t plan the workflow redesign in advance—you discover it by noticing which collaborations with AI feel productive and which feel like friction.

Output quality calibration:

You develop a reliable gut sense:

  • “This output is deployment-ready.”
  • “This needs minor revision, but the core is solid.”
  • “This has subtle errors that could be dangerous.”
  • “This completely missed the point—restart.”

This judgment becomes automatic. You scan an output and know within seconds whether it’s trustworthy. That’s expertise.

Informed projection (what’s coming)

You can’t know the future empirically. But you can develop informed intuition through pattern recognition across many observations.

Distinguishing capability from hype:

Hands-on practice is the only reliable way to separate signal from noise in AI discourse. You’ll be able to:

  • Evaluate new capability claims against your direct experience
  • Identify when marketing overstates actual utility
  • Recognize when skeptics are dismissing genuine breakthroughs
  • Assess whether new features actually matter for your work

When someone claims “AI can do X,” you won’t need to take it on faith or dismiss it reflexively. You’ll have frameworks for testing the claim yourself.

Strategic thinking about trajectories:

With a deep understanding of current capabilities, you can make educated guesses about 2-5 year trajectories:

  • If AI currently handles structured analysis well but struggles with novel synthesis, where is research energy likely focused?
  • If current limitations are primarily about context length and consistency, what changes when those improve?
  • Which of your domain’s tasks are likely to transform first? Which will resist transformation?

This isn’t prophecy. It’s an informed extrapolation based on understanding the frontier.

The confidence gap:

Perhaps most importantly, you’ll develop appropriate epistemic humility:

  • Knowing: “AI can currently do X with reliability of Y”
  • Understanding: “Here’s why it succeeds at X and fails at Z”
  • Forecasting: “Based on current trajectories, I expect…”

You’ll learn to distinguish these levels and communicate with appropriate confidence at each level. You won’t make overconfident claims about the future, but you also won’t be paralyzed by uncertainty about the present.

Why hands-on practice is the only path to good forecasting:

Reading about AI gives you secondhand knowledge. It's better than nothing, but it's fundamentally limited. You won't know which claims to trust. You won't have frameworks for evaluating new capabilities, so you can't distinguish marginal improvements from genuine breakthroughs.

Hands-on practice gives you primary evidence. You’ve seen what works and what doesn’t. You’ve developed pattern recognition. When new capabilities emerge, you can test them against problems you’ve already worked on. You have a baseline.

This is why six months of deliberate practice will give you better forecasting ability than three years of reading about AI. Direct experience builds calibrated intuition. Secondhand knowledge builds opinions.

The meta-skill: Learning to learn with AI

Perhaps the most valuable outcome isn’t building any specific capability, but becoming skilled at learning new AI capabilities as they emerge.

The tools will evolve. Rapidly. What you learn from today’s models will be partially obsolete in a year…or a month. But the skill of learning how to work with new AI capabilities—that’s transferable.

You’ll have developed:

  • Efficient prompting strategies that transfer across models
  • Frameworks for testing new capabilities quickly
  • Pattern recognition about AI strengths and limitations
  • Iteration strategies that work regardless of the specific tool
  • Quality evaluation rubrics that adapt as capabilities improve

This meta-skill is the real prize. It means you’re not dependent on any specific tool or model. You can adapt as the technology evolves.

VI. Addressing the Objections

Let me address the resistance you’re likely feeling. These objections are reasonable. They’re also surmountable.

“I don’t have two 45-minute sessions available”

Yes, you do. Let me be direct about this.

You have 90 minutes daily that you’re currently spending on something else. Email. Meetings. Social media. News browsing. Low-value work that feels urgent but isn’t important. The question isn’t whether you have the time, but whether you’re willing to reallocate it.

Calculate the opportunity cost: What does professional obsolescence cost you over the next five years? What’s it worth to lack fluency in a technology that’s reshaping knowledge work across every domain? Compare that to 90 minutes daily for six months.

If you genuinely cannot find 90 minutes, if every minute of your day is allocated to high-value activities with no slack, then you have a time management problem that precedes AI literacy. Fix that first.

For most people, the honest issue isn’t availability. It’s prioritization. You don’t believe the investment is worth it yet. That’s fine. Just be honest about what’s actually stopping you.

“AI is overhyped / will plateau”

Maybe. You might be right. Let’s assume you are.

What hands-on practice gives you is empirical evidence to assess this claim yourself. You won’t be dependent on other people’s opinions. You’ll know what current AI can and cannot do because you’ve tested it on real problems in your domain.

If AI plateaus, you haven’t wasted six months. You’ve developed skills that make you more productive with the current plateau. You’ve built expertise that differentiates you from peers who never bothered to learn.

If AI doesn’t plateau and capabilities continue expanding, you’re positioned at the frontier. You’re not scrambling to catch up while others have months or years of compounding expertise.

This is what investors call an asymmetric bet. The downside of being wrong about AI’s trajectory is minimal if you’ve built practical skills. The downside of being right about the hype but doing nothing is that you never develop competence with tools that, even in their current state, provide significant leverage.

More importantly, hands-on practice is the only reliable way to separate signal from noise. Reading about AI doesn’t give you this. Everyone has opinions, but few have evidence. Six months of deliberate practice gives you evidence.

“My industry is different”

Every industry thinks this. Most are wrong.

I’ve heard this from lawyers, doctors, educators, consultants, engineers, creatives, analysts, and executives. They all believe their work involves unique judgment, context, and expertise that AI cannot replicate.

They’re partially right. AI cannot (currently) replicate senior-level strategic judgment in complex domains. But that’s not what’s at stake.

What’s at stake is whether AI can handle 30-50% of the tasks that currently occupy your time—the analysis, synthesis, drafting, formatting, research, and structured thinking that surround the core judgment work. In most knowledge work roles, it can.

The practice helps you determine if your industry actually is different. Maybe it is. But you cannot make that determination without testing AI on real problems from your domain. Theoretical arguments about uniqueness don’t hold up against empirical evidence.

Bring your actual work to the practice sessions. Test the boundaries. See what holds and what doesn’t. Then you’ll know.

“I’ll wait for better tools”

In six months, the tools will be better. You’ll be right.

And you’ll still be behind everyone who started today.

The reason is simple: They won’t be using the same tools you’re finally picking up. They’ll have moved forward with the frontier. They’ll have six months of pattern recognition, six months of developed judgment, six months of compounding expertise.

You’ll be starting from zero on tools that they’ve already mastered and moved beyond.

The skill isn’t just operating a specific tool. It’s learning how to learn with evolving AI capabilities. That skill only develops through sustained practice. Waiting for better tools means you’re also waiting to start building that meta-skill.

Six months from now, you’ll wish you had started today. This is true whether the tools improve dramatically or plateau. The regret comes from lost time, not from using imperfect tools.

Start with what’s available now. Build competence. Adapt as tools evolve.

“This feels like work, not exploration”

It is work. That’s the entire point.

Exploration is pleasant. It’s low-stakes. You ask AI to write a poem or explain a concept, marvel at the output, and move on. Exploration doesn’t build fluency. It builds familiarity at best.

Deliberate practice is effortful. You’re working at the edge of your capability. You’re debugging failures. You’re iterating until outputs meet professional standards. You’re developing critical judgment. This is cognitively demanding.

That’s why it works.

All skill acquisition feels like work. Learning to play an instrument, speak a language, write code, or analyze data requires sustained, focused effort. AI fluency is no different.

If you want casual exploration, the free tier is fine. Browse. Experiment. Have fun.

If you want genuine competence of the kind that gives you professional advantage and strategic understanding, you need to do the work. Six months of deliberate practice. Two sessions daily. Real problems. Usable outputs.

The choice is yours. But be clear about what you’re choosing.

VII. Building the Habit

Deliberate practice only works if you actually maintain it. What follows are strategies to make it sustainable.

Making it sustainable

Scheduling matters more than motivation:

Don’t rely on willpower. Build structure.

  • Fix the times first. Same two windows every day. Treat them like non-negotiable meetings. Morning and evening works for most people—first session before the workday chaos begins, second session after dinner. But find your rhythm.
  • Block the calendar. Literally put these sessions on your calendar. Decline conflicting meetings.
  • Prepare the environment. Close unnecessary tabs. Silence notifications. Remove friction.

Accountability mechanisms:

Some people thrive with external accountability:

  • Daily check-ins with a practice partner—a simple yes/no on whether you completed both sessions.
  • Public commitment. Tell colleagues you’re doing this. Social pressure supports follow-through.
  • Streak tracking. Mark an X on a calendar for every day completed. Don’t break the chain.

Others work better with internal accountability. The practice journal becomes its own evidence and motivation. Notice yourself getting faster, better, more fluent. Track skill progression. You start producing higher-quality outputs in less time, and that becomes the reward.

Find what works for you. But have something. Pure willpower fails.

The first 30 days are hardest:

You’ll face resistance. Your brain will generate excellent reasons to skip sessions:

  • “I’m too tired today”
  • “This one meeting is more important”
  • “I’ll do extra tomorrow to make up for it”

These are lies you’re telling yourself. Not malicious lies—just the normal resistance that accompanies habit formation.

Push through. The first month is about building the habit, not about dramatic breakthroughs. If you maintain the practice through day 30, you’re over the hardest part.

By day 60, it starts feeling natural. By day 90, you’ll notice when you miss it—the way you notice when you skip exercise or meditation if those are established habits.

Documentation as practice

Your practice journal isn’t optional overhead. It’s part of the practice.

After each session, spend five minutes capturing the essentials:

  • what problem you attempted
  • what strategies worked (be specific about prompts)
  • what failed (be specific about failure modes)
  • what patterns emerged
  • what surprised you, and what questions arose
  • what you’ll tackle tomorrow

This shouldn’t feel like homework. You’re externalizing insights that would otherwise evaporate by morning.

Why this matters:

First, externalized memory. You can’t hold six months of patterns in your head. The journal holds them for you.

Second, meta-learning. Reviewing past entries reveals patterns you didn’t notice in the moment. “I keep making the same prompting mistake.” “This type of problem consistently works well.” “My iteration strategy has evolved.”

Third, progress tracking. On days when practice feels futile, flip back 30 days. Look at what stumped you then versus what’s trivial now. The progress is real.

Fourth, personal knowledge base. This becomes a reference you’ll use for years. Domain-specific insights about AI capabilities in your field, documented through direct experience.

Format doesn’t matter. Consistency does. Digital doc, notebook, note-taking app, voice recordings—use whichever format you’ll actually maintain. But maintain it.
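If a plain text file is your format, the capture routine above can be automated with a small script. This is just an illustrative sketch, not a prescribed tool—the file name `practice-journal.md` and the `append_entry` helper are my own inventions, assuming a markdown journal and Python:

```python
from datetime import date
from pathlib import Path

# Prompts mirroring the capture list: problem attempted, what worked,
# what failed, patterns, surprises and questions, tomorrow's target.
TEMPLATE = """\
## {today} — Session {session}

**Problem attempted:**
**What worked (exact prompts):**
**What failed (failure modes):**
**Patterns noticed:**
**Surprises / open questions:**
**Tomorrow's target:**

"""

def append_entry(journal: Path, session: int) -> None:
    """Append a dated, pre-structured entry to the journal file."""
    entry = TEMPLATE.format(today=date.today().isoformat(), session=session)
    with journal.open("a", encoding="utf-8") as f:
        f.write(entry)
```

Running `append_entry(Path("practice-journal.md"), session=1)` at the end of each session drops a ready-to-fill entry at the bottom of the file, which removes one more bit of friction from the five-minute review.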

Adjusting without abandoning

Life happens. You’ll get sick. You’ll travel. Family emergencies arise. The practice will be interrupted.

The difference between flexibility and collapse:

Flexibility: You miss three days due to illness. Day four, you resume both sessions.

Collapse: You miss three days, decide you’ve “ruined the streak,” and stop practicing.

Don’t collapse. Adjust.

When disruptions happen:

  • Reduce but don’t eliminate. If two 45-minute sessions aren’t possible, do one 30-minute session and maintain the habit at reduced capacity.
  • Plan for known disruptions. When travel, major deadlines, or family events are coming, decide in advance how you’ll maintain practice in that context. Hotel rooms work. Airport lounges work. It’s possible.
  • Restart cleanly. If you stop for more than a week, treat it like a fresh start. Don’t try to “catch up.” Just begin again with session one.
  • Review rather than regret. If you break the practice, ask what conditions led to the breakdown and how you can prevent it next time. Then move forward.

The six-month goal is a target, not a contract:

If you complete 150 days instead of 180, you still have vastly more expertise than someone who never started. Don’t let perfect be the enemy of good. But also don’t use this as permission for inconsistency. The goal is 180 consecutive days of deliberate practice. Maintain that standard while building in realistic flexibility for genuine disruptions.

VIII. Conclusion: Agency vs. Observation

Consider what the next six months look like:

Path A: You observe.

You read articles about AI. You watch demos. You form opinions based on other people’s experiences. You wait to see what happens. You assume that when AI becomes “important enough,” you’ll learn it then.

Six months from now, you’re still observing. The frontier has moved. Those who were practicing are now fluent. They’re operating at a level you don’t fully understand. They’re redesigning workflows, producing higher-quality work in less time, and developing strategic insights you don’t have access to.

You’re forming opinions about their work. They’re doing the work.

Path B: You practice.

You invest in premium tools. You block two 45-minute sessions daily. You bring real problems. You iterate until outputs are deployment-ready. You build pattern recognition through hundreds of hours of hands-on work. You document what you learn.

Six months from now, you have fluency. You know what’s real and what’s hype because you’ve tested it. You have a visceral understanding of where AI excels and where it fails in your specific domain. You’ve developed informed intuition about what’s coming because you understand the current frontier deeply.

Most importantly, you have agency. You’re not reacting to change. You’re positioned to shape it in your field.

The distinction between these paths is choice, not circumstance.

You have access to the tools. You have the time if you choose to reallocate it. You have real problems to practice on. The only variable is whether you commit.

This isn’t about “keeping up” with technology. It’s about professional agency in a period of profound transformation. Those who build competence now will have disproportionate influence over how AI reshapes their domains. Those who wait will be shaped by decisions others make.

Which side of that divide do you want to be on?

What’s real becomes clearer through practice. What’s coming becomes more predictable through understanding the present.

Both require the same thing: daily, deliberate engagement with the actual technology, not with discourse about the technology.

Start tomorrow. Two 45-minute sessions. Real problems. Usable outputs. Six months of this will transform your relationship with AI from theoretical to practical, from observation to competence, from reaction to agency.

The investment compounds daily. The knowledge gap you close is enormous.

Everything in this article points to one actionable step: open your calendar and block those two 45-minute sessions for tomorrow. Then show up and practice.

The work begins now.

This article is the anchor piece in my “Approaching AI” series on building practical AI literacy. For related frameworks and resources, visit [your blog/foundation link]. The short version of this article, “Approaching AI: The Daily Practice,” provides a condensed reference you can return to regularly.

Future resources in this series will include downloadable practice journal templates, session planning worksheets, and domain-specific example problems to support your daily practice.
