Building Software Faster with LLMs: Part 5 - Learning and Knowledge Accumulation
Note: This article series was written by me, with LLMs helping to refine the style and structure.
This is the final article in my series on working with LLMs at scale. Part 1 through Part 4 focused on building with LLMs—managing sessions, ensuring quality, coordinating parallel work.
This article explores a different theme: the place of learning in the era of LLMs.
Series Navigation: ← Part 4: Rapid Evolution
The Learning Paradox
Here’s the uncomfortable truth: to be effective with LLMs today, you need to know the craft. When using Claude to write code, I rely on decades of software engineering experience—I know what good code looks like, what patterns to avoid, what trade-offs matter.
But people entering the field today don't face the same pressure to learn those fundamentals that I did. Why spend years mastering the intricacies of memory management, algorithmic complexity, or API design when an LLM can handle most of it?
This creates a void in expertise. If fewer people develop deep craft knowledge because LLMs handle the implementation details, where does the next generation of experts come from? The ones who can steer the LLMs, evaluate their output, and make good architectural decisions?
It’s likely a temporary problem. There’s no reason to think that I myself—and my ways of thinking—cannot eventually be replaced by LLMs. The “steering” and “judgment” I provide today may be automated tomorrow.
But for now, we’re in this transition. And I still value learning deeply. It’s been a core motivation for me throughout my career.
So here’s how I use LLMs to accelerate my own learning, working with my fallible human brain that still needs repetition, reinforcement, and deliberate practice to internalize complex ideas.
Prompting as a Learned Skill
I’ve had over 2,000 conversations with AI. I’m still learning how to prompt better. It’s a skill, and one I recommend others invest in.
Good prompts aren’t just about getting answers—they’re about shaping the learning experience to match how you learn best. Here are some examples from my own learning journey.
Example 1: Catching Up on React
I used React when it first started, but it’s changed dramatically. Instead of asking “explain React hooks,” I framed it as a structured learning path:
I used React at the beginning when it started but now it has changed a lot.
Do you think you can help me be up to speed with all the concepts in a few
bite-size lectures with example code?
I am interested to cover at least: hooks, effects, callbacks, routing,
state management, async operations, axios, reusable components, performance,
and the history of the framework.
Key elements of this prompt:
- Context: I’m not a beginner, I’m catching up
- Format: Bite-size lectures with code
- Scope: Specific topics I need to cover
- Goal: Get up to speed, not comprehensive mastery
This gave me a structured curriculum tailored to my experience level.
Example 2: Understanding Anthropic
When I wanted to learn about Anthropic as a company, I didn’t ask for a Wikipedia-style overview. I asked for serialized, digestible stories:
I want you to give me bite-size stories about Anthropic, the AI company.
The stories have to come in chronological order, one at a time (wait for me
to say continue). They have to be few paragraphs long to cater to the low
attention span of human beings in the 21st century.
Cover: the origin of the company, the founders, key people, Claude, ethics,
Sonnet and Opus, Claude Code, etc. Aim for 10 stories total for now.
Key elements:
- Format: Serialized stories, not a wall of text
- Pacing: One at a time, controlled by me
- Tone: Acknowledges modern attention spans
- Coverage: Specific topics I care about
- Scope: asking for 10 stories sets expectations up front
This turned learning into an interactive experience rather than passive reading.
Pattern 1: Heavily-Annotated Code (Worksheets)
When learning a new codebase or pattern, I ask Claude to generate worksheets—code with extensive inline comments explaining every decision.
This works because code with explanations serves as a learning artifact you can read, annotate, and reference later. The key is being specific about what you want explained:
Generate [concept] code with inline comments explaining:
- Why this approach (vs alternatives)
- Common mistakes
- Performance considerations
- When to use / not use
- How it connects to [related concept]
I’ve used this pattern to learn (a sketch of the worksheet style follows this list):
- Rust’s ownership system
- Clojure’s transducers
- Kubernetes operators
- Distributed system patterns
- Database indexing strategies
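To give a feel for the format, here is a minimal, hypothetical worksheet in Python, with binary search standing in for the concept. It is not verbatim Claude output; real worksheets are longer and target the actual topic:

# Worksheet sketch: binary search, annotated in the style described above
def binary_search(items, target):
    # Why this approach: O(log n) lookups on sorted data beat a linear scan
    # once the list grows beyond a few dozen elements.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        # Common mistake: looping on lo < hi instead of lo <= hi skips the final candidate.
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    # When not to use: unsorted data, or data that changes more often than it is read.
    return -1

The value is not the algorithm itself but the “why” and “when not to” commentary woven through it.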
Pattern 2: Flashcards for Spaced Repetition
Reading annotated code helps understanding, but doesn’t ensure retention. For that, I use flashcards with spaced repetition.
The Flashcard System
I built a flashcard CLI that implements spaced repetition algorithms (SM-2, Leitner boxes).
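For readers unfamiliar with SM-2, the scheduling core is small. Here is a minimal sketch in Python—not my CLI’s actual code, and it assumes the 1–5 confidence ratings map onto SM-2’s quality scale:

def sm2_review(quality, repetitions, interval_days, ease):
    # One SM-2 step: returns the updated (repetitions, interval_days, ease).
    if quality < 3:
        # Failed recall: restart the sequence and review again tomorrow.
        return 0, 1, ease
    # Ease grows or shrinks with recall quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if repetitions == 0:
        interval_days = 1
    elif repetitions == 1:
        interval_days = 6
    else:
        interval_days = round(interval_days * ease)
    return repetitions + 1, interval_days, ease

With the default ease of 2.5, a card you keep rating 4 comes back after roughly 1 day, then 6 days, then 15 days; a card rated below 3 comes back the next day.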
But manually creating flashcards is tedious. So I ask Claude to generate them.
Example: Learning System Design
Generate flashcards for distributed systems concepts. Format as JSON.
Focus on: CAP theorem, consensus algorithms, partitioning strategies.
Each card should have:
- A precise question
- A complete answer
- Tags for organization
- Notes with additional context
Claude generates:
[
{
"question": "In the CAP theorem, what does 'Partition Tolerance' actually mean?",
"answer": "The system continues to operate despite network partitions (messages between nodes being lost or delayed). You must choose: consistency (all nodes see same data) or availability (all requests get responses) during a partition.",
"tags": ["cap-theorem", "distributed-systems", "fundamentals"],
"notes": "Common misconception: You can't 'give up' partition tolerance—network partitions will happen. You choose between CP (consistent but not available during partition) or AP (available but inconsistent during partition)."
},
{
"question": "What is the key difference between Raft and Paxos consensus algorithms?",
"answer": "Raft uses leader election + log replication (simpler to understand). Paxos uses proposal numbers + acceptors (more flexible but complex). Raft is easier to implement correctly.",
"tags": ["consensus", "raft", "paxos", "algorithms"],
"notes": "Raft was explicitly designed to be more understandable than Paxos. In practice, most systems use Raft (etcd, Consul) or Raft-like algorithms (Zab in ZooKeeper)."
},
{
"question": "When should you use hash-based partitioning vs range-based partitioning?",
"answer": "Hash partitioning: Uniform distribution, good for random access. Range partitioning: Preserves ordering, good for range queries. Choose based on your query patterns.",
"tags": ["partitioning", "sharding", "database"],
"notes": "Real example: User IDs → hash partition (uniform load). Timestamps → range partition (range queries like 'last 24 hours'). Hybrid approaches exist (compound keys)."
}
]
I import these directly into my flashcard system:
cat system_design_cards.json | ./flashcards bulk-create --project system-design
Now I can quiz myself:
$ ./flashcards quiz --project system-design --limit 5
Question 1/5: In the CAP theorem, what does 'Partition Tolerance' actually mean?
[Show answer? y/n]
The Spaced Repetition Loop
Here’s how the system works:
- Generate flashcards from concept (Claude)
- Import into flashcard CLI
- Quiz yourself (spaced repetition schedules reviews)
- Rate confidence (1-5): Low = see again soon, High = see much later
- Repeat over days/weeks until internalized
The beauty: I can generate flashcards for any technical topic in minutes. No manual card creation. No hunting for existing decks that don’t quite match my needs.
The Meta-Pattern: AI as a Learning Compiler
These patterns share a common insight:
LLMs can transform any learning goal into a personalized learning artifact.
You provide a concept to learn, your preferred style (flashcards, annotated code), and your knowledge gaps. Claude generates materials tailored to you—at the right complexity, in the format you learn best from.
Practical Workflow
My typical learning workflow:
- Generate annotated code: Complete implementation with explanatory comments
- Extract flashcards: Key concepts become cards for spaced repetition
- Quiz over time: Spaced repetition embeds knowledge long-term
Generating the materials takes 30-60 minutes; the reviews are a lightweight ongoing habit after that. Traditional learning (reading papers, watching videos, manual note-taking) would take hours or days to achieve the same depth.
The Flashcard System Details
Since I mentioned my flashcard system, here’s how it works:
Features:
- Multiple spaced repetition algorithms (SM-2, Leitner, fixed intervals)
- Project organization (tag by topic)
- Adaptive quiz sessions (repeat failed items immediately)
- JSON import/export (easy to generate with LLMs)
- CLI for terminal-based studying
Usage:
# Create from Claude-generated JSON
flashcards bulk-create --file cards.json --project rust
# Start adaptive quiz session
flashcards quiz --project rust --limit 10
# Review a specific card
flashcards review <id> --confidence 4
# See statistics
flashcards stats --project rust
The adaptive sessions are key: if you get a card wrong (confidence < 3), it immediately re-queues it. You practice until you get it right, then spaced repetition takes over for long-term retention.
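As a rough sketch of that session loop (hypothetical Python, not the CLI’s implementation):

from collections import deque

def adaptive_session(cards, ask):
    # `ask(card)` presents a card and returns a 1-5 confidence rating.
    queue = deque(cards)
    while queue:
        card = queue.popleft()
        confidence = ask(card)
        if confidence < 3:
            # Wrong answer: re-queue immediately so it comes up again this session.
            queue.append(card)
        else:
            # Right answer: hand the card back to the long-term scheduler (e.g. SM-2).
            card.schedule_next_review(confidence)  # hypothetical scheduler hook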
Key Learnings
After a year of using LLMs for learning:
Annotated code beats plain code. The “why” matters more than the “what.”
Flashcards with spaced repetition work. Generate hundreds of cards in minutes, internalize them over weeks.
Active learning beats passive. Quizzing yourself is more effective than rereading notes.
Conclusion: Beyond Code Generation
This series has been about working with LLMs at scale—managing multiple sessions, ensuring quality, building coordination patterns. But it’s also been about a fundamental shift in how we work with code and knowledge.
LLMs aren’t just code generators. They’re:
- Productivity multipliers (Part 2)
- Quality enforcers (Part 3)
- Coordination layers (Part 3)
- Learning accelerators (Part 5)
The developers who thrive in this new world won’t be the ones who use LLMs as fancy autocomplete. They’ll be the ones who build workflows, tooling, and patterns that let them work at 10x their previous speed while maintaining quality and continuously learning.
My system—tmux integration, telemetry, memento, smoke tests, code review workflows, flashcards—is just one approach. Yours will look different based on your needs, your style, your projects.
But the core principles apply:
- Make the invisible visible (ergonomics, telemetry)
- Enforce quality automatically (smoke tests)
- Share context between sessions (memento)
- Learn continuously (flashcards, annotated code)
The future of software development is human-AI collaboration. Not humans being replaced, but humans being amplified. The question isn’t whether to use LLMs—it’s how to use them most effectively.
I hope this series has given you ideas, tools, and patterns you can adapt to your own workflow.
Now go build something amazing.
If you’re interested in any of the tools or patterns mentioned in this series, feel free to reach out. I’m happy to discuss what you find compelling and share more details.