Video
Source: Clief Notes Q and A: April 4th 2026 by Jake Van Clief
Executive Summary
In this unscheduled live Q&A, Jake Van Clief works through a curated batch of questions collected via Google Form from his 13,000-member community. The session focuses almost entirely on the "getting started and learning" category — a topic that dominated the submissions. Jake's throughline is consistent and pointed: most people are learning the wrong things in the wrong order, and the fastest path to competence (or income) is not to study AI tools in isolation, but to identify real problems first and let those problems drive what you learn.
The session is as much a philosophy-of-learning talk as a technical one. Jake argues against syntax-first education, against over-reliance on feature lists, and against the idea that AI agents are a shortcut around understanding. Instead, he advocates for what he calls "failure-based learning" — picking a project you expect to fail at and using those failures to surface exactly what you need to study. He applies the same logic to monetization: find where people are already paying for solutions, quantify the time or money you can save them, and only then figure out which AI tools are actually relevant.
The final third of the stream covers more applied territory — using folder architecture as a substitute for building full apps, leveraging LaTeX and Claude for academic research, and how to filter the relentless flow of AI content without getting left behind. Throughout, Jake frames AI coding agents like Claude Code not as autonomous builders but as collaborators that need to be aligned to your thinking, your sources, and your standards of quality.
Key Takeaways
- Start with problems, not tools: Before learning any AI tool or language, identify a specific problem people are already paying money to solve. The tool choice should follow the problem, not the other way around.
- Failure-based learning is the fastest path: Pick a project you know you'll fail at. The failures reveal exactly which knowledge gaps actually matter — far more efficiently than any structured curriculum.
- Understand the AI's reasoning, not just its output: With any coding agent (Claude Code, Codex, Copilot), the key skill is reviewing why it made a decision — which sources it consulted, where its logic diverged — not just accepting or rejecting the final output.
- Align agents to your thinking: Getting an AI aligned to your opinions, preferences, and reference materials produces better results than attaching more tools to it. A well-aligned basic agent outperforms a bloated, misaligned one.
- Build processes before building apps: Folder architecture plus a well-structured prompt set can often achieve the output of a full app without writing a line of code. Only build actual software when a process-based solution hits a hard wall.
- Monetize by saving time or making money: The value formula for AI consulting is simple. Quantify the hours you save, anchor to the dollar value of those hours, and charge a fraction of that. Five employees saving five hours a week at $80/hr adds up fast.
- Focus on evergreen content: Any educational content that will be useless in six months isn't worth your time. Prioritize fundamentals that hold up across years, not tool-specific tutorials.
Detailed Analysis
Learning AI From Scratch: Fundamentals Over Syntax
Multiple community members asked variations of the same question: where do you start when you have no background in coding or AI? Jake's answer is consistent — don't start with syntax. Whether that means Python, C, or a specific AI framework, memorizing a language's rules without understanding why those rules exist will leave you unable to adapt when the landscape shifts (which in AI, it does every few months).
His recommended entry point is computing fundamentals: Why do file systems exist? How does code actually get executed? What problem does version control solve? These questions predate modern AI by decades, but they're the load-bearing concepts underneath every agentic workflow. Jake points to his own free "Foundation" course modules as one resource, but stresses that the source matters less than the habit of asking first-principles questions.
For learners who want to join a dev team or contribute to enterprise software specifically, he adds one more layer: understand the organizational problems, not just the technical ones. Folder architecture, version control, data organization — these are the places where beginners can add immediate, tangible value to a team without needing deep language expertise.
Failure-Based Learning: The Core Framework
The most substantive section of the stream is Jake's articulation of "failure-based learning." The argument is pragmatic: in an environment where you could learn hundreds of languages, frameworks, and tools, the only rational filter is finding out which ones actually matter for your specific problem. The only way to find that out is to start building.
The process he describes: pick a single, small project that solves a real problem in your daily life or work. Expect to fail. When you fail, you'll discover exactly which knowledge gaps caused the failure — and those are the gaps worth closing. Gaps that don't surface during the build probably don't matter for your goals, even if a curriculum would tell you they're important.
He extends this to AI tool selection as well. He notes that he built multi-agent group chat software three years ago, recognized that Anthropic and OpenAI would eventually build better infrastructure than he could, and pivoted to focusing on what they won't automate: his own opinions, voice, folder systems, and bespoke organizational methods. That realization came from building and iterating, not from studying in advance.
Making Money with AI: The Value-Anchoring Formula
For community members explicitly trying to generate income from AI skills, Jake offers a concrete reframe: stop trying to learn AI first. Instead, identify where people are already paying for solutions — ideally in a domain where you have at least some baseline competence. The math he walks through is illustrative: if you can save five employees five hours a week at $80/hour, that's $24,000 in recovered time over twelve weeks. Charge half of that, frame it as a $12,000 engagement, and point to a projected $288,000 in annual savings. The pitch isn't about your technical skills — it's about anchoring your fee to the value you create.
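The arithmetic behind that pitch is worth making explicit. Here is a minimal sketch using the figures from the session as inputs; the variable names are illustrative, while the numbers and the charge-half-of-the-value split come from the stream:

```python
# Value-anchoring math from the session: quantify recovered time,
# then price the engagement as a fraction of that value.

employees = 5              # people whose time the workflow saves
hours_saved_per_week = 5   # hours recovered per employee per week
hourly_rate = 80           # dollar value of one employee hour
engagement_weeks = 12      # length of the engagement being pitched
fee_fraction = 0.5         # charge half of the value you create

weekly_value = employees * hours_saved_per_week * hourly_rate   # $2,000 per week
engagement_value = weekly_value * engagement_weeks              # $24,000 over 12 weeks
proposed_fee = engagement_value * fee_fraction                  # $12,000 engagement

print(f"Recovered value per week:   ${weekly_value:,.0f}")
print(f"Value over the engagement:  ${engagement_value:,.0f}")
print(f"Fee anchored to that value: ${proposed_fee:,.0f}")
```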
The implication for skill development: once you've identified that target domain, then figure out which AI tools are relevant to it. Learn only those. Ignore everything else until it becomes relevant to another paying problem. This is the inverse of how most people approach AI education, where they try to become generally competent first and find applications later.
Understanding AI Agents: Alignment Over Features
A significant chunk of the Q&A addresses how much you need to understand about what a coding agent is actually doing. Jake's position: you don't need to understand every bash command or line of code at first, but you do need to understand the reasoning. Specifically, you should be reviewing which sources the agent consulted, whether those sources are authoritative, and where its logic went sideways.
He gives a concrete example: Claude autonomously reading a GitHub repo during a task. The question isn't whether it read a repo — it's whether that repo is a trustworthy source for the task at hand. Catching that, and redirecting the agent to a better source, is the skill that separates effective users from passive ones.
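As a rough illustration of that review habit, a short script along these lines could scan an agent's session log for GitHub repos it consulted and flag anything outside your own allowlist. The log format, repo names, and allowlist here are hypothetical, not something Claude Code emits in exactly this shape:

```python
import re

# Repos you consider authoritative for the task at hand (hypothetical allowlist).
TRUSTED_REPOS = {
    "anthropics/anthropic-sdk-python",
    "anthropics/claude-code",
}

GITHUB_URL = re.compile(r"github\.com/([\w.-]+/[\w.-]+)")

def review_sources(session_log: str) -> list[str]:
    """Return GitHub repos the agent consulted that are not on the allowlist."""
    consulted = set(GITHUB_URL.findall(session_log))
    return sorted(repo for repo in consulted if repo not in TRUSTED_REPOS)

# Example: paste in whatever transcript or log text your agent produced.
log_text = """
Fetched https://github.com/anthropics/claude-code/blob/main/README.md
Fetched https://github.com/some-user/random-fork/blob/main/agent.py
"""

for repo in review_sources(log_text):
    print(f"Unvetted source consulted: {repo} -- check it before trusting the output")
```

The mechanism matters less than the habit: notice which sources shaped the agent's reasoning, then redirect it when those sources aren't ones you would trust yourself.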
His broader principle: getting an AI agent aligned to your way of thinking — your opinions, your reference materials, your standards — produces more value than stacking features onto a misaligned one. He applies this across Claude Code, Codex, and enterprise Copilot, and teaches it to corporate clients as well.
Building Processes Before Building Apps
For a community member in procurement asking whether to invest time in building custom apps, Jake's answer is to exhaust the process-based approach first. The framework: organize your data into folders that mirror the mental model your team already uses. Drop that data into a Claude instance. Run a prompt that describes the task step by step. See where it fails. Add a markdown file to patch each failure point. Repeat.
The result is often a working workflow that produces the same output a custom app would — without a single line of code. The AI becomes the runtime. Only when you hit a hard constraint (employees without licenses, production-scale requirements, sharing limitations) do you need to actually build software. Most people jump to building apps before exhausting the folder-and-prompt approach, which is backwards.
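A minimal sketch of what that loop can look like on disk, with Python standing in for a few minutes of manual folder creation; the procurement folder names, task prompt, and patch-file convention are placeholders, not a structure taken from the video:

```python
from pathlib import Path

# Hypothetical project layout mirroring how a procurement team already thinks:
# source data in one place, the step-by-step task prompt in another, and one
# markdown "patch" file per failure mode discovered while iterating.
ROOT = Path("procurement-workflow")

FOLDERS = [
    "data/contracts",        # raw inputs the team already maintains
    "data/supplier-quotes",
    "prompts",               # the step-by-step task description Claude runs
    "patches",               # one markdown file per failure point you hit
    "outputs",               # where reviewed results get saved
]

TASK_PROMPT = """\
Read every file under data/ and produce a supplier comparison table.
Follow the conventions in each file under patches/ before answering.
Write the result to outputs/comparison.md.
"""

for folder in FOLDERS:
    (ROOT / folder).mkdir(parents=True, exist_ok=True)

(ROOT / "prompts" / "task.md").write_text(TASK_PROMPT)

# Each time the workflow fails, add a patch file describing the fix, e.g.:
(ROOT / "patches" / "01-date-formats.md").write_text(
    "Quotes use DD/MM/YYYY; normalize all dates to ISO 8601 before comparing.\n"
)
```

Nothing here is application code: the AI reads the structure directly, and each new markdown patch plays the role of a bug fix.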
He also touches on the importance of starting small and adding complexity over time, rather than building something complex and optimizing afterward. Get maximum output for minimum input first.
Academic Research with AI: LaTeX and Claude
For a computer science student who had implemented Jake's folder architecture for a research project, Jake expands on how he uses AI at the University of Edinburgh's neuropolitics lab. The key tool is LaTeX — specifically, organizing a research paper into modular files (citations, sections, notes) so that Claude only reads the part currently being worked on. This avoids context bloat and keeps the agent focused.
The use case he describes is not AI-generated research — it's AI-assisted formatting and citation management. When adding a citation from a paper you've read to a specific section, Claude handles the LaTeX syntax and formatting while you provide the intellectual content. The research thinking stays human; the mechanical overhead gets automated.
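A minimal sketch of that slicing idea, assuming a hypothetical modular layout (a main.tex that stitches sections together, one file per section, a shared refs.bib). The point is that only the section in progress and the bibliography ever reach the model:

```python
from pathlib import Path

# Hypothetical modular paper layout, so the agent only ever sees one slice:
#   paper/main.tex            -- \input{}s each section, never edited by the agent
#   paper/sections/method.tex -- the section currently being written
#   paper/refs.bib            -- citation database
PAPER = Path("paper")

def build_context(section_name: str) -> str:
    """Collect only the section being worked on plus the bibliography,
    instead of handing the agent the entire paper."""
    section = (PAPER / "sections" / f"{section_name}.tex").read_text()
    refs = (PAPER / "refs.bib").read_text()
    return (
        "You are editing one section of a LaTeX paper.\n"
        "Add the citation I describe, fix LaTeX syntax only, "
        "and do not change the argument.\n\n"
        f"--- refs.bib ---\n{refs}\n"
        f"--- sections/{section_name}.tex ---\n{section}\n"
    )

# Paste the returned string into Claude (or send it via the API) along with
# a note describing the citation you want added and where it belongs.
print(build_context("method"))
```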
Timestamped Topic Outline
| Timestamp | Topic |
|---|---|
| 0:00 | Stream setup and platform check-in |
| 1:48 | Overview: Q&A format from community Google Form |
| 2:56 | Q1: How to start learning AI from scratch (university context) |
| 5:16 | Q2: Do you need to understand what Claude Code is doing? |
| 8:41 | Q3: Starting from zero with a goal of financial freedom via AI |
| 12:31 | Q4: Learning to build enterprise-grade full-stack apps |
| 15:14 | Golden nugget: Using GitHub repos + Claude to create custom lessons |
| 20:51 | Failure-based learning framework |
| 24:27 | Q6: Scanning and organizing AI content at scale |
| 27:22 | Q7: Impostor syndrome — feeling like a monkey waving around |
| 30:16 | Q8: Building tools vs. building processes (procurement use case) |
| 33:41 | Q9: Academic research workflows — LaTeX + Claude |
| 37:00 | Closing: Plans for future Q&A cadence (free vs. VIP) |
Sources & Further Reading
- Jake Van Clief's free Foundation course — referenced multiple times; covers computing fundamentals, Python execution, agentic pipeline architecture, and folder systems. Available in his School community.
- Jake's "Archive" classroom — older videos (2023–2024) reviewing past AI predictions with retrospective commentary; available in the same School community.
- Jupyter Notebook / Google Colaboratory — recommended for practicing code and building interactive lesson notebooks from existing GitHub repos.
- Stack Overflow — cited as a model for the build-search-reuse cycle that predates AI-assisted development.
- LaTeX — recommended for academic research paper organization in conjunction with Claude for formatting and citation management.
- RStudio — mentioned for data visualization in research contexts.
- University of Edinburgh Neuropolitics Lab — Jake's research affiliation; context for the academic workflow discussion.