Just-In-Time Mastery
Accelerated Context Acquisition for New Domains
Part 4 of 4 in "The AI Autodidact" series
In the traditional model of education, knowledge is treated like inventory. We stockpile facts, theories, and frameworks in the warehouse of our minds during our undergraduate years, hoping that one day, decades later, a specific item might be pulled from the shelf to solve a problem. This is Just-In-Case learning. It is inefficient, prone to decay (forgetting), and often misaligned with reality. The curriculum of 2010 is rarely the solution to the problems of 2026.
In manufacturing, Toyota revolutionized the world with the Just-In-Time (JIT) production system. They realized that inventory is waste. It takes up space, it becomes obsolete, and it hides inefficiencies. They shifted to producing exactly what was needed, when it was needed, in the exact amount needed.
The modern AI Autodidact must make the same shift. We are moving from Just-In-Case education to Just-In-Time Mastery.
In this final installment of The AI Autodidact, we explore the tactical workflows that allow us to enter entirely new domains—whether it’s biotech, constitutional law, or quantum mechanics—and acquire not just vocabulary, but functional context in a matter of hours. We examine the tooling of the "Exocortex," the cognitive science of Transfer Learning, and the protocols used by high-leverage learners like Simon Eskildsen to maintain "Director-level" competence across rapidly shifting technical landscapes.
The Problem of "Unknown Unknowns"
When entering a new field, the primary obstacle is not the difficulty of the material; it is the opacity of the map. You don't know what you don't know. A software engineer trying to read a biology paper isn't just struggling with the word "mitochondria"; they are struggling with the invisible web of assumptions, methodologies, and standard practices that every biologist takes for granted.
This is the Context Gap.
Traditionally, bridging this gap requires "paying your dues"—reading the intro textbooks, attending the 101 lectures, and slowly osmosing the culture of the field. This takes years.
The AI-enabled learner bridges this gap by using Large Language Models (LLMs) not just as tutors, but as translators. By explicitly mapping the concepts of a new domain onto the mental models of a domain you already master, you can bypass the "101" phase and engage with high-level concepts almost immediately. This is the art of Accelerated Context Acquisition.
Case Study: The Contextual Intelligence of Simon Eskildsen
Simon Eskildsen is a prime exemplar of the modern technical autodidact. From his rise as an intern to Director of Production Engineering at Shopify, to founding the vector search startup turbopuffer, Eskildsen has consistently demonstrated an ability to scale his mental models alongside hyper-growth systems.
In his discussions on learning (such as on the Every podcast "AI & I"), Eskildsen highlights a critical shift: the move from static memorization to dynamic, spaced-repetition-assisted context maintenance. But beyond Anki cards (which we covered in Part 2), his career trajectory illustrates a higher-order skill: Contextual Intelligence.
Contextual Intelligence is the ability to walk into a room (or a codebase, or a boardroom) where you have zero prior experience, and rapidly construct a mental scaffolding that allows you to ask the right questions. It isn't about knowing all the answers; it's about knowing the structure of the answers.
Eskildsen’s approach suggests that the "Generalist" is dead. The "Specialist" is too rigid. The winner of the next decade is the Serial Specialist—someone who can dive deep into a vertical (like vector search algorithms), master the requisite constraints, execute, and then pivot to the next vertical (like go-to-market strategy) without losing momentum.
AI is the engine of this serial specialization.
Tactical Tooling: The Micro-Loop
How do we operationalize this? It starts with friction reduction.
If you are reading a complex text and encounter an unknown term, the "cost" of looking it up determines whether you learn it or skip it. If you have to open a new tab, type into Google, scroll past ads, and read a Wikipedia intro, the cost is high. You will skip it. You will accept a fuzzy understanding.
To achieve JIT Mastery, the lookup cost must approach zero. We need tools that live in the "0.5-second" range.
1. The Command Line for Reality (Raycast / Alfred)
Tools like Raycast (on macOS) have evolved beyond simple app launchers into AI-native command centers. They allow you to access LLM intelligence without context switching.
The Workflow: You are reading a legal contract. You encounter the phrase "force majeure".
- Old Way: Open browser -> Search "force majeure meaning" -> Read generic definition.
- JIT Way: Highlight text -> Hit Hyper-Key (e.g., CapsLock) -> Type "Explain" -> Raycast AI pops up with a definition overlay.
But we can go deeper. The generic definition is often useless. We need contextual definitions.
2. The "Domain Bridge" Prompt
The most powerful "micro-tool" is a pre-configured AI command that uses your existing background as a Rosetta Stone.
If you are a programmer learning finance, generic financial definitions are dry and abstract. You need a translation.
Create a Raycast AI Command called "DevTranslate":
System Prompt: "You are an expert tutor. I am a Senior Software Engineer. I understand systems, logic, git, databases, and variables. Whenever I ask about a concept in a new domain (Law, Bio, Finance), explain it using software engineering metaphors. Use analogies involving code, architecture, or data flows."
Usage: User: "What is a 'Call Option'?" AI: "Think of a Call Option like a Promise in JavaScript that resolves to a stock purchase. You pay a small 'fee' (gas cost) to create the Promise. If the stock price goes up (resolves successfully), you execute the callback and buy at the old price. If it goes down (rejects), you just let the Promise timeout and lose only your gas fee."
Suddenly, the concept clicks. You aren't learning finance from scratch; you are importing finance into your existing SoftwareEngineering library.
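The analogy maps directly onto a few lines of code. Here is a minimal Python sketch of a call option's net payoff at expiry, written in the spirit of the "DevTranslate" output above (the function name and numbers are illustrative, not a pricing model):

```python
def call_option_payoff(spot_price: float, strike: float, premium: float) -> float:
    """Net profit of a call option at expiry.

    Like the Promise analogy: the premium is the fixed 'fee' paid up
    front; the upside 'resolves' only if the spot price rises above the
    strike. Otherwise you walk away, losing only the premium.
    """
    intrinsic_value = max(spot_price - strike, 0.0)
    return intrinsic_value - premium

# Stock climbs: the option "resolves" and you profit on the difference.
print(call_option_payoff(spot_price=120, strike=100, premium=5))  # 15.0
# Stock falls: you let the option expire and lose only the premium.
print(call_option_payoff(spot_price=80, strike=100, premium=5))   # -5.0
```

The asymmetry is the whole point of the instrument: losses are capped at the premium, while the upside is unbounded.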
Theory: Engineering Transfer Learning
This "DevTranslate" technique leverages a cognitive science concept called Transfer Learning.
In educational theory, Thorndike’s Theory of Identical Elements suggests that transfer occurs when the new context shares elements with the old one. Usually, this happens by accident (e.g., learning Latin helps with Spanish).
In Machine Learning, transfer learning is rigorous. We take a model trained on ImageNet (millions of images) and "fine-tune" it on X-rays. The model already knows what "edges" and "shapes" are; it just needs to learn that these specific shapes are tumors.
Humans are capable of High-Road Transfer—abstracting a principle from one domain and applying it to another. But it is mentally taxing.
AI lowers the metabolic cost of High-Road Transfer. It acts as the "mapping layer." It finds the "Identical Elements" between disparate fields that your brain might miss.
The Protocol for New Domains:
- Identify your Source Domain: What are you best at? (Coding, Cooking, Chess?)
- Prompt the Bridge: Instruct the AI to use the Source Domain as the metaphor engine for the Target Domain.
- Iterate: When the metaphor breaks down (and all metaphors eventually do), ask the AI: "Where does this analogy fail?" This boundary-checking is where true mastery begins.
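The three-step protocol can be wired into a single helper that generates the system prompt for any frontend (Raycast, a CLI, an API call). A minimal sketch, assuming you paste the result into whatever LLM tool you use; `build_bridge_prompt` and its parameters are hypothetical names, not part of any library:

```python
def build_bridge_prompt(source_domain: str, target_domain: str,
                        known_concepts: list[str]) -> str:
    """Assemble a 'Domain Bridge' system prompt.

    source_domain:  the field you already master (the metaphor engine).
    target_domain:  the field you are entering.
    known_concepts: anchors the model should build analogies around.
    """
    return (
        f"You are an expert tutor. My source domain is {source_domain}; "
        f"I am fluent in: {', '.join(known_concepts)}. "
        f"Explain every {target_domain} concept I ask about using "
        f"{source_domain} metaphors, then add one sentence on where "
        f"the analogy fails."
    )

prompt = build_bridge_prompt(
    source_domain="software engineering",
    target_domain="finance",
    known_concepts=["git", "databases", "async/await"],
)
```

Note that step 3 (boundary-checking) is baked into the prompt itself, so every analogy arrives pre-stress-tested.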
Building the Scaffolding: Custom Glossaries and Tutors
For larger projects, simple lookups aren't enough. You need a persistent "tutor" that understands the specific project context.
When tackling a new field—say, you are building an app for Dentists—you are drowning in acronyms (PPO, HMO, CDT codes).
The "Project Glossary" Workflow:
- Create a `glossary.md`: In your project root, keep a running list of terms you encounter.
- The "Context Injection" Script: Use a CLI tool (like `llm` or a custom script) to inject this glossary into every question you ask.
```bash
# Example alias for a project-specific query
alias ask-dent="cat glossary.md | llm -s 'You are an expert Dental practice consultant. Use the provided glossary for context. Answer the following question:'"
```
Now, when you ask "Why is the claim rejected?", the AI checks your glossary, sees your definition of specific insurance codes, and gives a highly specific answer.
This builds a Personal Knowledge Graph. You are not just looking up facts; you are building a networked understanding of the domain that grows with your project.
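The same injection step can live in Python instead of a shell alias, which makes it easier to reuse inside scripts or notebooks. A minimal sketch (the consultant persona and file name follow the dental example above; sending the resulting string to a model is left to whichever client you use):

```python
from pathlib import Path

def build_context_query(question: str, glossary_path: str = "glossary.md") -> str:
    """Prepend the project glossary to a question, mirroring the
    shell alias above in plain Python."""
    glossary = Path(glossary_path).read_text(encoding="utf-8")
    return (
        "You are an expert Dental practice consultant. "
        "Use the provided glossary for context.\n\n"
        f"GLOSSARY:\n{glossary}\n"
        f"QUESTION: {question}"
    )
```

Because the glossary is re-read on every call, each new term you add immediately sharpens every subsequent answer.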
The Personal Knowledge Graph & Semantic Search
The final piece of the puzzle is retrieval. As you accumulate these glossaries, notes, and translations, where do they go?
If they go into a folder structure, they die. Folders are rigid. Knowledge is fluid.
This is where Semantic Search (the technology behind turbopuffer and modern RAG systems) becomes a personal superpower. Instead of searching for keywords, you search for concepts.
Imagine you read a paper on "Biological Homeostasis" three months ago. Today, you are debugging a "Server Load Balancer." You have a vague memory that the biological concept is relevant.
With a standard search (grep), searching "load balancer" won't find the biology paper. With Semantic Search (embedding-based), searching "systems that self-correct to maintain stability" will return both the Load Balancer docs and the Homeostasis paper.
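Under the hood, semantic search ranks documents by the cosine similarity of their embedding vectors. A toy Python sketch with hand-made 3-dimensional vectors (real embeddings come from a model and have hundreds of dimensions; the numbers here are purely illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embedding" dimensions: (self-correction, networking, biology).
docs = {
    "load_balancer_docs": [0.9, 0.8, 0.0],
    "homeostasis_paper":  [0.8, 0.0, 0.9],
    "cafeteria_menu":     [0.0, 0.1, 0.2],
}
query = [1.0, 0.2, 0.2]  # "systems that self-correct to maintain stability"

# Rank documents by conceptual closeness to the query, best first.
for name, vec in sorted(docs.items(),
                        key=lambda kv: cosine_similarity(query, kv[1]),
                        reverse=True):
    print(f"{cosine_similarity(query, vec):.2f}  {name}")
```

Both the load-balancer docs and the homeostasis paper score high because they point in the same conceptual direction as the query, even though they share no keywords; the cafeteria menu falls to the bottom.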
Action Item: Use tools like Obsidian with the Smart Connections plugin or Mem.ai that utilize vector embeddings. This turns your note-taking app into a "Second Brain" that proactively suggests connections. It realizes you are writing about server stability and whispers: "Hey, this looks like that biology paper you read last quarter."
Conclusion: The Autodidact as Cyborg
We have reached the end of the syllabus.
In Part 1, we built the Cognitive Supply Chain, optimizing the flow of information. In Part 2, we Engineered Retention, using Anki to fix the leaky bucket of human memory. In Part 3, we adopted the Synthetic Socratic Method, turning AI into an active debate partner. Here in Part 4, we deployed Just-In-Time Mastery, using exocortex tools to map new worlds onto our existing maps.
The future belongs to the curious. But curiosity alone is no longer enough. The complexity of the world is accelerating faster than biological evolution can keep up. To remain relevant, to solve the "wicked problems" of the 21st century, we cannot rely on the slow, linear download of traditional education.
We must become cyborg learners. We must outsource storage to silicon, outsource retrieval to vectors, and outsource translation to LLMs, freeing up our biological wetware for what it does best: Synthesis, Creativity, and Wisdom.
The tools are here. The syllabus is yours to write.
Start now.
This concludes the "AI Autodidact" series. This article is part of XPS Institute's Solutions column, dedicated to practical applications of high-leverage technology. Explore our other columns for deep dives into Schemas (Theory) and Stacks (Engineering).