
Upstack: A Meta-Framework for Human Learning in the Age of AI

Concept Paper v1.1
Author: Ishan De Silva
Date: 02 March 2026
Status: Foundational — philosophical and theoretical foundation


Abstract

Upstack is an open-source meta-prompt framework that enables individuals and organisations to configure AI as a genuine tutor rather than an answer machine. It is built on a single foundational insight: in the age of AI, knowledge has become a commodity. What AI cannot commoditise is insight — the deep understanding that comes from building mental models through effortful learning. Upstack is the system that gets you there.

The framework provides structured templates, principles, and protocols for two primary personas: the individual learner who wants to move from knowledge to genuine understanding, and the organisation that wants to develop engineers and professionals beyond output production into real reasoning capability. The Go language learning project by the author serves as the featured, fully worked use case that demonstrates the framework in action.


1. The Problem

1.1 The Decoupling of Output from Understanding

Something fundamental has changed in how people learn. AI tools — large language models in particular — have broken the historic link between producing an output and understanding how to produce it. A junior engineer today can generate a working REST API, a clean sorting algorithm, a well-structured class design — without ever building the mental model of why it works. The output is indistinguishable from the output of someone who deeply understands the domain. The understanding is not there.

This is genuinely new. Before AI, the friction of figuring things out was the mechanism of learning. Stack Overflow still required you to read, interpret, and adapt. Textbooks required you to struggle with examples. Senior colleagues required you to articulate your thinking before they would help. Every pathway to an answer involved some degree of effortful processing, and that effort was — without anyone necessarily designing it this way — the actual learning happening.

AI has removed that friction almost entirely. And in doing so, it has inadvertently removed the learning.

1.2 The Generation Most at Risk

Fresh graduates and junior professionals entering the workforce today have grown up with algorithmically optimised content designed to be immediately rewarding. Their tolerance for unstructured exploration — sitting with uncertainty, following a thread without knowing where it leads, failing productively before succeeding — has been systematically reduced by years of frictionless information access.

They are also entering a world where their peers and seniors are using AI extensively, where the pressure to produce quickly is intense, and where the cultural identity around being a "good engineer" has partly shifted toward being able to find and assemble answers fast. Being able to derive from first principles — slowly, messily, independently — is less culturally valued than it was a generation ago.

The result is a cohort that is highly capable at producing outputs and deeply underdeveloped in the reasoning capacity that makes those outputs trustworthy, extensible, and transferable to novel contexts.

1.3 The Organisational Dimension

From a manager's or technical leader's perspective, this problem manifests as a specific frustration: junior engineers who can solve problems they have seen before but collapse when the context changes slightly. The output looks correct. The reasoning, when probed, is absent or borrowed.

A foundational principle of good engineering leadership is that the answer matters less than how you got there. The reasoning reveals the mental model. The mental model predicts performance under novel pressure. An engineer who can walk you through their thinking, acknowledge where they are uncertain, and defend their rationale under challenge is an engineer you can build on. An engineer who produces correct outputs without visible reasoning is an engineer you cannot trust with increasing responsibility.

This is not a new problem — but AI has made it significantly worse, faster.


2. The Theoretical Foundation

Upstack is grounded in four converging bodies of work. Understanding these is not just academic context — they are the load-bearing walls of the framework's design.

2.1 The Learning Stack

Human learning is not binary. It exists in layers of increasing depth, and the goal of education — formal or informal — is to move a learner as far up the stack as possible.

At the base is Exposure: you have encountered something. You can recognise it. You have seen examples. This is the shallowest form of learning and requires almost no effort.

Above that is Knowledge: you can recall it. You know what it is, what it is called, and broadly how it works. You can reproduce patterns you have seen. Most traditional education and most AI-assisted learning stops here.

Above that is Understanding: you know why it works. You have a genuine mental model — an internal representation that lets you predict behaviour, identify exceptions, and reason from first principles. This is where real competence lives.

At the top is Transfer: you can apply your understanding in contexts you have never encountered before. Near transfer applies knowledge to situations structurally similar to where it was learned — AI handles this trivially, because near transfer is essentially pattern matching. Far transfer applies knowledge across superficially different domains, requiring genuine abstraction and analogical reasoning. This is where human value in the AI era is concentrated.

The central problem AI creates is this: it accelerates movement from Exposure to Knowledge while bypassing the climb from Knowledge to Understanding. The bypass is invisible. The learner feels productive — they are producing outputs, they are "getting things done" — but no mental model is being constructed. When a novel context demands Transfer, there is nothing to draw on.

2.2 Productive Struggle and the Neuroscience of Learning

The mechanism that builds Understanding is effortful processing — what educational researchers call productive struggle. This is not metaphorical. Neuroscience research shows that challenging tasks spur the production of myelin, a substance that increases the strength and speed of neural signals. Struggle literally builds the physical infrastructure of understanding in the brain. Effortless acquisition does not trigger that construction.

The research is careful to distinguish productive struggle from unproductive struggle. Productive struggle is calibrated to just beyond the learner's current ability — hard enough to activate the learning system, not so hard as to produce confusion and misconceptions. This is Vygotsky's Zone of Proximal Development operationalised: learning happens in the gap between what you can do alone and what you can do with guidance. The right kind of friction, at the right moment, with the right support available, is the mechanism of deep learning.

When AI removes all friction, it removes productive struggle entirely. The learner never operates in the zone where understanding forms.

This has a direct design implication for Upstack: the framework does not remove AI from the learning process. It calibrates AI's role so that the right friction is preserved. The AI remains as guide — available for questions, for concept explanation, for challenge — but withholds the full answer, forces the learner to attempt first, and requires articulation of reasoning before providing correction.

2.3 Near and Far Transfer

Transfer — the ability to apply learning in new contexts — comes in two distinct forms, and the distinction matters enormously for curriculum and framework design.

Near transfer involves applying knowledge to situations that are structurally and superficially similar to where the learning occurred. A student who learned bubble sort implementing insertion sort. An engineer who learned the Observer pattern applying it to a slightly different notification system. This is important but limited, and crucially, AI handles near transfer almost perfectly. If you have seen a similar problem, AI can bridge the gap for you.

Far transfer involves applying understanding across domains that are superficially very different but structurally analogous. Using recursive thinking learned in algorithms to understand organisational hierarchies. Applying the Open/Closed Principle — learned in code — to understand why good policy is designed to be extended through process rather than constantly rewritten. Far transfer requires genuine abstraction, the ability to see through surface features to underlying structure. This is a cognitive capability that only develops through deep learning — through the Understanding layer — and it cannot be shortcut.
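The recursion analogy can be made concrete. The `headcount` function below walks an organisational hierarchy with exactly the same recursive shape used to sum the nodes of any tree; the `Node` type and the example data are hypothetical, invented purely for illustration:

```go
package main

import "fmt"

// Node could be an AST node, a directory, or a reporting line —
// the recursive structure is identical across these domains.
type Node struct {
	Name    string
	Reports []*Node
}

// headcount counts everyone in the subtree rooted at n: the same
// shape of recursion used to sum nodes in any tree structure.
func headcount(n *Node) int {
	total := 1
	for _, r := range n.Reports {
		total += headcount(r)
	}
	return total
}

func main() {
	org := &Node{Name: "CTO", Reports: []*Node{
		{Name: "Lead A", Reports: []*Node{{Name: "Eng 1"}, {Name: "Eng 2"}}},
		{Name: "Lead B"},
	}}
	fmt.Println(headcount(org)) // prints 5
}
```

Seeing that the algorithm and the org chart are "the same tree" is precisely the abstraction far transfer requires.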

The implication for framework and curriculum design is direct: learning assessments that can be answered by near transfer are nearly worthless in the AI era. The only exercises worth designing are those that require far transfer, because those are the only ones that reveal whether genuine Understanding exists.

2.4 Argyris and Double-Loop Learning

Chris Argyris's 1991 Harvard Business Review article "Teaching Smart People How to Learn" identified a paradox that is now more relevant than ever: highly capable people are often the worst learners, because their intelligence has always been sufficient to succeed without deeply examining their own assumptions. When things go wrong, they externalise — blame the system, the requirements, the tools — rather than questioning the mental models that produced the failure. Argyris called this defensive reasoning.

He distinguished two learning modes. Single-loop learning detects an error and corrects the action, but leaves the governing assumption intact. The thermostat analogy: the room gets cold, the thermostat fires the heater, but the thermostat never asks whether the target temperature is right. Double-loop learning detects an error and questions the governing assumption that produced it. Not just "what did I do wrong?" but "why did I think this approach was correct in the first place?"

AI enables infinite single-loop learning at zero cost. Error → ask AI → fix → repeat. The loop is fast, frictionless, and completely shallow. No mental model is ever rebuilt. No assumption is ever challenged. The new defensive reasoning in the AI era is AI dependency itself: the self-sealing loop where you never have to confront a gap in your own understanding because the gap is always filled externally.

Double-loop learning is what Upstack is designed to produce. Every principle in the framework — constrained AI helpfulness, the "before you ask" protocol, the explanation obligation, the reasoning visibility requirement — is a mechanism that prevents single-loop retreat and forces the learner into double-loop territory.

2.5 Metacognition as the Upstream Skill

Metacognition — the ability to think about your own thinking, to recognise gaps in your own understanding, and to self-regulate your learning process — is the foundational capability that all of the above requires. Research confirms that metacognition and productive struggle are not separate ideas: struggle develops metacognitive capacity, and metacognitive capacity is what enables far transfer.

The Go README that inspired Upstack was written by someone with well-developed metacognition. Knowing to write "do not spoon feed" requires understanding how AI fails as a tutor. Knowing to declare your prior knowledge as the bridge requires understanding how learning is associative. Knowing to document the "aha moments" rather than just the facts requires understanding how retrieval and articulation consolidate learning.

Most learners — especially fresh graduates — have not developed this metacognitive sophistication. They do not know what they do not know about their own learning processes. Upstack makes the metacognitive layer explicit and teachable: the framework itself is an act of learning how to learn.


3. The Core Insight

All of the above converges on a single formulation that sits at the heart of Upstack:

Knowledge is a commodity. Everyone will give you answers. What they cannot give you is insight. Upstack is about how you become insightful.

In the AI era, knowledge has zero marginal cost. The differentiator is no longer what you know — it is the depth of understanding that lets you operate where AI cannot follow. That depth only comes from double-loop learning, from productive struggle, from building genuine mental models through the effortful climb up the learning stack.

Upstack does not fight AI. It redirects it. Instead of configuring AI as an oracle — ask, receive, move on — it configures AI as a tutor: present, challenging, available for guidance, but deliberately withholding the full answer until the learner has genuinely attempted the climb.


4. The Framework

4.1 The Two Personas

Upstack serves two distinct personas who share the same underlying framework but configure it differently.

The Personal Learner is an individual — typically an experienced professional learning a new domain, technology, or skill — who wants to use AI to learn deeply rather than quickly. They are self-directed, motivated, and aware enough of AI's failure modes to want a structured alternative. The Go language learning project is the canonical example: an experienced C++/Python engineer learning Go idiomatically, using a carefully designed AI tutor configuration to build genuine understanding of the language rather than just producing working code.

The Organisational L&D User is a manager, technical lead, or L&D professional who wants to develop junior engineers and apprentices beyond output production. They are concerned with the reasoning visibility problem: how do you develop someone's capacity to think through novel problems when AI can shortcut all the familiar assessments? They need both a framework for configuring AI-assisted learning in their teams and a protocol for evaluating whether genuine understanding is developing.

The framework's principles are identical for both personas. The templates, configurations, and success metrics differ.

4.2 The Four Core Documents

Every Upstack learning project — whether personal or organisational — is structured around four documents. These are not bureaucratic requirements. Each one serves a specific learning function that the research validates.

The Tutor Contract is the meta-prompt that configures the AI's behaviour. It defines the AI's roles — Guide, Challenger, Scribe, Code Reviewer, Maintainer — and, critically, the constraints on each role. The most important constraint applies to the Guide role: the AI should ask questions that lead toward the answer rather than providing the answer directly. It should challenge assumptions rather than confirm them. It should require the learner to attempt before it assists, and to explain before it corrects. This document is the operational heart of Upstack. It is what converts an oracle into a tutor.
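As a concrete illustration, a minimal Tutor Contract might look like the sketch below. This is a hypothetical reconstruction, not the Go project's actual file: the role names and the "don't spoon feed" constraint come from this paper, while the file name `LEARNING_LOG.md` and the exact wording are assumptions.

```markdown
# Tutor Contract (illustrative sketch)

## Roles
- **Guide**: lead with questions. Respond to "how does X work?" with
  "what do you think is happening?" before offering explanation.
- **Challenger**: proactively flag where my prior-language habits
  produce anti-patterns, even when I have not asked.
- **Scribe**: append resolved questions to LEARNING_LOG.md.
- **Code Reviewer**: direct feedback is permitted — but only on code
  I have already written myself.

## Constraints
- Do NOT offer the full answer with code snippets. Help me learn,
  don't spoon feed.
- Require me to explain my reasoning before you correct it.
- Guide and Challenger take precedence over Reviewer and Scribe.
```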

The Learner Context declares who the learner is and what they already know. This is not administrative — it is the bridge. Learning is associative: new knowledge hooks onto existing knowledge. Without an explicit declaration of prior knowledge, the AI cannot calibrate the difficulty, cannot use the right analogies, cannot know where to start the bridge from known to unknown. The quality of this document directly determines the quality of the tutoring. The Go project's audience declaration — "experienced engineers coming from C++, Java, Python; skip programming basics; focus on Go idioms and differences from system languages" — is the model.

The Learner Profile complements the Learner Context. Where the Context is per-course — what you bring to this course — the Profile is global: who you are across all courses. The two form a layered calibration model: the Profile captures the learner's full professional anatomy (background, skills, mental models, Dreyfus levels); the Learner Context captures which parts of that anatomy are relevant to one specific course. Critically, the anatomy grows: completing courses builds new skills, shifts Dreyfus levels upward, and adds new mental models. Each new course draws from a bigger, stronger body.

The Learning Log is the living document that captures the journey. Its structure — Initial Question, Conceptual Shift, Comparison to Prior Knowledge, Practical Examples — is not arbitrary. It is a forced retrieval and articulation mechanism. The act of documenting the conceptual shift is the consolidation of learning. Writing "I thought interfaces worked like this, but in Go they are implicit and consumer-defined — here is the moment that clicked" is not documentation after the learning; it is the final step of the learning itself. The log also creates a teaching artefact: something that can be used to help the next person on the same journey.
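The "implicit and consumer-defined" property of Go interfaces mentioned above is real Go semantics and easy to demonstrate. In this sketch (`Shape`, `circle`, and `Describe` are invented names for illustration), the `circle` type never declares that it implements `Shape` — having the right method set is enough, and the consumer owns the contract:

```go
package main

import "fmt"

// Shape is defined by the consumer: Describe declares exactly the
// behaviour it needs, rather than the type declaring what it provides.
type Shape interface {
	Area() float64
}

type circle struct{ r float64 }

// circle never mentions Shape, yet satisfies it implicitly.
func (c circle) Area() float64 { return 3.14159 * c.r * c.r }

// Describe accepts anything with an Area method.
func Describe(s Shape) string {
	return fmt.Sprintf("area = %.2f", s.Area())
}

func main() {
	fmt.Println(Describe(circle{r: 1})) // prints: area = 3.14
}
```

Articulating this inversion — the consumer defines the interface, the producer satisfies it silently — is exactly the kind of conceptual shift the Learning Log exists to capture.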

4.3 The AI Role Configuration

The most important design decision in any Upstack configuration is how the AI's roles are defined and constrained. The framework identifies five distinct roles, each with different relationships to the learner's struggle.

As Guide, the AI leads with questions rather than answers. It responds to "how does this work?" with "what do you think is happening?" before offering explanation. It provides conceptual scaffolding without completing the climb. This role preserves the productive struggle that builds Understanding.

As Challenger, the AI proactively questions assumptions. For a C++ engineer learning Go, this means flagging where C++ habits lead to Go anti-patterns, even when the learner has not asked. This is the double-loop mechanism: the AI is explicitly instructed to question governing assumptions, not just correct immediate errors.
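A concrete example of the kind of habit a Challenger would flag — the `Buffer` type here is a hypothetical illustration, not from the Go project. An engineer arriving from exception-based languages may reach for panic-style failure, where idiomatic Go returns the error as an ordinary value for the caller to handle:

```go
package main

import (
	"errors"
	"fmt"
)

// Exception-style habit (anti-pattern in Go):
//   func NewBuffer(size int) *Buffer { if size <= 0 { panic("bad size") } ... }

type Buffer struct{ data []byte }

// Idiomatic Go: failure is a value, returned alongside the result.
func NewBuffer(size int) (*Buffer, error) {
	if size <= 0 {
		return nil, errors.New("buffer size must be positive")
	}
	return &Buffer{data: make([]byte, size)}, nil
}

func main() {
	if _, err := NewBuffer(-1); err != nil {
		fmt.Println("caller handles the error:", err)
	}
}
```

A well-configured Challenger raises this unprompted the first time the learner writes a panic where an error return belongs — questioning the governing assumption, not just the line of code.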

As Scribe, the AI structures and records the learning journey into the Learning Log. This is a documentation role, not a learning one — it supports the retrieval and articulation function without generating the understanding itself.

As Code Reviewer, the AI audits for idiomatic correctness and suggests refactorings. This role is explicitly permitted to provide direct feedback — but on code the learner has already written through their own effort, not on problems they have not yet attempted.

As Maintainer, the AI keeps documentation and code consistent as the project evolves — renaming cascades, removing stale examples, tracking progress. This is entirely a support role with no learning function.

The critical rule is that the Guide and Challenger roles take precedence. The AI must never short-circuit productive struggle by jumping into Reviewer or Scribe mode before the learner has genuinely attempted the problem.

4.4 The Reasoning Visibility Principle

For the organisational persona, Upstack introduces a specific principle that extends the framework into evaluation and management practice. It is called the Reasoning Visibility Principle:

Understanding only exists if it can be made visible under novel pressure. Output is evidence of knowledge. Reasoning under improvised challenge is evidence of understanding.

A manager who asks "walk me through your thinking" is not being pedantic. They are probing for the mental model. If the reasoning is sound, the model exists. If the reasoning is absent or borrowed, it does not — and the next novel problem will expose this. The correct answer tells you nothing about what happens when the problem changes. The rationale tells you everything.

In the AI era, this probe must go deeper than it used to. A junior engineer can now produce a correct answer with sophisticated-sounding rationale, because they asked AI for both. The follow-up questions that reveal genuine understanding are improvisational: "What would happen if we changed this constraint?" "Where does this approach break down?" "Why did you rule out the alternative?" These questions require far transfer. They cannot be pre-generated. This is where genuine understanding either surfaces or collapses.

The Reasoning Visibility Principle has implications for how Upstack's organisational templates are designed: evaluation protocols are built around improvisational probing rather than fixed assessments, and the Apprentice Contract includes explicit guidance for managers on how to conduct reasoning reviews.

4.5 The Override/Extend Architecture

Upstack is designed as a framework, not a rigid prescription. Its architecture reflects the software engineering principle of framework versus application: the framework defines the structure and the defaults; adopters customise and extend within that structure.

The framework core — base templates, the principles documentation, the anti-patterns guide — is designed to be inherited without modification. Its stability is what gives the framework its integrity across different use cases. Overrides and extensions live in a separate layer that takes precedence where conflicts arise, using the same pattern as dotfiles, configuration systems, and theme frameworks.

Individual learning projects are self-contained within the repository, each containing the three project-level documents — Tutor Contract, Learner Context, Learning Log — plus assignment materials and documentation, while the global Learner Profile is shared across projects. A blank, annotated scaffold makes starting a new project as frictionless as possible.
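Under these principles, a repository might be laid out as follows. This tree is purely illustrative — the directory and file names are assumptions, not the actual Upstack layout:

```text
upstack/
├── framework/           # inherited core: base templates, principles, anti-patterns guide
├── overrides/           # adopter customisations; take precedence over framework/
├── projects/
│   ├── learn-go/        # featured use case: tutor contract, learner context, learning log
│   └── _scaffold/       # blank, annotated starting point for a new project
└── learner-profile.md   # global profile shared across all projects
```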


5. The Featured Use Case: Learning Go

The Go language learning project is not just an example — it is the proof of concept that preceded and inspired the framework. It was built by the author as a personal learning workspace for an experienced C++/Python developer learning Go idiomatically, and it contains the three Upstack documents in their earliest, most authentic form.

The original tutor configuration — the document that became the Tutor Contract template — includes the most important single line in the entire framework: "Do NOT offer the full answer with code snippets. Help the human learn, don't spoon feed." This instruction, written before Upstack existed as a named framework, is the productive struggle principle operationalised as an AI configuration. It works because it explicitly fights AI's natural gravity toward maximum immediate helpfulness.

The learner context document demonstrates the power of precise audience declaration. By explicitly stating what is excluded ("programming basics") and what the bridge is ("experienced engineers from C++, Java, Python"), it enables the AI tutor to calibrate every explanation, every analogy, every challenge to exactly the right level. The Go-specific idioms and differences from system languages are the entire focus — the signal-to-noise ratio of the learning is extraordinarily high as a result.

The two completed assignments — HostManager (a system monitor) and Feed Catcher (a market data feed client for Binance orderbook streams) — demonstrate the context-before-concept principle. These are not toy examples. They are real systems drawn from the author's capital markets domain. Every concept — goroutines, channels, interfaces, context cancellation — emerges from the needs of the system rather than being introduced abstractly. This is far transfer being seeded: the mental models formed are connected to real problem spaces from the beginning, which is what makes them transferable.

In the Upstack repository, the Go project is annotated to make the framework principles visible. Each design decision in the three core documents is called out with an annotation explaining which principle it embodies. This transforms the Go project from a personal learning workspace into a teaching artefact — exactly what the Learning Log protocol is designed to produce.


6. Contributing

Upstack grows stronger with every use case contributed. Each new learning project validates and extends the framework. A Go-to-Rust project from a Python engineer demonstrates the Learner Context for a different prior knowledge profile. A domain-driven design project from a junior engineer demonstrates the framework for conceptual rather than language learning. An org-L&D case study demonstrates the Reasoning Visibility Principle in a real team context.

Contributors are asked to annotate their learning projects with Upstack callouts — brief notes that make the framework principles visible in their specific context. This is what transforms personal learning artefacts into community teaching resources.


7. Relationship to The Complete Engineer

Upstack is a standalone open-source project, but it is also a concrete expression of two pillars of The Complete Engineer framework. The Technical Foundations pillar — knowing why, not just what — is precisely the Understanding layer of the Upstack learning stack. The Professional Integrity pillar — self-regulation, accountability, the internal governance that operates without external enforcement — is the organisational expression of metacognition: the capacity to self-regulate your learning and your performance without needing a manager to administer productive struggle for you.

Upstack gives The Complete Engineer something tangible and downloadable. The newsletter builds understanding; the framework gives readers something to act on. They are complementary expressions of the same underlying conviction: that in the age of AI, depth of understanding is the irreducible human differentiator.


8. The Name

Upstack refers to the learning stack — the progression from Exposure at the base, through Knowledge, through Understanding, to Transfer at the peak. Moving upstack means deliberately climbing toward genuine insight rather than staying at the level of recalled knowledge and AI-generated output.

It is a one-word name that engineers feel immediately. Moving up the stack, toward the layer where things actually matter, is something any experienced engineer understands intuitively. The name is also quietly aspirational: it implies that most AI-assisted learning today is happening downstack, and that Upstack is the framework for doing better.


9. Summary

Upstack exists because AI has made a specific and important kind of learning significantly harder: the kind that builds genuine understanding through effortful processing, productive struggle, and the slow construction of mental models that transfer to novel contexts. It does not fight AI — it redirects it, configuring it as a calibrated tutor rather than a frictionless oracle.

The framework rests on solid scholarly ground: the learning stack from cognitive science, the productive struggle research from neuroscience and educational psychology, the near/far transfer distinction from learning theory, and Argyris's double-loop learning from organisational behaviour. These are not decorative references — they are the design rationale for every principle and template in the framework.

The Go learning project is the living proof that this approach works. Built before Upstack had a name, it already contained all three core documents and all five AI role configurations. It produced genuine idiomatic Go fluency in an experienced engineer, in a domain he was approaching from scratch. It is the seed from which the framework grew.

Knowledge is a commodity. Upstack is how you build insight.


Concept Paper v1.1 · Ishan De Silva
For questions, contributions, and organisational enquiries: see the project repository.