This project is a beginner-friendly foundation for building agentic loops in TypeScript.
If you are new to AI programming, this repo is meant to show the core pattern:
- keep state,
- ask a model what to do next,
- optionally call tools,
- update state,
- stop safely when done.
The current behavior is intentionally simple so you can plug in your own workflow later.
An agentic loop is a repeated decision cycle where the model can choose actions (respond, ask user, call a tool, or stop) based on current state.
```mermaid
flowchart TD
  userInput[UserInput] --> initState[InitState]
  initState --> policyCheck[PolicyCheck]
  policyCheck --> modelDecision[ModelDecision]
  modelDecision -->|CALL_TOOL| toolExecution[ToolExecution]
  toolExecution --> stateUpdate[StateUpdate]
  stateUpdate --> policyCheck
  modelDecision -->|ASK_USER| waitForInput[WaitForUserInput]
  waitForInput --> policyCheck
  modelDecision -->|RESPOND_or_STOP| finalize[Finalize]
```
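The decision step in this cycle can be modeled as a discriminated union. The exact shape in this repo lives in `src/types.ts`; the names below are illustrative only:

```typescript
// Illustrative decision type; the repo's real shape lives in src/types.ts.
type AgentDecision =
  | { action: "RESPOND"; message: string }
  | { action: "ASK_USER"; question: string }
  | { action: "CALL_TOOL"; tool: string; input: unknown }
  | { action: "STOP"; reason: string };

// A tiny dispatcher showing how a loop branches on the action tag.
function describe(decision: AgentDecision): string {
  switch (decision.action) {
    case "RESPOND":
      return `respond: ${decision.message}`;
    case "ASK_USER":
      return `ask: ${decision.question}`;
    case "CALL_TOOL":
      return `tool: ${decision.tool}`;
    case "STOP":
      return `stop: ${decision.reason}`;
  }
}
```

Because the union is tagged on `action`, TypeScript narrows each branch and the compiler catches any unhandled action.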
1. Enable Corepack (one-time):

   ```bash
   corepack enable
   ```

2. Install dependencies:

   ```bash
   yarn install
   ```

3. Create local env file:

   ```bash
   cp .env.example .env
   ```

4. Run the project:

   ```bash
   yarn dev
   ```

5. Trigger a tool call:

   ```bash
   yarn dev --agent echo "echo: hello from tool"
   ```

6. Try an interactive agent (`ASK_USER`/`awaiting_user` flow):

   ```bash
   yarn dev --agent interviewer
   ```

7. List available agents:

   ```bash
   yarn dev --list-agents
   ```
- **Provider**: model backend (`rule`, `openai`, `anthropic`)
- **Model target**: pair of `{ provider, model }`
- **Agent plugin**: modular agent definition (prompt + tools + metadata)
- **Tool**: function the model can request (`echo` is the starter tool)
- **Loop policy**: hard safety rules (step limits, timeout, repeat-decision stop)
- **State**: conversation + loop counters + final answer
- **Observability**: logs and in-memory metrics for each run
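A model target from the glossary can be sketched as a small interface. These are illustrative shapes, not the repo's real definitions (which live in `src/types.ts`):

```typescript
// Illustrative shapes for the glossary terms above.
interface ModelTarget {
  provider: "rule" | "openai" | "anthropic";
  model: string;
}

interface AgentPluginInfo {
  id: string;
  description: string;
  systemPrompt: string;
}

// Example: the primary model target as a { provider, model } pair.
const primary: ModelTarget = { provider: "openai", model: "gpt-4.1-mini" };
```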
```mermaid
flowchart LR
  entryPoint[EntryPoint src/index.ts] --> loopEngine[LoopEngine src/agent/loop.ts]
  entryPoint --> agentRegistry[AgentRegistry src/agents]
  loopEngine --> policy[Policy src/agent/policy.ts]
  loopEngine --> context[ContextWindow src/agent/context.ts]
  loopEngine --> router[ModelRouter src/llm/router.ts]
  router --> llmClient[LlmClient src/llm/client.ts]
  llmClient --> providers[Providers src/llm/providers]
  agentRegistry --> tools[ToolRegistry src/tools/registry.ts]
  loopEngine --> tools
  tools --> builtins[BuiltInTools src/tools/builtins]
  loopEngine --> logs[Logger src/observability/logger.ts]
  loopEngine --> metrics[Metrics src/observability/metrics.ts]
```
- `src/index.ts` wires config, logger, metrics, LLM client, and tools.
- It resolves the selected agent (`--agent`) from `src/agents/registry.ts`.
- It resumes loop execution when the reason is `awaiting_user`, using `@inquirer/prompts`.
- `src/agent/loop.ts` is the orchestrator. It performs:
  - policy check,
  - model decision,
  - optional tool execution,
  - state update,
  - termination.
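The orchestrator's five steps can be sketched as a single loop. The type and function names here are placeholders for illustration, not the repo's actual API:

```typescript
// Minimal loop skeleton (placeholder names, not the repo's actual API).
type Decision = { action: "CALL_TOOL" | "RESPOND" | "STOP"; payload: string };

interface LoopState {
  steps: number;
  messages: string[];
  finalAnswer?: string;
}

function runLoop(
  decide: (state: LoopState) => Decision,
  maxSteps: number
): LoopState {
  const state: LoopState = { steps: 0, messages: [] };
  while (state.steps < maxSteps) {                       // policy check (hard step limit)
    state.steps += 1;
    const decision = decide(state);                      // model decision
    if (decision.action === "CALL_TOOL") {
      state.messages.push(`tool:${decision.payload}`);   // tool execution + state update
      continue;
    }
    state.finalAnswer = decision.payload;                // termination
    break;
  }
  return state;
}
```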
- `src/llm/client.ts` is provider-agnostic and routes by provider name.
- `src/llm/providers/` contains provider adapters:
  - `ruleProvider.ts` is implemented for deterministic local behavior.
  - `openaiProvider.ts` is implemented for real OpenAI decision calls.
  - `anthropicProvider.ts` is scaffolded for future integration.
- `src/llm/router.ts` provides primary + fallback model routing.
- `src/tools/contracts.ts` defines tool contracts and retry/idempotency metadata.
- `src/tools/registry.ts` handles registration, input validation, retries, and execution.
- `src/tools/builtins/echo.ts` is a minimal example tool.
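A tool contract along these lines pairs validation metadata with an execute function. This is a hypothetical shape: the repo's real contracts use zod schemas, while a plain validator stands in here to keep the sketch dependency-free:

```typescript
// Hypothetical tool contract; the repo's real contract in src/tools/contracts.ts uses zod.
interface ToolContract<I, O> {
  name: string;
  idempotent: boolean;
  validate: (input: unknown) => I; // throws on bad input
  execute: (input: I) => Promise<O>;
}

// A minimal echo tool in the spirit of src/tools/builtins/echo.ts.
const echoTool: ToolContract<{ text: string }, string> = {
  name: "echo",
  idempotent: true,
  validate: (input) => {
    const candidate = input as { text?: unknown };
    if (!candidate || typeof candidate.text !== "string") {
      throw new Error("echo: expected { text: string }");
    }
    return { text: candidate.text };
  },
  execute: async ({ text }) => `echo: ${text}`,
};
```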
- Hard limits: `maxSteps`, `maxToolCalls`, `timeoutMs`
- Context window cap: `maxMessages`
- Repeat-decision guard: `maxRepeatedDecisionSignatures`
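Taken together, these limits amount to a policy check run before each step. A sketch under assumed names (the real logic lives in `src/agent/policy.ts`):

```typescript
// Illustrative policy check; field names mirror the limits above, not the repo's exact API.
interface LoopCounters {
  steps: number;
  toolCalls: number;
  elapsedMs: number;
  repeatedSignatures: number;
}

interface LoopPolicy {
  maxSteps: number;
  maxToolCalls: number;
  timeoutMs: number;
  maxRepeatedDecisionSignatures: number;
}

// Returns a stop reason, or null if the loop may continue.
function checkPolicy(c: LoopCounters, p: LoopPolicy): string | null {
  if (c.steps >= p.maxSteps) return "max_steps";
  if (c.toolCalls >= p.maxToolCalls) return "max_tool_calls";
  if (c.elapsedMs >= p.timeoutMs) return "timeout";
  if (c.repeatedSignatures >= p.maxRepeatedDecisionSignatures) return "repeat_decision";
  return null;
}
```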
- The `ASK_USER` action pauses execution with the `awaiting_user` reason for safe turn-by-turn interactions.
- Non-idempotent tools are not retried.
- Idempotent tools can retry on retryable errors only.
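That retry rule can be expressed as a small guard. `isRetryable` and the option names here are assumptions for illustration; the repo's actual retry helper lives in `src/utils/`:

```typescript
// Sketch of the retry rule: only idempotent tools retry, and only on retryable errors.
async function executeWithRetry<T>(
  run: () => Promise<T>,
  opts: {
    idempotent: boolean;
    isRetryable: (err: unknown) => boolean; // hypothetical classifier
    maxAttempts: number;
  }
): Promise<T> {
  let attempt = 0;
  for (;;) {
    attempt += 1;
    try {
      return await run();
    } catch (err) {
      const mayRetry =
        opts.idempotent && opts.isRetryable(err) && attempt < opts.maxAttempts;
      if (!mayRetry) throw err; // non-idempotent or non-retryable: fail fast
    }
  }
}
```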
- `zod` validates config, model decision shape, and tool inputs.
- `pino` logs structured lifecycle events.
- In-memory metrics track counters and timings for loop/model/tool operations.
- `src/index.ts` - app entrypoint
- `src/agents/` - multi-agent plugin contracts, registry, and built-ins
- `src/config.ts` - env parsing and defaults
- `src/types.ts` - shared types
- `src/errors.ts` - standardized error classes
- `src/agent/` - loop orchestration and policy
- `src/llm/` - model client, schemas, routing
- `src/llm/providers/` - provider adapters
- `src/tools/` - tool contracts, registry, and built-ins
- `src/observability/` - logs and metrics
- `src/prompts/` - base prompts and examples
- `src/utils/` - retry helper
- `tests/` - unit tests for config, loop, tools, and provider routing
- `LLM_PROVIDER` / `LLM_MODEL`: primary model target
- `LLM_FALLBACK_MODELS`: fallback targets, comma-separated (example: `openai:gpt-4.1-mini,openai:gpt-4.1-nano`)
- `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`: provider credentials
- `LOOP_MAX_STEPS`, `LOOP_MAX_TOOL_CALLS`, `LOOP_TIMEOUT_MS`
- `LOOP_MAX_MESSAGES`, `LOOP_MAX_REPEAT_DECISION_SIGNATURES`
- `LOG_LEVEL`
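Parsing the comma-separated `LLM_FALLBACK_MODELS` value into structured targets might look like this (an illustrative helper, not the repo's actual config code in `src/config.ts`):

```typescript
interface ModelTarget {
  provider: string;
  model: string;
}

// Split "openai:gpt-4.1-mini,openai:gpt-4.1-nano" into { provider, model } pairs.
function parseFallbackModels(raw: string): ModelTarget[] {
  return raw
    .split(",")
    .map((entry) => entry.trim())
    .filter((entry) => entry.length > 0)
    .map((entry) => {
      const [provider, model] = entry.split(":");
      if (!provider || !model) {
        throw new Error(`invalid model target: ${entry}`);
      }
      return { provider, model };
    });
}
```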
- `yarn dev` - run in dev mode
- `yarn dev --agent echo "echo: hello"` - tool demo
- `yarn dev --agent interviewer` - interactive question flow
- `yarn dev --agent country-hello` - favorite-country guess + native greeting (starts with `What is your name?`)
- `yarn dev --list-agents` - see registered agents
- `yarn smoke:openai` - run a one-shot OpenAI adapter health check with clear auth/quota/network diagnostics
- `yarn build` - compile to `dist/`
- `yarn start` - run the compiled app
- `yarn lint` - type-check
- `yarn test` - run tests
- `yarn format` / `yarn format:write` - format checks/fixes
1. Create a new plugin in `src/agents/builtins/<yourAgent>.ts`.
2. Provide `id`, `description`, `systemPrompt`, and `registerTools(...)`.
3. Register the plugin in `src/agents/index.ts`.
4. Add or update tools in `src/tools/builtins/` with strict zod schemas.
5. Add tests in `tests/` for loop reasons (`completed`, `awaiting_user`) and tool behavior.
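A plugin following these steps might look roughly like this. The exact contract lives in `src/agents/`, so the field names and registry shape here are assumptions:

```typescript
// Hypothetical plugin shape; check src/agents/ for the real contract.
interface ToolRegistry {
  register: (name: string, fn: (input: unknown) => Promise<string>) => void;
}

interface AgentPlugin {
  id: string;
  description: string;
  systemPrompt: string;
  registerTools: (registry: ToolRegistry) => void;
}

const greeterAgent: AgentPlugin = {
  id: "greeter",
  description: "Greets the user by name.",
  systemPrompt: "You greet users warmly. Use the greet tool when given a name.",
  registerTools: (registry) => {
    registry.register("greet", async (input) => `Hello, ${String(input)}!`);
  },
};
```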
Use immutable installs in CI to prevent lockfile drift:
```bash
yarn install --immutable
```

Implement Anthropic provider parity in `src/llm/providers/anthropicProvider.ts`, then add one domain workflow (for example, vacation preference narrowing) as a first-class agent plugin.
- Interactive prompts use `@inquirer/prompts` when available and automatically fall back to Node `readline` for compatibility.
- Timeout is measured per active run turn; waiting time between `ASK_USER` rounds does not consume timeout budget.
If you are new to AI engineering, read the beginner primer:
`docs/agentic-loop-primer.md`
