Methodology
How it works.
We translate books into structured tools your AI agent can call. Here's why that matters, and what we actually do.
The context-engineering case
Context engineering is the practice of curating what goes into an agent's context window. Everything the model sees when generating — system prompt, user message, conversation history, tool definitions, retrieved documents, tool results — competes for the same space, and shapes the answer in proportion to what gets through.
When you ask an agent a substantive question, the answer comes from one of three places. Each has tradeoffs:
- Training data. Whatever the model absorbed during training. Fast, but uncited, fuzzy, and frozen at the training cutoff. The more specific a book is to your problem, the more partial the model's grasp.
- Direct retrieval. Paste source pages into context at the moment of the question — the RAG pattern. Works for public-domain text. Doesn't work for paywalled, copyrighted, or licensed material — and dumping a 200-page book into context for one question is wasteful regardless.
- Tool calls. The agent invokes a structured function and gets back a scoped, typed answer — the way it already calls web_search or your project's file search. Fast, citable, in-scope. But only useful if the right tools exist for the source material you care about.
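To the agent, a book-derived tool is indistinguishable in shape from web_search or file_search: a name, a description, and a typed input schema. The definition below is an illustrative sketch in the Anthropic tool-definition format — the tool name and fields are hypothetical, not our actual interface:

```json
{
  "name": "lookup_framework",
  "description": "Return the book's treatment of a named framework, with chapter and page citation.",
  "input_schema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Framework or concept to look up." },
      "max_results": { "type": "integer", "default": 3 }
    },
    "required": ["query"]
  }
}
```

Because the schema is typed and scoped, the model can decide on its own when the tool is relevant — exactly the property the third option depends on.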
Books-as-tools is the third option, applied to books. We translate a source book into structured tools — MCP servers and skills — that an agent can call when relevant, returning cited excerpts on demand. The diagram below shows where that sits.
An agent's context window
Everything the model sees when generating a response.
- System prompt: role, persona, behaviour.
- User message: the task in front of you.
- Conversation history: prior turns in the session.
- Tools: what the agent can call.
  - web_search: general information retrieval.
  - file_search: your project's own files.
  - book-power MCPs & skills: cited, scoped, book-derived.
- Tool results: only the cited excerpt enters context.
- Response: what the user reads, with citations intact.
The agent calls a tool only when the current question warrants it. The book's content enters context only on demand, only the pieces that matter, with chapter and page right there — ready to be cited back to the user.
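The contract described above is small: a scoped query goes out, and only a cited excerpt comes back into context. A minimal stdlib-only sketch, with hypothetical field names and an invented example excerpt:

```python
from dataclasses import dataclass

@dataclass
class Excerpt:
    """One tool result: the excerpt plus the citation it carries."""
    text: str      # the only book content that enters context
    chapter: str
    page: int

    def cite(self) -> str:
        # Render the citation the agent passes back to the user.
        return f"{self.text} ({self.chapter}, p. {self.page})"

# Hypothetical result of one tool call — the rest of the book never loads.
result = Excerpt(text="Premature optimisation is the root of all evil.",
                 chapter="ch. 3", page=112)
print(result.cite())
```

The point of the shape is that the citation travels with the text: there is no step at which the agent holds an excerpt without knowing where it came from.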
What we build
For each book in the catalog we ship one of two artifact types:
- An MCP server
- A Model Context Protocol server that any MCP-compatible client — Claude Desktop, Claude Code, and a growing list of others — can install and call as a tool. This covers most of our work to date.
- An Agent Skill
- A structured directory of files loaded into the agent's context as referenceable knowledge — no separate server, no install step.
The choice depends on the source. Get in touch if you want to talk through which would suit yours.
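Under the hood, the MCP-server variant reduces to a lookup over the extracted catalog; the real artifact wraps this in the MCP protocol. A stdlib-only sketch of the handler's core, with an invented two-entry catalog and illustrative names:

```python
# Illustrative catalog: entries keyed by topic, each carrying its source location.
CATALOG = {
    "core framework": {"text": "The framework separates...", "chapter": "2", "page": 35},
    "worked case":    {"text": "The case study shows...",    "chapter": "3", "page": 62},
}

def lookup(query: str) -> dict:
    """What an MCP tool handler would do: match the query against the
    catalog and return the entry with its citation, or report a miss."""
    key = query.lower().strip()
    entry = CATALOG.get(key)
    if entry is None:
        return {"found": False, "query": query}
    return {"found": True, **entry,
            "citation": f"ch. {entry['chapter']}, p. {entry['page']}"}
```

A skill arrives at the same place by a different route: the catalog ships as consultable files rather than a callable function, so no server sits between the agent and the entries.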
How a book becomes a tool
Licensing is the filter, not a step. Public-domain or open-licensed sources ship as public artifacts under an open license. Copyrighted sources require an agreement with the rights-holder before any work begins. With that settled:
- Read the book. A human (us) actually reads it, taking notes on what the book contains that a practitioner would call on, and the questions they'd bring to it.
- Process the source. Run the book through Kreuzberg to get clean chapter-by-chapter text from the PDF or EPUB.
- Extract the catalog. Use Claude Sonnet via the Anthropic API, with structured-output schemas, to extract the book's load-bearing content from each chapter — cases, frameworks, definitions, quotes — preserving the source location for every entry.
- Wrap as MCP or skill. Build the artifact that exposes the catalog as callable functions or consultable files. Every return value carries a citation back to the source.
- Deploy. For MCP servers, host on Railway with HTTP transport — installable from any MCP-compatible client without the user setting up anything locally.
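The extraction step works against a fixed schema, which is what keeps a source location attached to every entry. A hedged sketch of what such a structured-output target could look like — the field names are illustrative, not our production schema:

```python
import json
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    kind: str      # "case" | "framework" | "definition" | "quote"
    title: str
    summary: str
    chapter: str   # source location is mandatory...
    page: int      # ...so every return value can carry a citation

def validate(raw: str) -> CatalogEntry:
    """Parse one model-emitted JSON object into a typed entry,
    rejecting anything outside the allowed kinds."""
    data = json.loads(raw)
    entry = CatalogEntry(**data)
    assert entry.kind in {"case", "framework", "definition", "quote"}
    return entry
```

Validating at extraction time, rather than at query time, means a malformed entry fails the build of the artifact instead of surfacing as a bad citation in front of a user.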