tl;dr Colin is an experimental context engine that can load dynamic information into agent skills and keep them fresh. Give it a star on GitHub!
Context goes stale.
This is an increasingly serious problem for anyone building with agents, and it manifests in a few different ways. Stale context can mean:
- The information is out of date: a skill that was accurate when written but hasn’t been touched since.
- The information is unavailable: a conversation was compacted and details didn’t survive the summary.
- The information is siloed: it exists, but in a different conversation, a different chat window, or a different day.
There are two major standards for delivering context to agents today: MCP and agent skills. They occupy opposite ends of a spectrum, and neither has a particularly good solution to this problem.
Skills are optimized for passive access to static information. Drop a markdown file in a folder and the agent discovers it when relevant. Skills are lightweight, progressively disclosed, and always available.
And that’s a problem.
Skills are markdown, so updating them means editing files by hand. Most of us don’t, and so our agents’ skills decay, becoming less relevant over time.
MCP is optimized for active retrieval of dynamic information. The agent fetches what it needs on demand, so the information is always current.
And that’s a problem.
MCP requires conversational boilerplate because every conversation starts from scratch, so the agent has to figure out what it needs, invoke tools, load data, and accumulate knowledge. This is a lot of cycles and tokens spent setting up context that the agent already had yesterday.
Wouldn’t it be nice to combine the dynamism of MCP—open tickets, customer requests, upcoming meetings, recent PRs—with the passive availability of skills? Then our agents would have a way to consistently access a flow of constantly changing information. No copying and pasting. No waiting for tools to load. No hoping the agent sets up context the same way it did yesterday.
We need a way to combine the best of both worlds. And to quote the late Tom Lehrer: I have a modest example here.
Colin is an experimental context engine that keeps agent skills fresh. It works by treating skills as software.
Colin combines two major capabilities:
A powerful templating engine that loads information from dynamic sources and (optionally) processes it with LLMs. Templates can reference other files, GitHub files and PRs, Notion pages, Linear issues, any MCP server, and more. The templating language is Jinja, extended with providers for dynamic content and filters for LLM processing that summarize, classify, and extract information to your editorial specifications.
A dependency resolution system that tracks every reference to dynamic content and forms a resolution graph. When you compile a template, Colin traces that graph, evaluates all the references, and updates only the parts that have actually changed. Staleness can be content-based (the source changed), time-based (an hour passed), or both. Colin caches the rest (including LLM calls) to materialize your context incrementally.
Together, these let you write context that ranges from completely static to fully dynamic and LLM-processed, and use Colin to keep it up to date.
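To make the content-based staleness idea concrete, here's a minimal sketch in plain Python. This is not Colin's actual API—the function names and data shapes are hypothetical—but it illustrates the core mechanic: record a fingerprint of every dependency at compile time, and on the next compile, only the dependencies whose fingerprints changed need re-evaluation.

```python
import hashlib

# Hypothetical sketch, not Colin's implementation: a compiled template is
# fresh only if every dependency's content hash matches the hash recorded
# at the last compile. A real engine would also honor time-based expiry.

def content_hash(text: str) -> str:
    """Fingerprint a dependency's content."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def stale_dependencies(recorded: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return the dependencies whose content changed since the last compile."""
    return [
        name for name, text in current.items()
        if recorded.get(name) != content_hash(text)
    ]

# The last compile recorded this fingerprint...
recorded = {"team/weekly-notes.md": content_hash("Blocked on API review.")}
# ...and the source has since changed:
current = {"team/weekly-notes.md": "Blocked on API review. Infra migration done."}

print(stale_dependencies(recorded, current))  # → ['team/weekly-notes.md']
```

Anything not in that list—including any cached LLM output derived from it—can be reused as-is, which is what makes incremental recompilation cheap.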
You can use Colin’s output however you want: it’s just markdown. But to me, compiling agent skills is the obvious use case because the world has settled on them as the standard way to provide file-based context to agents. Therefore, Colin has first-class support for writing output directly to your skills folder. But the engine is equally happy to produce documentation, reports, configuration, and anything else you need to keep up to date.
Here’s what a Colin template looks like:
```markdown
---
name: team-status
description: Current state of platform team work
colin:
  cache:
    expires: 1d
---

# Team Status

## In Progress

{% for issue in colin.linear.issues(team='Platform', state='In Progress') %}
- {{ issue.identifier }}: {{ issue.title }} ({{ issue.assignee }})
{% endfor %}

## Summary

{{ ref('team/weekly-notes.md').content | llm_extract('key blockers and priorities') }}
```

Once compiled, Colin knows how to keep this skill up to date. The `ref()` call creates a dependency on `weekly-notes.md`. The Linear call creates a dependency on those issues. The cache directive enforces time-based staleness. Colin watches all of it, and recompiles when something changes.
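If you're curious how an LLM filter like `llm_extract` plugs into a Jinja pipeline, here's an illustrative sketch using Jinja2's standard custom-filter mechanism. To be clear, this is not Colin's implementation—`summarize` is a stub standing in for a real LLM call—but the registration pattern (`env.filters[...]`) is exactly how Jinja filters work.

```python
from jinja2 import Environment

def summarize(text: str, instruction: str) -> str:
    # Stub: a real engine would send `text` and `instruction` to an LLM
    # (and, like Colin, cache the result so unchanged inputs are free).
    first_line = text.strip().splitlines()[0]
    return f"[{instruction}] {first_line}"

env = Environment()
env.filters["llm_extract"] = summarize  # register the custom filter

template = env.from_string("{{ notes | llm_extract('key blockers') }}")
print(template.render(notes="Blocked on API review.\nShip date slipping."))
# → [key blockers] Blocked on API review.
```

Because filters are just functions, the engine can hash their inputs the same way it hashes file contents, which is what lets even LLM-processed sections participate in incremental recompilation.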
Try It
We just open-sourced Colin. It’s experimental, it’s going to grow, and I hope you’ll have fun with it. Please give it a star if you think it’ll be useful!
- Get the code: github.com/PrefectHQ/colin
- Read the docs: colin.prefect.io
- Try it out: `pip install colin-py`
One fun thing: Colin’s quickstart actually compiles itself into a live-updating skill, so any time we update the docs, your agent automatically learns the new features. Ambitious? Yes. But easy? Also yes!
Happy context engineering!
Comments
Introductory blog post: https://www.jlowin.dev/blog/colin
GitHub: https://github.com/PrefectHQ/colin
I think this is super neat - congrats OP! My only challenge is that now we have two things where we could be applying leverage to one. I would love the MCP-to-Skills Context Engine, so I can pour more energy into that one thing. But I totally see value here!
Yeah, this makes sense. Skills feel great to use, but they fall apart once the info starts changing. If Colin really keeps things fresh without extra glue code, that’s actually a nice middle ground. Curious how it holds up once things get messy.
Happy launch day/week! Thank you for releasing this (and whilst I'm here, also for FastMCP!) — I'm on board with the idea of skills for static behaviour and MCP for dynamic information so I'm looking forward to taking Colin for a spin.
I'm currently building out an MCP server for Apache Kafka and since it relies on familiarity with Kafka and its workflows, I'd been thinking a lot about how to organise skills to get quick behavioural wins for agents without having to build out a full blown knowledge base (I'm trying to respect the time it takes teams traversing the AI engineering maturity curve). It felt somewhat natural to put skills behind MCP resources so I like where you are going with the Skills Provider in the recent FastMCP 3 release.
What do you think about Cloudflare's recent approach for agent skills discovery via .well-known URIs and using RFC/standards?
this is great!