
Writing a Skill

Author your own Cognithor skill — a markdown file that tells the Planner how to use the tools for a specific job. Covers the skill format, the tool allowlist, testing, and installing locally.

A skill is a markdown file that makes Cognithor good at a specific thing. "Draft a standup update," "triage my inbox," "watch three subreddits for leads" — all skills. This guide walks you through writing one from scratch and installing it locally.

If you want to ship your skill to the community marketplace (Q4 2026), the extra steps — signing, metadata, packaging — live on the Publish page. This doc stays focused on the local-author path.

The anatomy of a skill

A Cognithor skill is a single markdown file with YAML frontmatter and a body. The frontmatter declares what tools the skill is allowed to call. The body is the prompt the Planner reads when the skill is active.

Here is a minimal skill:

---
name: weekly-standup
title: Weekly Standup Draft
description: Draft a 3-line weekly standup from the last seven days of chat history.
tools:
  - memory_read
  - vault_write
version: 0.1.0
---

You are drafting a weekly standup update.

Read the last seven days of chat history from the Episode tier
(use memory_read with tier="episode", range="7d"). Identify:

1. Three things the user actually shipped
2. One thing that is blocked or slow
3. One thing they plan to do next week

Write the draft in plain prose, three short paragraphs, first-person ("I shipped...").
Save it to the vault under `/standups/YYYY-MM-DD.md` using vault_write.

Do not invent events that are not in the chat history. If you cannot find
three things, say so — do not pad.

Two things to notice:

  • The tools: list is the allowlist. When the skill is active, the Tool Enforcer only lets the Planner call tools that appear in this list. Every other tool returns a "not allowed in this skill" error. This is how you scope what a skill can touch.
  • The prompt is instructions, not magic. The Planner reads this as system context on every turn the skill is active. Keep it short, specific, and actionable. Long prompts waste tokens and usually hurt quality.
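The enforcement model described above can be sketched in a few lines. This is an illustrative sketch only — the class and exception names here (`ToolEnforcer`, `ToolNotAllowed`) are assumptions for the example, not Cognithor's actual internals:

```python
# Hypothetical sketch of the allowlist check a skill's tools: list implies.
# Names are illustrative; Cognithor's real Tool Enforcer is not exposed like this.

class ToolNotAllowed(Exception):
    pass

class ToolEnforcer:
    def __init__(self, allowed_tools):
        # The skill's frontmatter tools: list becomes the allowlist.
        self.allowed = set(allowed_tools)

    def check(self, tool_name):
        # Any tool outside the list is rejected before it runs.
        if tool_name not in self.allowed:
            raise ToolNotAllowed(f"{tool_name}: not allowed in this skill")
        return True

enforcer = ToolEnforcer(["memory_read", "vault_write"])
enforcer.check("memory_read")       # allowed by the skill
try:
    enforcer.check("shell_exec")    # anything else raises
except ToolNotAllowed as err:
    print(err)
```

The point of the sketch: scoping is subtractive. A tool you forget to list is not "less preferred", it is unavailable.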

The folder layout

Local skills you author go in ~/.jarvis/skills/user/. The filename is the skill name plus the .md extension:

~/.jarvis/skills/
├── builtin/        # ships with Cognithor, don't edit here
├── generated/      # drafted by the Meta-Learner (pending review)
├── community/      # installed from the marketplace
└── user/           # your own skills
    └── weekly-standup.md

Drop your file in user/, restart the daemon (or run /skill reload), and the skill is registered. List skills with /skill list and activate one with /skill use weekly-standup.
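Before reloading, it can save a round-trip to sanity-check the file's shape. The script below is a hypothetical author-side helper, not part of Cognithor; it only understands the flat frontmatter format shown in this guide (simple `key: value` lines plus an indented tools list):

```python
# Stdlib-only lint for the skill format in this guide. Illustrative helper,
# not a Cognithor tool: it checks delimiters, required keys, and the body.

REQUIRED_KEYS = {"name", "description", "tools", "version"}

def lint_skill(text: str) -> list[str]:
    """Return a list of problems; an empty list means the file looks valid."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing opening '---' frontmatter delimiter"]
    closers = [i for i, line in enumerate(lines) if i > 0 and line.strip() == "---"]
    if not closers:
        return ["missing closing '---' frontmatter delimiter"]
    end = closers[0]
    front = lines[1:end]
    # Top-level keys are unindented "key: value" lines.
    keys = {line.split(":", 1)[0].strip()
            for line in front if ":" in line and not line.startswith(" ")}
    problems = [f"missing key: {key}" for key in sorted(REQUIRED_KEYS - keys)]
    # Tools are "- name" entries under tools:.
    tools = [line.strip()[2:] for line in front if line.strip().startswith("- ")]
    if "tools" in keys and not tools:
        problems.append("tools list is empty")
    if not "\n".join(lines[end + 1:]).strip():
        problems.append("prompt body is empty")
    return problems

example = """---
name: weekly-standup
description: Draft a weekly standup.
tools:
  - memory_read
  - vault_write
version: 0.1.0
---

You are drafting a weekly standup update.
"""
print(lint_skill(example))  # []
```

Run it over a file with `lint_skill(open(path).read())` before `/skill reload`; an empty list means the frontmatter at least parses the way this guide expects.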

Writing a good prompt

The single biggest mistake authors make is writing prompts that are too long. A good skill prompt is rarely over 200 words. Here is the shape that works:

  1. One sentence of role. "You are drafting X." or "You are helping the user do Y." Not three paragraphs of personality. The Planner already has a personality from the core prompts.
  2. The goal in plain language. What does success look like? If you cannot describe success in two sentences, the skill's scope is too wide.
  3. The tools and when to use each. Don't re-explain what a tool does — the Planner already knows. Just say "use vault_write to save the draft."
  4. One or two guardrails. "Do not invent events." "Ask for confirmation before posting." These are the rules that separate a useful skill from a sloppy one.
  5. Examples if the task is ambiguous. If there is a specific output format you want, show one. Show, don't describe.

Things to leave out: the model's name, the channel it is running in, the user's name. All of that is in the context bundle already — the skill doesn't need to reinvent it.
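Applied to the "triage my inbox" idea from the intro, the five-part shape might produce a prompt like this. The tool names here (`mail_read`, `vault_write` aside) are assumptions for the example, not a confirmed Cognithor tool set:

```markdown
You are triaging the user's inbox.

Goal: sort unread mail into "needs a reply today", "can wait", and "ignore",
and write a one-line summary for each message in the first group.

Use mail_read to fetch unread messages and vault_write to save the summary list.

Ask for confirmation before archiving anything. Do not draft replies.

Example output line: "ACME invoice (needs reply today): they want the PO number."
```

Note how each of the five parts gets one or two lines — role, goal, tools, guardrails, example — and the whole prompt stays well under 200 words.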

Testing a skill

Cognithor has a /skill test slash command that runs a skill against a synthetic prompt and prints the PGE loop trace without committing any tool calls. Use it before you trust a new skill with real data.

> /skill test weekly-standup "draft this week's update"

You see:

  • Every tool call the Planner proposed
  • Every Gatekeeper decision (and reasoning)
  • Which calls were blocked by the Tool Enforcer's allowlist (should be zero if your tools list is correct)
  • The final output the Planner would have sent

If a tool call is blocked unexpectedly, either your allowlist is missing that tool (widen the allowlist) or the Planner is trying to do something outside the skill's scope (tighten the prompt). Either way it is a bug — fix it before trusting the skill with real data.

Versioning

The version field in frontmatter follows semver. Bump the minor version when you change the prompt or the tool list; bump the patch version for typo fixes. The Performance Tracker (see the Skills feature page) uses version boundaries to detect regressions — if your skill starts failing after a bump, it auto-disables the new version and falls back to the old one.
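As a quick author-side check before committing a bump, you can classify it. A minimal sketch, assuming plain x.y.z version strings as used in this guide; this is not a Cognithor API:

```python
# Classify a semver bump between two x.y.z version strings.
# Per the rule above: prompt/tool-list change => minor, typo fix => patch.

def bump_kind(old: str, new: str) -> str:
    o = tuple(int(part) for part in old.split("."))
    n = tuple(int(part) for part in new.split("."))
    if n[0] > o[0]:
        return "major"
    if n[0] == o[0] and n[1] > o[1]:
        return "minor"
    if n[:2] == o[:2] and n[2] > o[2]:
        return "patch"
    return "invalid"  # equal or decreasing versions are not a bump

print(bump_kind("0.1.0", "0.2.0"))  # minor
print(bump_kind("0.1.0", "0.1.1"))  # patch
```

If `bump_kind` says "patch" but you edited the prompt, bump the minor version instead — the Performance Tracker can only spot a regression at a version boundary it knows about.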

Common mistakes

Listing every tool "just in case." Don't. The allowlist is there to scope the skill. A skill that lists every filesystem tool plus shell plus web will get classified as ORANGE by default, and every call will need approval. Pick the minimum set.

Writing prompts that assume a specific model. Cognithor routes to whichever model fits the task. A prompt that says "as GPT-4..." will break when the router sends the turn to qwen3.

Trying to manage state inside the prompt. The Planner is stateless across skill activations. If you need state, use memory tiers or vault writes. Instructing the Planner to "remember" something in the prompt does nothing once the activation ends.

Ignoring failure modes. If your skill calls web_fetch, the fetch will sometimes fail. The prompt should tell the Planner what to do when that happens — "if the fetch returns an error, ask the user what to do" is a line worth adding.

What's next