
FAQ.

Answers to the questions that come up most often. If your question is not answered here, the docs cover the operational details and the Manifesto covers the philosophy.

Getting started

What is Cognithor exactly?

Cognithor is a local-first agent operating system. You install it on your own machine, point it at a local LLM (via Ollama, LM Studio, or llama.cpp), and it runs a full Planner → Gatekeeper → Executor loop against a library of 145 MCP tools. The closest analogy is “Jarvis that runs on your own hardware” — no cloud, no telemetry, no subscription.

How much does it cost?

The core is free under Apache 2.0. The optional paid packs are one-time purchases (the first one, Reddit Lead Hunter Pro, is $79). You never pay to run Cognithor itself, and you never pay a recurring fee. See the Manifesto for the commitments the project has made about that.

What hardware do I actually need?

16 GB RAM minimum, 32 GB recommended. On a 16 GB machine you use the smaller 7B planner model; on 32 GB you get the default 27B one and noticeably better output quality. An NVIDIA GPU (12 GB+) or Apple Silicon dramatically speeds things up but is not required — a modern Intel/AMD CPU runs the 7B model at usable speed. Full details in the install doc.

Does it run on Windows, macOS, and Linux?

Yes — all three. Windows 11 is the primary development target; macOS 13+ and Ubuntu 22.04+ are tested on every release. The binary is platform-native, not a web-app-in-a-window.

How long does installation take?

Cognithor itself installs in under a minute. The model download (via Ollama) is the slow part — the 27B planner is ~18 GB, the 7B fallback is ~4 GB. On a typical home connection, budget 15 minutes end-to-end the first time. Every subsequent run is instant because everything is cached locally.

Privacy and data

Do you collect any data about me?

No. The software has zero telemetry — no analytics pings, no crash reports, no anonymized usage tracking. The marketing website has no cookies, no analytics script, and no third-party embeds. Both commitments are in the privacy policy and the Manifesto.

Will my conversations be used to train a model?

Never. Your chat history, your vault, your memory tiers — none of it ever leaves your machine. The Manifesto has this as one of the “never” commitments: the project will never train models on your vault, your chats, or your memory. It is an architectural property of the system, not a policy we could change.

What happens if I turn off my internet connection?

Cognithor keeps working. The Planner, Gatekeeper, Executor, Memory, Tools, and Skills all run locally. You lose web search (because that reaches outward by definition) and any channel that needs a network connection (Telegram, Discord, etc.), but the assistant itself — voice, CLI, vision, vault, skills — is fully functional offline.

Can I read the audit log of everything the agent did?

Yes. Every Gatekeeper decision — every proposed tool call, its risk class, whether it was approved, blocked, or escalated — is written to ~/.jarvis/logs/audit.jsonl. You can type /audit last inside the running daemon to see the most recent call with full reasoning.
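Because the log is newline-delimited JSON, it is easy to inspect with a few lines of Python outside the daemon. This is an illustrative sketch, not part of Cognithor: only the log path comes from the answer above, and the reader makes no assumptions about the entry schema.

```python
import json
from pathlib import Path

# Default location from the answer above; each line of the file is one
# JSON object describing a single Gatekeeper decision.
AUDIT_LOG = Path.home() / ".jarvis" / "logs" / "audit.jsonl"

def last_decisions(path=AUDIT_LOG, n=10):
    """Return the last `n` audit entries as a list of dicts."""
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                entries.append(json.loads(line))
    return entries[-n:]
```

From there you can filter on whatever fields the entries actually contain — blocked calls only, a particular tool, a date range — with ordinary list comprehensions.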

Models and LLMs

Which LLMs can I use?

Any model that Ollama, LM Studio, or llama.cpp can serve. The defaults are qwen3:27b for planning, qwen2.5-coder for coding, and llava for vision. The Multi-LLM router assigns different models to different roles and lets you override any assignment with cognithor model set.
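Conceptually, the router is a role-to-model map with per-role overrides. The sketch below is purely illustrative — the ModelRouter class is not Cognithor's actual implementation — but the default model names come from the answer above:

```python
# Default role assignments quoted in the answer above. The ModelRouter
# class itself is a hypothetical illustration, not Cognithor internals.
DEFAULT_ROLES = {
    "planner": "qwen3:27b",
    "coder": "qwen2.5-coder",
    "vision": "llava",
}

class ModelRouter:
    def __init__(self, overrides=None):
        # `overrides` plays the role of `cognithor model set` here:
        # any role not overridden keeps its default model.
        self.roles = {**DEFAULT_ROLES, **(overrides or {})}

    def model_for(self, role: str) -> str:
        return self.roles[role]
```

The useful property of this shape is that an override touches exactly one role, so swapping the planner never disturbs the coder or vision assignments.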

Can I use a cloud model like GPT-4 or Claude?

Yes, but opt-in only and never as the default. The Model Router supports cloud backends for specific roles (Anthropic, OpenAI, OpenRouter) if you configure them. Cloud routing is gated per role, so you can, for example, use a local planner and only fall back to a cloud model for one specific task. The default configuration uses zero cloud models.

What about running multiple models at once?

The router can load multiple models in parallel if your hardware has enough RAM/VRAM to hold them. Typical setups keep the planner and the coder loaded simultaneously (about 24 GB combined) and swap in the vision model on demand. Ollama handles the swap transparently — it keeps recently-used models warm and evicts idle ones.
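A quick way to sanity-check a multi-model setup is to add up the model footprints against your RAM/VRAM budget. In this sketch, the 18 GB planner figure comes from the install answer above, while the coder and vision sizes are rough assumptions — and actual memory use runs higher than disk size once context and KV cache are loaded.

```python
# Approximate on-disk sizes in GB. The planner figure is quoted earlier
# on this page; the coder and vision figures are rough assumptions.
MODEL_SIZES_GB = {
    "qwen3:27b": 18.0,
    "qwen2.5-coder": 6.0,
    "llava": 5.0,
}

def fits_in_budget(models, budget_gb):
    """Return (fits, total_gb) for keeping `models` resident at once."""
    total = sum(MODEL_SIZES_GB[m] for m in models)
    return total <= budget_gb, total
```

For the "about 24 GB combined" example above, `fits_in_budget(["qwen3:27b", "qwen2.5-coder"], 32)` returns `(True, 24.0)` — which is why 32 GB machines can keep both loaded and only swap the vision model on demand.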

Packs and the marketplace

What is a pack?

A pack is a bundle of skills, tools, and configs that makes Cognithor good at a specific job — “hunt leads on Reddit,” “triage my inbox,” “research a topic.” Packs ship as signed bundles and install through a 5-check validation pipeline. See the Skills feature page for the full story.

When does the creator marketplace open?

Q4 2026. The waitlist is already open on the Publish page. Creators keep 70% of every sale, with no exclusivity, one-time pricing, and payments handled by Gumroad.

Who verifies community packs?

Every pack goes through an automated 5-check validation pipeline before install: syntax check, prompt-injection scan, tool allowlist verification, safety scan, and Ed25519 signature verification. Packs that fail any check cannot be installed. See the Skills feature page.
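The pipeline's key property is that it fails closed: a single failing check blocks the install. Here is a minimal sketch of that control flow — the five check names come from the answer above, but the function and the stub predicates you pass in are hypothetical, not Cognithor's actual code.

```python
# The five check names are listed in the answer above; the predicates
# supplied to validate_pack are stand-ins for the real implementations.
CHECK_ORDER = [
    "syntax",
    "prompt_injection_scan",
    "tool_allowlist",
    "safety_scan",
    "ed25519_signature",
]

def validate_pack(pack, checks):
    """Run `checks` ({name: predicate}) in pipeline order.

    Fails closed: the first failing check blocks the install and is
    reported back; only a pack that passes all five can be installed.
    """
    for name in CHECK_ORDER:
        if not checks[name](pack):
            return False, name
    return True, None
```

Reporting the first failing check (rather than just a boolean) matters in practice: a creator whose pack is rejected needs to know whether the problem is a bad signature or a flagged prompt.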

Commercial use and licensing

Can I use Cognithor in my company?

Yes. The core is Apache License 2.0 — use it commercially, modify it, redistribute it, build products on top of it. There is no “enterprise tier” that unlocks features. The same binary runs the same way whether you are a solo developer or a 500-person company.

Can I fork Cognithor and ship my own distribution?

Yes, and the Apache 2.0 license guarantees your right to do so forever. You must preserve the license notice and attribution headers in source files you distribute. Other than that, ship whatever you want.

Is there a support contract available?

Not from the core team. This is an independent open-source project — the maintainer works on it alongside their day job. Community support happens on Discord and GitHub issues. If your use case needs guaranteed response times, you should plan accordingly or sponsor a maintainer independently.

Didn't find your question?

Open an issue on GitHub, or check the docs for deeper technical material.