Cognithor runs fully on your own machine. Installing it takes four steps: install Ollama, pull a model, grab the Cognithor binary, and run the first-time setup. Budget about 15 minutes for the whole thing.
## System requirements
| Resource | Minimum | Recommended |
|---|---|---|
| OS | macOS 13+, Ubuntu 22.04+, Windows 11 | Same |
| CPU | 8 cores, 3.0 GHz | 12+ cores |
| RAM | 16 GB | 32 GB |
| Disk | 20 GB free | 50 GB+ (for model weights) |
| GPU | Optional | NVIDIA RTX 3060 12GB or Apple Silicon |
The default 27B planner model needs about 18 GB of RAM when loaded. On a 16 GB machine, use the 7B planner and accept slightly lower quality.
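As a rough sketch, you can pick a planner from installed RAM before you start. The 24 GB cutoff below is an assumption derived from the 18 GB figure above, and the `/proc/meminfo` read is Linux-only (use `sysctl -n hw.memsize` on macOS):

```shell
# Suggest a planner model based on total RAM (Linux sketch).
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
total_gb=$(( total_kb / 1024 / 1024 ))
if [ "$total_gb" -ge 24 ]; then
  model="qwen3:27b"   # default planner, ~18 GB loaded
else
  model="qwen3:7b"    # small-RAM fallback
fi
echo "suggested planner: $model"
```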
## Step 1: Install Ollama
Ollama is how Cognithor talks to local LLMs. It is the default backend — LM Studio and llama.cpp also work if you already run them.
macOS / Linux:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```
Windows: Download the installer from ollama.com/download and run it.
Verify:

```shell
ollama --version
```
## Step 2: Pull the default model
Cognithor defaults to `qwen3:27b` for planning. On a 16 GB machine, use `qwen3:7b` instead.
```shell
# Default (32 GB+ machines)
ollama pull qwen3:27b

# Small-RAM fallback
ollama pull qwen3:7b
```
The model downloads run 15–40 GB depending on which one you pick. The first pull is slow; every subsequent run is local.
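Since a pull can be a 40 GB download, a quick free-space check first can save a wasted hour. This sketch assumes the model store lives on the same disk as `$HOME` (Ollama's default location is under your home directory):

```shell
# Free space (in GB) on the filesystem backing $HOME.
avail_gb=$(df -Pk "$HOME" | awk 'NR==2 { print int($4 / 1024 / 1024) }')
echo "free space under \$HOME: ${avail_gb} GB"
if [ "$avail_gb" -lt 40 ]; then
  echo "low on space: consider qwen3:7b or freeing up disk first"
fi
```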
## Step 3: Install Cognithor
Download the binary from the releases page for your platform. Unzip it into a directory on your PATH (for example `~/.local/bin/` on macOS/Linux, or `C:\Program Files\Cognithor\` on Windows).
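If `~/.local/bin` is not already on your PATH, a minimal setup looks like this (macOS/Linux; persisting the export in your shell rc file is up to you):

```shell
# Create the directory and put it on PATH for the current shell.
mkdir -p "$HOME/.local/bin"
export PATH="$HOME/.local/bin:$PATH"
# To persist it (bash shown; adjust for zsh/fish):
# echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
```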
Verify:

```shell
cognithor --version
```
## Step 4: Initial setup
Run the bootstrap command. It creates the config directory, initializes the vault, and walks you through picking a model.
```shell
cognithor init
```
The bootstrap:

- Creates `~/.jarvis/` with an empty vault, memory tiers, and an audit log file
- Probes your Ollama installation to see which models are available
- Writes a default `~/.jarvis/config.yaml` you can edit later
- Generates a per-session API token for the REST interface
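For orientation only, the generated config is plausibly shaped something like the sketch below; the key names here are illustrative assumptions, not Cognithor's actual schema, so treat the file `cognithor init` wrote as authoritative:

```yaml
# HYPOTHETICAL sketch of ~/.jarvis/config.yaml; check your generated file.
planner:
  model: qwen3:27b            # swap to qwen3:7b on 16 GB machines
backend:
  provider: ollama
  host: http://localhost:11434
```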
When it finishes, the CLI prints a short "next steps" block. You are ready for the first run.
## Troubleshooting
`ollama: command not found` after the install script — close and reopen your terminal, or source your shell rc file. The install script adds Ollama to your PATH but does not reload the current shell.
Bootstrap fails on Windows with a path error — Cognithor assumes `%USERPROFILE%` is writable. If you have redirected it via group policy, set `COGNITHOR_HOME` to a writable directory before running `cognithor init`.
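The workaround above can be sketched as follows; the directory is a placeholder, so pick any path you can write to (PowerShell equivalent: `$env:COGNITHOR_HOME = "D:\cognithor"`):

```shell
# Point Cognithor at a writable home before bootstrapping.
export COGNITHOR_HOME="$HOME/cognithor-data"   # placeholder path
mkdir -p "$COGNITHOR_HOME"
# then run: cognithor init
```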
Model pull hangs at 99% — the final percent is the checksum verification; it is slow on spinning disks. Wait it out.
`cognithor: ollama unreachable at localhost:11434` — make sure Ollama is running (`ollama serve` or the system service). On Linux the installer registers a systemd unit; on macOS and Windows the installer runs Ollama at login.
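You can probe the endpoint yourself before re-running Cognithor. This only assumes the default port from the error message; it prints a hint either way:

```shell
# Probe the Ollama endpoint and report status.
if curl -fsS --max-time 2 "http://localhost:11434/" >/dev/null 2>&1; then
  status="reachable"
else
  status="unreachable; start it with: ollama serve"
fi
echo "ollama at localhost:11434 is $status"
```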
## What's next
- First Run — what to do in your first 10 minutes with Cognithor
- Connect a Channel — add Telegram, Discord, or another chat interface
- Architecture Overview — how the pieces fit together