Knowladex docs.

Everything you need to drop raw notes into Knowladex and walk away with an interconnected wiki you can query from Claude, Cursor, or any tool that speaks MCP. Start with the 5-minute quickstart, or jump straight to the tool reference.

§ 01 What is Knowladex?

Knowladex is a multi-tenant knowledge-base platform built on a single architectural choice: the model that compiles your knowledge base is the model you're already paying for.

Drop raw documents — meeting transcripts, half-finished specs, customer call notes, scratch files — into a project. Then ask your AI tool to compile. It calls Knowladex's MCP compile tool, gets a payload of raw docs and current wiki state, and writes interconnected articles back over write_article. Knowladex stores, indexes, cross-links, and serves them.

The compiler isn't ours. It's the Claude (or other model) you brought to the conversation. Knowladex hands the raw documents back to your AI tool over MCP and your model does the reading and writing in its own context window — on its own bill. The substrate is ours: storage, multi-tenant primitives, the MCP server, the dashboard, the auth model, the search index, the backlink graph. The thinking happens upstream of us, in the AI tool you brought.

§ 02 Who it's for

  • Consultants & agencies running knowledge bases for many clients. Each client is its own org with its own users, tokens, and (optionally) subdomain.
  • AI-native product teams who want their own internal docs queryable by their own agents — same wiki, same scope, same permissions as their humans.
  • Solo operators drowning in scratch files and transcripts who want a wiki that organizes itself overnight.

§ 03 Quickstart (5 minutes)

Assumes Knowladex has already provisioned your workspace. (If you're self-hosting, see the repo for the deploy guide.)

  1. Sign in to your workspace

     Open https://app.knowladex.com/c/<your-org>/login and sign in with the email and password your account team gave you.

  2. Create your first project

     Click New project. A project is one knowledge base — pick a slug like acme-rebrand or q1-research.

  3. Mint an API token

     Go to My account → Create API token. You'll get a kbm.<org>.<random> string. Save it — it's only shown in full once.

  4. Wire up your MCP client

     The My account page has copy-paste configs for Claude Code, Cursor, and Claude Desktop. See connecting your AI tool for the full guide.

  5. Ingest your first doc

     In your AI tool: "ingest this file into the acme-rebrand project on Knowladex" — drop a markdown file. The AI tool will call ingest_document.

  6. Compile

     Then: "compile the acme-rebrand knowledge base". Your AI tool will call compile, read every raw doc, and write wiki articles via write_article.

  7. Browse

     Open the dashboard at /c/<org>/p/<project>. You'll see a fully-formed wiki with cross-links, an index, and search.

§ 04 Core concepts

Knowladex's data model is small enough to fit in one diagram: four nested layers (organization, project, raw documents, wiki articles), plus the compile loop that connects them and the MCP bridge that exposes them.

Organization

The top-level tenant. One org = one client (if you're an agency) or one company (if you're a product team). Orgs contain users, projects, and tokens, and are the unit of isolation — users in one org can never see another org's data. Org slugs look like acme-corp or knowladex.

Project

One knowledge base inside an org. An org can have many projects (e.g. marketing-research, infra-rewrite, product-decisions). Each project has its own raw store, wiki, index, schema, and activity log.

Raw documents

The unstructured input. Markdown, transcripts, meeting dumps, scratch files. Knowladex stores them as-is in the project's raw/ directory. Each raw doc gets an id, title, tags, and optional source URL. Raw docs are never modified after ingest — they're the source of truth.

Wiki articles

The compiled output. Markdown files with frontmatter (title, tags, sources), stored in the project's wiki/ directory. Articles link to each other with [[wikilinks]], and link back to the raw documents they were compiled from. Knowladex maintains a backlink graph automatically.
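The wikilink convention is small enough to sketch. A hypothetical extractor and backlink builder (Knowladex's real parser may differ, e.g. around aliases or escaping — this is illustrative only):

```python
import re
from collections import defaultdict

# Matches [[slug]] and the common [[slug|label]] alias form (an assumption;
# the docs only promise plain [[wikilinks]]).
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def outlinks(markdown: str) -> list[str]:
    """Return the slugs an article links to, in order of appearance."""
    return [m.group(1).strip() for m in WIKILINK.finditer(markdown)]

def backlink_graph(articles: dict[str, str]) -> dict[str, list[str]]:
    """Map each slug to the slugs of the articles that link to it."""
    graph = defaultdict(list)
    for slug, body in articles.items():
        for target in outlinks(body):
            graph[target].append(slug)
    return dict(graph)
```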

The compile loop

The verb that ties raw docs to wiki articles. When you ask your AI tool to "compile", it calls compile, which returns:

  • Every raw doc that's new or changed since the last compile
  • The current state of the wiki (existing articles, the index)
  • Per-project compile instructions from schema.md

Your AI tool reads everything, decides what articles to create or update, and calls write_article for each one. Knowladex stores the article, regenerates the dashboard, and updates the backlink graph. The next compile only sees what's changed since this run.
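The loop reads naturally as pseudocode. A hedged sketch of what the calling tool does — `call_tool`, `plan_articles`, and the bundle field names are stand-ins, not Knowladex names:

```python
def compile_loop(call_tool, plan_articles, org, project):
    """Client-side sketch of one compile run. `call_tool` is whatever your
    MCP client uses to invoke a named tool with arguments; `plan_articles`
    stands in for the LLM's judgment about which articles to create or
    update after reading the raw docs, wiki state, and schema."""
    bundle = call_tool("compile", {"org": org, "project": project})
    articles = plan_articles(bundle)  # the model's reading happens here
    for article in articles:
        call_tool("write_article", {"org": org, "project": project, **article})
    return len(articles)
```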

MCP — the bridge

Knowladex exposes everything — ingest, compile, write, search, lint — as Model Context Protocol tools. Any AI tool that speaks MCP (Claude Code, Cursor, Claude Desktop, your own agent) can call them with a Bearer token. There is no Knowladex SDK to install. The protocol is the SDK.

§ 05 Authentication & roles

Two ways to authenticate

Knowladex has two entry points: a web dashboard and an MCP endpoint. Each takes a different credential.

  • Web dashboard — sign in at /c/<org>/login with your email and password. A SameSite=Lax session cookie keeps you signed in.
  • MCP endpoint — POST to /mcp with Authorization: Bearer kbm.<org>.<random>. Tokens are minted from your account page and bound to a single user.

Tokens are scoped to one org. A kbm.acme.xxx token can only ever read or write acme's data. The org slug is encoded in the token, so cross-org leakage is structurally impossible.
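Since the kbm.<org>.<random> format is documented, the org binding can be sanity-checked client-side. A small sketch — treat everything after the second dot as opaque:

```python
def token_org(token: str) -> str:
    """Extract the org slug from a kbm.<org>.<random> token.
    The random suffix is opaque and should never be parsed further."""
    parts = token.split(".", 2)
    if len(parts) != 3 or parts[0] != "kbm":
        raise ValueError("not a Knowladex API token")
    return parts[1]
```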

Three roles

Every user (and every API token, since tokens inherit their user's role) has one of three roles:

  • viewer — read-only. Can list projects, read articles, search the wiki, view the activity log. Cannot create or modify anything.
  • editor — read + write content. Can ingest documents, compile, write articles, update the schema. Cannot manage users or tokens.
  • org_admin — full control of the org. Editor permissions plus manage users, mint tokens, change roles, delete users.

Role changes take effect on the next request — sessions and token caches are invalidated immediately. There is no waiting period.

See the permissions matrix for a tool-by-tool breakdown.

§ 06 Connecting your AI tool

Knowladex exposes POST /mcp over HTTP using the MCP Streamable HTTP transport. Any client that supports HTTP MCP can connect directly. Clients that only speak stdio (Claude Desktop, currently) connect through a small bridge.

Claude Code recommended

Claude Code has first-class HTTP MCP support. One terminal command:

claude mcp add knowladex https://app.knowladex.com/mcp \
  --scope user \
  --transport http \
  --header "Authorization: Bearer kbm.acme.YOUR_TOKEN"

--scope user is the important flag — without it Claude Code only registers Knowladex for the folder you're standing in. With it, Knowladex is available in every Claude Code session you ever start. From then on, try: "claude, list all the projects in knowladex".

Cursor

Add to ~/.cursor/mcp.json (create the file if it doesn't exist). Cursor will pick it up on next restart.

{
  "mcpServers": {
    "knowladex": {
      "url": "https://app.knowladex.com/mcp",
      "headers": {
        "Authorization": "Bearer kbm.acme.YOUR_TOKEN"
      }
    }
  }
}

Claude Desktop

Claude Desktop currently only speaks stdio MCP, so we use the mcp-remote bridge to forward stdio over HTTP. Add to claude_desktop_config.json (Settings → Developer → Edit Config) and restart Claude.

{
  "mcpServers": {
    "knowladex": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://app.knowladex.com/mcp",
        "--header",
        "Authorization: Bearer kbm.acme.YOUR_TOKEN"
      ]
    }
  }
}

Custom / curl

The endpoint speaks standard JSON-RPC 2.0 over HTTP. Test it with curl:

curl -X POST https://app.knowladex.com/mcp \
  -H "Authorization: Bearer kbm.acme.YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":1}'

You'll get back a list of every tool you have permission to call. From there, send tools/call requests with the tool name and arguments.
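For scripting beyond curl, the same envelope can be built in a few lines. A Python sketch — `tools_call` is a hypothetical helper, not part of any Knowladex SDK, because there isn't one:

```python
import json

def tools_call(tool: str, arguments: dict, req_id: int = 1) -> dict:
    """JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# The body for "list all articles tagged 'decision' in acme-discovery":
body = json.dumps(tools_call("list_articles", {
    "project": "acme-discovery",
    "tag": "decision",
}))
# POST `body` to https://app.knowladex.com/mcp with the usual
# Authorization: Bearer header and Content-Type: application/json.
```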

The setup snippets on your /me page are pre-filled with your real token and your real endpoint. The ones here are templates with placeholders — go to My account in the dashboard for one-click copy-paste.

§ 07 Recipes

Compile a knowledge base from scratch

You have a folder of meeting notes, customer transcripts, and half-finished specs. You want a wiki.

  1. Create a project: "create a project called acme-discovery in Knowladex".
  2. Ingest the docs: drop the folder into your AI tool and say "ingest every file in this folder into acme-discovery on knowladex, tagged with the year and the source type".
  3. Set the schema (optional): "update the schema for acme-discovery to require articles for every customer mentioned, every product mentioned, and every blocker raised".
  4. Compile: "compile the acme-discovery knowledge base". Your AI tool will call compile, get every raw doc plus your schema, and write articles.
  5. Lint: "lint acme-discovery and fix any orphans". Your AI tool will call lint_wiki and patch any issues.

Incremental updates

You've added new docs to a wiki that's been running for a while. You want to update only what changed.

  1. Ingest the new docs as usual.
  2. Ask your AI tool: "compile the acme-discovery knowledge base — incremental, just what's changed".
  3. The compile tool defaults to incremental mode. It returns only the new and modified raw docs, plus the current wiki state, so your AI tool can decide whether each new doc gets its own article, gets merged into an existing one, or just adds backlinks.

For a full rebuild from raw, pass fullRecompile: true. You'll rarely need this — incremental is the right default.

Lint and clean up

Wiki rot is real. Articles get orphaned, links break, content goes stale. The lint_wiki tool finds these issues; your AI tool can fix them.

  1. "lint acme-discovery on knowladex and show me the report"
  2. "fix every broken wikilink in the report" — your AI tool calls read_article, decides what the link should be, and calls write_article with the fix.
  3. "any article that hasn't been touched in 90 days, add a 'stale' tag"

Per-project schema

Each project has a schema.md file that's read on every compile. It's free-form markdown — instructions written for an LLM, not a parser. Use it to encode conventions:

# schema.md

## Article conventions
- Every customer mentioned in raw docs gets its own article.
- Every decision mentioned gets an article tagged 'decision'.
- Use [[wikilinks]] for every internal cross-reference.
- Cite source raw docs in the frontmatter sources field.

## Voice
- Past tense for events, present tense for current state.
- Lead with the conclusion, then the evidence.

Read it with get_schema; update it with update_schema. The next compile picks it up automatically.

§ 08 Tool reference — project

Every Knowladex capability is exposed as an MCP tool. This section documents the customer-facing tools you (and your AI tool) can call; parameters marked * are required. Two more tools — create_org and delete_org — exist but are restricted to the Knowladex global admin.

list_projects

viewer+

List all knowledge base projects in an organization.

Parameters

  • org (string): Org slug. Defaults to the token's bound org.

create_project

org_admin

Create a new knowledge base project inside an organization. Regenerates the dashboard.

Parameters

  • org (string): Org slug. Defaults to the token's bound org.
  • name* (string): Human-readable project name. 1–500 chars.
  • slug (string): URL-safe identifier. Auto-derived from name if omitted.
  • description (string): Brief description. Max 5,000 chars.

delete_project

org_admin

Permanently delete a project and all its raw docs, articles, and logs. Destructive.

Parameters

  • org (string): Org slug.
  • slug* (string): Project slug to delete.
  • confirm* (boolean): Must be true or the call is rejected.

list_orgs

any

List organizations the caller has access to. For org-scoped tokens, returns only the bound org.

No parameters.

create_client_token

org_admin

Create a legacy guest share token (read-only or editor). For real per-user access, use create_org_user + create_api_token instead.

Parameters

  • org* (string): Org slug.
  • name* (string): Human label for this token.
  • role (enum): viewer (default) or editor.

list_client_tokens

org_admin

List legacy guest share tokens for an org.

Parameters

  • org* (string): Org slug.

revoke_client_token

org_admin

Invalidate a guest share token immediately.

Parameters

  • org* (string): Org slug.
  • token* (string): The full token value to revoke.

§ 09 Tool reference — ingest

ingest_document

editor+

Add a raw document to a project's knowledge base. The document is stored as-is for later compilation. Regenerates the dashboard.

Parameters

  • org (string): Org slug. Defaults to the token's bound org.
  • project* (string): Project slug.
  • title* (string): Document title. 1–500 chars.
  • content* (string): Markdown body. Max ~5MB.
  • sourceUrl (URL string): Original source URL, if any.
  • tags (string[]): Up to 30 tags, 1–64 chars each.
  • sourceType (enum): article · paper · repo · dataset · note · other
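The stated limits can be checked before calling. A hypothetical pre-flight validator — the server enforces the same limits and remains the source of truth:

```python
MAX_CONTENT = 5_000_000  # the ~5MB ceiling noted above

def validate_ingest(args: dict) -> list[str]:
    """Return a list of limit violations for an ingest_document call."""
    errors = []
    if not 1 <= len(args.get("title", "")) <= 500:
        errors.append("title must be 1-500 chars")
    if len(args.get("content", "")) > MAX_CONTENT:
        errors.append("content exceeds ~5MB")
    tags = args.get("tags", [])
    if len(tags) > 30:
        errors.append("at most 30 tags")
    if any(not 1 <= len(t) <= 64 for t in tags):
        errors.append("each tag must be 1-64 chars")
    return errors
```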

list_raw_documents

viewer+

List all raw documents in a project. Optional filters narrow by tag or ingest date.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.
  • tag (string): Filter by tag.
  • since (ISO 8601): Only docs ingested after this timestamp.

§ 10 Tool reference — wiki

write_article

editor+

Create or update a wiki article. This is the primary write path for compile output.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.
  • slug* (string): Article slug (kebab-case).
  • title* (string): Human title. 1–500 chars.
  • content* (string): Markdown body. Use [[slug]] for wikilinks. Max ~5MB.
  • tags (string[]): Topic tags. Up to 30.
  • sourceDocuments (string[]): IDs of raw docs this article cites. Up to 200.

read_article

viewer+

Read one wiki article by slug. Returns full content, frontmatter, wikilinks, and the backlink graph for that article.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.
  • slug* (string): Article slug.

list_articles

viewer+

List all articles in a project, optionally filtered by tag. Returns slugs, titles, tags, modified timestamps, and link counts (no full content).

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.
  • tag (string): Filter by tag.

get_index

viewer+

Get the project's master index / table of contents — both as rendered markdown and as a structured JSON tree.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.

§ 12 Tool reference — compile

get_compile_status

viewer+

Check what raw docs are new, changed, or deleted since the last compile. Read-only — doesn't modify anything.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.

compile

editor+

The big one. Returns raw documents (new/changed by default, all if fullRecompile), the current wiki state, and per-project compile instructions, all bundled for the calling LLM to process. The LLM then calls write_article for each output article.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.
  • fullRecompile (boolean): If true, returns ALL raw documents instead of just changes. Default false.

§ 13 Tool reference — maintenance

lint_wiki

viewer+

Run health checks on a project's wiki. Reports orphaned articles, broken wikilinks, missing frontmatter, stale articles, and suggested cross-references.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.

get_schema

viewer+

Read the project's schema.md. The schema defines wiki conventions and per-project compile instructions.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.

update_schema

editor+

Replace the project's schema.md. Free-form markdown — instructions for an LLM, not a parser.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.
  • content* (string): Full markdown body. Max ~5MB.

get_log

viewer+

Read the project's chronological activity log — every ingest, compile, write, and delete.

Parameters

  • org (string): Org slug.
  • project* (string): Project slug.
  • limit (integer): Max entries. Range 1–1,000.

§ 14 Tool reference — user & tokens

User and token management is admin-only. End users manage their own profile, password, and tokens through the My account page in the dashboard — no MCP call required.

create_org_user

org_admin

Create a new user account in an organization. The user can then sign in at /c/<org>/login.

Parameters

  • org* (string): Org slug.
  • email* (string): Login email. Must be unique within the org.
  • password* (string): Initial password. Min 8 chars.
  • role* (enum): org_admin · editor · viewer

list_org_users

org_admin

List all users in an org. Password hashes are never returned.

Parameters

  • org* (string): Org slug.

set_org_user_password

org_admin

Reset a user's password. Drops their active sessions immediately.

Parameters

  • org* (string): Org slug.
  • userId* (string): User ID (from list_org_users).
  • newPassword* (string): New password. Min 8 chars.

set_org_user_role

org_admin

Change a user's role. Drops their active sessions so the new role takes effect immediately.

Parameters

  • org* (string): Org slug.
  • userId* (string): User ID.
  • role* (enum): org_admin · editor · viewer

delete_org_user

org_admin

Permanently delete a user account. Drops their sessions.

Parameters

  • org* (string): Org slug.
  • userId* (string): User ID.
  • confirm* (boolean): Must be true.

create_api_token

org_admin

Mint a Bearer API token for a user. Returns a kbm.<org>.<random> string. Token is only shown once at creation — save it.

Parameters

  • org* (string): Org slug.
  • userId* (string): User ID to issue the token for.
  • name* (string): Human label (e.g. "Claude Code on laptop").

list_api_tokens

org_admin

List a user's API tokens. Returns metadata only — full token values are masked.

Parameters

  • org* (string): Org slug.
  • userId* (string): User ID.

revoke_api_token

org_admin

Invalidate an API token immediately.

Parameters

  • org* (string): Org slug.
  • userId* (string): User ID.
  • token* (string): Full token to revoke.

§ 15 Permissions matrix

Quick reference: which role can call which tool. Roles are additive — editor can do everything viewer can, and org_admin can do everything editor can.

Tool                    viewer  editor  org_admin
list_projects             ✓       ✓       ✓
list_orgs                 ✓       ✓       ✓
list_raw_documents        ✓       ✓       ✓
read_article              ✓       ✓       ✓
list_articles             ✓       ✓       ✓
get_index                 ✓       ✓       ✓
search                    ✓       ✓       ✓
get_compile_status        ✓       ✓       ✓
lint_wiki                 ✓       ✓       ✓
get_schema                ✓       ✓       ✓
get_log                   ✓       ✓       ✓
ingest_document           -       ✓       ✓
write_article             -       ✓       ✓
compile                   -       ✓       ✓
update_schema             -       ✓       ✓
create_project            -       -       ✓
delete_project            -       -       ✓
create_org_user           -       -       ✓
list_org_users            -       -       ✓
set_org_user_password     -       -       ✓
set_org_user_role         -       -       ✓
delete_org_user           -       -       ✓
create_api_token          -       -       ✓
list_api_tokens           -       -       ✓
revoke_api_token          -       -       ✓
create_client_token       -       -       ✓
list_client_tokens        -       -       ✓
revoke_client_token       -       -       ✓
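Because roles are strictly additive, the entire permissions check reduces to an ordering comparison. A minimal sketch:

```python
# Role ranks: each role includes everything below it.
RANK = {"viewer": 0, "editor": 1, "org_admin": 2}

def allowed(user_role: str, required_role: str) -> bool:
    """True if user_role meets or exceeds the tool's required role."""
    return RANK[user_role] >= RANK[required_role]
```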

§ 16 FAQ

Does Knowladex make LLM calls on my behalf?

No. The compile work runs in your AI tool, on your Anthropic (or other) bill. Knowladex stores raw documents and wiki articles, exposes MCP tools, and serves the dashboard — but it never calls an LLM itself. That's the whole point of the architecture.

What happens if I lose my API token?

Mint a new one from My account → Create API token, then revoke the old one. Tokens are cheap and disposable. The dashboard never shows old tokens in plaintext after creation, so you can't recover one — only replace it.

Can two users in the same org work on the same project at the same time?

Yes. Knowladex uses a per-project mutex around writes, so concurrent write_article calls are serialized within a single project. Reads are unrestricted. Two editors writing different articles in the same project will not conflict.

How big can a single raw doc be?

The hard ceiling is ~5MB per document (5,000,000 characters). For most use cases that's effectively unlimited, but for very large transcripts or PDFs you may want to chunk on ingest.
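One way to chunk on ingest is to split on paragraph boundaries so each piece stays under the ceiling. A hedged sketch:

```python
def chunk_markdown(text: str, limit: int = 5_000_000) -> list[str]:
    """Split markdown into chunks of at most `limit` characters,
    breaking on paragraph (blank-line) boundaries where possible."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = para if not current else current + "\n\n" + para
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single paragraph bigger than the limit is hard-split.
            while len(para) > limit:
                chunks.append(para[:limit])
                para = para[limit:]
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then goes into its own ingest_document call, ideally with a shared tag so the compile step can reassemble context.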

Can I self-host?

Yes — the codebase is on GitHub and the repo includes a fly.toml for one-command Fly deploys. You'll need to bring your own object storage / disk and configure an admin token. Knowladex offers managed hosting with multi-tenant primitives, custom subdomains per client, and white-glove onboarding — see the pricing page.

Does Knowladex support SSO?

Not in the default build. SSO (SAML / OIDC) is available on the Agency plan. Talk to us if you need it.

Where's my data physically stored?

Hosted: a Fly.io persistent volume in the region your workspace was provisioned in (default: iad). Backups are taken nightly. Self-hosted: wherever you point the storage backend — local disk by default.

What if I want to leave?

Every raw doc, every wiki article, and every schema file is plain markdown on disk. Ask us for a tarball and you have your data, no lock-in. Knowladex's value is the compile loop, the multi-tenant primitives, and the MCP server — not hostage-holding.

How do I get help?

Sign in to the dashboard and open My account → Get help from a human. Or email hello@knowladex.com. Real people respond, typically within one business day.