ContextBridge

Human oversight for AI coding

Human-in-the-loop annotation for AI coding sessions.

$ brew install contextbridge/tap/cli

# Pipe a plan to ContextBridge
$ cat plan.md | contextbridge plan
✓ Opening browser for review…
✓ Human approved with 2 comments
{
  "status": "approved",
  "comments": […]
}

Up and running in seconds

Install the CLI, pipe in a plan, and a human annotates it in their browser. The result flows back on stdout.

1. Install

One command via Homebrew (or npm).
$ brew install contextbridge/tap/cli

2. Run a plan review

Pipe a plan file into the CLI. It opens a local browser UI where a human can approve, request changes, or annotate line-by-line. The structured result is emitted on stdout — ready for your agent harness to consume.
$ cat my-plan.md | contextbridge plan
✓ Opening browser for review…
✓ Human approved with 2 comments
{
  "status": "approved",
  "comments": [
    { "line": 12, "body": "Consider extracting this into a helper." },
    { "line": 28, "body": "This needs a test." }
  ]
}
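Downstream, that payload can drive the harness's next step. A minimal shell sketch, assuming `jq` is available, with the sample result above inlined so it runs standalone:

```shell
# Sample result as emitted on stdout by `contextbridge plan`,
# inlined here so the snippet is self-contained.
result='{"status":"approved","comments":[{"line":12,"body":"Consider extracting this into a helper."},{"line":28,"body":"This needs a test."}]}'

# Branch on the human's verdict.
status=$(printf '%s' "$result" | jq -r '.status')
if [ "$status" = "approved" ]; then
  echo "Plan approved: applying it."
else
  echo "Changes requested: revising."
fi
```

In a real pipeline the `result` variable would simply be `result=$(cat my-plan.md | contextbridge plan)`.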

Designed for agent workflows

Works with any AI coding agent that can shell out to a CLI and read its stdout.

Harness-agnostic

Claude Code, Codex, or any other harness can invoke the CLI as a subprocess and consume the structured result from its stdout.
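The hook an agent harness needs is small. In the sketch below, a stub stands in for the real CLI so it runs on its own (the installed `contextbridge` would block until the human submits in the browser), and the "changes_requested" status value is illustrative:

```shell
# Stub standing in for the real CLI; the "changes_requested" status
# value is illustrative, not a documented enum.
contextbridge() {
  cat > /dev/null   # consume the piped-in plan
  echo '{"status":"changes_requested","comments":[]}'
}

# Shell out, read stdout, and gate the next agent step on the verdict
# (assumes `jq` is installed).
result=$(echo "draft plan" | contextbridge plan)
if [ "$(printf '%s' "$result" | jq -r '.status')" = "approved" ]; then
  decision="continue"
else
  decision="revise"
fi
echo "$decision"
```

Here the stub reports changes requested, so the harness would loop back and revise instead of proceeding.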

Local-only

No remote backend. The CLI spawns an ephemeral local server, opens your browser, and shuts down when you submit.

Structured output

Approval status, inline comments, and change requests come back as a single typed JSON payload on stdout, so there is no log-scraping or brittle text parsing.
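As a sketch of what that buys you (again assuming `jq`), the inline comments from the sample payload shown earlier can be listed with a single query:

```shell
# The sample payload from the walkthrough above, inlined.
result='{"status":"approved","comments":[{"line":12,"body":"Consider extracting this into a helper."},{"line":28,"body":"This needs a test."}]}'

# One line per inline comment: "line <n>: <body>".
printf '%s' "$result" | jq -r '.comments[] | "line \(.line): \(.body)"'
```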

Ready to get started?

Install ContextBridge and bring human oversight into your AI coding workflow.