Deterministic · Structured artifacts

Your prompt
has a schema.

intent-compiler turns Markdown protocols into deterministic artifacts — typed, validated, versioned. No more prompt drift.
Write protocol. Compile to behavior.

Get started → See how it works
$ intent lint protocols

// the problem

Two kinds of wrong.

There are two broken approaches to prompts in production today. Neither scales. Neither is maintainable. Both make debugging a nightmare.

Hardcoded
in the logic

The prompt is buried inside Python functions, concatenated with f-strings, mixed with business logic.

def analyze(code):
    prompt = f"You are an expert..."
    prompt += f"Analyze: {code}"
    # 400 lines later...
    return llm.call(prompt)

Generated
without structure

The prompt is thrown at the AI ad-hoc, with no validation, no output schema, no versioning.

# slack message to the team
"hey the AI is acting weird"

# the 'protocol'
"gpt make me an analysis"
"return something useful"

# production. real.

// live demo

Markdown in. Artifact out.

Write your protocol on the left. Watch the resolver emit a structured artifact on the right — in real time.


// the full picture

From schema to
everything else.

Define once. Generate everything. Deterministic prompts mean predictable AI behavior — no more "it worked yesterday" excuses.

📋
1. Protocol
# protocol.md
version: 1.0.0
model: claude-3
schema:
  username: string
  email: email
  age: int 18-120
// generate test data
$ intent generate p.md --mock
{
  "username": "john_42",
  "email": "john@test.io",
  "age": 28
}
// generate html form
$ intent generate p.md --ui
<form>
  <input name="username"/>
  <input name="email" type="email"/>
  <input name="age" type="number"/>
</form>
// validate llm output
$ intent lint output.json
✓ valid
schema: matches
types: correct
constraints: ok
// semantic versioning
# breaking change? bump major
version: 2.0.0
# old: username string
# new: user_id string + username
# protocol.md --1.0.0
# protocol.md --2.0.0
// gate every pr
$ intent lint protocols/
✓ 12/12 valid
exit 0 → merge allowed
exit 1 → pr blocked
Type-safe slots
Injecting the wrong type fails at parse time, not at runtime
Schema validation
LLM output validated against JSON Schema Draft 7
Semver contracts
Breaking changes are explicit. No silent breakage.
Reproducible mocks
Same seed = same data. Deterministic tests every time.
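The determinism behind reproducible mocks can be sketched in plain Python: seed a generator, derive every value from it, and identical seeds yield identical fixtures. This is an illustration of the principle only, not intent-compiler's internals; `mock_from_schema` and its two supported field types are hypothetical.

```python
import random
import string

def mock_from_schema(schema, seed):
    """Generate mock data from a field -> type mapping, deterministically."""
    rng = random.Random(seed)  # same seed -> same sequence -> same fixtures
    out = {}
    for field, ftype in schema.items():
        if ftype == "string":
            out[field] = "".join(rng.choices(string.ascii_lowercase, k=8))
        elif ftype == "int":
            out[field] = rng.randint(0, 100)
    return out

schema = {"username": "string", "age": "int"}
# identical seeds produce byte-identical test data
assert mock_from_schema(schema, seed=42) == mock_from_schema(schema, seed=42)
```

Because every value flows from the seeded generator, a test fixture never drifts between CI runs.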
$ pip install intent-compiler

// the solution

Prompts are infrastructure.
Treat them like code.

intent-compiler compiles Markdown protocols into typed, validated artifacts. Every prompt becomes a deterministic contract — with a schema, slot types, and CI-enforced rules. Version it in Git. Lint it before merge. Ship with confidence.
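The compile step can be pictured as a small parsing pass: split the Markdown frontmatter from the body and load it with PyYAML, one of the MVP's declared dependencies. A minimal sketch under stated assumptions, not the actual compiler; `compile_protocol` and the artifact's dict shape are illustrative.

```python
import yaml  # PyYAML, a declared MVP dependency

def compile_protocol(md_text):
    """Illustrative sketch: frontmatter in, plain-dict artifact out."""
    # split the leading "---" frontmatter block from the Markdown body
    _, frontmatter, body = md_text.split("---", 2)
    meta = yaml.safe_load(frontmatter)
    return {"version": meta["version"], "model": meta["model"], "body": body.strip()}

artifact = compile_protocol("""---
version: 1.0.0
model: claude-3
---
## Context
You are a sentiment analyst.""")
```

The real tool adds slot typing, schema checks, and versioned output on top of this basic split-and-parse idea.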

"If your Dockerfile defines what runs, your protocol.md defines how your model thinks. One governs the container. The other governs the cognition."
intent-compiler manifesto
sentiment.md
Version: 1.0.0
Model: anthropic/claude-sonnet-4

## Context
You are a sentiment analyst.

## Slots
{{text}} string — text to analyze
{{lang}} string — en | pt | es

## Schema
{
  "sentiment": "positive | neutral | negative",
  "confidence": "0.0..1.0",
  "reasoning": "string"
}

// mvp status

Deterministic by design.
CI-native from day one.

intent-compiler ships parser + validator + CLI + tests. Run checks locally. Gate every PR. Know exactly what your model will do — before it runs.

CLI
$ intent lint protocols --format compact
✗ invalid_protocol.md  2e 0w
✓ valid_protocol.md    0e 0w
1/2 valid · exit 1 blocks CI
intent resolve
$ intent resolve protocol.md
{
  "sentiment": "positive",
  "confidence": 0.92,
  "reasoning": "string"
}
intent generate --mock
# Generate mock API data from JSON Schema
$ intent generate protocol.md --mock --count 3
{
  "mocks": [
    { "summary": "sample" },
    { "summary": "sa" },
    { "summary": "sample" }
  ]
}
# Supports: string, number, integer, boolean, array, object
# Formats: date-time, email, uri, uuid, ipv4, base64
intent-compiler in CI
# .github/workflows/quality.yml
name: Quality Check
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install pyyaml jsonschema
      - run: intent lint protocols/ --strict

// why it matters

Governance for the
cognitive layer.


Type-safe injection

Slots are declared with types. Inject the wrong type or a missing slot and the build breaks. Same discipline as typed code.
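That fail-fast discipline can be sketched with nothing but the standard library: check every declared slot's type before the template is ever filled. `SLOTS` and `render` are hypothetical names for illustration, not intent-compiler's API.

```python
from string import Template

# Hypothetical slot table: slot name -> expected Python type
SLOTS = {"text": str, "lang": str}

def render(template, **values):
    """Reject a missing or wrong-type slot before anything reaches the model."""
    for name, expected in SLOTS.items():
        if name not in values:
            raise TypeError(f"missing slot: {name}")
        if not isinstance(values[name], expected):
            raise TypeError(f"slot {name!r} expects {expected.__name__}")
    return Template(template).substitute(values)

prompt = render("Analyze ($lang): $text", text="great product", lang="en")
```

Passing `text=123` raises a `TypeError` immediately, exactly like a type error in compiled code.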


Semantic versioning

Every protocol has a semver. Breaking changes to slots or schema are major bumps. Your app pins the version.


Schema validation

Output is validated against the declared schema before it reaches your application. No more surprise shapes in production.
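Since the MVP's declared dependencies include `jsonschema`, the validation gate might look roughly like this: declare the output contract as JSON Schema Draft 7 and reject any response that does not match. The schema mirrors the sentiment protocol above; the surrounding error handling is an illustrative assumption.

```python
from jsonschema import validate, ValidationError

# Output contract for the sentiment protocol, as JSON Schema Draft 7
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {
        "sentiment": {"enum": ["positive", "neutral", "negative"]},
        "confidence": {"type": "number", "minimum": 0.0, "maximum": 1.0},
        "reasoning": {"type": "string"},
    },
    "required": ["sentiment", "confidence", "reasoning"],
}

llm_output = {"sentiment": "positive", "confidence": 0.92, "reasoning": "upbeat wording"}
try:
    validate(instance=llm_output, schema=schema)  # raises on any surprise shape
except ValidationError as err:
    raise SystemExit(f"protocol violation: {err.message}")
```

A response with a typo'd enum value or an out-of-range confidence never makes it past this line into your application.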


Composable protocols

Protocols can import and extend each other. Build a library of reusable cognitive contracts like Lego.


Model-agnostic

Declare the model in the protocol. Switch from Claude to GPT to Gemini by changing one line. Your code stays the same.


Human-readable

It's Markdown. Non-engineers can read, review, and understand what the AI is instructed to do. PRs are meaningful.


// get started

From zero to
first protocol.

Four steps. No magic. Just structure you should have had from the start.

step 01

Install MVP dependencies

The MVP runs directly from the repository. Install runtime dependencies and use the local CLI.

$ pip install pyyaml jsonschema
step 02

Write your first protocol

Create a protocols/ directory and add a `.md` protocol with frontmatter, slots, constraints and schema.

$ cp protocols/valid_protocol.md protocols/my_first.md
step 03

Lint before you commit

Run `intent lint` locally and in CI. If slots are mismatched or the schema is missing, the pipeline fails with exit code 1.

$ intent lint protocols
$ pytest tests/ -v
step 04

Gate merges with quality workflow

Use GitHub Actions to run syntax checks, tests and CLI smoke checks on every push/PR to `main`.

$ git add .
$ git push origin main
# quality-check.yml runs automatically

Stop writing prompts.
Start writing protocols.

Your model's behavior deserves the same discipline as your code. Version it. Lint it. Compile it.

View on GitHub → Read the docs