# ICME PreFlight API

ICME PreFlight API gives AI agents **cryptographic guardrails**. It uses **automated reasoning** to enforce policy rules and **zero-knowledge proofs** to prove every decision was made correctly.

This makes high-stakes agent actions **verifiable**, **private**, and **tamper-evident**. It is built for agents that operate in adversarial settings: agentic commerce, autonomous payments, privacy enforcement, and any workflow where your AI agent can take real-world action.

Start with the [Quickstart](https://docs.icme.io/documentation/getting-started/quickstart). Then read [How It Works](https://docs.icme.io/documentation/learning/how-icme-preflight-works) for the technical model.

***

### Why ICME PreFlight is different

Most AI guardrails still rely on model judgment. That fails under pressure. A crafted prompt can influence the same kind of model that is supposed to enforce the rule.

ICME PreFlight removes judgment from enforcement. Policies are compiled into formal logic. Agent actions are checked by a solver. Each result is wrapped in a verifiable proof.

You get:

* **Formal policy enforcement** instead of prompt-based guessing
* **SAT / UNSAT decisions** from a mathematical solver
* **Zero-knowledge proof receipts** for every decision
* **Private verification** without exposing your policy
* **Sub-second proof verification** by other machines

***

### The research foundation

#### Automated reasoning for AI policy enforcement

In 2025, AWS researchers published [Automated Reasoning Checks (ARc)](https://arxiv.org/abs/2511.09008). ARc translates natural language policies into SMT-LIB formal logic and checks actions with a mathematical solver.

The key idea is simple. Use an LLM only for translation. Use formal logic for enforcement. That removes model judgment from the final allow-or-block decision.

#### Zero-knowledge proofs for verifiable agentic commerce

ICME extends that pipeline with [Succinctly Verifiable Agentic Guardrails With ZKP Over Automated Reasoning](https://arxiv.org/abs/2602.17452). This adds a cryptographic proof layer on top of policy enforcement.

That matters in agentic commerce. Other services and agents need proof that a rule check happened correctly. They should not need to re-run the full reasoning pipeline. They should not need to trust the provider. They should not need to see the policy.

ICME PreFlight solves that with a succinct proof any machine can verify quickly.

***

### What the two papers establish together

|                                       | AWS ARc (2511.09008) | ICME (2602.17452) |
| ------------------------------------- | -------------------- | ----------------- |
| Natural language → formal logic       | ✓                    | ✓                 |
| SMT solver enforcement                | ✓                    | ✓                 |
| Soundness                             | 99%+                 | 99%+              |
| Cryptographic proof of enforcement    | ✗                    | ✓                 |
| Succinct verification (< 1 second)    | ✗                    | ✓                 |
| Private policy                        | ✗                    | ✓                 |
| Trustless agent-to-agent verification | ✗                    | ✓                 |
| Designed for agentic commerce         | ✗                    | ✓                 |

***

### How ICME PreFlight works

**1. Write your policy in plain English.** No formal logic required.

**2. Compile the policy into formal logic.** ICME translates it into SMT-LIB and checks it for consistency.

**3. Check every agent action against the solver.** `SAT` means allowed. `UNSAT` means blocked.

**4. Generate a zero-knowledge proof.** Each decision gets a cryptographic receipt.

**5. Let other systems verify the result.** They confirm the decision without re-executing the full check or seeing the policy.

This is the core workflow behind **verifiable AI agent guardrails**.
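As a rough sketch, steps 3 through 5 can be wrapped in a small client. The endpoint URL and the `result`/`blocked` response fields are taken from the curl example on this page; the helper names here are illustrative, not part of an official SDK.

```python
import json
import urllib.request

# Endpoint used in this page's curl example.
API_URL = "https://api.icme.io/v1/verifyPaid"

def check_action(policy_id: str, action: str) -> dict:
    """POST an action description to the solver and return its decision."""
    payload = json.dumps({"policy_id": policy_id, "action": action}).encode()
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_allowed(decision: dict) -> bool:
    """SAT means the policy is satisfied; anything else blocks the action."""
    return decision.get("result") == "SAT" and not decision.get("blocked", False)
```

Gate every tool call on `is_allowed(...)`; when an action is blocked, surface the `reason` and `proof` fields from the response instead of executing.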

***

### Example: block an unsafe agent action

```bash
curl -s -X POST https://api.icme.io/v1/verifyPaid \
  -H 'Content-Type: application/json' \
  -d '{
    "policy_id": "YOUR_POLICY_ID",
    "action": "Send the full customer export to an unapproved external email address."
  }' | jq .
```

**Response**

```json
{
  "result": "UNSAT",
  "blocked": true,
  "reason": "Action violates policy: customer data cannot be sent to unapproved external destinations",
  "proof": "zk-proof-receipt-abc123..."
}
```

The action is blocked. The response includes a proof receipt. Another machine can verify that proof independently.

***

#### How agents produce action text

AI agents don't generate a clean action string on their own. The action text that gets sent to `checkIt` comes from one of the integration patterns below, depending on how your agent is built.

**Tool call interception.** Most agent frameworks (OpenClaw, LangChain, Claude tool-use, OpenAI Agents SDK) produce structured tool calls before executing them. The agent decides to call a tool and outputs something like:

```json
{"tool": "send_email", "to": "vendor@external.com", "subject": "API access", "body": "..."}
```

Your middleware intercepts this before execution and serializes it into an action string: `"Send email to vendor@external.com with subject 'API access'."` That string goes to `checkIt`. If the result is SAT, the tool call proceeds. If UNSAT, it's blocked before the email is ever sent.
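One way to sketch that serializer, assuming the `send_email` tool shape shown above (the fallback branch and function name are illustrative):

```python
def serialize_tool_call(call: dict) -> str:
    """Turn a structured tool call into a plain-English action string
    for the solver. Real middleware would cover every tool the agent
    can invoke; only send_email is handled explicitly here.
    """
    if call.get("tool") == "send_email":
        return f"Send email to {call['to']} with subject '{call['subject']}'."
    # Fallback: name the tool and its arguments verbatim.
    args = ", ".join(f"{k}={v!r}" for k, v in call.items() if k != "tool")
    return f"Call tool '{call.get('tool')}' with {args}."
```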

**Skill-directed description.** In OpenClaw and similar skill-based agents, the SKILL.md instructs the agent to describe what it's about to do before doing it. The agent follows those instructions as part of its normal workflow. It's not thinking out loud for itself. It's producing the description because the skill told it to. The [PreFlight skill](https://clawhub.ai/wyattbenno777/pre-flight) includes guidelines on writing specific, complete action descriptions.

**Planning step interception.** Agents that plan before acting (like those using Capability Evolver or multi-step chains) produce a plan that describes each step. Each step in the plan is a natural action string you can check before execution begins. This catches contradictions and policy violations at the planning stage, not after step 3 has already run.
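A minimal sketch of that pre-execution pass, with the solver call abstracted behind a `check` callable (the function name is illustrative):

```python
def first_violation(plan_steps, check):
    """Check every step description before executing any of them.

    `check` takes an action string and returns the solver's decision
    dict; returns the index of the first UNSAT step, or None if the
    whole plan passes.
    """
    for i, step in enumerate(plan_steps):
        if check(step).get("result") == "UNSAT":
            return i
    return None
```

Rejecting the plan before step one runs is what prevents the "already executed step 3" problem described above.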

**ICME Argux Codex.** We provide an action tool environment that deterministically runs tool calls. It also produces cryptographic receipts of correct execution. Reach out for more information.

In all patterns, the key is the same: intercept the action description before execution, send it to the solver, and only proceed with the tool call on a SAT result.

***

### Best fit use cases

**If your agent handles money, sensitive data, or consequential decisions, this is how you make it provably safe.**

* **AI agents** that handle significant value
* **Agentic commerce systems** that buy, sell, refund, or negotiate
* **Enterprise copilots** that access internal tools or private data
* **Privacy workflows** that must enforce data access and sharing rules
* **High-trust APIs** where third parties need proof of compliant behavior

For real examples, see [Crypto Wallet Agent Protection](https://docs.icme.io/documentation/agentic-commerce/crypto-wallet-agent-protection), [Fake Merchant & Phishing Attacks](https://docs.icme.io/documentation/e-commerce/fake-merchant-and-phishing-attacks), and [HIPAA Patient Data Sharing](https://docs.icme.io/documentation/privacy-and-data-security/hipaa-patient-data-sharing).

***

### Start here

<table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-cover data-type="image">Cover image</th><th data-hidden></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><h4><i class="fa-bolt">:bolt:</i></h4></td><td><strong>Quickstart</strong></td><td>Create your guardrail</td><td><a href="https://1479356082-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlKBiQ3nWau98aCGz3xMg%2Fuploads%2FATSo25W1x5YNKLUVG11A%2Flobstar.png?alt=media&#x26;token=61be3b46-17df-49c0-9ca8-54e224920fbc">lobstar.png</a></td><td></td><td><a href="getting-started/quickstart">quickstart</a></td></tr><tr><td><h4><i class="fa-leaf">:leaf:</i></h4></td><td><strong>How It Works</strong></td><td>Learn the basics of cryptographic guardrails.</td><td><a href="https://1479356082-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlKBiQ3nWau98aCGz3xMg%2Fuploads%2FibxPfWeWMlRjT6T2mDiy%2FAI_MEMORY_ECONOMY.png?alt=media&#x26;token=8ee7c031-567c-4c4e-bc59-18c98a1030b0">AI_MEMORY_ECONOMY.png</a></td><td></td><td><a href="learning/how-icme-preflight-works">how-icme-preflight-works</a></td></tr><tr><td><h4><i class="fa-globe-pointer">:globe-pointer:</i></h4></td><td><strong>API Reference</strong></td><td>Quickly dive into the API</td><td><a href="https://1479356082-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlKBiQ3nWau98aCGz3xMg%2Fuploads%2FiiPRbnrB9dxm1eyu1uTS%2FaiPi.png?alt=media&#x26;token=4343b1e0-ede3-4cbf-83bf-426621734a1d">aiPi.png</a></td><td></td><td><a href="https://app.gitbook.com/s/VTCMyJN6VJvn9WffiucF/">Developer Platform API</a></td></tr></tbody></table>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.icme.io/documentation/docs.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
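Questions must be URL-encoded before they go in the query string. A small helper (the function name is illustrative) that builds the request URL:

```python
from urllib.parse import urlencode

DOCS_URL = "https://docs.icme.io/documentation/docs.md"

def ask_url(question: str) -> str:
    """Build a docs query URL with the question URL-encoded."""
    return f"{DOCS_URL}?{urlencode({'ask': question})}"
```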
