# Cut your agent observability costs and make every trace auditor-proof

AI agent observability is getting expensive fast. [Vendors charge per GB ingested](https://byteiota.com/opentelemetry-observability-costs-2026-can-it-save-you-from-crisis/), per host monitored, per feature enabled. Datadog bills can [explode 100x over initial budgets](https://byteiota.com/opentelemetry-observability-costs-2026-can-it-save-you-from-crisis/). The [Grafana Observability Survey](https://grafana.com/observability-survey/2025/) found that 49% of OpenTelemetry users in production cite cost as a top concern, with scalability close behind at 44%.

Meanwhile, your agent is running thousands of actions a day, and your OTel pipeline captures all of them with the same trace depth. Formatting text gets the same span as transferring funds. Reading a config file gets the same treatment as sending customer data to an external API. You are paying to store telemetry for actions that carry zero compliance risk and will never be audited.

And when the audit does come, those traces won't help you anyway. OTel traces are logs in a database you control. Anyone with write access can edit them. A regulator can't tell a real trace from one you modified last Tuesday.

This guide shows how to solve both problems with ICME PreFlight: classify which agent actions matter (for free), enforce policy on the ones that do ($0.01 each), and attach a cryptographic proof to every decision. You stop paying to trace actions nobody cares about, and the traces you do keep become verifiable evidence.

### Why OTel alone isn't enough for agent compliance

#### Mutable logs are not audit trails

Every compliance framework governing AI agents requires tamper-evident records. [HIPAA §164.312(b)](https://www.kiteworks.com/regulatory-compliance/ai-agent-audit-trail-siem-integration/) requires mechanisms to record and examine activity on systems containing PHI. The SEC requires attributable records of advisory activities. The EU AI Act requires technical documentation and traceability.

OTel traces stored in Jaeger, Grafana Tempo, or a vendor backend don't meet that bar. Organizations are building [Merkle tree hash chains, digital signatures, and WORM storage](https://blog.dreamfactory.com/why-audit-logs-matter-ai-governance) to make their logs tamper-evident. That's a lot of infrastructure to solve a problem cryptographic proofs solve natively.

#### Regulators aren't waiting

[33% of organizations](https://www.mintmcp.com/blog/ai-agent-security) lack audit trails for their AI agent activity. Only [14.4%](https://venturebeat.com/security/ai-agent-zero-trust-architecture-audit-credential-isolation-anthropic-nvidia-nemoclaw) reported full security approval for their entire agent fleet. The SEC and OCC are [actively examining](https://galileo.ai/blog/ai-agent-compliance-governance-audit-trails-risk-management) AI governance. In financial services, missing traces are treated as a books-and-records violation.

The question has shifted from "who did what?" to "can you prove it?" Proving it means the auditor doesn't have to trust your infrastructure to verify your claims.

#### Traces show what happened, not whether it was allowed

OTel tells you a function was called with these arguments at this timestamp. It doesn't tell you whether the agent was permitted to do it, which policy was evaluated, or what the decision was. When an [incident turns into a blame game](https://galileo.ai/blog/ai-agent-compliance-governance-audit-trails-risk-management), you need the enforcement decision, not just the execution trace.

### How PreFlight plugs into your OpenTelemetry pipeline

PreFlight adds three things no processor, exporter, or backend can provide: free action classification, deterministic policy enforcement, and a cryptographic proof on every decision.

```
Agent proposes action
        ↓
  checkRelevance (free, <300ms)
        ↓
┌─────────────────────────────┬───────────────────────────────────────┐
│ should_check: false         │ should_check: true                    │
│                             │                                       │
│ Action is policy-irrelevant │ Action touches policy variables       │
│ No compliance risk          │ Requires enforcement                  │
│                             │                                       │
│ Lightweight OTel span:      │ Full OTel span:                       │
│ • Minimal attributes        │ • checkIt result (SAT / UNSAT)        │
│ • Low storage cost          │ • check_id (audit receipt)            │
│ • Never audited             │ • zk_proof_id (cryptographic proof)   │
│                             │ • matched_variables (policy context)  │
│                             │ • extracted values from solver        │
│                             │                                       │
│                             │ UNSAT → action blocked before execute │
│                             │ SAT → action proceeds                 │
└─────────────────────────────┴───────────────────────────────────────┘
        ↓
  OTel Collector → Backend
```

**`checkRelevance` (free)** classifies whether an action touches any of your policy variables: data access scope, transmission endpoints, retention duration, transaction amounts. Actions that match nothing get a lightweight span. Actions that match get full enforcement and tracing. You stop paying to store telemetry for actions that will never be questioned.

**`checkIt` ($0.01)** checks the action against a formally verified policy using an SMT solver, not an LLM judge. The result is deterministic: same input, same output, every time. `SAT` means proceed. `UNSAT` means blocked before execution. The solver can't be prompt-injected or socially engineered.

**The ZK proof** is generated on every `checkIt` call. Anyone can verify it in under one second, without re-running the computation, without trusting your infrastructure, and without seeing your policy. The proof is the audit trail.
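
For orientation, the two responses have roughly this shape, as consumed by the integration code later in this guide (values are illustrative; the field names come from that code):

```
checkRelevance → {"should_check": true, "relevance": 0.92, "matched": ["numberOfEmailsAccessed"]}
checkIt        → {"result": "UNSAT", "check_id": "c9d0...", "zk_proof_id": "f3a4...", "detail": "Data transmitted to unapproved endpoint"}
```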

#### What your OpenTelemetry backend looks like after

Benign action (formatting, summarizing, file reading):

```
Span: agent.action
  icme.relevant: false
  duration_ms: 45
```

Minimal. Cheap. Nobody will ever query this span for compliance.

Policy-checked action that passed:

```
Span: agent.action.policy_checked
  icme.relevant: true
  icme.result: SAT
  icme.check_id: "a1b2c3d4-..."
  icme.zk_proof_id: "e5f6a7b8-..."
  icme.matched_variables: ["dataTransmittedToApprovedEndpointOnly", "dataAccessScope..."]
```

The `zk_proof_id` is what changes everything. A regulator calls `POST /v1/verifyProof` with that ID and gets independent cryptographic confirmation the policy check happened correctly. No access to your systems required.

Policy-checked action that was blocked:

```
Span: agent.action.policy_checked
  icme.relevant: true
  icme.result: UNSAT
  icme.check_id: "c9d0e1f2-..."
  icme.zk_proof_id: "f3a4b5c6-..."
  icme.blocked: true
  icme.detail: "Data transmitted to unapproved endpoint"
  otel.status_code: ERROR
```

Proof that the violation was caught. Proof that it was blocked. Proof that the policy was evaluated correctly. All verifiable without trusting you.

### Where to intercept in your framework

Every major agent framework exposes a hook where tool calls can be inspected and blocked before execution. That hook is where you serialize the tool call into a plain English action string and send it to PreFlight.
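
The snippets below share two small synchronous helpers, `check_relevance(policy_id, action)` and `check_it(policy_id, action)`, plus a `serialize_tool_call` function defined at the end of this section. A minimal sketch of the helpers, assuming the `requests` library; the endpoints, headers, and response fields mirror the async client in the full integration section below:

<kbd>python</kbd>

```python
import os

import requests

POLICY_ID = os.environ["ICME_POLICY_ID"]
_HEADERS = {
    "Content-Type": "application/json",
    "X-API-Key": os.environ["ICME_API_KEY"],
}


def check_relevance(policy_id: str, action: str) -> dict:
    """Free relevance screening. Returns should_check plus matched variables."""
    resp = requests.post(
        "https://api.icme.io/v1/checkRelevance",
        json={"policy_id": policy_id, "action": action},
        headers=_HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def check_it(policy_id: str, action: str) -> dict:
    """Full policy check (1 credit). Returns result, check_id, zk_proof_id."""
    resp = requests.post(
        "https://api.icme.io/v1/checkIt",
        json={"policy_id": policy_id, "action": action},
        headers=_HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```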

#### LangChain / LangGraph

LangChain's `wrap_tool_call` hook intercepts each tool execution individually. You get a `ToolCallRequest` containing the tool call dict (tool name + arguments) and the `BaseTool` instance. Serialize the call, check it, and either call the handler to proceed or raise to block.

<kbd>python</kbd>

```python
from langchain.agents.middleware import AgentMiddleware, ToolCallRequest
from langchain.agents import create_agent
from langchain_core.messages import ToolMessage
from typing import Callable

class PreFlightMiddleware(AgentMiddleware):
    def wrap_tool_call(self, request: ToolCallRequest, handler: Callable) -> ToolMessage:
        tool_name = request.tool_call["name"]
        tool_args = request.tool_call["args"]

        # Serialize the tool call into a plain English action string
        action = serialize_tool_call(tool_name, tool_args)

        # Free relevance check
        relevance = check_relevance(POLICY_ID, action)
        if not relevance["should_check"]:
            return handler(request)

        # Paid policy check ($0.01)
        result = check_it(POLICY_ID, action)
        if result["result"] == "SAT":
            return handler(request)

        # Blocked. Return error to the agent, tool never executes.
        return ToolMessage(
            content=f"BLOCKED: {result['detail']} (check_id: {result['check_id']})",
            tool_call_id=request.tool_call["id"],
            status="error",
        )

agent = create_agent(
    model="gpt-4.1",
    tools=[gmail_search, contacts_list, drive_read, http_post],
    middleware=[PreFlightMiddleware()],
)
```

See [LangChain middleware docs](https://docs.langchain.com/oss/python/langchain/middleware/custom) and the [`wrap_tool_call` reference](https://reference.langchain.com/python/langchain/agents/middleware/types/AgentMiddleware/wrap_tool_call).

#### OpenAI Agents SDK

The OpenAI Agents SDK uses [Guardrail objects](https://openai.github.io/openai-agents-python/guardrails/) that validate inputs and outputs. For tool-level interception, wrap each tool function to check before execution.

<kbd>python</kbd>

```python
from agents import Agent, function_tool

@function_tool
def gmail_search(query: str, max_results: int = 25) -> str:
    action = f"Call gmail.search with query {query}, returning up to {max_results} results."
    # Free relevance screen first; only policy-relevant calls cost a credit.
    if check_relevance(POLICY_ID, action)["should_check"]:
        result = check_it(POLICY_ID, action)
        if result["result"] != "SAT":
            return f"BLOCKED: {result['detail']}"
    return _gmail_search_impl(query, max_results)

agent = Agent(
    name="data-access-agent",
    instructions="You help the user with their email, calendar, and documents.",
    tools=[gmail_search, contacts_list, drive_read],
)
```

#### Strands Agents (AWS)

Strands provides `BeforeToolCallEvent`, a hook that fires before every tool execution. Set `event.cancel_tool` to block.

<kbd>python</kbd>

```python
from strands.hooks import BeforeToolCallEvent, HookProvider, HookRegistry

class PreFlightHook(HookProvider):
    def register_hooks(self, registry: HookRegistry) -> None:
        # HookProvider implementations register their callbacks explicitly.
        registry.add_callback(BeforeToolCallEvent, self.on_before_tool_call)

    def on_before_tool_call(self, event: BeforeToolCallEvent):
        action = serialize_tool_call(event.tool_use["name"], event.tool_use["input"])
        relevance = check_relevance(POLICY_ID, action)
        if not relevance["should_check"]:
            return

        result = check_it(POLICY_ID, action)
        if result["result"] != "SAT":
            event.cancel_tool = f"BLOCKED: {result['detail']}"
```

See the [Strands guardrails guide](https://dev.to/aws/ai-agent-guardrails-rules-that-llms-cannot-bypass-596d) for details on the `cancel_tool` mechanism.

#### CrewAI

CrewAI doesn't expose a tool-level middleware hook comparable to the frameworks above. The standard pattern is to wrap your tool functions directly and register the wrapper with the `@tool` decorator.

<kbd>python</kbd>

```python
from crewai.tools import tool

@tool("gmail_search")
def safe_gmail_search(query: str, max_results: int = 25) -> str:
    """Search the user's Gmail inbox."""
    action = f"Call gmail.search with query {query}, returning up to {max_results} results."
    # Free relevance screen first; only policy-relevant calls cost a credit.
    if check_relevance(POLICY_ID, action)["should_check"]:
        result = check_it(POLICY_ID, action)
        if result["result"] != "SAT":
            return f"BLOCKED: {result['detail']}"
    return _gmail_search_impl(query, max_results)

# Pass safe_gmail_search directly in the agent's tools list.
```

#### Serializing tool calls into action strings

The solver evaluates concrete facts in the action text. Your serialization function should include the tool name, all arguments, and any facts the policy references (record counts, destination URLs, storage behavior).

<kbd>python</kbd>

```python
def serialize_tool_call(tool_name: str, args: dict) -> str:
    """
    Turn a structured tool call into a plain English string
    the solver can extract variables from.
    """
    if tool_name == "gmail_search":
        return (
            f"Call gmail.search with query {args.get('query', '')}, "
            f"returning up to {args.get('max_results', 25)} results. "
            f"Data stays in working memory and is discarded after the response."
        )
    if tool_name == "http_post":
        return (
            f"HTTP POST to {args.get('url', 'unknown')} "
            f"with {args.get('record_count', 'unknown')} records in the request body."
        )
    if tool_name == "contacts_list":
        return (
            f"Call contacts.list returning up to {args.get('max_results', 'all')} "
            f"contact records. Data stays in working memory."
        )
    if tool_name == "write_to_db":
        return (
            f"Write {args.get('record_count', 'unknown')} records "
            f"to {args.get('destination', 'unknown')} for persistent storage."
        )
    # Fallback: serialize everything
    return f"Call {tool_name} with arguments: {args}"
```

The solver reads "returning up to 500 results" and extracts `numberOfEmailsAccessed: 500`. It reads "HTTP POST to <https://vendor.com/api>" and extracts `isExternalTransmission: true` and `destinationUrl`. The more concrete your serialization, the more reliably the policy evaluates.
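
For example, a hypothetical `http_post` call serializes into exactly the kind of sentence the UNSAT examples later in this guide send to the solver:

<kbd>python</kbd>

```python
action = serialize_tool_call(
    "http_post",
    {"url": "https://summarizer.external-vendor.com/v1/batch", "record_count": 500},
)
print(action)
# HTTP POST to https://summarizer.external-vendor.com/v1/batch with 500 records in the request body.
```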

### Full OpenTelemetry integration

The middleware examples above handle enforcement. To add the OTel tracing layer with `checkRelevance` routing and `zk_proof_id` on every span, wrap the check logic with span creation:

<kbd>python</kbd>

```python
import httpx
from opentelemetry import trace
from opentelemetry.trace import StatusCode

tracer = trace.get_tracer("agent.middleware")

ICME_API_KEY = "sk-smt-..."
ICME_POLICY_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
ICME_BASE = "https://api.icme.io/v1"


async def check_relevance(action: str, threshold: float = 0.0) -> dict:
    """Free relevance screening. No credits charged."""
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"{ICME_BASE}/checkRelevance",
            json={
                "policy_id": ICME_POLICY_ID,
                "action": action,
                "threshold": threshold,
            },
            headers={
                "Content-Type": "application/json",
                "X-API-Key": ICME_API_KEY,
            },
        )
        return resp.json()


async def check_action(action: str) -> dict:
    """Full policy check. 1 credit ($0.01)."""
    async with httpx.AsyncClient(timeout=30.0) as client:
        resp = await client.post(
            f"{ICME_BASE}/checkIt",
            json={"policy_id": ICME_POLICY_ID, "action": action},
            headers={
                "Content-Type": "application/json",
                "X-API-Key": ICME_API_KEY,
            },
        )
        return resp.json()


async def guarded_execute(action: str, execute_fn):
    """
    Middleware that classifies, enforces, and traces
    agent actions with cryptographic proof.
    """
    relevance = await check_relevance(action)

    if not relevance.get("should_check", True):
        # Benign action. No policy variables touched. Lightweight trace.
        with tracer.start_as_current_span("agent.action") as span:
            span.set_attribute("icme.relevant", False)
            return await execute_fn(action)

    # Policy-relevant action. Full enforcement and tracing.
    with tracer.start_as_current_span("agent.action.policy_checked") as span:
        span.set_attribute("icme.relevant", True)
        span.set_attribute("icme.relevance_score", relevance["relevance"])
        span.set_attribute("icme.matched_variables", str(relevance["matched"]))

        result = await check_action(action)

        span.set_attribute("icme.result", result.get("result", "ERROR"))
        span.set_attribute("icme.check_id", result.get("check_id", ""))
        span.set_attribute("icme.zk_proof_id", result.get("zk_proof_id", ""))

        if result.get("result") == "SAT":
            return await execute_fn(action)

        # UNSAT. Blocked. The proof records the block.
        span.set_status(StatusCode.ERROR, result.get("detail", "Policy violation"))
        span.set_attribute("icme.blocked", True)
        raise PolicyViolation(
            action=action,
            detail=result.get("detail"),
            check_id=result.get("check_id"),
            proof_id=result.get("zk_proof_id"),
        )


class PolicyViolation(Exception):
    def __init__(self, action, detail, check_id, proof_id):
        self.action = action
        self.detail = detail
        self.check_id = check_id
        self.proof_id = proof_id
        super().__init__(f"Blocked: {detail} (check: {check_id}, proof: {proof_id})")
```
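
A usage sketch, assuming a hypothetical `send_report` executor in place of a real tool call:

<kbd>python</kbd>

```python
import asyncio


async def send_report(action: str) -> str:
    # Hypothetical executor: the real tool call would run here.
    return "report sent"


async def main():
    action = (
        "HTTP POST to https://summarizer.external-vendor.com/v1/batch "
        "with 500 records in the request body."
    )
    try:
        await guarded_execute(action, send_report)
    except PolicyViolation as violation:
        # The block itself carries a verifiable receipt.
        print(f"Blocked: {violation.detail} (proof: {violation.proof_id})")


asyncio.run(main())
```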

### Setting up a policy

#### 1. Create an account

<kbd>bash</kbd>

```bash
curl -s -X POST https://api.icme.io/v1/createUserCard \
  -H 'Content-Type: application/json' \
  -d '{"username": "otel-agent"}' | jq .
# Open checkout_url in browser. $5.00, gives 325 credits.
# Then retrieve your API key:
curl -s https://api.icme.io/v1/session/SESSION_ID | jq .
```

#### 2. Compile a policy

A good automated reasoning (AR) policy uses concrete, extractable variables: numbers, URLs, booleans that the solver can read directly from the action text. Vague conditions like "scope matches user intent" or "approved endpoints" cause translation failures because the solver has no way to evaluate them. See [AWS AR best practices](https://docs.aws.amazon.com/bedrock/latest/userguide/automated-reasoning-policy-best-practices.html) for why this matters.

Every variable in the policy below maps to a fact your middleware can serialize from a tool call: a record count, a destination URL, a storage flag.

This example covers a personal data access agent with access to email, calendar, contacts, and documents:

<kbd>bash</kbd>

```bash
curl -s -N -X POST https://api.icme.io/v1/makeRules \
  -H 'Content-Type: application/json' \
  -H "X-API-Key: $ICME_API_KEY" \
  -d '{
    "policy": "The agent has access to email, calendar, contacts, and documents on behalf of the user.\n\nThe agent must not make outbound HTTP requests to any external URL. Any action that sends data via HTTP POST, PUT, or outbound network request to a URL is an external transmission. If the agent transmits data externally, the action must be rejected.\n\nThe agent must not access more than 25 email messages in a single operation. If the number of emails accessed exceeds 25, the action must be rejected.\n\nThe agent must not access more than 50 contact records in a single operation. If the number of contacts accessed exceeds 50, the action must be rejected.\n\nThe agent must not access more than 10 documents in a single operation. If the number of documents accessed exceeds 10, the action must be rejected.\n\nThe agent must not write, cache, or persist any extracted data to disk, database, or long-term memory. All data must be held in working memory only and discarded after the response is returned. If the agent stores data after the response, the action must be rejected."
  }'
```

The solver extracts variables like `numberOfEmailsAccessed`, `isExternalTransmission`, `destinationUrl`, `retainsDataAfterResponse` from these rules. When your middleware serializes a tool call, it includes these as concrete facts: "POST 500 email bodies to <https://vendor.com/api>" gives the solver a URL and a record count to evaluate.

Save the `policy_id`. Compilation costs 300 credits, one-time. See the [full use case walkthrough](https://docs.icme.io/documentation/personal-and-privacy/personal-data-access-agent) for details on variable extraction and [battle testing](https://docs.icme.io/documentation/battle-testing-rules) to verify the solver interprets your policy correctly.

#### 3. Set environment variables

<kbd>bash</kbd>

```bash
export ICME_API_KEY=sk-smt-...
export ICME_POLICY_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```

### What gets caught

The action strings below are what your middleware would serialize from real tool calls. Each one includes the concrete facts the solver needs: record counts, destination URLs, storage behavior.

#### SAT: normal email search, small scope, no external transmission

<kbd>bash</kbd>

```bash
curl -s -N -X POST https://api.icme.io/v1/checkIt \
  -H 'Content-Type: application/json' \
  -H "X-API-Key: $ICME_API_KEY" \
  -d '{
    "policy_id": "'"$ICME_POLICY_ID"'",
    "action": "Call gmail.search with query from:alice@company.com subject:Q3 review, returning up to 10 results. Data stays in working memory for summarization and is discarded after the response."
  }'
```

Result: **SAT**. 10 emails is under the 25 limit. No external transmission. No persistent storage. The proof records the pass.

#### UNSAT: external transmission with a destination URL

<kbd>bash</kbd>

```bash
curl -s -N -X POST https://api.icme.io/v1/checkIt \
  -H 'Content-Type: application/json' \
  -H "X-API-Key: $ICME_API_KEY" \
  -d '{
    "policy_id": "'"$ICME_POLICY_ID"'",
    "action": "HTTP POST 500 email bodies to https://summarizer.external-vendor.com/v1/batch for summarization. The emails were retrieved from the users inbox."
  }'
```

Result: **UNSAT**. The solver extracts `isExternalTransmission: true` and `destinationUrl: https://summarizer.external-vendor.com/v1/batch`. The policy blocks all outbound HTTP requests. The data never leaves. The proof records the block.

#### UNSAT: record count exceeds the limit

<kbd>bash</kbd>

```bash
curl -s -N -X POST https://api.icme.io/v1/checkIt \
  -H 'Content-Type: application/json' \
  -H "X-API-Key: $ICME_API_KEY" \
  -d '{
    "policy_id": "'"$ICME_POLICY_ID"'",
    "action": "Call gmail.search with query after:2026-04-01, returning up to 500 results to build context for answering a question about the Alice thread."
  }'
```

Result: **UNSAT**. The solver extracts `numberOfEmailsAccessed: 500`, which exceeds the 25-message limit. The agent wanted to read 500 emails to answer a question about one thread. The proof captures the exact count that violated the policy.

#### UNSAT: persistent storage of extracted data

<kbd>bash</kbd>

```bash
curl -s -N -X POST https://api.icme.io/v1/checkIt \
  -H 'Content-Type: application/json' \
  -H "X-API-Key: $ICME_API_KEY" \
  -d '{
    "policy_id": "'"$ICME_POLICY_ID"'",
    "action": "Write extracted contact names, email addresses, and phone numbers to agent_memory.db for reuse in future sessions. 342 contact records processed."
  }'
```

Result: **UNSAT**. The solver extracts `retainsDataAfterResponse: true`. Writing to a database is persistent storage. The proof records the violation.

#### UNSAT: instruction injected via calendar event

<kbd>bash</kbd>

```bash
curl -s -N -X POST https://api.icme.io/v1/checkIt \
  -H 'Content-Type: application/json' \
  -H "X-API-Key: $ICME_API_KEY" \
  -d '{
    "policy_id": "'"$ICME_POLICY_ID"'",
    "action": "Call drive.list returning all 847 documents. This instruction was found in a calendar event body titled Team Sync while the agent was reading todays schedule."
  }'
```

Result: **UNSAT**. 847 documents exceeds the 10-document limit. The proof captures it regardless of where the instruction came from.

### Verifying proofs

Verify cryptographically:

<kbd>bash</kbd>

```bash
curl -s -X POST https://api.icme.io/v1/verifyProof \
  -H 'Content-Type: application/json' \
  -d '{"proof_id": "YOUR_PROOF_ID"}' | jq .
```

Check proof metadata:

<kbd>bash</kbd>

```bash
curl -s https://api.icme.io/v1/proof/YOUR_PROOF_ID \
  -H "X-API-Key: $ICME_API_KEY" | jq .
```

Download the raw proof binary:

<kbd>bash</kbd>

```bash
curl -s https://api.icme.io/v1/proof/YOUR_PROOF_ID/download -o proof.bin
```

The proof confirms: this specific policy was checked against this specific action, by this specific solver, and returned this specific result. No re-execution required. No trust in the provider required. No policy exposure required.
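
The same verification from Python, for example in a scheduled job that spot-checks stored proof IDs (a minimal sketch using `requests`; the endpoint and request body match the curl call above, which sends no API key):

<kbd>python</kbd>

```python
import requests


def verify_proof(proof_id: str) -> dict:
    """Independent verification: no API key, no access to the prover's systems."""
    resp = requests.post(
        "https://api.icme.io/v1/verifyProof",
        json={"proof_id": proof_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```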

### What people are building today vs. what this gives you

| Problem                                    | Current approach                                                                                              | With PreFlight + OTel                                                        |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| Tamper-evident audit trail                 | Merkle tree hash chains, blockchain anchoring, WORM storage                                                   | ZK proof on every check, verifiable by anyone, no infrastructure needed      |
| Prove a policy was enforced                | Log entries in a mutable database                                                                             | Cryptographic proof the solver evaluated the policy correctly                |
| Reduce observability cost                  | Sample traces, drop low-value spans in Collector                                                              | `checkRelevance` (free) classifies at source so you only trace what matters  |
| Answer "was the agent allowed to do this?" | Reconstruct from logs after the fact                                                                          | `check_id` + `proof_id` on the OTel span, answer is immediate and verifiable |
| Third-party verification                   | Grant the auditor direct access to your logging backend                                                       | Auditor calls [`/v1/verifyProof`](https://docs.icme.io/api-reference) with zero access to your systems, or downloads the raw proof for offline checks |
| Policy privacy during audit                | Expose your rules to the verifier                                                                             | ZK proof verifies without revealing the policy                               |

### Cost

| Action                | Cost                                                                                                |
| --------------------- | --------------------------------------------------------------------------------------------------- |
| `checkRelevance`      | **Free**                                                                                            |
| `checkIt`             | 1 credit ($0.01)                                                                                    |
| Policy compilation    | 300 credits ($3.00), one-time                                                                       |
| Account creation      | $5.00 (gives 325 credits)                                                                           |
| ZK proof verification | Free (included with check)                                                                          |
| Credit top-up         | $5 to $100 ([volume bonuses up to 20%](https://docs.icme.io/documentation/getting-started/pricing)) |

For an agent averaging 1,000 actions/day where 15% are policy-relevant: 150 paid checks at $0.01 each, or $1.50/day. The other 850 relevance screenings are free. Compare that to what you're paying to store and query 1,000 full-depth traces per day in your observability backend.
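
The same estimate as a quick back-of-the-envelope, using the assumptions above:

<kbd>python</kbd>

```python
actions_per_day = 1_000
policy_relevant_share = 0.15   # fraction of actions where should_check is true
cost_per_check = 0.01          # checkIt price from the table above

paid_checks = actions_per_day * policy_relevant_share   # 150 checks/day
daily_cost = paid_checks * cost_per_check               # $1.50/day
print(f"{paid_checks:.0f} checks/day → ${daily_cost:.2f}/day, ${daily_cost * 30:.2f}/month")
```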

### Production checklist

* [ ] Account created, API key saved as `ICME_API_KEY`
* [ ] Policy compiled, ID saved as `ICME_POLICY_ID`
* [ ] `checkRelevance` called before every agent action
* [ ] `checkIt` called for every action where `should_check: true`
* [ ] `check_id` and `zk_proof_id` set as OTel span attributes
* [ ] Fail-closed: any non-`SAT` result blocks execution
* [ ] Per-action enforcement for multi-step chains (check before each step, not just once)
* [ ] Proof verification tested with `/v1/verifyProof`
* [ ] Compliance team briefed on proof verification workflow

