Once you have a GuardedRun from guard.run(), use model_call() and tool_call() to wrap your model and tool invocations. Each call is checked against your guardrails before execution.

Model Calls

async with guard.run(user_id="alice") as run:
    response = await run.model_call(
        fn=lambda: openai.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello"}],
        ),
        model="gpt-4o",
        input_data={"messages": [{"role": "user", "content": "Hello"}]},
        token_extractor=lambda r: (
            r.usage.prompt_tokens,
            r.usage.completion_tokens,
        ),
    )

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `fn` | `Callable[[], T \| Awaitable[T]]` | Yes | The function that makes the model call. Can be sync or async. |
| `model` | `str` | Yes | Model identifier (e.g., `"gpt-4o"`, `"claude-sonnet-4-5-20250929"`). Used for cost calculation. |
| `input_data` | `dict[str, Any] \| None` | No | Input payload to record in the step (for audit). |
| `token_extractor` | `Callable[[T], tuple[int, int]] \| None` | No | Extracts `(prompt_tokens, completion_tokens)` from the response. |

What happens

  1. Check: POST /v1/runs/{id}/steps with type MODEL_CALL — guardrails are evaluated
  2. Execute: fn() is called (awaited if async)
  3. Report: PATCH /v1/runs/{id}/steps/{step_id} with status, duration, and token counts
If the guardrail check denies the step, PikarcBlockedError is raised before fn() executes — your model call never happens, and you’re never billed by the provider.
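The check → execute → report lifecycle can be sketched in plain Python. This is a simplified illustration of the ordering guarantee, not the SDK's actual implementation; `guarded_step` and `BlockedError` are hypothetical stand-ins for the client plumbing and `PikarcBlockedError`:

```python
import time

class BlockedError(Exception):
    """Stand-in for PikarcBlockedError in this sketch."""

def guarded_step(check, fn, report):
    # 1. Check: evaluate guardrails before doing any work
    verdict = check()
    if not verdict["allowed"]:
        # fn() is never called; no provider billing occurs
        raise BlockedError(verdict["reason"])
    # 2. Execute: run the wrapped call and time it
    start = time.monotonic()
    result = fn()
    duration_ms = (time.monotonic() - start) * 1000
    # 3. Report: record outcome and duration for the step
    report(status="ok", duration_ms=duration_ms)
    return result
```

The key property is the ordering: the guardrail check runs first, so a denied step raises before `fn()` does any work.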

Token Extractor

The token_extractor function receives the return value of fn() and should return a tuple of (prompt_tokens, completion_tokens). This is how Pikarc tracks token usage and calculates costs. For OpenAI:
token_extractor=lambda r: (r.usage.prompt_tokens, r.usage.completion_tokens)
For Anthropic:
token_extractor=lambda r: (r.usage.input_tokens, r.usage.output_tokens)
If omitted, token counts won’t be recorded for the step.
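Some responses can come back without a `usage` block (for example, certain streaming or error paths), and an extractor that assumes it is present will raise during reporting. A defensive variant is straightforward; the `(0, 0)` fallback here is an illustrative choice, not SDK behavior:

```python
from types import SimpleNamespace

def safe_openai_tokens(response):
    """Return (prompt_tokens, completion_tokens), falling back to (0, 0) if usage is absent."""
    usage = getattr(response, "usage", None)
    if usage is None:
        return (0, 0)
    return (usage.prompt_tokens, usage.completion_tokens)

# Simulate an OpenAI-style response object
resp = SimpleNamespace(usage=SimpleNamespace(prompt_tokens=12, completion_tokens=34))
print(safe_openai_tokens(resp))  # (12, 34)
```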

Tool Calls

async with guard.run(user_id="alice") as run:
    result = await run.tool_call(
        fn=lambda: search_database(query="recent orders"),
        tool_name="search_database",
        input_data={"query": "recent orders"},
    )

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `fn` | `Callable[[], T \| Awaitable[T]]` | Yes | The function that executes the tool. Can be sync or async. |
| `tool_name` | `str` | Yes | Tool identifier (e.g., `"search_database"`, `"send_email"`). |
| `input_data` | `dict[str, Any] \| None` | No | Input payload to record in the step (for audit). |

What happens

  1. Check: POST /v1/runs/{id}/steps with type TOOL_CALL — guardrails are evaluated
  2. Execute: fn() is called (awaited if async)
  3. Report: PATCH /v1/runs/{id}/steps/{step_id} with status and duration
Tool calls don’t have a token_extractor since tools don’t produce token counts.
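One easy mistake with `tool_call` is letting `input_data` drift out of sync with the arguments actually passed inside `fn`. A small helper can build both from a single set of kwargs; `tool_step` is a hypothetical convenience, not part of the Pikarc SDK:

```python
def tool_step(fn, **kwargs):
    """Build a zero-arg thunk and a matching audit payload from one set of kwargs."""
    return (lambda: fn(**kwargs)), dict(kwargs)

# Example tool for illustration
def search_database(query):
    return f"results for {query!r}"

thunk, payload = tool_step(search_database, query="recent orders")
```

You would then pass `fn=thunk` and `input_data=payload` to `run.tool_call(...)`, so the recorded payload always matches the real call.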

Sync and Async Functions

Both model_call() and tool_call() accept sync or async callables for fn. The SDK detects whether the return value is awaitable and handles it automatically:
# Sync function — works fine
await run.tool_call(
    fn=lambda: my_sync_tool(arg),
    tool_name="my_sync_tool",
)

# Async function — also works
await run.tool_call(
    fn=lambda: my_async_tool(arg),
    tool_name="my_async_tool",
)
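The detection described above can be approximated with `inspect.isawaitable`: call `fn()`, then await the result only if it is awaitable. A minimal sketch of that pattern (not the SDK's actual code):

```python
import asyncio
import inspect

async def call_maybe_async(fn):
    """Invoke fn, awaiting the result only when it is awaitable."""
    result = fn()
    if inspect.isawaitable(result):
        result = await result
    return result

def sync_tool():
    return "sync result"

async def async_tool():
    return "async result"

print(asyncio.run(call_maybe_async(sync_tool)))   # sync result
print(asyncio.run(call_maybe_async(async_tool)))  # async result
```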

Full Example

from pikarc import AsyncPikarc, PikarcBlockedError
from openai import AsyncOpenAI

openai = AsyncOpenAI()
guard = AsyncPikarc(api_key="lg_...", base_url="http://localhost:8000")

try:
    async with guard.run(user_id="alice", metadata={"agent": "support-bot"}) as run:
        # Model call — guardrail-checked, tokens tracked
        response = await run.model_call(
            fn=lambda: openai.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Find recent orders"}],
            ),
            model="gpt-4o",
            token_extractor=lambda r: (
                r.usage.prompt_tokens,
                r.usage.completion_tokens,
            ),
        )

        # Tool call — guardrail-checked, duration tracked
        orders = await run.tool_call(
            fn=lambda: search_orders(user="alice", limit=10),
            tool_name="search_orders",
            input_data={"user": "alice", "limit": 10},
        )

        # Another model call — guardrails checked again
        summary = await run.model_call(
            fn=lambda: openai.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "user", "content": "Find recent orders"},
                    {"role": "assistant", "content": response.choices[0].message.content},
                    {"role": "user", "content": f"Summarize: {orders}"},
                ],
            ),
            model="gpt-4o",
            token_extractor=lambda r: (
                r.usage.prompt_tokens,
                r.usage.completion_tokens,
            ),
        )

except PikarcBlockedError as e:
    print(f"Blocked: {e.reason} ({e.deny_reason})")

finally:
    await guard.close()