A tool integration is a reusable HTTP endpoint that an agent can invoke during a call. You give ThunderPhone a JSON-schema description of the tool plus an endpoint URL; the agent decides when to call it based on the conversation, and ThunderPhone makes the outbound HTTP request from its servers and returns the response to the agent. This guide walks through building a weather-lookup tool end-to-end.

Anatomy of a tool

Two pieces:
  1. The schema — an OpenAI-style function definition ({type: "function", function: {name, description, parameters}}) that tells the LLM what the tool does and what arguments it takes.
  2. The endpoint — the URL ThunderPhone’s servers call when the LLM decides to use the tool. The request is JSON POST with the LLM’s chosen arguments as the body.

1. Choose a storage strategy

Inline on the agent

Attach a one-off tool to the agent’s tools array. Simple, but not reusable.

Saved integration

Store the tool as a reusable integration and link it from many agents. Recommended for anything used more than once.
This guide uses the saved-integration path.

2. Create the integration

curl -X POST https://api.thunderphone.com/v1/integrations \
  -H "Authorization: Bearer sk_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "display_name": "Weather API",
    "spec": {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Return the current weather for a zip code.",
        "parameters": {
          "type": "object",
          "properties": {
            "zip": { "type": "string", "description": "5-digit US ZIP code" }
          },
          "required": ["zip"]
        }
      }
    },
    "endpoint_url":    "https://api.example.com/weather",
    "endpoint_method": "GET",
    "headers": [
      { "key": "X-Api-Key", "value": "your-provider-key" }
    ]
  }'
Save the returned id (a UUID).
Spend real effort on the description of the tool and of each parameter. The LLM uses these strings at runtime to decide whether and how to call the tool. Vague descriptions → vague tool calls.

3. Sandbox-test the endpoint

Before you link the integration to an agent, fire a signed request from ThunderPhone’s servers to confirm connectivity:
curl -X POST https://api.thunderphone.com/v1/integrations/test-request \
  -H "Authorization: Bearer sk_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url":    "https://api.example.com/weather?zip=94110",
    "method": "GET",
    "headers": { "X-Api-Key": "your-provider-key" }
  }'
Response
{
  "ok": true,
  "status": 200,
  "elapsed_ms": 187,
  "response_headers": { "content-type": "application/json" },
  "response_preview": "{\"temperature_f\": 64, ...}"
}
This test is subject to ThunderPhone’s SSRF guards — requests to localhost or private IP ranges return 400 with code=url_not_allowed.

4. Link the integration to an agent

Attach via integration_ids when you create or update an agent:
curl -X PATCH https://api.thunderphone.com/v1/agents/12 \
  -H "Authorization: Bearer sk_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "integration_ids": ["f9b5a1a4-..."]
  }'
You can link many integrations to one agent. The agent’s prompt can reference them by name — “use get_weather when the caller asks about conditions” — or it can discover them implicitly from the schema descriptions.

5. Implement the endpoint

When the agent invokes the tool, ThunderPhone sends a signed POST to your endpoint_url:
POST /weather HTTP/1.1
Host: api.example.com
X-Api-Key: your-provider-key
X-ThunderPhone-Signature: <HMAC-SHA256 hex>
Content-Type: application/json

{"zip": "94110"}
Your server responds with JSON that gets handed back to the LLM:
{"temperature_f": 64, "condition": "Partly cloudy", "wind_mph": 8}
The LLM ingests that response and speaks a human summary to the caller.
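The handler behind that exchange can stay framework-agnostic. A minimal sketch (handle_get_weather is a name of our choosing, and the provider call is stubbed) just maps the LLM’s arguments to the JSON the agent receives:

```python
def handle_get_weather(arguments: dict) -> dict:
    """Take the LLM's chosen arguments ({"zip": "94110"}) and return the
    JSON payload that is handed back to the model."""
    zip_code = arguments["zip"]
    # Call your actual weather provider with zip_code here; stubbed for the sketch.
    return {"temperature_f": 64, "condition": "Partly cloudy", "wind_mph": 8}
```

Keep the return value small and flat — it goes straight into the LLM’s context.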
The signature is computed over the raw request body using the same secret as your webhook endpoint. Verify it — tool endpoints are internet-facing and subject to the same spoofing concerns as webhooks. See Verify webhook signatures.
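A sketch of that check, assuming the X-ThunderPhone-Signature header carries a lowercase hex digest as shown in the request above:

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature: str, secret: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare it to the
    X-ThunderPhone-Signature header value in constant time."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Reject the request (e.g. respond 401) whenever this returns False.
```

Compute the digest over the raw bytes of the body, before any JSON parsing — re-serializing parsed JSON can change key order or whitespace and break the comparison.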

6. Test the loop

Run a mic session against the agent and ask the question your tool handles (“What’s the weather in 94110?”). The transcript’s structured history shows the full round trip:
[
  { "role": "user",      "content_type": "text/plain", "content": "What's the weather in 94110?" },
  { "role": "tool_call", "content_type": "application/json",
    "content": { "name": "get_weather", "arguments": {"zip": "94110"} } },
  { "role": "tool_response", "content_type": "application/json",
    "content": { "temperature_f": 64, "condition": "Partly cloudy" } },
  { "role": "assistant", "content_type": "text/plain",
    "content": "It's 64 degrees and partly cloudy." }
]
You can pull this via GET /v1/calls/{call_id}/history.

Common gotchas

The LLM decides based on the tool’s description. If the caller’s question doesn’t match the description, the model won’t invoke the tool. Tighten the description (add common synonyms and phrasing) or mention it explicitly in the agent prompt (“When the caller asks about weather, use get_weather.”).
Responses over 6 kB are truncated in the transcript preview. Return just the fields the LLM needs — not your whole row.
Tool endpoints have a 10-second default timeout. If you need longer, handle it asynchronously: return {"status": "pending", "request_id": "..."} and surface the result via a separate tool call.
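One way to structure that pending/poll pattern — a sketch with an in-memory store and hypothetical tool names; a real service would enqueue the slow work and persist results:

```python
import uuid

# In-memory result store; use a database or cache in production.
RESULTS: dict = {}

def start_weather_lookup(arguments: dict) -> dict:
    """First tool call: return immediately with a request_id instead of blocking."""
    request_id = str(uuid.uuid4())
    RESULTS[request_id] = None  # mark as pending
    # Enqueue the slow provider call here; for the sketch we complete it inline.
    RESULTS[request_id] = {"temperature_f": 64, "condition": "Partly cloudy"}
    return {"status": "pending", "request_id": request_id}

def check_weather_lookup(arguments: dict) -> dict:
    """Second tool call: the agent polls with the request_id from the first."""
    result = RESULTS.get(arguments["request_id"])
    if result is None:
        return {"status": "pending", "request_id": arguments["request_id"]}
    return {"status": "done", "result": result}
```

The agent’s prompt should tell it to call the second tool with the returned request_id when the caller is still waiting on an answer.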
Every integration PATCH creates a new revision. Inspect GET /v1/integrations/{id}/versions to see who changed what. If you break a tool’s schema, you can roll back manually by PATCH-ing an older snapshot back in.

Next steps

Integrations reference

CRUD, transfer, version history.

Function Tools spec

Full JSON schema grammar and the signed-endpoint contract.

Verify signatures

Apply the webhook-signature pattern to tool endpoints.

Transcript + history API

Inspect the full round-trip of a tool call.