Test Mode

Test mode lets you interact with your voice agent in the terminal using pure text — no phone calls, no STT/TTS, no external services required. It simulates a phone conversation for rapid agent development.

Quick Start

import { Patter } from "@patter-dev/sdk";

const phone = new Patter({ openaiKey: "sk-...", mode: "local" });

const agent = phone.agent({
  systemPrompt: "You are a helpful customer service agent for Acme Corp.",
  firstMessage: "Hello! Welcome to Acme Corp. How can I help you today?",
});

await phone.test(agent);
This opens an interactive REPL:
============================================================
  PATTER TEST MODE
============================================================
  Agent: gpt-4o-realtime / alloy
  Provider: openai_realtime
  Call ID: test_a1b2c3d4e5f6
  Caller: +15550000001  →  Callee: +15550000002
------------------------------------------------------------
  Commands: /quit  /transfer <number>  /hangup  /history
============================================================

  Agent: Hello! Welcome to Acme Corp. How can I help you today?

  You: I need to return an item
  Agent: I'd be happy to help with your return...

Commands

Command               Description
/quit                 End the test session.
/hangup               Simulate hanging up the call.
/transfer <number>    Simulate a call transfer.
/history              Print the full conversation history.
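The slash-command syntax is simple: a command name after `/`, with any remainder treated as arguments, and anything else sent to the agent as chat. A minimal parser along these lines (a hypothetical sketch for illustration, not the SDK's actual implementation) might look like:

```typescript
// Hypothetical sketch of slash-command parsing; the SDK's real parser may differ.
interface ParsedCommand {
  command: string; // e.g. "transfer"
  args: string[];  // e.g. ["+15550001111"]
}

// Returns null for ordinary chat input that should go to the agent instead.
function parseCommand(input: string): ParsedCommand | null {
  const trimmed = input.trim();
  if (!trimmed.startsWith("/")) return null;
  const [command, ...args] = trimmed.slice(1).split(/\s+/);
  return { command, args };
}
```

Here `parseCommand("/transfer +15550001111")` yields `{ command: "transfer", args: ["+15550001111"] }`, while plain text such as `"I need to return an item"` yields `null`.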

Using with onMessage

Test mode supports custom message handlers, which behave exactly as they do in production:
async function onMessage(data: Record<string, unknown>, callControl: CallControl) {
  const userText = data.text as string;

  if (userText.toLowerCase().includes("cancel")) {
    await callControl.transfer("+15550001111");
    return "Let me transfer you to our cancellation team.";
  }

  return `You said: ${userText}`;
}

await phone.test(agent, { onMessage });
The callControl parameter is automatically provided as the second argument.
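Because the handler is a plain function, you can also unit-test it outside test mode by passing a stub callControl. The interface shape below is an assumption for illustration; check the SDK's exported CallControl type for the real surface:

```typescript
// Assumed shape of CallControl for this sketch; the SDK defines the real type.
interface CallControl {
  transfer(number: string): Promise<void>;
  hangup(): Promise<void>;
}

// The handler from the example above, unchanged.
async function onMessage(data: Record<string, unknown>, callControl: CallControl) {
  const userText = data.text as string;

  if (userText.toLowerCase().includes("cancel")) {
    await callControl.transfer("+15550001111");
    return "Let me transfer you to our cancellation team.";
  }

  return `You said: ${userText}`;
}

// A stub records transfers instead of touching any telephony.
const transfers: string[] = [];
const stub: CallControl = {
  transfer: async (n) => { transfers.push(n); },
  hangup: async () => {},
};

const reply = await onMessage({ text: "I want to cancel my order" }, stub);
// reply is the transfer message; transfers now contains "+15550001111".
```

This keeps fast feedback loops even tighter: handler logic gets covered by ordinary unit tests, and the interactive REPL is reserved for end-to-end conversation flow.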

Using with Built-in LLM Loop

When no onMessage handler is provided and an openaiKey is available, test mode uses the built-in LLM loop with streaming responses:
const agent = phone.agent({
  systemPrompt: "You are a restaurant booking assistant.",
  tools: [
    {
      name: "check_availability",
      description: "Check table availability",
      parameters: {
        type: "object",
        properties: {
          date: { type: "string" },
          partySize: { type: "number" },
        },
      },
      webhook: "https://api.example.com/availability",
    },
  ],
});

await phone.test(agent);
The LLM loop supports tool calling — tools are executed via their configured webhooks just like in production.
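On the webhook side, each tool needs an HTTP endpoint that accepts the tool's arguments and returns a result. The payload shape below (the tool's JSON arguments in the POST body, a JSON object back) is an assumption for illustration; consult the SDK's webhook contract for the exact format:

```typescript
import { createServer } from "node:http";

// Hypothetical argument shape matching the check_availability tool above.
interface AvailabilityArgs {
  date: string;
  partySize: number;
}

const server = createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const args = JSON.parse(body) as AvailabilityArgs;
    // Stub logic: pretend tables for up to 6 guests are always free.
    const available = args.partySize <= 6;
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ available, date: args.date }));
  });
});

// Port chosen arbitrarily for local testing.
server.listen(8765, () => console.log("availability webhook on :8765"));
```

Locally you would point the tool's webhook at this server; in production the same handler sits behind the configured URL (https://api.example.com/availability in the example above).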

Event Callbacks

Test mode fires the same lifecycle callbacks as serve():
await phone.test(agent, {
  onCallStart: async (event) => {
    console.log(`Test call started: ${event.callId}`);
    return { variables: { customerName: "Test User" } };
  },
  onCallEnd: async (event) => {
    console.log(`Test call ended: ${event.callId}`);
    console.log(`Transcript: ${event.transcript.length} messages`);
  },
});
Test mode is intended for development only. It makes no external API calls, except to OpenAI for the built-in LLM loop when no onMessage handler is provided.