TTS is used in pipeline mode to synthesize the agent's response audio. If you use an engine such as OpenAIRealtime or ElevenLabsConvAI, speech synthesis is handled internally by the engine.

Each TTS ships as both a namespaced class (`import * as elevenlabs from "getpatter/tts/elevenlabs"` → `new elevenlabs.TTS()`) and a flat alias (`import { ElevenLabsTTS } from "getpatter"`). They are equivalent: the flat aliases are convenient for short examples, while the namespaced form avoids name collisions when mixing providers.
```ts
// npx tsx example.ts
import { Patter, Twilio, DeepgramSTT, ElevenLabsTTS } from "getpatter";

const phone = new Patter({
  carrier: new Twilio(),
  phoneNumber: "+15550001234",
});

const agent = phone.agent({
  stt: new DeepgramSTT(), // DEEPGRAM_API_KEY from env
  tts: new ElevenLabsTTS({ voiceId: "rachel" }), // ELEVENLABS_API_KEY from env
  systemPrompt: "You are a helpful assistant.",
});

await phone.serve({ agent });
```
The same agent using namespaced imports:
```ts
import * as deepgram from "getpatter/stt/deepgram";
import * as elevenlabs from "getpatter/tts/elevenlabs";

const agent = phone.agent({
  stt: new deepgram.STT(),
  tts: new elevenlabs.TTS({ voiceId: "rachel" }),
  systemPrompt: "You are a helpful assistant.",
});
```
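For contrast, an agent built on a realtime engine configures no `stt` or `tts` at all, since the engine synthesizes speech internally. The sketch below is an assumption about the shape of that configuration: the `engine` option name is hypothetical and not confirmed by this section, so check the engine documentation for the actual field.

```ts
// Hypothetical sketch: with a realtime engine, speech synthesis happens
// inside the engine, so no separate stt/tts entries are configured.
// The `engine` option name is an assumption, not a confirmed API.
import { Patter, Twilio, OpenAIRealtime } from "getpatter";

const phone = new Patter({
  carrier: new Twilio(),
  phoneNumber: "+15550001234",
});

const agent = phone.agent({
  engine: new OpenAIRealtime(), // OPENAI_API_KEY from env
  systemPrompt: "You are a helpful assistant.",
});

await phone.serve({ agent });
```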