Module 1 - Lesson 3e: Streaming Responses
Real-time output with streaming responses.
Published: 1/3/2026
Example 5: Streaming Responses
Streaming allows you to receive responses in real-time as the AI generates them, rather than waiting for the complete response.
What It Does
Enables streaming mode so you can display text as it's generated, creating a more interactive experience (like ChatGPT).
Code Snippet
Create src/stream-prompt.ts:
import OpenAI from "openai";
import dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI();

async function streamPrompt(): Promise<void> {
  try {
    console.log("Testing OpenAI connection...");

    // Enable streaming with stream: true
    const response = await openai.responses.create({
      model: "gpt-5-nano",
      input: [
        {
          role: "system",
          content:
            "You are a helpful travel assistant. Provide detailed travel suggestions based on user preferences and give a guide to the destination and include distance from the airport.",
        },
        {
          role: "user",
          content:
            "Suggest a travel destination within Europe where there is a Christmas market that is famous but is not in a big city. I would like to go somewhere that is less than 2 hours from a major airport and has good public transport links.",
        },
      ],
      stream: true, // Enable streaming
    });

    // Variables to collect streaming data
    let finalResponse = "";
    let usageInfo = null;

    console.log("✅ Stream Prompt Success!");
    console.log("---------Streaming event data start-------");

    // Iterate over streaming events
    for await (const event of response) {
      console.log(event);

      // Capture the final text when done
      if (event.type === "response.output_text.done") {
        finalResponse = event.text;
      }

      // Capture usage info when completed
      if (event.type === "response.completed") {
        usageInfo = event.response.usage;
      }
    }

    console.log("---------Streaming event data end-------");
    console.log("Final Response:", finalResponse);
    console.log("Tokens used:");
    console.dir(usageInfo, { depth: null });
    console.log("✅ Stream Prompt Completed!");
  } catch (error) {
    if (error instanceof OpenAI.APIError) {
      console.log("❌ API Error:", error.status, error.message);
    } else if (error instanceof Error) {
      console.log("❌ Error:", error.message);
    }
  }
}

streamPrompt().catch(console.error);
Run It
pnpm tsx src/stream-prompt.ts
Key Points
- Real-time output: See text as it's generated (a minimal delta-handling sketch follows this list)
- Better UX: Users don't have to wait for the complete response
- Event-based: Process events as they arrive
- Use cases: Chat interfaces, long responses, interactive apps
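For example, instead of logging every raw event as the snippet above does, the delta events can be written straight to the terminal as they arrive. This is a minimal sketch, not part of the lesson's code: the streamToTerminal name, the shorter prompt, and the assumption that each delta event exposes its partial text on a delta property are illustrative.

import OpenAI from "openai";
import dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI();

async function streamToTerminal(): Promise<void> {
  // stream: true makes responses.create return an async-iterable stream of events
  const stream = await openai.responses.create({
    model: "gpt-5-nano",
    input: "Suggest a small European town with a famous Christmas market.",
    stream: true,
  });

  for await (const event of stream) {
    if (event.type === "response.output_text.delta") {
      // Write each partial chunk immediately for a "typing" effect
      process.stdout.write(event.delta);
    }
  }
  process.stdout.write("\n");
}

streamToTerminal().catch(console.error);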
Streaming Event Types
// Text chunk events
event.type === "response.output_text.delta"; // Partial text
event.type === "response.output_text.done";  // Complete text

// Completion events
event.type === "response.completed"; // Full response done
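As a sketch of how the three event types fit together, the loop below routes them in one switch: the delta chunks accumulate into the same string the done event carries, and the completed event supplies the usage info. The collectText helper name is hypothetical, the stream parameter is typed loosely to avoid depending on specific SDK type names, and the delta property on delta events is an assumption consistent with the snippet above.

// Sketch: route the three event types from one streaming response
async function collectText(stream: AsyncIterable<any>): Promise<string> {
  let buffered = "";
  for await (const event of stream) {
    switch (event.type) {
      case "response.output_text.delta":
        buffered += event.delta; // append each partial text chunk
        break;
      case "response.output_text.done":
        buffered = event.text;   // the done event carries the complete text
        break;
      case "response.completed":
        console.dir(event.response.usage, { depth: null }); // token usage
        break;
    }
  }
  return buffered;
}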