Module 1 - Lesson 3c: Temperature Control

Adjusting response creativity with the temperature parameter.

Published: 1/3/2026

Example 3: Temperature Parameter

Temperature controls how deterministic or creative the AI's responses are. It's a number between 0 and 2: lower values produce more predictable output, higher values more varied output.

What It Does

Demonstrates how the temperature parameter affects response creativity:

  • Low (0.1-0.3): More focused, deterministic, consistent
  • Medium (0.5-0.7): Balanced creativity and consistency
  • High (0.9-2.0): More creative, varied, unpredictable
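The bands above can be sketched as a small helper. Note that describeTemperature is a hypothetical function for illustration only, not part of the OpenAI SDK, and its cutoffs round the ranges above into contiguous bands:

```typescript
// Hypothetical helper (not part of the OpenAI SDK): labels a temperature
// with the band described above. The ranges above leave gaps (0.3-0.5,
// 0.7-0.9), so the cutoffs here round them into contiguous bands.
function describeTemperature(t: number): string {
  if (t < 0 || t > 2) {
    throw new RangeError("temperature must be between 0 and 2");
  }
  if (t <= 0.3) return "focused"; // deterministic, consistent
  if (t <= 0.7) return "balanced"; // mix of creativity and consistency
  return "creative"; // varied, unpredictable
}

console.log(describeTemperature(0.2)); // focused
console.log(describeTemperature(2.0)); // creative
```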

Why a Different Model?

Notice that this example uses gpt-4.1-nano instead of gpt-5-nano. This is intentional, to demonstrate an important concept: OpenAI models have different capabilities and characteristics.

Model Differences:

  • gpt-5-nano: Newer model, optimized for reasoning and complex tasks, may have different temperature behavior
  • gpt-4.1-nano: Previous generation, well-tested temperature responses, good for demonstrating temperature effects

Why This Matters:

  1. Temperature Sensitivity: Different models respond differently to temperature changes. Some are more sensitive to temperature variations, making the effect more noticeable.

  2. Model Capabilities: Each OpenAI model has unique strengths:

    • Some excel at creative tasks
    • Others are better for factual, technical content
    • Temperature interacts differently with each model's architecture

  3. Cost & Performance: Models differ in:

    • Pricing (tokens per dollar)
    • Speed (response time)
    • Context windows (max input/output length)

  4. Best Practices: When experimenting with parameters like temperature, it's useful to:

    • Test with different models to see which responds best
    • Check OpenAI's documentation for model-specific recommendations
    • Consider your use case (creative vs factual) when choosing a model

In Production:

Choose your model based on:

  • Task requirements (creative writing vs data extraction)
  • Budget constraints (some models are more expensive)
  • Response time needs (some models are faster)
  • Temperature behavior (test to see which model works best for your use case)
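One way to keep these decisions explicit in production code is a small lookup keyed by use case. The model names and temperature values below are illustrative assumptions for this sketch, not official recommendations; test with your own prompts before settling on them:

```typescript
// Sketch: encode the model/temperature decision per use case so it lives
// in one place. Values here are illustrative, not official guidance.
type UseCase = "creative-writing" | "data-extraction" | "general";

interface ModelConfig {
  model: string;
  temperature: number;
}

function pickConfig(useCase: UseCase): ModelConfig {
  switch (useCase) {
    case "creative-writing":
      return { model: "gpt-4.1-nano", temperature: 0.9 }; // varied output
    case "data-extraction":
      return { model: "gpt-4.1-nano", temperature: 0.1 }; // consistent output
    case "general":
      return { model: "gpt-4.1-nano", temperature: 0.7 }; // balanced
  }
}

console.log(pickConfig("data-extraction"));
```

The returned object can be spread directly into a request, e.g. `openai.responses.create({ ...pickConfig("general"), input })`.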

Code Snippet

Create src/basic-prompt-with-temperature.ts:

import OpenAI from "openai";
import dotenv from "dotenv";

dotenv.config();
const openai = new OpenAI();

async function basicPromptWithTemperature(): Promise<void> {
  try {
    console.log("Testing OpenAI connection...");

    const response = await openai.responses.create({
      model: "gpt-4.1-nano",
      input: [
        {
          role: "system",
          content: "You are a helpful travel assistant.",
        },
        {
          role: "user",
          content: "Suggest a travel destination",
        },
      ],
      temperature: 0.9, // Try: 0.1, 0.5, 0.9, 2.0
    });

    console.log("✅ Basic Prompt Success!");
    console.log("AI Response:", response.output_text);
    console.log("Tokens used:");
    console.dir(response.usage, { depth: null });
  } catch (error) {
    if (error instanceof OpenAI.APIError) {
      console.log("❌ API Error:", error.status, error.message);
    } else if (error instanceof Error) {
      console.log("❌ Error:", error.message);
    }
  }
}

basicPromptWithTemperature().catch(console.error);

Run It

pnpm tsx src/basic-prompt-with-temperature.ts

Experiment

Try different temperature values and compare responses:

temperature: 0.1, // Very focused, factual
temperature: 0.5, // Balanced
temperature: 0.9, // Creative
temperature: 2.0, // Very creative (may be too random)
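To compare the values side by side, one option is a loop that builds one request body per temperature. This is a sketch only: it constructs the bodies but leaves the actual openai.responses.create call (shown in the snippet above) commented out, so it runs without an API key:

```typescript
// Sketch: build one request body per temperature so the same prompt can
// be compared across settings. The API call itself is commented out.
const temperatures = [0.1, 0.5, 0.9, 2.0];

const requests = temperatures.map((temperature) => ({
  model: "gpt-4.1-nano",
  input: [
    { role: "system", content: "You are a helpful travel assistant." },
    { role: "user", content: "Suggest a travel destination" },
  ],
  temperature,
}));

for (const req of requests) {
  console.log(`temperature=${req.temperature}`);
  // const response = await openai.responses.create(req);
  // console.log(response.output_text);
}
```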

Key Points

  • Temperature range: 0.0 to 2.0
  • Default: Usually 1.0 if not specified
  • Lower = more consistent: Good for factual, technical content
  • Higher = more creative: Good for creative writing, brainstorming
  • Model matters: Different OpenAI models respond differently to temperature
  • Test different models: Find which model + temperature combination works best for your use case
  • Use case: Adjust based on your needs (factual vs creative)