Module 2 (Gemini) - Lesson 2c: Temperature Control with Gemini
Control creativity with the temperature parameter in Gemini.
Published: 1/15/2026
Lesson 2c: Temperature Control with Google Gemini
Temperature controls how creative or deterministic Gemini's responses are. Learn how Gemini's temperature range compares to OpenAI and Anthropic.
Key Differences from OpenAI and Anthropic
- OpenAI: Temperature range is 0.0 to 2.0
- Anthropic: Temperature range is 0.0 to 1.0
- Gemini: Temperature range is 0.0 to 2.0 (same as OpenAI)
Code Example
Create src/gemini/basic-prompt-with-temperature.ts:
```typescript
import { GoogleGenAI, ApiError } from "@google/genai";
import dotenv from "dotenv";

// Load environment variables
dotenv.config();

// Create Gemini client
const gemini = new GoogleGenAI({});

// Async function with proper return type
async function basicPromptWithTemperature(): Promise<void> {
  try {
    console.log("Testing Gemini connection...");

    const response = await gemini.models.generateContent({
      model: "gemini-3-flash-preview",
      contents: "Suggest a travel destination",
      config: {
        systemInstruction: "You are a helpful travel assistant.",
        temperature: 0.9, // Higher temperature for more creative responses
      },
    });

    console.log("Prompt with Temperature Success!");
    console.log("Tokens used:");
    console.dir(response.usageMetadata, { depth: null });

    if (!response.text) {
      throw new Error("No content in response");
    }

    console.log("AI Response:", response.text);
  } catch (error) {
    if (error instanceof ApiError) {
      console.log("API Error:", error.status, error.message);
    } else if (error instanceof Error) {
      console.log("Error:", error.message);
    } else {
      console.log("Unknown error occurred");
    }
  }
}

// Run the test
basicPromptWithTemperature().catch((error) => {
  console.error("Error:", error);
});
```
Run It
```bash
pnpm tsx src/gemini/basic-prompt-with-temperature.ts
```
Understanding Temperature
Temperature Scale (Gemini: 0.0 - 2.0)
| Temperature | Behavior | Best For |
|---|---|---|
| 0.0 | Deterministic, same every time | Facts, math, code, analysis |
| 0.3 | Mostly consistent, slight variation | Business writing, documentation |
| 0.5 | Balanced | General conversations |
| 0.7 | More creative, varied | Content creation, storytelling |
| 1.0 | Creative (default for many use cases) | Brainstorming, ideas |
| 1.5 | Highly creative, unpredictable | Experimental content |
| 2.0 | Maximum creativity | Poetry, highly experimental |
Examples by Temperature
Same Prompt, Different Temperatures
Prompt: "Suggest a travel destination"
Temperature 0.1 (Consistent):
I'd suggest visiting Paris, France. It's a world-renowned destination
known for the Eiffel Tower, Louvre Museum, and excellent cuisine.
Best visited in spring or fall.
Temperature 0.5 (Balanced):
Consider visiting Kyoto, Japan! This historic city offers beautiful
temples, traditional gardens, and authentic cultural experiences.
The cherry blossom season in spring is particularly magical.
Temperature 1.2 (Creative):
How about embarking on an adventure to the enchanting island of Madeira,
Portugal? Imagine yourself hiking through mystical laurel forests,
discovering hidden coastal villages, and savoring the world's most
unique fortified wine while watching dramatic Atlantic sunsets!
When to Use Each Temperature
Low Temperature (0.0 - 0.3)
Use Cases:
- Code generation
- Mathematical calculations
- Factual Q&A
- Data extraction
- Legal/medical text
- API responses requiring consistency
Example:
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Write a function to calculate fibonacci numbers",
  config: {
    systemInstruction: "You are a code generator. Provide accurate TypeScript code.",
    temperature: 0.1,
  },
});
```
Medium Temperature (0.4 - 0.7)
Use Cases:
- General conversation
- Customer support
- Educational content
- Business communication
- Product descriptions
Example:
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "How do I reset my password?",
  config: {
    systemInstruction: "You are a friendly customer support agent.",
    temperature: 0.5,
  },
});
```
High Temperature (0.8 - 2.0)
Use Cases:
- Creative writing
- Brainstorming
- Marketing copy
- Story generation
- Poetry
- Unique perspectives
Example:
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Give me 5 unique sci-fi story concepts",
  config: {
    systemInstruction: "You are a creative writer helping with story ideas.",
    temperature: 1.2,
  },
});
```
Experiment: Temperature Comparison
Try this experiment to see temperature in action:
```typescript
import { GoogleGenAI } from "@google/genai";

const gemini = new GoogleGenAI({});

async function compareTemperatures() {
  const prompt = "Suggest a travel destination in Europe";
  const temperatures = [0.1, 0.5, 1.0, 1.5];

  for (const temp of temperatures) {
    console.log(`\n=== Temperature ${temp} ===`);

    const response = await gemini.models.generateContent({
      model: "gemini-3-flash-preview",
      contents: prompt,
      config: {
        temperature: temp,
        maxOutputTokens: 200,
      },
    });

    console.log(response.text);
  }
}

compareTemperatures().catch((error) => {
  console.error("Error:", error);
});
```
Provider Temperature Comparison
| Feature | OpenAI | Anthropic | Gemini |
|---|---|---|---|
| Range | 0.0 - 2.0 | 0.0 - 1.0 | 0.0 - 2.0 |
| Default | ~1.0 | 1.0 | ~1.0 |
| Max creativity | 2.0 | 1.0 | 2.0 |
| Config location | Top-level param | Top-level param | config.temperature |
Note: Temperature values are not directly comparable across providers. Anthropic's maximum of 1.0 already produces highly varied output, so tune the value per provider rather than copying settings between APIs.
Side-by-Side Code Comparison
OpenAI
```typescript
const response = await openai.responses.create({
  model: "gpt-5-nano",
  input: "Suggest a travel destination",
  temperature: 0.9,
});
```
Anthropic
```typescript
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000,
  messages: [{ role: "user", content: "Suggest a travel destination" }],
  temperature: 0.9, // Max is 1.0!
});
```
Gemini
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Suggest a travel destination",
  config: {
    temperature: 0.9, // Can go up to 2.0
  },
});
```
Common Mistakes
1. Using High Temperature for Tasks Requiring Accuracy
Wrong:
```typescript
// With prompt: "What is 234 x 567?"
config: {
  temperature: 1.5,
}
```
Right:
```typescript
// With prompt: "What is 234 x 567?"
config: {
  temperature: 0.0,
}
```
2. Expecting Identical Responses
Even with temperature 0.0, responses may vary slightly due to:
- Model updates
- Internal randomness
- Context differences
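If you need tighter reproducibility than temperature 0.0 alone provides, the Gemini generation config also exposes a `seed` field you can pin alongside it (support may vary by model and SDK version — treat this as a sketch, not a guarantee of identical output):

```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "What is 234 x 567?",
  config: {
    temperature: 0.0, // minimize sampling randomness
    seed: 42,         // fixed seed for more repeatable output (where supported)
  },
});
```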
3. Overusing High Temperature
High temperature doesn't always mean better:
- Can produce nonsensical output
- May miss important details
- Reduces reliability
4. Forgetting Temperature in Config
Wrong:
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "...",
  temperature: 0.5, // Won't work at top level!
});
```
Right:
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "...",
  config: {
    temperature: 0.5, // Must be in config
  },
});
```
Additional Config Options
Gemini offers more than just temperature for controlling output:
```typescript
config: {
  temperature: 0.7,       // Creativity (0.0 - 2.0)
  topP: 0.9,              // Nucleus sampling (0.0 - 1.0)
  topK: 40,               // Top-k sampling
  maxOutputTokens: 1000,  // Maximum response length
  stopSequences: ["END"], // Stop generation at these strings
}
```
Understanding topP and topK
topP (Nucleus Sampling):
- Controls diversity by considering tokens whose cumulative probability exceeds P
- Lower values = more focused, higher values = more diverse
- Often used with temperature
topK:
- Limits sampling to top K most likely tokens
- Lower values = more focused, higher values = more diverse
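To build intuition for topP, the mechanism can be sketched as: sort candidate tokens by probability, then keep the smallest prefix whose cumulative probability reaches P (the "nucleus"); sampling then happens only within that set. This toy function (the names and distribution are illustrative, not part of the SDK) shows which tokens survive a given cutoff:

```typescript
type TokenProb = { token: string; p: number };

// Keep the smallest set of highest-probability tokens whose
// cumulative probability reaches topP (the "nucleus").
function nucleus(candidates: TokenProb[], topP: number): TokenProb[] {
  const sorted = [...candidates].sort((a, b) => b.p - a.p);
  const kept: TokenProb[] = [];
  let cumulative = 0;
  for (const c of sorted) {
    kept.push(c);
    cumulative += c.p;
    if (cumulative >= topP) break; // nucleus is complete
  }
  return kept;
}

const dist: TokenProb[] = [
  { token: "Paris", p: 0.5 },
  { token: "Kyoto", p: 0.3 },
  { token: "Lima", p: 0.15 },
  { token: "Oslo", p: 0.05 },
];

// Low topP keeps only the most likely tokens (more focused)...
console.log(nucleus(dist, 0.7).map((c) => c.token)); // [ 'Paris', 'Kyoto' ]
// ...while a high topP admits rarer tokens too (more diverse).
console.log(nucleus(dist, 0.96).map((c) => c.token)); // all four tokens
```

topK is the same idea with a fixed count instead of a probability mass: only the K most likely tokens are ever considered.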
Recommended Combinations:
```typescript
// Precise, deterministic
config: { temperature: 0.1, topP: 0.1, topK: 1 }

// Balanced
config: { temperature: 0.7, topP: 0.9, topK: 40 }

// Creative
config: { temperature: 1.2, topP: 0.95, topK: 100 }
```
Best Practices
- Start with defaults and adjust based on results
- Use low temperature (0.0-0.3) for deterministic tasks
- Test different temperatures to find optimal setting
- Document your choice in production code
- Consider caching with low temperature for consistency
- Combine with topP/topK for fine-tuned control
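The caching suggestion above can be sketched as a thin wrapper: with temperature near 0.0 the same prompt yields (nearly) the same answer, so repeated prompts can be served from memory instead of re-calling the API. `PromptCache` and `fakeGenerate` are illustrative names, not part of the SDK:

```typescript
// In-memory cache for deterministic (low-temperature) prompts.
class PromptCache {
  private cache = new Map<string, string>();

  async get(
    prompt: string,
    generate: (prompt: string) => Promise<string>, // e.g. wraps gemini.models.generateContent
  ): Promise<string> {
    const hit = this.cache.get(prompt);
    if (hit !== undefined) return hit; // serve repeat prompts from memory

    const result = await generate(prompt);
    this.cache.set(prompt, result);
    return result;
  }
}

// Usage sketch with a stand-in generator:
const promptCache = new PromptCache();
const fakeGenerate = async (p: string) => `answer for: ${p}`;

promptCache.get("What is 2 + 2?", fakeGenerate).then(console.log);
```

In production you would also bound the cache size and include the model name and config in the cache key, since changing either changes the answer.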
Key Takeaways
- Gemini temperature range is 0.0 - 2.0 (same as OpenAI, wider than Anthropic)
- Temperature must be in the `config` object, not at the top level
- Lower temperature = more consistent, predictable
- Higher temperature = more creative, varied
- Choose based on your use case, not "creativity = better"
- Combine with `topP` and `topK` for more control
Next Steps
Learn how to craft detailed, complex prompts for advanced tasks!
Next: Lesson 2d - Extended Prompts
Quick Reference
```typescript
// Low temperature (consistent)
config: { temperature: 0.1 }

// Medium temperature (balanced)
config: { temperature: 0.5 }

// High temperature (creative)
config: { temperature: 1.2 }

// Full control
config: { temperature: 0.7, topP: 0.9, topK: 40, maxOutputTokens: 1000 }
```