Module 2 (Gemini) - Lesson 2d: Extended Prompts with Gemini
Provide detailed context and roles using structured contents.
Published: 1/15/2026
Lesson 2d: Extended Prompts with Google Gemini
Learn how to craft detailed, comprehensive prompts that leverage Gemini's large context window and excellent reasoning capabilities.
Gemini's Advantage
Google Gemini supports up to 1 million tokens of context (Gemini 1.5 Pro), the largest context window of the major providers compared below, which makes it well suited to long, detailed prompts that would overwhelm other models.
| Model | Context Window |
|---|---|
| OpenAI GPT-4o | 128,000 tokens |
| Anthropic Claude | 200,000 tokens |
| Gemini 1.5 Pro | 1,000,000 tokens |
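If you want to see how much of that window a particular prompt will consume before sending it, the SDK exposes a token-counting call. A minimal sketch; the model name and prompt text are just placeholders:

```typescript
import { GoogleGenAI } from "@google/genai";

const gemini = new GoogleGenAI({});

// Count tokens for a prompt before sending it.
// The model name and prompt below are placeholders - swap in your own values.
const { totalTokens } = await gemini.models.countTokens({
  model: "gemini-3-flash-preview",
  contents:
    "Suggest a travel destination within Europe with a famous Christmas market...",
});

console.log(`Prompt size: ${totalTokens} tokens`);
```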
Code Example
Create src/gemini/extended-prompt.ts:
```typescript
import { GoogleGenAI, ApiError } from "@google/genai";
import dotenv from "dotenv";

// Load environment variables
dotenv.config();

// Create Gemini client
const gemini = new GoogleGenAI({});

// Async function with proper return type
async function extendedPrompt(): Promise<void> {
  try {
    console.log("Testing Gemini connection...");

    const response = await gemini.models.generateContent({
      model: "gemini-3-flash-preview",
      contents: {
        role: "user",
        parts: [
          {
            text: "Suggest a travel destination within Europe where there is a Christmas market that is famous but is not in a big city. I would like to go somewhere that is less than 2 hours from a major airport and has good public transport links.",
          },
        ],
      },
      config: {
        systemInstruction:
          "You are a helpful travel assistant. Provide detailed travel suggestions based on user preferences and include distance from the airport.",
      },
    });

    console.log("Extended Prompt Success!");
    console.log("Tokens used:");
    console.dir(response.usageMetadata, { depth: null });

    if (!response.text) {
      throw new Error("No content in response");
    }

    console.log("AI Response:", response.text);
  } catch (error) {
    if (error instanceof ApiError) {
      console.log("API Error:", error.status, error.message);
    } else if (error instanceof Error) {
      console.log("Error:", error.message);
    } else {
      console.log("Unknown error occurred");
    }
  }
}

// Run the test
extendedPrompt().catch((error) => {
  console.error("Error:", error);
});
```
Run It
pnpm tsx src/gemini/extended-prompt.ts
Crafting Extended Prompts
Basic vs Extended Prompts
Basic Prompt:
Suggest a travel destination
Extended Prompt:
Suggest a travel destination within Europe where there is a Christmas
market that is famous but is not in a big city. I would like to go
somewhere that is less than 2 hours from a major airport and has good
public transport links.
The extended version provides:
- Geographic constraints (Europe)
- Specific attraction (Christmas market)
- Size preference (not a big city)
- Accessibility requirements (airport distance, transport)
Structured Contents Format
Gemini offers flexibility in how you structure your prompts:
Simple String (Quick)
contents: "Suggest a travel destination"
Object with Role and Parts (Recommended for complex prompts)
```typescript
contents: {
  role: "user",
  parts: [
    { text: "Suggest a travel destination in Europe" },
    { text: "Include budget estimates and best time to visit" }
  ]
}
```
Array of Messages (Multi-turn conversations)
```typescript
contents: [
  { role: "user", parts: [{ text: "Suggest a destination in France" }] },
  { role: "model", parts: [{ text: "I recommend Lyon! It's a beautiful city..." }] },
  { role: "user", parts: [{ text: "What about somewhere more rural?" }] }
]
```
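If you would rather not maintain that history array yourself, the SDK also provides a chat helper that tracks turns for you. A rough sketch, assuming the same `gemini` client created earlier:

```typescript
// Sketch: let the SDK track conversation history instead of building the array by hand.
const chat = gemini.chats.create({ model: "gemini-3-flash-preview" });

const first = await chat.sendMessage({ message: "Suggest a destination in France" });
console.log(first.text);

// The previous user/model turns are sent along automatically.
const followUp = await chat.sendMessage({ message: "What about somewhere more rural?" });
console.log(followUp.text);
```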
Extended Prompt Patterns
1. Context + Constraints + Requirements
contents: { role: "user", parts: [{ text: `I'm planning a family vacation with two children (ages 6 and 9) for next summer. We have a budget of $5000 for a week-long trip. Requirements: - Family-friendly activities - Safe destination - English widely spoken or easy to navigate - Good weather in July/August - Not too expensive - Within 10 hours flight from New York Please suggest 3 destinations with brief explanations for each.` }] }
2. Multi-Step Instructions
config: { systemInstruction: "You are a code review expert specializing in TypeScript." }, contents: { role: "user", parts: [{ text: `Review this TypeScript code and provide: 1. Security vulnerabilities (if any) 2. Performance issues 3. Best practice violations 4. Suggested improvements with code examples 5. Overall quality rating (1-10) Code: \`\`\`typescript ${codeToReview} \`\`\` Format your response with clear sections for each point.` }] }
3. Role + Context + Task + Format
config: { systemInstruction: "You are an experienced data analyst with expertise in e-commerce." }, contents: { role: "user", parts: [{ text: `Context: Our online store has seen a 30% drop in conversion rates over the past 3 months despite increased traffic. Data summary: - Traffic: +40% increase - Bounce rate: +15% increase - Average session time: -2 minutes decrease - Cart abandonment: +25% increase Task: Analyze this situation and provide: 1. Likely root causes (ranked by probability) 2. Recommended diagnostic tests 3. Potential solutions for the top 3 causes 4. Implementation priority Format: Use bullet points and include specific metrics to track.` }] }
Leveraging Gemini's Strengths
1. Long Context Analysis
Gemini's massive context window makes it ideal for document analysis:
```typescript
const document = `[Insert entire 100-page document here]`;

const response = await gemini.models.generateContent({
  model: "gemini-3-pro-preview", // Use Pro for complex analysis
  contents: {
    role: "user",
    parts: [{
      text: `Analyze this contract and:
1. Summarize key terms
2. Identify potential risks
3. Flag unusual clauses
4. Provide recommendations

Contract:
${document}`
    }]
  },
  config: {
    maxOutputTokens: 4000,
  }
});
```
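In practice you would load the document from disk rather than paste it inline. A short sketch, assuming a hypothetical local file path:

```typescript
import { readFile } from "node:fs/promises";
import { GoogleGenAI } from "@google/genai";

const gemini = new GoogleGenAI({});

// Hypothetical path - replace with your own long contract or report.
const document = await readFile("./docs/contract.txt", "utf-8");

const response = await gemini.models.generateContent({
  model: "gemini-3-pro-preview",
  contents: {
    role: "user",
    parts: [{ text: `Summarize the key terms of this contract:\n\n${document}` }],
  },
  config: { maxOutputTokens: 4000 },
});

console.log(response.text);
```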
2. Multi-Turn Reasoning
contents: { role: "user", parts: [{ text: `Let's solve this problem step by step: Problem: A train leaves Chicago at 9am traveling at 60mph toward New York (800 miles away). Another train leaves New York at 10am traveling at 80mph toward Chicago. Questions: 1. When will they meet? 2. How far from Chicago will they meet? 3. Show your work for each step.` }] }
3. Structured Output Requests
contents: { role: "user", parts: [{ text: `Analyze the top 5 programming languages in 2026 and provide a comparison in this exact format: For each language, include: - Name - Primary use cases (3-5 bullet points) - Strengths (3 points) - Weaknesses (3 points) - Market trend (growing/stable/declining) - Recommended for beginners? (yes/no with brief reason) Format as a markdown table for easy reading.` }] }
Best Practices for Extended Prompts
1. Structure Your Prompt
Use clear sections:
Context: [Background information]
Requirements: [What you need]
Constraints: [Limitations]
Task: [What to do]
Format: [How to respond]
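If you build prompts like this often, a tiny helper keeps the sections consistent. The `buildPrompt` function below is a hypothetical convenience, not part of any SDK:

```typescript
// Hypothetical helper: assemble the sections above into a single prompt string.
interface PromptSections {
  context: string;
  requirements: string;
  constraints: string;
  task: string;
  format: string;
}

function buildPrompt(sections: PromptSections): string {
  return [
    `Context: ${sections.context}`,
    `Requirements: ${sections.requirements}`,
    `Constraints: ${sections.constraints}`,
    `Task: ${sections.task}`,
    `Format: ${sections.format}`,
  ].join("\n\n");
}

// Usage: pass the result as the text part of your contents.
const prompt = buildPrompt({
  context: "Planning a week-long family trip next summer.",
  requirements: "Family-friendly, safe, good weather in July.",
  constraints: "Budget of $5000, within 10 hours of New York.",
  task: "Suggest 3 destinations with brief explanations.",
  format: "A short markdown list.",
});
```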
2. Be Specific
Vague:
Tell me about travel options
Specific:
List 3 budget-friendly beach destinations in Southeast Asia
accessible from Bangkok, suitable for solo travelers in March,
with estimated costs per day.
3. Break Down Complex Tasks
contents: { role: "user", parts: [{ text: `Task 1: First, list all European cities with famous Christmas markets Task 2: Filter to only cities with population under 500,000 Task 3: Check which have airports within 2 hours Task 4: Rank by public transport quality Task 5: Recommend top 3 with explanation` }] }
4. Provide Examples
contents: { role: "user", parts: [{ text: `Generate product descriptions in this style: Example: "The Aurora Backpack: Where functionality meets adventure. Crafted from weather-resistant canvas with ergonomic straps, this 30L companion transforms your daily commute into an expedition. Features: Laptop sleeve, water bottle pockets, hidden security compartment. $89.99" Now create descriptions for: 1. Wireless headphones 2. Smart water bottle 3. Portable charger` }] }
Comparison: Extended Prompts Across Providers
OpenAI
```typescript
const response = await openai.responses.create({
  model: "gpt-5-nano",
  input: `Context: ...
Requirements: ...
Task: ...`
});
```
Anthropic
```typescript
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 2000,
  system: "You are a travel expert.",
  messages: [{
    role: "user",
    content: `Context: ...
Requirements: ...
Task: ...`
  }]
});
```
Gemini
const response = await gemini.models.generateContent({ model: "gemini-3-flash-preview", config: { systemInstruction: "You are a travel expert.", maxOutputTokens: 2000, }, contents: { role: "user", parts: [{ text: `Context: ... Requirements: ... Task: ...` }] } });
Common Patterns
Research & Analysis
config: { systemInstruction: "You are a market research analyst." }, contents: { role: "user", parts: [{ text: `Research the electric vehicle market and provide: 1. Current market size and growth rate 2. Top 5 manufacturers by market share 3. Key technological trends 4. Regulatory environment summary 5. 5-year outlook Include specific numbers and cite reasoning.` }] }
Planning & Strategy
config: { systemInstruction: "You are a project management consultant." }, contents: { role: "user", parts: [{ text: `Create a 3-month launch plan for a new SaaS product: Product: AI-powered customer support chatbot Target: Small businesses (10-50 employees) Budget: $50,000 Team: 2 developers, 1 designer, 1 marketer Provide: - Week-by-week timeline - Resource allocation - Key milestones - Risk factors - Success metrics` }] }
Code & Technical Tasks
config: { systemInstruction: "You are a senior full-stack developer." }, contents: { role: "user", parts: [{ text: `Design a scalable REST API for a social media platform: Requirements: - User authentication and profiles - Post creation/editing/deletion - Comments and likes - Following/followers - Feed generation - Real-time notifications Provide: 1. API endpoint structure 2. Data models 3. Authentication strategy 4. Caching approach 5. Scaling considerations` }] }
Key Takeaways
- Gemini has the largest context window (1M tokens for Pro)
- Use structured `contents` with `role` and `parts` for complex prompts
- Structure prompts with clear sections (Context, Requirements, Task, Format)
- Break complex tasks into numbered steps
- Specify output format explicitly
- Use examples to guide response style
Next Steps
Learn how to stream responses in real-time for better UX!
Next: Lesson 2e - Streaming Responses
Quick Reference
```typescript
// Extended prompt structure
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  config: {
    systemInstruction: "[Detailed role and expertise]",
    maxOutputTokens: 2000,
  },
  contents: {
    role: "user",
    parts: [{
      text: `
Context: [Background]
Requirements: [What you need]
Constraints: [Limitations]
Task: [Specific request]
Format: [Output structure]
`
    }]
  }
});
```
Common Pitfalls
- Using simple string when structured contents would be clearer
- Not specifying output format explicitly
- Overloading a single prompt instead of breaking into steps
- Forgetting to set `maxOutputTokens` for long responses
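One way to catch that last pitfall is to check the finish reason on the response, which reports MAX_TOKENS when the output limit cut the answer short. A sketch, assuming the same `gemini` client as above:

```typescript
// Sketch: detect a response truncated by a too-small maxOutputTokens.
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Write a detailed 10-step launch plan for a new SaaS product.",
  config: { maxOutputTokens: 50 }, // deliberately small to show truncation
});

// The finish reason is "MAX_TOKENS" when the output limit was hit.
const finishReason = response.candidates?.[0]?.finishReason;
if (String(finishReason) === "MAX_TOKENS") {
  console.warn("Response was cut off - raise maxOutputTokens and retry.");
}
```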