Module 2 (Gemini) - Lesson 2a: Basic Prompt with Gemini
Send a simple text prompt with Google Gemini and read the response.
Published: 1/15/2026
Lesson 2a: Basic Prompt with Google Gemini
This is the simplest form of prompt with Gemini - just a text input with no system context. You'll learn the fundamental differences between the OpenAI, Anthropic, and Google API structures.
What It Does
Sends a simple text prompt to Gemini and receives a response. This demonstrates Google's basic API structure and response format.
Key Differences from OpenAI and Anthropic
- **max_tokens is optional**: unlike Anthropic (where it's required), Gemini treats the output-token limit as optional, similar to OpenAI.
- **Response structure**: the response exposes a `text` property directly, simpler than both OpenAI and Anthropic.
- **Contents parameter**: uses `contents` instead of `messages` (OpenAI/Anthropic) or `input` (OpenAI Responses API).
- **Client initialization**: uses the `GoogleGenAI` class from `@google/genai`.
Code Example
Create src/gemini/basic-prompt.ts:
```typescript
import { GoogleGenAI, ApiError } from "@google/genai";
import dotenv from "dotenv";

// Load environment variables
dotenv.config();

// Create Gemini client - reads GOOGLE_GENAI_API_KEY from environment
const gemini = new GoogleGenAI({});

// Async function with proper return type
async function basicPrompt(): Promise<void> {
  try {
    console.log("Testing Gemini connection...");

    // Make API call - note the simpler structure
    const response = await gemini.models.generateContent({
      model: "gemini-3-flash-preview",
      contents: "Suggest a travel destination",
    });

    console.log("Basic Prompt Success!");

    // Show response usage
    console.log("Tokens used:");
    console.dir(response.usageMetadata, { depth: null });

    // Check if we got a response
    if (!response.text) {
      throw new Error("No content in response");
    }

    // Access response directly - much simpler than OpenAI/Anthropic!
    console.log("AI Response:", response.text);
  } catch (error) {
    // Proper error handling with type guards
    if (error instanceof ApiError) {
      console.log("API Error:", error.status, error.message);
    } else if (error instanceof Error) {
      console.log("Error:", error.message);
    } else {
      console.log("Unknown error occurred");
    }
  }
}

// Run the test
basicPrompt().catch((error) => {
  console.error("Error:", error);
});
```
Run It
pnpm tsx src/gemini/basic-prompt.ts
Expected Output
Testing Gemini connection...
Basic Prompt Success!
Tokens used:
{
promptTokenCount: 5,
candidatesTokenCount: 150,
totalTokenCount: 155
}
AI Response:
I'd recommend visiting Kyoto, Japan! This historic city offers...
[rest of response]
Side-by-Side Comparison
OpenAI Version
```typescript
const response = await openai.responses.create({
  model: "gpt-5-nano",
  input: "Suggest a travel destination"
  // max_tokens is optional
});

// Access response
const text = response.output_text;
```
Anthropic Version
```typescript
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000, // Required!
  messages: [{ role: "user", content: "Suggest a travel destination" }]
});

// Access response (array of content blocks)
const textBlocks = response.content.filter(block => block.type === "text");
const text = textBlocks.map(block => block.text).join("\n");
```
Gemini Version
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Suggest a travel destination"
  // max_tokens optional; set maxOutputTokens in config if needed
});

// Access response - simplest of all three!
const text = response.text;
```
Key Concepts
1. Simple Contents Parameter
Gemini accepts contents as either a simple string or structured content:
```typescript
// Simple string (easiest)
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Hello, how are you?"
});

// Structured content (more control)
const structuredResponse = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: { role: "user", parts: [{ text: "Hello, how are you?" }] }
});
```
2. Direct Text Access
Unlike OpenAI and Anthropic, Gemini provides direct access to the response text:
```typescript
// Gemini - direct access
const geminiText = response.text;

// Compare to Anthropic - must filter content blocks
const anthropicText = response.content.filter(b => b.type === "text")[0].text;

// Compare to OpenAI - access via output_text
const openaiText = response.output_text;
```
3. Response Structure
Gemini responses use a candidate structure:
```typescript
// Response structure
{
  text: "Your response here", // Convenience accessor
  candidates: [{
    content: {
      parts: [{ text: "Your response here" }],
      role: "model"
    },
    finishReason: "STOP"
  }],
  usageMetadata: {
    promptTokenCount: 5,
    candidatesTokenCount: 150,
    totalTokenCount: 155
  }
}
```
Why candidates? Gemini can return multiple candidate responses (though typically just one).
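To see the candidate structure in action, here's a small helper that collects the text of every candidate instead of relying on the `text` convenience accessor. This is a sketch: the interfaces below mirror only the fields from the response structure shown above, not the SDK's full types.

```typescript
// Minimal shapes mirroring the candidate structure shown above (not the
// SDK's full types - just the fields we need here).
interface Part { text?: string }
interface Candidate { content?: { parts?: Part[]; role?: string }; finishReason?: string }
interface GenerateContentLikeResponse { candidates?: Candidate[] }

// Collect the text of every candidate, joining the parts within each one.
function candidateTexts(response: GenerateContentLikeResponse): string[] {
  return (response.candidates ?? []).map((candidate) =>
    (candidate.content?.parts ?? [])
      .map((part) => part.text ?? "")
      .join("")
  );
}

// Example with a mock response shaped like the structure above
const mockResponse = {
  candidates: [
    {
      content: { parts: [{ text: "Visit " }, { text: "Kyoto!" }], role: "model" },
      finishReason: "STOP",
    },
  ],
};
console.log(candidateTexts(mockResponse)); // → [ "Visit Kyoto!" ]
```

With a single candidate (the usual case), `candidateTexts(response)[0]` matches what `response.text` returns.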
Error Handling
Gemini provides detailed error types via ApiError:
```typescript
import { ApiError } from "@google/genai";

try {
  const response = await gemini.models.generateContent({...});
} catch (error) {
  if (error instanceof ApiError) {
    // API-specific errors
    console.log("Status:", error.status); // 400, 401, 429, etc.
    console.log("Message:", error.message);
  } else if (error instanceof Error) {
    // Generic errors
    console.log("Error:", error.message);
  }
}
```
Common Errors
| Error Code | Meaning | Solution |
|---|---|---|
| 401 | Invalid API key | Check .env for GOOGLE_GENAI_API_KEY |
| 400 | Invalid request | Check model name and parameters |
| 429 | Rate limit / quota exceeded | Slow down or check quota |
| 500 | Server error | Retry after delay |
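The 429 and 500 rows both say "retry" - here's one way to sketch that as a reusable helper. This is an illustration, not part of the SDK: it inspects a `status` property on the thrown error (matching the `ApiError` examples above), so adjust the check if your errors are shaped differently.

```typescript
// Generic retry helper for transient failures (429 rate limits, 5xx server
// errors), with exponential backoff. Assumes thrown errors may carry a
// numeric `status` property, like ApiError in the examples above.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const status = (error as { status?: number }).status;
      const retryable = status === 429 || (status !== undefined && status >= 500);
      if (!retryable || attempt >= maxAttempts) throw error;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage would look like `await withRetry(() => gemini.models.generateContent({ model: "gemini-3-flash-preview", contents: "Hello!" }))` - 400 and 401 errors still fail immediately, since retrying won't fix a bad key or request.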
Token Usage
Understanding token consumption with Gemini:
```typescript
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Suggest a travel destination"
});

console.log("Prompt tokens:", response.usageMetadata?.promptTokenCount);
console.log("Output tokens:", response.usageMetadata?.candidatesTokenCount);
console.log("Total:", response.usageMetadata?.totalTokenCount);
```
Cost Calculation (Gemini 2.0 Flash):
Input: 5 tokens x $0.10/1M = $0.0000005
Output: 150 tokens x $0.40/1M = $0.00006
Total: ~$0.00006 (extremely affordable!)
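The arithmetic above is easy to wrap in a small estimator. A sketch, using the per-million-token rates quoted above as defaults (Gemini 2.0 Flash rates - verify current pricing before relying on them):

```typescript
// Estimate request cost in USD from usageMetadata, using per-million-token
// rates. Defaults match the Gemini 2.0 Flash rates quoted above; pass
// current rates for other models.
interface UsageCounts {
  promptTokenCount?: number;
  candidatesTokenCount?: number;
}

function estimateCostUSD(
  usage: UsageCounts,
  inputPerMillion = 0.10,
  outputPerMillion = 0.40
): number {
  const inputCost = ((usage.promptTokenCount ?? 0) / 1_000_000) * inputPerMillion;
  const outputCost = ((usage.candidatesTokenCount ?? 0) / 1_000_000) * outputPerMillion;
  return inputCost + outputCost;
}

// Matches the worked example above: 5 input + 150 output tokens
console.log(estimateCostUSD({ promptTokenCount: 5, candidatesTokenCount: 150 }));
// ≈ 0.0000605
```

In real code you'd call it as `estimateCostUSD(response.usageMetadata ?? {})`.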
Practice Exercises
Try modifying the code:
1. Change the prompt:

```typescript
contents: "Write a haiku about coding"
```

2. Use structured contents:

```typescript
contents: { role: "user", parts: [{ text: "Tell me a joke about programming" }] }
```

3. Try different models:

```typescript
model: "gemini-3-pro-preview"   // More capable
model: "gemini-3-flash-preview" // Faster, cheaper
```
Comparison Table
| Feature | OpenAI | Anthropic | Gemini |
|---|---|---|---|
| API Method | responses.create() | messages.create() | models.generateContent() |
| max_tokens | Optional | Required | Optional (maxOutputTokens) |
| Input param | input | messages | contents |
| Response Path | output_text | content[0].text | text |
| Error Type | OpenAI.APIError | Anthropic.APIError | ApiError |
| Models | gpt-5-nano, gpt-4o | claude-haiku, sonnet, opus | gemini-flash, gemini-pro |
Key Takeaways
- Gemini uses the `contents` parameter (can be a string or structured content)
- Response text is accessed directly via `response.text`
- `max_tokens` is optional (set via `config.maxOutputTokens` if needed)
- Error handling uses `ApiError` from `@google/genai`
- Token usage is available via `usageMetadata`
- Gemini has the simplest response structure of the three providers
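Since `config.maxOutputTokens` hasn't appeared in a full example yet, here's a sketch of what that request would look like. Treat the exact option names as something to verify against the `@google/genai` docs for your SDK version:

```typescript
// Request shape with an output-token cap. In @google/genai, generation
// options go under `config` - note the field is maxOutputTokens, not
// max_tokens as in the other providers.
const request = {
  model: "gemini-3-flash-preview",
  contents: "Suggest a travel destination",
  config: {
    maxOutputTokens: 256, // cap the response length
    temperature: 0.7,     // optional: control randomness
  },
};

// Would be passed as: await gemini.models.generateContent(request);
console.log(request.config.maxOutputTokens); // → 256
```

If the model hits the cap mid-sentence, the candidate's `finishReason` reflects the truncation rather than a normal stop.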
Next Steps
Now that you understand basic prompts, let's add context with system prompts!
Next: Lesson 2b - System Prompt
Quick Reference
Minimal Working Example
```typescript
import { GoogleGenAI } from "@google/genai";

const gemini = new GoogleGenAI({});

const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Hello!"
});

console.log(response.text);
```
Common Pitfalls
- Forgetting to import `ApiError` for error handling
- Using `messages` instead of `contents`
- Trying to access `response.content` instead of `response.text`
- Not checking whether `response.text` exists before using it
Completed Lesson 2a! You now know how to make basic API calls to Google Gemini.