Module 2 (Gemini) - Lesson 2a: Basic Prompt with Gemini

Send a simple text prompt with Google Gemini and read the response.

Published: 1/15/2026

Lesson 2a: Basic Prompt with Google Gemini

This is the simplest form of prompt with Gemini - just a text input with no system context. You'll learn the fundamental differences between the OpenAI, Anthropic, and Google API structures.

What It Does

Sends a simple text prompt to Gemini and receives a response. This demonstrates Google's basic API structure and response format.

Key Differences from OpenAI and Anthropic

  1. max_tokens is Optional: Required by Anthropic; optional for both OpenAI and Gemini
  2. Response Structure: Response uses text property directly, simpler than both OpenAI and Anthropic
  3. Contents Parameter: Uses contents instead of messages (OpenAI/Anthropic) or input (OpenAI Responses API)
  4. Client Initialization: Uses GoogleGenAI class from @google/genai
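
If you do want to cap output length (point 1), the cap goes through the request's `config` object rather than a top-level parameter. A minimal sketch of the request shape, assuming the `config.maxOutputTokens` field name from @google/genai:

```typescript
// Request shape with an optional output-token cap.
// maxOutputTokens lives inside `config`, not at the top level.
const request = {
  model: "gemini-3-flash-preview",
  contents: "Suggest a travel destination",
  config: {
    maxOutputTokens: 200, // optional; omit it to let the model decide
  },
};

// The whole object is passed to gemini.models.generateContent(request).
console.log(request.config.maxOutputTokens);
```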

Code Example

Create src/gemini/basic-prompt.ts:

import { GoogleGenAI, ApiError } from "@google/genai";
import dotenv from "dotenv";

// Load environment variables
dotenv.config();

// Create Gemini client - reads GOOGLE_GENAI_API_KEY from environment
const gemini = new GoogleGenAI({});

// Async function with proper return type
async function basicPrompt(): Promise<void> {
  try {
    console.log("Testing Gemini connection...");

    // Make API call - note the simpler structure
    const response = await gemini.models.generateContent({
      model: "gemini-3-flash-preview",
      contents: "Suggest a travel destination",
    });

    console.log("Basic Prompt Success!");
    // Show response usage
    console.log("Tokens used:");
    console.dir(response.usageMetadata, { depth: null });

    // Check if we got a response
    if (!response.text) {
      throw new Error("No content in response");
    }

    // Access response directly - much simpler than OpenAI/Anthropic!
    console.log("AI Response:", response.text);
  } catch (error) {
    // Proper error handling with type guards
    if (error instanceof ApiError) {
      console.log("API Error:", error.status, error.message);
    } else if (error instanceof Error) {
      console.log("Error:", error.message);
    } else {
      console.log("Unknown error occurred");
    }
  }
}

// Run the test
basicPrompt().catch((error) => {
  console.error("Error:", error);
});

Run It

pnpm tsx src/gemini/basic-prompt.ts

Expected Output

Testing Gemini connection...
Basic Prompt Success!
Tokens used:
{
  promptTokenCount: 5,
  candidatesTokenCount: 150,
  totalTokenCount: 155
}
AI Response:
I'd recommend visiting Kyoto, Japan! This historic city offers...
[rest of response]

Side-by-Side Comparison

OpenAI Version

const response = await openai.responses.create({
  model: "gpt-5-nano",
  input: "Suggest a travel destination"
  // max_output_tokens is optional
});

// Access response
const text = response.output_text;

Anthropic Version

const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000,  // Required!
  messages: [{ role: "user", content: "Suggest a travel destination" }]
});

// Access response (array of content blocks)
const textBlocks = response.content.filter(block => block.type === "text");
const text = textBlocks.map(block => block.text).join("\n");

Gemini Version

const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Suggest a travel destination"
  // max_tokens optional, uses maxOutputTokens in config
});

// Access response - simplest of all three!
const text = response.text;

Key Concepts

1. Simple Contents Parameter

Gemini accepts contents as either a simple string or structured content:

// Simple string (easiest)
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Hello, how are you?"
});

// Structured content (more control)
const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: {
    role: "user",
    parts: [{ text: "Hello, how are you?" }]
  }
});
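
Contents can also be an array of turns, which is how multi-turn conversations are represented. A sketch using the same role/parts shape as the structured form above (built as a plain array here, so it runs without an API call):

```typescript
// Multi-turn contents: an array of role/parts objects.
// Roles alternate between "user" and "model".
const contents = [
  { role: "user", parts: [{ text: "Suggest a travel destination" }] },
  { role: "model", parts: [{ text: "How about Kyoto, Japan?" }] },
  { role: "user", parts: [{ text: "What's the best season to visit?" }] },
];

// Passed the same way: gemini.models.generateContent({ model, contents })
console.log(contents.length, "turns");
```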

2. Direct Text Access

Unlike OpenAI and Anthropic, Gemini provides direct access to the response text:

// Gemini - direct access
const text = response.text;

// Compare to Anthropic - must filter content blocks
const text = response.content.filter(b => b.type === "text")[0].text;

// Compare to OpenAI - access via output_text
const text = response.output_text;
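
Since `response.text` can be undefined (for example, when the request was blocked or produced no text parts), a small helper keeps the access defensive. This sketch uses a hand-built mock object and a hypothetical `requireText` name; in real code the argument would come from generateContent:

```typescript
// Minimal shape of the part of the response we care about.
interface TextResponse {
  text?: string;
}

// Return the response text, or throw if the model returned none.
function requireText(response: TextResponse): string {
  if (!response.text) {
    throw new Error("No content in response");
  }
  return response.text;
}

// Mock response stands in for a real API result here.
console.log(requireText({ text: "Kyoto, Japan" })); // "Kyoto, Japan"
```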

3. Response Structure

Gemini responses use a candidate structure:

// Response structure
{
  text: "Your response here",  // Convenience accessor
  candidates: [{
    content: {
      parts: [{ text: "Your response here" }],
      role: "model"
    },
    finishReason: "STOP"
  }],
  usageMetadata: {
    promptTokenCount: 5,
    candidatesTokenCount: 150,
    totalTokenCount: 155
  }
}

Why candidates? Gemini can return multiple candidate responses (though typically just one).
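
The `text` accessor is essentially a shortcut over this structure. Reading the first candidate by hand looks like this (using a hand-built object in the shape shown above, so it runs without an API call):

```typescript
// The same response shape as above, built by hand for illustration.
const response = {
  candidates: [
    {
      content: {
        parts: [{ text: "Your response here" }],
        role: "model",
      },
      finishReason: "STOP",
    },
  ],
};

// Roughly what response.text does: join the text parts of the first candidate.
const text = response.candidates[0].content.parts
  .map((part) => part.text)
  .join("");

console.log(text); // "Your response here"
```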


Error Handling

Gemini provides detailed error types via ApiError:

import { ApiError } from "@google/genai";

try {
  const response = await gemini.models.generateContent({...});
} catch (error) {
  if (error instanceof ApiError) {
    // API-specific errors
    console.log("Status:", error.status);  // 400, 401, 429, etc.
    console.log("Message:", error.message);
  } else if (error instanceof Error) {
    // Generic errors
    console.log("Error:", error.message);
  }
}

Common Errors

| Error Code | Meaning                     | Solution                            |
|------------|-----------------------------|-------------------------------------|
| 401        | Invalid API key             | Check .env for GOOGLE_GENAI_API_KEY |
| 400        | Invalid request             | Check model name and parameters     |
| 429        | Rate limit / quota exceeded | Slow down or check quota            |
| 500        | Server error                | Retry after delay                   |
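
For the transient errors (429 and 500), a retry with exponential backoff is usually enough. A generic sketch, not tied to the SDK; the `shouldRetry` predicate would check `error.status` on an ApiError in real code:

```typescript
// Retry an async operation with exponential backoff.
// shouldRetry decides which errors are transient (e.g. status 429 or 500).
async function withRetry<T>(
  operation: () => Promise<T>,
  shouldRetry: (error: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt >= maxAttempts || !shouldRetry(error)) throw error;
      // Wait 500ms, then 1000ms, then 2000ms, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

With the real client you would wrap the call, e.g. `withRetry(() => gemini.models.generateContent({...}), (e) => e instanceof ApiError && [429, 500].includes(e.status))`.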

Token Usage

Understanding token consumption with Gemini:

const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Suggest a travel destination"
});

console.log("Prompt tokens:", response.usageMetadata?.promptTokenCount);
console.log("Output tokens:", response.usageMetadata?.candidatesTokenCount);
console.log("Total:", response.usageMetadata?.totalTokenCount);

Cost Calculation (illustrative Gemini Flash rates; check current pricing):

Input: 5 tokens x $0.10/1M = $0.0000005
Output: 150 tokens x $0.40/1M = $0.00006
Total: ~$0.00006 (extremely affordable!)
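
The arithmetic above generalizes to a tiny helper. Rates are per million tokens; the figures below are the example rates from this lesson, not authoritative pricing:

```typescript
// Estimate request cost in USD from token counts and per-million-token rates.
function estimateCost(
  promptTokens: number,
  outputTokens: number,
  inputRatePerM: number,
  outputRatePerM: number
): number {
  return (
    (promptTokens / 1_000_000) * inputRatePerM +
    (outputTokens / 1_000_000) * outputRatePerM
  );
}

// Example from above: 5 prompt tokens, 150 output tokens.
console.log(estimateCost(5, 150, 0.1, 0.4)); // ~0.0000605
```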

Practice Exercises

Try modifying the code:

  1. Change the Prompt

    contents: "Write a haiku about coding"
    
  2. Use Structured Contents

    contents: {
      role: "user",
      parts: [{ text: "Tell me a joke about programming" }]
    }
    
  3. Try Different Models

    model: "gemini-3-pro-preview"   // More capable
    model: "gemini-3-flash-preview" // Faster, cheaper
    

Comparison Table

| Feature       | OpenAI             | Anthropic                  | Gemini                     |
|---------------|--------------------|----------------------------|----------------------------|
| API Method    | responses.create() | messages.create()          | models.generateContent()   |
| max_tokens    | Optional           | Required                   | Optional (maxOutputTokens) |
| Input param   | input              | messages                   | contents                   |
| Response Path | output_text        | content[0].text            | text                       |
| Error Type    | OpenAI.APIError    | Anthropic.APIError         | ApiError                   |
| Models        | gpt-5-nano, gpt-4o | claude-haiku, sonnet, opus | gemini-flash, gemini-pro   |

Key Takeaways

  • Gemini uses contents parameter (can be string or structured)
  • Response text is accessed directly via response.text
  • max_tokens is optional (set via config.maxOutputTokens if needed)
  • Error handling uses ApiError from @google/genai
  • Token usage available via usageMetadata
  • Simplest response structure of the three providers

Next Steps

Now that you understand basic prompts, let's add context with system prompts!

Next: Lesson 2b - System Prompt


Quick Reference

Minimal Working Example

import { GoogleGenAI } from "@google/genai";

const gemini = new GoogleGenAI({});

const response = await gemini.models.generateContent({
  model: "gemini-3-flash-preview",
  contents: "Hello!"
});

console.log(response.text);

Common Pitfalls

  1. Forgetting to import ApiError for error handling
  2. Using messages instead of contents
  3. Trying to access response.content instead of response.text
  4. Not checking if response.text exists before using it

Completed Lesson 2a! You now know how to make basic API calls to Google Gemini.