Module 2 (Gemini), Lesson 1: Working with Google Gemini
Introduction to the Gemini track - building prompts with Google's Gemini API and multi-provider thinking.
Published: 1/15/2026
Welcome to Module 2 (Gemini)
You built your foundation with OpenAI in Module 1 and expanded to Anthropic. Now you will add Google Gemini to your toolkit so you can pick the right model for each job and avoid vendor lock-in.
Why Gemini?
- Strong speed/price balance with the `gemini-3-flash` family
- Native JSON and function calling via `responseMimeType`, schemas, and tools
- Simple streaming helpers with `generateContentStream`
- Google ecosystem fit if you already use Firebase, Vertex, or Workspace
What You Will Learn
- ✅ Gemini client setup with `@google/genai`
- ✅ Prompt patterns that mirror your OpenAI and Anthropic lessons
- ✅ How Gemini structures `contents`, `candidates`, and `usageMetadata`
- ✅ Streaming, structured output, and tool calling the Gemini way
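To preview the response fields named above, here is a hedged sketch of how a `generateContent` response nests. The field names (`candidates`, `content.parts`, `usageMetadata`) follow the Gemini API; the values themselves are made up for illustration:

```typescript
// Illustrative shape of a Gemini response (values are fabricated).
// candidates holds the model's outputs; usageMetadata reports token counts.
const exampleResponse = {
  candidates: [
    {
      content: {
        role: "model",
        parts: [{ text: "Hello from Gemini!" }],
      },
      finishReason: "STOP",
    },
  ],
  usageMetadata: {
    promptTokenCount: 8,
    candidatesTokenCount: 5,
    totalTokenCount: 13,
  },
};

// Pulling the first candidate's text out by hand:
const text = exampleResponse.candidates[0].content.parts
  .map((p) => p.text)
  .join("");
console.log(text); // Hello from Gemini!
```

The SDK's `response.text` accessor does roughly this join for you, but knowing the underlying structure matters once you inspect `finishReason` or track token usage.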
Code Folder for This Module
All lesson code lives in `src/gemini/` in the playground repo:
```
src/
├── gemini/
│   ├── basic-prompt.ts
│   ├── basic-prompt-with-system.ts
│   ├── basic-prompt-with-temperature.ts
│   ├── extended-prompt.ts
│   ├── stream-prompt.ts
│   ├── structured-output-prompt.ts
│   └── tools-prompt.ts
├── anthropic/
└── openai/
```
Navigation
- Next: Lesson 2: Gemini Prompts Overview
- Module Index: AI SDK Essentials
Ready? In the next lesson you will see the Gemini-specific prompt map for the rest of this module.