Module 2, Lesson 1: Working with Multiple AI Providers
Introduction to Module 2: learning the Anthropic Claude API and understanding multi-provider AI development.
Published: 1/10/2026
Welcome to Module 2!
In Module 1, you learned the fundamentals of AI development using OpenAI's API. Now it's time to expand your skillset by working with Anthropic's Claude API.
Why Learn Multiple AI Providers?
Real-World Benefits
1. Provider Flexibility
   - Different models have different strengths
   - Avoid vendor lock-in
   - Choose the best model for each task

2. Cost Optimization
   - Compare pricing across providers
   - Use cheaper models for simple tasks
   - Balance cost vs performance

3. Reliability
   - Fallback options if one provider has issues
   - Higher uptime for production apps
   - Better disaster recovery

4. Performance Comparison
   - Test which model works best for your use case
   - Compare response quality
   - Benchmark speed and accuracy
Industry Reality
Professional AI developers typically work with multiple providers:
```ts
// Production code often looks like this
if (task.requiresLongContext) {
  return await claude.messages.create({...}); // Anthropic for 200k context
} else if (task.requiresSpeed) {
  return await openai.chat.completions.create({...}); // OpenAI for speed
}
```
Module 2 Overview
What You'll Learn
In this module, you'll:
- ✅ Understand Anthropic's Claude API
- ✅ Compare OpenAI vs Anthropic approaches
- ✅ Implement the same prompts using both providers
- ✅ Learn provider-specific features
- ✅ Build multi-provider applications
Course Structure
Module 2
├── Lesson 1 (This lesson) - Module Overview
└── Lesson 2 - Anthropic Claude Prompts
    ├── 2a - Basic Prompt
    ├── 2b - System Prompt
    ├── 2c - Temperature Control
    ├── 2d - Extended Prompts
    ├── 2e - Streaming Responses
    ├── 2f - Structured Output
    └── 2g - Function/Tool Calling
Meet Anthropic Claude
What is Anthropic?
Anthropic is an AI safety company founded by former OpenAI researchers. They created Claude, a family of AI models known for:
- Safety and reliability - Strong focus on helpful, harmless, and honest AI
- Large context windows - Up to 200,000 tokens (vs OpenAI's 128,000)
- High-quality reasoning - Excellent for complex analysis
- Strong writing - Natural, human-like responses
Claude Model Family
| Model | Context | Speed | Best For |
|---|---|---|---|
| Claude Sonnet | 200k | Medium | Balanced - general use |
| Claude Opus | 200k | Slower | Complex reasoning, analysis |
| Claude Haiku | 200k | Fast | Simple tasks, high volume |
For this course, we'll use Claude Haiku 4.5 - it's fast, cost-effective, and perfect for learning.
OpenAI vs Anthropic: Key Differences
API Structure
OpenAI:
```ts
const response = await openai.responses.create({
  model: "gpt-5-nano",
  input: "Hello",
});
```
Anthropic:
```ts
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000, // Required!
  messages: [{ role: "user", content: "Hello" }],
});
```
Key Differences
1. `max_tokens` is required
   - OpenAI: optional parameter
   - Anthropic: must be specified

2. Response structure
   - OpenAI: `response.choices[0].message.content`
   - Anthropic: `response.content` (an array of content blocks)

3. System prompts
   - OpenAI: part of the `messages` array
   - Anthropic: separate `system` parameter

4. Streaming
   - OpenAI: `stream: true` parameter
   - Anthropic: `.stream()` method with event handlers
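The response-shape difference is the one that trips people up most often. Below is a minimal sketch of a helper that normalizes both shapes into plain text. The interfaces here are simplified stand-ins for illustration, not the full SDK types:

```ts
// Simplified stand-ins for the two SDKs' response shapes (not the full types).
interface OpenAIChatResponse {
  choices: { message: { content: string } }[];
}

interface AnthropicMessageResponse {
  content: { type: "text"; text: string }[];
}

// Normalize either provider's response into plain text.
function getText(
  response: OpenAIChatResponse | AnthropicMessageResponse
): string {
  if ("choices" in response) {
    // OpenAI: a single string on the first choice
    return response.choices[0].message.content;
  }
  // Anthropic: an array of content blocks; join the text blocks
  return response.content
    .filter((block) => block.type === "text")
    .map((block) => block.text)
    .join("");
}
```

With a helper like this, the rest of your application never needs to know which provider produced the response.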
When to Use Which?
Use OpenAI When:
- You need the fastest responses (GPT-4o)
- You want the most mature ecosystem
- You need advanced features (vision, audio, etc.)
- Cost is less critical
Use Anthropic When:
- You need very large context (200k tokens)
- Safety and reliability are top priorities
- You want high-quality reasoning and writing
- You're processing long documents
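One way to act on these guidelines in code is a small routing helper. This is a sketch under assumed heuristics; the `Task` shape and the thresholds are invented for illustration, not part of either SDK:

```ts
// Hypothetical task description -- fields and thresholds are
// illustrative, not from either SDK.
interface Task {
  contextTokens: number;   // estimated prompt size
  needsSpeed: boolean;     // latency-sensitive?
  isLongDocument: boolean; // e.g. analyzing a large report
}

type Provider = "openai" | "anthropic";

// Route by the rules of thumb above: very large contexts and long
// documents go to Anthropic; latency-sensitive work goes to OpenAI.
function chooseProvider(task: Task): Provider {
  if (task.contextTokens > 128_000 || task.isLongDocument) {
    return "anthropic"; // 200k context window
  }
  if (task.needsSpeed) {
    return "openai";
  }
  return "openai"; // default; adjust based on your own benchmarks
}
```

In practice you would tune these rules against your own benchmarks rather than hard-coding them.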
Setting Up Anthropic
Get Your API Key
1. Visit the Anthropic Console
2. Sign up or log in
3. Create a new API key
4. Copy it securely
Add to Environment
Update your .env file:
```bash
# OpenAI (from Module 1)
OPENAI_API_KEY=sk-your-openai-key-here

# Anthropic (new!)
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here
```
Install SDK
```bash
pnpm add @anthropic-ai/sdk
```
Verify Setup
Create a quick test file:
```ts
import Anthropic from "@anthropic-ai/sdk";
import dotenv from "dotenv";

dotenv.config();

const anthropic = new Anthropic();

async function test() {
  const response = await anthropic.messages.create({
    model: "claude-haiku-4-5",
    max_tokens: 1000,
    messages: [{ role: "user", content: "Say hello!" }],
  });
  console.log(response.content[0].text);
}

test();
```
Run it:
```bash
pnpm tsx test.ts
```
If you see Claude's response, you're ready to go!
New Folder Structure
In Module 2, we've reorganized the code to support multiple providers:
src/
├── anthropic/ ← New Anthropic examples
│ ├── basic-prompt.ts
│ ├── system-prompt.ts
│ └── ...
└── openai/ ← Your Module 1 code moved here
├── basic-prompt.ts
├── system-prompt.ts
└── ...
This structure makes it easy to:
- Compare implementations side-by-side
- Run examples from different providers
- See the differences in approach
Your Learning Path
Module 1 Recap
You already know how to:
- ✅ Make basic API calls
- ✅ Use system prompts
- ✅ Control temperature
- ✅ Stream responses
- ✅ Get structured output
- ✅ Use function calling
Module 2 Goals
Now you'll learn to:
- Translate these concepts to Anthropic
- Compare OpenAI vs Anthropic approaches
- Choose the right provider for each task
- Build multi-provider applications
Success Metrics
By the end of Module 2, you'll be able to:
- Use both OpenAI and Anthropic APIs fluently
- Explain key differences between providers
- Choose the right provider for a given task
- Implement the same feature with multiple providers
- Build a provider-agnostic AI application
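"Provider-agnostic" usually means hiding both SDKs behind one shared interface. Here is a minimal sketch; the names (`ChatProvider`, `complete`) are our own for illustration, not SDK APIs, and the providers are stubs rather than real SDK calls:

```ts
// A tiny provider-agnostic chat interface. The names here
// (ChatProvider, complete) are illustrative, not SDK APIs.
interface ChatProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// In real code these would wrap the OpenAI / Anthropic SDK calls;
// here they are stubs that just show the shape.
const anthropicProvider: ChatProvider = {
  name: "anthropic",
  complete: async (prompt) => `[claude] ${prompt}`,
};

const openaiProvider: ChatProvider = {
  name: "openai",
  complete: async (prompt) => `[gpt] ${prompt}`,
};

// Application code depends only on the interface, so swapping
// providers is a one-line change.
async function summarize(provider: ChatProvider, text: string) {
  return provider.complete(`Summarize: ${text}`);
}
```

Because `summarize` only sees the interface, you can add a third provider (or a fallback chain) without touching application code.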
Pricing Comparison
Anthropic Claude Haiku 4.5 (January 2026)
- Input: $0.25 per 1M tokens
- Output: $1.25 per 1M tokens
OpenAI GPT-4o-mini (January 2026)
- Input: $0.15 per 1M tokens
- Output: $0.60 per 1M tokens
For Learning: Both are incredibly cheap! 1,000 test prompts will cost less than $0.50 on either platform.
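You can sanity-check that claim with a quick calculation. The per-prompt token counts below are assumptions for illustration; the rates are the January 2026 figures above:

```ts
// Prices from the table above, in dollars per 1M tokens.
const PRICES = {
  "claude-haiku-4-5": { input: 0.25, output: 1.25 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
} as const;

// Cost of N prompts, given assumed average token counts per prompt.
function estimateCost(
  model: keyof typeof PRICES,
  prompts: number,
  inputTokens: number,  // avg input tokens per prompt (assumed)
  outputTokens: number  // avg output tokens per prompt (assumed)
): number {
  const p = PRICES[model];
  return (
    (prompts * inputTokens * p.input + prompts * outputTokens * p.output) /
    1_000_000
  );
}

// 1,000 prompts at ~200 input / ~300 output tokens each:
// Haiku:       (0.2M * $0.25) + (0.3M * $1.25) = $0.425
// GPT-4o-mini: (0.2M * $0.15) + (0.3M * $0.60) = $0.21
```

Both totals come in under $0.50, which is why experimentation during this course is essentially free.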
Provider-Specific Features
Anthropic Strengths
- Constitutional AI: Built-in safety and helpfulness
- Long context: 200k token window
- Quality reasoning: Excellent for analysis
- Transparent pricing: Clear, predictable costs
OpenAI Strengths
- Ecosystem: More third-party tools and integrations
- Multimodal: Vision, audio, and more
- Speed: Generally faster responses
- Innovation: Cutting-edge features first
Ready to Begin?
In the next lesson, you'll dive into Anthropic's Claude API with hands-on examples. You'll build the same prompts you created in Module 1, but using Anthropic's SDK.
Before You Continue
Make sure you have:
- ✅ Anthropic API key in `.env`
- ✅ `@anthropic-ai/sdk` installed
- ✅ Basic understanding from Module 1
- ✅ Node.js and pnpm working
Go to Lesson 2: Anthropic Prompts Overview →
Questions to Consider
Before moving on, think about:
- Why might you choose Anthropic over OpenAI for a specific project?
- What are the key API differences you'll need to remember?
- How could multi-provider support benefit your applications?
If you're ready to explore these answers, let's continue to Lesson 2!