Module 2 - Lesson 2a: Basic Prompt with Anthropic
Simple text input prompt with Anthropic Claude.
Published: 1/10/2026
Lesson 2a: Basic Prompt with Anthropic Claude
This is the simplest form of prompt with Anthropic - just a text input with no system context. You'll learn the fundamental differences between OpenAI and Anthropic's API structure.
What It Does
Sends a simple text prompt to Claude and receives a response. This demonstrates Anthropic's basic API structure and response format.
Key Differences from OpenAI
- **`max_tokens` is required**: Anthropic requires you to specify `max_tokens` on every request
- **Response structure**: The response contains a `content` array, not a `choices` array
- **Content blocks**: Each response contains an array of content blocks
Code Example
Create `src/anthropic/basic-prompt.ts`:
```typescript
import Anthropic from "@anthropic-ai/sdk";
import dotenv from "dotenv";

// Load environment variables
dotenv.config();

// Create Anthropic client with typed configuration
const anthropic = new Anthropic();

// Async function with proper return type
async function basicPrompt(): Promise<void> {
  try {
    console.log("Testing Anthropic connection...");

    // Make API call - response is automatically typed!
    const response = await anthropic.messages.create({
      model: "claude-haiku-4-5",
      max_tokens: 1000, // Required! Must specify max tokens
      messages: [{ role: "user", content: "Suggest a travel destination" }],
    });

    console.log("✅ Basic Prompt Success!");

    // Show response usage
    console.log("Tokens used:");
    console.dir(response.usage, { depth: null });

    // Check if we got a response
    if (!response.content || response.content.length === 0) {
      throw new Error("No content in response");
    }

    // Extract text from content blocks
    // Anthropic returns an array of content blocks
    const textBlocks = response.content.filter(
      (block) => block.type === "text"
    );

    if (textBlocks.length === 0) {
      throw new Error("No text content in response");
    }

    // TypeScript knows the structure of the response
    console.log(
      "AI Response:",
      textBlocks.map((block) => block.text).join("\n")
    );
  } catch (error) {
    // Proper error handling with type guards
    if (error instanceof Anthropic.APIError) {
      console.log("❌ API Error:", error.status, error.message);
    } else if (error instanceof Error) {
      console.log("❌ Error:", error.message);
    } else {
      console.log("❌ Unknown error occurred");
    }
  }
}

// Run the test
basicPrompt().catch((error) => {
  console.error("Error:", error);
});
```
Run It
```shell
pnpm tsx src/anthropic/basic-prompt.ts
```
Expected Output
```
Testing Anthropic connection...
✅ Basic Prompt Success!
Tokens used:
{
  input_tokens: 10,
  output_tokens: 150
}
AI Response:
I'd be happy to suggest a travel destination! How about visiting Kyoto, Japan?
Kyoto offers a perfect blend of ancient tradition and modern culture...
[rest of response]
```
Side-by-Side Comparison
OpenAI Version
```typescript
const response = await openai.responses.create({
  model: "gpt-5-nano",
  input: "Suggest a travel destination",
  // max_tokens is optional
});

// Access response
const text = response.output_text;
```
Anthropic Version
```typescript
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000, // Required!
  messages: [{ role: "user", content: "Suggest a travel destination" }],
});

// Access response (array of content blocks)
const textBlocks = response.content.filter((block) => block.type === "text");
const text = textBlocks.map((block) => block.text).join("\n");
```
Key Concepts
1. Required max_tokens
Why? Anthropic requires you to explicitly set a maximum token limit to prevent runaway costs.
```typescript
// ❌ This will error
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  messages: [{ role: "user", content: "Hello" }],
});

// ✅ This works
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000, // Must specify
  messages: [{ role: "user", content: "Hello" }],
});
```
Choosing max_tokens:
- Short responses: 100-500 tokens
- Medium responses: 500-1500 tokens
- Long responses: 1500-4000 tokens
- Maximum: Up to model limit (varies by model)
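One way to apply these tiers in code is a small lookup helper. This is just a sketch: the `maxTokensFor` name and the exact numbers mirror the guidance above and are not part of the SDK.

```typescript
// Hypothetical helper mapping a desired response length to a max_tokens
// budget, using the tiers from the guidance above.
type ResponseLength = "short" | "medium" | "long";

function maxTokensFor(length: ResponseLength): number {
  switch (length) {
    case "short":
      return 500; // quick answers, single paragraphs
    case "medium":
      return 1500; // multi-paragraph explanations
    case "long":
      return 4000; // essays and long-form content
  }
}
```

If a response hits the cap, the API reports `stop_reason: "max_tokens"` on the response, so you can detect truncation and retry with a larger budget.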
2. Content Blocks Array
Anthropic responses use a content block structure:
```typescript
// Response structure
{
  id: "msg_01ABC123",
  type: "message",
  role: "assistant",
  content: [
    {
      type: "text",
      text: "Your response here"
    }
  ],
  model: "claude-haiku-4-5",
  usage: {
    input_tokens: 10,
    output_tokens: 150
  }
}
```
Why content blocks? The array structure lets a single response carry multiple typed blocks, so the same format can extend to multi-modal and tool-calling responses without breaking existing code.
3. Response Extraction
Always filter for text blocks:
```typescript
// Get all text content
const textBlocks = response.content.filter(
  (block) => block.type === "text"
);

// Join into a single string
const text = textBlocks.map((block) => block.text).join("\n");
```
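If you extract text in several places, it can be worth pulling this into a helper. A sketch, where the `extractText` name and the structural `ContentBlock` type are illustrative rather than SDK exports:

```typescript
// Illustrative helper: collect all text from a response's content blocks.
// Typed structurally so it mirrors the SDK's block shape without importing it.
type ContentBlock = { type: string; text?: string };

function extractText(content: ContentBlock[]): string {
  return content
    .filter(
      (block): block is { type: "text"; text: string } =>
        block.type === "text"
    )
    .map((block) => block.text)
    .join("\n");
}
```

Usage: `const text = extractText(response.content);`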
Error Handling
Anthropic provides detailed error types:
```typescript
try {
  const response = await anthropic.messages.create({...});
} catch (error) {
  if (error instanceof Anthropic.APIError) {
    // API-specific errors
    console.log("Status:", error.status); // 400, 401, 429, etc.
    console.log("Message:", error.message);
    console.log("Type:", error.type); // e.g., "invalid_request_error"
  } else if (error instanceof Error) {
    // Generic errors
    console.log("Error:", error.message);
  }
}
```
Common Errors
| Error Code | Meaning | Solution |
|---|---|---|
| 401 | Invalid API key | Check `.env` file |
| 400 | Missing `max_tokens` | Add `max_tokens` parameter |
| 429 | Rate limit | Slow down requests |
| 500 | Server error | Retry after delay |
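The 429 and 500 rows are good candidates for automatic retries with exponential backoff. Here is a minimal sketch; `withRetry` and `backoffDelayMs` are illustrative helpers, not SDK features, and the official SDK also has its own configurable retry behavior:

```typescript
// Illustrative retry wrapper for the retryable rows above (429 and 5xx).
// `makeRequest` stands in for any function that calls the API.
function backoffDelayMs(attempt: number, baseMs = 500): number {
  // Exponential backoff: 500ms, 1000ms, 2000ms, ...
  return baseMs * 2 ** attempt;
}

async function withRetry<T>(
  makeRequest: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await makeRequest();
    } catch (error) {
      const status = (error as { status?: number }).status;
      const retryable =
        status === 429 || (status !== undefined && status >= 500);
      if (!retryable || attempt === maxAttempts - 1) throw error;
      await new Promise((resolve) =>
        setTimeout(resolve, backoffDelayMs(attempt))
      );
    }
  }
  throw new Error("unreachable");
}
```

A 400 or 401, by contrast, will fail the same way every time, so those should surface immediately rather than be retried.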
Token Usage
Understanding token consumption:
```typescript
const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000,
  messages: [{ role: "user", content: "Suggest a travel destination" }],
});

console.log("Input tokens:", response.usage.input_tokens); // ~7
console.log("Output tokens:", response.usage.output_tokens); // ~150
console.log(
  "Total:",
  response.usage.input_tokens + response.usage.output_tokens
);
```
Cost Calculation (Claude Haiku 4.5):
```
Input:  7 tokens   × $0.25/1M = $0.00000175
Output: 150 tokens × $1.25/1M = $0.0001875
Total:  ~$0.00019 (less than a cent!)
```
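The arithmetic above can be wrapped in a tiny estimator. This is a sketch using the Haiku rates quoted in this lesson; check current pricing before relying on the constants.

```typescript
// Rough cost estimator using the Haiku rates from this lesson (assumed rates).
const INPUT_USD_PER_MTOK = 0.25; // $ per 1M input tokens
const OUTPUT_USD_PER_MTOK = 1.25; // $ per 1M output tokens

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_USD_PER_MTOK +
    (outputTokens / 1_000_000) * OUTPUT_USD_PER_MTOK
  );
}

// estimateCostUSD(7, 150) ≈ 0.00019, matching the calculation above
```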
Practice Exercises
Try modifying the code:
1. **Change the Prompt**

   ```typescript
   messages: [{ role: "user", content: "Write a haiku about coding" }]
   ```

2. **Adjust max_tokens**

   ```typescript
   max_tokens: 50   // Very short response
   max_tokens: 2000 // Longer response
   ```

3. **Try Different Models**

   ```typescript
   model: "claude-sonnet-4-5" // More capable, slower
   model: "claude-opus-4-5"   // Most capable, slowest
   ```
Comparison Table
| Feature | OpenAI | Anthropic |
|---|---|---|
| API Method | responses.create() | messages.create() |
| max_tokens | Optional | Required |
| Response Path | output_text | content[0].text |
| Error Type | OpenAI.APIError | Anthropic.APIError |
| Models | gpt-5-nano, gpt-4o-mini | claude-haiku, sonnet, opus |
Key Takeaways
- ✅ Anthropic requires the `max_tokens` parameter
- ✅ Response is a content blocks array, not a choices array
- ✅ Always filter for `type === "text"` blocks
- ✅ Error handling is similar to OpenAI
- ✅ Token usage tracking is built-in
Next Steps
Now that you understand basic prompts, let's add context with system prompts!
Next: Lesson 2b - System Prompt →
Quick Reference
Minimal Working Example
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

const response = await anthropic.messages.create({
  model: "claude-haiku-4-5",
  max_tokens: 1000,
  messages: [{ role: "user", content: "Hello!" }],
});

// Narrow the block type before reading .text - not every block has it
const block = response.content[0];
if (block.type === "text") {
  console.log(block.text);
}
```
Common Pitfalls
- ❌ Forgetting `max_tokens`
- ❌ Using `choices` instead of `content`
- ❌ Not filtering content blocks
- ❌ Wrong error type in catch block
Completed Lesson 2a! You now know how to make basic API calls to Anthropic Claude. 🎉