Module 1, Lesson 1: What Are LLMs and How Do They Work?
Introduction to Large Language Models (LLMs) and their capabilities.
Published: 1/1/2026
Welcome to my AI Playground!
This playground is designed to give a simple overview of what I've learned about AI. No prior knowledge is required, so you can also use it just to get a general sense of the basics.
What is an LLM?
You’ve probably heard the term LLM before, but what does it actually mean?
LLM stands for Large Language Model. Imagine it as an advanced text predictor that has been trained on massive amounts of books, websites, and articles.
Simple Analogy
Imagine you're texting a friend:
- You type: "The weather today is..."
- Your phone suggests: "sunny", "rainy", "cold"
An LLM does the same thing, but far more capably. It doesn't just predict the next word: it tracks context, follows instructions, and can hold conversations.
What Can LLMs Do?
LLMs can:
- Answer questions
- Write code
- Translate languages
- Summarize documents
- Have conversations
- Generate creative content
- Analyze data
- Help with research
What LLMs CANNOT Do (Important!)
LLMs cannot:
- Access the internet (unless you give them tools to do so)
- Remember previous conversations (unless you send the history)
- Execute code on their own
- Access your files directly
- Know current events after their training date
How Do You Use an LLM?
You interact with LLMs through APIs (Application Programming Interfaces).
What's an API?
Think of an API as a waiter at a restaurant:
- You (your code) tell the waiter (API) what you want
- The waiter takes your order to the kitchen (LLM)
- The kitchen prepares your food (generates a response)
- The waiter brings it back to you (API returns the response)
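The waiter analogy maps directly onto an HTTP request. Here's a minimal sketch in TypeScript, assuming OpenAI's Chat Completions endpoint, an `OPENAI_API_KEY` environment variable, and Node 18+ for the built-in `fetch`; the model name is just an example:

```typescript
// Sketch of the "waiter" round trip: your code -> API -> LLM -> back.
// Assumes OpenAI's Chat Completions endpoint and an OPENAI_API_KEY
// environment variable; the model name is illustrative.

// Build the "order" you hand to the waiter (the request body).
function buildOrder(model: string, question: string) {
  return {
    model,
    messages: [{ role: "user", content: question }],
  };
}

// Hand the order to the waiter and wait for the kitchen (the LLM).
async function askLLM(question: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildOrder("gpt-4o-mini", question)),
  });
  const data = await response.json();
  return data.choices[0].message.content; // the waiter brings the food back
}
```

You'll make this call for real in Lesson 2; for now, the key idea is that your code only ever sees the request and the response, never the model itself.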
Popular LLM Providers
| Provider | Popular Models | Best For |
|---|---|---|
| OpenAI | GPT-3.5, GPT-4, GPT-4o | General use, reliable |
| Anthropic | Claude 3 Opus, Sonnet, Haiku | Long documents, reasoning, safety |
| Google | Gemini Pro, Gemini Ultra | Large contexts, cost-effective |
For this course, we'll start with OpenAI because:
- Easy to use
- Great documentation
- Reliable service
- Good for learning
Understanding Tokens
When you send text to an LLM, it gets broken into tokens.
What's a Token?
A token is a small chunk of text. For common English text, one token corresponds to roughly 4 characters, or about ¾ of a word (so 100 tokens ≈ 75 words). (This definition comes from OpenAI's tokenizer documentation: https://platform.openai.com/tokenizer.) Here are some examples:
- "Hello" = 1 token
- "Hello, world!" = 4 tokens
- "The weather today is sunny" = 5 tokens
Remember: tokens aren't exactly the same as words or characters, and punctuation and spaces count towards the total. Try the tokenizer tool linked above to see how your own text gets split.
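The ~4-characters rule of thumb can be turned into a quick back-of-the-envelope estimator. This is only an approximation; a real tokenizer (like the one on OpenAI's tokenizer page) will give slightly different counts:

```typescript
// Rough token estimate using the ~4-characters-per-token rule of thumb.
// Real tokenizers split text very differently (by subwords, not by
// character count), so treat this as a ballpark figure only.
function estimateTokens(text: string): number {
  return Math.round(text.length / 4);
}

console.log(estimateTokens("Hello")); // 1
console.log(estimateTokens("What is the capital of France?")); // 8 (close to the ~7 counted below)
```

Good enough for sanity-checking costs, but use the provider's tokenizer when the exact count matters.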
Why Do Tokens Matter?
- Cost: You pay per token (both input and output)
- Limits: Models have maximum token limits
- Speed: More tokens = slower responses
Example Token Calculation
Input: "What is the capital of France?" = ~7 tokens
Output: "The capital of France is Paris." = ~7 tokens
Total: ~14 tokens
Cost: well under a hundredth of a cent with GPT-4o mini (roughly $0.000005 at $0.15/1M input and $0.60/1M output tokens)
It's fairly cheap, but even experimenting does come at a cost. https://platform.openai.com/docs/pricing gives a good overview of OpenAI's costs; similar pricing guides are available for Anthropic, Gemini, and any other provider you may want to use.
How LLMs Generate Responses
Simple Explanation
- You send a prompt (your question or instruction)
- The LLM processes it (understands context and intent)
- It generates tokens one by one (predicts most likely next word)
- It stops when complete or reaches a limit
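The four steps above can be sketched as a loop. The lookup table here is a stand-in for the real model's "predict the most likely next token" step, purely for illustration:

```typescript
// Toy sketch of token-by-token generation: emit tokens one at a time
// until a stop token or a length limit is reached. The lookup table
// stands in for a real model's next-token prediction.
const nextToken: Record<string, string> = {
  The: "dog",
  dog: "ran",
  ran: "home",
  home: "<stop>",
};

function generate(prompt: string, maxTokens: number): string {
  const tokens = [prompt];
  while (tokens.length < maxTokens) {
    const next = nextToken[tokens[tokens.length - 1]] ?? "<stop>";
    if (next === "<stop>") break; // the "model" decided it's done
    tokens.push(next);
  }
  return tokens.join(" ");
}

console.log(generate("The", 10)); // "The dog ran home" (stopped on <stop>)
console.log(generate("The", 2)); // "The dog" (hit the token limit)
```

A real LLM does the same loop, except each "lookup" is a full pass through billions of parameters producing a probability for every possible next token.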
Temperature: Controlling Creativity
LLMs have a setting called temperature (0.0 to 2.0):
- Temperature 0.0: Very predictable, same answer every time
  - Use for: Math, code, factual answers
- Temperature 0.7: Balanced (default)
  - Use for: General conversation, helpful assistants
- Temperature 1.5+: Very creative, random
  - Use for: Creative writing, brainstorming
Example
Prompt: "Complete this sentence: The dog ran..."
Temperature 0.0:
- "The dog ran quickly across the yard."
Temperature 1.5:
- "The dog ran like a caffeinated tornado through the neighbor's prize-winning petunias!"
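Under the hood, temperature rescales the model's next-word scores before they're turned into probabilities: low temperature sharpens the distribution toward the top choice, high temperature flattens it. A toy sketch with made-up scores:

```typescript
// How temperature reshapes next-word probabilities (toy numbers).
// Dividing scores by the temperature before the softmax step makes
// low temperatures sharpen the distribution and high ones flatten it.
function softmax(scores: number[], temperature: number): number[] {
  const scaled = scores.map((s) => Math.exp(s / temperature));
  const total = scaled.reduce((a, b) => a + b, 0);
  return scaled.map((s) => s / total);
}

// Made-up model scores for "quickly", "wildly", "backwards"
const scores = [2.0, 1.0, 0.5];
console.log(softmax(scores, 0.2)); // near-certain pick of "quickly"
console.log(softmax(scores, 1.5)); // much more spread out
```

At temperature 0.2 the top word gets over 99% of the probability; at 1.5 it drops to roughly half, so the "caffeinated tornado" completions become possible.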
Your First Concept: The Prompt
A prompt is what you send to the LLM. It's how you communicate.
Bad Prompt vs Good Prompt
Bad Prompt:
"Write about dogs"
Too vague! You'll get a random essay about dogs.
Good Prompt:
"Write a 3-sentence summary of why dogs make good pets for families with children."
Specific, clear instructions!
Prompt Components
A good prompt often includes:
- Role: "You are a helpful tutor..."
- Task: "Explain photosynthesis..."
- Context: "...to a 10-year-old student..."
- Format: "...using simple words and one example."
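One simple way to assemble those four components in code (the helper and its layout are just an illustration, not a standard API):

```typescript
// Assemble the four prompt components into one string. This helper is
// a sketch, not a standard API: the point is that a good prompt is
// built from explicit parts, not improvised in one breath.
function buildPrompt(parts: {
  role: string;
  task: string;
  context: string;
  format: string;
}): string {
  return `${parts.role} ${parts.task} ${parts.context} ${parts.format}`;
}

const prompt = buildPrompt({
  role: "You are a helpful tutor.",
  task: "Explain photosynthesis",
  context: "to a 10-year-old student,",
  format: "using simple words and one example.",
});
console.log(prompt);
```

Structuring prompts this way also makes it easy to swap out one component (say, the audience in `context`) while keeping the rest stable.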
Example Prompts
For Code Help:
I'm learning JavaScript. Can you explain what an array is
using a real-world analogy? Keep it under 50 words.
For Writing:
Write a professional email declining a job offer.
Be polite and brief. Keep it under 100 words.
For Analysis:
Summarize the key points from this article in 3 bullet points:
[article text here]
System Prompts vs User Prompts
System Prompt (Sets the personality)
"You are a friendly teacher who explains things simply."
User Prompt (What you want)
"Explain what an API is."
The system prompt stays the same for the conversation. The user prompts change with each message.
Think of it like:
- System: "You are a French chef"
- User: "How do I make pasta?" ← Gets French-style advice
- User: "What about dessert?" ← Still French-style advice
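In code, this usually looks like a messages array sent with each request, following the common chat-completion message format (the dialogue content here is made up):

```typescript
// How system and user prompts sit in the messages array sent to a
// chat API: one system message up front, then alternating user and
// assistant turns. The dialogue content is made up for illustration.
type Message = { role: "system" | "user" | "assistant"; content: string };

const messages: Message[] = [
  { role: "system", content: "You are a French chef." }, // set once
  { role: "user", content: "How do I make pasta?" },
  { role: "assistant", content: "Start with fresh eggs and good flour..." },
  { role: "user", content: "What about dessert?" }, // still French-style
];
```

Note that every request re-sends the whole array; that's why the model "remembers" earlier turns only if you include them (as mentioned in the limitations above).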
Understanding API Keys
An API key is like a password that lets you use the LLM service.
Important Safety Rules
- Never share your API key publicly
  - Don't put it in code you share on GitHub
  - Don't post it in Discord/Slack
  - Don't email it around
- Never commit it to version control
  - Use environment variables
  - Use .env files (and add them to .gitignore)
- Set spending limits
  - OpenAI lets you set monthly limits
  - Start with a $5-10 limit while learning
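In practice, "use environment variables" means reading the key at runtime instead of hard-coding it. A small sketch (the variable name `OPENAI_API_KEY` is a common convention, and a `.env` file can be loaded with a tool like dotenv):

```typescript
// Read an API key from an environment variable instead of hard-coding
// it in source. Failing loudly when the variable is missing beats
// sending requests with an undefined key.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// const apiKey = requireEnv("OPENAI_API_KEY"); // throws if not set
```

Because the key never appears in your source files, committing the code to GitHub no longer risks leaking it, as long as `.env` stays in `.gitignore`.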
What Happens If Your Key Leaks?
- Someone could use your account
- They could rack up charges
- You'd be responsible for the bill
Solution: Delete the old key, create a new one immediately!
Cost Awareness
Let's talk money. AI APIs are surprisingly cheap for learning, but you should still be aware.
Typical Costs for Learning
OpenAI GPT-5.1-nano (based on December 2025 documentation; best for learning, if you have access):
- Input: $0.10 per 1M tokens
- Output: $0.40 per 1M tokens
What does this mean in practice?
1,000 messages like "What is the weather?" with short answers:
- ~20,000 tokens total
- Cost: ~$0.005 (half a cent, at the rates above)
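The arithmetic behind that estimate, assuming the listed rates and a roughly even split between input and output tokens:

```typescript
// Cost arithmetic for the example above: 20,000 tokens at the listed
// GPT-5.1-nano rates, assuming half are input and half are output.
const INPUT_PER_MILLION = 0.1; // dollars per 1M input tokens
const OUTPUT_PER_MILLION = 0.4; // dollars per 1M output tokens

function costDollars(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PER_MILLION +
    (outputTokens / 1_000_000) * OUTPUT_PER_MILLION
  );
}

console.log(costDollars(10_000, 10_000)); // ≈ 0.005 dollars — half a cent
```

Swap in any provider's per-million rates to sanity-check a budget before you start experimenting.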
Free Credits
- OpenAI: Often free credits for new accounts
- Google: Generous free tier
- Anthropic: Limited free tier
Quick Reference
Key Terms
- LLM: Large Language Model - the AI brain
- API: How you talk to the LLM
- Token: Chunk of text (roughly ¾ of a word)
- Prompt: What you send to the LLM
- Temperature: Creativity setting (0-2)
- API Key: Your password to use the service
Costs (OpenAI GPT-5.1-nano)
- ~$0.10 per 1M input tokens
- ~$0.40 per 1M output tokens
- 1 token ≈ 4 characters
- Average conversation: $0.001-0.01
Safety Checklist
- API key stored securely
- .env file in .gitignore
- Spending limit set on account
- Never sharing keys publicly
Ready for Lesson 2?
In the next lesson, you'll:
- Set up your development environment
- Get your OpenAI API key
- Install the necessary tools
- Run your first API call!
Go to Lesson 2: TypeScript Setup
Questions to Test Your Understanding
Before moving on, can you answer these?
- What does LLM stand for?
- What's a token and why does it matter?
- What's the difference between temperature 0.0 and 1.5?
- What are the two main parts of a good prompt?
- Why should you never share your API key?
If you can answer these, you're ready to move forward! If not, re-read the sections you're unsure about.