Chapter 2: Introduction to Prompting

Mastering LLM Communication
Prompting is the primary way we communicate with Large Language Models. Effective prompting unlocks the full potential of LLMs, while poor prompting can lead to frustrating or unreliable results. This module teaches you the fundamentals of crafting high-quality prompts.
Learning Objectives:
- Understand how LLMs interpret basic prompts
- Recognize the importance of prompt engineering
- Learn the components of effective prompt structure
- Compare good and bad prompt examples
- Explore different prompting techniques
Basic Prompting
At its simplest, prompting involves providing text input to an LLM and receiving text output in response. The model processes your prompt token-by-token, predicting the most likely continuation based on its training.
Simple Prompt Examples
"Explain quantum computing in simple terms"
→ The model generates a beginner-friendly explanation of quantum computing
"Write a short story about a robot learning human emotions"
→ The model creates a creative narrative based on the given premise
How LLMs Process Prompts
- Tokenization: Your prompt is split into tokens (words or subwords)
- Contextual Understanding: The model analyzes relationships between all tokens
- Prediction: The model generates the most probable continuation
- Generation: Output is produced token-by-token until completion
Key Insight
LLMs don't "understand" prompts in the human sense. They predict text based on patterns learned during training. The quality of their predictions depends heavily on how you frame your request.
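The tokenization step can be sketched with a toy example. Real tokenizers learn subword vocabularies from data (byte-pair encoding is common); the fixed vocabulary and greedy longest-match rule below are purely illustrative.

```python
# Toy illustration of subword tokenization. Real tokenizers (e.g., BPE)
# learn their vocabularies from data; this fixed vocabulary is only a sketch.
TOY_VOCAB = {"prompt", "ing", "token", "ize", "learn", "s"}

def toy_tokenize(word):
    """Greedily split a word into the longest known subwords."""
    tokens, i = [], 0
    while i < len(word):
        # Try the longest match first, falling back to a single character
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in TOY_VOCAB or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

print(toy_tokenize("prompting"))   # ['prompt', 'ing']
print(toy_tokenize("tokenizes"))   # ['token', 'ize', 's']
```

Note how "prompting" becomes two tokens: this is why models can handle words they never saw whole during training.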
The Need for Prompt Engineering
While LLMs can work with simple prompts, well-crafted prompts significantly improve results. Prompt engineering is the practice of designing inputs to get better outputs from language models.
Benefits of Good Prompts
- Higher Accuracy: Reduces hallucinations and incorrect information
- More Relevant Outputs: Keeps responses focused on your needs
- Efficiency: Gets better results with fewer iterations
- Consistency: Produces reliable outputs across similar queries
Challenges with Poor Prompts
- Vagueness: Leads to generic or off-target responses
- Ambiguity: Causes the model to guess your intent
- Excessive Breadth: May produce overly long or unfocused outputs
- Lack of Context: Forces the model to make assumptions
[Figure: Prompt Processing Visualization]
Prompt Structure Basics
Effective prompts typically include several key components that guide the model toward your desired output.
1. Instruction
The primary task you want the model to perform (e.g., "Summarize this article", "Write Python code")
2. Context
Relevant background information (e.g., "for a 5th grade science class", "as if explaining to a beginner")
3. Constraints
Limitations on the response (e.g., "in 3 sentences", "using simple terms", "in bullet points")
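The three components above can be assembled programmatically. The `build_prompt` helper below is a hypothetical sketch, not part of any library's API; it simply joins the pieces into one prompt string.

```python
# Minimal sketch of assembling a prompt from instruction, context,
# and constraints. The helper and its format are illustrative only.
def build_prompt(instruction, context="", constraints=""):
    """Combine the three prompt components into a single string."""
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Summarize this article.",
    context="The audience is a 5th grade science class.",
    constraints="Use 3 sentences and simple terms.",
)
print(prompt)
```

Keeping the components separate like this makes it easy to reuse one instruction with different contexts or constraints.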
Good vs. Bad Prompt Examples
Good Prompt
"Explain the concept of machine learning in 2-3 paragraphs, using analogies suitable for high school students."
✅ Specific about length, targets appropriate knowledge level, requests analogies for better understanding
Bad Prompt
"Tell me about machine learning"
❌ Too vague - could result in a response that's too technical, too brief, or off-target
Good Prompt
"Generate a list of 5 healthy dinner recipes that can be prepared in under 30 minutes, using chicken as the main protein. Include preparation time for each."
✅ Specific about quantity, cooking time, main ingredient, and requested format
Bad Prompt
"Give me some recipes"
❌ Lacks any specificity - could return anything from desserts to complex multi-hour dishes
Good Prompt
"Act as a financial advisor. I'm a 30-year-old with $50,000 in savings. Recommend a diversified investment portfolio with moderate risk, explaining each recommended asset class in simple terms."
✅ Sets clear role, provides relevant context, specifies risk preference, requests explanations
Bad Prompt
"How should I invest my money?"
❌ No context about age, risk tolerance, or amount - response will be too generic to be useful
Introduction to Prompt Types
Different prompting techniques serve different purposes. Understanding these approaches helps you select the right method for your needs.
Zero-Shot Prompting
Asking the model to perform a task without providing any examples.
"Translate this English sentence to French: 'The weather is nice today.'"
Best for: Simple, straightforward tasks where the model has strong baseline performance
Few-Shot Prompting
Providing several examples to demonstrate the desired pattern or format.
"Convert these dates to the format DD/MM/YYYY:
Example 1: January 5, 2023 → 05/01/2023
Example 2: March 20, 2024 → 20/03/2024
Now convert: July 4, 2025"
Best for: Tasks requiring specific formatting or when teaching new patterns
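The few-shot pattern lends itself to templating. The `few_shot_prompt` helper below is a hypothetical sketch that assembles example pairs into a prompt string matching the date-conversion example above.

```python
# Sketch of building a few-shot prompt from (input, output) example pairs.
# The helper name and format are illustrative, not from any library.
def few_shot_prompt(task, examples, query):
    """Render a task description, numbered examples, and a final query."""
    lines = [task]
    for i, (inp, out) in enumerate(examples, start=1):
        lines.append(f"Example {i}: {inp} -> {out}")
    lines.append(f"Now convert: {query}")
    return "\n".join(lines)

prompt = few_shot_prompt(
    task="Convert these dates to the format DD/MM/YYYY:",
    examples=[("January 5, 2023", "05/01/2023"),
              ("March 20, 2024", "20/03/2024")],
    query="July 4, 2025",
)
print(prompt)
```

A template like this keeps the example format consistent, which is exactly what few-shot prompting relies on.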
Role-Based Prompting
Asking the model to adopt a specific persona or expertise.
"You are an experienced high school biology teacher.
Explain photosynthesis to a 10th grade class, using
one analogy and one real-world application."
Best for: Getting responses tailored to specific perspectives or knowledge levels
Code Example: Basic Prompting with OpenAI API
from openai import OpenAI

# Create a client (in practice, load the API key from an environment variable)
client = OpenAI(api_key="your-api-key-here")

# Define a function to get completions
def get_completion(prompt, model="gpt-4"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # Controls randomness (0-2)
        max_tokens=500,   # Maximum length of the response
    )
    return response.choices[0].message.content

# Example prompt
prompt = """Act as a historian specializing in 20th century technology.
Explain the invention of the transistor in 3 paragraphs, highlighting its
impact on computing. Write for an audience of college students."""

# Get and print the response
response = get_completion(prompt)
print(response)
Key Features of This Example:
- Uses role-based prompting ("Act as a historian...")
- Specifies length (3 paragraphs) and audience (college students)
- Focuses the response on a particular aspect (impact on computing)
- Includes API parameters to control response characteristics
Summary
Effective prompting is both an art and a science. By understanding these fundamentals, you're now equipped to craft better prompts that yield more useful, accurate, and relevant responses from LLMs.
Key Takeaways
- Clear, specific prompts produce better results than vague ones
- Good prompts typically include instruction, context, and constraints
- Different prompting techniques (zero-shot, few-shot, role-based) serve different purposes
- Prompt engineering is essential for reliable, high-quality outputs
- Always consider your audience and purpose when crafting prompts