Chapter 6: Practical Applications of Prompt Engineering
This module explores practical applications of Large Language Models (LLMs) across various domains. You'll learn how to craft effective prompts for common real-world tasks, with examples and interactive demonstrations.

1. Structured Data Extraction
LLMs can transform unstructured text into structured formats like JSON, CSV, or tables. This is particularly useful for data processing pipelines.
Example Prompt:
"Extract the product details as JSON with name, price, and features:
The Quantum X900 laptop features a 15.6" 4K display, 32GB RAM, and 1TB SSD storage. It's priced at $1,299.99 and comes with a 2-year warranty."
Output:
{
  "name": "Quantum X900 laptop",
  "price": 1299.99,
  "features": [
    "15.6\" 4K display",
    "32GB RAM",
    "1TB SSD storage",
    "2-year warranty"
  ]
}
Python Implementation:
from transformers import pipeline

# Initialize the text-to-text generation pipeline
extractor = pipeline("text2text-generation", model="google/flan-t5-large")

# Define the prompt and input text
prompt = "Extract the product details as JSON with name, price, and features:"
text = "The Quantum X900 laptop features a 15.6\" 4K display..."

# Generate the structured output (the pipeline returns a list of dicts)
result = extractor(prompt + " " + text)
print(result[0]["generated_text"])
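Model output is plain text, so it is worth validating it as JSON before passing it to a data pipeline. A minimal sketch of that step (`parse_product_json` is an illustrative helper, and the sample string stands in for real model output):

```python
import json

def parse_product_json(raw: str) -> dict:
    """Parse model output as JSON, raising a clear error on failure."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model did not return valid JSON: {exc}") from exc

# Sample string resembling the extraction output above
raw_output = '{"name": "Quantum X900 laptop", "price": 1299.99}'
product = parse_product_json(raw_output)
print(product["name"])
```

Failing fast here is safer than letting a malformed response propagate downstream.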
2. Inferring: Sentiment Analysis & Summarization
LLMs can analyze text for sentiment, extract key topics, and create concise summaries—valuable for processing customer feedback or research papers.
Sentiment Analysis Prompt:
"The battery life is excellent, lasting a full day of heavy use. However, the keyboard feels cheap and the touchpad is unresponsive at times."
Output:
Positive aspects: Battery life
Negative aspects: Keyboard quality, touchpad responsiveness
Summarization Prompt:
[Article text about recent advancements in renewable energy...]
Output:
• Breakthrough in battery storage reduces costs by 40%
• Global renewable energy capacity grew by 15% in 2024
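If the sentiment response needs to feed downstream code, the aspect lists can be parsed into a dictionary. A minimal sketch, assuming the model follows the "Positive aspects:" / "Negative aspects:" format shown above (`parse_aspects` is a hypothetical helper):

```python
def parse_aspects(output: str) -> dict:
    """Split an aspect-style sentiment response into positive/negative lists."""
    aspects = {"positive": [], "negative": []}
    for line in output.splitlines():
        label, _, items = line.partition(":")
        key = label.strip().lower()
        if key.startswith("positive"):
            aspects["positive"] = [i.strip() for i in items.split(",") if i.strip()]
        elif key.startswith("negative"):
            aspects["negative"] = [i.strip() for i in items.split(",") if i.strip()]
    return aspects

sample = ("Positive aspects: Battery life\n"
          "Negative aspects: Keyboard quality, touchpad responsiveness")
print(parse_aspects(sample))
```

Prompting the model for a fixed output format, then parsing it, is more reliable than parsing free-form prose.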
3. Writing Professional Emails
LLMs can help craft well-structured emails with specific tones, saving time on business communication.
Prompt:
"Write a professional, apologetic email to a client, Mr. Johnson, explaining that unforeseen technical challenges require a two-week extension to the project timeline, with a new deadline of June 15th. Reassure him that quality standards will be maintained."
Output:
Subject: Update on Project Timeline
Dear Mr. Johnson,
I hope this message finds you well. I'm writing to inform you that we've encountered unforeseen technical challenges that require us to adjust the project timeline. After careful assessment, we've determined that we'll need approximately two additional weeks to resolve these issues while maintaining our quality standards.
Please rest assured that we've already allocated additional resources to address these challenges, and our team is working diligently to complete the project by the new deadline of June 15th.
We sincerely apologize for this adjustment and appreciate your understanding. Should you have any questions or require additional information, please don't hesitate to contact me directly.
Best regards,
[Your Name]
[Your Position]
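Email prompts like the one above are easy to template, which keeps tone and structure consistent across messages. A minimal sketch (`build_email_prompt` and its parameters are illustrative, not part of any library):

```python
def build_email_prompt(recipient: str, topic: str, tone: str = "professional") -> str:
    """Assemble a reusable email-writing prompt from a few fields."""
    return (
        f"Write a {tone} email to {recipient} about the following: {topic}. "
        "Include a clear subject line, a brief apology if the news is negative, "
        "and a closing that invites follow-up questions."
    )

prompt = build_email_prompt(
    "Mr. Johnson",
    "a two-week project delay caused by technical challenges; new deadline June 15th",
)
print(prompt)
```

The resulting string can be sent to any of the model pipelines shown in this chapter.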
4. Coding Assistance
LLMs excel at explaining, generating, and debugging code across multiple programming languages.
Code Generation Prompt:
"Write a Python function that takes a list of numbers and returns a dictionary with their mean, median, and mode. Include type hints and a docstring, and handle the case where no mode can be computed."
Output:
from statistics import StatisticsError, mean, median, mode
from typing import Dict, List, Optional

def calculate_stats(numbers: List[float]) -> Dict[str, Optional[float]]:
    """Calculate basic statistics from a list of numbers.

    Args:
        numbers: List of numeric values

    Returns:
        Dictionary with 'mean', 'median', and 'mode' keys
    """
    try:
        mode_val = mode(numbers)
    except StatisticsError:
        mode_val = None
    return {
        "mean": mean(numbers),
        "median": median(numbers),
        "mode": mode_val,
    }
Debugging Prompt:
def calculate_discount(price, discount):
    return price - (price * discount)

# When called with calculate_discount(100, 0.2) it works,
# but with calculate_discount(100, 20) it gives a wrong result
Output:
# The function expects discount as a decimal (0.2 for 20%),
# but sometimes receives a percentage (20 for 20%).

# Solution: document the expectation and normalize the input
def calculate_discount(price: float, discount: float) -> float:
    """Calculate price after discount.

    Args:
        discount: Decimal (e.g., 0.2 for 20% off)
    """
    if discount > 1:  # Assume a percentage was passed
        discount /= 100
    return price * (1 - discount)
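With the input normalized this way, both call styles from the bug report return the same value. Repeating the corrected function here as a self-contained check:

```python
def calculate_discount(price: float, discount: float) -> float:
    """Calculate price after discount; accepts 0.2 or 20 for 20% off."""
    if discount > 1:  # Assume a percentage was passed
        discount /= 100
    return price * (1 - discount)

print(calculate_discount(100, 0.2))  # 80.0
print(calculate_discount(100, 20))   # 80.0
```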
5. Study Buddy & Learning Assistance
LLMs can simplify complex topics, generate study materials, and create practice questions.
Simplification Prompt:
"Explain quantum computing to a complete beginner using simple, everyday analogies. Cover superposition and entanglement without any mathematics."
Output:
Quantum computing is like having a magical notebook that can be in multiple states at once. While regular computers use bits (like light switches that are either ON or OFF), quantum computers use qubits that can be ON, OFF, or both at the same time—this is called superposition.
Imagine spinning a coin—while it's spinning, it's not just heads or tails, but in a mix of both states. Qubits work similarly until you "look" at them (measure them), when they settle into one state.
Entanglement is like having two magical coins that are linked—no matter how far apart they are, if you spin one and it lands heads, the other will instantly land tails. This spooky connection helps quantum computers solve certain problems much faster than regular computers.
Quiz Generation Prompt:
"Create 3 multiple-choice questions about machine learning basics, each with 3 options, and list the correct answers at the end."
Output:
1. What is the main difference between supervised and unsupervised learning?
- A) Supervised learning uses labeled data, unsupervised doesn't
- B) Supervised learning is faster
- C) Unsupervised learning always gives better results
2. What are neural networks loosely inspired by?
- A) The structure of atoms
- B) The human brain's neurons
- C) Computer processors
3. Overfitting occurs when a model:
- A) Performs well on training data but poorly on new data
- B) Is too simple to capture patterns
- C) Has too few parameters
Correct answers: 1-A, 2-B, 3-A
6. Designing Chatbots
LLMs can power conversational agents with predefined behaviors and personalities.
Chatbot Prompt:
You are a friendly customer support bot for TechGadgets. Follow these guidelines:
- Greet customers warmly
- Answer questions about products, shipping, and returns
- Escalate complex issues to human agents
- Maintain a helpful but professional tone
The customer says: "Hi, I ordered a wireless charger 3 days ago but haven't received any shipping update. Can you help?"
Output:
"Hello! Thanks for reaching out to TechGadgets support. I'd be happy to check on your order. Could you please share your order number so I can look up the details? Our standard shipping usually takes 3-5 business days, but I can check the current status for you."
Python Implementation:
from transformers import Conversation, pipeline

# Initialize the chatbot pipeline
chatbot = pipeline("conversational", model="facebook/blenderbot-400M-distill")

# BlenderBot has no separate system role, so the behavior instructions
# are prepended to the user's first message
system_prompt = """You are a friendly customer support bot for TechGadgets..."""
conversation = Conversation(system_prompt + "\nCustomer: Hi, I ordered a wireless charger...")

# Generate and print the bot's reply
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
7. Content Generation
LLMs can assist with creating blog posts, social media content, and other marketing materials.
Blog Post Prompt:
"Write a short blog post with an engaging title about why remote work benefits both employees and employers. Include statistics and real-world examples."
Output:
The Remote Work Revolution: Why Everyone Wins
The traditional 9-to-5 office model is undergoing a dramatic transformation. According to a 2024 Global Workplace Report, 72% of knowledge workers now operate remotely at least part-time—and both employees and employers are reaping the benefits.
For employees, remote work offers unparalleled flexibility. A Stanford study found that remote workers reported 25% less stress and 50% lower turnover rates. The elimination of commutes (saving the average worker 200+ hours annually) translates directly into better work-life balance and more time for family, hobbies, and self-care.
Employers benefit too. Companies report 15-20% productivity boosts from remote teams, along with significant real estate savings. Tech giant Dell saved $12 million annually by reducing office space, while maintaining employee satisfaction scores above 90%.
8. Data Analysis Support
LLMs can help interpret data, generate insights, and even suggest visualizations.
Analysis Prompt:
"Analyze the following quarterly business metrics. Summarize the key insights and give recommendations:
- Q1 Revenue: $1.2M (15% growth YoY)
- Q2 Revenue: $1.1M (8% growth YoY)
- Customer acquisition cost increased from $50 to $75
- Retention rate dropped from 85% to 78%
- New product line contributed 25% of Q2 revenue
Output:
Key Insights:
- Revenue growth is slowing despite new product success (25% contribution)
- Increasing acquisition costs and declining retention suggest customer satisfaction issues
- The new product line is performing well but may be cannibalizing existing products
Recommendations:
- Investigate reasons for retention drop (survey customers, analyze support tickets)
- Allocate more budget to retain existing customers rather than acquire new ones
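Metrics like these usually live in code or a database, so it helps to generate the analysis prompt from the data rather than retype it. A minimal sketch (`build_analysis_prompt` and the exact wording are illustrative):

```python
def build_analysis_prompt(metrics: dict) -> str:
    """Render a metrics dict as a bulleted analysis prompt."""
    lines = [f"- {name}: {value}" for name, value in metrics.items()]
    return (
        "Analyze these quarterly business metrics and provide key insights "
        "and recommendations:\n" + "\n".join(lines)
    )

metrics = {
    "Q1 Revenue": "$1.2M (15% growth YoY)",
    "Q2 Revenue": "$1.1M (8% growth YoY)",
    "Retention rate": "dropped from 85% to 78%",
}
print(build_analysis_prompt(metrics))
```

Generating the prompt from data keeps it in sync with the numbers and makes the analysis repeatable each quarter.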
9. Prompt Engineering Workflow
Effective prompt engineering follows an iterative process of refinement and testing.
Best Practices:
- Start specific: Clearly define the task and desired output format
- Provide examples: Show the model what good output looks like
- Use constraints: Limit response length or format when needed
- Iterate: Test multiple prompt variations to find what works best
- Evaluate: Check outputs for accuracy, completeness, and relevance
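The iterate-and-evaluate steps above can be sketched in code: run several prompt variants through the same model call and score each output against simple checks. Here `run_model` is a stand-in for whatever LLM call you use, and the term-counting `evaluate` is a deliberately simple illustrative metric:

```python
def run_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your API of choice."""
    return f"(model response to: {prompt})"

def evaluate(output: str, required_terms: list) -> int:
    """Score an output by how many required terms it mentions."""
    return sum(term.lower() in output.lower() for term in required_terms)

variants = [
    "Tell me about machine learning",
    "Explain supervised machine learning to a business executive in 3 paragraphs",
]
required = ["supervised", "business"]

# Score each variant and keep the best one
scores = {p: evaluate(run_model(p), required) for p in variants}
best = max(scores, key=scores.get)
print(best)
```

In practice you would replace the term check with task-specific evaluation (accuracy against labeled examples, format validation, or human review).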
Prompt Refinement Example:
Initial Prompt:
Tell me about machine learning
Refined Prompt:
Explain supervised machine learning to a business executive in 3 paragraphs. Focus on practical applications in customer analytics and use analogies from everyday life. Avoid mathematical terminology.
Further Refined:
You are explaining supervised machine learning to a retail
company's CEO who has no technical background. Structure your
response as:
1) Core concept analogy (like teaching a child with examples)
2) Two business use cases from their industry (customer
segmentation and demand forecasting)
3) Implementation requirements (data needed, timeline, expected
outcomes)
Keep each section under 100 words.