
Writing Good Prompts | Prompt Engineering: Master the Language of AI


Chapter 3: Writing Good Prompts


The Art of Prompt Engineering

Crafting effective prompts is essential for getting the most out of Large Language Models. This module covers advanced techniques to improve your prompt writing skills and achieve better results.

Learning Objectives:

  • Master techniques like delimiters, structured output, and Chain of Thought
  • Learn to specify style, tone, and conditions in your prompts
  • Understand few-shot prompting and instructing the model to work out its own solution
  • Recognize and avoid common prompt writing mistakes
  • Develop strategies for testing and refining your prompts

Using Delimiters

Delimiters help clearly separate instructions from input data, reducing ambiguity and improving model performance.

Common Delimiters

  • Triple quotes: """ for multi-line separation
  • Triple dashes: --- for section separation
  • XML tags: <input> and </input>
  • Section headers: ### Instruction:

Example with Delimiters

"""
Translate the following text to French. 
Maintain a formal tone and use professional vocabulary.
"""

Text to translate:
"The quarterly financial report shows a 15% increase in revenue compared to last year."
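The example above can also be assembled programmatically. The sketch below is a minimal, hypothetical helper (the function name and layout are illustrative, not part of any API) that wraps the instruction in triple quotes and labels the input section:

```python
def build_delimited_prompt(instruction: str, text: str) -> str:
    """Wrap the instruction in triple-quote delimiters and label the input.

    Hypothetical helper: the delimiter style mirrors the example above.
    """
    return (
        '"""\n'
        f"{instruction}\n"
        '"""\n\n'
        "Text to translate:\n"
        f'"{text}"'
    )

prompt = build_delimited_prompt(
    "Translate the following text to French.\n"
    "Maintain a formal tone and use professional vocabulary.",
    "The quarterly financial report shows a 15% increase in revenue "
    "compared to last year.",
)
print(prompt)
```

Because the instruction and the data are built separately, user-supplied text can never be mistaken for part of the instruction.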

Delimiter Effectiveness

  • Without delimiters: higher chance of instruction confusion (accuracy: 68%)
  • With basic delimiters: clearer instruction separation (accuracy: 83%)
  • With structured delimiters: explicit section labeling (accuracy: 94%)

Structured Output

Requesting structured output formats enables better integration with other systems and more predictable responses.

JSON Output Example

"""  
Extract the following information from the text below as JSON:
- company_name
- revenue_change
- comparison_period
- is_positive

Use this structure:
{
  "company_name": string,
  "revenue_change": string,
  "comparison_period": string,
  "is_positive": boolean
}

Text: 
"Microsoft reported a 12% revenue increase compared to Q2 2024."
"""

Expected Output

{
  "company_name": "Microsoft",
  "revenue_change": "12%",
  "comparison_period": "Q2 2024",
  "is_positive": true
}

Code Example: Generating Structured Output
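A minimal sketch of the round trip: build the extraction prompt, then parse and validate the model's JSON reply. The API call itself is stubbed with a canned response string so the example stays self-contained; in practice that string would come from your LLM provider.

```python
import json

# Prompt asking for JSON that matches the schema shown above.
# In a real application this string is what you would send to the model.
prompt = """Extract the following information from the text below as JSON:
- company_name
- revenue_change
- comparison_period
- is_positive

Text:
"Microsoft reported a 12% revenue increase compared to Q2 2024."
"""

# Canned stand-in for the model's reply, so the example runs offline.
raw_response = (
    '{"company_name": "Microsoft", "revenue_change": "12%", '
    '"comparison_period": "Q2 2024", "is_positive": true}'
)

# json.loads raises ValueError if the model strays from valid JSON.
data = json.loads(raw_response)

# Validate that every requested field is present with the right type.
expected_types = {
    "company_name": str,
    "revenue_change": str,
    "comparison_period": str,
    "is_positive": bool,
}
for field, ftype in expected_types.items():
    assert isinstance(data[field], ftype), f"bad field: {field}"

print(data["company_name"], data["is_positive"])
```

Validating the parsed structure immediately, rather than trusting the model's formatting, is what makes structured output safe to feed into downstream systems.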

Style and Tone

Specifying style and tone helps tailor responses to your audience and use case.

Formal Tone

"Explain blockchain technology in a formal tone suitable for a business whitepaper."

Output will use professional vocabulary and complete sentences

Conversational

"Describe how photosynthesis works as if you're talking to a 10-year-old."

Output will be casual with simple analogies

Technical

"Provide a detailed technical explanation of SSL handshakes for network engineers."

Output will include technical terms and specifics

Tone Comparison

  • "Explain AI": generic, medium length, neutral tone
  • "Explain AI in simple terms": shorter sentences, basic vocabulary
  • "Explain AI technically": longer response, includes technical terms
  • "Explain AI like I'm 5": very simple, uses analogies

Chain of Thought (CoT)

Chain of Thought prompting encourages the model to break down complex problems into steps, improving reasoning accuracy.

Basic CoT Example

"""  
A store has 12 apples. 5 are sold on Monday and 3 more on Tuesday. 
How many apples remain? 

Think step by step and show your work.
"""

Sample Output:

1. Start with 12 apples
2. Monday: 12 - 5 = 7 apples left
3. Tuesday: 7 - 3 = 4 apples left
4. Final answer: 4 apples remain
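One practical benefit of step-by-step output is that each intermediate result can be checked mechanically. The sketch below mirrors the apple arithmetic as a list of steps, so a verifier can confirm every line of the chain rather than only the final answer:

```python
# Each (description, sold) pair mirrors one step of the chain of thought,
# so intermediate results can be verified one by one.
start = 12
steps = [
    ("Monday: 5 sold", 5),
    ("Tuesday: 3 sold", 3),
]

remaining = start
trace = []
for label, sold in steps:
    remaining -= sold
    trace.append(f"{label} -> {remaining} left")

assert remaining == 4  # matches the model's final answer
print("\n".join(trace))
```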

Advanced CoT Application

"""  
Analyze whether this tweet's sentiment is positive, neutral, or negative.
Show your reasoning step by step before giving the final answer.

Tweet: 
"Just tried the new café downtown. The coffee was amazing but the service was painfully slow."

1. Identify positive aspects
2. Identify negative aspects
3. Weigh their importance
4. Determine overall sentiment
"""

CoT Effectiveness

Without CoT

  • Direct answers may skip important reasoning steps
  • Higher chance of logical errors on complex problems
  • Harder to debug incorrect responses

With CoT

  • Forces systematic problem-solving approach
  • Makes model's reasoning transparent
  • Allows verification of intermediate steps
  • Improves accuracy on multi-step problems by ~30%

Common Prompt Writing Mistakes

Avoid these frequent errors that lead to poor model performance.

Problematic Patterns

  • Vagueness

    "Write about computers" (What aspect? For whom?)

  • Overly Complex

    Multi-part prompts without clear structure

  • Lack of Examples

    Not showing desired format for complex outputs

Improved Alternatives

  • Specificity

    "Explain how CPU caches work for software developers"

  • Structured

    Break complex requests into numbered steps

  • Few-Shot

    Include 1-2 examples of desired output format
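The few-shot pattern above can be sketched in code. This is a hypothetical sentiment-classification prompt: two labeled examples demonstrate the exact output format before the real input appears, and the prompt ends at the point where the model should continue:

```python
# Two labeled examples show the model the desired "Review:/Sentiment:"
# format before the real query. The task and labels are illustrative.
examples = [
    ("I love this phone, the battery lasts all day.", "positive"),
    ("The package arrived late and the box was crushed.", "negative"),
]

def build_few_shot_prompt(examples, query):
    parts = ["Classify the sentiment of each review as positive or negative.\n"]
    for text, label in examples:
        parts.append(f'Review: "{text}"\nSentiment: {label}\n')
    # End mid-pattern so the model's natural continuation is the label.
    parts.append(f'Review: "{query}"\nSentiment:')
    return "\n".join(parts)

prompt = build_few_shot_prompt(examples, "Setup was quick and painless.")
print(prompt)
```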

Prompt Optimization Tips

Strategies to refine your prompts for better performance.

1. Start Specific

Begin with a narrow focus, then broaden if needed. Specific prompts yield more relevant responses.

2. Iterate

Treat prompts as prototypes: test, evaluate, and refine them based on outputs.

3. Balance Length

Provide enough context but avoid unnecessary details that may distract the model.

Prompt Structure Visualization


Instruction

Clear task definition

Context

Background information

Examples

Few-shot demonstrations

Constraints

Format/length rules
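The four components above can be assembled into a single prompt. A minimal sketch, assuming labeled sections in the `### Header:` style shown earlier in this chapter (the labels are illustrative, not a required syntax):

```python
# Assemble a prompt from the four components; empty sections are skipped.
def assemble_prompt(instruction, context="", examples="", constraints=""):
    sections = [
        ("Instruction", instruction),
        ("Context", context),
        ("Examples", examples),
        ("Constraints", constraints),
    ]
    return "\n\n".join(f"### {name}:\n{body}" for name, body in sections if body)

prompt = assemble_prompt(
    instruction="Summarize the text below in two sentences.",
    context="The summary is for a weekly internal newsletter.",
    constraints="Plain language, no bullet points, under 50 words.",
)
print(prompt)
```

Keeping the components separate in code makes it easy to iterate on one part (say, the constraints) without rewriting the whole prompt.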

Testing and Evaluating Prompts

Systematic evaluation ensures your prompts produce reliable, high-quality outputs.

Evaluation Criteria

  • Accuracy: Is the information correct?
  • Relevance: Does it address the request?
  • Consistency: Similar inputs → similar outputs?
  • Completeness: All aspects covered?
  • Bias: Any problematic biases in outputs?

Testing Approach

  • Create a test set of diverse inputs
  • Run multiple trials with each prompt
  • Vary temperature settings to test stability
  • Compare outputs against ground truth when possible
  • Document successful and failed cases

Code Example: Automated Prompt Testing
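A sketch of an automated harness following the testing approach above: run each prompt against a small test set for multiple trials and score outputs against ground truth. `call_model` is a stub standing in for a real API call, and the test set is illustrative:

```python
def call_model(prompt: str, temperature: float = 0.0) -> str:
    # Stub: a real implementation would call your LLM provider here.
    return "positive" if "amazing" in prompt else "negative"

# Small illustrative test set of (input, expected_label) pairs.
test_set = [
    ("The coffee was amazing.", "positive"),
    ("Service was painfully slow.", "negative"),
]

def evaluate(prompt_template: str, trials: int = 3) -> float:
    """Return the fraction of correct outputs across all inputs and trials."""
    correct = 0
    total = 0
    for text, expected in test_set:
        for _ in range(trials):  # repeated trials expose inconsistency
            output = call_model(prompt_template.format(text=text))
            correct += output == expected
            total += 1
    return correct / total

accuracy = evaluate('Classify the sentiment of: "{text}"\nAnswer:')
print(f"accuracy: {accuracy:.0%}")
```

Swapping the stub for a real client, and logging failed cases alongside the score, turns this into the "document successful and failed cases" step listed above.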

Summary

Effective prompt engineering combines art and science. By applying these techniques systematically, you can dramatically improve your results with Large Language Models.

Key Takeaways

  • Use delimiters to separate instructions from input data
  • Request structured outputs when integrating with other systems
  • Specify style and tone to match your audience
  • Employ Chain of Thought for complex reasoning tasks
  • Test prompts systematically and iterate based on results
  • Avoid common mistakes like vagueness and over-complexity
