Free Chapter · 12 min · Chapter 2/5

Your First High-Quality Prompt

Master the three fundamental techniques: Role Prompting, Few-shot Examples, and Format Control

Chapter Learning Objectives

1. Understand why Role Prompting works and how to write one using the four key elements

2. Use Few-shot examples to convey format, style, and quality requirements

3. Control output structure with tables, lists, templates, and JSON

4. Combine the three techniques for a multiplicative improvement in output quality

5. Recognize and avoid the most common prompting mistakes

In the previous chapter, we understood how LLMs work. Now let's get practical—master the three most fundamental and important Prompt techniques: Role Prompting, Few-shot Examples, and Format Control. Using any one individually can improve answer quality, but combining them multiplies the effect.

Technique One: Role Prompting

This is the simplest yet most effective technique. Telling the AI what role it should play at the beginning of a conversation significantly improves the quality of its responses. In practice, a well-chosen role prompt makes model outputs noticeably more professional and more relevant to your situation.

Why Does Role Prompting Work?

When you say 'You are a senior marketing expert,' the LLM adjusts its prediction direction—subsequently generated content will more frequently reference knowledge, terminology, and ways of thinking from the marketing field. It's like putting a generalist student into 'marketing' exam mode, activating the knowledge pathways most relevant to that role.

From a technical perspective: LLM training data contains vast amounts of content written by different roles (diagnostic reports by doctors, legal opinions by lawyers, code comments by programmers...). Role prompting is essentially telling the model, 'Please refer to the writing style of the XX role next,' helping the model narrow the prediction space and improve output quality.

The Four Elements of Role Prompting

**Professional Field**: Clearly state its area of expertise. The more specific, the better. For example, 'B2B SaaS marketing' is far more precise than 'marketing,' and 'brand marketing focused on TikTok short videos' narrows the field even further when that matches your scenario.

**Experience Level**: 'A senior expert with 10 years of experience' elicits deeper, more insightful answers than 'an expert.' Numbers matter—'15 years of experience' prompts the model to draw upon more advanced knowledge and more mature expression.

**Communication Style**: 'Use simple, straightforward language, as if explaining to a high school student' or 'Use a professional academic style, including terminology and citations'—set this based on your audience. Style directly influences vocabulary choice, sentence complexity, and information density.

**Behavioral Constraints**: Tell the AI what it should *not* do. For example, 'Avoid clichés,' 'Say you don't know instead of guessing when uncertain,' 'Answers must be fact-based, do not fabricate data.' Constraints can significantly reduce low-quality output.

Example Comparison

**Generic Prompt**: 'How to do SEO for a website?' → You'll get a generic SEO introductory guide.

**Role Prompt**: 'You are an SEO expert with 8 years of experience, specializing in optimizing Chinese content websites. I run a newly launched AI tool review site with about 5,000 monthly pageviews. Please create a 6-month SEO growth strategy for me, aiming for 100,000 monthly visits. Please list it in checklist form, with specific action steps and expected effects for each item.' → You'll get a tailored, actionable growth plan.

The difference isn't just 'more detailed'; the depth of thought and professionalism in the answer will be completely different. Role prompting shifts the model from 'generic answering' to 'expert consultation.'

Common Role Prompt Templates

**Technical Scenario**: 'You are a {specific direction} engineer with {X} years of experience, having previously been responsible for {specific project} at a {company type} company.'

**Business Scenario**: 'You are a {position} in the {industry} field, skilled in {specific skill}, and your clients are primarily {customer profile}.'

**Educational Scenario**: 'You are an experienced {subject} teacher, teaching {student level} students. Please explain using a {style} approach.'

**Writing Scenario**: 'You are a senior {position} at {media name}, skilled in {writing style}, and your readers are {reader profile}.'
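The templates above can be filled programmatically when you reuse them often. Below is a minimal sketch using Python's `str.format`; the field names (`direction`, `years`, `project`, `company_type`) and the sample values are illustrative, not a fixed API.

```python
# Sketch: filling a role-prompt template with concrete details.
# Field names and example values are hypothetical.
TECH_TEMPLATE = (
    "You are a {direction} engineer with {years} years of experience, "
    "having previously been responsible for {project} at a {company_type} company."
)

def build_role_prompt(template: str, **fields: str) -> str:
    """Substitute concrete details into a role-prompt template."""
    return template.format(**fields)

prompt = build_role_prompt(
    TECH_TEMPLATE,
    direction="Python backend",
    years="5",
    project="a high-traffic payments service",
    company_type="fintech",
)
print(prompt)
```

Keeping templates as named constants makes it easy to maintain one well-tested role prompt per scenario and swap in specifics per task.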

Practical Tip

The golden rule of role prompting: The more specific, the better. 'SEO expert' is not as good as 'expert focused on Google SEO for independent websites.' 'Programmer' is not as good as 'senior engineer with 5 years of Python backend development experience.' Specific role prompts activate more precise knowledge in the model.

Technique Two: Providing Examples (Few-shot Learning)

Show the AI a few examples of the output you want, and it will understand your format, style, and quality requirements. This is a qualitative leap from 'describing needs in words' to 'demonstrating needs with examples'—'Show, don't tell' applies equally in Prompt Engineering.

Why is Few-shot So Effective?

Human communication also heavily relies on examples. When you tell an intern, 'Write a report similar to the last one,' that 'last report' is a Few-shot example. Similarly, by analyzing your examples, the LLM can precisely understand your implicit requirements for format, tone, information density, level of detail, etc., which are difficult to fully describe in words.

Zero-shot vs One-shot vs Few-shot

**Zero-shot**: Provide no examples, just the instruction. Suitable for simple, clear tasks. E.g., 'Translate into Chinese: Thank you.'

**One-shot**: Provide 1 example. Suitable for tasks with less complex formats.

**Few-shot**: Provide 2-5 examples. Suitable for tasks with complex formats or specific style requirements.

Practical Example One: Standardized Format Output

'Please write introductions for AI tools in the following format:

Example 1: ChatGPT | All-purpose AI assistant by OpenAI | Powerful language understanding and generation, supports multimodal input | Best for: Writing, programming, analysis | Rating: 4.8/5

Example 2: Midjourney | Top-tier AI image generation tool | Strong artistic sense, rich details, diverse styles | Best for: Design, creativity, illustration | Rating: 4.7/5

Now please generate introductions for: DeepSeek, Kimi, Cursor'

With just 2 examples, the AI precisely understands your requirements: the pipe-separated format, information density per field, and the rating scale.
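When you generate many prompts like this, assembling them from an example list keeps the format consistent. A minimal sketch, with the example strings abbreviated from the prompt above; the function name and structure are illustrative:

```python
# Sketch: assembling a Few-shot prompt from an instruction,
# a list of examples, and the final request.
def build_few_shot_prompt(instruction: str, examples: list[str], targets: list[str]) -> str:
    """Join the instruction, numbered examples, and the request into one prompt."""
    lines = [instruction, ""]
    for i, example in enumerate(examples, start=1):
        lines.append(f"Example {i}: {example}")
    lines.append("")
    lines.append("Now please generate introductions for: " + ", ".join(targets))
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Please write introductions for AI tools in the following format:",
    [
        "ChatGPT | All-purpose AI assistant by OpenAI | ... | Rating: 4.8/5",
        "Midjourney | Top-tier AI image generation tool | ... | Rating: 4.7/5",
    ],
    ["DeepSeek", "Kimi", "Cursor"],
)
print(prompt)
```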

Practical Example Two: Style Transfer

If you need the AI to mimic a specific writing style, Few-shot is the most effective method. Provide 2-3 sample passages you approve of as examples, then ask the AI to write new content in the same style. The model will automatically analyze features like sentence structure, vocabulary choice, paragraph rhythm, etc., from the examples and imitate them.

Practical Example Three: Classification Task

'Please judge the sentiment of the following user comments:

Comment: 'This product is amazing, completely exceeded expectations!' → Positive

Comment: 'It's okay, nothing special' → Neutral

Comment: 'The return process is too complicated, and customer service attitude is poor' → Negative

Comment: 'Good value for money, but the packaging is a bit shoddy' → ?'

By showing examples for 'Positive,' 'Neutral,' and 'Negative' classifications, the model can precisely understand your classification criteria and boundaries.
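A classification prompt like this one can also be built from a list of labeled examples, which makes it easy to keep one example per category. A sketch, reusing the comments from the example above:

```python
# Sketch: building the sentiment-classification prompt from
# one labeled example per category.
LABELED_EXAMPLES = [
    ("This product is amazing, completely exceeded expectations!", "Positive"),
    ("It's okay, nothing special", "Neutral"),
    ("The return process is too complicated, and customer service attitude is poor", "Negative"),
]

def build_classification_prompt(labeled, query: str) -> str:
    """Show each labeled example, then append the unlabeled query."""
    lines = ["Please judge the sentiment of the following user comments:", ""]
    for text, label in labeled:
        lines.append(f"Comment: '{text}' → {label}")
    lines.append(f"Comment: '{query}' → ")
    return "\n".join(lines)

prompt = build_classification_prompt(
    LABELED_EXAMPLES,
    "Good value for money, but the packaging is a bit shoddy",
)
print(prompt)
```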

Few-shot Considerations

**Number of Examples**: 2-5 is optimal. One may not be representative enough; more than 5 wastes tokens and may cause the model to overfit a particular pattern.

**Example Diversity**: Examples should cover different types of inputs. If doing a classification task, provide at least one example per category.

**Example Quality**: Examples represent your quality standard. Giving a sloppy example leads to sloppy output from the AI. It's worth spending time refining examples.

**Example Order**: Research shows the order of examples influences output. It's recommended to place the most important or representative examples at the very beginning and end (primacy effect + recency effect).
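The ordering advice above (strongest examples first and last) can be sketched as a small helper. This assumes your examples are already ranked by importance; the function name is illustrative:

```python
# Sketch: place the most important example first and the
# second most important last (primacy + recency).
def order_examples(ranked: list[str]) -> list[str]:
    """Reorder importance-ranked examples so the top two bracket the rest."""
    if len(ranked) < 3:
        return ranked
    first, second, *rest = ranked
    return [first, *rest, second]

print(order_examples(["best", "second", "a", "b"]))
```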

Caution

Don't give too many Few-shot examples—2-5 is enough. Too many examples consume the context window and can scatter the AI's attention. Also, pay attention to consistency—if examples have inconsistent formats, the AI will be confused about which format to follow.

Technique Three: Specifying Output Format

Directly tell the AI what format you want for the output: table, list, JSON, step-by-step, etc. Format instructions may seem simple, but they have a huge impact on output quality—they don't just change the layout; they change how the model organizes information.

Common Format Instructions and Use Cases

**'Please compare using a Markdown table'**—Best for comparative analysis. Forces the model to analyze systematically across multiple dimensions instead of listing items randomly.

**'Please list in numbered steps 1-2-3'**—Best for operational guides. Numbered steps make processes clearer and easier to check off during execution.

**'Please give the conclusion first, then expand the analysis'**—Best for quick decision-making scenarios. Borrows the inverted pyramid structure of the 'Pyramid Principle,' saving the decision-maker's time.

**'Please keep it under 200 words'**—Essential when you need concise output. Without length limits, models tend towards 'it's better to say a bit more' verbose answers.

**'Please output in JSON format'**—Essential when output needs to be parsed by a program. Specifying JSON key names and data types avoids parsing errors.

**'Please use the STAR format'**—Situation/Task/Action/Result, suitable for case analysis and experience summaries.
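For the JSON case in particular, it pays to parse defensively rather than trust the reply blindly, since models occasionally wrap JSON in extra text. A sketch using the standard `json` module; the required key names are an illustrative schema, not a real one:

```python
import json

# Illustrative schema: the keys you told the model to emit.
REQUIRED_KEYS = {"name", "rating"}

def parse_model_json(raw: str):
    """Return the parsed dict, or None if the reply is not valid JSON
    or is missing a required key."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED_KEYS.issubset(data):
        return None
    return data

print(parse_model_json('{"name": "DeepSeek", "rating": 4.6}'))
print(parse_model_json("Sure! Here is the JSON..."))
```

Returning `None` instead of raising lets a batch pipeline log and retry the failed item rather than crash.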

Advanced Format Technique: Output Templates

For complex output needs, directly providing a 'fill-in-the-blank template' is more efficient than describing format requirements. For example:

'Please output the product analysis report using the following template:
## Product Name: [Product Name]
### Core Features (3-5):
- [Feature 1]: [One-sentence description]
### Target Users: [Profile description]
### Competitive Advantage: [1-2 sentences]
### Main Risks: [1-2 sentences]
### Recommendation Score: [1-5 stars] + [Reason]'

A template eliminates the model's need to guess the desired format; it simply fills in the content. This is especially useful for batch content generation—ensuring consistent format across all outputs.
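When generating content in batches against a template like this, a quick check that the reply contains every section header catches malformed outputs early. A minimal sketch; the section strings mirror the template above:

```python
# Sketch: verify a model reply followed the fill-in template
# by checking that every section header is present.
TEMPLATE_SECTIONS = [
    "## Product Name:",
    "### Core Features",
    "### Target Users:",
    "### Competitive Advantage:",
    "### Main Risks:",
    "### Recommendation Score:",
]

def follows_template(output: str) -> bool:
    """True if every template section header appears in the output."""
    return all(section in output for section in TEMPLATE_SECTIONS)
```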

Combined Use: The Multiplicative Effect of the Three Techniques

Each technique is effective on its own, but using them together creates a 'multiplicative effect'—the result isn't 1+1+1=3, but more like 1×2×2=4 or even higher.

Combination Example: AI Product Review

'You are an AI product analyst with 5 years of experience, having published multiple AI tool reviews on GeekPark and SSPAI (Role Prompting).

Please compare the performance of ChatGPT, Claude, and DeepSeek across the following five dimensions using a Markdown table (Format Control).

Refer to the following review dimensions and scoring example (Provide Example):
| Dimension | Description | Scoring Criteria |
| --- | --- | --- |
| Chinese Writing | Natural language sense and expression quality | 1-5 points, 5 being best |

Five dimensions: Chinese Writing, Coding Ability, Long-text Analysis, Creative Divergence, Cost-effectiveness. Provide a score and a one-sentence comment for each dimension. Finally, provide a summary recommendation.'

This Prompt combines Role (AI product analyst) + Example (table dimension demonstration) + Format (Markdown table + scoring + summary), resulting in output quality far surpassing any single technique.
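Programmatically, combining the three techniques amounts to joining a role block, a format block, and an example block into one prompt. A sketch with illustrative strings (abbreviated from the review prompt above):

```python
# Sketch: compose role + format + example + task into one prompt,
# separated by blank lines. All strings are illustrative.
def combine_prompt(role: str, format_instruction: str, example: str, task: str) -> str:
    """Join the three technique blocks and the task with blank lines."""
    return "\n\n".join([role, format_instruction, example, task])

prompt = combine_prompt(
    "You are an AI product analyst with 5 years of experience.",
    "Please compare the models in a Markdown table across five dimensions.",
    "| Dimension | Description | Scoring Criteria |\n| Chinese Writing | ... | 1-5 points |",
    "Models: ChatGPT, Claude, DeepSeek. Score each dimension and add a summary.",
)
print(prompt)
```

Keeping the blocks as separate variables also makes it easy to A/B test one technique at a time by swapping a single block.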

Combination Example: Weekly Report Generation

'You are an efficient project management assistant (Role). Please organize the following work logs into a weekly report with the following format (Format): ## Completed This Week ## Planned for Next Week ## Items Requiring Coordination. Refer to the level of detail in this example (Example): ## Completed This Week - Completed user registration module development, passed QA testing, deployed to staging environment...'

Technique Priority

If you only have time for one technique, use Role Prompting. If you have time for two, add Format Control. If you have ample time, add Few-shot Examples. Priority: Role Prompting > Format Control > Few-shot Examples.

Common Mistakes and Pitfall Avoidance Guide

Mistake One: Overly Vague Role Prompt

Bad: 'You are an expert' → An expert in what? Good: 'You are a Google Ads expert focused on e-commerce independent websites, primarily serving DTC brands.'

Mistake Two: Examples Not Matching the Actual Task

If your examples are for writing product introductions, but you actually want the AI to write user stories, the model will be confused about whether to follow the examples or the instruction. Ensure examples and the task are of the same type.

Mistake Three: Conflicting Format Instructions and Content

'Please provide a detailed analysis in 200 words... and give a complete implementation plan'—'200 words' and 'detailed analysis + complete plan' are contradictory. Either relax the word limit or reduce the level of detail required.

Mistake Four: Stuffing Too Many Requirements into One Prompt

When requirements are complex, instead of writing one super-long Prompt, break it into a multi-step conversation. Have the AI complete the first step, confirm satisfaction, then proceed to the next. This is much more effective than giving 10 requirements at once.

Important Reminder

Role prompting is not a cure-all. If you set the AI as a 'doctor' and ask for diagnostic advice, the AI might give seemingly professional but actually incorrect answers. For professional fields like medicine, law, and finance, AI output is for reference only and must be verified by professionals.

Three Technique Combination Strategy

- Role Prompting: sets the direction (+30% quality)
- Provide Examples: sets the format (+25% quality)
- Specify Output Format: sets the standard (+20% quality)
- Combined use: multiplicative effect

Prompt Quality Improvement Path

- Vague question (baseline)
- Add a role prompt (+30%)
- Add format control (+50%)
- Add Few-shot examples (+80%)
- Precise output

Few-shot Example Count vs. Effect

- 0 examples (Zero-shot)
- 1 example (One-shot)
- 2-3 examples (optimal range)
- 5+ examples (diminishing returns, wastes tokens)

After mastering these three foundational techniques, your AI usage efficiency will improve by at least 3x. In the next chapter, we'll learn the more powerful Chain-of-Thought technique—making the AI reason step-by-step like a human.
