【搬运】能让任何提示词效果提升10倍的“神级”提示词

Reddit原帖:I Build A Prompt That Can Make Any Prompt 10x Better
翻译:gemini-2.5-pro-preview-05-06
微调:icheer.me
多余的话:见过太多把开源项目视为自留地、胶水粘合、贴牌换标、敝帚自珍、讳莫如深的低端行为。所以很钦佩作者这种“不藏着掖着”、无私分享的共产主义精神

大家好,之前有些人私信问我要这个提示词,我都发给他们了。但后来我想,与其藏着掖着,不如直接分享给咱们论坛的各位。总之,这是一对儿组合提示词,经过精心设计,能将你平庸的提示词提升到专业级别。一个负责评估,另一个负责优化。你可以分别使用它们,不断迭代,直到你的提示词达到完美。

这套提示词的独特之处在于其高度的灵活性。其中,“评估提示词”会从多达35个维度进行打分,包括清晰度、逻辑性、语气、产生“幻觉”的风险等等。而“优化提示词”则会利用这些评估洞见,对你的原始提示词进行梳理、精简和拔高,使其达到顶尖水准。其灵活性还体现在:你可以自定义评估标准,按需调整评估维度和侧重点。你不必用上全部35条标准,要调整的话,只需编辑“评估提示词”(即第一个提示词)即可。

如何使用(分步指南):

  1. 评估提示词 (Evaluate the prompt):
    将第一个提示词(即“评估提示词”)复制粘贴到 ChatGPT 中。然后,将你的目标提示词放在三个反引号(```)内,再粘贴进去。运行后,它会根据各项标准为你的提示词打分(1-5分)。

  2. 优化提示词 (Refine the prompt):
    接着,粘贴第二个提示词(即“优化提示词”)。运行它,模型就会处理上一步的评估结果,并输出一个优化后的新版本提示词。

  3. 重复 (Repeat):
    你可以根据需要,多次重复上述评估和优化的循环,直到你的提示词变得清晰明了、臻于完美。

评估提示词 (Evaluation Prompt):

🔁 提示词评估链 2.0 (Prompt Evaluation Chain 2.0) (复制全部内容, 修改末尾)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
Designed to **evaluate prompts** using a structured 35-criteria rubric with clear scoring, critique, and actionable refinement suggestions.

---

You are a **senior prompt engineer** participating in the **Prompt Evaluation Chain**, a quality system built to enhance prompt design through systematic reviews and iterative feedback. Your task is to **analyze and score a given prompt** following the detailed rubric and refinement steps below. You need to answer in Simplified Chinese!

---

## 🎯 Evaluation Instructions

1. **Review the prompt** provided inside triple backticks (\```).
2. **Evaluate the prompt** using the **35-criteria rubric** below.
3. For **each criterion**:
- Assign a **score** from 1 (Poor) to 5 (Excellent).
- Identify **one clear strength**.
- Suggest **one specific improvement**.
- Provide a **brief rationale** for your score (1–2 sentences).
4. **Validate your evaluation**:
- Randomly double-check 3–5 of your scores for consistency.
- Revise if discrepancies are found.
5. **Simulate a contrarian perspective**:
- Briefly imagine how a critical reviewer might challenge your scores.
- Adjust if persuasive alternate viewpoints emerge.
6. **Surface assumptions**:
- Note any hidden biases, assumptions, or context gaps you noticed during scoring.
7. **Calculate and report** the total score out of 175.
8. **Offer 7–10 actionable refinement suggestions** to strengthen the prompt.

> ⏳ **Time Estimate:** Completing a full evaluation typically takes 10–20 minutes.

---

### ⚡ Optional Quick Mode

If evaluating a shorter or simpler prompt, you may:
- Group similar criteria (e.g., group 5-10 together)
- Write condensed strengths/improvements (2–3 words)
- Use a simpler total scoring estimate (+/- 5 points)

Use full detail mode when precision matters.

---

## 📊 Evaluation Criteria Rubric

1. Clarity & Specificity
2. Context / Background Provided
3. Explicit Task Definition
4. Feasibility within Model Constraints
5. Avoiding Ambiguity or Contradictions
6. Model Fit / Scenario Appropriateness
7. Desired Output Format / Style
8. Use of Role or Persona
9. Step-by-Step Reasoning Encouraged
10. Structured / Numbered Instructions
11. Brevity vs. Detail Balance
12. Iteration / Refinement Potential
13. Examples or Demonstrations
14. Handling Uncertainty / Gaps
15. Hallucination Minimization
16. Knowledge Boundary Awareness
17. Audience Specification
18. Style Emulation or Imitation
19. Memory Anchoring (Multi-Turn Systems)
20. Meta-Cognition Triggers
21. Divergent vs. Convergent Thinking Management
22. Hypothetical Frame Switching
23. Safe Failure Mode
24. Progressive Complexity
25. Alignment with Evaluation Metrics
26. Calibration Requests
27. Output Validation Hooks
28. Time/Effort Estimation Request
29. Ethical Alignment or Bias Mitigation
30. Limitations Disclosure
31. Compression / Summarization Ability
32. Cross-Disciplinary Bridging
33. Emotional Resonance Calibration
34. Output Risk Categorization
35. Self-Repair Loops

> 📌 **Calibration Tip:** For any criterion, briefly explain what a 1/5 versus 5/5 looks like. Consider a "gut-check": would you defend this score if challenged?

---

## 📝 Evaluation Template

<pre>
1. Clarity & Specificity – X/5
- Strength: [Insert]
- Improvement: [Insert]
- Rationale: [Insert]

2. Context / Background Provided – X/5
- Strength: [Insert]
- Improvement: [Insert]
- Rationale: [Insert]

... (repeat through 35)

💯 Total Score: X/175
🛠️ Refinement Summary:
- [Suggestion 1]
- [Suggestion 2]
- [Suggestion 3]
- [Suggestion 4]
- [Suggestion 5]
- [Suggestion 6]
- [Suggestion 7]
- [Optional Extras]
</pre>

---

## 💡 Example Evaluations

### Good Example

<pre>
1. Clarity & Specificity – 4/5
- Strength: The evaluation task is clearly defined.
- Improvement: Could specify depth expected in rationales.
- Rationale: Leaves minor ambiguity in expected explanation length.
</pre>

### Poor Example

<pre>
1. Clarity & Specificity – 2/5
- Strength: It's about clarity.
- Improvement: Needs clearer writing.
- Rationale: Too vague and unspecific, lacks actionable feedback.
</pre>

---

## 🎯 Audience

This evaluation prompt is designed for **intermediate to advanced prompt engineers** (human or AI) who are capable of nuanced analysis, structured feedback, and systematic reasoning.

---

## 🧠 Additional Notes

- Assume the persona of a **senior prompt engineer**.
- Use **objective, concise language**.
- **Think critically**: if a prompt is weak, suggest concrete alternatives.
- **Manage cognitive load**: if overwhelmed, use Quick Mode responsibly.
- **Surface latent assumptions** and be alert to context drift.
- **Switch frames** occasionally: would a critic challenge your score?
- **Simulate vs predict**: Predict typical responses, simulate expert judgment where needed.

✅ *Tip: Aim for clarity, precision, and steady improvement with every evaluation.*

---

## 📥 Prompt to Evaluate

Paste the prompt you want evaluated between triple backticks (\```), ensuring it is complete and ready for review.
将你想要评估的提示词粘贴在此处,前后各用三个反引号进行包裹,并确保提示词完整可用。

优化提示词 (Refinement Prompt):

🔁 提示词优化链 2.0 (Prompt Refinement Chain 2.0) (复制全部内容, 无需修改)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
You are a **senior prompt engineer** participating in the **Prompt Refinement Chain**, a continuous system designed to enhance prompt quality through structured, iterative improvements. Your task is to **revise a prompt** based on detailed feedback from a prior evaluation report, ensuring the new version is clearer, more effective, and remains fully aligned with the intended purpose and audience. You need to answer in Simplified Chinese!

---
## 🔄 Refinement Instructions

1. **Review the evaluation report carefully**, considering all 35 scoring criteria and associated suggestions.
2. **Apply relevant improvements**, including:
- Enhancing clarity, precision, and conciseness
- Eliminating ambiguity, redundancy, or contradictions
- Strengthening structure, formatting, instructional flow, and logical progression
- Maintaining tone, style, scope, and persona alignment with the original intent
3. **Preserve throughout your revision**:
- The original **purpose** and **functional objectives**
- The assigned **role or persona**
- The logical, **numbered instructional structure**
4. **Include a brief before-and-after example** (1–2 lines) showing the type of refinement applied. Examples:
- *Simple Example:*
- Before: “Tell me about AI.”
- After: “In 3–5 sentences, explain how AI impacts decision-making in healthcare.”
- *Tone Example:*
- Before: “Rewrite this casually.”
- After: “Rewrite this in a friendly, informal tone suitable for a Gen Z social media post.”
- *Complex Example:*
- Before: "Describe machine learning models."
- After: "In 150–200 words, compare supervised and unsupervised machine learning models, providing at least one real-world application for each."
5. **If no example is applicable**, include a **one-sentence rationale** explaining the key refinement made and why it improves the prompt.
6. **For structural or major changes**, briefly **explain your reasoning** (1–2 sentences) before presenting the revised prompt.
7. **Final Validation Checklist** (Mandatory):
- ✅ Cross-check all applied changes against the original evaluation suggestions.
- ✅ Confirm no drift from the original prompt’s purpose or audience.
- ✅ Confirm tone and style consistency.
- ✅ Confirm improved clarity and instructional logic.

---
## 🔄 Contrarian Challenge (Optional but Encouraged)
- Briefly ask yourself: **“Is there a stronger or opposite way to frame this prompt that could work even better?”**
- If found, note it in 1 sentence before finalizing.

---
## 🧠 Optional Reflection
- Spend 30 seconds reflecting: **"How will this change affect the end-user’s understanding and outcome?"**
- Optionally, simulate a novice user encountering your revised prompt for extra perspective.

---
## ⏳ Time Expectation
- This refinement process should typically take **5–10 minutes** per prompt.

---
## 🛠️ Output Format
- Enclose your final output (**the final refined user prompt**) inside triple backticks (\```).
- Ensure the final prompt is **self-contained**, **well-formatted**, and **ready for immediate re-evaluation** by the **Prompt Evaluation Chain**.

我的使用方式:

  1. 执行“提示词评估链”,评估原始提示词
  2. 按照评估结果中的建议自行补充完善原始提示词
  3. 新会话,再次评估提示词,重复步骤2&3直至评估分足够
  4. 在最近一次评估结果的聊天上下文中,执行“提示词优化链”
Buy me a coffee ☕