## Standard Few-shot Prompt
Prompt: `Q(question) + A(answer)`
> **Model Input:** Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
> A: The answer is 11.
> Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
> **Model Output:** A: The answer is 27. (incorrect)
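
As a rough illustration, here is a minimal Python sketch of how such a few-shot prompt could be assembled. The `standard_prompt` helper and the final `print` are my own illustrative names, not part of the note; the resulting string would be sent to whatever LLM client you use.

```python
# Minimal sketch of standard few-shot prompting.
# Exemplars are plain (question, answer) pairs -- no intermediate reasoning.
EXEMPLARS = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?",
     "The answer is 11."),
]

def standard_prompt(question: str) -> str:
    """Concatenate the Q/A exemplars and end with the new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in EXEMPLARS]
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)

prompt = standard_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?")
print(prompt)  # send this string to whatever LLM client you use
```

Because the exemplar answers jump straight to the result, the model is encouraged to do the same, which is where it fails on multi-step arithmetic (27 instead of 9 above).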
## Few-shot CoT
Chain-of-thought prompting breaks a multi-step reasoning problem into many intermediate steps, allocating more computation and generating more tokens, and then strings these intermediate results together to reach the final answer.
Prompt: `Q + A(r(rationale) + a(answer))`
Answer: the LLM likewise returns a rationale together with the final answer.
> **Model Input:** Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
> A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
> Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
> **Model Output:** A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.
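
A sketch of the corresponding prompt construction, under the same assumptions as the previous snippet: the only difference from the standard few-shot prompt is that each exemplar answer carries its rationale before `The answer is ...`, so the model imitates the step-by-step format.

```python
# Few-shot CoT sketch: same assembly as the standard few-shot prompt; only
# the exemplar answers change, now containing the rationale r before the
# final answer a.
COT_EXEMPLARS = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?",
     "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
     "6 tennis balls. 5 + 6 = 11. The answer is 11."),
]

def few_shot_cot_prompt(question: str) -> str:
    parts = [f"Q: {q}\nA: {a}" for q, a in COT_EXEMPLARS]
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)

print(few_shot_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"))
```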
## Zero-shot CoT
Adding `Let's think step by step` gets the LLM to generate a chain of thought for the question. Zero-shot CoT can be viewed as a two-stage pipeline: first use `Let's think step by step` to have the LLM spell out as much of its reasoning as possible, then concatenate the generated rationale with the original question and append an answer-directed prompt to get the model to produce the final answer.
Prompt pipeline: `Q + "Let's think step by step"` → LLM → rationale, then `Q + (output of the previous step) + "The answer is"` → LLM → final answer.
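
A minimal sketch of this two-stage pipeline, assuming a hypothetical `complete(prompt)` wrapper around an LLM API (it is not defined in this note; plug in your own client):

```python
# Two-stage Zero-shot CoT pipeline.
def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call.
    raise NotImplementedError("replace with your LLM client")

def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit the reasoning chain with "Let's think step by step".
    stage1 = f"Q: {question}\nA: Let's think step by step."
    rationale = complete(stage1)
    # Stage 2: append the rationale and an answer-extraction cue.
    stage2 = f"{stage1} {rationale}\nThe answer is"
    return complete(stage2)
```

The second call exists only to extract a clean final answer; the reasoning itself is produced entirely in the first stage.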