Standard Few-shot Prompt
Prompt: Q(question) + A(answer)
Model Input:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
Model Output:
A: The answer is 27. (incorrect)
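To make the prompt construction concrete, here is a minimal Python sketch of assembling a standard few-shot prompt; `call_llm` is a hypothetical placeholder for whichever completion API is actually used.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError

# Exemplar taken from the example above: question + final answer only, no rationale.
FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: The answer is 11.\n"
)

def standard_few_shot(question: str) -> str:
    # Concatenate the exemplar with the new question; the model is expected
    # to continue the pattern with "A: The answer is ...".
    prompt = FEW_SHOT_EXEMPLAR + f"Q: {question}\nA:"
    return call_llm(prompt)
```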
Few-shot CoT
Chain-of-thought prompting breaks a multi-step reasoning problem into a series of intermediate steps, allocating more computation to the problem by generating more tokens, and then combines those intermediate results into the final solution.
Prompt: Q + A(r(rationale) + a(answer))
Answer: the LLM likewise produces both a rationale and an answer.
Model Input:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
Model Output:
A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.
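A few-shot CoT prompt only changes the exemplar: the answer now spells out the rationale before the final "The answer is ..." line. A sketch under the same assumptions (`call_llm` remains a hypothetical placeholder):

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError

# Same question as before, but the exemplar answer includes the rationale.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def few_shot_cot(question: str) -> str:
    # The model imitates the exemplar and emits a rationale followed by the answer.
    prompt = COT_EXEMPLAR + f"Q: {question}\nA:"
    return call_llm(prompt)
```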
Zero-shot CoT
Appending "Let's think step by step" gets the LLM to generate a chain of thought for the question. Zero-shot CoT can be viewed as a pipeline: first use "Let's think step by step" to have the LLM produce as much of its reasoning process as possible, then concatenate the generated rationale with the question and append an answer-directed prompt (e.g. "The answer is") to get the model to produce the final answer.
Prompt: Q + Let's think step by step | LLM | Q + (output of the previous step) + The answer is | LLM | Output
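A minimal sketch of this two-stage pipeline, again with `call_llm` as a hypothetical placeholder for the real LLM call:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit a chain of thought with the reasoning trigger.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    rationale = call_llm(reasoning_prompt)

    # Stage 2: concatenate the question and the generated rationale, then append
    # an answer-directed prompt so the model emits only the final answer.
    answer_prompt = f"{reasoning_prompt} {rationale}\nThe answer is"
    return call_llm(answer_prompt)
```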