2 Comments
Alastair:

Do you have any insight into whether asking an LLM to be succinct causes it to produce less well-thought-through responses, or just shorter ones? Would it produce a markedly different response if you ask it for a short answer up front, versus letting it give its normally verbose one and then asking it to shorten that response?

Dan Taylor-Watt:

Good question! Some research came out this week suggesting asking LLMs for short answers might increase hallucination rates (https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/), although the example prompt the researchers reference ("answer this question briefly") does feel ambiguous - i.e. are you asking the LLM to give a brief answer or to spend a brief amount of time answering the question?

If a prompt does require a more considered response, I'd be inclined to actively prompt the LLM to think carefully before responding. Probably not needed for most emails! Your suggested approach would also work. Interestingly, Claude's Sonnet models fared better than other frontier models in this research - full results here: https://huggingface.co/blog/davidberenstein1957/phare-analysis-of-hallucination-in-leading-llms
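To make the two options concrete, here's a rough sketch in Python, assuming the Anthropic SDK (pip install anthropic) with an ANTHROPIC_API_KEY in the environment; the model ID, prompts, and example question are illustrative placeholders, not recommendations from either comment:

```python
# A sketch of both prompting patterns discussed above, assuming the
# Anthropic Python SDK. Model ID and prompts are illustrative only.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # assumed Sonnet model ID; substitute your own
QUESTION = "Why does the moon look larger near the horizon?"

def ask(messages: list[dict]) -> str:
    """Send a Messages API request and return the reply text."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=messages,
    )
    return response.content[0].text

# Pattern 1: ask for brevity up front (the style the cited study
# suggests may increase hallucination rates).
brief = ask([{"role": "user", "content": f"Answer this question briefly: {QUESTION}"}])

# Pattern 2: let the model answer at its usual length, then ask it,
# in the same conversation, to shorten its own response.
full = ask([{"role": "user", "content": QUESTION}])
shortened = ask([
    {"role": "user", "content": QUESTION},
    {"role": "assistant", "content": full},
    {"role": "user", "content": "Now shorten that answer to two sentences."},
])

print(brief)
print(shortened)
```

Comparing brief against shortened (and against full) on your own prompts is one way to test Alastair's question empirically.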
