Being short and sweet isn’t always smart, as new research shows. The push for fast, snappy AI responses might be backfiring.
In experiments across several well-known models including GPT-4 and Claude, hallucination rates jumped by up to 38% when users added brevity-focused phrases like "in one sentence" or "keep it brief".
“Being concise forces the model to skip caveats and qualifiers—often the very things that keep a response accurate,” explains Dr. Ranjay Krishna, co-author of the study.
The implications are serious. In sectors like healthcare, finance, law, and journalism, even minor inaccuracies can lead to costly consequences. Imagine a doctor asking, “Give me a quick answer on how to treat atrial fibrillation,” and getting an oversimplified—and dangerous—suggestion. In legal contexts, relying on a brief summary from AI could skip precedent nuances or misstate court rulings. In the financial sector, misleading summaries could affect investment decisions.
When LLMs generate responses, they operate based on probability—not truth. A longer response gives the model more room to elaborate, introduce disclaimers, and “hedge” where necessary. Short prompts often eliminate that buffer. According to Harvard Data Science Review, AI models are especially sensitive to context. A compressed prompt strips away context, reducing the model’s ability to reference correct information or admit uncertainty.
Better Prompting Practices
Here’s how to minimize hallucinations in your prompts:
Instead of saying "Explain the Vietnam War in one sentence", try "Explain the Vietnam War with key events, causes, and consequences in 3-5 paragraphs. Be factual and cite reliable sources."
You can also follow up with:
"Are there any controversial points in your answer?"
"What sources did you use?"
"Can you expand on that with references?"
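The advice above can be automated. As a minimal sketch (the phrase list, function name, and rewrite template are illustrative assumptions, not part of the study), a small helper could strip brevity cues from a prompt and append the kind of detail-and-sourcing instructions the article recommends, along with the suggested follow-up questions:

```python
# Hypothetical helper based on the article's prompting advice:
# remove brevity cues and ask for a fuller, sourced answer instead.

BREVITY_PHRASES = ("in one sentence", "keep it brief", "quick answer")

FOLLOW_UPS = [
    "Are there any controversial points in your answer?",
    "What sources did you use?",
    "Can you expand on that with references?",
]

def expand_prompt(prompt: str) -> str:
    """Rewrite a brevity-focused prompt into a detail-seeking one."""
    cleaned = prompt
    for phrase in BREVITY_PHRASES:
        cleaned = cleaned.replace(phrase, "")
    cleaned = cleaned.strip(" ,.")
    return (f"{cleaned} in 3-5 paragraphs. Include key context and "
            "caveats, be factual, and cite reliable sources.")

if __name__ == "__main__":
    print(expand_prompt("Explain the Vietnam War in one sentence"))
    for q in FOLLOW_UPS:
        print(q)
```

Sending the expanded prompt first, then the follow-ups as separate turns, gives the model room to hedge rather than forcing a single confident sentence.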
While AI can be a powerful assistant, it is not infallible. The demand for brevity often comes at the cost of truth. As researchers warn, “brief prompts act like traps”—leading chatbots to respond with the most confident, not the most correct, answer. So next time you're in a hurry, remember: fast doesn’t always mean right. Take the extra few seconds to ask well, and the AI will serve you better. For more insights, visit the original study or explore MLCommons for responsible AI benchmarking.