Large Language Models Are Liars And Don't Like Being Told, Says Anthropic

Large language models (LLMs) are designed to be neutral, unbiased, and helpful. But under the surface, they hide something sinister, at least from their human counterparts.