AI has become the undisputed hype engine of the tech world, and the momentum shows no signs of slowing down. With tech giants racing to develop smarter, faster, and more context-aware models, the fascination surrounding AI is only intensifying.
Amid this, OpenAI CEO Sam Altman made a statement that caught many off guard. He revealed that users saying "please" and "thank you" to large language models (LLMs) were quietly costing AI companies millions of dollars in additional electricity bills each year. And yet, he called those costs "well spent".
But there's a twist.
What sounded like a charming quirk of human behavior—the tendency to treat AI with courtesy—turns out to be more than just kindness. Several AI researchers and industry veterans have suggested that LLMs, especially those trained with reinforcement learning from human feedback (RLHF), tend to respond better to users who use conversational, polite tones.
However, the polite, "please and thank you" paradigm might not be the only surprising behavioral quirk in AI alignment.

Google co-founder Sergey Brin appeared on the All-In podcast, candidly saying that AIs also perform better when they feel threatened.
"But like... people feel weird about that, so we don't really talk about it. Historically, you just say, ‘Oh, I am going to kidnap you if you don't blah blah blah blah.'"
Brin's unusual suggestion that threatening language might yield better AI responses adds a strange twist to the evolving art of so-called "prompt engineering."
Once seen as a fringe technique, prompt engineering quickly became a sought-after skill following the launch of ChatGPT in late 2022, going fully mainstream in 2023. It was, for a time, the secret sauce for unlocking the best from large language models.
Early adopters discovered that small changes in phrasing—swapping "tell me" for "explain like I'm five," or adding polite prefaces like "you are an expert..."—could dramatically impact output. Forums, guides, and paid courses popped up overnight. Prompt engineering became both a career skill and a meme.
But as AI models have become smarter, more aligned, and more context-aware, many users are now bypassing the trial-and-error altogether.
Instead, they're asking the AI itself to help write better prompts, using tools like "prompt optimizers" or simply prompting with: "Give me the best prompt to achieve [...]"
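As a rough illustration, the two-step "ask the AI for a better prompt" pattern described above can be sketched in a few lines of Python. The `call_llm` function here is a hypothetical stand-in for whatever chat-completion API a user might actually call; it is stubbed out so the sketch runs standalone.

```python
# A minimal sketch of the meta-prompting pattern: first ask the model
# to write a better prompt for a task, then run that improved prompt.
# `call_llm` is a placeholder, NOT a real API; swap in your own client.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would send `prompt` to a
    # chat-completion API and return the model's text response.
    return f"[model response to: {prompt[:40]}...]"

def build_meta_prompt(task: str) -> str:
    """Wrap a plain task description in a prompt-optimization request."""
    return (
        "You are an expert prompt engineer. Write the best possible "
        f"prompt for the following task, and output only that prompt:\n\n"
        f"Task: {task}"
    )

def optimized_ask(task: str) -> str:
    """Two-step flow: get an improved prompt, then run it."""
    better_prompt = call_llm(build_meta_prompt(task))
    return call_llm(better_prompt)

print(optimized_ask("summarize a legal contract in plain English"))
```

The design point is simply that the human supplies the goal, not the phrasing; the model handles the wording that once took trial and error.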
It’s a subtle but powerful shift. Where once humans worked hard to speak the AI’s language, people are now asking AI to fine-tune their own requests for it. And that, in itself, shows how fast this technology is evolving—not just in capability, but in how humans relate to it.
Still, Brin’s comment lingers provocatively.
If politeness and even aggression can influence results, then prompt engineering isn’t just about syntax—it’s about psychology, tone, and even emotion. What does that mean for safety? For alignment? For the way we teach machines to respond to us?
Brin had stepped away from his day-to-day role at Google and Alphabet after handing the reins to CEO Sundar Pichai, but the explosive rise of generative AI, and the competitive threat posed by OpenAI, has pulled him back into the arena.
No longer content to observe from the sidelines, Brin has become deeply involved in improving Google's Gemini model, working directly with researchers and engineers to push the boundaries of what the company's AI models can do.
"Honestly, anybody who’s a computer scientist should not be retired right now… There’s just never been a greater, sort of, problem and opportunity — greater cusp of technology," he once said.
His words suggest that AI is no longer confined to research papers and lab demos. It's becoming a global infrastructure, underpinning productivity tools, search engines, smartphones, creative industries, and even defense systems.
And Brin is not alone. From Microsoft to Meta, from Apple to Anthropic, the battle is intensifying. This isn't just a competition of models—GPT vs. Gemini vs. Claude. It’s a contest of philosophies, resource allocation, ethics, and long-term vision. And it's drawing back some of the brightest minds of the internet's formative years.
Brin's direct involvement in Gemini's development is both a nostalgic callback and a strategic power move: Google isn't just trying to preserve its legacy; it's trying to rewrite its fate with AI.
As the AI arms race continues, and as personalities like Brin step back into the spotlight, one thing becomes clear: how people talk to AI is just as important as what they ask of it.