The LLM war is heating up, with major players like Google racing to catch up in the generative AI space.
As tools like Gemini, ChatGPT, and others increasingly dominate how people discover information, traditional search traffic has become more volatile and unpredictable. Website owners and publishers are feeling the squeeze: traffic dips, ad revenue fluctuates, and the old rules of SEO seem less reliable in an era where AI summaries often pull answers directly from pages without sending visitors through.
In this scramble, many creators have turned to desperate tactics.
A popular one is "content chunking," which refers to breaking articles into ultra-short paragraphs, one- or two-sentence sections, and question-style subheadings that mimic chatbot queries.
The theory? LLMs supposedly "prefer" bite-sized, easily digestible pieces, making them more likely to ingest, cite, or feature the content in AI-generated responses.
Some sites even experiment with dual versions, one natural for humans, another fragmented for machines, in hopes of snagging that elusive AI visibility boost.
But Google is pushing back hard, saying the strategy is a bad idea.

In an episode of Google's Search Off the Record podcast, Search Liaison Danny Sullivan, alongside John Mueller, addressed this trend head-on:
"So we don't want you to do that. I was talking to some engineers about that. We don't want you to do that. We really don't."
"We don't want people to have to be crafting anything for Search specifically. That's never been where we've been at and we still continue to be that way. We really don't want you to think you need to be doing that or produce two versions of your content, one for the LLM and one for the net."
Sullivan called it out as misguided advice that's spreading rapidly, saying that breaking content into bite-sized chunks is not what LLMs actually require.
Sullivan emphasized that he consulted Google engineers before making the statement, and said Google's stance remains consistent: the company doesn't want creators crafting content specifically for Search algorithms, nor does it want them targeting LLMs.
Rankings in Google Search still prioritize signals from real human behavior: what people click, read, and engage with, rather than formatting tricks aimed at machines.
Even if chunking appears to deliver short-term gains in some edge cases (or with other LLMs), Sullivan warned it's likely temporary.
"So you've gone through all this effort. You've made all these things that you did specifically for a ranking system, not for a human, being because you were trying to be more successful in the ranking system, not staying focused on the human being. And then the systems improve, probably the way the systems always try to improve, to reward content written for humans."
"All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term."
The core message echoes timeless SEO wisdom: focus on creating high-quality, helpful content for an actual human audience.
When creators build for people, their content will naturally align with what future AI systems will value too, because LLMs are ultimately trained to serve human needs and preferences. Chasing the latest "secret weapon" might offer a quick win, but it risks wasting resources, disrupting teams, and harming long-term reputation.
In the end, the internet's future favors authenticity over adaptation hacks. It has always been this way, and likely always will be.