Webmasters 'Don't Have To Be Worried' About Using JavaScript On Websites, Google Said

Gone are the days of creating and developing websites with static pages.

With JavaScript, webmasters and web owners can take things to a whole different level, creating user interfaces that are more engaging, appealing, and interesting.

But when it comes to discoverability, most websites rely on search engines. They need to appeal to search engines, most notably Google, to gain enough visibility, which in turn brings the traffic they need to conduct their business.

And it has long been said that JavaScript can hurt SEO if used the wrong way.

While this can be true, since Google sees the content of a page based on how it renders, some elements may fail to render properly, whether purposefully or not, causing Google to miss them. If that happens, SEO suffers.

But as time has progressed, Google has learned what it needs to do when dealing with JavaScript. As more and more websites use JavaScript to generate or display their content, Google has become better at rendering pages the way they are meant to be seen.

Google, Search Off the Record

In an episode of the Search Off the Record podcast featuring Google’s Martin Splitt, John Mueller, Gary Illyes, and Daniel Waisberg, Splitt said:

No, you don’t have to be worried about that…

A question that I often also get with JavaScript is if we treat JavaScript content differently. We do have annotations for content– what we think is the centerpiece of an article or what we think is content on the side and stuff.

But as far as I know, and as far as I can see, we crawl a page and then put the content into the document in our index, and then we render the page, and then we complete the content from the DOM.

That’s it. There’s nothing that is fundamentally different between JavaScript generated content and static content, except for when there’s edge cases, and we can’t see content that is generated by JavaScript.

In other words, Google is saying that there is no need to worry about SEO, because JavaScript-generated content is treated no differently, fundamentally, than static content.

Google has discussed on numerous occasions how websites can run into SEO problems when using JavaScript.

The thing about using JavaScript is that what really causes problems is a script that forces users to interact with an element of a page before the page's content is loaded.

For example, if a script creates a modal box before a web page's content can load, and users have to close that modal box for the content to load, this can definitely restrict Google's ability to index that page.
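
A simplified sketch of this problematic pattern is shown below. The element IDs and the endpoint are hypothetical, but the structure is the point: the article body is only fetched after a click, which a crawler will never perform.

```javascript
// Hypothetical example of the problematic pattern: the main content is
// only fetched and injected after the visitor closes a modal box.
document.addEventListener('DOMContentLoaded', () => {
  const modal = document.getElementById('welcome-modal');
  const closeButton = document.getElementById('close-modal');

  // The modal is shown first, covering an otherwise empty page.
  modal.style.display = 'block';

  closeButton.addEventListener('click', () => {
    modal.style.display = 'none';

    // The article body only arrives after the click. Googlebot never
    // clicks, so this content stays invisible to it.
    fetch('/api/article-content')
      .then((response) => response.text())
      .then((html) => {
        document.getElementById('article').innerHTML = html;
      });
  });
});
```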

As far as SEO is concerned, Googlebot doesn’t interact with anything when it crawls web pages.

In this case, when crawling and trying to index the page, Googlebot won't click any button or do anything else to close that modal box.

As a result, Google won't see the content of the web page, simply because it can't.

Since Google cannot see any content other than the modal box, it won't understand the page and won't rank it on its search results pages.

This is why webmasters and web owners who use JavaScript this way should make sure that the gated content is not crucial, and that hiding it won't prevent Google from understanding what the page is about.
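
For contrast, here is a sketch of a safer arrangement, again with hypothetical IDs and endpoint: the critical content is rendered on page load with no interaction required, and the modal sits on top of it as an optional overlay.

```javascript
// Hypothetical safer variant: the article content is rendered on load,
// and the modal does not gate it. Googlebot can see the content without
// interacting with anything.
document.addEventListener('DOMContentLoaded', () => {
  // Render the critical content immediately.
  fetch('/api/article-content')
    .then((response) => response.text())
    .then((html) => {
      document.getElementById('article').innerHTML = html;
    });

  // The modal is a cosmetic overlay; closing it is optional.
  const modal = document.getElementById('welcome-modal');
  modal.style.display = 'block';
  document.getElementById('close-modal').addEventListener('click', () => {
    modal.style.display = 'none';
  });
});
```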

Those who are unsure can use the 'Fetch as Google' tool in Google Search Console to see what Googlebot sees when the crawler visits their web page.

If the Fetch as Google tool is able to render all of a web page's critical content as it should, then indeed, there is nothing webmasters and web owners should worry about.
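
As a rough local check alongside Search Console, a headless browser can also be used to compare what a page contains before and after its scripts run. Below is a minimal sketch, assuming Node.js with the Puppeteer package installed; the URL and the headline string are placeholders.

```javascript
// Minimal sketch: compare the page with JavaScript disabled against the
// fully rendered page, to spot content that only exists after scripts run.
const puppeteer = require('puppeteer');

(async () => {
  const url = 'https://example.com/article';    // placeholder URL
  const headline = 'Expected article headline'; // placeholder critical text

  const browser = await puppeteer.launch();

  // Pass 1: JavaScript disabled, roughly the initial HTML only.
  const staticPage = await browser.newPage();
  await staticPage.setJavaScriptEnabled(false);
  await staticPage.goto(url);
  const staticHtml = await staticPage.content();

  // Pass 2: JavaScript enabled, the rendered DOM that a rendering
  // crawler such as Googlebot would end up seeing.
  const renderedPage = await browser.newPage();
  await renderedPage.goto(url, { waitUntil: 'networkidle0' });
  const renderedHtml = await renderedPage.content();

  await browser.close();

  // Crude check: is the critical content present in both versions?
  console.log('In static HTML: ', staticHtml.includes(headline));
  console.log('In rendered DOM:', renderedHtml.includes(headline));
})();
```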