You are reading the article “Google’s Martin Splitt Explains Why Infinite Scroll Causes SEO Problems,” updated in March 2024 on the website Cattuongwedding.com.
Google’s Martin Splitt had to remind SEOs and site owners that Googlebot doesn’t scroll through pages.
When infinite scrolling is implemented on a web page it can cause issues with how the content is indexed in Google Search.
In an issue addressed by Splitt during a virtual conference, a website’s content was missing from the Google Search index because of infinite scrolling.
Splitt was one of several guest speakers at a technical SEO virtual conference called ‘Better Together,’ held on April 14.
Being a virtual event allowed Splitt to share his own screen and show people in real-time how he debugs SEO issues.
Each issue he looked at was based on a real case he debugged in the past.
One such issue dealt with content missing from Google’s index.
Splitt walked people through a series of tests that eventually led him to discover the website was using infinite scroll.
Here’s why that’s a problem when it comes to indexing.
Why Infinite Scroll is a Problem
Splitt provided the example of a news website that relies on infinite scroll (also referred to as “lazy loading”) to load new content.
That means the web page, in this case the home page, does not load additional content until a visitor scrolls to the bottom of the screen.
Splitt explains why that’s a problem: “What does Googlebot not do? It doesn’t scroll.“
What Googlebot does is land on a page and crawl what is immediately visible.
It’s worth noting this statement is markedly different from one Splitt provided last month, where he didn’t state definitively whether Googlebot can see additional content.
See: Google’s Martin Splitt on Indexing Pages with Infinite Scroll
Googlebot not being able to scroll could potentially lead to a lot of content missing from Google’s search index.
This is what site owners should consider doing instead.
Alternatives to Infinite Scroll
Splitt says site owners should change their implementation to not rely solely on scrolling.
He mentions that native lazy loading for images is fine, and using IntersectionObserver API is acceptable as well.
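To make the IntersectionObserver approach concrete, here is a minimal sketch of loading content when a sentinel element becomes visible rather than on scroll events. All names are our own illustration, not code from the article; a tiny stub stands in for the browser API so the logic can run anywhere.

```javascript
// Stub for environments without the browser's IntersectionObserver.
class StubIntersectionObserver {
  constructor(callback) { this.callback = callback; }
  observe(target) { this.target = target; }
  disconnect() {}
  // Test helper: simulate the sentinel element entering the viewport.
  trigger() { this.callback([{ target: this.target, isIntersecting: true }], this); }
}

const IO = typeof IntersectionObserver !== 'undefined'
  ? IntersectionObserver
  : StubIntersectionObserver;

// Observe a sentinel near the end of the list; load more whenever it becomes
// visible. Nothing here depends on the user firing scroll events.
function watchSentinel(sentinel, loadMore) {
  const observer = new IO((entries) => {
    entries.forEach((entry) => { if (entry.isIntersecting) loadMore(); });
  });
  observer.observe(sentinel);
  return observer;
}

let pagesLoaded = 0;
const observer = watchSentinel({ id: 'sentinel' }, () => { pagesLoaded += 1; });
if (observer instanceof StubIntersectionObserver) observer.trigger();
console.log(pagesLoaded); // 1 when run with the stub
```

Because loading is driven by element visibility rather than scroll events, content can still be discovered by a renderer that never scrolls but uses a tall viewport.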
Another route you could go is using paginated loading in addition to infinite scroll.
Google’s official documentation on fixing lazy-loaded content recommends supporting paginated loading for infinite scroll:
“If you are implementing an infinite scroll experience, make sure to support paginated loading.
Paginated loading is important for users because it allows them to share and reengage with your content.
It also allows Google to show a link to a specific point in the content, rather than the top of an infinite scrolling page.”
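A minimal sketch of what “a unique link to each section” can look like in practice. The `?page=N` URL scheme and the function names are our own assumptions for illustration, not from Google’s documentation.

```javascript
// Build a shareable URL for a given "page" of an infinite-scroll feed.
function pageUrl(baseUrl, pageNumber) {
  const url = new URL(baseUrl);
  url.searchParams.set('page', String(pageNumber));
  return url.toString();
}

// Decide which page to load first when a visitor lands on a shared link.
function pageFromUrl(urlString) {
  const page = new URL(urlString).searchParams.get('page');
  return page ? parseInt(page, 10) : 1;
}

// In a browser client, as the user scrolls and page 3 loads, you would also call:
//   history.replaceState(null, '', pageUrl(location.href, 3));
// so the address bar always points at the content currently in view.
console.log(pageUrl('https://example.com/news', 3));   // https://example.com/news?page=3
console.log(pageFromUrl('https://example.com/news?page=3')); // 3
```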
To ensure your website fully supports paginated loading, you must be able to provide a unique link to each section that users can share and load directly.
Test Your Implementation
Regardless of the method you choose, Splitt stresses how crucial it is to test your implementation.
The problem that Splitt debugged could have been discovered by the site owner themselves if they tested their implementation of infinite scroll.
Splitt actually used Google’s rich results test to discover the problem himself.
The rich results test allows you to view exactly what Googlebot is able to crawl when it lands on a URL.
In the case of the news website Splitt was speaking about, Googlebot was only able to see ten articles on the home page when there were significantly more than ten.
That’s one way to test your implementation of lazy loading.
Another way, which is included in Google’s official help document, is to use a Puppeteer script.
Here are some additional resources:
See Martin Splitt’s full presentation in the video below:
From the 31:34 mark:
“We see that there is a window.onoverscroll. What is window.onoverscroll?
What does Googlebot not do? It doesn’t scroll.
That’s why this is actually not being called when Googlebot is involved because we are not scrolling anything on the page.
So the simple thing here is they need to fix, and actually change, their implementation to not just use scrolling.
They can use things like native lazy loading for images.
Or, if they want to use this to actually do infinite scroll, some libraries are doing this better and some other libraries are using IntersectionObserver instead.
Both of these ways are valuable.
The most important lesson to learn here is test your implementations.
“If you implement something – they could have done the same thing, given that they understand what they are looking at, and could have seen that what they are missing is that whatever they’re doing relies on scrolling. In our documentation we say that we don’t scroll, so they would need to change their code.”
In the latest instalment of the #AskGoogleWebmasters video series, John Mueller tackles the topic of how Google chooses canonical URLs.
Here is the specific question that was submitted:
“You can indicate your preference to Google using these techniques, but Google may choose a different page as canonical than you do, for various reasons. So, what are the reasons? Thanks!”
In response, Mueller says it’s common for websites to have multiple, unique URLs that lead to the same content. For example, there are WWW and non-WWW versions of a URL.
Another common configuration is when the homepage is accessible as index.html, or when upper and lowercase characters in URLs lead to the same pages.
Ideally there should not be any alternate versions of URLs, but that rarely happens. So Google chooses canonical URLs to display in search results based on two general guidelines:
Which URL does it look like the site wants Google to use?
Which version of the URL would be most useful to searchers?
Site owners can indicate their preferred canonical URLs to Google by following the guidelines in the next section.
How to Tell Google Which Canonical URL You Prefer
Site owners can send signals telling Google which URL they prefer. The more consistent the signals are, the more likely Google will choose the site’s preferred URL.
Those signals are as follows:
Link rel canonical annotation that matches throughout the site
Internal linking using the preferred URL format
Preferred URLs in the sitemap file
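The consistency across those signals can be checked mechanically. Here is a toy sketch of such a check; it is our own illustration, not a Google tool, and the field names are invented.

```javascript
// Toy consistency check for canonical signals.
// signals: { relCanonical, internalLinks: [...], sitemapUrls: [...] }
function canonicalSignalsConsistent(signals) {
  const preferred = signals.relCanonical;
  const all = [preferred, ...signals.internalLinks, ...signals.sitemapUrls];
  // Every signal should point at the exact same URL string.
  return all.every((url) => url === preferred);
}

const consistent = canonicalSignalsConsistent({
  relCanonical: 'https://www.example.com/page',
  internalLinks: ['https://www.example.com/page'],
  sitemapUrls: ['https://www.example.com/page'],
});
console.log(consistent); // true

const mixed = canonicalSignalsConsistent({
  relCanonical: 'https://www.example.com/page',
  internalLinks: ['https://example.com/page'], // non-WWW variant slips in
  sitemapUrls: ['https://www.example.com/page'],
});
console.log(mixed); // false
```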
Mueller adds that Google has a preference for HTTPS URLs over HTTP URLs, and also tends to choose “nicer-looking” URLs as canonical.
The key takeaway here is, if you have a strong preference regarding which version of a URL gets chosen in search results then make sure to use it consistently.
The more consistent the site is, the more likely Google will use the site’s preferred canonical URL.
What Happens if Google Chooses a Different URL?
A preferred URL is just that – a preference. Mueller says there is no negative impact on rankings if Google chooses a different canonical URL than what you would prefer to have chosen. It’s also fine to have no preference at all.
When it comes to canonical URLs, consistency is key. But don’t lose sleep if you put all the right signals in place and Google still chooses a different version of the URL as canonical.
Google shares six SEO tips that combine structured data and Merchant Center to get the most out of your website’s presence in search results.
Alan Kent, a Developer Advocate at Google, describes each tip in detail in a new video published on the Google Search Central YouTube channel.
Throughout the video, Kent emphasizes using Google Merchant Center because it allows retailers to upload product data via structured feeds.
Merchant Center feeds are designed to be read by computers, which means data is extracted more reliably than Googlebot crawling your website.
However, that doesn’t mean you should forego using structured data on product pages and rely on Merchant Center alone. Product structured data remains essential even if you provide product data directly to Google with a Merchant Center feed.
Google may crosscheck data from the Merchant Center feed against structured data on your website.
Google’s SEO recommendations for ecommerce sites revolve around getting the most out of both tools.
1. Ensure Products Are Indexed
Googlebot can miss pages when crawling a site if they’re not linked to other pages. On ecommerce sites, for example, some product pages are only reachable from on-site search results.
You can ensure Google crawls all your product pages by utilizing tools such as an XML sitemap and Google Merchant Center.
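As a concrete illustration of the sitemap half of that advice, here is a minimal sketch that generates an XML sitemap listing every product URL. The URLs and function name are invented for illustration.

```javascript
// Build a minimal XML sitemap so every product URL is discoverable by crawlers.
function buildSitemap(urls) {
  const entries = urls
    .map((url) => `  <url><loc>${url}</loc></url>`)
    .join('\n');
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${entries}\n</urlset>`;
}

const sitemap = buildSitemap([
  'https://shop.example.com/products/widget',
  'https://shop.example.com/products/gadget',
]);
console.log(sitemap.includes('<loc>https://shop.example.com/products/widget</loc>')); // true
```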
Creating a Merchant Center product feed will help Google discover all the products on your website. The product page URLs are shared with the Googlebot crawler, potentially to be used as starting points for crawls of additional pages.
2. Check Accuracy Of Product Prices In Search Results
If Google incorrectly extracts pricing data from your product pages, it may list your original price in search results, not the discounted price.
To accurately provide product information such as list price, discounts, and net price, it’s recommended to add structured data to your product pages and provide Google Merchant Center with structured feeds of your product data.
This will help Google extract the correct price from product pages.
3. Minimize Price & Availability Lag
Google crawls webpages on your site according to its own schedule. That means Googlebot may not notice changes on your site until the next crawl.
These delays can lead to search results lagging behind site changes, such as a product going out of stock.
Aim to minimize inconsistencies between the pricing and availability data on your website and Google’s understanding of your site caused by these timing lags.
Google recommends utilizing Merchant Center product feeds to keep pages updated on a more consistent schedule.
4. Ensure Products Are Eligible For Rich Product Results
Eligibility for rich product results requires the use of product structured data.
To get the special rich product presentation format, Google recommends providing structured data on your product pages and a product feed in Merchant Center.
This will help ensure that Google understands how to extract product data to display rich results.
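For illustration, here is a sketch of Product structured data (JSON-LD) of the kind rich results rely on. The product values are invented; the vocabulary comes from schema.org’s Product and Offer types.

```javascript
// Build a Product structured-data object for a product page (values invented).
const productJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: 'Example Widget',
  sku: 'WIDGET-001',
  offers: {
    '@type': 'Offer',
    price: '19.99',
    priceCurrency: 'USD',
    availability: 'https://schema.org/InStock',
  },
};

// On a real page this string would go inside a
// <script type="application/ld+json"> tag in the page's HTML.
const serialized = JSON.stringify(productJsonLd);
console.log(serialized.includes('"@type":"Product"')); // true
```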
However, even with the correct structured data in place, rich results are displayed at Google’s discretion.
5. Share Local Product Inventory Data
Ensure your in-store products are found by people entering queries with the phrase “near me.”
First, register your physical store location in your Google Business Profile, then provide a local inventory feed to Merchant Center.
The local inventory feed includes product identifiers and store codes, so Google knows where your inventory is physically located.
As an additional step, Google recommends using a tool called Pointy. Pointy is a device from Google that connects to your in-store point-of-sale system and automatically informs Google of inventory data from your physical store.
The data is used to keep search results updated.
6. Sign Up For Google Shopping Tab
You may find your products are available in search results but do not appear in the Shopping tab.
If you’re unsure whether your products are surfacing in the Shopping tab, the easiest way to find out is to search for them.
Structured data and product feeds alone aren’t sufficient to be included in the Shopping tab.
To be eligible for the Shopping tab, provide product data feeds via Merchant Center and opt-in to ‘surfaces across Google.’
For more on any of the above tips, see the full video from Google below:
Windows 10 KB3194496 fails to install, causes proxy problems and more
Microsoft recently released a new cumulative update for Windows 10 version 1607. KB3194496 is just a regular update that brings some system improvements and bug fixes, but no new features.
However, cumulative update KB3194496 is also bringing issues of its own. In this article, we’re going to list all the KB3194496 issues reported by Windows 10 users, and we’ll try to find a way to solve at least some of them.
Windows 10 KB3194496 reported issues
The most widespread problem, and the issue most users are reporting, concerns, of course, installation problems. Microsoft Community forums are currently flooded with complaints, as many users are reporting various problems that prevent them from installing the update.
Installation Failure: Windows failed to install the following update with error 0x800F0922: Cumulative Update for Windows 10 Version 1607 for x64-based Systems (KB3194496)
This surely is a very annoying problem, as it can completely block Windows Update, leaving users unable to download the cumulative update normally. However, there are a few actions that users can perform to install KB3194496 on their computers.
Users can manually reset Windows Update or use a third-party program to download updates, but perhaps the best solution in this case is to manually download and install the update.
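For reference, manually resetting the Windows Update components is commonly done with commands along these lines, run from an elevated Command Prompt. These are general-purpose commands, not steps from the article, so proceed at your own risk:

```batch
:: Stop the Windows Update and Background Intelligent Transfer services
net stop wuauserv
net stop bits
:: Rename the update cache so Windows rebuilds it on the next check
ren C:\Windows\SoftwareDistribution SoftwareDistribution.old
:: Restart the services and check for updates again
net start bits
net start wuauserv
```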
Another issue when installing cumulative update KB3194496 concerns keyboard functionality. Namely, the problem occurs on the first boot after the update is installed. Windows 10 asks users to choose a keyboard layout, but the keyboard remains unresponsive.
So I have a friend who just finished installing the KB3194496 cumulative update for windows 10 a few hours ago. Window’s did it’s typical “Installing X%” and rebooted, but when it rebooted it was trying to do an automatic system repair. It then cuts immediately to a blue screen telling him to choose a keyboard layout, but he can’t use the mouse and the keyboard buttons don’t respond.
And finally, another user reported Chrome and Firefox proxy authentication issues that, according to him, have been affecting the last three cumulative updates, including KB3194496:
Other people on the forum remained silent when it came to solving the problem, but one user actually explained what’s causing it:
I’m experiencing the same issue with a company proxy McAfee Web Gateway and a Windows 10 client, after the abovementioned update. I’ve traced the connection and discovered that the problem is related to Google Chrome not completing the 3-way handshake of NTLM authentication (in particular, step 3)
The issue is very weird considering that, from a comparison between MS Edge and Chrome, step 1 and 2 are the same. Then Edge completes the procedure with the authentication message (step 3) and Chrome stops the sequence returning to the user ERR_INVALID_AUTH_CREDENTIALS. It looks like there is something in the KB code that breaks the NTLM authentication.
Windows 10 users also report that KB3194496 kills the Ethernet connection. The OS usually displays the error message “Ethernet Doesn’t Have A Valid IP Configuration”, and none of the workarounds available on the almighty Internet really work. Thank you, Selatious for the tip.
KB3194496 killed my Ethernet connection. The odd thin when it first happened I had outlook and working but no browsers. Did a Win diagnostic fix and lost complete connectivity. Was running off the router so connected directly to the modem with no luck. Win informed me that ‘Ethernet Doesn’t Have A Valid IP Configuration’ I worked on every possible solution for a solid day with no luck. […]
Using a friend’s wireless laptop, I hunted for solutions online, none of which solved the my Ethernet connection. None worked. As a last resort before I went out a purchased a WiFi card or dongle, I did a system restore to before the KB3194496 cumulative update.
That’s all for our article about the KB3194496 issues in Windows 10. This update is far less troublesome than the latest Windows 10 build, but that’s normal, since cumulative updates are stable releases, or at least they should be.
In a Google Webmaster Hangout Google’s John Mueller was asked why content published on an established site tended to rank higher. The publisher asked why articles on this site consistently received “top Google rankings.”
There is no simple way to answer this question. Google’s John Mueller offered a nuanced explanation of why Google trusted some sites enough to consistently rank them at the top.
The publisher asked whether the success was due to a lack of competition or, “is it somehow even though each individual site is a sub site of the main site, any blogging gets you ranked because” of the website itself.
John Mueller responded that it’s not directly related to the domain.
“It’s more a matter of your putting out new content… that’s relevant for people who are searching at the moment and that we’re ranking them based more on that.
That’s something that we often see from various sites like, should I be blogging, should I write… ten articles a day or five articles a day?
…from our point of view it’s not a matter of going out and blogging and creating so many articles a day… but more a matter of… you have some really fresh content here, some of this content is really relevant for some searchers at the moment so we’ll show that.
…it’s not that blogging itself is something that makes the site rank higher or makes the content rank higher. It’s just you happen to have some new content here that happens to be relevant so we’ll show that.”
There’s an approach to content that seems to focus on quantity and quality but leaves out the part about relevance. A common mistake I see in site audits is chatty and conversational content, like you might hear at the water cooler.
For certain situations, content that is focused on relevance to a person’s situation, their goals, or their aspirations is more appropriate. I believe that’s what John Mueller was getting at when he encouraged the publisher to create content that is relevant to the searchers at the moment they were searching.
I think it’s worth pointing out that he didn’t say to be relevant to the keywords. He encouraged the publisher to create content that is relevant to the searcher.
John Mueller went on to focus on the blogging part of the question, whether blogging was the secret behind the site’s top ranking.
But that answer might not have been what the questioner was hoping for. She appeared to be focused on whether the domain itself, perhaps some kind of authority, was powering the rankings.
Thus, the publisher asked again in an attempt to get John Mueller to focus on whether or not the domain itself was powering the rankings.
“…so it’s completely independent of the domain that I’m blogging on? There’s a lot going on on that website; that has no effect? If I… start my own dot com and was blogging, it would have the same effect?”
John Mueller responded,
“Pretty much… there are always some kind of supplemental effects with regard to us being able to find the content quickly, us being able to understand that this website is generally creating high quality content. So there is some amount of… additional information that we collect for the website as a whole.”
This is interesting because it expands on his previous statement that you just can’t create content and expect it to rank. Here he adds that there is a process whereby Google gains an understanding that the site is a good resource to rank. He alludes to “additional information” that Google collects in order to make the determination that a site is creating high quality content.
What might he be referring to? Google’s algorithm has so many moving parts to it that it could be any number of things.
Just as an example of the complexity involved, there’s a patent filed in 2012 called, “Classifying Sites as Low Quality Sites” that discusses a number of factors that Google could use to create a “link quality score” that could be used to classify an entire site as low quality.
The patent classifies inbound links to a site as Vital, Good, and Bad.
According to the patent, Google could then use this link rating system to lower a website’s chance of ranking:
“The system decreases ranking scores of candidate search results identifying sites classified as low quality sites.”
The above is an example of a patent that may or may not be in use at Google. The point is that there are so many ways that a site can be ranked, from links to the content itself. The reference to “additional information” can be a reference to so many things including the plethora of ranking factors themselves.
Google’s John Mueller goes on to say,
“So it’s not that you could just create random URLs on the web and put your blog post up there and we would find them magically and rank them number one.
It kind of does require some amount of structure within that so that we can understand that over time actually this is pretty good content and we should check it regularly to make sure that we don’t miss any of the updates. “
At this point the publisher tried a third time to get Google’s Mueller to say that there is something about the domain that is helping posts published on that domain to rank better.
“Okay, so there is something to the domain itself and that it’s got your attention.”
He then suggested that it was her option to choose to build her own site but that it would take time for the site to get established. He positioned it as a choice between taking the time to establish something of her own for the long run or taking the easy route and using the established website to rank her articles on.
It’s not enough to just create content.
Content must be relevant to a user at the moment they are searching
Top rankings do not come right away
Watch the Webmaster Hangout here.
More Resources
Scrolljacking, or scroll hijacking, is a very real usability issue.
Instead of scrolling up and down at your own pace, you’re forced to watch animated transitions or some other eye candy when turning the scroll wheel on your mouse. More often than not, such poor web design choices tax the user’s CPU and waste their time.
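To see why this feels so wrong, here is a simplified illustration of the pattern behind scrolljacking: the page intercepts wheel input, cancels the native scroll, and substitutes its own animation. This is our own sketch, not code from any site mentioned here; a tiny stand-in event target lets the logic run outside a browser.

```javascript
// Minimal stand-in for a DOM event target.
function makeTarget() {
  const handlers = {};
  return {
    addEventListener(type, fn) { (handlers[type] = handlers[type] || []).push(fn); },
    dispatch(type, event) { (handlers[type] || []).forEach((fn) => fn(event)); },
  };
}

let hijackedAnimations = 0;
const page = makeTarget();

// The scrolljacking handler: cancel native scrolling and play the page's own
// animation instead, so the user moves at the designer's pace, not their own.
page.addEventListener('wheel', (event) => {
  event.preventDefault();   // the native scroll never happens
  hijackedAnimations += 1;  // an animated transition plays instead
});

page.dispatch('wheel', { preventDefault() {} });
console.log(hijackedAnimations); // 1
```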
John Gruber, writing for Daring Fireball:
The AirPods Pro overview page is a strange beast. It pegs my 2024 MacBook Pro’s CPU — even when I’m not scrolling. I closed the tab a few minutes ago and my fan is still running. The animation is very jerky and scrolling feels so slow.
There’s so much scrolljacking that you have to scroll or page down several times just to go to the next section of the page. The animation is at least smooth on my iPad and iPhone, but even there, it feels like a thousand swipes to get to the bottom of the page. It’s a design that makes it feel like they don’t want you to keep reading.
He’s exactly right about scrolljacking making you wanna stop reading.
In case you’ve been wondering, not even Apple is immune to messing with your scrolling. Here are a few examples of Apple webpages that may visually stun you when visited for the first time but will frustrate you as soon as you feel like actually reading something without distractions or scrolling to the part that interests you.
The iPad Pro page is especially jarring — it forces horizontal scrolling when scrolling vertically.
TUTORIAL: How to enable the hidden Develop menu in Safari for Mac
Follow along with iDownloadBlog’s step-by-step tutorial included right ahead to learn how to prevent scrolljacking in Apple’s Safari browser on Apple’s own webpages and other websites.
How to disable scrolljacking in Safari for iOS
1) Open Settings on your iPhone or iPad.
2) Choose Safari from the list.
3) Tap Advanced at the bottom of the screen.
4) Slide the JavaScript toggle to the OFF position.
Visiting the website that used to mess with your scrolling will now present you with easy-to-read content that looks great and behaves just as you’d expect in terms of scrolling.
How to disable scrolljacking in Safari for Mac
1) Open Safari on your Mac.
2) Click Safari in the menu bar and choose Preferences.
3) Select the Advanced tab.
4) Put a checkmark next to “Show Develop menu in menu bar”.
Now the Develop menu will appear in the menu bar whenever you have Safari open.
You can now visit a page that used to hijack your scrolling and enjoy content without distractions like autoscrolling, sudden scroll rate changes, resource and bandwidth-heavy animations and other stupidities that mess around with how scrolling works.
And what about your experience with scrolljacking?
Have you encountered a webpage that uses scrolljacking yet? Those of you who have faced this problem on multiple websites, which one gave you a particular scrolljacking hell and why? Finally, name the offenders that deserve top spots in the Scrolljacking Hall of Shame.
Need help? Ask iDB!