Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not removing the content entirely - many posts will remain viewable - we have locked both new posts and new replies.
Mobile Googlebot vs Desktop Googlebot - GWT reports - Crawl errors
-
Hi Everyone,
I have a very specific SEO question. I am doing a site audit, and one of the crawl reports is showing tons of 404s for the "smartphone" bot, with very recent crawl dates. Our website is responsive and we do not have a mobile version of the website, so I do not understand why the desktop report has tons of 404s and yet the smartphone report does not. I think I am not understanding something conceptually.
I think it has something to do with this little message in the Mobile crawl report.
"Errors that occurred only when your site was crawled by Googlebot (errors didn't appear for desktop)."
If I understand correctly, the "smartphone" report will only show URLs that are not on the desktop report. Is this correct?
-
Hey Carla,
I'm not entirely sure what you're saying with:
"one of the crawl reports is showing tons of 404's for the "smartphone" bot and with very recent crawl dates. If our website is responsive, and we do not have a mobile version of the website I do not understand why the desktop report version has tons of 404's and yet the smartphone does not. I think I am not understanding something conceptually."
You first say that the smartphone bot is seeing tons of 404s, but then that the desktop report is showing tons of 404s and the smartphone report is not. If you can clarify that, I can probably better answer your question.
However, the answer is likely that Google may decide not to crawl URLs that it has already identified as 404s in one context. That is to say, if it identifies URLs as 404s with the mobile crawler, it will know not to crawl them when it encounters them on desktop, and vice versa.
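If you want to check this against your own data, the raw server access logs will show exactly which URLs each Googlebot requested and which returned a 404. Here's a minimal sketch, assuming the common Apache/Nginx combined log format - the filename is a placeholder, and you'd adapt the regex to your own log layout:

```python
import re
from collections import defaultdict

# Combined log format:
# ip - - [date] "METHOD /path HTTP/1.x" status size "referer" "user-agent"
LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def googlebot_404s(log_path):
    """Collect 404'd paths separately for the desktop and smartphone bots."""
    hits = defaultdict(set)
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m or m.group("status") != "404":
                continue
            agent = m.group("agent")
            if "Googlebot" not in agent:
                continue
            # The smartphone crawler's user agent contains "Mobile".
            bot = "smartphone" if "Mobile" in agent else "desktop"
            hits[bot].add(m.group("path"))
    return hits

if __name__ == "__main__":
    hits = googlebot_404s("access.log")  # placeholder filename
    print("404s seen only by the smartphone bot:")
    for path in sorted(hits["smartphone"] - hits["desktop"]):
        print(" ", path)
```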
-Mike
-
OK, I forgot to add something to this as well. Why would the URLs show up on the smartphone report if they are not on the desktop report? After all, a 404 from either device is still a 404, right?
Thanks
Related Questions
-
Duplicate H1 on single page for mobile and desktop
I have a responsive site, and whilst this works and is liked by Google, from a user perspective the pages could look better on mobile. I have a WordPress site using the Divi Builder with Elegant Themes, and I have developed a separate page header for mobile that uses a manipulated background image and a smaller H1 font size. When crawling the site, two H1s can be detected on the same page - they are exactly the same words, and only one will show according to device. However, I need to know if this will cause me a problem with Google and SEO. As the mobile changes are not just font size but also adaptations to some visual elements, it is not something I can simply alter in the CSS. Would appreciate some input as to whether this is a problem or not.
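Worth noting: CSS controls only what is displayed, not what is in the markup, so a crawler fetching the raw HTML will see both H1s no matter which one is hidden per device. A quick way to audit which pages carry duplicate H1s in their source is sketched below, using only the Python standard library (the URLs are placeholders):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class H1Counter(HTMLParser):
    """Counts <h1> opening tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.count += 1

def count_h1s(url):
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = H1Counter()
    parser.feed(html)
    return parser.count

# Flag any page whose raw HTML contains more than one H1.
for url in ["https://www.example.com/", "https://www.example.com/about/"]:
    n = count_h1s(url)
    if n > 1:
        print(f"{url}: {n} H1 tags in source")
```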
Intermediate & Advanced SEO | | Cells4Life0 -
Two websites vs each other owned by same company
My client owns a brand and came to me with two ecommerce websites. One website sells his specific branded product, and the other sells general products in his niche (including his branded product). The question is that my client wants to rank each website for basically the same set of keywords. We have two choices I'd like feedback on:
Choice 1 is to rank both websites for the same keyword groupings, so that even if they are both on page 1 of the SERPs, they take up more real estate and share of voice. Are there any negative possibilities here?
Choice 2 is to recommend a shift in the positioning of the general industry website, moving it further away from the industry niche by focusing on different keywords so the two sites don't compete with each other in the SERPs.
I'm for choice 1 - what about you?
Intermediate & Advanced SEO | | Rich_Coffman0 -
Onsite SEO vs Offsite SEO
Hey, I know the importance of both onsite and offsite SEO, primarily with regard to outreach/content/social. One thing I am trying to determine at the moment is how much to invest in offsite. My current focus is to improve our on-page content on product pages, which is taking some time as we have a small team. But I also know our backlinks need to improve. I'm just struggling with where to spend my time: finish the onsite work section by section first, or try to do a bit of both onsite and offsite at the same time?
Intermediate & Advanced SEO | | BeckyKey1 -
Microsites: Subdomain vs own domains
I am working on a travel site about a specific region, which includes information about lots of different topics, such as weddings, surfing, etc. I was wondering whether it's a good idea to register domains for each topic, since it would enable me to build backlinks. I would basically keep the design more or less the same and implement a nofollow navigation bar on each microsite, e.g.:
weddingsbarcelona.com
surfingbarcelona.com
Or should I rather go with one domain and subfolders:
barcelona.com/weddings
barcelona.com/surfing
I guess the second option is how I would usually do it, but I just wanted to see what the pros/cons of both options are. Many thanks!
Intermediate & Advanced SEO | kinimod
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, the results are randomized or sliced/diced in different and unique ways, and they're updated twice per day. We do not want #2, the Vehicle Details pages, indexed, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content: entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results. We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)
Noindex advantages:
- Does prevent Vehicle Details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex disadvantages:
- Difficult to implement: the Vehicle Details pages are served via Ajax, so they have no head tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to a Stack Overflow solution I found. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required; the crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex tag if it's blocked by robots.txt.
Hash (#) URL advantages:
- By using hash URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: the crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that got robots.txt-disallowed pages indexed are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache configuration
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt - the "sledgehammer solution." We figured we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate Vehicle Details pages, and we wanted it to be as though these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we would have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of Vehicle Details pages, all of which are noindexed; it could easily get stuck or lost, it seems like a waste of resources, and in some shadowy way it feels bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like these. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action. (A sketch of the header-based noindex option follows below.)
Intermediate & Advanced SEO | browndoginteractive
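One note on the X-Robots-Tag option: if the Ajax endpoint is under application-level control, the header does not have to come from Apache rewrites at all. Below is a minimal Flask sketch of the idea - the route and the render_vehicle_details helper are hypothetical stand-ins, not the plugin's actual code - and, as noted above, the URL must not be blocked in robots.txt or the crawler will never see the header.

```python
from flask import Flask, make_response, request

app = Flask(__name__)

def render_vehicle_details(vin):
    # Hypothetical stand-in for the real database lookup and template.
    return f"<div class='vehicle'>Details for {vin}</div>"

@app.route("/vehicle-details")
def vehicle_details():
    resp = make_response(render_vehicle_details(request.args.get("vin", "")))
    # Header equivalent of a noindex meta tag; works for Ajax fragments
    # that have no <head>. Only read if robots.txt does NOT block the URL.
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp

if __name__ == "__main__":
    app.run()
```
-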
What are partial URLs and why are they causing a sitemap error?
Hi mozzers, I have a client that recorded 7 errors when generating an XML sitemap. Some of the errors appear to be coming from partial URLs, which apparently need to be excluded from the sitemap. What are they exactly, and why would they cause an error in the sitemap? Thanks!
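A "partial URL" in this sense generally means a sitemap entry that is not a fully qualified absolute URL - something missing its scheme or host, such as a bare path. A minimal sketch of the check involved (the example URLs are made up):

```python
from urllib.parse import urlparse

def is_full_url(url):
    """A sitemap <loc> must be absolute: scheme plus host."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

candidates = [
    "https://www.example.com/page/",  # fine
    "/page/",                         # partial: path only
    "www.example.com/page/",          # partial: no scheme
]
for url in candidates:
    print(url, "->", "keep" if is_full_url(url) else "exclude from sitemap")
```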
Intermediate & Advanced SEO | | Ideas-Money-Art0 -
Would it be better to Start Over vs doing a Website Migration?
Hey guys/gals, I have a question please. I have a computer repair business that does extremely well in search and is on the front page of Google for anything computer-repair related. However, I am currently re-branding my company: I have completely redesigned every aspect of the UI and the SEO site structure, and I have written vastly different content and different title tags and meta descriptions for each page. Now, when doing a migration we know that we want to keep our content, titles, headlines, and meta descriptions the same so as not to lose our rankings. Seeing that I have completely gone against the grain in all directions on a much-needed company re-branding, and everything is completely different from the old site, is it even worthwhile 301 redirecting my old URLs to the new ones that would best correspond to them? In the plainest English: would I rank the new website QUICKER without doing 301 redirects from the OLD to the NEW? In an EXTREME instance like this, would the domain migration IMPEDE ranking the new site, seeing how nothing is the same? I have built a rock-solid silo site architecture on the new site, which is WordPress using the Thesis framework; the old domain is built on Joomla 1.5. Thanks fellas, Marshall
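If the redirects are kept (the old URLs have presumably accumulated links over the years), the mapping itself can stay simple: each old Joomla URL points at whichever new page is its closest equivalent. A small sketch that emits Apache mod_alias rules for an .htaccess file - all paths here are hypothetical examples, not the actual URLs:

```python
# Old Joomla paths mapped to their closest new WordPress equivalents
# (every path below is a made-up example).
redirects = {
    "/computer-repair.html": "/services/computer-repair/",
    "/virus-removal.html": "/services/virus-removal/",
    "/contact-us.html": "/contact/",
}

# Emit one permanent-redirect rule per mapping.
for old, new in redirects.items():
    print(f"Redirect 301 {old} {new}")
```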
Intermediate & Advanced SEO | | MarshallThompson0 -
Googlebot on paywall made with cookies and local storage
My question is about paywalls made with cookies and local storage. We are changing a website with free content to an open paywall with a five-article weekly view limit. The paywall is built on cookies and local storage: the article views are counted in local storage, but you have to have cookies enabled to read the free articles. If you don't have cookies enabled, we serve an error page (otherwise the paywall would be easy to bypass). Can you say how this affects SEO? We would still like Google to index all the article pages that it does now. Would it be cloaking if we treated Googlebot differently, so that even without cookies enabled it would still be able to index the page?
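On the question of treating Googlebot differently: whichever policy is chosen, user-agent strings are trivially spoofed, so bot-specific handling is normally paired with reverse-DNS verification, which is Google's documented way to confirm a request really comes from Googlebot. A minimal sketch (the IP is only an example from a published Googlebot range):

```python
import socket

def is_verified_googlebot(ip):
    """Reverse-DNS check: a real Googlebot IP resolves to a googlebot.com
    or google.com hostname, and that hostname resolves back to the IP."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False

print(is_verified_googlebot("66.249.66.1"))
```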
Intermediate & Advanced SEO | | OPU1