Reading Crawl Diagnostics and Taking Action on Results
-
My site crawl diagnostics are showing a high number of duplicate page titles and content. When I look at the flagged pages, many of the errors are simply listed from multiple pages of product category search results. This looks pretty normal to me, and I am at a loss to understand how to fix this situation. Can I talk with someone?
thanks,
Gary
-
If you're still looking for ideas of what to do with the duplicate content, Dr. Pete's post from earlier this month gives an in-depth look at the different types of duplicate content and solutions.
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
-
Some of these are all the same product but sorted in a different way. Usually I'd recommend implementing the canonical tag but then some of your products are different so I'd be interested to hear some more replies.
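For the identical-products-sorted-differently case, the canonical tag goes in the head of each sorted variant and points at the unsorted category URL. A minimal sketch with placeholder URLs (swap in the real category page):

```html
<!-- On /category/widgets/?sort=price (and every other sort order),
     point search engines at the one canonical version of the listing: -->
<link rel="canonical" href="http://www.example.com/category/widgets/" />
```

This consolidates the duplicate-title warnings onto a single indexable URL while leaving the sorted views available to visitors.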
-
This is what I am seeing (below). Dog Show - Express Line is a category we have set up on the site. The duplicate page and content errors seem to be triggered simply by paging through the products listed for this category:

Dog Show - Express Line
http://www.hodgesbadge.com/dog-show-express-line/c/45005/ | 5 | 26 | 1 | Dog Show - Express Line
http://www.hodgesbadge.com/dog-show-express-line/c/45005/action/showall/ | 2 | 26 | 1 | Dog Show - Express Line
http://www.hodgesbadge.com/dog-show-express-line/c/45005/action/showall/sb/0/ | 2 | No Data | No Data | Dog Show - Express Line
http://www.hodgesbadge.com/dog-show-express-line/c/45005/action/showall/sb/1/ | 1 | No Data | No Data | Dog Show - Express Line
http://www.hodgesbadge.com/dog-show-express-line/c/45005/action/showall/sb/2/ | 1 | No Data | No Data | Dog Show - Express Line
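If the /sb/ (sort) variants should stay crawlable but out of the index, an alternative to a canonical tag is a robots meta tag on just those variants. A hedged sketch, assuming the sort pages really do show the same products as the base category page:

```html
<!-- In the <head> of each sb/ sort-order variant only;
     noindex drops the page from the index, follow keeps link equity flowing -->
<meta name="robots" content="noindex, follow" />
```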
-
Could you give an example of the category pages? For example, is the duplication showing up on one category that has a number of pages within it?
Related Questions
-
Dynamic Canonical Tag for Search Results Filtering Page
Hi everyone,

I run a website in the travel industry where most users land on a location page (e.g. domain.com/product/location) before performing a search by selecting dates and times. This takes them to a pre-filtered, dynamic search results page with options for their selected location on a separate URL (e.g. /book/results). The /book/results page can only be accessed on our website by performing a search, and URLs with search parameters from this page have never been indexed in the past.

We work with some large partners who use our booking engine and who have recently started linking to these pre-filtered search results pages. This is not being done on a large scale, and at present we only have a couple of hundred of these search results pages indexed. I could easily add a noindex or self-referencing canonical tag to the /book/results page to remove them, however it’s been suggested that adding a dynamic canonical tag to our pre-filtered results pages pointing to the location page (based on the location information in the query string) could be beneficial for the SEO of our location pages. This makes sense, as the partner websites that link to our /book/results page are very high authority, and any way that this authority could be passed to our location pages (which are our most important in terms of rankings) sounds good. However, I have a couple of concerns:

• Is using a dynamic canonical tag in this way considered spammy / manipulative?
• Whilst all the content that appears on the pre-filtered /book/results page is present on the static location page where the search initiates (and which the canonical tag would point to), it is presented differently, and there is a lot more content on the static location page that isn’t present on the /book/results page. Is this likely to see the canonical tag being ignored / link equity not being passed as hoped, and are there greater risks to this that I should be worried about?

I can’t find many examples of other sites where this has been implemented, but the closest would probably be booking.com:

https://www.booking.com/searchresults.it.html?label=gen173nr-1FCAEoggI46AdIM1gEaFCIAQGYARS4ARfIAQzYAQHoAQH4AQuIAgGoAgO4ArajrpcGwAIB0gIkYmUxYjNlZWMtYWQzMi00NWJmLTk5NTItNzY1MzljZTVhOTk02AIG4AIB&sid=d4030ebf4f04bb7ddcb2b04d1bade521&dest_id=-2601889&dest_type=city&

Canonical points to https://www.booking.com/city/gb/london.it.html

In our scenario, however, there is a greater difference between the content on the two pages (and booking.com have a load of search results pages indexed, which is not what we’re looking for). Would be great to get any feedback on this before I rule it out. Thanks!
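The "dynamic" part of the tag is just a server-side mapping from the results URL back to a location page. A minimal sketch in Python, assuming a hypothetical `location` query parameter carries the location slug (the real parameter name and URL scheme will differ):

```python
from urllib.parse import urlparse, parse_qs

def canonical_for_results(url):
    """Map a pre-filtered /book/results URL to its location page.

    Hypothetical scheme: the location slug is assumed to arrive in a
    `location` query parameter; adjust to the site's real parameter.
    """
    query = parse_qs(urlparse(url).query)
    location = query.get("location", [None])[0]
    if location:
        # Point the canonical at the static location page
        return "https://domain.com/product/" + location
    # No recognisable location: fall back to a self-referencing canonical
    return url.split("?")[0]
```

The returned URL would then be emitted in the page template as `<link rel="canonical" href="...">`.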
Technical SEO | | GAnalytics1 -
Kill your htaccess file, take the risk to learn a little
Last week I was browsing Google's index with "site:www.mydomain.com" to scan over what Google had indexed for my site. I came across a URL that was mistakenly indexed. It went something like this:

www.mydomain.com/link1/link2/link1/link4/link3

I didn't understand why Google had indexed a page of mine like that, since the "link" pages were links on my main bar, which were site-wide links. It seemed to be looping infinitely over and over. So I started trying to see how many of these Google had indexed, and I came across about 20 pages. I went through the process of removing the URLs in Webmaster Tools, but then I wanted to know why it was happening.

I discovered that I had mistakenly placed some links in my site's header without a leading "/", in the manner of <a href="link1/">, <a href="link2/">, <a href="link3/">. If you know HTML you will realize that by not placing the "/" in front of the link, I was telling that page to append the link to the URL it was currently on. What this did was create an infinite loop of links, which is not good 🙂 Basically, when Google went to www.mydomain.com/link1/ it found the other links, which told Google to add that URL to the existing URL and then follow it. Something like: www.mydomain.com/link1/link2/... When you do not add the "/" in front of the directory you are linking to, it will do this. The "/" refers to the root, so if you place it in front of the directory you are linking to, the URL will always resolve from the root.

So what did I do? Even though I was able to find about 20 URLs using the "site:" search method, there had to be more out there. I tried searching but was not able to find any more; still, I was not convinced. Then the light bulb went on: my .htaccess file contained many 301 redirects from my attempt to redirect those pages to a real page, even though there were no really relevant pages to redirect to.

So how could I really find out what Google had indexed for me, since Webmaster Tools only reports the top 1,000 links? I decided to kill my .htaccess file. Knowing that Google is "forgiving" when major changes happen to your site, I knew it would not simply kill my site for removing the .htaccess file immediately. I waited 3 days, then BOOM! Webmaster Tools reported that it had found a ton of 404s on my site. I looked at the Crawl Errors and there they were: all those infinite-loop links that I knew had to be out there. How many were there? Google found over 5,000 of them in the first crawl. OMG! Can you imagine the "low quality" score I was getting on those pages?

By seeing all those links I was able to determine about 4 patterns in them. For example:

www.mydomain.com/link1/link2/
www.mydomain.com/link1/link3/
www.mydomain.com/link1/link4/
www.mydomain.com/link1/link5/

Now, my issue was that I wanted to keep all the URLs pointing to www.mydomain.com/link1/ but needed anything after that gone. I went into my robots.txt file and added this:

Disallow: www.mydomain.com/link1/link2/
Disallow: www.mydomain.com/link1/link3/
Disallow: www.mydomain.com/link1/link4/
Disallow: www.mydomain.com/link1/link5/

There were many more pages indexed deeper within those links, but I knew I wanted anything after the second directory gone, since that was the start of the loop I had detected. With that I was able to block, from what I know, at least 5k links if not more.

What did I learn from this? Kill your .htaccess file for a few days and see what comes back in your reports. You might learn something 🙂 After doing this I simply replaced my .htaccess file, and I am on my way to removing a ton of "low quality" links I didn't even know I had.
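One caveat on the robots.txt lines in the post above: Disallow rules take URL paths relative to the host, not full URLs, so crawlers would normally expect the same blocks written like this (directory names are the post's hypothetical link1/link2 examples):

```
User-agent: *
# Paths only, no scheme or hostname, in Disallow rules
Disallow: /link1/link2/
Disallow: /link1/link3/
Disallow: /link1/link4/
Disallow: /link1/link5/
```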
Technical SEO | | cbielich0 -
Google Webmaster Tool - Crawl Stats Query ?
Dear All, I have been looking at the GWT Crawl Stats and wondering how I should be interpreting the crawl stats charts. All I see is 3 charts telling me a high, low and average for each of the items below, but I am wondering: is there anything I really need to be looking for?

Pages crawled per day
Kilobytes downloaded per day
Time spent downloading a page (in milliseconds)

Thanks, Sarah
Technical SEO | | SarahCollins0 -
Higher PA score not reflected in google results - Redirect Issue ?
We have a redirect on our site from www.subsidesports.com to www.subsidesports.com/uk. Checking both home page scores in OSE, the .com/uk site has a higher PA and other metrics than .com, yet all home page SERPs listed in Google still show .com with the lower PA and other metrics, although the DA score of course is the same for both. Are we doing anything wrong here?

As part of my troubleshooting I performed a redirect check using <http://www.ragepank.com/redirect-check/> and received the following error report:

http://www.subsidesports.com/index.html returns a 200 (OK) response. PR N/A
http://subsidesports.com/index.html returns a 200 (OK) response. PR N/A

Potential problems on this site: 2 pages returned a 200 response. This indicates potential for duplicate content problems. Ideally, only http://www.subsidesports.com OR http://subsidesports.com should return a 200 response.

Are these two issues related? Perhaps I have answered my own question?
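The duplicate 200 responses flagged in that report are typically resolved with a single 301 rule. A minimal Apache .htaccess sketch, assuming mod_rewrite is enabled and www is the chosen host (example.com stands in for the real domain):

```apache
RewriteEngine On
# Send the bare domain to the www host with a permanent (301) redirect,
# so only one hostname ever returns a 200 for each page
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```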
Technical SEO | | gooner10 -
Blocking AJAX Content from being crawled
Our website has some pages whose content is shared from a third-party provider, and we use AJAX as our implementation. We don't want Google to crawl the third party's content, but we do want them to crawl and index the rest of the web page. However, in light of Google's recent announcement about indexing AJAX content more effectively, I have some concern that we are at risk of that content being indexed. I have thought about X-Robots-Tag but am concerned about implementing it on the pages because of the potential risk of Google not indexing the whole page. These pages get significant traffic for the website, and I can't risk that. Thanks, Phil
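One way to square this: apply the X-Robots-Tag header only to the HTTP responses of the AJAX endpoint that returns the third-party content, not to the host pages that embed it. A hedged Apache server-config sketch (requires mod_headers; the /ajax/third-party path is a hypothetical stand-in for the real endpoint):

```apache
# Send a noindex header only on the AJAX endpoint's own responses;
# the host pages that embed this content carry no such header
<LocationMatch "^/ajax/third-party">
    Header set X-Robots-Tag "noindex, nofollow"
</LocationMatch>
```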
Technical SEO | | AU-SEO0 -
Using differing calls to action based on IP address
Hi,

We have an issue with a particular channel on a lead generation site where sales staff require different qualities of lead in different parts of the country. In saturated markets they require a stricter lead qualification process than those in more challenging markets. To combat the problem, I am toying with the idea of serving very slightly different content based on IP address. The main change in content would be in the calls to action and lead qualification processes.

We plan to have a "standard" version of the site for when the IP location cannot be detected. URLs on this version would be the rel="canonical" for the location-specific pages. Is there a way to do this without creating duplicate content, cloaking or other such issues on the site? Any advice, theories or case studies would be greatly appreciated.
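To make the idea concrete, the variation can be confined to one block of copy chosen server-side, while the URL and its rel="canonical" stay identical for every visitor. A sketch under stated assumptions (region names and variant labels are hypothetical; the geo-IP lookup itself is out of scope here):

```python
def cta_for_region(region):
    """Pick the call-to-action variant for a visitor's resolved region.

    Hypothetical market list and variant names. The page URL and its
    rel="canonical" are unchanged across variants; only this block
    of copy differs, which keeps the page itself non-duplicated.
    """
    strict_markets = {"london", "manchester"}  # saturated: stricter qualification
    if region is None:
        return "standard"  # IP lookup failed: serve the default page
    if region.lower() in strict_markets:
        return "strict"
    return "relaxed"
```

Since search engine crawlers resolve to no recognised sales region, they would consistently see the "standard" variant, which is one common argument that this pattern is personalisation rather than cloaking.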
Technical SEO | | SEM-Freak1 -
Help with steps to take when fixing canonical URL structure?
I would like to 301 redirect all the variations of my site to a single URL, but would like some clarification on a few issues. I have always been confused about how to handle canonicalization, and hopefully this can clear it up for me and others. This particular site is about 1 year old and gets approximately 15k uniques a month in a great niche. I want to make sure I do this correctly so as not to hurt my existing rankings, which are quite good. Here is what I am unsure about:

Basically, I should pick the best URL structure and redirect all the others to it, correct? What determines which URL is best to redirect the rest to? Is it www.domain.com, http://domain.com or http://www.domain.com? Is the best one to redirect to always standard and something I should set up at the beginning of a site? Or is picking the best URL a matter of seeing which one starts to rank in Google and then using that one? Should I be going through each of my rankings and seeing which URL is ranking in Google for each page? On this particular site, ALL of my URLs in Google show up without www or http, appearing in the SERPs as domain.com or domain.com/inner-page.html. In that case, what do I do?

I know the slow way to do redirects: I use my HostGator account and do it in cPanel, one by one. Is there a faster way where I can make lots of changes at once? Maybe I can choose all the variations and put in the one I want them all to redirect to?

After I figure the above out, is fixing all of this as simple as redirecting ALL variations to the one I will use moving forward for each page on my site? Then I am done? Thanks again for the help! Jake
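On the "faster way" question: rather than adding redirects one by one in cPanel, a single pattern rule in .htaccess can fold every page of one hostname into the other at once. A sketch assuming Apache with mod_rewrite and that the www form is the chosen canonical host (reverse the condition if the bare domain is preferred):

```apache
RewriteEngine On
# Any request whose host does not start with "www." is 301-redirected
# to the same path on the www host, covering every page in one rule
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
```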
Technical SEO | | PEnterprises0 -
Does RogerBot read URL wildcards in robots.txt
I believe that the Google and Bing crawl bots understand wildcards in the "Disallow" URLs in robots.txt - does Roger?
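For reference, the wildcard syntax in question: Google and Bing support `*` (any run of characters) and `$` (end of URL) in Disallow paths, although neither is part of the original robots.txt standard. A hypothetical example:

```
User-agent: *
# Block any URL containing a sort parameter
Disallow: /*?sort=
# Block all PDF files; $ anchors the match to the end of the URL
Disallow: /*.pdf$
```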
Technical SEO | | AspenFasteners0