Why does my crawl report show just one page result?
-
I just ran a crawl report on my site: http://dozoco.com. The report shows results for just one page - the home page - and no other pages. It doesn't indicate any errors or "do not follows," so I'm unclear on the issue, although I suspect user error - mine.
-
Thanks Sha. The content is "ours" - at least insofar as we've pulled it from retailer sites and/or affiliate networks and modified it to fit our needs...so not entirely ours, but not a pure duplicate either. We do operate a fundraising site which shares the content, something I hadn't considered until now...we'll have to decide how to handle the duplication across the two sites. That said, the rest of your points are well taken and appreciated. We'll have to do some further research into the JavaScript points and determine how best to handle them.
-
Thanks Keri - very helpful.
-
Hi William,
As indicated on the help page that Keri provided, the problem is that the page is rendered entirely in JavaScript, and the SEOmoz crawlers do not follow JavaScript links or redirects.
Of course, the reason the SEOmoz crawlers do not do this is most likely that Google's (and other search engines') stated position is only that they are "getting better" at handling JavaScript - the chance of trouble-free crawling by Googlebot is low, or at the very least unknown.
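A quick way to approximate what a non-JavaScript crawler sees is to look at the raw HTML with all script content ignored - if nothing readable is left, the page depends entirely on JavaScript to render. A rough sketch (the helper function and sample pages are illustrative, not anything from SEOmoz's toolset):

```python
import re

def text_without_js(html: str) -> str:
    """Approximate what a non-JavaScript crawler indexes: strip <script>
    blocks (their contents never render as text without execution),
    strip the remaining tags, and collapse whitespace."""
    no_scripts = re.sub(r"<script\b[^>]*>.*?</script>", " ", html,
                        flags=re.IGNORECASE | re.DOTALL)
    no_tags = re.sub(r"<[^>]+>", " ", no_scripts)
    return " ".join(no_tags.split())

# A JavaScript-rendered "shell" page: almost nothing for a crawler to index.
js_shell = """<html><body><div id="app"></div>
<script>document.getElementById('app').innerHTML = '<h1>Store</h1>';</script>
</body></html>"""

# A server-rendered page: the content is in the HTML itself.
server_page = "<html><body><h1>Store</h1><p>Pet supplies</p></body></html>"

print(repr(text_without_js(js_shell)))     # → '' (empty: nothing indexable)
print(repr(text_without_js(server_page)))  # → 'Store Pet supplies'
```

If the first result comes back empty on your own pages, a crawler that doesn't execute JavaScript (like Roger) has nothing to index there.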
Bing now has an option in its Webmaster Tools that lets you indicate that JavaScript crawling is required for a site. I have not seen any information on how effective this is yet, but you could investigate that by asking in their help forum.
Even if search engines manage to crawl the JavaScript without issue, there are other significant problems with the content on the site. It appears the site is a multi-affiliate white label? All of the text is actually pulled in from an external page, and that page contains content that is duplicated across many other websites. This is the case with every "page".
Unfortunately, all of these things add up to a fairly bad SEO situation. Your best option for generating traffic would be to become massively popular through social channels and use them to feed traffic to the site. That is assuming that this whitelabel platform does not give you the option to create your own content (which would be much better).
Another alternative would be to create a site on a new domain with awesome, unique, shareable content and links that feed traffic to this site. But if you are going that route, making people take an extra click through a second domain on the way to the retailer's site would not be optimal for conversions, so it would be better to add direct affiliate links within the pages.
So, on the whole, I would say that ramping up your social activity is your best approach.
Hope this helps,
Sha
-
Here's a post from the help desk with a couple of reasons for that. http://seomoz.zendesk.com/entries/409821-why-isn-t-my-site-being-crawled-you-only-crawled-one-page. If that doesn't take care of the problem for you, email [email protected] and they'll work with you on getting the rest of the site crawled.
I'm looking at a site:dozoco.com search in Google, and all the URLs I see look like http://dozoco.com/#!/store/us-pets. The #! may be the cause of the problem; I'm not exactly sure how Roger (the SEOmoz crawler) deals with crawling that.
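For background on those #! ("hashbang") URLs: Google's AJAX crawling scheme of that era handled them by rewriting the fragment into an _escaped_fragment_ query parameter, which the server was expected to answer with a pre-rendered HTML snapshot. A minimal sketch of that rewrite (the function name is mine, not part of any Moz tool):

```python
from urllib.parse import quote

def escaped_fragment_url(hashbang_url: str) -> str:
    """Map a #! (hashbang) URL to the _escaped_fragment_ form that
    crawlers supporting Google's AJAX crawling scheme would request."""
    if "#!" not in hashbang_url:
        return hashbang_url  # no hashbang: nothing to rewrite
    base, fragment = hashbang_url.split("#!", 1)
    sep = "&" if "?" in base else "?"  # append to any existing query string
    return f"{base}{sep}_escaped_fragment_={quote(fragment, safe='')}"

print(escaped_fragment_url("http://dozoco.com/#!/store/us-pets"))
# → http://dozoco.com/?_escaped_fragment_=%2Fstore%2Fus-pets
```

So whether Roger (or any crawler) can follow those pages depends on whether it implements this mapping and whether the server actually returns a rendered snapshot for the _escaped_fragment_ request.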