Best Practices for Adding Dynamic URLs to an XML Sitemap
-
Hi Guys,
I'm working on an ecommerce website where all the product pages use dynamic URLs (we also have a few static pages, but there is no issue with those).
The products on the site are updated every couple of hours (because we sell out or a special offer expires), and as a result I keep seeing heaps of 404 errors in Google Webmaster Tools, which I'm trying to avoid (if possible).
I have already created an XML sitemap for the static pages and am now looking at incorporating the dynamic product pages, but I'm not sure which approach is best.
The URL structure for the products is as follows:
http://www.xyz.com/products/product1-is-really-cool
http://www.xyz.com/products/product2-is-even-cooler
http://www.xyz.com/products/product3-is-the-coolest
Here are two approaches I was considering:
1. Include just the following folder URL, http://www.xyz.com/products/, within the same sitemap as the static URLs. This way spiders have access to the folder the products live in, and I don't have to create an automated sitemap for every product.
OR
2. Create a separate, automated sitemap that regenerates whenever a product is updated, with the change frequency set to hourly. This way spiders always have as close to an up-to-date sitemap as possible whenever they crawl it (a sample entry is below).
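For reference, a single entry in an automated product sitemap might look something like this (the URL and date are placeholders; changefreq and priority are only hints to crawlers, not guarantees):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.xyz.com/products/product1-is-really-cool</loc>
    <lastmod>2012-06-14</lastmod>
    <changefreq>hourly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```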
I look forward to hearing your thoughts, opinions, suggestions and/or previous experiences with this.
Thanks heaps,
LW
-
Hi LW
I agree with Mark re archiving products. Although our products don't expire as quickly as yours appear to, I use http://www.xml-sitemaps.com/standalone-google-sitemap-generator.html on a cron job to keep our sitemap fresh (a sample crontab entry is below).
I also use this tool to exclude some of our overly dynamic URLs from appearing in the sitemap.
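In case it helps, a rough sketch of the crontab entry (the PHP binary path, install path, and script name here are assumptions; the standalone generator runs as a PHP script, so adjust to your own install):

```
# Rebuild the sitemap at the top of every hour (all paths are placeholders)
0 * * * * /usr/bin/php /var/www/tools/sitemap-generator/runcrawl.php > /dev/null 2>&1
```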
Dean
-
Hi LW,
What system is backing the online store? Are you using a CMS-driven e-commerce solution?
My suggestion would be to create an automated sitemap for the products. Pay careful attention to the priorities you assign and the update frequencies (hourly or daily is fine). You'd definitely be spending far too much time updating a sitemap if you had to do it manually.
This method will give you a much more accurate sitemap at crawl time.
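If the static and automated product sitemaps are kept as separate files, a sitemap index is the standard sitemaps.org way to tie them together so the engines discover both (the filenames and date below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.xyz.com/sitemap-static.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.xyz.com/sitemap-products.xml</loc>
    <lastmod>2012-06-14</lastmod>
  </sitemap>
</sitemapindex>
```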
Also, if you are planning on offering the same product in the future, it might be an idea not to remove the page altogether, but rather have it say "This offer is currently not available" or something along those lines.
Another option might be an archive category of products where all your expired offers can be placed, not available for order. This would let you keep your indexed pages, avoid 404s, and use the archived product pages to point new visitors to related or newer products.
Just thinking out loud.
I'd be interested to see the website and the solution that you do eventually implement.
Regards
Mark
Related Questions
-
Do you need an on-page site map as well as an XML sitemap?
Do on-page site maps help with SEO, or are they more for user experience? We submit and update our XML sitemaps for the search engines, but we're wondering if a /sitemap page for users is necessary.
Technical SEO | bonnierSEO
-
Will an XML sitemap override a robots.txt?
I have a client whose robots.txt file is blocking an entire subdomain, entirely by accident. Their original solution, not realizing the robots.txt error, was to submit an XML sitemap to get their pages indexed. I did not think this tactic would work, as the robots.txt would take precedence over the XML sitemap. But it worked... and I have no explanation as to how or why. Does anyone have an answer to this, or any experience with a website that has had a clear Disallow: / for months (sketched below) yet somehow has pages in the index?
Technical SEO | KCBackofen
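For context, a sketch of the situation described (the sitemap URL is a placeholder). Note that Disallow blocks crawling rather than indexing, which is one common explanation for blocked URLs still appearing in the index when other signals, such as a submitted sitemap or external links, point at them:

```
# robots.txt at the root of the blocked subdomain: blocks all crawling
User-agent: *
Disallow: /

# A sitemap can still be declared or submitted separately (placeholder URL)
Sitemap: http://sub.example.com/sitemap.xml
```
-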
SEO URL best practices
We're revamping our site architecture and making several services pages that are accessible from one overarching services page. An example would be as follows:
Services
- Student Services
  - Essay editing
  - Essay revision
- Author Services
  - Book editing
  - Manuscript critique
We'll also be putting breadcrumbs throughout the site for easy navigation. However, is it imperative that we build the URLs that deep? For example, could we simply have www.site.com/essay-editing rather than www.site.com/services/students/essay-editing? I prefer the simplicity of the former, but I feel the latter may be more "search robot friendly" and better for SEO. Any advice on this is much appreciated.
Technical SEO | Kibin
-
How to find original URLs after the hosting company added canonical URLs, URL rewrites, and duplicate content
We recently changed hosting companies for our ecommerce website. The hosting company added functionality that caused duplicate content and/or mirrored pages to appear in the search engines. To fix this, the hosting company created both canonical URLs and URL rewrites. Now we have page A (the original page with all the link juice) and page B (the new page with no link juice or SEO value). Both pages have the same content, with different URLs. I understand that a canonical URL is the way to tell the search engines which page is preferred in cases of duplicate content and mirrored pages; it tells them that page B is a copy of page A, but page A is the one to index (see the sketch below). The problem we now face is that the hosting company made page A a copy of page B, rather than the other way around. Page A is the original page with the SEO value and link juice, while page B is the new page with no value, so the search engines are now prioritizing the newly created page over the original one. I believe the solution is to reverse this so that page B (the new page) is declared a copy of page A (the original), with the original URL set as the canonical URL for the duplicate pages. The trouble is, with all the rewrites and changes in functionality, I no longer know which URLs have the backlinks that carry the SEO value. If I can find the backlinks to the original page, I can work out its original web address. My question: how can I search the web for backlinks in a way that shows me the URL they all point to, so I can make that URL the canonical URL for all the new duplicate pages?
Technical SEO | CABLES
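For reference, a sketch of the tag in question (URLs are placeholders): once the original URL is identified, the duplicate page should declare it as canonical, not the other way around:

```html
<!-- In the <head> of page B (the new duplicate), pointing at page A (the original) -->
<link rel="canonical" href="http://www.example.com/page-a" />
```
-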
Sitemap for pages that aren't on menus
I have a site with a large number of pages, about 3,000, that have static URLs but no internal links and no connection to the menu. The pages are pulled up through a user-initiated selection process that builds the URL as selections are made, but, as I said, the pages already exist with static URLs. The question: should the sitemap for this site include these 3,000 static URLs? There is very little opportunity to optimize the pages in any serious way, if you feel that makes a difference. There is also no chance that a crawler will find its way to these pages through the natural flow of the site; there isn't a single link to any of them anywhere on the site. Help?
Technical SEO | RockitSEO
-
H-tags and Page Name best practice
For the past few months I've been working on a new site launch, but have been left with a couple of annoyances from my predecessor, so I have a couple of questions about best practice (and whether it's worth changing now). For reference, a good example page is http://polestars.net/hen-party/life-drawing-hen-party/
H-tags: The external web designer insists that wrapping the logo in an H1 tag (with the same branded H1 text on every page) and using an H2 for the actual title of the page is fine. I really don't believe him, but at the same time I feel like maybe Google is smart enough to discern the theme of a page with this structure. Is it worth having this changed so that the actual title is the first H1? (A sketch of the structure I have in mind is below.)
Page naming convention: Another annoyance I've been left with is that every product page is named the same way: "burlesque hen party", "life drawing hen party", "whatever hen party"... It looks a little weird, but my real concern is that, as we now have 60 "hen party" links in the navigation menu of a bunch of our pages, this may be seen as keyword stuffing. Is this a real concern, or am I overthinking it?
Technical SEO | AlecPR
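For what it's worth, a sketch of the more conventional structure the poster is after (markup, class names, and copy are placeholders): keep the logo out of the H1 and give each page a unique, descriptive H1:

```html
<!-- Logo as a plain link rather than a site-wide H1 -->
<a href="/" class="site-logo">Polestars</a>

<!-- The page's actual title as its one H1 -->
<h1>Life Drawing Hen Party</h1>
<h2>What's included</h2>
```
-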
What's the SEO impact of URL suffixes?
Is there an advantage or disadvantage to adding an .html suffix to URLs in a CMS like WordPress? Plugins exist to do it (an example setting is below), but it seems better for the user to leave it off. What do search engines prefer?
Technical SEO | Cornucopia
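As an aside, in WordPress this does not necessarily need a plugin; a custom permalink structure can add the suffix (a sketch, assuming a standard install where posts are the pages in question):

```
# Settings -> Permalinks -> Custom Structure
/%postname%.html
```
-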
How Best to Handle 'Site Jacking' (Unauthorized Use of Someone Else's Dedicated IP Address)
Anyone can point their domain at any IP address they want. I've found at least two domains (same owner), totally unrelated to each other and to us, that are currently pointed at our IP address. The IP address is on our dedicated server (we control the entire physical server) and is exclusive to that one domain, so this isn't a virtual-hosting misconfiguration issue. This has caused Google to index their two domains with duplicate content from our site (found by searching for site:www.theirdomain.com). Their site does not come up in the first 50 results for any of the keywords we rank for, so Google obviously knows THEY are the dupe content, not us (our site has been around for 12 years, much longer than theirs). Their registration is private and we have not been able to contact these people. I'm not sure if this is just a DNS mistake on the two domains or if someone is doing this intentionally to try to harm our ranking. It has been going on for a while, so it is most likely not a mistake; with two live sites, they would have noticed long ago that they were pointing to the wrong IP.
I can think of a variety of actions to take, but I can find no information anywhere on what Google officially recommends in this situation, assuming you can't get a response. Here are my ideas:
a) Approach it as a digital copyright violation and go through the lengthy process of having their site taken down. Pro: eliminates the issue. Con: it's a bit of a pain, and we could be leaving some link juice on the table.
b) Modify .htaccess to 301 redirect any request not using our domain to our domain (a sketch of this rule is below). Google would then see several domains all pointing to the same IP, with every domain except ours 301 redirecting to ours. Not sure whether that would harm (or help) us. Would we not then receive link juice from any site linking to these other domains? Con: Google will see the context of the backlinks, and their link text will not be related to our site at all. In addition, if any of these other domains pointing to our IP have backlinks from 'bad neighborhoods', I assume that could hurt us?
c) Modify .htaccess to return a 404 Not Found or 403 Forbidden error?
I posted in other forums and have gotten suggestions that are all over the map. In many cases the posters don't even understand what I'm talking about, thinking these are just normal backlinks. Argh! So I'm taking this to "The Experts" on SEOMoz.
Technical SEO | jcrist1
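For option b), a minimal sketch of the rule described (ourdomain.com is a placeholder, and this assumes Apache with mod_rewrite enabled):

```apache
# 301 redirect any request whose Host header is not our domain
# (with or without www) to the same path on our canonical domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^(www\.)?ourdomain\.com$ [NC]
RewriteRule ^(.*)$ http://www.ourdomain.com/$1 [R=301,L]
```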