Why should I add URL parameters when Meta Robots NOINDEX is already in place?
-
Today I checked Bing Webmaster Tools and came across the Ignore URL Parameters feature.
Bing Webmaster Tools shows me suggested parameters for URLs where I have already added a META robots NOINDEX, FOLLOW tag.
I can see the canopy_fabric_search parameter in the suggested section. It comes from the following kind of URLs:
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1728
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1729
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=1730
http://www.vistastores.com/patio-umbrellas?canopy_fabric_search=2239
But I have added META robots NOINDEX, FOLLOW to keep these pages out of the index, so why is this happening?
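For reference, the tag in question, a META robots NOINDEX, FOLLOW tag, is placed in each page's head and looks like this:

```html
<meta name="robots" content="noindex, follow">
```

Note that this tells engines not to index the page but still to follow its links; it does not stop them from crawling the URL in the first place.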
-
This is good for me. Let me drill down further into that article. I'll test it in Google Webmaster Tools before making it live on the server, which should help me get this task exactly right!
-
Don't use:
Disallow: /*?
because that may well disallow everything; you will need to be more specific than that.
Read that whole article on pattern matching, then do a search for 'robots.txt pattern matching' and you will find examples, so you can base your rules on others' experience.
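To see why `Disallow: /*?` is too broad, here is a minimal Python sketch of Googlebot-style robots.txt pattern matching, where `*` matches any sequence of characters and a trailing `$` anchors the end of the URL. (The helper names are my own, and this is a simplification of the real matching rules; Python's built-in urllib.robotparser does not support these wildcards, which is why a hand-rolled check is used here.)

```python
import re

def robots_pattern_to_regex(pattern):
    # Escape regex metacharacters, then restore the two robots.txt
    # wildcards: '*' matches any character sequence, and a trailing
    # '$' anchors the end of the URL.
    escaped = re.escape(pattern)
    escaped = escaped.replace(r"\*", ".*")
    if escaped.endswith(r"\$"):
        escaped = escaped[:-2] + "$"
    return re.compile(escaped)

def is_blocked(rule, url_path):
    # Robots rules are prefix matches, so re.match (anchored at the
    # start, not the end) gives the right semantics.
    return robots_pattern_to_regex(rule).match(url_path) is not None

# 'Disallow: /*?' blocks EVERY URL that has a query string:
print(is_blocked("/*?", "/patio-umbrellas?canopy_fabric_search=1728"))  # True
print(is_blocked("/*?", "/about-us"))                                   # False

# A narrower rule blocks only the one parameter in question:
print(is_blocked("/*?canopy_fabric_search=", "/patio-umbrellas?canopy_fabric_search=1728"))  # True
print(is_blocked("/*?canopy_fabric_search=", "/patio-umbrellas?page=2"))                     # False
```

Running patterns through a check like this before deploying them is a cheap way to avoid accidentally disallowing your whole site.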
-
I hope the following one is the one for me. Right?
Disallow: /*?
-
I suggest you then use pattern matching to restrict which parameters you don't want crawled.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
-
I agree that robots.txt can deal with this. But my website has 1,000+ attributes for narrow-by-search, and I don't want to disallow all dynamic pages in robots.txt.
Would that be flexible enough for me to manage? The answer is no!
What do you think about it?
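One thing worth noting about the scale concern: a single wildcard rule covers every value of a parameter, so you would need one Disallow line per parameter name, not one per URL. A sketch of what that could look like (the second parameter name here is hypothetical, for illustration only):

```
User-agent: *
Disallow: /*?canopy_fabric_search=
Disallow: /*?canopy_color_search=
```

Whether that is manageable depends on how many distinct parameter names you have, not on how many URLs they generate.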
-
I'd say the first thing to note is that NOINDEX is an assertion on your part that the pages should not be indexed. Search bots have the ability to ignore your instruction; it should be rare that they do, but it's not beyond the realms of possibility.
What I would do in your position is add a Disallow line to your **robots.txt** to completely block access to
/patio-umbrellas?canopy_fabric_search*
That should be more effective if you really don't want these URLs in the index.
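Put into a robots.txt fragment, that suggestion would look like this (assuming no other rules in your file already apply to these paths):

```
User-agent: *
Disallow: /patio-umbrellas?canopy_fabric_search*
```

One caveat worth knowing: a robots.txt Disallow stops crawling, but a page blocked this way can still appear in the index as a bare URL if it is linked externally, because the bot can no longer see the NOINDEX tag on it.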