Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
-
The page in question receives a lot of quality traffic but is only relevant to a small percent of my users. I want to keep the link juice received from this page but I do not want it to appear in the SERPs.
-
Update - Google has crawled this correctly and is returning the correct, redirected page. Meaning, it seems to have understood that we don't want any of the parametered versions of our original page (or its campaign-tracked brethren) indexed ("return representative link"), and it is redirecting from the representative link correctly.
And finally there was peace in the universe...for now. ;> Tim
-
Agree...it feels like leaving a bit to chance, but I'll keep an eye on it over the next few weeks to see what comes of it. We seem to be re-indexed every couple of days, so maybe I can test it out Monday.
BTW, this issue really came up when we were creating a server-side 301 redirect for the root URL, and then I got to wondering if we'd need to set up an iRule for all parameters. Hopefully not...hopefully Google will figure it out for us.
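For what it's worth, most server-side 301s don't need a rule per parameter - the query string rides along by default. A minimal Apache sketch (all paths and domains are stand-ins; an F5 iRule would express the same idea in its own syntax):

```apacheconf
# Hypothetical sketch (stand-in path/domain). With mod_alias's
# Redirect, the request's query string (?cid=..., ?v3, etc.) is
# appended to the target automatically, so no per-parameter rule
# is needed.
Redirect 301 /old-home https://www.example.com/
```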
Thanks Peter. Tim
-
It's really tough to say, but moving away from "Let Google decide" to a more definitive choice seems like a good next step. You know which URL should be canonical, and it's not the parameterized version (if I'm understanding correctly).
If you say "Let Google decide", it seems a bit more like rel=prev/next. Google may allow any page in the set to rank, BUT they won't treat those pages as duplicates, etc. How does this actually impact the PR flow to any given page in that series? We have no idea. They're probably consolidating them on the fly, to some degree. They basically have to be, since the page they choose to rank from the set is query-dependent.
-
This question deals with dynamically created pages, it seems, and Google seems to recommend NOT choosing the "no" option in WMT - choose "yes" when you edit the parameter settings for this and you'll see an option for your case, I think, Christian (I know this is 3 years late, but still).
BUT I have a situation where we use SiteCatalyst to create numerous tracking codes as parameters to a URL. Since there is not a new page being created, we are following Google's advice to select "no". Google apparently will:
"group the duplicate URLs into one cluster and select what we think is the "best" URL to represent the cluster in search results. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL."
What worries me is that a) the "root" URL will not be returned, somehow (perhaps due to a freakish amount of inbound linking to one of our parametered URLs), and b) the root URL will not be getting the juice. The reason we got suspicious about this problem in the first place was that Google was returning one of our parametered URLs (PA=45) instead of the "root" URL (PA=58).
This may be an anomaly that will be sorted out now that we've changed the parameter setting from "Let Google Decide" to "No, page does not change" (i.e., return the "representative" link), but I would love your thoughts - especially on whether the juice passes.
Tim
-
This sounds unusual enough that I'd almost have to see it in action. Is the JS-based URL even getting indexed? This might be a non-issue, honestly. I don't have solid evidence either way about GWT blocking passing link-juice, although I suspect it behaves like a canonical in most cases.
-
I agree. The URL parameter option seems to be the best solution since this is not a unique page. It is the main page with javascript that calls for additional content to be displayed in the form of a lightbox overlay if the condition is right. Since it is not an actual page, I cannot add the rel-canonical statement to the header. It is not clear however, whether the link juice will be passed with this parameter setting in Webmaster Tools.
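(One option worth noting when you can't edit the page's head: Google also accepts rel=canonical sent as an HTTP Link header. A hypothetical Apache 2.4 sketch - stand-in domain and parameter name, assuming mod_headers is enabled:)

```apacheconf
# Hypothetical: when the tracking parameter is present, send an
# HTTP Link header pointing Google at the clean URL.
<If "%{QUERY_STRING} =~ /(^|&)v3(=|&|$)/">
  Header set Link "<https://www.example.com/>; rel=\"canonical\""
</If>
```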
-
If you're already using rel-canonical, then there's really no reason to also block the parameter. Rel-canonical will preserve any link-juice, and will also keep the page available to visitors (unlike a 301-redirect).
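For anyone reading along, the tag in question sits in the head of every parameterized variant and points at the clean URL (example.com used as a stand-in):

```html
<!-- Served in the <head> of https://www.example.com/?v3 (and any
     other tracked variant); tells Google which URL to consolidate
     link signals onto. -->
<link rel="canonical" href="https://www.example.com/" />
```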
Are you seeing a lot of these pages indexed (i.e. is the canonical tag not working)? You could block the parameter in that case, but my gut reaction is that it's unnecessary and probably counter-productive. Google may just need time to de-index (it can be a slow process).
I suspect that Google passes some link-juice through blocked parameters and treats it more like a canonical, but it may be situational and I haven't seen good data on that. So many things in Google Webmaster Tools end up being a bit of a black box. Typically, I view it as a last resort.
-
I can just repeat myself: Set Crawl to yes and use rel canonical with website.com/?v3 pointing to website.com
-
My fault for not being clear.
I understand that the rel=canonical cannot be added to the robots.txt file. We are already using the canonical statement.
I do not want to add the page with the URL parameter to the robots.txt file as that would prevent the link juice from being passed.
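(To illustrate what I'm avoiding - a robots.txt block like the following would stop Googlebot from ever seeing those URLs or the canonical on them, so nothing would be consolidated; the pattern is a stand-in:)

```text
# What NOT to do here: blocked URLs can accumulate inbound links
# but pass nothing on.
User-agent: *
Disallow: /*?v3
```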
Perhaps this example will help clarify:
URL = website.com
URL parameter = website.com/?v3
website.com/?v3 has a lot of backlinks. How can I pass the link juice to website.com and not have website.com/?v3 appear in the SERPs?
-
I'm getting a bit lost with your explanation - maybe it would be easier if I saw the URLs - but here's a brief:
I would not use parameters at all. Clean URLs are best for SEO; remove everything not needed. You definitely don't need a URL parameter to indicate that content is unique for 25% of traffic. (I got a little bit lost here: how can content be unique for just part of your traffic? If it is found elsewhere on your page it is not unique; if it is not found elsewhere, it is unique.) So anyway, those URL parameters indicate nothing to Google and just stuff your URL structure with useless info (for Google), so why use them?
I am already using a link rel=canonical statement. I don't want to add this to the robots.txt file as that would prevent the juice from being passed.
I totally don't get this one. You can't add canonical to robots.txt. This is not a robots.txt statement.
To sum up: if you do not want your parametered page to appear in the SERPs then, as I said, set Crawl to yes and use rel canonical. This way the page will no longer appear in the SERPs, but it will be available for readers and will pass link juice.
-
The parameter to this URL specifies unique content for 25% of my traffic to the home page. If I use a 301 redirect then those people will not see the unique content that is relevant to them. But since this parameter is only relevant to 25% of my traffic, I would like the main URL displayed in the SERPs rather than the unique one.
Google's Webmaster Tools lets you choose how you would like Google to handle URL parameters. When using this tool you must specify the parameter's effect on content. You can then specify what you would like Googlebot to crawl. If I say NO crawl, I understand that the page with this parameter will not be crawled, but will the link juice be passed to the page without the parameter?
I am already using a link rel=canonical statement. I don't want to add this URL parameter to the robots.txt file either, as that would prevent the juice from being passed.
What is the best way to keep this parameter and pass the juice to the main page but not have the URL parameter displayed in the SERPs?
-
What do you mean by "URL parameter specifies content"?
If a page is not crawled it definitely won't pass link juice. Set Crawl to yes and use rel canonical: http://www.youtube.com/watch?v=Cm9onOGTgeM