Meta NOINDEX... how long before Google drops dupe pages?
-
Hi,
I have a lot of near-duplicate content caused by URL parameters, so I have applied a meta robots noindex tag to those pages.
How long will it take for this to take effect? It's been over a week now. I have removed some URLs with the GWT removal tool, but the number of indexed pages still hasn't dropped noticeably.
Any ideas?
Thanks,
Ben
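Before waiting on Google, it's worth verifying the noindex tag is actually being served on those parameter pages. A minimal Python sketch (the function names and the sample HTML are illustrative, not from the thread) that checks an HTML document for a meta robots noindex directive using only the standard library:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots" ...> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())


def has_noindex(html):
    """Return True if the page carries a meta robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)


page = '<html><head><meta name="robots" content="noindex,follow"></head><body></body></html>'
print(has_noindex(page))  # True
```

Running something like this against a sample of the parameter URLs confirms the tag is in place before you start counting the weeks.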
-
In his case, he only wants to get rid of some duplicate content.
I see what you mean, but if he is not in one of the situations listed in http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119 then the removal tool might still be the best and fastest bet.
For me personally it has worked very well so far, provided no robots.txt block is used; that won't help in the long run, since removals made with the tool expire after several months.
The downside of the removal tool is that same expiration date: if you change your mind, you will have some trouble getting the pages back into the index.
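For reference, a robots.txt block like the following (the parameter name is just an example) only stops crawling; it doesn't remove URLs that are already indexed, and pages blocked this way can never be re-crawled to see a noindex tag:

```
User-agent: *
Disallow: /*?sessionid=
```

That's why combining robots.txt with the removal tool can backfire once the removal expires.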
-
You know that I think you are the bee's knees, but I am going to have to disagree on this one. Even Google does not recommend using the removal tool for this application.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119
Still pals?
-
There are several things you can do to get Google to crawl your site (or your new content) more quickly and more often. You should be doing all of these, but in case you're not, here is the list.
-
Create an XML sitemap and submit it through Google Webmaster Tools
-
Install Google Analytics
-
Create social accounts/update your social accounts
-
Use Fetch as Google in Webmaster Tools
-
Update your content more often (to get Google to crawl your site more frequently).
-
Adjust the crawl rate in Google Webmaster Tools.
-
Check crawl errors in Google Webmaster Tools. Are there server-side errors (500s)?
I hope that helps!
-
-
Hi,
The best bet is the removal tool in GWT - this is the fastest way.
If your pages are static and Googlebot only visits them once a month, or once every 4-6 months, you will need to wait until Googlebot visits those pages again, notices the noindex, and drops them from the index.
I've seen cases take 6 months.
In any case, you will probably see those pages drop step by step.
What you can try, although it's not very straightforward, is to build an XML sitemap containing only those files and submit it via GWT. Sometimes Googlebot will think something new has happened and visit those pages, see the noindex, and speed up the process - but not always; I've seen cases where this didn't work, and cases where it did.
Again, the best bet will be the GWT removal tool.
Cheers.
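The sitemap trick above can be sketched roughly like this - a hypothetical helper (the URLs are placeholders, not from the thread) that builds a minimal sitemap.xml listing only the noindexed parameter pages, using just the standard library:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"


def build_sitemap(urls):
    """Build a minimal sitemap.xml string listing only the given URLs."""
    root = ET.Element("urlset", xmlns=SITEMAP_NS)
    for url in urls:
        entry = ET.SubElement(root, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(root, encoding="unicode")


# Placeholder examples of the parameter-driven duplicate pages.
xml_out = build_sitemap([
    "http://www.example.com/page?sort=price",
    "http://www.example.com/page?sort=name",
])
print(xml_out)
```

Save the output as its own sitemap file and submit it in GWT alongside your main sitemap, so the nudge applies only to the pages you want re-crawled.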