noindex, follow for thin content advice
Hello there
We struggle with a number of non-indexed pages, and I want to ask your professional opinion.
The robots tag is set up as follows: <meta name='robots' content='noindex, follow' />. Those pages have no value of their own, but they contain links to valuable pages.
Would setting <meta name="robots" content="noindex, nofollow" /> be a good solution? Here is a page with the noindex robots tag: https://www.lrbconsulting.co.uk/tag/enforcement/page/2/
Please let me know what you think.
#noindex, follow for thin content
#noindex, follow
#meta robots set up
Will do as you say: "noindex, follow".
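For reference, the two directives under discussion differ only in their second token. A minimal illustrative snippet (not taken from the site itself):

```html
<!-- Keeps the page out of the index, but lets crawlers follow its links,
     so link equity can still flow to the valuable pages it lists -->
<meta name="robots" content="noindex, follow" />

<!-- Keeps the page out of the index AND tells crawlers not to follow its links -->
<meta name="robots" content="noindex, nofollow" />
```

For thin pages whose only value is the links they contain, "noindex, follow" is the variant that preserves that value.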
Related Questions
Staging website got indexed by Google
Our staging website got indexed by Google, and Moz is now showing all inbound links from the staging site. How should I remove those links and keep the site out of the index? Note: we already added a meta noindex tag in the head.
Intermediate & Advanced SEO | Asmi-Ta
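One common belt-and-braces approach for staging environments, as a sketch only, is to send the noindex directive as an HTTP response header so that no template change can accidentally drop it. This assumes the staging site runs on Apache with mod_headers enabled; nginx uses a different directive:

```apache
# Hypothetical directive for the staging vhost only: sends an
# X-Robots-Tag header on every response, telling crawlers not to
# index or follow anything on the staging host.
Header set X-Robots-Tag "noindex, nofollow"
```

Password-protecting the staging host is the more reliable fix, since headers and meta tags only take effect once crawlers revisit the pages.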
Unsolved Using NoIndex Tag instead of 410 Gone Code on Discontinued products?
Hello everyone, I am very new to SEO and I wanted to get some input and second opinions on a workaround I am planning to implement on our Shopify store. Any suggestions, thoughts, or insights you have are welcome and appreciated! For those who aren't aware, Shopify as a platform doesn't allow us to send a 410 Gone code under any circumstance. When you delete or archive a product/page, it becomes unavailable on the storefront. Unfortunately, the only thing Shopify natively allows me to do is set up a 301 redirect, so when we are forced to discontinue a product, customers currently get a 404 error when trying to go to that old URL. My planned workaround is to automatically detect when a product has been discontinued and add the NoIndex meta tag to the product page. The product page will stay up but be unavailable for purchase. I am also adjusting the LD+JSON to list the product's availability as Discontinued instead of InStock/OutOfStock.
Then I let the page sit for a few months so that crawlers have a chance to recrawl and remove the page from their indexes. I think that is how that works?
Once 3 or 6 months have passed, I plan on archiving the product and setting up a 301 redirect pointing to our internal search results page. The redirect will send the visitor to a search query aimed at similar products. That should prevent people with open tabs, bookmarks, and direct links to that page from receiving a 404 error. I do have Google Search Console set up and integrated with our site, but manually telling Google to remove a page obviously only affects their index. Will this work the way I think it will?
Will search engines remove the page from their indexes if I add the NoIndex meta tag after it has already been indexed?
Is there a better way I should implement this? P.S. For those wondering why I am not disallowing the page URL in robots.txt: Shopify won't let me call collection or product data from within the template that assembles robots.txt, so I can't automatically add product URLs to the list.
Technical SEO | BakeryTech
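The markup change described above would render roughly like this in the page source. This is an illustrative sketch, not actual Shopify template output; the product details are placeholders, though https://schema.org/Discontinued is a real ItemAvailability value:

```html
<!-- Added to discontinued product pages: asks crawlers to drop the page -->
<meta name="robots" content="noindex" />

<!-- Structured data with availability switched to Discontinued -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Discontinued Product",
  "offers": {
    "@type": "Offer",
    "availability": "https://schema.org/Discontinued",
    "price": "0.00",
    "priceCurrency": "USD"
  }
}
</script>
```

Note that crawlers must still be able to fetch the page in order to see the noindex tag, which is one reason a robots.txt disallow would work against this plan.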
Unsolved What should I do with WordPress Blog homepage
Hi, I have a large WordPress blog with thousands of posts. By default, the blog homepage contains an excerpt of each post, and as there are so many posts, the homepage is paginated (1,341 pages in total). I used Siteliner to check and found a lot of duplicate content across the blog homepage pages. So, what should I do? Should I noindex the homepage and all the paginated pages accordingly? Thank you
On-Page Optimization | ccw
Can you use no-index to counter duplicate content across separate domains?
Hi Moz Community, I have a client who is splitting a sub-brand out of their company website onto its own domain. They have lots of content around the theme, and they want to migrate most of it to the new domain, but they also want to keep that content on the main site, as the main site gets lots of traffic. My question: since they want search traffic to go to the new site, but want to keep the best content on the original site too so it can be found in the nav, if they no-index the identical content on the main site and index the content on the new site, will they still be penalised for duplicate content? Our advice has been to keep the thematic content on both sites but make the versions different enough that they are not considered duplicates (we routinely write the same blog post in 50 different ways for them), but their Head of Web asked whether no-index is a route, which would mean they don't need to pay for and wait for brand-new content. They are comfortable losing traffic until the new domain gets traction. In theory, if they are telling Google not to index or rank the main site's content, the new site shouldn't be penalised, but I'm not confident giving that advice, as I've never been asked to do this before. Thoughts?
Technical SEO | Algorhythm_jT
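One mechanism sometimes weighed as an alternative to no-index in this situation is a cross-domain rel=canonical on the main-site copy, pointing at the new domain's version. It asks Google to consolidate indexing signals on the new URL while the duplicate stays reachable in the nav. An illustrative snippet; both URLs are hypothetical placeholders:

```html
<!-- Placed in the <head> of the duplicate page on the main site;
     points Google at the preferred copy on the new domain. -->
<link rel="canonical" href="https://new-subbrand-domain.example/same-article/" />
```

Unlike noindex, a canonical is treated as a hint rather than a directive, so it may be ignored if the two pages diverge significantly.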
Scraper Advice
Hi all, I know we all deal with scraped-content issues, and I have one I could use advice on. I found a site that is posting our blog content on their site verbatim, including the links I added to the posts (which is good), and they mention our blog home page in a right sidebar beside the content (also good). However, they aren't linking to the specific posts from their copied versions anywhere, and their pages canonical back to their own versions, not ours. It's not a very spammy site, and it has decent domain authority (though significantly lower than our own). After discovering it, I did a long-tail search related to one of the posts and found their version was outranking the original. I know I can report this via Webmaster Tools. I wanted to get your opinion on whether asking them to add a link back to the original post on our site might be sufficient, or do I need to ask for that plus a canonical tag update? I know getting both is ideal, but the links and relationship could be valuable, so I want to leave this particular bridge intact if I can. I'm just trying to decide whether to take an "either/or" approach when I mention those two action items, or whether I need to be a little firmer and ask them to do both, potentially risking a future outlet for content. Thanks, Andrew
Technical SEO | SafeNet_Interactive_Marketing
Determining where duplicate content comes from...
I am getting duplicate content warnings on the SEOmoz crawl, but I don't know where the content is duplicated. Is there a site that will find duplicate content?
Technical SEO | JML1179
Affiliate urls and duplicate content
Hi, what is the best way to get around having an affiliate program and the affiliate links on your site showing as duplicate content?
Technical SEO | Memoz
Advice on strange URL problem
I'm considering doing some pro bono work for a local non-profit, and upon initial review they have a number of serious issues, but there is one in particular I'd like to check my thinking on. The developer who set up the site some years ago implemented a JavaScript redirect on their root domain so that it redirects to: http://domain.com/wordpress This is wrong for all kinds of reasons, and I want to recommend eliminating this redirect and getting rid of the 'wordpress' part of the path altogether. However, the site is quite established with good PR, and they would take a hit by changing the path. I'd do 301 redirects to the new URLs without 'wordpress' in the path, in addition to other remediation. My question: is my thinking here good? It's worth it, right? The other option is to just get rid of the weird redirect and keep 'wordpress' in the path, but this seems unacceptable to me. Any opinions?
Technical SEO | friendlymachine
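The 301 remediation described above could be sketched like this, assuming the site runs on Apache with mod_rewrite enabled (nginx has an equivalent rewrite directive); it maps every /wordpress/ URL to the same path at the root:

```apache
# Hypothetical .htaccess at the web root: permanently redirect
# /wordpress/anything to /anything, preserving the rest of the path.
RewriteEngine On
RewriteRule ^wordpress/(.*)$ /$1 [R=301,L]
```

The WordPress Address and Site Address settings would also need updating so that WordPress itself stops generating URLs under /wordpress/.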