Metadata and duplicate content issues
-
Hi there: I'm seeing a steady decline in organic traffic, but at the same time an increase in pageviews and direct traffic. My site has about 3,000 crawl errors! The errors are duplicate content, missing description tags, and descriptions that are too long. Most of these issues are related to events that are being imported from Google Calendars via iCal and the pages created from those events. Should we block calendar events from being crawled by using the disallow directive in the robots.txt file? Here's the site: https://www.landmarkschool.org/
-
Yes, of course you can keep running the calendar.
But keep in mind that some pages can still appear in search results even after you have removed those URLs.
You can watch this video, in which Matt Cutts explains why a page that is disallowed in robots.txt may still appear in Google's search results. In that case, just to make sure, you can implement a 301 redirect.
This is going to be your second line of defense: just redirect all of those URLs to your home page.
There are many options for setting up a redirect. In my case I'm a WordPress user, so with a simple plugin I can solve the problem in five minutes. I had a look at your website, but I couldn't tell which CMS you are using.
Either way, you can use the 301 Redirect Code Generator app, which offers many options:
PHP, JS, ASP, ASP.NET, and of course Apache (.htaccess). Now is the right moment to use the list that I mentioned in my first answer (step 2: create a list of all the URLs that you want to disable).
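For example, if your site runs on Apache, a minimal .htaccess sketch could look like the one below. The /events/ pattern is only an assumed placeholder; swap in the real paths from your list of URLs.

# Permanently (301) redirect the retired event URLs to the home page.
# NOTE: /events/ is an assumed placeholder path; replace it with the paths from your list.
<IfModule mod_alias.c>
  RedirectMatch 301 ^/events/ https://www.landmarkschool.org/
</IfModule>

After adding it, request a couple of the old event URLs and confirm they return a 301 to the home page.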
So let's talk about your second question.
Of course it will hurt your rankings. If you have 3,020 pages indexed in Google but only 20 of those pages are useful to users, you have a big problem. A website should address any question or concern that a current or potential customer or client may have; if it doesn't, the website is essentially useless.
With a simple division, 20 / 3,020 ≈ 0.0066, so less than 1% of your site is useful. I'm pretty sure that your rankings have been affected.
Don't forget to mark my answer as a "Good Answer"; that will make me happy. Good luck!
-
Hi Roman: Thanks so much for your prompt reply. I agree that using robots.txt is the way to go. I do not want to disable the Google Calendar sync (we're a school and need our events to feed from several Google Calendars). I want to confirm that the robots.txt option will still work if the calendars are still syncing with the site.
One more question--do you think that all these errors are causing the dip in organic traffic?
-
SOLUTION
1 - You have to disable the Google Calendar sync with your website.
2 - Create a list of all the URLs that you want to disable.
3 - At this point you have multiple options to block the URLs that you want to exclude from search engines. So first, let's define your problem.
By blocking a URL on your site, you can stop Google from indexing that web page for display in Google Search results. In other words, people looking through Google Search results can't see or navigate to a blocked URL or its content.
If you have pages or other content that you don't want to appear in Google Search results, you can do this using a number of options:
- robots.txt files (Best Option)
- meta tags
- password-protection of web server files
In your case, option 2 would take a lot of time. Why? Because you would have to manually add the "noindex" meta tag to each page, one by one, which makes no sense here. Option 3 requires some server configuration and, at least for me, is a little complex and time consuming; I would have to research it on Google, watch some videos on YouTube, and see what happens.
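For reference, the tag itself is a single line placed in the <head> of each page you want excluded, which is exactly why adding it by hand to thousands of imported event pages doesn't scale:

<meta name="robots" content="noindex">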
So the first option is the winner for me. Let's see an example of what your robots.txt could look like.
- The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/events/january/" or "/tmp/", or /calendar.html:
<------------------------------START HERE------------------------------>
# robots.txt for https://www.landmarkschool.org/
User-agent: *
Disallow: /events/january/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
Disallow: /calendar.html
<------------------------------END HERE------------------------------>
FOR MORE INFO, SEE THE VIDEO > https://www.youtube.com/watch?v=40hlRN0paks
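One last note: the paths in the example above come from the generic robots.txt documentation, not from your site. If the imported calendar-event pages share a common URL path (an assumption on my part; confirm the real paths in your crawl-error report), your file could be as simple as:

User-agent: *
Disallow: /events/      # assumed path of the imported calendar-event pages
Disallow: /calendar     # assumed path of the calendar listing pages

Remember that Disallow matches by URL prefix, so one line can cover every event page under that path.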