Index Problem
-
Hi guys,
I have a critical problem with the Google crawler.
This is my website: https://1stquest.com
I can't create a sitemap with online sitemap creator tools such as XML-sitemap.org.
The Fetch as Google tool usually marks pages as Partial.
The Moz crawl test found both HTTP and HTTPS versions of the site!
And Google can't index several pages on the site.
Is the problem related to "unsafe URLs", or something else?
-
Hi Peter,
Just curious: did you have to do anything specific on the SEO side of things? I am working with a developer who assures me that Angular will not impact SEO, but looking at their past efforts I am not so sure.
-
Hello all,
Thought I'd drop in my $0.02 about Google and AngularJS. We switched over to it two weeks ago at FixedPriceCarService.com.au - existing pages were fine, with no transition issues, and the same goes for all the new pages we added. All were indexed quite quickly by Google.
Google Search Console has no issue with finding things like Page Title and Meta Description that have their home now in the DOM.
I'm curious about when Moz might also be able to crawl the DOM and stop reporting page titles as missing when they're not truly missing, just relocated.
Scott - Fixed Price Car Service
-
Hi Hamid!
Did Peter or Martijn answer your question? If so, please mark one or both as a Good Answer.
If not, what are you still looking for?
-
Peter is indeed correct: it doesn't seem you have to worry about unsafe URLs, but rather about the technical way that your site is built. AngularJS is relatively new, and as it's JS-based it's rather hard for Google to retrieve all the content on these pages in order to 'get' the whole page. His suggestion is also spot on and will make it easier for search engines to crawl, read, and index your site.
-
If you can't create an XML sitemap with online tools, you can build a sitemap plugin for your site with your devs. Google can't render your website properly because of the technology it's built with: I've seen that your site uses AngularJS, so the initial HTML it serves is tiny.
However, there is a solution for you described here:
https://www.distilled.net/resources/prerender-and-you-a-case-study-in-ajax-crawlability/
So if you provide fully rendered HTML to search engine bots, your site can finally be indexed properly.
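To make that concrete, here is a minimal sketch of the prerendering idea for a Node/Express server sitting in front of an AngularJS app. The bot list, the PRERENDER_URL endpoint, and the port are illustrative assumptions, not details from this thread or from 1stquest.com:

```typescript
// Minimal sketch (assumed setup: an Express server in front of the AngularJS
// app, plus a separately running prerender service reachable at PRERENDER_URL;
// these names and the bot list are placeholders, not details from this thread).
import express from "express";

const app = express();
const PRERENDER_URL = "http://localhost:3000/render"; // hypothetical prerender endpoint

// Crawlers that should receive fully rendered HTML instead of the empty JS shell.
const BOT_PATTERN = /googlebot|bingbot|yandexbot|baiduspider|rogerbot/i;

app.use(async (req, res, next) => {
  const userAgent = req.headers["user-agent"] ?? "";
  const isEscapedFragment = "_escaped_fragment_" in req.query;

  // Normal visitors keep getting the regular client-rendered AngularJS app.
  if (!BOT_PATTERN.test(userAgent) && !isEscapedFragment) {
    return next();
  }

  try {
    // Ask the prerender service for a fully rendered snapshot of this URL.
    const pageUrl = `${req.protocol}://${req.get("host")}${req.originalUrl}`;
    const snapshot = await fetch(`${PRERENDER_URL}?url=${encodeURIComponent(pageUrl)}`);
    res.status(snapshot.status).send(await snapshot.text());
  } catch {
    next(); // if prerendering fails, fall back to serving the normal app
  }
});

// ...static files and the catch-all route for the AngularJS app would go here.

app.listen(8080);
```

Regular visitors still get the client-rendered app; only known crawlers (or requests using the old _escaped_fragment_ convention) are handed the pre-rendered snapshot, which is what lets them see the full content and meta tags.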
Related Questions
-
Why is a canonicalized URL still in the index?
Hi Mozers, We recently canonicalized a few thousand URLs but when I search for these pages using the site: operator I can see that they are all still in Google's index. Why is that? Is it reasonable to expect that they would be taken out of the index? Or should we only expect that they won't rank as high as the canonical URLs? Thanks!
Intermediate & Advanced SEO | yaelslater0 -
Trying to find example of in app indexing in SERPs
My colleague, who is a developer, is trying to find an example of apps being indexed in the SERPs. Does anybody know of any examples? Thanks!
Intermediate & Advanced SEO | RosemaryB0 -
Can too many "noindex" pages compared to "index" pages be a problem?
Hello, I have a question for you: our website virtualsheetmusic.com includes thousands of product pages, and due to Panda penalties in the past, we have no-indexed most of the product pages hoping for a sort of recovery (not yet seen, though!). So, currently we have about 4,000 "index" pages compared to about 80,000 "noindex" pages. Now, we plan to add an additional 100,000 new product pages from a new publisher to offer our customers more music choices, and these new pages will still be marked as "noindex, follow". At the end of the integration process, we will end up with something like 180,000 "noindex, follow" pages compared to about 4,000 "index, follow" pages. Here is my question: can this huge discrepancy between 180,000 "noindex" pages and 4,000 "index" pages be a problem? Can this kind of scenario have or cause any negative effect on our current natural SE profile? Or is this something that doesn't actually matter? Any thoughts on this issue are very welcome. Thank you! Fabrizio
Intermediate & Advanced SEO | fablau0 -
Does Unnatural Links penalization cause de-indexation?
Hi All, One of my sites was hit with a manual penalty for unnatural links. It's been over two months since it was revoked and we see no changes at all. In fact, we still have a couple of pages (important landing pages) that are still de-indexed (I checked by searching, in quotes, for a whole sentence from the page and got no results). Does this mean that even though the site's penalty was revoked it is not completely over yet and I just need to be patient, or is there something else hovering over the website? Thanks
Intermediate & Advanced SEO | BeytzNet0 -
Keep older blog content indexed or no?
Our really old blog content still sees traffic, but engagement metrics aren't the best (little time on site), and as a result, traffic has gradually started to decrease. Should we de-index it?
Intermediate & Advanced SEO | nicole.healthline0 -
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off of Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees this URL www.example.com/#!page-name-here, it basically renders this page www.example.com/# while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully, that makes sense. So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you essentially are taken to our homepage content (even though the URL isn't exactly the canonical homepage URL...which s/b www.example.com/). My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (ie. title, meta descrip, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And, I've recently seen the homepage drop like a rock for a search of our brand name which has ranked #1 for months. Now, admittedly we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me. I think it could be a major cause of our homepage tanking for brand queries. So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs because the # seems to screw everything up (server doesn't acknowledge what comes after the #). I "think" our only option here is to try and add some 301 redirects via Javascript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship w/ Javascript, but I think that's our only resort.....unless, someone here has a better way? If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal w/ this issue. Best, -G
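To illustrate the JavaScript-redirect idea mentioned at the end of that question, here is a small sketch of a script that could be loaded early on the homepage to forward old hashbang URLs to the new clean ones. It assumes the text after "#!" matches the new slug, which isn't confirmed above:

```typescript
// Rough sketch: client-side redirect from legacy hashbang URLs to clean URLs.
// Assumption (not confirmed in the question): /#!page-name-here maps to /page-name-here.
const hash = window.location.hash;

if (hash.startsWith("#!")) {
  const newPath = "/" + hash.slice(2).replace(/^\/+/, ""); // drop "#!" and any leading slash
  // replace() keeps the old hashbang URL out of the browser history.
  // Note this is not a true server-side 301; it only takes effect once the page's JS runs.
  window.location.replace(newPath + window.location.search);
}
```

Googlebot does execute JavaScript when rendering pages, so redirects like this tend to get picked up eventually, but they are slower and less reliable than server-side 301s, so treating them as a stopgap rather than the whole fix seems wise.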
Intermediate & Advanced SEO | Celts180 -
Problem of indexing
Hello, sorry, I'm French and my English is not necessarily correct. I have an indexing problem in Google: only the home page is indexed (http://bit.ly/yKP4nD). I have been looking into it for several days but I do not understand why. I have checked that the robots.txt file is OK; that the sitemap, although it is in ASP, validates with Google; that there is no spam and no hidden text; and that we do not have noindex anywhere. I also made a reconsideration request via Google Webmaster Tools and there are no penalties. So I'm stuck and I'd like your opinion. Thank you very much. A.
Intermediate & Advanced SEO | android_lyon0 -
How long does a Google penalty last if you have fixed the problem?
Hi, I stupidly thought that it would be a good idea to set up a reciprocal links page on my website named 'links'. I did this because my competitors were linking to these pages, so I thought it would be a good idea, and I genuinely didn't know that you could be punished for this. Within about 3 weeks my rank dropped about 3 pages. I have since removed the links and the page was cached last Friday, but the site still appears to have a penalty. I assumed that when Google cached the page and saw the links were no longer there, the penalty would be lifted. Anyone got any ideas? P.S. The competitor websites had broken their links pages into various categories relating to the website, i.e. related directories etc., so this might be why they weren't penalized.
Intermediate & Advanced SEO | BelfastSEO0