URL shows up in "inurl:" but not when using time parameters
-
Hey everybody,
I have been testing Google's inurl: operator to try to gauge how long ago Google indexed our page, which brings me to my question.
If we run inurl:https://mysite.com all of our domains show up.
If we run inurl:https://mysite.com/specialpage the domain shows up as being indexed
If I append the "&as_qdr=y15" parameter to the search URL, https://mysite.com/specialpage does not show up.
Does anybody have any experience with this? On the same note, when I look at how many pages Google has indexed, it is about half the number of pages we see in our backend/sitemap. Any thoughts would be appreciated.
TY!
-
There are several ways to do this, some more accurate than others. If you have access to the site containing the page in Google Analytics, you could filter your view down to a single page / landing page and see when the specified page first got traffic (sessions / users). Note that if a page existed for a long time before it saw much usage, this wouldn't be very accurate.
If it's a WordPress site that you have access to, edit the page and check the published date and / or revision history. If it's a post of some kind, it may display its publishing date on the front-end without you even having to log in. Note that if content has been migrated from a previous WordPress site and the publishing dates have not been updated, this may not be wholly accurate either.
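If the WordPress site exposes the default REST API routes (enabled by default since WordPress 4.7, though some sites disable them), you can often read a post's published date without logging in at all. A minimal sketch; the domain and slug below are hypothetical placeholders:

```python
import json
import urllib.request

def extract_date(posts):
    """Return the 'date' field of the first post in a REST API response, or None."""
    return posts[0]["date"] if posts else None

def wp_published_date(site, slug):
    """Query the default WordPress REST API for a post's published date.

    Only works if /wp-json/ routes are exposed; returns None if no post
    matches the slug.
    """
    url = f"{site}/wp-json/wp/v2/posts?slug={slug}"
    with urllib.request.urlopen(url) as resp:
        return extract_date(json.load(resp))

# Hypothetical usage:
# wp_published_date("https://mysite.com", "specialpage")
```

Remember the same caveat as above: migrated content can carry a publish date that says nothing about when the page first existed at its current URL.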
You can see when the Wayback Machine first archived the specified URL. The Wayback Machine uses a crawler that is always discovering new pages, not necessarily on the date(s) they were created, so this method can't be trusted 100% either.
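The Wayback Machine's CDX API makes this lookup scriptable: its JSON output is a header row followed by data rows in oldest-first order, so asking for one row gives you the earliest capture. A sketch, assuming the endpoint's default sort order:

```python
import json
import urllib.request

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def parse_first_capture(rows):
    """CDX JSON output: a header row, then data rows sorted oldest first.

    Returns the timestamp (YYYYMMDDhhmmss) of the first data row,
    or None if the URL has never been captured.
    """
    if len(rows) < 2:
        return None
    header, first = rows[0], rows[1]
    return dict(zip(header, first))["timestamp"]

def earliest_capture(url):
    """Ask the CDX API for the single earliest capture of a URL."""
    query = f"{CDX_ENDPOINT}?url={url}&output=json&limit=1"
    with urllib.request.urlopen(query) as resp:
        return parse_first_capture(json.load(resp))

# Hypothetical usage:
# earliest_capture("mysite.com/specialpage")
```

As noted above, the first capture date only tells you when the crawler found the page, not when the page was created.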
In reality, even combining the "inurl:" operator with the "&as_qdr=y15" parameter will only tell you when Google first saw a page, not how old the page is. Web pages do not record their age in their code, so if you want to be 100% accurate, your quest is impossible.
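One partial exception worth checking: many CMS-generated pages include an Open Graph `article:published_time` meta tag. It is only a hint (plenty of pages omit it, and nothing stops a site from back-dating it), but when present it is trivial to extract. A minimal sketch using the standard library:

```python
from html.parser import HTMLParser

class PublishedTimeParser(HTMLParser):
    """Pull the article:published_time meta tag from an HTML document.

    Treat the result as a hint only: the tag is optional and the
    publisher controls its value.
    """
    def __init__(self):
        super().__init__()
        self.published = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("property") == "article:published_time":
                self.published = d.get("content")

def published_time(html):
    parser = PublishedTimeParser()
    parser.feed(html)
    return parser.published
```

Cross-referencing this tag against the Wayback Machine's first capture is about as close as you can get to triangulating a page's age from the outside.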
-
So, then I will pose a different question to you. How would you determine the age of a page?
-
Oh ty! I'll try that out!
-
Not sure on the date / time querying aspect, but instead of using "inurl:https://mysite.com" you might have better luck checking indexation via "site:mysite.com" (don't include subdomains, www, or the protocol like HTTP / HTTPS).
Then be sure to tell Google to 'include' omitted results (if that notification shows up; sometimes it does, sometimes it doesn't!)
You can also use Google Search Console to check indexed pages:
- https://d.pr/i/oKcHzS.png (screenshot)
- https://d.pr/i/qvKhPa.png (screenshot)
You can only see the top 1,000, but it does give you a count of all the indexed pages. I am pretty sure you could get more than 1k pages out of it if you used the filter function repeatedly (pulling fewer than 1k URLs from each site area at a time).
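On the "indexed count is about half our sitemap" point, it helps to get an exact sitemap count to compare against Search Console's number. A sketch that counts `<loc>` entries in a standard urlset sitemap; the sitemap URL is a hypothetical placeholder:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Namespace defined by the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def count_sitemap_urls(xml_text):
    """Count <loc> entries in a <urlset> sitemap.

    For a sitemap index (<sitemapindex> root), you would fetch each
    child sitemap and sum the counts instead.
    """
    root = ET.fromstring(xml_text)
    return len(root.findall(f".//{SITEMAP_NS}loc"))

def fetch_and_count(sitemap_url):
    with urllib.request.urlopen(sitemap_url) as resp:
        return count_sitemap_urls(resp.read())

# Hypothetical usage:
# fetch_and_count("https://mysite.com/sitemap.xml")
```

If the sitemap count is solid, a large gap usually means Google has crawled the pages but chosen not to index some of them (thin, duplicate, or canonicalized elsewhere), which Search Console's coverage reports break down for you.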