Extension-less URLs to extensions and vice versa - does it affect PA?
-
Quick question: Will adding an extension such as .html or .php to a URL affect the Page Authority?
Long explanation:
My site is built in Drupal and has rewrite rules in place to map URLs with the .php extension to extension-less URLs. For example, the real URL for one of my pages is http://www.trueresults.com/index.php?q=get-started. Because of the rewrite rule, it is served as http://www.trueresults.com/get-started by Drupal.
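For reference, the rewrite in question looks roughly like Drupal's stock .htaccess clean-URL rule (a simplified sketch; the actual configuration may differ):

```apache
# Simplified sketch of Drupal's clean-URL rewrite in .htaccess:
# if the request is not a real file or directory, hand the path to index.php as ?q=
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
```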
If I wanted to keep the URL the same but add an extension to the end (i.e. ".html"), would that affect my Page Authority? Would Google consider this an entirely new URL?
The reasoning behind this is that I am working on setting up some goals and events in my analytics, and it seems to require URLs with an extension; it's not accepting my "extension-less" URLs.
thanks!
-
Thanks for the feedback. Based on the responses I've gotten, I'm going to leave them as-is (extension-less). Regarding the extensions being needed to work with my Goals setup in Google Analytics: I was having trouble getting it to work and speculated that it was due to the missing extension, but now that I know that doesn't matter, I can figure out what the "real" issue is. Thanks!
-
Extension-less URLs are a good idea because they help with unavoidable future platform transitions. If you switch server technologies, you might otherwise end up with pages ending in .jsp or .html or something else.
Search engines generally consider every character when determining whether URLs are separate. This goes for adding .html and even capital letters. Therefore, in the eyes of search engines, http://www.trueresults.com/get-started and http://www.trueresults.com/get-started.html are two different URLs.
But to really answer your question, adding an extension should have little to no effect on page authority. Search engines are more concerned with your page's content, relevancy, popularity, authority, etc.
-
To answer your question: if you use appropriate 301 redirects, this should not affect your PA.
However, you should not need to do this to facilitate your analytics. Why won't it work without an extension?
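If you ever did move to .html URLs, a rule along these lines (an Apache mod_rewrite sketch, assuming every page gains a .html counterpart; adjust to your setup) would 301 the old extension-less URLs to the new ones and preserve authority:

```apache
# Sketch: permanently redirect old extension-less URLs to assumed .html versions.
# Skips requests that already point at a real file or directory.
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^.]+)$ /$1.html [R=301,L]
```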
Related Questions
-
WWW used in research URL, or not to WWW
Long-time user, infrequent poster... thanks for taking my question. When I go to gather a series of data elements on a company's URL, the data changes (sometimes dramatically) depending on whether the "www." is added to the URL, and it seems related more to Page data than Domain data. My question is about which data I should be using to assess the real strength of the site/page. Is there a best practice here, is it personal preference, or is there an actual difference in the performance of the www vs the non-www version?
Moz Pro | SWGroves
-
Can using url builder for campaign tracking impact link equity?
We have used the URL Builder tools for building custom links that are placed on our referrer websites, mainly for campaign tracking in Google Analytics. But when you use a shortened link on another website, how does that impact the link juice or equity? Is there any negative impact on the link rankings? Or should you provide the specific landing-page URL to the company that will be posting a link to your site?
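For context, the URL Builder just appends utm_* query parameters to the landing-page URL; a quick sketch (the campaign values here are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical campaign values; the URL Builder appends these as utm_* query parameters
params = {
    "utm_source": "partner-site",
    "utm_medium": "referral",
    "utm_campaign": "spring-launch",
}
tagged_url = "http://www.example.com/landing-page?" + urlencode(params)
print(tagged_url)
# → http://www.example.com/landing-page?utm_source=partner-site&utm_medium=referral&utm_campaign=spring-launch
```

The utm_* parameters only affect Analytics attribution; the link still resolves to the same page.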
Moz Pro | CSobus
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups (I mean between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
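As a sanity check on the Disallow pattern discussed in the question above, here is a rough sketch of Google-style wildcard matching ('*' matches any run of characters, and a rule is a prefix match); real crawlers may implement this differently:

```python
from fnmatch import fnmatch

def is_blocked(path, disallow_pattern):
    """Rough approximation of Google-style Disallow matching:
    '*' matches any run of characters, and the rule is a prefix match."""
    return fnmatch(path, disallow_pattern + "*")

# A rule like /*numberOfStars=0 blocks only URLs carrying that parameter value
print(is_blocked("/products?numberOfStars=0", "/*numberOfStars=0"))  # True
print(is_blocked("/products?numberOfStars=4", "/*numberOfStars=0"))  # False
print(is_blocked("/about", "/*numberOfStars=0"))                     # False
```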
How long for authority to transfer from an old page to a new page via a 301 redirect? (& Moz PA score update?)
Hi, approximately how long does Google take to pass authority via a 301 from an old page to its new replacement page? Does Moz Page Authority reflect this in its score once Google has passed it? All the best,
Dan
Moz Pro | Dan-Lawrence
-
Does SEOmoz recognize duplicated URLs blocked in robots.txt?
Hi there, just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there. They are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file:

User-agent: *
Disallow: /components/

My question is: will this kind of duplicated URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
Moz Pro | Viada
-
I don't get what a dynamic URL is?
I have a whole bunch of them and I have no idea how I created them. I just make titles, that's it. Nothin' fancy.
Moz Pro | annasus
-
Does anyone know what the %5C at the end of a URL is?
I've just had a look at the crawl diagnostics, and my site comes up with duplicate page content and duplicate titles. I noticed that the URLs all have %5C at the end, which I've never seen before. Does anybody know what that means?
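For what it's worth, %5C is the percent-encoding of the backslash character ("\"), which a quick sketch confirms:

```python
from urllib.parse import quote, unquote

# %5C is the percent-encoding of the backslash character
print(unquote("%5C"))   # prints a single backslash: \
print(quote("\\"))      # prints: %5C
```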
Moz Pro | Greg80
-
Why would PA be 1 (0 links from 0 root domains) if it's linked to internally?
The question just about says it all: I've seen a number of pages on sites that have a PA of 1 (with the metrics showing 0 links from 0 root domains) even though I can see on the site that they are linked to internally - from the main nav (which is CSS, not JavaScript) and also from the footer, if not other places. Why would this be? Update: upon looking further at the site, it appears that there's some kind of redirect going on, where the page linked to from the nav actually redirects to the real page. Would that eliminate PA, even if it's a 301? And additionally, is whatever is causing this lack of PA a reflection of how Google would relate to the page? Thanks, Aviva
Moz Pro | debi_zyx