What needs to be done to tell Google my site has moved/changed?
-
Hi everyone,
I have a site, which I have re-built on a temporary domain, so that my main ecommerce site can still run.. I have noticed that google has already crawled my temporary domain. The only problem is I now want to transfer the new site back onto its proper domain (www.ourbrand.com). I have changed some of the URL structures of the new site so realize I will need to do re-directs relating to the same domain, but will google get confused that another domain used to have my new website on?
I don't plan on using the old temporary domain again, and I wondered if I need to tell Google in some way that it was only used to build my site on.
Michelle
-
Hi Michelle!
Here are the steps for telling Google that your site has moved:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=83106
Just follow the steps there and you'll be fine.
Cheers!
-
Once you do the transfer, block the old temporary domain with robots.txt, or 301 redirect it to the live domain so any signals it has picked up carry over.
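If you go the redirect route, a domain-level 301 on the old temporary domain is the usual approach. A minimal sketch for Apache with mod_rewrite, using a hypothetical temporary hostname (temp-ourbrand.com is an assumption, not the poster's actual domain):

```apache
# .htaccess on the temporary domain (hypothetical names):
# permanently redirect every URL to the same path on the live domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?temp-ourbrand\.com$ [NC]
RewriteRule ^(.*)$ http://www.ourbrand.com/$1 [R=301,L]
```

Note that if you also block the temporary domain in robots.txt, Google can't crawl it to see the redirects, so pick one approach or the other.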
Related Questions
-
Will Google Also Penalize Desktop Rankings If Your Site is Not Mobile Friendly?
Apologies if this question has already been answered. I was unable to find it. For desktop organic rankings: Will Google take into consideration mobile-readiness as a ranking factor? Thanks in advance for any reply, Kind regards,
Technical SEO | | Eric_Lifescript
Eric Darby1 -
Meta keywords shown in Google SERPs as site description
I'm seeing Google display meta keywords in the SERP description for some sites (at least a half dozen that I've checked).

I believe it is an AJAX issue because: the sites all use AJAX to display content, so the meta keywords sit in the header alongside the JavaScript that renders the content; the non-AJAX parts of each site display properly in Google SERPs; and the meta keywords don't visibly appear anywhere on the page (when I turn off images and JavaScript in Chrome, I don't see any hidden keyword text).

I believe it is a Google-specific issue because: each site displays properly in Bing and Yahoo SERPs, where the meta description is the description (though, as expected, I see the same strange meta-keyword activity in AOL search); and in Screaming Frog's SERP preview I see the meta description as the description.

Google has been ignoring meta keywords for years, so why are they appearing in the SERPs for these AJAX-powered sites? I found one other person who saw that Google may be reading and displaying their AJAX content even though that content is meant to appear on a different "page". No one on that Google forum seemed to understand the person's problem; the only reason I get it is because now I'm seeing it with my own eyes. I know the Moz community can do better, so I'm posting about it here.
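A quick way to sanity-check what is actually in the raw HTML (as opposed to the AJAX-rendered DOM) is to fetch the page source and list its meta tags. A minimal sketch using Python's standard library, run here against a hypothetical saved page source rather than a live fetch:

```python
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collect name/content pairs from <meta> tags in raw (pre-JavaScript) HTML."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs:
                self.meta[attrs["name"].lower()] = attrs.get("content", "")

# Hypothetical source of an AJAX-driven page: the only crawlable text is in
# the <head>, which may be all a crawler sees before JavaScript runs.
raw_html = """<html><head>
<meta name="description" content="Acme widgets - the widget specialists.">
<meta name="keywords" content="widgets, acme, buy widgets">
</head><body><div id="app"></div><script src="app.js"></script></body></html>"""

collector = MetaTagCollector()
collector.feed(raw_html)
print(collector.meta["keywords"])
print(collector.meta["description"])
```

If the keywords show up here but nowhere in the rendered body, that supports the theory that Google is falling back on head content when it can't extract body text.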
Technical SEO | | AlexCobb0 -
Do I need to verify my site in Webmaster Tools both with and without the "www." at the start?
As per the title, is it necessary to verify a site in Webmaster Tools twice, with and without the "www"? I only ask because I'm about to submit a disavow request, and have just read this: "NB: Make sure you verify both the http://website.com and http://www.website.com versions of your site and submit the links disavow file for each. Google has said that they view these as completely different sites, so it's important not to forget this step." (here) Is there anything in this? It strikes me as more than a bit odd that you need to submit a site twice.
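Verifying both versions is cheap insurance either way, but many site owners also pick one canonical hostname and 301 the other to it, so the two "sites" consolidate over time. A minimal Apache/mod_rewrite sketch, using the hypothetical website.com from the quote above:

```apache
# .htaccess sketch (hypothetical domain): 301 the bare domain to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^website\.com$ [NC]
RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]
```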
Technical SEO | | mgane0 -
How to Remove /feed URLs from Google's Index
Hey everyone, I have an issue with RSS /feed URLs being indexed by Google for some of our WordPress sites. Have a look at this Google query, and click to show omitted search results: you'll see we have 500+ /feed URLs indexed by Google for our many category pages, etc. Here is one example URL: http://www.howdesign.com/design-creativity/fonts-typography/letterforms/attachment/gilhelveticatrade/feed/. Based on the content/code of the XML page, it looks like WordPress is generating these: <generator>http://wordpress.org/?v=3.5.2</generator>

Any idea how to get them out of Google's index without 301 redirecting them? We need the WordPress-generated RSS feeds to work for various uses. My first two thoughts: work with our development team to see if we can get a "noindex" meta robots tag on the pages, but they are dynamically generated, so I'm not sure that will be possible; or perhaps add a "feed" parameter to the GWT "URL Parameters" section, but I don't want to stop Google from crawling these again. I figure I need Google to crawl them, see some code that says to drop the pages from its index, and THEN stop crawling them. I don't think the "Remove URL" feature in GWT will work, since that tool only removes URLs from the search results, not the actual Google index.

FWIW, this site is using the Yoast plugin. We set every page type to "noindex" except for the homepage, Posts, Pages and Categories. We have other sites on Yoast that do not have any /feed URLs indexed by Google at all. Side note: the /robots.txt file was previously blocking crawling of the /feed URLs on this site, which is why you'll see that note in the Google SERPs when you click on the query link given in the first paragraph.
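One way to get a noindex onto dynamically generated feed URLs without touching the page templates is an X-Robots-Tag response header set at the server level. A sketch for Apache (assumes mod_setenvif and mod_headers are available; a hypothetical config, so test on staging first). Google has to be able to crawl the URLs to see the header, so robots.txt must not block them:

```apache
# Send "X-Robots-Tag: noindex" on any URL ending in /feed or /feed/ so Google
# drops the feeds from its index while the feeds keep working for subscribers.
<IfModule mod_setenvif.c>
  SetEnvIf Request_URI "/feed/?$" IS_FEED
</IfModule>
<IfModule mod_headers.c>
  Header set X-Robots-Tag "noindex" env=IS_FEED
</IfModule>
```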
Technical SEO | | M_D_Golden_Peak0 -
Why hasn't my site's link data on opensiteexplorer.org changed in weeks?
Why hasn't my site's link data on opensiteexplorer.org changed in weeks, even though I've been link building like crazy?
Technical SEO | | AccountKiller0 -
How can I tell Google that a page has not changed?
Hello, we have a website with many thousands of pages. Some of them change frequently, some never. Our problem is that Googlebot is generating way too much traffic: half of our page views are generated by Googlebot. We would like to tell Googlebot to stop crawling pages that never change. This one, for instance: http://www.prinz.de/party/partybilder/bilder-party-pics,412598,9545978-1,VnPartypics.html. As you can see, there is almost no content on the page and the picture will never change, so I am wondering if it makes sense to tell Google that there is no need to come back. The following header fields might be relevant. Currently our webserver answers with these headers:

Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0, public
Pragma: no-cache
Expires: Thu, 19 Nov 1981 08:52:00 GMT

Does Google honor these fields? Should we remove no-cache, must-revalidate, and Pragma: no-cache, and set Expires to, e.g., 30 days in the future? I also read that a webpage that has not changed should answer with 304 instead of 200. Does it make sense to implement that? Unfortunately it would be quite hard for us. Maybe Google would then spend more time on pages that actually changed, instead of wasting it on unchanged pages. Do you have any other suggestions for how we can reduce Googlebot's traffic on irrelevant pages? Thanks for your help, Cord
Technical SEO | | bimp0
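For the 304 idea, the mechanics are small even if wiring them into a large CMS isn't: send a Last-Modified header, and when the client (or Googlebot) repeats the request with If-Modified-Since, answer 304 with no body. A minimal sketch of that decision logic in Python (a hypothetical helper, not tied to any framework or to the poster's stack):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime, format_datetime
from typing import Optional

def respond(last_modified: datetime, if_modified_since: Optional[str]):
    """Return (status, headers) for a conditional GET.

    last_modified: when the page content last changed (UTC, tz-aware).
    if_modified_since: raw If-Modified-Since request header, if any.
    """
    headers = {"Last-Modified": format_datetime(last_modified, usegmt=True)}
    if if_modified_since:
        try:
            client_time = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            client_time = None
        # HTTP dates have one-second resolution, so compare at that granularity.
        if client_time and last_modified.replace(microsecond=0) <= client_time:
            return 304, headers  # unchanged: empty body, the crawler saves bandwidth
    return 200, headers          # changed (or unconditional request): full page

changed = datetime(2012, 1, 1, tzinfo=timezone.utc)
status, _ = respond(changed, "Sun, 01 Jan 2012 00:00:00 GMT")
print(status)  # 304: not modified since the client's copy
```

Whether Googlebot actually sends If-Modified-Since on every recrawl is up to Google, so treat 304 support as a bandwidth optimization rather than a guaranteed crawl-rate control; the Expires/Cache-Control cleanup the poster suggests is worth doing regardless.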
Google Confusion: Two Sites and a 301 Redirect.
Hi, we have a client who just sprang a new project on us. As always, they went ahead and did some stuff before bringing us into the loop! (Oh, the joy of providing SEO services!) Anyway, I'm pretty swamped right now and need some extra brains on this.

Basically, the client had www.examplesiteA.com online for many years (an affiliate site which had built up a strong brand in the industry). They have now decided to turn this affiliate site into a full-blown service platform, so with the new site being built they 301'd the whole thing over to www.examplesiteB.com, which is where they want all the old affiliate content to be hosted. So essentially examplesiteA.com is now examplesiteB.com, and a new site is being placed on examplesiteA.com. Still with me?

So this has all happened: a brand new website is on examplesiteA.com, and the old examplesiteA is now sitting exactly as it used to, but on the examplesiteB domain. The 301 redirect has been removed, and the new examplesiteA seems to have been crawled, but the homepage is not indexed. When you search for examplesiteA, examplesiteB is the top result.

Now, they are similar domain names, and to be fair I have very little data at this point, i.e. I don't know when the 301 redirect was removed, and it may be that this all fixes itself with time. How is link equity affected now that examplesiteA.com was 301 redirected to examplesiteB.com and cached that way, but the 301 redirect has since been removed and no longer exists? Would link juice have been diluted throughout the process?

Obviously, if we had been in on all this before anything was implemented, we would have done things differently. I'm interested to hear what others would do coming in at this point. Thanks, and I look forward to the advice!
Technical SEO | | MarcLevy0