Magento Help - Server Reset
-
Good Morning,
After rebooting a server, a Magento-based website reset itself, reverting to its December 2013 state. All changes to the site and all orders placed up until yesterday (6/19/14) have disappeared.
There are several folders in the root of the server containing files with yesterday's date, but we don't know how to bring everything back and restore from them.
Have any Magento or server experts out there faced this issue before, or do you have any ideas or potential solutions?
Thanks
-
Thanks Prestashop.
I'll look into that and let you know. I appreciate the advice.
-
In a situation like this, my first guess would be that someone changed the document root in Apache but did not restart the server. I am not a Magento person, so you may want to have someone verify what I'm describing, because I don't know exactly where Magento keeps things. What I would do is look at the databases on the server and see if one of them holds the most recent customer information. Then I would look at the settings file Magento uses to specify which database the application connects to, and see whether the two match. If they don't, look around the server for another directory that holds the most recent version of the site.
If someone changed the document root without restarting Apache, the change would not take effect until the server was rebooted, at which point the whole site would suddenly be served from a different directory. One place to have someone check is the Apache logs; the error log in particular includes full filesystem paths, so see whether the paths being referenced changed at the time of the reboot.
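If it helps, here is roughly how I would check those two things from the command line. This is only a sketch: the paths (/etc/apache2 vs /etc/httpd, /var/www/html) and the database name are placeholders you will need to adjust for your server, and it assumes a Magento 1.x install, which keeps its database settings in app/etc/local.xml.

    # Which vhost configs is Apache actually loading?
    apachectl -S
    # Where do those configs point DocumentRoot? (config path varies by distro)
    grep -R "DocumentRoot" /etc/apache2/ /etc/httpd/ 2>/dev/null
    # Which database is this Magento install configured to use?
    grep -A6 "<connection>" /var/www/html/app/etc/local.xml
    # Does that database actually contain the recent orders? (placeholder db name)
    mysql -u root -p -e "SELECT MAX(created_at) FROM sales_flat_order;" your_magento_db

If the database named in local.xml still stops at December 2013 but another database on the box has orders through 6/19, that tells you the copy being served is the stale one.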
-
Yikes! Not much help to you now, but you should really do at least weekly backups of your files and nightly backups of your database. And definitely always take a backup before you do anything major (like a reboot).
You should also go to your host; they might possibly have a backup of your server. Fingers crossed.
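For anyone reading this later, a minimal sketch of what scheduled backups could look like in cron. The paths, database name, and credentials below are placeholders, and ideally the dump user would be a limited MySQL account rather than root.

    # crontab entries (illustrative names and paths)
    # nightly database dump at 2am, compressed and dated
    0 2 * * * mysqldump -u backup_user -p'secret' magento_db | gzip > /backups/db/magento_$(date +\%F).sql.gz
    # weekly archive of the web root at 3am on Sundays
    0 3 * * 0 tar -czf /backups/files/webroot_$(date +\%F).tar.gz /var/www/html

Copying those archives off the server (or at least onto a different disk) matters just as much as creating them.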
-
We have no recent backup; that's the issue. The last backup is from 2013.
-
I agree with Paddy. Did you take a backup before the changes? If so, getting your site back to how it was will be very easy, especially if you are using cPanel. When you do your server reboots, make sure you are using the "graceful" method.
Here is a video explaining this:
https://www.youtube.com/watch?v=PddZ3FwHFZw
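For what it's worth, "graceful" can mean a couple of different things here. If the goal is just to pick up configuration changes without dropping visitors mid-request, a graceful Apache restart is usually enough and avoids a full OS reboot; the exact commands vary by distro, so treat this as a sketch.

    # Graceful Apache restart: finishes serving in-flight requests before reloading config
    apachectl -k graceful        # or: systemctl reload apache2 / systemctl reload httpd
    # If a full OS reboot really is needed, schedule it with a warning rather than rebooting cold
    shutdown -r +5 "Rebooting for maintenance in 5 minutes"
-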
I'm not sure this is the right place to ask about this; you would get a better response on the Magento forums, or you could look at something like Freelancer or PPH and find a Magento expert with a good review history.
Do you have a backup of your database? If so, you should be able to restore it; if not, check whether your hosting company has a backup.
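If a dump does turn up, from the host or anywhere else, restoring it is usually just a matter of loading it back into the database Magento is configured to use. A rough sketch with placeholder names:

    # restore a plain .sql dump
    mysql -u magento_user -p magento_db < backup_2014-06-19.sql
    # or, if the host hands you a gzipped dump
    gunzip -c backup_2014-06-19.sql.gz | mysql -u magento_user -p magento_db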
Related Questions
-
My url disappeared from Google but Search Console shows indexed. This url has been indexed for more than a year. Please help!
Super weird problem that I haven't been able to solve for the last 5 hours. One of my URLs, https://www.dcacar.com/lax-car-service.html, has been indexed for more than a year and also has an AMP version. A few hours ago I realized that it had disappeared from the SERPs. We were ranking on page 1 for several key terms. When I perform a search for "site:dcacar.com" the URL is nowhere to be found in all 5 pages of results, but when I check Google Search Console it shows as indexed. I requested indexing again but nothing changed. All other 50 or so URLs are not affected at all; this is the only URL that has gone missing. Can someone solve this mystery for me please? Thanks a lot in advance.
Intermediate & Advanced SEO | | Davit19850 -
Blank Cart Pages Showing as Duplicate, HELP
Hi Everyone, I'm seeing a bunch of URLs that look something like this: domain.com/cart?add&id_product=42&token=776d4a08721f3d8c920e287248797547 showing up as duplicate content in my Moz crawls. I think these are just blank pages for the most part. Is there anything to be concerned about here? Is there a way to clean this up? Thanks! Ricky
Intermediate & Advanced SEO | | RickyShockley0 -
Help: How to optimize my duplicate category pages
Hi all, My category pages will showcase the same products. How do I go about optimizing these pages so they don't show up as duplicate content? Would appreciate all your feedback! Thanks
Intermediate & Advanced SEO | | edward-may0 -
Robots.txt help
Hi Moz Community, Google is indexing some developer pages from a previous website where I currently work: ddcblog.dev.examplewebsite.com/categories/sub-categories. I was wondering how I include these in a robots.txt file so they no longer appear on Google. Can I do it under our homepage GWT account, or do I have to have a separate account set up for these URL types? As always, your expertise is greatly appreciated, -Reed
Intermediate & Advanced SEO | | IceIcebaby0 -
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're also updated twice per day. We do not want #2, the Vehicle Details pages, indexed, as these pages appear and disappear all the time based on dealer inventory, and they don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query). We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages: Super easy to implement. Conserves crawl budget for large sites. Ensures the crawler doesn't get stuck; after all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages: Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would mean 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex advantages: Does prevent vehicle details pages from being indexed. Allows ALL pages to be crawled (advantage?).
Noindex disadvantages: Difficult to implement (vehicle details pages are served via Ajax, so there is no head section in which to place a meta robots tag; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to this Stack Overflow solution). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). It also forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages; I say "force" because of the crawl budget required, the crawler could get stuck/lost in so many pages, and it may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. Finally, it cannot be used in conjunction with robots.txt; after all, the crawler never reads the noindex tag if it is blocked by robots.txt.
Hash (#) URL advantages: By using hash URLs for the links on Vehicle Listings pages that point to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that were getting robots.txt-disallowed pages indexed are gone. Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?). Does not require complex Apache stuff.
Hash (#) URL disadvantages: Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed; it could easily get stuck/lost, it seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured this way. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | | browndoginteractive0 -
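For readers landing on the question above: the X-Robots-Tag approach it mentions can be done in Apache with mod_rewrite plus mod_headers, roughly as sketched below. The query-string parameter name is made up for illustration; the real matching rule would depend on how the plugin builds its vehicle details URLs.

    # vhost or .htaccess sketch (assumes mod_rewrite and mod_headers are enabled)
    RewriteEngine On
    # flag requests for vehicle details pages by their query string (illustrative parameter name)
    RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
    RewriteRule .* - [E=NOINDEX_DETAIL:1]
    # tell crawlers not to index (or follow links on) flagged responses
    Header set X-Robots-Tag "noindex, nofollow" env=NOINDEX_DETAIL
    # note: in some contexts the variable arrives prefixed, so env=REDIRECT_NOINDEX_DETAIL may be needed instead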
Alternative Markup Challenge. Can anyone help?
I have a challenge around alternative markup. We currently operate a single domain with geo-targeted folders and alternative markup implemented, and we are now looking to expand this out to non-English content.
Current implementation: All generic English language content is hosted on the main domain, with 5 English language content variations (locales) available under a folder structure (.com/en-us/ etc.). Alternative markup is in place for all locales within the HTML, implemented automatically by the developers via the CMS. Locale folders are geo-targeted via GWT and Bing WT.
Planned launch: Introduction of 5 new non-English locale folders (e.g. /de-de/ etc.), targeted to their respective country and language. Content language will be mixed, with around 1/10 of pages translated and the other 9/10 of pages (business listings) keeping their body content in English, with headers and footers translated. Locale folders will be geo-targeted via GWT and Bing WT. Folder and markup usage TBC.
Folder options: Implement a /de/ folder structure, attempting to indicate country but not language (issue: usually a single identifier indicates language, not country?). Or implement a /de-de/ folder structure to match the English locales and maintain correct country targeting (issue: some content is not in the target language).
Markup options: Do not make use of the markup at all. Or implement CMS-based automated markup on all English and non-English content throughout the new locales (e.g. /de-de/), but exclude the English language versions (e.g. /en-gb/). Or attempt manually implementing markup to bridge the English and non-English locales, potentially creating future issues as new content goes live and content is removed; a heavy risk.
The current approach is webmaster tools targeting, a /de-de/ folder structure, and automated implementation of the markup. This means English language URLs will have markup and non-English language URLs will have markup, but they will not match up (e.g. English pages will never have markup pointing at non-English language content). If your minds haven't melted, what are your thoughts? Any help is much appreciated.
Intermediate & Advanced SEO | | HelloAlba0 -
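For context on the question above, the "alternative markup" being discussed is presumably the rel="alternate" hreflang annotation set. The key constraint is that every locale version of a page has to carry the same complete, reciprocal set of annotations, which is why mixing annotated and unannotated locales produces the "they will not match up" problem. A minimal sketch, with a made-up domain and page:

    <!-- placed identically in the head of every locale version of the page -->
    <link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/page/" />
    <link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/page/" />
    <link rel="alternate" hreflang="de-de" href="https://www.example.com/de-de/page/" />
    <link rel="alternate" hreflang="x-default" href="https://www.example.com/page/" />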
Help Me find a Great Seo for my Budget!
I am looking for a good SEO for my tech news site and would like your help in recommending one that will fit in my budget of 300-500 per month. I have contacted many firms in the Moz directory of recommended firms but found they are out of my monthly price range. A Google search for a decent SEO can be scary, with so many so-called SEO companies. I would like to work with an experienced SEO individual who can come up with a great plan for our site and also implement it. We just had an SEO forensic audit done with Alan Blieweiss and implemented his suggestions, and are now looking for someone to work with long term for the rest of our SEO needs. I understand that I cannot afford the top SEO firms or industry leaders, but with your help and suggestions I am sure we can find a great SEO we can afford. Please reply here or message me.
Intermediate & Advanced SEO | | chrisyak0 -
Planning for Website traffic on a self-hosted web server
How would you plan for traffic levels on an in-house web server? The scenario is that the website is basically running on a T1 (1.5 Mbps) connection, and traffic is projected to increase significantly as the content grows from about 40 uniques a day (on fewer than 20 poorly optimized web pages plus associated PDF documents) to over 150 search-optimized content pages, plus offsite traffic and link building. I'm trying to figure out what kinds of average traffic levels (plus spikes) would represent the maximum bandwidth capacity for this, given that it's a narrow specialty B2B focus. Any answers would be useful.
Intermediate & Advanced SEO | | GLogic0
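A rough way to sanity-check the question above, using illustrative assumptions rather than measured figures: a T1 at roughly 1.5 Mbps moves about 187 KB per second, or around 675 MB per hour at full saturation. If an average optimized page plus its assets weighs about 500 KB, that pipe tops out near 1,350 page views per hour; at 150 KB per page it is closer to 4,500. So 150 or so daily visitors is nowhere near the ceiling on average, and the real planning question is about short traffic spikes and large PDF downloads, which can saturate a 1.5 Mbps link on their own.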