What's your take on the fire at OVH

SenseiSteve

HD Moderator
Staff member
OVH's SBG site in Strasbourg, France, was engulfed in flames, resulting in the loss of a significant amount of infrastructure. They're saying no one was hurt, but a lot of servers bit the dust.

Your thoughts?
 
Kinda crazy, especially after their IPO announcement.

The good news is, from what I've read, clients that were in the two affected datacenters are being restored and reactivated at other locations. Of course, that doesn't help with any dedicated hardware a client might have had, but at least it appears that backups and copies were distributed to other datacenters.
 
From what I read on Twitter, most people using them had no off-site backups.

Crazy. I mean, what happened to fire suppression? How did it start? It needs explaining to the public before people lose faith; I believe it will be a huge loss to their business going forward.
 
What I was most surprised about is that a single fire took out multiple data centres, which is pretty much unforgivable.

As a host that sits mainly on OVH (not Strasbourg, luckily), it's given me food for thought.

Luckily I chose not to use the "Truck Container" sites, and my data is in London and Gravelines, with backups in Canada and on premises.
 
That must have caused huge damage to their company and to their customers as well.

I can only imagine how all the people who lost their important data are feeling now.
 
If nothing else, it highlights again the importance of not just keeping backups but keeping them remote, not in the same building or the data centre next door.

We sync all of our backups once every 24 hours from Helsinki to Amsterdam, and I wonder if even that is enough or if we should maybe be doing that once every 12 hours instead.
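For anyone wondering what that kind of offsite sync can look like in practice, here is a minimal sketch, assuming rsync over SSH is available on both boxes; the hostnames and paths are made up for illustration:

import subprocess
import sys

# Hypothetical locations -- replace with your own staging directory and remote host.
SOURCE_DIR = "/var/backups/"  # local backup staging directory (the Helsinki box)
DESTINATION = "backup@ams1.example.net:/srv/offsite-backups/"  # the Amsterdam box

def sync_offsite():
    """Mirror the local backup directory to the remote site with rsync over SSH."""
    command = [
        "rsync",
        "-az",        # archive mode, compress data in transit
        "--partial",  # keep partially transferred files so an interrupted run can resume
        "--delete",   # remove files on the remote side that no longer exist locally
        SOURCE_DIR,
        DESTINATION,
    ]
    result = subprocess.run(command)
    if result.returncode != 0:
        # Exit non-zero so cron (or whatever scheduler runs this) can flag the failure.
        sys.exit(result.returncode)

if __name__ == "__main__":
    sync_offsite()

Run from cron every 12 or 24 hours, that covers the "a copy exists somewhere else" part. Note that --delete mirrors deletions too, so keep versioned snapshots as well if you want protection against accidental deletes, not just against a building burning down.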
 

I think it's highlighted what you get for 'cheap'; that particular DC doesn't look like a DC.
 
Depends on the backup. I think backing up files every 12-24 hours is OK, but if you're running a busy e-commerce site, that could be a lot of data lost.

We used to run database backups every hour for clients, and that seemed to be more than enough for most. We have seen some users set up systems to generate partial dumps every 5 or 10 minutes; it just depends on how big the database is and how often the data changes (rough sketch at the end of this post).

For file systems, though, 12-24 hours is usually enough.
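As a rough illustration of the hourly database dump approach: a sketch, assuming MySQL/MariaDB with mysqldump on the box and credentials coming from a defaults file; the database name and backup path are placeholders.

import datetime
import gzip
import subprocess
from pathlib import Path

# Placeholder values for illustration only.
DB_NAME = "shop_db"
BACKUP_DIR = Path("/var/backups/mysql")

def hourly_dump():
    """Dump one database with mysqldump and gzip it, stamped with the current hour."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H00")
    outfile = BACKUP_DIR / f"{DB_NAME}-{stamp}.sql.gz"

    # Assumes mysqldump can authenticate via a defaults file (e.g. ~/.my.cnf).
    # --single-transaction takes a consistent snapshot of InnoDB tables
    # without locking the database for the duration of the dump.
    dump = subprocess.run(
        ["mysqldump", "--single-transaction", DB_NAME],
        capture_output=True,
        check=True,
    )
    with gzip.open(outfile, "wb") as fh:
        fh.write(dump.stdout)

if __name__ == "__main__":
    hourly_dump()

Scheduled hourly from cron, and with the resulting files included in whatever gets synced offsite, that covers most of the e-commerce case; partial dumps every few minutes are the next step up if an hour of lost data is still too much.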
 
Yes, but regardless, it's still risky to have the backups in the same datacentre as the servers you are backing up.
 
Just saw this headline on Search Engine Journal: "New update to OVH data center fire recovery. Outage may continue until March 22, 2021 for some customers."
 

We back up a lot more frequently than every 12-24 hours, with a different frequency depending on the site in question.

We sync our backups to a different country that often, though. :)
 
The fire itself:
=> Well, we learned that if you build DCs in a way that optimizes for the lowest possible cost and go an experimental route for a lot of things, then under some circumstances things can go wrong.
=> Apparently they learned a lot of design lessons that they are now trying to implement at their other locations.
=> From news articles: parts of that DC should have been decommissioned a while back.

People using their services:
=> Probably learned that just sticking with one provider (most likely the lowest-cost one) is not that good of a plan in case things go wrong.
=> Probably a lot of people learned the importance of backups ;) (not only making them, but also testing their recovery strategy; rough sketch below).
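On the "testing their recovery strategy" point, even a small automated restore check goes a long way. A minimal sketch, assuming gzipped SQL dumps like the ones discussed above and a throwaway database to restore into; all names are placeholders and the mysql client is assumed to authenticate via a defaults file:

import gzip
import subprocess
from pathlib import Path

# Placeholders for illustration.
LATEST_DUMP = Path("/var/backups/mysql/shop_db-latest.sql.gz")
SCRATCH_DB = "restore_test"

def test_restore():
    """Restore the latest dump into a scratch database and run a basic sanity check."""
    # Recreate the scratch database so every test starts from a clean slate.
    subprocess.run(
        ["mysql", "-e",
         f"DROP DATABASE IF EXISTS {SCRATCH_DB}; CREATE DATABASE {SCRATCH_DB}"],
        check=True,
    )

    # Decompress the dump and feed it to the mysql client.
    with gzip.open(LATEST_DUMP, "rb") as fh:
        sql = fh.read()
    subprocess.run(["mysql", SCRATCH_DB], input=sql, check=True)

    # Cheap sanity check: the restored database should contain at least one table.
    tables = subprocess.run(
        ["mysql", "-N", "-e", f"SHOW TABLES FROM {SCRATCH_DB}"],
        capture_output=True, text=True, check=True,
    )
    if not tables.stdout.strip():
        raise RuntimeError("restore test failed: no tables in scratch database")

if __name__ == "__main__":
    test_restore()

A backup you have never restored is only a hope; running something like this on a schedule turns it into something you can actually count on.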
 
