uptime of the host

These are all dedicated hosting companies we have used in the past, and this is the uptime record of our servers with them.

Gigenet: 100% uptime
LiquidWeb: 100% uptime
SteadFast: 100% uptime
LimestoneNetwork: 99.99% uptime
Softlayer: 99.98% uptime
Ubiquity: 99.98% uptime
HiVelocity: 99.98% uptime
 
I am surprised you got 99.98% from HiVelocity; when I used them it was very rare to get over 94%.

Our main servers are with Steadfast now, and uptime is 100%.
 
Sure, most hosting companies advertise 99.9% uptime on their websites, but to know the real uptime it is better to read the reviews from analytics companies that research hosting uptime.
 
100% uptime is nearly impossible. 5ESS telephone switches were designed for 99.999% uptime over 40 or 50 years, which amounts to only a few minutes of downtime a year. However, there was a lot of redundancy built into those systems to ensure that.
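To put those "nines" in perspective, here is a small illustrative calculation (a sketch, not anything from the providers themselves) converting the availability figures quoted in this thread into allowed downtime per year:

```python
# Illustrative: convert an availability percentage into allowed
# downtime per year, for the figures quoted in this thread.
SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap years for simplicity

def downtime_per_year(availability_pct):
    """Allowed downtime in seconds/year for a given availability %."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.98, 99.99, 99.999):
    mins = downtime_per_year(pct) / 60
    print(f"{pct}% uptime -> {mins:.1f} minutes of downtime per year")
```

So the five nines of a 5ESS switch allows roughly 5.3 minutes a year, while a typical 99.9% hosting SLA allows nearly nine hours.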

Network reliability in the datacenter can be 100% if designed correctly, but once your connections are within range of a backhoe, that reliability can go out the window. Even a datacenter that brings in fiber from two different directions, but did not do its homework to ensure those connections never share a facility on the way to their final destination, is at risk of an outage someday.

TCP/IP was never designed for 100% network uptime; it was designed to make sure the packet arrived in one piece.

Trying for 100% server uptime is not really good in the long term. Eventually a reboot needs to be done, unless you want to find one of those uptime bugs that may strike after so many days, which can be disastrous. A reboot every 6 months is generally a good way to flush out any bad memory and also let fsck do its thing every once in a while.

That's a really good post - I too come from a telecoms background, and the nearest I see these days to the levels of redundancy in telecoms hardware like the 5ESS is big centralised kit like SANs. It's very rare to get it in a server, with the occasional very expensive exception like Stratus and HP NonStop (ex Tandem) fault-tolerant kit - both of which came from telecoms roots hosting AIN/IN services like 800 routing. Lovely bits of kit they were!

Your point about fibre diversity is also very true - there was a big outage in North West UK a few years ago which took out huge chunks of comms for many providers. If I recall the post mortem correctly, it turned out that two fibres which were supposed to be going through different routes had been 'optimised' into the same duct and the paperwork never completed properly to reflect that. Cue one small duct fire and all mobile, emergency and who-knows-what-else comms in the region go dead for a day or so until the mess is sorted out and everything rerouted.
 
Most datacenters do have planned maintenance that sometimes cannot be avoided.

True, but our DC has a failover system in place, so if they do carry out planned maintenance our servers still stay online. Any good DC should have such a system in place. It is unplanned issues that cause the problems.
 

Datacentres are generally designed so that maintenance can be done with no impact on the end customer. The Uptime Institute have a lot of influence over the terminology used in datacentres, and they have defined four levels of resilience, which in rough terms are:

Tier 1 - no redundancy
Tier 2 - some redundancy but still single points of failure
Tier 3 - concurrently maintainable (ie you can power anything down for maintenance and there is enough redundancy to keep things running; colloquially, and not always accurately, known as N+1)
Tier 4 - double everything - sufficient redundancy is built in such that even after powering something down for maintenance, there is still redundancy in the system (colloquially 2N, actually at least 2(N+1)).
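As a rough illustration of how those colloquial labels translate into unit counts, here is a hypothetical sketch (my own simplification, not an Uptime Institute definition - in reality Tier 2 and Tier 3 differ in concurrent maintainability and distribution paths, not just component counts):

```python
# Hypothetical sketch of the colloquial redundancy labels above.
# n_needed = units of plant (e.g. UPS modules or chillers) required
# to carry the full load; returns units provisioned per tier.
def provisioned_units(n_needed, tier):
    if tier == 1:   # no redundancy
        return n_needed
    if tier == 2:   # some redundancy: roughly N+1 on selected components
        return n_needed + 1
    if tier == 3:   # concurrently maintainable, colloquially N+1
        return n_needed + 1
    if tier == 4:   # colloquially 2N, actually at least 2(N+1)
        return 2 * (n_needed + 1)
    raise ValueError("tier must be 1-4")

for tier in (1, 2, 3, 4):
    print(f"Tier {tier}: {provisioned_units(4, tier)} units for N=4")
```

The jump from Tier 3 to Tier 4 more than doubles the plant for the same load, which goes a long way to explaining the cost difference discussed below.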

Tier 1s are generally cheap and cheerful, basement-type operations; Tier 2 covers more operators than you might imagine, or indeed than would admit to it! Personally I would say 'enterprise level' starts at Tier 3. Tier 4 is normally only for financials and governments, as the cost is of course much higher than the lower tiers.

The tier definitions are widely abused by data centre operators but still provide a good indication of resilience in a facility.
 
Our main server is in a DC with Tier 3 resilience.

About 6 months ago I got a request from an international company that wanted me to get a dedicated server in the UK with Tier 4 resilience, just to host their sites (they wanted me to run the server), which I was happy to do. But we all know prices in the UK are higher than in the USA. After I gave them a price for such a server at Tier 4, they never got back to me.
 

Haha, I'm not surprised, the Tier 4 colo market in the UK is very small and tends to be used only by people with very deep pockets. Did you manage to get a single server slot in one directly or did you have to go through a reseller? (I'm not asking names, just curious as I don't think I've ever seen a UK Tier 4 offer much below private suites, maybe occasional single racks)
 

The thing about Tier 4 colocation is that there are a lot of data centers that call themselves "Tier 4". The truth is that there is only a handful of actual certified Tier 4 data centers in the world. The reason behind this is the cost it takes to apply and become Tier 4 accredited.

I toured a facility that told me they were certified Tier 4, and when I asked to see their accreditation certificate, they couldn't produce it for me. So my rule of thumb when it comes to data center tier accreditation: request proof.
 

Like you, I could not see one on offer, so I spent a few days on the phone calling around most of the major datacenters. A couple said they could offer Tier 4, but they were not Tier 4 accredited. I then remembered a local independent DC I had seen advertised in a local PC mag. I found their address and went to visit them, and they were Tier 4 accredited (saw proof).
 

It's certainly true that many facilities which meet the standards of Tier 3 and Tier 4 never get accredited by the UI, and (being very careful here!) it may also be true that SOME of those facilities that claim to be Tier 3 or Tier 4 might not be when you look rather closer (perish the thought....).

Lack of accreditation isn't a bad sign, indeed the vast majority of facilities will never get accredited as it just doesn't make business sense. The accreditation process is complex, intrusive and expensive (the UI is a profit-making company, not a charity, industry body or educational institute, despite the name) and most larger customers will be making up their own minds on a facility's suitability through some fairly deep audits of what's been built, rather than relying on UI specs.

None of that of course is an excuse for a facility claiming accreditations they don't have!
 

There is a list of certified designs and facilities on the UI website - since the only reason anyone ever gets a UI rating is for marketing purposes, I'd be surprised if it's incomplete. According to that, there is only one certified Tier 4 in the UK, and that is at design-documentation level, not built-facility level.

http://uptimeinstitute.com/TierCertification/certMaps.php
 
None of that of course is an excuse for a facility claiming accreditations they don't have!

Exactly what I'm getting at. In fact, most places that meet the Tier 4 standards don't bother with accreditation, because it's so expensive and wouldn't add any true value to their business. The ones I refer to are data centers that claim they are accredited when they are actually not. I think that is deceitful and dishonest and should not be tolerated by customers. Transparency to customers, especially in this industry, is very important IMO.
 