I addressed this somewhat in a blog post last January. The basic elements still apply.
Power requirements
OK, we all know that power is a huge consideration in building out or maintaining a data center. A 1MW data center consumes roughly 177 million kWh over ten years, worth approximately $17 million at $0.10 per kWh. So it should be no surprise that energy has become the second-highest expense for data centers.
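As a rough sanity check on those figures, here is a back-of-envelope sketch. The assumption that total draw is about twice the IT load comes from the utility-versus-critical-load split discussed later in this piece; the rest of the numbers are from the paragraph above.

```python
# Back-of-envelope check: a data center with a 1 MW critical (IT) load
# typically draws about the same again for cooling and lighting, so the
# total continuous draw is roughly 2 MW (an assumed 50/50 split).
HOURS_PER_YEAR = 24 * 365          # 8,760
total_draw_mw = 2.0                # 1 MW IT load + ~1 MW utility load
years = 10
rate_per_kwh = 0.10                # dollars

kwh = total_draw_mw * 1000 * HOURS_PER_YEAR * years   # ~175 million kWh
cost = kwh * rate_per_kwh                             # ~$17.5 million

print(f"{kwh / 1e6:.0f} million kWh, ${cost / 1e6:.1f} million over {years} years")
```

That lands within a few percent of the 177 million kWh / $17 million figures quoted above.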
We’re already seeing more and more relocations and outsourced applications, primarily to reduce power expenses. It’s projected that during the next five years, one in four businesses will experience a significant business interruption. Couple that with the explosive growth of data: 161 ******** of data were created in 2006, approximately three million times the information contained in all the books ever written.
Data center space
Going back a few years to the dot-com boom and subsequent collapse, a lot of space suddenly became empty and available for pennies on the dollar. Millions of square feet of space, built out to a specification of 100 watts per square foot, were sold at bargain-basement prices to enterprises seeking regulatory compliance. That space is long gone.
The subsequent advent of metered power started squeezing power margins, making a reasonable return on investment even harder to achieve.
Thinking of relocating?
New data centers need 200 to 400 watts per square foot to be competitive. Unfortunately, local utilities are unable to deliver that power in some markets, as the cost of producing that energy has quadrupled in the last few years. How do you combat unmanageable power expenses? Many companies are colocating to data centers in less expensive power markets, moving from the West or East coast to the Midwest, for example. Many are seeing rates drop from $0.07 to $0.02 per kWh.
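To put that rate difference in perspective, here is a rough annual-savings calculation. The 1MW continuous draw is my assumption for illustration; only the two rates come from the text.

```python
# Illustrative savings from a rate drop of $0.07 -> $0.02 per kWh,
# for a hypothetical facility drawing 1 MW around the clock.
draw_kw = 1000                     # assumed total continuous draw
hours_per_year = 24 * 365
annual_kwh = draw_kw * hours_per_year      # 8.76 million kWh

old_rate, new_rate = 0.07, 0.02
savings = annual_kwh * (old_rate - new_rate)
print(f"Annual savings: ${savings:,.0f}")  # ~$438,000 per MW of draw
```

At those rates, every megawatt of continuous draw is worth several hundred thousand dollars a year, which is why relocation decisions increasingly start with the power bill.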
Other factors
Fluctuations in rack pricing follow suit, from $400 for low-kW sites to over $1,500 for high-kW sites, not including power. Cooling those racks adds another dimension of expense. Data centers are increasingly offering shorter-term contracts, giving them leverage to raise prices at renewal.
Consider this.
In the year 2000, there were approximately fifteen million servers. By 2005, that had grown to twenty-seven million servers installed worldwide, and it took approximately $100 billion to manage them. It’s projected that by the end of 2009 there will be thirty-five million servers – an increase of eight million servers in just four years! What will these eight million additional servers consume in power? A single CPU can consume as much as 130 watts (more than most standard light fixtures), and each server consumes two to four times more power than it did five years ago. Older data centers were designed to support four kilowatts per rack, but current requirements can be as high as fifteen to twenty kilowatts per rack. Cooling all these servers adds yet another expense.
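A quick sketch of how those per-rack numbers add up. The 400 watts per server and the 42U rack are my illustrative assumptions, not figures from the text or any specific vendor; the 130-watt CPU and the 4 kW legacy design limit are from the paragraph above.

```python
# Rough illustration of the rack-density shift described above.
# Assumed: a 1U server drawing ~400 W total (CPUs at up to 130 W each,
# plus memory, disks and fans), one server per U in a 42U rack.
watts_per_server = 400
servers_per_full_rack = 42

rack_kw = watts_per_server * servers_per_full_rack / 1000
print(f"Fully populated rack: {rack_kw:.1f} kW")        # ~16.8 kW

legacy_design_kw = 4                                    # older facilities
print(f"Legacy design limit: {legacy_design_kw} kW "
      f"({rack_kw / legacy_design_kw:.1f}x over)")
```

Under those assumptions a full rack lands squarely in the fifteen-to-twenty-kilowatt range cited above, roughly four times what older facilities were designed to deliver.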
Utility load versus critical load
Data centers consume power for IT equipment (critical load) and to operate cooling and lighting (utility load). Most data centers consume equal amounts of power for each; thus, a small data center with a 4MW feed would consume 2MW of that power before any IT equipment is accounted for. On the utility side, cooling consumes 25%, air movement 12%, power conversion losses (transformer/UPS) 10% and lighting 3%. Larger data centers have the ability to improve these percentages.
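The 4MW example works out as follows, a simple sketch using only the percentages quoted above:

```python
# Breakdown of the 4 MW feed example, using the utility-side shares
# from the text (which sum to 50% of the total feed).
feed_mw = 4.0
utility_shares = {
    "cooling": 0.25,
    "air movement": 0.12,
    "transformer/UPS losses": 0.10,
    "lighting": 0.03,
}

critical_mw = feed_mw * (1 - sum(utility_shares.values()))
print(f"Critical (IT) load: {critical_mw:.1f} MW")       # 2.0 MW
for name, share in utility_shares.items():
    print(f"  {name}: {feed_mw * share:.2f} MW")
```

The four utility categories sum to exactly half the feed, which is what leaves only 2MW of a 4MW feed for the IT equipment itself.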
What is the cost to build a new data center?
A tier I or tier II data center (20,000 square feet) can cost $15.4 million. A tier III or IV data center can cost $48.4 million.
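Those figures imply the following per-square-foot build costs. Note that the text states the 20,000 square foot footprint only for the tier I/II case; applying it to tier III/IV as well is my assumption.

```python
# Build cost per square foot implied by the figures above,
# assuming a 20,000 sq ft footprint for both examples.
sq_ft = 20_000
tier_1_2_cost = 15_400_000
tier_3_4_cost = 48_400_000

print(f"Tier I/II:   ${tier_1_2_cost / sq_ft:,.0f} per sq ft")   # $770
print(f"Tier III/IV: ${tier_3_4_cost / sq_ft:,.0f} per sq ft")   # $2,420
```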
What about other complications, such as lead times?
A typical lead time to build a top-tier 200W per square foot data center (75,000 sq ft and up) is one year. Add lead times for UPS units, generators and PDUs, plus time for land acquisition, design, permits and build-out.