The build vs. lease conversation is entering a new stage. With increasing financial and operating risks associated with building, managing, and maintaining a company-owned data center, the scales are tipping in favor of a leasing model for most organizations.
Building a data center is one of the most expensive decisions a CIO will make. Changing hardware, software and business needs can quickly turn a data center investment into a resource drain on a company.
A recent Forbes magazine article titled “The Top 10 Strategic CIO Issues” drives home the limitations of managing an in-house data center:
“Far too many companies today find that they need to devote 70% or even 80% of their IT budget just to run and maintain what they’ve already got, leaving as little as 20% for innovation.”
The Forbes article points to server sprawl, massively under-utilized storage resources, unproductive data centers, labor-intensive integration requirements, and a near-endless list of “strategic” vendors as the biggest culprits consuming existing budgets.
Analyzing the Risks of Building a Data Center
For large tech companies such as Google, Amazon, or Microsoft, building internal data centers makes sense. But for the vast majority of other businesses, the benefits rarely outweigh the risks.
Below is a summary of some of the pitfalls involved in the build decision:
- The Real Cost of Building a Data Center – A typical 50,000-square-foot, five-megawatt (MW) data center built by a Fortune 1000 company costs anywhere from $115 million to $200 million and has a useful life of about 20 to 30 years — if it’s designed properly. In addition, the facility houses hundreds of millions of dollars in hardware and software assets, each of which has a shelf life of three to five years.
Data center construction projects aren’t familiar ground for most enterprises. It generally takes companies two to three years to complete a data center build, typically at a much higher cost than forecast.
CIOs are already challenged to predict software, hardware, and communication trends over the next five to ten years. Stretching that horizon across the 20- to 30-year life of a new data center is virtually impossible.
Many CIOs use a simple approach based on general assumptions like, “Let’s assume our existing power and square-footage footprint grows by 5% annually,” and then project forward 15 years. They then use this figure as the basis for sizing the data center. However, such generalizations can lead to underutilization of the facility or to capacity constraints as the company grows. Both scenarios can cost the organization valuable time and money over the long term.
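The naive sizing approach described above is simple compound growth. A minimal sketch follows; the 2 MW starting load is an illustrative assumption, not a figure from the article:

```python
# Sketch of the flat-growth capacity forecast a CIO might run.
# The starting load (2 MW) is assumed for illustration.

def projected_capacity(current_mw: float, annual_growth: float, years: int) -> float:
    """Compound a current power footprint forward at a flat annual growth rate."""
    return current_mw * (1 + annual_growth) ** years

start_mw = 2.0   # assumed current IT load
growth = 0.05    # "grows by 5% annually"
horizon = 15     # "project forward 15 years"

needed = projected_capacity(start_mw, growth, horizon)
print(f"Projected load after {horizon} years: {needed:.2f} MW")  # ≈ 4.16 MW
```

A single fixed growth rate applied for 15 years is exactly the kind of generalization the article warns about: small errors in the assumed rate compound into large sizing errors.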
- Data Center Returns Can Be Severely Diminished by Underutilization – Forecasting errors are among the riskier aspects of building a new data center. If the facility turns out too large, the cost per utilized kilowatt (kW) or square foot will be significantly higher than forecast. If it is too small, the CIO will have to seek additional capital earlier than planned to build another facility.
The impact of forecasting variability can be enormous and long-lasting. For example, a $115 million data center can end up only 50% utilized if the company’s IT computing migrates to the cloud or part of the IT footprint becomes virtualized. As a result, the company’s realized price per kW or per square foot could be twice as much as originally predicted.
Given the planning uncertainty and the residual risk associated with building expensive, long-lived assets, more and more Fortune 500 executives are reconsidering the significant capital investments needed to build in-house data centers.
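The doubling effect in the $115 million example above is straightforward arithmetic: capital cost spread over half the planned kilowatts yields twice the unit cost. A minimal sketch, assuming the facility is the article’s 5 MW size:

```python
# Sketch of how underutilization inflates realized unit cost, using the
# article's $115M build cost and a 5 MW facility (capacity assumed).

def realized_cost_per_kw(capex: float, capacity_kw: float, utilization: float) -> float:
    """Capital cost spread over the kilowatts actually in use."""
    return capex / (capacity_kw * utilization)

capex = 115_000_000   # build cost from the article
capacity_kw = 5_000   # 5 MW facility

planned = realized_cost_per_kw(capex, capacity_kw, 1.0)  # fully utilized
actual = realized_cost_per_kw(capex, capacity_kw, 0.5)   # 50% utilized

print(f"Planned:  ${planned:,.0f} per kW")  # $23,000
print(f"Realized: ${actual:,.0f} per kW")   # $46,000, twice the forecast
```

The same relationship holds for cost per square foot: realized unit cost scales inversely with utilization, regardless of the facility’s absolute size.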
- Technological Advancement Can Fuel Planning Uncertainty – Leveraging new hardware and software technologies appropriately is a key consideration for CIOs. They also need to ensure data center environments have the power density and cooling capability needed for higher-performing equipment running an increasing array of applications.
CIOs must carefully analyze the risk involved in technology deployments. Items to review include:
- Hardware Risk – Which hardware platforms must be deployed to give the company the IT advantage it needs to drive revenue, increase productivity, and provide stronger analytical support for emerging big-data demands? Is a high-performance computing (HPC) solution needed? How much hardware will be virtualized? What are the forecasted power densities of the proposed hardware, and how will storage be managed?
- Software Risk – What software investments strengthen the organization’s competitive advantage? How will mobile applications come into play and what will the software roadmap look like? How much business will be moved to the cloud over the next 5-10 years?
- Business Risk – What is the company’s strategic roadmap? Will growth come via acquisition and/or will some units be divested? How nimble must the IT platform be in order to anticipate changing business strategies and growth through acquisitions?
Colocation Can Address Many Challenges
More and more companies are turning to colocation providers for data center space that offers room to scale as needs grow, power and cooling redundancy, physical security, fire suppression, and access to telecommunications carriers.
CyrusOne is building some of the largest and most energy-efficient “Massively Modular” data centers today. Using a design philosophy that leverages massive economies of scale, CyrusOne can efficiently scale to meet a company’s current and future needs.
CyrusOne’s best-in-class data centers have been selected by nine of the global Fortune 20 firms and over 135 of the Fortune 1000 firms.
For more information about colocation solutions, visit http://www.cyrusone.com/.