CyrusOne Articles



Data Center Build vs. Lease: Avoid the Ultimate Gamble

The build vs. lease conversation is entering a new stage. With increasing financial and operating risks associated with building, managing, and maintaining a company-owned data center, the scales are tipping in favor of a leasing model for most organizations.

Building a data center is one of the most expensive decisions a CIO will make.  Changing hardware, software and business needs can quickly turn a data center investment into a resource drain on a company.

A recent Forbes magazine article titled “The Top 10 Strategic CIO Issues” drives home the limitations of managing an in-house data center:

“Far too many companies today find that they need to devote 70% or even 80% of their IT budget just to run and maintain what they’ve already got, leaving as little as 20% for innovation.” 

The Forbes article points to server sprawl, massively under-utilized storage resources, unproductive data centers, labor-intensive integration requirements, and a near-endless list of “strategic” vendors as the biggest culprits consuming existing budgets.

Analyzing the Risks of Building a Data Center

For large tech companies like Google, Amazon or Microsoft, building internal data centers makes sense.  But for the vast majority of other businesses, the benefits rarely outweigh the risks.

Below is a summary of some of the pitfalls involved in the build decision:

  • The Real Cost of Building a Data Center – A typical 50,000-square-foot, five-megawatt (MW) data center built by a Fortune 1000 company costs anywhere from $115 million to $200 million and has a useful life of about 20 to 30 years — if it’s designed properly. In addition, the facility houses hundreds of millions of dollars in hardware and software assets, each of which has a shelf life of three to five years.

Data center construction projects aren’t familiar ground for most enterprises. It generally takes companies two to three years to complete construction on a data center project, and the final cost typically runs much higher than forecast.

CIOs are challenged with trying to determine software, hardware and communication trends over the next five to ten years.  Attempting to stretch that time horizon to 20 to 30 years (the life of a new data center) is virtually impossible.

Many CIOs use a simple approach based on general assumptions like, “Let’s assume our existing power and square footage grow by 5% annually, and then project forward for 15 years.” They then use this figure as the basis for estimating the size of the data center.  However, such generalizations can lead to underutilization of the facility or capacity constraints as a company grows. Both scenarios can cost the organization valuable time and money over the long term.
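
This kind of rough projection can be sketched in a few lines. The starting load and growth rate below are illustrative assumptions, not figures from any particular company:

```python
def project_capacity(current_mw, annual_growth, years):
    """Compound a starting IT load forward at a fixed annual growth rate."""
    return current_mw * (1 + annual_growth) ** years

# Assumed starting load of 5 MW, growing 5% per year for 15 years
projected = project_capacity(5.0, 0.05, 15)
print(f"Projected load after 15 years: {projected:.1f} MW")
```

Even this simple compounding shows the stakes: a 5% assumption roughly doubles the required capacity over 15 years, so a small error in the assumed growth rate compounds into a large sizing error.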

  • Data Center Returns Can Be Severely Diminished by Underutilization – Forecasting errors are one of the riskier propositions of building a new data center. If the CIO made the wrong decision and the facility is too large, the cost per utilized kilowatt (kW) or square foot will be significantly higher than forecast.  If the facility is too small, the CIO will have to seek additional capital earlier than planned to build another facility.

The impact of forecasting variability can be enormous and long lasting. For example, a $115 million data center may end up only 50% utilized if the company’s IT computing migrates to the cloud or part of the IT footprint becomes virtualized.  As a result, the company’s realized price per kW or price per square foot could be twice as much as originally predicted.
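
The arithmetic behind that example, using the article’s $115 million, 5 MW facility:

```python
build_cost = 115_000_000   # construction cost cited in the article
capacity_kw = 5_000        # 5 MW facility

def cost_per_utilized_kw(utilization):
    """Effective build cost per kilowatt actually used."""
    return build_cost / (capacity_kw * utilization)

print(cost_per_utilized_kw(1.0))   # fully utilized
print(cost_per_utilized_kw(0.5))   # half utilized: realized cost per kW doubles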

Given the planning uncertainty and the residual risk associated with building expensive, long-lived assets, more and more Fortune 500 executives are reconsidering the significant capital investments needed to build in-house data centers.

  • Technological Advancement Can Fuel Planning Uncertainty – Leveraging new hardware and software technologies appropriately is a key consideration for CIOs. They also need to ensure data center environments have the power density and cooling capability needed for higher-performing equipment that will run an increasing array of applications.

CIOs must carefully analyze the risk involved in technology deployments. Items to review include:

  • Hardware Risk – Which hardware platforms must be deployed to give the company the IT advantage it needs to drive revenue, increase productivity, and provide stronger analytical support to take advantage of emerging big data demands? Is a high-performance computing (HPC) solution needed? How much hardware will be virtualized? What are the forecasted power densities of the proposed hardware, and how will storage be managed?
  • Software Risk – What software investments strengthen the organization’s competitive advantage? How will mobile applications come into play and what will the software roadmap look like? How much business will be moved to the cloud over the next 5-10 years?
  • Business Risk – What is the company’s strategic roadmap? Will growth come via acquisition and/or will some units be divested? How nimble must the IT platform be in order to anticipate changing business strategies and growth through acquisitions?

Colocation Can Address Many Challenges

More and more companies are turning to colocation providers to deliver data center space that includes flexible space to scale as needs grow, power and cooling redundancy, physical security, fire suppression, and access to telecommunications carriers.

CyrusOne is building some of the largest and most energy efficient “Massively Modular” data centers today.  Using a design philosophy that leverages massive economies of scale, CyrusOne can efficiently scale to meet a company’s current and future needs.

CyrusOne’s best-in-class data centers have been selected by nine of the global Fortune 20 firms and over 135 of the Fortune 1000 firms.

For more information about colocation solutions, visit

Eliminating the Confusion Surrounding Private, Public and Hybrid Clouds

Greater choice, but risks need to be considered.

Over the past few years, cloud computing has been touted as the number one way to simplify, yet strengthen, how the digital world functions.  Today, the benefits have been well documented.  Thanks to cloud computing technologies, companies of all sizes are managing their IT operations in ways never imagined.

Using cloud resources, computer applications and services are delivered to users through networks or the Internet.  Computational work is done remotely and delivered online for on-demand, anytime, anywhere, service availability.  As a result, companies reduce storage and processing power on local computers and devices.  They can invest less in infrastructure assets and operate with greater elasticity.

Because cloud technology creates exceptional value through better IT management, industry analysts predict continued growth.  The McKinsey Global Institute, a business and economics research firm, predicts that by 2025 most IT applications and services will be cloud-delivered.  International Data Corporation (IDC), a market research firm specializing in information technology, expects cloud spending to surge by 25% in 2014, reaching over $100 billion.

This growth can be attributed to these major drivers:

  • IT departments continue to be taxed to do more with less.  Cloud technology allows companies to cut costs, as well as add applications, services and capacity quickly.
  • Due to technological obsolescence, businesses often experience poor rates of return on their IT investments.  Cloud technology eliminates the need for companies to spend huge amounts of money on infrastructure assets only to find them outdated in a few years.
  • Small and medium-sized businesses usually can’t compete with larger enterprises in building and managing IT infrastructures.  Cloud technology frees SMBs from spending valuable capital on facilities and managing complex IT infrastructures.  Therefore, SMBs will continue to expand into cloud technology in greater numbers so they can compete on a more even playing field.

An Overview of Cloud Models

Cloud services can be delivered via a public, private or hybrid model.  Each cloud model provides value; however, the best cloud for an organization depends on its specific requirements and portfolio of applications.

  • Public Cloud – In this model, third-party service providers own and manage multi-tenant IT infrastructures.  Customers can access shared resources, including servers, operating systems, storage and bandwidth, from the service provider’s data center.  Sharing components spreads costs across multiple users.

Public clouds allow organizations to avoid making large capital investments in IT infrastructure.  Companies incur an operating cost instead of making a capital investment.

In addition to requiring no upfront capital investment, public clouds are a good choice for applications experiencing changing demand.  Public clouds offer high levels of elasticity for applications needing to scale up and down.

  • Private Cloud – Like public clouds, private clouds offer scalability.  However, they’re configured for only one tenant.  Organizations can build their own private clouds or they can partner with a service provider to deliver and manage them in the provider’s data center.

Private clouds make sense for companies requiring customized applications and greater infrastructure control.  Companies often select private clouds when they must provide enhanced security or comply with regulatory requirements.

  • Hybrid Cloud – As the name implies, hybrid clouds consist of multiple types of clouds.  They combine capabilities and leverage the benefits of each cloud used to create the hybrid model.

For example, an organization can lower costs by using a public cloud configuration for highly elastic applications or tertiary applications such as back-up storage or test and development programs.  Yet, when specialized security and greater control is required, companies can place applications in the private area of the hybrid cloud.
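
The placement logic described above can be sketched as a simple rule. The workload attributes here are illustrative assumptions; real placement decisions weigh many more factors, such as cost, latency and data residency:

```python
def place_workload(requires_enhanced_security: bool, highly_elastic: bool) -> str:
    """Route a workload to the private or public side of a hybrid cloud."""
    if requires_enhanced_security:
        return "private"   # regulated or sensitive applications need more control
    if highly_elastic:
        return "public"    # bursty demand benefits from public-cloud elasticity
    return "public"        # default to the lower-cost shared pool

print(place_workload(True, False))   # e.g. a compliance-bound application
print(place_workload(False, True))   # e.g. back-up storage or test/dev
```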

When Security is Paramount, Public Clouds Can Be Risky Business

Even with the known benefits of agility and savings, some companies are not adopting cloud technology.  They question whether applications and data in the cloud are truly secure.  This is a major hurdle in adoption, especially in a public cloud model.

For example, analysts at 451 Research found that 69% of respondents in a 2013 survey were greatly concerned with public cloud security.  This significant figure indicates security issues can affect cloud adoption despite impressive industry growth projections and a slew of other benefits.

According to the Ponemon Institute, a research center dedicated to privacy, data protection and information security policy, companies often fail to incorporate reliable cloud security strategies.  Ponemon surveyed 748 IT and IT security practitioners for a 2013 Security of Cloud Computing Users study and found only about half of respondents believed their organizations were incorporating best practices for cloud security.

When it comes to adoption and security, 46% of respondents in the Ponemon study indicated cloud deployment was impeded or slowed because of security concerns.  The remaining respondents said cloud adoption wasn’t negatively impacted (45%) or were unsure (9%) of the effect.

As a result of these security concerns, applications requiring continuous uptime and stringent data protection are typically not moved to the public cloud.

How to Ensure Cloud Security Using a Private Cloud

Fortunately, service providers offer solutions to address cloud security concerns.   For example, CyrusOne works with companies to set up their own private cloud networks in CyrusOne data centers.  This private cloud solution provides greater security, more control and increased scalability.

In addition, CyrusOne developed “Sky for the Cloud,” an enablement platform for the cloud.  This solution provides the data center infrastructure or “home” for the cloud in a customized data hall where businesses can run their cloud system. It optimizes Power Usage Effectiveness (PUE) and enables fast interconnection to an ecosystem of over two dozen business partners, content providers, networks, carriers, Internet service providers and Ethernet buyers and sellers.

When properly deployed, cloud solutions provide a wide range of benefits. In a CyrusOne colocation scenario, companies experience all the benefits of cloud technology, including the highest levels of security, without the risks, expenses and headaches involved in building and managing an on-premises solution.

For more information about cloud solutions, visit

Big Data: Cutting Through the Hype

“Big data” is as the name implies – sets of data so huge and complicated that traditional storage and analytical tools often have difficulty processing them.

As big data is being stored at record rates, companies face two major challenges.  First, they need to store and manage the volume of data.  Second, and more important, they need to analyze the vast amounts of data to derive value from it.

To put this explosion in perspective, a paper published during the 2012 IEEE Aerospace Conference estimated the size of digital data in 2011 to be 1.8 Zettabytes, or 1.8 trillion Gigabytes.

By 2020, this figure is anticipated to be 50 times higher.
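
Those two figures imply a striking compound growth rate. The calculation below is a back-of-the-envelope estimate based only on the numbers cited above:

```python
growth_factor = 50      # 50x increase cited for the 2011-2020 span
years = 2020 - 2011

# Compound annual growth rate implied by a 50x increase over nine years
cagr = growth_factor ** (1 / years) - 1
print(f"Implied annual data growth: {cagr:.1%}")
```

In other words, the cited projection amounts to total digital data growing by more than half every year.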

Along with the challenges of managing such monumental amounts of data, organizations can also reap significant rewards.  When properly collected, stored and analyzed, big data can help solve real business problems and can provide a good return on investment.

Using predictive analytics, organizations can begin to see patterns and anomalies in the data.  Many companies are starting to take the next step to uncover this meaningful information to increase revenues, cut costs, and enhance operations.

Big data analytics offers possibilities to improve performance.  These two examples show how:

  • Better Customer Understanding – Big data replaces previous methods of basic survey-taking by enabling conversations and natural language to be analyzed.  Rather than evaluating checked boxes and canned answers, today’s analytics tools capture and process customer responses on a much deeper level.
  • Improved Public Safety – Analyzing big data collected from a variety of sources can also be used to keep people safer.  Changes in crime rates can be evaluated to see if correlations exist with certain events, such as changing demographics, increased construction activity, more public assistance requests and social media chatter.
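
A toy version of the public-safety example is computing a simple correlation between two data series. The monthly figures below are fabricated purely for illustration; a real analysis would use far larger data sets and control for confounding factors:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Illustrative monthly series: construction permits vs. reported incidents
permits = [12, 15, 14, 20, 22, 25]
incidents = [30, 33, 31, 40, 42, 47]
print(f"Correlation: {pearson(permits, incidents):.2f}")
```

A coefficient near 1 or -1 flags a relationship worth investigating; correlation alone, of course, does not establish that one factor causes the other.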

Filtering through the Hype and Misconceptions

Just like any emerging technological concept, hype surrounds big data.  It’s important for companies to weed through what the so-called pundits are saying to find the real value for their business.  The potential is certainly there to reap significant rewards.  But becoming a success story typically means overcoming challenges along the way, too.

To separate the big data hype from reality, organizations must understand the following facts:

  • Big Data Benefits All-Sized Companies – Just because it’s “big” data doesn’t mean it’s only for big companies.  Forgoing a big data project could mean missing out on opportunities to improve performance.  Today’s sophisticated analytics tools allow even the smallest company to look for patterns and relationships in large amounts of data.

In fact, using big data best practices enables smaller companies to better compete.  Their size usually makes them more nimble than larger firms.  And when smaller companies become more data-driven, they are likely well positioned to outpace larger competitors.

  • Big Data Equates More to Growth than Size – Although big data often refers to massive amounts of data, it’s more appropriate to relate the term to how much data volume is growing for a particular company.  Even the smallest of organizations is likely to experience data growth.  If an organization’s data volume doubles, it doesn’t really matter that the amount may be considered small by big-company standards.
  • Companies Don’t Need a Data Scientist – To benefit from big data, companies need sophisticated algorithms to analyze it.  Although these algorithms are written by data scientists, companies don’t need to have these experts on staff.  Packaged software is available to help companies of all sizes become more data-driven.  It allows them to retrieve valuable, actionable information.
  • Analysis of Big Data Doesn’t Require Traditional Reporting of Historical Data – The major benefit companies get from analyzing big data is identifying what’s going on today and what is likely to happen in the future.  Determining patterns and relationships doesn’t necessarily mean analyzing static historical data.

How to Best Leverage Big Data

Once organizations understand the reality of big data, they can begin to develop a plan for benefitting from it.  Here are some suggestions on how best to collect, store and analyze big data to positively impact performance:

  • Don’t Focus on the Technology at the Expense of Business Objectives – It’s easy to get caught up in the hype and latest innovations surrounding big data.  But before implementing a solution, companies must understand the underlying business problem they want big data to solve.

What can the information gleaned from big data analysis help improve?  What current challenge does the business need to overcome?  To get the most from big data initiatives, avoid starting with the chicken (solution) before the egg (problem).

  • Eliminate Employee Information Processing Tasks Whenever Possible – Big data can help streamline jobs for many employees.  Often, performing routine informational tasks takes people away from what’s really important.  Instead of handling routine information processing, the proper big data program can enable employees to focus on contributing to meeting company goals.
  • Consolidate Organizational Data – Big data eliminates the need to keep functional data separate.  No longer do Marketing, Human Resources or Finance need their own data silos.  Instead, big data enables everyone in the organization to access the same data set.

This consolidation of data enables departments to collaborate better.  In addition, big data provides analytical capabilities and valuable insights to a broader group of employees within the organization.

  • Implement an Effective Cloud Infrastructure – In most cases, big data needs a fast connection to the cloud.  Cloud storage provides benefits over traditional storage, including faster transfer times for large amounts of data, better scalability to accommodate changing data volumes, enhanced disaster recovery in the event of an outage, and better security technology and practices.

In addition, cloud storage solutions for big data require no installation or large capital outlays.  Businesses can be up and running very quickly.  By providing economies of scale, most cloud solutions will also lower overall big data storage costs.

CyrusOne enables businesses to successfully run their cloud infrastructure systems using the Sky for the Cloud™ technology. This enablement platform provides a home for cloud providers and enterprise private clouds in a customized data hall.

Data centers at CyrusOne are designed for optimizing Power Usage Effectiveness (PUE) and enabling fast interconnection to an ecosystem of channel and cloud partners, content providers, networks, carriers, Internet service providers, and Ethernet buyers and sellers. The solution enables customers to more quickly and affordably pull content from the edge of the Internet to the heart of the data center.

According to industry experts, big data will keep growing at a dramatic pace.  Companies large and small can take advantage of this trend without it becoming a daunting project.  The right tools, data center configuration, and approach to big data will result in better information and competitive advantages.  The results of big data analytics will provide meaningful information that could improve decision-making and overall company performance.

For more information about big data solutions, visit

Exploding Internet Traffic Drives the Need for Multi-Facility Interconnectivity

The growth of social networking, web services and cloud applications has caused an exponential increase in data and network traffic. As a result, powerful data centers are needed to handle burgeoning infrastructure requirements.

In addition, organizations must configure solutions for disaster recovery and business continuity.  Ensuring ongoing operations in the event of a power outage has never been more critical.

Data center providers must respond to both demands – maintaining high performance levels in light of escalating network traffic and enabling customers to continue operations during a disaster.  Interconnecting multiple data center facilities helps accomplish these goals.

In the not-so-distant past, enterprise applications were not designed to be used in different locations without advanced infrastructure and complex software.  Fortunately, today’s data center interconnectivity solutions can extend a data network between geographically dispersed facilities.  By providing an efficient and reliable solution, data center interconnectivity has become a common business practice.

Evaluating Interconnection Options

Data center interconnectivity involves connecting multiple facilities across the wide area network using routing and transport technologies.  The result is a larger available pool of resources.

The goal for providers is to develop an interconnectivity solution providing the highest bandwidth, lowest latency and most efficient energy usage.  To this end, a variety of connection options exist for interconnecting data centers.

When connecting two or more data centers, the specific technology deployed will depend on the application and acceptable trade-offs.  Specific examples of high-speed connection options include MPLS and Optical Waves:

  • As the name implies, Multi-Protocol Label Switching (MPLS) provides a protocol-independent and scalable transport medium. Data traffic can travel end-to-end regardless of whether it’s based on ATM, Frame Relay, SONET, IP or Ethernet.  Many network engineers believe MPLS will eventually replace these technologies.
  • In a colocation model, wholesale carriers provide Optical Waves connections at each site.  Because these connections appear as dedicated fiber cables, customer access to each site’s network infrastructure is fast and seamless.  Optical interfaces provide high-capacity transport within the data center and to other facilities, as well as low latency and reduced energy consumption.
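
Latency between interconnected facilities is ultimately bounded by physics. A back-of-the-envelope estimate, assuming light travels through fiber at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum) and using an assumed route length:

```python
def round_trip_ms(route_km, fiber_speed_km_s=200_000):
    """Theoretical round-trip time over a fiber route, ignoring equipment delay."""
    return 2 * route_km / fiber_speed_km_s * 1_000

# e.g. an assumed 1,500 km metro-to-metro fiber route
print(f"{round_trip_ms(1_500):.1f} ms round trip")
```

Switching, routing and protocol overhead add to this floor, which is why low-latency designs minimize both route distance and the number of hops between facilities.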

Building Multi-Facility Interconnectivity

To get the highest performance, enterprises should be able to mix and match data center facilities. Full interconnectivity ensures reliable applications and global access to resources.

Configuring a customized solution to address disaster recovery and interconnectivity requires customers to complete two steps:

  • Decide on Active-Passive, Active-Active or Combination Configurations. Active-Passive configurations use a recovery site for non-production applications. Active-Active designs use two sites for production applications.  A combination configuration would be Active-Active-Passive.  In this configuration, the first and second sites are used for production applications while the recovery site is used for non-production applications.
  • Select Interconnectivity between and within Metros.  Based on requirements, an organization must determine which data center provider can provide the lowest-cost and highest-speed city-to-city connections.
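
The three configurations above can be modeled as simple site-role mappings; the site names are placeholders:

```python
# Site roles under each configuration described in the text
CONFIGS = {
    "active-passive": {"site_a": "production", "site_b": "recovery"},
    "active-active": {"site_a": "production", "site_b": "production"},
    "active-active-passive": {
        "site_a": "production",
        "site_b": "production",
        "site_c": "recovery",
    },
}

def production_sites(config_name):
    """Sites carrying production applications under a given configuration."""
    return sorted(s for s, role in CONFIGS[config_name].items() if role == "production")

print(production_sites("active-active-passive"))
```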

Multi-facility interconnectivity delivers significant benefits to enterprises. For example, it likely lowers the average cost for point-to-point connectivity between data centers.  It also creates added revenue opportunities by enabling customers to deliver services within other markets.

By selecting a proven colocation provider, enterprises can leverage these important advantages, as well as potentially improve the service quality to their own customers.  When connected to top-tier facilities, service delivery becomes more reliable and resilient.

For more information about data center interconnectivity, visit

How to Evaluate Reliability in a Colocation Provider’s Data Center

As data dependence intensifies, businesses of all sizes need to mitigate the risk from both human-caused and natural disasters.  IT outages hurt every company’s operation and finances, regardless of the industry.

To ensure business continuity, data center best practices should be deployed to deliver 100% availability to data center customers.

Today, businesses expect and depend on immediate communication; expectations revolve around 24/7 electronic access and the constant availability of data.

Ongoing innovations in technology and processes improve data reliability.  But keeping pace with constant technological change is challenging for every organization.  Businesses know they must have seamless communication and always-available data.

Why Colocation Makes Sense

Given the expectation of an always-on professional environment, which data center strategy offers the best solution?

In many scenarios, the colocation concept provides the highest levels of reliability at the most affordable cost.  Regular access to data, even during unexpected outages and natural disasters, could mean the difference between success and failure for many companies.

With fully redundant systems and proven processes, experienced colocation providers can often deal with power outages faster and more cost effectively than most businesses.  But, building and managing a data center with these fully redundant power architectures requires specialized expertise and huge capital expenditures.  Since most organizations aren’t in a position to invest in and maintain a facility of this nature, colocating in a proven provider’s data center makes good business sense.

What Should Drive Data Center Selection

To ensure the right fit between a company and colocation provider, decision makers need to carefully analyze their data center requirements. Businesses need to strike the right balance between an available environment and cost efficiencies.  They want to fully protect their infrastructure, but at the same time, not pay for more than they need.

Some important questions to ask include:

  • How long can the company afford to endure an outage – seconds, minutes, hours, days, weeks?  Each business will have different thresholds. Therefore, objectively answering this question will help a company avoid being under-protected or spending more money than necessary.
  • At what point will a data loss become a substantial problem?  How much data loss, if any, can a company experience without crippling ramifications?  The answer to this question determines to what extent data needs to be protected and available in the event of a disaster.
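
One way to make the outage-tolerance question concrete is to translate availability targets into downtime per year. The targets below are common industry figures, shown for illustration:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability):
    """Expected downtime per year for a given availability target."""
    return (1 - availability) * MINUTES_PER_YEAR

# "three nines" through "five nines"
for target in (0.999, 0.9999, 0.99999):
    print(f"{target:.3%} availability -> {annual_downtime_minutes(target):,.1f} min/yr")
```

A company that can tolerate hours of downtime has very different (and cheaper) requirements than one that can only tolerate minutes, which is exactly why this question should be answered before selecting a provider.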

Evaluation Parameters for Data Center Selection

To ensure reliability, business decision makers must carefully evaluate a data center provider’s capabilities.  Important questions to ask regarding availability include:

  • Is the facility located strategically for the business?  A good location will be easily accessible geographically and provide as much immunity as possible to natural disasters.
  • How protected is the actual facility?  Are advanced security monitoring systems in place?  Does the facility incorporate next-generation fire suppression systems?
  • Are the provider’s services backed by a 100% uptime SLA regardless of power demands?
  • Does the provider use advanced parallel power support architectures?  The data center should be designed with:
    • Separate parallel transformers with separate parallel underground utility feeds
    • Dual power feeds from multiple power distribution units (PDUs) within each enclosure
    • Multiple generators, fuel tanks, and batteries
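
The value of these parallel architectures follows from basic probability: with independent power paths, all of them must fail at once to cause an outage. The per-path availability below is an illustrative assumption:

```python
def combined_availability(path_availability, paths=2):
    """Probability that at least one of N independent power paths is up."""
    return 1 - (1 - path_availability) ** paths

# Two independent feeds, each assumed 99.9% available
print(f"{combined_availability(0.999):.6%}")
```

Real paths can share failure modes (grid events, human error), so the independence assumption is optimistic; it nonetheless shows why redundancy is engineered in parallel rather than in series.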

To ensure a data center is up to the challenge of providing an always-on environment, getting answers to the above questions is a critical part of understanding whether a data center can meet your business expectations.  Otherwise, the solution may not be the best fit.

For more information about data center reliability, visit

Recent Earthquakes Cause Concern for Data Center Customers

This past month alone, the U.S. Geological Survey (USGS) reported 17 “significant” earthquakes around the globe.  The recent magnitude-8.2 earthquake in Chile, the 5.8 in Panama, and the multiple earthquakes in Southern California have people wondering if they’re related and whether activity will increase. Although experts believe the odds are against this spate of events being connected, data center customers remain concerned nonetheless.

However, what all these earthquakes do have in common is that they’re located along the notorious “circum-Pacific seismic belt,” the world’s greatest earthquake belt.  Also called the “Ring of Fire,” this seismic belt runs along the Pacific Ocean from New Zealand to Chile and is where 81% of the world’s biggest earthquakes originate.  In fact, the disastrous 2011 tsunami in Japan resulted from an earthquake in this area.

Concern for Organizations in Earthquake Prone Areas – West Coast

For U.S. companies, the California earthquakes are especially troubling.  The USGS calculates that Southern California experiences over 27 earthquakes each day.  Although most can’t be felt, the fact that they occur at all can be unsettling for both residents and businesses in the area.

The San Andreas Fault has been the earthquake zone getting the most attention over the years.  However, experts think the lesser-known Puente Hills thrust fault could do more damage.  Activity in this fault was responsible for the recent 5.1 earthquake in La Habra, California.  Over 100 aftershocks occurred from northern Orange County to Hollywood.

Given the heavily populated area, a magnitude-7.5 earthquake could be especially catastrophic.  In addition to the tragic loss of life, the USGS estimates a large earthquake in this fault area could cause as much as $250 billion in damage.

Data centers built in seismically active areas often rely on mitigation systems, ranging from equipment-level measures to building-wide solutions such as base isolation.  In essence, these facilities use giant shock absorbers to protect the structure and the equipment inside.

Select a Data Center Outside Earthquake Zones

As can be expected, earthquake mitigation systems can be expensive. The complexity and cost increase significantly from equipment-level systems to building-level solutions.  These costs get passed on to the data center’s customers.

Other factors to consider include how prepared the data center is for a disaster like an earthquake.  Even if the facility houses the most sophisticated mitigation systems, a long-term power outage from an earthquake carries additional risk.  For example, fuel supplies for generators may be in short supply and/or they might have to endure pricing surges.

What’s the best protection from earthquakes and other natural disasters?  The easiest and most economical disaster recovery solution is to select a data center located in a disaster-free market.  Phoenix and San Antonio provide two great examples of sites free from earthquakes, hurricanes, tornadoes and other major natural disasters that could cripple an operation.

For more information about disaster recovery solutions, visit

Do Companies Need Military-Grade Data Center Security?

Maintaining a physically secure data center should be important to every business.  However, many companies may not be able to afford the military-grade security required to adequately protect valuable IT infrastructure.

Incorporating the equipment, personnel, and systems needed for a highly secure company-owned facility or data center environment can be cost prohibitive for many companies.

However, all organizations can benefit from the highest security standards by outsourcing their operations to an experienced data center provider.  Top-tier providers have already made significant investments in physical security.

With the right provider selection, businesses can access a military-grade data center without needing the security expertise or capital outlay required to run their own data center.

Performing a Security Risk Assessment

Concerning physical security, business decision-makers need to analyze two things:

  1. What impact would a physical security breach into the IT environment have on the business?
  2. What is the likelihood a security breach could happen in a particular facility?

With this information, executives and IT professionals can make a determination regarding the type of facility they need in terms of physical security.

The greater the impact of a security breach, the more important it is to house operations in a military-grade facility.
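The two questions above amount to an impact-times-likelihood judgment.  A minimal sketch of how that judgment might be scored is below; the 1-to-5 scales, the thresholds, and the facility tiers are all illustrative assumptions, not an industry standard.

```python
# Hypothetical scoring of the two-question risk assessment above.
# Impact and likelihood are each rated 1 (low) to 5 (high); the
# thresholds and tier names below are illustrative assumptions.

def security_risk_score(impact: int, likelihood: int) -> int:
    """Combine breach impact and likelihood into a single score."""
    return impact * likelihood

def recommended_facility(impact: int, likelihood: int) -> str:
    score = security_risk_score(impact, likelihood)
    # A severe-impact breach warrants top-tier security even when
    # the likelihood is judged moderate.
    if score >= 15 or impact >= 4:
        return "military-grade facility"
    if score >= 8:
        return "hardened commercial facility"
    return "standard commercial facility"

print(recommended_facility(impact=5, likelihood=2))  # military-grade facility
```

The key design choice, reflecting the point above, is that impact dominates: a high-impact breach justifies a military-grade facility regardless of how unlikely it seems.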

Identifying Important Physical Security Features in an Outsourced Data Center

Selecting a secure facility involves several principles.  At a minimum, most companies need the following features:

  • Entry Points – Access to the facility must be limited and strictly controlled.  Perimeter fencing is critical to reduce traffic around the building and ensure security.  Also, visitors should enter through a single location, and deliveries need to be pre-announced and directed to a loading bay.
  • Man-Traps – To eliminate unauthorized visitors, data centers must implement man-traps to monitor entry and exit.  Man-traps prevent “tailgating,” in which someone follows another person through a door before it closes.
  • Internal Facility Access – Data center providers control who is permitted in each area of the data center.  Biometric access on doors, for example, allows only authorized personnel to enter specific areas.  Access should also be designed in a layered fashion: the deeper a person goes into the data center, the more checkpoints he or she must pass through.
  • Cameras – For the best physical security, video surveillance is required around the outside perimeter of the facility, including all entry/exit locations, as well as throughout the inside of the building.  Video footage should be digitally stored and be easily accessed when needed.
  • Door Alarms – All doors, including fire exits, must be alarmed.  The facility provider needs to know when doors are opened or left open for an extended period of time.
  • Upgraded Door Locks – Sometimes companies upgrade older facilities with more advanced locks.  However, they must also reconfigure doors so hinges are located on the inside to avoid having the hinge pins easily removed.
  • Parking Lots – Just as the building must be secured, so should access to the parking lots.  Entry is often controlled using gates, concrete bollards, perimeter fencing and security personnel to identify authorized access.
  • Testing Protocols – How often and to what extent does the provider test its physical security systems?  Video surveillance, alarms, access systems and security procedures need to be audited regularly.
  • Security Personnel – Does the provider hire contract security or use permanent security staff?  Although contract staff can offer some benefits, permanent staff allows a data center provider to know its security personnel on a deeper level.  Permanent staff also tends to know the company better in terms of the site, processes and people.  Whether contract or permanent, security personnel must be onsite around the clock, every day of the year.

These items provide a starting point for developing a comprehensive security checklist.  At the end of the day, most companies require military-grade security to protect their mission-critical IT infrastructure.  Losing data, losing systems access, or having key applications go down is usually not acceptable for most businesses.

For more information about military-grade data center security, visit

The Cost and Control Concerns in Data Center Outsourcing: How Colocation Provides Value

Moving to a colocation strategy sometimes creates angst among IT professionals.  In many cases, they’re concerned with giving up control of the data center infrastructure.  And in other situations, IT managers need to analyze whether colocation will bring down costs and mitigate downtime risk, as well as provide additional strategic benefits.

In recent years, numerous companies, especially emerging and medium-sized businesses (EMBs), have moved to colocation.  Several factors are driving this colocation trend.

The “Great Recession” created a difficult environment for companies to invest in their own data center facilities.  With money tight and increased market uncertainty, businesses have been choosing to lease colocation space instead of building, owning and operating a data center. Gartner Inc., a respected industry analyst, does not believe this trend will subside unless capital markets improve significantly.

The shifting priorities of CIOs also contribute to the attractiveness of colocation.  In a Gartner survey of 2,014 CIOs across all industries and geographic locations, the top two priorities in 2008 were improving business processes and attracting new customers.  However, the focus for 2014 is on growing the business and improving operations.

Although attracting new customers and growing the business seem synonymous, the Gartner study provides insight into why there’s a distinction.  According to the CIO participants, business continuity and risk management have become more than insurance policies.  A resilient environment contributes to business growth – which is the number one priority for today’s CIOs.

Why Companies Select a Colocation Strategy

Colocation helps accomplish an organization’s goals in an economical, low-risk manner.  A few of these advantages include:

  • Diverse Sites – Rather than have a company’s entire data center operation in one location, CIOs can better protect the business by relocating some or all of its IT infrastructure to a colocation site.

An off-site location can provide a different power grid and road access.  If power or access issues develop at the corporate headquarters, chances are the off-site data center operation won’t be affected.

With advanced facility design and leading-edge equipment, proven colocation providers offer high levels of redundancy and reliability.  They also staff their facility around-the-clock with IT experts who monitor and manage the facility.  Most EMBs find it challenging to fund a similar data center operation.

  • Network Connectivity – A good colocation provider will have access to multiple network service provider backbones.  They’ve developed strong carrier relationships and forged agreements most EMBs would find difficult to secure on their own.

However, the best colocation providers are carrier-neutral.  In other words, they don’t lock companies into any carrier for network connectivity.  Instead, companies can decide which carriers will best meet their requirements.  A colocation provider creates a market environment in which their customers can buy bandwidth.  Therefore, if one carrier experiences an outage, the colocation provider has other carrier networks available to provide Internet connectivity.

  • Cost Economies – Creating a high-availability environment is cost prohibitive for many EMBs.  Fortunately, data center outsourcing provides dedicated space, power and bandwidth in an economical solution.  Rather than incur significant capital outlays for building a facility, companies turn to colocation for a low, monthly operating expense.

This monthly fee typically includes three things:  floor space, power and bandwidth.  Colocation providers usually offer additional services, including monitoring and “remote hands” services for an added fee.

  • Multi-Purpose Space – Not only does colocation provide a sound disaster recovery strategy, but it also provides a testing environment.  When companies launch a new application or modify components within an infrastructure, they need to first test performance before moving to a production environment.

Colocation space can be configured to emulate multiple environments.  It will help determine how resilient the application or infrastructure will be under various conditions.

Businesses have a choice when acquiring data center space.  They can build a facility they own, operate and manage.  They can lease a facility and retrofit it to their requirements.  Or, they can move their infrastructure to a colocation provider’s data center facility and lease space on a monthly basis.

According to Gartner, colocation is usually the most cost-effective strategy for data center footprints of 3,000 square feet and below.  The research firm also believes the colocation market is maturing, with the U.S., Canada and Western European markets offering the most stability.
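A back-of-the-envelope model helps show why a square-footage threshold like Gartner's exists: building carries fixed overhead (staffing, security, facility management) that a small footprint cannot amortize, while colocation prices scale roughly with the space leased.  Every rate in the sketch below is a hypothetical placeholder, not a market price or a Gartner figure.

```python
# Hypothetical build-vs-colocate cost comparison.  All rates are
# illustrative assumptions chosen to show the shape of the trade-off,
# not quoted prices.

BUILD_COST_PER_SQFT = 3_000     # capital cost to build, $/sq ft
FACILITY_LIFE_YEARS = 25        # useful life over which capital is amortized
BUILD_OPEX_PER_SQFT = 150       # annual operating cost, $/sq ft
BUILD_FIXED_ANNUAL = 1_500_000  # staffing/security overhead, paid at any size
COLO_LEASE_PER_SQFT = 500       # all-in annual colocation fee, $/sq ft

def annual_build_cost(sqft: float) -> float:
    """Fixed overhead plus amortized capital and opex per square foot."""
    per_sqft = BUILD_COST_PER_SQFT / FACILITY_LIFE_YEARS + BUILD_OPEX_PER_SQFT
    return BUILD_FIXED_ANNUAL + sqft * per_sqft

def annual_colo_cost(sqft: float) -> float:
    return sqft * COLO_LEASE_PER_SQFT

for sqft in (1_000, 3_000, 10_000):
    build, colo = annual_build_cost(sqft), annual_colo_cost(sqft)
    winner = "colocate" if colo < build else "build"
    print(f"{sqft:>6,} sq ft: build ${build:>9,.0f}/yr  colo ${colo:>9,.0f}/yr -> {winner}")
```

With these assumed rates, colocation wins at small footprints because the fixed overhead of ownership dwarfs the lease premium; only at larger footprints does owning begin to pay off, which is the intuition behind a threshold in the low thousands of square feet.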

The colocation trend continues to grow for good reason.  CIOs and their companies get the best of both worlds:  they gain access to an off-site facility without giving up control over managing their infrastructure.  And, colocation frees up budget, eliminates facility management issues and helps reduce the risk of costly downtime.  And that’s a tough value proposition to beat.

For more information on colocation services, visit

The Data Center in Your Closet: How Risks of Power Loss Impact Business Operations

All companies, regardless of size, rely on power to run their business-critical operations. As a result, data center availability has evolved into a top requirement.  Any amount of downtime can be devastating for emerging and medium-sized businesses (EMBs).

Just one short downtime event can impact everything from revenues, profitability, reputation, to even a business’ viability.  However, many company executives and IT professionals don’t truly understand how their current infrastructure may be putting the business at risk, as well as the ramifications and associated costs of a power loss.

Downtime Misconceptions Abound Among Internal Staff

Ponemon Institute, an independent research firm focused on privacy, data protection and information security issues, conducted a study involving more than 400 data center professionals.  Relative to downtime impact, survey results found some troubling misconceptions among the study’s respondents.

For example, participants indicated their company experienced an average of two downtime events within a two-year period.  On average, each downtime period lasted 120 minutes.  Yet, a disconnect between reality and perception seems to exist. An astonishing 62% of C-level respondents said unplanned outages didn’t happen frequently versus 41% of IT staff.
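The survey figures quoted above translate directly into an availability percentage, which makes the scale of the problem easier to see.  A quick calculation using only the numbers from the study:

```python
# Availability implied by the Ponemon figures above: two outages
# averaging 120 minutes each over a two-year window.

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

events, avg_minutes, years = 2, 120, 2
downtime_minutes = events * avg_minutes            # 240 minutes
availability = 1 - downtime_minutes / (years * MINUTES_PER_YEAR)

print(f"Availability: {availability:.4%}")  # prints roughly 99.977%
```

Four hours of downtime across two years still leaves a company short of the "four nines" (99.99%) availability that outage-sensitive businesses typically target.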

Another surprising finding in this study involves the use of best practices.  Of those responding, less than 32% believed their company employs best practices to maximize availability.  In addition, 71% of senior-level respondents compared to 58% of IT staff acknowledge the company’s dependence on data center performance to generate revenue.

These findings create a dilemma for many companies, especially EMBs.  Senior executives acknowledge the critical role data center operations play in fulfilling the company’s goals.  However, many execs don’t realize how much downtime their company is actually experiencing, nor the real impact it’s having on the business.  And, a fairly high percentage of internal staff is doing little in terms of developing best practices to circumvent downtime.

Factors Contributing to Downtime Risk

The common misconceptions among senior executives and IT staff can contribute to costly downtime.  Therefore, the first order for any business is to fully understand the current state of affairs concerning systems availability.

How many downtime events has the business really experienced in the past couple of years?  What was the average downtime of these events? Can failures be attributed to equipment issues, operating procedures, security breaches, unplanned disasters or a combination of these things?  What best practices can the company develop to boost availability?  With the answers to these questions, EMBs can begin to develop a better data center strategy.

To provide a foundation for improved availability, companies must invest in the most advanced infrastructure technologies.  However, from a time, expertise and budget standpoint, maintaining leading edge technology proves challenging for even the largest enterprises.  EMBs find it especially difficult to keep up on the latest technological advances to minimize downtime.

Also, having in-house staff with the required operational, facility management and security expertise proves difficult for most EMBs.  These experts can develop the necessary best practices to enhance infrastructure availability.  Unfortunately, these resources are expensive and can be hard to find.

How Downtime Cripples EMBs

Over the past decade, data center systems have played a critical role in generating revenue and growing a business.  With all the company operations dependent on power, any downtime can be devastating.  In fact, just a few minutes of downtime can harm an EMB more than most company executives even imagine.

For example, the Ponemon Institute study found that, on average, downtime costs businesses thousands of dollars per minute.  And the average cost of a single downtime event was approximately $500,000.  Once EMB executives consider all the required activities dependent on power, the huge costs become more apparent.

Here’s a list of some of the ways a power outage will detrimentally affect operations and increase costs dramatically:

  • Email – For years, email has been a productivity enhancer.  However, during power outages, Internet access and the applications dependent on it become unavailable.
  • Phones – Most businesses are dead in the water without phone communication.  Yet, when the power goes out, so does the phone system.  The whole business becomes incommunicado.
  • Order Processing – If phone and email communication is unavailable, order processing comes to a grinding halt.  Will customers simply be frustrated by a company’s lack of availability?  Or, will they place their order with a competitor instead?
  • Mission-Critical Applications – If there’s no power, there’s no access to applications used to run the business.  Today, companies depend on certain software applications to operate every aspect of the business. Without access to ERP systems and other mission-critical applications, the company ceases to function.
  • Data Loss or Corruption – Data is the lifeblood of any organization.  Whether it’s a customer database, financial records or a list of sales prospects, any loss can devastate business operations.  Protecting data against security breaches and unplanned outages requires sound disaster recovery strategies.
  • Equipment Damage – Whether faulty equipment caused an outage or an outage damaged the equipment, companies will incur a substantial expense for the replacements.
  • Legal Ramifications – Depending on the industry, some companies may be faced with legal consequences for downtime.  Regulatory fines can add significantly to the overall cost of a power outage.
  • Company Reputation – If customers can’t get through to a business, they become frustrated and question whether the company “has its act together.”  Downtime can cause companies to re-evaluate the relationship and possibly look at competitive offerings.

Fortifying an Infrastructure through Colocation

The in-house data center situation for many EMBs has created numerous difficulties.  The lack of understanding regarding downtime frequency and consequences results in serious business vulnerabilities.  In addition, inadequate technology and staffing expertise adds fuel to the fire.

However, colocation provides a viable solution for EMBs in this situation.  It can provide a speedy ROI on IT infrastructure investments while providing added protection from downtime.  Because it’s their core business, experienced colocation providers:

  • Make regular investments in advanced technologies
  • Have infrastructure and security experts available 24/7 to monitor and manage data center operations
  • Incorporate appropriate levels of redundancy to ensure 100% availability
  • Offer disaster recovery solutions for unplanned events
  • Construct physical facilities to the highest operating standards
  • Develop operational procedures to ensure an always-on environment

As outlined above, just one downtime event can impact a business’ profitability in a huge way.  If EMBs operate inadequate data centers, they risk recurring downtime.  And, these downtime events can result in massive financial and productivity losses, as well as damage to a company’s reputation.

For more information on colocation services, visit

How Colocation Services Help Companies Deal with Trends

CIOs face many challenges created by IT trends.  Companies must tackle and overcome these obstacles to remain competitive. Certain trends have a profound impact on the IT infrastructure models an organization chooses to pursue.

When it comes to data center space, companies can build or lease.  Building requires purchasing land and constructing a new data center.  A company can also buy an existing property and retrofit it to specific requirements.

Leasing (or colocation) offers a simpler, more economical solution than building a new space.  Customers relocate their infrastructure to a turnkey data center in which the service provider takes full responsibility for facility management.

Trends Impacting Data Center Space

Although CIOs must deal with many issues, four trends directly affect the decisions companies make regarding data center space.  Success depends on CIOs understanding these trends and developing sound strategies to address them.

The top concerns facing CIOs involve:

  • Staffing – A shortage of both personnel resources and appropriate skill sets.  An aging workforce doesn’t help the situation and hits IT especially hard.  Plus, many companies were burned by the “Great Recession” and are reluctant to ramp up hiring again.
  • Security – Cyber attacks and theft of proprietary information continue to plague companies of all sizes and in every industry.  However, keeping up with the evolving security landscape challenges even the most capable IT organizations.  Most companies don’t have the time, resources or capacity to deal adequately with today’s security threats.
  • Expansion – As the economy picks up steam, businesses must be well positioned to leverage growth opportunities and new innovations.  Therefore, supporting and expanding the core business remains a top priority for CIOs.
  • Consolidation – As cost pressure continues, companies will look for more ways to consolidate data center operations, especially if facilities need upgrading.  At a minimum, data centers built just ten years ago usually require costly power and cooling upgrades.  Consolidating operations of aging data centers into more efficient spaces can help lower both capital and operating costs.

How Colocation Addresses Trends

Considering the current demands being placed on CIOs, colocation provides an economical, flexible option.  It’s a logical migration step when companies face resource, infrastructure, and/or operational constraints.

In fact, IT leaders who participated in Vanson Bourne’s January 2014 Global IT Trends study indicated colocation services are the most popular outsourced infrastructure model today.

The report also concluded a majority of these IT leaders plan to shift more workloads to colocation over the next two years.

Here’s how colocation helps companies deal with the four major IT trends of 2014:

  • Data center outsourcing frees up staffing resources – Companies don’t have to worry about having in-house staff to monitor data center operations, including power, cooling and availability.  All facility responsibilities shift to the service provider.  In-house IT staff can then focus their efforts on more strategic projects that accomplish corporate goals.
  • Data center outsourcing provides access to secure, state-of-the-art facilities – Evolving security threats can consume in-house IT staff.  Often, companies don’t have the internal resources to focus full-time on maintaining security.

The best colocation data centers have the highest levels of physical and network security in place. Because the security infrastructure is shared across multiple clients, data center providers can offer advanced security capabilities at a much lower cost.

  • Data center outsourcing enables business expansion – In a colocation model, companies aren’t restricted to a single location.  Flexibility and scalability have long been mainstays of colocation. As a business grows, the data center provider’s solutions keep pace.

When business needs change, colocation allows companies to adjust quickly.  An agile environment can meet short- and long-term demands while simultaneously controlling costs.

  • Data center outsourcing enables companies to consolidate space – Centralizing operations saves money and improves efficiencies.  For example, inefficient energy consumption can create a huge expense for company-owned data centers.  Consolidating space into a more efficient multi-tenant data center can reduce power and cooling costs.

In addition, companies don’t have to worry about investing in and deploying the latest technologies.  The best data center service providers will operate facilities equipped with recent technological advances.

Creating flexible, agile environments is the best way to address continued data center complexities fueled by major IT trends.  With its long history of enabling businesses to evolve with the times, colocation provides companies with needed flexibility for whatever the future holds.

For more information on colocation services, visit

Need more information? (855) 564-3198 Contact Us