Modern data centers are very different from what they were just a short time ago. Infrastructure has shifted from traditional on-premises physical servers to virtual networks that support applications and workloads across pools of physical infrastructure and into multi-cloud environments.
In this era, data exists and is connected across multiple data centers, the edge, and public and private clouds. The data center must be able to communicate across these multiple sites, both on-premises and in the cloud. Even the public cloud is a collection of data centers. When applications are hosted in the cloud, they are using data center resources from the cloud provider.
What are the standards for data center infrastructure?
The most widely adopted standard for data center design and data center infrastructure is ANSI/TIA-942. It includes standards for ANSI/TIA-942-ready certification, which certifies compliance with one of four data center tier categories, each rated by its level of redundancy and fault tolerance.
Tier 1: Basic site infrastructure. A Tier 1 data center offers limited protection against physical events. It has single-capacity components and a single, non-redundant distribution path.
Tier 2: Redundant-capacity component site infrastructure. This data center offers improved protection against physical events. It has redundant-capacity components and a single, non-redundant distribution path.
Tier 3: Concurrently maintainable site infrastructure. This data center protects against virtually all physical events, providing redundant-capacity components and multiple independent distribution paths. Each component can be removed or replaced without disrupting services to end-users.
Tier 4: Fault-tolerant site infrastructure. This data center provides the highest levels of fault tolerance and redundancy. Redundant-capacity components and multiple independent distribution paths enable concurrent maintainability, and the installation can withstand a single fault anywhere without incurring downtime.
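The effect of redundant distribution paths on uptime can be illustrated with a simple reliability calculation. This is a minimal sketch: the 99.9% component availability figure and the independent-failure model are illustrative assumptions, not values from the ANSI/TIA-942 standard.

```python
# Sketch: how redundant, independent distribution paths improve availability.
# Assumes each path fails independently with availability 'a' (illustrative).

def parallel_availability(a: float, n: int) -> float:
    """Availability of n redundant paths in parallel:
    the system is down only if all n paths fail at the same time."""
    return 1 - (1 - a) ** n

# One non-redundant path (as in a Tier 1 design) vs.
# two independent paths (as in Tier 3/4 designs):
single_path = parallel_availability(0.999, 1)
dual_path = parallel_availability(0.999, 2)

print(f"single path availability: {single_path:.6f}")  # 0.999000
print(f"dual path availability:   {dual_path:.6f}")    # 0.999999
```

Under these assumptions, adding one independent path reduces the probability of a total outage from 1 in 1,000 to 1 in 1,000,000, which is why the higher tiers require multiple distribution paths rather than simply larger single components.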
Infrastructure evolution: From mainframes to cloud applications
Computing infrastructure has experienced three macro waves of evolution over the last 65 years:
The first wave saw the shift from proprietary mainframes to x86-based servers, hosted on-premises and managed by internal IT teams.
A second wave saw widespread virtualization of the infrastructure that supported applications. This allowed for improved use of resources and mobility of workloads across pools of physical infrastructure.
The third wave brings us to the present, with the move to cloud, hybrid cloud, and cloud-native applications, that is, applications born in the cloud.
Distributed networks of applications
This evolution has given rise to distributed computing, in which data and applications are distributed among disparate systems, connected and integrated by network services and interoperability standards so they function as a single environment. As a result, the term "data center" is now often used to refer to the department responsible for these systems, irrespective of where they are located.
Organizations can choose to build and maintain their own hybrid cloud data centers, lease space within colocation facilities (colos), consume shared compute and storage services, or use public cloud-based services. The net effect is that applications today no longer reside in just one place. They operate in multiple public and private clouds, managed offerings, and traditional environments. In this multi-cloud era, the data center has become vast and complex, geared to drive the ultimate user experience.