History of data centers
Data centers date back to the 1940s. The US military's Electronic Numerical Integrator and Computer (ENIAC), completed in 1945 at the University of Pennsylvania, is an early example: the massive machine required dedicated space to house it.
Over the following decades, computers became smaller, requiring less physical space. In the 1990s, microcomputers began filling old mainframe computer rooms, drastically reducing the footprint of IT operations. These microcomputers became known as “servers,” and the rooms became known as “data centers.”
The advent of cloud computing in the early 2000s significantly disrupted the traditional data center landscape. Cloud services allow organizations to access computing resources on-demand, over the internet, with pay-per-use pricing—enabling the flexibility to scale up or down as needed.
In 2006, Google launched the first hyperscale data center in The Dalles, Oregon. This hyperscale facility currently occupies 1.3 million square feet of space and employs a staff of approximately 200 data center operators.1
A study from McKinsey & Company projects the industry to grow at 10% a year through 2030, with global spending on the construction of new facilities reaching USD 49 billion.2

Types of data centers
There are different types of data center facilities, and a single company might use more than one type, depending on workloads and business needs.
Enterprise (on-premises) data centers
This data center model hosts all IT infrastructure and data on-premises. Many companies choose on-premises data centers because they retain more control over information security and can more easily comply with regulations such as the European Union General Data Protection Regulation (GDPR) or the US Health Insurance Portability and Accountability Act (HIPAA). In an enterprise data center, the company is responsible for all deployment, monitoring and management tasks.
Public cloud data centers and hyperscale data centers
Cloud data centers (also called cloud computing data centers) house IT infrastructure resources for shared use by multiple customers—from scores to millions—through an internet connection.
Many of the largest cloud data centers—called hyperscale data centers—are run by major cloud service providers (CSPs), such as Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud and Microsoft Azure. These companies have major data centers in every region of the world. For example, IBM operates over 60 IBM Cloud Data Centers in various locations around the world.
Hyperscale data centers are larger than traditional data centers and can cover millions of square feet. They typically contain at least 5,000 servers and miles of connection equipment, and individual facilities often span 60,000 square feet or more.
Cloud service providers typically maintain smaller edge data centers (EDCs) located closer to cloud customers (and cloud customers’ customers). Edge data centers form the foundation for edge computing, a distributed computing framework that brings applications closer to end users. Edge data centers are ideal for real-time, data-intensive workloads like big data analytics, artificial intelligence (AI), machine learning (ML) and content delivery. They help minimize latency, improving overall application performance and customer experience.
Managed data centers and colocation facilities
Managed data centers and colocation facilities are options for organizations that lack the space, staff or expertise to manage IT infrastructure on-premises but prefer not to host that infrastructure on the shared resources of a public cloud data center.
In a managed data center, the client company leases dedicated servers, storage and networking hardware from the data center provider, and the provider handles administration, monitoring and management on the client company's behalf.
In a colocation facility, the client company owns all the infrastructure and leases dedicated space to host it within the facility. In the traditional colocation model, the client has sole access to the hardware and full responsibility for managing it, which is ideal for privacy and security but often impractical, particularly during outages or emergencies, when the client's own staff must travel to the facility to respond. Today, most colocation providers offer management and monitoring services to clients who want them.
Small and midsized businesses (SMBs) often choose managed data centers and colocation facilities to house remote data backup and disaster recovery (DR) technology.

Modern data center architecture
Most modern data centers, including on-premises enterprise data centers, have evolved from the traditional IT architecture in which each application or workload runs on its own dedicated hardware. They instead use a cloud architecture in which physical resources such as CPUs, storage and networking are virtualized. Virtualization abstracts these resources from their physical limits and pools them into capacity that can be allocated across multiple applications and workloads in whatever quantities they require.
Virtualization also enables software-defined infrastructure (SDI)—infrastructure that can be provisioned, configured, run, maintained and "spun down" programmatically without human intervention.
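As a rough sketch of what "programmatic" provisioning means in practice, the Python example below pools capacity and allocates it to workloads entirely through software calls. The ResourcePool class, its methods and the capacity figures are hypothetical, invented here for illustration; they do not correspond to any real data center or cloud provider API.

```python
from dataclasses import dataclass


@dataclass
class Allocation:
    """A slice of pooled capacity assigned to one workload (hypothetical)."""
    workload: str
    vcpus: int
    storage_gb: int


class ResourcePool:
    """Hypothetical pool of virtualized capacity abstracted from physical hosts."""

    def __init__(self, total_vcpus: int, total_storage_gb: int):
        self.free_vcpus = total_vcpus
        self.free_storage_gb = total_storage_gb
        self.allocations: dict[str, Allocation] = {}

    def provision(self, workload: str, vcpus: int, storage_gb: int) -> Allocation:
        """Allocate capacity to a workload programmatically, with no human intervention."""
        if vcpus > self.free_vcpus or storage_gb > self.free_storage_gb:
            raise RuntimeError(f"insufficient pooled capacity for {workload!r}")
        self.free_vcpus -= vcpus
        self.free_storage_gb -= storage_gb
        allocation = Allocation(workload, vcpus, storage_gb)
        self.allocations[workload] = allocation
        return allocation

    def spin_down(self, workload: str) -> None:
        """Release a workload's capacity back into the shared pool for reuse."""
        allocation = self.allocations.pop(workload)
        self.free_vcpus += allocation.vcpus
        self.free_storage_gb += allocation.storage_gb


# Pool the capacity of (say) two physical hosts and share it across workloads.
pool = ResourcePool(total_vcpus=128, total_storage_gb=4096)
pool.provision("analytics", vcpus=32, storage_gb=1024)
pool.provision("web-frontend", vcpus=8, storage_gb=128)
pool.spin_down("analytics")  # capacity returns to the pool, ready for the next workload
```

In a real SDI stack, this same request-allocate-release loop runs through hypervisor and infrastructure-as-code APIs; the point of the sketch is that capacity moves between workloads through software calls rather than physical changes to hardware.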
Virtualization has also given rise to new data center architectures such as the software-defined data center (SDDC), a server management approach that virtualizes infrastructure elements such as networking, storage and compute and delivers them as a service. This capability allows organizations to optimize infrastructure for each application and workload without making physical changes, which can help improve performance and control costs. As-a-service data center models are poised to become more prevalent, with IDC forecasting that 65% of tech buyers will prioritize these models by 2026.
