A brief history of data centres

Today’s world of modern technology allows us to make countless digital interactions, and we take connectivity between people, places and things for granted. Having access to the internet 24/7, being able to work remotely, even making a bank transaction at the drop of a hat, are all part of modern life.

But do we ever think about the technology that allows all this to happen? Quite simply, probably only when things go wrong and we can’t access what we want, when we want it!

Sixty years ago, data centres didn’t even exist; instead there were mainframes. In today’s terminology, most computer manufacturers would refer to a mainframe as a server. Back then, though, mainframes were huge and costly. With no network connectivity, these machines were standalone and took hours to process information.

In the 1970s and 1980s, desktop computers began to emerge and became more common in offices as technology progressed. During this time, mainframe technology concentrated primarily on reliability rather than processing power and efficiency. Because of their complexity, size and running costs, many companies could not afford them.

As this era evolved, microcomputers (https://www.britannica.com/technology/microcomputer), which we would now call servers, started to take over the space in mainframe computer rooms, and those rooms started to be called data centres. As more microcomputers were brought on board, they were installed in rows, banked along the walls of the data centre room. This was the foundation of a functional computer network consisting of many microcomputers.

As the internet matured and the need to store data grew, investment in IT became a necessity. Internet hosting companies and providers started to build huge facilities called Internet Data Centres, and from there the ideas of colocation and external data centres became more common and ultimately a business requirement.

The resulting Internet Data Centres, housing hundreds or even thousands of servers, became the most common solution for companies to adopt. These huge facilities brought an unprecedented increase in demand for computing space, and with it a huge rise in power costs. For instance, in 2002 data centres reportedly accounted for 1.5% of power consumption in the U.S. (https://www.energy.gov/eere/buildings/data-centers-and-servers), with an expected increase of 10% every year.

With this problem becoming more and more pressing, hardware makers started to concentrate on more power-efficient components to help reduce the need for extensive data centre cooling. As well as hardware manufacturers focusing on reducing energy costs, data centre owners also started to redesign their facilities and adopt new methods of cooling and airflow to become more efficient. By the end of the noughties, large data centres were turning to renewable energy to help reduce costs and become more environmentally friendly.

As we move into the next decade, we are seeing a shift from an infrastructure, hardware and ownership model towards a subscription and capacity-on-demand model. Today’s data centres need to be able to support and match application demands, especially through the cloud.

Technology doesn’t stand still, but even with new trends such as cloud computing, the Internet of Things and the emerging field of cyber-physical systems, data centre providers such as Rackspace and Virtus will, for the foreseeable future, be at the heart of the digital world.