Does Facebook's Open Compute Project Open Holes in Your Data Centre Strategy?
In 2010, Facebook started a project to build one of the most cost- and energy-efficient computing facilities in the world. One year later, it is set to open a new data centre in Prineville, Oregon that is 38% more efficient and 24% less expensive to build and run than other state-of-the-art data centres. At the same time, Facebook launched the Open Compute Project, releasing the technical specifications and design drawings for the components of the new facility. Facebook places very specific demands on its technology infrastructure. Nonetheless, as other organizations evolve their own business and technology architectures to optimize costs, improve service delivery, or support expansion into new markets, does the Open Compute Project offer any useful lessons?
What is Facebook Trying to Accomplish?
Facebook has more than 600 million active users, 50% of whom log in each day. In an average month, users spend more than 11.6 billion hours on the site and share more than 30 billion pieces of new content (web links, news stories, blog posts, notes, photo albums, and so on). Although few companies operate at this scale, the problems Facebook sought to address with its new data centre are likely familiar to most large enterprises that deploy complex computing technology. Key priorities include:
- Minimizing real estate and energy footprints. Data centres that support the 24x7 demands of 21st-century business have evolved into sophisticated (and expensive) facilities. No organization wants to spend more than necessary on real estate to house IT infrastructure. Moreover, the lifecycle cost of a $1,500 server can reach $8,000 once the supporting power and air-conditioning infrastructure is taken into account (a rough cost model is sketched after this list). Without careful management of facility and energy utilization, cost efficiency can suffer.
- Creating an agile, service-based IT infrastructure. Cloud computing represents an evolution in the way IT services are delivered, and one of its most significant benefits is the ability to test and validate (or dismiss) new scenarios efficiently. Facebook’s data centre uses the same sort of automation as other cloud providers, allowing infrastructure to be reconfigured dynamically as resource demands change.
- Enabling efficient business growth. Focusing on process automation, minimizing invested capital, and streamlining operations allow more resources to be dedicated to the sorts of innovation that drive improvements in operating performance.
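To make the server-cost point above concrete, the sketch below works through a rough lifecycle-cost calculation. None of the parameter values (server wattage, electricity price, service life, or the per-watt facility build-out cost) come from the Open Compute documentation; they are illustrative assumptions chosen only to show how power and cooling overheads can push a $1,500 server toward the $8,000 figure cited above.

```python
# Rough server lifecycle-cost model. All figures below are illustrative
# assumptions, not values taken from the Open Compute documentation.

def server_lifecycle_cost(server_price=1500.0,            # purchase price ($)
                          server_watts=300.0,             # average power draw of one server
                          pue=1.9,                        # facility power usage effectiveness
                          kwh_price=0.10,                 # electricity price ($ per kWh)
                          years=4,                        # assumed service life
                          facility_capex_per_watt=15.0):  # build-out cost allocated per watt ($)
    hours = years * 365 * 24
    # Energy drawn at the wall includes cooling and distribution overhead (PUE).
    energy_kwh = server_watts * pue * hours / 1000.0
    energy_cost = energy_kwh * kwh_price
    # Share of power/cooling construction cost attributable to this one server.
    facility_capex = server_watts * facility_capex_per_watt
    return server_price + energy_cost + facility_capex

print(f"Approximate lifecycle cost: ${server_lifecycle_cost():,.0f}")
```

With these assumed figures the total lands near $8,000, most of it driven not by the server itself but by the energy and facility overhead that efficient data centre design aims to reduce.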
What are the Implications for Other Businesses?
Facebook’s release of its data centre schematics alters the data centre landscape, whether you are building your own data centre or leasing space in a co-location facility. If you are committed to building your own data centre, you can work with a data-centre engineering firm to incorporate elements of Facebook’s facility design: all of the drawings and specification documents for the electrical, mechanical, rack, and battery components are available on the Open Compute site. Although incorporating the ideas espoused on the Open Compute site can help you create a highly efficient data centre, Facebook’s use of special-purpose, custom-fabricated servers (which are unlikely to satisfy more general computing requirements) may make matching its stated power usage effectiveness (PUE) of 1.07 challenging. Nonetheless, with non-compute infrastructure such as cooling and power management often consuming more than twice as much energy as the computers themselves, simply building an efficient power distribution and cooling system can yield significant cost savings.
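PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, so a PUE of 1.07 means only 7% overhead for cooling and power distribution. The short sketch below compares annual energy spend at Facebook’s reported 1.07 against a more conventional facility; the IT load, electricity price, and the 1.9 "typical" PUE are assumptions for illustration only, not figures from the Open Compute documentation.

```python
# PUE = total facility energy / IT equipment energy.
# Compare annual energy spend at two PUE values. The IT load, electricity
# price, and the "typical" PUE of 1.9 are illustrative assumptions.

IT_LOAD_KW = 1000.0       # assumed average IT (server) load
KWH_PRICE = 0.10          # assumed electricity price ($ per kWh)
HOURS_PER_YEAR = 8760

def annual_energy_cost(pue):
    total_kw = IT_LOAD_KW * pue   # IT load plus cooling/distribution overhead
    return total_kw * HOURS_PER_YEAR * KWH_PRICE

for label, pue in [("Facebook Prineville (reported)", 1.07), ("typical facility (assumed)", 1.9)]:
    print(f"{label}: PUE {pue} -> ${annual_energy_cost(pue):,.0f} per year")
```

Under these assumptions, every megawatt of IT load in a PUE 1.9 facility costs roughly $700,000 more per year in energy than the same load at PUE 1.07, which is why the non-compute side of the design matters so much.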
If, instead of operating its own data centre, an organization chooses to lease space in co-location facilities, the Open Compute Project may still have benefits to offer. Many co-location facilities were not built with efficiency as a first priority, and as a result enormous amounts of power are wasted: Scott Noteboom, Yahoo’s Vice President of Data Center Engineering and Operations, has estimated that the waste can reach as high as 60%. Of course, co-location customers and suppliers absorb the cost of this inefficiency.
Google uses similar principles to those described in the Open Compute project documentation to design and build its own data centres. As a result, Google-designed data centres use about half the energy of a typical data centre.
If your organization faces significant co-location expenses, it might be time to examine contracts and engage in a strategic sourcing initiative to find more power-efficient facilities. Alternatively, the cost of building your own data centre could be worth a more serious look.
Conclusions
Business today depends on technology as never before to drive transformation, productivity, and global operations, and technology initiatives that are not linked to specific, measurable business goals risk falling short of their objectives. Facebook’s contribution of its data centre designs to the IT community helps advance the state of the art and offers pragmatic solutions to the challenge of PUE in data centre design.