Open Compute Project

Offering Choices to Address Growing Data Center Needs

Thursday, June 01, 2017 · 3:09 AM

Earlier this week we held our leadership sprint with the community’s project leaders and Incubation Committee (IC). I presented on the value and benefits of the Carrier Grade OpenRack-19 (CG-OpenRack-19) architecture, and the community was quick to point out that it is also an ideal fit for cloud data centers where compatibility with 19” EIA-310 is desirable. They asked that I share my thoughts on this architecture, so here we go!

Let’s start with a little history. When the Open Compute Project kicked off in 2011, Facebook open sourced the rack architecture deployed in its Prineville, Oregon data center. The OpenRack architecture broke from the legacy 19” EIA-310 rack with a design that made 15% better use of rack volume, improved airflow (enabling an air-cooled facility), delivered highly efficient power conversion, and used tool-less, front-accessible modules to reduce deployment time and maintenance costs. By 2016 we realized that no single rack solution was ideally suited to the constraints placed on the rack by facility designers and operations teams. Our community had grown into a worldwide organization, and so had the requirements.

CG-OpenRack-19 System (rear view)

Along came Radisys and ADLINK. A worldwide provider of telecommunications equipment, Radisys saw demand for a new rack architecture that delivered the same benefits as OpenRack while providing backward compatibility with the legacy equipment found in telecom data centers and central offices. Working with Verizon and leveraging the OpenRack concept, Radisys prototyped a new rack architecture for worldwide telecom carriers. They deployed hundreds of nodes, executed months of field trials, and captured feedback from OCP’s Telecom and OpenRack project teams. The design was an intersection of OpenRack and 19” EIA-310, flavored with a few requests from the carriers. In December 2016 the CG-OpenRack-19 specification (CG for Carrier Grade) was accepted. A few months later ADLINK, another leading telecom supplier, contributed the Open Sled specification for CG-OpenRack-19. Open Sled enables suppliers to build a one-half-width sled with zones defined for mezzanine boards, network interface modules (NIMs), switches, hardware accelerators, MR-IOV functions, and PCIe expansion slots. The entire Open Sled design package is available, enabling the industry to quickly design and deploy new I/O to meet the various use cases found in the central office.

Compatible and serviceable. CG-OpenRack-19 is backward compatible with most equipment that fits into a 19”-wide EIA-310 rack. The architecture allows 1U and 2U sleds, either half or full width, that install from the front and blind-mate to a 12 VDC bus bar. The sled enclosure provides EMI/EMC shielding, and suppliers deliver these sleds with safety and regulatory certification.

From one to any. CG-OpenRack-19 gives your operations team flexibility now and in the future. Any number of storage sleds, server sleds, switches, or power shelves can be installed or added. There is no minimum, and no maximum beyond what the rack’s space allows.

Pre-wired network. Connectors, even locking connectors, aren’t foolproof when service personnel are working in the hot aisle. One of the biggest causes of system crashes and downtime is technicians unplugging the wrong connector. CG-OpenRack-19 offers a simple solution: power is delivered through blind-mate bus bars, and data connections are made via a single blind-mate connector on the rear of each tray. The data connector provides four optical Ethernet connections to each half- or full-width sled. Based on the intended installation, the rack is pre-wired for the data center’s management and network topology. As an example, a data center with isolated and segregated systems management may allocate one port for 1GbE device management, one port for 10GbE application management, and use the other two as the 40GbE primary and secondary data planes. Once installed, the rear of the rack never needs to be accessed, and the hot aisle is off limits!
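The example allocation above can be sketched as a small data model. The port names and role labels here are hypothetical, chosen only to mirror the scenario described in the text; they are not part of the CG-OpenRack-19 specification.

```python
# Hypothetical sketch of the example port allocation described above.
# Each CG-OpenRack-19 sled receives four optical Ethernet links through
# its single blind-mate data connector; the speeds follow the example
# in the text (1GbE + 10GbE management, dual 40GbE data planes).
SLED_PORT_MAP = {
    "port0": {"speed_gbps": 1,  "role": "device management"},
    "port1": {"speed_gbps": 10, "role": "application management"},
    "port2": {"speed_gbps": 40, "role": "primary data plane"},
    "port3": {"speed_gbps": 40, "role": "secondary data plane"},
}

def total_bandwidth_gbps(port_map):
    """Aggregate link bandwidth available to one sled."""
    return sum(p["speed_gbps"] for p in port_map.values())

print(total_bandwidth_gbps(SLED_PORT_MAP))  # 91
```

Because the rack is pre-wired to match a topology like this, a sled slid into any slot lands on the intended networks with no rear-of-rack cabling.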


Efficient, scalable power. Looking ahead at processor roadmaps, FPGA devices, and the growing use of GPUs, power to each sled must be scalable; as an example, four GPUs in a sled can draw 1000 watts. The power shelf concept facilitates right-sizing and eliminates phase imbalance, while the bus bar provides sufficient power. CG-OpenRack-19 implements two bus bar pairs, each identical to OpenRack’s, mounted onto an ordinary EIA-310 rack and spanning any number of RU slots. 12 VDC power is cabled to the bus bars in the same manner as OpenRack. The industry already provides a variety of power shelf options, including A and B utility feeds, single- and 3-phase input with phase balancing and failover, and BBU options.
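As a back-of-the-envelope check on what the bus bar must carry: a 1000 W sled at 12 VDC works out to roughly 83 A before losses. The sketch below adds an assumed 10% margin for conversion loss and headroom, which is an illustrative figure of my own, not a value from the specification.

```python
# Back-of-the-envelope bus-bar current for a high-power sled, using the
# figure from the text (four GPUs drawing ~1000 W) and the 12 VDC bus
# bar. The 10% margin is an assumption for illustration, not a spec value.
BUS_VOLTAGE_V = 12.0

def bus_bar_current_amps(sled_power_w, margin=1.10):
    """Current the bus bar must carry for one sled, with a loss/headroom margin."""
    return sled_power_w * margin / BUS_VOLTAGE_V

print(round(bus_bar_current_amps(1000.0), 1))  # ~91.7 A
```

Numbers like this are why the architecture distributes power through bus bars sized at the rack level rather than per-sled cabling.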


Looking back, looking ahead. When I grew up, the coax to our house was labeled “TV CABLE.” Years later, that same coax just said “CABLE” and carried far more than analog video. Similarly, CG-OpenRack-19 is much more than a carrier solution. It delivers efficiency, scale, and openness to the cloud data center, and it is supported by the biggest open hardware community in the world. In the fast-moving world we live in today, ‘carrier-grade’ is deployable well beyond carriers’ networks: CG is becoming ‘standard’.

Find out more about how you can participate in the OCP Community at:

http://opencompute.org/participate
