Open Compute Project

Faster, Leaner, Smarter, Better Data Centers

Friday, March 07, 2014 · Posted at 1:30 AM

Four years ago, Facebook broke ground on its first greenfield data center project in Prineville, Oregon. In the years since, we’ve deployed six iterations of that design, culminating in the first building currently under construction at our new campus in Altoona, Iowa. With facilities around the world, we constantly challenge ourselves to improve our data center designs to maximize efficiency, reduce material use, and speed up build times.

At this year’s Open Compute Summit, we previewed what we believe will be a step change in those ongoing efficiency efforts: a new “rapid deployment data center” (RDDC) concept that takes modular and lean construction principles and applies them at the scale of a Facebook data center.

We expect this new approach to data center design will enable us to construct and deploy new capacity twice as fast as our previous approach. We also believe it will prove to be much more site-agnostic and will greatly reduce the amount of material used in the construction. And with today’s exciting news from my colleague Joel Kjellgren, we will get to test these theses: Our newly announced second building at our Luleå, Sweden, campus will be the first Facebook data center to be built to our RDDC design.

In true Facebook style, the RDDC concept began with a hack. In October 2012, our data center strategic engineering and development team and several experts in lean construction came together to hack on a design for a data center that would look less like a construction project and more like a manufactured product. From this hack, a couple of core ideas for streamlining and simplifying the build process emerged.

The “chassis” approach

The first idea developed during the hack was employing pre-assembled steel frames 12 feet in width and 40 feet in length. This is similar in concept to basing the assembly of a car on a chassis: build the frame, and then attach components to it on an assembly line. In this model, cable trays, power busways, containment panels, and even lighting are pre-installed in a factory.

The “chassis” would be assembled in a factory and transported to the site

These chassis support all the services that are found overhead above the racks. Unlike containerized solutions, which are a full volumetric approach that includes a floor, this idea focuses solely on the framework that exists above the racks, to avoid shipping the empty space that will eventually be occupied by the racks. When the chassis arrives on site, it’s set atop posts mounted to the slab. Two of these chassis attached end to end create the typical 60-foot-long cold aisle, with 10 feet of aisle space at each end. For context, a typical data hall is composed of 52 total chassis, attached in a 4 x 13 grid configuration, with 13 cold aisles.

Chassis being placed on post

Sectional view of two 40’ chassis

Plan view of 40’ modules being placed in a grid pattern

We concentrated our modular production efforts on the framework chassis over the racks – which define the cold aisle – to avoid having to ship modules that also include the hot-aisle space. Instead, the width of the hot aisle is established by the pitch at which the chassis are set. Using a pitch of 15 feet, we’re able to decrease the hot-aisle width to 3 feet – translating to a reduction of approximately 28 feet in overall data hall length compared with our previous stick-built designs. In addition, this floor-supported approach removes the supply air penthouse, further reducing the amount of structural steel needed; in its place, manufactured air handling units are installed at grade, flanking the data halls.
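The arithmetic behind these dimensions can be sketched as follows. This is a toy calculation using only the figures quoted in this post; actual construction dimensions will vary.

```python
# Toy calculation of the RDDC data-hall geometry, using only the
# figures quoted in this post. All dimensions are in feet and
# illustrative only.

CHASSIS_LEN = 40     # one pre-assembled chassis
CHASSIS_WIDTH = 12   # spans the cold aisle and racks
END_AISLE = 10       # aisle space at each end of a cold-aisle run
PITCH = 15           # spacing at which chassis are set

# Two chassis end to end form one run: 80 ft total, of which 60 ft
# is the cold aisle once the 10 ft end aisles are subtracted.
run_len = 2 * CHASSIS_LEN
cold_aisle_len = run_len - 2 * END_AISLE

# The hot-aisle width falls out of the pitch: whatever the 15 ft
# pitch leaves over after the 12 ft chassis width.
hot_aisle = PITCH - CHASSIS_WIDTH

print(run_len, cold_aisle_len, hot_aisle)  # 80 60 3
```

Note how the hot aisle is never shipped or framed at all: it is simply the 3 feet left over between adjacent chassis rows.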

The “flat pack” approach

The second concept developed during the hack was inspired by the flat-pack assemblies made famous by Ikea. Our previous data center designs have called for a high capacity roof structure that carries the weight of all our distribution and our cooling penthouse; this type of construction requires a lot of work on lifts and assembly on site. Instead, as Ikea has done by packing all the components of a bookcase efficiently into one flat box, we sought to develop a concept where the walls of a data center would be panelized and could fit into standard modules that would be easily transportable to a site.

In this scheme, we employ standard building products like metal studs and preassembled unitized containment panels that are then erected onsite and are fully self-supporting. These panels are limited to a width of 8 feet to maximize the amount of material that can be shipped on a flatbed trailer without requiring special permits for wide loads.

A mock-up of a flat-pack test assembly

The wall panels – which are 14 feet tall – have been simplified using off-the-shelf components and easily mate with each other. Careful attention was paid to minimizing the number of unique components; for example, 364 identical wall panels are used in each data hall. The ceiling panels use an Epicore metal deck product, which spans the 12-foot width of the cold aisle and racks. The deck serves the additional duty of carrying the loads of the trays, power bus, and light fixtures below it, using a proprietary hanger clip for the threaded rods.

This flat-pack concept is still early in its development, but the current evaluation has already identified great time and material savings.

Expected results

Both of these key RDDC concepts (the chassis and the flat pack) should allow us to make a number of measurable gains, including:

- Site-agnostic design: By deploying pre-manufactured assemblies, a majority of the components can be used interchangeably. The goal is to be deployable wherever we seek to build next. It’s our hope that by standardizing the designs of our component assemblies, much like we do with OCP servers, we can deploy a unitized data center into almost any region in the world faster, leaner, and more cost-effectively. Performing more of the assembly in a controlled environment and at ground level also reduces assembly time.

- Reduced on-site impact: The RDDC concept will deploy pre-engineered unitized modules that minimize the amount of time required for heavy equipment on site and overall time to complete a data hall. The modules reduce the generation of on-site waste and the impacts associated with the delivery and staging of individual construction materials common to traditional construction techniques.

- Improved execution and workmanship: Having a predictable and repeatable product delivered to the site allows local teams to easily replicate the quality and fit from one region to another. Our RDDC design will produce this result by using explicit assembly instructions with established tolerances.

We expect to begin construction on our second data center in Luleå soon, using these RDDC designs. We will continue to share our learnings about RDDC design and construction so the OCP community can contribute their ideas and help advance data center design and construction that much more quickly.

A rendering of Facebook’s Luleå 2 Rapid Deployment Data Center (RDDC)

Marco Magarelli is a design engineer on Facebook’s data center design team. For more background, watch his presentation from the OCP Summit in January (begins at 21:50 mark).


OCP Hackathon Winner: Adaptive Storage

Thursday, March 06, 2014 · Posted at 9:05 AM

For the last three OCP Summits, engineers with a passion for open source technologies and hardware have come together for a 24-hour hackathon where they work nonstop in a competition to create the best hack. This year, three teams won the hackathon, and two of them will share their experiences on the OCP blog. Here’s the first blog post from Ron Herardian, who provides more detail about his team’s winning project.

Whenever new pieces of technology are introduced, the first question most engineers ask is how they can be used to make things run better and more efficiently. At the Open Compute Summit’s hardware hackathon, we got the chance to answer that question and were thrilled by what we found.

The team I worked with was a diverse group representing a range of companies, and for the most part, we had never met before. Our team was composed of Andreas Olofsson from Adapteva, Inc.; Peter Mooshammer, formerly with IBM; Jon Ehlen from Facebook; and Dimitar Boyn from I/O Switch Technologies, Inc. The team also included Rob Markovic, an independent consultant, and myself, a computer hobbyist and hacker. Rob and I were both acquainted with Dimitar from I/O Switch – though not with one another – but none of the other team members had previously met, and no plans existed prior to the event. Nonetheless, there was immediate synergy, and an ambitious plan took shape during an hour of brainstorming.

We decided on a project that we called Adaptive Storage, in which compute and storage resources would be loosely coupled over a network and would scale independently to optimize big data platforms like Hadoop. The project involved creating Hadoop data nodes using ARM (Advanced RISC Machine) processor-based micro servers and network-connected disk drives. I/O Switch provided a printed circuit board (PCB) that allowed disk drives to be cabled directly to network switches. Hadoop micro server nodes would control one or more disk drives over the network, but any micro server could read any disk drive. This would make it possible to dynamically recombine compute and storage resources on a common network switched fabric in a completely flexible way. If it worked, Adaptive Storage could be used to eliminate compute hotspots and coldspots in Hadoop.

Right from the start, the whole team was fascinated by the possibilities of the new Parallella micro server for cloud service providers and large enterprises. Although it is aimed at the hobbyist and education markets, Parallella is a powerful, flexible, and extendable computing platform. The Parallella computer has a dual-core ARM A9 processor (on a Zynq Z-7020) together with a 16-core Epiphany Multicore Accelerator and one gigabyte of random access memory. It also has built-in gigabit Ethernet, Universal Serial Bus (USB), and High-Definition Multimedia Interface (HDMI) ports, as well as expansion connectors capable of 50 gigabits per second.

 

 

Parallella Micro Server

 

The Adaptive Storage concept was developed by Dimitar and Andreas with valuable input from other team members. The project required loosely coupling distributed Parallella compute capacity to storage resources over a network. This involved connecting disk drives to a network using Advanced Technology Attachment (ATA) over Ethernet (AoE) and running open source AoE drivers on Parallella Hadoop data nodes. Adapteva provided the Parallella hardware and Linux distribution while I/O Switch provided AoE to Serial ATA (SATA) PCBs (“AoE Enabler”) and other hardware to build the test lab environment. 
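For readers unfamiliar with AoE: it is a very thin protocol that carries ATA commands directly in Ethernet frames (EtherType 0x88A2), with each drive addressed by a shelf/slot (major/minor) pair. The sketch below packs and unpacks the 10-byte AoE header per the published AoE specification; it is an illustration only, not code from the hack.

```python
import struct

AOE_ETHERTYPE = 0x88A2   # AoE rides directly on Ethernet, no IP stack
AOE_HEADER = ">BBHBBI"   # ver/flags, error, major, minor, command, tag

def pack_aoe_header(major, minor, command, tag, ver=1, flags=0):
    """Build the 10-byte AoE header addressing shelf `major`, slot `minor`."""
    return struct.pack(AOE_HEADER, (ver << 4) | flags, 0,
                       major, minor, command, tag)

def unpack_aoe_header(data):
    """Decode the header fields back into a dict."""
    vf, err, major, minor, cmd, tag = struct.unpack(AOE_HEADER, data[:10])
    return {"ver": vf >> 4, "flags": vf & 0xF, "error": err,
            "major": major, "minor": minor, "command": cmd, "tag": tag}

hdr = pack_aoe_header(major=1, minor=0, command=0, tag=42)  # command 0 = ATA
assert len(hdr) == 10
assert unpack_aoe_header(hdr)["major"] == 1
```

The absence of TCP/IP is what makes AoE attractive for this design: a drive behind an AoE Enabler needs only a switch port, and any node on the same fabric can address it by shelf and slot.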

I/O Switch AoE Enabler

 

The hack required building a custom Linux kernel and compiling open source driver code, and each member of the team quickly focused on their areas of expertise. Andreas’s hands-on knowledge of the Parallella platform and of Linaro Linux on the ARM processor was vital to the project. Jon was instrumental in showing how Parallella storage nodes and I/O Switch AoE Enablers could be deployed together in a real data center; his contributions of a real-world use case and 3D CAD drawings tied the whole project together. In addition to solving many problems, Peter was able to prototype the entire software stack in a virtual machine environment, giving the team confidence that the project’s goals could be achieved. Rob and I set up the test lab, worked on troubleshooting, helped coordinate the team’s efforts, made emergency trips to the nearest Fry’s Electronics store, and prepared the team’s presentation.

The whole team worked late into the night on Tuesday, January 28. Dimitar and Andreas worked in shifts during the night getting the custom Linux kernel up and running and deploying Hadoop on the Parallella platform. A crucial moment came around 1:00 a.m. when the testbed Parallella computer overheated during a kernel compilation. We quickly solved the problem by lifting a fan from a prototype I/O Switch Hailstorm storage enclosure, borrowing some wire from another team and connecting the fan to the Parallella board.

 

 

Andreas Olofsson, CEO, Adapteva, Inc.

After 24 hours of hard work, the team was still scrambling to wrap up as the presentations began, and we finished our slides just in time for a smooth presentation. Dimitar gave an in-depth live demo and handled the lengthy Q&A session, including a discussion of how to use Adaptive Storage to implement Seagate’s Kinetic storage API or an Amazon S3-like RESTful API for scalable object storage.

Dimitar Boyn, CTO, I/O Switch Technologies, Inc.

 

In Adaptive Storage, disk drives are connected directly to network switches. There is no conventional storage array. Also connected to the switch are Parallella micro servers running Hadoop, each of which can handle data for one or more disk drives. Because every disk drive is individually connected to a network by an I/O Switch AoE Enabler PCB, any micro server can read from any disk drive. This means that micro servers can join forces to process complex jobs or larger data sets.

The idea is that micro servers can combine on demand, with recombination taking place dynamically. Since additional micro servers can be recruited automatically to process complex jobs or large data sets, Adaptive Storage is elastic with respect to compute. Additional physical micro servers can be added to the network switched fabric at any time, independent of storage.

Similarly, any micro server can take over unassigned disk drives on the network for exclusive write access or release them when they are no longer needed. This is also done on demand so that Adaptive Storage is elastic for storage resources. Additional physical disk drives can be added to the network any time independent of compute.
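This claim-and-release behavior can be modeled in a few lines. The sketch below is a toy model of the scheme described above; the class and method names are invented for illustration and are not from the actual prototype.

```python
# Toy model of Adaptive Storage's elastic pairing of micro servers
# and network-attached drives. Names are illustrative only.

class Fabric:
    """A switched fabric where any server may read any drive, but a
    drive has at most one exclusive writer at a time."""

    def __init__(self, drives):
        self.writers = {d: None for d in drives}   # drive -> owning server

    def claim(self, server, drive):
        """Take exclusive write access to an unassigned drive."""
        if self.writers.get(drive) is None:
            self.writers[drive] = server
            return True
        return False   # already owned; reads are still allowed

    def release(self, server, drive):
        """Give a drive back to the unassigned pool."""
        if self.writers.get(drive) == server:
            self.writers[drive] = None

    def readable(self, server, drive):
        # Every server can read every drive on the fabric.
        return drive in self.writers

fabric = Fabric(["d0", "d1", "d2"])
assert fabric.claim("node-a", "d0")
assert not fabric.claim("node-b", "d0")   # d0 already has a writer
assert fabric.readable("node-b", "d0")    # but node-b can still read it
fabric.release("node-a", "d0")
assert fabric.claim("node-b", "d0")       # released drives can be re-claimed
```

Because both sides of the pairing are just entries in a table over a shared fabric, compute and storage can each grow (or shrink) without touching the other, which is the elasticity described above.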

In addition to dramatic power savings and independent, elastic scaling of compute and storage resources, Adaptive Storage is a simple and elegant way to eliminate compute hotspots and coldspots in Hadoop. But the concepts and methods of Adaptive Storage are not limited to Hadoop: they can be applied to virtually any big data technology, such as Cassandra or MongoDB, or to object storage in general. For example, Adaptive Storage is complementary to Seagate Kinetic, because the Kinetic API can run on micro servers managing one or more disk drives connected to a network.

For production, Facebook’s standard 1/2 width Knox OCP Serial Attached Small Computer System Interface (SAS) Expander Board (SEB) can easily be replaced by a full-width Adaptive Storage Base Board, on which card guides and a riser card / backplane can be mounted. The entire structure can be supported on the sides by sheet metal brackets. Production readiness is straightforward from a mechanical and manufacturing perspective.

 

Adaptive Storage Base Board

Adaptive Storage raises fundamental questions about the way storage and compute are connected and about the power requirements for big data. In just 24 hours, with no budget and with a few boxes of computers, circuit boards and networking equipment, our small team of engineers was able to imagine a totally new way of organizing Hadoop data nodes, build and demonstrate a working prototype running on ARM processor-based micro servers using open source software, and show production-ready engineering CAD drawings for a production implementation.

We know from experience that a similar project in a large technology company could take several months. That may be why we received so much interest in our hack from those at the OCP Summit. We can’t say for sure what the future holds for Adaptive Storage, but we were excited that so many companies and individuals who viewed our hack were intrigued by what they saw and wanted to continue working with it. This reinforced our belief in open sourcing and in the Open Compute Project. We were able to use open source technology to build something great and we’ll keep building, innovating and developing with even more partners in the future.

 


Call for nominations - OCP Incubation Committee 2014-2015 Term

Monday, March 10, 2014 · Posted at 9:51 AM

Update: The timelines and eligibility requirements have been updated.

The Open Compute Project (OCP) Foundation is pleased to announce the call for nominations to serve on the 2014-2015 OCP Incubation Committee (IC). There are seven seats available for the OCP IC 2014-2015 term.

These seven seats correspond to the OCP projects:

  • Certification and Interoperability
  • Data Center
  • Hardware Management
  • Networking
  • Open Rack
  • Server
  • Storage

 

By participating as a member of the OCP IC you will be responsible for reviewing all contributions that are submitted to the OCP Foundation. You will work with both the project leads and the contributor to make sure all contributions are a fit for the goals and missions of the OCP Foundation.

This is an exciting opportunity to help shape the future of the OCP Foundation and its community while advancing the mission and goals of the Foundation.

Nomination Process

Eligibility: 

  • Individual must belong to an OCP Silver, Gold or Platinum Corporate Member organization.
  • If an individual is participating on their own, they must show sustained contribution and must be approved by the OCP Foundation as an individual OCP Community Thought Leader.

Exceptions:

  • Silver Members can hold no more than 1 leadership position at any given point in time.
  • Gold Members can hold no more than 2 leadership positions at any given point in time.
  • Platinum Members can hold no more than 3 leadership positions at any given point in time.
  • No one person can hold more than one leadership position at any given point in time.

  (If an organizational member is nominated and this nomination would cause the organization to exceed its membership-level limit, then that individual and organization will be contacted prior to the names being released.)

Call for nominations ends at 11:59pm UTC on 28 March 2014.

APPLY TODAY!

Only submissions added to this webform will be accepted. No personal emails will be accepted as eligible submissions. As the nominations are reviewed, nominees will be contacted by the OCP Community Manager to make sure they would like to accept the nomination and to provide a short bio.

To nominate yourself or someone else, click here.

IC Positions

As noted above, there are seven seats available on the IC. To understand what is required of these positions, please take a moment to review their roles and responsibilities.

  • Help shape new projects
    • Provide guidance and feedback to incubated projects
    • Proactively provide lightweight feedback
    • Ensure OCP principles are embodied in specifications
    • Ensure specifications contribute meaningful value
  • Vote on new specifications
  • Work with and encourage project chairs to bring new specs to IC
  • Ensure proper functionality of the IC
  • Participate in IC meetings (every six weeks)

 

Testimonial Phase

Eligibility: Any community member can add a positive endorsement to any nominee. During this phase of the election, each nominee will have a page on the website that will include a picture, a bio, and information on why they want to sit on the IC. This page is where the community will be allowed to endorse each nominee. While only Gold and Platinum members can vote on the nominees, the testimonial phase is open to any member of the community. (Please note that all comments will be moderated and only positive endorsements will be allowed; any negative comments will be deleted.)

Voting Process

Eligibility: Only Gold and Platinum OCP Corporate Members and the OCP Foundation Board will be eligible to vote. Gold Members will be given 2 voting keys and Platinum Members will be given 3 voting keys. Between now and 11 April 2014, the OCP Community Manager will be contacting the point of contact for each Gold and Platinum member to determine who from each organization will be sent keys. Each key is tied to an email address and cannot be shared.

Timeline

Date       Action
10 March   Call for Nominations Opens
28 March   Call for Nominations Closes
31 March   Publish List of Eligible Nominees to Wiki and Website
31 March   Testimonial Phase Starts
10 April   Testimonial Phase Ends
11 April   Voting Phase Starts
25 April   Voting Phase Ends
28 April   New IC Announced

 

If you have any questions please contact Amber Graner.

 


OCP Hackathon Winner: The Codeless Hack

Friday, March 14, 2014 · Posted at 3:04 AM

Our second blog post about the OCP Summit’s hardware hackathon comes from Derek Jouppi and Andrew Andrade, interns at Facebook who won the hack with a new design for the Open Rack V2’s Battery Backup Unit.

Derek and I decided to take part in the hardware hackathon at the Open Compute Summit on a whim. We’d heard about the hackathon and were intrigued enough to stop by, but didn’t plan on joining in. Once we were there, however, we realized that the opportunity was too much to pass up, and we jumped in without the team size and equipment that many other groups had.

What we did have was an idea. We wanted to build something that was highly impactful, that could be contributed immediately to Open Compute, and that we hoped would be a shippable prototype within 24 hours. Given our criteria, we decided we wanted to do a hack for the newly released Battery Backup Unit (BBU) on the Open Rack V2 design, since this is the area that Derek was most familiar with. 

The problem with the BBU is that if it is not functional, debugging the failed unit is extremely difficult and time consuming. You have to use a nest of wires, probes, a DMM, and/or an oscilloscope to find the fault – definitely not a great process. Instead, we envisioned a coupler that would take in the output from the scope, process the signals, and intelligently display what the error was. Using a simple ATMEGA microcontroller, an LED display, and a few other components, we could make a compact and intelligent tool for technicians working in data centers. It was simple enough that we could design and build a working prototype in only 24 hours, and yet it had high impact, as it could directly contribute to running a better data center. It seemed like a perfect project for our two-man team.

Thanks to one of the hackathon’s organizers, John Kenevey, we were able to run back to Menlo Park to grab the supplies we needed: two Arduinos, breadboards, LEDs, wires, a DMM, and an assortment of other parts we thought might be useful.

Derek Jouppi and I begin to assemble the equipment we need for our hack.

Then we got to work. Since the display BBUs were inaccessible, I quickly began to write Arduino code to simulate a failing BBU. I also wrote some code that acted as an ADC to take in the signals from the BBU. Meanwhile, Derek got to work on a pin-out drawing and sketched the initial schematic. We worked for a few hours before bringing our design to an Open Rack prototype to investigate how we could implement it. That’s when we realized our solution was all wrong.

A monitoring system that sat on every rack didn’t make sense. There was not enough room in the V2 rack to hold a coupler, and a microcontroller solution would be too expensive and not scalable in a large data center.

With only a few hours left in the hackathon, we decided we needed to pivot to simplicity. We’d been working furiously to design a complex solution, when what we really needed was to take the Open Compute approach: remove anything in the data center that didn’t contribute to efficiency. In this case, we decided we wanted to create a simple, minimal device to solve the problem of bringing visibility to failing BBU’s.

We realized that, given the mapping of the pin-outs, one solution would be to connect the signals to status LEDs, which would give instant visibility into problems with the BBU. While it would take manual diagnostics to determine issues, the cost of the device would be significantly lower, and it would be much easier to operate. For added functionality, we included an output header, which could be used in the future to connect to a microcontroller if a more automated method had to be deployed. Finally, since the BBU relied on a power supply to charge, we included a power header that could be used to connect to the power supply.

The debug card that resulted from our hack could easily attach as a component to the back of a Battery Backup Unit, as shown in this rendering and video.

Our hack had become a codeless debug tool that would enable techs to quickly and efficiently debug issues with battery backup units in their data centers.

After drafting a quick design on paper, we iterated on breadboards to achieve the desired functionality. We borrowed an actual failing BBU from the demo, and after some iteration we were able to come up with a very simple, “bare bones” design for the debug card.

Schematic of the debug card we made using Upverter’s schematic layout tool.

For our circuit layout we used Upverter, a start-up that focuses on online board design CAD software. We drew up a quick schematic by sourcing our components online and imported their drawings on Upverter’s schematic tool. We then used the tool to create a bill of materials that included the component prices and the cost to spin the board.

We then created a PCB layout for the board. To parallelize our efforts, we performed a quick mock-up of the board layout in MS Paint. Derek finished the PCB layout on Upverter while I did a quick rendering of our final product using mechanical CAD software.

When we demonstrated our project for the judges, we were proud to watch as the BBU powered up and our LEDs lit up a split second later, indicating the status of the unit. Our simple solution had made it easy and inexpensive to know if the BBU was functional at any given time. This vital information will make it easier to be sure that any power loss won’t result in a loss of functionality.

As we powered the BBU for the demo, we demonstrated how we could instantly see the status of the unit from the output LEDs.

We were incredibly excited by what we’d built, and we couldn’t help but note the community spirit of Open Compute that had gotten us there. Throughout the hack, we frequently traded tools with other teams and asked other participants for their input as we worked. Everyone lived and breathed the open source philosophy – that sharing knowledge and experience will result in better hacks overall. It seems to us that this desire to collaborate is what Open Compute is all about, and in that vein, we plan to release the specifications for our BBU hack to the open source community in the next few weeks.

 

 

 


Expanding the OCP Foundation Board

I have some exciting news to share: The Open Compute Project Foundation board has voted to expand to include two new members. Bill Laing, Corporate Vice President of Cloud and Enterprise at Microsoft, and Jason Taylor, Director of Infrastructure at Facebook, have joined the board effective immediately. I recently left Facebook to pursue a new OCP-related startup opportunity, and will remain on the board as an independent. The board has also voted to retain me as the president and chairman of the foundation.

The Open Compute Project continues to gain momentum, and Bill and Jason will be great additions to the community's leadership. Together we will continue to accelerate the pace of innovation in this industry and to make datacenter technologies more open, more efficient, and more sustainable.


Highlights from OCP Networking Workshop

Friday, March 28, 2014 · Posted by at 8:01 AM

Following the fifth Open Compute Summit earlier this year, a lot of people in the community are excited about the recent progress in the OCP networking project. At the conference, we held a keynote panel, moderated by Facebook’s Najam Ahmad, discussing how to open up network hardware.

We also had nearly 100 people participate during the OCP engineering workshop sessions, with significant hardware and software developments discussed by Mellanox, Broadcom, Cumulus, and Big Switch.


In this post, I’ll cover some of the highlights from the OCP workshop. But first, for those of you who might be new to OCP or to the networking project, I wanted to provide a little background. 

Background & Motivation

We started the OCP networking project about ten months ago. In the project’s charter, you can see the essence of its mission (my emphasis):

create a set of networking technologies that are disaggregated and fully open, allowing for rapid innovation in the network space

Here’s how we view each of those key tenets:

 - "Disaggregated": In traditional networking, we buy an appliance from a vendor, and that box comes with the software from that same vendor. One of the first goals of the networking project has been to break that apart – to separate the network hardware from the network software – so consumers aren’t locked in to a single solution from a single vendor.


 - "Fully open": This too is rare in the networking world, but we are sowing the seeds of change here. At the hardware level, the designs of the vast majority of devices are closed, but we at least see merchant silicon being leveraged by many companies. At the software level, we see APIs on top of network devices that allow some level of automation, and these sometimes are "open." However, they don't give a low level of access to the underlying forwarding hardware – say, at the ASIC SDK level. Within the project, we are emphasizing the need for publishing the design of working switch hardware as well as opening up software APIs at the network ASIC level.


 - "Rapid Innovation": We believe that, by disaggregating the network hardware and software and by being fully open, we can create an ecosystem of rapid innovation, where best-of-breed software can be written for specific deployment use cases that can fully utilize the open switch hardware. 

Recent Highlights: Working Switches, Working Code

The OCP networking project’s initial focus has been in the datacenter, at the top-of-rack (TOR) switch. Many vendors are already leveraging merchant silicon for their TOR switches, and on the operator side, it's an easy place to try out a new hardware switch, as the failure domain is mostly limited to the servers in the rack.

At the recent OCP Summit and the networking engineering workshop, we had some significant progress both in hardware and software for the TOR switch:


- Switch specifications and hardware: As part of the project, Mellanox published the specification for its SwitchX-2 TOR switch and showed a spine switch with 36 40GbE QSFP ports.

This joins the specification that Broadcom published late last year for its OCP network switch, and a switch based on the Broadcom spec that is already available from Interface Masters.

The review committee of the project is excited to submit these specifications to the OCP incubation process as the next step.

 - Foundational software: Cumulus demoed for the first time its Open Network Install Environment (ONIE) running on an x86-based Interface Masters OCP switch, and that solution is now available for the OCP switch.

Big Switch Networks announced that it has contributed a Linux distribution, dubbed Open Network Linux, designed to let users quickly get a Linux environment with all the kernel and driver modifications needed to work with these open switches. It is not yet available for the Interface Masters OCP switch but will be soon.

Forwarding Software for “Software-Agnostic” Switches 

The OCP networking project started with discussions of switch hardware designs, and we have progressively been moving up the stack into the software for that hardware. The contributions from Cumulus and Big Switch provide a software foundation, but in order for the OCP switch to actually forward packets, we still need forwarding software on top of the hardware switch itself. That brings up more questions that we want to answer as a community:

- "What will the forwarding software need to handle?" The forwarding software needs to handle traffic from the connected hosts, including control traffic (e.g., ARP, DHCP, and DNS) and the hosts’ actual data traffic. It also needs to be able to interact with the control plane of the surrounding network (e.g., an L2 environment of VLANs and spanning trees, an MPLS-based network, or an L3 routed network) as well as forward data traffic to/from the rest of the network.


- "Where will it come from?" One of our key design goals in the OCP networking project is that the switches should be “software-agnostic.”

For example, the hardware should support forwarding software that is based on either a traditional distributed networking protocol stack or a more “SDN”/centralized model with OpenFlow. Another example is that the hardware should support either open source forwarding software or commercial forwarding software.

Indeed, we envision being able to run many network software packages at once, just like you can run different applications on your servers.
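To make the "software-agnostic" idea concrete, here is a small, purely hypothetical Python sketch (none of these names come from any OCP specification or real product). A forwarding agent punts control traffic such as ARP and DHCP to whichever control plane is installed – a traditional distributed protocol stack or an OpenFlow-style centralized controller – while data traffic follows the forwarding table, and both control-plane models plug into the same interface.

```python
# Hypothetical sketch of a "software-agnostic" forwarding agent.
# Control frames (ARP, DHCP) are punted to a pluggable control plane;
# everything else follows the forwarding table. All names are invented.

ARP_ETHERTYPE = 0x0806
DHCP_PORTS = {67, 68}  # DHCP server/client UDP ports

def is_control(packet):
    """Classify punted traffic: ARP by ethertype, DHCP by UDP port."""
    if packet.get("ethertype") == ARP_ETHERTYPE:
        return True
    return packet.get("udp_dport") in DHCP_PORTS

class DistributedStack:
    """Traditional model: protocols run on the switch itself."""
    def handle(self, packet):
        return "processed-locally"

class OpenFlowAgent:
    """Centralized model: control traffic goes to a controller."""
    def handle(self, packet):
        return "sent-to-controller"

class ForwardingAgent:
    def __init__(self, control_plane):
        self.control_plane = control_plane  # either model plugs in here
        self.table = {}                     # destination -> egress port
    def process(self, packet):
        if is_control(packet):
            return self.control_plane.handle(packet)
        return self.table.get(packet.get("dst"), "drop")

agent = ForwardingAgent(OpenFlowAgent())
agent.table["10.0.0.2"] = "port3"
print(agent.process({"ethertype": ARP_ETHERTYPE}))  # sent-to-controller
print(agent.process({"dst": "10.0.0.2"}))           # port3
```

Constructing the agent with `DistributedStack()` instead changes where control traffic is handled without touching the forwarding path – the same hardware, driven by either software model.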

If you're interested in following the activities of the OCP networking project, please check out http://www.opencompute.org/projects/networking -- we'd love to have your help in opening up the network layer.


Incubation Committee Election Process Extended

Monday, March 31, 2014 · Posted at 9:37 AM

Timeline

We are extending the Incubation Committee (IC) process by two weeks and updating the time for each phase of the IC Election Process to the following:

  • 10 March 2014 - Call for Nominations Starts
  • 11 April 2014 - Call for Nominations Ends
  • 14 April 2014 - List of Nominees Published
  • 14 April 2014 - Testimonial Phase Starts
  • 25 April 2014 - Testimonial Phase Ends
  • 28 April 2014 - Voting Phase Begins
  •  9 May 2014 - Voting Phase Ends
  • 12 May 2014 - New IC Announced


Who is eligible for nomination to the IC?

Any Silver, Gold, or Platinum OCP Corporate Member in good standing is eligible to be a member of the IC.

To nominate yourself or someone you know, please click here.

Who is eligible to vote in the IC elections?

Any Gold or Platinum OCP Corporate Member in good standing is eligible to vote.

Why we are extending the process

We are in the final stages of rolling out our new tiered membership process, and we don’t want to rush through the election process or miss any member organization that qualifies for nomination or a voting key. We are in the process of checking the qualifying contributions and will email all our members in the next two weeks. We will keep the nomination phase open until all members have been contacted.

This also allows us to time the Project Lead and IC elections so that they always fall after the OCP Summit and do not conflict with one another.

Thank you so much for your patience and understanding. We are working hard to include our members and community. Stay tuned for more information on the election process and tiered membership throughout the coming weeks.

More information can be found at: http://www.opencompute.org/wiki/Governance

If you have any questions, please contact Amber Graner, OCP Community Manager.

