A new future for cloud computing

NSF awards $20 million to support cloud-computing applications and experiments.
August 25, 2014

Apt, an NSF-funded precursor to CloudLab, is a testbed instrument that is adaptable to many different research domains through customizable profiles. Shown here is the computer cluster that provides the main hardware resource for Apt. It comprises approximately 200 servers located in the University of Utah’s Downtown Data Center. The Utah portion of CloudLab will go into the same facility, right next to the equipment pictured here. (Credit: Chris Coleman, School of Computing, University of Utah)

The National Science Foundation (NSF) has announced two $10 million projects, called “Chameleon” and “CloudLab,” to create cloud-computing testbeds to help the academic research community develop and experiment with novel cloud architectures and applications.

The NSF is especially interested in real-time, safety-critical applications like those used in medical devices, power grids, and transportation systems.

Stampede is one of the most powerful machines in the world for open-science research. Funded by the National Science Foundation and built in partnership with Intel, Dell and Mellanox, Stampede went into production on January 7, 2013 at The University of Texas at Austin’s Texas Advanced Computing Center (TACC). The Chameleon system will join Stampede at TACC. (Credit: Sean Cunningham, TACC)

Chameleon

Chameleon will be a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin.

It will consist of 650 cloud nodes with 5 petabytes of storage. Researchers will be able to test the efficiency and usability of different cloud architectures on a range of problems, from machine learning and adaptive operating systems to climate simulations and flood prediction.

The testbed will allow “bare-metal access,” an alternative to the virtualization technologies currently used to share cloud hardware, enabling experimentation with new virtualization technologies that could improve reliability, security and performance.
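To make the idea concrete, here is a minimal sketch of how a researcher might request a bare-metal machine on a profile-driven testbed of this kind, written with the open-source geni-lib library used by GENI-derived facilities such as Apt. The hardware type and disk image named below are hypothetical placeholders, not details from either project:

```python
# Illustrative geni-lib profile: request one raw physical machine
# (bare metal) instead of a virtual machine. Hardware type and image
# URN are placeholders, not real testbed inventory.
import geni.portal as portal

# Create a portal context and an empty resource request.
pc = portal.Context()
request = pc.makeRequestRSpec()

# Ask for a raw PC, i.e., exclusive bare-metal access to a server.
node = request.RawPC("node1")
node.hardware_type = "d710"  # placeholder hardware class
node.disk_image = "urn:publicid:IDN+example.net+image+ops//UBUNTU14-64"  # placeholder

# Emit the request RSpec that the testbed's control framework consumes.
pc.printRequestRSpec(request)
```

Because the researcher receives the whole machine, the stock hypervisor can be replaced with an experimental one, which is exactly the kind of study virtualized clouds cannot support.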

One aspect that makes Chameleon unique is its support for heterogeneous computer architectures, including low-power processors, graphics processing units (GPUs) and field-programmable gate arrays (FPGAs), as well as a variety of network interconnects and storage devices. Researchers can mix and match hardware, software and networking components and test their performance.

This flexibility is expected to benefit many scientific communities, including the growing field of cyber-physical systems, which integrates computation into physical infrastructure. The research team plans to add new capabilities in response to community demand or when innovative new products are released.

Other partners on the Chameleon project (and their primary area of expertise) are The Ohio State University (high-performance interconnects), Northwestern University (networking) and the University of Texas at San Antonio (outreach).

CloudLab

The second NSFCloud project supports the development of “CloudLab,” a large-scale distributed infrastructure based at the University of Utah, Clemson University and the University of Wisconsin, on top of which researchers will be able to construct many different types of clouds.

Each site will have unique hardware, architecture and storage features, and will connect to the others via 100 gigabit-per-second connections on Internet2’s advanced platform, supporting OpenFlow (an open standard that enables researchers to run experimental protocols in campus networks) and other software-defined networking technologies.
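For readers unfamiliar with OpenFlow, the sketch below shows the flavor of experiment this enables: a tiny controller application that installs a default rule telling every connecting switch to flood packets out all ports. It is written with the open-source Ryu framework, which is an assumption for illustration; neither project prescribes a particular controller:

```python
# Minimal OpenFlow 1.3 controller app (Ryu): on switch connect,
# install a lowest-priority rule that floods all traffic.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FloodAll(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; flood matching packets out all ports.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

Running this with ryu-manager turns any OpenFlow-capable switch pointed at the controller into a simple hub; a real experiment would replace the flood rule with novel forwarding logic.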

“Today’s clouds are designed with a specific set of technologies ‘baked in’, meaning some kinds of applications work well in the cloud, and some don’t,” said Robert Ricci, a research assistant professor of computer science at the University of Utah and principal investigator of CloudLab. “CloudLab will be a facility where researchers can build their own clouds and experiment with new ideas with complete control, visibility and scientific fidelity. CloudLab will help researchers develop clouds that enable new applications with direct benefit to the public in areas of national priority such as real-time disaster response or the security of private data like medical records.”

In total, CloudLab will provide approximately 15,000 processing cores and in excess of 1 petabyte of storage at its three data centers. Each center will comprise different hardware, facilitating additional experimentation. To that end, the team is partnering with three vendors, HP, Cisco and Dell, to provide diverse, cutting-edge platforms for research. Like Chameleon, CloudLab will feature bare-metal access. Over its lifetime, CloudLab is expected to run dozens of virtual experiments simultaneously and to support thousands of researchers.

Other partners on CloudLab include Raytheon BBN Technologies, the University of Massachusetts Amherst and US Ignite, Inc.

Advancing cloud computing broadly

Ultimately, the goal of the NSFCloud program and the two new projects is to advance the field of cloud computing broadly. The awards announced today are the first step toward that goal: they will fund the development of new concepts, methods and technologies for infrastructure design and ramp-up, and will demonstrate readiness for full-fledged execution. In the second phase of the program, each cloud resource will become fully staffed and operational, fulfilling its proposed mission of serving as a testbed used extensively by the research community.

“Just as NSFNet laid some of the foundations for the current Internet, we expect that the NSFCloud program will revolutionize the science and engineering for cloud computing,” said Suzi Iacono, acting head of NSF’s Directorate for Computer and Information Science and Engineering (CISE).