Cheap, power-efficient flash memory for big data without sacrificing speed

August 6, 2015

A 20-node BlueDBM Cluster (credit: Sang-Woo Jun et al./ISCA 2015)

There’s a big problem with big data: the huge amount of RAM required. Now MIT researchers have developed a new system called “BlueDBM” that should make servers using flash memory as efficient as those using conventional RAM for several common big-data applications, while preserving flash’s power and cost savings.

Here’s the context: Data sets in areas such as genomics, geology, and daily Twitter feeds can run from 5 to 20 terabytes. Complex queries over such data sets require high-speed random-access memory (RAM), but holding all the data in memory would require a huge cluster with as many as 100 servers, each with 128 GB to 256 GB of DRAM (dynamic random-access memory).
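
As a rough sanity check on that cluster size, the sketch below (using only the figures quoted above, not numbers from the paper itself) totals the DRAM across such a cluster:

```python
# Back-of-the-envelope check on the cluster sizing above: total DRAM across
# a 100-server cluster at the two memory configurations mentioned.
servers = 100
for dram_gb in (128, 256):
    total_tb = servers * dram_gb / 1000          # using decimal terabytes
    print(f"{servers} servers x {dram_gb} GB = {total_tb:.1f} TB of DRAM")
# -> 12.8 TB and 25.6 TB, roughly enough to hold a 5-20 TB dataset in memory
```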

Flash memory (used in smartphones and other portable devices) could provide an alternative to conventional RAM for such applications. It’s about a tenth as expensive, and it consumes about a tenth as much power. The problem: it’s also about a tenth as fast.

But at the International Symposium on Computer Architecture in June, the MIT researchers, with colleagues at Quanta Computer, presented experimental evidence showing that if conventional servers executing a distributed computation have to go to disk for data even 5 percent of the time, their performance falls to a level that’s comparable with flash anyway.
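
A simple weighted-average latency model shows why. The access times in the sketch below are order-of-magnitude assumptions chosen for illustration, not measurements from the ISCA paper:

```python
# Illustrative model: average access time when some fraction of requests
# misses DRAM and falls through to slower storage. Latencies are rough
# assumptions (DRAM ~100 ns, flash ~100 us, disk ~10 ms per random access).
DRAM_NS = 100
FLASH_NS = 100_000
DISK_NS = 10_000_000

def effective_latency_ns(miss_rate, slow_ns, fast_ns=DRAM_NS):
    """Weighted average of fast and slow access times."""
    return (1 - miss_rate) * fast_ns + miss_rate * slow_ns

dram_plus_disk = effective_latency_ns(0.05, DISK_NS)  # DRAM cluster, 5% to disk
all_flash = FLASH_NS                                   # every access hits flash

print(f"DRAM with 5% disk misses: ~{dram_plus_disk / 1000:.0f} us per access")
print(f"All-flash storage:        ~{all_flash / 1000:.0f} us per access")
# Under these assumptions the nominally faster DRAM cluster already averages
# several times slower per access than flash once disk is involved.
```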

In fact, they found that for a 10.5-terabyte computation, just 20 servers with a total of 20 terabytes of flash memory could do as well as 40 servers with 10 terabytes’ worth of RAM, while consuming only a fraction as much power. This was even without the researchers’ new techniques for accelerating data retrieval from flash memory.

“This is not a replacement for DRAM [dynamic RAM] or anything like that,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. “But there may be many applications that can take advantage of this new style of architecture. Which companies recognize — everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space.”

Technical details

The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.
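
The idea can be sketched in ordinary code. The following is only a conceptual illustration of in-store preprocessing, not BlueDBM’s actual interface (in the real system the preprocessing runs as FPGA logic, not software):

```python
# Conceptual contrast (not BlueDBM's API): host-side filtering ships every
# byte to the server, while in-store filtering applies the predicate on the
# storage side and returns only the matching records.

def host_side_scan(pages, predicate):
    """Conventional path: every page crosses the link; the host does the work."""
    bytes_moved = sum(len(rec) for page in pages for rec in page)
    hits = [rec for page in pages for rec in page if predicate(rec)]
    return hits, bytes_moved

def in_store_scan(pages, predicate):
    """Near-data path: filtering happens where the data lives, so only
    matching records travel back to the host."""
    hits = [rec for page in pages for rec in page if predicate(rec)]
    bytes_moved = sum(len(rec) for rec in hits)
    return hits, bytes_moved

# Toy usage: look for records tagged 'hit' across three stored pages.
pages = [[b"hit:0001", b"miss:0002"], [b"miss:0003", b"hit:0004"], [b"miss:0005"]]
for scan in (host_side_scan, in_store_scan):
    hits, moved = scan(pages, lambda rec: rec.startswith(b"hit"))
    print(f"{scan.__name__}: {len(hits)} hits, {moved} bytes moved to host")
```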

With hardware contributed by some of their sponsors — Quanta, Samsung, and Xilinx — the researchers built a prototype network of 20 servers. Each server was connected to a field-programmable gate array, or FPGA, a kind of chip that can be reprogrammed to mimic different types of electrical circuits. Each FPGA, in turn, was connected to two half-terabyte — or 500-gigabyte — flash chips and to the two FPGAs nearest it in the server rack.

Because the FPGAs were connected to each other, they created a very fast network that allowed any server to retrieve data from any flash drive. They also controlled the flash drives, which is no simple task: The controllers that come with modern commercial flash drives have as many as eight different processors and a gigabyte of working memory.
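
One way to picture that controller network: if the nearest-neighbor links are assumed to close into a 20-node ring (an assumption made here for illustration; the article only specifies the neighbor wiring), a remote flash read crosses a bounded number of FPGA hops:

```python
# Rough sketch of the controller-to-controller network, assuming the
# nearest-neighbor links among the 20 FPGAs close into a ring (an assumption
# made here for illustration; the article only says each FPGA is wired to its
# two nearest neighbors). It counts the FPGA hops a remote flash read crosses.
NODES = 20

def ring_hops(src, dst, nodes=NODES):
    """Fewest FPGA-to-FPGA links between two nodes on a ring."""
    d = abs(src - dst) % nodes
    return min(d, nodes - d)

print("hops from node 0 to node 1:", ring_hops(0, 1))                             # 1
print("worst case across the ring:", max(ring_hops(0, d) for d in range(NODES)))  # 10
```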

Finally, the FPGAs also executed the algorithms that preprocessed the data stored on the flash drives. The researchers tested three such algorithms, geared to three popular big-data applications. One is image search, or trying to find matches for a sample image in a huge database. Another is an implementation of Google’s PageRank algorithm, which assesses the importance of different Web pages that meet the same search criteria. And the third is an application called Memcached, which big, database-driven websites use to store frequently accessed information.
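
For context, here is a minimal host-side reference for one of those workloads, PageRank by power iteration; in BlueDBM the equivalent computation runs as accelerator logic next to the flash rather than as software like this:

```python
# Minimal reference for one of the three workloads: PageRank by power
# iteration, in its standard textbook form. Shown only for context; BlueDBM
# implements the equivalent traversal in FPGA logic beside the flash.
def pagerank(links, damping=0.85, iterations=20):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for src, outs in links.items():
            targets = outs if outs else pages        # dangling page: spread evenly
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new_rank[dst] += share
        rank = new_rank
    return rank

# Tiny three-page example.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```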

FPGAs are about one-tenth as fast as purpose-built chips with hardwired circuits, but they’re much faster than central processing units using software to perform the same computations. Ordinarily, either they’re used to prototype new designs, or they’re used in niche products whose sales volumes are too small to warrant the high cost of manufacturing purpose-built chips.

But the MIT and Quanta researchers’ design suggests a new use for FPGAs: A host of applications could benefit from accelerators like the three the researchers designed. And since FPGAs are reprogrammable, they could be loaded with different accelerators, depending on the application. That could lead to distributed processing systems that lose little versatility while providing major savings in energy and cost.

“Many big-data applications require real-time or fast responses,” says Jihong Kim, a professor of computer science and engineering at Seoul National University. “For such applications, BlueDBM” — the MIT and Quanta researchers’ system — “is an appealing solution.”

Relative to some other proposals for streamlining big-data analysis, “The main advantage of BlueDBM might be that it can easily scale up to a lot bigger storage system with specialized accelerated supports,” Kim says.


Abstract of BlueDBM: An Appliance for Big Data Analytics

Complex data queries, because of their need for random accesses, have proven to be slow unless all the data can be accommodated in DRAM. There are many domains, such as genomics, geological data, and daily Twitter feeds, where the datasets of interest are 5 TB to 20 TB. For such a dataset, one would need a cluster with 100 servers, each with 128 GB to 256 GB of DRAM, to accommodate all the data in DRAM. On the other hand, such datasets could be stored easily in the flash memory of a rack-sized cluster. Flash storage has much better random access performance than hard disks, which makes it desirable for analytics workloads. In this paper we present BlueDBM, a new system architecture which has flash-based storage with in-store processing capability and a low-latency, high-throughput inter-controller network. We show that BlueDBM outperforms a flash-based system without these features by a factor of 10 for some important applications. While the performance of a RAM-cloud system falls sharply even if only 5% to 10% of the references are to secondary storage, this sharp performance degradation is not an issue in BlueDBM. BlueDBM presents an attractive point in the cost-performance trade-off for Big Data analytics.