MIT scientists, including one of Indian origin, have developed a new data centre caching method that uses a fraction of the energy and costs less than existing memory storage systems. Most modern websites store data in databases, and since database queries are relatively slow, most sites also maintain so-called cache servers, which store the results of common queries for faster access.
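The lookup pattern described above can be sketched as follows. This is a minimal, hypothetical illustration of cache-aside caching in general, not the MIT system; the `Database` and `CacheServer` classes are stand-ins invented for the example.

```python
class Database:
    """Stand-in for a slow backing store."""
    def __init__(self, rows):
        self.rows = rows
        self.queries = 0          # counts slow database round trips

    def query(self, key):
        self.queries += 1
        return self.rows.get(key)


class CacheServer:
    """Stand-in for a cache holding the results of common queries."""
    def __init__(self, db):
        self.db = db
        self.store = {}

    def get(self, key):
        if key in self.store:         # cache hit: no database work
            return self.store[key]
        value = self.db.query(key)    # cache miss: run the slow query
        self.store[key] = value       # remember the result for next time
        return value


db = Database({"user:42": "Alice"})
cache = CacheServer(db)
print(cache.get("user:42"), db.queries)  # first request misses, queries the database
print(cache.get("user:42"), db.queries)  # repeat request is served from the cache
```

Whether the cache's `store` lives in RAM or in flash changes the speed and cost of each hit, which is the trade-off the researchers exploit.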
A data centre for a major web service such as Google or Facebook might have as many as 1,000 servers dedicated just to caching. Cache servers generally use random-access memory (RAM), which is fast but expensive and power-hungry.
Researchers including Professor Arvind from Massachusetts Institute of Technology (MIT) in the US developed the new system for data centre caching that instead uses flash memory, the kind of memory used in most smartphones. Per gigabyte of memory, flash consumes about 5 per cent as much energy as RAM and costs about one-tenth as much. It also has about 100 times the storage density, meaning that more data can be crammed into a smaller space.
In addition to costing less and consuming less power, a flash caching system could dramatically reduce the number of cache servers required by a data centre. The drawback to flash is that it is much slower than RAM. However, flash access is still much faster than human reactions to new sensory stimuli.
Users would not notice the difference between a request that takes 0.0002 seconds to process and one that takes 0.0004 seconds because it involves a flash query.