

Data now stream from daily life: from phones and credit cards and televisions and computers; from the infrastructure of cities; from sensor-equipped buildings, trains, buses, planes, bridges, and factories. The data flow so fast that the total accumulation of the past two years (a zettabyte) dwarfs the prior record of human civilization.

"There is a big data revolution," says Weatherhead University Professor Gary King. But it is not the quantity of data that is revolutionary. "The big data revolution is that now we can do something with the data." The revolution lies in improved statistical and computational methods, not in the exponential growth of storage or even computational capacity, King explains. The doubling of computing power every 18 months (Moore's Law) "is nothing compared to a big algorithm": a set of rules that can be used to solve a problem a thousand times faster than conventional computational methods could. One colleague, faced with a mountain of data, figured out that he would need a $2-million computer to analyze it. Instead, King and his graduate students came up with an algorithm within two hours that would do the same thing in 20 minutes, on a laptop: a simple example, but illustrative.
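To make the contrast concrete, here is a toy sketch in Python; it illustrates the general idea, not King's actual algorithm, and the function names are invented for the example. Finding duplicate values by comparing every pair takes time proportional to the square of the input size, while a single pass through a hash set takes time proportional to the input size itself. On large inputs the better algorithm wins by orders of magnitude on the same hardware.

import random
import time

def duplicates_naive(values):
    # Compare every pair: roughly n * n / 2 comparisons.
    found = set()
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                found.add(values[i])
    return found

def duplicates_fast(values):
    # One pass with a hash set: roughly n operations.
    seen, found = set(), set()
    for v in values:
        if v in seen:
            found.add(v)
        else:
            seen.add(v)
    return found

data = [random.randrange(2_500) for _ in range(5_000)]

t0 = time.perf_counter()
slow_answer = duplicates_naive(data)
print(f"pairwise scan: {time.perf_counter() - t0:.2f} s")

t0 = time.perf_counter()
fast_answer = duplicates_fast(data)
print(f"hash-set scan: {time.perf_counter() - t0:.4f} s")

# Identical results, wildly different runtimes.
assert slow_answer == fast_answer

The gap widens as the data grow: doubling the input quadruples the work for the pairwise scan but only doubles it for the hash-set scan, which is why a better algorithm can outrun a bigger computer.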

New ways of linking datasets have played a large role in generating new insights.
