MapReduce has been gaining some popularity lately, with parallel computing libraries adopting its patterns and document databases like MongoDB using it.
MapReduce is a programming model that processes large amounts of data as key/value pairs: a map step breaks the input into many intermediate key/value pairs, which are then grouped by key and reduced into final result sets, with the work spread across multiple processors.
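The classic illustration is counting words. Here is a minimal, single-process Python sketch of the idea (the function names and sample documents are my own, purely for illustration); in a real MapReduce system each of these phases would run in parallel across many machines:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for every word.
    for word in document.split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all intermediate values by their key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: combine the values for one key into a final result.
    return (key, sum(values))

documents = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["the"])  # → 3
print(counts["fox"])  # → 2
```

Because the map calls are independent of each other, and the reduce calls are independent once the pairs are grouped, both phases can be farmed out to separate processors with no coordination beyond the shuffle step.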
I was going to write a long, boring blog post about how it works, but then I realized it wouldn’t be as good or as clear as the source itself.
Here is the original paper by Jeffrey Dean and Sanjay Ghemawat from December 2004: MapReduce: Simplified Data Processing on Large Clusters
11 Jun 2010 6:03 PM