
Facebook Global Memcache Infrastructure

This entry in our video series on distributed computing summarizes how Facebook scaled its memcache infrastructure to support a massive, globally distributed read load of a billion reads per second. Rajesh Nishtala, a member of the team that developed Facebook's memcache infrastructure, explains how Facebook grew from a system with a few databases and no memcache, to a few memcache servers, to globally distributed clusters of memcache servers.

Memcache is an in-memory caching system used to reduce the number of reads issued to a database. Because Facebook is a read-heavy site with over 1.79 billion monthly active users, memcache significantly improves speed and helps prevent database crashes. Facebook's front-end clusters consist of several web servers and several memcache servers, which read from and write to a storage cluster containing several databases. However, this setup can produce problems such as incast congestion, the all-to-all communication limits of horizontal scaling, maintaining cache consistency, and excessive packet counts.
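
To make the role of memcache concrete, here is a minimal sketch of the look-aside caching pattern described above: reads check memcache first and fall back to the database on a miss, while writes go to the database and invalidate the cached copy so the next read repopulates it. This is an illustrative sketch, not Facebook's actual code; it assumes the pymemcache Python client and uses a stand-in in-memory dictionary (db_read/db_write) in place of the storage cluster.

```python
from pymemcache.client.base import Client

# Stand-in for the storage cluster; purely illustrative.
_FAKE_DB = {"user:42": b"Alice"}

def db_read(key):
    return _FAKE_DB.get(key)

def db_write(key, value):
    _FAKE_DB[key] = value

# Assumes a memcached server listening on localhost:11211.
cache = Client(("localhost", 11211))

def read(key):
    """Serve reads from memcache; on a miss, read the database and demand-fill the cache."""
    value = cache.get(key)
    if value is None:
        value = db_read(key)      # cache miss: go to the storage cluster
        if value is not None:
            cache.set(key, value) # populate the cache for subsequent reads
    return value

def write(key, value):
    """Write to the database, then delete the cached entry so it is refreshed on the next read."""
    db_write(key, value)
    cache.delete(key)
```

Because most traffic is reads, the vast majority of requests are absorbed by the memcache tier and never reach the databases.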

Below, we have summarized Rajesh’s primary talking points about the problems that occurred at each stage of scaling and the solutions that were developed to ensure a consistent user experience despite heavy reads across widely dispersed geographical areas.
