
Google File System II: Dawn of the Multiplying Master Nodes

December 18, 2010

Updated As its custom-built file system strains under the weight of an online empire it was never designed to support, Google is brewing a replacement.

Apparently, this overhaul of the Google File System is already under test as part of the "Caffeine" infrastructure the company announced earlier this week.

In an interview with the Association for Computing Machinery (ACM), Google’s Sean Quinlan says that nearly a decade after its arrival, the original Google File System (GFS) has done things he never thought it would do.

"Its staying power has been nothing short of remarkable given that Google’s operations have scaled orders of magnitude beyond anything the system had been designed to handle, while the application mix Google currently supports is not one that anyone could have possibly imagined back in the late 90s," says Quinlan, who served as the GFS tech leader for two years and remains at Google as a principal engineer.

But GFS supports some applications better than others. Designed for batch-oriented applications such as web crawling and indexing, it’s all wrong for applications like Gmail or YouTube, meant to serve data to the world’s population in near real-time.

"High sustained bandwidth is more important than low latency," read the original GPS research paper. "Most of our target applications place a premium on processing data in bulk at a high rate, while few have stringent response-time requirements for an individual read and write." But this has changed over the past ten years – to say the least – and though Google has worked to build its public-facing apps so that they minimize the shortcomings of GFS, Quinlan and company are now building a new file system from scratch.

With GFS, a master node oversees data spread across a series of distributed chunkservers. Chunkservers, you see, store chunks of data, and each chunk is about 64 megabytes.
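To make that division of labor concrete, here is a minimal sketch of the arrangement in Python. A single master keeps file-to-chunk metadata in memory while chunkservers hold the 64MB chunks themselves; the class names, round-robin placement, and three-way replication below are illustrative assumptions rather than Google's actual code.

```python
# Minimal sketch of the layout described above: one master holds all metadata
# in memory, chunkservers hold the actual 64MB chunks. Class names, round-robin
# placement, and three-way replication are illustrative assumptions only.

CHUNK_SIZE = 64 * 1024 * 1024  # 64MB per chunk in classic GFS

class Master:
    def __init__(self):
        self.file_to_chunks = {}   # file path -> ordered list of chunk handles
        self.chunk_locations = {}  # chunk handle -> chunkserver addresses

    def create_file(self, path, size_bytes, chunkservers):
        """Assign enough 64MB chunk handles to cover the file and record
        which chunkservers hold each replica."""
        n_chunks = -(-size_bytes // CHUNK_SIZE)  # ceiling division
        handles = [f"{path}#chunk{i}" for i in range(n_chunks)]
        self.file_to_chunks[path] = handles
        for i, handle in enumerate(handles):
            # Naive round-robin placement across chunkservers, 3 replicas each.
            self.chunk_locations[handle] = [
                chunkservers[(i + r) % len(chunkservers)] for r in range(3)
            ]
        return handles

    def lookup(self, path, offset):
        """A client asks the master only for metadata: which chunk covers the
        offset and where its replicas live. Data is then read from a chunkserver."""
        handle = self.file_to_chunks[path][offset // CHUNK_SIZE]
        return handle, self.chunk_locations[handle]

master = Master()
master.create_file("/crawl/part-0001", 200 * 1024 * 1024, ["cs1", "cs2", "cs3", "cs4"])
print(master.lookup("/crawl/part-0001", 130 * 1024 * 1024))
```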

The trouble – at least for applications that require low latency – is that there’s only one master. "One GFS shortcoming that this immediately exposed had to do with the original single-master design," Quinlan says. "A single point of failure may not have been a disaster for batch-oriented applications, but it was certainly unacceptable for latency-sensitive applications, such as video serving."

In the beginning, GFS even lacked an automatic failover scenario if the master went down. You had to manually restore the master, and service vanished for up to an hour. Automatic failover was later added, but even then, there was a noticeable service outage. According to Quinlan, the lapse started out at several minutes and now it’s down to about 10 seconds.

Which is still too high.

"While these instances – where you have to provide for failover and error recovery – may have been acceptable in the batch situation, they’re definitely not OK from a latency point of view for a user-facing application," Quinlan explains.

But even when the system is running well, there can be delays. "There are places in the design where we’ve tried to optimize for throughput by dumping thousands of operations into a queue and then just processing through them," he continues. "That leads to fine throughput, but it’s not great for latency. You can easily get into situations where you might be stuck for seconds at a time in a queue just waiting to get to the head of the queue."
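The sketch below illustrates that trade-off with an invented cost model: batching amortizes per-batch overhead, so aggregate throughput is healthy, but an operation near the back of a deep queue still waits for seconds.

```python
# Sketch of the throughput-versus-latency trade-off with an invented cost model:
# batch processing amortizes fixed overhead, so aggregate throughput looks fine,
# but an operation near the back of a deep queue still waits for seconds.

BATCH_SIZE = 1000
PER_BATCH_OVERHEAD = 0.5  # seconds of fixed work per batch (assumed)
PER_OP_COST = 0.001       # seconds per individual operation (assumed)

def wait_time(queue_len, my_position):
    """Seconds until the operation at my_position (0-based) finishes, when the
    queue is drained one batch at a time."""
    assert 0 <= my_position < queue_len
    elapsed, processed = 0.0, 0
    while processed <= my_position:
        batch = min(BATCH_SIZE, queue_len - processed)
        elapsed += PER_BATCH_OVERHEAD + batch * PER_OP_COST
        processed += batch
    return elapsed

# The last op in a 5,000-deep queue waits ~7.5s, even though overall
# throughput works out to roughly 660 operations per second.
print(wait_time(queue_len=5000, my_position=4999))
```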

GFS dovetails well with MapReduce, Google’s distributed data-crunching platform. But it seems that Google has jumped through more than a few hoops to build BigTable, its (near) real-time distributed database. And nowadays, BigTable is taking more of the load.

"Our user base has definitely migrated from being a MapReduce-based world to more of an interactive world that relies on things such as BigTable. Gmail is an obvious example of that. Videos aren’t quite as bad where GFS is concerned because you get to stream data, meaning you can buffer. Still, trying to build an interactive database on top of a file system that was designed from the start to support more batch-oriented operations has certainly proved to be a pain point."

The trouble with file counts

The other issue is that Google’s single master can handle only a limited number of files. The master node stores the metadata describing the files spread across the chunkservers, and that metadata can’t be any larger than the master’s memory. In other words, there’s a finite number of files a master can accommodate.
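A back-of-the-envelope calculation shows the shape of that limit; the RAM size and per-file metadata cost below are assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope version of the ceiling: every file's metadata lives in
# the single master's RAM, so RAM bounds the file count no matter how much disk
# the chunkservers have. Both figures below are assumptions, not Google's numbers.

MASTER_RAM_BYTES = 64 * 2**30    # assume a master with 64GB of RAM
METADATA_PER_FILE_BYTES = 400    # assumed bytes of namespace + chunk metadata per file

max_files = MASTER_RAM_BYTES // METADATA_PER_FILE_BYTES
print(f"rough ceiling: ~{max_files:,} files")  # about 170 million files
```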

With its new file system – GFS II? – Google is working to solve both problems. Quinlan and crew are moving to a system that uses not only distributed slaves but distributed masters. And the slaves will store much smaller files. The chunks will go from 64MB down to 1MB.

This takes care of that single point of failure. But it also handles the file-count issue – up to a point. With more masters, you get not only redundancy but also room for more metadata. "The distributed master certainly allows you to grow file counts, in line with the number of machines you’re willing to throw at it," Quinlan says. "That certainly helps."
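One simple way to picture a distributed master is to shard the namespace, hashing each file path to one of several master machines so that each holds only a slice of the metadata. The scheme below is purely illustrative and not a description of Google's new design.

```python
# Illustrative sharding scheme, not Google's actual design: hash each file path
# to one of several masters so that each master holds only a slice of the
# metadata. File capacity then grows with the number of masters you add.
import hashlib

MASTERS = ["master-0", "master-1", "master-2", "master-3"]

def master_for(path):
    """Pick the master responsible for a file's metadata by hashing its path."""
    digest = hashlib.md5(path.encode()).digest()
    return MASTERS[int.from_bytes(digest[:4], "big") % len(MASTERS)]

# Each master now carries roughly 1/len(MASTERS) of the metadata.
for path in ["/gmail/user123/msg1", "/video/abc.flv", "/crawl/part-0001"]:
    print(path, "->", master_for(path))
```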

And with files shrunk to 1MB, Quinlan argues, you have more room to accommodate another ten years of change. "My gut feeling is that if you design for an average 1MB file size, then that should provide for a much larger class of things than does a design that assumes a 64MB average file size. Ideally, you would like to imagine a system that goes all the way down to much smaller file sizes, but 1MB seems a reasonable compromise in our environment."
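The rough arithmetic behind that gut feeling: for the same volume of stored data, a 1MB average file size means tracking about 64 times as many files, and therefore about 64 times the metadata, which is the growth a distributed master is meant to absorb. The data volume in the sketch is arbitrary.

```python
# Rough arithmetic behind that gut feeling: for the same volume of stored data,
# a 1MB average file size means tracking about 64 times as many files, and so
# about 64 times the metadata. The 10PB figure is an arbitrary illustration.

DATA_BYTES = 10 * 2**50  # say the cluster stores 10PB

for avg_file_mb in (64, 1):
    files = DATA_BYTES // (avg_file_mb * 2**20)
    print(f"{avg_file_mb:>2}MB average file size -> ~{files:,} files to track")
```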

Why didn’t Google design the original GFS around distributed masters? This wasn’t an oversight, according to Quinlan.

"The decision to go with a single master was actually one of the very first decisions, mostly just to simplify the overall design problem. That is, building a distributed master right from the outset was deemed too difficult and would take too much time," Quinlan says.

"Also, by going with the single-master approach, the engineers were able to simplify a lot of problems. Having a central place to control replication and garbage collection and many other activities was definitely simpler than handling it all on a distributed basis."

So Google was building for the short term. And now it’s ten years later. Definitely time for an upgrade.

"There’s no question that GFS faces many challenges now," Quinlin says. "Engineers at Google have been working for much of the past two years on a new distributed master system designed to take full advantage of BigTable to attack some of those problems that have proved particularly difficult for GFS."

In addition to running the Google empire, GFS, MapReduce, and BigTable have spawned an open-source project, Hadoop, that underpins everything from Yahoo! to Facebook to – believe it or not – Microsoft Bing.

And of course, Quinlan believes that the sequel will put the original to shame. "It now seems that beyond all the adjustments made to ensure the continued survival of GFS, the newest branch on the evolutionary tree will continue to grow in significance over the years to come."

Update: This story has been updated to show that Google’s new file system is apparently part of the new "Caffeine" infrastructure that the company announced earlier this week.

 
