Facebook went offline for the second time in two days yesterday. The Thursday outage, which lasted more than two hours for some users, is a tale of an automated database control gone awry, and it illustrates the need for effective testing and change control procedures.
According to a blog post from Facebook describing the details of the issue (http://www.facebook.com/note.php?note_id=431441338919&id=9445547199&ref=mf), "The key flaw that caused this outage to be so severe was an unfortunate handling of an error condition. An automated system for verifying configuration values ended up causing much more damage than it fixed."
That is only half the story, though. The database glitch was triggered by a change to a configuration value pushed to the live site. The error handling is supposed to detect when a configuration value is invalid and replace it with a designated good value. However, the new designated value Facebook had put in place was itself seen as invalid, so every correction attempt produced another invalid value and the system fell into an endless loop.
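To make the mechanics concrete, here is a minimal sketch of that correction loop in Python. The validator, the cache, and the "designated" replacement value are all invented for illustration; Facebook has not published its actual code.

```python
# Hypothetical sketch of the failure mode described above; the names and
# structure are assumptions, not Facebook's actual implementation.

VALID_RANGE = range(1, 100)      # what the verifier considers a valid setting
DESIGNATED_VALUE = -1            # the "corrected" replacement value, which in
                                 # this scenario is itself invalid

def is_valid(value):
    return value in VALID_RANGE

def correct_config(cache):
    """Automated verifier: replace any invalid cached value with the
    designated replacement value."""
    attempts = 0
    while not is_valid(cache["setting"]):
        cache["setting"] = DESIGNATED_VALUE   # the "fix" is also invalid
        attempts += 1
        if attempts > 5:                      # guard so this demo terminates
            raise RuntimeError("correction loop never converges")

cache = {"setting": 0}           # an invalid value triggers the verifier
try:
    correct_config(cache)
except RuntimeError as err:
    print(err)                   # the endless loop described above
```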
Facebook explains, "To make matters worse, every time a client got an error attempting to query one of the databases it interpreted it as an invalid value, and deleted the corresponding cache key. This meant that even after the original problem had been fixed, the stream of queries continued. As long as the databases failed to service some of the requests, they were causing even more requests to themselves. We had entered a feedback loop that didn't allow the databases to recover."
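A rough illustration of the client behavior Facebook describes, again with invented names: any database error is treated as though the cached value were invalid, the cache key is dropped, and every subsequent read goes straight back to the already-struggling database.

```python
# Illustrative sketch only; the cache, the database stub, and the request
# counter are invented for this example.
import random

cache = {"setting": -1}          # the bad value that was pushed to the cache
db_queries = 0

def is_valid(value):
    return value is not None and value > 0

def query_database(key):
    """Stand-in for an overloaded database that fails many requests."""
    global db_queries
    db_queries += 1
    if random.random() < 0.7:                 # overload causes frequent errors
        raise ConnectionError("database overloaded")
    return -1                                 # persistent store holds the bad value too

def read_config(key):
    value = cache.get(key)
    if is_valid(value):
        return value                          # cache hit, no database traffic
    try:
        value = query_database(key)           # try to fetch a corrected value
    except ConnectionError:
        # The bug: a query *error* is treated like an *invalid value*, so the
        # cache key is deleted and the next caller hits the database again,
        # which keeps the database overloaded: a feedback loop.
        cache.pop(key, None)
        return None
    cache[key] = value
    return value

for _ in range(1000):                         # many clients asking for the same key
    read_config("setting")
print("database queries issued:", db_queries)
```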
Ultimately, Facebook was forced to shut the site down and take the affected database cluster offline to break the loop. It eventually allowed users back onto the site, but it has disabled the configuration error correction system that sparked the problem while it investigates new designs to prevent the failure from recurring.
Like the Twitter cross-site scripting worm incident earlier this week, the Facebook outage holds some lessons for IT admins. The Twitter worm exploited a vulnerability that Twitter had already identified and patched, but inadvertently exposed again with a subsequent Web site update.
The Facebook outage was caused by pushing a new configuration value to the live Web site without proper testing and validation. Had Facebook tested the value in a lab environment designed to mirror the production database cluster, it could have caught the invalid value, and the error loop it triggered, before the change took the entire site offline.
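One low-cost safeguard is to run a candidate configuration value through the same validation logic the production correction system applies before the change is deployed. The sketch below assumes a simple integer setting; the validator and the values are hypothetical.

```python
# Minimal pre-deployment check: validate a candidate configuration value with
# the same rule production uses before it is rolled out. Hypothetical example.

def is_valid(value):
    return isinstance(value, int) and 0 < value < 100

def preflight(candidate):
    if not is_valid(candidate):
        raise ValueError(
            f"refusing to deploy {candidate!r}: it fails the same check "
            "the automated correction system applies in production"
        )
    return candidate

preflight(42)            # passes quietly
try:
    preflight(-1)        # rejected before the bad value reaches the live site
except ValueError as err:
    print(err)
```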
Your Web site may not, like Facebook, have half a billion users spending more time on it than on any other destination on the Web, but users, partners, and customers rely on it nonetheless. Follow secure coding practices, and maintain solid patch management and change control procedures so that issues like this are detected and resolved proactively, before they take your site down.
(PCW)