Published on 18 September 2013

Amazon's Leo Zhadanovsky gave a great talk at PuppetConf 2013 on his work with Barack Obama's presidential campaign, explaining why his organization went with cloud infrastructure and how it managed scale challenges with Puppet Labs technologies.

Leo is a senior solutions architect on the AWS team that works with state and local governments and educational organizations. He started working for the Democratic National Committee in 2009 and was embedded in the Obama for America campaign through the election.

His 30-minute talk ranged from the challenges of bootstrapping an IT operation in the money-starved early stages of a campaign to coping with the arrival of a hurricane three weeks before the election, which required a last-minute replication of the infrastructure and all its data to safety on the West Coast.

The 40-developer team he worked on was responsible for a huge operation. Once running, the campaign's donation platform was around the 30th biggest e-commerce operation in the world. Its responsibilities included over 200 distinct applications, ranging from mobile tools used by campaign workers to large-scale analytics projects. The organization handled hundreds of terabytes of data across thousands of servers.

All of that was managed in the pressure cooker of an election cycle's unforgiving schedule. "The election day was constitutionally mandated, so we really had to stick to deadlines," he said.

Choosing Cloud Infrastructure

In the early stages of a campaign, Leo explained, the fund-raising engine isn't fully spun up. The development team was a mix of volunteers and "close to volunteers" drawn from all over, many of them giving up Silicon Valley jobs to move to Chicago and work on the campaign. "We basically lived there," he said. "It was crowded. You didn't get much personal space. Business as usual for a tech startup." That lack of space played a part in IT decisions.

"The classic approach would have just been 'buy servers,'" he said, but with space at a premium and with a need to quickly move into production, "you can't wait a month for a server to arrive," then deal with the challenges of less and less space as the campaign scaled up.

"We thought about it and decided cloud computing was the way we should go," he said. "In the first few months of a campaign, you're not raising that much money, so you don't have a lot of money to just buy servers. " He said scale requirements are also low at the onset, and it would have been foolish to purchase anticipated capacity before needing to put it into production.

Leo said going with a cloud infrastructure also gave the team self-service provisioning. "We didn't have to call anybody to spin up a server or instance ... we could scale up and down easily and automatically."

Finally, cloud infrastructure allowed for a more iterative approach to developing the campaign's applications. "We were able to experiment ... We had about 200 apps, some of them failed, but that was fine because we didn't buy any resources for them."

If you're curious about just what all this looked like once fully operational, the team put together a map of the whole thing, available at http://awsofa.info.

"There are some easter eggs in there," said Leo. "We really liked Take 5 candy bars. If you zoom in far enough, they're in there."

Puppet at Cloud Scale

Once the campaign was fully operational, scale was a challenge at all levels, including IT automation and infrastructure management. The team developed a set of practices for scaling its Puppet deployment. Leo said those practices included:

  • Using CloudInit (available with Ubuntu and Amazon Linux) to bootstrap new instances, taking advantage of its built-in Puppet support to ease setup (see the first sketch after this list)
  • Autosigning certificate requests based on certificate names
  • Establishing a base class applied to every node in the infrastructure
  • Using run stages to order resources (see the second sketch after this list)
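
To make the first and last of these concrete, here are two hedged sketches. Neither is taken from the campaign's actual code; every hostname, class name, and stage name below is invented for illustration. CloudInit ships a puppet module, so an instance's user data can install the Puppet agent and point it at a master as the machine boots. The %i and %f placeholders expand to the instance ID and FQDN, which yields predictable certificate names that a glob in the master's autosign.conf (say, *.campaign.internal) can match for autosigning.

```yaml
#cloud-config
# Hypothetical user data for a new instance. The server name is an
# assumption; %i expands to the instance ID, %f to the FQDN.
puppet:
  install: true
  conf:
    agent:
      server: "puppetmaster.campaign.internal"
      certname: "%i.%f"
```

A base class plus run stages might then look roughly like this in the site manifest, with repository setup forced into an early stage so that package installs later in the run can already see the private repos:

```puppet
# site.pp -- a minimal sketch; the 'repos' and 'base' classes are assumptions.

# Declare a 'pre' stage that is applied before the default 'main' stage.
stage { 'pre':
  before => Stage['main'],
}

# Repository definitions, assigned to the early stage.
class repos {
  yumrepo { 'campaign-apps':
    baseurl  => 'https://s3.amazonaws.com/example-repo-bucket/el6',
    enabled  => 1,
    gpgcheck => 0,
  }
}
class { 'repos':
  stage => 'pre',
}

# A base class shared by every node: common packages and services.
class base {
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntpd':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],
  }
}

node default {
  include base
}
```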

For application deployment, the team built RPM and Debian packages and hosted them in private repositories served from Amazon S3 buckets. By setting ensure => latest on their Puppet-managed package resources, they could roll out application updates simply by moving the newest versions into the repositories and letting Puppet handle the rest.
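
As a hedged sketch of that pattern (the package name here is invented, and the 'campaign-apps' repo echoes the previous sketch):

```puppet
package { 'donate-web':
  ensure  => latest,                   # each agent run upgrades to the newest version in the repo
  require => Yumrepo['campaign-apps'],
}

service { 'donate-web':
  ensure    => running,
  subscribe => Package['donate-web'],  # restart when a new package version lands
}
```

With this in place, a deploy is just building a new package, pushing it to the S3-hosted repository, and regenerating the repo metadata; every node converges on its next Puppet run.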

To address the scale and volatility of the cloud, the team maintained groups of puppet masters in different availability zones, bootstrapping new masters either from existing masters or from packages stored in an S3 bucket.
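
One plausible shape for that arrangement (the DNS alias below is an assumption, not the campaign's actual naming) is to give each availability zone its own master endpoint and point new instances at the local one through their user data:

```yaml
#cloud-config
# Hypothetical per-AZ agent configuration: instances launched in
# us-east-1a talk to the master group in their own zone.
puppet:
  conf:
    agent:
      server: "puppet.us-east-1a.campaign.internal"
      certname: "%i.%f"
```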

There's a lot more in the talk, including a great anecdote about how quickly the team was able to migrate its cloud infrastructure to West Coast availability zones when Hurricane Sandy threatened the operation.
