Continuous Delivery in a .NET Shop with Puppet Enterprise

This post originally appeared on September 22, 2014. We're republishing it as part of our series on change agents of ops.

When the topic of continuous delivery of builds into the development environment at IP Commerce was first brought up, we all agreed it was a worthy goal. But from where we were standing at that point, the idea sounded very pie-in-the-sky.

We are primarily a Microsoft .NET development shop, developing both IIS-hosted and self-hosted Windows services along with supporting MVC (model-view-controller) websites. We were using the longtime standard for source control, Subversion, with CruiseControl.NET on our build server. The build process was automated to the point that it ran our unit tests as a gate for a successful build and reliably produced .msi installation packages for our products, but beyond that it was nearly all manual.

Since the majority of the business logic in our products lives in long-running, self-hosted Windows services, that is where most of the development happens. In the lower development and quality assurance environments, installing a new service build required grabbing the .msi from the build server, remoting into the target server, stopping the service, uninstalling the currently installed version, running the .msi and starting the service again. Oh wait, did you forget to copy the old config file before running the uninstaller so you could diff it against the newly laid-down one? It was a mess of potential pitfalls that we were tired of dealing with.

Thankfully for the development team, IP Commerce was going through some changes and we were given the opportunity to aim for that pie in the sky. Around 12 months later we hit it: continuous delivery of software builds into our development environment.

Faster delivery, one click to production

In some ways it is hard to quantify the benefits we have seen from the new process because there are so many. The old headaches have become a piece of nostalgia we now laugh about. It feels like we cut our develop-and-deliver time by 50 percent. And while the hard metrics likely put that figure somewhere south of 50 percent, the feeling is still great. When delivering new features and other changes was slow, we had to choose among quality of output, time to deliver and number of features; we might get two of those, but never all three. Now we don't have to sacrifice any of these important elements in a given release. And it’s just one click to get it into production!

The systems we use to accomplish all of this are:

  • Atlassian Confluence with Gliffy - Documentation with illustrations, shared instantly (if desired)
  • Atlassian Jira with the Agile plugin - Ticket system at the center of the development and delivery process
  • Git - Source control
  • GitLab - Easy code collaboration and peer reviews
  • Atlassian Bamboo - Build server leveraging MSBuild
  • OpenCover - Unit test coverage reports as an artifact of the build process
  • ProGet - NuGet server for internally shared binaries
  • Chocolatey - NuGet-based package manager used to deliver our .msi packages
  • MySQL - Storage for environment-specific configuration variables
  • Puppet - Installation of all product software across all environments

At the root of everything we changed is our source control system. Subversion served us well over the years, but when looking to improve and modernize our systems we decided the functionality of Git was the right fit for us. Our Git workflow is based on feature branches and release branches. At the beginning of a sprint we create the branches in our repositories that will be continuously delivered into our development environment.
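
To give a rough idea of the branching model, the start of a sprint looks something like the commands below. This is only a sketch; the branch names are illustrative, not our actual naming convention.

    # Create the sprint's release branch from the mainline and publish it
    # (branch names are illustrative)
    git checkout master
    git checkout -b release/2014.09
    git push -u origin release/2014.09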

The IP Commerce development workflow

Coding tasks done during a sprint are always represented by a Jira ticket. When starting work on a ticket, the first thing we do is create a local branch in Git using the Jira ticket number as the branch name (Step 1). This gives the developer a clean representation of the release branch in which to code.
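
In practice, Step 1 is a single Git command. The ticket number and release branch name below are placeholders, not real values from our tracker:

    # Create a local feature branch named after the Jira ticket,
    # based on the current release branch (names are placeholders)
    git fetch origin
    git checkout -b IPC-1234 origin/release/2014.09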

Bamboo and Confluence (both Atlassian products) and GitLab are all able to recognize Jira ticket numbers. Using the ticket number as the branch name, and in commits to a branch, allows these systems to infer a link to the Jira ticket. GitLab picks up on the ticket number in merge requests and builds out a link to the ticket, allowing anybody to easily navigate from the code change set to the development task ticket and through to the requirements in Confluence if necessary. Bamboo recognizes the ticket numbers and links the build output page to and from Jira. Bamboo also provides a rollup of the features being delivered to an environment, based on the previously installed version (not all that useful in the development environment, since we are continuously installing builds, but very useful for the quality assurance environment, where not every single build is installed). This helps avoid confusion about which build includes any particular feature.

The distributed nature and excellent merging capabilities of Git allow us to commit work to our feature branches incrementally (Step 2), while leaving the main release branch unaltered until the functionality is complete. When the developer is satisfied that the feature and unit tests are performing as intended, the branch is pushed from the local repository up to our Git server (Step 3). At this point our build server, Bamboo, sees the new branch (or that an existing feature branch has been updated) and starts the build process directly on the feature branch. These feature branch builds are the first gate the code must pass through before it can be merged into the current release branch.
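
Steps 2 and 3 boil down to incremental commits followed by a push of the feature branch. Again, the ticket number and commit message here are placeholders:

    # Commit work incrementally on the feature branch, referencing the ticket
    git commit -am "IPC-1234 handle timeouts from the remote provider"
    # Push the branch to the Git server so Bamboo can see it and build it
    git push -u origin IPC-1234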

By running the build process on the feature branch, we have the opportunity to get ahead of any problems if the build is not a success. Since the build process also runs all of our unit tests, it will identify any tests that are not passing, allowing the developer to fix them before they have the chance to break the release branch. If a unit test fails, the entire build is considered a failure; when this happens in a release branch, the release is essentially broken until the test is fixed. Running the build process on our feature branches does increase the load on our build server, but it identifies build problems before they can pollute the main release branch.

When Bamboo has successfully built the feature branch, verified that all unit tests are passing and produced unit test coverage reports using OpenCover (Step 4), the developer files a merge request using GitLab (Step 5) and assigns it to another team member for peer review. The UI in GitLab easily shows the difference between the feature branch and the release branch target for the merge. Each merge request can have a threaded discussion targeting specific lines of code if necessary, allowing communication about potential peer review issues to be seen by any team member.
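
As a rough illustration of what the coverage step involves, an OpenCover run wrapping a unit test runner looks something like the command below. The choice of NUnit, the assembly names and the filters are illustrative assumptions, not our exact configuration:

    # Run the unit tests under OpenCover to produce a coverage report (Step 4)
    # (runner, assembly names and filters are illustrative)
    OpenCover.Console.exe -register:user -target:"nunit-console.exe" -targetargs:"MyProduct.Tests.dll /noshadow" -filter:"+[MyProduct*]* -[*.Tests]*" -output:coverage.xml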

Our former process also required peer code reviews, but these were done after the release was code-complete. The old process required the developer to manually identify all the files that had changed. The peer reviewer could see what had changed, but it was difficult to determine why something had changed. If the reviewer wanted to know why, they had to figure it out by digging through check-ins in Subversion, or asking the developer. Sure, you could always diff the current version of the file with what the file looked like before the sprint began, but code changes relating to specific functionality were not always apparent, especially if a class had changes dictated by more than one requested feature. GitLab’s UI summarizes all the changes around a particular feature in a way that’s easy to read. Changes in one class are stacked up with the changes in other classes, making it much more apparent why certain pieces of code have changed in relation to the feature being implemented.

It is great that we have been able to ditch the incredibly boring process of spending several hours (or more) reading and reviewing possibly dozens of code files without being able to easily see what changed or exactly why it changed. Equally valuable is having peer review serve as a gate just to get into the release branch. It all comes down to having the opportunity to identify potential problems much earlier in the delivery cycle, before they make their way into the public-facing environments and cause problems for customers.

When the code has passed peer review (Step 6), the reviewer approves the merge request and documents the Jira ticket with review details and a link back to the GitLab merge request page. The Jira ticket now represents a summary of everything that happened during development. Bamboo sees that a change to the current release branch has happened and kicks off the build process, which upon successful completion triggers Puppet to install the updated components.
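
On the Puppet side, that last step can be as simple as a package resource pulling the newly built package from the internal NuGet/Chocolatey feed, plus a service resource to keep the Windows service running. The sketch below uses hypothetical names and assumes the Chocolatey package provider is available; it is not our actual manifest:

    # Install the latest build of the service from the internal feed (dev environment)
    # and make sure the Windows service is running (names are hypothetical)
    package { 'ipc-transaction-service':
      ensure   => latest,
      provider => 'chocolatey',
    }

    service { 'IPCTransactionService':
      ensure  => running,
      enable  => true,
      require => Package['ipc-transaction-service'],
    }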

Our completed Jira tickets are now the center of what is essentially a self-documenting development process. If a requirement had to change for any reason during development, the discussion about that change very likely happened in a Jira comment thread, so even if the original documented requirement never gets updated, the linked history of how the ticket was implemented captures the details of those changes. The ticket structure will have links to product requirements stored in Confluence, discussion threads related to the implementation of the requirements, quality assurance tickets defining the test process and any bugs found, and links to the GitLab merge requests for all the code-level details of what went down.

The best tool for the job

At IP Commerce, we like to think of ourselves as a “best tool for the job” kind of place. We write .NET code all day long. We love Visual Studio. We think IIS is a perfectly fine web server, and we host our websites and services on Windows machines. But we consider ourselves to be a .NET shop, not necessarily a Microsoft shop. For example, after evaluating our options, the most recent addition to our products is Redis running on CentOS Linux for caching calls to remote service providers, as well as centralized session state storage for our websites.

I have not used Microsoft Team Foundation Server myself, but my teammates who have used it tell me that our system is better. Are we perhaps biased in favor of our system because we built it? Of course, but let me say that after the change, our sales and marketing team commented on how quickly and reliably we were now turning out product features. That's an interesting barometer for measuring success: when stakeholders who aren't directly involved in a process notice a change for the better, something has certainly gone right.

Jason Moorehead is the application architect at IP Commerce.

Learn More

  • Check out our ebook on continuous delivery, available for free download.
  • Read the case study about how IP Commerce uses Puppet Enterprise to enable its DevOps practices.