Archived: Agile continuous delivery in the cloud – Part 1

This blog builds on the previous blue/green article and is part 1 of 4 posts explaining how we use the latest techniques to continuously deliver new functionality and upgrades to the TfL website. Our highest priority is to keep the website up to date and continuously improving, and to reduce the cycle time between an idea and usable software. Customers expect real-time updates from us, so we need to deliver new features to the website constantly. We therefore use a variety of the latest development practices, including agile, continuous integration and continuous delivery.

This helps us release software and digital services quickly and with reduced manual intervention, whether that is an improved web page, a better user experience (UX), a bug fix, or an update. We have found that frequent small changes to the website are not only less risky, but also give us faster and more focused feedback about what works and what doesn’t. By coupling this with an agile development strategy and cloud-hosted, “pay-on-demand” environments, we have significantly accelerated the rate at which we can bring new services and features to the website.

We use Bitbucket as a management interface for Git (our source code repository), supporting governance, code reviews and pull requests.

What is it? Continuous delivery is about automating as much as possible of the software development cycle. It combines three core disciplines: (1) continuous integration (merging all developer working copies with a shared trunk, early and often), (2) continuous, mostly automated testing, and (3) continuous release and deployment. It is a high-discipline approach and requires significant cultural change within traditional development teams.
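To make this concrete, here is a minimal, hypothetical sketch of the first two disciplines working as a pre-merge gate: a developer’s working copy is only integrated with the shared trunk once the automated tests pass. The commands and branch name are illustrative assumptions, not our actual tooling.

```python
# Hypothetical CI gate (illustration only, not TfL's real tooling):
# integrate a working copy with the shared trunk early and often,
# but only when the automated tests pass.
import subprocess
import sys

def run(cmd):
    print("-->", " ".join(cmd))
    return subprocess.call(cmd)

def integrate():
    if run(["git", "fetch", "origin"]) != 0:
        sys.exit("Could not reach the shared repository.")
    if run(["python", "-m", "pytest"]) != 0:         # discipline 2: automated testing
        sys.exit("Tests failed; nothing is merged.")
    if run(["git", "merge", "origin/master"]) != 0:  # discipline 1: merge with trunk
        sys.exit("Merge conflict; resolve and re-run.")
    print("Working copy integrated; candidate ready for release and deployment.")

if __name__ == "__main__":
    integrate()
```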

We have been through this journey with the release of the new cloud-based responsive website, and now have the ability and capability for continuous delivery. Our cloud environments use standard Amazon Web Services (AWS) components (e.g. EC2, S3, EBS, IAM, VPC, Route53 and CloudWatch) and make extensive use of automation technology (e.g. Auto Scaling, CloudFormation, Puppet). This makes provisioning new environments quick and easy and, because infrastructure resources can be recycled, makes efficient and cost-effective use of cloud services.
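To illustrate what automated provisioning can look like in practice, here is a rough sketch using boto3, the AWS SDK for Python. The template file, stack name and parameters are made-up placeholders rather than our real configuration.

```python
# Illustrative sketch with boto3 (the AWS SDK for Python); the template
# path, stack name and parameters are hypothetical placeholders.
import boto3

cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

with open("website-stack.template.json") as f:
    template_body = f.read()

# Create a complete, disposable environment from a versioned template.
cloudformation.create_stack(
    StackName="website-green",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "green"}],
    Capabilities=["CAPABILITY_IAM"],  # the stack also creates IAM resources
)

# Block until every resource in the environment is fully provisioned.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="website-green")
print("Environment ready; it can be recycled when no longer needed.")
```

Because the whole environment is described in a versioned template, tearing it down and recreating it is as cheap as running the script again, which is what makes “pay-on-demand” environments practical.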

Currently, we manage weekly deployments of code, new features, bug fixes, reference data and other digital products to the website without compromising quality or availability. We do this by automating as much as possible, maintaining version control, and employing a parallel deployment/release pipeline in which our entire system and website is recreated. This is made possible by the versatility of automatic provisioning and by integrating our entire cloud infrastructure with the development process.

“For us, continuous delivery means reduced manual intervention, shorter feedback cycles and the capability to release at any time – everyone works together, with a sense of urgency and a “can-do” attitude.”

Five key pillars of continuous delivery

  1. Build: compile, unit test, version, package
  2. Quality: code reviews, etc.
  3. Test: acceptance, regression and performance tests
  4. Provision environments – the global release pipeline
  5. Blue/green deployment to the live production website (sketched as a pipeline below)
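As a rough, hypothetical sketch (placeholder commands, not our actual build scripts), the five pillars can be chained into a single pipeline run in which every stage must pass before the release candidate reaches the blue/green deployment step:

```python
# Hypothetical sketch of the five pillars as sequential pipeline gates.
# Stage commands are placeholders, not TfL's real build scripts.
import subprocess
import sys

PILLARS = [
    ("build",     ["make", "build"]),         # compile, unit test, version, package
    ("quality",   ["make", "quality"]),       # code reviews and quality gates
    ("test",      ["make", "acceptance"]),    # acceptance, regression, performance
    ("provision", ["make", "environments"]),  # stand up the release pipeline envs
    ("deploy",    ["make", "blue-green"]),    # blue/green switch to production
]

def release():
    for name, cmd in PILLARS:
        print(f"=== pillar: {name} ===")
        if subprocess.call(cmd) != 0:
            sys.exit(f"Stopped at '{name}': the candidate never reaches production.")
    print("All five pillars passed; release deployed.")

if __name__ == "__main__":
    release()
```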

In our context, blue/green is an enabler for continuous delivery: we continuously prove our ability to deliver new code or functionality by treating each release package for the website as if it could be deployed to the live site. We do this by progressing the release package through a parallel deployment pipeline and a series of build-test-deploy cycles that safely prove its suitability and optimise the release ready for deployment. At the end of the pipeline, barring a couple of manual steps, we can deploy to our production website instantly and with near-zero outage.
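As a final illustration, the cutover itself can be as small as one DNS change. The sketch below uses boto3 to repoint a Route53 record from the blue environment to the already-proven green one; the hosted zone ID, record name and load balancer address are made-up placeholders.

```python
# Hypothetical blue/green cutover (illustration only): repoint the live
# DNS record at the green environment. All identifiers are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Blue/green switch: send live traffic to green",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.org.",
                "Type": "CNAME",
                "TTL": 60,  # a short TTL keeps cutover and rollback fast
                "ResourceRecords": [
                    {"Value": "green-elb.eu-west-1.elb.amazonaws.com"}
                ],
            },
        }],
    },
)
print("Traffic now flows to green; blue stays warm for instant rollback.")
```

Keeping the blue environment running until green is proven in production is what gives us the near-zero outage and an instant rollback path.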

In part 2, I’ll talk about keeping the lines of communication open, testing and DevOps.

8 Comments

  1. At the end, you say “barring a couple of manual steps”. Could you elaborate on what these steps are and whether you’ve tried to automate them too?

    Also, did you consider going the whole way and having new commits going out to live with no manual intervention? If so, what stopped you: was it a technical limitation, or was it too risky from a business point-of-view?

    1. Hi Luke, good questions. I’ll hopefully be answering the points you’ve raised in the next few blogs in this series. Broadly speaking, you are on the right lines: it’s a combination of weighing up risk, business drivers, scope, resources, QA, etc., plus our technical limitations. Thanks for the interest, and let me know if the blog series does not answer your queries.

  2. Would like to learn more about how you are satisfying the governance requirements of these processes whilst achieving the rapidity that will take you towards CD. We’d certainly like to discuss the ethos adopted, the pragmatism applied, the tool-set chosen, the 3rd party agreements and practices needed and the customer expectation shaping your approach. I’m sure there will be other nuggets (and boulders) of wisdom you can impart from your journey so far. We look forward to talking to you and your colleagues in greater depth, thanks
