Archived: Blue/Green deployment strategy in the new TfL website

Hello Folks,

Thanks for popping by. I’m Tariq Khurshid, and I lead on the website (www.tfl.gov.uk), Service Desk, and Change & Release Management. In this blog I’d like to share with you the success we have enjoyed using the “blue/green” approach for software release and deployment on the new website.

This is a summary of our experience doing blue/green software deployments and releases to tfl.gov.uk using Amazon Web Services (AWS) cloud infrastructure.

Blue/green deployment is the process we use to safely release new versions of www.tfl.gov.uk without any downtime or outages for customers.

The key to success is maintaining two identical production environments to switch between. As www.tfl.gov.uk is now hosted on virtual servers in the cloud, this is relatively easy and cost effective.

Blue/green deployment allows us to develop software to a high standard, test independently of the live site, and easily package and then deploy to live. This means we can rapidly, reliably and repeatedly push out enhancements and bug fixes to www.tfl.gov.uk at low risk, with minimal overheads and, best of all, no outages for customers.

[Figure: blue/green deployment schematic]

To achieve a single-click, automated deployment solution for releases in a blue/green continuous delivery environment, we decoupled our monolithic CloudFormation stacks into a modular format and deployed them in parallel with a scalable Puppet Enterprise infrastructure. This was not easy: the design is highly advanced and based on an agile architecture, so we had to trailblaze.
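
To make this concrete, here is a minimal sketch of launching one decoupled stack module with boto3, AWS’s Python SDK. The stack name, template URL and parameter values are hypothetical placeholders illustrating the modular approach, not our actual templates:

```python
# Illustrative sketch: launch one modular CloudFormation stack with boto3.
# The stack name, template URL and parameter values are hypothetical.
import boto3

cfn = boto3.client("cloudformation", region_name="eu-west-1")

response = cfn.create_stack(
    StackName="website-blue-web-tier",  # one decoupled module, not a monolith
    TemplateURL="https://s3.amazonaws.com/example-bucket/web-tier.template",
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "blue"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m3.large"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)

# Block until this module is fully created before deploying the next one.
cfn.get_waiter("stack_create_complete").wait(StackName=response["StackId"])
```

Because each module is its own stack, modules can also be created in parallel rather than waiting on one monolithic stack.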

By our cloud provider’s own admission, www.tfl.gov.uk is currently one of the most technically advanced cloud deployments for a public transport website in the world.

One of the challenges with blue/green is the cut-over: taking software from the final stage of testing to live production. We do this with a Domain Name System (DNS) switch, using Route 53, between our two identical production environments: Pre-production (blue) and Production (green).

As we prepare a new release of software, we do our final stage of testing in the pre-prod environment. When the software has passed all regression and quality tests and checks in pre-prod, we switch the DNS so that all incoming requests go to the pre-prod (blue) environment. The old production (green) environment is now idle and ready to be used as a back-up, or for roll-back if there are any issues.
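
As an illustration of what the switch itself can look like, the sketch below uses boto3 to repoint the site’s alias record at the blue environment’s load balancer. The hosted zone IDs and ELB DNS name are made-up placeholders, not our live configuration:

```python
# Illustrative sketch: the blue/green cut-over as a Route 53 record change.
# Zone IDs and the ELB DNS name are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone for the site
    ChangeBatch={
        "Comment": "Blue/green switch: send live traffic to blue (pre-prod)",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.tfl.gov.uk.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2EXAMPLE",  # the ELB's own hosted zone
                    "DNSName": "blue-elb-123456.eu-west-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        }],
    },
)
```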

Blue/green switch and roll-backs
In terms of the actual switch, our experience has shown that DNS propagation between the two environments using Amazon’s dynamic Route 53 service is very quick, with virtually no discernible impact on users. However, some users may experience transient anomalies as the network changes take effect; a refresh of the cache (F5) or a restart of the browser quickly resolves any problems.

Roll-backs are pretty much instant, though they do entail some additional database (DB) synchronisation to capture any user-submitted data during the switch and switch-back.

Cloud based hosting
Blue/green deployment is cost efficient for us: when we need more capacity, or a parallel pipeline, a couple of clicks let us leverage the scalability and elasticity of AWS for on-demand cloud infrastructure and spin up multiple test and integration environments. This provides the agility TfL needs, as we are no longer limited by our production website and systems being deployed in a physical data centre. When the production environment is not in use and becomes pre-prod (“blue”), we immediately spin down the infrastructure to its default non-live size, which is very cost effective.

We currently have around 10 different on-demand cloud environments (e.g. blue/green, development, testing and projects), but we only really need four for blue/green, three of which can be scaled down to minimum infrastructure size as they are for development and test purposes. Cloud size is based on website traffic, and the live website can automatically scale up/down according to customer demand, so we only pay for infrastructure that we actually need.
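
As a rough illustration of the spin-down step, the snippet below shrinks a per-environment Auto Scaling group to a non-live size; the group name and sizes are invented for the example:

```python
# Illustrative sketch: after the switch, shrink the now-idle environment to
# its default non-live size. Group name and sizes are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="website-green-web-asg",  # hypothetical group name
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,  # non-live size; the live environment runs much larger
)
```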

We are going through a cultural shift with hosting in the cloud, where it is often more cost effective to throw away a poorly performing virtual server or system, spin up a new instance, load test it and quickly make it live. We are still in the mind-set of “let’s fix it, troubleshoot and work it out”; or is that a more personal case of man vs code?

[Figure: blue/green deployment]

Continuous delivery
In our context, blue/green deployment is an enabler for continuous delivery: we continuously prove our ability to deliver new code or functionality by treating each release package for the website as if it could be deployed to the live website. We do this by progressing the release package through a parallel deployment pipeline and a series of build-test-deploy cycles that safely prove its suitability and optimise the release ready for deployment. At the end of the pipeline, barring a couple of manual steps, we can deploy automatically to our production website, i.e. continuous delivery, or we can make a business decision on next steps via a Change Advisory Board (CAB).
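
By way of illustration only, the sketch below shows the shape of such a gated pipeline; the stage names and data structure are invented for the example and are not our actual pipeline code:

```python
# Illustrative sketch: every release package must pass the same ordered
# gates before it is eligible for the blue/green switch. Stage names are
# hypothetical.
STAGES = ["build", "integration-test", "regression-test", "pre-prod-soak"]

def ready_for_switch(package: str, passed: dict) -> bool:
    """Return True only if `package` has passed every gate, in order."""
    for stage in STAGES:
        if not passed.get((package, stage), False):
            print(f"{package}: blocked at {stage}")
            return False
    print(f"{package}: ready for blue/green switch (pending CAB decision)")
    return True
```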

Databases and blue/green
All schema changes are done during a full deployment, which loads all of the data as part of a batch loading process. This includes data such as tube status, bus predictions and bus routes, which run independently in each environment and so don’t require synchronisation as part of the blue/green deployment. Data submitted by users via the TfL site is synchronised between the environments before and after the blue/green switch using automated scripts.

Any user-submitted data usually stays in the same table structures, which rarely change. If there are changes to a table, then when the tables and columns are synchronised, any that aren’t matched because they have been removed or added can be ignored. We soon plan to move to a new centralised RDS database (DB) design, so that we no longer have to synchronise DBs when we do a blue/green switch, making the process smoother and more seamless.
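
To illustrate the idea of synchronising only the columns that match, here is a self-contained sketch of that logic. It uses sqlite3 purely so the example runs anywhere; our real scripts, database engines and table names differ:

```python
# Illustrative sketch: copy user-submitted rows between two environments,
# using only the columns the two schemas have in common and ignoring any
# that were added or removed. sqlite3 is used only to keep this runnable.
import sqlite3

def columns(conn: sqlite3.Connection, table: str) -> list[str]:
    """Column names of `table` in this database."""
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

def sync_table(source: sqlite3.Connection,
               target: sqlite3.Connection, table: str) -> None:
    # Keep only the columns present in both schemas, in source order.
    shared = [c for c in columns(source, table)
              if c in set(columns(target, table))]
    col_list = ", ".join(shared)
    placeholders = ", ".join("?" for _ in shared)
    rows = source.execute(f"SELECT {col_list} FROM {table}").fetchall()
    target.executemany(
        f"INSERT OR REPLACE INTO {table} ({col_list}) VALUES ({placeholders})",
        rows,
    )
    target.commit()
```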

Release cycle
The great thing about blue/green is that it takes away the traditional time pressure on a Release team to quickly deploy code in a limited maintenance window or outage, because there is none!

Currently we aim for a release cycle every two weeks for bug fixes and new code, and we also complete a weekly standard website reference data refresh. Release packages are built in collaboration with business stakeholders, project managers, and the development and test teams. When end-to-end testing has been completed, right up to pre-prod, the release package is reviewed by a Change Advisory Board (CAB) for a go/no-go decision.

[Image: no pressure]

Technically complex, but worth it
Cloud-based hosting can be tough, and it has been technically complex and challenging. A real-life case study revolved around a blue/green deployment we did without pre-warming the Elastic Load Balancers (ELBs). After the DNS switch we immediately saw the maintenance holding page appear. We tried troubleshooting live, but decided to roll back instantly, which we did within minutes, and all services were immediately restored. We followed up with a root cause analysis conference call with our cloud partner (AWS). We have now reconfigured both ELB configurations (Production and Pre-prod) across three different AWS availability zones (AZs), with fixed auto-scale thresholds, to prevent this happening again.
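
As an illustration of the kind of pre-switch check this incident motivated, the sketch below verifies that a hypothetical classic ELB is healthy and spread across three availability zones before any DNS change; the load balancer name is invented:

```python
# Illustrative sketch: refuse to switch DNS unless every instance behind
# the blue load balancer is InService across three availability zones.
# The load balancer name is hypothetical.
import boto3

elb = boto3.client("elb", region_name="eu-west-1")  # classic ELB API

health = elb.describe_instance_health(LoadBalancerName="blue-web-elb")
unhealthy = [s for s in health["InstanceStates"] if s["State"] != "InService"]

description = elb.describe_load_balancers(
    LoadBalancerNames=["blue-web-elb"]
)["LoadBalancerDescriptions"][0]

assert not unhealthy, f"Not safe to switch: {unhealthy}"
assert len(description["AvailabilityZones"]) >= 3, "Fewer than three AZs"
```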

Benefits of the blue/green approach using cloud-based hosting:

1. Reduces risk by allowing time for full regression testing prior to the release of a new version to production.

2. Near zero-downtime deployments.

3. Fast rollback should anything go wrong.

4. As new code is already loaded onto a parallel environment and the live site is unaffected, the Release and Test teams are under no time pressure to quickly complete the push of new code to the website during a planned outage.

5. Allows us to test our disaster recovery procedure every time we do a blue/green switch.

6. Eco-friendly, as we no longer have to keep IT hardware and infrastructure on stand-by in a duplicate data centre. We simply spin up a pre-prod environment on demand in the cloud provided by Amazon Web Services (AWS).

7. Enables continuous incremental service improvement, so our website will always be evolving and is easier to change and update when required.

8. No more “big bang” changes like the launch of a whole new website, because the website will now theoretically never become out of date.

9. We have developed a process to synchronise databases before and after each switch to ensure no loss of customers’ transactions (web form data) during the cut-over.

10. Facilitates planned software releases based on a release cycle (currently every two weeks).

11. Reduces risk as we are able to regression, soak and load test in an exact replica of the production environment before deploying to live.

12. Reduces customer impact, as there are virtually no planned outages or downtime of the website.

13. Improves confidence levels in the release package and allows for easier, pressure-free troubleshooting along the release pipeline.

14. Releases can be scheduled during office hours. Currently we schedule blue/green switches between the peak commuter rush hours, so the release window can be any time between 10am and 4pm.

Blue/green is a powerful technique to manage software releases, especially when using cloud infrastructure. Our cloud provider (AWS) enables us to easily create new on-demand environments at the push of a button and provides different cost-effective options to implement blue/green deployments.

Since go-live of the new website on 24/3/14, we have deployed numerous new releases of software (bug fixes, new functionality and weekly updates of reference data) using blue/green. It has worked seamlessly, and customers have experienced virtually zero downtime.

Now that we have adopted a Continuous Integration and Deployment (CI/CD) pipeline using the blue/green approach, we’ll always be continually improving, so the website will incrementally grow and evolve with customer needs, and we too will evolve, adapt and grow with all the cutting-edge technologies employed in our new website.

For further reading, see the links below to some other articles and blogs on using the blue/green approach:

http://www.thoughtworks.com/insights/blog/implementing-blue-green-deployments-aws

http://martinfowler.com/bliki/BlueGreenDeployment.html

Comments

  1. Since the launch of the new site there seems to have been little visible improvement in those areas for which a public beta test was never made available. Indeed, a number of features that were previously available are missing now.

    The quality of content is disappointing, and trying to find something demonstrates how processes have still not been thought through, despite issues having been flagged up when under test.

    I understand that functionality is primarily geared towards the mobile phone or tablet user. The fact that not everyone lives, or wants to live, on their mobile smacks of the arrogant culture that seems to pervade TfL at present.

    Rather than rant here, I, and I think many others too, would welcome the opportunity for some serious dialogue about the difficulties in using the site on a day-to-day basis. This would highlight what the plus points are and, for the downsides, perhaps an opportunity to explain what the difficulties are in trying to fix them.

    1. Hi James, can you be a bit more specific? What features are missing for you on the new site? We want the site to be great on all devices (including desktops) – what problems are you getting on a desktop at the moment?

      1. I fear that I didn’t write my comment that clearly. Since the switch to the new site, I find that an immense amount of scrolling is required when using a PC. My main gripes are more to do with general functionality, particularly Journey Planner. Since I see that your colleague Gerard Butler has just started a series of blogs on this topic, I will wait to see what he has to say and then follow up if I still have some comments.

  2. Whoooohooo the magical cloud, Wow TfL getting up to date, so tell us about the challenges of this so called blue-green release way of updating, sounds pretty cool but surely can’t be that easy.

    1. Hi Sarah,

      Yes, you are right: initially it was not that easy, and since go-live we have learnt a lot; our blue-green process has evolved and grown along with our experience. Initial problems revolved around a lack of appreciation that our new production website now technically consists of two interchangeable environments. This means that all new code, scheduled tasks and legacy elements need to be configured and replicated in the non-live environment 24 hours after go-live, following a blue-green switch. Unfortunately, in the early days this key step was not always rigidly followed.

      Validating the scale-up of infrastructure to production capacity before the switch is also key; again, we learned from this and addressed the gap in the process.

      As our website uses many forms, other issues we experienced following a blue-green deployment revolved around configuring our databases during switching. This was overcome by synchronising the data half an hour after the switch and using scripts that compare the data to ensure none is lost. In the longer term, we are planning to move to a centralised database, which will resolve this problem.

      1. Hi Sarah,

        In answer to your follow-up about time and naming:

        The great thing about blue-green is that it takes away the traditional time pressure on a Release team to quickly deploy code in a limited maintenance window or outage, because there is none!

        In terms of the actual switch, our experience has shown that DNS propagation between blue-green environments is virtually instantaneous. Some users may, however, need to refresh their local cache (F5) or restart their browser.

        Having done a quick bit of research on the internet, it appears that blue-green was coined by Martin Fowler back in 2010, who has also written an excellent article on the subject. I believe it’s also known as flip/flop. I think of it as: blue = on ice, holding; green = go (live).

  3. Blue/green has a higher cost in storage/CPU/memory than methods such as a canary deployment. Any particular reason why you didn’t go canary?

    1. Hi Phil,

      Blue-green deployments are cost efficient for us as we use on-demand environments from AWS, so we can spin up “green” (live) automatically within 20 minutes. When we are not using an environment (“blue”) we immediately spin down the infrastructure to its default non-live size, which is very cost effective.

      I understand that “canary” involves incremental releases of small packages; however, with blue-green, when we make a change we run a full regression test pack on the whole website, so all changes are tested independently of the live site and across the full website and services. This ensures that all impacts and dependencies of any change are fully tested.

      Blue also serves as our disaster recovery back-up environment, so we save on traditional hardware and data centre hosting costs.

      Thanks for the feedback on the Canary method which I will share with the development team.

  4. Blue/green deployment is not “continuous delivery” as you state. Continuous delivery is a collection of software development and deployment practices that allow you to deploy at nearly no cost.

    Blue/Green deployment is one of those practices, but it’s not the whole thing.

    1. In our context, blue-green deployment is an enabler for continuous delivery: we continuously prove our ability to deliver new code or functionality by treating each release package for the website as if it could be deployed to the live website. We do this by progressing the release package via a parallel deployment pipeline and a series of build-test-deploy cycles that safely prove suitability and optimise the release ready for deployment. At the end of the pipeline, barring a couple of manual steps, we can deploy automatically to our production website, i.e. continuous delivery, or we can make a business decision on next steps via a Change Advisory Board (CAB).

      1. But as you say, it’s just one of the possible enablers, alongside source control and automated unit and smoke testing. You wouldn’t call any of those “continuous delivery”, so I don’t see why you explicitly call blue/green deployment continuous delivery.

        I’ve worked on continuously delivered systems that didn’t have blue/green deployment.

    1. Having checked with our DB expert (Rob Dukes): all schema changes are done during a full deployment, which loads all of the data as part of a batch loading process. This includes data such as tube status, bus predictions and bus routes, which run independently in each environment and so don’t require the data to be synchronised as part of the blue-green deployment. Data submitted by users via the TfL site is synchronised between the environments before and after the blue-green switch using automated scripts.

      1. How do the synchronisation scripts deal with any schema changes to the tables which store user-submitted data? Are they updated to include data transforms for each release? (Presumably you’d need to remember to remove any old transforms for the following release)

      2. Any user-submitted data usually stays in the same table structures, which rarely change. If there are changes to a table, then when the tables and/or columns are synchronised, any that aren’t matched during the sync because they have been removed or added will be ignored.

  5. A business friend mentioned that TfL was now using blue/green deployment on the website. To say that I was flabbergasted and stunned is an understatement. Up till now, I associated TfL with ancient technology; let’s face it, most of your tube lines are as old as the hills, with signalling systems about as reliable as a fart. I digress: there are many theoretical articles on the net about the pros and cons of blue/green, but precious few with real hands-on experience of actually using this release process. I’m interested to know how quickly you can do a roll-back, whether you have actually done an emergency switch-back, how often your release cycle is, how many cloud environments you need, how you decide on the size of your cloud, how you build your release packages, and any other valuable hands-on experience you can share of using blue/green in the real world. A heaving, old public transport .gov organisation leading the way; who would have thought it.

    1. Thanks for the interest. Roll-backs are pretty much instant, and yes, we have done a roll-back following a blue-green switch when things did not go as expected; again, very quick and with no loss of services for the customer. It just meant we had to do some additional DB synchronisations to capture any data submitted during the switch and switch-back.

      Currently we aim for a release cycle every two weeks for bug fixes and new code, and we also complete a weekly standard reference JP data release. We currently have 10 different on-demand cloud environments (e.g. blue-green, development, testing and projects), but we only really need four for blue-green, three of which can be scaled down to minimum infrastructure size as they are for development and test purposes. Cloud size is based on website traffic, and the live website can automatically scale up/down according to customer demand, so we only pay for infrastructure that we actually need.

      Release packages are built in collaboration with business stakeholders, project managers, and the development and test teams. When end-to-end testing has been completed, right up to the blue environment, the release package is reviewed by a Change Advisory Board (CAB) for a go/no-go decision.

  6. I am an upcoming visitor to London (3 weeks in Nov-Dec). On my last visit, two years ago, I found the TfL website extremely easy to use. Now I cannot input my address, for example, 6 Handel Street, City of London, and proceed with my enquiry. I get the message that there are multiple locations for this address, and I cannot choose #1 and move to the directions for my journey. This has happened with other addresses.

    Also, the maps are of the whole city and not the specific bus or tube stop I need. What has happened to make this so difficult? I previously got the bus stop with all the locations (letters A, B, C, etc.).
    I am using Windows 7 on my computer and an iPad with iOS 8.

    1. Hi there, I can see the problem that happens when you use that address. It seems to work fine for most addresses but not this one, so we’ll take a look and see why that is. Your best bet in this case is to use the postcode; if you don’t have it, you can look it up at http://www.royalmail.com/find-a-postcode. For maps, if you go to http://www.tfl.gov.uk/maps and put in a postcode or place, it will take you to that area and show you all public transport stops, which you can interact with to see departures. In Journey Planner results, if you choose the map you can see the proposed route. If you want more detail, you can click on the button in the top right of the map and choose ‘stations, stops and piers’, which will switch these on.

    1. True, cloud-based hosting can be tough, and it has been technically complex and challenging. A real-life case study revolved around a blue/green deployment we did without pre-warming the Elastic Load Balancers (ELBs). After the DNS switch we immediately saw the maintenance holding page appear. We tried troubleshooting live, but decided to roll back instantly, which we did within minutes, and all services were immediately restored. We followed up with a root cause analysis conference call with our cloud partner (AWS). We have now reconfigured both ELB configurations (Production and Pre-prod) across three different AWS availability zones (AZs), with fixed auto-scale thresholds, to prevent this happening again.

      In order to offer continuously improving online services and to keep pace with customer demands, we’ll continue to evolve, adapt and grow with all the cutting-edge technologies employed in our new website. Thanks for the interest.

  7. I think that what you said made a great deal of sense.

    But, what about this? What if you composed a catchier title? I am not saying your information is not solid, but suppose you added a title to maybe grab folks’ attention? I mean, “Blue-Green deployment strategy in the new website | TfL Digital blog” is a little plain. You might glance at Yahoo’s home page and watch how they create news titles to get viewers to click. You might add a video or a picture or two to get readers interested in everything you’ve got to say. Just my opinion; it might make your posts a little bit more interesting.

    1. Thank you for the feedback, it’s appreciated. This particular blog article is primarily aimed at developers and techies; however, yes, you are right, it could be more creative. I’ll see what we can do. In the meantime, could you do us the honour of suggesting a catchier title for blue/green? Thanks

  8. You’re in point of fact an excellent webmaster. The site loading speed is incredible; it seems like you’re doing some distinctive trick. Furthermore, the contents are a masterpiece. You’ve performed an excellent job on this topic!

    1. Thank you, we are lucky to have some really talented people, and it was a great team effort. The new website project was highly complex and technically challenging, and we all worked a few late nights along the way. Now that we have the blue/green approach, we’ll always be continually improving, so the website will incrementally grow and evolve with customer needs.

  9. For several days now, tfl.gov.uk has been displaying a maintenance page whenever I go to it.

    It was only by coincidence that I hit a bookmarked page at tfl.gov.uk/journeyplanner that I realised the site was not down, and on further investigation I discovered that http://www.tfl.gov.uk (adding the www prefix) was actually working. But I’ve always used it without www (as indeed the blog shows it on the second line at the top of this page).

    It seems to me that there must be a misconfiguration if the non-www domain is yielding a maintenance page while the www subdomain is yielding valid content.

    1. Yes, the internet has millions of caching nameservers with varying refresh cycles and TTLs (Time To Live, a mechanism that limits the lifespan of data in a network). TTL varies from ISP (Internet Service Provider) to ISP, from a few minutes up to 48 hours. So this all depends on the specific route you take to get to http://www.tfl.gov.uk, what is saved in your local cache/browser favourites, and your ISP. As you say, entering http://www.tfl.gov.uk, a refresh (F5) or a re-boot usually resolves most issues. Thanks for the feedback.
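
      By way of illustration of the TTL mechanism, a plain (non-alias) Route 53 record can be given a short TTL so resolvers re-query quickly after a change; the zone ID, record name and address below are placeholders:

      ```python
      # Illustrative sketch: a short TTL bounds how long resolvers may cache
      # the old answer. Zone ID, record name and address are hypothetical.
      import boto3

      boto3.client("route53").change_resource_record_sets(
          HostedZoneId="Z1EXAMPLE",
          ChangeBatch={"Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "www.example.org.",
                  "Type": "A",
                  "TTL": 60,  # resolvers should re-query within a minute
                  "ResourceRecords": [{"Value": "192.0.2.10"}],
              },
          }]},
      )
      ```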

  10. Judging by your website, you are doing some very clever design; it makes me want to give blue/green a try. What testing do you do on pre-production before switching? Are there any manual configurations in your AWS stack creation? What is an RDS database? Does it really matter what time you switch over?

    1. Once we have spun up pre-prod (blue) to default production infrastructure size and verified it, the Test team run automated scripts in a regression pack on pre-prod before the actual cut-over. There are no manual configurations in our AWS stack creation; in fact, we can spin up a whole new virgin environment in about 1.5 hours with just a few clicks of the mouse. Amazon’s Relational Database Service (RDS) is a central database design that makes it relatively easy to set up, operate and scale a relational database in the cloud. It provides cost-efficient and resizable DB capacity while managing time-consuming database management tasks, making blue/green even easier. Yes, timing of the switch is key, so we plan blue/green switches outside peak commuter rush-hour traffic, as that is when our high-availability caching layer (Varnish) is working hardest. Thank you for the interest and feedback.
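
      As an illustrative aside, a final pre-switch smoke check can be as simple as the sketch below, which hits a hypothetical pre-prod endpoint and fails if the maintenance holding page is served; the URL and marker text are invented:

      ```python
      # Illustrative sketch: fail fast if pre-prod serves the holding page.
      # The endpoint URL and "maintenance" marker are hypothetical.
      import urllib.request

      PREPROD_URL = "https://preprod.example.tfl.gov.uk/"  # hypothetical

      with urllib.request.urlopen(PREPROD_URL, timeout=10) as resp:
          assert resp.status == 200, f"Unexpected status {resp.status}"
          body = resp.read().decode("utf-8", errors="replace")

      assert "maintenance" not in body.lower(), "Holding page is being served"
      ```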

  11. Hi Tariq, first of all thanks for all the great info.
    I’m particularly interested in how you handle the DB side of things. Even with a central database (which I assume would be shared by blue and green), how do you go about upgrading? For example, if you update the database before flicking the switch, you could be updating a piece of logic controlling data, or changing a schema, in a way that might impact your existing live environment. Or do development practices make that a non-issue? (If so, it’d be great to understand how and why.)

    1. Hi Dave, I checked with our DB expert (Rob Dukes) and he said that schema changes for the RDS instance are coordinated with the release to have minimum impact on the user. The RDS instance contains mainly user data, so most changes to the RDS are minimal, as these relate to form changes (usually just additional columns being added). Bigger changes to the DB that are not user-data related are done in the blue/green environments, where the database can be destroyed and recreated with no impact. Thanks for the interest.

  12. Hi Tariq,
    Very useful information and thanks for sharing. I guess this is just a stepping stone; I expect more to come in the near future.
    A quick question on the database: if changes to the DB are made at the time of deployment, how do you test your change/code in pre-prod (as the DB will be shared)?

    Thanks
    Nish

    1. Hello Nish,

      We test and deploy all new code in our parallel release and deployment pipeline, which shares the same RDS database. In the test (non-production) environment, if all tests pass, then we promote to pre-prod (no interruption), ready for seamless go-live, so there are no DB sync issues. Thanks for the interest.

  13. Glad to hear more people are putting this into practice, although in an immutable model like this, green generally refers to the new, untested version and is promoted to blue following live traffic migration =P

    Since you are using DNS cutover, how are you handling load balancer scaling since your new stack will be created at minimum levels? Are you having to contact AWS support to pre-warm your ELBs or have you rolled your own load balancing solution?

    1. Hi Justin, I see what you mean about the naming convention, as in “new green off-shoot”. Yes, we have a special configuration set up on our ELBs to enable seamless blue/green switches, so they stay permanently warmed (scaled up) in three different availability zones. Thanks

  14. Woops, I missed where you already addressed my ELB question. You might have a look at this session from this year’s re:Invent. You can control which instances are blue/green by simply using ELB health checks. That way you never have to mess with DNS or ELBs. It also allows canary testing in tandem with blue/green deployments.

    1. Hello Susan,

      With the blue/green approach we are trying to reduce the cycle time between an idea and usable software by automating, keeping track of version control, and employing a deployment/release pipeline in which our entire system and website is recreated.

      For us, blue/green is simpler to manage, in that once we have made the switch, it’s done, and we don’t have the full release bleeding over in terms of time, as happens when it’s live-live. We also have embargoed releases, which can’t be bled over and are therefore not well suited to incremental deployment.

      The canary approach implies that you have a mechanism to get feedback from a set of “canary” users, so it might have been good to use when we were in beta. However, all of the users of our website are essentially anonymous, unlike an application that requires you to log in. Therefore, it would be difficult for us to be certain that our canary had passed testing; it would only be a gut feeling. We do have some data supplied by end users that is stateful; this becomes harder to reconcile and consolidate with a canary approach, especially if the data schema is different in the canary version of the software.

      Holistically, continuous delivery is a higher-level practice that can include blue/green deployment, and we do use other continuous delivery methods, such as source control linked to automated tests and automated builds. Thanks for the interest, and I suggest you check out “Continuous Delivery” by Jez Humble and David Farley: http://www.amazon.com/dp/0321601912

  15. Hi Tariq,
    Long time since the post. Have you moved to RDS? I have some doubts.
    How are you managing blue/green with RDS?
    I have an RDS instance with several schemas, more than 100 and growing (I haven’t defined when to use an additional RDS instance, but I will). From my research, a restoration/sync with AWS RDS is a nightmare; it takes a lot of time to make a restoration (since each schema is more than 5 GB and the application is still a baby). I’m very concerned about making a restoration for the deployment and then syncing the data.
    I’m going, 99%, to apply CD to database management, with tighter control at each stage over what is going to change the database, and strict control at the pre-production and production stages (refer to https://www.simple-talk.com/content/article.aspx?article=1974).

    What is your suggestion about this?

    1. Hi Sergio, Rob Dukes, our DB expert, has answered your question:

      We are using RDS to store some of the data, and because of that we perform daily snapshots and additional snapshots prior to deployments (in case of a worst-case scenario). To help with the speed problem you mention, we have another SQL instance (EC2) that periodically syncs to the RDS instance using PowerShell. This allows us to: (1) create backups from the RDS instance’s data that can be retained for more than 35 days; (2) quickly compare schema/data on a release to validate if needed (Redgate software); (3) quickly roll back changes if any problems are found in the production environment (Redgate software). Hope that helps, thanks.
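
      As an illustrative sketch of that pre-deployment snapshot step (the RDS instance identifier below is hypothetical):

      ```python
      # Illustrative sketch: take an extra manual RDS snapshot before a
      # deployment so a worst-case rollback has a known restore point.
      import boto3
      from datetime import datetime, timezone

      rds = boto3.client("rds", region_name="eu-west-1")
      stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")

      rds.create_db_snapshot(
          DBInstanceIdentifier="website-user-data",  # hypothetical instance
          DBSnapshotIdentifier=f"pre-deploy-{stamp}",
      )

      # Wait until the snapshot is available before starting the release.
      rds.get_waiter("db_snapshot_available").wait(
          DBSnapshotIdentifier=f"pre-deploy-{stamp}",
      )
      ```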
