Evaluate and refine the delivery process
Our team continuously refines its processes to advance our software development practices. While we already employ solid CI/CD pipelines in our projects, we actively pursue higher levels of continuous deployment maturity.
To validate and enhance the current processes, the team reviewed the CI/CD metrics captured by our DevOps staff, including:
| Metric | Question |
| --- | --- |
| Deployment frequency | How often do we (and should we) deploy to production? |
| Lead time for change | How much time does it take to go from build submission to a successful, verifiable production deployment? |
| Change failure rate | What proportion of deployments fail, requiring a rollback, a feature toggle-off, or another remedy? |
| Mean time to restore | How long does it take to restore the production app after a critical failure? |
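These four metrics can be computed directly from deployment records. The sketch below shows one way to do it; the record fields (`committed`, `deployed`, `failed`, `restored`) and the sample data are illustrative assumptions, not the team's actual schema.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records; field names are assumptions for illustration.
deployments = [
    {"committed": datetime(2023, 5, 1, 9), "deployed": datetime(2023, 5, 1, 14), "failed": False},
    {"committed": datetime(2023, 5, 2, 10), "deployed": datetime(2023, 5, 2, 13), "failed": True,
     "restored": datetime(2023, 5, 2, 15)},
    {"committed": datetime(2023, 5, 3, 11), "deployed": datetime(2023, 5, 3, 12), "failed": False},
]
period_days = 7  # length of the observation window

# Deployment frequency: deployments per day over the observed period.
deployment_frequency = len(deployments) / period_days

# Lead time for change: median hours from submission to production deployment.
lead_time_hours = median(
    (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments
)

# Change failure rate: share of deployments that required a remedy.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# Mean time to restore: average hours from failed deployment to restored service.
mttr_hours = sum(
    (d["restored"] - d["deployed"]).total_seconds() / 3600 for d in failures
) / len(failures)

print(deployment_frequency, lead_time_hours, change_failure_rate, mttr_hours)
```

With the sample data above, this yields roughly 0.43 deployments per day, a 3-hour median lead time, a 1/3 change failure rate, and a 2-hour mean time to restore.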
We compared actual performance against these metrics for all products and services deployed to production. Together, we analyzed the results and identified problems and opportunities for improvement. This feedback directly shaped how we improved our CI/CD processes and moved closer to fully embracing continuous deployment.
Leveraging the DevOps quadrant helps teams pinpoint where they are now and where they should move next. The tool enables the team to identify outcomes that matter most, empowering them to take ownership of the results and giving them the freedom to decide how to improve.
Our projects already in production mapped to DevOps quadrants.
With a clearer picture of our CI/CD process courtesy of the DevOps quadrant, we compiled a list of the advantages and disadvantages.
Advantages:

- Manual deployment effort dropped sharply: no more overtime, no working outside business hours, and no taking the application offline for hours to perform updates.
- Many new changes and bug fixes were deployed automatically to production within minutes.
- Many tests were automated, including smoke testing and the entire regression suite.
- With no more code freezes, manual regression runs, regression-bug fixes during the freeze period, or manual deployments, the team worked on features continuously.
- Even with the additional effort to implement test automation, overall productivity and delivery targets did not change.

Disadvantages:

- Significant additional effort was necessary to build out an adequate architecture, infrastructure, and automation plan.
- More tools were necessary to support the execution of this initiative.
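The automated smoke tests mentioned above can be sketched as a small check over critical endpoints. The endpoint list, the injected `fetch` callable, and the stub below are illustrative assumptions, not the team's actual suite:

```python
# Minimal smoke-test sketch: verify that critical endpoints respond with HTTP 200.
# The endpoint paths and the injected `fetch` function are illustrative assumptions.

CRITICAL_ENDPOINTS = ["/health", "/login", "/api/orders"]

def run_smoke_tests(fetch):
    """Return the endpoints that did not respond with HTTP 200.

    `fetch` is any callable mapping a path to a status code, so the same
    check can run against a real HTTP client or a stub in CI.
    """
    return [path for path in CRITICAL_ENDPOINTS if fetch(path) != 200]

# Example with a stub standing in for the deployed application.
def stub_fetch(path):
    return 200 if path != "/api/orders" else 503

failing = run_smoke_tests(stub_fetch)
print(failing)  # any failing endpoint would block the pipeline
```

Injecting the fetch function keeps the check fast and deterministic in CI while allowing the same code to hit the real application after a production deployment.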
We knew the risks in advance: delivering more bugs to production, crippling production during deployments, corrupting databases, or even taking down whole applications with a misconfigured build. Perhaps because of that awareness and our vigilance in executing well, these risks did not materialize.
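One common way to contain such risks (a general pattern, not necessarily the team's exact mechanism) is a deploy-then-verify guard that rolls back automatically when post-deployment health checks fail. The hook names below are hypothetical:

```python
# Sketch of a deploy-then-verify guard with automatic rollback.
# `deploy`, `health_check`, and `rollback` are hypothetical pipeline hooks.

def guarded_deploy(deploy, health_check, rollback, retries=3):
    """Deploy, poll the health check a few times, and roll back on failure."""
    deploy()
    for _ in range(retries):
        if health_check():
            return "deployed"
    rollback()
    return "rolled back"

# Stubbed usage: a release whose health check never passes gets rolled back.
events = []
result = guarded_deploy(
    deploy=lambda: events.append("deploy"),
    health_check=lambda: False,
    rollback=lambda: events.append("rollback"),
)
print(result, events)
```

A guard like this shortens mean time to restore because the remedy runs without waiting for a human to notice the failure.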