The ideal DevOps configuration
Before examining what constitutes an ideal DevOps configuration, there are a few things to note.
A good DevOps implementation does not rely on a single set of tools. The most effective DevOps teams are able to change their tech stack when another tool set is better suited to a need. Ideally, the entire team, from the developers and testers to the system admins and architects, will decide together which tools to use.
It starts with the CI/CD pipeline.
To fully realize the speed advantages of DevOps, a Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipeline is essential. The specific technology used may vary, but the objectives of a good CI/CD pipeline are always the same.
Increase efficiency. Reduce the time required to establish development environments.
Reduce cost. Eliminate the cost of unused environments.
Deploy code quicker. Get the code to the team sooner. Don’t bottleneck any stage of development.
Embrace iteration and information sharing. Show accomplishments early and often—allowing the team to provide feedback and adjust the product as it’s being built.
At first blush, this sounds like any other development pipeline, but the key word here is continuous. In a CI/CD pipeline, a segment of code is written and then tested. However, rather than waiting for test results, developers continue to write the next segment. If a previously written section of code fails a test, it goes back to development. If it passes, it goes straight to delivery or deployment. There is no downtime for any step in the process. Code is continuously being written, tested, rewritten, or deployed.
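The continuous flow described above can be sketched as a simple routing step: each code segment moves from development into testing, and from testing either back to development or on to delivery, so no stage sits idle. A minimal Python sketch, where the segment names and test results are purely hypothetical:

```python
# Sketch of the continuous flow: segments that fail a test go back
# to development; segments that pass move straight on to delivery.
def run_pipeline(segments, test_results):
    """Route each code segment based on its (hypothetical) test result."""
    delivered, back_to_dev = [], []
    for name in segments:
        if test_results.get(name, False):
            delivered.append(name)    # passed: straight to delivery
        else:
            back_to_dev.append(name)  # failed: back to development
    return delivered, back_to_dev

delivered, rework = run_pipeline(
    ["login", "search", "checkout"],
    {"login": True, "search": False, "checkout": True},
)
print(delivered)  # ['login', 'checkout']
print(rework)     # ['search']
```

In a real pipeline the routing is done by the CI server, not hand-written code; the point is only that every segment is always in motion through one of the stages.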
Most robust CI/CD pipelines make extensive use of automation for testing and delivery. With proper environments set up, developers are able to automatically test code as soon as it’s written. Then, they can see within minutes if it passes. The faster failures are identified, the faster development can continue. If code passes, it can automatically be delivered for deployment or User Acceptance Testing (UAT).
Automation isn’t all about speed. Teams can introduce more quality into the development life cycle by automating repetitive, mundane tasks. This reduces the risk of human error, particularly if those tasks involve multiple steps. It also makes for a happier development team that spends more time on product improvements and less time slogging through mind-numbing checklists.
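As one illustration, a repetitive multi-step checklist can be encoded as an ordered list of automated checks, so no step is skipped or run out of order by accident. The check names here are invented placeholders:

```python
# Sketch: a repetitive release checklist encoded as ordered, automated
# checks. Automating it removes the human-error risk of manual steps.
def run_checklist(checks):
    """Run each (name, check) pair in order; stop at the first failure."""
    for name, check in checks:
        if not check():
            return f"FAILED at: {name}"
    return "ALL CHECKS PASSED"

# Hypothetical checks; in practice each would verify a real condition.
checks = [
    ("version bumped",    lambda: True),
    ("changelog updated", lambda: True),
    ("tests green",       lambda: True),
]
print(run_checklist(checks))  # ALL CHECKS PASSED
```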
This is where it all starts: Sprint 0, setting up the environment developers will use to build, test, and deliver code. Determining what kind of environments to use should be a team decision shaped by the requirements of the job. Traditionally, these environments existed on servers or Virtual Machines (VMs), and they still can. These days, options such as containers and serverless solutions are often more efficient and can be created in a fraction of the time.
Servers and VMs are time-tested and familiar, but they take longer to spin up and require more resources. Serverless solutions are the newest option, but fewer development teams are familiar with them.
Container solutions, such as Docker, are fast becoming a popular environment choice and have proven to work well for both long- and short-term projects. If using containers, you will need an orchestration engine such as Docker Swarm or Kubernetes to deploy and manage them.
Orchestration engines are primarily used to automate when containers start and stop so teams can quickly scale out or scale down container clusters as needed.
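At its core, the scale-out/scale-down decision an orchestration engine makes is simple arithmetic: divide the current load by the capacity of one container, and clamp the result to the configured cluster bounds. A rough Python sketch, with invented capacity and bound figures:

```python
import math

def desired_containers(requests_per_sec, capacity_per_container,
                       min_containers=1, max_containers=10):
    """Compute how many containers the current load requires,
    clamped to the configured minimum and maximum cluster sizes."""
    needed = math.ceil(requests_per_sec / capacity_per_container)
    return max(min_containers, min(needed, max_containers))

print(desired_containers(450, 100))  # 5 -> scale out to five containers
print(desired_containers(40, 100))   # 1 -> scale down to the minimum
```

Real engines such as Kubernetes apply the same idea continuously, using observed metrics (CPU, memory, custom metrics) instead of a single request rate.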
Need to get cracking on new code? The holy grail of environment configuration is a turnkey approach that lets the team spin up the necessary environments at the push of a single button. Establish a local dev environment that connects to the source repository, the build process, and so on. Then simply run the script.
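A turnkey setup usually boils down to one script that issues the same sequence of commands every time. Below is a dry-run sketch in Python that only composes the commands; the repository URL, image, and container names are placeholders, and a real script would execute each command (for example with subprocess.run):

```python
# Dry-run sketch of a one-button environment script: it composes the
# commands a turnkey setup would run. A real script would execute each
# one (e.g. via subprocess.run); here we only build and print them.
def spinup_commands(repo_url, image, container):
    return [
        f"git clone {repo_url} src",                  # fetch the source
        f"docker build -t {image} src",               # build the dev image
        f"docker run -d --name {container} {image}",  # start the environment
    ]

for cmd in spinup_commands("https://example.com/team/app.git",
                           "app-dev", "app-dev-1"):
    print(cmd)
```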
Because testing is part of the development phase in a CI/CD pipeline, the testing strategy will play a big part in how quickly work is completed. The cornerstone of any good DevOps testing strategy is automation. Manually testing code can take days or weeks. It’s also a process fraught with human error. Automating unit testing and static code analysis eliminates these bottlenecks.
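In Python, for instance, the standard library's unittest module is one way to express such automated checks; a CI job runs them on every commit and fails the build on any error. A minimal example, where the function under test is invented for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# A CI job would discover and run these automatically on every push,
# e.g. with: python -m unittest discover
```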
Another important component of a test strategy is coverage. As with so much of DevOps, there are no hard-and-fast rules for how much test coverage is enough. Ultimately, it will depend on a combination of factors such as the time available, the risk tolerance of the organization, and the nature of the job itself. For example, a service application might benefit from high unit test coverage (e.g., 90%+), whereas such high coverage might not be justifiable for a lightweight client application because of the associated costs.
For larger organizations, binary repositories are a secure way to store the binaries used in the development process. This will allow a team to develop and deliver applications to production using versioning. If a new version has a bug, they can roll back to an earlier version in the repository.
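The rollback workflow a binary repository enables can be illustrated in a few lines: given the stored versions and a known-bad release, deploy the newest version that is not flagged. A sketch with invented version data:

```python
def pick_rollback(versions, bad_versions):
    """Return the newest stored version that is not known to be bad.
    Versions are (major, minor, patch) tuples, as a binary
    repository might track them."""
    good = [v for v in versions if v not in bad_versions]
    return max(good) if good else None

stored = [(1, 4, 0), (1, 4, 1), (1, 5, 0)]
print(pick_rollback(stored, bad_versions={(1, 5, 0)}))  # (1, 4, 1)
```

In practice the repository manager (Artifactory, Nexus, and similar tools) tracks the versions; the team's job is simply to keep every released binary stored so an earlier version is always available.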
Repositories also obviate the need to store binaries in source control. If binaries are stored alongside source code, the repository becomes bloated and cumbersome to work with, and building and testing take an excessive amount of time.