If you’re thinking about moving to a DevOps environment, you should know that you get to decide how integrated your “continuous integration” is and how continuous your “continuous deployment” is. Much will depend on how you want to manage your risks (and which risks you want to manage).
When I talk to my clients about DevOps, they often seem to assume that adopting DevOps means that whenever one of their developers checks in some piece of code, their line-of-business application is going to be redeployed. They justifiably worry about the risk of disruption this may impose on their business. In other words, they assume that DevOps = Continuous Deployment. That’s not the case.
In DevOps we talk about “continuous integration” and “continuous deployment” but these aren’t absolute terms that you have no control over. With “continuous integration,” you get to decide on the level of integration you want and, with “continuous deployment,” you get to decide how continuous you want to be.
It’s certainly true that “continuous deployment” can mean rebuilding and redeploying line-of-business applications every time source code is checked in. If you’re new to DevOps, it’s easy to see the potential disruption to the business as a high-risk activity.
But, while there’s no argument around what “deploy” means (moving the developers’ completed changes to the production environment where end users can start working with them), the organization is in control of how “continuous” the process should be. An automated deployment process requires dedicated resources to build, test, and prepare a deployment package. As a result, simply as a matter of economics, shops will want to decide how often they want to marshal those resources. That can be anywhere from “every check-in” to “every night,” “every week,” or even “every month.”
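To make the build/test/package sequence concrete, here is a minimal sketch of one pipeline run. Everything here is illustrative: the function name, the steps, and the artifact name are invented for this example and aren’t tied to any particular CI/CD tool.

```python
# Illustrative sketch of one automated deployment pipeline run.
# The steps and names are hypothetical, not taken from any real tool.

def run_pipeline(check_in_id: str) -> str:
    """Build, test, and package one increment of functionality."""
    steps = [
        ("build", f"compile sources for check-in {check_in_id}"),
        ("test", "run the automated test suite"),
        ("package", "assemble the deployment package"),
    ]
    for name, description in steps:
        print(f"[{name}] {description}")
    return f"package-{check_in_id}.zip"

# Whether this runs on every check-in, nightly, weekly, or monthly is
# purely a scheduling decision -- the pipeline itself doesn't change.
artifact = run_pipeline("42")
print(artifact)
```

The point of the sketch is that “how continuous” lives entirely in the trigger, not in the pipeline: the same steps run whether you fire them on every check-in or once a month.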
However, extending to “every month” introduces risks that shorter intervals avoid. As the interval between deployments increases, the amount of “new stuff” in the deployment package also increases. In Learning Tree’s Fundamentals of DevOps, we refer to the “new stuff” as the “increment of functionality”. The longer between deployments, the larger the increment of functionality, the greater the chance of a problem, and the larger the potential disruption can be.
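A little arithmetic shows how quickly that risk compounds. Suppose (purely for illustration; the 2% figure is an assumption, not a benchmark) that each individual change carries an independent 2% chance of causing a production problem:

```python
# Hypothetical arithmetic: assume each change independently has a 2%
# chance of introducing a production problem. The probability that a
# package contains at least one problem grows with the package size.
p_problem_per_change = 0.02

for changes in (1, 5, 30, 120):  # roughly per check-in, daily, monthly, quarterly
    p_at_least_one = 1 - (1 - p_problem_per_change) ** changes
    print(f"{changes:>3} changes -> {p_at_least_one:.0%} chance of at least one problem")
```

Under that (made-up) rate, a single-change deployment carries about a 2% chance of trouble, while a 30-change monthly package is closer to a coin flip. The exact numbers don’t matter; the shape of the curve does.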
Organizations should base their definition of “continuous” in continuous deployment on the size of the changes to any related set of applications. They will need to look at how many changes they make to their applications in any given period and align that with how much risk they’re willing to bear. Some applications with a low frequency of change might well “continuously deploy” on a monthly basis.
As noted in the course, the goal of continuous deployment is to produce a potentially deployable increment of functionality. “Potentially” means that you don’t have to deploy it: You can simply assure yourself that it is possible to produce a deployment package and deploy it to a production-like environment where you can run tests that prove the package would have worked in production.
But, again, the longer the interval between actual deployments, the greater the risk…but this time the risk is that you’ll lose touch with your organization’s reality. Only with deployment do you start getting feedback from users which will tell you how well you’re doing and provide you with guidance as you design and build the next set of changes.
In addition, when a developer gets negative feedback on a feature implemented in the more distant past, it takes longer for the developer to pull together the resources to provide a fix or change. In an ongoing project, there’s also a real danger that you’ll incur rework that could have been avoided had feedback been available earlier to guide new development.
It’s not unusual, therefore, for organizations moving to DevOps to begin with longer intervals between deployments as they get comfortable with automated deployments. Once they get comfortable with that, though, something interesting happens: The organization starts worrying about the risks associated with larger “increments of functionality” and the lack of feedback…and they worry less about the potential disruption caused by smaller, more frequent deployments. At that point, organizations start thinking about reducing risk by deploying more frequently. Eventually, some balance that makes sense to the organization is struck.
The term “continuous integration” refers to frequently merging developers’ changes into a single application rather than letting those changes sit in isolation waiting to be merged. Continuous integration ensures that all developers are working with the same code, rather than each developer having their own special copy.
In the early days of continuous integration, as code was merged into the main application, all that was required was that it still be possible to compile the application (as the Windows NT team put it, “You don’t break the build”). Over the years, however, continuous integration has grown to include tests that can only be performed on the whole application.
These days, developers are expected to write automated unit tests that give them instant feedback on how well their code meets requirements. This is obviously a good thing because it allows developers to check that recent changes haven’t introduced new bugs into code that was working. However, developers can’t be expected to perform tests that reveal bugs that only appear when their code is used in other applications. Those tests can only happen during integration testing, after the developer’s changes are merged back into the main application or applications.
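Here’s what one of those automated unit tests might look like, using Python’s standard unittest module. The function under test (a discount rule) is invented for illustration; the pattern, not the business logic, is the point:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # Unit tests like these give the developer instant feedback that a
    # recent change hasn't broken code that was already working.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests programmatically (a CI server would do the equivalent
# on every check-in).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
result = unittest.TextTestRunner().run(suite)
print("all passed:", result.wasSuccessful())
```

Notice what this test can and can’t tell you: it proves apply_discount behaves as specified in isolation, but it says nothing about what happens when another application calls it with data the developer never anticipated. That’s exactly the gap integration testing fills.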
However, while integration tests are necessary and valuable, you’re in control of which integration tests will be performed and where they will be performed.
For example, all integration tests could be done in a duplicate of the production environment (that’s certainly the ideal). But, depending on cost, the organization may settle for just a ‘production-like’ environment for integration testing.
Also, ideally, every possible integration test (including performance and stress testing) would be performed during integration testing. But, again, organizations may decide that their more-frequent integration tests will simply prove that the code works: that it runs to completion and produces the approved results. Organizations may choose to defer more complicated/expensive/resource-intensive testing (like stress testing or user-acceptance testing) to the less-frequent deployment process.
In real life, organizations will decide where they want to put their integration tests based on two things: How easy it is to keep their integration environment in sync with their production environment and how robust/expensive/time-consuming their integration and deployment processes are.
Deferring any test, however, increases the costs of fixing any problem discovered during testing because performing tests earlier provides the developer with feedback on problems while the changes are still fresh in the developer’s mind. It’s not unusual, therefore, as an organization’s integration processes become more efficient, for the organization to start pulling some tests out of the deployment process to earlier in the process — even to the point where those tests are performed on each check-in.
The key takeaway from this is that you shouldn’t regard “continuous deployment” or “continuous integration” as absolutes that are imposed on you by the discipline of DevOps. You should regard them as goals that are very movable — you’ll decide how far out or how near you’ll put them. And, as you develop expertise and resources that support DevOps you’ll make new decisions on where those posts will go. You will always be in control of how “continuous” you are and how “integrated” you are.