DevOps has served us well, resolving the conflicts that often arise between developers throwing software over a wall to IT personnel to deploy and support. However, this is only true for organizations that truly adopt DevOps (which may still be too few of us). To remedy this, DevOps will continue to change, thanks in part to its built-in feedback loops, to the point where it will look very different in two years.
Below is my take (informed by my work in the field, combined with research) on how DevOps is poised to evolve over the next two years. In my view, the DevOps focus will expand to include quality, rather than just faster development cycles.
How Do You Measure DevOps Success?
According to the 2016 State of DevOps Report, the benefits of DevOps include deployments that are 200 times more frequent, require 22% less rework, and are three times less likely to result in deployment failures.
Yet there’s still room for improvement, since success cannot be measured purely in terms of how quickly you release. You also need to take quality into account.
A good way to define quality is to think in terms of change. In the next two years, quality testing will shift from the traditional question "Will it break?" to "How easily can it be hacked?" As such, testing will come to the forefront, as will product management, which, according to the same report, is ready to benefit from the same lessons Agile and Lean taught us. Let's take a look at these issues in detail.
From Continuous Development to Continuous Delivery
Some organizations still view DevOps as a set of tools to get developers involved to help make deployments go more smoothly. In reality, DevOps isn’t a set of tools, or even a software deployment process. It’s a philosophy that should stretch back into product definition, taking testing and quality into account at every step, even as part of the mindset of executive leadership. In fact, one strategy is to put testing front and center in the requirements (and agile story) definition phase. After all, if you can’t describe how a new software requirement should be tested or proven correct, how can you be sure you’re describing the requirement itself correctly?
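One way to put testing front and center in requirement definition is to express the acceptance criterion as executable code before (or alongside) the implementation. The sketch below assumes a hypothetical requirement ("lock the account and show a reset prompt after three failed logins"); the `LoginService` class and its messages are invented purely for illustration.

```python
# Hypothetical requirement: "After three failed logins, lock the
# account and show a reset prompt." Written as a test first, the
# requirement proves itself checkable before implementation begins.

class LoginService:
    """Minimal illustrative implementation of the hypothetical requirement."""
    MAX_ATTEMPTS = 3

    def __init__(self):
        self.failed_attempts = 0

    def login(self, password_ok: bool) -> str:
        if password_ok:
            self.failed_attempts = 0  # a success clears the count
            return "welcome"
        self.failed_attempts += 1
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            return "account locked: reset prompt shown"
        return "try again"


def test_lockout_after_three_failures():
    # The requirement, stated as an executable acceptance check.
    svc = LoginService()
    assert svc.login(False) == "try again"
    assert svc.login(False) == "try again"
    assert svc.login(False) == "account locked: reset prompt shown"
```

If a requirement can't be phrased this concretely, that's an early signal the requirement itself isn't fully understood.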
With a continuous focus on quality, and with testing moved earlier in the development process, software quality will improve to the point that continuous delivery to end users becomes possible with less and less risk. As an example, cars continuously driving off a modern assembly line are delivered to dealerships and end customers immediately, with only a minimal amount of final testing. Quality was baked in from the beginning.
Is this possible with software? In my opinion, if auto manufacturers can deliver automobiles (which suffer from the perception of poor quality and are typically prone to abuse on poor roads and from questionable drivers) with this much confidence, then software manufacturers should be able to as well. As our agile processes and DevOps strategies improve over the next two years, our software will improve to the point where it can be delivered to its intended users only moments after it is built.
Quality and Security Transform Developers into Surgeons
The focus on quality and security needed to succeed as a software manufacturer today will require a fundamental shift in developer philosophy within the next two years: developers will need to act more like surgeons than artists. This isn't to say style and creativity won't matter; after all, some surgeons are more successful than others because of their personal approaches, and new techniques are invented thanks to the creativity of the best of them.
However, even when tuning that style, or working through new techniques, careful planning, practice and review are required when lives are at stake. With software at the heart of a growing number of systems in our lives, and with more of them critical to our well-being thanks to IoT and other advances, developers will increasingly need to take a surgical approach to writing and delivering code. In practical terms, this translates into:
- Better development tools, purpose-built for specific jobs, with a move away from general-purpose IDEs, editors, and so on. Just as different surgical procedures require an increasingly specialized set of tools, so will development within the next two years.
- More precisely defined and assembled development environments, analogous to the preparation of an operating room, where nothing is left to chance. This includes tools, libraries, and OS settings automatically installed to match exactly what's prescribed for the development environment. Just as a surgeon requires that an operating room be sanitized and well-lit, with the right tools, personnel, and supplies within reach, developers will require the same from their personal build and test environments. These will be formally defined within the next two years.
- Improved modeling and simulation software to prove designs and end-solutions both visually and perhaps mathematically before they’re built. Just as surgeons practice and refine techniques and procedures, test new procedures, or define radically new surgical approaches on non-living models, software developers, product managers, and IT will work together to do the same even before new software is developed.
- Larger-scale teaming will be emphasized over the single developer, paired developer, or even the “two-pizza rule” (http://www.businessinsider.com/jeff-bezos-two-pizza-rule-for-productive-meetings-2013-10). Just as surgeons, nurses, orderlies, and others have well-defined roles in a seamless surgical procedure, so will product managers, developers, QA personnel, and IT staff in the software development and delivery process.
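The "operating room" idea above can be sketched as a declarative environment manifest that is checked for drift before any work begins. The `PRESCRIBED` manifest, tool names, and version numbers below are hypothetical examples; a real setup would populate the `installed` dictionary from the actual machine.

```python
# A minimal sketch of a formally defined development environment:
# the prescription is data, and any drift from it is reported
# before a build starts. Tools and versions are hypothetical.

PRESCRIBED = {"python": "3.11.4", "gcc": "13.1.0", "openssl": "3.0.9"}

def environment_drift(installed: dict) -> list:
    """Return human-readable differences between the prescribed
    manifest and what is actually installed."""
    problems = []
    for tool, version in PRESCRIBED.items():
        found = installed.get(tool)
        if found is None:
            problems.append(f"{tool}: missing (want {version})")
        elif found != version:
            problems.append(f"{tool}: {found} installed, want {version}")
    return problems

# An environment matching the prescription reports no drift;
# a stale compiler is flagged before any code is built.
assert environment_drift(dict(PRESCRIBED)) == []
assert environment_drift(
    {"python": "3.11.4", "gcc": "12.2.0", "openssl": "3.0.9"}
) == ["gcc: 12.2.0 installed, want 13.1.0"]
```

Tools like containers and configuration management already move in this direction; the point is that the prescription becomes an explicit, checkable artifact rather than tribal knowledge.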
Overall, the development process will migrate to a quality-focused procedure within the next two years. Different procedures will be defined and followed depending on the type of software being developed, such as IoT device software, communication software, cloud-based analytics, mobile, and so on. Supporting environments will move from packaged software with a singular purpose (e.g. a database) to a solution set with deployment capabilities built in (e.g. cloud-based platform solutions).
Merging of Fail-Fast Agile and Well-Planned DevOps
With the move from artist to surgeon in support of DevOps over the next two years, where does that leave the fail-fast, experiment-focused Agile business practices of today? Does Agile have a place in a future where quality and security are paramount? I think it does, and the two should not be viewed as mutually exclusive. They just apply to different levels (and in some cases, phases) of the software development practice.
For example, being an agile organization, taking part in A-B testing (https://www.optimizely.com/ab-testing), and building feedback loops for new experimental features and solutions doesn’t mean you should do so haphazardly. Your software organization can be iterative, with two-week sprints, frequent releases and tuning via user feedback, while taking measures to ensure that those releases are executed and delivered with surgeon-like precision. The next two years will see improvements in the specification of how to execute feature-based development quickly, with quality, by redefining how software requirements are spelled out from the beginning.
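Precision and experimentation can coexist even at the level of variant assignment: a deterministic bucketing function guarantees that a given user always sees the same variant of an experiment, making an A-B test repeatable rather than haphazard. The function name, experiment key, and split value below are hypothetical examples of the general technique, not any particular vendor's API.

```python
# Deterministic A-B bucketing: hash the (experiment, user) pair so
# assignment is stable across sessions and servers, with no shared
# state to keep in sync.
import hashlib

def ab_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Assign a user to variant 'A' or 'B' for a named experiment.

    `split` is the fraction of users routed to variant 'A'.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return "A" if bucket < split else "B"
```

Because the hash is keyed by experiment name, the same user can land in different buckets for different experiments, which keeps tests independent of one another.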
Retooling for the Future Now
Overall, reading between the lines of what’s been described so far, DevOps will expand to truly encompass the processes that come before and after the traditional act of software coding. Agile helped to bring rigor and control to the software development practice, with frequent iteration and feedback along the way. Over the next two years, DevOps will bring this same rigor to those who help define software requirements, test the overall viability and usability of what’s being proposed and built, and deliver and support the end product.
And while I’m not suggesting developers need to be as methodical and precise as a surgeon, or as repetitive or specialized as an assembly line worker, there’s something to be learned from both. This will involve specialized development tools, with modeling and measurement emphasized over libraries and compilers. It’s up to all of us to refine our tools and make necessary shifts from “edit, compile, deploy, and test,” to something closer to “model, prove, test, and deliver.”