Let’s flash back to pre-2007, when DevOps was in its infancy. The Agile Manifesto had been released in 2001, and software teams had been experimenting with new ways of building software. While great advances were made, one problem lingered: the traditional walls between the development and operations teams remained, and this led to plenty of frustration. At the same time, cloud computing, Infrastructure as a Service (IaaS), and the rise of continuous integration, a term coined by Grady Booch in 1991, were coming to the forefront of software development.
However, the release of The Phoenix Project by Gene Kim, Kevin Behr and George Spafford was a significant moment in the history of DevOps. To give you the short version, the book follows an IT manager burdened with a situation beyond his control: saving a development project that has gone wrong (something I am sure many readers are familiar with). With the help of a mentor, the protagonist is guided away from the more traditional forms of development and into the unknown: the future of software development, DevOps.
With the release of this book and a few well-timed conferences, DevOps slowly transitioned from a technical term into a well-known process used within most software-based businesses.
How is DevOps different from other methods?
The DevOps process is a set of practices that combines software development, testing, and operations under a single banner - instead of the more traditional siloed approach used in most organisations. This change in mindset allows teams to communicate better, improves the quality of service and increases efficiency.
The implementation of a DevOps process creates a synergy that brings out the best in your teams. While it often requires a change in not only the culture of your organisation but the mindset of how you develop software, the resulting benefits rapidly outweigh the initial growing pains.
There are a set of core concepts that feed into the foundation and implementation of the DevOps process. These key practices include continuous integration, continuous delivery, microservices, infrastructure as code, monitoring, and communication.
Continuous Integration / Continuous Delivery (CI/CD)
One of the most important fundamentals within DevOps is Continuous Integration and Continuous Delivery, two abbreviations that are dropped whenever DevOps is discussed. To break it down, CI is the practice of merging code changes frequently into a central repository, where the code is automatically built and tested.
On the flip side, CD deploys those changes into a testing or production environment once the code has been built. Ultimately, this means you have a process through which you can test and release code in small, frequent increments.
Microservices
Microservices is a software development technique that arranges an application as a collection of loosely coupled services. This contrasts with a monolithic architecture, in which all processes are tightly coupled and run as a single service. The decentralisation of these services allows developers to separate into smaller teams specialising in various stacks.
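Loose coupling is easiest to see in code. In the hypothetical sketch below, two classes stand in for what would really be separate processes talking over HTTP or a message queue; the service and method names are invented for illustration. The point is that the order service depends only on the inventory service’s narrow interface, so either side can be rewritten or redeployed independently.

```python
# Illustrative sketch of loose coupling between two "services".
# In a real microservice architecture these would be separate
# processes; plain classes stand in for them here.

class InventoryService:
    """Owns stock data and exposes only a narrow interface."""
    def __init__(self):
        self._stock = {"widget": 5}

    def reserve(self, item, qty):
        """Reserve stock if available; return True on success."""
        if self._stock.get(item, 0) >= qty:
            self._stock[item] -= qty
            return True
        return False

class OrderService:
    """Depends only on reserve(), never on inventory internals,
    so the inventory implementation can change without touching orders."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        if self.inventory.reserve(item, qty):
            return {"status": "confirmed", "item": item, "qty": qty}
        return {"status": "rejected", "item": item, "qty": qty}

orders = OrderService(InventoryService())
result = orders.place_order("widget", 2)
```

In a monolith, the order code would be free to reach straight into the stock table; keeping the boundary this narrow is what lets separate teams own each service.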
Infrastructure as code (IaC)
IaC is the process of managing infrastructure via software development techniques and source code rather than through manual processes. Implementing this approach reduces friction when deploying.
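The declarative idea behind IaC can be sketched without any real tooling: describe the desired state in data, diff it against the current state, and apply only the changes needed. Everything below, the resource names, fields, and the `plan`/`apply` functions, is invented for illustration; real tools such as Terraform or Ansible implement the same loop far more robustly.

```python
# Sketch of the declarative idea behind Infrastructure as Code:
# desired state is data under version control, and applying it
# repeatedly is safe (idempotent). All names are illustrative.

desired = {
    "web-server": {"size": "small", "region": "ap-southeast-2"},
    "database":   {"size": "medium", "region": "ap-southeast-2"},
}

def plan(current, desired):
    """Compute which resources must be created, updated, or destroyed."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name))
    return actions

def apply(current, desired):
    """Apply the plan; rerunning on the result produces no further changes."""
    for action, name in plan(current, desired):
        if action == "destroy":
            current.pop(name)
        else:
            current[name] = desired[name]
    return current

state = apply({}, desired)
```

Because the description lives in source code, infrastructure changes get the same review, history, and rollback story as application code.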
Monitoring
Monitoring allows developers to quickly identify and resolve issues that impact a product’s uptime, speed or functionality. During the development of a software product, the team will often collect data that provides feedback on customer behaviour and the performance of the application. This data can then be used to improve the product before an end-user even realises something is missing. It is important to remember that monitoring isn’t only feature-based: you can also monitor structured application logs, environment status or even suspicious activity.
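Structured application logs, one of the monitoring inputs mentioned above, are simply machine-readable events. The stdlib-only sketch below shows the idea; the field names and the toy error-rate check are illustrative, not any monitoring vendor’s schema.

```python
import json
import time

# Sketch of structured application logging: each event is a JSON line
# that a monitoring tool could ingest, aggregate, and alert on.
# Field names are illustrative, not a specific tool's schema.

def log_event(level, message, **fields):
    """Return a structured log line as a JSON string."""
    record = {"ts": time.time(), "level": level, "message": message, **fields}
    return json.dumps(record)

def error_rate(log_lines):
    """A toy monitoring check: the fraction of events at ERROR level."""
    events = [json.loads(line) for line in log_lines]
    errors = [e for e in events if e["level"] == "ERROR"]
    return len(errors) / len(events) if events else 0.0

lines = [
    log_event("INFO", "request handled", path="/home", ms=12),
    log_event("ERROR", "upstream timeout", path="/api", ms=5000),
    log_event("INFO", "request handled", path="/about", ms=9),
]
rate = error_rate(lines)
```

Because each line is valid JSON rather than free text, a monitoring pipeline can filter, group, and alert on fields like `level` or `path` without fragile string parsing.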
Communication and collaboration
The final aspect of the DevOps foundations is collaboration and communication, which are essential to teams uniting and sharing responsibilities. The ultimate benefit is that the development and operations teams can combine their workflows for the most efficient outcome.
Now that we know the origins of DevOps and its fundamentals, let’s take a look at tooling. Below, we have listed eight stages that a DevOps process usually follows. These eight stages are just one example of what a DevOps cycle might look like.
Different organisations use different terminology (for example, GitLab’s lifecycle includes plan, create, verify, package, release, configure, protect and monitor); however, the eight stages listed below are a great start and are the process we currently use at Codebots to develop and deploy our products.
- Plan,
- Code,
- Build,
- Test,
- Release,
- Deploy,
- Operate, and
- Monitor.
Planning
Planning is the use of tools that allow you to track bugs, create backlogs, visualise progress, and schedule tasks that need completing. Using such tools within a project ensures that the most important features are tackled first and that a viable MVP is delivered.
Tools that offer these services: JIRA, Confluence and Trello.
Coding and building
As the name suggests, coding and building refer to the code written for the software project. This part of the process is all about developing the proposed solution. During this step, a developer may find themselves branching, merging or rewriting parts of a project.
Tools that offer these services: GitLab or GitHub.
Testing
Once the project has been planned and written, it is time to test whether it performs to industry standards. Testing ensures that errors, such as poor code quality or user experience problems, are minimised.
Tools that offer these services: CI/CD platforms (e.g. Jenkins or GitLab).
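To make the testing stage concrete, here is a minimal example of the kind of automated check a CI platform runs on every commit, written with Python’s standard `unittest` module. The function under test and its rules are invented for illustration.

```python
import unittest

# A minimal automated test of the kind a CI platform would run on
# every commit. The function under test is purely illustrative.

def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite explicitly, as a CI job would.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a CI/CD pipeline, a non-zero failure count here would fail the build and stop the change from ever reaching the release stage.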
Releasing and deploying
The continuous deployment of a project ensures that any changes made as you move along the project timeline are predictable but remain easy to change. Ultimately, this allows developers to catch issues before an end-user does. Generally, there are various environments a developer will deploy to, such as development, test or production.
Tools that offer these services: Kubernetes, Docker and CI/CD platforms (e.g. Jenkins or GitLab).
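One common way to keep deployments predictable across the development, test and production environments mentioned above is to promote a single immutable build artifact and vary only per-environment settings. The sketch below illustrates that idea; the environment names and settings are hypothetical.

```python
# Sketch of promoting one immutable build through several environments.
# Environment names and settings here are illustrative only.

ENVIRONMENTS = {
    "development": {"replicas": 1, "debug": True},
    "test":        {"replicas": 2, "debug": True},
    "production":  {"replicas": 4, "debug": False},
}

def deploy(artifact, environment):
    """Combine one immutable artifact with per-environment settings."""
    if environment not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment}")
    settings = ENVIRONMENTS[environment]
    return {"artifact": artifact, "environment": environment, **settings}

release = deploy("app-abc123.tar.gz", "production")
```

Because the artifact itself never changes between environments, whatever passed testing is byte-for-byte what reaches production; only the configuration around it differs.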
Operating
The operating stage of the DevOps cycle is managing the application in a live production environment with the correct configuration, the ability to scale, and appropriate measurement tools to gauge success.
Tools that offer these services: GitLab, Cloud provider consoles, and Octopus Deploy.
Monitoring
Various tools help to monitor the success of a software product. For example, Datadog offers error tracking and incident management, while a product such as Hotjar provides insight into how visitors navigate websites. There are many monitoring tools available; it is important to choose the ones most relevant to the project.
Transitioning to a DevOps process and following the key fundamentals and tools listed above will be an insightful journey for your business. Once implemented, there is a good chance you will see differences in how your teams communicate and operate. Ultimately, these changes should flow through to your overall efficiency and customer outcomes, delivering even more positive results for the broader business.
Codebots is the combination of Model-Driven Engineering and AI; however, for our customers to reach their full potential, they need the ability to use CI/CD freely within their development workflow. We make this possible by not vendor-locking our customers, which means they own their source code. Codebots is a world-first tool that allows MDE to be used in CI/CD.
If you are interested in what other benefits you could obtain from implementing this process, watch this space for another article on the benefits of DevOps, coming soon.