Our take on DevOps
Jelle Van Hees
In this blog post, we share our view on DevOps. This isn’t an in-depth technical article, but rather a high-level overview of our workflows and methods. We’ll discuss our DevOps flows in application development and how we implement the same principles in our infrastructure workflows. We’ll also go over an area we’re still exploring right now: fully automating the steps necessary to create our DevOps pipelines.
Our setup for building and deploying code
DevOps has been around for some time now and people use it as a job description, as a philosophy or as a way to describe a team’s workflow. However you look at it, Continuous Deployment and Integration are always a part of it.
At ACA, we have thoroughly adopted these principles in our daily way of working. Our agile teams rely heavily on the tools we provide for them: they have their projects and repositories in our Bitbucket server and build everything from Java apps in Docker containers to multi-platform mobile apps on Jenkins. Jenkins heavily depends on the Git server because it provides the code to build as well as a lot of the configuration information for the builds themselves in the form of Jenkinsfiles and Maven configuration. This allows the development teams to determine exactly how a build works.
Because the build config is managed in Bitbucket and through the use of pull requests and git flow, everyone is always up to date with what’s on Jenkins. During this building process we perform both unit and end-to-end testing using a variety of frameworks and methodologies. The nature of a project and the team’s preference largely determine the choice of frameworks. Development teams have full control over the way they implement this thanks to their control over the Maven configuration and the pipelines in the form of Jenkinsfiles.
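To make this concrete, a declarative Jenkinsfile for such a build could look like the sketch below. The stage names, agent label and Maven profile are assumptions for the example, not our actual configuration.

```groovy
// Minimal declarative pipeline sketch. The 'docker' agent label and the
// 'e2e' Maven profile are hypothetical; each team shapes its own Jenkinsfile.
pipeline {
    agent { label 'docker' }
    stages {
        stage('Build') {
            steps {
                // Compile and package the application
                sh 'mvn -B clean package'
            }
        }
        stage('Unit tests') {
            steps {
                sh 'mvn -B test'
            }
        }
        stage('End-to-end tests') {
            steps {
                // Hypothetical Maven profile that runs the e2e suite
                sh 'mvn -B verify -Pe2e'
            }
        }
    }
}
```

Because this file lives in the repository next to the code, a change to the build goes through a pull request and review like any other change.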
During these builds, we perform some checks and gather information about the code that is being built. These checks tell us how well tests cover the code, whether any of the libraries we use have known security issues, and lots of other things. We also host our own repository for both build artefacts and Docker containers. Our artefact repository also doubles as an npm and Maven repository proxy and cache.
Deploying and managing infrastructure
While the previous section discusses the way we build and deploy code, we shouldn’t forget that code needs to run somewhere. ACA specializes in building sturdy, highly available environments. Most of all though, environments need to be manageable. We achieve these manageable environments through a similar DevOps process. There are 3 big categories here, each with a slightly different approach:
- we use Terraform to manage AWS infrastructure,
- we maintain a few different options for serverless or near-serverless applications, both for ourselves and for the development teams,
- and we manage Kubernetes clusters.
Managing lots of different AWS environments, all with near-identical dev, test, acceptance and production environments, is only doable with organized code. We organize our Terraform code in Bitbucket, where we use a combination of branching and versioning to maintain consistency over all these environments. This means that not only new software versions move through all these environments before reaching production, but also different versions of the infrastructure can be deployed and tested in a matter of seconds. This is very useful if you want to validate a caching service or a different database setting.
Putting DevOps into practice
A new component can be added in dev and test so the team can play with it. After that, things move to acceptance and eventually to production, while we remain certain that what we have in acceptance is identical to what is in production. Our biggest help here is Jenkins, which applies the correct configuration from a central place to a given environment.
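A sketch of what such a central Jenkins job might look like, assuming one Terraform variable file per environment (the parameter name and file layout are illustrative, not our actual setup):

```groovy
// Hypothetical parameterized job: pick an environment, then plan and
// apply the matching Terraform variable file against shared infra code.
pipeline {
    agent any
    parameters {
        choice(name: 'ENVIRONMENT',
               choices: ['dev', 'test', 'acceptance', 'production'],
               description: 'Target environment')
    }
    stages {
        stage('Plan') {
            steps {
                sh 'terraform init'
                sh "terraform plan -var-file=${params.ENVIRONMENT}.tfvars -out=tfplan"
            }
        }
        stage('Apply') {
            steps {
                sh 'terraform apply tfplan'
            }
        }
    }
}
```

Because only the variable file differs per environment, the same infrastructure code moves through acceptance and production unchanged.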
AWS provides a lot of interesting tools for serverless or near serverless applications like Lambda and ECS. This is where we really put DevOps into practice. The dev team releases a brand new version of their software. That new version now needs to reach the cloud where it will run. The Continuous Integration pipeline then automatically makes some required changes in our Terraform code and triggers our Jenkins jobs to deploy it.
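The hand-off described above could be sketched as a pipeline stage that commits the new version into the Terraform repository and then triggers the deploy job. All repository, job and variable names below are invented for the example.

```groovy
// Hypothetical release stage: record the new version in the Terraform
// repo, then kick off the Terraform deploy job for it.
pipeline {
    agent any
    stages {
        stage('Release to cloud') {
            steps {
                sh '''
                    git clone ssh://git@bitbucket.example.com/infra/app-terraform.git
                    cd app-terraform
                    # Bump the container image tag used by the ECS task definition
                    sed -i "s/^image_tag = .*/image_tag = \\"${VERSION}\\"/" app.auto.tfvars
                    git commit -am "Release ${VERSION}"
                    git push
                '''
                // Trigger the downstream deploy job for this version
                build job: 'deploy-app', parameters: [
                    string(name: 'VERSION', value: env.VERSION)
                ]
            }
        }
    }
}
```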
Managing a Kubernetes cluster is where developers and operations teams really come together to collaborate. We manage the different components of the cluster in their own git repositories. We manage the machines these clusters run on in Terraform, but application and Terraform configurations are not as tightly coupled as they are in serverless environments. The advantage here is that we don’t need a Terraform deploy when something changes in the Kubernetes config. Both teams (dev as well as ops) manage that config and pull requests keep everyone up-to-date. If there are any changes to the config, they are automatically deployed during the nightly deploy cycles.
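A nightly deploy cycle like that can be expressed as a scheduled pipeline; the schedule, repository layout and kubectl invocation below are assumptions for the sketch.

```groovy
// Hypothetical nightly job: apply whatever Kubernetes config was
// merged into the repository during the day.
pipeline {
    agent any
    triggers {
        // 'H' lets Jenkins spread the start time somewhere within the 2 AM hour
        cron('H 2 * * *')
    }
    stages {
        stage('Apply cluster config') {
            steps {
                sh 'kubectl apply -f manifests/'
            }
        }
    }
}
```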
Automating the DevOps pipeline
So far, we’ve discussed how we support the DevOps process for development teams. We also talked about how we use the same tooling and methods to manage the operations side of things. However, perhaps most important is how the development teams involve the operations team and vice versa: DevOps. Now, we want to take it one step further. We already talked about all the components involved in developing software or managing infrastructure in the paragraphs above, but what about using these DevOps tools to manage themselves?
When a new project lands, we need to provision a lot of things, such as a JIRA project, a place to store code on Bitbucket and a plethora of Jenkins jobs. The operations team needs the same tools and resources as the development team to design, set up and manage this new project’s infrastructure. While we’ve automated most of these things to some degree, they never come together in one long pipeline.
Creating the ultimate workflow
We are currently working on automating the last few tasks, like automatically creating the correct branching model in new Bitbucket repositories. Once that’s finished, we can combine all the automation tasks and create the following ultimate workflow.
- A project manager receives the go for a new customer project.
- The project manager enters a name for this project in a Jenkins job, chooses a few options, and that’s it.
- Jenkins then automatically creates all the necessary resources: a JIRA project, Jenkins views and builds, Bitbucket projects and Bitbucket repositories.
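Such a provisioning job might be sketched as follows. The host names and credential handling are invented, and the request payloads are simplified; the endpoints shown are the standard JIRA and Bitbucket Server REST APIs.

```groovy
// Hypothetical provisioning pipeline: the project key and name come in
// as parameters, everything else is created through REST calls.
pipeline {
    agent any
    parameters {
        string(name: 'PROJECT_KEY', description: 'Short key, e.g. ACME')
        string(name: 'PROJECT_NAME', description: 'Human-readable project name')
    }
    stages {
        stage('Create JIRA project') {
            steps {
                sh """
                    curl -u "\$JIRA_CREDS" -X POST \
                         -H 'Content-Type: application/json' \
                         -d '{"key": "${params.PROJECT_KEY}", "name": "${params.PROJECT_NAME}", "projectTypeKey": "software"}' \
                         https://jira.example.com/rest/api/2/project
                """
            }
        }
        stage('Create Bitbucket project') {
            steps {
                sh """
                    curl -u "\$BITBUCKET_CREDS" -X POST \
                         -H 'Content-Type: application/json' \
                         -d '{"key": "${params.PROJECT_KEY}", "name": "${params.PROJECT_NAME}"}' \
                         https://bitbucket.example.com/rest/api/1.0/projects
                """
            }
        }
    }
}
```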
Later on, we could really bring power to the people by letting teams build on this basic project. Let’s say a developer wants a new Dockerized project. We could automatically add a new repository to the existing Bitbucket project, spin up the necessary build and release pipelines in Jenkins or even create the required ECS or Kubernetes config to deploy all this.
When we achieve this degree of automation, we not only free up a lot of time on our schedules, but also greatly increase the consistency of our environments. We can build everything on the same building blocks and from the same templates, which makes it easy to pick things up when you switch teams.