The Mythical DevOps Engineer

Reading Time: 8 minutes

I’m always a little suspicious of job specs for the so-called DevOps Engineer role. They often list a vast array of duties and responsibilities.

Are they hiring for a single role or a whole team?

Roles with DevOps in their title rarely share the same meaning. They often have something in common, though: they try to cover what would traditionally have been the specialization of several different professionals.

Don’t get me wrong: cross-functional expertise is definitely important. But I don’t think DevOps means replacing a multitude of specializations with a single role. Different specializations, like Operations, Security, Testing, Development and Product Management, are vast and require specific knowledge.

I think the key differentiator of successful DevOps organizations is that they enable effective collaboration, with delivering value to the end user as their clear North Star.

Overall, I don’t think we should be talking about a DevOps Engineer, but rather about DevOps culture in organizations.

But let’s take a step back first.

What does DevOps mean, really?

I tweeted my own definition of DevOps some time ago.

DevOps organizations incentivise different specialities to collaborate. The intrinsic existing tension between Dev, making changes to the system, and Ops, wanting to keep the system stable, dissolves. The greater good is now the value stream.

A stable system that delivers nothing is as useless as an unstable system that keeps offering new functionality.

Dev and Ops understand the importance of working together to maximise this flow to figure out which bets worked out and which ones didn’t.

Organizations that embrace the DevOps mindset can be more effective than the competition at experimenting with new functionality. They quickly validate their assumptions, activating and deactivating functionality by flipping a switch on a dashboard.

Incidents become an opportunity for learning rather than an occasion for blame.

In general, DevOps organizations learn to adapt and evolve in any situation.

Overall, I think there shouldn’t be a single DevOps role but, rather, a set of specific specialities collaborating effectively.

This ideal view of the terminology, though, can clash with the reality of the job market. Companies eager to attract the best talent with the most current skills may end up advertising roles that are counterproductive in the context of DevOps principles.

But let’s have a look at a few interesting job specs.

Photo by Jordan Whitfield on Unsplash

What are companies looking for?

Let’s read through a few excerpts from job specs I found out there in the wild.

The flexible problem solver

[…] Devops Engineers are IT professionals who collaborate with software developers, system operators and other IT staff members to manage code releases. They cross and merge the barriers that exist between software development, testing and operations teams and keep existing networks in mind as they design, plan and test. Responsible for multitasking and dealing with multiple urgent situations at a time, Devops Engineers must be extremely flexible. […]

A job spec on the internet

This is one of those classic examples where the organization believes that the DevOps principles should be delegated to a single team.

The spec lists a myriad of duties that are the responsibility of the company’s Devops Engineers. A Devops Engineer is expected to be “responsible for multitasking and dealing with multiple urgent situations at a time” and, therefore, “must be extremely flexible”.

Multitasking and dealing with multiple urgent situations is likely to happen anywhere: I don’t think it should be the peculiarity of a single role. On the contrary, a healthy environment empowers every engineer to handle urgent situations and learn from them.


Coming across this role, I’d think that the organization is not really trying to adopt DevOps practices. Instead of encouraging people to collaborate and improve, they’re building a dedicated team to throw issues and urgent situations at.

This job spec would be a big red flag for me.

The productivity booster

A DevOps Engineer combines an understanding of both engineering and coding. A DevOps Engineer works with various departments to create and develop systems within a company. From creating and implementing software systems to analysing data to improve existing ones, a DevOps Engineer increases productivity in the workplace.

Another job spec on the internet

In a DevOps organization, engineers do work with various departments. But then what’s the point of having a dedicated DevOps Engineer role? Do the other types of engineers not work with the various departments of the organization? Do non-DevOps Engineers not analyse data and improve existing systems? Additionally, the job spec claims that a DevOps Engineer increases productivity in the workplace. How? Do they radiate productivity?

The Release Manager… but DevOps!

A DevOps Engineer works with developers and the IT staff to oversee the code releases. […] Ultimately, you will execute and automate operational processes fast, accurately and securely.

My favourite so far

This is quite a condensed one but the release aspect mentioned in it strikes me as particularly interesting.

I tend to separate the concept of deployment from the one of release. Users experience product updates governed by a release policy that may or may not be the same as the deployment policy. This really depends on the strategy of the organization.

Regardless of this distinction, though, I believe that constraining the capability of delivering value to the end user to a specific role undermines the agility of an organization.

Teams should be able to continuously deploy code into production, with mechanisms such as feature flags controlling the release of functionality. This means code in production doesn’t necessarily activate upon deployment, making it possible for the organization to control when functionality actually reaches the user.
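To make this concrete, here’s a minimal sketch of a feature flag in JavaScript. The flag names, the in-memory `flags` map and the bucketing scheme are all illustrative assumptions, not any specific library’s API; in a real setup the flag data would come from a dashboard or a flag service.

```javascript
// Illustrative flag data: in production this would be fetched from a
// dashboard or flag service, not hard-coded.
const flags = {
  "new-checkout": { enabled: true, rolloutPercent: 50 },
  "dark-mode": { enabled: false },
};

// Deterministic bucketing: the same user always lands in the same bucket,
// so a percentage rollout doesn't flicker between requests.
function bucket(userId) {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) % 100;
  return hash;
}

function isEnabled(flagName, userId) {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;
  if (flag.rolloutPercent === undefined) return true;
  return bucket(userId) < flag.rolloutPercent;
}

// The deployed code branches on flag state; "releasing" the feature is a
// data change on the dashboard, not a new deployment.
const variant = isEnabled("new-checkout", "user-42") ? "new" : "old";
console.log(`serving ${variant} checkout to user-42`);
```

Because the bucketing is deterministic, a user stays in the same rollout cohort across requests; ramping the percentage up gradually releases the feature without any new deployment.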

In general, a deployment should be a non-event: nothing special, just another merge into the main branch that causes code to end up in production.

In a fast-paced world like the one we live in, an organization shouldn’t constrain itself by requiring dedicated engineers to release new functionality. Modern environments require companies to always be experimenting. Organizations should empower non-technical teams to run experiments, analyse data and autonomously decide when to release new functionality. All of this, ideally, shouldn’t require ad hoc intervention from a specific engineer.

Job specs like this one feel like they’re trying to repurpose the role of the Release Manager to keep up with the latest trends by just changing a few words.

I don’t think release management goes away in a DevOps organization. Rather, release management becomes the practice of ensuring that the rest of the organization can release autonomously. Achieving this means investing in automation and internal tools for the whole company.

A Platform Engineer. But cooler!

The DevOps Engineer will be a key leader in shaping processes and tools that enable cross-functional collaboration and drive CI/CD transformation. The DevOps Engineer will work closely with product owners, developers, and external development teams to build and configure a high performing, scalable, cloud-based platform that can be leveraged by other product teams.

This is the least bad of the job specs I’ve encountered. It describes a set of responsibilities that usually pertains to a Platform or Infrastructure team. These teams often get renamed to DevOps Team, and their members become DevOps Engineers, for fashion reasons.

The Platform Engineering team is the key enabler for organizations that want to embrace the DevOps principles. But assuming those principles pertain only to that one team will hardly result in a successful journey.

This team will surely be responsible for building the infrastructure that enables the other teams to build on top of it, but they can’t be left alone in understanding and applying those principles.

Developer teams will need to become autonomous at adopting and making changes to those systems; they will need to understand the implications of their code running in production, recognize when the system is not behaving as expected and be able to take action to restore it.

Equally, the Product team should spend time understanding what new important capabilities derive from adopting DevOps practices. Code continuously flowing into production behind feature flags, containerization technologies, improved monitoring and alerting, et cetera, open endless opportunities.

Improved user experience and experimentation opportunities, for example, are an important asset to leverage to remain competitive.

Photo by Matteo Vistocco on Unsplash

What should companies be looking for?

We’ve just gone through a few job specs that look for variations of a DevOps Engineer role and I’ve outlined what aspects I think are flawed in those roles. But what should companies look for, then?

Before blindly hiring for roles driven by industry fashion trends, organizations should invest in understanding what’s holding them back from embracing DevOps.

In The Unicorn Project, Gene Kim describes the Five Ideals of successful DevOps organizations. I think they’re an effective set of principles for taking the temperature of your organization in terms of DevOps practices. Those ideals are:

  • Locality and Simplicity
  • Focus, Flow and Joy
  • Improvement of Daily Work
  • Psychological Safety
  • Customer Focus

Locality and Simplicity

Making changes to the system in order to deliver greater value to the end user should be easy: easy in terms of the team’s autonomy to change the product, as well as in terms of the friction that the technology in use imposes on those changes.

Focus, Flow and Joy

Developers should be able to focus on their work and be able to develop software with minimum impediments. This is facilitated by making sure that the software development lifecycle infrastructure is working for the benefit of the engineering organization.

Improvement of Daily Work

Continuously learning and improving the conditions in which the work gets done is the key to maximise the flow of value and the happiness of the people doing the work. Successful organizations facilitate a continuously improving environment by enabling engineers to build tools and practices that enhance their daily operations.

Psychological Safety

An organization will hardly be able to improve if the people that are part of it are not incentivised to raise issues and address them. This is not something you solve for by hiring a specific role. It’s the organization’s responsibility to facilitate an environment where constructive feedback is the norm.

Customer Focus

Last but not least, the engineering organization, just like any other department in the company, should be sharply focused on the customer. All the efforts should be balanced against what’s best for the customer and, ultimately, for the company.


What should companies be looking for, then? I think the priority should be on understanding what’s blocking them from fully embracing a DevOps mindset across all departments. Most of the time you’ll realise that the set of skills you need is already there around you. What’s holding you back is probably the current set of processes through which work gets done at your company.

Photo by Roi Dimor on Unsplash

A mythical role

It feels like the DevOps Engineer is a mythical figure that certain organizations pursue in the hope of finding the holy grail of a Software Engineer capable of doing anything.

This, of course, will hardly be the case. Recognizing the importance of individual specializations is what makes organizations successful and able to maximise the expertise of their people.

What happens in a DevOps organization is that responsibilities are redistributed: developers are empowered to make changes to production environments because organizations recognize the importance of moving fast. This means opportunities for success increase together with the opportunities for failure.

Eliminating barriers and creating a safe space for collaboration helps Devs and Ops work together to resolve issues when they occur. This is what ultimately leads to high performing teams that are incentivised to follow the North Star of the continuous value stream to the end user.

Specific DevOps knowledge in terms of technology, tools and best practices will be required, for sure, but it won’t be something a single role should be responsible for.

Instead of pursuing a mythical role, then, let’s go after the much more plausible alternative of creating a well-oiled machine where all the people are incentivised to work together in harmony with the clear goal of maximising the value to the end user.


Thanks for getting to the end of this article. I sincerely hope you’ve enjoyed it. Follow me on Twitter if you want to stay up-to-date with all my articles and the software I work on.

Cover photo by Rhii Photography on Unsplash

How I set up my continuous deployment pipeline for free

Reading Time: 4 minutes

Continuous Deployment refers to the capability of your organisation to produce and release software changes in short and frequent cycles.

Pain vs Frequency relationship – https://www.martinfowler.com/bliki/FrequencyReducesDifficulty.html

One of the ideas behind Continuous Deployment is that increasing the frequency with which you deploy changes to production reduces the friction associated with deploying. Deployment is often an activity that gets neglected until the last minute: it is perceived as a necessary evil rather than an inherent part of a software engineer’s job. Shifting deployment left, as early as possible in the development life cycle, helps surface issues, dependencies and unexpected constraints sooner rather than later.

For instance, continuously deploying will make it easier to understand which change caused issues, if any, as well as making it easier to recover. Imagine having to scan through hundreds of commit messages in your version control system history to find the change that introduced the issue…

Automation is key to achieving continuous deployment.

The project

In this article we’re gonna explore how to leverage tools like GitLab Pipeline, Heroku and Docker to achieve a simple continuous deployment pipeline.

Let’s start by creating a simple Hello World application. For the purpose of this article I’m gonna use Create React App:

$ npx create-react-app continuous-deployment
$ cd continuous-deployment
$ npm start

Now that we have a running application, let’s build a Docker image to be able to deploy it to Heroku.

The container image

We’re going to write a simple Dockerfile to build our app:

FROM node:10.17-alpine
COPY . .
RUN yarn global add serve && yarn && yarn build
RUN adduser -D myuser
USER myuser
CMD serve -l $PORT -s build

First of all, two things to keep in mind when building images for Heroku:

  • Containers are not run with root privileges
  • The port to listen on is fed by Heroku into the container and needs to be consumed from an environment variable

As you can see from the Dockerfile definition, we are starting the app by passing the PORT environment variable. We can now test the image locally.

$ docker build . -t continuous-deployment:latest
$ docker run -e PORT=4444 -p 4444:4444 continuous-deployment:latest

The -e PORT=4444 flag specifies the port the server will listen on inside the container, while -p 4444:4444 maps it to the same port on the host. You can now try your application at http://localhost:4444.

Additionally, I’ve added a myuser user at the end of the Dockerfile, just to make sure everything still works with a non-root user.

Deploy to Heroku

Before building our continuous deployment pipeline, let’s deploy manually to make sure our image is good. Create a new application on Heroku and give it a name. In my case it’s gonna be cd-alediaferia.


Now let’s tag and push our image to the Heroku Registry after logging in.

$ heroku container:login
$ docker tag <image> registry.heroku.com/<app-name>/web
$ docker push registry.heroku.com/<app-name>/web

And release it straight to Heroku:

$ heroku container:release web -a <app-name>

You should now have your app successfully up and running on Heroku at this point.

The GitLab Pipeline

In this section, we’re going to configure the pipeline on GitLab so that we can continuously deploy our app. Here is the .gitlab-ci.yml file I have configured for my repository.

stages:
  - build
  - release

build_image:
  only:
    - master
  image: registry.gitlab.com/majorhayden/container-buildah
  stage: build
  variables:
    STORAGE_DRIVER: "vfs"
    BUILDAH_FORMAT: "docker"
  before_script:
    - dnf install -y nodejs
    - curl https://cli-assets.heroku.com/install.sh | sh
    - sed -i '/^mountopt =.*/d' /etc/containers/storage.conf
  script:
    - buildah bud --iidfile iidfile -t cd-alediaferia:$CI_COMMIT_SHORT_SHA .
    - buildah push --creds=_:$(heroku auth:token) $(cat iidfile) registry.heroku.com/cd-alediaferia/web

release:
  only:
    - master
  image: node:10.17-alpine
  stage: release
  before_script:
    - apk add curl bash
    - curl https://cli-assets.heroku.com/install.sh | sh
  script:
    - heroku container:release -a cd-alediaferia web

In the above snippet we have defined two jobs: build_image and release.

build_image

This job specifies how to build our Docker image. If you look closely, you’ll notice that I’m not using Docker itself but Buildah. Buildah is an OCI-compliant container building tool that is capable of producing Docker images with some minor configuration.

release

This job performs the actual release by pushing to your Heroku app.

Additional configuration

Before trying our pipeline out, let’s configure the HEROKU_API_KEY variable so that it can be picked up by the Heroku CLI that we use in the pipeline definition.

GitLab pipeline variable setting

Pushing to GitLab

Now that we have set everything up we are ready to push our code to the deployment pipeline.

GitLab pipeline in action

Let’s have a look at the build step that GitLab successfully executed.

GitLab pushing to the Heroku Registry

The first line uses buildah to build the image. It works pretty much like docker and I’ve used --iidfile to export the Image ID to a file that I then read from the command-line in the subsequent invocation.

The second line simply pushes to the Heroku Registry. Notice how easily I can log in by doing --creds=_:$(heroku auth:token): this tells buildah to use the token provided by Heroku to log into the registry.

The deployment job, finally, is as easy as:

$ heroku container:release -a cd-alediaferia web

Conclusion

My app is finally deployed, and everything happened automatically after my push to master. This is awesome because I can now continuously deliver my changes to production in a pain-free fashion.

My successfully deployed app

I hope you enjoyed this post. Let me know in the comments and follow me on Twitter if you want to stay up-to-date about DevOps and Software Engineering practices.