How to keep your Amazon MQ queues clean

Reading Time: 2 minutes

Amazon MQ queues might fill up if you use them in your tests but don’t take care of cleaning them up. Let’s explore a way of addressing this issue together.

I was hoping to avoid writing dedicated code just to consume all the messages enqueued during tests, so I started looking around for a tool I could integrate into my Continuous Integration pipeline.

I found amazonmq-cli. I have to say, it’s not the most straightforward tool to use in a CI pipeline: its command-line options aren’t very flexible, it favours reading commands from files, and it only allows for file-based configuration.

Nevertheless, I downloaded and configured it.

Configuring amazonmq-cli in your CI pipeline

Amazon MQ does not support JMX, so amazonmq-cli needs access to the broker’s Web Console: make sure that’s configured. (If your broker does support JMX, you could give activemq-cli a try instead.)

Since I’m running this command as part of a clean-up job in my CI pipeline, I have to automate its configuration.

We need to produce two files:

  • the configuration file to be able to access the broker (and the web console)
  • and the list of commands to execute

Once you download and extract the tool you’ll see its directory structure is as follows.

➜ tree -L 1
.
├── bin
├── conf
├── lib
└── output

4 directories, 0 files

There’s not much opportunity for customization, so we will have to produce the relevant configuration file and put it in the conf folder (where you can find a sample one).

First thing, then, is to create the broker configuration.

echo -e "broker {\n  aws-broker {\n    web-console = \"$ACTIVEMQ_WEB_CONSOLE_URL\"\n    amqurl = \"$ACTIVEMQ_BROKER_URL\"\n    username = \"$ACTIVE_MQ_USER\"\n    password = \"$ACTIVE_MQ_PASSWORD\"\n    prompt-color = \"light-blue\"\n  }\n}\n\nweb-console {\n  pause = 100\n  timeout = 300000\n}" > /home/ciuser/amazonmq-cli/conf/amazonmq-cli.config

As you can see, I’ve also included a web-console section, which the tool expects when performing certain commands like purge-all-queues, the one I needed in this case.

Now, let’s configure the list of commands to run.

In my case, all the queues generated in the CI environment during the tests share the same prefix. This makes it easy to purge or delete them all later.

echo -e "connect --broker aws-broker\npurge-all-queues --force --filter $MY_SPECIAL_PREFIX\n" > purge-commands.txt

Drain them!

Finally, we can run the command.

/home/ciuser/amazonmq-cli/bin/amazonmq-cli --cmdfile $(pwd)/purge-commands.txt
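For reference, here’s how the whole clean-up job might be wired into a pipeline. This is a minimal sketch in GitLab CI syntax: the job name, stage and helper script are assumptions, and the broker credentials and MY_SPECIAL_PREFIX are expected to come from CI variables.

purge-test-queues:
  stage: cleanup
  when: always    # purge even when the tests fail, so queues don't pile up
  script:
    - ./ci/write-amazonmq-cli-config.sh    # hypothetical helper producing the config shown above
    - echo -e "connect --broker aws-broker\npurge-all-queues --force --filter $MY_SPECIAL_PREFIX\n" > purge-commands.txt
    - /home/ciuser/amazonmq-cli/bin/amazonmq-cli --cmdfile $(pwd)/purge-commands.txt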

That’s it! Let me know your experiences with cleaning up queues on Amazon MQ!

The Mythical DevOps Engineer

Reading Time: 8 minutes

I’m always a little suspicious of job specs looking for the so-called DevOps Engineer role. They often mention a vast variety of duties and responsibilities.

Are they hiring for a single role or a whole team?

Roles with DevOps in their title hardly ever share the same meaning. They often have one thing in common, though: they try to cover what traditionally would have been the specialization of different professionals.

Don’t get me wrong: cross-functional expertise is definitely important. But I don’t think DevOps means replacing a multitude of specializations with a single role. Different specializations like Operations, Security, Testing, Development, Product Management and so on, are vast and require specific knowledge.

I think the key differentiator of successful DevOps organizations is that they enable effective collaboration. Their clear North Star is the goal of delivering value to the end user.

Overall, I don’t think we should be talking about a DevOps Engineer, but rather about DevOps culture in organizations.

But let’s take a step back first.

What does DevOps mean, really?

I tweeted my own definition of DevOps some time ago.

DevOps organizations incentivise different specialities to collaborate. The intrinsic existing tension between Dev, making changes to the system, and Ops, wanting to keep the system stable, dissolves. The greater good is now the value stream.

A stable system that delivers nothing is as useless as an unstable system that keeps offering new functionality.

Dev and Ops understand the importance of working together to maximise this flow and to figure out which bets worked out and which ones didn’t.

Organizations that embrace the DevOps mindset can be more effective than the competition at experimenting with new functionality. They quickly validate their assumptions, activating and deactivating functionality by flipping a switch on a dashboard.

Incidents become an opportunity for learning rather than a chance to blame someone.

In general, DevOps organizations learn to adapt and evolve in any situation.

Overall, I think there shouldn’t be a single DevOps role but, rather, a set of specific specialities collaborating effectively.

This ideal view of the terminology, though, might sometimes clash with the reality of the job market. Companies willing to attract the best talent with the most current skills may end up advertising for roles that are counterproductive in the context of DevOps principles.

But let’s have a look at a few interesting job specs.

Photo by Jordan Whitfield on Unsplash

What are companies looking for?

Let’s read through a few excerpts from job specs I found out there in the wild.

The flexible problem solver

[…] Devops Engineers are IT professionals who collaborate with software developers, system operators and other IT staff members to manage code releases. They cross and merge the barriers that exist between software development, testing and operations teams and keep existing networks in mind as they design, plan and test. Responsible for multitasking and dealing with multiple urgent situations at a time, Devops Engineers must be extremely flexible. […]

A job spec on the internet

This is one of those classic examples where the organization believes that the DevOps principles should be delegated to a single team.

The spec mentions the myriad of duties that are the responsibility of the Devops Engineers in the company. A Devops Engineer is expected to “multi-task and deal with multiple urgent situations at a time”. Therefore, they “must be extremely flexible”.

Multitasking and dealing with multiple urgent situations at once is likely to happen anywhere: I don’t think it should be the peculiarity of a single role in an organization. On the contrary, a healthy environment empowers every engineer to handle urgent situations and learn from them.


Coming across this role, I’d think that the organization is not really trying to adopt DevOps practices. Instead of encouraging people to collaborate and improve, they’re building a dedicated team to throw issues and urgent situations at.

This job spec would be a big red flag for me.

The productivity booster

A DevOps Engineer combines an understanding of both engineering and coding. A DevOps Engineer works with various departments to create and develop systems within a company. From creating and implementing software systems to analysing data to improve existing ones, a DevOps Engineer increases productivity in the workplace.

Another job spec on the internet

In a DevOps organization engineers do work with various departments. But what’s the point, then, of having a dedicated DevOps Engineer role? Do the other types of engineers not work with the various departments of the organization? Do non-DevOps Engineers not analyse data and improve existing systems? Additionally, the job spec claims that a DevOps Engineer increases productivity in the workplace. How? Do they radiate productivity?

The Release Manager… but DevOps!

A DevOps Engineer works with developers and the IT staff to oversee the code releases. […] Ultimately, you will execute and automate operational processes fast, accurately and securely.

My favourite so far

This is quite a condensed one but what strikes me as interesting is the release aspect mentioned in it.

Releasing is quite a complex aspect of the DevOps culture, in my opinion. I tend to separate the concept of deployment from that of release. Product updates as experienced by the user are governed by a release policy that may or may not be the same as the deployment policy. This really depends on the strategy of the organization.

Regardless of this distinction, though, I believe that constraining the capability of delivering value to the end user to a specific role undermines the agility of an organization.

The teams should be able to continuously release code into production. The release of functionality should be controlled through mechanisms such as feature flags so that the code that reaches production does not necessarily activate. This makes it possible for the organization to control when the functionality actually reaches the user.

In general, a deployment should be a non-event: nothing special, just another merge into the main branch that causes code to end up in production. Moreover, releasing the actual functionality to the user should not require a dedicated engineer: the relevant stakeholders in the company, usually departments other than engineering, should be able to enable the functionality for users, perform experiments and autonomously decide when it is appropriate to release new functionality.

Job specs like this one feel like they’re trying to repurpose the role of the Release Manager to keep up with the latest trends by just changing a few words.

I don’t think release management goes away in a DevOps organization. What changes is that the concept of deployment gets decoupled from that of release. This is done to enhance the agility of the engineering organization and reduce the risk that’s intrinsic to changes reaching production environments.

At the same time, the relevant technological changes are implemented so that releasing new functionality to the users can become a non-event, in the hands of the organization’s strategic initiatives.

A Platform Engineer. But cooler!

The DevOps Engineer will be a key leader in shaping processes and tools that enable cross-functional collaboration and drive CI/CD transformation. The DevOps Engineer will work closely with product owners, developers, and external development teams to build and configure a high performing, scalable, cloud-based platform that can be leveraged by other product teams.

This job spec is one of those that I consider the least bad. It describes a set of responsibilities that usually pertain to a Platform or Infrastructure team. These teams often get renamed to DevOps Team, and their members become DevOps Engineers, for fashion reasons.

The Platform Engineering team becomes the key enabler for organizations that want to embrace the DevOps principles. But thinking that such principles will only be embraced by that team will hardly result in a successful journey.

The Platform Engineering team will surely be responsible for building the infrastructure that enables the other teams to build on top of it, but they can’t be left alone in understanding and applying those principles.

Developer teams will need to become autonomous in adopting and making changes to those systems; they will need to understand the implications of their code running in production; understand how to recognize if the system is not behaving as expected and be able to react to it.

At the same time, even the Product team should spend time understanding what new important capabilities derive from successfully adopting DevOps practices. Code continuously flowing into production behind feature flags, containerization technologies, improved monitoring and alerting, and so on, open endless opportunities for improved user experience and experimentation that should be leveraged to maximise the company’s competitiveness.

Photo by Matteo Vistocco on Unsplash

What should companies be looking for?

We’ve just gone through a few job specs that look for variations of a DevOps Engineer role and I’ve outlined what aspects I think are flawed in those roles. But what should companies look for, then?

Before blindly starting to hire for roles driven by industry fashion trends, organizations should rather invest in understanding what’s holding them back from being DevOps.

In The Unicorn Project, Gene Kim mentions the Five Ideals of successful DevOps organizations. I think they’re an effective set of principles for taking the temperature of your organization in terms of DevOps practices. Those ideals are as follows:

  • Locality and Simplicity
  • Focus, Flow and Joy
  • Improvement of Daily Work
  • Psychological Safety
  • Customer Focus

Locality and Simplicity

Making changes to the system, in order to deliver greater value to the end user, should be easy: easy in terms of a team’s autonomy to make changes to an area of the product, as well as in terms of the friction that the technology in use imposes on those changes.

Focus, Flow and Joy

Developers should be able to focus on their work and be able to make software with minimum impediments. This is facilitated by making sure that the software development lifecycle infrastructure is working for the benefit of the engineering organization.

Improvement of Daily Work

Continuously learning and improving the conditions in which the work gets done is the key to maximise the flow of value and the happiness of the people doing the work. Successful organizations facilitate a continuously improving environment by enabling engineers to build tools and practices that improve their daily operations.

Psychological Safety

An organization will hardly be able to improve if the people that are part of it are not incentivised to raise issues and discuss them. This is not something you solve by hiring a specific role. It is the organization’s responsibility to facilitate an environment where constructive feedback is the norm.

Customer Focus

Last but not least, the engineering organization, just like any other department in the company, should be sharply focused on the customer. All the efforts should be balanced against what’s best for the customer and, ultimately, for the company.


What should companies be looking for then? I think the priority should be on understanding what’s blocking the organization from fully embracing a DevOps mindset. Once that’s established, the expertise needed to get there is probably going to be easier to identify.

The most important aspect for me, though, is understanding the importance of specialities. Every role will have incredible value to add to the journey towards DevOps. What’s fundamental is understanding the importance of collaboration between the different roles. The organization will have to put the relevant practice changes in place to facilitate collaboration. Specific DevOps knowledge in terms of technology, tools and best practices will be required, for sure, but it won’t be something a single role should be responsible for.

Photo by Roi Dimor on Unsplash

A mythical role

It feels like the DevOps Engineer is a mythical figure that certain organizations pursue in the hope of finding the holy grail of a Software Engineer capable of doing anything.

This, of course, will hardly be the case. Recognizing the importance of individual specializations is what makes organizations successful and capable of maximising the expertise of the people they are made of.

What happens in a DevOps organization is that responsibilities are redistributed: developers are empowered to make changes to production environments because organizations recognize the importance of moving fast. This means opportunities for success increase together with the opportunities for failure.

Eliminating barriers and creating a safe space for collaboration helps Devs and Ops work together to resolve issues when they occur. This is what ultimately leads to high performing teams that are incentivised to follow the North Star of the continuous value stream to the end user.

Instead of pursuing a mythical role, then, let’s go after the much more plausible alternative of creating a well-oiled machine where all the people are incentivised to work together in harmony, with the clear goal of maximising the value to the end user.


Thanks for getting to the end of this article. I sincerely hope you’ve enjoyed it. Follow me on Twitter if you want to stay up-to-date with all my articles and the software I work on.

Cover photo by Rhii Photography on Unsplash

How I enabled CORS for any API on my Single Page App

Reading Time: 7 minutes

In this blog post I’ll show you how I used free services available to anyone to build a little proxy server that overcomes certain CORS limitations for my Single Page App.

I built Chisel to help with some repetitive API response composition and manipulation that I was doing at work.

It’s a single page app that allows you to perform requests against any API endpoint and compose results to extract only what you need. It also allows for CSV exports. Pretty straightforward.

Since it’s still in its earliest days, I decided to build it with the simplest architecture that would let me iterate quickly. I went for the JAMstack: built it in React and deployed it on Netlify.

Since it doesn’t have a back-end server it talks to, anything you do stays on your machine. Unfortunately, not all APIs allow for cross-origin requests so, in certain cases, you won’t be able to perform any request from your browser unless you enable the proxy functionality.

Proxy feature on chisel.cloud

What happens if you don’t is that your browser will attempt a CORS preflight request, which will fail if the API doesn’t respond with the expected headers.

CORS preflight request failure

What is CORS and when is it a problem for your Single Page App?

From the MDN documentation:

Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell browsers to give a web application running at one origin, access to selected resources from a different origin. A web application executes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, or port) from its own.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

Now, there are certain requests, called Simple Requests, that don’t trigger CORS checks. Unfortunately, these types of requests are quite limited and don’t allow you to pass certain headers like Authorization (e.g. for a basic-auth request). You can read more about these types of requests here.

For this reason, we’re going to allow a good set of HTTP methods and headers to pass through our proxy and return the response as unchanged as possible.

The bulk of the work will be configuring the right set of Access-Control-Allow-* headers to be returned to the browser when CORS preflight checks are performed. I recommend you have a look at the MDN Documentation to learn more about CORS as it is quite comprehensive.
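To make this more concrete, here’s roughly what a preflight exchange looks like. This is an illustrative sketch, not a capture from a real server. The browser first sends something like:

OPTIONS /some/endpoint HTTP/1.1
Origin: https://chisel.cloud
Access-Control-Request-Method: GET
Access-Control-Request-Headers: authorization

And it only performs the actual request if the response carries the expected headers, along these lines:

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://chisel.cloud
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: authorization
Access-Control-Max-Age: 1728000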

The proxy

In order to allow any request to pass the CORS preflight checks, I built a simple proxy server that returns the expected headers to the browser and passes the requests through to the destination server.

You can find the source code for it on GitHub, but let’s go through the steps to build your own for free.

Setting up NGINX

The proxy itself is a simple instance of NGINX configured with a server that allows proxied requests to a dynamic destination.

In order to be able to run NGINX on Heroku we have to make some changes so that it runs as a non-privileged user.

We’re basically making sure that NGINX only tries to write to locations writeable by an unprivileged user: this is because Heroku enforces that our container runs as non-root. You can read more about it here.
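The original file isn’t reproduced here, but the typical adjustments for running NGINX unprivileged look something like the following sketch (the paths are conventional choices, not requirements):

# nginx.conf: adjustments to let NGINX run as a non-root user
pid /tmp/nginx.pid;            # the default /var/run/nginx.pid isn't writeable
error_log /dev/stderr;

events {}

http {
  access_log /dev/stdout;

  # point every temp path at a location a non-root user can write to
  client_body_temp_path /tmp/client_temp;
  proxy_temp_path       /tmp/proxy_temp;
  fastcgi_temp_path     /tmp/fastcgi_temp;
  uwsgi_temp_path       /tmp/uwsgi_temp;
  scgi_temp_path        /tmp/scgi_temp;

  include /etc/nginx/conf.d/*.conf;    # our proxy server configuration
}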

Accounting for any URL

The second aspect of this configuration is actually defining our dynamic proxy: we will forward requests to any URL while exposing the right CORS information to the browser.

The main complexity of the Chisel case resides in the fact that we want to allow any URL to be proxied. This is because we won’t know in advance what URL the user will type in, of course.

The way NGINX allows for setting up the proxy functionality is through the proxy_pass directive:

Sets the protocol and address of a proxied server and an optional URI to which a location should be mapped. As a protocol, “http” or “https” can be specified.

The NGINX documentation

In order to specify the destination URL dynamically, I decided to go with a custom header: X-Chisel-Proxied-Url. This way Chisel will use that header to tell the proxy which destination to proxy through to.

proxy_pass $http_x_chisel_proxied_url;

The $ symbol in NGINX is used to reference variables, and HTTP request headers are automatically made available as $http_-prefixed variables (lower-cased, with dashes turned into underscores), which is what the above syntax relies on.

There are quite a few things to go through in this NGINX server configuration (a sketch of the full server block follows the walkthrough below). Let’s start with the location / block first.

The first bit in there is the if statement: it handles the CORS preflight case and basically allows a generous set of HTTP methods and headers by default. It restricts everything to the https://chisel.cloud Origin, just because I don’t want my proxy to be used by other applications.

  • proxy_redirect off: I disabled redirects for now. I’m still not sure how I’m going to handle them so I decided to turn them off until I can find a use case for them.
  • proxy_set_header Host $proxy_host: this is simply forwarding the destination host as the Host header. This is a requirement for valid HTTP requests through browsers. This value will be exactly the same as the one being set for proxy_pass.
  • proxy_set_header X-Real-IP $remote_addr: here we’re simply taking care of forwarding the client IP through to the destination.
  • proxy_pass $http_x_chisel_proxied_url: this is the real important bit of the whole configuration. We’re taking the header coming in from the Chisel client application and setting it as the URL to pass through to. This is effectively making the dynamic proxy possible.
  • proxy_hide_header 'access-control-allow-origin': this, together with the following add_header 'access-control-allow-origin' 'https://chisel.cloud' is basically making sure to override whatever Access-Control-Allow-Origin header is coming back from the destination server with one that only allows requests from our Chisel application.

Finally, the top two directives.

  • resolver: this is needed so that NGINX knows how to resolve the names of the upstream servers to proxy through to. In my case I picked a public free DNS. You can pick yours from here.
  • listen $__PORT__$ default_server: this one, instead, is the directive that makes everything possible using Docker on Heroku. We will have a look at it later in this blog post, so keep reading!
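Putting the directives above together, the server configuration template might look roughly like this. It’s a reconstruction based on the walkthrough, not the verbatim file: the exact methods and headers allowed are assumptions.

# conf.d/proxy.conf.tpl: reconstructed sketch of the server described above
resolver 8.8.8.8;                        # any public DNS resolver works here

server {
  listen $__PORT__$ default_server;      # placeholder replaced at startup

  location / {
    # answer CORS preflight requests directly
    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' 'https://chisel.cloud';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,X-Chisel-Proxied-Url';
      add_header 'Access-Control-Max-Age' 1728000;
      return 204;
    }

    proxy_redirect off;
    proxy_set_header Host $proxy_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass $http_x_chisel_proxied_url;

    # replace the destination's CORS header with one scoped to Chisel
    proxy_hide_header 'access-control-allow-origin';
    add_header 'access-control-allow-origin' 'https://chisel.cloud';
  }
}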

Building the container image

As mentioned above, I’m going to use NGINX’s base image.

Dockerfile

The Dockerfile is pretty simple. We’re replacing the default nginx.conf with our own to make sure that NGINX can run unprivileged. We’re also copying our proxy server configuration.

As you can see, I’ve named the file proxy.conf.tpl to be explicit about the fact that it is not ready to be used as is: we will have to dynamically set the port it listens on at runtime, before starting NGINX.

As clarified in the documentation, Heroku expects the containers to be able to listen on the value specified within the $PORT environment variable. The solution we’re using here, then, is making sure to replace the $__PORT__$ placeholder I have included in the configuration with the actual content of the $PORT environment variable.
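Reconstructed from the description above, the Dockerfile might look something like this sketch (the base image tag and file locations are assumptions):

FROM nginx:alpine

# replace the default config with our unprivileged-friendly one
COPY nginx.conf /etc/nginx/nginx.conf
# the server configuration template (port filled in at runtime)
COPY proxy.conf.tpl /etc/nginx/conf.d/proxy.conf.tpl

# Heroku runs the container as a non-root user, so the rendered
# config location must be writeable
RUN touch /etc/nginx/conf.d/proxy.conf && chmod a+rw /etc/nginx/conf.d/proxy.conf

# substitute Heroku's $PORT into the template, then start NGINX
CMD sed "s/[$]__PORT__[$]/$PORT/g" /etc/nginx/conf.d/proxy.conf.tpl > /etc/nginx/conf.d/proxy.conf && nginx -g 'daemon off;'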

Setting up Heroku

We’re almost there. Now we need to configure our application so that we can deploy our container straight from our repository.

Create a lovely new app on Heroku so that we can prepare it to work with containers.

Next, let’s configure the app to work with container images. I haven’t found a way to do it through the dashboard so let’s go ahead with the command line.
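With the Heroku CLI it takes a single command to switch the app to the container stack (the app name here is a placeholder for your own):

heroku stack:set container --app my-chisel-proxy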

Now add a simple heroku.yml file to your repository so that Heroku knows what to do to build the image.

build:
  docker:
    web: Dockerfile

Simple as that.

Now, in the Deploy tab of your application dashboard, make sure you connect your repository to the app: this way you’ll be able to deploy automatically.

Heroku Deploy section – Github connection

Your proxy is finally ready to go. Once you kick off the deploy you’ll be able to see it start up in the application logs as follows.

Startup application logs

As you can see, the process is being started using the command we have specified through the CMD directive and the PORT value is being injected by Heroku.

With the proxy up, you’ll be able to forward your requests through it. As mentioned above, you will need to use the custom X-Chisel-Proxied-Url header (or whatever header you decide to configure for your proxy) to specify the original URL the user intended to hit.
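From the command line, a proxied request might look like this (the Heroku hostname and destination URL are placeholders; note that CORS is enforced by browsers, so curl would get through even without the proxy, but it’s a handy way to check the setup):

curl -H 'X-Chisel-Proxied-Url: https://api.example.com/data' https://my-chisel-proxy.herokuapp.com/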

As you can see from the animated gif below, the proxy feature allows Chisel to overcome the CORS limitation when hitting the Nager.Date API.

The Chisel proxy in action

Conclusion

We have just built a proxy server reusing open-source technology. This allows us to keep our Single Page App separate from the server logic that’s needed to overcome the CORS limitations.

In general, CORS is one of the security measures your browser employs to mitigate certain opportunities for hijacking your website into performing unintended activity. Even though we have just examined a way of bypassing this limitation, always think twice about whether it is appropriate for your use case.

I hope you enjoyed this quick walk-through to build your own free proxy server. Don’t forget to follow me on Twitter for more content like this.