Yes, TDD slows you down

Reading Time: 3 minutes

I recently had the opportunity to reflect on testing practices, and in particular on the different layers of testing: unit testing, integration testing and acceptance testing.

The test pyramid suggests that each level has an associated cost. The cost of your tests then translates into the pace at which you can make changes to your system.

Quite frequently, early-stage organizations tend to focus less on unit tests and more on acceptance testing. In the early days of a new product, requirements fluctuate heavily and you might feel that unit tests will slow you down when it comes to making radical changes to the implementation.

By not investing in unit testing, though, you are giving up the opportunity to do TDD. Yes, you will still be able to write acceptance tests before writing any code. But such coarse-grained tests will not constrain the way you structure your code, your components and how they communicate with each other.

Your acceptance tests will guarantee that your system meets the requirements, now.

TDD is not just about testing

TDD is a software development methodology that strictly dictates the order you do things in. And it’s not dictating that order for the sake of it. Writing tests first and letting that drive the functional code you write after is going to impact the way you design and architect the system.

This is going to have numerous benefits: your code will be modular, loosely coupled and with high cohesion, as well as being clearly documented and easy to extend.

Unit testing is one aspect of TDD. It helps you at a fundamental level of your system: the implementation level, which is ultimately going to determine how agile you will be at making changes in the future.

If you develop a system without a TDD approach you won’t necessarily have a system without tests. But you will most likely have a system that is hard to extend and hard to understand.

This is the reason why I’m more and more convinced that you can’t really be that liberal about your testing layers. If you’re building new functionality and decide it is going to solely be tested through acceptance testing, you’re most likely going to miss out on the added benefits of TDD at your service layer.

When you mix TDD with non-TDD you’re not just compromising on the tests at certain layers of your test stack. You’re compromising on the architecture of your system.

Do you really need to go faster?

Of course you do! Who thinks that going slower is better?

Unfortunately, the question usually needs to be rephrased as:

Do you really need to go faster now, and go slower later?

I think this way of phrasing it helps you make the right decision. As I said at the beginning of this post, going faster now might actually be the right decision. TDD might be what slows you down now, and you might not be able to afford slowing down now. But my advice is not to stop asking that question as you keep making compromises.

When you defer the adoption of TDD you're not just skipping unit tests. You're developing code unconstrained, increasing the chances of high coupling and low cohesion. This inevitably makes you slower as you go. And it will keep making you slower until you incur such a high cost of change that implementing even the most trivial feature will seem like an insurmountable challenge. Not to mention the impact on your team's morale, as well as on the confidence the rest of the organization has in your team.

Be conscious

As I said at the beginning of this post, giving up on unit tests might actually be the right thing for you to do, now. What's important to realize, though, is that giving up on TDD at a certain layer doesn't just mean giving up on tests. It means giving up on a framework that guides you through specific principles. If you are not going to do TDD, do you have an alternative set of principles to stick to? Do you have an alternative way of structuring your system effectively, to help you scale? Are you going to have the confidence to make changes? Are new team members going to be able to onboard easily and make changes without requiring additional contextual knowledge?

If you enjoyed this post, chances are you’ll enjoy what I post on Twitter. Thank you for getting to the end of this article.

How to keep your Amazon MQ queues clean

Reading Time: 2 minutes

Amazon MQ queues might fill up if you use them in your tests but don’t take care of cleaning them up. Let’s explore together a way of addressing this issue.

I was hoping to avoid writing dedicated code to just consume all the messages enqueued during tests so I started looking around for some tool I could integrate in my Continuous Integration pipeline.

I found amazonmq-cli. I have to say, it's not the most straightforward tool to use in a CI pipeline. It's not very flexible when it comes to command-line options, favouring command files to read from instead, and it only allows for file-based configuration.

Nevertheless, I downloaded and configured it.

Configuring amazonmq-cli in your CI pipeline

Amazon MQ does not support JMX but if your broker does, you could give activemq-cli a try.

For the above reason, this tool needs Web Console access, so make sure that’s configured.

Since I’m running this command as part of a clean-up job in my CI pipeline, I have to automate its configuration.

We need to produce two files:

  • the configuration file to be able to access the broker (and the web console)
  • and the list of commands to execute

Once you download and extract the tool you’ll see its directory structure is as follows.

➜ tree -L 1
├── bin
├── conf
├── lib
└── output

4 directories, 0 files

There’s not much opportunity for customization, so we will have to produce the relevant configuration file and put it in the conf folder (where you can find a sample one).

First thing, then, is to create the broker configuration.

cat > /home/ciuser/amazonmq-cli/conf/amazonmq-cli.config <<EOF
broker {
  aws-broker {
    web-console = "$ACTIVEMQ_WEB_CONSOLE_URL"
    amqurl = "$ACTIVEMQ_BROKER_URL"
    username = "$ACTIVE_MQ_USER"
    password = "$ACTIVE_MQ_PASSWORD"
    prompt-color = "light-blue"
  }
}

web-console {
  pause = 100
  timeout = 300000
}
EOF

As you can see, I've also included a web-console section, which the tool expects when performing certain commands like the purge-all-queues one I needed in this case.

Now, let’s configure the list of commands to run.

In my case all the queues generated in the CI environment during the test share the same prefix. This makes it easier for me to purge them all or delete them later.

cat > purge-commands.txt <<EOF
connect --broker aws-broker
purge-all-queues --force --filter $MY_SPECIAL_PREFIX
EOF

Drain them!

Finally, we can perform the command.

/home/ciuser/amazonmq-cli/bin/amazonmq-cli --cmdfile $(pwd)/purge-commands.txt
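Putting it all together, the whole clean-up step can live in a single script in your pipeline. Here's a sketch; the install path, environment variable names and prefix default are assumptions from my setup (adjust them to yours), and the tool invocation is guarded so the script doesn't blow up on runners where the CLI isn't installed:

```shell
#!/usr/bin/env sh
set -eu

# where amazonmq-cli lives on the CI runner (mine was /home/ciuser/amazonmq-cli)
CLI_HOME="${CLI_HOME:-$HOME/amazonmq-cli}"
mkdir -p "$CLI_HOME/conf"

# 1. broker configuration, including the web-console section the tool expects
cat > "$CLI_HOME/conf/amazonmq-cli.config" <<EOF
broker {
  aws-broker {
    web-console = "${ACTIVEMQ_WEB_CONSOLE_URL:-}"
    amqurl = "${ACTIVEMQ_BROKER_URL:-}"
    username = "${ACTIVE_MQ_USER:-}"
    password = "${ACTIVE_MQ_PASSWORD:-}"
    prompt-color = "light-blue"
  }
}

web-console {
  pause = 100
  timeout = 300000
}
EOF

# 2. the commands to run: purge every queue matching our CI prefix
cat > purge-commands.txt <<EOF
connect --broker aws-broker
purge-all-queues --force --filter ${MY_SPECIAL_PREFIX:-ci-test-}
EOF

# 3. run the purge (skipped when the CLI isn't present on this machine)
if [ -x "$CLI_HOME/bin/amazonmq-cli" ]; then
  "$CLI_HOME/bin/amazonmq-cli" --cmdfile "$(pwd)/purge-commands.txt"
fi
```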

That’s it! Let me know your experiences with cleaning up queues on Amazon MQ!

The Mythical DevOps Engineer

Reading Time: 8 minutes

I’m always a little suspicious of job specs looking for the so-called DevOps Engineer role. They often mention a vast variety of duties and responsibilities.

Are they hiring for a single role or a whole team?

Roles with DevOps in their title hardly share the same meaning. They often have something in common, though: they try to cover what traditionally would have been the specializations of several different professionals.

Don't get me wrong: cross-functional expertise is definitely important. But I don't think DevOps means replacing a multitude of specializations with a single role. Specializations like Operations, Security, Testing, Development and Product Management are vast and require specific knowledge.

I think the key differentiator of successful DevOps organizations is that they enable effective collaboration, with the delivery of value to the end user as their clear North Star.

Overall, I don’t think we should be talking about a DevOps Engineer, but rather about DevOps culture in organizations.

But let’s take a step back first.

What does DevOps mean, really?

I tweeted my own definition of DevOps some time ago.

DevOps organizations incentivise different specialities to collaborate. The intrinsic existing tension between Dev, making changes to the system, and Ops, wanting to keep the system stable, dissolves. The greater good is now the value stream.

A stable system that delivers nothing is as useless as an unstable system that keeps offering new functionality.

Dev and Ops understand the importance of working together to maximise this flow to figure out which bets worked out and which ones didn’t.

Organizations that embrace the DevOps mindset can be more effective than the competition at experimenting with new functionality. They quickly validate their assumptions, activating and deactivating functionality by flipping a switch on a dashboard.

Incidents become an opportunity for learning rather than a chance of blaming someone.

In general, DevOps organizations learn to adapt and evolve in any situation.

Overall, I think there shouldn’t be a single DevOps role but, rather, a set of specific specialities collaborating effectively.

This ideal view of the terminology, though, might sometimes clash with the reality of the job market. Companies willing to attract the best talent with the most current skills may end up advertising for roles that are counterproductive in the context of DevOps principles.

But let’s have a look at a few interesting job specs.

work harder neon sign photo
Photo by Jordan Whitfield on Unsplash

What are companies looking for?

Let’s read through a few excerpts from job specs I found out there in the wild.

The flexible problem solver

[…] Devops Engineers are IT professionals who collaborate with software developers, system operators and other IT staff members to manage code releases. They cross and merge the barriers that exist between software development, testing and operations teams and keep existing networks in mind as they design, plan and test. Responsible for multitasking and dealing with multiple urgent situations at a time, Devops Engineers must be extremely flexible. […]

A job spec on the internet

This is one of those classic examples where the organization believes that the DevOps principles should be delegated to a single team.

The spec mentions the myriad of duties that are the responsibility of the Devops Engineers in the company. A Devops Engineer is expected to “multi-task and deal with multiple urgent situations at a time”. Therefore, they “must be extremely flexible”.

Multitasking and dealing with multiple urgent situations at a time is likely to happen anywhere: I don't think it should be the peculiarity of a single role in an organization. On the contrary, a healthy environment empowers every engineer to handle urgent situations and learn from them.

Coming across this role, I’d think that the organization is not really trying to adopt DevOps practices. Instead of encouraging people to collaborate and improve, they’re building a dedicated team to throw issues and urgent situations at.

This job spec would be a big red flag for me.

The productivity booster

A DevOps Engineer combines an understanding of both engineering and coding. A DevOps Engineer works with various departments to create and develop systems within a company. From creating and implementing software systems to analysing data to improve existing ones, a DevOps Engineer increases productivity in the workplace.

Another job spec on the internet

In a DevOps organization engineers do work with various departments. But what's the point, then, of having a dedicated DevOps Engineer role? Do the other types of engineers not work with the various departments of the organization? Do non-DevOps Engineers not analyse data and improve existing systems? Additionally, the job spec claims that a DevOps Engineer increases productivity in the workplace. How? By radiating productivity?

The Release Manager… but DevOps!

A DevOps Engineer works with developers and the IT staff to oversee the code releases. […] Ultimately, you will execute and automate operational processes fast, accurately and securely.

My favourite so far

This is quite a condensed one but the release aspect mentioned in it strikes me as particularly interesting.

I tend to separate the concept of deployment from the one of release. Users experience product updates governed by a release policy that may or may not be the same as the deployment policy. This really depends on the strategy of the organization.

Regardless of this distinction, though, I believe that constraining the capability of delivering value to the end user to a specific role undermines the agility of an organization.

The teams should be able to continuously release code into production, with mechanisms such as feature flags controlling the release of functionality. This means that code in production isn't necessarily active upon deployment, making it possible for the organization to control when the functionality actually reaches the user.

In general, a deployment should be a non-event: nothing special, just another merge into the main branch that causes code to end up in production.

In a fast-paced world like the one we live in an organization shouldn’t constrain itself by requiring dedicated engineers to release new functionality. Modern environments require companies to always be experimenting. Organizations should empower non-technical teams to run experiments, analyse data and autonomously decide when to release new functionality. All of this, ideally, shouldn’t require ad hoc intervention from a specific engineer.

Job specs like this one feel like they’re trying to repurpose the role of the Release Manager to keep up with the latest trends by just changing a few words.

I don't think release management goes away in a DevOps organization. Rather, release management becomes about ensuring that the rest of the organization can be autonomous at releasing. Achieving this means investing in automation and internal tools for the whole company.

A Platform Engineer. But cooler!

The DevOps Engineer will be a key leader in shaping processes and tools that enable cross-functional collaboration and drive CI/CD transformation. The DevOps Engineer will work closely with product owners, developers, and external development teams to build and configure a high performing, scalable, cloud-based platform that can be leveraged by other product teams.

This is the least bad of the job specs I’ve encountered. It describes a set of responsibilities that usually pertain to a Platform or Infrastructure Team. Most of these teams often get renamed to DevOps Team and their members become DevOps Engineers for fashion reasons.

The Platform Engineering team is a key enabler for organizations that want to embrace DevOps principles. But thinking that those principles only pertain to a specific team will hardly result in a successful journey.

This team will surely be responsible for building the infrastructure that enables the other teams to build on top of it, but they can't be left alone in understanding and applying those principles.

Developer teams will need to become autonomous at adopting and making changes to those systems; they will need to understand the implications of their code running in production, recognize when the system is not behaving as expected, and be able to act to restore it.

Equally, the Product team should spend time understanding what new important capabilities derive from adopting DevOps practices. Code continuously flowing into production behind feature flags, containerization technologies, improved monitoring and alerting, et cetera, open endless opportunities.

Improved user experience and experimentation opportunities, for example, are an important asset to leverage to remain competitive.

people riding boat on body of water photo
Photo by Matteo Vistocco on Unsplash

What should companies be looking for?

We’ve just gone through a few job specs that look for variations of a DevOps Engineer role and I’ve outlined what aspects I think are flawed in those roles. But what should companies look for, then?

Before blindly starting to hire for roles driven by industry fashion trends, organizations should rather invest in understanding what’s holding them back from being DevOps.

In the Unicorn Project, Gene Kim mentions the Five Ideals of successful DevOps organizations. I think they’re an effective set of principles to take the temperature of your organization in terms of DevOps practices. Those ideals are:

  • Locality and Simplicity
  • Focus, Flow and Joy
  • Improvement of Daily Work
  • Psychological Safety
  • Customer Focus

Locality and Simplicity

Making changes to the system, in order to deliver greater value to the end user, should be easy: easy in terms of the teams' autonomy to make changes to the product, as well as in terms of the friction the technology in use imposes on those changes.

Focus, Flow and Joy

Developers should be able to focus on their work and be able to develop software with minimum impediments. This is facilitated by making sure that the software development lifecycle infrastructure is working for the benefit of the engineering organization.

Improvement of Daily Work

Continuously learning and improving the conditions in which the work gets done is the key to maximise the flow of value and the happiness of the people doing the work. Successful organizations facilitate a continuously improving environment by enabling engineers to build tools and practices that enhance their daily operations.

Psychological Safety

An organization will hardly be able to improve if the people that are part of it are not incentivised to raise issues and address them. This is not something you solve for by hiring a specific role. It’s the organization’s responsibility to facilitate an environment where constructive feedback is the norm.

Customer Focus

Last but not least, the engineering organization, just like any other department in the company, should be sharply focused on the customer. All the efforts should be balanced against what’s best for the customer and, ultimately, for the company.

What should companies be looking for, then? I think the priority should be on understanding what's blocking them from fully embracing a DevOps mindset, across all departments. Most of the time you'll realise that the set of skills you need is already around you. What's holding you back is probably the current set of processes through which work gets done at your company.

nude man statue photo
Photo by Roi Dimor on Unsplash

A mythical role

It feels like the DevOps Engineer is a mythical figure that certain organizations pursue in the hope of finding the holy grail of a Software Engineer capable of doing anything.

This, of course, will hardly be the case. Recognizing the importance of individual specializations is what makes organizations successful and capable of maximising the expertise of the people they are made of.

What happens in a DevOps organization is that responsibilities are redistributed: developers are empowered to make changes to production environments because organizations recognize the importance of moving fast. This means opportunities for success increase together with the opportunities of failure.

Eliminating barriers and creating a safe space for collaboration helps Devs and Ops work together to resolve issues when they occur. This is what ultimately leads to high performing teams that are incentivised to follow the North Star of the continuous value stream to the end user.

Specific DevOps knowledge in terms of technology, tools and best practices will certainly be required, but it won't be something a single role is responsible for.

Instead of pursuing a mythical role, then, let's go after the much more plausible alternative: a well-oiled machine where everyone is incentivised to work together in harmony, with the clear goal of maximising value to the end user.

Thanks for getting to the end of this article. I sincerely hope you’ve enjoyed it. Follow me on Twitter if you want to stay up-to-date with all my articles and the software I work on.

Cover photo by Rhii Photography on Unsplash

How I enabled CORS for any API on my Single Page App

Reading Time: 7 minutes

In this blog post I’ll show you how I used free services available to anyone to build a little proxy server for my app to overcome certain CORS limitations for my Single Page App.

I built Chisel to help with some repetitive API responses composition and manipulation that I was doing at work.

It’s a single page app that allows you to perform requests against any API endpoint and compose results to extract only what you need. It also allows for CSV exports. Pretty straightforward.

Since it's still in its earliest days, I decided to build it with the simplest architecture possible, so that I'd be able to iterate quickly. I went for the JAMstack: I built it in React and deployed it on Netlify.

Since it doesn't talk to a back-end server, anything you do stays on your machine. Unfortunately, not all APIs allow cross-origin requests, so in certain cases you won't be able to perform any request from your browser unless you enable the proxy functionality.

Proxy feature on

What happens if you don’t is that your browser will attempt a CORS preflight request which will fail if the API doesn’t respond with the expected headers.

CORS preflight request failure

What is CORS and when is it a problem for your Single Page App?

From the MDN documentation:

Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell browsers to give a web application running at one origin, access to selected resources from a different origin. A web application executes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, or port) from its own.

Now, there are certain requests, called Simple Requests, that don't trigger CORS checks. Unfortunately, these types of requests are quite limited and don't allow you to pass certain headers, like the Authorization one (e.g. for a basic-auth request). You can read more about these types of requests here.

For this reason, we’re going to allow a good set of HTTP methods and headers to pass through our proxy and return back the response as unchanged as possible.

The bulk of the work will be configuring the right set of Access-Control-Allow-* headers to be returned back to the browser when CORS preflighted checks are performed. I recommend you have a look at the MDN Documentation to learn more about CORS as it is quite comprehensive.

The proxy

In order to allow any request to pass the CORS preflight checks I built a simple proxy server that returns the expected headers to the browser and passes through the requests to the destination server.

You can find the source code for it on Github, but let’s go through the steps to build your own for free.

Setting up NGINX

The proxy itself is a simple instance of NGINX configured with a server that allows proxied requests to a dynamic destination.

In order to be able to run NGINX on Heroku we have to make some changes to run it as non-privileged user.

We're basically making sure that NGINX will only try to write to unprivileged, writeable locations: this is necessary because Heroku enforces that our container runs as non-root. You can read more about it here.
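I can't reproduce the exact file here, but the typical overrides look like this. This is a sketch: the paths are the common choices for an unprivileged NGINX setup, not necessarily the exact ones from the repo.

```nginx
# Run-as-non-root tweaks: move the pid file and all temp paths
# to locations writeable by an unprivileged user.
pid /tmp/nginx.pid;

events {}

http {
  client_body_temp_path /tmp/client_temp;
  proxy_temp_path       /tmp/proxy_temp;
  fastcgi_temp_path     /tmp/fastcgi_temp;
  uwsgi_temp_path       /tmp/uwsgi_temp;
  scgi_temp_path        /tmp/scgi_temp;

  include /etc/nginx/conf.d/*.conf;
}
```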

Accounting for any URL

The second aspect of this configuration is actually defining our dynamic proxy: we will translate requests to any URL so that they will expose the right CORS information.

The main complexity of the Chisel case resides in the fact that we want to allow any URL to be proxied. This is because we won’t know in advance what URL the user will type in, of course.

The way NGINX allows for setting up the proxy functionality is through the proxy_pass directive:

Sets the protocol and address of a proxied server and an optional URI to which a location should be mapped. As a protocol, “http” or “https” can be specified.

The NGINX documentation

In order to be able to specify the URL to pass to dynamically I decided to go with a custom header: X-Chisel-Proxied-Url. This way Chisel will use that header to tell the proxy which destination to proxy through to.

proxy_pass $http_x_chisel_proxied_url;

The $ symbol in NGINX references variables, and HTTP request headers are automatically exposed as variables with the $http_ prefix, as in the syntax above.

There’s quite a bit of things to go through in this NGINX server configuration. Let’s start with the location / block first.

The first bit in there is the if statement: it handles the CORS preflight request case, allowing a default set of HTTP methods and headers. It restricts everything to a single Origin, because I don't want my proxy to be used by other applications.

  • proxy_redirect off: I disabled redirects for now. I’m still not sure how I’m going to handle them so I decided to turn them off until I can find a use case for them.
  • proxy_set_header Host $proxy_host: this is simply forwarding the destination host as the Host header. This is a requirement for valid HTTP requests through browsers. This value will be exactly the same as the one being set for proxy_pass.
  • proxy_set_header X-Real-IP $remote_addr: here we’re simply taking care of forwarding the client IP through to the destination.
  • proxy_pass $http_x_chisel_proxied_url: this is the real important bit of the whole configuration. We’re taking the header coming in from the Chisel client application and setting it as the URL to pass through to. This is effectively making the dynamic proxy possible.
  • proxy_hide_header 'access-control-allow-origin': this, together with the following add_header 'access-control-allow-origin' '' is basically making sure to override whatever Access-Control-Allow-Origin header is coming back from the destination server with one that only allows requests from our Chisel application.

Finally, the top two directives.

  • resolver: this is needed so that NGINX knows how to resolve the names of the upstream servers to proxy through to. In my case I picked a public free DNS. You can pick yours from here.
  • listen $__PORT__$ default_server: this one, instead, is the directive that makes everything possible using Docker on Heroku. We will have a look at it later in this blog post, so keep reading!
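Putting the directives above together, the server configuration looks roughly like this. It's reconstructed from the description rather than copied from the repo, and the allowed origin, methods and header lists are illustrative placeholders — check the source on Github for the real file.

```nginx
resolver 8.8.8.8;                        # a public DNS; pick your own

server {
  listen $__PORT__$ default_server;      # placeholder substituted at startup

  location / {
    # answer CORS preflight requests directly
    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' 'https://your-chisel-origin.example';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,X-Chisel-Proxied-Url';
      return 204;
    }

    proxy_redirect off;
    proxy_set_header Host $proxy_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass $http_x_chisel_proxied_url;

    # override whatever the destination returns with our own origin
    proxy_hide_header 'access-control-allow-origin';
    add_header 'access-control-allow-origin' 'https://your-chisel-origin.example';
  }
}
```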

Building the container image

As mentioned above, I’m going to use NGINX’s base image.


The Dockerfile is pretty simple. We’re replacing the default nginx.conf with our own to make sure that NGINX can run unprivileged. We’re also copying our proxy server configuration.

As you can see, I named the file proxy.conf.tpl to be explicit about the fact that it's not ready to be used as is: we'll have to substitute the port it listens on at runtime, before starting NGINX.
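The Dockerfile itself looks roughly like this. It's a sketch: the NGINX version and the start script name are assumptions, but the copied files follow the layout described in this post.

```dockerfile
# Sketch of the Dockerfile; base image tag and script name are assumptions.
FROM nginx:1.19-alpine

# replace the default config with our unprivileged-friendly one
COPY nginx.conf /etc/nginx/nginx.conf

# our proxy server configuration template
COPY proxy.conf.tpl /etc/nginx/conf.d/proxy.conf.tpl

# a small start script: substitutes the runtime port, then runs NGINX
COPY start.sh /start.sh
CMD ["/start.sh"]
```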

As clarified in the documentation, Heroku expects containers to listen on the port specified in the $PORT environment variable. The solution here, then, is to replace the $__PORT__$ placeholder in the configuration with the actual value of $PORT at startup.
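The substitution itself can be done with sed in the container's start script. Here's a runnable sketch of just that step — the template file is recreated inline purely for the demo; in the container it's the proxy.conf.tpl copied by the Dockerfile:

```shell
# Stand-in for the real template, just for demonstration purposes.
printf 'listen $__PORT__$ default_server;\n' > proxy.conf.tpl

PORT="${PORT:-8080}"   # Heroku injects PORT at runtime

# Single-quote the sed pattern so the shell doesn't try to expand $__PORT__$,
# then splice in the actual port value.
sed 's|\$__PORT__\$|'"$PORT"'|g' proxy.conf.tpl > proxy.conf

cat proxy.conf   # → listen 8080 default_server;
```

In the real start script you'd follow this with `exec nginx -g 'daemon off;'` to hand the process over to NGINX.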

Setting up Heroku

We’re almost there. Now we need to configure our application so that we can deploy our container straight from our repository.

Create a new lovely app on Heroku so that we can prepare it to work with containers.

Next, let’s configure the app to work with container images. I haven’t found a way to do it through the dashboard so let’s go ahead with the command line.
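For reference, the command I mean here is the one that switches the app's stack to container (assuming the Heroku CLI is installed and you're logged in; the app name is a placeholder):

```shell
heroku stack:set container --app <your-app-name>
```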

Now add a simple heroku.yml file to your repository so that Heroku knows what to do to build the image.

    build:
      docker:
        web: Dockerfile

Simple as that.

Now, in the Deploy tab of your application dashboard, make sure you connect your repository to the app: this way you’ll be able to deploy automatically.

Heroku Deploy section – Github connection

Your proxy is finally ready to go. Once you kick off the deploy you’ll be able to see it start up in the application logs as follows.

Startup application logs

As you can see, the process is being started using the command we have specified through the CMD directive and the PORT value is being injected by Heroku.

With the proxy up you’ll now be able to forward your requests through the proxy. As mentioned above, you will need to use the custom X-Chisel-Proxied-Url header (or whatever header you decide to configure for your proxy) to specify the original URL the user intended to hit.

As you can see from the animated GIF below, the proxy feature overcomes the CORS limitation when hitting the Nager.Date API from Chisel.

The Chisel proxy in action


We have just built a proxy server reusing open-source technology. This allows us to keep our Single Page App separate from the server logic that's needed to overcome the CORS limitations.

In general, CORS is one of the security measures your browser employs to mitigate certain opportunities for hijacking your website to perform unintended activity. Even though we've just examined a way to bypass this limitation, always think twice about whether it's appropriate for your use case.

I hope you enjoyed this quick walk-through to build your own free proxy server. Don’t forget to follow me on Twitter for more content like this.

How I used GCP to create the transcripts for my Podcast

Reading Time: 4 minutes

I’m currently working on a series of episodes for a Podcast I’ll be publishing soon. The Podcast will be in Italian and I wanted to make sure to publish the episode transcripts together with the audio episodes.

The idea of manually typing all the episodes text wasn’t really appealing to me so I started looking around.

What are the tools out there?

From a quick Google search, it seems that some companies are offering a mix of automated and human-driven transcription services.

I wasn't really interested in that for now. I was just interested in an API I could push my audio to and get back some text in a reasonable amount of time.

For this reason, I started looking for speech-to-text APIs and, of course, the usual suspects figured among the first results.

To be quite honest, I didn’t spend too much time investigating the solutions above. I probably spent more time reading about them to write this blog post.

I decided to go with Google Cloud because I'd never used GCP before and wanted to give it a try, which also meant I had a few free credits available for personal use. Additionally, the documentation seemed quite straightforward, as did the support for Italian as the language to transcribe from (the podcast is in Italian).

Setting up

If you want to try transcribing your episodes too, follow this quick setup guide to get started.

Head over to Google Cloud and set up an account. Make sure you create a project and enable the Speech-to-Text API. If you forget to, gcloud will take care of it for you later.

Google Cloud Speech-to-Text

The second thing I did was install gcloud, the CLI Google Cloud provides for interacting with its APIs. Since I was only interested in testing the API, this tool seemed like the quickest way to get started.

Additionally, there’s not much you can do from the Google Cloud Web Console if you want to deal with Speech-to-Text APIs.

Get your file ready for transcription

The sampling rate of your audio file should be at least 16 kHz for better results, and GCP recommends a lossless codec. I only had an mp3 of my episode handy at the time, so I gave it a try anyway and it worked well enough.

Make sure you know the sample rate of your file, though, because specifying a wrong one might lead to poor results.

You can usually verify the sample rate by getting info on your file from your Mac’s Finder:

File's context menu
Just click Get Info on your file’s context menu
File's Finder Info section with sample rate
There’s the sample rate

You can read more about the recommended settings in the Best Practices section of the documentation.

Upload your episode to the bucket

GCP needs your file to be available in a Storage bucket, so go ahead and create one.

Storage Bucket creation example

You’ll be able to upload your episode from there.

Time to transcribe

Once you have your episode file up there in the cloud, go back to the terminal on your local machine where you configured the gcloud tool.

Gcloud used to trigger the speech-to-text transcription

If your episode lasts longer than 60 seconds (😬) you’ll want to use recognize-long-running and most likely specify --async.

As I said before, make sure you specify the right --sample-rate: in my case 44100. This will help GCP transcribe your file with better results.

The --async switch creates a long-running asynchronous operation. It took around 5 minutes for me to have the operation complete.
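For reference, the full invocation looks roughly like this (the bucket and file names are illustrative):

```
$ gcloud ml speech recognize-long-running \
    'gs://<your-bucket>/episode.mp3' \
    --language-code='it-IT' --sample-rate=44100 --async
```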

Oddly, I wasn’t able to find any reference to the asynchronous operation from my Google Cloud Console. So, if you want to be able to know what happened to your transcription job, make sure you take note of the operation identifier. You’ll need it to query the speech operations API for information about your transcription job.

The speech operation metadata

The transcribed data

Once your transcription operation is complete, the describe command will return the transcript excerpts, together with a confidence score.

The speech transcript excerpt

I wasn’t particularly interested in the confidence score: I just wanted a big blob of text I could review, include with the episode and use for SEO purposes. For this reason, jq to the rescue!

I love jq: you can achieve so much with it when it comes to manipulating JSON.

In my case, I only wanted to concatenate all the transcript fields and save them to a file. Here’s how I did it:

$ ./bin/gcloud ml speech operations describe <your-transcription-operation-id> | jq '.response.results[].alternatives[].transcript' > my-transcript.txt
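If you are curious what that filter yields, here it is against a tiny, made-up response payload (I’ve added the -r flag, which strips the JSON string quotes from the output):

```shell
echo '{"response":{"results":[{"alternatives":[{"transcript":"ciao a tutti"}]},{"alternatives":[{"transcript":"e benvenuti"}]}]}}' \
  | jq -r '.response.results[].alternatives[].transcript'
```

The real response contains many such results, so the filter above concatenates them one per line.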

And that’s it!


I thought of sharing the steps above because they’ve been useful to me in producing the transcripts. I think GCP Speech-to-Text works quite well with Italian but, of course, the transcript is not suitable to be used as it is, unless your accent is perfect. Mine wasn’t 😅.

If you want to know more about my journey towards publishing my first podcast, follow me on Twitter where I’ll be sharing more about it.

Photo by Malte Wingen on Unsplash

How I used Chisel to pull Gitlab pipelines stats

Reading Time: 4 minutes

I built Chisel in my spare time to automate something I did to derive insights about my GitLab pipeline times.

In this blog post I’m going to show you how I did it in the hope that it might be useful to you too.

chisel app main screen

As you can see from the picture above, Chisel is still pretty early stage. I decided to publish it anyway because I’m curious to know whether something like this can be useful to you too.

Understanding deployment time

The goal of this exercise was for me to better understand the deployment time (from build to being live in production) of my project and have a data-driven approach as to what to do next.

Since the project in question uses Gitlab CI/CD, I thought of taking advantage of its API to pull down this kind of information.

Gitlab Pipelines API

The GitLab pipelines API is pretty straightforward, but a few differences between the /pipelines and the /pipelines/:id endpoints mean that you have to do a little composition work to pull down interesting data.

Here’s how I did it.

1. Pull down your successful pipelines

The first thing I did was fetch the successful pipelines for my project.

chisel app first screen

As you can see, this API returns minimal information about each pipeline. What I needed to do next in order to understand pipeline times was to fetch further details for each pipeline.

Chisel – Transform

Chisel provides a handy transformation tool that uses JMESPath to help you manipulate the JSON returned by the API you are working with. I used it to extract the pipeline IDs from the returned response.

chisel app second screen

Chisel shows you a live preview of your transformation. Something as simple as [*].id is enough for now. The result is an array of pipeline IDs.

Right after obtaining all the IDs I need, I can apply another transformation to turn those IDs into pipeline objects with all the relevant information for my stats.

Chisel has another kind of transformation type called Fetch that helps you transform the selected values into the result of something fetched from a URL.

chisel app third screen

In particular, you can use the ${1} placeholder to pass in the mapped value. In my case, each ID is being mapped to the /pipelines/${1} API.

The result is pretty straightforward.

chisel app fourth screen

2. Filter out what you don’t need

As you can see, some of the returned pipelines have a before_sha value of 0000000000000000000000000000000000000000. Those are pipelines triggered outside of merges into master, so I’m not interested in them.

Filtering those out is as simple as [?before_sha != '0000000000000000000000000000000000000000'].

chisel app fifth screen

The transformation history

As you can see, on the right of the screen there’s a little widget that shows you the transformations you have applied. You can use it to go back and forth in the transformation history and rollback/reapply the modifications to your data.

chisel app transformation history

3. The last transformation

The last transformation I need to be able to start pulling out useful information has to turn my output into a set of records.

chisel app sixth screen

I’m selecting only a few fields and turning the result into an array of arrays. This is the right format for exporting it as a CSV.
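Concretely, a JMESPath multiselect list produces that shape. Something along these lines would do (the selected field names are illustrative):

```
[*].[id, created_at, duration]
```

Each pipeline object gets mapped to an array of the selected values, and the outer projection turns the whole result into an array of arrays, ready for the CSV export.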

chisel app csv download screen

Google Sheets

Finally, I can upload my CSV export to Google Sheets and plot the information I need.

google sheets import


Chisel is still at its earliest stage of development and pretty much tailored to my specific use case, but if a tool like this could be useful to you too, please head to the GitHub repo and suggest the improvements you’d like to see.

If you liked this post and want to know more about Chisel, follow me on Twitter!

Featured image by Dominik Scythe on Unsplash

How I setup my continuous deployment pipeline for free

Reading Time: 4 minutes

Continuous Deployment refers to the capability of your organisation to produce and release software changes in short and frequent cycles.

Pain vs Frequency relationship

One of the ideas behind Continuous Deployment is that increasing the frequency of deployment of your changes to production will reduce the friction associated with it. On the contrary, deployment is often an activity that gets neglected until the last minute: it is perceived more as a necessary evil rather than an inherent part of a software engineer’s job. However, shifting deployment left, as early as possible in the development life cycle, will help surface issues, dependencies and unexpected constraints sooner rather than later.

For instance, continuously deploying will make it easier to understand which change caused issues, if any, as well as making it easier to recover. Imagine having to scan through hundreds of commit messages in your version control system history to find the change that introduced the issue…

Automation is key to achieving continuous deployment.

The project

In this article we’re gonna explore how to leverage tools like GitLab Pipelines, Heroku and Docker to achieve a simple continuous deployment pipeline.

Let’s start by creating a simple Hello World application. For the purpose of this article I’m gonna use Create React App:

$ npx create-react-app continuous-deployment
$ cd continuous-deployment
$ npm start

Now that we have a running application, let’s build a Docker image to be able to deploy it to Heroku.

The container image

We’re going to write a simple Dockerfile to build our app:

FROM node:10.17-alpine
COPY . .
RUN yarn global add serve && yarn && yarn build
RUN adduser -D myuser
USER myuser
CMD serve -l $PORT -s build

First of all, two things to keep in mind when building images for Heroku:

  • Containers are not run with root privileges
  • The port to listen on is fed by Heroku into the container and needs to be consumed from an environment variable

As you can see from the Dockerfile definition, we are starting the app by passing the PORT environment variable. We can now test the image locally.

$ docker build . -t continuous-deployment:latest
$ docker run -e PORT=4444 -p 4444:4444 continuous-deployment:latest

The -e PORT=4444 specifies which port we’re going to listen to. You can now try your application at http://localhost:4444.

Additionally, I’ve added a myuser user at the end of the Dockerfile, just to make sure everything still works with a non-root user.

Deploy to Heroku

Before building our continuous deployment pipeline, let’s deploy manually to make sure our image is good. Create a new application on Heroku and give it a name. In my case it’s gonna be cd-alediaferia.


Now let’s tag and push our image to the Heroku Registry after logging in.

$ heroku container:login
$ docker tag <image> registry.heroku.com/<app-name>/web
$ docker push registry.heroku.com/<app-name>/web

And release it straight to Heroku:

$ heroku container:release -a <app-name> web

Your app should now be successfully up and running on Heroku.

The GitLab Pipeline

In this paragraph, we’re going to configure the pipeline piece on GitLab so that we can continuously deploy our app. Here follows the .gitlab-ci.yml file that I have configured for my repository.

stages:
  - build
  - release

build_image:
  only:
    - master
  stage: build
  variables:
    BUILDAH_FORMAT: "docker"
  script:
    - dnf install -y nodejs
    - curl | sh
    - sed -i '/^mountopt =.*/d' /etc/containers/storage.conf
    - buildah bud --iidfile iidfile -t cd-alediaferia:$CI_COMMIT_SHORT_SHA .
    - buildah push --creds=_:$(heroku auth:token) $(cat iidfile)

release:
  only:
    - master
  image: node:10.17-alpine
  stage: release
  script:
    - apk add curl bash
    - curl | sh
    - heroku container:release -a cd-alediaferia web

In the above snippet we have defined two jobs: build_image and release.


build_image

This job specifies how to build our Docker image. If you look closely, you’ll notice that I’m not using Docker itself but Buildah. Buildah is an OCI-compliant container building tool that is capable of producing Docker images with some minor configuration.


release

This job performs the actual release by pushing the image to your Heroku app.

Additional configuration

Before trying our pipeline out, let’s configure the HEROKU_API_KEY variable so that it can get picked up by the heroku CLI we use in the pipeline definition.

Pipeline Variable Setting
GitLab pipeline variable setting

Pushing to GitLab

Now that we have set everything up we are ready to push our code to the deployment pipeline.

GitLab pipeline in action

Let’s have a look at the build step that GitLab successfully executed.

GitLab pushing to the Heroku Registry

The first line uses buildah to build the image. It works pretty much like docker and I’ve used --iidfile to export the Image ID to a file that I then read from the command-line in the subsequent invocation.

The second line simply pushes to the Heroku Registry. Notice how easily I can log in by doing --creds=_:$(heroku auth:token): this tells buildah to use the token provided by Heroku to log into the registry.

The deployment job, finally, is as easy as:

$ heroku container:release -a cd-alediaferia web


My app is finally deployed, and everything happened automatically after my push to master. This is awesome because I can now continuously deliver my changes to production in a pain-free fashion.

My successfully deployed app

I hope you enjoyed this post. Let me know in the comments and follow me on Twitter if you want to stay up-to-date about DevOps and Software Engineering practices.

Writing a GraphQL DSL in Kotlin

Reading Time: 3 minutes

I’ve recently spent some time testing a GraphQL endpoint against a few queries. At the moment I’m keeping my queries as multi-line strings but I was wondering:

How hard would it be to build a GraphQL query DSL in Kotlin?

I thought this could be a good opportunity to become more familiar with Kotlin DSL capabilities.

Here’s what I’ve got so far.

query("theQuery") {
    "allUsers"("type" to "users", "limit" to 10) {
        select("address") {
            select("street"("postCode" to true))
        }
    }
}

The above snippet produces the following result:

query theQuery {
    allUsers(type: "users", limit: 10) {
        address {
            street(postCode: true)
        }
    }
}

The main challenges I have faced up to this point have been around supporting:

  • any string to be used as the root field of the query (e.g. "allUsers")
  • nested selection of fields
  • a map-like syntax for field arguments (I’ve settled for the to method for now)

Any String is a Field

As you can see from the above example, it is possible to start the root field declaration using a string, followed by the fields selection:

"allUsers" {
    select("address")
}

I’ve achieved that thanks to the Invoke operator overloading support. Read on to find out how I have implemented it.

String.invoke to the rescue

The incredibly powerful extensions support helps me define my own implementation of invoke on a String.

operator fun String.invoke(block: Field.Builder.() -> Unit) =
    Field.Builder().apply {
        name = this@invoke // assuming the builder exposes a name property for the field
        block()
    }

This way, any String instance can be turned into a Field.Builder by passing a block to the invoke operator (). Additionally, Kotlin’s compact lambda syntax saves us from having to explicitly use open and close parentheses, making the result a little more readable.

Select-ing sub-fields

"allUsers" {
    select("address")
}

Inside the declared root field, a sequence of select instructions informs the current field builder about which sub-fields we are interested in. The way this is achieved is by letting the compiler know that we are in the context of a Field.Builder and that any method specified in the block has to be resolved against it. This is possible thanks to function literals with receiver.

Function literals with receiver

This is probably the most useful feature Kotlin has to offer when it comes to building DSLs.

operator fun String.invoke(block: Field.Builder.() -> Unit)

The block argument has been declared as Field.Builder.() -> Unit.
As we can see from the docs:

[…] Kotlin provides the ability to call an instance of a function type with receiver providing the receiver object.

Function literals with receiver – Kotlin reference

What this means is that I can invoke the block with the current Field.Builder instance as the receiver, resulting in the select invocations being resolved against it.

Field arguments

When it comes to specifying field arguments, I’ve had to settle for that not-so-pretty to syntax.

"type" to "users", "limit" to 10

I still think it’s a good compromise considering that Kotlin doesn’t offer much more when it comes to map-building syntax.

"allUsers"("type" to "users", "limit" to 10) {
    select("address") {
        select("street"("postCode" to true))

The to method that allows for that comes from the standard library.

public infix fun <A, B> A.to(that: B): Pair<A, B> = Pair(this, that)

Note that the infix keyword is what allows for the simplified receiver method argument notation.

Finally, a slightly more complicated definition of String.invoke accepts instances of Pair<String, T>, allowing for the to syntax to be used when specifying field arguments. The explicit String type on the left-hand side helps keep it all a little more robust.

operator fun <T> String.invoke(vararg args: Pair<String, T>, block: (Field.Builder.() -> Unit)? = null): Field.Builder
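To make the moving parts a bit more concrete, here is a compact, self-contained sketch of how a builder along these lines could be put together. It is illustrative only: the names and details differ from the actual graphql-forger implementation, and the query helper here takes the root field as a second argument rather than a block, to keep the sketch short.

```kotlin
class Field(
    private val name: String,
    private val args: List<Pair<String, Any?>>,
    private val children: List<Field>
) {
    class Builder(private val name: String) {
        private val args = mutableListOf<Pair<String, Any?>>()
        private val children = mutableListOf<Builder>()

        fun arguments(vararg pairs: Pair<String, Any?>) { args += pairs }

        // select() accepts either a field name (with an optional nested block)...
        fun select(fieldName: String, block: Builder.() -> Unit = {}) {
            children += Builder(fieldName).apply(block)
        }

        // ...or a builder produced by the String.invoke extension below
        fun select(builder: Builder) { children += builder }

        fun build(): Field = Field(name, args.toList(), children.map { it.build() })
    }

    // Renders the field and its children in GraphQL syntax
    fun render(indent: String = ""): String {
        val renderedArgs =
            if (args.isEmpty()) ""
            else args.joinToString(prefix = "(", postfix = ")") { (k, v) ->
                if (v is String) "$k: \"$v\"" else "$k: $v"
            }
        val body =
            if (children.isEmpty()) ""
            else " {\n${children.joinToString("\n") { it.render("$indent    ") }}\n$indent}"
        return "$indent$name$renderedArgs$body"
    }
}

// Any string can open a field declaration, optionally with arguments and a selection block
operator fun <T> String.invoke(
    vararg args: Pair<String, T>,
    block: (Field.Builder.() -> Unit)? = null
): Field.Builder = Field.Builder(this).apply {
    arguments(*args)
    block?.invoke(this)
}

fun query(name: String, root: Field.Builder): String =
    "query $name {\n${root.build().render("    ")}\n}"
```

With this in place, query("theQuery", "allUsers"("type" to "users", "limit" to 10) { select("address") { select("street"("postCode" to true)) } }) renders a string equivalent to the query shown at the top of the post.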

Wrapping up

As you can see, I’m not a DSL expert (at all!) but this is a fun experiment to play with. You can follow my work at the graphql-forger repo. Please, feel free to contribute by opening issues or pull requests.

I hope you have enjoyed the post and learnt something new about Kotlin.

Testing LiveData on Android

Reading Time: 3 minutes

Testing LiveData represents an interesting challenge due to the peculiarities of its technology and the way it eases development for your Android app.

I’ve recently started building an Android app to keep myself motivated on my journey to learn Kotlin. My most recent experience has been with Architecture Components and this brief blog post, in particular, will focus on unit testing your DAO when using LiveData.

What is LiveData?

LiveData is a lifecycle-aware, observable data holder that will help you react to changes in your data source. In my case, I’m using it in combination with Room to make sure my app reacts to new data becoming available in my database.

Our simple DAO

For this blog post let’s just pretend we have a very simple DAO that looks like the following:
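A minimal Room DAO along those lines, where the Post entity and the table name are placeholders:

```kotlin
import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.Insert
import androidx.room.Query

@Dao
interface PostDao {
    // Room wraps the query result in LiveData and notifies observers on changes
    @Query("SELECT * FROM post")
    fun getAll(): LiveData<List<Post>>

    @Insert
    fun insert(post: Post)
}
```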

This is just a DAO that helps you fetch posts.

As you can see, the return type for the function is not just a plain List<Post> but it wraps it in a LiveData instance. This is great because we can get an instance of our list of posts once and then observe for changes and react to them.

Let’s test it

The Android Developer documentation has a neat example on how to unit test your DAO:
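Adapted to our DAO, it would look roughly like this (AppDatabase and the Post constructor are placeholders):

```kotlin
import android.content.Context
import androidx.room.Room
import androidx.test.core.app.ApplicationProvider
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.After
import org.junit.Assert.assertEquals
import org.junit.Before
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class PostDaoTest {
    private lateinit var db: AppDatabase
    private lateinit var postDao: PostDao

    @Before
    fun createDb() {
        val context = ApplicationProvider.getApplicationContext<Context>()
        db = Room.inMemoryDatabaseBuilder(context, AppDatabase::class.java).build()
        postDao = db.postDao()
    }

    @After
    fun closeDb() {
        db.close()
    }

    @Test
    fun insertAndGetPosts() {
        postDao.insert(Post(title = "Hello"))
        // value is still null here: no observer is registered yet
        assertEquals(1, postDao.getAll().value?.size)
    }
}
```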

This pretty simple test has the aim of testing that the data is being exposed correctly and that, once the post is added to the database, it is reflected by the getAll() invocation.

Unfortunately, by the time we are asserting on it, the value of the LiveData instance will not be populated and will make our test fail. This is because LiveData uses a lifecycle-oriented asynchronous mechanism to populate the underlying data and expects an observer to be registered in order to inform about data changes.

Observe your data

LiveData offers a convenient observe method that allows for observing the data as it changes. We can use it to register an observer that will assert on the expected value.

The observe method has the following signature:

void observe(LifecycleOwner owner, Observer<T> observer)

It expects an observer instance and the owner for its life-cycle. In our case, we’re only interested in keeping the observer around just enough to be able to assert on the changed data. We don’t want the same assertion to be evaluated every time the data changes.

Own your lifecycle

What we can do, then, is build an observer instance that owns its own life-cycle. After handling the onChange event we will mark the observer life-cycle as destroyed and let the framework do the rest.

Let’s see what the observer code looks like:
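Something along these lines does the job:

```kotlin
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.LifecycleOwner
import androidx.lifecycle.LifecycleRegistry
import androidx.lifecycle.Observer

class OneTimeObserver<T>(private val handler: (T) -> Unit) : Observer<T>, LifecycleOwner {
    private val registry = LifecycleRegistry(this)

    init {
        // Move straight to a state in which LiveData will deliver updates to us
        registry.handleLifecycleEvent(Lifecycle.Event.ON_RESUME)
    }

    override fun getLifecycle(): Lifecycle = registry

    override fun onChanged(t: T) {
        handler(t)
        // Destroying our own lifecycle makes LiveData remove this observer
        registry.handleLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    }
}
```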

This observer implementation accepts a lambda that will be executed as part of the onChange event. As soon as the handler is complete, its own lifecycle will proceed to mark itself as ON_DESTROY which will trigger the removal process from the LiveData instance.

We can then create an extension on LiveData to leverage this kind of observer:

fun <T> LiveData<T>.observeOnce(onChangeHandler: (T) -> Unit) {
    val observer = OneTimeObserver(handler = onChangeHandler)
    observe(observer, observer)
}

Let’s test it again

A couple of things to notice this time.

First of all, we’re taking advantage of an InstantTaskExecutorRule. This is a helpful utility rule that takes care of swapping the background asynchronous task executor with a synchronous one. This is vital to be able to deterministically test our code. (Check this out if you wanna know more about JUnit rules).

In addition to that, we’re now leveraging the LiveData extension that we have written to write our assertions:

postDao.getAll().observeOnce {
    assertEquals(0, it.size)
}

We have just asserted in a much more compact and expressive way, leaving all the details inside our observer implementation. We are now asserting on the LiveData instance deterministically, which makes this kind of test easier to read and write.


I hope this post will help you write tests for your DAO more effectively. This is one of my earliest experiences with Kotlin and Android: please, feel free to comment with better approaches to solve this.

Follow me on Twitter for more!

Cover photo by Irvan Smith on Unsplash

Business outcome language: an introduction for software engineers

Reading Time: 4 minutes

Using a business outcome language helps keep the problem definition focused on the value that you should deliver to your customers. Let’s explore together how to make sure our language is not polluted by technical details.

As a technological leader at your company, you are a citizen of two worlds: the business world and the implementation world. Each world speaks its own language, so it’s vital that they communicate clearly and unambiguously.

Your first days in the implementation world

Software engineers typically start their career in the implementation world, with few contact opportunities with the business world.

As a software engineer what excites you is the challenge of solving a problem. You crave more problems and you feel more satisfied as you solve them. Your brain is triggered by the problem definition and it starts spitting out all of the possible ways you can achieve the expected result.

The problem definition at this stage is probably in an implementation-oriented language with little indication as to what business outcome the company is looking for. It’s likely that you don’t care about that though because you only care about the coding challenge.

Your language evolves around the how

In the implementation world you develop a language that is specific to the solution to the problems and the tools you have to solve them.

You communicate with your peers by focusing on how to achieve the solution. The discussions often develop around the technical details of the existing software.
The foreigners living outside of the implementation world look at you with surprise and curiosity: your words don’t really make much sense from the outside.

The first contact with the foreigners

There comes a point in your career when you have to interact with the people from the business world and work together.

Depending on the approach your company has for building product, this interaction can often be an opportunity for the two worlds to collaborate and find a sensible outcome to pursue.
The most effective way of doing this is through a common language: the business outcome language.

Certainly, this is often challenging for the implementation people because their native language is not well understood outside their world. Using the implementation language comes naturally for them. Explaining how something can or cannot be achieved through implementation details feels incredibly easy.

The trouble with the implementation language

Let’s see a few examples of why using the implementation language for every interaction can be counter-productive.

We can’t do that!

dilbert, technical debt, comic strip, business outcome language

“What you’re asking is impossible to do by that date. The amount of technical debt we have accumulated means that we need 3 months to refactor the service classes to be capable of supporting this new functionality. We also have to update to the latest version of the database in order to support that change…”

– An implementation world native

While those might all be valid reasons as to why the specific change in question is hard to achieve, they are contributing to setting the wrong tone for the conversation.

Raising very specific technical concerns is going to shift the conversation to an explicit technical negotiation.
The people from the business world will inevitably, and often subconsciously, try to petition the implementation people for a middle-ground solution. This achieves a similar outcome but overcomes some of the technical limitations that have been raised.

When the negotiation is successful it might feel like a WIN-WIN situation but in reality, it isn’t.

First of all, the business people might have just agreed to build functionality that is polluted by workarounds and has evolved around technical constraints instead of around the customer.
Similarly, the implementation people might have just accepted to build more technical debt into their world.

I speak your language too, I’ll tell you exactly what you have to do!

dilbert, feature creep, comic strip, business outcome language

“We need to add a new button to the page so that the users can export the data as PDF.”

– A business world native speaking the implementation language

The above example is outlining exactly which technical solution should be used without even specifying what the problem to be solved is.
Imagine the following problem definition, instead: the users want to share the insights data with their colleagues with minimum effort.

While an “Export as PDF” button might try and address the issue, it only partially solves it. Exporting as PDF, saving it to your disk, then finding it again to attach to an e-mail doesn’t exactly mean minimum effort.

Having clarified that the users want to share insights with minimum effort helps the engineers come up with a much more effective solution. One possibility is providing the users with a “Share” call to action that sends the insights to the desired recipients, without having to export anything.

Clarifying what the problem is instead of mandating a specific solution will let the engineers and the designers explore the best approach within the current constraints.

A technical language anchors down the thinking

The implementation language is a powerful way of describing solutions. It is very detailed and it helps engineers be unambiguous as to what has to be done.
Its power, though, makes it dangerous if used in the wrong context.

Bringing too many implementation details into the discussion anchors the thinking to the status quo, potentially preventing you from identifying the right value for the users of your product. It is important to defer the implementation evaluation to the very end in order to let the thinking evolve and articulate in the space of the business outcome that the company wants to achieve.

Help free the language from the technical constraints

As a technological leader, you should be fluent in both languages. Take advantage of this and make sure the conversation stays away from implementation details by ensuring that all the people involved speak a business outcome language.

Understand the technical limitations and concerns raised by your compatriots and translate them into the appropriate language, weighing what is really worth including in the conversation.

Make sure not to preclude innovation and avoid constraining the thinking to the status quo of technical limitations. This exercise will help you evaluate many more implementation opportunities than before.

Cover photo by Headway on Unsplash