31 October 2020

Experimentation blues: implementing werf for the greater good (Part 1)

by ~sky


Werf is the GitOps workflow automation tool that doesn’t fail to be confusing.

Which is something to be expected, quite frankly. This tool tries to centralize the “gold standard” of DevOps workflows:

  1. Build a new container image
  2. Publish it to a container registry
  3. Deploy said image to a Kubernetes cluster with a pseudo-chart

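For reference, this is roughly the manual version of that workflow that werf rolls into a single tool (image, chart and release names below are made up purely for illustration):

    # build, publish, deploy "by hand"
    docker build -t registry.example.com/my-app:1.0.0 .
    docker push registry.example.com/my-app:1.0.0
    helm upgrade --install my-app ./chart --set image.tag=1.0.0
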
That’s far too many responsibilities for a single program, yet here I am, writing a post about my experience trying to grok said tool.

So, given a confusing tool, why would I even recommend it? For starters, because it works, which I unfortunately can’t say about many other programs that promise the same results.

On the other hand, we have the recurring issue at my job of having to re-define this exact same workflow on far too many CI/CD systems, so hey, if I can just write a small file and a command, and have that work for basically any CI system, all the better.

As a final selling point, this tool aims to be GitOps-first.

Ok, enough pathetic attempts at being sassy; let’s get into the details:

Installation

Right off the bat, setting up werf is as simple as executing a shell script that installs multiwerf. That’s great: right from the start, we get a version manager for werf itself.

I like that it doesn’t try to do any funky stuff, like attaching itself to your current shell’s configuration or anything of the sort.

Something to consider: the installer expects to place multiwerf in a directory that’s in PATH, so be sure to run the installation command from one. Normally, $HOME/.local/bin should be good enough.
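
For example, something along these lines before running the installer (using the conventional user-local bin directory; adjust to taste):

    # make sure the target directory exists and is on PATH, then install from inside it
    mkdir -p "$HOME/.local/bin"
    export PATH="$HOME/.local/bin:$PATH"   # persist this in your shell profile
    cd "$HOME/.local/bin"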

Ok so, we have multiwerf set up on our system. How do we get the actual werf executable now?

Before getting our hands dirty, you should read up on werf’s release system, which is the first of many confusing headaches you’ll run into.

At the time of writing, the experiment was run with both 1.2 alpha and 1.1 stable. Differences between the two will be pointed out towards the end.
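
With multiwerf in place, grabbing an actual werf binary boils down to asking for a version and a release channel. This is the shape of the invocation as I remember it from the docs, so double-check it against multiwerf’s README:

    # activates werf 1.1 from the stable channel in the current shell session
    type multiwerf && source $(multiwerf use 1.1 stable --as-file)
    werf version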

Project setup

Alright, enough setup durdling; let’s get to the action.

So, given an arbitrary project, 3 things are expected:

  1. A Dockerfile
    1. It doesn’t have to be in the project’s root, but at least one Dockerfile is expected
    2. This is where werf really shines: the build process handles monorepos with multiple projects transparently, for reasons we will mention below
  2. A werf.yaml specification
    1. This has to be in the repository’s root
  3. A .helm directory containing a pseudo-chart
    1. Like the werf specification, this must be in the project’s root directory

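Putting those three together, a hypothetical single-service repository could look something like this (every name here is illustrative):

    .
    ├── Dockerfile
    ├── werf.yaml
    └── .helm/
        ├── values.yaml
        └── templates/
            └── deployment.yaml
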
We already know about the Dockerfile, so I won’t go into detail here.

The werf specification is a YAML file that, at a minimum, names the project, pins a config version, and declares the image(s) to build.
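
As a minimal sketch, reusing this post’s project name and assuming a single Dockerfile-built image (see werf’s docs for the full set of directives):

    project: meli-authenticator
    configVersion: 1
    ---
    image: app
    dockerfile: Dockerfile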

The final piece of this project setup is the “pseudo-chart”… pseudo?

This should be the make-or-break point for you, reader: yes, werf centralizes the Docker + Kubernetes workflow, but it breaks some compatibility in order to do so. Your project’s Helm chart won’t be usable outside of werf (not without some minor extra work, at least).[1] In exchange, the previously mentioned workflow gets simplified in a way that’s incomparable to most other solutions, and some really nifty utilities (see kubedog) are available from the get-go.

Convergence: the new workflow

Tracing back a couple of paragraphs, we mentioned that werf centralizes the build => publish => deploy workflow. Naturally, these would be 3 separate steps (2 in 1.2), each with its own set of CLI options and arguments.

In order to keep things simple, werf provides the converge command, which combines all of these steps into one, centralizing their CLI options as you would expect.

From now on, converge will be the single source of truth for every stage of our project’s workflow: it takes the publish repository from its command-line options, along with the deployment namespace, release name, environment, Helm options… you get the idea.

If CLI flags are not an option for whatever reason, werf also provides an equivalent environment variable for each of them, all documented on their website.
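
As a sketch of that, assuming the WERF_&lt;OPTION&gt; naming convention holds for every flag (the values below are hypothetical; verify the exact variable names with werf converge --help):

    # each variable mirrors a converge CLI flag
    export WERF_ENV=dev
    export WERF_NAMESPACE=default
    export WERF_RELEASE=my-app
    export WERF_REPO=registry.example.com/my-app
    werf converge   # picks everything up from the environment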

To illustrate, this is the command I used to apply this workflow to a simple-ish project (no Helm values, no Docker build arguments):

    werf converge \
      --repo <account id>.dkr.ecr.us-east-1.amazonaws.com/meli-authenticator \
      --repo-implementation ecr \
      --env dev \
      --namespace default \
      --release meli-authenticator \
      --atomic true

If you run this against the project, you will see the following, in order:

  1. Build output for the given Docker image(s)
  2. Helm release output:
    1. Pod logs for all related replication controllers
    2. Resource release status, refreshing every 5 seconds

Conclusions

Werf brings a lot to the table, at the cost of compatibility with other tools like ad-hoc Helm.

The next part will evaluate the process of integrating werf into a Jenkins pipeline, where everything might well fall to pieces.


  1. 2020-12-31: I’ve been contacted by Flant regarding this bit. Paraphrasing their comments:

    werf v1.2 is compatible with Helm 3 (and werf v1.1 is compatible with Helm 2) in terms of storing releases and displaying the list of installed releases. For example, Helm will display the releases installed with werf (in both v1.1 and v1.2).

    The only incompatibility is the description of the chart itself that is placed in Git. Thus, Helm can’t deploy a werf chart, as it has additional constructions (werf extends the original Helm chart). However, it also means that it works the other way around, i.e. you can deploy a regular Helm chart with werf. Finally, werf also has the “werf helm” subcommand, which mirrors Helm’s interface.

    Huge thanks to Dmitry Shurupov and the Flant team for bringing this to my attention! 
