Our latest project at work was another Go API, and to ease debugging and issue tracking, I like to inject the app version into the logs.
It helps a lot with monitoring, especially when rolling out a new version (canary, 50% traffic deployment, etc.).

Preamble

Pre-2018, I hardcoded the version in an app constant which I would increment as part of my release process. It's easy and quick, but the catch is that you must not forget to do it, or the versioning will drift out of sync over time.

Plus, since this has to be done before each new release, it adds an extra round of CI, which is fine for small projects but not ideal for fast deployment cycles.
Fortunately, our app is written in Go, so CI doesn't take more than ~2 minutes (container shipping included).

Version syncing sucks

With Dockerized applications, tracking the version is even easier thanks to the nature of Docker image naming: tags!

However, what happens the day you ship source code versioned v1.9.0-dev but tag your release v2.0.0? Darn...

Versioning consistency is completely broken, and you end up with a container running code whose version doesn't match the Docker image's.

So, which version do you trust? The Docker image tag or the logged one? How misleading...

One version, all along

In our latest app, the authoritative version issuer is Git and nothing else.

When a new tag is pushed to Git, our CI is triggered: it builds and tests the application under the tag name, then builds, packs, and tags a Docker image from the Git tag name and ships it to our container repository.

In the end, the built and tested source code carries no version itself but is packed in a properly tagged Docker image. This is great because the only moment you have to take care of tracking your app version is when you create your Git tag.

No more version syncing, no more version inconsistency.

Retrieve app version from the app

We recently switched from CoreOS to an AWS ECS container management/orchestration stack. Before that, I used to hardcode my app version in my source code, so I never looked into whether and how we could do container inspection from inside a container running on CoreOS.

AWS ECS allows container metadata retrieval from inside a running container, which is pretty damn cool. You see where this is going, don't you?

The only thing you have to do is retrieve the container image name, parse the image tag out of it, and there you have your genuine version, ready to use:

package main

import (
	"encoding/json"
	"io/ioutil"
	"os"
	"strings"
)

// ResolveVersion returns the app version from either the VERSION envvar
// or extracted from the AWS ECS_CONTAINER_METADATA_FILE content
func ResolveVersion() string {
	fallback := os.Getenv("VERSION")

	mf := os.Getenv("ECS_CONTAINER_METADATA_FILE")
	if mf == "" {
		return fallback
	}

	data, err := ioutil.ReadFile(mf)
	if err != nil {
		return fallback
	}

	metadata := struct{ ImageName string }{}

	if err := json.Unmarshal(data, &metadata); err != nil {
		return fallback
	}

	// Guard against image names without a tag (e.g. "repo/app" or
	// "registry:5000/app"), which would otherwise panic on parts[1]
	parts := strings.Split(metadata.ImageName, ":")
	if tag := parts[len(parts)-1]; len(parts) > 1 && tag != "" && !strings.Contains(tag, "/") {
		return tag
	}

	return fallback
}

Instead of throwing an error when the app can't parse the AWS ECS container metadata file, I chose to return a fallback version which I can provide in an environment variable. Even if questionable, it's ideal for testing.

Drawback

If you maintain several released versions of your app, the source code cannot live outside of a Git repository or the container that packs it.
Because the source code itself carries no version, if you use the application outside of those two contexts, it might be tricky to find out which line of code belongs to which version.

I hope this article will inspire you to better handle containerized app versioning, and if you have any feedback or other practices to share, I'd love to read about them.


Joris Berthelot