CI/CD with Azure Pipelines

bitsofinfo · Published in DevOps Dudes · Dec 15, 2019

If you use Azure DevOps, one of the CI/CD offerings within that service is Azure Pipelines. I’ve been prototyping various CI/CD offerings on the market, and Azure Pipelines was one of them.

If you look out into the world of CI/CD it can be a vast, intimidating and confusing array of platforms and offerings that are a mix of open-source, closed-source, self-hosted and cloud based. However, in my experience you can generally categorize them into two different camps. The first camp is what I’ll call a “modern architecture”, whereby the orchestration, scheduling and execution environment is highly portable and leverages first class constructs provided by an underlying platform like Kubernetes to do the bulk of the scheduling and orchestration work.

The second camp is what I’ll call the “traditional architecture”, which has a more battle tested history but is based on older execution environment concepts such as masters/schedulers and worker roles that are fulfilled by a wide variety of physical things such as VMs, containers, etc. The software that manages the scheduling and orchestration, however, is highly proprietary and custom in nature. Azure Pipelines falls into the latter camp of what I’d call a traditional architecture.

The basics

First off, if you are using Azure DevOps you will have access to Azure Pipelines. To be clear, this article is about the Azure hosted, Azure DevOps based cloud offering, not Azure DevOps Server.

Azure Pipelines can be hooked up to any Git repository and is not limited to just Azure Git repos. Unfortunately Azure Pipelines can be a bit confusing right off the bat as there are two “flavors” of pipelines in Azure:

Classic mode:

When using this mode, you craft pipelines using a visual editor within the Azure DevOps web interface via the “classic editor”. You select task types, fill out form fields for their inputs/outputs, etc. This configures a pipeline that can react to things that happen in a Git repo, or run on demand or on a schedule. From what I can tell this is an older mode that Azure appears to be on the way to deprecating in favor of the YAML mode, and it appears to be functionality ported into Azure from other products like Azure DevOps Server and Team Foundation Server. Note that within “Classic Mode” there are two further sub-modes, one for “build” (CI) and the other for “release” (CD), which have different levels of support for the overall “pipeline” feature set. I would avoid using “classic” mode for new pipelines.

YAML mode:

The YAML mode is likely much more familiar to you from the get go. In YAML mode, each configured Git repository can declare an azure-pipelines.yml file that defines a pipeline, its stages, steps, tasks and all of their behaviors. Anyone who has used Travis or Jenkins to declare CI/CD pipeline behavior in configuration files should feel right at home here. The YAML based way of configuring Azure Pipelines appears to be Azure’s preferred way to start new CI/CD projects. You can hook up Azure Pipelines to any Git repository (not just Azure Git repositories) and if it finds an azure-pipelines.yml file in it, it will automatically manage an associated pipeline as you have it configured. I would start all new Azure Pipeline projects by using “YAML” mode.
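To give a feel for it, here is a minimal sketch of what an azure-pipelines.yml might look like; the branch name, agent image and test script below are just placeholder assumptions, not anything prescribed by Azure:

```yaml
# Minimal azure-pipelines.yml sketch: build on pushes to master using a
# Microsoft hosted agent, then run a couple of script steps.
trigger:
  branches:
    include:
      - master                  # branch name is just an example

pool:
  vmImage: 'ubuntu-latest'      # Microsoft hosted agent image

steps:
  - script: echo "build your project here"
    displayName: Build
  - script: ./run-tests.sh      # hypothetical test script in your repo
    displayName: Test
```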

Pricing

With Azure Pipelines you are pretty much paying for the costs to run your jobs. Pipeline executions are run by “agents”, and each agent can run at most one invocation of a pipeline at any one time. The more agents you have, the more parallelism you can get for concurrent jobs. Out of the box you get either one free Microsoft hosted agent (run on demand, capped at 1800 minutes per month) or one free “self hosted” agent (that you would run as a container or VM in your Azure cloud and pay for) with no limit on the number of minutes per month.

If you want more than that, it’s an additional $40/month for each additional Microsoft hosted agent, or $15/month for each additional self hosted agent. So… at the end of the day you are basically trading the overhead of managing agent installs for cost savings, while paying for the cost of the underlying hardware if you self host.
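In the YAML, choosing between hosted and self hosted agents comes down to which pool a job targets. A rough sketch (the self hosted pool name below is an assumption; it would be whatever you named your agent pool):

```yaml
jobs:
  - job: on_hosted_agent
    pool:
      vmImage: 'ubuntu-latest'        # a Microsoft hosted agent image
    steps:
      - script: echo "running on a Microsoft hosted agent"

  - job: on_self_hosted_agent
    pool:
      name: 'my-self-hosted-pool'     # hypothetical name of a pool of self hosted agents
    steps:
      - script: echo "running on a self hosted agent"
```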

If you need an artifact repository that your CI steps can push to (i.e. somewhere to store Docker images, Maven artifacts, NPM packages, etc.), they also have an offering for that.

The meat: Pipelines

Ok great, so what is it like to actually use it?

[Screenshot] Simple example showing an azure-pipelines.yml file that triggers builds that require an approval before deploying to a target “environment”.

Well honestly, there is nothing too remarkable about this platform. You create a YAML file to declare your stages, steps and tasks, check it into Git, then let Azure Pipelines execute it, and upon completion you can view the STDOUT/STDERR that the job emitted. You can define “environments”, which are basically arbitrarily named constructs that your pipelines interact with, and you can “gate” these environments with approvals, checks, gates, etc. which automatically trigger prior to any stage that references the named environment.
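As a rough sketch (the stage, job and environment names here are all just placeholders), a pipeline with a build stage and a gated deploy stage might look something like this:

```yaml
stages:
  - stage: build
    jobs:
      - job: build
        steps:
          - script: echo "build and test here"

  - stage: deploy
    dependsOn: build
    jobs:
      # A "deployment" job targets a named environment; any approvals/checks
      # configured on that environment in the web UI must pass before it runs.
      - deployment: deploy_app
        environment: 'production'     # environment name is just an example
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to the target environment here"
```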

Your pipeline YAML can utilize a wide variety of pre-defined tasks for invoking things like Git, Docker, Helm, etc.; and if there isn’t a pre-defined task plugin for what you need… well, just use the good old bash task and you can do just about anything you want.
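For example (the registry service connection and repository names below are assumptions), a pre-defined Docker task next to a plain bash fallback might look like:

```yaml
steps:
  # Pre-defined Docker task; 'my-registry' would be a Docker registry
  # service connection defined in the web UI (assumed name)
  - task: Docker@2
    inputs:
      containerRegistry: 'my-registry'
      repository: 'my-app'
      command: buildAndPush
      tags: $(Build.BuildId)

  # ...and when no pre-defined task fits, fall back to plain bash
  - task: Bash@3
    inputs:
      targetType: inline
      script: |
        echo "anything you can script, you can run here"
```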

Just like any other CI/CD system, there are certain things that you generally do NOT define within the pipeline configuration that lives alongside your code. Examples are sensitive secrets (passwords, etc.) and the definitions of well known resources that are part of your build system (think databases, artifact repositories or target clusters). In Azure Pipelines you can define these things in various ways, such as:

  • Secure Files: Secure Files are just that: named references to files with sensitive data that can be referenced, automatically mounted, and made available to your tasks.
  • Service Connections: Service Connections are just named “connections” to things like Kubernetes (AKS) clusters (think predefined kube config settings/certs), artifact repositories, docker registries, SSH connection info etc.

These kinds of pipeline resources must be defined outside of the YAML, inside the Azure DevOps web UI; you manage them there, but reference them by name from your pipeline YAML. For example, if you want to deploy via Helm to an AKS cluster, you can use the Helm task to reference a Kubernetes service connection.
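A sketch of what that might look like in the YAML; the secure file name, service connection name, chart path and release name are all assumptions that would have to match what you set up in the web UI:

```yaml
steps:
  # Secure File: downloads a file managed in the Azure DevOps Library and
  # exposes its local path as $(clientCert.secureFilePath)
  - task: DownloadSecureFile@1
    name: clientCert
    inputs:
      secureFile: 'client-cert.pem'                   # name as defined in the web UI (assumed)

  # Helm task referencing a Kubernetes service connection defined in the web UI
  - task: HelmDeploy@0
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceConnection: 'my-aks-cluster'   # service connection name (assumed)
      namespace: 'default'
      command: upgrade
      chartType: FilePath
      chartPath: './charts/my-app'
      releaseName: 'my-app'
```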

There is also a concept of Secure Variables, which are variables that contain secrets. Similar to the resources previously described, you have to declare/manage these variables outside of the pipeline YAML, within the web interface. See the screenshot below:

“Secure variables” can be used like any other defined variable in YAML… but they are NOT declared there in the YAML; you have to manage the variables and their names in a GUI dialog for the pipeline attached to the Git repo where the YAML file resides. It’s easy to forget about these, and a newcomer to the project may be completely unaware they even exist.
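One gotcha worth noting: secret variables are not automatically exported into the environment of script steps, so you map them explicitly where needed. A sketch (the variable and script names are assumptions):

```yaml
steps:
  - script: ./publish.sh                          # hypothetical script that needs the secret
    env:
      REGISTRY_PASSWORD: $(registryPassword)      # 'registryPassword' assumed to be a secret variable defined in the UI
```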

Interesting features

One of the more useful and interesting features I encountered in this platform is Logging Commands, which give the YAML author a way to make their pipeline a bit more dynamic across various stages by enabling them to emit specially crafted strings to STDOUT, which are parsed and acted upon by the Azure Pipelines execution engine. With logging commands you can do things like set dynamic variables, modify service connections on the fly, attach files to the build, manage artifacts, etc. It’s actually a pretty useful feature and I used it while prototyping.
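For instance, the ##vso[task.setvariable] logging command lets one step set a variable that later steps in the job can reference (the variable name here is just an example):

```yaml
steps:
  # Emit a logging command to STDOUT; the execution engine parses it and
  # sets 'imageTag' as a pipeline variable for subsequent steps.
  - bash: |
      echo "##vso[task.setvariable variable=imageTag]build-$(Build.BuildId)"
    displayName: Compute image tag

  # Consume the dynamically set variable in a later step
  - bash: |
      echo "Deploying image tag: $(imageTag)"
    displayName: Use the computed variable
```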

[EDIT:] Recently I wrote another article on this specific feature and how I found it useful as a way to permit developers to customize pipeline behavior through “arguments” embedded in Git commit messages. Definitely check out the logging commands feature; it’s unique and may prove useful for certain situations.

Slack integration

If you would like your pipeline executions to notify you when pipelines are triggered, report their outcome, and provide basic approval capability, you can use the Azure Pipelines Slack App, but don’t expect too much in regards to interaction. It’s pretty much limited to notifying you that a build has started, reporting the result, and a very basic “yes” or “no” interactive button for approvals. Also note that if you have many different repositories, the way the Slack integration relates to your Azure DevOps organization may lead to one channel being flooded with hard to follow message/build lineage, due to the app’s lack of thread usage. All of the information can easily be lost or hard to follow.

Final thoughts

Overall Azure Pipelines works fine and may be an attractive option if your team is already heavily embedded in the Azure ecosystem, and in particular makes use of other Azure DevOps related services such as their Git repositories, container registry, artifact stores, etc. Azure Pipelines has a ton of task plugins and enough features to handle most common CI to CD flows, and will likely work for most situations. As a downside, there is some confusion and lack of clarity in the documentation with regards to the YAML vs “classic” modes of operation. References to these two different modes are woven, often confusingly, throughout the documentation, sometimes leaving you wondering whether the feature being described is available in your “mode”.

Keep in mind that the YAML syntax for Azure Pipelines is proprietary and not portable to other CI/CD platforms. This is not unique per se, just be aware of it. You will most definitely have some level of vendor lock-in by using this product. Want to move to Jenkins, TeamCity or [insert other CI/CD product here]? You will have to port your azure-pipelines.yml content to each respective format, as well as the equivalents of service connections, secrets and other resources. This is certainly not unheard of, as most CI/CD platforms have their own proprietary non-portable configuration format; the difference is that unlike open-source/self-hosted options whose execution engine can be deployed anywhere (any cloud), cloud specific solutions like Azure Pipelines can never run anywhere else… your execution environment is non-portable as well as its configuration.

Final word? If you are fully committed to Azure as your platform and don’t see that changing anytime soon, go with it. Otherwise, I’d take a look at options less coupled to a single cloud vendor.

Originally published at http://bitsofinfo.wordpress.com on December 15, 2019.
