
How To Configure an Azure DevOps Pipeline


When you search online, you will find various blog posts, documentation, and tutorials on Azure DevOps. All of these items are valuable resources, but rarely does one walk you through a real-world scenario. Many skim over security, leaving passwords in clear text for simplicity, or deliver an end product that essentially does nothing. Let's change that.


In this article/tutorial, you're going to learn from soup to nuts how to build a real Azure DevOps release pipeline that automates infrastructure. More specifically, you're going to learn how to use Azure DevOps to build a continuous deployment pipeline to provision Azure virtual machines.


By the end of this project, you will have a fully-functioning Azure pipeline. From a single GitHub repo commit, it will:

  • Build a temporary Azure resource group

  • Provision an Azure VM via an ARM template

  • Set up said ARM template in a CI/CD pipeline

  • Upon any change to the template, kick off a template validation test

  • Deploy the ARM template to Azure

  • Test the deployed infrastructure

  • Tear down all Azure resources

Let's dive right in!


Project Overview

This project is going to be broken down into six main sections. They are:


Azure resource preparation

In this section, you will learn how to set up all of the prerequisite resources in Azure. Here you will:


  • Create an Azure service principal for various tasks in the pipeline

  • Set up an Azure Key Vault with secrets for the pipeline to use

  • Set appropriate access policies for ARM deployments and pipeline usage

Azure DevOps preparation


Once you have all of the Azure resources set up, it's time to prepare Azure DevOps for your pipeline. In this section, you will:

  • Install the Pester Test Runner build task in your Azure DevOps organization

  • Create service connections to provide Azure DevOps with required resource access

  • Create an Azure DevOps variable group linking a key vault to access Azure Key Vault secrets

Script/template overview

There are various artifacts that go with this project including the ARM template to build the server and the Pester tests. In this section, we'll briefly cover what the template is provisioning and what exactly Pester is testing in the pipeline.

Pipeline creation

This section is where the real fun begins. You will begin setting up the actual pipeline. Here you'll learn how to set up this entire orchestration via a single YAML file.

You'll be building the pipeline using the Multi-Stage Pipeline UI experience. As of this writing, this feature is in Preview.

Pipeline demonstration

Once the pipeline is built, you need to see it run! This section is where you'll learn how to trigger the pipeline and simply watch the magic happen.

Cleanup

And finally, since this is just a demonstration, you'll get access to a script to tear down everything built during the tutorial.

Does this sound like a lot? It is! But don't worry, you'll learn step by step as you attack each task one at a time.

If you’d like a script with all of the Azure CLI commands used to build this pipeline, you can find it in the ServerAutomationDemo GitHub repo as demo.ps1.

Prerequisites

You're going to learn a lot but you're also expected to come to the table with a few things. If you plan to follow along, be sure you have the following:


  • An Azure DevOps organization - Check out the Microsoft QuickStart guide for instructions on how to do this. In this article, you'll be working with a project called ServerAutomationDemo.

  • A GitHub repo - In this article, you'll be learning from a GitHub repo called ServerAutomationDemo. Sorry, we're not using Azure Repos in this article.

  • A GitHub personal access token - Be sure to create the token with the permissions of admin:repo_hook, all repo and all user options

  • Cloud Shell or PowerShell 6+ if running locally - Examples may work in Windows PowerShell but were not tested. All of the examples will be performed locally via a PowerShell console but the Cloud Shell will work just as well. You will automate building the pipeline.

  • Azure CLI installed (if running locally) - You'll learn how to perform tasks in this article with the Azure CLI. But, the same procedures can also be performed via the Azure Portal, PowerShell or the Azure SDK.

Warning: The actions you're about to perform do cost real money unless you have some Azure credit. The most cost-intensive resource you'll be bringing up in Azure is a VM, but only temporarily.

Before you Start

You're going to be doing a lot of configuration in this tutorial. Before you begin, please be sure you have the following items handy (a snippet for capturing them as PowerShell variables follows the list).


  • The name of the Azure subscription that resources will be deployed to - the examples will use Adam the Automator.

  • The ID of the subscription

  • The Azure AD tenant ID

  • The DevOps organization name - the examples will use adbertram.

  • The region you're placing resources into - the examples will use eastus.

  • The name of the Azure resource group to put the temporary key vault into - the examples will use ServerAutomationDemo.

  • A password to assign to the local administrator account on a deployed VM - the examples will use "I like azure.".

  • The URL to the GitHub repo - the examples will use https://github.com/adbertram/ServerAutomationDemo.
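
If you're following along in a PowerShell console, it helps to capture these values up front so later commands can reuse them. Below is a sketch; the variable names are only suggestions (although $region is referenced in a later az keyvault create command), and the values shown are the example values from the list above.

## Example values from the list above; replace with your own.
$subscriptionName  = 'Adam the Automator'
$subscriptionId    = '<your subscription ID>'
$tenantId          = '<your Azure AD tenant ID>'
$orgName           = 'adbertram'
$region            = 'eastus'
$resourceGroupName = 'ServerAutomationDemo'
$vmAdminPassword   = 'I like azure.'
$gitHubRepoUrl     = 'https://github.com/adbertram/ServerAutomationDemo'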

Logging in with the Azure CLI

Be prepared to do a lot of work with the Azure CLI in the article. I love the Azure PowerShell cmdlets but the Azure CLI is currently capable of doing more DevOps tasks.


Your first task is getting to a PowerShell 6+ console. Once in the console, authenticate to Azure using the command az login. This command will open a browser window and prompt you for your account.


Once authenticated, be sure to set your subscription to the default. Setting it as the default will prevent you from having to specify it constantly.


az login
az account set --subscription 'Adam the Automator'

Preparing Azure Resources

Once you're logged in with the Azure CLI, it's time to get down to business. An Azure Pipeline has many different dependencies and various knobs to turn. In this first section, you're going to learn how to do some setup and prepare your environment for your pipeline.

Installing the Azure CLI DevOps Extension


You'll need a way to build the various Azure DevOps components with the Azure CLI. By default, it doesn't include that functionality. To manage Azure DevOps from the Azure CLI, you'll need to install the DevOps extension.

Luckily, installing the extension is a single line as shown below.








az extension add --name azure-devops
Once the extension has been installed, set your organization as the default to prevent specifying it over and over.







az devops configure --defaults organization=https://dev.azure.com/adbertram
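
If you haven't created the ServerAutomationDemo project in your organization yet, the DevOps extension can do that too. Below is one way to do it (you can also create the project in the Azure DevOps web UI):

az devops project create --name "ServerAutomationDemo"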
Creating the Resource Group


Although the pipeline will be creating a temporary resource group, you should also create one for any resources brought up in this demo. More specifically, this resource group is where you'll create an Azure Key Vault.



az group create --location "eastus" --name "ServerAutomationDemo"
Creating the Azure Service Principal


The next task is to create an Azure service principal. You're going to need an Azure service principal to authenticate to the Azure Key Vault. You'll also use this service principal to authenticate a service connection. Create the service principal for both the key vault and for the eventual ARM deployment as shown below.








$spIdUri = "http://ServerAutomationDemo"
$sp = az ad sp create-for-rbac --name $spIdUri | ConvertFrom-Json

At this point, it’d be a good idea to save the value of $sp.appId somewhere. When you get to building the pipeline later, you will need this!
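For example, you can echo the application ID to the console and, optionally, stash it in a file (the file path below is just an illustration):

## Display the application ID so you can record it for later
$sp.appId

## Optionally save it to a file so it survives closing the console
$sp.appId | Set-Content -Path .\sp-appid.txt
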
You'll notice some PowerShell commands in the examples, e.g. ConvertFrom-Json. Since the Azure CLI only returns JSON strings, it's easier to reference properties once the output is converted to a PowerShell object.

Building the Key Vault


The pipeline in this tutorial needs to reference a couple of passwords. Rather than storing passwords in clear text, let's do it the right way. All sensitive information will be stored in an Azure Key Vault.

To create the key vault, use the az keyvault create command as shown below. This command creates the key vault in the previously created resource group. Also notice the --enabled-for-template-deployment switch. This changes the key vault access policy to allow the future ARM deployment to access the key vault.




az keyvault create --location $region --name "ServerAutomationDemo-KV" --resource-group "ServerAutomationDemo" --enabled-for-template-deployment true
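
The pipeline will later read a couple of secrets from this vault via a variable group. As a rough sketch, you might store the service principal password (returned by az ad sp create-for-rbac) under the name ServerAutomationDemo-AppPw, which the pipeline YAML references later, and the VM local administrator password under a name of your choosing (VmAdminPassword below is only an example):

## Store the service principal password for the pipeline to read later
az keyvault secret set --vault-name "ServerAutomationDemo-KV" --name "ServerAutomationDemo-AppPw" --value $sp.password

## Store the VM local administrator password (example secret name)
az keyvault secret set --vault-name "ServerAutomationDemo-KV" --name "VmAdminPassword" --value "I like azure."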

With the key vault and secrets in place, it's time to link them to the Azure DevOps variable group created during the Azure DevOps preparation. Once in the variable group, click on Link secrets from an Azure key vault as variables. Once you do, you'll be warned that linking will erase all existing variables; click Confirm. You'll see how to do this below. This action is fine since the foo variable was only ever temporary anyway.



Once confirmed, select the ARM service connection and the ServerAutomationDemo-KV key vault created earlier as shown below. Click Add.







Now check both of the secrets created earlier as shown below and click OK and Save to save the changes.






Project Files Overview


If you've made it this far, congratulations! You're now ready to begin building the pipeline. But wait...there's more!


To make building an Azure Pipeline real world, this tutorial builds a pipeline complete with "unit" and "acceptance" tests. This makes the tutorial more interesting but also warrants some additional explanation on what's going on.


In the GitHub repo for this tutorial, you'll find a few files as shown below. Now would be a good time to either clone this repo or build your own from the files.



  • azure-pipelines.yml - The final YAML pipeline

  • connect-azure.ps1 - PowerShell script to authenticate to an Azure subscription

  • server.infrastructure.tests.ps1 - A simple Pester test to confirm the VM configuration is correct

  • server.json - An Azure ARM template to provision a VM

  • server.parameters.json - An Azure ARM parameters template that provides the ARM template with parameter values.

  • server.templates.tests.ps1 - Pester "unit" tests to confirm the ARM template is valid

You'll see how each of these files fits together in the pipeline in a little bit.


Creating the Pipeline


Assuming you've cloned my GitHub repo or set one up on your own, it's time to create the pipeline! To do so, run the az pipelines create command. The below command creates a pipeline called ServerAutomationDemo using the provided GitHub repo as a trigger. It will look at the master branch and use the service connection built earlier.


az pipelines create --name "ServerAutomationDemo" --repository "https://github.com/adbertram/ServerAutomationDemo" --branch master --service-connection $gitHubServiceEndpoint.id --skip-run

Depending on whether you have the azure-pipelines.yml file in your GitHub repo, you may or may not receive feedback like below. Either way, your console will look similar. Be sure to have your GitHub personal access token ready!



YAML Pipeline Review

At this point, your pipeline is ready to run but it's important to first understand the YAML pipeline. Take a look at the azure-pipelines.yml file. This file is the pipeline when using the multi-stage YAML pipeline feature.


Let's break down the various components that make up this YAML pipeline.


The Trigger


Since you're building a CI pipeline that automatically runs, you need a trigger. The trigger below instructs the pipeline to run when a commit is detected in the Git master branch.


Notice also the paths section. By default, if you don't specifically include or exclude files or directories in a CI build, the pipeline will run when a commit is done on any file. Because this project is all built around an ARM template, you don't want to run the pipeline if, for example, you made a tweak to a Pester test.



trigger:
  branches:
    include:
      - master
  paths:
    include:
      - server.json
      - server.parameters.json

The Pool

Every build needs an agent. Every build agent needs to run on a VM. In this case, the VM is using the ubuntu-latest VM image. This image is the default image that was defined when the build was originally created. It hasn't been changed due to the "simplicity" of this pipeline.



pool:
  vmImage: "ubuntu-latest"

Variables


Next up, we have all of the variables and the variable group. The various tasks in this pipeline require reading values like the Azure subscription ID, tenant ID, the application ID for the service principal and so on. Rather than replicating static values in each task, they are defined as variables.


Also notice the group element. This element is referencing the variable group you created earlier. Be sure to replace the subscription_id and tenant_id at this time. You'll learn how to get the application_id a little bit later.


Remember in the Creating the Azure Service Principal section you were reminded to save the value of $sp.appId somewhere? This is where you’ll need it. Assign the value of that service principal application ID to application_id as shown below.


variables:
  - group: ServerAutomationDemo
  - name: azure_resource_group_name
    value: "ServerProvisionTesting-$(Build.BuildId)"
  - name: subscription_id
    value: "XXXXXXXXXXXXX"
  - name: application_id
    value: "XXXXXXXXXXXXX"
  - name: tenant_id
    value: "XXXXXXXXXXXX"

Note the value of the azure_resource_group_name variable. Inside of that value you'll see $(Build.BuildId). This is a system variable that represents the build ID of the current job. In this context, it is being used to ensure the temporary resource group created is unique.

PowerShell Prep Tasks


The next set of tasks invoke PowerShell code. This pipeline example uses PowerShell to create and remove a temporary resource group for testing purposes. In these deployment tasks, you'll see two examples of invoking PowerShell code.


The first task invokes a script called connect-azure.ps1 that exists in the GitHub repo. This task authenticates to the Azure subscription so the subsequent Azure PowerShell commands can run.


This Azure PowerShell connect task calls the script and passes a key vault secret value (ServerAutomationDemo-AppPw) and the pipeline variables subscription_id, application_id and tenant_id.


The second task runs PowerShell code inline, meaning a script doesn't already exist. Instead, the PowerShell code is defined in the pipeline YAML itself, using the value of the azure_resource_group_name pipeline variable.


- task: PowerShell@2
  inputs:
    filePath: "connect-azure.ps1"
    arguments: '-ServicePrincipalPassword "$(ServerAutomationDemo-AppPw)" -SubscriptionId $(subscription_id) -ApplicationId $(application_id) -TenantId $(tenant_id)'
- task: PowerShell@2
  inputs:
    targetType: "inline"
    script: New-AzResourceGroup -Name $(azure_resource_group_name) -Location eastus -Force
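
The actual connect-azure.ps1 lives in the GitHub repo, but a minimal sketch of what such a script might look like, assuming it simply wraps Connect-AzAccount with a service principal credential, is shown below:

param(
    [Parameter(Mandatory)]
    [string]$ServicePrincipalPassword,

    [Parameter(Mandatory)]
    [string]$SubscriptionId,

    [Parameter(Mandatory)]
    [string]$ApplicationId,

    [Parameter(Mandatory)]
    [string]$TenantId
)

## Build a credential object from the service principal's application ID and password
$securePassword = ConvertTo-SecureString -String $ServicePrincipalPassword -AsPlainText -Force
$credential = [pscredential]::new($ApplicationId, $securePassword)

## Authenticate to the Azure subscription as the service principal
Connect-AzAccount -ServicePrincipal -Credential $credential -Tenant $TenantId -Subscription $SubscriptionId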

Pester Template Test


Next up we have the first Pester test. In a CI/CD pipeline such as this, it's important to have a few different layers of tests. If you were creating a pipeline for a software project, you may create various unit tests.


Since this example pipeline is built around a single ARM VM deployment, the first "unit" tests will be to test the validity of the JSON template. Inside of the server.templates.tests.ps1 file is where you can add as many different tests on the ARM template file itself as you'd like.
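
To give you an idea of the kind of "unit" test that file might contain, here is a minimal sketch, assuming Test-AzResourceGroupDeployment is used to validate the template against the temporary resource group (refer to the repo for the actual tests):

param(
    [Parameter(Mandatory)]
    [string]$ResourceGroupName
)

describe 'ARM template validation' {
    it 'passes ARM deployment validation' {
        ## Test-AzResourceGroupDeployment returns error objects when the template or parameters are invalid
        $result = Test-AzResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
            -TemplateFile "$PSScriptRoot/server.json" `
            -TemplateParameterFile "$PSScriptRoot/server.parameters.json"
        $result | should -BeNullOrEmpty
    }
}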


Notice below, the pipeline is using various system variables. These variables reference the location of the files once they get onto the build agent.


The Pester Runner task sends the test results out to an XML file which will then be read later in the pipeline.



- task: Pester@0
  inputs:
    scriptFolder: "@{Path='$(System.DefaultWorkingDirectory)/server.template.tests.ps1'; Parameters=@{ResourceGroupName='$(azure_resource_group_name)'}}"
    resultsFile: "$(System.DefaultWorkingDirectory)/server.template.tests.XML"
    usePSCore: true
    run32Bit: False

The ARM VM Deployment


We have now come to the ARM deployment. Since the entire pipeline is built around the ability to deploy a VM, this one is important! This task deploys the ARM template, providing all of the required attributes needed to make it happen.


Note the deploymentOutputs: arm_output attribute. In the next step, a task needs to connect to the VM that was deployed. A great way to get the DNS name or IP address of this VM is by returning it via the ARM deployment outputs. The deploymentOutputs option creates a pipeline variable that can be referenced in other tasks.



- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: "Resource Group"
    azureResourceManagerConnection: "ARM"
    subscriptionId: "1427e7fb-a488-4ec5-be44-30ac10ca2e95"
    action: "Create Or Update Resource Group"
    resourceGroupName: $(azure_resource_group_name)
    location: "East US"
    templateLocation: "Linked artifact"
    csmFile: "server.json"
    csmParametersFile: "server.parameters.json"
    deploymentMode: "Incremental"
    deploymentOutputs: "arm_output"
Pester "Acceptance" Test


Once the VM has been deployed, you need to ensure it was deployed properly with an "integration" or "acceptance" test. This Pester Runner task is invoking Pester and running another set of infrastructure-related tests to ensure the VM was deployed successfully.


Notice that we're passing in the output of the ARM deployment via the ArmDeploymentJsonOutput parameter. The Pester test script file has a parameter defined that takes the value and reads the DNS hostname of the VM.



- task: Pester@0
  inputs:
    scriptFolder: "@{Path='$(System.DefaultWorkingDirectory)/server.infrastructure.tests.ps1'; Parameters=@{ArmDeploymentJsonOutput='$(arm_output)'}}"
    resultsFile: "$(System.DefaultWorkingDirectory)/server.infrastructure.tests.XML"
    usePSCore: true
    run32Bit: False


You can see below what the server.infrastructure.tests.ps1 PowerShell script looks like. Notice that it's reading the VM's DNS hostname to then run a simple open-port check.



$ArmDeploymentOutput = $ArmDeploymentJsonOutput | ConvertFrom-Json

## Run the tests
describe 'Network Connectivity' {
    it 'the VM has RDP/3389 open' {
        Test-Connection -TCPPort 3389 -TargetName $ArmDeploymentOutput.hostname.value -Quiet | should -Be $true
    }
}

"Acceptance" Test Cleanup

The only reason the pipeline deployed any infrastructure was to test the validity of the ARM template. Because this infrastructure was only temporary, it needs to be cleaned up. In the last PowerShell task, the pipeline removes the resource group created earlier and everything in it.








- task: PowerShell@2
  inputs:
    targetType: "inline"
    script: Get-AzResourceGroup -Name $(azure_resource_group_name) | Remove-AzResourceGroup -Force
Pester Test Publishing






And finally, we have come to the last set of tasks. Azure Pipelines has a task called Publish Test Results. This task reads an XML file on the build agent and displays test results in Azure DevOps. This is a handy way to easily see the results of all tests run.



- task: PublishTestResults@2
  inputs:
    testResultsFormat: "NUnit"
    testResultsFiles: "$(System.DefaultWorkingDirectory)/server.infrastructure.tests.XML"
    failTaskOnFailedTests: true
- task: PublishTestResults@2
  inputs:
    testResultsFormat: "NUnit"
    testResultsFiles: "$(System.DefaultWorkingDirectory)/server.template.tests.XML"
    failTaskOnFailedTests: true

Using the Azure DevOps Pipeline

Finally, we're ready to run the pipeline and see how it works. In the Azure DevOps web UI, ensure you're in the Server Automation Demo project. Once here, click on Pipelines and then you should see the Server Automation Demo pipeline.


One way to run the pipeline is to click on the three dots on the far right as shown below. Then, click on Run pipeline. This will kick off the automation goodness.
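
Alternatively, since you've driven everything else from the console, the DevOps CLI extension can queue a run as well. For example:

az pipelines run --name "ServerAutomationDemo"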



The pipeline will chug along and run each task as instructed. By the end, you should see all green check marks for each task performed by the job as shown below.





Cleaning Up

Once you've fiddled around with the pipeline and everything you've accomplished here, you should clean things up. After all, this was just meant to be a tutorial and not a production task!


Below you'll find some commands to clean up everything built in this article. This code removes the service principal, Azure AD application, the resource group and everything in it and the Azure DevOps project.


## Remove the service principal and Azure AD application
$spId = ((az ad sp list --all | ConvertFrom-Json) | ? { 'http://ServerAutomationDemo' -in $_.servicePrincipalNames }).objectId
az ad sp delete --id $spId

## Remove the resource group
az group delete --name "ServerAutomationDemo" --yes --no-wait

## Remove the Azure DevOps project
$projectId = ((az devops project list | ConvertFrom-Json).value | where { $_.name -eq 'ServerAutomationDemo' }).id
az devops project delete --id $projectId --yes

Summary

This tutorial was meant to give you a peek into building a real Azure DevOps infrastructure automation pipeline. Even though there are countless other ways to build pipelines such as this, the skills you've learned in this tutorial should assist you through many different configurations.


Now get out there and start doing more automating!


