Automating Test Execution with Azure DevOps Pipelines
Azure DevOps Pipelines are a core tool for modern software teams, powering Continuous Integration (CI) and Continuous Delivery (CD). A CI/CD pipeline automates the steps required to get code from version control to production, making the build, test, and deployment phases reliable and repeatable.
In the context of test automation, Azure DevOps provides a powerful, flexible environment for managing execution, scaling infrastructure, and reporting results. This post delves into the core components, infrastructure choices (agents), and practical YAML implementation necessary to establish robust test automation pipelines.
The Core Architecture of Azure Pipelines
An Azure DevOps pipeline is not a single script but an assembly of interconnected components designed to automate workflows.
| Component | Function | Details |
|---|---|---|
| Pipelines | The main entity defining the CI/CD process. | Can be created using YAML files (often preferred for version control) or the classic editor (graphical interface). |
| Stages | Logical groupings of jobs. | Used to organize the workflow into different phases, such as build, test, and deploy. |
| Jobs | A collection of steps that runs on a single agent. | Jobs can be configured to run sequentially or in parallel. |
| Steps | The smallest unit of work. | Can be a script, a predefined task, or a specific action. |
| Tasks | Individual steps that perform specific actions. | Examples include running scripts, installing dependencies, or deploying applications. |
| Variables | Store reusable values. | Can be set at the pipeline level, stage level, or configured as secret variables. |
| Secrets | Store sensitive data (passwords, API keys) securely. | These are accessed during the pipeline run and protected by Azure DevOps. |
| Artifacts | Files or packages generated during the build process. | These are typically used in later stages, particularly during deployment. |
| Service Connections | Enable communication with external services. | These grant the pipeline access to Azure resources, Docker registries, or other third-party services. |
| Environments & Approvals | Define deployment targets and mandatory review gates. | Approval gates ensure deployments are reviewed before they proceed. |
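To make the relationships between these components concrete, here is a minimal sketch of how stages, jobs, steps, and variables nest in a YAML pipeline; the stage and job names and the echo scripts are placeholders, not a working build.

```yaml
# Minimal sketch of the component hierarchy: variables -> stages -> jobs -> steps.
trigger:
- main

variables:
  buildConfiguration: 'Release'        # pipeline-level variable

stages:
- stage: Build
  jobs:
  - job: BuildJob
    pool:
      vmImage: 'windows-latest'
    steps:
    - script: echo Building in $(buildConfiguration)
      displayName: 'Placeholder build step'

- stage: Test
  dependsOn: Build
  jobs:
  - job: TestJob
    pool:
      vmImage: 'windows-latest'
    steps:
    - script: echo Running tests
      displayName: 'Placeholder test step'
```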
Choosing Your Execution Engine: Agents
The agent is the computational resource that actually runs your jobs and tasks. Azure DevOps offers three primary options: Microsoft-hosted agents (cloud), self-hosted agents (on-premises), and Docker and Kubernetes (cloud or on-premises).
1. Microsoft-Hosted (Cloud) Agents
Using Microsoft-hosted agents offers significant convenience because you do not need to maintain any infrastructure; Azure DevOps manages and keeps the machines up to date.
Setup Steps:
- Create a Pipeline: Navigate to the Pipelines section in your Azure DevOps project and initiate a new pipeline creation.
- Select Repository: Choose the repository containing your source code and test scripts.
- Choose Agent Pool: Select the Microsoft-hosted agent pool, indicating that Azure DevOps will manage the underlying infrastructure.
- Pick an Agent Image: Select a suitable operating system image, such as a Windows image (or another OS, depending on requirements).
- Define Pipeline Steps: Add necessary steps like installing dependencies, executing tests, and publishing the results.
- Use Variables/Secrets: Configure any required variables or secrets within Azure DevOps (a sketch follows this list).
- Run and Monitor: Execute and monitor the pipeline through the Azure DevOps portal.
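For the variables/secrets step, the fragment below is a hedged sketch of one common pattern: referencing a variable group from the Pipelines Library and mapping a secret into a step's environment. The group name test-secrets and the secret apiKey are assumptions for illustration.

```yaml
variables:
- group: 'test-secrets'            # hypothetical variable group defined under Pipelines > Library
- name: buildConfiguration
  value: 'Release'

steps:
- script: dotnet test --configuration $(buildConfiguration)
  displayName: 'Run tests with a secret available'
  env:
    API_KEY: $(apiKey)             # secret variables must be mapped into env explicitly
```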
2. Self-Hosted (On-Premises) Agents
This setup allows teams to leverage their own existing infrastructure while still using the full feature set of Azure DevOps pipelines. It is often necessary when working with specialized hardware or internal networks that cloud agents cannot reach.
Setup Steps: All steps remain the same as above, except for the third step (Choose Agent Pool). Replace it with the following:
- Create/Use Agent Pool: In Azure DevOps, go to the Agent Pools section and either create a new pool or use an existing one.
- Download Agent Software: Download the agent software directly from the Azure DevOps portal.
- Configure the Agent: Run the configuration script on your on-premises machine, providing your Azure DevOps URL and a Personal Access Token (PAT) for authentication.
- Run the Agent: Configure the agent to run as a service so it starts automatically with the machine.
- Use in Pipeline: When configuring your pipeline, specify the agent pool containing your self-hosted agent so that the jobs execute on that specific on-premises machine (see the pool fragment after this list).
- Monitor: Manage and monitor the agent's status directly from the Azure DevOps portal.
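As a hedged illustration of the "Use in Pipeline" step, a job can point at the self-hosted pool by name; the pool name below is a placeholder, and the optional demand simply routes the job to a Windows agent registered in that pool.

```yaml
pool:
  name: 'OnPremTestPool'            # placeholder self-hosted agent pool name
  demands:
  - agent.os -equals Windows_NT     # optional: only run on Windows agents in the pool
```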
3. Docker and Kubernetes (Cloud or On-Premises)
For large-scale test automation, especially involving end-to-end browser
testing using tools like Selenium, orchestrating the test environment using
containers is highly effective. Azure DevOps supports a sophisticated workflow
utilizing Docker and Kubernetes.
Execution Flow:
- Trigger: Azure DevOps initiates the pipeline.
- Grid Deployment: The Selenium Grid (Hub and Nodes) is deployed within a Kubernetes cluster (see the manifest sketch after this list).
- Test Execution: A separate test container spins up, connects to the Selenium Grid, and begins running parallel browser sessions. (Running tests in parallel significantly cuts down execution time, a key benefit of containerized grids.)
- Reporting: Extent Reports (or similar detailed test reports) are generated and stored in a shared volume.
- Publishing: The pipeline publishes both the generated reports and the standard test results.
- Cleanup: The Grid and test pods are automatically cleaned up post-execution.
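As a rough sketch of the Grid Deployment step, the manifest below deploys a single Selenium Hub and exposes it inside the cluster; the image tag, names, and ports are illustrative, and the browser node deployments and test pod are omitted. A pipeline could apply a manifest like this with a kubectl script step or a Kubernetes deployment task before the tests start.

```yaml
# Hypothetical Selenium Hub Deployment and Service for the Grid Deployment step.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: selenium-hub
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: selenium/hub:4.21.0      # illustrative image tag
        ports:
        - containerPort: 4444           # Grid endpoint the nodes and tests connect to
---
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    targetPort: 4444
```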
Sample YAML Pipeline Breakdown
Using YAML allows for Infrastructure as Code (IaC), meaning your pipeline configuration is version-controlled alongside your application code. The following sample demonstrates a pipeline configured to build a .NET project and run Selenium NUnit tests.
```yaml
trigger:
- main # or master

pool:
  vmImage: 'windows-latest'

variables:
  buildConfiguration: 'Release'
  testProjectPath: '**/*Tests.csproj'
  resultsFolder: 'TestResults'
  reportFolder: 'Reports'

steps:
# 1 Checkout source code
- checkout: self
  displayName: 'Checkout source code'

# 2 Setup .NET SDK
- task: UseDotNet@2
  displayName: 'Setup .NET SDK'
  inputs:
    packageType: 'sdk'
    version: '8.0.x'

# 3 Restore dependencies
- task: DotNetCoreCLI@2
  displayName: 'Restore NuGet packages'
  inputs:
    command: 'restore'
    projects: '$(testProjectPath)'

# 4 Build the project
- task: DotNetCoreCLI@2
  displayName: 'Build the project'
  inputs:
    command: 'build'
    projects: '$(testProjectPath)'
    arguments: '--configuration $(buildConfiguration)'

# 5 Run Selenium NUnit Tests (headless)
- task: DotNetCoreCLI@2
  displayName: 'Run Selenium NUnit Tests'
  inputs:
    command: 'test'
    projects: '$(testProjectPath)'
    arguments: >
      --configuration $(buildConfiguration)
      --logger "trx;LogFileName=TestResults.trx"
      --results-directory $(resultsFolder)
  env:
    DOTNET_ENVIRONMENT: 'CI'

# 6 Publish NUnit Test Results
- task: PublishTestResults@2
  displayName: 'Publish NUnit Test Results'
  condition: succeededOrFailed()
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '$(resultsFolder)/**/*.trx'
    mergeTestResults: true
    failTaskOnFailedTests: false

# 7 Publish Extent Report + Screenshots
- task: PublishBuildArtifacts@1
  displayName: 'Publish Extent Report and Screenshots'
  condition: succeededOrFailed()
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/$(reportFolder)'
    ArtifactName: 'ExtentReport'
    publishLocation: 'Container'
```
Key Sections in the Sample:
- Trigger & Pool:
  - trigger: - main specifies that the pipeline will run whenever changes are pushed to the main branch.
  - pool: vmImage: 'windows-latest' confirms the use of a Microsoft-hosted agent running the latest Windows image.
- Variables: Variables like buildConfiguration (set to 'Release'), testProjectPath (**/*Tests.csproj), resultsFolder ('TestResults'), and reportFolder ('Reports') are defined for reuse throughout the pipeline.
- Preparation Steps:
  - checkout: self: Checks out the source code.
  - task: UseDotNet@2: Installs the required .NET SDK version (e.g., '8.0.x').
  - task: DotNetCoreCLI@2 with command restore: Restores NuGet packages based on the defined $(testProjectPath).
  - task: DotNetCoreCLI@2 with command build: Builds the project using the configured arguments and path.
- Test Execution:
  - task: DotNetCoreCLI@2 with command test: This critical step runs the Selenium NUnit tests.
  - The arguments specify the configuration (--configuration $(buildConfiguration)), set up logging to generate a standard test result file (--logger "trx;LogFileName=TestResults.trx"), and define the results directory (--results-directory $(resultsFolder)).
  - An environment variable DOTNET_ENVIRONMENT: 'CI' is set for the duration of the test run. A variation of this step that filters tests by category is sketched after this list.
- Publishing Results and Reports:
  - Publish NUnit Test Results: The PublishTestResults@2 task picks up the generated .trx files (the VSTest format commonly used for .NET results). The condition: succeededOrFailed() ensures this task runs even if previous tests failed.
  - Publish Extent Report + Screenshots: The PublishBuildArtifacts@1 task publishes the custom test reports (like Extent Reports) and associated screenshots to the Azure DevOps container. The PathtoPublish is set to the defined $(Build.SourcesDirectory)/$(reportFolder), and the artifact is named 'ExtentReport'. Publishing reports as artifacts allows stakeholders to view rich HTML reports outside of the standard Azure DevOps test results summary.
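As a hedged variation on the test execution step, a filter argument could limit the CI run to a subset of tests; the Smoke category below is an assumed NUnit [Category] attribute, not something defined in the sample project.

```yaml
# Variation on step 5: run only tests tagged with an assumed NUnit Category("Smoke").
- task: DotNetCoreCLI@2
  displayName: 'Run smoke tests only'
  inputs:
    command: 'test'
    projects: '$(testProjectPath)'
    arguments: >
      --configuration $(buildConfiguration)
      --filter "TestCategory=Smoke"
      --logger "trx;LogFileName=SmokeResults.trx"
      --results-directory $(resultsFolder)
  env:
    DOTNET_ENVIRONMENT: 'CI'
```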
For more on Docker and Kubernetes, refer to the related post: Understanding Kubernetes and Its Role in Testing.