Application Config – Options and when to use

What to avoid

  • Having a variation of an application for each environment. A single artifact should be built once and then deployed to all environments; otherwise you can’t guarantee that each variation has been tested.
  • Having to re-build and re-deploy the artifact if changes are required in its environment configuration, for the same reason as above.
  • Having secrets mixed in with your non-secret environment config (or anywhere in source code, for that matter).

Consider Separation of Concerns

Consider who is going to make changes to the config, both secret and non-secret.


  • Developers / Testers – responsible for the schema of the configuration; need to know the keys, NOT the values, across environments
  • Operations / Security – responsible for the life cycle of the configuration (CRUD, renewing expiring secrets, ensuring security); need to set the values as they are the ones who create these for each environment

Where to Store

The Twelve-Factor App methodology says you should keep config separate from application source code. Mixing environment config in with the application source code presents some problems:

  • The infrastructure team has to modify application source code to update values (or pass values to developers)
  • Resource names are potentially sensitive information that might help an attacker gain access to systems
  • Infrastructure would need to modify multiple app configs if those apps are deployed to a single service, e.g. a K8s cluster, whereas if config is obtained by the service runtime this only has to be done once
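As a minimal sketch of this separation, the application can read its environment-specific values at startup rather than baking them into the artifact. The variable names below (APP_DB_HOST, the fallback values) are illustrative assumptions, not from the post:

```java
import java.util.Map;
import java.util.Optional;

// Twelve-factor style config sketch: the same artifact behaves differently
// per environment because values come from the environment, not the source.
public class EnvConfig {
    private final Map<String, String> env;

    public EnvConfig(Map<String, String> env) {
        this.env = env;
    }

    // Returns the configured value, or a default if the key is absent
    public String get(String key, String defaultValue) {
        return Optional.ofNullable(env.get(key)).orElse(defaultValue);
    }

    public static void main(String[] args) {
        EnvConfig config = new EnvConfig(System.getenv());
        System.out.println(config.get("APP_DB_HOST", "localhost"));
    }
}
```

In a real deployment the map would be populated by the platform (e.g. the service runtime mentioned above) so the infrastructure team never touches application source.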
  • Configuration store – stores environment-specific values for the application but does not contain sensitive information; does not require encryption; accessed by general operations
  • Secret store – holds sensitive configuration values such as connection strings, certificates and access tokens; each application should have its own scope; access restricted to elevated operational roles

Consider how config changes get into production

By accessing the config at runtime you avoid having to rebuild and redeploy the app when config changes.

What is required to make a config change live depends on the technique:

  • Config store accessed at runtime (e.g. Azure App Configuration accessed by its SDK) – the app picks up the change while running
  • Config store accessed at build time (e.g. a config file in with the source code, one file per environment: appsettings.json, appsettings-dev.json, appsettings-test.json) – rebuild and redeploy the app
  • Config store accessed at deploy time (e.g. Helm) – redeploy the app

Why use a config store?

  • Allows config to be centrally stored, making it easier to debug problems and compare config across related services
  • Supports hierarchies of config parameters
  • Controls feature availability in real time through feature flags
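The feature-flag point can be sketched as follows. This is illustrative only: the flag names and the in-memory map are assumptions, standing in for a real config store (such as Azure App Configuration) that would be queried at runtime so availability can change without a redeploy:

```java
import java.util.Map;

// Feature-flag lookup sketch. A real implementation would read from a
// config store at request time; here a plain Map stands in for it.
public class FeatureFlags {
    private final Map<String, Boolean> flags;

    public FeatureFlags(Map<String, Boolean> flags) {
        this.flags = flags;
    }

    // Unknown flags default to off, so a missing entry can never
    // accidentally enable a feature
    public boolean isEnabled(String name) {
        return flags.getOrDefault(name, false);
    }
}
```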

Slow CICD Pipeline? Try These Techniques

Running buddies

Identify the long running stages that don’t need to run sequentially. For example you may run static code analysers, a Sonar code quality scan and also a Checkmarx cxSAST security scan. These can be run independently and so are good candidates to run at the same time. They also tend to take a few minutes which is generally longer than most other build tasks.

Azure Pipelines allows jobs to be run in parallel simply by not specifying dependsOn to indicate dependencies between them:

jobs:
- job: Windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - script: echo hello from Windows
- job: macOS
  pool:
    vmImage: 'macOS-10.14'
  steps:
  - script: echo hello from macOS
- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: echo hello from Linux

Quick feature builds, Full Pull Request builds

Rather than running all your build tasks for all branches, including feature branches, think about moving longer-running tasks to execute only on pull request builds. For example, you could move the Sonar code quality scan to run only when merging to master through a pull request. The downside is that developers get feedback slightly later in the cycle, but one way to mitigate this is to run SonarLint within your IDE to get feedback as you code.
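A hedged sketch of this split in Azure Pipelines YAML, gating on the built-in Build.Reason variable (the stage, job and step contents are illustrative assumptions):

```yaml
stages:
- stage: Build
  jobs:
  - job: Compile
    steps:
    - script: echo "fast build and unit tests, run on every branch"

# Long-running quality gates only run for pull request builds
- stage: CodeQuality
  condition: eq(variables['Build.Reason'], 'PullRequest')
  jobs:
  - job: Sonar
    steps:
    - script: echo "run the Sonar scan here"
```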


Cache your dependencies
Downloading dependencies can be bandwidth intensive and time consuming. By caching the third party packages that are needed for your build you can avoid the cost of downloading each time. This is especially important if you use disposable agents that are thrown away after executing their build stage.

Azure Pipelines also supports caching across multiple pipeline runs.
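This is done with the built-in Cache task; a hedged sketch for NuGet, where the cache key, variable name and paths are typical values rather than anything from the post:

```yaml
variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  displayName: Cache NuGet packages
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    restoreKeys: |
      nuget | "$(Agent.OS)"
    path: $(NUGET_PACKAGES)
- script: dotnet restore --locked-mode
```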

Call my Agent

Check how many agents you have available to run pipeline tasks. If you are running tasks in parallel you will need multiple agents per pipeline. Check the queued tasks and consider increasing the number of available agents if you see tasks are waiting for others to complete.

Azure Pipelines NuGet Monorepo

This post shows how to structure a monorepo for NuGet packages and then automate their build using Azure YAML pipelines.

Project Structure

Use a package’s solution file to define the location of the package’s source code and test code. This single solution file can then be passed to the DotNetCoreCLI@2 task to build and pack only that particular package.

The solution files are common.A.sln and common.B.sln.

Project file structure for the two packages, common.A and common.B:


Configuring the pipeline for each NuGet package

Next we want to configure a pipeline for each package. The contents of each azure-pipelines.yml are shown below.


trigger:
  branches:
    include:
      - master
  paths:
    include:
      - common/common.A/*

steps:
- task: DotNetCoreCLI@2
  displayName: 'DotNet Build'
  inputs:
    command: 'build'
    projects: 'common/common.A/common.A.sln'
- task: DotNetCoreCLI@2
  displayName: 'Dotnet Pack'
  inputs:
    command: 'pack'
    packagesToPack: 'common/common.A/common.A.sln'
    includesymbols: true
    packDirectory: '$(Pipeline.Workspace)/dist'
    configuration: 'Release'
    nobuild: true
    versionEnvVar: 'BUILD_BUILDNUMBER'
    versioningScheme: 'byEnvVar'
    buildProperties: 'SymbolPackageFormat=snupkg'
  • The pipeline is only triggered when changes occur in common/common.A/*
  • The DotNet Build task only builds the packages listed in the solution file ‘common/common.A/common.A.sln’
  • The DotNet pack task only packages ‘common/common.A/common.A.sln’


trigger:
  branches:
    include:
      - master
  paths:
    include:
      - common/common.B/*

steps:
- task: DotNetCoreCLI@2
  displayName: 'DotNet Build'
  inputs:
    command: 'build'
    projects: 'common/common.B/common.B.sln'
- task: DotNetCoreCLI@2
  displayName: 'Dotnet Pack'
  inputs:
    command: 'pack'
    packagesToPack: 'common/common.B/common.B.sln'
    includesymbols: true
    packDirectory: '$(Pipeline.Workspace)/dist'
    configuration: 'Release'
    nobuild: true
    versionEnvVar: 'BUILD_BUILDNUMBER'
    versioningScheme: 'byEnvVar'
    buildProperties: 'SymbolPackageFormat=snupkg'
  • The pipeline is only triggered when changes occur in common/common.B/*
  • The DotNet Build task only builds the packages listed in the solution file ‘common/common.B/common.B.sln’
  • The DotNet pack task only packages ‘common/common.B/common.B.sln’

Create a pipeline for each package

Finally, you can create a pipeline to build each package by simply selecting ‘New pipeline’ from the Pipelines tab and providing the azure-pipelines.yml file for that package.

Opportunity Cost

As Seth Godin says, opportunity cost just went up. Building and delivering software is getting more complicated, so keep your human mind free to focus on the interesting bits and leave the boring, repetitive stuff to the machines.

What’s the best way to do this? Build your CD pipeline first, set up the infrastructure (preferably serverless) from the start and then get into the flow of development. Add more checks and balances as you go. Run automated tests. Deploy to the cloud.

Azure DevOps Pipelines – how secret are secret variables?

Azure Pipelines supports storing secret variables within the project, either through variable groups or as secret variables.

This is a convenient place to store all those database connection strings and access tokens you need to allow access to external services like JFrog Artifactory or deploy to Azure services such as a Kubernetes cluster or CosmosDB.

Simply enter the secret value, check the padlock and everything is safe, right? Well, that depends on what you mean by ‘safe’!

Not so fast kiddo

Although Azure Pipelines takes pains to mask secrets echoed to the pipeline console, and even prevents secrets from being made available to pipeline scripts by default, this does not mean the secrets can’t still be, how shall we say, obtained.

A pipeline developer who has access to the azure-pipelines.yml can quite easily grab the secret value, echo it to a file, publish that file as a pipeline artifact and then simply download the file from the pipeline console once the run completes.
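The leak takes only a couple of YAML lines. A sketch for demonstration purposes only (the variable and file names are made up; never do this in a real pipeline):

```yaml
steps:
- script: echo "$MY_SECRET" > leaked.txt
  env:
    MY_SECRET: $(mySecretVariable)  # secrets must be mapped in explicitly
- publish: leaked.txt
  artifact: leaked
```

The console masks the value if echoed to the log, but nothing masks it inside a published artifact.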

What do you mean – ‘developers can see the production passwords’?

Well, if you use variable groups to store secret variables for each environment you deploy to, then YES!

Now this may be fine if only admins have access to run the pipelines, but if your azure-pipelines.yml file is embedded within your application source code this in theory means an application developer could change the pipeline definition to reveal production secrets.

So how do I prevent this happening in our team?

Luckily the Azure Pipelines security onion does have a good selection of layers to peel back.

  1. Force azure-pipelines.yml to extend a ‘master’ template which restricts what tasks and scripts can be run by the child pipeline, using a mechanism similar to inheritance. Use a ‘required template’ check to ensure only sanctioned templates that extend ‘master’ can be run.
  2. Don’t put your azure-pipelines.yml in your app source code; instead store it within a separate protected repo and pull in a reference to your app repo.
  3. Use permissions on variable groups to allow access to pipeline admin roles while excluding developer roles. However, this only works if devs are not allowed to deploy code to higher environments.
  4. Separate the CI phase from the CD phase. This is similar to the previous technique and is in fact how pre-YAML Azure Pipelines structures its builds. You could argue that the CI phase can be run freely by developers, including deploying to a dev environment, but the CD phase should only be accessible to more privileged users who can promote the deployment through the various environments to production.
  5. Finally, don’t use pipeline library secret variables. Instead use Service Connections, though a problem here is that not all services support service connections, e.g. Azure Databricks.
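Technique 1 can be sketched like this; the template name, repository resource and parameter are assumptions for illustration:

```yaml
# azure-pipelines.yml in the app repo
resources:
  repositories:
  - repository: templates
    type: git
    name: protected-project/pipeline-templates

# The pipeline may only extend the sanctioned template, which decides
# which of the supplied steps are actually allowed to run
extends:
  template: master-template.yml@templates
  parameters:
    buildSteps:
    - script: echo "app-specific build step"
```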

Security should always be concern number zero with any production system and a CD Pipeline that holds the keys to so many precious castles is a core component to protect.

Secure DevOps Kit for Azure

AzSK ARM Template Checker

This is a very useful open source tool used internally by Microsoft to validate that best practices are being followed in their Azure ARM templates.

This short post shows how to incorporate the AzSK ARM Template Checker into your Azure YAML Pipeline.

If you want to use a Linux build agent, you can use the PowerShell task to run AzSK.

steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
      Install-Module AzSK
      Import-Module AzSK
      Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath $Env:BUILD_SOURCESDIRECTORY/arm-templates
    failOnStderr: true

Otherwise, if you are using a Windows build agent, you can use the Azure DevOps extension.

Counting word frequency using map.merge

Imagine we want to produce a Map with each unique word as a key and its frequency as the value.

Prior to Java 8 we would have to do something like this:

Map<String, Integer> map = new HashMap<>();
for (String word : words) {
    if (map.containsKey(word)) {
        int count = map.get(word);
        map.put(word, ++count);
    } else {
        map.put(word, 1);
    }
}

However with the map.merge method we can now do:

Map<String, Integer> map = new HashMap<>();
for (String word : words) {
    map.merge(word, 1, Integer::sum);
}

Pretty nice eh?
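For comparison (not from the original post), the same count can also be written with the Stream API, which tallies into a Map<String, Long>:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Stream-based word frequency count: group by the word itself and
// count the occurrences in each group.
public class WordFrequency {
    public static Map<String, Long> count(List<String> words) {
        return words.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }
}
```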

Email is dead, long live Slack

Here are some great reasons to stop using email for team communication and instead switch to Slack:

  • By default email messages are private – only available to the recipients. Slack messages are by default available to the whole team. Simply join the channel you’re interested in (or leave if not). How many times have you had to forward an email to someone who wasn’t on the original? or worse that other person never got to give their valuable contribution because they were never on the list?
  • Build a knowledge base – with email when someone leaves your company their account is deactivated and along with it all their sent emails. Imagine how useful this info could be if preserved and made searchable! Key decisions, how-tos and historical context can be available throughout the project and made available to all, especially newcomers.
  • Marketplace of apps – Slack has LOTS of fantastic integrations, like Git, Jenkins and JIRA, which help to keep task communications flowing, e.g. see when a code review is required and openly discuss it.
  • Self service – No need to request mailing lists from your email admin for topics or projects, simply create a channel and invite the relevant team members. E.g. Just developers working on project X
  • Multimedia – Call a video conference and screen share from within a shared channel without having to mess around with other conferencing apps. You can even give others control over your desktop (useful if it’s going to take too long to explain a technical task)
  • Sync and Async – Conversations are much closer to real-time than email, but still have the option of being asynchronous if you don’t want to be distracted.
  • Connection – Remote team mates feel more connected with Slack. You can see who’s online. You can see other work happening even if you’re not directly involved with the project or you can simply have a bit of banter with fellow employees easily without worrying about who should I CC in this email.
  • Strangers are friends – Other companies can be given access to a specific Slack channel and feel part of the team.
  • Don’t repeat yourself – no huge email chains with reply-to-all that require you to scroll through pages of crap to find the context of the conversation.

Spring Boot Microservice Integration Test Using Hoverfly

  • WebTestClient used to simplify external web service calls into microservice
  • SpringBootTest enabled with WebEnvironment defined port to enable webserver
  • Hoverfly to mock external webservice and provide precanned responses
package hello;

import io.specto.hoverfly.junit.core.HoverflyConfig;
import io.specto.hoverfly.junit.rule.HoverflyRule;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.reactive.server.WebTestClient;

import static io.specto.hoverfly.junit.core.SimulationSource.defaultPath;

@RunWith(SpringRunner.class)
@SpringBootTest(
		webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT,
		classes = hello.Application.class)
public class IntegrationTests {

	@ClassRule
	public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(defaultPath("sm9-create-ticket.json"),
			HoverflyConfig.localConfigs().proxyLocalHost());

	@Autowired
	private WebTestClient webClient;

	@Test
	public void testCreateTicket() {
		// The request path below is assumed for illustration; the original
		// snippet only showed the final assertion
		webClient.post().uri("/tickets")
				.exchange()
				.expectStatus().isOk()
				.expectBody(String.class).isEqualTo("Response from HPSM for create ticket");
	}
}
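The sm9-create-ticket.json simulation referenced above would look something like the following sketch. The request matchers (method and path) are assumptions; check the simulation schema against your Hoverfly version:

```json
{
  "data": {
    "pairs": [
      {
        "request": {
          "method": [{ "matcher": "exact", "value": "POST" }],
          "path": [{ "matcher": "exact", "value": "/tickets" }]
        },
        "response": {
          "status": 200,
          "body": "Response from HPSM for create ticket"
        }
      }
    ]
  },
  "meta": { "schemaVersion": "v5" }
}
```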