Application Config – Options and When to Use Them

What to avoid

  • Having a variation of the application for each environment. A single artifact should be built once and then deployed to all environments; otherwise you can’t guarantee that each variation has been tested.
  • Having to rebuild and redeploy the artifact when its environmental configuration changes, for the same reason as above.
  • Having secrets mixed in with your non-secret environment config (or anywhere in source code, for that matter).

Consider Separation of Concerns

Consider who is going to make changes to the config, both secret and non-secret.

Roles

Developers / Testers
  • Responsible for the schema of the configuration
  • Need to know the keys, NOT the values, across environments

Operations / Security
  • Responsible for the life cycle of the configuration, e.g. CRUD, renewing expiring secrets, ensuring security
  • Need to set the values, as they are the ones who create them for each environment

Where to Store

The 12 factor app says you should have config separate from application source code. Mixing environment config in with the application source code presents some problems:

  • The infrastructure team has to modify application source code to update values (or pass values to developers)
  • Resource names are potentially sensitive information that might help an attacker gain access to systems
  • Infrastructure would need to modify multiple app configs if those apps are deployed to a single service, e.g. a K8s cluster, whereas if config is obtained from the service runtime this only has to be done once
Configuration Store
  • Stores environment-specific values for the application but does not contain sensitive information
  • Does not require encryption
  • Accessed by general operations

Secret Store
  • Holds sensitive configuration values, e.g. connection strings, certificates, access tokens
  • Each application should have its own scope
  • Access restricted to elevated operational roles
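
In Kubernetes, for example, this split maps naturally onto a ConfigMap for the non-secret values and a Secret for the sensitive ones, both injected into the container at runtime. A minimal sketch of a container spec (resource names and keys are assumptions for illustration):

containers:
- name: my-app
  image: my-app:1.0.0                # the same artifact deployed to every environment
  env:
  - name: API_BASE_URL               # non-secret value from the configuration store
    valueFrom:
      configMapKeyRef:
        name: my-app-config
        key: apiBaseUrl
  - name: DB_CONNECTION_STRING       # sensitive value from the secret store
    valueFrom:
      secretKeyRef:
        name: my-app-secrets
        key: dbConnectionString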

Consider how config changes get into production

By accessing the config at runtime you avoid having to rebuild and redeploy the app when config changes.

Config store accessed at runtime, e.g. Azure App Configuration accessed via the SDK
  • To make a config change live: the app picks it up while running

Config store accessed at build time, e.g. a config file kept with the source code, one per environment (appsettings.json, appsettings-dev.json, appsettings-test.json)
  • To make a config change live: rebuild and redeploy the app

Config store accessed at deploy time, e.g. Helm
  • To make a config change live: redeploy the app
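
To illustrate the deploy-time option: with Helm the environment-specific values live in a values file per environment and are substituted into the manifests when the chart is deployed. A minimal sketch (file names and keys are assumptions):

# values-test.yaml (one values file per environment)
apiBaseUrl: https://api.test.example.com

# templates/deployment.yaml (excerpt); the value is injected at deploy time
env:
- name: API_BASE_URL
  value: {{ .Values.apiBaseUrl | quote }}

A config change then only needs a redeploy, e.g. helm upgrade --install my-app ./chart -f values-test.yaml, rather than a rebuild of the artifact.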

Why use a config store?

  • Allows config to be centrally stored so easier to debug problems and compare config across related services
  • Supports hierarchies of config parameters
  • Control feature availability in real-time through feature flags

Slow CICD Pipeline? Try These Techniques

Running buddies

Identify the long-running stages that don’t need to run sequentially. For example, you may run static code analysers such as a Sonar code quality scan and a Checkmarx CxSAST security scan. These can run independently and so are good candidates to run at the same time. They also tend to take a few minutes, which is generally longer than most other build tasks.

Azure Pipelines runs jobs in parallel by default, simply by not specifying a dependsOn to indicate a dependency between them:

jobs:
- job: Windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - script: echo hello from Windows
- job: macOS
  pool:
    vmImage: 'macOS-10.14'
  steps:
  - script: echo hello from macOS
- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: echo hello from Linux

https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml

Quick feature builds, Full Pull Request builds

Rather than running all your build tasks on every branch, including feature branches, think about moving longer-running tasks to execute only on pull request builds. For example, you could move the Sonar code quality scan to run only when merging to master through a pull request. The downside is that developers get the feedback slightly later in the cycle, but one way to mitigate this is to run SonarLint within your IDE to get feedback as you code: https://www.sonarqube.org/sonarlint/
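
A minimal sketch of gating a long-running step on pull request builds in Azure Pipelines, using the built-in Build.Reason variable (the scan scripts here are placeholders, not the actual Sonar tasks):

steps:
- script: ./build-and-unit-test.sh          # placeholder: fast feedback on every branch
  displayName: 'Build and unit test'
- script: ./run-sonar-scan.sh               # placeholder for the Sonar scan step
  displayName: 'Sonar scan (pull request builds only)'
  condition: eq(variables['Build.Reason'], 'PullRequest')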

Cache

Downloading dependencies can be bandwidth intensive and time consuming. By caching the third party packages that are needed for your build you can avoid the cost of downloading each time. This is especially important if you use disposable agents that are thrown away after executing their build stage.

Azure Pipelines also supports caching across multiple pipeline runs.

https://docs.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops
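
A minimal sketch for a Maven build using the Cache@2 task (the cache key, path and variable name are assumptions; adjust them for npm, NuGet, etc.):

variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository

steps:
- task: Cache@2
  displayName: 'Cache Maven local repository'
  inputs:
    key: 'maven | "$(Agent.OS)" | **/pom.xml'
    restoreKeys: |
      maven | "$(Agent.OS)"
      maven
    path: $(MAVEN_CACHE_FOLDER)
- script: mvn install -B -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)
  displayName: 'Build with cached dependencies'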

Call my Agent

Check how many agents you have available to run pipeline tasks. If you are running tasks in parallel you will need multiple agents per pipeline. Check the queued tasks and consider increasing the number of available agents if you see tasks are waiting for others to complete.

Azure Pipelines NuGet Monorepo

This post shows how to structure a monorepo for NuGet packages and then automate their build using Azure YAML pipelines.

Project Structure

Use a package’s solution file to define the location of the package’s source code and test code. This single solution file can then be passed to the DotNetCoreCLI@2 task to build and pack only that particular package.

The solution files are common.A.sln and common.B.sln.

Project file structure for the two packages, common.A and common.B:

/common-nuget-library
   /common
     /common.A
       azure-pipelines.yml
       common.A.sln
       common.A.csproj
       /src
     /common.B  
       azure-pipelines.yml
       common.B.sln
       common.B.csproj
       /src
   /tests  
     common.A.tests
       common.A.tests.csproj
     common.B.tests
       common.B.tests.csproj

Configuring the pipeline for each NuGet package

Next we want to configure a pipeline for each package. The contents of each azure-pipelines.yml are shown below.

common.A/azure-pipelines.yml

trigger:
  paths:
    include:
      - common/common.A/*
...
- task: DotNetCoreCLI@2
  displayName: 'DotNet Build'
  inputs:
    command: 'build'
    projects: 'common/common.A/common.A.sln'
- task: DotNetCoreCLI@2
  displayName: 'Dotnet Pack'
  inputs:
    command: 'pack'
    packagesToPack: 'common/common.A/common.A.sln'
    includesymbols: true
    packDirectory: '$(Pipeline.Workspace)/dist'
    configuration: 'Release'
    nobuild: true
    versionEnvVar: 'BUILD_BUILDNUMBER'
    versioningScheme: 'byEnvVar'
    buildProperties: 'SymbolPackageFormat=snupkg'
  • The pipeline is only triggered when changes occur in common/common.A/*
  • The DotNet Build task only builds the packages listed in the solution file ‘common/common.A/common.A.sln’
  • The DotNet pack task only packages ‘common/common.A/common.A.sln’

common.B/azure-pipelines.yml

trigger:
  paths:
    include:
      - common/common.B/*
...
- task: DotNetCoreCLI@2
  displayName: 'DotNet Build'
  inputs:
    command: 'build'
    projects: 'common/common.B/common.B.sln'      
- task: DotNetCoreCLI@2
  displayName: 'Dotnet Pack'
  inputs:
    command: 'pack'
    packagesToPack: 'common/common.B/common.B.sln'
    includesymbols: true
    packDirectory: '$(Pipeline.Workspace)/dist'
    configuration: 'Release'
    nobuild: true
    versionEnvVar: 'BUILD_BUILDNUMBER'
    versioningScheme: 'byEnvVar'
    buildProperties: 'SymbolPackageFormat=snupkg'    
  • The pipeline is only triggered when changes occur in common/common.B/*
  • The DotNet Build task only builds the packages listed in the solution file ‘common/common.B/common.B.sln’
  • The DotNet pack task only packages ‘common/common.B/common.B.sln’

Create a pipeline for each package

Finally, you can create a pipeline to build each package by simply selecting ‘New pipeline’ from the Pipelines tab and providing the azure-pipelines.yml file for that package.

Opportunity Cost

As Seth Godin points out, opportunity cost just went up. Building and delivering software is getting more complicated, so keep your human mind free to focus on the interesting bits and leave the boring, repetitive stuff to the machines.

What’s the best way to do this? Build your CD pipeline first, set up the infrastructure (preferably serverless) from the start and then get into the flow of development. Add more checks and balances as you go. Run automated tests. Deploy to the cloud.

Secure DevOps Kit for Azure

AzSK ARM Template Checker

This is a very useful open source tool used internally by Microsoft to validate that best practices are being followed in their Azure ARM templates.

This short post shows how to incorporate the AzSK ARM Template Checker into your Azure YAML Pipeline.

If you want to use a Linux build agent, you can use the PowerShell task to run AzSK.

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
      Install-Module AzSK
      Import-Module AzSK
      Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath $Env:BUILD_SOURCESDIRECTORY/arm-templates
    failOnStderr: true

Otherwise, if you are using a Windows build agent, you can use the Azure DevOps extension.

Counting word frequency using map.merge

Imagine we want to produce a Map with each unique word as a key and its frequency as the value.

Prior to Java 8 we would have to do something like this:

Map<String, Integer> map = new HashMap<>();
for(String word : words) {
    if(map.containsKey(word)) {
        int count = map.get(word);
        map.put(word, ++count);
    } else {
        map.put(word, 1);
    }
}

However with the map.merge method we can now do:

Map<String, Integer> map = new HashMap<>();
for(String word : words) {
    map.merge(word, 1, Integer::sum);
}
return map;

Pretty nice eh?

Email is dead, long live Slack

Here are some great reasons to stop using email for team communication and instead switch to Slack:

  • By default email messages are private, only available to the recipients. Slack messages are by default available to the whole team: simply join the channel you’re interested in (or leave it if not). How many times have you had to forward an email to someone who wasn’t on the original? Or worse, that other person never got to give their valuable contribution because they were never on the list.
  • Build a knowledge base – with email, when someone leaves your company their account is deactivated, and with it all their sent emails. Imagine how useful this information could be if preserved and made searchable! Key decisions, how-tos and historical context can be available throughout the project and made available to everyone, especially newcomers.
  • Marketplace of apps – Slack has LOTS of fantastic integrations, like Git, Jenkins and JIRA, which help to keep task communications flowing. E.g. see when a code review is required and openly discuss it.
  • Self service – no need to request mailing lists from your email admin for topics or projects; simply create a channel and invite the relevant team members, e.g. just the developers working on project X.
  • Multimedia – call a video conference and screen share from within a shared channel without having to mess around with other conferencing apps. You can even give others control over your desktop (useful if it would take too long to explain a technical task).
  • Sync and async – conversations are much closer to real-time than email, but still have the option of being asynchronous if you don’t want to be distracted.
  • Connection – remote team mates feel more connected with Slack. You can see who’s online, see other work happening even if you’re not directly involved with the project, or simply have a bit of banter with fellow employees without worrying about who to CC.
  • Strangers are friends – other companies can be given access to a specific Slack channel and feel part of the team.
  • Don’t repeat yourself – no huge email chains with reply-to-all that require you to scroll through pages of crap to find the context of the conversation.

Continuous Integration, Continuous Delivery, Continuous Deployment

Within DevOps the terms Continuous Integration, Continuous Delivery and Continuous Deployment get thrown around a lot. Here is the simplest definition I could come up with to quickly explain each to a non-techie like a project manager.

Continuous Integration – running unit and other tests on every branch, on every commit, and merging to master every day
Continuous Delivery – as above, but each commit CAN be pushed to production
Continuous Deployment – as above, but each commit IS pushed to production

Serverless Continuous Deployment for Java AWS Lambda using AWS CodePipeline

This post shows step by step how to deploy your serverless Java AWS Lambdas continuously to production, moving from pull request to merge, build, deploy and finally test.

Overview


Project Setup

For our project we are going to assume a standard Maven Java project structure, with CloudFormation and build specification config in the root of the project.

Within the Maven pom.xml file, you must include the Lambda core library:

<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.0</version>
    </dependency>
</dependencies>

And also include the AWS SDK Java BOM:

<!-- https://aws.amazon.com/blogs/developer/managing-dependencies-with-aws-sdk-for-java-bill-of-materials-module-bom/ -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-bom</artifactId>
            <version>${com.amazonaws.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Next, you also need to ensure that the JAR artifact is built flat, using the Maven Shade plugin:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.1.0</version>
            <configuration>
                <createDependencyReducedPom>false</createDependencyReducedPom>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <transformer
                                implementation="com.github.edwgiz.mavenShadePlugin.log4j2CacheTransformer.PluginsCacheFileTransformer">
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
            <dependencies>
                <dependency>
                    <groupId>com.github.edwgiz</groupId>
                    <artifactId>maven-shade-plugin.log4j2-cachefile-transformer</artifactId>
                    <version>2.8.1</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>

Build Pipeline using AWS CodePipeline

Source Step

The first step in the AWS CodePipeline is to fetch the source from the S3 bucket

  • Action Name: Source
  • Action Provider: S3
  • Bucket: <your release bucket>
  • S3 Object Key: <path of your application>.zip
  • Output Artifact: MyApp

Build Step

For the next step in the pipeline, you need to configure a CodeBuild project.

Set the current environment image to aws/codebuild/java:openjdk-8

Use the following buildspec.yml in the root of your project:

version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      # Unit tests, code analysis and dependency checks (Maven lifecycle phases: validate, compile, test, package, verify)
      - mvn verify shade:shade -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=error
      - mv target/MyApp-1.0.jar .
      - unzip MyApp-1.0.jar
      - rm -rf target tst src buildspec.yml pom.xml MyApp-1.0.jar
      - aws cloudformation package --template-file main.yaml --output-template-file transformed_main.yaml --s3-bucket myapp-prod-outputbucket-xxxxxxxx
cache:
  paths:
    - '/root/.m2/**/*'
artifacts:
  type: zip
  files:
    - transformed_main.yaml

Staging Step

After the artifact is built, we now want to create a change set using CloudFormation.

  • Action mode: Create or replace a change set
  • Template: MyAppBuildOut::transformed_main.yaml
  • Stackname: <name of your created stack here>

Define your Lambda function (using the serverless resource format) in your CloudFormation config file, placed in the root of your Maven project:

LambdaFunctionName:
  Type: AWS::Serverless::Function
  Properties:
    Handler: au.com.nuamedia.camlinpayment.handler.MenuHandler
    Runtime: java8
    Timeout: 10
    MemorySize: 1024
    Events:
      GetEvent:
        Type: Api
        Properties:
          Path: /menu
          Method: get
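
Because the function uses the AWS::Serverless::Function resource type, the template it lives in must declare the SAM transform at the top. A minimal sketch of the surrounding template header (assuming the function above sits under Resources):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # required for AWS::Serverless::Function resources
Resources:
  LambdaFunctionName:
    Type: AWS::Serverless::Function
    # Properties as shown above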

Deploy Step

The change set can then be executed and the changes rolled out to production safely and automatically. If any problems are encountered, an automatic rollback occurs.

  • Action Mode: Execute changeset
  • Stackname: <name of your created stack here>
  • Change set name: <change set name from previous step>

Outcome

Congratulations! You now have your Java AWS Lambda functions deploying to production using Continuous Deployment. AWS CodePipeline is easily configurable via the UI and can also be defined as code and stored in version control.