Azure DevOps Pipelines – how secret are secret variables?

Azure Pipelines supports storing secret variables within the project, either in variable groups or as individual pipeline variables marked secret.

This is a convenient place to store all those database connection strings and access tokens you need to access external services like JFrog Artifactory, or to deploy to Azure services such as a Kubernetes cluster or CosmosDB.

Simply enter the secret value, check the padlock and everything is safe, right? Well, that depends what you mean by ‘safe’!

Not so fast, kiddo

Although Azure Pipelines takes pains to mask secrets echoed to the pipeline console, and even prevents secrets from being made available to pipeline scripts by default, this does not mean the secrets can’t still be, how shall we say, obtained.

A pipeline developer who has access to the azure-pipelines.yml can quite easily grab the secret value, echo it to a file, publish that file as a pipeline artifact, and then simply download the file from the pipeline console once the run completes.
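
For example, here is a minimal sketch of what such a malicious pipeline fragment could look like (the variable name prodDbPassword and the file and artifact names are made up for illustration):

steps:
# Secrets are not mapped into scripts automatically; the author maps one in explicitly
- bash: echo "$LEAKED" > leak.txt
  env:
    LEAKED: $(prodDbPassword)   # hypothetical secret variable
# Publish the file as an artifact, then download it from the run's artifacts page
- publish: leak.txt
  artifact: not-so-secret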

What do you mean – ‘developers can see the production passwords’?

Well, if you use variable groups to store secret variables for each environment you deploy to, then YES!

Now this may be fine if only admins have access to run the pipelines, but if your azure-pipelines.yml file is embedded within your application source code, this in theory means an application developer could change the pipeline definition to reveal production secrets.

So how do we prevent this happening in our team?

Luckily the Azure Pipelines security onion does have a good selection of layers to peel back.

  1. Force azure-pipelines.yml to extend a ‘master’ template which restricts what tasks and scripts can be run by the child pipeline, using a mechanism similar to inheritance. Use a ‘required template’ check to ensure only sanctioned templates that extend ‘master’ can be run (see the sketch after this list).
  2. Don’t put your azure-pipelines.yml in your app source code; instead store it within a separate protected repo and pull in a reference to your app repo.
  3. Use permissions on variable groups to grant access to pipeline admin roles while excluding developer roles. However, this only works if devs are not allowed to deploy code to higher environments.
  4. Separate the CI phase from the CD phase. This is similar to the previous technique and is in fact how pre-YAML Azure Pipelines structures its builds. You could argue that the CI phase can be run freely by developers, including deploying to a dev environment, but the CD phase should only be accessible to more privileged users who can promote the deployment through the various environments to production.
  5. Finally, don’t use pipeline library secret variables at all. Instead use Service Connections, though a problem here is that not all services support service connections, e.g. Azure Databricks.
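
To make technique 1 concrete, here is a minimal sketch of the extends mechanism (the repo and template names are illustrative only):

# azure-pipelines.yml in the app repo
resources:
  repositories:
  - repository: templates
    type: git
    name: MyProject/pipeline-templates   # hypothetical protected repo
extends:
  template: master.yml@templates
  parameters:
    buildSteps:
    - script: ./build.sh                 # app-specific steps, constrained by the template

# master.yml in the protected repo: only steps passed via the
# 'buildSteps' parameter are injected, so the template controls what runs
parameters:
- name: buildSteps
  type: stepList
  default: []
steps:
- ${{ each step in parameters.buildSteps }}:
  - ${{ step }}

Combined with a ‘required template’ check, pipelines that don’t extend master.yml can be blocked from using protected resources.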

Security should always be concern number zero with any production system and a CD Pipeline that holds the keys to so many precious castles is a core component to protect.

Secure DevOps Kit for Azure

AzSK ARM Template Checker

This is a very useful open source tool used internally by Microsoft to validate that best practices are being followed in their Azure ARM templates.

This short post shows how to incorporate the AzSK ARM Template Checker into your Azure YAML Pipeline.

If you want to use a Linux build agent, you can use the PowerShell task to run AzSK.

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
      Install-Module AzSK
      Import-Module AzSK
      Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath $Env:BUILD_SOURCESDIRECTORY/arm-templates
    failOnStderr: true

Otherwise, if you are using a Windows build agent, you can use the Azure Extension.

Counting word frequency using map.merge

Imagine we want to produce a Map with each unique word as a key and its frequency as the value.

Prior to Java 8 we would have to do something like this:

Map<String, Integer> map = new HashMap<>();
for(String word : words) {
    if(map.containsKey(word)) {
        int count = map.get(word);
        map.put(word, ++count);
    } else {
        map.put(word, 1);
    }
}

However, with the map.merge method we can now do:

Map<String, Integer> map = new HashMap<>();
for(String word : words) {
    map.merge(word, 1, Integer::sum);
}
return map;

Pretty nice eh?
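
If you prefer streams, there is a similar one-liner using Collectors.groupingBy – a sketch, not from the original post; note the counts come back as Long rather than Integer:

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

List<String> words = Arrays.asList("to", "be", "or", "not", "to", "be");
Map<String, Long> frequencies = words.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
// {not=1, be=2, or=1, to=2}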

Email is dead, long live Slack

Here are some great reasons to stop using email for team communication and instead switch to Slack:

  • By default email messages are private – only available to the recipients. Slack messages are by default available to the whole team. Simply join the channel you’re interested in (or leave if not). How many times have you had to forward an email to someone who wasn’t on the original? Or worse, how many times did that other person never get to give their valuable contribution because they were never on the list?
  • Build a knowledge base – with email, when someone leaves your company their account is deactivated and along with it all their sent emails. Imagine how useful this info could be if preserved and made searchable! Key decisions, how-tos and historical context can be kept throughout the project and made available to all, especially newcomers.
  • Marketplace of apps – Slack has LOTS of fantastic integrations, like Git, Jenkins and JIRA, which help to keep task communications flowing. E.g. see when a code review is required and openly discuss it.
  • Self service – No need to request mailing lists from your email admin for topics or projects; simply create a channel and invite the relevant team members. E.g. just the developers working on project X.
  • Multimedia – Call a video conference and screen share from within a shared channel without having to mess around with other conferencing apps. You can even give others control over your desktop (useful if it’s going to take too long to explain a technical task)
  • Sync and Async – Conversations are much closer to real-time than email, but still have the option of being asynchronous if you don’t want to be distracted.
  • Connection – Remote team mates feel more connected with Slack. You can see who’s online. You can see other work happening even if you’re not directly involved with the project, or you can simply have a bit of banter with fellow employees without worrying about who to CC in an email.
  • Strangers are friends – Other companies can be given access to a specific Slack channel and feel part of the team.
  • Don’t repeat yourself – No huge email chains with reply-to-all that require you to scroll through pages of crap to find the context of the conversation.

Spring Boot Microservice Integration Test Using Hoverfly

  • WebTestClient is used to simplify test calls into the microservice
  • @SpringBootTest is enabled with a defined-port WebEnvironment to start a real web server
  • Hoverfly mocks the external web service and provides pre-canned responses
package hello;

import io.specto.hoverfly.junit.core.HoverflyConfig;
import io.specto.hoverfly.junit.rule.HoverflyRule;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.reactive.server.WebTestClient;

import static io.specto.hoverfly.junit.core.SimulationSource.defaultPath;

@RunWith(SpringRunner.class)
@SpringBootTest(
		webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT,
		classes = hello.Application.class)
public class IntegrationTests {

	// Run Hoverfly as a local stub web server on port 8500, serving the
	// pre-canned responses recorded in sm9-create-ticket.json
	@ClassRule
	public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(defaultPath("sm9-create-ticket.json"),
			HoverflyConfig.localConfigs().asWebServer().proxyPort(8500));

	@Autowired
	private WebTestClient webClient;

	@Test
	public void testCreateTicket() {

		this.webClient.get().uri("/ticket?query=x").exchange().expectStatus().isOk()
				.expectBody(String.class).isEqualTo("Response from HPSM for create ticket");

	}

}
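
For reference, sm9-create-ticket.json is a Hoverfly simulation file containing the request/response pairs to serve. Purely as an illustration (this sketch uses the v5 simulation schema; the endpoint path and response body are invented placeholders, not taken from the real file):

{
  "data": {
    "pairs": [
      {
        "request": {
          "method": [ { "matcher": "exact", "value": "POST" } ],
          "path": [ { "matcher": "exact", "value": "/sm9/tickets" } ]
        },
        "response": {
          "status": 200,
          "body": "{ \"ticketId\": \"123\" }"
        }
      }
    ]
  },
  "meta": { "schemaVersion": "v5" }
}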

Continuous Integration, Continuous Delivery, Continuous Deployment

Within DevOps the terms Continuous Integration, Continuous Delivery and Continuous Deployment get thrown around a lot. Here is the simplest definition I could come up with to quickly explain each to a non-techie like a project manager.

Continuous Integration – Running unit and other tests on every branch on every commit, and merging to master every day
Continuous Delivery – As above, but each commit CAN be pushed to production
Continuous Deployment – As above, but each commit IS pushed to production

Serverless Continuous Deployment for Java AWS Lambda using AWS CodePipeline

This post shows step by step how to deploy your serverless Java AWS Lambdas continuously to production, moving from pull request through merge, build, deploy and finally test.

Overview


Project Setup

For our project we are going to assume a standard Maven Java project structure, with CloudFormation and build specification config in the root of the project.
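
A minimal layout might look like this (the root folder name is arbitrary; the file names match those referenced later in this post):

my-lambda-app/
├── pom.xml            Maven build file
├── buildspec.yml      CodeBuild build specification
├── main.yaml          CloudFormation/SAM template
├── src/               application sources
└── tst/               test sources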

Within the Maven pom.xml file, you must include the Lambda core library.

<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.0</version>
    </dependency>
</dependencies>

And also include the AWS SDK Java BOM:

<!-- https://aws.amazon.com/blogs/developer/managing-dependencies-with-aws-sdk-for-java-bill-of-materials-module-bom/ -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-bom</artifactId>
            <version>${com.amazonaws.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Next, you also need to ensure that the JAR artifact is built flat, using the Maven Shade plugin:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.1.0</version>
            <configuration>
                <createDependencyReducedPom>false</createDependencyReducedPom>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <transformer
                                implementation="com.github.edwgiz.mavenShadePlugin.log4j2CacheTransformer.PluginsCacheFileTransformer">
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
            <dependencies>
                <dependency>
                    <groupId>com.github.edwgiz</groupId>
                    <artifactId>maven-shade-plugin.log4j2-cachefile-transformer</artifactId>
                    <version>2.8.1</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>
Build Pipeline using AWS CodePipeline

Source Step

The first step in the AWS CodePipeline is to fetch the source from the S3 bucket:

  • Action Name: Source
  • Action Provider: S3
  • Bucket: <your release bucket>
  • S3 Object Key: <path of your application>.zip
  • Output Artifact: MyApp

Build Step

For the next step in the pipeline, you need to configure a CodeBuild project.

Set the current environment image to aws/codebuild/java:openjdk-8

Use the following buildspec.yml in the root of your project:

version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      # Unit tests, code analysis and dependency checks (Maven lifecycle phases: validate, compile, test, package, verify)
      - mvn verify shade:shade -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=error
      - mv target/MyApp-1.0.jar .
      - unzip MyApp-1.0.jar
      - rm -rf target tst src buildspec.yml pom.xml MyApp-1.0.jar
      - aws cloudformation package --template-file main.yaml --output-template-file transformed_main.yaml --s3-bucket myapp-prod-outputbucket-xxxxxxxx
cache:
  paths:
    - '/root/.m2/**/*'
artifacts:
  type: zip
  files:
    - transformed_main.yaml

Staging Step

After the artifact is built, we now want to create a change set using CloudFormation.

  • Action mode: Create or replace a change set
  • Template: MyAppBuildOut::transformed_main.yaml
  • Stack name: <name of your created stack here>

Define your Lambda function using Java (using the AWS SAM serverless format) in your CloudFormation config file, placed in the root of your Maven project.

LambdaFunctionName:
  Type: AWS::Serverless::Function
  Properties:
    Handler: au.com.nuamedia.camlinpayment.handler.MenuHandler
    Runtime: java8
    Timeout: 10
    MemorySize: 1024
    Events:
      GetEvent:
        Type: Api
        Properties:
          Path: /menu
          Method: get
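
To make the Handler property concrete, here is a hypothetical sketch of the handler class it points at (the real implementation is not shown in this post). With an API event source, the input and output follow API Gateway's proxy integration format, represented here as plain maps:

package au.com.nuamedia.camlinpayment.handler;

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical sketch of the handler class named in the SAM template above
public class MenuHandler implements RequestHandler<Map<String, Object>, Map<String, Object>> {

    @Override
    public Map<String, Object> handleRequest(Map<String, Object> input, Context context) {
        context.getLogger().log("GET /menu invoked");

        // Minimal API Gateway proxy-format response
        Map<String, Object> response = new HashMap<>();
        response.put("statusCode", 200);
        response.put("body", "{\"menu\": []}"); // placeholder payload
        return response;
    }
}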

Deploy Step

The change set can then be executed and the changes automatically rolled out to production safely. If any problems are encountered, an automatic rollback occurs.

  • Action Mode: Execute changeset
  • Stack name: <name of your created stack here>
  • Change set name: <change set name from previous step>

Outcome

Congratulations! You now have your Java AWS Lambda functions deploying to production using Continuous Deployment. AWS CodePipeline is easily configurable via the UI and can also be defined as code and stored in version control.

Do your dependencies leave you open to attack?

According to the 2015 Verizon Data Breach Investigations Report (DBIR), 98% of attacks are opportunistic in nature and aimed at easy targets. The report also found that more than 70% of attacks exploited known vulnerabilities that had patches available.

The recent breach at Equifax was caused by a known vulnerability in the file upload handling of the popular Struts web framework. It took Equifax at least two weeks after the attack to discover the data breach, almost four months after the exploit had been made public. Automated alerting on known exploits could have prevented this catastrophic security hole.

This post shows an automated way to check your third party library dependencies to ensure your site does not become a victim to these opportunistic attacks.

We will use the dependency checker provided by OWASP. This example shows integration with a Maven build, where the check is run during the verify phase of every build. The first run will take a while as it has to download the entire vulnerability database; subsequent runs will have this cached and so will run much faster.

Maven dependency include:

<dependency>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <version>${org.owasp.dependency-check-maven.version}</version>
    <scope>test</scope>
</dependency>

Maven plugin configuration:

<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <version>3.1.2</version>
    <configuration>
        <!-- Re-use downloaded CVE data for 12 hours before checking for updates -->
        <cveValidForHours>12</cveValidForHours>
        <!-- Fail the build if any dependency has a vulnerability scoring CVSS 8 or higher -->
        <failBuildOnCVSS>8</failBuildOnCVSS>
    </configuration>
    <executions>
        <execution>
            <phase>verify</phase>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>


Maven command to run:

mvn org.owasp:dependency-check-maven:check
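
The generated report (by default target/dependency-check-report.html) lists each flagged dependency along with its known CVEs and their CVSS scores.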

Reference

https://www.triology.de/en/blog-entries/automatic-checks-for-vulnerabilities-in-java-project-dependencies

https://jeremylong.github.io/DependencyCheck/dependency-check-maven/index.html

https://www.owasp.org/index.php/OWASP_Dependency_Check