Continuous Integration, Continuous Delivery, Continuous Deployment.

Within DevOps, the terms Continuous Integration, Continuous Delivery and Continuous Deployment get thrown around a lot. Here are the simplest definitions I could come up with to quickly explain each to a non-techie such as a project manager.

Continuous Integration: Running unit and other tests on every branch on every commit, and merging to master every day.
Continuous Delivery: As above, but each commit CAN be pushed to production.
Continuous Deployment: As above, but each commit IS pushed to production.

Serverless Continuous Deployment for Java AWS Lambda using AWS CodePipeline

This post shows, step by step, how to deploy your serverless Java AWS Lambdas continuously to production, moving from pull request through merge, build and deploy, and finally test.


Project Setup

For our project we are going to assume a standard Maven Java project structure, with the CloudFormation template and build specification in the root of the project.

Within the Maven pom.xml file, you must include the Lambda core library:

<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.0</version>
    </dependency>
</dependencies>

And also include the AWS SDK for Java BOM:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-bom</artifactId>
            <version>${com.amazonaws.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Next, you also need to ensure that the JAR artifact is built flat, using the Maven Shade plugin:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.1.0</version>
            <configuration>
                <createDependencyReducedPom>false</createDependencyReducedPom>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <transformer
                                implementation="com.github.edwgiz.mavenShadePlugin.log4j2CacheTransformer.PluginsCacheFileTransformer">
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
            <dependencies>
                <dependency>
                    <groupId>com.github.edwgiz</groupId>
                    <artifactId>maven-shade-plugin.log4j2-cachefile-transformer</artifactId>
                    <version>2.8.1</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>

Build Pipeline using AWS CodePipeline

Source Step

The first step in the AWS CodePipeline is to fetch the source from the S3 bucket.

  • Action Name: Source
  • Action Provider: S3
  • Bucket: <your release bucket>
  • S3 Object Key: <path of your application>.zip
  • Output Artifact: MyApp

Build Step

The next step in the pipeline is to configure a CodeBuild project.

Set the build environment image to aws/codebuild/java:openjdk-8.

Use the following buildspec.yml in the root of your project:

version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      # Unit tests, code analysis and dependency check (Maven lifecycle phases: validate, compile, test, package, verify)
      - mvn verify shade:shade -B
      - mv target/MyApp-1.0.jar .
      - unzip MyApp-1.0.jar
      - rm -rf target tst src buildspec.yml pom.xml MyApp-1.0.jar
      - aws cloudformation package --template-file main.yaml --output-template-file transformed_main.yaml --s3-bucket myapp-prod-outputbucket-xxxxxxxx
cache:
  paths:
    - '/root/.m2/**/*'
artifacts:
  type: zip
  files:
    - transformed_main.yaml

Staging Step

After the artifact is built, we now want to create a change set using CloudFormation.

  • Action mode: Create or replace a change set
  • Template: MyAppBuildOut::transformed_main.yaml
  • Stack name: <name of your created stack here>

Define your Lambda function (using the serverless format) in your CloudFormation template, placed in the root of your Maven project:

  <YourFunctionName>:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: java8
      Timeout: 10
      MemorySize: 1024
      Events:
        <YourApiEventName>:
          Type: Api
          Properties:
            Path: /menu
            Method: get
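For reference, the function code itself can be very small. Lambda locates the entry point via the template's Handler property (for example MenuHandler::handleRequest). The class below is an illustrative sketch, not code from the original project; a plain public method needs no AWS dependency, though implementing RequestHandler from aws-lambda-java-core is the more typical, type-checked approach.

```java
// Illustrative handler sketch; class name, method and return value
// are assumptions. Lambda can invoke any public method named in the
// Handler property, e.g. "MenuHandler::handleRequest".
public class MenuHandler {
    public String handleRequest(String input) {
        // Real code would build the response for the /menu endpoint.
        return "menu for " + input;
    }
}
```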

Deploy Step

The change set can then be executed and the changes automatically rolled out to production safely. If any problems are encountered, an automatic rollback occurs.

  • Action mode: Execute a change set
  • Stack name: <name of your created stack here>
  • Change set name: <change set name from previous step>


Congratulations! You now have your Java AWS Lambda functions deploying to production using Continuous Deployment. AWS CodePipeline is easily configurable via the UI and can also be defined as code and stored in version control.

Do your dependencies leave you open to attack?

According to the 2015 Verizon Data Breach Investigations Report (DBIR), 98% of attacks are opportunistic in nature and aimed at easy targets. The report also found that more than 70% of attacks exploited known vulnerabilities that had patches available.

The recent breach at Equifax was caused by a known vulnerability in the popular Struts web framework library, exploited when uploading files. It took Equifax at least two weeks after the attack to discover the data breach, almost four months after the exploit had been made public. Automated alerting on known exploits could have prevented this catastrophic security hole.

This post shows an automated way to check your third party library dependencies to ensure your site does not become a victim to these opportunistic attacks.

We will use the dependency checker provided by OWASP. This example shows integration with a Maven build where the check is run against every build during the verify stage. The first run will take a while as it has to download the entire vulnerability database. Subsequent runs will have this cached and so will run much faster.

Maven dependency include:

Maven plugin configuration:
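A minimal plugin configuration looks something like the sketch below (the version number shown is illustrative). The check goal binds to Maven's verify phase by default, so it runs on every build.

```xml
<!-- Sketch of the OWASP dependency-check plugin config; version is illustrative -->
<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <version>3.1.1</version>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```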

Maven command to run:

mvn org.owasp:dependency-check-maven:check



The Four Tendencies

Gretchen Rubin says a useful way to think about people’s behaviour is by considering how willing they are to meet or resist the expectations placed on them. Expectations can either be external, like your boss asking for a project to be completed, or internal, like exercising regularly.

From these she identifies four combinations, labelled The Four Tendencies.

                External Expectation    Internal Expectation
Upholder        (tick)                  (tick)
Questioner      (warning)               (tick)
Obliger         (tick)                  (warning)
Rebel           (warning)               (warning)

(tick) = Meets   (warning) = Resists

This could provide you with more empathy when considering your colleagues, friends or family and make you a more effective communicator. Maybe try thinking about members of your software development team and which tendency they seem to exhibit.

Which one are you? Take the quiz and find out.

Achieving PCI compliance the easy way with a serverless architecture

Achieving PCI compliance can be a rather onerous, ongoing commitment. The first thing you will have to show is your architecture, in the form of a network diagram together with a data flow diagram showing the routes where credit card data is transmitted. The more complex the architecture, the more work involved in making it PCI compliant.

So what can you do to minimise the amount of effort? Your first thought should be around minimising the scope that falls under PCI compliance. This can be done by isolating only the components required for payments and moving everything else outside of this scope. Once this is done, you should then think about how to implement the system that will be in scope for compliance, and that is what I want to talk about in this post.

The key to easier PCI compliance of YOUR system is to offload as much as possible to other providers, and by other providers I mean cloud hosting providers like AWS and GCP, whose platforms have already attained PCI compliance.

In this example I’m going to use AWS, comparing two architectures: one deploying your application on EC2 instances, and the other using Lambda as our serverless facilitator.

Let’s take a simple application that takes credit card details from customers over the phone line and uses the PayPal payment gateway to process the payment.

High level network diagram

If we ignore the telephony section of the data flow, to keep things simple in this illustration, then these are the architectures we would produce in each case.

Traditional Architecture

Network diagram for traditional architecture

AWS Components used

  1. VPC
  2. Public subnets
  3. Private subnets
  4. Internet gateway
  5. NAT gateway
  6. Two Availability Zones
  7. Bastion host for SSH
  8. Security groups
  9. ACLs
  10. Route tables
  11. S3 buckets for CloudWatch logs

Serverless Architecture

Network diagram for serverless architecture

AWS Components used

  1. Lambda (using the default VPC)
  2. API Gateway
  3. S3 buckets for CloudWatch logs

It’s pretty clear how much simpler the serverless architecture is.

Next we’ll see how some of the PCI requirements are met by each architecture, and especially how much of the compliance responsibility is handed over to AWS when using the serverless solution.

How each architecture meets PCI requirements – traditional versus serverless

  • Harden operating system
    Traditional: Hardened AMIs, remove unused applications, lock down ports
    Serverless: No instances (AWS takes care of the runtime)
  • Incoming firewall
    Traditional: DMZ to prevent unauthorized access, Security Group on VPC
    Serverless: Lambda comes with a default VPC
  • Outgoing internet access
    Traditional: Internet gateway, NAT gateway, Security groups and/or ACLs
    Serverless: The default Lambda VPC has outgoing internet access
  • Do not provide direct access to instances
    Traditional: Use a bastion host to allow SSH access
    Serverless: No instances, so no SSH required


By going serverless we are able to totally ignore the following requirements:

  • Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters
  • Requirement 5: Protect all systems against malware and regularly update anti-virus software or programs

Does your Kanban maturity level match your team’s?

A Kanban board seems at first sight like something easy to implement. You draw some columns on a wall, put up headings for backlog, in progress and done, and then stick post-it notes all over it to represent tasks. Easy, right?

Well kind of 🙂

I’ve been promoting the use of Kanban at our company; we have been using it for about six months now. I knew it was an incremental approach and that we should introduce something simple and build from there, with an attitude of continuous improvement (Kaizen).

However, a lot of the Kanban guides push you in at quite an advanced level, talking about limiting Work In Progress (WIP) and pulling work through the system.

For immature teams (which I guess covers the majority first adopting Kanban) this is too much early on and could lead to the team rejecting the Kanban system.

After reading David J Anderson’s excellent book, I’ve come to realise that the best way is to evolve Kanban into an organisation gradually, staying aware of the maturity of your team and matching that to the sophistication of the Kanban system in use.

This is a great article on doing just that:

Calculating Transactions Per Minute from log file entries

Here’s a Linux command I found useful for extracting simple transactions-per-minute figures from log file entries:

grep -h 'Unique text per transaction' requests.log.* | cut -c1-16 | uniq -c | sed 's/./,/8' > transactions_per_minute.csv
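To see what each stage contributes, here is an annotated run against a tiny illustrative log. The file name and match text mirror the command above; the log lines are assumed to begin with a "YYYY-MM-DD HH:MM:SS" timestamp, and the sample entries below are made up.

```shell
# Create a small sample log (illustrative data).
printf '%s\n' \
  '2017-03-29 11:45:01 Unique text per transaction id=1' \
  '2017-03-29 11:45:02 Unique text per transaction id=2' \
  '2017-03-29 11:46:07 Unique text per transaction id=3' \
  > requests.log.1

grep -h 'Unique text per transaction' requests.log.* | # -h: drop filename prefixes
  cut -c1-16 |  # first 16 chars = "YYYY-MM-DD HH:MM" (minute resolution)
  uniq -c |     # count consecutive duplicates (log lines are already time-ordered)
  sed 's/./,/8' # GNU uniq -c pads counts to 7 chars; turn the 8th char (a space) into a comma
```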

This produces a file with the number of transactions/requests for each minute over time, e.g.

34,2017-03-29 11:45
83,2017-03-29 11:46
114,2017-03-29 11:47
84,2017-03-29 11:48
70,2017-03-29 11:49
64,2017-03-29 11:50
76,2017-03-29 11:51

You can now convert this into a visually revealing graph using, for example, Google Sheets.

Install Docker on Linux Mint 18

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates -y
sudo apt-key adv --keyserver hkp:// --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo deb ubuntu-xenial main | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get purge lxc-docker
sudo apt-get install linux-image-extra-$(uname -r) -y
sudo apt-get install docker-engine cgroup-lite apparmor -y
sudo usermod -a -G docker $USER
sudo service docker start

Keyboard shortcuts for any web page

Vimium gives you a way to navigate a web page without using the mouse.

I now use this browser extension all the time, in fact I feel lost when using someone else’s computer and have to resort to reaching for the mouse.

It simply gives each hyperlink a unique character code, e.g. ‘gw’, which you type to select the link. To trigger the codes, simply press ‘f’ on a web page.

Vimium is free and open source, and supports Chrome and Firefox.


Do you have any keyboard shortcut productivity tricks? Let me know.