My previous WordPress hosting with SiteGround costs $20.39 per month.
Amazon Lightsail costs $3.50 per month for an instance running a WordPress image.
A saving of a couple hundred dollars over the year!
AzSK is a very useful open source tool used internally by Microsoft to validate that best practices are being followed in their Azure ARM templates.
This short post shows how to incorporate the AzSK ARM Template Checker into your Azure YAML Pipeline.
If you want to use a Linux build agent, you can use the PowerShell task to run AzSK.
```yaml
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
      Install-Module AzSK
      Import-Module AzSK
      Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath $Env:BUILD_SOURCESDIRECTORY/arm-templates
    failOnStderr: true
```
Otherwise, if you are using a Windows build agent, you can use the AzSK Azure DevOps extension.
Imagine we want to produce a Map with each unique word as a key and its frequency as the value.
Prior to Java 8 we would have to do something like this:
```java
Map<String, Integer> map = new HashMap<>();
for (String word : words) {
    if (map.containsKey(word)) {
        int count = map.get(word);
        map.put(word, ++count);
    } else {
        map.put(word, 1);
    }
}
```
However, with the `map.merge` method we can now do:

```java
Map<String, Integer> map = new HashMap<>();
for (String word : words) {
    map.merge(word, 1, Integer::sum);
}
return map;
```
Pretty nice eh?
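For completeness, here is the `merge` approach as a self-contained, runnable class (the class and method names are just illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordFrequency {

    // Count how often each word occurs. merge() inserts 1 when the key
    // is absent, otherwise combines the existing value with Integer::sum.
    static Map<String, Integer> countWords(List<String> words) {
        Map<String, Integer> map = new HashMap<>();
        for (String word : words) {
            map.merge(word, 1, Integer::sum);
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                countWords(List.of("to", "be", "or", "not", "to", "be"));
        System.out.println(counts.get("to")); // 2
        System.out.println(counts.get("or")); // 1
    }
}
```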
Here are some great reasons to stop using email for team communication and switch to Slack instead.
Within DevOps, the terms Continuous Integration, Continuous Delivery and Continuous Deployment get thrown around a lot. Here are the simplest definitions I could come up with to quickly explain each to a non-techie, like a project manager.
| Term | Definition |
|---|---|
| Continuous Integration | Running unit and other tests on every branch on every commit, and merging to master every day |
| Continuous Delivery | As above, but each commit CAN be pushed to production |
| Continuous Deployment | As above, but each commit IS pushed to production |
This post shows step by step how to deploy your serverless Java AWS Lambdas continuously to production, moving from pull request to merge, build, deploy and finally test.
For our project we are going to assume a standard Maven Java project structure, with Cloudformation and build specification config in the root of the project.
Within the Maven pom.xml file, you must include the lambda core libraries.
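As a sketch, the dependency might look like this (the version shown is an assumption; check Maven Central for the latest):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.2.1</version>
</dependency>
```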
The first step in the AWS CodePipeline is to fetch the source from the S3 bucket
Next step in the pipeline, you need to configure a CodeBuild project.
Set the current environment image to aws/codebuild/java:openjdk-8
Use the following buildspec.yml in the root of your project:
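The original buildspec is not reproduced here, but a minimal sketch along these lines would work; the template file names and the `$ARTIFACT_BUCKET` variable are assumptions to adapt to your project:

```yaml
version: 0.2

phases:
  build:
    commands:
      # Compile and package the Lambda jar
      - mvn package
  post_build:
    commands:
      # Upload the artifact and rewrite the template to reference it
      - aws cloudformation package --template-file template.yml --s3-bucket $ARTIFACT_BUCKET --output-template-file packaged-template.yml

artifacts:
  files:
    - packaged-template.yml
```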
After the artifact is built, we now want to create a change set using CloudFormation.
Define your Java Lambda function (using the serverless format) in your CloudFormation config file, placed in the root of your Maven project.
```yaml
LambdaFunctionName:
  Type: AWS::Serverless::Function
  Properties:
    Handler: au.com.nuamedia.camlinpayment.handler.MenuHandler
    Runtime: java8
    Timeout: 10
    MemorySize: 1024
    Events:
      GetEvent:
        Type: Api
        Properties:
          Path: /menu
          Method: get
```
The ChangeSet can then be executed and the changes automatically rolled out to production safely. If any problems are encountered, an automatic rollback occurs.
Congratulations! You now have your Java AWS Lambda functions deploying to production using Continuous Deployment. AWS CodePipeline is easily configurable via the UI and can also be defined as code and stored in version control.
| Type | Duration | Target | Scope | Estimate | Displayed on |
|---|---|---|---|---|---|
| Epic | Multiple sprints over a few months | Product Owner | High-level objective | – | Backlog |
| User Story | Single sprint | End user or customer | Feature level | Story Points | Backlog |
| Task | Few days | Development Team | Assignable to an individual | Hours | Agile board |
Gretchen Rubin says a useful way to think about people's behaviour is to consider how willing they are to meet or resist expectations placed on them. Expectations can be either external, like your boss asking for a project to be completed, or internal, like exercising regularly.
From these she identifies the four combinations labelled as The Four Tendencies.
| Tendency | External Expectation | Internal Expectation |
|---|---|---|
| Upholder | Meets | Meets |
| Questioner | Resists | Meets |
| Obliger | Meets | Resists |
| Rebel | Resists | Resists |
This could provide you with more empathy when considering your colleagues, friends or family and make you a more effective communicator. Maybe try thinking about members of your software development team and which tendency they seem to exhibit.
Which one are you? Take the quiz and find out.
A Kanban board seems at first sight like something easy to implement. You draw some columns on a wall, put up headings for backlog, in progress and done, and then stick Post-it notes all over it to represent tasks. Easy, right?
Well kind of 🙂
I’ve been promoting the use of Kanban at our company; we have been using it for about six months now. I knew it was an incremental approach and that we should introduce something simple and build from there with an attitude of continuous improvement (Kaizen).
However, a lot of the Kanban guides push you in at quite an advanced level, talking about limiting Work In Progress (WIP) and pulling work through the system.
For immature teams (which I guess describes the majority first adopting Kanban) this is too much early on and could lead to the team rejecting the Kanban system.
After reading David J Anderson’s excellent book I’ve come to realise that the best way is to evolve Kanban into an organisation gradually, being aware of the maturity of your team and matching that to the sophistication of the Kanban system in use.
This is a great article on doing just that:
http://anderson.leankanban.com/kanban-patterns-organizational-maturity/
Here’s a Linux command I found useful for extracting simple transactions-per-minute counts from log file entries:

```shell
grep -h 'Unique text per transaction' requests.log.* | cut -c1-16 | uniq -c | sed s/./,/8 > transactions_per_minute.csv
```
This produces a file with the number of transactions/requests for each minute over time: e.g.
```
34,2017-03-29 11:45
83,2017-03-29 11:46
114,2017-03-29 11:47
84,2017-03-29 11:48
70,2017-03-29 11:49
64,2017-03-29 11:50
76,2017-03-29 11:51
```
You can now convert this into a visually revealing graph using, for example, Google Sheets.