
In the first part of Making awesome software with Lean principles I started looking at how Lean principles have helped me while working on the Mid Sussex Tri Club website.

The first three principles, Eliminating waste, Amplifying learning and Deciding as late as possible, are all fundamental guides for software development. However, they only form part of the picture: there are still four more principles that help guide our decisions, so let’s take a look at them now.

Deliver as fast as possible

In the era of rapid technology evolution, it is not the biggest that survives, but the fastest. The sooner the end product is delivered without major defects, the sooner feedback can be received, and incorporated into the next iteration.

I’ve seen the benefits of an iterative approach over and over again throughout my professional life, so I regard this as an essential principle to apply to all software projects. One of the first changes I made was to put in place a pretty slick release process, and I have been reaping the benefits ever since: any check-in on the master branch of the source code repository is immediately deployed to the website. I’ve already blogged about how I set up this ‘zero-click’ deploy process and it has worked largely without problems since.

However, this technique must be used with caution and certainly in tandem with the ‘Build quality in’ principle. It is essential to ensure that a suitable branching process is in place too, and that changes can only be deployed once they have been tested. If you are working in a larger team I’d recommend limiting write access to the master branch to a single person who can coordinate the deployments. While automation is great for productivity, people still need to understand how it all works and retain control of the systems.
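To illustrate the gating idea, here is a hypothetical sketch (not the actual deployment code, which Azure handles for me): a webhook receiver could inspect the pushed ref and only trigger a deploy for the master branch. The payload shape is assumed from GitHub’s push event.

```python
import json

DEPLOY_BRANCH = "refs/heads/master"  # only pushes to master trigger a deploy

def should_deploy(payload: str) -> bool:
    """Return True if a push webhook payload targets the deploy branch."""
    event = json.loads(payload)
    return event.get("ref") == DEPLOY_BRANCH

# Example payloads (field names assumed from GitHub's push event)
master_push = json.dumps({"ref": "refs/heads/master"})
staging_push = json.dumps({"ref": "refs/heads/staging"})

print(should_deploy(master_push))   # deploy
print(should_deploy(staging_push))  # don't deploy
```

The real check lives inside Azure’s deployment machinery, but the principle is the same: automation plus an explicit rule about which branch is allowed to reach production.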

Empower the team

The lean approach favors the aphorism “find good people and let them do their own job,” encouraging progress, catching errors, and removing impediments, but not micro-managing.

I have been fortunate to be involved with an organisation that trusted me to make the technical decisions. I presented my ideas on the features to implement and got feedback on any changes they thought might be needed, so I was empowered to make changes as and when they were needed. However, I also think it is important to ‘bring the stakeholders with you’ when making changes. This has several benefits: not only does it help ensure that the features are relevant, it also gives me some good candidates to test the system before it goes live, which brings me neatly onto the next principle!

Build quality in

Conceptual integrity means that the system’s separate components work well together as a whole with balance between flexibility, maintainability, efficiency, and responsiveness.

As I mentioned earlier, this principle acts as a counterweight to some of the other Lean principles, such as Deliver as fast as possible, helping to ensure that changes are not rushed and the software doesn’t accumulate bugs and brittle implementations. For this project I’ve not had the luxury of other developers who could review the code; however, I’ve still ensured that changes are functionally reviewed before deploying them to the live site. To facilitate this I’ve set up a ‘staging’ environment using the same deployment technique, running off a separate ‘staging’ repository branch. I first implement changes on the staging branch and only merge them into the live ‘master’ branch once someone has tested them on the staging website.

On one occasion I skipped the staging process and fate taught me a lesson! The update was to use an Umbraco plugin to automatically resize images, but little did I know that the plugin had a memory leak, which meant the site quickly went over its memory allocation and Azure automatically took it offline. That was a nasty surprise to find on the live site, and some load testing in the staging environment would certainly have helped!

See the whole

By decomposing the big tasks into smaller tasks, and by standardizing different stages of development, the root causes of defects should be found and eliminated.

There are two key elements to the last principle, See the whole. I’ve already talked about how the staging and live environments are standardized; in my case this was a fairly trivial exercise, made easy by modern hosting tools such as Azure and GitHub that enable multiple environments to be set up at very low cost. The small fee to run an extra Azure database is well worth the value it adds for the club.

The second element recommends splitting tasks into smaller pieces and is certainly one that I’d advocate. Limiting the number of changes made at any one time really helps with tracking down issues. It’s a practice I’ve learnt the hard way over the years: it can be tempting to try and ‘fix the world’ when you get your hands on a code-base and hammer out a high number of changes in a short time. However, not only does this massively increase the probability of bugs, it also makes finding them much more difficult. Looking for a bug in a commit with 50+ changed files isn’t much fun!

I’ve used the GitHub issue tracker on this project, which may seem a bit odd when there is no one else to share the work with; however, it has helped keep me focussed on what I’ve been trying to achieve, and it’s nice to get the satisfaction of closing each issue after fixing it!

Lean == Awesome

I hope you’ve enjoyed reading about my experiences with Lean principles. I’d love to hear about any opinions you’ve got regarding the way I’ve interpreted the principles or how you’ve used the principles to make your own awesome software, so feel free to comment on the posts below!


I’ve been a keen amateur triathlete and member of the Mid Sussex Tri Club for a couple of years. It’s a relatively small club, but there is a surprising amount of administration involved in running it, and after chatting to some of the club’s committee members it was clear that the admin overheads were becoming a burden. I could see the potential for software to help lessen the burden, so I offered to take on the club’s website and add some features to it.

Fast forward a year and we have a greatly enhanced website with new features that have helped grow the club without taking up more of people’s time. Throughout making these changes I’ve applied Lean software principles, which have really helped me. So in this post I’m taking a look at the first three principles of Lean and how they have assisted on this project.

Eliminate waste

Lean philosophy regards everything not adding value to the customer as waste.

I had great motivation to ensure that this principle was applied, as I’ve been making all the changes in my spare time. It’s amazing what you can achieve in a short amount of time with a bit of planning. For example, just last week I was able to add two new payment options to the site in an hour. The key techniques I’ve used to help eliminate waste are:

  • Make sure I understand what the key stakeholder wants to achieve from the feature before implementing it
  • Design directly in HTML – I’ve designed all the new user interfaces directly in HTML so when it comes to implementing the feature I just need to enhance the existing HTML with the dynamic data elements
  • Use third parties to do the heavy lifting (i.e. don’t reinvent the wheel). For this project the key third-party systems used are Umbraco for content management, Bootstrap for styling and GoCardless for online direct debit payments

Amplify learning

Software development is a continuous learning process with the additional challenge of development teams and end product sizes.

Working in a development team of one on this particular project obviously made knowledge sharing a non-issue; however, there were opportunities to ensure learning was applied instead of doing things the old way. Integrating the GoCardless payment system presented a great opportunity for learning. The club were reluctant to extend the existing PayPal-based solution due to the fees involved, with PayPal charging nearly 4% per transaction. So I did some research and found that the GoCardless direct debit service charged just 1% per transaction.

We decided to trial the system and I was pleasantly surprised by how intuitive it was to integrate with. They have top-notch documentation of their API and client libraries for a wide range of platforms, including ours, .NET. I had a test payment process integrated with our website within a couple of hours. Their support was also excellent and helped clarify the few points I wasn’t clear on regarding setting up specific redirect URLs.

The system has been up and running for around 6 months now and the only issue we’ve had was with some intermittent error responses. Again their support team looked into the problem as soon as I raised it and implemented a fix. I’m glad that I took the time to learn how to integrate with GoCardless as it has saved our small club hundreds of pounds in fees already!
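To put those percentages in context, here is a quick back-of-the-envelope comparison. The transaction counts and amounts below are purely illustrative, not the club’s real figures; only the two fee rates come from the post above.

```python
def transaction_fee(amount: float, rate: float) -> float:
    """Fee charged on a payment at the given percentage rate."""
    return amount * rate

# A hypothetical year of club payments: 200 transactions of £30 each
total_taken = 200 * 30.0
paypal_fees = transaction_fee(total_taken, 0.04)      # nearly 4% per transaction
gocardless_fees = transaction_fee(total_taken, 0.01)  # 1% per transaction

print(f"PayPal fees:     £{paypal_fees:.2f}")
print(f"GoCardless fees: £{gocardless_fees:.2f}")
print(f"Saving:          £{paypal_fees - gocardless_fees:.2f}")
```

Even at modest volumes like these, a 3-percentage-point difference in fees adds up to a saving in the hundreds of pounds, which matches the club’s experience.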

Decide as late as possible

As software development is always associated with some uncertainty, better results should be achieved with an options-based approach, delaying decisions as much as possible until they can be made based on facts and not on uncertain assumptions and predictions.

Taking this approach has helped me avoid the common ‘analysis paralysis’ problem, whereby you try to solve too many problems at once, tie yourself in knots and deliver nothing! I deferred the implementation of several tricky features and was pleasantly surprised that the solutions turned out to be more straightforward than I’d originally anticipated.

For example, we wanted to add an online entry system for several club events; however, we needed to accept entries from club members, affiliated club members or guests. My initial thinking was to have an entry form for each type of entrant, with some reporting so the event organisers could see who had entered. This sounded like a big feature that would take weeks to implement. So I started by just adding a simple form for existing members to enter our Duathlon event (fortunately no guests were allowed for our Duathlon!). Later on I returned to the problem of adding guest entries, and suddenly it became obvious to me that guests could be members of the website too, just with a different role to limit their access. Then the same entry forms could be used for everyone and I wouldn’t have to add any new reporting. Deferring the decision on how to implement this feature saved me loads of time and resulted in a simpler system. Wins all round!

Stay tuned for part II…

The first three principles of Lean development have really helped me deliver on this project but there are still four more principles to go, so if you’ve found this interesting check back soon for part II in the series!


These are some notes from my reading on the Tin Can API, a specification for the transmission and storage of messages related to learning activities. The specification has been developed by parties in the e-learning industry and aims to be an elegant solution to enable learning systems to communicate with one another.

All this information is available on the Tin Can API website; this is just my own tl;dr-style summary.

What is it?

  • A RESTful web service specification
  • Defines the structure for JSON messages which represent learning activities (or other types of activity)
  • Each message is called a Statement
  • A statement consists of 3 parts: Actor, Verb and Object like “Mike read the Tin Can Explained article”
  • Based on the activity streams specification developed by/for social networks

Why is it good?

  • A more flexible version of the old SCORM standard
  • Device agnostic – anything that can send HTTP requests can use the API
  • Almost any type of content can be stored
  • Decouples content from the LMS (Learning management system) by storing in a separate LRS (Learning Record Store)
  • A single statement can be stored in multiple learning record stores
  • Allows the potential for learners to own their own content instead of their employer
  • Data is accessible and easy to report on

A statement example

{
    "actor": {
        "name": "Sally Glider",
            "mbox": "mailto:sally@example.com"
        },
    "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"}
     },
    "object": {
            "id": "http://example.com/activities/solo-hang-gliding",
            "definition": {
                "name": { "en-US": "Solo Hang Gliding" }
            }
    }
}

You can generate test statements with valid syntax using the Statement Generator.

Details of the valid message formats are given in the full xAPI specification.
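Building and sending a statement is straightforward from any language that can make HTTP requests. Here is a minimal Python sketch, assuming a hypothetical LRS URL and Basic-auth credentials; the `statements` resource and the `X-Experience-API-Version` header come from the xAPI specification, everything else is illustrative.

```python
import base64
import json
import urllib.request

def build_statement(actor_name, actor_mbox, verb_id, verb_display, object_id):
    """Assemble a minimal actor/verb/object xAPI statement."""
    return {
        "actor": {"name": actor_name, "mbox": actor_mbox},
        "verb": {"id": verb_id, "display": {"en-US": verb_display}},
        "object": {"id": object_id},
    }

def post_statement(lrs_url, username, password, statement):
    """POST a statement to an LRS 'statements' resource."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(
        lrs_url + "/statements",
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.0",
            "Authorization": "Basic " + credentials,
        },
    )
    return urllib.request.urlopen(request)

statement = build_statement(
    "Sally Glider", "mailto:sally@example.com",
    "http://adlnet.gov/expapi/verbs/experienced", "experienced",
    "http://example.com/activities/solo-hang-gliding",
)
print(json.dumps(statement, indent=2))
# post_statement("https://lrs.example.com/xapi", "user", "pass", statement)
```

The final call is commented out because it needs a real LRS endpoint; any LRS that implements the spec should accept a statement shaped like this.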

The registry

  • Contains definitions for verbs, activities, activity types, attachment types and extensions
  • Should be referenced in messages by URI
  • A shared domain language for agreed definitions of terms
  • Anyone can add their own definitions to the registry

Recipes

  • Recipes provide a standardised way for people to describe activities
  • Simplifies reporting as the same terms should be used to describe things
  • A few recipes exist now, for example the video recipe
  • More recipes can be added by the community
  • Helps keep the API flexible and useful for future applications that may not even exist yet!

Reading about the Tin Can API over the last few days, and seeing it in action in the Tessello application, has really whetted my appetite to work more with the specification. I can see great potential for systems that leverage this specification, as it provides a flexible framework for messaging without being restrictive and gives us the basis for a common technical language to enable our systems to talk to one another.


Azure web site hosting has a great built-in feature for deployment automation. All you need to do is point Azure at the location of your website in your source control platform and it will automatically monitor for updates, pull them into Azure and deploy them. Boom! Well, that is the theory anyway; it turns out in my case I needed to do some tweaking to get the automation to work.

Tinkering under the hood

The first step to set up the automated deployment involves going to your Azure website dashboard and selecting ‘Set up deployment from source control’. There are a bunch of options for which source control services are supported and how they can be set up; this is all pretty well documented in the Azure publishing from source control article, so I won’t rehash it here. Suffice to say I pointed the website at my GitHub repo and sorted out all the authentication, then Azure pulled through a fresh version to deploy.

Unfortunately, when I checked the shiny new ‘Deployments’ tab I found that the deployment had failed. After looking in the error log the reason was clear enough:

“Error: The site directory path should be the same as repository root or a sub-directory of it.”

My website was not in the root folder of the repository; it is in a ‘website’ folder, as you can see in the repo here. So I needed to tell whatever magic was running these deployments to check that folder instead for the code changes. After a bit of googling I found out that the deployments are driven by an application called Kudu, which has some documentation on its GitHub wiki. It turns out that it is pretty straightforward to modify the folder, as explained on the customising deployments wiki page: I just had to add a .deployment file to the repository root with these contents:

[config]
project = website

Simples! The deployment worked fine after adding that file… well, it did when I just tried it, but previously it didn’t seem to work. Either I made some stupid syntax error previously or Kudu has been fixed since I last tried!

A robot to build a robot

The actual website is running a different configuration based on a custom deployment script. While this is a little OTT just to change the folder path, going the extra mile paid dividends later on when I needed to make some other customisations during the deployment. It was pretty straightforward to set up, thanks to the azure-cli tool, which generates a deployment script for you based on a set of parameters. Instructions on how to do this are on the Kudu wiki deployment hooks page. In my case I just needed to run the following command from my repository root to generate a working .deployment and deploy.cmd file.

azure site deploymentscript --aspWebSite -r website

Once checked in, those files are used by Kudu to control the automated deployment process. Check back in Azure and the deployment should now be showing as successful. Awesome!


If you have read my previous post on Umbraco to Azure migration the topic of this post will be of no surprise.

I’ve inherited an application using a MySQL database which I’d like to host in Azure. However, the hosting support for a MySQL database in Azure is expensive, so I have investigated migrating the DB into a SQL Azure-supported format.

What about a VM?

That is a good question, and one I didn’t immediately consider. The Azure platform offers so many options that sometimes the most obvious can be missed. An easy way to keep the MySQL database without needing to fork out for the ClearDB service is to spin up an Azure VM, install MySQL on it and host everything from there.

This option offers a low-overhead entry to hosting the site in Azure, with a lower cost than ClearDB. However, I decided not to pursue it as it negates some of the advantages of cloud hosting. One of the attractive aspects of Azure is how streamlined you can make the deployment process: you can either download a publish profile and deploy directly from Visual Studio, or hook it directly into your source control and deploy on check-in. Also, I must admit all the shiny monitoring graphs for Azure websites are pretty cool and give you great visibility of how your hosting is performing. Granted, all these features could be achieved with a VM, but not nearly so easily. So, onwards with the database migration challenge!

Microsoft to the rescue! (nearly)

After a bit of research I decided to try using the SQL Server Migration Assistant to migrate the database. I was hoping this would automate all the tedious work and leave me to just press a few buttons, sit back and receive all the glory and admiration. Unfortunately, it wasn’t quite that simple, as there are certain data types that just don’t have a straight conversion between MySQL and SQL Server.

How many ways can you say Yes and No?

At last, a simple question! Surely we can all agree what ‘Yes’ and ‘No’ look like; after all, it’s the basis for all digital computing! Unfortunately, when it comes to technology nothing is quite that straightforward. In the world of MS SQL we have a bit data type for this, 1 for Yes, 0 for No, simples. However, the MySQL bit data type also takes a length parameter, so it is only equivalent to MS SQL if the length is set to 1.

To complicate things further, MySQL actually advise in their numeric types overview that a TINYINT(1) data type should be used to represent a boolean value. However, the actual values of this type can be anything from -128 to 127, pretty crazy huh! Unfortunately, the database I am trying to migrate chose the MySQL-recommended data type of TINYINT(1) and, quite understandably, that is not supported by SSMA (SQL Server Migration Assistant) as a straight migration to bit. My solution was to craft a ‘pre-migration’ script to manually convert all the MySQL booleans into a bit(1) data type, which could then be migrated by SSMA by adding a custom type mapping.
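My pre-migration script isn’t reproduced here, but the shape of it is easy to sketch. Assuming you have a list of the TINYINT(1) columns (in reality you’d pull this from MySQL’s information_schema; the table and column names below are hypothetical), generating the conversion statements is a one-liner per column:

```python
# Hypothetical (table, column) pairs holding TINYINT(1) booleans;
# the real list would come from information_schema.columns.
BOOLEAN_COLUMNS = [
    ("umbracouser", "userDisabled"),
    ("umbracouser", "userNoConsole"),
]

def bit_conversion_sql(columns):
    """Emit MySQL ALTER statements converting TINYINT(1) columns to bit(1)."""
    return [
        f"ALTER TABLE `{table}` MODIFY COLUMN `{column}` bit(1) NOT NULL;"
        for table, column in columns
    ]

for statement in bit_conversion_sql(BOOLEAN_COLUMNS):
    print(statement)
```

Run the generated ALTERs against the MySQL database first, and SSMA can then map bit(1) straight across with a custom type mapping.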

I also made a couple of other tweaks to the MySQL database before starting the conversion:

  • Added primary keys to the identifying columns on the tables cmsmember, cmsstylesheet, cmsstylesheetproperty and umbracouserlogins
  • Cleared out temporary data stored in the cmspreviewxml and umbracolog tables and ran OPTIMIZE TABLE on them to free up unused space

I was then ready to fire up the SSMA tool and migrate the database to Azure. I won’t go into details about this as there is already a decent guide for using SSMA.

The Devil is in the detail

After the migration there was a final synchronisation step to carry out: I needed to manually check the data types for each of the columns in the Azure DB and update any that were not correct. I found out what the types were meant to be by downloading the same Umbraco version I was working with and comparing the types. I expect there is a tool that could do this, but the database wasn’t particularly large so it didn’t take too long to carry out manually.
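If you did want to automate that comparison, a simple sketch is to represent each schema as a mapping from (table, column) to type and diff the two. The table and type values below are hypothetical examples, not the actual Umbraco schema:

```python
def schema_differences(expected, actual):
    """Compare {(table, column): type} mappings and report mismatches."""
    differences = []
    for key, expected_type in expected.items():
        actual_type = actual.get(key)
        if actual_type != expected_type:
            differences.append((key, expected_type, actual_type))
    return differences

# Hypothetical slices of the reference and migrated schemas
reference = {("cmstasktype", "id"): "tinyint", ("cmsmember", "Email"): "nvarchar(1000)"}
migrated  = {("cmstasktype", "id"): "int",     ("cmsmember", "Email"): "nvarchar(1000)"}

for (table, column), want, got in schema_differences(reference, migrated):
    print(f"{table}.{column}: expected {want}, got {got}")
```

Populating the two mappings is a query against each database’s information schema; the diff itself then flags exactly the kind of int-vs-tinyint mismatch described below.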

Most of the changes could be made directly to the Azure tables by just changing the data type in the management tool. However, a couple of columns proved more difficult: if they were being used as primary keys in Azure there was no way to change them in place, so instead I copied the data into a new table which had the correct schema, removed the old table and renamed the new table. Here is an example script for the cmstasktype table:

EXECUTE sp_rename N'[PK_cmstasktype_ID]', N'PK_cmstasktype_ID_old', 'OBJECT'
GO

CREATE TABLE [Tempcmstasktype](
    [ID] [tinyint] NOT NULL,
    [ALIAS] [nvarchar](255) NOT NULL,
    CONSTRAINT [PK_cmstasktype_ID] PRIMARY KEY CLUSTERED ([ID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO

INSERT INTO [Tempcmstasktype] ([ID], [ALIAS])
    SELECT [ID], [ALIAS] FROM [cmstasktype]
GO

DROP TABLE [cmstasktype]
GO

EXECUTE sp_rename N'Tempcmstasktype', N'cmstasktype', 'OBJECT'

After ensuring all the data types matched, I was able to fire up the Azure website and hey presto, everything functioned correctly! OK, I admit it wasn’t quite that smooth, as I’d missed changing one column from an int to a tinyint, which broke the whole CMS admin UI, but once I’d tracked that down everything worked fine. Hooray!

Copyright © 2015 - Mike Hook - Powered by Octopress