I’ve been a keen amateur triathlete and member of the Mid Sussex Tri Club for a couple of years. It’s a relatively small club, but there is a surprising amount of administration involved in running it, and after chatting to some of the club’s committee members it was clear that the admin overheads were becoming a burden. I could see the potential for software to lessen that burden, so I offered to take on the club’s website and add some features to it.

Fast forward a year and we have a greatly enhanced website with new features that have helped grow the club without taking up more of people’s time. Throughout these changes I’ve applied Lean software development principles, which have really helped me. So in this post I’m taking a look at the first three principles of Lean and how they have assisted on this project.

Eliminate waste

Lean philosophy regards everything not adding value to the customer as waste.

I had great motivation to ensure this principle was applied, as I’ve been making all the changes in my spare time. It’s amazing what you can achieve in a short amount of time with a bit of planning. For example, just last week I was able to add two new payment options to the site in an hour. The key techniques I’ve used to help eliminate waste are:

  • Make sure I understand what the key stakeholder wants to achieve from the feature before implementing it
  • Design directly in HTML – I’ve designed all the new user interfaces directly in HTML, so when it comes to implementing a feature I just need to enhance the existing HTML with the dynamic data elements
  • Use third parties to do the heavy lifting (i.e. don’t reinvent the wheel). For this project the key third-party systems used are Umbraco for content management, Bootstrap for styling and GoCardless for online direct debit payments

Amplify learning

Software development is a continuous learning process, with the added challenges of development team size and end-product complexity.

Working in a development team of one on this particular project obviously made knowledge sharing a non-issue, however there were opportunities to ensure learning was applied instead of doing things the old way. Integrating the GoCardless payment system presented a great opportunity for learning. The club were reluctant to extend the existing PayPal-based solution due to the fees involved, with PayPal charging nearly 4% per transaction. So I did some research and found that the GoCardless direct debit service charged just 1% per transaction.

We decided to trial the system and I was pleasantly surprised by how intuitive it was to integrate with. They have top-notch documentation of their API, and client libraries for a wide range of platforms including ours, .NET. I had a test payment process integrated with our website within a couple of hours. Their support was also excellent and helped clarify the few points I wasn’t clear on regarding setting up specific redirect URLs.

The system has been up and running for around 6 months now and the only issue we’ve had was with some intermittent error responses. Again their support team looked into the problem as soon as I raised it and implemented a fix. I’m glad that I took the time to learn how to integrate with GoCardless as it has saved our small club hundreds of pounds in fees already!

Decide as late as possible

As software development is always associated with some uncertainty, better results should be achieved with an options-based approach, delaying decisions as much as possible until they can be made based on facts and not on uncertain assumptions and predictions.

Taking this approach has helped me avoid the common ‘analysis paralysis’ problem, whereby you try to solve too many problems at once, tie yourself in knots and deliver nothing! I deferred the implementation of several tricky features and was pleasantly surprised that the solutions turned out to be more straightforward than I’d originally anticipated.

For example, we wanted to add an online entry system for several club events, however we needed to accept entries from club members, affiliated club members or guests. My initial thinking was to have an entry form for each type of entrant, with some reporting so the event organisers could see who had entered. This sounded like a big feature that would take weeks to implement. So I started by just adding a simple form for existing members to enter our Duathlon event (fortunately no guests were allowed for our Duathlon!). Later on I returned to the problem of adding guest entries and it suddenly became obvious that guests could be members of the website as well, just with a different role to limit their access. Then the same entry forms could be used for everyone and I wouldn’t have to add any new reporting. Deferring the decision on how to implement this feature saved me loads of time and resulted in a simpler system, wins all round!

Stay tuned for part II…

The first three principles of Lean development have really helped me deliver on this project but there are still four more principles to go, so if you’ve found this interesting check back soon for part II in the series!

These are some notes from my reading on the Tin Can API, a specification for the transmission and storage of messages related to learning activities. The specification has been developed by parties in the e-learning industry and aims to provide an elegant way for learning systems to communicate with one another.

All this information is available on the Tin Can API website; this is just my own tl;dr-style summary.

What is it?

  • A RESTful web service specification
  • Defines the structure for JSON messages which represent learning activities (or other types of activity)
  • Each message is called a Statement
  • A statement consists of three parts: Actor, Verb and Object, e.g. “Mike read the Tin Can Explained article”
  • Based on the Activity Streams specification developed by/for social networks

Why is it good?

  • More flexible version of the old SCORM standard
  • Device agnostic – anything that can send HTTP requests can use the API
  • Almost any type of content can be stored
  • Decouples content from the LMS (Learning management system) by storing in a separate LRS (Learning Record Store)
  • A single statement can be stored in multiple learning record stores
  • Allows the potential for learners to own their own content instead of their employers
  • Data is accessible and easy to report on

A statement example

{
    "actor": {
        "name": "Sally Glider",
        "mbox": "mailto:sally@example.com"
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": { "en-US": "experienced" }
    },
    "object": {
        "id": "http://example.com/activities/solo-hang-gliding",
        "definition": {
            "name": { "en-US": "Solo Hang Gliding" }
        }
    }
}

You can generate test statements with valid syntax using the Statement Generator.

Details of the valid message formats are given in the full xAPI specification.

The registry

  • Contains definitions for verbs, activities, activity types, attachment types and extensions
  • Should be referenced in messages by URIs
  • A shared domain language for agreed definitions of terms
  • Anyone can add their own definitions to the registry

Recipes

  • Recipes provide a standardised way for people to describe activities
  • Simplifies reporting as the same terms should be used to describe things
  • A few recipes exist now, for example the video recipe
  • More recipes can be added by the community
  • Helps keep the API flexible and useful for future applications that may not even exist yet!

Reading about the Tin Can API over the last few days, and seeing it in action in the Tessello application, has really whetted my appetite to work more with the specification. I can see great potential for systems that leverage it, as it provides a flexible framework for messaging without being restrictive and gives us the basis for a common technical language our systems can use to talk to one another.

Azure website hosting has a great built-in feature for deployment automation. All you need to do is point Azure at the location of your website in your source control platform and it will automatically monitor for updates, pull them in and deploy them. Boom! Well, that’s the theory anyway; it turns out in my case I needed to do some tweaking to get the automation to work.

Tinkering under the hood

The first step in setting up the automated deployment is to go to your Azure website dashboard and select ‘Set up deployment from source control’. There are a bunch of options for which source control services are supported and how they can be set up; this is all pretty well documented in the Azure publishing from source control article so I won’t rehash it here. Suffice it to say I pointed the website at my GitHub repo and sorted out all the authentication, then Azure pulled through a fresh version to deploy.

Unfortunately, when I checked the shiny new ‘Deployments’ tab I found that the deployment had failed. After looking in the error log the reason was clear enough:

“Error: The site directory path should be the same as repository root or a sub-directory of it.”

My website was not in the root folder of the repository; it is in a ‘website’ folder, as you can see in the repo here. So I needed to tell whatever magic was running these deployments to check in that folder instead for the code changes. After a bit of googling I found out that the deployments are driven by an application called kudu, which has some documentation on its GitHub wiki. It turns out that it is pretty straightforward to modify the folder: as explained on the customising deployments wiki page, I just had to add a .deployment file to the repository root with these contents:

[config]
project = website

Simples – the deployment worked fine after adding that file… well, it did when I just tried it, but previously it didn’t seem to work. Either I made some stupid syntax error the first time or kudu has been fixed since I last tried!

A robot to build a robot

The actual website is running a different configuration based on a custom deployment script. While this is a little OTT just to change the folder path, going the extra mile paid dividends later on when I needed to make some other customisations during the deployment. It was pretty straightforward to set up thanks to the azure-cli tool, which generates a deployment script for you based on a set of parameters. Instructions on how to do this are on the kudu wiki deployment hooks page. In my case I just needed to run the following command from my repository root to generate a working .deployment and deploy.cmd file.

azure site deploymentscript --aspWebSite -r website

Once checked in, those files are used by kudu to control the automated deployment process. Check back in Azure and the deployment should now be showing as successful – awesome!

If you have read my previous post on Umbraco to Azure migration, the topic of this post will come as no surprise.

I’ve inherited an application using a MySQL database which I’d like to host in Azure. However, hosting support for a MySQL database in Azure is expensive, so I have investigated migrating the DB into a format supported by SQL Azure.

What about a VM?

That is a good question and one I didn’t immediately consider. The Azure platform offers so many options that sometimes the most obvious can be missed. An easy way to keep the MySQL database without needing to fork out for the ClearDB service is to just spin up an Azure VM, install MySQL on the VM and host everything from there.

This option offers a low-overhead entry to hosting the site in Azure, at a lower cost than ClearDB. However, I decided not to pursue it as it negates some of the advantages of cloud hosting. One of the attractive aspects of Azure is how streamlined you can make the deployment process: you can either download a publish profile and deploy directly from Visual Studio, or hook it directly into your source control and deploy on check-in. Also, I must admit all the shiny monitoring graphs for Azure websites are pretty cool and give you great visibility of how your hosting is performing. Granted, all these features could be achieved with a VM, but not nearly so easily. So, onwards with the database migration challenge!

Microsoft to the rescue! (nearly)

After a bit of research I decided to try using the SQL Server Migration Assistant to migrate the database. I was hoping this would automate all the tedious work and leave me just to press a few buttons, sit back and receive all the glory and admiration. Unfortunately it wasn’t quite that simple as there are certain data types that just don’t have a straight conversion between MySQL and SQL Server.

How many ways can you say Yes and No?

At last, a simple question – surely we can all agree what ‘Yes’ and ‘No’ look like, after all it’s the basis for all digital computing! Unfortunately, when it comes to technology nothing is quite that straightforward. In the world of MS SQL we have the bit data type for this purpose: 1 for Yes, 0 for No, simples. However, the MySQL bit data type also takes a length parameter, so it is only equivalent to MS SQL if the length is set to 1.

To complicate things further, MySQL actually advise in their numeric types overview that a TINYINT(1) data type should be used to represent a boolean value. However, the actual values of this type can be anything from -128 to 127 – pretty crazy huh! Unfortunately the database I am trying to migrate uses the MySQL-recommended data type of TINYINT(1) and, quite understandably, that is not supported by SSMA (SQL Server Migration Assistant) as a straight migration to bit. My solution was to craft a ‘pre-migration’ script to manually convert all the MySQL booleans into a BIT(1) data type, which could then be migrated by SSMA by adding a custom type mapping.
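
For illustration, here is a minimal sketch of the kind of statement the pre-migration script repeated for each boolean column (the table and column names below are just placeholders, not ones from the real database):

-- Normalise any non-zero values, then convert the TINYINT(1) column to BIT(1)
-- (placeholder table/column names for illustration only)
UPDATE member_flags SET is_active = 1 WHERE is_active <> 0;
ALTER TABLE member_flags MODIFY COLUMN is_active BIT(1) NOT NULL;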

I also made a couple of other tweaks to the MySQL database before starting the conversion:

  • Added primary keys to the identifying columns on the tables cmsmember, cmsstylesheet, cmsstylesheetproperty and umbracouserlogins
  • Cleared out the temporary data stored in the cmspreviewxml and umbracolog tables and ran OPTIMIZE TABLE on them to free up unused space (roughly as sketched below)
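
In MySQL terms those tweaks looked something like this – note the key column names are assumptions on my part, so check your own schema before running anything similar:

-- Add the missing primary keys (the identifying column names are assumed here)
ALTER TABLE cmsmember ADD PRIMARY KEY (nodeId);
ALTER TABLE umbracouserlogins ADD PRIMARY KEY (contextID);

-- Clear out the temporary data and reclaim the unused space
TRUNCATE TABLE cmspreviewxml;
TRUNCATE TABLE umbracolog;
OPTIMIZE TABLE cmspreviewxml, umbracolog;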

I was then ready to fire up the SSMA tool and migrate the database to Azure. I won’t go into details about this as there is already a decent guide for using SSMA.

The Devil is in the detail

After the migration there was a final synchronisation step to carry out. I needed to manually check the data types for each of the columns in the Azure DB and update any that were not correct. I found out what the types were meant to be by downloading the same Umbraco version I was working with and comparing the types. I expect there is a tool that can do this, but the database wasn’t particularly large so it didn’t take too long to carry out manually.
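
A simple way to make that comparison (just a sketch of the approach, rather than a dedicated tool) is to query the column metadata on both databases and diff the results:

-- List every column and its data type so the migrated DB can be
-- compared against a fresh Umbraco install
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;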

Most of the changes could be made directly to the Azure tables by just changing the data type in the management tool. However, a couple of columns proved more difficult: if they were being used as primary keys in Azure there was no way to change them in place, so instead I copied the data into a new table with the correct schema, dropped the old table and renamed the new one. Here is an example script for the cmstasktype table:

EXECUTE sp_rename N'PK_cmstasktype_ID', N'PK_cmstasktype_ID_old', 'OBJECT'
GO

CREATE TABLE [Tempcmstasktype](
    [ID] [tinyint] NOT NULL,
    [ALIAS] [nvarchar](255) NOT NULL,
    CONSTRAINT [PK_cmstasktype_ID] PRIMARY KEY CLUSTERED ([ID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO

INSERT INTO [Tempcmstasktype] ([ID], [ALIAS])
    SELECT [ID], [ALIAS] FROM [cmstasktype]
GO

DROP TABLE [cmstasktype]
GO

EXECUTE sp_rename N'Tempcmstasktype', N'cmstasktype', 'OBJECT'

After ensuring all the data types matched I was able to fire up the Azure website and, hey presto, everything functioned correctly! OK, I admit it wasn’t quite that smooth: I’d missed changing one column from an int to a tinyint, which broke the whole CMS admin UI, but once I’d tracked that down everything worked fine, hooray!

Today I am investigating cloud hosting platforms for a greenfield web application. There are quite a few platforms out there now, but I’ll be looking at some of the big hitters as they are tried and tested in the marketplace.

The full technology stack is still under review, so the hosting capabilities need to be fairly flexible; however, the requirements we know for sure are:

  • Able to spin up multiple instances to run testing, staging and production environments
  • Low cost, ideally zero for testing / staging as this is a grass-roots project!
  • Flexibility to host a range of technologies
  • High reliability / speed (this should be a given for any hosting environment)

Contenders ready!

I’m going to investigate three platforms against the main requirements above: Windows Azure, Amazon Web Services and Heroku. Each has its advantages, so let’s find out which will be the best fit for our project.

Azure

Azure is the Microsoft Cloud offering. As such it has a Microsoft technologies leaning but is by no means limited to their stack.

OS / Languages / DBs

  • Windows or Linux
  • .NET, Node.js, Java, PHP, Python, Ruby
  • Native DBs: Azure SQL Server
  • Third party DBs: Neo4j, MySQL, MongoDB (fiddly)

Pricing

  • 30-day free trial, then:
  • $10/month per site
  • $2.50/month per DB

AWS

Amazon Web Services was one of the first cloud-based hosting solutions out there. It is a mature platform with many options, but let’s see whether the acronyms compare favourably.

OS / Languages / DBs

  • Windows or Linux
  • .NET, Java, PHP, Python, Ruby
  • Native DBs: SQL Server, MySQL, Oracle, PostgreSQL
  • Third party DBs: Neo4j, MongoDB, RavenDB (basically anything you can run on a VM)

Pricing

  • 12 months free for 1 instance / DB
  • $15/month per site
  • $40/month per DB

Heroku

Heroku is more of a grass-roots, developer-led cloud platform. It is well suited to an open-source, license-free stack, but will this be suitable for our application?

OS / Languages / DBs

  • Linux
  • Node.js, Java, Python, Ruby
  • Native DBs: Postgres
  • Third party DBs: Neo4j, MySQL

Pricing

  • 1 dyno free, $35/month per extra instance
  • Dev DBs free (Up to 10K rows), Basic $9/month

And the winner is…

The technology stack hasn’t been chosen yet, so we will just have to wait and see who the winner is!
