
These are some notes from my reading on the Tin Can API, a specification for the transmission and storage of messages related to learning activities. The specification has been developed by parties in the e-learning industry and aims to be an elegant solution to enable learning systems to communicate with one another.

All of this information is available on the Tin Can API website; this is just my own tl;dr-style summary.

What is it?

  • A RESTful web service specification
  • Defines the structure of JSON messages that represent learning activities (or other types of activity)
  • Each message is called a Statement
  • A statement consists of three parts: an Actor, a Verb and an Object, e.g. “Mike read the Tin Can Explained article”
  • Based on the Activity Streams specification developed by/for social networks

Why is it good?

  • A more flexible successor to the old SCORM standard
  • Device agnostic – anything that can send HTTP requests can use the API
  • Almost any type of content can be stored
  • Decouples content from the LMS (Learning management system) by storing in a separate LRS (Learning Record Store)
  • A single statement can be stored in multiple learning record stores
  • Allows for the potential of learners owning their own learning content rather than their employers
  • Data is accessible and easy to report on

A statement example

{
    "actor": {
        "name": "Sally Glider",
        "mbox": "mailto:sally@example.com"
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": { "en-US": "experienced" }
    },
    "object": {
        "id": "http://example.com/activities/solo-hang-gliding",
        "definition": {
            "name": { "en-US": "Solo Hang Gliding" }
        }
    }
}

You can generate test statements with valid syntax using the Statement Generator.

Details of the valid message formats are given in the full xAPI specification.

The registry

  • Contains definitions for verbs, activities, activity types, attachment types and extensions
  • Definitions should be referenced in messages by their URIs
  • A shared domain language for agreed definitions of terms
  • Anyone can add their own definitions to the registry

Recipes

  • Recipes provide a standardised way for people to describe activities
  • Simplifies reporting as the same terms should be used to describe things
  • A few recipes exist now, for example the video recipe
  • More recipes can be added by the community
  • Helps keep the API flexible and useful for future applications that may not even exist yet!

Reading about the Tin Can API over the last few days, and seeing it in action in the Tessello application, has really whetted my appetite to work more with the specification. I can see great potential for systems that leverage it: it provides a flexible messaging framework without being restrictive, and it gives us the basis for a common technical language our systems can use to talk to one another.


Azure website hosting has a great built-in feature for deployment automation. All you need to do is point Azure at the location of your website in your source control platform and it will automatically monitor for updates, pull them into Azure and deploy them. Boom! Well, that is the theory anyway; it turns out that in my case I needed to do some tweaking to get the automation working.

Tinkering under the hood

The first step in setting up automated deployment is to go to your Azure website dashboard and select ‘Set up deployment from source control’. There are a bunch of options for which source control services are supported and how they can be set up; this is all pretty well documented in the Azure publishing from source control article, so I won’t rehash it here. Suffice to say I pointed the website at my GitHub repo, sorted out all the authentication, and Azure pulled through a fresh version to deploy.

Unfortunately, when I checked the shiny new ‘Deployments’ tab I found that the deployment had failed. After looking in the error log the reason was clear enough:

“Error: The site directory path should be the same as repository root or a sub-directory of it.”

My website was not in the root folder of the repository; it lives in a ‘website’ folder, as you can see in the repo here. So I needed to tell whatever magic was running these deployments to look in that folder for the code changes instead. After a bit of googling I found out that the deployments are driven by an application called Kudu, which has some documentation on its GitHub wiki. It turns out that it is pretty straightforward to change the folder: as explained on the customising deployments wiki page, I just had to add a .deployment file to the repository root with these contents:

[config]
project = website

Simples: the deployment worked fine after adding that file… well, it did when I just tried it, but previously it didn’t seem to work. Either I made some stupid syntax error the first time around or Kudu has been fixed since I last tried!

A robot to build a robot

The actual website runs a different configuration based on a custom deployment script. While this is a little OTT just to change the folder path, going the extra mile paid dividends later on when I needed to make some other customisations during deployment. It was pretty straightforward to set up thanks to the azure-cli tool, which generates a deployment script for you based on a set of parameters. Instructions on how to do this are on the Kudu wiki deployment hooks page. In my case I just needed to run the following command from my repository root to generate a working .deployment and deploy.cmd file:

azure site deploymentscript --aspWebSite -r website

Once checked in, those files are used by Kudu to control the automated deployment process. Check back in Azure and the deployment should now show as successful. Awesome!


If you have read my previous post on Umbraco to Azure migration, the topic of this post will come as no surprise.

I’ve inherited an application using a MySQL database which I’d like to host in Azure. However, hosting support for a MySQL database in Azure is expensive, so I have investigated migrating the DB into a format supported by SQL Azure.

What about a VM?

That is a good question and one I didn’t immediately consider. The Azure platform offers so many options that sometimes the most obvious can be missed. An easy way to keep the MySQL database without needing to fork out for the ClearDB service is to just spin up an Azure VM, install MySQL on the VM and host everything from there.

This option offers a low-overhead way into hosting the site in Azure, at a lower cost than ClearDB. However, I decided not to pursue it as it negates some of the advantages of cloud hosting. One of the attractive aspects of Azure is how streamlined you can make the deployment process: you can either download a publish profile and deploy directly from Visual Studio, or hook the site into your source control and deploy on check-in. Also, I must admit, all the shiny monitoring graphs for Azure websites are pretty cool and give you great visibility of how your hosting is performing. Granted, all these features could be achieved with a VM, but not nearly as easily. So, onwards with the database migration challenge!

Microsoft to the rescue! (nearly)

After a bit of research I decided to try using the SQL Server Migration Assistant to migrate the database. I was hoping this would automate all the tedious work and leave me to just press a few buttons, sit back and receive all the glory and admiration. Unfortunately it wasn’t quite that simple, as there are certain data types that just don’t have a straight conversion between MySQL and SQL Server.

How many ways can you say Yes and No?

At last, a simple question. Surely we can all agree on what ‘Yes’ and ‘No’ look like; after all, it’s the basis of all digital computing! Unfortunately, when it comes to technology nothing is quite that straightforward. In the world of MS SQL we have the bit data type for this job: 1 for Yes, 0 for No, simples. However, the MySQL bit data type also takes a length parameter, so it is only equivalent to the MS SQL type if the length is set to 1.

To complicate things further, MySQL actually advises in its numeric types overview that a TINYINT(1) data type should be used to represent a boolean value. However, the actual values of this type can be anything from -128 to 127. Pretty crazy, huh! Unfortunately the database I am trying to migrate uses the MySQL-recommended data type of TINYINT(1) and, quite understandably, that is not supported by SSMA (SQL Server Migration Assistant) as a straight migration to bit. My solution was to craft a ‘pre-migration’ script to manually convert all the MySQL booleans to the bit(1) data type, which SSMA could then migrate once I added a custom type mapping.
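
As a rough illustration (this is not the exact script I used, and the table and column names here are invented), the pre-migration step boiled down to a pair of statements like this for each boolean column:

-- Hypothetical example: convert a MySQL TINYINT(1) 'boolean' column to BIT(1)
-- so that SSMA can map it straight onto the SQL Server bit type.
-- Normalise any stray values first, since TINYINT(1) can hold anything from -128 to 127.
UPDATE cmsexampletable SET isPublished = 1 WHERE isPublished <> 0;

-- Then change the column type in place.
ALTER TABLE cmsexampletable MODIFY COLUMN isPublished BIT(1);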

I also made a couple of other tweaks to the MySQL database before starting the conversion:

  • Added primary keys to the identifying columns on the cmsmember, cmsstylesheet, cmsstylesheetproperty and umbracouserlogins tables
  • Cleared out the temporary data stored in the cmspreviewxml and umbracolog tables and ran OPTIMIZE on them to free up the unused space (a rough sketch of both tweaks follows below)
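
Something along these lines (a sketch only; I’m assuming nodeId is the identifying column on cmsmember, and the other tables follow the same pattern):

-- Assumed example: give each table a primary key so SSMA can migrate it cleanly
-- (repeat for cmsstylesheet, cmsstylesheetproperty and umbracouserlogins).
ALTER TABLE cmsmember ADD PRIMARY KEY (nodeId);

-- Throw away the temporary data and reclaim the space it was using.
TRUNCATE TABLE cmspreviewxml;
TRUNCATE TABLE umbracolog;
OPTIMIZE TABLE cmspreviewxml, umbracolog;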

I was then ready to fire up the SSMA tool and migrate the database to Azure. I won’t go into details about this as there is already a decent guide for using SSMA.

The Devil is in the detail

After the migration there was a final synchronisation step to carry out. I needed to manually check the data types of each of the columns in the Azure DB and update any that were not correct. I found out what the types were meant to be by downloading the same Umbraco version I was working with and comparing the types. I expect there is a tool that could do this, but the database wasn’t particularly large so it didn’t take too long to carry out manually.
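
For the comparison itself, a query along these lines (just a sketch; it is standard INFORMATION_SCHEMA, so it runs against both SQL Server and the Azure DB) lists every column with its type so the two schemas can be eyeballed side by side:

-- List every column with its data type, size and nullability,
-- ordered so the output from the two databases lines up for comparison.
SELECT TABLE_NAME,
       COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,
       IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, COLUMN_NAME;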

Most of the changes could be made directly to the Azure tables by just changing the data type in the management tool. However, a couple of columns proved more difficult: if they were being used as primary keys in Azure there was no way to change them in place, so instead I copied the data into a new table with the correct schema, dropped the old table and renamed the new one. Here is an example script for the cmstasktype table:

-- Free up the original constraint name so the replacement table can reuse it.
EXECUTE sp_rename N'PK_cmstasktype_ID', N'PK_cmstasktype_ID_old', 'OBJECT'
GO

-- Create a replacement table with the correct column types.
CREATE TABLE [Tempcmstasktype](
    [ID] [tinyint] NOT NULL,
    [ALIAS] [nvarchar](255) NOT NULL,
    CONSTRAINT [PK_cmstasktype_ID] PRIMARY KEY CLUSTERED ([ID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO

-- Copy the data across, then swap the tables over.
INSERT INTO [Tempcmstasktype] ([ID], [ALIAS])
    SELECT [ID], [ALIAS] FROM [cmstasktype]
GO

DROP TABLE [cmstasktype]
GO

EXECUTE sp_rename N'Tempcmstasktype', N'cmstasktype', 'OBJECT'
GO

After ensuring all the data types matched I was able to fire up the Azure website and, hey presto, everything functioned correctly! OK, I admit it wasn’t quite that smooth: I’d missed changing one column from an int to a tinyint, which broke the whole CMS admin UI, but once I’d tracked that down everything worked fine. Hooray!


Today I am investigating cloud hosting platforms for a greenfield web application. There are quite a few platforms out there now; however, I’ll be looking at some of the big hitters, as they are tried and tested in the marketplace.

The full technology stack is still under review, so the hosting capabilities need to be fairly flexible. However, the requirements we know for sure are:

  • Able to spin up multiple instances to run testing, staging and production environments
  • Low cost, ideally zero for testing / staging, as this is a grass-roots project!
  • Flexibility to host a range of technologies
  • High reliability / speed (this should be a given for any hosting environment)

Contenders ready!

I’m going to investigate three platforms against the main requirements above: Windows Azure, Amazon Web Services and Heroku. Each has its advantages, so let’s find out which will be the best fit for our project.

Azure

Azure is Microsoft’s cloud offering. As such it leans towards Microsoft technologies, but is by no means limited to their stack.

OS / Languages / DBs

  • Windows or Linux
  • .NET, Node.js, Java, PHP, Python, Ruby
  • Native DBs: Azure SQL Server
  • Third party DBs: Neo4j, MySQL, MongoDB (fiddly)

Pricing

  • 30 day free trial then,
  • $10/month per site
  • $2.50/month per DB

AWS

Amazon Web Services was one of the first cloud-based hosting solutions out there. It is a mature platform with many options, but let’s see whether the acronyms compare favourably.

OS / Languages / DBs

  • Windows or Linux
  • .NET, Java, PHP, Python, Ruby
  • Native DBs: SQL Server, MySQL, Oracle, PostgreSQL
  • Third party DBs: Neo4j, MongoDB, RavenDB (basically anything you can run on a VM)

Pricing

  • 12 months free for 1 instance / DB
  • $15/month per site
  • $40/month per DB

Heroku

Heroku is more of a grass-roots, developer-led cloud platform. It is well suited to an open source, licence-free stack, but will it be suitable for our application?

OS / Languages / DBs

  • Linux only
  • Node.js, Java, Python, Ruby
  • Native DBs: Postgres
  • Third party DBs: Neo4j, MySQL

Pricing

  • 1 dyno free, $35/month per extra instance
  • Dev DBs free (Up to 10K rows), Basic $9/month

And the winner is…

The technology stack hasn’t been chosen yet, so we will just have to wait and see who the winner is!


Why Azure

I recently inherited an Umbraco version 4 site and needed to set up web hosting for it. I decided that Azure would be a good option for hosting as it offers great benefits for future scalability and a good costing model. I also wanted to try it out and see what all the fuss is about!

There were a few steps to carry out to get the site up and running in Azure, so I’ve documented my experience here.

First steps

  • Obviously you need to sign up for a Windows Azure account. It’s free to try, but you do need a credit card. I guess this makes it nice and seamless for Microsoft to start charging you if you opt to continue past the one-month trial!
  • Now log in to the portal and go to Websites –> New –> Custom Create –> enter the details, selecting a MySQL database from the drop-down

Migrating the database

  • Click on the new website and go to the ‘linked resources’ tab. The MySQL database should be listed here
  • Click the name to open ClearDB (the third-party provider of MySQL database support for Windows Azure)
  • Set up MySQL Workbench to connect to ClearDB using an SSL connection. You may also need to install OpenSSL to enable generation of the RSA key.
  • Reduce the database size to fit within the 20MB limit by removing old document versions and preview / log data (see the sketch after this list for the sort of clean-up involved)
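
The clean-up queries were along these lines (a rough sketch only; the table and column names come from an Umbraco 4 schema and may differ for you, so take a backup before running anything like this):

-- See how big the database currently is (sizes reported in MB).
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;

-- Throw away cached preview XML and old log entries; both can be regenerated.
TRUNCATE TABLE cmspreviewxml;
DELETE FROM umbracolog WHERE Datestamp < DATE_SUB(NOW(), INTERVAL 30 DAY);

-- Reclaim the freed space so the database actually shrinks.
OPTIMIZE TABLE cmspreviewxml, umbracolog;

-- (Trimming old document versions touches several related cms* tables,
--  so that part is better handled with a dedicated clean-up script.)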

Uploading the web site

  • Download the publish profile file from your Azure account. It is available on the dashboard page of your website, under the ‘quick glance’ menu
  • Open the site in WebMatrix, press Publish and browse to select the publish profile file you just downloaded.
  • WebMatrix will now run a diff on the files and prompt you to upload any changes to Azure.
  • In Azure, go to the site’s configure tab and set the app settings for ‘umbracoDbDSN’ and any other connection string values to the ClearDB database connection string. If you are not sure what the connection string is, you can get it from the ‘quick glance’ menu.

I like this ability to set your production connection strings / app settings in Azure. It will help me open source the code without storing these sensitive settings in the config files themselves.

Hey presto, your site should be up and running!

So, what’s the verdict?

Is Azure as nice a hue as promised, or is it murkier in practice? Well, from my experience it’s all worked pretty much as expected so far. It’s clear that Microsoft has invested some serious resources into the platform and, unlike some of their other products (ahem, Windows 8), the usability is excellent.

I think this is a decent workflow for initially getting a site up and running. However, in the long term I would want to hook up source control for the site so that it publishes when merging to a branch. There is an option in Azure to ‘Set up deployment from source control’, so I will be investigating how that works next.

The main drawback I currently see with this setup is that the database has a limit on the number of connections. The free version has 4, which is fair enough given it is just for trialling. However, even the paid versions have low limits of 15, 30 or 40 connections for $10, $50 or $100 respectively. My feeling is that these limits may well cause problems when the site is running in production. I already hit the 4-connection limit just by browsing the site myself and publishing a new page, and it is bound to use more when real users are browsing at the same time.
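
If you want to check what cap your ClearDB plan actually enforces, and how close you are to it, I believe you can query it from the MySQL side; a quick sketch:

-- The per-user connection cap enforced on the server.
SHOW VARIABLES LIKE 'max_user_connections';

-- The connections currently open for your user.
SHOW PROCESSLIST;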

The SQL databases offered in Azure do not have this connection limit and would also be significantly cheaper in the long run, so I will be investigating the feasibility of migrating the Umbraco DB from MySQL to SQL Server.
