Friday, 4 April 2014

Part 3 #xAPI and #LRS #learningAnalytics #eln

Part 3 of the seminar: Ben Betts from HT2 on the Learning Record Store and #xAPI. Part 1, with general links and frameworks, can be seen here; part 2, focusing more on tools, can be found here.

xAPI (the official name) and Tin Can are two names for the same thing, with slightly different ownership.

Ben looks at the anatomy of a statement (showing it IRL).

Learning Record Store: what it is, what it can do
The format is a triplet of data: actor – verb (in English) – object (what it is you are doing: name, activity, …)
JSON is the language: lighter-weight than XML, with each part of the statement identified.
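As a sketch, a minimal statement of that actor – verb – object shape might look like this (the name, mailbox and activity id are invented placeholders; the field layout follows the xAPI statement format):

```python
import json

# A minimal xAPI statement: actor - verb - object.
# Name, mailbox and activity id are invented placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        # Verbs are identified by an IRI plus a human-readable display name.
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/xapi-seminar",
        "definition": {"name": {"en-US": "xAPI seminar, part 3"}},
    },
}

print(json.dumps(statement, indent=2))
```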

Important stuff:
 Each verb used needs to be VERY WELL DEFINED; at this point in time you need to be VERY standardized. E.g. 'enter': this verb can cover multiple meanings, so definition is key to making the LRS meaningful.
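A tiny sketch of what that disambiguation could look like in practice: the same English word "enter" is split into two registered verbs with distinct IRIs (the IRIs below are invented examples, not official ADL verbs):

```python
# Registry of well-defined verbs: one IRI per meaning, never per word.
VERB_REGISTRY = {
    "enter-location": {
        "id": "http://example.com/verbs/enter-location",
        "display": {"en-US": "entered (a location)"},
    },
    "enter-data": {
        "id": "http://example.com/verbs/enter-data",
        "display": {"en-US": "entered (data into a form)"},
    },
}

def verb(key):
    """Look up a well-defined verb; refuse anything unregistered."""
    if key not in VERB_REGISTRY:
        raise ValueError(f"unregistered verb: {key}")
    return VERB_REGISTRY[key]
```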

Example using the Curator platform: every time I interact with the platform, an xAPI statement is tracked.

He also shows an example of xAPI integration in Google Chrome: a click in that Chrome widget is immediately fed back as an xAPI statement.
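Feeding a statement back like that is, under the hood, a plain HTTP POST to the store's /statements resource. A sketch using only the standard library; the endpoint and credentials are placeholders, and the version header follows the 1.0.x spec current at the time:

```python
import base64
import json
import urllib.request

def build_statement_request(endpoint, username, password, statement):
    """Build an HTTP request that POSTs one xAPI statement to an LRS.

    endpoint/username/password are placeholders for whatever LRS
    (e.g. a Learning Locker install) you are pointing at.
    """
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url=endpoint.rstrip("/") + "/statements",
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.1",
            "Authorization": "Basic " + auth,
        },
        method="POST",
    )
```

Passing the result to `urllib.request.urlopen` would actually send it; building the request separately keeps the sketch testable without a live store.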

More on LRS
The LRS is potentially a key piece of infrastructure, and an LRS is a STANDARD
I was bemoaning how much social learning, e.g. MOOC learning, is done by the learner yet lost both to the learner and to the learning system.
Learners should own their learning data: the JSON, the raw data. That enables a learning journey, and this is what triggered Ben's interest in xAPI.
Data gathering must be such that meaning can be taken out by other systems.

Learning Locker
Ben shows his social learning linked to xAPI statements (GREAT!!!!)
Validation of actions is made much easier when observation is put into the equation: if someone else has seen you doing an activity and endorses it, you can believe the action actually took place.

But when Ben looked at the first data, he saw that in many cases the data was dull.
So a couple of experiments were done: one was on giving the learners some data to build upon (like a customer card that comes pre-filled with 3 stamps => more motivating!).
Combining all learning in the LRS: quantified self, LMS, surfing, mobile apps…
It is not difficult to come up with the idea of an LRS, but building one is not as easy as thinking of it.

Challenges of the ecosystem
  • Data carrying challenge: it must be awesomely standardized, otherwise analytic tools will not work (coding/decoding challenge)
  • But some systems you want to keep (e.g. Oracle, statistics tools)
  • Ben uses it to power Open Badges from Mozilla (great for informal MOOC learning); xAPI provides an underlying layer for a badge: if you do this, this and this… you get a badge.
  • And a key attribute of an LRS is that you MUST be able to move your data into another LRS (think SCORM). So be critical when buying an LRS.
  • Because we know so little at the moment, a lot of the data will be unusable anyway due to all the coming changes.
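The "if you do this, this and this… you get a badge" idea from the list above can be sketched as a rule checked against statements pulled from the LRS. The badge name and verb IRIs here are invented, and fetching from the store is left out:

```python
# Each badge lists the verb IRIs a learner must have statements for.
# Badge name and IRIs are invented placeholders.
BADGE_CRITERIA = {
    "curator-contributor": {
        "http://example.com/verbs/shared",
        "http://example.com/verbs/commented",
        "http://example.com/verbs/rated",
    },
}

def earned_badge(badge, statements):
    """True once the learner's statements cover every required verb."""
    seen = {s["verb"]["id"] for s in statements}
    return BADGE_CRITERIA[badge] <= seen
```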


What they are working on at the moment
  • Blending learning platforms – MOOCs (cross-platform) (Ask Ben whether he has been contacted by FutureLearn – answer: no known plans)
  • Customising eBooks (is done in the OU)
  • Effective Performance Support – improving learning design
  • Issuing Open Badges
  • Predictive Analysis – are you going to fail? (mentioning OU)
  • Personal Learning Records
  • Personalising Learning Experiences

Ben shows admin dashboard
Because of the wide variety of data, there are a lot of xAPI statements (e.g. used for training engineers: comparing statements from starting engineer learners with statements from expert engineers – 60,000 statements per day)

Reports can be pulled from the LRS
These can be exported to Excel
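A sketch of that export step: flattening pulled statements into CSV rows that Excel can open. The field paths assume tidy statements; real LRS data may need more defensive handling:

```python
import csv
import io

def export_statements(statements, outfile):
    """Flatten xAPI statements into CSV rows (actor, verb, object, timestamp)."""
    writer = csv.writer(outfile)
    writer.writerow(["actor", "verb", "object", "timestamp"])
    for s in statements:
        writer.writerow([
            s["actor"].get("name", ""),
            s["verb"].get("display", {}).get("en-US", ""),
            s["object"].get("id", ""),
            s.get("timestamp", ""),
        ])
```

Writing to an `io.StringIO` (or to a file opened with `newline=""`) keeps the output Excel-friendly.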

Why open source
  • LRS is a standard: difficult to differentiate from a tech perspective
  • Chance to shape a key piece of tech for our industry – xAPI needs this so people can experiment
  • Open source gives us more marketing opportunities and a vastly improved network
  • Personal sense of purpose
  • New revenue stream

Designing for data
85% of organisations will not be able to exploit big data for competitive advantage through 2015 (Gartner, 2013)

Consider 3 big needs
  • Design for analytics
  • Adopt standards
  • Consider the data supply chain – how does data flow through your organization?
See: http://www.accenture.com/microsites/it-technology-trends-2014/Pages/data-supply-chain.aspx

The personal LRS
Based on the core code of the organization LRS, allowing
  • Individual ownership of data (but more focus on quantified self on the front-end)
  • Presentations of experience
  • Customisation of future learning experiences (API)

Possible uses
  • Google circles: using data for putting information to different people
  • This might provide a "flipped" interview room: the facts of what you know can be seen pre-interview, leaving the interview itself for real interaction.

Strategic aims
We will become the open source standard for LRS: the de facto standard. To achieve this aim and fulfill our vision we have adopted three strategic aims:
Develop an enterprise-ready LRS
… did not get all three

Get involved
Next week a new version of the LRS is rolled out for use (? huh, keep an eye out!)
There is also a cloud version (so no set-up needed, Inge ask for this)

On-going projects
  • Tin badges (open cans) – look up Bryan Mathers, 2014 – tin can and open badges
  • Moodle madness
  • Content without borders

Questions for Ben
Does he work with FutureLearn?
Can xAPI be implemented in smaller systems, e.g. Wordpress?
Where can I find the cloud version of the LRS? It will be licensed at 125 pounds (it seems it still needs to be rolled out)

Some answers:
GrassBlade – WordPress – look it up (thank you David Glow!)
Design cohorts in ADL as a resource to find what people are working on, projects (http://ymlp.com/zan0J7)

On feasibility of building a research instrument to capture informal learning:
It is possible, but you need to plan enormously up front: REALLY know what you want to track, what you will call it, and how it must be entered.
The instrument must use a VERY TIGHT TAG architecture, really defining the meta-tags and the way people must use them, so that the data can be analysed.
NO open statements, otherwise analysis becomes a nightmare => a distinct ID for each statement
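That tight tag architecture could be enforced with a small gatekeeper in front of the store: statements using unregistered verbs or activity IDs are rejected outright. The vocabulary below is an invented example:

```python
# Pre-registered vocabulary: the only verbs and activities allowed.
# These IRIs are invented placeholders for a project's own tag set.
ALLOWED_VERBS = {
    "http://example.com/verbs/read",
    "http://example.com/verbs/noted",
}
ALLOWED_ACTIVITIES = {
    "http://example.com/activities/learning-diary",
}

def validate(statement):
    """Accept a statement only if its tags come from the registered sets."""
    return (statement["verb"]["id"] in ALLOWED_VERBS
            and statement["object"]["id"] in ALLOWED_ACTIVITIES)
```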

(Inge thinking: you could use an instrument for quantified data, then compare it to written learning diaries)