Friday, 11 April 2014

#Diabetes, Sue Townsend, Easter: a personal mix

This morning sad news was announced: the author Sue Townsend has died (well known for her Adrian Mole books, but an all-round prolific and wonderful writer). Two days ago I finished her book “The Woman Who Went to Bed for a Year”, a book that compelled me to laugh out loud and to reflect on the spirits of people, and on who, in the end, cares the most for each of us. I really enjoyed the book. As I read the back cover, two facts sparked an extra, personal interest: blind, kidney transplant. Linking those two together I immediately thought of diabetes complications. After a quick search I found Sue Townsend was, just like me, a person with type 1 diabetes since the 80’s. After reading it, I tucked my feelings on those two facts away to the back of my head.
This morning those feelings stormed to the front again as I read that Sue Townsend had died. That fact scared the hell out of me. I saw the upcoming years: losing eyesight, reduced, then stopped, kidney function, stroke. I am scared of endings, I am scared of death; scared for me, for my loved ones, for the unknown, for the loss of my own autonomy.
A letter came a month ago saying I needed to set up my eye screening appointment. I had ignored it. Why? Because I am scared of my own disease at times, and I have the tendency to flee from personal topics of anxiety. I know fleeing never helps on such occasions, but it is clinging on to the utopia of life as I imagine it: ideal, healthy.
So, an hour ago I got up, picked up the phone and made the dreaded appointment. I need to stay in optimal condition for as long as possible, I owe it to my mum, my son, my partner and myself.
We must all live our own life, with whatever life has in store for us. Being scared only helps in extreme situations where caution is needed; on all other occasions we must face fear and tackle it, either by learning how to overcome it or by reflecting on what the real basis of the fear is. It also means one must understand and look for what makes us tick. For me, understanding the learning process and looking for solutions that involve technology is a pleasure. Research, talking, thinking. To me learning, whether face-to-face, online or any type of blended, is necessary to direct all of us towards trust, life, joining hands. Learning from people we like, who support us, and whom we support. Professionally I am on a good track.
Personally it is about connecting to people I care about. Partners, friends, colleagues (virtual and IRL), sharing, caring, supporting, dreaming.

Easter is coming, the flowers are out, spring is shining, and at this point in time I live, I love and I am thankful for all the wonderful people and opportunities that surround me. Life is in the living. 

Thursday, 10 April 2014

Complacency, failure, improvement cycle and #pearltrees #pkm14

For whatever reason, I seem to have a personal complacency => failure => improvement cycle, which means that every few years something I was good at turns into mush.

Messed up more than one presentation
The latest one concerns presentation skills. I have been good at it (how do I know? Feedback forms, word of mouth) and then it turned bad (how do I know? Again, feedback forms). I did feel myself slip, but I simply told myself I was having an off day and soothed myself into not worrying. So what is the typical decay of my presentation skills? I know what I know, and I actually know quite a bit about certain topics (mLearning, cMOOCs), but then I want to share ALL of it in a one-hour presentation slot AND I rely on my brain to come up with structure ad hoc. That does not happen.
There are multiple reasons, as every and any teacher/trainer will know:

  • pushing too much information forward to the public simply does not transmit the message
  • if you are not a naturally structured person, ad-lib will result in chaos and rambling
  • use simple slides for presenting, and use notes to elaborate on them, or add audio so people understand the pictures you use. I always think slides can double as content booklets (see my slides for yourself)... but I will just need to step away from that: either present, or offer booklets. As Marshall McLuhan said, the medium is the message, and I scrambled the two up, leaving a confused feeling in the heads of the listeners.
  • a starting point, relations or concepts that are obvious to me are not obvious to others
  • practice, practice, practice to become really good, and aim for the moon while I am at it
  • there was even someone in the audience figuring out if I could be one of their future presenters... well I scared them away big time *sigh*
Actions taken
I want to make sure I do not get trapped in my own world of greatness again (apparently I come from a city known for citizens who think they are great: Antwerp, Belgium. Those citizens even get nicknamed 'señor/señora' to describe that complacent ego phenomenon). Anyway, I searched for ways to improve:
  • get some pointers on how to present, i.e. expert knowledge
  • corner a future presentation occasion: May 2014, an online forum (on a subject related to the presentation I messed up, and the forum also has feedback forms)
  • practice, practice, practice and get better. Why: I want to give pleasurable insights on topic into people's minds, not chaos!
The personal knowledge management course I am following helps here: one of our current assignments is testing out new tools. In the past I have used concept mapping as a way to organize and build content and information towards new knowledge. I used CmapTools from IHMC in Florida, USA, but it misses some easy social media sharing options (it does offer meaningful relational descriptions between two sets of content). So I strolled through Jane Hart's eLearning tools and found Pearltrees, a tool to put content together in a jiffy, make it visual and retrievable, and share it with others. Sharing my pearltree here (a just-started Coursera course, examples of great public speakers, good books, and presentation tips):

Presenting skills and books / Great public speaker examples in Inge Ignatia de Waard (ignatia_dw)


Wednesday, 9 April 2014

Proceedings from recent #MOOC conference

Just a quick post linking to a set of conference proceedings worth reading: the eMOOC2014 proceedings of the research track (European MOOC summit in Switzerland, February 2014), which are fully online here. There is a link to a co-authored paper (written by Inge de Waard, Michael Sean Gallagher, Ronda Zelezny-Green, Laura Czerniewicz, Stephen Downes, Agnes Kukulska-Hulme and Julie Willems) on vulnerable learners in MOOCs, which I linked to earlier.

But the proceedings link to some magnificent global MOOC cases and experiences, and lots of papers on learner drop-out and retention. I have just started to read through them all, and the following were already of interest to me (related to my research interests):
  • Analyzing completion rates in the First French xMOOC by Matthieu Cisel (p. 26)
  • Scaffolding Self-learning in MOOCs (p. 43) by Israel Gutiérrez-Rojas, Carlos Alario-Hoyos, Mar Pérez-Sanagustín, Derick Leony and Carlos Delgado-Kloos
  • MOOC Learning in Spontaneous Study Groups: Does Synchronously Watching Videos Make a Difference? (p. 88) by Nan Li, Himanshu Verma, Afroditi Skevi, Guillaume Zufferey and Pierre Dillenbourg
  • Signals of Success and Self-directed Learning (p. 18) by Penny Bentley, Helen Crump, Paige Cuffe, Iwona Gniadek, Briar Jamieson, Sheila MacNeill and Yishay Mor

Filtering for Future ProFessional Frontiers #pkm14

As the Personal Knowledge Management (PKM14) course moves into its second week, all the participants are asked to filter their social media / their networks. We are encouraged to use more advanced filters: e.g. using feeds from people and/or groups, or using automated filters of choice (e.g. Hootsuite or TweetDeck to filter Twitter and other personal streams).

First I took a look at Hootsuite (suggested by Ronda Zelezny-Green) and TweetDeck (both free to some extent; another, paid, option is Sprout Social, which has wonderful options but fits more with enterprise-type social media stream analysis). I tested both and looked at other users' comparisons to get an idea of which tool would suit me. I had used both briefly in the past, but not to their full potential. As it has been some years now, I am clearly better at understanding what these types of tools can do, and the tools themselves offer an improved user experience as well. For me Hootsuite works (but without integrating it into my browser; that feature was the reason I stopped using Hootsuite in the past: too much bling makes me angry and puts me off a tool - but that is me, not the tool).

After only two days, I had already found some immediately relevant information (e.g. more status updates on learning analytics for informal learning, more about weak/strong ties in online communities), but at the same time I am losing more time as I get lost in an information loop. Again... knowledge management is about finding useful tools, optimizing or personalizing them to fit your own goals, and limiting your time on those tools to get the best experience out of them (for me the best experience is activating a peer network). One immediate benefit of Hootsuite is being able to schedule tweets; this saves time and will - eventually - keep me from returning to my streams in Hootsuite until a moment I consciously choose and limit.

The information streams in Hootsuite are currently based on one list and on keywords (e.g. mLearning, learning analytics). I started to build a MyKeyPeople list within Twitter, adding those twitterers who are of importance to me and who provide new insights, links and ideas.

But I do realize this is person-based. Now - as fellow participant Kavi mentioned - knowledge can be distilled at a higher level by using Twitter lists and combining these, or by searching for good curators/curations and linking these to an RSS feed reader or Scoop.it... I still have to work on those options. So that goes on the to-do list.
Speaking of to-do lists, another participant of the course (Shane Johnson) mentioned Todoist, a cross-platform tool to plan your time or projects. Will have to see if this works for me.

Sunday, 6 April 2014

8 years of Working Out Loud thoughts #PKM

As the course on Personal Knowledge Management (PKM) is well on its way, one of the steps to reflect on and improve PKM is making a summary of Working Out Loud (or setting out actions if this is your first time working out loud = sharing what you do through your own social media in order to build an active network that supports you and which you in turn can support).

I have been Working Out Loud for the last 8 years, and my main knowledge areas have been eLearning (stand-alone options as well as partnerships with other institutes), mobile learning, continued medical education tools for training, and mobile learning solutions for developing regions. This took me up a very steep learning curve, especially as mobile learning in developing regions was quite a new field when I started on that track (and top management was not convinced mobile learning would be a good training option - the first projects were paid out of my own pocket).

Why did I work in the open? 

  • It allowed me to connect with the few peers who were out there, somewhere on the globe
  • It let me share what I learned with others contemplating rolling out similar projects
  • It kept track of what I learned and how I solved certain challenges (a personal learning archive)
  • It built a network that I could consult and feed back into
  • It is part of professional activism: sharing consciously to plead for open science, open commons, openness overall


Looking back at those last 8 years, how can I make improvements? Which actions will I take?

  • Challenges and failures should be shared more frequently. It does not feel good for the ego, and in some cultures failure just is not discussed, but failure is part of the essence of learning.
  • Increase curation (synthesizing and disseminating what other peers do). I need to reconnect with my 'top notch 50' peers. I have reduced my 'reading what others do' in the last few months, and this resulted in more of an isolated feeling. The reasons I started to link less to others were time management challenges, a professional identity shift (from corporate solutions to academic research), and losing track of my own personal knowledge management overall.

Friday, 4 April 2014

Part4 #xAPI seminar Andrew Downes on #learningdesign #eln

Almost groggy here after information overload and processing... hoping to still make a bit of sense. Great seminar!
Part 4 of the #xAPI seminar: Andrew Downes on #learningdesign. Wonderful resource: http://tincanapi.co.uk/

And if you want to get an xAPI UK project going, have a look here; at the end there is a link to get in touch and start a conversation.

If you want to start designing xAPI statements, here is the place to start:

And a few pointers on how to start tracking xAPI in the real world

And an iPhone xAPI statements viewer (free) here:
https://itunes.apple.com/gb/app/experience-api-xapi-statement/id550133878?mt=8

Looking at how Tin Can design affects learning design. Andrew is the key UK lead for xAPI at Epic.


An xAPI mindset
The xAPI looks at events, not necessarily the status of events (the latter is typical of SCORM), which results in different learning designs.

A statement follows the actor – verb – object pattern, for example:
  • I – did – this
  • A learner – succeeded at – a work task
  • A learner – completed – some elearning
  • A learner – achieved – their personal goal
(as opposed to the system-centred, SCORM-style record along the lines of 'computer – completed – me')

Think about:
  • What different experiences do, or could, make up your blend of learning?
  • What needs to happen in experience X to trigger a change in experience Y?
  • What is a natural flow for your learners?


Challenges of informal xAPI tracking
Reliability of self-reporting
I did this => prove it!
Will learners report their learning?
Please complete this form => possibly NO (so what can we do to avoid this or work around it? What might be a benefit? E.g. training courses only made available to those who do send feedback.)
Privacy concerns
We are tracking everything you do => Ummmmm (a big brother issue; communicate it well and be open about it)
Interoperability

It’s not working, is it? => Have you tried turning it off and on again? (So communicating on this is crucial, both with learners and with developers/experts)
Too much data
Joe moved his mouse 1 pixel => Which direction?
Correlation is not causation.

Part3 #xAPI and #LRS #learningAnalytics #eln

Part 3: Ben Betts from HT2 on the Learning Record Store and #xAPI. Part 1 of the seminar (including general links and frameworks) can be seen here; part 2 (focusing more on tools) can be found here.

xAPI (the official name) and Tin Can are two names covering the same thing, with slightly different ownership.

Ben looks at the anatomy of a statement (showing it IRL).

Learning Record Store: what it is, what it can do
The format is a triplet of data: actor – verb (in English) – object (what it is you are doing: name, activity, …)
JSON is the language used – lightweight compared to XML; each bit of the statement is identified.

Important stuff:
 each verb used needs to be VERY WELL DEFINED; at this point in time you need to be VERY standardized. E.g. 'enter': this verb can cover multiple meanings, so definition is key to making the LRS meaningful.
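To make the triplet concrete, here is a minimal sketch of what such a statement could look like as a Python dict. The learner address and activity URL are made-up placeholders; the verb IRI is the kind of well-defined identifier (from the ADL verb vocabulary) that keeps the meaning unambiguous across systems:

```python
import json

# A minimal xAPI statement: actor - verb - object.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "A Learner",                        # made-up learner
        "mbox": "mailto:learner@example.com",       # made-up address
    },
    "verb": {
        # The verb is identified by an IRI, not just the English word,
        # so every system reads the same meaning into it.
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/mlearning-intro",  # made-up activity
        "definition": {"name": {"en-GB": "Intro to mLearning"}},
    },
}

print(json.dumps(statement, indent=2))
```

The whole thing serializes to the JSON an LRS expects; everything else in a full statement (timestamp, result, context) is optional layering on top of this triplet.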

Example using the Curator platform: every time I interact with the platform, an xAPI statement is tracked.

He also shows an example of xAPI integration in Google Chrome: clicking in that box in Google Chrome immediately feeds a statement back to the LRS.

More on LRS
An LRS is a potential key piece of infrastructure, and an LRS is a STANDARD.
Ben was bemoaning social learning, e.g. MOOC learning: so much is done by the learner, yet is lost both for the learner and for the learning system.
Learners should own their learning data, enabling a learning journey: owning the JSON, the raw data. This is what drew Ben to the xAPI.
Data gathering must be such that meaning can be taken out by other systems.
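As a hedged sketch of what handing a statement to an LRS looks like in practice: the xAPI defines a REST endpoint (POST to /statements) secured with basic auth plus a version header. The endpoint URL and credentials below are made up; this only builds the request object without sending anything:

```python
import base64
import json
import urllib.request

def build_lrs_request(statement,
                      endpoint="https://lrs.example.com/xAPI/statements",  # made-up LRS
                      user="key", password="secret"):                      # made-up credentials
    """Build (but do not send) an HTTP POST carrying one xAPI statement."""
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        endpoint,
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
            # The version header is required by the xAPI spec.
            "X-Experience-API-Version": "1.0.1",
        },
        method="POST",
    )
```

Calling `urllib.request.urlopen()` on the returned request would actually deliver the statement; any conformant LRS exposes the same /statements resource, which is exactly why the data stays portable between stores.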

Learning locker
Ben shows his social learning linked to xAPI statements (GREAT!!!!)
Validation of actions is made much easier when observation is put into the equation: if someone else has seen you doing an activity and endorses it, you can believe the action actually took place.

But when Ben looked at the first data, he saw that it was dull in many cases.
So a couple of experiments were done: one gave the learners some data to build upon (like a customer card pre-filled with 3 stamps => more motivating!).
Combining all learning in the LRS: quantified self, LMS, surfing, mobile apps…
It is not difficult to come up with the idea of an LRS, but building one is not as easy as thinking of it.

Challenges of the ecosystem
  • Data carrying challenge: it must be awesomely standardized, otherwise analytics tools will not work (a coding/decoding challenge)
  • But some systems you want to keep (e.g. Oracle, statistics tools)
  • Ben uses it to power Open Badges from Mozilla (great for informal MOOC learning); xAPI provides an underlying layer for a badge: if you do this, this and this… you get a badge.
  • A key attribute of an LRS is that you MUST be able to move its data into another LRS (think SCORM). So be critical when buying an LRS.
  • Because we know so little for the moment, a lot of the data will be unusable anyway due to all the changes


Working on for the moment
  • Blending learning platforms – MOOCs (cross-platform) (asked Ben whether he has been contacted by FutureLearn - answer: no known plans)
  • Customising eBooks (is done in the OU)
  • Effective Performance Support – improving learning design
  • Issuing Open Badges
  • Predictive Analysis – are you going to fail? (mentioning OU)
  • Personal Learning Records
  • Personalising Learning Experiences

Ben shows admin dashboard
Because of the wide variety of data, there are a lot of xAPI statements (e.g. used for training engineers: comparing starting engineers' statements with expert engineers' statements – 60,000 statements per day).

Reports can be pulled from the LRS and exported to Excel.
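A hypothetical sketch of the simplest kind of report one could pull from an LRS dump, in the spirit of the novice-versus-expert comparison above: tally statements per verb for two cohorts (the statements and verb IRIs here are toy examples, not real data):

```python
from collections import Counter

def verb_counts(statements):
    """Tally how often each verb IRI appears in a list of xAPI statements."""
    return Counter(s["verb"]["id"] for s in statements)

# Toy cohorts: two novice statements, one expert statement.
novice = [{"verb": {"id": "http://adlnet.gov/expapi/verbs/attempted"}},
          {"verb": {"id": "http://adlnet.gov/expapi/verbs/attempted"}}]
expert = [{"verb": {"id": "http://adlnet.gov/expapi/verbs/completed"}}]

print(verb_counts(novice))
print(verb_counts(expert))
```

Comparing the two tallies side by side is the germ of the novice/expert analysis; real reports would add time windows, actors and activity filters on top.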

Why open source
  • LRS is a standard: difficult to differentiate from a tech perspective
  • Chance to shape a key piece of tech for our industry – xAPI needs this so people can experiment
  • Open source gives us more marketing opportunities and a vastly improved network
  • Personal sense of purpose
  • New revenue stream

Designing for data
85% of organisations will not be able to exploit big data for competitive advantage through 2015 (Gartner, 2013)

Consider 3 big needs
  • Design for analytics
  • Adopt standards
  • Consider the data supply chain – how does data flow through your organization?
See: http://www.accenture.com/microsites/it-technology-trends-2014/Pages/data-supply-chain.aspx

The personal LRS
Based on the core code of the organisational LRS, allowing:
  • Individual ownership of data (but more focus on quantified self on the front-end)
  • Presentations of experience
  • Customisation of future learning experiences (API)

Possible uses
  • Google Circles-style grouping: using data to route information to different people
  • This might provide a flipped job interview: the facts of what you know can be seen pre-interview, leaving real interaction for the interview itself.

Strategic aims
We will become the open source standard for LRS: the de facto standard. To achieve this aim and fulfil our vision, three strategic aims have been adopted:
Develop an enterprise-ready LRS
… did not get all three

Get involved
Next week a new version of the LRS is rolled out for use (? huh, keep an eye out!)
There is also a cloud version (so no set-up needed, Inge ask for this)

On-going projects
  • Tin badges (open cans) – look up Bryan Mathers, 2014 – tin can and open badges
  • Moodle madness
  • Content without borders

Questions for Ben
Does he work with FutureLearn?
Can xAPI be implemented in smaller systems, e.g. WordPress?
Where can I find the cloud version of the LRS? It will be licensed at £125 (it seems still to be rolled out).

Some answers:
Grassblade – a WordPress option – look it up (thank you David Glow!)
Design cohorts in ADL as a resource to find what people are working on, projects (http://ymlp.com/zan0J7)

On the feasibility of building a research instrument to capture informal learning:
It is possible, but you need to plan enormously up front: REALLY know what you want to track, what you call it, and how it must be entered.
The instrument must use a VERY TIGHT TAG architecture, really defining the meta-tags and the way people must use them, in order to be able to analyse the results.
NO open statements, otherwise analysis becomes a nightmare => a different ID for each statement.
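A hedged sketch of what that tight architecture could mean in code: a hypothetical helper (made up for illustration) that rejects any statement whose verb falls outside a pre-agreed vocabulary and stamps each accepted statement with its own unique id, so records stay distinguishable at analysis time:

```python
import uuid

# The pre-agreed, closed verb vocabulary: no free-form statements allowed.
# These IRIs are the ADL-style well-defined verbs discussed above.
ALLOWED_VERBS = {
    "http://adlnet.gov/expapi/verbs/completed",
    "http://adlnet.gov/expapi/verbs/experienced",
    "http://adlnet.gov/expapi/verbs/attempted",
}

def prepare_statement(statement):
    """Enforce the closed vocabulary and give the statement a unique id."""
    verb_id = statement.get("verb", {}).get("id")
    if verb_id not in ALLOWED_VERBS:
        raise ValueError(f"verb not in the agreed vocabulary: {verb_id!r}")
    # Each statement gets its own id, so analysis can tell records apart.
    statement.setdefault("id", str(uuid.uuid4()))
    return statement
```

For example, a statement using the `completed` verb passes through and gains an id, while a free-text verb like "clicked around" is rejected before it can pollute the data set.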

(Inge thinking: you could use such an instrument for the quantified part, then compare it to written learning diaries)

Part2 notes + mini course on #xAPI link #learninganalytics

First of all: great blogpost on “Tell your own learning story through #xAPI”

JOIN THE CLASS (found this while surfing just now):

Tell your own learning story through xAPI

By using Experience API, xAPI, learning experiences can be logged anywhere, trackable beyond the way you’ve ever imagined. xAPI brings new possibilities to every traditional learning standard, including e-Portfolio and SCORM. The statements are in the form of “Actor”+”Verb”+”Object”+…. But, we don’t think the “Actor” is only a puppet in a given story. They are characters in their own stories and in peers’ stories. Learner-generated-learning is one of the most desirable learning moments, isn’t it? xAPI enables data driven design, the data can form a feedback mechanism for learning designers, but also for learners – who should be the co-designers of learning experiences. Furthermore, the autonomy of a learner DOES tell something about him.
We are creating an online course exploring how to leverage xAPI to answer lots of questions — “Learning Architect”. Please register if you’re interested, we’ll inform you when it’s ready. We invite you to join the journey with us.


Now Part 2: Stuart Jones from Unicorn Training on cases and experiences (look here for part 1).

Context: eLearning company which uses a lot of 3rd party content and has their own learning management system.
If you use an LMS, you must think about the standards:
AICC => SCORM 1.2 => SCORM 2004 (pre-iPad; it introduced sequencing and navigation… I remember that, building learning paths, phew!)

Baking Tin Cans: where are we today?
  1. Layer 1: SCORM parity
  2. Layer 2: Record any learning experience
  3. Layer 3: free the data
  4. Layer 4: Big data


Stuart shows how the Tin Can API has made it easier to keep track of personal online quests and retrievals. But it does mean the learner must add them to the learning system (e.g. copy-pasting a visited/useful URL, using content delivery networks).

Off-the-shelf tools: still in development, but the app options already feel cool. Some examples:
Storyline (a bit painful for iPad development, but it is getting better; the apps sort out some of the challenges - he shows how to do it).
An Articulate example is shown now. It needs to be viewed online (a downside), and you need to publish it with the mobile publishing option.
iSpring (a Russia-based company) also works nicely with Tin Can. Publishing to Tin Can is easier, as it has its own tab for it. It works similarly to Articulate: it resumes where the course was left off and you need to be online, but the tracking for non-assessments is not yet ideal.

Other tools: adobe captivate, Lectora…
Look at: bookmarklets: https://demo.tincanapi.com
Using easy copy paste to add bookmarks to the LMS.

Concluding remarks
  • Tin Can is here for self-builders
  • Authoring tools aren’t all there yet
  • LMSs are just starting out
  • The LRS may prove disruptive
  • It is the future

Part1 notes from #xAPI workshop #LearningAnalytics

In the past I wrote posts on the xAPI (the first one in 2011 launching the brainstorm, an update on the project in 2012, and one where the project was getting up to speed in 2013), which is in continued development and seems promising for learner analytics, especially the mobile options. This set of notes is part 1 of 4; the other notes from this workshop can be found here: part 2/4 (some tools and a link to the xAPI demo site), part 3/4 (on connecting xAPI with an LRS), part 4/4 (on learning design).

The project started at ADL (Advanced Distributed Learning), and you can find many resources at the Experience API (xAPI) page here. There is also a wiki with lots of working documents and status details.

For me, I am interested in the xAPI in combination with its mobile options for tracking self-reported experiential learning, informal learning capture and third-party evaluation. Why? Because I am looking to build a research instrument that combines classic 'learning logs' (i.e. diaries in which the learner notes down what they have learned) with the qualtified self (a combination of the qualified and the quantified self, which I wrote a post about here).

A good overview to start from are the slides from Nikolaus Hruska:
http://www.adlnet.gov/what-should-i-track-in-my-learning-experiences-to-build-learning-analytics/

And a really nice Twitter implementation example of the xAPI is described by the wonderful Mayra Aixa Villar in her blogpost #xAPI twitter chat - joint venture team - recap.

This part of the notes is from the first speaker, Andy Wooler from Hitachi Data Systems.
He starts the day by framing learning data in general (look for Hitachi bloggers on big data to keep on top of their ongoing data roadmap).
Making sense of your data (video)
What data is really important?
What does the technical architecture have to look like in the next 5 years?

Think about it: why do I – Inge – want to use it?
  • My reason: to map the triggers in MOOC learning across multiple devices (in a ubiquitous learning platform like FutureLearn)
  • Linking learning data to provide research data from the ground up
  • Looking for learning meaning across contemporary social files (social media, email, text files, machine log files)
  • Allowing serendipitous learning to occur (so creating an open research instrument, yet based on the first findings coming out of the pilot that was performed)


Comparing with Bersin by Deloitte (what their Talent Analytics Maturity Model looks like).

At Hitachi:
ARIES currently (Analytics and Reporting Integrated Enterprise System):
  • LMS
  • assessment tool, feeding into an Oracle system


This enables different, yet not ideal, dashboards for analysing learning data. Currently the dashboards on LMSs are not ideal as a UI.

Vision for the future at Hitachi (HDS academy data future) – ARIES future:
  • LMS
  • IT project database
  • HDS community
  • The Loop
  • Help desk
  • CRM systems
  • LRS (Learning Record Store)

Roadmap towards 2018: advanced analytics tools predicted.

The concept of the never-ending course (CPD – continued professional development with added spaced learning) poses a problem for SCORM: where are people, at that point in time, in their learning journey?

(look up the stairway to eLearning heaven by Fiona Leteney, @fionaleteney, who is also into Tin Can/xAPI)

Reflection suggestion: what type of learning would be of interest to you, taking full personal, informal learning into account as well? What can be taken out of your community tools?

What do you need to think about when looking for meaningful data/analysis?
  • A learning strategy needs to be built, to know what you want to look at
  • Self-reporting depends on subjective meaning-giving (cf. the classic learning diaries that learners provide through self-reporting)
  • Validation that actions were actually undertaken by the learner participant
  • Coming to a mutually shared meaning of the self-reported vocabulary used
  • Iteration of the tracking, as you learn which data matters, how Tin Can actually works, and how to automate data tracking for self-reported and informal learning
  • Looking for higher-level analysis: actively linking the data with discourse research, networked learning, social learning research, looking at the big five traits of a learner

Thursday, 3 April 2014

Free report on durable Technology Enhanced Learning #TELearning

The Beyond Prototypes report provides a UK-based in-depth examination of the processes of innovation in technology-enhanced learning (TEL) with a special emphasis on building online learning solutions that are durable. The focus is also on design-based research.
In order to do this, the report looks back at some long-running programs (one going back to the 80's), and their follow-up projects. The report also looks at challenges and misconceptions of TELearning: e.g. MISCONCEPTION: Most of the TEL innovation process takes place within universities.

It is a nice, 40-page report describing three cases:
The Microelectronics Education Programme
The £23 million Microelectronics Education Programme (MEP) for England, Wales and Northern Ireland was established by the Government in November 1980 and ran for six years. The aim was to support schools in preparing children ‘for life in a society in which devices and systems based on microelectronics are commonplace and pervasive’. To complement this work, the Department of Industry made £16 million available to help local education authorities purchase computers for schools.
MEP took into account areas as diverse as curriculum development, teacher training, resource organisation and support. It promoted change at national, regional and local levels, encouraging collaboration and cross-fertilisation of ideas. Plans for evaluation and field studies were incorporated from the start. Although there was relatively little emphasis on pedagogy, the programme did note the potential to ‘add new and rewarding dimensions to the relationship between teacher and class or teacher and pupil’ [8]. (In short: curriculum, teacher training and infrastructure provision to schools; a six-year programme which afterwards gave rise to EU projects building upon that expertise and changing education as a result.)
This project had follow-ups in European projects tackling education innovation. 

The Yoza cellphones project (mobile):
The aim of the Shuttleworth Foundation funded Yoza Cellphone Stories project (Yoza), formally entitled m4Lit, was to promote leisure reading by the distribution of m-novels to mobile phones in South Africa – a country where less than 10% of public schools have functional libraries but 70% of urban youth have internet-enabled mobile phones. The project began in 2009, taking inspiration from work done in Japan, using an existing mobile chat platform to release content and advertise, and publishing in local languages, including Afrikaans and isiXhosa, as well as English. Yoza considers the key innovation in this process of bricolage not to be the use of phones, but the provision of really engaging stories (some published in episodes), available easily and affordably, with readers able to comment and see others’ comments in near real time.
In early 2013, Yoza won the Netexplo Award in Paris and had a catalogue of over 50 openly licensed m-novels, poems and plays, some of which deal with difficult subjects such as living with HIV. Use of the service has been strong, with over half a million completed reads and 50,000 user comments recorded in the 17 months to December 2012. Securing further funding has proved challenging. However, content has been reused elsewhere, including by Young Africa Live, and the model has helped pave the way for other initiatives in South Africa such as the FunDza Literacy Trust.

iZone driver performance (corporate case)
iZone was set up in 2009 to address a change in Fédération Internationale de l’Automobile (FIA) regulations, which reduced racing teams’ testing time. While test equipment and simulators for the testing of cars and components were already used, nothing was available that could replace track time for drivers. Sophisticated simulators with video screens had been developed over the previous 35 years, but much more complex systems, able to provide physical feedback such as g-forces, were required for the development of elite drivers.
iZone addressed this problem by interlinking physiological systems and electromechanical systems. It uses eye-tracking technology to enable coaches to analyse drivers’ performance and assess their control during the simulation. This technology was developed by the company’s simulator designer, John Reid, who was inspired by an article about the use of eye-tracking systems in helicopter gunships.
iZone has links with Cranfield Aerospace that stretch back to the 1980s, when company chairman Alex Hawkridge used the wind tunnel at Cranfield to develop the aerodynamics of Toleman F1 cars. The company now uses the g-force technology from Cranfield’s helicopter trainer and also has PhD students from Cranfield working with the company on aspects of the project. A similar long-term relationship with the Department of Electrical Engineering at the University of Sheffield has also helped with the development of the simulator.
Based on work with racing drivers prior to setting up iZone, the team has created a training regime developed by sports scientists and sport psychologists to offer a complete driver development programme that includes the use of the simulators. The sport psychology input came from Dave Collins, who had developed a name for coaching and mentoring in athletics and football as well as in motorsports.
Most technology businesses are concerned with the protection of intellectual property (IP), but Alex Hawkridge’s view is that, ‘the things that are patentable, we don’t think it would be wise to patent, because you then tell people exactly what you’re doing.’ He considers that the most important way to protect the business’s IP is to keep developing the simulator business. The potential for iZone to run a similar operation at every major racetrack in the world is a real opportunity; a future way forward might include franchising the model in order to maintain its speed of development.

The report also makes recommendations for researchers, government and policy makers. Just mentioning a few; the recommendations feel rather intuitive:
  • The interim and final results from design-based studies should be systematically shared with other researchers so that the process of innovation can be compared, expanded, and continued over time. They should also be widely disseminated to policy makers and practitioners, through events such as ‘what research says’ meetings. 
  •  Research institutes should set up long-lasting collaborations and consortia, involving schools, museums and other educational settings as test-beds, to support large-scale comparative and cross-cultural investigations.
  • Policy and funding should support innovations in pedagogy and practice, as well as the technological developments that will support these. This should recognize the need to fund professional development of practitioners and evaluation of the innovation in practice.
  • Policy and funding should recognize the importance of extended development and provide support for scaling and sustaining of innovations, beyond prototypes into educational transformations.
  • Policy and funding should encourage the development of skilled, multidisciplinary teams that are able to complete the TEL innovation process. Recognition and support should be given to visionary thinking and experimentation, to generate fresh insights and achievable visions of educational developments.