Thursday, 27 September 2018

Machine learning benefits and risks by expert Stella Lee #AI #data #learning

Machine learning has moved from mere hype into a strong, acknowledged force in learning (not only in the news, but also on the stock market side of AI, e.g. the STOXX AI global indices - I was quite surprised to see this). Machine learning has the power to support personalized as well as adaptive learning, which allows an instructional designer to engage learners in such a way that learning outcomes can be reached in more than one way (always a benefit!). It allows the content or information provided for training to be delivered in a way that fits the learner, and that reacts to learner feedback (answers, speed of response, etc.). Turning a fixed set of learning objectives into flexible training demands some technological options: data, algorithms that can interpret the data, access to some sort of connectivity (e.g. ad hoc via wifi and an information hub, or via the cloud and the internet), and money to program, iterate and optimize the learning options continuously.
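To make that adaptive feedback loop a bit more concrete, here is a minimal sketch in Python. The item names, thresholds and difficulty rule are my own illustrative assumptions, not any particular product's algorithm - just the general idea of letting answers and response speed drive the next content choice:

```python
# Minimal sketch of an adaptive content selector.
# Thresholds and the difficulty rule are illustrative assumptions,
# not a production algorithm.

from dataclasses import dataclass

@dataclass
class Response:
    correct: bool
    seconds: float  # time the learner took to answer

def next_difficulty(history: list[Response], current: int) -> int:
    """Pick the next difficulty level (1-5) from recent learner feedback."""
    if not history:
        return current
    recent = history[-3:]  # only the last few answers matter here
    accuracy = sum(r.correct for r in recent) / len(recent)
    avg_time = sum(r.seconds for r in recent) / len(recent)
    if accuracy >= 0.8 and avg_time < 10:   # fast and accurate: step up
        return min(current + 1, 5)
    if accuracy <= 0.4:                     # struggling: step down
        return max(current - 1, 1)
    return current                          # otherwise, stay put

history = [Response(True, 6.0), Response(True, 8.5), Response(True, 7.2)]
print(next_difficulty(history, current=3))  # -> 4
```

Even a toy loop like this shows why the data and the continuous optimization budget matter: every threshold in there is a design decision that needs iterating against real learner data.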

This mix of data, interpretation and choices made by machines (algorithms) means that machine learning combines so many learning tools, data and computing power that it inevitably raises philosophical and ethical questions: what is the real learning outcome we want to achieve, what interpretations do our algorithms make, and what is the difference between manipulation towards something people must learn and learning that still offers a critically grounded outcome for the learner?

Stella Lee offers a great overview of what it means to use machine learning (e.g. for personalized learning paths, for chatbots that deliver tech or coaching support, for performance enhancement). This talk is worth a look or listen. Stella Lee is one of those people who inspire me through their love of technology: thorough, thoughtful, and able to turn complex learning issues into feasible learning opportunities you want to try out. She gave a talk at Google Cambridge on machine learning and AI and ... she inspired her tech-savvy audience.

In her talk she also goes deeper into 'explainable AI': AI that can be interpreted easily by people (including relative laymen, which most learners are). Explainable AI is an alternative to the more common black box of AI (useful article), where data interpretation is left to a select few. Stella Lee's approach to increasing explainability is granularity. This simple concept - considering which data or indicators to show, and which to keep behind the curtains - enables quicker interpretation of the data by the learner or other stakeholders. Of course this does not solve all transparency issues, but it opens a path towards interpretation and towards explainable AI. It also shows a willingness to enter into dialogue with the learners, and to consider their feedback on the machine learning processes. As always, engaging the learners is key for trust, advancement and clear interpretation (Stella says it way better than my brief statement here!).
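The granularity idea is simple enough to sketch. Here is one way to read it, assuming hypothetical indicator names of my own choosing: the system keeps its full internal state, but only a chosen, coarse-grained slice is surfaced to the learner.

```python
# A minimal sketch of the granularity idea: expose only a few
# human-readable indicators to learners, keep raw model internals
# behind the curtain. All field names here are hypothetical.

full_model_state = {
    # coarse indicators a learner can act on
    "estimated_mastery": 0.72,
    "recommended_next_topic": "fractions",
    "avg_response_time_s": 11.4,
    # fine-grained internals that would only confuse most stakeholders
    "feature_weights": {"w_03": -0.31, "w_17": 0.08},
    "last_gradient_norm": 0.0041,
}

LEARNER_FACING = ("estimated_mastery", "recommended_next_topic", "avg_response_time_s")

def learner_view(state: dict) -> dict:
    """Return only the coarse-grained indicators chosen for the learner."""
    return {key: state[key] for key in LEARNER_FACING}

print(learner_view(full_model_state))
```

The interesting design question is of course which indicators make the learner-facing list - that choice is itself an ethical decision, which is exactly Stella Lee's point.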

Have a look at her talk on machine learning bias, risks and mitigation below (a 30-minute talk followed by a 15-minute Q&A), or take a quick look at the accompanying article here.

One of the main risks is of course some sort of censorship, or machine-made interpretation that produces an unbalanced, sometimes discriminatory outcome. In January I gathered some thoughts on AI and education in another blogpost here. And I also gave a talk on the benefits and risks of AI last year, where I argued for stronger ethics in AI for education (slides here).

Machine learning is a complex type of learning: it involves a lot of data interpretation, algorithms to draw meaningful reactions from the data, and of course feedback loops to provide adaptive, personal learning tracks to a number of learners.
Situating it, I would call it costly, more useful for formal than informal learning (at this point in time), and somewhere between individual and social learning, as the data comes from the many but the adapted use is for the one. It does not leave much room for self-directed learning, unless this is built into the machine learning algorithms (first ask the learner for learning outcomes, then make choices based on data) - a sketch of that idea follows below.
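What might "building self-direction in" look like? A minimal sketch, assuming a hypothetical catalogue of outcomes and modules: the learner picks the outcome first, and only then does the system make its data-driven choices within that choice.

```python
# Sketch: a self-directed step inside an adaptive system. The learner
# chooses the target outcome; the algorithm only picks the next module
# within it. Outcome names and the catalogue are illustrative.

CATALOGUE = {
    "conversational_spanish": ["greetings", "ordering food", "small talk"],
    "business_spanish": ["emails", "meetings", "negotiation"],
}

def next_module(chosen_outcome: str, completed: set) -> str | None:
    """Return the next uncompleted module for the learner-chosen outcome."""
    for module in CATALOGUE[chosen_outcome]:
        if module not in completed:
            return module
    return None  # learner-chosen outcome reached

# The learner, not the algorithm, picks the outcome first:
print(next_module("conversational_spanish", completed={"greetings"}))
# -> "ordering food"
```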

Tuesday, 25 September 2018

Hearables for learning combining #language, #AI & internet #edtech #informal learning

Hearables are clearly on the rise. After the screens (read & write web), learning on the go (mobile learning), the eyes (all sorts of augmented glasses), some kinetic learning (various motion controllers), other wearables (e.g. smart clothing) ... the next sense that is now ready to inspire new learning is: hearing (HLearning). "Hearables are wireless smart micro-computers with artificial intelligence that incorporate both speakers and microphones. They fit in the ears and can connect to the Internet and to other devices; they are designed to be worn daily. One form of specialised hearables are the earphone language translators that offer potential in language teaching." (thank you Rory McGreal for this wonderful description).

Learning with hearables links to other, more established forms of technology-based learning. It is mobile (it is a wearable). It can be used in context (e.g. in a refugee camp, enabling dialogue). It can be implemented within informal learning (to build language skills, or simply to move around in a country where you do not speak the language), and so it supports self-directed learning, as you can use hearables in contexts you find interesting. And it augments the information you already have, by whispering audio feedback or personal information into your ear to augment the real world around and within you (wifi and sensor enabled). This puts hearables amidst the already complex landscape of technology-supported learning.

Rory McGreal has just given a great overview of hearables for learning in his recent CIDER session. You can download his slides here and listen to his talk here. Or look around on the CIDER page, which is packed with EdTech and distance learning talks:
https://landing.athabascau.ca/groups/profile/289790/cider/tab/359765/sessions 

Hearables will be quite a leap forward in translation and language learning (if seamless learning becomes feasible). And for those of us who like spy movies... yep, it has that special-agent ring to it as well!

My colleague Agnes Kukulska-Hulme recently pointed me to the Babel Fish option (referring to the ever-inspiring The Hitchhiker's Guide to the Galaxy): a specific hearable called the Pilot, built by Waverly Labs. This particular device supports 15 languages (among others English, Arabic, Mandarin, Russian, Hindi, Spanish and Japanese), with male and female voices that translate the audio recorded by the microphone through a cloud-based translation engine. They even claim low latency (which is kind of nice when you want to match what is said to body language).
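Waverly Labs has not published the Pilot's internals, so purely as a conceptual sketch, the cloud-based pipeline described above might look something like this - with the endpoint URL, parameters and JSON shape all hypothetical placeholders of mine, not a real API:

```python
# Conceptual sketch of a Pilot-style pipeline: mic audio -> cloud
# speech-to-text + translation -> synthesized speech back into the ear.
# The endpoint and its parameters are hypothetical, NOT Waverly Labs' API.

import requests

TRANSLATE_URL = "https://api.example.com/v1/translate-audio"  # hypothetical

def translate_utterance(audio_bytes: bytes, src: str, dst: str) -> bytes:
    """Send one recorded utterance to a cloud engine, get audio back."""
    response = requests.post(
        TRANSLATE_URL,
        params={"source_lang": src, "target_lang": dst, "voice": "female"},
        data=audio_bytes,
        headers={"Content-Type": "audio/wav"},
        timeout=2.0,  # latency matters if speech should match body language
    )
    response.raise_for_status()
    return response.content  # synthesized speech in the target language

# Example usage (given a recorded utterance on disk):
# with open("utterance.wav", "rb") as f:
#     spoken_translation = translate_utterance(f.read(), "en", "es")
```

The round trip to the cloud is exactly where the latency claim becomes interesting: every extra hop between microphone and ear widens the gap between the speaker's words and their body language.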

While in-ear translations are a straightforward implementation of augmented and language learning, the processing and AI behind it will also allow increased hearing range and audio information of any kind you choose (biometrics, recognizing a bird in the wild, communication between fish, a recognition aid to get the names right of people you meet, looking like a secret agent on top of whatever information makes you look cool, ...). Of course, the usual considerations apply: hearables will listen in on what you do and where you go, they are not yet a seamless learning aid (the name Pilot is clearly well chosen), battery life is a concern (as with all things mobile), connectivity can vary while mobile, and they risk becoming yet another tech distraction. Nevertheless, this is cool and worth looking into.