
Thursday, 3 October 2019

Yes a learning engine: demo is ready, but #AI and #Learning challenges ahead #TBB2019 @InnoEnergyCE

If you have ideas on ensuring continuity in pedagogy when clustering courses (research), on certifying across corporate and university learning (blockchain/bit of trust certification), on opening up industry academies to decrease L&D costs (HR and L&D), ... please think along and respond to the challenges mentioned at the end.

People in high and common places seem to agree that the world is in transition, especially workplace learning, as innovations keep changing what is possible. As I am working on one such innovation (the skill project of InnoEnergy), I am on the one hand very excited about the new opportunities it might open, yet at the same time concerned that the complexity is greater than expected.

First: have a look at the demo screencast here. It shows the overall idea, and ... this might immediately give rise to questions.

Today the Business Booster event (TBB) opens, and with it, the skill project demo is launched. The skill project (we still need to get a brand name for it) combines AI and learning for the sustainable energy sector. But in essence, once we get the sustainable energy sector mapped with this tool, others can follow.

AI and learning? What does it do? The project identifies industry needs (AI-driven), pinpoints emerging skill gaps in the sustainable energy sector (AI-driven), analyses the existing workforce to find where the urgent skills gaps are situated (AI-driven), and then refers employees to a personalized learning trajectory addressing their skills gap (part AI, part human support). The goal of this project is to ensure that employees of the sustainable energy sector stay future-proof in a quickly changing working environment. Let's be honest, it sounds cool, but ... the challenges are multiple.
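To make that pipeline a bit more concrete, here is a minimal sketch of the idea in Python. All names, skills and courses below are invented for illustration; this is not the actual skill project code, and the real AI-driven steps (mining job postings, analysing workforce data) are reduced to toy set operations:

```python
# Toy sketch of the skills-gap pipeline; every name and data point is hypothetical.

# Skills the industry currently demands (in the real project: AI-mined).
industry_needs = {"wind turbine maintenance", "grid integration", "energy storage"}

# Skills an employee already has (in the real project: workforce analysis).
employee_skills = {"wind turbine maintenance", "project management"}

def skill_gap(needs, skills):
    """Return the skills the industry needs that the employee lacks."""
    return needs - skills

def suggest_trajectory(gap, catalogue):
    """Map each missing skill to the courses that teach it."""
    return {skill: catalogue.get(skill, []) for skill in gap}

# A toy course catalogue keyed by the skill each course addresses.
catalogue = {
    "grid integration": ["University module: Smart Grids"],
    "energy storage": ["Corporate academy: Battery Systems 101"],
}

gap = skill_gap(industry_needs, employee_skills)
trajectory = suggest_trajectory(gap, catalogue)
```

The human-support part of the project would then sit on top of this: checking that the suggested trajectory actually fits the employee.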

The emergence of a Learning Engine
The skill project helps realize the emergence of a learning engine: an intelligent, career-oriented engine that knows your skills and signposts you to where you want to go with your career by suggesting a personalized learning track.
In the Learning Engine you simply type in "goal: become Director of Innovation in offshore wind energy, which courses?" and the engine immediately returns a tailored, personalized learning track consisting of a variety of certified business training from universities, corporate academies, open educational energy resources and coaching options to send you on your way. This will allow professional learning to surpass the limits of classical, university-based learning.
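As a sketch of what answering such a goal query could look like, assuming (hypothetically) that roles have been mapped to required skills and courses have been tagged with the skill they teach, the track assembly might boil down to a lookup and a filter:

```python
# Hypothetical sketch of goal-to-track assembly; roles, courses and providers
# are invented, and real natural-language goal parsing is skipped.

COURSES = [
    {"title": "Offshore Wind Fundamentals", "provider": "university",
     "teaches": "offshore wind"},
    {"title": "Innovation Management", "provider": "corporate academy",
     "teaches": "innovation"},
    {"title": "Leading R&D Teams", "provider": "coaching",
     "teaches": "leadership"},
]

# Assumed mapping from a target role to the skills it requires.
ROLE_SKILLS = {
    "Director of Innovation in offshore wind": [
        "offshore wind", "innovation", "leadership"],
}

def learning_track(goal_role):
    """Assemble a mixed-source track: one course per required skill."""
    wanted = set(ROLE_SKILLS[goal_role])
    return [c["title"] for c in COURSES if c["teaches"] in wanted]
```

The point of the sketch is the mixed sources: the returned track deliberately crosses university, corporate academy and coaching providers.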

Challenges
Access to courses: in order to get our engine to come up with the best, most tailored courses, we need access to industry academies as well as university courses.
Learning-to-learn capacities: once we signpost learners to a cluster of courses, they need to take them (the familiar 'take the horse to water' comes to mind). But even if the learners take the courses, they still need the learning-to-learn capacities to turn those courses into actual upskilling.
Granularity for course clustering: clustering courses to keep on top of your field of expertise is one thing, but then what is the granularity of those courses? Micro-learning is an option, and modular learning will become a clear necessity, as all learners have different existing knowledge, which means they each need different parts in order to upskill what they already know.
Ensuring pedagogical continuity: even the OU finds that a challenge. Great, so let's cluster modules. But then, how can we link these modules together? Do we believe in non-pedagogical support (e.g. Sugata Mitra's hole-in-the-wall experiment already dates back over ten years), or do we need to find a solution that provides pedagogical continuity and fits this new assembly of short modules and courses coming from different sources (both university and industry)?
Certification across the learning ecologies: to blockchain or not to blockchain. Once we start learning across institutes, we need to keep track of what we learn by keeping tabs on the actual learning: corporate academy learning, university modules, hands-on training, workplace learning... One solution is to embed blockchain in education to keep track of all learning. But this is easier said than done, and open standards and trust are issues to consider (the Bit of Trust initiative offers good reading).
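To illustrate the certification challenge, here is a toy hash chain in Python: each learning record is linked to the hash of the previous one, so any later tampering is detectable. This is only the tamper-evidence idea behind blockchain, not a real distributed ledger and not the Bit of Trust design; all record contents are invented:

```python
# Toy tamper-evident chain of learning records; not a real blockchain.
import hashlib
import json

def add_record(chain, record):
    """Append a learning record linked to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; an edited record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True
```

What this toy version does not solve is exactly the open-standards and trust question above: who runs the chain, and which institutes agree on the record format.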

Feel free to send questions, comments, share your own projects... let's get together.

Thursday, 27 September 2018

Machine learning benefits and risks by expert Stella Lee #AI #data #learning

Machine learning has moved from mere hype into a strong, acknowledged learning power (not only in the news, but also on the stock market side of AI, e.g. the STOXX AI global indices - I was quite surprised to see this). Machine learning has the power to support personalized learning, as well as adaptive learning, which allows an instructional designer to engage learners in such a way that learning outcomes can be reached in more than one way (always a benefit!). Machine learning allows the content or information provided for training/learning to be delivered in a way that fits the learner, and that reacts to learner feedback (answers, speed of response, etc.). Tailoring a fixed set of learning objectives into flexible training demands some technological options: data, algorithms that can interpret the data, access to some sort of connectivity (e.g. ad hoc via wifi and an information hub, or via the cloud and the internet), and money to program, iterate and optimize the learning options continuously.
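A tiny sketch of the adaptive part, reacting to the learner feedback mentioned above (answers and speed of response), could look like this. The rule, the difficulty scale and the time limit are all invented for illustration; real adaptive systems use far richer models:

```python
# Hypothetical adaptive-difficulty rule on an invented 1-10 scale.
def next_difficulty(current, correct, response_seconds, time_limit=30):
    """Step difficulty up after a quick correct answer, down after a miss."""
    if correct and response_seconds < time_limit:
        return min(current + 1, 10)   # confident and fast: step up
    if not correct:
        return max(current - 1, 1)    # wrong answer: step down
    return current                    # correct but slow: stay put
```

Even this crude rule shows the principle: the same fixed learning objective is reached along different paths, because the content delivered next depends on how the learner just performed.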

This (data, interpretation, choices made by machines - algorithms) means that machine learning combines so many learning tools, so much data and computing power, that it inevitably comes with weighty philosophical and ethical decisions: what is the real learning outcome we want to achieve, what are the interpretations of our algorithms, and what is the difference between manipulation towards something people must learn and learning that still offers a critically based outcome for the learner?

Stella Lee offers a great overview of what it means to use machine learning (e.g. for personalized learning paths, for chatbots that deliver tech or coaching support, for performance enhancement). This talk is worth a look or listen. Stella Lee is one of those people who inspire me through their love for technology, by being thorough and thoughtful, and by being able to turn complex learning issues into feasible learning opportunities you want to try out. She gave a talk at Google Cambridge on the subject of machine learning and AI and ... she inspired her tech-savvy audience.

In her talk she also goes deeper into the subject of 'explainable AI', which offers AI that can be interpreted easily by people (including relative laymen, which is the case for most learners). Explainable AI is an alternative to the more common black box of AI (useful article), where the data interpretation is left to a select few. Stella Lee's solution for increasing explainability is granularity. This simple concept of granularity, i.e. considering which data or indicators to show and which to keep behind the curtains, enables a quicker interpretation of the data by the learner or other stakeholders. Of course this does not solve all transparency issues, but it opens a path towards interpretation and description, towards explainable AI. That way you show the willingness to enter into dialogue with the learners, and to consider their feedback on the machine learning processes. As always, engaging the learners is key for trust, advancement and clear interpretation (Stella says it way better than my brief statement here!).
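As a small illustration of that granularity idea, the selection of which indicators to surface could be as simple as ranking them by influence and showing only the top few. The indicator names and weights below are invented, and real explainability tooling is of course more involved:

```python
# Hypothetical granularity filter: show the learner only the most
# influential indicators, keep the rest behind the curtain.
def visible_indicators(weights, k=3):
    """Return the k indicator names with the largest absolute influence."""
    ranked = sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Invented example: influence of each indicator on a learner's predicted score.
weights = {"quiz scores": 0.9, "video rewatches": -0.4,
           "login hour": 0.05, "forum posts": 0.3}
```

Showing "quiz scores" and "video rewatches" while hiding "login hour" is exactly the kind of curation decision the paragraph above describes: it makes the output interpretable, while the choice of what to hide remains an editorial, and therefore ethical, one.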

Have a look at her talk on machine learning bias, risks and mitigation below (30 minute talk followed by a 15 min Q&A), or take a quick look at the accompanying article here.

One of the main risks is of course some sort of censorship, or interpretation done by the machine which leads to an unbalanced, sometimes discriminatory result. In January I organised some thoughts on AI and education in another blogpost here. And I also gave a talk on the benefits and risks of AI last year, where I argued for increased ethics in AI for education (slides here).

Machine learning is a complex type of learning: it involves a lot of data interpretation, algorithms to draw meaningful reactions from the data, and of course feedback loops to provide adaptive, personal learning tracks to a number of learners.
Situating it, I would call it costly, useful for formal rather than informal learning (at this point in time), and somewhere between individual and social learning, as the data comes from the many but the adapted use is for the one. It does not leave much room for self-directed learning, unless this is built into the machine learning algorithms (first ask the learner for learning outcomes, then make choices based on data).