Machine learning has moved from mere hype into a real, acknowledged learning power (not only in the news, but also on the stock market side of AI, e.g. the STOXX AI global indices - I was quite surprised to see this). Machine learning has the power to support personalized learning as well as adaptive learning, which allows an instructional designer to engage learners in such a way that learning outcomes can be reached in more than one way (always a benefit!). Machine learning allows the content or information provided for training/learning to be delivered in a way that fits the learner and reacts to learner feedback (answers, speed of response, etc.). Tailoring a fixed set of learning objectives into flexible training demands some technological prerequisites: data, algorithms that can interpret that data, access to some sort of connectivity (e.g. it might be ad hoc with Wi-Fi and an information hub, or via the cloud and the internet), and money to program, iterate and optimize the learning options continuously.
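To make the "reacts to learner feedback" part concrete, here is a minimal sketch in Python of an adaptive rule that adjusts item difficulty from recent answers and response speed. The names, thresholds and the 1-5 difficulty scale are my own illustrative assumptions, not a real adaptive-learning engine.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    correct: bool
    seconds: float  # response time for the item

def next_difficulty(history: list[Attempt], current: int) -> int:
    """Pick the next item's difficulty (1-5) from recent answers and speed.

    Toy adaptive rule: step up after quick, correct answers; step down
    after mistakes or slow responses; otherwise stay put.
    """
    if not history:
        return current
    recent = history[-3:]
    accuracy = sum(a.correct for a in recent) / len(recent)
    avg_time = sum(a.seconds for a in recent) / len(recent)
    if accuracy >= 0.8 and avg_time < 20:
        return min(current + 1, 5)
    if accuracy < 0.5 or avg_time > 60:
        return max(current - 1, 1)
    return current

# Two quick correct answers followed by a slow miss: difficulty stays at 3.
history = [Attempt(True, 12), Attempt(True, 15), Attempt(False, 70)]
print(next_difficulty(history, current=3))
```

A real system would of course replace these hand-picked thresholds with a model trained on data, which is exactly where the cost and the continuous optimization mentioned above come in.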
This (data, interpretation, choices made by machines - algorithms) means that machine learning combines so many learning tools, so much data and computing power, that it inevitably raises philosophical and ethical questions: what is the real learning outcome we want to achieve, how do our algorithms interpret the data, and what is the difference between manipulation towards something people must learn and learning that still offers a critically grounded outcome for the learner?
Stella Lee offers a great overview of what it means to use machine learning (e.g. for personalized learning paths, for chatbots that deliver tech or coaching support, for performance enhancement). This talk is worth a look or listen. Stella Lee is one of those people who inspire me through their love for technology, by being thorough, thoughtful, and able to turn complex learning issues into feasible learning opportunities you want to try out. She gave a talk to Google Cambridge on the subject of machine learning and AI and ... she inspired her tech-savvy audience.
In her talk she also goes deeper into the subject of 'explainable AI', which offers AI that can be interpreted easily by people (including relative laymen, which is the case for most learners). Explainable AI is an alternative to the more common black box of AI (useful article), where the data interpretation is left to a select few. Stella Lee's solution for increasing explainable AI is granularity. This simple concept of granularity, or considering which data or indicators to show and which to keep behind the curtains, enables a quicker interpretation of the data by the learner or other stakeholders. Of course this does not solve all transparency issues, but it opens a path towards interpretation and description, and thus towards explainable AI. That way you show the willingness to enter into dialogue with the learners, and to consider their feedback on the machine learning processes. As always, engaging the learners is key for trust, advancement and clear interpretation (Stella says it way better than my brief statement here!).
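As a rough illustration of that granularity idea (my own sketch, not taken from Stella Lee's talk), the system might track many internal indicators but surface only a small, plainly labelled subset to the learner:

```python
# All indicators the system tracks internally (names and values are
# illustrative assumptions, not from any real learning platform).
full_profile = {
    "mastery_estimate": 0.72,
    "avg_response_time_s": 24.5,
    "model_confidence": 0.61,
    "feature_weights": {"quiz_scores": 0.4, "time_on_task": 0.35, "forum_posts": 0.25},
    "raw_clickstream_id": "session-4821",
}

# The granularity decision: only these indicators are shown to the learner,
# with plain-language labels; the rest stays behind the curtains.
LEARNER_FACING = {
    "mastery_estimate": "Estimated mastery of this topic",
    "avg_response_time_s": "Average time per question (seconds)",
}

def learner_view(profile: dict) -> dict:
    """Return only the indicators chosen for the learner-facing view."""
    return {label: profile[key] for key, label in LEARNER_FACING.items() if key in profile}

print(learner_view(full_profile))
```

The choice of which keys go into that learner-facing list is exactly the dialogue point: learners and other stakeholders can question it, and their feedback can push more (or different) indicators out from behind the curtains.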
Have a look at her talk on machine learning bias, risks and mitigation below (30 minute talk followed by a 15 min Q&A), or take a quick look at the accompanying article here.
One of the main risks is of course some sort of censorship, or a machine-made interpretation that results in an unbalanced, sometimes discriminatory outcome. In January I organised some thoughts on AI and education in another blogpost here. And I also gave a talk on the benefits and risks of AI last year, where I argued for increased ethics in AI for education (slides here).
Machine learning is a complex type of learning: it involves a lot of data interpretation, algorithms that draw meaningful reactions from the data, and of course feedback loops to provide adaptive, personal learning tracks to a number of learners.
Situating it, I would call it costly, more useful for formal than for informal learning (at this point in time), and somewhere between individual and social learning, as the data comes from the many, but the adapted use is for the one. It does not leave much room for self-directed learning, unless this is built into the machine learning algorithms (first ask the learner for their learning outcomes, then make choices based on data), as in the sketch below.
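Here is a minimal sketch of what "first ask the learner, then decide from data" could look like; the module catalog, outcome names and mastery scores are hypothetical examples of my own.

```python
def choose_next_module(catalog, learner_goals, performance):
    """Rank modules by the learner's stated goals first, then by data-driven gaps.

    catalog:       {module_name: set of learning outcomes it covers}
    learner_goals: outcomes the learner says they want (the self-directed input)
    performance:   {outcome: estimated mastery between 0 and 1}
    """
    def score(module):
        outcomes = catalog[module]
        goal_overlap = len(outcomes & learner_goals)
        gap = sum(1 - performance.get(o, 0.0) for o in outcomes)
        return (goal_overlap, gap)  # the learner's goals outrank the data

    return max(catalog, key=score)

catalog = {
    "intro_stats": {"describe_data", "basic_probability"},
    "regression": {"fit_linear_model", "interpret_coefficients"},
}
learner_goals = {"fit_linear_model"}
performance = {"describe_data": 0.9, "basic_probability": 0.8}

print(choose_next_module(catalog, learner_goals, performance))  # -> regression
```

Putting the learner's own goals ahead of the purely data-driven choice is one way to keep a bit of self-direction inside an otherwise machine-driven track.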