Wednesday, 26 April 2017

Using #learningAnalytics to inform research and practice #educon17

Talk during Educon2017 by Dragan Gasevic, known for his team's award-winning work on the LOCO-Analytics software, which is considered one of the pioneering contributions in the growing area of learning analytics. In 2014 he founded ProSolo Technologies Inc (https://www.youtube.com/watch?v=4ACNKw7A_04), which develops a software solution for tracking, evaluating, and recognizing competences gained through self-directed learning and social interactions.

He jumped onto the stage with a bouncy step and was in good form to get his talk going.

What he understands by learning analytics is the following: shaping the context of learning analytics brings both challenges and opportunities. Supporting a lifelong learning journey calls for a measuring system that can support and guide the learning experience of individuals.
Active learning also means constant funding, to enable constant iteration of knowledge, research and technology. But even if you provide new information, there are only limited means to understand who in the room is actually learning something and who is not. Addressing the need for meaningful feedback on what is learned is the basis of learning analytics.
The data comes from the learning system (e.g. the LMS), but socio-economic details of individuals are also used.
No matter which technologies are used, the interaction with these technologies results in digital footprints. Initially technologists used the digital footprints as a means to adjust the technology, but gradually natural language processing, learning, meaning creation… also came to be investigated through these footprints.

Actual applications of learning analytics are given; two well-known examples:
Course Signals from Purdue University: student actions within their LMS (Blackboard), different student variables and outcome variables were analysed, and algorithms using the digital footprints of each student produced a risk level (high, medium or low risk). Teachers and students got ‘traffic light’ alerts. Students in courses that used the signals had a 10 to 20 percent increase in student success.
A content analysis of Course Signals use showed that summative feedback was much less related to student success, while formative (detailed, specific) feedback had an immediate effect on learning success.
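To make the traffic-light idea concrete, here is a minimal sketch of how such risk flags could be derived from LMS footprint data; the features, thresholds and logistic regression are my own assumptions for illustration, not the actual Course Signals algorithm.

```python
# Illustrative sketch only: the features, thresholds and model are assumptions,
# not the actual Course Signals algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical digital-footprint features per student:
# logins per week, assignments submitted, average quiz score
X_history = rng.random((200, 3)) * [20, 10, 100]
# Historical outcome: 1 = passed, 0 = failed (synthetic stand-in data)
y_history = (0.3 * X_history[:, 1] + 0.05 * X_history[:, 2]
             + rng.normal(0, 1, 200) > 4).astype(int)

model = LogisticRegression().fit(X_history, y_history)

def traffic_light(features):
    """Map predicted probability of success to a signal colour."""
    p_success = model.predict_proba([features])[0, 1]
    if p_success < 0.4:
        return "red"      # high risk
    elif p_success < 0.7:
        return "yellow"   # medium risk
    return "green"        # low risk

print(traffic_light([2, 1, 35]))   # a weakly engaged student -> likely "red"
```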
E2Coach at the University of Michigan (among the top two public universities in the US). They have large science classrooms, populated by students with very varied science backgrounds and prior grades.
In the E2Coach project they used the idea of ‘better than expected’: they looked at successful learning patterns and found that successful students were adaptive (trying different options to learn) and self-organised into peer groups to structure the content.
Top-performing students were asked to give pointers on what they did to be successful learners. Those pointers were given to new students as feedback on how they could increase their learning success, while leaving them the choice of whether to act on it (self-determination theory). This resulted in about a 5 percent improvement in learner success.
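A quick sketch of the ‘better than expected’ idea, under my own simplifying assumption that the expected grade is a linear fit on incoming GPA; E2Coach's actual modelling is certainly richer.

```python
# Sketch of 'better than expected': compare actual course grades with grades
# predicted from students' incoming GPA. Purely illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
incoming_gpa = rng.uniform(2.0, 4.0, 300)             # prior science background
course_grade = 0.8 * incoming_gpa + rng.normal(0, 0.4, 300)

# Expected grade modelled as a simple linear fit on incoming GPA
slope, intercept = np.polyfit(incoming_gpa, course_grade, 1)
expected = slope * incoming_gpa + intercept

# 'Better than expected' students: largest positive residuals
residual = course_grade - expected
better_than_expected = np.argsort(residual)[-10:]     # top 10 over-performers
print(better_than_expected)
# In E2Coach it was students like these who were asked for the study advice
# that gets relayed to incoming students.
```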

Challenges of learning analytics
Four challenges:
Generalisability: we are seeing predictive models for student success, but they only extend as far as they can be generalised. The significant predictors in these models did not carry over well across different contexts, so generalisability was quite low: some indicators are significant predictors in one context yet not in another. What is the reason behind this? Massive amounts of MOOC data are now being collected to look for specific reasons, but this work is difficult, as we first need to understand which questions we need to address (see the cross-context sketch after this list).
Student agency: also a challenge. How many of the decisions are made by the students themselves? The responsibility for learning is ultimately in their hands.
A common myth in learning analytics: the more time students spend on tasks, the more they will learn. Actually, this is not the case; often the reverse. Even time spent with educators frequently turns out to be an indicator of poor learner success.
Feedback presentation: we assumed that the only way to give feedback is through visualisations and dashboards, and many vendors in learning analytics focus on dashboards. But these dashboards are sometimes harmful: students compare themselves with the class performance, which can result in less engagement and learning. Students sometimes invested less time because the dashboards made them feel they were doing well, and with less investment came less learning.
Investment and willingness to understand: http://he-analytics.com and the SHEILA project (http://sheilaproject.eu/) investigated more than 50 senior leaders for their understanding of learning analytics. Institutions hardly provide opportunities to learn what learning analytics is really about. There is a lack of leadership on learning analytics, so in many cases institutions are not sure what it entails or what to do with it. That often results in simply buying a product… which does not make sense.
Lack of active engagement of all the stakeholders: students are mostly not involved from day one in the development of these learning analytics (no user-centred approach).
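To illustrate the generalisability challenge from the list above, here is a hedged sketch with synthetic data: a predictor fitted on one course is tested on another course where the same indicators behave differently, and accuracy drops. The features and data are invented for illustration only.

```python
# Illustrative sketch of the generalisability problem: a model trained on one
# course is tested on another where the same indicators behave differently.
# All data here is synthetic; the features are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

def make_course(n, forum_weight):
    """Synthetic course: success depends on forum activity with a course-specific weight."""
    X = rng.random((n, 2))                        # [forum posts, video views], scaled 0-1
    y = (forum_weight * X[:, 0] + (1 - forum_weight) * X[:, 1]
         + rng.normal(0, 0.1, n) > 0.5).astype(int)
    return X, y

X_a, y_a = make_course(500, forum_weight=0.9)     # course A: forum activity dominates
X_b, y_b = make_course(500, forum_weight=0.1)     # course B: video viewing dominates

model = LogisticRegression().fit(X_a, y_a)
print("within-course accuracy:", accuracy_score(y_a, model.predict(X_a)))
print("cross-course accuracy: ", accuracy_score(y_b, model.predict(X_b)))
```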

Direction for learning analytics
Learning analytics is about learning. So we need to fall back on what we already know about learning, and then design specific types of intervention using learning analytics. Learning analytics is more than data science: data science provides powerful algorithms, machine learning, system dynamics… but without more we end up with a pure data-crunching problem. We need theory (particular approaches: cognitive load, self-regulation), and practice also informs where to go. We need to consider whether the results make sense: which of the correlations are really meaningful? At the same time we need to take into account learning design and the way we construct learning paths for our students. We cannot ignore experimental design if we want meaningful learning analytics; we need to be very specific about study design. Interaction design covers the types of interfaces, but these need to be aligned with pedagogical methods.

How does this relate to the challenges mentioned before?
Generalisability: we need to take into account that one-size-fits-all will never work in learning. Different missions, different populations, different models, different legislation… down to the level of individual courses. Differences in instructional design mean that different courses need different approaches; it is all about contextual information. So what shapes our engagement? Social networks work best through so-called weak ties; networks with only strong ties restrain full learning success. Data mining can help us analyse these networks, e.g. exponential random graph models (not sure here?). Transfer learning: using models across different domains; there are recent good developments addressing this.
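A toy sketch of the weak-tie point, using networkx on an invented interaction network; the data and the strong-tie threshold are my own assumptions, not the exponential random graph modelling mentioned in the talk.

```python
# Toy sketch of weak vs strong ties in a learner interaction network.
# Edge weight = number of interactions (e.g. forum replies); the threshold for
# a 'strong' tie and the data itself are illustrative assumptions.
import networkx as nx

interactions = [           # (student, student, number of interactions)
    ("ana", "ben", 12), ("ana", "cem", 1), ("ana", "dia", 2),
    ("ben", "cem", 15), ("cem", "dia", 1), ("dia", "eva", 9),
]

G = nx.Graph()
for u, v, count in interactions:
    G.add_edge(u, v, weight=count)

STRONG = 5   # assumed cut-off: 5+ interactions counts as a strong tie

for student in G.nodes:
    ties = [G[student][nbr]["weight"] for nbr in G.neighbors(student)]
    weak_share = sum(w < STRONG for w in ties) / len(ties)
    print(f"{student}: {weak_share:.0%} of ties are weak")
# Students whose network consists only of strong ties may be missing the
# bridging (weak) ties that the talk associates with better learning outcomes.
```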

Student agency: back to established knowledge (a 2006 paper: students use operations to create artefacts for recall, to provide arguments, or for critical thinking). Student decisions are based on student conditions: prior knowledge, study skills, motivation… all of these conditions need to be taken into account. Sub-groups of learners can be identified using algorithms. Some students are really active but not productive; others are only moderately active yet perform very well in terms of studying. Study skills change and priorities change during learning, so learning agency differs as well. Desirable difficulties need to be addressed and investigated. There was no significant difference in success between the highly active and the moderately active students, which needs to be studied to find the reasons behind it. Learner motivation changes the most during the day, as can be seen from the literature. So we need to focus on understanding these reasons and set up interdisciplinary teams to highlight possible explanations, while grounding them strongly in existing theory.
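As an illustration of identifying such sub-groups algorithmically, here is a minimal k-means sketch over two assumed features (activity and performance); the real analyses behind the talk use much richer trace data.

```python
# Minimal sketch: cluster learners into sub-groups by activity and performance.
# Synthetic data and k-means are illustrative assumptions, not the actual study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Columns: [events per week in the LMS, final grade 0-100]
students = np.vstack([
    rng.normal([80, 55], [10, 8], (50, 2)),   # highly active, mediocre results
    rng.normal([30, 70], [8, 6], (50, 2)),    # moderately active, good results
    rng.normal([15, 40], [5, 10], (50, 2)),   # low activity, weak results
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(students))

for k in range(3):
    activity, grade = students[labels == k].mean(axis=0)
    print(f"cluster {k}: mean activity {activity:.0f}, mean grade {grade:.0f}")
```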

Analytics-based feedback: students need guidance, not only task-specific language indicators. This can be done with semi-automatic teacher triggers that provide more support and guidance, resulting in meaningful feedback that students actually use (look up research from Sydney, ask reference Inge). Personalised feedback has a significant effect (Inge, again seen in MobiMOOC). http://ontasklearning.org
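Below is a toy sketch of the semi-automatic trigger idea: the teacher writes condition/message rules once, and personalised feedback is drafted from each student's analytics record. The fields, rules and wording are invented for illustration; this is not the OnTask tool itself.

```python
# Toy sketch of rule-based personalised feedback: teacher-authored conditions
# are matched against each student's analytics record. All fields and rules
# are invented for illustration and do not reflect the actual OnTask tool.
records = [
    {"name": "Sam", "quiz_avg": 45, "videos_watched": 2, "forum_posts": 0},
    {"name": "Lea", "quiz_avg": 82, "videos_watched": 9, "forum_posts": 5},
]

rules = [  # (condition, teacher-written message template)
    (lambda r: r["quiz_avg"] < 50,
     "Your quiz average is {quiz_avg}%. Revisit the week 3 worked examples."),
    (lambda r: r["videos_watched"] < 4,
     "You have watched {videos_watched} lecture videos so far; the remaining ones cover exam topics."),
    (lambda r: r["forum_posts"] == 0,
     "Posting a question in the forum is a good way to check your understanding."),
]

for record in records:
    messages = [template.format(**record) for condition, template in rules
                if condition(record)]
    if messages:  # the teacher reviews these drafts before they are sent
        print(f"Dear {record['name']},\n" + "\n".join(messages) + "\n")
```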
Should we drop the study of visualisation? No, it is significant for study skills and for decision-making on analytics, but we need to focus on which methods work and gradually introduce visualisations, learning what works and what does not. Tailor them to specific tasks and design them differently than we have up till now.

Development of analytics capacity and culture: ethics and privacy concerns; very few faculty really ask students for feedback. The key points for developing a culture: think about discourse (not only technical specs). We need to understand data, work with IT, use different types of models rather than only data crunching, and start from what we already know. Finally, transformation: we need to step away from learning analytics as technology and talk to our stakeholders to know how to act, how to do it, and who is responsible for what… the process can be an inclusive adoption process (look it up). We need to think about questions and design strategies, and in working on the whole phenomenon we need to involve the students. Students are highly aware of the usefulness of their data.

If we want to be successful with learning analytics and make a significant difference or impact, we need to work together.
