
Tuesday, 17 September 2019

#ECTEL2019 Workshop #AI in #Education #liveblogpost #AIED @cova_rodrigo @paco

This is a live blog, so bits and pieces noted.

Paco Iniesto (The Open University, IET, AIED) is the workshop lead, and he is looking good and giving a strong overview.
AI is all around us: cars, games, robotics, AlphaGo (see the Netflix documentary), predictive policing, dating apps, thispersondoesnotexist.com (the 3-minute video on how these images are generated is worth a look), ...

What is AI?
It isn't easy to define AI: many people have an idea of what it is, but there is no single agreed definition. One attempt: computer systems designed to interact with the world ... (Luckin, Waynes...)

The promise of AI is not yet realized, although it has been developing for 40 years.
It's big business
AI shines a spotlight on existing educational practices
AI rehashes what we have at this point in time

Implications of AIED: algorithms and computation: what are the algorithms, what are their consequences, how to control them... accuracy and validity of assessments, are we treating students as human beings?

Lumilo augmented reality glasses for teachers (https://hechingerreport.org/these-glasses-give-teachers-superpowers/), video can be found here: https://kenholstein.myportfolio.com/the-lumilo-project This got some negative critiques from teachers and learners.



Ethical questions
There is a connection between affect and the psychological traits of learners, but where can this lead? (cf. Cambridge Analytica).
What if we hold the data for 'good' purposes, but others use it for 'bad' ones?
What about GDPR: who owns the data, and how does this affect funding if students opt out of the system and all their data is erased? Can we use blockchain in order to keep the data connected to the learners?
Where is the data stored, so that it can be erased, and how does this affect future employment?
Will the system be able to evaluate actual learning, if this is the case, what benefits will it bring to teaching and learning?
Does supporting learners in this way limit their self-directed learning-to-learn?
Starting from the technology and then moving to support learning seems the opposite of how it should be done.
What is the educational progress using these technologies?
What is the difference between monitoring and surveillance? (Where is the boundary?)
Can learners hack the system to get more or less support?
Does the teacher have enough time to support learners with difficulties? And does their help actually benefit the learning?
What about consent forms for those who are not able to give consent?
Marginalized people are in need of technological support, but how do we support them in a secure way?

Sources:
Sheila project: https://sheilaproject.eu/
Weapons of Math Destruction (Cathy O'Neil's book)

Post-it notes captured ideas from three different groups, addressing some of the questions mentioned above.







Monday, 14 January 2019

EU report on the impact of AI on Learning Teaching and Education #AI #education #EU #policy

The recently published report on the impact of artificial intelligence (AI) on learning, teaching and education gives a great outline of the realities of AI, the state of the art, and the challenges as well as opportunities for those of us with expertise in learning in general, or in learning theory. The report is part of the JRC Science for Policy documents, and it is very well written by Ilkka Tuomi (who is renowned for his expertise in the Internet, data, AI and computer science). Ilkka recorded a brief overview of the report, which can be seen below. In the report-related video, he refers to current machine learning systems as 'datavores', he defines (rightfully so) the term machine learning as an oxymoron, and he offers a very accessible parallel for current AI, namely 'artificial instinct' (as current AI is mainly about behaviourist approaches and patterns).

A very interesting perspective is that Ilkka and the report stress the importance of having someone on board of any AI for learning/teaching/education effort who has expertise in learning and learning theory.

The policy challenges mentioned at the end of the report are:

  • A continuous dialogue on the appropriate and responsible uses of AI in education is therefore needed.
  • In the domain of educational policy, it is important for educators and policymakers to understand AI in the broader context of the future of learning. As AI will be used to automate productive processes, we may need to reinvent current educational institutions.
  • In general, the balance may thus shift from the instrumental role of education towards its more developmental role.
  • A general policy challenge, thus, is to increase among educators and policymakers awareness of AI technologies and their potential impact.
  • Learning sciences could have much to offer to research on AI, and such mutual interaction would enable better understanding about how to use AI for learning and in educational settings, as well as in other domains of application.
  • As there may be fundamental theoretical and practical limits in designing AI systems that can explain their behaviour and decisions, it is important to keep humans in the decision-making loop.
  • The ethics of AI is a generic challenge, but it has specific relevance for educational policies.
  • Human agency means that we can make choices about future acts, and thus become responsible for them.  AI can also limit the domain where humans can express their agency.
  • An important policy challenge is how such large datasets that are needed for the development and use of AI-based systems could be made more widely available.


This 47-page report covers the following topics:

1 Introduction
2 What is Artificial Intelligence?
  2.1 A three-level model of action for analysing AI and its impact
  2.2 Three types of AI
    2.2.1 Data-based neural AI
    2.2.2 Logic- and knowledge-based AI
  2.3 Recent and future developments in AI
    2.3.1 Models of learning in data-based AI
    2.3.2 Towards the future
  2.4 AI impact on skill and competence demand
    2.4.1 Skills in economic studies of AI impact
    2.4.2 Skill-biased and task-biased models of technology impact
    2.4.3 AI capabilities and task substitution in the three-level model
    2.4.4 Trends and transitions
    2.4.5 Neural AI as data-biased technological change
    2.4.6 Education as a creator of capability platforms
    2.4.7 Direct AI impact on advanced digital skills demand
3 Impact on learning, teaching, and education
  3.1 Current developments
    3.1.1 "No AI without UI"
  3.2 The impact of AI on learning
    3.2.1 Impact on cognitive development
  3.3 The impact of AI on teaching
    3.3.1 AI-generated student models and new pedagogical opportunities
    3.3.2 The need for future-oriented vision regarding AI
  3.4 Re-thinking the role of education in society
4 Policy challenges

Below is the 20-minute video in which Ilkka Tuomi explains the report in accessible terms.




Saturday, 8 December 2018

#AI #MachineLearning and #philosophy session #OEB18 @oebconference @OldPhilNewLearn


At OEB2018 the last session I led was on the subject of AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on the subject of philosophy and AI.

As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes follow below; afterwards I embed the slides that preceded the session, to indicate where the idea for the workshop came from.

Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy

The session started off with the choices embedded in any AI, e.g. a Tesla car about to run into people: will it hit a grandmother or two kids? What is the 'best solution'? Further into the session this question gained additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters; and we could build into the car a rule that, in case of emergency, the lives of others are more important than the lives of those in the car, so that it simply crashes into a wall, avoiding both grandmother and kids.

The developer or creator gives parameters to the AI; with machine learning embedded, the AI will start to learn from there, based on feedback from or directed at the parameters. This is in contrast with computer-based learning, where rules are given and are either successful or not, but form no basis for new rules to be implemented.

From a philosophical point of view, the impact of AI (including its potential bias, coming from the developers or from the feedback received) could be analysed using Hannah Arendt's 'power of the system'; in her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of an AI system.

The growth of the AI based on human algorithms does not necessarily mean that the AI will think like us. It might choose to derive different conclusions, based on priority algorithms it chooses. As such current paradigms may shift.

Throughout the ages, the focus of humankind changed depending on new developments, new thoughts, new insights into philosophy. But this means that if humans put parameters into AI, those parameters (which are seen as priority parameters) will also change over time. This means that we can see from where AI starts, but not where it is heading.

How many 'safety stops' are built into AI?
Can we put some kind of ‘weighing’ into the AI parameters, enabling the AI to fall back on more important or less important parameters when a risk needs to be considered?

For humans, failure can result in growth based on those failures. AI also learns from 'failures', but it learns from differences in datapoints. At present the AI only receives a message 'this is wrong', whereas at such a moment (if something is wrong) humans make a wide variety of risk considerations. In the bigger picture, one can see an analogy with Darwin's evolutionary theory, where time finds what works based on evolutionary diversity. But with AI the speed of adaptation increases immensely.

With mechanical AI it was easier to define which parameters were right or wrong. E.g. with Go or chess you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing between them is a straight path of considerations. At present humans weigh many more considerations for each conundrum or action that occurs: a whole array of considerations that can also involve emotions, preferences... Looking at philosophy, you see an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes yields good, workable solutions on both sides, which can be defended as valuable outcomes although based on different priorities, and even on very different takes on a concept. The choices or arguments made in philosophy (over time) also clearly point to the power of society, technology and the reigning culture at that point in time. For what is good now in one place can be considered wrong in another place, or at another point in time.

It could benefit teachers if they were supported by AI that signals students with problems. (Of course this means that 'care' is one of the parameters important for society; in another society, students who have problems might simply be set aside. Either choice is internally consistent, but each builds on a different view: 'supporting all' versus 'supporting those who can, so we move forward quicker'. It is only human emotion that makes a difference in which choice might be the 'better' one.)

AI works in the virtual world. Always. Humans make a difference between the real and the virtual world, but for the AI all is real (though virtual to us).
Asimov’s laws of robotics still apply.

Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.

Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes it from there, learning by itself. The machine learns; as such, does the machine become liable if something goes wrong, but ...? (e.g. the Tesla crash).

Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or healthcare worker provides it with additional care and attention to the patient).
Similarly, smart-object dialogue took off once a feeling of attention was built into it: e.g. replies from Google Home or Alexa in the realm of "Thank you" when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also refers to the difference between feeling 'related to' something and being 'attached to' something.

Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.

AI is still defined between good and bad (ethics), and ‘improvement’ which is linked to the definition of what is ‘best’ at that time.

Societal decisions: what do we develop first with AI, a response to the refugee crisis or self-driving cars? This affects the parameters at the start. Compare it to some idiot savants, where high intelligence does not necessarily imply active consciousness.

Currently some humans are already bound by AI: e.g. astronauts where the system calculates all.  

And to conclude: this session ranged from the believers in AI ("I cannot wait for AI to organise our society") to those who think it is time for the next step in evolution, in the words of Jane Bozarth: "Humans had their chance".




Thursday, 6 December 2018

Data driven #education session #OEB18 @oebconference #data @m_a_s_c

From the session on data driven education, with great EU links and projects.

Carlos Delgado Kloos: using analytics in education
Opportunities
The Khan Academy system is a proven one, with one of the best visualisations of how students are advancing, with a lot of stats and graphs. Carlos used this approach for their 'zero' courses (courses on basic knowledge that students must master before moving on in higher ed).
Based on the Khan stats, they built a high level analytics system.
Predictions in MOOCs (see paper of Kloos), focusing on drop-out.
Monitoring in SPOCs (small private online courses)
Measurement of Real Workload of the students, the tool adapts the workload to the reality.
FlipApp (to gamify the flipped classroom): it reminds and notifies students that they need to watch the videos before class, or they will not be able to follow. (Inge: sent to Barbara).
Creation of educational material using Google Classroom. Google Classroom sometimes knows what the answer to a quiz will be, which can save time for the teacher.
Learning analytics to improve teacher content delivery.
Use of IRT (Item Response Theory) to see which quizzes are more useful and effective, interesting to select quizzes.
Coursera defines skills, matches them to jobs, and recommends courses based on that.
Industry 4.0 (big data, AI...) for industry can be transferred to Education 4.0 (learning analytics based on machine learning). (Education 3.0 is using the cloud, where both learners and teachers go.)
Machine learning infers the rules from answers and analysed data (in contrast with classical computer programming, which is just the opposite: given rules, it produces answers).
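This contrast between learning rules from data and hand-writing rules can be sketched in a few lines. A toy illustration only: the pass threshold, the example scores, and the function names are all invented for this sketch, not taken from any system mentioned above.

```python
# Classical programming: the rule is written by hand.
def rule_based_pass(score):
    return score >= 50  # hand-coded rule

# Machine learning (in miniature): infer the rule, here a threshold,
# from (score, passed) answer data instead of writing it down.
def learn_threshold(examples):
    best_t, best_acc = 0, 0.0
    for t in range(0, 101):
        acc = sum((s >= t) == passed for s, passed in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

data = [(30, False), (45, False), (55, True), (70, True), (90, True)]
learned = learn_threshold(data)  # a rule inferred from the answers
```

The first function is rules in, answers out; the second is answers in, rules out, which is the inversion the note above describes.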
Dangers:
Correlations: correlations are not necessarily correct conclusions (see the Spurious Correlations site for fun examples).
Bias: e.g. decisions for giving credit based on redlining and weblining.
Decisions in recruitment: e.g. Amazon found that the automation of their recruiting system resulted in a bias leading to recruiting more men than women.
Decisions in trials: e.g. COMPAS is used by judges to estimate the risk of reoffending, but skin colour was a clear bias in this program.
The Chinese social credit system gives minus points if you do something that is seen as not being 'proper'. It is also combined with facial recognition and with monitoring attention in class (Hangzhou Number 11 High School).
Monitoring tools (Gaggle, ...)
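The warning above that correlations are not conclusions is easy to demonstrate: any two series that merely trend in the same direction correlate strongly. A minimal sketch, with made-up numbers in the spirit of the Spurious Correlations site:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two unrelated quantities that both happen to grow year on year
# (figures invented for illustration):
cheese_consumption = [29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6]
cs_doctorates = [861, 930, 1024, 1070, 1205, 1325, 1453]
r = pearson(cheese_consumption, cs_doctorates)  # very close to 1
```

A near-perfect r between cheese eating and doctorates awarded obviously supports no causal conclusion, which is exactly the danger when analytics dashboards surface correlations as findings.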
Challenges
Luca challenge: responsible use of AI.
GDPR Art 22: automated individual decision-making, including profiling.
Sheilaproject.eu : identifying policies to adopt learning analytics. Bit.ly/sheilaMOOC is the course on the project.
Atoms and bits comparison: as with atoms, you can use bits for the better or for the worse (like atomic bombs).
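The IRT (Item Response Theory) point in the notes above can be made concrete with the simplest IRT model, the one-parameter (Rasch) model: the probability that a learner of ability theta answers an item of difficulty b correctly. A sketch with illustrative values, not tied to any particular quiz platform:

```python
import math

def rasch_p(theta, b):
    """1PL/Rasch model: P(correct | ability theta, item difficulty b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p).
    Highest when the learner's ability matches the item's difficulty."""
    p = rasch_p(theta, b)
    return p * (1 - p)
```

Under this model an item tells you most about learners whose ability sits near its difficulty; items everyone gets right (or wrong) discriminate poorly, which is one way IRT helps select which quizzes are 'more useful and effective'.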


Maren Scheffel on Getting the trust into trusted learning analytics @m_a_s_c
(Welten Institute of Open University, Netherlands)
Learning analytics: Siemens's (2011) definition is still the norm. But nowadays there is a lot about analytics and only a little about learning.

Trust: we currently believe that something is reliable, true, or capable. There are multiple definitions of trust; it is a multidimensional and multidisciplinary construct. Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding. For Luhmann the concept of trust compensates for insufficient capabilities for fully understanding the complexity of the world (Luhmann, 1979, Trust and ...).
For these reasons we must be transparent, reliable, and act with integrity to earn the trust of learners. There should not be a black box; it should be a transparent box of algorithms (transparent indicators, open algorithms, full access to data, knowing who accesses your data).

Policies: see https://sheilaproject.eu   

User involvement and co-creation: see the competen-SEA project (http://competen-sea.eu), capacity-building projects for remote areas or sensitive learner groups. One of the outcomes was to use co-design to create MOOCs (and trust), getting all the stakeholders together in order to come to an end product. MOOCs for people, by people. Twitter: #competenSEA

Thursday, 27 September 2018

Machine learning benefits and risks by expert Stella Lee #AI #data #learning

Machine learning has moved from mere hype into a real, strong, acknowledged learning power (not only in the news, but also on the AI stock market, e.g. the STOXX AI global indices; I was quite surprised to see this). Machine learning has the power to support personalized learning as well as adaptive learning, which allows an instructional designer to engage learners in such a way that learning outcomes can be reached in more than one way (always a benefit!). Machine learning allows the content or information provided for training/learning to be delivered in a way that fits the learner and reacts to learner feedback (answers, speed of response, etc.). Tailoring a fixed set of learning objectives into flexible training demands some technological options: data, algorithms that can interpret the data, access to some sort of connectivity (e.g. it might be ad hoc with wifi and an information hub, or via the cloud and the internet), and money to program, iterate and optimize the learning options continuously.

This (data, interpretation, choices made by machines via algorithms) means that machine learning combines so many learning tools, so much data and so much computing power that it inevitably comes with weighty philosophical and ethical decisions: what is the real learning outcome we want to achieve, what are the interpretations of our algorithms, and what is the difference between manipulation towards something people must learn and learning that still offers a critically grounded outcome for the learner?

Stella Lee offers a great overview of what it means to use machine learning (e.g. for personalized learning paths, for chatbots that deliver tech or coaching support, for performance enhancement). This talk is worth a look or listen. Stella Lee is one of those people who inspire me through their love for technology, by being thorough and thoughtful, and by being able to turn complex learning issues into feasible learning opportunities you want to try out. She gave a talk at Google Cambridge on the subject of machine learning and AI and ... she inspired her tech-savvy audience.

In her talk she also goes deeper into the subject of 'explainable AI', which offers AI that can be interpreted easily by people (including relative laymen, which is the case for most learners). Explainable AI is an alternative to the more common black box of AI (useful article), where data interpretation is left to a select few. Stella Lee's approach to increasing explainability is granularity. This simple concept of granularity, i.e. considering which data or indicators to show and which to keep behind the curtain, enables a quicker interpretation of the data by the learner or other stakeholders. Of course this does not solve all transparency issues, but it opens a path towards interpretation and description, towards explainable AI. That way you show a willingness to enter into dialogue with the learners, and to consider their feedback on the machine learning processes. As always, engaging the learners is key for trust, advancement and clear interpretation (Stella says it far better than my brief statement here!).
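Stella Lee's granularity idea, choosing which indicators to surface and which to keep behind the curtain, can be sketched as a simple top-k filter over model indicators. The indicator names and weights below are invented purely for illustration, not taken from her talk or any real system:

```python
def top_indicators(weights, k=3):
    """Show only the k most influential indicators to the learner,
    keeping the rest 'behind the curtain'."""
    ranked = sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:k]

# Hypothetical indicator weights from some learner model:
indicators = {
    "quiz_scores": 0.42,
    "video_time": -0.05,
    "forum_posts": 0.18,
    "login_frequency": 0.31,
    "assignment_lateness": -0.27,
}
shown = top_indicators(indicators, k=3)  # what the dashboard displays
```

The point is the design choice, not the code: a coarse, curated view is easier for learners to interpret than the full weight vector, at the cost of hiding some of the model.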

Have a look at her talk on machine learning bias, risks and mitigation below (30 minute talk followed by a 15 min Q&A), or take a quick look at the accompanying article here.

One of the main risks is of course some sort of censorship, or interpretation done by the machine which results in an unbalanced, sometimes discriminatory result. In January I organised some thoughts on AI and education in another blogpost here. And I also gave a talk on the benefits and risks of AI last year, where I argued for increased ethics in AI for education (slides here).

Machine learning is a complex type of learning, it involves a lot of data interpretation, algorithms to get meaningful reactions coming from the data, and of course feedback loops to provide adaptive, personal learning tracks to a number of learners.
Situating it, I would call it costly, useful rather for formal than informal learning (at this point in time), and somewhere between individual and social learning, as the data comes from the many, but the adapted use is for the one. It does not leave much room for self-directed learning,  unless this is built into the machine learning algorithms (first ask learner for learning outcomes, then make choices based on data). 

Tuesday, 28 November 2017

Data analytics: one call for papers, free research papers, and a free paper on libraries

BayLAN conference, UC Berkeley

Submission deadline December 15th, 2017 (submit here)
Conference date: February 24, 2018
Conference fee: free for students; $15 for professionals
Can't make it to LAK'18 in Sydney? Want to meet Bay Area LA practitioners and researchers on your way south? BayLAN is a local network of researchers and professionals in the field of learning analytics. The BayLAN conference is a regional event designed to  facilitate the exchange of information, case studies, ideas, and early stage research in the field of learning analytics broadly construed.

The Bay Area Learning Analytics Network (BayLAN), in co-operation with SoLAR, is hosting the third annual BayLAN conference on February 24, 2018 at the University of California, Berkeley. The BayLAN conference brings together thought leaders from both industry and academia. Presentations and discussions will focus on current research at the intersection of education, data science, and technology.

Registration is free for students and $15 for professionals. To register click here:
https://www.eventbrite.com/e/the-bay-area-learning-analytics-conference-2018-baylan-tickets-38191297198

BayLAN is currently accepting abstract submissions for the conference. Abstracts should report on research in the broad area of learning analytics.  Presentations may include technical work that applies data science or other quantitative methods to improve education, as well as interventions, methodologies, tools or technology that are intended to improve learning outcomes.
Suggested topics: 
  • Theoretical topics: cognitive science models about education, data science methods applied to learning, novel theories about learning
  • Lessons learned: After going through the learning analytics implementation process, share insights that have surfaced that affect the completion of the project
  • Innovative new tools/techniques: Share newly developed tools or approaches to learning analytics that have been implemented at an institution.
  • Application of standards: A project making use of data/analytics standards and illustrating the benefits of such an approach.
  • Collaboration and sharing: How are groups of institutions/practitioners partnering to solve shared problems in the learning analytics space?
The deadline for submission is December 15, 2017. Submit here 

Free papers on Big Data, available until 4 December 2017 (free access runs until then)

Special Issue: Big Data in Robotics from Liebert publishing
This issue was guest edited by Jeannette Bohg, Matei Ciocarlie, Javier Civera, and Lydia E. Kavraki
FREE ACCESS through December 4, 2017. Read Now:
Big Data on Robotics
Jeannette Bohg, Matei Ciocarlie, Javier Civera, and Lydia E. Kavraki  Read Now
Recent Data Sets on Object Manipulation: A Survey
Yongqiang Huang, Matteo Bianchi, Minas Liarokapis, and Yu Sun  Read Now
Leveraging Large-Scale Semantic Networks for Adaptive Robot Task Learning and Execution
Adrian Boteanu, Aaron St. Clair, Anahita Mohseni-Kabir, Carl Saldanha, and Sonia Chernova  Read Now
The KIT Motion-Language Dataset
Matthias Plappert, Christian Mandery, and Tamim Asfour  Read Now
DOOMED: Direct Online Optimization of Modeling Errors in Dynamics
Nathan Ratliff, Franziska Meier, Daniel Kappler, and Stefan Schaal  Read Now

Free paper on libraries from paper to cloudbased

With the open access discussion going as strong as ever (see the great reflective article from Stephen Downes on the goal of open access, in reply to Wiley's view), IGI Global has just released a free paper (downloading free papers seems like a good strategy) on library history from paper-based to cloud-based, with examples.

The library paper can be downloaded here (you must register with IGI Global).

The abstract of this paper: A library is an organized collection of resources made accessible to a defined community for reference or borrowing. It provides physical or digital access to material, and may be a physical building, a virtual space, or both. During the last decades, libraries have witnessed a continuous revolution, and still do. This paper reviews the main milestones of that revolution, from the classical era up to the current cloud-based era, passing through the intermediate digital transformation period. It reviews library types, services, problems and drivers of change from the classical form. The paper then tackles the transformation of the library to the digital form. It discusses the characteristics of the digital library, the web-based library, and Library 2.0 through their advantages and limitations. The paper finally focuses on the current cloud-based era, where library cloud platforms, services management, innovative products and open environments are addressed through their features, added value, pros and cons. The paper also provides a comparative study of such solutions, coming up with open research issues. Hereby, the paper provides a comprehensive overview of the development of the library until now.

Friday, 16 September 2016

2 day online seminar (fee): explore opportunities for data & analytics #data

The eLearning Guild is organizing a two-day online seminar on 21-22 September 2016 on the opportunities for data and analytics within the eLearning industry. The standard rate is $395, but discounts are available (e.g. academics, non-profits, government: 35% discount). When looking at the full summit program you will see that the organizers provide a nice balance between technology, usability and theoretical frameworks on the subject.

There is a considerable amount of buzz in our industry surrounding data. Data continues to multiply, and so does the processing power available for analytics and interpretation. People are grappling with making sense of it all. At the Data & Analytics Summit, we’re inviting leaders from the data and analytics space (Ellen Wagner, Aaron Silvers & Megan Bowe, George Siemens, JD Dillon, MJ Bishop, Sean Putman, Tim Martin and a panel session putting data into perspective chaired by David Kelly) to explore the questions that our industry is struggling to answer and provide clarity around what we’re doing today and what’s possible for tomorrow. Each speaker session is about 60 min long, including Q&A. 

This Summit aims to give you a better understanding of the relationships between data and content, show you how to make data actionable within your organization, and prepare you to take advantage of the new opportunities that data is going to open up for learning in the future.

Thursday, 3 December 2015

#OEB15 keynote Cory Doctorow we are living in a surveillance state

If you can see a sci-fi novelist, blogger, and technology activist at work using a wonderfully harsh Canadian accent …. you need to stretch your fingers, massage your brain and prepare for some quick thinking.
Cory wears a nice reversed white-and-black jacket over his skull-pirate t-shirt, and it suits his stage presence. So, Cory Doctorow.

Schools are increasingly surveilled places, and this means learners are negatively impacted by imposed ideas of what is good and bad learning. E.g. website pages are blocked for learners, which flies in the face of digital learning, as kids are exploring information and content.
This means we are filtering pages, censoring pages as repressive regimes do. We are offshoring our kids' clicks to war criminals.
Kids (time-rich and cash-poor) will find workarounds, but this means they are not really learning digital skills, only marginal digital workaround-finding.
So what if we gave them real-life challenges: which pages would you catalogue, and what do you think about the pages you are not allowed to see?
Freedom of information act: explore that
Research companies by using the internet, and give that to the journals, magazines… which will make them fully digital citizens.
Children are the beta testers of the internet age.
It matters what we teach our kids.
MacBooks: laptops were equipped with software that would harvest the clicks of all the kids (in the most affluent high school in the USA).
Now school administrations provide laptops with those types of software.
The surveillance state is increasingly spreading to all digital users. They want to take the inkjet-cartridge model into every home, making it difficult to build tools without giving them some money (standardisation).
Digital locks are now used in cars to make sure that every garage owner buys the readers. And this pressures those garage owners to buy parts from particular stores; breaking the lock is treated as a felony.
But it is not restricted to cars; it is part of the complete ecosystem we live in (John Deere tractors, with software from Monsanto).
Also inside the body: the logbook of continuous blood glucose meters… so human beings are turned into inkjet printers.
The rules that prohibit people from downloading their own data generated by this software make them objects without rights.
We only have one methodology to see whether security works: making it transparent.
We need to ask for a knowledge age that is enlightened, to free people in our society.

Cory gives the example of the Stasi, then NSA espionage… so there is a productivity gain in surveillance due to data-recording devices.
(inge: add this to the telepathic slides)
And, strangely enough, each of us is actually paying the companies that collect this data (through our mobile plans).
Within living memory, people now seen as in the right were once people who could go to jail or be socially excluded. The way we as a society changed to a more open social attitude was by making things transparent. But how do we do this?

ICT literacy means thinking critically about where we stand on digital data and its social implications… all of this is foundational; future fights will be fought on the internet. So it is pivotal to make our world more transparent, especially security software, and to make people critical, smart and above all subversive in how they use the technology around them.
Computers have brought new powers to us, but producers prohibit access to your own data.
Although computers can have really safe encryption software, our kids must simply learn to use it.
 People care about security, so that is a good thing.
Electronic frontier toolkit (Inge look it up).
We need better tools, and social
Living in an age of surveillance: total control of the means of information: why is the computer not doing what you want it to do.
Improving digital citizenship should be led by institutions, so as teachers the only thing we can do is teach learners how to ask critical questions and to demand evidence-based proof. Digital citizenship is crucial, but there is a lock on personal data. Digital locks have been put on so much; how can we see where to unlock them? It is a matter of policy and skills.
At present none of us knows how much of our data is shared, or owned by whom.

Security services should be on the side of the users, not focused only on their own existence.

Wednesday, 29 April 2015

new issue of free #mobile journal and Learning #Analytics newsletter

Just sharing two new free options (one journal and one newsletter) that are filled with interesting articles on mobile learning (including a focus on lab experiments) and learning analytics (including a regional viewpoint).

The Learning Analytics Community Exchange newsletter is out, addressing the latest learning analytics research projects and ongoing ideas from the LACE community. 

There is a new series of country reports from scholars renowned for their contribution to national and international learning analytics research: a Dutch, Korean, Chinese and a Taiwanese perspective.

The newsletter also features interviews with Learning analytics experts and their views into the Future.

And one evidence based article is placed into the spotlight (I like this focus): The ‘Evidence of the Month’ on the site for April 2015 is a paper from this year’s Learning Analytics and Knowledge (LAK15) conference, ‘Crowd-sourced learning in MOOCs: learning analytics meets measurement theory‘.


A new iJim issue is out packed with articles that focus on remote labs (really interesting research):

*International Journal of Interactive Mobile Technologies (iJIM)*
Volume 9, Issue 2 (2015)

*Guest Editorial*
From the eScience Project Chair (Thomas Zimmer)

*Special Focus Papers*
  • Developing a Remote Laboratory for Heat Transfer Studies (Ridha Ennetta, Ibrahim Nasri)
  • UC1 Oscillator Remote Lab for Distant Electronics Education (Saida Latreche, Zehira Ziari, Smail Mouissat)
  • Remote Lab Experiments in Electronics for Use and Reuse (Thomas Zimmer, M. Billaud, M. Pic, D. Geoffroy)
  • Implementation of Online Optoelectronic Devices Course and Remote Experiments in UC1 iLab (Saida Rebiai, Nour El Houda Touidjen, Smail Mouissat)
  • Online Temperature Control System (Ikhlef Ameur, Kihel Mouloud, Boubekeur Boukhezzar, Guerroudj Abdelmalek, Mansouri Nora)
  • Online Laboratory in Digital Electronics Using NI ELVIS II+ (Ahmed Naddami, Ahmed Fahli, Mourad Gourmaj, Mohammed Moussetad)

*Regular Papers*
  • Agent and Mobile Tools for Telehomecare in Developing Countries: An Architecture Approach (Karim Zarour)
  • Exploring Smartphone Addiction: Insights from Long-Term Telemetric Behavioral Measures (Chad Tossel, Philip Kortum, Clayton Shepard, Ahmad Rahmati, Lin Zhong)
  • Virtual ATM: A Low Cost Secured Alternative to Conventional Mobile Banking (Shabnam Shaheen Sifat, Ali Shihab Sabbir)
  • A Mobile Based Tigrigna Language Learning Tool (Hailay Kidu Teklehaimanot)

*Short Papers*
  • Do We Have to Prohibit the Use of Mobile Phones in Classrooms? (Heba Mohammad, Ayham Fayyoumi, Omar AlShathry)
  • Influences on the Adoption of Mobile Learning in Saudi Women Teachers in Higher Education (Leena Ahmad Alfarani)

Cartoon in this blogpost is from the fabulous Nick D. Kim - the http://www.lab-initio.com/ site

Friday, 5 December 2014

#oeb14 Ellen Wagner @edwsonoma on PAR and #data are changing everything

This was a very illuminating session by Ellen Wagner, as it provided real options to tackle educational challenges (on the level of institutes as well as learners) on the basis of common educational data retrieved from a varied set of higher-ed institutions. REALLY interesting.

Analytics are taking the world by storm.
parframework.org

Learning analytics, big data is at its vanguard.
Staggering revelations about big data: in just 5 years we will be looking at 40 zettabytes of information; that is HUGE.

All the data floating around is not being analysed as much as we think. It takes an amazing amount of time, tech talent, and human interpretation to find value in the data.

The full effect of data cannot be envisioned today, as it hits all of society.

Where are we heading?
Pushing the data from the LMS into comprehensive analytics is very complex.
The big-data landscape is getting bigger every month, but virtually no big-data company is involved in education.
There is a specific reason why: money, of course, but in education, despite the fact that we talk about it, we do not use big data yet. We cannot process it using normal analytical tools. Most data comes in spreadsheets, so it is a big step from our educational analytics to the big-data solutions.

While big data raises expectations, student data drives big decisions in .edu.
Because it is new, we do not really know where we will go.

Ellen shares some US cases, to show which work is being done.

In the US there is an educational problem. There is more student debt than housing debt; this is unsustainable, and people sometimes cannot pay it back within their lifetime.
Some schools have 'open enrollment', which unfortunately results in very high drop-out rates.
So colleges are now given scorecards. But this means there must be standards. The metrics at present are focused on the first-time freshman… but this means 85% of contemporary learners do not fit that profile.

Public education funding has dropped dramatically. Performance metrics are the basis for funding, but as standards do not exist, it is tough to set expectations and hit targets.

So the scorecard metrics need to be reviewed.

Additionally, pedagogy is not mentioned on the scorecards.
In California, the licensing of student textbooks must be tied to student performance, and this affects universities' lives.

Metrics have ramped up expectations of what analytics can do. But the challenge is to build metrics that are constructive for both students and educators.

Education is helping people to grow.

Prescriptive analytics: prescribing educational treatment.

Use case: the Predictive Analytics Reporting (PAR) framework
a national, non-profit, multi-institutional collaborative focused on institutional effectiveness and student success
a massive data-analysis effort using predictive analytics to identify drivers related to student risk
PAR uses descriptive, inferential and predictive analyses to create benchmarks and institutional predictive models, and to inventory, map and measure student-success interventions that have a direct positive impact on behaviors correlated with success.
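To make the descriptive-benchmark idea concrete, here is a minimal sketch of computing per-institution retention rates against a cross-institutional benchmark from records that share common data definitions. This is not PAR's actual code; the field names (`institution`, `retained`) are illustrative assumptions.

```python
# Sketch: descriptive benchmarking across institutions that share
# common data definitions. Field names are assumptions, not PAR's schema.
from collections import defaultdict

def retention_benchmark(records):
    """Per-institution retention rate plus the cross-institutional benchmark."""
    totals = defaultdict(lambda: [0, 0])  # institution -> [retained, total]
    for r in records:
        totals[r["institution"]][0] += 1 if r["retained"] else 0
        totals[r["institution"]][1] += 1
    per_school = {k: kept / total for k, (kept, total) in totals.items()}
    benchmark = sum(r["retained"] for r in records) / len(records)
    return per_school, benchmark

records = [
    {"institution": "A", "retained": True},
    {"institution": "A", "retained": False},
    {"institution": "B", "retained": True},
    {"institution": "B", "retained": True},
]
per_school, benchmark = retention_benchmark(records)
# per_school -> {"A": 0.5, "B": 1.0}; benchmark -> 0.75
```

The point of the common definitions is exactly that this one computation works unchanged for every participating school.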

If you, as an educator, do not know what happens with your learners, then how can we expect to change education for the better?

Pedagogy is important, but the emotional effects of education and the feel of education must be taken into account as well.

The privacy issues related to the learner data are staggering.

First meeting with the Bill and Melinda Gates Foundation: NO, we do not think you can do it. So she went on a school circuit to see what they wanted, could offer, and needed: 700,000 student records from multiple institutes. The data set was upped to 8 million records; this got the YES from Bill and Melinda. Granted, the data were nothing compared to weather data or business data.

But descriptive benchmarks can now be produced, and for each institution predictive data can be given based on its student records.

Something unexpected emerged: making a prediction is not enough. Finding out how the prediction can address the challenges is the most important thing.

It is difficult to get open, transparent data to work with, due to educational ethics and student-related issues around their data.

First three years: building the data resources to get started with analytics.
Now: start the analysis.

The institutes varied: community colleges (where this is rarely done), schools considered progressive as well as 'old school', and competency-based universities.

We tried to collect data that was available for every student at every school: the simple things we could get our hands on => common data definitions.

All data is anonymised (both learners and schools), but the schools keep the encryption key to link data back to learners (VERY important, cfr. the InBloom project for K-12 schools, which resulted in a big emotional issue around data).
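The school-keeps-the-key idea can be sketched roughly as keyed pseudonymisation. This is a hypothetical HMAC-based scheme for illustration, not PAR's actual mechanism; the key value and field names are made up.

```python
# Sketch: records leave the school under an opaque ID; only the school,
# which keeps the secret key, can re-generate a learner's pseudonym.
# Key value and fields are illustrative assumptions.
import hmac
import hashlib

SCHOOL_KEY = b"kept-on-premises-by-the-school"  # never shared with analysts

def pseudonym(student_id: str) -> str:
    """Deterministic opaque ID: the same student always maps to the same pseudonym."""
    return hmac.new(SCHOOL_KEY, student_id.encode(), hashlib.sha256).hexdigest()

record = {"student_id": "s123", "grade": 0.82}
shared = {"pid": pseudonym(record["student_id"]), "grade": record["grade"]}
# Analysts see only `shared`; without SCHOOL_KEY the pid cannot be
# re-generated even for a guessed student_id.
```

Because the mapping is deterministic, a student's records can still be joined across data sets, while re-identification stays under the school's control.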

If solutions are found to improve learners' education, they are delivered to all. So every action taken by the school is questioned for impact, making the actually impactful strategies visible.

So they used structured, readily available data, openly published via a CC license:
https://public.datacookbook.com/public/institutions/par

One of the things that happened: they can now draw comparable conclusions.

Great point on online versus f-2-f colleges (note to self: add movie)

Descriptive benchmarks (cross-institutional) and predictive insights (institution-specific), all with specific filters, e.g. to isolate subgroups.

Predictive models reduce guesswork to find students at risk.

For those institutes that have open enrollment and attract all students, these predictive insights can be complemented by other factors that you, as an institute, can research: for instance the unpredictability of human life (no paycheck in time, death, health issues).

Putting it all together
determine student probability of failure
determine which students respond to interventions
determine which interventions are most effective
allocate resources accordingly
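A toy illustration of the four steps above, with made-up student names, risk scores, interventions and costs. This is not PAR's model, just the shape of the logic: rank by predicted risk, pick the intervention with the best estimated effect per unit cost, and spend a fixed budget on the riskiest students first.

```python
# Sketch of "putting it all together": risk-ranked allocation of a
# fixed budget to the most cost-effective intervention. All numbers
# and intervention names are invented for illustration.

def allocate(students, interventions, budget):
    """students: {name: risk 0..1}; interventions: {name: (effect, cost)}."""
    # Step 3: which intervention is most effective (per unit cost)?
    best_name, (_, best_cost) = max(
        interventions.items(), key=lambda kv: kv[1][0] / kv[1][1]
    )
    # Steps 1 and 4: highest predicted risk first, until the budget runs out.
    plan = []
    for name, risk in sorted(students.items(), key=lambda kv: -kv[1]):
        if budget < best_cost:
            break
        plan.append((name, best_name))
        budget -= best_cost
    return plan

students = {"ann": 0.9, "bob": 0.4, "cam": 0.7}
interventions = {"tutoring": (0.30, 200), "nudge-email": (0.05, 1)}
print(allocate(students, interventions, 3))
# [('ann', 'nudge-email'), ('cam', 'nudge-email'), ('bob', 'nudge-email')]
```

Step 2 (which students actually respond to an intervention) is the hard empirical part; in practice it would replace the single "best intervention for everyone" shortcut taken here.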

Now also (based on John Campbell work)
inventorying and categorizing student-success interventions / supports using a common framework
based on known predictors of risk and success
in the context of the academic life cycle

addresses "now what?" by linking predictions to action
enables cross institutional benchmarking
supports local and cross institutional

[Ellen: very hard time to turn down a dare :-)  ]

#oeb14 Stephen Downes on personalised #learning

These are liveblogging notes from Stephen Downes keynote.

Preoccupations around the learner, and how they seem to be pro-active online, but rather passive in face-to-face situations. The idea of the learners as citizens and learners as consumers.
How do we equip the learners of today for the jobs of today and of the future?

Stephen Downes on reclaiming personalised learning
The overall theme is that education is in for a reality check. The reality check he wants to share is that education and educational companies must face the fact that they no longer own the learners nor their learning.
The learner is no longer the customer, the learner is now the product. Maybe in the form of tuition fees, maybe in the form of big data.
When they talk about the learning process, the core concept is about being interactive, repurposing, about personal learning.
To teach is to model and demonstrate, to learn is to reflect and construct.
Learning is a form of recognition. It is something that feels natural, and that grows following patterns, this enables recognition.

Reclaiming the web means it is important to have your own space; that is what is meant by reclaiming: the concept of bringing back to us what is ours.
Education has been the march of the LMS – the giant silos of learning. Once you leave the LMS institution, your own content is lost. Education has become a discipline where personal content is on many occasions no longer personal. We must not only reclaim the content, we must reclaim the production, the process.

Stephen distinguishes between personal learning and personalized learning. Personal learning is all your own learning. Personalized: you take something from the shelf, tweak it, and then say: it is personalised. Metaphor: caramel and caramelized.

The web is not a platform, it is a bunch of personal spaces. This was the basis for the cMOOCs in 2008. The fact that these MOOCs were distributed added to the fact that this learning was personal. This created a network where the participants genuinely owned their learning. But what followed took the learning away from the personal to the personalized again, where the data was no longer personal.

Learning data are important, but should not be commoditised. Not everything needs to be commercialized. Institutions need to learn this distinction: let it go. Platforms and proprietors do not own the data. The data should belong to the person.
Why? They do not own this market, no matter what they think. A person should be able to choose from ALL sources; if you are locked into a platform, anything outside it is not visible. This is top-down control, which is something we do not need for learning. Learning is not acquiring something, but becoming something. We do not become something by consuming rehashed content, but by resharing, creating, and recreating content that is of importance to us as personal learners.

Analytics purports to tell you who you are; all humans are put into 16 categories. There is a distinction between big data and personal data. Big data = many people using one service; personal data = one person using many data services = deep analysis. This truly tells the story of 'me'. And who would want all this personal data to be given to outsiders?
Personal network versus institutional library. Personal network of connections, resources and content that you as a person aggregate.
Education as the commodity, not the student. The things we do belong to us, the person.

With that in mind, Stephen supports the idea of reclaiming personal learning.

LPSS – Learning and Performance Support Systems: a network of personal learning environments.
http://lpss.me – prototype PLE – it does instantiate the end-to-end solution of personal learning, storing it in a space of your choice.
The design is based on putting the learner at the center, connecting to services and institutes. It is not a platform but a connector of resources and services, to be used for your own purposes and goods. The idea of Ed Net Neutrality.
No matter which provider, they should not dictate what you should do. The personal learning record: data owned by the individual, shared only with permissions.
Look at downes eporfolio-and-bades-workshop-oeb14.hml

Relevant: the PLR (personal learning record). Third parties can provide analytics services, but they do not get free, unfettered access.
Analytics as a service….
(inge check out owncloud)

We collectively need to decide whether our own personal data is something that needs to be commoditised or should be reclaimed as a person, as an individual.


Friday, 4 April 2014

Part4 #xAPI seminar Andrew Downes on #learningdesign #eln

Almost groggy here after information overload and processing... hoping still to make a bit of sense. Great seminar!
Part4 #xAPI seminar Andrew Downes on #learningdesign. Wonderful resource: http://tincanapi.co.uk/

And if you want to get an xAPI UK project going, have a look here, at the end there is link to get in touch and start a conversation

If you want to start designing xAPI statements, here is that start:

And a few pointers on how to start tracking xAPI and real-world activity

And an iPhone xAPI statements viewer (free) here:
https://itunes.apple.com/gb/app/experience-api-xapi-statement/id550133878?mt=8

Looking at how Tin Can design affects learning design
He is the key UK lead for xAPI at Epic.


An xAPI mindset
The xAPI looks at events, not necessarily the status of events (the latter is typical of SCORM), which results in different learning designs.

The basic statement pattern is actor / verb / object:

I / did / this
A learner / succeeded at / a work task (other example objects: some elearning, their personal goal)

And the inverted, old-style statement: computer / completed / me.
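An "I did this" statement, serialised to the JSON shape xAPI uses, can be sketched in Python. The actor/verb/object structure and the `completed` verb URI follow ADL's standard xAPI vocabulary; the actor email and activity id are invented for illustration.

```python
# Sketch of a minimal xAPI statement: actor / verb / object as JSON.
# The verb URI is from ADL's vocabulary; actor and activity id are made up.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/some-elearning",
        "definition": {"name": {"en-GB": "Some elearning"}},
    },
}
print(json.dumps(statement, indent=2))  # the JSON you would POST to an LRS
```

Because statements are just self-describing events like this, a learning record store can collect them from any experience, not only from a course sitting inside an LMS.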


Think about
What different experiences do or could make up your blend of learning?
What needs to happen in experiences X to trigger a change in experience Y
What is a natural flow for your learners?


Challenges on informal xAPI stuff
Reliability of self-reporting
I did this => prove it!
Will learners report their learning?
Please complete this form => possibly NO (so what can we do to avoid this, or work around it? What might be a benefit?) – e.g. training courses only made available to those who do send feedback.
Privacy concerns
We are tracking everything you do => Ummmmm (big brother issue, communicated well, open on it)
Interoperability

It’s not working, is it? => Have you tried turning it off and on again? (So communicating on this is crucial, both with learners and with developers/experts.)
Too much data
Joe moved his mouse 1 pixel => Which direction?
Correlation is not causation.