Tuesday, 19 February 2019

Just sharing a few rejections: paper & funding, and solutions #academicLife #life #loveMyNetwork


Life can be hard, both personally and professionally, yet at the same time life can simply push you towards a more pleasant option along the way, seemingly using rejections to get you on to the right track. I sure hope this will be the case, but only hindsight will tell. [addition one day after writing this post: while sharing these ideas on Facebook, I got such an inspiring response from my network that I decided to add their ideas and remarks below, between square brackets]
Today I was informed that my co-authored paper for the eMOOC summit 2019 in Naples was rejected. Rejections rarely result in joy, and this was no exception. Writing a paper is, to some extent, always a personal effort. You try with all your ability (and mostly under a bit of time pressure) to come up with a paper that shares your research in just a few pages, referencing the great minds who came before you in your field of expertise. So, when a paper gets rejected, it simply hurts. It feels personal to some extent.
The rejection came one week after my submission for a prestigious Marie Curie Fellowship was rejected as well; it did not reach the threshold. The review did have a lot of positive points, though (which softened the blow). Granted, I wrote this submission as a plan B, in order to increase my options for getting back to work after I recovered from the year of rehabilitation that followed the cancer diagnosis. I put my heart into it, and not only me but also the professor who was willing to employ me in his department if the fellowship was successful. Luckily, I was able to get back to work on good terms, and on an inspiring project.
[It seems that rejections are common to everyone; even the most highly esteemed scholars get them despite their obvious wisdom and knowledge. My friends shared some good advice and resources that help to bounce back from rejection. First off: onward and upward. As simple as it sounds, it works ... once you have managed to soften the sting of having your work rejected. The process is to reflect, look at the feedback (or if they did not send any, ask for all the feedback, anonymized of course), and rewrite and resubmit. Next, a great article on Medium about The Iceberg Illusion; I am adding the picture here as well.]

But the above two rejections just made me realize once more that I am not a traditional academic and as such, I doubt whether I can ever be part of the whole deal. Maybe this frequency of rejection is simply normal, but at present, I just feel I need to take another leap. Just like I did three times before. Maybe I am not made to gradually move forward? Maybe my thing is just this .... jumping ahead and then working on that 'new' concept until it becomes more mainstream.
[Feedback is an essential first step; next, of course, is to get going and to know thyself. And to repeat to yourself that critique is not personal, and that it can be based on a number of reasons that do not even have to be directly related to the work you did. In a way, emotion wins over reason every time, but that does not mean we cannot rationalize after the first emotions have passed.]
Ciska sometimes tells me: "don't whine because you are living off the beaten track; even if you could walk the straight and narrow, you would still roll out your own route to get to the next place". Maybe she is right, but it does not make things easier. Maybe it is never easy for any of us, even for those who walk the more traditional roads to achieve a professional space in society. I don't know, but each time I get such a rejection, I just feel it is because of me, and it feels personal.
Okay, time to move forward again. Working on a project which combines human resources, AI and learning... fun, I must admit.
[and this is - and has always been - an inspiring Last Lecture]


Cartoon in this blogpost is from the fabulous Nick D. Kim - the http://www.lab-initio.com/ site

Monday, 14 January 2019

EU report on the impact of AI on Learning Teaching and Education #AI #education #EU #policy

The recently published report on the impact of artificial intelligence (AI) on learning, teaching and education gives a great outline of the realities of AI, the state of the art, and the challenges as well as opportunities for those of us with expertise in learning in general, or in learning theory. The report is part of the JRC Science for Policy series, and it is very well written by Ilkka Tuomi (who is renowned for his expertise in the Internet, data, AI and computer science). Ilkka recorded a brief overview of the report, which can be seen below. In the report-related video, he refers to current machine learning systems as datavores, he defines (and rightfully so) the term machine learning as an oxymoron, and he offers a very accessible parallel for current AI, namely Artificial Instinct (as current AI is mainly about behaviourist approaches and patterns).

A very interesting perspective is that Ilkka and the report stress the importance of having someone with expertise in learning and learning theory on board of any AI for learning/teaching/education effort.

The policy challenges mentioned at the end of the report are:

  • A continuous dialogue on the appropriate and responsible uses of AI in education is therefore needed.
  • In the domain of educational policy, it is important for educators and policymakers to understand AI in the broader context of the future of learning. As AI will be used to automate productive processes, we may need to reinvent current educational institutions.
  • In general, the balance may thus shift from the instrumental role of education towards its more developmental role.
  • A general policy challenge, thus, is to increase among educators and policymakers awareness of AI technologies and their potential impact.
  • Learning sciences could have much to offer to research on AI, and such mutual interaction would enable better understanding about how to use AI for learning and in educational settings, as well as in other domains of application.
  • As there may be fundamental theoretical and practical limits in designing AI systems that can explain their behaviour and decisions, it is important to keep humans in the decision-making loop.
  • The ethics of AI is a generic challenge, but it has specific relevance for educational policies.
  • Human agency means that we can make choices about future acts, and thus become responsible for them.  AI can also limit the domain where humans can express their agency.
  • An important policy challenge is how such large datasets that are needed for the development and use of AI-based systems could be made more widely available.


This 47-page report covers the following topics:

1 Introduction
2 What is Artificial Intelligence?
2.1 A three-level model of action for analysing AI and its impact
2.2 Three types of AI
2.2.1 Data-based neural AI
2.2.2 Logic- and knowledge-based AI
2.3 Recent and future developments in AI
2.3.1 Models of learning in data-based AI
2.3.2 Towards the future
2.4 AI impact on skill and competence demand
2.4.1 Skills in economic studies of AI impact
2.4.2 Skill-biased and task-biased models of technology impact
2.4.3 AI capabilities and task substitution in the three-level model
2.4.4 Trends and transitions
2.4.5 Neural AI as data-biased technological change
2.4.6 Education as a creator of capability platforms
2.4.7 Direct AI impact on advanced digital skills demand
3 Impact on learning, teaching, and education
3.1 Current developments
3.1.1 “No AI without UI”
3.2 The impact of AI on learning
3.2.1 Impact on cognitive development
3.3 The impact of AI on teaching
3.3.1 AI-generated student models and new pedagogical opportunities
3.3.2 The need for future-oriented vision regarding AI
3.4 Re-thinking the role of education in society
4 Policy challenges

Below is the 20-minute video in which Ilkka Tuomi explains the report in accessible terms.




Friday, 4 January 2019

Call for Papers #CfP #AI #mLearning #MOOC in conferences #UNESCO @FedericaUniNa

January has started and three important calls for papers are coming up, all related to conferences. The three conferences are: eMOOCs2019 (on MOOCs), Mobile Learning Week at UNESCO (with a focus on AI for development and mobile learning), and eLearning Africa (this year in Côte d'Ivoire), listed per deadline of the CfP.

Mobile learning week UNESCO (Paris, France): focus on AI for sustainable development
Call for proposals deadline: 11 January 2019
UNESCO Global AI Conference: Monday 4 March 2019
Policy Forum and Workshops: Tuesday 5 March 2019
Symposium: Wednesday 6 & Thursday 7 March 2019
Strategy labs & International Women’s Day: Friday 8 March 2019
Exhibits: Monday 4 to Friday 8 March 2019
More information: https://en.unesco.org/mlw/2019
UNESCO, in partnership with its confirmed partners – the International Telecommunication Union and the Profuturo Foundation – will convene a special edition of Mobile Learning Week (MLW) from 4 to 8 March 2019, at the UNESCO Headquarters building in Paris (France). The five-day event, under the theme ‘Artificial Intelligence for Sustainable development’ will start with the ‘Global Conference - Principles for AI: Towards a humanistic approach?’, followed by a one-day Policy Forum and Workshops, a two-day International Symposium and a half-day of Strategy Labs. On 8 March, towards the close of MLW, participants will be invited to join the celebration of International Women’s Day, particularly a debate on Women in AI to be held in UNESCO Headquarters. During the entire week, exhibitions and demonstrations of innovative AI applications for education and more than 20 workshops will be organized by international partners and all programme sectors of UNESCO.
eMOOCs 2019 in Naples, Italy
Deadline CfP: 14 January 2019.
Conference date:  May 20 – 22, 2019
More information: https://emoocs2019.eu/call-for-papers/overview/
Description
The Higher Education landscape is changing. As the information economy progresses, demand for a more highly, and differently, qualified workforce and citizens increases, and HE Institutions face the challenge of training, reskilling and upskilling people throughout their lives, rather than providing a one-time in-depth education. The corporate and NGO sectors are themselves exploring the benefits of a more qualified online approach to training, and are entering the education market in collaboration with HE Institutions, but also autonomously or via new certifying agencies. Technology is the other significant player in this fast-changing scenario. It allows for new, data-driven ways of measuring learning outcomes, new forms of curriculum definition and compilation, and alternative forms of recruitment strategy via people analytics.

At the MOOC crossroads where the three converge, we ask ourselves whether university degrees are still the major currency in the job market, or whether a broader portfolio of qualifications and micro-credentials may be emerging as an alternative. What implications does this have for educational practice? What policy decisions are required? And as online access eliminates geographical barriers to learning, but the growing MOOC market is increasingly dominated by the big American platforms, what strategic policy do European HE Institutions wish to adopt in terms of branding, language and culture?

The EMOOCs 2019 MOOC stakeholders summit comprises the consolidated format of Research and Experience, Policy and Business tracks, as well as interactive workshops. Original contributions that share knowledge and carry forward the debate around MOOCs are very welcome.

eLearning Africa - Abidjan - Côte d'Ivoire
Deadline CfP: February 22, 2019.
Conference date: October 23 - 25, 2019
More information: https://www.elearning-africa.com/programme_cfp.php
Description
The 14th edition of eLearning Africa, the International Conference & Exhibition on ICT for Education, Training & Skills Development, will take place in Abidjan, Côte d'Ivoire from October 23 - 25, 2019 and is co-hosted by the Government of Côte d'Ivoire.

A unique event, Africa’s largest conference and exhibition on technology supported learning, training and skills development, eLearning Africa is a network of leading experts, professionals and investors, committed to the future of education & training in Africa.

Read more about the eLearning Africa 2019 theme, The Keys to the Future: Learnability and Employability, and become involved in shaping the conference agenda by proposing a topic, talk or session here.
Register today to profit from our Early Bird Rate

About eLearning Africa
Founded in 2005, eLearning Africa is the leading pan-African conference and exhibition on ICT for Education, Training & Skills Development. The three day event offers participants the opportunity to develop multinational and cross-industry contacts and partnerships, as well as to enhance their knowledge and skills.
Over 13 consecutive years, eLearning Africa has hosted 17,278 participants from 100+ different countries around the world, with over 80% coming from the African continent. More than 3,530 speakers have addressed the conference about every aspect of technology supported learning, training and skills development.

Tuesday, 1 January 2019

Planning for what might prove to be impossible #OPNLearn

After days if not weeks of contemplation - and reading Eleanor Roosevelt's "You Learn by Living", I have decided to go for it, no matter what this new frontier will bring me. This idea of Old Philosophers and New learning will no doubt need more time to develop and mature, but from here onward it will be a project and I will develop it as openly as possible. 

The thought of starting and being able to bring a new project to fruition is daunting. I am over 50, I have been a type 1 diabetic for seven years, and I have had breast cancer. Looking at these three facts makes me doubt whether any new project will be successful. And by success I mean being able to lean on this activity to feel confident, provide new ideas by combining old ones, and have the money to support all of this happening, even growing. On the other hand ... I have been working on new technologies and innovation with success (= international awards), I was able to grow from my early years as a cleaning lady/waitress into a person with a PhD (a rough road), and all along I have gathered some wonderful, intelligent, interesting and magnificent friends living across this beautiful globe. In Dutch I would say that any new project I start would be a case of "het kan vriezen, het kan dooien" (it can freeze, it can thaw): it can go either way, but it will at least result in something.

So here it goes. Anxiety is present, and I must admit I do not like to fail at something, but I need to do this. It feels as though this is the last thing I can do to attain something that might possibly add to a thoughtful, respectful world. Here goes nothing...

  

Saturday, 8 December 2018

#AI #MachineLearning and #philosophy session #OEB18 @oebconference @OldPhilNewLearn


At OEB2018 the last session I led was on the subject of AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on the subject of philosophy and AI.

As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes follow below; afterwards I embed the slides that preceded the session, to indicate where the idea for the workshop came from.

Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy

The session started off with the choices embedded in any AI, e.g. a Tesla car about to run into people: will it run into a grandmother or into two kids? What is the 'best solution'? Further into the session this question got additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters; and we could build into the car that, in case of emergency, it must decide that the lives of others are more important than the lives of those in the car, and as such simply crash into the wall, avoiding both grandmother and kids.

The developer or creator gives parameters to the AI; with machine learning embedded, the AI will start to learn from there, based on feedback from or directed at those parameters. This is in contrast with classical rule-based computing, where rules are given and either succeed or fail, but are no basis for new rules to be implemented.
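To make that contrast concrete, here is a minimal Python sketch (entirely my own illustration, with invented names and numbers, not anything presented at the session): a fixed rule written by a developer next to a tiny 'learned' parameter that shifts with feedback.

def rule_based_brake(distance_m):
    # Fixed rule written by the developer: brake under 10 metres. It never changes.
    return distance_m < 10.0

class LearnedBrake:
    # Starts from a developer-given parameter, then adapts it based on feedback.
    def __init__(self, threshold_m=10.0, learning_rate=0.5):
        self.threshold_m = threshold_m
        self.learning_rate = learning_rate

    def decide(self, distance_m):
        return distance_m < self.threshold_m

    def feedback(self, distance_m, should_have_braked):
        # Nudge the threshold when the decision disagreed with the feedback.
        if self.decide(distance_m) != should_have_braked:
            direction = 1.0 if should_have_braked else -1.0
            self.threshold_m += direction * self.learning_rate

brake = LearnedBrake()
for distance, needed in [(12.0, True), (11.0, True), (4.0, True), (30.0, False)]:
    brake.feedback(distance, needed)       # the learned parameter shifts; the fixed rule above never does
print(rule_based_brake(10.5), brake.decide(10.5))   # False vs True after learning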

From a philosophical point of view, the impact of AI (including its potential bias coming from the developers or the feedback received) could be analysed using Hannah Arendt's 'power of the system'. In her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of the AI system.

The growth of an AI based on human algorithms does not necessarily mean that the AI will think like us. It might derive different conclusions, based on the priority algorithms it chooses. As such, current paradigms may shift.

Throughout the ages, the focus of humankind changed depending on new developments, new thoughts, new insights into philosophy. But this means that if humans put parameters into AI, those parameters (which are seen as priority parameters) will also change over time. This means that we can see from where AI starts, but not where it is heading.

How many ‘safety stops’ are built into AI?
Can we put some kind of ‘weighing’ into the AI parameters, enabling the AI to fall back on more important or less important parameters when a risk needs to be considered?
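As a purely hypothetical illustration of such 'weighing' (the parameters, weights and options below are invented, not taken from any real vehicle or from the session), one could give each risk parameter a weight and let the system pick the option with the lowest weighted risk:

# Invented weights expressing how much each risk parameter 'counts'.
RISK_WEIGHTS = {"harm_to_pedestrians": 10.0, "harm_to_occupants": 8.0, "property_damage": 1.0}

# Invented risk estimates (0 = no risk, 1 = maximal risk) per possible action.
options = {
    "swerve_left":     {"harm_to_pedestrians": 0.9, "harm_to_occupants": 0.1, "property_damage": 0.2},
    "swerve_right":    {"harm_to_pedestrians": 0.6, "harm_to_occupants": 0.1, "property_damage": 0.2},
    "brake_into_wall": {"harm_to_pedestrians": 0.0, "harm_to_occupants": 0.7, "property_damage": 1.0},
}

def weighted_risk(estimates):
    # Sum of each estimated risk multiplied by how much we declared to care about it.
    return sum(RISK_WEIGHTS[name] * value for name, value in estimates.items())

best = min(options, key=lambda name: weighted_risk(options[name]))
print(best)   # with these made-up weights, crashing into the wall spares the pedestrians

The point of the sketch is only that the 'weighing' question is really a question about who sets those weights, and on what grounds.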

For humans, failure can result in growth based on those failures. AI also learns from 'failures', but it learns from differences in datapoints. At present the AI only receives a message 'this is wrong', whereas at that moment in time – if something is wrong – humans make a wide variety of risk considerations. In the bigger picture, one can see an analogy with Darwin's evolutionary theory, where time finds what works based on evolutionary diversity. But with AI the speed of adaptation increases immensely.

With mechanical AI it was easier to define which parameters were right or wrong. E.g. with Go or chess you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing among those options is a straight path of considerations. At present humans make many more considerations for each conundrum or action that occurs. This means that there is a whole array of considerations that can also involve emotions, preferences.... When looking at philosophy you can see that there is an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes yields good solutions for both: workable solutions which can be debated as valuable outcomes although based on different priorities, and even very different takes on a concept. The choices or arguments made in philosophy (over time) also clearly point to the power of society, technology and the reigning culture at that point in time. For what is good now in one place can be considered wrong in another place, or at another point in time.

It could benefit teachers if they were supported with AI to flag students with problems (but of course this means that 'care' is one of the parameters important to that society; in another society it could simply be that students who have problems are set aside). Either choice is valid, but each builds on a different view of whether we care by 'supporting all' or by 'supporting those who can, so we can move forward quicker'. It is only human emotion that makes a difference in which choice might be the 'better' one.

AI works in the virtual world. Always. Humans make a difference between the real and the virtual world, but for the AI all is real (though virtual to us).
Asimov’s laws of robotics still apply.

Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.

Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes it from there, learning by itself. The machine learns, so does the machine become liable if something goes wrong, but ….? (e.g. the Tesla crash).

Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or health care worker provides it with additional care and attention to the patient).
Similarly, smart object dialogue took off once a feeling of attention was built into it: e.g. replies from Google Home or Alexa in the realm of "Thank you" when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also refers to the difference between feeling 'related to' something and being 'attached to' something.
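A toy sketch of what such 'faked empathy' boils down to technically (my own hypothetical example, not how Google Home or Alexa is actually implemented): a canned keyword lookup with no feeling behind it.

# Hypothetical canned replies keyed on compliment keywords; no emotion is involved anywhere.
CANNED_REPLIES = {
    "thank you": "You're welcome, that made my day!",
    "well done": "Aw, thanks, I try my best!",
}

def fake_empathy(utterance):
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in utterance.lower():
            return reply
    return "I see."   # default filler that merely signals attention

print(fake_empathy("Thank you, that was helpful"))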

Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.

AI is still framed in terms of good and bad (ethics), and of 'improvement', which is linked to the definition of what is 'best' at that time.

Societal decisions: what do we develop first with AI – solutions for the refugee crisis or self-driving cars? This affects the parameters at the start. Compare it to some idiot savants, where high intelligence does not necessarily imply active consciousness.

Currently some humans are already bound by AI: e.g. astronauts, where the system calculates everything.

And to conclude: this session ranged from the believers in AI ("I cannot wait for AI to organise our society") to those who think it is time for the next step in evolution, in the words of Jane Bozarth: "Humans had their chance".




Thursday, 6 December 2018

Data driven #education session #OEB18 @oebconference #data @m_a_s_c

From the session on data driven education, with great EU links and projects.

Carlos Delgado Kloos: using analytics in education
Opportunities
The Khan Academy system is a proven system, with one of the best visualisations of how students are advancing, with a lot of stats and graphs. Carlos used this approach for their 'zero' courses (courses on basic knowledge that students must know before moving on in higher ed).
Based on the Khan stats, they built a high level analytics system.
Predictions in MOOCs (see paper of Kloos), focusing on drop-out.
Monitoring in SPOCs (small private online courses)
Measurement of Real Workload of the students, the tool adapts the workload to the reality.
FlipApp (to gamify the flipped classroom): reminds and notifies the students that they need to watch the videos before class, or they will not be able to follow. (Inge: sent to Barbara).
Creation of Educational Material using Google classroom. Google classroom sometimes knows what the answer of a quiz will be, which can save time for the teacher.
Learning analytics to improve teacher content delivery.
Use of IRT (Item Response Theory) to see which quizzes are more useful and effective; interesting for selecting quizzes (see the sketch after these notes).
Coursera defines skills, matches them to jobs, and based on that recommends courses.
Industry 4.0 (big data, AI…) for industry can be transferred to Education 4.0 (learning analytics based on machine learning). (Education 3.0 is using the cloud, where both learners and teachers go.)
Machine learning infers the rules from answers that are data-analysed (in contrast with classical computing, which is just the opposite: based on rules, giving answers).
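For the IRT point above, here is a hedged sketch of the underlying idea (a standard two-parameter logistic item response model; the item parameters are invented, and this is not the tooling Carlos described): a quiz item with high discrimination separates weak from strong students, which is what makes it worth keeping.

import math

def p_correct(ability, difficulty, discrimination):
    # 2PL IRT: probability that a student of a given ability answers the item correctly.
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

# Two made-up quiz items of equal difficulty but different discrimination.
items = {"discriminating_item": (0.0, 2.0), "uninformative_item": (0.0, 0.2)}
for name, (difficulty, discrimination) in items.items():
    weak = p_correct(-1.0, difficulty, discrimination)
    strong = p_correct(1.0, difficulty, discrimination)
    print(name, round(weak, 2), round(strong, 2))
# The first item clearly separates weak from strong students (about 0.12 vs 0.88);
# the second barely does (about 0.45 vs 0.55), so it tells the teacher little.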
Dangers:
Correlations: correlations are not necessarily correct conclusions (see the spurious correlations site for fun examples; a small sketch follows after this list).
Bias: e.g. decisions for giving credit based on redlining and weblining.
Decisions for recruitment: e.g. Amazon found that the automation of its recruiting system resulted in a bias leading to recruiting more men than women.
Decisions in trials: e.g. COMPAS is used by judges to estimate the risk of reoffending, but skin colour was a clear bias in this program.
The Chinese social credit system, which gives minus points if you do something that is seen as not being 'proper'. Also combined with facial recognition and monitoring attention in class (Hangzhou Number 11 High School).
Monitoring (gaggle, …)
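To make the correlation danger tangible, a tiny sketch with made-up numbers: two series that merely both grow over time correlate almost perfectly, yet neither causes the other.

import math
import statistics

# Made-up yearly figures that simply both trend upward.
cheese_kg_per_person = [20.1, 20.8, 21.5, 22.3, 22.9, 23.8]
data_scientists_thousands = [5, 9, 14, 20, 27, 35]

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(cheese_kg_per_person, data_scientists_thousands), 3))   # close to 1.0, yet meaningless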
Challenges
Luca challenge: responsible use of AI.
GDPR Art 22: automated individual decision-making, including profiling.
Sheilaproject.eu : identifying policies to adopt learning analytics. Bit.ly/sheilaMOOC is the course on the project.
Atoms and bits comparison: as with atoms, you can use bits for the better or for the worse (like atomic bombs).


Maren Scheffel on Getting the trust into trusted learning analytics @m_a_s_c
(Welten Institute of Open University, Netherlands)
Learning analytics: the Siemens (2011) definition is still the norm. But nowadays it is a lot about analytics, and only a little about learning.

Trust: the belief that something or someone is reliable, true, or able. There are multiple definitions of trust; it is a multidimensional and multidisciplinary construct. Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding. For Luhmann the concept of trust compensates for insufficient capabilities for fully understanding the complexity of the world (Luhmann, 1979, Trust and …).
For these reasons we must be transparent, reliable, and act with integrity to earn the trust of learners. There should not be a black box; it should be a transparent box with algorithms (transparent indicators, open algorithms, full access to data, knowing who accesses your data).

Policies: see https://sheilaproject.eu   

User involvement and co-creation: see the competen-SEA project at http://competen-sea.eu, capacity-building projects for remote areas or sensitive learner groups. One of the outcomes was to use co-design to create MOOCs (and trust), getting all the stakeholders together in order to come to an end product. MOOCs for people, by people. Twitter: #competenSEA

Session on #AI, #machineLearning and #learninganalytics #AIED #OEB18

This was a wonderful AI session, with knowledgeable speakers, which is always a pleasure. Some of the speakers showed their AI solutions, and described their process; others focused on the opportunities and challenges. Some great links as well.

Squirrel AI, the machine that regularly outperforms human teachers and redefines education by Wei Zhou
Squirrel AI is an AI built in response to the need for teachers in China. It is based on knowledge diagnosis, looking for educational gaps, a bit like an intake assessment at the beginning of a master's programme for adults.
Human versus machine competition for scoring in education, and tailored learning content offerings (collaborates with Stanford University). Also recognized by UNESCO. (Sidenote: it is clearly oriented at measurable, class- and curriculum-related content testing.)

 The ideas behind AI: adaptive learning is a booming market.
Knowledge graph + knowledge space theory: monitoring students' real-time learning progress to evaluate knowledge mastery and predict future learning skills, based on a Bayesian network plus Bayesian inference, knowledge tracing and Item Response Theory. The system identifies the knowledge of the student based on their intake or tests. Based on big data analysis the students get a tailored learning path (personalised content recommendation using fuzzy logic and a classification tree, and personalisation based on logistic regression, graph theory and a genetic algorithm). Adaptive learning based on a Bayesian network, plus Bayesian inference, plus Bayesian knowledge tracing, plus IRT to precisely determine students' current knowledge state and needs (a small sketch of the knowledge-tracing idea follows after these notes).
Nanoscale knowledge points: granularity is six times deeper. Also used in the medical field.
Some experiments and results: the fourth Human versus AI competition, which resulted in the AI being quicker and more adept at scoring students' tests. Artificial Intelligence in Education conference (AIED18 conference link, look up the video on youtube.com; call for papers deadline 8 February 2019 for AIED19 here).
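For readers unfamiliar with Bayesian knowledge tracing, here is a minimal sketch of the core update step (the standard BKT formulas; the parameter values are invented, and this is in no way Squirrel AI's actual implementation):

# Standard Bayesian Knowledge Tracing parameters (values invented for illustration).
P_INIT, P_TRANSIT, P_SLIP, P_GUESS = 0.2, 0.15, 0.1, 0.25

def bkt_update(p_known, answered_correctly):
    # Bayesian update of the probability that the student masters this knowledge point.
    if answered_correctly:
        posterior = p_known * (1 - P_SLIP) / (p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS)
    else:
        posterior = p_known * P_SLIP / (p_known * P_SLIP + (1 - p_known) * (1 - P_GUESS))
    # The student may also learn the skill between two opportunities.
    return posterior + (1 - posterior) * P_TRANSIT

p = P_INIT
for correct in [True, False, True, True]:   # a made-up answer sequence
    p = bkt_update(p, correct)
    print(round(p, 3))
# An adaptive system would then pick the next item (e.g. via IRT difficulty) from this mastery estimate.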

Claus Biermann on Approaches to the Use of AI in Learning
Artificial Intelligence and Learning: myths, limits and the real opportunities.  
Area9 Lyceum: also a long-established adaptive learning company, with new investments.
Referring to Bloom's 2 sigma problem.
Deep, personalized learning, biologically enabled data modeling, four-dimensional teaching approach.
How we differ: adaptive learning adapts to the individual, only shows content when it is necessary, takes into consideration what the student already knows, and follows up on what the student is having trouble with. This reduces the time of learning and increases motivation. Impact from adaptive learning: almost a 50% reduction of learning time.
Supports conscious competence concept.
AI is 60% of the platform, but the most important part is the human beings: the learning engineers, the team of humans working together, make it possible.

Marie-Lou Papasian from Armenia (Yerevan).
TUMO is a learning platform where students direct their own development. It is an after-school programme, two hours twice a week, and thousands of students come to the TUMO centres, in Armenia, Paris and Beirut.
14 learning targets ranging from animation, to writing, to robotics, game development…
The main education is based on self-learning, workshops and learning labs.
Coaches support the students and they are in all the workshops and learning labs.
Personalisation: each student chooses their learning plan, their topics, their speed. That happens through the 'TUMO path', an interface which enables a personalised learning path (cf. LMS learning paths, but personalised in terms of the speed and choices of the students). After the self-paced parts, the students can go to a workshop to reach their maximum potential, to learn and to know they can explore and learn. These are advanced students (12 – 18 years, free of charge).
Harnessing the power of AI: the AI solves a lot of problems, as well as providing the freedom to personalise the student's learning experience. A virtual assistant will be written to help the coaches guide the student through the system.
An AI 'guide dog': a mascot to help the students.
The coaches, assistants… are there to teach the students to take up more responsibility.
For those learners who are not that quick, a dynamic content aspect is planned to support their learning.

Wayne Holmes from the OU, UK and the Center for Curriculum Redesign, US
A report was commissioned about personalized learning and digital ... (free German version here; an English version might follow, will ask Wayne Holmes).
Looking at the ways AI can impact education

A taxonomy of AI in education
Intelligent Tutoring System (as examples mentioned earlier in the panel talk)
Dialogue-based tutoring system (Pearson and Watson tutor example)
Exploratory Learning Environments (the biggest difference with the above, is that this is more based on diversification of solving a specific problem by the student)
Automatic writing evaluation (tools that will mark assignments for the teachers, also tools that will automatically give feedback to the students to improve their assignments).
Learning network orchestrators (tools that put people in contact with people, e.g. smart learning partner, third space learner, the system allows the student to connect with the expert).
Language learning (the system can identify languages and support conversation)
ITS+ (e.g. ALP, AltSchool, Lumilo: the teacher wears Google Glass, and each student's activity appears as a bubble visualizing what the student is doing).

So there is a lot of stuff already out there.
We assume that personalized learning will be wonderful, but what about participative or collaborative learning?

Things in development
Collaborative learning (what one person is talking about might be of interest to what another person is talking about).
Student forum monitoring
Continuous assessment (supported by AI)
AI learning companions (e.g. mobile phones supporting the learning, makes connections)
AI teaching assistants (data of students sent to teachers)
AI as a research tool to further the learning sciences

The ethics of AIED
A lot of work has been done around ethics in data. But there are also the algorithms that tweak the data outcomes: how do we prevent biases, guard against mistakes, protect against unintended consequences…?
But what about education: self-fulfilling teacher wishes…
So how do we merge algorithms and big data and education?

With great power comes great responsibility (Spider-Man, 1962, or the French Revolution's National Convention, 1793).
An ATS tool was built by Facebook, but the students went on strike (look this up).

Gunay Kazimzade Future of Learning, biases, myths, etcetera (Azerbaijan / Germany)
Digitalization and its ethical impact on society.
Six interdisciplines overlap.
Criticality of AI-biased systems.
(look up papers, starting to get tired, although the presentation is really interesting)
Her main research consideration is the impact of AI on our children: how does the interaction between children and smart agents work, and what do we have to do to avoid biases while children are using AI agents?
At present AI biases infiltrate the world as we know it, but can we transform this towards fewer biases?

Keynote talk of Anita Schjoll Brede @twitnitnit @oebconference #AI #machineLearning #oeb18


(The liveblog starts after a general paragraph on the two keynotes that preceded her talk; her talk really was GREAT, and with a fresh, relevant structure.)

First off, a talk on the skill sets of future workers (the new skills needed, referring to critical thinking, but not explaining what is understood by critical thinking) and on collective intelligence (but clearly linking it to big data, not small data, as well described in an article by Stella Lee).

A self-worth idea for the philosophy session: refer to the Google Maps approach, where small companies offering one particular aspect of what it took to build Google Maps were bought by Google, thus producing something bigger than the sum of its parts. But this of course means that identity and the self-versus-the-other come under pressure, as people who really make a difference at some point do not get the satisfying moment of feeling on top of the world (you can no longer show off your quality easily… for there are so many others just like you, as you can see when you read the news or follow people online). Feeling important was easier, or at least possible, in a 'smaller' world, where the local tech person was revered for her or his knowledge. So, in some way we are losing the feeling of being special based on what we do. Additionally, if AI enters more of the working world, how do we ensure that work will be there for everyone, as work is also a way to 'feel' self-worth? I think keeping self-worth will be an increasing challenge in the connected, AI-supported world. As a self-test, simply think of yourself wanting to be invited onto a stage… it is a simple yet possibly mentally alarming exercise. Our society promotes 'being the best' at something, or having the most 'likes'; what can we do to instill or keep self-worth?
Then a speaker on the promise of online education, referring to MOOCs versus formal education and the increase of young people going to college… which strangely contradicts what most profiles of future jobs seem to look like (professions that are rather labour intensive). The speaker, Kaplan, managed to knock down people who get into good jobs based on non-traditional schooling (obviously, my eyebrows went up, and I am sure there were more of us in the audience pondering which conservative-thinking label could be put on that type of scolding, stereotyping speech, protecting the norm; he is clearly not even a non-conformist).

Here a person in line with my interests takes the stage: Anita Schjoll Brede. Anita founded an AI company, Iris.ai, and tries to simplify AI, machine learning and data science for easier implementation. So… of interest.

Learning how to learn sets us human beings apart. We are in the era where machines will learn, based on how we learn… inevitably changing what we need to learn.
She outlines how AI is seen by most people, and where that model is not really correct.
Machine learning is modelled on the workings of a human brain. Over time the machine will adapt based on the data, and it will learn new skills. It is a great model for seeing the difference. One caveat: we are still not sure how the human mind really works.
If we think of AI, we think of software, hardware, data … but our brains are slightly different and our human brains are also flawed. We want to build machines that are complementary to the human brain.

Iris.ai started with the idea that papers and new research are published every day, and humans can no longer read it all. Iris.ai goes through the science, and the literature process is relatively automated; it currently yields a time reduction of 80%. The next step is hypothesis extraction, then building a truth tree of the document based on scientific arguments. Once the truth trees are done, they can be linked to a lab or specific topic, with the option of the machine learning results leading to different types of research. Human beings will still do the deeper understanding.

Another example is one tutor per child. Imagine that there is one tutor for that child, which grows with that child and helps with lifelong learning. The system will know you so well that it will know how to motivate you, or move you forward. It might also have filters to identify discriminatory feelings or actions (my own remark: but I do wonder, if this is the case, whether this limits the freedom to say what you want and to be the person you want to be… it might risk becoming extreme in either direction of the doctrine).
She refers to the Watson lawyer AI, which means that junior lawyers will no longer do all the groundwork. So the new employees will have to learn other things, and be integrated differently. But this raises critical questions of course, as you must choose between employing people (but making yourself less competitive) or only hiring senior lawyers (my own remark: but then you lose diversity and workforce).
She refers to doctors built by machine learning, used in sub-Saharan settings, to analyse human blood for malaria, which saves time for the doctors and health care workers… but evidently this has an impact on health care worker jobs.
Cognitive bias codex (brain picture with lots of links). Lady in the red dress experiment.

Her take on what we need to learn:
Critical thinking: refers to the source criticism she learned during her schooling.
Who builds the AI matters: let's say Google crosses the threshold to the first general AI… their business model will still get us to buy more soap.
Complex problem solving: we need to hold this uncertainty and have that understanding, to understand why machines were led to specific choices.
Creativity: machines can be creative; we can learn this. Rehashing what has been done and making it into something of your own (she refers to a lawyer commercial that was built by AI based on hours of legal commercials).
Empathy: this is at the core of human capabilities. Machines are currently doing things, but are not yet empathic. Yet empathy is also important for building machines that can result in positive evolutions for humans, if we can support machines that will be able to love the world, including humans.


Wednesday, 5 December 2018

@oebconference workshop notes and documents #instructionalDesign #learningTools

After being physically out of the learning circuit for about a year and a half, it is really nice to get active again. And what better venue to rekindle professional interests than at Online Educa Berlin.

Yesterday I led a workshop on using an ID instrument I call the Instructional Design Variation matrix (IDVmatrix). It is an instrument to reflect on the learning architecture (including tools and approaches) that you are currently using, to see whether these tools enable you to build a more contextualized or a more standardized type of learning (the list organises learning tools according to 5 parameters: informal - formal, simple - complex, free - expensive, standardized - contextualized, and aimed at individual learning - social learning); a rough sketch of the idea follows below. The documents of the workshop can be seen here.
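As a rough illustration of how such a matrix could be handled digitally (my own hypothetical encoding, not part of the official IDVmatrix materials): score each tool on the five parameters and average them to see where your current learning architecture leans.

# Five IDVmatrix parameters, each scored 0-1 per tool
# (0 = informal / simple / free / standardized / individual,
#  1 = formal / complex / expensive / contextualized / social).
# Tool names and scores below are invented for illustration.
PARAMETERS = ["formal", "complex", "expensive", "contextualized", "social"]

tools = {
    "twitter_backchannel": [0.1, 0.2, 0.0, 0.6, 0.9],
    "lms_course":          [0.9, 0.6, 0.7, 0.3, 0.4],
    "mobile_field_notes":  [0.2, 0.3, 0.1, 0.9, 0.3],
}

def architecture_profile(selected):
    # Average the parameter scores of the tools you currently use.
    return {
        param: round(sum(tools[name][i] for name in selected) / len(selected), 2)
        for i, param in enumerate(PARAMETERS)
    }

print(architecture_profile(["twitter_backchannel", "lms_course"]))
# A low 'contextualized' average would suggest the mix leans towards standardized learning.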

The workshop started off with an activity called 'winning a workshop survival bag', where the attendees could win a bag with cookies, nuts, and of course the template and lists of the IDVmatrix.
We then proceeded to give a bit of background on the activity, and how it related to the IDVmatrix.
Afterwards we focused on learning cases, and particularly on challenges that the participants of the workshop were facing.
And we ended up trying to find solutions for these cases, sharing information, connections, ideas (have a look at this engaging crowd - movie recorded during the session).
The workshop was using elements from location-based learning, networking, mobile learning, machine learning, just-in-time learning, social learning, social media, multimedia, note taking, and a bit of gamification.

It was a wonderful crowd, so everyone went away with ideas. The networking part went very well also due to the icebreaker activity at the beginning. This was the icebreaker:

The WorkShop survival bag challenge!
Four actions, 1 bag for each team!

Action 1
Which person of your group has the longest first name?
Write down that name in the first box below.

Action 2

  • Choose two persons prior to this challenge: a person who will record a short (approx. 6 seconds) video with their phone and tweet it, and a person (or persons) who will talk in that video.
  • Record a 6 second video which includes a booth at the OEB exhibition (shown in the background) and during which a person gives a short reason why this particular learning solution (the one represented by the booth) would be of use to that person's learning environment (either personal or professional).
  • Once you have recorded the video, share it on Twitter using the following hashtags: #OEB #M5 #teamX (with X being the number of your team, e.g. #team1). This share is necessary to get the next word of your WS survival bag challenge.
  • Once you upload the movie, you will get a response tweet on #OEB #M5 #teamX (again with the number of your team).

Write down the word you received in response to your video in the second box below.

Action 3

  • Go to the room which is shown in the 360° picture on Twitter (see #M5 #OEBAllTeams).
  • Find the spot where 5 pages are lined up, each of them with a different language sign written on them.
  • Each team has to ‘translate’ the sign assigned to their team. You can use the Google Translate app for this (see Google Play, the app is free!).
Write down the translation in the third box below.

Action 4
Say the following words into the Google Home device which is located in the WS room

“OK Google 'say word box 1', say word box 2, say word box 3“

If Google answers, you will get your WS survival bag!

And although the names were not always very English, with a bit of tweaking using the IFTTT app, all the teams were able to get the Google Home Mini to congratulate them for getting all the challenges right.

Tuesday, 13 November 2018

#MOOC free report, event MOOC for refugees (w travel fund options) and #CfP eMOOC2019

Two interesting MOOC events are coming up: one focused on MOOCs for refugees, and one for all of you out there involved in researching or experiencing MOOCs (the eMOOC2019 conference).

Free MOOC report

Linked to the MOONLITE event, there is a free MOOC report (130 pages) on “Exploiting MOOCs for Access and Progression into Higher Education Institutions and Employment Market”
The report gives an overview of the goals of the project, the methodology, and finishes with the practical recommendations for using online courses to enhance access and progression into higher education and the employment market (for refugees). 

MOONLITE multiplier event (part of a EU Erasmus+ project)

The MOONLITE event supports learning without borders; practically, it harnesses the potential of MOOCs for refugees and migrants to build their language competences and entrepreneurial skills for employment, higher education, and social inclusion.

There are bursaries to help cover your travel expenses which you can apply for at the venue!
23-24 November, UNED (Madrid, Spain).
Friday November 23
15:20. Welcome (Timothy Read & Elena Barcena, UNED, Spain)
15:30-16:30. Presentation of the MOONLITE project and its outputs (Jorge Arús-Hita, UCM, Spain & Beatriz Sedano, UNED, Spain)
16:30-17:30. Open Education Passports and Micro Credentials for refugees and migrants (Ildiko Mazar, Knowledge Innovation Centre, Malta)
17:30-18:00 Coffee
18:00-19:00 Kiron Educational Model and Quality Assurance for MOOC-based curricula (María Bloecher, Kiron, Germany)
Saturday November 24
10:00-11:00:  Inclusive by design: how MOOCs have the potential to reach people in ways other online courses do not (Kate Borthwick, University of Southampton, UK)
11:00-12:00: A tool for institutions for quantifying the costs & benefits of Open Education (Anthony Camilleri, Knowledge Innovation Centre, Malta)
12:00-12:30 Coffee
12:30-13:30: Workshop on how to design a socially inclusive MOOC (Elena Martín- Monje & Timothy Read, UNED, Spain)
13:30. Farewell (Timothy Read & Elena Barcena, UNED, Spain)

See travel details, online registration and more info here. No attendance fee. Limited places. 
➢ Sign up here: https://goo.gl/forms/RXYWS8MiQgYqfLkC2 (to obtain attendance certificate, materials, coffee).
➢ Venue: C/ Juan del Rosal, 16 - 28040 Madrid. How to get there: Metro until the stop: “Ciudad Universitaria” + Bus “U” until
the stop: UNED-Juan del Rosal: http://www.ia.uned.es/llegar-etsii

Call for papers eMOOC2019

Dates: 20 - 22 May 2019 
Venue: University of Naples, Federico II in Italy

Important Dates:
16 Jan 2019: Paper submissions for Research Track.
24 Feb 2019: Notification of acceptance/rejection
20 Mar 2019: Camera-ready versions for Springer LNCS Proceedings and copyright form.

The Higher Education landscape is changing. As the information economy progresses, demand for a more highly, and differently, qualified workforce increases, and HE Institutions face the challenge of reskilling and upskilling people throughout their lives. The corporate and NGO sectors are themselves exploring the benefits of a more qualified online approach to training, and are entering the education market in collaboration with HE Institutions, but also autonomously or via new certifying agencies. Technology is the other significant player in this scenario. It allows for new, data-driven ways of measuring learning outcomes, new curriculum structures and alternative forms of recruitment strategy via people analytics.

MOOCs represent the crossroads where the three converge. Come to EMOOCs 2019 and explore the impact and future direction of open, online education on a social, political and institutional level.

The eMOOC summit has four tracks: research, business, policy and experience.
At the MOOC crossroads: where academia and business converge

The Higher Education landscape is changing. As the information economy progresses, demand for a more highly, and differently, qualified workforce and citizens increases, and HE Institutions face the challenge of training, reskilling and upskilling people throughout their lives, rather than providing a one-time in-depth education. The corporate and NGO sectors are themselves exploring the benefits of a more qualified online approach to training, and are entering the education market in collaboration with HE Institutions, but also autonomously or via new certifying agencies. Technology is the other significant player in this fast-changing scenario. It allows for new, data-driven ways of measuring learning outcomes, new forms of curriculum definition and compilation, and alternative forms of recruitment strategy via people analytics.

At the MOOC crossroads where the three converge, we ask ourselves whether university degrees are still the major currency in the job market, or whether a broader portfolio of qualifications and micro-credentials may be emerging as an alternative. What implications does this have for educational practice? What policy decisions are required? And as online access eliminates geographical barriers to learning, but the growing MOOC market is increasingly dominated by the big American platforms, what strategic policy do European HE Institutions wish to adopt in terms of branding, language and culture?

The EMOOCs 2019 MOOC stakeholders summit comprises the consolidated four-track format of Research, Experience, Policy and Business, and will feature keynote speakers, round table and panel sessions as well as individual presentations in each track. The aim is for decision-makers and practitioners to explore innovative and emerging trends in online education delivery, and the strategic policy that supports them. Original contributions that share knowledge and carry forward the debate around MOOCs are very welcome.

The number of HE institutions involved in MOOCs, and the numbers of courses and enrolled students, have increased exponentially in recent years, both in Europe and beyond. One of the results of this growing MOOC movement is an increasing body of research evidence that positions itself within the established research communities in technology enhanced learning, open education and distance learning. Key trends that are accelerating HE technology adoption are blended learning design and collaborative learning, as well as a growing focus on measuring learning and redesigning learning spaces and, in the long term, deeper learning approaches and cultures of innovation.

This track welcomes high-level papers supported by empirical evidence to provide a rigorous theoretical backdrop to the more practical approaches described in the experience track, and particularly invites contributions in the area of these key trends.

  • Learning Designs – blended learning, collaborative learning, learner-generated content, open textbooks, immersive learning, relating course and content to learning outcomes
  • Defining and Measuring learning – learning analytics, educational data mining, user behaviour studies, adaptive and personalisation studies, cognitive theories and deep learning
  • Technology – infrastructure and interface, tools and methods to provide learning at scale; tools and methods for assessment; tools and methods for data collection and processing; blockchain technology; AI + automated feedback

Submission of Papers
This is a one-step process, via direct submission of abstract and full paper.

Full paper: up to 10 pages including references

There will be official conference proceedings for this track and submissions will be handled through EasyChair.

The use of the supplied Springer template is mandatory: https://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines

Please remember to indicate the relevant Track when you submit your paper.

Proceedings

The Proceedings of the Research Track will be published by Springer in the Lecture Notes in Computer Science (LNCS) Series.
Submission of Work-in-Progress Short Papers

Short papers (up to 6 pages) are also accepted in this track, reflecting work in progress, for publication in Online proceedings with ISBN.

The use of the Springer template is mandatory:
https://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines

When submitting your paper, please indicate type of paper and track in the submission process.

Proceedings

The Work-in-Progress proceedings will be submitted to CEUR-WS.org for online publication. Outstanding short papers may be included in the Springer Proceedings.

Important dates:
25 February 2019: Short Paper submissions for Research Track.
25 March 2019: Notification of acceptance/rejection
29 April 2019: Camera-ready versions for online Proceedings with ISBN and copyright form