Saturday 8 December 2018

#AI #MachineLearning and #philosophy session #OEB18 @oebconference @OldPhilNewLearn


At OEB2018 the last session I led was on AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on philosophy and AI.

As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes follow below; after them I embed the slides that preceded the session, to indicate where the idea for the workshop came from.

Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy

The session started off with the choices embedded in any AI, e.g. a Tesla car about to run into people: will it hit a grandmother or two kids? What is the ‘best solution’? Further into the session this question got additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters; and we could build into the car the rule that, in case of emergency, the lives of others outweigh the lives of those in the car, so that it simply crashes into a wall, avoiding both grandmother and kids.

The developer or creator gives parameters to the AI; with machine learning embedded, the AI will start to learn from there, based on feedback from or directed to the parameters. This contrasts with classic rule-based computing, where rules are given and are either successful or not, but never become the basis for new rules.
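
To make that contrast concrete, here is a minimal sketch (in Python, with all names and numbers invented for illustration): a hard-coded rule never changes, while a learned threshold shifts a little with every piece of feedback.

```python
# A fixed rule: the programmer decides once, the system never adapts.
def rule_based_decision(speed):
    return "brake" if speed > 50 else "coast"

# A learned threshold: feedback nudges the decision boundary over time.
class LearnedBrake:
    def __init__(self, threshold=50.0, learning_rate=0.5):
        self.threshold = threshold
        self.lr = learning_rate

    def decide(self, speed):
        return "brake" if speed > self.threshold else "coast"

    def feedback(self, speed, should_have_braked):
        # Pull the threshold toward the speed at which the decision was wrong.
        if should_have_braked and self.decide(speed) == "coast":
            self.threshold -= self.lr * (self.threshold - speed)
        elif not should_have_braked and self.decide(speed) == "brake":
            self.threshold += self.lr * (speed - self.threshold)

brake = LearnedBrake()
brake.feedback(speed=42.0, should_have_braked=True)  # the fixed rule could never do this
print(brake.threshold)  # now below 50: the parameter itself has moved
```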

From a philosophical point of view, the impact of AI (including potential bias coming from the developers or from the feedback received) could be analysed using Hannah Arendt’s ‘power of the system’. In her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of an AI system.

The growth of an AI based on human algorithms does not necessarily mean that the AI will think like us. It might derive different conclusions, based on the priorities it sets itself. As such, current paradigms may shift.

Throughout the ages, the focus of humankind has changed with new developments, new thoughts, new philosophical insights. This means that if humans put parameters into AI, those (priority) parameters will also change over time: we can see where AI starts, but not where it is heading.

How many ‘safety stops’ are built into AI?
Can we put some kind of ‘weighting’ into the AI parameters, enabling the AI to fall back on more or less important parameters when a risk needs to be considered?
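
As a thought experiment only, such weighting could look like the toy sketch below (every parameter name, weight, and number is invented): each option gets a weighted risk score, and the fallback is the option with the lowest score.

```python
# Hypothetical weighted-risk fallback: weights encode which parameters
# matter most when options have to be compared under risk.
WEIGHTS = {"human_harm": 10.0, "passenger_harm": 8.0, "property_damage": 1.0}

def weighted_risk(option):
    """Sum the weighted risk estimates (0..1) of one candidate action."""
    return sum(WEIGHTS[p] * option.get(p, 0.0) for p in WEIGHTS)

options = {
    "swerve_left":  {"human_harm": 0.9, "property_damage": 0.1},
    "swerve_right": {"human_harm": 0.8, "property_damage": 0.2},
    "hit_wall":     {"passenger_harm": 0.8, "property_damage": 1.0},
}
best = min(options, key=lambda name: weighted_risk(options[name]))
print(best)  # with these invented numbers: "hit_wall"
```

With these made-up weights the car would pick the wall, echoing the Tesla discussion above; change one weight and the ‘best’ option changes with it.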

For humans, failure can result in growth. AI also learns from ‘failures’, but it learns from differences in data points. At present the AI only receives the message ‘this is wrong’, whereas at that moment humans make a wide variety of risk considerations. In the bigger picture, one can see an analogy with Darwin’s evolutionary theory, where time finds what works based on evolutionary diversity; but with AI the speed of adaptation increases immensely.

With mechanical AI it was easier to define which parameters were right or wrong: with Go or chess you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing among those options is a straight path of considerations. Humans, by contrast, weigh many more considerations for a single conundrum or action, a whole array that can also involve emotions, preferences… Looking at philosophy, you see an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes yields good, workable solutions on both sides, solutions which can be debated as valuable outcomes although based on different priorities, and even very different takes on a concept. The choices or arguments made in philosophy (over time) also clearly point to the power of society, technology, and the reigning culture at that point in time: what is good now in one place can be considered wrong in another place, or at another point in time.

It could benefit teachers if AI supported them by signalling students with problems. (But of course this presumes that ‘care’ is a parameter society finds important; in another society, students with problems might simply be set aside. Either choice is valid, but each builds on a different view: do we care by ‘supporting all’, or by ‘supporting those who can, so we move forward quicker’? It is only human emotion that makes a difference in which choice might be the ‘better’ one.)

AI works in the virtual world. Always. Humans distinguish between the real and the virtual world, but for the AI all of it is real (though virtual to us).
Asimov’s laws of robotics still apply.

Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.

Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes the learning from there itself. The machine learns; does the machine then become liable if something goes wrong, but …? (e.g. the Tesla crash).

Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or health care worker provides it with additional care and attention to the patient).
Similarly, smart-object dialogue took off once a feeling of attention was built in: e.g. replies from Google Home or Alexa along the lines of “Thank you” when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also points to the difference between feeling ‘related to’ something and being ‘attached to’ something.

Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.

AI is still defined between good and bad (ethics) and ‘improvement’, which is linked to the definition of what is ‘best’ at that time.

Societal decisions: what do we develop first with AI? Solutions for the refugee crisis, or self-driving cars? This affects the parameters at the start. Compare it to some idiot savants, where high intelligence does not necessarily imply active consciousness.

Currently some humans are already bound by AI: e.g. astronauts, where the system calculates everything.

And to conclude: this session ranged from the believers in AI (“I cannot wait for AI to organise our society”) to those who think it is time for the next step in evolution, in the words of Jane Bozarth: “Humans had their chance”.




Thursday 6 December 2018

Data driven #education session #OEB18 @oebconference #data @m_a_s_c

From the session on data driven education, with great EU links and projects.

Carlos Delgado Kloos: using analytics in education
Opportunities
The Khan Academy system is a proven system, with one of the best visualisations of how students are advancing, with a lot of stats and graphs. Carlos used this approach for their ‘0 courses’ (courses on basic knowledge that students must master before moving on in higher ed).
Based on the Khan stats, they built a high-level analytics system.
Predictions in MOOCs (see Kloos’s paper), focusing on drop-out.
Monitoring in SPOCs (small private online courses)
Measurement of the real workload of the students; the tool adapts the workload to reality.
FlipApp (to gamify the flipped classroom): reminds and notifies the students that they need to watch the videos before class, or they will not be able to follow. (Inge: sent to Barbara).
Creation of educational material using Google Classroom. Google Classroom sometimes knows what the answer to a quiz will be, which can save time for the teacher.
Learning analytics to improve teacher content delivery.
Use of IRT (Item Response Theory) to see which quizzes are more useful and effective; interesting for selecting quizzes (a small sketch follows after this list).
Coursera defines skills, matches them to jobs, and recommends courses based on that.
Industry 4.0 (big data, AI…) can be transferred to Education 4.0 (learning analytics based on machine learning). (Education 3.0 is using the cloud, where both learners and teachers go.)
Machine learning infers the rules from answers found in the analysed data (classic computing is just the opposite: given rules, it produces answers).
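
Back to the IRT point a few lines up: a rough sketch of the two-parameter IRT model (my own illustration, not Kloos’s code) shows why discrimination tells you which quiz items are worth keeping.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability that a learner with ability theta answers
    an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A discriminating item separates weak and strong learners;
# a flat item (low a) is barely informative and is a candidate to drop.
for a, label in [(2.0, "sharp item"), (0.2, "flat item")]:
    weak, strong = p_correct(-1.0, a, 0.0), p_correct(1.0, a, 0.0)
    print(f"{label}: weak learner {weak:.2f} vs strong learner {strong:.2f}")
```
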
Dangers:
Correlations: correlations are not necessarily correct conclusions (see the ‘spurious correlations’ site for fun examples, and the toy demo after this list).
Bias: e.g. decisions for giving credit based on redlining and weblining.
Decisions in recruitment: e.g. Amazon found that the automation of their recruiting system resulted in a bias leading to recruiting more men than women.
Decisions in trials: e.g. COMPAS is used by judges to estimate the risk of reoffending, but skin colour was a clear bias in this program.
The Chinese social credit system, which deducts points if you do something that is seen as not being ‘proper’. Also combined with facial recognition, and with monitoring attention in class (Hangzhou Number 11 High School).
Monitoring (Gaggle, …)
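
On the correlation danger, a tiny self-contained demo (my addition, not from the talk): two completely independent random walks routinely show sizeable correlations, which is exactly why correlation alone is no conclusion.

```python
import random

# Two independent random walks often correlate strongly by chance alone.
random.seed(7)

def random_walk(n):
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

a, b = random_walk(200), random_walk(200)
print(f"correlation of two unrelated series: {pearson(a, b):.2f}")
```
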
Challenges
Luca challenge: responsible use of AI.
GDPR Art 22: automated individual decision-making, including profiling.
Sheilaproject.eu: identifying policies to adopt learning analytics. Bit.ly/sheilaMOOC is the course on the project.
Atoms and bits comparison: as with atoms, you can use bits for the better or for the worse (like atomic bombs).


Maren Scheffel on Getting the trust into trusted learning analytics @m_a_s_c
(Welten Institute of Open University, Netherlands)
Learning analytics: the Siemens (2011) definition is still the norm. But nowadays it is a lot about analytics and only a little about learning.

Trust: believing that something or someone is reliable, truthful, or able. There are multiple definitions of trust; it is a multidimensional and multidisciplinary construct. Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding: for Luhmann, trust compensates for our insufficient capability to fully understand the complexity of the world (Luhmann, 1979, Trust and …)
For these reasons we must be transparent, be reliable, and act with integrity to attract the trust of learners. There should not be a black box, but a transparent box of algorithms (transparent indicators, open algorithms, full access to data, knowing who accesses your data).

Policies: see https://sheilaproject.eu   

User involvement and co-creation: see the competen-SEA project (http://competen-sea.eu), capacity-building projects for remote areas or sensitive learner groups. One of the outcomes was to use co-design to create MOOCs (and trust), getting all the stakeholders together in order to come to an end product. MOOCs for people, by people. Twitter: #competenSEA

Session on #AI, #machineLearning and #learninganalytics #AIED #OEB18

This was a wonderful AI session, with knowledgeable speakers, which is always a pleasure. Some of the speakers showed their AI solutions, and described their process; others focused on the opportunities and challenges. Some great links as well.

Squirrel AI, the machine that regularly outperforms human teachers and redefines education by Wei Zhou
Squirrel AI is an AI built to respond to the need for teachers in China. It is based on knowledge diagnosis, looking for educational gaps, a bit like an intake at the beginning of a master’s programme for adults.
Human-versus-machine competitions for scoring in education, and tailored learning content offerings (it collaborates with Stanford University). Also recognized by UNESCO. (Side note: it is clearly oriented at measurable, class- and curriculum-related content testing.)

The ideas behind the AI: adaptive learning is a booming market.
Knowledge graph plus knowledge space theory: monitoring students’ real-time learning progress to evaluate knowledge mastery and predict future learning skills, based on a Bayesian network plus Bayesian inference, Bayesian knowledge tracing, and Item Response Theory. The system identifies a student’s knowledge based on their intake or tests; big-data analysis then gives the student a tailored learning path (personalised content recommendation using fuzzy logic and classification trees, personalised via logistic regression, graph theory, and a genetic algorithm). The adaptive learning combines the Bayesian network, Bayesian inference, Bayesian knowledge tracing, and IRT to precisely determine the student’s current knowledge state and needs.
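
For readers unfamiliar with Bayesian knowledge tracing: below is the standard four-parameter BKT recursion in a minimal sketch (the parameter values are invented; this is the textbook model, not Squirrel AI’s actual implementation).

```python
# Classic Bayesian Knowledge Tracing: update P(skill known) after each answer.
P_INIT, P_LEARN = 0.2, 0.15   # prior mastery, chance of learning per step
P_SLIP, P_GUESS = 0.1, 0.25   # wrong despite knowing, right despite not knowing

def bkt_update(p_known, correct):
    """One BKT step: Bayesian posterior given the answer, then learning."""
    if correct:
        evidence = p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS
        posterior = p_known * (1 - P_SLIP) / evidence
    else:
        evidence = p_known * P_SLIP + (1 - p_known) * (1 - P_GUESS)
        posterior = p_known * P_SLIP / evidence
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
print(f"estimated mastery after 4 answers: {p:.2f}")
```
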
Nanoscale knowledge points: the granularity is six times deeper. Also used in the medical field.
Some experiments and results: the fourth human-versus-AI competition, which resulted in the AI being quicker and more adept at scoring students’ tests. Artificial Intelligence in Education conference (AIED18 conference link, look up the video on youtube.com; the call for papers for AIED19 closes 8 February 2019, see here).

Claus Biermann on Approaches to the Use of AI in Learning
Artificial Intelligence and Learning: myths, limits and the real opportunities.  
Area9 Lyceum: also a long-standing adaptive learning company, with new investments.
Referring to Bloom’s 2 sigma problem.
Deep, personalized learning, biologically enabled data modeling, four-dimensional teaching approach.
How they differ: the adaptive learning adapts to the individual, only shows content when it is necessary, takes into consideration what the student already knows, and follows up on what the student is having trouble with. This reduces learning time and increases motivation; the impact of adaptive learning is almost a 50% reduction in learning time (a toy sketch of this gating logic follows below).
Supports conscious competence concept.
AI is 60% of the platform, but the most important part is the human beings: the learning engineers, the team of humans who work together, make it possible.
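
The gating idea behind ‘only show content when necessary’ can be pictured with a toy sketch like this (thresholds, topics, and mastery numbers are invented; Area9’s real model is far richer):

```python
# Toy adaptive gate: skip content the learner already masters,
# serve remediation where they struggle (thresholds invented).
MASTERY, STRUGGLE = 0.85, 0.40

def next_step(topic, p_mastery):
    if p_mastery >= MASTERY:
        return f"skip '{topic}' (already known)"
    if p_mastery <= STRUGGLE:
        return f"remediate '{topic}' (having trouble)"
    return f"practice '{topic}'"

profile = {"fractions": 0.92, "ratios": 0.55, "percentages": 0.20}
for topic, p in profile.items():
    print(next_step(topic, p))
```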

Marie-Lou Papasian from Armenia (Yerevan).
TUMO is a learning platform where students direct their own development: an after-school programme, two hours twice a week, and thousands of students come to the TUMO centre. Located in Armenia, Paris, and Beirut.
14 learning targets, ranging from animation to writing, robotics, game development…
Main education is based on self learning, workshops and learning labs.
Coaches support the students and they are in all the workshops and learning labs.
Personalisation: each student chooses their learning plan, their topics, their speed. That happens through the ‘Tumo path’, an interface which enables a personalised learning path (cf. LMS learning paths, but personalised in terms of the students’ speed and choices). After the self-paced parts, students can go to a workshop to reach their maximum potential, to learn and know they can explore and learn. These are advanced students (12-18 years, free of charge).
Harnessing the power of AI: the AI solves a lot of problems, as well as providing the freedom to personalise the students’ learning experience. A virtual assistant will be written to help the coaches guide the students through the system.
An AI guide dog: a mascot to help the students.
The coaches, assistants… are there to teach the students to take up more responsibility.
For those learners who are not that quick, a dynamic content feature is planned to support their learning.

Wayne Holmes from the OU, UK and center for curriculum redesign, US
A report he was commissioned to write about personalised learning and digital ... (free German version here; an English version might follow, I will ask Wayne Holmes).
Looking at the ways AI can impact education

A taxonomy of AI in education
Intelligent Tutoring System (as examples mentioned earlier in the panel talk)
Dialogue-based tutoring systems (the Pearson and Watson tutor example)
Exploratory learning environments (the biggest difference with the above is that these are based more on the student diversifying how a specific problem is solved)
Automatic writing evaluation (tools that will mark assignments for the teachers, also tools that will automatically give feedback to the students to improve their assignments).
Learning network orchestrators (tools that put people in contact with people, e.g. smart learning partner, third space learner, the system allows the student to connect with the expert).
Language learning (the system can identify languages and support conversation)
ITS+ (e.g. ALP, AltSchool, Lumilo: the teacher wears Google glasses, and the student’s activity comes up as a bubble visualising what the student is doing).

So there is a lot of stuff already out there.
We assume that personalised learning will be wonderful, but what about participative or collaborative learning?

Things in development
Collaborative learning (what one person is talking about might be of interest to what another person is talking about).
Student forum monitoring
Continuous assessment (supported by AI)
AI learning companions (e.g. mobile phones supporting the learning, makes connections)
AI teaching assistants (data of students sent to teachers)
AI as a research tool to further the learning sciences

The ethics of AIED
A lot of work has been done around ethics in data. But there are also the algorithms that tweak the data outcomes: how do we prevent biases, guard against mistakes, protect against unintended consequences…?
But what about education: self-fulfilling teacher expectations…
So how do we merge algorithms, big data, and education?

With great power comes great responsibility (Spiderman, 1962; or the French National Convention, 1793).
An ATS tool built by Facebook, but the students went on strike (look this up).

Gunay Kazimzade on the future of learning, biases, myths, etcetera (Azerbaijan / Germany)
Digitalization and its ethical impact on society.
Six interdisciplines overlap.
Criticality of AI-biased systems.
(look up papers, starting to get tired, although the presentation is really interesting)
Her main research consideration is the impact of AI on our children: how does the interaction between children and smart agents work, and what do we have to do to avoid biases while children are using AI agents?
At present AI biases infiltrate the world as we know it, but can we move towards fewer biases?

Keynote talk of Anita Schjoll Brede @twitnitnit @oebconference #AI #machineLearning #oeb18


(The liveblog starts after a general paragraph on the two keynotes that preceded her talk. Her talk really was GREAT, and with a fresh, relevant structure.)

First a talk on the skill sets of future workers (the new skills needed, referring to critical thinking, but not mentioning what is understood by critical thinking) and collective intelligence (but clearly linking it to big data, not small data, as well described in an article by Stella Lee).

Self-worth (an idea for the philosophy session): refer to the Google Maps approach, where small companies offering one particular aspect of what it took to build Google Maps were bought by Google, producing something bigger than the sum of its parts. But this of course means that identity, and the self versus the other, come under pressure: people who really make a difference at some point do not get the satisfying moment of thinking they are on top of the world. You can no longer easily show off your quality, for there are so many others just like you, as you can see when you read the news or follow people online. Feeling important was easier, or at least possible, in a ‘smaller’ world, where the local tech person was revered for her or his knowledge. So in some way we are losing the feeling of being special based on what we do. Additionally, if AI enters more of the working world, how do we ensure that work will be there for everyone, as work is also a way to feel self-worth? I think keeping self-worth will be an increasing challenge in the connected, AI-supported world. As a self-test, simply think of yourself wanting to be invited onto a stage… it is a simple yet possibly mentally alarming exercise. Our society promotes ‘being the best’ at something, or having the most ‘likes’; what can we do to instill or keep self-worth?
Then a speaker on the promise of online education, referring to MOOCs versus formal education and the increase of young people going to college, which strangely contradicts what most profiles of future jobs seem to look like (professions that are rather labour-intensive). The speaker, Kaplan, managed to knock down people who get into good jobs based on non-traditional schooling (obviously my eyebrows went up, and I am sure more of us in the audience were pondering which conservative-thinking label to put on that type of scolding, stereotyping speech that protects the norm; he is clearly not even a non-conformist).

Here a person in the line of my interest takes the stage: Anita Schjoll Brede. Anita founded the AI company Iris.ai, and tries to simplify AI, machine learning, and data science for easier implementation. So… of interest.

Learning how to learn sets us human beings apart. We are in the era where machines will learn, based on how we learn… inevitably changing what we need to learn.
She sketches how AI is seen by most people, and where that model is not really correct.
Machine learning is modelled on the workings of a human brain: over time the machine adapts based on the data, and it learns new skills. It is a great model for seeing the difference. One caveat: we are still not sure how the human mind really works.
If we think of AI, we think of software, hardware, data… but our brains are slightly different, and our human brains are also flawed. We want to build machines that are complementary to the human brain.

Iris.ai started from the idea that new papers and research are published every day, and humans can no longer read it all. Iris.ai goes through the science, and the literature process is relatively automated; it is currently possible with a time decrease of 80%. The next step is hypothesis extraction, then building a truth tree of the document based on its scientific arguments. Once the truth trees are done, they can be linked to a lab or a specific topic, with the option of the machine learning results leading to different types of research. Human beings will still do the deeper understanding.

Another example is one tutor per child. Imagine one tutor for a child, a tutor which grows with that child and helps with lifelong learning. The system will know you so well that it will know how to motivate you, or move you forward. It might also have filters to identify discriminatory feelings or actions (remark of my own: but I do wonder, if this is the case, isn’t this limiting the freedom to say what you want and to be the person you want to be… it risks becoming extreme in either direction of the doctrine).
She refers to the Watson lawyer AI, which means that junior lawyers will no longer do all the groundwork. So new employees will have to learn other things, and be integrated differently. But this raises critical questions of course, as you must choose between employing people (but making yourself less competitive) or only hiring senior lawyers (remark of my own: but then you lose diversity and workforce).
She refers to ‘doctors’ built with machine learning, used in sub-Saharan settings to analyse human blood for malaria. This saves time for the doctors and health care workers, but evidently it has an impact on health care jobs.
Cognitive bias codex (brain picture with lots of links). Lady in the red dress experiment.

Her take on what we need to learn:
Critical thinking: refers to the source criticism she learned during her schooling.
Who builds the AI? Let’s say Google builds the first general AI… their business model will still get us to buy more soap.
Complex problem solving: we need to hold this uncertainty and have that understanding, to understand why machines were led to specific choices.
Creativity: machines can be creative, we can teach them this. Rehashing what is done and making it into something of your own is something machines can do (she refers to a lawyer commercial that was built by AI based on hours of legal commercials).
Empathy: is at the core of human capabilities. Machines are currently doing things, but are not yet empathic. But empathy is also important for building machines that can result in positive evolutions for humans, machines that will be able to love the world, including humans.


Wednesday 5 December 2018

@oebconference workshop notes and documents #instructionalDesign #learningTools

After being physically out of the learning circuit for about a year and a half, it is really nice to get active again. And what better venue to rekindle professional interests than at Online Educa Berlin.

Yesterday I led a workshop on using an ID instrument I call the Instructional Design Variation matrix (IDVmatrix). It is an instrument to reflect on the learning architecture (including tools and approaches) that you are currently using, to see whether these tools enable you to build a more contextualized or standardized type of learning (the list organises learning tools according to 5 parameters: informal - formal, simple - complex, free - expensive, standardized - contextualized, and aimed at individual learning - social learning). The documents of the workshop can be seen here.
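
Purely as an illustration of how such a matrix can be handled digitally, here is a hypothetical sketch (the 0-10 scale, the scores, and the tool names are my own invention, not part of the official IDVmatrix): each tool is scored on the five parameters, and the averages show where the current toolset leans.

```python
# Hypothetical encoding of the IDVmatrix: each tool scored 0..10 on the
# five parameters (0 = informal/simple/free/standardized/individual,
# 10 = formal/complex/expensive/contextualized/social).
AXES = ["formality", "complexity", "cost", "contextualization", "sociality"]

tools = {
    "Twitter":          [1, 2, 0, 3, 9],
    "LMS course":       [9, 6, 7, 4, 3],
    "Mobile microblog": [2, 3, 1, 6, 7],
}

# The average per axis shows where the current learning architecture leans.
for i, axis in enumerate(AXES):
    avg = sum(scores[i] for scores in tools.values()) / len(tools)
    print(f"{axis:>18}: {avg:.1f}/10")
```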

The workshop started off with an activity called 'winning a workshop survival bag', where the attendees could win a bag with cookies, nuts, and of course the template and lists of the IDVmatrix.
We then proceeded to give a bit of background on the activity, and how it related to the IDVmatrix.
Afterwards we focused on learning cases, and particularly on the challenges that the workshop participants were facing.
And we ended up trying to find solutions for these cases, sharing information, connections, ideas (have a look at this engaging crowd - movie recorded during the session).
The workshop used elements of location-based learning, networking, mobile learning, machine learning, just-in-time learning, social learning, social media, multimedia, note-taking, and a bit of gamification.

It was a wonderful crowd, so everyone went away with ideas. The networking part went very well also due to the icebreaker activity at the beginning. This was the icebreaker:

The WorkShop survival bag challenge!
Four actions, 1 bag for each team!

Action 1
Which person of your group has the longest first name?
Write down that name in the first box below.

Action 2

  • Choose two persons prior to this challenge: a person who will record a short (approx. 6 seconds) video with their phone and tweet it, and a person (or persons) who will talk in that video.
  • Record a 6-second video which includes a booth at the OEB exhibition (shown in the background) and during which a person gives a short reason why this particular learning solution (the one represented by the booth) would be of use to that person's learning environment (either personal or professional).
  • Once you have recorded the video, share it on Twitter using the following hashtags: #OEB #M5 #teamX (with X being the number of your team, e.g. #team1). This share is necessary to get the next word of your WS survival bag challenge.
  • Once you upload the movie, you will get a response tweet on #OEB #M5 #teamX (again with the number of your team).

Write down the word you received in response to your video in the second box below.

Action 3

  • Go to the room which is shown in the 360° picture on Twitter (see #M5 #OEBAllTeams).
  • Find the spot where 5 pages are lined up, each of them with a different language sign written on them.
  • Each team has to 'translate' the sign assigned to their team. You can use the Google Translate app for this (see Google Play, the app is free!).
Write down the translation in the third box below.

Action 4
Say the following words into the Google Home device which is located in the WS room

“OK Google 'say word box 1', say word box 2, say word box 3“

If Google answers, you will get your WS survival bag!

And although the names were not always very English, with a bit of tweaking using the IFTTT app all the teams were able to get the Google Home Mini to congratulate them on getting all the challenges right.