
Wednesday, 27 November 2019

Why is #AI useful to pro-actively prepare #learners in a changing world? #skills

Preparing for my talk today at Online Educa Berlin, after a great workshop-filled day yesterday: one of the workshops, on preparing for the 4th industrial revolution, was guided by Gilly Salmon (https://www.gillysalmon.com/presentations.html), and another, wonderfully inspiring and idea-provoking workshop with Bryan Alexander looked at methods to predict parts of the future.

Below you can find my slides for the session at Online Educa Berlin looking at ways that Artificial Intelligence can be used to pro-actively prepare learners for the skills of the future.

It covers the steps we have tackled at InnoEnergy with the skills engine. In the talk I will share our approach and how it differs from what was previously done. The slides are rather minimal, but if you download the talk, the notes in the slides give the full picture.



Thursday, 3 October 2019

Yes a learning engine: demo is ready, but #AI and #Learning challenges ahead #TBB2019 @InnoEnergyCE

If you have ideas on ensuring continuity in pedagogy when clustering courses (research), on certifying across corporate and university learning (blockchain/bit of trust certification), on opening up industry academies to decrease L&D costs (HR and L&D), ... please think along and respond to the challenges mentioned at the end.

People in high and common places seem to agree that the world is in transition, especially workplace learning, as innovations keep changing what is possible. As I am working on one such innovation (the skill project of InnoEnergy), I am on the one hand very excited about the new opportunities it might open, yet at the same time concerned that the complexity is greater than expected.

First: have a look at the demo screencast here. It shows the overall idea, and ... this might immediately give rise to questions.

Today the Business Booster event (TBB) opens, and with it, the skill project demo is launched. The skill project (we still need to find a brand name for it) combines AI and learning for the sustainable energy sector. But in essence, once we get the sustainable energy sector mapped with this tool, other sectors can follow.

AI and learning? What does it do? The project identifies industry needs (AI-driven), pinpoints emerging skill gaps in the sustainable energy sector (AI-driven), analyses the existing workforce to know where urgent skills gaps are situated (AI-driven) and then refers employees to a personalized learning trajectory addressing their skills gap (part AI, part human support). The goal of this project is to ensure that employees of the sustainable energy sector stay futureproof in a quickly changing working environment. Let's be honest, it sounds cool, but ... the challenges are multiple. 
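The pipeline just described can be sketched in miniature. This is a hypothetical illustration, not the actual InnoEnergy engine: the role requirements and employee profiles below are invented, and the AI-driven steps are reduced to simple set arithmetic.

```python
# Minimal sketch of the skills-gap step: industry needs vs. current
# workforce skills, producing a per-employee gap to address with learning.

ROLE_NEEDS = {  # hypothetical industry skill requirements
    "wind turbine engineer": {"aerodynamics", "grid integration", "python"},
}

WORKFORCE = {  # hypothetical employee skill profiles
    "alice": {"aerodynamics", "python"},
    "bob": {"python"},
}

def skill_gaps(role: str) -> dict[str, set[str]]:
    """For each employee, the skills the role needs that they lack."""
    needed = ROLE_NEEDS[role]
    return {name: needed - skills for name, skills in WORKFORCE.items()}

gaps = skill_gaps("wind turbine engineer")
# alice is missing only 'grid integration'; bob is missing two skills,
# so his personalized trajectory would be longer.
for name, gap in sorted(gaps.items()):
    print(name, sorted(gap))
```

The real engine would of course derive `ROLE_NEEDS` and `WORKFORCE` from data rather than hard-code them; the set difference is just the core matching idea.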

The emergence of a Learning Engine
The skill project helps realize the emergence of a learning engine: an intelligent, career-oriented engine which knows your skills and signposts you to where you want to go with your career by suggesting a personalized learning track.
In the Learning Engine you simply type in “goal: become Director of Innovation in offshore wind energy; which courses?” and the engine immediately returns a tailored, personalized learning track consisting of a variety of certified business trainings from universities, corporate academies, open educational energy resources and coaching options to send you on your way. This will allow professional learning to surpass the limits of classical, university-based learning.

Challenges
In order to get our engine to come up with the best, most-tailored courses, we need access to industry academies, as well as university courses. 
Learning-to-Learn capacities. Once we signpost learners to a cluster of courses, they need to take them (the familiar 'take the horse to water' comes to mind). But even if the learners take the courses, do they have the learning-to-learn capacities to complete them?
Granularity for course clustering: clustering courses to keep on top of your field of expertise is one thing, but what is the granularity of those courses? Micro-learning is an option, and modular learning will become a clear necessity, as all learners have different existing knowledge, which means they each need different parts in order to build on what they already know. 
Ensuring pedagogical continuity (even the OU finds that a challenge). Great, so let's cluster modules. But then, how can we link these modules together? Do we believe in non-pedagogical support (e.g. Sugata Mitra's hole-in-the-wall already dates back more than ten years), or do we need to find a solution that provides pedagogical continuity fitting this new assembly of short modules and courses coming from different sources (both university and industry)?
Certification across the learning ecologies: to blockchain or not to blockchain. Once we start learning across institutes, we need to keep track of what we learn, keeping tabs on the actual learning: corporate academy learning, university modules, hands-on training, workplace learning... One solution is to embed blockchain in education to keep track of all learning. But this is easier said than done, and open standards and trust might be issues to consider (the Bit of Trust initiative offers good reading). 
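Setting a full blockchain aside, the core requirement (a tamper-evident record of learning across providers) can be illustrated with a plain hash chain. This is a simplified sketch, not any particular certification product; all record fields are invented.

```python
import hashlib
import json

def add_record(chain: list[dict], record: dict) -> None:
    """Append a learning record, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger: list[dict] = []
add_record(ledger, {"learner": "alice", "course": "Offshore Wind 101",
                    "issuer": "corporate academy"})
add_record(ledger, {"learner": "alice", "course": "Grid Integration",
                    "issuer": "university"})
print(verify(ledger))  # True; editing any earlier record makes this False
```

A real system would add signatures, open standards for the record schema, and distributed storage; the chain of hashes is only the tamper-evidence part.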

Feel free to send questions, comments, share your own projects... let's get together.

Thursday, 19 September 2019

LiveBlog #Ectel2019 Rose Luckin @Knowldgillusion Keynote #AI & #education mindset

Rose Luckin takes the stage with a headset and immediately gets into her talk. The talk was very informative, and Rose is so knowledgeable about such a range of topics that I got a bit curious and envious about how her mind works. [I heard, though I do not know if this is correct and will ask her, that she only got into academic life later on in life?]

Key topic: develop the right AI mindset for businesses

A perfect storm: data mass plus computing power and memory enhancements, sophisticated algorithms ... this made AI part of our lives and education.

3 routes to Impact on Education

  • using AI ED to tackle some of the big educational challenges
  • educating people about AI so that they can use it safely and effectively
  • changing education so that we focus on human intelligence and prepare people for an AI world (hardest to do at the moment)

Working with select committee processes to try to take forward new developments. Debating the 4th industrial revolution and what it means for people to understand AI (it is not coding; it is about humans and their understanding of the fundamentals of machine algorithms, awareness; it is a much higher order we need to engage people with).

Need for multidisciplinary teams with equal input
As change happens, we need to change our educational systems (Singapore). Be resilient to change, be adaptive.
The above are not separate routes; they interconnect, and those interconnections mean we need to change and invest in our society using the emerging ideas and realities of these three buckets.
We need to build bridges between communities: all stakeholders (parents, communities, government, coders...).
Currently separated communities need to work together to build a credible, societally based AI solution.

Companies working with UCL EDUCATE
Not all companies are already using AI, but they want to understand more about it.
EDUCATE was European, but is turning into a global program from Jan 2020.
250 educational start-ups (each start-up has to have some profile in London, so most are UK-originated).
UCL provides training (labs, clinics, blended rooms, mentoring sessions)
It is free for the companies. Years were spent figuring out the gaps between educational departments and industry (such bridges existed between the hard sciences and industry, but not for education). A lot of the reason was that people did not know who to talk to or where to start => hence starting with start-ups, embedding the educational mindset, and understanding more about outcomes and validation of educational projects, i.e. what it means when we say 'it works' (these complexities result in the golden triangle: edtech developers, teachers & learners, academic researchers).

Start-ups are pushed to build a logic model, with the change being the learning they want to take place: what opportunities they have to analyze the data, and how they should demonstrate impact. We hope they will get to the last stage (see picture).
EdWards are in place (awards to prove 'evidence applied' and 'evidence aware' status).
120 companies became evidence aware, and 25 became evidence applied (the latter being much more difficult to achieve).

EDUCATE for schools
objective: build capacity in schools to identify and evaluate edtech that meets the needs of their teaching, learning or environment.
This approach can work in different educational programs.
Sit down, get the head teacher in to pick two or three educational challenges (what they find tricky), then teachers are chosen to test the edtech, to find out how it works.
Currently this is under development:
  • all resources included in option 1; schools identify new or existing edtech to pilot
  • EDUCATE provides new resources to help schools plan their edtech pilot
  • EDUCATE provides video and document resources to walk schools through the pilot process
  • schools step through the piloting process and receive one hour of 1:1 video mentoring support
  • schools evaluate the pilot (not sure I captured this last step correctly)

Sources
Century AI:
AI and big data power personalised learning
Quipper: video insight, smart study planner, knowledge base
EvidenceB KidsCode: paths through materials, optimised paths through the material

classic recommender systems (finding the right resources for the educator/student)
Bibblio
teachpitch

Chatterbox: refugee as expert native speaker with matching backgrounds (e.g. engineering background)
OyaLabs: a cloud-based monitor in the baby lounge that monitors interactions with the baby and its cognitive development for language development
MyCognition: algorithms automatically increase the number of training loops for the domains where you have the greatest need. If attention is your greatest need, you will receive more attention loops, building resilience in attention. As you progress, the loops become more challenging. It looks at your attention, actions... assessment and report, which powers AquaSnap and takes you to an underwater world (sea routes, fish names...) adapted to your own cognitive status.

Building an AI mindset
Important for any company that wants to get into AI
What does it mean to have the right data?
not just the tech team must understand the data and AI
as an individual it would be good to understand more about AI

Working with OSTC / the ZISHI company: an example of AI mindset collaboration. What they do: training for trading floors. They have to train everyone. They try to attract diversity in the workforce and pick people from less obvious universities. ZISHI tries to use AI, AI for the financial sector.
The financial sector has used AI for some time. AI is used to assist in recruiting the best traders, assist in training the traders, help traders improve performance, and mentor the traders throughout their careers.

Understanding OSTC's performance metrics

  • how can training behavior be measured?
  • can we profile traders by their trading behavior?
  • how do these profiles relate to performance?
  • can we then create a tool to help recruitment, a tool to help traders, and a tool to help managers?

The CEO of OSTC started out at the post floor of Lloyds and moved up. Once he saw the lack of training, he got into training and set up OSTC. Fundamentally, what they try to do is create an AI mindset.

Much of what traders do is not easy or obvious:

  • what others tell me that I do
  • what I think I do
  • what I really do
  • what my family thinks I do...

Workflow
Nearly half their traders left less than one year in. So something was wrong, and the investment was too costly for the long-term results.
Modeling using machine learning techniques to profile traders and make predictions (recruitment data from tests, interviews and videos; trading history data from trading platforms; multimodal data from eye movements and button clicks; and behavioral data).
Masses of data from the tools used in the company.

Profiling 4 types of traders, using four identified characteristics:
data visualizations, using clustering techniques.
It turns out that the behavioral patterns relate to significantly different performance (risk management, bonuses...) and different cognitive abilities & traits (openness to experience, agreeableness...) [here my mind went off... must be something related to trader vocabulary?]
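The profiling step described here, clustering traders on behavioral features, can be sketched with a tiny k-means. OSTC's actual pipeline is not public, so the features (trades per day, average risk) and the data below are invented.

```python
# Toy k-means (k=2) over invented trader features, with a deterministic
# initialization so the example is reproducible without a random seed.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Component-wise mean of a non-empty list of points."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def kmeans(points, k=2, iters=10):
    centers = [points[0], points[-1]]  # naive deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist2(p, centers[j]))
            clusters[i].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Invented (trades/day, avg risk) profiles: cautious vs. aggressive traders.
traders = [(5, 0.1), (6, 0.2), (7, 0.15), (40, 0.8), (45, 0.9), (50, 0.85)]
centers, clusters = kmeans(traders)
print([len(c) for c in clusters])  # two groups of three
```

The real work is in choosing and normalizing the features; once profiles exist, relating cluster membership to performance metrics is a separate analysis step.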

Challenges to the AI mindset

  • collaboration: is everybody onboard?
  • getting rid of AI's sci-fi fantasies and fears
  • digging in rich soil will bring out stuff. Are we ready to act upon it?
  • the appetite comes with the first byte - be ethically prepared to diet
  • data is hard to collect, standardize, clean, #you-name-it

Opportunities for the AI mindset

  • map the organisation's data-information-knowledge-wisdom pyramid (and who is where)
  • identify data sources: what is ready to be picked, what still needs to be ripened or sown
  • what can we learn from previous (successful or failed) experiments or pilots? what hypotheses do they already have? what are their blind spots?
  • metrics - how do we know what success looks like?

OSTC - lessons

  • team members across different tiers need to embrace change
  • collect as much data as possible
  • tech team in company not the same as data team
  • need new expertise to digitize documents and learning content
  • develop coherent and consistent procedures in all offices across the globe despite the cultural bias
  • track the daily activities through logs and multimodal data
  • develop tools

Developing an AI mindset

  • AI is set to transform education
  • three core types of interconnected work: using AI, understanding AI, changing education because of AI
  • multi-stakeholder collaboration can help achieve these three types of work
  • EDUCATE is an example of a multi-stakeholder collaboration to help develop a research mindset in Edtech developers and educators
  • for AI companies, or companies who want to use their data and AI we also need to develop an AI mindset (or perhaps initially a data mindset)
  • Academic research partners need to be put in this mix

Barclays provided somebody (an 'eagle') in branches who would help people use technology (from simple to complex), to get people engaged in using and thinking about technology and how they can get involved.

Wednesday, 18 September 2019

#ectel2019 #mlearn2019 keynote @GeoffStead on #informal learning at scale #languages #AI

Geoff Stead (@geoffstead ) takes the stage with a headset, a black shirt and walking like a fit Californian surfer (looking great).

As chief product person of the Babbel language corporation, he talks about informal learning at scale and will offer insights. 750 people all working on 1 app, fully funded by individuals willing to pay small amounts of money to learn languages. Mostly Euro-centric coming from the organic growth of the organisation.

5000 courses => 64000 lessons (unique language pairs), focus on communicative confidence, light-hearted, diverse topics. Well over 1 million subscribers (of which I am one - Spanish).

Digital = scale and reach
Team of 10 people can start the magic of the web.
How can we ensure Quality?
Learner centric, otherwise what is the value of the application?

Using a learner journey to unite efforts and to enable connections between learners. Conceptual flows of individuals are used as the mantra to move the app forward.
See picture, where they also embed some spaced learning.
They work with patterns that are turned into fake personas, which are designed and modeled (design thinking approach), enabling developers and strategists to understand the different demographics. These personas are linked to learner journeys, which keeps the focus on the learners.

Learning from the learners
What do they do? Analytics, A/B tests, behavioral segmentation (showing what you did, signposting to what you did that worked...), interviews, intercept surveys, wishboards, market surveys, UX research (asking permission to videotape parts of the learner journey and ideas), customer service, market research. No single method is representative, but with enough different angles they hope to get closer to the actual learning in all its complexity.

Dev at scale
20 different teams of people, a lot of independence, but only one product. So how likely is it that the releases are synchronizable as soon as teams launch them? Tripping over each other, contradictions... it quickly becomes chaos. So it is self-driven and autonomous, but potentially disastrous for the learners. Marketing and money were the basis for scaling: stickers in planes and on poles in big cities, getting people to pay a bit of money.

How do you trade off freedom versus working together
Teams organised around the User Journey: Experience Groups (XGs) are clusters of teams across Product & Engineering, uniting to enhance cross-functional collaboration around product ideas and speed up the development cycle: impressions, engagement, learning, learning media, platform and infrastructure (really interesting, this!).

Product department 
Product is made up of many specialist teams. some teams are embedded within multi-function or engineering teams: didactics, product design, product management and QA, data engineering and analytics, quality and release management.


Towards "learning experience design"
A mixed multidisciplinary approach; but in larger companies, teams are most of the time not set up as bridged teams in a multidisciplinary, cross-functional way.

Babbel meetups in Berlin every 2 - 3 months, welcome to come and have a look.

LXD basics
Digital learning is not content distribution; we are only a small slice of our learners' day, and we never really know what is going on. Learning Experience Design is all about the multidisciplinary nature.

Learner engagement
It only works for them if they use it. What is the science of pulling learners back in?
Weekly active paying users: returners. One of the key drivers = 7-day return to learning (this is what most of the dev teams use to validate the short-term impact of new features and refinements). If people try a new release, do they come back within 7 days to use the newly released option? This simplifies discussions on what is important.
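A metric like the 7-day return can be computed straight from an event log. A minimal sketch, assuming an invented log format of (user, day) activity pairs; Babbel's real instrumentation (Tableau, Amplitude) is of course far richer.

```python
from collections import defaultdict

def seven_day_return_rate(events, release_day):
    """Share of users active on release_day who come back within 7 days.

    `events` is an iterable of (user, day) pairs; days are integers.
    """
    days_by_user = defaultdict(set)
    for user, day in events:
        days_by_user[user].add(day)
    tried = [u for u, days in days_by_user.items() if release_day in days]
    returned = [
        u for u in tried
        if any(release_day < d <= release_day + 7 for d in days_by_user[u])
    ]
    return len(returned) / len(tried) if tried else 0.0

log = [("ana", 0), ("ana", 3), ("ben", 0), ("ben", 12), ("cem", 0), ("cem", 6)]
print(seven_day_return_rate(log, release_day=0))  # 2 of 3 users return
```

The point of such a single shared metric is exactly what the talk says: it gives twenty autonomous teams one number to argue about.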

Obsessive focus on interpreting events: Tableau, Amplitude (big fat data stream).
Mixing art and science to understand the engagement ladder (to help our learners focus): Hooked (N. Eyal), triggers and motivation (Fogg), Nudge (Thaler), flow state, spaced repetition, Babbel qualitative and quantitative data...
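Of the frameworks listed, spaced repetition is the most directly algorithmic. A minimal Leitner-style sketch, purely for illustration; Babbel's actual scheduler is not public and is surely more sophisticated.

```python
def next_review(interval_days: int, recalled: bool) -> int:
    """Leitner-style scheduling: double the interval on successful recall,
    reset to one day on failure."""
    return interval_days * 2 if recalled else 1

# Invented recall history for one vocabulary item.
interval = 1
for recalled in [True, True, True, False, True]:
    interval = next_review(interval, recalled)
print(interval)  # 1 -> 2 -> 4 -> 8 -> 1 -> 2
```

Real systems (e.g. SM-2 style algorithms) also adjust an ease factor per item, but the expanding-interval idea is the core of spaced repetition.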

Gamification: treat with care, some very useful tools, often used for trivial impact.

AI to make Babbel more human
AI is a very broad umbrella term for a wide range of very specific disciplines. Babbel uses 'narrow AI' to focus on very specific problems/opportunities. NLP, CL, ASR...
Making interfaces more human (hybrid human-AI). Using NLP to make the automated feedback more human (e.g. "I understand what you meant").
Making guidance more useful: content recommendations, based on other, related topics and level. Still very much in beta. Optimising for speed, and identifying opportunities.

Rose Luckin's golden triangle is used.
Tutorbot corpus (Kate McCurdy, Dragan Gasevic...)





Tuesday, 17 September 2019

#Ectel2019 Covadonga Rodrigo from #UNED @cova_rodrigo #gender #AI #bias


From here a couple of cases and projects (slides will follow)

Great presentation by UNED Covadonga Rodrigo: will AI be sexist? @cova_rodrigo (liveblog)
Referring to Amazon's male/female recruitment: the AI had a bias in favor of men. Why?
Because the AI was trained on historical data containing more males, which made the system think male candidates were preferable.
Microsoft (2016) had the same result with their AI system: an automated bot on Twitter that ended up sexist due to what the AI learned.
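The mechanism behind both failures is easy to reproduce: a model that merely learns outcome frequencies from skewed historical data inherits the skew. A deliberately crude illustration with invented hiring data (not Amazon's actual system or data):

```python
from collections import Counter

# Invented historical hiring data: mostly male hires, mostly female rejections.
history = ([("male", "hired")] * 80 + [("female", "hired")] * 10 +
           [("male", "rejected")] * 10 + [("female", "rejected")] * 40)

def hire_rate(gender):
    """Historical hire rate per group - the 'model' a naive frequency
    learner would extract from this data."""
    outcomes = Counter(o for g, o in history if g == gender)
    return outcomes["hired"] / (outcomes["hired"] + outcomes["rejected"])

# A system scoring candidates by their group's historical hire rate
# inherits the bias outright: it prefers male candidates.
print(round(hire_rate("male"), 2), round(hire_rate("female"), 2))  # 0.89 0.2
```

Nothing in the data says men are better candidates; the model only encodes who was hired in the past, which is exactly the failure mode described above.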

So who is programming the AI systems? Up to 90% were men (2015); this is changing gradually, but at the moment women make up only 16 to 19% of programmers, which results in differences in terms of bias. By 2023 women will probably be 27.7% of software developers in the world; this is still below the critical threshold of 33% that we know from the social sciences is needed for a group to get their voices heard.

Some issues: the glass ceiling, the identity of what engineers are, school atmosphere, the need for more female references in the curricula. This is not only in engineering but also in other areas.
The AI assistants are also mostly female-voiced => the female secretary, not the female lead.

Ethics: curricula are biased, and ethical subjects are missing from curricula. There is a lack of humanistic studies in education; we need to transform this.

She mentions that she is 50+ and was an engineer from early on, so there were women engineers then; entry of women was not the problem. Still, we have male domination, which results in biases in terms of gender and the differences that exist in society.

Sources of sexism (slides will follow)


#ECTEL2019 Workshop #AI in #Education #liveblogpost #AIED @cova_rodrigo @paco

This is a live blog, so bits and pieces noted.

Paco Iniesto (The Open University, IET, AIED) is the workshop lead, and he is looking good and giving a strong overview.
AI is all around us: cars, games, robotics, AlphaGo (see Netflix), predictive policing, dating apps, thispersondoesnotexist.com (the 3-min video on how they generate these images is of interest), ...

What is AI?
It isn't easy to define AI: many people have an idea, but there is no single definition.
Computer systems designed to interact with the world ... (Luckin, Waynes...)

The promise of AI is not yet realized, although it has been developing for 40 years.
It's big business
AI shines a spotlight on existing educational practices
AI rehashes what we have at this point in time

Implications of AIED: algorithms and computation: what are the algorithms, what are their consequences, how to control them... accuracy and validity of assessments, are we treating students as human beings?

Lumilo augmented reality glasses for teachers (https://hechingerreport.org/these-glasses-give-teachers-superpowers/), video can be found here: https://kenholstein.myportfolio.com/the-lumilo-project This got some negative critiques from teachers and learners.



Ethical questions
Connection between effect and psychological traits of learners, but where can this lead? (cf. Cambridge Analytica).
What if we have the data for 'good', but others use it for 'bad' ideas?
What about GDPR, who owns the data, how does this affect funding, if students opt out of the system and all their data is erased; can we use blockchain in order to keep the data connected to the learners?
Where is the data (so that it can be erased), and how does this affect future employment?
Will the system be able to evaluate actual learning, if this is the case, what benefits will it bring to teaching and learning?
Does the support of learners limit their self-directed, learning-to-learn capacities?
Starting from the technology and then moving to support the learning seems the reverse of how it should be done.
What is the educational progress using these technologies?
What is the difference between monitoring and surveillance? (where is the barrier)
Can learners hack the system to get more or less support?
Does the teacher have enough time to support learners with difficulties? And does their help actually benefit the learning?
What about consent forms for those who are not able to give consent?
Marginalized people are in need of technological support, but how do we support them in a secure way?

Sources:
Sheila project: https://sheilaproject.eu/
Weapons of Math Destruction (book)

The post-it notes with ideas from three different groups addressing some of the questions mentioned in the above slide.







Monday, 26 August 2019

Working on the #LearningEngine matching #learning to #skillgaps #skills


Forget the search engine; marvel at the emergence of the Learning Engine (admittedly it is still a dream in progress, but we are getting closer).

What made search engines so innovative decades ago? 
They created connections: connections between online users and content. The search engine developers did not produce much content themselves; they referred to content from outside providers, and that immediate connection is what made them special. They connected supply with demand, small and big businesses, individuals and groups, building a service on top of the content that each of the producers provided. 
Content free and available. A great big benefit of the content that comes up in the search engines is of course that it is free, ... which is a lot more difficult if you are trying to build a learning engine. Professional courses are rarely free (MOOCs notwithstanding), and in a lot of cases even the courses themselves are behind closed walls: e.g. online courses only available for employees, for registered students...

Search engines are great, but Learning Engines are becoming a really urgent demand
The shift in our professional society is no longer about jobs that disappear due to automation; it is about jobs diversifying through the demands of change, driven by innovation. Learning to learn is becoming essential to being employed and moving forward (or at least it seems that way for some of the jobs in sectors driven by innovation and change). 
In order to handle a variety of jobs, we need to learn, and we need to personalize each of our learning journeys based on our previous experiences and skills, both hard and soft. This is where the Learning Engine comes in and takes shape. At InnoEnergy I am now co-developing learning for real-life jobs. At present, 'addressing the skills gap' is all the rage: LinkedIn is investing in its Economic Graph, Burning Glass and the like are gathering data on skills, and countries and regions are building skills taxonomies (e.g. Nesta) that can be used both in manual brainstorms and in Artificial Intelligence-driven projects. 

If you take these latest tech innovations and options into consideration, it isn't difficult to imagine a true personalized Learning Engine. 

The challenge: how do we build a futureproof workforce? Maybe with a Learning Engine.
The Learning Engine as I envision it combines innovation, AI and learning skills for the sustainable energy sector (as EIT InnoEnergy works within the sustainable energy sector). Basically, the project identifies industry needs, pinpoints emerging skill gaps in the sustainable energy sector, analyses the existing workforce to know where urgent skills gaps are situated, and then refers employees (or employee groups) to a personalized learning trajectory to alleviate their skills gap. 
The combination of these steps should ensure that the employees of the sustainable energy sector stay futureproof in a quickly changing working environment. 

This project helps to realize the emergence of the ‘Learning Engine’, an intelligent career-oriented engine which knows your own skills and signposts you to where you want to go with your career by suggesting a personalized learning track. 
Just imagine that you go to the Learning Engine and simply type in “Director of Innovation in offshore wind energy”, and the engine immediately returns a tailored, personalized learning track consisting of a variety of certified trainings from universities, corporate academies, open educational energy resources and coaching options! Personally, I think that would be quite a catch!
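A toy version of that query-to-track matching: map the goal role to required skills, subtract what the learner already has, and greedily pick catalogue courses that cover the rest. All roles, skills and courses here are invented placeholders, not a real catalogue.

```python
# Invented catalogue: each course teaches a set of skills.
COURSES = {
    "Offshore Wind Fundamentals": {"offshore wind", "turbine design"},
    "Innovation Management": {"innovation strategy", "leadership"},
    "Energy Markets": {"energy markets"},
}

ROLE_SKILLS = {
    "Director of Innovation in offshore wind energy":
        {"offshore wind", "innovation strategy", "leadership", "energy markets"},
}

def learning_track(goal: str, known: set[str]) -> list[str]:
    """Greedily pick courses until the remaining skill gap is covered."""
    gap = ROLE_SKILLS[goal] - known
    track = []
    while gap:
        course = max(COURSES, key=lambda c: len(COURSES[c] & gap))
        if not COURSES[course] & gap:
            break  # the catalogue cannot cover the remaining gap
        track.append(course)
        gap -= COURSES[course]
    return track

print(learning_track(
    "Director of Innovation in offshore wind energy",
    known={"offshore wind"},
))
```

Greedy set cover is a deliberate simplification; a real engine would weigh course quality, certification, prerequisites and cost, not just skill overlap.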

Learners mix and match already
In a way, we already see this shift towards a more quilted professional learning in the MOOCs taken by professional learners to enhance their career opportunities. Those career-minded employees register for MOOCs developed by universities as well as businesses, taking a few courses here and a few courses there. Soon employees will be able to link different course certificates to ensure a future-proof career (whether we should be using blockchain in education to validate the learning trajectory is another matter; see some mails on this here and here).

Corporate academies will need to open part of their courses: are they willing?
As the project evolves, it is clear that the AI engines are running and becoming smarter as additional data is fed into the system. But the main challenge remains: how to get access to course descriptions so we can signpost learners to those courses. If we don't have access to courses, or even to their descriptions, then we cannot send learners to them. 
I would think that corporate academies would benefit from sharing some of their courses: if they form a network, they will no longer need to develop all the courses, they could 'swap' or agree to develop specific courses and find other courses for their employees at competing companies. Because although they are competing, all of them have basic courses for their employees, and those course costs could be cut by coming to a course-development agreement. 

Thursday, 28 February 2019

Liveblog @mathvermeulen #JustDoIt #vovpitstop @vovnetwerk

Liveblog Mathias Vermeulen: Ode aan Angus (Ode to Angus)
(Great keynote, capturing the audience first, then coming to business with strong ideas)
Long live technology!
Technology is (ahem)
  • our LMS
  • our e-learning modules
  • our course vending machine

MacGyver is the biggest inspiration of @mathiasVermeulen
Fabulous learning is developed by thinking "What would MacGyver do?"
  • Find what is out there, and use it to your own advantage and needs!
  • L&D is a party for everyone: become best friends with IT, HR, L&D
  • "Ik ben een bricoleur" (I am a bricoleur)

Swiss army knife (Zwitsers zakmes)
  • xAPI – LRS
  • VR/AR
  • Games (Bury Me, My Love – try it: a text-based but serious game on Syria)
  • Mobile
  • AI and chatbots

Don’t worry be crappy (Guy Kawasaki)
Try out tools, set aside time (e.g. Friday afternoon) to test, think, come up with ideas on learning solutions.
Think ahead
  • New people (we are good at this)
  • More (what can we do to train our people)
  • Apply (e.g. performance support when they need it: just-in-time learning)
  • Solve (again, take time to learn what is out there)
  • Change (produce a lean learning approach)

Monday, 14 January 2019

EU report on the impact of AI on Learning Teaching and Education #AI #education #EU #policy

The recently published report on the impact of artificial intelligence (AI) on learning, teaching and education gives a great outline of the realities of AI, the state of the art, and the challenges as well as opportunities for those of us with expertise in learning in general, or in learning theory. The report is part of the JRC Science for Policy documents, and it is very well written by Ilkka Tuomi (who is renowned for his expertise in the Internet, data, AI and computer science). Ilkka recorded a brief overview of the report, which can be seen below. In the video, he refers to current machine learning systems as datavors, he defines (and rightfully so) the term machine learning as an oxymoron, and he puts current AI in a very accessible parallel, namely Artificial Instinct (as current AI is mainly about behaviourist approaches and patterns).

A very interesting perspective is that Ilkka and the report stress the importance of having someone on board of any AI for learning/teaching/education project who has expertise in learning and learning theory.

The policy challenges mentioned at the end of the report are:

  • A continuous dialogue on the appropriate and responsible uses of AI in education is therefore needed.
  • In the domain of educational policy, it is important for educators and policymakers to understand AI in the broader context of the future of learning. As AI will be used to automate productive processes, we may need to reinvent current educational institutions.
  • In general, the balance may thus shift from the instrumental role of education towards its more developmental role.
  • A general policy challenge, thus, is to increase among educators and policymakers awareness of AI technologies and their potential impact.
  • Learning sciences could have much to offer to research on AI, and such mutual interaction would enable better understanding about how to use AI for learning and in educational settings, as well as in other domains of application.
  • As there may be fundamental theoretical and practical limits in designing AI systems that can explain their behaviour and decisions, it is important to keep humans in the decision-making loop.
  • The ethics of AI is a generic challenge, but it has specific relevance for educational policies.
  • Human agency means that we can make choices about future acts, and thus become responsible for them.  AI can also limit the domain where humans can express their agency.
  • An important policy challenge is how such large datasets that are needed for the development and use of AI-based systems could be made more widely available.


This 47-page report covers the following topics:

1 Introduction
2 What is Artificial Intelligence?
  2.1 A three-level model of action for analysing AI and its impact
  2.2 Three types of AI
    2.2.1 Data-based neural AI
    2.2.2 Logic- and knowledge-based AI
  2.3 Recent and future developments in AI
    2.3.1 Models of learning in data-based AI
    2.3.2 Towards the future
  2.4 AI impact on skill and competence demand
    2.4.1 Skills in economic studies of AI impact
    2.4.2 Skill-biased and task-biased models of technology impact
    2.4.3 AI capabilities and task substitution in the three-level model
    2.4.4 Trends and transitions
    2.4.5 Neural AI as data-biased technological change
    2.4.6 Education as a creator of capability platforms
    2.4.7 Direct AI impact on advanced digital skills demand
3 Impact on learning, teaching, and education
  3.1 Current developments
    3.1.1 “No AI without UI”
  3.2 The impact of AI on learning
    3.2.1 Impact on cognitive development
  3.3 The impact of AI on teaching
    3.3.1 AI-generated student models and new pedagogical opportunities
    3.3.2 The need for future-oriented vision regarding AI
  3.4 Re-thinking the role of education in society
4 Policy challenges

Below is the 20-minute video in which Ilkka Tuomi explains the report in easy terms.




Friday, 4 January 2019

Call for Papers #CfP #AI #mLearning #MOOC in conferences #UNESCO @FedericaUniNa

January has started and three important calls for papers are coming up, all related to conferences. The three conferences are: eMOOCs2019 (on MOOCs), Mobile Learning Week at UNESCO (focus on AI for development and mobile learning), and eLearning Africa (this year in Côte d'Ivoire), listed per deadline of the CfP.

Mobile learning week UNESCO (Paris, France): focus on AI for sustainable development
Call for proposals deadline: 11 January 2019
UNESCO Global AI Conference: Monday 4 March 2019
Policy Forum and Workshops: Tuesday 5 March 2019
Symposium: Wednesday 6 & Thursday 7 March 2019
Strategy labs & International Women’s Day: Friday 8 March 2019
Exhibits: Monday 4 to Friday 8 March 2019
More information: https://en.unesco.org/mlw/2019
UNESCO, in partnership with its confirmed partners – the International Telecommunication Union and the Profuturo Foundation – will convene a special edition of Mobile Learning Week (MLW) from 4 to 8 March 2019, at the UNESCO Headquarters building in Paris (France). The five-day event, under the theme ‘Artificial Intelligence for Sustainable development’ will start with the ‘Global Conference - Principles for AI: Towards a humanistic approach?’, followed by a one-day Policy Forum and Workshops, a two-day International Symposium and a half-day of Strategy Labs. On 8 March, towards the close of MLW, participants will be invited to join the celebration of International Women’s Day, particularly a debate on Women in AI to be held in UNESCO Headquarters. During the entire week, exhibitions and demonstrations of innovative AI applications for education and more than 20 workshops will be organized by international partners and all programme sectors of UNESCO.
eMOOCs 2019 in Naples, Italy
Deadline CfP: 14 January 2019.
Conference date:  May 20 – 22, 2019
More information: https://emoocs2019.eu/call-for-papers/overview/
Description
The Higher Education landscape is changing. As the information economy progresses, demand for a more highly, and differently, qualified workforce and citizens increases, and HE Institutions face the challenge of training, reskilling and upskilling people throughout their lives, rather than providing a one-time in-depth education. The corporate and NGO sectors are themselves exploring the benefits of a more qualified online approach to training, and are entering the education market in collaboration with HE Institutions, but also autonomously or via new certifying agencies. Technology is the other significant player in this fast-changing scenario. It allows for new, data-driven ways of measuring learning outcomes, new forms of curriculum definition and compilation, and alternative forms of recruitment strategy via people analytics.

At the MOOC crossroads where the three converge, we ask ourselves whether university degrees are still the major currency in the job market, or whether a broader portfolio of qualifications and micro-credentials may be emerging as an alternative. What implications does this have for educational practice? What policy decisions are required? And as online access eliminates geographical barriers to learning, but the growing MOOC market is increasingly dominated by the big American platforms, what strategic policy do European HE Institutions wish to adopt in terms of branding, language and culture?

The EMOOCs 2019 MOOC stakeholders summit comprises the consolidated format of Research and Experience, Policy and Business tracks, as well as interactive workshops. Original contributions that share knowledge and carry forward the debate around MOOCs are very welcome.

eLearning Africa - Abidjan - Côte d'Ivoire
Deadline CfP: February 22, 2019.
Conference date: October 23 - 25, 2019
More information: https://www.elearning-africa.com/programme_cfp.php
Description
The 14th edition of eLearning Africa, the International Conference & Exhibition on ICT for Education, Training & Skills Development, will take place in Abidjan, Côte d'Ivoire from October 23 - 25, 2019 and is co-hosted by the Government of Côte d'Ivoire.

A unique event, Africa’s largest conference and exhibition on technology supported learning, training and skills development, eLearning Africa is a network of leading experts, professionals and investors, committed to the future of education & training in Africa.

Read more about the eLearning Africa 2019 theme, The Keys to the Future: Learnability and Employability, and become involved in shaping the conference agenda by proposing a topic, talk or session here.
Register today to profit from our Early Bird Rate

About eLearning Africa
Founded in 2005, eLearning Africa is the leading pan-African conference and exhibition on ICT for Education, Training & Skills Development. The three day event offers participants the opportunity to develop multinational and cross-industry contacts and partnerships, as well as to enhance their knowledge and skills.
Over 13 consecutive years, eLearning Africa has hosted 17,278 participants from 100+ different countries around the world, with over 80% coming from the African continent. More than 3,530 speakers have addressed the conference about every aspect of technology supported learning, training and skills development.

Saturday, 8 December 2018

#AI #MachineLearning and #philosophy session #OEB18 @oebconference @OldPhilNewLearn


At OEB2018 the last session I led was on the subject of AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on the subject of philosophy and AI.

As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes follow below, followed by the slides that preceded the session, to indicate where the idea for the workshop came from.

Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy

The session started off with the choices embedded in any AI, e.g. a Tesla car about to run into people: does it run into a grandmother or into two kids? What is the ‘best solution’? Further into the session this question got additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters; and we could build into the car that, in case of emergency, it must decide that the lives of others are more important than the lives of those in the car, and as such simply crash into a wall, avoiding both grandmother and kids.

The developer or creator gives parameters to the AI; with machine learning embedded, the AI will start to learn from there, based on feedback from or directed to the parameters. This is in contrast with classic computer-based rules, where rules are given and are either successful or not, but form no basis for new rules to be implemented.
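The contrast described above can be illustrated with a toy sketch in Python: a hand-coded rule versus a decision boundary derived from labelled feedback. All the data, names and the midpoint heuristic below are made up for illustration.

```python
# Toy contrast between a hard-coded rule and a "learned" parameter.

# Rule-based: the developer fixes the decision boundary once.
def rule_based_pass(score):
    return score >= 50.0  # the rule never changes, whatever the data says

# Learning-based: the boundary is derived from labelled feedback
# (score, passed) and would shift if the feedback shifted.
def learn_threshold(examples):
    passed = [s for s, ok in examples if ok]
    failed = [s for s, ok in examples if not ok]
    # Midpoint between the highest failing and the lowest passing score.
    return (max(failed) + min(passed)) / 2

data = [(30.0, False), (45.0, False), (62.0, True), (80.0, True)]
threshold = learn_threshold(data)  # 53.5 for this made-up data

print(rule_based_pass(55.0))  # True: the fixed rule says so
print(55.0 >= threshold)      # True: here the learned boundary agrees
```

Feed the learner different examples and the threshold moves; the hard-coded rule never does. That, in miniature, is the difference the session discussed.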

From a philosophical point of view, the impact of AI (including its potential bias coming from the developers or the feedback received) could be analysed using Hannah Arendt’s ‘Power of the System’, in her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of the AI system.

The growth of an AI based on human algorithms does not necessarily mean that the AI will think like us. It might derive different conclusions, based on the priority algorithms it chooses. As such, current paradigms may shift.

Throughout the ages, the focus of humankind changed depending on new developments, new thoughts, new insights into philosophy. But this means that if humans put parameters into AI, those parameters (which are seen as priority parameters) will also change over time. This means that we can see from where AI starts, but not where it is heading.

How many ‘safety stops’ are built into AI?
Can we put some kind of ‘weighing’ into the AI parameters, enabling the AI to fall back on more important or less important parameters when a risk needs to be considered?

Failure, for humans, can result in growth based on those failures. AI also learns from ‘failures’, but it learns from differences in datapoints. At present the AI only receives a message ‘this is wrong’; at that moment in time – if something is wrong – humans make a wide variety of risk considerations. In the bigger picture, one can see an analogy with Darwin’s evolutionary theory, where time finds what works based on evolutionary diversity. But with AI the speed of adaptation increases immensely.

With mechanical AI it was easier to define which parameters were right or wrong. E.g. with Go or chess you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing among those options is a straight path of considerations. At present humans make many more considerations for each conundrum or action that occurs. This means that there is a whole array of considerations that can also involve emotions, preferences… When looking at philosophy you can see that there is an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes gives good solutions for both: workable solutions which can be debated as being valuable outcomes, although based on different priorities and even very different takes on a concept. The choices or arguments made in philosophy (over time) also clearly point to the power of society, technology and the reigning culture at that point in time. For what is good now in one place can be considered wrong in another place, or at another point in time.

It could benefit teachers if they were supported with AI to signal students with problems (but of course this means that ‘care’ is one of the parameters important for society; in another society it could simply be that those students who have problems are set aside). Either choice is valid, but each builds on a different view on whether we care by ‘supporting all’ or by ‘supporting those who can, so we can move forward quicker’. It is only human emotion that makes a difference in which choice might be the ‘better’ one.

AI works in the virtual world. Always. Humans distinguish between the real and the virtual world, but for the AI all is real (though virtual to us).
Asimov’s laws of robotics still apply.

Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.

Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes it from there, learning by itself. The machine learns, and as such the machine becomes liable if something goes wrong, but…? (e.g. the Tesla crash).

Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or health care worker provides it with additional care and attention to the patient).
Similarly, smart-object dialogue took off once a feeling of attention was built into it: e.g. replies from Google Home or Alexa in the realm of “Thank you” when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also refers to the difference between feeling ‘related to’ something and being ‘attached to’ something.

Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.

AI is still framed in terms of good and bad (ethics), and ‘improvement’, which is linked to the definition of what is ‘best’ at that time.

Societal decisions: what do we develop first with AI? Solutions for the refugee crisis, or self-driving cars? This affects the parameters at the start. Compare it to some idiot savants, where high intelligence does not necessarily imply active consciousness.

Currently some humans are already bound to AI: e.g. astronauts, for whom the system calculates everything.

And to conclude: this session ranged from the believers in AI (“I cannot wait for AI to organise our society”) to those who think it is time for the next step in evolution, in the words of Jane Bozarth: “Humans had their chance”.




Thursday, 6 December 2018

Data driven #education session #OEB18 @oebconference #data @m_a_s_c

From the session on data driven education, with great EU links and projects.

Carlos Delgado Kloos: using analytics in education
Opportunities
The Khan Academy system is a proven one, with one of the best visualisations of how students are advancing, with a lot of stats and graphs. Carlos used this approach for their ‘0 courses’ (courses on basic knowledge that students must know before moving on in higher ed).
Based on the Khan stats, they built a high level analytics system.
Predictions in MOOCs (see paper of Kloos), focusing on drop-out.
Monitoring in SPOCs (small private online courses)
Measurement of Real Workload of the students, the tool adapts the workload to the reality.
FlipApp (to gamify the flipped classroom), to remind and notify students that they need to watch the videos before class, or they will not be able to follow. (Inge: sent to Barbara).
Creation of educational material using Google Classroom. Google Classroom sometimes knows what the answer to a quiz will be, which can save time for the teacher.
Learning analytics to improve teacher content delivery.
Use of IRT (Item Response Theory) to see which quizzes are more useful and effective, interesting to select quizzes.
Coursera defines skills, matches them to jobs and, based on that, recommends courses.
Industry 4.0 (big data, AI…) for industry can be transferred to Education 4.0 (learning analytics based on machine learning). (Education 3.0 is using the cloud, where both learners and teachers go.)
Machine learning infers rules from answers that are analysed as data (in contrast to classic computer programming, which is just the opposite: based on rules, it gives answers).
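The IRT note above, on selecting the most useful and effective quiz items, can be sketched with the one-parameter (Rasch) model: the probability that a learner of ability θ answers an item of difficulty b correctly, and the item information that indicates how useful a quiz item is for that learner. The learner ability and item difficulties below are made up for illustration.

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability that a learner of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p),
    highest when the item difficulty matches the ability."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

theta = 0.5                                          # made-up learner ability
items = {"easy": -1.5, "matched": 0.5, "hard": 2.5}  # made-up difficulties

# The most informative (most useful) quiz item for this learner
# is the one whose difficulty is closest to theta.
best = max(items, key=lambda name: item_information(theta, items[name]))
print(best)  # matched
```

Items far too easy or far too hard tell the system little about the learner, which is why IRT is interesting for selecting quizzes.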
Dangers:
Correlations: correlations are not necessarily correct conclusions (see ‘Spurious Correlations’ for fun examples).
Bias: e.g. decisions for giving credit based on redlining and weblining.
Decisions for recruitment: e.g. Amazon found that the automation of their recruiting system resulted in a bias leading to recruiting more men than women.
Decisions in trials: e.g. COMPAS is used by judges to predict repeat offenders, but skin colour was a clear bias in this program.
The Chinese social credit system gives minus points if you do something that is seen as not being ‘proper’. It is also combined with facial recognition, and with monitoring attention in class (Hangzhou Number 11 High School).
Monitoring (gaggle, …)
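The warning about correlations in the list above can be made concrete: two entirely unrelated series that merely both trend upward over time will show a Pearson correlation close to 1. All the data below is fictional, invented purely to illustrate the point.

```python
# Two fictional series that merely both trend upward over six years:
# cheese consumption (kg per capita) and number of data scientists
# (thousands). Neither causes the other.
cheese = [10.1, 10.8, 11.2, 11.9, 12.5, 13.0]
data_scientists = [1.0, 1.4, 2.1, 2.9, 3.8, 5.0]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(cheese, data_scientists)
print(round(r, 2))  # close to 1.0, yet there is no causal link
```

A learning-analytics dashboard that surfaces such a number without context invites exactly the wrong conclusion, which is the danger the session flagged.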
Challenges
Luca challenge: responsible use of AI.
GDPR Art 22: automated individual decision-making, including profiling.
Sheilaproject.eu: identifying policies to adopt learning analytics. Bit.ly/sheilaMOOC is the course on the project.
Atoms and bits comparison: as with atoms, you can use bits for the better or for the worse (like atomic bombs).


Maren Scheffel on Getting the trust into trusted learning analytics @m_a_s_c
(Welten Institute of Open University, Netherlands)
Learning analytics: the Siemens (2011) definition is still the norm. But nowadays there is a lot about analytics, and only little about learning.

Trust: the belief that something or someone is reliable, true, or able. There are multiple definitions of trust; it is a multidimensional and multidisciplinary construct. Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding. For Luhmann the concept of trust compensates for insufficient capabilities for fully understanding the complexity of the world (Luhmann, 1979, Trust and …)
For these reasons we must be transparent and reliable, and have integrity, to attract the trust of learners. There should not be a black box, but a transparent box with algorithms (transparent indicators, open algorithms, full access to data, knowing who accesses your data).

Policies: see https://sheilaproject.eu   

User involvement and co-creation: see the competen-SEA project at http://competen-sea.eu, capacity-building projects for remote areas or sensitive learner groups. One of the outcomes was to use co-design to create MOOCs (and trust), getting all the stakeholders together in order to come to an end product. MOOCs for people, by people. Twitter: #competenSEA