
Thursday, 3 October 2019

Yes a learning engine: demo is ready, but #AI and #Learning challenges ahead #TBB2019 @InnoEnergyCE

If you have ideas on ensuring continuity in pedagogy when clustering courses (research), on certifying across corporate and university learning (blockchain/bit of trust certification), on opening up industry academies to decrease L&D costs (HR and L&D), ... please think along and respond to the challenges mentioned at the end.

People in high and common places seem to agree that the world is in transition, especially workplace learning, as innovations keep changing what is possible. As I am working on one such innovation (the skill project of InnoEnergy), I am on the one hand very excited about the new opportunities it might open, yet at the same time concerned that the complexity is greater than expected.

First: have a look at the demo screencast here. It shows the overall idea, and ... this might immediately give rise to questions.

Today the Business Booster event (TBB) opens, and with it, the skill project demo is launched. The skill project (we still need a brand name for it) combines AI and learning for the sustainable energy sector. But in essence, once we get the sustainable energy sector mapped with this tool, others can follow.

AI and learning? What does it do: the project identifies industry needs (AI-driven), pinpoints emerging skill gaps in the sustainable energy sector (AI-driven), analyses the existing workforce to locate the most urgent skill gaps (AI-driven) and then refers employees to a personalised learning trajectory addressing their skill gap (part AI, part human support). The goal of this project is to ensure that employees of the sustainable energy sector stay future-proof in a quickly changing working environment. Let's be honest, it sounds cool, but ... the challenges are multiple.

The emergence of a Learning Engine
The skill project helps realise the emergence of a learning engine: an intelligent, career-oriented engine which knows your skills and signposts you to where you want to go with your career by suggesting a personalised learning track.
In the Learning Engine you simply type in “goal: become Director of Innovation in offshore wind energy. Which courses?” and the engine immediately returns a tailored, personalised learning track consisting of a variety of certified business trainings from universities, corporate academies, open educational energy resources and coaching options to send you on your way. This will allow professional learning to surpass the limits of classical, university-based learning.
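The matching step behind such an engine can be sketched in a few lines: compute the gap between the skills a target role requires and the skills the learner already has, then rank courses by how much of that gap they cover. Every role, skill and course name below is invented for illustration and is not taken from the actual project.

```python
# Hypothetical sketch of the Learning Engine's matching step.
# All role, skill and course names are invented for illustration.

ROLE_SKILLS = {
    "Director of Innovation, offshore wind": {
        "offshore wind engineering", "innovation management",
        "energy policy", "stakeholder management",
    },
}

COURSES = [
    {"title": "Offshore Wind Fundamentals", "skills": {"offshore wind engineering"}},
    {"title": "Leading Innovation", "skills": {"innovation management", "stakeholder management"}},
    {"title": "EU Energy Policy Primer", "skills": {"energy policy"}},
]

def learning_track(goal: str, learner_skills: set[str]) -> list[str]:
    """Return course titles covering the learner's skill gap for the goal."""
    gap = ROLE_SKILLS[goal] - learner_skills
    track = []
    # greedily add courses, best initial coverage first
    for course in sorted(COURSES, key=lambda c: len(c["skills"] & gap), reverse=True):
        if course["skills"] & gap:       # course closes part of the remaining gap
            track.append(course["title"])
            gap -= course["skills"]      # remaining gap shrinks
    return track

print(learning_track("Director of Innovation, offshore wind",
                     {"stakeholder management"}))
```

In a real engine the sets would come from AI-driven skill extraction rather than hand-written dictionaries, but the gap-then-cover logic is the core idea.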

Challenges
In order to get our engine to come up with the best, most tailored courses, we need access to industry academies as well as university courses.
Learning-to-learn capacities: once we signpost learners to a cluster of courses, they need to take them (the familiar 'take the horse to water' comes to mind). And even if the learners do take the courses, they still need the learning-to-learn skills to work through them successfully.
Granularity for course clustering: clustering courses to keep on top of your field of expertise is one thing, but then what is the granularity of those courses? Micro-learning is an option, and modular learning will become a clear necessity, as all learners have different existing knowledge, which means they all need different parts in order to upskill what they already know.
Ensuring pedagogical continuity: even the OU finds that a challenge. Great, so let's cluster modules. But then, how can we link these modules together? Do we believe in non-pedagogical support (e.g. Sugata Mitra's hole-in-the-wall already dates back 10 years), or do we need to find a solution that provides pedagogical continuity fitting this new assembly of short modules and courses coming from different sources (both university and industry)?
Certification across the learning ecologies: to blockchain or not to blockchain. Once we start learning across institutes, we need to keep track of what we learn, by keeping tabs on the actual learning: corporate academy learning, university modules, hands-on training, workplace learning... One solution is to embed blockchain in education to keep track of all learning. But this is easier said than done, and open standards and trust are issues to consider (the Bit of Trust initiative offers good reading).
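To make the blockchain idea concrete, here is a minimal, purely illustrative sketch of a hash-chained credential ledger: each record embeds the hash of the previous record, so tampering with any entry breaks the chain. This shows the tamper-evidence principle only; it is not a real blockchain and not the Bit of Trust design.

```python
# Illustrative hash-chained ledger of learning credentials.
import hashlib
import json

def add_credential(chain: list[dict], learner: str, issuer: str, course: str) -> None:
    """Append a credential record linked to the previous one by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"learner": learner, "issuer": issuer,
              "course": course, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def chain_is_valid(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

chain: list[dict] = []
add_credential(chain, "alice", "corporate academy", "Safety Training")
add_credential(chain, "alice", "university", "Wind Turbine Design")
print(chain_is_valid(chain))
```

The hard parts the post mentions (open standards, who runs the ledger, trust between institutions) are exactly what this toy version leaves out.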

Feel free to send questions, comments, share your own projects... let's get together.

Tuesday, 17 September 2019

#Ectel2019 Covadonga Rodrigo from #UNED @cova_rodrigo #gender #AI #bias


From here a couple of cases and projects (slides will follow)

Great presentation by UNED Covadonga Rodrigo: will AI be sexist? @cova_rodrigo (liveblog)
Referring to Amazon's male/female recruitment: the AI had a bias in favour of men. Why?
Because the AI was trained with historical data containing more males, which made the system conclude that male candidates were preferable.
Microsoft (2016) had the same result with their AI system: an automated bot on Twitter that ended up sexist due to what it learned.

So who is programming the AI systems? Up to 90% were men (2015). It is changing gradually, but at the moment women are only 16 to 19% of programmers, which results in differences in terms of bias. By 2023 women will probably make up 27.7% of software developers in the world, still below the 33% threshold that we know from the social sciences is critical for a group to get its voice heard.

Some issues: the glass ceiling, the identity of what engineers are, school atmosphere, the need for more female references in the curricula. And it is not only engineering; other areas face the same.
The AI assistants are also mostly female-voiced: the female secretary, not the female lead.

Ethics: curricula are biased and need ethical subjects; there is a lack of humanistic studies in education, and we need to transform this.

She mentions that she is 50+ and was an engineer from early on, so there were women engineers; the entry of women was not the problem. Still we have male domination, which results in gender biases mirroring differences that exist in society.

Sources of sexism (slides will follow)


#ECTEL2019 Workshop #AI in #Education #liveblogpost #AIED @cova_rodrigo @paco

This is a live blog, so bits and pieces noted.

Paco Iniesto (The Open University, IET, AIED) is the workshop lead, and he is looking good and giving a strong overview.
AI is all around us: cars, games, robotics, AlphaGo (see netflix), predictive policy, dating apps, thispersondoesnotexist.com (3 min video is of interest, how they generate these images), ...

What is AI?
It isn't easy to define AI; many people have an idea, but there is no single agreed definition.
Computer systems designed to interact with the world ... (Luckin, Waynes...)

The promise of AI is not yet realized, although it has been developing for 40 years.
It's big business
AI shines a spotlight on existing educational practices
AI rehashes what we have at this point in time

Implications of AIED: algorithms and computation: what are the algorithms, what are their consequences, how to control them... accuracy and validity of assessments, are we treating students as human beings?

Lumilo augmented reality glasses for teachers (https://hechingerreport.org/these-glasses-give-teachers-superpowers/), video can be found here: https://kenholstein.myportfolio.com/the-lumilo-project This got some negative critiques from teachers and learners.



Ethical questions
Connection between affect and psychological traits of learners, but where can this lead? (cf. Cambridge Analytica.)
What if we gather the data for 'good', and others use it for 'bad' ideas?
What about GDPR? Who owns the data, and how does this affect funding if students opt out of the system and all their data is erased? Can we use blockchain in order to keep the data connected to the learners?
Where is the data stored so that it can be erased, and how does this affect future employment?
Will the system be able to evaluate actual learning, and if so, what benefits will it bring to teaching and learning?
Does the support of learners limit their self-directed learning-to-learn?
Starting from the technology and then moving to support the learning seems the other way round than it should be done.
What is the educational progress when using these technologies?
What is the difference between monitoring and surveillance? (Where is the barrier?)
Can learners hack the system to get more or less support?
Does the teacher have enough time to support learners with difficulties? And does their help actually benefit the learning?
What about consent forms for those who are not able to give consent?
Marginalised people are in need of technological support, but how do we support them in a secure way?

Sources:
Sheila project: https://sheilaproject.eu/
Weapons of Math Destruction (book by Cathy O'Neil)

The post-it notes with ideas from three different groups addressing some of the questions mentioned in the above slide.







Monday, 14 January 2019

EU report on the impact of AI on Learning Teaching and Education #AI #education #EU #policy

The recently published report on the impact of artificial intelligence (AI) on learning, teaching and education gives a great outline of the realities of AI, the state of the art, and the challenges as well as opportunities for those of us with expertise in learning in general, or in learning theory. The report is part of the JRC Science for Policy documents, and it is very well written by Ilkka Tuomi (who is renowned for his expertise in the Internet, data, AI and computer science). Ilkka recorded a brief overview of the report, which can be seen below. In the report-related video, he refers to current machine learning systems as datavores, he defines (rightfully so) the term machine learning as an oxymoron, and he puts current AI in a very accessible parallel, namely Artificial Instinct (as current AI is mainly about behaviourist approaches and patterns).

A very interesting perspective is that Ilkka and the report stress the importance of having someone on board of any AI for learning/teaching/education project who has expertise in learning and learning theory.

The policy challenges mentioned at the end of the report are:

  • A continuous dialogue on the appropriate and responsible uses of AI in education is therefore needed.
  • In the domain of educational policy, it is important for educators and policymakers to understand AI in the broader context of the future of learning. As AI will be used to automate productive processes, we may need to reinvent current educational institutions.
  • In general, the balance may thus shift from the instrumental role of education towards its more developmental role.
  • A general policy challenge, thus, is to increase among educators and policymakers awareness of AI technologies and their potential impact.
  • Learning sciences could have much to offer to research on AI, and such mutual interaction would enable better understanding about how to use AI for learning and in educational settings, as well as in other domains of application.
  • As there may be fundamental theoretical and practical limits in designing AI systems that can explain their behaviour and decisions, it is important to keep humans in the decision-making loop.
  • The ethics of AI is a generic challenge, but it has specific relevance for educational policies.
  • Human agency means that we can make choices about future acts, and thus become responsible for them.  AI can also limit the domain where humans can express their agency.
  • An important policy challenge is how such large datasets that are needed for the development and use of AI-based systems could be made more widely available.


This 47-page report covers the following topics:

1 Introduction
2 What is Artificial Intelligence?
2.1 A three-level model of action for analysing AI and its impact
2.2 Three types of AI
2.2.1 Data-based neural AI
2.2.2 Logic- and knowledge-based AI
2.3 Recent and future developments in AI
2.3.1 Models of learning in data-based AI
2.3.2 Towards the future
2.4 AI impact on skill and competence demand
2.4.1 Skills in economic studies of AI impact
2.4.2 Skill-biased and task-biased models of technology impact
2.4.3 AI capabilities and task substitution in the three-level model
2.4.4 Trends and transitions
2.4.5 Neural AI as data-biased technological change
2.4.6 Education as a creator of capability platforms
2.4.7 Direct AI impact on advanced digital skills demand
3 Impact on learning, teaching, and education
3.1 Current developments
3.1.1 “No AI without UI”
3.2 The impact of AI on learning
3.2.1 Impact on cognitive development
3.3 The impact of AI on teaching
3.3.1 AI-generated student models and new pedagogical opportunities
3.3.2 The need for future-oriented vision regarding AI
3.4 Re-thinking the role of education in society
4 Policy challenges

Below is the 20-minute video in which Ilkka Tuomi explains the report in easy terms.




Friday, 4 January 2019

Call for Papers #CfP #AI #mLearning #MOOC in conferences #UNESCO @FedericaUniNa

January has started and three important calls for papers are coming up, all related to conferences. The three conferences are: eMOOCs2019 (on MOOCs), Mobile Learning Week at UNESCO (focus on AI for development and mobile learning), and eLearning Africa (this year in Côte d'Ivoire), listed per deadline of the CfP.

Mobile learning week UNESCO (Paris, France): focus on AI for sustainable development
Call for proposals deadline: 11 January 2019
UNESCO Global AI Conference: Monday 4 March 2019
Policy Forum and Workshops: Tuesday 5 March 2019
Symposium: Wednesday 6 & Thursday 7 March 2019
Strategy labs & International Women’s Day: Friday 8 March 2019
Exhibits: Monday 4 to Friday 8 March 2019
More information: https://en.unesco.org/mlw/2019
UNESCO, in partnership with its confirmed partners – the International Telecommunication Union and the Profuturo Foundation – will convene a special edition of Mobile Learning Week (MLW) from 4 to 8 March 2019, at the UNESCO Headquarters building in Paris (France). The five-day event, under the theme ‘Artificial Intelligence for Sustainable development’ will start with the ‘Global Conference - Principles for AI: Towards a humanistic approach?’, followed by a one-day Policy Forum and Workshops, a two-day International Symposium and a half-day of Strategy Labs. On 8 March, towards the close of MLW, participants will be invited to join the celebration of International Women’s Day, particularly a debate on Women in AI to be held in UNESCO Headquarters. During the entire week, exhibitions and demonstrations of innovative AI applications for education and more than 20 workshops will be organized by international partners and all programme sectors of UNESCO.
eMOOCs 2019 in Naples, Italy
Deadline CfP: 14 January 2019
Conference date: May 20 – 22, 2019
More information: https://emoocs2019.eu/call-for-papers/overview/
Description
The Higher Education landscape is changing. As the information economy progresses, demand for a more highly, and differently, qualified workforce and citizens increases, and HE Institutions face the challenge of training, reskilling and upskilling people throughout their lives, rather than providing a one-time in-depth education. The corporate and NGO sectors are themselves exploring the benefits of a more qualified online approach to training, and are entering the education market in collaboration with HE Institutions, but also autonomously or via new certifying agencies. Technology is the other significant player in this fast-changing scenario. It allows for new, data-driven ways of measuring learning outcomes, new forms of curriculum definition and compilation, and alternative forms of recruitment strategy via people analytics.

At the MOOC crossroads where the three converge, we ask ourselves whether university degrees are still the major currency in the job market, or whether a broader portfolio of qualifications and micro-credentials may be emerging as an alternative. What implications does this have for educational practice? What policy decisions are required? And as online access eliminates geographical barriers to learning, but the growing MOOC market is increasingly dominated by the big American platforms, what strategic policy do European HE Institutions wish to adopt in terms of branding, language and culture?

The EMOOCs 2019 MOOC stakeholders summit comprises the consolidated format of Research and Experience, Policy and Business tracks, as well as interactive workshops. Original contributions that share knowledge and carry forward the debate around MOOCs are very welcome.

eLearning Africa - Abidjan - Côte d'Ivoire
Deadline CfP: February 22, 2019
Conference date: October 23 - 25, 2019
More information: https://www.elearning-africa.com/programme_cfp.php
Description
The 14th edition of eLearning Africa, the International Conference & Exhibition on ICT for Education, Training & Skills Development, will take place in Abidjan, Côte d'Ivoire from October 23 - 25, 2019 and is co-hosted by the Government of Côte d'Ivoire.

A unique event, Africa’s largest conference and exhibition on technology supported learning, training and skills development, eLearning Africa is a network of leading experts, professionals and investors, committed to the future of education & training in Africa.

Read more about the eLearning Africa 2019 theme, The Keys to the Future: Learnability and Employability, and become involved in shaping the conference agenda by proposing a topic, talk or session here.
Register today to profit from our Early Bird Rate

About eLearning Africa
Founded in 2005, eLearning Africa is the leading pan-African conference and exhibition on ICT for Education, Training & Skills Development. The three day event offers participants the opportunity to develop multinational and cross-industry contacts and partnerships, as well as to enhance their knowledge and skills.
Over 13 consecutive years, eLearning Africa has hosted 17,278 participants from 100+ different countries around the world, with over 80% coming from the African continent. More than 3,530 speakers have addressed the conference about every aspect of technology supported learning, training and skills development.

Saturday, 8 December 2018

#AI #MachineLearning and #philosophy session #OEB18 @oebconference @OldPhilNewLearn


At OEB2018 the last session I led was on the subject of AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on the subject of philosophy and AI.

As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes follow below; afterwards I embed the slides that preceded the session, to indicate where the idea for the workshop came from.

Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy

The session started off with the choices embedded in any AI, e.g. a Tesla car about to run into people: will it run into a grandmother or into two kids? What is the ‘best solution’? Further into the session this question got additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters; and we could build into the car that in case of emergency it needs to decide that the lives of others are more important than the lives of those in the car, and as such simply crash into a wall, avoiding both grandmother and kids.

The developer or creator gives parameters to the AI; with machine learning embedded, the AI will start to learn from there, based on feedback from or directed at the parameters. This is in contrast with classical computer programs, where rules are given and are either successful or not, but are no basis for new rules to be implemented.

From a philosophical point of view, the impact of AI (including its potential bias coming from the developers or the feedback received) could be analysed using Hannah Arendt’s ‘Power of the System’, in her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of the AI system.

The growth of the AI based on human algorithms does not necessarily mean that the AI will think like us. It might choose to derive different conclusions, based on priority algorithms it chooses. As such current paradigms may shift.

Throughout the ages, the focus of humankind changed depending on new developments, new thoughts, new insights into philosophy. But this means that if humans put parameters into AI, those parameters (which are seen as priority parameters) will also change over time. This means that we can see from where AI starts, but not where it is heading.

How many ‘safety stops’ are built into AI?
Can we put some kind of ‘weighing’ into the AI parameters, enabling the AI to fall back on more important or less important parameters when a risk needs to be considered?

Failure, for humans, can result in growth based on those failures. AI also learns from ‘failures’, but it learns from differences in datapoints. At present the AI only receives a message ‘this is wrong’, whereas at that moment in time – if something is wrong – humans make a wide variety of risk considerations. In the bigger picture, one can see an analogy with Darwin’s evolutionary theory, where time finds what works based on evolutionary diversity. But with AI the speed of adaptation increases immensely.

With mechanical AI it was easier to define which parameters were right or wrong. E.g. with Go or chess you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing among those options is a straight path of considerations. At present humans make many more considerations for each conundrum or action that occurs. This means that there is a whole array of considerations that can also involve emotions, preferences.... When looking at philosophy you can see that there is an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes gives good solutions for both: workable solutions which can be debated as valuable outcomes although based on different priorities, and even very different takes on a concept. The choices or arguments made in philosophy (over time) also clearly point to the power of society, technology and the reigning culture at that point in time. For what is good now in one place can be considered wrong in another place, or at another point in time.

It could benefit teachers if they were supported by AI to signal students with problems (but of course this means that ‘care’ is one of the parameters important to a society; in another society those students who have problems might simply be set aside. Either choice is valid, but each builds on a different view: do we care by ‘supporting all’, or by ‘supporting those who can, so we can move forward quicker’?). It is only human emotion that makes a difference in which choice might be the ‘better’ one.

AI works in the virtual world. Always. Humans make a difference between the real and the virtual world, but for the AI all is real (though virtual to us).
Asimov’s laws of robotics still apply.

Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.

Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes it from there by learning itself. The machine learns, so does the machine become liable if something goes wrong, but ....? (e.g. the Tesla crash).

Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or health care worker provides it with additional care and attention to the patient).
Similarly, smart object dialogue took off once a feeling of attention was built into it: e.g. replies from Google Home or Alexa in the realm of “Thank you” when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also refers to the difference between feeling ‘related to’ something and being ‘attached to’ something.

Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.

AI is still defined between good and bad (ethics), and ‘improvement’ which is linked to the definition of what is ‘best’ at that time.

Societal decisions: what do we develop first with AI, a response to the refugee crisis or self-driving cars? This affects the parameters at the start. Compare it to some idiot savants, where high intelligence does not necessarily imply active consciousness.

Currently some humans are already bound by AI: e.g. astronauts, where the system calculates everything.

And to conclude: this session ranged from the believers in AI (“I cannot wait for AI to organise our society”) to those who think it is time for the next step in evolution, in the words of Jane Bozarth: “Humans had their chance”.




Thursday, 6 December 2018

Data driven #education session #OEB18 @oebconference #data @m_a_s_c

From the session on data driven education, with great EU links and projects.

Carlos Delgado Kloos: using analytics in education
Opportunities
The Khan Academy system is a proven one, with one of the best visualisations of how students are advancing, with a lot of stats and graphs. Carlos used this approach for their 'course 0' offerings (courses on basic knowledge that students must know before moving on in higher education).
Based on the Khan stats, they built a high level analytics system.
Predictions in MOOCs (see paper of Kloos), focusing on drop-out.
Monitoring in SPOCs (small private online courses)
Measurement of Real Workload of the students, the tool adapts the workload to the reality.
FlipApp (to gamify the flipped classroom): reminds and notifies students that they need to see the videos before class, or they will not be able to follow. (Inge: sent to Barbara.)
Creation of educational material using Google Classroom. Google Classroom sometimes knows what the answer to a quiz will be, which can save time for the teacher.
Learning analytics to improve teacher content delivery.
Use of IRT (Item Response Theory) to see which quizzes are more useful and effective; interesting for selecting quizzes.
Coursera defines skills, matches them to jobs and, based on that, recommends courses.
Industry 4.0 (big data, AI…) for industry can be transferred to Education 4.0 (learning analytics based on machine learning). (Education 3.0 is using the cloud, where both learners and teachers go.)
Machine learning infers the rules from data-analysed answers (in contrast to classical programming, which is just the opposite: based on rules, giving answers).
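The IRT idea Carlos mentions can be illustrated with the two-parameter logistic (2PL) model, a standard IRT variant (the choice of 2PL and the item parameters below are my own, not from the talk): each quiz item has a discrimination a and a difficulty b, and the most useful item for a learner is the one carrying the most Fisher information at the learner's current ability estimate theta.

```python
# Sketch of 2PL item selection; item parameters are invented for illustration.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# (discrimination a, difficulty b) per quiz item
items = {"q1": (1.2, -1.0), "q2": (0.8, 0.0), "q3": (1.5, 0.2)}

theta = 0.0  # current ability estimate
best = max(items, key=lambda q: item_information(theta, *items[q]))
print(best)
```

Information peaks where the item's difficulty is near the learner's ability, which is why an adaptive quiz keeps serving items close to the learner's level.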
Dangers:
Correlations: correlations are not necessarily correct conclusions (see spurious correlations for fun links).
Bias: e.g. decisions for giving credit based on redlining and weblining.
Decisions for recruitment: e.g. Amazon found that the automation of their recruiting system resulted in a bias leading to recruiting more men than women.
Decisions in trials: e.g. COMPAS is used by judges to estimate the risk of reoffending, but skin colour was a clear bias in this program.
The Chinese social credit system, which deducts points if you do something seen as not being ‘proper’. It is also combined with facial recognition and with monitoring attention in class (Hangzhou Number 11 High School).
Monitoring (gaggle, …)
Challenges
Luca challenge: responsible use of AI.
GDPR Art 22: automated individual decision-making, including profiling.
Sheilaproject.eu : identifying policies to adopt learning analytics. Bit.ly/sheilaMOOC is the course on the project.
Atoms and bits comparison: as with atoms, you can use them for the better or for the worse (like atomic bombs).


Maren Scheffel on Getting the trust into trusted learning analytics @m_a_s_c
(Welten Institute of Open University, Netherlands)
Learning analytics: the Siemens (2011) definition is still the norm. But nowadays there is a lot about analytics and only a little about learning.

Trust: currently we believe that something is reliable, the truth, or has ability. There are multiple definitions of trust; it is a multidimensional and multidisciplinary construct. Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding. For Luhmann the concept of trust compensates for insufficient capabilities for fully understanding the complexity of the world (Luhmann, 1979, trust and …)
For these reasons we must be transparent, reliable, and act with integrity to attract the trust of learners. There should not be a black box; it should be a transparent box with algorithms (transparent indicators, open algorithms, full access to data, knowing who accesses your data).

Policies: see https://sheilaproject.eu   

User involvement and co-creation: see the competen-SEA project (http://competen-sea.eu), capacity building projects for remote areas or sensitive learner groups. One of the outcomes was to use co-design to create MOOCs (and trust), getting all the stakeholders together in order to come to an end product. MOOCs for people, by people. Twitter: #competenSEA

Session on #AI, #machineLearning and #learninganalytics #AIED #OEB18

This was a wonderful AI session, with knowledgeable speakers, which is always a pleasure. Some of the speakers showed their AI solutions, and described their process; others focused on the opportunities and challenges. Some great links as well.

Squirrel AI, the machine that regularly outperforms human teachers and redefines education by Wei Zhou
Squirrel AI is an AI responding to the shortage of teachers in China. It is based on knowledge diagnosis, looking for educational gaps, a bit like an intake at the beginning of a master's programme for adults.
Human versus machine competitions for scoring education, and tailored learning content offerings (collaborates with Stanford University). Also recognised by UNESCO. (Sidenote: it is clearly oriented at measurable, class- and curriculum-related content testing.)

 The ideas behind AI: adaptive learning is a booming market.
Knowledge graph + knowledge space theory: monitoring students' real-time learning progress to evaluate knowledge mastery and predict future learning skills. The approach combines a Bayesian network with Bayesian inference, Bayesian knowledge tracing, and Item Response Theory (IRT) to precisely determine a student's current knowledge state and needs. The system identifies the student's knowledge based on their intake or tests. Based on big-data analysis, students get a tailored learning path (personalised content recommendation using fuzzy logic and classification trees, personalised on the basis of logistic regression, graph theory, and a genetic algorithm).
Nanoscale Knowledge Points: the granularity is six times deeper. Used in the medical field.
Some experiments and results: the fourth human-versus-AI competition, in which AI turned out to be quicker and more adept at scoring students' tests. Artificial Intelligence in Education conference (AIED18 conference link, look up the video on youtube.com; call-for-papers deadline 8 February 2019 for AIED19 here).
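Bayesian knowledge tracing, one of the techniques named above, can be illustrated with a minimal sketch. The parameter values below (slip, guess, and learn probabilities) are illustrative defaults of my own, not Squirrel AI's actual ones:

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the probability that a student has mastered a skill
    after observing one answer (Bayesian knowledge tracing step)."""
    if correct:
        # P(known | correct answer) via Bayes' rule:
        # a known student answers correctly unless they slip,
        # an unknown student can still guess correctly
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        # P(known | wrong answer): a known student slipped,
        # or an unknown student failed to guess
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_slip))
    # Account for the chance the student learned the skill this step
    return posterior + (1 - posterior) * p_learn

# The mastery estimate rises and falls with the stream of answers
p = 0.3
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

A system like the one described would track one such estimate per knowledge point and recommend content for the skills with the lowest mastery probability.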

Claus Biermann on Approaches to the Use of AI in Learning
Artificial Intelligence and Learning: myths, limits and the real opportunities.  
Area9 Lyceum: a long-standing adaptive learning company, now with new investments.
Referring to Bloom's 2 sigma problem.
Deep, personalized learning, biologically enabled data modeling, four-dimensional teaching approach.
How we differ: adaptive learning adapts to the individual, only shows content when it is necessary, takes into consideration what the student already knows, and follows up on what the student is having trouble with. This reduces learning time and increases motivation; the impact of adaptive learning is an almost 50% reduction in learning time.
Supports conscious competence concept.
AI is 60% of the platform, but the most important part is the human being: the learning engineers, the team of humans working together, make it possible.

Marie-Lou Papasian from Armenia (Yerevan).
TUMO is a learning platform where students direct their own development. It is an after-school programme, two hours twice a week, and thousands of students come to the TUMO centres in Armenia, Paris, and Beirut.
14 learning targets ranging from animation, to writing, to robotics, game development…
Main education is based on self learning, workshops and learning labs.
Coaches support the students and they are in all the workshops and learning labs.
Personalisation: each student chooses their learning plan, their topics, their speed. That happens through the 'Tumo path', an interface that enables a personalised learning path (cf. LMS learning paths, but personalised in terms of the students' speed and choices). After the self-paced parts, students can go to a workshop to reach their maximum potential, to learn and to know they can explore and learn. These are advanced students (12-18 years old; participation is free of charge).
Harnessing the power of AI: the AI solves a lot of problems and provides the freedom to personalise the students' learning experience. A virtual assistant will be written to help the coaches guide the students through the system.
AI guide dog: a mascot to help the students.
The coaches, assistants… are there to teach the students to take up more responsibility.
For those learners who are not that quick, a dynamic content feature is planned to support their learning.

Wayne Holmes from the OU, UK, and the Center for Curriculum Redesign, US.
A report was commissioned about personalized learning and digital ... (free German version here; an English version might follow, will ask Wayne Holmes).
Looking at the ways AI can impact education

A taxonomy of AI in education
Intelligent Tutoring System (as examples mentioned earlier in the panel talk)
Dialogue-based tutoring system (Pearson and Watson tutor example)
Exploratory Learning Environments (the biggest difference with the above, is that this is more based on diversification of solving a specific problem by the student)
Automatic writing evaluation (tools that will mark assignments for the teachers, also tools that will automatically give feedback to the students to improve their assignments).
Learning network orchestrators (tools that put people in contact with people, e.g. Smart Learning Partner, Third Space Learning; the system allows the student to connect with an expert).
Language learning (the system can identify languages and support conversation)
ITS+ (e.g. ALP, AltSchool, Lumilo: the teacher wears Google glasses, and the student's activity comes up as a bubble visualizing what the student is doing).

So there is a lot of stuff already out there.
We assume that personalized learning will be wonderful, but what about participative or collaborative learning?

Things in development
Collaborative learning (what one person is talking about might be of interest to what another person is talking about).
Student forum monitoring
Continuous assessment (supported by AI)
AI learning companions (e.g. mobile phones supporting the learning, makes connections)
AI teaching assistants (data of students sent to teachers)
AI as a research tool to further the learning sciences

The ethics of AIED
A lot of work has been done around ethics in data. But there are also the algorithms that tweak the data outcomes: how do we prevent biases, guard against mistakes, and protect against unintended consequences…
But what about education: self-fulfilling teacher wishes…
So how do we merge algorithms and big data and education?

With great power comes great responsibility (Spider-Man, 1962, or the French Revolution's National Convention, 1793).
ATS tool built by Facebook, but the students went on strike (look this up).

Gunay Kazimzade on the future of learning, biases, myths, etcetera (Azerbaijan / Germany).
Digitalization and its ethical impact on society.
Six interdisciplines overlap.
Criticality of AI-biased systems.
(look up papers, starting to get tired, although the presentation is really interesting)
What is the impact of AI on our children? That is her main research consideration. How does the interaction between children and smart agents work? And what do we have to do to avoid biases while children are using AI agents?
At present AI biases infiltrate the world as we know it, but can we transform this towards fewer biases?

Keynote talk of Anita Schjoll Brede @twitnitnit @oebconference #AI #machineLearning #oeb18


(The liveblog starts after a general paragraph on the two keynotes that preceded her talk, and her talk was really GREAT! And with a fresh, relevant structure.)

First a talk on the skill sets of future workers (the new skills needed, referring to critical thinking, but without mentioning what is understood by critical thinking) and collective intelligence (but clearly linking it to big data, not small data, as well described in an article by Stella Lee).

Self-worth idea for the philosophy session: refer to the Google Maps approach, where small companies offering one particular aspect of what it took to build Google Maps were bought by Google, producing something bigger than the sum of its parts. But this of course means that identity, and the self versus the other, come under pressure, as people who really make a difference at some point do not have the satisfying moment of thinking they are on top of the world (you can no longer show off your quality easily, for there are so many others just like you, as you can see when you read the news or follow people online). Feeling important was easier, or at least possible, in a 'smaller' world, where the local tech person was revered for her or his knowledge. So, in some way, we are losing the feeling of being special based on what we do. Additionally, if AI enters more of the working world, how do we ensure that there will be work for everyone, as work is also a way to 'feel' self-worth? I think keeping self-worth will be an increasing challenge in the connected, AI-supported world. As a self-test, simply think of yourself wanting to be invited onto a stage: a simple yet possibly mentally alarming exercise. Our society promotes 'being the best' at something, or having the most 'likes'; what can we do to instill or keep self-worth?
Then a speaker on the promise of online education, referring to MOOCs versus formal education and the increase in young people going to college, which strangely contradicts what most profiles of future jobs seem to look like (professions that are rather labour-intensive). The speaker, Kaplan, managed to knock down people who get into good jobs based on non-traditional schooling (obviously, my eyebrows went up, and I am sure more of us in the audience were pondering which conservative-thinking label could be put on that type of scolding, stereotyped, norm-protecting speech; he is clearly not even a non-conformist).

Here a person in line with my interests takes the stage: Anita Schjoll Brede. Anita founded the AI company Iris.ai and tries to simplify AI, machine learning, and data science for easier implementation. So… of interest.

Learning how to learn sets us human beings apart. We are in the era where machines will learn, based on how we learn… inevitably changing what we need to learn.
She describes how AI is seen by most people, and where that model is not really correct.
Machine learning is modelled on the workings of a human brain: over time the machine adapts based on data and learns new skills. It is a great model for seeing the difference, with one caveat: we are still not sure how the human mind really works.
If we think of AI, we think of software, hardware, data… but our brains are slightly different, and our human brains are also flawed. We want to build machines that are complementary to the human brain.

Iris.ai started from the idea that papers and new research are published every day; humans can no longer read it all. Iris.ai goes through the science, and the literature process is relatively automated, currently with a time reduction of 80%. The next step is hypothesis extraction, then building a truth tree of the document based on scientific arguments. Once the truth trees are done, link them to a lab or a specific topic… with the option of machine-learning results leading to different types of research. Human beings will still do the deeper understanding.
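To make the idea of automated literature matching concrete: at its simplest, relating a paper to a research question is a text-similarity problem. The toy bag-of-words cosine similarity below is my own minimal sketch of that general principle, not Iris.ai's actual (proprietary) pipeline; the example texts are made up:

```python
import math
from collections import Counter


def cosine_similarity(text_a, text_b):
    """Cosine similarity between two texts using raw word counts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    # Dot product over the words of text_a (Counter returns 0 for misses)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


abstract = "neural networks for malaria diagnosis in blood samples"
query = "machine learning diagnosis of malaria"
score = cosine_similarity(abstract, query)  # higher means more related
```

A real system would use far richer representations (embeddings, citation graphs, the "truth trees" mentioned above), but the ranking principle, scoring each document against a research question and surfacing the closest matches, is the same.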

Another example is one tutor per child. Imagine that there is one tutor for each child, one which grows with that child and helps with lifelong learning. The system will know you so well that it will know how to motivate you or move you forward. It might also have filters to identify discriminatory feelings or actions (my own remark: but I do wonder, if this is the case, isn't this limiting the freedom to say what you want and be the person you want to be? It might risk becoming extreme in either direction of the doctrine system).
Refers to the Watson lawyer AI, which means that junior lawyers will no longer do all the groundwork. So new employees will have to learn other things and be integrated differently. But this raises critical questions, of course, as you must choose between employing people (making yourself less competitive) or only hiring senior lawyers (my own remark: but then you lose diversity and workforce).
Refers to machine-learning 'doctors' used in sub-Saharan settings to analyse human blood for malaria. This saves time for the doctors and healthcare workers, but evidently it has an impact on healthcare-worker jobs.
Cognitive bias codex (brain picture with lots of links). Lady in the red dress experiment.

Her take on what we need to learn:
Critical thinking: refers to the source criticism she learned during her schooling.
Who builds the AI? Let's say Google achieves the first general AI… their business model will still get us to buy more soap.
Complex problem solving: we need to be able to hold this uncertainty and have that understanding, to understand why machines were led to specific choices.
Creativity: machines can be creative; we can learn this. Rehashing what has been done and making it into something of your own (refers to a lawyer commercial that was built by AI based on hours of legal commercials).
Empathy: is at the core of human capabilities like these. Machines are currently doing things, but are not yet empathic. But empathy is also important for building machines that can result in positive evolutions for humans, machines that will be able to love the world, including humans.


Thursday, 18 October 2018

Call for papers #CfP from #BJET & call for co-authoring book on #Philosophy #AI #humanmachine #interdisciplinary

The call for papers below is for authors researching 'human learning and learning analytics in the age of artificial intelligence' and is an action to celebrate BJET's 50th anniversary. But first ... the call for co-authors to realize a new Rebus book in the Introduction to Philosophy series.


Seeking Authors & Editors for Introduction to Philosophy Series

The Rebus Community initiative Introduction to Philosophy series has grown tremendously, and a few books are nearing the final stages! Led by Christina Hendricks (University of British Columbia), the series includes eight volumes in total, ranging across themes. We are currently seeking faculty interested in contributing to the series by authoring chapters in the following books:
Epistemology
Aesthetics
Metaphysics
Social and Political Philosophy
Philosophy of Religion

See the full list of open and completed chapters.

Authors should have a PhD in philosophy and teaching experience at the first-year level. PhD students and candidates may also be considered as authors, or can contribute to the book in other ways. If you are interested, please let us know in Rebus Projects. Include your CV, a brief summary of your experience teaching an intro to philosophy course, and the chapters you would like to write.

We’re also looking for a co-editor for the Aesthetics book, and an editor for Philosophy of Science. If you’re interested in taking on one of these roles, read the full job posting and then comment in the activity on Rebus Projects, including some details about your experience and the area in which you are interested.

The editorial team encourages contributions from members of under-represented groups within the philosophy community. Decisions will be made by the team on a rolling basis.
Reading source: Mary Midgley, "Philosophical Plumbing"

CfP for papers on the subject of Human learning and learning analytics in the age of artificial intelligence, a 50th anniversary edition of BJET

For its 50th anniversary, the British Journal of Educational Technology (BJET) invites you to contribute your most current research to BJET as a way to celebrate. Title of the special section: Human learning and learning analytics in the age of artificial intelligence (Critical perspectives on learning analytics and artificial intelligence in education)

Deadline for manuscript submissions: February 10th, 2019
Publication: Online as soon as copy editing is complete.
Acceptance Deadline: 10th August 2019
Issue Publication: November 2019.
Guest editor: Andreja Istenič Starčič, Professor University of Primorska & University of Ljubljana; Visiting scholar University of North Texas. For all information, please contact: andreja.starcic@gmail.com

This special section focuses on human learning and learning analytics in the age of artificial intelligence across disciplines.

In May 2018, they organized a working symposium entitled “The Human-Technology Frontier: Understanding the Human Intelligence 0.2 with Artificial Intelligence 2.0.” The symposium was sponsored by the Association for Educational Communications and Technology (AECT). Distinguished scholars, including learning scientists, psychologists, neuroscientists, computer scientists, and educators, addressed some urgent questions and issues on the learner as a whole person, with healthy development of brain, habit, behaviour, and learning in the fast-advancing technological world. The symposium inspired these special issue topics (which are not limited to the following):

1. Learning and human intelligence: Based on what we know of the brain and what we are likely to understand in the near future, how should learning be defined/redefined?
2. Learning and innovation skills, the 4Cs (creativity, critical thinking, communication, and collaboration): How could learning technologies support the transformative nature of learning involving all domains of learning: cognitive, psychomotor, affective-social? How could advanced feedback and scaffolds support the transition from “combinational” to “exploratory” and “transformational” creativity and thinking, and what are the potential consequences for communication and collaboration?
3. Towards a holistic account of a person – brain, body, habits, and environment: What would a learning and research design that embraces a whole person perspective look like?
4. Human intelligence with innovations and advances in technologies: What technologies are most likely to have a positive impact on learning in the short and long term?
5. Properties and units of measures of learning: What are the constructs of learning and beliefs about learners and learners’ needs given the multilevel technologies, collaborative networks, interaction and interface modalities, methodologies and analysis techniques we have to work with?
6. Learning perspectives: Do we face transitions in theories of learning?

In the past 50 years, BJET has been at the forefront, offering a platform and fostering discussions in the above areas. At the 50th anniversary of BJET, we invite interdisciplinary scholars to contribute their most current research to BJET as a way to celebrate BJET's anniversary.

Please send me the working title of your paper with a short abstract (if you have co-authors, please also provide the names of all authors) to my e-mail andreja.starcic@gmail.com by November 30th, 2018.

For further information, please contact professor Andreja Istenič Starčič at andreja.starcic@gmail.com