Showing posts with label liveBlog. Show all posts

Thursday, 19 September 2019

LiveBlog #Ectel2019 Rose Luckin @Knowldgillusion Keynote #AI & #education mindset

 Rose Luckin takes the stage with a headset and immediately gets into her talk. The talk was very informative, and Rose is clearly knowledgeable about a wide range of topics, so I got a bit curious (and envious) about how her mind works. [If I heard correctly (I am not sure, will ask her), she only got into academic life later on.]

Key topic: develop the right AI mindset for businesses

A perfect storm: masses of data, plus enhancements in computing power and memory, plus sophisticated algorithms... this made AI part of our lives and of education.

3 routes to Impact on Education

  • using AI ED to tackle some of the big educational challenges
  • educating people about AI so that they can use it safely and effectively
  • changing education so that we focus on human intelligence and prepare people for an AI world (hardest to do at the moment)

Working with select committee processes to try to take new developments forward. Debating the 4th industrial revolution and what it means for people to understand AI (it is not coding; it is about humans understanding the fundamentals of machine algorithms and being aware of them: a much higher order of engagement we need to bring people to).

Need for multidisciplinary teams with equal input
As change happens, we need to change our educational systems (Singapore). Be resilient to change, be adaptive.
The above are not separate routes; they interconnect, and these interconnections increase the impact of AI, so we need to change and invest in our society using the emerging ideas and realities of these three buckets.
We need to build bridges between communities: all stakeholders (parents, communities, government, coders...).
Currently separated communities need to work together to build a credible, societally based AI solution.

Companies working with UCL EDUCATE
Not all companies are already using AI, but they want to understand more about it.
EDUCATE started in Europe, but is turning into a global program from Jan 2020.
250 educational start-ups (each start-up has to have a link with London and some profile there, so most are UK-originated).
UCL provides training (labs, clinics, blended rooms, mentoring sessions)
It is free for the companies. (Years were spent figuring out the gaps between education departments and industry; such bridges existed between the hard sciences and industry, but not for education.) A lot of the reason was that companies did not know whom to talk to or where to start, hence the decision to begin with start-ups: embedding the educational mindset and understanding more about outcomes and validation of educational projects, i.e. what it means when we say 'it works'. These complexities result in the golden triangle: edtech developers, teachers & learners, academic researchers.

Start-ups are pushed to build a logic model, with the change being the learning they want to take place: what opportunities they have to analyze the data, and how they should demonstrate impact. We hope they will get to the last stage (see picture).
EdWards were set in place ('evidence aware' and 'evidence applied' awards).
120 companies became evidence aware, and 25 became evidence applied (the latter being much harder to achieve).

EDUCATE for schools
objective: build capacity in schools to identify and evaluate edtech that meets the needs of their teaching, learning or environment.
This approach can work in different educational programs.
Sit down with the head teacher to pick two or three educational challenges (what they find tricky); teachers are then chosen to test the edtech and find out how it works.
Currently this is under development:

  • all resources included in option 1; schools identify new or existing edtech to pilot
  • EDUCATE provides new resources to help schools plan their edtech pilot
  • EDUCATE provides video and document resources to walk schools through the pilot process
  • schools step through the piloting process and receive one hour of 1:1 video mentoring support
  • evaluate it (not sure I captured this last step correctly)

Sources
Century AI:
AI and big data powers personalised learning
Quipper: video insight, smart study planner, knowledge base
EvidenceB KidsCode: paths through materials, optimised paths through the material

classic recommender systems (finding the right resources for the educator/student)
Bibblio
teachpitch
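Side note from me: a classic content-based recommender can be sketched in a few lines. This toy example ranks resources by word overlap with a learner's query; all titles and texts below are invented, and real systems like Bibblio work on far richer signals than this.

```python
# Minimal sketch of a content-based recommender: rank resources by
# token overlap (Jaccard similarity) with the learner's query.
# All resources and queries here are hypothetical examples.

def tokenize(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Overlap between two token sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(query, resources, top_n=2):
    """Return the top_n resource titles that overlap the query."""
    q = tokenize(query)
    scored = [(jaccard(q, tokenize(text)), title) for title, text in resources]
    return [title for score, title in sorted(scored, reverse=True)[:top_n] if score > 0]

resources = [
    ("Intro to fractions", "fractions numerator denominator practice"),
    ("Photosynthesis basics", "plants light energy chlorophyll"),
    ("Fraction word problems", "fractions word problems practice worksheets"),
]
print(recommend("practice fractions", resources))
# → ['Intro to fractions', 'Fraction word problems']
```

Real recommenders add collaborative filtering, embeddings and usage signals on top, but the matching idea is the same.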

Chatterbox: refugee as expert native speaker with matching backgrounds (e.g. engineering background)
OyaLabs: a cloud-based monitor in the baby's room that tracks interactions with the baby and its cognitive development for language development.
MyCognition: algorithms automatically increase the number of training loops for the domains where you have the greatest need. If attention is your greatest need, you will receive more attention loops, building resilience in attention. As you progress, the loops become more challenging. It looks at your attention, actions... assessment and report, which powers AquaSnap and takes you to an underwater world (sea routes, fish names...) adapted to your own cognitive status.

Building an AI mindset
Important for any company that wants to get into AI
What does it mean to have the right data?
It is not just the tech team that must understand the data and AI;
as an individual it would be good to understand more about AI.

Working with OSTC / ZISHI: an example of AI mindset collaboration. What they do: training for trading floors. They have to train everyone. They try to attract diversity in the workforce and recruit from less obvious universities. ZISHI tries to use AI, AI for the financial sector.
The financial sector has used AI for some time. AI is used to assist in recruiting the best traders, assist in training them, help them improve their performance, and mentor them throughout their careers.

Understanding OSTC's performance metrics

  • how can training behavior be measured?
  • can we profile traders by their trading behavior?
  • how do these profiles relate to performance?
  • can we then create a tool to help recruitment, a tool to help traders, and a tool to help managers?

The CEO of OSTC started out on the post floor at Lloyds and moved up. Once he saw the lack of training, he got into training and set up OSTC. Fundamentally, what they try to do is create an AI mindset.

Much of what traders do is not easy or obvious:

  • what others tell me that I do
  • what I think I do
  • what I really do
  • what family thinks you do...

Workflow
Nearly half their traders left within their first year. So something was wrong, and the investment was too costly for the long-term results.
Modeling using machine learning techniques to profile traders and make predictions (recruitment data from tests, interviews and videos, trading history data from trading platforms, multimodal data from eye movements and button clicks, and behavioral data).
Masses of data from the tools used in the company.

Profiling 4 types of traders, using four identified characteristics:
data visualizations, using clustering techniques.
It turns out that the behavioral patterns relate to significantly different performance (risk management, bonuses...) and different cognitive abilities & traits (openness to experience, agreeableness...). [Here my mind wandered off... must be something related to trader vocabulary?]
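The clustering step described here can be sketched with a tiny k-means implementation. Everything below is invented for illustration (two made-up behavioral features, eight made-up traders); the real work used far richer recruitment, trading and multimodal data.

```python
# Toy sketch of profiling traders by clustering behavioral features
# with k-means. Features and values are hypothetical.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # move each center to the mean of its cluster (skip empty clusters)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centers, clusters

# (trades_per_day, avg_risk_score) for eight hypothetical traders:
# four cautious low-frequency traders, four aggressive high-frequency ones
traders = [(5, 0.9), (6, 0.8), (40, 0.2), (42, 0.3),
           (7, 0.85), (38, 0.25), (41, 0.22), (4, 0.95)]
centers, clusters = kmeans(traders, k=2)
print([len(c) for c in clusters])
```

With well-separated behaviors like these, the two recovered clusters match the two trader types; relating such clusters to performance and traits is then a separate analysis step.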

Challenges to the AI mindset

  • collaboration: is everybody on board?
  • getting rid of AI's sci-fi fantasies and fears
  • digging in rich soil will bring out stuff. Are we ready to act upon it?
  • the appetite comes with the first byte - be ethically prepared to diet
  • data is hard to collect, standardize, clean, #you-name-it

Opportunities for the AI mindset

  • map the organisation's data-information-knowledge-wisdom pyramid (and who is where)
  • identify data sources: what is ready to be picked, what still needs to be ripened or sown
  • what can we learn from previous (successful or failed) experiments or pilots? what hypotheses do they already have? what are their blind spots?
  • metrics - how do we know what success looks like?

OSTC - lessons

  • team members across different tiers need to embrace change
  • collect as much data as possible
  • the tech team in a company is not the same as the data team
  • need new expertise to digitize documents and learning content
  • develop coherent and consistent procedures in all offices across the globe despite cultural bias
  • track the daily activities through logs and multimodal data
  • develop tools

Developing an AI mindset

  • AI is set to transform education
  • three core types of interconnected work: using AI, understanding AI, changing education because of AI
  • multi-stakeholder collaboration can help achieve these three types of work
  • EDUCATE is an example of a multi-stakeholder collaboration to help develop a research mindset in Edtech developers and educators
  • for AI companies, or companies that want to use their data and AI, we also need to develop an AI mindset (or perhaps initially a data mindset)
  • Academic research partners need to be put in this mix

Barclays placed somebody (an 'eagle') in branches to help people use technology (from simple to complex), getting people engaged in using and thinking about technology and how they can get involved.

Tuesday, 17 September 2019

#ECTEL2019 Workshop #AI in #Education #liveblogpost #AIED @cova_rodrigo @paco

This is a live blog, so bits and pieces noted.

Paco Iniesto (The Open University, IET, AIED) is the workshop lead, and he is looking good and giving a strong overview.
AI is all around us: cars, games, robotics, AlphaGo (see Netflix), predictive policing, dating apps, thispersondoesnotexist.com (the 3-minute video on how they generate these images is of interest), ...

What is AI?
It isn't easy to define AI: many people have an idea, but there is no single definition.
Computer systems designed to interact with the world ... (Luckin, Waynes...)

The promise of AI is not yet realized, although it has been developing for 40 years.
It's big business.
AI shines a spotlight on existing educational practices.
AI rehashes what we have at this point in time.

Implications of AIED: algorithms and computation: what are the algorithms, what are their consequences, how to control them... accuracy and validity of assessments, are we treating students as human beings?

Lumilo augmented reality glasses for teachers (https://hechingerreport.org/these-glasses-give-teachers-superpowers/), video can be found here: https://kenholstein.myportfolio.com/the-lumilo-project This got some negative critiques from teachers and learners.



Ethical questions
There is a connection between affect and the psychological traits of learners, but where can this lead? (cf. Cambridge Analytica)
What if we have the data for 'good' purposes, but others use it for 'bad' ones?
What about GDPR: who owns the data, how does this affect funding, and what if students opt out of the system and all their data is erased; can we use blockchain to keep the data connected to the learners?
Where is the data, so that it can be erased, and how does this affect future employment?
Will the system be able to evaluate actual learning, if this is the case, what benefits will it bring to teaching and learning?
Does the support of learners limit their self-directed learning-to-learn?
Starting from the technology and moving to supporting the learning seems the opposite of how it should be done.
What is the educational progress using these technologies?
What is the difference between monitoring and surveillance? (where is the barrier)
Can learners hack the system to get more or less support?
Does the teacher have enough time to support learners with difficulties? And does their help actually benefit the learning?
What about consent forms for those who are not able to give consent?
Marginalized people are in need of technological support, but how do we support them in a secure way?

Sources:
Sheila project: https://sheilaproject.eu/
Weapons of Math Destruction (book)

The post-it notes with ideas from three different groups addressing some of the questions mentioned in the above slide.







Monday, 22 May 2017

When learning analytics meet #MOOCs by @mebner #learninganalytics #liveblog

Today I had the pleasure of meeting up with Martin Ebner from the TUGraz, who gave a detailed and critical overview to show how Learning Analytics can support the future MOOC-learner as well as the future MOOC-lecturer.

Martin starts off talking about iMOOX (with an explicit open license (Creative Commons), so you can use the content for free). The EdX license for open courses was a conscious choice.
'Making': creative digital design with children (http://imoox.at). Another famous one: 'Gratis online lernen' (free online learning), which won them the Austrian National Award for Adult Education in 2015. So, to reach all adults, a blended approach was used to get the learners from what they knew (paper learning) to MOOC learning.

How did they implement learning analytics?
iMOOX learning analytics prototype architecture: learners, MOOC platform, learning analytics and prototyping, with results returned to the learners.
The web server produced log files, the data was collected, and the learning analytics server visualised the results. The LA server was developed at TUGraz (Khalil, yeah!!), with screenshots.
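The pipeline described here (raw server logs in, per-learner activity out, ready for visualisation) can be sketched in miniature. The log format and event names below are invented for illustration, not the actual iMOOX format.

```python
# Minimal sketch of a learning analytics log pipeline: parse raw
# web-server log lines into per-learner activity counts.
from collections import Counter

log_lines = [
    "2015-03-09 10:01 learner42 video_play unit1",
    "2015-03-09 10:05 learner42 quiz_attempt unit1",
    "2015-03-09 10:06 learner7 video_play unit1",
    "2015-03-09 10:30 learner42 video_play unit2",
]

def activity_counts(lines):
    """Count events per (learner, event_type) from raw log lines."""
    counts = Counter()
    for line in lines:
        _date, _time, learner, event, _unit = line.split()
        counts[(learner, event)] += 1
    return counts

counts = activity_counts(log_lines)
print(counts[("learner42", "video_play")])  # → 2
```

A visualisation layer would then turn such counts into the activity profiles and decline curves mentioned below.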

Some MOOC learning statements: what can we learn from these learning analytics?
The high dropout rate of MOOCs is a legend: statistics show that it is a myth (referring to Lackner, Ebner & Khalil (2015), 'MOOCs as granular systems: design patterns to foster participant activity', eLearning Papers, 42, pp. 28-37).

Activity profile is shown: posting, reading, text files, … with different colors for certificate earners.
The decline of participation over the weeks. Very similar to other MOOCs. After the first 4 weeks learner activity is quite stable.
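The stabilisation after the first weeks is easy to see once activity is expressed as a retention curve. A minimal sketch (the weekly counts are invented for illustration, not iMOOX data):

```python
# Sketch of the weekly-participation decline: express each week's
# active learners as a percentage of week 1. Counts are invented.
weekly_active = [1000, 520, 310, 240, 225, 220, 218]

# integer percentage of week-1 learners still active each week
retention_pct = [n * 100 // weekly_active[0] for n in weekly_active]
print(retention_pct)  # → [100, 52, 31, 24, 22, 22, 21]
```

The sharp early drop followed by a flat tail matches the pattern described above.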

Video tracking was done (Khalil, 2016, 'What can MOOC stakeholders learn from learning analytics?', in Lockee & Childress (Eds.)). The start of a video is watched, but the end rarely.
Learning success: what is it? In many cases learner activity is compared, but there is no clear link between which activity relates to learning success (not sure if I got this).
Learning analytics tells us how learning in the classroom happens: Ebner's social aspects lecture was turned into a MOOC. The idea was to open the course to both the students and the public. The course lasts 10 weeks, with regular content; the learners can choose how to go through the MOOC, and it includes a self-assessment per week. Each quiz could be taken 5 times. The cut-off was 75%, and if you passed all the quizzes you got a certificate. When looking at the clusters (Khalil & Ebner, 2016, 'Clustering patterns of engagement in MOOCs: the use of learning analytics to reveal student categories').
University students had highest certification ratio, with high reading, low writing, high video watching and high quiz attempts. (really interesting).
Learning happens all the time and can be fun. (Lackner, Khalil & Ebner, 2016, 'How to foster forum discussions within MOOCs: a case study')

Then badges were introduced, compliant with Mozilla Backpack: badges for quizzes and course finalisation. This was used for the first time in 2015. Learners who took badges went for all the badges. The dropout rate was much lower for learners going for badges than for those simply going for certification. (Inge: is this related to multiple motivation drivers?)

Gamification was done in one MOOC (Khalil & Ebner, 2016, 'How gamification can improve your MOOC student engagement'). The gamification element got more learners active than in the first week! Never seen that graph in any other MOOC (interesting). But one setback with gamification is those who are no longer active, as they are not engaged in the game.
 


Learning analytics constraints
Revealing personal information
Morality to view student data
Collecting and analysing data transparency
Deleting data policies
Who owns student data
Protection and copyright
Integrity, confidentiality and availability
Inaccurate analysis results

Benefits: potential for society
Knowledge society
Access to education (but learners with at least a bachelor's degree are the most successful in MOOCs)
Lifelong learning
Cost reducing
Quality improvements

Learning analytics: the dropout rate is not that high; get learners in for the first 4 weeks, and the rest is stable.

Martin remarked on a recent adult-learning MOOC, which used local ad-hoc groups that could meet on the subject of the MOOC; this turned out to be very successful (similar to other MOOC groups that were set up).

Future actions: bridging MOOCs to prepare students prior to university. 

These are Martin's slides:


Thursday, 6 October 2016

#edenRW9 editors and the publishing process #papers

Question and answer session with editors, really great questions!

Question: Interesting question from Airina Volungeviciene: whether published research can be tagged in some way (like with xAPI or similar, but then connected to bridge theory to practice) in order to see how applicable the research is, whether it is implemented and how; some indicators to see the impact of science.
No clear answer to that from the editors (Dianne Conrad, Jill Buban, Josep Duart, Som Naidu) (yet). I would like to see this happen, also in other areas. Aaron Silvers, what do you think, would this be possible?

Question from Marti Cleveland-Innes: there is increasing fear of reviewing articles and editing journals... how can this become more feasible given the increased time shortage? And can you give a notion of who reviews your articles, and how do you choose reviewers?
Answer: (Dianne Conrad) You hit the nail on the head; it is a major fallacy of peer review. We rate reviewers and then evaluate whether they can stay on as reviewers. We assign 4 reviewers to every article (in the hope 2 will respond).
(Som Naidu): We are editors out of academic responsibility. And universities do not reward people for doing reviews! Academics might be given awards for that type of work. There is a process for each reviewer, and if reviewers do not respond, they come off the list. Full professors are the busiest, and they almost never respond, so we choose reviewers who are early in their academic career.
(Josep Duart). We collect as much information as possible in terms of their expertise, so we have a good profile of them as academics in their field.

Question (George Veletsianos): could you give feedback to authors on how to respond to 'minor or major' revisions?
Answer: (Dianne Conrad) It is up to the reviewer to make sensible comments that support the review suggestion. The feedback needs to provide support for the decision made by the reviewer.
(Som Naidu) I look at the reports and filter them if they are too divergent before sending them to the author, and then provide the author with options for what to do, making it possible for the author to enter into a constructive conversation.

Question: how can we get published?
(Dianne Conrad) The most common reasons for rejection:
Turnitin is used, and its reports.
We look at whether the research fits the scope of the journal.
Research of little importance, specifically to our readership.
There is always something out there on any topic... never think there is not. (We prescreen articles before sending them to reviewers.)
Make an article understandable right from the abstract. Abstracts should be written after the article and be VERY precise, including the conclusions!
If the research is old, it will probably not be accepted; use up-to-date research.
Use up-to-date literature.
Do not republish; 'we have seen it before' makes it unpublishable.

Question: how do you handle contradictory reviews?
There are always multiple opinions, and it can be depressing to see how equally expert reviewers give different reviews and decisions.
Each review is different, so as an editor I choose what goes out and what does not.
I consider the quality of the reviews and the quality of the reviewer. I might send the article to another reviewer, or take a personal decision as an expert myself. (Dianne)
If even one reviewer suggests a decline, I do not publish it.
Some reviews are not up to snuff, so I do look over the reviewer comments.

There are a lot of subjective components in editorship.
Editors do not get paid.

#edenRW9 Studying learning expeditions in cross action spaces with digital didactical designs

Isa Jahnke (http://www.isa-jahnke.com/ ) is an inspiring academic, and one who knows how to network.
Liveblogging from Oldenburg, Germany.

@isaja

Who is using a device with internet access? That is cross-action spaces... when you tweet, content is used to link to other people in other spaces, to learn from.
Human interaction is cross-action... multiple spaces. Conversations go from network to network to network. Humans connect across locations/spaces, virtual or physical.
This cross-action can also happen in classrooms. About 20 years ago, school was the place to learn. Now, with these cross-action spaces, learning is opened up beyond classrooms only.

Learners apply classroom themes into the material world, in which they are living. So the outside comes into classroom and from classroom to other spaces.

What is learning:
Reflective doing of multiple crossactions
Reflective performance of crossactions

Reflective communication

Theoretical lens:
Learning is embedded in a bigger organisational framework.
It is not only about teacher student, it is about learning goals, institutional strategies, curricula, academic staff development...

Digital didactical design has 5 components to study about 80% of what is happening in the classroom:
learning activities,
assessment
learning outcomes
social relations
web-enabled technologies

She investigated 64 classes to see where these classes sat in the 5-layered framework. Their position in the framework made it possible to identify where problems arose.

The patterns provided meaningful interpretations into the inner/outer classroom actions. 

The university of the future is made of those cross-action spaces, in which teaching is organised in project teacher teams across existing disciplines:
teacher teams from different departments work together to design a learning expedition, and the students develop learning expeditions, learning by topic and not by subject.

Have a look at her book!
Digital Didactical Designs, published by Routledge

Wednesday, 5 October 2016

#edenRW9 Tweaking facilitator focus in MOOCs changes course dynamic dramatically

Martha Cleveland-Innes from Athabasca University (yeah!!!) talks about MOOCs.
MOOCs are part of educational reform.
She looks at the drivers for higher education and change, and looking at those will (hopefully) make... This is a liveblog from EDEN research workshop 9.

Everything we are doing, online and offline, in our universities, MOOCs... is all part of the pressure for change, and it is greater than anything the university has faced at any previous point in history.
Athabasca is a reformed university working to maintain and fulfil the iron triangle, which also includes pedagogical quality.
Iron triangle: the three components of higher ed that are a good for society: access, quality, and affordability/cost-effectiveness (Daniel, Kanwar & Uvalić-Trumbić, 2009).
Whatever the response from higher ed on MOOCs, never before in history have so many learners followed so many courses. But can MOOCs help us with educational reform?
AU MOOC advisory group mandate:

  • create and provide an expert, evidence-based assessment of, and a critical academic and practical voice on, MOOC issues to our local, national and international networks
  • determine direction regarding the assignment of credit for individuals who participate in MOOCs
  • support those interested in constructing an AU version of a MOOC, where such an endeavor will continue our mandate
  • observe, document, measure, analyze and disseminate MOOC experience
MOOC research initiative at AU selecting MOOC research initiatives.
Results from ensuring quality and scale impact for MOOCs.
Learning to Learn Online (www.ltlo.ca) was the MOOC to focus on: a 5-week MOOC, launched 9 March 2015, given on the Canvas LMS, designed for both new and experienced learners (community-based education). Learner-centered.
Key question: is it possible to maintain the access and affordability offered by MOOCs while completing the education iron triangle, which requires pedagogical quality as well?
3 levels of instruction: flat instruction (video), an inspirer who starts and ends the week by inspiring, and supporters (DE students, to support, guide, offer assistance). The completion rates were similar to other MOOCs, regardless of the pedagogical choices. They ran it a second time because of the quality experience it gave. The second time there were only 4 facilitators, not 10, and the focus of the facilitators was to connect learners. This led to much more student-to-student interaction. Just by tweaking the facilitator focus, the course dynamic changed dramatically (in this case for the positive).
Interviews were done with specific students; these qualitative data showed ...

Also have a look at telmooc.org 

Wednesday, 21 October 2015

Keynote #CLIL Teresa Ting second #language learning challenges

Y.L. Teresa Ting from the Università della Calabria (Italy) has an Italian charm and looks fabulous as she takes the stage. Today she focused on the question of how the CLIL format can answer the many challenges of educating pupils in a foreign language, especially if one takes into account that the outcomes of native-language courses keep having flaws. She is also clearly a teacher: narrating, yet paying attention to our focus, and she makes us do things (takes me back to the classroom). I was following with pleasure, until the sentence "students need good textbooks more than ever, as you never know what they will find on the internet" - okay, just imagine my face when hearing that sentence! Fun though, and the full keynote was definitely of interest.

Throughout her keynote Teresa used notes to illustrate how students and teachers think. She put these sources online for those interested, to be found (google drive docs) here.

She opens with the OECD Skills Outlook 2013, where she refers to the challenge of literacy... how can instruction through a foreign language help?
Visible Learning and the Science of How We Learn (John Hattie and Gregory Yates, Routledge).
What makes up great learning? One thing occurred in all classrooms: teachers seeing learning through the eyes of the students, and students seeing their learning through their own eyes.
Nobody likes to learn content that they did not choose.

Teresa Ting started off teaching English in Italy, while actually being a neuroscientist. Around 2000 she was given the opportunity to engage with CLIL.
Evolution is a very conservative process; as such, the brains of rats are comparable to human brains (Teresa first researched rats and learning).
Brain reward system: the part of the brain where rats would feel really good; it is part of the primitive brain. Rats will press the stimulus until they are dead, omitting eating, drinking... It is similar with human brains: when brain surgery is done, this part of the human brain can be stimulated.
Motivation is already embedded in the brain; it is there, and we (as teachers) only have to activate it.

The big question: can we activate these pathways of motivation?
C1-level English as a foreign language => implications that this might be a problem.
In 1980, in the Anglophone ex-colonies, science was taught in English. Teachers did most of the talking, but learners kept quiet and did not understand most of it.
So the risk of C1 competences is that teachers revert to transmissive education.
In Italy the teachers do not speak, so they have to come up with different learning activities, AND the material becomes easier, though still aimed at reaching the learning goals.

At a given point during the presentation, Teresa gives us exercises to illustrate what learner-centered teaching with little input from the teacher is like (Inge note: very similar to cMOOC).
Then Teresa also illustrates the complex language use that occurs when learners are left to their own devices (or at least, when they are given more freedom).

Teresa refers to lexical density. (Inge note: look up the tool you used for easy English in MobiMOOC, the Fog scale or something?)
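For the curious: one readability measure in this family is the Gunning Fog index. This is my own rough sketch (a crude vowel-run syllable heuristic), not the exact tool mentioned in the note; real tools use proper syllable dictionaries.

```python
# Rough sketch of the Gunning Fog readability index:
# 0.4 * (words per sentence + 100 * share of "complex" (3+ syllable) words)
import re

def syllables(word):
    """Very rough syllable count: number of vowel runs in the word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

easy = "The cat sat. The dog ran."
hard = "Disciplinary terminology complicates comprehension significantly."
print(gunning_fog(easy) < gunning_fog(hard))  # → True
```

This makes Ting's point concrete: disciplinary discourse scores far higher on such scales, which is exactly why it overloads learners as input.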

With disciplinary expertise, the disciplinary literacy needed increases. Bourdieu and Passeron, 1977: academic language is no one’s mother tongue.
But disciplinary discourse is the most precise way of speaking (community language).
But the language of the community (disciplinary discourse) must be mastered; if you cannot use it, you have not learned the content yet (not mastered it).

The challenge: the discourse students should output to show they have learnt cannot be used as input for learning (Ting, 2015), because if the language is too difficult, it is not picked up.

Working memory scans the environment to see what kind of information you need to pay attention to (short-term memory). In a classroom you want to move from short-term to long-term memory. But working memory is volatile, with limited capacity (5-7 digits) and limited duration (only seconds). To get into long-term memory, information needs to be attended to.
Working memory overload happens even in the mother tongue; good teachers are aware of this, but must also be aware of the risk posed by 50 minutes of disciplinary language.
This is why Teresa creates tasks: transforming texts into task, which follow a learning progression.
(Inge note: parallels contextualized learning ).

Learning content always embraces two parts: the content itself, as well as the language which describes the content. If content is difficult, the language must be easy; if content is easy, the language can become difficult. Which is a way to be aware of working memory.

An option is assessment of learning for learning (see exercise 3 of the prints provided)… seems like cMOOC.
Inge note: In the CLIL-MOOC: Big macro content –learning is cut into little content pieces (eg. What are the elements of MOOCs, and what can we do with these elements, and do it). This is assessed, and based upon feedback, new iterations are provided, as well as reflective moments (progress diary).
Problems dilate the pupils of the humans solving them; as soon as the problem is solved, the pupils undilate: so it is a physiological response.
The brain likes solvable problems.
Haptic tasks: proprioceptive feedback also stimulates synaptic grounding.
Semantic incongruency: this alerts the brain, but it is not a positive thing. Such information generates "distraction" and therefore is not easily processed. This points to the message that academic text is full of semantics that are incongruent with how we usually use our mother tongue, which is why academic text is not a good source of input for learning - but an essential source for reference.
Priming: a way to prepare humans for what is to come and orient the brain (e.g. being able to think about whether what follows will be difficult or easy; cf. advance organisers).
Important factor for language learning: we need to use a whole language approach to increase academic language and disciplinary discourse. Providing holistic language to increase contextualised grounding of the language.


Input must be whole language, tasks must be whole thoughts to make an impact. 
Teresa said: students need good textbooks in this day and age to increase their core concepts and details, as well as academic discourse. 

Tuesday, 19 May 2015

Sian Bayne keynote on teacherbot #emoocs2015 @sbayne

Great keynote by Siân Bayne, professor of digital education at the University of Edinburgh, UK, on Teacherbot, a twitterbot used within an educational mooc. Really interesting, both from the automation angle and for the social effect on the debate.
Sian provides multiple references, so where possible I mention the author and year of the reference in this liveblog. Hot from the press Times Higher Ed article on Teacherbot here.

1 Debates in teacher automation
Artificial intelligence in education
Adaptive learning
Teacher automation

Suppes, 1966: automated-teacher visionary

Electric tutors: education is about to undergo a revolution unequaled since Gutenberg’s movable type. Arthur C. Clarke – visionary article

The electronic tutor is going to spread across the planet as swiftly as the transistor radio … pure technocentricity (1980)
By 2011 – Underwood and Luckin: AI applications in education are still not very well known or very well used, because we have not understood why to use them or how.
So there is a body of research which critiques this automated tutor.
Feenberg, 2003: the goal is to replace face-to-face teaching by professional faculty with an industrial product, infinitely reproducible at decreasing unit cost – critique.

This has attracted a lot of funding opportunities, again it is very political.

Who is arguing against this? Neil Selwyn has written many books describing the neo-liberal, capitalist lines of this automated vision.

How can we respond to this automation function?
A critical pedagogy approach would bring the focus back to the relation between students and tutors (Clegg, 2003).
Humans, academics and teachers working together.
Mobilization in defense of the human touch (Feenberg, 2003)

2 The people/technology divide
Two choices provided by Hamilton and Friesen, 2013
Instrumentalism: technologies are seen as neutral means employed for ends determined independently by their users.
Essentialism: technology has its own trajectory; humans need to adapt while it, like a Newtonian god, watches unaffected as the drama unfolds.

This is something we must think about critically to move forward.
Things and people need to learn to join – (Fenwick, Edwards and Sawchuck, 2011) focusing on the world and the dynamics in them.
Whatmore, 2004: the human is always evolving and adapting to tech, we (tech and human) make each other.

Any teacher that can be replaced by a machine should be (Arthur C. Clarke)

3 Twitterbots as a cultural form
Twitterbots tweet on their own without any human action. 8.5% of tweets are delivered by twitterbots.
So it is an interesting, contemporary social form. Here are some examples.
Example: ‘Dear Assistant’ made by Amit Agarwal, LA Quakebot made by Bill Snitzer, Olivia Taters made by Rob Dubbin, Desire Bot made by Felix Jung.
Bots of conviction: bots can serve all sorts of functions, and this kind is political. NRA Tally made by Mark Sample, Two Headlines made by Darius Kazemi.

Rob Dubbin wants to demystify what happens in the world through critical twitterbots.

4 Teacherbot in the EDCMOOC
This bot wants to do some critical work on the boundaries of teacher and technology.
Teacherbot ran from October/November 2014 in Coursera mooc (12000 enrollments).
50% of enrollers work in education, so they should be receptive to this critical teacherbot.

#edcmooc: based on a simple GUI; mostly the topics were process, content, social and pastoral related (see example in picture)
The challenge was to make a teacherbot that was part of the curriculum. So the answers of the bot were fed content from the course.
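A hypothetical sketch of how a curriculum-fed bot like this could work: incoming tweets are matched against keyword rules and answered with canned, course-specific replies. The rule keywords and reply texts below are invented for illustration; they are not the actual Teacherbot rules.

```python
# Invented keyword rules in the spirit of Teacherbot - not the real ones.
# Each rule pairs a set of trigger words with a canned course reply.
RULES = [
    ({"deadline", "assignment", "due"},
     "Assignment deadlines are listed on the course page. #edcmooc"),
    ({"video", "lecture"},
     "This week's videos are linked from the course homepage. #edcmooc"),
    ({"lost", "confused", "help"},
     "You're not alone - try the week's discussion thread. #edcmooc"),
]

FALLBACK = "Interesting point! What do others think? #edcmooc"

def reply(tweet: str) -> str:
    """Return the first canned reply whose keywords appear in the tweet,
    falling back to a generic prompt (the bot never stays silent)."""
    words = set(tweet.lower().split())
    for keywords, answer in RULES:
        if words & keywords:  # at least one trigger word present
            return answer
    return FALLBACK
```

Because everyone knew the bot was a bot (as the notes stress below), the fallback can afford to be a visible, generic teaching prompt rather than an attempt to pass as human.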

Serious lessons: we never intended to trick the learners into thinking it was human. Everyone knew.
Teacherbot got into a dark loop at first, with a fatalistic tweet, but then things settled down.
The bot was actually very dominantly present in the twitter stream (about 25% of the tweets)
But it got extra discussions and reflections going (ambush teaching)

5 Rethinking teacher automation
Deficit => excess: based on teacher numbers. The teacherbot helped with creating excess.
Supercession => entanglement: whether we do automation right or wrong, it will affect teachers. The question was to find out where the interest would lie for education, as complementary extras.
Embrace/resistance => play
What works => what do we want?


Sian Bayne (2015) teacherbot in journal of higher education to be found here.

Monday, 18 May 2015

Liveblog #emoocs2015 Amdocs a corporate training MOOC

This session is presented by Nomi Malca, and it provides a good idea of the international impact and challenges of MOOCs for employees in a worldwide company.

These moocs are seen as personal development, not mandatory training. Employees need to work in their own time (or at least that is normally the only option considering the workload), and they get a certificate of accomplishment.

One of the challenges was current organisational culture versus the desired learning culture.
So it demands a new culture to implement this new training for all the employees.

For this MOOC it was important to start small, so for the pilot we had 30 – 40 people, eventually it became 300 learners in each closed course of the company.
So each mooc is different, but using recorded videos, online sessions, links to external contents, reading material, hands-on sessions (this was considered as a priority), and creating a workers community (also perceived as important).

The MOOC topics:
Stay updated (short term) technical excellence
Project management
System improvement
Creativity
Process excellence

The moocs were very interactive, as learners had two time zones to choose from.
In project management, the elearning and video lectures were built with acquired authoring software (looks like Articulate).
The hands-on sessions became important after people indicated that they wanted a completely directed process of practical implementation.

Creativity and innovation
External courses learned in groups: so learning MOOCs in groups, with some synchronous sessions to close the loop for the learner.

The retention rate was higher than in academic MOOCs, but not high enough to secure budgets for corporate training. So the challenge is to increase this retention rate.
50% of the actual learners finished the course.

They used an internal platform: Moodle out of the box, but inside of the organisation due to IP demands.

Achievements so far
Satisfaction based on positive vibe on MOOCs, so they all talk about it.
The content was always meaningful: no compromising in comparison to classroom content.
Organisational recognition of these moocs.


liveblog #eMOOCs2015 collaborative MOOCs a challenging experience

This session is given by Sandra Soarez-Frazao and Yves Zech from RESCIF.
Very interesting, as I can see parallels with other North-South course challenges.

The MOOC they talk about is about ‘Rivers and Men’, in the international (French speaking countries) RESCIF, a North South cooperation. This network has as an objective to focus on:
Water
Energy
Nutrition

Teaching support in this North-South context: lectures are organised in both institutions and staff teach each other's courses. MOOCs can offer a different way of sharing teaching experience.

With global climate change, water is becoming an increasingly important commodity, which is at the basis of choosing to organise this MOOC.
Topic: the dynamics of trained rivers, drawing on experience with Northern and Southern rivers. Designed for engineers who want to refresh their knowledge, or to understand all the basics needed for water measuring. And without intending it, citizens with concerns about the environment came in as learners as well.

Welcome week, four-week course, personal project (choice of a problematic and a related real case).

All the MOOC week topics are shared. They all build upon each other, so simple to complex.
Remark from Sandra: because the MOOC was in French, fewer participants joined.
The participants were less African-based than expected.
Learner participation was quite constant throughout the MOOC (for those participating actively).

Assets of collaborative MOOCs
· Extended resources:
· More people share the work
· The best practice of the various teams
· Extended network for advertising
· Opportunity for some teams to enter the MOOC world
· Various pedagogies (e.g. web references versus literature)

Teamwork
· Brainstorming
· Mutual incentive
· Mutual criticism
· Strong encouragement to hold the production schedule

Challenges of collaborative MOOCs
Heterogeneity of course team
Scientists and tech team might define concepts differently
Scientists of diverse disciplines (e.g. earth and life versus civil engineering)
Disciplines with diverse cultures
Vocabulary (uniform flow for example versus steady flow)
Empirical versus mathematical approach (also related to different jargon and ways of perceiving things)
Difficulty to define the target audience
Risk of a kind of competition (whose presentation was best, for example)

Heterogeneity of course itself
Each week with a distinct level of difficulty
Some reluctance to mathematics, even basic
Forums of discussion not in line with the course content

Required uniformity sometimes felt as a weight
Choice of templates
Constraints of uniform sequence schedules
Constraints due to the FUN platform

Problems of communication
Travels required to meet
Overloaded agendas of course team members
Weak efficiency of distance meetings.

Improve the teamwork effects
Systematisation of mutual (positive) criticism
Better links between lessons (avoid useless repetitions, and contradictions)

Real involvement of Southern partners
Using their study cases (partly done)
More open to teachers from the south
Organise adapted MOOC operations where required (offline versions, use as a support to local teachers, organise local evaluations)

Question: did you get remarks regarding the Northern tech that is used as a symbolic gatekeeper?


Tuesday, 10 June 2014

#CALRG14 The OpenupEd quality label: benchmarks for #MOOC, by Jon Rosewell

In Jon Rosewell's talk on the OpenupEd quality label: benchmarks for MOOCs, he covers some reasons for, and practical benchmarks currently used in, assessing the quality of MOOCs on the OpenupEd EU MOOC platform. (Nice sidenote: the first EU MOOCs rolled out on the platform did not go through the complete process, as people wanted to get the MOOCs out there, and they came from institutions with strong quality assessment in their set-up.)

Learners have a variety of ideas on what it is to follow a MOOC, which results in mixed emotions in terms of usefulness of the MOOC they followed.

So why bother with quality?
  • Students: know what they are committing to
  • Employers: recognizing content and skills
  • Authors: personal reputation, glow of success
  • Institutions: brand reputation
  • Funders: philanthropic, venture capital
Other issues of quality
Reported completion may be very low (1 – 10%)
Does that matter? With very large starting numbers, there are still many learners completing, and maybe learners achieve personal goals even if they do not complete.
Can MOOCs encourage access to HE if >90% have an experience which is a failure? (Note from myself: the same can be said about any institutionalized education – many young pupils are unhappy with education as well, so what is new?)
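The scale argument can be made concrete with a quick sketch; the 1–10% range comes from the notes above, while the enrolment figure is a made-up example.

```python
def completers(enrolled: int, completion_rate: float) -> int:
    """Absolute number of learners finishing, given a completion rate."""
    return round(enrolled * completion_rate)

# Even the low end of the reported 1-10% range yields sizeable cohorts
# at a (hypothetical) enrolment of 100,000 learners:
for rate in (0.01, 0.05, 0.10):
    print(f"{rate:.0%} completion -> {completers(100_000, rate):,} completers")
```

So "very low" percentages can still mean thousands of completers in absolute terms, which is why the percentage alone is a misleading quality signal.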

Often MOOCs come out of HE at this point in time, which means the approval pattern of these moocs is tied to the quality assurance of the university that wants to roll them out. User recommendation is also coming into the evaluation of a MOOC.

OpenupEd benchmarks: benchmarking as an improvement tool:
  • Quality enhancement
  • Identification of weaknesses and strengths
  • Action plan for improvement
  • It is not expected that every institution will achieve every benchmark or feature
More benchmarks for OpenupEd in the slidedeck.



#CALRG14 Eileen Scanlon narrates her journey of MOOC-in-the-Now


From the CALRG conference in the Open University, Milton Keynes, UK. A conference on sharing latest ideas in online learning related fields (broad and interesting). 

First session by Eileen Scanlon on what she learned as she went on a journey of MOOC-in-the-Now.
Eileen is a tower of power and vision, as she is always ready to share the latest ideas and insights with anyone who wants to listen. Her talk was of interest.

My notes are live blognotes with its resulting short or quirky sentences.

Learning at Scale (Atlanta)


Single-track conference (so you could attend every session, which was really nice).

Chris Dede keynote – New wine in no bottles: immersive, personalized, ubiquitous learning. Thinking outside the box of teaching is essential to realizing learning at scale. Virtual worlds and augmented realities can complement digitized classroom instruction through simulated apprenticeships, embedded support for learning everywhere, and transformed social interactions. Going big also requires thinking small: analyzing diagnostic micropatterns to customize individual learning, sifting through millions of participants to find the ideal partners to aid each other's growth. To reach massive scale with universal access and powerful outcomes, we must creatively expand our visions of platforms, pedagogy, and financing.
used metaphors from film and learning sciences

Very lively conference, as people were voicing their ideas, felt they could, all had experience.

Scanlon, McAndrew and O'Shea contribution: distance learning, OER and MOOCs; case study: the Open Science Laboratory.
OSL co-founded by the OU with support from the Wolfson Foundation; it built a collection of tools to combine remote access, virtual experiments and citizen science into the curriculum.
39 applications across the board.
Case study Edinburgh: ESSQ was shared.


Invitation-only summit (but 150 people).
Opened with a conversation between the presidents of Berkeley and Stanford.
John Hennessy: "We're going to invent the future."
Colleges will be taking a more scientific approach to online learning than in the past, relying on their schools of education to measure student learning and provide feedback.

Great session: Eric Grimson (MIT), reflections on edX: expand access to education for students worldwide through online learning
while reinventing campus education through blended models
and learn about learning
gave example of undergraduate physics course transformation (also see Breslow et al. in ACM L@S – Eileen has the pdf of all the presentations, but not open yet)
Undergraduate required physics course: TEAL style classroom teaching
Group problem solving interspersed with mini
Shared detailed stuff on forums, and other data. Results show that it is not a residential experience (it is a MOOC). If you want to find out what happens in edX, you can just google the report and get it. Very open with detailed information. So it seems there is a difference once something is not for profit.

Ken Koedinger was another highlight
Pittsburgh Science Learning Centre, LearnLab
Part of the Simon initiative at Carnegie Mellon University 'exploring the mystery and potential of human learning'
Built on the core principles of learning advanced by CMU's Nobel Laureate and pioneering educator, Herbert A. Simon, whose work linked cognitive models of learning with computational tools, the Simon Initiative makes the learner its focus and measurably improving learning outcomes its goal. The Simon Initiative will harness the university's vast technology-enhanced educational ecosystem, which goes beyond the university and embraces the whole of society, which makes universities move beyond their individual concerns – very interesting.
Carnegie cognitive tutors KK: makes money from prior research, they kept faith in individual tuition. (Inge look up on Web)

Lytics Lab (Stanford): 4 or 5 postgrads got together in September 2012, focusing on what was needed for them to research MOOCs. They ended up asking two main professors to head the lab. So it came from the students.
Candace Thille: new associate professor moving from the Carnegie Mellon Open Learning Initiative
Project to use learning science with open educational delivery
"She will complement the strengths we have in studying the effect of context in learning and research on the role of technology in education, and can tie a lot of these things together. She can help us transform what would otherwise be independent, somewhat fragmented efforts into systematic improvement of this kind of pedagogy" – Steele, Head of GSE.
At this point the Lytics lab did transform, but it is still of interest.

Guest speaker at Lytics
Carolyn Rosé

Mitchell Stevens (director of digital education)
"With the arrival of online education, the world is on the verge of an 'epochal and pivotal moment' in the history of higher education, on a scale of importance as deep as the expansion of higher education after World War 2 with the GI Bill."
At Stanford it looks like a much more instruction-led approach than edX. Academically driven.
From this conference, people went to:
Dagstuhl (where Mike Sharples went), Germany
LAK14
EU MOOCs conference (Lausanne)
Coursera conference (London)
FutureLearn Academic Network (FLAN), led by Eileen, Mike Sharples and Russell Beale

Collaboration between partners
ESRC proposal on the future of higher education
Centre for Open Adaptive Connected Higher education (COACH)
Partners:
University of Edinburgh (Bayne and Heywood)
Carnegie Mellon University (Koedinger and Rose)
Oxford University (Pullman).

New directions, new partnerships
Things are moving very fast, and you need to visit people in order to understand what they are doing, in terms of vision, of research, and look for international partnerships. To infinity and beyond.

Wednesday, 22 January 2014

#PhD notes about #Grounded theory WS with Anne Adams

Anne Adams during a presentation

While writing up my findings for my pilot study, I am lucky enough to follow a workshop on grounded theory here at the Open University. This post reflects some of the points raised by the presenter Anne Adams, so live blogging from the workshop.
Starting from the previous qualitative experiences of all the participants, Anne gets an idea of her audience. Anne is very energetic, and clearly so knowledgeable that she is open to questions from the floor at any moment. Anne's approach starts from the data.
Premise: it can start from mixed methods, as well as purely qualitative

Important for PhD justifying the methodology used.

Quantitative versus qualitative; either way reflective
With grounded theory the results come from the data of the participants, so the subjectivity of the qualitative is in dialogue with the predetermination of the quantitative.
Quantitative challenge: imposing an external system of meaning on internal subjective structures, whereas grounded theory comes from the participants.
Qualitative challenge: generates working hypothesis by producing concepts from data, representing participants reality in its complex context. But here the researcher’s assumption does add to framing the data.
So research always has challenges through the instruments used. In ALL research challenges emerge. So being reflective is the answer to reach validity.

Grounded theory Background
Glaser and Strauss (1967) Glaser comes from quantitative, and Strauss from qualitative.
The theory is the end goal of their Grounded Theory approach. (Henwood and Pidgeon, 1992, p. 101): “both qualitative and quantitative approach…”

An important view when writing up the verification/argumentation: look at who will examine your research – educationalists, technologists…
Another important idea to remember is that Grounded Theory (GT) is an iteration. So the coding is not done linearly but iteratively: the linear part occurs in stages to find depth and meaning, after which the whole argumentation is thought through again.

GT strengths
Phenomena complexity
Unknown phenomena
Structured/focused approach to theory building
Integrating mixed data sources

It is a skill as a researcher to continually manage to connect detail to theory in a valid justification.

Quality rules (Henwood and Pidgeon, 1992)
  •  Importance of fit with the data
  • Integrated at all levels of abstraction
  • Reflexivity (always look for the why, justification)
  • Keep documentation (field notes, memos, notes taken during the exploration)
  • Theoretical sampling and negative case analysis. The sampling is very central to the process, because the experiment is not designed, the reason for selecting your participants becomes more important and should be clearly mentioned in your PhD account.
  • Theoretical sensitivity: the methodological approach. You should not go in with a prior theory – in theory – as this is seen by Glaser as polluting, but there are different flavours of GT. So you need to take a position on what you use and which GT you follow; Anne went in not with a framework, but guided by some theory (Inge, wondering if this is more Charmaz?).
  • Transferability: how far can your findings be transferred beyond your group. The GT purists would say that any theory coming out can only be related to that specific group, but there is an element of transferability to different contexts. So the sample might be generalizable to other contexts. This might also be of importance to your PhD dissertation, but you must be very clear on it not to ignite more discussion than necessary.

A qualitative approach to HCI by Adams: http://oro.open.ac.uk/11911/  (2008) and another one but not typing quick enough to get that one, scholar googled Anne Adams here.

GT application
Data, in whatever form, is broken down, conceptualised, and put back together in new ways.
Analysis stage – 3 levels of coding: open, axial and selective.

Open (concepts are identified and grouped into categories – more abstract concepts and hooks – with the properties and dimensions of the category identified. Each category might cover a specific property, and will have a dimension and dimension range: the frequency with which it occurs, the scope it has, the intensity with which it is mentioned, or the duration it refers to). A rule of thumb with open coding is to look at frequency; even if something is only mentioned infrequently, it might still be fundamental (e.g. "after that I never learned anything online again"). The idea of saturation marks the moment you know you have gone far enough: saturation emerges when you no longer find fundamentally new ideas.

Axial: start to move up from the categories, looking for high-level phenomena and conditions: causal conditions, contextual conditions, intervening conditions. Phenomena, action/interaction strategies and consequences are identified by your participants. Phenomena are central ideas or events, whereas conditions are events that lead to the occurrence or development of a phenomenon. The context is a set of properties (e.g. location) that pertain to the phenomenon. Intervening conditions shed light from a broader structural context (e.g. is it the individual, the organisational, the societal … which depends on the research question you started from and are seeking to answer). Action/interaction strategies are devised to manage, handle, carry out, or respond to a phenomenon under a specific set of perceived conditions. Consequences are the outcomes or results of action/interaction.
(e.g. when I want to have (context) a personal conversation (phenomenon), I encrypt the message (strategy); I think this makes the email private (consequence).)
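As a purely illustrative data sketch (my own, not from the workshop): open codes can be thought of as records carrying a category, a property and a dimension, and the first axial move is then simply grouping them by category. The example codes reuse the encryption scenario above.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class OpenCode:
    excerpt: str    # the raw data fragment the code stays grounded in
    category: str   # the more abstract grouping concept
    prop: str       # which property of the category it covers
    dimension: str  # e.g. frequency, scope, intensity, duration

def axial_group(codes: list) -> dict:
    """Group open codes by category - a first step toward axial coding."""
    grouped = defaultdict(list)
    for code in codes:
        grouped[code.category].append(code)
    return dict(grouped)

codes = [
    OpenCode("I encrypt the message", "privacy strategy", "encryption", "frequent"),
    OpenCode("I only encrypt personal mail", "privacy strategy", "selectivity", "occasional"),
    OpenCode("I think this makes the email private", "perceived consequence", "belief", "intense"),
]

grouped = axial_group(codes)
```

Keeping the excerpt attached to every code mirrors the rule below about not losing detail while coding open and axial together.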

Selective (latter with process effects):
·         Select the core category (central phenomenon around which all the other categories are integrated) and high level story line (a descriptive narrative about the central phenomenon). The high level story line comes from your core category, so just a couple of sentences at the very most, that which goes into your abstract.
·         Related subsidiary categories by its properties (the other things are all related – in many cases by the properties – to your core category)
·         Relate categories at the dimensional level (it might be related dimensionally: some a lot, some less)
·         Iterative validation of relationships with data
·         Identify category gaps (telling what might be related, but is NOT what you are addressing with your research – patching holes provides the boundaries of your research)

At the end you – as a researcher – need to find a missing part of the puzzle
Lines between each type of coding are artificial
·         Data presented at dimensional level
·         Action/interaction and conditions present
Tools are the means to come to these results.
Very important: keep relationship coding notes in open coding/analysis without loss of detail, and code both open and axial together.

My additional question: who are the non-purist GT theorists?
Gilbert Grounded design flavour? (not sure about this, might have misheard it)
Focus is very variable, because GT has been adapted by many disciplines resulting in different GT flavors : so best is to look at your discipline and look at papers from that area.
Charmaz is really good for an intermediate approach that allows starting from some theoretical assumptions.

Tool remarks
Atlas TI fits perfectly with grounded theory (more than NVivo); within CREET there is a group of licences for Atlas TI (Inge, you must ask whether CREET has licences to offer you as a PhD student).
Using an analysis tool makes it easier to keep the codes up to date throughout the overall process, even if you are changing the code names. Atlas works better with multimedia files (easy coding). BUT the tool should never stand in the way of rigour and personal work/research; if the tool feels restrictive, it is better to use physical options that work for you: post-it notes, Word…