
Thursday, 3 October 2019

Yes, a learning engine: demo is ready, but #AI and #Learning challenges ahead #TBB2019 @InnoEnergyCE

If you have ideas on ensuring continuity in pedagogy when clustering courses (research), on certifying across corporate and university learning (blockchain / Bit of Trust certification), or on opening up industry academies to decrease L&D costs (HR and L&D)... please think along and respond to the challenges mentioned at the end.

People in high and common places seem to agree that the world is in transition, especially workplace learning, as innovations keep changing what is possible. As I am working on one such innovation (the skill project of InnoEnergy), I am on the one hand very excited about the new opportunities it might open, yet at the same time concerned that the complexity is bigger than expected.

First: have a look at the demo screencast here. It shows the overall idea, and ... this might immediately give rise to questions.

Today the Business Booster event (TBB) opens, and with it, the skill project demo is launched. The skill project (we still need a brand name for it) combines AI and learning for the sustainable energy sector. But in essence, once we get the sustainable energy sector mapped with this tool, other sectors can follow.

AI and learning? What does it do: the project identifies industry needs (AI-driven), pinpoints emerging skill gaps in the sustainable energy sector (AI-driven), analyses the existing workforce to know where the urgent skill gaps are situated (AI-driven) and then refers employees to a personalized learning trajectory addressing their skill gap (part AI, part human support). The goal of this project is to ensure that employees of the sustainable energy sector stay future-proof in a quickly changing working environment. Let's be honest, it sounds cool, but ... the challenges are multiple.
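To make that pipeline a bit more concrete, here is a minimal Python sketch of its simplest, non-AI core: comparing the skills industry demands against the skills the workforce already has, to surface the most urgent gaps. All names and numbers below are hypothetical illustrations, not the skill project's actual code (the real demand side would be AI-driven, e.g. mined from job postings).

```python
# Hypothetical sketch: surfacing skill gaps by comparing demand and supply.
# Both sides are toy dictionaries here purely for illustration.

from collections import Counter

# skill -> number of open roles asking for it (invented data)
industry_demand = Counter({"offshore wind O&M": 40,
                           "power electronics": 25,
                           "grid integration": 30})

# skill -> number of employees who already have it (invented data)
workforce_skills = Counter({"power electronics": 22,
                            "grid integration": 5})

def skill_gaps(demand, supply):
    """Return skills ordered by unmet demand (demand minus supply)."""
    gaps = {skill: d - supply.get(skill, 0) for skill, d in demand.items()}
    return sorted(((g, s) for s, g in gaps.items() if g > 0), reverse=True)

for gap, skill in skill_gaps(industry_demand, workforce_skills):
    print(f"{skill}: short {gap} people")
```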

The emergence of a Learning Engine
The skill project helps realize the emergence of a learning engine: an intelligent, career-oriented engine which knows your skills and signposts you to where you want to go with your career by suggesting a personalized learning track.
In the Learning Engine you simply type in "goal: become Director of Innovation in offshore wind energy – which courses?" and the engine immediately returns a tailored, personalized learning track consisting of certified business training from universities and corporate academies, open educational energy resources, and coaching options to send you on your way. This will allow professional learning to surpass the limits of classical, university-based learning.
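As a rough illustration of what could sit behind such a query, here is a minimal greedy sketch: map the goal role to required skills, subtract what the learner already has, then pick courses until the gap is covered. The role-to-skills map and course catalogue are invented toy data; the real engine would be AI-driven and far richer.

```python
# Hypothetical sketch: assembling a personalized learning track for a goal role.

role_skills = {  # invented mapping: role -> skills it requires
    "Director of Innovation, offshore wind": {
        "innovation management", "offshore wind tech", "project finance"},
}

course_catalogue = {  # invented mapping: course -> skills it certifies
    "Offshore Wind Fundamentals (university)": {"offshore wind tech"},
    "Corporate Innovation Bootcamp (industry academy)": {"innovation management"},
    "Financing Energy Projects (MOOC)": {"project finance", "innovation management"},
}

def learning_track(goal, learner_skills):
    """Greedily pick courses until the goal's required skills are covered."""
    missing = role_skills[goal] - learner_skills
    track = []
    while missing:
        # choose the course covering the most still-missing skills
        best = max(course_catalogue, key=lambda c: len(course_catalogue[c] & missing))
        covered = course_catalogue[best] & missing
        if not covered:
            break  # remaining skills cannot be covered by this catalogue
        track.append(best)
        missing -= covered
    return track, missing

track, uncovered = learning_track("Director of Innovation, offshore wind",
                                  {"offshore wind tech"})
print(track, "still uncovered:", uncovered)
```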

Challenges
Access to courses: in order to get our engine to come up with the best, most tailored courses, we need access to industry academies as well as university courses.
Learning-to-learn capacities: once we signpost learners to a cluster of courses, they need to take them (the familiar 'leading the horse to water' comes to mind). But even if the learners are taking the courses, they still need the learning-to-learn capacities to actually benefit from them.
Granularity for course clustering: clustering courses to keep on top of your field of expertise is one thing, but what is the granularity of those courses? Micro-learning is an option, and modular learning will become a clear necessity, as all learners have different existing knowledge, which means they all need different parts in order to upskill what they already know (a minimal clustering sketch follows after this list).
Ensuring pedagogical continuity: even the OU finds that a challenge. Great, so let's cluster modules. But then, how can we link these modules together? Do we believe in non-pedagogical support (e.g. Sugata Mitra's hole-in-the-wall experiment already dates back more than 10 years), or do we need to find a solution that provides pedagogical continuity fitting this new assembly of short modules and courses coming from different sources (both university and industry)?
Certification across the learning ecologies: to blockchain or not to blockchain. Once we start learning across institutes, we need to keep track of what we learn, by keeping tabs on the actual learning: corporate academy learning, university modules, hands-on training, workplace learning... One solution is to embed blockchain in education to keep track of all learning. But this is easier said than done, and open standards and trust might be issues to consider (the Bit of Trust initiative offers good reading; a toy hash-chain sketch follows below).
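On the granularity challenge above, here is a minimal sketch of one standard approach: represent module descriptions as TF-IDF vectors and cluster them with k-means. The module texts are invented, and the actual project may well use a very different technique.

```python
# Hypothetical sketch: clustering short course/module descriptions by topic.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

modules = [  # invented toy descriptions
    "introduction to offshore wind turbine maintenance",
    "advanced offshore wind farm operations",
    "basics of photovoltaic cell design",
    "solar panel installation and grid connection",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(modules)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, module in sorted(zip(labels, modules)):
    print(label, module)  # modules sharing a label form one candidate cluster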
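And on the certification challenge, a toy hash-chain sketch of the core idea behind blockchain-style record keeping: each achievement record commits to the previous one, so tampering with any earlier record is detectable. A real system would add distributed consensus, identity management and open standards (cf. the Bit of Trust initiative); this is only the minimal intuition.

```python
# Hypothetical sketch: a hash-chained record of learning achievements.

import hashlib, json, time

def add_record(chain, learner, achievement, issuer):
    """Append an achievement record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"learner": learner, "achievement": achievement,
              "issuer": issuer, "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    """Any tampering with an earlier record breaks every later hash."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        if i and rec["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
add_record(ledger, "learner-42", "Offshore Wind Fundamentals", "university X")
add_record(ledger, "learner-42", "Innovation Bootcamp", "corporate academy Y")
print(verify(ledger))  # True; flip any field above and this becomes False
```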

Feel free to send questions, comments, share your own projects... let's get together.

Tuesday, 17 September 2019

#ECTEL2019 Workshop #AI in #Education #liveblogpost #AIED @cova_rodrigo @paco

This is a live blog, so bits and pieces noted.

Paco Iniesto (The Open University, IET, AIED) is the workshop lead, and he is looking good and giving a strong overview.
AI is all around us: cars, games, robotics, AlphaGo (see the Netflix documentary), predictive policing, dating apps, thispersondoesnotexist.com (the 3-minute video on how these images are generated is of interest), ...

What is AI?
It isn't easy to define AI: many people have an idea of it, but there is no single agreed definition.
Computer systems designed to interact with the world ... (Luckin, Waynes...)

The promise of AI is not yet realized, although it has been in development for 40 years.
It's big business.
AI shines a spotlight on existing educational practices.
AI rehashes what we have at this point in time.

Implications of AIED: algorithms and computation: what are the algorithms, what are their consequences, how do we control them... accuracy and validity of assessments, are we treating students as human beings?

Lumilo augmented reality glasses for teachers (https://hechingerreport.org/these-glasses-give-teachers-superpowers/); the video can be found here: https://kenholstein.myportfolio.com/the-lumilo-project . This got some negative critiques from teachers and learners.



Ethical questions
Connection between effect and psychological traits of learners: where can this lead? (cf. Cambridge Analytica).
What if we gather the data for 'good' purposes, but others use it for 'bad' ideas?
What about GDPR? Who owns the data? How does this affect funding if students opt out of the system and all their data is erased? Can we use blockchain to keep the data connected to the learners?
Where is the data stored, so that it can be erased, and how does this affect future employment?
Will the system be able to evaluate actual learning, and if so, what benefits will it bring to teaching and learning?
Does the support of learners end up limiting their self-directed, learning-to-learn capacities?
Starting from the technology and moving towards supporting the learning seems the opposite of how it should be done.
What is the educational progress when using these technologies?
What is the difference between monitoring and surveillance? (where is the boundary?)
Can learners hack the system to get more or less support?
Does the teacher have enough time to support learners with difficulties? And does their help actually benefit the learning?
What about consent forms for those who are not able to give consent?
Marginalized people are in need of technological support, but how do we support them in a secure way?

Sources:
Sheila project: https://sheilaproject.eu/
Weapons of Math Destruction (book by Cathy O'Neil)

The post-it notes with ideas from three different groups addressed some of the questions mentioned in the slide above.







Monday, 14 January 2019

EU report on the impact of AI on Learning, Teaching and Education #AI #education #EU #policy

The recently published report on the impact of artificial intelligence (AI) on learning, teaching and education gives a great outline of the realities of AI, the state of the art, and the challenges as well as opportunities for those of us with expertise in learning in general, or in learning theory specifically. The report is part of the JRC Science for Policy documents, and it is very well written by Ilkka Tuomi (who is renowned for his expertise in the Internet, data, AI and computer science). Ilkka recorded a brief overview of the report, which can be seen below. In the report-related video, he refers to current machine learning systems as datavors, he defines (and rightfully so) the term machine learning as an oxymoron, and he puts current AI in a very accessible parallel, namely Artificial Instinct (as current AI is mainly about behaviourist approaches and patterns).

A very interesting perspective is that Ilkka and the report stress the importance of having someone on board of any AI for learning/teaching/education effort who has expertise in learning and learning theory.

The policy challenges mentioned at the end of the report are:

  • A continuous dialogue on the appropriate and responsible uses of AI in education is therefore needed.
  • In the domain of educational policy, it is important for educators and policymakers to understand AI in the broader context of the future of learning. As AI will be used to automate productive processes, we may need to reinvent current educational institutions.
  • In general, the balance may thus shift from the instrumental role of education towards its more developmental role.
  • A general policy challenge, thus, is to increase among educators and policymakers awareness of AI technologies and their potential impact.
  • Learning sciences could have much to offer to research on AI, and such mutual interaction would enable better understanding about how to use AI for learning and in educational settings, as well as in other domains of application.
  • As there may be fundamental theoretical and practical limits in designing AI systems that can explain their behaviour and decisions, it is important to keep humans in the decision-making loop.
  • The ethics of AI is a generic challenge, but it has specific relevance for educational policies.
  • Human agency means that we can make choices about future acts, and thus become responsible for them.  AI can also limit the domain where humans can express their agency.
  • An important policy challenge is how such large datasets that are needed for the development and use of AI-based systems could be made more widely available.


This 47-page report covers the following topics:

1 Introduction
2 What is Artificial Intelligence?
   2.1 A three-level model of action for analysing AI and its impact
   2.2 Three types of AI
      2.2.1 Data-based neural AI
      2.2.2 Logic- and knowledge-based AI
   2.3 Recent and future developments in AI
      2.3.1 Models of learning in data-based AI
      2.3.2 Towards the future
   2.4 AI impact on skill and competence demand
      2.4.1 Skills in economic studies of AI impact
      2.4.2 Skill-biased and task-biased models of technology impact
      2.4.3 AI capabilities and task substitution in the three-level model
      2.4.4 Trends and transitions
      2.4.5 Neural AI as data-biased technological change
      2.4.6 Education as a creator of capability platforms
      2.4.7 Direct AI impact on advanced digital skills demand
3 Impact on learning, teaching, and education
   3.1 Current developments
      3.1.1 "No AI without UI"
   3.2 The impact of AI on learning
      3.2.1 Impact on cognitive development
   3.3 The impact of AI on teaching
      3.3.1 AI-generated student models and new pedagogical opportunities
      3.3.2 The need for future-oriented vision regarding AI
   3.4 Re-thinking the role of education in society
4 Policy challenges

Below is the 20-minute video in which Ilkka Tuomi explains the report in accessible terms.




Saturday, 8 December 2018

#AI #MachineLearning and #philosophy session #OEB18 @oebconference @OldPhilNewLearn


At OEB2018 the last session I led was on the subject of AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on the subject of philosophy and AI.

As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes follow below, followed by the slides that preceded the session, to indicate where the idea for the workshop came from.

Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy

The session started off with the choices embedded in any AI, e.g. a Tesla car about to run into people: will it run into a grandmother or into two kids? What is the 'best solution'… further into the session this question got additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters; and we could build into the car that, in case of emergency, it needs to decide that the lives of others are more important than the lives of those in the car, and as such simply crash into the wall, avoiding both grandmother and kids.

The developer or creator gives parameters to the AI; with machine learning embedded, the AI will start to learn from there, based on feedback from or directed at those parameters. This is in contrast with computer-based learning, where rules are given and are either successful or not, but form no basis for new rules to be implemented.

From a philosophical point of view, the impact of AI (including potential bias coming from the developers or from the feedback received) could be analysed using Hannah Arendt's 'power of the system'; in her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of the AI system.

The growth of an AI based on human algorithms does not necessarily mean that the AI will think like us. It might derive different conclusions, based on priority algorithms it chooses itself. As such, current paradigms may shift.

Throughout the ages, the focus of humankind changed depending on new developments, new thoughts, new insights into philosophy. But this means that if humans put parameters into AI, those parameters (which are seen as priority parameters) will also change over time. This means that we can see from where AI starts, but not where it is heading.

How many 'safety stops' are built into AI?
Can we put some kind of ‘weighing’ into the AI parameters, enabling the AI to fall back on more important or less important parameters when a risk needs to be considered?

Failure, for humans, can result in growth. AI also learns from 'failures', but it learns from differences in datapoints. At present the AI only receives a message 'this is wrong', whereas – if something is wrong – humans make a wide variety of risk considerations at that moment. In the bigger picture, one can see an analogy with Darwin's evolutionary theory, where time finds what works based on evolutionary diversity. But with AI the speed of adaptation increases immensely.

With mechanical AI it was easier to define which parameters were right or wrong. E.g. with Go or chess you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing among those options is a straight path of considerations. At present humans make many more considerations for every conundrum or action that occurs. This means that there is a whole array of considerations that can also involve emotions, preferences…. When looking at philosophy you can see that there is an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes gives good solutions for both: workable solutions which can be debated as valuable outcomes although based on different priorities, and even on very different takes on a concept. The choices or arguments made in philosophy (over time) also clearly point to the power of society, technology and the reigning culture at that point in time. For what is good now in one place can be considered wrong in another place, or at another point in time.

It could benefit teachers if they were supported by AI to signal students with problems (but of course this means that 'care' is one of the parameters important to a society; in another society it could simply be that students who have problems are set aside). Either choice is valid, but each builds on a different view: do we care by 'supporting all', or by 'supporting those who can, so we move forward quicker'? It is only human emotion that makes a difference in which choice might be the 'better' one.

AI works in the virtual world. Always. Humans make a difference between the real and the virtual world, but for the AI all is real (though virtual to us).
Asimov’s laws of robotics still apply.

Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.

Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes it from there by learning itself. The machine learns; does the machine then become liable if something goes wrong, or…? (e.g. the Tesla crash).

Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or health care worker provides it with additional care and attention to the patient).
Similarly, smart-object dialogue took off once a feeling of attention was built in: e.g. replies from Google Home or Alexa in the realm of "Thank you" when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also relates to the difference between feeling 'related to' something and being 'attached to' something.

Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.

AI is still defined between good and bad (ethics), and ‘improvement’ which is linked to the definition of what is ‘best’ at that time.

Societal decisions: what do we develop first with AI – tackling the refugee crisis or self-driving cars? This affects the parameters at the start. Compare it to some idiot savants, where high intelligence does not necessarily imply active consciousness.

Currently some humans are already bound by AI: e.g. astronauts, for whom the system calculates everything.

And to conclude: this session ranged from the believers in AI ("I cannot wait for AI to organise our society") to those who think it is time for the next step in evolution, in the words of Jane Bozarth: "Humans had their chance".




Thursday, 6 December 2018

Data driven #education session #OEB18 @oebconference #data @m_a_s_c

From the session on data driven education, with great EU links and projects.

Carlos Delgado Kloos: using analytics in education
Opportunities
The Khan Academy system is a proven one, with one of the best visualisations of how students are advancing, with a lot of stats and graphs. Carlos used this approach for their 'zero courses' (courses on basic knowledge that students must master before moving on in higher ed).
Based on the Khan Academy stats, they built a high-level analytics system.
Predictions in MOOCs (see Kloos's paper), focusing on drop-out.
Monitoring in SPOCs (small private online courses).
Measurement of the real workload of students; the tool adapts the workload to reality.
FlipApp (to gamify the flipped classroom): reminds and notifies students that they need to watch the videos before class, or they will not be able to follow. (Inge: sent to Barbara.)
Creation of educational material using Google Classroom. Google Classroom sometimes knows what the answer to a quiz will be, which can save time for the teacher.
Learning analytics to improve teacher content delivery.
Use of IRT (Item Response Theory) to see which quizzes are more useful and effective; interesting for selecting quizzes (a small IRT sketch follows after these notes).
Coursera defines skills, matches them to jobs and, based on that, recommends courses.
Industry 4.0 (big data, AI…) for industry can be transferred to Education 4.0 (learning analytics based on machine learning). (Education 3.0 is using the cloud, where both learners and teachers go.)
Machine learning infers the rules from data-analysed answers (in contrast with classical computing, which is just the opposite: given rules, it produces answers).
Dangers:
Correlations: correlations are not necessarily correct conclusions (see spurious correlations for fun links).
Bias: e.g. decisions for giving credit based on redlining and weblining.
Decisions for recruitment: e.g. Amazon found that the automation of their recruiting system resulted in a bias towards recruiting more men than women.
Decisions in trials: e.g. COMPAS is used by judges to estimate the risk of reoffending, but skin colour was a clear bias in this program.
The Chinese social credit system, which subtracts points if you do something that is seen as not being 'proper'; also combined with facial recognition and monitoring attention in class (Hangzhou Number 11 High School).
Monitoring (Gaggle, …)
Challenges
Luca challenge: responsible use of AI.
GDPR Art 22: automated individual decision-making, including profiling.
Sheilaproject.eu: identifying policies to adopt learning analytics. Bit.ly/sheilaMOOC is the course on the project.
Atoms and bits comparison: as with atoms, you can use bits for the better or for the worse (like atomic bombs).
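On the IRT note above: a minimal sketch of the two-parameter logistic (2PL) model, one common IRT variant, showing how item information can guide the selection of more informative quiz questions. The item parameters below are toy values, not anything from Carlos's system.

```python
# Hypothetical sketch: 2PL Item Response Theory for selecting informative quizzes.

import math

def p_correct(theta, a, b):
    """2PL model: probability a learner of ability theta answers correctly.
    a = discrimination, b = difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information: how much the item tells us at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1 - p)

items = {"Q1": (0.5, 0.0), "Q2": (1.8, 0.0), "Q3": (1.2, 2.0)}  # (a, b), toy values
theta = 0.0  # an average learner

best = max(items, key=lambda q: item_information(theta, *items[q]))
print("most informative item for this learner:", best)  # Q2
```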


Maren Scheffel on Getting the trust into trusted learning analytics @m_a_s_c
(Welten Institute of Open University, Netherlands)
Learning analytics: Siemens' (2011) definition is still the norm. But nowadays it is a lot about analytics and only a little about learning.

Trust: the belief that something or someone is reliable, true, or able. There are multiple definitions of trust; it is a multidimensional and multidisciplinary construct. Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding. For Luhmann, the concept of trust compensates for insufficient capabilities for fully understanding the complexity of the world (Luhmann, 1979, Trust and Power).
For these reasons we must be transparent, reliable, and act with integrity to earn the trust of learners. There should not be a black box; it should be a transparent box with algorithms (transparent indicators, open algorithms, full access to data, knowing who accesses your data).

Policies: see https://sheilaproject.eu   

User involvement and co-creation: see the competen-SEA project (http://competen-sea.eu), capacity-building projects for remote areas and sensitive learner groups. One of the outcomes was to use co-design to create MOOCs (and trust), getting all the stakeholders together in order to come to an end product. MOOCs for people, by people. Twitter: #competenSEA

Thursday, 27 September 2018

Machine learning benefits and risks by expert Stella Lee #AI #data #learning

Machine learning has moved from mere hype into a strong, acknowledged learning power (not only in the news, but also on the stock market of AI, e.g. the STOXX AI global indices – I was quite surprised to see this). Machine learning has the power to support personalized learning, as well as adaptive learning, which allows an instructional designer to engage learners in such a way that learning outcomes can be reached in more than one way (always a benefit!). Machine learning allows the content or information provided for training/learning to be delivered in a way that fits the learner, and that reacts to learner feedback (answers, speed of response, etc.). Tailoring a fixed set of learning objectives into flexible training demands some technological building blocks: data, algorithms that can interpret the data, access to some sort of connectivity (it might be ad hoc with wifi and an information hub, or via the cloud and the internet), and money to program, iterate and optimize the learning options continuously.
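As a toy illustration of such an adaptive loop (my own sketch, not Stella Lee's or any real product's logic), here is a difficulty adjuster that reacts to correctness and response speed; all thresholds and step sizes are invented:

```python
# Hypothetical sketch: adapting difficulty to learner feedback.

def next_difficulty(difficulty, correct, seconds_taken,
                    fast=10.0, slow=60.0, step=1):
    """Raise difficulty after quick correct answers, lower it after mistakes."""
    if correct and seconds_taken < fast:
        return difficulty + step          # confident and quick: step up
    if not correct or seconds_taken > slow:
        return max(1, difficulty - step)  # wrong or struggling: step down
    return difficulty                     # correct but slow: consolidate

level = 3
for correct, secs in [(True, 8), (True, 45), (False, 30)]:
    level = next_difficulty(level, correct, secs)
    print("next level:", level)  # 4, 4, 3
```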

This combination (data, interpretation, choices made by machines via algorithms) means that machine learning brings together so many learning tools, so much data and so much computing power that it inevitably raises weighty philosophical and ethical decisions: what is the real learning outcome we want to achieve, what interpretations do our algorithms make, and what is the difference between manipulation towards something people must learn and learning that still offers a critically grounded outcome for the learner?

Stella Lee offers a great overview of what it means to use machine learning (e.g. for personalized learning paths, for chatbots that deliver tech or coaching support, for performance enhancement). This talk is worth a look or listen. Stella Lee is one of those people who inspire me through their love for technology, by being thorough and thoughtful, and by turning complex learning issues into feasible learning opportunities you want to try out. She gave a talk at Google Cambridge on the subject of machine learning and AI and ... she inspired her tech-savvy audience.

In her talk she also goes deeper into the subject of 'explainable AI', which offers AI that can be interpreted easily by people (including relative laymen, which is what most learners are). Explainable AI is an alternative to the more common black box of AI (useful article), where the data interpretation is left to a select few. Stella Lee's solution for increasing explainability is granularity. This simple concept of granularity – considering which data or indicators to show, and which to keep behind the curtains – enables quicker interpretation of the data by the learner or other stakeholders. Of course this does not solve all transparency issues, but it opens a path towards interpretation and explainable AI. That way you show a willingness to enter into dialogue with the learners, and to consider their feedback on the machine learning processes. As always, engaging the learners is key for trust, advancement and clear interpretation (Stella says it far better than my brief statement here!).
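A minimal sketch of that granularity idea: out of many model indicators, surface only the few strongest to the learner and keep the rest behind the curtain. The indicator names and weights below are invented for illustration, not the output of any real model.

```python
# Hypothetical sketch: granularity as a step towards explainable AI.

indicator_scores = {  # invented indicator -> influence on predicted progress
    "quiz accuracy": 0.42, "forum activity": 0.05, "video completion": 0.31,
    "login regularity": 0.12, "assignment lateness": -0.27, "page dwell time": 0.03,
}

def learner_view(scores, k=3):
    """Keep only the k strongest indicators; the rest stay behind the curtain."""
    ranked = sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:k]

for name, weight in learner_view(indicator_scores):
    direction = "helps" if weight > 0 else "hurts"
    print(f"{name}: {direction} your predicted progress ({weight:+.2f})")
```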

Have a look at her talk on machine learning bias, risks and mitigation below (a 30-minute talk followed by a 15-minute Q&A), or take a quick look at the accompanying article here.

One of the main risks is of course some sort of censorship, or an interpretation made by the machine that results in unbalanced, sometimes discriminatory outcomes. In January I organised some thoughts on AI and education in another blog post here. And I also gave a talk on the benefits and risks of AI last year, where I argued for increased ethics in AI for education (slides here).

Machine learning is a complex type of learning: it involves a lot of data interpretation, algorithms to derive meaningful reactions from the data, and of course feedback loops to provide adaptive, personal learning tracks to a number of learners.
Situating it, I would call it costly, more useful for formal than informal learning (at this point in time), and somewhere between individual and social learning, as the data comes from the many but the adapted use is for the one. It does not leave much room for self-directed learning, unless this is built into the machine learning algorithms (first ask the learner for learning outcomes, then make choices based on data).

Tuesday, 27 March 2018

Redirect FB algorithms now and 4 lessons from #CambridgeAnalytica #digitalcitizens

Anyone interested in data and ethics has been reading a gazillion articles over the last week. So, time to recap the big results coming out of the Cambridge Analytica files: correlations have their scientific merits (argh!), humans can be profiled in just 12 likes (honestly, is this how diverse we all are?!), anything measured can be used against us (a Cobra effect), and teachers around the globe seem more ethical than scientists (my partner says it's true, I say it isn't). Well... manipulation is part of history, I guess... but still!

First of all, a nice MIT research project on "How to manipulate Facebook and Twitter instead of letting them manipulate you" (yes, it is a timely title 😊) mentioned in MIT's Technology Review. The project lets you – the user – manipulate the algorithms enforced on you by Twitter and Facebook (I like it, activism from within the system). This initiative is called GOBO (if you want to jump right in, you can log in to the project here) and it is a project from researchers at the MIT Media Lab's Center for Civic Media. It has an interesting parallel with the Cambridge Analytica approach, BUT in this case it is truly scientific, and they ensure deleting ANY and EVERY bit of data collected once they have results on how you would like to see algorithms adjusted. So take back the algorithms of Twitter and Facebook with GOBO.

I am just resurfacing after the Cambridge Analytica fraud (I call it fraud as they have been anything but ethical in their so-called scientific data gathering: no informed consent, data gathered and not anonymised before passing it to third parties, data not deleted after the project was finished…).

Correlations are used successfully? Argh!! For years, many educationalists and researchers have emphasized that correlation is no replacement for causality. Causality is the basis of all strong research. It is clear that education and correlation aren't a love story. We – as educators and researchers – know and understand the importance of context, of language use, of how personal each of our learning journeys is. In a sense, we should know better than to construct a test that puts everyone in the same batch, and then believe in it to state those things that we think sound nice (however tempting that type of action is... I mean, it saves time on reflecting, nuancing, evaluating... and all that time-consuming stuff)… but Cambridge Analytica got away with it. PISA was/is another such example. It even manages to enter the OECD report (https://www.oecd.org/education/) as a core element of proof leading to rigorous outcomes. The PISA test is a correlation-based test. A nice list of educationalists who argued against using PISA here. With the Cambridge Analytica files, the correlation monster pops up once again AND is now used 'successfully' to blindside people and to get them to doubt their political choices just enough to swing their vote. So, correlations can be used quite viciously some of the time.

Forget complex human traits: humans can be profiled in just 12 likes! And all of this comes from research (great paper on how it was set up here: Schwartz, Eichstaedt, Kern, Durzynski, Ramones, Agrawal, Shah, Kosinski, Stillwell, Seligman and Ungar (2013)). Well… how difficult it becomes to state (and believe) that humanity is truly diverse! Admittedly, the Big Five traits also distil human diversity into just 5 personality traits, but still… being profiled on 12 likes… How individual are we, if that is all it takes to cast each one of us in a box that can subsequently be manipulated from that moment onward? It becomes quite difficult to see humans as complex beings when I take that into account… but we are social, at least that is now proven once again.

Anything measured can be used against us. One of the most interesting blog posts I have read is an older one from Mike Taylor, stating that as soon as you try to measure how well people are doing, they will switch to optimising for whatever you're measuring, rather than putting their best efforts into actually doing good work, and this optimising is always at risk of being distorted, even corrupted (Mike refers to Goodhart's law, Campbell's law and the Cobra effect – great read).

And teachers around the world have more ethical sense than scientists who do not teach… well, it is a discussion: my partner says that fact is well known, I say scientists who do not teach can be ethical as well…. Those darn Cambridge Analytica (and derivative) people! A good example of this is Autumm Caines, who wrote on platform literacy, referring to her attempt to get all her data from Cambridge Analytica all the way back in February 2017 (which was a hassle!). Yes, she got active a year before this whole event blew up into an international scandal. Autumm keeps ethics high!

Tuesday, 30 January 2018

Top 10 open access papers from 2017 @nidl_dcu #research #onlinelearning #open

The National Institute for Digital Learning (NIDL) in Dublin, Ireland has listed what they see as the top 10 open access articles worth reading in 2017 here (with a small abstract for each). All the top reads are featured in open access journals with high quality criteria. The paper by Perkins and Lowenthal is of particular interest, as they ranked open access journals. I have the impression open access is (luckily!) still on the rise. Unfortunately, open access papers are rarely seen as a valuable career move for early-career to experienced researchers. The NIDL launched the top papers one by one through Twitter @NIDL_DCU.

Last year one of my co-authored papers, with Aras Bozkurt (who is also a top author in the 2017 list!) and Nilgün Ozdamar Keskin, entitled Research Trends in Massive Open Online Course (MOOC) Theses and Dissertations: Surfing the Tsunami Wave, was part of the top 10 reads for 2016 (which we only found out just now *blush*). For those wanting to read the full list of 2016 articles, feel free to find them listed here.

These are the 10 publications that NIDL considered for the 2017 list, although it needs to be stressed that there are many other journal articles worthy of consideration and further evaluation, depending on your specific interests:

No. 1: Blended Learning Citation Patterns And Publication Networks Across Seven Worldwide Regions

Authors: Kristian Spring & Charles Graham
Journal: Australasian Journal of Educational Technology

No. 2 Review and Content Analysis of International Review of Research in Open and Distance/Distributed Learning (2000–2015)

Authors: Olaf Zawacki-Richter, Uthman Alturki & Ahmed Aldraiweesh
Journal: International Review of Research in Open and Distributed Learning

No. 3 Trends and Patterns in Massive Open Online Courses: Review and Content Analysis of Research on MOOCs (2008-2015)

Authors: Aras Bozkurt, Ela Akgün-Özbek, & Olaf Zawacki-Richter
Journal: International Review of Research in Open and Distributed Learning

No. 4 Theories and Frameworks for Online Education: Seeking an Integrated Model

Author: Anthony G Picciano
Journal: Online Learning Journal 

No. 5 A Critical Review of the Use of Wenger’s Community of Practice (CoP) Theoretical Framework in Online and Blended Learning Research, 2000-2014

Authors: Sedef Uzuner Smith, Suzanne Hayes & Peter Shea
Journal: Online Learning Journal

No. 6 Refining Success and Dropout in Massive Open Online Courses Based on the Intention–behavior Gap

Authors: Maartje A. Henderikx, Karel Kreijns & Marco Kalz
Journal: Distance Education

No. 7 Special Report on the Role of Open Educational Resources in Supporting the Sustainable Development Goal 4: Quality Education Challenges and Opportunities

Author: Rory McGreal
Journal: International Review of Research in Open and Distributed Learning

No. 8 A National Study of Online Learning Leaders in US Higher Education

Author: Eric Fredericksen
Journal: Online Learning Journal

No. 9 Bot-teachers in Hybrid Massive Open Online Courses (MOOCs): A post-Humanist Experience

Authors: Aras Bozkurt, Whitney Kilgore & Matt Crosslin
Journal: Australasian Journal of Educational Technology

No. 10 Gamifying Education: What is Known, What is Believed and What Remains Uncertain: A Critical Review

Authors: Christo Dichev and Darina Dicheva
Journal: International Journal of Educational Technology in Higher Education

    Great reads, each one of them.