Showing posts with label assessment. Show all posts

Thursday, 19 March 2020

Sharing #oralAssignments and #OnlineExam #bestPractices to limit cheating

A request for expertise sharing on #online #exams and pro-active #covid19 planning until the end of this academic year, with #BestPractices for #OnlineExams shared below.

The first rumors indicate that our international HigherEd students will not be asked to return to their host universities to sit their exams at the end of the 2019-2020 academic year.

I am trying to find a solid online learning tool that can be used, but in the meantime I want to share best practices that are already in use at our own and other institutes. Feel free to add any ideas or measures I might have missed when listing our guidelines.

Best practices for organising online exams and online tests

All of us working with international students scattered around the globe, as they have rejoined their families in their countries of origin, will probably be facing online examination needs. With this in mind, I am listing best practices; in a second stage I will review #EdTech tools that might come in handy if you have multiple students linking up remotely for their exams (we are preparing for 382 students, which is a feasible number, yet one that demands a streamlined approach). I took my master's in education (M.Ed.) exams remotely myself (thank you @AthabascaUniversity), so I am sharing those best practices, with some additions, below.

Best practices using only camera and audio as technology:

Preparing the exam
  • Switch any written exam questions you might have to oral exam questions. These can include notes that need to be shared: ask contextualized questions, questions that show students understand the material and can apply it to new contexts (e.g. short oral essay questions).
  • Create original exam questions, i.e. questions that are not available in educational textbooks (otherwise tech-savvy students will find them in no time :D
  • Choose an online meeting tool that offers recording options (think legal discussions: you need to be able to show why you awarded the examination points you did), and a tool that allows for lengthy recordings at that (no one wants their exam to suddenly stop).
  • Choose a tool that enables sharing the screen (this might come in useful for short essays, designs, stats…).
  • Prepare an informed consent document and send it to the student, so they know their exam will be recorded and stored on the admin server space for X time. If possible, indicate the amount of time set aside for the exam.
  • Make a designated exam folder structured according to your administration's requirements.
  • Additionally: you might want to send out a 'code of conduct' to the students, so they know what is expected of them. This is where the penalties might be discussed: what is considered cheating, and what the penalty is for each type of cheating.

Once the exam starts
  • Inform the student that their online session will be recorded (GDPR), and check that the informed consent was signed and sent back to you. Start recording.
  • Indicate the overall guidelines of the exam: open book or closed book, time available, number of questions (if relevant). The student must be made aware of what they can expect. Ask whether they understood what you have just said.
  • Check identity: ask the examinee to show their passport, take a screenshot, and save that screenshot as part of the examination administration.
  • Ask them to show their desk and room, and to stay in view mid-torso, with hands and keyboard visible (you know why 😊
  • In case you choose a closed book examination: ask them to share their full screen (look at the tabs that are open!). Of course, there is a workaround if they are tech-savvy, which is why exam questions should preferably be open book: it allows students some freedom, yet they still need to really understand how they arrive at a solution.
  • Only offer one question at a time.
  • Feedback is important… but depending on the number of questions you prepared, you might want to choose a different feedback strategy. If you have different questions for each student, give feedback as you see fit. If you want to reuse questions, limited feedback is preferable: as we all know, students quickly inform each other about which exam questions they got, what the answers were, and what feedback they received. In that case, feedback is given at the end.
  • Stop recording and make sure the recording is in the right folder.

What can you not address when working with audio/video tools only? Disabling the right-click button (copy-and-paste options, with which students can quickly save questions). This is a reason to move to tailored questions per student, based on comprehension and creative thinking.

Single-function add-on tools
Use the Respondus LockDown Browser or a similar tool to restrict what students can open while answering, so they cannot look up answers: https://web.respondus.com/he/lockdownbrowser/ A review of more dedicated tools such as ProctorU, ProctorExam, … will follow.
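The "once the exam starts" steps are really an ordered administrative checklist. A minimal sketch of that idea, assuming nothing beyond the steps listed in this post; the `ExamSession` class and the step names are my own illustration, not part of any real proctoring tool:

```python
# Hypothetical sketch: the oral-exam session workflow as a checklist,
# so that no administrative step is skipped before questioning begins.

REQUIRED_STEPS = [
    "consent_form_received",   # signed informed consent returned beforehand
    "recording_started",       # session recorded, GDPR notice given
    "guidelines_explained",    # open/closed book, time, number of questions
    "identity_checked",        # passport shown, screenshot saved
    "environment_checked",     # desk, room, hands and keyboard in view
]

class ExamSession:
    def __init__(self, student: str):
        self.student = student
        self.completed: list[str] = []

    def complete(self, step: str) -> None:
        # Reject steps that are not part of the agreed protocol.
        if step not in REQUIRED_STEPS:
            raise ValueError(f"Unknown step: {step}")
        if step not in self.completed:
            self.completed.append(step)

    def ready_to_ask_questions(self) -> bool:
        # Questions are only offered (one at a time) once every step is done.
        return all(step in self.completed for step in REQUIRED_STEPS)
```

Even if you never automate this, writing the protocol down in this form makes it easy to hand the same checklist to every examiner.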

Monday, 8 October 2018

(free) book Assessment strategies for online learning #education #assessment #eLearning #instructionaldesign

Assessing online learning has many challenges, but in this new book written by experts Dianne Conrad and Jason Openo, a lot of solutions can be found. The book, entitled Assessment Strategies for Online Learning - Engagement and Authenticity, can be bought for 32.99 dollars here (if you have a budget, this is the way to go, as you support the authors and the initiative), or you can have a look at the free PDF here. This book is a must-read for those using assessment, as it not only covers traditional assessment but also dives into evaluations linked to open learning, journals, portfolios, etc. A great and interesting read.

If you want to check out what Dianne Conrad has in mind when talking about assessment, or if you have some questions, you can join the free online CIDER session on 10 October 2018.

When: Wednesday, October 10, 2018 - 11am to 12noon Mountain Time (Canada)

Where: Online through Adobe Connect at:
https://athabascau.adobeconnect.com/cider

Registration is not required; all are welcome. CIDER Sessions are recorded and archived for later viewing through the CIDER website. For more information on CIDER and our Sessions, please visit us at: http://cider.athabascau.ca
(from the book description):
For many learners, assessment conjures up visions of red pens scrawling percentages in the top right-hand corner of exams and feelings of stress, inadequacy, and failure. Although learners sometimes respond negatively to evaluation, assessments have provided educational institutions with important information about learning outcomes and the quality of education for many decades. But how accurate are these data and have they informed practice or been fully incorporated into the learning cycle? Conrad and Openo argue that the potential inherent in online learning environments to alter and improve assessment and evaluation has yet to be explored by educators and learners.
In their investigation of assessment methods and learning approaches, Conrad and Openo explore assessment that engages and authentically evaluates learning. They insist that online and distance learning environments afford educators new opportunities to embrace only the most effective face-to-face assessment methods and to realize the potential of engaged learning in the digital age. In this volume, practitioners will find not only an indispensable introduction to new forms of assessment but also a number of best practices as described by experienced educators.

1. The Big Picture: A Framework for Assessment in Online Learning

2. The Contribution of Adult Education Principles to Online Learning and Assessment

3. What Do You Believe? The Importance of Beliefs about Teaching and Learning in Online Assessment

4. Authenticity and Engagement: The Question of Quality in Assessment

5. Assessment Using E-Portfolios, Journals, Projects, and Group Work

6. The Age of “Open”: Alternative Assessments, Flexible Learning, Badges, and Accreditation

7. Planning an Assessment and Evaluation Strategy—Authentically

8. Flexible, Flipped, and Blended: Technology and New Possibilities in Learning and Assessment

9. A Few Words on Self-Assessment

10. Summing Up

Appendix • Other Voices: Reflections from the Field

This work is licensed under a Creative Commons License (CC BY-NC-ND 4.0). It may be reproduced for non-commercial purposes, provided that the original author is credited.

Assessing online learning is mostly part of formal education, but it can also be used to give formal status to self-directed learning that the learner wants to show to the public.

Monday, 22 May 2017

Rubrics as part of online MOOC peer reviews #mooc #elearning

An addition to the EdTech options that I am currently organising. A rubric is a grading tool used within a course (blended or online) that enables students and learners to understand what is expected of them when solving an assignment or reviewing assignments from their peers.

Where - within the learning process - can a rubric be used?

Typically, as a teacher you will first introduce a case or project (a generic example) that is exemplary for a specific process or project (for instance, designing an online course overview). Each concept of interest is highlighted in detail. After explaining that particular example case, an alternative is given to deepen understanding. Then the learners are asked to build a similar case, adapted to fit their own context, infrastructure or conditions. By asking them to build a contextualised case, you bring the content and the assignment closer to their own prior knowledge. To offer guidance, you provide a rubric that includes the concepts you described in the detailed example.

Brief orientation of the rubric: the rubric provided below can be used as an example rubric that can be adapted to align its conditions with the course topic. In this case the rubric is used to peer review an online course overview. So the assignment includes providing a course overview, covering content and accompanying assignments across several modules, with one module worked out completely in detail. The overview provides an idea of the overall structure of the course; the detailed module gives an idea of signposting, descriptions, and attained learning objectives.

What is the benefit of using a rubric?
A rubric has multiple purposes and can be used in different settings:
  • It can be provided to learners prior to submitting an assignment. That way they understand what the professor will be looking for and what the important criteria of the assignment are, and the rubric offers a structured overview of how to strengthen a project, proposal or assignment prior to submitting it.
  • A rubric can also be used as a grading or reviewing tool between peers (e.g. learners). A rubric offers a more objective way to review each other's work. In addition, reviewing each other's work results in a more in-depth understanding of what the project/proposal can be, and of how your own project can be enhanced by looking at how your peers solve or design it.
  • By using a rubric, learners also get practice in critically looking at other projects, while at the same time learning the challenges that come with writing a project based on specific criteria, which is useful for future project work or collaborations with partners.

Using a rubric triggers deeper reflection in the learner on a specific task, and it triggers additional actions concerning that task as the learner integrates the criteria into a project. This leads to higher-order thinking.

Example rubric
This example rubric is based on reviewing an online course project, but it can be adapted to any field using criteria that are relevant to that field and the requested project or proposal at hand.

The rubric below is made up of four grading levels; you can increase or decrease that number according to your own preference. In this case I chose an even number of grading levels, as this pushes the learner to make a non-neutral choice: the feedback is either bad or good, never neutral. You put the learner out of their comfort zone by not providing a neutral option, which a scale of three or five grades would allow.

In general, once you have a criterion, you will be able to describe a good-quality delivery of that criterion, as that is typically what a teacher/professor would hope to get from a learner. From there you work your way back towards what you would consider a poor-quality delivery of that particular criterion.


Grading criteria: Overall course structure
  • Poor quality: There is no coherent course structure.
  • Insufficient quality: An attempt is made to provide a course structure, but the course lacks descriptions and has no signposting to guide the learner through the course.
  • Sufficient quality: The course elements are structured, but not all course units are accompanied by descriptions and/or signposting, leaving the learner to figure those course units out for themselves.
  • Good quality: The course is well-structured, providing clear descriptions and signposting throughout, enabling self-directed learning.

Grading criteria: Online content in alignment with learning objectives
  • Poor quality: No learning objectives are given.
  • Insufficient quality: Learning objectives are given, but they seem disconnected from the content provided in the course, or they are not covered by the content of the course.
  • Sufficient quality: Learning objectives are given, but it is not always clear where the content relevant to these learning objectives can be found.
  • Good quality: The learning objectives in the course are all clearly reached by the end of the course. The alignment of the learning objectives with the course content can be traced by looking at the titles of the different course segments.

Grading criteria: Course content engagement
  • Poor quality: The content is boring, lengthy and uninspiring.
  • Insufficient quality: The course content consists of an amalgam of course elements that do not touch on any challenges, nor do they inspire the learner to integrate ideas from the content into their own context.
  • Sufficient quality: Parts of the content are engaging and inspiring. Some course units fail to mention challenges and solutions, but they do provide informative background material.
  • Good quality: The course is captivating and engaging. It provides concise and meaningful content related to the subject matter, highlighting challenges and solutions in each course unit.

Grading criteria: Complexity of the learning path
  • Poor quality: The course elements are provided chaotically, without enabling the learner to grow as they go through the different learning units.
  • Insufficient quality: An attempt is made to enable the learner to grow throughout the course, but too few stepping stones are provided between the course units. The learner is not given enough background to assimilate new knowledge, move on to the next course unit, and understand what is covered there.
  • Sufficient quality: The course consists of logical steps, moving the learner towards more understanding by providing new information that supports basic knowledge creation, but some units lack additional, advanced learning material.
  • Good quality: The course evolves from simple concepts to complex combinations of concepts. Within each course unit there is also a consistent increase in content complexity.

Grading criteria: Relevance and contextualisation of course assessments
  • Poor quality: Assignments are lacking.
  • Insufficient quality: Too little assessment is available to enable learners to self-evaluate their own learning.
  • Sufficient quality: The assessments provide ample opportunity to see whether the content is understood. However, no contextualizable assessments or assignments are provided: for example, no challenge- or need-based assignments.
  • Good quality: Course assignments can be contextualised given the learner's background or field expertise. Course assessments are varied and range from simple to complex. The course offers self-assessment options after each larger content segment covering a learning objective.

Grading criteria: Content support through media use
  • Poor quality: Only one type of media is offered as content throughout the course.
  • Insufficient quality: The course integrates two different types of media (video and text), but the visuals add nothing to the story that is told; it could just as well be offered in writing. The video is of very low quality, and you can hardly see what is recorded.
  • Good quality: The course uses a mix of media, in accordance with the affordances of each particular medium (e.g. a discussion paper to increase debate, a video of an actual engineering plant described in the course module).

Grading criteria: Critical viewpoints provided and stimulated
  • Poor quality: The content only shows the topic from one particular angle and is not critical.
  • Insufficient quality: The content is infrequently critically analysed by the content provider.
  • Sufficient quality: The content is enriched with critical arguments, covering both the challenges and the solutions.
  • Good quality: Challenges and solutions related to the content are addressed from multiple angles. The learners are engaged to find additional viewpoints or to add critical content.
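The rubric above is essentially structured data: a list of criteria crossed with four ordered quality levels. A minimal sketch of how it could be encoded for collecting peer reviews; the function names (`score_review`, `aggregate`) are my own illustration, not part of any particular platform:

```python
# Hypothetical sketch: the four-level peer-review rubric as a data structure.

LEVELS = ["poor", "insufficient", "sufficient", "good"]  # even count: no neutral midpoint

CRITERIA = [
    "Overall course structure",
    "Online content in alignment with learning objectives",
    "Course content engagement",
    "Complexity of the learning path",
    "Relevance and contextualisation of course assessments",
    "Content support through media use",
    "Critical viewpoints provided and stimulated",
]

def score_review(review: dict) -> float:
    """Convert one peer review (criterion -> level) to an average 0-3 score."""
    missing = [c for c in CRITERIA if c not in review]
    if missing:
        raise ValueError(f"Review incomplete, missing: {missing}")
    points = [LEVELS.index(review[c]) for c in CRITERIA]  # poor=0 … good=3
    return sum(points) / len(points)

def aggregate(reviews: list[dict]) -> float:
    """Average score over several peer reviews of the same course overview."""
    return sum(score_review(r) for r in reviews) / len(reviews)
```

Because the levels list has an even length, a reviewer can never land on a neutral midpoint, mirroring the non-neutral design choice discussed above.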

Wednesday, 26 April 2017

#Mobile #assessment based on self-determination theory of motivation #educon17

Talk given at Educon in Athens, Greece by Stavros Nikou: a really interesting mobile learning addition in the area of vocational learning and assessment. Mobile devices in assessment offer and support new learning pedagogies and new ways of assessing: collaborative and personalised assessments.

The framework aims to address motivation, following self-determination theory (http://selfdeterminationtheory.org/theory/): intrinsic and extrinsic motivation. Intrinsic motivation comes from within the person, and from an activity being enjoyable; extrinsic motivation is built upon reward or punishment. What they try to do is ignite more intrinsic motivation, as it leads to better understanding and better performance.

There are 3 elements in the theory: autonomy, competence and relatedness, all of which impact self-determination. This study tries to use these three elements to increase intrinsic motivation.
Mobile-based assessment motivational framework: the framework is still in a preliminary phase, but of interest. Autonomy: personalised and adaptive guidance, grouping questions into different difficulty levels (adaptive to the learner), location-specific and context-aware.
Competence: provide immediate emotional and cognitive feedback. Drive students to engage in authentic learning activities, with appropriate guidance to support learners.
Preliminary evaluation of the proposed framework: paper-based and mobile-based assessments were used before and after the intervention to test the framework. Using an experimental design, assessments were taken after each week of formal training, two assessments in total for both groups. ANCOVA was used for data analysis.

Results: significant differences in autonomy, competence and relatedness. The framework will be expanded with additional mobile learning features and will be used with different students; future research aims to enhance the framework.
The mobile assessment had a social media collaborative element in it, and it also made use of more feedback options thanks to the technical possibilities of the mLearning option.


Wednesday, 8 June 2016

#CALRG16 Lightning presentations – quick topics

An interesting start on using non-text assessments in online courses: creative non-text artefacts for assessment use, presented by Soraya Kouadri Mostefaoui and based on research that started in 2010. A set of 6 criteria. Content criteria: meeting the brief, factual accuracy & understanding. Presentation criteria: appropriateness of the components used, ordering of ideas, technical level, and narrative. Soraya is looking for people with similar interests or who have already implemented similar non-text assessments in their courses, so feel free to contact her (her LinkedIn profile here).

Chenxi Li reported on Chinese undergraduate students' online English language learning experiences and perspectives. This is a study of synchronous English language classes taught through audiographic conferencing tools in China. What are Chinese students' online language learning experiences of audiographic conferencing classrooms, and what do they think about them? What are the major problems for audiographic conferencing ELT classes in China? The study attempts to answer these questions through questionnaires and interviews with online English teachers and learners. An innovative data collection method, combining an online survey tool (Survey Star) with a popular social networking mobile app (WeChat), proved to be very effective. The quantitative findings were the main focus of this presentation: conferencing classes are big, so almost no students get individualised feedback; there are tech problems, yet people are still happy with the overall experience. Many students complain that teachers cannot deal with the tech problems (not always a correct assumption). 86% of students say they have interacted, but this does not match the actual interaction stats, so maybe their interpretation of interaction is limited to very small, basic exchanges: "hello". Some students find it hard to concentrate. There is a lack of online teacher training in these contexts.

Ralph Mercer talks energetically about online learning: an exploration of the last 20 inches, the last 20 inches referring to the last bit where students learn and interact with teachers/trainers... Can we build a system where learners self-assess and develop learning agency, as this will affect learning positively? What are the key attributes for starting to self-assess? Self-reporting good and bad learning days, in order to learn to build better learning days in general. He is looking for common threads using self-regulation and cognitive factors. If the student is more motivated, it will result in better learning. So: moving towards a Learning Wellness Framework. It is a personalised tool, where the students themselves build a sort of Fitbit for learning, for themselves, which they can afterwards compare to external feedback from other people: e.g. teachers' and trainers' expectations. Ralph's abstract: "My research will look at the physical and social spaces that surround online learners and explores how the attributes of those personal learning space influences online learning habits and effect learning goal achievement. From this research I intend to demonstrate that the adoption of a learning wellness framework could increase self-regulated learning habits and minimize the influence of personal learning spaces. Learning Wellness is described as the convergence of personal learning informatics and self-regulated learning combined with physical/emotional wellness principles to persuade (nudge) learners to develop self-agency and learning skills to succeed in the online learning environments."

Wednesday, 7 October 2015

Learning, #assessments should be future, context oriented thx @jaycross

Sitting in a train heading for The Open University on a rainy day in September (autumn in the Northern hemisphere). Writing a progress log, or Plog as Jay Cross would call it, and it took me ages as I struggled with the use and application of assessing new knowledge. In some ways this plog relates to discussions on the use of testing/assessing, which is a recurring discussion in education everywhere and in online learning in particular.

Self-assessment in the real learning realm
I suddenly wondered how intuitive assessing one's own knowledge really is. What do we do with new ideas? Do we just apply them? Do we test them in safe environments first before implementing them in real life, or do we Just Do It and test new stuff live as we live it? What would be my take on assessment and its place within real learning? In the end I realized that if my aim is to increase lifelong learning skills, my assessment methods should push me to point my vision towards my (wishful) future.

Read and grow
The new book on Real Learning by Jay Cross has kept parts of my brain busy over the last couple of weeks. It is essentially a book for those who want to change their lives towards new or adjusted goals through learning (professionally and/or personally). Jay Cross kindly asked me to furnish him with feedback on a beta version of his book, and of course I gladly accepted the task. At present I am going through the book at a slow pace (kid sick, working on my PhD, dreaming, getting some rogue research planned and described, being me). But despite my slow pace (or is it because of it?) the book stirs my mind on subjects I had found straightforward in some ways, and makes me wonder whether they are as straightforward as they should be.

The latest idea I got questions whether power learners (by which I mean those people who reinvent themselves every few years, or people who read the book and put its suggestions into practice) need to self-assess their newly acquired content, or whether that takes the learning out of its organically formed, natural ecosystem: the growing, living mind. Is assessment in any way a natural phenomenon?

To consciously self-assess versus applying intuitively
Traditionally, if you build a module for classic, blended or online learning, you are going to fit in some self-assessments. It is generally a routine action, built on two main premises:

  1. if someone goes through some learning, they want to see whether they really understood what they learned,
  2. those who build a course want some kind of measurement or grading tool to see whether the new content/action/practice has been understood (or has led to attaining a learning objective).

This prompted me to suggest that the Real Learning book might cover an overview of ways of self-assessing newly assimilated knowledge. So I posted that suggestion to the executive group forum of the book, to add my two cents' worth. As soon as I posted my remark, I could feel something was wrong with it. It felt like a dinosaur statement, and I could not figure out why at first.

Assessing knowledge seems impossible within its own confinement. As I was thinking about it, I got the idea that assessing knowledge is only useful when it takes that knowledge towards a new future. It does not reflect on what is learned, but on where one wants to take it. Informal learning, or adult authentic learning, has little use for the past; it needs to be useful in the present (for sure) and the future (possibly).

Tracing my own learning, I can honestly say I do not build external architectures or ecologies to test my own new knowledge. I just venture out into the real world and implement it. It might be an intuitive action, it might rely on previous knowledge; nevertheless, it is moving forward.

Learning as part of a personal trait
Additionally, I feel that assessing new knowledge is a very personal action. The personal constructs the learning. Learning is a very personal act: what we learn, how we learn, the reasons we learn… and how we then use what is learned. Research literature is littered with the adage that we connect new knowledge to the old knowledge we have acquired… but this is highly personal. The way we each think, our philosophical frame, our hopes and dreams make up our thinking.

Again a central question comes up: can we really uplift people universally through the process of learning? Does it benefit the world, or is it just a simple natural process that furnishes a sense of accomplishment? Can assessment take us to a more human level of thinking? A type of assessment that fits our own goals means assessment should have an element of future enabling in it… which means it must be made generic, enabling contextual solutions, and, no matter what, it will be implemented by the person, in her/his own vision.

I do not assess consciously?
While reflecting upon my own assessment iterations, I realized I do not consciously assess anything I learn. But on an unconscious level I do implement new stuff I learn.

But then again, I also do not stumble when integrating new knowledge into my own context. First of all, I shape the new knowledge that I am acquiring, pretty much through a combination of steps put forward by Jay Cross: reflecting; tracking the progress of my own work/interests by sharing it; discussing content by Working Out Loud within my own personal learning network; and of course the very personal yet inevitable characteristic of anyone wanting to grow: critical thinking as an organic barometer for learning.

Assessing should be future oriented
So suddenly, while racing ahead in the Virgin express train, I realized that from now on, the only assessments I will make or take are those that take me to the next level, because that is the only real-life situation. It also prepares for lifelong learning.
There is no use in assessing only what is seen; it is much more useful to see whether one can apply new knowledge in a personal, and as such contextualized, setting. Every course or training I took that demanded contextualized responses pushed me forward. In a way, I guess that what I will do from now on is use assessments to ask learners to build their own advance organizers for the next bit of content they want to assimilate and turn into authentically useful knowledge.

Sunday, 20 September 2015

#MOOC: no disruption, no real assessment and adults aren't dropouts! #grumpyWoman

Okay, this is a mail written on the aeroplane after returning from the EC-TEL conference. A wonderful conference (I will share the positive vibes and connections soon). But first I have to get some of the recurring assumptions uttered by some (yet too many) off my chest.

Briefly: there is NO disruption happening, as those building the disruption are themselves part of classic education; we are ADULTS, so please stop saying drop-outs – we are adults CHOOSING what we learn; and please STOP thinking in terms of classic assignments, as that is NOT the only way to assess knowledge – our personal learning networks can do that! And OH YEAH, that means assessment can take place contextualized and outside MOOC platforms.

Real disruption: not there yet: attract people from outside of classic education to look at the future of education
“How can we ensure that future professors will be able to teach with new technology?”: well, first of all, let us all learn to cope with change and new technologies. Digital literacy in terms of cultural awareness and critical analysis (quality, selection…), and let us attract people who were NOT educated through the existing classic system of primary school, high school, university… Interdisciplinarity might not be enough to really capture or draw up a roadmap for future education. Inevitably, the best scoring/performing people within a system are those who can replicate the system. As such, I often wonder how much disruption can take place if we build the so-called disruptive systems with those who came out of those systems… there won’t be too much change happening. Only marginal differences that are called ‘disruptive changes’ because it sounds nice at this moment in history.

MOOCs are disruptive? Do not make me laugh!
While discussions are multiple on the disruptive effect of MOOCs, I keep wonder how little it takes for the ‘disruption’ label to appear. At the end of the day, I still see/hear the majority of academics/professors talk about teaching, not learning. The MOOCs are in many cases simply a digitized form of earlier, existing content. So not much disruption there.
And one of the – for me questionable – outcomes of the MOOCification, are the multiple mentions of how now less teachers will be needed, less professorships, as more less well paid people can pick up certain aspects of MOOCs. So, if I understand correctly education is again cut due to (false) arguments. This also does not feel disruptive at all, in fact it feels very familiar. In this case I think the word ‘disruptive’ is only used to do more of the same (cuts) but using the word ‘disruptive’ as a false argument that simply sounds good and that people take for granted.
Real disruption would happen if a new model of education came up that made people citizens and rulers of their own lives. A new societal model that would (just imagine) lead to more satisfied lives, where basic needs (not surviving but living) can be secured for all and where learning is seen as a truly important, life-enabling and satisfying activity. For me, gaining knowledge and sharing it so we can all benefit is the way to go. Technology as support, living life as sense-making, following personal goals that are based on personal strengths and connected improvement for all (which does not necessarily imply a linear move towards improved living).

Drop-outs? Adult, autonomous thinking and choices you mean!
On drop-outs in MOOCs: okay, I am willing to accept that graduate students might need to follow a full MOOC (especially for those MOOCs embedded in the degree curriculum of a university, but even then… *sigh*), but most MOOC’ers are ADULTS. And adults (at least a good portion of them) can really think for themselves! So, please, can we drop the drop-out! Do any MOOC teachers/professors really still think that the way they provide content and assessment is the only way to grasp content? As adults, we can all choose what is of importance to us, and we do not need to be assessed in classic ways; we can figure it out by ourselves (at least some of us, not all, and voluntary choice is good for everyone, of course).

Assessment by feedback from personal learning network
If we cannot figure out, as adults, what we need and how to master it, it proves that we never learned to find it out by ourselves. Sometimes we need classic assessment, sometimes we need to explore the unknown, and sometimes we just roam the premises in order to learn serendipitously. If small children are able to master new content, we as adults surely can too. I am not pushing assessment aside; I just feel that some of us are able to assess what we need, and whether we learned it, ourselves, not necessarily through externally designed tests. Another option would be to always include contextual assignments, so that adults can embed new information in their professional/personal context and think it through. Of course the question comes up: “yes, but how will it be graded?”. Simple enough. If we really believe in a networked world, where people have their own personal learning network, you can rest assured that when we share drafts of these personally written assignments with our own network community, they will give ample feedback. And much more contextualized feedback, coming from real-life, authentic experience. It cannot be called a disruptive action if we still restrict peer feedback to the other people taking a course. That is too simple AND it assumes that those people are the only ones able to provide the right feedback. For me, the best feedback I get is from those in my personal learning network, not necessarily from other MOOC participants.

And I refrain from the tiresome discussion on using the term ‘teaching’ much more than ‘learning’. How many times must we agree on the importance of learning and learner autonomy, only to simply stick with teaching as the core concept? We learn by doing, we learn by learning.

Okay, should probably sleep. Grumpy woman here.

Sunday, 30 August 2015

#Quiz tool with variable grading for profiling


Shaf Cangil, an educational consultant (and Open University alumni: hooray!) with strong experience in Scrum, mailed me last week with a challenge: find a quiz tool (preferably free) that allows variable grading, so you can use those grades to visualize or distill a profile. Shaf wanted to set up a survey that immediately provides feedback to the respondent and tells him or her which type of Scrum person they are.
She mailed me based on a previous blog post in which I describe using Google Forms and Flubaroo to set up mobile quizzes. So I returned to that option, but couldn't get Forms + Flubaroo to assign different grades to different answers for the same question (admittedly I went through it quickly, so maybe there is a way – if you know it, feel free to share).

In a second attempt I looked around for other options, and this got me to the OnlineQuizCreator, which has an option to build profiles based on the answers one gives to multiple choice questions.
Small remark: you can try out the test options for free and without registering, but as soon as you have filled in the questions and built a test, the software asks you to register in order to get access to the full quiz you have built (I went for the free option: quizzes of up to 15 questions).

In order to build multiple choice questions and derive a profile from multiple categories:

  • Select the 'assessment' option.
  • Select the 'multiple categories' option.
  • Create the categories (in this case your profiles; I used 3 categories: learner, facilitator, course organiser – crude categories, but simple for the sake of testing the tool). You must fill in titles: example titles are given, but they do not count as 'real', so you have to change them in order to build your trial test. Also add a description to each category. You can choose to add a category URL (for instance if you want to provide some background information on that category), but you can also leave it blank.
  • After you have created the categories, create the questions: at least 3.
  • Then fill in the answers (and here it becomes exciting): you can link each answer to a category (profile) AND you can even attach a grade to it. For instance, an answer given by a course participant can award full points to the learner profile; but if you are a facilitator who also considers yourself a learner, an answer can award full points to the facilitator profile and a smaller grade to the learner profile. Nice in case the profiles overlap at times!
  • Once you have filled in all the questions, save your test.
  • If you have not registered by now, you must do so at this step, as this will allow you to really see your test in action.
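The category-plus-grade scoring described above can be sketched in a few lines of Python. This is my own minimal illustration of the idea, not the tool's actual data model: the category names and point values are invented for the example, and the respondent's profile is simply the category with the highest total.

```python
from collections import defaultdict

# Each chosen answer awards points to one or more categories (profiles);
# an answer may give full points to one profile and a smaller grade to an
# overlapping one, as in the facilitator/learner example above.
chosen_answers = [
    [("learner", 2)],
    [("facilitator", 2), ("learner", 1)],   # overlapping profiles
    [("course organiser", 2)],
    [("facilitator", 2)],
]

def profile(answers):
    """Sum the points per category and return the winning profile."""
    totals = defaultdict(int)
    for pairs in answers:
        for category, points in pairs:
            totals[category] += points
    # the profile shown to the respondent is the highest-scoring category
    return max(totals, key=totals.get), dict(totals)

best, totals = profile(chosen_answers)
print(best, totals)
# facilitator wins with 4 points, against learner (3) and course organiser (2)
```

The per-answer grading is what makes the profiling flexible: one answer can contribute to several profiles at once, instead of forcing each question into exactly one category.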

You do have an assessment dashboard, where you can change settings, colors, shuffle questions, etcetera.
And the results are shown to the respondent immediately (by default; you can change this). I chose the simple option, as that allows mobile use as well. A nice tool with options, and really easy to use.

There is also an option to embed your test:



Tuesday, 23 June 2015

Not open BJET issue on #MOOC disrupting teaching & learning in #HigherEd

The British Journal of Educational Technology just published a special issue: Massive Open Online Courses (MOOCs): ‘disrupting’ teaching and learning practices in higher education. The articles have been brought together by the wonderful academics Dick Ng'ambi and Vivienne Bozalek.

The papers are of interest; unfortunately you need to pay for them, or at least borrow them (thank you Stephen Downes for informing me; I got access through my OU account).

Here is the short list of article titles, covering a multitude of interesting MOOC angles: assessment, engagement, methods, improving classroom instruction... The full set of articles, with available HTML and PDFs, can be found here.

  1. Will MOOCs transform learning and teaching in higher education? Engagement and course retention in online learning provision (pages 455–471)
    Sara Isabella de Freitas, John Morgan and David Gibson
    Article first published online: 8 APR 2015 | DOI: 10.1111/bjet.12268
  2. Massive open online courses (MOOCs): Insights and challenges from a psychological perspective (pages 472–487)
    Melody M Terras and Judith Ramsay
    Article first published online: 8 APR 2015 | DOI: 10.1111/bjet.12274
  3. Methodological approaches in MOOC research: Retracing the myth of Proteus (pages 488–509)
    Juliana Elisa Raffaghelli, Stefania Cucchiara and Donatella Persico
    Article first published online: 25 MAY 2015 | DOI: 10.1111/bjet.12279
  4. What public media reveals about MOOCs: A systematic analysis of news reports (pages 510–527)
    Vitomir Kovanović, Srećko Joksimović, Dragan Gašević, George Siemens and Marek Hatala
    Article first published online: 6 APR 2015 | DOI: 10.1111/bjet.12277
  5. Survey of learning experiences and influence of learning style preferences on user intentions regarding MOOCs (pages 528–541)
    Ray I Chang, Yu Hsin Hung and Chun Fu Lin
    Article first published online: 1 APR 2015 | DOI: 10.1111/bjet.12275
  6. Experiential online development for educators: The example of the Carpe Diem MOOC (pages 542–556)
    Gilly Salmon, Janet Gregory, Kulari Lokuge Dona and Bella Ross
    Article first published online: 4 MAR 2015 | DOI: 10.1111/bjet.12256
  7. Who are with us: MOOC learners on a FutureLearn course (pages 557–569)
    Tharindu Rekha Liyanagunawardena, Karsten Øster Lundqvist and Shirley Ann Williams
    Article first published online: 3 MAR 2015 | DOI: 10.1111/bjet.12261
  8. Digging deeper into learners' experiences in MOOCs: Participation in social networks outside of MOOCs, notetaking and contexts surrounding content consumption (pages 570–587)
    George Veletsianos, Amy Collier and Emily Schneider
    Article first published online: 25 MAY 2015 | DOI: 10.1111/bjet.12297
  9. E-assessment: Institutional development strategies and the assessment life cycle (pages 588–596)
    Carmen Tomas, Michaela Borg and Jane McNeil
    Article first published online: 17 MAR 2014 | DOI: 10.1111/bjet.12153
  10. A tool for learning or a tool for cheating? The many-sided effects of a participatory student website in mass higher education (pages 597–607)
    Tereza Stöckelová and Tereza Virtová
    Article first published online: 26 MAR 2014 | DOI: 10.1111/bjet.12155
  11. Bridging the research-to-practice gap in education: A software-mediated approach for improving classroom instruction (pages 608–618)
    Mark E. Weston and Alan Bain
    Article first published online: 27 MAR 2014 | DOI: 10.1111/bjet.12157
  12. Pattern of accesses over time in an online asynchronous forum and academic achievements (pages 619–628)
    Luisa Canal, Patrizia Ghislandi and Rocco Micciolo
    Article first published online: 1 APR 2014 | DOI: 10.1111/bjet.12158
  13. Technological utopia, dystopia and ambivalence: Teaching with social media at a South African university (pages 629–648)
    Patient Rambe and Liezel Nel
    Article first published online: 4 APR 2014 | DOI: 10.1111/bjet.12159
  14. Assessment of children's digital courseware in light of developmentally appropriate courseware criteria (pages 649–663)
    Fathi Mahmoud Ihmeideh
    Article first published online: 21 APR 2014 | DOI: 10.1111/bjet.12163
  15. Educational games based on distributed and tangible user interfaces to stimulate cognitive abilities in children with ADHD (pages 664–678)
    Elena de la Guía, María D. Lozano and Víctor M. R. Penichet
    Article first published online: 27 APR 2014 | DOI: 10.1111/bjet.12165