Thursday 28 May 2009

#ela2009: pre-conference workshop feedback: evaluation kicks off when ethical issues are discussed


Yesterday I had the pleasure of collaborating with John Traxler and David Hollow during a workshop on Effective Evaluation. The workshop was part of the pre-conference workshops organized at eLearning Africa. We had only ten participants, but they were a very inspiring bunch. For those with limited time, just go to the last ‘ethics and quality’ part of the post.

For this workshop John suggested that we use the Delphi technique to get the most out of both the participants and all the available knowledge in the room. The technique demands a lot of dynamic (re)thinking within the groups.
Short scenario used:
  • Appoint a facilitator
  • Think about the question in private
  • List your five ideas in private
  • (Each group has 10 minutes for the above first steps)
  • Each person reads out and explains their five items.
  • Facilitator merges everyone's items into a single list.
  • Debate five most important - vote if necessary
  • Finally write them on flipchart
  • Report to big group
Stop after the planned three topic rounds (principles and priorities, models and tools, ethics and quality, in this case). Normally the Delphi technique uses iterations until a consensus is reached, but due to time constraints only a couple of iterations could be made to rethink the groups' thoughts on each topic. But still the strengths came out.

During each round the three of us (John, David and I) would listen to what happens and add some points when relevant, but keeping the groups working with their own ideas.

Each group started from an example scenario of an eLearning project and the groups were asked to start thinking about how they would evaluate the particular projects. To enable step-by-step growth into evaluation, the above three parts were drawn up to give the process some feasible steps to work on.
As the discussions went on, you could see the groups synergizing. All the individuals were motivating and inspiring each other throughout the process. Again it was easy to see that the group surpasses the individual.

Principles and priorities
The groups came up with different priorities, depending on which evaluation choices they considered most important, but in general the big evaluation principles and priorities were drawn up:
  • The aim of the evaluation (efficiency, technological aim, capacity building…)
  • The audience of the evaluation
  • The stakeholders that influence the project and possibly the desired outcomes
  • Competencies that could be evaluated
  • The time and budget constraints that determine what can be evaluated

Models and tools
This rolled out further into the ‘how’ of evaluation, leading up to the models and tools:
  • The philosophy of the evaluation
  • Internal or external evaluation
  • Formative or summative evaluation
  • The type of evaluation: correlative evaluation, participative evaluation…
  • The roles of both the audience and the stakeholders within the evaluation
  • And of course the budget

Ethics and quality
What was so nice to see was that the philosophy of the evaluation was already mentioned in the first parts of the evaluation process, leading up to the third part of the workshop, namely the ethics and quality part. This was in my opinion the most inspiring part of the workshop, as personal opinions, philosophies and ethics come forward to give direction to the overall evaluation process.

Although ethics and quality do define how we live our lives and the choices we make, they are not always consciously considered from the start of an evaluation. But the impact they have is enormous. What was mentioned?

  • Consent of the participants, guilty knowledge (= knowledge you as an evaluator can sometimes acquire from the audience, but that you wish you had not known), transparency of the process.
  • Go for confidentiality, not anonymity, because in this digital age anonymity can seldom be assured.
  • Do an audience or member feedback check (part of transparency): if you do extract data from specific audience members, cross-check whether they agree with what you have drawn up.
  • The design of the tool/evaluation should consider the (sub)cultural characteristics of the groups (all groups have their own set of rules about what is acceptable/ethically appropriate in terms of behavior etcetera).
  • User-centered evaluations can decrease the power imbalance between the evaluator and the evaluee.

There is an evaluation workshop coming up in Uppsala, Sweden in September 2009, focusing on ICT4D evaluation but aimed at PhD students.

In general it was agreed that an understandable and clear code of ethics should always be drawn up before starting an evaluation.
