On 8th December, Russell Smeaton (E-Learning Team), Liam O’Hare (School of Science & Engineering) and I went along to a Higher Education Academy seminar entitled “Technology Enhanced Assessment for Learning: Case Studies and Best Practice”, held at the University of Bradford. This event was part of the Evidence-based Practice Seminar Series 2010.

The seminar centred on two presentations, both of which discussed approaches to formative e-assessment at the University of Bradford, with the concept of electronically delivered feedback as ‘feedforward’ central to both.

Presentation 1

The first presentation, by Dr Liz Carpenter of the Department of Clinical Sciences, described the process of creating a formative electronic assessment that students had to sit under exam conditions in an on-campus IT lab. Following this exam, the assessment was released via the VLE as a revision tool, remaining available right up until the summative assessment, in order to observe how engagement with it throughout the course affected what was called student “progress”. “Progress” was defined as the summative mark minus the formative mark. The e-assessment contained automated feedback for each question. Liz rightly commented that embarking upon this kind of venture will not save you time when it comes to marking and feedback, at least not in the short term (although both presenters thought it would save time in the long term). It takes a big investment of time and effort to produce an electronic assessment complete with useful, constructive feedback.

Anyway, back to the outcomes of the project. It was interesting to note that the students who engaged most with the formative assessment (i.e. had the most ‘attempts’) tended to make the most “progress”. It was also noted that the students who viewed the e-assessment as part of their learning (as opposed to, say, a mock exam) tended to make the most “progress”. Liz was careful to point out that some students did not engage because the formative test simply did not fit with their preferred learning style. Also interesting was how the students accessed the assessment. Out of around 70-80 students, only 3 accessed it at any point using a mobile device. I had expected this figure to be higher, but on reflection this could be down to how well the test displayed on mobile devices, or perhaps, because it was ‘assessment’ related, students felt safer using a standard laptop or PC to access it.

As is usual in this kind of forum, the question of how to engage the ‘weaker’ students was raised. This resulted in much debate and some interesting ideas, including the use of anonymous league tables of scores for the revision assessment, something one attendee is using effectively. Apparently the students liked the competitive element of the league table and it improved engagement with the assessment. The general conclusion, though, in my opinion, was that there is no magic solution to the problem of engaging weaker students.

The discussion of this presentation raised some interesting points. For example, as teachers, should we also be developing the study skills of students? Should we be teaching them how to deal with feedback? Do we send the feedback and that’s it? Or do we have a responsibility to follow it through and ensure students are reflecting on and learning from the feedback we give?

Some people in attendance had some interesting ideas. One attendee now asks his students to write their own multiple-choice questions around a given topic. This involves the students researching the topic and identifying the correct answer choice but, of equal if not greater learning value, also coming up with some distractors (incorrect answer choices) and then justifying their choice of answer options to the rest of the group. The exercise gives students an understanding of exactly what is involved in setting these questions, but it also provides an interesting way for them to learn and a greater appreciation of exactly what constitutes good feedback. The lecturer who uses this method has been doing so for a couple of years now and commented that the questions the students produce are frequently good enough for him to include in formative and summative assessments for future cohorts. He also pointed out that the questions and feedback the students create sometimes help him identify common misunderstandings they may have.

Presentation 2

The second presentation, by Dr Darwin Liang of the Department of Engineering, described his attempts to improve student grades, appreciation of feedback, and engagement with the material through a twice-weekly electronic quiz that built up to form a small percentage of the summative mark (20%, with the intention of raising this to 40% in the future). His findings were similar to those of the first presentation: once again, it was the “better” students who engaged with the assessment. He commented that most students appeared more interested in marks than in learning. Student feedback indicated they would rather have spent the time in lectures, which they perceived as preparation for the main assessment. Darwin did agree that letting students design their own learning, whether that be writing questions or designing their own experiments, helped them learn and, in some cases, improved the engagement of weaker students.

Group Discussion Task

Following the presentations, we held group discussions in an attempt to identify what we thought were the challenges and issues facing us when it comes to formative electronic assessment. Our group decided that time, engaging students (and staff) and personalising the assessments and feedback were among the key issues. Unfortunately we ran out of time to hear what the other groups came up with, but a summary of the discussion was forwarded after the event. Key challenges identified by two or more groups included:

  • engagement, e.g. student engagement in the assessment and in the feedback, but also staff/institutional engagement with e-assessment as a tool
  • the time involved in/difficulty of producing quality e-assessment/feedback
  • how we replicate/reproduce the social learning aspect.

The day finished with a look at Bradford’s e-assessment suite: a computer lab of around 100 thin-client terminals and a single server running QuestionMark, set up specifically for conducting electronic assessments. All in all it was a useful day, with the discussions and points raised during and after the presentations proving the most useful and interesting. I’ll be keeping an eye out for future seminars in this series.

At the time of writing this blog post, the two presentations from the seminar are available at:

Technology Enhanced Assessment for Learning