
WORKSHOP: Self-assessment: strategies and software to stimulate learning

Higher Education Academy Workshop and Seminar Series 2012
Monday 11 June 2012 (1000-1600)
The Open University, Milton Keynes
Organised by: Sally Jordan (OU: s.e.jordan@open.ac.uk) and Tony Gardner-Medwin (UCL: ucgbarg@ucl.ac.uk)

PROGRAMME: Click here
RECORDINGS and PRESENTATION FILES: Click here
DISCUSSION FORUM: Click here

RATIONALE

The effects of tests within a teaching/learning strategy can be both positive and negative. Later recall of test items is usually enhanced, but recall of untested material can suffer; test distractors may be recalled as if learned, and rote learning may develop without proper understanding. This meeting will address how most efficiently to achieve benefits with exercises that stimulate thought more than memory: developing connections between ideas, identifying misconceptions and encouraging evaluation of arguments. The aim is self-assessment that is integral to learning: not formative assessment to provide exam practice or shape teaching, nor summative assessment to measure success. It is perhaps better described as practice rather than assessment, by analogy with how we learn music or tennis: driven by challenge, exhilaration and the stimulus of failure rather than by a series of hurdles.

We shall highlight key questions about self-assessment and try to establish where evidence, ideas and common sense can inform best practice and new software. Though the initiative for the workshop stems partly from ideas developed at UCL, we have chosen the OU as venue because of its wide experience in this area, its central location, and its involvement with Moodle, one of the principal open-source tools for developing new assessment formats. Computer techniques are key to giving students extensive access to challenging exercises, and we will address how much the student or teacher should control this access, and which special features may (or may not) be worthwhile. Specific examples include student selection of questions, certainty-based marking, instant feedback, conditional or unconditional explanations, smart text evaluation, adaptive question structures, student comments on content, and collaborative working. We shall discuss experiments where students themselves create and evaluate exercises, the disciplines that could gain most from specific strategies, and how dialogue between teachers and developers may improve the tools available.
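As a concrete illustration of one of these features, below is a minimal sketch of certainty-based marking, assuming the mark scheme used in UCL's LAPT system (marks of 1, 2 or 3 for a correct answer at certainty levels 1, 2 or 3, and penalties of 0, -2 or -6 for a wrong one). The function and variable names are ours, for illustration only.

    # Minimal sketch of certainty-based marking (CBM). Students report a
    # certainty level (1 = low, 2 = mid, 3 = high) alongside each answer.
    # Mark scheme assumed here: the one used in UCL's LAPT system.
    CBM_MARKS = {
        (True, 1): 1, (True, 2): 2, (True, 3): 3,       # correct answers
        (False, 1): 0, (False, 2): -2, (False, 3): -6,  # wrong answers
    }

    def cbm_mark(correct: bool, certainty: int) -> int:
        """Return the CBM mark for one answer at a given certainty (1-3)."""
        if certainty not in (1, 2, 3):
            raise ValueError("certainty must be 1, 2 or 3")
        return CBM_MARKS[(correct, certainty)]

    # A confident wrong answer costs far more than a hesitant one, so the
    # mark-maximising strategy is to report genuine uncertainty, not bluff.
    answers = [(True, 3), (True, 2), (False, 1), (False, 3)]
    print(sum(cbm_mark(c, cert) for c, cert in answers))  # 3 + 2 + 0 - 6 = -1

The steep penalty for confident errors is what makes acknowledging uncertainty pay: a student who is genuinely unsure scores better by saying so than by guessing at high certainty.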

AIMS

We are focussing on a quite specific area of assessment that many students find particularly valuable in their study, and which we think deserves wider attention. We hope to attract up to 60 participants, mainly teachers but also developers, working in many disciplines, to discuss how self-assessment may best be used. We see this as an efficient means to improve the student learning experience (at low cost in staff time), and hope that by raising specific questions with experienced presenters we will stimulate new trials, experiments and developments, and send people away keen to try out what they have discussed.

SPEAKERS (see also PROGRAMME)

Nancy Curtin (Imperial)
Maria Fernandez-Toro (OU)
Tony Gardner-Medwin (UCL)
Gwyneth Hughes (IOE)
Tim Hunt (OU)
Sally Jordan (OU)
John Kleeman (Questionmark)
Tim Lowe (OU)
Jon Rosewell (OU)

USEFUL LINKS

Gwyneth Hughes: Ipsative Assessment
Tony Gardner-Medwin: Certainty-Based Marking, CBM in Moodle
Sally Jordan: e-assessment (f)or learning
PeerWise (Paul Denny, NZ: students writing questions; Edinburgh group)
Jon Rosewell: Opening up multiple-choice
Phil Butcher & OU colleagues: OpenLearn, LabSpace

POSTERS

Simon Bates, Judy Hardy and Ross Galloway, The University of Edinburgh: SGC4L: Student Generated Content for Learning
Martin Bush, London South Bank University: Formative and summative assessment for large classes: multiple-choice tests
Ramon Eixarch, Maths for More: Maths tools for assessment
Tony Gardner-Medwin, UCL: Analysis of Exams using Certainty-Based Marking
Sally Jordan, The Open University: Student engagement with e-assessment
Clare Wakeham, University of Oxford: Leading and Managing Health and Safety – an online course

A RANGE OF QUESTIONS ABOUT SELF-ASSESSMENT

Self-assessment as a stimulus to learning: how should it differ from formative and summative assessment?

Must assessments always encourage ‘learning (and teaching) to the test’?

Can self-assessment capitalise on the truism ‘assessment drives learning’ in a more constructive way?

Can e-practice and e-challenge be a stimulus rather than a tedious hurdle?

Why should tennis practice be fun, but learning practice a bore?

Why is tennis coaching enjoyable while marking is often perceived to be a chore?

Can exercises challenge & push boundaries, rather than simply grade & assess?

Should the learner or teacher be in charge of what self-assessments to do, and when?

Will the students who would benefit most opt voluntarily to use self-assessment?

Will students be too inclined to treat self-assessment Qs simply as rote-learning opportunities?

Is Certainty-Based Marking a worthwhile feature in self-assessment?

Should we reward students for identifying and acknowledging uncertainty?

Should we penalise misconceptions, and students' inappropriate ideas about the reliability of their own knowledge?

Is there more to knowledge than getting things right?

Is instant feedback important (or is a delay better)?

Should feedback & explanation be delivered while the student is still thinking – or in delayed review?

Is there a balance to be struck between crude immediate computer feedback and delayed but skilled (and costly) teacher feedback?

Is 'smart' computerised analysis of text answers worthwhile?

Do self-assessment exercises need to be different from exam-style tests?

Can we use past exam Qs to stimulate learning?

Are planned question sequences, and Qs conditional on earlier answers, worthwhile?

Is rigorous marking less important than in exams?

Are explanations of answers important as feedback?

Can explanations stimulate connections between different areas of wider knowledge?

Are particular disciplines (maths, science, language, law?) more suited for self-assessment?

Does self-assessment require or favour different software from that used for exams?

Is exam-style security unimportant?

Are open comment facilities relating to Qs and explanations important?

Are server interactions significantly slower than responses from a local computer?

Should marks (percentages) be expressed relative to the Qs a student has selected, rather than to an entire quiz?

Is Moodle providing what is needed for self-assessments?

Should students doing self-assessments be encouraged to work together?

Is collaborative discussion (face to face or e-discussion) a stimulus to learning?

Should students themselves write and edit exercises and explanations?

Do students need to learn to distinguish important (‘big’) questions from matters of detail?

Is there common ground between self-assessment and peer assessment?

Should individual students' marks on self-assessments be ignored by teachers?

Are students’ mistakes indicative of valuable learning experiences?

Should timing of self-assessments be under the control of students or teachers?

Are self-assessments most useful after study (‘revision’) or as a stimulus to continuing study?

Should we use self-assessments or formal tests as preparation before practicals, etc.?

Again, which is better for encouraging consolidation of what should be learned from such sessions?