Why is your institution (probably) not using confidence-based marking (CBM) in place of right-wrong marking for objective tests? Decades of research and a decade of large-scale implementation at UCL have shown it to be theoretically sound, pedagogically beneficial, popular with students and easy to implement with both on-line and optical mark reader technologies.
If the answer is ignorance, then you should look at our FDTL-funded dissemination website (www.ucl.ac.uk/lapt). Maybe the answer is inertia, or the imagined constraints of an institutional VLE. But if you think that CBM must somehow be subjective, arbitrary, irrelevant to the assessment of knowledge and understanding, discipline-specific, time-wasting, requiring new types of assessment material, or favouring particular personalities, then almost certainly you need to think or read more deeply about it. Within instructional material and formative or summative tests, CBM helps soften some of the very sensible regrets we all feel when forced to replace part of our paper-based assessments and small-group teaching with automated tests and material. If you worry that your students simply repeat what they have learned - whether in essays or computer tests - without understanding why it is true, then CBM can help you discriminate between well-justified knowledge, tentative hunches, lucky guesses, simple ignorance and seriously confident errors.
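The discrimination described above comes from an asymmetric mark table. As a minimal sketch (the specific numbers below follow the three-level scheme used in UCL's LAPT system, an assumption here since the abstract itself does not state them), a student rates each answer at confidence 1, 2 or 3; higher confidence earns more when right but costs more when wrong:

```python
# Sketch of a three-level confidence-based marking (CBM) scheme.
# Assumed mark table (UCL LAPT convention, not given in the abstract):
#   confidence 1: +1 if right,  0 if wrong
#   confidence 2: +2 if right, -2 if wrong
#   confidence 3: +3 if right, -6 if wrong
MARKS = {1: (1, 0), 2: (2, -2), 3: (3, -6)}

def cbm_mark(confidence: int, correct: bool) -> int:
    """Mark for one answer under the table above."""
    if confidence not in MARKS:
        raise ValueError("confidence must be 1, 2 or 3")
    right, wrong = MARKS[confidence]
    return right if correct else wrong

def best_confidence(p: float) -> int:
    """Confidence level maximising expected mark if the student believes
    the answer is right with probability p.
    Expected marks: level 1 -> p, level 2 -> 4p - 2, level 3 -> 9p - 6,
    so level 1 is best below p = 2/3 and level 3 only above p = 4/5:
    honest reporting of confidence is the optimal strategy."""
    return max(MARKS, key=lambda c: p * MARKS[c][0] + (1 - p) * MARKS[c][1])
```

The penalties are what separate a lucky guess (low confidence, correct, small reward) from justified knowledge (high confidence, correct) and from a seriously confident error (high confidence, wrong, heavy penalty), which right-wrong marking scores identically.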
The presentation will explain what CBM is all about, give you hands-on experience with questions based on the Highway Code, seek audience feedback on what you perceive as its potential strengths and weaknesses, and cover evidence bearing on many of the issues raised above. The take-home message is that you fail in your duty to your students if you treat lucky guesses as equivalent to knowledge, or serious misconceptions as no worse than acknowledged ignorance. Your assessments should be something in which you have confidence.