2024: How to ensure robust assessment in the light of Generative AI developments


Have you been wondering how robust your assessments are against AI? This session will report on large-scale research carried out by The Open University, funded by NCFE, between March and July 2024. The research aimed to identify the assessment types most and least robust to being answered by Generative AI (GAI), to enable comparison across subject disciplines and levels, and to assess the effectiveness of a short training programme to upskill educators in recognising scripts containing AI-generated material. A mixed-methods approach, drawing on quantitative and qualitative data, examined the results of marking 944 answers (representing 59 questions across 17 different assessment types, 17 disciplines and 4 FHEQ levels).

The research team will share the results, including the performance of GAI across a range of different assessments and the impact of the training on markers. They will suggest how assessment can be made more robust in the light of GAI developments and recommend how higher education institutions might adopt AI-informed approaches to learning, teaching and assessment.


Sessions are hosted by Professor Geoffrey Crisp (retired DVC Academic, University of Canberra) and Dr Mathew Hillier (Australia).

Please note all sessions are recorded and made public after the event.

Tagged in assessment, Artificial Intelligence, higher education