Assessment design for the two AIs

As part of Academic Integrity Awareness Week, LEI and the Academic Integrity team presented a workshop sharing sector best practice, experiences from teaching and assessing in the age of ChatGPT, and ideas for the future of assessment design. Here are some of the key takeaways.

Banning Artificial Intelligence is not the answer!
In the short term, conversations around AI and assessment have focused on mitigating the misuse of AI in our assessments and detecting inappropriate use. Of course, it is important to ensure that our students meet learning outcomes and that we can assure their skills and knowledge. As TEQSA’s recently released Draft Guiding Principles for Assessment Reform suggest, our students will also need to develop their skills as confident, ethical users of AI, and across the sector educators are already experimenting with integrative approaches to AI in assessment.

Artificial Intelligence is highlighting what we already knew about assessment design
While AI has created concern about assessment, in many ways this is nothing new. Assessments that can easily be outsourced to AI were likely already vulnerable to plagiarism or contract cheating. Assessments that require iterative submissions, higher-order thinking and problem solving are more difficult to outsource than those that call for descriptive writing or generic information retrieval. AI has highlighted an opportunity to rethink what and how we assess.

No assessment design is completely cheat-proof, but we can raise the cost of cheating
The ‘cost’ of cheating refers to the time, effort and resources that a student must expend in order to cheat on an assessment. Designing an assessment that is difficult to outsource (to a person or to an AI) and, where appropriate, securing the assessment with technologies that control the assessment environment can deter academic misconduct of all kinds. Not every assessment needs to be secured to the same level: we can instead identify key assessment moments in our courses and programmes and deploy our resources to secure those assessments (Dawson, 2020).

Students are using AI in a range of different ways – we need to keep talking about what’s ok
From generating content and enhancing writing to collating, parsing and summarising information, and acting as a tutor for complex concepts, AI can assist students in a range of ways, and students are experimenting with what these tools can do. We need to have an open dialogue with our students about what is expected and what is acceptable in each course and discipline (our student Guidelines can help with this). This also means discussing the learning outcomes for your assessments, and why they are so important.

Utilising vivas
Another approach is to embed a viva into major assessment tasks. To help standardise the Q&A and secure its validity, the questions could be based on the rubric criteria for the overall assessment, with each criterion requiring that the student can confirm their understanding in the viva. To avoid increasing workload, markers would skim the written component to prepare for the Q&A and dedicate most of their marking time to the viva.

Securing the assessment environment – educational technologies
Cadmus is a useful tool that makes the drafting process visible. Students write their assignments inside the LMS, and Cadmus highlights where blocks of text have been pasted, tracks the words typed, and indicates the time taken to complete an assignment. This video demonstrates the potential of this tool.

Another tool that can secure a face-to-face assessment environment is Respondus LockDown Browser. When set up in advance, it prevents students from accessing external websites during a workshop activity, such as completing a multiple-choice or short-answer quiz in MyUni or on paper.

Contact LEI for support with capability building in the use of educational technologies to enhance teaching, learning and assessment.

Integrating AI with integrity
How can educators guide students to use AI in their assessments while preserving academic integrity and safeguarding student learning? Currently, there is no ‘perfect’ solution. Furthermore, there are almost certainly ways to work with AI in assessment that we have not even conceived of yet. For now, here are some possible approaches to using AI with integrity that have been trialled at the University:

1. The integrative approach
Assignments are built around generative AI, requiring its use in some capacity. Students might be instructed to critique, edit or provide commentary on the output of an AI. Students could also be asked to use AI during the drafting process.

Academic integrity can be preserved because student use of AI is transparent, acknowledged and scaffolded. Students are evaluated on their engagement with the AI output and how effectively they are able to utilise, iterate upon or evaluate it.

Pros:

  • Strong at building certain student capabilities, such as critical evaluation of a source.
  • Inherently builds digital and AI literacies.
  • Focuses on learning processes rather than a student’s ‘perfect output’.
  • Anticipates the way students will likely use generative AI in their future working and personal lives.

Cons:

  • May be weaker at building other student capabilities, such as creativity, imagination and the formulation of original ideas.
  • Some argue that this approach reduces student agency and freedom of expression, as in this article by education lecturer Adrian J. Wallbank.

2. The reflective/declarative approach
Students are permitted to use generative AI in certain limited ways, but they must cite, declare and/or reflect upon their use of AI. Students may be required to cite the output of AI in their references. They could be instructed to write a short reflection describing how they used AI, or to provide the text of the prompts they entered. In this way, citing AI is akin to citing any other source: a book, a website, or even a lecturer.

Pros:

  • In addition to the advantages of the integrative approach, requiring students to reflect on their use of AI promotes metacognition and critical analysis.

Cons:

  • Integrity and transparency may be hard to enforce; there is a risk that students simply do not declare their use accurately.
  • Equity issues: not all students have equal skills in, or access to, AI. Scaffolding students’ AI use with formative activities in class is an essential step in mitigating equity issues.

3. A permissive approach?
Warning: this one is experimental! Law professor Stuart Hargreaves (2023) proposes a possible model in which students can use AI without limitation. However, the baseline standard becomes much higher and a strict grading curve is applied. Because unedited AI output sits at that baseline, students cannot pass by simply using AI to write their assignment; their own authentic work is required to rise above it. The rubric and learning outcomes must be carefully designed to ensure this.

Pros:

  • Using AI for cognitive offloading, or as an ‘Extended Mind’, could allow students to focus on higher-order cognitive skills.

Cons:

  • Students can present AI output as their own work.
  • Certain capabilities, such as writing skills or formulating an argument from scratch, may be underdeveloped.

To decide which approach to take when designing your own assessments, consider the pros and cons of each in relation to the specific learning outcomes you wish to build and assess. For example, if writing skills are not relevant to your learning outcomes, you might allow students to use AI to generate the content of their assignment under a declarative model. However, if writing good prose is a desirable skill, asking your students to provide written commentary on the output of AI could be more appropriate.

No matter the chosen approach, scaffolding the use of AI is crucial to ensure student equity. Ask yourself whether your students have the necessary skills and knowledge of AI to use it effectively. You should also consider whether students have the necessary subject knowledge, expertise, or skills to critique its output.

Article by Amy Milka, John Murphy, Tamika Glouftsis, Paul Moss
