Responsible AI Research Centre

A safe and responsible AI future for Australia

The Responsible AI Research (RAIR) Centre is a landmark collaboration of experts from Australia’s national science agency, CSIRO, and Adelaide University, in partnership with the South Australian Government.

The RAIR Centre advances research and collaboration to ensure AI is developed and deployed in ways that benefit society. In Australia, this work is underpinned by nationally recognised guardrails that support the safe and responsible use of AI across the economy. The Centre focuses on aligning AI technologies with societal values and expectations, addressing fairness, transparency and privacy, and designing systems that are trustworthy, ethical and inclusive from the outset.

What is responsible AI?

According to the International Organization for Standardization (ISO), responsible AI denotes international efforts to align AI with societal values and expectations, including addressing ethical concerns around bias, transparency, and privacy. Responsible AI seeks to ensure that AI is developed and deployed in the interests of everyone, regardless of gender, race, faith, demographic, location, or net worth.

Principles of responsible AI

Responsible AI is the practice of developing and using AI systems in a way that provides benefits to individuals, groups, and the wider society while minimising the risk of negative consequences. Given their increasing importance in our society and economy, AI systems must be trusted to behave and make decisions in a responsible manner.

Values and ethics must be hard-wired into the very design of AI from the beginning.

While there isn’t a fixed, universally agreed-upon set of principles for responsible AI, the Australian Department of Industry, Science and Resources has identified 10 guardrails to ‘create a foundation for safe and responsible AI use.’

Centre objectives

The RAIR Centre has been designed to place Australia at the global forefront of research into safe and responsible artificial intelligence.

We will focus on developing fundamental responsible AI research at the scale needed to have impact at both national and international levels, combining the Software Engineering for AI (SE4AI) strength of CSIRO’s Data61 with the machine learning research and translation strength of the Australian Institute for Machine Learning (AIML).

Scholarships

Available scholarships from the AIML, including those of the RAIR Centre, will be listed on our Engage with us page.

Explore our range of publications to learn more

Research themes and leads (Google Scholar profiles)

Theme 1: Tackling misinformation

Professor Simon Lucey

Dr Surya Nepal

Theme 2: Safe AI in the real world

Associate Professor Qi Wu

Dr Jiajun Liu

Theme 3: AI system evaluation for safety and diversity

Professor Dino Sejdinovic

Dr Qinghua Lu

Theme 4: Causal AI for a changing world

Professor Javen Qinfeng Shi

Cross-, multi-, and trans-disciplinary

TBA

The authors acknowledge financial support from the State Government of South Australia. The collaboration facilitated through this funding has significantly contributed towards the identified research breakthroughs.

Themes

The RAIR Centre will pursue core research questions that will enable AI's safe and responsible deployment. The Centre is focused on four distinct themes:

Theme 1: Tackling misinformation 

This theme focuses on ensuring we can trace the origin of AI-generated content and verify its authenticity. It aims to develop methods to detect fake or manipulated content created by AI. The goal is to maintain trust in digital information and combat misinformation.

Theme 2: Safe AI in the real world

This theme explores how AI can understand the physical environment and interact with it safely. It aims to create AI systems that can perform deep reasoning for complex navigation and manipulation tasks, and that offer natural interfaces through which human users can communicate with them. This works towards a vision of safe AI systems with enough awareness and reasoning capability to respond to their environment efficiently and safely, and to assist humans in real-world tasks.


Theme 3: AI system evaluation for safety and diversity

This theme focuses on developing AI systems and safeguards that can accurately assess AI systems’ knowledge limitations and reasoning flaws, and reliably express uncertainty when safety matters. It aims to create AI that can recognise when it lacks sufficient information or expertise impacting safety, prompting it to seek additional data or human input rather than providing potentially inaccurate responses. The goal is to make trustworthy AI systems that provide better support to decision-making in real-world applications, particularly in the Australian workplace context, by reducing risks, overconfidence and hallucination.


Theme 4: Causal AI for a changing world

This research seeks to develop AI that understands cause-and-effect relationships, beyond correlations, particularly in complex and dynamic environments. The aim is to create AI systems that can adapt to new situations, make more accurate predictions, and model the actual consequences of interventions to help shape future outcomes. Ultimately, the goal is to enable AI to reason about the world in a more human-like way, considering context, causation, and consequences.

External resources provided by partner organisations

National Artificial Intelligence Centre (NAIC)

Day of AI Australia

CSIRO

Video resources

Introduction to Responsible AI Engineering workshop

Contact us

For more information or to express your interest in AIML’s RAIR Centre, please contact us at raircentre@adelaide.edu.au.

Program Manager

Kate Klimeš, RAIR Program Manager

Email: raircentre@adelaide.edu.au