Responsible AI Research Centre
What is Responsible AI?
According to the International Organization for Standardization (ISO), responsible artificial intelligence (AI) denotes international efforts to align AI with societal values and expectations, including addressing ethical concerns around bias, transparency, and privacy. Responsible AI seeks to ensure that AI is developed and deployed in the interests of everyone, regardless of gender, race, faith, demographic, location, or net worth.
Principles of responsible AI
Responsible AI is the practice of developing and using AI systems in a way that provides benefits to individuals, groups, and the wider society while minimising the risk of negative consequences. Given their increasing importance in our society and economy, AI systems must be trusted to behave and make decisions in a responsible manner.
Values and ethics must be hard-wired into the very design of AI from the beginning.
While there isn’t a fixed, universally agreed-upon set of principles for Responsible AI, the Australian Department of Industry, Science and Resources has identified 10 guardrails to ‘create a foundation for safe and responsible AI use.’
Responsible AI Research Centre (RAIR)
The Responsible AI Research Centre (RAIR) will combine the expertise of the Australian Institute for Machine Learning (AIML) with CSIRO's Data61 to attract top research talent to South Australia and establish cutting-edge initiatives in responsible AI. Researchers will address responsible AI for both national and international impact.
RAIR will build on Australia's growing reputation as a world leader in responsible AI and AI safety research. The Centre is focussed on four distinct themes:
Theme 1: Tackling misinformation
This theme will explore how to develop methods that allow AI-generated content to be attributed to trusted data sources, helping to prevent misinformation and misuse.
Theme 2: Safe AI in the real world
This theme will explore the foundational scientific questions that underpin how AI interacts with the physical world, with links to areas such as robotics.
Theme 3: AI that knows what it doesn't know
This theme will explore new directions for developing AI systems that can accurately assess their own knowledge limitations and reliably express uncertainty, helping to reduce AI hallucinations.
Theme 4: AI that can explain its actions
This theme will develop AI that understands cause-and-effect relationships rather than mere correlations, particularly in complex and dynamic environments.
RAIR will also focus on global engagement and on expanding research investment within Australia, extending to the Asia-Pacific region and beyond.
Relevant AIML scholarships
University of Adelaide Research Scholarship and Supplementary Scholarship Opportunity - Ethics of Healthcare AI - This scholarship supports a full-time PhD student interested in pursuing research on the responsible and ethical evaluation of machine learning or artificial intelligence systems in healthcare settings.
More RAIR PhD scholarships are on the way; please check back soon for updates.
Related AIML news stories
- Responsible AI means keeping humans in the loop
- Responsible AI has power to improve every aspect of our lives: report
- What are the limiting factors for AI today?