Do we need social scientists and the general public to contribute to AI development?
By Anna Ma-Wyatt, Professor, School of Psychology, the University of Adelaide; Co-Director, CNRS IRL CROSSING.
This article is an extract from Artificial intelligence: your questions answered, a report published in partnership with the Australian Strategic Policy Institute (ASPI).
To most people, social science and computer science seem to sit at opposite ends of a spectrum. But social science—especially behavioural science—is really about understanding how humans interact with each other and with the world around them. There are several ways in which social scientists and the general public can contribute to AI development, and, if we get this right, I think the benefits will be substantial.
Let’s start by thinking about how social science might contribute to AI development. In behavioural science, we’ve traditionally used lab-based studies and very careful measurements of behaviour. Those approaches have allowed us to build theories about the mechanisms underlying all kinds of human behaviour, from memory to visual processing to language.
With advances in sensors and sensing, there are new and exciting ways to measure human behaviour. AI and machine-learning techniques are already commonly used to help process high-volume datasets, and that approach has been especially effective in behavioural neuroscience.
These advances also offer new opportunities to measure human performance outside the lab, as people interact with their environment in more naturalistic and more complex ways. Approaches like these produce enormous amounts of data, so AI can help us analyse it and work out new ways of understanding behaviour. They’ve already shown their power by allowing us to understand the patterns of movement of older people at home.
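To make that idea concrete, here is a minimal, illustrative sketch of the kind of analysis being described: using machine learning to find patterns in daily movement summaries from in-home sensors. The synthetic data, the feature names and the choice of k-means clustering are all assumptions made for this example, not the specific methods of the studies the article alludes to.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Synthetic stand-in for wearable/in-home sensor data: one row per day,
# with simple summary features of an older adult's movement at home.
# Columns: total active minutes, room transitions, mean gait speed (m/s),
# longest sedentary bout (minutes). These features are hypothetical.
rng = np.random.default_rng(seed=0)
active_days = rng.normal(loc=[220, 40, 0.9, 60],
                         scale=[30, 8, 0.1, 15], size=(60, 4))
low_days = rng.normal(loc=[90, 12, 0.6, 180],
                      scale=[25, 5, 0.1, 30], size=(30, 4))
features = np.vstack([active_days, low_days])

# Standardise features so each contributes comparably, then cluster
# days into behavioural patterns (two patterns assumed here).
scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

# Summarise each discovered pattern by its mean raw feature values.
for cluster in np.unique(labels):
    means = features[labels == cluster].mean(axis=0)
    print(f"pattern {cluster}: {means[0]:.0f} active min, "
          f"{means[1]:.0f} room transitions, gait {means[2]:.2f} m/s, "
          f"longest sedentary bout {means[3]:.0f} min")
```

In real studies the sensor streams would be far richer and the features clinically validated; the point of the sketch is simply that unsupervised learning can surface behavioural patterns in data far too voluminous to inspect by hand.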
As AI becomes more prevalent in everyday life, understanding how humans interact with automated systems is another exciting multidisciplinary area. For example, cutting-edge approaches to cybersecurity acknowledge that human behaviour can be a huge risk, so aspects of behavioural science are being integrated into those approaches. Teams of social scientists, political scientists, lawyers, data scientists and computer scientists are already working together on these problems.
Key to the success of this approach is good dialogue between behavioural and social scientists and the people developing AI. We need a respectful environment in which we can explore how to integrate our approaches to achieve greater explanatory and predictive power. If social science and AI can work together, they offer exciting opportunities to develop new ways of understanding humans and new forms of technology that people can use in different contexts.
The broader community has a truly important role to play, not only as users and consumers but also because we as a society need to understand the implications of AI. We need to make sure developers help the general public (who are users, too) understand not only the potential of AI but also the implications of using it for different purposes. There can be a lot of confusion around any new technology. To counter that, it helps for the public to be engaged in understanding the new opportunities, the strengths and limitations of AI, and how it can be applied. Educating users and communicating clearly with the public will be important skills for people working in AI development.
There are also significant questions about the ethical use of AI that would benefit from having the general public and social scientists, among others, as part of the discussion. AI will be so pervasive that our conversations must be inclusive and broad, across all levels of society, to ensure that we take an ethical approach that fits our culture. Public policy and law will, of course, have a very important role to play in those conversations.
We also need to make sure that there’s access to AI across all parts of society, but how do we ensure that? What do ‘access’ and ‘representation’ mean? What sociotechnical and infrastructure requirements are needed to make this possible? Social scientists (for example, anthropologists, psychologists and sociologists) will be invaluable in helping teams understand the social implications and in working with AI developers and policy- and lawmakers. For example, what sorts of cultural norms is it acceptable to build into AI? Who in society gets to make those decisions? There are already excellent groups working on those questions (such as the 3A Institute at the Australian National University), and that important work needs to be part of the conversation.