Preventing Rage Against The Machine
The COVID-19 crisis has severely disrupted the way companies organise and perform work, accelerating the rate at which organisations adopt new technologies to increase productivity and manage remote workers. Many companies are exploring the potential of artificial intelligence to transform the way their employees work.
AI can direct workers by restricting and recommending information or actions. For example, an AI could generate a script that a call centre employee must follow, or automatically recommend several responses to a client's email. Employers can use AI to evaluate how workers perform tasks and to assess their behavioural patterns. For example, AI can discover which employees are best suited to the different tasks a workforce needs to complete. Management can use AI to find opportunities for optimising workflows and to identify the most diligent employees. Organisations may also use AI to discipline workers. For example, AI can detect erratic and dangerous driving among taxi drivers, or flag safety violations such as entering restricted areas without appropriate safety attire.
The use of AI in the workplace raises many important considerations. One pivotal question is how AI adoption affects the occupational health and safety (OHS) of the workforce.
Occupational health and safety management (OHSM) schemes tend to be best suited to situations where there is a straightforward connection between the cause of a specific safety risk and its resolution. For example, to address the risk of a robot colliding with a human, the robot could be placed behind a fenced area. In general, OHSM schemes favour the regulation of physical safety-related risks. They are not well-suited to scenarios where the risk is ambiguous and its resolution complicated and multi-faceted, as is the case for psychological hazards such as stress.
The most common psychological hazard is stress, which is triggered when individuals feel unable to bridge the gap between their capabilities and the requirements or expectations placed on them. Under the guise of performance monitoring and people analytics, organisations can use AI tools to intensify the surveillance and tracking of employees through intrusive data collection (Moore, 2019). This increased control can amount to micro-management, causing stress and anxiety. The integration of monitoring tools into the physical space of a worker, such as a smart wristband, exacerbates these risks. Workers who feel that their every action is evaluated against performance targets may see no choice but to work faster and harder, leading to significant safety and health hazards (Moore, 2018).
Organisations may buy AI-based people analytics tools with the intent of expediting decisions about whom to hire and fire in both offices and factories. But if the AI tool's recommendations are not explainable, and if the data on which the recommendations are based is kept secret, workers will end up stressed and anxious that the decisions may not be fair (Moore, 2018).
There is therefore a clear and present danger that the thoughtless adoption of AI technologies affecting the workforce will have unintended harmful consequences. The risk is greatest when AI adoption is driven purely by the prospect of cost savings, a pressure that is heightened in a recession.
The NSW Centre for Work Health and Safety is aware of that risk and has commissioned an interdisciplinary research project, “Ethical deployment of AI in the workplace: preparing a protocol and scorecard for business”. The project is a collaboration between the SA Centre for Economic Studies and the Australian Institute for Machine Learning of the University of Adelaide, the Australian Industrial Transformation Institute at Flinders University, and the NSW Centre for Work Health and Safety. The purpose of the research is to develop practical tools that organisations can use to ensure that their AI adoption is thoughtful and mindful of its impact on the workforce.
The research team consists of Dr Andreas Cebulla, Dr Genevieve Knight, Dr Catherine Howell, Dr Michael O’Neil, Dr Zygmunt Szpak and several researchers from the NSW Centre for Work Health and Safety. The outcomes of the research will be posted on the NSW Centre for Work Health and Safety website.
Footnotes:
Moore, P. V. (2018). The Threat of Physical and Psychosocial Violence and Harassment in Digitalized Work. Geneva, Switzerland: International Labour Office.
By Dr Zygmunt Szpak, Senior Research Associate, Australian Institute for Machine Learning
Dr Szpak received his PhD degree in Computer Science from the University of Adelaide, Australia in 2013, and his MSc degree in Computer Science from the University of KwaZulu-Natal, South Africa in 2009. He is a senior research associate at the Australian Institute for Machine Learning, working at the interface of machine learning, image processing and challenging industry problems. He is also the founder of Insight Via Artificial Intelligence, a company that helps organisations develop policies, practices and decisions based on the systematic and practical use of artificial intelligence.
Story initially posted by Ms Brooke Lee.