About this research program

This program explores how we ensure public values guide technological development and innovation. Our work brings together insights from the social sciences, law, ethics and computer science to create new ways of governing technology.

We aim to encourage fair, trustworthy and inclusive approaches to the rules, institutions and decision-making processes around artificial intelligence (AI) and other emerging technologies – keeping society at the centre of technological development and use.  

A cornerstone of this work is the Technology × Society Forum, which creates an open space for public dialogue about technology, trust, and the role of regulation and governance in ensuring technology serves society. 

How we conduct our research

We combine a range of research methods to understand the risks and opportunities of AI:

- Qualitative research
- Policy analysis
- Legal studies
- Data-driven assessments
- Participatory design workshops

Current projects

We collaborate with a diverse range of stakeholders, including government agencies, industry leaders, academic institutions and non-profit organisations. These partnerships are crucial for co-creating regulatory frameworks and governance models that are both practical and innovative. 

The rise of neurosurveillance – the use of neurotechnology to monitor workers’ brains – presents a novel and under-examined challenge for Australian workplaces.

Neurotechnologies that track fatigue, attention, effort and stress are already being deployed in several Australian workplaces, including mining sites using ‘SmartCap’ helmets. 

Looking ahead, more advanced neurotechnologies that use electrical currents to alter brain activity are expected to be introduced into workplaces. 

However, workplace neurosurveillance is largely unregulated and raises serious ethical concerns, particularly regarding the validity of the technology and its implications for workers’ mental privacy, dignity and autonomy.   

Our project examines neurosurveillance through an interdisciplinary lens. It will assess its scientific credibility, analyse its legal and governance context, and explore its impact on the behaviours of both workers and management.

Our findings aim to inform workplace policy and practice by highlighting gaps between the technology’s capabilities, its real-world implications, and the legal and ethical frameworks that govern its use. 

This stream of research examines the structures and safeguards that businesses need to put around the use of AI in the workplace – from the board of directors to the factory, shop or office floor.

Researchers Helen Bird and Natania Locke investigate corporate governance practices for managing AI in organisations and the impact of AI on corporate legal liabilities such as directors’ duties. 

Explore more research programs

Contact the Social Innovation Research Institute

If your organisation would like to collaborate with us to solve a complex problem, or you simply want to contact our team, get in touch by calling +61 3 9214 8180 or emailing sii@swinburne.edu.au.

Contact us