"First, do no harm" - transdisciplinarity and responsible software engineering
*Transdisciplinary Research for Responsible Software Engineering*
Digital systems shape our safety, our autonomy, our relationships, and our access to opportunity. As software increasingly blends with the physical and social world, the boundaries between the technical and the human become blurred. This means software engineers must grapple with a fundamental ethical question: How do we build technology that maximises benefits while minimising the risk of harm?
Modern software systems are increasingly:
- Cyber‑physical (think IoT, smart homes)
- Socio‑technical (think social media, digital identity, automated decision-making)
- Deeply embedded in political, cultural, psychological, environmental, and health contexts
A login mechanism can influence someone’s sense of security. A recommendation algorithm can shape a public conversation. A poorly designed system can put a vulnerable person at risk. Technical design choices now ripple outward into society.
This is why responsible software engineering, ensuring that software maximises benefits while minimising harm, is no longer optional. It is a professional and societal imperative.
But even interdisciplinary collaboration isn't sufficient. Interdisciplinary work typically starts with research questions defined by researchers themselves. Responsible software engineering, by contrast, requires us to understand lived experience: how real people, in real contexts, encounter real risks.
This calls for transdisciplinary research, an approach in which:
- Researchers from multiple disciplines work with practitioners, stakeholders, and affected communities.
- The problem space is co‑defined.
- Diverse expertise is integrated to produce actionable, context‑specific solutions.
- Knowledge extends beyond academia to include practical, experiential know‑how.
In short: we cannot design responsible digital systems without involving the people who live with their consequences.
By bringing these groups together, the Centre aims to enable the development of "safe by design" software technologies and to improve the broader digital ecosystem for women and girls. This work not only identifies risks but also co-creates tools, frameworks, and processes that help developers build safer, pro-social technologies, from privacy-aware design guidelines to guardrails that limit the harmful outcomes that can result from integrating generative AI into software products.
Transdisciplinary research is not easy, and it runs against the grain of how research is usually organised:
- Funding mechanisms tend to prioritise single‑discipline research.
- PhD training often doesn’t prepare researchers to collaborate across domains.
- High‑profile publication venues rarely encourage transdisciplinary output.
- Teams need to bridge differences in terminology, methods, and expectations.
- Practitioners and communities need time and support to participate meaningfully.
Yet the benefits—impact, innovation, and inclusion—make it worth pursuing.
Working this way calls for five qualities:
- Competence – bring your expertise, but recognise its limits.
- Courage – face complex problems and ethical trade‑offs honestly.
- Curiosity – seek out the perspectives you don’t have.
- Openness – embrace new methods, models, and voices.
- Focus – anchor your efforts in real-world harm and specific problem contexts.
To “first, do no harm,” software engineers must look beyond the technical system and towards the human system it affects. And the only way to do that well is through transdisciplinary collaboration.
This post summarises the keynote talk I delivered at the Responsible Software Engineering Workshop, held as part of the 2025 ACM International Conference on the Foundations of Software Engineering (FSE 2025). It is an edited version of a summary produced using MS Copilot from the talk abstract and slides.
