Towards Safety by Design for Protecting Women Online

I was pleased to be invited to join a panel discussion on 'Safety by Design' at the Refuge Tech Safety Summit (https://refugetechsafetysummit.vfairs.com). The panel included colleagues with experience in improving online safety at various technology companies, including some of the major online social media platforms. The focus of the discussion was to explore some of the types of harm experienced by women and girls, together with the things being done to mitigate these harms. My contributions focussed on some of the barriers and challenges to achieving safety by design in online platforms, and on the work that we are embarking on at the OU's Centre for Protecting Women Online.

The complexity of mitigating online harms to women and girls is compounded by the heterogeneity and dynamicity of these harms, together with the challenges of navigating the diverse viewpoints from which harms are understood. These viewpoints include our understanding of online harms from the perspectives of human behaviour, legal and regulatory frameworks, policing, and technology. The work streams of the Centre for Protecting Women Online bring together experts from these different areas to build a shared understanding of online harms to women and girls and develop new approaches to addressing the challenges of heterogeneity and dynamicity.

Heterogeneity: this arises because online harms can take many forms. These include the different media used to perpetrate harm - text, images, audio and video - as well as the different social contexts and enablers that shape the form of harm. For example, a technology platform could enable harms like harassment by lacking features to block users or by allowing anonymous messaging.

Dynamicity: as well as being diverse, the landscape of online harms is continuously changing - enabled by rapid shifts in technology and societal norms. The most obvious technological shift over the past year has been the wider availability of generative artificial intelligence (GenAI) technologies, which enable perpetrators to produce realistic fake content that can be used to harass women online. Shifts in societal norms include the rise of social media use as an integral part of daily life, where not being on certain platforms could lead to exclusion from social groups.

Therefore, to address these challenges and get closer to achieving safety by design, we need to develop ways of building and maintaining a shared understanding of the landscape of online harms to women and girls, including the relationships between:

  • the types of harm, 
  • the contextual factors that relate to each type, 
  • the social and technological enablers of harm, and 
  • the mechanisms that could inhibit the harm.

Such an ontology of online harms against women and girls could help us evaluate existing technologies (and laws/regulations) in terms of their coverage of, and support for mitigating, different online harms, and enable us to better understand the design space for online safety. This can help us build better software and build software better - which are the focus areas of our work in Responsible Future Technologies / Responsible Software Engineering in the Centre for Protecting Women Online.
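To make the idea of such an ontology more concrete, here is a minimal sketch of how the relationships listed above might be modelled and queried for coverage gaps. All type names, fields, and example entries are hypothetical illustrations, not part of the Centre's actual taxonomy or tooling:

```python
from dataclasses import dataclass

# Hypothetical sketch: each harm type records the media involved, its
# contextual factors, the enablers that make it possible, and the
# mechanisms that could inhibit it (mirroring the four bullets above).
@dataclass
class HarmType:
    name: str
    media: list[str]               # e.g. text, image, audio, video
    contextual_factors: list[str]
    enablers: list[str]            # social and technological enablers
    inhibitors: list[str]          # mechanisms that could inhibit the harm

# Illustrative entries only, not a real classification of harms.
ontology = [
    HarmType(
        name="anonymous harassment",
        media=["text"],
        contextual_factors=["direct messaging"],
        enablers=["anonymous accounts", "no user blocking"],
        inhibitors=["block/mute features", "identity verification"],
    ),
    HarmType(
        name="synthetic intimate imagery",
        media=["image", "video"],
        contextual_factors=["public sharing"],
        enablers=["generative AI tools"],
        inhibitors=["synthetic media detection", "rapid takedown"],
    ),
]

def coverage_gaps(ontology, platform_features):
    """Return harm types for which a platform offers no known inhibitor."""
    return [h.name for h in ontology
            if not set(h.inhibitors) & set(platform_features)]

# A platform offering only blocking covers the first harm but not the second.
print(coverage_gaps(ontology, {"block/mute features"}))
# → ['synthetic intimate imagery']
```

Even a toy model like this shows how an explicit ontology lets us ask the evaluation question mechanically: for each known harm, does a given platform (or regulation) provide at least one inhibiting mechanism?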

Building software better: aims to help development teams actively consider the potential for their product to enable harm to women and girls, and to mitigate that potential. This includes developing new methods and tools that can be integrated with Design Thinking methods to help developers build safety by design into their products. Additionally, development teams will need support in adopting responsible approaches to integrating artificial intelligence technologies into their products.

Building better software: aims to investigate new features and capabilities for software products that make them safer for women and girls (and hopefully, by extension, for other users too). This includes features that not only help protect users from harm but also inform and educate them about avoiding causing harm to others. An important aspect of developing such features is the integration of artificial intelligence technologies for detecting and preventing online harm. Additionally, better software will help platform operators and authorities effectively investigate incidents where harm occurs.
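As a hedged illustration of the kind of feature this work stream might explore - one that both protects recipients and educates senders - consider a "pause before you send" check on outgoing messages. The function names are hypothetical, and the crude keyword score below merely stands in for the kind of trained harm-detection model a real system would use:

```python
# Placeholder word list standing in for a real harm-detection model.
FLAGGED_TERMS = {"ugly", "worthless"}

def harm_score(message: str) -> float:
    """Fraction of words matching the placeholder flag list."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)

def send_message(message: str, threshold: float = 0.2) -> str:
    """Hold high-scoring messages and prompt the sender to reconsider,
    rather than delivering them immediately."""
    if harm_score(message) >= threshold:
        return "held: sender asked to review before sending"
    return "delivered"

print(send_message("See you at the meeting tomorrow"))
# → delivered
print(send_message("You are ugly and worthless"))
# → held: sender asked to review before sending
```

The design choice here is that the intervention targets the sender, not just the recipient: instead of silently filtering, the feature prompts reflection, which is one way software can educate users about avoiding causing harm.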

Follow our LinkedIn page to stay abreast of the work of the Centre for Protecting Women Online.
