Technology plays a pivotal role in ensuring security within our society. However, using technologies to this end can have both positive and negative impacts on individuals and social groups. Being aware of these impacts allows us to guide our efforts to enhance positive impacts, mitigate negative ones, or balance the two. Recent decades have seen the emergence of many impact assessment methodologies that can be relied on to enhance our awareness of how technology may impact different groups. However, this landscape is a blurry one, and it is difficult to navigate for both the trained and the untrained eye. It can be hard to see how such methodologies differ, how to choose between them, and how they play out when applied to a specific security domain. With a recently released report, the TRANSCEND project seeks to address these challenges.
Impact assessments (IAs) can be defined as “a structured process for considering the implications, for people and their environment, of proposed actions while there is still an opportunity to modify (or even, if appropriate, abandon) the proposals” [1]. They emerged as a stand-alone methodological endeavour in the 1960s and 1970s, spreading to areas such as environmental protection, health, the legislative process – and, indeed, technological development, where IAs give effect to notions such as Responsible Research and Innovation [2].
Nowadays, one of the challenges for those wishing to conduct an IA with respect to developing or implementing a technology is sifting through the sheer number of potential impact assessment frameworks, developed by academics, public bodies and non-governmental organisations. The name of a framework is not sufficient to decide whether it’s the right one to use – for example, how is a developer of an AI system to decide whether they should rely on an Ethical Impact Assessment, a Human Rights Impact Assessment, a Societal Impact Assessment or an AI Impact Assessment?
In order to find where the meaningful differences lie, we’ve first deconstructed the IA methodologies into fourteen shared elements and characteristics [3].
With these elements in mind, we’ve proceeded to gather relevant IA frameworks that could fit activities related to developing and implementing security technologies. We’ve categorised them into Ethical Impact Assessments, Human Rights Impact Assessments (with the sub-types of Data Protection Impact Assessments and Privacy Impact Assessments), Societal Impact Assessments (with the sub-type of Socio-Economic Impact Assessments) and subject-specific impact assessments (such as those focused on AI or surveillance measures). We’ve found that four elements have the potential for meaningful distinction: the subject matter of the assessment, the key intended users, the normative basis (the interests that are to be protected by an IA, and the documents laying them out), and the source document (containing the IA framework). Our report lays out these factors for each of the 38 frameworks we’ve identified, providing a valuable navigation tool.
What emerged from this inquiry is that the landscape of IA methodologies in the studied context is rich, fragmented, and blurry. There are significant overlaps between the analysed frameworks, and the differences between them are not always easy to locate. There is diversity in terms of subject matter, encompassing products, services, research projects, policies, business activities/investments, and personal data processing operations. Many methodologies focused on new technologies, with an abundance of frameworks dedicated to AI and decision-making systems.
When it comes to the key intended users, the studied IA frameworks gave attention to: developers, deployers, public bodies procuring and/or using technologies, policymakers, researchers (both in general and as funding applicants), as well as data controllers, businesses, supervisory authorities, regulators, Law Enforcement Agencies and Civil Society Organisations. As for the normative reference bases, directly binding legislative instruments (e.g., the GDPR) and broader, principle-oriented human rights instruments (both general and specialised, e.g., the European Convention on Human Rights) were the most stable anchor points for many IA methodologies, though many others left this aspect vague or unaddressed – perhaps for the sake of flexibility [4].
Moreover, the study led us to define seven desirable characteristics of IA frameworks in this field, which we hope both their creators and users will take into account.
Impact assessments are always context-dependent, and in the TRANSCEND project we are particularly interested in four sub-domains of security research – Cybersecurity, Disaster-Resilient Societies, Fighting Crime and Terrorism, and Border Management. Each of them adds a new contextual dimension, and in order to provide guidance on how to prepare and conduct IAs in these sub-domains, we’ve considered how each of the fourteen elements of an IA plays out in each of them.
We’ve noticed tangible differences with respect to IA elements [5], such as:
- Subject Matter: Different technologies match the specific needs of each domain, while many activity types are shared, such as information sharing solutions or data processing operations.
- Key Users: The core categories are similar across IA methodologies – for example, industry, public bodies and NGOs. However, there is diversity within those categories, for example Computer Emergency Response Teams (CERTs), first responders, LEA units and border forces.
- Normative bases: There are many domain-specific instruments available.
- Stakeholders to engage: For example, crime victims for Fighting Crime and Terrorism, or border-crossers for Border Management.
- Data sources and standardisation: Each domain has sector-specific data sources and differing degrees of standardisation.
In order to enhance our search for state-of-the-art impact assessment methods with state-on-the-ground information, we’ve conducted two surveys (with local authorities and the security industry) containing questions on the impact assessment practices of these two stakeholder groups. The surveys yielded a valuable set of findings. They confirmed the following barriers to conducting (more) robust IA exercises: lack of human and financial resources, national/public security concerns, confidentiality agreements, and difficulties in locating and accessing information. A vast majority of local authorities indicated human rights as an important angle of consideration – this may indicate that public bodies tend to look for frameworks that refer normatively to human rights. Privacy and data protection IAs are likely to be conducted most often in this field, which makes it especially important not to overlook other legitimate interests, such as freedom of expression, freedom of movement or protection from discrimination. Finally, legal compliance remains the strongest motivation for conducting an IA – a point of importance for policymakers in this sector [6].
It is quite clear that establishing the state of the art with respect to impact assessment methods is vastly different from doing so with respect to, e.g., car engines, where clear effectiveness indicators can be extracted. Much depends on who the user of the IA is, what their needs are, the subject matter they are dealing with, and so on. With this in mind, we hope that our report takes a significant stride towards providing useful information for those seeking the most appropriate impact assessment method in the security technologies area, those creating the corresponding frameworks, and those requiring their use. Furthermore, adequate impact assessment processes may serve as platforms for meaningful civil society and individual engagement in the security technologies sector – a notion which lies at the core of the TRANSCEND project [7].
If you would like to find out more about this work, follow the link below to read the full report and please feel free to contact us at contact@transcend-project.eu.
Footnotes:
[1] IAIA (though IAs may be of use beyond the proposal stage).
[2] For further information, see chapter 3 of the report.
[3] For further information, see chapter 4 of the report.
[4] For further information, see chapter 5 of the report.
[5] For further information, see chapter 6 of the report.
[6] For further information, see chapter 7 of the report.
[7] See our key work towards this goal here.
Author: Dr Krzysztof Garstka, Trilateral Research
The author would like to thank Dr Beki Hooper for her insights and review of this blog post.