It's our last "Ethics in Tech" roundup for 2020! We're taking a break over December, but will be back in January to bring you the latest key insights, news articles and research papers in the world of tech and data ethics.

UK Government releases findings of its review into bias in algorithmic decision-making
Conducted by the Centre for Data Ethics and Innovation (CDEI), the independent Review into Bias in Algorithmic Decision-Making examines the use of algorithms that make decisions about individuals in the following four areas: recruitment, financial services, policing and local government.
“getting this right is essential not only for avoiding bad practice, but for giving the clarity that enables good innovation”
Consistently recommended across all four areas is the need for better guidance. We know that algorithms cannot completely undo bias; rather, organisations should be able to make informed decisions about how an algorithm may affect bias and what trade-offs it may present. The review has shown that, across all sectors, this guidance is currently lacking.
The report makes many recommendations that are specific to the four areas, with a core focus on policy and public sector bodies. Here are some recommendations we thought were interesting for tech companies:
- The review highlighted that algorithmic bias is not just a technical issue to be blamed on developers. Some algorithms are more complex than others, and organisations must understand the trade-offs when choosing which methods to deploy. However, the review found that there is “little guidance on how to choose the right methods”. As a government report, it mainly calls for better legal guidance and policies, but I would also read this as encouragement for developers and companies to have better conversations about the choice between algorithms and the trade-offs each one presents (a rough sketch of what such a comparison might look like follows this list). If those trade-offs are not considered, and no awareness is raised of the different techniques available, there can be no informed choice about which algorithm to deploy. Especially where algorithms can make life-changing decisions, the absence of informed choice is simply not good enough.

- Later on, the review argues that senior officials within organisations should, at the very least, be able to explain how their own algorithms work.
- As part of its call for better guidance, the review envisions that specialist, trained professional services could support this, similar to the way some organisations now provide guidance on data protection impact assessments under the GDPR.
- The Equality Act, which protects against discrimination, should apply to algorithmic decision-making as well. The review acknowledges this by explicitly stating that: “Government should issue guidance that clarifies the application of the Equality Act to algorithmic decision-making”.
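To make the trade-off point above concrete, here is a minimal, purely illustrative sketch (not taken from the review) of what comparing candidate models on more than a single headline number might look like. The synthetic data, the two model choices and the simple group-gap measure are all assumptions for demonstration; real organisations would pick measures appropriate to their own context.

```python
# Illustrative only: compares two hypothetical candidate models on accuracy
# AND a simple group-level disparity measure, so the trade-off is visible.
# The data is synthetic and the metric (difference in positive-decision rates
# between groups) is just one of many possible bias measures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)  # hypothetical protected attribute (0 or 1)
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), group])
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

def selection_rate_gap(preds, groups):
    """Difference in positive-decision rates between the two groups."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

for name, model in [("simple (logistic regression)", LogisticRegression()),
                    ("complex (gradient boosting)", GradientBoostingClassifier())]:
    preds = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, preds):.3f}, "
          f"group gap={selection_rate_gap(preds, g_te):.3f}")
```

The specific numbers matter less than the habit: a deployment decision can then be documented against both performance and its potential impact on different groups, rather than accuracy alone.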
Read the report: the full review is available on Gov.uk
Fancy a quick read? Read a rough overview of the review on the CDEI website
ODI responds to the National Data Strategy consultation with a focus on data ethics:
In other news, in its response to the UK National Data Strategy, which was open for consultation until earlier this month, the ODI urged the government to consider data ethics within its strategy. Among the ODI's recommendations were a call for accountability and transparency of algorithms used by public bodies, drawing inspiration from the algorithmic registers published by the local authorities in Amsterdam and Helsinki, and the need to ensure that the data being used is trustworthy. The full response can be found on the ODI website (the link goes to the ethics chapter).
“The National Data Strategy places important emphasis on the need to ensure the use of data is trusted. But to be trusted, data use must be trustworthy.” Read more here.

Designing Accessible VR Experiences:
Oculus Launch Pad has released a video that provides best practice advice on how to design accessible immersive experiences, broadening participation. The video covers topics such as virtual reality checks, designing for visual limitations, controller optimisation, alternatives to head-tracking, and accessible UI. View the video on YouTube.
AI can identify specific animals. Does this affect accountability?
A very different theme from usual, but a thought-provoking one. AI advancements have enabled cameras to identify specific animals, which has the potential to support farmers. But there are other implications. This article suggests that singling out individual animals could help avoid mass culls. For example, more than 500,000 mink were recently killed in the Netherlands to avoid the spread of Covid-19. What if AI could one day be used to detect and single out only the diseased animals? Similarly, whenever a great white shark attack occurs at a beach, it often triggers a free-for-all hunt that leads to hundreds of shark deaths, even though sharks are known to be incredibly unlikely to attack humans. What if AI could be used to single out the individual responsible? A look at a different side of ethics. Read more on The Next Web.

New framework seeks to combat AI cyber threats
A partnership between 12 industry and academic research groups, including Microsoft, IBM and PwC, has developed and released the Adversarial Machine Learning Threat Matrix on GitHub. The framework is designed to help developers and cybersecurity specialists anticipate and combat security risks to machine learning models. Rather than reinventing the wheel, it adapts the familiar ATT&CK format already widely used in cybersecurity practice.
So how does it work? The framework walks users through a speculative ‘hacker journey’, helping them consider where, and how, malicious action could take place. For example, step one is “Reconnaissance”, which covers the ways an attacker could familiarise themselves with a machine learning model: gathering information from GitHub, research papers, or even public blogs. Access the full framework on GitHub (via TNW).
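As a rough illustration of how an ATT&CK-style matrix can be walked through as a checklist, here is a small sketch representing tactics and techniques as data. Only the “Reconnaissance” entry reflects the examples mentioned above; the second tactic and the checklist wording are hypothetical placeholders, not taken from the framework itself.

```python
# Toy representation of an ATT&CK-style threat matrix as a review checklist.
# Only "Reconnaissance" mirrors the examples described above; the second
# tactic is an illustrative placeholder.
THREAT_MATRIX = {
    "Reconnaissance": [
        "Gather public information about the model from GitHub repositories",
        "Review published research papers describing the model or training data",
        "Mine public blog posts for deployment details",
    ],
    "Initial Access (illustrative)": [
        "Probe any publicly exposed prediction API",
    ],
}

def review_checklist(matrix):
    """Print each tactic with its techniques so a team can walk the 'hacker journey'."""
    for tactic, techniques in matrix.items():
        print(f"== {tactic} ==")
        for technique in techniques:
            print(f"  [ ] Have we assessed: {technique}?")

if __name__ == "__main__":
    review_checklist(THREAT_MATRIX)
```

The value of the format is exactly this kind of structured walk-through: teams can go tactic by tactic and record what they have, and have not, defended against.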
Licensing for responsible AI
One of the big issues with AI is that, when an algorithm is used in a different context than originally intended, it can produce flawed outcomes. This also raises questions around accountability: is the company that deployed the algorithm accountable, or the company that created it? This paper suggests incorporating an ethics statement into AI licensing agreements. AI creators can stipulate ethical conditions within licenses, which makes software deployers more accountable for considering ethics and gives creators the ability to enforce those conditions (provided they are specific enough to be enforceable). It also gives creators control over the scenarios in which their algorithms are deployed.
The paper only discusses theoretical use cases and doesn't show how this would work in practice. There will be many barriers to including ethics statements in license agreements, such as time and knowledge, and for many algorithms licensing will not be the right path: take open-source algorithms designed specifically so that others can build on the code. Furthermore, the creator would still need to show that the algorithm wasn't harmful in the first place, as licensing could otherwise shift the blame onto the licensee for flaws that were there from the start. This means the question of accountability isn't quite "answered", although the suggestion to include an ethics statement is food for thought. Read more on arXiv.
Disclaimer
This list is by no means exhaustive and, to keep this post within a reasonable length, we have left out some stories.
All the articles in this post have been gathered from a varied, but still limited, number of news sources, including arXiv (research papers), Next Reality, Road to VR, TechCrunch, TechTalks, The Next Web, VentureBeat, Virtual Reality Times, VRFocus, and Wired.