Blog post
Written: November 5, 2020

Ethics in Tech Roundup October 2020


Welcome to our monthly "Ethics in Tech" roundup series. In this series, we outline some of the key insights, news articles and research papers we came across in relation to tech and data ethics.

arXiv now allows researchers to submit code with papers

Researchers can now submit their code alongside their research papers on the preprint server arXiv. This is a welcome functionality, as it enables more transparency and scrutiny, and allows others to build on existing code and research.

According to the State of AI 2020 report (via VentureBeat), only 15% of research papers included code as of June 2020. Looking at the original source (Papers with Code), that figure has since been updated to 25%. The figure is based on the availability of code on GitHub “for every open access machine learning paper”. arXiv’s move to make it easier to include code alongside research papers is therefore a welcome feature. Read more on VentureBeat.


Twitter may let users choose how to crop image previews after bias issues

Twitter has announced it is considering giving users more control over how images in their tweets are cropped. This is a response to an issue with its automatic cropping algorithm, which has been reported to favour white faces over those of people of colour. Twitter’s proposed changes reiterate that, while automated decision-making can be incredibly beneficial, it is important to consider human autonomy and provide agency where possible.

“Yes, you read that right. A tech company is affirming that automating certain decisions may not, in fact, be the smart thing to do — tacitly acknowledging that removing human agency can generate harm.” Read more on TechCrunch.


Brands are using deepfakes in ad campaigns during Covid-19

Big brands are increasingly exploring the use of deepfake technology, either to bring a sense of connection to consumers or to keep business operations running during the pandemic.

Enabling operations: The Adweek article opens by explaining how deepfake technology helped State Farm produce an ad for the Michael Jordan docuseries when the pandemic had disrupted in-person filming.

Building connection: Another example is Spotify, which is using deepfakes to create virtual avatars of music celebrities and build a connection with fans:

“The goal … is to provide fans with an intimate experience at a time when artists and fans are missing the connection of live music,” Spotify VP and global executive creative director Alex Bodman said.

This article comes at a time when deepfake technologies often carry negative connotations, and shows how they can also be put to positive use in other contexts. Read more on Adweek.


Automated checks show bias in passport photo rejections

A study has shown that, because of faulty automated checks, people with darker skin are more likely to be told that the photos they submit online fail UK passport rules. For example, dark-skinned women were 8% more likely than light-skinned women to be told that their photos were invalid.

Documents released as part of a freedom of information request in 2019 had previously revealed the Home Office was aware of this problem, but decided "overall performance" was good enough to launch the online checker.

“If a system doesn't work for everyone, it doesn't work.”

[..] “The fact [the Home Office] knew there were problems is enough evidence of their responsibility.” Read more on the BBC.

Data poisoning: the malware of AI

Data poisoning (the exploitation of machine learning models by deliberately manipulating or mislabelling their training data) can present real security risks to machine learning technologies. This article explains what data poisoning is, how it presents itself, and why it can be such an issue.

“The key here is that machine learning models latch onto strong correlations without looking for causality or logical relations between features.

And this is a characteristic that can be weaponized against them.”

[..] “imagine a self-driving car that uses machine learning to detect road signs. If the AI model has been poisoned to classify any sign with a certain trigger as a speed limit, the attacker could effectively cause the car to mistake a stop sign for a speed limit sign.”

To poison data, attackers need access to the machine learning model’s training pipeline (unless, of course, they provided the data set in the first place). This reinforces the need to consider data security and the risks surrounding the use of algorithms to make life-changing decisions. Hacking risks have always existed, but new technologies let them grow in new ways. Read more on The Next Web.
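To make the trigger idea from the quote above concrete, here is a minimal, hypothetical sketch in Python. The array shapes, trigger size and function name are illustrative assumptions, not details from the article.

import numpy as np

# A minimal, hypothetical sketch of trigger-based data poisoning.
# x_train is assumed to be a uint8 image array of shape (N, H, W, C) and
# y_train an array of integer class labels; none of these names or values
# come from the article.

def poison_dataset(x_train, y_train, target_label, poison_fraction=0.05, seed=0):
    # Stamp a small white square (the "trigger") onto a fraction of the
    # training images and relabel them as the attacker's chosen class.
    rng = np.random.default_rng(seed)
    x_poisoned, y_poisoned = x_train.copy(), y_train.copy()

    n_poison = int(len(x_train) * poison_fraction)
    idx = rng.choice(len(x_train), size=n_poison, replace=False)

    for i in idx:
        x_poisoned[i, -4:, -4:, :] = 255   # 4x4 trigger patch in one corner
        y_poisoned[i] = target_label       # flip the label to the target class

    return x_poisoned, y_poisoned

A model trained on a set poisoned this way can behave normally on clean images while misclassifying any image that carries the trigger, which is part of what makes this kind of attack hard to spot.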


How videogames are saving those who served

This article from Wired explores the role of videogames in supporting veterans with PTSD, anxiety and other mental health challenges, especially during times of isolation. The article interviews various veterans about their challenges and the role that videogames have played. It notes that videogames can complement cognitive processing therapies by encouraging players to think, practically, about trauma and trigger-inducing scenarios.

While they are not a silver bullet, videogames can provide support and solace to those in need, and even aid with therapies. Read more on Wired.

Understanding Bias in Facial Recognition Technologies: An Explainer

The Alan Turing Institute has produced an explainer on understanding bias in facial recognition technologies.

The explainer takes a deep dive into ethical considerations surrounding the deployment, design, and justification of facial recognition technologies and asks whether and to what extent they should be permitted. The article delves into different facial detection and recognition techniques that are being deployed, investigating both the history of such technologies and current case studies. 

The explainer is a living document and the Alan Turing Institute welcomes any thoughts and additional feedback. Read more on the Alan Turing Institute.


How card games can educate computer science students on AI Ethics

In an earlier Ethics in Tech roundup we reported that the majority of computer science students are not yet taught AI ethics. This paper successfully trials the card game The Value Toolkit in the computer science classroom to raise awareness of how AI ethics can be incorporated into the design process.

The study presents The Value Toolkit card game as a tool to raise awareness of stakeholder values during the AI design process. The game is inspired by Value Sensitive Design: a design process that advocates the consideration of human values and of how technologies may affect them (this article on UXDesign provides a good background).

The cards seek to educate developers on the potential trade-offs between different algorithms and performance metrics on the one hand, and stakeholder values and impact on the other. The game encourages deliberation on algorithm deployment and uses three different types of cards:

  • Model Cards: different machine learning models
  • Persona Cards: different potential stakeholders and their values
  • Checklist Cards: other social and technical considerations to facilitate deliberation

“As Yang et al. [64] point out, there is a tendency among practitioners who are not formally trained in ML to overly prioritize accuracy over other system criteria in model building – ‘many perceived percentage accuracy as a sole measure of performance, thus problematic models proceeded to deployment.’” Read more on ArXiv.
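As a small, purely illustrative example of the point in that quote (the numbers below are invented and are not from the paper), percentage accuracy alone can look reassuring even when a model fails entirely on the minority class:

import numpy as np

# Invented numbers, purely for illustration: with a 95/5 class imbalance,
# a model that always predicts the majority class scores 95% accuracy
# while detecting none of the minority-class cases.
y_true = np.array([0] * 95 + [1] * 5)   # 95 negatives, 5 positives
y_pred = np.zeros_like(y_true)          # always predict the majority class

accuracy = (y_pred == y_true).mean()                 # 0.95
minority_recall = (y_pred[y_true == 1] == 1).mean()  # 0.0

print(f"accuracy: {accuracy:.2f}, minority-class recall: {minority_recall:.2f}")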

Disclaimer

This list is by no means exhaustive and, to keep this post within a reasonable length, we have left out some stories. 

All the articles in this post have been gathered from a varied but still limited set of news sources, including arXiv (research papers), Next Reality, Road to VR, TechCrunch, TechTalks, The Next Web, VentureBeat, Virtual Reality Times, VR Focus, and Wired.