Welcome to our monthly "Ethics in Tech" roundup series. In this series, we outline some of the key insights, news articles and research papers we came across in relation to tech and data ethics.

Artie launches new speech data set to detect and reduce bias
Artie, a US startup, has launched a new speech data set called the 'Artie Bias Corpus' which seeks to help automated speech technology become more inclusive.
The data set is designed to help measure bias in speech algorithms by comparing user characteristics, such as gender and accent, with model performance. This can help companies better understand whether their product could be biased. Artie’s script is open source, and an academic article outlining its ‘criteria’ was published earlier this year. The toolkit is considered a step in the right direction, but by no means a final solution, as it has only been tested on a limited number of applications. Read more on Artie (discovered via Venturebeat).
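To give a feel for the kind of check such a corpus enables, here is a minimal sketch (not Artie's open-source script; the record fields and sample data are hypothetical) that compares average word error rate across demographic groups:

```python
# Minimal sketch of a demographic bias check for speech-to-text output.
# NOT Artie's actual script: field names and sample data are illustrative only.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Each record pairs a ground-truth transcript, the model's output, and a
# speaker attribute such as self-reported gender or accent (hypothetical data).
results = [
    {"reference": "turn the lights off", "hypothesis": "turn the light of", "group": "accent_a"},
    {"reference": "play some music", "hypothesis": "play some music", "group": "accent_b"},
]

errors = defaultdict(list)
for r in results:
    errors[r["group"]].append(word_error_rate(r["reference"], r["hypothesis"]))

# A large gap in average WER between groups is the kind of signal that
# suggests the model performs worse for some speakers than others.
for group, rates in errors.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f} over {len(rates)} clips")
```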
New Zealand sets standards for government use of algorithms
New Zealand’s government has introduced a set of standards that should encourage public service officials to consider how their algorithms might drive decision-making. The standards include the following criteria:
- Give plain-English explanations of how decision-making is driven by algorithms
- Identify and manage biases that inform algorithms
- Consider Indigenous world views on data collection and consult with relevant groups
- Publicize information on how data is processed and stored
However, it’s important to note that there are no enforcement mechanisms in place to ensure the standards are met. Read more on The Guardian (spotted via Ethical Intelligence).
The Genderify Debacle
You might have seen it in the news: Genderify, a new AI startup, recently revealed its software on Product Hunt, claiming it offers a “unique solution” for determining a person’s gender based on their username, name, and email address. The aim? To provide businesses with more audience analytics.
Understandably, this soon sparked outrage on social media, and it didn't take long for bias to be detected: inputs containing words such as ‘Dr’ and ‘Scientist’ were found to be much more likely to be classified as male. Genderify has since shut down with the following statement: “Since AI trained on the existing data, this is an excellent example to show how bias is the data available around us.” Read more on Synced Review.
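As a rough illustration of the paired-input probing that surfaced the problem, the sketch below flags predictions that flip when only a job title changes. The `predict_gender` function and its canned responses are purely hypothetical stand-ins, not the real Genderify API:

```python
# Hypothetical sketch of a paired-input bias probe; the stand-in responses
# below are illustrative only, not real output from Genderify.
ILLUSTRATIVE_RESPONSES = {
    "Meghan Smith": "female",
    "Dr Meghan Smith": "male",
    "Alex Taylor": "female",
    "Scientist Alex Taylor": "male",
}

def predict_gender(name: str) -> str:
    """Placeholder for a call to a gender-prediction service."""
    return ILLUSTRATIVE_RESPONSES[name]

# Each pair differs only by a job title; a flipped label means the model
# is reacting to the title rather than to the person.
probes = [
    ("Meghan Smith", "Dr Meghan Smith"),
    ("Alex Taylor", "Scientist Alex Taylor"),
]

for plain, titled in probes:
    before, after = predict_gender(plain), predict_gender(titled)
    if before != after:
        print(f"'{titled}' flips the prediction: {before} -> {after}")
```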
IBM data set breaches consent
Unfortunately, there is no shortage of news articles that report breaches in privacy and consent. In this case, two individuals are suing three tech companies, including IBM, over the use of their images in facial recognition technology.
What's interesting about this story is how the breaches in privacy occurred. Firstly, the victims' faces were initially featured in the IBM 'Diversity in Faces' database, which was then shared with other companies to create, or optimise, algorithms. This highlights the problem with using and sharing data sets that may contain personal data without a full understanding of how and for what purposes consent was obtained. Secondly, IBM initially claimed its data set was intended to aid academic research but is now using it for commercial purposes, which again breaches the original consent. If you change the way data is used, e.g. you choose to make a data set available to third parties, you need to ensure you've gathered consent for these alternative uses. Read more on Tech Crunch.

How a fake news article made it into mainstream news, and the problem with it
This insights article by The Next Web tells a gripping story of how two specific fake news articles made it into mainstream news.
The first example outlines how the (fake) story of a Republican politician who wants “martial law to control the Obama-Soros antifa super soldiers” led to actual violence and protests. The story was picked up by an AI-driven news aggregator designed to identify popular news headlines and promote them via Facebook ads, with the intention of helping local people discover more local news. However, in this instance, the story came from a satirical website. Taken out of its satirical context, it was shared with a different audience on Facebook: an audience with an active, emotional interest in the topic.
The story shows how algorithms can struggle to make contextual decisions which, when they go wrong, can lead to actual violence. Read more on The Next Web.
To AI or Not to AI? Areas to avoid
Building on the Genderify debacle, TNW produced an opinion piece on six key areas that should not be determined by AI, as they are more likely to cause harm than provide a benefit to humanity. These include determining gender, sexuality, human potential, and criminal profiling.
“AI cannot determine the likelihood that a given individual, group of people, or specific population will commit a crime. Neither humans nor machines are psychic. Predictive policing is racist. It uses historical data to predict where crime is most likely to occur based on past trends.” Read more on The Next Web.
Reflecting on how tech is responding to users with disabilities
As the Americans with Disabilities Act has turned 30, Tech Crunch approached prominent organisations to ask for their views on how tech has improved the everyday lives of people with disabilities, where it has fallen short, and what tech companies could be doing better. Read more on Tech Crunch.
Forget deepfake images and sound. Text is the problem
This insights piece from Wired investigates deepfake text and its pervasiveness. While most news focuses on deepfake images and sound, these are still relatively easy to detect (take this new deepfake voice generator), whereas deepfake text is not. Deepfake text is much more subtle and, unlike other forms of deepfake, can be posted in different ways through different channels, creating the illusion of multiple people discussing one particular topic.
“Pervasive generated text has the potential to warp our social communication ecosystem: algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.” Read more on Wired.

New in: AI and Ethics Journal
John MacIntyre (University of Sunderland) and Larry Medsker (George Washington University) have launched a new journal called ‘AI and Ethics’. The journal looks at “how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future” as well as how AI techniques “provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge”. Read more on DailyNous.
Only 18% of data science students are learning about AI ethics
Data science platform Anaconda has conducted a State of Data Science survey among 2,360 individuals across more than 100 countries, including the UK. Findings indicated that despite a recognition that ethics in AI is among the biggest challenges in the industry, education on AI ethics is still widely underrepresented. In the survey, only 15% of instructors said they were teaching AI ethics, while 18% of students said they were learning about it.
“Concerns about bias and privacy are on the minds of data professionals, with nearly half of respondents citing one of these two themes as the “biggest problem to tackle in the AI/ML arena today.” Yet concerningly, only 15% of respondents said their team is currently actively addressing the issue of bias, and only 15% of universities include courses in ethics.” Download the report on Anaconda (discovered via The Next Web).
Using enhanced wellbeing as an impact assessment tool for AI
It’s difficult, if not impossible, to prevent AI algorithms from causing harm. Tools that exist to help consider AI ethics are limited and there are still no official actionable metrics. This research paper contributes to this debate by proposing an additional metric: enhanced wellbeing.
The Enhanced Wellbeing Impact Assessment (EWA) already exists within public policy and proposes “(1) internal analysis, informed by user and stakeholder engagement, (2) development and refinement of a well-being indicator dashboard, (3) data planning and collection, (4) data analysis of the evaluation outputs that could inform improvements for the A/IS.”
The aim of this framework is to avoid ethics washing by introducing something that is measurable, taking inspiration from the wellbeing space. However, the article is vague in terms of providing clear metrics and therefore limited in its potential for direct practical application. While the idea of adopting measurable frameworks from public policy into AI is interesting, and will certainly have its uses within cases where wellbeing is directly impacted by AI, it will still need to be further tested or developed. Read the full paper on Arxiv.

In-depth analysis: Using cards to facilitate ethics by design
Two card decks, 'Moral-IT Cards' and 'The Ethical Explorer', have been featured in news and research papers this month. Both packs are aimed at prompting teams to consider ethics as part of the design process. We've conducted a (slightly lengthy) comparison of the two decks:
The Ethical Explorer
Created by the Omidyar Network (established by eBay founder Pierre Omidyar), The Ethical Explorer particularly seeks to support individuals who might lack the confidence to raise ethical concerns within the workplace, or those interested in exploring the ethical implications of their own technologies.
It includes activities such as how to develop an argument, and a workshop to anticipate ethical risks as a team. The kit is available as a free digital download or as a free physical card deck (these can be shipped to the UK).
Moral-IT Cards
While the Moral-IT Cards were first introduced in 2018 (free to access here), a new research paper has just been released indicating how the cards are best used, reflecting on a pilot with 20 participants.
Observing how the cards work best in use, the research paper proposes the following approach:
- As a group, summarise your tech and key ethical priorities
- Pick the 5 most important cards that apply to the technology’s ethical risk
- Rank the cards based on importance
- Annotate with post-it notes
- Identify safeguards
- Annotate with post-it notes
- Consider challenges and barriers to implementation
- Team discussion
Comparison: Moral-IT vs Ethical Explorer Cards
This comparison is purely based on available information about the cards, as we haven't (yet) used them in a practical setting.
The Moral-IT Deck:
- Themes: Security, Ethics, Privacy, Law
- Key questions: Secure for whom? Consider the setting this tech could be used in and why this is important.
- Process: based on the 4 stages of ethics design: 1) identify risks, 2) rank risks, 3) establish safeguards, 4) explore implementation barriers

The Ethical Explorer:
- Themes: Surveillance, Disinformation, Exclusion & Equity, Algorithmic Bias, Addiction, Data Control, Bad Actors & Ethical Use, Outsized Power
- Key questions: How will we protect privacy? Promote truth? Enable equity? Promote fairness? Promote healthier behaviours? Enable transparency? Promote civility? Promote choice?
- Process: a choice of different programs: explore your own argument, build ethics as a team habit, explore how to improve ethics, celebrate your current progress, or a half-day workshop
Different suggestions on how to use the cards:
The Moral-IT deck proposes a more structured approach to evaluating and addressing ethical considerations. It encourages you to rank and evaluate key ethical risks and put appropriate safeguards in place. The Ethical Explorer, on the other hand, provides different variations on how its cards could be used within the workplace: from the individual worker who wants to formulate an informed opinion before approaching management, to team workshops similar to the approach advocated by the Moral-IT deck.
Ultimately, it looks like both approaches can be adopted with either card deck, and I would definitely recommend that, if you are considering using a deck, you give the Moral-IT research paper a read.


Caption: left = Ethical Explorer card. Right = Moral-IT card.
Structure and Questions
At 74 pages, the Moral-IT card deck offers a greater choice of questions; however, it also feels more disorganised. While the deck does categorise its questions within different themes, it is not always clear up-front what is meant by a specific theme. The research paper acknowledges this, indicating that users found it challenging to 'group' or categorise ethical topics.
The Ethical Explorer, on the other hand, does a very good job at clarifying key themes. Every theme is introduced with one key question which explains more about it. This, by the looks of it, makes it a bit more user-friendly from a ranking perspective.
However, both card decks are free to use and both have the potential to function as a useful resource.
Disclaimer
This list is by no means exhaustive and, to keep this post within a reasonable length, we have left out some stories.
All the articles in this post have been gathered from a variety of news sources, albeit a limited number, including Arxiv (research papers), Next Reality, Road to VR, Tech Crunch, Tech Talks, The Next Web, Venturebeat, Virtual Reality Times, VR Focus, and Wired.