Some tech companies make a splash when they launch; others seem to bellyflop.
Genderify, a new service that promises to identify someone’s gender by analyzing their name, email address, or username with the help of AI, looks to be firmly in the latter camp. The company launched on Product Hunt last week, but has picked up a lot of attention on social media recently as users discovered biases and inaccuracies in its algorithms.
Type the name “Meghan Smith” into Genderify, for example, and the service offers the assessment: “Male: 39.60%, Female: 60.40%.” Change that name to “Dr. Meghan Smith,” however, and the assessment changes to: “Male: 75.90%, Female: 24.10%.” Other names prefixed with “Dr” produce similar results, and other inputs seem to skew consistently male. “Test@test.com” is said to be 96.90 percent male, for example, while “Mrs Joan smith” is 94.10 percent male.
The outcry seems to have been so great that, at the time of writing, the service’s website, Genderify.com, has been taken offline and its free API is no longer accessible.
AI bias in action: https://t.co/vRM53tEUMs pic.twitter.com/YgLON4vpT8
— michael (@mpchlets) July 28, 2020
These sorts of biases regularly appear in machine learning systems, but the thoughtlessness of Genderify seems to have surprised many experts in the field. The response from Meredith Whittaker, co-founder of the AI Now Institute, which studies the impact of AI on society, was somewhat typical. “Are we being trolled?” she asked. “Is this a psyop meant to distract the tech+justice world? Is it cringey tech April fool’s day already?”
Making assumptions about people’s gender at scale could be harmful
The problem with Genderify is not that it makes assumptions about someone’s gender based on their name. People do this all the time, and sometimes make mistakes in the process. That’s why it’s polite to find out how people self-identify and how they want to be addressed. The problem with Genderify is that it automates these assumptions, applying them at scale while sorting individuals into a male/female binary (and so ignoring individuals who identify as non-binary) and reinforcing gender stereotypes in the process (such as: if you’re a doctor, you’re probably a man).
And/or its just garbage overall pic.twitter.com/YMySNyVT1a
— Jon Doane (@jpdoane) July 29, 2020
The potential harm of this depends on how and where Genderify is applied. If the service were integrated into a medical chatbot, for example, its assumptions about users’ genders could lead to the chatbot issuing misleading medical advice.
Thankfully, the service doesn’t seem to be aiming to automate this sort of system, but is primarily designed to be a marketing tool. As Genderify’s creator, Arevik Gasparyan, said on Product Hunt: “Genderify can obtain data that will help you with analytics, enhancing your customer data, segmenting your marketing database, demographic statistics, etc.”
In the same comment section, Gasparyan acknowledged the concerns of some users about bias and ignoring non-binary individuals, but didn’t offer any concrete answers.
One user asked: “Let’s say I choose to identify as neither Male or Female, how do you approach this? How do you avoid gender discrimination? How are you tackling gender bias?” To which Gasparyan replied that the service makes its decisions based on “already existing binary name/gender databases,” and that the company was “actively looking into ways of improving the experience for transgender and non-binary visitors” by “separating the concepts of name/username/email from gender identity.” It’s a confusing answer given that the entire premise of Genderify is that this data is a reliable proxy for gender identity.
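Gasparyan’s description suggests a straightforward lookup against a binary name/gender table. A minimal sketch of that kind of system (the names, probabilities, and fallback value here are invented for illustration, not Genderify’s actual data or code) shows how both failure modes critics identified fall out of the design:

```python
# Hypothetical sketch of a binary name/gender lookup of the kind
# Gasparyan describes. All table entries and probabilities are
# invented for illustration; this is not Genderify's actual data.
NAME_GENDER_TABLE = {
    "meghan": {"male": 0.396, "female": 0.604},
    "joan":   {"male": 0.059, "female": 0.941},
}

# Fallback used when no token matches the table (e.g. an email
# address like "test@test.com") -- skewed male for illustration.
DEFAULT = {"male": 0.969, "female": 0.031}

def classify(text: str) -> dict:
    """Return binary gender 'probabilities' for an input string.

    Every input is forced into a male/female split: there is no
    non-binary outcome, and unrecognized strings silently fall
    back to a skewed default -- the two problems users flagged.
    """
    for token in text.lower().replace(".", " ").split():
        if token in NAME_GENDER_TABLE:
            return NAME_GENDER_TABLE[token]
    return DEFAULT

print(classify("Meghan Smith"))   # hits a table entry
print(classify("test@test.com"))  # no match, falls back to default
```

The sketch makes the design flaw concrete: whatever the table contains, the output schema itself only has two fields, so no amount of “improving the experience for transgender and non-binary visitors” is possible without changing the premise of the product.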
We’ve reached out to the company and will update this story with any comment we receive.