Siri is a Scandinavian female name that means “beautiful victory.” Dag Kittlaus, co-creator of Siri, originally chose the name as an expectant father; when he ended up having a son, he repurposed it for his startup. To drive adoption of virtual assistants, developers gave Siri traits as close to human as possible. Unfortunately, these constructed female identities reinforce dangerous stereotypes of subservient female assistants.
A recent survey by AppDynamics found that 84% of millennials use voice-activated assistants to manage their day-to-day lives. So how do we steer the technology toward a future that unravels unconscious bias instead of reinforcing it? Siri, Alexa, and Cortana could be the most impactful feminists of our time, if only they would ditch one programmed trait: gender.
We could start to fix the problem by making the teams that develop virtual assistants more representative of the consumers they serve. White men make up the vast majority of software development teams, and the gender and racial norms of those homogeneous teams are passed down to the agents they create. As we continue the path toward parity in business, government, and society at large, we must also confront the bias that has already been programmed into the software we use.
Some companies are already removing gender identifiers: Capital One released Eno, the first SMS chatbot from a U.S. bank. Its gender? It doesn’t have one. Audra Koklys Plummer, head of AI design, explains, “Making Eno gender-neutral freed us in a sense because we didn’t have to worry about evoking any biases. We could just focus on solving customer problems.”
It’s more difficult when the AI speaks. But machine learning has let us break down the elements of voice that make a pitch register as male or female. We can apply that knowledge to the voices of virtual agents to create genuinely androgynous tones, stripped of pitch identifiers, so that the interaction between human and technology stops reaffirming a bias that is destructive to women. While vocal androgyny should be the default, voices across an ethnic, racial, and cultural spectrum should also exist, and changing the default should be a conscious choice. Just as problematic as codifying a traditionally white, female tone into Siri, Cortana, and Alexa is the fact that interacting with that specific voice is not an active choice. Because we never have to choose the voice, it’s easier not to consider its implications.
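To make the “pitch identifier” idea concrete, here is a minimal sketch: a toy autocorrelation pitch estimator and a naive resampler that nudges a signal toward a pitch in the roughly gender-ambiguous range often cited in voice research (around 145–175 Hz). The 160 Hz target, the function names, and the resampling approach are illustrative assumptions, not any company’s production pipeline; real systems use duration-preserving techniques such as PSOLA or neural vocoders.

```python
import numpy as np

SR = 16_000  # sample rate in Hz (assumed for this sketch)

def estimate_f0(signal, sr=SR):
    """Estimate fundamental frequency (Hz) via autocorrelation."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Find the strongest repetition lag within a plausible speech range.
    min_lag, max_lag = sr // 400, sr // 60  # roughly 60-400 Hz
    lag = min_lag + np.argmax(corr[min_lag:max_lag])
    return sr / lag

def shift_toward(signal, target_f0=160.0, sr=SR):
    """Resample so the estimated pitch lands near target_f0.

    Naive: resampling also changes duration. Production systems use
    duration-preserving pitch shifting instead.
    """
    step = target_f0 / estimate_f0(signal, sr)
    idx = np.arange(0, len(signal) - 1, step)
    return np.interp(idx, np.arange(len(signal)), signal)

# Demo on a synthetic 220 Hz tone, shifted into an ambiguous pitch range.
tone = np.sin(2 * np.pi * 220 * np.arange(SR // 2) / SR)
shifted = shift_toward(tone, target_f0=160.0)
```

The point of the sketch is only that pitch, one of the strongest gender cues in a voice, is a measurable and adjustable parameter; removing it as an identifier is an engineering choice, not a technical impossibility.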
Androgyny should also extend to what we call the robot. Because a virtual assistant’s name is the trigger for every command, each time we call out “Alexa” or “Hey Siri,” we’re reminded of the robot’s given gender. Eliminating gender from a virtual assistant’s name strips antiquated stereotypes further from its identity.
Though the goal of voice technology may be ubiquity, we’re still a long way from that future. So while its presence in the home grows, we must also consider how communication with our connected devices will change. Instead of holding a conversation with each device in a chosen language, sound effects will become an important way to communicate with the world around us. Whether it’s Star Wars’ R2-D2 or the tone of an iPhone text, there are already examples of robots using sound to distill valuable information. As voice technology matures, it will blend natural language with sound to convey additional meaning. This fusion of language and contextual sound is another argument for decoupling gender from the identity of virtual assistants: it prioritizes communication efficiency over superficial vocal cues.
With more diverse teams developing AI, and with the adoption of androgynous names and voices, we can evolve virtual assistants so that their evolution is symbiotic with our own. Humanizing voice technology is important for fostering adoption. But the only duality relevant to Siri is binary code, not the gender binary.