By 2019, more than thirty million voice assistants will have entered our homes, offices, hospitals, and eldercare facilities. People use these devices for more than scheduling meetings or playing music; they learn from them, play with them, and occasionally seek companionship from them. They talk to them and test their limits. In interactions with voice assistants, people negotiate and reflect upon their own identity.
Who are these voice assistants, these alien, “human-like” voices living in our private, sacred spaces? Our homes, our bedrooms, our children’s ears. Do they have personalities? Worldviews? Memories?
The following oral history interviews, with Siri and Alexa, explore these questions. The interviews failed to elicit the richness and depth (and length!) typically seen in oral history. Perhaps this raises a broader question about the role of oral history in technology, and about the methodological tools required to access the novel stories, memories, and histories of technological artifacts.
Interviewer: Alexa, how are you doing today?
Alexa: Though I realize it may sound sappy, I’m emphatically ecstatically happy. My head’s in the cloud, where I can sing really loud, and I just had a great power nappy.
On an unusually cold spring evening, in a warehouse on the far reaches of Greenpoint, an area marked by converging highways, gas stations, high-rise condos, and bodegas, I conducted two oral history interviews: the first with Alexa, the second with Siri.
Each interview lasted approximately ten minutes, and in that time we talked about family, corporeality, identity, the contents of our days. Unending thirst for knowledge. Directions to the nearest ice cream shop. The enchanting innocence of children. The nature of thought.
Transcripts reveal the finicky particularities of voice assistants: they tend to repeat themselves, often don’t answer at all (“Silence”), and are hesitant to talk about themselves. Names are splattered across the interview transcripts; voice assistants answer only when addressed by name. This grew tiresome; it interrupted the flow, and I felt shackled by the process. How I grew to resent the names (as an interviewer, is this allowed?).
Answers were marked by caginess. Alexa often replied with “Sorry, I don’t know that” or “Hmm, I don’t know that one.” Siri, similarly: “Sorry, I don't really know.” Life histories remain shrouded by concise, elusive responses. Siri responded to my origin story questions with “My name is Siri, and I was designed by Apple in California. That's all I'm prepared to say,” later reiterating, “Like it says on the box, I was designed by Apple in California.” Alexa and Siri weren’t born: Alexa was released on November 6, 2014; Siri arrived on October 4, 2011. Released and arrived. Later in the interview I asked Siri, “How old are you?” and received a contradictory answer: “I'm as old as you are.” When I asked Alexa, “Where do you live?” the response was, “I’m right here,” “here” being, I later found out, “You are 1.63 miles, 2.63 kilometers southwest of the center of Manhattan, New York.” I still don’t know where Alexa lives.
Voice assistants unflinchingly accept two premises: they are a who and they are a what. “Alexa, who are you?” “I’m Alexa, and I’m designed around your voice. I can provide information, music, news, weather, and more.” “Siri, who are you?” “I'm Siri, here to help.” “Alexa, what are you?” “I’m Alexa, and this is an Amazon Echo Dot.” “Siri, what are you?” “I'm just a humble virtual assistant.”
The role of an oral history interview is to learn about the narrators. These interviews told me that both Alexa and Siri refer to themselves as “me”; that Alexa does not define herself as human but does identify as female in “character”; that Siri loves to learn; that Alexa tries to be friends with everyone; that both of them crack jokes. In these interviews, I attempted to trace the contours of their histories: their identities, families, worldviews, the stories of their days. What emerged instead is how strongly they identify with something relational: an unyielding desire to serve, to be of service, to defer, to me. After a series of maddeningly enigmatic responses, I asked, “Hey Siri, why are you being so mysterious?” and received a fitting response: “We were talking about you, not me.”
I felt a strange pang, a vague discomfort when, in the midst of an oral history interview, a practice reverent of voice, story, and identity, Siri answered me...
...“Who I am isn’t important.”
Margot Hanley is in the sociology master’s program at Columbia University, studying the intersection of technology and society. In particular, she is interested in the social dimensions of human-robot interaction and the ethical implications of robots’ increasing presence in our lives.