Kate McCurdy is the Applied Research Lead of the Computational Linguistics Engineering team at Babbel, the world’s market-leading language learning app. In her role, Kate works on bringing innovations from language technology into the learning experience.
She holds a European Master’s in Clinical Linguistics, having studied a joint curriculum in clinical linguistics, neurolinguistics, and psycholinguistics across three European universities: the University of Potsdam, Germany; the University of Groningen, the Netherlands; and the University of Eastern Finland. She also holds a B.A. in Linguistics and Cultural Studies from McGill University and has worked as a research assistant at Harvard University.
After her studies, she realized her love of language could coincide with her knack for data. An academic psycholinguist turned visualization aficionado, Kate keeps a foot in both worlds as a data wrangler for Babbel.
She has presented at numerous research conferences and events, including the CUNY Conference on Human Sentence Processing in 2013, the Big Data Panel at Codemotion in 2013, and the Young Researchers’ Symposium in NLP in Osaka in 2016.
You Shall Know a Word by the Company It Keeps
With the rapid pace of research and development, significant technological advances can sometimes hold an unflattering mirror to our world. AI and machine learning are especially volatile: even as they offer a glimpse of the future, they show us that the past hasn’t gone anywhere. Recent research shows that English-language word embeddings, a popular natural language processing technology used in machine learning applications, adopt the biases of human culture. For example, gender stereotypes in which men are more closely associated with powerful careers than women are reproduced with regularity.
Computational linguists at language learning app Babbel have found that this form of machine intelligence can be led even farther astray in languages other than English. In languages with grammatical gender, such as German or Spanish, word embedding technology learns to associate terms for human men and women not only with stereotypes, but also with common objects that happen to share the same article (e.g. die and la for women, der and el for men). This can lead to confusing and contradictory outcomes, potentially reinforcing gender stereotypes in downstream applications and the contexts in which they are deployed.
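The kind of bias described above is commonly quantified by comparing cosine similarities between embedding vectors. Below is a minimal sketch using small, invented toy vectors (real embeddings such as word2vec or GloVe are trained on large corpora and have hundreds of dimensions); it only illustrates the measurement technique, not any real model's values.

```python
import numpy as np

# Toy 4-dimensional "embedding" vectors, invented purely for
# illustration. Real word embeddings are learned from text corpora.
embeddings = {
    "man":      np.array([0.9, 0.1, 0.3, 0.2]),
    "woman":    np.array([0.1, 0.9, 0.3, 0.2]),
    "engineer": np.array([0.8, 0.2, 0.5, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.5, 0.1]),
}

def cosine_similarity(a, b):
    """Standard cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    """Positive -> the word sits closer to 'man' in vector space;
    negative -> closer to 'woman'."""
    v = embeddings[word]
    return (cosine_similarity(v, embeddings["man"])
            - cosine_similarity(v, embeddings["woman"]))

for word in ("engineer", "nurse"):
    print(word, round(gender_association(word), 3))
```

With these toy vectors, "engineer" scores positive (closer to "man") and "nurse" scores negative (closer to "woman"), mirroring the stereotyped career associations that studies have measured in embeddings trained on real text.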
Kate McCurdy, computational linguist at Babbel, will explore both the implications of and possible correctives to these built-in biases.