singing and weeping and laughing to teach an algorithm, 2023
VIDEO ● SOUND
Sounds emitted by human bodies can serve as training data for the algorithms behind voice interfaces of digital assistants such as Alexa or Siri. The data is collected by human annotators who verify sounds found online, mainly in YouTube videos. The extracted sounds are then manually categorized and labelled. A vast repository of human sounds is created to feed the algorithmic audiences that grow bigger year by year. Listening to and learning from the sounds our bodies produce could establish a very intimate connection with a machine. How will our interactions and relations with machines evolve through sound?
Commissioned by Wild Alchemy Lab.