Cherie Hu explores a new frontier for AI-fuelled music.
One key difference: no blood is shed, no antagonistic words are exchanged. This is in part because the collaborator is not so much a personified, autonomous body as it is a faceless, obedient machine—groomed to generate new beats, melodies and chords at its parent's command, and at many times the speed of any human.
While this machine-collaborator is the artist's voluntary creation, the human doesn't quite know how it operates under the hood. All she knows is that the AI has devoted its entire lifetime to surveilling and studying its parent's songs, her production style, even her voice—not just exposing the backbone of her deep-seated ideas and behaviors, but also extending and manipulating them beyond what her own abilities currently allow.
Another important difference: unlike Us, this isn't just a movie.
For innovators like the experimental sound artist Holly Herndon, whose third album, PROTO, came out on May 10th, it's a stark reality unfolding in real time. In collaboration with longtime partner Mat Dryhurst and software engineer Jules LaPlace, Herndon has developed a machine-learning voice model named SPAWN that is designed to learn, interpret and recreate the sonic patterns of any voice it "consumes." The trio has trained SPAWN on multiple hours' worth of spoken and sung audio files from Herndon and her collaborators, tweaking and improving the model over the course of several months.
Despite the buzz around their methodology, Herndon insists that PROTO is "not just another AI album." She treats SPAWN as one "ensemble member" among her human collaborators, and the AI accounts for only a small minority of the sounds across the tracks.