Google and Universal Music negotiate deal over AI ‘deepfakes’


Google and Universal Music are in talks to license artists’ melodies and voices for songs generated by artificial intelligence as the music business tries to monetise one of its biggest threats.

The discussions, confirmed by four people familiar with the matter, aim to strike a partnership for an industry that is grappling with the implications of new AI technology.

The rise of generative AI has bred a surge in “deepfake” songs that can convincingly mimic the voices, lyrics or sound of established artists, often without their consent.

Frank Sinatra’s voice has been used on a version of the hip-hop song “Gangsta’s Paradise” while Johnny Cash’s has been deployed on the pop single “Barbie Girl”. A YouTube user called PluggingAI offers songs imitating the voices of the deceased rappers Tupac and Notorious B.I.G.

“An artist’s voice is often the most valuable part of their livelihood and public persona, and to steal it, no matter the means, is wrong,” Universal Music general counsel Jeffrey Harleston told US lawmakers last month.

Discussions between Google and Universal Music are at an early stage and no product launch is imminent, but the goal is to develop a tool that lets fans create these tracks legitimately and pays the relevant copyright owners, said people close to the situation. Artists would have the choice to opt in, the people said.

Warner Music, the third-largest music label, has also been talking to Google about a product, said a person familiar with the matter.

Music executives liken the rise of AI-generated songs to the early days of Google-owned YouTube, when users began adding popular songs as the soundtracks to videos they created. The music industry spent years battling YouTube over copyright infringement, but the two sides eventually established a system that now pays the music industry about $2bn a year for these user-generated videos.

As AI has gained traction, some big stars have expressed anxiety that their work will be diluted by fake versions of their songs and voices. 

The issue was thrust into the spotlight earlier this year when an AI-produced song that mimicked the voices of Drake and The Weeknd went viral online. Universal Music, home to Drake, Taylor Swift and other popular musicians, had the song removed from streaming platforms over copyright infringement. 

Drake in April slammed another song that used AI to mimic his voice, calling it “the final straw”, while rapper Ice Cube has described such cloned tracks as “demonic”. 

Other artists have embraced the technology. Grimes, the electronic artist, has offered to let people use her voice in AI-generated songs and split the royalties. 

“There’s some good stuff,” she told Wired magazine this week, referencing AI tracks using her voice. “They’re so in line with what my new album might be like that it was sort of disturbing . . . On the other hand, it’s like, ‘Oh, sick, I might get to live forever.’ I’m into self-replication.” 

Robert Kyncl, chief executive of Warner Music, on Tuesday told investors that “with the right framework in place”, AI could “enable fans to pay their heroes the ultimate compliment through a new level of user-driven content . . . including new cover versions and mash-ups”. 

He added that artists should have a choice to opt in. “There are some that may not like it, and that’s totally fine,” he said. 

For Google, creating a music product could help the company compete with rivals such as Microsoft, which has invested $10bn in OpenAI, maker of the market-leading AI model GPT-4.

The model has already been integrated into Microsoft’s Bing search engine and productivity software, and Google has raced to catch up by launching its own AI products such as the chatbot Bard. 

Universal Music in April urged streaming platforms to prevent AI services from scraping its songs without permission or payment, the Financial Times reported. The company, which controls about a third of the global music market, asked Spotify and Apple to cut off access to its music catalogue for developers using it to train AI technology.

Lyor Cohen, a former record label executive who leads YouTube’s music division, has been working on the project for Google, according to people familiar with the matter.

In January, Google previewed AI-powered music software in an academic paper. The tool could generate music from text descriptions such as "upbeat arcade game" or "reggaeton fused with electronic dance", as well as more detailed prompts like "a calming violin melody backed by a distorted guitar riff".

At the time, it said it had "no plans" to release the tool commercially, and the authors noted limitations, including potential copyright infringement when the software reproduced specific artists' music from its training data. In May, however, Google released the experimental tool, known as MusicLM, to consumers and said it had been working with artists to develop it.

Google and Universal Music declined to comment.
