I can clone your voice with impunity
Why the UK needs fresh legislation to guard against unauthorised voice cloning
Last April, before all the current Oasis reunion hoo-ha, UK indie band Breezer released an album of original songs under the name AISIS (see what they did there?) with the vocals performed by an AI clone of Liam Gallagher’s voice.
Liam was seemingly cool with it but what if he hadn’t been? What would his legal options have been?
The short answer: his options would have been pretty limited, as the UK has no specific right protecting voice likeness.
He would have been unlikely to succeed with the tort of passing off, as the tracks were clearly labelled as AI-generated and ‘in the style of’.
He / Sony BMG could potentially have claimed copyright infringement over the tracks used to train the voice clone, although the courts have yet to rule on whether and when training AI on copyrighted material constitutes infringement.
His best bet would probably have been waiting until this July and taking action in Tennessee, which recently passed the ELVIS (Ensuring Likeness, Voice, and Image Security) Act - an amendment to the Protection of Personal Rights law, prohibiting the unauthorised commercial use of an individual’s voice.
Most US states are already in a better position than the UK thanks to the Right of Publicity, which offers some protection against commercial exploitation of one’s voice, although the scope of that protection varies by state and there’s broad consensus that more targeted federal legislation is required (the NO FAKES Act and No AI FRAUD Act are both currently wending their way through the legislative process).
When Scarlett Johansson took issue with one of ChatGPT’s new voices, Sky, sounding an awful lot like the AI operating system she voiced in the (excellent) 2013 film, Her, her legal team threatened OpenAI, the company behind ChatGPT, with legal action.
Sam Altman, CEO of OpenAI, responded that “the voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers” but that they would pause using Sky’s voice in their products “out of respect for Ms. Johansson”.
I’d suggest that only one of those three statements is true (that “the voice of Sky is not Scarlett Johansson’s”) and that the existing protections afforded by the Right of Publicity were sufficient to persuade OpenAI’s legal team to strongly advise Mr Altman to remove Sky as a voice option on ChatGPT.
Threatening legal action may be a realistic option when it comes to unauthorised use of a voice likeness by high-profile global companies. It’s less realistic when it comes to individual content creators. Sir David Attenborough has spoken about finding AI clones of his voice being used without his permission “personally distressing” but the video that prompted that comment remains available and Googling for ‘David Attenborough voice clone’ continues to return plenty of results.
If and when UK legislation is drafted to guard against unauthorised voice clones, careful consideration will need to be given to parody. Since 2014, an amendment to the Copyright, Designs and Patents Act has enabled the use of part of a copyrighted work “for the purposes of caricature, parody or pastiche”. To what extent should that exception also apply to voice clones?
Elon Musk recently reposted a parody campaign video featuring a Kamala Harris voice clone, with the caption “This is amazing 😂”. When the video was first posted by its creator, it was captioned “Kamala Harris Campaign Ad PARODY”. Musk chose to remove that caption.
Regardless of what proportion of people who watched that video understood it was parody without a label telling them so (I’d wager it wasn’t all of them), using voice clones feels like a different kettle of fish to employing human impersonators for parody (it’s notable that ITV opted to use voiceover artists for its Deep Fake Neighbour Wars).
It strikes me that, alongside new legislation, we’re also going to need a voice equivalent of Content ID (which Google uses to detect copyrighted music on videos uploaded to YouTube) - the vocal equivalent of a fingerprint that can be used to detect and flag unauthorised uses of a person’s voice on YouTube, TikTok, X etc.
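To make the idea concrete: the sort of system I’m imagining would reduce each recording to a compact acoustic “voiceprint” and flag uploads whose voiceprint sits suspiciously close to a protected one. Here’s a deliberately toy sketch in Python (NumPy only) of that matching step - a long-term average spectrum compared by cosine similarity. Everything here is my own invention for illustration; real speaker-verification systems use far more robust learned embeddings, and no platform exposes an API like this.

```python
import numpy as np

def voiceprint(signal: np.ndarray, frame_len: int = 512) -> np.ndarray:
    """Toy 'voiceprint': unit-normalised long-term average magnitude spectrum.

    Real systems use learned speaker embeddings; this only illustrates
    the fingerprint-and-compare shape of the idea.
    """
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    # Window each frame, take the magnitude spectrum, then average over time.
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    avg = spectra.mean(axis=0)
    return avg / (np.linalg.norm(avg) + 1e-12)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two unit-normalised voiceprints."""
    return float(np.dot(a, b))

# Two synthetic 'voices' with different spectral shapes (stand-ins for audio).
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
voice_a = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(len(t))
voice_b = np.sin(2 * np.pi * 400 * t) + 0.3 * rng.standard_normal(len(t))

same = similarity(voiceprint(voice_a), voiceprint(voice_a))
diff = similarity(voiceprint(voice_a), voiceprint(voice_b))
```

A platform would then flag an upload whenever `similarity` against a registered voiceprint crossed some threshold - and the hard problems, as with Content ID, are all in setting that threshold and handling disputes.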
We’re also going to need better scaffolding in place around authorised use of voice clones. US actors union SAG-AFTRA recently agreed a deal with AI startup Narrativ for licensing performers’ voice replicas in digital advertising. Meanwhile, ElevenLabs has licensed the voices of Judy Garland, James Dean and Laurence Olivier from their respective estates for use in its Reader app and Meta is reportedly “offering Hollywood celebrities millions of dollars for the right to record and use their voices for artificial intelligence projects”.
We’re in a period of heightened anxiety around the use of AI in creative output, where people tend to assume the worst. Take, for example, this irresponsibly headlined Guardian article about the BBC’s use of an AI-generated voice in a forthcoming documentary. It’s not until the tenth paragraph that you get the all-important context that the documentary features a contributor who is nearing the end of her life and is unable to speak, and that using AI to recreate her voice was in line with her family’s wishes.
Giving the voiceless a voice appears to be the one use of voice cloning that is currently (mostly) beyond reproach. Apple was at pains to position its 2023 toe in the voice cloning water, Personal Voice, as purely an accessibility feature for those at risk of losing their voice. AI being used to create a voice clone for a post-tracheotomy Val Kilmer was positively received in (pre-ChatGPT) 2022. As was ElevenLabs’ more recent work to enable Congresswoman Jennifer Wexton, whose voice has been significantly affected by Progressive Supranuclear Palsy, to speak in her original voice.
However, ElevenLabs’ role in creating unauthorised voice clones (most notably the New Hampshire primary Biden robocall) has meant their recently announced Impact Program, which aims to “help 1 million people reclaim their voice”, has been cynically received by some.
More generally, the sensitivities around the use of AI in creative output are acting as a natural deterrent to greater adoption in mainstream media, with almost all uses attracting negative reactions (e.g. Netflix’s recent use of AI to voice the words of disgraced music mogul Lou Pearlman, who died in 2016, in Dirty Pop: The Boy Band Scam).
However, voice clones of the rich and famous in mainstream media isn’t where the greatest damage is going to be done. That’ll be to regular people in the real world, as it becomes increasingly easy to create an AI voice clone from less audio and with greater verisimilitude.
AI voice clones have already been used to fake kidnaps, scam CEOs, dupe bank security systems and put racist words in the mouth of a schoolteacher.
Whilst all of these abuses of voice cloning technology were clearly criminal and caught by other laws, we are going to see more instances of voice clones being used to mislead and to damage relationships and reputations without an obvious crime having been committed under existing legislation.
New legislation will need to consider the age-old question of what degree of responsibility should fall on end-users and what degree on companies enabling the unauthorised creation and use of voice clones.
Whilst some companies, such as Microsoft and OpenAI, have opted not to make their advanced voice cloning tools (VALL-E 2 and Voice Engine) generally available, the latter citing the need to first “bolster societal resilience”, there are plenty of companies that won’t have such qualms, as well as plenty of open-source voice cloning tools, such as OpenVoice.
So, what to do? Assuming you’re not a government minister (in which case I’d crack on with tabling some new legislation), I’d suggest three things:
1. Lend your support to calls for new legislation to tackle the creation and use of unlicensed / unauthorised voice clones.
2. Be mindful of where recordings of your voice appear and consider switching to a generic voicemail message.
3. Agree a code word with your family that you could use in the unlikely event you fall victim to a voice cloning scam.
Thanks for reading Dan’s Media & AI Sandwich.