
It was just 2 years and 3 months ago that an unsanctioned, AI-generated collaboration between Drake and The Weeknd briefly faked out pop fans — and nothing has seemed the same since. AI-powered tools, platforms and ways of applying them to music-making have proliferated at a dizzying rate. So have lawsuits brought by record labels and publishing companies over the training of AI models on copyrighted material from their catalogs. It’s contentious territory that’s still largely unregulated, and in many ways, unprecedented.
And for that reason, teaching law students how to navigate AI requires its own innovation. I realized this while speaking with Daniel Gervais, a Vanderbilt University law professor who leads the school’s intellectual property program and closely follows developments in AI. He’s also completing a trilogy of sci-fi novels and the first volume he’s published, “Forever,” is required reading for his course on AI and the law.
“There really isn’t a ton of cases to read, which is what we usually use in law school,” he explains. “Because a case is just really a story between two or more litigants, I decided, ‘Why not write the story?’ ”
One of Gervais’s central characters is an executive in Nashville’s heavily AI-based music industry of 2035.
“The students are going to live with that future, especially when my students then become lawyers,” he notes. “And the question is, as lawyers, how do you adapt the legal system to this new reality of these machines that perform many of the same functions, sometimes a lot better than humans? And then find a way that these machines will both understand and abide by human law? That’s probably the biggest challenge for policymakers, I would say, for the next at least 10 years.”
Jewly Hight: But before we jump a decade ahead, I want to get your take on the present. Of the dozens of lawsuits that have been filed in the U.S. against AI companies, what’s drawn your attention so far?
Daniel Gervais: The surprise is not in the number of lawsuits, but it’s the fact that the initial indications we have from courts is that they will find that a lot of the training is fair use and therefore beyond the reach of copyright.
I think a number of people were not expecting courts to go in that direction necessarily, and that will change the picture.
That being said, it’s a 5- or 6-year process before the entire array of lawsuits is either settled or ends in an award of damages or a dismissal.
JH: So what do the trends you’re seeing in AI litigation more broadly suggest for lawsuits over the use of AI in the music industry?
DG: I think the music industry lawsuits could end up in a different place.
There was a report in a national newspaper that mentioned that major labels are actually offering deals to smaller startups in AI. But there are also persistent rumors that some major labels will agree to let their recordings be used for training in exchange for maybe some stock or some position in the AI platforms.
That doesn’t strike me as improbable, because that’s what happened with Spotify. There was this lawsuit and then the major labels ended up owning something like 18%, I believe, of Spotify.
And my concern there is not that the major labels are getting compensated for the use of the recordings, but it’s whether the creators, the songwriters and the artists, will also receive compensation.
More: Tennessee became the first state to protect musicians from generative AI
JH: When it comes to regulation of AI, it’s hard to make sense of where things are headed. Just before the White House pushed to ban regulation on the state level in the Big Beautiful Bill, Congress took up the No Fakes Act again, which would ban the unauthorized use of anyone’s vocal likeness, and the Copyright Office issued a report questioning how AI models are trained.
Could you place these regulatory efforts in perspective?
DG: The No Fakes Act, take two, so to speak, is not a surprise. Congress has been interested in legislating against certain types of fakes, trying to catch up with previous law, which had two problems. One is it’s state law. So the rights that we currently have are state rights, not federal rights. Congress is trying to make this a federal matter. But also the fact that a voice is central to many artists, and voice was not captured very well by the pre-existing right.
The report of the Copyright Office is generally pretty much on point. But there is this one paragraph that made a lot of people jump, I think, in the AI companies. It says that it would weigh against fair use that you take copyrighted material created by humans, feed it to the machine, and then use the machine to compete against those same creators. Typically fair use has been more about one song or one book or one image competing against one previous song or book or image. Machines diluting the marketplace for human-created works is not something we’d seen as such in the case law before.
More: AI music isn’t going away. Here are 4 big questions about what’s next
JH: How do the policies taking shape in the U.S. compare with measures being considered in the UK and the European Union?
DG: The European Union was the first to legislate on what it calls text and data mining. And what the European Union did was to say, ‘If you’re a not-for-profit research type entity like a university or a library, you can do training for free without asking for permission. If you’re anybody else, you can train unless the people who own the copyright basically told you not to use the material.’ It’s called an opt out. And there are a whole bunch of debates in Europe as to how the opt out works.
The UK, when it left the EU, had to make its own decisions, and the debates are ongoing. And it’s a debate between the rights of copyright holders and authors on one side and then the AI platforms on the other.
There are other countries of note. I’ll mention two. Japan has a pretty broad exception for machine training, and so does Singapore. These are attempts by those countries to attract the AI industry. But the tricky part is, if you want to balance the policy vis-a-vis copyright holders, where do you draw the line? Different countries have drawn the line in different places.
JH: I’ve mostly seen musicians advocating for protections against AI. It does really seem like the lines are drawn between tech companies and creators and rights holders. How do you see the stakes for each side?
DG: Your description is largely accurate; there are these two camps.
People say AI will replace jobs. That’s clear and that’s true. But then they say it’s like the horse and buggy. This was replaced by automobiles and there’s nothing wrong with that. In fact, automobiles are better and faster.
But the point that creators are making is that they contribute to human progress in a way that really no one else does in terms of producing new ideas. And if that is no longer produced by humans in the future because the marketplace has been occupied, so to speak, by the machines, their point is that is an existential threat.
JH: I have noted a few select examples of artists acknowledging that they’ve used AI as a music-making tool. There was the Randy Travis single that was made by cloning his vocal performances on old recordings, and the super producer, Timbaland, has been speaking openly about his new reliance on AI.
What are the stakes, in general, for musicians who make it known that they’re working with AI?
DG: One is more technical, a copyright-related point. And that is that a human needs to be the source of what the law calls the creative choices that make a copyrighted work special or original. And there need to be enough creative choices from a human author — that’s U.S. law and that’s the law in most countries — to obtain copyright protection. Which means if a machine produces a song 100%, there’s no copyright in it. It’s free for anyone to use.
That idea of documenting the choices made by the human author is really important. For example, if you register at the U.S. Copyright Office, they will ask you if you used AI, and then to describe the steps. Then the Copyright Office determines whether there are enough creative choices that are caused by a human.
The second point is a larger point, in a way. Once you transfer a task to a machine, you can no longer perform it yourself after a while. People who’ve never driven a car without GPS can’t drive using road signs, for example. Your brain has offloaded the task and it’s really, really hard to take it back. And so the more machines are used by human creators for certain tasks in music creation, the less likely it is that humans can then perform those tasks and train future artists and musicians and songwriters to perform those tasks. That’s a serious risk.
JH: Some have identified generational divides in attitudes toward the use of AI tools in music making — that, generally speaking, younger, less established music makers are more accepting of the general prospect of using AI. Whereas more established music-makers, those who are already well-known, who’ve already made large bodies of work and done that work over time without the use of AI, are more resistant to it. How clearly have you seen that generational divide?
DG: Very clearly. Some people draw the line at age 30 or 40. I think that’s a bit difficult to do, but generally speaking, younger people were born with the technology in front of them, digital technology, and now AI is kind of the next step in that evolution. AI makes it easier to perform certain creative tasks, so obviously it’s tempting. And AI can do certain things that are very difficult to do without experience.