On the Wednesday, Sept. 17, 2025 episode of The Excerpt podcast: Are we on the precipice of AI-generated art replacing human creators? Ramesh Srinivasan, a professor of Information Studies at UCLA, director of the UC Center for Global Digital Culture and host of the Utopias podcast, sat down with USA TODAY’s The Excerpt to discuss the future of art and generative AI.
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Dana Taylor:
Generative AI is now widely used to make artwork, music, and even film.
MUSIC:
Dust on the wind.
Boots on the ground.
Smoke in the sky.
No peace found.
Dana Taylor:
Valuable cultural contributions that used to be exclusively the province of the creative class. Could AI-generated art start displacing human creators? And what about AI’s cultural influence on society writ large? Hello, and welcome to USA TODAY’s The Excerpt. I’m Dana Taylor. Is AI-generated art the beginning of an artistic revolution or the stuff of a dystopian nightmare? Here to help me unpack the many ethical and societal issues at play is Ramesh Srinivasan, a professor of Information Studies at UCLA, director of the UC Center for Global Digital Culture, and host of the Utopias podcast. Thanks for joining me, Ramesh.
Ramesh Srinivasan:
Thanks, Dana. Thank you for having me.
Dana Taylor:
Let’s start with the recent news about a popular new band called The Velvet Sundown, which we featured in our open. Their two albums quickly went viral online, generating over a million plays on Spotify before it was suddenly revealed that the band, the images, and the music, all of it were entirely generated by AI. Some fans felt they were tricked, and listeners should have been warned. Let’s unpack this in stages here. First, can you talk about the ethics here? If no one was hurt, is there even an ethical conflict in the first place?
Ramesh Srinivasan:
I think there are deep ethical questions, right? Because we have to ask the question, how was this album created? What did it use for its creation? So many generative AI-created artworks and musical works are using existing human-created content. Generative AI systems do not create art or music in a vacuum; they are learning from our data, our creativity, and when I’m saying our, I mean we human beings, right? And they’re exquisitely mimicking various creations that we’ve all developed, almost completely without any compensation for any of us, without any disclosure for listeners or the wider public, or even those whose content might be used in such creativity. And basically, this is now creating more and more of a sense of distrust amongst many around what is or is not real. Now, of course, what we consider to be real has always involved machines on some level, but it’s not machines replacing human beings without any disclosure or compensation. And therein lies the major set of issues that we really have to deal with right now when it comes to AI and creativity.
Dana Taylor:
Let’s stay with this issue from the perspective of the creative class. How can we allay their fears that their work isn’t simply being co-opted by the large language models, or LLMs, that AI uses? How can artists safeguard their intellectual property?
Ramesh Srinivasan:
I have quite a few friends in the Writers Guild. I live in Los Angeles, I teach at UCLA. I host a podcast Utopias, which has a lot of artists on it, and I have quite a few friends in the Screen Actors Guild, the SAG, as well. As you know, they both were protesting quite significantly over the past year or two to ensure that they would be protected, given the onset of AI and the likely violation of many forms of copyright that attorneys are only just beginning to catch up to.
So, I’m not interested in fearmongering on any level at all, but I don’t think there is a way to allay such fears, because I do believe on many levels this is a violation of human creativity, human labor. And most importantly, in a nation and world that seem to get more unequal by the moment, amplified by the direction digital technologies are taking, this is an opportunity for us to set things right. So, there’s a qualitative distinction between technologies that replace all we do without compensation or disclosure, and technologies we use to enhance and augment our creativity. And we need to ensure that digital technologies support what we all are creating, rather than replace us.
Dana Taylor:
Artists with physical works are now using tools like image cloaking and data poisoning to prevent these LLMs from learning to mimic their styles. Can you talk us through how these work and are there parallels to these tools for musicians and filmmakers?
Ramesh Srinivasan:
Yeah, it’s a tough slog actually, to be honest, to try to throw one technology at problems created by another technology. So, what do I mean by that? Like I, as a faculty member at UCLA, have been teaching undergraduate students for 20 years, and I know fully well that many of them are not really writing the way we all used to, or even my students just a few years ago used to. Many are not necessarily reading either, and that’s really tied to social media and reels and clips and the ways it’s affecting our attention and our focus. Right?
So given all that, that’s a long way of saying that you can’t really throw another technology, to error detect or privacy protect, at problems that are caused by another invasive technology, because almost every technology has false positives. So, what do I mean by that, Dana? Many technologies end up in many cases promoting false content or misidentifying content, and this includes older AI systems, even ones fairly recent, like Google’s image recognition system that had trouble telling the difference between images of gorillas and images of African people. This includes Elon Musk’s Grok system that went full Nazi just recently. So, false positives can also come from technologies that are trying to detect whether a generative AI system is being used.
So, what we need are binding rules and checks and balances to ensure that AI systems in certain cases, sure, let them create content, but they shouldn’t create content that harms artists and creative folks in the creative industry. And I think that’s very important, because one thing we do very well as humans is be creative. We still don’t fully understand what creativity is, by the way; cognitive scientists, philosophers, artists themselves have all speculated on that. We know that it’s something very, very important that we all do.
So, technologies that mimic or emulate or imitate, this was a big term in AI dating back decades, imitation and imitation game. Technologies that imitate are not the same as technologies that create. Anything that we create borrows on and is inspired by other texts, other films, other pieces of music. Right? We know so many bands that are inspired by other bands, but you don’t take the human creativity out of the equation, particularly at a moment where we see, again, a lot of anxiety about where jobs are going to be. That’s why SAG, Writers Guild of America, many musicians are very concerned about the moment we’re in. And so that’s why we need to, before it gets too late, just like what happens with social media, rein this in and ensure that we have a balanced world. A world that serves we human beings rather than a few corporate investors that place us in unnecessary silos of division.
Dana Taylor:
Netflix recently shared that they’ve used AI-generated scenes in one of their TV shows called The Eternaut. That show is science fiction, and so using AI to create some scenes there makes intrinsic sense. But what about using AI in a documentary? Do you see a line, and where is it, between using AI to make video better or cheaper and using AI to make something deceptive that tricks audiences? Is it all about the intent?
Ramesh Srinivasan:
I think intent is key. I, of course, recognize the power, the superpower, of these generative AI systems as tools that can aid us, that can support us. There are many positive uses for generative AI systems. I keep calling it generative AI because these AI systems, which have been introduced in the last few years, most notably ChatGPT but others as well, are very, very different from the history of AI. Even the large language models that they’re built upon date back to the ’70s; people don’t know that. So of course, technologies as tools to help creativity is what I’m getting at. That could mean using generative AI within movies, movie scenes, post-production, maybe some editing, perhaps even letting folks know, this is an AI-created scene. Whenever we see anything online that’s created by an AI or substantially modified by an AI, we should definitely know that an AI was involved. And that’s okay, that doesn’t have to be a big deal.
But when an AI prevaricates, poses as human, poses as truth, specifically in your question with the documentary, that is deeply troubling and that builds on what we’re seeing in our country. We’re here with USA TODAY, but I’m seeing it around the world, which is a deep distrust of one another, a questioning of authenticity and sincerity. I never even knew the word catfishing until fairly recently. So, this is a new normal that we don’t have to make normal. It’s actually the new abnormal. I don’t think that that’s what we all want, right?
So what we want is to harness these powerful tools that are built on all our data. None of these AI systems would work without our data, which we’re not compensated for. We don’t even know what of our data is being used, right? What band or set of bands’ content was used to create this sort of fake album, or this fake band, that we discussed earlier? So, there is a qualitative distinction between using AI to support creativity versus using AI to replace human security, creativity, and I would say even dignity.
Dana Taylor:
Let’s shift now to art consumers, the viewers and the listeners. Is putting AI-generated art out into the universe deceptive by nature? What’s the harm if it’s all being voluntarily consumed in the first place?
Ramesh Srinivasan:
It’s deceptive if it’s not clear that it’s created by an AI, or at least substantially modified by an AI, because generally speaking, we human beings, when we encounter works of art, think that they’re created by other human beings. I love environmental beauty, but I know when I look at a beautiful mountain or a beautiful forest that that’s nature’s work, right? That’s not created by an AI. So, what I’m getting at is not that AI, bad, human, good, or this kind of nonsense. Just that we deserve that as human beings; after all, it’s we human beings that created these incredible technologies. So, we deserve to know what we’re seeing and where it’s coming from.
And to this point, there are some substantially successful artists who work with AI, including some of my former students at UCLA, and that’s wonderful. But it’s very clear that they are working with AI, that they’re working with the language models, and it’s a human-machine entanglement that’s creating this beautiful form of art. And that’s actually, again, what I’m getting at, something we’ve always done, right? The paintbrush is a technology. The printing press is a technology. These are all tools that enhanced and augmented human creativity. They didn’t pretend to be human and confuse us all.
Dana Taylor:
The U.S. Copyright Office issued a ruling earlier this year that said, “Artworks solely produced by generative AI can’t be copyrighted unless there was meaningful human input into the creation.” Does this guidance protect the incentive for human creativity?
Ramesh Srinivasan:
I hope it does. I’m glad to hear of that. I followed that ruling when it occurred. I want everybody to know that we came very, very close just a few weeks ago, when Trump’s so-called Big Beautiful Bill was passed, to having a clause included within it that would’ve banned any state from regulating AI for, I believe, the next 10 years. Can you believe that? That would really lead us down the same path that we’ve gone with social media, which people profoundly distrust while being addicted to.
So yes, I think the devil is always in the details, as it often is when it comes to law. A work of art created 99.9% by an AI with just 0.1% of token, symbolic human input, that should not be okay. But I think this question is interpretive, right? What is considered meaningful human input? At the minimum, though, we should have watermarking-type disclosure anytime any significant use of an AI is involved. So even if it’s not about property, it’s for we, the public. We are then able to actually know what we’re engaging with, what we’re looking at.
We are actually seeing fake content, bot-created content, which is an earlier stage of this generative AI, go viral on tech platforms, on social media platforms like IG, TikTok, etc., certainly on Twitter. When we see fake content go viral because it’s designed to inflame or arouse or grab our attention, then we understand that not all speech, not all creations, not all creativity is equal online. And AI systems can be magnificently equipped to actually disinform us, to deceive us, because they know what works for that, because they’re all built upon our data, all the data around engagement and so on. So, that’s why it’s absolutely critical that the benchmark be high in supporting human creativity and human property, at least in these early stages of the emergence of generative AI around creative works of all types.
Dana Taylor:
I want to pivot now to art’s broad influence on the evolution of societal and cultural values. Playwright and poet, Bertolt Brecht, famously stated, “Art is not a mirror held up to reality, but a hammer with which to shape it.” Is there a risk here of AI having a negative influence on societal and cultural values? And what entity, if any, would be able to guard against that?
Ramesh Srinivasan:
So I love Brecht, actually; he’s one of my favorite playwrights, and I appreciate that line. It’s quite a famous line by Brecht. I do think that art is not supposed to be a mirror, but AI should not be a black mirror, if you know what I mean. That’s a reference, of course, to the award-winning Netflix show. So what I mean by that is, art and artists are interpreting and reflecting upon the human condition. They’re reflecting upon beauty, upon many of the aspects that we know are very dear to us as human beings, inhabitants of this planet. But at the same time, reflecting on things, interpreting our lives, interpreting and telling stories is very different than all art being like a hammer, like a sledgehammer to the head.
I don’t see many safeguards at present. I do believe that we can institute some checks and balances right now that not only deal with the questions of anxiety, the distrust that has set in, but also deal with the economic anxieties tied to emerging AI.
Dana Taylor:
Ramesh, it was wonderful to talk to you about this topic. Thanks for coming on The Excerpt.
Ramesh Srinivasan:
Thank you, Dana. I enjoyed the conversation. Thank you.
Dana Taylor:
Thanks to our senior producers, Shannon Rae Green and Kaely Monahan, for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening, I’m Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of USA TODAY’s The Excerpt.