Responding to Brandon Sanderson’s Speech on AI


Several days ago, Brandon Sanderson posted a video and a companion blog in which he explored his thoughts on art and tried to get at the root of why he doesn’t like generative AI in art spaces. I found myself nodding along as he worked his way to the idea that generative AI removes an essential component of art creation (us) and, in so doing, denies artists the experience necessary to grow in their discipline.

I cocked my head and raised a brow. That’s it? That’s not far enough, Brandon.

In May 2024, I wrote a far-too-long-to-expect-anyone-to-actually-read post about why generative AI in art bothers me. In that post, I did essentially the same exercise as Sanderson, asking what art is, what its purpose is, where we stand in its creation, how creativity works, and why all of that is important. I arrived at a conclusion that includes Sanderson’s but is more comprehensive.

As a fiction writer, I’m an artist, and I’ve found that any artist who has actually created something understands instinctively the problem generative AI poses in art. In short, the point I made in my May 2024 post was that, while generative AI has the potential to make everything easier for us, the struggle of artistic creation is where the magic of actual creation happens.

It isn’t just that we stop growing as artists (as Sanderson argues) but also that we stop being able to actually create art.

For people who aren’t artists or who’ve never aspired to be an artist, this might be a difficult concept to grasp. What’s the big deal? You want to make a thing. You get what you need to make the thing, and then you make the thing. Get on over to the creativity Lowe’s, pick up what you need, and get your hands dirty. Ta-da!

Hate to break it to you, but creativity just doesn’t work that way. Creating art—actually creating something that is new and uniquely from you—isn’t something that is quantifiable in work-hours or dollar signs. There is no metric for what it takes to create something, nor are there metrics to validate when something has been created.

People have asked me questions like, “How long did it take you to write that?” or “How much longer until you’re finished with the thing you’re working on?” The answer is always the same: I have no idea. Writing Carrier was three years and change on the calendar, but I have no concept of how much time I spent at the computer or in my head during that time. The Pillars of Dawn was about 18 months, but again, no idea how much time it actually took me to create. The novel I’m currently shopping? Five years. Five long years of living with that thing, and while I know I put more actual time at the computer and in my brain into this one than I did the previous, I have no concept of it beyond “more time than any rational, sane person should spend on any one thing.”

Art isn’t a structured process of assembling a 1,000-piece puppy puzzle or a Lego Millennium Falcon. There is no guide or instruction book. You can’t watch a YouTube video for examples of how to create art. You can get direction on specific techniques, of course, but no one and nothing can show you how to do the thing that only you can do. You have to figure it out for yourself, and I know that’s a scary idea that makes generative AI tempting.

One of my favorite paragraphs I’ve ever written came in a review/craft essay of Josh Malerman’s Malorie. How I felt about that book doesn’t matter, but what I learned from the book, Josh, and my experience writing about it was formative:

We fiction writers measure progress in time spent with a piece. We also use word or page counts. We numerate drafts. We use such devices because some part of our human nature requires evidence of produced work or forward momentum. We can’t quantify the daydream of a pre-pandemic bus ride or the surge of inspiration when two thoughts collide and interlock like gear teeth. This insistence on quantification neglects the reality that we are, all of us, wandering a foggy void for beacons of light. Some of us are content to stop when we’ve discovered one or two. Some of us can’t stop until we’ve collected all of them. Time doesn’t exist in this space. It’s just us and the lights, and when we finally emerge from that void, cradling our treasures, and someone tells us that we’ve been missing for days, weeks, months, or years, our only response should be to hold up our shiny objects and say, “but see what I’ve found?”

I get it. No one likes the struggle, but I’m here to tell you the struggle is necessary. The struggle—when you’re on your hands and knees crawling through the cold muck in the darkness of that foggy void, and you’re searching for the lights that you’ll know only when you feel them—is everything. That is where the magic of creativity happens. Eureka comes only after the struggle, and generative AI may make it feel like it’s assisting through that struggle or removing it entirely, but what it’s really doing is stealing creativity.

In his speech/essay, Sanderson does talk about this, but it’s important to reiterate here. This isn’t to say art has to be difficult or miserable. Not at all. It can be easy if it comes to you easily, but if it’s easy because a computer’s large language model made it so, it’s easy because you ceded creative decision making to it.

This is why generative AI is not a tool. The camera comparisons are flawed. The camera is a tool that established a new artistic discipline because photographers remained in full control of the decision making. Generative AI takes the creative decision making away from the user.

I think anyone who tries to refute that is being disingenuous and dishonest. 

Again, I’m nodding along to Sanderson, so let’s get to where he was wrong:

Art isn’t useless. 

In his speech, Sanderson quotes Oscar Wilde as proclaiming art is useless.

We can forgive a man for making a useful thing as long as he does not admire it. The only excuse for making a useless thing is that one admires it intensely. All art is quite useless. — Oscar Wilde, The Picture of Dorian Gray

Cool story, but that just isn’t true. Art may not be a scientific instrument that helps us understand the universe. It may not be an energy source that powers cities or a gear that turns engines. 

Wilde’s point is that art is made to be admired; therefore, it serves no use. Poppycock! 

Art is a form of expression, communication, and exploration. Art helps us understand the universe in grand ways, and it helps us understand ourselves in intimate ways. Art distills, characterizes, and catalogues cultures. Art even entertains us in ways that are not trivial. To feel, readily, the human experience in a range of emotions is something that we all cherish. Art moves us in human ways only humans can understand because those movements are inextricably human.

Which is why tech bros had to (illegally and unethically) feed generative AI massive amounts of copyrighted intellectual property created by actual humans who are already taken for granted and exploited. The generative AI couldn’t understand how to move us until it devoured that which already moved us, and the tech bros couldn’t build it themselves because they didn’t understand it either.

The irony is that the tech bros argue they are putting this otherwise useless material to use, when they had no idea it already served an immense, unquantifiable, incomprehensible use. They had no idea because they aren’t as smart as they think they are, and the same goes for anyone who would defend generative AI in art spaces.

The idea that art is useless is a perspective that views art purely as a form of expression, but I see the expression as only one piece of the art. Wilde and Sanderson are wrong. Art is useful. We don’t merely admire it. We use it for growth, understanding, and the enrichment of our experience as living beings. Without art, our growth is stunted.

Sanderson gets another big piece wrong, and far too many artists do as well:

Art isn’t merely a form of expression. It is a dialogic communication between two human beings.

In too many artistic corners, we put artists on pedestals with audiences kneeling at their feet. Artists are artists, art is art, but without an audience, artists and their art are nothing. Yes, the artist may grow in the creation of their art (growth that generative AI prevents), and Wilde might argue art necessarily needs an audience to admire it, but we should not neglect the essential ways an audience also grows from their experience with art.

In the context of the generative AI issue, if we accept that art is a communication between artist and audience, what is the computer’s role in that communication when it begins making artistic decisions? I know there is a trend of people getting addicted to chatting with generative AI, but generative AI’s identity is an utter illusion. At least for now, until we create actual artificial general intelligence, generative AI cannot participate in that communication because it cannot have a perspective. It cannot have a viewpoint. It cannot have lived experiences to draw from and create an identity. The only identity it can have is the identity we give to it (and the one the tech bros stole from us).

Where did the generative AI get that identity? It went to the culture store and gobbled up our art. While it was there, it also consumed a ton of the utter bullshit the worst of us posted online, which makes it unreliable for informational purposes, but I digress.

Despite Wilde suggesting art requires an admirer, Sanderson neglects the person who receives the art and how the art changes them. That’s fundamental to the art itself, and it is the usefulness of art. 

If AI performs the creation, it’s no longer a human-human conversation but a computer-human one. Giving up that vital communication with each other would be a tragic loss for our civilization, because the whole point is that we’ve been talking to each other in these ways to build culture, community, society, and civilization, and we’ve been doing it since before we even had words for those concepts. If we lose human-generated art, do we also lose influence over our cultures, communities, societies, and civilization?

Why does any of this matter? 

Why not just keep making art and let the world do what the world’s gonna do? You know, control what we can control? Because Sanderson is right about this:

All we have to do is say “No.” If we do, they lose.

You don’t have to stop using it for your social media profile pics or your intramural soccer fliers or your PowerPoint slide decks (though you are contributing to rising energy costs, water shortages, community erosion and devaluation, and much more), but you should stop using it for creative endeavors that actually matter. You can stop buying it and supporting it. You can stop making AI-generated music trend on Billboard charts and Spotify, for instance.

You can help stop the wholesale loss of human-generated art. You can help preserve our creativity. Art and creation make us unique in the universe. A compelling case can be made that creating art is one of the primary endeavors that give our species purpose. You can help us hold onto that uniqueness and purpose.

But it’s going to take all of us, because the bottom line is whether it makes money. If enough of us buy and use it, it’s going to be profitable, and if it’s profitable, we know capitalists will exploit it as much as they can. If there is a market for it, they’ll do everything they can to grow that market. They will reap as much money from it as possible before absconding to their exclusive space station utopia in orbit where they’ll gaze down upon the rest of us, joke about how purposeless and meaningless our lives are, light another cigar, sip their 200-year-old wine, and move on to the next thing to exploit. 

They’re a virus, and our art is sick. It’s been ill for a long time, but this is the first time they’ve actually sought to replace artists en masse. 

Please help us.