A word about AI

I thought about writing this as one of my Geek Punditry columns, but that doesn’t really fit the thesis of that feature. Geek Punditry is where I write about things that I love, and this doesn’t qualify. But at the same time, it’s been pervasive lately, and in the last few weeks seems to have gotten worse than ever, so it’s time I talk about it.

Yesterday on LinkedIn, I got a sponsored message offering me the chance to apply to an AWESOME new job making UP TO $15 an hour! And all I had to do – get this – was TRAIN AI TO WRITE. The laughable thing is not the pay, nor is it the fact that they were making this offer to somebody who has been remarkably vocal about being AGAINST the use of generative AI in the arts. The really funny thing is that the algorithm somehow got the idea that I would be willing to work at something that destroys everything I care about.

Using artificial intelligence to generate art of any kind – writing, comics, 3D modeling, music, you name it – is completely abhorrent to me. AI takes the work of real artists, human artists, breaks it down into data points, and spits out some sort of amalgamation that is as bland as it is fast. People love to joke that everything Hollywood puts out these days is just regurgitating old ideas anyway, but folks, you have NO idea how much worse it would get if AI became the norm.

I’ve heard the arguments, of course: 

  • “Human artists draw inspiration from other artists too!” Sure, but they still have the ability to innovate and make something new, which AI does not. 
  • “AI is just a tool, like a typewriter.” Bull. If someone can write an essay using a typewriter and I take the typewriter away, they can write it with a pen and paper. It might not be as fast, it might have more mistakes, but they can do it. If I take away the pen, they can scratch words out in the dirt with their fingers. But if someone can only “write” using ChatGPT and I take away ChatGPT, they’re helpless. That’s not a tool, that’s a replacement. 
  • “New technology has always replaced old technology. Do you think we should still be using the horse and buggy?” The difference here is that in the past, the creation of new technology has brought with it new jobs to replace the old ones. When the automobile arose we no longer needed as many people caring for horses, but now we needed workers in car factories, mechanics, and people to construct and maintain roadways, not to mention all of the ancillary jobs that cropped up as the tourism and hospitality industry grew exponentially to keep up with the greater ability to travel. But AI is taking away jobs WITHOUT any appreciable creation of new jobs, and that’s not sustainable. 
  • “AI is the future.” Calling something awful the way of the future has been the tool of every despot in history. You don’t get to decide what the future is, the future will decide that itself.

I was blindsided a few weeks ago when I discovered that National Novel Writing Month, the annual writing challenge that I have participated in and championed for nearly 20 years, was taking advertising from companies that use generative AI. What’s even worse, when asked to define their position on the matter, a spokesperson for NaNoWriMo said it was “ableist” to deny people the right to use AI to create. A great tactic, that. The surest way to try to get the internet on your side is to call your opponent anything-”ist,” because there’s nothing in the universe worse than being an “-ist.” But it’s a garbage argument, friends. Has anyone ever looked at one of Stephen Hawking’s books and said, “if only there was an algorithm that could have written this for him”? Has anyone ever thought that about the works of Helen Keller? Has anyone said that Beethoven, Ray Charles, or Stevie Wonder really could have made something of themselves if there was a computer to compose for them? No, NaNoWriMo, calling it “ableist” to oppose generative AI is a slap in the face to every person who has overcome their own difficulties, and a transparent, pathetic attempt to deflect the criticism coming from the very community that you helped to build.

I deleted my account. It hurt, but I did.

I need you to understand that I do not oppose artificial intelligence in its entirety. It CAN have uses, and it DOES have positive applications. I teach a unit on this to my senior class every year, and as such I try to keep up with what it is and how it can be used, so I flatter myself to think I know at least a little more about the topic than a lot of people. One thing AI is really good at, for example, is pattern recognition, and that can be very useful. It can detect potentially cancerous cells before they become malignant. It can be used to sort and categorize information. Hell, you could theoretically use it to help solve crimes. These are things that are beneficial, helpful, even potentially life-saving.

But using AI to write a book or draw a picture benefits nobody except for the person who didn’t want to devote the time and effort to learn how to do it themselves.

I don’t even understand how anybody can take pride in something they “create” with AI, as all they’re essentially doing is describing what they want. If I need a book cover, I contact an artist (usually my pal Jacob Bascle, who has done most of my books) and we discuss what I’d like it to look like. He does a mock-up, I give him thoughts on any changes or adjustments I want, and then he creates a finished product. But at no point in the process do I think I can call myself the artist or the designer of this piece, any more than someone who commissions a painter to paint his portrait can call himself the artist, or someone who goes down to Sears Portrait Studio (does that still exist?) can call themselves a photographer. I can be happy with the design, and I always am, but the pride I feel is because I know this is a cover that is going to get people to look at my book, not because I feel like I had any true hand in its creation.

The problem is that the people with the purse strings love AI because it can do the job CHEAPER and FASTER than a human being, and don’t give a damn if it’s actually BETTER. The tragedy is that, especially when you’re talking about movie and television production, these are the ones deciding WHAT GETS MADE.

So what can we do about it? There’s only really one way to stop it: we have to make it unprofitable. If someone is using AI in the creation of a movie, or a television show, or a cartoon, or a novel, or a video game, or a comic book, we have to collectively decide to NOT SUPPORT THAT WORK. Lionsgate, for example, has recently signed a deal with an AI studio that they hope can be used to eliminate things like storyboard and visual effects artists. Awesome, right? Faster! Cheaper! Worse product that puts actual human beings out of a job, but who cares as long as it’s faster and cheaper? So that means that I can’t – and none of us should – continue to support the studio behind the Hunger Games and Saw franchises, among many others.

Then there’s James Cameron, director extraordinaire, who has joined the board of directors of Stability AI, the company behind things like the controversial Stable Diffusion system. You would think the man who created Skynet in the Terminator franchise would know better, but no. Instead, the company says that having him on board will “empower creators to tell stories in ways once unimaginable.” The real takeaway here seems to be that Cameron is more interested in shiny new technology than he is in actual creativity or innovation in storytelling, although that shouldn’t come as a surprise to anybody who has seen Avatar. 

Of course, the tough part about a boycott of these companies and creators is that you can’t trust that they’ll all be as honest about it as Lionsgate and James Cameron, so we may wind up throwing support behind AI without realizing it. That’s where we need the creators themselves to take action. Last year we saw a prolonged strike by both the writers and actors in American film over various issues, AI included, but it doesn’t seem like the industry has learned its lesson. So now we need the writers, directors, actors, and other creatives making these things to refuse to work with companies or individuals that use generative AI and, what’s more, BE VOCAL ABOUT IT. We need them to TELL us when they turn down a job because of AI so we know not to support that work, because otherwise we’ll see it quickly spiral into a modern witch hunt of accusations. Earlier this week, people accused Disney of using AI in the creation of the new poster for the upcoming Thunderbolts* movie. That accusation appears to have been unfounded, but you can be sure that more people heard the accusation than the exoneration. It just proves that we need first-hand accounts, not speculation.

I know that’s easier said than done. These people are under contract. A lot of these contracts include a clause forbidding them from speaking out against the company they’re working for. And people at the bottom of the hierarchy may not be able to afford turning down work for reasons of integrity, because ultimately most people will have to choose putting food on the table over principle. (The sad irony is that these people at the bottom are also the first ones that will be replaced when AI use becomes rampant.) I feel for these people, and I don’t blame them for staying quiet. So it’s going to have to be up to the people at the TOP to speak up. Can you imagine the response if people like Zoe Saldana or Sigourney Weaver said they’re not going to make any more Avatar movies as long as Cameron is involved with Stability? The impact could be seismic.

My biggest fear is that it’s already too late. Pandora’s box is cracked. (The original Pandora, not the Avatar one.) If it’s opened too widely, it’ll be impossible to stop this. We need to fight back against it now while there MAY still be time to push it back. If not, the future of the arts will just be as bleak as the one James Cameron once tried to warn us about.

Geek Punditry #25: Artificially Entertaining

In this week’s episode of “Things People Are Outraged Over on the Internet,” we’re going to talk about Marvel’s Secret Invasion. The new miniseries dropped its premiere episode this week, bringing back Samuel L. Jackson’s Nick Fury as the star of an espionage thriller about the Skrulls – a race of shapeshifting aliens – infiltrating Earth and subtly influencing world events by pretending to be human, a clever analogue for what happens when Californians move to Texas. But the thing that has people upset isn’t the content of the show, it’s the opening credits sequence, in which the bizarre and unsettling images shown to the viewer turned out to have been created, at least in part, by an AI image algorithm.

Bet you couldn’t even tell.

AI has become a hot button topic in creative endeavors. Not so long ago, the world of comic and commercial art was consumed with a debate over how AI functions, with many programs essentially scraping the internet for existing artwork and using that as a basis to synthesize new images. Some people argue that this isn’t all that different from a human artist drawing inspiration from the works of other artists, while others say that this amounts to plagiarism on the part of the person using the AI to generate “new” work. While I’m not an expert on any of these matters, I find them fascinating and a little bit scary, not only as a fan of media, but as a writer and as a high school teacher as well. AI is becoming more and more prevalent, and the fact of the matter is that we as a society are going to have to decide what the place for things like the Secret Invasion opening sequence is.

For what it’s worth, the US Copyright Office has already laid down a ruling. Earlier this year, following a debate over a comic book created with AI art, the Copyright Office ruled that only work made “by humans” is eligible for copyright. So the next time your neighbor tries to show you that painting made by having his dachshund dip his little wiener legs in paint and walk across a canvas, feel free to steal the painting and put it on a T-shirt.

Back to Secret Invasion, though. When word got out that the intro sequence was made using AI, there was something of an internet firestorm. Artists were pretty angry about it, saying that the AI had cost graphic artists work, and the studio responsible for it quickly tried to “clarify” the announcement that it was AI animation. Method Studios, the company that made the sequence, released a statement to the Hollywood Reporter which read, in part, “AI is just one tool among the array of tool sets our artists used. No artists’ jobs were replaced by incorporating these new tools; instead, they complemented and assisted our creative teams.” If that doesn’t clear things up for you, congratulations! You’re normal. One of the things that makes AI so controversial is the confusion over how exactly it works, and for those of us who don’t entirely understand it all, statements like this one do absolutely nothing to illuminate the issue.

Speaking purely from an artistic standpoint, I get what the makers of the show were going for. A lot of AI art is, for lack of a better term, “unearthly.” For all the things it can do well, a lot of the images you get from an AI generator like OpenAI’s DALL-E are still a little “off” when it comes to creating realistic images of people. You get people with extra fingers, noses where they shouldn’t be, or faces that look like they went right up to the edge of the Uncanny Valley and bungee jumped in. That unearthly quality is actually quite appropriate for the story behind Secret Invasion, which is (again) about alien shapeshifters that are ALMOST human, but not quite. So yeah, I get the idea. But just because I understand the idea doesn’t make it a good one, particularly considering the current climate in Hollywood when it comes to AI.

In addition to the aforementioned controversy that consumed the world of comic books not that long ago, I feel like somebody at Marvel Studios should have opened the blinds of their office windows and looked at all of the writers currently marching in picket lines. The Writers Guild of America went on strike on May 2, and for almost two months Hollywood has been unable to generate new scripts or make any changes to existing ones. They could go ahead and film scripts that are already finished, but nothing new is being made. A lot of film and TV productions have had to freeze production, including some of Marvel’s own upcoming shows like Daredevil: Born Again. And while there are many, many issues at play here in the writers’ strike, one of the big ones is the proposed use of artificial intelligence in Hollywood productions. The fact that apparently NOBODY on the Secret Invasion team thought about this at any point in the seven weeks since the strike began and said, “Hey, maybe we should change up the title sequence” is truly baffling.

This is what AI sees when it looks out the window. Uncanny.

Writers, as you may imagine, aren’t keen on the idea of AI being used to turn out scripts. Of course, many of you have probably seen posts on the internet where people used AI to write a script for, say, an episode of Seinfeld, and what it returned was something that was full of clichés and tropes related to the material, but laughably inept. It was funny because of how close it was, but it wasn’t close enough to pass for the real thing. The same thing goes for a lot of the AI artwork we’ve mentioned. “Ha ha!” you say. “An algorithm will never be able to produce work of the same quality as a human being!”

Elaine: Hello, Jerrald. Shall we resume our frequently-alluded to previous relationship?
Jerry: Perhaps, after I have finished enjoying Superman and breakfast cereal.
George: Women despise me.
Kramer: I have entered the room!

Except that ten years ago, it was unfathomable that an algorithm would be able to produce something as close as those fake Seinfeld scripts, or that almost-but-not-quite real image of a Skrull used in Secret Invasion. And here’s the other thing, guys: the AI isn’t going to get any dumber. It’s just going to get better at it. And while some people will still argue (and I hope they’re right) that no AI will ever be able to produce something as good as a work of art created entirely by a human being…that’s not the point. With most of the media we consume being turned out by giant corporations (remember Secret Invasion is owned by Marvel, which is owned by Disney, which is owned by the Skrull Empire), the question is: will it get good enough? At what point will CEOs say, “Why are we paying writers when we can just have the computer spit out a script that people will come and see anyway, even if it’s stale and derivative and just a Xerox copy of a thousand better ideas?”

Because if that remains an option, you know they will.

“No,” the corporate types say. “AI is just a tool, and artists have always had to learn to use new tools to create their own art. It’s no different.”

It is different, though. A typewriter is a tool that allows people to put words on the page faster. An airbrush is a tool that allows painters to have very precise control of their lines. Photoshop is a tool that allows for different effects to be made on preexisting images, carefully and intentionally manipulated by the person using the software. But never before has it been possible to say to one of these tools, “Hey, give me a picture of Santa Claus wearing a Star Trek uniform” and then just sit back and wait 0.8 seconds for the work to be done. 

Nailed it.

As I mentioned before, I’m a high school teacher, and I actually teach an entire unit about Artificial Intelligence in which we touch upon many aspects of the concept. The idea is that it’s a high-interest, controversial subject that’s going to be very relevant to the lives of my students (most of whom are 16 to 18 years old), allowing them to learn to write arguments to defend their positions on a complicated topic, no matter what those positions may be. The upshot of it is that I’ve learned way more about AI than I ever thought possible, and some of it scares the bejeezus out of me. When I started teaching this unit a few years ago, the conclusion that many students arrived at was that the integration of AI into society would eliminate many low-level jobs and that our economy would have to pivot to something that’s more craft-based – in other words, giving more importance to the arts. Writing, painting, making things by hand, things that AI can’t do. Hah. Boy, were we wrong.

Another thing that’s come up is the problem of academic dishonesty, of students using AI the way that the studio bosses will, and just having it whip up their schoolwork for them. In the past, it’s been relatively easy to catch someone plagiarizing an essay. You just pop the text into Google and you can find it in seconds. I’ve caught students copying from Sparknotes, from Wikipedia…one time a kid turned in a book report that had been copied verbatim from the back of the DVD case of the movie version of the book. It’s almost funny. But with AI doing the work, the student still isn’t learning squat, and it’s a lot harder to catch.

“In conclusion, the best part of The Great Gatsby was when Leo Titanic held up his champagne and made that meme.”

Last week, as an experiment, I played around a little with ChatGPT, probably the best-known of the AI algorithms that are being used in this fashion. I fed it an essay prompt regarding Kate Chopin’s classic “The Story of an Hour,” a story that’s less than four pages long, and as such makes for a great subject for a quick writing exercise. The essay that ChatGPT spit back to me (so fast that my hand hadn’t even left the keyboard yet) answered the question fully and on point. If a student had turned it in to me, the only clue I would have had that it was plagiarized would be if I simply didn’t think the kid in question was capable of work that good…and I’d have no way to prove it.

Now I’m not bringing this up because I want anyone to suggest solutions to the problem. We (and by we, I mean the teachers and school district I’m a part of) are already discussing the issue and looking for ways to deal with it. I bring it up just to illustrate the point. Some people will say, “Well why does it matter if the kid knows ‘The Story of an Hour’? When are they ever going to use that information in real life?” They won’t, you moron, that’s not why we write essays. We write essays so that students will know how to construct an argument, and having the computer do it for you is just going to leave you unable to do it yourself. (These are probably the same people who whine that they were never taught “how to do taxes” but also slept through every basic math class they ever took.)

Now to be fair, AI can be used as a tool. Not long ago I had a very interesting discussion with a comic book writer/artist of my acquaintance who told me some of the ways he was using it – to help with research, for example, or to evaluate his own work. And there’s definitely merit to that. I played around with ChatGPT some more and tried it in the ways he suggested. As a research assistant, I determined that it can definitely give me better and more nuanced details than a simple Google search can. On the other hand, some of these AI programs have been known to make up information out of whole cloth that SOUNDS correct in order to answer the query. Some of them have even written citations for sources that do not exist, which is kinda hilarious to me.

Then there was the question of using it to polish your own writing. I fed ChatGPT the first two pages of a new story I’m working on and asked what it thought. It gave me a response that was very complimentary, telling me that the characters were well-illustrated and that I’d done a good job of painting a picture of the two teenagers having a conversation on that page. Then it gave me tips for improving my sentence structure.

I was gobsmacked, not just by the entirely accurate critique of my structure problem, but by how well it understood what I was trying to write about. It said, and I quote, “Overall, the passage you wrote has an engaging and descriptive style that draws the reader into the narrator’s perspective and world…There is a conversational tone to the narration, which adds authenticity and relatability to the character’s voice. The passage effectively blends introspection, self-reflection, and storytelling to create a strong narrative voice.” I’m quoting this, by the way, not to brag about how awesome I am, but because if I were to read this in a review of my work written by a human being I would have been terribly flattered, and when I realized I was equally flattered by the AI…well, I was a little embarrassed. Fortunately, nobody would ever know I felt that way even momentarily unless I did something stupid like post it on the internet. 

AI is an increasingly complicated issue, and I’m not trying to settle the debate, merely to illustrate how I feel about it. There’s no getting rid of it at this point, the box is open and Pandora has run away to hide under the bed and point a finger at Epimetheus to try to deflect blame. Since we can’t eliminate it, then, the only thing to do is to try to figure out how to use it responsibly. 

As for what exactly that means, your guess is as good as mine. 

UPDATE: A Facebook conversation about this topic with artist Jesse Elliott has led to him posting his thoughts on the issue on his own blog. Please head over there and read his perspective!

Blake M. Petit is a writer, teacher, and dad from Ama, Louisiana. His current writing project is the superhero adventure series Other People’s Heroes: Little Stars, a new episode of which is available every Wednesday on Amazon’s Kindle Vella platform. Just for giggles, after he finished writing this column he showed it to ChatGPT to ask its opinion. It returned a seven-point critique that basically said, “Hey, ya did pretty good.” ChatGPT is at least genial. If it takes over the world it will do so very politely.