Am I the only person curious whether something can be done, but not keen on seeing it done? The internet is flooded with AI-generated images and mixed writing results from the new ChatGPT site. I remember Dr. Malcolm saying that the scientists were so busy trying to see if it could be done, they never stopped to ask if it should. Well, I’m a fiction writer and a software engineer, and I suspect it can be done. Here’s how.
Not ChatGPT
At present, ChatGPT seems good at summarizing things. Ask it to explain “steamfunk” and the answer is good enough to have been a Wikipedia article. It even mentions one of the founders of the genre, Milton Davis. Cool. But ask it to write a poem or a short story, and the results are bad. It’s not creative, as if creativity were something magic and uncodeable.
From articles and interviews, I think ChatGPT has an understanding of the things it writes about. Perhaps not the way we do, but it understands what a creator is and can relate that to God or a parent when it creates responses in the style of one. You can’t do that if you don’t know those things are, in some sense, the same thing.
What this iteration of the system is missing is an understanding of how stories work.
The Story Formula
I’m wary of any writer complaining about formulaic writing, because good stories are chock full of patterns and formulas. Once somebody who knows how to explain those patterns teams up with an AI programmer, the jig is up.
In my view, writing the words is the last leg of the journey. Coming up with names, the plot, what happens next, and the complications is both random and considered. You can build random generators for these things to form an outline. From there, you write it scene by scene, and there’s a structure for that as well.
Let’s break down the pieces of how this can be done.
Story Basics
A story is a person, place, and problem. Fill in those blanks, and you have the setup for a story.
Bob, at home, needs to get a check from his mailbox so he can pay off a debt.
I made that up, but a computer could have filled in those fields from random lists. That’s not AI, but it’s the root of generating ideas that a smarter system can expand on, and it’s no different from creatives throwing ideas at the wall to see what sticks.
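To show how trivial that first step is, here’s a minimal sketch in Python. The word lists and names are mine, purely hypothetical; a real generator would draw from far larger lists.

```python
import random

# Hypothetical word lists; a real system would use far bigger ones.
PEOPLE = ["Bob", "a retired detective", "a nervous bride"]
PLACES = ["at home", "on a night train", "in a flooded town"]
PROBLEMS = [
    "needs to get a check from the mailbox to pay off a debt",
    "must return a borrowed car before its owner notices",
    "has to deliver an apology that is twenty years late",
]

def three_ps(rng=random):
    """Fill the person/place/problem blanks from random lists."""
    return f"{rng.choice(PEOPLE)}, {rng.choice(PLACES)}, {rng.choice(PROBLEMS)}."

print(three_ps())
```

Run it a few times and you get a pile of setups, most of them bad, a few worth keeping. That filtering step is exactly the wall the ideas get thrown at.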
A more advanced structure than the Three Ps is the Story Question. I cribbed this from Jim Butcher’s blog and Deb Chester’s excellent book The Fantasy Fiction Formula.
WHEN SOMETHING HAPPENS, YOUR PROTAGONIST PURSUES A GOAL. But will he succeed when ANTAGONIST PROVIDES OPPOSITION?
Again, a writer fills in the blanks with their ideas, and we end up with a decent explanation of the story idea.
We can frame a story as three acts: beginning, middle, and end. The classic example being the romance novel stereotype of boy meets girl, boy loses girl, boy gets girl back.
Even if there are other story structures, hitting this one with an AI can’t be that hard. I don’t do AI for a living, so that estimate may fall short. But framing this up with templates and lists to randomly pick from is easy. It’ll get harder.
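The templates-and-lists framing can be sketched the same way. Here the Story Question is a format string, and every slot value below is a hypothetical stand-in I made up:

```python
import random

# Hypothetical slot values; a writer (or another generator) would supply these.
EVENTS = ["a letter arrives with no return address", "the town bridge washes out"]
PROTAGONISTS = ["Bob", "a first-year teacher"]
GOALS = ["recovering the missing check", "getting home before dark"]
ANTAGONISTS = ["a nosy neighbor", "the storm of the decade"]

TEMPLATE = ("When {event}, {protagonist} pursues {goal}. "
            "But will they succeed when {antagonist} provides opposition?")

def story_question(rng=random):
    """Fill the Story Question blanks from lists, the way a prompt fills blanks."""
    return TEMPLATE.format(
        event=rng.choice(EVENTS),
        protagonist=rng.choice(PROTAGONISTS),
        goal=rng.choice(GOALS),
        antagonist=rng.choice(ANTAGONISTS),
    )

print(story_question())
```

Swap the lists for user input and this is a prompt box. Swap them for a language model’s suggestions and it starts looking like the smarter system.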
Where’s it at: Scenes
I’ve written, talked, and even presented on the topic of scene-by-scene writing. Scenes are the cells that make up a story. And like a story, they have a beginning, middle, and end. The situation changes, and our hero is left in a different state by the end. It’s easy for a human to point at a scene and ask, “What changed?” If we can’t find an answer, the scene needs work or can be cut. After all, if nothing changed, nobody would notice.
This is where it gets interesting. Writers approach scenes on the fly (pantsing) or methodically (planning). The outcome should be the same, as I described above.
A machine-generated scene has to comply with good scene structure and actually be written well. ChatGPT already appears to craft well-formed sentences, so that’s not a problem. Once the AI understands what a protagonist is and what the goal is, it will generate text describing the scene and the attempt to achieve the goal, and with work, the code will get better and start producing sensible output.
Even better, it will understand what it wrote before, and carry that information forward as context for what happens next. ChatGPT is close to being able to do this already by virtue of what data it accesses.
Prompt Responses
The AI art systems work from user-entered prompts. Someone loads up the site, types in “steampunk santa,” and sees what they get. If they don’t like it, they change the terms, and a new set of results appears. The user keeps what they like.
Remember the Story Question and Three Ps concepts from earlier? Those are prompts. Whether you fill in the blanks or the computer does, and then change them after the first run, the result is the same: blanks are filled, a story question is formed, then a scene-by-scene outline, and then the smarter part kicks in and generates prose for each scene.
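The whole pipeline can be laid out as stages. Everything below is a stub of my own invention; in particular, the last stage is just a placeholder where a language model call would go.

```python
def fill_blanks():
    # Stage 1: fill the Three Ps blanks, by hand or from random lists.
    return {"protagonist": "Bob",
            "goal": "getting the check from the mailbox",
            "antagonist": "a nosy neighbor"}

def form_story_question(blanks):
    # Stage 2: pour the blanks into the Story Question template.
    return (f"When the mail goes missing, {blanks['protagonist']} pursues "
            f"{blanks['goal']}. But will he succeed when "
            f"{blanks['antagonist']} provides opposition?")

def make_outline(blanks):
    # Stage 3: expand the question into a scene-by-scene, three-act outline.
    acts = ["beginning", "middle", "end"]
    return [f"{act}: {blanks['protagonist']} acts, and his state changes"
            for act in acts]

def generate_prose(scene):
    # Stage 4: the smarter part. A language model belongs here;
    # this stub only marks where that call would go.
    return f"[prose for scene: {scene}]"

blanks = fill_blanks()
print(form_story_question(blanks))
for scene in make_outline(blanks):
    print(generate_prose(scene))
```

Stages 1 through 3 are templates and lists, nothing smarter. Only stage 4 needs the kind of system ChatGPT hints at.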
This is how the sausage is made today by humans. Some might not codify it as such, but some variation of this process is happening in our heads.
The Point
I see a clear line of how to do this with precursor concepts I already work with. Just as with AI-generated art, AI-generated prose is coming. People complain about formulaic Hollywood movies. Guess what: the system is already in place; it’s just been run by humans. The quality varies, but it works.
Here we come to Dr. Malcolm’s question: Should we? And will that answer matter, when you know somebody will do it anyway? Hallmark Christmas movies are loved and criticized for being formulaic. They could save a few bucks on screenwriters and farm that out to AI. Nobody but the hungry screenwriters would notice. The cost of developing that AI may far exceed the screenwriters’ salaries, but Hallmark isn’t paying for that directly.
If the results are good enough and the price is lower, consumers (including studios and publishers) are going to choose price over humans. It happens every time. Yes, as with organic veggies and other made-the-old-fashioned-way products, some will choose otherwise. But if I can pay $10 for an AI book cover vs. $300 for a human-made one, I’d be sorely tempted. Especially when they both look good.
The Cost
A warning sign of status quo thinking is expecting calamity as the final outcome of change. Short skirts will be the end of civilization. Photography will be the end of art. Trying to keep things the same by resisting change never works out.
That said, not all change is good. Thanks to social media, we have new forms of bullying. People are addicted to getting the Likes and gaining followers. Depression is higher. Social media also connected isolated people, and made the world closer. Unregulated, it also empowered bigots, and truth denial.
The real cost of technology taking over creative work is that creative work is fun. Stimulating. Kids can use ChatGPT to write school reports. Then they’re not learning how to research, how to weigh information, or even how to express their ideas, because the computer can do it better with a few grunts into the microphone.
Creative jobs are some of the most fulfilling to have. But those will be gone when a company can eliminate the staff and let the machines take over. I’d say to imagine a world where people don’t get paid to imagine, but as they say, use it or lose it. One day, we could live in that world, and literally not be capable of imagining anything different.
Will it really be that bad?
People still paint and draw. Photography didn’t kill them. Ever hear of rotoscoping? Animators trace over photographs and film footage to create drawn images for animation. The thing that threatened to end the older craft ended up supporting it in making something new.