I’ve been using ChatGPT for some SW-related shenanigans. I will now relate some observations.
First of all, writers (and other creators) have nothing to worry about when it comes to this thing. It has no brain, and you can tell. This is an information regurgitator. It can crudely riff off some ideas, but only by re-mixing basic elements that it can pull from its vast data-set. Everything is immensely superficial. It is incapable of saying anything insightful or profound.
However, because its data-set is huge, it can say things that can give you insight. For instance, since this thing is basically using a giant rip of the entire internet as its "knowledge", asking it about the key elements of a Star Wars story will actually give you a basic distillation of what the “internet consensus” is about that.
But when you ask it to actually come up with a story... that sort of thing is difficult for it, and it tends to revert to very general statements. But if you ask it to elaborate, or to name examples of how the ideas or aspects that it mentioned could be portrayed or incorporated into the story, it does start to do some “creative writing”. Or rather, a facsimile of it.
Because, again, it’s mostly regurgitation and re-mix. It matches together common plot elements seen in SW. When you ask it for names, it often just re-uses names from SW works, like naming a villain “Darth Nihilus”. If you ask it to create an original sequel protagonist, it always defaults to “Kira” (the most well-known and oft-referenced name for “proto-Rey” during production). It also regularly uses slight variations on established names from SW works, e.g. you may get “Cade” (as in “Cade Skywalker”, from the old Legacy comics) when you ask for possible hero names, but it may then also list “Kaede” as an option. Or you get both “Jacen” and “Jace”.
Some original names that it tends to default to when asked for suggestions (across several different conversations) are “Zorin”, “Vayne”, “Garrick”, “Jax”, “Nara”, “Ava” and “Lyra”. I have no idea why, but those names seem to be associated with SW somewhere in its network, which I found interesting precisely because I can’t explain it. Some of those names have appeared in EU works, but never with anything resembling prominence. Yet it repeatedly defaults to these specific names.
Anyway, when you ask the AI to elaborate on whatever it puts out, it’ll typically give a list of wildly divergent ways you could go with these ideas. At that point, you can tell it which ideas you like. Perhaps the strongest point of ChatGPT is that it remembers your choices (to an extent), so if it suggests that the villains could be pro-Imperial warlords or Sith cultists, and you rule out Sith cultists, it doesn’t bring those back up. If you then say that (for instance) Luke’s son turns evil, and you ask why the character might do that, ChatGPT will remember that the baddies are Imperial warlords, and suggest (for instance) that Luke’s son became convinced that a strong, militarist government was needed for galactic stability.
In general, ChatGPT is mostly a quasi-interactive “sounding board” that can riff off statements you feed it in a quasi-competent but very superficial manner. I found it fun to play with, but as far as creative purposes go, it’s mostly useful as a “suggestions machine”. Oh, there’s a superweapon, you say? What is it? Give me some examples. No, I don’t like those. Give me some more. Oh, I like that one. Who built that weapon? Why did this character do that? No, that makes no sense, give me another backstory for that character... (And so on and so forth.)
ChatGPT repeats itself a lot. Whenever you ask it to tell you more about an idea, or to elaborate in any way, the majority of its response is a reiteration of what it had already said before. So if you were to gather up its various responses and remove all repetition and all useless suggestions, what’s left is actually pretty meagre: ultimately less than 20% of what it said in total. So is this an effective tool to use as a sounding board for, say, writing an original SW story? I’d say it’s no more useful than asking a forum full of random users for their thoughts; you just get way fewer outliers. On a forum, you always get some stupid or totally unhelpful responses, but you also occasionally get a really creative and original one. ChatGPT gives you neither. As I said: it’s middle-of-the-road, superficial, and uncreative.
It’s fun to try it a few times, just to see what it does, but I find that it’s more trouble than it’s worth to keep trying to get useful responses from this idiot machine. It has its uses, but creative endeavours are not really among them.
This kind of AI would be more useful for creative purposes if it could be trained on specific, goal-targeted data-sets, e.g. to exclude information that’s irrelevant to SW, or to specifically exclude all knowledge of Disney SW and have it base its responses on the old EU instead. Another immense improvement would be if it could remember more of the ongoing conversation, and be trained to avoid repeating itself within it.
At the moment, this kind of AI is far more useful when it comes to image generation, because there, the kind of “selective data-set that you can put together yourself” is a possibility, and the model can be asked to refine pictures it has already created (which is akin to “remembering what was done before”). With that, you can (through some effort) create pretty cool visual impressions. I find these more inspiring as starting points for my own creative process than anything ChatGPT can produce as a prompt.
All in all, this AI tech has a long way to go if it’s ever to produce anything like “autonomous creativity”. But it can be used in various ways to support or spark your creativity. ChatGPT is probably not the most useful of those ways, though.