The Writers Guild of America is on strike. I'm not fully versed on every provision the Guild is striking over, but I know one of them is protection against the absolute tidal wave of AI. AI can't write with feeling or emotion (yet), but the WGA is wise to address the inevitable point where it will.
AI developments are coming fast and furious and are honestly hard to keep up with. This is hardly a scientific poll, but I'm using ChatGPT more often in everyday situations, and I know my colleagues are, too. Today, I want to address the evolving state of AI writing tools, how to potentially use them responsibly, and what it means for the future of writers.
First, this is one of those situations where views change as the technology evolves. I've always looked at generative AI as a functional tool writers can use in their arsenal, but not something that should be used solely to create "content" (boy, do I dislike that word). I'm sticking with this stance, but the lines are starting to blur.
Currently, The Writing Cooperative rules state you must disclose the use of a generative AI. Not one submission in the last four weeks has done so. Does that mean no one used ChatGPT to build their submissions? Maybe. Though, I find it highly unlikely. Someone on one of the channels recently questioned the policy, asking what happens when all writing tools and apps have generative AI built in. It's a really good question.
Let's look at Grammarly for a minute. Technically, Grammarly has always been an AI company. Their fancy algorithm determines the most likely order of words, and it considers that arrangement grammatically correct. This description is an oversimplification, but it works. Now, Grammarly is going deeper into generative AI with their Go product. Is it different from what they've been offering simply because it creates longer passages? I don't know. I don't, however, think writers need to disclose when they use Grammarly. So what does that say?
Lately, I've used ChatGPT for multiple projects in what I think are responsible ways. Here are a few ways I've used the tool recently:
- Revising existing passages by using the prompt "revise this:" and entering the paragraph;
- Asking for subheadings when my mind draws a blank by using the prompt "what is a one-word subheading for the following paragraph:" and entering the text;
- Taking my bullet point notes from client calls and asking ChatGPT to put them into complete sentences using the prompt "take the following notes and turn them into coherent sentences:" and then entering my bullet points.
Additionally, I've been working with the ChatGPT API to essentially build a fancy MadLib for my nonprofit clients. They'll eventually input a few pieces of information, which I'll use behind the scenes to combine into a text prompt run through the API. Ultimately, this will help clients better express their ideas and provide me with better information when working with them.
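To make the "fancy MadLib" idea concrete, here's a minimal sketch of how a few client-supplied fields could be combined behind the scenes into a single prompt for the ChatGPT API. The field names, template wording, and model choice are my own assumptions for illustration, not the actual implementation described above.

```python
# A minimal sketch: clients fill in a few fields, which get merged into
# one text prompt that is then run through the ChatGPT API.
# The template and field names below are hypothetical.

PROMPT_TEMPLATE = (
    "You are helping a nonprofit communicate clearly. "
    "The organization is called {org_name}. Its mission is: {mission}. "
    "Its primary audience is: {audience}. "
    "Write a short, plain-language summary of what this organization does."
)


def build_prompt(org_name: str, mission: str, audience: str) -> str:
    """Combine the client's answers into a single prompt string."""
    return PROMPT_TEMPLATE.format(
        org_name=org_name, mission=mission, audience=audience
    )


def ask_chatgpt(prompt: str) -> str:
    """Send the assembled prompt to the ChatGPT API (needs an API key)."""
    # Uses the official `openai` Python client; assumes OPENAI_API_KEY
    # is set in the environment. Not called in this sketch.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(build_prompt(
        "River Cleanup Fund", "keeping local rivers clean", "volunteers"
    ))
```

The appeal of this approach is that the client never sees (or has to write) the prompt itself; they just answer a few plain questions, and the template does the rest.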
I'd like to think these are all responsible ways to use ChatGPT in my regular writing process. However, I'm torn by the dichotomy here. On one hand, as a writer myself, I want to advocate for others and their livelihoods. Writers should be paid for their work, and the WGA is right to ask for AI protections. On the other hand, I see how ChatGPT saves me time and enhances my existing workflow. Like Natalie Imbruglia, I'm torn.
I still don't think generative AI should be used solely to create entertainment. I don't want to read a personal essay penned by an AI, nor do I think the next blockbuster film should be written by an AI that knows what will likely make the most money. Will I notice these things when they happen? Maybe at first, but over time, probably not.
What do you think? Are you torn like I am, or are your views of ChatGPT and generative AI rock solid?
PS: Besides Grammarly and asking how to spell Imbruglia, everything in this article came out of my head.
Let's check in on Bluesky...
After talking about Bluesky and other "Twitter alternatives" last week, one of you kind people gave me an invite to the platform. My initial impression? It's chaotic.
I think Bluesky is intentionally inviting journalists, Twitter clout chasers, and meme lords in the first wave to try to garner some initial hype. It's why you keep seeing articles declaring Bluesky the next great thing despite the platform having only roughly 60,000 users.
To me, Bluesky is the latest version of Clubhouse, the overhyped social platform that quickly rose to prominence and just as quickly died. It's invite-only and going after the "cool kids" from Twitter. Sure, it creates an initial buzz, but it didn't work out for Clubhouse. Maybe it will for Bluesky? I don't know.
Scrolling through Bluesky feels like the tech equivalent of the White House Correspondents' Dinner mixed with dick jokes. It's a bunch of political and journalist nerds mixed with shitposters. That's not inherently a bad thing, but is that what we want from social media? Or is that exactly what we want from social media?