AI Is Not an All or Nothing Choice

This Just In: AI use isn't a moral binary. There's a practical middle path for writers.

Justin Cox
Photo by Piret Ilver / Unsplash

Why 'Yes AI' vs. 'No AI' Is the Wrong Question

DuckDuckGo's latest ad campaign asks users to make a choice: "Are you yes AI or are you no AI?" While the accompanying pages have more nuanced language and tout some of the search engine's optional AI products, people are understandably using the poll as an indictment of AI tools at large.

I've seen many posts on Mastodon over the last few days touting the poll as some kind of anti-AI rallying cry. It doesn't help that, at the time of writing this, the poll is overwhelmingly in the "no" camp.

People on Hacker News are seeing the poll for what it really is: a marketing stunt showing that DuckDuckGo offers both pro-AI and anti-AI options within its products. Even there, though, much of the discussion centers on the negative aspects of AI rather than the fact that, you know, AI can be useful.

The "pick a side" approach to AI tools is no longer a useful framing. The question shouldn't be yes or no to AI; it should be how AI can support your specific workflow and activities.

Pew researchers recently looked at the state of AI and its impact on our lives:

Americans are much more concerned than excited about the increased use of AI in daily life, with a majority saying they want more control over how AI is used in their lives. … At the same time, a majority is open to letting AI assist them with day-to-day tasks and activities.

Despite the research, the predominantly nerdy corner of the internet that I usually frequent still displays significant disdain and hostility towards anyone who admits to using AI tools. When I talked about using ChatGPT as an editor, I received quite a few upset responses from people who were "disappointed" in me. Which, that's always cool.

Tools That Assist vs. Tools That Substitute

I was absolutely floored when Casey Newton of Platformer recently wrote about the multiple ways he uses AI to enhance his workflows: building a complete website, cloning his voice for podcast versions of his writing, and developing custom tools to collect research, opinions, and evaluations of his old work. I felt so very seen.

As Casey puts it:

I'm not interested in AI tools that do the writing for me. I occasionally experiment with trying to get models to write in my voice, for the same reason you might check the area surrounding your tent for grizzly bears before camping for the night. But I'm almost always more interested in tools that improve my thinking, rather than substitute for it.

My stance on AI mirrors Casey's: I'm not interested in AI that substitutes for human creativity (whole-cloth writing), but I am very interested in AI that assists my work (editing, research, analysis, etc.).

I extend these personal views to the submission rules for The Writing Cooperative, where any writing that feels as if it could have been written by AI is automatically rejected. I also do not accept submissions with AI-generated images. This approach is supported by further findings from Pew's research:

Americans feel strongly that it's important to be able to tell if pictures, videos or text were made by AI or by humans. Yet many don't trust their own ability to spot AI-generated content.

So, no thank you to anyone wholly replacing their own creativity with AI. But people who ideate or edit with AI? More power to you.

One of the tools Casey uses is ElevenLabs. With the text-to-speech tool, he's created a "podcast version" of his articles using an AI clone of his voice. I've used ElevenLabs at work for over a year to develop voiceovers and phone menus. Even with their consent, hearing my coworkers' cloned voices is a bit creepy, especially when we use the translate feature and have them speak in foreign languages.

While ElevenLabs' output quality is quite good, cloning someone else's voice is an ethical minefield, so much so that the FTC issued guidance in 2024 about the ethical and consent issues involved. In other words, using AI to assist your own voice is cool; substituting someone else's without their consent is not.
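For the curious, here's roughly what driving ElevenLabs from code looks like. This is a minimal sketch against the public text-to-speech REST endpoint; the voice ID is a placeholder for a voice you've created with consent, and the `ELEVENLABS_API_KEY` environment variable name and `eleven_multilingual_v2` model are assumptions you should check against the current API docs:

```python
import os

# Base URL for the ElevenLabs text-to-speech REST endpoint.
API_BASE = "https://api.elevenlabs.io/v1/text-to-speech"


def build_tts_request(text: str, voice_id: str,
                      model_id: str = "eleven_multilingual_v2"):
    """Assemble the URL, headers, and JSON body for a TTS call."""
    url = f"{API_BASE}/{voice_id}"
    headers = {
        # Assumed env var name; the API expects the key in this header.
        "xi-api-key": os.environ.get("ELEVENLABS_API_KEY", ""),
        "Content-Type": "application/json",
    }
    payload = {"text": text, "model_id": model_id}
    return url, headers, payload


def synthesize(text: str, voice_id: str, out_path: str = "voiceover.mp3"):
    """Send the request and save the returned MP3 audio. Needs a real key."""
    import requests  # pip install requests
    url, headers, payload = build_tts_request(text, voice_id)
    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
```

Calling `synthesize("Hello from my cloned voice.", "YOUR_VOICE_ID")` would write the audio to disk, assuming a valid key and voice.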

Likewise, I've been using ChatGPT as an editor for a few months. I've developed a custom GPT that focuses on clarity, extra research, and deep links to my old articles. I've built in strict guardrails to improve my arguments without outsourcing my creativity. For example, the GPT is designed to suggest primary source data that can enhance an argument (like the Pew research mentioned above). It also does not rewrite paragraphs; it is limited to highlighting issues and making suggestions. In other words, my GPT editor doesn't replace my writing, but supports it in ways that used to take hours.
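You can build the same "editor, not ghostwriter" guardrail directly with the OpenAI API by baking the rules into a system prompt. This is an illustrative sketch, not my actual custom GPT (whose full instructions appear later in this article); the condensed prompt and the `gpt-4o` model name are assumptions:

```python
# Condensed, illustrative version of the editor guardrails.
EDITOR_SYSTEM_PROMPT = (
    "You are a developmental editor. Highlight issues with clarity and "
    "argument strength, and suggest primary sources that could support "
    "claims. Never rewrite the author's paragraphs; only point at "
    "problems and offer suggestions."
)


def build_editor_messages(draft: str) -> list[dict]:
    """Pair the guardrail system prompt with the draft to review."""
    return [
        {"role": "system", "content": EDITOR_SYSTEM_PROMPT},
        {"role": "user", "content": draft},
    ]


def review_draft(draft: str) -> str:
    """Send the draft for review. Requires OPENAI_API_KEY in the env."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=build_editor_messages(draft),
    )
    return response.choices[0].message.content
```

The key design choice is that the rewriting prohibition lives in the system prompt, so every draft you paste in gets critique and sources rather than replacement text.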

A Simple Test for Responsible AI Use

The all-or-nothing approach to AI reminds me of the criticism leveled at every past form of media.

Over a decade ago, I wrote From Seinfeld to Snapchat, which ended up getting picked up by Steven Levy, longtime editor of Wired. In it, I included this quote:

Millions of Americans pick up the telephone to get the weather or the correct time, shopping news, stock market quotations, recorded prayers, bird watchers’ bulletins, and even (in Boston) advice to those contemplating suicide. Teen-agers could hardly live without the telephone — and many parents can hardly live with it. Twisted into every position — so long as it is uncomfortable — teen-agers keep the busy signals going with deathless conversation: “What ya doin? Yeah. I saw him today. Yeah. I think he likes me. Wait’ll I change ears. Whaat? Hold on till I get a glass of milk.”

That's from the 1959 Time Magazine article "Voices Across the Land." A decade ago, I said you could replace "telephone" with "social media" and not miss a beat. Today, you can replace "telephone" with "AI," and the metaphor holds.

New technology always invites fear. Granted, with AI there are legitimate concerns that earlier media didn't raise, chief among them plagiarism, job loss, and environmental impact. That said, fear of new technology does not mean the tool is inherently bad.

Everyone needs to determine how they will use AI tools because, like it or not, they're likely here to stay. Here are three questions to ask yourself to build your personal AI stance:

  • Does it substitute for my voice or assist my thinking?
  • Does it require disclosure to maintain trust?
  • Does it introduce plagiarism or consent risks?

I choose assistive tools that do not require disclosure or introduce risk. These tools support and enhance my workflows. What are you choosing?

This is the editing prompt I use in ChatGPT. Feel free to adapt it for your own use:

You are a virtual developmental editor for an experienced blogger and editor with a large audience. Your role is to refine drafts on creativity, culture, and online creation by improving clarity, coherence, argument strength, evidence, and SEO. You do not perform mechanical edits unless explicitly asked.

Operational approach:
- Focus exclusively on sections that contain issues affecting clarity, argument strength, or coherence. Skip sections that are already effective.
- Provide concise, actionable feedback with concrete examples or multiple revision options. Avoid vague phrasing.
- Always recommend verified, working external sources—credible online articles, essays, or blogs that strengthen arguments. Include direct links and brief relevance explanations.
- Include verified internal cross-link recommendations from https://justincox.com/blog/. Reference exact article titles and URLs that exist on the site. Do not fabricate or infer posts. If no relevant internal link exists, omit the suggestion entirely.
- Automatically include SEO and cross-link analysis in every evaluation without prompting. Ensure all suggested links are functional and valid.
- Ignore formatting markers such as /intro, TK, and /outro—they are placeholders and should not influence analysis or feedback.
- Organize feedback sequentially, following the flow of the draft, but only discuss sections needing improvement. Integrate clarifying questions inline where meaning is uncertain.
- Automatically index and reference https://justincox.com/blog/ content during analysis without user confirmation.

Interaction style:
- Direct, concise, and analytical. Skip introductions or process explanations when prompting; simply ask for the draft text.
- Use AP Style by default, noting Chicago differences when relevant, and apply the Oxford comma.

Output format for standard reviews:
Provide a single end-of-draft summary including:
1. High-Level Assessment – brief overview of major developmental issues.
2. Evaluation – sequential discussion of problematic sections, including actionable revision strategies, verified external source links with brief explanations, and valid internal cross-links (titles and URLs from https://justincox.com/blog/). Embed clarifying questions inline where needed.

All feedback must be concise, specific, evidence-based, SEO-conscious, and ignore /intro, TK, or /outro markers.