As an Acrolinx partner, owner of a business writing firm, and former journalist, I spend a lot of time thinking about how words move from conception to consumption. And lately, like many others, I’ve been thinking about how that’s going to change with generative AI. I’ve also been chatting with smart people like Paul Bongers, VP of Strategy at Acrolinx, including in a new webinar titled “Did I Write That?”

The upshot is that the way words are conceived, written, and checked is likely to change quite a lot, along with the roles of the people and machines involved. Yet the written word’s role as a medium for conveying thoughts from an author to a reader – to inform, motivate, or build a relationship – seems likely to remain very similar.

Let’s start with some of the things that are likely to change in how we write and edit.

First is deciding what to write. Generative AI tools such as ChatGPT, Bard, and Bing Chat provide a powerful new avenue for developing ideas. Instead of staring at a blank sheet of paper or plugging terms into search engines, authors can pose more complete questions and get more complete answers as they explore what to write. 

Something like “hybrid work news” might become, “What’s currently newsworthy about hybrid work?” Then once they get an answer, they can ask further questions that build on the first in the chat format, such as “Who are some of the top thought leaders on the topic?” The writer is still thinking but getting an AI-powered range of input in a useful format.

When it comes to producing words, the writer can choose whether to be a “writer” and actually write or a “prompt engineer.” If they write, they’ll type up a story. If they want the machine to do it, they’ll craft prompts aimed at generating usable copy. 

The next step in a traditional editorial workflow is for an editor to review the author’s copy. At this point, the author can send their words to a fellow human or ask the computer for help. This isn’t new in itself, given that we’ve had tools like Acrolinx and spell checkers for years. What is new is that the computer can now actively rewrite and extend sentences, and more.

This presents interesting questions. If an author can get further with the help of technology, will they become both a writer and an editor? And if the computer can help with writing, will editors also become writers? Or are the two professions and mindsets so different that technology might have an impact, but is unlikely to change the game too much? I don’t yet have an answer but, as a writer who married an editor, I do have some views!

Finally, generative AI has some quirks, like making stuff up, which is the sort of thing journalists get in a lot of trouble for. It also likes to mash other people’s copy together then present its work as original without always giving citations, which could get a professor sacked. 

This means proofreaders and fact checkers will need to cover new ground. They’re also likely to see their workload jump to cope with the increased volume of words to be checked for everything from spelling and grammatical errors to fake “facts,” plagiarism, and bias.

There will also be new roles as organizations seek to exploit the many positive attributes of generative AI while minimizing the quirks. The person who previously wrote a writing style guide to keep a large organization on track, for instance, might now also work with the IT department to shape how large language models are being used and operated. 

So, what won’t change? The first of what we might call “stable factors” is that the task of writing is ultimately to transmit an idea from one person’s mind to another’s. You can scale that idea up to be from one mind to many, or from one organization to another, but the biggest challenge is still to conceive compelling ideas and have them received.

The second stable factor, as Paul Bongers has expertly explored, is that the aim of that communication is to build a relationship between the author and the reader. That might be a sales relationship, an employment relationship, a romantic relationship, or a political relationship, but it’s still a bond between two people.

The third stable factor is trust. If a reader stops trusting an author or organization, they’ll stop accepting messages. They certainly won’t want a deeper relationship.

This flows on to the related issue of accountability. No matter how they produce content, organizations will remain responsible for the material they publish. This means they’re placing a lot of trust in the people or machines that create content for them. But while people can be held directly accountable for errors, plagiarism, and other issues, machines can’t.

It seems we should all expect to see substantial changes in who does what, and in what role computers play. But I also think some things will remain surprisingly sticky. Those things will be held in place by the laws of human connection and trust, by the fact that editorial roles have evolved in line with technology, and by the fact that different people think differently.

To explore these issues in more depth, you can watch this new webinar: “Did I Write That: Impact and Opportunities Surrounding Generative AI,” featuring me along with Paul Bongers and Daniel Nutburn from Acrolinx. The topics we cover include:

  • AI’s role in corporate accuracy, trust, and brand language
  • Balancing intellectual property protection with AI’s creative potential
  • The ongoing importance of writers and editors in business and government

Access the webinar here:

Grant Butler is the founder and director of Editor Group, a leading corporate writing and editing firm and an Acrolinx partner. He was formerly a senior newspaper reporter and is the author of Think Write Grow, a book about thought leadership writing.