I don’t use AI to draft blog posts or reports for clients. The process of writing helps me work out what I think about things. I enjoy it and I think the world is a better place with authentic human voices with opinions and stylistic foibles (e.g. my overuse of parentheses).
However, I do believe AI can help improve my writing and once I have written something, I often ask an AI assistant to review it. Sometimes I just say “Review this” and see what it picks up on. Other times I give it a more specific brief (“Proofread this”, “Fact-check this”¹, “Tell me what you think I’ve missed”).
My go-to AI assistant for feedback on my writing is Claude, partly because I like its manner (whilst functionally very similar, the main AI chatbots definitely have different vibes) and partly because it’s easy to create custom styles.
I’ve created a custom style based on 3 years of posts from this blog/newsletter. That doesn’t mean Claude can write just like me, but it does give it an appreciation of how I like to communicate and makes it less likely to suggest I revise a section using a load of BS buzzwords.
I still ask it for feedback rather than getting it to redraft.
To illustrate how this works in practice, below are some screengrabs of me conversing with Claude over drafts of last week’s post on AI Hype, Investment Bubbles & Transformative Potential.
In this instance, I started by asking it to score my first draft out of 10. It flattered me by scoring it 8.5 before serving up a classic feedback sandwich, with a ‘What could be improved’ section as the filling, finished with a dollop of tempered praise (“thoughtful analysis packaged accessibly, but hasn’t quite reached your ‘greatest hits’ territory”).
I updated my draft, adding an additional Q&A on company adoption, which prompted Claude to up its score to 9.2.
I added a parenthetical reference to Apple’s sherlocking of Partiful and made a few other tweaks, which prompted Claude to up its score to 9.5.
After chuckling at “prime Dan Taylor-Watt content”, I decided the post didn’t need a chart showing AI investment trends and hit Publish.
Some reflections:
I’d have more confidence in feedback from an experienced human editor. However, I don’t have one on hand and getting some feedback from an AI which has been trained on trillions of words plus my own writing style is still helpful.
Using a custom style² familiar with my writing significantly increases the usefulness of the feedback for me. It’s less likely to suggest things out of keeping with my writing style.
The scoring is pretty arbitrary but provides me with some (possibly bogus) reassurance that my editing (which I find harder than the initial drafting) is improving the draft. That said…
Most AI chatbots are trained to please and often default to telling you what you want to hear (AI chatbots often respond to questions with a ‘yes’, only to go on to outline a very clear ‘no’). Don’t blindly accept their recommendations.
Hallucination (aka making stuff up) is inherent to the current generation of AI chatbots. Always manually fact check things that matter.
For me, AI functions as an always-available, affordable and tireless editor, proofreader and sounding board. Despite its fallibility, it challenges me to improve my writing in ways I wouldn’t arrive at on my own. How do you use AI in your writing process? Comment below👇
¹ I don’t use Claude for fact-checking as it doesn’t currently have real-time internet access. Perplexity is a better option for this, although it’s still no substitute for manually checking facts it’s important to get right.
² Whilst ChatGPT doesn’t have a direct equivalent of Claude’s custom styles, you can achieve a similar effect using a combination of Custom Instructions and a custom GPT (instructions here).