How I use AI to build simple web tools
It still slightly blows my mind that I can use AI to create software using natural language.
When AI assistant Claude first enabled Artifacts last summer, I used it to create a simple browser-based game.
Now, whenever I think ‘I wish there was a tool that could do X (and that won’t harvest my data, bombard me with ads or charge me after five uses)’, I can have a crack at creating it using Claude or ChatGPT (which introduced a similar feature to Artifacts, called Canvas).
Example from this week:
I often include logos in my presentations, but the high-resolution logos you find online are typically black, and my presentation template has a black background (easier on the eye, and a hangover from many years working on BBC iPlayer).
I usually convert the black logo to a white logo by inverting the position of the black and white point sliders in the Preview app on my Mac, but it’s a bit of a faff.
So, having a bunch of logos to invert, I decided to ask Claude to ‘build a working app which inverts the colours of a transparent png from an upload. use a contrasting background when displaying the image. enable the user to download the inverted image’.
After a bit of back-and-forth to get it working and looking how I wanted, I now have a tool that does exactly what I want it to do (and nothing else).
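The core of a tool like this is only a few lines of pixel manipulation. As a rough sketch of the kind of logic involved (my illustration, not Claude’s actual output, which isn’t reproduced in this post): flip each pixel’s red, green and blue channels while leaving the alpha channel untouched, so the PNG’s transparency survives the inversion.

```javascript
// Invert the RGB channels of RGBA pixel data, leaving alpha untouched.
// `data` is a flat array of [r, g, b, a, r, g, b, a, ...] byte values,
// the format returned by canvas's ctx.getImageData(...).data in a browser.
function invertColours(data) {
  const out = new Uint8ClampedArray(data);
  for (let i = 0; i < out.length; i += 4) {
    out[i] = 255 - out[i];         // red
    out[i + 1] = 255 - out[i + 1]; // green
    out[i + 2] = 255 - out[i + 2]; // blue
    // out[i + 3] (alpha) is left as-is, preserving transparency
  }
  return out;
}

// Example: one opaque black pixel followed by one fully transparent pixel.
const pixels = Uint8ClampedArray.from([0, 0, 0, 255, 0, 0, 0, 0]);
const inverted = invertColours(pixels);
// Black (0, 0, 0) becomes white (255, 255, 255); both alpha values are unchanged.
```

In the browser version, the inverted array would be written back with `ctx.putImageData(...)` and exported for download via the canvas’s `toBlob` or `toDataURL` methods.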
I then thought I could also use a tool for removing backgrounds, so I asked Claude to ‘repurpose the tool to remove backgrounds for images’. A minute or so later, I had that working too.
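Background removal can be done several ways, and the post doesn’t show which approach Claude landed on. A minimal sketch of one common technique (an assumption on my part, not the tool’s code): pick a background colour, such as the image’s top-left pixel, and make every pixel within a tolerance of that colour transparent.

```javascript
// Make pixels close to a given background colour fully transparent.
// `data` is flat RGBA bytes, `bg` is an [r, g, b] triple, and `tolerance`
// is the maximum per-channel distance for a pixel to count as background.
function removeBackground(data, bg, tolerance = 30) {
  const out = new Uint8ClampedArray(data);
  for (let i = 0; i < out.length; i += 4) {
    const isBackground =
      Math.abs(out[i] - bg[0]) <= tolerance &&
      Math.abs(out[i + 1] - bg[1]) <= tolerance &&
      Math.abs(out[i + 2] - bg[2]) <= tolerance;
    if (isBackground) out[i + 3] = 0; // zero the alpha channel
  }
  return out;
}

// Example: a white background pixel followed by a red foreground pixel.
const px = Uint8ClampedArray.from([255, 255, 255, 255, 200, 0, 0, 255]);
const cleaned = removeBackground(px, [255, 255, 255]);
// The white pixel becomes transparent; the red pixel is left opaque.
```

A simple colour-distance threshold like this handles flat backgrounds well but struggles with gradients and shadows, which is exactly the sort of thing the ‘it doesn’t seem to be removing the background’ feedback loop described below helps iron out.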
More interesting than the finished artifacts (which you can try out by clicking the images above) is the back-and-forth that enabled me to guide the development using nothing but natural language.
Claude’s ability to debug its own code and try different approaches means my primary role, after my initial brief, is User Acceptance Testing. I can just say ‘it doesn’t seem to be removing the background’ and Claude will try a different approach.
Of course, the complexity of the apps you can create without technical knowledge is limited, but Claude Artifacts and ChatGPT Canvas still represent a profound shift in the ability of people without coding skills to have an idea and then create working software that solves their specific needs.
As noted in my previous post on Artifacts:
I don’t believe AI models being able to code in response to natural language instructions in this way will put software developers out of a job.
Rather, I think it will lower the barrier to entry for people wanting to realise ideas that would previously have required a level of coding expertise.
It will also result in more software getting written (with big question marks over the quality, security and maintenance of that code).
Reactions to DeepSeek’s recent releases have prompted lots of people to ask Google (or an AI assistant) about the Jevons paradox, which I think also applies here: making software development more accessible likely won’t reduce the demand for professional developers, but will instead expand the scope of software being created.
Dan’s Media & AI Sandwich is free of advertising and free to access. If you value my writing and want to help me dedicate more time to it, please consider becoming a paid subscriber (huge thanks to those who already have). Alternatively, you can spread the word.
I work with a wide range of organisations to help them make sense of AI. Whether it's delivering keynotes, running training sessions, drafting policy or designing experiments, I’d love to help. Drop me a line at mail@dantaylorwatt.com.