Disclosure: This post was written by me, a human, with a technical review and light edit by Claude.
It all started about nine months ago, which is about nine years in AI time. I was a director at a well-established creative agency with a multi-hyphenate role that included leading customer service for key accounts, new business acquisition, and writing. Lots and lots of writing, all of it long-form content. And all of it was 100% human-conceived and created.
The demand for writing was so high for a time that I was starting to feel a little burned out. And when I heard there wasn't enough budget to pay a human freelancer, well, I did what I then considered unthinkable…I experimented with AI to write copy drafts.
My first tests with ChatGPT were disappointing at best. Its outputs were objectively bad and completely unusable. And yet, through the mess, I saw a glimmer of hope. It was just enough to keep me trying and tinkering.
I tweaked prompts and fed it complete creative briefs, outlines, and writing samples. And occasionally, with just the right amount of guidance, it would write something that was…almost OK, at least for the first 600 words or so. Then the model would drift off topic and just sort of lose energy.
Then I discovered Claude, which has a somewhat more natural default writing style. I experimented some more, this time giving Claude a style guide as well as both system and user prompts. (A system prompt defines the AI's persona, and a user prompt explains what you want it to do.)
Eventually, I found I could get better results by feeding an outline to Claude section by section and then stitching the sections together myself. While it was effective, this approach was tedious and required a lot of cut-and-pasting.
It was then that I had my aha! moment.
“What if I could automate this recursive process of feeding previous sections back to Claude as it worked through a content outline?” I asked myself. “And what if I combined this with an inline editor and a database to store style guides and writing samples? What if it were an entire writing studio?”
I decided this would make a great product. And, before anything else, I gave it a name and tagline. I dubbed it Good Bloggy, the AI-powered writing tool that fetches better drafts.
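The recursive process I imagined can be sketched in a few lines of Python. Here, `draft_section` is a hypothetical stand-in for a real model call; the point is the loop, which feeds everything written so far back in as context for each new section:

```python
def draft_section(heading: str, context: str) -> str:
    """Placeholder for an LLM call that writes one section,
    given the outline heading and everything written so far."""
    return f"[{heading} written with {len(context)} chars of prior context]"

def draft_article(outline: list[str]) -> str:
    sections = []
    for heading in outline:
        # Feed the accumulated draft back in so each new section
        # stays consistent with what came before.
        context = "\n\n".join(sections)
        sections.append(draft_section(heading, context))
    return "\n\n".join(sections)

article = draft_article(["Intro", "Why it matters", "Conclusion"])
print(article)
```

Swap the placeholder for an actual API call and you have the core of the product: no more manual cut-and-pasting between sections.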
Building requirements for me, me, me
Time elapsed: 2 days
I started working up a list of requirements, with myself as the ideal customer: an agency pro or freelancer who wants to write faster with AI but also cares deeply about quality. The overall concept was “better rough drafts with AI.” I envisioned an AI-powered writing studio structured around an agency workflow. It would allow users to:
Define multiple author and brand identities, each with its own style guide and writing samples
Fill out an online creative brief, just like what I might provide to a human freelancer
Automatically generate an outline
Edit the outline inline as necessary
Automatically generate an article from the outline
Rename and clone everything: identities, style guides, briefs
Instantly generate metadata and social copy for multiple platforms
Check for obvious plagiarism
I quickly roughed out a spec and worked with ChatGPT to build out a DIY development plan. Since I was the ideal customer, I figured I didn't need to worry about product-market fit or run the idea by other potential users…right?
What I learned:
OK, this one should have been obvious. If you want to build a product that will be used by people other than yourself, talk to other people. Even better, talk to them before you build anything.
No code or all the code?
At the time, vibe coding was just beginning its hype curve, so I started by looking at traditional no-code tools like Bubble, which are built around lots of reusable modules and third-party integrations. While combining components can help you get started fast, I was hesitant to build my prototype this way. I didn't want to worry about updating components and interfaces over time—and, ultimately, I just wanted to know exactly how my product was going to work.
So I decided I would hand-code the entire project with Python, JavaScript, and HTML, under the guidance of ChatGPT and Claude. This fateful choice was both incredibly educational (it forced me to learn a lot) and, in hindsight, wildly inefficient based on what's possible with tools like Lovable and Claude Code today.
APIs and CDNs
Time elapsed: 3 weeks
While I wanted to use Claude in Good Bloggy for its copywriting abilities, I began coding with ChatGPT, simply because I was paying for the subscription. Based on its recommendation to start with a simple web app, I installed Python and then set up a virtual environment and a Django project with a PostgreSQL database. Next, I worked with ChatGPT to structure the project and start building out the database model and back-end Python functions.
The first major snag was getting the Claude API to work. ChatGPT's training data was familiar only with an older version of the Anthropic API, and, even when I fed it the latest documentation, it kept getting confused with OpenAI's API. I had to learn about APIs and read Claude's documentation in detail to finally fix the problem.
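Part of the confusion is that the two APIs really are shaped differently. The sketch below builds an Anthropic Messages API request without sending it; the header names and the top-level `system` and `max_tokens` fields match Anthropic's documentation at the time of writing, while the model name is just a placeholder:

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, system_prompt: str, user_prompt: str) -> tuple[dict, bytes]:
    headers = {
        "x-api-key": api_key,               # not "Authorization: Bearer" as in OpenAI's API
        "anthropic-version": "2023-06-01",  # required version header
        "content-type": "application/json",
    }
    body = {
        "model": "claude-3-5-sonnet-latest",  # placeholder model name
        "max_tokens": 1024,                   # required field, unlike OpenAI's API
        "system": system_prompt,              # top-level field, not a "system" role message
        "messages": [{"role": "user", "content": user_prompt}],
    }
    return headers, json.dumps(body).encode()

headers, payload = build_request("sk-test", "You are a copywriter.", "Draft an intro.")
```

Once I understood these differences myself, I could correct ChatGPT whenever it slipped back into OpenAI-style request shapes.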
Another problem was integrating an editor with the HTML interface. I struggled to get CKEditor to work for days and ran into endless JavaScript errors. After testing multiple versions and wrestling with missing static files, I finally went with TinyMCE delivered through a CDN. It was easier to manage than CKEditor, but a lot of JavaScript adjustments were required.
What I learned:
You really do need a virtual environment for every web app you build on your local machine. If you install all your packages and dependencies globally, changes made for one project—like upgrading a package—can break others that rely on different versions. A virtual environment keeps dependencies isolated, making your projects more stable and easier to manage. It also simplifies sharing your project on GitHub by letting you include a requirements.txt file or a pyproject.toml that lists all the packages needed to run it.
LLMs are great at documentation. Always ask your LLM to carefully document any code it produces. This will help you learn to code and make debugging much easier.
LLMs may not know the latest and greatest syntax or API specs. If your model produces the same bug a few times in a row or seems to get stuck, try finding documentation related to your error, such as a recently published API or code reference doc. If you don’t understand the material, pass it to your LLM and explain that it may not be part of its training data.
Python is infinitely more user-friendly and easier to learn (and debug) than JavaScript.
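On the virtual-environment point above: a quick way to confirm your interpreter is actually running inside one is to compare Python's prefix paths, which diverge when a venv is active:

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the original interpreter install.
def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix

print("virtualenv active:", in_virtualenv())
```

I learned to run this kind of sanity check early, before debugging a "missing" package that was really just installed into the wrong environment.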
Setting up user identities and logins
Time elapsed: 2 weeks
I used Django's default user authentication system and integrated SendGrid for back-end email functionality, including password resets and email confirmation. Getting SendGrid to work presented more API challenges, and I also had to copy new user data to the SendGrid database for marketing purposes.
But overall this process wasn’t too difficult.
What I learned:
Frameworks like Django make life easier because they come with a lot of built-in functionality that you’d otherwise have to build from scratch, such as an admin interface, user authentication, form handling, routing, and database integration. Plus, you can find plenty of community-maintained boilerplate projects that include common features like user registration, email confirmation, and role-based access control, giving you a solid head start for many types of web apps.
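To give a sense of how little wiring Django's built-in auth needs, here is a sketch of the relevant URL configuration. The `accounts/` prefix is just the conventional choice, and you still have to supply your own templates for the login and password-reset pages:

```python
# urls.py -- wiring up Django's built-in auth views
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path("admin/", admin.site.urls),                          # built-in admin interface
    path("accounts/", include("django.contrib.auth.urls")),   # login, logout, password reset
]
```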
Building a functional but ugly UI
Time elapsed: 2 months
Once the basic functionality was in place, I decided the Bootstrap user interface looked awfully dated and clunky. After wrestling with ChatGPT for a while, I forced myself to learn a bit about web design and CSS. As part of this process, I read some comments in no-code developer forums that Claude was good with front-end challenges, so I had it recreate all my HTML pages and then tweaked them by hand.
I also debugged many small glitches with data entry forms and learned a lot about how some JavaScript libraries interact with macOS display defaults. And every time I tested, I kept finding something new about the UI I didn't really like. So I added notifications when edits to outlines and blogs had been saved and adjusted the TinyMCE editor again and again and again.
Overall, this fiddly process of moving buttons around, adjusting fonts, testing on mobile, and continually debugging JavaScript was slow and painful. It required about two months of sporadic effort.
What I learned:
Design is harder than it looks! If you're not a designer (and I really am not), tweaking colors, fonts, spacing, and buttons can easily eat up hours without making your app look much better. It’s easy to fall into the trap of obsessing over pixel-level details instead of getting your core functionality working. Next time, I will aim for a design that's clean, functional, and inoffensive.
I experimented with Tailwind, but found using it with Django and Python was more complicated than I expected. Basically, Tailwind runs in a separate JavaScript environment using npm (Node Package Manager), which means you need to install Node.js and manage your frontend build process separately from Django. Tailwind scans your HTML and template files and continually rebuilds your CSS file based on the classes you actually use.
While that sounds great in theory, I ran into syncing issues between Django templates and the Tailwind watcher. Ultimately, I decided it wasn’t worth the hassle and went back to using custom CSS and Bootstrap.
If I want to create a more attractive UI for a future project, I am going to look into combining Figma templates with a tool like Lovable.
Pushing to the production server (and more bugs)
Time elapsed: 3 months
The next major hurdle was getting the product, which finally worked great in my development environment, onto Heroku, a platform-as-a-service option for developers who don't want to manage their own servers. When I first pushed my codebase to Heroku and tried to make it visible from my URL, I found that my static image setup did not play nicely with Whitenoise. I ended up moving most of my imagery to Cloudinary’s Content Delivery Network (CDN) and updating image links across the site.
But worse than this were the intermittent failures of the critical outline and blog creation features. During testing, about 50% of the time I would try to generate an outline or a blog post, I would see the dreaded Heroku purple screen of death. After doing some research, I discovered that processes were not completing within Heroku's 30-second limit, producing a timeout error.
Celery, Redis, and weeks of tinkering
To get Good Bloggy truly operational online, I discovered that I would need to implement something called asynchronous processing. This means that time-consuming processes are shifted to a queue and processed in the background by a worker dyno. Two common technologies used by web apps to handle this are Celery and Redis. Basically, Celery puts tasks into the queue, and Redis efficiently stores them in memory.
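Celery and Redis need their own processes, but the core pattern is simple: the web request just enqueues a job and returns immediately, while a background worker does the slow part. Here's a toy version of that pattern using only Python's standard library (a thread stands in for the worker dyno, and a short sleep stands in for the slow LLM call):

```python
import queue
import threading
import time

task_queue: "queue.Queue[str]" = queue.Queue()
results: dict[str, str] = {}

def worker() -> None:
    # Stand-in for the Celery worker dyno: pull jobs and process them.
    while True:
        topic = task_queue.get()
        time.sleep(0.1)  # stand-in for a slow LLM call
        results[topic] = f"Draft about {topic}"
        task_queue.task_done()

def enqueue_draft(topic: str) -> None:
    """What a view does instead of blocking past Heroku's 30-second limit."""
    task_queue.put(topic)

threading.Thread(target=worker, daemon=True).start()
enqueue_draft("owls")
task_queue.join()  # in the real app, the browser polls for the result instead
print(results["owls"])
```

In production, Redis replaces the in-memory queue so that the web and worker dynos, which are separate machines, can share it.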
Shifting to an asynchronous model meant rewriting most of Good Bloggy's back-end code and upgrading my basic Heroku account to include both standard and worker dynos for task handling. I also had to pay for an instance of Redis. This sounds straightforward, but getting the new setup to work took weeks. And I had to rethink how I structured all my core copywriting functions.
This was truly the most beastly part of the project, but also, strangely, the most rewarding. I was so happy when, finally, the production version of the software would consistently work.
What I learned:
Celery and Redis are difficult for beginners to pick up. The setup involves multiple moving parts (your Django app, a Celery worker, and a Redis broker), and they don’t always play together nicely. When something goes wrong (and it will), debugging isn't easy. There aren’t always clear error messages, and errors may appear in your main development server log or in your Celery log. You can easily find yourself stuck chasing down why a task didn’t fire, why it’s stuck in a pending state, or why nothing is happening at all.
On top of that, deploying this kind of stack on a platform like Heroku adds even more complexity. You have to manage environment variables, configure worker queues, and deal with the fact that dynos sleep or reset in ways that can interfere with persistent workers. I can’t believe I got it to work at all!
Adding token-based payment logic and Stripe integration
Time elapsed: 1 month
In what I now know was a near-delusional fit of excessive optimism, I decided to set up a token-based payment system, so people could sign up for the tool online and start generating content without paying a subscription fee. To figure out a reasonable pricing model, I wrote a script to capture token usage while I generated posts of various lengths. Once I had a rough idea of how many tokens different actions required, I established different pricing tiers and added token management and Stripe functionality.
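The usage-metering script ultimately boiled down to reading token counts off each API response and pricing them. Here's the shape of that calculation; the per-token prices are hypothetical placeholders, not Anthropic's actual rates:

```python
# Hypothetical per-token prices in USD (placeholders, not real rates).
PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000
PRICE_PER_OUTPUT_TOKEN = 15.00 / 1_000_000

def generation_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one generation, given the usage counts the API reports."""
    return (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)

# e.g. a 2,000-token brief producing a 1,200-token draft:
print(f"${generation_cost(2_000, 1_200):.4f}")
```

Averaging these numbers over many test generations is what let me price the tiers with some margin built in.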
Building dashboards allowing users to track their token usage and me to monitor API spending overall was a fairly heavy lift that took me a few weeks. When I was finally finished, I felt good about the results. Payments were simple, token usage tracking was simple and transparent, and I even set up an easy way for customers to request refunds. For the moment, I felt I was mostly done.
What I learned:
While I was happy I was able to do all this, I realized in hindsight that spending a lot of time on a payment integration and pricing logic before anyone wanted to actually pay for the tool was probably not the best use of my time.
A launch and a fizzle
I built a landing page, brought back Tailwind, got rid of Tailwind, fiddled endlessly with formatting, and created a registration page. I even made an awful video in Adobe Express. I wasn't satisfied with any of it; there were several more features I wanted to add, and the designs could definitely be better.
But I decided to launch anyway.
Owls and crickets
I used Google Ads to test the market and immediately noticed most people interested in the product were in their teens and early twenties. When I checked what other keywords they were searching for, I found PapersOwl, a cheating service. It turns out my ideal customer wasn't overworked agency professionals: it was students looking to cheat on assignments.
And the tool was not really what they wanted. Based on lots of unsolicited feedback, I learned that Good Bloggy was too complex for them. They wanted writing papers to be EASY, and they didn’t want to deal with creative briefs and style guides and multi-step workflows.
And that was OK. As a mom, I really didn’t want to enable even more automated cheating at scale.
Finally, before running any more ads, I did what I should have done months ago: talk to some other content marketing people and writers about the product and show them a demo.
And that was enlightening. It turns out that most serious writers and content marketers don’t want AI-generated drafts at all. While they may have their own pet prompts or custom GPTs for editing, they generally want to publish human-written content and see AI writing as a professional hazard.
The feedback was bracingly harsh:
“What? No, I don’t want a faster, more elaborate way to create AI slop.”
“If I’m going to go through the trouble of building a creative brief, I’m going to give it to a human freelancer!”
“AI is inconsistent. Can you guarantee it will NEVER insert em-dashes into drafts or use ‘here’s the kicker’?”
“This is an AI wrapper like Jasper, but maybe with a little extra something. I bet Claude will do all this soon.”
“You have a pricing tier with more than 100 blog posts! Nobody should publish that much content. Don’t be part of the problem.”
What happened to my “pet” project?
Ultimately, Good Bloggy was a failure as a product. While my agency does offer it as part of a service package that includes help from an experienced human editor, I imagine it will be entirely phased out over the coming months as workspaces from Anthropic, Google, and even Notion become ever more advanced.
But despite the fact Good Bloggy never took off, I’m really glad I built it. I learned a ton about how LLMs do and don’t work, prompting, databases, Python, UI and UX design, and even back-end infrastructure. It was incredibly educational and even exciting when I would vanquish a particularly bad bug.
If you want to check it out, in all its glorious imperfection, you can find it here. And feel free to share your projects that didn’t quite make it in the comments.
This article was hilarious, in the best way. I’m genuinely impressed that you dove into Celery without any technical background. That’s amazing.
I think your product idea is awesome.
Imo, it didn't take off mainly because most users just aren’t mentally ready to invest the time and effort needed to set up a working system with AI. It takes time and rounds of education for them to realize the true value behind it.
An anecdote: other builders have reached out asking me to try similar products and give honest feedback. One of them was quite simple; I even used it to write a Medium article that gained a lot of traction. I didn’t stick with it because it could only write generic stuff, but building a system like this would be great.
Ambitious and amazing that you got it working and launched in 7 months. I say congrats! Now you can call a bluff from any of the engineers you delegate such tasks to. I had to do this today with our dev team and their projected timeline and cost. The more you know, the better leader you become.