I built a Chrome extension to manage hundreds of scattered AI prompts
My resolve to get organized shall not waver.
Disclosure: This article was written by me, a human, because apparently I can’t stop building Chrome extensions and my New Year’s resolution is to get organized once and for all. Claude helped edit the copy and assisted with development. The extension is available for free for the next two weeks, then all bets are off.
The extension is free to anyone who subscribes to Wondering About AI. The discount code appears in the welcome email you receive right after you sign up. If you’re an existing subscriber, DM me for the code.
Two weeks ago I released Substack Reader, a Chrome extension for managing my 1,200+ Substack subscriptions. That project went surprisingly well. I learned the basics of Chrome extension architecture, figured out how to work with undocumented APIs, and shipped something useful in about a day and a half.
So naturally, I immediately started building another one.
This time the problem was prompts. I’ve been wanting to run some real tests on whether “prompt engineering” actually matters as much as people claim, or if the models have gotten good enough that a straightforward request works just as well as a carefully structured one. To run those tests consistently, I needed a way to save prompts and reuse them across different models and conversations.
I also had prompts scattered everywhere that I’d copied from Substack articles because I thought they were cool. Apple Notes. Google Docs with titles like “AI Prompts (RANDOM ASSORTMENT NOVEMBER).” A folder of text files I haven’t opened in months. If you play with AI regularly, you probably know the feeling. You try a prompt, get good results, and then three days later when you need it again, it’s disappeared into the endless scroll of your chat history.
Instead of this chaos, I wanted something simple, ideally a local tool that could store everything in my browser and wouldn’t require user authentication or hosting. Something I could build once and easily share with others.
To get my prompts organized, I ultimately decided to build Prompt Collector. Here’s a quick demo:
The spec-first approach
I’ve learned (sometimes the hard way) that jumping straight into code with AI assistance leads to wandering, inconsistent results. Before I opened my editor or wrote a single line of JavaScript, I sat down with Claude to write a detailed technical specification.
Collaborating with Claude
I started with rough notes, basically a brain dump of every feature I wanted:
Capture prompts from any webpage or chat window
Tag and organize prompts
Search across my prompt library
Handle variables (so I can have placeholders like [TOPIC] or [TONE] that I fill in when I use a prompt)
Create collections for different use cases
Export to CSV or JSON
From there, Claude helped me expand these bullet points into something comprehensive. I kept asking questions to flesh out the implementation details:
Can a prompt belong to multiple collections?
What happens when you delete a collection? Do the prompts get deleted too?
How should tag colors be assigned? Random? User-selected? From a palette?
What if a variable has multiple possible values, like [TONE] could be ‘professional’ or ‘casual’ or ‘friendly’?
Each question forced a decision, and each decision got documented. By the end, I had a 700-line specification that included:
Data models defining exactly what a Prompt, Collection, Tag, and Variable would look like
User flows mapping step-by-step interactions for capturing prompts, searching, and using variables
UI specifications including exact hex codes (teal accents: #2dd4bf), typography scales, and component behaviors
Technical architecture showing how files would be organized across popup, background scripts, and content scripts
Edge cases documenting what happens when things go wrong
Scope boundaries listing features I would not build
That last part proved useful throughout development. The spec included a “Future Considerations (Out of Scope)” section listing features I explicitly wasn’t going to build, which included user accounts, cloud sync, AI-powered auto-tagging, collaboration features, MCP server integration, and nested collections.
Having this written down made it easier to resist scope creep. When I thought “wouldn’t it be nice if…” I could check the spec. If it wasn’t there, it wasn’t happening in v1.
Why a Chrome extension
I considered a couple of options before landing on a Chrome extension.
A web app might have been a reasonable choice, but it would have required servers, databases, user authentication, and ongoing hosting costs. Plus, I’m already running Future Scan in production, and I didn’t want to support (or pay for) more than one scalable hosted app at a time. (Note: I might add a hosted endpoint and authentication for MCP support for phase 2.)
A desktop app could have worked offline and stored data locally, but it would have been a heavier lift to build and distribute. Plus, prompts mostly live in browser tabs.
A Chrome extension was the obvious choice. They run locally in your browser, store data in chrome.storage.local (which has no practical size limit for this use case), and can interact with any webpage. No servers, user accounts, or hosting costs required.
I’d also just built Substack Reader as an extension, so I already had the basic architecture fresh in my head. Manifest V3, popup windows, background service workers, content scripts. I could reuse patterns I’d figured out two weeks ago.
Design notes (i.e., making it pretty)
I spent more time than you’d expect on visual design. Most Chrome extensions look like afterthoughts. Cramped layouts, clashing colors, tiny text. I wanted Prompt Collector to feel calm, organized, and professional.
The color palette is mostly grayscale with teal accents (#2dd4bf for interactive elements). Light mode uses white backgrounds with subtle gray borders. Dark mode uses dark grays (#171717 for the page background, #262626 for cards) rather than pure black, which is easier on the eyes.
Typography is Inter throughout, with a clear hierarchy. Body text at 16px, headings progressively larger, generous line height for readability.
Tags get auto-assigned colors from a rotating palette of soft pastels. Blue, green, amber, pink, indigo, orange, purple, teal. Each new tag gets the next color in the sequence. It keeps things visually organized without requiring users to make decisions about color.
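The rotation described above is a one-liner in practice. A minimal sketch (the function name is hypothetical; the palette order matches the sequence named above):

```javascript
// Rotating tag palette, as described above. Each new tag takes the next
// color in sequence, wrapping around when the palette is exhausted.
const TAG_PALETTE = ['blue', 'green', 'amber', 'pink', 'indigo', 'orange', 'purple', 'teal'];

function colorForTag(existingTagCount) {
  // The count of tags created so far indexes into the palette, modulo its length.
  return TAG_PALETTE[existingTagCount % TAG_PALETTE.length];
}
```

The modulo keeps the sequence deterministic, so the same tag always keeps its color as long as tags aren’t reordered.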
The floating capture button is a 48px teal circle that sits in the corner of every page. Visible enough to find, subtle enough to ignore when you don’t need it.
Building with the spec as shared context
With the specification complete, development went remarkably fast. The detailed spec meant I could give Claude precise instructions rather than vague directions.
Instead of: “Build a Chrome extension for saving prompts”
I could say: “Implement the Storage class from section 4.6 of the spec, with methods for getPrompts, savePrompt, and deletePrompt using chrome.storage.local. The Prompt interface is defined in section 4.1.”
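To give a sense of what an instruction like that yields, here is a sketch of such a Storage class. The class and method names come from the spec reference above; the promise-based chrome.storage.local.get/set calls are the real Manifest V3 API, but the method bodies are my illustration, not the extension’s actual code:

```javascript
// Illustrative Storage class backed by chrome.storage.local (Manifest V3,
// where get/set return promises). Prompts are kept as one array under a key.
class Storage {
  async getPrompts() {
    const { prompts = [] } = await chrome.storage.local.get('prompts');
    return prompts;
  }

  async savePrompt(prompt) {
    const prompts = await this.getPrompts();
    const i = prompts.findIndex((p) => p.id === prompt.id);
    if (i >= 0) prompts[i] = prompt; // update an existing prompt in place
    else prompts.push(prompt);       // or append a new one
    await chrome.storage.local.set({ prompts });
  }

  async deletePrompt(id) {
    const prompts = (await this.getPrompts()).filter((p) => p.id !== id);
    await chrome.storage.local.set({ prompts });
  }
}
```

Storing the whole array under one key is the simplest scheme at this scale; a library of a few thousand prompts is still well within chrome.storage.local’s limits.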
The specification became a shared understanding between me and Claude. When discussing features, I could reference specific sections. And when something wasn’t working, I could check whether the implementation matched the spec or whether the spec needed updating.
This is under-appreciated in AI-assisted development: the quality of your prompts to the AI depends on the quality of your thinking beforehand. A detailed spec is essentially a very long, very detailed prompt that provides context for every subsequent conversation.
Where the spec failed (and how I adapted)
The original specification included “AI site integration,” which meant custom code injected into ChatGPT, Claude, and other AI chat interfaces to add “Save” buttons directly in the conversation.
This was ambitious. And it mostly didn’t work.
AI chat interfaces are React applications with dynamically generated class names, constantly changing DOM structures, and aggressive Content Security Policies. A CSS selector that works today breaks tomorrow when the site pushes an update. I hit Content Security Policy violations immediately:
Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self'..."

ChatGPT’s interface in particular was hostile to injection. Claude’s interface was slightly more cooperative but still fragile. Rather than fight this battle with site-specific hacks that would break constantly, I pivoted and rewrote the spec to incorporate floating buttons for capturing and copying prompts.
These buttons work everywhere, require no site-specific code, and won’t break when AI companies update their interfaces. And users can specify where they appear in settings.
Lesson learned: The spec is a guideline, not a Bible (or the stern religious tome of your choice). If what you planned isn’t working, pause and update your spec. Then try again. Don’t let the AI fail more than five or six times before reconsidering your approach, whether to the UX or to the underlying technology.
The variable system: where real usage reshaped the spec
The variables feature went through the most evolution. My original concept was simple: define variables like [MY_BLOG_NAME] = “Wondering About AI,” and when you copy a prompt containing that variable, it gets replaced automatically.
Simple enough, right? Then I actually started using it.
Problem 1: Variables often need multiple values.
A [TONE] variable might need to be “professional” sometimes and “casual” other times. An [AUDIENCE] variable might be “developers,” “executives,” or “general readers” depending on the context.
I extended the variable system to support multiple options per variable. When you copy a prompt containing [TONE], a modal appears letting you choose which option to use. You pick “professional,” click copy, and the prompt lands in your clipboard with the substitution made.
Problem 2: Creating variables was tedious.
Users had to manually type [VARIABLE_NAME] in their prompts, then separately navigate to the Variables section and create a matching variable. This was error-prone. Typos meant variables wouldn’t match, and the whole process felt clunky.
I added two features not in the original spec:
Make Fill-in. Select any text in your prompt, click a button, and it automatically converts to [UPPERCASE_FORMAT] with underscores replacing spaces
Insert Variable. A dropdown showing all existing variables, click to insert at cursor position
These emerged from testing, not planning. The specification was a starting point, not a straitjacket.
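Under the hood, “Make Fill-in” is simple string work. A minimal sketch (the function name and the exact normalization rules are my assumptions, not the extension’s actual code):

```javascript
// Convert a selected run of text into the [UPPERCASE_FORMAT] placeholder
// style described above: uppercase, with underscores replacing spaces.
function makeFillIn(selectedText) {
  const name = selectedText
    .trim()
    .toUpperCase()
    .replace(/[^A-Z0-9]+/g, '_')  // runs of spaces/punctuation become one underscore
    .replace(/^_+|_+$/g, '');     // drop stray leading/trailing underscores
  return `[${name}]`;
}
```

Normalizing at creation time also sidesteps the typo problem: the placeholder inserted into the prompt and the variable name stored alongside it are generated from the same string.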
Problem 3: Long variable values were unwieldy.
Some of my variables have lengthy values. A full paragraph describing my target audience, or a list of URLs for citation. The variable cards became overwhelming, and editing was painful.
I added collapsible display with “Show more” expansion, auto-expanding text areas in the editor, and labels for options. When a variable option is too long to display nicely (like a URL), you can add a short label. The picker menu shows “Blog URL” instead of https://wonderingaboutai.substack.com/really/long/path.
The bugs that taught me things
Building Prompt Collector involved plenty of debugging. A few highlights:
The clipboard permission dance
Chrome extensions have complex rules about clipboard access. Writing to the clipboard requires either user interaction (a click event) or the clipboardWrite permission. But the permission alone isn’t enough in all contexts. You also need to be in a “secure context” and sometimes need to use the newer navigator.clipboard.writeText() API instead of the older document.execCommand('copy'). I ended up implementing both with fallback logic.
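The fallback logic looks roughly like this sketch (the function name is hypothetical; navigator.clipboard.writeText() and document.execCommand('copy') are the real browser APIs, but the surrounding structure is illustrative):

```javascript
// Try the modern async clipboard API first, then fall back to the legacy
// hidden-textarea + execCommand path. Returns true if a copy succeeded.
async function copyToClipboard(text) {
  // Modern path: needs a secure context plus a user gesture or clipboardWrite.
  if (typeof navigator !== 'undefined' && navigator.clipboard?.writeText) {
    try {
      await navigator.clipboard.writeText(text);
      return true;
    } catch (err) {
      // Permission denied or insecure context: fall through to the legacy path.
    }
  }
  // Legacy path: select the text in an offscreen textarea and copy it.
  if (typeof document !== 'undefined') {
    const ta = document.createElement('textarea');
    ta.value = text;
    ta.style.position = 'fixed'; // keep it out of the layout flow
    document.body.appendChild(ta);
    ta.select();
    const ok = document.execCommand('copy');
    ta.remove();
    return ok;
  }
  return false; // no clipboard mechanism available at all
}
```

execCommand('copy') is deprecated but still widely supported, which is exactly why it makes a useful safety net behind the modern API.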
Storage timing issues
When the popup loads, it needs to fetch prompts from chrome.storage.local. But storage reads are asynchronous, so if you try to render before the data arrives, you get an empty list that flickers when data finally loads. The fix: show a loading state immediately, render cached data if available, then update when fresh data arrives.
The case sensitivity trap
Variable names are stored uppercase (BLOG_NAME), but users might type them in prompts with inconsistent casing ([blog_name] or [Blog_Name]). The replacement logic needs to be case-insensitive, but the display needs to be consistent. I normalize everything to uppercase on storage but match case-insensitively on replacement.
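That match-insensitively-but-store-uppercase rule can be sketched as a single regex replace (the function name and value shape are my assumptions, not the extension’s actual code):

```javascript
// Replace [PLACEHOLDER] tokens with stored values, matching the bracketed
// name case-insensitively. Keys in `values` are the normalized uppercase
// variable names; unknown placeholders are left untouched.
function substituteVariables(promptText, values) {
  return promptText.replace(/\[([A-Za-z0-9_]+)\]/g, (match, name) => {
    const key = name.toUpperCase(); // [blog_name] and [Blog_Name] both hit BLOG_NAME
    return key in values ? values[key] : match;
  });
}
```

Leaving unmatched placeholders in place, rather than deleting them, makes a typo visible in the pasted output instead of silently swallowing it.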
Dark mode gotchas
Tailwind’s dark: variant makes dark mode relatively easy, but there are edge cases. Autofill backgrounds in form inputs don’t respect your color scheme. Chrome forces a light blue background that looks awful in dark mode. The fix involves CSS hacks with -webkit-autofill selectors and transition delays.
What I shipped vs. what I planned
Comparing the final product to the original specification:
Features that made it mostly unchanged
Floating capture button on every page
Collections and tags for organization
Full-text search across prompts
Variable substitution with [PLACEHOLDER] syntax
Light/dark mode with system preference detection
JSON and CSV export
Keyboard shortcuts for power users
Version history
Features I dropped
AI site integration (too fragile)
Onboarding tooltip tour (substituted a short video for now)
Bold/italic formatting in prompts (unnecessary complexity)
Features I added
Multi-option variables with picker modal
“Make Fill-in” button for quick variable creation
Insert Variable dropdown in editor
Labels for long variable options
Collapsible sidebar when you have many collections
Import functionality for prompt packs
Alphabetical index navigation for large prompt libraries
The benefits of building local-only (again)
Like Substack Reader, Prompt Collector runs entirely in your browser. The architecture has real advantages:
Privacy. Your prompts never leave your machine. No servers, no databases, no analytics. I literally cannot see what prompts you’re saving or how you’re organizing them.
No accounts. Install the extension, click the icon, start saving prompts. No signup, no email verification, no password to remember.
No ongoing costs. For me as the developer, there’s no infrastructure to maintain. For you as the user, there’s no subscription fee. One-time download, works until Chrome fundamentally changes how extensions work.
Works offline. Once you’ve saved prompts, everything is available without internet access.
The tradeoff is no sync across devices. Your prompts live in one browser on one machine. For me, that’s fine. I do most of my AI work on one computer. If you need cross-device access, you might want a cloud-based solution instead.
Are there similar products?
Yes. When I started researching this problem, I found PromptBox, a SaaS product that offers prompt management with folders, tags, and team collaboration. It requires an account, has paid tiers, and emphasizes social sharing.
Various Notion templates also exist for prompt organization. They work if you’re already living in Notion, but they require manual copy-paste and don’t integrate with your browser.
Note apps and text files are what most people actually use. It’s what I was using before building this. They work, sort of, until you have more than a dozen prompts and need to find one quickly or want to swap out key inputs.
Lessons for builders
1. Write the spec first (seriously)
Yes, writing a 700-line specification before coding feels slow. But every hour spent on the specification saved three hours of confused implementation, rework, and “wait, how should this actually work?” exchanges later.
The spec doesn’t have to be perfect. Mine evolved significantly during development. But having something comprehensive to reference transformed every conversation with Claude from “let’s figure this out together” to “let’s implement this specific thing.”
2. Be mentally prepared for scope reduction
My spec included AI site integration knowing it might not work. And when it failed, I remained calm and reimagined the feature instead of burning a whole day on something that ultimately wasn’t essential functionality just because it was in the spec.
3. Test with real work (not just bots)
Synthetic testing (“click here, verify the modal appears”) catches bugs. Real testing (“use this to save and organize the prompts I’m actually using today”) catches design problems.
The variable picker modal came from real usage. So did the alphabetical index navigation. I only noticed these needs because I was using Prompt Collector for actual work while building it.
4. Better inputs = better results
Claude was invaluable as a coding partner. But the quality of its output directly correlated with the quality of my input. The spec wasn’t just for me. It was context for every AI interaction.
If you’re building with AI assistance, invest heavily in upfront documentation. Your future self (and even your AI collaborator) will thank you.
5. Ship simple, then iterate
The original spec explicitly marked these features as out of scope. I shipped without cloud sync, without collaboration, without AI-powered auto-tagging, without MCP. Those might come later. But v1 needed to be solid, simple, and complete.
The best tool you can ship today beats the perfect tool you’ll ship never.
What’s next
Prompt Collector is available now as a free Chrome extension. But I’m not sure I’m stopping there. Some features I’m considering for a possible Phase 2:
Prompt packs. Pre-built collections for specific use cases (content creation, coding, research)
Sync via file export. Manual sync by exporting/importing JSON files (no backend needed)
Usage analytics. See which prompts you use most, which you never touch
MCP integration. Exposing your prompt library to AI agents via Anthropic’s Model Context Protocol
That last one is especially interesting. What if your AI agents could access your carefully curated prompt library? When you ask Claude to write a newsletter, it could automatically pull your “Newsletter Outline” prompt with your preferred structure and tone variables. The prompts you’re saving today could become instructions for your AI workforce tomorrow.
But that’s v2 thinking. For now, I’m focused on making sure v1 offers a great experience and is as bug-free as I think it is.
If you try Prompt Collector, I’d love to hear what you think. And if you build something using the spec-first approach, tell me about it. I’m always curious what other builders create when they slow down and think before they code.