How I Wrote a #1-Ranking Article That Saves Zapier $10K/Year in Ad Spend

If you Google “best authenticator apps,” you’ll find this article I wrote for Zapier. It should sit somewhere in positions 1-4, depending on where you’re searching from. It shows up in AI Overviews. On Perplexity. On ChatGPT. It ranks for 189 keywords and saves the Zapier team over $10K/year they would otherwise have spent on ads to attract the same traffic.

In this article (case study, if you will), I’ll share what went into writing the piece. It could help you or your team create better content.
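To put that $10K/year figure in context: the usual way to value organic traffic is to multiply a page’s search visits by what those clicks would cost as paid ads. Here’s a back-of-the-napkin sketch in Python. The visit and cost-per-click numbers are made up for illustration; the real ones aren’t public.

```python
# Back-of-the-napkin estimate of the ad spend an organic page replaces.
# Every input is a made-up placeholder, not Zapier's real data.

monthly_organic_visits = 2_500  # visits/month the article earns from search
avg_cpc_usd = 0.35              # what one of those clicks would cost as an ad

annual_value = monthly_organic_visits * avg_cpc_usd * 12
print(f"Estimated ad-spend equivalent: ${annual_value:,.0f}/year")
# 2,500 visits x $0.35/click x 12 months = $10,500/year
```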
The opportunity most writers miss

Sometimes the best content ideas don’t come from a keyword research tool. They come from paying attention.

I was working on a content refresh project when I noticed something: a small section in an article about authentication that was just a list of seven tool suggestions. Nothing detailed or helpful beyond the names. I asked my editor if we could expand it into its own piece. She said yes.

When refreshing content, it’s easy to update the statistics and call it a day. Change 2025 to 2026. Swap out an outdated screenshot. Maybe add a sentence or two. But it doesn’t have to stop there. You can use refresh projects to find gaps: places where readers want more but aren’t getting it.

I imagined a reader frustrated by a bare list of names, left with questions like: Which one should I use? What makes them different? Do they work on my device? I didn’t want to leave them hanging.

The research process: where most listicles fall apart

Here’s where I see most “best of” posts go wrong. Writers pick tools based on what other articles mention. They copy the same 5-7 options everyone else covers. I wasn’t going to do that.

I started by compiling everything I could find: popular tools, apps recommended on Reddit, Twitter, and elsewhere, and options I’d heard about. I ended up with 15-17 candidates. Then I set real criteria that would help readers choose, and those criteria alone cut the list down to about 10 tools.

Testing, 1, 2, 3

I didn’t just read about these tools; I used them. I installed all of them. I tested them on my Android tablet and on my iPhone. I set up accounts, transferred data between apps, and tried to use them the way a typical user would.

As I tested, I took notes on what worked for me, what didn’t, where I got confused, and what impressed me. The list got shorter. Some apps looked good on paper but were clunky to use. Others had great interfaces but terrible documentation. A few didn’t handle multi-device use as smoothly as they claimed. By the time I finished testing, I knew which ones were worth recommending.

Structure for busy readers

The outline was straightforward, and the comparison table was key. You could read just the intro and the table and get what you needed. Most readers would. That’s fine. The detailed sections were there for people who wanted more context.

Writing like a real person who used these tools

When I sat down to write, I wasn’t summarizing other articles; I was writing about what I’d just done. Many writers never actually touch the product, and it’s easy to tell, because that firsthand knowledge (or, in many cases, the lack of it) shows up in the writing. For each tool, I wrote about how it worked, what stood out, and where it fell short compared to the others. A few lines from the piece:

“The interface is clean and minimal, and while it feels slightly more polished on Android, it works well enough on iOS too.”

“I first tried it out of curiosity, expecting a complicated onboarding flow because…Cisco.”

“One thing to keep in mind: Microsoft uses app data to train its AI models by default. It’s not something I love seeing in a security-focused app.”

I also applied the standard practices that make B2B content work. Then I edited: I read the draft out loud to catch awkward phrasing and ran it through Hemingway to flag overly complex sentences and cut unnecessary words.
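Hemingway is a point-and-click editor, but you can script a rough version of the same pass. Here’s a sketch using the third-party textstat library, which is my stand-in here rather than part of the original workflow; the file name is a placeholder.

```python
# Rough scripted stand-in for a Hemingway-style readability pass.
# Requires the third-party textstat package: pip install textstat
import textstat

with open("authenticator-apps-draft.txt") as f:  # placeholder file name
    draft = f.read()

print("Flesch reading ease:", textstat.flesch_reading_ease(draft))
print("Grade level:", textstat.flesch_kincaid_grade(draft))

# Flag very long sentences, the way Hemingway highlights hard-to-read ones.
for sentence in draft.replace("\n", " ").split(". "):
    if len(sentence.split()) > 30:
        print("Long sentence:", sentence[:80] + "...")
```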
Optimizing for the machines

Once I had written for humans, it was time to optimize for the machines: Google, AI search engines, and so on. Many writers treat optimization as a checklist.

Hit these keywords ✅
Use this exact density ✅
Follow this formula ✅

That kind of optimization comes from over-reliance on content optimization tools. These tools work, but you shouldn’t treat their scores as the ultimate measure of content quality.

For example, I used MarketMuse. It’s one of my favorite tools as a content writer. If I used every keyword MarketMuse suggests, I’d get a perfect score, BUT there’s a 98% chance my sentences would read awkwardly. I don’t want that. So when I use such tools, I ask of each suggested term: does it fit naturally into what I’m saying? If yes, I find a natural place for it. If no, I skip it.
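A few lines of Python can triage a suggestion list like that, showing which terms the draft already covers so only the gaps need a manual “does this fit naturally?” review. The terms and file name below are made up, and the suggestions are pasted in by hand; I’m not assuming any MarketMuse API.

```python
# Check which suggested terms the draft already covers, so only the
# missing ones need a manual "does this fit naturally?" review.
# Terms and file name are placeholders; paste suggestions in by hand.
suggested_terms = ["two-factor authentication", "TOTP", "backup codes"]

with open("authenticator-apps-draft.txt") as f:
    draft = f.read().lower()

missing = [term for term in suggested_terms if term.lower() not in draft]
print("Hand-review these:", missing if missing else "all terms covered")
```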
I optimized the article in other ways, too. Off the page, I added details that AI and traditional search engines notice. None of this is revolutionary. It’s just doing the fundamentals well.

Dotting i’s and crossing t’s

When I sent the first draft to my editor, she had a couple of questions, which I addressed almost immediately. I sent the revised version, she approved it, and we published. Eleven days later, it hit the front page.

Zapier has a VERY strong SEO foundation, which I must admit helped the article reach these heights. But a solid foundation wouldn’t matter much if what you build on it is trash.

What this means for your content

If you’re hiring writers, here’s what to look for:

Do they spot opportunities? I didn’t just execute the original brief. I saw a gap and suggested we fill it.

Do they test things? Most listicles are rewritten versions of other listicles. I actually used the products I was recommending.

Do they write from firsthand experience? It shows up in the writing.