If you Google “best authenticator apps,” you’ll find this article I wrote for Zapier.
It should rank somewhere in positions 1-4, depending on where you're searching from.
It shows up on AI overviews.

On Perplexity.

On ChatGPT.

The article ranks for 189 keywords and saves the Zapier team over $10K/year that they would otherwise have spent on ads to attract the same traffic.
In this article (case study, if you will), I’ll share what went into writing the piece. It could help you or your team create better content.
The opportunity most writers miss
Sometimes the best content ideas don’t come from a keyword research tool. They come from paying attention.
I was working on a content refresh project when I noticed something. There was this small section in an article about authentication.
It was just a list of seven tool suggestions. Nothing super detailed or helpful beyond the names.
I asked my editor if we could expand it into its own piece.
She said yes.
When refreshing content, it’s easy to simply update the statistics and call it a day. Change 2025 to 2026. Swap out an outdated screenshot. Maybe add a sentence or two.
But it doesn't have to be that way.
You can use those refresh projects to find gaps—places where readers might want more but weren’t getting it.
I imagined a reader frustrated by a bare list of names, left with questions like:
Which one should I use? What makes them different? Do they work on my device?
I didn’t want to leave them hanging.
The research process: where most listicles fall apart
Here’s where I see most “best of” posts go wrong.
Writers pick tools based on what other articles mention. They copy the same 5-7 options everyone else covers.
I wasn’t going to do that.
I started by compiling everything I could find: popular tools, apps recommended on Reddit, Twitter, and elsewhere, and options I'd heard about. I ended up with 15-17 options.
Then I set real criteria that would help readers choose:
- Cross-platform compatibility. The app had to work on both iOS and Android. There are amazing Apple-only authenticators. Great Android-only options too. But Zapier’s audience uses both. I wasn’t going to recommend something that locked people into one ecosystem.
- Free plan available. Not everyone wants to pay for an authenticator app. The free version had to be genuinely useful.
- Separate from password managers. Some password managers include authentication. Those are fine. But I specifically wanted standalone authenticators. Don’t put all your eggs in one basket, you know?
These criteria alone cut the list down to about 10 tools.
Testing 1, 2, 3
I didn’t just read about these tools; I used them. I installed them. All of them.
I tested them on my Android tablet. Tested them on my iPhone. Set up accounts, transferred data between apps, and tried to use them the way a typical user would.
I looked at:
- Security features to know what was protecting my codes
- How easy it was to transfer data if I wanted to switch
- The app’s design. Can I find what I need when I’m in a hurry?
- Multi-device support. Does it actually sync, or just claim to do so?
- Documentation and help resources. What happens when I get stuck?
As I tested, I took notes. What worked for me. What didn’t. Where I got confused. What impressed me.
The list got shorter. Some apps looked good on paper but were clunky to use. Others had great interfaces but terrible documentation. A few didn’t handle multi-device as smoothly as they claimed.
By the time I finished testing, I knew which ones were worth recommending.
Structure for busy readers
The outline was straightforward:
- Intro explaining why you need an authenticator app
- What makes the best authenticator app
- Quick comparison table
- Detailed breakdown of each tool
The table was key. You could read just the intro and table and get what you needed. Most readers would. That’s fine. The detailed sections were there for people who wanted more context.
Writing like a real person used these tools
When I sat down to write, I wasn't summarizing other articles; I was writing about what I'd just done.
Many writers never actually touch the product. And it's easy to tell, because that firsthand knowledge, or in many cases the lack of it, shows up in the writing.
For each tool, I wrote about how it worked, what stood out, and where it fell short compared to others.
“The interface is clean and minimal, and while it feels slightly more polished on Android, it works well enough on iOS too.”
“I first tried it out of curiosity, expecting a complicated onboarding flow because…Cisco.”
“One thing to keep in mind: Microsoft uses app data to train its AI models by default. It’s not something I love seeing in a security-focused app.”
I also applied the standard practices that make B2B content work:
- Short paragraphs. Nobody’s reading walls of text. If a paragraph runs longer than 3-4 lines, split it
- Subheadings every 200-300 words. Readers should be able to jump to exactly what they need. I used clear, descriptive H2s and H3s that effectively previewed what was to come
- Bulleted lists for features. When I’m listing an app’s pros and cons, bullets work better than sentences. It’s faster to scan and easier to compare
- Images and screenshots. These break up the text and help readers see what each tool looks like
Then I edited. Read it out loud to catch awkward phrasing. Ran it through Hemingway to flag overly complex sentences and cut unnecessary words.
Optimizing for the machines
Once I had written for humans, it was time to optimize for the machines: Google, AI search engines, and so on.
Many writers treat optimization as a checklist.
Hit these keywords ✅
Use this exact density ✅
Follow this formula ✅
This type of optimization is a result of over-reliance on content optimization tools. These tools work, but you shouldn’t treat their scores as the ultimate measure of content quality.
For example, I used MarketMuse. It is one of my favorite tools to use as a content writer.
If I used all the keywords MarketMuse suggests, I would get a perfect score, BUT there's a 98% chance that my sentences would read awkwardly.
I don’t want that. So what I like to do when using such tools is to ask:
- Does it make sense to mention this?
- Does it help the reader?
If yes, I find a natural place for it. If no, I skip it.
Other ways I optimized the article were to:
- Include the target keyword in the right places. H1, first paragraph, a few H2s, meta description. But naturally
- Add related keywords throughout. MarketMuse helped me surface relevant secondary terms people use when searching for this topic
- Add internal links to relevant Zapier content. I linked to articles about security, password management, and account protection. These links help readers find more helpful info and help Google understand topic relationships
- Make it scannable. I used short paragraphs, clear subheadings, and bullet points for pros and cons. I also added bold text for key points, but used it sparingly
- Write alt text for the images. These were descriptive but concise. So instead of “Google screenshot,” it was something like “Google Authenticator, our pick for the best authenticator app for most people.”
- Optimize it for mobile. Since most B2B readers browse on their phones, every section had to work on a small screen
Beyond the body copy, I added details that AI and traditional search engines notice, like:
- A clean, keyword-rich URL structure: /blog/best-authenticator-apps
- A meta title under 60 characters that included the main keyword and value proposition
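These fundamentals are mechanical enough to spot-check programmatically. As an illustration (this is a hypothetical sketch, not part of the Zapier workflow; the function and field names are mine), here's roughly how you could verify them before publishing:

```python
import re

def check_seo_fundamentals(page: dict, keyword: str) -> list[str]:
    """Flag missing on-page SEO fundamentals. `page` is a dict with
    'url', 'meta_title', 'h1', 'first_paragraph', and 'alt_texts' keys."""
    issues = []
    kw = keyword.lower()

    # The target keyword belongs in the H1 and the opening paragraph.
    if kw not in page["h1"].lower():
        issues.append("keyword missing from H1")
    if kw not in page["first_paragraph"].lower():
        issues.append("keyword missing from first paragraph")

    # Meta titles over ~60 characters tend to get truncated in results.
    if len(page["meta_title"]) > 60:
        issues.append("meta title over 60 characters")

    # A clean URL slug: lowercase words joined by hyphens.
    slug = page["url"].rstrip("/").rsplit("/", 1)[-1]
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", slug):
        issues.append("URL slug isn't clean and hyphenated")

    # Every image needs descriptive alt text, not empty or generic.
    for alt in page["alt_texts"]:
        if not alt.strip() or alt.strip().lower() in {"image", "screenshot"}:
            issues.append(f"weak alt text: {alt!r}")

    return issues

# Example with made-up page data:
page = {
    "url": "/blog/best-authenticator-apps",
    "meta_title": "The best authenticator apps",
    "h1": "The 6 best authenticator apps",
    "first_paragraph": "An authenticator app adds a second layer of security.",
    "alt_texts": ["Google Authenticator, our pick for most people"],
}
print(check_seo_fundamentals(page, "authenticator app"))  # → []
```

A script like this only catches the checklist items; whether the keyword reads naturally in those spots is still a human judgment call.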
None of this is revolutionary. It’s just doing the fundamentals well.
Dotting i’s and crossing t’s
When I sent the first draft to my editor, she had a couple of questions, which I addressed almost immediately.
I sent the revised version. She approved it, and we published.
Eleven days later, it hit the front page.
Zapier has a VERY strong SEO foundation, which I must admit helped the article reach these heights. But a solid foundation wouldn’t matter much if what you build on it is trash.
What this means for your content
If you’re hiring writers, here’s what to look for:
Do they spot opportunities? I didn’t just execute the original brief. I saw a gap and suggested we fill it.
Do they test things? Most listicles are rewritten versions of other listicles. I actually used the products I was recommending.
Do they write from experience? There’s a difference between research-based content and experience-based content. Readers can tell.
Do they understand SEO without obsessing over it? Optimization matters. But it’s a tool, not a religion.
Can they balance speed with quality? This piece took time to research and test. But once I started writing, it moved fast. Two weeks from publication to the front page and LLM citations.
The bottom line
While there is still a ton of uncertainty about SEO and AEO and GEO or whatever you decide to call it, one thing is clear—you still need to create really good content.
Content that readers want. Content they find helpful, that answers their questions, and helps them get from point A to B.
In this case, that meant:
- Recognizing a content gap
- Setting clear selection criteria
- Testing tools hands-on
- Writing about real experience
- Optimizing without overthinking it
The rankings followed because the content was actually helpful.
That’s the playbook.
Want to show up where your buyers are searching? Not just Google, but ChatGPT, Perplexity, and AI overviews? Let's talk. I write bottom-funnel content for B2B SaaS companies: comparison pages, alternative guides, and product content that shows up when buyers are ready to choose.
Thinking about becoming a SaaS content writer yourself? Read my step-by-step guide on breaking into SaaS content writing to learn the skills you need.