How to Check If ChatGPT and Claude Recommend Your SaaS
If you have never checked whether ChatGPT or Claude recommends your SaaS, you are missing one of the most important visibility signals in B2B marketing today.
AI assistants have become a primary research tool for software buyers. When a founder, a VP of Operations, or a procurement manager wants to understand which tools exist in a category — or which tool to buy — opening ChatGPT and asking is increasingly the first move. Not Google. Not G2. ChatGPT.
This guide walks you through exactly how to audit your AI search visibility: the queries to run, what to look for in the responses, and how to interpret what you find.
Step 1: Understand What You Are Testing
Before you run any queries, be clear on what you are measuring. You are not asking whether your product is good. You are asking whether AI models have enough information about your brand — and the right information — to mention you when buyers ask relevant questions.
AI models like ChatGPT and Claude generate answers based on training data (knowledge absorbed during model training) and, for more recent versions, real-time retrieval from the web. Your goal is to understand:
- Whether your brand appears at all in category-level queries
- How you are described when you do appear
- How your visibility compares to direct competitors
- Whether performance differs between ChatGPT and Claude
Step 2: Build Your Query Set
The quality of your audit depends on the quality of your query set. You want queries that reflect how your actual buyers talk about their problems — not SEO keywords, but conversational questions.
Structure your queries across the four types below (a template sketch for generating them follows the last one):
Category queries
These are top-of-funnel queries where a buyer is just starting to understand the landscape. Examples: "What are the best tools for [your category]?" / "What software do [your ICP] use for [use case]?" / "What are my options for [solving problem]?"
Category queries are the most important to track. They represent buyers who do not know your brand yet and are forming their initial shortlist. If you are not in these answers, you are invisible at the top of the funnel.
Comparison queries
These are mid-funnel queries from buyers who already have a shortlist and are evaluating options. Examples: "Compare [Tool A] vs [Tool B]" / "[Your brand] vs [Competitor]" / "What's the difference between [Tool A] and [Tool B]?"
Comparison queries tell you how AI models frame you relative to specific competitors. The framing matters as much as the presence — if you consistently appear as "the cheaper option" or "better for small teams" when you have repositioned upmarket, that is a signal you need to address.
Use case queries
These target specific workflows and pain points. Examples: "What tool should I use to [specific job to be done]?" / "How do I [task] for my [ICP role]?" / "What's the best way to handle [specific workflow]?"
ICP-specific queries
These qualify by company type or role. Examples: "What tools does a [company type] use for [category]?" / "What do [ICP roles] use for [workflow]?" / "Recommend tools for a [company stage] [department]"
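To make the query set reproducible, you can expand a few templates per type with your own values. Here is a minimal Python sketch; the substitution values and template wordings are illustrative placeholders, not canonical queries, so swap in the language your buyers actually use:

```python
# Illustrative substitutions -- replace with your own category, ICP, and competitors.
SUBS = {
    "category": "customer onboarding software",
    "icp": "VPs of Customer Success",
    "use_case": "automating onboarding checklists",
    "brand": "YourBrand",
    "competitor": "CompetitorX",
    "company_type": "Series B SaaS company",
}

# One template list per query type described above.
TEMPLATES = {
    "category": [
        "What are the best tools for {category}?",
        "What software do {icp} use for {use_case}?",
    ],
    "comparison": [
        "Compare {brand} vs {competitor}",
        "What's the difference between {brand} and {competitor}?",
    ],
    "use_case": [
        "What tool should I use for {use_case}?",
        "What's the best way to handle {use_case}?",
    ],
    "icp": [
        "What tools does a {company_type} use for {category}?",
    ],
}

def build_query_set(subs: dict) -> list[tuple[str, str]]:
    """Expand every template with the substitutions; returns (query_type, query) pairs."""
    return [
        (qtype, tpl.format(**subs))
        for qtype, templates in TEMPLATES.items()
        for tpl in templates
    ]

for qtype, query in build_query_set(SUBS):
    print(f"[{qtype}] {query}")
```

Keep the template lists short at first; a dozen well-chosen queries beats fifty variations of the same question.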
Step 3: Run the Queries
Run each query in both ChatGPT and Claude, using each assistant's current default model. Use fresh sessions: do not run multiple queries in the same conversation, because responses are conditioned on whatever context has accumulated in the session.
For each response, record the following (a code version of this record follows the list):
- Whether your brand is mentioned (yes/no)
- The position of your mention if present (first, second, third, or later)
- The exact language used to describe your brand
- Which competitors are mentioned
- The overall framing of the category (what does the model say the decision hinges on?)
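A spreadsheet works fine for the manual pass. If you would rather capture the same fields in code, here is a minimal sketch as a Python dataclass; the field names are my own, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    model: str                   # "chatgpt" or "claude"
    query_type: str              # category / comparison / use_case / icp
    query: str
    response: str                # full response text, kept for later review
    brand_mentioned: bool
    position: int | None = None  # 1 = mentioned first; None = not mentioned
    framing: str = ""            # exact language used to describe your brand
    competitors: str = ""        # comma-separated competitors mentioned
    category_framing: str = ""   # what the model says the decision hinges on
```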
Do this manually for a sample first — say, 10 queries across both models. It gives you a qualitative feel for how AI models perceive your brand before you look at aggregate numbers.
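Once you move beyond the manual sample, the same pass can be scripted. Below is a sketch using the official `openai` and `anthropic` Python SDKs, building on the `AuditRecord` and `build_query_set` sketches above. Two caveats: the model IDs are assumptions that go stale, and API responses are not identical to the consumer ChatGPT and Claude apps (the apps layer on their own system prompts and, in some modes, web browsing), so treat scripted results as an approximation of what buyers see:

```python
from openai import OpenAI   # pip install openai
import anthropic            # pip install anthropic

openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

# Assumed model IDs -- check each provider's docs for the current names.
OPENAI_MODEL = "gpt-4o"
ANTHROPIC_MODEL = "claude-sonnet-4-20250514"

def ask_chatgpt(query: str) -> str:
    # Each create() call is stateless: a fresh session with no prior context.
    resp = openai_client.chat.completions.create(
        model=OPENAI_MODEL,
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

def ask_claude(query: str) -> str:
    resp = anthropic_client.messages.create(
        model=ANTHROPIC_MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": query}],
    )
    return resp.content[0].text

def run_audit(queries: list[tuple[str, str]], brand: str) -> list[AuditRecord]:
    """Run every query against both assistants and pre-fill the yes/no column."""
    records = []
    for qtype, query in queries:
        for model_name, ask in [("chatgpt", ask_chatgpt), ("claude", ask_claude)]:
            answer = ask(query)
            records.append(AuditRecord(
                model=model_name,
                query_type=qtype,
                query=query,
                response=answer,
                # Naive substring check; it misses misspellings and variants.
                brand_mentioned=brand.lower() in answer.lower(),
            ))
    return records

records = run_audit(build_query_set(SUBS), brand=SUBS["brand"])
```

The substring check only settles the yes/no column. Position, framing, and competitor fields still need a human read of each stored response.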
Step 4: Analyze What You Find
Once you have run the queries, there are six patterns to look for (a scoring sketch follows the list):
Pattern 1: Complete invisibility
Your brand is not mentioned in any category queries. This is a citation footprint problem. The AI model either does not have sufficient information about your brand from its training data, or the information it has does not connect you to the category queries you're running. The fix is coverage — getting your brand mentioned in authoritative sources that discuss your category.
Pattern 2: Mention only on direct queries
Your brand appears when the query includes your name ("tell me about [your brand]") but not in category or use-case queries. This means the model knows you exist but has not connected you to the broader category in a way that surfaces you in general recommendations. You need more category-level coverage that positions you within the landscape.
Pattern 3: Presence but wrong framing
Your brand is mentioned but described in ways that do not match your current positioning. Common examples: being described as a startup tool when you serve enterprise, being framed as an analytics tool when you have repositioned as a workflow tool, being described with feature-level language when you want to be known for outcomes. This is a narrative consistency problem — the sources the AI is drawing from do not reflect your current positioning.
Pattern 4: Competitor asymmetry
Competitors are consistently mentioned first, or mentioned in more query types than you are. This tells you where to focus coverage-building efforts — specifically, which query types and which framing dimensions your competitors have claimed that you have not.
Pattern 5: Model asymmetry
You appear consistently in ChatGPT but not Claude, or vice versa. This is common — the models have different training data and different tendencies. It means your citation footprint is uneven, and you may be missing buyers who prefer one model over the other.
Pattern 6: Strong visibility
Your brand appears in the majority of relevant category queries, across both models, with accurate and favorable framing. This is the target state — and if you are here, the priority is maintaining it as models are updated and competitors work to improve their own visibility.
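With records in hand, most of this pattern-spotting reduces to grouping and counting. Here is a sketch that surfaces Patterns 1, 2, 4, and 5 from the `AuditRecord` list built in Step 3; the detection logic is deliberately simplistic and meant as a starting point, not a finished scoring system:

```python
from collections import defaultdict

def mention_rates(records) -> dict[tuple[str, str], float]:
    """Share of responses mentioning your brand, per (model, query_type)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:  # the AuditRecord list from Step 3
        key = (r.model, r.query_type)
        totals[key] += 1
        hits[key] += r.brand_mentioned
    return {key: hits[key] / totals[key] for key in totals}

rates = mention_rates(records)

# Pattern 1: rates near zero across every query type = complete invisibility.
if all(rate == 0 for rate in rates.values()):
    print("Pattern 1: complete invisibility")

# Pattern 2: mentioned only in comparison queries that already contain your name.
# Pattern 5: compare the same query type across the two models.
by_type = defaultdict(dict)
for (model, qtype), rate in rates.items():
    by_type[qtype][model] = rate
for qtype, per_model in sorted(by_type.items()):
    print(qtype, per_model)  # a large gap between models signals model asymmetry

# Pattern 4: competitor asymmetry (assumes you filled `competitors` by hand).
competitor_hits = defaultdict(int)
for r in records:
    for name in r.competitors.split(","):
        if name.strip():
            competitor_hits[name.strip()] += 1
print(dict(competitor_hits))
```

Pattern 3, wrong framing, resists automation: it requires reading the `framing` field against your current positioning, which is exactly why the manual sample in Step 3 matters.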
Step 5: Turn Findings Into Action
An AI visibility audit is only valuable if it drives specific actions. Based on your findings:
- **If invisible:** Build coverage. Target G2 and Capterra reviews, industry comparison articles, and high-authority publications in your category. The goal is to create a breadcrumb trail of authoritative mentions that AI models can draw from.
- **If wrongly framed:** Audit your external presence and update the sources the AI is likely drawing from. Update your G2 profile, pitch corrective coverage to relevant publications, and create content that directly addresses the framing gap.
- **If competitor-lagged:** Analyze where competitors appear that you don't, and what those query types have in common. Then target coverage in those specific contexts.
- **If model-asymmetric:** Investigate where the gap lies. Claude tends to rely more heavily on web retrieval; ChatGPT may draw more from training data. Different coverage sources may be needed to address each.
Automate the Ongoing Tracking
Running this audit manually once is valuable. Running it only once is not a strategy.
AI model behavior changes over time as models are updated and as new content about your category enters the training and retrieval pipeline. A competitor's sustained content push can shift how you are described in AI responses within months. A press hit in a high-authority publication can improve your visibility materially within weeks.
Tracking this manually at scale — running dozens of queries across two models, recording responses, identifying patterns — is not sustainable. OUTRANKgeo was built to automate exactly this. Add your brand and your target queries, and get structured AI visibility data across ChatGPT and Claude on a recurring schedule. When your visibility changes, you know. Try it free at outrankgeo.com.
The buyers who would be your best customers are using AI assistants to find solutions like yours right now. The question is whether your brand is in those answers — and if not, what specifically you need to do to change that.