If you have spent any amount of time in Google Search Console, you have probably seen it: a growing list of URLs sitting under the "Crawled – currently not indexed" status. It is one of the most common and most misunderstood statuses in all of GSC.
The instinct is to treat it as a bug. Something must be broken. The page is right there, the content is published, Googlebot clearly found it. So why is it not in the index?
The short answer: Google looked at your page and decided not to include it. That is not a technical error. It is an editorial decision made by an algorithm, and it tells you more about how Google sees your site than most people realize.
What This Post Covers
- What "Crawled – currently not indexed" actually means and how it differs from "Discovered – not indexed"
- The most common reasons Google decides not to index a crawled page
- How Core Web Vitals play a less obvious but real role in the equation
- Which pages to worry about and which to leave alone
- Step-by-step fixes for pages that should be in the index
What "Crawled – Currently Not Indexed" Actually Means
To understand this status, you need to understand the difference between crawling and indexing, because they are two separate steps in how Google processes the web.
Crawling is discovery. Googlebot sends a request to your server, downloads the page, and reads the content. If your page shows up under "Crawled – currently not indexed," this step happened successfully. Google found your page and looked at it.
Indexing is the decision to store that page and make it eligible to appear in search results. This is where Google evaluates what it found during the crawl and decides whether the page adds enough value to warrant a spot in the index.
Crawling vs. Indexing
- **Crawling ✓:** Google visited the URL, downloaded the page, and read its content. The technical part worked.
- **Indexing ✗:** Google evaluated what it found and decided not to store the page in its search index. This is a quality or relevance decision.
When a page gets the "Crawled – currently not indexed" status, Google is saying: I saw this page, I read it, and I chose not to include it. The page is not blocked. There is no noindex tag in the way. Google simply decided the page was not worth indexing at this time.
That distinction matters because it changes what you need to fix. This is not a robots.txt problem. It is not a sitemap problem. It is not a redirect chain problem. It is a quality, relevance, or structural problem — and in some cases all three.
Why Google Decides Not to Index a Page It Already Crawled
Google does not publish a checklist for this. But based on years of industry testing, documentation from Google, and observable patterns, the most common causes fall into a few categories.
1. The Content Is Thin or Duplicative
This is the most frequent cause. If a page does not offer enough substance to differentiate it from what is already in the index, Google has little reason to include it. This is especially common with archive pages, tag pages, paginated listings, and landing pages that are slight variations of each other.
2. The Page Lacks Internal Link Support
If a page is buried deep in your site architecture with few or no internal links pointing to it, Google interprets that as a signal that even you do not consider it important. Pages need to be connected to the rest of the site in a meaningful way. Orphan pages and poorly linked deep pages regularly end up in the "crawled, not indexed" bucket.
3. The Site Has Overall Quality Concerns
Google does not evaluate pages in isolation. If a site has a large proportion of low-quality or thin pages, that reputation can affect how Google treats even the decent pages on the same domain. A site that publishes a hundred pages of mediocre content and ten pages of strong content may find those ten pages harder to get indexed than they would be on a leaner, higher-quality site.
4. There Is No Clear Search Demand
Google is increasingly selective about what it indexes. If a page targets a query that almost nobody searches for, or if the topic is already well-served by existing results, Google may skip it. This does not mean the content is bad. It means Google does not see a reason to index it given the current state of the search results.
5. The Content Lacks Originality or Information Gain
This has become more relevant over the past couple of years. Google's systems are now evaluating not just whether content is accurate, but whether it offers something that existing indexed pages do not. Rewriting what already exists at the same depth and from the same angle is no longer sufficient. The bar has moved.
The Difference Between "Crawled, Not Indexed" and "Discovered, Not Indexed"
These two statuses show up near each other in the Page Indexing report and people frequently confuse them. They are not the same.
| Status | What Happened | Root Cause |
|---|---|---|
| Crawled – currently not indexed | Google visited, read, and rejected the page | Content quality or site structure issue |
| Discovered – currently not indexed | Google knows the URL exists but has not crawled it | Crawl budget or priority issue |
The distinction matters because the fixes are different: a discovered-but-not-crawled page calls for crawl budget and prioritization work, while a crawled-but-not-indexed page calls for content and site structure work. Treating one like the other wastes time.
How Core Web Vitals Factor Into the Indexing Equation
This is where the conversation gets nuanced, and where a lot of bad advice circulates.
Google's John Mueller has said directly that Core Web Vitals scores are ranking factors, not quality factors, and that improving your CWV scores will not directly improve your indexing outcomes. That statement is accurate and worth taking seriously.
But here is where it gets more complicated.
Server Speed Affects Crawl Efficiency
If your server responds slowly, Googlebot cannot crawl as many pages in the same amount of time. Google allocates a crawl budget to every site, and slow server response times mean fewer pages get crawled per session. For small sites this rarely matters. For sites with thousands or tens of thousands of pages, slow response times can mean the difference between Google reaching your important pages or running out of budget before it gets to them.
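If you want a quick read on how your server looks to a crawler, measuring time to first byte across a sample of URLs is usually enough to spot a problem. Here is a minimal Python sketch using the `requests` library; the URL list is a placeholder, and the numbers it prints approximate time to first byte rather than a full Lighthouse measurement.

```python
# Rough TTFB check across a sample of URLs; slow or inconsistent responses
# are a crawl-efficiency red flag on large sites.
# Requires: pip install requests
import statistics
import requests

URLS = [
    "https://example.com/",
    "https://example.com/blog/some-post",
    "https://example.com/services",
]

timings = []
for url in URLS:
    r = requests.get(url, timeout=15, stream=True)  # stream=True stops after the headers arrive
    ttfb = r.elapsed.total_seconds()                 # time from request sent to headers parsed
    timings.append(ttfb)
    print(f"{ttfb * 1000:6.0f} ms  {r.status_code}  {url}")
    r.close()

print(f"median: {statistics.median(timings) * 1000:.0f} ms")
```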
Page Experience Is Part of the Overall Quality Equation
While CWV scores alone will not determine indexing, Google's systems increasingly consider the full user experience when evaluating a page's value. A page with great content but terrible load performance, major layout shifts, and unresponsive interactions is objectively a worse experience for users. After Google's June 2025 core update, the SEO community observed that pages with persistent technical deficiencies — including poor Core Web Vitals — were more likely to be deindexed or left unindexed, particularly in competitive niches.
The Three Current Core Web Vitals
| Metric | What It Measures | Good Threshold |
|---|---|---|
| LCP (Largest Contentful Paint) | How quickly the main content loads | < 2.5s |
| INP (Interaction to Next Paint) | Responsiveness after user interaction | < 200ms |
| CLS (Cumulative Layout Shift) | Visual stability of the page | < 0.1 |
The Practical Takeaway
Fixing your Core Web Vitals will not magically get pages indexed. But consistently poor page performance is one more reason for Google to deprioritize your content, and on large sites, slow performance directly limits how much of your site Google can even evaluate.
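If you want to monitor the field data behind those thresholds without checking one URL at a time in the UI, the PageSpeed Insights API returns Chrome UX Report metrics alongside the lab result. A minimal sketch, assuming you have an API key; the metric key names printed are whatever the API returns for the URL, and pages without enough real-user traffic will have no field data at all.

```python
# Pull CrUX field data for a URL via the PageSpeed Insights API and print
# each metric's 75th-percentile value and its FAST/AVERAGE/SLOW category.
# Requires: pip install requests
import requests

API_KEY = "your-api-key"      # placeholder
PAGE = "https://example.com/"

resp = requests.get(
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed",
    params={"url": PAGE, "key": API_KEY, "strategy": "mobile"},
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

field = data.get("loadingExperience", {}).get("metrics", {})
if not field:
    print("No field data for this URL (not enough real-user traffic in CrUX).")
for name, metric in field.items():
    print(f"{name}: p75={metric.get('percentile')}  category={metric.get('category')}")
```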
Pages You Should and Should Not Worry About
Not every URL in the "Crawled – currently not indexed" list is a problem. Some of them are exactly where they should be.
Don't Worry About These ✓
- Archive pages
- Tag pages
- Author pages with no unique content
- Paginated URLs
- Internal search result pages
- Image URLs (especially WebP files)
- Utility pages not meant to rank
Worry About These ✗
- Blog posts with original content
- Service pages
- Product pages
- Landing pages you are trying to rank
- Any page targeting a keyword with real search demand
The first step is always to check the URL Inspection Tool for each affected page. The Page Indexing report refreshes more slowly than URL Inspection, so there can be a lag. Google has confirmed this. If URL Inspection shows the page as indexed, the report will catch up eventually.
If URL Inspection also shows the page as not indexed, it is time to investigate.
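If many URLs are affected, checking them one at a time in the UI gets tedious. The Search Console API exposes the same URL Inspection data, so you can batch-check the pages that matter. A rough sketch, assuming you already have an OAuth 2.0 access token with Search Console access; the property URL, page list, and token are placeholders, and the response fields shown are the ones relevant to indexing status.

```python
# Batch-check indexing status via the Search Console URL Inspection API.
# Assumes an existing OAuth 2.0 access token with the
# https://www.googleapis.com/auth/webmasters scope; obtaining that token
# (service account or installed-app flow) is out of scope for this sketch.
# Requires: pip install requests
import requests

ACCESS_TOKEN = "ya29.placeholder"      # placeholder token
SITE_URL = "sc-domain:example.com"     # or "https://example.com/" for a URL-prefix property
PAGES = [
    "https://example.com/blog/important-post",
    "https://example.com/services/widget-repair",
]

ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

for page in PAGES:
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"inspectionUrl": page, "siteUrl": SITE_URL},
        timeout=30,
    )
    resp.raise_for_status()
    status = resp.json().get("inspectionResult", {}).get("indexStatusResult", {})
    print(f"{page}\n  verdict:  {status.get('verdict')}\n  coverage: {status.get('coverageState')}")
```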
How to Fix Pages That Should Be Indexed
There is no single fix. The right approach depends on why Google is not indexing the page. But the following steps cover the most common root causes.
1. Improve the Content
This is the most common fix because thin or undifferentiated content is the most common cause. Add depth, add original data or perspective, cover subtopics that competing pages miss, and make the page genuinely useful to someone who lands on it. Google is evaluating information gain — give it something to find.
2. Strengthen Internal Linking
Every important page should be reachable through multiple internal links from other relevant pages on your site. If a page is only linked from one place — or worse, only from the sitemap — Google has very little reason to treat it as important. Link to it from related blog posts, from relevant service pages, from your navigation where appropriate. Visualizing your site's link structure can reveal exactly which pages are orphaned or poorly connected.
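A rough way to surface orphan candidates is to compare the URLs in your XML sitemap against the internal links a shallow crawl from the homepage can actually find. A sketch under those assumptions, using `requests` and `BeautifulSoup`; it is not a full crawler, it assumes a single flat sitemap rather than a sitemap index, and URL normalization differences can produce false positives worth eyeballing.

```python
# Find sitemap URLs with no discoverable internal links (orphan candidates)
# by doing a shallow same-domain crawl starting from the homepage.
# Requires: pip install requests beautifulsoup4
from urllib.parse import urljoin, urlparse
from xml.etree import ElementTree

import requests
from bs4 import BeautifulSoup

SITE = "https://example.com"
SITEMAP = f"{SITE}/sitemap.xml"
MAX_PAGES = 500                 # keep the crawl bounded

def sitemap_urls(sitemap_url):
    root = ElementTree.fromstring(requests.get(sitemap_url, timeout=15).content)
    return {el.text.strip() for el in root.iter() if el.tag.endswith("loc")}

def crawl(start):
    seen, queue, linked = set(), [start], set()
    while queue and len(seen) < MAX_PAGES:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        except requests.RequestException:
            continue
        for a in soup.find_all("a", href=True):
            target = urljoin(url, a["href"]).split("#")[0]
            if urlparse(target).netloc == urlparse(SITE).netloc:
                linked.add(target)
                queue.append(target)
    return linked

linked_urls = crawl(SITE + "/")
for url in sorted(sitemap_urls(SITEMAP) - linked_urls):
    print("possible orphan:", url)
```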
3. Consolidate Duplicate or Near-Duplicate Content
If you have multiple pages that cover substantially the same topic, Google may index one and skip the others. Check for duplicate meta tags as a starting point — they often reveal pages competing for the same queries. Audit for cannibalization. Decide which page should be the canonical version and either redirect or consolidate.
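Duplicate titles and meta descriptions are easy to check programmatically. A small sketch along the same lines, grouping a placeholder list of URLs by their `<title>` and meta description to surface pages that may be competing for the same queries.

```python
# Group URLs by <title> and meta description to surface duplicate or
# near-identical pages worth consolidating.
# Requires: pip install requests beautifulsoup4
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

URLS = [
    "https://example.com/widgets",
    "https://example.com/widgets-for-sale",
    "https://example.com/blue-widgets",
]

by_title = defaultdict(list)
by_description = defaultdict(list)

for url in URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    desc_tag = soup.find("meta", attrs={"name": "description"})
    description = (desc_tag.get("content") or "").strip() if desc_tag else ""
    by_title[title].append(url)
    by_description[description].append(url)

for label, groups in (("title", by_title), ("meta description", by_description)):
    for value, urls in groups.items():
        if value and len(urls) > 1:
            print(f"duplicate {label} ({value!r}):")
            for u in urls:
                print("  ", u)
```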
4. Improve Page Performance
Run your pages through PageSpeed Insights and address the issues. Compress images, reduce render-blocking resources, minimize JavaScript, and make sure your server responds quickly. You are not doing this to check a CWV box — you are doing it because faster pages are easier for Google to crawl and better for users to experience.
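The same PageSpeed Insights API used in the earlier sketch also returns the full Lighthouse result, so you can pull the failing audits for a page instead of reading the report by hand. Another sketch, assuming an API key; the 0.9 score cutoff is an arbitrary illustration, not a Google threshold.

```python
# Pull the Lighthouse audits from a PageSpeed Insights run and list the
# ones that did not pass, as a starting point for performance fixes.
# Requires: pip install requests
import requests

API_KEY = "your-api-key"   # placeholder
PAGE = "https://example.com/slow-page"

data = requests.get(
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed",
    params={"url": PAGE, "key": API_KEY, "strategy": "mobile"},
    timeout=60,
).json()

audits = data.get("lighthouseResult", {}).get("audits", {})
failing = [
    a for a in audits.values()
    if isinstance(a.get("score"), (int, float)) and a["score"] < 0.9
]
for audit in sorted(failing, key=lambda a: a["score"]):
    print(f"{audit['score']:.2f}  {audit.get('title')}  {audit.get('displayValue', '')}")
```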
5. Build External Signals
Pages with no backlinks can appear insignificant to Google. You do not need a massive link building campaign, but a few relevant, quality backlinks from real sites signal that the page has value beyond your own domain.
6. Request Indexing Selectively
After making improvements, use the URL Inspection tool to request indexing. This is not a fix by itself, but it tells Google to come back and reevaluate. Do not spam the request. One submission after a meaningful improvement is enough.
What Not to Do
A few approaches waste time or make things worse.
Common Mistakes to Avoid
- Do not submit the same URL repeatedly. Requesting indexing over and over without changing anything does not help. Google already saw the page. Sending it back without improvements will not change the outcome.
- Do not assume it is a technical issue. The most common mistake is chasing technical fixes when the real problem is content quality. If Googlebot crawled the page successfully, the technical basics are working. The issue is what it found when it got there.
- Do not panic about the number. Some percentage of URLs in the "crawled, not indexed" bucket is normal for every site. The goal is not to get that number to zero. The goal is to make sure the pages that matter to your business are not stuck there.
Think of It as Feedback, Not an Error
The "Crawled – currently not indexed" status is not a bug in Google Search Console. It is Google telling you something about how it perceives your content. The pages in that list were evaluated and found insufficient for the index. That is useful information.
For agency teams managing multiple client sites, this status is actually one of the best diagnostic signals available. A spike in crawled-but-not-indexed pages after a content push tells you the quality bar was not met. A steady accumulation over time suggests structural issues or a growing proportion of low-value pages on the site.
The Bottom Line
The fix is almost always some combination of better content, better site architecture, and better page performance. None of that is quick, but all of it compounds. Sites that consistently publish original, well-structured content on a technically sound foundation do not tend to have persistent indexing problems.
The pages that matter should be in the index. If they are not, Google is telling you why. The status is the starting point. What you do with it is the work.
Barracuda SEO connects directly to Google Search Console and surfaces content gaps, declining pages, and indexing issues — with AI-powered diagnostics that tell you what to fix and why.
Try Barracuda SEO Free