Why Is My Page Not Indexed by Google?

You published your page. You waited. You searched Google for it. Nothing.
This is one of the most frustrating experiences in SEO, and it happens more often than most people realise. Google does not automatically index every page it finds. It makes choices, and sometimes your page does not make the cut.
The good news is that most indexing problems have a clear cause and a straightforward fix. This guide walks through the nine most common reasons Google is not indexing your page and exactly what to do about each one.
Before you start troubleshooting, make sure you actually know the page is not indexed. Use Index Status Checker to confirm the status in seconds. Do not rely on the site: operator, as it is not reliable enough for this kind of diagnosis.
Reason 1: The Page Has a Noindex Tag
This is the most common cause, and the most embarrassing one to discover. A noindex tag tells Google explicitly not to index the page, and Google follows that instruction.
It usually gets added by accident. A developer sets a page to noindex during staging and forgets to remove it before launch. A CMS plugin like Yoast SEO gets misconfigured. A template setting applies noindex to a whole category of pages without anyone noticing.
How to check: Open the page in your browser, right-click, and select View Page Source. Search for noindex in the code. If you see <meta name="robots" content="noindex">, that is your problem. Note that noindex can also be sent as an X-Robots-Tag HTTP header, which will not appear in the page source, so check your server response headers too if the source looks clean.
How to fix: Remove the noindex tag or change the setting in your CMS. After fixing it, go to Google Search Console and use the URL Inspection tool to request indexing.
Read more about what noindex tags do and when to use them intentionally: What Is a Noindex Tag
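If you are checking many pages, inspecting each one by hand gets tedious. Here is a minimal Python sketch of the same check using only the standard library HTML parser. The sample markup is a placeholder; in practice you would feed in the fetched page source.

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of every <meta name="robots"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

def has_noindex(html: str) -> bool:
    """True if any robots meta directive on the page contains noindex."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(has_noindex(page))  # True
```

This only covers the meta tag; a noindex delivered via the X-Robots-Tag HTTP header would need a separate check of the response headers.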
Reason 2: The Page Is Blocked by robots.txt
Your robots.txt file tells search engine crawlers which parts of your site they are allowed to visit. If the page or its directory is listed under a Disallow rule, Google will not crawl it, and a page Google cannot crawl cannot be indexed.
How to check: Visit yourdomain.com/robots.txt and look for any Disallow rules that could apply to your page. Also check the Pages report in Google Search Console, under Indexing, for URLs flagged as Blocked by robots.txt.
How to fix: Remove the Disallow rule for the affected URL or directory. Be careful not to accidentally open up sections of your site that should remain blocked. After making changes, submit your updated sitemap in Google Search Console.
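You can also test a specific URL against your robots.txt rules programmatically with Python's built-in robots.txt parser. The rules and URLs below are placeholders; substitute your own file's contents.

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt contents; in practice, fetch yourdomain.com/robots.txt.
robots_txt = """\
User-agent: *
Disallow: /staging/
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A URL under a Disallow rule cannot be crawled, so it cannot be indexed.
print(rp.can_fetch("Googlebot", "https://example.com/staging/new-page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/new-page"))     # True
```

If `can_fetch` returns False for your page, a Disallow rule is blocking it.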
Reason 3: The Page Was Published Too Recently
Google does not index pages instantly. For new websites or pages on sites Google crawls infrequently, indexing can take anywhere from a few days to several weeks.
If your page was published within the last week, this may simply be a waiting game.
How to check: Use the URL Inspection tool in Google Search Console. If it shows “URL is not on Google” but no technical issues, the page is likely just waiting to be crawled.
How to fix: Request indexing directly through GSC’s URL Inspection tool. This does not guarantee instant indexing but it signals to Google that the page exists and is ready to be crawled.
Also make sure the page is included in your XML sitemap and that internal links point to it from other indexed pages on your site. Internal links are one of the most reliable ways to help Google discover new content faster.
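For reference, a sitemap entry for a new page follows the sitemaps.org protocol and looks like this (the domain and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/new-page</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

Make sure the URL in the sitemap exactly matches the live URL, including protocol and trailing slash.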
Reason 4: The Content Is Thin or Low Quality
Google is selective about what it indexes. Pages with very little content, content that closely duplicates other pages, or content that provides no real value to users are frequently skipped.
This is not just about word count. A 2,000-word page full of filler content can be considered thin. A focused 400-word page that answers a specific question clearly can be indexed without issue.
How to check: Read your page honestly. Does it answer the user’s question better than what is already ranking? Does it add something original? Is the content written for people or for search engines?
How to fix: Improve the content. Add original insight, real examples, or specific information that similar pages do not cover. Make sure the page serves a clear purpose and answers a specific question well.
Reason 5: The Page Has Duplicate Content Issues
If Google finds two or more pages with very similar or identical content, it will often index only one of them and ignore the others. This applies to pages on your own site competing with each other and to content that closely mirrors pages elsewhere on the web.
How to check: Use Google Search Console to look for pages flagged as duplicate without user-selected canonical. Also search Google for a unique sentence from your page in quotation marks. If another page with the same text appears, you have a duplication issue.
How to fix: Add a canonical tag to the preferred version of the page. Consolidate very similar pages into one stronger page. If the page is legitimately unique, review the content and make it more distinct.
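As a quick sanity check, you can extract the canonical URL a page declares with a small Python sketch using the standard library parser. The sample markup is a placeholder.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical" \
                and self.canonical is None:
            self.canonical = a.get("href")

def find_canonical(html: str):
    """Return the declared canonical URL, or None if the page has none."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

head = '<head><link rel="canonical" href="https://example.com/preferred-page"></head>'
print(find_canonical(head))  # https://example.com/preferred-page
```

If the canonical points at a different URL than the page itself, Google is being told to index that other URL instead.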
Reason 6: No Internal Links Pointing to the Page
Google discovers pages primarily by following links. If no other indexed page on your site links to the new page, Google may be slow to discover it, and even a page listed in your sitemap can be deprioritised when no internal links signal that it matters.
This is called an orphan page and it is a surprisingly common issue, especially on large sites where new pages get published without being added to the navigation or linked from related content.
How to check: Search your site for mentions of the page topic and see if any existing pages link to the new URL. Use a crawl tool like Screaming Frog to identify orphan pages across your whole site.
How to fix: Add internal links to the page from at least two or three relevant indexed pages on your site. The closer those pages are to your homepage in terms of link depth, the faster Google is likely to find the new page.
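If you already have crawl data, orphan detection is a simple set comparison: any sitemap URL that no crawled page links to is an orphan. Here is a minimal Python sketch; the site structure shown is hypothetical.

```python
def find_orphans(sitemap_urls, link_graph):
    """Return sitemap URLs that no crawled page links to.

    link_graph maps each crawled page URL to the set of internal
    URLs it links out to (the output of any site crawler).
    """
    linked = set()
    for targets in link_graph.values():
        linked.update(targets)
    return sorted(u for u in sitemap_urls if u not in linked)

# Hypothetical site: /new-guide is in the sitemap but nothing links to it.
sitemap = ["/", "/blog", "/blog/post-a", "/new-guide"]
links = {
    "/": {"/blog"},
    "/blog": {"/blog/post-a", "/"},
    "/blog/post-a": {"/blog"},
}
print(find_orphans(sitemap, links))  # ['/new-guide']
```

Tools like Screaming Frog do this comparison for you, but the logic is the same.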
Reason 7: The Page Has Crawl Errors
Server errors, redirect loops, broken links, and slow page load times can all prevent Google from successfully crawling a page. If Googlebot tries to visit the page and gets an error, it moves on and the page does not get indexed.
How to check: Open Google Search Console and go to Pages under the Indexing section. Look for pages with crawl errors including 404 errors, 5xx server errors, and redirect issues. The URL Inspection tool will also show you the last crawl status for a specific page.
How to fix: Fix the underlying technical issue. Resolve redirect chains, fix broken internal links, correct server configuration errors, and make sure the page loads reliably. After fixing, request re-crawling through the URL Inspection tool.
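Redirect chains and loops are easy to spot once you have each URL's redirect target (for example, from the Location headers in a crawl). This Python sketch traces a chain and flags loops; the URL map is a placeholder.

```python
def follow_redirects(start, redirects, max_hops=10):
    """Trace a chain of redirects and classify the result.

    redirects maps a URL to its redirect target, with None
    marking a URL that serves a final page.
    """
    chain = [start]
    seen = {start}
    url = start
    while redirects.get(url) is not None:
        url = redirects[url]
        if url in seen:
            return chain + [url], "loop"
        chain.append(url)
        seen.add(url)
        if len(chain) > max_hops:
            return chain, "too many hops"
    return chain, "ok"

redirects = {"/old": "/interim", "/interim": "/new", "/new": None}
print(follow_redirects("/old", redirects))  # (['/old', '/interim', '/new'], 'ok')

loop = {"/a": "/b", "/b": "/a"}
print(follow_redirects("/a", loop)[1])  # loop
```

Any chain longer than one or two hops is worth collapsing into a single redirect to the final URL.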
Reason 8: The Site Has Crawl Budget Issues
Crawl budget refers to the number of pages Google is willing to crawl on your site within a given time frame. On very large sites or sites with a lot of low-quality pages, Google may not get around to crawling every page.
This is rarely a problem for small or medium-sized sites. But if your site has thousands of pages, significant amounts of duplicate or thin content, or a lot of URL parameters generating unnecessary page variations, crawl budget can become a real issue.
How to check: In Google Search Console, go to Settings and then Crawl Stats. Look at the total pages crawled per month and compare it to the size of your site. If Google is crawling far fewer pages than your site contains, crawl budget may be a factor.
How to fix: Block low-value pages from crawling using robots.txt. Use canonical tags to consolidate duplicate URLs. Clean up URL parameter issues. Improve site speed, as faster sites tend to get crawled more efficiently.
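As an illustration, robots.txt rules for cutting crawl waste often target parameter-generated URL variations. The patterns below are examples only; adapt them to the parameters your site actually generates, and be careful not to block pages you want indexed.

```
User-agent: *
# Block faceted and session-parameter URL variations
Disallow: /*?sort=
Disallow: /*?sessionid=
# Block internal search result pages
Disallow: /internal-search/
```

Googlebot supports the `*` wildcard in robots.txt paths, but not all crawlers do, so test changes in Search Console's robots.txt report before relying on them.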
Reason 9: The Page Was Manually Removed or Penalised
In some cases, Google removes pages from its index deliberately. This can happen if Google detects content that violates its guidelines, if someone submitted a URL removal request through GSC, or if the site received a manual action penalty.
How to check: In Google Search Console, check the Manual Actions section under Security and Manual Actions. Also check if anyone on your team submitted a URL removal request, as these can temporarily remove indexed pages.
How to fix: If there is a manual action, read the details carefully and follow Google’s instructions to resolve the issue. Then submit a reconsideration request. If a URL removal request was submitted by mistake, you can cancel it in the Removals section of GSC.
Quick Diagnosis Checklist
If your page is not indexed, work through this list in order.
| Check | Where to Look |
| --- | --- |
| Noindex tag present | Page source code or CMS SEO settings |
| Blocked by robots.txt | yourdomain.com/robots.txt |
| Page too new | GSC URL Inspection tool |
| Thin or duplicate content | Manual content review |
| No internal links | Site crawl or manual check |
| Crawl errors | GSC Pages report |
| Crawl budget issues | GSC Crawl Stats |
| Manual action | GSC Manual Actions report |
Most indexing problems will be identified within the first three or four checks.
After You Fix the Issue
Once you have identified and resolved the cause, go to Google Search Console and use the URL Inspection tool to request indexing for the affected page.
Then use Index Status Checker to monitor the page over the following days. Check back after 48 to 72 hours to confirm Google has picked up the fix.
If the page is still not indexed after a week, revisit the checklist. Some pages have more than one issue and fixing the most obvious one may reveal a secondary problem underneath.
For backlinks that are not indexed, the process is slightly different. Read: How to Get Backlinks Indexed by Google
Frequently Asked Questions
Q: Why won’t Google index my page?
The most common reasons are a noindex tag on the page, a robots.txt block, thin or duplicate content, no internal links pointing to the page, or the page being too new. Use Google Search Console’s URL Inspection tool alongside Index Status Checker to diagnose the specific cause.
Q: How long does it take Google to index a new page?
It varies. Pages on well-established sites with strong internal linking can be indexed within hours. Pages on newer or less-crawled sites can take several weeks. Read our detailed guide: How Long Does Google Take to Index a Page
Q: What is the difference between crawled and indexed?
Crawled means Google visited the page. Indexed means Google stored it in its database and made it eligible to appear in search results. A page can be crawled but still not indexed if Google decides the content does not meet its quality standards.
Q: Can I force Google to index my page?
You cannot force it, but you can request it. Use the URL Inspection tool in Google Search Console and click Request Indexing. This puts the page in a priority crawl queue. It does not guarantee indexing but it speeds up the process for pages that are genuinely ready.
Q: What does “discovered but not indexed” mean in GSC?
It means Google found the URL, usually through your sitemap or an internal link, but has not crawled it yet. This typically happens when crawl budget is a constraint or the page is very new. Improving internal links to the page and requesting indexing through GSC can help.
Q: What does “crawled but not indexed” mean?
It means Google visited the page but decided not to add it to the index. This is usually a content quality signal. Google crawled the page, evaluated it, and chose not to index it. Improving the content is the primary fix. Read our dedicated guide: How to Fix Crawled But Not Indexed Pages
Start by confirming whether your page is actually indexed.
Paste the URL into Index Status Checker and get a clear answer in seconds. Then work through the checklist above to find and fix the cause.
Need to check multiple pages at once? Use the Bulk Google Index Checker.