HTTP 429 Too Many Requests: Causes & Complete Fix Guide (2026)

SpyderProxy Team | Published 2026-04-21

HTTP 429 Too Many Requests is the server's way of saying "slow down" — you've exceeded its rate limit for your IP, API key, session, or user account. It's not a permissions error (that's 403) and not a server failure (5xx); the server is fine and your request was well-formed. You just sent too many requests too fast.

This guide is the complete fix for anyone hitting 429s in an API client, web scraper, or CI/CD pipeline. We cover: what triggers a 429, how to read the Retry-After response header, exponential backoff code for Python and Node.js, when proxy rotation is the real fix, and how 429 differs from 403, 503, and 529.

What HTTP 429 Actually Means

From RFC 6585 section 4: "The 429 status code indicates that the user has sent too many requests in a given amount of time (rate limiting)."

The key word is user. The server is telling a specific client — identified by IP address, API token, user account, cookie session, or fingerprint — that they've exceeded an allowed request rate. Other users of the same service are completely unaffected.

A well-behaved 429 response includes:

  • Status line: HTTP/1.1 429 Too Many Requests
  • Retry-After header (RFC 6585 recommends this): either an integer number of seconds to wait, or an HTTP-date like Wed, 21 Oct 2026 07:28:00 GMT.
  • Body: a human-readable explanation or JSON error object. Many APIs put the rate-limit policy here.
  • Often also: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset headers that expose the policy explicitly. Not standard but widely used (GitHub, Twitter/X, Stripe, Discord, and most modern APIs).
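The anatomy above can be turned into a small parsing helper. This is a sketch: `parse_rate_limit_info` and the sample header values are illustrative, not from any specific API.

```python
from email.utils import parsedate_to_datetime

def parse_rate_limit_info(headers):
    """Extract rate-limit hints from a 429 response's headers.

    Handles both forms of Retry-After (integer seconds or HTTP-date)
    plus the common, non-standard X-RateLimit-* counters.
    """
    info = {}
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        try:
            info["retry_after_seconds"] = int(retry_after)
        except ValueError:
            # HTTP-date form, e.g. "Wed, 21 Oct 2026 07:28:00 GMT"
            info["retry_after_date"] = parsedate_to_datetime(retry_after)
    for key in ("X-RateLimit-Limit", "X-RateLimit-Remaining", "X-RateLimit-Reset"):
        if key in headers:
            info[key] = int(headers[key])
    return info

# Example: a typical 429 header set
sample = {
    "Retry-After": "43",
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "0",
    "X-RateLimit-Reset": "1713723600",
}
print(parse_rate_limit_info(sample))
```

A helper like this is worth keeping in one place so every client in your codebase interprets the headers the same way.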

Common Causes of 429 Errors

In rough order of how often each shows up in the wild:

1. API Rate Limits

Every public API publishes rate limits. Examples:

  • GitHub REST API: 60 unauthenticated requests/hour, 5,000 authenticated/hour.
  • Stripe: 100 read operations/second in live mode.
  • Discord: per-route and global limits, published in X-RateLimit-* headers.
  • Twitter/X: different limits per endpoint and per API tier.
  • OpenAI: tokens-per-minute and requests-per-minute limits per model.

If you're hitting 429 on an API with an auth token, you're almost always exceeding the documented rate. First step: read the API docs.
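Once you know the documented rate, you can stay under it client-side instead of waiting to be told off. A minimal token-bucket throttle, as a sketch — the 5-requests-per-second budget is an arbitrary example, not any API's real limit:

```python
import time

class TokenBucket:
    """Client-side throttle: allow at most `rate` requests per second."""

    def __init__(self, rate, capacity=None):
        self.rate = rate                    # tokens refilled per second
        self.capacity = capacity or rate    # max burst size
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

bucket = TokenBucket(rate=5)  # never exceed 5 requests/second
# for url in urls:
#     bucket.acquire()
#     resp = requests.get(url)
```

Throttling before the request is cheaper than handling the 429 after it: you never burn a request against your quota just to learn you're over it.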

2. Web Scraping Without Proxy Rotation

If you're scraping and seeing 429, it's usually because the target site has identified your IP as a repeat visitor and is throttling it. This accounts for the majority of "random" 429s in scraping. See the scraping fix section below.

3. Cloudflare, Fastly, and WAF-Level Throttling

Many sites don't set their own rate limits — their CDN does. Cloudflare's Rate Limiting and Super Bot Fight Mode return 429 (or 403) when a client exceeds policy. Fastly, Akamai, AWS WAF, and others all do the same. The actual origin server never sees the offending request.

4. Login Brute-Force Protection

Trying to log in to an account too many times — even with correct credentials — can trigger 429. This is intentional (protection against credential stuffing). The fix is never "retry faster"; it's "wait and re-auth once".

5. Accidental Infinite Loops

Retry-on-failure logic without a backoff, polling loops without a delay, or a bug that re-fires the same API call on every render in a React app. The canonical self-inflicted 429.

6. Shared IP (NAT) Problems

If you're on a corporate VPN or university network, your outbound IP is shared with thousands of other users. When any of them hits a third-party rate limit, everyone on that IP sees 429s. You didn't do anything wrong — your network did.

How to Read a 429 Response (Step 1 of Every Fix)

Before you code a retry, read the response. The server is literally telling you what to do.

import requests

resp = requests.get("https://api.github.com/users/octocat")
print(resp.status_code)  # 429
print(dict(resp.headers))
print(resp.text)

# Typical response headers on a 429:
# {
#     "Retry-After": "43",
#     "X-RateLimit-Limit": "60",
#     "X-RateLimit-Remaining": "0",
#     "X-RateLimit-Reset": "1713723600",
#     "X-RateLimit-Used": "60",
#     "Content-Type": "application/json"
# }

The three pieces of information you want, in order of preference:

  1. Retry-After: if present, this is the server telling you exactly how long to wait. Just use it.
  2. X-RateLimit-Reset: a Unix timestamp when your quota resets. Compute the delta from now.
  3. Fallback exponential backoff if neither header is there.
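Option 2 in that list is a one-liner: X-RateLimit-Reset is a Unix timestamp, so the wait is just its difference from the current time. A sketch (the timestamp values are illustrative):

```python
import time

def seconds_until_reset(reset_header, now=None):
    """X-RateLimit-Reset is a Unix timestamp; return how long to wait.

    Clamped at zero so a reset time in the past means "retry now".
    """
    now = now if now is not None else time.time()
    return max(0.0, int(reset_header) - now)

# If the quota resets at t=1713723600 and it's now t=1713723557,
# wait 43 seconds.
print(seconds_until_reset("1713723600", now=1713723557))  # prints 43
```

The clamp matters: clocks drift, and sleeping for a negative duration raises an exception in most sleep implementations.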

Fix 1: Respect Retry-After (Every Time)

The correct generic retry loop for any HTTP client:

import requests, time

def request_with_retry(url, max_retries=5, **kwargs):
    for attempt in range(max_retries):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 429:
            return resp
        # Parse Retry-After (seconds OR HTTP-date)
        retry_after = resp.headers.get("Retry-After")
        if retry_after:
            try:
                wait = int(retry_after)
            except ValueError:
                # It's an HTTP-date
                from email.utils import parsedate_to_datetime
                import datetime
                wait_until = parsedate_to_datetime(retry_after)
                wait = max(0, (wait_until - datetime.datetime.now(wait_until.tzinfo)).total_seconds())
        else:
            # No header — fall back to exponential backoff
            wait = min(60, 2 ** attempt)
        print(f"429 received, waiting {wait:.1f}s (attempt {attempt + 1}/{max_retries})")
        time.sleep(wait)
    raise RuntimeError(f"Exceeded {max_retries} retries on {url}")

# Usage
resp = request_with_retry("https://api.github.com/users/octocat")

Fix 2: Exponential Backoff with Jitter

When there's no Retry-After header, back off exponentially with randomization (jitter) so retrying clients don't thunder-herd the server at the same moment.

import requests, time, random

def request_with_backoff(url, max_retries=5, base=1.0, cap=60.0, **kwargs):
    for attempt in range(max_retries):
        resp = requests.get(url, **kwargs)
        if resp.status_code not in (429, 503):
            return resp
        # Exponential backoff with full jitter
        # Formula: random between 0 and min(cap, base * 2^attempt)
        sleep = random.uniform(0, min(cap, base * (2 ** attempt)))
        print(f"{resp.status_code} received, sleeping {sleep:.2f}s")
        time.sleep(sleep)
    return resp  # Give up gracefully

"Full jitter" is the AWS-recommended backoff pattern — it's better than fixed exponential because it distributes retries more evenly across time.
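The difference is easy to see side by side: with fixed exponential backoff every client retries at exactly the same moment, while full jitter spreads them uniformly across the window. A sketch of the two formulas:

```python
import random

def fixed_backoff(attempt, base=1.0, cap=60.0):
    """Every client waits exactly the same time, so retries collide."""
    return min(cap, base * (2 ** attempt))

def full_jitter_backoff(attempt, base=1.0, cap=60.0):
    """AWS-style full jitter: uniform over [0, fixed_backoff]."""
    return random.uniform(0, fixed_backoff(attempt, base, cap))

# Three clients retrying after their 3rd failure:
print([fixed_backoff(3) for _ in range(3)])                   # [8.0, 8.0, 8.0] — a herd
print([round(full_jitter_backoff(3), 1) for _ in range(3)])   # spread across [0, 8]
```

The jittered variant retries sooner on average, yet produces far fewer simultaneous hits — which is exactly what an overloaded rate limiter needs.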

Fix 3: Respect X-RateLimit-Remaining Proactively

Don't wait until you get a 429 — watch the counters and slow down preemptively.

import time
import requests

def smart_rate_limiter(resp):
    """Sleep preemptively when rate limit is about to run out."""
    remaining = int(resp.headers.get("X-RateLimit-Remaining", "1"))
    reset_ts = int(resp.headers.get("X-RateLimit-Reset", "0"))
    if remaining <= 1 and reset_ts:
        wait = max(0, reset_ts - time.time() + 1)
        if wait > 0:
            print(f"Rate limit almost empty, sleeping {wait:.1f}s until reset")
            time.sleep(wait)

# In your request loop
for item in items_to_fetch:
    resp = requests.get(f"https://api.example.com/items/{item}")
    smart_rate_limiter(resp)

Fix 4: Node.js / Axios Version

import axios from 'axios';

async function requestWithRetry(url, options = {}, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await axios(url, options);
    } catch (err) {
      if (err.response?.status !== 429) throw err;
      const retryAfter = err.response.headers['retry-after'];
      let waitMs;
      if (retryAfter && /^\d+$/.test(retryAfter)) {
        waitMs = parseInt(retryAfter, 10) * 1000;
      } else if (retryAfter) {
        waitMs = Math.max(0, new Date(retryAfter).getTime() - Date.now());
      } else {
        // Exponential backoff with jitter
        waitMs = Math.min(60000, Math.random() * (1000 * 2 ** attempt));
      }
      console.warn(`429 received, waiting ${waitMs}ms`);
      await new Promise(r => setTimeout(r, waitMs));
    }
  }
  throw new Error(`Exceeded ${maxRetries} retries`);
}

// Usage
const resp = await requestWithRetry('https://api.github.com/users/octocat');

Fix 5: For Web Scraping — Proxy Rotation Is the Real Solution

If you're scraping and hitting 429, none of the backoff strategies above will truly fix it. The site's rate limit is tied to your IP address, and no amount of waiting makes your IP less suspicious. You need a different IP for each batch of requests.

This is what residential proxies are for. Each SpyderProxy request can be issued from a fresh residential IP, so to the target site every request looks like a different user. No single IP ever exceeds its rate limit.

import requests, random, time

def get_proxy_session():
    # Replace YOUR_USERNAME / YOUR_PASSWORD / PROXY_HOST with your gateway credentials
    session_id = random.randint(1, 10_000_000)
    return {
        "http": f"http://YOUR_USERNAME-session-{session_id}:YOUR_PASSWORD@PROXY_HOST:10000",
        "https": f"http://YOUR_USERNAME-session-{session_id}:YOUR_PASSWORD@PROXY_HOST:10000",
    }

# Scrape with per-request IP rotation
urls = [f"https://example.com/item/{i}" for i in range(1000)]
for url in urls:
    proxies = get_proxy_session()
    try:
        resp = requests.get(url, proxies=proxies, timeout=30)
        if resp.status_code == 429:
            # Even a residential IP can hit 429 occasionally — rotate and retry once
            proxies = get_proxy_session()
            resp = requests.get(url, proxies=proxies, timeout=30)
        # Process resp.text here
    except Exception as e:
        print(f"error on {url}: {e}")
    time.sleep(random.uniform(0.5, 2.0))

This pattern — fresh residential IP per request, small randomized delay, single-retry on 429 — drops the 429 rate to near zero on most sites. The SpyderProxy residential pool has 130M+ IPs in 195+ countries, so you're never reusing an IP on the same target.

How 429 Differs from Other Status Codes

  • 403 Forbidden: the server refuses the request, usually auth, permissions, or a geo-block. Fix credentials, check geo, or rotate IP. See our 403 guide.
  • 429 Too Many Requests: you're rate limited. Back off, respect Retry-After, rotate IPs.
  • 503 Service Unavailable: the server is temporarily down or overloaded. Retry with backoff; not your fault.
  • 529 Site Is Overloaded: non-standard; some services use it to mean "we're getting too much traffic globally". Wait longer than for a 429; back off 1–10 minutes.
  • 418 I'm a Teapot: an April Fools' joke from RFC 2324 that some sites repurpose as "we've detected you're a bot and we won't tell you how". Rotate IP, change User-Agent, add stealth to your headless browser.

A 429 means the server could serve you but is choosing not to. A 503 means the server can't serve anyone right now. Those need different fixes.
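That distinction can be encoded directly in a retry policy. A sketch — the wait values are this guide's rules of thumb, not a standard:

```python
# Map each status code to (should_retry, baseline_wait_seconds)
RETRY_POLICY = {
    429: (True, 1),      # rate limited: back off (or honor Retry-After)
    503: (True, 5),      # server overloaded: retry with backoff
    529: (True, 60),     # site overloaded: wait much longer, minutes not seconds
    403: (False, None),  # forbidden: retrying won't help — fix auth or rotate IP
}

def retry_decision(status_code):
    """Unknown codes default to 'do not retry'."""
    return RETRY_POLICY.get(status_code, (False, None))

print(retry_decision(429))  # (True, 1)
print(retry_decision(403))  # (False, None)
```

Centralizing this table keeps "is this retryable?" decisions consistent across every client in a codebase.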

Troubleshooting Checklist

When you're seeing repeated 429s, walk through this list:

  1. Read the response body. Half the time the API tells you exactly which limit you hit.
  2. Check for a Retry-After header and honor it exactly.
  3. Look at X-RateLimit-* headers and start watching them preemptively.
  4. Review your request rate. Are you actually exceeding the documented limit? A React useEffect with a missing dependency array can re-fire on every render and silently flood an API from a single page.
  5. Check your authentication. Many APIs have very low anonymous limits (60/hr on GitHub) and much higher authenticated limits (5,000/hr). A missing or wrong token means you hit 429 fast.
  6. Add exponential backoff with jitter as a baseline in every HTTP client you write.
  7. If scraping: rotate residential IPs. No amount of backoff fixes a flagged IP.
  8. Check for a shared NAT issue. If the 429s are random and sparse from a VPN or corp network, that's someone else on your IP. Use a residential proxy to bypass.

When 429 Is Actually 403 (And How to Tell)

Some bot-detection systems return 429 when they actually mean "we've identified you as a bot". Telltale signs:

  • No Retry-After header — a real rate limiter almost always sets this.
  • A 429 that never clears — real rate limits reset on clean time windows; bot detection gives you a permanent "soft 429" from that IP no matter how long you wait.
  • 403 on retry from a different path on the same site — if the site refuses to serve you anywhere, not just the endpoint you were hitting, it's bot detection.
  • Challenge page in the response body — HTML with "verifying you are a human" text.

When you see these signs, backoff won't help. Rotate to a fresh residential IP and change your browser fingerprint.
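These telltales can be checked mechanically. A heuristic sketch — the signals mirror the list above, but the two-signal threshold is a judgment call, not a spec:

```python
def looks_like_bot_block(status_code, headers, body):
    """Guess whether a 429 is really bot detection rather than a rate limit."""
    if status_code != 429:
        return False
    signals = 0
    if "Retry-After" not in headers:
        signals += 1  # real rate limiters almost always set Retry-After
    lowered = body.lower()
    if "verify" in lowered and "human" in lowered:
        signals += 1  # challenge page text in the body
    if "<html" in lowered:
        signals += 1  # HTML body where you expected an API error object
    return signals >= 2

# A JSON 429 with Retry-After: just a rate limit
print(looks_like_bot_block(429, {"Retry-After": "30"}, '{"error": "rate limited"}'))  # False
# An HTML challenge page with no Retry-After: bot detection
print(looks_like_bot_block(429, {}, "<html>Verifying you are a human...</html>"))     # True
```

When this returns True, skip the backoff path entirely and go straight to IP rotation.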

Proxies vs Backoff: When to Use Each

  • Using a public API with an auth token: backoff and respect Retry-After. Proxies don't help because the rate limit is on your API key, not your IP.
  • Scraping a public site without auth: rotate residential IPs. Backoff alone won't fix a per-IP rate limit.
  • Both (authenticated scraping): rotate IPs and back off. The rate limit is probably on the session, not the IP alone, so each fresh IP should also get a fresh cookie jar.
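The third case — rotate IPs and back off, with a fresh cookie jar per IP — looks like this with requests. A sketch: the gateway URL placeholders follow the pattern used earlier in this guide and are not real endpoints.

```python
import random
import time

import requests

def fresh_identity():
    """New proxy session AND a new cookie jar, so IP and session rotate together."""
    session = requests.Session()  # a Session starts with an empty cookie jar
    sid = random.randint(1, 10_000_000)
    session.proxies = {
        "http": f"http://YOUR_USERNAME-session-{sid}:YOUR_PASSWORD@PROXY_HOST:10000",
        "https": f"http://YOUR_USERNAME-session-{sid}:YOUR_PASSWORD@PROXY_HOST:10000",
    }
    return session

def fetch(url, max_retries=3):
    """Rotate the whole identity on each attempt, and still back off between them."""
    for attempt in range(max_retries):
        session = fresh_identity()
        resp = session.get(url, timeout=30)
        if resp.status_code != 429:
            return resp
        time.sleep(min(60, 2 ** attempt))
    return resp
```

The point of pairing the two: if you rotate the IP but keep the cookies, a session-based limiter still recognizes you; if you keep the IP but clear cookies, a per-IP limiter does.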

Frequently Asked Questions

What does HTTP 429 Too Many Requests mean?

HTTP 429 means the server is rate-limiting your client. You've sent too many requests in a time window that the server enforces. The server is not broken, and your request is not malformed — you just need to slow down or back off.

How do I fix a 429 error?

Three-step fix: (1) read the Retry-After header in the response and wait that long; (2) implement exponential backoff with jitter for when there's no Retry-After; (3) if you're scraping, rotate your IP using residential proxies because the rate limit is tied to your IP, not just your request pattern.

How long should I wait after a 429?

If the server sent a Retry-After header (in seconds or as an HTTP-date), wait exactly that long. If not, start with a 1-second delay and double it on each retry up to a maximum of 60 seconds. Add random jitter to each delay so multiple clients don't all retry at the same instant.

What's the difference between 429 and 403?

403 Forbidden means the server refuses to serve your request permanently (or at least until you fix something — wrong credentials, missing permissions, geo-block). 429 Too Many Requests means the server could serve you but you're hitting rate limits; wait and it resolves itself.

Does using a proxy fix 429 errors?

It depends. For web scraping (IP-based rate limits), yes — rotating residential proxies makes each request come from a different IP and the per-IP rate limit doesn't accumulate. For API-key-based limits (GitHub, Stripe, OpenAI), proxies don't help because the rate limit is on the auth token.

Should I retry immediately on 429?

No. Immediate retry will almost always produce another 429 and may get your IP flagged for abuse. Always wait at least what the Retry-After header specifies, or use exponential backoff with jitter if there's no header.

What's the Retry-After header?

Retry-After is a standard HTTP response header (RFC 7231) telling the client how long to wait before retrying. Its value is either an integer number of seconds (Retry-After: 30) or an HTTP-date (Retry-After: Wed, 21 Oct 2026 07:28:00 GMT). Most good APIs set it on every 429 and 503 response.

Why do I get 429 on Cloudflare-protected sites even with low traffic?

Cloudflare's rate limiting and bot-detection tiers evaluate more than just request rate — they look at your IP reputation, ASN, fingerprint, and behavior. Datacenter IPs, residential IPs flagged for previous abuse, or missing browser fingerprints can all trigger 429s at very low request rates. Residential or mobile proxies with clean IP reputation fix this.

Can 429 errors affect my SEO or ranking?

Yes. If Googlebot gets 429s from your site, Google will slow or stop crawling. Persistent 429s during indexing can delay new content being indexed by days or weeks. If you use rate limits on your own site, exclude crawler user-agents or whitelist Google's verified bot IP ranges.
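On your own site, that exclusion can be a simple check in front of the rate limiter. A sketch — `is_verified_crawler` here only shows the shape; in production, pair the User-Agent check with reverse-DNS verification of the client IP, since the User-Agent string alone is trivially spoofable:

```python
KNOWN_CRAWLER_TOKENS = ("googlebot", "bingbot", "duckduckbot")

def is_verified_crawler(user_agent):
    """Cheap first-pass check; add reverse-DNS IP verification in production."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in KNOWN_CRAWLER_TOKENS)

def should_rate_limit(user_agent, requests_in_window, limit=100):
    # Never 429 a verified crawler — search engines back off crawling on 429s
    if is_verified_crawler(user_agent):
        return False
    return requests_in_window > limit

print(should_rate_limit("Mozilla/5.0 (compatible; Googlebot/2.1)", 500))  # False
print(should_rate_limit("python-requests/2.31", 500))                     # True
```

The `limit=100` window size is an arbitrary example; use whatever budget your origin can actually sustain.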

Bottom Line

HTTP 429 Too Many Requests is simple: the server wants you to slow down. The complete fix is three layers: read Retry-After, back off with jitter, rotate residential proxies if scraping. Most production clients should implement all three. Skip any of them and you'll see recurring 429s as soon as your traffic grows.

For scraping workloads specifically, rate limits are per-IP, so SpyderProxy rotating residential at $1.75/GB (Budget) or $2.75/GB (Premium) is the single highest-leverage fix — rotating IPs on each request drops the 429 rate by an order of magnitude compared to a single-IP scraper with backoff.
