spyderproxy

cURL Set User Agent: -A Flag and Browser Spoofing (2026)

Alex R. | Published Sun May 10 2026

Quick verdict: use -A "Mozilla/5.0 ..." to set a custom User-Agent (or the equivalent -H "User-Agent: ..."). cURL's default UA is curl/X.Y.Z, which many sites block immediately. But UA spoofing alone is not enough for modern bot detection — sites also check the TLS fingerprint, HTTP/2 settings, and behavioral signals. Pair the UA with curl_cffi (Chrome impersonation) or route through residential proxies to cover the rest.

cURL's Default User-Agent

$ curl -v https://httpbin.org/headers 2>&1 | grep User-Agent
> User-Agent: curl/8.1.2

Honest and useful for debugging cURL itself. Hostile environments — CDN bot filters, anti-scraping tools — recognize this UA and reject immediately.

Set Custom UA

Two equivalent ways:

# Short form (-A flag)
curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36" https://example.com

# Long form (-H flag)
curl -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36" https://example.com

Both produce the same request. Use -A for cleaner scripts when UA is your only header customization. Use -H when you are setting multiple headers anyway.
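If you use the same spoofed UA across many scripts, the -A form drops neatly into a wrapper function. A minimal sketch, assuming bash — `chrome_curl` is a hypothetical helper name, not a standard command:

```shell
#!/usr/bin/env bash
# Chrome-on-Windows desktop UA (update the version as Chrome releases rotate).
CHROME_UA="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"

# Hypothetical wrapper: run curl with the Chrome UA by default.
# All other arguments (URLs, flags) pass through unchanged.
chrome_curl() {
  curl -A "$CHROME_UA" "$@"
}

# Usage (hits the network):
# chrome_curl -s https://example.com
```

Because -A is just another argument, the wrapper composes with any other curl flags you add later.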

Current Browser UA Strings (May 2026)

Pick a UA matching a recent stable release. These rotate — check useragentstring.com for the latest.

Chrome on Windows

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36

Chrome on macOS

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36

Firefox on Windows

Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0

Safari on macOS

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15

Mobile Safari on iPhone

Mozilla/5.0 (iPhone; CPU iPhone OS 17_5 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Mobile/15E148 Safari/604.1

Android Chrome

Mozilla/5.0 (Linux; Android 14; Pixel 8) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Mobile Safari/537.36

Rotate UAs Across Requests

#!/usr/bin/env bash
# Requires bash: arrays and $RANDOM are not POSIX sh.
UAS=(
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0"
)

for i in {1..10}; do
  UA="${UAS[$RANDOM % ${#UAS[@]}]}"   # pick a random UA per request
  curl -A "$UA" "https://target.com/page/$i"
  sleep 2                             # basic rate limiting between requests
done

Each request picks a random UA from the list. Combine with proxy rotation for serious scraping.
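The same pattern extends to the proxy layer. A sketch that pairs each request with a random UA and a random proxy — `fetch` is a hypothetical helper, and the proxy endpoints are placeholders, not real addresses:

```shell
#!/usr/bin/env bash
# Rotate both User-Agent and proxy per request (requires bash).
UAS=(
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0"
)
PROXIES=(
  "http://user:pass@proxy1.example:8080"   # placeholder endpoints
  "http://user:pass@proxy2.example:8080"
)

fetch() {
  local url=$1
  local ua="${UAS[RANDOM % ${#UAS[@]}]}"         # random UA
  local proxy="${PROXIES[RANDOM % ${#PROXIES[@]}]}"  # random proxy (-x)
  curl -s -A "$ua" -x "$proxy" "$url"
}

# Usage (hits the network through the proxy):
# fetch "https://target.com/page/1"
```

Keeping the UA and proxy choices inside one function means every request gets a fresh pair, rather than one UA riding a single IP for the whole run.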

Remove the User-Agent Entirely

Empty value:

curl -A "" https://example.com
# or
curl -H "User-Agent:" https://example.com

Some servers reject requests without a UA. Most accept it. Setting empty is different from leaving the default — sites that block "curl/X.Y" specifically might let "" through, and vice versa.

Why UA Alone Is Not Enough

Modern anti-bot services (Cloudflare, Akamai, DataDome, PerimeterX) score your traffic at multiple layers. UA is just one signal:

  1. IP reputation — datacenter IPs blocked regardless of UA. Use residential proxies for the IP layer.
  2. TLS ClientHello fingerprint — cURL's TLS stack has a fingerprint that does not match Chrome's. Sites can detect Chrome UA + non-Chrome TLS = bot. Use curl_cffi with impersonate="chrome126" for real Chrome TLS.
  3. HTTP/2 SETTINGS frame — the order and values of HTTP/2 settings are part of the fingerprint. curl_cffi matches this; plain cURL does not.
  4. Header order — browsers send headers in a specific order. cURL's order differs. Use -H in browser order to match.
  5. Missing browser-specific headers — sec-ch-ua, sec-fetch-mode, sec-fetch-site. Real browsers send these; cURL does not unless you add them manually.

For sites running modern anti-bot, plain cURL with a spoofed UA gets blocked even with the most realistic UA string. UA spoofing is necessary but not sufficient.

Match UA to Other Headers

If you spoof Chrome on Windows, your other headers should match too:

curl https://api.target.com/data \
     -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36" \
     -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8" \
     -H "Accept-Language: en-US,en;q=0.9" \
     -H "Accept-Encoding: gzip, deflate, br" \
     -H "sec-ch-ua: \"Chromium\";v=\"126\", \"Not-A.Brand\";v=\"24\", \"Google Chrome\";v=\"126\"" \
     -H "sec-ch-ua-mobile: ?0" \
     -H "sec-ch-ua-platform: \"Windows\""

This passes the basic header checks. TLS layer still needs curl_cffi for the toughest sites.
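For reuse across many requests, that header set can live in a bash array and be spliced into any curl call. A sketch, assuming bash — `CHROME_HEADERS` is a hypothetical variable name; the values mirror the example above:

```shell
#!/usr/bin/env bash
# Bundle the Chrome-on-Windows header set into a reusable bash array.
CHROME_HEADERS=(
  -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
  -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"
  -H "Accept-Language: en-US,en;q=0.9"
  -H "Accept-Encoding: gzip, deflate, br"
  -H 'sec-ch-ua: "Chromium";v="126", "Not-A.Brand";v="24", "Google Chrome";v="126"'
  -H "sec-ch-ua-mobile: ?0"
  -H 'sec-ch-ua-platform: "Windows"'
)

# Usage (hits the network):
# curl "${CHROME_HEADERS[@]}" https://api.target.com/data
```

Storing flags in an array (rather than a flat string) preserves the quoting of values that contain spaces, and keeps the headers in one place when Chrome's version number needs bumping.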

Custom UA Patterns

Some legitimate use cases for custom UAs:

  • Identifying your scraper: -A "MyCrawler/1.0 (+https://mysite.com/about)" — tells site owners who you are and how to contact you. Good citizenship; some sites whitelist identified scrapers.
  • API key in UA: some APIs accept the API key in the UA string. Check their docs.
  • Mobile-only content: set a mobile UA to get the mobile version of a site (smaller pages, sometimes different data).

UA vs Browser Fingerprinting

UA is the LEAST important fingerprint signal in 2026:

| Layer | What it sees | cURL's default |
|---|---|---|
| UA string | Self-reported identity | curl/X.Y (easy to spoof) |
| TLS ClientHello | Crypto handshake fingerprint | cURL/OpenSSL signature (hard to spoof without curl_cffi) |
| HTTP/2 fingerprint | SETTINGS frame, header order | cURL's pattern (hard to change) |
| IP reputation | ASN, blocklists | Whatever you're on (use residential proxies) |
| JS fingerprint (canvas, WebGL) | Only fires if you run JS | cURL never runs JS (use Playwright) |

For scraping that survives modern detection, fix the bottom layers first.

Related: cURL auth, cURL GET, Browser fingerprinting, Cloudscraper.