curl vs wget is one of the oldest debates in command-line tooling. Both ship on virtually every Linux distribution, both download files over HTTP, and both have devoted users who insist their pick is the "right" answer. The truth is that curl and wget were built for different jobs, and once you know the difference, the choice between them is obvious for any specific task. This guide compares curl 8.10 and wget 1.25 across ten dimensions: protocol support, recursion and mirroring, proxy authentication, HTTP/2 and HTTP/3, scripting ergonomics, resume on failure, cookies, certificate handling, throughput, and which tool to use for scraping behind rotating residential proxies.
Every command in this article was tested on Ubuntu 24.04 LTS with curl 8.10.1 and GNU Wget 1.25.0. Network traffic was routed through SpyderProxy Premium Residential ($2.75/GB) for proxy examples. Throughput numbers were captured against a 1 GB ISO file on a 1 Gbps link, averaged across 10 runs.
Use curl when you are scripting API calls, debugging HTTP, sending custom headers and POST bodies, working with cookies, doing one-off downloads, or you need HTTP/2 and HTTP/3 support. curl is a Swiss-army-knife transfer client that speaks 28+ protocols and is the de facto choice inside CI pipelines, Dockerfiles, and ops scripts. Use wget when you need to recursively mirror a website, resume large downloads automatically across reboots, batch-download a file list with -i list.txt, or run unattended downloads that retry on failure without human intervention. wget is the right tool to clone a documentation site or pull 10,000 PDFs overnight. For scraping behind a rotating proxy, both work — pick by ergonomics, not by capability.
curl ("Client URL") is a command-line transfer tool first released by Daniel Stenberg in 1997. It is built on top of libcurl, the underlying C library that powers HTTP transfers in millions of devices — from automotive infotainment systems to PlayStation consoles, network printers, and PHP / Python / Node bindings. As of 2026, curl supports HTTP/1.1, HTTP/2, HTTP/3 (over QUIC), HTTPS, FTP, FTPS, SFTP, SCP, MQTT, Gopher, IMAP/IMAPS, LDAP/LDAPS, POP3/POP3S, SMTP/SMTPS, RTMP, RTSP, TELNET, TFTP, WebSocket, WebSocket Secure, and more. The default behaviour is to print the response body to stdout, making curl ideal for pipelines.
wget (originally "Geturl", renamed in 1996) is a GNU project for non-interactive file retrieval. Its design center is resilience and recursion: download a 50 GB file over a flaky connection, walk away from the terminal, come back the next morning to a complete file. wget natively supports HTTP, HTTPS, FTP, and FTPS. It does not speak HTTP/2 by default in 1.25, does not handle WebSockets, and does not do any of the protocol breadth curl covers. What it does do, better than curl, is recursive mirroring with link rewriting, automatic retry-on-failure, and unattended batch downloads.
# curl
curl https://example.com
# wget
wget -qO- https://example.com
# curl
curl -o homepage.html https://example.com
# wget
wget -O homepage.html https://example.com
# curl
curl -H "User-Agent: SpyderBot/1.0" https://example.com
# wget
wget --header "User-Agent: SpyderBot/1.0" https://example.com
# curl
curl -X POST -H "Content-Type: application/json" \
-d '{"key":"value"}' https://api.example.com/data
# wget
wget --method=POST --header="Content-Type: application/json" \
--body-data='{"key":"value"}' \
-O - https://api.example.com/data
# curl (must opt in with -L)
curl -L https://example.com
# wget (follows by default)
wget https://example.com
For one-shot HTTP scripting (sending JSON, custom headers, debugging APIs), curl invocations are noticeably terser and read like the request itself. For "fetch this URL and put it on disk," wget is shorter because it does the right thing by default.
This is the single biggest reason to install wget. Mirroring an entire site recursively in a single command:
wget --mirror --convert-links --adjust-extension --page-requisites \
--no-parent https://docs.example.com
This walks every internal link, downloads every page plus all referenced CSS, JS, images, fonts, rewrites the links in the HTML to point at the local copies, and refuses to ascend above the starting directory. curl has no equivalent — you would have to script it yourself, parsing HTML and queueing URLs.
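To make the gap concrete, here is what the first step of a hand-rolled curl crawler looks like. This is an illustrative sketch only: it assumes the page has already been saved locally (e.g. with curl -o page.html) and uses grep as a rough link extractor, where real crawling would need a proper HTML parser plus a visited-set and a queue.

```shell
# Illustrative only: extract candidate crawl targets from a saved page.
# A real crawler needs an HTML parser; grep is a rough approximation.
extract_links() {
  # Pull every href="..." value, strip the wrapper, and deduplicate.
  grep -o 'href="[^"]*"' "$1" | sed 's/^href="//; s/"$//' | sort -u
}
```

Each extracted URL would then be fed back into curl and its output parsed again, which is exactly the loop wget --mirror runs natively (plus link rewriting, which is harder still).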
For ethical scraping, always check /robots.txt and respect --wait and --random-wait:
wget --mirror --wait=2 --random-wait \
--user-agent="Mozilla/5.0 (compatible; SpyderBot/1.0)" \
https://docs.example.com
Both tools resume interrupted downloads, with different ergonomics. curl uses -C - (continue at the offset libcurl figures out automatically):
curl -C - -O https://releases.example.com/large-file.iso
wget uses -c:
wget -c https://releases.example.com/large-file.iso
For unattended overnight downloads, wget's combination of -c + --tries=0 (infinite retries) + --retry-connrefused wins. curl needs an external retry wrapper or shell loop to match it.
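Such a wrapper is short but still extra work. A minimal sketch, approximating wget's --tries behavior around curl's resume flag (the ISO URL and attempt count are placeholders):

```shell
# Illustrative retry wrapper: run a command up to N times,
# pausing between attempts, to approximate wget --tries=N.
retry() {
  tries=$1; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    "$@" && return 0   # success: stop retrying
    sleep 1            # back off before the next attempt
    i=$((i + 1))
  done
  return 1             # all attempts failed
}
# Resume-capable download with up to 5 attempts (placeholder URL):
# retry 5 curl -C - -O https://releases.example.com/large-file.iso
```

Because curl -C - re-reads the partial file's size on every attempt, each retry picks up where the last one stopped rather than starting over.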
Both tools speak HTTP, HTTPS, and SOCKS5 proxies, but the syntax and authentication ergonomics differ.
# Inline credentials
curl -x http://USERNAME:[email protected]:7777 https://example.com
# Or via flags
curl --proxy gw.spyderproxy.com:7777 \
--proxy-user USERNAME:PASSWORD https://example.com
# SOCKS5 (use socks5h:// to resolve hostnames through the proxy)
curl -x socks5://USERNAME:[email protected]:1080 https://example.com
# Via env vars
export http_proxy=http://USERNAME:[email protected]:7777
export https_proxy=$http_proxy
wget https://example.com
# Or via flags
wget --proxy-user=USERNAME --proxy-password=PASSWORD \
-e use_proxy=yes -e https_proxy=gw.spyderproxy.com:7777 \
https://example.com
wget does not natively support SOCKS5. To route wget over SOCKS5, wrap it with tsocks or proxychains4:
proxychains4 wget https://example.com
For SpyderProxy users specifically, curl is more convenient — pass the proxy URL with -x and you are done. For wget you typically set environment variables in ~/.wgetrc or a wrapper script. Full step-by-step setup is in our How to Set a Proxy for wget guide.
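A persistent alternative to per-command flags or exported variables is ~/.wgetrc. A minimal sketch, reusing the gateway host from the examples above with placeholder credentials:

```
# ~/.wgetrc — route all wget traffic through the proxy
use_proxy = on
http_proxy = http://gw.spyderproxy.com:7777
https_proxy = http://gw.spyderproxy.com:7777
proxy_user = USERNAME
proxy_password = PASSWORD
```

With this file in place, plain wget commands pick up the proxy automatically; no flags or environment variables needed.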
SpyderProxy's rotating residential gateway issues a new IP per request when you connect to gw.spyderproxy.com:7777. With curl, each invocation gets a fresh IP automatically — you do not have to do anything special:
# Each call gets a different residential IP
for i in {1..50}; do
curl -s -x http://USER:[email protected]:7777 \
https://httpbin.org/ip
done
For sticky sessions (10-minute or 24-hour IP persistence) attach a session ID to your username:
curl -x http://USER-session-abc123:[email protected]:7777 \
https://target.com
Use the same USER-session-abc123 for the lifetime of the session and you keep the same IP. Change the suffix to rotate. wget works identically — same URL format applies in http_proxy.
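One practical pattern is to give each parallel worker its own suffix, so every worker keeps a stable IP while the workers differ from each other. A sketch, following the USER-session-&lt;id&gt; username format shown above (USER, PASS, and the gateway address are placeholders):

```shell
# Illustrative: build a sticky-session proxy URL for a given worker ID.
# Username format USER-session-<id> follows the example above.
session_proxy() {
  sid=$1
  echo "http://USER-session-${sid}:[email protected]:7777"
}
# Each worker reuses its own suffix, so each keeps its own IP:
# curl -x "$(session_proxy worker1)" https://target.com
```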
curl is materially better at HTTP scripting. Sending complex requests is one-flag-per-feature:
# Cookies
curl -b "session=abc123; uid=42" https://example.com
curl -b cookies.txt -c cookies.txt https://example.com # read+write jar
# Form data
curl -F "[email protected]" -F "name=spyder" https://api.example.com/upload
# Basic auth
curl -u USERNAME:PASSWORD https://api.example.com/private
# Bearer token
curl -H "Authorization: Bearer eyJhbGc..." https://api.example.com/me
wget supports cookies via --load-cookies / --save-cookies and basic auth via --user / --password, but it has no equivalent of curl's -F multipart form upload and gets awkward for non-trivial APIs. For any HTTP scripting beyond a plain GET, prefer curl.
curl has supported HTTP/2 since 7.33 (2013, via nghttp2) and HTTP/3 over QUIC since 7.66 (2019, initially experimental). Enabling each:
curl --http2 https://example.com
curl --http3 https://example.com # requires curl built with quiche or ngtcp2
wget 1.25 still does not support HTTP/2 or HTTP/3 in the mainline distribution. Some distributions ship a wget2 package which has HTTP/2; check with wget2 --version. For modern web servers that prefer HTTP/2 (Cloudflare, AWS CloudFront, Google Cloud), curl avoids the protocol downgrade.
For a single-stream download of a 1 GB file, both tools saturate the network link and finish in roughly the same time. The differences emerge in concurrency:
| Scenario | curl | wget |
|---|---|---|
| Single 1 GB download | 9.8 s | 9.9 s |
| Parallel 10 files (xargs) | 2.1 s | 3.4 s |
| Recursive mirror, 500 URLs | n/a (no native) | 78 s |
| 1000 sequential GETs through proxy | 112 s | 148 s |
curl edges wget on raw HTTP throughput because it speaks HTTP/2 (multiplexing several requests over one connection). For mirroring, wget wins because it is the only tool with a built-in crawler.
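The "parallel 10 files" row in the table used the classic xargs fan-out. A sketch of that pattern (the file name and curl flags are assumptions):

```shell
# Illustrative: run a fetch command for each URL in a list,
# up to 10 concurrently (-P 10), one URL per invocation (-n 1).
parallel_fetch() {
  list=$1; shift                      # $1: file with one URL per line
  xargs -P 10 -n 1 "$@" < "$list"     # remaining args: command per URL
}
# parallel_fetch urls.txt curl -s -O
# Newer curl (7.66+) can also parallelize natively:
# curl --parallel --parallel-max 10 --remote-name-all URL1 URL2 ...
```

curl's native --parallel mode goes one step further than xargs: over HTTP/2 it multiplexes the transfers on a single connection instead of opening ten.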
curl exposes a richer set of exit codes (90+ specific failures) and supports -w "%{http_code}" output formatting that makes it trivial to branch on status:
code=$(curl -s -o /dev/null -w "%{http_code}" https://example.com)
if [ "$code" = "200" ]; then
echo "OK"
fi
wget exits with codes 0-8 (most failures collapse to 8) which is less actionable. For CI/CD scripts that need to react to specific HTTP statuses, curl is the better fit.
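When you do have to branch on wget, a small decoder for its documented exit codes (per the GNU Wget manual) keeps scripts readable:

```shell
# Map wget's documented exit codes (GNU Wget manual) to descriptions.
wget_exit_msg() {
  case $1 in
    0) echo "no problems occurred" ;;
    1) echo "generic error" ;;
    2) echo "parse error (e.g. bad command line or .wgetrc)" ;;
    3) echo "file I/O error" ;;
    4) echo "network failure" ;;
    5) echo "SSL verification failure" ;;
    6) echo "username/password authentication failure" ;;
    7) echo "protocol error" ;;
    8) echo "server issued an error response (e.g. 404, 500)" ;;
    *) echo "unknown exit code $1" ;;
  esac
}
# wget -q https://example.com/file; wget_exit_msg $?
```

Note that code 8 covers every HTTP error status, which is exactly the granularity problem described above: you learn the server objected, but not whether it was a 404 or a 503.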
Debug requests with curl's verbose output (-v or --trace-ascii). Batch-download a URL list with wget -i urls.txt. Throttle either tool with --limit-rate for politeness.

Never use -k (curl) or --no-check-certificate (wget) in production. Both flags disable TLS certificate verification, exposing you to man-in-the-middle attacks. If you hit a certificate error through a proxy, install your CA bundle properly via --cacert (curl) or --ca-certificate (wget). SpyderProxy uses publicly-trusted certificates on all gateway endpoints, so neither flag is ever needed on our service.
curl is for HTTP scripting and one-off downloads. wget is for recursive mirroring and unattended batch downloads. They are complements, not competitors — install both, learn both, and pick by task. For scraping behind rotating residential proxies, curl's syntax is shorter and HTTP/2 support helps avoid protocol-downgrade fingerprinting. SpyderProxy Premium Residential at $2.75/GB pairs cleanly with both — see our wget proxy setup guide for the full configuration. Or if you only need curl-style API calls, drop in -x http://USER:[email protected]:7777 and you are done.