spyderproxy

Mastering HTTPX Client for Python (2026)

Alex R. | Published Wed May 06 2026

Quick verdict: HTTPX is the modern Python HTTP client: the same API as requests for sync code, plus async support and HTTP/2. For new projects in 2026, HTTPX is the recommended pick. Existing requests code works fine; migrate when you need async, HTTP/2, or to consolidate sync and async code onto one library. Performance: roughly 5-10x higher throughput than requests for concurrent workloads (the win comes from concurrency, not per-request speed); comparable for single requests.

This guide covers what HTTPX is, when to migrate from requests, performance comparisons, and 8 working examples for sync, async, streaming, HTTP/2, and proxies.

Install

pip install httpx
# For HTTP/2 support (quote the extra so shells like zsh don't expand the brackets)
pip install "httpx[http2]"

HTTPX vs requests vs aiohttp

Feature                           | requests    | HTTPX        | aiohttp
Sync API                          | Yes         | Yes          | No
Async API                         | No          | Yes          | Yes
HTTP/2                            | No          | Yes          | No (HTTP/3 in v4+)
API similarity to requests        | Native      | ~95% drop-in | Different
Throughput (100 concurrent reqs)  | Slow (sync) | Fast         | Fastest

8 Working Examples

1. Basic GET (drop-in for requests)

import httpx
r = httpx.get("https://api.example.com/items")
print(r.json())

2. Client with persistent connection pool

with httpx.Client() as client:
    r1 = client.get("https://api.example.com/items")
    r2 = client.get("https://api.example.com/users")
# Connections are pooled and reused — much faster than separate calls

3. Async with concurrent requests

import asyncio
import httpx

async def main():
    urls = [f"https://api.example.com/items/{i}" for i in range(100)]
    async with httpx.AsyncClient() as client:
        rs = await asyncio.gather(*[client.get(u) for u in urls])
    return [r.json() for r in rs]

asyncio.run(main())

4. HTTP/2 enabled

with httpx.Client(http2=True) as client:
    r = client.get("https://www.cloudflare.com")
    print(r.http_version)  # "HTTP/2"

5. Through a proxy

proxy = "http://USER:[email protected]:8080"
with httpx.Client(proxy=proxy) as client:  # the old proxies= dict was removed in newer HTTPX
    r = client.get("https://api.example.com")

6. Streaming a large response

with httpx.stream("GET", "https://example.com/large-file.zip") as r:
    with open("local.zip", "wb") as f:
        for chunk in r.iter_bytes():
            f.write(chunk)

7. Custom timeouts

timeout = httpx.Timeout(30.0, connect=10.0, read=20.0)
with httpx.Client(timeout=timeout) as client:
    r = client.get("https://slow-api.com")

8. POST JSON with auth

r = httpx.post(
    "https://api.example.com/items",
    json={"name": "sample", "price": 42},
    headers={"Authorization": "Bearer YOUR_TOKEN"},
)
print(r.json())

Migrating From requests

For most code, the migration is a one-line import change:

# Before
import requests
r = requests.get(url, params={"q": "x"}, headers={"X-Key": "abc"})

# After
import httpx
r = httpx.get(url, params={"q": "x"}, headers={"X-Key": "abc"})

The 5% incompatibility cases:

  • Sessions: requests.Session() → httpx.Client()
  • Verify SSL: requests' verify=False works in HTTPX too, but for custom CA use verify=ssl.create_default_context(...)
  • Streaming: requests uses r.iter_content(); HTTPX uses r.iter_bytes() in a stream context
  • Proxies: requests accepts {"http": ..., "https": ...}; current HTTPX takes a single proxy="..." argument (the old proxies= dict with trailing-slash keys like "https://" was removed), or mounts= for per-scheme routing

Async + Proxies for Scraping at Scale

import asyncio
import httpx

PROXY = "http://USER:[email protected]:8080"

async def fetch(client, url):
    r = await client.get(url, timeout=20)
    return r.text

async def main(urls):
    async with httpx.AsyncClient(
        proxy=PROXY,  # proxies= dict was removed in newer HTTPX; use mounts= for per-scheme routing
        http2=True,
        limits=httpx.Limits(max_connections=50),
    ) as client:
        return await asyncio.gather(*[fetch(client, u) for u in urls])

urls = ["https://example.com/" + str(i) for i in range(1000)]
results = asyncio.run(main(urls))

Through a rotating residential proxy with HTTPX async + HTTP/2, real-world throughput is 200-500 requests/second on a single client, vs 20-50/second with sync requests.