
How to Scrape Google Maps for Business Data

Complete 2026 guide to scraping business names, phone numbers, addresses, and ratings from Google Maps — with working Python code and proxy configurations that hold up in production.

Daniel K. | Published Apr 15, 2026 | 14 min read

Google Maps is the largest structured database of local businesses on the planet — more than 200 million listings covering restaurants, shops, service providers, and professionals worldwide. For sales teams, market researchers, and agencies, scraping Google Maps is one of the highest-ROI lead generation activities possible.

The problem: Google's anti-bot systems are among the most advanced on the web. A naive scraper gets blocked in under 100 requests. This guide walks through exactly how to scrape Google Maps reliably in 2026 — what data is actually available, the right proxy setup, working Python code, and the tactics that keep scrapers running at scale.

What Data Can You Scrape from Google Maps?

Google Maps exposes a surprising amount of structured data on public listings. Here's what's available without authentication:

| Field | Availability | Reliability |
| --- | --- | --- |
| Business name | Every listing | Very high |
| Address | Every physical listing | Very high |
| Phone number | ~85% of businesses | High |
| Website URL | ~60% of businesses | High |
| Category | Every listing | Very high |
| Hours of operation | ~75% of businesses | Medium |
| Average rating | Listings with reviews | Very high |
| Review count | Listings with reviews | Very high |
| Individual reviews | Paginated, up to a cap | Medium |
| Price level ($ to $$$$) | Restaurants, services | Medium |
| Plus Code / coordinates | Every physical listing | Very high |
| Photos | Most listings | High |
| Popular times | Some businesses | Low |

For B2B lead generation — the most common use case — the four fields that matter most are business name, address, phone number, and website. These are the minimum required to enrich a contact list or feed a CRM.

Google Maps Official API vs Scraping: Which Should You Use?

Google offers the Places API as the official way to access this data. Before reaching for a scraper, understand the tradeoffs:

| Factor | Google Places API | Direct Scraping |
| --- | --- | --- |
| Cost | $17-32 per 1,000 requests | Proxy costs only ($50-200/month) |
| Rate limits | Quota-based, billable | Limited by your proxy pool size |
| Result caps | 60 results max per search | Effectively unlimited |
| Data freshness | Cached, sometimes stale | Real-time from Maps UI |
| Fields available | Limited by field masks | Everything visible on the page |
| Legal status | Fully authorized | TOS violation (gray area) |
| Maintenance | Zero | Ongoing — selectors break |

The API is the right choice if you need 60 or fewer results per query, budget isn't a constraint, and you want zero maintenance. Scraping makes more sense when you need comprehensive coverage of a region (thousands of listings), unrestricted data fields, or real-time data at scale.

For most lead generation projects — "give me every plumber in Texas" — scraping is the only economically viable option. The API's 60-result cap per query would require tens of thousands of separate API calls to cover a state.
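To see why, here's some back-of-envelope arithmetic. A single query can't return more than 60 results, so full coverage means tiling the state into cells and searching each one. The cell size, page count, and per-request price below are rough, illustrative assumptions (the price is the midpoint of the range in the table above):

```python
# Why the 60-result cap forces a grid: one "plumbers in Texas" query returns
# at most 60 listings, so covering the state means one search per map cell.
# All figures are rough assumptions for illustration only.

TEXAS_AREA_SQ_MI = 268_000    # approximate land area of Texas
CELL_AREA_SQ_MI = 25          # one search per ~5 mi x 5 mi cell
REQUESTS_PER_QUERY = 3        # 60 results arrive as three pages of 20
COST_PER_1K_REQUESTS = 25.0   # USD, midpoint of the $17-32 range above

cells = TEXAS_AREA_SQ_MI // CELL_AREA_SQ_MI
requests = cells * REQUESTS_PER_QUERY
cost = requests * COST_PER_1K_REQUESTS / 1_000

print(f"{cells:,} cells -> {requests:,} API requests -> ~${cost:,.0f}")
# -> 10,720 cells -> 32,160 API requests -> ~$804
```

Even with generous cell sizes you land in the tens of thousands of requests for one category in one state — and that bill repeats every time you refresh the data.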

Why Google Maps Is Hard to Scrape

Google's anti-scraping stack is one of the most sophisticated on the web. Naive approaches fail within minutes because of:

IP Rate Limiting

Google aggressively throttles IPs that send too many requests. Typical thresholds are around 100-300 Maps requests per hour before CAPTCHA challenges appear, and a few hundred more before you get fully blocked. Datacenter IPs are limited much more aggressively — often blocked within 20-50 requests.
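A simple per-IP pacer keeps each address under a conservative hourly budget. The thresholds above are observed behavior, not published limits, so treat the budget as a tunable assumption — a minimal sketch:

```python
import random
import time

class IpPacer:
    """Keep each IP under a requests-per-hour budget with jittered gaps.
    The default budget is a conservative assumption, not a documented limit."""

    def __init__(self, max_per_hour=120):
        self.min_gap = 3600 / max_per_hour   # seconds between requests
        self.last_request = {}               # ip -> monotonic timestamp

    def wait(self, ip):
        """Sleep just long enough to keep `ip` under its hourly budget."""
        now = time.monotonic()
        elapsed = now - self.last_request.get(ip, 0.0)
        gap = self.min_gap * random.uniform(1.0, 1.4)  # jitter, never faster
        if elapsed < gap:
            time.sleep(gap - elapsed)
        self.last_request[ip] = time.monotonic()
```

Call `pacer.wait(ip)` before each request on that IP; the jitter avoids the perfectly even cadence that rate limiters flag.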

CAPTCHA Challenges

Google shows reCAPTCHA v2 and v3 challenges on suspicious traffic. Once you see one, your IP's reputation is compromised for hours. Solving them automatically is possible (through services like 2Captcha) but adds cost and latency.

JavaScript-Heavy Rendering

Google Maps is a dynamic single-page app. Data is loaded via XHR requests to internal APIs, which require maintaining a valid browser session and cookies. Pure HTML scraping misses most of the structured data.

ASN and Fingerprint Detection

Google identifies traffic from datacenter ASNs (AWS, DigitalOcean, OVH) instantly and treats it as hostile by default. Combined with browser fingerprinting — TLS signature, canvas, WebGL — most scrapers fail on either the IP check or the fingerprint check.

Proxy Setup: What Actually Works for Google Maps

Proxy choice is the foundation. Without clean IPs, no amount of code sophistication saves you.

| Proxy Type | Google Maps Success Rate | Throughput Per IP | Cost Efficiency |
| --- | --- | --- | --- |
| Free / Public | 0-2% | N/A | Unusable |
| Datacenter | 10-25% | 20-50 req/hr | Poor |
| Rotating Residential | 85-93% | 30-80 req/hr | Excellent |
| Static Residential / ISP | 75-88% | 80-120 req/hr | Good |
| Mobile 4G/5G | 95-98% | 50-100 req/hr | Expensive but reliable |

Rotating residential proxies offer the best cost-per-successful-request ratio for most Google Maps scraping. They give you access to a pool of clean IPs from real ISPs at a per-GB cost an order of magnitude lower than mobile proxies.

For aggressive scraping where uptime matters more than cost, mobile proxies remain the gold standard — Google cannot blanket-block carrier IPs without disrupting their own mobile user base.
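As a sketch of what rotation looks like in practice: cycle through a pool of sticky sessions, taking a fresh one per search query. The gateway hostname and session-style usernames below are illustrative placeholders — the username format that pins a sticky session varies by provider:

```python
import itertools

# Hypothetical pool of five sticky residential sessions. Replace the server
# and credential format with whatever your provider actually uses.
PROXIES = [
    {"server": "http://gate.example-proxy.com:10000",
     "username": f"user-session-{n}",
     "password": "your_password"}
    for n in range(1, 6)
]

_pool = itertools.cycle(PROXIES)

def proxy_for_next_query():
    """Return the next proxy config. Call once per search query, not once
    per request, so a full query (search, scroll, detail views) keeps one IP."""
    return next(_pool)
```

Each returned dict can be passed straight to Playwright's `proxy=` launch option, as in the scraper below.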

Scraping Google Maps with Python: Working Code

Here's a complete scraper using Playwright for JS rendering and residential proxies for clean IPs. This handles the full flow: search → extract listings → get details → save.

Step 1: Install Dependencies

pip install playwright playwright-stealth
playwright install chromium

Step 2: Configure Proxies

PROXY = {
    "server": "http://gate.spyderproxy.com:10000",
    "username": "your_username",
    "password": "your_password",
}

SEARCH_QUERY = "plumbers in Austin, Texas"
MAX_RESULTS = 200

Step 3: Build the Scraper

import asyncio
import json
import random
from playwright.async_api import async_playwright
from playwright_stealth import stealth_async

async def scrape_google_maps(query, max_results=200):
    results = []

    async with async_playwright() as p:
        browser = await p.chromium.launch(
            headless=True,
            proxy=PROXY,
            args=["--disable-blink-features=AutomationControlled"],
        )

        context = await browser.new_context(
            user_agent=(
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                "AppleWebKit/537.36 (KHTML, like Gecko) "
                "Chrome/124.0.0.0 Safari/537.36"
            ),
            viewport={"width": 1920, "height": 1080},
            locale="en-US",
        )

        page = await context.new_page()
        await stealth_async(page)

        # Navigate to Google Maps search
        url = f"https://www.google.com/maps/search/{query.replace(' ', '+')}"
        await page.goto(url, wait_until="networkidle")

        # Wait for results list to load
        await page.wait_for_selector('div[role="feed"]', timeout=30000)

        # Scroll the results panel to load more listings
        panel = page.locator('div[role="feed"]')
        previous_count = 0

        while len(results) < max_results:
            # Extract currently visible listings
            cards = await page.locator('div[role="feed"] > div > div[jsaction]').all()

            for card in cards[previous_count:]:
                data = await extract_listing(card)
                if data:
                    results.append(data)
                    if len(results) >= max_results:
                        break

            previous_count = len(cards)

            # Scroll to load more
            await panel.evaluate("el => el.scrollBy(0, el.scrollHeight)")
            await asyncio.sleep(random.uniform(1.5, 3.5))

            # Check if we've reached the end
            new_cards = await page.locator('div[role="feed"] > div > div[jsaction]').count()
            if new_cards == previous_count:
                break

        await browser.close()

    return results


async def extract_listing(card):
    """Extract structured data from a single listing card"""
    try:
        name = await card.locator('.fontHeadlineSmall, [role="heading"]').first.inner_text()

        # Get the rating and review count (absent on unrated listings)
        rating_block = ""
        rating_el = card.locator('.fontBodyMedium span[role="img"]')
        if await rating_el.count() > 0:
            rating_block = await rating_el.first.get_attribute('aria-label') or ""

        # Click into the listing for full details
        await card.click()
        await asyncio.sleep(random.uniform(1.5, 3.0))

        # Extract details from the side panel (.first returns a Locator, not a coroutine)
        details = card.page.locator('[role="main"]').first
        phone = ""
        address = ""
        website = ""
        category = ""

        # Phone
        phone_btn = details.locator('button[data-item-id^="phone"]')
        if await phone_btn.count() > 0:
            phone = await phone_btn.first.get_attribute('aria-label') or ""

        # Address
        addr_btn = details.locator('button[data-item-id="address"]')
        if await addr_btn.count() > 0:
            address = await addr_btn.first.get_attribute('aria-label') or ""

        # Website
        web_btn = details.locator('a[data-item-id="authority"]')
        if await web_btn.count() > 0:
            website = await web_btn.first.get_attribute('href') or ""

        return {
            "name": name.strip(),
            "address": address.replace("Address: ", "").strip(),
            "phone": phone.replace("Phone: ", "").strip(),
            "website": website,
            "rating_info": rating_block.strip(),
        }
    except Exception as e:
        print(f"Error extracting: {e}")
        return None


# Run it
results = asyncio.run(scrape_google_maps(SEARCH_QUERY, MAX_RESULTS))
with open("leads.json", "w") as f:
    json.dump(results, f, indent=2)

print(f"Scraped {len(results)} businesses")

This scraper handles the core flow. For production use at scale, you'll want to add the patterns in the next section.

7 Anti-Detection Tactics for Sustained Scraping

Getting one run of 200 listings is easy. Running thousands of queries per day without interruption takes more work.

1. Query-Level IP Rotation

Don't rotate per request — rotate per query. Each search query is a session that includes the search, scrolling, and detail views. Keeping the same IP through one complete query looks natural. Rotating in the middle of a query looks robotic.

2. Randomize Scroll Timing and Distance

Real users don't scroll with perfect cadence. Use random delays between 1.5-3.5 seconds between scrolls, vary the scroll distance, and occasionally scroll back up a bit before continuing down.
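One way to package this is a helper that produces a schedule of (scroll distance, pause) steps to feed into the panel-scrolling code; the distances and the 15% back-scroll probability are illustrative choices:

```python
import random

def scroll_plan(steps=20):
    """Build a human-ish scroll schedule: random distances, random pauses,
    and an occasional short drift back up before continuing down."""
    plan = []
    for _ in range(steps):
        if random.random() < 0.15:  # ~15% of the time, scroll back up a bit
            plan.append((-random.randint(100, 400), random.uniform(0.5, 1.2)))
        # Forward scroll with the 1.5-3.5s pause range from the tactic above
        plan.append((random.randint(400, 1200), random.uniform(1.5, 3.5)))
    return plan
```

Iterate the plan, calling `panel.evaluate(f"el => el.scrollBy(0, {distance})")` and sleeping for the pause between steps.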

3. Vary Your Search Patterns

If you're scraping "plumbers in [city]" for every US city, don't run the queries sequentially in alphabetical order. Shuffle the query list. Mix in unrelated queries occasionally. Make the traffic look like organic usage.
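A minimal sketch of that shuffling — the city list and decoy queries here are placeholders:

```python
import random

# Build the full query list up front, then randomize the order so cities
# are never hit alphabetically. Decoys are optional noise traffic.
cities = ["Austin", "Boston", "Chicago", "Denver", "El Paso"]
decoys = ["coffee near me", "pharmacy open now"]

queries = [f"plumbers in {city}" for city in cities]
queries += random.sample(decoys, k=1)   # sprinkle in an unrelated query
random.shuffle(queries)
```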

4. Rotate Browser Fingerprints

Cycle through realistic combinations of user agent, viewport, and locale. A real population visiting Google Maps isn't all Chrome 124 on 1920x1080 — it's a mix of Chrome, Firefox, Edge, Safari, and dozens of screen sizes.
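A sketch of a small profile pool whose entries you'd pass as keyword arguments to `browser.new_context()`. The specific UA strings and viewports are illustrative — keep them current, and keep each profile internally consistent (a Safari UA with a Windows viewport is itself a tell):

```python
import random

PROFILES = [
    {"user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/124.0.0.0 Safari/537.36",
     "viewport": {"width": 1920, "height": 1080}, "locale": "en-US"},
    {"user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/124.0.0.0 Safari/537.36",
     "viewport": {"width": 1440, "height": 900}, "locale": "en-US"},
    {"user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) "
                   "Gecko/20100101 Firefox/126.0",
     "viewport": {"width": 1536, "height": 864}, "locale": "en-GB"},
]

def random_profile():
    """Pick one profile per session: browser.new_context(**random_profile())."""
    return random.choice(PROFILES)
```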

5. Cache Aggressively

Google Maps data doesn't change minute-to-minute. Cache results for 24-48 hours and refresh incrementally. Every request you don't make is a request that can't trigger a block.
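A minimal file-backed cache sketch with a TTL in that 24-48 hour band:

```python
import json
import time
from pathlib import Path

CACHE_DIR = Path("maps_cache")
TTL_SECONDS = 36 * 3600   # midpoint of the 24-48h window

def _path(query):
    return CACHE_DIR / (query.replace(" ", "_") + ".json")

def cache_get(query):
    """Return cached results for a query if still fresh, else None."""
    path = _path(query)
    if path.exists():
        entry = json.loads(path.read_text())
        if time.time() - entry["fetched_at"] < TTL_SECONDS:
            return entry["results"]   # fresh hit: no request needed
    return None

def cache_put(query, results):
    CACHE_DIR.mkdir(exist_ok=True)
    _path(query).write_text(
        json.dumps({"fetched_at": time.time(), "results": results}))
```

Check `cache_get(query)` before launching a browser session; only scrape on a miss, then `cache_put` the results.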

6. Handle CAPTCHAs Gracefully

When you hit a CAPTCHA, don't fight it on that IP. Stop, rotate to a fresh IP, and cool down. Trying to solve CAPTCHAs from the same IP just reinforces the block.
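A sketch of the per-IP cooldown bookkeeping — the CAPTCHA check here is deliberately naive (a substring match) and stands in for however your scraper actually detects challenges:

```python
import time

COOLDOWN_SECONDS = 2 * 3600   # bench a burned IP for a couple of hours
benched = {}                  # ip -> time it was benched

def looks_like_captcha(html):
    """Crude challenge detector; replace with your real check."""
    lowered = html.lower()
    return "recaptcha" in lowered or "unusual traffic" in lowered

def bench(ip):
    """Record that this IP hit a CAPTCHA and should cool down."""
    benched[ip] = time.time()

def usable(ip):
    """True if the IP is not currently cooling down."""
    return time.time() - benched.get(ip, 0) > COOLDOWN_SECONDS
```

When `looks_like_captcha` fires, call `bench(ip)`, pull a fresh proxy, and retry the query there instead of solving in place.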

7. Scale Horizontally, Not Vertically

Don't run 100 threads from one machine. Run distributed scrapers across regions — 10 workers × 10 concurrent queries each, spread across different cloud regions or serverless deployments. Horizontal scaling avoids the burst patterns that trip Google's volumetric detection.
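The partitioning step can be sketched as round-robin sharding of the query list across workers (the worker count is illustrative):

```python
def shard(queries, workers=10):
    """Round-robin the query list into one sub-list per worker, so each
    worker sends a modest, steady stream instead of one machine bursting."""
    shards = [[] for _ in range(workers)]
    for i, query in enumerate(queries):
        shards[i % workers].append(query)
    return shards
```

Each shard then ships to its own worker (a small VM, container, or serverless function in its own region) running the scraper above.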

Handling Reviews and Photos at Scale

Reviews and photos add an order of magnitude to scraping complexity because they're paginated and often require additional user interaction.

Scraping Reviews

  • Click the "Reviews" tab on the listing's side panel
  • Scroll the reviews feed to load more — each scroll typically loads 10 reviews
  • Google caps review pagination somewhere around 500-1,000 reviews per listing depending on the business
  • For each review: reviewer name, date, rating, review text, and owner response if any
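The pagination budget above can be sketched as a loop — `load_more` is a placeholder for the Playwright scroll-and-wait step, assumed to return an empty list once the feed is exhausted:

```python
REVIEWS_PER_SCROLL = 10   # observed: each scroll loads roughly 10 reviews
REVIEW_CAP = 1000         # upper end of the pagination cap noted above

def collect_reviews(load_more, target=300):
    """Scroll until `target` reviews are collected, the cap is hit,
    or the feed runs dry."""
    reviews = []
    budget = min(target, REVIEW_CAP)
    scrolls = -(-budget // REVIEWS_PER_SCROLL)   # ceiling division
    for _ in range(scrolls):
        batch = load_more()   # -> list of review dicts, [] when exhausted
        if not batch:
            break
        reviews.extend(batch)
    return reviews[:budget]
```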

Scraping Photos

  • Click the "Photos" tab to open the photo gallery
  • Images are served from *.googleusercontent.com
  • Replace the URL size suffix (=w203-h152-k-no) with =w1200-h900-k-no for higher resolution
  • Be aware that heavy photo scraping significantly increases bandwidth cost on your proxy plan
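The size-suffix swap is a one-line regex — keeping in mind the `=wNNN-hNNN-k-no` pattern is observed behavior, not a documented API, and may change:

```python
import re

def upscale(url, width=1200, height=900):
    """Rewrite a googleusercontent thumbnail URL to a larger size by
    replacing the trailing =wNNN-hNNN-k-no suffix."""
    return re.sub(r"=w\d+-h\d+-k-no$", f"=w{width}-h{height}-k-no", url)
```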

Legal and Ethical Considerations

Scraping Google Maps sits in a legal gray area. Key points to understand before proceeding:

  • Google's Terms of Service explicitly prohibit scraping. This is a contractual matter, not criminal law, but TOS violations can expose you to civil liability.
  • Publicly displayed business data (names, addresses, phone numbers, hours) is generally considered factual information that can't be copyrighted. This is the data most B2B lead gen projects collect.
  • User-generated reviews are protected by copyright owned by the reviewers. Aggregating or republishing reviews exposes you to DMCA liability — stick to metadata (counts, averages) if possible.
  • Personal data protection laws (GDPR in the EU, CCPA in California) apply when you're collecting data on identifiable individuals. Business phone numbers are typically fine; personal reviewer names less so.
  • Google has successfully sued scrapers in the past under the CFAA. The landmark case is hiQ v. LinkedIn, which favors scrapers of public data, but Google's TOS and enforcement are specific to their platform.

For commercial scraping at scale, consult an attorney qualified in your jurisdiction. This guide is technical, not legal advice.

Alternatives: When Not to Scrape Google Maps

Direct scraping isn't always the best choice. Consider these alternatives:

Google Places API

Official, authorized, works within strict rate limits. Best for small-to-medium data needs and when you can't tolerate any TOS risk.

Local Data Providers

Services like Data Axle, D&B Hoovers, and ZoomInfo aggregate business data from many sources including Google Maps. Expensive but legally clean and pre-verified.

Yelp, Yellow Pages, Facebook Business

Alternative sources that cover similar data. Often easier to scrape and sometimes have different listings than Google. Combining sources gives better coverage than any single one.

Open Data Sets

OpenStreetMap has volunteer-contributed business data that's free and legally unrestricted. Coverage varies but for some categories (restaurants, cafes, tourist attractions) it's excellent.
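If you go the OpenStreetMap route, the Overpass API is the usual way in. Here's a sketch that builds an Overpass QL query for all restaurants in a named area — POST the string to a public endpoint such as `overpass-api.de/api/interpreter`; the name-based area matching is an assumption to verify against your target region:

```python
def overpass_query(area_name, amenity="restaurant"):
    """Build an Overpass QL query for all nodes with the given amenity
    tag inside the named OSM area."""
    return (
        '[out:json][timeout:60];'
        f'area["name"="{area_name}"]->.a;'
        f'node["amenity"="{amenity}"](area.a);'
        'out;'
    )
```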

Frequently Asked Questions

How much data can I scrape from Google Maps per day?

With a well-configured scraper using residential proxies and proper pacing, 5,000-20,000 listings per day from a small proxy pool is achievable. With mobile proxies and horizontal scaling, 100,000+ per day is realistic at a higher cost.

Does Google block all scrapers?

Google blocks poorly-configured scrapers quickly — those using datacenter IPs, default user agents, or aggressive pacing. A well-designed scraper using residential proxies, stealth browsers, and realistic pacing can run indefinitely without getting blocked. It's an engineering problem, not an impossibility.

What's the fastest way to scrape Google Maps without coding?

No-code tools like Apify, Octoparse, and PhantomBuster offer pre-built Google Maps scrapers. Costs range from $0.50 to $5.00 per 1,000 listings depending on the service. Faster to get started but usually more expensive than running your own scraper at volume.

Can I scrape Google Maps reviews?

Technically yes, but be aware that reviews are copyrighted by the reviewers, not by Google. Aggregating reviews for analysis (sentiment, keywords) is lower-risk than republishing them. Most commercial projects scrape review counts and average ratings, not the full text.

Which proxies are best for scraping Google Maps?

Rotating residential proxies offer the best cost-per-success ratio for most use cases. Mobile 4G/5G proxies give the highest success rates (95-98%) but at 3-5x the cost. Datacenter proxies fail on Google Maps within a handful of requests — avoid them entirely. See our proxy plans for options tailored to scraping.

Is it legal to scrape Google Maps for lead generation?

The legal situation is nuanced. Scraping publicly-visible business information (name, address, phone, website) is generally considered lower-risk than scraping personal data or copyrighted content. However, it violates Google's TOS, which creates civil liability exposure. Consult a qualified attorney in your jurisdiction before running commercial scraping at scale.


Ready to Scrape Google Maps at Scale?

SpyderProxy residential and mobile proxies deliver 85-98% success rates on Google Maps with worldwide IP coverage and sticky session support for sustained scraping.