Tools

The Scraper API includes powerful tools for URL discovery, domain crawling, and search. Use these tools to find and collect URLs and domains at scale.

Available Tools

| Tool | Description | Endpoint |
| --- | --- | --- |
| Url Discovery | Discover URLs from sitemaps, Common Crawl, or an active crawl. Returns URLs only. | /api/v1/scraper/map |
| Domain Crawling | Crawl a website and automatically scrape every discovered URL. | /api/v1/scraper/crawl |
| Search | Find relevant domains by searching for topics, products, or keywords. | /api/v1/scraper/search |

Quick Comparison

| Feature | Url Discovery | Domain Crawling | Search |
| --- | --- | --- | --- |
| Returns | URLs only | URLs + scraped content | Domains + titles |
| Scrapes | No | Yes (every URL) | No |
| Sources | sitemap, commoncrawl, crawl | crawl only | web search |
| Default depth | 1 | 1 | N/A |
| URL validation | Optional | Not applicable | N/A |
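
As a sketch of how the discovery options above might map onto a request, assuming hypothetical field names (url, sources, depth, validate) and a placeholder host — check the Url Discovery reference for the real schema:

```python
import json
import urllib.request

# All field names below are assumptions for illustration only; the
# actual parameter schema is documented in the Url Discovery reference.
payload = {
    "url": "https://example.com",
    "sources": ["sitemap", "commoncrawl"],  # or "crawl" for an active crawl
    "depth": 1,                             # default depth per the table above
    "validate": True,                       # optional liveness check per URL
}

request = urllib.request.Request(
    "https://api.example.com/api/v1/scraper/map",  # host is a placeholder
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)
# with urllib.request.urlopen(request) as resp:  # uncomment with real credentials
#     urls = json.load(resp)
```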

Which Tool Should You Use?

Use Url Discovery when:

  • You only need URLs (not content)
  • You want flexibility in discovery sources
  • You need to validate which URLs are live
  • You’re building a URL list for another process

Use Domain Crawling when:

  • You need both URLs and scraped content
  • You want to scrape every discovered page
  • You want a single request to do discovery + scraping
  • You’re doing complete site scraping
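
Because crawling returns scraped content alongside each URL, response handling might look like the sketch below — the response shape here is an assumption for illustration, not the documented schema:

```python
# Hypothetical response shape: a list of pages, each pairing a discovered
# URL with its scraped content. Verify against the Domain Crawling reference.
response = {
    "pages": [
        {"url": "https://example.com/", "content": "<html>home</html>"},
        {"url": "https://example.com/about", "content": "<html>about</html>"},
    ]
}

# Every discovered URL arrives already scraped, so no second pass is needed.
scraped = {page["url"]: page["content"] for page in response["pages"]}
print(len(scraped))  # 2
```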

Use Search when:

  • You don’t know the exact domain to scrape
  • You want to find relevant websites by topic or keyword
  • You need to discover competitors or similar sites
  • You’re doing market research or lead generation

Getting Started

Choose a tool above to see full documentation, parameters, and examples.

ℹ️
All three tools support async mode for large jobs. Set async: true to process in the background and poll for results.
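
A generic polling loop for async jobs might look like this sketch; the job-status shape (a dict with a status field) and the terminal states are assumptions, and fetch_status stands in for whatever status endpoint the API exposes:

```python
import time

def poll_until_done(fetch_status, interval=2.0, timeout=600.0):
    """Call fetch_status() until the job finishes or the timeout expires.

    fetch_status is any callable returning a dict with a "status" key;
    the terminal states used here ("completed", "failed") are assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(interval)
    raise TimeoutError("async job did not finish within the timeout")

# Usage with a stub that completes on the first poll:
job = poll_until_done(lambda: {"status": "completed", "urls": []}, interval=0)
print(job["status"])  # completed
```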

Credit Requirements

Url Discovery:

  • 2 credits per source (sitemap, commoncrawl)
  • 0.5 credits per URL validation
  • Crawl source uses scraper rates
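
The rates above can be combined into a simple estimate. This helper is illustrative only, and it deliberately excludes the crawl source, which bills at scraper rates instead of a flat per-source charge:

```python
def discovery_credit_estimate(sources, validated_urls=0):
    """Estimate Url Discovery credits: 2 per flat-rate source plus
    0.5 per validated URL. The crawl source is excluded because it
    uses scraper rates rather than a flat per-source charge."""
    flat_rate_sources = [s for s in sources if s in ("sitemap", "commoncrawl")]
    return 2 * len(flat_rate_sources) + 0.5 * validated_urls

print(discovery_credit_estimate(["sitemap", "commoncrawl"], validated_urls=100))  # 54.0
```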

Domain Crawling:

Search:

  • 5 credits per query used
  • Credits for unused queries are refunded if the max URL limit is reached early
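
The refund rule can be sketched as follows; the idea that credits are reserved up front per query is an assumed billing model for illustration:

```python
CREDITS_PER_QUERY = 5

def search_credit_cost(queries_reserved, queries_used):
    """Net credit cost of a search: 5 credits per query actually used.
    Queries reserved but never run (because the max URL limit was hit
    early) are refunded. Up-front reservation is an assumed model."""
    charged = CREDITS_PER_QUERY * queries_reserved
    refunded = CREDITS_PER_QUERY * (queries_reserved - queries_used)
    return charged - refunded

print(search_credit_cost(queries_reserved=10, queries_used=4))  # 20
```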

Related Documentation