# Tools

The Scraper API includes powerful tools for URL discovery and domain crawling. Use these tools to find and collect URLs at scale.
## Available Tools
| Tool | Description | Endpoint |
|---|---|---|
| URL Discovery | Discover URLs from sitemaps, Common Crawl, or an active crawl. Returns URLs only. | /api/v1/scraper/map |
| Domain Crawling | Crawl a website and automatically scrape every discovered URL. | /api/v1/scraper/crawl |
## Quick Comparison
| Feature | URL Discovery | Domain Crawling |
|---|---|---|
| Returns | URLs only | URLs + scraped content |
| Scrapes | No | Yes (every URL) |
| Sources | sitemap, commoncrawl, crawl | crawl only |
| Default depth | 1 | 1 |
| URL validation | Optional | Not applicable |
## Which Tool Should You Use?
Use URL Discovery when:
- You only need URLs (not content)
- You want flexibility in discovery sources
- You need to validate which URLs are live
- You’re building a URL list for another process
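For example, a URL Discovery call to `/api/v1/scraper/map` might carry a body like the one below. The endpoint path and source names come from the tables above, but the exact field names (`sources`, `validate_urls`, `depth`) are illustrative assumptions, not confirmed parameter names:

```python
import json

# Hypothetical request body for POST /api/v1/scraper/map.
# Field names are assumptions for illustration only.
def build_map_payload(url, sources=("sitemap",), validate=False, depth=1):
    """Build a URL Discovery payload: returns URLs only, no scraping."""
    allowed = {"sitemap", "commoncrawl", "crawl"}
    unknown = set(sources) - allowed
    if unknown:
        raise ValueError(f"unsupported sources: {sorted(unknown)}")
    return {
        "url": url,
        "sources": list(sources),   # any of: sitemap, commoncrawl, crawl
        "validate_urls": validate,  # optional liveness check
        "depth": depth,             # default depth is 1
    }

payload = build_map_payload("https://example.com",
                            sources=("sitemap", "commoncrawl"))
print(json.dumps(payload, indent=2))
```

Validation is optional, so leave `validate` off when you only need a raw URL list for a downstream process.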
Use Domain Crawling when:
- You need both URLs and scraped content
- You want to scrape every discovered page
- You want a single request to do discovery + scraping
- You’re doing complete site scraping
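A Domain Crawling request could be sketched as follows. The endpoint path is taken from the table above; the base URL, auth scheme, and body fields are placeholders, and the request is constructed but deliberately not sent:

```python
import json
import urllib.request

# Sketch of a Domain Crawling request. The endpoint path comes from the
# docs above; the host, auth header, and body fields are placeholders.
API_BASE = "https://api.example.com"  # substitute the real API host

def build_crawl_request(target_url, api_key, depth=1, run_async=True):
    """Build (but do not send) a POST to /api/v1/scraper/crawl."""
    body = {
        "url": target_url,
        "depth": depth,      # default crawl depth is 1
        "async": run_async,  # recommended for large jobs; poll for results
    }
    return urllib.request.Request(
        API_BASE + "/api/v1/scraper/crawl",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_crawl_request("https://example.com", api_key="YOUR_API_KEY")
# urllib.request.urlopen(req) would submit the crawl; not executed here.
```

Because every discovered URL is scraped, prefer async mode for anything beyond a small site.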
## Getting Started
Choose a tool above to see full documentation, parameters, and examples.
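Both tools support an async mode for large jobs (see the note below). A minimal sketch of the submit-then-poll loop, with the status check stubbed out because the real status endpoint and response shape aren't documented on this page:

```python
import itertools
import time

# Generic submit-then-poll loop. fetch_status is any callable returning a
# job-state string; the "pending"/"completed" values are stand-ins.
def poll_until_done(fetch_status, interval=0.0, max_polls=20):
    for _ in range(max_polls):
        if fetch_status() == "completed":
            return True
        time.sleep(interval)  # back off between polls
    return False

# Simulated status endpoint: pending twice, then completed forever.
statuses = itertools.chain(["pending", "pending"],
                           itertools.repeat("completed"))
finished = poll_until_done(lambda: next(statuses))
print(finished)  # True
```

In real use, `fetch_status` would be an HTTP GET against the job's status endpoint, and `interval` should be a few seconds rather than zero.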
ℹ️ Both tools support async mode for large jobs. Use `async: true` to process in the background and poll for results.

## Credit Requirements
URL Discovery:
- 2 credits per source (sitemap, commoncrawl)
- 0.5 credits per URL validation
- Crawl source uses scraper rates
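Putting the rates above into arithmetic (excluding the crawl source, which bills at scraper rates): two fixed sources plus validation of 100 discovered URLs costs 2 × 2 + 100 × 0.5 = 54 credits.

```python
# Estimate URL Discovery credits using the rates listed above.
# Excludes the crawl source, which is billed at scraper rates.
CREDITS_PER_SOURCE = 2.0       # sitemap, commoncrawl
CREDITS_PER_VALIDATION = 0.5   # optional per-URL liveness check

def discovery_credits(num_sources, num_validated_urls=0):
    return (num_sources * CREDITS_PER_SOURCE
            + num_validated_urls * CREDITS_PER_VALIDATION)

# Two sources plus validation of 100 discovered URLs:
print(discovery_credits(2, 100))  # 54.0
```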
Domain Crawling:
- Actual costs depend on number of URLs scraped
- Uses Scraper API pricing
## Related Documentation
- Scraper API Main Docs - Core scraping functionality
- Parameters - All scraper configuration options
- Usage Examples - Code samples