Cancel Async Job
Tool to cancel an asynchronous scraping job. Use when you need to stop processing of pending tasks in a job. Completed tasks remain available.
16 tools available
Scrape.do is a web scraping API offering rotating residential, data-center, and mobile proxies with headless browser support and session management to bypass anti-bot protections (e.g., Cloudflare, Akamai) and extract data at scale in formats like JSON and HTML.
Tool to create an asynchronous scraping job with specified targets and options. Use when you need to scrape multiple URLs in parallel without waiting for results. Returns a job ID immediately for polling results later via the get job status action.
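The create-then-poll workflow described above can be sketched as a small polling loop. This is a hedged illustration, not Scrape.do's client library: the fetch_status callable is a hypothetical stand-in for a real GET against the job-status endpoint, and the status strings are assumed.

```python
import time

# Hypothetical polling helper for an async job. fetch_status(job_id) stands in
# for a real GET to the Scrape.do async job-status endpoint; the status
# strings ("pending", "running", "completed", ...) are assumed values.
def wait_for_job(fetch_status, job_id, poll_interval=0.01, timeout=5.0):
    """Poll fetch_status(job_id) until the job leaves a pending state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status not in ("pending", "running"):
            return status  # e.g. "completed", "cancelled", "failed"
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} still pending after {timeout}s")
```

In practice fetch_status would issue the HTTP request and back off more gently (seconds, not milliseconds); the loop structure is the point here.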
Retrieves account information and usage statistics from Scrape.do. This action makes a GET request to the Scrape.do info endpoint to fetch:
- Subscription status
- Concurrent request limits and usage
- Monthly request limits and remaining requests
- Real-time usage statistics
Rate limit: maximum 10 requests per minute. Use remaining request counts to monitor credits proactively, as different scraping operations (e.g., rendered-page requests) consume varying credit amounts and exhaustion mid-run causes failures.
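The proactive credit check suggested above can be reduced to a simple guard before starting a batch. This is a minimal sketch under stated assumptions: the function name and the per-request cost multiplier are hypothetical, and the remaining-request count would come from the info endpoint's response.

```python
def can_start_batch(remaining_requests, batch_size, cost_per_request=1):
    """Return True only if remaining credits cover the whole batch.

    remaining_requests: the count reported by the Scrape.do info endpoint.
    cost_per_request: assumed multiplier, since rendered-page requests
    consume more credits than plain GETs.
    """
    return remaining_requests >= batch_size * cost_per_request
```

Checking once up front (and again periodically on long runs) avoids the mid-run failures the description warns about.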
Get all seller offers for any Amazon product. Retrieves every seller listing including pricing, shipping costs, seller information, and Buy Box status in structured JSON format. Use when you need to compare prices across multiple sellers or find the best deal for a specific product.
Extract structured product data from Amazon product detail pages (PDP). Returns comprehensive product information including title, pricing, ratings, images, best seller rankings, and technical specifications in JSON format.
Tool to get raw HTML from any Amazon page with ZIP code geo-targeting. Use when you need complete unprocessed HTML source from Amazon URLs with location-based targeting. Ideal for scraping pages not covered by other structured endpoints.
Tool to get account information for the Async API including concurrency limits and usage statistics. Use when you need to check available concurrency slots, active jobs, or remaining credits for Async API operations.
Tool to retrieve details and status of a specific asynchronous scraping job. Use when you need to check the progress, status, or results of a previously created async job. Returns job metadata including creation time, completion time, task counts, and detailed task list.
Tool to retrieve the result of a specific task within an asynchronous job. Returns the scraped content for that particular URL. Use when you need to check the status and result of a previously submitted async scraping task.
A tool to scrape web pages using Scrape.do's API service. Makes a basic GET request to fetch webpage content while handling anti-bot protections and proxy rotation automatically. Does not execute JavaScript by default; pages that require client-side rendering (SPAs, dynamically loaded content) will return incomplete HTML. Use SCRAPE_DO_GET_RENDER_PAGE or set render=true for those cases.
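A basic request to an API of this shape can be sketched as a URL builder. This is an assumption-laden illustration: the base endpoint and the render parameter name follow Scrape.do's documented pattern as I understand it, but verify them against the official docs before relying on them. The key detail is that the target URL must be percent-encoded.

```python
from urllib.parse import urlencode

API_BASE = "https://api.scrape.do/"  # assumed endpoint

def build_scrape_url(token, target_url, render=False):
    """Compose a Scrape.do-style GET request URL.

    urlencode percent-encodes the target URL, so query strings inside it
    (e.g. ?a=1) do not collide with the API's own parameters.
    render=true is assumed to enable headless-browser rendering.
    """
    params = {"token": token, "url": target_url}
    if render:
        params["render"] = "true"
    return API_BASE + "?" + urlencode(params)
```

The returned string would then be fetched with any HTTP client; no request is sent here.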
Tool to list all asynchronous scraping jobs. Returns a paginated list of jobs with their status and metadata. Use when you need to retrieve job history or monitor job statuses. Supports pagination with up to 100 jobs per page.
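Walking the paginated job list can be sketched as a loop that stops on a short page. This is a hedged sketch: fetch_page is a hypothetical stand-in for the real list-jobs call, and the short-page termination rule is an assumption about how the endpoint signals the final page.

```python
def list_all_jobs(fetch_page, page_size=100):
    """Collect jobs across pages.

    fetch_page(page, page_size) stands in for a real call to the jobs-list
    endpoint and returns a list of job records. A page shorter than
    page_size is assumed to be the last one (100 is the documented cap).
    """
    jobs, page = [], 1
    while True:
        batch = fetch_page(page, page_size)
        jobs.extend(batch)
        if len(batch) < page_size:
            return jobs
        page += 1
```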
This tool implements Scrape.do's Proxy Mode, which routes requests through its proxy server. It provides an alternative way to access the same scraping capabilities, handling complex JavaScript-rendered pages, geolocation-based routing, device simulation, and built-in anti-bot and retry mechanisms.
Tool to scrape web pages using the POST method via the Scrape.do API. Use when you need to send POST requests to target websites with custom request-body data. Supports all parameters from the GET endpoint, plus request-body customization for POST/PUT/PATCH methods.
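Forwarding a body to the target site can be sketched by constructing (but not sending) a POST request. This is an illustrative assumption, not the verified API contract: the endpoint URL is assumed, and whether Scrape.do forwards the body verbatim should be checked against the official docs.

```python
import json
import urllib.request
from urllib.parse import urlencode

def build_post_scrape_request(token, target_url, body):
    """Build (but do not send) a POST request to an assumed Scrape.do
    endpoint, attaching a JSON body intended for the target site."""
    qs = urlencode({"token": token, "url": target_url})
    return urllib.request.Request(
        "https://api.scrape.do/?" + qs,          # assumed endpoint
        data=json.dumps(body).encode("utf-8"),   # body forwarded to target
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending it would be `urllib.request.urlopen(req)`; the builder is separated out so the request can be inspected first.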
Tool to search Amazon and scrape product listings with structured results. Performs keyword searches and returns structured product data including titles, prices, ratings, Prime status, sponsored flags, and position rankings in JSON format. Use when you need to search for products on Amazon marketplace or gather product information from search results.
This tool allows users to block specific URLs during the scraping process. It's particularly useful for blocking unwanted resources like analytics scripts, advertisements, or any other URLs that might interfere with the scraping process or slow it down. It provides granular control by allowing users to specify URL patterns to block, thereby improving scraping performance and maintaining privacy.
This tool allows users to set a broader geographical targeting by specifying a region code instead of a specific country code. This is useful when you want to scrape content from an entire region rather than a specific country. Note that this feature requires super mode to be enabled and is only available for Business Plan or higher subscriptions.
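A region-targeted request can be sketched as another URL builder. Hedged assumptions throughout: the parameter names (regionalGeoCode, super) and the example region value follow Scrape.do's documented naming as I recall it, and should be confirmed; the super flag is included because the description says the region feature requires super mode.

```python
from urllib.parse import urlencode

def build_regional_url(token, target_url, region):
    """Compose a region-targeted request URL (assumed parameter names)."""
    params = {
        "token": token,
        "url": target_url,
        "super": "true",            # region targeting requires super mode
        "regionalGeoCode": region,  # e.g. "europe" (assumed value format)
    }
    return "https://api.scrape.do/?" + urlencode(params)  # assumed endpoint
```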