Scrapfly

12 tools available

Scrapfly is a web scraping API that enables developers to extract data from websites efficiently, offering features like JavaScript rendering, anti-bot protection bypass, and proxy rotation.

Tags: AI, web scraping, developer tools

Tools & actions (12)

Capture Website Screenshot

Tool to capture a full-page or viewport screenshot of a website. Use when you need to take a screenshot with options like JS rendering, custom resolution, or accessibility testing. Returns the screenshot image directly. Supports vision deficiency simulations and dark mode.
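Screenshot requests like the one above can be sketched as a simple GET URL. This is a minimal illustration only: the `/screenshot` endpoint path and the option names (`capture`, `resolution`) are assumptions inferred from the description, so check Scrapfly's API reference for the exact parameters.

```python
from urllib.parse import urlencode

BASE = "https://api.scrapfly.io/screenshot"  # assumed endpoint path

def screenshot_url(api_key: str, target: str, **options) -> str:
    """Build the GET URL for a screenshot request (no request is sent)."""
    params = {"key": api_key, "url": target, **options}
    return f"{BASE}?{urlencode(params)}"

req = screenshot_url(
    "MY_KEY",
    "https://example.com",
    capture="fullpage",      # assumed option: full-page vs. viewport capture
    resolution="1920x1080",  # assumed option: custom resolution
)
```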

Capture Screenshot Metadata (HEAD)

Tool to capture screenshot metadata without downloading the image body. Use this for async screenshot workflows where you need a URL to retrieve the image later. Returns the screenshot URL in the response, saving bandwidth compared to retrieving the full image.

Create Scrapfly Crawler

Tool to create a new web crawler to recursively crawl an entire website. Returns a crawler UUID for tracking progress. Use when you need to crawl multiple pages from a website with configurable limits and extraction rules.
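A crawler-creation call would carry a configuration payload along these lines. Every field name below (`url`, `limit`, `max_depth`) is a hypothetical stand-in for the "configurable limits and extraction rules" the description mentions, not the confirmed request schema.

```python
import json

def crawler_config(start_url: str, max_pages: int = 100, depth: int = 3) -> str:
    """Serialize a crawl configuration as a JSON request body (sketch only)."""
    config = {
        "url": start_url,    # where the recursive crawl starts
        "limit": max_pages,  # assumed cap on total pages crawled
        "max_depth": depth,  # assumed link-follow depth
    }
    return json.dumps(config)

body = crawler_config("https://example.com", max_pages=50)
# The API is expected to respond with a crawler UUID used to track progress.
```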

Extract Structured Data

Tool to extract structured data from HTML or other content using AI models, LLM prompts, or custom templates. Use when you need to parse web pages or documents into structured JSON data. Supports predefined extraction models for common types (articles, products, events) or custom extraction via prompts/templates.
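Since the tool supports three extraction modes (predefined model, LLM prompt, or custom template), a caller has to pick exactly one. The sketch below encodes that choice; the parameter names `extraction_model`, `extraction_prompt`, and `extraction_template` are assumptions based on the description.

```python
from urllib.parse import urlencode

def extraction_params(api_key: str, *, model=None, prompt=None, template=None) -> str:
    """Build the query string for one extraction mode (sketch, assumed names)."""
    if sum(x is not None for x in (model, prompt, template)) != 1:
        raise ValueError("choose exactly one of model, prompt, or template")
    params = {"key": api_key}
    if model is not None:
        params["extraction_model"] = model      # e.g. "product", "article", "event"
    elif prompt is not None:
        params["extraction_prompt"] = prompt    # free-form LLM instruction
    else:
        params["extraction_template"] = template
    return urlencode(params)

qs = extraction_params("MY_KEY", model="product")
```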

Get Scrapfly Account Information

Tool to retrieve Scrapfly account information. Use after authenticating to get API credit balance and usage stats. Returns comprehensive account data including subscription plan, usage statistics, billing info, and project settings.

Get Crawler Artifact

Tool to download crawler artifact files in WARC or HAR format. Use when you need to retrieve the complete crawl results as an archive file. WARC format is recommended for large crawls as it includes gzip compression.

Get Crawler Contents

Tool to retrieve extracted content from crawled pages. Supports multiple output formats including markdown, text, HTML, and JSON. Use when you need to access the actual content extracted during a crawl, with optional filtering by URL and format selection.

Get Crawler Status

Tool to get the current status of a crawler, including progress, pages crawled, and completion state. Use in a polling workflow to monitor crawl progress.
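The polling workflow is the usual loop: fetch the status, stop on a terminal state, otherwise wait and retry. `fetch_status` below stands in for the real status call, and the state names ("running", "done", "failed") are illustrative assumptions.

```python
import time

def wait_for_crawl(fetch_status, interval: float = 0.0, max_polls: int = 100) -> str:
    """Poll fetch_status() until the crawler reaches a terminal state."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in ("done", "failed"):
            return status
        time.sleep(interval)  # back off between polls
    raise TimeoutError("crawler did not finish within max_polls")

# Stub fetcher that reports completion on the third poll:
states = iter(["running", "running", "done"])
result = wait_for_crawl(lambda: next(states))
```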

Get Crawler URLs

Tool to retrieve the list of discovered and crawled URLs from a crawler. Use when you need to get all URLs found during a crawl or filter by status to analyze failed URLs with error codes. Supports pagination for large result sets.
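For large result sets, the pagination mentioned above amounts to accumulating pages until an empty one comes back. The page-number/empty-page convention here is an assumption; the real API may use cursors or report a total count instead.

```python
def all_urls(fetch_page):
    """Collect URL records across pages; fetch_page(n) returns a list,
    and an empty list marks the end of the results (assumed convention)."""
    page, results = 1, []
    while True:
        batch = fetch_page(page)
        if not batch:
            return results
        results.extend(batch)
        page += 1

# Stub backend with two pages of discovered URLs:
pages = {1: ["https://example.com/", "https://example.com/a"],
         2: ["https://example.com/b"]}
urls = all_urls(lambda p: pages.get(p, []))
```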

Scrapfly Scrape

Tool to perform a web scraping request. Use when you need to fetch a page with custom configuration like JS rendering, proxies, and extraction.
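A basic scrape request is a GET with the API key, the target URL, and any options appended as query parameters. The `/scrape` path with `key` and `url` follows Scrapfly's public API shape; `render_js` and `country` are included only to illustrate the options mentioned above and should be verified against the docs.

```python
from urllib.parse import urlencode

def scrape_url(api_key: str, target: str, **options) -> str:
    """Build a scrape request URL (no request is sent in this sketch)."""
    params = {"key": api_key, "url": target, **options}
    return "https://api.scrapfly.io/scrape?" + urlencode(params)

req = scrape_url("MY_KEY", "https://example.com",
                 render_js="true",  # render the page in a headless browser
                 country="us")      # assumed proxy-location option
```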

Scrapfly Scrape POST

Tool to scrape web pages using POST method to send data in the request body. Use when you need to scrape endpoints that require POST requests, such as form submissions or APIs that expect data payload.

Scrape With PUT

Tool to scrape web pages using the PUT method with a body payload. Use when the target API requires PUT requests with data in the request body. Forwards the PUT request with a custom body to the target URL. If not specified, the content type defaults to application/x-www-form-urlencoded.
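The content-type default described above can be sketched as a small header-merging step: honor whatever the caller sets, and fall back to `application/x-www-form-urlencoded` otherwise. Only the header handling is shown; the forwarding itself is out of scope here.

```python
def put_headers(user_headers=None):
    """Return headers for a PUT scrape, applying the documented
    content-type default when the caller did not set one."""
    headers = dict(user_headers or {})
    headers.setdefault("content-type", "application/x-www-form-urlencoded")
    return headers

default = put_headers()                                     # no content-type given
custom = put_headers({"content-type": "application/json"})  # caller's value kept
```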

Ready to automate with Scrapfly?

Wire it up in minutes. No coding required.
