Introduction
Welcome to the UseScraper docs
Let’s get started
UseScraper is a powerful yet easy-to-use API for web crawling and web scraping. We offer two main APIs:
Scraper
With our Scraper API you can instantly scrape any webpage as plain text, markdown or raw HTML. When you use the Scraper API, a Chrome browser with JavaScript enabled instantly visits the specified URL and returns the results. Enable our advanced scraping proxy to circumvent most bot detection and blocking systems.
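As a rough sketch, a Scraper request might look like the Python example below using the requests library. The endpoint path, header, and parameter names here (/scraper/scrape, format, advanced_proxy) are illustrative assumptions; check the API reference for the exact values.

```python
import requests

API_KEY = "YOUR_API_KEY"

# Hypothetical endpoint and parameters -- confirm against the API reference.
response = requests.post(
    "https://api.usescraper.com/scraper/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "url": "https://example.com",
        "format": "markdown",     # assumed options: "text", "markdown", or "html"
        "advanced_proxy": True,   # assumed flag for the advanced scraping proxy
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())  # response shape depends on the requested format
```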
Crawler
With Crawler, you can extract content from thousands of pages on a website in minutes. You can choose to save the content as plain text, markdown or raw HTML. Markdown is ideal for AI fine-tuning or for saving to a vector database to retrieve later with RAG.
Crawler will automatically detect whether a sitemap exists. If it does, it will crawl the pages listed in the sitemap. If no sitemap is found, it will fall back to link crawling and discover pages as it goes.
A Chrome browser with JavaScript enabled is used to scrape every page, so you can be confident that even the most complex websites can be crawled.
By default we crawl with 5 simultaneous Chrome browsers. If you need to increase this limit, just get in contact and we'll be happy to help.
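To give a feel for the workflow, starting a crawl and polling for its result might look like the sketch below. The endpoint paths, request body fields, and status values are assumptions for illustration only; the API reference has the authoritative request and response shapes.

```python
import time
import requests

API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
BASE = "https://api.usescraper.com"  # assumed base URL

# Start a crawl job (endpoint and body fields are illustrative assumptions).
job = requests.post(
    f"{BASE}/crawler/jobs",
    headers=HEADERS,
    json={"urls": ["https://example.com"], "format": "markdown"},
    timeout=60,
).json()

# Poll until the crawl finishes (status values assumed).
while True:
    status = requests.get(
        f"{BASE}/crawler/jobs/{job['id']}", headers=HEADERS, timeout=60
    ).json()
    if status.get("status") in ("complete", "failed", "cancelled"):
        break
    time.sleep(5)

print(status)
```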