GET /crawler/jobs/{id}
curl --request GET \
  --url https://api.usescraper.com/crawler/jobs/{id} \
  --header 'Authorization: Bearer <token>'
{
    "id": "7YEGS3M8Q2JD6TNMEJB8B6EKVS",
    "urls": [
        "https://example.com"
    ],
    "createdAt": 1699964378397,
    "status": "running",
    "sitemapPageCount": 12,
    "progress": {
        "scraped": 6,
        "discarded": 0,
        "failed": 0
    },
    "costCents": 0,
    "webhookFails": []
}

Crawler jobs may take several minutes to complete. Use this endpoint to check the status of a job, and fetch the results from the Get job data endpoint when the job is complete.
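
If you are polling from a shell, a minimal sketch along the lines of the curl example above might look like this. It assumes the jq CLI is installed and that the TOKEN and JOB_ID environment variables hold your auth token and job ID (both names are just placeholders); the 10-second interval is arbitrary.

while true; do
  STATUS=$(curl --silent --request GET \
    --url "https://api.usescraper.com/crawler/jobs/$JOB_ID" \
    --header "Authorization: Bearer $TOKEN" \
    | jq -r '.status')
  echo "Job status: $STATUS"
  case "$STATUS" in
    # succeeded, failed, and cancelled are the terminal states listed in the Response section
    succeeded|failed|cancelled) break ;;
  esac
  sleep 10
done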

Authorizations

Authorization (string, header, required)
    Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Path Parameters

id
string
required

Job ID

Response

200 application/json
Details of the job.

id (string, required)
org (string, required)
urls (string[], required)
output_format (enum<string>, required)
    Available options: text, html, markdown
output_expiry (integer, required)
    Required range: x > 0
min_length (number, required)
use_browser (boolean, required)
createdAt (number, required)
    UNIX timestamp
status (enum<string>, required)
    Available options: starting, running, succeeded, failed, cancelled
notices (object[], required)
sitemapPageCount (number, required)
exclude_globs (string[])
exclude_elements (string)
webhook_url (string)
page_limit (number)
    Required range: x > 0
force_crawling_mode (enum<string>)
    Available options: link, sitemap
block_resources (boolean, default: true)
include_linked_files (boolean, default: false)
finishedAt (number)
    UNIX timestamp
costMillicents (number)
costCents (number)
progress (object)
webhookFails (any)
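
As a quick way to inspect just a few of these fields from the command line, you can pipe the response through jq (assuming jq is installed; replace {id} and <token> as in the request example above, and the fields selected here are only an example):

curl --silent --request GET \
  --url https://api.usescraper.com/crawler/jobs/{id} \
  --header 'Authorization: Bearer <token>' \
  | jq '{status, sitemapPageCount, progress, costCents}'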