🔍 Search Endpoint
Submit a search query and retrieve the top 10 results from a supported search engine. This endpoint is useful for discovery, automation workflows, SEO tools, and more.
🧰 Using with SDKs
Prefer code over curl? Crawlio offers official SDKs for seamless integration with your stack:
- Node.js SDK (npm) – Perfect for backend automation, agents, and JS projects.
- Python SDK (PyPI) – Ideal for data science, AI/ML workflows, and scripting.
View full usage docs: Node.js SDK Docs · Python SDK Docs
We are working on more extensive documentation for our SDKs. Thanks for your patience!
Cost
| Name | Cost | Type |
|---|---|---|
| Scrape | 1 | Scrape |
🔍 POST /search
📥 Request
Endpoint: `POST /search`
Headers:
Request Body Parameters:
| Field | Type | Required | Description |
|---|---|---|---|
| query | string | ✅ Yes | The search string to look up. |
🧾 Example Request
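Below is a minimal sketch using Python's `requests` library. Only the `POST /search` path and the required `query` field come from this page; the base URL, the Bearer-token header, and the example query are assumptions for illustration.

```python
import requests

# Only POST /search and the required "query" field are documented here.
# The base URL and Bearer-token header are assumptions for illustration.
BASE_URL = "https://api.crawlio.dev"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    f"{BASE_URL}/search",
    headers={
        "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        "Content-Type": "application/json",
    },
    json={"query": "best headless browsers"},
)
response.raise_for_status()
print(response.json())
```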
📤 Response
On success, Crawlio returns a list of the top 10 search results.
| Field | Type | Description |
|---|---|---|
| results | array of objects | Each result includes a title, URL, and snippet. |
📦 Example Response
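An illustrative payload shape based on the table above. The top-level `results` array is documented here; the exact field names inside each result object (`title`, `url`, `snippet`) are assumptions drawn from the description.

```json
{
  "results": [
    {
      "title": "Example result title",
      "url": "https://example.com/some-page",
      "snippet": "A short excerpt from the matching page..."
    }
  ]
}
```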
Use this endpoint to automate keyword research and content discovery, or to feed URLs into other Crawlio scraping workflows.
What and Why?
The Search feature is built for discovery and automation. It enables you to fetch the top 10 search results for any query, just like you'd get from a major search engine, but via an API you can call from your applications and workflows.
This feature is ideal when you need to:
- 🔍 Perform keyword research or competitor monitoring
- 📰 Find articles, blog posts, or product listings related to a topic
- 🤖 Automate AI and scraping pipelines by identifying URLs to process
- 📈 Feed SEO tools or content recommendation engines
You can use this endpoint to quickly build a list of relevant pages that can then be scraped or analyzed further.
Key Advantages:
- 🔍 Automate discovery of fresh content without manual searching
- 🔁 Seamlessly feed results into Crawlio scraping jobs
- ⏱️ Save time by skipping the browser and getting structured results instantly
Use the `/search` endpoint in combination with `/scrape`, `/batch-scrape`, or `/crawl` to create full-circle data workflows, from discovery to extraction, as in the sketch below.
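Here is one way that discovery-to-extraction loop could look in Python, assuming the request and response shapes from the examples above; the `{"url": ...}` body for `/scrape` is an assumption for illustration.

```python
import requests

BASE_URL = "https://api.crawlio.dev"  # assumed base URL for illustration
HEADERS = {
    "Authorization": "Bearer YOUR_API_KEY",  # assumed auth scheme
    "Content-Type": "application/json",
}

# 1. Discover: fetch the top 10 results for a query.
search = requests.post(
    f"{BASE_URL}/search",
    headers=HEADERS,
    json={"query": "site reliability engineering best practices"},
)
search.raise_for_status()

# 2. Extract: feed each discovered URL into a scrape job.
#    The {"url": ...} body for /scrape is an assumption for illustration.
for result in search.json().get("results", []):
    scrape = requests.post(
        f"{BASE_URL}/scrape",
        headers=HEADERS,
        json={"url": result["url"]},  # "url" field name assumed from the response table
    )
    scrape.raise_for_status()
    print(f"Queued scrape for {result['url']}")
```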
🌐 Crawl Endpoint
Initiate a full website crawl starting from a given URL. Crawlio will recursively follow links and extract content from each page, subject to the options you provide.
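As a minimal sketch, starting a crawl might look like the following, assuming `POST /crawl` accepts a starting `url` plus options; the option names below are hypothetical placeholders, not documented parameters.

```python
import requests

BASE_URL = "https://api.crawlio.dev"  # assumed base URL for illustration
HEADERS = {
    "Authorization": "Bearer YOUR_API_KEY",  # assumed auth scheme
    "Content-Type": "application/json",
}

# Start a crawl from a seed URL. "limit" and "sameDomain" are
# hypothetical option names, used purely to illustrate the idea
# of constraining a recursive crawl.
job = requests.post(
    f"{BASE_URL}/crawl",
    headers=HEADERS,
    json={
        "url": "https://example.com",
        "limit": 50,         # hypothetical: cap on pages visited
        "sameDomain": True,  # hypothetical: stay on the seed domain
    },
)
job.raise_for_status()
print(job.json())
```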
🪝 Webhooks
Crawlio supports webhooks for notifying your server when important events occur, such as the completion of crawl or scrape jobs. With webhooks, you can automate your workflows, trigger pipelines, or log job outcomes in real time.
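A minimal receiver sketch using Flask, assuming Crawlio POSTs a JSON payload to your configured webhook URL when a job finishes; the payload fields referenced below (`event`, `jobId`) are hypothetical, since the webhook schema isn't documented on this page.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/crawlio-webhook", methods=["POST"])
def crawlio_webhook():
    # The payload shape is an assumption; this page doesn't document
    # the webhook schema. Log whatever arrives and acknowledge quickly
    # so the sender doesn't treat the delivery as failed.
    payload = request.get_json(silent=True) or {}
    print("Crawlio event:", payload.get("event"), payload.get("jobId"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```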