
πŸ”Ž Search Endpoint

Submit a search query and retrieve the top 10 results from a supported search engine. This endpoint is useful for discovery, automation workflows, SEO tools, and more.

🧰 Using with SDKs

Prefer code over curl? Crawlio offers official SDKs for seamless integration with your stack:

πŸ“– View full usage docs: πŸ‘‰ Node.js SDK Docs πŸ‘‰ Python SDK Docs

We are working on extensive documentation for our SDKs. Thanks for your patience!

Cost

Name   | Cost | Type
Scrape | 1    | Scrape

πŸ”Ž POST /search

πŸ“₯ Request

Endpoint:

POST https://crawlio.xyz/api/results

Headers:

Authorization: Bearer YOUR_API_KEY  
Content-Type: application/json

Request Body Parameters:

Field | Type   | Required | Description
query | string | βœ… Yes   | The search string to look up.

🧾 Example Request

POST /results
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY

{
  "query": "best web scraping tools 2025"
}
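
If you prefer to send this request from Python rather than raw HTTP or the SDKs, a minimal sketch using the requests library might look like the following. The endpoint URL and headers are taken from above; YOUR_API_KEY is a placeholder for your own key.

import requests

API_KEY = "YOUR_API_KEY"  # replace with your Crawlio API key

# Send the same search query shown in the example request above
response = requests.post(
    "https://crawlio.xyz/api/results",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={"query": "best web scraping tools 2025"},
)
response.raise_for_status()
data = response.json()  # parsed response body, used in the next example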

πŸ“€ Response

On success, Crawlio returns a list of the top 10 search results.

Field   | Type             | Description
results | array of objects | Each result includes a title, url, and description.

πŸ“¦ Example Response

{
  "results": [
    {
      "title": "Top 10 Web Scraping Tools in 2025",
      "url": "https://example.com/best-scrapers",
      "description": "Here’s a breakdown of the best scraping tools this year..."
    },
    {
      "title": "Crawlio vs Competitors: A Detailed Comparison",
      "url": "https://example.com/crawlio-vs-others",
      "description": "We compare Crawlio with other tools based on features and pricing..."
    }
    // ... up to 10 results
  ]
}
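
Continuing the Python sketch from the request section, the results array can be iterated directly. The field names below follow the example response shown above.

# "data" is the parsed JSON body from the earlier request
for result in data["results"]:
    print(result["title"])
    print(result["url"])
    print(result["description"])
    print("-" * 40)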

Use this endpoint to automate keyword research, content discovery, or feed URLs into other Crawlio scraping workflows.


What and Why?

The Search feature is built for discovery and automation. It enables you to fetch the top 10 search results for any query, just like you'd get from a major search engine β€” but accessible via API for use in your applications and workflows.

This feature is ideal when you need to:

  • πŸ” Perform keyword research or competitor monitoring
  • πŸ“° Find articles, blog posts, or product listings related to a topic
  • πŸ€– Automate AI and scraping pipelines by identifying URLs to process
  • πŸ“Š Feed SEO tools or content recommendation engines

You can use this endpoint to quickly build a list of relevant pages that can then be scraped or analyzed further.

Key Advantages:

  • πŸ“ˆ Automate discovery of fresh content without manual searching
  • πŸ”— Seamlessly feed results into Crawlio scraping jobs
  • ⏱️ Save time by skipping the browser and getting structured results instantly

Use the /search endpoint in combination with /scrape, /batch-scrape, or /crawl to create full-circle data workflows β€” from discovery to extraction.
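
As a rough illustration of such a workflow, the sketch below feeds each search result URL into a follow-up scrape request. The /scrape URL and its request body ({"url": ...}) are assumptions for illustration only; check the Scrape endpoint documentation for the exact path and parameters.

import requests

API_KEY = "YOUR_API_KEY"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Step 1: discovery via the search endpoint documented above
search = requests.post(
    "https://crawlio.xyz/api/results",
    headers=HEADERS,
    json={"query": "best web scraping tools 2025"},
)
search.raise_for_status()

# Step 2: extraction, sending each discovered URL to a scrape endpoint
# (endpoint path and payload shape are assumed; see the Scrape docs)
for result in search.json()["results"]:
    scrape = requests.post(
        "https://crawlio.xyz/api/scrape",
        headers=HEADERS,
        json={"url": result["url"]},
    )
    scrape.raise_for_status()
    print("Scraped:", result["url"])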
