For some sites, we would like to scrape many pages that all share the same layout and extract information from each of them using the same selectors/config/etc.
Currently, we'd have to either replicate the Scraper for each unique URL, update the URL in each copy, and run all of the Scrapers, or set the Scraper URL, run it, wait for it to finish, update the URL, and repeat. This becomes particularly problematic if we want to scrape these pages regularly and the site later changes, or if we want to extract additional information from the pages, as we'd then have to either update every Scraper or re-duplicate from a new "base" for every URL we want to scrape.
It would be helpful if we could create the Scraper once and then provide the specific page URL at run time (via the UI and/or API).
Alternatively, if we could define the selectors/config/etc in a "parent Scraper" and then define only the URL in each "child Scraper", that would also work.
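To illustrate the parent/child idea, here is a minimal sketch in Python. The config shape (a dict with `selectors` and `url` keys) is a hypothetical example, not the tool's actual schema; the point is only that the shared settings are defined once and each child contributes nothing but its URL.

```python
# Hypothetical config shape: a "parent" holds everything shared;
# a "child" is the parent plus one page-specific URL.

def make_child(parent: dict, url: str) -> dict:
    """Build a run-ready Scraper config by combining the parent's
    shared settings with a page-specific URL."""
    return {**parent, "url": url}

parent = {
    "selectors": {"title": "h1.product-title", "price": "span.price"},
    # any other shared config (schedule, auth, etc.) would live here, defined once
}

# One child per page; changing `parent` changes the effective config for all of them.
runs = [make_child(parent, u) for u in (
    "https://example.com/item/1",
    "https://example.com/item/2",
)]
```

With this shape, updating a selector in `parent` (e.g. after a site redesign) automatically applies to every child on its next run, which avoids the update-every-Scraper problem described above.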