Replies: 2 comments
- @basedygt I guess the above workaround is the way to go, as
- This discussion was closed automatically due to inactivity. Feel free to reopen it or start a new one if still relevant.
I know we can use the `-ct` argument to specify the maximum crawl time, but this option sets the crawl time for the entire list of URLs. I want to know how to set `-ct` for each URL instead of for all URLs, since I can't find any built-in option in the tool for this, and no workaround like the command below worked:

The reason this would be a handy feature is that many websites' root domains have endless content, and the scraper then fails to scrape their subdomains automatically because of this issue.
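Since the tool apparently has no per-URL option, one common workaround is to drive it from a shell loop so that `-ct` applies to each URL separately rather than to the whole list. Below is a minimal sketch; `crawler` and its `-u` flag are placeholders for the actual binary and its single-URL flag, and the loop only echoes the commands it would run:

```shell
#!/bin/sh
# Workaround sketch: invoke the crawler once per URL so that -ct limits
# the crawl time of each URL individually instead of the whole list.
# "crawler" and "-u" are placeholders for the real binary and its flags.
run_per_url() {
  while IFS= read -r url; do
    # Replace "echo" with the real binary to actually run the crawl.
    echo crawler -u "$url" -ct 60s
  done
}

printf '%s\n' "https://a.example" "https://b.example" | run_per_url
```

One command per line on stdin keeps this composable: the URL list can come from a file (`run_per_url < urls.txt`) or from another tool's output via a pipe.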