If you build enough web scrapers, you’ll eventually hit a situation where you have hundreds or thousands of URLs to churn through, and the scraper breaks after hour 4 on page 1,387 because a request bailed for some reason and you didn’t catch the error. It’s a bummer, especially since it usually crushes that wonderful feeling of watching a robot do something repetitive on your behalf. Sigh. I’ve found that using recursive JavaScript promises to fetch data makes adding retry behavior a breeze.
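
For concreteness, here’s a minimal sketch of what that recursive-promise retry pattern can look like. The function name `fetchWithRetry`, the retry count, and the delay are illustrative choices, not anything prescribed by a particular library:

```javascript
// A minimal sketch of retrying a request with a recursive promise.
// fetchWithRetry, the retry count, and the delay are illustrative.
function fetchWithRetry(url, retriesLeft = 3, delayMs = 1000) {
  return fetch(url)
    .then((response) => {
      // Treat non-2xx statuses as failures so they trigger a retry too.
      if (!response.ok) {
        throw new Error(`Request failed with status ${response.status}`);
      }
      return response.text();
    })
    .catch((err) => {
      // Out of retries: give up and let the caller handle the error.
      if (retriesLeft <= 0) {
        return Promise.reject(err);
      }
      // Wait a bit, then recurse with one fewer retry remaining.
      return new Promise((resolve) => setTimeout(resolve, delayMs)).then(() =>
        fetchWithRetry(url, retriesLeft - 1, delayMs)
      );
    });
}

// Usage: one flaky request no longer kills the whole run.
fetchWithRetry("https://example.com/page/1387")
  .then((html) => console.log(`Got ${html.length} bytes`))
  .catch((err) => console.error("Gave up after retries:", err));
```

Because the recursive call returns a promise from inside `.catch()`, the caller only ever sees a single promise that either resolves with the eventual response or rejects after the last attempt, which is what keeps the retry logic so compact.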