In the primordial days of Marginalia Search, it used a dynamic approach to crawling the Internet. It ran a number of crawler threads, 32 or 64 or some such, that fetched jobs from a director service, which grabbed them straight out of the URL database. These jobs were batches of roughly 100 documents that needed to be crawled. Crawling was not planned ahead of time, but decided on the fly based on a combination of how much of a website had already been visited and the quality score of that website.
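
To make the shape of this concrete, here's a minimal sketch of that loop: a fixed pool of crawler threads repeatedly asking a director for batches of about 100 URLs until there's nothing left. All the names here (Director, fetchBatch, DemoDirector, crawl) are made up for illustration and don't reflect Marginalia's actual classes.

```java
import java.util.List;
import java.util.concurrent.*;

public class DynamicCrawler {

    /** Hypothetical stand-in for the director service: hands out batches
     *  of ~100 URLs, selected on the fly from the URL database. */
    interface Director {
        List<String> fetchBatch(); // an empty list means nothing left to crawl
    }

    public static void main(String[] args) throws InterruptedException {
        Director director = new DemoDirector();

        int threadCount = 32; // "32 or 64 or some such"
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);

        for (int i = 0; i < threadCount; i++) {
            pool.submit(() -> {
                // Each crawler thread loops: ask the director for a job,
                // crawl the documents in it, repeat until the well runs dry.
                for (List<String> batch = director.fetchBatch();
                     !batch.isEmpty();
                     batch = director.fetchBatch()) {
                    for (String url : batch) {
                        crawl(url);
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    static void crawl(String url) {
        // Fetching, parsing and storing would happen here.
        System.out.println(Thread.currentThread().getName() + " crawled " + url);
    }

    /** Toy director serving a fixed set of fake URLs in batches of 100;
     *  the real thing would query the URL database instead. */
    static class DemoDirector implements Director {
        private final BlockingQueue<String> urls = new LinkedBlockingQueue<>();

        DemoDirector() {
            for (int i = 0; i < 1_000; i++) {
                urls.add("https://www.example.com/doc/" + i);
            }
        }

        @Override
        public List<String> fetchBatch() {
            var batch = new java.util.ArrayList<String>(100);
            urls.drainTo(batch, 100); // batches of ~100 documents
            return batch;
        }
    }
}
```

The essential property of this design is that no thread ever knows more than one batch ahead of time; which URLs get crawled next is decided by the director at the moment a thread asks.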