Here’s the code for my infinite nonsense crawler trap:

process.py: Text preprocessor
babble.c: Garbage server

What follows is an explanation of how to set it up.

Training the Markov chain:

First, you’ll want to find three long-ish sources of text, between 1,000 and 50,000 words each. I used ebooks from Project Gutenberg, but long blog posts or Wikipedia articles will also work. Save the text in files named “book1.txt”, “book2.txt”, and “book3.txt”. Remove any page numbers, heading...