Lossless compression algorithms are almost magical; you can take some data source, squish it down into a smaller space, and then restore it back to its full size later. They let us store and transfer large amounts of data at a fraction of the cost of keeping it uncompressed. The algorithms aren’t really magical, though; they just exploit redundancies in the data, like repeated substrings and some strings appearing much more frequently than others. For example, suppose I need to store a large list of E...