My adventures in random data compression
For the past 3 months I’ve been working on designing an algorithm to compress random data. Read on to learn about the extreme difficulty of this problem and why my work eventually led me to build a small supercomputer to assist in finding a solution.
Why compress random data
Existing compression algorithms work by either finding simple patterns in data (lossless compression) or reducing the quality of a message (lossy compression). Neither can handle random data, which by definition has no patterns to exploit. What this means in practice is that, in order to squeeze more content down the tubes (the Internet is a series of tubes), companies like YouTube and Netflix reduce the quality of their content to compensate.
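To see the first claim in action, here’s a minimal Python sketch (using the standard library’s zlib, a general-purpose lossless compressor, not anything specific to my algorithm) that feeds a repetitive buffer and an equal-sized random buffer through the same compressor:

```python
import os
import zlib

# Compress a highly patterned buffer and an equal-sized random buffer.
repetitive = b"abc" * 100_000              # 300 KB of an obvious pattern
random_data = os.urandom(len(repetitive))  # 300 KB of random bytes

for name, data in [("repetitive", repetitive), ("random", random_data)]:
    packed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes "
          f"(ratio {len(packed) / len(data):.3f})")
```

On a typical run the repetitive buffer shrinks to a fraction of a percent of its original size, while the random buffer comes back slightly larger than it went in: the compressor finds nothing to exploit, so all that’s left is its own overhead.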
You can only lower the quality so many times before the content starts to look too shitty for your customers to enjoy. The obvious downside is that this approach involves loss: you’re not enjoying the message as it originally looked. It’s been warped to fit your...