I somehow ended up at a video describing how our eye compresses the images we see down 125-to-1 before sending them to the brain. It's quite a simple process (excuse my poor terminology): each ganglion cell connects with a group of photoreceptors in a receptive field and produces a single output value for that information. Receptors in the centre of the receptive field increase the output; the outer receptors reduce it.
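That centre-surround behaviour can be sketched as a weighted sum over a patch of pixels, with positive weights in the middle and negative weights around it (a difference-of-Gaussians weighting). This is my own illustrative sketch, not the post's code; the function name and sigma values are assumptions.

```python
import numpy as np

def field_output(patch, sigma_c=1.0, sigma_s=3.0):
    """Centre-surround response: centre pixels add, surround pixels subtract.

    A hypothetical difference-of-Gaussians weighting; sigma values are
    illustrative, not taken from the original project.
    """
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    d2 = (y - cy) ** 2 + (x - cx) ** 2
    centre = np.exp(-d2 / (2 * sigma_c ** 2))
    surround = np.exp(-d2 / (2 * sigma_s ** 2))
    # Normalise each lobe so a uniform patch gives zero output:
    # the field responds to local contrast, not absolute brightness.
    weights = centre / centre.sum() - surround / surround.sum()
    return float((patch * weights).sum())

flat = np.full((9, 9), 0.5)
spot = flat.copy()
spot[4, 4] = 1.0
print(abs(field_output(flat)) < 1e-6)  # True: uniform patch -> no response
print(field_output(spot) > 0)          # True: a bright spot excites the centre
```

The zero response to uniform patches is exactly why the compressed data only contains differences, as noted below.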
I’ve attempted to build an image compression application that works the same way: a series of receptive fields is randomly generated, and each field is processed to determine its output. The picture below shows the overlap and value of these fields.
A decompressor then takes this data and enforces the constraints of each receptive field, adjusting pixel values until each field's output matches the stored value, reproducing something resembling the input image. This is shown below, with a similarly sized JPEG-compressed image to the right (both ~10 kB, a compression ratio of ~20-to-1).
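One way to picture that constraint-enforcing loop: start from a flat grey image and, for each field, nudge its centre pixels up and its surround pixels down (or vice versa) until the field's output matches the stored value. This is a sketch under my own assumptions about the field geometry, not the actual decompressor.

```python
import numpy as np

def decompress(fields, shape, iters=100, step=0.5):
    """Iteratively enforce each field's centre-minus-surround constraint.

    fields: list of (cy, cx, r, target) tuples, assumed reproducible
    from the compressor's random seed. step controls how aggressively
    each field's error is corrected per pass.
    """
    h, w = shape
    img = np.full(shape, 0.5)  # start from flat grey
    y, x = np.mgrid[0:h, 0:w]
    for _ in range(iters):
        for cy, cx, r, target in fields:
            d2 = (y - cy) ** 2 + (x - cx) ** 2
            centre = d2 <= (r / 2) ** 2
            surround = (d2 <= r ** 2) & ~centre
            current = img[centre].mean() - img[surround].mean()
            err = target - current
            img[centre] += step * err / 2    # push the centre toward target...
            img[surround] -= step * err / 2  # ...and the surround away
        np.clip(img, 0.0, 1.0, out=img)
    return img

# Two hypothetical overlapping fields on a 16x16 canvas.
fields = [(8, 8, 4, 0.2), (8, 8, 6, -0.1)]
out = decompress(fields, (16, 16))
print(out.shape)  # (16, 16)
```

Because every field touches many pixels and fields overlap, each pass partially undoes earlier corrections, which is why this kind of iterative relaxation needs so many passes to settle.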
A lot of the noise could likely be reduced with further tweaking. A few unique features of the way this method works:
- As the compressed data only contains differences, the overall colour balance is lost.
- As the compression enforces constraints, the image can be variable detail (the centre of the above image is intentionally higher detail than the edges), and any shape.
- As I’m using an iterative approach for decompression, it takes a LOT of processing power.
EDIT: I wanted to leave it at that, but I ended up doing some more tweaking. It's getting very difficult to push it any further, but these fixes give it slightly better colour and higher contrast. It really needs a pass to smooth areas of the image where appropriate, but I can't seem to figure out a reliable way of doing that (so the subtle Gaussian blur will have to do):
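For reference, a subtle Gaussian blur like the one mentioned can be done with a separable 1-D kernel applied along each axis; this is a generic sketch (the sigma is an assumed value, not the one used in the post).

```python
import numpy as np

def gaussian_blur(img, sigma=0.8):
    """Separable Gaussian blur: convolve rows, then columns.

    Edge padding keeps the output the same size as the input. A small
    sigma gives the 'subtle' smoothing described above.
    """
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()  # normalise so brightness is preserved

    def blur_1d(line):
        padded = np.pad(line, radius, mode='edge')
        return np.convolve(padded, kernel, mode='valid')

    out = np.apply_along_axis(blur_1d, 0, img)  # blur columns
    out = np.apply_along_axis(blur_1d, 1, out)  # blur rows
    return out

rng = np.random.default_rng(0)
noisy = rng.uniform(0, 1, (64, 64))
smoothed = gaussian_blur(noisy)
print(smoothed.std() < noisy.std())  # True: blurring reduces pixel noise
```

A blur like this suppresses the decompressor's speckle noise but also softens real edges, which is presumably why a selective smoothing pass would be preferable.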
Edit: Download Source