This is a perfectly normal result. A lossless data compression algorithm *attempts* to reduce the required storage space for a given data set, but that goal is *not necessarily achievable* ^{[1]}. Sometimes a data block is not compressible with a particular algorithm. When this happens, the XISF support module ignores compression parameters and just stores the uncompressed block.
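
A minimal sketch of that fallback logic in Python, using the standard zlib module (the `store_block` helper and its size test are illustrative, not the actual XISF module code):

```python
import os
import zlib

def store_block(data: bytes) -> tuple[bytes, bool]:
    """Try to compress a block; fall back to storing it raw when the
    compressed form is not smaller. (Illustrative sketch only; not the
    actual XISF module code.)"""
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return compressed, True   # compressed block is worth storing
    return data, False            # compression did not help: store raw

# High-entropy input (random bytes) typically takes the raw path.
blob, was_compressed = store_block(os.urandom(4096))
print(was_compressed, len(blob))
```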

The Zlib codec implements a lossless compression algorithm based on dictionary matching followed by entropy coding (the DEFLATE algorithm: LZ77 plus Huffman coding). Zlib is extremely efficient and rarely fails to compress a data block. LZ4, on the other hand, works by detecting data redundancies directly on the data stream. This is much faster than entropy coding, but also less efficient in terms of compression ratio. With high-entropy data (high noise content, for example), plain LZ4 compression can easily fail to compress a block. LZ4-HC is the "high compression" variant of LZ4; it usually achieves somewhat smaller compression ratios than Zlib, but it is faster, especially for decompression.
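
A quick way to see these trade-offs is to compress a redundant block and a noisy one with both codecs. This sketch assumes the third-party Python `lz4` package alongside the standard `zlib` module; on the noisy block, plain LZ4 typically produces output no smaller than the input:

```python
import os
import zlib
import lz4.block  # third-party 'lz4' package, assumed installed

redundant = b"astronomical image data " * 2000  # highly compressible
noisy = os.urandom(len(redundant))              # high entropy: barely compressible

for name, data in (("redundant", redundant), ("noisy", noisy)):
    z = zlib.compress(data)
    l4 = lz4.block.compress(data)
    hc = lz4.block.compress(data, mode="high_compression")
    print(f"{name}: raw={len(data)} zlib={len(z)} "
          f"lz4={len(l4)} lz4hc={len(hc)}")
```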

The byte shuffling algorithm increases data locality: data items are redistributed so that similar byte values tend to be placed close together. This greatly improves compressibility, so all compression codecs, including LZ4, can usually work much better with byte-shuffled data.
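
A plain-Python sketch of the idea for 16-bit samples (the `byte_shuffle` helper is illustrative; real implementations typically shuffle with vectorized code):

```python
import struct
import zlib

def byte_shuffle(data: bytes, item_size: int) -> bytes:
    """Group bytes by their position within each item: all first bytes,
    then all second bytes, and so on (a sketch of the shuffling idea,
    not the actual XISF implementation)."""
    return b"".join(data[k::item_size] for k in range(item_size))

# A smooth ramp of 16-bit samples: after shuffling, the high-order bytes
# form long runs and the low-order bytes a short repeating cycle, both of
# which the codec exploits far better than the interleaved original.
samples = struct.pack("<4096H", *range(4096))
print(len(zlib.compress(samples)))                    # interleaved bytes
print(len(zlib.compress(byte_shuffle(samples, 2))))   # shuffled bytes
```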

^{[1]} *Pedantic mode on -* The proof is by contradiction (*reductio ad absurdum*). If a data compression algorithm could guarantee that any data set can be compressed, even by reducing its size by a single bit, then any data set could be compressed *ad infinitum*, that is, by repeatedly compressing the result of the previous compression, and would eventually require zero bytes of storage space. This is absurd: by the pigeonhole principle, there are 2^n distinct blocks of n bits but fewer than 2^n shorter blocks, so no lossless (invertible) algorithm can shrink them all *- pedantic mode off*.
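
For the empirically minded, iterated compression makes the same point in a few lines of Python (the payload is arbitrary; only the trend in sizes matters):

```python
import zlib

# Re-compress a block repeatedly: after the first pass the output is
# high-entropy and stops shrinking, exactly as the counting argument
# predicts. (Empirical illustration only.)
data = b"repetitive block " * 1024
for step in range(1, 6):
    data = zlib.compress(data)
    print(f"pass {step}: {len(data)} bytes")
```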