Compress the data in the znode to bypass the 1MB znode data limit

Brouberol requested to merge gzip into main

ZooKeeper has a 1MB data limit per znode, and the data we're writing is highly compressible. Since topicmappr can already peek at the data and decompress it if necessary (https://github.com/DataDog/kafka-kit/blob/master/kafkazk/zookeeper.go#L446), we can safely gzip the broker and partition data before writing it.
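For illustration, here is a minimal Go sketch of the compress-on-write / detect-and-decompress-on-read pattern this relies on. The function names (`compress`, `maybeDecompress`) are hypothetical and not the actual kafka-kit code; the real decompression logic lives at the zookeeper.go link above. The idea is simply that a gzip stream always begins with the magic bytes 0x1f 0x8b, so a reader can tell compressed payloads apart from plain ones.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// Any gzip stream starts with these two magic bytes.
var gzipMagic = []byte{0x1f, 0x8b}

// compress gzips the payload before it is written to a znode.
func compress(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// maybeDecompress returns the payload unchanged unless it starts with
// the gzip magic bytes, in which case it is decompressed first.
func maybeDecompress(data []byte) ([]byte, error) {
	if !bytes.HasPrefix(data, gzipMagic) {
		return data, nil
	}
	r, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

func main() {
	// Hypothetical broker metadata payload.
	raw := []byte(`{"brokers": {"1001": {"rack": "us-east-1a"}}}`)
	packed, _ := compress(raw)
	unpacked, _ := maybeDecompress(packed)
	fmt.Printf("round-trip ok: %v (%d -> %d bytes)\n",
		bytes.Equal(raw, unpacked), len(raw), len(packed))
}
```

Because the reader falls back to returning the raw bytes when the magic prefix is absent, compressed and uncompressed znodes can coexist during a rollout.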
