Compress the data in the znode to bypass the 1MB znode data limit
ZooKeeper has a 1MB data limit per znode, and the data we're writing is highly compressible. Since topicmappr
can already peek at the data and decompress it if necessary (https://github.com/DataDog/kafka-kit/blob/master/kafkazk/zookeeper.go#L446), we can safely gzip the broker and partition data.
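A minimal sketch of the idea, assuming we gzip on write and detect the gzip magic bytes (0x1f, 0x8b) on read; the function names here are hypothetical and not topicmappr's actual API:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
)

// compressZnodeData gzips the payload before writing it to the znode,
// keeping large broker/partition metadata under ZooKeeper's 1MB limit.
func compressZnodeData(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(data); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// maybeDecompressZnodeData checks for the gzip magic bytes and, if present,
// returns the decompressed payload; otherwise it returns the data unchanged,
// so uncompressed znodes written by older versions still read correctly.
func maybeDecompressZnodeData(data []byte) ([]byte, error) {
	if len(data) < 2 || data[0] != 0x1f || data[1] != 0x8b {
		return data, nil // not gzipped; pass through as-is
	}
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```

Because the read path sniffs the magic bytes rather than assuming a format, the change stays backward compatible with existing uncompressed znodes.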