1. It is developed by Meta.
2. The first release was Oct 2025 (so it is not yet reliable for use in stable distributions).
3. See the warnings on their GitHub: they develop this stuff for their own usage (Facebook) and don't guarantee stability/compatibility, so it is not usable for stable Linux distributions.
4. And many more...
And sometimes it helps to read the very first chapter of the project description on GitHub:
"It is designed for engineers that deal with large quantities of specialized datasets (like AI workloads for example) and require high speed for their processing pipelines."
Or the official documentation:
"much stronger ratios than generic compressors, at the speeds required by datacenter workloads."
Due to the small amount of data in the rpm metadata, there would be no real-life advantage to using OpenZL. It has advantages for Meta's use cases like data centers, LLMs, and other "big" stuff.
Do you know what the compression/decompression speed of zstd is? It is about 220 MB/s compression and 850 MB/s decompression. Now tell us what the advantage of even more speed would be for a few MB of rpm metadata (and additionally have a look at the typical read/write speed of basic consumer HDDs/SSDs...). The biggest metadata file is the one for the core updates repo, currently about 400 MB.
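To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The 850 MB/s figure is the zstd decompression speed quoted above; the 550 MB/s sequential read speed is an assumed typical value for a basic SATA SSD, not a measured one:

# Rough timing estimate for the biggest metadata file
size_mb = 400             # core updates repo metadata, per the figure above
zstd_decomp_mb_s = 850    # zstd decompression speed quoted above
ssd_read_mb_s = 550       # assumption: basic SATA SSD sequential read

print(f"decompression: {size_mb / zstd_decomp_mb_s:.2f} s")  # ~0.47 s
print(f"disk read:     {size_mb / ssd_read_mb_s:.2f} s")     # ~0.73 s

Even in the worst case, decompression finishes in under half a second and the disk read alone takes longer; for metadata files of a few MB the difference would be milliseconds.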
thealio wrote: I remember that Arch had great benefits when they switched to Zstandard (zstd)
You may want to read the Mageia 9 release notes.
https://wiki.mageia.org/en/Mageia_9_Rel ... es#Stage_1
https://wiki.mageia.org/en/Mageia_9_Rel ... _and_urpmi