
16-bit integer support #195

Open
bojeckkk opened this issue Dec 22, 2022 · 3 comments

@bojeckkk

Is it possible to introduce 16-bit signed integer support to zfp? If so, how hard would it be, and where should one start?

@lindstro
Member

I see four options:

  1. Use the existing support for 32-bit integers together with the zfp_promote_int16_to_int32() and zfp_demote_int32_to_int16() utility functions. This, however, requires making copies of the data. If you're OK with this approach, be sure to use these conversion functions rather than simply casting your data. See this FAQ.
  2. Expand the zfp API with functions that perform these promotions and demotions automatically. This would be fairly straightforward, aside from the numerous new tests that would have to be written.
  3. Add a whole new 16-bit pipeline to the zfp library, which would additionally allow support for 16-bit floating point. While this effort could exploit significant code reuse, there is still a lot of code that has to be written, not to mention hundreds of new tests. I think the primary motivation for this would be to support FP16 rather than int16. On the other hand, we have begun to realize that from a rate-distortion perspective, it's usually better to compress FP32 data by first converting it to FP64 and compressing it as such, as the reduced integer precision used by the 32-bit floating-point pipeline places a hard limit on accuracy. This limitation would only be exacerbated for FP16.
  4. Use a unified implementation with a 64-bit pipeline, where all input formats (whether integer, floating point, or other) are first converted to a common uncompressed "zfp interchange format." We envision this as our long-term strategy, but it will likely be a year or two before we have a chance to work on this.

@xnorai

xnorai commented Oct 12, 2023

And FP16 support would be pretty cool too, especially if bfloat16 were also supported.

@lindstro
Member

See the third option above. Currently, FP16 and bfloat16 can be handled similarly to zfp_promote_int16_to_int32, but with the user performing the conversions (e.g., to/from float). We do eventually want to add full support, but as mentioned above, we'd need to add hundreds of tests and deal with the difficulties of portably converting to/from these types, which typically lack native support: rounding, subnormals, NaNs, etc. This is potentially a lot of work, especially when you consider the multiple back-ends (serial, OpenMP, CUDA, HIP, SYCL), language bindings (C, C++, Python, Fortran), multiple array dimensionalities (1D-4D), the conversion functions themselves, the actual (de)compression pipeline, plus documentation, tests, and examples for the Cartesian product of all these variants. This is a huge undertaking that will be simplified considerably when we transition to a single (de)compression pipeline and to a common implementation across back-ends. Unfortunately, that work will itself take some time.
