Issue
How do you decide which precision works best for your inference model? Both BF16 and FP16 take two bytes, but they allocate the bits differently between exponent and fraction.
The ranges are different, but I am trying to understand why one would choose one over the other.
Thank you
|--------+------+----------+----------|
| Format | Bits | Exponent | Fraction |
|--------+------+----------+----------|
| FP32 | 32 | 8 | 23 |
| FP16 | 16 | 5 | 10 |
| BF16 | 16 | 8 | 7 |
|--------+------+----------+----------|
Range
bfloat16: ~1.18e-38 … ~3.40e38, with about 3 significant decimal digits.
float16: ~5.96e-8 (smallest subnormal; smallest normal is ~6.10e-5) … 65504, with about 4 significant decimal digits.
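You can verify these ranges yourself. A minimal sketch using PyTorch (assuming torch is available; any recent version works): torch.finfo reports each dtype's limits, and a simple cast shows how the two formats fail differently.

    import torch

    # Numeric limits of each format; values should match the ranges quoted above.
    for dtype in (torch.float32, torch.float16, torch.bfloat16):
        info = torch.finfo(dtype)
        print(f"{str(dtype):16s} max={info.max:.3e}  min normal={info.tiny:.3e}  eps={info.eps:.3e}")

    # Precision vs. range: bfloat16 keeps the magnitude but drops digits,
    # float16 keeps more digits but overflows past ~65504.
    x = torch.tensor([1.2345678, 1e5])
    print(x.to(torch.bfloat16))   # large value survives, fewer significant digits
    print(x.to(torch.float16))    # 1e5 overflows to inf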
Solution
bfloat16 is generally easier to use, because it works as a drop-in replacement for float32. If your code doesn't create nan/inf values or turn a non-zero into a 0 with float32, then, roughly speaking, it shouldn't do so with bfloat16 either. So if your hardware supports it, I'd pick that (see the sketch below).
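For inference, the drop-in cast can be as simple as the following sketch. The Linear model and input shape are placeholders for illustration, not part of the original question; this assumes hardware with bfloat16 support (recent CPUs, Ampere-or-newer GPUs, TPUs).

    import torch

    # Placeholder model; stands in for whatever network you are serving.
    model = torch.nn.Linear(128, 10).eval()

    # Drop-in replacement: cast the weights once and feed bfloat16 inputs.
    model = model.to(torch.bfloat16)

    with torch.inference_mode():
        x = torch.randn(4, 128, dtype=torch.bfloat16)
        out = model(x)
    print(out.dtype)  # torch.bfloat16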
Check out AMP (automatic mixed precision) if you choose float16; a sketch follows.
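With AMP, torch.autocast runs matmul-heavy ops in float16 while keeping numerically sensitive ops in float32, which avoids most of float16's overflow/underflow pitfalls. A rough sketch (the model is again a placeholder, and this variant assumes a CUDA device):

    import torch

    model = torch.nn.Linear(128, 10).cuda().eval()
    x = torch.randn(4, 128, device="cuda")

    # autocast chooses float16 for matmuls and leaves reductions in float32.
    with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
        out = model(x)
    print(out.dtype)  # torch.float16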
Answered By - bobcat