Commit ee98592b authored by David Conrad

Fix overflow when saturating dequantized coefficients clipped to 0

It's possible to encode a large coefficient that becomes 0 after the
clipping in dequant (Abs( dq ) & 0xFFFFFF), e.g. 0x1000000.
After that & 0xFFFFFF, coefficients are saturated to the range
[-(1 << (bitdepth+7)), 1 << (bitdepth+7)).

dav1d implements this saturation via umin(dq - sign, cf_max), then applies
the sign afterwards via xor. However, for dq = 0 and sign = 1, dq - sign
underflows, so this step evaluates to umin(UINT_MAX, cf_max) == cf_max
instead of the expected 0.

So instead, do the unsigned saturation as umin(dq, cf_max + sign),
then apply the sign via (sign ? -dq : dq).
On arm this is the same number of instructions, since cneg exists and is used.
On x86 this requires an additional instruction, but this isn't a
latency-critical path.
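
A minimal scalar C sketch of the two schemes (illustrative names, not
dav1d's actual SIMD code), assuming dq is the unsigned magnitude after the
& 0xFFFFFF mask, sign is 0 or 1, and cf_max is (1 << (bitdepth+7)) - 1:

    #include <stdint.h>
    #include <stdio.h>

    static inline uint32_t umin(uint32_t a, uint32_t b) { return a < b ? a : b; }

    /* Old scheme: saturate before applying the sign, then apply it via xor.
     * dq - sign wraps to UINT32_MAX when dq == 0 and sign == 1. */
    static int32_t saturate_old(uint32_t dq, uint32_t sign, uint32_t cf_max) {
        uint32_t m = umin(dq - sign, cf_max);
        return (int32_t)(m ^ -sign);      /* xor with 0 or 0xFFFFFFFF */
    }

    /* New scheme: widen the unsigned bound by sign, then negate conditionally. */
    static int32_t saturate_new(uint32_t dq, uint32_t sign, uint32_t cf_max) {
        uint32_t m = umin(dq, cf_max + sign);
        return sign ? -(int32_t)m : (int32_t)m;
    }

    int main(void) {
        const int bitdepth = 10;
        const uint32_t cf_max = (1u << (bitdepth + 7)) - 1;
        uint32_t dq = 0x1000000 & 0xFFFFFF;  /* the problematic case: dq == 0 */
        printf("old: %d  new: %d\n",
               (int)saturate_old(dq, 1, cf_max),   /* -131072: wrongly saturated */
               (int)saturate_new(dq, 1, cf_max));  /* 0: as expected */
        return 0;
    }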
parent 1bdb776c