Not sure if 0.08 is enough in this case though, since kernel weights are multiplied by 0.08^33 as well. I guess 0.1 is a much safer choice (0.1**33 * 1/1024 is still in range).
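For concreteness, assuming the concern here is single-precision (fp32) underflow, the two candidate bases can be checked against the smallest normal float32 directly (this is just arithmetic to illustrate the claim, not code from the project):

```python
import numpy as np

# Smallest positive *normal* float32 (~1.1755e-38); values below this
# fall into the subnormal range or flush to zero on some GPUs.
tiny = np.finfo(np.float32).tiny

v_008 = 0.08 ** 33 / 1024  # ~6e-40: below the normal range
v_010 = 0.10 ** 33 / 1024  # ~1e-36: still a normal float32

print(v_008 < tiny)  # True  -> 0.08 underflows
print(v_010 > tiny)  # True  -> 0.1 stays in range
```

This matches the comment above: with base 0.1 the product stays a normal float32, while with 0.08 it drops below the normal range.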
But I don't understand why it can only be triggered by --dscale antiringing.
When downscaling, the kernel size increases, since we sample more input pixels to produce a single output pixel. This is what correct_downscaling does. So we end up with a bigger filter size that apparently overflows.
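A rough sketch of that idea (an illustration under my own assumptions, not the project's actual implementation): with correct downscaling the filter radius is stretched by the inverse of the scale factor, so the tap count grows with the downscale ratio.

```python
import math

def effective_taps(base_radius: float, scale: float) -> int:
    """Hypothetical helper: number of source samples per output pixel.

    scale < 1.0 means downscaling; the kernel radius is stretched
    by 1/scale so the filter still covers the source footprint.
    """
    stretch = max(1.0, 1.0 / scale)
    radius = base_radius * stretch
    return 2 * math.ceil(radius)

print(effective_taps(2.0, 1.0))   # 4 taps at 1:1
print(effective_taps(2.0, 0.25))  # 16 taps at 4x downscale
```

The larger the downscale, the more weighted samples get summed, which is why the filter size only blows up in the downscaling path.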
I understand how correct-downscaling works. Increasing the filter kernel size means summing more samples, but this is an underflow issue, not an overflow one. So I guess it's not relevant in this case.