swscale: don't map to native-endian formats
These changes seek to fix rendering issues I noticed on a big-endian machine when either the input being rendered (e.g., a PNG with transparency) or the output (e.g., XRender) has an alpha component. In such cases the colors appeared scrambled, suggesting a byte-order issue.
Previously, a codec like VLC_CODEC_ARGB was mapped to the format AV_PIX_FMT_BGR32_1, whose byte order depended on the native byte order of the machine: BGRA for big-endian machines and ARGB for little-endian machines. Thus, only on little-endian machines did you get the byte order "advertised" in the VLC_CODEC_ARGB name (ARGB). With the proposed changes, a codec like VLC_CODEC_ARGB is instead mapped to AV_PIX_FMT_ARGB, so you get ARGB byte order on any machine.
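For reference, the host dependence comes from FFmpeg's AV_PIX_FMT_NE macro in libavutil/pixfmt.h, which selects between two concrete formats at compile time; AV_PIX_FMT_BGR32_1 is defined as AV_PIX_FMT_NE(BGRA, ARGB). Here is a minimal standalone sketch of that mechanism (mirroring the header, not quoting it):

```c
/* Sketch of FFmpeg's native-endian alias mechanism (modeled on
 * libavutil/pixfmt.h; PIX_FMT_NE here is my stand-in, not the real macro). */
#include <stdio.h>

#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#   define PIX_FMT_NE(be, le) #be  /* big-endian host picks the BE layout */
#else
#   define PIX_FMT_NE(be, le) #le  /* otherwise assume little-endian */
#endif

int main(void)
{
    /* AV_PIX_FMT_BGR32_1 = AV_PIX_FMT_NE(BGRA, ARGB): it resolves to
     * BGRA on big-endian hosts and ARGB on little-endian hosts. */
    printf("BGR32_1 resolves to: %s\n", PIX_FMT_NE(BGRA, ARGB));
    return 0;
}
```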
I don't know if the proposed changes are the right way to fix the issue, but they have held up under the testing I've done so far. There are also multiple places in VLC's code that already appear to assume that the byte order advertised in a VLC codec name is the actual byte order in memory (e.g., here or in the XRender video output); the sketch below illustrates what that assumption looks like. But I acknowledge that there may have been some original motivation for mapping the codecs to native-endian pixel formats that I don't presently understand, and it's fair to say that I don't grok this part of the code base very well.
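As a hypothetical illustration (the helper below is mine, not actual VLC code), such call sites write the channels in the byte order the fourcc name spells out, independent of host endianness:

```c
/* Hedged sketch of the assumption made by some consumers of
 * VLC_CODEC_ARGB buffers: the first byte in memory is alpha, then
 * red, green, blue, on every host. */
#include <stdint.h>
#include <stddef.h>

static void fill_argb(uint8_t *pixels, size_t count,
                      uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    for (size_t i = 0; i < count; i++) {
        pixels[4 * i + 0] = a;  /* byte 0 is alpha... */
        pixels[4 * i + 1] = r;
        pixels[4 * i + 2] = g;
        pixels[4 * i + 3] = b;  /* ...regardless of machine byte order */
    }
}
```

Under the old mapping, swscale would produce BGRA byte order on a big-endian host while a consumer like this still expects A,R,G,B, which would match the scrambled colors described above.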
Note that these changes only affect behavior on big-endian clients.