Commit bf4f21b5 authored by Romain Vimont, committed by Jean-Baptiste Kempf

opengl: convert texture coords in fragment shader


A picture is stored in OpenGL textures (one per plane), possibly with
padding (the texture may be larger than the actual picture).

The conversion from picture coordinates to texture coordinates (which
takes the padding into account) was applied on the input coordinates,
before the vertex shader. As a consequence, the vertex shader received
one vector of input texture coordinates per plane (the padding is not
necessarily the same for all the planes):

    (before this commit)

   picture   texture
   coords    coords        (attributes)      (varyings)
          (1 per plane)

             (x0, y0) --> MultiTexCoord0     TexCoord0     fragment
   (x,y) --> (x1, y1) --> MultiTexCoord1 --> TexCoord1 --> shader
             (x2, y2) --> MultiTexCoord2     TexCoord2
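
In shader terms, the old scheme looked roughly like the following sketch
(illustrative GLSL only; the actual generated vertex shader also handles
the orientation, which is omitted here):

    // Vertex shader, old scheme (sketch): one pre-converted texture
    // coordinate attribute per plane, forwarded to the fragment shader.
    attribute vec4 VertexPosition;
    attribute vec2 MultiTexCoord0; // plane 0, padding already applied
    attribute vec2 MultiTexCoord1; // plane 1
    attribute vec2 MultiTexCoord2; // plane 2
    varying vec2 TexCoord0;
    varying vec2 TexCoord1;
    varying vec2 TexCoord2;

    void main() {
        TexCoord0 = MultiTexCoord0;
        TexCoord1 = MultiTexCoord1;
        TexCoord2 = MultiTexCoord2;
        gl_Position = VertexPosition;
    }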

This poses a problem for separating chroma conversion from rendering: the
renderer should be able to retrieve a pixel color in picture coordinates,
regardless of the input format or padding.

To solve this issue, pass the picture coordinates (instead of the texture
coordinates) as the attribute, and initialize uniform matrices that convert
from picture to texture coordinates for each plane directly in the fragment
shader:

    (after this commit)

   picture
   coords    (attribute)     (varying)

   (x,y) --> PicCoordsIn --> PicCoords --> fragment shader
                                             ^^^
                                             |||
                             TexCoordsMap0 --'||
                (uniforms)   TexCoordsMap1 ---'|
                             TexCoordsMap2 ----'
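
In shader terms, the new scheme could be sketched as follows (illustrative
GLSL only; uniform names and matrix dimensions are assumptions, orientation
handling is omitted, and the real generated shaders differ):

    // Vertex shader (sketch): forward picture coordinates unchanged.
    attribute vec2 PicCoordsIn;
    attribute vec4 VertexPosition;
    varying vec2 PicCoords;

    void main() {
        PicCoords = PicCoordsIn;
        gl_Position = VertexPosition;
    }

    // Fragment shader (sketch): convert picture coordinates to per-plane
    // texture coordinates with one uniform matrix per plane.
    varying vec2 PicCoords;
    uniform mat3 TexCoordsMap0; // picture -> texture coords, plane 0
    uniform mat3 TexCoordsMap1; // plane 1
    uniform mat3 TexCoordsMap2; // plane 2
    uniform sampler2D Texture0;
    uniform sampler2D Texture1;
    uniform sampler2D Texture2;

    void main() {
        vec2 tex0 = (TexCoordsMap0 * vec3(PicCoords, 1.0)).xy;
        vec2 tex1 = (TexCoordsMap1 * vec3(PicCoords, 1.0)).xy;
        vec2 tex2 = (TexCoordsMap2 * vec3(PicCoords, 1.0)).xy;
        vec3 yuv = vec3(texture2D(Texture0, tex0).r,
                        texture2D(Texture1, tex1).r,
                        texture2D(Texture2, tex2).r);
        // YUV -> RGB conversion omitted in this sketch
        gl_FragColor = vec4(yuv, 1.0);
    }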

Note that this also changes the multiplication order of
(non-commutative) matrices, from (semantically):

    TexCoords = Orientation * TexCoordsMap * PicCoords

to:

    TexCoords = TexCoordsMap * Orientation * PicCoords

The latter is the correct one: the orientation defines how the input
picture is rotated, so it must apply to picture coordinates, regardless
of the actual coordinates in the texture.
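
As a concrete (hypothetical) illustration: for a 100x100 picture stored in a
128x128 texture and rotated by 90 degrees, the orientation must rotate the
coordinates while they are still in the normalized [0,1]x[0,1] picture space;
the per-plane map then scales by 100/128 to stay inside the non-padded region:

    // Sketch; uniform names are illustrative.
    vec3 oriented = Orientation * vec3(PicCoords, 1.0); // still in [0,1]x[0,1]
    vec2 tex0 = (TexCoordsMap0 * oriented).xy;          // now in [0,100/128]^2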

As a side effect, BuildRectangle, BuildSphere and BuildCube are now
independent of both the number of planes and any texture padding.

For now, TexCoordsMap is computed by the renderer, but the goal is to
move it to a separate component (a "sampler").

Signed-off-by: Jean-Baptiste Kempf <jb@videolan.org>
parent fb77881b