- Oct 25, 2021
-
-
A sampler now handles chroma conversion for any OpenGL picture provided by the filter: either the input picture directly (most of the time) or any picture produced by the filter itself. Therefore, make the sampler a simple helper for filters, independent of the filter engine.
-
Some sampler functions were private and only used internally by the filter engine. Expose them publicly so that filters can create and manage their own sampler.
-
This will allow filters to access the format easily from the callbacks.
-
This will allow filters to create their own sampler (a vlc_gl_t instance is necessary).
-
Now that filters have access to the input vlc_gl_picture, move the transform that converts picture coordinates to texture coordinates into the picture itself. It is independent of the sampler.
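As an illustration of what such a transform amounts to (a minimal sketch, not the actual vlc_gl_picture API; the 2x3 matrix layout and the function name are assumptions):

    /* Illustrative sketch only: the matrix layout and names are assumptions,
     * not the real vlc_gl_picture fields. A 2x3 affine matrix maps picture
     * coordinates (in [0, 1]) to texture coordinates, absorbing cropping and
     * orientation in a single multiplication. */
    static void
    picture_coords_to_tex_coords(const float mtx[2*3], float px, float py,
                                 float *tx, float *ty)
    {
        /* Column-major 2x3 matrix [a c e; b d f] applied to (px, py, 1) */
        *tx = mtx[0] * px + mtx[2] * py + mtx[4];
        *ty = mtx[1] * px + mtx[3] * py + mtx[5];
    }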
-
Pass an OpenGL picture to the draw() callback of OpenGL filter modules.
-
Pass the input format to the Open() function of OpenGL filter modules.
-
This will allow passing the format to the Open() function of OpenGL filter modules.
-
This makes the sampler independent of the interop and the importer. Instead, it is created from a vlc_gl_format and receives instances of vlc_gl_picture.
-
A sampler was responsible for exposing a VLC picture_t as an RGBA picture, via a vlc_texture() GLSL function. Extract part of its responsibilities to an "importer", to prepare its split into two parts:
 1. importing a picture_t to textures (via an interop), handling the necessary coordinate transformations;
 2. exposing the input textures as a single RGBA texel, by handling texture access, swizzle and chroma conversion internally.
This will allow OpenGL filters to:
 - bind and read the raw input textures directly;
 - generate chroma conversion GLSL code for any vlc_gl_picture with a given vlc_gl_format (not only for input VLC pictures).
As a first step, the importer is kept internal to the sampler, so that the split does not impact the sampler API.
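To make the vlc_texture() idea concrete, here is a rough sketch of the kind of GLSL a sampler could generate for a planar YUV input. It is illustrative only, not the code VLC actually emits: the uniform names, plane assignment and conversion matrix are assumptions. It is written as a C string, since the sampler builds the shader source at runtime.

    /* Illustrative only: not the GLSL that VLC's sampler actually generates.
     * The sampler exposes the input planes as a single RGBA texel through a
     * vlc_texture() function, hiding texture access, swizzle and chroma
     * conversion from the filter code. */
    static const char *const sample_vlc_texture_glsl =
        "uniform sampler2D Texture0; /* Y plane (assumed) */\n"
        "uniform sampler2D Texture1; /* U plane (assumed) */\n"
        "uniform sampler2D Texture2; /* V plane (assumed) */\n"
        "uniform mat4 ConvMatrix;    /* YUV -> RGB conversion (assumed) */\n"
        "vec4 vlc_texture(vec2 tex_coords) {\n"
        "  vec4 yuv = vec4(texture2D(Texture0, tex_coords).r,\n"
        "                  texture2D(Texture1, tex_coords).r,\n"
        "                  texture2D(Texture2, tex_coords).r,\n"
        "                  1.0);\n"
        "  return ConvMatrix * yuv;\n"
        "}\n";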
-
Replace the private sampler fields related to the OpenGL format with an instance of the struct vlc_gl_picture introduced recently. This prepares further refactoring.
-
Replace the private sampler fields related to the OpenGL format with a public instance of the struct vlc_gl_format introduced recently. This prepares further refactoring.
-
This will allow OpenGL filters to handle OpenGL textures directly (currently, they can only receive the input picture content in GLSL via vlc_texture()).
-
A sampler can be initialized in two ways: with or without an interop. In both cases, it needs the number of input textures and their sizes. When initialized from vlc_gl_sampler_NewFromInterop(), the number of textures and their sizes were set immediately. However, when initialized from vlc_gl_sampler_NewFromTexture2D(), the number of textures was re-computed indirectly from the chroma, and the texture sizes were updated on each picture (even if they are constant). To unify the behavior, always initialize the number of textures and their sizes on sampler creation. This paves the way to move the interop out of the sampler.
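For context, the plane count and per-plane sizes can be derived once from the chroma description at creation time. The sketch below shows the general idea; the field names follow VLC's vlc_chroma_description_t, but treat the exact usage as an assumption rather than the patch's actual code.

    #include <vlc_common.h>
    #include <vlc_fourcc.h>

    /* Sketch: derive the texture count and per-plane sizes once, at sampler
     * creation, from the chroma description (instead of recomputing them for
     * every picture). Exact field usage is an assumption. */
    static int
    init_plane_sizes(vlc_fourcc_t chroma, unsigned width, unsigned height,
                     unsigned sizes[][2], unsigned *tex_count)
    {
        const vlc_chroma_description_t *desc =
            vlc_fourcc_GetChromaDescription(chroma);
        if (desc == NULL)
            return -1;

        *tex_count = desc->plane_count;
        for (unsigned i = 0; i < desc->plane_count; ++i)
        {
            /* Each plane is a rational fraction of the full picture size */
            sizes[i][0] = width  * desc->p[i].w.num / desc->p[i].w.den;
            sizes[i][1] = height * desc->p[i].h.num / desc->p[i].h.den;
        }
        return 0;
    }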
-
The "config" argument is used.
-
Requires libplacebo@957ad294
Note: We could technically support mirroring (and 180° rotation) on older libplacebo as well if needed, but I decided to go for the simpler implementation since those cases are comparatively rare anyway. 90° rotation is the most important case to support properly.
-
-
- Oct 24, 2021
-
-
cf. vlc-android#2221
-
-
-
-
- Oct 23, 2021
-
-
In the case count=0, the loop is not processed and the picture array's single element is not initialized, leading to a warning. We don't use count=0 anyway, so remove that case. Fix the warnings (<unknown> is the VLA):

../../src/misc/picture_pool.c: In function ‘picture_pool_NewFromFormat’:
../../src/misc/picture_pool.c:140:28: warning: ‘<unknown>’ may be used uninitialized [-Wmaybe-uninitialized]
  140 |     picture_pool_t *pool = picture_pool_New(count, picture);
      |                            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../src/misc/picture_pool.c:102:17: note: by argument 2 of type ‘picture_t * const*’ to ‘picture_pool_New’ declared here
  102 | picture_pool_t *picture_pool_New(unsigned count, picture_t *const *tab)
      |                 ^~~~~~~~~~~~~~~~
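For context, the pattern that triggers the warning looks roughly like this (a simplified sketch of picture_pool_NewFromFormat, not the exact source; error handling is reduced):

    #include <vlc_common.h>
    #include <vlc_picture.h>
    #include <vlc_picture_pool.h>

    /* Simplified sketch of the pattern GCC warns about. With count == 0 the
     * VLA keeps one element (to avoid a zero-length array) that the loop
     * never initializes, so GCC flags the array passed to picture_pool_New()
     * as possibly uninitialized. */
    static picture_pool_t *
    pool_from_format_sketch(const video_format_t *fmt, unsigned count)
    {
        picture_t *picture[count ? count : 1]; /* the "<unknown>" VLA above */

        for (unsigned i = 0; i < count; i++)
        {
            picture[i] = picture_NewFromFormat(fmt);
            if (picture[i] == NULL)
                return NULL; /* (leaks in this sketch; real code cleans up) */
        }

        return picture_pool_New(count, picture);
    }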
-
This will ensure that the callback has the expected signature.
-
- Oct 22, 2021
-
-
emnm is a wrapper around llvm-nm and is actually located in the SDK binaries path, which is added to the PATH when the SDK environment is sourced (/etc/profile.d/emscripten.sh) or when the compiler is added to the PATH by sourcing emsdk_env.sh, as in the CI configuration. This avoids having to define the EMSDK variable when emscripten is installed on the system.
-
Those defines are always needed and should be defined by configure.ac so as to also be present when using ./configure directly.
-
-
This should have better precision (< 1 µs), and we don't have to beg the system to give us 5 ms precision. We also don't need an extra LoadLibrary call, and we use the same code for regular and UWP builds. Some background on the latency and performance (we are in the Win7 case): https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps
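A minimal sketch of reading the high-resolution counter and converting it to microseconds with QueryPerformanceCounter/QueryPerformanceFrequency (the standard Win32 pattern, not the exact VLC clock code):

    #include <windows.h>
    #include <stdint.h>

    /* Sketch of the usual QPC pattern, not VLC's actual clock implementation.
     * The frequency is fixed at boot, so it can be queried once and cached. */
    static int64_t mdate_us_sketch(void)
    {
        static LARGE_INTEGER freq; /* counts per second, cached */
        if (freq.QuadPart == 0)
            QueryPerformanceFrequency(&freq);

        LARGE_INTEGER counter;
        QueryPerformanceCounter(&counter);

        /* Split the division to limit overflow when converting to µs */
        int64_t seconds   = counter.QuadPart / freq.QuadPart;
        int64_t remainder = counter.QuadPart % freq.QuadPart;
        return seconds * INT64_C(1000000)
             + remainder * INT64_C(1000000) / freq.QuadPart;
    }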
-
-
intel_gfx_api-x86.dll exports InitialiseMediaSession and DisposeMediaSession, but libqsv_plugin.dll imports InitialiseMediaSession@12 and DisposeMediaSession@4. The name decorations are caused by the APIENTRY modifier, so libqsv_plugin.dll cannot load intel_gfx_api-x86.dll: it fails with an "Entry Point Not Found" error. In UWP debug mode, this crashes the CoreCLR. Should fix videolan/LibVLCSharp#374. Backport from upstream mfx_dispatch, fixed since https://github.com/lu-zero/mfx_dispatch/commit/7e4d221c36c630c1250b23a5dfa15657bc04c10c.
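To illustrate the decoration mismatch: on 32-bit Windows, APIENTRY expands to __stdcall, which decorates the symbol with an @N suffix equal to the number of argument bytes (@12 means 12 bytes of stack arguments, e.g. three 4-byte parameters). The prototypes below are hypothetical and only demonstrate how the suffix arises; they are not the real intel_gfx_api signatures.

    #include <windows.h>

    /* Hypothetical prototypes, only to show how stdcall decoration works on
     * 32-bit builds; these are not the real intel_gfx_api signatures. */

    /* No APIENTRY (__cdecl): exported/imported as "InitialiseMediaSession" */
    int InitialiseMediaSession_cdecl(void *a, void *b, void *c);

    /* APIENTRY == __stdcall: three pointer arguments = 12 bytes on x86, so
     * the import symbol becomes "_InitialiseMediaSession@12". If the DLL
     * exports the undecorated name, the loader cannot resolve the import. */
    int APIENTRY InitialiseMediaSession_stdcall(void *a, void *b, void *c);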
-
-
-
-
-
-
The composition works as follows:
* Both the interface and the video are rendered in an offscreen window (using the X11 Composite extension).
* We register for damage events (X11 Damage extension) to get notified when the content of the offscreen video window changes. When we receive a damage event, we ask the rendering part of the composition to refresh (a minimal sketch of this loop follows after the list).
* The interface is rendered in the offscreen surface using a RenderControl; when the interface renders, the composition is asked to refresh.
* A dedicated thread is spawned to do the rendering: upon a refresh event, it takes the pictures from the offscreen surface and blends them into the actual window using the X11 Render extension. Using a thread separate from Qt ensures that the rendering of the video will not be stalled if the Qt thread is busy.
* The damage events are listened to on a separate X11 connection and on a thread separate from the Qt main thread (here, the rendering thread). This allows receiving these events independently from Qt (in case the Qt thread is stalled). Note that it is not possible to peek into the Qt X11 event queue from a non-GUI thread, as QX11Info::peekEventQueue() is no longer thread-safe since 5.12.
fixes: #25627, #22155
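As a rough, hedged sketch of the damage-event listening part only (not the actual Qt compositor code; request_refresh() is a hypothetical callback and error handling is reduced):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xdamage.h>

    /* Sketch: listen for Damage events on a dedicated X11 connection, outside
     * the Qt main thread, and request a composition refresh on each event. */
    static void damage_listen_loop(const char *display_name, Window video_win,
                                   void (*request_refresh)(void))
    {
        Display *dpy = XOpenDisplay(display_name); /* separate connection */
        if (dpy == NULL)
            return;

        int ev_base, err_base;
        if (!XDamageQueryExtension(dpy, &ev_base, &err_base))
            goto out;

        /* Report once per "damaged" period; the refresh repairs the region */
        Damage damage = XDamageCreate(dpy, video_win, XDamageReportNonEmpty);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev); /* blocks on this connection only */
            if (ev.type == ev_base + XDamageNotify) {
                XDamageSubtract(dpy, damage, None, None);
                request_refresh();
            }
        }
        /* not reached in this sketch */
    out:
        XCloseDisplay(dpy);
    }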
-
-
-
...after checking GPU affinity.
-
The context and the original HDC come from the decoder thread, while Close() is called from the vout_thread when the vout is being stopped. This leads to one of the following errors when stopping the media player:
 - error 2004: The requested transformation operation is not supported.
 - error 6: The handle is invalid.
These errors mean that either the hDC is still in use in another thread or that the hDC is not valid anymore. Since this was happening on Close(), there was no way to make it fail gracefully. Getting a new DC when making the context current, thus in the closing thread too, fixes the error by ensuring thread safety with those objects.
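A minimal sketch of the "get a fresh DC in the calling thread" pattern with plain Win32/WGL calls (hwnd and hglrc are assumed to be the video window and the WGL context; this is not the actual VLC wgl code):

    #include <windows.h>

    /* Sketch: acquire the DC in the thread that makes the context current,
     * instead of reusing an HDC obtained in another thread. */
    static BOOL make_current_sketch(HWND hwnd, HGLRC hglrc)
    {
        HDC hdc = GetDC(hwnd);          /* DC acquired by the calling thread */
        if (hdc == NULL)
            return FALSE;

        if (!wglMakeCurrent(hdc, hglrc)) {
            ReleaseDC(hwnd, hdc);
            return FALSE;
        }

        /* ... OpenGL work ... */

        wglMakeCurrent(NULL, NULL);     /* release the context */
        ReleaseDC(hwnd, hdc);           /* and the DC, in the same thread */
        return TRUE;
    }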
-
When MakeCurrent fails, there was not much information. Since it can fail in places where we want to release OpenGL resources, it was hard to track the failure down to a wgl error. In case of failure, this patch details the last available error message.
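Retrieving a readable message for the last WGL error typically goes through GetLastError() and FormatMessage(); a hedged sketch (not the exact VLC logging code, which would use VLC's own logger rather than stderr):

    #include <windows.h>
    #include <stdio.h>

    /* Sketch: turn GetLastError() into a readable string after a failed
     * wglMakeCurrent(). */
    static void log_last_wgl_error(const char *what)
    {
        DWORD err = GetLastError();
        char msg[256] = "";

        FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       NULL, err, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
                       msg, sizeof(msg), NULL);

        fprintf(stderr, "%s failed (error %lu): %s\n",
                what, (unsigned long)err, msg);
    }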
-