- 14 Feb, 2019 2 commits
-
Niklas Haas authored
- change HTML syntax to markdown syntax
- move the (now rather lengthy) authors section below the (more important) API overview section
- merge (and reword) the contributors and authors sections, which were a bit redundant
-
Jess authored
-
- 06 Feb, 2019 3 commits
-
Niklas Haas authored
-
Niklas Haas authored
Similar to the SDL2 demo, except it doesn't actually do anything useful yet. Contains all of the boilerplate for setting up a GLFW window for use with libplacebo, though.
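For reference, the GLFW boilerplate in question looks roughly like the following sketch (assuming a Vulkan surface; the actual demo may differ in detail):

```c
#include <GLFW/glfw3.h>

// Create a window with no client API attached, so the surface can be
// driven by Vulkan (and hence libplacebo) rather than an OpenGL context.
static GLFWwindow *create_window(int w, int h)
{
    if (!glfwInit())
        return NULL;

    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);

    // A VkSurfaceKHR for this window would later come from
    // glfwCreateWindowSurface().
    return glfwCreateWindow(w, h, "libplacebo demo", NULL, NULL);
}
```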
-
Niklas Haas authored
-
- 03 Feb, 2019 1 commit
-
Niklas Haas authored
"Oops"
-
- 31 Jan, 2019 1 commit
-
Niklas Haas authored
pl_format -> pl_fmt
-
- 29 Jan, 2019 2 commits
-
Niklas Haas authored
Technically needed for wayland. Not that SDL2 supports it anyway.
-
Niklas Haas authored
This is used both for updating the size and for querying it. I don't want to make these separate functions, because it should be painfully obvious that the size you get may not be the size you requested. This allows libplacebo to work on wayland, which delegates swapchain resizing to protocols like xdg_shell that Mesa/Vulkan can't know anything about (by design).
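For illustration, a minimal sketch of how such a combined update/query call might be used from the caller's side; the function name and signature here are assumptions, not taken from this commit:

```c
#include <stdbool.h>
#include <stdio.h>
#include <libplacebo/swapchain.h>

// Sketch only: request a size, then report what we actually got back.
static bool resize_swapchain(const struct pl_swapchain *sw, int want_w, int want_h)
{
    int w = want_w, h = want_h;
    if (!pl_swapchain_resize(sw, &w, &h))
        return false;

    // On wayland the compositor may hand back a different size than the
    // one requested, so always use w/h from this point onwards.
    printf("requested %dx%d, got %dx%d\n", want_w, want_h, w, h);
    return true;
}
```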
-
- 28 Jan, 2019 1 commit
-
Niklas Haas authored
There's no reason the user shouldn't be allowed to change the log parameters around later on.
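As a hedged illustration, changing the log parameters after creation might look something like the following sketch; the `pl_context_update` name and the exact fields are assumptions, not quoted from this commit:

```c
#include <libplacebo/context.h>

// Sketch: bump the verbosity and switch to colorized logging at runtime,
// without recreating the context. (API names assumed, not confirmed.)
static void enable_debug_logging(struct pl_context *ctx)
{
    pl_context_update(ctx, &(struct pl_context_params) {
        .log_cb    = pl_log_color,
        .log_level = PL_LOG_DEBUG,
    });
}
```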
-
- 11 Jan, 2019 9 commits
-
Niklas Haas authored
-
Niklas Haas authored
Not implemented for the other types of objects, because a user is much less likely to need to associate unique data with those. It helps for buf/tex in particular, since the user can use this to e.g. hold extra state that needs to be tracked for synchronization / external API usage. It's also motivated by `pl_tex_dummy_create`, which can benefit from allowing users to attach their own objects to dummy textures, so they know which one corresponds to what.
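To illustrate the buf/tex case, a hedged sketch of attaching caller-owned state to a texture; the `user_data` field name and the surrounding details are assumptions based on this description:

```c
#include <libplacebo/gpu.h>

// Hypothetical per-frame state that needs to be tracked for
// synchronization / external API usage.
struct frame_state { int fence_fd; };

static const struct pl_tex *create_tracked_tex(const struct pl_gpu *gpu,
                                               const struct pl_fmt *fmt,
                                               struct frame_state *state)
{
    return pl_tex_create(gpu, &(struct pl_tex_params) {
        .w          = 1920,
        .h          = 1080,
        .format     = fmt,
        .sampleable = true,
        .user_data  = state, // recoverable later via tex->params.user_data
    });
}
```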
-
Niklas Haas authored
-
Niklas Haas authored
This is basically a software-emulated `pl_gpu`, which does all texture/buffer operations in host memory using CPU instructions. Attempting to somehow soft-emulate shaders or rasterize is _way_ out of scope for this dumb thing, so we don't support `pl_tex_blit` or `pl_pass` at all. Literally the only point is to let users generate "faux" shaders, while keeping track of the generated LUTs etc.
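For reference, a rough usage sketch of such a software-emulated GPU; the header and function names are assumptions based on the description above:

```c
#include <libplacebo/dummy.h>

// Sketch: create a CPU-only pl_gpu, generate shaders / LUTs against it in
// host memory, then tear it down. No actual rendering ever takes place.
static void run_on_dummy_gpu(struct pl_context *ctx)
{
    const struct pl_gpu *gpu = pl_gpu_dummy_create(ctx, NULL);
    if (!gpu)
        return;

    // ... generate "faux" shaders against `gpu` here; any LUTs they
    // reference live in ordinary host memory ...

    pl_gpu_dummy_destroy(&gpu);
}
```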
-
Niklas Haas authored
This was accidentally left exposed from a previous version of the API. It makes no sense in the public-facing code, so move it to the private `pl_shader_reset_ex`. Also swap the order of parameters for consistency.
-
Niklas Haas authored
This was supposed to be `index`, not `ident`. No change to semantics, but possibly confusing for users.
-
Niklas Haas authored
uint32_t -> `int` can overflow
-
Niklas Haas authored
-
Niklas Haas authored
Turns out, doing this improves coverage massively. :-)
-
- 05 Jan, 2019 15 commits
-
Niklas Haas authored
Forgot to update this; now it compiles again (and tests sensible stuff).
-
Niklas Haas authored
We accidentally implemented this inside vulkan/gpu.c instead of the general-purpose wrapper code. That also meant we never tested the `tex->params.export_handle` condition. However, I realized that this condition is actually too restrictive for our test framework, and working around it there would be sort of annoying. So just drop the restriction. I won't bother updating the API version for this change, since the actual behavior hasn't changed. (And even if it had, it would only matter for our own test framework.) As an aside, fix a bunch of related comments that still had outdated field names in the documentation.
-
Niklas Haas authored
Turns out that this actually causes problems when e.g. trying to reuse an SSBO between a compute shader and a fragment shader, which comes up when using the new peak detection code. One annoying thing about the implementation is that we will always need to make sure that `buf_flush` happens after `buf_signal`, since the former depends on the stage variable set by the latter. Maybe I'll change this in the future if it gets annoying. Fixes #49.
-
Niklas Haas authored
The whole reason this `prelinearized` field *exists* is that we still want to pass the original colorspace when performing color management or constructing the 3DLUT. It's weird that we never used it. Fix this oversight.
-
Niklas Haas authored
This also allows us to finally separate peak detection from color management. The current place in the code actually has almost no drawbacks, since it's effectively free unless FBOs are disabled. One annoying consequence is that we will now always perform peak detection at the source resolution, even if the display is smaller. In the relatively common case of 4K video on 1080p displays, this is a performance regression. To fix it, we could investigate doing the analysis after up/downscaling, but then we have more special cases to think about, so I'll live with the status quo for now. Peak detection isn't the end of the world even at 4K. Closes #40.
-
Niklas Haas authored
Now we actually have a use for this!
-
Niklas Haas authored
-
Niklas Haas authored
The previous approach of using an FIR with a tunable hard threshold for scene changes had several problems:

- the FIR involved annoying dynamic buffer sizes, high VRAM usage, and the FIR sum was prone to numerical overflow, which limited the number of frames we could average over.
- the hard scene change detection was prone to both false positives and false negatives, each with their own (annoying) issues.

Scrap this entirely and switch to a dual approach: use a simple single-pole IIR low-pass filter to smooth out noise, while using a softer scene change curve (with tunable low and high thresholds) based on `smoothstep`.

The IIR filter is extremely simple in its implementation and has an arbitrarily user-tunable cutoff frequency, while the smoothstep-based scene change curve provides a good, tunable tradeoff between adaptation speed and stability, without exhibiting either of the traditional issues associated with the hard cutoff. Another way to think about the new options is that the "low threshold" provides a margin of error within which we don't care about small fluctuations in the scene (which will therefore be smoothed out by the IIR filter).

While redesigning the algorithm, I also redesigned the API, so that peak detection and tone mapping are separate, discrete steps that can be done as part of two different shaders (or as part of the same shader). This is required for #40, and in particular means that issue is now within reach.

cf. https://github.com/mpv-player/mpv/pull/6415
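As a rough illustration of the new smoothing (a hand-written sketch of the idea, not the actual libplacebo code; all parameter names are invented):

```c
#include <math.h>

static float smoothstep(float lo, float hi, float x)
{
    x = (x - lo) / (hi - lo);
    x = x < 0.0f ? 0.0f : x > 1.0f ? 1.0f : x;
    return x * x * (3.0f - 2.0f * x);
}

// One update step of the peak state for a new frame measurement:
// - a single-pole IIR low-pass smooths out frame-to-frame noise
// - a smoothstep between thresh_lo and thresh_hi softly ramps up the
//   adaptation speed for large jumps ("scene changes"), replacing the
//   old hard threshold
static float update_peak(float state, float measured, float frame_dur,
                         float cutoff_hz, float thresh_lo, float thresh_hi)
{
    float weight = 1.0f - expf(-frame_dur * cutoff_hz * 6.2831853f);
    float delta  = fabsf(measured - state);
    weight += (1.0f - weight) * smoothstep(thresh_lo, thresh_hi, delta);
    return state + weight * (measured - state);
}
```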
-
Niklas Haas authored
Annoying special case needed to allow us to split up peak detection and tone mapping into two passes, each of which may end up attaching the same SSBO to the same shader.
-
Niklas Haas authored
Tone mapping really dislikes handling negative values, which are a natural result of the gamut mapping taking colors out-of-gamut. Resolve this fundamental conflict by tone mapping first.
-
Niklas Haas authored
Move the `sig_scale` handling into our concept of OOTF, and also move the dst/src_luma coeffs there. (We can do this now, since the tone mapping algorithm no longer needs them.)
-
Niklas Haas authored
The current logic skips the OOTF when performing primary adaptation or tone mapping on scene-referred content, which is wrong. Instead, merge `need_ootf` and `need_linear`, and always perform the OOTF whenever linearizing.
-
Niklas Haas authored
No reason not to have this, and it's useful.
-
Niklas Haas authored
Now we actually test a whole bunch of options, including different dithering modes, tone mapping curves, etc. This makes the tests quite a bit slower and also quite a bit more verbose, but that's the price of not having bugs! Also bump up the queue count so that we actually test multi-queue scenarios properly.
-
Niklas Haas authored
Instead of hackily desaturating towards white, we now perform tone mapping in per-channel ("desaturating") mode, and then desaturate towards that result. This helps significantly for very over-mastered content like Mad Max, and also makes sanely mastered content feel slightly more realistic/natural. More importantly, it's closer to what the mastering engineers expect TVs to do. We also allow the user to configure a bit of over-exposure for dark scenes, which can help with extremely dark movies like Annihilation.

cf. https://github.com/mpv-player/mpv/issues/6405 and https://github.com/mpv-player/mpv/issues/6415
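As a very rough, conceptual sketch of my reading of this (not the actual shader code; all names are invented):

```c
// Tone-map the signal per channel, tone-map it in a hue-preserving way
// (by scaling against the mapped luminance), then blend the latter
// towards the former by a tunable desaturation strength.
struct rgb { float r, g, b; };

static struct rgb tone_map_color(struct rgb in, float luma, float strength,
                                 float (*tone_map)(float))
{
    // hue-preserving: scale all channels by the mapped/original luma ratio
    float ratio = luma > 0.0f ? tone_map(luma) / luma : 1.0f;
    struct rgb hue = { in.r * ratio, in.g * ratio, in.b * ratio };

    // per-channel ("desaturating") tone mapping
    struct rgb pc = { tone_map(in.r), tone_map(in.g), tone_map(in.b) };

    // desaturate the hue-preserving result towards the per-channel result
    return (struct rgb) {
        hue.r + (pc.r - hue.r) * strength,
        hue.g + (pc.g - hue.g) * strength,
        hue.b + (pc.b - hue.b) * strength,
    };
}
```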
-
- 31 Dec, 2018 1 commit
-
Philip Langdale authored
We were forgetting to remap from our handle_type enum to the Vulkan one.
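For context, this is roughly what such a remapping looks like (an illustrative sketch, not the code touched by this commit; the Vulkan enum names are the standard ones, the libplacebo side is assumed):

```c
#include <vulkan/vulkan.h>
#include <libplacebo/gpu.h>

// Map libplacebo's handle type enum to the corresponding Vulkan
// external memory handle type bit.
static VkExternalMemoryHandleTypeFlagBits vk_handle_type(enum pl_handle_type type)
{
    switch (type) {
    case PL_HANDLE_FD:      return VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT;
    case PL_HANDLE_WIN32:   return VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT;
    case PL_HANDLE_DMA_BUF: return VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT;
    default:                return 0;
    }
}
```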
-
- 22 Dec, 2018 5 commits
-
Niklas Haas authored
To emphasize that it's only testing interop, and also point out that we should keep these separate.
-
Philip Langdale authored
-
Philip Langdale authored
This change adds a test that exports a texture and then imports it back again. This round-tripping is explicitly supported by the Vulkan spec. The test only runs for dma_buf handles, because that's the only handle type we have an import implementation for. I also fixed the existing test, which didn't properly check the fd value of the buffer.
-
Philip Langdale authored
This change introduces new capabilities to allow external memory to be imported and bound to textures. The specific use-case is supporting interop with vaapi hardware decoding, where dma_buf is used to export decoded video surfaces.

The API basically involves passing a `pl_shared_mem` when creating a texture, to describe the external memory to be used. When this is done, the external memory is imported and used instead of making a normal memory allocation. Past that point, the texture behaves like a normal one, and destroying the texture will free the imported allocation.

Note that we will repeatedly import a single allocation if it is passed for separate textures. This happens in the vaapi case because each surface is a single allocation, but each plane must be imported as a separate texture. The Vulkan spec explicitly requires multiple-import to work, so this is safe. I have a corresponding mpv change that demonstrates this all works.

Note that this implementation is more fragile than you'd want, because we can't use `VK_EXT_image_drm_format_modifier`. This extension has yet to be enabled in the Mesa Vulkan drivers, and until it is, we cannot communicate detailed format information from vaapi to Vulkan. Examples of this include the pitch, tiling configuration and exact image format. For the common cases we deal with in vaapi, this is not fatal: it happens to be the case that pitch and tiling work out, and mpv is able to pick a compatible format based on its existing format mapping code.

Note that this code may not pass validation as a result. In my mpv tests, I see validation failures when mpv is doing format probing, reflecting that some of the more exotic formats cannot be matched up without `VK_EXT_image_drm_format_modifier`. However, once probing is complete, decoding and display run without validation errors.
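As a rough sketch of the import path described above (everything except the `pl_shared_mem` name is assumed/illustrative, not quoted from this commit):

```c
#include <libplacebo/gpu.h>

// Import a dma_buf fd (e.g. one plane of a vaapi surface) as a texture.
// Destroying the returned texture also frees the imported allocation.
static const struct pl_tex *import_dmabuf_plane(const struct pl_gpu *gpu,
                                                const struct pl_fmt *fmt,
                                                int fd, size_t size,
                                                int w, int h)
{
    return pl_tex_create(gpu, &(struct pl_tex_params) {
        .w             = w,
        .h             = h,
        .format        = fmt,
        .sampleable    = true,
        .import_handle = PL_HANDLE_DMA_BUF,
        .shared_mem    = (struct pl_shared_mem) {
            .handle = { .fd = fd }, // fd exported by vaapi for this surface
            .size   = size,
        },
    });
}
```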
-
Philip Langdale authored
dma_buf fds are treated as a distinct type (vs OPAQUE_FD) in Vulkan, and so we need to treat them separately too. While the basic interactions are the same as for OPAQUE_FD, there is one distinct difference: dma_buf fds cannot be used for external semaphores, so we have to separate the handle type lists when probing for support.

Note that the `vkGetMemoryFdPropertiesKHR` function import is currently unused, but it is necessary for importing dma_buf fds down the line. While the function is part of the generic external_fd extension, the spec says it cannot be used for OPAQUE_FD (seriously?), and so it's really only relevant for dma_buf fds.
-