@@ -81,6 +81,15 @@ style-check:
fi;
done
x86inc-check:
extends: .debian-amd64-common
stage: style
script:
- git remote rm x86inc 2> /dev/null || true
- git remote add x86inc https://code.videolan.org/videolan/x86inc.asm.git
- git fetch -q x86inc master
- git diff --exit-code x86inc/master:x86inc.asm src/ext/x86/x86inc.asm
allow_failure: true
build-debian:
extends: .debian-amd64-common
@@ -455,9 +464,12 @@ test-debian-asan:
-Dtestdata_tests=true
-Dlogging=false
-Db_sanitize=address
-Denable_asm=false
- ninja -C build
- cd build && time meson test -v --setup=sanitizer
- cd build
- exit_code=0
- time meson test -v --setup=sanitizer --test-args "--cpumask 0" || exit_code=$((exit_code + $?))
- time meson test -v --setup=sanitizer --test-args "--cpumask 0xff" || exit_code=$((exit_code + $?))
- if [ $exit_code -ne 0 ]; then exit $exit_code; fi
test-debian-msan:
extends:
...
@@ -12,7 +12,7 @@ The todo list can be found [on the wiki](https://code.videolan.org/videolan/dav1
The codebase is developed with the following assumptions:
For the library:
- C language with C99 version, without the VLA or the Complex (*\_\_STDC_NO_COMPLEX__*) features, and without compiler extension,
- C language with C99 version, without the VLA or the Complex (*\_\_STDC_NO_COMPLEX__*) features, and without compiler extensions. Anonymous structures and unions are the only allowed compiler extensions for internal code (see the sketch after this list).
- x86 asm in .asm files, using the NASM syntax,
- arm/arm64 in .S files, using the GAS syntax limited to the subset that llvm 5.0's internal assembler supports,
- no C++ is allowed, whatever the version.
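As a minimal sketch of that one permitted extension (illustrative code, not from the dav1d sources): anonymous members let internal code address the same storage either by field name or as a flat array, without an intermediate member name. The public headers avoid this, as the Dav1dWarpedMotionParams change further down shows.

```c
#include <stdint.h>

/* Hypothetical internal type: the inner struct and the union are
 * anonymous, so their members are accessed directly. */
typedef struct WarpParams {
    union {
        struct {
            int16_t alpha, beta, gamma, delta;
        };               /* anonymous struct */
        int16_t abcd[4]; /* aliases the four named fields */
    };                   /* anonymous union */
} WarpParams;

static inline int16_t warp_alpha(const WarpParams *const wp) {
    return wp->alpha;    /* equivalent to wp->abcd[0] */
}
```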
...
Changes for 0.8.0 'Eurasian hobby':
-----------------------------------
0.8.0 is a major update for dav1d:
- Improve performance by using a picture buffer pool;
the improvements can reach 10% in some cases on Windows.
- Support for Apple ARM Silicon
- ARM32 optimizations for 8bit bitdepth for ipred paeth, smooth, cfl
- ARM32 optimizations for 10/12/16bit bitdepth for mc_avg/mask/w_avg,
put/prep 8tap/bilin, wiener and CDEF filters
- ARM64 optimizations for cfl_ac 444 for all bitdepths
- x86 optimizations for MC 8-tap, mc_scaled in AVX2
- x86 optimizations for CDEF in SSE and {put/prep}_{8tap/bilin} in SSSE3
Changes for 0.7.1 'Frigatebird':
--------------------------------
...
![dav1d logo](dav1d_logo.png)
![dav1d logo](doc/dav1d_logo.png)
# dav1d
@@ -30,17 +30,21 @@ The plan is the following:
1. Complete C implementation of the decoder,
2. Provide a usable API,
3. Port to most platforms,
4. Make it fast on desktop, by writing asm for AVX-2 chips.
4. Make it fast on desktop, by writing asm for AVX2 chips.
5. Make it fast on mobile, by writing asm for ARMv8 chips,
6. Make it fast on older desktop, by writing asm for SSSE3+ chips.
6. Make it fast on older desktop, by writing asm for SSSE3+ chips,
7. Make high bit-depth fast on mobile, by writing asm for ARMv8 chips.
### On-going
7. Make it fast on older mobiles, by writing asm for ARMv7 chips,
8. Improve C code base with [various tweaks](https://code.videolan.org/videolan/dav1d/wikis/task-list),
9. Accelerate for less common architectures, like PPC, SSE2 or AVX-512.
8. Make it fast on older mobile, by writing asm for ARMv7 chips,
9. Make high bit-depth fast on older mobile, by writing asm for ARMv7 chips,
10. Improve C code base with [various tweaks](https://code.videolan.org/videolan/dav1d/wikis/task-list),
11. Accelerate for less common architectures, like PPC, SSE2 or AVX-512.
### After
10. Use more GPU, when possible.
12. Make high bit-depth fast on desktop, by writing asm for AVX2 chips,
13. Make high bit-depth fast on older desktop, by writing asm for SSSE3+ chips,
14. Use more GPU, when possible.
# Contribute
@@ -130,7 +134,7 @@ We think that an implementation written from scratch can achieve faster decoding
## I am not a developer. Can I help?
- Yes. We need testers, bug reporters, and documentation writers.
- Yes. We need testers, bug reporters and documentation writers.
## What about the AV1 patent license?
@@ -142,3 +146,5 @@ Please read the [AV1 patent license](doc/PATENTS) that applies to the AV1 specif
- We do, but we don't have either the time or the knowledge. Therefore, patches and contributions are welcome.
## Where can I find documentation?
- The current library documentation, built from master, can be found [here](https://videolan.videolan.me/dav1d/).
@@ -16,13 +16,16 @@ The Alliance for Open Media (AOM) for funding this project.
And all the dav1d Authors (git shortlog -sn), including:
Janne Grunau, Ronald S. Bultje, Martin Storsjö, Henrik Gramner, James Almer,
Marvin Scholz, Luc Trudeau, Jean-Baptiste Kempf, Victorien Le Couviour--Tuffet,
David Michael Barr, Hugo Beauzée-Luyssen, Steve Lhomme, Nathan E. Egge,
Francois Cartegnie, Konstantin Pavlov, Liwei Wang, Xuefeng Jiang,
Derek Buitenhuis, Raphaël Zumer, Niklas Haas, Michael Bradshaw, Kyle Siefring,
Raphael Zumer, Boyuan Xiao, Thierry Foucu, Matthias Dressel, Thomas Daede,
Rupert Swarbrick, Jan Beich, Dale Curtis, SmilingWolf, Tristan Laurent,
Vittorio Giovara, Rostislav Pehlivanov, Shiz, skal, Steinar Midtskogen,
Luca Barbato, Justin Bull, Jean-Yves Avenard, Timo Gurr, Fred Barbier,
Anisse Astier, Pablo Stebler, Nicolas Frattaroli, Mark Shuttleworth.
Martin Storsjö, Janne Grunau, Henrik Gramner, Ronald S. Bultje, James Almer,
Marvin Scholz, Luc Trudeau, Victorien Le Couviour--Tuffet, Jean-Baptiste Kempf,
Hugo Beauzée-Luyssen, Matthias Dressel, Konstantin Pavlov, David Michael Barr,
Steve Lhomme, Niklas Haas, B Krishnan Iyer, Francois Cartegnie, Liwei Wang,
Nathan E. Egge, Derek Buitenhuis, Michael Bradshaw, Raphaël Zumer,
Xuefeng Jiang, Luca Barbato, Jan Beich, Wan-Teh Chang, Justin Bull, Boyuan Xiao,
Dale Curtis, Kyle Siefring, Raphael Zumer, Rupert Swarbrick, Thierry Foucu,
Thomas Daede, Colin Lee, Emmanuel Gil Peyrot, Lynne, Michail Alvanos,
Nico Weber, SmilingWolf, Tristan Laurent, Vittorio Giovara, Anisse Astier,
Dmitriy Sychov, Ewout ter Hoeven, Fred Barbier, Jean-Yves Avenard,
Mark Shuttleworth, Matthieu Bouron, Nicolas Frattaroli, Pablo Stebler,
Rostislav Pehlivanov, Shiz, Steinar Midtskogen, Sylvestre Ledru, Timo Gurr,
Tristan Matthews, Xavier Claessens, Xu Guangxin, kossh1 and skal.
@@ -501,7 +501,7 @@ static int placebo_upload_image(void *cookie, Dav1dPicture *dav1d_pic,
.num_points_uv = { src->num_uv_points[0], src->num_uv_points[1] },
.scaling_shift = src->scaling_shift,
.ar_coeff_lag = src->ar_coeff_lag,
.ar_coeff_shift = src->ar_coeff_shift,
.ar_coeff_shift = (int)src->ar_coeff_shift,
.grain_scale_shift = src->grain_scale_shift,
.uv_mult = { src->uv_mult[0], src->uv_mult[1] },
.uv_mult_luma = { src->uv_luma_mult[0], src->uv_luma_mult[1] },
...
@@ -65,9 +65,9 @@ typedef struct Dav1dSettings {
int operating_point; ///< select an operating point for scalable AV1 bitstreams (0 - 31)
int all_layers; ///< output all spatial layers of a scalable AV1 bitstream
unsigned frame_size_limit; ///< maximum frame size, in pixels (0 = unlimited)
uint8_t reserved[32]; ///< reserved for future use
Dav1dPicAllocator allocator; ///< Picture allocator callback.
Dav1dLogger logger; ///< Logger callback.
uint8_t reserved[32]; ///< reserved for future use
} Dav1dSettings;
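Moving the reserved bytes behind the callback members changes the struct layout, an ABI break (matching the soname bump to 5.0.0 in the meson.build hunk below) that stays source-compatible as long as callers initialize the struct through the API. A minimal usage sketch, relying on the dav1d_default_settings() and dav1d_open() entry points from dav1d.h (open_decoder itself is hypothetical):

```c
#include <dav1d/dav1d.h>

/* Open a decoder with a bounded frame size; every other field,
 * including the reserved bytes, keeps its default. */
int open_decoder(Dav1dContext **const ctx) {
    Dav1dSettings settings;
    dav1d_default_settings(&settings);       /* fills in all defaults */
    settings.frame_size_limit = 8192 * 8192; /* reject larger frames */
    return dav1d_open(ctx, &settings);       /* 0 on success */
}
```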
/**
...
@@ -28,6 +28,7 @@
#ifndef DAV1D_HEADERS_H
#define DAV1D_HEADERS_H
#include <stdint.h>
#include <stddef.h>
// Constants from Section 3. "Symbols and abbreviated terms"
@@ -95,9 +96,9 @@ typedef struct Dav1dWarpedMotionParams {
union {
struct {
int16_t alpha, beta, gamma, delta;
};
} p;
int16_t abcd[4];
};
} u;
} Dav1dWarpedMotionParams;
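Naming the previously anonymous union (u) and inner struct (p) keeps the public headers free of compiler extensions, at the cost of a source break for callers. A sketch of the resulting access pattern (get_shear itself is illustrative):

```c
#include <string.h>
#include <dav1d/headers.h>

/* Copy out the shear parameters of a warped-motion model. */
static void get_shear(const Dav1dWarpedMotionParams *const wm,
                      int16_t out[4]) {
    /* Before this change: wm->alpha etc. (anonymous members). */
    out[0] = wm->u.p.alpha;
    out[1] = wm->u.p.beta;
    out[2] = wm->u.p.gamma;
    out[3] = wm->u.p.delta;
    /* Equivalent through the array view of the same union: */
    memcpy(out, wm->u.abcd, sizeof(wm->u.abcd));
}
```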
enum Dav1dPixelLayout {
@@ -127,6 +128,7 @@ enum Dav1dColorPrimaries {
DAV1D_COLOR_PRI_SMPTE431 = 11,
DAV1D_COLOR_PRI_SMPTE432 = 12,
DAV1D_COLOR_PRI_EBU3213 = 22,
DAV1D_COLOR_PRI_RESERVED = 255,
};
enum Dav1dTransferCharacteristics {
@@ -147,6 +149,7 @@ enum Dav1dTransferCharacteristics {
DAV1D_TRC_SMPTE2084 = 16, ///< PQ
DAV1D_TRC_SMPTE428 = 17,
DAV1D_TRC_HLG = 18, ///< hybrid log/gamma (BT.2100 / ARIB STD-B67)
DAV1D_TRC_RESERVED = 255,
};
enum Dav1dMatrixCoefficients {
@@ -164,6 +167,7 @@ enum Dav1dMatrixCoefficients {
DAV1D_MC_CHROMAT_NCL = 12, ///< Chromaticity-derived
DAV1D_MC_CHROMAT_CL = 13,
DAV1D_MC_ICTCP = 14,
DAV1D_MC_RESERVED = 255,
};
enum Dav1dChromaSamplePosition {
...
@@ -31,11 +31,15 @@ version_h_target = configure_file(input: 'version.h.in',
output: 'version.h',
configuration: version_h_data)
dav1d_api_headers = [
'common.h',
'data.h',
'dav1d.h',
'headers.h',
'picture.h',
]
# install headers
install_headers('common.h',
'data.h',
'dav1d.h',
'headers.h',
'picture.h',
install_headers(dav1d_api_headers,
version_h_target,
subdir : 'dav1d')
@@ -23,14 +23,14 @@
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
project('dav1d', ['c'],
version: '0.7.1',
version: '0.8.0',
default_options: ['c_std=c99',
'warning_level=2',
'buildtype=release',
'b_ndebug=if-release'],
meson_version: '>= 0.47.0')
meson_version: '>= 0.49.0')
dav1d_soname_version = '4.0.2'
dav1d_soname_version = '5.0.0'
dav1d_api_version_array = dav1d_soname_version.split('.')
dav1d_api_version_major = dav1d_api_version_array[0]
dav1d_api_version_minor = dav1d_api_version_array[1]
@@ -62,7 +62,8 @@ endforeach
# ASM option
is_asm_enabled = (get_option('enable_asm') == true and
(host_machine.cpu_family().startswith('x86') or
(host_machine.cpu_family() == 'x86' or
(host_machine.cpu_family() == 'x86_64' and cc.get_define('__ILP32__') == '') or
host_machine.cpu_family() == 'aarch64' or
host_machine.cpu_family().startswith('arm') or
host_machine.cpu() == 'ppc64le'))
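The rewritten condition spells the x86 families out so that the x32 ABI (an x86_64 target where gcc/clang define __ILP32__: 64-bit ISA, 32-bit pointers) no longer enables the hand-written asm, which does not support that ABI. A rough compile-time equivalent of meson's cc.get_define('__ILP32__') test, with a hypothetical macro name:

```c
/* Sketch: __x86_64__ and __ILP32__ are standard gcc/clang defines. */
#if defined(__x86_64__) && !defined(__ILP32__)
#  define HAVE_X86_64_ASM 1 /* LP64 x86_64: asm usable */
#else
#  define HAVE_X86_64_ASM 0 /* x32 or non-x86_64: no x86_64 asm */
#endif
```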
@@ -117,6 +118,17 @@ if host_machine.system() == 'windows'
thread_compat_dep = declare_dependency(sources : files('src/win32/thread.c'))
rt_dependency = []
rc_version_array = meson.project_version().split('.')
winmod = import('windows')
rc_data = configuration_data()
rc_data.set('PROJECT_VERSION_MAJOR', rc_version_array[0])
rc_data.set('PROJECT_VERSION_MINOR', rc_version_array[1])
rc_data.set('PROJECT_VERSION_REVISION', rc_version_array[2])
rc_data.set('API_VERSION_MAJOR', dav1d_api_version_major)
rc_data.set('API_VERSION_MINOR', dav1d_api_version_minor)
rc_data.set('API_VERSION_REVISION', dav1d_api_version_revision)
rc_data.set('COPYRIGHT_YEARS', '2020')
else
thread_dependency = dependency('threads')
thread_compat_dep = []
@@ -226,7 +238,7 @@ endif
# Compiler flags that should be set
# But when the compiler does not support them
# it is not an error and is silently tolerated
if cc.get_id() != 'msvc'
if cc.get_argument_syntax() != 'msvc'
optional_arguments += [
'-Wundef',
'-Werror=vla',
@@ -313,8 +325,8 @@ if host_machine.cpu_family().startswith('x86')
cdata.set('STACK_ALIGNMENT', stack_alignment)
endif
cdata.set10('ARCH_AARCH64', host_machine.cpu_family() == 'aarch64')
cdata.set10('ARCH_ARM', host_machine.cpu_family().startswith('arm'))
cdata.set10('ARCH_AARCH64', host_machine.cpu_family() == 'aarch64' or host_machine.cpu() == 'arm64')
cdata.set10('ARCH_ARM', host_machine.cpu_family().startswith('arm') and host_machine.cpu() != 'arm64')
if (is_asm_enabled and
(host_machine.cpu_family() == 'aarch64' or
host_machine.cpu_family().startswith('arm')))
@@ -350,6 +362,7 @@ cdata.set10('ARCH_X86_64', host_machine.cpu_family() == 'x86_64')
cdata.set10('ARCH_X86_32', host_machine.cpu_family() == 'x86')
if host_machine.cpu_family().startswith('x86')
cdata_asm.set('private_prefix', 'dav1d')
cdata_asm.set10('ARCH_X86_64', host_machine.cpu_family() == 'x86_64')
cdata_asm.set10('ARCH_X86_32', host_machine.cpu_family() == 'x86')
cdata_asm.set10('PIC', true)
@@ -424,6 +437,28 @@ if is_asm_enabled and host_machine.cpu_family().startswith('x86')
])
endif
use_gaspp = false
if (is_asm_enabled and
(host_machine.cpu_family() == 'aarch64' or
host_machine.cpu_family().startswith('arm')) and
cc.get_argument_syntax() == 'msvc')
gaspp = find_program('gas-preprocessor.pl')
use_gaspp = true
gaspp_gen = generator(gaspp,
output: '@BASENAME@.obj',
arguments: [
'-as-type', 'armasm',
'-arch', host_machine.cpu_family(),
'--',
host_machine.cpu_family() == 'aarch64' ? 'armasm64' : 'armasm',
'-nologo',
'-I@0@'.format(dav1d_src_root),
'-I@0@/'.format(meson.current_build_dir()),
'@INPUT@',
'-c',
'-o', '@OUTPUT@'
])
endif
# Generate config.h
config_h_target = configure_file(output: 'config.h', configuration: cdata)
...
@@ -5,7 +5,7 @@ ar = 'ar'
strip = 'strip'
[properties]
c_link_args = ['-m32']
c_link_args = ['-m32', '-Wl,-z,text']
c_args = ['-m32']
[host_machine]
...
@@ -27,6 +27,7 @@
#include "src/arm/asm.S"
#include "util.S"
#include "cdef_tmpl.S"
// n1 = s0/d0
// w1 = d0/q0
@@ -190,11 +191,9 @@ function cdef_padding\w\()_8bpc_neon, export=1
beq 1f
// CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
0:
ldrh r12, [r3], #2
vldr \n1, [r1]
vdup.16 d2, r12
vld1.16 {d2[]}, [r3, :16]!
ldrh r12, [r1, #\w]
add r1, r1, r2
load_n_incr d0, r1, r2, \w
subs r5, r5, #1
vmov.16 d2[1], r12
vmovl.u8 q0, d0
@@ -207,9 +206,8 @@ function cdef_padding\w\()_8bpc_neon, export=1
b 3f
1:
// CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
ldrh r12, [r3], #2
vld1.16 {d2[]}, [r3, :16]!
load_n_incr d0, r1, r2, \w
vdup.16 d2, r12
subs r5, r5, #1
vmovl.u8 q0, d0
vmovl.u8 q1, d2
@@ -327,230 +325,12 @@ endfunc
padding_func_edged 8, 16, d0, 64
padding_func_edged 4, 8, s0, 32
.macro dir_table w, stride
const directions\w
.byte -1 * \stride + 1, -2 * \stride + 2
.byte 0 * \stride + 1, -1 * \stride + 2
.byte 0 * \stride + 1, 0 * \stride + 2
.byte 0 * \stride + 1, 1 * \stride + 2
.byte 1 * \stride + 1, 2 * \stride + 2
.byte 1 * \stride + 0, 2 * \stride + 1
.byte 1 * \stride + 0, 2 * \stride + 0
.byte 1 * \stride + 0, 2 * \stride - 1
// Repeated, to avoid & 7
.byte -1 * \stride + 1, -2 * \stride + 2
.byte 0 * \stride + 1, -1 * \stride + 2
.byte 0 * \stride + 1, 0 * \stride + 2
.byte 0 * \stride + 1, 1 * \stride + 2
.byte 1 * \stride + 1, 2 * \stride + 2
.byte 1 * \stride + 0, 2 * \stride + 1
endconst
.endm
dir_table 8, 16
dir_table 4, 8
const pri_taps
.byte 4, 2, 3, 3
endconst
tables
.macro load_px d11, d12, d21, d22, w
.if \w == 8
add r6, r2, r9, lsl #1 // x + off
sub r9, r2, r9, lsl #1 // x - off
vld1.16 {\d11,\d12}, [r6] // p0
vld1.16 {\d21,\d22}, [r9] // p1
.else
add r6, r2, r9, lsl #1 // x + off
sub r9, r2, r9, lsl #1 // x - off
vld1.16 {\d11}, [r6] // p0
add r6, r6, #2*8 // += stride
vld1.16 {\d21}, [r9] // p1
add r9, r9, #2*8 // += stride
vld1.16 {\d12}, [r6] // p0
vld1.16 {\d22}, [r9] // p1
.endif
.endm
.macro handle_pixel s1, s2, thresh_vec, shift, tap, min
.if \min
vmin.u16 q2, q2, \s1
vmax.s16 q3, q3, \s1
vmin.u16 q2, q2, \s2
vmax.s16 q3, q3, \s2
.endif
vabd.u16 q8, q0, \s1 // abs(diff)
vabd.u16 q11, q0, \s2 // abs(diff)
vshl.u16 q9, q8, \shift // abs(diff) >> shift
vshl.u16 q12, q11, \shift // abs(diff) >> shift
vqsub.u16 q9, \thresh_vec, q9 // clip = imax(0, threshold - (abs(diff) >> shift))
vqsub.u16 q12, \thresh_vec, q12// clip = imax(0, threshold - (abs(diff) >> shift))
vsub.i16 q10, \s1, q0 // diff = p0 - px
vsub.i16 q13, \s2, q0 // diff = p1 - px
vneg.s16 q8, q9 // -clip
vneg.s16 q11, q12 // -clip
vmin.s16 q10, q10, q9 // imin(diff, clip)
vmin.s16 q13, q13, q12 // imin(diff, clip)
vdup.16 q9, \tap // taps[k]
vmax.s16 q10, q10, q8 // constrain() = imax(imin(diff, clip), -clip)
vmax.s16 q13, q13, q11 // constrain() = imax(imin(diff, clip), -clip)
vmla.i16 q1, q10, q9 // sum += taps[k] * constrain()
vmla.i16 q1, q13, q9 // sum += taps[k] * constrain()
.endm
// void dav1d_cdef_filterX_8bpc_neon(pixel *dst, ptrdiff_t dst_stride,
// const uint16_t *tmp, int pri_strength,
// int sec_strength, int dir, int damping,
// int h, size_t edges);
.macro filter_func w, pri, sec, min, suffix
function cdef_filter\w\suffix\()_neon
cmp r8, #0xf
beq cdef_filter\w\suffix\()_edged_neon
.if \pri
movrel_local r8, pri_taps
and r9, r3, #1
add r8, r8, r9, lsl #1
.endif
movrel_local r9, directions\w
add r5, r9, r5, lsl #1
vmov.u16 d17, #15
vdup.16 d16, r6 // damping
filter 8, 8
filter 4, 8
.if \pri
vdup.16 q5, r3 // threshold
.endif
.if \sec
vdup.16 q7, r4 // threshold
.endif
vmov.16 d8[0], r3
vmov.16 d8[1], r4
vclz.i16 d8, d8 // clz(threshold)
vsub.i16 d8, d17, d8 // ulog2(threshold)
vqsub.u16 d8, d16, d8 // shift = imax(0, damping - ulog2(threshold))
vneg.s16 d8, d8 // -shift
.if \sec
vdup.16 q6, d8[1]
.endif
.if \pri
vdup.16 q4, d8[0]
.endif
1:
.if \w == 8
vld1.16 {q0}, [r2, :128] // px
.else
add r12, r2, #2*8
vld1.16 {d0}, [r2, :64] // px
vld1.16 {d1}, [r12, :64] // px
.endif
vmov.u16 q1, #0 // sum
.if \min
vmov.u16 q2, q0 // min
vmov.u16 q3, q0 // max
.endif
// Instead of loading sec_taps 2, 1 from memory, just set it
// to 2 initially and decrease for the second round.
// This is also used as loop counter.
mov lr, #2 // sec_taps[0]
2:
.if \pri
ldrsb r9, [r5] // off1
load_px d28, d29, d30, d31, \w
.endif
.if \sec
add r5, r5, #4 // +2*2
ldrsb r9, [r5] // off2
.endif
.if \pri
ldrb r12, [r8] // *pri_taps
handle_pixel q14, q15, q5, q4, r12, \min
.endif
.if \sec
load_px d28, d29, d30, d31, \w
add r5, r5, #8 // +2*4
ldrsb r9, [r5] // off3
handle_pixel q14, q15, q7, q6, lr, \min
load_px d28, d29, d30, d31, \w
handle_pixel q14, q15, q7, q6, lr, \min
sub r5, r5, #11 // r5 -= 2*(2+4); r5 += 1;
.else
add r5, r5, #1 // r5 += 1
.endif
subs lr, lr, #1 // sec_tap-- (value)
.if \pri
add r8, r8, #1 // pri_taps++ (pointer)
.endif
bne 2b
vshr.s16 q14, q1, #15 // -(sum < 0)
vadd.i16 q1, q1, q14 // sum - (sum < 0)
vrshr.s16 q1, q1, #4 // (8 + sum - (sum < 0)) >> 4
vadd.i16 q0, q0, q1 // px + (8 + sum ...) >> 4
.if \min
vmin.s16 q0, q0, q3
vmax.s16 q0, q0, q2 // iclip(px + .., min, max)
.endif
vmovn.u16 d0, q0
.if \w == 8
add r2, r2, #2*16 // tmp += tmp_stride
subs r7, r7, #1 // h--
vst1.8 {d0}, [r0, :64], r1
.else
vst1.32 {d0[0]}, [r0, :32], r1
add r2, r2, #2*16 // tmp += 2*tmp_stride
subs r7, r7, #2 // h -= 2
vst1.32 {d0[1]}, [r0, :32], r1
.endif
// Reset pri_taps and directions back to the original point
sub r5, r5, #2
.if \pri
sub r8, r8, #2
.endif
bgt 1b
vpop {q4-q7}
pop {r4-r9,pc}
endfunc
.endm
.macro filter w
filter_func \w, pri=1, sec=0, min=0, suffix=_pri
filter_func \w, pri=0, sec=1, min=0, suffix=_sec
filter_func \w, pri=1, sec=1, min=1, suffix=_pri_sec
function cdef_filter\w\()_8bpc_neon, export=1
push {r4-r9,lr}
vpush {q4-q7}
ldrd r4, r5, [sp, #92]
ldrd r6, r7, [sp, #100]
ldr r8, [sp, #108]
cmp r3, #0 // pri_strength
bne 1f
b cdef_filter\w\()_sec_neon // only sec
1:
cmp r4, #0 // sec_strength
bne 1f
b cdef_filter\w\()_pri_neon // only pri
1:
b cdef_filter\w\()_pri_sec_neon // both pri and sec
endfunc
.endm
filter 8
filter 4
find_dir 8
.macro load_px_8 d11, d12, d21, d22, w
.if \w == 8
@@ -756,219 +536,3 @@ filter_func_8 \w, pri=1, sec=1, min=1, suffix=_pri_sec
filter_8 8
filter_8 4
const div_table, align=4
.short 840, 420, 280, 210, 168, 140, 120, 105
endconst
const alt_fact, align=4
.short 420, 210, 140, 105, 105, 105, 105, 105, 140, 210, 420, 0
endconst
// int dav1d_cdef_find_dir_8bpc_neon(const pixel *img, const ptrdiff_t stride,
// unsigned *const var)
function cdef_find_dir_8bpc_neon, export=1
push {lr}
vpush {q4-q7}
sub sp, sp, #32 // cost
mov r3, #8
vmov.u16 q1, #0 // q0-q1 sum_diag[0]
vmov.u16 q3, #0 // q2-q3 sum_diag[1]
vmov.u16 q5, #0 // q4-q5 sum_hv[0-1]
vmov.u16 q8, #0 // q6,d16 sum_alt[0]
// q7,d17 sum_alt[1]
vmov.u16 q9, #0 // q9,d22 sum_alt[2]
vmov.u16 q11, #0
vmov.u16 q10, #0 // q10,d23 sum_alt[3]
.irpc i, 01234567
vld1.8 {d30}, [r0, :64], r1
vmov.u8 d31, #128
vsubl.u8 q15, d30, d31 // img[x] - 128
vmov.u16 q14, #0
.if \i == 0
vmov q0, q15 // sum_diag[0]
.else
vext.8 q12, q14, q15, #(16-2*\i)
vext.8 q13, q15, q14, #(16-2*\i)
vadd.i16 q0, q0, q12 // sum_diag[0]
vadd.i16 q1, q1, q13 // sum_diag[0]
.endif
vrev64.16 q13, q15
vswp d26, d27 // [-x]
.if \i == 0
vmov q2, q13 // sum_diag[1]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q13, q13, q14, #(16-2*\i)
vadd.i16 q2, q2, q12 // sum_diag[1]
vadd.i16 q3, q3, q13 // sum_diag[1]
.endif
vpadd.u16 d26, d30, d31 // [(x >> 1)]
vmov.u16 d27, #0
vpadd.u16 d24, d26, d28
vpadd.u16 d24, d24, d28 // [y]
vmov.u16 r12, d24[0]
vadd.i16 q5, q5, q15 // sum_hv[1]
.if \i < 4
vmov.16 d8[\i], r12 // sum_hv[0]
.else
vmov.16 d9[\i-4], r12 // sum_hv[0]
.endif
.if \i == 0
vmov.u16 q6, q13 // sum_alt[0]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q14, q13, q14, #(16-2*\i)
vadd.i16 q6, q6, q12 // sum_alt[0]
vadd.i16 d16, d16, d28 // sum_alt[0]
.endif
vrev64.16 d26, d26 // [-(x >> 1)]
vmov.u16 q14, #0
.if \i == 0
vmov q7, q13 // sum_alt[1]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q13, q13, q14, #(16-2*\i)
vadd.i16 q7, q7, q12 // sum_alt[1]
vadd.i16 d17, d17, d26 // sum_alt[1]
.endif
.if \i < 6
vext.8 q12, q14, q15, #(16-2*(3-(\i/2)))
vext.8 q13, q15, q14, #(16-2*(3-(\i/2)))
vadd.i16 q9, q9, q12 // sum_alt[2]
vadd.i16 d22, d22, d26 // sum_alt[2]
.else
vadd.i16 q9, q9, q15 // sum_alt[2]
.endif
.if \i == 0
vmov q10, q15 // sum_alt[3]
.elseif \i == 1
vadd.i16 q10, q10, q15 // sum_alt[3]
.else
vext.8 q12, q14, q15, #(16-2*(\i/2))
vext.8 q13, q15, q14, #(16-2*(\i/2))
vadd.i16 q10, q10, q12 // sum_alt[3]
vadd.i16 d23, d23, d26 // sum_alt[3]
.endif
.endr
vmov.u32 q15, #105
vmull.s16 q12, d8, d8 // sum_hv[0]*sum_hv[0]
vmlal.s16 q12, d9, d9
vmull.s16 q13, d10, d10 // sum_hv[1]*sum_hv[1]
vmlal.s16 q13, d11, d11
vadd.s32 d8, d24, d25
vadd.s32 d9, d26, d27
vpadd.s32 d8, d8, d9 // cost[2,6] (s16, s17)
vmul.i32 d8, d8, d30 // cost[2,6] *= 105
vrev64.16 q1, q1
vrev64.16 q3, q3
vext.8 q1, q1, q1, #10 // sum_diag[0][14-n]
vext.8 q3, q3, q3, #10 // sum_diag[1][14-n]
vstr s16, [sp, #2*4] // cost[2]
vstr s17, [sp, #6*4] // cost[6]
movrel_local r12, div_table
vld1.16 {q14}, [r12, :128]
vmull.s16 q5, d0, d0 // sum_diag[0]*sum_diag[0]
vmull.s16 q12, d1, d1
vmlal.s16 q5, d2, d2
vmlal.s16 q12, d3, d3
vmull.s16 q0, d4, d4 // sum_diag[1]*sum_diag[1]
vmull.s16 q1, d5, d5
vmlal.s16 q0, d6, d6
vmlal.s16 q1, d7, d7
vmovl.u16 q13, d28 // div_table
vmovl.u16 q14, d29
vmul.i32 q5, q5, q13 // cost[0]
vmla.i32 q5, q12, q14
vmul.i32 q0, q0, q13 // cost[4]
vmla.i32 q0, q1, q14
vadd.i32 d10, d10, d11
vadd.i32 d0, d0, d1
vpadd.i32 d0, d10, d0 // cost[0,4] = s0,s1
movrel_local r12, alt_fact
vld1.16 {d29, d30, d31}, [r12, :64] // div_table[2*m+1] + 105
vstr s0, [sp, #0*4] // cost[0]
vstr s1, [sp, #4*4] // cost[4]
vmovl.u16 q13, d29 // div_table[2*m+1] + 105
vmovl.u16 q14, d30
vmovl.u16 q15, d31
.macro cost_alt dest, s1, s2, s3, s4, s5, s6
vmull.s16 q1, \s1, \s1 // sum_alt[n]*sum_alt[n]
vmull.s16 q2, \s2, \s2
vmull.s16 q3, \s3, \s3
vmull.s16 q5, \s4, \s4 // sum_alt[n]*sum_alt[n]
vmull.s16 q12, \s5, \s5
vmull.s16 q6, \s6, \s6 // q6 overlaps the first \s1-\s2 here
vmul.i32 q1, q1, q13 // sum_alt[n]^2*fact
vmla.i32 q1, q2, q14
vmla.i32 q1, q3, q15
vmul.i32 q5, q5, q13 // sum_alt[n]^2*fact
vmla.i32 q5, q12, q14
vmla.i32 q5, q6, q15
vadd.i32 d2, d2, d3
vadd.i32 d3, d10, d11
vpadd.i32 \dest, d2, d3 // *cost_ptr
.endm
cost_alt d14, d12, d13, d16, d14, d15, d17 // cost[1], cost[3]
cost_alt d15, d18, d19, d22, d20, d21, d23 // cost[5], cost[7]
vstr s28, [sp, #1*4] // cost[1]
vstr s29, [sp, #3*4] // cost[3]
mov r0, #0 // best_dir
vmov.32 r1, d0[0] // best_cost
mov r3, #1 // n
vstr s30, [sp, #5*4] // cost[5]
vstr s31, [sp, #7*4] // cost[7]
vmov.32 r12, d14[0]
.macro find_best s1, s2, s3
.ifnb \s2
vmov.32 lr, \s2
.endif
cmp r12, r1 // cost[n] > best_cost
itt gt
movgt r0, r3 // best_dir = n
movgt r1, r12 // best_cost = cost[n]
.ifnb \s2
add r3, r3, #1 // n++
cmp lr, r1 // cost[n] > best_cost
vmov.32 r12, \s3
itt gt
movgt r0, r3 // best_dir = n
movgt r1, lr // best_cost = cost[n]
add r3, r3, #1 // n++
.endif
.endm
find_best d14[0], d8[0], d14[1]
find_best d14[1], d0[1], d15[0]
find_best d15[0], d8[1], d15[1]
find_best d15[1]
eor r3, r0, #4 // best_dir ^4
ldr r12, [sp, r3, lsl #2]
sub r1, r1, r12 // best_cost - cost[best_dir ^ 4]
lsr r1, r1, #10
str r1, [r2] // *var
add sp, sp, #32
vpop {q4-q7}
pop {pc}
endfunc
/*
* Copyright © 2018, VideoLAN and dav1d authors
* Copyright © 2020, Martin Storsjo
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "src/arm/asm.S"
#include "util.S"
#include "cdef_tmpl.S"
// r1 = d0/q0
// r2 = d2/q1
.macro pad_top_bot_16 s1, s2, w, stride, r1, r2, align, ret
tst r6, #1 // CDEF_HAVE_LEFT
beq 2f
// CDEF_HAVE_LEFT
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
vldr s8, [\s1, #-4]
vld1.16 {\r1}, [\s1, :\align]
vldr s9, [\s1, #2*\w]
vldr s10, [\s2, #-4]
vld1.16 {\r2}, [\s2, :\align]
vldr s11, [\s2, #2*\w]
vstr s8, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s9, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s10, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s11, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
b 3f
.endif
1:
// CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vldr s8, [\s1, #-4]
vld1.16 {\r1}, [\s1, :\align]
vldr s9, [\s2, #-4]
vld1.16 {\r2}, [\s2, :\align]
vstr s8, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s9, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s12, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
b 3f
.endif
2:
// !CDEF_HAVE_LEFT
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// !CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
vld1.16 {\r1}, [\s1, :\align]
vldr s8, [\s1, #2*\w]
vld1.16 {\r2}, [\s2, :\align]
vldr s9, [\s2, #2*\w]
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s8, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s12, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s9, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
b 3f
.endif
1:
// !CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vld1.16 {\r1}, [\s1, :\align]
vld1.16 {\r2}, [\s2, :\align]
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s12, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s12, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
.endif
3:
.endm
// void dav1d_cdef_paddingX_16bpc_neon(uint16_t *tmp, const pixel *src,
// ptrdiff_t src_stride, const pixel (*left)[2],
// const pixel *const top, int h,
// enum CdefEdgeFlags edges);
// r1 = d0/q0
// r2 = d2/q1
.macro padding_func_16 w, stride, r1, r2, align
function cdef_padding\w\()_16bpc_neon, export=1
push {r4-r7,lr}
ldrd r4, r5, [sp, #20]
ldr r6, [sp, #28]
vmov.i16 q3, #0x8000
tst r6, #4 // CDEF_HAVE_TOP
bne 1f
// !CDEF_HAVE_TOP
sub r12, r0, #2*(2*\stride+2)
vmov.i16 q2, #0x8000
vst1.16 {q2,q3}, [r12]!
.if \w == 8
vst1.16 {q2,q3}, [r12]!
.endif
b 3f
1:
// CDEF_HAVE_TOP
add r7, r4, r2
sub r0, r0, #2*(2*\stride)
pad_top_bot_16 r4, r7, \w, \stride, \r1, \r2, \align, 0
// Middle section
3:
tst r6, #1 // CDEF_HAVE_LEFT
beq 2f
// CDEF_HAVE_LEFT
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
0:
vld1.32 {d2[]}, [r3, :32]!
vldr s5, [r1, #2*\w]
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s4, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s5, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 0b
b 3f
1:
// CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vld1.32 {d2[]}, [r3, :32]!
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s4, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 1b
b 3f
2:
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// !CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
0:
vldr s4, [r1, #2*\w]
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s4, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 0b
b 3f
1:
// !CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 1b
3:
tst r6, #8 // CDEF_HAVE_BOTTOM
bne 1f
// !CDEF_HAVE_BOTTOM
sub r12, r0, #4
vmov.i16 q2, #0x8000
vst1.16 {q2,q3}, [r12]!
.if \w == 8
vst1.16 {q2,q3}, [r12]!
.endif
pop {r4-r7,pc}
1:
// CDEF_HAVE_BOTTOM
add r7, r1, r2
pad_top_bot_16 r1, r7, \w, \stride, \r1, \r2, \align, 1
endfunc
.endm
padding_func_16 8, 16, q0, q1, 128
padding_func_16 4, 8, d0, d2, 64
tables
filter 8, 16
filter 4, 16
find_dir 16
/*
* Copyright © 2018, VideoLAN and dav1d authors
* Copyright © 2020, Martin Storsjo
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND