Commits on Source (97)
......@@ -81,6 +81,15 @@ style-check:
fi;
done
x86inc-check:
extends: .debian-amd64-common
stage: style
script:
- git remote rm x86inc 2> /dev/null || true
- git remote add x86inc https://code.videolan.org/videolan/x86inc.asm.git
- git fetch -q x86inc master
- git diff --exit-code x86inc/master:x86inc.asm src/ext/x86/x86inc.asm
allow_failure: true
build-debian:
extends: .debian-amd64-common
......@@ -455,9 +464,12 @@ test-debian-asan:
-Dtestdata_tests=true
-Dlogging=false
-Db_sanitize=address
-Denable_asm=false
- ninja -C build
- cd build && time meson test -v --setup=sanitizer
- cd build
- exit_code=0
- time meson test -v --setup=sanitizer --test-args "--cpumask 0" || exit_code=$((exit_code + $?))
- time meson test -v --setup=sanitizer --test-args "--cpumask 0xff" || exit_code=$((exit_code + $?))
- if [ $exit_code -ne 0 ]; then exit $exit_code; fi
test-debian-msan:
extends:
......
......@@ -12,7 +12,7 @@ The todo list can be found [on the wiki](https://code.videolan.org/videolan/dav1
The codebase is developed with the following assumptions:
For the library:
- C language with C99 version, without the VLA or the Complex (*\_\_STDC_NO_COMPLEX__*) features, and without compiler extension,
- C language with C99 version, without the VLA or the Complex (*\_\_STDC_NO_COMPLEX__*) features, and without compiler extensions. Anonymous structures and unions are the only allowed compiler extensions for internal code.
- x86 asm in .asm files, using the NASM syntax,
- arm/arm64 in .S files, using the GAS syntax limited to subset llvm 5.0's internal assembler supports,
- no C++ is allowed, whatever the version.
......
Changes for 0.8.0 'Eurasian hobby':
-----------------------------------
0.8.0 is a major update for dav1d:
- Improve performance by using a picture buffer pool;
  the improvements can reach 10% in some cases on Windows.
- Support for Apple ARM Silicon
- ARM32 optimizations for 8bit bitdepth for ipred paeth, smooth, cfl
- ARM32 optimizations for 10/12/16bit bitdepth for mc_avg/mask/w_avg,
put/prep 8tap/bilin, wiener and CDEF filters
- ARM64 optimizations for cfl_ac 444 for all bitdepths
- x86 optimizations for MC 8-tap, mc_scaled in AVX2
- x86 optimizations for CDEF in SSE and {put/prep}_{8tap/bilin} in SSSE3
Changes for 0.7.1 'Frigatebird':
------------------------------
......
![dav1d logo](dav1d_logo.png)
![dav1d logo](doc/dav1d_logo.png)
# dav1d
......@@ -30,17 +30,21 @@ The plan is the following:
1. Complete C implementation of the decoder,
2. Provide a usable API,
3. Port to most platforms,
4. Make it fast on desktop, by writing asm for AVX-2 chips.
4. Make it fast on desktop, by writing asm for AVX2 chips.
5. Make it fast on mobile, by writing asm for ARMv8 chips,
6. Make it fast on older desktop, by writing asm for SSSE3+ chips.
6. Make it fast on older desktop, by writing asm for SSSE3+ chips,
7. Make high bit-depth fast on mobile, by writing asm for ARMv8 chips.
### On-going
7. Make it fast on older mobiles, by writing asm for ARMv7 chips,
8. Improve C code base with [various tweaks](https://code.videolan.org/videolan/dav1d/wikis/task-list),
9. Accelerate for less common architectures, like PPC, SSE2 or AVX-512.
8. Make it fast on older mobile, by writing asm for ARMv7 chips,
9. Make high bit-depth fast on older mobile, by writing asm for ARMv7 chips,
10. Improve C code base with [various tweaks](https://code.videolan.org/videolan/dav1d/wikis/task-list),
11. Accelerate for less common architectures, like PPC, SSE2 or AVX-512.
### After
10. Use more GPU, when possible.
12. Make high bit-depth fast on desktop, by writing asm for AVX2 chips,
13. Make high bit-depth fast on older desktop, by writing asm for SSSE3+ chips,
14. Use more GPU, when possible.
# Contribute
......@@ -130,7 +134,7 @@ We think that an implementation written from scratch can achieve faster decoding
## I am not a developer. Can I help?
- Yes. We need testers, bug reporters, and documentation writers.
- Yes. We need testers, bug reporters and documentation writers.
## What about the AV1 patent license?
......@@ -142,3 +146,5 @@ Please read the [AV1 patent license](doc/PATENTS) that applies to the AV1 specif
- We do, but we don't have either the time or the knowledge. Therefore, patches and contributions welcome.
## Where can I find documentation?
- The current library documentation, built from master, can be found [here](https://videolan.videolan.me/dav1d/).
......@@ -16,13 +16,16 @@ The Alliance for Open Media (AOM) for funding this project.
And all the dav1d Authors (git shortlog -sn), including:
Janne Grunau, Ronald S. Bultje, Martin Storsjö, Henrik Gramner, James Almer,
Marvin Scholz, Luc Trudeau, Jean-Baptiste Kempf, Victorien Le Couviour--Tuffet,
David Michael Barr, Hugo Beauzée-Luyssen, Steve Lhomme, Nathan E. Egge,
Francois Cartegnie, Konstantin Pavlov, Liwei Wang, Xuefeng Jiang,
Derek Buitenhuis, Raphaël Zumer, Niklas Haas, Michael Bradshaw, Kyle Siefring,
Raphael Zumer, Boyuan Xiao, Thierry Foucu, Matthias Dressel, Thomas Daede,
Rupert Swarbrick, Jan Beich, Dale Curtis, SmilingWolf, Tristan Laurent,
Vittorio Giovara, Rostislav Pehlivanov, Shiz, skal, Steinar Midtskogen,
Luca Barbato, Justin Bull, Jean-Yves Avenard, Timo Gurr, Fred Barbier,
Anisse Astier, Pablo Stebler, Nicolas Frattaroli, Mark Shuttleworth.
Martin Storsjö, Janne Grunau, Henrik Gramner, Ronald S. Bultje, James Almer,
Marvin Scholz, Luc Trudeau, Victorien Le Couviour--Tuffet, Jean-Baptiste Kempf,
Hugo Beauzée-Luyssen, Matthias Dressel, Konstantin Pavlov, David Michael Barr,
Steve Lhomme, Niklas Haas, B Krishnan Iyer, Francois Cartegnie, Liwei Wang,
Nathan E. Egge, Derek Buitenhuis, Michael Bradshaw, Raphaël Zumer,
Xuefeng Jiang, Luca Barbato, Jan Beich, Wan-Teh Chang, Justin Bull, Boyuan Xiao,
Dale Curtis, Kyle Siefring, Raphael Zumer, Rupert Swarbrick, Thierry Foucu,
Thomas Daede, Colin Lee, Emmanuel Gil Peyrot, Lynne, Michail Alvanos,
Nico Weber, SmilingWolf, Tristan Laurent, Vittorio Giovara, Anisse Astier,
Dmitriy Sychov, Ewout ter Hoeven, Fred Barbier, Jean-Yves Avenard,
Mark Shuttleworth, Matthieu Bouron, Nicolas Frattaroli, Pablo Stebler,
Rostislav Pehlivanov, Shiz, Steinar Midtskogen, Sylvestre Ledru, Timo Gurr,
Tristan Matthews, Xavier Claessens, Xu Guangxin, kossh1 and skal.
......@@ -501,7 +501,7 @@ static int placebo_upload_image(void *cookie, Dav1dPicture *dav1d_pic,
.num_points_uv = { src->num_uv_points[0], src->num_uv_points[1] },
.scaling_shift = src->scaling_shift,
.ar_coeff_lag = src->ar_coeff_lag,
.ar_coeff_shift = src->ar_coeff_shift,
.ar_coeff_shift = (int)src->ar_coeff_shift,
.grain_scale_shift = src->grain_scale_shift,
.uv_mult = { src->uv_mult[0], src->uv_mult[1] },
.uv_mult_luma = { src->uv_luma_mult[0], src->uv_luma_mult[1] },
......
......@@ -65,9 +65,9 @@ typedef struct Dav1dSettings {
int operating_point; ///< select an operating point for scalable AV1 bitstreams (0 - 31)
int all_layers; ///< output all spatial layers of a scalable AV1 bitstream
unsigned frame_size_limit; ///< maximum frame size, in pixels (0 = unlimited)
uint8_t reserved[32]; ///< reserved for future use
Dav1dPicAllocator allocator; ///< Picture allocator callback.
Dav1dLogger logger; ///< Logger callback.
uint8_t reserved[32]; ///< reserved for future use
} Dav1dSettings;
/**
......
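The hunk above moves `reserved[32]` after the allocator and logger callbacks, which changes the layout of `Dav1dSettings` (consistent with the soname bump to 5.0.0 later in this series). A minimal caller-side sketch, assuming nothing beyond the public `dav1d_default_settings()`/`dav1d_open()` API and the fields visible in the hunk: fill the struct through `dav1d_default_settings()` and assign fields by name, so a reordering like this one only requires a rebuild against the new header.

```c
/* Caller-side sketch (not part of this patch): initialize Dav1dSettings via
 * dav1d_default_settings() and set fields by name rather than by position. */
#include <dav1d/dav1d.h>

static Dav1dContext *open_decoder(void) {
    Dav1dSettings s;
    dav1d_default_settings(&s);   /* fills every field, including reserved[] */
    s.operating_point = 0;        /* fields shown in the hunk above */
    s.all_layers = 0;
    s.frame_size_limit = 0;       /* 0 = unlimited */

    Dav1dContext *c = NULL;
    if (dav1d_open(&c, &s) < 0)   /* negative return value on error */
        return NULL;
    return c;
}
```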
......@@ -28,6 +28,7 @@
#ifndef DAV1D_HEADERS_H
#define DAV1D_HEADERS_H
#include <stdint.h>
#include <stddef.h>
// Constants from Section 3. "Symbols and abbreviated terms"
......@@ -95,9 +96,9 @@ typedef struct Dav1dWarpedMotionParams {
union {
struct {
int16_t alpha, beta, gamma, delta;
};
} p;
int16_t abcd[4];
};
} u;
} Dav1dWarpedMotionParams;
enum Dav1dPixelLayout {
......@@ -127,6 +128,7 @@ enum Dav1dColorPrimaries {
DAV1D_COLOR_PRI_SMPTE431 = 11,
DAV1D_COLOR_PRI_SMPTE432 = 12,
DAV1D_COLOR_PRI_EBU3213 = 22,
DAV1D_COLOR_PRI_RESERVED = 255,
};
enum Dav1dTransferCharacteristics {
......@@ -147,6 +149,7 @@ enum Dav1dTransferCharacteristics {
DAV1D_TRC_SMPTE2084 = 16, ///< PQ
DAV1D_TRC_SMPTE428 = 17,
DAV1D_TRC_HLG = 18, ///< hybrid log/gamma (BT.2100 / ARIB STD-B67)
DAV1D_TRC_RESERVED = 255,
};
enum Dav1dMatrixCoefficients {
......@@ -164,6 +167,7 @@ enum Dav1dMatrixCoefficients {
DAV1D_MC_CHROMAT_NCL = 12, ///< Chromaticity-derived
DAV1D_MC_CHROMAT_CL = 13,
DAV1D_MC_ICTCP = 14,
DAV1D_MC_RESERVED = 255,
};
enum Dav1dChromaSamplePosition {
......
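Two caller-visible changes in headers.h above are worth illustrating: the anonymous union in `Dav1dWarpedMotionParams` becomes the named member `u` (with the parameter struct named `p`), and the colour-description enums gain explicit `*_RESERVED = 255` values. A minimal sketch of caller code under those assumptions; the helper names are hypothetical:

```c
/* Hypothetical caller helpers illustrating the headers.h changes above. */
#include <dav1d/headers.h>

/* Field access gains one level: wm->alpha becomes wm->u.p.alpha,
 * and wm->abcd[i] becomes wm->u.abcd[i]. */
static int warp_has_shear(const Dav1dWarpedMotionParams *const wm) {
    return wm->u.p.alpha || wm->u.p.beta || wm->u.p.gamma || wm->u.p.delta;
}

/* The new *_RESERVED enumerators give a stable sentinel for
 * reserved/unrecognized signalled values. */
static int primaries_are_usable(const enum Dav1dColorPrimaries pri) {
    return pri != DAV1D_COLOR_PRI_UNKNOWN && pri != DAV1D_COLOR_PRI_RESERVED;
}
```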
......@@ -31,11 +31,15 @@ version_h_target = configure_file(input: 'version.h.in',
output: 'version.h',
configuration: version_h_data)
dav1d_api_headers = [
'common.h',
'data.h',
'dav1d.h',
'headers.h',
'picture.h',
]
# install headers
install_headers('common.h',
'data.h',
'dav1d.h',
'headers.h',
'picture.h',
install_headers(dav1d_api_headers,
version_h_target,
subdir : 'dav1d')
......@@ -23,14 +23,14 @@
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
project('dav1d', ['c'],
version: '0.7.1',
version: '0.8.0',
default_options: ['c_std=c99',
'warning_level=2',
'buildtype=release',
'b_ndebug=if-release'],
meson_version: '>= 0.47.0')
meson_version: '>= 0.49.0')
dav1d_soname_version = '4.0.2'
dav1d_soname_version = '5.0.0'
dav1d_api_version_array = dav1d_soname_version.split('.')
dav1d_api_version_major = dav1d_api_version_array[0]
dav1d_api_version_minor = dav1d_api_version_array[1]
......@@ -62,7 +62,8 @@ endforeach
# ASM option
is_asm_enabled = (get_option('enable_asm') == true and
(host_machine.cpu_family().startswith('x86') or
(host_machine.cpu_family() == 'x86' or
(host_machine.cpu_family() == 'x86_64' and cc.get_define('__ILP32__') == '') or
host_machine.cpu_family() == 'aarch64' or
host_machine.cpu_family().startswith('arm') or
host_machine.cpu() == 'ppc64le'))
......@@ -117,6 +118,17 @@ if host_machine.system() == 'windows'
thread_compat_dep = declare_dependency(sources : files('src/win32/thread.c'))
rt_dependency = []
rc_version_array = meson.project_version().split('.')
winmod = import('windows')
rc_data = configuration_data()
rc_data.set('PROJECT_VERSION_MAJOR', rc_version_array[0])
rc_data.set('PROJECT_VERSION_MINOR', rc_version_array[1])
rc_data.set('PROJECT_VERSION_REVISION', rc_version_array[2])
rc_data.set('API_VERSION_MAJOR', dav1d_api_version_major)
rc_data.set('API_VERSION_MINOR', dav1d_api_version_minor)
rc_data.set('API_VERSION_REVISION', dav1d_api_version_revision)
rc_data.set('COPYRIGHT_YEARS', '2020')
else
thread_dependency = dependency('threads')
thread_compat_dep = []
......@@ -226,7 +238,7 @@ endif
# Compiler flags that should be set
# But when the compiler does not support them
# it is not an error and is silently tolerated
if cc.get_id() != 'msvc'
if cc.get_argument_syntax() != 'msvc'
optional_arguments += [
'-Wundef',
'-Werror=vla',
......@@ -313,8 +325,8 @@ if host_machine.cpu_family().startswith('x86')
cdata.set('STACK_ALIGNMENT', stack_alignment)
endif
cdata.set10('ARCH_AARCH64', host_machine.cpu_family() == 'aarch64')
cdata.set10('ARCH_ARM', host_machine.cpu_family().startswith('arm'))
cdata.set10('ARCH_AARCH64', host_machine.cpu_family() == 'aarch64' or host_machine.cpu() == 'arm64')
cdata.set10('ARCH_ARM', host_machine.cpu_family().startswith('arm') and host_machine.cpu() != 'arm64')
if (is_asm_enabled and
(host_machine.cpu_family() == 'aarch64' or
host_machine.cpu_family().startswith('arm')))
......@@ -350,6 +362,7 @@ cdata.set10('ARCH_X86_64', host_machine.cpu_family() == 'x86_64')
cdata.set10('ARCH_X86_32', host_machine.cpu_family() == 'x86')
if host_machine.cpu_family().startswith('x86')
cdata_asm.set('private_prefix', 'dav1d')
cdata_asm.set10('ARCH_X86_64', host_machine.cpu_family() == 'x86_64')
cdata_asm.set10('ARCH_X86_32', host_machine.cpu_family() == 'x86')
cdata_asm.set10('PIC', true)
......@@ -424,6 +437,28 @@ if is_asm_enabled and host_machine.cpu_family().startswith('x86')
])
endif
use_gaspp = false
if (is_asm_enabled and
(host_machine.cpu_family() == 'aarch64' or
host_machine.cpu_family().startswith('arm')) and
cc.get_argument_syntax() == 'msvc')
gaspp = find_program('gas-preprocessor.pl')
use_gaspp = true
gaspp_gen = generator(gaspp,
output: '@BASENAME@.obj',
arguments: [
'-as-type', 'armasm',
'-arch', host_machine.cpu_family(),
'--',
host_machine.cpu_family() == 'aarch64' ? 'armasm64' : 'armasm',
'-nologo',
'-I@0@'.format(dav1d_src_root),
'-I@0@/'.format(meson.current_build_dir()),
'@INPUT@',
'-c',
'-o', '@OUTPUT@'
])
endif
# Generate config.h
config_h_target = configure_file(output: 'config.h', configuration: cdata)
......
......@@ -5,7 +5,7 @@ ar = 'ar'
strip = 'strip'
[properties]
c_link_args = ['-m32']
c_link_args = ['-m32', '-Wl,-z,text']
c_args = ['-m32']
[host_machine]
......
......@@ -27,6 +27,7 @@
#include "src/arm/asm.S"
#include "util.S"
#include "cdef_tmpl.S"
// n1 = s0/d0
// w1 = d0/q0
......@@ -190,11 +191,9 @@ function cdef_padding\w\()_8bpc_neon, export=1
beq 1f
// CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
0:
ldrh r12, [r3], #2
vldr \n1, [r1]
vdup.16 d2, r12
vld1.16 {d2[]}, [r3, :16]!
ldrh r12, [r1, #\w]
add r1, r1, r2
load_n_incr d0, r1, r2, \w
subs r5, r5, #1
vmov.16 d2[1], r12
vmovl.u8 q0, d0
......@@ -207,9 +206,8 @@ function cdef_padding\w\()_8bpc_neon, export=1
b 3f
1:
// CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
ldrh r12, [r3], #2
vld1.16 {d2[]}, [r3, :16]!
load_n_incr d0, r1, r2, \w
vdup.16 d2, r12
subs r5, r5, #1
vmovl.u8 q0, d0
vmovl.u8 q1, d2
......@@ -327,230 +325,12 @@ endfunc
padding_func_edged 8, 16, d0, 64
padding_func_edged 4, 8, s0, 32
.macro dir_table w, stride
const directions\w
.byte -1 * \stride + 1, -2 * \stride + 2
.byte 0 * \stride + 1, -1 * \stride + 2
.byte 0 * \stride + 1, 0 * \stride + 2
.byte 0 * \stride + 1, 1 * \stride + 2
.byte 1 * \stride + 1, 2 * \stride + 2
.byte 1 * \stride + 0, 2 * \stride + 1
.byte 1 * \stride + 0, 2 * \stride + 0
.byte 1 * \stride + 0, 2 * \stride - 1
// Repeated, to avoid & 7
.byte -1 * \stride + 1, -2 * \stride + 2
.byte 0 * \stride + 1, -1 * \stride + 2
.byte 0 * \stride + 1, 0 * \stride + 2
.byte 0 * \stride + 1, 1 * \stride + 2
.byte 1 * \stride + 1, 2 * \stride + 2
.byte 1 * \stride + 0, 2 * \stride + 1
endconst
.endm
dir_table 8, 16
dir_table 4, 8
const pri_taps
.byte 4, 2, 3, 3
endconst
tables
.macro load_px d11, d12, d21, d22, w
.if \w == 8
add r6, r2, r9, lsl #1 // x + off
sub r9, r2, r9, lsl #1 // x - off
vld1.16 {\d11,\d12}, [r6] // p0
vld1.16 {\d21,\d22}, [r9] // p1
.else
add r6, r2, r9, lsl #1 // x + off
sub r9, r2, r9, lsl #1 // x - off
vld1.16 {\d11}, [r6] // p0
add r6, r6, #2*8 // += stride
vld1.16 {\d21}, [r9] // p1
add r9, r9, #2*8 // += stride
vld1.16 {\d12}, [r6] // p0
vld1.16 {\d22}, [r9] // p1
.endif
.endm
.macro handle_pixel s1, s2, thresh_vec, shift, tap, min
.if \min
vmin.u16 q2, q2, \s1
vmax.s16 q3, q3, \s1
vmin.u16 q2, q2, \s2
vmax.s16 q3, q3, \s2
.endif
vabd.u16 q8, q0, \s1 // abs(diff)
vabd.u16 q11, q0, \s2 // abs(diff)
vshl.u16 q9, q8, \shift // abs(diff) >> shift
vshl.u16 q12, q11, \shift // abs(diff) >> shift
vqsub.u16 q9, \thresh_vec, q9 // clip = imax(0, threshold - (abs(diff) >> shift))
vqsub.u16 q12, \thresh_vec, q12// clip = imax(0, threshold - (abs(diff) >> shift))
vsub.i16 q10, \s1, q0 // diff = p0 - px
vsub.i16 q13, \s2, q0 // diff = p1 - px
vneg.s16 q8, q9 // -clip
vneg.s16 q11, q12 // -clip
vmin.s16 q10, q10, q9 // imin(diff, clip)
vmin.s16 q13, q13, q12 // imin(diff, clip)
vdup.16 q9, \tap // taps[k]
vmax.s16 q10, q10, q8 // constrain() = imax(imin(diff, clip), -clip)
vmax.s16 q13, q13, q11 // constrain() = imax(imin(diff, clip), -clip)
vmla.i16 q1, q10, q9 // sum += taps[k] * constrain()
vmla.i16 q1, q13, q9 // sum += taps[k] * constrain()
.endm
// void dav1d_cdef_filterX_8bpc_neon(pixel *dst, ptrdiff_t dst_stride,
// const uint16_t *tmp, int pri_strength,
// int sec_strength, int dir, int damping,
// int h, size_t edges);
.macro filter_func w, pri, sec, min, suffix
function cdef_filter\w\suffix\()_neon
cmp r8, #0xf
beq cdef_filter\w\suffix\()_edged_neon
.if \pri
movrel_local r8, pri_taps
and r9, r3, #1
add r8, r8, r9, lsl #1
.endif
movrel_local r9, directions\w
add r5, r9, r5, lsl #1
vmov.u16 d17, #15
vdup.16 d16, r6 // damping
filter 8, 8
filter 4, 8
.if \pri
vdup.16 q5, r3 // threshold
.endif
.if \sec
vdup.16 q7, r4 // threshold
.endif
vmov.16 d8[0], r3
vmov.16 d8[1], r4
vclz.i16 d8, d8 // clz(threshold)
vsub.i16 d8, d17, d8 // ulog2(threshold)
vqsub.u16 d8, d16, d8 // shift = imax(0, damping - ulog2(threshold))
vneg.s16 d8, d8 // -shift
.if \sec
vdup.16 q6, d8[1]
.endif
.if \pri
vdup.16 q4, d8[0]
.endif
1:
.if \w == 8
vld1.16 {q0}, [r2, :128] // px
.else
add r12, r2, #2*8
vld1.16 {d0}, [r2, :64] // px
vld1.16 {d1}, [r12, :64] // px
.endif
vmov.u16 q1, #0 // sum
.if \min
vmov.u16 q2, q0 // min
vmov.u16 q3, q0 // max
.endif
// Instead of loading sec_taps 2, 1 from memory, just set it
// to 2 initially and decrease for the second round.
// This is also used as loop counter.
mov lr, #2 // sec_taps[0]
2:
.if \pri
ldrsb r9, [r5] // off1
load_px d28, d29, d30, d31, \w
.endif
.if \sec
add r5, r5, #4 // +2*2
ldrsb r9, [r5] // off2
.endif
.if \pri
ldrb r12, [r8] // *pri_taps
handle_pixel q14, q15, q5, q4, r12, \min
.endif
.if \sec
load_px d28, d29, d30, d31, \w
add r5, r5, #8 // +2*4
ldrsb r9, [r5] // off3
handle_pixel q14, q15, q7, q6, lr, \min
load_px d28, d29, d30, d31, \w
handle_pixel q14, q15, q7, q6, lr, \min
sub r5, r5, #11 // r5 -= 2*(2+4); r5 += 1;
.else
add r5, r5, #1 // r5 += 1
.endif
subs lr, lr, #1 // sec_tap-- (value)
.if \pri
add r8, r8, #1 // pri_taps++ (pointer)
.endif
bne 2b
vshr.s16 q14, q1, #15 // -(sum < 0)
vadd.i16 q1, q1, q14 // sum - (sum < 0)
vrshr.s16 q1, q1, #4 // (8 + sum - (sum < 0)) >> 4
vadd.i16 q0, q0, q1 // px + (8 + sum ...) >> 4
.if \min
vmin.s16 q0, q0, q3
vmax.s16 q0, q0, q2 // iclip(px + .., min, max)
.endif
vmovn.u16 d0, q0
.if \w == 8
add r2, r2, #2*16 // tmp += tmp_stride
subs r7, r7, #1 // h--
vst1.8 {d0}, [r0, :64], r1
.else
vst1.32 {d0[0]}, [r0, :32], r1
add r2, r2, #2*16 // tmp += 2*tmp_stride
subs r7, r7, #2 // h -= 2
vst1.32 {d0[1]}, [r0, :32], r1
.endif
// Reset pri_taps and directions back to the original point
sub r5, r5, #2
.if \pri
sub r8, r8, #2
.endif
bgt 1b
vpop {q4-q7}
pop {r4-r9,pc}
endfunc
.endm
.macro filter w
filter_func \w, pri=1, sec=0, min=0, suffix=_pri
filter_func \w, pri=0, sec=1, min=0, suffix=_sec
filter_func \w, pri=1, sec=1, min=1, suffix=_pri_sec
function cdef_filter\w\()_8bpc_neon, export=1
push {r4-r9,lr}
vpush {q4-q7}
ldrd r4, r5, [sp, #92]
ldrd r6, r7, [sp, #100]
ldr r8, [sp, #108]
cmp r3, #0 // pri_strength
bne 1f
b cdef_filter\w\()_sec_neon // only sec
1:
cmp r4, #0 // sec_strength
bne 1f
b cdef_filter\w\()_pri_neon // only pri
1:
b cdef_filter\w\()_pri_sec_neon // both pri and sec
endfunc
.endm
filter 8
filter 4
find_dir 8
.macro load_px_8 d11, d12, d21, d22, w
.if \w == 8
......@@ -756,219 +536,3 @@ filter_func_8 \w, pri=1, sec=1, min=1, suffix=_pri_sec
filter_8 8
filter_8 4
const div_table, align=4
.short 840, 420, 280, 210, 168, 140, 120, 105
endconst
const alt_fact, align=4
.short 420, 210, 140, 105, 105, 105, 105, 105, 140, 210, 420, 0
endconst
// int dav1d_cdef_find_dir_8bpc_neon(const pixel *img, const ptrdiff_t stride,
// unsigned *const var)
function cdef_find_dir_8bpc_neon, export=1
push {lr}
vpush {q4-q7}
sub sp, sp, #32 // cost
mov r3, #8
vmov.u16 q1, #0 // q0-q1 sum_diag[0]
vmov.u16 q3, #0 // q2-q3 sum_diag[1]
vmov.u16 q5, #0 // q4-q5 sum_hv[0-1]
vmov.u16 q8, #0 // q6,d16 sum_alt[0]
// q7,d17 sum_alt[1]
vmov.u16 q9, #0 // q9,d22 sum_alt[2]
vmov.u16 q11, #0
vmov.u16 q10, #0 // q10,d23 sum_alt[3]
.irpc i, 01234567
vld1.8 {d30}, [r0, :64], r1
vmov.u8 d31, #128
vsubl.u8 q15, d30, d31 // img[x] - 128
vmov.u16 q14, #0
.if \i == 0
vmov q0, q15 // sum_diag[0]
.else
vext.8 q12, q14, q15, #(16-2*\i)
vext.8 q13, q15, q14, #(16-2*\i)
vadd.i16 q0, q0, q12 // sum_diag[0]
vadd.i16 q1, q1, q13 // sum_diag[0]
.endif
vrev64.16 q13, q15
vswp d26, d27 // [-x]
.if \i == 0
vmov q2, q13 // sum_diag[1]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q13, q13, q14, #(16-2*\i)
vadd.i16 q2, q2, q12 // sum_diag[1]
vadd.i16 q3, q3, q13 // sum_diag[1]
.endif
vpadd.u16 d26, d30, d31 // [(x >> 1)]
vmov.u16 d27, #0
vpadd.u16 d24, d26, d28
vpadd.u16 d24, d24, d28 // [y]
vmov.u16 r12, d24[0]
vadd.i16 q5, q5, q15 // sum_hv[1]
.if \i < 4
vmov.16 d8[\i], r12 // sum_hv[0]
.else
vmov.16 d9[\i-4], r12 // sum_hv[0]
.endif
.if \i == 0
vmov.u16 q6, q13 // sum_alt[0]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q14, q13, q14, #(16-2*\i)
vadd.i16 q6, q6, q12 // sum_alt[0]
vadd.i16 d16, d16, d28 // sum_alt[0]
.endif
vrev64.16 d26, d26 // [-(x >> 1)]
vmov.u16 q14, #0
.if \i == 0
vmov q7, q13 // sum_alt[1]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q13, q13, q14, #(16-2*\i)
vadd.i16 q7, q7, q12 // sum_alt[1]
vadd.i16 d17, d17, d26 // sum_alt[1]
.endif
.if \i < 6
vext.8 q12, q14, q15, #(16-2*(3-(\i/2)))
vext.8 q13, q15, q14, #(16-2*(3-(\i/2)))
vadd.i16 q9, q9, q12 // sum_alt[2]
vadd.i16 d22, d22, d26 // sum_alt[2]
.else
vadd.i16 q9, q9, q15 // sum_alt[2]
.endif
.if \i == 0
vmov q10, q15 // sum_alt[3]
.elseif \i == 1
vadd.i16 q10, q10, q15 // sum_alt[3]
.else
vext.8 q12, q14, q15, #(16-2*(\i/2))
vext.8 q13, q15, q14, #(16-2*(\i/2))
vadd.i16 q10, q10, q12 // sum_alt[3]
vadd.i16 d23, d23, d26 // sum_alt[3]
.endif
.endr
vmov.u32 q15, #105
vmull.s16 q12, d8, d8 // sum_hv[0]*sum_hv[0]
vmlal.s16 q12, d9, d9
vmull.s16 q13, d10, d10 // sum_hv[1]*sum_hv[1]
vmlal.s16 q13, d11, d11
vadd.s32 d8, d24, d25
vadd.s32 d9, d26, d27
vpadd.s32 d8, d8, d9 // cost[2,6] (s16, s17)
vmul.i32 d8, d8, d30 // cost[2,6] *= 105
vrev64.16 q1, q1
vrev64.16 q3, q3
vext.8 q1, q1, q1, #10 // sum_diag[0][14-n]
vext.8 q3, q3, q3, #10 // sum_diag[1][14-n]
vstr s16, [sp, #2*4] // cost[2]
vstr s17, [sp, #6*4] // cost[6]
movrel_local r12, div_table
vld1.16 {q14}, [r12, :128]
vmull.s16 q5, d0, d0 // sum_diag[0]*sum_diag[0]
vmull.s16 q12, d1, d1
vmlal.s16 q5, d2, d2
vmlal.s16 q12, d3, d3
vmull.s16 q0, d4, d4 // sum_diag[1]*sum_diag[1]
vmull.s16 q1, d5, d5
vmlal.s16 q0, d6, d6
vmlal.s16 q1, d7, d7
vmovl.u16 q13, d28 // div_table
vmovl.u16 q14, d29
vmul.i32 q5, q5, q13 // cost[0]
vmla.i32 q5, q12, q14
vmul.i32 q0, q0, q13 // cost[4]
vmla.i32 q0, q1, q14
vadd.i32 d10, d10, d11
vadd.i32 d0, d0, d1
vpadd.i32 d0, d10, d0 // cost[0,4] = s0,s1
movrel_local r12, alt_fact
vld1.16 {d29, d30, d31}, [r12, :64] // div_table[2*m+1] + 105
vstr s0, [sp, #0*4] // cost[0]
vstr s1, [sp, #4*4] // cost[4]
vmovl.u16 q13, d29 // div_table[2*m+1] + 105
vmovl.u16 q14, d30
vmovl.u16 q15, d31
.macro cost_alt dest, s1, s2, s3, s4, s5, s6
vmull.s16 q1, \s1, \s1 // sum_alt[n]*sum_alt[n]
vmull.s16 q2, \s2, \s2
vmull.s16 q3, \s3, \s3
vmull.s16 q5, \s4, \s4 // sum_alt[n]*sum_alt[n]
vmull.s16 q12, \s5, \s5
vmull.s16 q6, \s6, \s6 // q6 overlaps the first \s1-\s2 here
vmul.i32 q1, q1, q13 // sum_alt[n]^2*fact
vmla.i32 q1, q2, q14
vmla.i32 q1, q3, q15
vmul.i32 q5, q5, q13 // sum_alt[n]^2*fact
vmla.i32 q5, q12, q14
vmla.i32 q5, q6, q15
vadd.i32 d2, d2, d3
vadd.i32 d3, d10, d11
vpadd.i32 \dest, d2, d3 // *cost_ptr
.endm
cost_alt d14, d12, d13, d16, d14, d15, d17 // cost[1], cost[3]
cost_alt d15, d18, d19, d22, d20, d21, d23 // cost[5], cost[7]
vstr s28, [sp, #1*4] // cost[1]
vstr s29, [sp, #3*4] // cost[3]
mov r0, #0 // best_dir
vmov.32 r1, d0[0] // best_cost
mov r3, #1 // n
vstr s30, [sp, #5*4] // cost[5]
vstr s31, [sp, #7*4] // cost[7]
vmov.32 r12, d14[0]
.macro find_best s1, s2, s3
.ifnb \s2
vmov.32 lr, \s2
.endif
cmp r12, r1 // cost[n] > best_cost
itt gt
movgt r0, r3 // best_dir = n
movgt r1, r12 // best_cost = cost[n]
.ifnb \s2
add r3, r3, #1 // n++
cmp lr, r1 // cost[n] > best_cost
vmov.32 r12, \s3
itt gt
movgt r0, r3 // best_dir = n
movgt r1, lr // best_cost = cost[n]
add r3, r3, #1 // n++
.endif
.endm
find_best d14[0], d8[0], d14[1]
find_best d14[1], d0[1], d15[0]
find_best d15[0], d8[1], d15[1]
find_best d15[1]
eor r3, r0, #4 // best_dir ^4
ldr r12, [sp, r3, lsl #2]
sub r1, r1, r12 // best_cost - cost[best_dir ^ 4]
lsr r1, r1, #10
str r1, [r2] // *var
add sp, sp, #32
vpop {q4-q7}
pop {pc}
endfunc
/*
* Copyright © 2018, VideoLAN and dav1d authors
* Copyright © 2020, Martin Storsjo
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "src/arm/asm.S"
#include "util.S"
#include "cdef_tmpl.S"
// r1 = d0/q0
// r2 = d2/q1
.macro pad_top_bot_16 s1, s2, w, stride, r1, r2, align, ret
tst r6, #1 // CDEF_HAVE_LEFT
beq 2f
// CDEF_HAVE_LEFT
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
vldr s8, [\s1, #-4]
vld1.16 {\r1}, [\s1, :\align]
vldr s9, [\s1, #2*\w]
vldr s10, [\s2, #-4]
vld1.16 {\r2}, [\s2, :\align]
vldr s11, [\s2, #2*\w]
vstr s8, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s9, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s10, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s11, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
b 3f
.endif
1:
// CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vldr s8, [\s1, #-4]
vld1.16 {\r1}, [\s1, :\align]
vldr s9, [\s2, #-4]
vld1.16 {\r2}, [\s2, :\align]
vstr s8, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s9, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s12, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
b 3f
.endif
2:
// !CDEF_HAVE_LEFT
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// !CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
vld1.16 {\r1}, [\s1, :\align]
vldr s8, [\s1, #2*\w]
vld1.16 {\r2}, [\s2, :\align]
vldr s9, [\s2, #2*\w]
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s8, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s12, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s9, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
b 3f
.endif
1:
// !CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vld1.16 {\r1}, [\s1, :\align]
vld1.16 {\r2}, [\s2, :\align]
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
vstr s12, [r0, #-4]
vst1.16 {\r2}, [r0, :\align]
vstr s12, [r0, #2*\w]
.if \ret
pop {r4-r7,pc}
.else
add r0, r0, #2*\stride
.endif
3:
.endm
// void dav1d_cdef_paddingX_16bpc_neon(uint16_t *tmp, const pixel *src,
// ptrdiff_t src_stride, const pixel (*left)[2],
// const pixel *const top, int h,
// enum CdefEdgeFlags edges);
// r1 = d0/q0
// r2 = d2/q1
.macro padding_func_16 w, stride, r1, r2, align
function cdef_padding\w\()_16bpc_neon, export=1
push {r4-r7,lr}
ldrd r4, r5, [sp, #20]
ldr r6, [sp, #28]
vmov.i16 q3, #0x8000
tst r6, #4 // CDEF_HAVE_TOP
bne 1f
// !CDEF_HAVE_TOP
sub r12, r0, #2*(2*\stride+2)
vmov.i16 q2, #0x8000
vst1.16 {q2,q3}, [r12]!
.if \w == 8
vst1.16 {q2,q3}, [r12]!
.endif
b 3f
1:
// CDEF_HAVE_TOP
add r7, r4, r2
sub r0, r0, #2*(2*\stride)
pad_top_bot_16 r4, r7, \w, \stride, \r1, \r2, \align, 0
// Middle section
3:
tst r6, #1 // CDEF_HAVE_LEFT
beq 2f
// CDEF_HAVE_LEFT
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
0:
vld1.32 {d2[]}, [r3, :32]!
vldr s5, [r1, #2*\w]
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s4, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s5, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 0b
b 3f
1:
// CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vld1.32 {d2[]}, [r3, :32]!
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s4, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 1b
b 3f
2:
tst r6, #2 // CDEF_HAVE_RIGHT
beq 1f
// !CDEF_HAVE_LEFT+CDEF_HAVE_RIGHT
0:
vldr s4, [r1, #2*\w]
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s4, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 0b
b 3f
1:
// !CDEF_HAVE_LEFT+!CDEF_HAVE_RIGHT
vld1.16 {\r1}, [r1, :\align], r2
subs r5, r5, #1
vstr s12, [r0, #-4]
vst1.16 {\r1}, [r0, :\align]
vstr s12, [r0, #2*\w]
add r0, r0, #2*\stride
bgt 1b
3:
tst r6, #8 // CDEF_HAVE_BOTTOM
bne 1f
// !CDEF_HAVE_BOTTOM
sub r12, r0, #4
vmov.i16 q2, #0x8000
vst1.16 {q2,q3}, [r12]!
.if \w == 8
vst1.16 {q2,q3}, [r12]!
.endif
pop {r4-r7,pc}
1:
// CDEF_HAVE_BOTTOM
add r7, r1, r2
pad_top_bot_16 r1, r7, \w, \stride, \r1, \r2, \align, 1
endfunc
.endm
padding_func_16 8, 16, q0, q1, 128
padding_func_16 4, 8, d0, d2, 64
tables
filter 8, 16
filter 4, 16
find_dir 16
/*
* Copyright © 2018, VideoLAN and dav1d authors
* Copyright © 2020, Martin Storsjo
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "src/arm/asm.S"
#include "util.S"
.macro dir_table w, stride
const directions\w
.byte -1 * \stride + 1, -2 * \stride + 2
.byte 0 * \stride + 1, -1 * \stride + 2
.byte 0 * \stride + 1, 0 * \stride + 2
.byte 0 * \stride + 1, 1 * \stride + 2
.byte 1 * \stride + 1, 2 * \stride + 2
.byte 1 * \stride + 0, 2 * \stride + 1
.byte 1 * \stride + 0, 2 * \stride + 0
.byte 1 * \stride + 0, 2 * \stride - 1
// Repeated, to avoid & 7
.byte -1 * \stride + 1, -2 * \stride + 2
.byte 0 * \stride + 1, -1 * \stride + 2
.byte 0 * \stride + 1, 0 * \stride + 2
.byte 0 * \stride + 1, 1 * \stride + 2
.byte 1 * \stride + 1, 2 * \stride + 2
.byte 1 * \stride + 0, 2 * \stride + 1
endconst
.endm
.macro tables
dir_table 8, 16
dir_table 4, 8
const pri_taps
.byte 4, 2, 3, 3
endconst
.endm
.macro load_px d11, d12, d21, d22, w
.if \w == 8
add r6, r2, r9, lsl #1 // x + off
sub r9, r2, r9, lsl #1 // x - off
vld1.16 {\d11,\d12}, [r6] // p0
vld1.16 {\d21,\d22}, [r9] // p1
.else
add r6, r2, r9, lsl #1 // x + off
sub r9, r2, r9, lsl #1 // x - off
vld1.16 {\d11}, [r6] // p0
add r6, r6, #2*8 // += stride
vld1.16 {\d21}, [r9] // p1
add r9, r9, #2*8 // += stride
vld1.16 {\d12}, [r6] // p0
vld1.16 {\d22}, [r9] // p1
.endif
.endm
.macro handle_pixel s1, s2, thresh_vec, shift, tap, min
.if \min
vmin.u16 q2, q2, \s1
vmax.s16 q3, q3, \s1
vmin.u16 q2, q2, \s2
vmax.s16 q3, q3, \s2
.endif
vabd.u16 q8, q0, \s1 // abs(diff)
vabd.u16 q11, q0, \s2 // abs(diff)
vshl.u16 q9, q8, \shift // abs(diff) >> shift
vshl.u16 q12, q11, \shift // abs(diff) >> shift
vqsub.u16 q9, \thresh_vec, q9 // clip = imax(0, threshold - (abs(diff) >> shift))
vqsub.u16 q12, \thresh_vec, q12// clip = imax(0, threshold - (abs(diff) >> shift))
vsub.i16 q10, \s1, q0 // diff = p0 - px
vsub.i16 q13, \s2, q0 // diff = p1 - px
vneg.s16 q8, q9 // -clip
vneg.s16 q11, q12 // -clip
vmin.s16 q10, q10, q9 // imin(diff, clip)
vmin.s16 q13, q13, q12 // imin(diff, clip)
vdup.16 q9, \tap // taps[k]
vmax.s16 q10, q10, q8 // constrain() = imax(imin(diff, clip), -clip)
vmax.s16 q13, q13, q11 // constrain() = imax(imin(diff, clip), -clip)
vmla.i16 q1, q10, q9 // sum += taps[k] * constrain()
vmla.i16 q1, q13, q9 // sum += taps[k] * constrain()
.endm
// void dav1d_cdef_filterX_Ybpc_neon(pixel *dst, ptrdiff_t dst_stride,
// const uint16_t *tmp, int pri_strength,
// int sec_strength, int dir, int damping,
// int h, size_t edges);
.macro filter_func w, bpc, pri, sec, min, suffix
function cdef_filter\w\suffix\()_\bpc\()bpc_neon
.if \bpc == 8
cmp r8, #0xf
beq cdef_filter\w\suffix\()_edged_neon
.endif
.if \pri
.if \bpc == 16
clz r9, r9
sub r9, r9, #24 // -bitdepth_min_8
neg r9, r9 // bitdepth_min_8
.endif
movrel_local r8, pri_taps
.if \bpc == 16
lsr r9, r3, r9 // pri_strength >> bitdepth_min_8
and r9, r9, #1 // (pri_strength >> bitdepth_min_8) & 1
.else
and r9, r3, #1
.endif
add r8, r8, r9, lsl #1
.endif
movrel_local r9, directions\w
add r5, r9, r5, lsl #1
vmov.u16 d17, #15
vdup.16 d16, r6 // damping
.if \pri
vdup.16 q5, r3 // threshold
.endif
.if \sec
vdup.16 q7, r4 // threshold
.endif
vmov.16 d8[0], r3
vmov.16 d8[1], r4
vclz.i16 d8, d8 // clz(threshold)
vsub.i16 d8, d17, d8 // ulog2(threshold)
vqsub.u16 d8, d16, d8 // shift = imax(0, damping - ulog2(threshold))
vneg.s16 d8, d8 // -shift
.if \sec
vdup.16 q6, d8[1]
.endif
.if \pri
vdup.16 q4, d8[0]
.endif
1:
.if \w == 8
vld1.16 {q0}, [r2, :128] // px
.else
add r12, r2, #2*8
vld1.16 {d0}, [r2, :64] // px
vld1.16 {d1}, [r12, :64] // px
.endif
vmov.u16 q1, #0 // sum
.if \min
vmov.u16 q2, q0 // min
vmov.u16 q3, q0 // max
.endif
// Instead of loading sec_taps 2, 1 from memory, just set it
// to 2 initially and decrease for the second round.
// This is also used as loop counter.
mov lr, #2 // sec_taps[0]
2:
.if \pri
ldrsb r9, [r5] // off1
load_px d28, d29, d30, d31, \w
.endif
.if \sec
add r5, r5, #4 // +2*2
ldrsb r9, [r5] // off2
.endif
.if \pri
ldrb r12, [r8] // *pri_taps
handle_pixel q14, q15, q5, q4, r12, \min
.endif
.if \sec
load_px d28, d29, d30, d31, \w
add r5, r5, #8 // +2*4
ldrsb r9, [r5] // off3
handle_pixel q14, q15, q7, q6, lr, \min
load_px d28, d29, d30, d31, \w
handle_pixel q14, q15, q7, q6, lr, \min
sub r5, r5, #11 // r5 -= 2*(2+4); r5 += 1;
.else
add r5, r5, #1 // r5 += 1
.endif
subs lr, lr, #1 // sec_tap-- (value)
.if \pri
add r8, r8, #1 // pri_taps++ (pointer)
.endif
bne 2b
vshr.s16 q14, q1, #15 // -(sum < 0)
vadd.i16 q1, q1, q14 // sum - (sum < 0)
vrshr.s16 q1, q1, #4 // (8 + sum - (sum < 0)) >> 4
vadd.i16 q0, q0, q1 // px + (8 + sum ...) >> 4
.if \min
vmin.s16 q0, q0, q3
vmax.s16 q0, q0, q2 // iclip(px + .., min, max)
.endif
.if \bpc == 8
vmovn.u16 d0, q0
.endif
.if \w == 8
add r2, r2, #2*16 // tmp += tmp_stride
subs r7, r7, #1 // h--
.if \bpc == 8
vst1.8 {d0}, [r0, :64], r1
.else
vst1.16 {q0}, [r0, :128], r1
.endif
.else
.if \bpc == 8
vst1.32 {d0[0]}, [r0, :32], r1
.else
vst1.16 {d0}, [r0, :64], r1
.endif
add r2, r2, #2*16 // tmp += 2*tmp_stride
subs r7, r7, #2 // h -= 2
.if \bpc == 8
vst1.32 {d0[1]}, [r0, :32], r1
.else
vst1.16 {d1}, [r0, :64], r1
.endif
.endif
// Reset pri_taps and directions back to the original point
sub r5, r5, #2
.if \pri
sub r8, r8, #2
.endif
bgt 1b
vpop {q4-q7}
pop {r4-r9,pc}
endfunc
.endm
.macro filter w, bpc
filter_func \w, \bpc, pri=1, sec=0, min=0, suffix=_pri
filter_func \w, \bpc, pri=0, sec=1, min=0, suffix=_sec
filter_func \w, \bpc, pri=1, sec=1, min=1, suffix=_pri_sec
function cdef_filter\w\()_\bpc\()bpc_neon, export=1
push {r4-r9,lr}
vpush {q4-q7}
ldrd r4, r5, [sp, #92]
ldrd r6, r7, [sp, #100]
.if \bpc == 16
ldrd r8, r9, [sp, #108]
.else
ldr r8, [sp, #108]
.endif
cmp r3, #0 // pri_strength
bne 1f
b cdef_filter\w\()_sec_\bpc\()bpc_neon // only sec
1:
cmp r4, #0 // sec_strength
bne 1f
b cdef_filter\w\()_pri_\bpc\()bpc_neon // only pri
1:
b cdef_filter\w\()_pri_sec_\bpc\()bpc_neon // both pri and sec
endfunc
.endm
const div_table, align=4
.short 840, 420, 280, 210, 168, 140, 120, 105
endconst
const alt_fact, align=4
.short 420, 210, 140, 105, 105, 105, 105, 105, 140, 210, 420, 0
endconst
.macro cost_alt dest, s1, s2, s3, s4, s5, s6
vmull.s16 q1, \s1, \s1 // sum_alt[n]*sum_alt[n]
vmull.s16 q2, \s2, \s2
vmull.s16 q3, \s3, \s3
vmull.s16 q5, \s4, \s4 // sum_alt[n]*sum_alt[n]
vmull.s16 q12, \s5, \s5
vmull.s16 q6, \s6, \s6 // q6 overlaps the first \s1-\s2 here
vmul.i32 q1, q1, q13 // sum_alt[n]^2*fact
vmla.i32 q1, q2, q14
vmla.i32 q1, q3, q15
vmul.i32 q5, q5, q13 // sum_alt[n]^2*fact
vmla.i32 q5, q12, q14
vmla.i32 q5, q6, q15
vadd.i32 d2, d2, d3
vadd.i32 d3, d10, d11
vpadd.i32 \dest, d2, d3 // *cost_ptr
.endm
.macro find_best s1, s2, s3
.ifnb \s2
vmov.32 lr, \s2
.endif
cmp r12, r1 // cost[n] > best_cost
itt gt
movgt r0, r3 // best_dir = n
movgt r1, r12 // best_cost = cost[n]
.ifnb \s2
add r3, r3, #1 // n++
cmp lr, r1 // cost[n] > best_cost
vmov.32 r12, \s3
itt gt
movgt r0, r3 // best_dir = n
movgt r1, lr // best_cost = cost[n]
add r3, r3, #1 // n++
.endif
.endm
// int dav1d_cdef_find_dir_Xbpc_neon(const pixel *img, const ptrdiff_t stride,
// unsigned *const var)
.macro find_dir bpc
function cdef_find_dir_\bpc\()bpc_neon, export=1
push {lr}
vpush {q4-q7}
.if \bpc == 16
clz r3, r3 // clz(bitdepth_max)
sub lr, r3, #24 // -bitdepth_min_8
.endif
sub sp, sp, #32 // cost
mov r3, #8
vmov.u16 q1, #0 // q0-q1 sum_diag[0]
vmov.u16 q3, #0 // q2-q3 sum_diag[1]
vmov.u16 q5, #0 // q4-q5 sum_hv[0-1]
vmov.u16 q8, #0 // q6,d16 sum_alt[0]
// q7,d17 sum_alt[1]
vmov.u16 q9, #0 // q9,d22 sum_alt[2]
vmov.u16 q11, #0
vmov.u16 q10, #0 // q10,d23 sum_alt[3]
.irpc i, 01234567
.if \bpc == 8
vld1.8 {d30}, [r0, :64], r1
vmov.u8 d31, #128
vsubl.u8 q15, d30, d31 // img[x] - 128
.else
vld1.16 {q15}, [r0, :128], r1
vdup.16 q14, lr // -bitdepth_min_8
vshl.u16 q15, q15, q14
vmov.u16 q14, #128
vsub.i16 q15, q15, q14 // img[x] - 128
.endif
vmov.u16 q14, #0
.if \i == 0
vmov q0, q15 // sum_diag[0]
.else
vext.8 q12, q14, q15, #(16-2*\i)
vext.8 q13, q15, q14, #(16-2*\i)
vadd.i16 q0, q0, q12 // sum_diag[0]
vadd.i16 q1, q1, q13 // sum_diag[0]
.endif
vrev64.16 q13, q15
vswp d26, d27 // [-x]
.if \i == 0
vmov q2, q13 // sum_diag[1]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q13, q13, q14, #(16-2*\i)
vadd.i16 q2, q2, q12 // sum_diag[1]
vadd.i16 q3, q3, q13 // sum_diag[1]
.endif
vpadd.u16 d26, d30, d31 // [(x >> 1)]
vmov.u16 d27, #0
vpadd.u16 d24, d26, d28
vpadd.u16 d24, d24, d28 // [y]
vmov.u16 r12, d24[0]
vadd.i16 q5, q5, q15 // sum_hv[1]
.if \i < 4
vmov.16 d8[\i], r12 // sum_hv[0]
.else
vmov.16 d9[\i-4], r12 // sum_hv[0]
.endif
.if \i == 0
vmov.u16 q6, q13 // sum_alt[0]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q14, q13, q14, #(16-2*\i)
vadd.i16 q6, q6, q12 // sum_alt[0]
vadd.i16 d16, d16, d28 // sum_alt[0]
.endif
vrev64.16 d26, d26 // [-(x >> 1)]
vmov.u16 q14, #0
.if \i == 0
vmov q7, q13 // sum_alt[1]
.else
vext.8 q12, q14, q13, #(16-2*\i)
vext.8 q13, q13, q14, #(16-2*\i)
vadd.i16 q7, q7, q12 // sum_alt[1]
vadd.i16 d17, d17, d26 // sum_alt[1]
.endif
.if \i < 6
vext.8 q12, q14, q15, #(16-2*(3-(\i/2)))
vext.8 q13, q15, q14, #(16-2*(3-(\i/2)))
vadd.i16 q9, q9, q12 // sum_alt[2]
vadd.i16 d22, d22, d26 // sum_alt[2]
.else
vadd.i16 q9, q9, q15 // sum_alt[2]
.endif
.if \i == 0
vmov q10, q15 // sum_alt[3]
.elseif \i == 1
vadd.i16 q10, q10, q15 // sum_alt[3]
.else
vext.8 q12, q14, q15, #(16-2*(\i/2))
vext.8 q13, q15, q14, #(16-2*(\i/2))
vadd.i16 q10, q10, q12 // sum_alt[3]
vadd.i16 d23, d23, d26 // sum_alt[3]
.endif
.endr
vmov.u32 q15, #105
vmull.s16 q12, d8, d8 // sum_hv[0]*sum_hv[0]
vmlal.s16 q12, d9, d9
vmull.s16 q13, d10, d10 // sum_hv[1]*sum_hv[1]
vmlal.s16 q13, d11, d11
vadd.s32 d8, d24, d25
vadd.s32 d9, d26, d27
vpadd.s32 d8, d8, d9 // cost[2,6] (s16, s17)
vmul.i32 d8, d8, d30 // cost[2,6] *= 105
vrev64.16 q1, q1
vrev64.16 q3, q3
vext.8 q1, q1, q1, #10 // sum_diag[0][14-n]
vext.8 q3, q3, q3, #10 // sum_diag[1][14-n]
vstr s16, [sp, #2*4] // cost[2]
vstr s17, [sp, #6*4] // cost[6]
movrel_local r12, div_table
vld1.16 {q14}, [r12, :128]
vmull.s16 q5, d0, d0 // sum_diag[0]*sum_diag[0]
vmull.s16 q12, d1, d1
vmlal.s16 q5, d2, d2
vmlal.s16 q12, d3, d3
vmull.s16 q0, d4, d4 // sum_diag[1]*sum_diag[1]
vmull.s16 q1, d5, d5
vmlal.s16 q0, d6, d6
vmlal.s16 q1, d7, d7
vmovl.u16 q13, d28 // div_table
vmovl.u16 q14, d29
vmul.i32 q5, q5, q13 // cost[0]
vmla.i32 q5, q12, q14
vmul.i32 q0, q0, q13 // cost[4]
vmla.i32 q0, q1, q14
vadd.i32 d10, d10, d11
vadd.i32 d0, d0, d1
vpadd.i32 d0, d10, d0 // cost[0,4] = s0,s1
movrel_local r12, alt_fact
vld1.16 {d29, d30, d31}, [r12, :64] // div_table[2*m+1] + 105
vstr s0, [sp, #0*4] // cost[0]
vstr s1, [sp, #4*4] // cost[4]
vmovl.u16 q13, d29 // div_table[2*m+1] + 105
vmovl.u16 q14, d30
vmovl.u16 q15, d31
cost_alt d14, d12, d13, d16, d14, d15, d17 // cost[1], cost[3]
cost_alt d15, d18, d19, d22, d20, d21, d23 // cost[5], cost[7]
vstr s28, [sp, #1*4] // cost[1]
vstr s29, [sp, #3*4] // cost[3]
mov r0, #0 // best_dir
vmov.32 r1, d0[0] // best_cost
mov r3, #1 // n
vstr s30, [sp, #5*4] // cost[5]
vstr s31, [sp, #7*4] // cost[7]
vmov.32 r12, d14[0]
find_best d14[0], d8[0], d14[1]
find_best d14[1], d0[1], d15[0]
find_best d15[0], d8[1], d15[1]
find_best d15[1]
eor r3, r0, #4 // best_dir ^4
ldr r12, [sp, r3, lsl #2]
sub r1, r1, r12 // best_cost - cost[best_dir ^ 4]
lsr r1, r1, #10
str r1, [r2] // *var
add sp, sp, #32
vpop {q4-q7}
pop {pc}
endfunc
.endm
This diff is collapsed.
......@@ -40,8 +40,8 @@ function wiener_filter_h_8bpc_neon, export=1
mov r8, r5
vld1.16 {q0}, [r4]
movw r9, #(1 << 14) - (1 << 2)
vdup.16 q14, r9
vmov.s16 q15, #2048
vdup.16 q14, r9
vmov.s16 q15, #2048
// Calculate mid_stride
add r10, r5, #7
bic r10, r10, #7
......@@ -108,8 +108,8 @@ function wiener_filter_h_8bpc_neon, export=1
0:
// !LR_HAVE_LEFT, fill q1 with the leftmost byte
// and shift q2 to have 3x the first byte at the front.
vdup.8 q1, d4[0]
vdup.8 q8, d18[0]
vdup.8 q1, d4[0]
vdup.8 q8, d18[0]
// Move r2 back to account for the last 3 bytes we loaded before,
// which we shifted out.
sub r2, r2, #3
......@@ -127,7 +127,7 @@ function wiener_filter_h_8bpc_neon, export=1
bne 4f
// If we'll need to pad the right edge, load that byte to pad with
// here since we can find it pretty easily from here.
sub r9, r5, #14
sub r9, r5, #14
ldrb r11, [r2, r9]
ldrb r9, [lr, r9]
// Fill q12/q13 with the right padding pixel
......@@ -144,7 +144,6 @@ function wiener_filter_h_8bpc_neon, export=1
b 6f
4: // Loop horizontally
.macro filter_8
// This is tuned as some sort of compromise between Cortex A7, A8,
// A9 and A53.
vmul.s16 q3, q1, d0[0]
......@@ -187,8 +186,6 @@ function wiener_filter_h_8bpc_neon, export=1
vshr.s16 q10, q10, #3
vadd.s16 q3, q3, q15
vadd.s16 q10, q10, q15
.endm
filter_8
vst1.16 {q3}, [r0, :128]!
vst1.16 {q10}, [r12, :128]!
......@@ -206,50 +203,43 @@ function wiener_filter_h_8bpc_neon, export=1
5: // Filter 4 pixels, 7 <= w < 11
.macro filter_4
vext.8 d20, d2, d3, #2
vext.8 d21, d2, d3, #4
vext.8 d22, d2, d3, #6
vext.8 d23, d3, d4, #2
vext.8 d8, d3, d4, #4
vmul.s16 d6, d2, d0[0]
vext.8 q10, q1, q2, #2
vext.8 q11, q1, q2, #4
vmla.s16 d6, d20, d0[1]
vmla.s16 d6, d22, d0[2]
vext.8 q10, q1, q2, #6
vext.8 q11, q1, q2, #8
vmla.s16 d6, d20, d0[3]
vmla.s16 d6, d22, d1[0]
vext.8 q10, q1, q2, #10
vext.8 q11, q1, q2, #12
vmla.s16 d6, d20, d1[1]
vmla.s16 d6, d22, d1[2]
vmul.s16 d20, d16, d0[0]
vext.8 q11, q8, q9, #2
vext.8 q4, q8, q9, #4
vmla.s16 d20, d22, d0[1]
vmla.s16 d20, d8, d0[2]
vext.8 q11, q8, q9, #6
vext.8 q4, q8, q9, #8
vmla.s16 d20, d22, d0[3]
vmla.s16 d20, d8, d1[0]
vext.8 q11, q8, q9, #10
vext.8 q4, q8, q9, #12
vmla.s16 d20, d22, d1[1]
vmla.s16 d20, d8, d1[2]
vext.8 q11, q1, q2, #6
vshl.s16 d22, d22, #7
vsub.s16 d22, d22, d28
vqadd.s16 d6, d6, d22
vext.8 q11, q8, q9, #6
vshl.s16 d22, d22, #7
vsub.s16 d22, d22, d28
vqadd.s16 d20, d20, d22
vshr.s16 d6, d6, #3
vshr.s16 d20, d20, #3
vadd.s16 d6, d6, d30
vadd.s16 d20, d20, d30
vmla.s16 d6, d21, d0[2]
vmla.s16 d6, d22, d0[3]
vmla.s16 d6, d3, d1[0]
vmla.s16 d6, d23, d1[1]
vmla.s16 d6, d8, d1[2]
vext.8 d20, d16, d17, #2
vext.8 d21, d16, d17, #4
vext.8 d22, d16, d17, #6
vext.8 d23, d17, d18, #2
vext.8 d8, d17, d18, #4
vmul.s16 d7, d16, d0[0]
vmla.s16 d7, d20, d0[1]
vmla.s16 d7, d21, d0[2]
vmla.s16 d7, d22, d0[3]
vmla.s16 d7, d17, d1[0]
vmla.s16 d7, d23, d1[1]
vmla.s16 d7, d8, d1[2]
vext.8 d22, d2, d3, #6
vext.8 d23, d16, d17, #6
vshl.s16 q11, q11, #7
vsub.s16 q11, q11, q14
vqadd.s16 q3, q3, q11
vshr.s16 q3, q3, #3
vadd.s16 q3, q3, q15
.endm
filter_4
vst1.16 {d6}, [r0, :64]!
vst1.16 {d20}, [r12, :64]!
vst1.16 {d7}, [r12, :64]!
subs r5, r5, #4 // 3 <= w < 7
vext.8 q1, q1, q2, #8
......@@ -323,7 +313,7 @@ L(variable_shift_tbl):
// w >= 4, filter 4 pixels
filter_4
vst1.16 {d6}, [r0, :64]!
vst1.16 {d20}, [r12, :64]!
vst1.16 {d7}, [r12, :64]!
subs r5, r5, #4 // 0 <= w < 4
vext.8 q1, q1, q2, #8
vext.8 q8, q8, q9, #8
......@@ -338,11 +328,11 @@ L(variable_shift_tbl):
vdup.16 d25, d16[3]
vpadd.s16 d6, d6, d6
vtrn.16 d24, d25
vshl.s16 d24, d24, #7
vsub.s16 d24, d24, d28
vqadd.s16 d6, d6, d24
vshr.s16 d6, d6, #3
vadd.s16 d6, d6, d30
vshl.s16 d24, d24, #7
vsub.s16 d24, d24, d28
vqadd.s16 d6, d6, d24
vshr.s16 d6, d6, #3
vadd.s16 d6, d6, d30
vst1.s16 {d6[0]}, [r0, :16]!
vst1.s16 {d6[1]}, [r12, :16]!
subs r5, r5, #1
......@@ -363,7 +353,6 @@ L(variable_shift_tbl):
0:
vpop {q4}
pop {r4-r11,pc}
.purgem filter_8
.purgem filter_4
endfunc
......@@ -422,22 +411,22 @@ function wiener_filter_v_8bpc_neon, export=1
// Interleaving the mul/mla chains actually hurts performance
// significantly on Cortex A53, thus keeping mul/mla tightly
// chained like this.
vmull.s16 q2, d16, d0[0]
vmlal.s16 q2, d18, d0[1]
vmlal.s16 q2, d20, d0[2]
vmlal.s16 q2, d22, d0[3]
vmlal.s16 q2, d24, d1[0]
vmlal.s16 q2, d26, d1[1]
vmlal.s16 q2, d28, d1[2]
vmull.s16 q3, d17, d0[0]
vmlal.s16 q3, d19, d0[1]
vmlal.s16 q3, d21, d0[2]
vmlal.s16 q3, d23, d0[3]
vmlal.s16 q3, d25, d1[0]
vmlal.s16 q3, d27, d1[1]
vmlal.s16 q3, d29, d1[2]
vqrshrun.s32 d4, q2, #11
vqrshrun.s32 d5, q3, #11
vmull.s16 q2, d16, d0[0]
vmlal.s16 q2, d18, d0[1]
vmlal.s16 q2, d20, d0[2]
vmlal.s16 q2, d22, d0[3]
vmlal.s16 q2, d24, d1[0]
vmlal.s16 q2, d26, d1[1]
vmlal.s16 q2, d28, d1[2]
vmull.s16 q3, d17, d0[0]
vmlal.s16 q3, d19, d0[1]
vmlal.s16 q3, d21, d0[2]
vmlal.s16 q3, d23, d0[3]
vmlal.s16 q3, d25, d1[0]
vmlal.s16 q3, d27, d1[1]
vmlal.s16 q3, d29, d1[2]
vqrshrun.s32 d4, q2, #11
vqrshrun.s32 d5, q3, #11
vqmovun.s16 d4, q2
vst1.8 {d4}, [r0], r1
.if \compare
......@@ -473,7 +462,7 @@ function wiener_filter_v_8bpc_neon, export=1
52: // 2 rows in total, q11 already loaded, load q12 with content data
// and 2 rows of edge.
vld1.16 {q14}, [r2, :128], r7
vmov q15, q14
vmov q15, q14
b 8f
53:
// 3 rows in total, q11 already loaded, load q12 and q13 with content
......@@ -615,8 +604,8 @@ L(copy_narrow_tbl):
asr r1, r1, #1
22:
subs r4, r4, #1
vld1.16 {d0[]}, [r2]!
vst1.16 {d0[0]}, [r0], r1
vld1.16 {d0[]}, [r2, :16]!
vst1.16 {d0[0]}, [r0, :16], r1
bgt 22b
0:
pop {r4,pc}
......@@ -644,8 +633,8 @@ L(copy_narrow_tbl):
ble 0f
b 42b
41:
vld1.32 {d0[]}, [r2]
vst1.32 {d0[0]}, [r0]
vld1.32 {d0[]}, [r2, :32]
vst1.32 {d0[0]}, [r0, :32]
0:
pop {r4,pc}
......@@ -785,7 +774,7 @@ function sgr_box3_h_8bpc_neon, export=1
bne 4f
// If we'll need to pad the right edge, load that byte to pad with
// here since we can find it pretty easily from here.
sub lr, r5, #(2 + 16 - 2 + 1)
sub lr, r5, #(2 + 16 - 2 + 1)
ldrb r11, [r3, lr]
ldrb lr, [r12, lr]
// Fill q14/q15 with the right padding pixel
......@@ -1058,7 +1047,7 @@ function sgr_box5_h_8bpc_neon, export=1
bne 4f
// If we'll need to pad the right edge, load that byte to pad with
// here since we can find it pretty easily from here.
sub lr, r5, #(2 + 16 - 3 + 1)
sub lr, r5, #(2 + 16 - 3 + 1)
ldrb r11, [r3, lr]
ldrb lr, [r12, lr]
// Fill q14/q15 with the right padding pixel
......@@ -1100,7 +1089,7 @@ function sgr_box5_h_8bpc_neon, export=1
vaddl_u16_n q12, q13, d2, d3, d16, d17, \w
vaddl_u16_n q8, q9, d18, d19, d20, d21, \w
vaddw_u16_n q12, q13, d22, d23, \w
vadd_i32_n q12, q13, q8, q9, \w
vadd_i32_n q12, q13, q8, q9, \w
vext.8 q8, q5, q6, #2
vext.8 q9, q5, q6, #4
vext.8 q10, q5, q6, #6
......@@ -1152,7 +1141,7 @@ function sgr_box5_h_8bpc_neon, export=1
6: // Pad the right edge and produce the last few pixels.
// w < 7, w+1 pixels valid in q0/q4
sub lr, r5, #1
sub lr, r5, #1
// lr = pixels valid - 2
adr r11, L(box5_variable_shift_tbl)
ldr lr, [r11, lr, lsl #2]
......
/*
* Copyright © 2018, VideoLAN and dav1d authors
* Copyright © 2020, Martin Storsjo
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "src/arm/asm.S"
#include "util.S"
// void dav1d_wiener_filter_h_16bpc_neon(int16_t *dst, const pixel (*left)[4],
// const pixel *src, ptrdiff_t stride,
// const int16_t fh[7], const intptr_t w,
// int h, enum LrEdgeFlags edges,
// const int bitdepth_max);
function wiener_filter_h_16bpc_neon, export=1
push {r4-r11,lr}
vpush {q4-q7}
ldrd r4, r5, [sp, #100]
ldrd r6, r7, [sp, #108]
ldr r8, [sp, #116] // bitdepth_max
vld1.16 {q0}, [r4]
clz r8, r8
vmov.i32 q14, #1
sub r9, r8, #38 // -(bitdepth + 6)
sub r8, r8, #25 // -round_bits_h
neg r9, r9 // bitdepth + 6
vdup.32 q1, r9
vdup.32 q13, r8 // -round_bits_h
vmov.i16 q15, #8192
vshl.u32 q14, q14, q1 // 1 << (bitdepth + 6)
mov r8, r5
// Calculate mid_stride
add r10, r5, #7
bic r10, r10, #7
lsl r10, r10, #1
// Clear the last unused element of q0, to allow filtering a single
// pixel with one plain vmul+vpadd.
mov r12, #0
vmov.16 d1[3], r12
// Set up pointers for reading/writing alternate rows
add r12, r0, r10
lsl r10, r10, #1
add lr, r2, r3
lsl r3, r3, #1
// Subtract the width from mid_stride
sub r10, r10, r5, lsl #1
// For w >= 8, we read (w+5)&~7+8 pixels, for w < 8 we read 16 pixels.
cmp r5, #8
add r11, r5, #13
bic r11, r11, #7
bge 1f
mov r11, #16
1:
sub r3, r3, r11, lsl #1
// Set up the src pointers to include the left edge, for LR_HAVE_LEFT, left == NULL
tst r7, #1 // LR_HAVE_LEFT
beq 2f
// LR_HAVE_LEFT
cmp r1, #0
bne 0f
// left == NULL
sub r2, r2, #6
sub lr, lr, #6
b 1f
0: // LR_HAVE_LEFT, left != NULL
2: // !LR_HAVE_LEFT, increase the stride.
// For this case we don't read the left 3 pixels from the src pointer,
// but shift it as if we had done that.
add r3, r3, #6
1: // Loop vertically
vld1.16 {q2, q3}, [r2]!
vld1.16 {q4, q5}, [lr]!
tst r7, #1 // LR_HAVE_LEFT
beq 0f
cmp r1, #0
beq 2f
// LR_HAVE_LEFT, left != NULL
vld1.16 {d3}, [r1]!
// Move r2/lr back to account for the last 3 pixels we loaded earlier,
// which we'll shift out.
sub r2, r2, #6
sub lr, lr, #6
vld1.16 {d13}, [r1]!
vext.8 q3, q2, q3, #10
vext.8 q2, q1, q2, #10
vext.8 q5, q4, q5, #10
vext.8 q4, q6, q4, #10
b 2f
0:
// !LR_HAVE_LEFT, fill q1 with the leftmost pixel
// and shift q2/q3 to have 3x the first pixel at the front.
vdup.16 q1, d4[0]
vdup.16 q6, d8[0]
// Move r2 back to account for the last 3 pixels we loaded before,
// which we shifted out.
sub r2, r2, #6
sub lr, lr, #6
vext.8 q3, q2, q3, #10
vext.8 q2, q1, q2, #10
vext.8 q5, q4, q5, #10
vext.8 q4, q6, q4, #10
2:
tst r7, #2 // LR_HAVE_RIGHT
bne 4f
// If we'll need to pad the right edge, load that byte to pad with
// here since we can find it pretty easily from here.
sub r9, r5, #14
lsl r9, r9, #1
ldrh r11, [r2, r9]
ldrh r9, [lr, r9]
// Fill q11/q12 with the right padding pixel
vdup.16 q11, r11
vdup.16 q12, r9
3: // !LR_HAVE_RIGHT
// If we'll have to pad the right edge we need to quit early here.
cmp r5, #11
bge 4f // If w >= 11, all used input pixels are valid
cmp r5, #7
bge 5f // If w >= 7, we can filter 4 pixels
b 6f
4: // Loop horizontally
vext.8 q10, q2, q3, #6
vext.8 q8, q2, q3, #2
vext.8 q9, q2, q3, #4
vshll.u16 q6, d20, #7
vshll.u16 q7, d21, #7
vmlal.s16 q6, d4, d0[0]
vmlal.s16 q6, d16, d0[1]
vmlal.s16 q6, d18, d0[2]
vmlal.s16 q6, d20, d0[3]
vmlal.s16 q7, d5, d0[0]
vmlal.s16 q7, d17, d0[1]
vmlal.s16 q7, d19, d0[2]
vmlal.s16 q7, d21, d0[3]
vext.8 q8, q2, q3, #8
vext.8 q9, q2, q3, #10
vext.8 q10, q2, q3, #12
vmlal.s16 q6, d16, d1[0]
vmlal.s16 q6, d18, d1[1]
vmlal.s16 q6, d20, d1[2]
vmlal.s16 q7, d17, d1[0]
vmlal.s16 q7, d19, d1[1]
vmlal.s16 q7, d21, d1[2]
vext.8 q10, q4, q5, #6
vext.8 q2, q4, q5, #2
vshll.u16 q8, d20, #7
vshll.u16 q9, d21, #7
vmlal.s16 q8, d8, d0[0]
vmlal.s16 q8, d4, d0[1]
vmlal.s16 q8, d20, d0[3]
vmlal.s16 q9, d9, d0[0]
vmlal.s16 q9, d5, d0[1]
vmlal.s16 q9, d21, d0[3]
vext.8 q2, q4, q5, #4
vext.8 q10, q4, q5, #8
vmlal.s16 q8, d4, d0[2]
vmlal.s16 q8, d20, d1[0]
vmlal.s16 q9, d5, d0[2]
vmlal.s16 q9, d21, d1[0]
vext.8 q2, q4, q5, #10
vext.8 q10, q4, q5, #12
vmlal.s16 q8, d4, d1[1]
vmlal.s16 q8, d20, d1[2]
vmlal.s16 q9, d5, d1[1]
vmlal.s16 q9, d21, d1[2]
vmvn.i16 q10, #0x8000 // 0x7fff = (1 << 15) - 1
vadd.i32 q6, q6, q14
vadd.i32 q7, q7, q14
vadd.i32 q8, q8, q14
vadd.i32 q9, q9, q14
vrshl.s32 q6, q6, q13
vrshl.s32 q7, q7, q13
vrshl.s32 q8, q8, q13
vrshl.s32 q9, q9, q13
vqmovun.s32 d12, q6
vqmovun.s32 d13, q7
vqmovun.s32 d14, q8
vqmovun.s32 d15, q9
vmin.u16 q6, q6, q10
vmin.u16 q7, q7, q10
vsub.i16 q6, q6, q15
vsub.i16 q7, q7, q15
vst1.16 {q6}, [r0, :128]!
vst1.16 {q7}, [r12, :128]!
subs r5, r5, #8
ble 9f
tst r7, #2 // LR_HAVE_RIGHT
vmov q2, q3
vmov q4, q5
vld1.16 {q3}, [r2]!
vld1.16 {q5}, [lr]!
bne 4b // If we don't need to pad, just keep filtering.
b 3b // If we need to pad, check how many pixels we have left.
5: // Filter 4 pixels, 7 <= w < 11
.macro filter_4
vext.8 d18, d4, d5, #6
vext.8 d16, d4, d5, #2
vext.8 d17, d4, d5, #4
vext.8 d19, d5, d6, #2
vext.8 d20, d5, d6, #4
vshll.u16 q6, d18, #7
vmlal.s16 q6, d4, d0[0]
vmlal.s16 q6, d16, d0[1]
vmlal.s16 q6, d17, d0[2]
vmlal.s16 q6, d18, d0[3]
vmlal.s16 q6, d5, d1[0]
vmlal.s16 q6, d19, d1[1]
vmlal.s16 q6, d20, d1[2]
vext.8 d18, d8, d9, #6
vext.8 d16, d8, d9, #2
vext.8 d17, d8, d9, #4
vext.8 d19, d9, d10, #2
vext.8 d20, d9, d10, #4
vshll.u16 q7, d18, #7
vmlal.s16 q7, d8, d0[0]
vmlal.s16 q7, d16, d0[1]
vmlal.s16 q7, d17, d0[2]
vmlal.s16 q7, d18, d0[3]
vmlal.s16 q7, d9, d1[0]
vmlal.s16 q7, d19, d1[1]
vmlal.s16 q7, d20, d1[2]
vmvn.i16 q10, #0x8000 // 0x7fff = (1 << 15) - 1
vadd.i32 q6, q6, q14
vadd.i32 q7, q7, q14
vrshl.s32 q6, q6, q13
vrshl.s32 q7, q7, q13
vqmovun.s32 d12, q6
vqmovun.s32 d13, q7
vmin.u16 q6, q6, q10
vsub.i16 q6, q6, q15
.endm
filter_4
vst1.16 {d12}, [r0, :64]!
vst1.16 {d13}, [r12, :64]!
subs r5, r5, #4 // 3 <= w < 7
vext.8 q2, q2, q3, #8
vext.8 q3, q3, q3, #8
vext.8 q4, q4, q5, #8
vext.8 q5, q5, q5, #8
6: // Pad the right edge and filter the last few pixels.
// w < 7, w+3 pixels valid in q2-q3
cmp r5, #5
blt 7f
bgt 8f
// w == 5, 8 pixels valid in q2, q3 invalid
vmov q3, q11
vmov q5, q12
b 88f
7: // 1 <= w < 5, 4-7 pixels valid in q2
sub r9, r5, #1
// r9 = (pixels valid - 4)
adr r11, L(variable_shift_tbl)
ldr r9, [r11, r9, lsl #2]
add r11, r11, r9
vmov q3, q11
vmov q5, q12
bx r11
.align 2
L(variable_shift_tbl):
.word 44f - L(variable_shift_tbl) + CONFIG_THUMB
.word 55f - L(variable_shift_tbl) + CONFIG_THUMB
.word 66f - L(variable_shift_tbl) + CONFIG_THUMB
.word 77f - L(variable_shift_tbl) + CONFIG_THUMB
44: // 4 pixels valid in q2/q4, fill the high half with padding.
vmov d5, d6
vmov d9, d10
b 88f
// Shift q2 right, shifting out invalid pixels,
// shift q2 left to the original offset, shifting in padding pixels.
55: // 5 pixels valid
vext.8 q2, q2, q2, #10
vext.8 q2, q2, q3, #6
vext.8 q4, q4, q4, #10
vext.8 q4, q4, q5, #6
b 88f
66: // 6 pixels valid
vext.8 q2, q2, q2, #12
vext.8 q2, q2, q3, #4
vext.8 q4, q4, q4, #12
vext.8 q4, q4, q5, #4
b 88f
77: // 7 pixels valid
vext.8 q2, q2, q2, #14
vext.8 q2, q2, q3, #2
vext.8 q4, q4, q4, #14
vext.8 q4, q4, q5, #2
b 88f
8: // w > 5, w == 6, 9 pixels valid in q2-q3, 1 pixel valid in q3
vext.8 q3, q3, q3, #2
vext.8 q3, q3, q11, #14
vext.8 q5, q5, q5, #2
vext.8 q5, q5, q12, #14
88:
// w < 7, q2-q3 padded properly
cmp r5, #4
blt 888f
// w >= 4, filter 4 pixels
filter_4
vst1.16 {d12}, [r0, :64]!
vst1.16 {d13}, [r12, :64]!
subs r5, r5, #4 // 0 <= w < 4
vext.8 q2, q2, q3, #8
vext.8 q4, q4, q5, #8
beq 9f
888: // 1 <= w < 4, filter 1 pixel at a time
vmull.s16 q6, d4, d0
vmull.s16 q7, d5, d1
vmull.s16 q8, d8, d0
vmull.s16 q9, d9, d1
vadd.i32 q6, q7
vadd.i32 q8, q9
vpadd.i32 d12, d12, d13
vpadd.i32 d13, d16, d17
vdup.16 d14, d4[3]
vdup.16 d15, d8[3]
vpadd.i32 d12, d12, d13
vtrn.16 d14, d15
vadd.i32 d12, d12, d28
vshll.u16 q7, d14, #7
vmvn.i16 d20, #0x8000 // 0x7fff = (1 << 15) - 1
vadd.i32 d12, d12, d14
vrshl.s32 d12, d12, d26
vqmovun.s32 d12, q6
vmin.u16 d12, d12, d20
vsub.i16 d12, d12, d30
vst1.16 {d12[0]}, [r0, :16]!
vst1.16 {d12[1]}, [r12, :16]!
subs r5, r5, #1
vext.8 q2, q2, q3, #2
vext.8 q4, q4, q5, #2
bgt 888b
9:
subs r6, r6, #2
ble 0f
// Jump to the next row and loop horizontally
add r0, r0, r10
add r12, r12, r10
add r2, r2, r3
add lr, lr, r3
mov r5, r8
b 1b
0:
vpop {q4-q7}
pop {r4-r11,pc}
.purgem filter_4
endfunc
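For orientation, the arithmetic that filter_4 and the per-pixel tail above apply to each output sample can be sketched in scalar C as below. This is only an illustrative model, not the dav1d C reference path: the helper name and parameter list are assumptions, and in the asm the bitdepth-dependent shift and offsets live in q13-q15 (set up before this excerpt), so the exact constants may be folded differently. The centre sample's 1 << 7 boost reflects that the centre filter coefficient is stored with its 128 bias removed.

#include <stdint.h>

/* Hypothetical scalar model of one horizontal Wiener output sample for the
 * high-bitdepth path: a 7-tap dot product around a centre sample boosted by
 * 1 << 7, a rounding right shift by round_bits_h, a clip to [0, clip_limit)
 * and a re-centring subtraction before narrowing into the int16_t mid buffer.
 * All names and parameters here are illustrative assumptions. */
static inline int16_t wiener_h_sample(const uint16_t *src, /* src[0..6] valid */
                                      const int16_t fh[7],
                                      const int round_bits_h, /* e.g. 3 at 10 bpc */
                                      const int clip_limit,   /* e.g. 1 << 15 */
                                      const int mid_offset)   /* re-centring */
{
    int32_t sum = src[3] << 7;   /* centre boost, cf. vshll.u16 #7 */
    for (int k = 0; k < 7; k++)
        sum += src[k] * fh[k];   /* cf. the vmlal.s16 chains */
    sum = (sum + (1 << (round_bits_h - 1))) >> round_bits_h; /* cf. vrshl.s32 */
    if (sum < 0) sum = 0;                           /* cf. vqmovun.s32 */
    if (sum > clip_limit - 1) sum = clip_limit - 1; /* cf. vmin.u16 with 0x7fff */
    return (int16_t)(sum - mid_offset);             /* cf. vsub.i16 q6, q6, q15 */
}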
// void dav1d_wiener_filter_v_16bpc_neon(pixel *dst, ptrdiff_t stride,
// const int16_t *mid, int w, int h,
// const int16_t fv[7], enum LrEdgeFlags edges,
// ptrdiff_t mid_stride, const int bitdepth_max);
function wiener_filter_v_16bpc_neon, export=1
push {r4-r7,lr}
vpush {q4-q5}
ldrd r4, r5, [sp, #52]
ldrd r6, r7, [sp, #60]
ldr lr, [sp, #68] // bitdepth_max
vmov.i16 q1, #0
mov r12, #128
vld1.16 {q0}, [r5]
vdup.16 q5, lr
clz lr, lr
vmov.i16 d2[3], r12
sub lr, lr, #11 // round_bits_v
vadd.i16 q0, q0, q1
vdup.32 q4, lr
mov lr, r4
vneg.s32 q4, q4 // -round_bits_v
// Calculate the number of rows to move back when looping vertically
mov r12, r4
tst r6, #4 // LR_HAVE_TOP
beq 0f
sub r2, r2, r7, lsl #1
add r12, r12, #2
0:
tst r6, #8 // LR_HAVE_BOTTOM
beq 1f
add r12, r12, #2
1: // Start of horizontal loop; start one vertical filter slice.
// Load rows into q8-q11 and pad properly.
tst r6, #4 // LR_HAVE_TOP
vld1.16 {q8}, [r2, :128], r7
beq 2f
// LR_HAVE_TOP
vld1.16 {q10}, [r2, :128], r7
vmov q9, q8
vld1.16 {q11}, [r2, :128], r7
b 3f
2: // !LR_HAVE_TOP
vmov q9, q8
vmov q10, q8
vmov q11, q8
3:
cmp r4, #4
blt 5f
// Start filtering normally; fill in q12-q14 with unique rows.
vld1.16 {q12}, [r2, :128], r7
vld1.16 {q13}, [r2, :128], r7
vld1.16 {q14}, [r2, :128], r7
4:
.macro filter compare
subs r4, r4, #1
// Interleaving the mul/mla chains actually hurts performance
// significantly on Cortex A53, thus keeping mul/mla tightly
// chained like this.
vmull.s16 q2, d16, d0[0]
vmlal.s16 q2, d18, d0[1]
vmlal.s16 q2, d20, d0[2]
vmlal.s16 q2, d22, d0[3]
vmlal.s16 q2, d24, d1[0]
vmlal.s16 q2, d26, d1[1]
vmlal.s16 q2, d28, d1[2]
vmull.s16 q3, d17, d0[0]
vmlal.s16 q3, d19, d0[1]
vmlal.s16 q3, d21, d0[2]
vmlal.s16 q3, d23, d0[3]
vmlal.s16 q3, d25, d1[0]
vmlal.s16 q3, d27, d1[1]
vmlal.s16 q3, d29, d1[2]
vrshl.s32 q2, q2, q4 // round_bits_v
vrshl.s32 q3, q3, q4
vqmovun.s32 d4, q2
vqmovun.s32 d5, q3
vmin.u16 q2, q2, q5 // bitdepth_max
vst1.16 {q2}, [r0], r1
.if \compare
cmp r4, #4
.else
ble 9f
.endif
vmov q8, q9
vmov q9, q10
vmov q10, q11
vmov q11, q12
vmov q12, q13
vmov q13, q14
.endm
filter 1
blt 7f
vld1.16 {q14}, [r2, :128], r7
b 4b
5: // Less than 4 rows in total; not all of q12-q13 are filled yet.
tst r6, #8 // LR_HAVE_BOTTOM
beq 6f
// LR_HAVE_BOTTOM
cmp r4, #2
// We load at least 2 rows in all cases.
vld1.16 {q12}, [r2, :128], r7
vld1.16 {q13}, [r2, :128], r7
bgt 53f // 3 rows in total
beq 52f // 2 rows in total
51: // 1 row in total, q11 already loaded, load edge into q12-q14.
vmov q14, q13
b 8f
52: // 2 rows in total, q11 already loaded, load q12 with content data
// and 2 rows of edge.
vld1.16 {q14}, [r2, :128], r7
vmov q15, q14
b 8f
53:
// 3 rows in total, q11 already loaded, load q12 and q13 with content
// and 2 rows of edge.
vld1.16 {q14}, [r2, :128], r7
vld1.16 {q15}, [r2, :128], r7
vmov q1, q15
b 8f
6:
// !LR_HAVE_BOTTOM
cmp r4, #2
bgt 63f // 3 rows in total
beq 62f // 2 rows in total
61: // 1 row in total, q11 already loaded, pad that into q12-q14.
vmov q12, q11
vmov q13, q11
vmov q14, q11
b 8f
62: // 2 rows in total, q11 already loaded, load q12 and pad that into q12-q15.
vld1.16 {q12}, [r2, :128], r7
vmov q13, q12
vmov q14, q12
vmov q15, q12
b 8f
63:
// 3 rows in total, q11 already loaded, load q12 and q13 and pad q13 into q14-q15,q1.
vld1.16 {q12}, [r2, :128], r7
vld1.16 {q13}, [r2, :128], r7
vmov q14, q13
vmov q15, q13
vmov q1, q13
b 8f
7:
// All registers up to q13 are filled already, 3 valid rows left.
// < 4 valid rows left; fill in padding and filter the last
// few rows.
tst r6, #8 // LR_HAVE_BOTTOM
beq 71f
// LR_HAVE_BOTTOM; load 2 rows of edge.
vld1.16 {q14}, [r2, :128], r7
vld1.16 {q15}, [r2, :128], r7
vmov q1, q15
b 8f
71:
// !LR_HAVE_BOTTOM, pad 3 rows
vmov q14, q13
vmov q15, q13
vmov q1, q13
8: // At this point, all registers up to q14-q15,q1 are loaded with
// edge/padding (depending on how many rows are left).
filter 0 // This branches to 9f when done
vmov q14, q15
vmov q15, q1
b 8b
9: // End of one vertical slice.
subs r3, r3, #8
ble 0f
// Move pointers back up to the top and loop horizontally.
mls r0, r1, lr, r0
mls r2, r7, r12, r2
add r0, r0, #16
add r2, r2, #16
mov r4, lr
b 1b
0:
vpop {q4-q5}
pop {r4-r7,pc}
.purgem filter
endfunc
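The vertical pass above follows the same pattern per column: a 7-tap dot product over the int16_t mid rows, with the centre tap given an extra weight of 128 (the vadd.i16 q0, q0, q1 at function entry), a rounding shift by round_bits_v (derived from bitdepth_max via clz and kept negated in q4), and a clamp to [0, bitdepth_max] (broadcast into q5). A hedged scalar sketch, with an illustrative helper name and parameter packing:

#include <stdint.h>

/* Hypothetical scalar model of one vertical Wiener output pixel for the
 * high-bitdepth path: mid[0..6] are seven vertically adjacent values of the
 * int16_t intermediate buffer in one column. Names and parameters are
 * illustrative assumptions, not the dav1d API. */
static inline uint16_t wiener_v_sample(const int16_t mid[7],
                                       const int16_t fv[7],
                                       const int round_bits_v, /* e.g. 11 at 10 bpc */
                                       const int bitdepth_max) /* e.g. 1023 */
{
    int32_t sum = mid[3] * 128;  /* centre-tap boost folded into fv[3] above */
    for (int k = 0; k < 7; k++)
        sum += mid[k] * fv[k];   /* cf. the vmull/vmlal.s16 chain in filter */
    sum = (sum + (1 << (round_bits_v - 1))) >> round_bits_v; /* cf. vrshl.s32 */
    if (sum < 0) sum = 0;                       /* cf. vqmovun.s32 */
    if (sum > bitdepth_max) sum = bitdepth_max; /* cf. vmin.u16 with q5 */
    return (uint16_t)sum;
}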
// void dav1d_copy_narrow_16bpc_neon(pixel *dst, ptrdiff_t stride,
// const pixel *src, int w, int h);
function copy_narrow_16bpc_neon, export=1
push {r4,lr}
ldr r4, [sp, #8]
adr r12, L(copy_narrow_tbl)
ldr r3, [r12, r3, lsl #2]
add r12, r12, r3
bx r12
.align 2
L(copy_narrow_tbl):
.word 0
.word 10f - L(copy_narrow_tbl) + CONFIG_THUMB
.word 20f - L(copy_narrow_tbl) + CONFIG_THUMB
.word 30f - L(copy_narrow_tbl) + CONFIG_THUMB
.word 40f - L(copy_narrow_tbl) + CONFIG_THUMB
.word 50f - L(copy_narrow_tbl) + CONFIG_THUMB
.word 60f - L(copy_narrow_tbl) + CONFIG_THUMB
.word 70f - L(copy_narrow_tbl) + CONFIG_THUMB
10:
add r3, r0, r1
lsl r1, r1, #1
18:
subs r4, r4, #8
blt 110f
vld1.16 {q0}, [r2, :128]!
vst1.16 {d0[0]}, [r0, :16], r1
vst1.16 {d0[1]}, [r3, :16], r1
vst1.16 {d0[2]}, [r0, :16], r1
vst1.16 {d0[3]}, [r3, :16], r1
vst1.16 {d1[0]}, [r0, :16], r1
vst1.16 {d1[1]}, [r3, :16], r1
vst1.16 {d1[2]}, [r0, :16], r1
vst1.16 {d1[3]}, [r3, :16], r1
ble 0f
b 18b
110:
add r4, r4, #8
asr r1, r1, #1
11:
subs r4, r4, #1
vld1.16 {d0[]}, [r2]!
vst1.16 {d0[0]}, [r0], r1
bgt 11b
0:
pop {r4,pc}
20:
add r3, r0, r1
lsl r1, r1, #1
24:
subs r4, r4, #4
blt 210f
vld1.32 {q0}, [r2, :128]!
vst1.32 {d0[0]}, [r0, :32], r1
vst1.32 {d0[1]}, [r3, :32], r1
vst1.32 {d1[0]}, [r0, :32], r1
vst1.32 {d1[1]}, [r3, :32], r1
ble 0f
b 24b
210:
add r4, r4, #4
asr r1, r1, #1
22:
subs r4, r4, #1
vld1.32 {d0[]}, [r2, :32]!
vst1.32 {d0[0]}, [r0, :32], r1
bgt 22b
0:
pop {r4,pc}
30:
ldr r3, [r2]
ldrh r12, [r2, #4]
add r2, r2, #6
subs r4, r4, #1
str r3, [r0]
strh r12, [r0, #4]
add r0, r0, r1
bgt 30b
pop {r4,pc}
40:
add r3, r0, r1
lsl r1, r1, #1
42:
subs r4, r4, #2
blt 41f
vld1.16 {q0}, [r2, :128]!
vst1.16 {d0}, [r0, :64], r1
vst1.16 {d1}, [r3, :64], r1
ble 0f
b 42b
41:
vld1.16 {d0}, [r2, :64]
vst1.16 {d0}, [r0, :64]
0:
pop {r4,pc}
50:
vld1.16 {d0}, [r2]
ldrh r12, [r2, #8]
add r2, r2, #10
subs r4, r4, #1
vst1.16 {d0}, [r0]
strh r12, [r0, #8]
add r0, r0, r1
bgt 50b
pop {r4,pc}
60:
vld1.16 {d0}, [r2]
ldr r12, [r2, #8]
add r2, r2, #12
subs r4, r4, #1
vst1.16 {d0}, [r0]
str r12, [r0, #8]
add r0, r0, r1
bgt 60b
pop {r4,pc}
70:
vld1.16 {d0}, [r2]
ldr r12, [r2, #8]
ldrh lr, [r2, #12]
add r2, r2, #14
subs r4, r4, #1
vst1.16 {d0}, [r0]
str r12, [r0, #8]
strh lr, [r0, #12]
add r0, r0, r1
bgt 70b
pop {r4,pc}
endfunc
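All the width-specialised branches of copy_narrow_16bpc_neon above amount to the same simple operation: copy h rows of w pixels (1 <= w <= 7) from a tightly packed source (w pixels per row, no stride; r2 advances by 2*w bytes per row) to a destination addressed with a byte stride. A hedged scalar equivalent, with an illustrative function name and pixel typedef:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint16_t pixel; /* 16 bpc */

/* Scalar sketch of the narrow-block copy: packed source rows, strided
 * destination rows. Names are illustrative, not the dav1d API. */
static void copy_narrow_ref(pixel *dst, const ptrdiff_t stride,
                            const pixel *src, const int w, const int h)
{
    for (int y = 0; y < h; y++) {
        memcpy(dst, src, w * sizeof(pixel));      /* one narrow row */
        dst = (pixel *)((uint8_t *)dst + stride); /* byte stride, as in the asm */
        src += w;                                 /* packed source rows */
    }
}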
......@@ -1403,12 +1403,12 @@ L(\type\()_8tap_h_tbl):
vld1.8 {d24}, [\sr2], \s_strd
vmovl.u8 q8, d16
vmovl.u8 q12, d24
vext.8 q9, q8, q8, #2
vext.8 q10, q8, q8, #4
vext.8 q11, q8, q8, #6
vext.8 q13, q12, q12, #2
vext.8 q14, q12, q12, #4
vext.8 q15, q12, q12, #6
vext.8 d18, d16, d17, #2
vext.8 d20, d16, d17, #4
vext.8 d22, d16, d17, #6
vext.8 d26, d24, d25, #2
vext.8 d28, d24, d25, #4
vext.8 d30, d24, d25, #6
subs \h, \h, #2
vmul.s16 d4, d16, d0[0]
vmla.s16 d4, d18, d0[1]
......@@ -1431,7 +1431,7 @@ L(\type\()_8tap_h_tbl):
pop {r4-r11,pc}
80: // 8xN h
vld1.8 {d0}, [\mx]
vld1.8 {d0}, [\mx, :64]
sub \src, \src, #3
add \ds2, \dst, \d_strd
add \sr2, \src, \s_strd
......@@ -1482,7 +1482,7 @@ L(\type\()_8tap_h_tbl):
// one temporary for vext in the loop. That's slower on A7 and A53,
// (but surprisingly, marginally faster on A8 and A73).
vpush {q4-q6}
vld1.8 {d0}, [\mx]
vld1.8 {d0}, [\mx, :64]
sub \src, \src, #3
add \ds2, \dst, \d_strd
add \sr2, \src, \s_strd
......@@ -1629,7 +1629,7 @@ L(\type\()_8tap_v_tbl):
28: // 2x8, 2x16 v
vpush {q4-q7}
vld1.8 {d0}, [\my]
vld1.8 {d0}, [\my, :64]
sub \sr2, \src, \s_strd, lsl #1
add \ds2, \dst, \d_strd
sub \src, \sr2, \s_strd
......@@ -1709,7 +1709,7 @@ L(\type\()_8tap_v_tbl):
480: // 4x8, 4x16 v
vpush {q4}
vld1.8 {d0}, [\my]
vld1.8 {d0}, [\my, :64]
sub \sr2, \src, \s_strd, lsl #1
add \ds2, \dst, \d_strd
sub \src, \sr2, \s_strd
......@@ -1782,7 +1782,7 @@ L(\type\()_8tap_v_tbl):
640:
1280:
vpush {q4}
vld1.8 {d0}, [\my]
vld1.8 {d0}, [\my, :64]
sub \src, \src, \s_strd
sub \src, \src, \s_strd, lsl #1
vmovl.s8 q0, d0
......@@ -1951,11 +1951,10 @@ L(\type\()_8tap_hv_tbl):
bl L(\type\()_8tap_filter_2)
vext.8 d18, d17, d26, #4
vmov d19, d26
vmull.s16 q2, d16, d2[0]
vmlal.s16 q2, d17, d2[1]
vmlal.s16 q2, d18, d2[2]
vmlal.s16 q2, d19, d2[3]
vmlal.s16 q2, d26, d2[3]
vqrshrn.s32 d4, q2, #\shift_hv
vqmovun.s16 d4, q2
......@@ -1964,11 +1963,11 @@ L(\type\()_8tap_hv_tbl):
vst1.16 {d4[1]}, [\ds2, :16], \d_strd
ble 0f
vmov d16, d18
vmov d17, d19
vmov d17, d26
b 2b
280: // 2x8, 2x16, 2x32 hv
vld1.8 {d2}, [\my]
vld1.8 {d2}, [\my, :64]
sub \src, \src, #1
sub \sr2, \src, \s_strd, lsl #1
sub \src, \sr2, \s_strd
......@@ -2001,7 +2000,6 @@ L(\type\()_8tap_hv_tbl):
28:
bl L(\type\()_8tap_filter_2)
vext.8 d22, d21, d26, #4
vmov d23, d26
vmull.s16 q2, d16, d2[0]
vmlal.s16 q2, d17, d2[1]
vmlal.s16 q2, d18, d2[2]
......@@ -2009,7 +2007,7 @@ L(\type\()_8tap_hv_tbl):
vmlal.s16 q2, d20, d3[0]
vmlal.s16 q2, d21, d3[1]
vmlal.s16 q2, d22, d3[2]
vmlal.s16 q2, d23, d3[3]
vmlal.s16 q2, d26, d3[3]
vqrshrn.s32 d4, q2, #\shift_hv
vqmovun.s16 d4, q2
......@@ -2022,7 +2020,7 @@ L(\type\()_8tap_hv_tbl):
vmov d18, d20
vmov d19, d21
vmov d20, d22
vmov d21, d23
vmov d21, d26
b 28b
0:
......@@ -2108,7 +2106,7 @@ L(\type\()_8tap_filter_2):
b 4b
480: // 4x8, 4x16, 4x32 hv
vld1.8 {d2}, [\my]
vld1.8 {d2}, [\my, :64]
sub \src, \src, #1
sub \sr2, \src, \s_strd, lsl #1
sub \src, \sr2, \s_strd
......@@ -2211,7 +2209,7 @@ L(\type\()_8tap_filter_4):
bgt 880f
vpush {q4-q7}
add \my, \my, #2
vld1.8 {d0}, [\mx]
vld1.8 {d0}, [\mx, :64]
vld1.32 {d2[]}, [\my]
sub \src, \src, #3
sub \src, \src, \s_strd
......@@ -2301,8 +2299,8 @@ L(\type\()_8tap_filter_4):
640:
1280:
vpush {q4-q7}
vld1.8 {d0}, [\mx]
vld1.8 {d2}, [\my]
vld1.8 {d0}, [\mx, :64]
vld1.8 {d2}, [\my, :64]
sub \src, \src, #3
sub \src, \src, \s_strd
sub \src, \src, \s_strd, lsl #1
......