| Commit message | Author | Age | Files | Lines |
| |
Handle the cases where vl_compositor_set_csc_matrix(),
vl_compositor_init_state() and vl_compositor_init() fail.
Signed-off-by: Nayan Deshmukh <[email protected]>
Reviewed-by: Christian König <[email protected]>
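As an illustration, a minimal sketch of the error-handling pattern this describes; the init/cleanup helpers below are hypothetical stand-ins for the vl_compositor calls, not the actual Mesa code:

  #include <stdbool.h>

  /* Hypothetical stand-ins for the vl_compositor init calls and their
   * cleanup counterparts; each init reports failure via its return value. */
  static bool init_compositor(void)    { return true; }
  static void cleanup_compositor(void) { }
  static bool init_state(void)         { return true; }
  static void cleanup_state(void)      { }
  static bool set_csc_matrix(void)     { return true; }

  static bool create_decoder_context(void)
  {
     if (!init_compositor())
        return false;

     if (!init_state()) {
        cleanup_compositor();
        return false;
     }

     if (!set_csc_matrix()) {
        cleanup_state();
        cleanup_compositor();
        return false;
     }

     return true;
  }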
|
| |
Handle the cases where vl_compositor_set_csc_matrix(),
vl_compositor_init_state() and vl_compositor_init() fail.
Signed-off-by: Nayan Deshmukh <[email protected]>
Reviewed-by: Christian König <[email protected]>
|
| |
pipe_buffer_map and pipe_buffer_create may return NULL
Signed-off-by: Nayan Deshmukh <[email protected]>
Reviewed-by: Christian König <[email protected]>
|
| |
Signed-off-by: Marek Olšák <[email protected]>
|
| |
Signed-off-by: Marek Olšák <[email protected]>
|
| |
The function will be used later to create the file descriptor
for other metrics.
Signed-off-by: Marek Olšák <[email protected]>
|
| |
Dump values for every selected data source in GALLIUM_HUD.
Every data source has its own file and the filename is
equal to the data source identifier.
Set GALLIUM_HUD_DUMP_DIR to dump values to files in this directory.
No values are dumped if the environment variable is not set, the
directory doesn't exist or the user doesn't have write access.
Signed-off-by: Marek Olšák <[email protected]>
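A rough sketch of how such a dump file could be opened; only the GALLIUM_HUD_DUMP_DIR variable and the naming rule come from the description above, the helper itself is illustrative:

  #include <stdio.h>
  #include <stdlib.h>

  /* Open a dump file named after the data source inside GALLIUM_HUD_DUMP_DIR.
   * Returns NULL - and dumping stays disabled - when the variable is unset,
   * the directory doesn't exist or it isn't writable. */
  static FILE *open_dump_file(const char *data_source_name)
  {
     const char *dir = getenv("GALLIUM_HUD_DUMP_DIR");
     char path[1024];

     if (!dir)
        return NULL;

     snprintf(path, sizeof(path), "%s/%s", dir, data_source_name);
     return fopen(path, "w+");
  }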
|
| |
See:
dEQP-GLES2.functional.shaders.swizzles.vector_swizzles.mediump_vec2_yyyy_fragment
if we only access (in the FS) varying.y then it ends up in slot zero. I'm
not sure the hw likes that.
Signed-off-by: Rob Clark <[email protected]>
|
| |
This matches the naming of nir_lower_vars_to_ssa, the other to-SSA pass.
|
| |
Jonas's patch got us most of the benefit of scheduling instructions into
the delay slots of thread switch, but if there had been nothing to pair
the thrsw with, it would move the thrsw up and leave a NOP where the thrsw
was.
Instead, don't pair anything with thrsw through the normal scheduling
path, and have a separate helper function that inserts the thrsw earlier
if possible and inserts any necessary NOPs.
total instructions in shared programs: 93027 -> 92643 (-0.41%)
instructions in affected programs: 14952 -> 14568 (-2.57%)
|
| |
Scan for instructions without a signal set in front of the switching
instruction and move the signal up there.
shader-db results:
total instructions in shared programs: 94494 -> 93027 (-1.55%)
instructions in affected programs: 23545 -> 22078 (-6.23%)
v2: Fix re-emitting of the instruction in the loop trying to emit NOPs,
drop a scheduling change from branch delay slots. (by anholt)
Signed-off-by: Jonas Pfeil <[email protected]>
|
| |
This successfully unrolls a new shader in GLB2.7, which also gets that
shader to successfully compile in multithreaded mode.
|
| |
It should actually be 32 for a4xx/a5xx. We still only advertise 16, but
for a5xx the linkage map includes position/psize.
Signed-off-by: Rob Clark <[email protected]>
|
| |
We need this in case it is streamed out. Not sure why we were treating
it specially before. Having it as a VS out is harmless if FS doesn't
have a matching input.
Signed-off-by: Rob Clark <[email protected]>
|
| |
We'll need to revisit, when adding hw binning pass support, whether we
can still do this in the main draw step, as we do w/ a3xx/a4xx, or if we
need to move it to the binning stage.
Still some failing piglits but most tests pass and the common cases seem
to work.
Signed-off-by: Rob Clark <[email protected]>
|
| |
Pull in a5xx streamout related regs. Also fixes a couple of incorrect
register definitions.
Signed-off-by: Rob Clark <[email protected]>
|
| |
Update address calculation to support 64b addresses.
Signed-off-by: Rob Clark <[email protected]>
|
| |
Rework how we lay out driver constants (driver-params, UBO/TFBO buffer
addresses, immediates) for more flexibility. For a5xx+ we need to deal
with the fact that gpu ptrs are 64b instead of 32b, which makes the
fixed offset scheme not work so well. While we are dealing with that
we might also make the layout more dynamic to account for varying # of
UBOs, etc.
Signed-off-by: Rob Clark <[email protected]>
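A hypothetical illustration of what a more dynamic layout could look like, with region offsets computed from the UBO count and the GPU pointer size instead of being fixed; the names below are made up and do not mirror the actual freedreno layout:

  #include <stdbool.h>

  /* Offsets are in 32-bit units and computed up front, so a varying number
   * of UBOs and 64b GPU pointers (a5xx+) both fit. */
  struct const_layout {
     unsigned driver_params;   /* start of driver-params */
     unsigned ubo_addrs;       /* start of UBO buffer addresses */
     unsigned immediates;      /* start of immediates */
  };

  static void setup_const_layout(struct const_layout *l,
                                 unsigned num_driver_params,
                                 unsigned num_ubos, bool gpu_ptr_64b)
  {
     unsigned ptr_dwords = gpu_ptr_64b ? 2 : 1;
     unsigned off = 0;

     l->driver_params = off;  off += num_driver_params;
     l->ubo_addrs     = off;  off += num_ubos * ptr_dwords;
     l->immediates    = off;  /* immediates are appended last */
  }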
|
| |
Reloc for the buffer address is two dwords on 64b devices (a5xx+)
Signed-off-by: Rob Clark <[email protected]>
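For illustration, writing a buffer address into the command stream as one or two dwords; a simplified sketch, not the actual reloc interface:

  #include <stdbool.h>
  #include <stdint.h>

  /* One dword on 32b devices, two dwords (lo, hi) on 64b devices (a5xx+). */
  static unsigned emit_buffer_addr(uint32_t *cmds, unsigned idx,
                                   uint64_t gpu_addr, bool is_64b)
  {
     cmds[idx++] = (uint32_t)gpu_addr;              /* low 32 bits */
     if (is_64b)
        cmds[idx++] = (uint32_t)(gpu_addr >> 32);   /* high 32 bits */
     return idx;
  }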
|
| |
Seems to be similar to a4xx, and sampler state "array-pitch" needs
to be aligned to page size.
Signed-off-by: Rob Clark <[email protected]>
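For reference, a small sketch of rounding the "array-pitch" up to a 4 KiB page, assuming the usual power-of-two alignment trick:

  #include <stdint.h>

  #define PAGE_SIZE 4096u

  /* Round pitch up to the next multiple of the (power-of-two) page size. */
  static inline uint32_t align_to_page(uint32_t pitch)
  {
     return (pitch + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
  }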
|
| |
For dealing w/ 32b vs 64b gpu addresses, I need to rework how we pass
UBO buffer addresses to shader, and knowing up front the # of UBOs is
useful. But I noticed ttn wasn't setting this.
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
| |
Presently errors from the frontend are handled only if they occur in
clang::CompilerInvocation::CreateFromArgs(). This patch uses
clang::DiagnosticsEngine to detect errors such as invalid values for
Clang frontend arguments.
Fixes Piglit's cl/program/build/fail/invalid-version-declaration.cl
test.
v2: fix inconsistent code formatting
Signed-off-by: Vedran Miletić <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
Tested-by: Aaron Watry <[email protected]>
|
| |
ICC doesn't like the use of nullptr (std::nullptr_t) argument in
p_atomic_set. GCC and clang don't complain.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99119
Reviewed-by: Tim Rowley <[email protected]>
|
| |
Hashcat needs MAX_GLOBAL_BUFFERS to be 21 or even 22 for some modes. It'll crash otherwise.
I'm adding an assert to see if programs need it to be even higher.
Signed-off-by: Christian Inci <[email protected]>
[Handle first properly; should be NFC, since clover always uses first == 0.]
Signed-off-by: Nicolai Hähnle <[email protected]>
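A sketch of the kind of guard being added; the define and call site below are illustrative, not the actual clover code:

  #include <assert.h>

  #define MAX_GLOBAL_BUFFERS 22   /* bumped so Hashcat's modes fit */

  static void bind_global_buffer(unsigned index)
  {
     /* flags programs that need the limit to be even higher */
     assert(index < MAX_GLOBAL_BUFFERS);
     /* ... bind the buffer at `index` ... */
  }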
|
| |
The clipper hardware doesn't consider points as primitives that can be
clipped. Simply setting the corresponding cull bits works, and should not
have an adverse effect on other primitive types according to the hardware
team.
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Edward O'Callaghan <[email protected]>
|
| |
Should have no effect (other than perhaps on power consumption), but
Vulkan does this.
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Edward O'Callaghan <[email protected]>
|
| |
Fix linking error with 'make check'.
CXXLD lp_test_format
../../../../src/gallium/auxiliary/.libs/libgallium.a(os_time.o): In function `os_time_get_nano':
src/gallium/auxiliary/os/os_time.c:59: undefined reference to `clock_gettime'
Signed-off-by: Vinson Lee <[email protected]>
|
| |
v2: use gfxip names for llvm 4.0+
v3: use tonga for llvm <= 3.8, drop gfxip name,
we can just change that when we change the other asics.
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Junwei Zhang <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
Acked-by: Christian König <[email protected]>
|
| |
Fixes a warning.
Reviewed-by: Samuel Iglesias Gonsálvez <[email protected]>
|
| |
As per the C spec, it is illegal to alias pointers to different
types. This results in undefined behaviour after optimization
passes, and in very subtle bugs that happen only on a
full moon.
Use a memcpy() as a well-defined coercion between the isomorphic
bit-field interpretations of memory.
V.2: Use the C99-compatible STATIC_ASSERT() over C11 static_assert().
Signed-off-by: Edward O'Callaghan <[email protected]>
Reviewed-by: Charmaine Lee <[email protected]>
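A minimal example of the memcpy() coercion described above, using a hypothetical pair of isomorphic 32-bit views; STATIC_ASSERT below is a stand-in for Mesa's macro of the same name:

  #include <stdint.h>
  #include <string.h>

  /* Stand-in for Mesa's C99-compatible STATIC_ASSERT(). */
  #define STATIC_ASSERT(cond) do { (void) sizeof(char [1 - 2*!(cond)]); } while (0)

  struct bits_view {
     unsigned mode  : 8;
     unsigned flags : 24;
  };

  static struct bits_view word_to_view(uint32_t word)
  {
     struct bits_view view;

     /* A well-defined coercion between two interpretations of the same 32
      * bits; casting a uint32_t * to struct bits_view * would instead break
      * the aliasing rules. */
     STATIC_ASSERT(sizeof(view) == sizeof(word));
     memcpy(&view, &word, sizeof(view));
     return view;
  }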
|
| |
Now that there's some SoA fetch which never falls back, we should always get
results which are better or at least not worse (something like rgba32f will
stay the same).
For cases which get way better, think something like R16_UNORM with 8-wide
vectors: this was 8 sign-extend fetches, 8 cvt, 8 muls, followed by
a couple of shuffles to stitch things together (if it is smart enough,
6 unpacks) and then an (8-wide) transpose (not sure if llvm could even
optimize the shuffles + transpose, since the 16bit values were actually
sign-extended to 128bit before being cast to a float vec, so that would be
another 8 unpacks). Now that is just 8 fetches (directly inserted into
vector, albeit there's one 128bit insert needed), 1 cvt, 1 mul.
v2: ditch the old AoS code instead of just disabling it.
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
This can now handle rgtc (unorm) too - this path no longer handles plain
formats, but that's unnecessary since they now all have their proper SoA unpack
(this will still be dog-slow though, due to the actual fetch going through
per-pixel util fallbacks).
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
This previously always fell back to AoS conversion. Even for 4-float formats
(by far the optimal case for that fallback) this was suboptimal, since it
meant the conversion couldn't be done with 256bit vectors. While this may
still only be partly possible for some formats (unless there's AVX2 support),
at least the transpose can be done with half the unpacks (and before the
transpose was used for AoS fallbacks, it was worse still).
With fewer than 4 channels, things quickly got much worse with the AoS
fallback, even with 128bit vectors.
The strategy is pretty much the same as the existing one for formats
which fit into 32 bits, except there are now multiple vectors to be
fetched (2 or 4 to be exact), which need to be shuffled first (if it's 4
vectors, this amounts to a transpose, for 2 it's a bit different),
then the unpack is done the same (with the exception that the shift
of the channels is now modulo 32, and we need to select the right
vector).
In fact the most complex part about it is to get the shuffles right
for separating into lo/hi parts for AVX/AVX2...
This also makes use of the new ability of gather to use provided type
information, which we abuse to outsmart llvm so we get decent shuffles,
and to fetch 3x32bit vectors without having to ZExt the scalar.
And just because we can, we handle double formats too, albeit they are
a bit different (draw sometimes needs to handle that).
v2: fix typo float/int bug (generating inefficient code).
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
By using a dst_type in the gather interface, gather has some more
knowledge about how values should be fetched.
E.g. if this is a 3x32bit fetch and dst_type is a 4x32bit vector, gather
will no longer do a ZExt of a 96bit scalar value to 128bit, but
just fetch the 96bit as a 3x32bit vector (this is still going to be
2 loads of course, but the loads can be done directly into a simd vector
that way).
Also, we now try to use the right int/float type. This should
make no difference really, since there are typically no domain transition
penalties for such simd loads; however it actually makes a difference,
since llvm will use different shuffle lowering afterwards, so the caller
can use this to trick llvm into using sane shuffles (and yes,
llvm is really stupid there - nothing against using the shuffle
instruction from the correct domain, but not at the cost of doing 3 times
more shuffles; the case which actually matters is the refusal to use shufps
for integer values).
Also make some attempt to avoid things which look great on paper but llvm
doesn't really handle (e.g. fetching 3-element 8 bit and 16 bit vectors,
which is simply disastrous - I suspect the type legalizer is to blame,
trying to extend these vectors to 128bit types somehow; so fetch these with
scalars like before, which is suboptimal due to the ZExt).
Remove the ability for truncation (no point, this is gather, not conversion)
as it is complex enough already.
While here, also implement not just the float but also the 64bit avx2
gathers (disabled though, since based on the theoretical numbers the benefit
just isn't there at all until Skylake at least).
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
We should do a transpose, not extract/insert, at least with a "sufficient"
number of channels (for 4 channels, the extract/insert shuffles generated
otherwise look truly terrifying). Albeit we shouldn't fall back to that so
often in any case.
v2: ditch the extract/insert path, not worth keeping (we're going to avoid
hitting the fallback that often with future patches).
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
soa fetch so far always assumed that data was aligned. However, we want to
use this for vertex fetch, and data might not be aligned there, so handle
it in this path too (basically just pass the alignment through to the other
functions). (It looks like it wouldn't work for cached s3tc, but this is
no different than with AoS fetch.)
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
Avoid synchronization by using the secondary context
for uploading the vertex data for Draw*Up.
v2: Rely on u_upload_mgr to use persistent coherent
buffers. Do not flush.
Signed-off-by: Axel Davy <[email protected]>
|
| |
Tests suggest MANAGED buffers are made dirty
at Lock time, not at Unlock time.
Signed-off-by: Axel Davy <[email protected]>
|
| |
This new buffer upload path makes locking faster
than the normal path when DISCARD/NOOVERWRITE
is used.
v2: Diverse cleanups and fixes.
v3: Fix allocation size for 'lone' buffers and
add more debug info.
v4: Rewrite the path to handle the case when DISCARD/NOOVERWRITE
is no longer used. The resource content is copied to the
new resource that is used.
v5: flush for safety after unmap (not sure it is really required
here, but safer to flush).
v6: Do not use the path if persistent coherent mapping is unavailable.
Fix buffer creation flags.
v7: Do not flush since it is not needed.
Signed-off-by: Axel Davy <[email protected]>
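A minimal sketch of the flag check that gates such a path; the flag values mirror the d3d9 ones, but the helper itself is hypothetical:

  #include <stdbool.h>

  #define D3DLOCK_NOOVERWRITE 0x00001000
  #define D3DLOCK_DISCARD     0x00002000

  /* The fast upload path is only taken while the app keeps locking with
   * DISCARD or NOOVERWRITE; any other lock falls back to the normal path
   * (copying the content over to the new resource, as described in v4). */
  static bool use_fast_upload_path(unsigned lock_flags)
  {
     return (lock_flags & (D3DLOCK_DISCARD | D3DLOCK_NOOVERWRITE)) != 0;
  }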
|
| |
Next patches will introduce an offset.
Signed-off-by: Axel Davy <[email protected]>
|
| |
If the volumes (and the texture container) are not referenced,
then there are no pending operations on them. We can lock directly.
Signed-off-by: Axel Davy <[email protected]>
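A sketch of the fast path described here, with hypothetical reference bookkeeping; the field names are illustrative, not the actual nine structures:

  #include <stdbool.h>

  /* Hypothetical bookkeeping: `refs` counts pending uses queued for the
   * worker thread. */
  struct resource_info {
     int refs;
  };

  /* If neither the volume nor its texture container is referenced, nothing
   * queued can still touch the data, so we can map (lock) it right away
   * instead of synchronizing with the worker thread. */
  static bool can_lock_directly(const struct resource_info *volume,
                                const struct resource_info *container)
  {
     return volume->refs == 0 && container->refs == 0;
  }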
|
| |
If the surfaces (and the texture container) are not referenced,
then there are no pending operations on them. We can lock directly.
Signed-off-by: Axel Davy <[email protected]>
|
| |
The new arguments make it possible to keep the objects referenced
while the function hasn't run yet.
Signed-off-by: Axel Davy <[email protected]>
|
| |
Will make it possible to use the bind count as an indication of
whether the surface/volume is used in the worker thread.
Signed-off-by: Axel Davy <[email protected]>
|
| |
Will make it possible to use the bind count as an indication of
whether the surface/volume is used in the worker thread.
Signed-off-by: Axel Davy <[email protected]>
|
| |
Use nine_context_box_upload for uploads:
. systemmem volume to default volume
. managed volume internal content to its resource.
Ensure the uploads are executed before any action
that can alter the data, that is, LockBox and
volume destruction.
Signed-off-by: Axel Davy <[email protected]>
|
| |
The last level was not released.
Signed-off-by: Axel Davy <[email protected]>
|
| |
The last level was not released.
Signed-off-by: Axel Davy <[email protected]>
|
| |
Use nine_context_box_upload for uploads:
. systemmem surface to default surface
. managed surface internal content to its resource.
Ensure the uploads are executed before any action
that can alter the data, that is, LockRect,
NineSurface9_CopyDefaultToMem and surface destruction.
Signed-off-by: Axel Davy <[email protected]>
|