| Commit message | Author | Age | Files | Lines |
|
Reviewed-by: Roland Scheidegger <[email protected]>
|
The equivalent of the last patch for the hash table. I'm not aware of
any issues this fixes.
v2:
- use entry_is_deleted (Timothy)
Reviewed-by: Timothy Arceri <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
|
When we delete entries in the hash set, we mark them "deleted" by
setting their key to the deleted_key, which points to a dummy
deleted_key_value. When searching for an entry, we normally skip over
those, but set_add() had some code for searching for duplicate entries
which forgot to skip over deleted entries. This led to a segfault inside
the NIR vectorization pass, since its key comparison function
interpreted the memory where deleted_key_value resides as a pointer and
tried to dereference it.
v2:
- add better commit message (Timothy)
- use entry_is_deleted (Timothy)
Reviewed-by: Timothy Arceri <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
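For illustration, a minimal sketch of the probe loop this fix implies; the
names are made up and this is not the actual Mesa set code, just the pattern
of skipping deleted slots before handing keys to the user's compare callback:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct entry { const void *key; void *data; };

    /* illustrative sentinel: a deleted slot's key points at this dummy */
    static const char deleted_key_value;
    #define DELETED_KEY ((const void *)&deleted_key_value)

    static bool entry_is_free(const struct entry *e)    { return e->key == NULL; }
    static bool entry_is_deleted(const struct entry *e) { return e->key == DELETED_KEY; }

    /* duplicate search during add: deleted slots must be skipped, never
     * passed to the compare callback (their key is the dummy sentinel) */
    static struct entry *
    search_for_duplicate(struct entry *table, size_t size, uint32_t hash,
                         const void *key,
                         bool (*key_equals)(const void *, const void *))
    {
        for (size_t i = 0; i < size; i++) {
            struct entry *e = &table[(hash + i) % size];

            if (entry_is_free(e))
                return NULL;      /* end of the probe chain, no duplicate */
            if (entry_is_deleted(e))
                continue;         /* the kind of check set_add() was missing */
            if (key_equals(e->key, key))
                return e;
        }
        return NULL;
    }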
|
Fixes:
* dEQP-GLES31.functional.compute.basic.shared_atomic_op_multiple_groups
* dEQP-GLES31.functional.compute.basic.shared_atomic_op_multiple_invocation
* dEQP-GLES31.functional.compute.basic.shared_atomic_op_single_group
* dEQP-GLES31.functional.compute.basic.shared_atomic_op_single_invocation
From https://android.googlesource.com/platform/external/deqp
Reported-by: Ilia Mirkin <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Tested-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
This reverts commit ab30426e335116e29473faaafe8b57ec760516ee.
Apparently the memory isn't as aligned as it should be when this gets
called, causing crashes. (This looks independent of this code, though;
it should crash just as well if ssse3 is enabled when compiling without
this patch.)
https://bugs.freedesktop.org/show_bug.cgi?id=93962
|
This is fallout from the previous changes.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93961
Signed-off-by: Dave Airlie <[email protected]>
|
Add support for these opcodes; the conversion functions were already
there, although some new packing code is needed.
Just like the tgsi version, piglit won't like it for all the same
reasons, so it's disabled. (UP2H passes the piglit
arb_shader_language_packing tests; since PK2H won't, due to those
rounding differences, I don't know whether that one works or not, as the
piglit test is rather difficult to deal with.)
Reviewed-by: Brian Paul <[email protected]>
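For reference, a scalar sketch of what PK2H/UP2H compute; the f32<->f16
helpers here are hypothetical stand-ins for the existing conversion
functions mentioned above:

    #include <stdint.h>

    /* hypothetical scalar helpers standing in for the existing
     * f32<->f16 conversion routines */
    uint16_t f32_to_f16(float f);
    float    f16_to_f32(uint16_t h);

    /* PK2H: pack two half-precision values into one 32-bit result
     * (x in the low 16 bits, y in the high 16 bits) */
    static uint32_t pk2h(float x, float y)
    {
        return (uint32_t)f32_to_f16(x) | ((uint32_t)f32_to_f16(y) << 16);
    }

    /* UP2H: unpack the low/high halves back to two floats */
    static void up2h(uint32_t packed, float *x, float *y)
    {
        *x = f16_to_f32(packed & 0xffff);
        *y = f16_to_f32(packed >> 16);
    }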
|
Add support for these opcodes; the conversion functions were already
there, although some new packing code is needed.
Just like the tgsi version, piglit won't like it for all the same
reasons, so it's disabled. (UP2H passes the piglit
arb_shader_language_packing tests; since PK2H won't, due to those
rounding differences, I don't know whether that one works or not, as the
piglit test is rather difficult to deal with.)
|
The util functions handle the half-float conversion.
Note that piglit won't like it much due to:
a) The util functions use a magic float-multiply conversion, but when run
inside softpipe/llvmpipe denorms are flushed to zero, therefore when the
conversion is from/to an f16 denorm the result will be zero. This is a bug
which should be fixed in these functions (they should not rely on denorms
being available), but it will happen elsewhere just the same (e.g.
conversion to f16 render targets).
b) The util functions use trunc round mode rather than round-to-nearest.
This is NOT a bug (it is a d3d10 requirement). It results in
non-representable finite values being rounded to MAX_F16 rather than
INFINITY. My belief is that the piglit tests are wrong here, but it's
difficult to tell (generally the glsl rounding mode is undefined, though
I'm not sure whether the rounding mode needs to be consistent across
different operations). Nevertheless, for gl it would be better to use
round-to-nearest, but using different rounding for GL and d3d10 is an
unsolved problem (as it affects things like conversion to f16 render
targets, clear colors, and this shader opcode).
Hence for now don't enable the cap bit (so the code is unused).
(Code is from imirkin, comment from sroland)
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
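To make point (b) concrete, a small sketch (not the util code itself) of
just the overflow handling: 65504 is the largest finite f16 value, and
65520 is where round-to-nearest starts producing infinity:

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* overflow handling only; the normal conversion path is omitted */
    static uint16_t f32_to_f16_overflow(float f, bool round_to_nearest)
    {
        uint16_t sign = signbit(f) ? 0x8000 : 0x0000;
        float a = fabsf(f);

        if (isinf(f) || (round_to_nearest && a >= 65520.0f))
            return sign | 0x7c00;   /* +/-INFINITY */
        if (a > 65504.0f)
            return sign | 0x7bff;   /* clamp to +/-MAX_F16 (trunc / d3d10) */
        return 0;                   /* not an overflow case */
    }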
|
If the tri is fully inside a scissor edge (or rather, we just use the
bounding box of the tri for the comparison), then we can drop these
additional scissor "planes" early. We do not even need to allocate
space for them in the tri.
The math actually appears to be slightly iffy due to bounding boxes
being rounded, but it doesn't matter in the end.
Those scissor rects are costly - the 4 planes from the scissor are
already more expensive to calculate than the 3 planes from the tri itself,
and it also prevents us from using the specialized raster code for small
tris.
This helps openarena performance by about 8% or so. Of course, it helps
that while openarena often enables scissoring (and even moves the
scissor rect around), I have not seen a single tri actually hit the
scissor rect, ever.
v2: drop individual scissor edges, and do it earlier, not even allocating
space for them.
v3: help the compiler a bit with simpler code, suggested by Brian.
Reviewed-by: Brian Paul <[email protected]>
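The early check itself amounts to a containment test of the (rounded)
bounding box against the scissor rect; a sketch with invented struct names:

    struct rect { int x0, y0, x1, y1; };

    /* only add the four scissor "planes" when the tri's bounding box
     * actually sticks out of the scissor rect */
    static int need_scissor_planes(const struct rect *bbox,
                                   const struct rect *scissor)
    {
        return bbox->x0 < scissor->x0 || bbox->x1 > scissor->x1 ||
               bbox->y0 < scissor->y0 || bbox->y1 > scissor->y1;
    }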
|
Just slightly simpler assembly.
Reviewed-by: Brian Paul <[email protected]>
|
When we switched to 64bit rasterization, we could no longer use straight
aligned loads for loading the plane data. However, what the code actually
does for loading 3 planes is 12 scalar loads + 9 unpacks, and then there's
another 8 unpacks for the transpose we need (!).
It would of course be possible to do the (scalar) loads already transposed
(at least saving the additional unpacks), but instead just use (un)aligned
vector loads and recalculate the eo values, which is far fewer instructions
(note that in the triangle_32_3_4 case the eo values are not even used,
making the scalar loads + unpacks for them all the more pointless).
This drops execution time of the triangle_32_3_4 function considerably,
although it doesn't really make a measurable difference (for small tris
we're essentially limited by vertex throughput in any case); for
triangle_32_3_16 it's essentially noise (the loop is more costly than the
initial code there).
(I'm thinking about ditching storing the eo values in the plane data
altogether, so we could switch back to using aligned planes, but right now
they are still used in the other raster functions dealing with planes with
scalar code. Also not touching the ppc code; it might not be that bad
there in any case.)
Reviewed-by: Brian Paul <[email protected]>
|
The existing code used ssse3, and because it isn't built in a separate
file compiled with that flag, it is usually not used (that, of course,
could be fixed...), whereas sse2 is always present, at least with 64bit
builds.
This should be pretty much as fast as the pshufb version, although those
code paths aren't really used on chips without llc in any case.
v2: fix andnot argument order, add comments
v3: use pshuflw/hw instead of shifts (suggested by Matt Turner), cut comments
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
|
Enabling swrast on Android causes a link error because vtest is missing.
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
The virgl reference counting of buffers is broken for prime fd buffers.
Each prime fd passed into virgl_drm_winsys_resource_create_handle creates
a new resource. The solution requires creating a separate hash table to
track flink names separately from prime handles.
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
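A sketch of the bookkeeping this implies (invented names, not the virgl
winsys API): prime imports are deduplicated via their GEM handle in one
table, while flink names get a table of their own:

    #include <stdint.h>

    struct resource;                      /* opaque stand-in */
    struct table;                         /* stand-in for a hash table */
    struct resource *table_get(struct table *t, uint32_t key);   /* hypothetical */
    void table_put(struct table *t, uint32_t key, struct resource *r);

    static struct table *bo_handles;   /* keyed by GEM handle (prime imports) */
    static struct table *bo_names;     /* keyed by flink name */

    /* the same buffer imported twice via prime on one device fd resolves
     * to the same GEM handle, so this lookup is what prevents a duplicate
     * resource from being created */
    static struct resource *
    find_imported(int is_prime, uint32_t gem_handle, uint32_t flink_name)
    {
        return is_prime ? table_get(bo_handles, gem_handle)
                        : table_get(bo_names, flink_name);
    }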
|
It is necessary to share the screen between mesa and gralloc to
properly refcount resources. This implements a hash lookup on the
file descriptor to reuse an already created screen. This is a similar
implementation to the ones in freedreno and radeon.
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
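A sketch of that lookup pattern (invented names, a plain list instead of
the real hash table, and no locking shown): reuse and reference an existing
screen for the fd, otherwise create one and remember it:

    #include <stdlib.h>

    struct screen_stub  { int refcount; int fd; };        /* stand-in type */
    struct screen_entry { int fd; struct screen_stub *screen;
                          struct screen_entry *next; };

    static struct screen_entry *screens;   /* guarded by a mutex in a real winsys */

    static struct screen_stub *
    screen_for_fd(int fd, struct screen_stub *(*create_screen)(int))
    {
        for (struct screen_entry *e = screens; e; e = e->next) {
            if (e->fd == fd) {
                e->screen->refcount++;   /* mesa and gralloc share this screen */
                return e->screen;
            }
        }

        struct screen_entry *e = calloc(1, sizeof(*e));
        if (!e)
            return NULL;
        e->fd = fd;
        e->screen = create_screen(fd);
        e->next = screens;
        screens = e;
        return e->screen;
    }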
|
The change is necessary to avoid the following build error on Android:
external/mesa/src/gallium/drivers/nouveau/nouveau_vp3_video_bsp.c: In function 'nouveau_vp3_bsp_next':
external/mesa/src/gallium/drivers/nouveau/nouveau_vp3_video_bsp.c:269:14: error: 'bsp_bo' undeclared (first use in this function)
assert(bsp_bo->size >= str_bsp->w0[0] + num_bytes[i]);
^
This matches the declaration of the variables in question.
Reviewed-by: Ilia Mirkin <[email protected]>
|
We use this logic to detect live ranges and then do plain renaming
across the whole codebase. As such, to prevent WaW hazards, we have to
treat a write as if it were also a read.
For example, the following sequence was observed before this patch:
13: UIF TEMP[6].xxxx :0
14: ADD TEMP[6].x, CONST[6].xxxx, -IN[3].yyyy
15: RCP TEMP[7].x, TEMP[3].xxxx
16: MUL TEMP[3].x, TEMP[6].xxxx, TEMP[7].xxxx
17: ADD TEMP[6].x, CONST[7].xxxx, -IN[3].yyyy
18: RCP TEMP[7].x, TEMP[3].xxxx
19: MUL TEMP[4].x, TEMP[6].xxxx, TEMP[7].xxxx
While after this patch it becomes:
13: UIF TEMP[7].xxxx :0
14: ADD TEMP[7].x, CONST[6].xxxx, -IN[3].yyyy
15: RCP TEMP[8].x, TEMP[3].xxxx
16: MUL TEMP[4].x, TEMP[7].xxxx, TEMP[8].xxxx
17: ADD TEMP[7].x, CONST[7].xxxx, -IN[3].yyyy
18: RCP TEMP[8].x, TEMP[3].xxxx
19: MUL TEMP[5].x, TEMP[7].xxxx, TEMP[8].xxxx
Most importantly note that in the first example, the second RCP is done
on the result of the MUL while in the second, the second RCP should have
the same value as the first. Looking at the GLSL source, it is apparent
that both of the RCP's should have had the same source.
Looking at what's going on, the GLSL looks something like
float tmin_8;
float tmin_10;
tmin_10 = tmin_8;
... lots of code ...
tmin_8 = tmpvar_17;
... more code that never looks at tmin_8 ...
And so we end up with a last_read somewhere at the beginning, and a
first_write somewhere at the bottom. For some reason DCE doesn't remove
it, but even if that were fixed, DCE doesn't handle 100% of cases,
especially cases involving loops.
With the last_read somewhere high up, we overwrite the previously
correct (and large) last_read with a low one, and then proceed to decide
to merge all kinds of junk onto this temp. Even if that weren't the
case, and there were just some writes after the last read, then we might
still overwrite a merged value with one of those.
As a result, we should treat a write as a last_read for the purpose of
determining the live range.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Dave Airlie <[email protected]>
Cc: [email protected]
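In live-range terms the fix boils down to letting a def extend last_read as
well; a sketch with invented field names, not the actual pass:

    struct live_range {
        int first_write;   /* instruction index of the first def, -1 if none */
        int last_read;     /* last instruction that needs the value alive */
    };

    static void note_read(struct live_range *r, int ip)
    {
        if (ip > r->last_read)
            r->last_read = ip;
    }

    static void note_write(struct live_range *r, int ip)
    {
        if (r->first_write < 0)
            r->first_write = ip;
        /* the change described above: treat the write as a read as well,
         * so a trailing dead write can't shrink the range and invite a
         * WaW-breaking merge */
        note_read(r, ip);
    }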
|
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
And mark nir_op_pack_uvec4_to_uint unreachable, since it's only produced
by lowering pack[SU]norm4x8 which the vec4 backend does not need.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
The vec4 backend will lower it.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
The uint versions zero extend while the int versions sign extend.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
i965/fs was the only consumer, and we're now doing the lowering in NIR.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
We'll want to have different lowering options set for scalar/vector
stages.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
A future patch will want to use designated initializers, which aren't
available in C++, but this is C.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Strangely the return and parameter types were reversed.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Enable GL_OES_geometry_shader enums for OpenGL ES 3.1.
V4: EXTRA tokens updated according to comments from Ilia Mirkin.
V5: Account for the fact that check_extra does not evaluate "or" lazily.
Fix issues with EXTRA_EXT_FB_NO_ATTACH_CS.
Signed-off-by: Marta Lofstedt <[email protected]>
Reviewed-by: Tapani Pälli <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
|
Cc: [email protected]
Signed-off-by: Emil Velikov <[email protected]>
|
This happens especially with exports and varying packing, where the last
bits aren't always filled in. We end up trying to do quad-wide stores,
which ends up being a lot of register moves that carefully preserve the
nop value. Instead don't do the stores.
total instructions in shared programs : 6131375 -> 6125267 (-0.10%)
total gprs used in shared programs : 910139 -> 895501 (-1.61%)
total local used in shared programs : 15328 -> 15328 (0.00%)
            local     gpr    inst
helped          0    7442    4693
hurt            0      90    2687
Most of the helped/hurt instruction changes are by one or two ops,
because we can no longer do quad-wide stores in all cases.
Signed-off-by: Ilia Mirkin <[email protected]>
|
If an instruction has multiple defs, we have to do a lot more checks to
make sure that we can move it forward. Among other things, various code
likes to do
a, b = tex()
if () c = a
else c = b
which means that a single phi node will have results pointing at the
same instruction. We obviously can't propagate the tex in this case, but
properly accounting for this situation is tricky. Just don't try for
instructions with multiple defs.
This fixes about 20 shaders in shader-db, including the dolphin efb2ram
shader.
Signed-off-by: Ilia Mirkin <[email protected]>
Cc: [email protected]
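The conservative fix amounts to an early-out before any propagation is even
considered; a sketch with invented names rather than the actual nv50 ir
types:

    struct instr { int num_defs; /* ... */ };

    /* don't even try to move an instruction forward into its uses if it
     * produces more than one result (e.g. a tex with several defs) */
    static int can_try_forward_propagation(const struct instr *i)
    {
        return i->num_defs <= 1;
    }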
|
It appears that the nvidia render engine is quite picky when it comes to
linear surfaces. It doesn't like non-256-byte aligned offsets, and
apparently doesn't even do non-256-byte strides.
This makes arb_clear_buffer_object-unaligned pass on both nv50 and nvc0.
As a side-effect this also allows RGB32 clears to work via GPU data
upload instead of synchronizing the buffer to the CPU (nvc0 only).
Signed-off-by: Ilia Mirkin <[email protected]> # tested on GF108, GT215
Tested-by: Nick Sarnie <[email protected]> # GK208
Cc: [email protected]
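The constraint reduces to a simple check before picking the engine path; a
sketch with invented names:

    #include <stdbool.h>
    #include <stdint.h>

    /* linear surfaces handed to the render/2D engine want 256-byte aligned
     * offsets and strides; otherwise fall back to another upload path */
    static bool linear_layout_ok(uint64_t offset, uint32_t stride)
    {
        return (offset % 256) == 0 && (stride % 256) == 0;
    }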
|
Since we emulate clip-planes, the clip-vertex is used within the VS
itself (thanks to nir_lower_clip). So just ignore it as a VS output.
Fixes a boatload of piglit tests that were asserting on unknown
varying slot.
(Also unrelated spelling/typo fix.)
Signed-off-by: Rob Clark <[email protected]>
|
With glsl_to_nir we end up with local variables, instead of global, for
arrays.
Note that we'll eventually have to do something more clever, I think,
when we support multiple functions, but that will probably take some
work in a few places.
Signed-off-by: Rob Clark <[email protected]>
|
Something we start to see with glsl_to_nir.
Signed-off-by: Rob Clark <[email protected]>
|
With tgsi_to_nir we get this as a normal input with VARYING_SLOT_FACE.
But with glsl_to_nir plus nir_lower_system_values this becomes an intrinsic.
Signed-off-by: Rob Clark <[email protected]>
|
Experimentally derived max size.
Signed-off-by: Rob Clark <[email protected]>
|
When using the "shared" vertex array configuration strategy, we bind
each of the buffers as a separate array. However, there can be holes in
such vertex buffer lists, so just emit a disable for those.
Signed-off-by: Ilia Mirkin <[email protected]>
Cc: [email protected]
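A sketch of the loop this describes (hypothetical helper names): every slot
is visited, and holes in the vertex buffer list get an explicit disable
instead of being skipped:

    #include <stdint.h>

    struct push_ctx;                                            /* opaque stand-in */
    void emit_vertex_array(struct push_ctx *p, unsigned i);     /* hypothetical */
    void emit_vertex_array_disable(struct push_ctx *p, unsigned i);

    static void
    emit_vertex_arrays(struct push_ctx *p, uint32_t enabled_mask, unsigned n)
    {
        for (unsigned i = 0; i < n; i++) {
            if (enabled_mask & (1u << i))
                emit_vertex_array(p, i);
            else
                emit_vertex_array_disable(p, i);   /* holes get disabled too */
        }
    }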
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
Teach the emitter that the two registers are sequential, and drop the
second arg entirely, in favor of a double-wide first argument.
Signed-off-by: Ilia Mirkin <[email protected]>
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
This largely leaves the existing image logic alone. When image support
is added this will have to be harmonized somehow.
Signed-off-by: Ilia Mirkin <[email protected]>
|
Issue a MEM_BARRIER. No idea if this is sufficient. As there are no
tests for this, it'll have to do for now.
Signed-off-by: Ilia Mirkin <[email protected]>
|