| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
This reverts commit a9b800aa81cffdcaef2490ff49986099feae2663. With push
constant support, the constructed SURFACE_STATE is unused and wasted. The
change only slows things down.
|
|
|
|
|
|
|
|
| |
We were relying on libEGL to pull in libwayland-client symbols, but
commit 2c2e64edaba0f6aeb181ca5b51eb8dea8e9b39f9 cleaned up that
symbol leak.
https://bugs.freedesktop.org/show_bug.cgi?id=67962
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Because we must maintain an exec_mask even when there's currently nothing on
the mask stack, we can still have an exec_mask at the end of the program.
Effectively, this mask should be reset to its default when returning from
main. Without relying on an END/RET opcode (I think it's valid to have
neither), this is actually difficult to do, as there doesn't seem to be any
reasonable place to do it; so instead let's just say the exec_mask is invalid
outside main (which it effectively is).
The problem was that the geometry shader called end_primitive outside the
shader (in the epilogue) and as a result used a bogus mask, leading to bugs
whenever we had to set the (somewhat misnamed) ret_in_main bit anywhere. So
just avoid the mask combining function when called from outside the shader.
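A minimal sketch of the idea in LLVM-C (the context struct and helper name
are hypothetical, not the actual gallivm code):

    #include <stdbool.h>
    #include <llvm-c/Core.h>

    struct gs_ctx {
       LLVMBuilderRef builder;
       LLVMValueRef exec_mask;   /* only valid while inside main */
       LLVMValueRef prim_mask;   /* masks that stay valid in the epilogue */
       bool inside_main;
    };

    /* end_primitive() runs both from the shader body and from the
     * epilogue after main has returned; only combine with exec_mask
     * while still inside main, where that mask is meaningful. */
    static LLVMValueRef
    current_mask(struct gs_ctx *ctx)
    {
       if (!ctx->inside_main)
          return ctx->prim_mask;   /* exec_mask is invalid out here */
       return LLVMBuildAnd(ctx->builder, ctx->prim_mask,
                           ctx->exec_mask, "mask");
    }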
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
|
| |
The code was quite weird: the second comparison was in fact a complete no-op,
and we can also do the comparison directly on the vector instead of a scalar,
which should not only be faster but also makes it way more obvious what that
mask is actually going to look like.
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of reducing masks to 0/1, simply use the mask directly as -1.
Also use signed comparisons instead of unsigned ones (as far as I understand,
these values have to be (very) small, and signed means llvm doesn't have to
apply additional logic to emulate the unsigned comparisons the cpu can't do).
Saves a couple of instructions in some test geometry shaders here.
v2: that was a bit too much optimization, don't skip combining the masks...
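As a standalone illustration of why this works (plain SSE2 intrinsics, not
the actual llvmpipe code): a vector compare already produces -1 (all bits
set) in each true lane, so masks combine with a single AND and never need to
be reduced to 0/1 first.

    #include <emmintrin.h>

    /* signed per-lane compare yields 0xffffffff (-1) or 0 directly */
    static __m128i
    combine_masks(__m128i a, __m128i b, __m128i limit)
    {
       __m128i ma = _mm_cmplt_epi32(a, limit);   /* -1 where a < limit */
       __m128i mb = _mm_cmplt_epi32(b, limit);
       return _mm_and_si128(ma, mb);             /* no 0/1 reduction */
    }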
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
| |
And dump the variant key too (same as vs does).
Just so I can stop wondering why I see the tgsi dump for fs and vs but not
gs...
|
|
|
|
| |
Was meant to match the name used elsewhere, spotted by Anthony.
|
|
|
|
|
|
|
|
|
|
| |
CMP instructions use BRW_ARF_NULL as a destination. Prior to this
patch, dump_instruction() decoded the destination as "???".
Now it decodes BRW_ARF_NULL as "(null)" and other ARFs numerically.
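A sketch of the decode logic (simplified; the register encoding is an
assumption, not copied from brw_reg.h):

    #include <stdio.h>

    #define BRW_ARF_NULL 0x00   /* assumed encoding */

    static void
    print_arf(FILE *f, unsigned nr)
    {
       if (nr == BRW_ARF_NULL)
          fprintf(f, "(null)");
       else
          fprintf(f, "arf%u", nr);   /* other ARFs: print numerically */
    }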
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This resulted in printouts like:
246: cmp.cmod.f0.0
???, vgrf152, 0.000000f, (null),
With this patch, CMP is properly printed on one line.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Many GLSL shaders contain code of the form:
x = condition ? foo : bar
The compiler emits an ir_if tree for this, since each subexpression
might be a complex tree that could have side-effects and short-circuit
logic operations.
However, the common case is to simply pick one of two constants or
variables' values, which is exactly what SEL is for. Replacing IF/ELSE
with SEL also simplifies the control flow graph, making optimization
passes that work on basic blocks more effective.
The shader-db statistics:
total instructions in shared programs: 1655247 -> 1503234 (-9.18%)
instructions in affected programs: 949188 -> 797175 (-16.02%)
2,970 shaders were helped, none hurt. Gained 181 SIMD16 programs.
This helps Valve's Source Engine games (max -41.33%), The Cave
(max -33.33%), Serious Sam 3 (max -18.64%), Yo Frankie! (max -30.19%),
Zen Bound (max -22.22%), GStreamer (max -6.12%), and GLBenchmark 2.7
(max -1.94%).
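For illustration, the shape of the transform (approximate, not actual
compiler output). Before, an IF/ELSE tree just to pick a value:

    (+f0.0) IF
    MOV dst, src_foo
    ELSE
    MOV dst, src_bar
    ENDIF

After, a single predicated select with no control flow:

    cmp.ne.f0.0 null, cond, 0
    (+f0.0) SEL dst, src_foo, src_bar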
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The instruction
(+f0.0) SEL dst, src0, src1
will write either src0 or src1 to dst, depending on the predicate.
Unlike most predicated instructions, it always writes to dst.
fs_inst::is_partial_write() is supposed to return false when the whole
register is guaranteed to be written. The !inst->predicated check makes
sense for most instructions, which might not write the whole register,
but SEL is a special case.
This caused live interval analysis to ignore the destination of
predicated SEL instructions when computing "def" information.
Requires the previous commit to avoid regressions.
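A sketch of the corrected check (close to, but not necessarily identical to,
the real i965 code; the remaining conditions are elided):

    bool
    fs_inst::is_partial_write()
    {
       /* A predicated SEL still writes all of dst, so it is not a
        * partial write; other predicated instructions may be. */
       return (this->predicate && this->opcode != BRW_OPCODE_SEL)
              /* || the existing partial-register conditions */;
    }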
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The existing inst->is_partial_write() already disallows predicated
instructions, so this has no functional change. However, it's worth
doing explicitly since the CSE pass does not consider the flag register.
This means it could blindly factor out operations that use the same
sources, but which have different condition codes set.
This prevents a regression in the next commit.
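A sketch of the guard (the helper name is hypothetical; the surrounding pass
is assumed):

    /* CSE does not track the flag register, so two instructions with
     * identical sources but different condition codes must not be
     * merged; refuse predicated instructions outright. */
    static bool
    is_cse_candidate(const fs_inst *inst)
    {
       if (inst->predicate)
          return false;
       return true;   /* plus the pass's other eligibility checks */
    }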
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Usually, the driver creates both 8-wide and 16-wide variants of every
fragment shader. When 16-wide compilation fails, it logs a performance
warning explaining why only an 8-wide program exists.
However, when there are pull parameters, the driver won't even bother
trying the 16-wide compile (since it would fail). In this case, it
failed to emit a performance warning, leaving no explanation for the
missing 16-wide program.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
| |
Fix ilo_gpe_init_view_surface_for_buffer to allow the buffer to be NULL, and
add ilo_gpe_set_view_surface_bo to set it later. This allows us to set up
SURFACE_STATE early for constant buffers backed by user buffers.
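The intended two-step usage, sketched (argument lists abbreviated and partly
guessed):

    /* set up SURFACE_STATE before the user buffer has a bo */
    ilo_gpe_init_view_surface_for_buffer(dev, NULL /* buf */,
                                         offset, size, elem_format,
                                         &surf /* ... */);

    /* ... later, once the user buffer has been uploaded ... */
    ilo_gpe_set_view_surface_bo(&surf, uploaded_bo);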
|
|
|
|
|
|
|
|
|
|
| |
In finalize_index_buffer(), when the current index buffer is destroyed due
to u_upload_data(), it may happen that the new index buffer ends up at the
same address as the old one. Comparing the pointers to the two buffers then
fails to detect the change, and 3DSTATE_INDEX_BUFFER is incorrectly skipped.
Holding a reference to the current index buffer before calling
u_upload_data() should fix the problem.
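A sketch of the fix using gallium's reference-counting helper (surrounding
code abbreviated):

    /* pin the old buffer so its storage cannot be freed and
     * immediately reused for the new allocation */
    struct pipe_resource *old_buf = NULL;
    pipe_resource_reference(&old_buf, ib->buffer);

    u_upload_data(/* ... uploads and may replace ib->buffer ... */);

    if (ib->buffer != old_buf) {
       /* the pointer comparison is reliable now: old_buf still owns
        * the old storage, so the new buffer cannot alias it */
       /* ... emit 3DSTATE_INDEX_BUFFER ... */
    }
    pipe_resource_reference(&old_buf, NULL);   /* drop the extra ref */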
|
|
|
|
|
|
|
| |
Makes this flag appear in the output for INTEL_DEBUG=state
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
INTEL_DEBUG=vue now emits a listing of each slot in the VUE map,
and the corresponding interpolation mode.
V2: Fix whitespace issues.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Fixes piglit tests const-inout-parameter.frag and
const-out-parameter.frag.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Cc: "9.2" <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
My previous attempt at doing so failed doubly and miserably (minification of
zero still gives one, and even if it didn't, the value was never written
anyway).
While here, also rename the confusingly named int_vec bld, as we have int
vecs of different sizes, and rename need_nr_mips (as this also changes
out-of-bounds behavior) to is_sviewinfo too.
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
d3d10 has no notion of distinct array resources, either at the resource or
the sampler view level. However, shader dcl of resources certainly has, and
d3d10 expects resinfo to return the values according to that declaration - in
particular, a resource might have been a 1d texture with some array layers,
while the sampler view might have used only 1 layer, so it can be accessed as
either a 1d or a 1d array texture (I think - the former definitely works).
resinfo on a resource declared as an array needs to return the number of
array layers, but a non-array resource needs to return 0 (and not 1). Hence
fix this by passing the target from the shader decl to emit_size_query and
using that (in the case of OpenGL, the target will come from the instruction
itself).
Could probably do the same for actual sampling, though it may not matter
there (as the bogus components will essentially get clamped away); it could
possibly wreak havoc though if it REALLY doesn't match (which is of course an
error, but still).
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
| |
Specifically, it must return 0 for non-existent mip levels (and non-existent
textures, which is an unsolved problem) for everything but the total mip
count.
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
|
| |
GLSL 1.50 incorporates the functionality of the
ARB_fragment_coord_conventions extension, so we need to make this
functionality available even if the extension isn't enabled.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
From section E.1 (Profiles and Deprecated Features of OpenGL 3.0)
of the OpenGL 3.0 spec:
"LineWidth is not deprecated, but values greater than 1.0
will generate an INVALID VALUE error"
From context it is clear that values greater than 1.0 should only
generate an INVALID VALUE error in a forward-compatible context.
The code was correctly quoting this spec text, but it was disallowing
all line widths in forward-compatible contexts, instead of just widths
greater than 1.0.
This patch introduces the correct check, so that setting a line width
of 1.0 or less is permitted.
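A sketch of the corrected check (the forward-compatible test is a
hypothetical helper; _mesa_error as in Mesa):

    if (width > 1.0f && context_is_forward_compatible(ctx)) {
       _mesa_error(ctx, GL_INVALID_VALUE, "glLineWidth");
       return;
    }
    /* widths of 1.0 or less always pass */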
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
| |
Otherwise gcc might do very unsafe optimizations, spotted by Uros Bizjak.
Hopefully this time it's finally right?
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Cayman and Trinity systems still seem to suffer from
stability problems with GPUVM. This also fixes compute
on these ASICs. GPUVM can still be enabled for testing
by setting the env var RADEON_VA=true.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=65958
Signed-off-by: Alex Deucher <[email protected]>
CC: "9.2" <[email protected]>
CC: "9.1" <[email protected]>
Reviewed-by: Christian König <[email protected]>
|
|
|
|
|
|
| |
softpipe has really weird handling of the draw attrs; let's
just not inject outputs into its data.
Trivial.
|
|
|
|
|
|
|
|
|
|
|
|
| |
We can't inject the primitive ids in the pipeline because
by that time the primitives have already been decomposed. To
properly number the primitives we need to handle the adjacency
primitives by hand. This patch moves the prim id injection into
the original primitive assembler and completely removes the
now-useless pipeline stage.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Without resetting the vertex id, with primitives where the same
vertex is used in different primitives (e.g. tri/line strips)
our vbuf module won't re-emit those vertices with the changed
primitive id. So let's reset the vertex id whenever injecting
a new primitive id, to make sure that the vertex data is correctly
emitted.
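A sketch of the injection loop (field names as in draw's vertex_header;
slot handling abbreviated):

    for (unsigned i = 0; i < num_verts; i++) {
       struct vertex_header *v = prim->v[i];
       v->data[prim_id_slot][0] = prim_id_as_float;  /* inject the id */
       v->vertex_id = UNDEFINED_VERTEX_ID;           /* force re-emit */
    }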
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Before inserting new front face and prim id outputs, clean up
the old extra outputs; otherwise our cache will reuse the previous
output slots, which breaks as soon as the outputs of the current
shader don't match the last one's.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
| |
Oops. Should fix https://bugs.freedesktop.org/show_bug.cgi?id=67921
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
libEGL was incorrectly exporting *all* symbols, public and private.
This patch adds -fvisibility=hidden to libEGL's linker flags to ensure
that only symbols annotated with __attribute__((visibility("default")))
get exported.
Sanity-checked with libEGL's builtin DRI2 driver and the i965 DRI driver
by running Piglit on X/EGL and by running weston-gears on Weston as an
X client.
Sanity-checked with libEGL's Gallium driver (which is not built-in) and
the swrast Gallium driver by running es2gears_x11.
Kristian reviewed the symbol diff in `nm libEGL.so`.
CC: "9.2" <[email protected]>
CC: Ian Romanick <[email protected]>
Acked-by: Kristian Høgsberg <[email protected]>
Reviewed-by: Jakob Bornecrantz <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Otherwise, blits to the window system buffer may cause crashes,
since dst_irb->mt may be NULL.
This code is lifted straight out of brw_blorp_framebuffer()'s
try_blorp_blit() helper.
Fixes crashes in Piglit's fbo-sys-blit on systems without BLORP.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65919
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Cc: "9.2" <[email protected]>
|
| |
|
|
|
|
|
|
|
|
| |
d3d10 specifies ordered comparisons for everything but not_equal, which is
unordered (http://msdn.microsoft.com/en-us/library/windows/desktop/cc308050.aspx).
OpenGL probably doesn't care.
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
| |
Clamping is only done for fixed-point formats as part of conversion to
texture format.
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
| |
This is wrong for both OpenGL and d3d. (In fact, clamping is a side effect
of converting to depth format, so this should really do quantization too,
at least in d3d10, for the comparisons to be truly correct.)
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
| |
Clearly the returned values need to be per-element if the lod is per element.
Does not actually change behavior yet.
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This opcode is quite problematic in tgsi: while it tries to mirror
d3d10 resinfo, it can't really do what's stated there because it is
missing the crazy return type modifiers. Hence specify that this is
ignored, along with the swizzle.
(Other options would be to have multiple opcodes, or to specify the ret
type modifier in dst_reg, say, as there are padding bits left there - but
resinfo would be the only instruction allowing this.)
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
For d3d10 and ARB_robust_buffer_access_behavior, we are required to return 0
for out-of-bounds coordinates (the code for this already existed, it was just
disabled). Additionally, we also need to return 0 for an out-of-bounds mip
level and an out-of-bounds layer. This changes the logic so that instead of
clamping the level/layer, an out-of-bounds mask is computed in that case
(the actual clamping can then be omitted, just like with coordinates, since
we set the fetch offset to zero when that happens anyway).
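A sketch with gallivm's logic helpers (context setup omitted; signatures
assumed from lp_bld_logic.h):

    /* build a mask instead of clamping the level */
    LLVMValueRef oob = lp_build_cmp(&int_bld, PIPE_FUNC_GREATER,
                                    level, last_level);
    /* ... do the fetch; offsets are forced to zero when oob ... */
    result = lp_build_andnot(&bld, fetched, oob);   /* 0 where oob */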
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
While so far this only causes some harmless test failures, there are lots
more cpus with DAZ. All 64bit capable ones can do it (particularly relevant
for AMD cpus, as they supported sse3 very very late), but if really necessary
we could check support for it for real with some more magic.
(In fact just about ANY cpu with sse2 can support DAZ; I believe the only
exceptions are first gen P4 (Willamette) cpus, and of those only early
steppings, which can't do it - it's almost like intel forgot to add it... A
real pity though, as docs say you can't just try to set it, since they will
throw a GPF.)
While this was meant to address
https://bugs.freedesktop.org/show_bug.cgi?id=67672 it does not fix it. Most
likely the tests need fixing, as I don't think there's any guarantee about
denorm handling in the reference math library functions if the flags aren't
set to standard values. Nevertheless, enabling DAZ on all cpus which can do
it should be the right thing to do.
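For reference, enabling the modes with the standard intrinsics (the
detection helper is hypothetical; per the text above, early Willamette
steppings fault if DAZ is set blindly):

    #include <xmmintrin.h>   /* FTZ */
    #include <pmmintrin.h>   /* DAZ */

    static void
    enable_ftz_daz(void)
    {
       _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
       if (cpu_has_daz())   /* hypothetical detection */
          _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    }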
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Should be much faster, and it seems to work in softpipe.
While here (also, it's now disabled) fix up the pow factor - the former value
is what is in GL core, but it is not actually accurate to fp32 precision (it
should be 1.0/2.4), and if someone were to do all the accurate math there's
no reason to waste 8 or so mantissa bits...
v2: use a real table-generating function instead of just printing the values
(it might take a bit longer, as it does calculations on some 3+ million
floats, but it is obviously much more descriptive).
Also fix up another inaccurate pow factor (this time in the python code) -
I was wondering where the couple of one-bit errors came from :-(.
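For reference, the exact encode formula the pow factor belongs to (standard
sRGB; the spec's literal exponent 0.41666 is just 1.0/2.4 rounded):

    #include <math.h>

    static float
    linear_to_srgb(float c)
    {
       return (c <= 0.0031308f) ? 12.92f * c
                                : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }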
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Zack Rusin <[email protected]>
|
|
|
|
| |
I think it's actually not good enough now...
|
|
|
|
| |
Move the estimate functions out of the tables and kill the tables.
|
|
|
|
|
| |
Other than regular clean-ups, this reduces the number of source files that
need to be recompiled when GPE functions are changed.
|
|
|
|
| |
This could speed up BLEND_STATE and COLOR_CALC_STATE emission a bit.
|
|
|
|
| |
There is only one caller so fold it.
|
|
|
|
| |
There is only one caller so fold it.
|
|
|
|
|
| |
Eliminate pipeline and GPE function vectors and have the pipeline functions
call the GPE emit functions directly.
|
| |
|
|
|
|
| |
CC: "9.2 <[email protected]>"
|