Simple clean-up.
Reviewed-by: Charmaine Lee <[email protected]>
Reviewed-by: Neha Bhende <[email protected]>
Move the calls to svga_hwtnl_set_fillmode() and svga_hwtnl_set_flatshade()
out of the two retry_draw_*() functions to the svga_draw_vbo() function.
Reviewed-by: Charmaine Lee <[email protected]>
This fixes a few Piglit transform feedback regressions caused by
commit 7a1401938b351.
In that change I moved the svga_update_state() call into the loops,
after the calls to svga_hwtnl_set_flatshade(). But
svga_hwtnl_set_flatshade() actually depends on some derived shader
state. This patch moves the svga_update_state() call into
svga_draw_vbo() so it's not duplicated in two places.
Fixes: 7a1401938b351 ("svga: clean up retry_draw_range_elements(),
retry_draw_arrays()")
Reviewed-by: Charmaine Lee <[email protected]>
If we fail to compile the normal VS or FS we fall back to a simple/
dummy shader. We need to rescan the shader to update the shader
info. Otherwise, this can lead to further translation failures
because the shader info doesn't match the actual shader.
Found by adding some extra debug assertions in the state-update code
while debugging something else.
v2: also update shader generic_inputs/outputs, etc. per Charmaine
Reviewed-by: Charmaine Lee <[email protected]>
fixes KHR-GL45.multi_bind.dispatch_bind_textures on Maxwell
Suggested-by: Ilia Mirkin <[email protected]>
Signed-off-by: Karol Herbst <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
We were incorrectly using the input info for outputs.
Reviewed-by: Marek Olšák <[email protected]>
Fixes CTS test:
KHR-GL46.shader_ballot_tests.ShaderBallotFunctionBallot
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
This fixes a regression in the broadcast color to all color bufs case.
Fixes: 6c691081a (r600: fixup sparse color exports.)
Signed-off-by: Dave Airlie <[email protected]>
I found a shader with
DCL TEMP[0], LOCAL
DCL TEMP[1..256], ARRAY(1), LOCAL
DCL TEMP[257..512], ARRAY(2), LOCAL
DCL TEMP[513..768], ARRAY(3), LOCAL
DCL TEMP[769], LOCAL
This would remap badly, as it would add up all the spilled array sizes
and subtract the total even from TEMP[0], which comes before the arrays.
If the current temp is less than an array's start index, break out.
Fixes: 1d871aa6 (r600g: Implement spilling of temp arrays (v2))
Signed-off-by: Dave Airlie <[email protected]>
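
For illustration, a minimal generic sketch of the indexing rule described
above (hypothetical struct and function names, not the actual r600 code):
only arrays that start below the current temp contribute to the offset.

#include <stdio.h>

/* Hypothetical description of a spilled temp array: which TGSI temp
 * index it starts at and how many temps it covers. */
struct spilled_array {
   unsigned start;
   unsigned size;
};

/* Remap a non-array temp index, assuming 'arrays' is sorted by start.
 * Break out as soon as the current temp is less than an array's start. */
static unsigned
remap_temp(unsigned temp, const struct spilled_array *arrays, unsigned n)
{
   unsigned offset = 0;
   for (unsigned i = 0; i < n; i++) {
      if (temp < arrays[i].start)
         break;
      offset += arrays[i].size;
   }
   return temp - offset;
}

int main(void)
{
   /* arrays matching the shader in the commit message */
   const struct spilled_array arrays[] = {
      { 1, 256 }, { 257, 256 }, { 513, 256 },
   };
   printf("TEMP[0]   -> %u\n", remap_temp(0, arrays, 3));   /* stays 0   */
   printf("TEMP[769] -> %u\n", remap_temp(769, arrays, 3)); /* becomes 1 */
   return 0;
}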
This enables ARB_sample_shading if the renderer supports it.
Signed-off-by: Dave Airlie <[email protected]>
This relies on the renderer code landing first.
Signed-off-by: Dave Airlie <[email protected]>
Nobody queries these and nobody sets them to anything useful; the docs
say TODO.
Drop them until a use appears.
Reviewed-by: Roland Scheidegger <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
This struct allows us to report:
- accurate max point size/line width
- accurate texel and texture gather offsets
- vertex/geometry limits
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Flush a resource's previous write_batch synchronously. Because a
resource's associated batches are not updated until after the flush
thread submits rendering to the kernel, this was causing a bit of
confusion in the following loop. This fixes a bug that appeared with
recent stk.
Perhaps we need to re-work things a bit to clear out dependent batches
in the ctx's thread and use a fence to deal with the period between
when a flush is queued and when it is submitted to the kernel. But
this will do until time permits a larger refactor.
Signed-off-by: Rob Clark <[email protected]>
Because of loops, we can't schedule all of a block's predecessors first.
Instead just assume that the result consumed in a block was written far
enough away in all paths into a block. And do an intra-block scheduling
pass to figure out if there are any cases where we need to insert extra
nop's. This works out better than always assuming the worst case (ie.
that a value live into a block was written in the last instruction in
the predecessor block).
Signed-off-by: Rob Clark <[email protected]>
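
A tiny illustration of the intra-block idea (generic stand-in code, not the
ir3 scheduler): if a consumer sits fewer instruction slots after its producer
than the required delay, the difference has to be filled with nops, while a
value live-in to the block is assumed to have been written far enough back.

/* Number of nops needed between a producer and a consumer, given the
 * hardware's required delay in instruction slots.  A negative producer
 * slot means the value was written in a predecessor block. */
static unsigned
nops_needed(int producer_slot, int consumer_slot, unsigned required_delay)
{
   if (producer_slot < 0)
      return 0;                    /* assume "far enough away" */
   int distance = consumer_slot - producer_slot - 1;
   if (distance < 0)
      distance = 0;
   return (unsigned)distance >= required_delay
             ? 0
             : required_delay - (unsigned)distance;
}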
Account for the move to predicate register, to try to avoid needing to
insert extra NOPs later.
Signed-off-by: Rob Clark <[email protected]>
Normally false-deps are not something to consider, since they mostly
exist for delay-slot related reasons:
* barriers
* ordering writes after read
* SSBO/image access ordering
The exception is a false-dependency on an array store.
Signed-off-by: Rob Clark <[email protected]>
Previously we didn't handle flow control in legalize, and instead just
set (ss)(sy) on the first instruction in every block. Which isn't very
clever.
Instead, consider output state of all predecessor blocks, so we only
set a sync bit if needed for any possible path leading into a block.
Because of loops, we can't require that all predecessor blocks are
legalized before a given block, so instead run in a loop until the results
converge.
Signed-off-by: Rob Clark <[email protected]>
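
The convergence idea, sketched generically (toy data structures, not the
actual ir3 legalize pass): each block's input sync state is the union of its
predecessors' output state, and the whole thing is re-run until no block's
output changes, which is what handles loop back-edges.

#include <stdbool.h>

#define MAX_PREDS 4

/* Toy block: predecessor indices plus a bitmask standing in for the
 * pending (ss)/(sy) sync state at the end of the block. */
struct block {
   unsigned npreds;
   unsigned preds[MAX_PREDS];
   unsigned out_state;
};

/* Stand-in for the per-block work: walk the instructions, set sync bits
 * where the merged input state requires them, and return the state left
 * pending at the block's end.  Here it just passes the input through. */
static unsigned
legalize_block(struct block *b, unsigned in_state)
{
   (void)b;
   return in_state;
}

static void
legalize(struct block *blocks, unsigned nblocks)
{
   bool progress;
   do {
      progress = false;
      for (unsigned i = 0; i < nblocks; i++) {
         unsigned in = 0;
         /* a sync bit is only needed if some path into the block needs
          * it, so merge (union) every predecessor's output state */
         for (unsigned p = 0; p < blocks[i].npreds; p++)
            in |= blocks[blocks[i].preds[p]].out_state;

         unsigned out = legalize_block(&blocks[i], in);
         if (out != blocks[i].out_state) {
            blocks[i].out_state = out;
            progress = true;       /* back-edges may need another pass */
         }
      }
   } while (progress);
}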
Useful in the following patches.
Signed-off-by: Rob Clark <[email protected]>
Maybe there is a better way to do this.. where it becomes useful is "array"
loads, which end up as a false-dep for a later array store.
If all the uses of an array load are CP'd into their consumers, it still
leaves a dangling array load, leading to funny things like:
mov.u32u32 r5.y, r0.y
mov.u32u32 r5.y, r0.z
Signed-off-by: Rob Clark <[email protected]>
Also consider immediates when swapping the first two srcs, because they
can be lowered to a constant.
Signed-off-by: Rob Clark <[email protected]>
Now that we convert phi webs to ssa, we can drop all this.
Signed-off-by: Rob Clark <[email protected]>
Now that it is unused.
Signed-off-by: Rob Clark <[email protected]>
Generally seems to do worse on instruction count and register usage,
according to shader-db. But shader-db also doesn't do a very good job
of weighting loop bodies, so that might not be totally valid.
So add an env variable to enable the GCM pass for easier experimentation.
Signed-off-by: Rob Clark <[email protected]>
More useful nir passes have been added since the initial conversion to
nir, but ir3 was never updated to use them.
Signed-off-by: Rob Clark <[email protected]>
Aggressively lowering all if/else to selects in some extreme cases
results in much higher register pressure. Using peephole select instead
with a modest threshold speeds up alu2 4x!
16 seems like a good limit, low enough to help alu2 but not so low that
it penalizes everything else. With a bit better scheduling of the
instruction that moves a value into a predicate register, we might be
able to lower this limit a bit more in the future, but since we need 6
cycles from the move to predicate register to predicated branch, that
puts some sort of lower bound on how far we can lower this threshold.
Signed-off-by: Rob Clark <[email protected]>
nir's from_ssa pass is much better at avoiding inserting extra moves
than our logic is. And lowering phi webs to regs just treats anything
involved in a phi web as an array of length=1. Which with previous
array related fixes in RA/etc ends up working out quite well. This cuts
down on extra instructions and also helps with register pressure.
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Makes it easier to compare values seen in-game (where there are many
shaders) to the cmdline standalone compiler.
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Instead, if possible fold the (sat) flag into the src, otherwise use:
(sat)max.f rD, rS, rS
Signed-off-by: Rob Clark <[email protected]>
Seems to be there since a3xx, but we always lowered fsat. By not
lowering fsat we can shave some instructions, especially in shaders
that use lots of clamp(foo, 0.0, 1.0).
Signed-off-by: Rob Clark <[email protected]>
Use the livein state of other blocks to extend the live range of arrays
when they are still needed by successor blocks.
Signed-off-by: Rob Clark <[email protected]>
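
Roughly (stand-in types, not the actual ir3 RA code): if an array is in the
live-in set of any successor block, its live range is extended at least to
the end of the current block.

#include <stdbool.h>

#define MAX_SUCCS 2

/* Toy structures standing in for ir3's blocks and arrays. */
struct blk {
   unsigned end_ip;                 /* last instruction of the block    */
   bool livein[8];                  /* livein[a]: array a live on entry */
   struct blk *succs[MAX_SUCCS];
   unsigned nsuccs;
};

struct array {
   unsigned id;                     /* assumed < 8 for this toy example */
   unsigned end_ip;                 /* current end of the live range    */
};

/* Extend an array's live range through the end of every block that has a
 * successor with the array live-in, i.e. wherever it is still needed. */
static void
extend_array_range(struct array *arr, struct blk *blocks, unsigned nblocks)
{
   for (unsigned i = 0; i < nblocks; i++) {
      for (unsigned s = 0; s < blocks[i].nsuccs; s++) {
         if (blocks[i].succs[s]->livein[arr->id]) {
            if (arr->end_ip < blocks[i].end_ip)
               arr->end_ip = blocks[i].end_ip;
            break;
         }
      }
   }
}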
Signed-off-by: Rob Clark <[email protected]>
(Plus a couple TODOs)
Signed-off-by: Rob Clark <[email protected]>
Since these are not in SSA form, add them to the block's keeps so they
don't appear unused.
Signed-off-by: Rob Clark <[email protected]>
When eliminating movs, the instruction that is now directly using the
src of the mov has the same scheduling order constraints as the original
mov instruction.
Signed-off-by: Rob Clark <[email protected]>
The function ends after this if/else ladder, so it was pointless.
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
The number of bits depends on the generation. But printing negative
values with the a5xx encoding (the largest size) while compiling for a3xx
or a4xx would result in negative values being printed as large positive
values.
I guess in practice huge negative branch offsets aren't likely (and if
that is the case, the shader is probably too big to grok by reading the
assembly). So just print using the smallest bitfield size.
Signed-off-by: Rob Clark <[email protected]>
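
For example, a generic N-bit sign-extension helper (illustrative only, not
the actual ir3 disassembler code; the field widths used below are made up)
shows how the same raw immediate decodes differently depending on how many
bits the branch offset field has:

#include <stdint.h>
#include <stdio.h>

/* Sign-extend the low 'bits' bits of 'imm'. */
static int32_t
sign_extend(uint32_t imm, unsigned bits)
{
   uint32_t mask = (bits < 32) ? ((1u << bits) - 1u) : ~0u;
   imm &= mask;
   if (bits < 32 && (imm & (1u << (bits - 1))))
      imm |= ~mask;                /* top bit of the field set: negative */
   return (int32_t)imm;
}

int main(void)
{
   uint32_t raw = 0x3fe;           /* -2 in a (hypothetical) 10-bit field */
   /* decoded with a wider field than the hardware actually uses, the
    * value no longer looks negative */
   printf("10-bit: %d\n", sign_extend(raw, 10));  /* -2   */
   printf("16-bit: %d\n", sign_extend(raw, 16));  /* 1022 */
   return 0;
}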
Try to clean up things like:
br !p0.x #2
br p0.x #something
to eliminate the first branch.
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
When trying to optimize to reduce stalls, it is nice to see this info.
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>