|
|
|
|
|
|
| |
Later we will pass the compiler to nir_optimise so it can be used by the
loop unroll pass.
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
V2:
- tidy-ups suggested by Connor.
- tidy up cloning logic and handle copy propagation
  based on a suggestion by Connor.
- use nir_ssa_def_rewrite_uses to fix up lcssa phis,
  as suggested by Connor (see the sketch below).
- add support for complex loop unrolling (two terminators)
- handle the case where the ssa def's use outside the loop is already a phi
- support unrolling loops with multiple terminators when the trip count
  is known for each terminator
V3:
- set the correct num_components when creating the phi in complex unroll
- rewrote the remap table update based on Jason's suggestions.
- remove the unneeded extract_loop_body() helper as suggested by Jason.
- simplify the lcssa phi fix-up code for simple loops as per Jason's suggestions.
- use a mem context to keep track of hash table memory as suggested by Jason.
- move the is_{complex,simple}_loop helpers to the unroll code
- require nir_metadata_block_index
- partially rewrote complex unroll to be simpler and easier to follow.
V4:
- use rzalloc() when creating a nir_phi_src without setting pred right away;
  fixes a regression caused by ralloc() no longer zeroing memory.
V5:
- simplify calling of complex_unroll()
- use the new loop terminator fields to get the break/continue-from blocks
  and simplify the loop unrolling code
- handle slightly less trivial loop terminators. if branches can
  now have instructions but can only contain a single block.
- use nir print style IR snippets in the unroll function descriptions
- add a better explanation and a variable for why we need to clone
  additional times when the second terminator is the limiting
  terminator.
- partially convert out of ssa before unrolling loops (suggested by Jason)
v6:
- remove unused nir_builder
- use Jason's new from-ssa helper
- tidy/fix up cursor use
- unroll terminators that contain control flow correctly
- unroll complex loops with control flow before the terminators
  correctly
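A minimal sketch of the lcssa phi fix-up from the V2 notes, assuming
hypothetical 'def', 'loop' and 'phi' variables and an is_block_in_loop()
helper (not the pass's actual code):

    /* Rewrite every use of 'def' that lives outside the loop to go
     * through the lcssa phi placed after the loop instead. */
    nir_foreach_use_safe(use, def) {
       if (!is_block_in_loop(use->parent_instr->block, loop)) {
          nir_instr_rewrite_src(use->parent_instr, use,
                                nir_src_for_ssa(&phi->dest.ssa));
       }
    }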
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
| |
V2:
- updated to create a generic list clone helper nir_cf_list_clone()
  (usage sketch below)
- continue to assert on clone when the fallback flag is not set, as
  suggested by Jason.
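A rough usage sketch for the helper; the signature is assumed from the
description above, not copied from the header:

    /* clone the loop body, then splice the copy back in at 'cursor' */
    nir_cf_list cloned;
    nir_cf_list_clone(&cloned, &loop_body, &loop->cf_node, remap_table);
    nir_cf_reinsert(&cloned, cursor);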
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
| |
We need to do this because we partially get out of SSA when unrolling
and cloning loops.
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
| |
This will be useful for fixing phi srcs when cloning a loop body
during loop unrolling.
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
V2: Do a "depth first search" to convert to LCSSA
V3: Small comment fixup
V4: Rebase, adapt to removal of function overloads
V5: Rebase, adapt to relocation of nir to compiler/nir
    Still need to adapt to potential if-uses
    Work around nir_validate issue
V6 (Timothy):
- tidy lcssa and stop leaking memory
- don't rewrite the src for the lcssa phi node
- validate lcssa phi srcs to avoid a post-validate assert
- don't add a new phi if one already exists
- more lcssa phi validation fixes
- Rather than marking ssa defs inside a loop, just mark blocks inside
  a loop. This is simpler and fixes lcssa for intrinsics which do
  not have a destination.
- don't create LCSSA phis for loops we won't unroll
- require loop metadata for the lcssa pass
- handle the case where the ssa def's use outside the loop is already a phi
V7: (Timothy)
- pass the indirect mask to the metadata call
v8: (Timothy)
- make convert to lcssa a helper function rather than a nir pass
- replace the inside-loop bitset with on-the-fly block index logic.
- remove lcssa phi validation special cases
- inline code from useless helpers, suggested by Jason.
- always do lcssa on loops, suggested by Jason.
- stop making lcssa phis special. Add as many sources as the block
  has predecessors, suggested by Jason (see the sketch below).
V9: (Timothy)
- fix a regression with the is_lcssa_phi field not being initialised
  to false now that ralloc() doesn't zero out memory.
V10: (Timothy)
- remove extra braces in the SSA example, pointed out by Topi
V11: (Timothy)
- add missing support for LCSSA phis in if conditions.
V12: (Timothy)
- small tidy up suggested by Jason.
- always create an lcssa phi even if it just points to an lcssa
  phi from an inner loop
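A minimal sketch of the phi construction the v8 notes describe, assuming
hypothetical 'shader', 'loop' and 'def' variables:

    /* Place an lcssa phi in the block directly after the loop, with one
     * source per predecessor, all pointing at the escaping value. */
    nir_block *block_after_loop =
       nir_cf_node_as_block(nir_cf_node_next(&loop->cf_node));

    nir_phi_instr *phi = nir_phi_instr_create(shader);
    nir_ssa_dest_init(&phi->instr, &phi->dest, def->num_components,
                      def->bit_size, NULL);

    set_foreach(block_after_loop->predecessors, entry) {
       nir_phi_src *src = rzalloc(phi, nir_phi_src);
       src->pred = (nir_block *) entry->key;
       src->src = nir_src_for_ssa(def);
       exec_list_push_tail(&phi->srcs, &src->node);
    }

    nir_instr_insert(nir_before_block(block_after_loop), &phi->instr);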
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This pass detects induction variables and calculates the
trip count of loops to be used for loop unrolling (a scalar sketch of
the trip-count math appears below).
V2: Rebase, adapt to removal of function overloads
V3: (Timothy Arceri)
- don't try to find the trip count if the loop terminator conditional is a phi
- fix trip count for do-while loops
- replace the conditional type != alu assert with a return
- disable unrolling of loops with continues
- multiple fixes to memory allocation; stop leaking and don't destroy
  structs we want to use for unrolling.
- fix iteration count bugs when the induction var is not on the RHS of
  the condition
- add FIXME for && conditions
- calculate the trip count for unsigned induction/limit vars
V4: (Timothy Arceri)
- count instructions in a loop
- set the limiting_terminator even if we can't find the trip count for
  all terminators. This is needed for complex unrolling where we handle
  2 terminators and the trip count is unknown for one of them.
- restructure structs so we don't keep information not required after
  analysis, and remove dead fields.
- force unrolling in some cases as per the rules in the GLSL IR pass
V5: (Timothy Arceri)
- fix metadata mask value 0x10 vs 0x16
V6: (Timothy Arceri)
- merge the loop_variable and nir_loop_variable structs and lists, suggested
  by Jason
- remove the induction var hash table and store a pointer to the induction
  information in the loop_variable, suggested by Jason.
- use lowercase list_addtail(), suggested by Jason.
- tidy up init_loop_block() as per Jason's suggestions.
- replace switch with nir_op_infos[alu->op].num_inputs == 2 in
  is_var_basic_induction_var() as suggested by Jason.
- use nir_block_last_instr() in, and rename, foreach_cf_node_ex_loop() as
  suggested by Jason.
- fix the else check in is_trivial_loop_terminator() as per Connor's
  suggestions.
- simplify the offset for induction variables incremented before the exit
  condition is checked.
- replace the nir_op_isub check with an assert() as it should have been
  lowered away.
V7: (Timothy Arceri)
- use rzalloc() on nir_loop struct creation. Worked previously because ralloc()
  was broken and always zeroed the struct.
- fix cf_node_find_loop_jumps() to find jumps when loops contain
  nested if statements. Code is tidier as a result.
V8: (Timothy Arceri)
- move is_trivial_loop_terminator() to nir.h so we can use it in an assert
  in the loop unroll pass
- fix analysis to not bail when looking for the terminator when the break is
  in the else rather than the if
- added new loop terminator fields break_block, continue_from_block and
  continue_from_then so we don't have to gather these when doing unrolling.
- get the correct array length when forcing unrolling of variably
  indexed arrays that are the same size as the iteration count
- add support for induction variables of type float
- update the trivial loop terminator check to allow an if containing
  instructions as long as both branches contain only a single
  block.
V9: (Timothy)
- bunch of tidy ups and simplifications suggested by Jason.
- rewrote trivial terminator detection; now the only restriction is there
  must be no nested jumps, anything else goes.
- rewrote the iteration test to use nir_eval_const_opcode().
- count instructions properly even when forcing an unroll.
- bunch of other tidy ups and simplifications.
V10: (Timothy)
- some trivial tidy ups suggested by Jason.
- conditional fix for break inside continue branch by Jason.
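A scalar sketch of the basic trip-count math, for a while-style loop
whose limiting terminator tests 'i < limit' (names are illustrative,
not the pass's):

    #include <stdint.h>

    static int32_t
    trip_count_ilt(int32_t initial, int32_t step, int32_t limit)
    {
       if (step <= 0 || limit <= initial)
          return 0;   /* the body never runs (or the count is unknown) */
       /* round up: a final partial step still runs one iteration */
       return (limit - initial + step - 1) / step;
    }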
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This moves the nir_lower_indirect_derefs() call into
brw_preprocess_nir() so that it is called by both OpenGL and Vulkan,
and removes the call to the old GLSL IR pass
lower_variable_index_to_cond_assign().
We want to do this pass in nir to be able to move loop unrolling
to nir.
There is an increase of 1-3 instructions in a small number of shaders,
and 2 Kerbal Space Program shaders that increase by 32 instructions.
The changes seem to be caused by the difference between the GLSL IR and
NIR variable index lowering passes. The GLSL IR pass creates a
simple if-ladder for arrays of size 4 or less, while the NIR pass
implements a binary search for all arrays regardless of size.
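As an illustration of the difference (hypothetical scalar code, not the
actual output of either pass), indexing a 4-element array:

    /* GLSL IR: simple if-ladder for arrays of size 4 or less */
    static float
    fetch_if_ladder(const float arr[4], int i)
    {
       if (i == 0) return arr[0];
       if (i == 1) return arr[1];
       if (i == 2) return arr[2];
       return arr[3];
    }

    /* NIR: binary search regardless of array size */
    static float
    fetch_binary_search(const float arr[4], int i)
    {
       if (i < 2)
          return i == 0 ? arr[0] : arr[1];
       return i == 2 ? arr[2] : arr[3];
    }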
Shader-db results BDW:
total instructions in shared programs: 13021176 -> 13021819 (0.00%)
instructions in affected programs: 57693 -> 58336 (1.11%)
helped: 20
HURT: 190
total cycles in shared programs: 299805580 -> 299750826 (-0.02%)
cycles in affected programs: 2290024 -> 2235270 (-2.39%)
helped: 337
HURT: 442
total fills in shared programs: 19984 -> 19984 (0.00%)
fills in affected programs: 0 -> 0
helped: 0
HURT: 0
LOST: 4
GAINED: 0
V2: remove the do_copy_propagation() call from the i965 GLSL IR
linking code. This call was added in f7741c52111 but since we are
moving the variable index lowering to NIR we no longer need it and
can just rely on the nir copy propagation pass.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Without this we will regress the max-samplers piglit test on Gen6
and lower when loop unrolling is done in NIR. There is a check
in the GLSL IR linker that errors when it finds indirects and
EmitNoIndirectSampler is set.
As far as I can tell there is no reason for not enabling this for
all gens regardless of whether they fully support ARB_gpu_shader5
or not.
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
These are designed for use within an optimization pass when SSA becomes
more pain than it's worth. They're very naive and don't generate
anything close to optimal register-based NIR. Also, they may result in
shaders which do not validate because of, for instance, registers in phi
sources. However, the register-based into-SSA pass should be pretty
efficient at cleaning up the mess.
Reviewed-by: Timothy Arceri <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Without this check the driver crashes when the application window
is closed unexpectedly.
Acked-by: Edward O'Callaghan <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Cc: "13.0" <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
When the application window is closed unexpectedly, the lost
window results in visual types receiving invalid parameters,
which causes a crash. The necessary check is added to prevent
the crash.
Reviewed-by: Jason Ekstrand <[email protected]>
Cc: "13.0" <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Hashcat needs MAX_GLOBAL_BUFFERS to be 21 or even 22 for some modes. It'll crash otherwise.
I'm adding an assert to see if programs need it to be even higher.
Signed-off-by: Christian Inci <[email protected]>
[Handle first properly; should be NFC, since clover always uses first == 0.]
Signed-off-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
The clipper hardware doesn't consider points as primitives that can be
clipped. Simply setting the corresponding cull bits works, and should not
have an adverse effect on other primitive types according to the hardware
team.
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Edward O'Callaghan <[email protected]>
|
|
|
|
|
|
|
|
| |
Should have no effect (other than perhaps on power consumption), but
Vulkan does this.
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Edward O'Callaghan <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Fix linking error with 'make check'.
CXXLD lp_test_format
../../../../src/gallium/auxiliary/.libs/libgallium.a(os_time.o): In function `os_time_get_nano':
src/gallium/auxiliary/os/os_time.c:59: undefined reference to `clock_gettime'
Signed-off-by: Vinson Lee <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add the index to the location when assigning driver locations for
output variables.
Otherwise two fragment shader outputs declared as:
layout (location = 0, index = 0) out vec4 output1;
layout (location = 0, index = 1) out vec4 output2;
will end up aliasing one another.
Note that this patch will make the second output variable in the above
example alias a possible third output variable with location = 1 and
index = 0. But this shouldn't be a problem in practice since only one
color attachment is supported when dual-source blending is used.
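A minimal sketch of the assignment this implies, with illustrative
names rather than the driver's actual fields:

    static unsigned
    assign_driver_location(unsigned location, unsigned index)
    {
       /* the index=1 output takes the slot after the index=0 output,
        * so the two outputs at location 0 no longer alias */
       return location + index;
    }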
Cc: "13.0" <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
|
|
|
|
|
|
|
| |
This passes all the CTS tests that get enabled for this.
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Thanks to Ilia's patch this works fine on radv.
No regressions in CTS, all enabled tests pass.
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
The spec says to ignore these fields for exclusive images.
Fixes crashes in:
dEQP-VK.clipping.*
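A minimal sketch of the idea, using the structure names from the
Vulkan headers (not the driver's actual code):

    #include <vulkan/vulkan.h>

    static void
    get_queue_families(const VkImageCreateInfo *info,
                       uint32_t *count, const uint32_t **indices)
    {
       /* the queue-family fields are only defined to be valid for
        * concurrent sharing, so never read them for exclusive images */
       if (info->sharingMode == VK_SHARING_MODE_CONCURRENT) {
          *count = info->queueFamilyIndexCount;
          *indices = info->pQueueFamilyIndices;
       } else {
          *count = 0;
          *indices = NULL;
       }
    }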
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
| |
(cc'ing stable as I'd like to backport the ubo speedup as well)
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Cc: "13.0" <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
v2: use gfxip names for llvm 4.0+
v3: use tonga for llvm <= 3.8 and drop the gfxip name;
we can just change it when we change the other asics.
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Junwei Zhang <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
Acked-by: Christian König <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
|
|
|
|
|
|
| |
Fixes a warning.
Reviewed-by: Samuel Iglesias Gonsálvez <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
The GL 4.5 spec says:
"If any enabled array’s buffer binding is zero when DrawArrays
or one of the other drawing commands defined in section 10.4 is
called, the result is undefined."
This commit avoids crashing, which is not a very good
"undefined result".
This fixes spec/!opengl 3.1/vao-broken-attrib piglit test.
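A minimal sketch of the kind of guard this implies, with an
illustrative stand-in for the driver's binding state:

    #include <stdbool.h>
    #include <stddef.h>

    struct vertex_binding { void *buffer_obj; };  /* illustrative */

    static bool
    binding_is_usable(const struct vertex_binding *binding)
    {
       /* a zero buffer binding means no backing storage: skip the
        * array instead of dereferencing a NULL buffer */
       return binding->buffer_obj != NULL;
    }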
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As per the C spec, it is illegal to alias pointers to different
types. This results in undefined behaviour after optimization
passes, producing very subtle bugs that happen only on a
full moon.
Use memcpy() as a well-defined coercion between the isomorphic
bit-field interpretations of memory.
V.2: Use C99 compat STATIC_ASSERT() over C11 static_assert().
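A minimal example of the well-defined coercion, assuming Mesa's
STATIC_ASSERT() macro is in scope:

    #include <stdint.h>
    #include <string.h>

    static uint32_t
    float_bits(float f)
    {
       uint32_t bits;
       STATIC_ASSERT(sizeof(bits) == sizeof(f));
       /* memcpy, unlike a pointer cast, is a well-defined way to
        * reinterpret the same bytes as a different type */
       memcpy(&bits, &f, sizeof(bits));
       return bits;
    }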
Signed-off-by: Edward O'Callaghan <[email protected]>
Reviewed-by: Charmaine Lee <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Now that there's some SoA fetch which never falls back, we should always get
results which are better or at least not worse (something like rgba32f will
stay the same).
For cases which get way better, think something like R16_UNORM with 8-wide
vectors: this was 8 sign-extend fetches, 8 cvt, 8 muls, followed by
a couple of shuffles to stitch things together (if it is smart enough,
6 unpacks) and then an 8-wide transpose (not sure if llvm could even
optimize the shuffles + transpose, since the 16bit values were actually
sign-extended to 128bit before being cast to a float vec, so that would be
another 8 unpacks). Now that is just 8 fetches (directly inserted into the
vector, albeit one 128bit insert is needed), 1 cvt, 1 mul.
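For reference, the scalar conversion each of those fetches performs;
the SoA path simply does this 8-wide with one cvt and one mul:

    #include <stdint.h>

    static float
    r16_unorm_to_float(uint16_t u)
    {
       /* unorm16: map 0..65535 onto 0.0..1.0 */
       return (float)u * (1.0f / 65535.0f);
    }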
v2: ditch the old AoS code instead of just disabling it.
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This can now handle rgtc (unorm) too - this path no longer handles plain
formats, but that's unnecessary since they now all have their proper SoA
unpack (this will still be dog-slow though, due to the actual fetch going
through per-pixel util fallbacks).
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This previously always fell back to AoS conversion. Even for 4-float formats
(by far the optimal case for that fallback) this was suboptimal,
since it meant the conversion couldn't be done with 256bit vectors. While this
may still only be partly possible for some formats, (unless there's AVX2
support) at least the transpose can be done with half the unpacks
(and before using the transpose for AoS fallbacks, it was worse still).
With fewer than 4 channels, things quickly got much worse with the AoS
fallback, even with 128bit vectors.
The strategy is pretty much the same as the existing one for formats
which fit into 32 bits, except there's now multiple vectors to be
fetched (2 or 4 to be exact), which need to be shuffled first (if it's 4
vectors, this amounts to a transpose, for 2 it's a bit different),
then the unpack is done the same (with the exception that the shift
of the channels is now modulo 32, and we need to select the right
vector).
In fact the most complex part about it is to get the shuffles right
for separating into lo/hi parts for AVX/AVX2...
This also makes use of the new ability of gather to use provided type
information, which we abuse to outsmart llvm so we get decent shuffles,
and to fetch 3x32bit vectors without having to ZExt the scalar.
And just because we can, we handle double formats too, albeit they are
a bit different (draw sometimes needs to handle that).
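A small sketch of the modulo-32 channel addressing mentioned above
(illustrative names; the real code emits LLVM IR):

    static void
    locate_channel(unsigned bit_offset, unsigned *vec_idx, unsigned *shift)
    {
       *vec_idx = bit_offset / 32;  /* which fetched 32-bit vector to select */
       *shift   = bit_offset % 32;  /* shift within the selected vector */
    }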
v2: fix typo float/int bug (generating inefficient code).
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
By using a dst_type in the gather interface, gather has some more
knowledge about how values should be fetched.
E.g. if this is a 3x32bit fetch and dst_type is 4x32bit vector gather
will no longer do a ZExt with a 96bit scalar value to 128bit, but
just fetch the 96bit as 3x32bit vector (this is still going to be
2 loads of course, but the loads can be done directly to simd vector
that way).
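A scalar sketch of that 3x32bit case (hypothetical helper; the real
code emits LLVM IR): two plain loads straight into a 4x32 destination
instead of building a 96bit scalar and zero-extending it:

    #include <stdint.h>
    #include <string.h>

    static void
    fetch_96_as_vec4(uint32_t dst[4], const void *src)
    {
       memcpy(dst, src, 8);                           /* one 64-bit load */
       memcpy(dst + 2, (const uint8_t *)src + 8, 4);  /* one 32-bit load */
       dst[3] = 0;                                    /* pad the unused lane */
    }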
Also, we now try to use the right int/float type. This should
make no difference really since there's typically no domain transition
penalties for such simd loads, however it actually makes a difference
since llvm will use different shuffle lowering afterwards so the caller
can use this to trick llvm into using sane shuffle afterwards (and yes
llvm is really stupid there - nothing against using the shuffle
instruction from the correct domain, but not at the cost of doing 3 times
more shuffles, the case which actually matters is refusal to use shufps
for integer values).
Also attempt to avoid things which look great on paper but which llvm
doesn't really handle (e.g. fetching 3-element 8 bit and 16 bit vectors
which is simply disastrous - I suspect type legalizer is to blame trying
to extend these vectors to 128bit types somehow, so fetching these with
scalars like before which is suboptimal due to the ZExt).
Remove the ability for truncation (no point, this is gather, not conversion)
as it is complex enough already.
While here also implement not just the float, but also the 64bit avx2
gathers (disabled though since based on the theoretical numbers the benefit
just isn't there at all until Skylake at least).
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
We should do a transpose, not extract/insert, at least with a "sufficient"
number of channels (for 4 channels, the extract/insert shuffles generated
otherwise look truly terrifying). Albeit we shouldn't fall back to that so
often in any case.
v2: ditch the extract/insert path, not worth keeping (we're going to avoid
hitting the fallback that often with future patches).
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
SoA fetch so far always assumed that data was aligned. However, we want to
use this for vertex fetch, and data might not be aligned there, so handle
it in this path too (basically just pass the alignment through to the other
functions). (It looks like it wouldn't work for cached s3tc, but this is
no different than with AoS fetch.)
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Avoid synchronization by using the secondary context
for uploading the vertex data for Draw*Up.
v2: Rely on u_upload_mgr to use persistent coherent
buffers. Do not flush.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
| |
Tests suggest MANAGED buffers are made dirty
at Lock time, not at Unlock time.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This new buffer upload path makes it possible to lock
faster than the normal path when using
DISCARD/NOOVERWRITE (see the sketch below).
v2: Diverse cleanups and fixes.
v3: Fix allocation size for 'lone' buffers and
add more debug info.
v4: Rewrite of the path to handle when DISCARD/NOOVERWRITE
is not used anymore. The resource content is copied to the
new resource used.
v5: flush for safety after unmap (not sure it is really required
here, but safer to flush).
v6: Do not use the path if persistent coherent mapping is unavailable.
Fix buffer creation flags.
v7: Do not flush since it is not needed.
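A hedged sketch of the path selection described above; DWORD and the
D3DLOCK_* flags come from the d3d9 headers, the helper is illustrative:

    static bool
    can_use_upload_path(DWORD lock_flags, bool persistent_coherent)
    {
       /* v6: the path requires persistent coherent mappings */
       if (!persistent_coherent)
          return false;
       /* the fast path is for DISCARD/NOOVERWRITE locks */
       return (lock_flags & (D3DLOCK_DISCARD | D3DLOCK_NOOVERWRITE)) != 0;
    }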
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
| |
Next patches will introduce an offset.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
| |
If the volumes (and the texture container) are not referenced,
then there are no pending operations on them. We can lock directly.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
| |
If the surfaces (and the texture container) are not referenced,
then there are no pending operations on them. We can lock directly.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
| |
The new arguments make it possible to keep the objects referenced
while the function hasn't run.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
| |
Will make it possible to use the bind count as an indication of
whether the surface/volume is used in the worker thread.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
| |
Will make it possible to use the bind count as an indication of
whether the surface/volume is used in the worker thread.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use nine_context_box_upload for uploads:
. systemmem volume to default volume
. managed volume internal content to its resource.
Ensure the uploads are executed before any action
that can alter the data, that is, LockBox and
volume destruction.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
| |
The last level was not released.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
| |
The last level was not released.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use nine_context_box_upload for uploads:
. systemmem surface to default surface
. managed surface internal content to its resource.
Ensure the uploads are executed before any action
that can alter the data, that is, LockRect,
NineSurface9_CopyDefaultToMem and surface destruction.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
| |
This function will be used for surface and volume uploads.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
| |
Generate mipmaps in the worker thread.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
| |
To offload mipmap generation as well.
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Do the upload in the other thread.
Usually managed buffers are used once per frame.
It is then very likely that pending_upload is 0 at Lock
time.
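A minimal sketch of the check implied here, assuming an atomic
pending_upload counter on the buffer (field and helper names are
illustrative):

    #include "util/u_atomic.h"

    /* the worker decrements the counter when an upload completes; if
     * it is zero at Lock time no synchronization is needed */
    static bool
    upload_pending(int *pending_upload)
    {
       return p_atomic_read(pending_upload) != 0;
    }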
Signed-off-by: Axel Davy <[email protected]>
|
|
|
|
|
|
| |
Will be used to upload buffers.
Signed-off-by: Axel Davy <[email protected]>
|