path: root/src/broadcom/compiler
Each entry: commit message (Author, Date, Files changed, Lines -/+)
* nir: allow specifying a set of opcodes in lower_alu_to_scalar  (Jonathan Marek, 2019-05-10, 1 file, -1/+1)
  This can be used by both etnaviv and freedreno/a2xx as they are both vec4 architectures with some instructions being scalar-only.
  Signed-off-by: Jonathan Marek <[email protected]>
  Reviewed-by: Christian Gmeiner <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
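  A rough sketch of how a vec4 backend might use this; the bitset-style parameter and the exact call shape are assumptions for illustration, not the verified signature at this revision:

      /* Assumed usage: request scalarization only for opcodes the hardware
       * can't run across a full vec4, e.g. reciprocal and reciprocal-sqrt. */
      BITSET_DECLARE(scalar_ops, nir_num_opcodes) = { 0 };
      BITSET_SET(scalar_ops, nir_op_frcp);
      BITSET_SET(scalar_ops, nir_op_frsq);
      NIR_PASS_V(s, nir_lower_alu_to_scalar, scalar_ops);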
* nir: Initialize lower_flrp_progress everywhere  (Ian Romanick, 2019-05-09, 1 file, -1/+1)
  I don't know why I thought NIR_PASS always set the progress variable.  Derp.
  Fixes: d41cdef2a59 ("nir: Use the flrp lowering pass instead of nir_opt_algebraic")
  Reviewed-by: Brian Paul <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>
  Reviewed-by: Emil Velikov <[email protected]>
  Coverity CID: 1444996
  Coverity CID: 1444995
  Coverity CID: 1444994
  Coverity CID: 1444993
  Coverity CID: 1444991
  Coverity CID: 1444989
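  The pattern being fixed, in general form: NIR_PASS only ever sets the progress variable to true when a pass reports work, so the caller must initialize it first (shown here with an arbitrary pass):

      bool progress = false;              /* must be initialized by the caller */
      NIR_PASS(progress, s, nir_opt_dce);
      if (progress) {
              /* e.g. loop the optimization passes again */
      }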
* nir: Use the flrp lowering pass instead of nir_opt_algebraic  (Ian Romanick, 2019-05-06, 1 file, -0/+23)
  I tried to be very careful while updating all the various drivers, but I don't have any of that hardware for testing. :(
  i965 is the only platform that sets always_precise = true, and it is only set true for fragment shaders.
  Gen4 and Gen5 both set lower_flrp32 only for vertex shaders.  For fragment shaders, nir_op_flrp is lowered during code generation as a(1-c)+bc.  On all other platforms 64-bit nir_op_flrp and on Gen11 32-bit nir_op_flrp are lowered using the old nir_opt_algebraic method.
  No changes on any other Intel platforms.
  v2: Add panfrost changes.
  Iron Lake and GM45 had similar results. (Iron Lake shown)
  total cycles in shared programs: 188647754 -> 188647748 (<.01%)
  cycles in affected programs: 5096 -> 5090 (-0.12%)
  helped: 3
  HURT: 0
  helped stats (abs) min: 2 max: 2 x̄: 2.00 x̃: 2
  helped stats (rel) min: 0.12% max: 0.12% x̄: 0.12% x̃: 0.12%
  Reviewed-by: Matt Turner <[email protected]>
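  For reference, nir_op_flrp(a, b, c) is plain linear interpolation; a minimal sketch of the two equivalent expansions mentioned above (not the literal lowering code):

      float flrp_precise(float a, float b, float c)
      {
              return a * (1.0f - c) + b * c;   /* the a(1-c)+bc form noted for FS codegen */
      }

      float flrp_fused(float a, float b, float c)
      {
              return a + c * (b - a);          /* fma-friendly form, equal in exact arithmetic */
      }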
* nir: nir_shader_compiler_options: drop native_integers  (Christian Gmeiner, 2019-05-07, 1 file, -1/+0)
  Drivers which do not support native integers should use a lowering pass to go from integers to floats.
  Signed-off-by: Christian Gmeiner <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
* v3d: Fix detection of TMU write sequences in register spilling.  (Eric Anholt, 2019-04-26, 1 file, -2/+9)
  We can't use the QPU functions to detect this until register allocation is done and we've moved inst->dst into inst->qpu.
  Fixes bad TMU sequences from register spilling in KHR-GLES31.core.compute_shader.shared-max.
* v3d: Fix detection of the last ldtmu before a new TMU op.  (Eric Anholt, 2019-04-26, 1 file, -3/+3)
  We were looking at the start instruction, instead of scanning through the list of following instructions to find any more ldtmus.
* v3d: Re-add support for memory_barrier_shared.  (Eric Anholt, 2019-04-26, 1 file, -0/+1)
  Looks like I lost it in a rebase conflict resolution.  We'd hit the unknown intrinsic assertion in KHR-GLES31.core.compute_shader.shared-struct.
  Fixes: 6b1c65982509 ("v3d: Add Compute Shader compilation support.")
* v3d: Add a note about i/o indirection for future performance work.  (Eric Anholt, 2019-04-26, 1 file, -0/+7)
* v3d: Assert that we do request the normal texturing return data.  (Eric Anholt, 2019-04-26, 1 file, -0/+2)
  An unused tex should be DCEed, but if it wasn't we'd run into trouble with not doing a TMUWT.
* v3d: Fix atomic cmpxchg in shaders on hardware.  (Eric Anholt, 2019-04-18, 1 file, -3/+13)
  In what might be my first case of finding a divergence between hardware and simpenrose for v3d 4.x, it seems that despite what the spec claims, you actually need specific values in the TYPE field for atomic ops.
  Fixes dEQP-GLES31.functional.*.compswap.*
* v3d: Fix an invalid reuse of flags generation from before a thrsw.  (Eric Anholt, 2019-04-18, 1 file, -0/+4)
  Noticed while debugging the last GLES 3.1 failure, though it doesn't seem to affect that bug.
* v3d: Always set up the qregs for CSD payload.  (Eric Anholt, 2019-04-16, 1 file, -10/+2)
  We were failing to set up payload[1] for use by LocalInvocationIndex/ID and shared variable accesses if gl_WorkGroupID/gl_GlobalInvocationID wasn't used (possibly because you only have one workgroup).
  You're always going to use payload[1], and payload[0] is common enough and we have DCE in the backend to clean it up if it happens to not be used.
* v3d: Only look up the 3rd texture gather offset for non-arrays.  (Eric Anholt, 2019-04-16, 1 file, -1/+1)
  Fixes assertion failures in the CTS since Karol's cleanup when NIR started noticing that we were reading an invalid component.
  Fixes: 5450f1c9fb09 ("v3d: prefer using nir_src_comp_as_int over nir_src_as_const_value")
* nir: make nir_const_value scalar  (Karol Herbst, 2019-04-14, 1 file, -1/+1)
  v2: remove & operator in a couple of memsets, add some memsets
  v3: fixup lima
  Signed-off-by: Karol Herbst <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]> (v2)
* v3d: Use the new lower_to_scratch implementation for indirects on temps.  (Eric Anholt, 2019-04-12, 6 files, -10/+190)
  We can use the same register spilling infrastructure for our loads/stores of indirect access of temp variables, instead of doing an if ladder.  Cuts 50% of instructions and max-temps from 2 KSP shaders in shader-db.  Also causes several other KSP shaders with large bodies and large loop counts to not be force-unrolled.
  The change was originally motivated by NOLTIS slightly modifying register pressure in piglit temp mat4 array read/write tests, triggering register allocation failures.
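  Conceptually (illustrative C, not the driver code), the old path turned an indirect temp access into a chain of compares, while the scratch path keeps it as one computed-address load serviced by the same machinery as spill fills:

      /* Old approach: an "if ladder" selecting the element by index. */
      float indirect_read_old(const float temp[4], int i)
      {
              if (i == 0) return temp[0];
              if (i == 1) return temp[1];
              if (i == 2) return temp[2];
              return temp[3];
      }

      /* New approach: the array lives in scratch memory, so the read is a
       * single load at a computed address (handled like a register fill). */
      float indirect_read_scratch(const float *scratch_base, int i)
      {
              return scratch_base[i];
      }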
* v3d: Add missing dumping for the spill offset/size uniforms.  (Eric Anholt, 2019-04-12, 1 file, -0/+8)
* v3d: Add missing base offset to CS shared memory accesses.  (Eric Anholt, 2019-04-12, 1 file, -9/+20)
  This code is so touchy, trying to emit the minimum amount of address math.  Some day we'll move it all to NIR, I hope.
* v3d: Add Compute Shader compilation support.  (Eric Anholt, 2019-04-12, 3 files, -4/+44)
  While waiting for the CSD UABI to get reviewed, I keep having to rebase the CS patch.  Just land the compiler side for now to keep it from diverging.
  For now this covers just GLES 3.1 compute shaders, not CL kernels.
* v3d: Replace the old shader-db env var output with the ARB_debug_output.  (Eric Anholt, 2019-04-12, 3 files, -30/+4)
  We're using ARB_debug_output for the main shader-db, but I had this env var left around from the shader-db-2 support (vc4 apitrace-based).
  Keep the env var around since it's nice sometimes to get the stats on a shader you're optimizing without having to do a shader-db run, but drop the old formatting that's not useful and keeps tricking me when I go to add another measurement to the shader-db output.
* v3d: Include the number of max temps used in the shader-db output.  (Eric Anholt, 2019-04-12, 1 file, -1/+29)
  This gives us finer-grained feedback on how we're doing on register pressure than "did we trigger a new shader to spill or drop thread count?"
* v3d: Add and use a define for the number of channels in a QPU invocation.  (Eric Anholt, 2019-04-12, 2 files, -3/+4)
  A shader invocation always executes 16 channels together, so we often end up multiplying things by this magic 16 number.  Give it a name.
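  A minimal sketch of what such a define and its use could look like; the spelling V3D_CHANNELS and the helper below are assumptions for illustration:

      /* Assumed spelling; the value is the 16 SIMD channels a QPU
       * invocation always executes together. */
      #define V3D_CHANNELS 16

      /* e.g. shader invocations covered by a number of QPU instances */
      static inline unsigned
      v3d_invocations(unsigned qpu_instances)
      {
              return qpu_instances * V3D_CHANNELS;
      }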
* nir/i965/freedreno/vc4: add a bindless bool to type size functions  (Timothy Arceri, 2019-04-12, 1 file, -1/+1)
  This is required to calculate sizes correctly when we have bindless samplers/images.
  Reviewed-by: Marek Olšák <[email protected]>
* v3d: Add an optimization pass for redundant flags updates.  (Eric Anholt, 2019-04-11, 4 files, -0/+142)
  Our exec masking introduces lots of redundant flags updates, and even without that there will be cases where NIR comparisons on the same sources for different reasons may generate the same comparison instruction before the selection.
  total instructions in shared programs: 6492930 -> 6460934 (-0.49%)
  total uniforms in shared programs: 2117460 -> 2115106 (-0.11%)
  total spills in shared programs: 4983 -> 4987 (0.08%)
  total fills in shared programs: 6408 -> 6416 (0.12%)
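  The idea, as an illustrative sketch (the instruction-level helpers here are hypothetical, not the actual VIR API): walk each block, remember the last still-valid flags write, and drop any later write that sets the flags from the same source with the same condition:

      static bool
      dce_flags_in_block(struct qblock *block)
      {
              struct qinst *last_flags = NULL;   /* last live flags update */
              bool progress = false;

              list_for_each_entry(struct qinst, inst, &block->instructions, link) {
                      if (writes_flags(inst)) {                       /* hypothetical helper */
                              if (last_flags && same_flags_update(inst, last_flags)) {
                                      strip_flags_write(inst);        /* keep the ALU op, drop the update */
                                      progress = true;
                              } else {
                                      last_flags = inst;
                              }
                      } else if (clobbers_flags(inst)) {              /* hypothetical helper */
                              last_flags = NULL;
                      }
              }
              return progress;
      }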
* nir: Get rid of global registers  (Jason Ekstrand, 2019-04-09, 1 file, -1/+0)
  We have a pass to lower global registers to locals and many drivers dutifully call it.  However, no one ever creates a global register, so it's all dead code.  It's time we bury it.
  Acked-by: Karol Herbst <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>
* v3d: prefer using nir_src_comp_as_int over nir_src_as_const_value  (Karol Herbst, 2019-04-07, 2 files, -11/+8)
  Signed-off-by: Karol Herbst <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
* v3d: Remove some dead members of struct v3d_compile.  (Eric Anholt, 2019-03-21, 1 file, -12/+0)
  These are more vc4 leftovers.
* v3d: Upload all of UBO[0] if any indirect load occurs.  (Eric Anholt, 2019-03-21, 3 files, -129/+1)
  The idea was that we could skip uploading the constant-indexed uniform data and just upload the uniforms that are variably-indexed.  However, since the VS bin and render shaders may have a different set of uniforms used, this meant that we had to upload the UBO for each of them.  The first case is generally a fairly small impact (usually the uniform array is the most space, other than a couple of FSes in shader-db), while the second is a larger impact: 3DMMES2 was uploading 38k/frame of uniforms instead of 18k.
  Given that the optimization is of dubious value, has a big downside, and is quite a bit of code, just drop it.
  No change in shader-db.  No change on 3DMMES2 (n=15).
* v3d: Move constant offsets to UBO addresses into the main uniform stream.  (Eric Anholt, 2019-03-21, 3 files, -9/+16)
  We'd end up with the constant offset in the uniform stream anyway, since they're bigger than small immediates.  Avoids the extra uniforms and adds in the shader in favor of just adding once on the CPU.
  shader-db:
  total instructions in shared programs: 6496865 -> 6494851 (-0.03%)
  total uniforms in shared programs: 2119511 -> 2117243 (-0.11%)
* v3d: Rename v3d_tmu_config_data to v3d_unit_data.  (Eric Anholt, 2019-03-21, 2 files, -9/+9)
  I want to reuse this for encoding small constant UBO/SSBO offsets into the uniform stream to reduce the extra uniform loads and adds for the small constant offsets.
* v3d: Fix leak of the mem_ctx after the DAG refactor.  (Eric Anholt, 2019-03-12, 1 file, -2/+2)
  Noticed while trying to get a CTS run again.
  Fixes: 33886474d646 ("v3d: Use the DAG datastructure for QPU instruction scheduling.")
* v3d: Use the DAG datastructure for QPU instruction scheduling.  (Eric Anholt, 2019-03-11, 1 file, -114/+72)
  Just a small code reduction from shared infrastructure.
* v3d: Reuse list_for_each_entry_rev().  (Eric Anholt, 2019-03-11, 1 file, -2/+2)
* v3d: Include a count of register pressure in the RA failure dumps.  (Eric Anholt, 2019-03-06, 1 file, -1/+13)
  You usually want to go find the highest pressure and figure out why you couldn't spill, or what pattern led to a bunch of pressure leading to that point.
* v3d: Drop the V3D 3.x vpm read dead code elimination.  (Eric Anholt, 2019-03-05, 1 file, -33/+2)
  We now have NIR dead code eliminating our VPM reads, so this shouldn't be necessary.
* v3d: Eliminate the TLB and TLBU files.  (Eric Anholt, 2019-03-05, 4 files, -41/+20)
  We can just use the magic register file like we do for other magic waddrs.
* v3d: Use ldunif instructions for uniforms.  (Eric Anholt, 2019-03-05, 10 files, -269/+27)
  The idea is that for repeated use of the same uniform, we could avoid loading it on each consumer.  The results look pretty good.
  total instructions in shared programs: 6413571 -> 6521464 (1.68%)
  total threads in shared programs: 154214 -> 154000 (-0.14%)
  total uniforms in shared programs: 2393604 -> 2119629 (-11.45%)
  total spills in shared programs: 4960 -> 4984 (0.48%)
  total fills in shared programs: 6350 -> 6418 (1.07%)
  Once we do scheduling at the NIR level, the register pressure (and thus also instruction count) issues we see here will drop back down.
* v3d: Add support for register-allocating a ldunif to a QFILE_TEMP.  (Eric Anholt, 2019-03-05, 2 files, -14/+77)
  On V3D 4.x, we can use ldunifrf to load uniforms to any register, and this will let us schedule the ldunif wherever we want in the program.
* v3d: Drop the old class bits splitting up the accumulators.  (Eric Anholt, 2019-03-05, 1 file, -7/+3)
  This seems to be left over from vc4, and I don't use them any more.
* v3d: Add support for vir-to-qpu of ldunif instructions to a temp.  (Eric Anholt, 2019-03-05, 1 file, -2/+15)
  We can load a uniform to any register, so add support for non-ALU instructions with sig.ldunif to a temp.
* v3d: Switch implicit uniforms over to being any qinst->uniform != ~0.  (Eric Anholt, 2019-03-05, 10 files, -123/+77)
  I'm not sure why I didn't do this before -- it's clearly much simpler to add dumping of the extra thing than to have it as another implicit source.
* v3d: Do uniform rematerialization spilling before dropping threadcount  (Eric Anholt, 2019-03-05, 1 file, -8/+10)
  This feels like the right tradeoff for threads vs uniforms, particularly given that we often have very short thread segments right now:
  total instructions in shared programs: 6411504 -> 6413571 (0.03%)
  total threads in shared programs: 153946 -> 154214 (0.17%)
  total uniforms in shared programs: 2387665 -> 2393604 (0.25%)
* v3d: Fix temporary leaks of temp_registers and when spilling.  (Eric Anholt, 2019-03-05, 1 file, -5/+4)
  On each iteration of successfully spilling a reg, we'd allocate another copy of temp_registers, and when decrementing thread count we'd allocate another copy of the graph.  These all got cleaned up on freeing the compile.
* v3d: Move the stores for fixed function VS output reads into NIR.  (Eric Anholt, 2019-03-05, 4 files, -195/+334)
  This lets us emit the VPM_WRITEs directly from nir_intrinsic_store_output() (useful once NIR scheduling is in place so that we can reduce register pressure), and lets future NIR scheduling schedule the math to generate them.  Even in the meantime, it looks like this lets NIR DCE some more code and make better decisions.
  total instructions in shared programs: 6429246 -> 6412976 (-0.25%)
  total threads in shared programs: 153924 -> 153934 (<.01%)
  total loops in shared programs: 486 -> 483 (-0.62%)
  total uniforms in shared programs: 2385436 -> 2388195 (0.12%)
  Acked-by: Ian Romanick <[email protected]> (nir)
* v3d: Translate f2i(fround_even) as FTOIN.  (Eric Anholt, 2019-03-05, 1 file, -2/+9)
  This appears to be just what the opcode does.  Needed for equivalence when moving FF VPM stores into NIR.
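  In C terms, the combined pattern being matched computes the following (assuming the default round-to-nearest-even floating-point environment):

      #include <math.h>
      #include <stdint.h>

      /* f2i32(fround_even(x)): convert to integer with ties rounded to even,
       * which is what the single FTOIN opcode does per the note above. */
      static inline int32_t ftoin_ref(float x)
      {
              return (int32_t)nearbyintf(x);
      }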
* v3d: Stop treating exec masking specially.  (Eric Anholt, 2019-03-05, 3 files, -14/+3)
  In our backend, the successor edges from the blocks only point to where QPU control flow goes, not where the notional control flow goes from a "break" or "continue" modifying the execution mask to resume writing to some channels later.  As a result, this attempt at restricting live ranges ended up missing the live range of a value where a conditional break/continue was present in a loop before the later def of a variable.  The previous commit ended up fixing the problem that the flag tried to solve.
  Fixes glsl-vs-loop-continue.shader_test and/or glsl-vs-loop-redundant-condition.shader_test based on register allocation results.
* v3d: Restrict live intervals to the blocks reachable from any def.  (Eric Anholt, 2019-03-05, 2 files, -4/+43)
  In the backend, we often have condition codes on writes to variables, such that there's no screening def anywhere and the previous live ranges algorithm would conclude that the start of the range extends to the start of the program.  However, we do know that the live range can only extend as early as you can reach from all blocks writing to the variable.
  The motivation was that, while we have a couple of hacks to try to promote conditional writes up to being a def within the block, the exec_mask one was broken and needed a replacement.
  Based on c3c1aa5aeb92 ("intel/fs: Restrict live intervals to the subset possibly reachable from any definition.").
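  The reachability restriction, sketched roughly in C (names and shape are hypothetical, not the v3d implementation): flood-fill the CFG forward from every block containing a write of the temp, and only let the live range start inside that reachable set:

      /* Hypothetical sketch: blocks_writing is the set of blocks with any
       * (possibly conditional) write of the temp; the result is the set of
       * blocks where the value could already hold meaningful data. */
      static void
      compute_reachable_from_defs(BITSET_WORD *reachable,
                                  const BITSET_WORD *blocks_writing,
                                  unsigned num_blocks)
      {
              bool progress = true;
              while (progress) {
                      progress = false;
                      for (unsigned b = 0; b < num_blocks; b++) {
                              if (!BITSET_TEST(reachable, b) &&
                                  (BITSET_TEST(blocks_writing, b) ||
                                   any_predecessor_reachable(reachable, b))) {   /* hypothetical helper */
                                      BITSET_SET(reachable, b);
                                      progress = true;
                              }
                      }
              }
      }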
* v3d: Rematerialize MOVs of uniforms instead of spilling them.  (Eric Anholt, 2019-02-25, 2 files, -27/+68)
  If we have a MOV of a uniform value available to spill, that's one of our best choices.  We can just not spill the value, and emit a new load of the uniform as the fill.  This saves bothering the TMU and the thrsw, and is the same cost in uniforms (since the spill offset is a uniform anyway).
  This doesn't have a huge impact on shader-db, since there aren't a whole lot of spills and we usually copy-prop the uniforms at the VIR level such that the only uniform MOVs are from vir_lower_uniforms:
  total instructions in shared programs: 6430292 -> 6430279 (<.01%)
  total uniforms in shared programs: 2386023 -> 2385787 (<.01%)
  total spills in shared programs: 4961 -> 4960 (-0.02%)
  total fills in shared programs: 6352 -> 6350 (-0.03%)
  However, I'm interested in dropping the uniforms copy-prop in the backend, since it would be cheaper to not load repeated uniforms if we have the registers to spare.
  This also saves many spills on dEQP-GLES31.functional.ubo.random.all_per_block_buffers.20, which is what motivated a bunch of my recent backend work in the first place:
  before: 46 spills, 106 fills, 3062 instructions
  after: 0 spills, 0 fills, 2611 instructions
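  Sketched as pseudocode (all helper names hypothetical), the spill-candidate decision this describes looks roughly like:

      /* If the chosen spill candidate is just a MOV of a uniform, skip the
       * TMU scratch store entirely and "fill" by re-emitting the uniform
       * load right before each use; otherwise do a normal spill. */
      if (is_uniform_mov(def_inst)) {
              remove_instruction(def_inst);
              foreach_use(use, temp) {
                      emit_uniform_load_before(use, uniform_of(def_inst));
              }
      } else {
              spill_to_tmu(temp);   /* the normal scratch store + fills */
      }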
* v3d: Dump the VIR after register spilling if we were forced to.  (Eric Anholt, 2019-02-25, 1 file, -0/+10)
  Spilling is unusual, but one often has to debug it when it happens, so dump it.
* v3d: Fix vir_is_raw_mov() for input unpacks.  (Eric Anholt, 2019-02-25, 1 file, -0/+7)
  There are no users at the moment, but I wanted to start using this in register spilling.
* v3d: Move i2b and f2b support into emit_comparison.  (Eric Anholt, 2019-02-18, 1 file, -13/+12)
  This lets us save a resolve to NIR true/false for ifs and discard_if.  No change in shader-db.
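  Both conversions are just "not equal to zero" comparisons, which is why they fit naturally in the comparison emitter; conceptually:

      #include <stdbool.h>
      #include <stdint.h>

      static inline bool i2b(int32_t x) { return x != 0; }
      static inline bool f2b(float x)   { return x != 0.0f; }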