path: root/src/broadcom
Commit message (Author, Date, Files, Lines)
* v3d: Add support for CS barrier() intrinsics. (Eric Anholt, 2019-01-14, 3 files, -0/+61)
* v3d: Add support for CS shared variable load/store/atomics. (Eric Anholt, 2019-01-14, 3 files, -13/+83)
  CS shared variables are handled effectively as SSBO access to a temporary buffer that will be allocated at CS dispatch time.
* v3d: Add support for CS workgroup/invocation id intrinsics. (Eric Anholt, 2019-01-14, 5 files, -1/+67)
  We get a payload for the ivec3 workgroup ID and an int local invocation index, and we use the core lowering to turn those into the global invocation ID and local invocation ID ivec3s.
* v3d: Add support for shader_image_load_store. (Eric Anholt, 2019-01-14, 8 files, -3/+652)
  This is only exposed on V3D 4.1+, because we didn't have the TMU write operations for images on 3.3 (to do GLES 3.1 there, you have to lower it to SSBO load/stores, which is a problem to solve later).
* v3d: Add SSBO/atomic counters support. (Eric Anholt, 2019-01-14, 3 files, -6/+143)
  So far I assume that all the buffers get written. If they weren't, you'd probably be using UBOs instead.
* v3d: Add support for matrix inputs to the FS. (Eric Anholt, 2019-01-14, 1 file, -13/+14)
  We've been relying on linking splitting up our varying matrices into separate vectors, but with SSO that doesn't happen. Supporting matrix inputs isn't too hard, though.
* v3d: Fix txf_ms 2D_ARRAY array index. (Eric Anholt, 2019-01-14, 1 file, -8/+10)
  We need to pass the array index through our coordinate transform unchanged.
  Fixes dEQP-GLES31.functional.texture.multisample.samples_1.*_2d_array
* v3d: Add support for the early_fragment_tests flag. (Eric Anholt, 2019-01-14, 1 file, -0/+10)
  If this flag hasn't been set by the shader and it has some visible side effects, then we need to disable EZ.
* v3d: Add support for flushing dirty TMU data at job end. (Eric Anholt, 2019-01-14, 1 file, -0/+23)
  This will be needed for SSBOs and image_load_store.
* nir: Add nir_lower_tex support for Broadcom's swizzled TG4 results. (Eric Anholt, 2019-01-08, 1 file, -0/+2)
  V3D returns the texels in a different order in the resulting vec4 from what GLSL wants, so we need to put in a swizzle.
  Fixes dEQP-GLES31.functional.texture.gather.basic.2d.rgba8.base_level.level_1
  Reviewed-by: Jason Ekstrand <[email protected]>
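  As an illustration of how a driver opts into this lowering, a minimal sketch follows; the option field name tg4_broadcom_swizzle is inferred from the commit and should be checked against nir_lower_tex_options in nir.h:

      #include "nir.h"

      /* Sketch: ask nir_lower_tex to emit the swizzle that reorders V3D's
       * TG4 result channels into the order GLSL expects.  The option field
       * name is an assumption based on the commit message. */
      static void
      lower_tg4_swizzle(nir_shader *s)
      {
          nir_lower_tex_options tex_options = {
              .tg4_broadcom_swizzle = true,
          };
          NIR_PASS_V(s, nir_lower_tex, &tex_options);
      }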
* v3d: Use the core tex lowering. (Eric Anholt, 2019-01-04, 3 files, -123/+10)
  Even without any clever optimization on the unpack operations, this gives us a useful value for the channels read field, which we can use to avoid ldtmu instructions to the no-op register.
  instructions in affected programs: 890712 -> 881974 (-0.98%)
* v3d: Stop scalarizing our uniform loads. (Eric Anholt, 2019-01-04, 2 files, -102/+57)
  We can pull a whole vector in a single indirect load. This saves a bunch of round-trips to the TMU, instructions for setting up multiple loads, references to the UBO base in the uniforms, and apparently manages to reduce register pressure as well.
  instructions in affected programs: 3086665 -> 2454967 (-20.47%)
  uniforms in affected programs: 919581 -> 721039 (-21.59%)
  threads in affected programs: 1710 -> 3420 (100.00%)
  spills in affected programs: 596 -> 522 (-12.42%)
  fills in affected programs: 680 -> 562 (-17.35%)
  Improves 3dmmes performance by 2.29312% +/- 0.139825% (n=5)
* v3d: Do UBO loads a vector at a time. (Eric Anholt, 2019-01-04, 2 files, -35/+99)
  In the process of adding support for SSBOs and CS shared vars, I ended up needing a helper function for doing TMU general ops. This helper can be that starting point, and saves us a bunch of round-trips to the TMU by loading a vector at a time.
* v3d: Remove dead switch cases and comments from v3d_nir_lower_io. (Eric Anholt, 2019-01-04, 1 file, -8/+3)
  Moving things to NIR left this mess around. All we lower now is uniforms.
* v3d: Reinstate the new shader-db output after v3d_compile() refactor. (Eric Anholt, 2019-01-04, 1 file, -1/+18)
  I misplaced it while resolving rebase conflicts.
* v3d: Refactor compiler entrypoints. (Eric Anholt, 2019-01-02, 2 files, -163/+164)
  Before, I had per-stage entrypoints with some helpers shared between them. As I extended it for compute shaders and shader-db, it turned out that the other common code in the middle wanted to be shared too.
* v3d: Handle dynamically uniform IF statements with uniform control flow. (Eric Anholt, 2019-01-02, 1 file, -1/+65)
  Loops will be trickier, since we need some analysis to figure out if the breaks/continues inside are uniform. Until we get that in NIR, this gets us some quick wins.
  total instructions in shared programs: 6192844 -> 6174162 (-0.30%)
  instructions in affected programs: 487781 -> 469099 (-3.83%)
* v3d: Fold comparisons for IF conditions into the flags for the IF. (Eric Anholt, 2019-01-02, 5 files, -12/+57)
  total instructions in shared programs: 6193810 -> 6192844 (-0.02%)
  instructions in affected programs: 800373 -> 799407 (-0.12%)
* v3d: Don't try to fold non-SSA-src comparisons into bcsels. (Eric Anholt, 2019-01-02, 1 file, -1/+17)
  There could have been a write of a src in between the comparison and the bcsel that would invalidate the comparison.
* v3d: Move the "Find the ALU instruction generating our bool" out of bcsel.Eric Anholt2019-01-021-6/+9
| | | | This will be reused for if statements.
* v3d: Simplify the emission of comparisons for the bcsel optimization. (Eric Anholt, 2019-01-02, 1 file, -37/+24)
  I wanted to reuse the comparison stuff for nir_ifs, but for that I just want the flags and no destination value. Splitting the conditions from the destinations ended up cleaning the existing code up, anyway.
* v3d: Add support for gl_HelperInvocation. (Eric Anholt, 2018-12-30, 1 file, -0/+8)
  We can just look at the MSF flags -- if they're unset, then we're definitely in a helper invocation.
  Fixes dEQP-GLES31.functional.shaders.helper_invocation.* with GLES3.1 enabled.
* v3d: Add support for textureSize() on MSAA textures. (Eric Anholt, 2018-12-30, 1 file, -0/+1)
  Fixes failures in dEQP-GLES31.functional.shaders.builtin_functions.texture_size.samples_1_texture_2d in the GLES3.1 suite.
* v3d: Add support for non-constant texture offsets. (Eric Anholt, 2018-12-30, 1 file, -8/+24)
  Fixes dEQP-GLES31.functional.texture.gather.offset_dynamic.min_required_offset.2d.rgba8.size_pot.clamp_to_edge_repeat and others.
* v3d: Force sampling from base level for tg4. (Eric Anholt, 2018-12-30, 1 file, -3/+3)
  This is what the GLSL ES 310 spec tells us to do, but apparently the "gather mode" flag doesn't imply it in the HW.
  Fixes dEQP-GLES31.functional.texture.gather.basic.2d.rgba8.filter_mode.min_nearest_mipmap_linear_mag_linear
* v3d: Add a note for a potential performance win on multop/umul24. (Eric Anholt, 2018-12-30, 1 file, -0/+4)
  Noticed while debugging a testcase.
* v3d: Dead-code eliminate unused flags updates. (Eric Anholt, 2018-12-30, 1 file, -4/+42)
  The greedy comparison folding in bcsel means that we may have left the original bool-generating NIR ALU instruction dead, but DCE wasn't eliminating the VIR code for it because of the flags updates.
  total instructions in shared programs: 5186024 -> 5100894 (-1.64%)
  instructions in affected programs: 1448695 -> 1363565 (-5.88%)
* v3d: Don't generate temps for comparisons. (Eric Anholt, 2018-12-30, 1 file, -12/+14)
  This just generated work for vir_opt_dead_code and cluttered up the dumps.
* v3d: Move "does this instruction have flags" from sched to generic helpers.Eric Anholt2018-12-306-55/+48
| | | | I wanted to reuse it for DCE of flags updates.
* v3d: Drop incorrect dependency for flpop. (Eric Anholt, 2018-12-30, 1 file, -4/+0)
  It is just shifting probably-means-flags bits out of a value; it doesn't actually update the flags on its own.
* v3d: Drop unused count_nir_instrs() helper. (Eric Anholt, 2018-12-30, 1 file, -18/+0)
  This was for shader-db, but I haven't cared about NIR instruction counts in a long time.
* v3d: Hook up some shader-db output to GL_ARB_debug_output. (Eric Anholt, 2018-12-30, 3 files, -2/+43)
  This allows the original shader-db project's run.c runner to parse things easily, and is probably a good thing to have for GL_ARB_debug_output in general. I formatted it more like Intel's so I can mostly reuse their report script.
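  For context, an application (or a harness like shader-db's run.c) would receive those compiler statistics through the standard debug-message callback. This is only a generic consumer-side sketch using the core KHR_debug entry points (which subsume ARB_debug_output), not the driver-side code from this commit; the loader header is an assumption:

      #include <stdio.h>
      #include <epoxy/gl.h>   /* any GL loader works; epoxy is just an example */

      /* Print every debug message the driver emits; v3d's shader-db stats
       * arrive as ordinary debug messages once this is enabled. */
      static void
      debug_cb(GLenum source, GLenum type, GLuint id, GLenum severity,
               GLsizei length, const GLchar *message, const void *user_data)
      {
          printf("%s\n", message);
      }

      static void
      enable_debug_output(void)
      {
          glEnable(GL_DEBUG_OUTPUT);
          glDebugMessageCallback((GLDEBUGPROC)debug_cb, NULL);
      }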
* v3d: Add a "precompile" debug flag for shader-db.Eric Anholt2018-12-292-0/+2
| | | | | | | | | I've been using my apitrace-based shader-db so far, but it's slow (apitrace decompression), intrusive (apitrace windows spamming the screen), and doesn't have much coverage. The original shader-db provides a lot more coverage and compiles faster, at the expense of not having the actual runtime variant key. As v3d has a lot less runtime variation than vc4 did, this tradeoff makes more sense.
* v3d: Fix uniform pretty printing assertion failure with branches. (Eric Anholt, 2018-12-29, 1 file, -0/+3)
  Fixes: 248a7fb392ba ("v3d: Do uniform pretty-printing in the QPU dump.")
* v3d: Drop shadow comparison state from shader variant key. (Eric Anholt, 2018-12-20, 1 file, -2/+0)
  The shadow state is now in the sampler.
* v3d: Add a fallthrough path for utile load/store of 32 byte lines. (Eric Anholt, 2018-12-19, 1 file, -12/+16)
  Now that V3D has 8 byte per pixel formats exposed, we've got stride==32 utiles to load and store. Just handle them through the non-NEON paths for now.
* vc4: Move the utile load/store functions to a header for reuse by v3d. (Eric Anholt, 2018-12-19, 2 files, -0/+223)
  These implementations of whole-utile load/stores would be the same for v3d, though the layout of blocks of utiles has changed.
* nir/opt_peephole_select: Don't peephole_select expensive math instructions (Ian Romanick, 2018-12-17, 1 file, -1/+1)
  On some GPUs, especially older Intel GPUs, some math instructions are very expensive. On those architectures, don't reduce flow control to a csel if one of the branches contains one of these expensive math instructions. This prevents a bunch of cycle count regressions on pre-Gen6 platforms with a later patch (intel/compiler: More peephole select for pre-Gen6).
  v2: Remove stray #if block. Noticed by Thomas.
  Signed-off-by: Ian Romanick <[email protected]>
  Reviewed-by: Thomas Helland <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
* nir/opt_peephole_select: Don't try to remove flow control around indirect loads (Ian Romanick, 2018-12-17, 1 file, -1/+1)
  That flow control may be trying to avoid invalid loads. On at least some platforms, those loads can also be expensive.
  No shader-db changes on any Intel platform (even with the later patch "intel/compiler: More peephole select").
  v2: Add an 'indirect_load_ok' flag to nir_opt_peephole_select. Suggested by Rob. See also the big comment in src/intel/compiler/brw_nir.c.
  v3: Use nir_deref_instr_has_indirect instead of deref_has_indirect (from nir_lower_io_arrays_to_elements.c).
  v4: Fix inverted condition in brw_nir.c. Noticed by Lionel.
  Signed-off-by: Ian Romanick <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
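  As a rough sketch of how a backend might invoke the pass once both of these peephole_select changes land: the parameter order and the limit value below are assumptions and should be checked against the declaration in nir.h:

      #include "nir.h"

      /* Sketch: flatten small if/else blocks into csels, but keep flow
       * control around indirect loads and, on hardware where ALU ops are
       * costly, around expensive math.  Assumed signature:
       * nir_opt_peephole_select(shader, limit, indirect_load_ok,
       * expensive_alu_ok). */
      static void
      run_peephole_select(nir_shader *shader, bool expensive_alu_ok)
      {
          NIR_PASS_V(shader, nir_opt_peephole_select, 8,
                     false /* indirect_load_ok */, expensive_alu_ok);
      }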
* v3d: Fix the argument type for vir_BRANCH(). (Eric Anholt, 2018-12-17, 1 file, -1/+1)
  Apparently this has been spewing warnings for Jason's clang, but not my gcc.
* nir: Add a bool to int32 lowering pass (Jason Ekstrand, 2018-12-16, 1 file, -0/+2)
  We also enable it in all of the NIR drivers.
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Tested-by: Bas Nieuwenhuizen <[email protected]>
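  For reference, enabling such a lowering from a driver typically amounts to a single pass invocation late in its NIR pipeline; the pass name below follows the commit title, but the call site is illustrative rather than the actual v3d hunk:

      #include "nir.h"

      /* Sketch: run after boolean-producing optimizations so every 1-bit
       * boolean becomes a 32-bit 0/~0 value the backend can consume. */
      static void
      lower_bools_for_backend(nir_shader *s)
      {
          NIR_PASS_V(s, nir_lower_bool_to_int32);
      }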
* nir: Rename Boolean-related opcodes to include 32 in the name (Jason Ekstrand, 2018-12-16, 1 file, -22/+22)
  This is a squash of a bunch of individual changes:
  nir/builder: Generate 32-bit bool opcodes transparently
  nir/algebraic: Remap Boolean opcodes to the 32-bit variant
  Use 32-bit opcodes in the NIR producers and optimizations
  Use 32-bit opcodes in the NIR back-ends
  Both conversions were generated with a little hand-editing and the following sed commands:
  sed -i 's/nir_op_ball_fequal/nir_op_b32all_fequal/g' **/*.c
  sed -i 's/nir_op_bany_fnequal/nir_op_b32any_fnequal/g' **/*.c
  sed -i 's/nir_op_ball_iequal/nir_op_b32all_iequal/g' **/*.c
  sed -i 's/nir_op_bany_inequal/nir_op_b32any_inequal/g' **/*.c
  sed -i 's/nir_op_\([fiu]lt\)/nir_op_\132/g' **/*.c
  sed -i 's/nir_op_\([fiu]ge\)/nir_op_\132/g' **/*.c
  sed -i 's/nir_op_\([fiu]ne\)/nir_op_\132/g' **/*.c
  sed -i 's/nir_op_\([fiu]eq\)/nir_op_\132/g' **/*.c
  sed -i 's/nir_op_\([fi]\)ne32g/nir_op_\1neg/g' **/*.c
  sed -i 's/nir_op_bcsel/nir_op_b32csel/g' **/*.c
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Tested-by: Bas Nieuwenhuizen <[email protected]>
* v3d: Use the original bit size when scalarizing uniform loads. (Eric Anholt, 2018-12-16, 1 file, -1/+2)
  Prevents a regression in jekstrand's 1-bit series.
  Reviewed-by: Jason Ekstrand <[email protected]>
* v3d: Drop in a bunch of notes about performance improvement opportunities. (Eric Anholt, 2018-12-14, 3 files, -1/+61)
  These have all been floating in my head, and while I've thought about encoding them in issues on gitlab once they're enabled, they also make sense to just have in the area of the code you'll need to work in.
* v3d: Do uniform pretty-printing in the QPU dump. (Eric Anholt, 2018-12-14, 3 files, -1/+62)
  If you're trying to trace what's going on in a QPU dump, this will definitely help you find your way.
* v3d: Move uniform pretty-printing to its own helper function. (Eric Anholt, 2018-12-14, 2 files, -71/+77)
  I want to reuse it in the QPU dump.
* v3d: Avoid assertion failures when removing end-of-shader instructions. (Eric Anholt, 2018-12-14, 1 file, -0/+6)
  After generating VIR, we leave c->cursor pointing at the end of the shader. If the shader had dead code at the end (for example, from preamble instructions in a shader with no side effects), we would hit an assertion failure about leaving the cursor pointing at freed memory. Since anything following DCE should be setting up a new cursor anyway, just clear the cursor at the start.
* v3d: Add support for draw indirect for GLES3.1. (Eric Anholt, 2018-12-14, 1 file, -0/+39)
  In trying to enable compute shaders, I found that a bunch of deqp-gles31's compute stuff wanted to interact with indirect dispatch. This was easy to do on its own.
* v3d: Add missing flagging of SYNCB as a TSY op. (Eric Anholt, 2018-12-14, 1 file, -0/+1)
  Fixes: f2e41daac577 ("broadcom/vc5: Update QPU instruction pack/unpack for v4.2.")
* v3d: Make sure that a thrsw doesn't split a multop from its umul24. (Eric Anholt, 2018-12-14, 1 file, -0/+1)
  The thrsw will invalidate rtop, just like accumulators and flags. Caught by simulator assertions in CS imulextended/umulextended tests.
  Fixes: 90269ba35333 ("broadcom/vc5: Use THRSW to enable multi-threaded shaders.")