author     Jason Ekstrand <[email protected]>    2018-01-27 13:19:57 -0800
committer  Jason Ekstrand <[email protected]>    2018-08-29 14:04:02 -0500
commit     37f7983bcca1afd4d570bc654b927a92308d1c68
tree       7cb87742e416068af5811bf4752d2d569a6021a6 /src/intel/vulkan/anv_pipeline.c
parent     b217705dec60ef8335e4ff304605f26e9038b632
intel/compiler: Do image load/store lowering to NIR
This commit moves our storage image format conversion codegen into NIR
instead of doing it in the back-end. This has the advantage of letting
us run it through NIR's optimizer which is pretty effective at shrinking
things down. In the common case of rgba8, the number of instructions
emitted after NIR is done with it is half of what it was with the
lowering happening in the back-end. On the downside, the back-end's
lowering is able to directly use predicates and the NIR lowering has to
use IFs.
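To make the format-conversion point concrete, here is a minimal, hypothetical C sketch of the scalar math an rgba8 (UNORM) store conversion boils down to. The function name and shape are illustrative, not Mesa code: the real pass emits the equivalent NIR ALU instructions, which the optimizer can then constant-fold and shrink.

    #include <stdint.h>
    #include <math.h>

    /* Illustrative only (not Mesa code): the scalar equivalent of converting
     * a float vec4 into a packed RGBA8 UNORM texel. The NIR lowering emits
     * the same clamp/scale/round/shift sequence as ALU ops, which NIR's
     * optimizer is then free to fold and shrink. */
    static uint32_t
    pack_rgba8_unorm(const float rgba[4])
    {
       uint32_t packed = 0;
       for (int i = 0; i < 4; i++) {
          /* Clamp to [0, 1], scale to [0, 255], round to nearest. */
          float c = fminf(fmaxf(rgba[i], 0.0f), 1.0f);
          packed |= (uint32_t)(c * 255.0f + 0.5f) << (i * 8);
       }
       return packed;
    }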
Shader-db results on Kaby Lake:
total instructions in shared programs: 15166910 -> 15166872 (<.01%)
instructions in affected programs: 5895 -> 5857 (-0.64%)
helped: 15
HURT: 0
Clearly, we don't have that much image_load_store happening in the
shaders in shader-db....
Reviewed-by: Kenneth Graunke <[email protected]>
Diffstat (limited to 'src/intel/vulkan/anv_pipeline.c')
-rw-r--r--   src/intel/vulkan/anv_pipeline.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/src/intel/vulkan/anv_pipeline.c b/src/intel/vulkan/anv_pipeline.c
index 0fe0c7e296e..19d59b7fbac 100644
--- a/src/intel/vulkan/anv_pipeline.c
+++ b/src/intel/vulkan/anv_pipeline.c
@@ -532,6 +532,8 @@ anv_pipeline_lower_nir(struct anv_pipeline *pipeline,
    if (nir->info.stage != MESA_SHADER_COMPUTE)
       brw_nir_analyze_ubo_ranges(compiler, nir, NULL, prog_data->ubo_ranges);
 
+   NIR_PASS_V(nir, brw_nir_lower_image_load_store, compiler->devinfo);
+
    assert(nir->num_uniforms == prog_data->nr_params * 4);
 
    stage->nir = nir;
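For context on the "IFs instead of predicates" caveat in the commit message, here is a hypothetical C sketch of what the load side roughly looks like after lowering: the out-of-bounds guard becomes explicit control flow, followed by the unpack math (the inverse of the pack sketch above). Names and the zero out-of-bounds result are illustrative assumptions, not the actual pass output.

    #include <stdint.h>

    /* Illustrative only (not Mesa code): roughly the structure the NIR
     * lowering produces for an rgba8 UNORM image load. The bounds check is
     * expressed as an explicit if, whereas the old back-end lowering could
     * predicate the access directly. */
    static void
    load_rgba8_unorm_guarded(const uint32_t *image, int x, int width,
                             float out_rgba[4])
    {
       uint32_t texel = 0; /* assumed out-of-bounds result for this sketch */

       if (x >= 0 && x < width)
          texel = image[x];

       for (int i = 0; i < 4; i++) {
          /* Extract each 8-bit channel and rescale to [0, 1]. */
          out_rgba[i] = (float)((texel >> (i * 8)) & 0xff) / 255.0f;
       }
    }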