path: root/src/intel/compiler/brw_nir.h
author    Jason Ekstrand <[email protected]>    2018-01-27 13:19:57 -0800
committer Jason Ekstrand <[email protected]>    2018-08-29 14:04:02 -0500
commit    37f7983bcca1afd4d570bc654b927a92308d1c68 (patch)
tree      7cb87742e416068af5811bf4752d2d569a6021a6 /src/intel/compiler/brw_nir.h
parent    b217705dec60ef8335e4ff304605f26e9038b632 (diff)
intel/compiler: Do image load/store lowering to NIR
This commit moves our storage image format conversion codegen into NIR
instead of doing it in the back-end.  This has the advantage of letting us
run it through NIR's optimizer, which is pretty effective at shrinking
things down.  In the common case of rgba8, the number of instructions
emitted after NIR is done with it is half of what it was with the lowering
happening in the back-end.  On the downside, the back-end's lowering is
able to directly use predicates and the NIR lowering has to use IFs.

Shader-db results on Kaby Lake:

    total instructions in shared programs: 15166910 -> 15166872 (<.01%)
    instructions in affected programs: 5895 -> 5857 (-0.64%)
    helped: 15
    HURT: 0

Clearly, we don't have that much image_load_store happening in the shaders
in shader-db....

Reviewed-by: Kenneth Graunke <[email protected]>
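[Editor's note] To illustrate the IFs-vs-predicates point in the message above, here is a minimal, hypothetical nir_builder sketch (not code from this patch): because NIR has no predicated instructions, a conditionally valid value has to be expressed as if/else control flow plus a phi.  The guarded_value helper and the placeholder immediates are invented for illustration only; the real pass would emit the typed surface read and format conversion in the then-branch.

    #include "nir_builder.h"

    /* Hypothetical illustration: a constant stands in for the actual image
     * load.  NIR cannot predicate instructions, so the conditional result is
     * built from an if/else plus a phi that merges the two branch values.
     */
    static nir_ssa_def *
    guarded_value(nir_builder *b, nir_ssa_def *in_bounds)
    {
       nir_push_if(b, in_bounds);
       /* then-branch: value produced when the coordinate is in bounds */
       nir_ssa_def *then_val = nir_imm_vec4(b, 1.0f, 0.0f, 0.0f, 1.0f);
       nir_push_else(b, NULL);
       /* else-branch: well-defined result for out-of-bounds accesses */
       nir_ssa_def *else_val = nir_imm_vec4(b, 0.0f, 0.0f, 0.0f, 0.0f);
       nir_pop_if(b, NULL);

       return nir_if_phi(b, then_val, else_val);
    }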
Diffstat (limited to 'src/intel/compiler/brw_nir.h')
-rw-r--r--   src/intel/compiler/brw_nir.h   3
1 files changed, 3 insertions, 0 deletions
diff --git a/src/intel/compiler/brw_nir.h b/src/intel/compiler/brw_nir.h
index 5c75ef2324a..72a6ee8884a 100644
--- a/src/intel/compiler/brw_nir.h
+++ b/src/intel/compiler/brw_nir.h
@@ -114,6 +114,9 @@ void brw_nir_lower_tcs_outputs(nir_shader *nir, const struct brw_vue_map *vue,
                                GLenum tes_primitive_mode);
 void brw_nir_lower_fs_outputs(nir_shader *nir);
 
+bool brw_nir_lower_image_load_store(nir_shader *nir,
+                                    const struct gen_device_info *devinfo);
+
 nir_shader *brw_postprocess_nir(nir_shader *nir,
                                 const struct brw_compiler *compiler,
                                 bool is_scalar);
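
[Editor's note] For context, a hedged sketch of how a caller might use the declaration added above.  This call site is an assumption, not part of the patch: lower_storage_images is a made-up wrapper name, and the exact placement in the compile pipeline is not shown by this diff.  NIR_PASS is Mesa's usual helper for running a pass and accumulating a progress flag; running the optimizer afterwards is what lets the generated conversion code shrink, as described in the commit message.

    #include "brw_nir.h"

    /* Hypothetical wrapper, for illustration only. */
    static void
    lower_storage_images(nir_shader *nir, const struct gen_device_info *devinfo)
    {
       bool progress = false;

       /* Lower image load/store format conversion into plain NIR ALU code. */
       NIR_PASS(progress, nir, brw_nir_lower_image_load_store, devinfo);

       if (progress) {
          /* Let NIR's optimizer clean up the emitted conversion code; per the
           * commit message this roughly halves the rgba8 case versus the old
           * back-end lowering.
           */
          NIR_PASS(progress, nir, nir_opt_algebraic);
          NIR_PASS(progress, nir, nir_opt_constant_folding);
          NIR_PASS(progress, nir, nir_opt_dce);
       }
    }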