author    Francisco Jerez <[email protected]>    2019-12-29 18:17:10 -0800
committer Francisco Jerez <[email protected]>    2020-01-17 13:20:46 -0800
commit    d9a57c85cc5bbb3fada60476ec7b379bd0b5ac64 (patch)
tree      140cebeabcb05cff490a579e82368288ba79176e /src/intel/compiler
parent    3ba16d36c988a1c7b31c7fe44c1b6a24d9d8227d (diff)
intel/fs: Try to vectorize header setup in lower_load_payload().
In cases where LOAD_PAYLOAD is provided a pair of contiguous registers as header sources, try to use a single SIMD16 instruction to initialize them.  This is unlikely to affect the overall cycle count of the shader, since the compressed instruction has twice the issue time, aside from the reduced pressure on the instruction cache.

The main motivation is avoiding instruction-count regressions in combination with the copy propagation improvements that follow, which will allow the SIMD16 g0-1 header setup emitted for framebuffer writes to be copy-propagated into its LOAD_PAYLOAD; without this change, that would lead to the emission of two SIMD8 MOV instructions instead of a single SIMD16 MOV.  Reverting this commit on top of the copy propagation changes would lead to the following shader-db regressions on SKL and other platforms:

total instructions in shared programs: 14926738 -> 14935415 (0.06%)
instructions in affected programs: 1892445 -> 1901122 (0.46%)
helped: 0
HURT: 8676

Without the copy propagation changes that follow, this doesn't have any effect on shader-db on Gen7+, because we would typically set up the FB write header with a separate SIMD16 MOV that isn't currently copy-propagated into the LOAD_PAYLOAD, so the individual SIMD8 MOVs resulting from LOAD_PAYLOAD lowering would get register-coalesced away under normal circumstances.  However, that wasn't the case for MRF LOAD_PAYLOAD destinations on Gen6 and earlier, because register coalesce only kicks in for GRFs, which left a number of redundant SIMD8 MOVs lying around.  On SNB this leads to the following shader-db improvements:

total instructions in shared programs: 10770538 -> 10734681 (-0.33%)
instructions in affected programs: 2700655 -> 2664798 (-1.33%)
helped: 17791
HURT: 0

Reviewed-by: Kenneth Graunke <[email protected]>
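To make the pairing rule concrete, here is a minimal standalone sketch (not Mesa code): the reg struct and the byte_offset_of() and movs_covered() helpers are invented for illustration, and only the 32-byte GRF size and the contiguity/stride check mirror the logic in the patch below.

// Standalone illustration of the "how many header GRFs can one MOV cover"
// decision.  All types and helpers here are simplified stand-ins.
#include <cstdio>

static const unsigned REG_SIZE = 32;   // bytes per GRF

struct reg {
   unsigned file;       // 0 would stand for BAD_FILE (no source to copy)
   unsigned nr;         // register number
   unsigned byte_off;   // byte offset from the start of the register
   unsigned stride;     // element stride; 1 means tightly packed

   bool equals(const reg &o) const {
      return file == o.file && nr == o.nr &&
             byte_off == o.byte_off && stride == o.stride;
   }
};

// Advance a register reference by a byte offset (simplified: the offset is
// kept within the same register number).
static reg byte_offset_of(reg r, unsigned delta)
{
   r.byte_off += delta;
   return r;
}

// Return how many header GRFs starting at src[i] a single MOV can cover:
// 2 if the next source is the GRF immediately following this one and the
// source is packed, otherwise 1.
static unsigned movs_covered(const reg *src, unsigned i, unsigned header_size)
{
   return (i + 1 < header_size && src[i].stride == 1 &&
           src[i + 1].equals(byte_offset_of(src[i], REG_SIZE))) ? 2 : 1;
}

int main()
{
   // Typical FB-write header: g0 and g1 passed as two contiguous sources.
   const reg src[2] = { { 1, 0, 0, 1 }, { 1, 0, REG_SIZE, 1 } };

   for (unsigned i = 0; i < 2;) {
      const unsigned n = movs_covered(src, i, 2);
      printf("initialize %u GRF(s) with one SIMD%u MOV\n", n, 8 * n);
      i += n;
   }
   return 0;
}

With two contiguous sources this prints a single SIMD16 MOV; if the sources were unrelated registers it would fall back to one SIMD8 MOV per header GRF, which is what the old loop below always emitted.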
Diffstat (limited to 'src/intel/compiler')
-rw-r--r--   src/intel/compiler/brw_fs.cpp   24
1 file changed, 16 insertions, 8 deletions
diff --git a/src/intel/compiler/brw_fs.cpp b/src/intel/compiler/brw_fs.cpp
index 4dabf6c9395..dfe5b2a7282 100644
--- a/src/intel/compiler/brw_fs.cpp
+++ b/src/intel/compiler/brw_fs.cpp
@@ -3768,15 +3768,23 @@ fs_visitor::lower_load_payload()
          dst.nr = dst.nr & ~BRW_MRF_COMPR4;
 
       const fs_builder ibld(this, block, inst);
-      const fs_builder hbld = ibld.exec_all().group(8, 0);
+      const fs_builder ubld = ibld.exec_all();
 
-      for (uint8_t i = 0; i < inst->header_size; i++) {
-         if (inst->src[i].file != BAD_FILE) {
-            fs_reg mov_dst = retype(dst, BRW_REGISTER_TYPE_UD);
-            fs_reg mov_src = retype(inst->src[i], BRW_REGISTER_TYPE_UD);
-            hbld.MOV(mov_dst, mov_src);
-         }
-         dst = offset(dst, hbld, 1);
+      for (uint8_t i = 0; i < inst->header_size;) {
+         /* Number of header GRFs to initialize at once with a single MOV
+          * instruction.
+          */
+         const unsigned n =
+            (i + 1 < inst->header_size && inst->src[i].stride == 1 &&
+             inst->src[i + 1].equals(byte_offset(inst->src[i], REG_SIZE))) ?
+            2 : 1;
+
+         if (inst->src[i].file != BAD_FILE)
+            ubld.group(8 * n, 0).MOV(retype(dst, BRW_REGISTER_TYPE_UD),
+                                     retype(inst->src[i], BRW_REGISTER_TYPE_UD));
+
+         dst = byte_offset(dst, n * REG_SIZE);
+         i += n;
       }
 
       if (inst->dst.file == MRF && (inst->dst.nr & BRW_MRF_COMPR4) &&