author     Matt Turner <[email protected]>     2020-06-09 13:51:10 -0700
committer  Marge Bot <[email protected]>        2020-06-12 19:01:26 +0000
commit     66111bc95a5bba96ae39a4274c98cc4e3e183cae
tree       8f19c85585468087b03890aed6c85531651fa7a7
parent     dd938356c7f98445f1adfc5023247550efdefc92
intel/compiler: Drop opt_sampler_eot()
Gen9 and Cherryview have the ability to mark texture instructions with
the End-of-thread bit under some conditions, which allows the texture
result to be written to the render target directly, rather than
returning to the EU.
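Concretely, the pass looked for a shader whose last two instructions were
a texture fetch feeding the final render target write, roughly like this
(hand-written, simplified pseudo-assembly with made-up register and
message-length numbers, not exact Gen disassembly):

    send(16)  g120  g2    sampler sample    mlen 5 rlen 8
    sendc(16) null  g120  render RT write   mlen 8 rlen 0  { EOT }

and merged them into a single sampler message carrying the End-of-thread
bit, whose result is routed straight to the render target instead of
coming back to the EU:

    sendc(16) null  g2    sampler sample    mlen 5 rlen 0  { EOT }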
In order to handle overlapping primitives correctly, we have to use the
'sendc' instruction which stalls until other threads potentially writing
to the same locations in the render target are retired. Unfortunately,
this stall happens before the texture is sampled (rather than the
sampling proceeding in parallel with the stall), so for some literal
edge cases (like the diagonal
edge between two triangles forming a rectangle) there can be a
performance penalty. As a result, it's probably not a good idea to use
this optimization in general.
I had planned to leave it enabled only for BLORP, where we use rectangle
primitives and are typically clearing/blitting an entire render target
without any overlapping primitives, but I noticed that the optimization
wasn't applied in some normal cases anyway. For example, in the piglit
test tests/shaders/glsl-fs-texture2d-bias.shader_test it is applied to
one BLORP-blit shader but not another due to some kind of mishandling of
register types (the destination register type of the texture operation
is UD while the color source of the render target write is F).
Additionally the instruction scheduler assumed that the combined texture
and render target write operation took 0 cycles, leading to cycle
estimates that are wildly inaccurate. Since the optimization was not
implemented for SIMD32 and our decision whether to use the SIMD32
program is made by comparing the estimated performance with that of the
SIMD16 shader, we wrongly threw out a bunch of SIMD32 programs that are
likely profitable.
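To make the last point concrete (the numbers here are invented for
illustration, not taken from the cost model): roughly speaking, SIMD32
wins the comparison when it is estimated to take less than twice as long
as SIMD16, since each SIMD32 thread covers twice as many pixels. If a
SIMD16 shader really costs around 400 cycles but its merged
sampler+FB-write send is counted as 0 cycles, the estimate might come
out near 250. A SIMD32 version honestly estimated at 600 cycles would
beat the true SIMD16 cost (600 < 2 * 400) but loses to the deflated
estimate (600 > 2 * 250), so it gets discarded.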
total cycles in shared programs: 472807891 -> 473784245 (0.21%)
cycles in affected programs: 108277 -> 1084631 (901.72%)
helped: 0
HURT: 1290
total sends in shared programs: 998955 -> 1000245 (0.13%)
sends in affected programs: 1400 -> 2690 (92.14%)
helped: 0
HURT: 1290
LOST: 0
GAINED: 33
This patch shows no performance changes in Intel's Mesa performance CI.
Given the problems, the lack of evidence that the pass improves
performance, and the fact that the hardware feature was removed from
subsequent GPU generations, I think that the pass is not valuable and
should be removed.
Reviewed-by: Jason Ekstrand <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Matt Turner <[email protected]>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5412>