author      Francisco Jerez <currojerez@riseup.net>    2016-10-26 14:25:06 -0700
committer   Francisco Jerez <currojerez@riseup.net>    2016-12-14 16:50:26 -0800
commit      ad38ba113491869ab0dffed937f7b3dd50e8a735 (patch)
tree        1545105f12c1b259d1e883014e6781425a2ac4c2 /scons/source_list.py
parent      3c78d31374422b028b19afa5799689c404a5b73e (diff)
i965/fs: Switch to the constant cache for uniform pull constants.
This reverts to using the oword block read messages for uniform pull
constant loads, as used to be the case until
4c1fdae0a01b3f92ec03b61aac1d3df5. There are two important differences
though: Now the L3 cacheability bits are set up correctly for UBOs
(since 11f5d8a5d4fbb861ec161f68593e429cbd65d1cd), and we target the
constant cache instead of the data cache. The latter used to get no
L3 way allocation on boot on all platforms that existed at the time,
so oword read messages wouldn't get cached on L3 regardless of the
MOCS bits, which probably explains the apparent slowness of oword
fetches.
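For illustration only (none of this appears in the commit's diff), here is a
minimal C sketch of the cache-selection trade-off described above; the enum
and function names are invented for the example, not taken from the driver:

    /* Illustrative only: the three message paths discussed above for a
     * uniform pull constant load. */
    #include <stdbool.h>

    enum pull_const_target {
       TARGET_SAMPLER_CACHE,   /* SIMD4x2 sampler load: competes with textures */
       TARGET_DATA_CACHE,      /* oword read via DC: got no L3 way allocation at boot */
       TARGET_CONSTANT_CACHE   /* oword read via constant cache: benefits from L3 */
    };

    static enum pull_const_target
    choose_pull_constant_target(bool l3_mocs_set_up_for_ubos)
    {
       /* With the UBO MOCS bits configured, the constant cache path is both
        * L3-cached and independent of the sampler's L1/L2 caches. */
       return l3_mocs_set_up_for_ubos ? TARGET_CONSTANT_CACHE
                                      : TARGET_SAMPLER_CACHE;
    }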
Constant cache loads seem to perform better than SIMD4x2 sampler loads
in a number of cases: they alleviate some of the cache thrashing
caused by the competition with textures for the L1/L2 sampler caches,
and they allow fetching up to 128B worth of constants with a single
oword fetch message.
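As a back-of-the-envelope check of the 128B figure (an illustrative sketch,
not driver code; the constants below are just the 16B oword and 32B GRF
sizes):

    /* Illustrative arithmetic: an oword is 16 bytes and the block read
     * message supports up to 8 owords, i.e. 8 * 16 = 128 bytes per message,
     * which lands in 128 / 32 = 4 GRF registers. */
    #include <assert.h>
    #include <stdio.h>

    #define OWORD_SIZE 16   /* bytes per oword */
    #define REG_SIZE   32   /* bytes per GRF register */

    static unsigned
    owords_for_fetch(unsigned bytes)
    {
       unsigned owords = (bytes + OWORD_SIZE - 1) / OWORD_SIZE;
       assert(owords <= 8);   /* at most one oword block read message */
       return owords;
    }

    int main(void)
    {
       unsigned owords = owords_for_fetch(128);
       printf("%u owords = %u bytes = %u registers\n",
              owords, owords * OWORD_SIZE, owords * OWORD_SIZE / REG_SIZE);
       /* prints: 8 owords = 128 bytes = 4 registers */
       return 0;
    }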
Note that IVB devices suffer from a hardware bug that leads to
serialization of L3 read requests overlapping the same cacheline, as a
result of an L3 coherency-preservation mechanism that is buggy on IVB.
Since read requests for matching cachelines from any L3 client are not
pipelined, throughput may decrease in cases where there are no
non-overlapping requests left in the queue that can be processed
between them.
This situation should be relatively uncommon as long as we make sure
that we don't use the 1/2 oword messages in cases where the shader
intends to read from any other location of the same cacheline at some
other point. This is generally a good idea anyway on all generations
because using the 1 and 2 oword messages is expected to waste
bandwidth since the minimum L3 request size for the DC is exactly 4
owords (i.e. one cacheline). A future commit will have this effect.
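A hypothetical sketch of that rounding policy (the helper below is not from
this commit or the future one; it only illustrates widening small fetches to
the 4-oword/64B cacheline minimum):

    /* Hypothetical helper: widen 1 and 2 oword fetches to a full cacheline
     * (4 owords = 64 bytes), since that is the minimum L3 request size for
     * the DC and smaller messages just waste bandwidth (and on IVB risk the
     * overlapping-cacheline serialization described above). */
    static unsigned
    round_fetch_to_cacheline(unsigned owords)
    {
       const unsigned cacheline_owords = 4;   /* 64 B cacheline / 16 B oword */
       return owords < cacheline_owords ? cacheline_owords : owords;
    }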
I haven't been able to find any real-world example where this would
still result in a regression on IVB, but if someone happens to find
one it shouldn't be too difficult to add an IVB-specific check to have
it fall back to the sampler cache for pull constant loads.
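Such a fallback might look roughly like the following sketch; the struct and
field names are assumptions standing in for the driver's device-info checks,
not code from this commit:

    /* Hypothetical IVB-specific fallback; struct and field names are
     * stand-ins for the driver's device-info checks. */
    #include <stdbool.h>

    struct devinfo_sketch {
       int gen;
       bool is_haswell;
    };

    static bool
    use_constant_cache_for_pull_constants(const struct devinfo_sketch *devinfo)
    {
       /* Gen7 minus Haswell == Ivybridge/Baytrail: route pull constant loads
        * back through the sampler cache there if a regression turns up. */
       if (devinfo->gen == 7 && !devinfo->is_haswell)
          return false;
       return true;
    }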
Note that on SKL+ this change has the additional benefit of reducing
the register footprint of pull constant loads. The following table
summarizes the effect of the whole series on several shader-db stats:
        Total instructions                  Total cycles
BWR:   4571248 -> 4568342 (-0.06%)   123375740 -> 123373296 (-0.00%)
ELK:   3989020 -> 3985402 (-0.09%)    98757068 ->  98754058 (-0.00%)
ILK:   6383591 -> 6376787 (-0.11%)   143649910 -> 143648914 (-0.00%)
SNB:   7528395 -> 7501446 (-0.36%)   103503796 -> 102460370 (-1.01%)
IVB:   6949221 -> 6943317 (-0.08%)    60592262 ->  60584422 (-0.01%)
HSW:   6409753 -> 6403702 (-0.09%)    60609070 ->  60604414 (-0.01%)
BDW:   8043467 -> 7976364 (-0.83%)    68427730 ->  68483042 (0.08%)
CHV:   8045019 -> 7977916 (-0.83%)    68297426 ->  68352756 (0.08%)
SKL:   8204037 -> 7939086 (-3.23%)    66583900 ->  65624378 (-1.44%)
      Lost->Gained   Total spills             Total fills
BWR:    5 -> 5       1488 -> 1488 (0.00%)     1957 -> 1957 (0.00%)
ELK:    5 -> 5       1489 -> 1489 (0.00%)     1958 -> 1958 (0.00%)
ILK:    1 -> 4       1449 -> 1449 (0.00%)     1921 -> 1921 (0.00%)
SNB:    0 -> 0        549 ->  549 (0.00%)       52 ->   52 (0.00%)
IVB:   13 -> 3       1271 -> 1271 (0.00%)     1162 -> 1162 (0.00%)
HSW:   11 -> 0       1271 -> 1271 (0.00%)     1162 -> 1162 (0.00%)
BDW:   12 -> 0       1340 -> 1340 (0.00%)     1452 -> 1452 (0.00%)
CHV:   12 -> 0       1340 -> 1340 (0.00%)     1452 -> 1452 (0.00%)
SKL:    0 -> 120     1269 ->  375 (-70.45%)   1563 ->  690 (-55.85%)
v3: Non-trivial rebase.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>