Commit log
---
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
---
More than half of the stuff in intel_reg.h had nothing whatsoever to do
with registers and really belongs in brw_defines.h anyway.
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
---
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
---
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
---
This uses the unblocked time of the exit assigned to each available
node to attempt to unblock exit nodes as early as possible,
potentially reducing the runtime of the shader when an exit branch is
taken. There is a natural trade-off between terminating the program
as early as possible and reducing the worst-case latency of the
program as a whole (since this will typically move exit-unblocking
nodes closer to their dependencies, potentially causing additional stalls
of the execution pipeline), but in practice the bandwidth and ALU
cycle savings from terminating the program earlier tend to outweigh
the slight increase in worst-case program execution latency, so it
makes sense to prefer nodes likely to unblock an earlier exit
regardless of the latency benefits of other available nodes.
I haven't observed any benchmark regressions from this change after
testing on VLV, HSW, BDW, BSW and SKL. The FPS of the GfxBench
Manhattan benchmark increases by 10%-20% and the FPS of Unigine Valley
improves by roughly 5% depending on the platform and settings.
The change to the register pressure-sensitive heuristic is rather
conservative and gives precedence to the existing heuristic in order
to avoid increasing register pressure and causing spill count and SIMD
width regressions in shader-db. It may make sense to revisit this
with additional performance data.
Reviewed-by: Jason Ekstrand <[email protected]>
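As a rough illustration (not the driver's actual code; the real
schedule_node in brw_schedule_instructions.cpp differs in detail), the
candidate comparison this describes boils down to:

    #include <climits>

    /* Hypothetical sketch: prefer the candidate whose assigned exit node
     * becomes unblocked earliest; fall back to the pre-existing latency
     * heuristic on ties. */
    struct schedule_node {
       int unblocked_time;       /* cycle at which the node can issue */
       schedule_node *exit;      /* earliest successor exit, or NULL */
    };

    static int
    exit_unblocked_time(const schedule_node *n)
    {
       /* Nodes that unblock no exit should never win the comparison. */
       return n->exit ? n->exit->unblocked_time : INT_MAX;
    }

    static const schedule_node *
    choose_best(const schedule_node *chosen, const schedule_node *candidate)
    {
       if (exit_unblocked_time(candidate) < exit_unblocked_time(chosen))
          return candidate;      /* unblocks an earlier exit */
       /* Otherwise (including ties) defer to the existing heuristic. */
       return chosen;
    }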
---
This adds a bit of metadata to schedule_node that will be used to
compare available nodes in the scheduling heuristic code based on
which of them unblocks the earliest successor exit node. Note that
assigning exit nodes wouldn't be necessary in a bottom-up scheduler
because we could achieve the same effect by scheduling the exit nodes
themselves appropriately.
No shader-db changes.
Reviewed-by: Jason Ekstrand <[email protected]>
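Schematically (field and function names are placeholders rather than
the scheduler's actual ones), the metadata and its propagation could
look like:

    /* Each node records the earliest exit among its successors; the
     * assignment can be updated as dependency edges are added. */
    struct schedule_node {
       bool is_exit;             /* e.g. a HALT instruction */
       int unblocked_time;
       schedule_node *exit;      /* earliest successor exit, or NULL */
    };

    static schedule_node *
    earlier_exit(schedule_node *a, schedule_node *b)
    {
       if (!a || !b)
          return a ? a : b;
       return a->unblocked_time <= b->unblocked_time ? a : b;
    }

    static void
    add_dep(schedule_node *before, schedule_node *after)
    {
       /* ... record the edge itself, accumulate latencies, etc. ... */
       before->exit = earlier_exit(before->exit,
                                   after->is_exit ? after : after->exit);
    }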
---
The critical path of each node is calculated by induction based on the
critical paths of its children, which can be done in a post-order
depth-first traversal of the dependency graph. The current code
implements graph traversal by iterating over all nodes of the graph
and then recursing into its children -- but it turns out that
recursion is unnecessary because the lexical order of instructions in
the block is already a good enough reverse post-order of the
dependency graph (if it weren't a reverse post-order some instruction
would have been located before one of its dependencies in the original
ordering of the basic block, which is impossible), so we just need to
walk the instruction list in reverse to achieve the same result more
efficiently.
No shader-db changes.
Reviewed-by: Jason Ekstrand <[email protected]>
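In sketch form (types simplified; the real code lives in
brw_schedule_instructions.cpp), the reverse walk is just:

    /* Because the block's instruction order is a reverse post-order of
     * the dependency graph, a single reverse pass visits every child
     * before its parent, so no recursion is needed. */
    struct schedule_node {
       int delay;                /* critical-path length from this node */
       int latency;              /* issue latency of this instruction */
       int num_children;
       schedule_node **children;
    };

    static void
    compute_delays(schedule_node **nodes, int count)
    {
       for (int i = count - 1; i >= 0; i--) {
          schedule_node *n = nodes[i];
          n->delay = n->latency;
          for (int c = 0; c < n->num_children; c++) {
             const int d = n->children[c]->delay + n->latency;
             if (d > n->delay)
                n->delay = d;
          }
       }
    }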
---
ANY4H is more efficient than ANY8H and ANY16H because it makes sure
that whenever a whole subspan hits a discard statement it gets
disabled by the EU until the end of the program, regardless of whether
the discard condition is uniform across all channels of the SIMD8-16
thread. OTOH ANY8H/ANY16H would cause the rest of the program to be
executed for *all* channels if only one of the channels hadn't taken
the discard branch, potentially increasing the bandwidth and ALU usage
of the program unnecessarily.
This change increases the FPS by over 3x of a simple micro-benchmark
that discards a bunch of fragments and then does a single costly
texturing operation. I've just re-verified the FPS change on HSW and
SKL, but I expect all platforms from Gen6 up to get a similar benefit.
Note that we could potentially be more aggressive and use the NORMAL
predicate to discard individual channels, but that would need to
happen post-scheduling because the scheduler currently takes no care to
avoid reordering HALT instructions with respect to other instructions, and the
NORMAL predicate would cause the results of subsequent derivative
computations to become undefined -- if the scheduler didn't reorder
HALT instructions it would actually be safe to switch to NORMAL
because the behavior of derivative computations after a non-uniform
discard statement is undefined by the GLSL spec, but that would make
the optimization implemented by one of the following commits somewhat
more difficult.
Reviewed-by: Jason Ekstrand <[email protected]>
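Concretely, the change is the predicate on the discard jump. A hedged
sketch (the BRW_PREDICATE_* names are real; the emission code is
simplified, and the inverse sense assumes the flag register tracks live
channels):

    /* Jump to the end of the program when a whole subspan (a group of 4
     * channels, which is what ANY4H examines) has no live channel left,
     * even though other subspans of the SIMD8/16 thread live on. */
    fs_inst *jump = bld.emit(FS_OPCODE_DISCARD_JUMP);
    jump->predicate = BRW_PREDICATE_ALIGN1_ANY4H;   /* was ANY8H/ANY16H */
    jump->predicate_inverse = true;                 /* "no channel live" */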
---
This may have been the reason people ran into problems with
non-uniform HALT instructions and ended up using the inefficient
ANY16H/ANY8H predicates instead of ANY4H or NORMAL in order to prevent
non-uniform discard. The HALT instruction is able to handle
non-uniform execution masks just fine.
Reviewed-by: Jason Ekstrand <[email protected]>
---
The "Barrier Count" field goes in 14:9 of m0.2. The vec4 backend
correctly shifts by 9, but the scalar backend only shifted by 8.
It's not like this changed - I think I just made a typo when writing
the original scalar TCS backend code.
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Alejandro Piñeiro <[email protected]>
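In other words (a schematic, not the literal driver code), the scalar
backend was computing count << 8 where the message layout requires:

    #include <cstdint>

    /* "Barrier Count" occupies bits 14:9 of m0.2, so the count must be
     * shifted into place by 9; shifting by 8 lands it in bits 13:8. */
    const uint32_t barrier_count = 8;           /* example value */
    const uint32_t m0_2 = barrier_count << 9;   /* was: << 8 */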
---
Previously, the scalar TCS backend was generating:
   mov(8)  g17<1>UD    0x00000000UD    { align1 WE_all 1Q compacted };
   and(8)  g17.2<1>UD  g0.2<0,1,0>UD   0x0001e000UD  { align1 WE_all 1Q };
   shl(8)  g17.2<1>UD  g17.2<8,8,1>UD  0x0000000bUD  { align1 WE_all 1Q };
   or(8)   g17.2<1>UD  g17.2<8,8,1>UD  0x00008200UD  { align1 WE_all 1Q };
   send(8) null<1>UW   g17<8,8,1>UD
           gateway (barrier msg) mlen 1 rlen 0 { align1 WE_all 1Q };
This is rubbish - g17.2<8,8,1>UD spans two registers, and is an illegal
region. Not to mention it clobbers 8 channels of data when we only
wanted to touch m0.2.
Instead, we want:
   mov(8)  g17<1>UD    0x00000000UD    { align1 WE_all 1Q compacted };
   and(1)  g17.2<1>UD  g0.2<0,1,0>UD   0x0001e000UD  { align1 WE_all };
   shl(1)  g17.2<1>UD  g17.2<0,1,0>UD  0x0000000bUD  { align1 WE_all };
   or(1)   g17.2<1>UD  g17.2<0,1,0>UD  0x00008200UD  { align1 WE_all };
   send(8) null<1>UW   g17<8,8,1>UD
           gateway (barrier msg) mlen 1 rlen 0 { align1 WE_all 1Q };
Using component() accomplishes this.
Fixes GL44-CTS.tessellation_shader.tessellation_shader_tc_barriers.
barrier_guarded_read_write_calls on Skylake. Probably fixes other
barrier issues on Gen8+.
v2: Use a group(1, 0) builder so inst->exec_size is set correctly
(thanks to Francisco Jerez for catching that it was incorrect).
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Alejandro Piñeiro <[email protected]> [v1]
Reviewed-by: Francisco Jerez <[email protected]>
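The fixed emission looks roughly like this (exec_all(), group() and
component() are the real fs_builder helpers; the surrounding register
names are illustrative):

    /* Build the header with a scalar (exec-size-1) builder so the
     * read-modify-write only touches the single dword m0.2 instead of
     * clobbering eight channels' worth of data. */
    const fs_builder ubld = bld.exec_all().group(1, 0);
    const fs_reg m0_2 = component(m0, 2);
    const fs_reg r0_2 = retype(brw_vec1_grf(0, 2), BRW_REGISTER_TYPE_UD);
    ubld.AND(m0_2, r0_2, brw_imm_ud(0x0001e000u));
    ubld.SHL(m0_2, m0_2, brw_imm_ud(11));
    ubld.OR(m0_2, m0_2, brw_imm_ud(0x00008200u));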
---
Fixes several GL44-CTS.tessellation_shader (and GL45 and ES31) subcases:
- vertex_spacing
- tessellation_shader_point_mode.points_verification
- tessellation_shader_quads_tessellation.inner_tessellation_level_rounding
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Alejandro Piñeiro <[email protected]>
---
This lets us remove the brw_reg.h include.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
This mega-commit pulls most of the i965-specific bits of blorp into the
brw_blorp.c/h files which now contain nothing but i965 wrappers around
"core blorp" calls. The "core blorp" api is moved into blorp.h and the
internal blorp data structures are moved into blorp_priv.h. The new file
blorp.c is created to house "core blorp" internals which are pulled from
the old brw_blorp.c.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
The helpers are completely miptree-unaware and each fairly cleanly do a
single thing. This does come with the downside of not doing proper debug
reporting on whether or not we're doing replicated clears.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
This pulls the mcs allocation into the if statement where we initially
determine that we are doing a fast clear and moves the programming of
wm_inputs and figuring out the fast clear rect into its own if statement.
The next commit will put code in between the two.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
The blorp_surface_info_init call above should set the format for us and
stomping it later does nothing whatsoever.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
We had another inline copy of brw_meta_get_buffer_rect embedded in
get_fast_clear_rect for no good reason. This lets us get rid of the
gl_framebuffer parameter to get_fast_clear_rect.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
We already have an inlined version of the function slightly higher up in
do_single_blorp_clear and all calling it does is stomp the values with the
same thing. We might as well just get rid of it.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Now that we have the brw_blorp_surf struct, we can start to make bits of
blorp completely miptree-unaware. To start things off, we split the guts
of brw_blorp_blit_miptrees into a brw_blorp_blit function which knows
nothing about miptrees.
Reviewed-by: Topi Pohjolainen <[email protected]>
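The resulting shape of the code, very roughly (all types and signatures
reduced to stand-ins):

    /* brw_blorp.c: the miptree-aware wrapper fills in generic surface
     * descriptions and hands off to the miptree-unaware core. */
    struct brw_blorp_surf { /* isl_surf, bo, offset, aux state, ... */ };
    struct intel_mipmap_tree { /* i965-only */ };

    static void
    blorp_surf_for_miptree(brw_blorp_surf *surf,
                           const intel_mipmap_tree *mt)
    {
       /* translate miptree state into the generic description */
    }

    static void
    brw_blorp_blit(const brw_blorp_surf *src, const brw_blorp_surf *dst)
    {
       /* core blit logic: knows surfaces, never miptrees */
    }

    void
    brw_blorp_blit_miptrees(const intel_mipmap_tree *src_mt,
                            const intel_mipmap_tree *dst_mt)
    {
       brw_blorp_surf src, dst;
       blorp_surf_for_miptree(&src, src_mt);
       blorp_surf_for_miptree(&dst, dst_mt);
       brw_blorp_blit(&src, &dst);
    }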
---
At the moment, this seems to make all of the interfaces messier rather than
cleaner. However, it does provide a representation of a surface that
simultaneously contains everything and is completely unaware of miptrees.
Reviewed-by: Topi Pohjolainen <[email protected]>
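For illustration, such a description might carry fields along these
lines (a sketch only; the real brw_blorp_surf differs in detail, and
blorp_bo is a stand-in for the driver's buffer-object type):

    #include <cstdint>
    #include "isl/isl.h"         /* assumed to be on the include path */

    struct blorp_bo;             /* placeholder for the driver's BO type */

    struct blorp_surf_sketch {
       struct isl_surf surf;     /* layout: extent, tiling, format */
       blorp_bo *bo;             /* backing buffer object */
       uint32_t offset;          /* start of the surface within the bo */

       struct isl_surf aux_surf; /* HiZ/MCS layout, if any */
       blorp_bo *aux_bo;
       uint32_t aux_offset;
    };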
---
The isl_surf munging doesn't happen until fairly late in the blorp_blit
function. We can use the isl_surf for the vast majority, if not all, of our
params setup.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
This keeps all of the nastiness of gen6 stencil on the i965 side of the API
line and lets us delete that nasty hand-rolled ISL-based offset path that
we were using for ALL_SLICES_AT_EACH_LOD.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
This commit also adds support for an offset for aux surfaces. In GL, this
only gets used for HiZ on SNB at the moment. However, in Vulkan, all aux
surfaces are at a non-zero offset and that is likely to happen in GL
eventually.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
This commit moves us from a miptree model to a surf+bo+offset model. In
the GL driver, miptrees are almost always at the start of the bo so the
offset is zero but we don't want to always make that assumption. In the
short term, gen6 stencil and HiZ will be at an offset but, in the long term,
any Vulkan surface is liable to be at a non-zero offset.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
The previous HiZ support was bogus because all of get_aux_isl_surf looked
at mt->mcs_mt directly. For HiZ buffers, you need to look at either
mt->hiz_buf or mt->hiz_buf->mt.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
In order for the calculations of things such as fast clear rectangles to
work, we need more details of the auxiliary surface to be correct. In
particular, we need to be able to trust the width and height fields.
(These are not necessarily what you want coming out of the miptree.) The
only values state setup really cares about are the row and array pitch and
those we can safely stomp from the miptree.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
At one point, we were doing this correctly. It must have gotten lost in
one of the many rebases.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
The only reason why we need layer or level is that we need the z-offset for
3-D surfaces. Let's just have the one field for that.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
The data comes in via ISL in a format that's almost directly usable by the
hardware so we can avoid some of the conversion headache.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Now that the generic blorp path uses base level/layer, there's no need to
make gen8 special.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Since the dawn of time, blorp has used offsets directly to get at different
mip levels and array slices of surfaces. This isn't really necessary since
we can just use the base level/layer provided in the surface state. While
it may have simplified blorp's original design, we haven't been using the
blorp path for surface state on gen8 thanks to render compression and
there's really no need for it most of the time. This commit restricts
such surface munging to the cases of fake W-tiling and fake interleaved
multisampling.
Reviewed-by: Topi Pohjolainen <[email protected]>
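Sketched in ISL terms (isl_view is the real structure; the values are
only an example), pointing the view at the level/slice replaces the
offset arithmetic:

    #include "isl/isl.h"         /* assumed to be on the include path */

    /* Address miplevel 2, array slice 5 through the surface state's
     * view instead of computing a byte offset into the surface. */
    struct isl_view view = {};
    view.format = ISL_FORMAT_R8G8B8A8_UNORM;
    view.base_level = 2;
    view.levels = 1;
    view.base_array_layer = 5;
    view.array_len = 1;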
---
The layer field is in terms of physical layers which isn't quite what the
sampler will want for 2-D MS array textures.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Multisample array surfaces on IVB don't support the minimum array element
surface attribute so it needs to come through the sampler message. We may
as well just pass it through everything.
Reviewed-by: Topi Pohjolainen <[email protected]>
---
At the moment, the minify operation does nothing because
params.depth.view.base_level is always zero. However, as soon as we start
using actual base miplevels and array slices, we are going to need the
minification. Also, we only need to align the surface dimensions in the
case where we are operating on miplevel 0. Previously, it didn't matter
because it aligned on miplevel 0 and, for all other miplevels, the miptree
code guaranteed that the level was already aligned.
Reviewed-by: Topi Pohjolainen <[email protected]>
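The minification itself is the usual halving-per-level rule, mirroring
the minify() helper in Mesa's macros.h:

    #include <cstdint>

    /* Each successive miplevel halves the dimension, clamped to 1. */
    static uint32_t
    minify(uint32_t dim, uint32_t level)
    {
       const uint32_t d = dim >> level;
       return d > 0 ? d : 1;
    }

    /* e.g. minify(13, 2) == 3 and minify(7, 3) == 1; note that only
     * miplevel 0 still needs its dimensions aligned by hand. */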
---
The sampling hardware handles them fine. It just looks at the tiling to
determine whether it's the new gen9 1-D layout or the old one. The render
hardware isn't so smart.
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
---
Instead, we manually mutate the surface size as needed.
Reviewed-by: Topi Pohjolainen <[email protected]>