This patch fixes the three dead code elimination passes and the
VEC4/FS instruction scheduling passes so they leave instructions with
side effects alone.
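As a loose illustration of the dead-code side of this change (the structure
and names below are invented for the example, not the actual FS/VEC4 visitor
code), the pass now needs a guard along these lines:

    #include <stdbool.h>

    /* Hypothetical, simplified instruction record. */
    struct inst {
       bool result_used;       /* liveness says something reads the destination */
       bool has_side_effects;  /* e.g. atomic operations on buffer surfaces */
       bool dead;              /* marked for removal */
    };

    static void dead_code_eliminate(struct inst *insts, int n)
    {
       for (int i = 0; i < n; i++) {
          /* An unused result alone is no longer enough to delete an
           * instruction: anything with side effects must be kept. */
          if (!insts[i].result_used && !insts[i].has_side_effects)
             insts[i].dead = true;
       }
    }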
At some point it might be interesting to have the instruction
scheduler calculate the exact memory dependencies between atomic ops,
but they're rare enough that it seems unlikely that it will make any
practical difference.
Reviewed-by: Paul Berry <[email protected]>
|
The new option clamps GL_MAX_SAMPLES to a hardware-supported MSAA mode.
If negative, then no clamping occurs.
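Roughly speaking, the clamping picks the largest hardware-supported MSAA mode
that does not exceed the option value; a minimal sketch, with an invented
helper name and an example mode list:

    /* Hypothetical helper; the real list of supported modes depends on the
     * hardware generation. */
    static int apply_clamp_max_samples(int max_samples, int clamp_max_samples)
    {
       static const int supported[] = { 8, 4, 0 };   /* example modes only */

       if (clamp_max_samples < 0)
          return max_samples;                         /* no clamping */

       for (unsigned i = 0; i < sizeof(supported) / sizeof(supported[0]); i++) {
          if (supported[i] <= clamp_max_samples && supported[i] <= max_samples)
             return supported[i];
       }
       return 0;
    }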
v2: (for Paul)
- Add option to i965 only, not to all DRI drivers.
- Do not rely on the int->uint cast to convert negative
values to large positive values. Explicitly check for
clamp_max_samples < 0.
v3: (for Ken)
- Don't allow clamp_max_samples to alter context version.
- Use clearer for-loop and correct comment.
- Rename variables.
v4: (for Ken)
- Merge identical if-branches.
Reviewed-and-tested-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
Fixes "Macro compares unsigned to 0" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Fixes "Macro compares unsigned to 0" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Fixes "Uninitialized pointer field" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Ken Graunke <[email protected]>
|
- Enable GEN7_WM_MSDISPMODE_PERSAMPLE, GEN7_WM_POSOFFSET_SAMPLE, and
GEN7_WM_OMASK_TO_RENDER_TARGET as per the extension's specification.
- Only enable one of GEN7_WM_8_DISPATCH_ENABLE or GEN7_WM_16_DISPATCH_ENABLE
when GEN7_WM_MSDISPMODE_PERSAMPLE is enabled. Refer to the IVB PRM, Vol. 2,
Part 1, Page 288 for details.
V2:
- Use shared function _mesa_get_min_invocations_per_fragment().
- Use brw_wm_prog_data variables: uses_pos_offset, uses_omask.
V3:
- Enable simd16 dispatch with per sample shading.
- Give preference to 'simd16 only' mode over 'simd8 only' mode in the
case of per-sample shading with more than 1x MSAA.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
- Enable GEN6_WM_MSDISPMODE_PERSAMPLE, GEN6_WM_POSOFFSET_SAMPLE, and
GEN6_WM_OMASK_TO_RENDER_TARGET as per the extension's specification.
- Only enable one of GEN6_WM_8_DISPATCH_ENABLE or GEN6_WM_16_DISPATCH_ENABLE
when GEN6_WM_MSDISPMODE_PERSAMPLE is enabled.
Refer to the SNB PRM, Vol. 2, Part 1, Page 279 for details.
V2:
- Use shared function _mesa_get_min_invocations_per_fragment().
- Use brw_wm_prog_data variables: uses_pos_offset, uses_omask.
V3:
- Enable simd16 dispatch with per sample shading.
- Give preference to 'simd16 only' mode over 'simd8 only' mode in the
case of per-sample shading with more than 1x MSAA.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
V2:
- Update comments
- Add a special backend instruction to compute sample_mask.
- Add a new variable uses_omask in brw_wm_prog_data.
V3:
- Make changes to support simd16 mode.
- Delete the redundant AND instruction and handle the register
stride in the FS backend instruction.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
V2:
- Update comments
- Add a compute_sample_id variable in brw_wm_prog_key.
- Add a special backend instruction to compute sample_id.
V3:
- Make changes to support simd16 mode.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
V2:
- Update comments.
- Add compute_pos_offset variable in brw_wm_prog_key.
- Add variable uses_pos_offset in brw_wm_prog_data.
V3:
- Make changes to support simd16 mode.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
This is required when adding built-in system value vec{2, 3, 4}
variables. For example:
(declare (sys) vec2 gl_SamplePosition)
Without this patch, the GLSL IR above is split into:
(declare (temporary) float gl_SamplePosition_x)
(declare (temporary) float gl_SamplePosition_y)
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Only one program's instruction count is changed, but a shader in Tropics
is also affected.
instructions in affected programs: 326 -> 320 (-1.84%)
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
total instructions in shared programs: 1409124 -> 1406971 (-0.15%)
instructions in affected programs: 158376 -> 156223 (-1.36%)
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Helps a lot of Steam games.
total instructions in shared programs: 1409360 -> 1409124 (-0.02%)
instructions in affected programs: 20842 -> 20606 (-1.13%)
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Prior to the GLSL CSE pass, all of our testing happened to have a freshly
computed temporary in op[1], from the multiply by 16 to get a byte offset.
As of CSE you'll get var_refs of a reused value when you've got multiple
loads from the same offset.
Make a proper temporary for computing our intermediate value, to avoid
shifting the value farther and farther down. This avoids a regression in
gs-float-array-variable-index.
Reviewed-by: Paul Berry <[email protected]>
|
Previously, the write of each 32-bit half might land in separate batch
buffers, which is insane.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
This depends on ARB_transform_feedback2, so I've predicated it on the
ability to do register writes.
It also depends on ARB_transform_feedback3, which is the only reason we
couldn't expose it previously.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
This extension is written a bit strangely. Although it introduces the
concept of multiple transform feedback streams, it doesn't actually
provide more than a single stream.
The ARB_gpu_shader5 extension is what introduces the ability to write to
streams other than stream #0 and increases the required number of streams.
Since we don't yet support ARB_gpu_shader5, we can safely enable
ARB_transform_feedback3 even though we only support a single stream.
This does provide some useful functionality: applications can now use
more than one interleaved transform feedback buffer.
v2: Only expose the extension if ARB_transform_feedback2 is also
available, to avoid confusing applications (suggested by Ian).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
ARB_transform_feedback3 allows applications to insert blank space
between interleaved varyings by adding fake 1, 2, 3, or 4-component
varyings named gl_SkipComponents[1234].
Mesa's core data structures don't explicitly track these, instead simply
tracking the buffer offset for each real varying. If there is padding
due to gl_SkipComponents, these will not be contiguous.
Our hardware takes the specification quite literally. Instead of
specifying offsets for each varying, it assumes they're all contiguous
and requires you to program fake varyings for each "hole".
This patch adds support for emitting SO_DECL structures for these holes.
Although we've lost the information about exactly how the application
specified its padding (e.g. two gl_SkipComponents2 varyings vs. a single
gl_SkipComponents4), it shouldn't matter. We just need to
emit the right amount of space. This patch emits the minimal number of
hole SO_DECL structures.
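For illustration, the hole emission amounts to padding the gap between the
running offset and the next output's offset, at most four components per
SO_DECL (the helper names and arrays below are invented, not the actual
gen7_sol_state.c code):

    void emit_so_decl(int output);            /* hypothetical */
    void emit_hole_so_decl(int components);   /* hypothetical */

    static void emit_decls_with_holes(const int *offset, const int *size, int n)
    {
       int next = 0;                           /* next unwritten DWord */
       for (int i = 0; i < n; i++) {
          int skip = offset[i] - next;
          while (skip > 0) {
             int hole = skip > 4 ? 4 : skip;   /* one SO_DECL covers <= 4 components */
             emit_hole_so_decl(hole);
             skip -= hole;
          }
          emit_so_decl(i);
          next = offset[i] + size[i];
       }
    }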
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Currently, we emit one SO_DECL structure per output, so we use the index
in the Outputs[] array as the index into the so_decl[] array as well.
In order to support the fake "gl_SkipComponents[1234]" varyings from
ARB_transform_feedback3, we'll need to emit SO_DECLs to fill in the
holes between successive outputs. This means we'll likely emit more
SO_DECLs than there are outputs, so we need to count it explicitly.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
This is a bit shorter.
v2: Mark the temporary const (requested by Ian).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
With Linux 3.12, register writes work on Ivybridge and Baytrail, but not
Haswell. That will be fixed in a future kernel revision, at which point
this extension will automatically be enabled.
v2: Use I915_GEM_DOMAIN_INSTRUCTION for the register read, and also
correctly set the writeable flag when mapping (caught by Eric).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
We only want to enable ARB_transform_feedback2 if we can write to
registers from batchbuffers. In order to test that, we need to be able
to submit batches. And for batches to work, we need to program the
initial pipeline state (like PIPELINE_SELECT), which is done from
brw_state_init().
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Implementing the GetTransformFeedbackVertexCount() driver hook allows
the VBO module to call us with the right number of vertices.
The hardware doesn't directly count the number of vertices written by
SOL, so we instead use the SO_NUM_PRIMS_WRITTEN(n) counters and multiply
by the number of vertices per primitive.
Unfortunately, counting the number of primitives generated is tricky:
a program might pause a transform feedback operation, start a second one
with a different object, then switch back and resume. Both transform
feedback operations share the SO_NUM_PRIMS_WRITTEN counters.
To work around this, we save the counter values at Begin, Pause, Resume,
and End. This "bookends" each section where transform feedback is
active for the current object. Adding up differences of pairs gives
us the number of primitives generated. (This is similar to what we
do for occlusion queries on platforms without hardware contexts.)
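As a rough sketch of the bookkeeping (the snapshot layout and names are
assumptions for the example, not the actual brw code):

    #include <stdint.h>

    /* snapshots[] holds pairs: a counter value saved at Begin/Resume followed
     * by the value saved at the matching Pause/End. */
    static uint64_t primitives_generated(const uint64_t *snapshots, int pairs)
    {
       uint64_t prims = 0;
       for (int i = 0; i < pairs; i++)
          prims += snapshots[2 * i + 1] - snapshots[2 * i];
       return prims;
    }

The vertex count handed back to the VBO module is then this primitive count
multiplied by the number of vertices per primitive (1, 2, or 3).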
v2: Fix missing parenthesis in assertion (caught by Eric Anholt).
v3: Reuse prim_count_bo rather than freeing it and immediately
allocating a new one (suggested by Topi Pohjolainen).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Renaming it makes it obvious that it isn't used, and the assertion
verifies that the VBO module never passes us such an object.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
DrawTransformFeedback() needs to obtain the number of vertices written
to a particular stream during the last Begin/EndTransformFeedback block.
The new driver hook returns exactly that information.
Gallium drivers already implement this by passing the transform feedback
object to the drawing function, counting the number of vertices written
on the GPU, and using draw indirect. This is efficient, but doesn't
always work:
If vertex data comes from user arrays, then the VBO module needs to
know how many vertices to upload, so we need to synchronously count.
Gallium drivers are currently broken in this case.
It also doesn't work if primitive restart is done in software. For
normal drawing, vbo_draw_arrays() performs software primitive restart,
splitting the draw call in two. vbo_draw_transform_feedback() currently
doesn't because it has no idea how many vertices need to be drawn.
The new driver hook gives it that information, allowing us to reuse
the existing vbo_draw_arrays() code to do everything right.
On Intel hardware (at least Ivybridge), using the draw indirect approach
is difficult since the hardware counts primitives, rather than vertices,
which requires doing some simple math. So we always use this hook.
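The "simple math" is presumably just scaling the primitive count by the
vertex count of the output topology; a hedged sketch:

    #include <stdint.h>

    /* Transform feedback only ever records points, lines, or triangles. */
    static uint64_t so_vertices_written(uint64_t prims_written, int gl_prim_mode)
    {
       switch (gl_prim_mode) {
       case 0x0000: return prims_written;        /* GL_POINTS    */
       case 0x0001: return prims_written * 2;    /* GL_LINES     */
       default:     return prims_written * 3;    /* GL_TRIANGLES */
       }
    }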
Gallium drivers will likely want to use this hook in some cases, but
want to use the existing draw indirect approach where possible. Hence,
I've added a flag to allow drivers to opt-in to this call.
v2: Make it possible to implement this hook but only use this path
when necessary (suggested by Marek).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
The ARB_transform_feedback2 extension introduces the ability to pause
and resume transform feedback sessions. Although only one can be active
at a time, it's possible to switch between multiple transform feedback
objects while paused.
In order to facilitate this, we need to save/restore the SO_WRITE_OFFSET
registers so that after resuming, the GPU continues writing where it
left off.
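Conceptually this is a register save at Pause/End and a restore at Resume,
one slot per SO buffer; a sketch with invented helpers (the real code emits
MI register store/load commands against a per-object buffer, and the register
offset below is an assumption):

    #include <stdbool.h>
    #include <stdint.h>

    void save_register_to_bo(uint32_t reg, void *bo, int bo_offset);    /* hypothetical */
    void load_register_from_bo(uint32_t reg, void *bo, int bo_offset);  /* hypothetical */

    static void save_or_restore_so_write_offsets(void *offset_bo, bool restore)
    {
       for (int i = 0; i < 4; i++) {             /* one register per SO buffer */
          uint32_t reg = 0x5280 + i * 4;         /* SO_WRITE_OFFSET[i]; value assumed */
          if (restore)
             load_register_from_bo(reg, offset_bo, i * 4);
          else
             save_register_to_bo(reg, offset_bo, i * 4);
       }
    }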
This functionality also exists in ES 3.0, but somehow we completely
forgot to implement it.
v2: Reduce alignment from 4096 to 64 (it seemed excessive).
v3: Use I915_GEM_DOMAIN_INSTRUCTION instead of RENDER, for consistency
with other writes. It shouldn't matter on IVB+.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
This adds the basic driver hooks to allocate/free the brw variant.
It doesn't contain any additional information yet, but it will soon.
v2: Use the new _mesa_init_transform_feedback_object helper function
(requested by Eric and Ian).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Optimizes
cmp.ge.f0(8) null g45<8,8,1>F 0F
(+f0) sel(8) g50<1>F g40<8,8,1>F g10<8,8,1>F
cmp.ge.f0(8) null g45<8,8,1>F 0F
(+f0) sel(8) g51<1>F g41<8,8,1>F g11<8,8,1>F
cmp.ge.f0(8) null g45<8,8,1>F 0F
(+f0) sel(8) g52<1>F g42<8,8,1>F g12<8,8,1>F
cmp.ge.f0(8) null g45<8,8,1>F 0F
(+f0) sel(8) g53<1>F g43<8,8,1>F g13<8,8,1>F
into
cmp.ge.f0(8) null g45<8,8,1>F 0F
(+f0) sel(8) g50<1>F g40<8,8,1>F g10<8,8,1>F
(+f0) sel(8) g51<1>F g41<8,8,1>F g11<8,8,1>F
(+f0) sel(8) g52<1>F g42<8,8,1>F g12<8,8,1>F
(+f0) sel(8) g53<1>F g43<8,8,1>F g13<8,8,1>F
total instructions in shared programs: 1644938 -> 1638181 (-0.41%)
instructions in affected programs: 574955 -> 568198 (-1.18%)
Two more 16-wide programs (in L4D2). Some large (-9%) decreases in
instruction count in some of Valve's Source Engine games. No
regressions.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
We'd like to CSE some instructions, like CMP, that often have null
destinations. Instead of replacing them with MOVs to null, just don't
emit the MOV.
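In other words, once an instruction is known to recompute an earlier value,
the copy is only emitted when there is somewhere to copy to; a schematic
sketch (not the actual CSE pass code; the helpers are hypothetical):

    #include <stdbool.h>

    struct reg  { int nr; bool is_null; };
    struct inst { struct reg dst; };

    void emit_mov_before(struct inst *at, struct reg dst, struct reg src);
    void remove_inst(struct inst *inst);

    static void cse_rewrite(struct inst *later, struct reg earlier_result)
    {
       if (!later->dst.is_null)
          emit_mov_before(later, later->dst, earlier_result);
       /* With a null destination there is nothing to preserve, so no MOV. */
       remove_inst(later);
    }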
Reviewed-by: Paul Berry <[email protected]>
|
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
This avoids a lot of message setup we had to do otherwise. Improves
GLB2.7 performance with register spilling force-enabled by 1.6442% +/-
0.553218% (n=4).
v2: Use BRW_PREDICATE_NONE, improve a comment (by Paul).
Reviewed-by: Paul Berry <[email protected]>
|
I'm going to be introducing gen7 variants, and the previous naming was
going to get confusing.
Reviewed-by: Paul Berry <[email protected]>
|
We were clearing the reg_offset before trying to use it. Oops. Fixes
glsl-fs-texture2drect with the reg spilling debug enabled.
Reviewed-by: Paul Berry <[email protected]>
|
Things blew up when I enabled the debug register spill code without
disabling 16-wide, so I decided to just fix 16-wide spilling.
We still don't generate 16-wide when register spilling happens as part of
allocation (since we expect it to be slower), but now we can experiment
with allowing it in some cases in the future.
Reviewed-by: Paul Berry <[email protected]>
|
I believe this will never happen in SIMD8 mode, but it could for SIMD16
when we fix it.
v2: Fix off-by-one in my register counting comment (caught by Paul).
Reviewed-by: Paul Berry <[email protected]> (v1)
|
Now that reg spilling generates new vgrfs, we were looping forever if you
ever turned it on.
Instead, move the debug code into the register allocator right near where
we'd be doing spilling anyway, which should more accurately reflect how
register spilling occurs in the wild.
Reviewed-by: Paul Berry <[email protected]>
|
I'm going to need to reuse this for fixing register spilling on SIMD16.
Note that BRW_MAX_MRF is 16, which is the same as BRW_MAX_GRF -
GEN7_MRF_HACK_START.
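(Assuming the usual values of BRW_MAX_GRF = 128 and GEN7_MRF_HACK_START = 112,
that works out to 128 - 112 = 16.)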
Reviewed-by: Paul Berry <[email protected]>
|
This hasn't been true since SIMD16 mode was added.
Reviewed-by: Paul Berry <[email protected]>
|
When faced with a million instructions that all became candidates at the
same time (none of which individually reduce register pressure), the ones
on the critical path are more likely to be the ones that will free up some
candidates soon.
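Schematically, the tie-break prefers the ready instruction with the longest
remaining critical path (illustrative only, not the actual scheduler code):

    #include <stddef.h>

    struct sched_node { int delay; /* longest latency path to the end */ };

    static struct sched_node *pick_ready(struct sched_node **ready, int n)
    {
       struct sched_node *best = NULL;
       for (int i = 0; i < n; i++) {
          if (!best || ready[i]->delay > best->delay)
             best = ready[i];
       }
       return best;
    }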
shader-db:
total instructions in shared programs: 1681070 -> 1681070 (0.00%)
instructions in affected programs: 0 -> 0
GAINED: 40
LOST: 74
Fixes indistinguishable-from-hanging behavior in GLES3conform's
uniform_buffer_object_max_uniform_block_size test, regressed by
c3c9a8c85758796a26b48e484286e6b6f5a5299a. Given that
93bd627d5a6c485948b94488e6cd53a06b7ebdcf was unlocked by that commit, the
net effect on 16-wide program count is still quite positive, and I think
this should give us more stable scheduling (less dependency on original
instruction emit order).
v2: Comment suggestions by Paul
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70943
Reviewed-by: Paul Berry <[email protected]>
|
This is a step in doing scheduling as described in Muchnick (p538). A
difference is that our latency function is only specific to one
instruction (it doesn't describe, for example, the different latency
between WAR of a send's arguments and RAW of a send's destination), but
that's changeable later. We also don't separately compute the postorder
traversal of the graph, since we can use the setting of the delay field as
the "visited" flag.
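A simplified recursive version of that computation (field names are
assumptions, not the exact scheduler code; it relies on every latency being
positive, so that a non-zero delay doubles as the visited flag):

    struct sched_node {
       int latency;                  /* latency of this instruction's result */
       int delay;                    /* 0 until computed; also the visited flag */
       int num_children;
       struct sched_node **children; /* instructions that depend on this one */
    };

    static void compute_delay(struct sched_node *n)
    {
       if (n->delay)
          return;                    /* already visited */

       int longest = 0;
       for (int i = 0; i < n->num_children; i++) {
          compute_delay(n->children[i]);
          if (n->children[i]->delay > longest)
             longest = n->children[i]->delay;
       }
       n->delay = n->latency + longest;
    }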
Reviewed-by: Paul Berry <[email protected]>
|
Use PKG_CHECK_MODULES rather than requiring the user to set up the
option at configure time. Drop the unused EXPAT_INCLUDE and
update all targets.
NOTE: This commit removes the --with-expat configure
option. One should ensure that the expat they wish to use
has an expat.pc file accessible to pkg-config.
v2:
* Add note about the removal of --with-expat
(per Tom Stellard)
* Drop EXPAT_CFLAGS for targets that do not build DRI_COMMON
(spotted by Matt Turner)
v3:
* Rebase on top of megadrivers (drop EXPAT_CFLAGS from swrast)
Acked-by: Matt Turner <[email protected]> (v2)
Reviewed-by: Tom Stellard <[email protected]> (v2)
Signed-off-by: Emil Velikov <[email protected]>
Conflicts:
configure.ac
src/mesa/drivers/dri/common/Makefile.am
|
The idea of the original order was that you'd dead code eliminate accesses
to push constants. But I've never seen a case of that (nor has
shader-db), while we frequently see sparse accesses of large constant
arrays that would overflow into pull constants.
Cuts pull constant use on csgo, serious sam, planeshift, and the cave:
total instructions in shared programs: 1695103 -> 1688795 (-0.37%)
instructions in affected programs: 92024 -> 85716 (-6.85%)
GAINED: 339
LOST: 0
Reviewed-by: Kenneth Graunke <[email protected]>
|
Reviewed-by: Paul Berry <[email protected]>
|
The MRF variant is going to be used extensively by the atomic counter
intrinsics to assemble untyped atomic and surface read messages
easily.
Reviewed-by: Paul Berry <[email protected]>