|
[[email protected]: Tested NIR_TEST_CLONE=1 with valgrind]
Tested-by: Jordan Justen <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
The way NIR_PASS (and, by extension, nir_optimize) works is that it
may clone the shader and throw the old one away. (We use this for
testing nir_clone.) It's better to make a temporary variable, use it
for everything, and re-assign it to the gl_program at the end.
[[email protected]: Tested NIR_TEST_CLONE=1 with valgrind]
Tested-by: Jordan Justen <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
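A minimal sketch of the pattern, assuming a brw_nir_optimize-style
helper and a gl_program with a nir field:

    nir_shader *nir = prog->nir;

    /* NIR_TEST_CLONE=1 can make any pass replace the shader under us,
     * so never touch prog->nir directly between passes. */
    NIR_PASS_V(nir, nir_lower_system_values);
    nir = brw_nir_optimize(nir, compiler, is_scalar);

    prog->nir = nir;   /* re-assign once, at the very end */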
|
We have been exposing only 16 since 1e3e72e3054de, with arguments
based on register pressure and the number of available GRFs. However,
our scalar backend will always limit the number of push registers
for GS threads to 24, falling back to the pull model for anything else,
so there is really no reason to lower the number on those grounds.
Bumping this up to 32 makes it match all the other stages, which is
a nice property that can help applications in some cases (for example,
I recently fixed a bug in the CTS that assumed the number of input
locations in a stage matches the number of output locations in the
previous stage).
Pre-gen8, we use the vector backend and the push model, so in that case
the arguments in 1e3e72e3054de are still valid.
v2: check if we have scalar GS instead of the hw gen to enable this (Ken).
Reviewed-by: Kenneth Graunke <[email protected]>
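A hedged sketch of the v2 approach - the constant and the compiler
plumbing are assumptions:

    /* Expose 32 input locations, as in the other stages, whenever the
     * scalar GS backend (with its 24-register push limit and pull-model
     * fallback) is in use; keep 16 for the vec4 backend. */
    int gs_inputs =
       compiler->scalar_stage[MESA_SHADER_GEOMETRY] ? 32 : 16;
    ctx->Const.Program[MESA_SHADER_GEOMETRY].MaxInputComponents =
       gs_inputs * 4;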
|
This makes it easier to loop over programs.
Reviewed-by: Alejandro Piñeiro <[email protected]>
|
For now, linking just removes unused varyings between stages.
shader-db results BDW:
total instructions in shared programs: 13198288 -> 13191693 (-0.05%)
instructions in affected programs: 48325 -> 41730 (-13.65%)
helped: 473
HURT: 0
total cycles in shared programs: 541184926 -> 541159260 (-0.00%)
cycles in affected programs: 213238 -> 187572 (-12.04%)
helped: 435
HURT: 8
V2:
- lower indirects on demoted inputs as well as outputs.
Reviewed-by: Kenneth Graunke <[email protected]>
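A sketch of the linking step, using the existing NIR helpers; the call
site shown here is illustrative:

    nir_shader *producer = prev_stage->Program->nir;
    nir_shader *consumer = stage->Program->nir;

    if (nir_remove_unused_varyings(producer, consumer)) {
       /* v2: demoted I/O may now be indirectly addressed temporaries,
        * so lower indirects on both sides of the interface. */
       nir_lower_indirect_derefs(producer, nir_var_shader_out);
       nir_lower_indirect_derefs(consumer, nir_var_shader_in);
    }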
|
This will allow us to insert a nir linking step in brw_link_shader().
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eduardo Lima Mitev <[email protected]>
|
This will help us call gather info at a later point and allow us
to do some linking in nir.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eduardo Lima Mitev <[email protected]>
|
do_flush_locked isn't a great name - especially given that there's no
locking going on in our code relating to execbuf.
Reviewed-by: Chris Wilson <[email protected]>
|
We have a nice utility function for this, which eliminates the need for
locking stuff. This isn't really performance critical, but it's less
code to use the atomic.
p_atomic_inc_return does a pre-increment rather than a post-increment,
so we change screen->program_id to be initialized to 0 instead of 1 -
at which point we can simply delete the initialization, because
intel_screen is rzalloc'd.
Reviewed-by: Chris Wilson <[email protected]>
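A minimal sketch of the result (p_atomic_inc_return comes from
util/u_atomic.h; the helper name is as I recall it):

    static unsigned
    get_new_program_id(struct intel_screen *screen)
    {
       /* Pre-increment: the first ID handed out is 1, while program_id
        * itself starts at 0 courtesy of rzalloc - no locking, no
        * explicit initialization. */
       return p_atomic_inc_return(&screen->program_id);
    }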
|
There's no real advantage or disadvantage here, it's just for stylistic
consistency with the rest of the codebase.
Reviewed-by: Chris Wilson <[email protected]>
|
These have been unused for a while now.
|
If transform feedback is recording a varying, it needs a slot in the
VUE map, regardless of whether or not the shader writes it.
Together with the previous patch, this fixes:
- KHR-GL45.enhanced_layouts.xfb_capture_struct
The test captures a structure where the vertex shader writes the first
and third members - but the second still needs a slot.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
|
unify_interfaces() only updates the NIR program info, not the copy
in the gl_program itself. So, by using the old copy, we were missing
out on these updates.
The TCS/TES ones already did this correctly.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
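One plausible shape of the fix, assuming the usual gl_program::info
and nir_shader::info fields:

    /* Read the interface info from the NIR shader, which
     * unify_interfaces() actually updates, rather than from the stale
     * copy in the gl_program. */
    struct shader_info *info = &prog->nir->info;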
|
brw_finish_batch emits commands needed at the end of every batch buffer,
including any workarounds. In the past, we freed up some "reserved"
batch space before calling it, so we would never have to flush during
it. This was error prone and easy to screw up, so I deleted it a while
back in favor of growing the batch.
There were two problems:
1. We're in the middle of flushing, so brw->no_batch_wrap is guaranteed
not to be set. Using BEGIN_BATCH() to emit commands would cause a
recursive flush rather than growing the buffer as intended.
2. We already recorded the throttling batch before growing, which
replaces brw->batch.bo with a different (larger) buffer. So growing
would break throttling.
These are easily remedied by shuffling some code around and whacking
brw->no_batch_wrap in brw_finish_batch(). This also now includes the
final workarounds in the batch usage statistics. Found by inspection.
Fixes: 2c46a67b4138631217141f (i965: Delete BATCH_RESERVED handling.)
Reviewed-by: Chris Wilson <[email protected]>
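A sketch of the reshuffled flush path, following the description above:

    /* Growing replaces batch.bo, so finish the batch - with wrapping
     * disabled, so BEGIN_BATCH grows the buffer instead of recursively
     * flushing - before recording the throttling batch. */
    brw->no_batch_wrap = true;
    brw_finish_batch(brw);
    brw->no_batch_wrap = false;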
|
This is, by definition, finishing the batch.
Reviewed-by: Chris Wilson <[email protected]>
|
Fixes quite a few 'texwrap [12]d border color only' tests on NV20
(10de:0201). All told, 40 more tests pass.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
Tested-by: Ian Romanick <[email protected]>
|
v2: Force T and R wrap modes to GL_CLAMP_TO_EDGE for 1D textures.
This fixes a regression in tex1d-2dborder. The test uses a 1D texture
but it provides S and T texture coordinates. Since the T wrap mode
would (correctly) be set to GL_CLAMP, the texture would gradually
blend (incorrectly) with the border color.
I also tried setting NV20_3D_TEX_FORMAT_DIMS_1D instead of
NV20_3D_TEX_FORMAT_DIMS_2D for 1D textures, but that did not help.
It is possible that the same problem exists for 2D textures with the
R-wrap mode, but I don't think there are any piglit tests for that.
No test changes on NV20 (10de:0201).
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
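A hedged sketch of the v2 workaround in GL terms (exactly where the
nv20 code applies this is an assumption):

    GLenum wrap_t = samp->WrapT, wrap_r = samp->WrapR;
    if (texObj->Target == GL_TEXTURE_1D) {
       /* T and R don't apply to a 1D texture; GL_CLAMP here would
        * wrongly blend in the border color (as in tex1d-2dborder). */
       wrap_t = GL_CLAMP_TO_EDGE;
       wrap_r = GL_CLAMP_TO_EDGE;
    }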
|
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
Some DRI image properties weren't properly duplicated in the
new image. Some properties are still missing, but I'm not
certain whether there was a good reason to leave them out in the
first place.
Signed-off-by: Louis-Francis Ratté-Boulianne <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
|
This reverts commit e97f4b748094466567c7f3bad1a02ecee13db9c8.
|
I originally wrote the code to call the maps 'batch' and 'state',
until I remembered that 'batch' is the intel_batchbuffer struct pointer.
The NULL check was still using the wrong variable.
Caught by Coverity.
CID: 1418109
|
The setTexBuffer2 hook from GLX is used to implement glXBindTexImageEXT,
which has tighter restrictions than just "it's shared". In particular,
it says that any rendering to the image while it is bound causes the
contents to become undefined. This means that we can do whatever aux
tracking we want between glXBindTexImageEXT and glXReleaseTexImageEXT,
so long as we always transition from external in Bind and to external
in Release.
The fact that we were using make_shareable before was a problem, because
it would resolve away 100% of the aux data and then throw away our
reference to the aux buffer. If the aux data was shared with some other
application (i.e. if we're using I915_FORMAT_MOD_Y_TILED_CCS) then we
would forget that the aux data even existed for the rest of eternity.
This is fine for the first frame, but any subsequent calls to
glXBindTexImageEXT would bind the texture as if it had no aux
whatsoever, so no resolves would happen and texturing would behave as
if there were no aux. This was causing rendering corruption in mutter
when running on top of X11 with modifiers.
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
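A sketch of the resulting transitions; the helper names are taken from
this series and may differ slightly:

    /* Bind (glXBindTexImageEXT): external access is over, so resume
     * internal aux tracking from the worst-case external state. */
    intel_miptree_finish_external(brw, mt);

    /* Release (glXReleaseTexImageEXT): resolve back to something other
     * processes can consume - without throwing the aux buffer away. */
    intel_miptree_prepare_external(brw, mt);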
|
The old code made a new miptree that referenced the same BO as the
renderbuffer and just trusted in the memory aliasing to work. There are
only two ways in which the new miptree is liable to differ from the one
in the renderbuffer and neither of them matter:
1) It may have a different target. The only targets that we can ever
see in intelSetTexBuffer2 are GL_TEXTURE_2D and GL_TEXTURE_RECTANGLE
and the difference between the two doesn't matter as far as the
miptree is concerned; genX(update_sampler_state) only looks at the
gl_texture_object and not the miptree when determining whether or
not to use normalized coordinates.
2) It may have a very slightly different format. Again, this doesn't
matter because we've supported texture views for quite some time so
we always look at the gl_texture_object format instead of the
miptree format for hardware setup anyway.
On the other hand, because we were recreating the miptree, we were using
intel_miptree_create_for_bo, which doesn't understand modifiers. We
really want this function to work without doing a resolve so long as
modifiers are in use, so we need to fix that.
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
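A minimal sketch of the new approach - share the renderbuffer's miptree
instead of wrapping its BO in a fresh one (intel_miptree_reference is
the existing refcounting helper):

    struct intel_renderbuffer *irb = intel_renderbuffer(rb);

    /* Point the texture image at the very same miptree - aux data,
     * modifier information and all - rather than calling
     * intel_miptree_create_for_bo() on irb->mt->bo. */
    intel_miptree_reference(&intel_image->mt, irb->mt);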
|
When we get a miptree in through glXBindTexImageEXT, we don't know the
current aux state, so we have to assume the worst case. If the image
gets recreated, everything is fine, because intel_miptree_create_for_dri_image
sets it to the default. However, if our miptree is recycled, then we
may have a stale aux_usage, and we need to reset it to the default;
otherwise our aux_state tracking will get messed up.
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
|
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
|
This shouldn't really happen in practice, but I hit it a couple of times
when running a driver with a bad memory leak. We may as well hook up
the warning, because if it ever triggers, we'll know something is wrong.
Reviewed-by: Chris Wilson <[email protected]>
|
We'll want to pass this to brw_bo_map in a moment.
Reviewed-by: Chris Wilson <[email protected]>
|
Fixes a regression introduced with b96313c0e1289b296d7, which removed
BRW_NEW_BLORP from a bunch of SURFACE_STATE setup code, including render
targets, on the basis that blorp invalidates binding tables but not
surface states. However, at least on Broadwell, this caused a regression
in a CTS test. Ken and Jason tracked it down to the fact that we were
not uploading new render target surface states after allocating new
CCS_D surfaces for fast clears (that allocation is deferred until an
actual clear occurs).
The reason this only fails on BDW is that on SKL+ we use CCS_E, which
is allocated up front and therefore exists in the initial surface state.
The problem can be reproduced on those platforms too by using
INTEL_DEBUG=norcb to force the CCS_D path.
This patch, together with the ones preceding it, fixes the regression
by ensuring that we track, and flag as dirty, all aux state changes.
Credit goes to Jason and Ken for figuring out the cause of the
regression.
Fixes:
KHR-GL45.transform_feedback.draw_xfb_test
Reviewed-by: Jason Ekstrand <[email protected]>
|
v2: rename intel_miptree_set_clear_value to intel_miptree_set_clear_color
(Jason)
Reviewed-by: Jason Ekstrand <[email protected]>
|
Reviewed-by: Jason Ekstrand <[email protected]>
|
We want to use this flag to signal changes to the aux surfaces,
so let's not make it about fast clearing only. Suggested by Jason.
Reviewed-by: Jason Ekstrand <[email protected]>
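The renamed flag is then raised on any aux change, roughly (the flag
name follows this series):

    /* Any change to an aux surface must trigger re-upload of the
     * surface states that reference it. */
    brw->ctx.NewDriverState |= BRW_NEW_AUX_STATE;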
|
Jason and I use this for debugging all the time. Recompiling the driver
to enable it is kind of annoying. It's a great thing to try along with
always_flush_batch=true and always_flush_cache=true to detect a class of
problems - namely, atoms listening to an insufficient set of dirty bits.
Reviewed-by: Matt Turner <[email protected]>
|
Signed-off-by: Eric Engestrom <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
|
It's nearly the same so there's no good reason why it can't be in a
common function. The one difference is that _mesa_store_teximage
calls AllocTextureImageBuffer for us, while _mesa_store_texsubimage
doesn't, but we don't need that anyway - intelTexImage already does it.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
It is set to false in both callers. It isn't needed for glTexImage
because intelTexImage calls AllocTextureImageBuffer before calling
texsubimage_tiled_memcpy.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
These two paths are basically the same. There's no good reason to have
them in different files.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
This fixes a crash on Haswell when we try to upload a stencil texture
with blorp. It would also be a problem if someone tried to texture from
stencil after glBlitFramebuffer.
Cc: "17.2 17.1" <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
Fixes the following warning:
warning: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long')
A cast is needed to avoid this change turning into another warning:
warning: format specifies type 'unsigned long long' but the argument has type 'uint64_t' (aka 'unsigned long')
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
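The shape of the fix, with a placeholder variable name:

    /* %llx plus an explicit cast is warning-free whether the platform
     * defines uint64_t as 'unsigned long' or 'unsigned long long'. */
    fprintf(stderr, "0x%llx\n", (unsigned long long) value);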
|
It's nice to have this information. While we're at it, tweak the
formatting to try and vertically align numbers in the common case.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
|
We now flush the batch when either the batchbuffer or statebuffer
reaches the original intended batch size, instead of when the sum of
the two reaches a certain size (which makes no sense now that they're
separate buffers).
With this change, we also need to update our "are we near the end?"
estimate to require separate batch and state buffer space. I obtained
these estimates by looking at the size of draw calls in the Unreal 4
Elemental Demo (using INTEL_DEBUG=flush and always_flush_batch=true).
This will significantly impact the size of our batches. I've adjusted
both down to try and be roughly similar to what we had been doing. On
various benchmarks, a 20kB batch and a 16kB statebuffer seemed about
right, but we may need to adjust this further. I tried a 16kB batch,
but that regressed Synmark OglMultithread performance by a fair bit.
32kB for both would have significantly increased our batch sizes.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
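A hedged sketch of the new flush check (USED_BATCH counts DWords; the
size macros and the per-draw estimates are assumptions):

    #define BATCH_SZ (20 * 1024)   /* bytes of commands */
    #define STATE_SZ (16 * 1024)   /* bytes of indirect state */

    if (USED_BATCH(brw->batch) * 4 + batch_estimate > BATCH_SZ ||
        brw->batch.state_used + state_estimate > STATE_SZ)
       intel_batchbuffer_flush(brw);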
|
Now that we can grow the batchbuffer if we absolutely need the extra
space, we don't need to reserve space for the final do-or-die ending
commands.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
|
We need to set brw->no_batch_wrap to actually avoid flushing in the
middle of our BLORP operation, and instead grow the batchbuffer.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
|
Previously, we would just assert fail and die in this case. The only
safeguard is the "estimated max prim size" checks when starting a draw
(or compute dispatch or BLORP operation)...which are woefully broken.
Growing is fairly straightforward:
1. Allocate a new larger BO.
2. memcpy the existing contents over to the new buffer
3. Set the new BO to the same GTT offset as the old BO. When emitting
relocations, we write the presumed GTT offset of the target BO. If
we changed it, we'd have to update all the existing values (by
walking the relocation list and looking at offsets), which is more
expensive. With the old BO freed, ideally the kernel could simply
place the new BO at that offset anyway.
4. Update the validation list to contain the new BO.
5. Update the relocation list to have the GEM handle for the new BO
(which we can skip if using I915_EXEC_HANDLE_LUT).
v2: Update to handle malloc'd shadow buffers.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
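A condensed sketch of steps 1-3 (helper and field names are
assumptions; steps 4-5 and error handling are omitted):

    static void
    grow_buffer(struct brw_context *brw, struct brw_bo **bo_ptr,
                uint32_t **map_ptr, unsigned used_bytes, unsigned new_size)
    {
       /* 1. Allocate a new, larger BO. */
       struct brw_bo *new_bo =
          brw_bo_alloc(brw->bufmgr, "grown buffer", new_size, 4096);
       uint32_t *new_map = brw_bo_map(brw, new_bo, MAP_READ | MAP_WRITE);

       /* 2. Copy the existing contents over. */
       memcpy(new_map, *map_ptr, used_bytes);

       /* 3. Keep the old presumed GTT offset, so relocation values
        *    already written into the batch remain valid. */
       new_bo->gtt_offset = (*bo_ptr)->gtt_offset;

       brw_bo_unreference(*bo_ptr);
       *bo_ptr = new_bo;
       *map_ptr = new_map;
    }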
|
Previously, we emitted GPU commands and indirect state into the same
buffer, using a stack/heap like system where we filled in commands from
the start of the buffer, and state from the end of the buffer. We then
flushed before the two met in the middle.
Meeting in the middle is fatal, so you have to be certain that you
reserve the correct amount of space before emitting commands or state
for a draw. Currently, we will assert !no_batch_wrap and die if the
estimate is ever too small. This has been mercifully obscure, but has
happened on a number of occasions, and could in theory happen to any
application that issues a large draw at just the wrong time.
Estimating the amount of batch space required is painful - it's hard to
get right, and getting it right involves a lot of code that would burn
CPU time, and also be painful to maintain. Rolling back to a saved
state and retrying is also painful - failing to save/restore all the
required state will break things, and redoing state emission burns a
lot of CPU. memcpy'ing to a new batch and continuing is painful,
because commands we issue for a draw depend on earlier commands as well
(such as STATE_BASE_ADDRESS, or the GPU being in a particular state).
The best plan is to never run out of space, which is totally doable but
pretty wasteful - a pessimal draw requires a huge amount of space, and
rarely occurs. Instead, we'd like to grow the batch buffer if we need
more space and can't safely flush.
We can't grow with a meet in the middle approach - we'd have to move the
state to the end, which would mean updating every offset from dynamic
state base address. Using separate batch and state buffers, where both
fill starting at the beginning, makes it easy to grow either as needed.
This patch separates the two concepts. We create a separate state
buffer, with a second relocation list, and use that for brw_state_batch.
However, this patch tries to retain the original flushing behavior - it
adds the amount of batch and state space together, as if they were still
co-existing in a single buffer. The hope is to flush at the same time
as before. This is necessary to avoid provoking bugs caused by broken
batch wrap handling (which we'll fix shortly). It also avoids suddenly
increasing the size of the batch (due to state not taking up space),
which could have a significant performance impact. We'll tune it later.
v2:
- Mark the statebuffer with EXEC_OBJECT_CAPTURE when supported (caught
by Chris). Unfortunately, we lose the ability to capture state data
on older kernels.
- Continue to support the malloc'd shadow buffers.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
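A simplified sketch of state allocation from the separate buffer
(growing and the malloc'd shadow-buffer path are omitted; field names
are assumptions):

    void *
    brw_state_batch(struct brw_context *brw, int size, int alignment,
                    uint32_t *out_offset)
    {
       struct intel_batchbuffer *batch = &brw->batch;

       /* State fills from the start of its own buffer - no more
        * meeting the command stream in the middle. */
       uint32_t offset = ALIGN(batch->state_used, alignment);
       batch->state_used = offset + size;

       *out_offset = offset;   /* offset from dynamic state base address */
       return (char *) batch->state_map + offset;
    }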
|
This will let us access screen->kernel_features in the next patch.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
|
We'll need to read from both buffers when decoding state.
This also drops the "failed to map" fallback - it's completely useless
on LLC systems where we write directly to the mapped BO. It's not that
useful on non-LLC systems either.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
|
brw_batch_reloc emits a relocation from the batchbuffer to elsewhere.
brw_state_reloc emits a relocation from the statebuffer to elsewhere.
For now, they do the same thing, but when we actually split the two
buffers, we'll change brw_state_reloc to use the state buffer.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
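The two entry points, as described (the signatures are a close
approximation; both return the presumed 64-bit address of the target
to write into the buffer):

    uint64_t brw_batch_reloc(struct intel_batchbuffer *batch,
                             uint32_t batch_offset,
                             struct brw_bo *target, uint32_t target_offset,
                             unsigned flags);
    uint64_t brw_state_reloc(struct intel_batchbuffer *batch,
                             uint32_t state_offset,
                             struct brw_bo *target, uint32_t target_offset,
                             unsigned flags);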
|
I'm planning on splitting batch and state into separate buffers, at
which point we'll need two relocation lists. In preparation for that,
this patch refactors the relocation stuff into a structure we can
replicate...which looks a lot like anv_reloc_list.
Reviewed-by: Chris Wilson <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
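A sketch of the replicable structure, modeled (as the message says) on
anv_reloc_list; field names are assumptions:

    struct brw_reloc_list {
       struct drm_i915_gem_relocation_entry *relocs;   /* i915_drm.h */
       int reloc_count;
       int reloc_array_size;
    };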
|
The batch buffer and state buffer code is fairly tied together,
and having it in one .c file will make refactoring easier.
Also, drop some commentary above brw_state_batch. The "aperture
checking performance hacks" are long since gone, so that paragraph
makes little sense at this point.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
|