| Commit message | Author | Age | Files | Lines |
| |
Jason and I use this for debugging all the time. Recompiling the driver
to enable it is kind of annoying. It's a great thing to try along with
always_flush_batch=true and always_flush_cache=true to detect a class of
problems - namely, atoms listening to an insufficient set of dirty bits.
Reviewed-by: Matt Turner <[email protected]>
| |
v2: pass llvm context reference instead of a pointer
Signed-off-by: Jan Vesely <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
| |
Reviewed-by: Dave Airlie <[email protected]>
| |
Only on GFX9 do we implement them as 2D images.
This fixes:
dEQP-VK.image.image_size.1d_array.readonly_12x34
dEQP-VK.image.image_size.1d_array.readonly_1x1
dEQP-VK.image.image_size.1d_array.readonly_32x32
dEQP-VK.image.image_size.1d_array.readonly_7x1
dEQP-VK.image.image_size.1d_array.readonly_writeonly_12x34
dEQP-VK.image.image_size.1d_array.readonly_writeonly_1x1
dEQP-VK.image.image_size.1d_array.readonly_writeonly_32x32
dEQP-VK.image.image_size.1d_array.readonly_writeonly_7x1
dEQP-VK.image.image_size.1d_array.writeonly_12x34
dEQP-VK.image.image_size.1d_array.writeonly_1x1
dEQP-VK.image.image_size.1d_array.writeonly_32x32
dEQP-VK.image.image_size.1d_array.writeonly_7x1
Fixes: 1bcb953e166 "radv: handle GFX9 1D textures"
Reviewed-by: Dave Airlie <[email protected]>
| |
Signed-off-by: Eric Engestrom <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
| |
It's nearly the same, so there's no good reason why it can't be in a
common function. The one difference is that _mesa_store_teximage
calls AllocTextureImageBuffer for us, while _mesa_store_texsubimage
doesn't, but we don't need that anyway - intelTexImage already does it.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
| |
It is set to false in both callers. It isn't needed for glTexImage
because intelTexImage calls AllocTextureImageBuffer before calling
texsubimage_tiled_memcpy.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
| |
These two paths are basically the same. There's no good reason to have
them in different files.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
| |
This fixes a crash on Haswell when we try to upload a stencil texture
with blorp. It would also be a problem if someone tried to texture from
stencil after glBlitFramebuffers.
Cc: "17.2 17.1" <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
| |
Needed for 32-bit PowerPC.
Cc: "17.2" <[email protected]>
Fixes: a6a38a038bd ("util/u_atomic: provide 64bit atomics where they're missing")
Reviewed-by: Emil Velikov <[email protected]>
| |
Platforms without particular atomic operations require the
implementations in u_atomic.c
Cc: "17.2" <[email protected]>
Fixes: a6a38a038bd ("util/u_atomic: provide 64bit atomics where they're missing")
Reviewed-by: Emil Velikov <[email protected]>
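For context, the usual fallback when a platform lacks a native 64-bit atomic
is to serialize the operation with a lock. A minimal sketch of that idea,
using hypothetical names rather than the actual u_atomic.c symbols:

    #include <stdint.h>
    #include <pthread.h>

    /* Hypothetical software fallback: serialize the 64-bit operation with a
     * global mutex when no native 64-bit atomic instruction is available. */
    static pthread_mutex_t sw64_mutex = PTHREAD_MUTEX_INITIALIZER;

    uint64_t
    sw_atomic_add_return_64(uint64_t *v, uint64_t i)
    {
       uint64_t r;
       pthread_mutex_lock(&sw64_mutex);
       *v += i;
       r = *v;
       pthread_mutex_unlock(&sw64_mutex);
       return r;
    }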
| |
Signed-off-by: Lionel Landwerlin <[email protected]>
Acked-by: Jason Ekstrand <[email protected]>
| |
Enable the toggle to catch when the library is missing from the link
path. Better to test, fail and address before releasing Mesa ;-)
Cc: [email protected]
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
libunwind is an optional dependency used by the gallium aux module
(libgallium), and consequently the final binaries must be linked against
it. To test whether the library is properly specified in the link pass,
add it to the travis-ci build environment and force its use.
Cc: [email protected]
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
| |
In Ubuntu Trusty the default version of llvm is 3.4, and the build was
actually randomly picking 3.5 or 3.9. Adding libunwind would then result
in build success or failure depending on which version was picked.
Install the llvm-3.3-dev package and force its use: on one hand it is
the minimum required version we want to build-test against, and on the
other hand forcing the version stabilizes the build.
Cc: [email protected]
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
| |
Include src/gallium/Automake.inc and correct the build flags accordingly.
Force -std=c++11 (extensively used by the test) as otherwise it gets
defined only when building against llvm >= 3.9.
Fixes: 7be6d8fe12 ("mesa/st: glsl_to_tgsi: add tests for the new temporary lifetime tracker")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102665
Reviewed-by: Emil Velikov <[email protected]> (v1)
| |
Otherwise it will be missing from the tarball, leading to build failure.
Fixes: d4d777317b9 ("radv: move shaders related code to radv_shader.c")
Signed-off-by: Emil Velikov <[email protected]>
| |
Changes the --enable-omx option to --enable-omx-bellagio.
Signed-off-by: Gurkirpal Singh <[email protected]>
Reviewed-and-Tested-by: Julien Isorce <[email protected]>
Acked-by: Christian König <[email protected]>
| |
Fixes the following warning:
warning: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long')
A cast is needed to avoid this change turning into another warning:
warning: format specifies type 'unsigned long long' but the argument has type 'uint64_t' (aka 'unsigned long')
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
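Either of the standard remedies works here; a small standalone illustration
(not the actual call site from the patch):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
       uint64_t value = 123456789ULL;

       /* Cast so the argument matches the conversion on every ABI. */
       printf("value = %llu\n", (unsigned long long) value);

       /* Or let <inttypes.h> supply the right conversion for uint64_t. */
       printf("value = %" PRIu64 "\n", value);
       return 0;
    }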
| |
Also, it's useless to set the error code twice. That said, we should
probably skip subsequent commands once the command buffer is
considered invalid.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
| |
Similar to the RadeonSI renderer string.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
| |
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
| |
Before this change we were defaulting to STD140, which is slightly
less efficient at packing arrays.
Reviewed-by: Marek Olšák <[email protected]>
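The packing difference is easiest to see on a scalar array member; the
strides below follow the std140/std430 rules from the GLSL spec and are not
taken from this patch:

    #include <stdio.h>

    int main(void)
    {
       /* For a "float a[4];" block member:
        * std140 rounds the array stride up to a vec4 (16 bytes),
        * std430 packs the elements tightly (4 bytes). */
       const unsigned std140_stride = 16;
       const unsigned std430_stride = 4;

       printf("float a[4]: std140 = %u bytes, std430 = %u bytes\n",
              4 * std140_stride, 4 * std430_stride);
       return 0;
    }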
| |
Reviewed-by: Marek Olšák <[email protected]>
| |
Reviewed-by: Marek Olšák <[email protected]>
| |
v2: always set can_speculate and allow_smem to true
Reviewed-by: Marek Olšák <[email protected]>
| |
This will allow us to use STD430 packing by default if the driver
supports it.
Reviewed-by: Marek Olšák <[email protected]>
| |
Will be used to add LOAD support to UBOs.
Reviewed-by: Marek Olšák <[email protected]>
| |
This will be used to distinguish between load types when using
the TGSI_OPCODE_LOAD opcode.
Reviewed-by: Marek Olšák <[email protected]>
| |
The virgl protocol version of tgsi doesn't handle this yet, so
transform it back to the old ways.
Thanks to Nicolai Hähnle <[email protected]>
for also writing nearly the same patch.
Fixes: 41e342d5 tgsi/ureg: always emit constants (and their decls) as 2D
Tested-by: Rob Herring <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
| |
Otherwise we end up using a 32-bit comparison, which doesn't end well.
Timothy caught this while playing around with some opt passes.
Fixes: 278580729a (st/glsl_to_tgsi: add support for 64-bit integers)
Reviewed-by: Timothy Arceri <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
| |
It's nice to have this information. While we're at it, tweak the
formatting to try and vertically align numbers in the common case.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
We now flush the batch when either the batchbuffer or statebuffer
reaches the original intended batch size, instead of when the sum of
the two reaches a certain size (which makes no sense now that they're
separate buffers).
With this change, we also need to update our "are we near the end?"
estimate to require separate batch and state buffer space. I obtained
these estimates by looking at the size of draw calls in the Unreal 4
Elemental Demo (using INTEL_DEBUG=flush and always_flush_batch=true).
This will significantly impact the size of our batches. I've adjusted
both down to try and be roughly similar to what we had been doing. On
various benchmarks, a 20kB batch and 16kB statebuffer seemed about
right, but we may need to adjust this further. I tried a 16kB batch,
but that regressed Synmark OglMultithread performance by a fair bit.
32kB for both would have significantly increased our batch sizes.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
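In sketch form, the new check treats each buffer independently; the constants
and field names are placeholders, not the literal brw_* code:

    #include <stdbool.h>

    /* Placeholder sizes matching the numbers discussed above. */
    #define BATCH_SZ (20 * 1024)
    #define STATE_SZ (16 * 1024)

    struct batch {
       unsigned batch_used;   /* bytes of commands emitted so far */
       unsigned state_used;   /* bytes of indirect state emitted so far */
    };

    bool
    must_flush(const struct batch *b, unsigned cmd_bytes, unsigned state_bytes)
    {
       /* Each buffer is now checked against its own limit; the old code
        * compared the sum of the two against a single size. */
       return b->batch_used + cmd_bytes > BATCH_SZ ||
              b->state_used + state_bytes > STATE_SZ;
    }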
| |
Now that we can grow the batchbuffer if we absolutely need the extra
space, we don't need to reserve space for the final do-or-die ending
commands.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
We need to set brw->no_batch_wrap to actually avoid flushing in the
middle of our BLORP operation, and instead grow the batchbuffer.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
Previously, we would just assert fail and die in this case. The only
safeguard is the "estimated max prim size" checks when starting a draw
(or compute dispatch or BLORP operation)...which are woefully broken.
Growing is fairly straightforward:
1. Allocate a new larger BO.
2. memcpy the existing contents over to the new buffer.
3. Set the new BO to the same GTT offset as the old BO. When emitting
relocations, we write the presumed GTT offset of the target BO. If
we changed it, we'd have to update all the existing values (by
walking the relocation list and looking at offsets), which is more
expensive. With the old BO freed, ideally the kernel could simply
place the new BO at that offset anyway.
4. Update the validation list to contain the new BO.
5. Update the relocation list to have the GEM handle for the new BO
(which we can skip if using I915_EXEC_HANDLE_LUT).
v2: Update to handle malloc'd shadow buffers.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
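The five steps condensed into a toy sketch; the types and helpers are
stand-ins for the real brw_bufmgr objects, not the actual driver code:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy stand-ins for GEM buffer objects and validation-list entries. */
    struct fake_bo {
       uint32_t gem_handle;
       uint64_t gtt_offset;   /* presumed address already written into relocs */
       uint32_t size;
       void    *map;
    };

    struct exec_object { uint32_t handle; uint64_t offset; };

    struct fake_bo *
    grow_bo(struct fake_bo *old, uint32_t new_size,
            struct exec_object *validation_entry, uint32_t new_handle)
    {
       struct fake_bo *bo = malloc(sizeof(*bo));      /* 1. allocate bigger BO */
       bo->gem_handle = new_handle;
       bo->size = new_size;
       bo->map = malloc(new_size);

       memcpy(bo->map, old->map, old->size);          /* 2. copy contents over */

       bo->gtt_offset = old->gtt_offset;              /* 3. keep the presumed
                                                         offset so relocation
                                                         values already written
                                                         stay valid            */
       validation_entry->handle = bo->gem_handle;     /* 4. update the
                                                         validation list       */
       validation_entry->offset = bo->gtt_offset;

       /* 5. relocation entries would also need the new GEM handle here,
        *    unless I915_EXEC_HANDLE_LUT makes them index the validation list. */

       free(old->map);
       free(old);
       return bo;
    }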
| |
Previously, we emitted GPU commands and indirect state into the same
buffer, using a stack/heap like system where we filled in commands from
the start of the buffer, and state from the end of the buffer. We then
flushed before the two met in the middle.
Meeting in the middle is fatal, so you have to be certain that you
reserve the correct amount of space before emitting commands or state
for a draw. Currently, we will assert !no_batch_wrap and die if the
estimate is ever too small. This has been mercifully obscure, but has
happened on a number of occasions, and could in theory happen to any
application that issues a large draw at just the wrong time.
Estimating the amount of batch space required is painful - it's hard to
get right, and getting it right involves a lot of code that would burn
CPU time, and also be painful to maintain. Rolling back to a saved
state and retrying is also painful - failing to save/restore all the
required state will break things, and redoing state emission burns a
lot of CPU. memcpy'ing to a new batch and continuing is painful,
because commands we issue for a draw depend on earlier commands as well
(such as STATE_BASE_ADDRESS, or the GPU being in a particular state).
The best plan is to never run out of space, which is totally doable but
pretty wasteful - a pessimal draw requires a huge amount of space, and
rarely occurs. Instead, we'd like to grow the batch buffer if we need
more space and can't safely flush.
We can't grow with a meet-in-the-middle approach - we'd have to move the
state to the end, which would mean updating every offset from dynamic
state base address. Using separate batch and state buffers, where both
fill starting at the beginning, makes it easy to grow either as needed.
This patch separates the two concepts. We create a separate state
buffer, with a second relocation list, and use that for brw_state_batch.
However, this patch tries to retain the original flushing behavior - it
adds the amount of batch and state space together, as if they were still
co-existing in a single buffer. The hope is to flush at the same time
as before. This is necessary to avoid provoking bugs caused by broken
batch wrap handling (which we'll fix shortly). It also avoids suddenly
increasing the size of the batch (due to state not taking up space),
which could have a significant performance impact. We'll tune it later.
v2:
- Mark the statebuffer with EXEC_OBJECT_CAPTURE when supported (caught
by Chris). Unfortunately, we lose the ability to capture state data
on older kernels.
- Continue to support the malloc'd shadow buffers.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
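A rough sketch of what handing out indirect state from its own buffer looks
like after the split (simplified; not the literal brw_state_batch
implementation, and it assumes power-of-two alignments):

    #include <assert.h>
    #include <stdint.h>

    /* The state buffer now fills from the start, just like the batch buffer,
     * rather than growing down from the end of a shared BO. */
    struct statebuf {
       uint8_t *map;    /* CPU mapping of the state BO */
       uint32_t size;   /* total size in bytes */
       uint32_t used;   /* bytes handed out so far */
    };

    void *
    state_alloc(struct statebuf *sb, uint32_t bytes, uint32_t alignment,
                uint32_t *out_offset)
    {
       uint32_t offset = (sb->used + alignment - 1) & ~(alignment - 1);

       assert(offset + bytes <= sb->size);  /* the real code flushes or grows */

       *out_offset = offset;                /* offset relative to the state BO,
                                               i.e. from STATE_BASE_ADDRESS    */
       sb->used = offset + bytes;
       return sb->map + offset;
    }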
| |
This will let us access screen->kernel_features in the next patch.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
We'll need to read from both buffers when decoding state.
This also drops the "failed to map" fallback - it's completely useless
on LLC systems where we write directly to the mapped BO. It's not that
useful on non-LLC systems either.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
brw_batch_reloc emits a relocation from the batchbuffer to elsewhere.
brw_state_reloc emits a relocation from the statebuffer to elsewhere.
For now, they do the same thing, but when we actually split the two
buffers, we'll change brw_state_reloc to use the state buffer.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
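In other words, both entry points can funnel into one helper and differ only
in which buffer (and therefore which relocation list) they pass; a schematic
sketch with made-up signatures:

    #include <stdint.h>

    struct reloc_list;                                  /* per-buffer list */
    struct growing_buffer { struct reloc_list *relocs; /* ... */ };
    struct brw_bo;

    /* Hypothetical shared helper: record a relocation at 'offset' inside
     * 'buf' pointing at 'target', and return the presumed address to write. */
    uint64_t emit_reloc(struct growing_buffer *buf, uint32_t offset,
                        struct brw_bo *target, uint32_t target_offset,
                        unsigned flags);

    uint64_t
    batch_reloc(struct growing_buffer *batch, uint32_t offset,
                struct brw_bo *target, uint32_t target_offset, unsigned flags)
    {
       return emit_reloc(batch, offset, target, target_offset, flags);
    }

    uint64_t
    state_reloc(struct growing_buffer *state, uint32_t offset,
                struct brw_bo *target, uint32_t target_offset, unsigned flags)
    {
       /* Identical for now; targets the separate state buffer once it exists. */
       return emit_reloc(state, offset, target, target_offset, flags);
    }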
| |
I'm planning on splitting batch and state into separate buffers, at
which point we'll need two relocation lists. In preparation for that,
this patch refactors the relocation stuff into a structure we can
replicate...which looks a lot like anv_reloc_list.
Reviewed-by: Chris Wilson <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
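The shape of such a replicable list, schematically (anv_reloc_list is the
stated inspiration; these fields are illustrative rather than copied from the
driver):

    #include <stdint.h>

    struct drm_i915_gem_relocation_entry;   /* from i915_drm.h */
    struct brw_bo;

    /* One list per buffer (batch, and later state), so each buffer carries
     * its own relocations plus the target BOs they keep alive. */
    struct reloc_list {
       struct drm_i915_gem_relocation_entry *relocs;
       struct brw_bo **reloc_bos;
       int reloc_count;
       int reloc_array_size;
    };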
| |
The batch buffer and state buffer code is fairly tied together,
and having it in one .c file will make refactoring easier.
Also, drop some commentary above brw_state_batch. The "aperture
checking performance hacks" are long since gone, so that paragraph
makes little sense at this point.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
Prior to the previous patch, we would pwrite the batchbuffer contents,
and wanted to skip the execbuffer if that failed. Now that we memcpy,
we don't set ret != 0 on failure anymore, so it will always be 0.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
We'd like to eliminate the malloc'd shadow copy eventually, but there
are still unresolved performance problems. In the meantime, let's at
least get rid of pwrite.
On Apollolake, improves Synmark OglBatch6 performance by:
1.53581% +/- 0.269589% (n=108).
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
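Conceptually the change is just "copy the shadow buffer through a CPU mapping
of the BO" rather than issuing a pwrite; a toy sketch with a placeholder
mapping helper:

    #include <stdint.h>
    #include <string.h>

    struct brw_bo;

    /* Placeholder for whatever returns a CPU-visible mapping of the BO. */
    void *bo_map_cpu(struct brw_bo *bo);

    void
    upload_batch(struct brw_bo *bo, const void *shadow, uint32_t used_bytes)
    {
       /* New path: write the malloc'd shadow copy straight through the
        * mapping. The old path was a pwrite-style ioctl whose failure set
        * ret != 0, the error path removed by the cleanup above. */
       memcpy(bo_map_cpu(bo), shadow, used_bytes);
    }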
| |
This makes the assertion safe against batchbuffers growing.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
This assertion prevents you from doing intel_batchbuffer_require_space
with a size so huge it won't fit in the batchbuffer. This doesn't seem
like a common mistake, and I've never found the assert to be useful.
Soon, I hope to have batches grow, at which point this won't make sense.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
| |
Reviewed-by: Emil Velikov <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
| |
For non-CCS images, we were reporting just one plane even though they
may have multiple planes in the case of YUV.
Reviewed-by: Ben Widawsky <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
|