| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To flush batches out of order, the gmem code must not depend on
state from fd_context (since that state may already reflect a more
recent batch), so all of this state moves into the batch.
The one exception is the gmem/pipe/tile state itself, but that is
only used from the gmem code (and batches are flushed serially).
The alternative would be re-calculating the GMEM layout on every
batch, even when the dimensions of the render targets are the same.
Note: This opens up the possibility of pushing gmem/submit into a
helper thread.
Signed-off-by: Rob Clark <[email protected]>
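A minimal C sketch of the reorganization described above, with invented names (the real structures are fd_context/fd_batch and the freedreno gmem state): per-batch state is grouped into a batch object so batches can be flushed out of order, while the tile layout stays in the context because batches are still flushed serially.

   /* Illustrative sketch only; names do not match the real freedreno code. */
   #include <stdbool.h>
   #include <stdint.h>

   struct example_ringbuffer;            /* stand-in for fd_ringbuffer */

   struct example_batch {                /* per-batch state, so batches can flush out of order */
      struct example_ringbuffer *draw;
      struct example_ringbuffer *binning;
      struct example_ringbuffer *gmem;
      uint32_t dirty;                    /* dirty-state flags now tracked per batch */
      bool needs_flush;
   };

   struct example_gmem_layout {          /* shared tile layout; batches flush serially */
      uint32_t bin_w, bin_h;
      uint32_t nbins_x, nbins_y;
   };

   struct example_context {
      struct example_batch *batch;       /* the batch currently being built */
      struct example_gmem_layout gmem;   /* kept in the context to avoid recomputing it */
   };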
|
|
|
|
| |
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
| |
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Introduce the batch object, to track a batch/submit's worth of
ringbuffers and other bookkeeping. In this first step, just move
the ringbuffers into batch, since that is mostly uninteresting
churn.
For now there is just a single batch at a time. Note that one
outcome of this change is that ringbuffers are allocated/freed on
each use, but the expectation is that the BO pool in libdrm_freedreno
will save us the GEM BO alloc/free that was the initial reason to
implement a ringbuffer pool in gallium.
The purpose of the batch is to eventually facilitate out-of-order
rendering, with batches associated with framebuffer state and
tracking dependencies on other batches.
Signed-off-by: Rob Clark <[email protected]>
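A rough sketch of the lifecycle this commit introduces (invented names; the real types are fd_batch and fd_ringbuffer): the ringbuffers belong to the batch, are allocated when the batch is created, and are released when it is flushed, with a BO pool underneath expected to keep that cheap.

   /* Illustrative sketch only; not the actual freedreno API. */
   #include <stdlib.h>

   struct toy_ringbuffer { int size; };        /* stand-in for fd_ringbuffer */

   static struct toy_ringbuffer *toy_ringbuffer_new(int size)
   {
      struct toy_ringbuffer *rb = calloc(1, sizeof(*rb));
      rb->size = size;
      return rb;
   }

   static void toy_ringbuffer_del(struct toy_ringbuffer *rb)
   {
      free(rb);                                /* a BO pool underneath keeps this cheap */
   }

   struct toy_batch {
      struct toy_ringbuffer *draw;             /* command streams now live in the batch */
      struct toy_ringbuffer *binning;
      struct toy_ringbuffer *gmem;
   };

   static struct toy_batch *toy_batch_create(void)
   {
      struct toy_batch *batch = calloc(1, sizeof(*batch));
      batch->draw    = toy_ringbuffer_new(0x10000);
      batch->binning = toy_ringbuffer_new(0x10000);
      batch->gmem    = toy_ringbuffer_new(0x10000);
      return batch;
   }

   static void toy_batch_flush(struct toy_batch *batch)
   {
      /* ... submit the command streams to the kernel ... */
      toy_ringbuffer_del(batch->draw);
      toy_ringbuffer_del(batch->binning);
      toy_ringbuffer_del(batch->gmem);
      free(batch);
   }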
|
|
|
|
|
|
| |
finally unused
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
| |
This bug seems to have always been there. Applications changing shaders
but not textures between draw calls would have gotten undefined behavior.
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
| |
Already done as part of ST_NEW_FRAGMENT_PROGRAM in st_validate_state.
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This just needs to be done by st_validate_state.
v2: add "shaders_may_be_dirty" flags so that st_validate_state is not
skipped on _NEW_PROGRAM and real shader changes are detected
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
| |
v2: - also don't check edge flags for GLES
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The goal is to do this in st_validate_state:
   while (dirty)
      atoms[u_bit_scan(&dirty)]->update(st);
That implies that atoms can't specify which flags they consume.
There is exactly one ST_NEW_* flag for each atom (58 flags in total),
and there are macros that combine multiple flags into one for easier use.
All _NEW_* flags are translated into ST_NEW_* flags in st_invalidate_state;
st/mesa doesn't keep the _NEW_* flags after that.
torcs is 2% faster between the previous patch and the end of this series.
v2: - add st_atom_list.h to Makefile.sources
Reviewed-by: Nicolai Hähnle <[email protected]>
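A self-contained illustration of the dispatch loop quoted above (the atoms, example_bit_scan() and the update callbacks are invented; the real pieces are the st/mesa atoms and u_bit_scan()): each ST_NEW_* flag is one bit, and the loop repeatedly picks the lowest set bit, clears it, and runs the matching atom's update hook.

   /* Illustrative, self-contained version of the dispatch loop; not the real st/mesa code. */
   #include <stdint.h>
   #include <stdio.h>

   struct example_st_context { int dummy; };

   struct example_atom {
      const char *name;
      void (*update)(struct example_st_context *st);
   };

   /* What u_bit_scan() does: return the index of the lowest set bit and clear it. */
   static int example_bit_scan(uint64_t *mask)
   {
      int i = __builtin_ctzll(*mask);
      *mask &= *mask - 1;
      return i;
   }

   static void update_framebuffer(struct example_st_context *st) { (void)st; puts("framebuffer"); }
   static void update_fs(struct example_st_context *st)          { (void)st; puts("fragment shader"); }

   static const struct example_atom atom_framebuffer = { "framebuffer", update_framebuffer };
   static const struct example_atom atom_fs          = { "fs", update_fs };

   /* One atom per bit, e.g. bit 0 ~ "ST_NEW_FRAMEBUFFER", bit 1 ~ "ST_NEW_FS_STATE". */
   static const struct example_atom *atoms[] = { &atom_framebuffer, &atom_fs };

   static void example_validate_state(struct example_st_context *st, uint64_t dirty)
   {
      while (dirty)
         atoms[example_bit_scan(&dirty)]->update(st);
   }

   int main(void)
   {
      struct example_st_context st = { 0 };
      example_validate_state(&st, 0x3);   /* both atoms dirty */
      return 0;
   }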
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
| |
This won't be needed after the rewrite.
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The pass I introduced in commit a2dc11a7818c04d8dc0324e8fcba98d60bae
was entirely broken. A missing "break" made the load_interpolated_input
case always fall through to "default" and hit a "continue", making it
not actually move any load_interpolated_input intrinsics at all.
It would only move the simple load_barycentric_* intrinsics, which
don't emit any code anyway, making it basically useless.
The initial version I sent of the pass worked, but I apparently
failed to verify that the simplified version in v2 actually worked.
With the obvious fix applied (so we actually tried to move
load_interpolated_input intrinsics), I discovered a second bug: we
weren't moving the offset SSA def to the top, breaking SSA validation.
The new version of the pass actually moves load_interpolated_input
intrinsics and all their dependencies, as intended.
Papers over GPU hangs on Ivybridge and Baytrail caused by the
recent NIR FS input rework by restoring the old behavior.
(I'm not honestly sure why they hang with PLN not at the top.)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97083
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
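For readers unfamiliar with the failure mode, a minimal stand-alone illustration of the fall-through mistake (this is only the shape of the bug, not the real pass):

   /* Minimal illustration of the missing-break bug described above; not the real NIR pass. */
   #include <stdbool.h>
   #include <stdio.h>

   enum intrinsic { LOAD_BARYCENTRIC, LOAD_INTERPOLATED_INPUT, OTHER };

   /* Decide whether an instruction should be moved to the top of the shader. */
   static bool should_move(enum intrinsic op)
   {
      bool movable = false;

      switch (op) {
      case LOAD_BARYCENTRIC:
         movable = true;
         break;
      case LOAD_INTERPOLATED_INPUT:
         movable = true;
         /* Missing "break" here: control falls through into default, which
          * overrides the decision, so the intrinsic is never actually moved.
          * (The real fix also had to move the intrinsic's offset SSA def.) */
      default:
         movable = false;
      }

      return movable;
   }

   int main(void)
   {
      printf("load_interpolated_input moved: %s\n",
             should_move(LOAD_INTERPOLATED_INPUT) ? "yes" : "no");   /* prints "no" */
      return 0;
   }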
|
|
|
|
|
|
|
| |
Seems to mostly work on a3xx, except when it doesn't and it kills
the GPU quite badly.
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
|
|
| |
Valgrind detected that the ir_copy_propagation_visitor::killed_all
member is left uninitialized.
Signed-off-by: Jan Ziak (http://atom-symbol.net) <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Exported dmabufs can be imported back by the same process, but the handle
was not being added to the hash table on export. Add the handle to the
hash table on export.
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
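A small sketch of the export-side bookkeeping being fixed (all names invented; not the real winsys code):

   /* Sketch of the export-side bookkeeping (invented names; not the real winsys
    * code): when a BO is exported as a dmabuf, record its handle in the same table
    * that the import path consults, so a later self-import finds the existing BO. */
   struct ws_bo { unsigned handle; };

   #define WS_TABLE_SIZE 64
   static struct ws_bo *ws_export_table[WS_TABLE_SIZE];   /* stand-in for the hash table */

   static void ws_bo_mark_exported(struct ws_bo *bo)
   {
      /* Previously this insertion was missing, so an import of our own dmabuf
       * never matched and a second BO wrapper was created for the same buffer. */
      if (bo->handle < WS_TABLE_SIZE)
         ws_export_table[bo->handle] = bo;
   }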
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Vulkan CTS test results on gen9:
./deqp-vk --deqp-case=dEQP-VK.pipeline.multisample.min_sample_shading*
Test run totals:
   Passed:        60/90 (66.7%)
   Failed:         0/90 (0.0%)
   Not supported: 30/90 (33.3%)
   Warnings:       0/90 (0.0%)
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
We should use the persample_dispatch variable in prog_data.
Fixes all (~60) of the dEQP sample shading tests. Many tests exited with
VK_ERROR_OUT_OF_DEVICE_MEMORY without this patch.
V2: Use the shader key bits set in brw_compile_fs (Jason)
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
android.opengl.cts.WrapperTest#testGetIntegerv1 CTS test calls
eglTerminate, followed by eglReleaseThread. A similar case is
observed in this bug: https://bugs.freedesktop.org/show_bug.cgi?id=69622,
where the test calls eglTerminate, then eglMakeCurrent(dpy, NULL, NULL, NULL).
With the current code, the dri2_dpy structure is freed on the
eglTerminate call, so the display is no longer initialized when
eglReleaseThread calls MakeCurrent with NULL parameters to unbind
the context, which causes a segfault in drv->API.MakeCurrent
(dri2_make_current), either in glFlush or in a later call.
eglTerminate specifies that "If contexts or surfaces associated
with display is current to any thread, they are not released until
they are no longer current as a result of eglMakeCurrent."
However, to properly free the current context/surface (i.e., call
glFlush, unbindContext, driDestroyContext), we still need the
display vtbl (and possibly an active dri dpy connection). Therefore,
we add a reference counter to dri2_egl_display to make sure the
structure is kept allocated for as long as it is required.
One drawback of this is that eglInitialize may not completely
reinitialize the display (if eglTerminate was called with a current
context); however, this seems to meet the EGL spec quite well and
does not permanently leak any context/display, even for incorrectly
written apps.
Cc: "12.0" <[email protected]>
Signed-off-by: Nicolas Boichat <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
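A rough sketch of the reference-counting scheme described above (invented names; the real counter is added to dri2_egl_display):

   /* Rough sketch of the reference-counting idea (invented names): eglTerminate
    * drops a reference instead of freeing immediately, so a still-current
    * context can be unbound and cleaned up later. */
   #include <stdlib.h>

   struct refcounted_display {
      int ref_count;
      /* ... driver connection, display vtbl, ... */
   };

   static struct refcounted_display *display_create(void)
   {
      struct refcounted_display *dpy = calloc(1, sizeof(*dpy));
      dpy->ref_count = 1;               /* reference held while the display is initialized */
      return dpy;
   }

   static void display_ref(struct refcounted_display *dpy)
   {
      dpy->ref_count++;                 /* e.g. taken while a context/surface is current */
   }

   static void display_unref(struct refcounted_display *dpy)
   {
      if (--dpy->ref_count == 0)
         free(dpy);                     /* only freed once nothing needs it any more */
   }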
|
|
|
|
|
|
|
|
|
| |
The file was removed with an earlier commit, breaking 'make dist'.
Drop it from Makefile.sources since it's no longer around.
Fixes: 16985eb308e ("vc4: Switch to using the libdrm-provided
vc4_drm.h.")
Signed-off-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
| |
Found by Coverity.
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
The order of optimizations can lead to the conditional discard optimization
being applied twice to the same discard statement. In this case, we must
ensure that both conditions are applied: for example, a discard nested
inside two conditionals must end up predicated on the logical AND of both
conditions, not just on one of them.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96762
Cc: [email protected]
Tested-by: Kai Wasserbäch <[email protected]>
Reviewed-by: Tapani Pälli <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
When an application declares varying arrays but does not actually do any
indirect indexing, some array indices may end up unused in the consuming
shader (e.g. an array declared with four elements of which the fragment
shader only reads two), so the number of input slots that correspond to
the array ends up less than the array_size.
Cc: [email protected]
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Without this, GCC 4.8.x throws the error below:
error: invalid initialization of non-const reference of type
'clover::llvm::compat::raw_ostream_to_emit_file {aka llvm::raw_svector_ostream&}'
from an rvalue of type '<brace-enclosed initializer list>'
v2: change the commit title and add the error message, as Eric Engestrom requested
Signed-off-by: Dieter Nützel <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97019
[ Francisco Jerez: Trivial formatting fix. ]
Reviewed-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
|
|
| |
We would have hit a segfault already if this could be null.
Fixes a Coverity warning spotted by Matt.
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
| |
These are only used by get_matching_input(), which has already been
called at this point, so free the hash tables.
Reviewed-by: Iago Toral Quiroga <[email protected]>
|
|
|
|
|
|
|
| |
This exposes OpenGL 4.1 on Maxwell (tested on GM107 and GM206).
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
|
|
|
|
|
|
|
| |
PFETCH (actually ISBERD in the GM107+ ISA) only accepts a GPR for src0.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
The number of patch outputs (limited to 255) has moved in the TCP
header, but the blob seems to also set the old position. Also, the
high 8 bits are now located in between the min/max parallel output
read addresses, at position 20.
Signed-off-by: Samuel Pitoiset <[email protected]>
Acked-by: Ilia Mirkin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In the case of split primitives, we need to restore
the original contents of the vtx.attrsz array to make
immediate-mode attribute array tracking work.
v2: Use bool instead of boolean.
Signed-off-by: Mathias Fröhlich <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Tested-by: Brian Paul <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96950
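A minimal sketch of the save/restore pattern described above (invented names; the commit refers to the vtx.attrsz array in the vbo module):

   /* Minimal sketch of the save/restore pattern (invented names, not the real
    * vbo code): when a primitive has to be split, the attribute size array is
    * saved before flushing and restored afterwards, so that attribute tracking
    * for the remainder of the begin/end block still matches what was emitted. */
   #include <string.h>

   #define SPLIT_MAX_ATTRS 32

   struct split_exec {
      unsigned char attrsz[SPLIT_MAX_ATTRS];   /* components currently used per attribute */
   };

   static void split_primitive(struct split_exec *exec)
   {
      unsigned char saved_attrsz[SPLIT_MAX_ATTRS];

      memcpy(saved_attrsz, exec->attrsz, sizeof(saved_attrsz));

      /* ... flush the first part of the primitive; this can reset attrsz ... */

      memcpy(exec->attrsz, saved_attrsz, sizeof(saved_attrsz));
   }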
|
|
|
|
|
|
| |
This reverts commit f84e9d749fbb6da73a60fb70e6725db773c9b8f8.
Bioshock Infinite no longer hangs.
|
|
|
|
|
|
| |
v2: set endian swap to 16
untested
|
|
|
|
|
|
|
|
| |
This cuts down the overhead of si_dump_shader when ddebug is capturing
shader logs, which is done for every draw call unconditionally (that's
quite a lot of work for a draw call).
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
| |
to separate individual shaders dumped consecutively.
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For good performance while being able to generate decent hang reports.
The report doesn't contain the parsed IB and the buffer list, but it
isolates the draw call and dumps shaders while not having to flush
the context.
This is for GPU hangs that are harder to reproduce and require interactive
playing for minutes or even hours.
dd_pipe.h explains some implementation details. Initializing, copying
(recording) and clearing states is most of the code.
Performance should be at least 50% of normal, depending on the
circumstances (i.e. 50% is expected to be the worst-case scenario,
not the best case). The majority of the time is spent in
dump_debug_state(PIPE_DUMP_CURRENT_SHADERS), and that's after all
the optimizations in later patches. There is no obvious way to
optimize that further.
Reviewed-by: Nicolai Hähnle <[email protected]>
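A very rough, self-contained sketch of the record-then-check idea described above (invented names; not the actual ddebug code):

   /* Very rough sketch of pipelined hang detection (invented names): record a
    * cheap snapshot per draw call, and only produce the expensive dump when a
    * later fence check shows the GPU never got past that draw. */
   #include <stdbool.h>
   #include <stdio.h>

   struct draw_record {
      unsigned draw_serial;
      char shader_log[256];       /* captured shaders: cheap compared to a full dump */
   };

   static bool fence_signalled(unsigned draw_serial)
   {
      (void)draw_serial;
      return true;                /* stand-in for a real fence/timeout check */
   }

   static void record_draw(struct draw_record *rec, unsigned serial)
   {
      rec->draw_serial = serial;
      snprintf(rec->shader_log, sizeof(rec->shader_log),
               "shaders bound at draw %u", serial);   /* record, do not flush */
   }

   static void check_for_hang(const struct draw_record *rec)
   {
      if (!fence_signalled(rec->draw_serial)) {
         /* Only now do the expensive part: dump the recorded snapshot and the
          * hardware status registers for the isolated draw call. */
         fprintf(stderr, "GPU hang at draw %u:\n%s\n",
                 rec->draw_serial, rec->shader_log);
      }
   }

   int main(void)
   {
      struct draw_record rec;
      record_draw(&rec, 1);
      check_for_hang(&rec);       /* no hang in this toy run, so nothing is dumped */
      return 0;
   }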
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
| |
We don't want a core dump.
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
| |
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
|
| |
The pipelined hang detection mode will not want to dump everything
(it's also time-consuming). It will only dump shaders after a draw call,
and then dump the status registers separately if a hang is detected.
Reviewed-by: Nicolai Hähnle <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It is necessary to reuse existing BOs when dmabufs are imported. There
are two cases that need to be handled: dmabufs can be created/exported
and then imported by the same process, and they can be imported multiple
times. Following other drivers, add a hash table to track exported BOs
so that the BOs get reused.
v2: Whitespace fixup (by anholt)
Signed-off-by: Rob Herring <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
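A small sketch of the lookup-or-create pattern described above (invented names; not the real vc4 winsys code):

   /* Sketch of the lookup-or-create pattern (invented names): importing a dmabuf
    * must hand back the BO that already wraps that GEM handle, otherwise the same
    * buffer ends up with two BO wrappers. */
   #include <stdlib.h>

   struct demo_bo { unsigned handle; int refcount; };

   #define DEMO_TABLE_SIZE 64
   static struct demo_bo *demo_handle_table[DEMO_TABLE_SIZE];  /* stand-in for a hash table */

   static struct demo_bo *demo_bo_from_handle(unsigned handle)
   {
      struct demo_bo *bo = handle < DEMO_TABLE_SIZE ? demo_handle_table[handle] : NULL;

      if (bo) {
         bo->refcount++;                      /* imported again by the same process: reuse */
         return bo;
      }

      bo = calloc(1, sizeof(*bo));
      bo->handle = handle;
      bo->refcount = 1;
      if (handle < DEMO_TABLE_SIZE)
         demo_handle_table[handle] = bo;      /* so later imports find this BO */
      return bo;
   }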
|
|
|
|
|
| |
We don't tell the hardware whether we're computing depth, so we need
to manage early Z state manually. Fixes piglit early-z.
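A sketch of the kind of decision this implies (invented names and conditions; not the actual vc4 logic):

   /* Sketch of the kind of check described above (invented names; not the real
    * vc4 code): early Z can only stay enabled when the fragment shader does not
    * compute its own depth and cannot kill pixels after Z would have been written. */
   #include <stdbool.h>

   struct zsa_state {
      bool depth_test_enabled;
      bool depth_writes_enabled;
   };

   static bool use_early_z(const struct zsa_state *zsa,
                           bool fs_writes_z, bool fs_uses_discard)
   {
      if (!zsa->depth_test_enabled)
         return false;
      if (fs_writes_z)
         return false;            /* the shader computes its own depth */
      if (fs_uses_discard && zsa->depth_writes_enabled)
         return false;            /* a killed pixel must not have updated Z already */
      return true;
   }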
|
|
|
|
|
|
|
| |
We could use the nir_shader_gather_info() pass to update it after the
fact, but this is what glsl_to_nir and prog_to_nir do.
Reviewed-by: Rob Clark <[email protected]>
|