path: root/src
Commit log (each entry lists the commit message, author, date, files changed, and lines -removed/+added):

* anv: fix build failure (Nicolai Hähnle, 2017-11-09, 1 file, -2/+2)
    Fixes: e3a8013de8ca ("util/u_queue: add util_queue_fence_wait_timeout")

* mesa: flush and wait after creating a fallback texture (Nicolai Hähnle, 2017-11-09, 1 file, -0/+5)
    Fixes non-deterministic failures in
    dEQP-EGL.functional.sharing.gles2.multithread.simple_egl_sync.images.texture_source.teximage2d_render
    and others in dEQP-EGL.functional.sharing.gles2.multithread.*

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* mesa: increase MaxServerWaitTimeout (Nicolai Hähnle, 2017-11-09, 1 file, -1/+1)
    The current value was introduced in commit a27180d0d8666, which claims
    that it represents ~1.11 years. However, it is interpreted in
    nanoseconds, so it actually only represents ~9.8 hours. That seems a
    bit short.

    Use the largest value consistent with both int32 and int64. It
    corresponds to ~292 years in nanoseconds.

    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

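    As a rough sanity check of the ~292 years figure (assuming the new limit
    is on the order of INT64_MAX nanoseconds, which is what a signed 64-bit
    nanosecond field tops out at; this snippet is purely illustrative and is
    not Mesa code):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
           /* INT64_MAX ns (~9.22e18) divided by the nanoseconds in a year. */
           const double ns_per_year = 365.25 * 24.0 * 3600.0 * 1e9;
           printf("%.1f years\n", (double)INT64_MAX / ns_per_year); /* ~292.3 */
           return 0;
        }
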
* st/mesa: remove redundant flushes from st_flush (Nicolai Hähnle, 2017-11-09, 3 files, -3/+6)
    st_flush should flush state tracker-internal state and the pipe, but
    not mesa/main state. Of the four callers:

    - glFlush/glFinish already call FLUSH_{VERTICES,STATE}.
    - st_vdpau doesn't need to call them.
    - st_manager will now call them explicitly.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* st/dri: use stapi flush instead of pipe flush when creating fences (Nicolai Hähnle, 2017-11-09, 1 file, -5/+6)
    There may be pending operations (e.g. vertices) that need to be flushed
    by the state tracker.

    Found by inspection.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: use a threaded context even for debug contexts (Nicolai Hähnle, 2017-11-09, 1 file, -9/+2)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: record and dump time of flush (Nicolai Hähnle, 2017-11-09, 3 files, -1/+8)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* ddebug: optionally handle transfer commands like draws (Nicolai Hähnle, 2017-11-09, 4 files, -66/+288)
    Transfer commands can have associated GPU operations.

    Enabled by passing GALLIUM_DDEBUG=transfers.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* ddebug: dump context and before/after times of draws (Nicolai Hähnle, 2017-11-09, 2 files, -0/+10)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* ddebug: generalize print_named_xxx via a PRINT_NAMED macro (Nicolai Hähnle, 2017-11-09, 1 file, -15/+10)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* ddebug: rewrite to always use a threaded approach (Nicolai Hähnle, 2017-11-09, 4 files, -515/+546)
    This patch has multiple goals:

    1. Off-load the writing of records in 'always' mode to another thread
       for performance.

    2. Allow using ddebug with threaded contexts. This really forces us to
       move some of the "after_draw" handling into another thread.

    3. Simplify the different modes of ddebug, both in the code and in the
       user interface, i.e. GALLIUM_DDEBUG. In particular, there's no
       'pipelined' anymore, since we're always pipelined; and 'noflush' is
       replaced by 'flush', since we no longer flush by default.

    4. Fix the fences in pipelining mode. They previously relied on writes
       via pipe_context::clear_buffer. However, on radeonsi, those could
       (quite reasonably) end up in the SDMA buffer. So we use the newly
       added PIPE_FLUSH_{TOP,BOTTOM}_OF_PIPE fences instead.

    5. Improve pipelined mode overall, using the finer grained information
       provided by the new fences.

    Overall, the result is that pipelined mode should be more useful, and
    using ddebug in default mode is much less invasive, in the sense that
    it changes the overall driver behavior less (which is kind of crucial
    for a driver debugging tool).

    An example of the new hang debug output:

        Gallium debugger active. Hang detection timeout is 1000ms.
        GPU hang detected, collecting information...

        Draw #   driver   prev BOP   TOP   BOP   dump file
        -------------------------------------------------------------
             2      YES        YES   YES    NO   /home/nha/ddebug_dumps/shader_runner_19919_00000000
             3      YES         NO   YES    NO   /home/nha/ddebug_dumps/shader_runner_19919_00000001
             4      YES         NO   YES    NO   /home/nha/ddebug_dumps/shader_runner_19919_00000002
             5      YES         NO   YES    NO   /home/nha/ddebug_dumps/shader_runner_19919_00000003

        Done.

    We can see that there were almost certainly 4 draws in flight when the
    hang happened: the top-of-pipe fence was signaled for all 4 draws, the
    bottom-of-pipe fence for none of them. In virtually all cases, we'd
    expect the first draw in the list to be at fault, but due to the GPU
    parallelism, it's possible (though highly unlikely) that one of the
    later draws causes a component to get stuck in a way that prevents the
    earlier draws from making progress as well.

    (In the above example, there were actually only 3 draws truly in
    flight: the last draw is a blit that waits for the earlier draws;
    however, its top-of-pipe fence is emitted before the cache flush and
    wait, and so the fact that the draw hasn't truly started yet can only
    be seen from a closer inspection of GPU state.)

    Acked-by: Marek Olšák <marek.olsak@amd.com>

* ddebug: use an atomic increment when numbering files (Nicolai Hähnle, 2017-11-09, 1 file, -1/+3)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* dd/util: extract dd_get_debug_filename_and_mkdir (Nicolai Hähnle, 2017-11-09, 1 file, -12/+18)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium/u_dump: add and use util_dump_transfer_usage (Nicolai Hähnle, 2017-11-09, 4 files, -16/+61)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium/u_dump: add util_dump_ns (Nicolai Hähnle, 2017-11-09, 2 files, -0/+13)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium/u_dump: export util_dump_ptr (Nicolai Hähnle, 2017-11-09, 2 files, -2/+5)
    Change format to %p while we're at it.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: implement PIPE_FLUSH_{TOP,BOTTOM}_OF_PIPE (Nicolai Hähnle, 2017-11-09, 1 file, -1/+88)
    v2: use uncached system memory for the fence, and use the CPU to clear
        it so we never read garbage when checking the fence

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: document some subtle details of fence_finish & fence_server_sync (Nicolai Hähnle, 2017-11-09, 1 file, -0/+22)
    v2: remove the change to si_fence_server_sync, we'll handle that more
        robustly

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium: add pipe_context::callback (Nicolai Hähnle, 2017-11-09, 3 files, -0/+58)
    For running post-draw operations inside the driver thread. ddebug will
    use it.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium/u_threaded: implement pipe_context::set_log_context (Nicolai Hähnle, 2017-11-09, 1 file, -0/+11)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium/u_threaded: avoid syncs for get_query_result (Nicolai Hähnle, 2017-11-09, 1 file, -17/+48)
    Queries should still get marked as flushed when flushes are executed
    asynchronously in the driver thread. To this end, the management of the
    unflushed_queries list is moved into the driver thread.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium/u_threaded: implement asynchronous flushes (Nicolai Hähnle, 2017-11-09, 6 files, -27/+238)
    This requires out-of-band creation of fences, and will be signaled to
    the pipe_context::flush implementation by a special TC_FLUSH_ASYNC
    flag.

    v2:
    - remove an incorrect assertion
    - handle fence_server_sync for unsubmitted fences by relying on the
      improved cs_add_fence_dependency
    - only implement asynchronous flushes on amdgpu

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium/u_threaded: mark queries flushed only for non-deferred flushes (Nicolai Hähnle, 2017-11-09, 2 files, -4/+6)
    The driver uses (and must use) the flushed flag of queries as a hint
    that it does not have to check for synchronization with currently
    queued up commands. Deferred flushes do not actually flush queued up
    commands, so we must not set the flushed flag for them.

    Found by inspection.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: move fence functions to si_fence.c (Nicolai Hähnle, 2017-11-09, 6 files, -267/+312)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* winsys/amdgpu: handle cs_add_fence_dependency for deferred/unsubmitted fences (Nicolai Hähnle, 2017-11-09, 4 files, -12/+41)
    The idea is to fix the following interleaving of operations that can
    arise from deferred fences:

        Thread 1 / Context 1          Thread 2 / Context 2
        --------------------          --------------------
        f = deferred flush
        <------- application-side synchronization ------->
                                      fence_server_sync(f)
                                      ...
                                      flush()
        flush()

    We will now stall in fence_server_sync until the flush of context 1
    has completed.

    This scenario was unlikely to occur previously, because applications
    seem to be doing

        Thread 1 / Context 1          Thread 2 / Context 2
        --------------------          --------------------
        f = glFenceSync()
        glFlush()
        <------- application-side synchronization ------->
                                      glWaitSync(f)

    and indeed they probably *have* to use this ordering to avoid
    deadlocks in the GLX model, where all GL operations conceptually go
    through a single connection to the X server. However, it's less clear
    whether applications have to do this with other WSI (i.e. EGL).
    Besides, even this sequence of GL commands can be translated into the
    Gallium-level sequence outlined above when Gallium threading and
    asynchronous flushes are used. So it makes sense to be more robust.

    As a side effect, we no longer busy-wait on submission_in_progress.

    We won't enable asynchronous flushes on radeon, but add a
    cs_add_fence_dependency stub anyway to document the potential issue.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium: add PIPE_FLUSH_{TOP,BOTTOM}_OF_PIPE bits (Nicolai Hähnle, 2017-11-09, 2 files, -0/+16)
    These bits are intended to be used by the ddebug hang detection and are
    named in analogy to the Vulkan stage bits (and the corresponding Radeon
    pipeline event).

    Hang detection needs fences on the granularity of individual commands,
    which nothing else really covers. The closest alternative would have
    been PIPE_QUERY_GPU_FINISHED, but (a) queries are a per-context object
    and we really want a per-screen object, (b) queries don't offer a wait
    with timeout, and (c) in any case, PIPE_QUERY_GPU_FINISHED is meant to
    imply that GPU caches are flushed, which the new bits explicitly
    aren't.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium: add PIPE_FLUSH_ASYNC and PIPE_FLUSH_HINT_FINISH (Nicolai Hähnle, 2017-11-09, 3 files, -1/+18)
    Also document some subtleties of pipe_context::flush.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* util/u_queue: add util_queue_fence_wait_timeout (Nicolai Hähnle, 2017-11-09, 4 files, -26/+121)
    v2:
    - style fixes
    - fix missing timeout handling in futex path

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* threads: update for late C11 changes (Nicolai Hähnle, 2017-11-09, 1 file, -11/+13)
    C11 threads were changed to use struct timespec instead of xtime, and
    thrd_sleep got a second argument. See
    http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1554.htm and
    http://en.cppreference.com/w/c/thread/{thrd_sleep,cnd_timedwait,mtx_timedlock}

    Note that cnd_timedwait is spec'd to be relative to TIME_UTC /
    CLOCK_REALTIME.

    v2: Fix Windows build errors. Tested with a default Appveyor config
    that uses Visual Studio 2013. Judging from Brian's email and random
    internet sources, Visual Studio 2015 does have timespec and
    timespec_get, hence the _MSC_VER-based guard which I have not tested.

    Cc: Jose Fonseca <jfonseca@vmware.com>
    Cc: Brian Paul <brianp@vmware.com>
    Reviewed-by: Marek Olšák <marek.olsak@amd.com> (v1)

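    For reference, a minimal standalone example of the final C11 interfaces
    the commit refers to (plain ISO C11, not Mesa's c11/threads.h wrappers):
    thrd_sleep takes a second "remaining" argument, and cnd_timedwait takes
    an absolute struct timespec on the TIME_UTC clock.

        #include <threads.h>
        #include <time.h>
        #include <stdbool.h>

        static mtx_t lock;
        static cnd_t cond;
        static bool ready;

        /* Wait for "ready" for at most one second, using an absolute deadline. */
        static int wait_with_timeout(void)
        {
           struct timespec deadline;
           timespec_get(&deadline, TIME_UTC);   /* absolute time, TIME_UTC based */
           deadline.tv_sec += 1;

           mtx_lock(&lock);
           int ret = thrd_success;
           while (!ready && ret == thrd_success)
              ret = cnd_timedwait(&cond, &lock, &deadline);
           mtx_unlock(&lock);
           return ret;                          /* thrd_timedout if we gave up */
        }

        int main(void)
        {
           mtx_init(&lock, mtx_plain);
           cnd_init(&cond);

           /* thrd_sleep takes (duration, remaining); remaining may be NULL. */
           thrd_sleep(&(struct timespec){ .tv_nsec = 100 * 1000 * 1000 }, NULL);

           return wait_with_timeout() == thrd_timedout ? 0 : 1;
        }
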
* gallium: remove unused and deprecated u_time.h (Nicolai Hähnle, 2017-11-09, 8 files, -157/+1)
    Cc: Jose Fonseca <jfonseca@vmware.com>
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* util: move os_time.[ch] to src/util (Nicolai Hähnle, 2017-11-09, 57 files, -78/+76)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: always use async compiles when creating shader/compute states (Nicolai Hähnle, 2017-11-09, 2 files, -34/+50)
    With Gallium threaded contexts, creating shader/compute states is
    effectively a screen operation, so we should not use context state. In
    particular, this allows us to avoid using the context's LLVM
    TargetMachine.

    This isn't an issue yet because u_threaded_context filters out
    non-async debug callbacks, and we disable threaded contexts for debug
    contexts. However, we may want to change that in the future.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: fix potential use-after-free of debug callbacks (Nicolai Hähnle, 2017-11-09, 1 file, -0/+4)
    Found by inspection.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: move pipe debug callback to si_context (Nicolai Hähnle, 2017-11-09, 6 files, -19/+19)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* u_queue: add util_queue_finish for waiting for previously added jobs (Nicolai Hähnle, 2017-11-09, 2 files, -0/+37)
    Schedule one job for every thread, and wait on a barrier inside the job
    execution function.

    v2: avoid alloca (fixes Windows build error)

    Reviewed-by: Marek Olšák <marek.olsak@amd.com> (v1)

* util: move pipe_barrier into src/util and rename to util_barrier (Nicolai Hähnle, 2017-11-09, 5 files, -88/+87)
    The #if guard is probably not 100% equivalent to the previous PIPE_OS
    check, but if anything it should be an over-approximation (are there
    pthread implementations without barriers?), so people will get either
    a good implementation or compile errors that are easy to fix.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

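    For context, the POSIX primitive that a pthread-based util_barrier
    would typically map onto (a standalone illustration, not the Mesa
    wrapper itself):

        #include <pthread.h>
        #include <stdio.h>

        #define NUM_THREADS 4

        static pthread_barrier_t barrier;

        static void *worker(void *arg)
        {
           long id = (long)arg;
           printf("thread %ld: before barrier\n", id);
           /* Blocks until all NUM_THREADS workers have arrived. */
           pthread_barrier_wait(&barrier);
           printf("thread %ld: after barrier\n", id);
           return NULL;
        }

        int main(void)
        {
           pthread_t threads[NUM_THREADS];

           pthread_barrier_init(&barrier, NULL, NUM_THREADS);
           for (long i = 0; i < NUM_THREADS; i++)
              pthread_create(&threads[i], NULL, worker, (void *)i);
           for (long i = 0; i < NUM_THREADS; i++)
              pthread_join(threads[i], NULL);
           pthread_barrier_destroy(&barrier);
           return 0;
        }
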
* gallium: add async debug message forwarding helper (Nicolai Hähnle, 2017-11-09, 4 files, -0/+192)
    v2: use util_vasprintf for Windows portability

    Reviewed-by: Marek Olšák <marek.olsak@amd.com> (v1)

* st/mesa: guard sampler views changes with a mutex (Nicolai Hähnle, 2017-11-09, 5 files, -96/+250)
    Some locking is unfortunately required, because well-formed GL programs
    can have multiple threads racing to access the same texture, e.g.: two
    threads/contexts rendering from the same texture, or one thread
    destroying a context while the other is rendering from or modifying a
    texture.

    Since even the simple mutex caused noticeable slowdowns in the piglit
    drawoverhead micro-benchmark, this patch uses a slightly more involved
    approach to keep locks out of the fast path:

    - the initial lookup of sampler views happens without taking a lock
    - a per-texture lock is only taken when we have to modify the sampler
      view(s)
    - since each thread mostly operates only on the entry corresponding to
      its context, the main issue is re-allocation of the sampler view
      array when it needs to be grown, but the old copy is not freed

    Old copies of the sampler views array are kept around in a linked list
    until the entire texture object is deleted. The total memory wasted in
    this way is roughly equal to the size of the current sampler views
    array.

    Fixes non-deterministic memory corruption in some
    dEQP-EGL.functional.sharing.gles2.multithread.* tests, e.g.
    dEQP-EGL.functional.sharing.gles2.multithread.simple.images.texture_source.create_texture_render

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

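    A schematic of the locking pattern described in the entry above
    (hypothetical names and types, not the actual st_texture code; a
    production version would also publish the new array pointer with an
    atomic release store):

        #include <pthread.h>
        #include <stdlib.h>
        #include <string.h>

        struct view { int dummy; };           /* stand-in for a sampler view */

        struct view_array {
           struct view_array *next;           /* older arrays, kept until teardown */
           unsigned count;
           struct view *views[];
        };

        struct texture_object {
           pthread_mutex_t validate_mutex;    /* taken only to modify the array */
           struct view_array *views;          /* current array, read without a lock */
           struct view_array *old_arrays;     /* superseded copies, freed at the end */
        };

        /* Fast path: readers look up an entry without taking any lock. They may
         * see either the old or the new array pointer; both stay valid. */
        static struct view *lookup_view(struct texture_object *tex, unsigned idx)
        {
           struct view_array *arr = tex->views;
           return (arr && idx < arr->count) ? arr->views[idx] : NULL;
        }

        /* Slow path: grow the array under the per-texture lock. The old copy is
         * not freed, so a concurrent lock-free reader never touches freed memory. */
        static void grow_views(struct texture_object *tex, unsigned new_count)
        {
           pthread_mutex_lock(&tex->validate_mutex);
           struct view_array *old = tex->views;
           if (!old || old->count < new_count) {
              struct view_array *arr =
                 calloc(1, sizeof(*arr) + new_count * sizeof(arr->views[0]));
              if (arr) {
                 arr->count = new_count;
                 if (old)
                    memcpy(arr->views, old->views,
                           old->count * sizeof(arr->views[0]));
                 tex->views = arr;             /* publish the new array */
                 if (old) {
                    old->next = tex->old_arrays;
                    tex->old_arrays = old;     /* park the old copy, don't free it */
                 }
              }
           }
           pthread_mutex_unlock(&tex->validate_mutex);
        }

        int main(void)
        {
           struct texture_object tex = {
              .validate_mutex = PTHREAD_MUTEX_INITIALIZER,
           };
           grow_views(&tex, 4);
           grow_views(&tex, 8);                /* the 4-entry array is kept, not freed */
           return lookup_view(&tex, 2) ? 1 : 0;
        }
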
* st/mesa: re-arrange st_finalize_texture (Nicolai Hähnle, 2017-11-09, 2 files, -8/+11)
    Move the early-out for surface-based textures earlier. This narrows
    the scope of the locking added in a follow-up commit.

    Fix one remaining case of initializing a surface-based texture without
    properly finalizing it.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* gallium: clarify the constraints on sampler_view_destroy (Nicolai Hähnle, 2017-11-09, 3 files, -7/+20)
    r600 expects the context that created the sampler view to still be
    alive (there is a per-context list of sampler views).

    svga currently bails when the context of destruction is not the same
    as creation.

    The GL state tracker, which is the only one that runs into the
    multi-context subtleties (due to share groups), already guarantees
    that sampler views are destroyed before their context of creation is
    destroyed.

    Most drivers are context-agnostic, so the warning message in
    pipe_sampler_view_release doesn't really make sense.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: reduce the scope of sel->mutex in si_shader_select_with_key (Nicolai Hähnle, 2017-11-09, 1 file, -4/+4)
    We only need the lock to guard changes in the variant linked list. The
    actual compilation can happen outside the lock, since we use the ready
    fence as a guard.

    v2: fix double-unlock

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* radeonsi: use ready fences on all shaders, not just optimized ones (Nicolai Hähnle, 2017-11-09, 3 files, -26/+67)
    There's a race condition between si_shader_select_with_key and
    si_bind_XX_shader:

        Thread 1                          Thread 2
        --------                          --------
        si_shader_select_with_key
          begin compiling the first
          variant (guarded by sel->mutex)
                                          si_bind_XX_shader
                                            select first_variant by default
                                            as state->current
                                          si_shader_select_with_key
                                            match state->current and early-out

    Since thread 2 never takes sel->mutex, it may go on rendering without
    a PM4 for that shader, for example.

    The solution taken by this patch is to broaden the scope of
    shader->optimized_ready to a fence shader->ready that applies to all
    shaders. This does not hurt the fast path (if anything it makes it
    faster, because we don't explicitly check is_optimized).

    It will also allow reducing the scope of sel->mutex locks, but this is
    deferred to a later commit for better bisectability.

    Fixes dEQP-EGL.functional.sharing.gles2.multithread.simple.buffers.bufferdata_render

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* u_queue: add a futex-based implementation of fences (Nicolai Hähnle, 2017-11-09, 2 files, -0/+94)
    Fences are now 4 bytes instead of 96 bytes (on my 64-bit system).

    Signaling a fence is a single atomic operation in the fast case plus a
    syscall in the slow case.

    Testing if a fence is signaled is the same as before (a simple
    comparison), but waiting on a fence is now no more expensive than just
    testing it in the fast (already signaled) case.

    v2:
    - style fixes
    - use p_atomic_xxx macros with the right barriers

    Acked-by: Marek Olšák <marek.olsak@amd.com>

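    A minimal sketch of the futex-fence pattern described above (Linux-only,
    hypothetical names and state encoding; the real u_queue code is built on
    Mesa's p_atomic_* helpers and futex wrappers). States: 0 = signaled,
    1 = unsignaled, 2 = unsignaled with at least one waiter.

        #define _GNU_SOURCE
        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <pthread.h>
        #include <linux/futex.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        typedef _Atomic uint32_t fence_t;        /* the whole fence is 4 bytes */

        static long sys_futex(void *addr, int op, uint32_t val)
        {
           return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
        }

        static void fence_init(fence_t *f) { atomic_store(f, 1); }

        static int fence_signaled(fence_t *f) { return atomic_load(f) == 0; }

        static void fence_signal(fence_t *f)
        {
           /* Fast path: one atomic exchange; only wake if a waiter registered. */
           if (atomic_exchange(f, 0) == 2)
              sys_futex(f, FUTEX_WAKE, INT32_MAX);
        }

        static void fence_wait(fence_t *f)
        {
           uint32_t v = atomic_load(f);
           while (v != 0) {
              if (v == 1) {
                 /* Announce a waiter: move 1 -> 2, but never overwrite a 0. */
                 if (!atomic_compare_exchange_weak(f, &v, 2))
                    continue;                    /* value changed, v was reloaded */
                 v = 2;
              }
              sys_futex(f, FUTEX_WAIT, 2);       /* sleep while the value is 2 */
              v = atomic_load(f);
           }
        }

        static fence_t fence;

        static void *producer(void *arg)
        {
           (void)arg;
           fence_signal(&fence);
           return NULL;
        }

        int main(void)
        {
           pthread_t t;
           fence_init(&fence);
           pthread_create(&t, NULL, producer, NULL);
           fence_wait(&fence);
           pthread_join(t, NULL);
           printf("signaled: %d\n", fence_signaled(&fence));
           return 0;
        }
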
* u_queue: add util_queue_fence_reset (Nicolai Hähnle, 2017-11-09, 2 files, -3/+14)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* u_queue: export util_queue_fence_signal (Nicolai Hähnle, 2017-11-09, 2 files, -1/+2)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* u_queue: group fence functions together (Nicolai Hähnle, 2017-11-09, 1 file, -9/+10)
    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

* util/u_atomic: add p_atomic_xchg (Nicolai Hähnle, 2017-11-09, 1 file, -1/+31)
    The closest to it among the old-style gcc builtins is
    __sync_lock_test_and_set; however, that is only guaranteed to work
    with values 0 and 1 and only provides an acquire barrier. I also don't
    know about other OSes, so we provide a simple & stupid emulation via
    p_atomic_cmpxchg.

    Reviewed-by: Marek Olšák <marek.olsak@amd.com>

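    The fallback described above boils down to a compare-and-swap loop; a
    standalone illustration using the old-style GCC __sync builtins (the
    function name here is made up, not the Mesa macro):

        #include <stdint.h>
        #include <stdio.h>

        /* Emulate an atomic exchange with a compare-and-swap loop: keep
         * re-reading the current value and trying to swap in the new one
         * until no other thread has modified it in between. */
        static uint32_t my_atomic_xchg(volatile uint32_t *ptr, uint32_t new_val)
        {
           uint32_t old;
           do {
              old = *ptr;
           } while (__sync_val_compare_and_swap(ptr, old, new_val) != old);
           return old;
        }

        int main(void)
        {
           volatile uint32_t v = 7;
           uint32_t prev = my_atomic_xchg(&v, 42);
           printf("prev=%u now=%u\n", prev, (unsigned)v);   /* prev=7 now=42 */
           return 0;
        }
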
* util: move futex helpers into futex.h (Nicolai Hähnle, 2017-11-09, 4 files, -21/+57)
    v2: style fixes

    Reviewed-by: Marek Olšák <marek.olsak@amd.com> (v1)

* glsl: Make #pragma STDGL invariant(all) only modify outputs. (Kenneth Graunke, 2017-11-08, 1 file, -24/+2)
    According to the GLSL ES 3.20, GLSL 4.50, and GLSL 1.20 specs:

        "To force all output variables to be invariant, use the pragma
         #pragma STDGL invariant(all)
         before all declarations in a shader."

    Notably, this is only supposed to affect output variables. Furthermore,
    "Only variables output from a shader can be candidates for invariance."

    It looks like this has been wrong since we first supported the pragma
    in 2011 (commit 86b4398cd158024f6be9fa830554a11c2a7ebe0c).

    Fixes dEQP-GLES2.functional.shaders.preprocessor.pragmas.pragma_fragment.

    v2: Now that all cases are identical (other than compute shaders, which
        have no output variables anyway), we can drop the switch statement
        entirely. We also don't need the current_function == NULL check;
        this was a hold over from when we had a single var_mode_out for
        both function parameters and shader varyings, in the bad old days.

    Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
    Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>

* i965: expose SRGB visuals and turn on EGL_KHR_gl_colorspace (Tapani Pälli, 2017-11-09, 3 files, -7/+26)
    Patch exposes sRGB visuals and adds DRI integer query support for
    __DRI2_RENDERER_HAS_FRAMEBUFFER_SRGB. Further changes make sure that
    we mark if the app explicitly wanted sRGB, and for these framebuffers
    we don't turn sRGB off in intel_gles3_srgb_workaround. This way we
    keep compatibility for existing applications relying on default sRGB
    and only add more visual support.

    With this change, the following dEQP tests start to pass:

        dEQP-EGL.functional.wide_color.window_8888_colorspace_srgb
        dEQP-EGL.functional.wide_color.pbuffer_8888_colorspace_srgb

    v2: some code cleanup (Emil Velikov)
        update num_formats correctly (reported by deveee@gmail.com)
    v3: cleanup, remove redundant is_srgb
        rename explicit_srgb as 'need_srgb' to follow style better

    Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
    Reviewed-by: Emil Velikov <emil.velikov@collabora.com> (v2)
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102264
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102354
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102503