author:    Nicolai Hähnle <[email protected]>  2017-11-09 14:00:22 +0100
committer: Nicolai Hähnle <[email protected]>  2017-11-09 14:00:22 +0100
commit:    e6dbc804a87aef34db138c607ba435d701703bc6 (patch)
tree:      72886d32739f09c9fdb41dce690b040994671f0c /src/gallium/winsys/radeon
parent:    1e5c9cf5902e31d3e038c565588527be35434306 (diff)
winsys/amdgpu: handle cs_add_fence_dependency for deferred/unsubmitted fences
The idea is to fix the following interleaving of operations
that can arise from deferred fences:
  Thread 1 / Context 1             Thread 2 / Context 2
  --------------------             --------------------
  f = deferred flush
          <------- application-side synchronization ------->
                                   fence_server_sync(f)
                                   ...
                                   flush()
  flush()
We will now stall in fence_server_sync until the flush of context 1
has completed.
This scenario was unlikely to occur previously, because applications
seem to be doing
  Thread 1 / Context 1             Thread 2 / Context 2
  --------------------             --------------------
  f = glFenceSync()
  glFlush()
          <------- application-side synchronization ------->
                                   glWaitSync(f)
... and indeed they probably *have* to use this ordering to avoid
deadlocks in the GLX model, where all GL operations conceptually
go through a single connection to the X server. However, it's less
clear whether applications have to do this with other WSI (i.e. EGL).
Besides, even this sequence of GL commands can be translated into
the Gallium-level sequence outlined above when Gallium threading
and asynchronous flushes are used. So it makes sense to be more
robust.
As a side effect, we no longer busy-wait on submission_in_progress.
We won't enable asynchronous flushes on radeon, but add a
cs_add_fence_dependency stub anyway to document the potential
issue.
Reviewed-by: Marek Olšák <[email protected]>
Diffstat (limited to 'src/gallium/winsys/radeon')
-rw-r--r--  src/gallium/winsys/radeon/drm/radeon_drm_cs.c  19
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/src/gallium/winsys/radeon/drm/radeon_drm_cs.c b/src/gallium/winsys/radeon/drm/radeon_drm_cs.c
index 7220f3a0240..add88f80aae 100644
--- a/src/gallium/winsys/radeon/drm/radeon_drm_cs.c
+++ b/src/gallium/winsys/radeon/drm/radeon_drm_cs.c
@@ -786,6 +786,24 @@ radeon_drm_cs_get_next_fence(struct radeon_winsys_cs *rcs)
    return fence;
 }
 
+static void
+radeon_drm_cs_add_fence_dependency(struct radeon_winsys_cs *cs,
+                                   struct pipe_fence_handle *fence)
+{
+   /* TODO: Handle the following unlikely multi-threaded scenario:
+    *
+    *  Thread 1 / Context 1          Thread 2 / Context 2
+    *  --------------------          --------------------
+    *  f = cs_get_next_fence()
+    *                                cs_add_fence_dependency(f)
+    *                                cs_flush()
+    *  cs_flush()
+    *
+    * We currently assume that this does not happen because we don't support
+    * asynchronous flushes on radeon.
+    */
+}
+
 void radeon_drm_cs_init_functions(struct radeon_drm_winsys *ws)
 {
    ws->base.ctx_create = radeon_drm_ctx_create;
@@ -801,6 +819,7 @@ void radeon_drm_cs_init_functions(struct radeon_drm_winsys *ws)
    ws->base.cs_get_next_fence = radeon_drm_cs_get_next_fence;
    ws->base.cs_is_buffer_referenced = radeon_bo_is_referenced;
    ws->base.cs_sync_flush = radeon_drm_cs_sync_flush;
+   ws->base.cs_add_fence_dependency = radeon_drm_cs_add_fence_dependency;
    ws->base.fence_wait = radeon_fence_wait;
    ws->base.fence_reference = radeon_fence_reference;
 }