We should avoid having potentially dangling pointers to
pipe_resources in general.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
A pipe_transfer is a context object. It is fine for the
constructor/destructor to have access to the context.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
A pipe_transfer is a context object. It is fine for
virgl_transfer_queue to have access to the context.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Add header guard and forward declare structs. Move virgl_resource.h
inclusion to the C file.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
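
As a generic illustration of the pattern only (the guard and struct names
below are made up, not the ones this commit touches): the header gets a
guard and forward declarations, and only the .c file pulls in
virgl_resource.h.

    #ifndef VIRGL_EXAMPLE_H
    #define VIRGL_EXAMPLE_H

    /* Forward declarations are enough for pointer parameters in
     * prototypes; the full definitions live in virgl_resource.h,
     * which only the .c file needs to include. */
    struct virgl_context;
    struct virgl_resource;

    void virgl_example_do_something(struct virgl_context *vctx,
                                    struct virgl_resource *res);

    #endif /* VIRGL_EXAMPLE_H */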
Since we don't normally flush before performing copy transfers, it's
possible in some scenarios to use too much memory for staging resources
and start failing. This can happen either because we exhaust the total
available memory (including system memory virtio-gpu swaps out to), or,
more commonly, because the total size of resources in a command buffer
doesn't fit in virtio-gpu video memory.
To reduce the chances of this happening, force a flush before a copy
transfer if the total size of queued staging resources exceeds a certain
limit. Since after a flush any queued staging resources will be
eventually released, this ensures both that each command buffer doesn't
require too much video memory, and that we don't end up consuming too
much memory for staging resources in total.
Fixes kernel errors reported when running texture_upload tests in glbench.
Signed-off-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
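
A minimal sketch of the heuristic; the limit value and the
queued_staging_res_size field are illustrative names, not necessarily
the driver's actual ones.

    /* Illustrative only: flush when the staging memory queued in the
     * current cmdbuf would grow past a fixed budget. */
    #define VIRGL_QUEUED_STAGING_RES_SIZE_LIMIT (128 * 1024 * 1024)

    static void
    virgl_staging_budget_check(struct virgl_context *vctx, size_t size)
    {
       if (vctx->queued_staging_res_size + size >
           VIRGL_QUEUED_STAGING_RES_SIZE_LIMIT) {
          /* after the flush the queued staging resources can be released */
          vctx->base.flush(&vctx->base, NULL, 0);
          vctx->queued_staging_res_size = 0;
       }
       vctx->queued_staging_res_size += size;
    }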
Now that we have copy transfers in place, we can remove the incorrect
resource wait condition. Copy transfers and other optimizations minimize
the performance impact of this removal, while providing the correct
behavior.
Signed-off-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
Extend copy transfers to also be used for busy textures.
Performance results:
Unigine Valley, qemu: before 22.7 FPS, after 23.1 FPS
Signed-off-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
We typically need to wait for a buffer to become ready before mapping,
so that we don't write new contents while the host is still using the
old contents. However, if we are allowed to discard the contents of the
mapped buffer range, then we can avoid waiting by using a staging buffer
range which we guarantee to never be busy, copying from the staging
buffer range to the target buffer in the host.
This commit implements this optimization by utilizing a dedicated
u_upload_mgr for the staging buffer.
Performance results:
Twilight Struggle (Steam/Proton), qemu: before 7 FPS, after 25 FPS
glmark2 ubo, qemu: before 38 FPS, after 331 FPS
Signed-off-by: Alexandros Frantzis <[email protected]>
Suggested-by: Gurchetan Singh <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
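
A rough sketch of the idea using the standard u_upload_mgr helpers;
virgl_encode_copy_transfer() and the transfer_uploader field are
illustrative names for the pieces that emit the host-side copy.

    /* transfer_map path (sketch): hand out never-busy staging memory */
    unsigned offset;
    struct pipe_resource *staging = NULL;
    void *map = NULL;

    u_upload_alloc(vctx->transfer_uploader, 0, box->width,
                   VIRGL_MAP_BUFFER_ALIGNMENT, &offset, &staging, &map);
    /* the caller now writes the new contents into 'map' */

    /* transfer_unmap path (sketch): ask the host to copy the written
     * staging range into the real, possibly still busy, buffer */
    virgl_encode_copy_transfer(vctx, res, level, box, staging, offset);
    pipe_resource_reference(&staging, NULL);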
Support transfers that use a different resource as the source of data to
transfer. This will be used in upcoming commits to send data to host
buffers through a transfer upload buffer, in order to avoid waiting
when the buffer resource is busy.
Note that we don't support queueing copy transfers in the transfer
queue. Copy transfers should be emitted directly in the command queue,
which allows us to avoid flushes before them and leads to better
performance.
Signed-off-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
Introduce definitions for the copy_transfer3d protocol command and virgl
capability. This command transfers data to the host by copying through
another resource, and will be used in upcoming commits to avoid waiting
when transferring data for busy resources.
Signed-off-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
Support a new virgl bind type for staging buffers which don't require
dedicated host-side storage. These will be used to implement copy
transfers.
Signed-off-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
If we are not allowed to block, and we know that we will have to wait,
either because the resource is busy, or because it will become busy due
to a readback, return early to avoid performing an incomplete
transfer_get. Such an incomplete transfer_get may finish at any time,
during which another unsynchronized map could write to the resource
contents, leaving the contents in an undefined state.
Signed-off-by: Alexandros Frantzis <[email protected]>
Suggested-by: Chia-I Wu <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
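
A sketch of the early-out described above; the variable names
(readback, resource_is_busy) are illustrative.

    /* If we may not block but a readback or wait would be required,
     * give up instead of leaving a transfer_get in flight that could
     * complete while an unsynchronized map writes the same storage. */
    if ((usage & PIPE_TRANSFER_DONT_BLOCK) &&
        (readback || resource_is_busy))
       return NULL;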
Add more info about why VIRGL_MAP_BUFFER_ALIGNMENT has the value it does.
Signed-off-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
We will need the full info. This also speeds up
virgl_attach_res_atomic_buffers and fixes resource leaks when the
context is destroyed.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
It replaces virgl_context::images.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
It replaces virgl_context::ssbos.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
It replaces virgl_context::ubos.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
virgl_shader_binding_state will be used to manage all per-stage
shader bindings. For now, it manages only sampler views.
This replaces virgl_textures_info and fixes some issues:
- start_slot is now honored
- views outside of [start_slot, start_slot+count) are unmodified
- views are released when the context is destroyed
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
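
A sketch of how honoring start_slot looks in the sampler-view hook;
the shader_bindings field and views array are illustrative names for
the new per-stage state.

    static void
    virgl_set_sampler_views(struct pipe_context *ctx,
                            enum pipe_shader_type shader,
                            unsigned start_slot, unsigned count,
                            struct pipe_sampler_view **views)
    {
       struct virgl_context *vctx = virgl_context(ctx);
       struct virgl_shader_binding_state *binding =
          &vctx->shader_bindings[shader];

       for (unsigned i = 0; i < count; i++) {
          unsigned slot = start_slot + i;
          struct pipe_sampler_view *view = views ? views[i] : NULL;
          /* only slots in [start_slot, start_slot+count) are touched;
           * releasing happens via the same reference helper */
          pipe_sampler_view_reference(&binding->views[slot], view);
       }
    }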
On hosts with the corresponding support, this makes these piglit tests
pass:
arb_clip_control-*
v2: sync flag with host
Signed-off-by: Gert Wollny <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]> (v1)
Reviewed-by: Emil Velikov <[email protected]>
When PIPE_TRANSFER_READ requires a resolve, we blit from the host
storage to a temporary storage, and do a format conversion from the
temporary storage to the guest storage. This change makes sure we
convert to the correct level of the guest storage.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
util_format_translate_3d expects the source box to be aligned to the
block size. When resolving, make sure the size of the staging
buffer is aligned to the block size.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
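
Roughly, the alignment amounts to the following; util_format_get_block*
and align() are the usual Mesa helpers (the u_format.h header path has
moved around between Mesa versions).

    unsigned bw = util_format_get_blockwidth(format);
    unsigned bh = util_format_get_blockheight(format);

    /* util_format_translate_3d() wants block-aligned source boxes,
     * so size the staging buffer to whole blocks */
    unsigned staging_width  = align(box->width,  bw);
    unsigned staging_height = align(box->height, bh);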
When readback is true, and there are pending writes in the transfer
queue, we should flush to avoid reading back outdated data. This
fixes piglit arb_copy_buffer/dlist and a subtest of
arb_copy_buffer/data-sync.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Imagine this at the beginning of a cmdbuf:
    resource_copy_region(ctx, dst, ..., src, ...);
    transfer_map(ctx, src, 0, PIPE_TRANSFER_WRITE, ...);
We need to flush in transfer_map so that the transfer is not reordered
before the resource copy. The check for "vctx->num_draws == 0 &&
vctx->num_compute == 0" is not enough, so remove the optimization
entirely.
Because of the more precise resource tracking in the previous
commit, I hope the performance impact is minimized. We will have to
go with perfect resource tracking, or attempt a more limited
optimization, if there are specific cases we really need to optimize
for.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
This gives us more precise resource tracking. It can be beneficial
because glFlush is often followed by state changes. We don't want
to reemit resources that are going to be unbound.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
TGSI's FBFETCH instruction currently only supports reading from a single
render target, but NIR intrinsics can support multiple render targets.
radeonsi can only support fetching from RT 0, but other drivers may be
able to support fetching from any render target.
To express this, this patch renames PIPE_CAP_TGSI_FS_FBFETCH to simply
PIPE_CAP_FBFETCH, and converts it from a boolean "is FBFETCH supported?"
to an integer number of render targets which can be fetched.
Reviewed-by: Marek Olšák <[email protected]>
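
For example, in a driver's get_param switch the cap now reads roughly
like this (radeonsi reporting a single fetchable render target):

    case PIPE_CAP_FBFETCH:
       return 1;   /* number of render targets FBFETCH can read from;
                    * 0 means the feature is unsupported */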
virgl_transfer_queue_is_queued was used to avoid flushing. That
fails when the resource is being accessed by previous cmdbufs but
not the current one.
The new approach of tracking the valid buffer range does not apply
to textures, however. But hopefully that is fine, because the goal is
to avoid waiting in this scenario:
    glBufferSubData(..., offset, size, data1);
    glDrawArrays(...);
    // append new vertex data
    glBufferSubData(..., offset+size, size, data2);
    glDrawArrays(...);
If glTex(Sub)Image* turns out to be an issue, we will need to track
valid level/layer ranges as well.
v2: update virgl_buffer_transfer_extend as well
v3: do not remove virgl_transfer_queue_is_queued
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]> (v1)
Reviewed-by: Gurchetan Singh <[email protected]> (v2)
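
A sketch of buffer-range tracking with gallium's util_range helpers
(the exact helper signatures differ slightly across Mesa versions, and
treating an untouched range as unsynchronized is just one way to
express "no need to wait"):

    /* when a write transfer or upload completes, remember what is valid */
    util_range_add(&res->valid_buffer_range, box->x, box->x + box->width);

    /* when mapping for write: if the target range was never written,
     * nothing on the host can be reading it, so skip the flush/wait */
    if (!util_ranges_intersect(&res->valid_buffer_range,
                               box->x, box->x + box->width))
       usage |= PIPE_TRANSFER_UNSYNCHRONIZED;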
st/mesa does not need it and virglrenderer does not really support
it. Remove the support so that we are sure pipe_surface never
refers to a buffer resource.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
When shader images/buffers are set, do not rely on
virgl_encoder_write_res and virgl_resource_dirty to do the implicit
NULL check.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
Handle PIPE_TRANSFER_DONT_BLOCK and PIPE_TRANSFER_MAP_DIRECTLY.
Make virgl_resource_transfer_prepare return an enum instead of a
bool for extensibility (e.g., instruct the callers to map
differently).
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
virgl_resource_transfer_prepare should be called before mapping to
prepare the resource. It does flush, readback, and wait as needed.
virgl_res_needs_flush and virgl_res_needs_readback become internal
helpers to the new function.
There should be no externally visible change.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
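
The overall shape of the helper, per the description above. The helper
names are the ones mentioned in these commits, but their signatures and
the winsys call here are illustrative, not the exact code.

    static bool
    virgl_resource_transfer_prepare(struct virgl_context *vctx,
                                    struct virgl_transfer *xfer)
    {
       struct virgl_winsys *vws = virgl_screen(vctx->base.screen)->vws;
       struct virgl_resource *res = virgl_resource(xfer->base.resource);
       bool wait = !(xfer->base.usage & PIPE_TRANSFER_UNSYNCHRONIZED);

       /* submit queued commands that still reference the resource */
       if (virgl_res_needs_flush(vctx, xfer))
          vctx->base.flush(&vctx->base, NULL, 0);

       /* bring host-side contents into guest storage before a read */
       if (virgl_res_needs_readback(vctx, res, xfer->base.usage,
                                    xfer->base.level)) {
          virgl_encode_transfer_get(vctx, xfer);   /* illustrative name */
          wait = true;
       }

       if (wait)
          vws->resource_wait(vws, res->hw_res);

       return true;   /* the caller may map the resource now */
    }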
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Add comments and follow the coding style of virgl_res_needs_flush.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Add comments and some minor cleanups.
v2: document the function
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]> (v1)
Reviewed-by: Gurchetan Singh <[email protected]>
Signed-off-by: Chia-I Wu <[email protected]>
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
virgl_res_needs_flush should suffice.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
Both applications and the driver itself (see
virgl_buffer_transfer_flush_region) might flush regions that are
unmodified. We have to read back for those flushes.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
It currently has no user and is probably incorrect (resource_wait is
required in some more cases). Remove it so that we can focus on
transfers first.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Alexandros Frantzis <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
The _LEVELS cap assumes that the max is always a power of two. For V3D 4.2, we
can support up to 7680 non-power-of-two MSAA textures, which will let X11
support dual 4k displays on newer hardware.
Reviewed-by: Marek Olšák <[email protected]>
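
Concretely, the old cap could only express sizes of the form
1 << (levels - 1), while a _SIZE cap can report 7680 directly;
util_last_bit() recovers a level count where one is still needed.

    unsigned max_2d_levels = 13;                           /* old-style cap */
    unsigned max_2d_size_pot = 1u << (max_2d_levels - 1);  /* 4096 only     */

    unsigned max_2d_size = 7680;                  /* new-style cap: any value */
    unsigned mip_levels = util_last_bit(max_2d_size);  /* 13 levels for 7680 */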
Inline writes skip transfer map/unmap at the cost of an extra copy
of the data during execbuffer. That is generally a win for small
transfers. But the heuristic to use inline writes based on buffer
sizes rather than transfer sizes makes little sense. More
importantly, inline writes miss optimizations that are done for
buffer transfers.
Let's just use transfers.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-By: Gert Wollny <[email protected]>
virglrenderer has been changed such that
- VIRGL_CCMD_GET_QUERY_RESULT is fenced
- query buffers (PIPE_BIND_CUSTOM) are coherent
We can check if a query is ready using DRM_IOCTL_VIRTGPU_WAIT, and also
avoid a synchronized transfer to retrieve the query result. When
running against an older virglrenderer, it falls back to the old
behavior automatically.
TF2 @ 640x480 for pts4.dem went from 17fps to 40fps on my testing
machine.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
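
A sketch of the non-blocking readiness check this enables, using the
virtgpu wait ioctl; the header path and error handling here are
assumptions about the surrounding setup.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include "drm-uapi/virtgpu_drm.h"   /* path as used in the Mesa tree */

    static bool
    query_bo_is_busy(int drm_fd, uint32_t bo_handle)
    {
       struct drm_virtgpu_3d_wait wait = {
          .handle = bo_handle,
          .flags = VIRTGPU_WAIT_NOWAIT,   /* poll instead of blocking */
       };

       /* with NOWAIT the kernel returns -EBUSY while the bo is busy */
       return drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_WAIT, &wait) != 0 &&
              errno == EBUSY;
    }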
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
Small buffer subdatas which are essentially doing a memcpy were getting
bogged down by all the overhead of creating new transfers.
Signed-off-by: David Riley <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
Intersecting transfer queue entries allow for the possibility of
extending an existing transfer instead of creating a new one (and all
the associated mapping/unmapping).
Signed-off-by: David Riley <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
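
The simplest case of that fast path looks roughly like this (field
names such as map_addr are illustrative); the full change also widens
the queued box when the ranges merely overlap.

    struct pipe_box *qbox = &queued->base.box;

    /* the new write lands entirely inside an already-queued write
     * transfer of the same buffer: reuse its mapping instead of
     * creating and mapping a new transfer */
    if (queued->base.resource == res &&
        box->x >= qbox->x &&
        box->x + box->width <= qbox->x + qbox->width) {
       memcpy((uint8_t *)queued->map_addr + (box->x - qbox->x),
              data, box->width);
       return true;   /* no new transfer needed */
    }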
Signed-off-by: David Riley <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
Several empty cmdbufs are submitted by app/xserver per frame, from
glamor_block_handler for example. Let's skip them.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
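
That is, something along these lines in the flush path, assuming cdw is
the count of command dwords recorded in the current buffer:

    /* nothing was recorded since the last flush -- skip the submit */
    if (!vctx->cbuf->cdw)
       return;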
Clear vertex_array_dirty after the state is emitted.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
We really need to wait under certain circumstances, or we can end
up writing to memory at the same time the host is reading it.
Partial revert of d6dc68 ("virgl: use uint16_t mask instead of separate booleans").
Test cases:
- dEQP-GLES31.functional.texture.texture_buffer.render_modify.as_vertex_array.bufferdata
on vtest protocol version 2
- Flickering during Alien Isolation
Fixes: d6dc68 ("virgl: use uint16_t mask instead of separate booleans")
Signed-off-by: Gurchetan Singh <[email protected]>
Reviewed-By: Gert Wollny <[email protected]>
Reviewed-By: Piotr Rak <[email protected]>
This blit can fail, but this is not new; in the old version we
didn't even try to blit in this case. So let's just document the
limitation for now, and leave this for another day.
Signed-off-by: Erik Faye-Lund <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>
When running on OpenGL ES, we can't just map any format for reading,
because of limitations on glReadPixels. So let's fall back to the
blit code-path, and translate the pixels to the correct format in the
end.
This fixes the remaining failures of KHR-GL32.packed_pixels.* apart
from the sRGB tests.
Signed-off-by: Erik Faye-Lund <[email protected]>
Reviewed-by: Gurchetan Singh <[email protected]>