This will let us initialize the constant buffers with loops.
Reviewed-by: Lionel Landwerlin <[email protected]>
Ben and I haven't observed these bits helping anything yet, but they
enable hardware optimizations for particular cases. It's probably best
to enable them ahead of time, before we run into such a case.
Reviewed-by: Plamena Manolova <[email protected]>
Acked-by: Jason Ekstrand <[email protected]>
As part of enabling support for SF programs, we plumb the SF URB size
through to emit_urb_config. For now, it's always zero but, on gen4, it
may be something larger.
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
This makes us walk over the heaps one at a time and add the types for
LLC and !LLC to each heap.
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
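
A minimal sketch of the shape of that walk, with simplified struct and
flag names standing in for the real anv/Vulkan ones (illustrative, not
the driver's code):

    #include <stdint.h>
    #include <stdbool.h>

    #define DEVICE_LOCAL  0x1
    #define HOST_VISIBLE  0x2
    #define HOST_COHERENT 0x4
    #define HOST_CACHED   0x8

    struct memory_type { uint32_t property_flags; uint32_t heap_index; };

    /* On LLC platforms all host-visible memory is cached and coherent,
     * so one type per heap suffices; without LLC, coherent-but-uncached
     * and cached-but-incoherent must be advertised as separate types. */
    static uint32_t
    fill_memory_types(bool has_llc, uint32_t heap_count,
                      struct memory_type *out)
    {
        uint32_t n = 0;
        for (uint32_t heap = 0; heap < heap_count; heap++) {
            if (has_llc) {
                out[n++] = (struct memory_type){
                    DEVICE_LOCAL | HOST_VISIBLE | HOST_COHERENT | HOST_CACHED,
                    heap };
            } else {
                out[n++] = (struct memory_type){
                    DEVICE_LOCAL | HOST_VISIBLE | HOST_COHERENT, heap };
                out[n++] = (struct memory_type){
                    DEVICE_LOCAL | HOST_VISIBLE | HOST_CACHED, heap };
            }
        }
        return n;
    }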
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
The idea behind doing this was to make it easier to set various flags.
However, we have enough custom flag settings floating around the driver
that this is more of a nuisance than a help. This commit has the
following functional changes:
1) The workaround_bo created in anv_CreateDevice loses both flags.
This shouldn't matter because it's very small and entirely internal
to the driver.
2) The bo created in anv_CreateDmaBufImageINTEL loses the
EXEC_OBJECT_ASYNC flag. In retrospect, it never should have gotten
EXEC_OBJECT_ASYNC in the first place.
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
Instead of returning valid types as just a number, we now walk the list
and check the buffer's usage against the usage flags we store in the new
anv_memory_type structure. Currently, valid_buffer_usage == ~0.
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
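
A rough sketch of the check described above; the struct is a simplified
stand-in for the new anv_memory_type:

    #include <stdint.h>

    struct anv_memory_type_sketch {
        uint32_t property_flags;
        uint32_t valid_buffer_usage;   /* VkBufferUsageFlags */
    };

    /* Build the memoryTypeBits mask: a type qualifies only if it allows
     * every usage flag the buffer requests. With valid_buffer_usage == ~0
     * on all types, every type still qualifies, as the commit notes. */
    static uint32_t
    supported_memory_types(const struct anv_memory_type_sketch *types,
                           uint32_t type_count, uint32_t buffer_usage)
    {
        uint32_t bits = 0;
        for (uint32_t i = 0; i < type_count; i++) {
            if ((types[i].valid_buffer_usage & buffer_usage) == buffer_usage)
                bits |= 1u << i;
        }
        return bits;
    }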
Before, we were just comparing the type index to 0. Now we actually
look the type up in the table and check its properties to determine what
kind of mapping we want to do.
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
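
Illustratively, the decision looks something like this; the property
bits are the standard Vulkan values, but the policy shown is an
assumption, not the exact driver logic:

    #include <stdint.h>
    #include <stdbool.h>

    #define VK_MEMORY_PROPERTY_HOST_COHERENT_BIT 0x4u
    #define VK_MEMORY_PROPERTY_HOST_CACHED_BIT   0x8u

    /* Uncached memory wants a write-combining mapping. */
    static bool want_write_combine(uint32_t property_flags)
    {
        return !(property_flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT);
    }

    /* Non-coherent memory needs explicit flush/invalidate around
     * CPU access. */
    static bool needs_explicit_flush(uint32_t property_flags)
    {
        return !(property_flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT);
    }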
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
This doesn't matter right now since it only affects whether or not we
set the kernel bit but, if we ever do anything else based on it, we'll
want it to be correct per-gen.
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
Up until now, we've been memsetting the auxiliary surface to 0 at
BindImageMemory time to ensure that it is properly initialized.
However, this isn't correct because apps are allowed to freely alias
memory between different images and buffers so long as they properly
track whether or not a particular image is valid and, if it isn't,
transition from UNINITIALIZED to something else before using it. We
now implement those transitions so we can drop the hack.
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
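
From the application side, the transition the driver now keys off is an
ordinary barrier out of VK_IMAGE_LAYOUT_UNDEFINED (a minimal example;
`cmd` and `image` are assumed to exist):

    #include <vulkan/vulkan.h>

    void transition_from_undefined(VkCommandBuffer cmd, VkImage image)
    {
        /* UNDEFINED promises the old contents don't matter, so the
         * driver is free to (re)initialize auxiliary surfaces here
         * instead of at BindImageMemory time. */
        VkImageMemoryBarrier barrier = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
            .srcAccessMask = 0,
            .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
            .oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,
            .newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .image = image,
            .subresourceRange = {
                .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
                .levelCount = 1,
                .layerCount = 1,
            },
        };
        vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                             VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                             0, 0, NULL, 0, NULL, 1, &barrier);
    }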
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
This causes dEQP-VK.api.copy_and_blit.resolve_image.partial.* to start
failing due to test bugs. See CL 1031 for a test fix.
Reviewed-by: Nanley Chery <[email protected]>
Cc: "17.1" <[email protected]>
The procedure for decompressing an opaque BC1 Vulkan format is dependent on the
comparison of two colors stored in the first 32 bits of the compressed block.
Here's the specified OpenGL (and Vulkan) behavior for reference:
The RGB color for a texel at location (x,y) in the block is given by:
    RGB0,              if color0 > color1 and code(x,y) == 0
    RGB1,              if color0 > color1 and code(x,y) == 1
    (2*RGB0+RGB1)/3,   if color0 > color1 and code(x,y) == 2
    (RGB0+2*RGB1)/3,   if color0 > color1 and code(x,y) == 3
    RGB0,              if color0 <= color1 and code(x,y) == 0
    RGB1,              if color0 <= color1 and code(x,y) == 1
    (RGB0+RGB1)/2,     if color0 <= color1 and code(x,y) == 2
    BLACK,             if color0 <= color1 and code(x,y) == 3
The sampling operation performed on an opaque DXT1 Intel format essentially
hard-codes the comparison result of the two colors as color0 > color1. This
means that the behavior is incompatible with OpenGL and Vulkan. This is stated
in the SKL PRM, Vol 5: Memory Views:
Opaque Textures (DXT1_RGB)
Texture format DXT1_RGB is identical to DXT1, with the exception that the
One-bit Alpha encoding is removed. Color 0 and Color 1 are not compared, and
the resulting texel color is derived strictly from the Opaque Color Encoding.
The alpha channel defaults to 1.0.
Programming Note
Context: Opaque Textures (DXT1_RGB)
The behavior of this format is not compliant with the OGL spec.
The opaque and non-opaque BC1 Vulkan formats are specified to be decoded in
exactly the same way except the BLACK value must have a transparent alpha
channel in the latter. Use the four-channel BC1 Intel formats with the alpha
set to 1 to provide the behavior required by the spec.
v2 (Kenneth Graunke):
- Provide a more detailed commit message.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100925
Cc: <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Signed-off-by: Nanley Chery <[email protected]>
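
As a reference for the rules quoted above, here is a self-contained
decode of one BC1 texel (purely illustrative; not the hardware path or
the driver's code):

    #include <stdint.h>

    typedef struct { uint8_t r, g, b, a; } rgba8;

    static rgba8 expand_565(uint16_t c)
    {
        return (rgba8){ (uint8_t)(((c >> 11) & 31) * 255 / 31),
                        (uint8_t)(((c >> 5) & 63) * 255 / 63),
                        (uint8_t)((c & 31) * 255 / 31), 255 };
    }

    static rgba8 bc1_texel(const uint8_t block[8], int x, int y, int opaque)
    {
        uint16_t c0 = (uint16_t)(block[0] | block[1] << 8);
        uint16_t c1 = (uint16_t)(block[2] | block[3] << 8);
        /* 2 bits of code per texel, one 4-texel row per byte. */
        int code = (block[4 + y] >> (2 * x)) & 3;
        rgba8 p0 = expand_565(c0), p1 = expand_565(c1);

        if (code == 0)
            return p0;
        if (code == 1)
            return p1;
        if (c0 > c1) {
            /* Four-color mode: 2/3-1/3 interpolations. */
            int w0 = (code == 2) ? 2 : 1, w1 = 3 - w0;
            return (rgba8){ (uint8_t)((w0 * p0.r + w1 * p1.r) / 3),
                            (uint8_t)((w0 * p0.g + w1 * p1.g) / 3),
                            (uint8_t)((w0 * p0.b + w1 * p1.b) / 3), 255 };
        }
        if (code == 2)
            /* Three-color mode: plain average. */
            return (rgba8){ (uint8_t)((p0.r + p1.r) / 2),
                            (uint8_t)((p0.g + p1.g) / 2),
                            (uint8_t)((p0.b + p1.b) / 2), 255 };
        /* BLACK: transparent for BC1 with alpha, opaque for BC1_RGB. */
        return (rgba8){ 0, 0, 0, (uint8_t)(opaque ? 255 : 0) };
    }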
This is mostly for running in our CI system to prevent dEQP from
continuing on to the next test if we get a GPU hang. As it currently
stands, dEQP uses the same VkDevice for almost all tests and if one of
the tests hangs, we set the anv_device::device_lost flag and report
VK_ERROR_DEVICE_LOST for all queue operations from that point forward
without sending anything to the GPU. dEQP will happily continue trying
to run tests and reporting failures until it eventually gets a crash that
forces the test runner to start over. This circumvents the problem by
just aborting the process if we ever get a GPU hang. Since this is not
the recommended behavior most of the time, we hide it behind an
environment variable.
Reviewed-by: Lionel Landwerlin <[email protected]>
We weren't wrapping this before because anv_cmd_buffer_execbuf may
produce a more meaningful error message on its own. However, since we
change the error code to VK_ERROR_DEVICE_LOST, we should print a new message.
Reviewed-by: Lionel Landwerlin <[email protected]>
According to the VK_KHX_multiview spec:
"Multiview causes all drawing and clear commands in the subpass to
behave as if they were broadcast to each view, where each view is
represented by one layer of the framebuffer attachments."
This adds support for multiview clears, which were missing in the
initial implementation.
v2 (Jason):
- split multiview from regular case
- Use for_each_bit() macro
Fixes new CTS multiview tests:
dEQP-VK.multiview.clear_attachments.*
Reviewed-by: Jason Ekstrand <[email protected]>
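
A toy illustration of the broadcast loop; for_each_bit here is a
stand-in for the driver's macro, not its real definition:

    #include <stdint.h>
    #include <stdio.h>
    #include <strings.h>

    /* Iterate the set bits of a mask, lowest first. */
    #define for_each_bit(b, mask) \
        for (uint32_t __m = (mask); \
             __m && ((b) = (uint32_t)ffs((int)__m) - 1, 1); \
             __m &= __m - 1)

    int main(void)
    {
        uint32_t view_mask = 0xB; /* subpass views 0, 1 and 3 */
        uint32_t view;
        /* The clear is "broadcast": one layer clear per enabled view. */
        for_each_bit(view, view_mask)
            printf("clear framebuffer layer %u\n", view);
        return 0;
    }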
Reviewed-by: Samuel Iglesias Gonsálvez <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Samuel Iglesias Gonsálvez <[email protected]>
After a successful drmGetDevices2() call, drmFreeDevices() needs to be
called.
Fixes: b1fb6e8d "anv: do not open random render node(s)"
Signed-off-by: Grazvydas Ignotas <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]> # radv version
drmGetDevices2 takes a count, not a size. This probably hasn't caused
problems in practice and was missed because setups with more than 8 DRM
devices are not very common.
Fixes: b1fb6e8d "anv: do not open random render node(s)"
Signed-off-by: Grazvydas Ignotas <[email protected]>
Reviewed-by: Lionel Landwerlin <[email protected]>
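
Putting this fix and the drmFreeDevices() fix above together, correct
drmGetDevices2() usage looks roughly like this (enumerating render
nodes as an example):

    #include <stdio.h>
    #include <xf86drm.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    int main(void)
    {
        drmDevicePtr devices[8];

        /* The third argument is the number of array entries, not
         * sizeof(devices), and a successful call must be paired with
         * drmFreeDevices(). */
        int count = drmGetDevices2(0, devices, ARRAY_SIZE(devices));
        if (count < 0)
            return 1;

        for (int i = 0; i < count; i++) {
            if (devices[i]->available_nodes & (1 << DRM_NODE_RENDER))
                printf("%s\n", devices[i]->nodes[DRM_NODE_RENDER]);
        }

        drmFreeDevices(devices, count);
        return 0;
    }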
Reviewed-by: Alejandro Piñeiro <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Commit e1af20f18a86f52a9640faf2d4ff8a71b0a4fa9b changed the shader_info
from being embedded into being just a pointer. The idea was that
sharing the shader_info between NIR and GLSL would be easier if it were
a pointer pointing to the same shader_info struct. This, however, has
caused a few problems:
1) There are many things which generate NIR without GLSL. This means
we have to support both NIR shaders that come from GLSL and ones that
don't, and the latter need their shader_info to live somewhere else.
2) The solution to (1) raises all sorts of ownership issues which have
to be resolved with ralloc_parent checks.
3) Ever since 00620782c92100d77c660f9783504c6d80fa1d58, we've been
using nir_gather_info to fill out the final shader_info. Thanks to
cloning and the above ownership issues, the nir_shader::info may not
point back to the gl_shader anymore and so we have to do a copy of
the shader_info from NIR back to GLSL anyway.
All of these issues go away if we just embed the shader_info in the
nir_shader. There's a small downside in having to copy it back after
calling nir_gather_info but, as explained above, we have to do that
anyway.
Acked-by: Timothy Arceri <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
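
Schematically, with details elided, the change is from a shared pointer
back to a by-value member (struct shapes illustrative):

    /* Minimal definition just to make the shapes concrete. */
    struct shader_info { unsigned num_inputs, num_outputs; /* ... */ };

    /* Before: the info lives elsewhere (e.g. in a gl_shader), raising
     * ownership questions when shaders are cloned or the GLSL program
     * is freed. */
    struct nir_shader_before { struct shader_info *info; };

    /* After: the shader owns its info by value; cloning the shader
     * copies the info along with it. */
    struct nir_shader_after { struct shader_info info; };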
CID: 1405919 (Error handling issues)
Signed-off-by: Lionel Landwerlin <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
The application might not provide an output structure.
CID: 1405765 (Null pointer dereferences)
Signed-off-by: Lionel Landwerlin <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
This fixes the build when not building against valgrind headers.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100945
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Now that we can allocate states larger than the block size, we no longer
need a block size of 1MB, which can be rather wasteful.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Previously, the maximum size of a state that could be allocated from a
state pool was a single block. However, this has caused us various issues,
particularly with shaders, which are potentially very large. We've also
hit issues with render passes that have a large number of attachments when
we go to allocate the block of surface state. This effectively removes the
restriction on the maximum size of a single state. (There's still a
limit of 1MB imposed by a fixed-length bucket array.)
For states larger than the block size, we just grab a large block off of
the block pool rather than sub-allocating. When we go to allocate some
chunk of state and the current bucket does not have state, we try to
pull a chunk from some larger bucket and split it up. This should
improve memory usage if a client occasionally allocates a large block of
state.
This commit is inspired by some similar work done by Juan A. Suarez
Romero <[email protected]>.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
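
A toy model of the scheme: power-of-two size classes with a fallback
that splits a chunk from a larger bucket when the requested class is
empty. The real pool sub-allocates GPU memory and is thread-safe; the
names and size classes here are illustrative.

    #include <stdint.h>
    #include <stdlib.h>

    /* Free lists for size classes 64 B << 0 ... 64 B << 14 (1 MB). */
    struct free_node { struct free_node *next; };
    #define NUM_BUCKETS 15
    static struct free_node *buckets[NUM_BUCKETS];

    static unsigned bucket_index(uint32_t size)
    {
        /* Round the size up to its power-of-two class. */
        unsigned log2 = (size <= 64) ? 6 : 32 - __builtin_clz(size - 1);
        return log2 - 6;
    }

    static void *pool_alloc(uint32_t size)
    {
        unsigned b = bucket_index(size);
        uint32_t sz = 64u << b;

        /* Fast path: reuse a free state of this size class. */
        if (buckets[b]) {
            struct free_node *n = buckets[b];
            buckets[b] = n->next;
            return n;
        }

        /* Pull a chunk from some larger bucket and split it up. */
        for (unsigned big = b + 1; big < NUM_BUCKETS; big++) {
            if (!buckets[big])
                continue;
            char *chunk = (char *)buckets[big];
            buckets[big] = buckets[big]->next;
            for (uint32_t off = sz; off < (64u << big); off += sz) {
                struct free_node *n = (struct free_node *)(chunk + off);
                n->next = buckets[b];
                buckets[b] = n;
            }
            return chunk;   /* keep the first piece */
        }

        /* Otherwise take fresh space (the real pool grabs a large block
         * off the block pool, growing it if necessary). */
        return malloc(sz);
    }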
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
This commit just fixes up the English a bit and re-flows the comment.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
The old algorithm worked fine assuming a constant block size. We're
about to break that assumption, so we need an algorithm that's a bit more
robust when the pool suddenly grows by a huge amount relative to the
currently allocated quantity of memory.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
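
The gist, as a sketch under assumed names and numbers: keep doubling
from the current size until the request fits, rather than growing one
block at a time.

    #include <stdint.h>

    static uint32_t pool_grow_target(uint32_t current_size, uint32_t required)
    {
        uint32_t size = current_size ? current_size : 4096;
        /* Doubling keeps amortized growth cheap even when a single
         * allocation is many times larger than the pool. */
        while (size < required)
            size *= 2;
        return size;
    }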
Now that the state stream is allocating off of the state pool, there's
no reason why we need the block pool to be separate.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Now that everything is going through the state pools, the block pool no
longer needs to be able to handle re-use.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Since the state_stream is now pulling from a state_pool, the only thing
allocating directly off the block pool is the state pool, so we can just
move the block_size there. The one exception is binding table
allocation, but we can just reference the state pool there as well.
The only functional change here is that we no longer grow the block pool
immediately upon creation, so no BO gets allocated until our first state
allocation.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
The helper functions aren't gaining us as much as they appear to and
are actually about to get in the way.
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>
We should only use size_t when referring to the sizes of pieces of CPU memory.
Anything on the GPU or just a regular array length should be a type that
has the same size on both 32 and 64-bit architectures. For state
objects, we use a uint32_t because we'll never allocate a piece of
driver-internal GPU state larger than 2GB (more like 16KB).
Reviewed-by: Juan A. Suarez Romero <[email protected]>
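
For example, a state object ends up shaped roughly like this (field
names illustrative, not the exact driver struct):

    #include <stdint.h>

    struct state_sketch {
        uint32_t offset;      /* GPU offset: fixed width on 32- and 64-bit */
        uint32_t alloc_size;  /* driver-internal states stay far below 2 GB */
        void    *map;         /* CPU pointer, so naturally pointer-sized */
    };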
Reviewed-by: Juan A. Suarez Romero <[email protected]>
Reviewed-by: Juan A. Suarez Romero <[email protected]>