| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
| |
Special-casing the PS_BLEND packet wasn't really gaining us anything. It's
defined to be more or less the contents of blend state entry 0, just without
the indirection. We can just copy and paste the contents. If there are no
valid color targets, then blend state 0 will be 0-initialized anyway, so
it's basically the same as the special case we had before.
|
|
|
|
|
|
|
| |
Previously, we would always emit all of the render targets in the subpass.
This commit changes it so that we compact render targets just like we do
with other resources. Render targets are represented in the surface map by
using a descriptor set index of UINT16_MAX.
|
| |
|
|
|
|
|
|
| |
This reduces the number of allocations a bit and cuts back on memory usage.
It's kind of a micro-optimization, but it also makes the error handling a
bit simpler, so it seems like a win.
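The single-allocation pattern the message alludes to can be sketched with a flexible array member; `struct entry` and `entry_create` are hypothetical names for illustration, not the driver's actual code:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of the single-allocation pattern (hypothetical names): the
 * header and its variable-length payload share one malloc, so there is
 * exactly one allocation that can fail and one pointer to free. */
struct entry {
   size_t data_size;
   char data[];                 /* flexible array member, same block */
};

static struct entry *
entry_create(const void *data, size_t data_size)
{
   struct entry *e = malloc(sizeof(*e) + data_size);
   if (e == NULL)
      return NULL;              /* single failure path, nothing to unwind */
   e->data_size = data_size;
   memcpy(e->data, data, data_size);
   return e;
}
```

With separate allocations for the struct and its data, a failure of the second allocation would need to unwind the first; here the error path is a single NULL check.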
|
|
|
|
|
|
|
| |
We cast the constant 0xfff values to a uintptr_t before applying a bitwise
negate to ensure that they are actually 64-bit when needed. Also, the
count variable doesn't need to be explicitly cast; it will get upcast as
needed by the "|" operation.
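A minimal sketch of the cast in question, assuming a 64-bit target; the helper name and field widths here are illustrative, not the driver's exact code:

```c
#include <stdint.h>

/* Hypothetical pack helper illustrating the cast: the pointer keeps its
 * high bits, the counter lives in the low 12 bits. */
static uintptr_t
pack_ptr_and_count(uintptr_t ptr, uint32_t count)
{
   /* ~(uintptr_t)0xfff is as wide as a pointer; plain ~0xfffu would be
    * a 32-bit mask that zero-extends and clears the high pointer bits.
    * count needs no explicit cast: the "|" upcasts it. */
   return (ptr & ~(uintptr_t)0xfff) | (count & 0xfff);
}
```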
|
|
|
|
|
|
|
|
|
| |
Previously, we asserted every time you tried to pack a pointer and a counter
together. However, this wasn't really correct. In the case where you try
to grab the last element of the list, the "next element" value you get may
be bogus if someone else got there first. This was leading to assertion
failures even though the allocator would safely fall through to the failure
case below.
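The race can be sketched with C11 atomics. This toy free list (offsets into a `next_table`, not anv's real allocator) shows why the possibly-stale "next" read must be tolerated rather than asserted on:

```c
#include <stdatomic.h>
#include <stdint.h>

#define EMPTY UINT32_MAX

/* The head packs a 32-bit offset and a 32-bit serial into one 64-bit
 * word so a single compare-exchange updates both, defeating ABA. */
static inline uint64_t
pack(uint32_t offset, uint32_t serial)
{
   return (uint64_t)serial << 32 | offset;
}

static uint32_t
pop(_Atomic uint64_t *head, const uint32_t *next_table)
{
   uint64_t old = atomic_load(head);
   for (;;) {
      uint32_t offset = (uint32_t)old;
      if (offset == EMPTY)
         return EMPTY;                      /* empty: fail gracefully */
      /* This "next" read may be bogus if another thread popped the
       * element first -- tolerated, not asserted on; the CAS below
       * rejects the update in that case. */
      uint64_t next = pack(next_table[offset], (uint32_t)(old >> 32) + 1);
      if (atomic_compare_exchange_weak(head, &old, next))
         return offset;
      /* CAS failed: old was reloaded, just retry. */
   }
}
```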
|
| |
|
|
|
|
|
|
|
| |
The limit for these textures is 2048 not 1024.
Signed-off-by: Nanley Chery <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
|
|
|
|
|
| |
In 23de78768, when we switched from allocating individual BOs to using the
pool for fences, we accidentally deleted the free.
|
|
|
|
|
|
| |
Some applications pass a dummy for pTessellationState, which results in a
lot of noise. Only warn if we're actually given tessellation shader
stages.
|
|
|
|
|
|
| |
Applications may create a *lot* of fences, perhaps as many as one per
vkQueueSubmit. Really, they're supposed to use ResetFence, but it's easy
enough for us to make them crazy-cheap so we might as well.
|
|
|
|
| |
v2 (Francisco Jerez): Add the state_offset to the surface state offset
|
| |
|
| |
|
|
|
|
|
| |
Move the environment variable check to cache creation time so we block
both lookups and uploads if it's turned off.
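A sketch of the idea only: latch the environment check once at cache-creation time into a flag that both paths consult. The struct, helper names, and the environment-variable name are all illustrative; the message doesn't name them.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct cache {
   bool enabled;    /* decided once, at creation */
};

static void
cache_init(struct cache *cache)
{
   /* Hypothetical variable name for illustration. */
   const char *env = getenv("ANV_ENABLE_PIPELINE_CACHE");
   cache->enabled = env == NULL || strcmp(env, "0") != 0;
}

/* Both entry points honor the flag set at creation, so lookups and
 * uploads can never disagree about whether caching is on. */
static bool
cache_search_allowed(const struct cache *cache) { return cache->enabled; }

static bool
cache_upload_allowed(const struct cache *cache) { return cache->enabled; }
```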
|
|
|
|
|
|
|
| |
Between the initial check that returns NO_KERNEL and compiling the
shader, other threads may have added the shader to the cache. Before
uploading the kernel, check again (under the mutex) that the compiled
shader still isn't present.
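The re-check can be sketched with a toy cache; the flat `kernels` array and the function names are illustrative, not the anv API:

```c
#include <pthread.h>
#include <stdint.h>

#define CACHE_SIZE 16
#define NO_KERNEL UINT32_MAX

struct cache {
   pthread_mutex_t mutex;
   uint32_t kernels[CACHE_SIZE];    /* NO_KERNEL marks an empty slot */
};

/* Called after an unlocked lookup returned NO_KERNEL and we compiled the
 * shader ourselves.  Between those two points another thread may have
 * uploaded the same shader, so re-check under the mutex and keep the
 * existing entry rather than clobbering it. */
static uint32_t
cache_upload_kernel(struct cache *cache, unsigned key, uint32_t compiled)
{
   pthread_mutex_lock(&cache->mutex);
   uint32_t kernel = cache->kernels[key];
   if (kernel == NO_KERNEL)                 /* still absent: ours wins */
      kernel = cache->kernels[key] = compiled;
   pthread_mutex_unlock(&cache->mutex);
   return kernel;                           /* the canonical copy */
}
```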
|
|
|
|
|
|
|
|
|
|
|
|
| |
There is no API for setting the point size and the shader is always
required to set it. Section 24.4:
"If the value written to PointSize is less than or equal to zero, or
if no value was written to PointSize, results are undefined."
As such, we can just always program PointWidthSource to Vertex. This
simplifies anv_pipeline a bit and avoids trouble when we enable the
pipeline cache and don't have writes_point_size in the prog_data.
|
|
|
|
|
| |
This is state that we generate when compiling the shaders, and we need it
for mapping resources from descriptor sets to binding table indices.
|
|
|
|
|
|
|
| |
Using anv_pipeline_cache_upload_kernel() will re-upload the kernel and
prog_data when we merge caches. Since the kernel and prog_data are
already in the program_stream, use anv_pipeline_cache_add_entry()
instead to only add the entry to the hash table.
|
|
|
|
| |
This function will grow the cache to make room and then add the entry.
|
|
|
|
|
|
| |
This function is a helper that unconditionally sets a hash table entry
and expects the cache to have enough room. Calling it 'add_entry'
suggests it will grow the cache as needed.
|
|
|
|
|
| |
No functional change, but the control flow around searching the cache
and falling back to compiling is a bit simpler.
|
|
|
|
|
| |
We have to keep it there for the cache to work, so let's not have an
extra copy in struct anv_pipeline too.
|
|
|
|
| |
A little less ambiguous.
|
|
|
|
|
|
| |
We can serialize as much as the application asks for and just stop once
we run out of memory. This lets applications use a fixed amount of
space for caching and still get some benefit.
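This matches the best-effort semantics of vkGetPipelineCacheData. A hedged sketch of the loop, with an entry layout invented purely for illustration (a size word followed by the data):

```c
#include <stdint.h>
#include <string.h>

/* Copy whole entries while they fit and stop at the first one that
 * doesn't; running out of room is not an error, the caller just gets a
 * smaller but still valid cache blob.  Returns bytes written. */
static size_t
serialize(void *out, size_t out_size,
          const void *const *entries, const uint32_t *sizes, unsigned count)
{
   char *p = out;
   size_t remaining = out_size;
   for (unsigned i = 0; i < count; i++) {
      size_t need = sizeof(uint32_t) + sizes[i];
      if (need > remaining)
         break;                            /* out of room: stop, don't fail */
      memcpy(p, &sizes[i], sizeof(uint32_t));
      memcpy(p + sizeof(uint32_t), entries[i], sizes[i]);
      p += need;
      remaining -= need;
   }
   return out_size - remaining;
}
```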
|
|
|
|
| |
The final version of the pipeline cache header adds a few more fields.
|
|
|
|
|
| |
This was copied from inline code to a helper and wasn't updated to hash
a pointer instead.
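This reads like the classic hash-the-pointer-instead-of-the-data mistake that creeps in when inline code becomes a helper taking a pointer. An illustration, with FNV-1a standing in for whatever hash the driver actually uses:

```c
#include <stdint.h>
#include <string.h>

/* Simple FNV-1a, used here only to make the bug observable. */
static uint32_t
fnv1a(const void *data, size_t size)
{
   const unsigned char *p = data;
   uint32_t h = 2166136261u;
   for (size_t i = 0; i < size; i++)
      h = (h ^ p[i]) * 16777619u;
   return h;
}

static uint32_t
hash_key_wrong(const void *key, size_t key_size)
{
   (void)key_size;
   return fnv1a(&key, sizeof(key));   /* hashes the pointer's bytes! */
}

static uint32_t
hash_key_right(const void *key, size_t key_size)
{
   return fnv1a(key, key_size);       /* hashes what it points at */
}
```

With the wrong version, two identical keys at different addresses hash differently, so cache lookups silently miss.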
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Among other things, this can cause the depth or stencil test to spuriously
fail when the fragment shader uses discard.
|
| |
|
|
|
|
|
|
|
|
| |
This fixes many CTS cases, but will require an update to the kernel
command parser register whitelist. (The CS GPRs and TIMESTAMP
registers need to be whitelisted.)
Signed-off-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
| |
v2: Subtract the baseMipLevel and baseArrayLayer (Jason)
Signed-off-by: Nanley Chery <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
|
| |
|
|
|
|
| |
The first time I tried to fix this, I set the wrong fields.
|
| |
|
| |
|
|
|
|
|
|
|
| |
Match the comment stated above the assignment.
Signed-off-by: Nanley Chery <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
| |
This field is no longer needed.
Signed-off-by: Nanley Chery <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
|
| |
|
| |
|
|
|
|
|
| |
The stencil write mask wasn't getting set at all, so we were using whatever
write mask happened to be left over by the application.
|
|
|
|
|
| |
The hardware docs say that StencilBufferWriteEnable should only be set if
StencilTestEnable is set. It seems reasonable to set them together.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
sample_c is backwards from what GL and Vulkan expect.
See intel_state.c in i965.
v2: Drop unused vk_to_gen_compare_op.
Reviewed-by: Jason Ekstrand <[email protected]>
|
| |
|
| |
|