Add a new macro that can be used to extract the tiling mode from a
tile_mode value. This will be used to determine the number of GOBs
used in block linear mode.
Acked-by: Emil Velikov <[email protected]>
Tested-by: Andre Heider <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
Signed-off-by: Thierry Reding <[email protected]>
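
A minimal sketch of the idea; the macro names and bit positions here are assumptions for illustration, not the actual Tegra tile_mode layout:

```c
/* Sketch only: assume the tiling mode lives in the low nibble of
 * tile_mode and the block-linear GOB-height shift in the next one. */
#define TILE_MODE(tile_mode)        ((tile_mode) & 0xf)
#define TILE_HEIGHT_GOBS(tile_mode) (1u << (((tile_mode) >> 4) & 0xf))
```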
We can get it from si_screen.
Reviewed-by: Timothy Arceri <[email protected]>
Acked-by: Alex Deucher <[email protected]>
This is only required with the latest libdrm.
This fixes 32-bit support with high addresses (and possibly 64-bit
support too, because the high bits need to be masked out).
Acked-by: Christian König <[email protected]>
Acked-by: Alex Deucher <[email protected]>
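
The masking it refers to looks roughly like this sketch; the 48-bit width is an assumption for illustration, not the actual hardware limit:

```c
#include <stdint.h>

/* Assume only the low 48 bits of a GPU virtual address are significant;
 * anything above must be masked off before the address is programmed. */
#define GPU_VA_BITS 48
#define GPU_VA_MASK ((UINT64_C(1) << GPU_VA_BITS) - 1)

static inline uint64_t
canonical_gpu_va(uint64_t va)
{
   return va & GPU_VA_MASK;
}
```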
Cc: 17.3 18.0 <[email protected]>
Reviewed-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
This enables the AMD_performance_monitor extension.
Signed-off-by: Christian Gmeiner <[email protected]>
Reviewed-by: Lucas Stach <[email protected]>
Signed-off-by: Christian Gmeiner <[email protected]>
Reviewed-by: Lucas Stach <[email protected]>
This should fix a regression with Rocket League grass rendering
on the NIR backend.
Reviewed-by: Marek Olšák <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104717
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
The old function treats high values as negative, which LLVM interprets as 0.
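
The failure mode described is ordinary sign conversion; a self-contained C illustration (the names are mine, not the driver's):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
   uint32_t high = 0x80000000u;        /* a "high" value: MSB set */
   int32_t  as_signed = (int32_t)high; /* reinterpreted, goes negative */

   /* a clamp written against the signed type then folds it to 0 */
   int32_t clamped = as_signed < 0 ? 0 : as_signed;
   printf("%d\n", clamped);            /* prints 0 */
   return 0;
}
```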
Imported from RadeonSI.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Dave Airlie <[email protected]>
The comment said it would only represent the lowest 32 regs. This was
not entirely true in practice, since at least on x86 you'll get
masked shifts (unless the compiler could recognize it already and toss
it out). It turns out this actually works out alright (presumably
no one uses it for temp regs) when increasing max sampler views, so
make that behavior explicit.
It still feels a bit hacky, but in any case explicit behavior there
is better than undefined behavior.
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
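
For reference, the masked-shift behavior the message leans on, in plain C: an x86 variable 32-bit shift only honours the low five bits of the count, so writing the masking out keeps reg numbers above 31 aliasing into the 32-bit mask instead of invoking undefined behavior:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
   uint32_t mask = 0;
   unsigned reg = 33;           /* beyond the 32 regs the mask can hold */

   /* 1u << 33 would be undefined behavior; the explicit masking gives
    * the same aliasing an x86 shift instruction produced implicitly. */
   mask |= 1u << (reg & 31);
   printf("0x%08x\n", mask);    /* 0x00000002, i.e. the bit for reg 1 */
   return 0;
}
```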
Fixes piglit test:
tests/spec/arb_gpu_shader_fp64/execution/explicit-location-gs-fs-vs.shader_test
Reviewed-by: Dave Airlie <[email protected]>
All the tess shader and tgsi equivalents are here, and it allows
us to use llvm_type_is_64bit() in the following patch without
exposing it externally.
Reviewed-by: Dave Airlie <[email protected]>
The V3D engine provides several perf counters.
Implement ->get_driver_query_[group_]info() so that these counters are
exposed through the GL_AMD_performance_monitor extension.
Signed-off-by: Boris Brezillon <[email protected]>
Signed-off-by: Eric Anholt <[email protected]>
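
A sketch of the Gallium hook being implemented; the counter list is hypothetical and pipe_driver_query_info is abbreviated to the fields shown (the real struct lives in p_defines.h):

```c
/* Hypothetical counters, for illustration only. */
static const char *const v3d_counter_names[] = {
   "cycles", "primitives-drawn",
};

static int
v3d_get_driver_query_info(struct pipe_screen *pscreen, unsigned index,
                          struct pipe_driver_query_info *info)
{
   /* Gallium convention: a NULL info asks for the number of queries. */
   if (!info)
      return ARRAY_SIZE(v3d_counter_names);
   if (index >= ARRAY_SIZE(v3d_counter_names))
      return 0;

   info->name = v3d_counter_names[index];
   info->query_type = PIPE_QUERY_DRIVER_SPECIFIC + index;
   info->group_id = 0;   /* one group holding all perf counters */
   return 1;
}
```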
The r600 code (not the eg one) forgot to copy the ps_color_export_mask
in commit 5b14e06d8b42e2b08ebc52b6c314ef8647d87a1f when updating the
pixel state, leading to misrenderings (probably with MRT).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105262
Tested-by: LoneVVolf <[email protected]>
Tested-by: Pavel Vinogradov <[email protected]>
Some instructions assume src and/or dst is half-precision based on a
type field (i.e. f32/s32/u32 are full precision but others are half
precision). So add some code to sanity-check the src/dst registers to
catch mixups.
Also propagate the half-precision flag for SSA sources. The instruction
consuming an SSA value needs to be of the same type as the one producing
it.
This is probably not complete half-precision support, but a useful first
step. We still need to add support for the nir alu instructions that
convert between half and full precision.
Signed-off-by: Rob Clark <[email protected]>
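
A minimal sketch of the sanity check described; the type enum follows ir3's naming, but the instruction and register fields here are assumptions:

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed mapping from the instruction's type field to precision. */
static bool
type_is_half(unsigned type)
{
   switch (type) {
   case TYPE_F32:
   case TYPE_S32:
   case TYPE_U32:
      return false;          /* full precision */
   default:
      return true;           /* f16/s16/u16/... imply half precision */
   }
}

static void
validate_dst_precision(struct ir3_instruction *instr)
{
   bool dst_half = !!(instr->regs[0]->flags & IR3_REG_HALF);
   /* catch mixups: the register flag must agree with the type field */
   assert(dst_half == type_is_half(instr->cat1.dst_type));
}
```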
It isn't just vertex shaders that need to fix up the reg footprint for
inputs populated before the shader starts.
This problem showed up with compute shaders. If you have (for example)
a localregid sysval but only the .x component is used, the hw still
writes the .yz components, which could overflow into other threads,
causing corruption. Showed up in the cl cts 'basic/test_basic intmath_int'.
But in theory the same problem could crop up elsewhere.
Signed-off-by: Rob Clark <[email protected]>
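
The shape of the fix, sketched with assumed field names (in ir3 a regid packs register and component as reg*4+comp):

```c
/* Even if the shader only reads .x of a sysval, the hardware writes all
 * of the value's components, so the footprint must cover them all. */
unsigned max_reg = v->info.max_reg;
for (unsigned i = 0; i < v->inputs_count; i++) {
   unsigned last_comp = v->inputs[i].regid + v->inputs[i].hw_ncomp - 1;
   max_reg = MAX2(max_reg, last_comp >> 2);  /* back to a register index */
}
v->info.max_reg = max_reg;
```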
At least for clover.
Signed-off-by: Rob Clark <[email protected]>
Not *entirely* sure why this is a different BIND bit, but it is.
Signed-off-by: Rob Clark <[email protected]>
I think this should also only ever occur at the end of a BB (by
definition), and the BB's successor should be the end block.
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Temporary hack, but since we can't do 64b math yet in ir3, pretend that
we don't support 64b pointers.
Signed-off-by: Rob Clark <[email protected]>
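
One plausible shape for the hack, as a sketch of a get_compute_param()-style handler; PIPE_COMPUTE_CAP_ADDRESS_BITS is the Gallium cap clover queries, everything else here is assumed:

```c
#include <stdint.h>
#include <string.h>

/* Sketch: report 32-bit pointers even on 64-bit-capable hardware,
 * because ir3 cannot do 64-bit math yet. */
static int
report_address_bits(void *ret)
{
   const uint32_t address_bits = 32;
   if (ret)
      memcpy(ret, &address_bits, sizeof(address_bits));
   return sizeof(address_bits);
}
```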
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
glBindBufferRange(..) in vrend_draw_bind_ubo is failing with
more than one uniform block. This is due to improper alignment
of the start of the second block. Let's query the proper
alignment from the driver and pass it back to Mesa.
Let's query for the texture alignment too, even though the Virgl
renderer doesn't call glTexBufferRange yet.
The default values are the widest workable range possible (for example,
GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT on Nvidia is 256).
Fixes:
dEQP-GLES3.functional.ubo.* on Nvidia
Example test:
dEQP-GLES3.functional.ubo.multi_basic_types.single_buffer.shared_vertex
Note: This is based on "virgl: reduce some default capset limits.",
which hasn't landed in Mesa yet but should land relatively soon.
Signed-off-by: Dave Airlie <[email protected]>
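
The host-side query is plain GL; a sketch of how the values could be gathered (the caps struct and its field names are assumptions, and the plumbing back to Mesa's capset is omitted):

```c
#include <epoxy/gl.h>

/* Hypothetical caps struct standing in for the virgl capset fields. */
struct alignment_caps {
   int ubo_offset_alignment;
   int tbo_offset_alignment;
};

static void
fill_alignment_caps(struct alignment_caps *caps)
{
   /* widest workable defaults, e.g. UBO offset alignment is 256 on Nvidia */
   GLint ubo_align = 256, tbo_align = 256;

   glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &ubo_align);
   glGetIntegerv(GL_TEXTURE_BUFFER_OFFSET_ALIGNMENT, &tbo_align);

   caps->ubo_offset_alignment = ubo_align;
   caps->tbo_offset_alignment = tbo_align;
}
```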
Since v2 might take a while to roll out, we should reduce these to
some gathered minimums; v2 can then increase them using host values.
Reviewed-by: Stéphane Marchesin <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
This checks that the kernel API is new enough and asks for the
larger caps size, since the kernel won't mess it up now.
Reviewed-by: Stéphane Marchesin <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Fixes piglit tests:
tests/spec/glsl-1.50/execution/variable-indexing/gs-input-array-vec3-index-rd.shader_test
tests/spec/glsl-1.50/execution/geometry/max-input-components.shader_test
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
This will be used in the following patch.
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Fixes: a25093de7188 ("swr/rast: Implement JIT shader caching to disk")
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-By: George Kyriazis <[email protected]>
Since the geometry shader also consumes prescale constants, the
geometry shader constant buffer needs to be updated when the prescale
factor changes.
Reviewed-by: Brian Paul <[email protected]>
The earlier Mesa commit 3d06c8afb5 ("st/mesa: don't translate blend
state when it's disabled for a colorbuffer") subtly changed the
details of gallium's per-RT blend state.
In particular, when pipe_rt_blend_state[i].blend_enabled is true,
we have to get the src/dst blend terms from pipe_rt_blend_state[i],
not [0] as before.
We now have to scan the blend targets to find the first one that's
enabled (if any). We have to use the index of that target for getting
the src/dst blend terms. And note that we have to set identical blend
terms for all targets.
This fixes the Piglit fbo-drawbuffers2-blend test. VMware bug 2063493.
Reviewed-by: Charmaine Lee <[email protected]>
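
The scan in sketch form, against Gallium's pipe_blend_state (the rt[] fields are the real Gallium ones; the svga-side emission is omitted):

```c
/* Find the first colorbuffer with blending enabled and take the blend
 * terms from it; all targets are then given those identical terms. */
unsigned first = 0;
for (unsigned i = 0; i < PIPE_MAX_COLOR_BUFS; i++) {
   if (blend->rt[i].blend_enable) {
      first = i;
      break;
   }
}

unsigned rgb_src = blend->rt[first].rgb_src_factor;
unsigned rgb_dst = blend->rt[first].rgb_dst_factor;
```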
We were calling SVGA3D_vgpu10_DestroyBlendState() when vgpu10 was not
enabled (bs->id==0 by default), resulting in lots of device errors.
Reviewed-by: Neha Bhende <[email protected]>
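
The guard the fix implies, sketched (the vgpu10 predicate and the id sentinel are assumptions):

```c
/* bs->id is only ever assigned a device id when vgpu10 is enabled;
 * don't ask the device to destroy an object it never created. */
if (svga_have_vgpu10(svga) && bs->id != 0)
   SVGA3D_vgpu10_DestroyBlendState(svga->swc, bs->id);
```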
If svga_update_state() fails, we flush the command buffer and retry.
If it fails again, it likely means we were unable to translate a shader
for some reason (uses too many resources, for example). In that case,
let's just skip the draw call. The alternative, just disabling the
shader stage in question, would certainly lead to bad rendering anyway,
and probably device errors.
Fixes failed assertion running Piglit glsl-1.50/execution/
variable-indexing/gs-output-array-vec4-index-wr.shader_test since it
uses too many GS output registers (though the test still fails).
VMware bug 2063492.
v2: also call pipe_debug_message() so apps or apitrace can be notified
when this issue occurs.
v3: use svga_update_state_retry().
Reviewed-by: Charmaine Lee <[email protected]>
Reviewed-by: Neha Bhende <[email protected]>
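
The resulting control flow, sketched (svga_update_state_retry and pipe_debug_message both appear in the message; the exact arguments are assumptions):

```c
/* Update derived state; the retry helper flushes and tries again on
 * failure. If it still fails (e.g. an untranslatable shader), notify
 * the app via the debug callback and drop the draw. */
if (!svga_update_state_retry(svga, SVGA_STATE_HW_DRAW)) {
   pipe_debug_message(&svga->debug.callback, INFO,
                      "skipping draw call: state update failed");
   return;
}
```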
This will allow minor simplifications elsewhere.
Reviewed-by: Charmaine Lee <[email protected]>
Reviewed-by: Neha Bhende <[email protected]>
Reviewed-by: Charmaine Lee <[email protected]>