authorSamuel Pitoiset <[email protected]>2020-05-04 12:01:41 +0200
committerMarge Bot <[email protected]>2020-05-05 10:04:36 +0000
commitb0a7499d28dd5a7c89a70cea79cb14d943632609 (patch)
treed4e6cffb0a37d3709dfe8952b03ffa9804cc4b15 /src/amd/vulkan
parent64662dd5baeec19a618156b52df7a7e7adba94cf (diff)
radv: enable shaderInt16 unconditionally with LLVM and only GFX8+ with ACO
The Vulkan spec says:

    "shaderInt16 specifies whether 16-bit integers (signed and unsigned) are
    supported in shader code. If this feature is not enabled, 16-bit integer
    types must not be used in shader code."

I think it is safe to enable it because 16-bit integers should be fully
supported with LLVM, and also with ACO on GFX8+. On GFX8 and earlier
generations, the throughput of 16-bit integers is the same as 32-bit, but
that shouldn't change anything. For GFX6-GFX7 support with ACO, we would
have to implement the conversions without SDWA.

Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Daniel Schürmann <[email protected]>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4874>
Diffstat (limited to 'src/amd/vulkan')
-rw-r--r--  src/amd/vulkan/radv_device.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/amd/vulkan/radv_device.c b/src/amd/vulkan/radv_device.c
index 49cbee18eff..9810820b252 100644
--- a/src/amd/vulkan/radv_device.c
+++ b/src/amd/vulkan/radv_device.c
@@ -908,7 +908,7 @@ void radv_GetPhysicalDeviceFeatures(
.shaderCullDistance = true,
.shaderFloat64 = true,
.shaderInt64 = true,
- .shaderInt16 = pdevice->rad_info.chip_class >= GFX9,
+ .shaderInt16 = !pdevice->use_aco || pdevice->rad_info.chip_class >= GFX8,
.sparseBinding = true,
.variableMultisampleRate = true,
.inheritedQueries = true,