to reduce the amount of GLSL optimizations for drivers that can do better.
Reviewed-by: Eric Anholt <[email protected]>
Drivers with good compilers don't need aggressive optimizations before TGSI.
Reviewed-by: Eric Anholt <[email protected]>
so that backends don't have to run it manually
Reviewed-by: Eric Anholt <[email protected]>
We need to move this to the shared layer.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
If the VUE map has slots at the end which the shader does not write,
then we'd "flush" (constructing an URB write) on the last output it
actually wrote. Then, we'd construct another SEND with EOT, but with
no actual payload data. That's not legal.
For example, SSO programs have clip distance slots allocated no matter
what, but the shader may not write them. If it doesn't write any user
defined varyings, then the clip distance slots will be the last ones.
Found while debugging
dEQP-VK.tessellation.shader_input_output.gl_position_vs_to_tcs_to_tes
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
We can use this to track various features that may or may not be supported
by the hw / kernel. Currently, we usually do this by checking the generation
and supported command parser versions in various places throughout the driver
code. With this patch, we centralize all these checks in just one place at
screen creation time, then we simply query the bitfield wherever we need to
check whether a particular feature is supported.
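A minimal sketch of the pattern being described (all names and gen/parser
thresholds below are made up for illustration, not the actual driver fields):
   /* Hypothetical feature bits, decided once at screen creation. */
   #define HW_FEATURE_PIPELINED_REG_WRITES (1 << 0)
   #define HW_FEATURE_COMPUTE              (1 << 1)
   struct hypothetical_screen {
      int gen;
      int cmd_parser_version;
      unsigned hw_features;
   };
   static void
   detect_hw_features(struct hypothetical_screen *s)
   {
      /* Thresholds are invented for the sketch. */
      if (s->gen >= 8 || s->cmd_parser_version >= 2)
         s->hw_features |= HW_FEATURE_PIPELINED_REG_WRITES;
      if (s->gen >= 7)
         s->hw_features |= HW_FEATURE_COMPUTE;
   }
   /* Elsewhere in the driver, instead of re-checking gen/parser version:
    *    if (screen->hw_features & HW_FEATURE_COMPUTE) ...
    */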
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Instead, check the screen field directly.
Reviewed-by: Kenneth Graunke <[email protected]>
Moving the test to the screen places it alongside the other global HW
feature tests that want to be shared between contexts.
Also, we need to know if we support pipelined register writes at
screen creation time so that we can tell if we can expose OpenGL 4.0
in gen7.
Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Some of the existing tests were using '@' and '"' incidentally within the test
body. Neither of these characters is actually legal in GLSL. And since we
are planning to start generating errors for illegal characters, we need to
first make the test suite clean.
Reviewed-by: Kenneth Graunke <[email protected]>
Here, each legal character (as defined by GLSL Language Specification version
4.30.6, section 3.1) appears at least once in the input file. Obviously,
characters with special meaning (like '#' and '\') aren't treated exhaustively
with respect to all their possible uses. We have many other tests for that.
Here, we're simply ensuring that the test suite sees every legal character at
least once.
v2 (by Ken): Fix expectations, move to src/compiler, renumber tests.
Carl's .expected:   Updated .expected:
..                  ..
. .                 . .
. .                 . .
. .                 . .
. .                 . .
.                   ..
.                   .
.                   .
.
(For some reason, the original test expected ".." to produce two lines.
glcpp, cpp, and mcpp all follow my updated behavior, so I believe it to
be correct.)
Reviewed-by: Kenneth Graunke <[email protected]>
Of course, these aren't really useful for anything, but the GLSL language
specification does allow them:
The source character set used for the OpenGL shading languages,
outside of comments, is a subset of UTF-8. It includes the following
characters:
...
White space: the space character, horizontal tab, vertical tab, form
feed, carriage-return, and line-feed.
[GLSL Language Specification 4.30.6, section 3.1]
So treat vertical tab ('\v' or ^K) and form-feed ('\f' or ^L) as horizontal
space characters.
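A minimal sketch of the classification this implies (illustrative only; the
real glcpp lexer is flex-generated):
   /* Space, horizontal tab, vertical tab and form feed are all treated
    * as horizontal white space between preprocessor tokens. */
   static int
   is_hspace(int c)
   {
      return c == ' ' || c == '\t' || c == '\v' || c == '\f';
   }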
Reviewed-by: Kenneth Graunke <[email protected]>
GCC's preprocessor accepts a macro definition where there is no space between
the macro's identifier name and the replacement list. (GCC does emit a "missing
space" warning that we don't, but that's fine.)
This is an exhaustive test that verifies that all legal GLSL characters that
could possibly be interpreted as separating the macro name from the
replacement list are interpreted as such. So the testing here includes all
valid GLSL symbols except for:
* Characters that can be part of an identifier (a-z, A-Z, 0-9, _)
* Backslash (allowed only as a line continuation)
* Hash (allowed only to introduce a pre-processor directive, or as part of a
paste operator in a replacement list---but not as the first token of the
replacement list)
* Space characters (since the point of the testing is to have missing space)
* Left parenthesis (which would indicate a function-like macro)
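A hypothetical input in the spirit of the test described above (not the
actual test file):
   /* No space between the macro name and its replacement list. */
   #define plus+
   #define star*
   plus 1   /* expands to: + 1 */
   star 2   /* expands to: * 2 */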
v2 (Ken): Move to src/compiler, renumber tests.
Reviewed-by: Kenneth Graunke <[email protected]>
v2: Move push constant relative offset computation down to
_vtn_load_store_tail() (Jason)
Signed-off-by: Lionel Landwerlin <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Signed-off-by: Lionel Landwerlin <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
The context may be used by texture_get_handle.
Reviewed-by: Christian König <[email protected]>
Cc: 13.0 <[email protected]>
The context may be used by texture_get_handle.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99158
Reviewed-by: Christian König <[email protected]>
Cc: 13.0 <[email protected]>
Useful when debugging with R600_DEBUG=vm,check_vm to match
addr in both outputs.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
function was removed by b3360d23ac1db61390b2ac8963756c6133ba6e23
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Fixes performance regression in SynMark PSPom caused by loops with float
counters not always unrolling.
For example:
for (float i = 0.02; i < 0.9; i += 0.11)
...
Reviewed-by: Jason Ekstrand <[email protected]>
It's more user-friendly and it avoids writing files in unexpected
places.
Signed-off-by: Marek Olšák <[email protected]>
Make sure unused ops and their references are removed, prior to entering
the GCM (global code motion) pass, to stop GCM from breaking the loop
logic and thus hanging the GPU.
It turns out that sb has problems with loops and node optimizations
regarding associative folding:
- the global code motion (gcm) pass moves ops up a loop level/basic block
until they've fulfilled their total usage count
- if there are ops folded into others, the usage count won't be
fulfilled and thus the op is moved way up to the top
- within GCM the op would be visited and its deps would be moved
alongside it, to fulfill the src constraints
- in a loop, an unused op is moved out of the loop and GCM would move
the src value ops up as well
- now here arises the problem: if the loop counter is one of the src
values it would get moved up as well, the loop break condition would
never get hit and the shader would turn into an endless loop, resulting
in the GPU hanging and being reset
A reduced (albeit nonsense) piglit example would be:
[require]
GLSL >= 1.20
[fragment shader]
uniform int SIZE;
uniform vec4 lights[512];
void main()
{
    float x = 0;
    for (int i = 0; i < SIZE; i++)
        x += lights[2*i+1].x;
}
[test]
uniform int SIZE 1
draw rect -1 -1 2 2
Which gets optimized to:
===== SHADER #12 OPT ================================== PS/BARTS/EVERGREEN =====
===== 42 dw ===== 1 gprs ===== 2 stack =========================================
ALU 3 @24
1 y: MOV R0.y, 0
t: MULLO_UINT R0.w, [0x00000002 2.8026e-45].x, R0.z
LOOP_START_DX10 @22
PUSH @6
ALU 1 @30 KC0[CB0:0-15]
2 M x: PRED_SETGE_INT __.x, R0.z, KC0[0].x
JUMP @14 POP:1
LOOP_BREAK @20
POP @14 POP:1
ALU 2 @32
3 x: ADD_INT R0.x, R0.w, [0x00000002 2.8026e-45].x
TEX 1 @36
VFETCH R0.x___, R0.x, RID:0 MFC:16 UCF:0 FMT[..]
ALU 1 @40
4 y: ADD R0.y, R0.y, R0.x
LOOP_END @4
EXPORT_DONE PIXEL 0 R0.____ EOP
===== SHADER_END ===============================================================
Notice that R0.z, the register relevant to the loop counter/break condition,
is never incremented at all. Also, some of the loop content has been moved
out of the loop to fulfill the requirements of the one unused op.
With a debug build of mesa this would produce an error like
error at : PRED_SETGE_INT __, __, EM.2, R1.x.2||[email protected], C0.x
: operand value R1.x.2||[email protected] was not previously written to its gpr
and the compilation would fail due to this. On a release build it gets
passed to the GPU.
When using this patch, the loop remains intact:
===== SHADER #12 OPT ================================== PS/BARTS/EVERGREEN =====
===== 48 dw ===== 1 gprs ===== 2 stack =========================================
ALU 2 @24
1 y: MOV R0.y, 0
z: MOV R0.z, 0
LOOP_START_DX10 @22
PUSH @6
ALU 1 @28 KC0[CB0:0-15]
2 M x: PRED_SETGE_INT __.x, R0.z, KC0[0].x
JUMP @14 POP:1
LOOP_BREAK @20
POP @14 POP:1
ALU 4 @30
3 t: MULLO_UINT T0.x, [0x00000002 2.8026e-45].x, R0.z
4 x: ADD_INT R0.x, T0.x, [0x00000002 2.8026e-45].x
TEX 1 @40
VFETCH R0.x___, R0.x, RID:0 MFC:16 UCF:0 FMT[..]
ALU 2 @44
5 y: ADD R0.y, R0.y, R0.x
z: ADD_INT R0.z, R0.z, 1
LOOP_END @4
EXPORT_DONE PIXEL 0 R0.____ EOP
===== SHADER_END ===============================================================
Piglit: ./piglit summary console -d results/*_gpu_noglx
        name: unpatched_gpu_noglx patched_gpu_noglx
        ----  ------------------- -----------------
        pass:               18016             18021
        fail:                 748               743
       crash:                   7                 7
        skip:                1124              1124
     timeout:                   0                 0
        warn:                  13                13
  incomplete:                   0                 0
  dmesg-warn:                   0                 0
  dmesg-fail:                   0                 0
     changes:                   0                 5
       fixes:                   0                 5
 regressions:                   0                 0
       total:               19908             19908
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94900
Tested-by: Heiko Przybyl <[email protected]>
Tested-on: Barts PRO HD6850
Signed-off-by: Heiko Przybyl <[email protected]>
Signed-off-by: Marek Olšák <[email protected]>
Fixes tests 'dEQP-GLES3.functional.texture.mipmap.*.generate.rgba5551*' on
Intel Broadwell 0x1616.
The GL 4.5 spec describes the algorithm of glGenerateMipmap as:
The contents of the derived images are computed by repeated, filtered
reduction of the level base image. [...] No particular filter algorithm is
required, though a box filter is recommended as the default filter.
Consider a texture for which all pixels are identical at level 0.
From the spec's description above, one may reasonably assume that the "filtered
reduction" of level 0 produces a new miplevel for which again all pixels are
identical. For any 2x2 subspan of identical pixels, it is difficult to see how
the "filtered reduction" of that subspan can produce a pixel that differs from
the source pixels.
Dithering during _mesa_meta_GenerateMipmap() violated that reasonable
assumption.
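A worked sketch of the arithmetic behind that assumption (illustrative C,
not the meta implementation):
   /* Box-filter reduction of a 2x2 block; for a == b == c == d the
    * result is exactly the input value, so a constant level 0 should
    * yield constant miplevels unless something like dithering intervenes. */
   static uint8_t
   box_reduce(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
   {
      return (uint8_t)(((unsigned)a + b + c + d + 2) / 4);
   }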
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99210
Reviewed-by: Kenneth Graunke <[email protected]>
Cc: [email protected]
In its current state the unified i965 backend for
AMD_performance_monitor and INTEL_performance_query isn't able to report
meaningful Observation Architecture metrics since we haven't so far had
the necessary kernel support to fully configure the OA unit, nor the
corresponding support for normalizing the counters into a form that can
be usefully interpreted by application developers (as opposed to raw
values that may, for example, scale by the number of EUs there are).
So that we can focus on implementing just one of these extensions fully
and since we anticipate some significant backend changes as we look to
use a new kernel interface to configure the OA unit, this patch removes
the current backend. This will simplify our ability to update the
frontend infrastructure and backend interface before updating our
support for performance counters.
Signed-off-by: Robert Bragg <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
The variable actually needs to be signed, otherwise converting it to a
float doesn't work as expected.
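A minimal illustration of this class of bug (hypothetical values, not the
vl code itself):
   unsigned u = 0xffffffff;      /* an "unsigned -1" */
   float fu = (float)u;          /* ~4.29e9, not what was intended */
   int   i = -1;
   float fi = (float)i;          /* -1.0f, as expected */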
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=98914
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Nayan Deshmukh <[email protected]>
Cc: "13.0" <[email protected]>
Fixes: 1fb4179f927 ("vl: Fix trivial sign compare warnings")
handle the cases when vl_compositor_set_csc_matrix(),
vl_compositor_init_state() and vl_compositor_init() fail
Signed-off-by: Nayan Deshmukh <[email protected]>
Reviewed-by: Christian König <[email protected]>
handle the cases when vl_compositor_set_csc_matrix(),
vl_compositor_init_state() and vl_compositor_init() fail
Signed-off-by: Nayan Deshmukh <[email protected]>
Reviewed-by: Christian König <[email protected]>
pipe_buffer_map and pipe_buffer_create may return NULL
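A sketch of the defensive pattern (bind/usage arguments are placeholders;
exact parameters depend on the gallium helpers in use):
   struct pipe_resource *buf =
      pipe_buffer_create(screen, /* bind */ 0, /* usage */ 0, size);
   if (!buf)
      return false;
   struct pipe_transfer *xfer;
   void *ptr = pipe_buffer_map(pipe, buf, PIPE_TRANSFER_WRITE, &xfer);
   if (!ptr) {
      pipe_resource_reference(&buf, NULL);
      return false;
   }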
Signed-off-by: Nayan Deshmukh <[email protected]>
Reviewed-by: Christian König <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
FROM_DOUBLE opcodes are set up so that they use a dst register
with a size of 2 even if they only produce a single-precision
result (this is so that the opcode can use the larger register to
produce a 64-bit aligned intermediary result as required by the
hardware during the conversion process). This creates a problem for
spilling though, because when we attempt to emit a spill for the
dst we see a 32-bit destination and emit a scratch write that
allocates a single spill register, making the intermediary writes
go beyond the size of the allocation.
Prevent this by not spilling the destination register of these
opcodes.
Alternatively, we can avoid this by splitting the opcode in two: one
that produces a 64-bit aligned result and one that takes the 64-bit
aligned result as input and produces a 32-bit result from it.
Reviewed-by: Matt Turner <[email protected]>
When 64-bit registers are (un)spilled, we need to execute data shuffling
code before writing to or after reading from memory. If we have instructions
that operate on 64-bit data via 32-bit instructions, (un)spills for the
register produced by 32-bit instructions will not do data shuffling at all
(because we only see a normal 32-bit instruction seemingly operating on
32-bit data). This means that subsequent reads with that register using DF
access will unshuffle data read from memory that was never adequately
shuffled when it was written.
Fixing this would require identifying which 32-bit instructions write
64-bit data and emit spill instructions only when the full 64-bit
data has been written (by multiple 32-bit instructions writing to different
offsets of the same register) and always emit 64-bit unspills whenever
64-bit data is read, even when the instruction uses a 32-bit type to read
from them.
Reviewed-by: Matt Turner <[email protected]>
The current spilling code can't spill vgrf allocations larger than 1
but SIMD4x2 doubles require 2 vgrfs, so we need to permit this case (which
is handled properly for DF data types by emitting 2 scratch messages and
doing data shuffling). We accomplish this by not auto-disabling spilling
for vgrf allocations with a size of 2, and then disabling spilling on any
register with an offset != 0B (which indicates array access).
Disable spilling of partial DF reads/writes because these don't read/write
data for both logical threads and our scratch messages for 64-bit data
need data for both threads to be present.
Reviewed-by: Matt Turner <[email protected]>
Spilling of 64-bit data requires data shuffling for the corresponding
scratch read/write messages. This produces unsupported swizzle regions
and writemasks that we need to scalarize.
Reviewed-by: Matt Turner <[email protected]>
8-wide compressed DF operations are executed as two separate 4-wide
DF operations. In that scenario, we have to be careful when we allocate
register space for their operands to prevent the case where the first
half of the instruction overwrites the source of the second half.
To do this we mark compressed instructions as having hazards to make
sure that the register allocator assigns a register region for the
destination that does not overlap with the region assigned to any
of its source operands.
Reviewed-by: Matt Turner <[email protected]>
By exploiting gen7's hardware decompression bug with vstride=0 we gain the
capacity to support additional swizzle combinations.
This also fixes ZW writes from X/Y channels like in:
mov r2.z:df r0.xxxx:df
Because DF regions use 2-wide rows with a vstride of 2, the region generated
for the source would be r0<2,2,1>.xyxy:DF, which is equivalent to r0.xxzz, so
we end up writing r0.z in r2.z instead of r0.x. Using a vertical stride of 0
in these cases we get to replicate the XX swizzle and write what we want.
Reviewed-by: Matt Turner <[email protected]>
Certain swizzles like XYZW can be supported by translating only the first two
64-bit swizzle channels to 32-bit channels. This happens with swizzles such
that the first two logical components, when translated to 32-bit channels and
replicated across the second dvec2 row, select the same channels specified by
the 3rd and 4th logical swizzle components.
Notice that this opens up the possibility that some instructions are not
scalarized and can end up with XY or ZW 32-bit writemasks. Make sure we always
scalarize in such cases.
Reviewed-by: Matt Turner <[email protected]>
Stages that use interleaved attributes generate regions with a vstride=0
that can hit the gen7 hardware decompression bug.
v2:
- Make the function static and fix indentation (Matt)
Reviewed-by: Matt Turner <[email protected]>
This came in handy when debugging the payload setup for Tess Eval,
since it prints the correct subnr for attributes that can be loaded
in the second half of a register.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Use a width of 2 with 64-bit attributes.
Also, if we have a dvec3/4 attribute that gets split across two registers
such that components XY are stored in the second half of a register and
components ZW are stored in the first half of the next, we need to fix
regioning for any instruction that reads components Z/W of the attribute.
Notice this also means that we can't support sources that read cross-dvec2
swizzles (like XZ for example).
v2: don't assert that we have a single channel swizzle in the case that we
have to fix up Z/W access on the first half of the next register. We
can handle any swizzle that does not cross dvec2 boundaries, which
the double scalarization pass should have prevented anyway.
Reviewed-by: Matt Turner <[email protected]>
v2: use byte_offset() instead of offset()
Reviewed-by: Matt Turner <[email protected]>
v2: use byte_offset() instead of offset()
Reviewed-by: Matt Turner <[email protected]>
v2: use byte_offset() instead of offset()
Reviewed-by: Matt Turner <[email protected]>