| Commit message | Author | Age | Files | Lines |
| |
_GNU_SOURCE does not appear to be used reliably. Use _MSC_VER instead so
that only MSVC is affected.
| |
Unless the polygon fill mode is different from PIPE_POLYGON_MODE_FILL,
so checking the polygon mode is sufficient.
Testing done: no regression in polygon-mode-offset
Reviewed-by: Roland Scheidegger <[email protected]>
| |
Not used for ages, and it wouldn't work at all with explicit derivatives now
(not that it did before, as it simply ignored them, but now the code would
just use the pre-projected derivatives, which would be fairly random numbers).
v2: also get rid of 3 helper functions no longer used.
Reviewed-by: Jose Fonseca <[email protected]>
| |
They need some special handling. Quite complicated.
Additionally, use the same code for implicit derivatives too if both
no_rho_approx and no_quad_lod are set: while it should generally be OK to use
a per-quad lod for implicit derivatives, at least one test insists that in
the case of cubemaps the shared lod value MUST come from a pixel inside the
primitive (because the derivatives become different if a different, larger
major axis is chosen).
v2: based on Brian's feedback, clean up the code a bit.
Also use the sign bit of the major axis instead of the pre-selected s/t/r sign
for coord mirroring (which should be the same in the end, and saves 2 ands).
Also fix two bugs in the select/mirror of derivatives: the minor axes need to
use the major axis sign as well (instead of the major derivative axis sign),
and don't mistakenly use absolute values of the major derivative and inverse
major values.
Reviewed-by: Jose Fonseca <[email protected]>
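To make the select/mirror discussion above a bit more concrete, here is a
minimal C sketch (not the gallivm code, which works on SoA vectors) of how a
face-coordinate derivative falls out of the quotient rule once the major axis
and the sc/tc/ma components have been selected; per-face mirroring and sign
flips are omitted:

#include <math.h>

/* sc, tc, ma:    minor/minor/major components selected for the face
 * dsc, dtc, dma: d/dx (or d/dy) of those components
 * Face coords:   u = 0.5 * sc / |ma| + 0.5,  v = 0.5 * tc / |ma| + 0.5
 * Quotient rule: d(sc / |ma|) = (dsc * |ma| - sc * d|ma|) / |ma|^2,
 *                with d|ma| = sign(ma) * dma
 */
static void
cube_face_derivs(float sc, float tc, float ma,
                 float dsc, float dtc, float dma,
                 float *du, float *dv)
{
   float ima = 1.0f / fabsf(ma);
   float dabs_ma = copysignf(1.0f, ma) * dma;

   *du = 0.5f * (dsc - sc * ima * dabs_ma) * ima;
   *dv = 0.5f * (dtc - tc * ima * dabs_ma) * ima;
}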
| |
There are two reasons for this:
1) Even when ignoring the rho approximation, the result for cube maps is
still not correct, but it is better: the max error at edges is now sqrt(2)
instead of 2 (which was a full mip level), the same as for ordinary 2d maps
when doing rho approximations (so the error actually goes from a factor of 2
at edges and sqrt(2) completely inside a face, to sqrt(2) at edges and 0
inside a face).
2) I want to repurpose rho_no_approx for cubemaps for fully correct cubemap
derivatives (so we don't need yet another debug var).
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
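For reference, the quantities behind those 2 and sqrt(2) factors are the
usual LOD terms; this is standard texture-LOD math written as plain C, not
code lifted from the patch. The approximation can underestimate the exact
value by at most a factor of sqrt(2), i.e. half a mip level (lod = log2(rho)):

#include <math.h>

static float
rho_exact(float dudx, float dvdx, float dudy, float dvdy)
{
   return fmaxf(sqrtf(dudx * dudx + dvdx * dvdx),
                sqrtf(dudy * dudy + dvdy * dvdy));
}

static float
rho_approx(float dudx, float dvdx, float dudy, float dvdy)
{
   return fmaxf(fmaxf(fabsf(dudx), fabsf(dvdx)),
                fmaxf(fabsf(dudy), fabsf(dvdy)));
}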
| |
We were already setting the array size of unsized arrays that appeared
inside unnamed interface blocks, but we weren't updating
ir_variable::interface_type to reflect the new array size, causing
bogus link errors.
This patch causes array_sizing_visitor to keep track of all the
unnamed interface types it sees, and the ir_variables corresponding to
each one. After the visitor runs, a new function,
fixup_unnamed_interface_types(), adjusts each unnamed interface type
to correctly correspond with the array sizes in the ir_variables.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block-gs
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block-multiple
Reviewed-by: Jordan Justen <[email protected]>
| |
When multiple shaders of the same type access an interface block
containing an unsized array, we need to set the array size based on
the maximum array element accessed across all the shaders. This is
similar to what we already do with unsized arrays occurring outside of
interface blocks.
Note: one corner case is not yet addressed by these patches: the case
where one compilation unit defines an interface block containing
unsized arrays and another compilation unit defines the same interface
block containing sized arrays.
Fixes piglit test:
- spec/glsl-1.50/execution/unsized-in-named-interface-block-multiple
Reviewed-by: Jordan Justen <[email protected]>
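A minimal sketch of the sizing rule described above (not the actual linker
code; "max_access" stands in for the per-shader
ir_variable::max_ifc_array_access values):

/* Resolve the size of an unsized array in an interface block from the
 * highest element index accessed by any shader of the stage. */
static unsigned
resolve_ifc_array_size(const unsigned *max_access, unsigned num_shaders)
{
   unsigned size = 0;

   for (unsigned i = 0; i < num_shaders; i++) {
      if (max_access[i] + 1 > size)
         size = max_access[i] + 1;
   }
   return size;   /* 0 if no shader ever accessed the array */
}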
| |
Unsized arrays appearing inside named interface blocks now get a
proper size assigned by the array_sizing_visitor.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-named-interface-block
- spec/glsl-1.50/execution/unsized-in-named-interface-block-gs
- spec/glsl-1.50/linker/unsized-in-named-interface-block
- spec/glsl-1.50/linker/unsized-in-named-interface-block-gs
- spec/glsl-1.50/linker/unsized-in-unnamed-interface-block-gs (*)
(*) is fixed by dumb luck--support for unsized arrays in unnamed
interface blocks will come in a later patch.
Reviewed-by: Jordan Justen <[email protected]>
| |
This patch modifies update_max_array_access() so that it updates
ir_variable::max_ifc_array_access to reflect the shader's use of
arrays appearing within interface blocks.
v2: Use an ordinary function in ast_array_index.cpp rather than a
virtual function in ir_rvalue. Avoid dereferencing NULL when handling
accesses to ordinary structs.
Reviewed-by: Jordan Justen <[email protected]>
| |
Reviewed-by: Jordan Justen <[email protected]>
| |
For interface blocks that contain arrays, this field will contain the
maximum element of each contained array that is accessed by the
shader. This is a first step toward supporting unsized arrays in
interface blocks.
Reviewed-by: Jordan Justen <[email protected]>
| |
In a future patch, this will allow us to enforce invariants when the
interface type is updated.
Reviewed-by: Jordan Justen <[email protected]>
| |
Currently, when converting an access to an array element from ast to
IR, we need to see if the array is an ir_dereference_variable, and if
so update the variable's max_array_access.
When we add support for unsized arrays in interface blocks, we'll also
need to account for cases where the array is an ir_dereference_record
and the record is an interface block.
To make this easier, move the update into its own function.
v2: Use an ordinary function in ast_array_index.cpp rather than a
virtual function in ir_rvalue.
Reviewed-by: Jordan Justen <[email protected]>
| |
Although it's not explicitly stated in the GLSL 1.50 spec, unsized
arrays are allowed in interface blocks.
section 1.2.3 (Changes from revision 5 of version 1.5) of the GLSL
1.50 spec says:
* Completed full update to grammar section. Tested spec examples
against it:
...
* add unsized arrays for block members
And section 7.1 (Vertex and Geometry Shader Special Variables)
includes an unsized array in the built-in gl_PerVertex interface
block:
out gl_PerVertex {
    vec4 gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
};
Furthermore, GLSL 4.30 contains an example of an unsized array
occurring inside an interface block. From section 4.3.9 (Interface
Blocks):
uniform Transform { // API uses "Transform[2]" to refer to instance 2
    mat4 ModelViewMatrix;
    mat4 ModelViewProjectionMatrix;
    vec4 a[]; // array will get implicitly sized
    float Deformation;
} transforms[4];
This patch adds the parser rule to support unsized arrays inside
interface blocks. Later patches in the series will add the
appropriate semantics to handle them.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block
- spec/glsl-1.50/linker/unsized-in-unnamed-interface-block
Reviewed-by: Jordan Justen <[email protected]>
| |
Interface declarations have two names associated with them: the block
name and the instance name. It's the block name that needs to be
passed to get_interface_instance(). This patch renames the argument
so that there's no confusion.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
BLORP performs blits by drawing a rectangle with a shader that samples
from the source texture, and writes color data to the destination.
The sampler always returns 32-bit RGBA float data, regardless of the
source format's component ordering or data type. Likewise, the render
target write message takes 32-bit RGBA float data, and converts it
appropriately. So the bulk of the work is already taken care of for us.
This greatly accelerates a lot of CopyTexSubImage calls, and makes
Legends of Aethereus playable on Ivybridge. At the default settings,
LOA continually blits between SRGBA8888 (the window format) and
RGBA16_FLOAT. Since neither BLORP nor our BLT paths supported this,
it fell back to meta, spending 33% of the CPU in floorf() converting
between floats and half-floats.
v2: Use != instead of ^ (suggested by Ian). Note that only
CopyTexSubImage is affected by this patch (caught by Eric).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
| |
The previous code for sRGB overrides assumes that the source and
destination formats are equal, other than the color space. This won't
be feasible when we add support for format conversions.
Here are a few cases, and how the old code handled them:
1. RGB8 -> SRGB8, MSAA ==> SRGB8 -> SRGB8
2. RGB8 -> SRGB8, single ==> RGB8 -> RGB8
3. SRGB8 -> RGB8, MSAA ==> RGB8 -> RGB8
4. SRGB8 -> RGB8, single ==> SRGB8 -> SRGB8
Apparently, preserving the behavior of #1 is important. When doing a
multisample to single-sample resolve, blending the samples together in
an sRGB-correct fashion results in a noticeably higher quality image.
It also is necessary to pass Piglit's EXT_framebuffer_multisample
accuracy color tests.
Paul, Eric, Anuj, and I talked about this, and aren't sure that it
matters in the other cases.
This patch preserves the behavior of #1, but otherwise reverts to
doing everything in linear space, changing the behavior of case #4.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
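A rough sketch of the resulting rule (the two helpers are hypothetical; in
Mesa, _mesa_get_srgb_format_linear() covers the linear direction, and the
sRGB direction would need a similar lookup):

#include <stdbool.h>

/* Hypothetical format helpers standing in for the real lookups. */
unsigned linear_variant(unsigned fmt);
unsigned srgb_variant(unsigned fmt);

static void
pick_blit_formats(bool msaa_resolve, bool dst_is_srgb,
                  unsigned *src_fmt, unsigned *dst_fmt)
{
   if (msaa_resolve && dst_is_srgb) {
      /* case #1: keep both surfaces sRGB so the samples are blended
       * in an sRGB-correct fashion during the resolve */
      *src_fmt = srgb_variant(*src_fmt);
      *dst_fmt = srgb_variant(*dst_fmt);
   } else {
      /* everything else happens in linear space (changes case #4) */
      *src_fmt = linear_variant(*src_fmt);
      *dst_fmt = linear_variant(*dst_fmt);
   }
}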
| |
We could conceivably use BRW_SURFACEFORMAT_R24_UNORM_X8_TYPELESS for
Z24 source images, allowing conversions from Z24 to either Z16 or Z32F.
Unfortunately, we can't use it for destination images since it isn't
supported as a render target.
Using different formats for sources or destinations would be painful,
so for now, punt.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
| |
Currently, all that matters is that we copy the correct number of bits,
so any format that has 32 bits of data will work fine.
Once BLORP begins handling format conversions, the sampler will need to
correctly interpret the data. We don't need a depth format, but we do
need the right number of components and data type (FLOAT).
For Z32F, this means using R32_FLOAT.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
| |
Currently, all that matters is that we copy the correct number of bits,
so any format that has 16 bits of data will work fine.
Once BLORP begins handling format conversions, the sampler will need to
correctly interpret the data. We don't need a depth format, but we do
need the right number of components and data type (UNORM).
For Z16, this means using R16_UNORM.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
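Taken together, these last few patches amount to a small mapping from depth
formats to color formats the sampler can read. A self-contained sketch, using
placeholder enums since the real code uses MESA_FORMAT_* and
BRW_SURFACEFORMAT_* tokens:

/* Placeholder enums for the sketch only. */
enum depth_format   { DEPTH_Z16, DEPTH_Z24, DEPTH_Z32_FLOAT };
enum sampler_format { FMT_R16_UNORM, FMT_R32_FLOAT, FMT_NONE };

static enum sampler_format
sampler_format_for_depth(enum depth_format f)
{
   switch (f) {
   case DEPTH_Z16:
      return FMT_R16_UNORM;   /* right component count, UNORM type */
   case DEPTH_Z32_FLOAT:
      return FMT_R32_FLOAT;   /* right component count, FLOAT type */
   default:
      return FMT_NONE;        /* e.g. Z24: no renderable match, punt */
   }
}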
| |
Once blorp gains the ability to do format conversions, it's conceivable
that the source format may be texturable but not supported as a render
target. This would break Paul's code, which assumes that it can use the
render_target_format array even for the source format.
There are three ways to convert MESA_FORMAT enums to BRW_SURFACEFORMAT
enums:
1. brw_format_for_mesa_format()
This translates the Mesa format to the most equivalent BRW format.
2. brw->render_target_format[]
This is used for renderbuffers, and handles the subset of formats
that are renderable. However, it's not always equivalent, since
it overrides a few non-renderable formats. For example, it
converts B8G8R8X8_UNORM to B8G8R8A8_UNORM so it can be rendered to.
3. translate_tex_format()
This is used for textures. It wraps brw_format_for_mesa_format(),
but overrides depth textures, and one sRGB case on Gen4.
BLORP has a fourth function, which uses brw->render_target_format[]
and overrides depth formats (differently than translate_tex_format).
This patch makes the BLORP function use brw_format_for_mesa_format()
for textures/source data, since not everything will be a render target.
It continues using brw->render_target_format[] for render targets, since
it needs the format overrides that table provides.
We don't use translate_tex_format() since the additional overrides are
not useful or simply redundant.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
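Condensed, the choice looks roughly like this (a sketch; the two prototypes
stand in for brw_format_for_mesa_format() and the brw->render_target_format[]
table mentioned above):

#include <stdbool.h>
#include <stdint.h>

uint32_t format_for_texturing(uint32_t mesa_fmt);   /* most-equivalent format */
uint32_t format_for_rendering(uint32_t mesa_fmt);   /* includes RT overrides  */

static uint32_t
blorp_pick_surface_format(uint32_t mesa_fmt, bool is_render_target)
{
   /* Sources only need to be texturable; destinations need the
    * render-target overrides (e.g. B8G8R8X8 -> B8G8R8A8). */
   return is_render_target ? format_for_rendering(mesa_fmt)
                           : format_for_texturing(mesa_fmt);
}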
| |
This allows us to determine whether we're setting up a format for
the source (as a texture) or destination (as a render target).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
| |
The GNU C++ compiler declares the C99 lrint(), etc., when _GNU_SOURCE is
defined, but MSVC does not.
Trivial.
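The sort of fallback this is about looks roughly like the sketch below
(illustrative only, not Mesa's exact code; this simplified version rounds
halfway cases away from zero instead of honouring the current rounding mode):

#if defined(_MSC_VER)
#include <math.h>

static __inline long
lrint(double d)
{
   return (long)(d >= 0.0 ? floor(d + 0.5) : ceil(d - 0.5));
}

static __inline long
lrintf(float f)
{
   return (long)(f >= 0.0f ? floorf(f + 0.5f) : ceilf(f - 0.5f));
}
#endif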
| |
As we're moving towards expanding the number of subpixel bits and the
width of the variables used in the computations, we need to make this
code a bit more centralized.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
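A sketch of the kind of fixed-point setup being centralized (the constants
and helper are illustrative, not the rasterizer's actual values):

#include <stdint.h>

#define SUBPIXEL_BITS 4                 /* illustrative */
#define FIXED_ONE     (1 << SUBPIXEL_BITS)

/* Snap a floating-point coordinate to the subpixel grid.  Widening the
 * result to 64 bits is what makes more subpixel bits and larger
 * coordinate ranges possible later on. */
static int64_t
subpixel_snap(float v)
{
   return (int64_t)(v * (float)FIXED_ONE + (v >= 0.0f ? 0.5f : -0.5f));
}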
| |
Both imul_hi and umul_hi work with this patch.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
The code introduces two new 32-bit integer multiplication opcodes which
can be used to produce correct 64-bit results. GLSL, OpenCL and D3D10+
require them. We use two separate opcodes because they match the
behavior of GLSL and OpenCL, they are a lot easier to add than a single
opcode with multiple destinations, and there's not much (if any)
difference wrt code generation.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
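For reference, the semantics of the two opcodes in plain C: the upper 32 bits
of a 32x32 multiply, signed and unsigned. This is a model of the behavior,
not the TGSI or gallivm implementation:

#include <stdint.h>

static uint32_t
umul_hi(uint32_t a, uint32_t b)
{
   return (uint32_t)(((uint64_t)a * (uint64_t)b) >> 32);
}

static int32_t
imul_hi(int32_t a, int32_t b)
{
   return (int32_t)(((int64_t)a * (int64_t)b) >> 32);
}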
| |
Only 8- and 32-bit integers were supported before.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
| |
The UD values were getting set up as floats. This happened to work out
because they were used as the second argument where the first was a dword,
and gen6+ doesn't do source conversions. But it did trigger fulsim
warnings, and it meant if you used the push constant as the first operand
you would have been disappointed.
Reviewed-by: Paul Berry <[email protected]>
| |
Fixes 3 texelFetch tests in piglit all.tests on ivb, and cubemap npot on gm45.
v2: Don't forget the gen4 DL=6 cubemap behavior.
Cc: "9.1 9.2" <[email protected]>
Reviewed-by: Chad Versace <[email protected]> (v1)
| |
We hadn't run into order-of-operations warnings before, apparently, since
addition is so low in the order of operations.
Cc: "9.1 9.2" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
We had a fixup for gen4's 3d-layout cubemaps (which, iirc, we'd
experimentally found to be necessary!), but while the spec still requires
it on gen5, we'd been missing it in the array-layout cubemaps.
Cc: "9.1 9.2" <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
The EGL library has some references to x11, but it gets the link flags
from XCB_DRI2_LIBS if and only if HAVE_EGL_PLATFORM_X11 is true.
The X11_LIBS variable was probably coming from a PKG_CHECK_MODULES(x11)
earlier in history.
If it is possible to have HAVE_EGL_DRIVER_GLX without HAVE_EGL_PLATFORM_X11,
then the link flags for libX11 should be passed. However, they won't come
from X11_LIBS, which is undefined.
Reported-by: Emil Velikov <[email protected]>
Acked-by: Emil Velikov <[email protected]>
Signed-off-by: Gaetan Nadon <[email protected]>
| |
The compiler cannot find Xlib.h in the installed system headers.
All supplied include directives point to inside the mesa module.
The X11_CFLAGS variable is undefined (not defined in config.status).
It appears the intent was to use X11_INCLUDES defined in configure.ac.
The Xlib.h file is not installed on my workstation. It is supplied in
the libx11-dev package. This allows an X developer control over which
version of this file is used for X development.
Used to test: --enable-gallium-egl --enable-xlib-glx --disable-dri
Acked-by: Brian Paul <[email protected]>
Signed-off-by: Gaetan Nadon <[email protected]>
| |
The compiler cannot find Xlib.h in the installed system headers.
All supplied include directives point to inside the mesa module.
The X11_CFLAGS variable is undefined (not defined in config.status).
It appears the intent was to use X11_INCLUDES defined in configure.ac.
The Xlib.h file is not installed on my workstation. It is supplied in
the libx11-dev package. This allows an X developer control over which
version of this file is used for X development.
Acked-by: Brian Paul <[email protected]>
Signed-off-by: Gaetan Nadon <[email protected]>
| |
The X11_CFLAGS variable is undefined (not defined in config.status).
It appears the intent was to use X11_INCLUDES defined in configure.ac.
It is used for building the code in the x11 subdir.
The build does not fail on this one as LIBDRM_CFLAGS happens to have
the same includedir value as the one for X11. It will not always be the case.
The option --enable-gallium-egl is required during configuration.
Acked-by: Brian Paul <[email protected]>
Signed-off-by: Gaetan Nadon <[email protected]>
| |
pipe_screen::fence_finish with zero timeout returns quickly and
doesn't wait at all. Fix that, and also delete the fence afterwards,
so that QuerySurfaceStatus returns the right state later.
Addresses:
https://trac.videolan.org/vlc/ticket/9281
https://bugs.freedesktop.org/show_bug.cgi?id=68792
Reviewed-by: Christian König <[email protected]>
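A simplified sketch of the fix described above (not the exact state-tracker
code; the helper and its name are hypothetical):

#include "pipe/p_defines.h"
#include "pipe/p_screen.h"

static void
wait_for_surface_idle(struct pipe_screen *screen,
                      struct pipe_fence_handle **fence)
{
   if (*fence) {
      /* a timeout of 0 would return immediately without waiting */
      screen->fence_finish(screen, *fence, PIPE_TIMEOUT_INFINITE);
      /* drop the fence so QuerySurfaceStatus later reports the
       * surface as displayed */
      screen->fence_reference(screen, fence, NULL);
   }
}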
| |
OutputSurfaces have simple YCbCr rendering functionality built in,
but so far only 4:2:0 subsampling worked correctly. This fixes 4:2:2
and 4:4:4 formats.
Reviewed-by: Christian König <[email protected]>
| |
As per the API specification, it is legal to supply a NULL procamp. In this
case, a CSC matrix according to the colorspace should be generated,
but no further adjustments are made.
Addresses:
https://trac.videolan.org/vlc/ticket/9281
https://bugs.freedesktop.org/show_bug.cgi?id=68792
Reviewed-by: Christian König <[email protected]>
| |
It doesn't work (decodes to garbage) with most videos on UVD 3.0. Worse
yet, it often results in random memory corruption or GPU hangs. Rumor
has it only the newest UVD hardware could do it anyway.
Reviewed-by: Christian König <[email protected]>
| |
The DPB size calculations seem to be off; there is various random
corruption happening, even with advanced profile. Always assuming
a minimum number of references appears to fix it, similarly to
H.264. This might overallocate the DPB. Also clean up the SPS/PPS
field setup so that it matches VC-1 specifications better.
With these changes, all advanced profile VC-1 files I could get my
hands on work fine.
Reviewed-by: Christian König <[email protected]>
| |
UVD can only support NV12 in the case of hardware decoding, but we
can still use all other formats for software decoding. Use the UNKNOWN
profile to signal that we're not interested in hardware decoding.
v2: use profile instead of entrypoint
Reviewed-by: Christian König <[email protected]>
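A sketch of the resulting check (a hypothetical helper, not the radeon code
itself):

#include <stdbool.h>
#include "pipe/p_format.h"
#include "pipe/p_video_enums.h"

static bool
video_format_supported(enum pipe_format format, enum pipe_video_profile profile)
{
   /* Hardware (UVD) decoding only ever produces NV12... */
   if (profile != PIPE_VIDEO_PROFILE_UNKNOWN)
      return format == PIPE_FORMAT_NV12;

   /* ...but PIPE_VIDEO_PROFILE_UNKNOWN means shader-based decoding,
    * where the other video buffer formats can be used as well. */
   return true;
}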
| |
which contains -Wl,-Bsymbolic. If I understand it correctly, it prevents
symbols from clashing if multiple drivers are loaded at the same time.
Tested-by: Emil Velikov <[email protected]>
| |
Reviewed-by: Michel Dänzer <[email protected]>
| |
This doesn't fix any known issue. I'm just following the docs.
Reviewed-by: Michel Dänzer <[email protected]>
| |
Copy sechalf to the new register, otherwise we would read the wrong HW registers.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
When the instruction to send the sampler message is forced uncompressed or
sechalf, send a SIMD8 message even in SIMD16 mode.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
SIMD8 sampler messages are allowed in SIMD16 mode, and they could not work
without BRW_COMPRESSION_2NDHALF. Later PRMs (gen5 and later) do not
explicitly state whether BRW_COMPRESSION_2NDHALF is allowed, but they do have
examples using send with SecHalf. It should be safe to assume SecHalf is
valid.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
Fixes "Uninitialized scalar field" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
| |
Reviewed-by: Chris Forbes <[email protected]>
| |
gl_PointSize is stored in the w component of VARYING_SLOT_PSIZ, but
the geometry shader infrastructure assumes that it should look for all
geometry shader inputs of type float in the x component. So when
compiling a geometry shader that uses a gl_PointSize input, fix it up
during the shader prolog by moving the w component to the x component.
This is similar to how we emit fixups and workarounds for vertex
shader attributes.
Fixes piglit test spec/glsl-1.50/execution/geometry/core-inputs.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>