Commit messages
Fixes piglit test
spec/glsl-1.50/execution/redeclare-pervertex-subset-vs-to-gs.
Reviewed-by: Ian Romanick <[email protected]>
From section 4.1.9 (Arrays) of the GLSL 4.40 spec (as of revision 7):
    However, unless noted otherwise, blocks cannot be redeclared;
    an unsized array in a user-declared block cannot be sized
    through redeclaration.
The only place where the spec notes that interface blocks can be
redeclared is to allow for redeclaration of built-in interface blocks
such as gl_PerVertex. Therefore, user-defined interface blocks can
never be redeclared. This is a clarification of previous intent (see
Khronos bug 10659).
We were already preventing interface block redeclaration using the
same block name at compile time, but we weren't preventing interface
block redeclaration using the same instance name (and different block
names) at compile time. And we weren't preventing an instance name
from conflicting with a previously-declared ordinary variable.
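For illustration, a hypothetical shader that should now be rejected at
compile time (block and variable names are made up):

    out BlockA { vec4 a; } data;
    out BlockB { vec4 b; } data;   // error: instance name "data" reused
    vec4 items;
    out BlockC { vec4 c; } items;  // error: "items" is already an ordinary variable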
In practice the problem would be caught at link time, but only because
of a coincidence: since ast_interface_block::hir() wasn't doing any
checking to see if the instance name already existed in the shader, it
was creating a second ir_variable in the shader having the same name
but a different type. Coincidentally, when the linker checked for
intrastage consistency of global variable declarations, it treated the
two declarations from the same shader as a conflict, so it reported a
link error.
But it seems dangerous to rely on that linker behaviour to catch
illegal redeclarations that really ought to be detected at compile
time.
Reviewed-by: Ian Romanick <[email protected]>
Fixes piglit tests:
- spec/glsl-1.50/execution/redeclare-pervertex-out-subset-gs
- spec/glsl-1.50/execution/redeclare-pervertex-subset-vs
Reviewed-by: Ian Romanick <[email protected]>
This patch verifies that:
- The gl_PerVertex input interface block may only be redeclared in a
geometry shader, and that it may only be redeclared as gl_in[].
- The gl_PerVertex output interface block may only be redeclared in a
vertex or geometry shader, and that it may only be redeclared as a
non-array without an interface name.
- gl_PerVertex may not be redeclared as any other type of interface
block (i.e. as a uniform interface block).
As a side-effect, the code now keeps track of what the previous
declaration of gl_PerVertex was--this will be needed in future
patches.
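As a sketch, the only redeclaration forms these checks accept look
roughly like this (member lists abbreviated):

    // Geometry shader: the input block may only be redeclared as gl_in[].
    in gl_PerVertex {
        vec4 gl_Position;
    } gl_in[];

    // Vertex or geometry shader: the output block may only be redeclared
    // as a non-array block with no instance name.
    out gl_PerVertex {
        vec4 gl_Position;
    };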
Fixes piglit tests:
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-in-with-incorrect-name.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-as-array.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-with-instance-name.geom
Reviewed-by: Ian Romanick <[email protected]>
In later patches, we'll use this in order to implement the required
behaviour that after the gl_PerVertex interface block has been
redeclared, only members of the redeclared interface block may be
used.
v2: Update the function name and comment to clarify that we aren't
actually removing the variable from the symbol table, just disabling
it.
Reviewed-by: Ian Romanick <[email protected]>
This will be used by future patches to change an ir_variable's
interface type when the gl_PerVertex built-in interface block is
redeclared.
Reviewed-by: Ian Romanick <[email protected]>
This patch modifies the get_variable_being_redeclared() function so
that it no longer relies on the ast_declaration for the variable being
redeclared. In future patches, this will allow
get_variable_being_redeclared() to be used for processing
redeclarations of the built-in gl_PerVertex interface block.
v2: Also make get_variable_being_redeclared() static.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Fixes piglit test
spec/glsl-1.10/compiler/struct/struct-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Note: we need to make an exception for the gl_PerVertex interface
block, since in geometry shaders it is allowed to be redeclared with
the instance name gl_in. Future patches will make redeclaration of
gl_PerVertex work properly.
Fixes piglit test
spec/glsl-1.50/compiler/interface-block-instance-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Note: we need to make an exception for the gl_PerVertex interface
block, since built-in variables are allowed to be redeclared inside
it. Future patches will make redeclaration of gl_PerVertex work
properly.
Fixes piglit tests:
- spec/glsl-1.50/compiler/interface-block-array-elem-uses-gl-prefix.vert
- spec/glsl-1.50/compiler/named-interface-block-elem-uses-gl-prefix.vert
- spec/glsl-1.50/compiler/unnamed-interface-block-elem-uses-gl-prefix.vert
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Note: we need to make an exception for the gl_PerVertex interface
block, since this is allowed to be redeclared. Future patches will
make redeclaration of gl_PerVertex work properly.
Fixes piglit test
spec/glsl-1.50/compiler/interface-block-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Note: some limited amount of redeclaration is actually allowed,
provided the shader is redeclaring the built-in gl_PerVertex interface
block. Support for this will be added in future patches.
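The conflicts being caught look roughly like this hypothetical snippet;
members of an unnamed interface block live in the global scope, so they
can collide with an earlier global or with a member of an earlier
unnamed block:

    float foo;
    out Blk { float foo; };   // error: "foo" conflicts with the global above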
Fixes piglit tests
spec/glsl-1.50/compiler/unnamed-interface-block-elem-conflicts-with-prev-{block-elem,global}.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
GLSL reserves identifiers beginning with "gl_" or containing "__", but
we haven't been consistent about enforcing this rule. This patch
makes a new function to check whether identifier names are valid. In
the process it closes a loophole where we would previously allow
function argument names to contain "__".
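For example, both of these hypothetical declarations should now be
rejected by validate_identifier():

    float a__b;              // identifiers containing "__" are reserved
    void f(float x__y) { }   // the function-argument loophole is closed too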
v2: Rename check_valid_identifier() -> validate_identifier(). Add
curly braces in validate_identifier().
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
In commit e2660770731b018411fbe1620cacddaf8dff5287 (glsl: Keep track
of location for interface block fields), I neglected to update
glsl_type::record_key_compare to account for the fact that interface
types now contain location information. As a result, interface types
that differ only by their location information would not be properly
distinguished.
At the moment this is not a problem, because the only interface block
in which location information != -1 is gl_PerVertex, and gl_PerVertex
is always created in the same way. However, in the patches that
follow, we'll be adding new ways to create gl_PerVertex (by
redeclaring it), so we'll need location information to be handled
properly.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Although these interfaces can't be accessed directly by GLSL (since
they don't have an instance name), they will be necessary in order to
allow redeclarations of gl_PerVertex.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Currently, we create just a single gl_PerVertex interface block for
geometry shader inputs. In later patches, we'll also need to create
an interface block for geometry and vertex shader outputs. Moving the
code into its own class will make reuse easier.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Previously, we erroneously used the name "gl_in" for both the block
name and the instance name.
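In other words, the intended naming is (sketch):

    in gl_PerVertex {   // "gl_PerVertex" is the block name
        vec4 gl_Position;
    } gl_in[];          // "gl_in" is the instance name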
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
We use a location of -1 for variables which don't have their own
assigned locations--this includes ir_variables which represent named
interface blocks. Technically the location assigned to gl_in doesn't
matter, since gl_in is only accessed via its members (which have their
own locations). But it's nice to be consistent.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Calling radeon_drm_cs_flush from multiple threads might cause deadlocks;
fix this by immediately signaling the semaphore after waiting for it.
This is a candidate for the stable branch(es).
Partially fixes: https://bugs.freedesktop.org/show_bug.cgi?id=70123
v2: some fixes on commit message
Signed-off-by: Christian König <[email protected]>
_GNU_SOURCE appears to not be used reliably. Use _MSC_VER instead so
that MSVC alone is affected.
Unless the polygon fill mode is different from PIPE_POLYGON_MODE_FILL,
so checking the polygon mode is sufficient.
Testing done: no regression in polygon-mode-offset
Reviewed-by: Roland Scheidegger <[email protected]>
Not used in ages, and it wouldn't work at all with explicit derivatives now
(not that it did before, as it ignored them, but now the code would just use
the pre-projected derivs, which would be quite random numbers).
v2: also get rid of 3 helper functions no longer used.
Reviewed-by: Jose Fonseca <[email protected]>
They need some special handling, which is quite complicated.
Additionally, use the same code for implicit derivatives too if no_rho_approx
and no_quad_lod are set, because it seems that while it should generally be OK
to use per-quad lod for implicit derivatives, there's at least one test which
insists that in the case of cubemaps the shared lod value MUST come from a pixel
inside the primitive (due to the derivatives becoming different if a different
larger major axis is chosen).
v2: based on Brian's feedback, clean up the code a bit.
Also use the sign bit of the major axis instead of pre-selecting the s/t/r sign
for coord mirroring (which should be the same in the end, and saves 2 ands).
Also fix two bugs with select/mirror of derivatives: the minor axes need to
use the major axis sign as well (instead of the major derivative axis sign), and
don't mistakenly use absolute values of the major derivative and inverse major
values.
Reviewed-by: Jose Fonseca <[email protected]>
There are two reasons for this:
1) Even when ignoring the rho approximation for cube maps, the result is still
not correct, but it's better: the max error at edges is now sqrt(2) instead
of 2 (which was a full mip level), the same as for ordinary 2d maps when
doing rho approximations (so the error actually goes from a factor of 2 at edges
and sqrt(2) completely inside a face to sqrt(2) at edges and 0 inside a face).
2) I want to repurpose rho_no_approx for cubemaps for fully correct cubemap
derivatives (so we don't need yet another debug var).
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
We were already setting the array size of unsized arrays that appeared
inside unnamed interface blocks, but we weren't updating
ir_variable::interface_type to reflect the new array size, causing
bogus link errors.
This patch causes array_sizing_visitor to keep track of all the
unnamed interface types it sees, and the ir_variables corresponding to
each one. After the visitor runs, a new function,
fixup_unnamed_interface_types(), adjusts each unnamed interface type
to correctly correspond with the array sizes in the ir_variables.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block-gs
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block-multiple
Reviewed-by: Jordan Justen <[email protected]>
When multiple shaders of the same type access an interface block
containing an unsized array, we need to set the array size based on
the maximum array element accessed across all the shaders. This is
similar to what we already do with unsized arrays occurring outside of
interface blocks.
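A hypothetical sketch of the case being handled: two compilation units
of the same stage share a named block with an unsized array, and the
linked size has to cover the largest index used by either one.

    // Compilation unit 1:
    out Data { vec4 v[]; } d;
    void writeLow() { d.v[2] = vec4(0.0); }

    // Compilation unit 2 (same stage, matching declaration):
    out Data { vec4 v[]; } d;
    void writeHigh() { d.v[5] = vec4(1.0); }   // linked size must be at least 6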
Note: one corner case is not yet addressed by these patches: the case
where one compilation unit defines an interface block containing
unsized arrays and another compilation unit defines the same interface
block containing sized arrays.
Fixes piglit test:
- spec/glsl-1.50/execution/unsized-in-named-interface-block-multiple
Reviewed-by: Jordan Justen <[email protected]>
Unsized arrays appearing inside named interface blocks now get a
proper size assigned by the array_sizing_visitor.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-named-interface-block
- spec/glsl-1.50/execution/unsized-in-named-interface-block-gs
- spec/glsl-1.50/linker/unsized-in-named-interface-block
- spec/glsl-1.50/linker/unsized-in-named-interface-block-gs
- spec/glsl-1.50/linker/unsized-in-unnamed-interface-block-gs (*)
(*) is fixed by dumb luck--support for unsized arrays in unnamed
interface blocks will come in a later patch.
Reviewed-by: Jordan Justen <[email protected]>
This patch modifies update_max_array_access() so that it updates
ir_variable::max_ifc_array_access to reflect the shader's use of
arrays appearing within interface blocks.
v2: Use an ordinary function in ast_array_index.cpp rather than a
virtual function in ir_rvalue. Avoid dereferencing NULL when handling
accesses to ordinary structs.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
For interface blocks that contain arrays, this field will contain the
maximum element of each contained array that is accessed by the
shader. This is a first step toward supporting unsized arrays in
interface blocks.
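A rough sketch of what gets recorded (names hypothetical):

    out Blk {
        vec4 v[4];
        float w[];
    } b;
    void main() { b.v[2] = vec4(0.0); b.w[7] = 1.0; }
    // max_ifc_array_access for "b" would record 2 for v and 7 for w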
Reviewed-by: Jordan Justen <[email protected]>
In a future patch, this will allow us to enforce invariants when the
interface type is updated.
Reviewed-by: Jordan Justen <[email protected]>
Currently, when converting an access to an array element from ast to
IR, we need to see if the array is an ir_dereference_variable, and if
so update the variable's max_array_access.
When we add support for unsized arrays in interface blocks, we'll also
need to account for cases where the array is an ir_dereference_record
and the record is an interface block.
To make this easier, move the update into its own function.
v2: Use an ordinary function in ast_array_index.cpp rather than a
virtual function in ir_rvalue.
Reviewed-by: Jordan Justen <[email protected]>
Although it's not explicitly stated in the GLSL 1.50 spec, unsized
arrays are allowed in interface blocks.
section 1.2.3 (Changes from revision 5 of version 1.5) of the GLSL
1.50 spec says:
    * Completed full update to grammar section.  Tested spec examples
      against it:
      ...
    * add unsized arrays for block members
And section 7.1 (Vertex and Geometry Shader Special Variables)
includes an unsized array in the built-in gl_PerVertex interface
block:
    out gl_PerVertex {
        vec4 gl_Position;
        float gl_PointSize;
        float gl_ClipDistance[];
    };
Furthermore, GLSL 4.30 contains an example of an unsized array
occurring inside an interface block. From section 4.3.9 (Interface
Blocks):
    uniform Transform { // API uses "Transform[2]" to refer to instance 2
        mat4 ModelViewMatrix;
        mat4 ModelViewProjectionMatrix;
        vec4 a[]; // array will get implicitly sized
        float Deformation;
    } transforms[4];
This patch adds the parser rule to support unsized arrays inside
interface blocks. Later patches in the series will add the
appropriate semantics to handle them.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block
- spec/glsl-1.50/linker/unsized-in-unnamed-interface-block
Reviewed-by: Jordan Justen <[email protected]>
Interface declarations have two names associated with them: the block
name and the instance name. It's the block name that needs to be
passed to get_interface_instance(). This patch renames the argument
so that there's no confusion.
Reviewed-by: Kenneth Graunke <[email protected]>
BLORP performs blits by drawing a rectangle with a shader that samples
from the source texture, and writes color data to the destination.
The sampler always returns 32-bit RGBA float data, regardless of the
source format's component ordering or data type. Likewise, the render
target write message takes 32-bit RGBA float data, and converts it
appropriately. So the bulk of the work is already taken care of for us.
This greatly accelerates a lot of CopyTexSubImage calls, and makes
Legends of Aethereus playable on Ivybridge. At the default settings,
LOA continually blits between SRGBA8888 (the window format) and
RGBA16_FLOAT. Since neither BLORP nor our BLT paths supported this,
it fell back to meta, spending 33% of the CPU in floorf() converting
between floats and half-floats.
v2: Use != instead of ^ (suggested by Ian). Note that only
CopyTexSubImage is affected by this patch (caught by Eric).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
The previous code for sRGB overrides assumes that the source and
destination formats are equal, other than the color space. This won't
be feasible when we add support for format conversions.
Here are a few cases, and how the old code handled them:
1. RGB8 -> SRGB8, MSAA ==> SRGB8 -> SRGB8
2. RGB8 -> SRGB8, single ==> RGB8 -> RGB8
3. SRGB8 -> RGB8, MSAA ==> RGB8 -> RGB8
4. SRGB8 -> RGB8, single ==> SRGB8 -> SRGB8
Apparently, preserving the behavior of #1 is important. When doing a
multisample to single-sample resolve, blending the samples together in
an sRGB-correct fashion results in a noticeably higher quality image.
It also is necessary to pass Piglit's EXT_framebuffer_multisample
accuracy color tests.
Paul, Eric, Anuj, and I talked about this, and aren't sure that it
matters in the other cases.
This patch preserves the behavior of #1, but otherwise reverts to
doing everything in linear space, changing the behavior of case #4.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
We could conceivably use BRW_SURFACEFORMAT_R24_UNORM_X8_TYPELESS for
Z24 source images, allowing conversions from Z24 to either Z16 or Z32F.
Unfortunately, we can't use it for destination images since it isn't
supported as a render target.
Using different formats for sources or destinations would be painful,
so for now, punt.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
Currently, all that matters is that we copy the correct number of bits,
so any format that has 32-bits of data will work fine.
Once BLORP begins handling format conversions, the sampler will need to
correctly interpret the data. We don't need a depth format, but we do
need the right number of components and data type (FLOAT).
For Z32F, this means using R32_FLOAT.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
Currently, all that matters is that we copy the correct number of bits,
so any format that has 16-bits of data will work fine.
Once BLORP begins handling format conversions, the sampler will need to
correctly interpret the data. We don't need a depth format, but we do
need the right number of components and data type (UNORM).
For Z16, this means using R16_UNORM.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
Once blorp gains the ability to do format conversions, it's conceivable
that the source format may be texturable but not supported as a render
target. This would break Paul's code, which assumes that it can use the
render_target_format array even for the source format.
There are three ways to convert MESA_FORMAT enums to BRW_SURFACEFORMAT
enums:
1. brw_format_for_mesa_format()
   This translates the Mesa format to the most equivalent BRW format.
2. brw->render_target_format[]
   This is used for renderbuffers, and handles the subset of formats
   that are renderable.  However, it's not always equivalent, since
   it overrides a few non-renderable formats.  For example, it
   converts B8G8R8X8_UNORM to B8G8R8A8_UNORM so it can be rendered to.
3. translate_tex_format()
   This is used for textures.  It wraps brw_format_for_mesa_format(),
   but overrides depth textures, and one sRGB case on Gen4.
BLORP has a fourth function, which uses brw->render_target_format[]
and overrides depth formats (differently than translate_tex_format).
This patch makes the BLORP function use brw_format_for_mesa_format()
for textures/source data, since not everything will be a render target.
It continues using brw->render_target_format[] for render targets, since
it needs the format overrides that it provides.
We don't use translate_tex_format() since the additional overrides are
not useful or simply redundant.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
This allows us to determine whether we're setting up a format for
the source (as a texture) or destination (as a render target).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
The GNU C++ compiler declares the C99 lrint, etc. when _GNU_SOURCE is
defined, but MSVC does not.
Trivial.
As we're moving towards expanding the number of subpixel
bits and the width of the variables used in the computations,
we need to make this code a bit more centralized.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Both imul_hi and umul_hi work with this patch.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
The code introduces two new 32-bit integer multiplication opcodes which
can be used to produce correct 64-bit results. GLSL, OpenCL and D3D10+
require them. We use two separate opcodes because they match the
behavior of GLSL and OpenCL, are a lot easier to add than a single
opcode with multiple destinations, and because there's not much (any)
difference wrt code generation.
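For context, a hedged GLSL-side sketch of one construct that needs the
new imul_hi/umul_hi opcodes: the umulExtended()/imulExtended() built-ins
return both halves of a 32x32 multiply, and the high half is what these
opcodes compute.

    uint a = 0xDEADBEEFu, b = 12345u;
    uint hi, lo;
    umulExtended(a, b, hi, lo);   // hi = high 32 bits of a * b, lo = low 32 bits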
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Only 8-bit and 32-bit integers were supported before.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
The UD values were getting set up as floats. This happened to work out
because they were used as the second argument where the first was a dword,
and gen6+ doesn't do source conversions. But it did trigger fulsim
warnings, and it meant that if you used the push constant as the first
operand you would have been disappointed.
Reviewed-by: Paul Berry <[email protected]>
Fixes 3 texelFetch tests in piglit all.tests on ivb, and cubemap npot on gm45.
v2: Don't forget the gen4 DL=6 cubemap behavior.
Cc: "9.1 9.2" <[email protected]>
Reviewed-by: Chad Versace <[email protected]> (v1)
We hadn't run into order-of-operations warnings before, apparently, since
addition is so low in the order of operations.
Cc: "9.1 9.2" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
We had a fixup for gen4's 3d-layout cubemaps (which, iirc, we'd
experimentally found to be necessary!), but while the spec still requires
it on gen5, we'd been missing it in the array-layout cubemaps.
Cc: "9.1 9.2" <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>