The code it was referencing was removed in 2010.
Reviewed-by: Kenneth Graunke <[email protected]>
Looking at Lightsmark's shaders, the way we used MRFs (or in gen7's
case, GRFs) was bad in a couple of ways. One was that it prevented
compute-to-MRF for the common case of a texcoord that gets used
exactly once, but where the texcoord setup all gets emitted before the
texture calls (such as when it's a bare fragment shader input, which
gets interpolated before processing main()). Another was that it
introduced a bunch of dependencies that constrained scheduling, and
forced waits for texture operations to complete before their results
were required. For example, we can now move the compute-to-MRF
interpolation for the second texture send down after the first send.
The downside is that this generally prevents
remove_duplicate_mrf_writes() from doing anything, whereas previously
it avoided work for the case of sampling from the same texcoord twice.
However, I suspect that most of the win that originally justified that
code was in avoiding the WAR stall on the first send, which this patch
also avoids, rather than the small cost of the extra instruction. We
see instruction count regressions in shaders in unigine, yofrankie,
savage2, hon, and gstreamer.
Improves GLB2.7 performance by 0.633628% +/- 0.491809% (n=121/125, avg of
~66fps, outliers below 61 dropped).
Improves openarena performance by 1.01092% +/- 0.66897% (n=425).
No significant difference on Lightsmark (n=44).
v2: Squash in the fix for register unspilling for send-from-GRF, fixing a
segfault in lightsmark.
Reviewed-by: Kenneth Graunke <[email protected]>
Acked-by: Matt Turner <[email protected]>
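[Editor's illustration, not part of the original commit: a minimal GLSL 1.20
fragment shader of the shape described above, where each texcoord is a bare
fragment shader input that is used exactly once.]

    #version 120
    uniform sampler2D tex0, tex1;
    varying vec2 tc0, tc1;

    void main()
    {
        /* Both coordinates are interpolated up front, before either sample
         * message is sent.  With send-from-GRF, setting up the second
         * coordinate no longer has to finish before the first sample goes
         * out. */
        gl_FragColor = texture2D(tex0, tc0) + texture2D(tex1, tc1);
    }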
For texturing from GRFs, we now have payloads of arbitrary sizes up to the
message length limit.
v2 (Kenneth Graunke): Rebase on intel_context -> brw_context change.
v3: Add some comment text.
v4: Change some magic 16s to BRW_MAX_MRF (noted by Ken). Leave the 11,
which is the magic "max sampler message length". BRW_MAX_MRF sizing
on the little int arrays is retained because I could see us needing to
extend in the future if we move to GRFs for FB writes (those go to at
least 12 long in a quick scan of the specs).
Reviewed-by: Kenneth Graunke <[email protected]> (v2)
Acked-by: Matt Turner <[email protected]>
This will let us coalesce into texture-from-GRF arguments, which would
otherwise be prevented due to the live interval for the whole vgrf
extending across all the MOVs setting up the channels of the message.
v2 (Kenneth Graunke): Rebase for renames.
Reviewed-by: Kenneth Graunke <[email protected]>
v2 (Kenneth Graunke): Rebase on s/live_variables/live_intervals/g.
Reviewed-by: Kenneth Graunke <[email protected]>
Now optimization passes will be able to look at the per-channel ranges.
v2: Rebase on various optimization pass changes.
v3 (Kenneth Graunke): Rename live_variables to live_intervals; split
introduction of invalidate_live_intervals() into a separate patch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
When compacting the list of VGRFs, we patch up the live interval ranges
(which are indexed by VGRF number). Unfortunately, once we make
per-component data available, this will become too complicated to
maintain. Instead, simply invalidate them.
This was pulled out of a patch by Eric Anholt.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
In compute_live_intervals(), start and end are shorter names for
the virtual_grf_start and virtual_grf_end class members.
Now that the fs_live_intervals class has arrays named start and end
which are indexed by var, rather than VGRF, reusing the name is
confusing. Plus, most of the code has been factored out, so using the
long names isn't as inconvenient.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
This is the information we'll actually use to replace the
virtual_grf_start[]/end[] arrays.
No change in shader-db.
v2 (Kenneth Graunke): Rebase; minor comment updates.
Reviewed-by: Kenneth Graunke <[email protected]>
These blocks are about to grow some more code, and the indentation was
getting out of hand.
v2 (Kenneth Graunke): Rebase, minor typo fixes and style changes.
Reviewed-by: Kenneth Graunke <[email protected]>
For now, this simply sets live_intervals_valid = false, but in the
future it will do something more sophisticated.
Based on a patch by Eric Anholt.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
This significantly improves our handling of VGRFs of size > 1.
Previously, we only marked VGRFs as def'd if the whole register was
written by a single instruction. Large VGRFs which were written
piecemeal would not be considered def'd at all, even if they were
ultimately completely written.
Without being def'd, these were then marked "live in" to the basic
block, often extending the range to preceding blocks and sometimes
even the start of the program.
The new per-component tracking gives more accurate live intervals,
which makes register coalescing more effective.
In the future, this should help with texturing from GRFs on Gen7+.
A sampler message might be represented by a 2-register VGRF which
holds the texture coordinates. If those are incoming varyings,
they'll be produced by two PLN instructions, which are piecemeal writes.
No reduction in shader-db instruction counts. However, code which
prints the live interval ranges does show that some VGRFs now have
smaller (and more correct) live intervals.
v2: Rebase on current send-from-GRF code requiring adding extra use[]s.
v3: Rebase on live intervals fix to include defs in the end of the
interval.
v4 (Kenneth Graunke): Rebase; split off a few preparatory patches;
add lots of comments; minor style changes; rewrite commit message.
v5 (Eric Anholt): whitespace nit.
Written-by: Eric Anholt <[email protected]> [v1-3]
Signed-off-by: Kenneth Graunke <[email protected]> [v4]
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]> (v4)
num_vars was shorthand for the number of virtual GRFs. num_vgrfs is a
bit clearer. Plus, the next patch will introduce "vars" which are
distinct from vgrfs.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
This has no functional effect, but should make subsequent changes a
little simpler.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Fixes piglit tests:
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-in-after-other-usage.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-after-other-usage.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-after-usage.geom
- spec/glsl-1.50/compiler/vs-redeclares-pervertex-out-after-other-usage.vert
- spec/glsl-1.50/compiler/vs-redeclares-pervertex-out-after-usage.vert
Reviewed-by: Ian Romanick <[email protected]>
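[Editor's illustration, not part of the original commit: the kind of shader
these tests exercise, where gl_PerVertex is redeclared only after one of its
members has already been used, which is presumably what now gets rejected.]

    #version 150

    void set_pos()
    {
        gl_Position = vec4(0.0);   /* gl_Position is used here ... */
    }

    /* ... so this redeclaration comes too late. */
    out gl_PerVertex {
        vec4 gl_Position;
    };

    void main()
    {
        set_pos();
    }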
Fixes piglit test
spec/glsl-1.50/execution/redeclare-pervertex-subset-vs-to-gs.
Reviewed-by: Ian Romanick <[email protected]>
From section 4.1.9 (Arrays) of the GLSL 4.40 spec (as of revision 7):
However, unless noted otherwise, blocks cannot be redeclared;
an unsized array in a user-declared block cannot be sized
through redeclaration.
The only place where the spec notes that interface blocks can be
redeclared is to allow for redeclaration of built-in interface blocks
such as gl_PerVertex. Therefore, user-defined interface blocks can
never be redeclared. This is a clarification of previous intent (see
Khronos bug 10659).
We were already preventing interface block redeclaration using the
same block name at compile time, but we weren't preventing interface
block redeclaration using the same instance name (and different block
names) at compile time. And we weren't preventing an instance name
from conflicting with a previously-declared ordinary variable.
In practice the problem would be caught at link time, but only because
of a coincidence: since ast_interface_block::hir() wasn't doing any
checking to see if the instance name already existed in the shader, it
was creating a second ir_variable in the shader having the same name
but a different type. Coincidentally, when the linker checked for
intrastage consistency of global variable declarations, it treated the
two declarations from the same shader as a conflict, so it reported a
link error.
But it seems dangerous to rely on that linker behaviour to catch
illegal redeclarations that really ought to be detected at compile
time.
Reviewed-by: Ian Romanick <[email protected]>
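[Editor's illustration, not part of the original commit: two hypothetical
declarations of the sort that are now rejected at compile time rather than
being caught, by coincidence, at link time.]

    #version 150

    out First  { float a; } inst;
    out Second { float b; } inst;   /* error: instance name reused for a
                                       different block */

    out vec4 foo;
    out Third  { float c; } foo;    /* error: instance name collides with an
                                       ordinary variable */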
Fixes piglit tests:
- spec/glsl-1.50/execution/redeclare-pervertex-out-subset-gs
- spec/glsl-1.50/execution/redeclare-pervertex-subset-vs
Reviewed-by: Ian Romanick <[email protected]>
This patch verifies that:
- The gl_PerVertex input interface block may only be redeclared in a
geometry shader, and that it may only be redeclared as gl_in[].
- The gl_PerVertex output interface block may only be redeclared in a
vertex or geometry shader, and that it may only be redeclared as a
non-array without an interface name.
- gl_PerVertex may not be redeclared as any other type of interface
block (i.e. as a uniform interface block).
As a side-effect, the code now keeps track of what the previous
declaration of gl_PerVertex was--this will be needed in future
patches.
Fixes piglit tests:
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-in-with-incorrect-name.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-as-array.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-with-instance-name.geom
Reviewed-by: Ian Romanick <[email protected]>
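[Editor's illustration, not part of the original commit: the redeclaration
forms these checks are concerned with.]

    #version 150

    /* Allowed in a vertex or geometry shader: the gl_PerVertex output block
     * is redeclared as a non-array with no instance name. */
    out gl_PerVertex {
        vec4 gl_Position;
    };

    /* Not allowed: an instance name (or an array, or any other kind of
     * interface block) on the redeclaration. */
    out gl_PerVertex {
        vec4 gl_Position;
    } my_vertex;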
In later patches, we'll use this in order to implement the required
behaviour that after the gl_PerVertex interface block has been
redeclared, only members of the redeclared interface block may be
used.
v2: Update the function name and comment to clarify that we aren't
actually removing the variable from the symbol table, just disabling
it.
Reviewed-by: Ian Romanick <[email protected]>
This will be used by future patches to change an ir_variable's
interface type when the gl_PerVertex built-in interface block is
redeclared.
Reviewed-by: Ian Romanick <[email protected]>
This patch modifies the get_variable_being_redeclared() function so
that it no longer relies on the ast_declaration for the variable being
redeclared. In future patches, this will allow
get_variable_being_redeclared() to be used for processing
redeclarations of the built-in gl_PerVertex interface block.
v2: Also make get_variable_being_redeclared() static.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Fixes piglit test
spec/glsl-1.10/compiler/struct/struct-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Note: we need to make an exception for the gl_PerVertex interface
block, since in geometry shaders it is allowed to be redeclared with
the instance name gl_in. Future patches will make redeclaration of
gl_PerVertex work properly.
Fixes piglit test
spec/glsl-1.50/compiler/interface-block-instance-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
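[Editor's illustration, not part of the original commit: the construct this
now rejects, with gl_in being the one permitted exception.]

    #version 150
    out Block { vec4 v; } gl_instance;   /* error: instance name uses the
                                            reserved gl_ prefix */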
Note: we need to make an exception for the gl_PerVertex interface
block, since built-in variables are allowed to be redeclared inside
it. Future patches will make redeclaration of gl_PerVertex work
properly.
Fixes piglit tests:
- spec/glsl-1.50/compiler/interface-block-array-elem-uses-gl-prefix.vert
- spec/glsl-1.50/compiler/named-interface-block-elem-uses-gl-prefix.vert
- spec/glsl-1.50/compiler/unnamed-interface-block-elem-uses-gl-prefix.vert
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
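[Editor's illustration, not part of the original commit: a member name that
is now rejected; inside a gl_PerVertex redeclaration such names stay legal.]

    #version 150
    out Block { vec4 gl_Value; };   /* error: block member uses the reserved
                                       gl_ prefix */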
Note: we need to make an exception for the gl_PerVertex interface
block, since this is allowed to be redeclared. Future patches will
make redeclaration of gl_PerVertex work properly.
Fixes piglit test
spec/glsl-1.50/compiler/interface-block-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
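[Editor's illustration, not part of the original commit: a block name that is
now rejected, gl_PerVertex being the one redeclarable exception.]

    #version 150
    out gl_Block { vec4 v; };   /* error: block name uses the reserved
                                   gl_ prefix */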
Note: some limited amount of redeclaration is actually allowed,
provided the shader is redeclaring the built-in gl_PerVertex interface
block. Support for this will be added in future patches.
Fixes piglit tests
spec/glsl-1.50/compiler/unnamed-interface-block-elem-conflicts-with-prev-{block-elem,global}.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
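[Editor's illustration, not part of the original commit: a member of an
unnamed interface block colliding with an earlier global, one of the cases
the tests above cover.]

    #version 150
    out vec4 color;
    out Block { vec4 color; };   /* error: unnamed block member redeclares
                                    the global 'color' */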
GLSL reserves identifiers beginning with "gl_" or containing "__", but
we haven't been consistent about enforcing this rule. This patch
makes a new function to check whether identifier names are valid. In
the process it closes a loophole where we would previously allow
function argument names to contain "__".
v2: Rename check_valid_identifier() -> validate_identifier(). Add
curly braces in validate_identifier().
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
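[Editor's illustration, not part of the original commit: identifiers of the
kind validate_identifier() presumably flags.]

    #version 110
    float gl_scale;                    /* begins with the reserved gl_ prefix */
    void blend(float weight__value);   /* contains the reserved "__" sequence */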
In commit e2660770731b018411fbe1620cacddaf8dff5287 (glsl: Keep track
of location for interface block fields), I neglected to update
glsl_type::record_key_compare to account for the fact that interface
types now contain location information. As a result, interface types
that differ only by their location information would not be properly
distinguished.
At the moment this is not a problem, because the only interface block
in which location information != -1 is gl_PerVertex, and gl_PerVertex
is always created in the same way. However, in the patches that
follow, we'll be adding new ways to create gl_PerVertex (by
redeclaring it), so we'll need location information to be handled
properly.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Although these interfaces can't be accessed directly by GLSL (since
they don't have an instance name), they will be necessary in order to
allow redeclarations of gl_PerVertex.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Currently, we create just a single gl_PerVertex interface block for
geometry shader inputs. In later patches, we'll also need to create
an interface block for geometry and vertex shader outputs. Moving the
code into its own class will make reuse easier.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Previously, we erroneously used the name "gl_in" for both the block
name and the instance name.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
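[Editor's illustration, not part of the original commit: how the two names
relate in the built-in geometry shader input declaration.]

    in gl_PerVertex {            /* "gl_PerVertex" is the block name */
        vec4 gl_Position;
        float gl_PointSize;
        float gl_ClipDistance[];
    } gl_in[];                   /* "gl_in" is the instance name */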
We use a location of -1 for variables which don't have their own
assigned locations--this includes ir_variables which represent named
interface blocks. Technically the location assigned to gl_in doesn't
matter, since gl_in is only accessed via its members (which have their
own locations). But it's nice to be consistent.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Calling radeon_drm_cs_flush from multiple threads might cause deadlocks.
Fix this by immediately signaling the semaphore after waiting for it.
This is a candidate for the stable branch(es).
Partially fixes: https://bugs.freedesktop.org/show_bug.cgi?id=70123
v2: some fixes on commit message
Signed-off-by: Christian König <[email protected]>
_GNU_SOURCE appears to not be used reliably. Use _MSC_VER instead so
that MSVC alone is affected.
Unless the polygon fill mode is different from PIPE_POLYGON_MODE_FILL,
so checking the polygon mode is sufficient.
Testing done: no regression in polygon-mode-offset
Reviewed-by: Roland Scheidegger <[email protected]>
Not used in ages, and it wouldn't work at all with explicit derivatives now
(not that it did before, as it ignored them, but now the code would just use
the pre-projected derivs, which would be quite random numbers).
v2: also get rid of 3 helper functions no longer used.
Reviewed-by: Jose Fonseca <[email protected]>
They need some special handling, which is quite complicated.
Additionally, use the same code for implicit derivatives too if no_rho_approx
and no_quad_lod are set, because it seems that, while it should generally be
ok to use per-quad lod for implicit derivatives, there's at least one test
which insists that in the case of cubemaps the shared lod value MUST come
from a pixel inside the primitive (due to the derivatives becoming different
if a different, larger major axis is chosen).
v2: based on Brian's feedback, clean up code a bit.
And use sign bit of major axis instead of pre-select s/t/r sign for coord
mirroring (which should be the same in the end, saves 2 ands).
Also fix two bugs with select/mirror of derivatives, the minor axes need to
use major axis sign as well (instead of major derivative axis sign), and
don't mistakenly use absolute values of major derivative and inverse major
values.
Reviewed-by: Jose Fonseca <[email protected]>
There are two reasons for this:
1) even when ignoring rho approximation for cube maps, the result is still
not correct, but it's better as the max error at edges is now sqrt(2) instead
of 2 (which was a full mip level), same as it is for ordinary 2d maps when
doing rho approximations (so the error actually goes from factor 2 at edges and
sqrt(2) completely inside a face to sqrt(2) at edges and 0 inside a face).
2) I want to repurpose rho_no_approx for cubemaps for fully correct cubemap
derivatives (so don't need yet another debug var).
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
We were already setting the array size of unsized arrays that appeared
inside unnamed interface blocks, but we weren't updating
ir_variable::interface_type to reflect the new array size, causing
bogus link errors.
This patch causes array_sizing_visitor to keep track of all the
unnamed interface types it sees, and the ir_variables corresponding to
each one. After the visitor runs, a new function,
fixup_unnamed_interface_types(), adjusts each unnamed interface type
to correctly correspond with the array sizes in the ir_variables.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block-gs
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block-multiple
Reviewed-by: Jordan Justen <[email protected]>
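[Editor's illustration, not part of the original commit: the pattern these
tests exercise, where an unsized array in an unnamed block is implicitly
sized from the highest element the shader accesses.]

    #version 150
    out Outputs {
        float values[];          /* unsized here ... */
    };

    void main()
    {
        values[2] = 1.0;         /* ... implicitly sized to 3 by this access */
        gl_Position = vec4(0.0);
    }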
When multiple shaders of the same type access an interface block
containing an unsized array, we need to set the array size based on
the maximum array element accessed across all the shaders. This is
similar to what we already do with unsized arrays occurring outside of
interface blocks.
Note: one corner case is not yet addressed by these patches: the case
where one compilation unit defines an interface block containing
unsized arrays and another compilation unit defines the same interface
block containing sized arrays.
Fixes piglit test:
- spec/glsl-1.50/execution/unsized-in-named-interface-block-multiple
Reviewed-by: Jordan Justen <[email protected]>
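[Editor's illustration, not part of the original commit: two vertex shader
compilation units linked together; the array is sized from the maximum
element accessed across both.]

    /* a.vert */
    #version 150
    out Block { float vals[]; };
    void write_low()  { vals[1] = 1.0; }

    /* b.vert */
    #version 150
    out Block { float vals[]; };
    void write_high() { vals[3] = 2.0; }   /* linked size becomes 4 */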
Unsized arrays appearing inside named interface blocks now get a
proper size assigned by the array_sizing_visitor.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-named-interface-block
- spec/glsl-1.50/execution/unsized-in-named-interface-block-gs
- spec/glsl-1.50/linker/unsized-in-named-interface-block
- spec/glsl-1.50/linker/unsized-in-named-interface-block-gs
- spec/glsl-1.50/linker/unsized-in-unnamed-interface-block-gs (*)
(*) is fixed by dumb luck--support for unsized arrays in unnamed
interface blocks will come in a later patch.
Reviewed-by: Jordan Justen <[email protected]>
This patch modifies update_max_array_access() so that it updates
ir_variable::max_ifc_array_access to reflect the shader's use of
arrays appearing within interface blocks.
v2: Use an ordinary function in ast_array_index.cpp rather than a
virtual function in ir_rvalue. Avoid dereferencing NULL when handling
accesses to ordinary structs.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
For interface blocks that contain arrays, this field will contain the
maximum element of each contained array that is accessed by the
shader. This is a first step toward supporting unsized arrays in
interface blocks.
Reviewed-by: Jordan Justen <[email protected]>
In a future patch, this will allow us to enforce invariants when the
interface type is updated.
Reviewed-by: Jordan Justen <[email protected]>
Currently, when converting an access to an array element from ast to
IR, we need to see if the array is an ir_dereference_variable, and if
so update the variable's max_array_access.
When we add support for unsized arrays in interface blocks, we'll also
need to account for cases where the array is an ir_dereference_record
and the record is an interface block.
To make this easier, move the update into its own function.
v2: Use an ordinary function in ast_array_index.cpp rather than a
virtual function in ir_rvalue.
Reviewed-by: Jordan Justen <[email protected]>
Although it's not explicitly stated in the GLSL 1.50 spec, unsized
arrays are allowed in interface blocks.
section 1.2.3 (Changes from revision 5 of version 1.5) of the GLSL
1.50 spec says:
* Completed full update to grammar section. Tested spec examples
against it:
...
* add unsized arrays for block members
And section 7.1 (Vertex and Geometry Shader Special Variables)
includes an unsized array in the built-in gl_PerVertex interface
block:
    out gl_PerVertex {
        vec4 gl_Position;
        float gl_PointSize;
        float gl_ClipDistance[];
    };
Furthermore, GLSL 4.30 contains an example of an unsized array
occurring inside an interface block. From section 4.3.9 (Interface
Blocks):
    uniform Transform { // API uses "Transform[2]" to refer to instance 2
        mat4 ModelViewMatrix;
        mat4 ModelViewProjectionMatrix;
        vec4 a[]; // array will get implicitly sized
        float Deformation;
    } transforms[4];
This patch adds the parser rule to support unsized arrays inside
interface blocks. Later patches in the series will add the
appropriate semantics to handle them.
Fixes piglit tests:
- spec/glsl-1.50/execution/unsized-in-unnamed-interface-block
- spec/glsl-1.50/linker/unsized-in-unnamed-interface-block
Reviewed-by: Jordan Justen <[email protected]>
Interface declarations have two names associated with them: the block
name and the instance name. It's the block name that needs to be
passed to get_interface_instance(). This patch renames the argument
so that there's no confusion.
Reviewed-by: Kenneth Graunke <[email protected]>
BLORP performs blits by drawing a rectangle with a shader that samples
from the source texture, and writes color data to the destination.
The sampler always returns 32-bit RGBA float data, regardless of the
source format's component ordering or data type. Likewise, the render
target write message takes 32-bit RGBA float data, and converts it
appropriately. So the bulk of the work is already taken care of for us.
This greatly accelerates a lot of CopyTexSubImage calls, and makes
Legends of Aethereus playable on Ivybridge. At the default settings,
LOA continually blits between SRGBA8888 (the window format) and
RGBA16_FLOAT. Since neither BLORP nor our BLT paths supported this,
it fell back to meta, spending 33% of the CPU in floorf() converting
between floats and half-floats.
v2: Use != instead of ^ (suggested by Ian). Note that only
CopyTexSubImage is affected by this patch (caught by Eric).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>