| Commit message | Author | Age | Files | Lines |
| |
execinfo.h is not available on NetBSD.
Fixes this build error:
CC glapi_gentable.lo
glapi_gentable.c:44:22: fatal error: execinfo.h: No such file or directory
Signed-off-by: Vinson Lee <[email protected]>
| |
This was overriding the top-level .dir-locals.el causing some settings
(like forcing spaces instead of tabs!) to be lost.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
We should be able to safely set the framebuffer state without a
fragment shader bound. bind_ps_state will take care of updating the
necessary state bits later.
v2: check in update_db_shader_control
| |
Fixes GL2ExtensionTests/egl_image_external/TestSimpleUnassociated.test
which is part of the gles2/3 conformance suite. Here, image-external
textures are switched to be treated the same as 2D textures. These
can be associated with the fallback texture providing fixed sample
values of (0, 0, 0, 1).
The OES_EGL_image_external spec says:
"Sampling an external texture which is not associated with any EGLImage
sibling will return a sample value of (0,0,0,1)."
"External textures cannot be used with TexImage2D, TexSubImage2D,
CompressedTexImage2D, CompressedTexSubImage2D, CopyTexImage2D, or
CopyTexSubImage2D, and an INVALID_ENUM error will be generated if
this is attempted."
And quoting Chad:
"That's enforced in _mesa_TexImage*() by calling
legal_teximage_target(), and enforced in _mesa_TexSubImage*() by
calling legal_texsubimage_target(). Each of the
legal_tex*image_target() functions reject external textures.
Therefore, allowing GL_TEXTURE_EXTERNAL_OES in store_texsubimage()
won't violate the above spec quote.
I think it's safe to allow GL_TEXTURE_EXTERNAL_OES in
store_texsubimage(), as long as the texture has only a single
plane. Luckily, that's the only type of external textures that
Mesa currently supports."
CC: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
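For illustration, a minimal sketch of the kind of 1x1 fallback texture
that yields the (0, 0, 0, 1) sample value described above; this is not
Mesa's internal code, and the helper name is made up:

    #include <GLES2/gl2.h>

    /* Hypothetical: build a 1x1 "black, opaque" texture that an external
     * texture with no EGLImage sibling could be backed by. */
    static GLuint
    make_fallback_texture(void)
    {
       static const GLubyte rgba[4] = { 0, 0, 0, 255 };   /* (0, 0, 0, 1) */
       GLuint tex;

       glGenTextures(1, &tex);
       glBindTexture(GL_TEXTURE_2D, tex);
       /* Uploaded through the ordinary 2D path, matching the idea above of
        * treating a single-plane external texture like a 2D texture. */
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);
       return tex;
    }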
| |
Extend the fast texture upload from BGRA X-tiled to include RGBA,
Alpha/Luminance, and Y-tiled. Speed improvements, measured with the
mesa demos teximage program on a 256 x 256 texture, in MB/s, on a
Sandy Bridge (Ivy Bridge is comparable):
                   before   after   increase
    BGRA/X-tiled     3266    4524      1.39x
    BGRA/Y-tiled     1739    3971      2.28x
    RGBA/X-tiled      474    4694      9.90x
    RGBA/Y-tiled      477    3368      7.06x
    L/X-tiled        1268    1516      1.20x
    L/Y-tiled        1439    1581      1.10x
v2: Cosmetic changes only: reformat and reword comments, make doxygen-friendly,
rename variables, use existing macros, add an assert.
Signed-off-by: Frank Henigman <[email protected]>
Reviewed-and-tested-by: Chad Versace <[email protected]>
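As background for the tiling terms above, a rough sketch (illustrative
names, no address swizzling, not the patch's code) of how a byte position
maps into an Intel X-tiled surface, where a 4096-byte tile is 8 rows of
512 bytes and tiles are laid out row-major across the surface:

    #include <stdint.h>

    static uint32_t
    x_tiled_offset(uint32_t byte_x, uint32_t y, uint32_t pitch_bytes)
    {
       uint32_t tiles_per_row = pitch_bytes / 512;
       uint32_t tile_base = ((y / 8) * tiles_per_row + byte_x / 512) * 4096;

       /* Offset of the tile, plus the position inside the 512x8 tile. */
       return tile_base + (y % 8) * 512 + byte_x % 512;
    }

A fast upload path copies whole spans within a tile at a time instead of
evaluating such a mapping per texel, which is broadly where the gains in
the table come from.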
| |
* Fix LLVM library and defines
* Only enable tracing when scons build=debug
Acked-by: Brian Paul <[email protected]>
| |
* /boot/common no longer exists in Haiku as of
a few days ago (and this is undefined)
Acked-by: Brian Paul <[email protected]>
| |
This is part of the prep for megadrivers, which won't allow using a single
global symbol due to the fact that there will be multiple drivers built
into the same dri.so file. For that, we'll need screen init to take a
reference to the driver to set up this vtable.
v2: Fix two missed references to driDriverAPI.
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
| |
The code in intel_screen.c used to dispatch to one of 3 driver functions,
but was down to 1, so it was kind of a waste. In addition, it was trying to
free all of the data that might have been partially freed in the kernel
3.6 check (which comes after intelInitContext, and thus might have had
driverPrivate set and result in intelDestroyContext() doing work on the
freed data). By moving the driverPrivate setup earlier, we can use
intelDestroyContext() consistently and avoid such problems in the future.
v2: Adjust the prototype of brwCreateContext to use the proper enum
(fixing a compiler warning in some builds)
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
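A self-contained sketch of the "set driverPrivate early, then always clean
up through one path" pattern described above; the struct and function
names are invented and only stand in for the driver's real ones:

    #include <stdbool.h>
    #include <stdlib.h>

    struct example_context {        /* stands in for the driver context */
       int hw_initialized;
    };

    struct example_dri_context {    /* stands in for __DRIcontext */
       void *driverPrivate;
    };

    static void
    example_destroy_context(struct example_dri_context *dri)
    {
       /* Safe even for a partially constructed context. */
       free(dri->driverPrivate);
       dri->driverPrivate = NULL;
    }

    static bool
    example_create_context(struct example_dri_context *dri)
    {
       struct example_context *ctx = calloc(1, sizeof(*ctx));

       if (!ctx)
          return false;

       dri->driverPrivate = ctx;    /* set before anything that can fail */

       ctx->hw_initialized = 1;     /* imagine a fallible init step here */
       if (!ctx->hw_initialized) {
          example_destroy_context(dri);   /* single cleanup path */
          return false;
       }
       return true;
    }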
| |
If bufmgr didn't get created, then screen creation failed, and we never
should have got here in the first place. This was added by Chris Wilson
in 2010 with no explanation for why it would be needed.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
i965, i915, radeon, r200, swrast, and nouveau were mostly trying to do the
same logic, except where they failed to. Notably, swrast had code that
appeared to try to enable GLES1/2 but forgot to set api_mask (thus
preventing any gles context from being created), and the non-intel drivers
didn't support MESA_GL_VERSION_OVERRIDE.
nouveau still relies on _mesa_compute_version(), because I don't know what
its limits actually are, and gallium drivers don't declare limits up front
at all. I think I've heard talk about doing so, though.
v2: Compat max version should be 30 (noted by Ken)
Drop r100's custom max version check, too (noted by Emil Velikov)
Reviewed-by: Kenneth Graunke <[email protected]>
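A rough sketch of the kind of shared helper this centralizes; the struct
and field names are invented, and only the MESA_GL_VERSION_OVERRIDE
("MAJOR.MINOR") environment variable comes from the text above:

    #include <stdio.h>
    #include <stdlib.h>

    struct example_screen {          /* hypothetical per-driver limits */
       int max_gl_compat_version;    /* e.g. 30 for GL 3.0 */
       int max_gl_core_version;
    };

    /* Pick the compat context version once, in shared code, so individual
     * drivers cannot forget the override handling. */
    static int
    example_compat_version(const struct example_screen *screen)
    {
       int version = screen->max_gl_compat_version;
       const char *override = getenv("MESA_GL_VERSION_OVERRIDE");
       int major, minor;

       if (override && sscanf(override, "%d.%d", &major, &minor) == 2)
          version = major * 10 + minor;

       return version;
    }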
| |
The only important difference was not calling drmGetVersion, and making
the swrast extension vtable. That doesn't justify duplicating the other
330 lines of code.
v2: fix the scons build (code by Emil Velikov)
v3: fix scons build with swrast-only (code by Emil Velikov)
v4: Drop the new define I added, when we already have __NOT_HAVE_DRM_H.
Signed-off-by: Emil Velikov <[email protected]>
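For context, a schematic of how one source file can cover both builds
using the existing __NOT_HAVE_DRM_H define mentioned in v4; the function
body here is hypothetical:

    #ifndef __NOT_HAVE_DRM_H
    #include <xf86drm.h>
    #endif

    static void
    example_query_drm_version(int fd)
    {
    #ifndef __NOT_HAVE_DRM_H
       drmVersionPtr v = drmGetVersion(fd);
       /* ... inspect the version as the DRM path did ... */
       drmFreeVersion(v);
    #else
       (void) fd;    /* swrast: no DRM device to query */
    #endif
    }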
| |
Reviewed-by: Kenneth Graunke <[email protected]>
| |
The code it was referencing was removed in 2010.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Looking at Lightsmark's shaders, the way we used MRFs (or in gen7's
case, GRFs) was bad in a couple of ways. One was that it prevented
compute-to-MRF for the common case of a texcoord that gets used
exactly once, but where the texcoord setup all gets emitted before the
texture calls (such as when it's a bare fragment shader input, which
gets interpolated before processing main()). Another was that it
introduced a bunch of dependencies that constrained scheduling, and
forced waits for texture operations to be done before they are
required. For example, we can now move the compute-to-MRF
interpolation for the second texture send down after the first send.
The downside is that this generally prevents
remove_duplicate_mrf_writes() from doing anything, whereas previously
it avoided work for the case of sampling from the same texcoord twice.
However, I suspect that most of the win that originally justified that
code was in avoiding the WAR stall on the first send, which this patch
also avoids, rather than the small cost of the extra instruction. We
see instruction count regressions in shaders in unigine, yofrankie,
savage2, hon, and gstreamer.
Improves GLB2.7 performance by 0.633628% +/- 0.491809% (n=121/125, avg of
~66fps, outliers below 61 dropped).
Improves openarena performance by 1.01092% +/- 0.66897% (n=425).
No significant difference on Lightsmark (n=44).
v2: Squash in the fix for register unspilling for send-from-GRF, fixing a
segfault in lightsmark.
Reviewed-by: Kenneth Graunke <[email protected]>
Acked-by: Matt Turner <[email protected]>
| |
For texturing from GRFs, we now have payloads of arbitrary sizes up to the
message length limit.
v2 (Kenneth Graunke): Rebase on intel_context -> brw_context change.
v3: Add some comment text.
v4: Change some magic 16s to BRW_MAX_MRF (noted by Ken). Leave the 11,
which is the magic "max sampler message length". BRW_MAX_MRF sizing
on the little int arrays is retained because I could see us needing to
extend in the future if we move to GRFs for FB writes (those go to at
least 12 long in a quick scan of the specs)
Reviewed-by: Kenneth Graunke <[email protected]> (v2)
Acked-by: Matt Turner <[email protected]>
| |
This will let us coalesce into texture-from-GRF arguments, which would
otherwise be prevented due to the live interval for the whole vgrf
extending across all the MOVs setting up the channels of the message.
v2 (Kenneth Graunke): Rebase for renames.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
v2 (Kenneth Graunke): Rebase on s/live_variables/live_intervals/g.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Now optimization passes will be able to look at the per-channel ranges.
v2: Rebase on various optimization pass changes.
v3 (Kenneth Graunke): Rename live_variables to live_intervals; split
introduction of invalidate_live_intervals() into a separate patch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
When compacting the list of VGRFs, we patch up the live interval ranges
(which are indexed by VGRF number). Unfortunately, once we make
per-component data available, this will become too complicated to
maintain. Instead, simply invalidate them.
This was pulled out of a patch by Eric Anholt.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
In compute_live_intervals(), start and end are shorter names for
the virtual_grf_start and virtual_grf_end class members.
Now that the fs_live_intervals class has arrays named start and end
which are indexed by var, rather than VGRF, reusing the name is
confusing. Plus, most of the code has been factored out, so using the
long names isn't as inconvenient.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
This is the information we'll actually use to replace the
virtual_grf_start[]/end[] arrays.
No change in shader-db.
v2 (Kenneth Graunke): Rebase; minor comment updates.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
These blocks are about to grow some more code, and the indentation was
getting out of hand.
v2 (Kenneth Graunke): Rebase, minor typo fixes and style changes.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
For now, this simply sets live_intervals_valid = false, but in the
future it will do something more sophisticated.
Based on a patch by Eric Anholt.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
This significantly improves our handling of VGRFs of size > 1.
Previously, we only marked VGRFs as def'd if the whole register was
written by a single instruction. Large VGRFs which were written
piecemeal would not be considered def'd at all, even if they were
ultimately completely written.
Without being def'd, these were then marked "live in" to the basic
block, often extending the range to preceding blocks and sometimes
even the start of the program.
The new per-component tracking gives more accurate live intervals,
which makes register coalescing more effective.
In the future, this should help with texturing from GRFs on Gen7+.
A sampler message might be represented by a 2-register VGRF which
holds the texture coordinates. If those are incoming varyings,
they'll be produced by two PLN instructions, which are piecemeal writes.
No reduction in shader-db instruction counts. However, code which
prints the live interval ranges does show that some VGRFs now have
smaller (and more correct) live intervals.
v2: Rebase on current send-from-GRF code requiring adding extra use[]s.
v3: Rebase on live intervals fix to include defs in the end of the
interval.
v4 (Kenneth Graunke): Rebase; split off a few preparatory patches;
add lots of comments; minor style changes; rewrite commit message.
v5 (Eric Anholt): whitespace nit.
Written-by: Eric Anholt <[email protected]> [v1-3]
Signed-off-by: Kenneth Graunke <[email protected]> [v4]
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]> (v4)
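To make the per-channel bookkeeping concrete, here is a generic sketch
(single basic block only, so no inter-block dataflow; the names are
invented and this is not the fs_live_variables code): each channel of a
VGRF gets its own start/end range, so a piecemeal write tightens exactly
the channels it touches:

    #include <limits.h>
    #include <stdio.h>

    #define MAX_VARS 64              /* one "var" per (VGRF, channel) pair */

    struct interval { int start, end; };
    static struct interval ivals[MAX_VARS];

    static void
    touch(int ip, int vgrf_base, int channel)
    {
       struct interval *iv = &ivals[vgrf_base + channel];
       if (ip < iv->start) iv->start = ip;
       if (ip > iv->end)   iv->end = ip;
    }

    int
    main(void)
    {
       for (int i = 0; i < MAX_VARS; i++)
          ivals[i] = (struct interval){ INT_MAX, -1 };

       /* A 2-channel texture-coordinate VGRF written piecemeal by two
        * interpolation instructions, then read by a single send:
        *   ip 0:  PLN  coord.x           (writes channel 0)
        *   ip 1:  PLN  coord.y           (writes channel 1)
        *   ip 2:  SEND tex <- coord.xy   (reads channels 0 and 1)      */
       touch(0, 0, 0);
       touch(1, 0, 1);
       touch(2, 0, 0);
       touch(2, 0, 1);

       for (int c = 0; c < 2; c++)
          printf("coord channel %d live from ip %d to ip %d\n",
                 c, ivals[c].start, ivals[c].end);
       return 0;
    }

Tracking only whole VGRFs cannot mark such a register as def'd at all
(no single instruction writes all of it), which is what previously pushed
its interval back to the start of the block; per-channel ranges keep each
channel's interval as tight as its actual first write and last read.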
| |
num_vars was shorthand for the number of virtual GRFs. num_vgrfs is a
bit clearer. Plus, the next patch will introduce "vars" which are
distinct from vgrfs.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
This has no functional effect, but should make subsequent changes a
little simpler.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
Fixes piglit tests:
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-in-after-other-usage.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-after-other-usage.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-after-usage.geom
- spec/glsl-1.50/compiler/vs-redeclares-pervertex-out-after-other-usage.vert
- spec/glsl-1.50/compiler/vs-redeclares-pervertex-out-after-usage.vert
Reviewed-by: Ian Romanick <[email protected]>
| |
Fixes piglit test
spec/glsl-1.50/execution/redeclare-pervertex-subset-vs-to-gs.
Reviewed-by: Ian Romanick <[email protected]>
| |
From section 4.1.9 (Arrays) of the GLSL 4.40 spec (as of revision 7):
However, unless noted otherwise, blocks cannot be redeclared;
an unsized array in a user-declared block cannot be sized
through redeclaration.
The only place where the spec notes that interface blocks can be
redeclared is to allow for redeclaration of built-in interface blocks
such as gl_PerVertex. Therefore, user-defined interface blocks can
never be redeclared. This is a clarification of previous intent (see
Khronos bug 10659).
We were already preventing interface block redeclaration using the
same block name at compile time, but we weren't preventing interface
block redeclaration using the same instance name (and different block
names) at compile time. And we weren't preventing an instance name
from conflicting with a previously-declared ordinary variable.
In practice the problem would be caught at link time, but only because
of a coincidence: since ast_interface_block::hir() wasn't doing any
checking to see if the instance name already existed in the shader, it
was creating a second ir_variable in the shader having the same name
but a different type. Coincidentally, when the linker checked for
intrastage consistency of global variable declarations, it treated the
two declarations from the same shader as a conflict, so it reported a
link error.
But it seems dangerous to rely on that linker behaviour to catch
illegal redeclarations that really ought to be detected at compile
time.
Reviewed-by: Ian Romanick <[email protected]>
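As an illustration of the case now caught at compile time (shader source
shown as a C string; the exact diagnostic wording is the implementation's
choice):

    /* Reusing an instance name in a second, differently named interface
     * block must be rejected when the shader is compiled, not left for
     * the linker to trip over. */
    static const char *bad_vs_source =
       "#version 150\n"
       "out BlockA { vec4 a; } inst;\n"
       "out BlockB { vec4 b; } inst;   // same instance name: compile error\n"
       "void main() { gl_Position = vec4(0.0); }\n";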
| |
Fixes piglit tests:
- spec/glsl-1.50/execution/redeclare-pervertex-out-subset-gs
- spec/glsl-1.50/execution/redeclare-pervertex-subset-vs
Reviewed-by: Ian Romanick <[email protected]>
| |
This patch verifies that:
- The gl_PerVertex input interface block may only be redeclared in a
geometry shader, and that it may only be redeclared as gl_in[].
- The gl_PerVertex output interface block may only be redeclared in a
vertex or geometry shader, and that it may only be redeclared as a
non-array without an interface name.
- gl_PerVertex may not be redeclared as any other type of interface
block (i.e. as a uniform interface block).
As a side-effect, the code now keeps track of what the previous
declaration of gl_PerVertex was--this will be needed in future
patches.
Fixes piglit tests:
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-in-with-incorrect-name.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-as-array.geom
- spec/glsl-1.50/compiler/gs-redeclares-pervertex-out-with-instance-name.geom
Reviewed-by: Ian Romanick <[email protected]>
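Written out as shader source (again as C strings, following the rules
listed above), the permitted redeclarations look like this; the
gl_PerVertex member list is trimmed to gl_Position for brevity:

    /* Vertex shader: the gl_PerVertex output may only be redeclared as a
     * non-array block with no instance name. */
    static const char *vs_redeclare =
       "#version 150\n"
       "out gl_PerVertex { vec4 gl_Position; };\n"
       "void main() { gl_Position = vec4(0.0); }\n";

    /* Geometry shader: the gl_PerVertex input may only be redeclared as
     * gl_in[]. */
    static const char *gs_redeclare =
       "#version 150\n"
       "layout(points) in;\n"
       "layout(points, max_vertices = 1) out;\n"
       "in gl_PerVertex { vec4 gl_Position; } gl_in[];\n"
       "void main() { gl_Position = gl_in[0].gl_Position; EmitVertex(); }\n";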
| |
In later patches, we'll use this in order to implement the required
behaviour that after the gl_PerVertex interface block has been
redeclared, only members of the redeclared interface block may be
used.
v2: Update the function name and comment to clarify that we aren't
actually removing the variable from the symbol table, just disabling
it.
Reviewed-by: Ian Romanick <[email protected]>
| |
This will be used by future patches to change an ir_variable's
interface type when the gl_PerVertex built-in interface block is
redeclared.
Reviewed-by: Ian Romanick <[email protected]>
| |
This patch modifies the get_variable_being_redeclared() function so
that it no longer relies on the ast_declaration for the variable being
redeclared. In future patches, this will allow
get_variable_being_redeclared() to be used for processing
redeclarations of the built-in gl_PerVertex interface block.
v2: Also make get_variable_being_redeclared() static.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Fixes piglit test
spec/glsl-1.10/compiler/struct/struct-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Note: we need to make an exception for the gl_PerVertex interface
block, since in geometry shaders it is allowed to be redeclared with
the instance name gl_in. Future patches will make redeclaration of
gl_PerVertex work properly.
Fixes piglit test
spec/glsl-1.50/compiler/interface-block-instance-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Note: we need to make an exception for the gl_PerVertex interface
block, since built-in variables are allowed to be redeclared inside
it. Future patches will make redeclaration of gl_PerVertex work
properly.
Fixes piglit tests:
- spec/glsl-1.50/compiler/interface-block-array-elem-uses-gl-prefix.vert
- spec/glsl-1.50/compiler/named-interface-block-elem-uses-gl-prefix.vert
- spec/glsl-1.50/compiler/unnamed-interface-block-elem-uses-gl-prefix.vert
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Note: we need to make an exception for the gl_PerVertex interface
block, since this is allowed to be redeclared. Future patches will
make redeclaration of gl_PerVertex work properly.
Fixes piglit test
spec/glsl-1.50/compiler/interface-block-name-uses-gl-prefix.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Note: some limited amount of redeclaration is actually allowed,
provided the shader is redeclaring the built-in gl_PerVertex interface
block. Support for this will be added in future patches.
Fixes piglit tests
spec/glsl-1.50/compiler/unnamed-interface-block-elem-conflicts-with-prev-{block-elem,global}.vert.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
GLSL reserves identifiers beginning with "gl_" or containing "__", but
we haven't been consistent about enforcing this rule. This patch
makes a new function to check whether identifier names are valid. In
the process it closes a loophole where we would previously allow
function argument names to contain "__".
v2: Rename check_valid_identifier() -> validate_identifier(). Add
curly braces in validate_identifier().
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
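The rule itself is short when written out; a standalone sketch of the
check (the real validate_identifier() also reports the error with its
source location, which is omitted here, and the function name below is
not Mesa's):

    #include <stdbool.h>
    #include <string.h>

    /* GLSL reserves identifiers that begin with "gl_" or that contain
     * consecutive underscores ("__"). */
    static bool
    identifier_is_reserved(const char *name)
    {
       return strncmp(name, "gl_", 3) == 0 || strstr(name, "__") != NULL;
    }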
| |
In commit e2660770731b018411fbe1620cacddaf8dff5287 (glsl: Keep track
of location for interface block fields), I neglected to update
glsl_type::record_key_compare to account for the fact that interface
types now contain location information. As a result, interface types
that differ only by their location information would not be properly
distinguished.
At the moment this is not a problem, because the only interface block
in which location information != -1 is gl_PerVertex, and gl_PerVertex
is always created in the same way. However, in the patches that
follow, we'll be adding new ways to create gl_PerVertex (by
redeclaring it), so we'll need location information to be handled
properly.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Although these interfaces can't be accessed directly by GLSL (since
they don't have an instance name), they will be necessary in order to
allow redeclarations of gl_PerVertex.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Currently, we create just a single gl_PerVertex interface block for
geometry shader inputs. In later patches, we'll also need to create
an interface block for geometry and vertex shader outputs. Moving the
code into its own class will make reuse easier.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Previously, we erroneously used the name "gl_in" for both the block
name and the instance name.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
We use a location of -1 for variables which don't have their own
assigned locations--this includes ir_variables which represent named
interface blocks. Technically the location assigned to gl_in doesn't
matter, since gl_in is only accessed via its members (which have their
own locations). But it's nice to be consistent.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Calling radeon_drm_cs_flush from multiple threads might cause deadlocks;
fix this by immediately signaling the semaphore after waiting for it.
This is a candidate for the stable branch(es).
Partially fixes: https://bugs.freedesktop.org/show_bug.cgi?id=70123
v2: some fixes on commit message
Signed-off-by: Christian König <[email protected]>
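The shape of the fix, as a generic POSIX-semaphore sketch rather than the
winsys code itself: the flusher waits for the previous flush to be
signalled as complete and then puts the token straight back, so a second
thread calling flush is never left blocked on a token the first thread is
still holding:

    #include <semaphore.h>

    static sem_t flush_completed;

    static void
    example_init(void)
    {
       sem_init(&flush_completed, 0, 1);   /* "previous flush has completed" */
    }

    static void
    example_flush(void)
    {
       sem_wait(&flush_completed);    /* wait for the previous flush... */
       sem_post(&flush_completed);    /* ...and signal again immediately */

       /* ... queue the new flush work here ... */
    }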
| |
_GNU_SOURCE appears to not be used reliably. Use _MSC_VER instead so
that MSVC alone is affected.
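The shape of the change, with a made-up compatibility definition standing
in for whatever the header actually guards:

    /* Hypothetical shim.  Keying it off _MSC_VER limits it to the compiler
     * that needs it, whereas a guard such as "#ifndef _GNU_SOURCE" also
     * fires on compilers where _GNU_SOURCE simply was not defined. */
    #ifdef _MSC_VER
    #include <string.h>
    #define example_strcasecmp(a, b) _stricmp(a, b)
    #else
    #include <strings.h>
    #define example_strcasecmp(a, b) strcasecmp(a, b)
    #endif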
| |
Unless the polygon fill mode is different from PIPE_POLYGON_MODE_FILL,
so checking the polygon mode is sufficient.
Testing done: no regression in polygon-mode-offset
Reviewed-by: Roland Scheidegger <[email protected]>
| |
Not used in ages, and it wouldn't work at all with explicit derivatives now
(not that it did before as it ignored them but now the code would just use
the derivs pre-projected which would be quite random numbers).
v2: also get rid of 3 helper functions no longer used.
Reviewed-by: Jose Fonseca <[email protected]>