| Commit message | Author | Age | Files | Lines |
|
The GL_ARB_texture_rg spec says that we need to support both texturing
and rendering for the GL_RED and GL_RG formats. So move the format
check up into the rendertarget_mapping[] list. Also, add
PIPE_FORMAT_R8_UNORM to the list of formats required.
Note: This is a candidate for the stable branches.
Reviewed-by: Marek Olšák <[email protected]>
|
We had been inserting instructions ad hoc for some SEND messages, with no
knowledge of when they were actually required (so extra instructions), but
not for all SENDs (so not often enough). This should do much better than
that, though it's still flow-control-ignorant.
v2: Use BRW_MAX_MRF instead of magic numbers.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58960
Reviewed-by: Kenneth Graunke <[email protected]>
NOTE: Candidate for the stable branches.
|
We have to use the EGL Wayland event queue for the roundtrip, so use the
wayland_roundtrip() helper, which does just that.
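For reference, a queue-scoped roundtrip in libwayland looks roughly like this (a minimal sketch of the pattern, not Mesa's exact helper):

    #include <stdbool.h>
    #include <stdint.h>
    #include <wayland-client.h>

    static void sync_done(void *data, struct wl_callback *cb, uint32_t serial)
    {
       *(bool *)data = true;
       wl_callback_destroy(cb);
    }

    static const struct wl_callback_listener sync_listener = { sync_done };

    /* Send a wl_display.sync request and dispatch the given queue until the
     * server's "done" event comes back, i.e. a roundtrip on that queue. */
    static int roundtrip_queue(struct wl_display *display,
                               struct wl_event_queue *queue)
    {
       bool done = false;
       struct wl_callback *cb = wl_display_sync(display);
       wl_proxy_set_queue((struct wl_proxy *)cb, queue);
       wl_callback_add_listener(cb, &sync_listener, &done);
       while (!done && wl_display_dispatch_queue(display, queue) != -1)
          ;
       return done ? 0 : -1;
    }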
|
In GLSL, sampler indices are allocated contiguously from 0. But in the
case of ARB_fragment_program (and possibly fixed function), an app that
uses texture units 0 and 2 will use sampler indices 0 and 2, so we were only
allocating space for samplers 0 and 1 and setting up sampler 0. We
would read garbage for sampler 2, resulting in flickering textures and
an angry simulator.
Fixes bad rendering in 0 A.D. and ETQW. This was fixed for pre-gen7 by
28f4be9eb91b12a2c6b1db6660cca71a98c486ec
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=25201
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58680
Reviewed-by: Kenneth Graunke <[email protected]>
NOTE: This is a candidate for stable branches.
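The sizing pitfall, as a minimal sketch (the helper and mask layout are hypothetical, not the actual i965 code):

    #include <stdint.h>

    /* With texture units 0 and 2 in use (mask 0b101), counting set bits
     * yields 2 slots and unit 2 reads garbage; the table must instead be
     * sized by the highest used index + 1, i.e. 3 slots here. */
    static unsigned sampler_slots_needed(uint32_t used_mask)
    {
       if (used_mask == 0)
          return 0;
       return 32 - __builtin_clz(used_mask); /* highest set bit + 1 */
    }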
|
Reviewed-by: Brian Paul <[email protected]>
|
I should really say "fix", but the format had never been used until now.
S8Z24 is the format equivalent to the GL_UNSIGNED_INT_24_8 packing, so we'll
start to see it more often now that st/mesa makes smart decisions about
formats.
The DB<->CB copy can change the channel ordering for transfers; other than
that, the internal DB format doesn't really matter.
R600-R700 support is possible, except for shadow mapping: FMT_24_8 is broken
if the SAMPLE_C instruction is used (no idea why).
Also, the sampler swizzling was broken in theory; the fact that it worked was
a lucky coincidence.
radeonsi might need to port this.
Reviewed-by: Jerome Glisse <[email protected]>
|
Fixes 8 more little piglit tests.
NOTE: This is a candidate for the 9.1 branch.
|
Fixes uninitialized pointer field defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
The default alignment is 4, so this fast path was rarely hit. Rather
than introduce logic to handle alignment, just use the Mesa core
function.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=46632
Cc: [email protected]
Reviewed-by: Kenneth Graunke <[email protected]>
|
The code was rather broken for non-XYZW on 8-wide, but all of our
callers were using XYZW anyway. For my experiments with using writemask
on texturing, I've been using manual header setup in the compiler
backends, since we want to actually know what registers are written for
optimization and register allocation.
Reviewed-by: Kenneth Graunke <[email protected]>
|
In 2 of our checks, we were missing BREAK and CONTINUE.
NOTE: Candidate for the stable branches.
Reviewed-by: Kenneth Graunke <[email protected]>
|
Detect a duplicate shader type as an error instead of silently allowing it,
and restrict the check to the ES2 API.
v2: Tapani Pälli <[email protected]>
- make the check run-time instead of compile-time
v3: chadv
- Quote the spec on which error to generate.
Signed-off-by: bma <[email protected]>
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-and-tested-by: Chad Versace <[email protected]>
|
Fixes broken clipping in supertuxkart and presumably many other applications.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51471
NOTE: Candidate for the stable branches.
Reviewed-by: Kenneth Graunke <[email protected]>
|
The hardware just doesn't support it. I suspect this was a regression from
the move to fixed MESA_FORMATs for compressed textures, and that previously
we were storing this uncompressed or something.
Fixes GPU hangs in piglit "texwrap GL_EXT_texture_sRGB-s3tc bordercolor
swizzled" on my GM965.
Reviewed-by: Kenneth Graunke <[email protected]>
|
All of the GLSL specs from GLSL 1.30 (and GLSL ES 3.00) onward contain
language requiring certain integer variables to be declared with the
"flat" keyword, but they differ in exactly *when* the rule is
enforced:
(a) GLSL 1.30 and 1.40 say that vertex shader outputs having integral
type must be declared as "flat". There is no restriction on fragment
shader inputs.
(b) GLSL 1.50 through 4.30 say that fragment shader inputs having
integral type must be declared as "flat". There is no restriction on
vertex shader outputs.
(c) GLSL ES 3.00 says that both vertex shader outputs and fragment
shader inputs having integral type must be declared as "flat".
Previously, Mesa's behaviour was consistent with (a). This patch
makes it consistent with (b) when compiling desktop shaders, and (c)
when compiling ES shaders.
Rationale for desktop shaders: once we add geometry shaders, (b) really
seems like the right choice, because it requires "flat" in just the
situations where it matters. Since we may want to extend geometry
shader support back before GLSL 1.50 (via ARB_geometry_shader4), it
seems sensible to apply this rule to all GLSL versions. Also, this
matches the behaviour of the nVidia proprietary driver for Linux, and
the expectations of Intel's oglconform test suite.
Rationale for ES shaders: since the behaviour specified in GLSL ES
3.00 matches neither pre-GLSL-1.50 nor post-GLSL-1.50 behaviour, it
seems likely that this was a deliberate choice on the part of the GLES
folks to be more restrictive. Also, the argument in favor of (b)
doesn't apply to GLES, since it doesn't support geometry shaders at
all.
Some discussion about this has already happened on the Mesa-dev list.
See:
http://lists.freedesktop.org/archives/mesa-dev/2013-February/034199.html
Fixes piglit tests:
- glsl-1.30/compiler/interpolation-qualifiers/nonflat-*.frag
- glsl-1.30/compiler/interpolation-qualifiers/vs-flat-int-0{2,3,4,5}.vert
- glsl-es-3.00/compiler/interpolation-qualifiers/varying-struct-nonflat-{int,uint}.frag
Fixes oglconform tests:
- glsl-q-inperpol negative.fragin.{int,uint,ivec,uvec}
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
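As an illustration of rule (b), a hypothetical GLSL 1.50 fragment shader (example mine, not from the patch): the integral input must carry "flat" regardless of how the vertex shader declared its output.

    /* Hypothetical example: under GLSL 1.50 and later, omitting "flat"
     * here is a compile error, because the input has integral type. */
    static const char *fs_source =
       "#version 150\n"
       "flat in ivec2 tile_coord;\n"
       "out vec4 color;\n"
       "void main() { color = vec4(tile_coord, 0.0, 1.0); }\n";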
|
In the GLSL 1.30 spec, section 4.3.6 ("Outputs") says:
"If a vertex output is a signed or unsigned integer or integer
vector, then it must be qualified with the interpolation qualifier
flat."
The GLSL ES 3.00 spec further clarifies, in section 4.3.6 ("Output
Variables"):
"Vertex shader outputs that are, *or contain*, signed or unsigned
integers or integer vectors must be qualified with the
interpolation qualifier flat."
(Emphasis mine.)
The language in the GLSL ES 3.00 spec is clearly correct and should be
applied to all shading language versions, since varyings that contain
ints can't be interpolated, regardless of which shading language
version is in use.
(Note that in GLSL 1.50 the restriction is changed to apply to
fragment shader inputs rather than vertex shader outputs, to
accommodate the fact that in the presence of geometry shaders, vertex
shader outputs are not necessarily interpolated. That will be
addressed by a future patch.)
NOTE: This is a candidate for stable branches.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
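A hypothetical GLSL ES 3.00 snippet for the "or contain" case (example mine): the output must itself be qualified flat because one of its members is an int.

    /* Hypothetical example: the output *contains* an int, so the whole
     * variable must be declared "flat" under GLSL ES 3.00. */
    static const char *vs_source =
       "#version 300 es\n"
       "struct Info { int id; vec2 uv; };\n"
       "flat out Info v_info;\n"
       "void main() {\n"
       "   v_info.id = 1;\n"
       "   v_info.uv = vec2(0.0);\n"
       "   gl_Position = vec4(0.0);\n"
       "}\n";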
|
From GLSL ES 3.00 section 4.5.4 ("Default Precision Qualifiers"):
"The precision statement
precision precision-qualifier type;
can be used to establish a default precision qualifier. The type
field can be either int or float or any of the sampler types, and
the precision-qualifier can be lowp, mediump, or highp."
GLSL ES 1.00 has similar language. GLSL 1.30 doesn't allow precision
qualifiers on sampler types, but this seems like an oversight (since
the intention of including these in GLSL 1.30 is to allow
compatibility with ES shaders).
Previously, Mesa followed GLSL 1.30 and only allowed default precision
qualifiers to be set for float and int. This patch makes it follow
GLSL ES rules in all cases.
Fixes Piglit tests default-precision-sampler.{vert,frag}.
Partially addresses https://bugs.freedesktop.org/show_bug.cgi?id=60737.
NOTE: This is a candidate for stable branches.
Reviewed-by: Eric Anholt <[email protected]>
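A hypothetical GLSL ES 3.00 fragment shader exercising the rule (example mine):

    /* Hypothetical example: a default precision statement for a sampler
     * type, accepted per GLSL ES 3.00 section 4.5.4. */
    static const char *fs_source =
       "#version 300 es\n"
       "precision mediump float;\n"
       "precision lowp sampler2D;\n"
       "uniform sampler2D tex;\n"
       "in vec2 uv;\n"
       "out vec4 color;\n"
       "void main() { color = texture(tex, uv); }\n";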
|
Broken by 624528834f53f54c7a934f929769b7e6b230a0b1.
Reviewed-by: Brian Paul <[email protected]>
|
Otherwise, we fail to correctly handle GL_PRIMITIVE_RESTART_FIXED_INDEX.
Fixes gles3conform's primitive_restart_mode test.
NOTE: This is a candidate for the 9.1 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
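For context, GL_PRIMITIVE_RESTART_FIXED_INDEX defines the restart index as all ones for the index type in use; a minimal sketch of that mapping (helper name hypothetical):

    #include <stdint.h>

    /* Restart index under GL_PRIMITIVE_RESTART_FIXED_INDEX: 0xff, 0xffff
     * or 0xffffffff depending on the index buffer's element size. */
    static uint32_t fixed_restart_index(unsigned index_size_in_bytes)
    {
       return 0xffffffffu >> (32 - index_size_in_bytes * 8);
    }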
|
This commit allows using glGetTexImage during rendering while still
maintaining interactive framerates.
It improves the performance of WarCraft 3 under Wine: the framerate goes
from 25 fps to 39 fps in the main menu, and from 0.5 fps to 32 fps in the game.
v2: fix choosing the format for decompression
|
This is for glGetTexImage and it will be used for samplers only (which some
drivers already implement by reading util_format_description).
v2: incorporate Brian's suggestion
Reviewed-by: Brian Paul <[email protected]>
|
It seems that having alpha test enabled confuses the GPU about the order in
which it should perform Z testing, so force the order programmed
through the DB shader control register.
v2: Only force Z order when alpha test is enabled
v3: Update DB shader control when binding a new dsa + spelling fix
Signed-off-by: Jerome Glisse <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
In OpenGL 4.3, new language was added that would require this check, but if
the check results in broken applications then perhaps it will be reversed.
For now, remove the check and re-evaluate when
desktop GL 4.3 is closer.
NOTE: This is a candidate for the 9.1 branch.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
|
Together with the LLVM patches, this fixes 14 piglit tests in total.
v2: increase the const limit
v3: document the const limit
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
|
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
|
When the user specifies an unsupported GLSL version,
_mesa_glsl_parse_state::process_version_directive() nicely gives them
an error message telling them which GLSL versions are supported.
Previous to this patch, the logic for determining whether a given
language version was supported was independent from the logic to
generate this error message string; as a result, we had a bug where
GLSL 3.00 would never be listed in the error message as an available
language version, even if it was really available.
To make matters worse, the code for generating the error message
string assumed that desktop GL versions were always separated by 0.10,
an assumption that will be wrong as soon as we support GLSL 3.30.
This patch fixes both problems by adding a table of supported GLSL
versions to _mesa_glsl_parse_state; this table is used both to
generate the error message and to check whether a given version is
supported.
Reviewed-by: Ian Romanick <[email protected]>
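A minimal sketch of the idea (struct and names hypothetical): one table drives both the support check and the "supported versions" error string, so the two can never disagree.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical version table: a single source of truth used both to
     * validate a #version directive and to build the error message. */
    struct glsl_version {
       int ver;        /* e.g. 130 for GLSL 1.30, 300 for GLSL ES 3.00 */
       bool es;        /* true for GLSL ES versions */
       bool supported; /* hardcoded here; really driver-dependent */
    };

    static const struct glsl_version versions[] = {
       { 110, false, true }, { 120, false, true }, { 130, false, true },
       { 100, true,  true }, { 300, true,  true },
    };

    static bool version_is_supported(int ver, bool es)
    {
       for (unsigned i = 0; i < sizeof(versions) / sizeof(versions[0]); i++) {
          if (versions[i].ver == ver && versions[i].es == es)
             return versions[i].supported;
       }
       return false;
    }

    /* Build "1.10, 1.20, 1.30, 1.00 ES, 3.00 ES" from the same table. */
    static void build_supported_list(char *buf, size_t size)
    {
       buf[0] = '\0';
       for (unsigned i = 0; i < sizeof(versions) / sizeof(versions[0]); i++) {
          if (!versions[i].supported)
             continue;
          char entry[32];
          snprintf(entry, sizeof(entry), "%s%d.%02d%s",
                   buf[0] ? ", " : "", versions[i].ver / 100,
                   versions[i].ver % 100, versions[i].es ? " ES" : "");
          strncat(buf, entry, size - strlen(buf) - 1);
       }
    }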
|
Discovered accidentally when changing the SAMPLE_L definition.
It turns out the lod arguments were already correct for the new definition,
but the compare and deriv arguments were not.
Reviewed-by: Christoph Bumiller <[email protected]>
|
It looks like using coord.w as the explicit lod value was a mistake, most likely
because some dx10 docs had it specified that way. It seems this was changed, though:
http://msdn.microsoft.com/en-us/library/windows/desktop/hh447229%28v=vs.85%29.aspx
- let's just hope it doesn't depend on runtime build version or something.
Not only would this need translation (going against the stated goal that these
opcodes should be close to dx10 semantics), but it would prevent using this
opcode with cube arrays, which is apparently possible:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb509699%28v=vs.85%29.aspx
(Note that not only does this show cube arrays using explicit lod, it also shows
the confusion around this opcode: it lists an explicit lod parameter value, but
then states that the last component of location is used as the lod.)
(For "true" hw drivers, only nv50 had code to handle it, and it appears the
code was already right for the new semantics, though the seemingly wrong c/d
arguments are fixed up while there.)
v2: fix comment, separate out other changes.
Reviewed-by: Jose Fonseca <[email protected]>
|
For PIPE_FORMAT_Z24_UNORM_S8_UINT, the Z bits are in the 24
least significant bits.
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=60527
and http://bugs.freedesktop.org/show_bug.cgi?id=60524
and http://bugs.freedesktop.org/show_bug.cgi?id=60047
Note: This is a candidate for the stable branches.
Reviewed-by: Jose Fonseca <[email protected]>
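For reference, a minimal sketch of unpacking a PIPE_FORMAT_Z24_UNORM_S8_UINT texel with that layout (helper name hypothetical):

    #include <stdint.h>

    /* Depth occupies the 24 least significant bits; stencil the top 8. */
    static float z24s8_unpack(uint32_t texel, uint8_t *stencil)
    {
       *stencil = texel >> 24;
       return (float)(texel & 0xffffff) / (float)0xffffff;
    }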
|
These were, at some point in the past, used to request that Xorg's
compiler.h export a static inline xf86ReadMmio32 instead of a function
pointer. compiler.h only has this option for DEC Alpha.
But Xorg's compiler.h isn't being included by either of these two files
and the radeon driver still works on Alpha, so the definitions are dead
and not needed.
Reviewed-by: Michel Dänzer <[email protected]>
|
Link up the fs outputs and blend inputs, and make sure the second blend source
is correctly loaded and converted (which is quite complex).
There's a slight refactoring of the monster generate_unswizzled_blend()
function, where it makes sense to factor out alpha conversion (which needs
to run twice for dual source blend).
This passes the piglit arb_blend_func_extended tests.
v2: remove a new but ultimately unused function...
Reviewed-by: Brian Paul <[email protected]>
|
These are more recent additions, and no one remembered to update the
INTEL_DEBUG=state code.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
This reorders the "brw_bits" array in brw_state_upload.c to match the
order of the #defines in brw_context.h.
Otherwise, it's really hard to see if any are missing.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
These don't need to be re-disabled on every batch if we're using
hardware contexts. (If we're not, this is equivalent.)
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
The blit must be aligned to an 8 horizontal block boundary.
v2: no need to align the remainder
Signed-off-by: Jerome Glisse <[email protected]>
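A minimal sketch of the alignment arithmetic (helper names hypothetical): the aligned middle of the copy can go through the blit path, with the unaligned edges handled separately.

    /* Round a width in blocks down/up to the blitter's 8-block granularity. */
    static unsigned align_down_8(unsigned blocks) { return blocks & ~7u; }
    static unsigned align_up_8(unsigned blocks)   { return (blocks + 7u) & ~7u; }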
|
If the same context tries to flink and open the object, use the
same bo struct instead of opening a new GEM handle for the object.
This way we avoid having two different handles pointing to the
same kernel object, which can later lead to trouble with virtual
addresses.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=60200
Signed-off-by: Martin Andersson <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
|
Minecraft apparently has piles of display lists that each contain 6
instances of glBegin(GL_QUADS)/verts/glEnd(), which appear in the
compiled list as 6 prims of 4 verts each in one draw call. We can
reduce driver overhead even more by making that one prim of 24 verts.
Improves minecraft performance by 1.6% +/- .25% (n=446)
Reviewed-by: Jordan Justen <[email protected]>
|
Otherwise, the stderr and stdout debug output ends up incorrectly
interleaved when I pipe them to a file.
Reviewed-by: Jordan Justen <[email protected]>
|
These have been wrong since f428255bde93a452a7cdd48fba21839c99beb6cb
back in 2009!
Reviewed-by: Kenneth Graunke <[email protected]>
|
We used to have clip planes optionally included in the push constants,
resulting in a variable amount of data uploaded, but no more. This also
means less wasted space in the batch for our push constants.
v2: Update _NEW_TRANSFORM state bit information.
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
|
Reviewed-by: Kenneth Graunke <[email protected]>
|
It doesn't matter with our current implementation of MapBufferRange,
but it was wrong -- the result pointer is read by intel_upload_data().
Reviewed-by: Kenneth Graunke <[email protected]>
|
Reviewed-by: Kenneth Graunke <[email protected]>
|
Tested with softpipe only exposing RGBA formats.
NOTE: This is a candidate for the stable branches.
Reviewed-by: Brian Paul <[email protected]>
|
The Mesa format can be RGBA8888_REV, the format/type can be
GL_RGBA/GL_UNSIGNED_BYTE, but the actual texture internal format can be
LUMINANCE_ALPHA, INTENSITY, etc. Therefore we should look at the base
internal format as well.
NOTE: This is a candidate for the stable branches.
Reviewed-by: Brian Paul <[email protected]>
|
_mesa_base_tex_format doesn't accept GL_BGR, GL_ABGR_EXT, etc.
v2: add a (now hopefully complete) helper function to deal with this
NOTE: This is a candidate for the stable branches.
Reviewed-by: Brian Paul <[email protected]>
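A minimal sketch of such a helper (the mapping follows the formats' equivalent base formats; the function name is mine, not necessarily the patch's):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Hypothetical helper: map pixel formats that _mesa_base_tex_format
     * rejects onto the base format with the same components. */
    static GLenum base_format_for_pixel_format(GLenum format)
    {
       switch (format) {
       case GL_BGR:      return GL_RGB;
       case GL_BGRA:     return GL_RGBA;
       case GL_ABGR_EXT: return GL_RGBA;
       default:          return format;
       }
    }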
|