| |
Surprisingly, this helps one vertex shader in 3DMMES.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
| |
Using the get_local_param_pointer helper ensures that the LocalParams
arrays have actually been allocated before attempting to use them.
glProgramLocalParameters4fvEXT needs to do a bit of extra checking,
but it can be simplified since the helper has already validated the
target.
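A minimal sketch of the idea (the types, limit, and error handling are
simplified stand-ins, not the actual Mesa code):

    #include <stdlib.h>

    #define MAX_LOCAL_PARAMS 256

    struct program {
       float (*LocalParams)[4];   /* allocated lazily; may still be NULL */
    };

    /* Validate the index and allocate LocalParams on first use, so every
     * entry point that touches local parameters goes through one check. */
    static float *
    get_local_param_pointer(struct program *prog, unsigned index)
    {
       if (index >= MAX_LOCAL_PARAMS)
          return NULL;                /* caller reports an invalid index */

       if (!prog->LocalParams) {
          prog->LocalParams = calloc(MAX_LOCAL_PARAMS, sizeof(float[4]));
          if (!prog->LocalParams)
             return NULL;             /* caller reports out-of-memory */
       }
       return prog->LocalParams[index];
    }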
Fixes crashes in programs that use Cg (for example, Awesomenauts,
Rocketbirds: Hardboiled Chicken, and Tiny and Big: Grandpa's Leftovers)
since commit e5885c119de1e508099cc1111e1c9f8ff00fab88
(mesa: Dynamically allocate the storage for program local parameters.)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=73136
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Tested-by: Laurent Carlier <[email protected]>
|
| |
We don't support MSAA (i.e., the number of samples is always one),
therefore sample_mask boils down to a synonym of the rasterizer_discard
flag.
Also, this change makes setup actually use the value received in
lp_setup_set_rasterizer_discard instead of reaching out to llvmpipe's
upper layers to re-fetch it.
Based on Si Chen's draft.
With this patch, wgf11multisample Coverage passes 100% on the UMD
D3D10 state tracker.
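In effect (an illustrative sketch only, not the actual setup code): with
a single sample per pixel, a cleared bit 0 in the sample mask kills
every fragment, which is exactly what rasterizer_discard does.

    /* Illustrative: with num_samples == 1, sample_mask reduces to an
     * all-or-nothing switch, just like rasterizer_discard. */
    static int
    setup_discards_everything(unsigned sample_mask, int rasterizer_discard)
    {
       return rasterizer_discard || (sample_mask & 1) == 0;
    }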
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Si Chen <[email protected]>
|
| |
The initial value of cso_context::sample_mask_saved is irrelevant as it
will be overwritten with cso_context::sample_mask in
cso_save_sample_mask. Therefore it is cso_context::sample_mask that
needs to be properly initialized.
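A rough sketch of the save/restore flow described here (simplified
stand-ins for the real cso_context fields and helpers; the all-ones
initial value is an assumption):

    struct cso_context {
       unsigned sample_mask;         /* needs a sane initial value, e.g. ~0 */
       unsigned sample_mask_saved;   /* only meaningful between save/restore */
    };

    static void
    cso_save_sample_mask(struct cso_context *ctx)
    {
       ctx->sample_mask_saved = ctx->sample_mask;   /* overwrites any init */
    }

    static void
    cso_restore_sample_mask(struct cso_context *ctx)
    {
       /* If sample_mask was never initialized, garbage is restored here
        * and later applied to the pipe context. */
       ctx->sample_mask = ctx->sample_mask_saved;
    }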
This fixes regressions in blits and mipmap generation after adding
support for sample_mask to llvmpipe.
Reviewed-by: Roland Scheidegger <[email protected]>
|
| |
Implement Alpha to Coverage by discarding a fragment when its alpha
component is less than 0.5. This is joint work by Jose and Si.
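With a single sample per pixel, "coverage" degenerates into a
keep-or-discard decision; roughly (illustrative, not the generated
llvmpipe code):

    /* Illustrative: alpha-to-coverage with one sample per pixel. */
    static int
    alpha_to_coverage_discard(float alpha)
    {
       return alpha < 0.5f;   /* nonzero means discard the fragment */
    }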
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
| |
Commit 9119269ca14ed42b51c7d8e2e662500311b29fa3 moved the texel
buffer allocation to _swrast_texture_span(). However, when compiled
with OpenMP support this code already runs multi-threaded, so a
critical section is required to prevent multiple allocations and
rendering errors.
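The shape of the fix, sketched with illustrative names (not the actual
swrast code):

    #include <stdlib.h>

    static void *texel_buffer;   /* shared by all OpenMP threads */

    static void *
    get_texel_buffer(size_t size)
    {
       /* Without the critical section, two threads can both observe NULL
        * and both allocate, leaking one buffer and racing on the pointer. */
       #pragma omp critical
       {
          if (!texel_buffer)
             texel_buffer = malloc(size);
       }
       return texel_buffer;
    }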
Cc: "10.0" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
| |
Code cleanup.
Signed-off-by: Dave Airlie <[email protected]>
|
| |
Evidently, there's some other definition of "min" and "max" that
causes MSVC to choke on these function names. Renaming to min2()
and max2() fixes things.
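The exact conflicting definition is not identified here, but the classic
culprit is a function-like macro (e.g. min/max from <windows.h> without
NOMINMAX), which mangles any function of the same name; a hypothetical
illustration:

    #define max(a, b) (((a) > (b)) ? (a) : (b))   /* e.g. from a system header */

    /* A function literally named "max" would now be macro-expanded into
     * nonsense at its definition; renaming sidesteps the macro: */
    static int
    max2(int a, int b)
    {
       return a > b ? a : b;
    }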
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Both brw_defines.h and intel_reg.h defined PIPE_CONTROL fields, which
had similar names, but couldn't be used in the same way. (One had
built-in shifts, and the other didn't...)
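For illustration (made-up names and bit positions, not the real
PIPE_CONTROL layout), the two incompatible styles look like this:

    #define FLUSH_BIT_SHIFTED    (1 << 12)   /* use: dw |= FLUSH_BIT_SHIFTED;         */
    #define FLUSH_BIT_UNSHIFTED  1           /* use: dw |= FLUSH_BIT_UNSHIFTED << 12; */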
Delete the unused set to preserve sanity.
(Eric wrote an almost identical patch back in August, so I believe he
approves.)
Signed-off-by: Kenneth Graunke <[email protected]>
|
| |
This patch fixes this build error with gcc <= 4.5 and clang <= 3.1.
CC clientattrib.lo
In file included from ../../include/GL/glx.h:333:0,
from glxclient.h:45,
from clientattrib.c:32:
../../include/GL/glxext.h:275:13: error: redefinition of typedef 'GLXContextID'
../../include/GL/glx.h:171:13: note: previous declaration of 'GLXContextID' was here
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70591
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
* The Haiku renderers need to link to libGL to function properly
  in all usage contexts. As Mesa drivers build before Gallium
  targets, we couldn't properly link the Mesa swrast driver to
  the Gallium libGL target for Haiku.
* This is likely better anyway, as it mimics how GLX is laid out and
  makes the Haiku libGL easier to understand.
* All renderers now properly link in libGL.
Acked-by: Brian Paul <[email protected]>
|
| |
Acked-by: Brian Paul <[email protected]>
|
| |
This is part of the GL_EXT_packed_float extension.
Bugzilla: http://bugs.freedesktop.org/show_bug.cgi?id=73096
Cc: 10.0 <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
These are just software flag values (not hardware specific values), and
aren't used anywhere. Delete them to avoid confusion.
Signed-off-by: Kenneth Graunke <[email protected]>
|
| |
NUM_BANKS is not constant on CIK.
Reviewed-by: Alex Deucher <[email protected]>
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
|
| |
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
| |
Add extra null check in auto generated indirect_init.c via
src/mapi/glapi/gen/glX_proto_send.py
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
| |
Fixed what I noticed; no warranty for exhaustiveness.
Signed-off-by: Nathan Kidd <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
| |
Previously we left the size of this vgrf as 1, which caused register
allocation to be subtly broken. If we were lucky we would explode in
the post-alloc instruction scheduler; if we were unlucky we'd just stomp
on someone else and get broken rendering.
Fixes crash when running `tesseract` with the following settings:
msaa 4
glineardepth 0
Also fixes the piglit test:
arb_sample_shading-builtin-gl-sample-id
Signed-off-by: Chris Forbes <[email protected]>
Cc: Anuj Phogat <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72859
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
The preprocessor currently accepts multiple else/elif-groups
per if-section. The GLSL preprocessor is defined by the C++
specification, which defines the following parse rule:

  if-section:
      if-group elif-groups(opt) else-group(opt) endif-line

This clearly only allows a single else-group, which has to come
after any elif-groups.
So let's modify the code to follow the specification. Add a test
to prevent regressions.
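For example, input like the following was previously accepted and is now
rejected (the #elif after #else violates the rule above):

    #if defined(A)
    int x;
    #else
    int y;
    #elif defined(B)   /* invalid: an elif-group may not follow the else-group */
    int z;
    #endif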
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Carl Worth <[email protected]>
Cc: 10.0 <[email protected]>
|
| |
The preprocessor has always replaced multi-line comments with a single space
character, (as required by the specification), but as of commit
bd55ba568b301d0f764cd1ca015e84e1ae932c8b the lexer also emitted a NEWLINE
token for each newline within the comment, (in order to preserve line
numbers).
The emitting of NEWLINE tokens within the comment broke the rule of "replace a
multi-line comment with a single space" as could be exposed by code like the
following:
#define FOO a/*
*/b
FOO
Prior to commit bd55ba568b301d0f764cd1ca015e84e1ae932c8b, this code defined
the macro FOO as "a b" as desired. Since that commit, this code instead
defines FOO as "a" and leaves a stray "b" in the output.
In this commit, we fix this by not emitting the NEWLINE tokens while lexing
the comment, but instead merely counting them in the commented_newlines
variable. Then, when the lexer next encounters a non-commented newline it
switches to a NEWLINE_CATCHUP state to emit as many NEWLINE tokens as
necessary (so that subsequent parsing stages still generate correct line
numbers).
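Boiled down to a self-contained sketch (this is not the glcpp lexer
itself, just an illustration of counting commented newlines and catching
up at the next real newline):

    #include <stdio.h>

    static void
    strip_comments_preserving_lines(const char *in, FILE *out)
    {
       int commented_newlines = 0;

       while (*in) {
          if (in[0] == '/' && in[1] == '*') {
             in += 2;
             while (*in && !(in[0] == '*' && in[1] == '/')) {
                if (*in == '\n')
                   commented_newlines++;      /* count, don't emit yet */
                in++;
             }
             if (*in)
                in += 2;
             fputc(' ', out);                 /* the comment becomes one space */
          } else {
             if (*in == '\n') {
                /* "catch up": emit the newlines swallowed by the comment */
                while (commented_newlines > 0) {
                   fputc('\n', out);
                   commented_newlines--;
                }
             }
             fputc(*in++, out);
          }
       }
    }

    int
    main(void)
    {
       /* Prints "#define FOO a b", then the caught-up blank line, then "FOO". */
       strip_comments_preserving_lines("#define FOO a/*\n*/b\nFOO\n", stdout);
       return 0;
    }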
Of course, it would have been more clear if we could have written a loop to
emit all the newlines, but flex conventions prevent that, (we must use
"return" for each token we emit).
It similarly would have been clearer to have a new rule restricted to the
<NEWLINE_CATCHUP> state with an action much like the body of this if
condition. The problem with that is that this rule must not consume any
characters. It might be possible to write a rule that matches a single
lookahead of any character, but then we would also need an additional rule
to handle the <EOF> case, where there are no additional characters available
for the lookahead to match.
Given those considerations, and given that the SKIP-state manipulation already
involves a code block at the top of the lexer function, before any rules, it
seems best to me to go with the implementation here which adds a similar
pre-rule code block for the NEWLINE_CATCHUP.
Finally, this commit also changes the expected output of a few, existing glcpp
tests. The change here is that the space character resulting from the
multi-line comment is now emitted before the newlines corresponding to that
comment. (Previously, the newlines were emitted first, and the space character
afterward.)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72686
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
| |
Two things make this code confusing:
1. The uncharacteristic manipulation of lexer start state outside of
flex rules.
2. The confusing semantics of the skip_stack (including the
"lexing_if" override and the SKIP_NO_SKIP state).
This new comment is intended to bring a bit more clarity for any readers.
There is no intended behavioral change to the code here. The actual code
changes include better indentation to avoid an excessively-long line, and
using the more descriptive INITIAL rather than 0.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
| |
Support all levels of a supported texture format.
Using 1024x1024, RGBA 8888 source, mipmap:

  internal-format   Before (MB/sec)   mipmap (MB/sec)
  GL_RGBA                    627.15            615.90
  GL_RGB                     456.35            611.53

  512x512
  GL_RGBA                    597.00            619.95
  GL_RGB                     440.62            611.28

  256x256
  GL_RGBA                    487.80            587.42
  GL_RGB                     376.63            585.00

Benchmark has been sent to the mesa-dev list: teximage_enh
Signed-off-by: Courtney Goeltzenleuchter <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
|
| |
MESA_FORMAT_XRGB8888 is equivalent to MESA_FORMAT_ARGB8888 in terms
of storage on the device, so it is okay to use this optimized copy
routine.
This series builds on work from Frank Henigman to optimize the
process of uploading a texture to the GPU. This series adds support
for MESA_XRGB_8888 and full miptrees, which were found to be common
activities in the Smokin' Guns game. The issue was found while
profiling the app, but that part is not benchmarked. Smokin' Guns
uses mipmap textures with an internal format of GL_RGB
(MESA_XRGB_8888 in the driver).
These changes need a performance tool to run against to show how they
improve execution performance for specific texture formats. Using this
benchmark I've measured the following improvement on my Ivybridge
Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz:

  1024x1024 texture size
  internal-format   Before (MB/sec)   XRGB (MB/sec)
  GL_RGBA                    628.15          627.15
  GL_RGB                     265.95          456.35

  512x512 texture size
  internal-format   Before (MB/sec)   XRGB (MB/sec)
  GL_RGBA                    600.23          597.00
  GL_RGB                     255.50          440.62

  256x256 texture size
  internal-format   Before (MB/sec)   XRGB (MB/sec)
  GL_RGBA                    489.08          487.80
  GL_RGB                     229.03          376.63

Benchmark has been sent to the mesa-dev list: teximage
Signed-off-by: Courtney Goeltzenleuchter <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
|
| |
I'm not aware of any piglit tests that this fixes, but the old code
was obviously wrong.
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Only a Mesa bug could cause this function to be called with an
out-of-range index, so raise an assertion if that ever happens.
Reviewed-by: Brian Paul <[email protected]>
|
| |
This patch replaces the following pattern:

    foo bar[MESA_SHADER_TYPES] = {
       ...
    };

With:

    foo bar[] = {
       ...
    };
    STATIC_ASSERT(Elements(bar) == MESA_SHADER_TYPES);

This way, when a new shader type is added in a future version of Mesa,
we will get a compile error to remind us that the array needs to be
updated.
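A concrete (made-up) instance of the new pattern, with minimal stand-in
definitions so the snippet is self-contained; in Mesa the real macros
live in main/macros.h:

    #define Elements(x)  (sizeof(x) / sizeof((x)[0]))
    #define STATIC_ASSERT(cond)  _Static_assert(cond, #cond)   /* stand-in */
    #define MESA_SHADER_TYPES 3                                /* stand-in value */

    static const char *const stage_name[] = {
       "vertex",
       "geometry",
       "fragment",
    };

    /* Adding a new shader type bumps MESA_SHADER_TYPES, and this fails to
     * compile until stage_name[] gains the matching entry. */
    STATIC_ASSERT(Elements(stage_name) == MESA_SHADER_TYPES);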
Reviewed-by: Brian Paul <[email protected]>
|
| |
This argument was carrying the name of the shader target (as a
string). We can get this just as easily by calling
_mesa_shader_enum_to_string().
Reviewed-by: Brian Paul <[email protected]>
|
| |
We already have a function for converting a shader type index to a
string: _mesa_shader_type_to_string().
Reviewed-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
Previously, _mesa_glsl_shader_target_name() had an overload for GLenum
and an overload for the gl_shader_type enum, each of which behaved
differently. However, since GLenum is a synonym for unsigned int, and
unsigned ints are often used in place of gl_shader_type (e.g. in loop
indices), there was a big risk of calling the wrong overload by
mistake. This patch gives the two overloads different names so that
it's always clear which one we mean to call.
Reviewed-by: Brian Paul <[email protected]>
|
| |
This reverts commit 136a12ac98868d82c2ae9fcc80d11044a7ec56d1.
According to belak51 on IRC, this commit broke Allegro, which would no
longer compile. Applications apparently expect the GLXContextID typedef
to exist in glx.h; removing it breaks them. A bit of searching around
the internet revealed other complaints since upgrading to Mesa 10.
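What affected applications rely on is simply that the name is visible
after including the header, e.g.:

    #include <GL/glx.h>

    /* Compiles only if glx.h itself provides the GLXContextID typedef. */
    static GLXContextID current_id;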
Cc: "10.0" <[email protected]>
|
| |
According to git blame, this hasn't been used in over two years:
commit d2235b0f4681f75d562131d655a6d7b7033d2d8b
Author: Eric Anholt <[email protected]>
Date: Thu Nov 17 17:01:58 2011 -0800
i965: Always handle GL_DEPTH_TEXTURE_MODE through the shader.
Signed-off-by: Kenneth Graunke <[email protected]>
|
| |
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
| |
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
| |
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
| |
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
| |
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
| |
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|