This patch allows GL_SAMPLES to be set to either 0 or 1 on i965
platforms that don't support MSAA (those prior to Gen6). Setting
GL_SAMPLES=1 has the same effect as setting it to 0 on these platforms
(because MSAA is unsupported), but is distinguishable via the GL API.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=50165
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
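
A minimal sketch of what this permits from the API side (assumes a current GL
context with ARB_framebuffer_object; the sizes and format are arbitrary):

   GLuint rb, fbo;
   GLint samples;
   glGenRenderbuffers(1, &rb);
   glBindRenderbuffer(GL_RENDERBUFFER, rb);
   /* On a no-MSAA chip GL_MAX_SAMPLES is 1, so samples = 1 must be accepted. */
   glRenderbufferStorageMultisample(GL_RENDERBUFFER, 1, GL_RGBA8, 64, 64);
   glGenFramebuffers(1, &fbo);
   glBindFramebuffer(GL_FRAMEBUFFER, fbo);
   glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_RENDERBUFFER, rb);
   /* GL_SAMPLES now reports 1 even though rendering is single-sampled. */
   glGetIntegerv(GL_SAMPLES, &samples);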
|
EXT_framebuffer_multisample is a required subpart of
ARB_framebuffer_object, which means that we must support it even on
platforms that don't support MSAA. Fortunately
EXT_framebuffer_multisample allows for this by allowing GL_MAX_SAMPLES
to be set to 1.
This leads to a tricky quirk in the GL spec: since
glRenderbufferStorageMultisample() accepts any value for its
"samples" parameter up to and including GL_MAX_SAMPLES, that means
that on platforms that don't support MSAA, GL_SAMPLES is allowed to be
set to either 0 or 1. On platforms that do support MSAA, GL_SAMPLES=1
is not used; 0 means no MSAA, and 2 or higher means MSAA.
In other words, GL_SAMPLES needs to be interpreted as follows:
=0 no MSAA (possible on all platforms)
=1 no MSAA (only possible on platforms where MSAA unsupported)
>1 MSAA (only possible on platforms where MSAA supported)
This patch modifies all MSAA-related code to choose between
multisampling and single-sampling based on the condition (GL_SAMPLES >
1) instead of (GL_SAMPLES > 0) so that GL_SAMPLES=1 will be treated as
"no MSAA".
Note that since GL_SAMPLES=1 implies GL_SAMPLE_BUFFERS=1, we can no
longer use GL_SAMPLE_BUFFERS to distinguish between MSAA and non-MSAA
rendering.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
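
A minimal sketch of the new rule (the helper name is made up for illustration;
the driver applies the same test in several places):

   /* GL_SAMPLES == 1 can only occur on hardware without MSAA support and
    * must behave exactly like 0, so only values above 1 select MSAA. */
   static bool use_msaa(unsigned gl_samples)
   {
      return gl_samples > 1;
   }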
|
Previously, we advertised the extension, but its built-in functions
were only enabled for desktop GLSL and not for GLSL ES.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=52003
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
The 16-bit depth case did not follow the function's prevalent pattern.
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
Nearly the whole function body was contained in the 'else' branch. The
'if' branch did one thing: return early with an error. Clean things up by
moving all the code out of the 'else' branch. Decreases max nesting level
from 4 to 3.
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
After commit "intel: Convert to using private depth/stencil buffers", we
request from DRI2GetBuffersWithFormat only the front left and back left
buffers. We no longer request depth and stencil buffers.
Assert that in intelAllocateBuffer and remove the related dead code.
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=53053
|
These assignments caused CFLAGS specified on the configure line to
appear twice in the final CFLAGS. Removing them makes the behavior
reasonable -- USER_CFLAGS are appended at the end of CFLAGS, allowing
the builder to override flags added by configure.ac like
-fno-strict-aliasing.
Reviewed-by: Adam Jackson <[email protected]>
|
Reviewed-by: Adam Jackson <[email protected]>
|
Missed by d387899388bd7090bda50593e35f8ed3cb730c47.
Reviewed-by: Adam Jackson <[email protected]>
|
Even on s390{,x}, where there's no video card, you still want this so that
the GLX protocol works.
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Adam Jackson <[email protected]>
|
Just figured out what that bit does.
Note: It's converted back to sRGB on write, so no effective
conversion occurs.
|
This reverts commit 5d5af7d359e0060fa00b90a8f04900b96f9058b0.
It turns out the issue this was supposed to fix merely counteracted
a bug in the hardware driver that I wasn't aware of.
The resource_resolve is not supposed to do sRGB conversion, period.
(This would violate the requirement that source and destination must
be of the same format).
|
There is no point in emitting aux scissor values if we
a) never enable them, and
b) never set the actual values.
Besides, it is enough to have that aux scissor enable reg (which we never set
to enable) in one place, not two.
|
No one was interested in the number of cliprects, and no one cared
about the intersect result either. So just nuke this.
|
Those functions are SO dead.
|
There were several problems with these functions (which are mostly a remnant
of dri1 hyperz - we should bring that back somehow someday).
First, they would always do a swrast clear if the buffer to clear was an fbo.
Second, for buffers whose clear we didn't handle (aux/accum, I guess), we
would still have tried to clear them later even though we had already
cleared them with swrast.
|
This addresses one issue raised in bug #51658, discovered by Eugene St Leger.
The assert is bogus, since there is no problem with a texture width/height of
2048 (the width/height programmed is width/height minus one).
On the other hand, the size programmed for the scissor rect should be
width/height minus one too, otherwise bad things may happen (the bounds are
inclusive, and there are not enough bits for a value greater than 2047).
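
Roughly, the idea is (the field names here are hypothetical, not the actual
register layout):

   /* The hardware scissor maximum is inclusive and only has room for values
    * up to 2047, so a 2048-wide surface must be programmed as width - 1. */
   scissor.x_max = width - 1;
   scissor.y_max = height - 1;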
|
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
SI does not support 64-bit immediates natively, but LLVM will generate
i64 immediates when indexing loads and stores (since SI has 64-bit
pointers). The i64 indices will always be small enough to fit into
32-bits (i.e. the high 32 bits will always be all zeros), so we can
treat these index values as 32-bits.
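
The key observation, as a small illustration (not the backend's actual code):

   #include <stdint.h>

   /* The index is produced as an i64 because SI pointers are 64-bit, but its
    * high 32 bits are known to be zero, so truncating it loses nothing. */
   static uint32_t index_as_32bit(uint64_t index64)
   {
      return (uint32_t)index64;
   }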
|
We need to return true when we match the pattern.
|
In TableGen, if two patterns match, the one that comes first in the file
is given preference. We want the SMRD IMM pattern to be given
preference, because it encodes the pointer offset in its immediate
field, which saves us an add instruction.
|
Part of fixing piglit maxblocks.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Fixes piglit ARB_uniform_buffer_object/getuniformlocation.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
I ended up having to add rallocing of the ast_type_qualifier in order
to avoid pulling in ast.h for glsl_parser_extras.h, because I wanted
to track an ast_type_qualifier in the state.
Fixes piglit ARB_uniform_buffer_object/row-major.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Yes, you get to say things like "layout(row_major, column_major)" and
get column major.
Part of fixing piglit ARB_uniform_buffer_object/row_major.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
This is like a stripped-down version of glGetActiveUniform that just
returns the name, since the other return values (type and size) of
that function are now meant to be handled with
glGetActiveUniformsiv().
Fixes piglit ARB_uniform_buffer_object/getactiveuniformname
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
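
Usage sketch (assumes `prog` is a linked program with at least one active
uniform, on a context exposing ARB_uniform_buffer_object):

   GLchar name[256];
   GLsizei length;
   glGetActiveUniformName(prog, 0, sizeof(name), &length, name);
   /* No type/size is returned here; query those with
    * glGetActiveUniformsiv() instead. */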
|
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Fixes piglit ARB_uniform_buffer_object/getactiveuniformblockname.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Fixes piglit ARB_uniform_buffer_object/uniformbufferbinding.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Fixes piglit ARB_uniform_buffer_object/getprogramiv.
v2: Add extension checks.
v3: Appease MSVC.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
The previous implementation required a flag in _mesa_glsl_parse_state
and a line of code to initialize it for every version of the shading
language we intend to support. As we look to add 150, 330, 400, 410,
420, and beyond, this gets rather unwieldy.
This patch retains the switch statement (to reject, say, #version 111),
but removes all the bits. Code to check for ctx->API == API_OPENGL_CORE
could easily be added to the 110 and 120 cases to reject those.
v2: Use _mesa_is_desktop_gl to preserve the existing behavior in the
presence of the new API_OPENGL_CORE enumeration.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]> [v1]
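
A simplified sketch of the resulting shape (not the exact Mesa code; as the
message notes, API-specific restrictions such as rejecting 110/120 in a core
profile would be layered on top):

   static bool version_is_supported(int version)
   {
      switch (version) {
      case 100: case 110: case 120: case 130: case 140:
         return true;
      default:
         return false;   /* e.g. "#version 111" is rejected here */
      }
   }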
|
Fixes some failures in getteximage-formats.
v2: Remove stray include, and drop extra test for encoding == GL_SRGB --
_mesa_get_srgb_format_linear() returns the same format if it wasn't SRGB.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=48120
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
NOTE: This is a candidate for the 8.0 branch.
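
The v2 simplification leans on a property of the helper itself; roughly (a
sketch, where `fmt` stands for whichever gl_format is being decoded):

   gl_format linear_fmt = _mesa_get_srgb_format_linear(fmt);
   /* For a non-sRGB fmt this returns fmt unchanged, so no separate
    * "is the encoding GL_SRGB?" check is needed before calling it. */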
|
It was using state->Const.GLSL_100ES, which is set if the driver
supports ARB_ES2_compatibility or we're in ES2 mode. Instead, it should
use state->language_version, as that represents the actual GLSL version
of the shader being compiled.
Since the correct logic is (version < 120 && version != 100), just make it == 110.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
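
In other words, the intent is roughly (a sketch, not the exact diff):

   /* Key off the shader's own #version rather than driver capabilities. */
   if (state->language_version == 110) {
      /* apply the GLSL 1.10-only rule */
   }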
|
This will need to get refactored when we add support for core profiles
or forward-compatible contexts, but we may as well have it in the
meantime. This allows us to override the GLSL version and experiment.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Move the installation of osmesa.pc to drivers/osmesa, where it belongs.
This also restores the installation of gl.pc when building osmesa at the
same time as libGL, which was broken in commit 39785488 when the .pc
installation was converted to automake.
v2:
Remove the HAVE_OSMESA_DRIVER automake conditional; it is now pointless, as
we will only build in the drivers/osmesa directory if the condition it
checked was true.
Signed-off-by: Jon TURNEY <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
This patch fixes the following build failure with the Intel compiler.
src/gallium/auxiliary/util/u_format_tests.c(903): error: floating-point operation result is out of range
{PIPE_FORMAT_R16_FLOAT, PACKED_1x16(0xffff), PACKED_1x16(0x7c01), UNPACKED_1x1( NAN, 0.0, 0.0, 1.0)},
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
|
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
These functions make it easier to check for multiple API types.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
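
These helpers boil down to simple checks of ctx->API; a sketch of the
desktop-GL one:

   static inline bool
   _mesa_is_desktop_gl(const struct gl_context *ctx)
   {
      return ctx->API == API_OPENGL || ctx->API == API_OPENGL_CORE;
   }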
|
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Now that ir_quadop_vector exists, ir_last_binop and ir_last_opcode are
no longer the same. Only one place currently uses this enumeration, and it
already handles ir_quadop_vector correctly.
Signed-off-by: Ian Romanick <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Olivier Galibert <[email protected]>
|
No types have 0 columns. The glsl_type::get_instance method contains

   if ((rows < 1) || (rows > 4) || (columns < 1) || (columns > 4))
      return error_type;

To get a vector, use columns = 1.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Olivier Galibert <[email protected]>
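
For example, a vector uses columns = 1 while a matrix uses columns > 1:

   const glsl_type *vec3 = glsl_type::get_instance(GLSL_TYPE_FLOAT, 3, 1);
   const glsl_type *mat4 = glsl_type::get_instance(GLSL_TYPE_FLOAT, 4, 4);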
|
It's more convenient to use shortcuts like glsl_type::bvec2_type than
the long-winded glsl_type::get_instance(GLSL_TYPE_BOOL, 2, 1).
Signed-off-by: Ian Romanick <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Olivier Galibert <[email protected]>