| Commit message | Author | Age | Files | Lines |
|
Put "table" in the names to make things more understandable.
Reviewed-by: Chia-I Wu <[email protected]>
|
To make the functions more understandable.
Reviewed-by: Chia-I Wu <[email protected]>
|
With a non-debug build, gcc has two complaints:
1. 'found' var not used. Silence with '(void) found;'
2. 'id' not initialized. It's assigned by the UniformHash->get()
call, actually. But init it to zero to silence gcc.
Reviewed-by: Matt Turner <[email protected]>
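A minimal sketch of the two fixes, with hypothetical names standing in for the real 'found'/'id' variables and the UniformHash->get() call:

    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical stand-in for UniformHash->get(): writes the id and
     * reports whether the entry was found. */
    static bool lookup_id(int key, unsigned *id)
    {
       *id = (unsigned) key;   /* placeholder lookup */
       return true;
    }

    unsigned example(int key)
    {
       unsigned id = 0;        /* init to zero so a non-debug build does not
                                * warn about 'id' being used uninitialized */
       bool found = lookup_id(key, &id);
       assert(found);          /* compiles away when NDEBUG is defined... */
       (void) found;           /* ...so cast to void to silence the
                                * "set but not used" warning */
       return id;
    }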
|
sizeof(scissor) returns the size of the full array rather than a single
element. Fix it to consider just the one element.
Fixes: 0705fa35 ("st/mesa: add support for GL_ARB_viewport_array (v0.2)")
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Dave Airlie <[email protected]>
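The class of bug, shown as a hypothetical sketch rather than the actual st/mesa code:

    #include <string.h>

    #define MAX_VIEWPORTS 16

    struct scissor_state { int minx, miny, maxx, maxy; };

    struct state {
       struct scissor_state scissor[MAX_VIEWPORTS];
    };

    void update_one(struct state *st, const struct scissor_state *src,
                    unsigned i)
    {
       /* Wrong: sizeof(st->scissor) is the size of the whole 16-element
        * array, so this would copy far past the single element intended. */
       /* memcpy(&st->scissor[i], src, sizeof(st->scissor)); */

       /* Right: the size of one element. */
       memcpy(&st->scissor[i], src, sizeof(st->scissor[i]));
    }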
|
The texture formats of a winsys fbo are always linear because the st manager
(st/dri, for example) cannot know the colorspace being used. But that does
not mean we cannot make the fbo sRGB-capable. By
- setting rb->Visual.sRGBCapable to GL_TRUE when the pipe driver supports
  the format in the sRGB colorspace,
- giving rb an sRGB internal format, and
- updating the code to check rb->Format instead of strb->texture->format,
we should be good.
Fixes bug 75226 for at least llvmpipe and ilo, with no piglit regressions.
v2: do not set rb->Visual.sRGBCapable for GLES contexts to avoid surprises
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=75226
Reviewed-by: Brian Paul <[email protected]>
Tested-by: Michel Dänzer <[email protected]>
|
The format is mapped to PIPE_FORMAT_B8G8R8X8_SRGB.
Reviewed-by: Brian Paul <[email protected]>
|
The format is needed to represent an RGB-only winsys framebuffer that is
sRGB-capable.
Reviewed-by: Brian Paul <[email protected]>
|
Spotted by Chia-I Wu.
v2: also fix unpack_ubyte_ARGB4444_REV()
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Chia-I Wu <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
|
We need the header setup to not be predicated on which pixels are
undiscarded. I'm not sure whether I originally thought that the mask
disable implied predicate disable, or whether I had just misread the mask
disable as predicate disable. Either way, I know I had spent more time
thinking about this in the gen8 generator than in the gen7 generator.
Plus, it turns out that I had mis-implemented the "the GPU will use the
predicate unless this header is present" comment, by skipping setting up
the pixel mask when the header was present.
Fixes GPU hangs in piglit glsl-fs-discard-mrt, Trine, Trine 2 and
presumably MLL.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=75207
Tested-by: Tapani Pälli <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
/bin/sh defaults to dash on Debian.
Reviewed-by: Brian Paul <[email protected]>
|
It was really only used in the radeon driver for a debug printf.
And evidently, libGL.so referenced it just to work around some sort
of linker issue.
This patch removes the two calls to the function and the function
itself.
Fixes the undefined _glthread_GetID symbol in libGL reported by 'nm'.
The missing symbol doesn't cause any issues on my system, but it does
cause glxinfo to fail on one of our test systems.
Reviewed-by: Jose Fonseca <[email protected]>
|
Before, it was kind of ugly to set the multisample fields with
assignments after we called _mesa_init_teximage_fields().
Reviewed-by: Anuj Phogat <[email protected]>
|
If scissor optimization is used (to avoid bringing scissored portions of
the render target into GMEM and then back out to system memory) in
combination with hw binning pass, the result would be a scissor mismatch
between binning pass and rendering pass. This would cause rendering
bugs in some scenarios with (for example) gnome-shell.
I would have expected that simply using the correct screen-scissor
during the binning pass would be enough, but it seems something else is
still missing. So for now, disable the binning pass when the scissor
optimization is used.
|
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
Most of the logic refers to the local variable 'mt' directly but
a few cases use 'intelObj->mt' instead. These are the same for
now but will be different once stencil miptree gets used.
v2 (Ian): also fixed indentation in surrounding lines
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
Signed-off-by: Ian Romanick <[email protected]>
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
On earlier hardware, we had to implement math in the shader to translate
Y-tiled or untiled coordinates to W-tiled coordinates (which is what
BLORP does today in order to texture from stencil buffers).
On Broadwell, we can simply state that it's W-tiled in SURFACE_STATE,
and adjust the pitch. This is much easier.
In the surface state code, I chose to handle the "should we sample depth
or stencil?" question separately from the setup for sampling from
stencil. This should make it work with the BindRenderbufferTexImage
hook as well, and hopefully be reusable for GL_ARB_texture_stencil8
someday.
v2: Update docs/GL3.txt (caught by Matt).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
While the GL_ARB_stencil_texturing extension does not allow the creation
of stencil textures, it does allow shaders to sample stencil values
stored in packed depth/stencil textures.
Specifically, applications can call glTexParameter* with a pname of
GL_DEPTH_STENCIL_TEXTURE_MODE and value of either GL_DEPTH_COMPONENT or
GL_STENCIL_INDEX to select which component they wish to sample. The
default value is GL_DEPTH_COMPONENT (for traditional depth sampling).
Shaders should use an unsigned integer sampler (presumably usampler2D)
to access stencil data; otherwise, results are undefined. Using shadow
samplers with GL_STENCIL_INDEX selected is also undefined behavior.
This patch creates a new gl_texture_object field, StencilSampling, to
indicate that stencil should be sampled rather than depth. (I chose to
use a boolean since I figured it would be more convenient for drivers.)
It also introduces the [Get]TexParameter code to get and set the value,
and of course the extension plumbing.
v2: Also consider textures incomplete when sampling stencil with
non-NEAREST min/mag filters (caught by Eric Anholt).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
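A hypothetical application-side usage sketch (assumes a current GL context exposing GL_ARB_stencil_texturing and a bound packed depth/stencil texture; this is not code from the patch):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Sample stencil instead of depth from a GL_DEPTH24_STENCIL8 texture. */
    void sample_stencil(GLuint tex)
    {
       glBindTexture(GL_TEXTURE_2D, tex);
       /* The default mode is GL_DEPTH_COMPONENT; switch to stencil. */
       glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE,
                       GL_STENCIL_INDEX);
       /* Per v2, only NEAREST filtering keeps the texture complete while
        * stencil sampling is selected. */
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
       /* The shader should read this unit through a usampler2D. */
    }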
|
s/grap/grab/
Signed-off-by: Alex Deucher <[email protected]>
|
The loader infrastructure for everything but DRI2 requires that udev be
present, so we can figure out an appropriate driver from the fd. We don't
have a portable solution yet, but presumably it will have similar lookup
based on the device node.
It will also be even more required for krh's udev-based hwdb support,
which lets us have a loader that actually loads DRI drivers not included
in the loader's source distribution.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=75212
Reviewed-by: Matt Turner <[email protected]>
|
Spotted by Michel Dänzer.
|
Because draw always injects position at slot 0, a fragment shader
that takes the maximum number of inputs (32) leaves us with
PIPE_MAX_ATTRIBS + 1 slots to translate, which meant that we were
crashing with fragment shaders that took the maximum number of
attributes as inputs. The actual maximum number of attributes we
need to translate is thus PIPE_MAX_ATTRIBS + 1.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Matthew McClure <[email protected]>
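The off-by-one, as a hypothetical sketch (the real draw structures differ):

    #define PIPE_MAX_ATTRIBS 32

    /* When position is injected at slot 0 and the fragment shader already
     * consumes all PIPE_MAX_ATTRIBS inputs, translation needs one extra
     * slot, so sizing this array at PIPE_MAX_ATTRIBS overflows. */
    struct fetch_info {
       unsigned nr_inputs;
       unsigned slot_to_attr[PIPE_MAX_ATTRIBS + 1];  /* was PIPE_MAX_ATTRIBS */
    };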
|
The draw_current_shader_* functions return the final output, considering
both the geometry shader and the vertex shader. But when code-generating
the vertex shader we cannot use output slots from the geometry shader
because, obviously, those can be completely different. This fixes a
number of very non-obvious crashes.
A side effect of this bug was that sometimes the vertex shading code
could save some random outputs as position/clip when the geometry
shader was writing them and the vertex shader had different outputs at
those slots (sometimes writing garbage and sometimes something correct).
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Matthew McClure <[email protected]>
|
glTexImage{123}D()
From OpenGL 3.3 spec, page 141:
"Textures with a base internal format of DEPTH_COMPONENT or DEPTH_STENCIL
require either depth component data or depth/stencil component data.
Textures with other base internal formats require RGBA component data.
The error INVALID_OPERATION is generated if one of the base internal
format and format is DEPTH_COMPONENT or DEPTH_STENCIL, and the other
is neither of these values."
Fixes Khronos OpenGL CTS test failure: proxy_textures_invalid_size
Cc: <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
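The rule from the quoted spec text, as a hypothetical sketch (Mesa's real validation helpers differ):

    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdbool.h>

    static bool is_depth_or_stencil(GLenum f)
    {
       return f == GL_DEPTH_COMPONENT || f == GL_DEPTH_STENCIL;
    }

    /* INVALID_OPERATION is required when exactly one of the base internal
     * format and the format is DEPTH_COMPONENT or DEPTH_STENCIL. */
    bool depth_stencil_formats_match(GLenum base_internal_format, GLenum format)
    {
       return is_depth_or_stencil(base_internal_format) ==
              is_depth_or_stencil(format);
    }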
|
This patch makes no functional changes to the code.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
From OpenGL 4.0 spec, page 398:
"The initial internal format of a texel array is RGBA
instead of 1. TEXTURE_COMPONENTS is deprecated; always
use TEXTURE_INTERNAL_FORMAT."
Fixes Khronos OpenGL CTS test failure: proxy_textures_invalid_size
Cc: <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
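How an application hits this path, as a hypothetical sketch (assumes a current GL context and a color internal format):

    #include <GL/gl.h>
    #include <stdbool.h>

    /* Returns true if the proxy request succeeds.  After a failed request
     * the proxy image state is reset, and TEXTURE_INTERNAL_FORMAT must
     * read back as the initial value GL_RGBA, not 1. */
    bool proxy_2d_fits(GLenum internal_format, GLsizei w, GLsizei h)
    {
       GLint got_w = 0;
       glTexImage2D(GL_PROXY_TEXTURE_2D, 0, internal_format, w, h, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, NULL);
       glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,
                                &got_w);
       return got_w != 0;
    }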
|
Starting with llvm-3.5svn r202574, LLVM expects C++11 mode.
commit f8bc17fadc8f170c1126328d203f0dab78960137
Author: Chandler Carruth <[email protected]>
Date: Sat Mar 1 06:31:00 2014 +0000
[C++11] Turn off compiler-based detection of R-value references, relying
on the fact that we now build in C++11 mode with modern compilers. This
should flush out any issues. If the build bots are happy with this, I'll
GC all the code for coping without R-value references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@202574 91177308-0d34-0410-b5e6-96231b3b80d8
Signed-off-by: Vinson Lee <[email protected]>
|
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=75543
Cc: "10.1" <[email protected]>
|
The `--enable-llvm-shared-libs` option was recently renamed to
`--with-llvm-shared-libs`, but several error messages still mention the
old option, causing confusion.
Trivial.
|
GetCurrentThread() returns a pseudo-handle (a constant which only makes
sense when used within the calling thread) and not a real handle.
DuplicateHandle() will return a real handle, but it creates a new
handle every time it is called. Calling DuplicateHandle() here means we
will leak handles, which can cause serious problems.
In short, the Windows implementation of thrd_t needs a thorough
makeover, and it won't be pretty. It looks like the C11 committee
over-simplified things: it would be much better to have separate objects
for threads and thread IDs, like C++11 does.
For now, just comment out the thrd_current() implementation, so we get
build errors if anybody tries to use it.
Thanks to Brian Paul for spotting and diagnosing this problem.
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
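The pitfall being described, as an illustrative Windows sketch (not the code being removed):

    #include <windows.h>

    /* GetCurrentThread() only returns a pseudo-handle: a constant that is
     * re-interpreted inside whichever thread uses it.  DuplicateHandle()
     * does yield a real handle, but a brand-new one on every call, so each
     * result would have to be CloseHandle()d or it leaks -- which is why
     * this cannot back thrd_current(). */
    HANDLE real_current_thread_handle(void)
    {
       HANDLE real = NULL;
       DuplicateHandle(GetCurrentProcess(), GetCurrentThread(),
                       GetCurrentProcess(), &real,
                       0, FALSE, DUPLICATE_SAME_ACCESS);
       return real;
    }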
|
u_thread_self() expects thrd_current() to return a unique numeric ID
for the current thread, but this is not feasible on Windows.
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
Per https://gist.github.com/yohhoy/2223710/#comment-710118
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
|
Cc: [email protected]
Reviewed-by: Alex Deucher <[email protected]>
|
r600_translate_colorformat is rewritten to look like radeonsi.
r600_translate_colorswap is shared with radeonsi.
r600_colorformat_endian_swap is consolidated.
This adds some formats which were missing. Future "plain" formats will
automatically be supported.
Cc: [email protected]
Reviewed-by: Alex Deucher <[email protected]>
|
Also translate the Y__X swizzle.
Cc: [email protected]
Reviewed-by: Alex Deucher <[email protected]>
|
This reverts commit dfe8cb48fc13587afb2d829c29432c0eacd0aba3.
This commit was accidentally pushed on top of 1bb23abe065 (configure:
disable shared glapi when building xlib powered glx).
|
With commit 0432aa064b (configure: use shared-glapi when more than one
gl* API is used) we removed the "disable shared-glapi when building
without dri" hunk.
In the good old days of classic Mesa, dri and xlib-glx were mutually
exclusive, so the hunk made sense.
Currently enable-dri is used as a synonym for a range of things, so it's
more appropriate to handle xlib-glx explicitly.
Fixes a missing '_glapi_Dispatch' symbol in an xlib-powered libGL built
using the following:
./autogen.sh --enable-xlib-glx --disable-dri --with-gallium-drivers=swrast
Cc: Brian Paul <[email protected]>
Reported-by: Brian Paul <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
|
The _glthread_GetID() function is also defined in mapi_glapi.c
Reviewed-by: José Fonseca <[email protected]>
|
Reviewed-by: José Fonseca <[email protected]>
|
Reviewed-by: José Fonseca <[email protected]>
|
This removes the only use of _glthread_Get/SetTSD(), etc.
Reviewed-by: José Fonseca <[email protected]>
|
Get rid of the fake_glx_context struct. Now, an XMesaContext is the
same as a GLXContext.
Reviewed-by: José Fonseca <[email protected]>
|
At one point in time, the xlib driver could call the real GLX functions.
But that's long dead.
Reviewed-by: José Fonseca <[email protected]>
|
Reviewed-by: José Fonseca <[email protected]>
|