| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The Sandy Bridge PRM, volume 4, part 2, section 5.3.10 ("5.3.10
Register Region Restrictions") contains the following restriction on
the execution size and operand width of instructions:
"3. ExecSize must be equal to or greater than Width."
When emitting an IF instruction in single program flow mode on Gen6+,
we use an ExecSize of 1, therefore the Width of each operand must also
be 1.
Reviewed-by: Kenneth Graunke <[email protected]>
|
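As a standalone illustration of the rule quoted in this commit (the helper name is made up; this is not the i965 emitter code):

#include <stdbool.h>

/* PRM volume 4, part 2, section 5.3.10, rule 3:
 * "ExecSize must be equal to or greater than Width". */
static bool
region_satisfies_rule3(unsigned exec_size, unsigned width)
{
   return exec_size >= width;
}

/* With ExecSize 1 (the Gen6+ single-program-flow IF case), only operands
 * whose region uses Width 1 pass this check. */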
|
|
|
|
|
|
|
|
|
|
| |
In _mesa_BindBufferRange(), we need to verify that the offset and size
specified by the client do not exceed the size of the underlying
buffer. We were accidentally doing this check using ">=" rather than
">", so we were generating a bogus error if the client specified an
offset and size that fit exactly in the underlying buffer.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
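A minimal sketch of the corrected comparison (an illustrative helper, not the literal _mesa_BindBufferRange() code): a range that ends exactly at the end of the buffer must be accepted.

#include <stdbool.h>
#include <stdint.h>

/* The range is acceptable when it ends at or before the end of the
 * buffer, so rejection must use ">" rather than ">=". */
static bool
range_fits_in_buffer(intptr_t offset, intptr_t size, intptr_t buffer_size)
{
   return offset >= 0 && size >= 0 && offset + size <= buffer_size;
}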
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch adds two new fields to the gl_transform_feedback_info
struct:
- BufferStride records the total number of components (per vertex)
that transform feedback is being instructed to store in each buffer.
- Outputs[i].DstOffset records the offset within the interleaved
structure of each transform feedback output.
These values are needed by the i965 gen6 and r600g back-ends, so it
seems better to have the linker provide them rather than force each
back-end to compute them independently.
Also, DstOffset helps pave the way for supporting
ARB_transform_feedback3, which allows the transform feedback output to
contain holes between attributes by specifying
gl_SkipComponents{1,2,3,4} as the varying name.
Reviewed-by: Marek Olšák <[email protected]>
|
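A simplified sketch of the two additions; the struct and field names follow the commit text, while the array bounds and the other fields shown are illustrative placeholders rather than Mesa's actual definitions.

#define MAX_FEEDBACK_BUFFERS_EXAMPLE  4   /* placeholder bound */
#define MAX_FEEDBACK_OUTPUTS_EXAMPLE 64   /* placeholder bound */

struct gl_transform_feedback_info {
   unsigned NumOutputs;

   /* New: components stored per vertex in each feedback buffer. */
   unsigned BufferStride[MAX_FEEDBACK_BUFFERS_EXAMPLE];

   struct {
      unsigned OutputRegister;
      unsigned NumComponents;
      unsigned OutputBuffer;
      /* New: offset (in components) of this output within the
       * interleaved structure written to its buffer. */
      unsigned DstOffset;
   } Outputs[MAX_FEEDBACK_OUTPUTS_EXAMPLE];
};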
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fix compilation on cygwin: commit 762c9766c93697af8d7fbaa729aed118789dbe8e
("Use VERT_ATTRIB_* indexed array in gl_array_object") added the first
non-driver use of ffsll(), which exposes the fact that this function isn't
provided on cygwin.
Found by tinderbox, see [1]
[1] http://tinderbox.freedesktop.org/builds/2011-11-30-0017/logs/libGL/#build
Signed-off-by: Jon TURNEY <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
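The commit only notes that ffsll() is missing on cygwin; one hedged, self-contained fallback of the kind such a port typically carries (not necessarily the implementation that went in) looks like this:

/* Returns the 1-based index of the least significant set bit of val,
 * or 0 if val is zero -- the usual ffsll() contract. */
static int
ffsll_fallback(long long int val)
{
   unsigned long long v = (unsigned long long) val;
   int bit;

   for (bit = 0; bit < 64; bit++) {
      if (v & (1ull << bit))
         return bit + 1;
   }
   return 0;
}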
|
|
|
|
| |
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
|
|
|
|
|
|
|
| |
csc is not used for rgba and gives a warning.
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
|
|
|
|
|
|
|
| |
Mapping to software and uploading again when clearing is killing performance.
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Valgrind complains about a definitely lost block allocated in
intelNewTextureImage(). This leak was apparently created by
6e0f9001fe3fb191c2928bd09aa9e9d05ddf4ea9, "mesa: move
gl_texture_image::Data, RowStride, ImageOffsets to swrast", as it
removes the free() from _mesa_delete_texture_image().
Put the free() back, fixes a Valgrind error.
Signed-off-by: Pekka Paalanen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
EGL_BAD_PARAMETER should be returned when any of the coordinates is negative.
|
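The subject line of this commit is not shown in this excerpt, so purely as an illustration, the rule it enforces amounts to a parameter check of this shape (not the actual Mesa EGL code):

#include <EGL/egl.h>
#include <stdbool.h>

/* Any negative coordinate must be rejected by the caller with
 * EGL_BAD_PARAMETER. */
static bool
sub_buffer_coords_valid(EGLint x, EGLint y, EGLint width, EGLint height)
{
   return x >= 0 && y >= 0 && width >= 0 && height >= 0;
}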
|
|
|
| |
EGL_BAD_PARAMETER should be returned when any of the coordinates is negative.
|
|
|
|
|
|
| |
Signed-off-by: Fredrik Höglund <[email protected]>
[olv: remove #ifdef checks]
|
|
|
|
| |
Signed-off-by: Fredrik Höglund <[email protected]>
|
|
|
|
|
|
|
|
|
| |
v2: Handle EGL_POST_SUB_BUFFER_SUPPORTED_NV in
_eglParseSurfaceAttribList()
Signed-off-by: Fredrik Höglund <[email protected]>
[olv: remove #ifdef checks]
|
| |
|
|
|
|
|
|
|
| |
There is no point in having them when we distribute eglext.h.
As for unofficial extensions, there is a chance that we might remove some of
them eventually. Keeping the #ifdefs for now should make that easier.
|
|
|
|
|
| |
We never supported this unofficial extension, and it has recently been removed
from Android. There is no point in keeping it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Update to revision 15052.
EGL_MESA_drm_image is now official. But apparently we have our own extension
to it and we need this in eglmesaext.h:
#ifdef EGL_MESA_drm_image
/* Mesa's extension to EGL_MESA_drm_image... */
#ifndef EGL_DRM_BUFFER_USE_CURSOR_MESA
#define EGL_DRM_BUFFER_USE_CURSOR_MESA 0x0004
#endif
#endif
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As suggested by Ian in
http://lists.freedesktop.org/archives/mesa-dev/2011-December/016035.html
Note that eglext.h has to include eglmesaext.h at the end instead of the
beginning because some extensions in eglmesaext.h depend on the official
extensions.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
| |
|
|
|
|
|
|
| |
missing depth swz.
Also fix indentation a bit.
|
|
|
|
| |
memcpy.
|
| |
|
| |
|
|
|
|
|
| |
Seriously. This fixes fragment-and-vertex-texturing in piglit and probably
a boatload of other stuff.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, we advertised 0 VS texture units. Now that we have proper
support for using the sampling engine in the VS, we can advertise 16,
which is conveniently the number required for OpenGL 3.0.
v2: Enable on Gen4. I hacked up my tests to not use flat ivec varyings
and they pass.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
| |
Now that this is all factored out, it's trivial to do.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This should avoid state-dependent FS recompiles when samplers that are
only used by the VS change.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
The idea is to reuse this for the VS and (in the future) GS as well.
v2: Include yuvtex data since we're not dropping GL_MESA_ycbcr_texture.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]> [v1]
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
The visit() half computes the values to put in the header based on the
IR and simply stuffs that in the vec4_instruction; the emit() half uses
this to set up the message header. This works out well since emit() can
use brw_reg directly and access individual DWords without kludgery.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
| |
We'll want to reuse this for the VS, and it's complex enough that I'd
rather not cut and paste it.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This translates the GLSL compiler's IR into vec4_instruction IR,
generating code to load coordinates, LOD info, shadow comparators, and
so on into the appropriate message registers.
It turns out that the SIMD4x2 parameters are identical on Gen 5-7, and
the Gen4 code is similar enough that, unlike in the FS, it's easy enough
to support all generations in a single function.
v2: Load zeros for missing coordinates (fixing vs-texelFetch-sampler1D
and 2D on G45), and fix G45 message length for shadow comparisons.
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
This is the part that takes the vec4_instruction IR and turns it into
actual Gen ISA.
v2: Add Gen4 messages, don't retype m0 to UW.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Prior to Ironlake, cube maps were stored as 3D textures. In recent
refactoring, we removed a separate "layers" parameter in favor of using
depth. Unfortunately, depth was getting minified, which is only correct
for actual 3D textures.
Fixes piglit tests:
- bugs/crash-cubemap-order
- fbo/fbo-cubemap
- texturing/cubemap
Also changes texturing/cubemap npot from abort to fail.
This hasn't seen a full test run since Piglit on Mesa master hangs
GM45 a lot.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
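A standalone sketch of the distinction this fix restores (helper names are illustrative, not the i965 miptree code): the six cube faces are a constant layer count, whereas true 3D depth is minified per level.

#include <stdbool.h>

/* Standard mipmap minification: halve per level, clamp at 1. */
static unsigned
minify(unsigned value, unsigned level)
{
   unsigned v = value >> level;
   return v > 1 ? v : 1;
}

/* Cube maps stored as "3D" surfaces still have exactly six faces at
 * every level; only real 3D textures minify their depth. */
static unsigned
layer_count(bool is_cube_map, unsigned depth0, unsigned level)
{
   return is_cube_map ? 6 : minify(depth0, level);
}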
|
|
|
|
|
|
|
|
|
|
|
|
| |
All of the extensions require that both libGL and either the server or
the direct rendering driver (or both) enable the extension before it's
advertised. It seems safe to assume that none of the other components
on OS X will enable these extensions, so all the #ifdef blocks here
just clutter the code.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: Jeremy Huddleston <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
There are a few unsupported extensions (e.g., the ATI and NV float
extensions) that are still in the list. There is some small chance
that these may be supported some day.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
__glXInitialize calls AllocAndFetchScreenConfigs.
AllocAndFetchScreenConfigs unconditionally sends a glXQueryServerString
request to the server. This request is only supported with GLX 1.1 or
later, so we were already implicitly incompatible with GLX 1.0
servers. How many more similar bugs lurk in the code that nobody has
noticed in years?
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously the share_xid was only set in the glXImportContextEXT path,
and it was left set to None in all of the other create-context paths.
Fixes the piglit test glx-query-context-info-ext.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: Jeremy Huddleston <[email protected]>
|
|
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: Jeremy Huddleston <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Send the DestroyContext protocol immediately when glXDestroyContext is
called, and never call it when glXFreeContextEXT is called. In both
cases, either destroy the client-side structures or, if the context is
current, set xid to None so that the client-side structures will be
destroyed later.
I believe this restores the behavior of the original SGI code. See
src/glx/x11 around commit 5df82c8. The spec doesn't say anything
about glXDestroyContext not really destroying imported contexts (it
acts like glXFreeContextEXT instead), but that's what the original
code did. Note that glXFreeContextEXT on a non-imported context does
not destroy it either.
Fixes the piglit test glx-free-context.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Fixes the piglit test glx-get-context-id.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
glXImportContextEXT
The primary problem was that the number of reply bytes read is clamped
to sizeof(propList), but the loop that processes the properties tries
to examine all of the properties sent by the server. If the server
sends 47,000 properties, we only read 3 but process all 47,000.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
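A hedged sketch of the clamping the fix needs (names are illustrative, not the glx client code): the processing loop must be bounded by what was actually read into the local buffer, not by the count the server reported.

#include <stddef.h>

/* Returns how many properties may safely be processed. */
static size_t
clamp_property_count(size_t server_reported, size_t buffer_capacity)
{
   return server_reported > buffer_capacity ? buffer_capacity
                                            : server_reported;
}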
|
|
|
|
|
|
|
| |
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Each of the DRI, DRI2, and DRISW backends contain code like the
following in their create-context routine:
if (shareList) {
pcp_shared = (struct dri2_context *) shareList;
shared = pcp_shared->driContext;
}
This assumes that the glx_context *shareList is actually the correct
derived type. However, if shareList was created as an
indirect-rendering context, it will not be the expected type. As a
result, shared will contain garbage. This garbage will be passed to
the driver, and the driver will probably segfault. This can be
observed with the following GLX code:
ctx0 = glXCreateContext(dpy, visinfo, NULL, False);
ctx1 = glXCreateContext(dpy, visinfo, ctx0, True);
Create-context is the only case where this occurs. All other cases
where a context is passed to the backend, it is the 'this' pointer
(i.e., we got to the backend by calling something from ctx->vtable).
To work around this, check that the shareList->vtable->destroy method
is the same as the destroy method of the expected type. We could also
check that shareList->vtable matches the expected vtable, or add a "tag" to
glx_context to identify the derived type.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
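A small standalone model of the workaround described above; every type and function name here is an illustrative stand-in for the real glx structures.

#include <stddef.h>

struct ctx_model;

struct ctx_vtable_model {
   void (*destroy)(struct ctx_model *ctx);
};

struct ctx_model {
   const struct ctx_vtable_model *vtable;
};

static void
dri2_destroy_model(struct ctx_model *ctx)
{
   (void) ctx;
}

/* Only downcast shareList when its destroy method is the backend's own,
 * i.e. the context really is the expected derived type. */
static int
share_list_is_dri2(const struct ctx_model *shareList)
{
   return shareList != NULL &&
          shareList->vtable->destroy == dri2_destroy_model;
}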
|
|
|
|
|
|
|
| |
This is not exposed generally yet because some of the swrast paths hit
in piglit (drawpixels, copypixels, blit) aren't yet converted to
MapRenderbuffer.
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This is a little more unusual than the separate MESA_FORMAT_S8_Z24
support, because in addition to storing the real stencil data in a
MESA_FORMAT_S8 miptree, we also make the Z miptree be
MESA_FORMAT_Z32_FLOAT instead of the requested format.
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
| |
With separate stencil GL_DEPTH32F_STENCIL8, the miptree will have a
really different format (MESA_FORMAT_Z32_FLOAT) from the teximage
(MESA_FORMAT_Z32_FLOAT_X24S8).
Reviewed-by: Kenneth Graunke <[email protected]>
|
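A hedged illustration of the format split described in the last two commits; the enum is a reduced stand-in, and only the MESA_FORMAT_* names quoted above come from the commit text.

/* Reduced illustrative enum, not Mesa's format list. */
enum example_format {
   EXAMPLE_Z32_FLOAT,          /* MESA_FORMAT_Z32_FLOAT       */
   EXAMPLE_Z32_FLOAT_X24S8,    /* MESA_FORMAT_Z32_FLOAT_X24S8 */
   EXAMPLE_S8                  /* MESA_FORMAT_S8              */
};

/* With separate stencil, a packed GL_DEPTH32F_STENCIL8 teximage is
 * backed by a float-depth miptree; stencil lives in its own S8 miptree
 * rather than being interleaved. */
static enum example_format
depth_miptree_format(enum example_format teximage_format,
                     int has_separate_stencil)
{
   if (has_separate_stencil && teximage_format == EXAMPLE_Z32_FLOAT_X24S8)
      return EXAMPLE_Z32_FLOAT;
   return teximage_format;
}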