Previously, we advertised 0 VS texture units. Now that we have proper
support for using the sampling engine in the VS, we can advertise 16,
which is conveniently the number required for OpenGL 3.0.
v2: Enable on Gen4. I hacked up my tests to not use flat ivec varyings
and they pass.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
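For context only (not part of the commit): an application can check the advertised limit with a standard GL query; GL 3.0 requires the result to be at least 16. A minimal sketch, assuming a current GL 2.0+ context:
   #include <stdio.h>
   #include <GL/gl.h>
   #include <GL/glext.h>

   /* Requires a current OpenGL context. */
   static void
   print_vs_texture_units(void)
   {
      GLint n = 0;
      glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &n);
      printf("GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS = %d\n", n);
   }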
|
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Now that this is all factored out, it's trivial to do.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
This should avoid state-dependent FS recompiles when samplers that are
only used by the VS change.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
The idea is to reuse this for the VS and (in the future) GS as well.
v2: Include yuvtex data since we're not dropping GL_MESA_ycbcr_texture.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]> [v1]
Reviewed-by: Ian Romanick <[email protected]>
|
The visit() half computes the values to put in the header based on the
IR and simply stuffs that in the vec4_instruction; the emit() half uses
this to set up the message header. This works out well since emit() can
use brw_reg directly and access individual DWords without kludgery.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
We'll want to reuse this for the VS, and it's complex enough that I'd
rather not cut and paste it.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
This translates the GLSL compiler's IR into vec4_instruction IR,
generating code to load coordinates, LOD info, shadow comparators, and
so on into the appropriate message registers.
It turns out that the SIMD4x2 parameters are identical on Gen5-7, and
the Gen4 code is similar enough that, unlike in the FS, it's easy to
support all generations in a single function.
v2: Load zeros for missing coordinates (fixing vs-texelFetch-sampler1D
and 2D on G45), and fix G45 message length for shadow comparisons.
Signed-off-by: Kenneth Graunke <[email protected]>
|
This is the part that takes the vec4_instruction IR and turns it into
actual Gen ISA.
v2: Add Gen4 messages, don't retype m0 to UW.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Prior to Ironlake, cube maps were stored as 3D textures. In recent
refactoring, we removed a separate "layers" parameter in favor of using
depth. Unfortunately, depth was getting minified, which is only correct
for actual 3D textures.
Fixes piglit tests:
- bugs/crash-cubemap-order
- fbo/fbo-cubemap
- texturing/cubemap
Also changes texturing/cubemap npot from abort to fail.
This hasn't seen a full test run since Piglit on Mesa master hangs
GM45 a lot.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
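To illustrate the distinction (a sketch, not the driver code): only true 3D textures minify depth per mip level; a cube map stored as a "3D" surface keeps all six faces at every level.
   #include <GL/gl.h>

   /* Sketch: depth of mipmap level `level` for a given target. */
   static unsigned
   level_depth(GLenum target, unsigned base_depth, unsigned level)
   {
      if (target == GL_TEXTURE_3D) {
         unsigned d = base_depth >> level;
         return d ? d : 1;
      }
      /* Cube maps (even when stored as 3D textures pre-Ironlake) keep
       * base_depth (six faces) at every level. */
      return base_depth;
   }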
|
All of the extensions require that both libGL and either the server or
the direct rendering driver (or both) enable the extension before it's
advertised. It seems safe to assume that none of the other components
on OS X will enable these extensions, so all the #ifdef blocks here
just clutter the code.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: Jeremy Huddleston <[email protected]>
|
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
There are a few unsupported extensions (e.g., the ATI and NV float
extensions) that are still in the list. There is some small chance
that these may be supported some day.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
__glXInitialize calls AllocAndFetchScreenConfigs.
AllocAndFetchScreenConfigs unconditionally sends a glXQueryServerString
request to the server. This request is only supported with GLX 1.1 or
later, so we were already implicitly incompatible with GLX 1.0
servers. How many more similar bugs lurk in the code that nobody has
noticed in years?
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
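For illustration only (not part of the commit): a client that wanted to keep working against a GLX 1.0 server would have to gate the string query on the negotiated version, e.g.:
   #include <GL/glx.h>

   /* Sketch: glXQueryServerString is only defined for GLX 1.1+. */
   static const char *
   server_vendor(Display *dpy, int screen)
   {
      int major = 0, minor = 0;
      if (!glXQueryVersion(dpy, &major, &minor))
         return NULL;
      if (major == 1 && minor < 1)
         return NULL;   /* GLX 1.0 server: request not supported */
      return glXQueryServerString(dpy, screen, GLX_VENDOR);
   }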
|
Previously the share_xid was only set in the glXImportContextEXT path,
and it was left set to None in all of the other create-context paths.
Fixes the piglit test glx-query-context-info-ext.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: Jeremy Huddleston <[email protected]>
|
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: Jeremy Huddleston <[email protected]>
|
Send the DestroyContext protocol immediately when glXDestroyContext is
called, and never call it when glXFreeContextEXT is called. In both
cases, either destroy the client-side structures or, if the context is
current, set xid to None so that the client-side structures will be
destroyed later.
I believe this restores the behavior of the original SGI code. See
src/glx/x11 around commit 5df82c8. The spec doesn't say anything
about glXDestroyContext not really destroying imported contexts (it
acts like glXFreeContextEXT instead), but that's what the original
code did. Note that glXFreeContextEXT on a non-imported context does
not destroy it either.
Fixes the piglit test glx-free-context.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
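As a usage illustration of the semantics described above (standard GLX_EXT_import_context entry points; error handling omitted):
   #define GLX_GLXEXT_PROTOTYPES
   #include <GL/glx.h>

   /* Sketch: import another client's context and later release only the
    * client-side data.  glXFreeContextEXT never destroys the context;
    * glXDestroyContext sends the DestroyContext request immediately. */
   static void
   peek_at_context(Display *dpy, GLXContextID id)
   {
      GLXContext ctx = glXImportContextEXT(dpy, id);
      if (!ctx)
         return;
      /* ... glXQueryContextInfoEXT(dpy, ctx, ...) ... */
      glXFreeContextEXT(dpy, ctx);
   }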
|
Fixes the piglit test glx-get-context-id.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
The primary problem in glXImportContextEXT was that the number of
reply bytes read is clamped
to sizeof(propList), but the loop that processes the properties tries
to examine all of the properties sent by the server. If the server
sends 47,000 properties, we only read 3 but process all 47,000.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
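A generic sketch of the fix pattern (hypothetical names, not the actual Mesa code): the property count used for processing has to be clamped to what the local buffer can actually hold, no matter what the server claims.
   #include <stdint.h>
   #include <stddef.h>

   #define MAX_PROPS 3   /* hypothetical: how many (name, value) pairs fit */

   /* Hypothetical helpers standing in for the real protocol code. */
   extern void read_reply_data(void *buf, size_t len);
   extern void handle_property(uint32_t name, uint32_t value);

   static void
   process_context_properties(uint32_t server_num_props)
   {
      uint32_t propList[2 * MAX_PROPS];
      uint32_t n = server_num_props;

      if (n > MAX_PROPS)
         n = MAX_PROPS;   /* never process more than we actually read */

      read_reply_data(propList, n * 2 * sizeof(uint32_t));
      for (uint32_t i = 0; i < n; i++)
         handle_property(propList[2 * i], propList[2 * i + 1]);
      /* (a real implementation must also drain any unread reply bytes) */
   }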
|
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Each of the DRI, DRI2, and DRISW backends contains code like the
following in its create-context routine:

   if (shareList) {
      pcp_shared = (struct dri2_context *) shareList;
      shared = pcp_shared->driContext;
   }

This assumes that the glx_context *shareList is actually the correct
derived type. However, if shareList was created as an
indirect-rendering context, it will not be the expected type. As a
result, shared will contain garbage. This garbage will be passed to
the driver, and the driver will probably segfault. This can be
observed with the following GLX code:

   ctx0 = glXCreateContext(dpy, visinfo, NULL, False);
   ctx1 = glXCreateContext(dpy, visinfo, ctx0, True);

Create-context is the only case where this occurs. In all other cases
where a context is passed to the backend, it is the 'this' pointer
(i.e., we got to the backend by calling something from ctx->vtable).
To work around this, check that the shareList->vtable->destroy method
is the same as the destroy method of the expected type. We could also
check that shareList->vtable matches the expected vtable, or add a
"tag" to glx_context to identify the derived type.
NOTE: This is a candidate for the 7.11 branch.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
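A sketch of the described workaround, applied to the snippet above (names modeled on the text; the dri2_destroy_context identifier is an assumption, not necessarily the literal patch):
   /* Only treat shareList as a dri2 context if its vtable's destroy
    * hook is the one this backend installs; an indirect-rendering
    * context will have a different vtable. */
   if (shareList && shareList->vtable->destroy == dri2_destroy_context) {
      pcp_shared = (struct dri2_context *) shareList;
      shared = pcp_shared->driContext;
   }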
|
This is not exposed generally yet because some of the swrast paths hit
in piglit (drawpixels, copypixels, blit) aren't yet converted to
MapRenderbuffer.
Reviewed-by: Kenneth Graunke <[email protected]>
|
This is a little more unusual than the separate MESA_FORMAT_S8_Z24
support, because in addition to storing the real stencil data in a
MESA_FORMAT_S8 miptree, we also make the Z miptree be
MESA_FORMAT_Z32_FLOAT instead of the requested format.
Reviewed-by: Kenneth Graunke <[email protected]>
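A condensed sketch of the format split being described (format names follow the text; gl_format, MESA_FORMAT_X8_Z24, and MESA_FORMAT_NONE are assumptions here, and the real miptree code is more involved):
   /* Sketch: map a packed depth/stencil teximage format onto a separate
    * depth miptree format plus an S8 stencil miptree. */
   static void
   choose_separate_stencil_formats(gl_format tex_format,
                                   gl_format *depth_format,
                                   gl_format *stencil_format)
   {
      switch (tex_format) {
      case MESA_FORMAT_Z32_FLOAT_X24S8:
         *depth_format = MESA_FORMAT_Z32_FLOAT;
         *stencil_format = MESA_FORMAT_S8;
         break;
      case MESA_FORMAT_S8_Z24:
         *depth_format = MESA_FORMAT_X8_Z24;
         *stencil_format = MESA_FORMAT_S8;
         break;
      default:
         *depth_format = tex_format;
         *stencil_format = MESA_FORMAT_NONE;
         break;
      }
   }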
|
With separate stencil GL_DEPTH32F_STENCIL8, the miptree will have a
really different format (MESA_FORMAT_Z32_FLOAT) from the teximage
(MESA_FORMAT_Z32_FLOAT_X24S8).
Reviewed-by: Kenneth Graunke <[email protected]>
|
The format handling here is tricky, because we're not actually
generating a Z32_FLOAT_X24S8 miptree, so we're guessing the format
that GL wants based on seeing Z32_FLOAT with a separate stencil.
Reviewed-by: Kenneth Graunke <[email protected]>
|
All the operations were just trying to get at irb->wrapped_depth->mt,
which is the same as irb->mt now.
Reviewed-by: Kenneth Graunke <[email protected]>
|
gen7 only supports the non-packed formats, even if you associate a
real separate stencil buffer -- otherwise it's as if the depth test
always fails.
This requires a little bit of care in the match_texture_image case,
since the miptree format no longer matches the texture image format.
Reviewed-by: Kenneth Graunke <[email protected]>
|
This little bit of logic was duplicated, which isn't much, but I was
going to need to duplicate a bit of additional logic in the next
commit.
Reviewed-by: Kenneth Graunke <[email protected]>
|
There were only two places it was really used at this point: the
batchbuffer emit of the separate stencil packets for gen6/7.
Just write in the ->stencil_mt reference in those two places and ditch
all this flailing around with allocation and refcounts.
v2: Fix separate stencil on gen7.
Reviewed-by: Kenneth Graunke <[email protected]>
|
This mentions which channels are used for the slice and depth comparison values.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
This fixes the piglit glsl-1.10 shadow1D related tests.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
The 4th texcoord is used in this case for the comparison.
This fixes piglit glsl-fs-shadow2DArray* on softpipe.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
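For reference, a sketch of the coordinate layout involved (not the softpipe code): for 2D-array shadow samplers the array layer occupies the third component, so the comparison reference moves to the fourth.
   /* Sketch: which component holds the depth-comparison reference.
    *   1D/2D shadow:    (s, t, ref, -)
    *   2D-array shadow: (s, t, layer, ref)  */
   static float
   shadow_ref_value(const float coord[4], int is_2d_array_shadow)
   {
      return is_2d_array_shadow ? coord[3] : coord[2];
   }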
|
The code didn't handle the case where front wasn't specified in the vertex
shader outputs, but back was.
In that case we were doing a copy from back to a non-existent front.
This code checks that both the front and back outputs exist and only
does the copy when they both do.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
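A minimal sketch of the guard being described (hypothetical slot names):
   /* Copy back-face colors over the front-face slots only when both
    * outputs actually exist in the vertex shader. */
   if (front_slot >= 0 && back_slot >= 0)
      outputs[front_slot] = outputs[back_slot];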
|
Signed-off-by: Brian Paul <[email protected]>
|
Sets rgba layer as zeroth layer if a custom background_surface is specified.
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
|
It's harmless to accept attributes we don't implement yet, since
they only take effect when the corresponding feature is enabled.
As long as those features aren't enabled, nothing happens.
This enables support for custom colorspaces and background colors.
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
|
Currently only validating, since nothing else can be done with it yet.
Signed-off-by: Maarten Lankhorst <[email protected]>
v2: removed check_video_surface
Signed-off-by: Christian König <[email protected]>
|
This sample compare was always doing linear, and this makes the
glsl-fs-shadow1DArray test render like the Intel driver.
Also fix a wrong 0 -> j from the initial patch.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
This is the first part of a fix for piglit glsl-fs-shadow1DArray.
Also fix the passing of the unused r[2] in the normal 1D case.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
The piglit draw-pixel-with-texture test was asserting in the glsl->tgsi
code due to a 0 texture target; this makes sure the texture target is
copied over correctly when we copy instructions around.
v2: drive-by fix for bitmap on the way past.
This avoids the assertion; we still have to contemplate fixing things
as per the spec later.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
This will be especially useful for loading texturing parameters, where I
need to (for example) reference m3.xz<D>.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Copied and pasted from fs_inst::is_tex(), but without TXB.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
We'll be reusing most of these for the VS shortly. The one exception is
TXB (texturing with LOD bias), which is explicitly forbidden in the VS.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Fixes a regression since d2235b0f4681f75d562131d655a6d7b7033d2d8b,
in my new textureSize sampler(1DArrayShadow|2DShadow|2DArrayShadow)
piglit tests, though I'm not honestly sure how this ever worked.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Accidentally an old patch.
|
Noticed a "warning: array subscript is above array bounds" given at one of
the existing sanity-check asserts. Turns out all the arrays of strings
haven't matched the corresponding enum values in a while, if ever.
I didn't know the proper names for any of these and couldn't find
them in the base specs aside from "result.pointsize" in
ARB_vertex_program, so I just filled in the enum's value
as was done with other slots.
Also add four STATIC_ASSERT()s to catch future additions or bumps to
MAX_VARYING/etc., and some more non-static asserts where there weren't
any before.
(Note, the fragment enum that corresponded to result.color(half) was removed in
8d475822e6e19fa79719c856a2db5b6a205db1b9.)
Reviewed-by: Brian Paul <[email protected]>
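The general pattern used here (a sketch with made-up enum and table names, not the exact Mesa code): a compile-time assertion ties each string table's length to its enum count, so a future addition that only touches one of them fails to build instead of reading past the array.
   #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
   /* Mesa-style compile-time assert: negative array size on failure. */
   #define STATIC_ASSERT(cond) \
      do { (void) sizeof(char [1 - 2 * !(cond)]); } while (0)

   enum result_slot {
      SLOT_POSITION,
      SLOT_COLOR,
      SLOT_POINTSIZE,
      NUM_SLOTS
   };

   static const char *const slot_names[] = {
      "result.position",
      "result.color",
      "result.pointsize",
   };

   static void
   check_slot_names(void)
   {
      STATIC_ASSERT(ARRAY_SIZE(slot_names) == NUM_SLOTS);
   }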