Commit messages
|
This function does simple texture mapping itself, so disable normal texture
mapping before we call _swrast_write_rgba_span(); otherwise we'd apply it twice.
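A minimal sketch of the save/restore pattern this describes, assuming the
per-context enable bits live in ctx->Texture._EnabledUnits (an assumption here,
not taken from this commit):

    /* Sketch only: temporarily turn off the normal texture path. */
    GLbitfield texSave = ctx->Texture._EnabledUnits;   /* assumed field */
    ctx->Texture._EnabledUnits = 0x0;      /* span colors are already textured */
    _swrast_write_rgba_span(ctx, &span);   /* won't re-apply texturing now     */
    ctx->Texture._EnabledUnits = texSave;  /* restore for subsequent rendering */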
|
See bug #17895. These assertions can be removed once it is resolved.
|
New static configs generate DLLs that do not have a dependency on the MSVCR*
DLLs.
|
This reverts commit bbda892c551e7d3f2d94cc877cc6e80f8568fa99.
Static configs rolled into regular project files (in next commit).
Provided by Karl Schultz.
|
There were several bugs in the infrastructure for these two routines.
1. GLX_ALIAS was incorrectly used. The function and its alias must be
identical! glXMakeContextCurrent / glXMakeCurrentReadSGI and
MakeContextCurrent had different parameters. This caused the last
parameter of MakeContextCurrent to get random values.
2. We based the implementation of glXMakeContextCurrent on the manual
page instead of the GLX spec. The GLX spec says that
glXMakeContextCurrent can be passed a Window as a drawable. When this
happens, it will behave just like glXMakeCurrentReadSGI or
glXMakeCurrent.
3. If there was a problem finding or creating the DRI drawable,
MakeContextCurrent would crash instead of returning an error.
This commit fixes all three issues, and fixes bug #18367 and bug #19625.
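A sketch of the signature discipline point 1 requires: every public alias must
pass an identical parameter list to the shared worker, and the worker reports
an error instead of crashing. This is an illustrative shape only, not the
commit's exact code:

    #include <GL/glx.h>

    /* Shared worker: same parameters for every alias that maps to it. */
    static Bool
    MakeContextCurrent(Display *dpy, GLXDrawable draw,
                       GLXDrawable read, GLXContext ctx)
    {
       /* ... look up / create the DRI drawables; on failure: ... */
       (void) dpy; (void) draw; (void) read; (void) ctx;
       return False;   /* return an error rather than crashing */
    }

    Bool
    glXMakeCurrent(Display *dpy, GLXDrawable drawable, GLXContext ctx)
    {
       /* classic entry point: the read drawable is the draw drawable */
       return MakeContextCurrent(dpy, drawable, drawable, ctx);
    }

    Bool
    glXMakeContextCurrent(Display *dpy, GLXDrawable draw,
                          GLXDrawable read, GLXContext ctx)
    {
       /* per the GLX spec, draw/read may also be plain Windows */
       return MakeContextCurrent(dpy, draw, read, ctx);
    }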
|
The upstream linux kernel headers and libdrm kernel headers disagree on the
tag name for the sarea struct: _drm_i915_sarea vs drm_i915_sarea. They
both typedef it to drm_i915_sarea_t though, so just use that.
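For illustration, the declaration-level difference this sidesteps:

    /* libdrm header:   struct _drm_i915_sarea { ... };   */
    /* kernel header:   struct drm_i915_sarea  { ... };   */
    /* both headers:    typedef ... drm_i915_sarea_t;     */
    drm_i915_sarea_t *sarea;   /* compiles against either header */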
|
It's been broken and deprecated for a while, so it's time to die. This has the
wonderful benefit of cleaning up the code a fair amount, making it marginally
less twisty.
I'm unsure whether the for loops in IntelWindowMoved are still needed.
|
Make some compiler flags per-file.
Remove driverfuncs.c from osmesa project.
|
This utility is useful for hardware that doesn't support HW index buffers.
It's a bit inefficient but appears to give a substantial performance gain,
as we can emit tri strips that would otherwise be split into triangles.
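A minimal sketch of the idea behind such a utility (names are hypothetical, not
its actual API): dereference the indices on the CPU so the strip can be drawn
without a hardware index buffer.

    struct vertex { float x, y, z, w; };

    /* Copy the referenced vertices into a linear array, preserving strip
     * order, so the result can be emitted as a plain triangle strip. */
    static void
    expand_indexed_strip(const struct vertex *verts,
                         const unsigned short *indices,
                         unsigned count, struct vertex *out)
    {
       unsigned i;
       for (i = 0; i < count; i++)
          out[i] = verts[indices[i]];
    }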
|
DRI drivers may often validate a write drawable first and then a read
drawable ("readable"). However, the hardware lock may be released while
validating the readable, causing the write drawable's status to become stale.
Drivers should use this macro instead when validating two drawables.
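A rough sketch of the pattern such a macro encapsulates (the helper names here
are hypothetical, not the macro's real internals):

    /* Loop until both drawables have been validated without the first
     * one going stale while the second was being validated. */
    do {
       validate_drawable(write_draw);   /* may drop and re-take the lock */
       validate_drawable(read_draw);    /* may also drop the lock        */
    } while (drawable_is_stale(write_draw));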
|
GL_XOR logicop mode can be approximated with blending by computing 1 - dst.
Here are a couple of test programs for that.
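For reference, the standard blending setup that yields 1 - dst (the general
trick, not necessarily the exact code in the test programs): draw in white with
GL_ONE_MINUS_DST_COLOR as the source factor and GL_ZERO as the destination
factor.

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
    glColor3f(1.0f, 1.0f, 1.0f);
    /* result = 1.0 * (1 - dst) + dst * 0 = 1 - dst */
    /* ... draw the rubber-band / highlight geometry ... */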
|
Adapted from a patch by Matthieu Herrb <[email protected]>
|
Since we use an inverted viewport transformation for render to texture,
front/back polygon orientation gets inverted.
Now glCullFace(GL_FRONT / GL_BACK) works correctly.
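A sketch of the kind of adjustment involved (the inversion flag is hypothetical;
ctx->Polygon.CullFaceMode holds the GL cull state in Mesa): when the viewport is
inverted, the face handed to the hardware must be swapped.

    /* Winding flips when Y is inverted, so cull the opposite face. */
    GLenum face = ctx->Polygon.CullFaceMode;
    if (viewport_y_inverted && face != GL_FRONT_AND_BACK)
       face = (face == GL_FRONT) ? GL_BACK : GL_FRONT;
    /* ... program the hardware cull state with 'face' ... */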
|
When we're rendering to textures, we have to invert the viewport transformation.
This helper cleans up that test and can be used elsewhere...
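A minimal sketch of what such a helper can look like (the name and the exact
test are assumptions, not this commit's code):

    /* Rendering into a user FBO (renderbuffer/texture) means the
     * viewport transformation has to be inverted. */
    static GLboolean
    rendering_to_texture(const GLcontext *ctx)
    {
       return ctx->DrawBuffer->Name != 0;   /* 0 = window-system framebuffer */
    }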
|
One could enable depth testing before binding an FBO that has a depth buffer,
so this test is no longer useful or correct.
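For example, a check performed at glEnable time cannot account for this
perfectly legal call order (fbo here stands for a previously created framebuffer
object with a depth attachment):

    glEnable(GL_DEPTH_TEST);                        /* current FB has no depth buffer yet */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);  /* fbo has a depth attachment         */
    /* depth testing must now take effect against the FBO's depth buffer */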
|
See bug 17929.
Fog doesn't actually work, but the often-complained-about warning is silenced.
|
OpenGL allows mixing and matching depth and stencil renderbuffers in
framebuffer objects, while the hardware really only supports interleaved
depth/stencil buffers. This makes for some tricky buffer management.
An extra wrinkle is the situation where the user allocates a 16bpp depth
texture or renderbuffer and then tries to render to it along with a stencil
buffer. We'd have to promote the 16bpp Z values to 24-bit Z values and
mix in the stencil values to set up the depth/stencil renderbuffer.
There's no support for that yet, so always allocate 32bpp depth textures/
renderbuffers for now.
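For illustration only (this commit deliberately does not implement it), the kind
of re-packing that promotion would require, assuming a layout with Z in the high
24 bits and stencil in the low 8:

    #include <stdint.h>

    /* Widen 16-bit Z by replicating its high byte, then append stencil. */
    static uint32_t
    pack_z16_as_z24s8(uint16_t z16, uint8_t stencil)
    {
       uint32_t z24 = ((uint32_t) z16 << 8) | (z16 >> 8);   /* 16 -> 24 bits    */
       return (z24 << 8) | stencil;                         /* Z24 high, S8 low */
    }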
|
Don't overload the Size field with the texture target, to avoid confusion.
|
This was changed between GL 1.0 and 1.1. Mesa still had the 1.0 behaviour.
|
Previously MaxTextureUnits was used to validate both texture image
units and texture coordinate units in fragment programs. Instead, use
MaxTextureCoordUnits for texture coordinate units and
MaxTextureImageUnits for texture image units.
Fixes bugzilla #19468.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
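A sketch of the distinction (only the limit names come from this change; the
surrounding variables are assumed): coordinate references are checked against
MaxTextureCoordUnits, sampler/image references against MaxTextureImageUnits.

    /* Hypothetical checks inside fragment-program validation: */
    if (coord_unit >= ctx->Const.MaxTextureCoordUnits) {
       /* error: too many texture coordinate units referenced */
    }
    if (image_unit >= ctx->Const.MaxTextureImageUnits) {
       /* error: too many texture image (sampler) units referenced */
    }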