Corresponds to b87406e55f029d29594ae76a4b39a4fe1007fe4f.
Now that color-index support is removed from t_dd_tritmp.h and
t_dd_unfilled.h, drivers no longer need to define HAVE_RGBA.
Signed-off-by: Ian Romanick <[email protected]>
This should fix rendering into mipmaps of tiled textures.
The effect of this was that all objects were aligned to 128 bytes
on all generations, rather than just gen2.
Signed-off-by: Chris Wilson <[email protected]>
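A hedged sketch of the intended behaviour, not the driver's actual code: the 128-byte object alignment is a gen2-only requirement, so the alignment should be chosen per generation. Both macro names and the non-gen2 value below are hypothetical placeholders.

    #define GEN2_OBJECT_ALIGN    128
    #define DEFAULT_OBJECT_ALIGN 64   /* hypothetical non-gen2 value */

    /* Only gen2 needs the strict 128-byte object alignment. */
    static unsigned object_alignment(int gen)
    {
        return (gen == 2) ? GEN2_OBJECT_ALIGN : DEFAULT_OBJECT_ALIGN;
    }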
Signed-off-by: Chris Wilson <[email protected]>
We need to do this before we emit any state dependent on the current
render buffers.
GL_TRUE indicates that the driver accepts the program.
GL_FALSE indicates the program can't be compiled/translated by the
driver for some reason (too many resources used, etc.).
Propagate this result up to the GL API: set GL_INVALID_OPERATION
error if glProgramString() was called. Set shader program link
status to GL_FALSE if glLinkProgram() was called.
At this point, drivers still don't do any program checking and
always return GL_TRUE.
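A minimal sketch (not from the commit) of how an application would observe these two failure paths; glProgramStringARB(), glGetError(), and glGetProgramiv() are the standard GL entry points involved:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdio.h>

    static void report_rejection(GLuint glsl_prog,
                                 const char *arb_src, GLsizei arb_len)
    {
        /* ARB assembly path: rejection shows up as GL_INVALID_OPERATION. */
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB,
                           GL_PROGRAM_FORMAT_ASCII_ARB, arb_len, arb_src);
        if (glGetError() == GL_INVALID_OPERATION)
            fprintf(stderr, "driver rejected the ARB program\n");

        /* GLSL path: rejection shows up as a false link status. */
        GLint ok = GL_TRUE;
        glGetProgramiv(glsl_prog, GL_LINK_STATUS, &ok);
        if (!ok)
            fprintf(stderr, "driver rejected the GLSL program at link time\n");
    }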
Conflicts:
src/mesa/drivers/dri/intel/intel_screen.c
src/mesa/drivers/dri/intel/intel_swapbuffers.c
src/mesa/drivers/dri/r300/r300_emit.c
src/mesa/drivers/dri/r300/r300_ioctl.c
src/mesa/drivers/dri/r300/r300_tex.c
src/mesa/drivers/dri/r300/r300_texstate.c
They are not used at all.
* remove-intel-dri1:
intel: intelScreenContext() is no longer used
intel: Remove remaining dri2.enabled tests
intel: Drop more cliprect bookkeeping
intel: Remove struct intel_framebuffer
intel: Remove client-side vblank code
intel: Drop intelWindowMoved()
intel: Drop batchbuffer cliprect_mode tracking
intel: Drop DRI1 static regions
intel: Use depth buffer from ctx.DrawBuffer in copypix_src_region()
intel: Drop LOCK/UNLOCK_HARDWARE()
intel: Drop DRI1 SwapBuffer implementation
intel: Drop DRI1 CopySubBuffer implementation
intel: Drop DRI1 support
Push __driDriverExtensions out of dri_util.c and into the drivers
Remove leftover __DRI{screen,drawable,context}Private references
Check for libdrm_$chipset.pc when needed
As part of the DRI driver interface rewrite I merged __DRIscreenPrivate
and __DRIscreen, and likewise for __DRIdrawablePrivate and
__DRIcontextPrivate. I left typedefs in place though, to avoid renaming
all the *Private uses internal to the drivers. That was probably a
mistake, and it turns out a one-line find+sed combo can do the mass
rename. Better late than never.
This adds missing pkg-config lookup for intel and moves the radeon
lookup into a case...esac so it's only looked up when one or more of
the radeon drivers are enabled.
When we have integer-valued texture formats, the texture border color
must also be able to store int and uint values.
With GL 3.0, the new glTexParameterIiv() and glTexParameterIuiv() functions
can set the border color to int or uint values.
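A short example of the two new entry points the message refers to; both are standard GL 3.0 API, applied here to a hypothetical already-bound integer-format texture:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void set_integer_border_color(void)
    {
        /* Signed-integer border color for a GL_*I texture format. */
        const GLint icolor[4] = { -1, 0, 0, 127 };
        glTexParameterIiv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, icolor);

        /* Unsigned-integer border color for a GL_*UI texture format. */
        const GLuint ucolor[4] = { 0u, 0u, 0u, 255u };
        glTexParameterIuiv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, ucolor);
    }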
Conflicts:
docs/relnotes.html
src/gallium/drivers/llvmpipe/lp_tex_sample_c.c
src/gallium/drivers/r300/r300_cs.h
src/mesa/drivers/dri/i965/brw_wm_surface_state.c
src/mesa/main/enums.c
It was OK before because we proceed to clamp the value to hardware
limits, but given that other uses of MaxLevel have been a trap, let's
avoid it.
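A hedged sketch of the clamping idea described above; the struct and the hardware limit are hypothetical stand-ins, not the driver's real types:

    /* Hypothetical analogue of gl_texture_object's level range. */
    struct tex_levels {
        unsigned base_level;
        unsigned max_level;   /* the MaxLevel the message warns about */
    };

    #define HW_MAX_LEVELS 12  /* hypothetical hardware limit */

    /* Clamp to the hardware limit first, then to the API's MaxLevel. */
    static unsigned last_usable_level(const struct tex_levels *t)
    {
        unsigned last = t->base_level + HW_MAX_LEVELS - 1;
        return (last < t->max_level) ? last : t->max_level;
    }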
Conflicts:
configs/darwin
src/gallium/auxiliary/util/u_clear.h
src/gallium/state_trackers/xorg/xorg_exa_tgsi.c
src/mesa/drivers/dri/i965/brw_draw_upload.c
Conflicts:
src/gallium/auxiliary/util/u_network.c
src/gallium/auxiliary/util/u_network.h
src/gallium/drivers/i915/i915_state.c
src/gallium/drivers/trace/tr_rbug.c
src/gallium/state_trackers/vega/bezier.c
src/gallium/state_trackers/vega/vg_context.c
src/gallium/state_trackers/xorg/xorg_crtc.c
src/gallium/state_trackers/xorg/xorg_driver.c
src/gallium/winsys/xlib/xlib_brw_context.c
src/mesa/main/mtypes.h
Shaves 400 bytes or so from i915_dri.so.
We don't actually care which register is used since we're just
swizzling (0,0,0,0), but it should be a valid variable number.
Detected by clang.
The same code is generated, and readers and static analyzers are
happier.
Conflicts:
src/mesa/main/version.h
src/mesa/state_tracker/st_atom_shader.c
The base of the texture is always the base of the miptree. If it wasn't,
we'd have issues with this code due to miptrees not walking the same
direction for all LODs.
Conflicts:
src/gallium/state_trackers/xorg/xorg_xv.c
src/mesa/drivers/dri/intel/intel_span.c
Now that XRGB is supported, we don't need to hack around cases of an RGBA
format buffer with an internal format of GL_RGB.
Since the texformat branch merge, the value of intel_renderbuffer::texformat
is just a copy of gl_renderbuffer::Format.
MaxCombinedTextureImageUnits is the total number of samplers that can
be bound across the vertex, geometry, and fragment stages, not 0. This
should report the correct value on 965 now. Other DRI drivers may also
need updating if their MaxVertexTextureImageUnits != 0 (for example,
when using the software vertex pipeline).
It's not clear to me if there's going to be a valid value for this
limit other than MaxTextureImageUnits + MaxVertexTextureImageUnits (+
MaxGeometryTextureImageUnits eventually). If not, then we should probably
just move this into the core at Get time.
Bug #25518 (wine regression). Fixes piglit vp-combined-image-units.
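A small sketch of the relationship described above, using standard glGetIntegerv() queries; the additive check mirrors the message's reasoning rather than any driver code:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdio.h>

    static void print_combined_units(void)
    {
        GLint frag = 0, vert = 0, combined = 0;
        glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &frag);
        glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vert);
        glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combined);
        /* After the fix, combined should equal frag + vert (plus
         * geometry units eventually), and never 0. */
        printf("combined=%d (frag=%d, vert=%d)\n", combined, frag, vert);
    }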
Conflicts:
src/gallium/drivers/svga/svga_screen_texture.c
src/gallium/state_trackers/xorg/xorg_composite.c
src/gallium/state_trackers/xorg/xorg_exa.c
src/gallium/state_trackers/xorg/xorg_renderer.c
src/gallium/state_trackers/xorg/xorg_xv.c
src/mesa/main/texgetimage.c
src/mesa/main/version.h
Conflicts:
progs/util/shaderutil.c
src/mesa/drivers/dri/r600/r600_context.c
src/mesa/main/version.h
Support still isn't completely correct, but it's better. piglit
point-sprite now passes. However, glean's pointSprite test fails. In
that test the texture on the sprite is somehow inverted as though
GL_POINT_SPRITE_COORD_ORIGIN were set to GL_LOWER_LEFT. i915 hardware
shouldn't be able to do that!
I believe there are also problems when not all texture units have
GL_COORD_REPLACE set. The hardware enable seems to be all or nothing.
Fixes bug #25313.
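For context, a minimal sketch of the GL state this commit exercises; these are standard GL 2.0 application-side calls, not the driver internals being fixed:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void enable_point_sprites(void)
    {
        glEnable(GL_POINT_SPRITE);
        /* The glean failure behaves as if this were GL_LOWER_LEFT. */
        glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_UPPER_LEFT);
        /* COORD_REPLACE is per texture unit, which is where the
         * all-or-nothing hardware enable bites. */
        glActiveTexture(GL_TEXTURE0);
        glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    }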
Bug #24734.