| Commit message | Author | Age | Files | Lines |
| |
Signed-off-by: Petri Latvala <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Signed-off-by: Chris Forbes <[email protected]>
| |
...and update relnotes.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Since this is now updated daily and looks to be useful.
| |
relnotes weren't updated this whole time, so I went through all the
GL3.txt changes and picked out the nouveau ones since 10.1.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Reviewed-by: Michel Dänzer <[email protected]>
| |
Ian points out that this being unrestricted was an oversight in the
spec, and is corrected in GLSL 4.40.
Signed-off-by: Chris Forbes <[email protected]>
| |
Signed-off-by: Chris Forbes <[email protected]>
| |
This extension is a huge grab-bag of "stuff that's in DX11". Break it
apart to make it clear what still needs to be done.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
V4: Don't claim Gen8 yet.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Acked-by: Eric Anholt <[email protected]>
| |
Signed-off-by: Ilia Mirkin <[email protected]>
| |
Signed-off-by: Ilia Mirkin <[email protected]>
| |
Signed-off-by: Timothy Arceri <[email protected]>
| |
It is quite hard to satisfy the dependency on the libxml2 Python
bindings outside Linux, particularly on MacOSX, whereas ElementTree is
part of Python's standard library. ElementTree is more limited than
libxml2: no DTD verification, no defaults from the DTD, and no XInclude
support; but none of these limitations is serious enough to justify
using libxml2.
In fact, it was easier to refactor the code to use ElementTree than to
try to get the libxml2 Python bindings working.
In the process, the gl_item_factory class was refactored so that there
is one method for each kind of object to be created, as that simplifies
things substantially.
I confirmed that precisely the same output is generated for GL/GLX/GLES.
v2: Remove m4/ax_python_module.m4 as suggested by Matt Turner.
Reviewed-by: Ian Romanick <[email protected]>
| |
It turns out we can allow COHERENT storage/mappings all the time,
regardless of LLC vs non-LLC. It just means never using temporary
mappings to avoid GPU stalls, and on non-LLC we have to use the GTT
instead of CPU mappings. If we were to use CPU maps on non-LLC (which
might be useful if apps end up using buffer_storage on PBO reads, to
avoid WC read slowness), those would be PERSISTENT but not COHERENT,
and doing that would require us to drive the clflushes from userspace
somehow.
Reviewed-by: Kenneth Graunke <[email protected]>
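For illustration, a minimal sketch (not part of the commit) of the kind
of client-side mapping this change is about: an immutable buffer created
with persistent/coherent flags and mapped once for its lifetime. The
helper name, buffer size and GL_ARRAY_BUFFER target are illustrative; a
context exposing ARB_buffer_storage (GL 4.4) is assumed.

    /* Hypothetical helper; assumes ARB_buffer_storage is available. */
    #include <GL/glcorearb.h>

    #define BUF_SIZE (4 * 1024 * 1024)

    static void *map_persistent_coherent(GLuint *buf_out)
    {
        GLuint buf;
        const GLbitfield flags =
            GL_MAP_WRITE_BIT |
            GL_MAP_PERSISTENT_BIT |  /* mapping stays valid while the GPU uses the buffer */
            GL_MAP_COHERENT_BIT;     /* writes become visible without explicit flushes */

        glGenBuffers(1, &buf);
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        /* Immutable storage; the storage flags must cover the later map flags. */
        glBufferStorage(GL_ARRAY_BUFFER, BUF_SIZE, NULL, flags);

        *buf_out = buf;
        /* The pointer may be kept and written while the buffer is in use,
         * provided the application synchronizes with fences (glFenceSync). */
        return glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE, flags);
    }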
| |
This extension provides a way for an application to render to multiple
surfaces with different buffer formats without having to use multiple
contexts. An EGLContext can be created without an EGLConfig by passing
EGL_NO_CONFIG_MESA. In that case there are no restrictions on the
surfaces that can be used with the context, apart from the requirement
that they all use the same EGLDisplay.
_mesa_initialize_context can now take a NULL gl_config, which will mark
the context as ‘configless’. It will memset the visual to zero in that
case. Previously the i965 and i915 drivers were explicitly creating a
zeroed visual whenever 0 was passed for the EGLConfig. Mesa needs to be
aware that the context is configless because it affects the initial
value to use for glDrawBuffer. The first time the context is bound, it
will set the initial value for configless contexts depending on whether
the framebuffer used is double-buffered.
Reviewed-by: Kristian Høgsberg <[email protected]>
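For illustration, a minimal sketch (not part of the commit) of the
client-side usage: creating a context without a config so it can later
be made current with surfaces of differing formats. Error handling, the
extension-string check and surface creation are elided; the helper name
is illustrative.

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    #ifndef EGL_NO_CONFIG_MESA
    #define EGL_NO_CONFIG_MESA ((EGLConfig)0)
    #endif

    static EGLContext create_configless_context(EGLDisplay dpy)
    {
        static const EGLint attribs[] = { EGL_NONE };

        /* No EGLConfig: the context is not tied to one surface format, so
         * it can be made current with surfaces of different buffer formats,
         * as long as they belong to the same EGLDisplay. */
        eglBindAPI(EGL_OPENGL_API);
        return eglCreateContext(dpy, EGL_NO_CONFIG_MESA, EGL_NO_CONTEXT, attribs);
    }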
| |
Signed-off-by: Ian Romanick <[email protected]>
| |
Signed-off-by: Ilia Mirkin <[email protected]>
| |
On earlier hardware, we had to implement math in the shader to translate
Y-tiled or untiled coordinates to W-tiled coordinates (which is what
BLORP does today in order to texture from stencil buffers).
On Broadwell, we can simply state that it's W-tiled in SURFACE_STATE,
and adjust the pitch. This is much easier.
In the surface state code, I chose to handle the "should we sample depth
or stencil?" question separately from the setup for sampling from
stencil. This should make it work with the BindRenderbufferTexImage
hook as well, and hopefully be reusable for GL_ARB_texture_stencil8
someday.
v2: Update docs/GL3.txt (caught by Matt).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
While the GL_ARB_stencil_texturing extension does not allow the creation
of stencil textures, it does allow shaders to sample stencil values
stored in packed depth/stencil textures.
Specifically, applications can call glTexParameter* with a pname of
GL_DEPTH_STENCIL_TEXTURE_MODE and value of either GL_DEPTH_COMPONENT or
GL_STENCIL_INDEX to select which component they wish to sample. The
default value is GL_DEPTH_COMPONENT (for traditional depth sampling).
Shaders should use an unsigned integer sampler (presumably usampler2D)
to access stencil data. Otherwise, results are undefined. Using shadow
samplers with GL_STENCIL_INDEX selected is also undefined behavior.
This patch creates a new gl_texture_object field, StencilSampling, to
indicate that stencil should be sampled rather than depth. (I chose to
use a boolean since I figured it would be more convenient for drivers.)
It also introduces the [Get]TexParameter code to get and set the value,
and of course the extension plumbing.
v2: Also consider textures incomplete when sampling stencil with
non-NEAREST min/mag filters (caught by Eric Anholt).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
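For illustration, a minimal sketch (not part of the commit) of the
application-side usage described above; the texture size and target are
illustrative.

    /* Create a packed depth/stencil texture and select stencil sampling. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, 256, 256, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

    /* Sample stencil instead of the default GL_DEPTH_COMPONENT. */
    glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE,
                    GL_STENCIL_INDEX);

    /* Per the v2 note, only NEAREST filtering keeps the texture complete. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* In GLSL, read it through an unsigned integer sampler:
     *   uniform usampler2D stencil_tex;
     *   uint s = texture(stencil_tex, uv).r;
     */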
| |
ARB_texture_buffer_object_rgb32 has been supported for a while already.
| |
Signed-off-by: Ilia Mirkin <[email protected]>
| |
Reviewed-by: Fredrik Höglund <[email protected]>
| |
Cc: Ian Romanick <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Fix the version and the status before sending to Khronos for listing in
the registry.
Signed-off-by: Ian Romanick <[email protected]>
| |
v2: add fw version query
v3: add README.VCE
v4: avoid error msg when kernel doesn't support it
Signed-off-by: Christian König <[email protected]>
| |
Almost every driver already supported it. All current and future
Gallium drivers always support it, and most existing classic drivers
support it.
This only changes radeon and nouveau.
This extension only adds data types that can be passed to, for example,
glTexImage2D. It does not add internal formats. Since you can already
pass GL_FLOAT to glTexImage2D this shouldn't pose any additional issues
with those drivers. Note that r200 and i915 already supported this
extension, and they don't support floating-point textures either.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
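The specific new data types this commit enables are not named in the
captured message, so the following is only a hedged sketch of the
type-versus-internal-format distinction the message relies on: the
'type' argument describes the client data, which the GL converts into
the texture's internal format on upload.

    /* Float client data uploaded into a fixed-point RGBA8 texture: the GL
     * converts on upload, so accepting a new 'type' does not require the
     * driver to support floating-point internal formats. */
    static const GLfloat pixels[64 * 64 * 4] = { 0.0f };

    glTexImage2D(GL_TEXTURE_2D,
                 0,            /* level */
                 GL_RGBA8,     /* internal format: fixed-point storage */
                 64, 64, 0,
                 GL_RGBA,      /* format of the client data */
                 GL_FLOAT,     /* type of the client data */
                 pixels);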
| |
Signed-off-by: Dave Airlie <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
| |
Mesa now has a real, feature-rich EGL implementation on X11 via xcb.
Therefore I believe there is no longer a practical need for the egl_glx
driver.
Furthermore, egl_glx appears to be unmaintained. The most recent
nontrivial commit to egl_glx was 6baa5f1 on 2011-11-25.
Tested by running weston-smoke in windowed Weston on X with i965.
Signed-off-by: Chad Versace <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
Acked-by: Kristian Høgsberg <[email protected]>
| |
Signed-off-by: Dave Airlie <[email protected]>
| |
Signed-off-by: Chris Forbes <[email protected]>
| |
This updates the r600 driver status to show OpenGL 3.3 as fully supported.
Signed-off-by: Dave Airlie <[email protected]>
| |
Which was just made.
| |
Signed-off-by: Ian Romanick <[email protected]>
| |
Reviewed-by: Marek Olšák <[email protected]>
| |
v2: Note that Fredrik Höglund is working on GL_ARB_multi_bind, not
Maxence Le Doré. Suggested by Matt.
Signed-off-by: Ian Romanick <[email protected]>
| |
The dd_function_table::BlitFramebuffer is already initialized to
_mesa_meta_BlitFramebuffer, so it should just work.
Tested on a Radeon 7500 (OpenGL renderer string: Mesa DRI R100 (RV200
5157) TCL DRI2). I couldn't do a full piglit run because it would tank
the system with or without this patch. I just ran all the blit tests
(-t blit to piglit-run.py). Only fbo-sys-sub-blit failed. All of the
other tests that weren't skipped (i.e., all the multisample and sRGB
tests were skipped) passed.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
The dd_function_table::BlitFramebuffer is already initialized to
_mesa_meta_BlitFramebuffer, so it should just work.
Tested on a FireGL 8800 (OpenGL renderer string: Mesa DRI R200 (R200
5148) TCL DRI).
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Signed-off-by: Ilia Mirkin <[email protected]>
| |
Signed-off-by: Ilia Mirkin <[email protected]>
| |
Signed-off-by: Timothy Arceri <[email protected]>
Reviewed-by: Paul Berry <[email protected]>