| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were passing in the rt index, however this was always 0 for the non-independent
blend case. (The format was only actually used to decide whether the color mask
covered all channels, so this went unnoticed and was discovered by accident.)
Additionally, there was a second problem: because we do fixups in the key based
on the color buffer format, we cannot use non-independent blend anyway, as the
fixed-up values would never get used.
So always turn non-independent blending into independent.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
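As a rough illustration of the last point, the blend state can be canonicalized up
front; the structs below are simplified stand-ins for Gallium's pipe_blend_state,
not the driver's actual code:

    #define MAX_COLOR_BUFS 8

    /* Simplified stand-in for the per-render-target blend state. */
    struct rt_blend_state {
       unsigned blend_enable;
       unsigned colormask;
    };

    struct blend_state {
       unsigned independent_blend_enable;
       struct rt_blend_state rt[MAX_COLOR_BUFS];
    };

    /* Always treat blending as independent: replicate rt[0] into every
     * render target so that later per-RT fixups, which depend on the bound
     * color buffer format, actually take effect. */
    static void
    canonicalize_blend(struct blend_state *blend)
    {
       if (!blend->independent_blend_enable) {
          for (unsigned i = 1; i < MAX_COLOR_BUFS; i++)
             blend->rt[i] = blend->rt[0];
          blend->independent_blend_enable = 1;
       }
    }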
|
|
|
|
|
|
|
|
|
|
| |
Filtering of DEPTH_COMPONENT and DEPTH_STENCIL for TEXTURE_3D is already
done in texture_error_check because these combinations aren't allowed on
desktop GL either.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
|
|
|
|
|
|
|
|
| |
Just like DEPTH_COMPONENT.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. The loop over dest buffers in blit_linear() needed a null pointer
check. Fixes https://bugs.freedesktop.org/show_bug.cgi?id=59499
2. The code to grab the drawRb's format needs to be inside the drawing loop.
3. An equality test was using = instead of ==, thus messing up a
renderbuffer attachment texture pointer. This led to memory
corruption and a crash at exit.
Finally, fix a capitalization error (NumDrawBuffers -> numDrawBuffers)
and change the type to unsigned to fix signed/unsigned comparison warnings.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
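Item 3 is the classic assignment-vs-comparison bug; a tiny self-contained demo of
the pattern (hypothetical struct and field names, not the swrast code itself):

    #include <stdio.h>

    struct attachment { void *Texture; };

    int main(void)
    {
       struct attachment draw = { 0 };
       struct attachment read = { &draw };

       /* Buggy: '=' copies read.Texture into draw.Texture and then tests the
        * (non-NULL) result, silently clobbering the attachment pointer. */
       if (draw.Texture = read.Texture)
          printf("buggy: draw.Texture clobbered to %p\n", draw.Texture);

       draw.Texture = 0;

       /* Fixed: '==' only compares the two pointers. */
       if (draw.Texture == read.Texture)
          printf("pointers match\n");
       else
          printf("fixed: draw.Texture left untouched\n");
       return 0;
    }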
|
|
|
|
|
|
|
| |
20-odd more piglits.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
| |
About half a dozen more piglits.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Instead of deriving it from the colour buffer formats only.
Fixes a number of piglit tests which export depth from the pixel shader.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
| |
Fixes piglit 'spec/ARB_depth_buffer_float/fbo-clear-formats stencil' crash.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Enabling it for all resources still seems to cause problems, but depth/stencil
buffers are always accessed with tiling by the DB block.
Also, stick to 1D tiling for now. Getting 2D tiling to work properly will
require substantial changes in libdrm_radeon and possibly the kernel as well.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
|
| |
Apart from the obvious cleanup, this makes sure all blocks use the same tiling
mode for accessing the resource.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
| |
Currently the use of external firmware is required, with kernel and
userspace firmware needed for all Fermi cards except nvd9. Kepler and nvd9
should only require kernel firmware.
|
|
|
|
| |
Signed-off-by: Michel Dänzer <[email protected]>
|
|
|
|
| |
Thanks to calim for helping me find and fix the issue.
|
|
|
|
| |
Thanks to calim for helping me find and fix the issue.
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Loop over multiple destination color buffers. If we set
glDrawBuffers(GL_FRONT_AND_BACK) we need to loop over multiple color
buffers, blitting to each.
2. Add checks for null src/dst surface pointers. This fixes a crash
in the piglit fbo-missing-attachment-blit test.
See bug http://bugs.freedesktop.org/show_bug.cgi?id=59450
Reviewed-by: Marek Olšák <[email protected]>
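A minimal sketch of the loop shape described in points 1 and 2, using illustrative
stand-in types rather than the state tracker's real surfaces:

    #include <stdio.h>

    /* Hypothetical stand-in for a color surface. */
    struct surface { const char *name; };

    static void blit_color(struct surface *src, struct surface *dst)
    {
       printf("blit %s -> %s\n", src->name, dst->name);
    }

    int main(void)
    {
       struct surface src   = { "read color surface" };
       struct surface front = { "front" };
       /* GL_FRONT_AND_BACK means multiple draw buffers; an FBO may also have
        * attachment slots with nothing bound (NULL surface). */
       struct surface *draw_surfaces[] = { &front, 0 };
       unsigned num_draw = 2;

       for (unsigned i = 0; i < num_draw; i++) {
          if (!draw_surfaces[i])   /* missing attachment: skip instead of crashing */
             continue;
          blit_color(&src, draw_surfaces[i]);
       }
       return 0;
    }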
|
|
|
|
|
|
| |
Use the renderbuffer attachment pointers that we grabbed earlier.
Reviewed-by: Marek Olšák <[email protected]>
|
| |
|
|
|
|
| |
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use os.path.join() rather than hand-rolling it, so the path is correct if
sys.argv[0] returns an absolute path.
(According to the Python documentation, it's platform-dependent whether
sys.argv[0] is a full pathname or not. It probably also depends on how
the process was started...)
Signed-off-by: Jon TURNEY <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
|
|
|
|
|
|
|
|
| |
It seems the other code expects surface[0..1] to be the luma field in the interlaced case.
See, for example, vdpau/surface.c vlVdpVideoSurfaceClear and vlVdpVideoSurfacePutBitsYCbCr.
Signed-off-by: Maarten Lankhorst <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The writemask was XY instead of YZ (thanks to calim for spotting it).
The pixel calculation resulted in the pixel always being off by one.
If y was .5:
y' = round(y) + 0.5 = 1.5
Fixing this also means the LRP function has to swap the pixels, since
it's now the other way around for top/bottom.
With these fixes only the chroma for the top and bottom pixel rows is wrongly interpolated
in my test program:
--- nvidia
+++ nouveau
@@ -1,4 +1,4 @@
-YCbCr[0] = 00c080
+YCbCr[0] = 00b070
YCbCr[1] = 00b070
YCbCr[2] = 029050
YCbCr[3] = 207050
@@ -61,4 +61,4 @@
YCbCr[60] = 0c5070
YCbCr[61] = c05090
YCbCr[62] = 0e70b0
-YCbCr[63] = e080c0
+YCbCr[63] = e070b0
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Christian König <[email protected]>
|
| |
|
|
|
|
|
|
|
| |
Added with the automake conversion, but it makes no sense at all.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Andreas Boll <[email protected]>
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use this method in _mesa_GetInternalformativ for both GL_SAMPLES and
GL_NUM_SAMPLE_COUNTS.
v2: internalFormat may not be color renderable by the driver, so zero
can be returned as a sample count. Require that drivers supporting the
extension provide a QuerySamplesForFormat function. The latter was
suggested by Eric Anholt.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
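A hedged sketch of how one driver hook can back both queries; the hook signature
and helper names here are illustrative, not Mesa's exact interface:

    #include <stddef.h>

    #define GL_SAMPLES            0x80A9
    #define GL_NUM_SAMPLE_COUNTS  0x9380
    #define MAX_SAMPLE_COUNTS     16

    /* Hypothetical driver hook: writes the supported sample counts for
     * 'internal_format' (highest first) and returns how many were written;
     * returns 0 if the format is not renderable by the driver. */
    typedef size_t (*query_samples_fn)(unsigned internal_format,
                                       int samples[MAX_SAMPLE_COUNTS]);

    static void
    get_internalformat_iv(query_samples_fn query_samples,
                          unsigned internal_format, unsigned pname, int *params)
    {
       int buf[MAX_SAMPLE_COUNTS];
       size_t count = query_samples(internal_format, buf);

       if (pname == GL_NUM_SAMPLE_COUNTS) {
          params[0] = (int) count;
       } else if (pname == GL_SAMPLES) {
          for (size_t i = 0; i < count; i++)
             params[i] = buf[i];
       }
    }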
|
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Though, I'm tempted to always expose this extension when
GL_ARB_framebuffer_object is exposed. In that case, it would share the same
enable bit.
v2: Correctly sort extension names. Suggested by Eric Anholt.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
This is for the GL_ARB_internalformat_query extension and GLES 3.0.
v2: Generate GL_INVALID_OPERATION if the extension is not supported.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
| |
Fixes build with MSVC.
Signed-off-by: Vinson Lee <[email protected]>
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The OpenGL 4.2 specification suggests rounding the float data to the nearest
integer when the type of the internal state is integer. Out-of-range floats
should be clamped to {INT_MIN, INT_MAX}. This is not specified anywhere
in the GL/GLES specs, but the test below expects this behavior. This patch makes
the gles3 conformance sgis_texture_lod_basic_getter.test pass.
A GL spec bug will be raised to include clamping of out-of-range floats.
V2: Round the float to the nearest integer for all cases where
_mesa_TexParameterf() converts a float param to int. Use the same block of
float-to-int conversion code for the GL_TEXTURE_SWIZZLE_{R,G,B,A}_EXT cases
as well.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
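A small, illustrative helper showing the round-then-clamp behavior described above
(not Mesa's actual code):

    #include <limits.h>
    #include <math.h>

    /* Convert a float texture parameter to an integer by rounding to the
     * nearest integer and clamping out-of-range values to INT_MIN / INT_MAX. */
    static int
    float_param_to_int(float param)
    {
       if (param >= (float) INT_MAX)
          return INT_MAX;
       if (param <= (float) INT_MIN)
          return INT_MIN;
       return (int) lroundf(param);
    }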
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch fixes a blitting case when drawAttachment->Texture ==
readAttachment->Texture. It was causing an assertion failure in
intel_miptree_attach_map() with the gles3 conformance test case:
framebuffer_blit_functionality_minifying_blit
The number of changes in this file looks scary, but most of them are caused
by introducing a big for loop to support rendering to multiple color
draw buffers.
V2: Fixed the case when the number of draw buffer attachments is zero.
V3: Put a for loop in the blit_nearest() and blit_linear() functions to
support blitting to multiple color draw buffers.
V4: Remove variable declarations in for loops to avoid MSVC compilation
issues.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch adds the required error checking in _mesa_BlitFramebuffer() when
blitting to multiple color render targets. It also fixes a case when
blitting to a framebuffer with a renderbuffer/texture attached to
GL_COLOR_ATTACHMENT{i} (where i != 0). Previously, color blitting was
skipped if nothing was found attached to GL_COLOR_ATTACHMENT0.
V2: Fixed the case when the number of draw buffer attachments is zero.
V3: Do the compatible_color_datatypes() and compatible_resolve_formats()
checks for all the draw renderbuffers in fbobject.c. Fix the debug code
at the bottom of _mesa_BlitFramebuffer() to handle MRTs. Combine the error
checking code for linear blits with the other color blit error checking.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
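The validation is essentially a per-draw-buffer loop; the sketch below uses
simplified stand-in types, and compatible_color_datatypes() is reduced to a
placeholder rather than the real rule:

    #include <stdbool.h>

    #define GL_INVALID_OPERATION 0x0502

    /* Simplified stand-in for a renderbuffer. */
    struct renderbuffer { int datatype; };

    static bool
    compatible_color_datatypes(const struct renderbuffer *a,
                               const struct renderbuffer *b)
    {
       return a->datatype == b->datatype;   /* placeholder for the real check */
    }

    /* Returns 0 (no error) or a GL error enum: every bound draw buffer, not
     * just attachment 0, must be compatible with the read buffer. */
    static unsigned
    validate_color_blit(const struct renderbuffer *read_rb,
                        struct renderbuffer *const draw_rbs[], unsigned num_draw)
    {
       for (unsigned i = 0; i < num_draw; i++) {
          if (!draw_rbs[i])
             continue;                       /* nothing bound at this slot */
          if (!compatible_color_datatypes(read_rb, draw_rbs[i]))
             return GL_INVALID_OPERATION;
       }
       return 0;
    }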
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This allows querying the default framebuffer in
glGetFramebufferAttachmentParameteriv() for gles3. Fixes unexpected GL
errors in the gles3 conformance test case:
framebuffer_blit_functionality_multisampled_to_singlesampled_blit
V2: Use the _mesa_is_gles3() check to restrict allowed attachment types to
specific APIs.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch enables blitting to multiple color attachments of a
framebuffer. It also fixes a case when blitting to a framebuffer with a
renderbuffer/texture attached to a non-zero attachment point,
i.e. GL_COLOR_ATTACHMENT{1, 2, ...}. Previously we were incorrectly
blitting to GL_COLOR_ATTACHMENT0 by default.
V2: Use intel_copy_texsubimage() for blitting only if all the color
attachments can be blitted using it.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch rewrites the _mesa_meta_BlitFramebuffer() function to add support
for blitting with GLSL/GLSL ES shaders. These changes were required to
support glBlitFramebuffer() in gles3. This patch, along with other
patches in this series, makes 16 failing framebuffer_blit test cases in
gles3 conformance pass.
V2: Properly handle flipped blits for source and destination
renderbuffers / textures. Add support for GL_TEXTURE_RECTANGLE in
_mesa_meta_BlitFramebuffer. Create a temporary depth texture to support
depth buffer blitting.
V3: Remove unsupported / redundant shader code. Add an assertion to make
sure that we don't use rectangle textures in ES. Put an API guard on
glTexEnvi().
V4: For gles3: Don't use ReadPixels or CopyTexImage2D to blit the depth
buffer. The gles3 spec says for CopyTexImage2D that "color buffer
components can be dropped during the conversion to internalformat,
but new components cannot be added." So, use the internal format of the
read renderbuffer to create the texture for color buffer blitting.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
V2:
If mask has GL_STENCIL_BUFFER_BIT set, the depth formats for
readRenderBuffer and drawRenderBuffer must match unless one of the two
buffers doesn't have depth, in which case it's not blitted, so the
format check should be ignored. Same comment goes for stencil formats
in depth renderbuffers if mask has GL_DEPTH_BUFFER_BIT set.
v3 (Kayden): Refactor code to be a bit more readable.
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Nothing was explicitly checking this.
v2: Update GL3 spec reference.
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Signed-off-by: Ian Romanick <[email protected]> [v2]
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]> [v1]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In ES 3.0, when calling glDrawBuffers() on the window system
framebuffer, the only valid targets are GL_NONE or GL_BACK. Since there
is no stereo rendering in ES 3.0, this is a single buffer, unlike
desktop where it may be two (and thus isn't allowed).
For single-buffered configs, GL_BACK ironically means the front (and
only) buffer. I'm not sure that it matters, however, as ES shouldn't
have front buffer rendering in the first place.
Fixes es3conform framebuffer_blit_coverage_default_draw_buffer_binding.
v2: Update GLES3 spec reference.
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Signed-off-by: Ian Romanick <[email protected]> [v2]
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]> [v1]
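A minimal sketch of the ES 3.0 rule for the window-system framebuffer; the helper
name is illustrative, not Mesa's:

    #define GL_NONE 0
    #define GL_BACK 0x0405

    /* ES 3.0: glDrawBuffers() on the window-system framebuffer takes a single
     * entry, and it must be GL_NONE or GL_BACK; anything else is rejected. */
    static int
    es3_winsys_draw_buffer_ok(unsigned buffer)
    {
       return buffer == GL_NONE || buffer == GL_BACK;
    }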
|
|
|
|
|
|
|
| |
I didn't notice this due to a noobed piglit run. It wasn't previously
noticed because the patch was only run on a driver that supported GLES3.
Signed-off-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
At process exit, DLL_PROCESS_DETACH is signaled to DllMain(), which then
triggers a final cleanup. In stw_cleanup(), code is run that
tries to communicate a shutdown to the spawned threads -- however, at
that time those threads have already been terminated by the OS, so
the process hangs.
v2: skip stw_cleanup_thread() too
Signed-off-by: José Fonseca <[email protected]>
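One common way to express "skip cleanup when the process is terminating" is to key
off the reserved argument of DllMain(), which is non-NULL on process termination;
this is a hedged sketch of that pattern, not necessarily the exact fix:

    #include <windows.h>

    BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
    {
       (void) hinst;
       switch (reason) {
       case DLL_PROCESS_ATTACH:
          /* per-process initialization */
          break;
       case DLL_PROCESS_DETACH:
          if (reserved == NULL) {
             /* FreeLibrary(): other threads still exist, so a full cleanup
              * (e.g. stw_cleanup_thread() / stw_cleanup()) is safe here. */
          }
          /* reserved != NULL: the process is terminating and its threads are
           * already gone, so waiting for them to shut down would hang. */
          break;
       }
       return TRUE;
    }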
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes error EGL_BAD_ATTRIBUTE in the tests below on Intel Sandybridge:
* piglit egl-create-context-verify-gl-flavor, testcase OpenGL ES 3.0
* gles3conform, revision 19700, when running GL3Tests with -fbo
This plumbing is added in order to comply with the EGL_KHR_create_context
spec, which makes it illegal to call
eglCreateContext(EGL_CONTEXT_MAJOR_VERSION_KHR=3) with a config whose
EGL_RENDERABLE_TYPE does not contain EGL_OPENGL_ES3_BIT_KHR. The pertinent
portion of the spec is quoted below; the key word is "respectively".
* If <config> is not a valid EGLConfig, or does not support the
requested client API, then an EGL_BAD_CONFIG error is generated
(this includes requesting creation of an OpenGL ES 1.x, 2.0, or
3.0 context when the EGL_RENDERABLE_TYPE attribute of <config>
does not contain EGL_OPENGL_ES_BIT, EGL_OPENGL_ES2_BIT, or
EGL_OPENGL_ES3_BIT_KHR respectively).
To create this patch, I searched for all the ES2 bit plumbing by calling
`git grep "ES2_BIT\|DRI_API_GLES2" src/egl`, and then at each location
added a case for ES3.
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
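A hedged sketch of the renderable-type check this plumbing enables; the constants
come from the EGL headers, while the helper name is illustrative:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* An ES context of major version N may only be created on a config whose
     * EGL_RENDERABLE_TYPE carries the matching bit; otherwise eglCreateContext
     * generates EGL_BAD_CONFIG. */
    static EGLBoolean
    config_supports_es_version(EGLint renderable_type, EGLint major_version)
    {
       switch (major_version) {
       case 1: return (renderable_type & EGL_OPENGL_ES_BIT) != 0;
       case 2: return (renderable_type & EGL_OPENGL_ES2_BIT) != 0;
       case 3: return (renderable_type & EGL_OPENGL_ES3_BIT_KHR) != 0;
       default: return EGL_FALSE;
       }
    }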
|
|
|
|
|
|
|
|
|
| |
If the hardware/driver combo supports GLES3, then set the GLES3 bit in
intel_screen's bitmask of supported DRI APIs. Neither the EGL nor GLX
layer uses the bit yet.
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This enum corresponds to EGL_OPENGL_ES3_BIT_KHR.
Neither the GLX nor EGL layer uses the enum yet.
I don't like the GLES bits. I'd prefer that all GLES APIs be exposed
through a single API bit, as is done in GLX_EXT_create_context_es_profile.
But, we need this GLES3 enum in order to do the plumbing necessary to
correctly support EGL_OPENGL_ES3_BIT_KHR as required by the
EGL_KHR_create_context spec.
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Each driver (i830, i915, i965) used independent but similar code to
validate the requested context version. With the recent arrival of GLES3,
that logic has needed an update. Rather than apply identical updates to
each driver's validation code, let's just move the validation into the
shared routine intelInitContext.
This refactor required some incidental changes to functions
i830CreateContext and intelInitContext. For each function, this patch:
- Adds context version parameters to the signature.
- Adds a DRI_CTX_ERROR out param to the signature.
- Sets the DRI_CTX_ERROR at each early return.
Tested against gen6 with piglit egl-create-context-verify-gl-flavor.
Verified that this patch does not change the set of exposed EGL context
flavors.
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before this patch, intelInitScreen2 set DRIScreen::api_mask with the hacky
heuristic below:
   if (gen >= 3)
      api_mask = GL | GLES1 | GLES2;
   else
      api_mask = 0;
This hack was likely broken on gen2 (i830), but I don't care enough to
properly investigate. It appears that every EGLConfig on i830 has
EGL_RENDERABLE_TYPE=0, and thus eglCreateContext will never succeed.
Anyway, moving on to living drivers...
With the arrival of EGL_OPENGL_ES3_BIT_KHR, this heuristic is now
insufficient. We must enable the GLES3 bit if and only if the driver is
capable of creating a GLES3 context. This requires us to determine the
maximum supported context version supported by the hardware/driver for
each api *during initialization of intel_screen*.
Therefore, this patch adds four new fields to intel_screen which indicate
the maximum supported context version for each api:
max_gl_core_version
max_gl_compat_version
max_gl_es1_version
max_gl_es2_version
The api mask is now correctly set as:
   api_mask = GL;
   if (max_gl_es1_version > 0)
      api_mask |= GLES1;
   if (max_gl_es2_version > 0)
      api_mask |= GLES2;
Tested against gen6 with piglit egl-create-context-verify-gl-flavor.
Verified that this patch does not change the set of exposed EGL context
flavors.
v2:
- Replace the if-tree on gen with a switch, for Ian.
- Unconditionally enable the DRI_API_OPENGL bit, for Ian.
v3:
- Drop max gl version to 1.4 on gen3 if !has_occlusion_query,
because occlusion queries entered core in 1.5. For Ian.
v4:
- Drop ES2 version back to 2.0 due to rebase (Ian).
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'm not sure if this is the correct fix. The
_mesa_es_error_check_format_and_type function (used above in the ES 1
and 2 cases) was originally added for glTexImage checking and allows
GL_DEPTH_STENCIL/GL_UNSIGNED_INT_24_8 combinations. Using it in ES 3
causes other tests to regress.
Fixes es3conform's packed_depth_stencil_error test.
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
| |
INVALID_ENUM is for when the type is simply not known.
Fixes part of es3conform's packed_depth_stencil_error test.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
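An illustrative (non-Mesa) sketch of the distinction between the two errors; the
enum values are from the GL headers:

    #define GL_NO_ERROR          0
    #define GL_INVALID_ENUM      0x0500
    #define GL_INVALID_OPERATION 0x0502
    #define GL_UNSIGNED_BYTE     0x1401
    #define GL_DEPTH_STENCIL     0x84F9
    #define GL_UNSIGNED_INT_24_8 0x84FA

    /* An unrecognized <type> is GL_INVALID_ENUM, while a recognized type that
     * is merely illegal with the given format is GL_INVALID_OPERATION. */
    static unsigned
    check_type(unsigned format, unsigned type)
    {
       switch (type) {
       case GL_UNSIGNED_BYTE:
       case GL_UNSIGNED_INT_24_8:
          break;                          /* type itself is known */
       default:
          return GL_INVALID_ENUM;         /* type is simply not known */
       }

       if (type == GL_UNSIGNED_INT_24_8 && format != GL_DEPTH_STENCIL)
          return GL_INVALID_OPERATION;    /* known type, wrong combination */

       return GL_NO_ERROR;
    }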
|