v2:
If mask has GL_STENCIL_BUFFER_BIT set, the depth formats for
readRenderBuffer and drawRenderBuffer must match unless one of the two
buffers doesn't have depth, in which case it's not blitted, so the
format check should be ignored. Same comment goes for stencil formats
in depth renderbuffers if mask has GL_DEPTH_BUFFER_BIT set.
v3 (Kayden): Refactor code to be a bit more readable.
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
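
To make the rule above concrete, here is a minimal standalone sketch (hypothetical struct and helper names, not Mesa's actual blit validation): the secondary depth/stencil component comparison is skipped whenever one side lacks that component, since nothing would be blitted from it.

    #include <GL/gl.h>
    #include <stdbool.h>

    /* Hypothetical per-renderbuffer description; the real check works on
     * gl_renderbuffer formats instead. */
    struct rb_info {
       int depth_bits;
       int stencil_bits;
       GLenum depth_format;
       GLenum stencil_format;
    };

    /* Returns true if a blit with the given mask passes the format check. */
    static bool
    blit_formats_compatible(const struct rb_info *read,
                            const struct rb_info *draw,
                            GLbitfield mask)
    {
       if (mask & GL_STENCIL_BUFFER_BIT) {
          if (read->stencil_format != draw->stencil_format)
             return false;
          /* Depth formats must also match, unless one side has no depth:
           * then depth is not blitted and its format is irrelevant. */
          if (read->depth_bits > 0 && draw->depth_bits > 0 &&
              read->depth_format != draw->depth_format)
             return false;
       }
       if (mask & GL_DEPTH_BUFFER_BIT) {
          if (read->depth_format != draw->depth_format)
             return false;
          /* Likewise for stencil formats when blitting depth. */
          if (read->stencil_bits > 0 && draw->stencil_bits > 0 &&
              read->stencil_format != draw->stencil_format)
             return false;
       }
       return true;
    }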
Nothing was explicitly checking this.
v2: Update GL3 spec reference.
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Signed-off-by: Ian Romanick <[email protected]> [v2]
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]> [v1]
In ES 3.0, when calling glDrawBuffers() on the window system
framebuffer, the only valid targets are GL_NONE or GL_BACK. Since there
is no stereo rendering in ES 3.0, GL_BACK refers to a single buffer,
unlike desktop GL, where it may refer to two (and is therefore not allowed).
For single-buffered configs, GL_BACK ironically means the front (and
only) buffer. I'm not sure that it matters, however, as ES shouldn't
have front buffer rendering in the first place.
Fixes es3conform framebuffer_blit_coverage_default_draw_buffer_binding.
v2: Update GLES3 spec reference.
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Signed-off-by: Ian Romanick <[email protected]> [v2]
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]> [v1]
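
As a hedged illustration of that rule (a standalone helper with hypothetical names, assuming the caller already knows the window-system framebuffer is bound):

    #include <GLES3/gl3.h>
    #include <stdbool.h>

    /* For the ES 3.0 window-system framebuffer, glDrawBuffers() accepts a
     * single entry that is either GL_NONE or GL_BACK. */
    static bool
    validate_es3_winsys_draw_buffers(GLsizei n, const GLenum *bufs)
    {
       if (n != 1)
          return false;
       return bufs[0] == GL_NONE || bufs[0] == GL_BACK;
    }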
If the hardware/driver combo supports GLES3, then set the GLES3 bit in
intel_screen's bitmask of supported DRI APIs. Neither the EGL nor GLX
layer uses the bit yet.
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
This enum corresponds to EGL_OPENGL_ES3_BIT_KHR.
Neither the GLX nor EGL layer uses the enum yet.
I don't like the GLES bits. I'd prefer that all GLES APIs be exposed
through a single API bit, as is done in GLX_EXT_create_context_es_profile.
But, we need this GLES3 enum in order to do the plumbing necessary to
correctly support EGL_OPENGL_ES3_BIT_KHR as required by the
EGL_KHR_create_context spec.
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Each driver (i830, i915, i965) used independent but similar code to
validate the requested context version. With the recent arrival of GLES3,
that logic needs an update. Rather than apply identical updates to
each driver's validation code, let's just move the validation into the
shared routine intelInitContext.
This refactor required some incidental changes to functions
i830CreateContext and intelInitContext. For each function, this patch:
- Adds context version parameters to the signature.
- Adds a DRI_CTX_ERROR out param to the signature.
- Sets the DRI_CTX_ERROR at each early return.
Tested against gen6 with piglit egl-create-context-verify-gl-flavor.
Verified that this patch does not change the set of exposed EGL context
flavors.
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
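
A minimal sketch of the kind of shared validation described here, with hypothetical names and an error out-parameter in the spirit of the DRI_CTX_ERROR change (not the actual intelInitContext signature):

    #include <stdbool.h>

    /* Hypothetical stand-ins for the DRI_CTX_ERROR_* codes the real code
     * reports through its new out-parameter. */
    enum ctx_error { CTX_OK, CTX_BAD_API, CTX_BAD_VERSION };

    /* Shared validation: check the requested version against the maximum
     * the driver supports for that API, and fill the out-parameter at
     * each early return. */
    static bool
    validate_context_version(int max_major, int max_minor,
                             int req_major, int req_minor,
                             enum ctx_error *error)
    {
       if (max_major == 0) {
          /* The API is not supported by this hardware/driver at all. */
          *error = CTX_BAD_API;
          return false;
       }
       if (req_major > max_major ||
           (req_major == max_major && req_minor > max_minor)) {
          *error = CTX_BAD_VERSION;
          return false;
       }
       *error = CTX_OK;
       return true;
    }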
Before this patch, intelInitScreen2 set DRIScreen::api_mask with the hacky
heuristic below:
   if (gen >= 3)
      api_mask = GL | GLES1 | GLES2;
   else
      api_mask = 0;
This hack was likely broken on gen2 (i830), but I don't care enough to
properly investigate. It appears that every EGLConfig on i830 has
EGL_RENDERABLE_TYPE=0, and thus eglCreateContext will never succeed.
Anyway, moving on to living drivers...
With the arrival of EGL_OPENGL_ES3_BIT_KHR, this heuristic is now
insufficient. We must enable the GLES3 bit if and only if the driver is
capable of creating a GLES3 context. This requires us to determine the
maximum context version supported by the hardware/driver for each API
*during initialization of intel_screen*.
Therefore, this patch adds four new fields to intel_screen which indicate
the maximum supported context version for each API:
   max_gl_core_version
   max_gl_compat_version
   max_gl_es1_version
   max_gl_es2_version
The api mask is now correctly set as:
   api_mask = GL;
   if (max_gl_es1_version > 0)
      api_mask |= GLES1;
   if (max_gl_es2_version > 0)
      api_mask |= GLES2;
Tested against gen6 with piglit egl-create-context-verify-gl-flavor.
Verified that this patch does not change the set of exposed EGL context
flavors.
v2:
- Replace the if-tree on gen with a switch, for Ian.
- Unconditionally enable the DRI_API_OPENGL bit, for Ian.
v3:
- Drop max gl version to 1.4 on gen3 if !has_occlusion_query,
because occlusion queries entered core in 1.5. For Ian.
v4:
- Drop ES2 version back to 2.0 due to rebase (Ian).
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
I'm not sure if this is the correct fix. The
_mesa_es_error_check_format_and_type function (used above in the ES 1
and 2 cases) was originally added for glTexImage checking and allows
GL_DEPTH_STENCIL/GL_UNSIGNED_INT_24_8 combinations. Using it in ES 3
causes other tests to regress.
Fixes es3conform's packed_depth_stencil_error test.
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
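
For reference, a hedged sketch of the glTexImage-side pairing rule that the reused helper encodes (standalone and hypothetical, not Mesa's _mesa_es_error_check_format_and_type): GL_UNSIGNED_INT_24_8 pairs only with GL_DEPTH_STENCIL, and GL_DEPTH_STENCIL accepts only the two packed depth/stencil types in ES 3.0.

    #include <GLES3/gl3.h>

    /* Returns GL_NO_ERROR if the format/type pair is acceptable as far as
     * packed depth/stencil is concerned. */
    static GLenum
    check_packed_depth_stencil(GLenum format, GLenum type)
    {
       if (type == GL_UNSIGNED_INT_24_8 && format != GL_DEPTH_STENCIL)
          return GL_INVALID_OPERATION;
       if (format == GL_DEPTH_STENCIL &&
           type != GL_UNSIGNED_INT_24_8 &&
           type != GL_FLOAT_32_UNSIGNED_INT_24_8_REV)
          return GL_INVALID_OPERATION;
       return GL_NO_ERROR;
    }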
INVALID_ENUM is for when the type is simply not known.
Fixes part of es3conform's packed_depth_stencil_error test.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Fixes es3conform's half_float_max_vertex_dimensions and
half_float_textures tests.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
ES 3 specifies some formats as texture-only (i.e., not available for
renderbuffers).
See the "Required Texture Formats" section (pg 126) of the ES 3 spec.
v2: Allow RED and RG float rendering in core profiles. The check used to
be (version > 30) || (compat profile w/extensions); just deleting
(version > 30) broke 3.0+ core profiles.
Fixes es3conform's color_buffer_unsupported_format test.
Signed-off-by: Matt Turner <[email protected]>
Signed-off-by: Ian Romanick <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
According to both the GL 3.0 and ES 3.0 specifications (table 2.7 for GL
and table 2.8 for ES), the default value of BUFFER_ACCESS_FLAGS is
supposed to be zero.
Note that there are two related quantities: the obsolete BUFFER_ACCESS
enum and the new BUFFER_ACCESS_FLAGS bitfield.
BUFFER_ACCESS can only be GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE;
BUFFER_ACCESS_FLAGS can easily represent all three via GL_MAP_WRITE_BIT,
GL_MAP_READ_BIT, and their bitwise OR. It also supports more flags.
Thus, Mesa only stores the bitfield, and simply computes the old enum
when queried, via simplified_access_mode(bufObj->AccessFlags).
The tricky part is that, while BUFFER_ACCESS_FLAGS defaults to 0,
BUFFER_ACCESS defaults to GL_READ_WRITE for desktop [GL 3.0, table 2.8]
and GL_WRITE_ONLY_OES for ES [the GL_EXT_map_buffer_range extension].
Mesa tried to implement this by setting the default AccessFlags to
GL_MAP_READ_BIT | GL_MAP_WRITE_BIT on desktop, and GL_MAP_WRITE_BIT on
ES. But in all specifications, it needs to be 0.
This patch moves that logic into simplified_access_mode(): when
AccessFlags == 0, it now returns GL_READ_WRITE for desktop and
GL_WRITE_ONLY for ES 1/2. (BUFFER_ACCESS doesn't exist on ES 3.0,
so it's irrelevant there.)
With that in place, it changes the AccessFlags default to 0.
Fixes three es3conform tests:
- copy_buffer_defaults
- map_buffer_range_modify_indices
- pixel_buffer_object_default_parameters
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
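
A hedged sketch of that mapping (standalone helper with a hypothetical is_gles flag standing in for the context API check; not the commit's actual simplified_access_mode):

    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdbool.h>

    /* GL_WRITE_ONLY_OES from GL_OES_mapbuffer has the same value as
     * desktop GL_WRITE_ONLY. */
    static GLenum
    simplified_access_mode_sketch(GLbitfield access_flags, bool is_gles)
    {
       const GLbitfield rw = GL_MAP_READ_BIT | GL_MAP_WRITE_BIT;

       if ((access_flags & rw) == rw)
          return GL_READ_WRITE;
       if (access_flags & GL_MAP_READ_BIT)
          return GL_READ_ONLY;
       if (access_flags & GL_MAP_WRITE_BIT)
          return GL_WRITE_ONLY;

       /* AccessFlags now defaults to 0: report the legacy per-API default. */
       return is_gles ? GL_WRITE_ONLY : GL_READ_WRITE;
    }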
Perhaps most importantly, this patch adds comments quoting the relevant
spec paragraphs above each error condition.
It also makes three changes:
- For FBOs, GL_COLOR_ATTACHMENTm where m >= MaxDrawBuffers is supposed
to generate INVALID_OPERATION (not INVALID_ENUM).
- Constants that refer to multiple buffers (such as FRONT, BACK, LEFT,
RIGHT, and FRONT_AND_BACK) are supposed to generate INVALID_OPERATION,
not INVALID_ENUM.
- In ES 3.0, for FBOs, buffers[i] must be NONE or GL_COLOR_ATTACHMENTi
or else INVALID_OPERATION occurs. (This is a new restriction.)
Fixes es3conform's draw-buffers-api test.
v2: The error path was missing a "return" like all the other error
paths. Also, we may as well call it glDrawBuffers in the error message
since the ARB suffix doesn't exist in ES 3.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
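
A compact, hedged sketch of those three FBO-side rules (standalone helper with a hypothetical max_draw_buffers parameter; it assumes completely unknown enums were already rejected with INVALID_ENUM):

    #include <GLES3/gl3.h>

    static GLenum
    check_fbo_draw_buffer_es3(GLenum buf, GLsizei i, GLint max_draw_buffers)
    {
       if (buf == GL_NONE)
          return GL_NO_ERROR;

       /* Constants that name multiple buffers (FRONT_AND_BACK here; LEFT,
        * RIGHT and friends on desktop) are INVALID_OPERATION, not
        * INVALID_ENUM. */
       if (buf == GL_FRONT || buf == GL_BACK || buf == GL_FRONT_AND_BACK)
          return GL_INVALID_OPERATION;

       /* GL_COLOR_ATTACHMENTm with m >= MAX_DRAW_BUFFERS is
        * INVALID_OPERATION, not INVALID_ENUM. */
       if (buf >= GL_COLOR_ATTACHMENT0 + (GLenum) max_draw_buffers)
          return GL_INVALID_OPERATION;

       /* ES 3.0 restriction: buffers[i] must be NONE or exactly
        * COLOR_ATTACHMENTi. */
       if (buf != GL_COLOR_ATTACHMENT0 + (GLenum) i)
          return GL_INVALID_OPERATION;

       return GL_NO_ERROR;
    }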
The specification requires that query results are processed in order (when
one query result is returned, all previous queries of the same type must also
be available). The implementation was failing this requirement in the case of
BeginQuery and EndQuery with no intervening drawing: the result would be made
available immediately without flushing previous queries.
This fixes the following es3conform test:
occlusion_query_query_order
as well as the following piglit test:
occlusion_query_order
Reviewed-by: Ian Romanick <[email protected]>
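
The ordering requirement can be pictured with a small standalone sketch (hypothetical structures, not the driver's actual bookkeeping): a query's result may only be reported once every earlier query of the same type is also ready, which is exactly what the empty Begin/End pair was violating.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical per-query record, kept in submission order. */
    struct query {
       bool result_ready;
    };

    /* A result for queries[idx] may only be reported once every earlier
     * query of the same type is also ready; otherwise the driver must
     * flush the outstanding work first. */
    static bool
    query_result_reportable(struct query *const *queries, size_t count,
                            size_t idx)
    {
       if (idx >= count)
          return false;
       for (size_t i = 0; i <= idx; i++) {
          if (!queries[i]->result_ready)
             return false;
       }
       return true;
    }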
This makes it possible to keep an active occlusion query from erroneously
accumulating results during the meta operation. This functionality is made
conditional on a new
MESA_META_OCCLUSION_QUERY bit so that meta-operations which should generate
fragments can continue to get the current behavior.
The implementation of glClear is specifically augmented to request the flag
since glClear is specified to not generate fragments.
This fixes the following es3conform tests:
occlusion_query_draw_occluded.test
occlusion_query_clear
occlusion_query_custom_framebuffer
occlusion_query_stencil_test
occlusion_query_discarded_fragments
As well as the following piglit test:
occlusion_query_meta_no_fragments
Reviewed-by: Ian Romanick <[email protected]>
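
A hedged sketch of the save/restore pattern the new bit enables (hypothetical names and stub hooks; Mesa's real meta code works on its own state structures and driver entry points):

    #include <stdbool.h>
    #include <stddef.h>

    #define META_OCCLUSION_QUERY 0x1   /* hypothetical stand-in for the new bit */

    struct occlusion_query {
       bool counting;                  /* hardware is accumulating samples */
    };

    /* Stubs standing in for driver hooks that stop and restart sample
     * accumulation without disturbing the accumulated result. */
    static void hw_query_suspend(struct occlusion_query *q) { q->counting = false; }
    static void hw_query_resume(struct occlusion_query *q)  { q->counting = true; }

    struct meta_state {
       struct occlusion_query *paused_query;
    };

    static void
    meta_begin(struct meta_state *meta, struct occlusion_query *current,
               unsigned flags)
    {
       meta->paused_query = NULL;

       /* Only meta ops that must not generate fragments (glClear here)
        * request the bit; everything else keeps the old behavior. */
       if ((flags & META_OCCLUSION_QUERY) && current && current->counting) {
          hw_query_suspend(current);
          meta->paused_query = current;
       }
    }

    static void
    meta_end(struct meta_state *meta)
    {
       if (meta->paused_query)
          hw_query_resume(meta->paused_query);
    }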
This flag allows for the specified behavior: GenQueries reserves a name
but does not associate an object with it until BeginQuery. We allocate the
object immediately with the new EverBound flag set to false, and then set the
flag to true at the time of BeginQuery.
This allows us to implement a conformant IsQuery function by checking the
state of the new EverBound flag.
This fixes the following es3conform tests:
occlusion_query_genqueries
occlusion_query_is_query_nonzero
and the following piglit test:
occlusion_query_lifetime
Reviewed-by: Ian Romanick <[email protected]>
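
A minimal standalone sketch of the flag's use (hypothetical types, not Mesa's gl_query_object code):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical query object: GenQueries allocates it immediately with
     * ever_bound = false; BeginQuery flips the flag to true. */
    struct query_object {
       unsigned id;
       bool ever_bound;
    };

    /* IsQuery must return true only for names that have been passed to
     * BeginQuery at some point, not for names merely reserved by
     * GenQueries. */
    static bool
    is_query(const struct query_object *q)
    {
       return q != NULL && q->ever_bound;
    }

    static void
    begin_query(struct query_object *q)
    {
       q->ever_bound = true;
       /* ...then actually start the hardware counter... */
    }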
The reference to "correct, see spec" was a bit too vague to be useful
(particularly since the language being referenced here changes between OpenGL
3.1 and OpenGL 4.3).
Reviewed-by: Ian Romanick <[email protected]>
The color varying may have reduced precision or even be clamped.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
This probably doesn't fix anything, but it's good to be consistent.
Reviewed-by: Brian Paul <[email protected]>
I think the conditional always evaluates to false.
If I understand the code in core Mesa correctly, depthBits or stencilBits
is 0 if the depth or stencil renderbuffer is NULL, respectively.
Reviewed-by: Brian Paul <[email protected]>
Just check depth and stencil separately; the outcome is the same.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
All drivers implement it now.
Reviewed-by: Brian Paul <[email protected]>
For floats, if GL_RGB is the source, then alpha should be set to
1.0F.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
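
A trivial standalone sketch of that rule (hypothetical helper, not the actual unpack path): when an RGB float source is expanded to RGBA, the missing alpha component is filled with 1.0F.

    /* Expand one GL_RGB float texel to RGBA; the source has no alpha,
     * so it defaults to 1.0F. */
    static void
    rgb_to_rgba_float(const float rgb[3], float rgba[4])
    {
       rgba[0] = rgb[0];
       rgba[1] = rgb[1];
       rgba[2] = rgb[2];
       rgba[3] = 1.0F;
    }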
This format is allowed by the GL_EXT_texture_type_2_10_10_10_REV
extension.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Call Driver.AllocTextureImageBuffer rather than calling
Driver.TexImage with NULL data, format=GL_NONE and type=GL_NONE.
This avoids setting ctx->Unpack, which can lead to incorrectly
trying to upload data.
The GLES3 GTF program's packed_pixels_pbo test was triggering
an error for i965 with the previous code.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Fixes issues with gles3-gtf
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
This returns the current read renderbuffer for the specified
format type.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
This function checks for ES3 compatible
format/type/internalFormat/dimension combinations.
[[email protected]: additional tweaks for gles3-gtf]
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
v2:
* Only allow on GL Legacy contexts
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Add API debug trace messages for:
* glRenderbufferStorage
* glRenderbufferStorageMultisample
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
The hardware does not support a render target without an alpha channel.
So when the user creates a render buffer with no alpha channel, there actually
is storage available for alpha internally. It requires special care to
keep these unwanted alpha bits from causing any problems.
Specifically, when blending, and when the blend factors would read the
destination alpha values, this commit coerces the blend factors to instead be
either 0 or 1 as appropriate.
A similar fix was made for pre-gen6 hardware in commit eadd9b8e, and this
commit reuses the fixup function Ian wrote there.
This commit fixes the following es3conform test:
rgb8_rgba8_rgb
As well as the following piglit (sub) tests:
EXT_framebuffer_object/fbo-blending-formats/3
EXT_framebuffer_object/fbo-blending-formats/GL_RGB
EXT_framebuffer_object/fbo-blending-formats/GL_RGB8
Reviewed-by: Ian Romanick <[email protected]>
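
A hedged sketch of the blend-factor coercion described above (standalone helper, hypothetical name; the real fixup lives in the shared i965 state code):

    #include <GL/gl.h>

    /* With no destination alpha bits, GL defines destination alpha as 1.0,
     * so DST_ALPHA behaves like ONE and ONE_MINUS_DST_ALPHA like ZERO.
     * Coercing the factors keeps the undefined bits in the internal alpha
     * storage out of the blend math. */
    static GLenum
    fix_dst_alpha_blend_factor(GLenum factor, GLint dst_alpha_bits)
    {
       if (dst_alpha_bits > 0)
          return factor;

       switch (factor) {
       case GL_DST_ALPHA:
          return GL_ONE;
       case GL_ONE_MINUS_DST_ALPHA:
          return GL_ZERO;
       default:
          return factor;
       }
    }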
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Andreas Boll <[email protected]>
Effectively this path would always assert. Move the break statement to
the (probably) intended place.
Note: This is a candidate for the stable branches.
Signed-off-by: Adam Jackson <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
v2: Andreas Boll <[email protected]>
- don't remove compatibility with scripts for the old build system
v3: Andreas Boll <[email protected]>
- remove more obsolete hacks
v4: Andreas Boll <[email protected]>
- add a previously removed TOP variable to fix vgapi build
According to bug #54524, I regressed oglconform's multicontext test
when I reenabled the fragment shader precompile.
However, these test cases only passed by miraculous coincidence. We
assign each fragment program a unique ID (brw_fragment_program::id, which
becomes brw_wm_prog_key::program_string_id) that we obtain from a
per-context counter.
The test case uses GLX context sharing to access the same fragment
program from two different contexts. This means that we share a program
cache. Before the precompile, if both contexts happened to use the same
shaders in the same order, we'd obtain the same program_string_ids (by
virtue of doing the same computation twice). However, the more likely
scenario is that they completely disagree on program_string_id.
This meant that we'd have two completely different fragment shaders in
the cache with the same ID, tricking us to think they were the same
(aside from NOS), so we'd render using the wrong program.
This patch implements a simple fix suggested by Eric: it moves the
global counter out of brw_context and into intel_screen, which is shared
across all contexts. A mutex protects it from concurrent access.
This is also the first direct usage of pthreads in the i965 driver.
Fixes 10 subcases of oglconform's multicontext test.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54524
Reviewed-by: Eric Anholt <[email protected]>
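
A hedged sketch of the screen-level counter (hypothetical struct and function names; in Mesa the counter and mutex live in intel_screen, and the lock is taken wherever new programs are assigned IDs):

    #include <pthread.h>

    /* Shared across every context created from the same screen, so two
     * contexts sharing programs can never hand out the same ID.  The
     * mutex is initialized once when the screen is created (e.g. with
     * pthread_mutex_init or PTHREAD_MUTEX_INITIALIZER). */
    struct screen {
       pthread_mutex_t lock;
       unsigned next_program_id;
    };

    static unsigned
    get_new_program_id(struct screen *screen)
    {
       unsigned id;

       pthread_mutex_lock(&screen->lock);
       id = screen->next_program_id++;
       pthread_mutex_unlock(&screen->lock);

       return id;
    }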
Technically, variable-length arrays are a required feature of C99,
made optional in C11, and not actually part of C++ at all.
GCC allows using them in C++ unless you specify -pedantic, and Clang
appears to allow them for simple/POD types.
exec_list is arguably POD, since it doesn't have virtual methods, but I
can see why Clang would be like "meh, it's a C++ struct, say no", seeing as
it's meant to support C99.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58970
Reviewed-by: Matt Turner <[email protected]>
The simulator gets very angry about our i2b code:
cmp.ne(16) g3<1>D g2<0,1,0>D 0F
We can't mix integer DWord and float types. The only reason to use 0F
here was to share code with f2b. Split it and use 0D instead.
While we don't believe anything bad will actually happen because of
this, it's nice to fix the warnings and easy enough to do.
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Often when debugging, I don't want to see SIMD16 shaders. It makes
INTEL_DEBUG=vs/fs output much easier to read, especially when a program
dumps many shaders. Plus, I also want to verify that SIMD8 works before
even considering SIMD16.
v2: Fix the likeliness check (caught by Chris and Eric).
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Choose MESA_FORMAT_ARGB2101010 when storing
GL_RGBA + GL_UNSIGNED_INT_2_10_10_10_REV or
GL_RGB + GL_UNSIGNED_INT_2_10_10_10_REV.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>