| |
Texstore takes the same codepath as the corresponding linear formats.
Reviewed-by: Brian Paul <[email protected]>
| |
ARB_texture_rgb10_a2ui pre-GEN4
Older hardware cannot do ARB_texture_rgb10_a2ui, and the translation
code for OES_compressed_ETC1_RGB8_texture was never implemented in the
i915 driver.
NOTE: This is a candidate for all stable branches.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Fixes unused pointer value defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Previous to this patch, there were 13 identical definitions of this
macro in Mesa source. That's ridiculous. This patch consolidates 6
of them to a single definition in src/mesa/main/macros.h.
Unfortunately, I wasn't able to eliminate the remaining definitions,
since they occur in places that don't include src/mesa/main/macros.h:
- include/pci_ids/pci_id_driver_map.h
- src/egl/drivers/dri2/egl_dri2.h
- src/egl/main/egldefines.h
- src/gbm/main/backend.c
- src/gbm/main/gbm.c
- src/glx/glxclient.h
- src/mapi/mapi/stub.c
I'm open to suggestions as to how to deal with the remaining redundancy.
Reviewed-by: Kenneth Graunke <[email protected]>
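
For illustration, the macro being consolidated is an element-count helper; its conventional C definition looks like the following (the name ARRAY_SIZE is an assumption here, since the message above does not spell out the macro's name):

    /* Number of elements in a statically-sized array; sizeof keeps the
     * count in sync with the declaration automatically. */
    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

With one copy living in src/mesa/main/macros.h, every file that already includes that header picks up the shared definition instead of rolling its own.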
| |
Previously, the i965 driver enabled EXT_framebuffer_multisample even
on pre-gen6 chipsets. However, since we don't support multisampling
on these chips, we set GL_MAX_SAMPLES=1 (the minimum allowed by
EXT_framebuffer_multisample), and if the client ever requested a
multisample buffer, we quietly supplied them with a single-sampled
buffer instead.
After some discussion on the mailing list (see thread
"ext_framebuffer_multisample: check for num_samples<=1"), it's clear
that this was the wrong approach. The correct approach is to only
expose EXT_framebuffer_multisample when we truly support
multisampling; that frees us to set a sensible value of
GL_MAX_SAMPLES=0 on other chipsets, so that we never have to deal with
a client requesting a multisample buffer when multisampling isn't
supported.
This change causes the following piglit tests to be skipped on
chipsets prior to Gen6:
- "ARB_framebuffer_sRGB/blit {renderbuffer,texture}
{linear,linear_to_srgb,srgb,srgb_to_linear}
{downsample,msaa,upsample} {disabled,enabled}"
- EXT_framebuffer_multisample/blit-mismatched-formats
- EXT_framebuffer_multisample/blit-mismatched-sizes
- EXT_framebuffer_multisample/dlist
- EXT_framebuffer_multisample/interpolation 0 *
- EXT_framebuffer_multisample/minmax
- EXT_framebuffer_multisample/negative-copypixels
- EXT_framebuffer_multisample/negative-copyteximage
- EXT_framebuffer_multisample/negative-max-samples
- EXT_framebuffer_multisample/negative-mismatched-samples
- EXT_framebuffer_multisample/negative-readpixels
- EXT_framebuffer_multisample/renderbuffer-samples
- EXT_framebuffer_multisample/renderbufferstorage-samples
- EXT_framebuffer_multisample/samples
This is expected, since the above tests exercise MSAA functionality,
and shouldn't be run on systems prior to Gen6.
Reviewed-by: Eric Anholt <[email protected]>
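
With GL_MAX_SAMPLES now reporting an honest value, a client can decide up front whether to request a multisampled buffer at all. A minimal client-side sketch (plain GL, nothing driver-specific; the 4x sample count is an arbitrary example, and width/height stand in for the client's buffer size):

    GLint max_samples = 0;
    glGetIntegerv(GL_MAX_SAMPLES, &max_samples);

    if (max_samples >= 4) {
       /* MSAA is genuinely supported; ask for a 4x color buffer. */
       glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8,
                                        width, height);
    } else {
       /* GL_MAX_SAMPLES of 0 means no MSAA; use single-sampled storage. */
       glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    }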
| |
In the documentation for BindBufferRange, OpenGL specs from 3.0
through 4.1 contain this language:
"The error INVALID_VALUE is generated if size is less than or
equal to zero or if offset + size is greater than the value of
BUFFER_SIZE."
This text was dropped from OpenGL 4.2, and it does not appear in the
GLES 3.0 spec.
Presumably the reason for the change is that some clients change the
size of the buffer after calling BindBufferRange. We don't want
to generate an error at the time of the BindBufferRange call just
because the old size of the buffer was too small, when the buffer is
about to be resized.
Since this is a deliberate relaxation of error conditions in order to
allow clients to work, it seems sensible to apply it to all versions
of GL, not just GL 4.2 and above.
(Note that there is no danger of this change allowing a client to
access data beyond the end of a buffer. We already have code to
ensure that that doesn't happen in the case where the client shrinks
the buffer after calling BindBufferRange.)
Eliminates a spurious error message in the gles3 conformance test
"transform_feedback_offset_size".
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
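
A rough sketch of what the relaxed check amounts to (a hypothetical helper, not the actual Mesa code): the offset/size sanity checks stay, while the comparison against the buffer's current BUFFER_SIZE at bind time goes away, because the bound range is clamped against the real size later, at use time.

    #include <stdbool.h>
    #include <GL/gl.h>

    /* Hypothetical validation for BindBufferRange after this change. */
    static bool
    bind_buffer_range_valid(GLintptr offset, GLsizeiptr size)
    {
       /* Still an INVALID_VALUE error: non-positive size or negative offset. */
       if (size <= 0 || offset < 0)
          return false;

       /* No longer an error: offset + size > BUFFER_SIZE.  The client may be
        * about to resize the buffer; the range is clamped to the buffer's
        * actual size when it is used, so out-of-bounds access remains
        * impossible. */
       return true;
    }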
| |
This matches the behavior of the Windows driver, but a bspec reference
would be nice.
NOTE: This is a candidate for the 9.0 and 9.1 branches.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Should have been done in d9948e49, but I missed it because
MAX_VARYING_FLOATS doesn't appear in the ES 3 spec; it is, however, the
same value as MAX_VARYING_COMPONENTS.
NOTE: Candidate for the 9.1 branch
Reviewed-by: Ian Romanick <[email protected]>
| |
Reviewed-by: Brian Paul <[email protected]>
| |
v2: fix compilation of swrast
| |
Now that we have support for overriding alpha to 1.0, we can handle
blitting between these formats in either direction.
For now, we only support two XRGB formats: MESA_FORMAT_XRGB8888 and
MESA_FORMAT_RGBX8888_REV. Most places only appear to worry about the
former, so ignore the latter for now. We can always add it later.
NOTE: This is a candidate for the 9.1 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Tested-by: Martin Steigerwald <[email protected]>
| |
Currently, Blorp requires the source and destination formats to be
equal. However, we'd really like to be able to blit between XRGB and
ARGB formats; our BLT engine paths have supported this for a long time.
For ARGB -> XRGB, nothing needs to occur: the missing alpha is already
interpreted as 1.0. For XRGB -> ARGB, we need to smash the alpha
channel to 1.0 when writing the destination colors. This is fairly
straightforward with blending.
For now, this code is never used, as the source and destination formats
still must be equal. The next patch will relax that restriction.
NOTE: This is a candidate for the 9.1 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Tested-by: Martin Steigerwald <[email protected]>
| |
The BLT engine has many limitations. Currently, it can only blit
X-tiled buffers (since we don't have a kernel API to whack the BLT
tiling mode register), which means all depth/stencil operations get
punted to meta code, which can be very CPU-intensive.
Even if we used the BLT engine, it can't blit between buffers with
different tiling modes, such as an X-tiled non-MSAA ARGB8888 texture
and a Y-tiled CMS ARGB8888 renderbuffer. This is a fundamental
limitation, and the only way around that is to use BLORP.
Previously, BLORP only handled BlitFramebuffer. This patch adds an
additional frontend for doing CopyTexSubImage. It also makes it the
default. This is partly to increase testing and avoid hiding bugs,
and partly because the BLORP path can already handle more cases. With
trivial extensions, it should be able to handle everything the BLT can.
This helps PlaneShift massively, which tries to CopyTexSubImage2D
between depth buffers whenever a player casts a spell. Since these
are Y-tiled, we hit meta and software ReadPixels paths, eating 99% CPU
while delivering ~1 FPS. This is particularly bad in an MMO setting
because people cast spells all the time.
It also helps Xonotic in 4X MSAA mode. At default power management
settings, I measured a 6.35138% +/- 0.672548% performance boost (n=5).
(This data is from v1 of the patch.)
No Piglit regressions on Ivybridge (v3) or Sandybridge (v2).
v2: Create a fake intel_renderbuffer to wrap the destination texture
image and then reuse do_blorp_blit rather than reimplementing most
of it. Remove unnecessary clipping code and conditional rendering
check.
v3: Reuse formats_match() to centralize checks; delete temporary
renderbuffers. Reorganize the code.
v4: Actually copy stencil when dealing with separate stencil buffers but
packed depth/stencil formats. Tested by a new Piglit test.
NOTE: This is a candidate for the 9.1 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]> [v4]
Reviewed-by: Ian Romanick <[email protected]> [v3]
Reviewed-and-tested-by: Carl Worth <[email protected]> [v2]
Tested-by: Martin Steigerwald <[email protected]> [v3]
| |
I need to use this from C++ code.
NOTE: This is a candidate for the 9.1 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Reviewed-by: Brian Paul <[email protected]>
| |
Reviewed-by: Brian Paul <[email protected]>
| |
Reviewed-by: Brian Paul <[email protected]>
| |
These formats were added a few months after these tables were committed.
No idea why we have these tables, though. AFAIK, texstore always takes
the slow path for GL_RGBn.
Reviewed-by: Brian Paul <[email protected]>
| |
Reviewed-by: Brian Paul <[email protected]>
| |
v2: change the requirement from GLSL 1.30 to SM 3.0 (R500 can do this)
| |
Reviewed-by: Brian Paul <[email protected]>
| |
based on the intel driver
Reviewed-by: Brian Paul <[email protected]>
| |
EmitCondCodes is always false.
Reviewed-by: Brian Paul <[email protected]>
| |
v2/Kayden: Also disable write masking in the vec4 backend.
Fixes 78 oglconform glsl-bif-tex-* subcases.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]> [v1]
Reviewed-by: Eric Anholt <[email protected]> [v2]
| |
Strangely, the DRIimage interface we have passes the pitch in pixels
instead of bytes, which anholt missed in the change to using bytes for
region pitch.
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
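
The fix boils down to a unit conversion at import time: the DRIimage interface hands over a pitch counted in pixels, while the region now stores its pitch in bytes. A sketch (cpp, the usual Mesa shorthand for bytes per pixel, and the variable names are illustrative):

    /* DRIimage reports pitch in pixels; the region pitch is in bytes. */
    int pitch_bytes = image_pitch_pixels * cpp;   /* e.g. cpp == 4 for ARGB8888 */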
| |
If you look up a level that isn't in the miptree, you crash.
Reviewed-by: Chad Versace <[email protected]>
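
A minimal sketch of the implied guard, assuming the miptree records the range of levels it actually contains (field names are illustrative):

    /* Refuse lookups outside the levels the miptree was created with,
     * instead of indexing past the level array and crashing. */
    if (level < mt->first_level || level > mt->last_level)
       return false;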
| |
There's actually nothing uniform-specific in uniform_field_visitor.
It is potentially useful for all kinds of program resources (in
particular, future patches will use it for transform feedback
varyings).
This patch renames it to program_resource_visitor, and clarifies
several comments, to reflect the fact that it is useful for more than
just uniforms.
NOTE: This is a candidate for the 9.1 branch.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
The parsing logic is moved to a new function in the GLSL module,
parse_program_resource_name(). This name was chosen because it should
eventually be useful for handling everything that OpenGL 4.3 calls
"program resources" (e.g. uniforms, vertex inputs, fragment outputs,
and transform feedback varyings).
Future patches will make use of this function for linking transform
feedback varyings.
NOTE: This is a candidate for the 9.1 branch.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
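
The parsing in question is splitting a resource name such as "palette[2]" into a base name and an optional array index. A self-contained sketch of that logic (illustrative only; it does not claim to match the exact parse_program_resource_name() signature):

    #include <stdlib.h>
    #include <string.h>

    /* Split "name[index]" into a base-name length and an array index.
     * Returns the index, or -1 if the name has no bracketed suffix. */
    static long
    split_resource_name(const char *name, size_t *base_len)
    {
       const char *bracket = strrchr(name, '[');

       *base_len = bracket ? (size_t)(bracket - name) : strlen(name);
       if (!bracket)
          return -1;
       return strtol(bracket + 1, NULL, 10);
    }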
| |
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=60212
Tested-by: Scott Moreau <[email protected]>
Tested-by: Tiago Vignatti <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Signed-off-by: Abdiel Janulgue <[email protected]>
| |
In particular, rework the sRGB/linear format selection code.
There's no reason to mess with the Mesa format.
Just do everything in terms of the gallium pipe_format.
Reviewed-by: Marek Olšák <[email protected]>
| |
That was the only place it was being called from.
| |
The code before was getting a pipe format, then calling
st_pipe_format_to_mesa_format() and then converting back again with
st_mesa_format_to_pipe_format(). This removes one conversion step.
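
In other words, the code no longer round-trips through the Mesa format enum. A before/after sketch (the surrounding variable is illustrative; the two function names are the ones mentioned above):

    /* Before: convert the pipe format to a Mesa format and straight back. */
    format = st_mesa_format_to_pipe_format(st_pipe_format_to_mesa_format(format));

    /* After: 'format' is already the pipe_format we want, so use it as-is. */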
| |
If we call gl[Copy]TexImage2D() with a generic compression format
(e.g. intFormat=GL_COMPRESSED_RGBA) we can't choose a DXT format if
we don't have the external DXT compression library.
We weren't actually enforcing this before since the
pipe_screen::is_format_supported(DXT) query has no dependency on
the DXT compression library.
Now if we're given a generic compressed format and we can't do DXT
compression we'll fall back to a non-compressed format.
v2: use util_format_is_s3tc() function and add more comments about
the allow_dxt parameter.
Note: This is a candidate for the stable branches.
Reviewed-by: Jose Fonseca <[email protected]>
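
Conceptually, the new check looks something like this sketch (not the literal format-selection code): a generic GL_COMPRESSED_* request may only resolve to an S3TC format when the external compression library is actually available.

    /* If format selection picked an S3TC/DXT format for a *generic*
     * compressed internalFormat but we can't actually compress (no
     * external DXT library), discard it and retry uncompressed. */
    if (util_format_is_s3tc(pformat) && !allow_dxt)
       pformat = PIPE_FORMAT_NONE;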
| |
When glCompressedTexImage is called, the internalFormat is a specific
format for the incoming image, and the hardware format should be
the same (since we never do format transcoding). So use the simpler
_mesa_glenum_to_compressed_format() function. This change is also
needed for the next patch.
Note: This is a candidate for the stable branches.
| |
Ivybridge doesn't appear to have the same erratum as Sandybridge; no
corruption was observed when setting it to more than the minimal correct
value. It's possible that we were simply lucky, since the URB entries
are 1024-bit on Ivybridge vs. 512-bit on Sandybridge. Or perhaps the
underlying hardware issue is fixed.
Either way, we may as well program the minimum value since it's now
readily available, likely to be more efficient, and possibly more
correct.
v2: Use GEN7_SBE_* defines rather than GEN6_SF_*. (A copy and paste
mistake.) They're the same, but using the right names is better.
NOTE: This is a candidate for all stable branches.
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
| |
(This commit message was primarily written by Paul Berry, who explained
what's going on far better than I would have.)
Previous to this patch, we thought that the only restrictions on
3DSTATE_SF's URB read length were (a) it needs to be large enough to
read all the VUE data that the SF needs, and (b) it can't be so large
that it tries to read VUE data that doesn't exist. Since the VUE map
already tells us how much VUE data exists, we didn't bother worrying
about restriction (a); we just did the easy thing and programmed the
read length to satisfy restriction (b).
However, we didn't notice this erratum in the hardware docs: "[errata]
Corruption/Hang possible if length programmed larger than recommended".
Judging by the context surrounding this erratum, it's pretty clear that
it means "URB read length must be exactly the size necessary to read all
the VUE data that the SF needs, and no larger". Which means that we
can't program the read length based on restriction (b)--we have to
program it based on restriction (a).
The URB read size needs to precisely match the amount of data that the
SF consumes; it doesn't work to simply base it on the size of the VUE.
Thankfully, the PRM contains the precise formula the hardware expects.
Fixes random UI corruption in Steam's "Big Picture Mode", random terrain
corruption in PlaneShift, and Piglit's fbo-5-varyings test.
NOTE: This is a candidate for all stable branches.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=56920
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=60172
Tested-by: Jordan Justen <[email protected]> (v1/Piglit)
Tested-by: Martin Steigerwald <[email protected]> (PlaneShift)
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
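
For context, the formula amounts to reading just enough pairs of 128-bit varyings to cover the highest source attribute the SF actually consumes. A hedged sketch of the computation (the driver's real expression may differ in detail):

    /* Each URB read fetches a pair of 128-bit vertex elements, so round
     * the highest consumed source attribute up to a whole pair. */
    urb_entry_read_length = DIV_ROUND_UP(max_source_attr + 1, 2);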
| |
The maximum SF source attribute is necessary to compute the Vertex URB
read length properly, which will be done in the next commit.
NOTE: This is a candidate for all stable branches.
Reviewed-by: Paul Berry <[email protected]>
Tested-by: Martin Steigerwald <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
| |
The next patch will benefit from easy access to the source attribute
number and whether or not we're swizzling. It doesn't want the final
attr_override DWord form, however.
NOTE: This is a candidate for all stable branches.
Reviewed-by: Paul Berry <[email protected]>
Tested-by: Martin Steigerwald <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
| |
This was used by the old VS backend, but that's long gone.
| |
Fixes resource leak defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Save miptree level info to DRIImage:
- Appropriately-aligned base offset pointing to the image
- Additional x/y adjustment offsets from above.
v8: -Bump intelImageExtension version
v9: -Don't use internal _eglError but implement error reporting in new DRI interface
instead. This fixes Android build problems based on feedback from
Adrian M Negreanu and Chad Versace.
-Move the non-tile-aligned check and error-reporting to intel_set_texture_image_region
v10: -Don't #include "egl/main/eglcurrent.h". [chadv]
Reviewed-by: Eric Anholt <[email protected]> (v6)
Acked-by: Chad Versace <[email protected]> (v10)
Signed-off-by: Abdiel Janulgue <[email protected]>
| |
We need to take into account the offset from the original bo when using
glTexSubImage() and other functions that manipulate a subregion of an
exported texture. The offsets are added to the mapped region address and
applied when blitting from a source region.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Signed-off-by: Abdiel Janulgue <[email protected]>
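
The bookkeeping added here is essentially base-offset arithmetic: wherever the region is mapped or used as a blit source, the image's byte offset into the shared bo has to be folded in. A rough sketch (variable names are illustrative):

    /* Address of pixel (x, y) inside the exported subregion: start from
     * the mapped bo, add the image's byte offset into the shared buffer,
     * then index by row pitch and bytes per pixel. */
    uint8_t *pixel = bo_map + image_offset + y * pitch_bytes + x * cpp;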
| |
When binding a region to a texture image, re-create the miptree base-level
considering the offset and dimension information exported by DRIImage.
v8: - Move the surface address alignment checks from the image-from-texture
code to the texture-from-image side. This allows the error reporting to conform to
OES_EGL_Image and to prevent mixing up EGL and GL errors. Reported by Chad Versace.
- Addressed an existing issue in the renderbuffer case where there was a
possibility of creating EGL images out of depthstencil textures, which isn't
really possible. This was spotted by Eric earlier.
Reviewed-by: Eric Anholt <[email protected]> (v6)
Reviewed-by: Chad Versace <[email protected]> (v8)
Signed-off-by: Abdiel Janulgue <[email protected]>
| |
If the offsets are present, this lets us specify a particular level and slice
in a shared region using the base level of an exported mip-map tree.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Signed-off-by: Abdiel Janulgue <[email protected]>
| |
Add a helper to calculate the fine-grained x and y adjustments, in pixels,
to an image within a miptree level for tiled regions.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Signed-off-by: Abdiel Janulgue <[email protected]>
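
The underlying idea is that a tiled buffer can only be addressed at tile-aligned positions, so the helper splits an image's location within the miptree into a tile-aligned part plus small leftover x/y adjustments. A sketch with illustrative numbers for X-tiling (512-byte-wide, 8-row tiles):

    unsigned tile_width_px = 512 / cpp;   /* X tile: 512 bytes wide */
    unsigned tile_height   = 8;           /* X tile: 8 rows tall    */

    /* Leftover adjustment the consumer must apply, in pixels... */
    unsigned adjust_x = total_x % tile_width_px;
    unsigned adjust_y = total_y % tile_height;

    /* ...and the tile-aligned remainder, expressible as a byte offset. */
    unsigned aligned_x = total_x - adjust_x;
    unsigned aligned_y = total_y - adjust_y;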
| |
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Signed-off-by: Abdiel Janulgue <[email protected]>
| |
v8: - Add a has_depthstencil field to the DRIImage structure.
Reviewed-by: Eric Anholt <[email protected]> (v6)
Reviewed-by: Chad Versace <[email protected]> (v8)
Signed-off-by: Abdiel Janulgue <[email protected]>