This patch fixes a regression introduced in commit
1f3c5eae5c4be582e50c2d4d7950424d86059c45
Signed-off-by: Vasily Khoruzhick <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=43122
Reviewed-by: Eric Anholt <[email protected]>
Tested-by: Kai Wasserbäch <[email protected]>
The i965 driver is now enabling all of these formats on its own from
the surface format table.
Reviewed-by: Kenneth Graunke <[email protected]>
This is a no-op change on gen6, but it should stop some actually-unsupported
formats from being chosen on gen4 (for example, RGBA_FLOAT32 now falls back
to RGBA_FLOAT16).
Reviewed-by: Kenneth Graunke <[email protected]>
GL 3.0 specifies GL_RGB10_A2 as a required sized format for rendering
and texturing.
This introduces two piglit regressions: one due to fbo-mipmap-copypix
hitting swrast GetRow (we want to convert swrast to MapRenderbuffer),
and one due to fbo-blending-formats being too picky while leaving
dithering on.
Reviewed-by: Kenneth Graunke <[email protected]>
GL 3.0 specifies GL_RGBA16 as a required sized format for rendering
and texturing.
Reviewed-by: Kenneth Graunke <[email protected]>
Now that all the rest of the driver is driven off of the surface
formats table, all we really need to do is add the mapping from
MESA_FORMAT to BRW_SURFACEFORMAT. However, we also add a format
override for I16/L16 render targets at the same time, so that existing
users of I16 that were getting promoted to I32 and then getting the
I32->R32 override still get FBO support.
Fixes failures in piglit gl-3.0-required-sized-texture-formats, and
will prevent regressions in ARB_texture_float on gen4 when moving to
fully table-driven texture format setup.
Reviewed-by: Kenneth Graunke <[email protected]>
Fixes failures in i965 on fbo-blending-formats when the format is enabled.
Reviewed-by: Kenneth Graunke <[email protected]>
Until GL 3.0, there isn't any requirement on the actual sizes of
channels chosen. By falling back to 16 here, we can correctly support
ARB_texture_float on original i965 hardware, which can't correctly
filter 32-bit floats.
Not all i965 hardware can do RGB float16, and this will at least save
half the memory and have expected behavior in terms of precision.
Reviewed-by: Kenneth Graunke <[email protected]>
This should be a no-op change. The initializers are reordered to
match the ordering of the enum, since there isn't a clearly sensible
ordering, but "the order they were added to the driver, sort of" is
definitely not one.
Also, the unsupported formats are explicitly initialized to 0, so it's
more obvious what we aren't claiming to support.
Reviewed-by: Kenneth Graunke <[email protected]>
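As an aside, a minimal sketch of the initializer idiom this moves toward, with hypothetical format names and made-up surface-format values rather than the real i965 tables:

    #include <stdint.h>

    /* Hypothetical format enum, for illustration only. */
    enum example_format {
       EXAMPLE_FORMAT_RGBA8888,
       EXAMPLE_FORMAT_RGB565,
       EXAMPLE_FORMAT_A8,
       EXAMPLE_FORMAT_Z24_S8,
       EXAMPLE_FORMAT_COUNT
    };

    /* Written in enum order, using designated initializers so each entry is
     * tied to its format, with unsupported formats spelled out as 0 rather
     * than left to implicit zero-initialization. */
    static const uint32_t example_render_target_format[EXAMPLE_FORMAT_COUNT] = {
       [EXAMPLE_FORMAT_RGBA8888] = 0x0c7,  /* made-up hardware format token */
       [EXAMPLE_FORMAT_RGB565]   = 0x108,  /* made-up hardware format token */
       [EXAMPLE_FORMAT_A8]       = 0,      /* explicitly not renderable */
       [EXAMPLE_FORMAT_Z24_S8]   = 0,      /* explicitly not renderable */
    };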
This currently duplicates intel_context.c's setup of the
formats table, and sets true for exactly the same set of formats on
gen6.
Reviewed-by: Kenneth Graunke <[email protected]>
I've never seen a use for the thread ID value, but knowing the format
being rendered is kind of a big deal.
Reviewed-by: Kenneth Graunke <[email protected]>
We are already testing this if appropriate in
intel_validate_framebuffer (FBO completeness), so no need to avoid
attaching the texture to the renderbuffer here.
This causes MESA_FORMAT_R11_G11_B10_FLOAT to now be renderable as a texture
attachment on i965.
Reviewed-by: Kenneth Graunke <[email protected]>
We don't want to go writing GetRow/PutRow for every format required by
GL 3.0, when it's very hard to get those functions called, and in
every case we want to make swrast do direct mapping through
MapRenderbuffer anyway.
This causes MESA_FORMAT_R11_G11_B10_FLOAT to be considered complete on gen6.
Reviewed-by: Kenneth Graunke <[email protected]>
This moves any chipset-dependent logic we want for render target
format choices to init time as well. There is still logic left at
state update for SRGB handling, where format choices change based on
GL state.
The brw_render_target_supported() function should now return correct
results, instead of relying on the limited results from
intel_span_supports_format() to avoid lying about FBO completeness.
Reviewed-by: Kenneth Graunke <[email protected]>
We're going to want to provide different answers per chipset
generation.
Reviewed-by: Kenneth Graunke <[email protected]>
This will be used to drive choosing formats and determining framebuffer
completeness, instead of the bunch of ad-hoc checks we have had until
now.
Reviewed-by: Kenneth Graunke <[email protected]>
Acked-by: Ian Romanick <[email protected]>
The formats.c code's "datatype" value is "what does this value mean",
i.e. unorm or snorm or float, and is the return value from the
GL_TEXTURE_RED_TYPE class of queries. The depth formats were marked
as GL_UNSIGNED_INT, which is what we use for integer formats, and not what we
should be returning from glGetTexLevelParameter.
In texstore, we were inappropriately passing it as an argument to
_mesa_unpack_depth_span(), which expects a value like
GL_UNSIGNED_INT or GL_UNSIGNED_SHORT. Just hardcode
_mesa_unpack_depth_span()'s arguments for now, though it looks like
the consumers of that interface would be happier with using
MESA_FORMAT.
Reviewed-by: Brian Paul <[email protected]>
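For illustration, the query being discussed (standard GL, assuming GL 3.0 headers and a bound depth texture; not code from this commit) should report GL_UNSIGNED_NORMALIZED for a normalized depth format, not GL_UNSIGNED_INT, which GL reserves for integer formats:

    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdio.h>

    /* Assumes a current GL context with a GL_DEPTH_COMPONENT24 texture
     * bound to GL_TEXTURE_2D. */
    static void
    print_depth_channel_type(void)
    {
       GLint type = GL_NONE;
       glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_TYPE, &type);
       /* Expected: GL_UNSIGNED_NORMALIZED, not GL_UNSIGNED_INT. */
       printf("depth channel type: 0x%x\n", (unsigned) type);
    }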
The GL_TEXTURE_WHATEVER_SIZE entrypoints were checking if the
specified base type of the texture allowed that channel to be present
before reporting the size of the channel, so that GL_RGB didn't end up
with an alpha size if the hardware driver had to store it that way.
The GL_TEXTURE_WHATEVER_TYPE entrypoints weren't checking it, so you
would end up with strange responses from the GL involving 0-bit
floating-point alpha components in GL_RGB32F, even though it says
GL_NONE as expected for other 0-sized channels.
Make the _TYPE queries check _BaseFormat the same way the _SIZE queries do,
which makes most of the GL_RGB* testcases of gl-3.0-required-sized-formats
pass on i965.
v2: Add a default case with a warning (suggestion by Brian Paul)
Reviewed-by: Brian Paul <[email protected]> (v1)
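A rough sketch of the shape of that check (a hypothetical helper, not the actual texparam.c code): report a channel's type only when the base format actually has that channel, and let the default case flag anything unhandled.

    #include <GL/gl.h>
    #include <stdio.h>

    /* What to report for GL_TEXTURE_ALPHA_TYPE, given the texture's base
     * format and the type of the alpha channel as actually stored. */
    static GLenum
    reported_alpha_type(GLenum base_format, GLenum stored_type)
    {
       switch (base_format) {
       case GL_ALPHA:
       case GL_LUMINANCE_ALPHA:
       case GL_RGBA:
          return stored_type;  /* the base format has alpha: report its type */
       case GL_LUMINANCE:
       case GL_INTENSITY:
       case GL_RGB:
       case GL_DEPTH_COMPONENT:
          return GL_NONE;      /* no alpha in the base format, even if stored */
       default:
          fprintf(stderr, "unexpected base format 0x%x\n", (unsigned) base_format);
          return GL_NONE;
       }
    }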
This will throw a compile warning if there's an unhandled CAP.
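The warning comes from standard C switch diagnostics: with no default case, GCC and Clang (-Wswitch, part of -Wall) flag any enumerator that lacks a case. A minimal sketch with a hypothetical enum, not the real gallium CAP list:

    enum example_cap {
       EXAMPLE_CAP_NPOT_TEXTURES,
       EXAMPLE_CAP_TWO_SIDED_STENCIL,
       EXAMPLE_CAP_GLSL  /* a new value with no case below triggers -Wswitch */
    };

    static int
    example_get_param(enum example_cap cap)
    {
       /* No default case on purpose: the compiler then reports unhandled CAPs. */
       switch (cap) {
       case EXAMPLE_CAP_NPOT_TEXTURES:
          return 1;
       case EXAMPLE_CAP_TWO_SIDED_STENCIL:
          return 1;
       case EXAMPLE_CAP_GLSL:
          return 1;
       }
       return 0;  /* reached only for out-of-range values */
    }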
This will throw a compile warning if there's an unhandled CAP.
Also, don't mark them as 'user', because they will be uploaded through
the translate fallback anyway.
The motivation behind this is to add some self-documentation in the code
about how each CAP can be used.
The idea is:
- enum pipe_cap is only valid in get_param
- enum pipe_capf is only valid in get_paramf
Which CAPs are floating-point has been determined based on how every driver
except svga implemented the functions; svga has been modified to match all
the other drivers.
Besides that, the floating-point CAPs are now prefixed with PIPE_CAPF_.
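A simplified sketch of the resulting split (the real declarations live in gallium's p_defines.h and p_screen.h, the enumerator lists here are truncated, and the real query functions also take the screen pointer): each query's signature now documents which CAP set it accepts.

    enum pipe_cap {
       PIPE_CAP_NPOT_TEXTURES,           /* integer/boolean capabilities */
       PIPE_CAP_TWO_SIDED_STENCIL
    };

    enum pipe_capf {
       PIPE_CAPF_MAX_LINE_WIDTH,         /* floating-point capabilities */
       PIPE_CAPF_MAX_TEXTURE_ANISOTROPY
    };

    struct example_screen_funcs {
       int   (*get_param)(enum pipe_cap cap);     /* pipe_cap values only */
       float (*get_paramf)(enum pipe_capf cap);   /* pipe_capf values only */
    };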
Only i965g does not enable GLSL, but that driver has been unmaintained and
bitrotting for quite a while anyway.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
v2: updated an error message
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
This fixes:
- depthstencil-default_fb-copypixels
- fbo-depthstencil-GL_DEPTH24_STENCIL8-copypixels
Reviewed-by: Brian Paul <[email protected]>
The old count_uniform_size::num_shader_uniforms was actually
calculating the number of components used. Multiplying by 4 when
setting gl_shader::num_uniform_components caused us to count 4x as
many uniform components as were actually used.
Signed-off-by: Ian Romanick <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=42930
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=42966
Acked-by: Marek Olšák <[email protected]>
Tested-by: Vinson Lee <[email protected]>
Tested-by: Pavel Ondračka <[email protected]>
Reviewed-and-tested-by: Kenneth Graunke <[email protected]>
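A tiny worked example of the miscount (hypothetical shader, illustrative numbers only): a shader with two vec4 uniforms and one float uniform uses 2*4 + 1 = 9 components, but multiplying a count that is already in components by 4 reports 36.

    #include <stdio.h>

    int
    main(void)
    {
       /* What the counter was actually accumulating: components, not uniforms. */
       unsigned components_used = 2 * 4 + 1;   /* two vec4 uniforms + one float */

       /* The pre-fix code treated that as a uniform count and scaled it by 4. */
       unsigned reported_before_fix = components_used * 4;

       printf("components used: %u, reported before the fix: %u\n",
              components_used, reported_before_fix);
       return 0;
    }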
Regresses one Piglit test: bugs/fdo10370.
I'm not enabling HiZ for gen7 yet because it causes a mysterious
performance regression.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
For depthstencil renderbuffers, we were using separate stencil only if the
hardware required it. Since the performance gains from HiZ are so high, we
should always use separate stencil if the hardware supports it.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
I implemented functions for horizontal/vertical alignment units separately
because I find it easier to read that way...especially with all the
corner-cases.
[chad] Corrected the vertical alignment calculation by checking for
depthstencil formats.
v2:
- Fix typos in intel_horizontal_texture_alignment_unit():
s/height/width/ and s/VALIGN/HALIGN.
- Remove special case for compressed formats in
intel_get_texture_alignment_unit(). Compressed formats are already
handled in the halign and valign functions.
- Replace check ``_mesa_is_depth_format(...) ||
_mesa_is_depthstencil_format(...)`` with explicit checks against
GL_DEPTH_COMPONENT and GL_DEPTH_STENCIL.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
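Purely as a structural sketch of the split described above (hypothetical helpers and illustrative alignment values, not the hardware-accurate rules from intel_tex_layout.c): compressed formats align to their block size, depth/stencil surfaces get a larger vertical unit, and everything else uses the base units.

    #include <stdbool.h>

    static unsigned
    example_halign(bool compressed, unsigned block_width)
    {
       /* Illustrative: compressed formats align to the block width. */
       return compressed ? block_width : 4;
    }

    static unsigned
    example_valign(bool compressed, unsigned block_height,
                   bool depth_or_depthstencil)
    {
       if (compressed)
          return block_height;   /* illustrative: block height */
       if (depth_or_depthstencil)
          return 4;              /* illustrative: larger unit for depth/stencil */
       return 2;
    }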
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
This allows us to replace all the calls to
intel_get_texture_alignment_unit() with a single call at miptree creation.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
When a depth texture is first attached to a framebuffer, allocate a HiZ
miptree for it.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>