This is a map of miptree slices to needed resolves, implemented as
a linked list. A future commit will embed such a list in
intel_mipmap_tree.
If you think I'm crazy to put a list in a miptree, read the Doxygen in
this patch for intel_resolve_map.
v2: [anholt] Move Doxygen from function prototypes to definitions.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
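As a rough illustration of the structure described above (all names here are assumptions drawn from this message, not from the patch itself):

    #include <stdint.h>

    /* Map of miptree slices to needed resolves, kept as a singly-linked
     * list.  Field and enum names are illustrative assumptions. */
    enum intel_need_resolve {
       INTEL_NEED_HIZ_RESOLVE,
       INTEL_NEED_DEPTH_RESOLVE,
    };

    struct intel_resolve_map {
       uint32_t level;                  /* miptree level of the slice */
       uint32_t layer;                  /* miptree layer of the slice */
       enum intel_need_resolve need;    /* which resolve the slice needs */
       struct intel_resolve_map *next;
    };

A linked list keeps insertion cheap and the common case (an empty or very short list) compact, which is presumably why it fits inside a miptree.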
---
Now that intel_renderbuffer::region has been replaced with a miptree, the
HiZ functions' region parameter must be replaced with a miptree parameter.
Change the return type from bool to void.
Rename the 'depth' parameter to 'layer', because it will correspond to
irb->mt_layer.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
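A sketch of the signature change this implies (the function name is a stand-in, not necessarily one of the real HiZ entry points):

    #include <stdbool.h>

    struct intel_context;
    struct intel_mipmap_tree;

    /* Before (sketch):
     *   bool intel_hiz_op(struct intel_context *intel,
     *                     struct intel_region *region,
     *                     unsigned int depth);
     */

    /* After: takes a miptree, returns void, and 'layer' corresponds to
     * irb->mt_layer. */
    void intel_hiz_op(struct intel_context *intel,
                      struct intel_mipmap_tree *mt,
                      unsigned int layer);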
---
Remove the following functions:
i830_hiz_resolve_noop
i915_hiz_resolve_noop
brw_hiz_resolve_noop
My original strategy for how intel->vtbl.resolve_*buffer was used has
substantially changed. The above functions are no longer called in the
current strategy.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
This is required to correctly implement HiZ for mipmapped and
multi-layered textures.
v2: Accommodate refcount fixes in intel_process_dri2_buffer_*() that were
introduced in v2 of commit
intel: Replace intel_renderbuffer::region with a miptree [v2]
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
intel_mipmap_tree::stencil_mt [v3]
For depthstencil textures using separate stencil, we embedded a stencil
buffer in intel_texture_image. The intention was that the embedded stencil
buffer would be the golden copy of the texture's stencil bits. When
necessary, we scattered/gathered the stencil bits between the texture
miptree and the embedded stencil buffer.
This approach had a serious deficiency for mipmapped or multi-layer
textures. At any given moment, the embedded stencil buffer was consistent
with exactly one miptree slice: the most recently scattered one. This
permitted tests of type A to pass, but broke tests of type B.
Test A:
1. Create a depthstencil texture.
2. Upload data into (level=x1, layer=y1).
3. Read and test stencil data at (level=x1, layer=y1).
4. Upload data into (level=x2, layer=y2).
5. Read and test stencil data at (level=x2, layer=y2).
Test B:
1. Create a depthstencil texture.
2. Upload data into (level=x1, layer=y1).
3. Upload data into (level=x2, layer=y2).
4. Read and test stencil data at (level=x1, layer=y1).
5. Read and test stencil data at (level=x2, layer=y2).
v2:
Only allocate stencil miptree if intel->must_use_separate_stencil,
because we don't make the conversion from must_use_separate_stencil to
has_separate_stencil until commit
intel: Use separate stencil whenever possible
v3:
Don't call ChooseNewTexture in intel_renderbuffer_wrap_miptree() in
order to determine the renderbuffer format. Instead, pass the format as
a param to that function.
CC: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
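Test B, written out as GL calls for concreteness (a hedged sketch with placeholder sizes; only two miplevels are shown, and the per-slice readback via an FBO is elided):

    GLuint tex;
    static GLuint data1[64 * 64], data2[32 * 32];  /* packed 24_8 texels */

    /* 1. Create a mipmapped, multi-layer depthstencil texture. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH24_STENCIL8, 64, 64, 4, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 1, GL_DEPTH24_STENCIL8, 32, 32, 4, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

    /* 2. Upload into (level=0, layer=1).  3. Upload into (level=1, layer=2). */
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 1, 64, 64, 1,
                    GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, data1);
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 1, 0, 0, 2, 32, 32, 1,
                    GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, data2);

    /* 4.-5. Attach each slice to an FBO and read its stencil bits back.
     * With one shared embedded stencil buffer, the buffer now matches only
     * (level=1, layer=2), so the readback at (level=0, layer=1) in step 4
     * sees stale stencil bits and fails. */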
---
This is in preparation for properly implementing glFramebufferTexture*()
for mipmapped depthstencil textures. The FIXME comments deleted by this
patch give a rough explanation of what was broken.
This refactor does the following:
- In intel_update_wrapper() and intel_wrap_texture(), change the
parameters to prepare to remove functions' dependency on
gl_texture_image.
- Move the call to intel_renderbuffer_set_draw_offsets() from
intel_render_texture() into intel_update_wrapper().
Each time I encounter those functions, I dislike their vague names.
(Update which wrapper? What is wrapped? What is the wrapper?). So, while
I was mucking around, I also renamed the functions.
v2:
In addition to the ``GLenum internal_format`` parameter to
intel_wrap_miptree(), add a ``gl_format format`` parameter. This
removes the need to recalculate the true format from
internal_format with ChooseNewTextureFormat, which was just weird.
Signed-off-by: Chad Versace <[email protected]>
---
This is a small helper function that asserts that a given level and layer
are valid for a miptree. I will use it extensively in the future
miptree HiZ functions.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
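A plausible shape for such a helper, assuming the miptree fields named in neighboring commits (a sketch, not the patch):

    #include <assert.h>
    #include <stdint.h>

    /* Assumes mt->first_level, mt->last_level, and per-level depth exist
     * as in the surrounding commits. */
    static void
    intel_miptree_check_level_layer(struct intel_mipmap_tree *mt,
                                    uint32_t level, uint32_t layer)
    {
       assert(level >= mt->first_level);
       assert(level <= mt->last_level);
       assert(layer < mt->level[level].depth);
    }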
---
Since the renderbuffer tracks the miptree level and layer that it wraps,
the 'tex_image' and 'zoffset' params are no longer needed to calculate the draw
offsets.
Not only are they no longer needed, but their presence would prevent
calculating the renderbuffer draw offsets in situations where there is
no texture image. Such situations will occur during the HiZ meta-op and
during scatter/gather of separate stencil textures.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
Add two fields to intel_renderbuffer:
mt_level
mt_layer
Multiple renderbuffers may simultaneously wrap a single texture and each
provide a different view into that texture. [Consider
glFramebufferTextureLayer()]. The new fields indicate which slice of the
miptree is wrapped by the renderbuffer.
The buffer resolve operations, to be introduced in the future, require
these fields in order to resolve the correct slice in the miptree.
To add the fields, it was necessary to replace the type of some function
parameters from gl_texture_image to gl_renderbuffer_attachment.
v2: [kwg] Replace confusing condition `CubeMapFace > 0` with the more
sensible `Target == GL_TEXTURE_CUBE_MAP`.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
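Roughly, the renderbuffer gains a (level, layer) view into its miptree (a sketch; surrounding fields are elided):

    #include <GL/gl.h>

    struct intel_renderbuffer {
       struct gl_renderbuffer Base;
       struct intel_mipmap_tree *mt; /* the miptree this renderbuffer wraps */

       /* Which slice of the miptree this renderbuffer presents, e.g. as
        * selected by glFramebufferTextureLayer(): */
       unsigned mt_level;
       unsigned mt_layer;

       /* ...other fields elided... */
    };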
---
For all texture targets except GL_TEXTURE_CUBE_MAP, the 'nr_images' and
'depth' fields of intel_mipmap_level were identical. In the exceptional
case, nr_images == 6 and depth == 1.
It is simple to determine if a texture is a cube or not, so the presence
of two fields here was not helpful. Worse, it was confusing. When we
eventually implement GL_ARB_texture_cube_map_array, this mess would have
become even more confusing.
This patch removes 'nr_images' and assigns to 'depth' a consistent
meaning: depth is the number of 2D slices at each miplevel. The exact
semantics of depth varies according to the texture target:
- For GL_TEXTURE_CUBE_MAP, depth is 6.
- For GL_TEXTURE_2D_ARRAY, depth is the number of array slices. It is
identical for all miplevels in the texture.
- For GL_TEXTURE_3D, depth is the texture's depth at each miplevel. Its
value, like width and height, varies with miplevel.
- For other texture types, depth is 1.
As a consequence, parameters were removed from the following function
signatures:
intel_miptree_set_level_info
Remove 'nr_images'.
i945_miptree_layout
brw_miptree_layout_texture
brw_miptree_layout_texture_array
Remove 'slices'.
v2:
- Replace "It's" with "Its".
- Remove all hunks in intel_fbo.c. The hunks were spurious and sneaked
in during a rebase.
- Remove unneeded hunk in intel_tex_map_image_for_swrast(). It was
a little refactor of the for-loop's upper bound.
v4:
In intel_miptree_get_image_offset(), document the conditions under
which different if-branches are taken.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
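The new rule, condensed into a sketch (the helper name and MAX2 definition are illustrative; GL_TEXTURE_2D_ARRAY may require <GL/glext.h>):

    #include <GL/gl.h>

    #define MAX2(a, b) ((a) > (b) ? (a) : (b))

    /* Number of 2D slices at a given miplevel, per the semantics above. */
    static unsigned
    miptree_depth_at_level(GLenum target, unsigned depth0, unsigned level)
    {
       switch (target) {
       case GL_TEXTURE_CUBE_MAP:
          return 6;                        /* one slice per face */
       case GL_TEXTURE_2D_ARRAY:
          return depth0;                   /* identical at every miplevel */
       case GL_TEXTURE_3D:
          return MAX2(depth0 >> level, 1); /* minifies like width/height */
       default:
          return 1;
       }
    }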
---
Extract the body of the inner loop into a new function,
intel_miptree_copy_slice().
This is in preparation for adding support for separate stencil and HiZ to
intel_miptree_copy_teximage(). When copying a slice of a depthstencil
miptree that uses separate stencil, we will also need to copy the
corresponding slice of the stencil miptree. The easiest way to do this
will be to call intel_miptree_copy_slice() recursively. Analogous
reasoning applies to copying a slice of a depth miptree with HiZ.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
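The intended shape, sketched (the stencil_mt field is named in a neighboring commit; the copy body itself is elided):

    static void
    intel_miptree_copy_slice(struct intel_context *intel,
                             struct intel_mipmap_tree *dst,
                             struct intel_mipmap_tree *src,
                             unsigned level, unsigned layer)
    {
       /* ...blit one 2D slice of src into dst at (level, layer)... */

       /* Separate stencil: the stencil bits live in a second miptree, so
        * copy the matching slice of that tree too, recursively. */
       if (dst->stencil_mt && src->stencil_mt)
          intel_miptree_copy_slice(intel, dst->stencil_mt, src->stencil_mt,
                                   level, layer);
    }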
---
Add a new field, intel_mipmap_level::slice, and move the offset fields
into it. Also add some much needed documentation for these fields.
Before this patch, a separate array was allocated for the
intel_mipmap_level::{x,y}_offsets. This was just silly; it incurred an
extra call to malloc and diminished memory locality.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
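After the change, the per-slice offsets live in a single array of slice structs inside the level, replacing the separately malloc'd x/y offset arrays (a sketch; other level fields elided):

    #include <GL/gl.h>

    struct intel_mipmap_level {
       /* ...width, height, and other per-level fields elided... */

       /** Per-slice data, one entry for each 2D slice in this level. */
       struct intel_mipmap_slice {
          GLuint x_offset;  /* X offset of the slice within the miptree */
          GLuint y_offset;  /* Y offset of the slice within the miptree */
       } *slice;            /* one allocation for all slices of the level */
    };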
---
Essentially, this patch just globally substitutes `irb->region` with
`irb->mt->region` and then does some minor cleanups to avoid segfaults
and other problems.
This is in preparation for
1. Fixing scatter/gather for mipmapped separate stencil textures.
2. Supporting HiZ for mipmapped depth textures.
As a nice benefit, this lays down some preliminary groundwork for easily
texturing from any renderbuffer, even those of the window system.
A future commit will replace intel_mipmap_tree::hiz_region with a miptree.
v2:
- Return early in intel_process_dri2_buffer_*() if region allocation
fails.
- Fix double semicolon.
- Fix miptree reference leaks in the following functions:
intel_process_dri2_buffer_with_separate_stencil()
intel_image_target_renderbuffer_storage()
v3:
- [anholt] Fix check for hiz allocation failure. Replace
``if (!irb->mt)`` with ``if (!irb->mt->hiz_region)``.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
This function creates a miptree that is suitable as storage for
a non-texture renderbuffer.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
Move the following inline functions:
intel_get_rb_region
intel_framebuffer_has_hiz
A future commit will replace the renderbuffer's region with a miptree.
This small refactor will eliminate the need for intel_fbo.h to include
intel_mipmap_tree.h in that commit. I'd like to avoid the situation where
each header transitively includes every other header.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
The only user of intel_framebuffer_get_hiz_region() was
intel_framebuffer_has_hiz(). So I folded the body of the former into the
latter.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
A great refactor thrashing begins after this commit for HiZ and separate
stencil. Removing code for texture HiZ will make that refactoring easier,
because then we don't have to maintain that code during the refactor.
To disable HiZ for textures, I've removed the hook in
intel_update_wrapper() that allocates a HiZ buffer when attaching a depth
texture to a framebuffer.
HiZ was broken for textures anyway, so there's no regression here.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
The function gathered the stencil buffer into the depth buffer only when
the map mode contained the read bit. But we must do the gather even if the
map mode is write-only. If we do not, then, when the depth buffer's stencil
bits are scattered into the stencil buffer by intel_unmap_renderbuffer(),
some of the scattered stencil bits would be invalid.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
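A toy model of why the gather must be unconditional (plain C, not driver code; 'packed' stands in for the depth buffer's stencil bits, 'separate' for the stencil buffer):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
       char separate[4] = {1, 2, 3, 4};   /* valid bits before the map */
       char packed[4]   = {9, 9, 9, 9};   /* stale bits */

       /* Map for writing.  Without the fix, this gather was skipped: */
       memcpy(packed, separate, 4);       /* gather */

       packed[0] = 7;                     /* caller writes only one pixel */

       /* Unmap always scatters *all* bits back: */
       memcpy(separate, packed, 4);       /* scatter */

       /* With the gather, separate == {7,2,3,4}; without it, the three
        * untouched pixels would have become the stale 9s. */
       printf("%d %d %d %d\n", separate[0], separate[1],
              separate[2], separate[3]);
       return 0;
    }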
---
1. Don't map the depthstencil buffer twice
Place a guard in intel_renderbuffer_map() to prevent a renderbuffer
from being mapped twice. This happened if a single buffer was attached to
the framebuffer's depth and stencil attachment points. (Interestingly,
because intel_map_renderbuffer_gtt() is idempotent, the double mapping did
not cause bugs for depthstencil buffers *without* separate stencil).
2. Stop overriding gl_framebuffer::_DepthBuffer,_StencilBuffer
Normally, if a depthstencil buffer is attached to the framebuffer's
depth attachment point, then _mesa_update_framebuffer() installs
a wrapper depth renderbuffer at gl_framebuffer::_DepthBuffer. Ditto for
the stencil attachment point and gl_framebuffer::_StencilBuffer.
A depthstencil intel_renderbuffer with separate stencil contains hidden
depth and stencil renderbuffers, which are the *real* renderbuffers. In
order to force swrast to work, we were installing, in
brw_update_draw_buffer(), the hidden renderbuffers at
gl_framebuffer::_DepthBuffer and _StencilBuffer, thus overriding the
behavior of _mesa_update_framebuffer(). However, now that
intel_renderbuffer_map() is implemented with MapRenderbuffer(),
overriding _mesa_update_framebuffer()'s behavior introduces bugs. This patch
removes the override code.
Fixes several Piglit tests on gen7.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
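The guard in part 1 might look roughly like this (the condition and names are guesses from the message, not the patch):

    static void
    intel_renderbuffer_map(struct gl_context *ctx, struct gl_renderbuffer *rb)
    {
       if (rb->Data) {
          /* Already mapped via the other attachment point (the same buffer
           * can sit at both the depth and stencil attachments). */
          return;
       }
       /* ...ctx->Driver.MapRenderbuffer(...) and span setup... */
    }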
---
The special stencil span accessors, as set by intel_span_init_funcs,
perform software W detiling. Since intel_renderbuffer_map() now uses
MapRenderbuffer, rb->Data points to an *untiled* stencil buffer.
Fixes several Piglit tests on gen7.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
Signed-off-by: Ben Skeggs <[email protected]>
---
Previously a vertex shader that used no samplers would get updated (by
calling the driver's ProgramStringNotify) when a sampler in the
fragment shader was updated. This was discovered while investigating
some spurious code generation for shaders in Cogs. The behavior in
Cogs is especially pessimal because it ping-pongs sampler uniform
settings:
glUniform1i(sampler1, 0);
glUniform1i(sampler2, 1);
draw();
glUniform1i(sampler1, 1);
glUniform1i(sampler2, 0);
draw();
glUniform1i(sampler1, 0);
glUniform1i(sampler2, 1);
draw();
// etc.
ProgramStringNotify is still too big of a hammer. Applications like
Cogs will still defeat the shader cache. A lighter-weight mechanism
that can work with the shader cache is needed. However, this patch at
least restores the previous behavior.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
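The shape of the fix, very roughly (the prog_uses_sampler() helper is hypothetical, and only two stages are shown):

    /* When a sampler uniform changes, notify only the stages whose
     * programs actually use that sampler. */
    if (shProg->VertexProgram &&
        prog_uses_sampler(&shProg->VertexProgram->Base, location))
       ctx->Driver.ProgramStringNotify(ctx, GL_VERTEX_PROGRAM_ARB,
                                       &shProg->VertexProgram->Base);
    if (shProg->FragmentProgram &&
        prog_uses_sampler(&shProg->FragmentProgram->Base, location))
       ctx->Driver.ProgramStringNotify(ctx, GL_FRAGMENT_PROGRAM_ARB,
                                       &shProg->FragmentProgram->Base);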
---
In slow_read_depth_stencil_pixels_separate() we might have separate
depth and stencil buffers or a combined buffer. In the latter case,
don't map the buffer twice. This function is used when the depth
scale/bias pixel transfer values are not the defaults.
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=42963
Reviewed-by: José Fonseca <[email protected]>
---
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
---
The GL spec doesn't say that we should skip the attenuation and spot
calculations for an infinite light (Ppli.w == 0). Instead, it gives the
same formula for the lighting calculation for both finite and infinite
lights (see page 62 of the OpenGL 2.1 spec).
That said, formula (2.4) on page 62 defines the attenuation factor to be
1.0 when Ppli.w == 0, so the attenuation calculation alone can still be
skipped for infinite lights.
This fixes all the intel oglc l_sed fail subcases and introduces no
intel oglc regressions.
v2: fix a wrong indentation (comment from Brian).
Signed-off-by: Yuanhan Liu <[email protected]>
Acked-by: Brian Paul <[email protected]>
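For reference, the attenuation factor from formula (2.4) of the GL 2.1 spec, paraphrased:

$$
\mathrm{att}_i =
\begin{cases}
\dfrac{1}{k_{0i} + k_{1i}\,\lVert P_{pli}-V\rVert + k_{2i}\,\lVert P_{pli}-V\rVert^{2}} & \text{if } P_{pli}.w \neq 0,\\[1ex]
1.0 & \text{otherwise,}
\end{cases}
$$

so for an infinite light the attenuation factor is identically 1.0 and can be skipped, while the spotlight term must still be evaluated.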
---
Make sure all lighting tables are updated before using the table to
calculate something, say using _SpotExpTable to calculate
_VP_inf_spot_attenuation.
Signed-off-by: Yuanhan Liu <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
Fixes regressions in piglit:
ARB_color_buffer_float/GL_RGBA16F-getteximage
ARB_color_buffer_float/GL_RGBA16F-readpixels
ARB_color_buffer_float/GL_RGBA32F-getteximage
ARB_color_buffer_float/GL_RGBA32F-readpixels
Reviewed-by: Brian Paul <[email protected]>
---
Fixes some spurious GL errors in the upcoming
gl-3.0-required-sized-formats piglit test.
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
---
intelAllocateBuffer() was oblivious to separate stencil buffers. This
patch fixes it to allocate a non-tiled stencil buffer with special pitch,
just as the DDX does.
Without this, any app that attempted to create an EGL surface with stencil
bits would crash. Of course, this affected only environments that used the
builtin DRI2 backend, such as Android and Wayland.
Fixes GLBenchmark2.1 on Android on gen7.
Note: This is a candidate for the 7.11 branch.
Tested-by: Louie Tsaie <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
---
I changed the dimensions of the stencil buffer's region, as allocated by
the DDX, in xf86-video-intel commit
3e55f3e88b40471706d5cd45c4df4010f8675c75 (dri: Do not tile stencil buffer).
But I forgot to make the analogous update to the Intel DRI2 glue in Mesa.
This patch makes that update.
Surprisingly, the mismatch did not cause any bugs. But the mismatch, if
left unfixed, *would* create bugs in the next commit.
Note: This is a candidate for the 7.11 branch.
Signed-off-by: Chad Versace <[email protected]>
---
When calculating the y offset needed for detiling window system stencil
buffers, replace the term
region->height * 2 + region->height % 2 - 1
with
rb->Height - 1.
The two terms are incidentally equivalent due to some out-of-date,
incorrect code in the Intel DRI2 glue for the DDX. (See
intel_process_dri2_buffer_with_separate_stencil(), line ``buffer_height /=
2;``).
Note: This is a candidate for the 7.11 branch (only the intel_span.c hunk).
Signed-off-by: Chad Versace <[email protected]>
---
Reviewed-by: Eric Anholt <[email protected]>
---
Rather than redefining the BYTE/SHORT_TO_FLOAT macros, just define new
ones with different names. These macros preserve zero when converting.
Reviewed-by: Eric Anholt <[email protected]>
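A sketch of the idea (the Z-suffixed names are assumptions; Mesa's actual new macro names may differ): the classic mapping (2b+1)/255 does not send 0 to 0.0, so the new macros special-case zero.

    /* Mesa-style conversions (do not preserve zero): */
    #define BYTE_TO_FLOAT(B)    ((2.0F * (B) + 1.0F) * (1.0F / 255.0F))
    #define SHORT_TO_FLOAT(S)   ((2.0F * (S) + 1.0F) * (1.0F / 65535.0F))

    /* Zero-preserving variants under new names (names assumed): */
    #define BYTE_TO_FLOATZ(B)   ((B) == 0 ? 0.0F : BYTE_TO_FLOAT(B))
    #define SHORT_TO_FLOATZ(S)  ((S) == 0 ? 0.0F : SHORT_TO_FLOAT(S))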
---
Reviewed-by: Eric Anholt <[email protected]>
---
Reviewed-by: Eric Anholt <[email protected]>
---
and _mesa_sizeof_packed_type()
Reviewed-by: Eric Anholt <[email protected]>
---
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=42635
---
This fixes a crash with the piglit vbo-too-small test.
Reviewed-by: José Fonseca <[email protected]>
---
Don't assert/die if a VBO is too small. Return zero instead. For
debug builds, emit a warning message since this is an unusual situation
that might indicate that there's a bug in the app.
Note that util_draw_max_index() now returns max_index+1 instead of
max_index. This lets us return zero to indicate that one of the VBOs
is too small to draw anything.
Fixes a failure with the new piglit vbo-too-small test.
Reviewed-by: José Fonseca <[email protected]>
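A toy model of the convention change (plain C, not the gallium call site; util_draw_max_index's real parameters are elided): returning one past the largest drawable index lets 0 mean "buffer too small, draw nothing".

    #include <stdio.h>

    static unsigned
    max_index_plus_one(unsigned buffer_size, unsigned stride,
                       unsigned elem_size)
    {
       if (buffer_size < elem_size)
          return 0;                          /* too small for even one vertex */
       return (buffer_size - elem_size) / stride + 1;
    }

    int main(void)
    {
       /* 10-byte VBO, 16-byte vertices: */
       unsigned n = max_index_plus_one(10, 16, 16);
       if (n == 0)
          printf("VBO too small: drawing nothing\n");
       else
          printf("valid indices: 0 .. %u\n", n - 1);
       return 0;
    }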
---
We can use the core Mesa code for glReadPixels now. We just have to
validate state and flush the bitmap cache before reading.
---
st_cb_readpixels.c is going away next.
Acked-by: Eric Anholt <[email protected]>
---
We use the code in main/readpix.c now.
Acked-by: Eric Anholt <[email protected]>
---
Acked-by: Eric Anholt <[email protected]>
---
The swrast ReadPixels code has no dependencies on swrast since moving
to Map/UnmapRenderbuffer(). We'll be able to remove s_readpix.c and
remove the state tracker's glReadPixels code next.
Acked-by: Eric Anholt <[email protected]>
---
We'll soon be able to use these for a core Mesa implementation of
glReadPixels.
Acked-by: Eric Anholt <[email protected]>
---
This was only used by the xlib driver to add an alpha channel to the
front/window color buffer. This was no longer going to work well with
the move to direct mapping of renderbuffers.
Reviewed-by: Eric Anholt <[email protected]>