| Commit message | Author | Age | Files | Lines |
| |
Also make sure that pipe_blit_info gets zero'd out so that query isn't
accidentally left enabled.
Signed-off-by: Ilia Mirkin <[email protected]>
Cc: "10.2" <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
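For illustration, the pattern looks roughly like this (a sketch using the
generic Gallium pipe_blit_info / pipe->blit interface, not the commit's
actual diff; src_tex/dst_tex are placeholders):

   struct pipe_blit_info blit;
   memset(&blit, 0, sizeof(blit));   /* unset fields, e.g. the render-condition
                                        flag, start out disabled */
   blit.src.resource = src_tex;
   blit.dst.resource = dst_tex;
   /* fill in levels, boxes, formats, mask and filter as needed */
   pipe->blit(pipe, &blit);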
|
| |
This is needed by _mesa_generate_mipmap.
This adds an array of pipe_transfers to st_texture_image. Each transfer is
for mapping a single layer.
v2: allocate the array of transfers on demand
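Roughly the shape of the change, as a sketch (field and variable names are
assumed here, not the exact Mesa ones):

   /* one pipe_transfer per layer, allocated only when first needed (v2) */
   struct pipe_transfer **transfer = stImage->transfer;
   if (!transfer) {
      transfer = CALLOC(num_layers, sizeof(*transfer));
      stImage->transfer = transfer;
   }
   /* layer i is then mapped into transfer[i] */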
|
| |
We were using REALLOC() from u_memory.h but FREE() from imports.h.
This mismatch caused us to trash the heap on Windows after we
deleted a texture object.
This fixes a regression from commit 6c59be7776e4d.
Reviewed-by: Christian König <[email protected]>
Reviewed-by: Jakob Bornecrantz <[email protected]>
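The rule being restored, sketched with the u_memory.h wrappers (note that
REALLOC there also takes the old size):

   #include "util/u_memory.h"

   ptr = REALLOC(ptr, old_count * sizeof(*ptr), new_count * sizeof(*ptr));
   /* ... use ptr ... */
   FREE(ptr);   /* must be the FREE that matches u_memory.h's REALLOC */

Memory obtained from one allocator has to be released through the same one;
mixing u_memory.h's REALLOC with a different FREE is what corrupted the heap
on Windows.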
|
| |
Keep a dynamically increasing array of all the views
created for a texture instead of just the last one.
v2: add comments, fix array size calculation,
release only the first sampler view found
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
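In outline, the bookkeeping looks something like this (names and growth
policy are illustrative, not the exact code):

   /* grow the per-texture array and remember every view handed out */
   if (stObj->num_sampler_views == stObj->max_sampler_views) {
      unsigned new_max = 2 * stObj->max_sampler_views + 4;
      stObj->sampler_views = REALLOC(stObj->sampler_views,
                                     stObj->max_sampler_views * sizeof(view),
                                     new_max * sizeof(view));
      stObj->max_sampler_views = new_max;
   }
   pipe_sampler_view_reference(&stObj->sampler_views[stObj->num_sampler_views++],
                               view);

On texture destruction every stored view can then be unreferenced, instead of
leaking all but the last one.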
|
| |
If st_GetTexImage() is to decompress the texture, avoid the fallback
path even if prefer_blit_based_texture_transfer = false. For drivers
that returned PIPE_CAP_PREFER_BLIT_BASED_TEXTURE_TRANSFER = 0, we
were always taking the fallback path for texture decompression rather
than rendering a quad. The latter is a lot faster.
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
| |
This binds a NULL sampler view in that case.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74251
Cc: "10.1" "10.0" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
| |
s/\bgl_format\b/mesa_format/g. Use a better name for the Mesa formats enum.
|
| |
Tungsten Graphics Inc. was acquired by VMware Inc. in 2008. Leaving the
old copyright name is creating unnecessary confusion, hence this change.
This was the sed script I used:
$ cat tg2vmw.sed
# Run as:
#
# git reset --hard HEAD && find include scons src -type f -not -name 'sed*' -print0 | xargs -0 sed -i -f tg2vmw.sed
#
# Rename copyrights
s/Tungsten Gra\(ph\|hp\)ics,\? [iI]nc\.\?\(, Cedar Park\)\?\(, Austin\)\?\(, \(Texas\|TX\)\)\?\.\?/VMware, Inc./g
/Copyright/s/Tungsten Graphics\(,\? [iI]nc\.\)\?\(, Cedar Park\)\?\(, Austin\)\?\(, \(Texas\|TX\)\)\?\.\?/VMware, Inc./
s/TUNGSTEN GRAPHICS/VMWARE/g
# Rename emails
s/[email protected]/[email protected]/
s/[email protected]/[email protected]/g
s/jrfonseca-at-tungstengraphics-dot-com/jfonseca-at-vmware-dot-com/
s/jrfonseca\[email protected]/[email protected]/g
s/keithw\[email protected]/[email protected]/g
s/[email protected]/[email protected]/g
s/thomas-at-tungstengraphics-dot-com/thellstom-at-vmware-dot-com/
s/[email protected]/[email protected]/
# Remove dead links
s@Tungsten Graphics (http://www.tungstengraphics.com)@Tungsten Graphics@g
# C string src/gallium/state_trackers/vega/api_misc.c
s/"Tungsten Graphics, Inc"/"VMware, Inc"/
Reviewed-by: Brian Paul <[email protected]>
|
| |
So that it acts like ordinary free(). This lets us remove a bunch of
if statements where the function is called.
v2:
- Avoid a compile error on MSVC and possible warnings on other compilers.
- Add a comment noting that passing a NULL pointer is safe.
Reviewed-by: Brian Paul <[email protected]>
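The convention being adopted, shown with a hypothetical helper (name and type
are made up for illustration):

   void
   example_destroy(struct example *obj)
   {
      if (!obj)
         return;      /* NULL is now a no-op, exactly like free(NULL) */
      FREE(obj->data);
      FREE(obj);
   }

   /* callers can then drop the guard: */
   example_destroy(ptr);   /* instead of: if (ptr) example_destroy(ptr); */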
|
| |
This is a subset of geometry shaders. It's all about setting first_layer and
last_layer correctly.
Also some code between st_render_texture and update_framebuffer_state is
consolidated. It doesn't use rtt_level and instead derives the level from the
dimensions, as the code in st_atom_framebuffer.c did.
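The essential part is filling in the layer range of the render-target
surface; roughly (a sketch using the standard pipe_surface template fields,
not the commit's exact code):

   struct pipe_surface surf_tmpl;
   memset(&surf_tmpl, 0, sizeof(surf_tmpl));
   surf_tmpl.format = format;
   surf_tmpl.u.tex.level = level;
   surf_tmpl.u.tex.first_layer = first_layer;  /* attached layer, or 0 */
   surf_tmpl.u.tex.last_layer = last_layer;    /* == first_layer for one layer,
                                                  array_size - 1 when layered */
   surface = pipe->create_surface(pipe, texture, &surf_tmpl);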
|
| |
Cc: [email protected]
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Intel had brokenness here, and I'd like to continue moving Mesa toward
hiding 1D_ARRAY's ridiculousness inside of the core, like we did with
MapTextureImage. Fixes copyteximage 1D_ARRAY on intel.
There's still an impedance mismatch in meta when falling back to read and
texsubimage, since texsubimage expects coordinates into 1D_ARRAY as
(width, slice, 0) instead of (width, 0, slice).
v2: Fix offset of scanline reads from the source. (Thanks Brian!), replace
dd.h comment with Paul's text and replace early exit with an assert.
Reviewed-by: Brian Paul <[email protected]> (v1)
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
Reviewed-by: Paul Berry <[email protected]> (v1)
|
| |
There were 2 issues with it:
1) The texture format which should be used for texturing was only set
in gl_texture_image::TexFormat, which wasn't used for sampler views.
2) Textures are sometimes reallocated under some circumstances
in st_finalize_texture, which is unacceptable if the texture comes
from a window system.
The issues are resolved as follows:
1) If surface_based is true (texture_from_pixmap, etc.), store the format
in a new variable st_texture_object::surface_format.
2) Don't reallocate a surface-based texture in st_finalize_texture.
Also don't use st_ChooseTextureFormat in st_context_teximage, because
the format is dictated by the caller.
This fixes the glx-tfp piglit test.
Reviewed-by: Adam Jackson <[email protected]>
|
| |
This adds support to the mesa state tracker for ARB_texture_multisample.
Hardware doesn't seem to use different texture instructions, so I don't
think we need to create one for TGSI at this time.
Thanks to Marek for fixes to sample number picking.
v2: idr pointed out a bug in how we picked the max sample counts,
use new internal format chooser interface to pick proper answers.
v3: use st_choose_format directly, it was okay, fix anding of masks.
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
| |
None of these were needed.
Reviewed-by: Jordan Justen <[email protected]>
|
| |
calim pointed out that we were getting mipmap levels for multisample array
textures, which didn't make sense. Then I noticed this function takes
last_level, so we were passing in a value that was too high here.
I think this should fix the case he was seeing.
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
| |
https://bugs.freedesktop.org/show_bug.cgi?id=62573
Tested-by: Andreas Boll <[email protected]>
|
| |
The blit-based paths for TexImage, GetTexImage, and ReadPixels aren't very
fast with a software rasterizer. Now Gallium drivers have the ability to turn
them off.
Reviewed-by: Brian Paul <[email protected]>
Tested-by: Brian Paul <[email protected]>
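Drivers opt in or out through a capability query at context creation; roughly
(sketch, reusing the cap and flag names that appear elsewhere in this log):

   st->prefer_blit_based_texture_transfer =
      screen->get_param(screen, PIPE_CAP_PREFER_BLIT_BASED_TEXTURE_TRANSFER);

   /* the blit-based TexImage/GetTexImage/ReadPixels paths then bail out
      early when the driver said no: */
   if (!st->prefer_blit_based_texture_transfer)
      goto fallback;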
|
| |
Initial version contributed by: Martin Andersson <[email protected]>
This is only used if the memcpy path cannot be used and if no transfer ops
are needed. It's pretty similar to our TexImage and GetTexImage
implementations.
The motivation behind this is to be able to use ReadPixels every frame and
still have at least 20 fps (or 60 fps with a powerful GPU and CPU)
instead of 0.5 fps.
Reviewed-by: Brian Paul <[email protected]>
Tested-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
It has become a bit messy.
Changes:
- finally correct checking for transfer ops depending on the base format
- making sure the base internal format and the texture format match
(we were ignoring it, but it's important for correctness)
- the way-too-strict rule that both src and dst base formats must be the same
was dropped; ensuring the simpler and more permissive rule mentioned above
is enough
- stop using util_blit_pixels; pipe->blit is flexible enough, and now that we
have RGBX and red-alpha formats, pipe->blit can be used for more cases
Reviewed-by: Brian Paul <[email protected]>
|
| |
Assuming I understand EXT_texture_sRGB correctly.
NOTE: This is a candidate for the stable branches.
Reviewed-by: Brian Paul <[email protected]>
|
| |
A temporary texture is created such that it matches the format and type
combination and pixels are copied to it using memcpy. Then the blit is used to
copy the temporary texture to the texture image being modified by TexImage or
TexSubImage. The blit takes care of the format and type conversion and
swizzling. The result is a very fast texture upload involving as little CPU
as possible.
This improves performance in apps which upload textures during rendering.
An example is the Wine OpenGL backend for DirectDraw, which I used to test
the game StarCraft. Profiling had shown that TexSubImage was taking 50% of
CPU time without this patch, which was the main motivation for this work, and
now TexSubImage only takes 14% of CPU time. I had to underclock my CPU to see
any difference in the game and this patch does make the game a lot faster
if the CPU is slow (or using the powersave cpufreq profile).
Reviewed-by: Brian Paul <[email protected]>
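The overall flow, paraphrased as a sketch (the helper names are invented for
illustration only):

   /* 1. create a staging texture whose format matches the user's
         format/type, so the CPU side is a plain memcpy */
   struct pipe_resource *tmp =
      create_matching_temp_texture(screen, format, type, width, height);

   /* 2. map it and memcpy the application's pixels in, row by row */
   upload_rows_with_memcpy(pipe, tmp, pixels, src_stride);

   /* 3. blit from the staging texture into the destination image; the GPU
         does the format conversion and swizzling */
   blit_temp_to_texture_image(pipe, tmp, dst_texture, level, xoffset, yoffset);

   pipe_resource_reference(&tmp, NULL);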
|
| |
This is not easy to hit, because we have 3 code paths now
(tried in this order):
- memcpy-based (skips the blit) -> _mesa_tex_getimage
- blit-based
- slow pixel packing -> _mesa_tex_getimage
The main difference later in the code is the parameters of
_mesa_image_address3d.
Reviewed-by: Brian Paul <[email protected]>
|
| |
BTW, we have 0 tests for glGetTexImage(format=GL_DEPTH*).
Reviewed-by: Brian Paul <[email protected]>
|
| |
I'll need this later.
Reviewed-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
This commit allows using glGetTexImage during rendering while still
maintaining interactive framerates.
This improves performance of WarCraft 3 under Wine. The framerate is improved
from 25 fps to 39 fps in the main menu, and from 0.5 fps to 32 fps in the game.
v2: fix choosing the format for decompression
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
If we call gl[Copy]TexImage2D() with a generic compression format
(e.g. intFormat=GL_COMPRESSED_RGBA) we can't choose a DXT format if
we don't have the external DXT compression library.
We weren't actually enforcing this before since the
pipe_screen::is_format_supported(DXT) query has no dependency on
the DXT compression library.
Now if we're given a generic compressed format and we can't do DXT
compression we'll fall back to a non-compressed format.
v2: use util_format_is_s3tc() function and add more comments about
the allow_dxt parameter.
Note: This is a candidate for the stable branches.
Reviewed-by: Jose Fonseca <[email protected]>
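Conceptually the check amounts to this (a sketch; allow_dxt is the parameter
described above):

   if (!allow_dxt && util_format_is_s3tc(pformat)) {
      /* no external DXT compressor is available, so never pick a DXT
         format for a generic GL_COMPRESSED_* request */
      pformat = PIPE_FORMAT_NONE;   /* keep looking among uncompressed formats */
   }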
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
https://bugs.freedesktop.org/show_bug.cgi?id=59143
Using GLubyte as per Brian's suggestion.
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
- We should use a 3D transfer of size Width x 1 x NumLayers.
- We should use layer_stride instead of stride.
(even though they are likely to be equal with 1D array textures)
Reviewed-by: Brian Paul <[email protected]>
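In other words, map a 1D array texture like this (sketch using the standard
box and transfer helpers):

   struct pipe_box box;
   struct pipe_transfer *t;
   void *map;

   u_box_3d(0, 0, 0, width, 1, num_layers, &box);   /* Width x 1 x NumLayers */
   map = pipe->transfer_map(pipe, res, level, PIPE_TRANSFER_READ, &box, &t);
   for (unsigned layer = 0; layer < num_layers; layer++) {
      const uint8_t *row = (const uint8_t *)map + layer * t->layer_stride;
      /* copy 'width' texels from 'row' */
   }
   pipe->transfer_unmap(pipe, t);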
|
| |
This uses a 3D blit to decompress the texture and then a 3D transfer
to read it.
Reviewed-by: Brian Paul <[email protected]>
|
| |
There was the fast path based on _mesa_format_matches_format_and_type
for GetTexImage, but it never worked, because the Mesa format we were testing
there was always compressed. Further testing showed that the fast path
had been completely broken.
In this commit, the somewhat limited helper util_create_rgba_texture is
no longer used and instead, custom code for the texture creation is added,
which tries to find the best matching RGBA8 format, so that we can hit
the fast path *always* if the read format is a variant of RGBA8 and supported
by the driver.
Reviewed-by: Brian Paul <[email protected]>
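The format search is roughly this (a sketch; the candidate list and bind
flags are illustrative):

   static const enum pipe_format candidates[] = {
      PIPE_FORMAT_R8G8B8A8_UNORM,
      PIPE_FORMAT_B8G8R8A8_UNORM,
      PIPE_FORMAT_A8R8G8B8_UNORM,
      PIPE_FORMAT_A8B8G8R8_UNORM,
   };
   enum pipe_format fmt = PIPE_FORMAT_NONE;
   for (unsigned i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
      if (screen->is_format_supported(screen, candidates[i], PIPE_TEXTURE_2D,
                                      0, PIPE_BIND_RENDER_TARGET)) {
         fmt = candidates[i];
         break;
      }
   }
   /* if fmt != PIPE_FORMAT_NONE, the memcpy fast path can be used */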
|
| |
I'll deal with 2D arrays later.
NOTE: This is a candidate for the stable branches.
|
| |
Not really used by anybody now.
Reviewed-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
This adds the necessary changes to the st to allow texture buffer object
support if the driver advertises it.
v1.1: remove extra blank line and whitespace
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
| |
Just use pipe->blit, which can do resolve, flipping, and format conversions.
The util_blit_pixels codepath is still there for the cases where we have to
force alpha to 1.
This also turns on acceleration for copying GL_DEPTH_STENCIL.
Reviewed-by: Brian Paul <[email protected]>
|
| |
Array textures were broken.
NOTE: This is a candidate for the stable branches.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Dave Airlie <[email protected]>
|
| |
It was pretty broken with array textures, where the array size (height or
depth depending on the target) shouldn't be magnified.
The guessing also doesn't fail with 1D and cube textures.
NOTE: This is a candidate for the stable branches.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Dave Airlie <[email protected]>
|
| |
This adds mesa state tracker support for the new extension,
along with glsl->tgsi conversion to use the new opcodes
where appropriate.
v2: fix assert found running textureSize tests.
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
| |
The layer dimension of array textures is not subject to mipmap minification.
OTOH we were missing an assertion for the depth dimension.
Fixes assertion failures with piglit {f,v}s-textureSize-sampler1DArrayShadow.
For some reason, they only resulted in piglit 'warn' results for me, not
failures.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=56211
NOTE: This is a candidate for the stable branches.
Signed-off-by: Michel Dänzer <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Tested-by: Andreas Boll <[email protected]>
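That is, only width/height/depth shrink as the mip level increases; the array
size stays fixed. A sketch with the u_minify helper:

   unsigned w = u_minify(base_width,  level);
   unsigned h = u_minify(base_height, level);
   unsigned d = u_minify(base_depth,  level);   /* depth is minified for 3D */
   unsigned layers = array_size;                /* the layer count never is */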
|
| |
Fixes WebGL texture mips conformance test, no piglit regressions.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=44912
NOTE: This is a candidate for the stable branches.
Signed-off-by: Michel Dänzer <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Tested-by: Andreas Boll <[email protected]>
|
| |
If GL_BASE_LEVEL==0 and GL_MAX_LEVEL==0 that's a pretty good hint that
there'll be a single mipmap level in the texture.
Google Earth sets the texture's state this way before the first glTexImage
call. This saves a bit of texture memory.
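The heuristic amounts to something like this (a sketch; the real check sits
in the texture allocation guessing code):

   /* if the app pinned both levels to 0 before its first glTexImage call,
      assume it will never upload more than one mip level */
   if (texObj->BaseLevel == 0 && texObj->MaxLevel == 0)
      last_level = 0;   /* allocate a single-level pipe_resource */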
|
| |
"get_transfer + transfer_map" becomes "transfer_map".
"transfer_unmap + transfer_destroy" becomes "transfer_unmap".
transfer_map must create and return the transfer object and transfer_unmap
must destroy it.
transfer_map is successful if the returned buffer pointer is not NULL.
If transfer_map fails, the pointer to the transfer object remains unchanged
(i.e. doesn't have to be NULL).
Acked-by: Brian Paul <[email protected]>
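After this change the usage pattern collapses to (sketch):

   struct pipe_transfer *transfer;
   void *map;

   map = pipe->transfer_map(pipe, resource, level, PIPE_TRANSFER_READ,
                            &box, &transfer);   /* also creates the transfer */
   if (map) {
      /* read or write the mapping */
      pipe->transfer_unmap(pipe, transfer);     /* also destroys it */
   }
   /* on failure 'transfer' is left unchanged; only 'map' signals success */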
|