To allow rendering in 16-bit/channel RGBA buffers.
Reviewed-by: José Fonseca <[email protected]>
|
One fewer place to have to update.
Reviewed-by: Eric Anholt <[email protected]>
|
Otherwise the state tracker will crash if the texture instructions
have offsets.
Reviewed-by: Brian Paul <[email protected]>
|
Reviewed-by: Roland Scheidegger <[email protected]>
|
AFAICT, all gallium drivers implement TGSI_OPCODE_LRP.
Tested with softpipe, llvmpipe, svga drivers.
Reviewed-by: Matt Turner <[email protected]>
|
Many GPUs have an instruction to do linear interpolation which is more
efficient than simply performing the algebra necessary (two multiplies,
an add, and a subtract).
Pattern matching or peepholing this is more desirable, but can be
tricky. By using an opcode, we can at least make shaders which use the
mix() built-in get the more efficient behavior.
Currently, all consumers lower ir_triop_lrp. Subsequent patches will
actually generate different code.
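For illustration, the algebra being replaced amounts to the following minimal C
sketch (scalar floats stand in for per-channel vector values; the function name
is illustrative):

    #include <stdio.h>

    /* lrp(x, y, a) = x*(1 - a) + y*a: the subtract, two multiplies, and
     * an add that a native LRP instruction collapses into one operation. */
    static float lower_lrp(float x, float y, float a)
    {
        return x * (1.0f - a) + y * a;
    }

    int main(void)
    {
        printf("%f\n", lower_lrp(0.0f, 10.0f, 0.25f)); /* prints 2.500000 */
        return 0;
    }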
v2 [mattst88]:
- Add LRP_TO_ARITH flag to ir_to_mesa.cpp. Will be removed in a
subsequent patch and ir_triop_lrp translated directly.
v3 [mattst88]:
- Move changes from the next patch to opt_algebraic.cpp to accept
3-src operations.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
Just use simple assignments.
Reviewed-by: Marek Olšák <[email protected]>
|
Use %td for ptrdiff_t (aka GLsizeiptrARB).
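A minimal example of the conversion specifier in question:

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        ptrdiff_t offset = 64;            /* e.g. a GLsizeiptrARB-sized value */
        printf("offset = %td\n", offset); /* %td is the C99 specifier for ptrdiff_t */
        return 0;
    }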
|
The old logic was kind of twisted, but seemed to work in practice.
Note: This is a candidate for the stable branches.
Reviewed-by: José Fonseca <[email protected]>
|
When we destroy an ARB vp/fp whose ID was gen'd but not otherwise used, we
get a pointer to the dummy/placeholder program. We can't destroy that one,
so just skip it. This only failed during context tear-down because
glDeleteProgramsARB() was already aware of dummy programs.
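A hedged sketch of the guard (stand-in types and names, not the exact Mesa
symbols):

    struct program { int id; };            /* stand-in for gl_program */
    static struct program dummy_program;   /* shared placeholder object */

    static void destroy_arb_program(struct program *prog)
    {
        /* A gen'd-but-unused ID resolves to the shared dummy program; it
         * is never allocated per-object, so it must never be destroyed. */
        if (prog == &dummy_program)
            return;
        /* ... per-object teardown would go here ... */
    }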
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=38086
Note: This is a candidate for the stable branches.
Tested-by: Andreas Boll <[email protected]>
|
We sometimes convert GL_QUAD_STRIP prims into GL_TRIANGLE_STRIP, but
that changes the results of the u_trim_pipe_prim() call. We need to
pass the original primitive type to the trim function.
Note that OpenGL's GL_x prim type values match Gallium's PIPE_PRIM_x values.
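A simplified sketch of the fix (illustrative names; the real helper,
u_trim_pipe_prim(), also rounds the count down to a complete primitive):

    enum prim { TRIANGLE_STRIP, QUAD_STRIP };

    static void submit(enum prim hw_prim, unsigned count)
    {
        (void)hw_prim; (void)count;        /* stub driver entry point */
    }

    /* A quad strip needs at least 4 vertices, a triangle strip only 3, so
     * trimming the converted TRIANGLE_STRIP would wrongly accept a
     * 3-vertex quad strip. */
    static unsigned trim_count(enum prim p, unsigned count)
    {
        unsigned min = (p == QUAD_STRIP) ? 4 : 3;
        return count < min ? 0 : count;
    }

    static void draw(enum prim orig_prim, unsigned in_count)
    {
        unsigned count = trim_count(orig_prim, in_count);  /* original prim */
        enum prim hw = (orig_prim == QUAD_STRIP) ? TRIANGLE_STRIP
                                                 : orig_prim; /* convert after */
        if (count)
            submit(hw, count);
    }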
Fixes a failure in the new piglit degenerate-prims test.
Note: This is a candidate for the stable branches.
Reviewed-by: José Fonseca <[email protected]>
|
We weren't mapping the PBO when using the bitmap cache (but we had
the PBO code for the non-cache path).
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=61026
Note: This is a candidate for the stable branches.
|
Reviewed-by: Brian Paul <[email protected]>
|
It has become a bit messy.
Changes:
- finally correct checking for transfer ops depending on the base format
- making sure the base internal format and the texture format match
(we were ignoring it, but it's important for correctness)
- the way-too-strict rule that both src and dst base formats must be the same
was dropped; ensuring the simpler and more permissive rule mentioned above
is enough
- stop using util_blit_pixels; pipe->blit is flexible enough, and now that we
have RGBX and red-alpha formats, pipe->blit can be used for more cases
Reviewed-by: Brian Paul <[email protected]>
|
Assuming I understand EXT_texture_sRGB correctly.
NOTE: This is a candidate for the stable branches.
Reviewed-by: Brian Paul <[email protected]>
|
A temporary texture is created such that it matches the format and type
combination, and pixels are copied to it using memcpy. Then the blit is used to
copy the temporary texture to the texture image being modified by TexImage or
TexSubImage. The blit takes care of the format and type conversion and
swizzling. The result is a very fast texture upload involving as little CPU
as possible.
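In outline, the fast path looks roughly like the sketch below; every helper
name is illustrative rather than a real st/mesa entry point:

    #include <string.h>

    struct tex;                                            /* opaque stand-in */
    extern int find_matching_format(int format, int type); /* < 0: no match */
    extern struct tex *create_temp_texture(int fmt, int w, int h);
    extern void *map_texture(struct tex *t);
    extern void unmap_texture(struct tex *t);
    extern void gpu_blit(struct tex *src, struct tex *dst);
    extern void destroy_texture(struct tex *t);
    extern void fallback_texstore(void);                   /* old CPU path */

    static void fast_teximage(struct tex *dst, const void *pixels,
                              int format, int type, int w, int h, int row_bytes)
    {
        int fmt = find_matching_format(format, type);
        if (fmt < 0) {               /* no bit-for-bit match: convert on CPU */
            fallback_texstore();
            return;
        }
        struct tex *tmp = create_temp_texture(fmt, w, h);
        char *map = map_texture(tmp);
        for (int y = 0; y < h; y++)  /* the only CPU work is a memcpy */
            memcpy(map + y * row_bytes,
                   (const char *)pixels + y * row_bytes, (size_t)row_bytes);
        unmap_texture(tmp);
        gpu_blit(tmp, dst);          /* conversion and swizzling on the GPU */
        destroy_texture(tmp);
    }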
This improves performance in apps which upload textures during rendering.
An example is the Wine OpenGL backend for DirectDraw, which I used to test
the game StarCraft. Profiling had shown that TexSubImage was taking 50% of
CPU time without this patch, which was the main motivation for this work, and
now TexSubImage only takes 14% of CPU time. I had to underclock my CPU to see
any difference in the game and this patch does make the game a lot faster
if the CPU is slow (or using the powersave cpufreq profile).
Reviewed-by: Brian Paul <[email protected]>
|
This is not easy to hit, because we have 3 code paths now
(tried in this order):
- memcpy-based (skips the blit) -> _mesa_tex_getimage
- blit-based
- slow pixel packing -> _mesa_tex_getimage
The main difference later in the code is the parameters of
_mesa_image_address3d.
Reviewed-by: Brian Paul <[email protected]>
|
BTW, we have 0 tests for glGetTexImage(format=GL_DEPTH*).
Reviewed-by: Brian Paul <[email protected]>
|
I'll need this later.
Reviewed-by: Brian Paul <[email protected]>
|
The GL_ARB_texture_rg spec says that we need to support both texturing
and rendering for the GL_RED and GL_RG formats. So move the format
check up into the rendertarget_mapping[] list. Also, add
PIPE_FORMAT_R8_UNORM to the list of formats required.
Note: This is a candidate for the stable branches.
Reviewed-by: Marek Olšák <[email protected]>
|
Reviewed-by: Brian Paul <[email protected]>
|
Broken by 624528834f53f54c7a934f929769b7e6b230a0b1.
Reviewed-by: Brian Paul <[email protected]>
|
This commit allows using glGetTexImage during rendering while still
maintaining interactive framerates.
This improves performance of WarCraft 3 under Wine. The framerate is improved
from 25 fps to 39 fps in the main menu, and from 0.5 fps to 32 fps in the game.
v2: fix choosing the format for decompression
|
Reviewed-by: Brian Paul <[email protected]>
|
Reviewed-by: Brian Paul <[email protected]>
|
Reviewed-by: Brian Paul <[email protected]>
|
Reviewed-by: Brian Paul <[email protected]>
|
These formats were added a few months after these tables were committed.
No idea why we have the table though. AFAIK, texstore always takes the slow path
for GL_RGBn.
Reviewed-by: Brian Paul <[email protected]>
|
Reviewed-by: Brian Paul <[email protected]>
|
v2: change the requirement from GLSL 1.30 to SM 3.0 (R500 can do this)
|
Reviewed-by: Brian Paul <[email protected]>
|
Based on the Intel driver.
Reviewed-by: Brian Paul <[email protected]>
|
EmitCondCodes is always false.
Reviewed-by: Brian Paul <[email protected]>
|
In particular, rework the sRGB/linear format selection code.
There's no reason to mess with the Mesa format.
Just do everything in terms of the gallium pipe_format.
Reviewed-by: Marek Olšák <[email protected]>
|
That was the only place it was being called from.
|
The code before was getting a pipe format, then calling
st_pipe_format_to_mesa_format() and then converting back again with
st_mesa_format_to_pipe_format(). This removes one conversion step.
|
If we call gl[Copy]TexImage2D() with a generic compression format
(e.g. intFormat=GL_COMPRESSED_RGBA) we can't choose a DXT format if
we don't have the external DXT compression library.
We weren't actually enforcing this before since the
pipe_screen::is_format_supported(DXT) query has no dependency on
the DXT compression library.
Now if we're given a generic compressed format and we can't do DXT
compression we'll fall back to a non-compressed format.
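A condensed sketch of the resulting decision (stand-in format names; the real
code uses the allow_dxt flag and util_format_is_s3tc() mentioned below):

    #include <stdbool.h>

    enum fmt { FMT_DXT5_RGBA, FMT_RGBA8888 };  /* abridged stand-ins */

    /* screen_supports_dxt: the is_format_supported(DXT) query, i.e. the
     * GPU can sample DXT; it says nothing about compressing.
     * have_library: the external DXT compression library is present. */
    static enum fmt choose_rgba_format(bool screen_supports_dxt,
                                       bool have_library)
    {
        bool allow_dxt = screen_supports_dxt && have_library;
        return allow_dxt ? FMT_DXT5_RGBA : FMT_RGBA8888; /* uncompressed */
    }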
v2: use util_format_is_s3tc() function and add more comments about
the allow_dxt parameter.
Note: This is a candidate for the stable branches.
Reviewed-by: Jose Fonseca <[email protected]>
|
v2: Update to handle BufferSize being -1 and return a NULL sampler
view if the specified range would cause out of bounds access.
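A hedged sketch of that v2 behavior, with illustrative types and names:

    #include <stddef.h>

    struct range { size_t first, size; };

    /* A negative size plays the role of BufferSize == -1 ("to the end of
     * the buffer"); a range reaching past the end yields NULL, so the
     * driver is never asked to sample out of bounds. */
    static struct range *clamp_range(struct range *out, size_t buf_size,
                                     size_t offset, ptrdiff_t buffer_size)
    {
        if (offset >= buf_size)
            return NULL;
        size_t size = buffer_size < 0 ? buf_size - offset
                                      : (size_t)buffer_size;
        if (size == 0 || size > buf_size - offset)
            return NULL;             /* would read out of bounds */
        out->first = offset;
        out->size  = size;
        return out;
    }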
Reviewed-by: Brian Paul <[email protected]>
Acked-by: Ian Romanick <[email protected]>
|
We never really have multisampling with one sample per pixel.
See also http://bugs.freedesktop.org/show_bug.cgi?id=59873
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Jose Fonseca <[email protected]>
|
The gallium docs for pipe_screen::is_format_supported() say that
samples==0 or samples==1 both mean that multisampling is not supported.
Return GL_MAX_SAMPLES==0 instead of 1 for consistency with other drivers.
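A sketch of the query under that convention (the screen callback is a stub
pretending the device tops out at 4x):

    #include <stdbool.h>

    /* Stand-in for pipe_screen::is_format_supported(). */
    static bool is_format_supported(unsigned samples)
    {
        return samples == 2 || samples == 4;
    }

    static unsigned query_max_samples(void)
    {
        /* Probe downward; samples == 0 and 1 both mean "no MSAA", so the
         * loop stops at 2 and the no-MSAA answer is 0, never 1. */
        for (unsigned samples = 32; samples >= 2; samples--) {
            if (is_format_supported(samples))
                return samples;
        }
        return 0;
    }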
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Marek Olšák <[email protected]>
|
To silence warnings about unhandled cases.
|
We weren't properly checking the return value of these calls (and
calls to u_upload_data()) to detect OOM errors.
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Jose Fonseca <[email protected]>
|
Reviewed-by: José Fonseca <[email protected]>
|
This makes it easier to find switch-statements that need to be updated
after a new GLSL_TYPE_* is added because the compiler will generate a
warning.
Switch-statements that only had a small number of cases (e.g.,
everything in ir_constant_expression.cpp) were not modified. I may
regret that decision when we eventually add support for doubles.
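The mechanism, in a self-contained example (abridged enum; the real one is
glsl_base_type):

    enum base_type { TYPE_FLOAT, TYPE_INT, TYPE_BOOL };  /* abridged */

    static unsigned size_of(enum base_type t)
    {
        /* No default case on purpose: when a new enumerant is added,
         * gcc/clang's -Wswitch warns "enumeration value not handled in
         * switch" at every switch that still needs updating. */
        switch (t) {
        case TYPE_FLOAT: return 4;
        case TYPE_INT:   return 4;
        case TYPE_BOOL:  return 1;
        }
        return 0;  /* unreachable; keeps -Wreturn-type quiet */
    }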
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Carl Worth <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
This patch replaces the three ir_variable_mode enums:
- ir_var_in
- ir_var_out
- ir_var_inout
with the following five:
- ir_var_shader_in
- ir_var_shader_out
- ir_var_function_in
- ir_var_function_out
- ir_var_function_inout
This eliminates a frustrating ambiguity: it used to be impossible to
tell whether an ir_var_{in,out} variable was a shader in/out or a
function in/out without seeing where the variable was declared in the
IR. This complicated some optimization and lowering passes, and would
have become a problem for implementing varying structs.
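The resulting mode set, sketched as a plain enum (the real declaration lives
in the GLSL IR headers and also carries the unrelated pre-existing modes such
as ir_var_auto and ir_var_uniform):

    enum ir_variable_mode_sketch {
        ir_var_shader_in,      /* was ir_var_in    at shader scope */
        ir_var_shader_out,     /* was ir_var_out   at shader scope */
        ir_var_function_in,    /* was ir_var_in    on a parameter  */
        ir_var_function_out,   /* was ir_var_out   on a parameter  */
        ir_var_function_inout  /* was ir_var_inout on a parameter  */
    };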
In the lisp-style serialization of GLSL IR to strings performed by
ir_print_visitor.cpp and ir_reader.cpp, I've retained the names "in",
"out", and "inout" for function parameters, to avoid introducing code
churn to the src/glsl/builtins/ir/ directory.
Note: a couple of comments in the code seemed to indicate that we were
planning for a possible future in which geometry shaders could have
shader-scope inout variables. Our GLSL grammar rejects shader-scope
inout variables, and I've been unable to find any evidence in the GLSL
standards documents (or extensions) that this will ever be allowed, so
I've eliminated these comments.
Reviewed-by: Carl Worth <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Reviewed-by: Brian Paul <[email protected]>
|
Reviewed-by: Brian Paul <[email protected]>
|
compression
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Lee Salzman <[email protected]>