| Commit message | Author | Age | Files | Lines |
| |
Signed-off-by: Brian Paul <[email protected]>
| |
SVGA3D supports SGN only for vertex shaders, and even there it requires two
additional temporary registers for intermediate results.
For fragment shaders, lower SGN to two CMPs and one ADD.
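As a rough sketch of that CMP/ADD lowering in scalar C (assuming TGSI-style
CMP semantics, i.e. CMP(a, b, c) selects b when a < 0 and c otherwise; the
function name is purely illustrative):

    /* sketch only: SGN(x) from two CMP-style selects and one ADD */
    static float sgn_lowered(float x)
    {
       float pos = (-x < 0.0f) ?  1.0f : 0.0f;   /* CMP #1: +1 when x > 0 */
       float neg = ( x < 0.0f) ? -1.0f : 0.0f;   /* CMP #2: -1 when x < 0 */
       return pos + neg;                         /* ADD: yields -1, 0 or +1 */
    }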
| |
Makes it easier to figure out which opcode it's about.
| |
Signed-off-by: Brian Paul <[email protected]>
| |
Signed-off-by: Brian Paul <[email protected]>
| |
Signed-off-by: Brian Paul <[email protected]>
| |
The extra length is the size of the request *minus* the size of the
VendorPrivate header, not the sum of the two.
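Illustrative arithmetic only (variable names assumed, not the actual Mesa
code):

    /* sketch: bytes that follow the VendorPrivate header */
    unsigned extra = total_request_size - sz_xGLXVendorPrivateReq;   /* minus */
    /* previously computed, incorrectly, as total_request_size + sz_xGLXVendorPrivateReq */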
NOTE: This is a candidate for the 7.9 and 7.10 branches
Signed-off-by: Julien Cristau <[email protected]>
Signed-off-by: Brian Paul <[email protected]>
| |
xGLXChangeDrawableAttributesSGIXReq follows the GLXVendorPrivate header
with a drawable, number of attributes, and list of (type, value)
attribute pairs. Don't forget to put the number of attributes in there.
I don't think this can ever have worked.
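A hedged sketch of that payload layout (variable names are illustrative, not
the actual Mesa code):

    CARD32 *pc = (CARD32 *) (req + 1);     /* data after the VendorPrivate header */
    *pc++ = drawable;                      /* GLX drawable */
    *pc++ = num_attribs;                   /* attribute count -- the missing piece */
    for (unsigned i = 0; i < num_attribs; i++) {
       *pc++ = attribs[i * 2];             /* type  */
       *pc++ = attribs[i * 2 + 1];         /* value */
    }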
NOTE: This is a candidate for the 7.9 and 7.10 branches
Signed-off-by: Julien Cristau <[email protected]>
Signed-off-by: Brian Paul <[email protected]>
| |
The context init is separate for these GPUs.
| |
6xx/7xx have a max of 4 DBs; evergreen has a max of 8.
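Hypothetical sketch of how the count could be picked (the chip-family check
is an assumption, not the actual code):

    /* sketch: 6xx/7xx -> 4 DBs, evergreen and newer -> 8 DBs */
    unsigned num_db = (family >= CHIP_CEDAR) ? 8 : 4;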
Signed-off-by: Alex Deucher <[email protected]>
| |
Like on some r5xx, there are multiple DB backends on the r600; we need to
add up the query results from each of these to get the final correct value.
So far I'm not 100% sure how to calculate the num_db value; setting it to 4
should be harmless enough until we do.
This fixes the occlusion_query piglit test on my rv740.
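Roughly, the summation looks like this (a sketch, assuming each DB writes a
begin/end counter pair into the query buffer; names are illustrative):

    /* sketch: add up the per-DB counters to get the final query result */
    uint64_t result = 0;
    for (unsigned i = 0; i < num_db; i++) {
       uint64_t begin = results[i * 2];       /* counter at query begin */
       uint64_t end   = results[i * 2 + 1];   /* counter at query end   */
       result += end - begin;
    }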
Signed-off-by: Dave Airlie <[email protected]>
| |
Signed-off-by: Alex Deucher <[email protected]>
| |
eea1d8199b376f37027c14669e0bdf991a22872d
Although CUBE is a reduction instruction, it writes to more than just PV.X,
so we need to keep the dst channel.
Signed-off-by: Dave Airlie <[email protected]>
| |
This only works on r600/r700 so far; evergreen doesn't appear
to have the multiwrite enable bit in the color control, so we
may have to do an actual shader rewrite on EG hardware.
Also remove some duplicate reg defines.
Signed-off-by: Dave Airlie <[email protected]>
| |
texture.
I know Jerome will probably rewrite the way depth textures work sometime
soon. For the time being, though, this should at least make common depth
texture usage for shadowing work properly.
| |
When the paint is opaque (currently, solid color with alpha 1.0f), no
blending is needed for VG_BLEND_SRC_OVER. This eliminates the serious
performance hit introduced by 859106f196ade77f59f8787b071739901cd1a843
for a common scenario.
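A minimal sketch of the fast path, assuming a helper that reports whether the
current paint is opaque (the helper name is an assumption):

    /* sketch: skip read-modify-write blending when the source is opaque */
    if (blend_mode == VG_BLEND_SRC_OVER && paint_is_opaque(paint))
       blend_mode = VG_BLEND_SRC;   /* plain overwrite */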
| |
Before create_handle returns, obj->handle is 0. Calling
handle_to_object will fail.
| |
Prevents missing symbols in libGL.so when LLVM is disabled.
| |
Fixes SCons build.
| |
There should be no need to unset this pointer here, and if we don't end up
using the blitter we've just broken the state.
| |
Swizzles are now defined everywhere as a 12-bit field that contains four
channels' worth of meaningful information. Any channel that is unused is
set to RC_SWIZZLE_UNUSED. This change is necessary because rgb instructions
and alpha instructions were initializing channels that would never be used
(channel 3 for rgb and channels 1-3 for alpha) with 0 (aka RC_SWIZZLE_X).
This made it impossible to use generic helper functions for swizzles,
because sometimes a channel value of 0 meant unused and other times it
meant RC_SWIZZLE_X.
All hacks that tried to guess how many channels were relevant have
also been removed.
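For reference, the layout is four 3-bit channel selectors packed into 12 bits;
a sketch of the kind of generic helper this enables (macro and enum names
follow the r300 compiler style but should be treated as assumptions here):

    /* sketch: 4 channels x 3 bits = 12-bit swizzle field */
    #define GET_SWZ(swz, chan)   (((swz) >> ((chan) * 3)) & 0x7)

    static int channel_is_used(unsigned swz, unsigned chan)
    {
       /* 0 now always means RC_SWIZZLE_X; unused channels are RC_SWIZZLE_UNUSED */
       return GET_SWZ(swz, chan) != RC_SWIZZLE_UNUSED;
    }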
| |
1) Only translate the [min_index, max_index] range.
2) Upload translated vertices via the uploader.
3) Rename valid_vertex_buffer[] to real_vertex_buffer[]
| |
1) Only translate the [min_index, max_index] range.
2) Upload translated vertices via the uploader.
| |
This fixes:
- piglit/draw-vertices
- piglit/draw-vertices-half-float
| |
Only upload the [min_index, max_index] range instead of [0, userbuf_size].
This is an important optimization.
Framerate in Lightsmark:
Before: 22 fps
After: 75 fps
The same optimization is already in r300g.
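The arithmetic behind the range-limited upload, as a hedged sketch (names are
illustrative; the exact u_upload_mgr call is omitted to avoid guessing its
signature):

    /* sketch: copy only the vertices the draw actually references */
    unsigned start = min_index * stride;
    unsigned size  = (max_index + 1 - min_index) * stride;
    memcpy(upload_ptr, (const uint8_t *)user_ptr + start, size);
    /* shift the buffer offset so index 'min_index' still addresses the same vertex */
    vb->buffer_offset = upload_offset - start;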
| |
Added a conditional to spi_update per Dave's comment.
| |
I can't see a performance difference with this code, which means all
the driver-specific code removed in this commit was unnecessary.
Now we use u_upload_mgr in a slightly different way than we did before it got
dropped. I am not restoring the original code "as is" due to the latest
u_upload_mgr changes that r300g's performance benefits from.
This also fixes:
- piglit/fp-kil
| |
When an app loads libEGL.so dynamically with RTLD_LOCAL, loading DRI
drivers would fail because of missing glapi symbols. This commit makes
egl_dri2 load libglapi.so with RTLD_GLOBAL to export glapi symbols for
future symbol resolutions.
The same trick can be found in GLX. However, egl_dri2 can only do so
when --enable-shared-glapi is given, because otherwise both libGL.so
and libglapi.so define glapi symbols and egl_dri2 cannot tell which
library to load.
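The trick boils down to something like this (a sketch using dlopen from
<dlfcn.h>; the soname and error handling are assumptions, not the verbatim
Mesa code):

    /* sketch: make glapi symbols globally visible so DRI drivers loaded
     * later can resolve them even if the app loaded libEGL with RTLD_LOCAL */
    void *glapi = dlopen("libglapi.so.0", RTLD_LAZY | RTLD_GLOBAL);
    if (!glapi)
       return EGL_FALSE;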
| |
When the user sets EGL_DRIVER to egl_dri2 (or egl_glx), make sure the
built-in driver is used. The user might leave the outdated egl_dri2.so
(or egl_glx.so) on the filesystem and we do not want to load it.
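Sketched out, the check could look like this (names are hypothetical; the
built-in table and create callback are assumptions):

    /* sketch: prefer the built-in driver when EGL_DRIVER names one */
    const char *name = getenv("EGL_DRIVER");
    for (unsigned i = 0; name && i < ARRAY_SIZE(builtin_drivers); i++) {
       if (strcmp(name, builtin_drivers[i].name) == 0)
          return builtin_drivers[i].create();   /* never dlopen() a stale .so */
    }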
| |
makedepend would crash when a source includes a header indirectly, such
as
#define HEADER "some-header.h"
#include HEADER
Do not define HEADER (makedepend would detect this as an incomplete
include) and add the dependency manually in the Makefile.
This should hopefully fix bug #33374.
| |
User buffers may be the fastest way to upload data.
| |
For 1D/2D texture arrays, use the pipe_resource::array_size field.
In OpenGL, 1D array textures use the height dimension as the array
size and 2D array textures use the depth dimension as the array size.
Gallium uses a special array_size field instead. When setting up
gallium textures or comparing Mesa textures to gallium textures we
need to be extra careful that we're comparing the right fields.
The new st_gl_texture_dims_to_pipe_dims() function maps OpenGL
texture dimensions to gallium texture dimensions and simplifies
this quite a bit.
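A rough sketch of the mapping for the two array targets (only the array cases
shown; parameter names are assumptions, not necessarily the real signature):

    /* sketch: GL array-texture dims -> gallium dims + array_size */
    switch (target) {
    case GL_TEXTURE_1D_ARRAY:
       *width_out  = width;
       *height_out = 1;           /* GL height is really the layer count */
       *depth_out  = 1;
       *layers_out = height;
       break;
    case GL_TEXTURE_2D_ARRAY:
       *width_out  = width;
       *height_out = height;
       *depth_out  = 1;           /* GL depth is really the layer count */
       *layers_out = depth;
       break;
    /* other targets map 1:1 */
    }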
| |
Don't use height for 1D array textures or depth for 2D array textures.
| |
This reverts commit d3df641f0aba99b0b65ecd4d9b06798bca090a29.
The original commit had sat unpushed on my machine for months. By the
time I found it again, I had forgotten that we had decided not to use
this change after all, (the relevant test was removed long ago).