1) Only translate the [min_index, max_index] range.
2) Upload translated vertices via the uploader.
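
A minimal sketch of the idea (the helper names here are illustrative,
not the actual r300g/translate symbols):

    #include <stdlib.h>

    /* Hypothetical helper: convert vertices [min_index, max_index] into
     * the hardware format and place them in a GPU buffer. */
    static void
    upload_translated_range(struct translate *trans, unsigned min_index,
                            unsigned max_index, unsigned out_stride)
    {
       unsigned count = max_index - min_index + 1;
       void *tmp = malloc(count * out_stride);

       translate_range(trans, min_index, count, tmp);   /* format fallback */
       upload_vertices(tmp, count * out_stride);        /* via the uploader */
       free(tmp);
    }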
|
This fixes:
- piglit/draw-vertices
- piglit/draw-vertices-half-float
|
Only upload the [min_index, max_index] range instead of [0, userbuf_size].
This is an important optimization.
Framerate in Lightsmark:
Before: 22 fps
After: 75 fps
The same optimization is already in r300g.
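
The range computation is simple stride arithmetic; a sketch, assuming
the usual stride and buffer_offset fields on the vertex buffer:

    /* Upload only the bytes the draw can read, not the whole user buffer. */
    unsigned start = vb->buffer_offset + min_index * vb->stride;
    unsigned size  = (max_index - min_index + 1) * vb->stride;
    /* upload [start, start + size) instead of [0, userbuf_size) */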
|
Added a conditional to spi_update per Dave's comment.
|
I can't see a performance difference with this code, which means all
the driver-specific code removed in this commit was unnecessary.
Now we use u_upload_mgr in a slightly different way than we did before it got
dropped. I am not restoring the original code "as is" because of recent
u_upload_mgr changes that benefit r300g performance.
This also fixes:
- piglit/fp-kil
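
The upload pattern looks roughly like this (a sketch only; the
u_upload_mgr signatures changed several times around this period, so
treat them as approximate):

    struct u_upload_mgr *upload =
       u_upload_create(pipe, 128 * 1024, 16, PIPE_BIND_VERTEX_BUFFER);

    unsigned offset;
    struct pipe_resource *buf = NULL;

    /* copy 'size' bytes from the user pointer into a managed GPU buffer */
    u_upload_data(upload, size, user_ptr, &offset, &buf);
    /* then draw from 'buf' at 'offset' instead of the user pointer */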
|
When an app loads libEGL.so dynamically with RTLD_LOCAL, loading DRI
drivers would fail because of missing glapi symbols. This commit makes
egl_dri2 load libglapi.so with RTLD_GLOBAL to export glapi symbols for
future symbol resolutions.
The same trick can be found in GLX. However, egl_dri2 can only do this
when --enable-shared-glapi is given: otherwise both libGL.so and
libglapi.so define glapi symbols, and egl_dri2 cannot tell which library
to load.
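
A minimal sketch of the trick (the function name is illustrative; error
handling trimmed):

    #include <dlfcn.h>

    /* Export glapi symbols so DRI drivers loaded later can resolve
     * them, even though libEGL.so itself was opened with RTLD_LOCAL. */
    static int
    dri2_load_glapi(void)
    {
       return dlopen("libglapi.so", RTLD_LAZY | RTLD_GLOBAL) != NULL;
    }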
|
When the user sets EGL_DRIVER to egl_dri2 (or egl_glx), make sure the
built-in driver is used. The user might have an outdated egl_dri2.so
(or egl_glx.so) left on the filesystem, and we do not want to load it.
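
A hypothetical sketch of the selection logic (function names are
illustrative):

    #include <stdlib.h>
    #include <string.h>

    static _EGLDriver *
    select_driver(void)
    {
       const char *name = getenv("EGL_DRIVER");
       if (name && (!strcmp(name, "egl_dri2") || !strcmp(name, "egl_glx")))
          return load_builtin_driver(name);   /* never dlopen a stale .so */
       return load_user_driver(name);
    }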
|
makedepend would crash when a source includes a header indirectly, such
as:

    #define HEADER "some-header.h"
    #include HEADER

Do not define HEADER (makedepend would detect this as an incomplete
include) and add the dependency manually in the Makefile.
This should hopefully fix bug #33374.
|
User buffers may be the fastest way to upload data.
|
For 1D/2D texture arrays use the pipe_resource::array_size field.
In OpenGL, 1D array textures use the height dimension as the array
size and 2D array textures use the depth dimension as the array size.
Gallium uses a special array_size field instead. When setting up
gallium textures or comparing Mesa textures to gallium textures we
need to be extra careful that we're comparing the right fields.
The new st_gl_texture_dims_to_pipe_dims() function maps OpenGL
texture dimensions to gallium texture dimensions and simplifies
this quite a bit.
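
A sketch of the mapping (this mirrors what the new function does; the
name and signature here are illustrative, not the exact Mesa ones):

    static void
    gl_dims_to_pipe_dims(GLenum target,
                         unsigned width, unsigned height, unsigned depth,
                         unsigned *pwidth, unsigned *pheight,
                         unsigned *pdepth, unsigned *array_size)
    {
       switch (target) {
       case GL_TEXTURE_1D_ARRAY:
          *pwidth = width;  *pheight = 1;      *pdepth = 1;
          *array_size = height;   /* GL height is the layer count */
          break;
       case GL_TEXTURE_2D_ARRAY:
          *pwidth = width;  *pheight = height; *pdepth = 1;
          *array_size = depth;    /* GL depth is the layer count */
          break;
       default:
          *pwidth = width;  *pheight = height; *pdepth = depth;
          *array_size = 1;
          break;
       }
    }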
|
Don't use height for 1D array textures or depth for 2D array textures.
|
This reverts commit d3df641f0aba99b0b65ecd4d9b06798bca090a29.
The original commit had sat unpushed on my machine for months. By the
time I found it again, I had forgotten that we had decided not to use
this change after all, (the relevant test was removed long ago).
|
Remove ES2, since AMD_conservative_depth is not listed in the OpenGL ES
extension registry.
|
The same number of shaders is now printed regardless of whether
optimizations are enabled, so that we can easily compare shader stats
side by side.
|
The GLSL specification is vague here, (just says "as is standard for
C++"), though the C specifications seem quite clear that this should
be an error.
However, an existing piglit test (CorrectPreprocess11.frag) expects
this to be a warning, not an error, so we make it a warning and document
the deviation from the specification in the README.
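
For example, this now produces a warning rather than an error:

    #define FOO 1
    #define FOO 2   /* redefinition: warning, not error */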
|
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=33440
This replaces commit 731ec60da3ccb92f5bfb4d6f1bc3c8e712751376
NOTE: This is a candidate for the 7.9 and 7.10 branches
Signed-off-by: Brian Paul <[email protected]>
|
Also clean up the whole thing.
|
So that 'foo' can be found in: OPTION=prefixfoosuffix,foo
Also allow debug options to be separated by any non-alphanumeric
character instead of just commas.
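
A sketch of the matching rule (a hypothetical helper, not the actual
debug-option parser):

    #include <ctype.h>
    #include <string.h>

    /* Does 'word' occur in 'opts' as a whole word, where words are
     * delimited by any non-alphanumeric character? */
    static int
    has_debug_option(const char *opts, const char *word)
    {
       size_t len = strlen(word);
       const char *p = opts;

       while ((p = strstr(p, word)) != NULL) {
          int start_ok = (p == opts) || !isalnum((unsigned char)p[-1]);
          int end_ok = !isalnum((unsigned char)p[len]);
          if (start_ok && end_ok)
             return 1;
          p += len;
       }
       return 0;
    }

With this, has_debug_option("prefixfoosuffix,foo", "foo") skips the
substring inside "prefixfoosuffix" and matches the stand-alone "foo".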
|
This drops the memblock manager for ZMASK. Instead, only one zbuffer can be
compressed at a time. Note that this does not necessarily have to be slower.
When there is a large number of zbuffers, compression might be used more often
than it was before. It's also easier to debug.
How it works:
1) 'clear' turns the compression on.
2) If some other zbuffer is set or the currently-bound zbuffer is used
for texturing, the driver decompresses it and then turns the compression off.
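
In pseudo-C, the policy is (names are illustrative, not the actual
r300g symbols):

    /* 1) clearing turns compression on for this zbuffer */
    void clear_depth(struct r300_ctx *ctx, struct zbuffer *zb)
    {
       clear_zmask(ctx, zb);        /* a single packet3 clears the ZMASK */
       ctx->compressed_zb = zb;
    }

    /* 2) switching zbuffers, or sampling the compressed one,
     * decompresses it and turns compression off */
    void use_zbuffer(struct r300_ctx *ctx, struct zbuffer *zb, int as_texture)
    {
       struct zbuffer *czb = ctx->compressed_zb;
       if (czb && (czb != zb || as_texture)) {
          decompress_zmask(ctx, czb);
          ctx->compressed_zb = NULL;   /* off until the next clear */
       }
    }
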
Notes:
- The ZMASK clear has been refactored, so that only one packet3 is used to clear
ZMASK.
- The 8x8 compression mode is disabled. I couldn't make it work without issues.
- Also removed driver-specific stuff from u_blitter.
Driver status:
- RV530 and R580 appear to just work (finally).
- RV570 should work, but there may be an issue that we don't correctly
calculate the number of dwords to clear, resulting in a partially
uninitialized zbuffer.
- RS690 misrenders as if no ZMASK clear happened. No idea what's going on.
- RV350 may even hardlock. This issue was already present and this patch doesn't
fix it.
I think we are still missing some hardware info we need to make the zbuffer
compression work fully.
Note that there is also an issue with HiZ, resulting in a sort of blocky
zigzagged corruption around some objects.
|
This isn't C++; please don't mix declarations with code.
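
That is, in C89 style:

    void f(void)
    {
       int x;           /* declarations first... */

       do_work();
       x = compute();   /* ...then statements */
    }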
|
For the previous commit.
|
I found this parenthetical usage of parentheses to be extraneously
extraneous:
(yyextra->ARB_fragment_coord_conventions_enable)
|
If an extension is prefixed with '+', attempt to enable it. This
introduces symmetry with the prefix '-', which is already allowed.
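
For instance, assuming this is the MESA_EXTENSION_OVERRIDE string (a
hypothetical example):

    MESA_EXTENSION_OVERRIDE="+GL_AMD_conservative_depth -GL_EXT_texture_sRGB"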
|
* Reduce max indentation level from 7 to 3.
* Eliminate counter variables.
* Remove function append().
|
All the necessary compiler infrastructure for AMD_conservative_depth is in
place, so it's safe to enable it in the parser.
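
Shaders can then enable it in the usual way:

    #extension GL_AMD_conservative_depth : enable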
|
From the AMD_conservative_depth spec:
    If gl_FragDepth is redeclared in any fragment shader in a program, it
    must be redeclared in all fragment shaders in that program that have
    static assignments to gl_FragDepth. All redeclarations of gl_FragDepth in
    all fragment shaders in a single program must have the same set of
    qualifiers.
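
For example, every fragment shader in the program that statically
assigns gl_FragDepth must carry the same redeclaration, such as:

    #extension GL_AMD_conservative_depth : enable
    layout (depth_greater) out float gl_FragDepth;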