Submit all bitstreams at once to decode_bitstream.
Signed-off-by: Christian König <[email protected]>
Signed-off-by: Maarten Lankhorst <[email protected]>
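For illustration, a minimal sketch of what the batched call could look like from the caller's side; the decoder struct and the decode_bitstream signature below are invented stand-ins, not the actual Mesa interface:

    /* Hypothetical types: all bitstream buffers for a frame are handed over
     * in a single decode_bitstream call instead of one call per buffer. */
    struct decoder {
       void (*decode_bitstream)(struct decoder *dec, unsigned num_buffers,
                                const void * const *buffers,
                                const unsigned *sizes);
    };

    static void submit_all(struct decoder *dec, unsigned num_buffers,
                           const void * const *buffers, const unsigned *sizes)
    {
       dec->decode_bitstream(dec, num_buffers, buffers, sizes);
    }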
---
Reported-by: Andy Furniss <[email protected]>
Signed-off-by: Maarten Lankhorst <[email protected]>
---
Only initialize the vlc in MPEG2 decoding once for all slices, add more
sanity checks to the vlc decoding functions, support multiple vlc input
buffers, and improve the documentation of the vlc functions.

v2: also implement multiple inputs for the vlc functions
v3: some bug fixes for buffer size and alignment corner cases
v4: rework of the patch, some more improvements

Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
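To illustrate the multiple-input-buffer idea, here is a self-contained sketch of a reader that continues into the next buffer once the current one is exhausted; the names and layout are invented for illustration, not the real vl_vlc code:

    #include <stdint.h>

    /* Hypothetical vlc reader over several input buffers. */
    struct vlc_reader {
       const uint8_t *const *buffers;
       const unsigned *sizes;
       unsigned num_buffers, cur_buf, cur_byte;
    };

    static void vlc_init(struct vlc_reader *vlc, unsigned num_buffers,
                         const uint8_t *const *buffers, const unsigned *sizes)
    {
       vlc->buffers = buffers;
       vlc->sizes = sizes;
       vlc->num_buffers = num_buffers;
       vlc->cur_buf = 0;
       vlc->cur_byte = 0;
    }

    /* Return the next byte, or -1 once all buffers are consumed. */
    static int vlc_next_byte(struct vlc_reader *vlc)
    {
       while (vlc->cur_buf < vlc->num_buffers &&
              vlc->cur_byte >= vlc->sizes[vlc->cur_buf]) {
          vlc->cur_buf++;            /* skip empty or finished buffers */
          vlc->cur_byte = 0;
       }
       if (vlc->cur_buf >= vlc->num_buffers)
          return -1;
       return vlc->buffers[vlc->cur_buf][vlc->cur_byte++];
    }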
---
gallivm: Close a memory leak

This fixes a memory leak of 32 bytes on exit, as reported by
"valgrind --leak-check=full glxgears".

Signed-off-by: Lauri Kasanen <[email protected]>
Signed-off-by: José Fonseca <[email protected]>
---
csc is not used for rgba and gives a warning.
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
---
Mapping to software and uploading again just for clearing kills performance.
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Christian König <[email protected]>
---
This fixes the piglit glsl-1.10 shadow1D related tests.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
The 4th texcoord is used in this case for the comparison.
This fixes piglit glsl-fs-shadow2DArray* on softpipe.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
The code didn't handle the case where front wasn't specified in the vertex
shader outputs, but back was.
In that case we were doing a copy from back to a non-existent front;
this change checks that we have existing front/backs and only does
the copy when they both exist.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
This is the first part of a fix for piglit glsl-fs-shadow1DArray.
Also fix the passing of the unused r[2] in the normal 1D case.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
llvm-3.1svn r145714 moved global variables into a new TargetOptions
class. The TargetMachine constructor now needs a TargetOptions object
as well.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
Namely:
- EXT_transform_feedback
- ARB_transform_feedback2
- ARB_transform_feedback_instanced

The old interface was not useful for OpenGL and had to be reworked.
This interface was originally designed for OpenGL, but additional
changes have been made in order to make st/d3d1x support easier.

The most notable change is that the stream-out info must be linked
with a vertex or geometry shader and cannot be set independently.
This is due to limitations of existing hardware (special shader
instructions must be used to write into stream-out buffers),
and it's also how OpenGL works (stream outputs must be specified
prior to linking shaders).

Other than that, each stream output buffer has a "view" into it that
internally maintains the number of bytes which have been written
into it (one buffer can be bound in several different transform
feedback objects in OpenGL, so we must be able to have several views
around). The set_stream_output_targets function contains a parameter
saying whether new data should be appended or not.

Also, the view can optionally be used to provide the vertex
count for draw_vbo. Note that the count is supposed to be stored
in device memory and the CPU never gets to know its value.

OpenGL way | Gallium way
------------------------------------
BeginTF    = set_so_targets(append_bitmask = 0)
PauseTF    = set_so_targets(num_targets = 0)
ResumeTF   = set_so_targets(append_bitmask = ~0)
EndTF      = set_so_targets(num_targets = 0)
DrawTF     = use pipe_draw_info::count_from_stream_output

v2: * removed the reset_stream_output_targets function
    * added a parameter append_bitmask to set_stream_output_targets,
      each bit specifies whether new data should be appended to each
      buffer or not
v3: * added PIPE_CAP_STREAM_OUTPUT_PAUSE_RESUME for ARB_tfb2;
      note that the draw-auto subset is always required (for d3d10),
      only the pause/resume functionality is limited if the CAP is not
      advertised
v4: * update gallium/docs
v5: * compactified struct pipe_stream_output_info, updated dump/trace
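As a rough sketch of the mapping in the table above; the types and member names below are simplified stand-ins for the real Gallium structs:

    #include <stddef.h>

    struct stream_output_target;   /* the "view" that tracks bytes written */

    struct context {
       void (*set_stream_output_targets)(struct context *ctx,
                                         unsigned num_targets,
                                         struct stream_output_target **targets,
                                         unsigned append_bitmask);
    };

    static void begin_tf(struct context *ctx, unsigned n,
                         struct stream_output_target **tgts)
    {
       ctx->set_stream_output_targets(ctx, n, tgts, 0);  /* write from start */
    }

    static void pause_tf(struct context *ctx)
    {
       ctx->set_stream_output_targets(ctx, 0, NULL, 0);  /* unbind; the views
                                                            keep their offsets */
    }

    static void resume_tf(struct context *ctx, unsigned n,
                          struct stream_output_target **tgts)
    {
       /* every bit set: append to all buffers after previously written data */
       ctx->set_stream_output_targets(ctx, n, tgts, ~0u);
    }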
---
Take viewport and scissors into account and make
the dirty area a parameter instead of a member.
Signed-off-by: Christian König <[email protected]>
---
This adds a new TGSI property to represent the GLSL layout qualifier.
---
Fixes -Wimplicit-function-declaration for ffs with GCC. Spotted/tested
by Kai Wasserbäch.
---
The number of fragment shader variants is not very representative of the
memory used by LLVM, and neither is the number of shader instructions, as
texture sampling often constitutes most of the generated code.

This change adds an additional trim criterion: least recently used
fragment shader variants will be freed until the total number of LLVM IR
instructions falls below a specified threshold.

Reviewed-by: Brian Paul <[email protected]>
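A condensed sketch of that trimming scheme; the struct and names are invented for illustration. Variants sit in a doubly linked list ordered most- to least-recently used, with "head" as sentinel, and we free from the cold end until the running total of IR instructions is back under the threshold:

    struct variant {
       struct variant *prev, *next;
       unsigned ir_instructions;     /* size of this variant's generated IR */
    };

    static void trim_variants(struct variant *head, unsigned *total_ir,
                              unsigned threshold,
                              void (*destroy)(struct variant *))
    {
       while (*total_ir > threshold && head->prev != head) {
          struct variant *lru = head->prev;          /* least recently used */
          lru->prev->next = lru->next;               /* unlink from the list */
          lru->next->prev = lru->prev;
          *total_ir -= lru->ir_instructions;
          destroy(lru);
       }
    }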
---
u_simple_list.h uses a sentinel element, not a NULL element, so ensure
the list is not empty when reducing the list of shader variants.
Something I noticed while trying to free variants more aggressively.
Reviewed-by: Brian Paul <[email protected]>
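A small sketch of the pitfall; the macros below mirror the sentinel idea but are not the real u_simple_list.h. An empty list is a sentinel pointing at itself, so taking the "last element" without checking would hand back the sentinel and corrupt the list on removal:

    #define make_empty_list(s) do { (s)->next = (s); (s)->prev = (s); } while (0)
    #define is_empty_list(s)   ((s)->next == (s))
    #define last_elem(s)       ((s)->prev)

    struct node { struct node *prev, *next; };

    static struct node *pop_last(struct node *sentinel)
    {
       struct node *n;
       if (is_empty_list(sentinel))    /* the kind of check described above */
          return 0;
       n = last_elem(sentinel);
       n->prev->next = n->next;        /* unlink */
       n->next->prev = n->prev;
       return n;
    }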
---
And wrap to 80 columns.
---
The format is defined by GL_OES_compressed_ETC1_RGB8_texture.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
---
Fixes these GCC warnings.
u_vbuf.c: In function ‘u_vbuf_draw_begin’:
u_vbuf.c:839:20: warning: ‘max_index’ may be used uninitialized in this function [-Wuninitialized]
u_vbuf.c:838:20: warning: ‘min_index’ may be used uninitialized in this function [-Wuninitialized]
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
Complicates Gallium3D development and doesn't seem to have active users.
Signed-off-by: Kai Wasserbäch <[email protected]>
Signed-off-by: José Fonseca <[email protected]>
---
Not actively used.
Reviewed-by: Brian Paul <[email protected]>
---
XP kernel mode was the only subsystem lacking stdio FILEs.
Reviewed-by: Brian Paul <[email protected]>
---
This format is used for the ARB_texture_rgb10_a2ui extension.
Signed-off-by: Dave Airlie <[email protected]>
---
also don't mark them as 'user', because they will be uploaded through
the translate fallback anyway.
---
The motivation behind this is to add some self-documentation in the code
about how each CAP can be used. The idea is:
- enum pipe_cap is only valid in get_param
- enum pipe_capf is only valid in get_paramf

Which CAPs are floating-point has been determined based on how everybody
except svga implemented the functions; svga has been modified to match
all the other drivers.

Besides that, the floating-point CAPs are now prefixed with PIPE_CAPF_.
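A compressed sketch of the resulting split; both enums are heavily abridged here and the screen struct is a stand-in for the real interface:

    enum pipe_cap  { PIPE_CAP_TEXTURE_SWIZZLE };   /* integer/boolean queries */
    enum pipe_capf { PIPE_CAPF_MAX_LINE_WIDTH };   /* floating-point queries */

    struct screen {
       int   (*get_param)(struct screen *s, enum pipe_cap cap);
       float (*get_paramf)(struct screen *s, enum pipe_capf cap);
    };

    static float max_line_width(struct screen *s)
    {
       /* the PIPE_CAPF_ prefix makes it obvious this belongs to get_paramf */
       return s->get_paramf(s, PIPE_CAPF_MAX_LINE_WIDTH);
    }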
---
Only i965g does not enable GLSL, but that driver has been unmaintained and
bitrotting for quite a while anyway.
---
It's intended to indicate whether the driver/hardware supports reading
of the values written into shader outputs.
Signed-off-by: Vadim Girlin <[email protected]>
---
And update r300g.
This is different from util_draw_max_index in how it obtains vertex
elements, and in that it doesn't have to call util_format_description,
thanks to additional precomputed data in the vertex elements.
---
This forks vbo_get_minmax_index. We need to know the index range when
translating non-native vertices into native ones. There is no other way
around it.
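For illustration, a simplified cousin of such a scan (invented name, and handling only 16-bit indices for brevity): one pass over the index buffer tracking the smallest and largest index used.

    #include <stdint.h>

    static void get_minmax_index_ui16(const uint16_t *indices, unsigned count,
                                      unsigned *min_index, unsigned *max_index)
    {
       unsigned i, lo = ~0u, hi = 0;
       for (i = 0; i < count; i++) {
          if (indices[i] < lo)
             lo = indices[i];
          if (indices[i] > hi)
             hi = indices[i];
       }
       *min_index = lo;   /* note: for count == 0 this yields lo > hi */
       *max_index = hi;
    }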
---
It will use the index buffer soon.
---
Don't assert/die if a VBO is too small. Return zero instead. For
debug builds, emit a warning message since this is an unusual situation
that might indicate that there's a bug in the app.
Note that util_draw_max_index() now returns max_index+1 instead of
max_index. This lets us return zero to indicate that one of the VBOs
is too small to draw anything.
Fixes a failure with the new piglit vbo-too-small test.
Reviewed-by: José Fonseca <[email protected]>
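A sketch of how a caller might honor the new convention; check_draw is a made-up helper, not Mesa code:

    /* With the new convention, a return of 0 from util_draw_max_index()
     * means "some VBO is too small to draw anything", and any other
     * value is max_index + 1. */
    static int check_draw(unsigned max_index_plus_one,
                          unsigned start, unsigned count)
    {
       if (max_index_plus_one == 0)
          return 0;                                   /* skip the draw */
       return start + count <= max_index_plus_one;    /* indices in range? */
    }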
---
Also, actually update const_storage_size, thereby avoiding unnecessarily
reallocating aligned_constant_storage every single time
draw_vs_set_constants() is called.
Reviewed-by: Brian Paul <[email protected]>
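A sketch of the grow-only storage idea behind such a fix, with invented names: remember how much was allocated and only reallocate when the incoming constants no longer fit.

    #include <stdlib.h>
    #include <string.h>

    struct const_storage {
       void *data;
       unsigned size;                /* bytes currently allocated */
    };

    static int set_constants(struct const_storage *cs,
                             const void *constants, unsigned size)
    {
       if (size > cs->size) {
          void *p = realloc(cs->data, size);
          if (!p)
             return 0;
          cs->data = p;
          cs->size = size;           /* the "actually update the size" part */
       }
       memcpy(cs->data, constants, size);
       return 1;
    }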
---
ary_ge_arx_arz is already set earlier.
Reviewed-by: Brian Paul <[email protected]>
---
Necessary when building against LLVM 2.6 with recent gcc, as the LLVM
headers depend on ptrdiff_t but don't properly include stddef.h.
---
If the vbuf backend fails to allocate a vertex buffer, don't crash
or assert.
|