| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
| |
Something is not quite right, however. The piglit tests mentioned in
fd.o bug 31226 still don't pass.
|
|
|
|
|
|
|
|
|
|
| |
Trivial change that avoids a segmentation fault when the blitter state
happens to be bound when the context is destroyed.
The free calls should probably be removed altogether in the future -- the
responsibility for destroying the state atoms lies with whoever created
them, and the safest thing for the pipe driver is to not touch any bound
state in its destructor.
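A minimal sketch of that principle, using made-up names (my_context and
state_atom are illustrative, not the actual driver types):

    #include <stdlib.h>

    /* Hypothetical context holding one bound state atom. */
    struct state_atom { int id; };
    struct my_context {
        struct state_atom *bound_blend;   /* may be the blitter's atom  */
        struct state_atom *own_scratch;   /* allocated by this context  */
    };

    /* Destroy only what this context created itself; whoever created a
     * bound atom (e.g. the blitter) is responsible for destroying it. */
    static void my_context_destroy(struct my_context *ctx)
    {
        free(ctx->own_scratch);
        /* ctx->bound_blend is deliberately left alone here. */
        free(ctx);
    }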
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Discard fractional bits from linewidth. This matches the nvidia
closed drivers, my reading of the OpenGL SI and current llvmpipe
behaviour.
It looks a lot nicer and avoids the ugliness where lines alternate
between n and n+1 pixels in width along their length.
Also fix up r600g to match.
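A sketch of the snapping being described, assuming a float line width as
in Gallium's rasterizer state (the helper name is illustrative):

    #include <math.h>

    /* Drop the fractional bits so a 1.4-pixel line is drawn 1 pixel wide
     * all along its length instead of alternating between 1 and 2 pixels.
     * (A driver may additionally clamp to a minimum width of 1.0.) */
    static float snap_line_width(float width)
    {
        return floorf(width);
    }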
|
| |
|
|
|
|
|
| |
Should do better than this and actually unbind the buffer, but haven't
yet gotten it to work.
|
|
|
|
|
| |
native_display_buffer is just a wrapper around resource_{from,get}_handle
for the DRM backend.
|
|
|
|
|
|
| |
The interface is a wrapper around pipe_screen::resource_from_handle and
pipe_screen::resource_get_handle. A winsys handle is
platform-dependent.
|
|
|
|
| |
This allows a backend to be written in C++.
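Presumably this means wrapping the header declarations in extern "C"
guards so a C++ translation unit can include them; a generic sketch (the
header and function names below are made up):

    /* native_backend.h -- illustrative header name */
    #ifdef __cplusplus
    extern "C" {
    #endif

    struct native_backend;    /* opaque handle */
    struct native_backend *native_backend_create(void);
    void native_backend_destroy(struct native_backend *backend);

    #ifdef __cplusplus
    }    /* extern "C" */
    #endif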
|
|
|
|
|
|
|
| |
These were previously being left in the default (D3D) mode. This meant
that triangles were drawn slightly incorrectly, and, because the
u_blitter code relies on this state, all blits were also half a pixel
off.
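For reference, a hedged sketch of switching to GL conventions when
filling in the Gallium rasterizer state of this era; the
gl_rasterization_rules field is my recollection of the interface at the
time and may not match the exact code:

    #include <string.h>
    #include "pipe/p_state.h"   /* Gallium state structs */

    static void init_gl_raster_state(struct pipe_rasterizer_state *rast)
    {
        memset(rast, 0, sizeof(*rast));
        /* Use GL pixel-center conventions instead of the default (D3D)
         * ones; u_blitter relies on this, so leaving it unset shifts all
         * blits by half a pixel. */
        rast->gl_rasterization_rules = 1;
    }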
|
|
|
|
| |
These were being set but not used anywhere.
|
|
|
|
|
|
|
|
|
|
|
| |
Generalize the existing tiled_buffer path in texture transfers for use
in some non-tiled uploads and downloads.
Use a staging buffer, which the winsys will restrict to GTT memory.
GTT buffers have the major advantage that they are cacheable when
mapped, which is a very nice property for downloads: the CPU will
usually want to look at the data it downloaded.
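A rough sketch of the download path being described; bo_create_gtt,
blit_to_bo, bo_map and bo_unmap are hypothetical helpers standing in for
the real winsys/driver entry points:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical winsys helpers (declarations only, for illustration). */
    void *bo_create_gtt(size_t size);
    void  blit_to_bo(void *vram_tex, void *bo, size_t size);
    void *bo_map(void *bo);
    void  bo_unmap(void *bo);

    /* Download: the GPU copies (and detiles) into a GTT staging buffer,
     * which maps cacheably, so the CPU reads below are cheap. */
    static void download_texture(void *vram_tex, size_t size, void *out)
    {
        void *staging = bo_create_gtt(size);
        blit_to_bo(vram_tex, staging, size);

        void *ptr = bo_map(staging);
        memcpy(out, ptr, size);
        bo_unmap(staging);
    }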
|
|
|
|
|
|
|
|
|
| |
This opens the question of what interface the winsys layer should
really have for talking about these concepts.
For now I'm using the existing gallium resource usage concept, but
there is no reason not to use terms closer to what the hardware
understands -- e.g. the domains themselves.
|
| |
|
|
|
|
|
| |
Added for completeness. It makes sense to have such a mechanism, but I
am not aware of any user of it yet.
|
|
|
|
|
|
| |
The value of EGL_MAX_SWAP_INTERVAL and whether
EGL_SWAP_BEHAVIOR_PRESERVED_BIT is set will depend on the native
backend used.
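For context, this is how a client would query and request these through
standard EGL (plain EGL 1.4 API, not Mesa-internal code); whether the
preserved-swap request succeeds depends on the bits the backend
advertises:

    #include <EGL/egl.h>

    static void request_preserved_swaps(EGLDisplay dpy, EGLSurface surf)
    {
        EGLint behavior = 0;
        eglQuerySurface(dpy, surf, EGL_SWAP_BEHAVIOR, &behavior);

        /* Can only succeed if the config exposes
         * EGL_SWAP_BEHAVIOR_PRESERVED_BIT in EGL_SURFACE_TYPE. */
        if (behavior != EGL_BUFFER_PRESERVED)
            eglSurfaceAttrib(dpy, surf, EGL_SWAP_BEHAVIOR,
                             EGL_BUFFER_PRESERVED);

        /* The interval is clamped to the config's
         * [EGL_MIN_SWAP_INTERVAL, EGL_MAX_SWAP_INTERVAL] range. */
        eglSwapInterval(dpy, 1);
    }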
|
|
|
|
|
| |
They are deprecated by native_surface::present and there is no user of
them.
|
|
|
|
|
| |
Replace native_surface::flush_frontbuffer and
native_surface::swap_buffers calls by native_surface::present calls.
|
|
|
|
|
| |
Replace native_surface::flush_frontbuffer and
native_surface::swap_buffers calls by native_surface::present calls.
|
|
|
|
|
|
|
| |
The callback presents the given attachment to the native engine. It
allows the swap behavior and interval to be controlled. It will replace
native_surface::flush_frontbuffer and native_surface::swap_buffers
shortly.
|
| |
|
|
|
|
| |
Signed-off-by: Tilman Sauerbeck <[email protected]>
|
|
|
|
| |
Signed-off-by: Tilman Sauerbeck <[email protected]>
|
|
|
|
| |
Signed-off-by: Tilman Sauerbeck <[email protected]>
|
|
|
|
| |
Signed-off-by: Tilman Sauerbeck <[email protected]>
|
|
|
|
|
|
| |
That way assert(map_count >= 0) can actually fail when we screwed up.
Signed-off-by: Tilman Sauerbeck <[email protected]>
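Presumably this means making map_count a signed type; with an unsigned
counter the assertion is always true. A toy illustration (the struct
names are made up):

    #include <assert.h>

    struct bo_unsigned { unsigned map_count; };
    struct bo_signed   { int      map_count; };

    static void unmap_unsigned(struct bo_unsigned *bo)
    {
        --bo->map_count;             /* wraps around on a double unmap...  */
        assert(bo->map_count >= 0);  /* ...so this can never fire          */
    }

    static void unmap_signed(struct bo_signed *bo)
    {
        --bo->map_count;             /* goes negative on a double unmap... */
        assert(bo->map_count >= 0);  /* ...and this actually catches it    */
    }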
|
|
|
|
| |
Signed-off-by: Tilman Sauerbeck <[email protected]>
|
|
|
|
|
|
|
|
| |
This ensures that we increase bo->map_count when radeon_bo_map_internal()
returns successfully, which in turn makes sure we don't decrement
bo->map_count below zero later.
Signed-off-by: Tilman Sauerbeck <[email protected]>
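A hedged sketch of the pattern; radeon_bo_map_internal is the function
named in the message (its signature is simplified here), and everything
else -- the wrapper and the explicitly passed counter -- is illustrative:

    #include <stddef.h>

    struct radeon_bo;                                    /* opaque here  */
    void *radeon_bo_map_internal(struct radeon_bo *bo);  /* simplified   */

    /* Count the mapping only after the internal map succeeded, so the
     * matching unmap can never drive the counter below zero. */
    static void *map_counted(struct radeon_bo *bo, int *map_count)
    {
        void *ptr = radeon_bo_map_internal(bo);
        if (!ptr)
            return NULL;    /* failed map: leave the counter untouched */

        ++*map_count;
        return ptr;
    }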
|
|
|
|
| |
Signed-off-by: Tilman Sauerbeck <[email protected]>
|
| |
|
| |
|
|
|
|
|
|
| |
The call to draw_bind_fragment_shader() was using the old fragment
shader. This bug would really only have affected the draw module's
use of the fragment shader in the wide point stage.
|
|
|
|
|
|
|
|
|
|
|
| |
This becomes important as more constant buffers per shader start to get
used.
Fix up r600 (tested) and nv50 (untested) to cope with this. Drivers
previously saw unbinds of constant buffers rarely or never, so this
isn't always dealt with cleanly.
For r600 just return and keep the reference. Will try to do better in
a follow-up change.
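A minimal sketch of the situation drivers now have to handle: a NULL
buffer passed to the constant-buffer hook means "unbind". The names
below (drv_*) are illustrative, not the actual r600/nv50 code:

    #include <stddef.h>

    struct drv_resource { int refcount; };
    struct drv_context  { struct drv_resource *const_buf[16]; };

    /* Drop the old buffer, hold the new one (or none, when buf is NULL). */
    static void drv_set_constant_buffer(struct drv_context *ctx,
                                        unsigned index,
                                        struct drv_resource *buf)
    {
        struct drv_resource *old = ctx->const_buf[index];

        if (old && --old->refcount == 0) {
            /* last reference gone: the buffer would be destroyed here */
        }
        if (buf)
            buf->refcount++;

        ctx->const_buf[index] = buf;   /* NULL simply leaves the slot empty */
    }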
|
|
|
|
|
|
| |
This doesn't seem like it should be possible, but some test suites
manage to hit this case. Avoid crashing release builds under those
circumstances.
|
|
|
|
|
|
|
|
|
|
| |
Don't trim triangle bounding box to scissor/draw-region until after
the logic for emitting tri_16. Don't generate tri_16 commands for
triangles with untrimmed bounding boxes outside the current tile.
This is important as the tri_16 itself can extend past tile bounds, and
we don't want to add code to it to check against tile bounds (slow) or
restrict it to locations within a tile (pessimistic).
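A rough sketch of the ordering described above, with made-up names
(emit_tri_16, emit_general_tri and the box type are illustrative, not
llvmpipe's actual binner code):

    struct box { int x0, y0, x1, y1; };

    void emit_tri_16(const struct box *b);       /* illustrative */
    void emit_general_tri(const struct box *b);  /* illustrative */

    static int box_inside(const struct box *b, const struct box *outer)
    {
        return b->x0 >= outer->x0 && b->y0 >= outer->y0 &&
               b->x1 <= outer->x1 && b->y1 <= outer->y1;
    }

    static void bin_triangle(struct box bbox, const struct box *tile,
                             const struct box *scissor)
    {
        /* 1. Decide on tri_16 using the UNTRIMMED bbox: the 16x16 block
         *    never re-checks tile bounds, so it may only be emitted when
         *    the whole untrimmed box lies inside the current tile. */
        if (bbox.x1 - bbox.x0 <= 16 && bbox.y1 - bbox.y0 <= 16 &&
            box_inside(&bbox, tile)) {
            emit_tri_16(&bbox);
            return;
        }

        /* 2. Only now trim to the scissor/draw region for the general path. */
        if (bbox.x0 < scissor->x0) bbox.x0 = scissor->x0;
        if (bbox.y0 < scissor->y0) bbox.y0 = scissor->y0;
        if (bbox.x1 > scissor->x1) bbox.x1 = scissor->x1;
        if (bbox.y1 > scissor->y1) bbox.y1 = scissor->y1;
        emit_general_tri(&bbox);
    }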
|
|
|
|
| |
I thought I had singled it out before, but apparently not.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use the scons target and dependency system instead of ad-hoc options.
Now it is simply a matter of naming what to build. For example:
scons libgl-xlib
scons libgl-gdi
scons graw-progs
scons llvmpipe
and so on. It is also possible to specify subdirectories, e.g.
scons src/gallium/drivers
If nothing is specified then everything will be built.
There might be some rough corners over the next days. Please bear with me.
|
|
|
|
| |
[olv: formatted for 80-column wrapping]
|
|
|
|
|
| |
API_DEFINES holds the defines for libmesagallium.a. Append it to
egl_CPPFLAGS only when st_GL.so, which uses libmesagallium.a, is built.
|
|
|
|
|
|
| |
Fix
$ make CC="ccache gcc"
|