| Commit message | Author | Age | Files | Lines |
| |
Reviewed-by: Matt Turner <[email protected]>
| |
Reviewed-by: Matt Turner <[email protected]>
| |
total instructions in shared programs: 1627754 -> 1624534 (-0.20%)
instructions in affected programs: 45748 -> 42528 (-7.04%)
GAINED: 3
LOST: 0
(serious sam, humus domino demo)
Reviewed-by: Matt Turner <[email protected]>
| |
total instructions in shared programs: 1627826 -> 1627754 (-0.00%)
instructions in affected programs: 6640 -> 6568 (-1.08%)
GAINED: 0
LOST: 0
(HoN and savage2)
Reviewed-by: Matt Turner <[email protected]>
| |
Reviewed-by: Matt Turner <[email protected]>
| |
v2: Fix pasteo of an extra abs being inserted (caught by many). Rewrite
to drop the silly switch statement.
Reviewed-by: Matt Turner <[email protected]> (v1)
| |
We've had various bug reports over the years where miptrees are missing,
and when I screwed it up while adding DRI2 to the modesetting driver, I
figured I should put the info necessary for debug here.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
The intel_miptree_blit() code checks the format for us now, plus it
handles xrgb vs argb for us.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
This makes it work on Broadwell, too.
v2: Drop bogus double write to 3DPRIM_BASE_VERTEX register
(caught by Chris Forbes).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
| |
This saves some boilerplate and hides the OUT_RELOC/OUT_RELOC64
distinction.
Placing the function in intel_batchbuffer.c is rather arbitrary; there
wasn't really an obvious place for it.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
| |
Since commit 9cee3ff562f3e4b51bfd30338fd1ba7716ac5737, INTEL_DEBUG=vs
has caused a NULL pointer dereference for fixed-function/ARB programs.
In the vec4 generators, "prog" is a gl_program, and "shader_prog" is the
gl_shader_program. This is different from the FS visitor.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
| |
Mesa fails to retain the precision qualifier when parsing:
#version 300 es
centroid in mediump vec2 v;
Consider how the parser's type_qualifier production is applied.
First, the precision_qualifier rule creates a new ast_type_qualifier:
<precision: mediump>
Then the storage_qualifier rule creates a second one:
<flags: in>
and calls merge_qualifier() to fold in any previous qualifications,
returning:
<flags: in, precision: mediump>
Finally, the auxiliary_storage_qualifier creates one for "centroid":
<flags: centroid>
The type_qualifier rule then does $$ = $1 and $$.flags |= $2.flags, resulting in:
<flags: centroid, in>
Since precision isn't stored in the flags bitfield, it is lost. We need
to instead call merge_qualifier to combine all the fields.
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reported-by: Kevin Rogovin <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
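To illustrate the fix, the difference between OR-ing only the flag bits and
doing a full merge can be sketched as follows (the struct and functions below
are simplified stand-ins, not Mesa's actual ast_type_qualifier or
merge_qualifier):

    /* Simplified stand-in for ast_type_qualifier. */
    struct qualifier {
       unsigned flags;     /* bitfield: in, out, centroid, ... */
       int precision;      /* precision lives outside the bitfield */
    };

    /* Old combination for auxiliary qualifiers: only the bitfield
     * survives, so a precision carried in 'src' (e.g. mediump) is lost. */
    void combine_flags_only(struct qualifier *dst, const struct qualifier *src)
    {
       dst->flags |= src->flags;
    }

    /* merge_qualifier-style combination: every field is folded in. */
    void merge(struct qualifier *dst, const struct qualifier *src)
    {
       dst->flags |= src->flags;
       if (src->precision != 0 /* none */)
          dst->precision = src->precision;
    }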
| |
If st_GetTexImage() is to decompress the texture, avoid the fallback
path even if prefer_blit_based_texture_transfer = false. For drivers
that returned PIPE_CAP_PREFER_BLIT_BASED_TEXTURE_TRANSFER = 0, we
were always taking the fallback path for texture decompression rather
than rendering a quad. The latter is a lot faster.
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
| |
In the specification text of NV_vertex_program1_1, the upper
limit of the RCC instruction is written as 1.884467e+19 in
scientific notation, but as 0x5F800000 in binary. But the binary
version translates to 1.84467e+19 rather than 1.884467e+19 in
scientific notation.
Since the lower limit equals 2^-64 and the binary version equals
2^+64, let's assume the value in scientific notation is a typo
and implement this using the value from the binary version
instead.
Signed-off-by: Erik Faye-Lund <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
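The arithmetic is easy to verify with a standalone check (an illustration
added here, not part of the patch): decoding 0x5F800000 as an IEEE-754 single
gives an exponent field of 0xBF (191) and a zero mantissa, i.e.
2^(191-127) = 2^64.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int main(void)
    {
       uint32_t bits = 0x5f800000;      /* upper clamp, binary form from the spec */
       float f;
       memcpy(&f, &bits, sizeof f);     /* reinterpret the bit pattern as a float */

       printf("0x5f800000 = %.6e\n", f);                 /* 1.844674e+19 */
       printf("2^64       = %.6e\n", ldexp(1.0, 64));    /* matches the line above */
       printf("2^-64      = %.6e\n", ldexp(1.0, -64));   /* the documented lower limit */
       return 0;
    }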
| |
Signed-off-by: Erik Faye-Lund <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
_eglInitResource() was used to memset the entire _EGLSurface by
writing more than the size of the pointed-to target. This only works
as long as Resource is the first member of _EGLSurface; this patch
removes that dependency.
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
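The removed dependency is easier to see in a stripped-down sketch; the
structures below are simplified stand-ins, not the real
_EGLResource/_EGLSurface definitions:

    #include <string.h>

    struct _EGLResource { void *Display; int RefCount; };

    struct _EGLSurface {
       struct _EGLResource Resource;  /* must stay first for the old code to be safe */
       int Width, Height;
    };

    /* Old pattern: zero sizeof(_EGLSurface) bytes through the Resource
     * pointer.  It only stays in bounds because Resource happens to be
     * the first member of _EGLSurface. */
    void init_old(struct _EGLSurface *surf)
    {
       memset(&surf->Resource, 0, sizeof(*surf));
    }

    /* Fixed pattern: zero the whole surface via the surface pointer, then
     * initialize just the embedded Resource with its own size (standing in
     * for what _eglInitResource() does). */
    void init_fixed(struct _EGLSurface *surf)
    {
       memset(surf, 0, sizeof(*surf));
       memset(&surf->Resource, 0, sizeof(surf->Resource));
    }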
| |
_eglInitResource() was used to memset the entire _EGLContext by
writing more than the size of the pointed-to target. This only works
as long as Resource is the first member of _EGLContext; this patch
removes that dependency.
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
If a driver enables ARB_gpu_shader5 and sets Const.MaxVertexStreams >= 4,
then piglit's arb_gpu_shader5-minmax test should now pass.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Christoph Bumiller <[email protected]>
| |
The pre-fetching doesn't go too far. Tested with over-allocating by only
a page, and didn't see any errors in dmesg. Saves ~512KB of VRAM.
Signed-off-by: Ilia Mirkin <[email protected]>
Cc: 10.1 <[email protected]>
Reviewed-by: Christoph Bumiller <[email protected]>
| |
In the tests they were the same so it didn't matter, but indications are
that this is the correct behaviour. Also take this opportunity to
(trivially) support using gl_Layer in fp.
Cc: 10.1 <[email protected]>
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Christoph Bumiller <[email protected]>
| |
Functionally identical but much simpler. Should also better integrate
with future layer/viewport changes/fixes.
Cc: 10.1 <[email protected]>
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Christoph Bumiller <[email protected]>
| |
GLX_ARB_create_context allows making a GLX context current with a None
drawable and readable, but this was never implemented correctly in GLX.
We would create a __DRIdrawable for the None GLX drawable and pass that
to the DRI driver, and that somehow used to work. Now it's somehow broken.
The way this should have worked is that we pass a NULL DRI drawable
to the DRI driver when the GLX user calls glXMakeContextCurrent()
with None for the drawable and readable.
https://bugs.freedesktop.org/show_bug.cgi?id=74143
Signed-off-by: Kristian Høgsberg <[email protected]>
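For reference, the application-side pattern this enables looks roughly like
the sketch below (error handling and extension checks omitted; the helper
name is made up):

    #include <GL/glx.h>
    #include <GL/glxext.h>

    /* Create a context and bind it with no draw/read drawable, as
     * GLX_ARB_create_context permits.  With the fix, libGL hands the DRI
     * driver NULL drawables instead of a dummy __DRIdrawable. */
    GLXContext make_current_without_drawable(Display *dpy, GLXFBConfig cfg)
    {
       static const int attribs[] = {
          GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
          GLX_CONTEXT_MINOR_VERSION_ARB, 0,
          None
       };
       PFNGLXCREATECONTEXTATTRIBSARBPROC create_context =
          (PFNGLXCREATECONTEXTATTRIBSARBPROC)
             glXGetProcAddress((const GLubyte *) "glXCreateContextAttribsARB");

       GLXContext ctx = create_context(dpy, cfg, NULL, True, attribs);

       /* None for both drawable and readable; rendering then targets FBOs. */
       glXMakeContextCurrent(dpy, None, None, ctx);
       return ctx;
    }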
| |
Flush the context when we unmap a buffer; otherwise VDPAU might
start rendering the next frame while we still reference that buffer.
Signed-off-by: Christian König <[email protected]>
Tested-by: StrangeNoises ([email protected])
| |
Bugzilla (bug needs XBMC changes as well):
https://bugs.freedesktop.org/show_bug.cgi?id=73191
When VL uploads vertex buffers, it uses PIPE_TRANSFER_DONTBLOCK, which always
flushes the context in the winsys if the buffer being mapped is busy. Since
I added handling of DISCARD_RANGE, DONTBLOCK has had no effect when combined
with DISCARD_RANGE and I think the context isn't flushed anywhere else,
so no commands are submitted to the GPU until the IB is full, which takes
a lot of frames.
Using DISCARD_RANGE is not the only way to trigger this bug. The other way
is to reallocate the vertex buffer before every upload.
BTW, I'm not sure if this is the right place for flushing, but it does fix
the bug.
v2 (chk): move the flush to the right place.
Signed-off-by: Christian König <[email protected]>
Tested-by: StrangeNoises ([email protected])
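A minimal sketch of the idea, assuming the usual gallium
pipe_context::flush(pipe, fence, flags) hook (this is not the actual vl
code; the function name is made up):

    #include "pipe/p_context.h"

    /* Once the draws for a decoded frame have been queued, submit the
     * accumulated commands instead of waiting for the IB to fill up. */
    void end_frame_and_kick(struct pipe_context *pipe)
    {
       /* ...vertex uploads and draw calls for this frame happen earlier... */
       pipe->flush(pipe, NULL, 0);
    }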
| |
Missed in commit e63bb298. Caused sporadic test failures, like
incorrect-in-layout-qualifier-repeated-prim.geom.
Cc: "10.0" <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
| |
Reviewed-by: Brian Paul <[email protected]>
| |
STATIC will be removed in the following commit.
v2: changed the definition of IMMUTABLE
Reviewed-by: Brian Paul <[email protected]>
| |
Unused.
Reviewed-by: Brian Paul <[email protected]>
| |
v2: This doesn't change the behavior. It only moves the tiling check
to r600_init_resource and removes the usage parameter.
Reviewed-by: Christian König <[email protected]>
| |
This binds a NULL sampler view in that case.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74251
Cc: "10.1" "10.0" <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Not blocking on the message thread can lead to accessing freed memory.
Signed-off-by: Christian König <[email protected]>
| |
Featuring a full-grown MPEG2 and H264 decoder and a couple of hundred bugs.
v2 (Leo): fix an error for pic_order_cnt_type 1
v3 (Leo): implement support for field decoding
Signed-off-by: Christian König <[email protected]>
Signed-off-by: Leo Liu <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
| |
Avoid moving things around at the start of a stream.
Signed-off-by: Christian König <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
| |
dri2_dup_image was not copying the dri_format field.
This was causing some bugs, for example:
. we create a gbm_bo.
. we get an EGLImage from the gbm_bo.
. Bug: it is then impossible to get the gbm_bo back from the EGLImage
  by importing it. (gbm dri2 backend)
Signed-off-by: Axel Davy <[email protected]>
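The round trip in question looks roughly like this sketch (it assumes Mesa's
gbm platform accepts a gbm_bo as the native pixmap for eglCreateImageKHR, as
it did at the time; error handling is omitted and the helper name is made up):

    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* Create a gbm_bo, wrap it in an EGLImage, then import that EGLImage
     * back into a gbm_bo.  Without dri_format being copied by
     * dri2_dup_image, the final import fails in the gbm dri2 backend. */
    struct gbm_bo *round_trip(struct gbm_device *gbm, EGLDisplay dpy)
    {
       struct gbm_bo *bo = gbm_bo_create(gbm, 64, 64, GBM_FORMAT_ARGB8888,
                                         GBM_BO_USE_RENDERING);

       PFNEGLCREATEIMAGEKHRPROC create_image =
          (PFNEGLCREATEIMAGEKHRPROC) eglGetProcAddress("eglCreateImageKHR");
       EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                        EGL_NATIVE_PIXMAP_KHR,
                                        (EGLClientBuffer) bo, NULL);

       /* This is the import that used to fail. */
       return gbm_bo_import(gbm, GBM_BO_IMPORT_EGL_IMAGE, image,
                            GBM_BO_USE_RENDERING);
    }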