* va: call texture_get_handle while the mutex is being held (Marek Olšák, 2017-01-04, 1 file, -2/+5)
  The context may be used by texture_get_handle.
  Reviewed-by: Christian König <[email protected]>
  Cc: 13.0 <[email protected]>
* vdpau: call texture_get_handle while the mutex is being held (Marek Olšák, 2017-01-04, 2 files, -6/+13)
  The context may be used by texture_get_handle.
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99158
  Reviewed-by: Christian König <[email protected]>
  Cc: 13.0 <[email protected]>
* radeonsi: capitalize VM hex addr when dumping buffer list (Samuel Pitoiset, 2017-01-04, 1 file, -1/+1)
  Useful when debugging with R600_DEBUG=vm,check_vm to match the addr in both
  outputs.
  Signed-off-by: Samuel Pitoiset <[email protected]>
  Reviewed-by: Marek Olšák <[email protected]>
* i965: remove unused brwInitVtbl declaration (Tapani Pälli, 2017-01-04, 1 file, -5/+0)
  The function was removed by commit b3360d23ac1db61390b2ac8963756c6133ba6e23.
  Signed-off-by: Tapani Pälli <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>
* i965: remove brw_context dependency from intel_batchbuffer_init() (Iago Toral Quiroga, 2017-01-04, 3 files, -28/+36)
  Reviewed-by: Kenneth Graunke <[email protected]>
* i965: make intel_batchbuffer_free() take a batchbuffer as argument (Iago Toral Quiroga, 2017-01-04, 3 files, -6/+6)
  Reviewed-by: Kenneth Graunke <[email protected]>
* i965: make intel_batchbuffer_emit_dword() take a batchbuffer as argument (Iago Toral Quiroga, 2017-01-04, 2 files, -12/+12)
  Reviewed-by: Kenneth Graunke <[email protected]>
* i965: make intel_batchbuffer_reloc() take a batchbuffer argument (Iago Toral Quiroga, 2017-01-04, 3 files, -15/+15)
  Reviewed-by: Kenneth Graunke <[email protected]>
* nir: fix loop iteration count calculation for floats (Timothy Arceri, 2017-01-04, 1 file, -2/+2)
  Fixes a performance regression in SynMark PSPom caused by loops with float
  counters not always being unrolled. For example:

    for (float i = 0.02; i < 0.9; i += 0.11)
      ...

  Reviewed-by: Jason Ekstrand <[email protected]>
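  For illustration only, a minimal sketch of how the trip count of such a
  float-counter loop can be computed (a hypothetical helper, not the actual
  NIR unrolling code):

    #include <math.h>

    /* Iterations of "for (float i = start; i < limit; i += step)" with a
     * positive step.  The division has to be done in floating point; for
     * the example above, (0.9 - 0.02) / 0.11 = 8, so the body runs 8 times
     * and the loop can be fully unrolled. */
    static unsigned
    float_loop_trip_count(float start, float limit, float step)
    {
       if (step <= 0.0f || start >= limit)
          return 0;
       return (unsigned) ceilf((limit - start) / step);
    }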
* gallium/hud: add a path separator between dump directory and filename (Edmondo Tommasina, 2017-01-03, 1 file, -1/+2)
  It's more user friendly and it avoids writing files to unexpected places.
  Signed-off-by: Marek Olšák <[email protected]>
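  A trivial sketch of the idea (function and variable names are illustrative,
  not the actual HUD code):

    #include <stdio.h>

    /* Join "<dir>/<file>" so a dump directory of "/tmp/hud" and a counter
     * file "fps" become "/tmp/hud/fps" instead of "/tmp/hudfps". */
    static void
    build_dump_path(char *dst, size_t size, const char *dir, const char *file)
    {
       snprintf(dst, size, "%s/%s", dir, file);
    }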
* r600/sb: Fix loop optimization related hangs on eg (Heiko Przybyl, 2017-01-03, 6 files, -30/+68)
  Make sure unused ops and their references are removed prior to entering the
  GCM (global code motion) pass, to stop GCM from breaking the loop logic and
  thus hanging the GPU.

  It turns out that sb has problems with loops and node optimizations
  regarding associative folding:
  - the global code motion (GCM) pass moves ops up a loop level/basic block
    until they've fulfilled their total usage count
  - if there are ops folded into others, the usage count won't be fulfilled
    and thus the op is moved way up to the top
  - within GCM the op would be visited and its deps would be moved alongside
    it, to fulfill the src constraints
  - in a loop, an unused op is moved out of the loop and GCM would move the
    src value ops up as well
  - here the problem arises: if the loop counter is one of the src values, it
    gets moved up as well, the loop break condition is never hit and the
    shader turns into an endless loop, resulting in the GPU hanging and being
    reset

  A reduced (albeit nonsense) piglit example would be:

    [require]
    GLSL >= 1.20

    [fragment shader]
    uniform int SIZE;
    uniform vec4 lights[512];

    void main()
    {
        float x = 0;
        for (int i = 0; i < SIZE; i++)
            x += lights[2*i+1].x;
    }

    [test]
    uniform int SIZE 1
    draw rect -1 -1 2 2

  Which gets optimized to:

    ===== SHADER #12 OPT ================================== PS/BARTS/EVERGREEN =====
    ===== 42 dw ===== 1 gprs ===== 2 stack =========================================
    ALU 3 @24
    1   y: MOV            R0.y, 0
        t: MULLO_UINT     R0.w, [0x00000002 2.8026e-45].x, R0.z
    LOOP_START_DX10 @22 PUSH @6
    ALU 1 @30 KC0[CB0:0-15]
    2 M x: PRED_SETGE_INT __.x, R0.z, KC0[0].x
    JUMP @14 POP:1
    LOOP_BREAK @20 POP @14 POP:1
    ALU 2 @32
    3   x: ADD_INT        R0.x, R0.w, [0x00000002 2.8026e-45].x
    TEX 1 @36
        VFETCH            R0.x___, R0.x, RID:0 MFC:16 UCF:0 FMT[..]
    ALU 1 @40
    4   y: ADD            R0.y, R0.y, R0.x
    LOOP_END @4
    EXPORT_DONE PIXEL 0 R0.____ EOP
    ===== SHADER_END ===============================================================

  Notice that R0.z, the register holding the loop counter/break condition, is
  never incremented at all. Also some of the loop content has been moved out
  of it, to fulfill the requirements for the one unused op.

  With a debug build of mesa this would produce an error like

    error at : PRED_SETGE_INT __, __, EM.2, R1.x.2||[email protected], C0.x :
    operand value R1.x.2||[email protected] was not previously written to its gpr

  and the compilation would fail because of it. On a release build it gets
  passed to the GPU.

  With this patch, the loop remains intact:

    ===== SHADER #12 OPT ================================== PS/BARTS/EVERGREEN =====
    ===== 48 dw ===== 1 gprs ===== 2 stack =========================================
    ALU 2 @24
    1   y: MOV            R0.y, 0
        z: MOV            R0.z, 0
    LOOP_START_DX10 @22 PUSH @6
    ALU 1 @28 KC0[CB0:0-15]
    2 M x: PRED_SETGE_INT __.x, R0.z, KC0[0].x
    JUMP @14 POP:1
    LOOP_BREAK @20 POP @14 POP:1
    ALU 4 @30
    3   t: MULLO_UINT     T0.x, [0x00000002 2.8026e-45].x, R0.z
    4   x: ADD_INT        R0.x, T0.x, [0x00000002 2.8026e-45].x
    TEX 1 @40
        VFETCH            R0.x___, R0.x, RID:0 MFC:16 UCF:0 FMT[..]
    ALU 2 @44
    5   y: ADD            R0.y, R0.y, R0.x
        z: ADD_INT        R0.z, R0.z, 1
    LOOP_END @4
    EXPORT_DONE PIXEL 0 R0.____ EOP
    ===== SHADER_END ===============================================================

  Piglit: ./piglit summary console -d results/*_gpu_noglx

    name:         unpatched_gpu_noglx   patched_gpu_noglx
    ----          -------------------   -----------------
    pass:         18016                 18021
    fail:         748                   743
    crash:        7                     7
    skip:         1124                  1124
    timeout:      0                     0
    warn:         13                    13
    incomplete:   0                     0
    dmesg-warn:   0                     0
    dmesg-fail:   0                     0
    changes:      0                     5
    fixes:        0                     5
    regressions:  0                     0
    total:        19908                 19908

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94900
  Tested-by: Heiko Przybyl <[email protected]>
  Tested-on: Barts PRO HD6850
  Signed-off-by: Heiko Przybyl <[email protected]>
  Signed-off-by: Marek Olšák <[email protected]>
* editorconfig: Fix up the tab rendering width (Eric Anholt, 2017-01-03, 1 file, -0/+1)
  Our editorconfig file looked sensible, saying that we wanted to indent with
  spaces and use 3/4/whatever space indentation. However, the spec has this
  little surprise:

    "tab_width: a whole number defining the number of columns used to
    represent a tab character. This defaults to the value of indent_size and
    doesn't usually need to be specified."

  so once my editor started respecting editorconfig, the files that have tabs
  left in them started getting rendered wrong, showing up like this in
  brw_program.c:

    case GL_COMPUTE_PROGRAM_NV: {
       struct brw_program *prog = rzalloc(NULL, struct brw_program);
       if (prog) {
          prog->id = get_new_program_id(brw->screen);

          return _mesa_init_gl_program(&prog->program, target, id);
       } else
          return NULL;
    }

  Reviewed-by: Ilia Mirkin <[email protected]>
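  The one-line fix presumably pins the tab rendering width explicitly;
  something along these lines (the values shown here are illustrative, not
  necessarily Mesa's exact .editorconfig contents):

    # Render leftover tabs at the traditional 8 columns even though new
    # code is indented with spaces.
    [*.{c,h,cpp,hpp}]
    indent_style = space
    indent_size = 3
    tab_width = 8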
* meta: Disable dithering during glGenerateMipmap (Chad Versace, 2017-01-03, 1 file, -0/+1)
  Fixes tests 'dEQP-GLES3.functional.texture.mipmap.*.generate.rgba5551*' on
  Intel Broadwell 0x1616.

  The GL 4.5 spec describes the algorithm of glGenerateMipmap as:

    The contents of the derived images are computed by repeated, filtered
    reduction of the level base image. [...] No particular filter algorithm
    is required, though a box filter is recommended as the default filter.

  Consider a texture for which all pixels are identical at level 0. From the
  spec's description above, one may reasonably assume that the "filtered
  reduction" of level 0 produces a new miplevel for which again all pixels
  are identical. For any 2x2 subspan of identical pixels, it is difficult to
  see how the "filtered reduction" of that subspan can produce a pixel that
  differs from the source pixels. Dithering during _mesa_meta_GenerateMipmap()
  violated that reasonable assumption.

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99210
  Reviewed-by: Kenneth Graunke <[email protected]>
  Cc: [email protected]
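  At the GL API level the change amounts to turning dithering off around the
  mipmap generation and restoring it afterwards; a rough sketch of the idea
  (the meta implementation itself uses Mesa's internal state helpers rather
  than the public entry points):

    /* Assumes a GL 3.0+ context so glGenerateMipmap is available. */
    static void
    generate_mipmap_without_dither(GLenum target)
    {
       const GLboolean dither = glIsEnabled(GL_DITHER);

       /* With dithering off, a level of identical pixels reduces to
        * another level of identical pixels. */
       glDisable(GL_DITHER);
       glGenerateMipmap(target);
       if (dither)
          glEnable(GL_DITHER);
    }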
* doc/features.txt: update for freedreno (Romain Failliot, 2017-01-03, 1 file, -19/+19)
  I lost track of who created the initial patch (Ilia?). Romain rebased it.
  I pushed it.
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95460
  Signed-off-by: Rob Clark <[email protected]>
* i965: Remove perf monitor/query backend (Robert Bragg, 2017-01-03, 6 files, -1597/+1)
  In its current state the unified i965 backend for AMD_performance_monitor
  and INTEL_performance_query isn't able to report meaningful Observation
  Architecture metrics since we haven't so far had the necessary kernel
  support to fully configure the OA unit, nor the corresponding support for
  normalizing the counters into a form that can be usefully interpreted by
  application developers (as opposed to raw values that may, for example,
  scale by the number of EUs there are).

  So that we can focus on implementing just one of these extensions fully
  and since we anticipate some significant backend changes as we look to use
  a new kernel interface to configure the OA unit, this patch removes the
  current backend. This will simplify our ability to update the frontend
  infrastructure and backend interface before updating our support for
  performance counters.

  Signed-off-by: Robert Bragg <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>
* vl/zscan: fix "Fix trivial sign compare warnings" (Christian König, 2017-01-03, 1 file, -1/+1)
  The variable actually needs to be signed, otherwise converting it to a
  float doesn't work as expected.
  Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=98914
  Signed-off-by: Christian König <[email protected]>
  Reviewed-by: Nayan Deshmukh <[email protected]>
  Cc: "13.0" <[email protected]>
  Fixes: 1fb4179f927 ("vl: Fix trivial sign compare warnings")
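  The underlying pitfall is the usual one: with an unsigned variable, an
  intermediate result that should be negative wraps around before the float
  conversion. A standalone illustration (not the actual vl/zscan code):

    #include <stdio.h>

    int main(void)
    {
       unsigned u = 0;
       int s = 0;

       /* (u - 1) wraps to UINT_MAX before the conversion, (s - 1) does not. */
       printf("%f\n", (float)(u - 1));   /* roughly 4294967296.000000 */
       printf("%f\n", (float)(s - 1));   /* -1.000000 */
       return 0;
    }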
* st/va: error handling (Nayan Deshmukh, 2017-01-03, 1 file, -3/+15)
  Handle the cases when vl_compositor_set_csc_matrix(),
  vl_compositor_init_state() and vl_compositor_init() fail.
  Signed-off-by: Nayan Deshmukh <[email protected]>
  Reviewed-by: Christian König <[email protected]>
* st/vdpau: error handling (Nayan Deshmukh, 2017-01-03, 3 files, -15/+50)
  Handle the cases when vl_compositor_set_csc_matrix(),
  vl_compositor_init_state() and vl_compositor_init() fail.
  Signed-off-by: Nayan Deshmukh <[email protected]>
  Reviewed-by: Christian König <[email protected]>
* vl/compositor: implement error handling (Nayan Deshmukh, 2017-01-03, 2 files, -3/+12)
  pipe_buffer_map and pipe_buffer_create may return NULL.
  Signed-off-by: Nayan Deshmukh <[email protected]>
  Reviewed-by: Christian König <[email protected]>
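  The pattern shared by these three patches is simply to check each
  allocation or map and propagate the failure to the caller instead of
  dereferencing NULL later. A sketch of that shape (signatures abbreviated;
  the exact Gallium prototypes, flags and error codes may differ):

    /* Hypothetical helper: create a vertex buffer and report failure so the
     * st/va or st/vdpau caller can return an error status instead of
     * crashing on a NULL resource later on. */
    static bool
    create_vertex_buffer(struct pipe_context *pipe, struct pipe_resource **buf)
    {
       *buf = pipe_buffer_create(pipe->screen, PIPE_BIND_VERTEX_BUFFER,
                                 PIPE_USAGE_STREAM, 4096);
       return *buf != NULL;
    }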
* i965/vec4: enable ARB_gpu_shader_fp64 for Haswell (Iago Toral Quiroga, 2017-01-03, 1 file, -0/+3)
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: adjust spilling costs for 64-bit registers (Iago Toral Quiroga, 2017-01-03, 1 file, -2/+13)
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: prevent spilling of DOUBLE_TO_SINGLE destination (Iago Toral Quiroga, 2017-01-03, 1 file, -0/+12)
  FROM_DOUBLE opcodes are set up so that they use a dst register with a size
  of 2 even if they only produce a single-precision result (this is so that
  the opcode can use the larger register to produce a 64-bit aligned
  intermediary result, as required by the hardware during the conversion
  process). This creates a problem for spilling though, because when we
  attempt to emit a spill for the dst we see a 32-bit destination and emit a
  scratch write that allocates a single spill register, making the
  intermediary writes go beyond the size of the allocation. Prevent this by
  not spilling the destination register of these opcodes.

  Alternatively, we could avoid this by splitting the opcode in two: one that
  produces a 64-bit aligned result and one that takes the 64-bit aligned
  result as input and produces a 32-bit result from it.

  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: avoid spilling of registers that mix 32-bit and 64-bit access (Iago Toral Quiroga, 2017-01-03, 1 file, -0/+24)
  When 64-bit registers are (un)spilled, we need to execute data shuffling
  code before writing to or after reading from memory. If we have
  instructions that operate on 64-bit data via 32-bit instructions,
  (un)spills for the register produced by 32-bit instructions will not do
  data shuffling at all (because we only see a normal 32-bit instruction
  seemingly operating on 32-bit data). This means that subsequent reads of
  that register using DF access will unshuffle data read from memory that
  was never adequately shuffled when it was written.

  Fixing this would require identifying which 32-bit instructions write
  64-bit data and emitting spill instructions only when the full 64-bit data
  has been written (by multiple 32-bit instructions writing to different
  offsets of the same register), and always emitting 64-bit unspills whenever
  64-bit data is read, even when the instruction uses a 32-bit type to read
  from them.

  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: support basic spilling of 64-bit registers (Iago Toral Quiroga, 2017-01-03, 1 file, -6/+28)
  The current spilling code can't spill vgrf allocations larger than 1, but
  SIMD4x2 doubles require 2 vgrfs, so we need to permit this case (which is
  handled properly for DF data types by emitting 2 scratch messages and doing
  data shuffling). We accomplish this by not auto-disabling spilling for vgrf
  allocations with a size of 2, and then disabling spilling on any register
  with an offset != 0B (which indicates array access).

  Disable spilling of partial DF reads/writes because these don't read/write
  data for both logical threads, and our scratch messages for 64-bit data
  need data for both threads to be present.

  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: run scalarize_df() after spilling (Iago Toral Quiroga, 2017-01-03, 1 file, -0/+18)
  Spilling of 64-bit data requires data shuffling for the corresponding
  scratch read/write messages. This produces unsupported swizzle regions and
  writemasks that we need to scalarize.
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: prevent src/dst hazards during 64-bit register allocation (Iago Toral Quiroga, 2017-01-03, 1 file, -1/+7)
  8-wide compressed DF operations are executed as two separate 4-wide DF
  operations. In that scenario, we have to be careful when we allocate
  register space for their operands to prevent the case where the first half
  of the instruction overwrites the source of the second half. To do this we
  mark compressed instructions as having hazards, to make sure that the
  register allocator assigns a register region for the destination that does
  not overlap with the region assigned to any of its source operands.
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/scalarize_df: support more swizzles via vstride=0 (Iago Toral Quiroga, 2017-01-03, 3 files, -21/+51)
  By exploiting gen7's hardware decompression bug with vstride=0 we gain the
  capacity to support additional swizzle combinations.

  This also fixes ZW writes from X/Y channels like in:

    mov r2.z:df r0.xxxx:df

  Because DF regions use 2-wide rows with a vstride of 2, the region
  generated for the source would be r0<2,2,1>.xyxy:DF, which is equivalent
  to r0.xxzz, so we end up writing r0.z in r2.z instead of r0.x. Using a
  vertical stride of 0 in these cases we get to replicate the XX swizzle and
  write what we want.

  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/scalarize_df: do not scalarize swizzles that we can support natively (Iago Toral Quiroga, 2017-01-03, 3 files, -25/+112)
  Certain swizzles like XYZW can be supported by translating only the first
  two 64-bit swizzle channels to 32-bit channels. This happens with swizzles
  such that the first two logical components, when translated to 32-bit
  channels and replicated across the second dvec2 row, select the same
  channels specified by the 3rd and 4th logical swizzle components.

  Notice that this opens up the possibility that some instructions are not
  scalarized and can end up with XY or ZW 32-bit writemasks. Make sure we
  always scalarize in such cases.

  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: split instructions that read 64-bit interleaved attributes (Iago Toral Quiroga, 2017-01-03, 1 file, -2/+26)
  Stages that use interleaved attributes generate regions with a vstride=0
  that can hit the gen7 hardware decompression bug.
  v2:
  - Make the function static and fix indent (Matt)
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: dump subnr for FIXED_GRF (Iago Toral Quiroga, 2017-01-03, 1 file, -1/+1)
  This came in handy when debugging the payload setup for Tess Eval, since it
  prints the correct subnr for attributes that can be loaded in the second
  half of a register.
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/tes: consider register offsets during attribute setup (Iago Toral Quiroga, 2017-01-03, 1 file, -2/+2)
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/tes: fix setup_payload() for 64bit data types (Iago Toral Quiroga, 2017-01-03, 1 file, -1/+20)
  Use a width of 2 with 64-bit attributes. Also, if we have a dvec3/4
  attribute that gets split across two registers such that components XY are
  stored in the second half of a register and components ZW are stored in
  the first half of the next, we need to fix regioning for any instruction
  that reads components Z/W of the attribute. Notice this also means that we
  can't support sources that read cross-dvec2 swizzles (like XZ for example).

  v2: don't assert that we have a single channel swizzle in the case that we
  have to fix up Z/W access on the first half of the next register. We can
  handle any swizzle that does not cross dvec2 boundaries, which the double
  scalarization pass should have prevented anyway.

  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/tes: fix input loading for 64bit data types (Iago Toral Quiroga, 2017-01-03, 1 file, -17/+55)
  v2: use byte_offset() instead of offset()
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/tcs: fix outputs for 64-bit data (Iago Toral Quiroga, 2017-01-03, 1 file, -2/+29)
  v2: use byte_offset() instead of offset()
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/tcs: fix input loading for 64-bit data (Iago Toral Quiroga, 2017-01-03, 1 file, -4/+30)
  v2: use byte_offset() instead of offset()
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4/gs: fix input loading for 64bit data (Samuel Iglesias Gonsálvez, 2017-01-03, 1 file, -17/+34)
  v2 (Iago):
  - Adapt 64-bit path to component packing changes.
  Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
  Signed-off-by: Iago Toral Quiroga <[email protected]>
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix store output for 64-bit types (Iago Toral Quiroga, 2017-01-03, 1 file, -2/+25)
  We need to shuffle the data before it is written to the URB. Also, dvec3/4
  need two vec4 slots.
  v2: use byte_offset() instead of offset().
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix attribute setup for doubles (Iago Toral Quiroga, 2017-01-03, 1 file, -7/+14)
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix indentation in lower_attributes_to_hw_regs() (Iago Toral Quiroga, 2017-01-03, 1 file, -8/+8)
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: make emit_pull_constant_load support 64-bit loads (Iago Toral Quiroga, 2017-01-03, 2 files, -55/+50)
  This way callers don't need to know about 64-bit particularities and we
  reuse some code.
  v2:
  - use byte_offset() instead of offset()
  - only mark the surface as used once
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix move_push_constants_to_pull_constants() for 64-bit data (Iago Toral Quiroga, 2017-01-03, 1 file, -4/+19)
  v2: adapt to changes in offset()
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix indentation in move_push_constants_to_pull_constants() (Iago Toral Quiroga, 2017-01-03, 1 file, -30/+30)
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix move_uniform_array_access_to_pull_constant() for 64-bit data (Iago Toral Quiroga, 2017-01-03, 1 file, -2/+18)
  v2: adapt to changes in offset()
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix scratch writes for 64bit data (Iago Toral Quiroga, 2017-01-03, 1 file, -9/+55)
  Mostly the same stuff as usual: we need to shuffle the data before we write
  and we need to emit two 32-bit write messages (with appropriate 32-bit
  writemask channels set) for a full dvec4 scratch write.
  v2: use byte_offset() instead of offset().
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix scratch reads for 64bit data (Iago Toral Quiroga, 2017-01-03, 1 file, -2/+14)
  v2: Set up for a 64-bit scratch read by checking the type size of the
  correct register
  v3: Use byte_offset() instead of offset()
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: fix scratch offset for 64bit data (Iago Toral Quiroga, 2017-01-03, 1 file, -6/+16)
  A vec4 is 16 bytes and a dvec4 is 32 bytes, so for doubles we have to
  multiply the reladdr by 2. The reg_offset part is in units of 16 bytes and
  is used to select the low/high 16-byte chunk of a full dvec4, so we don't
  want to multiply that part of the address.
  Reviewed-by: Matt Turner <[email protected]>
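  Spelled out as arithmetic: scratch is addressed in 16-byte vec4 slots, a
  dvec4 takes two of them, and only the indirect (reladdr) part of the
  address scales with the element size. An illustrative helper, not the
  actual vec4 backend code:

    /* reladdr_index counts whole variables, so each step is two 16-byte
     * slots for 8-byte components and one slot otherwise; reg_offset
     * already selects the low/high half and must not be scaled. */
    static unsigned
    scratch_slot(unsigned reladdr_index, unsigned reg_offset,
                 unsigned type_size)
    {
       const unsigned slots_per_var = (type_size == 8) ? 2 : 1;
       return reladdr_index * slots_per_var + reg_offset;
    }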
* i965/vec4: do not split scratch read/write opcodes (Iago Toral Quiroga, 2017-01-03, 1 file, -0/+9)
  64-bit scratch reads/writes require shuffling data around, so we need
  access to the full 64-bit data. We will do the right thing for these when
  we emit the messages.
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: Do not use DepCtrl with 64-bit instructions (Iago Toral Quiroga, 2017-01-03, 1 file, -1/+13)
  The BDW PRM says that it is not supported, but it seems that gen7 is also
  affected, since doing DepCtrl on double-float instructions leads to GPU
  hangs in some cases, which is probably not surprising knowing that this is
  not supported in newer hardware iterations. The SKL PRMs do not mention
  this restriction, so it is probably fine there.
  Reviewed-by: Matt Turner <[email protected]>
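  As a sketch, the restriction can be thought of as a predicate that the
  dependency-control optimization consults before setting NoDDClr/NoDDChk on
  an instruction (a hypothetical helper for illustration; the real check
  lives in the vec4 backend's dependency-control pass):

    /* Returns false for any instruction that reads or writes double-float
     * data, so it never gets NoDDClr/NoDDChk dependency control. */
    static bool
    can_use_dependency_control(const vec4_instruction *inst)
    {
       if (type_sz(inst->dst.type) == 8)
          return false;
       for (int i = 0; i < 3; i++) {
          if (type_sz(inst->src[i].type) == 8)
             return false;
       }
       return true;
    }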
* i965/vec4: extend the DWORD multiply DepCtrl restriction to all gen8 platforms (Iago Toral Quiroga, 2017-01-03, 1 file, -3/+6)
  v2:
  - Add Broxton, as Intel's internal PRMs say that it is needed (Matt).
  Reviewed-by: Matt Turner <[email protected]>
* i965/vec4: don't copy propagate misaligned registers (Samuel Iglesias Gonsálvez, 2017-01-03, 1 file, -0/+3)
  Copy-propagating these would mean propagating partial reads or writes,
  which can affect the result.
  Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
  Reviewed-by: Matt Turner <[email protected]>