* i965: Make unify_interfaces not spread VARYING_BIT_TESS_LEVEL_*. (Kenneth Graunke, 2017-01-06; 1 file, -2/+5)

  This is harmless today because gl_TessLevelInner/Outer in the TES are
  currently treated as system values.  However, when we move to treating
  them as inputs, this would cause a bug: with no TCS present, it would
  propagate TES reads of VARYING_SLOT_TESS_LEVEL into the VS output VUE
  map slots.  This is totally bogus - those don't even exist in the VS.

  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>

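  As a rough illustration of the guard described above (not the actual
  i965 code, which works on each stage's shader_info; the bit positions
  below are illustrative stand-ins for the real VARYING_BIT_TESS_LEVEL_*
  masks from shader_enums.h):

      #include <stdint.h>

      /* Illustrative bit positions for the tess level varying slots. */
      #define BIT_TESS_LEVEL_OUTER (UINT64_C(1) << 26)
      #define BIT_TESS_LEVEL_INNER (UINT64_C(1) << 27)

      /* Let the consumer's input reads create producer outputs, except for
       * the tess level slots, which only exist between the TCS and TES. */
      static void
      unify_interfaces_sketch(uint64_t *producer_outputs_written,
                              uint64_t consumer_inputs_read)
      {
         const uint64_t tess_levels =
            BIT_TESS_LEVEL_OUTER | BIT_TESS_LEVEL_INNER;

         *producer_outputs_written |= consumer_inputs_read & ~tess_levels;
      }
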
* glsl: Support gl_TessLevelInner/Outer[] as TES input variables. (Kenneth Graunke, 2017-01-06; 2 files, -4/+17)

  Upcoming reworks in i965 are going to make it easy to handle this like
  any other input.  Having it as a system value will just require
  additional code for no benefit.

  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>

* glsl: Mark whole variable used for ClipDistance and TessLevel*. (Kenneth Graunke, 2017-01-06; 1 file, -3/+23)

  There's no point in trying to mark partial array access for
  gl_ClipDistance, gl_TessLevelOuter, or gl_TessLevelInner - they're
  special built-in variables that control fixed function hardware, and
  will likely be used in an all-or-nothing fashion.

  Since these arrays only occupy 1-2 varying slots, we have to avoid our
  normal processing which increments the slot value by the array index.

  (I wrote this code before i965 switched from ir_set_program_inouts to
  nir_shader_gather_info.  It's not used by anyone today, and I'm not sure
  how valuable it is... the alternative to GLSL IR lowering is NIR compact
  arrays, at which point you should use nir_gather_info.)

  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>

* glsl: Override the # of varying slots for ClipDistance and TessLevel*. (Kenneth Graunke, 2017-01-06; 1 file, -0/+18)

  Right now, this shouldn't have any effect, as all drivers use
  LowerClipDist and LowerTessFactors to turn the float[] arrays into
  vectors.  However, it should help make it possible for drivers to avoid
  that lowering.

  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>

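  A minimal sketch of the kind of override this describes (the function
  name is hypothetical and the ClipDistance count assumes the maximum
  float[8] case; the VARYING_SLOT_* enums come from shader_enums.h):

      /* These built-ins are float[] arrays in GLSL, but they map onto
       * dedicated hardware slots rather than one varying slot per array
       * element. */
      static unsigned
      builtin_varying_slot_count(int varying_slot, unsigned default_count)
      {
         switch (varying_slot) {
         case VARYING_SLOT_TESS_LEVEL_OUTER: /* float[4] -> one slot  */
         case VARYING_SLOT_TESS_LEVEL_INNER: /* float[2] -> one slot  */
            return 1;
         case VARYING_SLOT_CLIP_DIST0:       /* float[8] -> two slots */
            return 2;
         default:
            return default_count;            /* normal array rules    */
         }
      }
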
* glsl: Create and use a new ir_variable::count_attribute_slots() wrapper. (Kenneth Graunke, 2017-01-06; 5 files, -11/+17)

  This wraps glsl_type::count_attribute_slots(), but will soon contain a
  couple of overrides for GLSL built-in variables.

  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>

* gallium/radeon: use the internal clear_buffer callback to fix r600g (Marek Olšák, 2017-01-06; 1 file, -1/+3)

  r600g doesn't set pipe_context::clear_buffer.

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99303
  Reviewed-by: Alex Deucher <[email protected]>

* llvmpipe: do transpose/untwiddle after conversion for 8bit formats (Roland Scheidegger, 2017-01-06; 1 file, -7/+143)

  Generally we should do transpose after conversion if the format has less
  than 32 bits per channel (if it has 32 bits, conversion is going to be a
  no-op anyway...).  This is obviously because there are fewer vectors to
  deal with.  Though the advantage for 16 bit formats isn't that big, and
  in fact with AVX there isn't really any (as the 32bit unpacks can be
  done with 256bit, but the smaller ones cannot, although that would
  change again with proper AVX2 support).

  Only makes sense for 2d and not 1d cases.  And to keep things easy, only
  handle 1, 2 and 4 channels (rgbx is just fine).

  For the rgba unorm8 format the backend conversion sums up to these
  instruction totals (not counting the movs for SSE2 due to 2-op syntax -
  generally every 2 unpacks need an additional mov):

                          SSE2               AVX
    transpose:            32 unpack          16 unpack
    untwiddle:            0                  8 (128bit low/high permutes)
    convert:              16 mul + 16 cvt    8 mul + 8 cvt
    32->8bit:             12 pack            8 (128bit extract) + 12 pack

  When doing transpose/untwiddle afterwards we get:

                          SSE2               AVX
    convert:              16 mul + 16 cvt    8 mul + 8 cvt
    32->8bit:             12 pack            8 (128bit extract) + 12 pack
    transpose/untwiddle:  12 unpack          12 unpack

  So for SSE2, this drops 20 unpacks (total instruction count 76->56),
  whereas for AVX it replaces the 16 256bit unpacks with 8 128bit ones and
  drops the 8 lo/hi permutes (in total 60->48).  (Albeit to be fair, the
  permutes could be dropped even when doing the transpose first; they are
  extremely pointless, but we'd need to be able to tell lp_build_conv to
  reorder the vectors, and for AVX2 we're going to need to be able to tell
  lp_build_conv about ordering in any case.)

  (With different ordering going into conversion, it would be possible to
  do 4 unpacks + 4 pshufbs instead of 12 unpacks, but that might not be
  better, and not all cpus can do it.  Proper AVX2 support should
  eliminate the 8 128bit extracts, reduce these 12 packs to 6 and the 12
  unpacks to 2 pshufb + 2 permq ideally (+ 2 final 128bit extracts).)

  Reviewed-by: Jose Fonseca <[email protected]>

* gallivm: generalize 4x4f->1x16ub special case conversion (Roland Scheidegger, 2017-01-06; 1 file, -56/+118)

  This special packing path can be easily extended to handle not just
  float->unorm8 but also float->snorm8 and uint32->uint8 and int32->int8
  (i.e. all interesting cases for llvmpipe fs backend code).  The packing
  parts all stay the same (only the last step packing will be
  signed->signed instead of signed->unsigned, but luckily even sse2 can do
  both).

  While here, also note some bugs with that (we keep the bugs identical to
  what we did before on x86, albeit other archs may differ).  In
  particular, for float->unorm8 too large values will still get clamped to
  0, not 255, and for float->snorm8 NaNs will end up as -1, not 0 (but we
  do the clamp against 1.0 there to prevent too large values ending up as
  -1.0 - this is inconsistent with the unorm8 handling but is what we
  ended up with before, and I'm not sure we can get away without it).
  This is quite fishy in any case, as we depend on arch-dependent behavior
  of the iround (my understanding is that with altivec the conversion
  would actually saturate, although I've no idea about NaNs, so we
  probably wouldn't need to do anything for snorm there).  (There are only
  minimal piglit tests for unorm clamping behavior AFAICT; in particular
  nothing seems to test values which are too large to be handled by the
  float->int conversion.)

  For uint32->uint8 we also do a min against MAX_INT, since the source for
  the packs is always signed (again, on x86 - we should probably be able
  to express these arch-dependent bits better some day).

  Reviewed-by: Jose Fonseca <[email protected]>

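  For reference, a scalar sketch of the clamp-then-round ordering
  discussed above (helper names are illustrative; the real gallivm path is
  vectorized and, as the commit notes, inherits x86-specific behavior for
  NaN and out-of-range inputs):

      #include <math.h>
      #include <stdint.h>

      /* NaN compares false against everything, so it falls through to lo. */
      static float clamp_f(float x, float lo, float hi)
      {
         if (x >= hi)
            return hi;
         if (x >= lo)
            return x;
         return lo;
      }

      /* float -> unorm8: negatives and NaN become 0, values > 1.0 become 255. */
      static uint8_t float_to_unorm8(float f)
      {
         return (uint8_t)lrintf(clamp_f(f, 0.0f, 1.0f) * 255.0f);
      }

      /* float -> snorm8: NaN lands on -1 here, matching the x86 behavior
       * described above rather than the arguably nicer 0. */
      static int8_t float_to_snorm8(float f)
      {
         return (int8_t)lrintf(clamp_f(f, -1.0f, 1.0f) * 127.0f);
      }

      /* uint32 -> uint8: min against INT32_MAX first because the x86 pack
       * instructions treat their 32-bit source as signed. */
      static uint8_t uint32_to_uint8(uint32_t v)
      {
         if (v > (uint32_t)INT32_MAX)
            v = (uint32_t)INT32_MAX;
         return v > 255 ? 255 : (uint8_t)v;
      }
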
* llvmpipe: use alpha from already converted color if possible (Roland Scheidegger, 2017-01-06; 2 files, -18/+54)

  For rgbx formats, there is no point in doing alpha conversion again (and
  with a different transpose even, so llvm can't eliminate it).

  Albeit it looks like there are some minimal changes needed in the blend
  code (found by code inspection, no test seemed to complain) if we do
  this - the blend factors are already sanitized if we have no destination
  alpha, however for src_alpha_saturate it looks like it still might make
  a difference (note that we forced has_alpha to true before for some
  formats and nothing complained, but this seems safer).

  Reviewed-by: Jose Fonseca <[email protected]>

* llvmpipe: use scalar load instead of vectors for small vectors in fs backend (Roland Scheidegger, 2017-01-06; 1 file, -6/+50)

  llvm has _huge_ problems trying to load things like <4 x i8> vectors and
  stitching such loads together to form 128bit vectors.  My understanding
  of the problem is that the type legalizer tries to extend that to really
  a <4 x i32> vector and not a <16 x i8> vector with the 4 elements first
  followed by padding, so the shuffles for combining things together
  afterwards are more or less impossible - you can in fact see the pmovzxd
  llvm generates.  Pre-4.0 llvm just gives up on it completely and does a
  30+ pextrb/pinsrb sequence instead.

  It looks like current llvm has fixed this behavior (my guess would be
  due to better shuffle combination and load/shuffle folds), but we can
  avoid this by just loading as <1 x i32> values, combining those, and
  only casting at the end.  (I suspect it might also work if we'd pad the
  loaded vectors immediately before shuffling them together, instead of
  directly stitching 2 such vectors together pairwise before combining the
  pair.  But this _might_ lose the ability to load the values directly
  into their right place in the vector with pinsrd.)  Using 32bit values
  is probably easier for llvm in any case, as it will never give it funny
  ideas about what the vector should look like.

  (This is possibly only a problem for 1x8bit formats, since 2x8bit will
  end up fetching 64bit, hence only two vectors are stitched together, not
  4, but we use the same strategy anyway.)

  Reviewed-by: Jose Fonseca <[email protected]>

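  A plain-C analogue of that strategy (the real code emits LLVM IR through
  gallivm; the function below is purely illustrative): fetch each 4-byte
  group as one 32-bit scalar and only reinterpret the combined result as
  bytes at the end, so nothing ever looks like a <4 x i8> load.

      #include <stdint.h>
      #include <string.h>

      /* Gather four 4-byte texel groups (e.g. four neighbouring r8 pixels
       * each) into one 16-byte result. */
      static void
      gather_4x4_bytes(const uint8_t *base, const uint32_t offsets[4],
                       uint8_t out[16])
      {
         uint32_t lanes[4];

         for (int i = 0; i < 4; i++) {
            /* One 32-bit scalar load per group - the moral equivalent of
             * loading <1 x i32> instead of <4 x i8>. */
            memcpy(&lanes[i], base + offsets[i], sizeof(uint32_t));
         }

         /* Combine the scalars first, then view the result as 16 bytes -
          * the "cast at the end" described above. */
         memcpy(out, lanes, sizeof(lanes));
      }
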
* i965: Enable several GLES 3.1 extensions on HSW+ (Ian Romanick, 2017-01-06; 3 files, -6/+9)

  The only reason we didn't previously enable these was the dependency on
  OpenGL ES 3.1.  They should have been enabled as soon as HSW got stencil
  texturing.  We also needed to fix up setting MaxViewports.

  Signed-off-by: Ian Romanick <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>

* i965: Always set MaxViewports and related limits (Ian Romanick, 2017-01-06; 1 file, -2/+1)

  Since 9d6ca7c3, there should be no performance hit for having
  MaxViewports > 1.  Always set this context state.  This eliminates the
  need to update this conditional as we add support for OES_viewport_array
  on older GPUs.

  Signed-off-by: Ian Romanick <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>

* winsys/amdgpu: fix a race condition between fence updates and IB submissions (Marek Olšák, 2017-01-06; 2 files, -18/+22)

  The CS thread is needed to ensure proper ordering of operations and
  can't be disabled (without complicating the code).

  Discovered by Nine CSMT, which ended up in a deadlock.

  Acked-by: Edward O'Callaghan <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: add TC L2 prefetch for shaders and VBO descriptors (Marek Olšák, 2017-01-06; 3 files, -1/+50)

  Reviewed-by: Edward O'Callaghan <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: add CP DMA flags for greater control over synchronization (Marek Olšák, 2017-01-06; 3 files, -16/+31)

  for L2 prefetch

  Reviewed-by: Edward O'Callaghan <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: cleanly communicate which CP DMA packet is first (Marek Olšák, 2017-01-06; 1 file, -11/+21)

  Reviewed-by: Edward O'Callaghan <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* gallium/radeon: add new HUD query num-SDMA-IBs (Marek Olšák, 2017-01-06; 9 files, -2/+21)

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* gallium/radeon: rename the num-ctx-flushes query to num-GFX-IBs (Marek Olšák, 2017-01-06; 9 files, -14/+14)

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: add HUD queries for cache flush stats (Marek Olšák, 2017-01-06; 4 files, -0/+32)

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: don't count fast clears and prefetches into CP DMA stats (Marek Olšák, 2017-01-06; 1 file, -2/+6)

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: don't wait for compute shaders in texture_barrier (Marek Olšák, 2017-01-06; 1 file, -2/+1)

  it doesn't interact with compute shaders in any way

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: assume that a TES without POSITION precedes GS (Marek Olšák, 2017-01-06; 1 file, -1/+2)

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: unduplicate VS color export code (Marek Olšák, 2017-01-06; 1 file, -9/+2)

  it's exactly the same as the other ones

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* radeonsi: clean up more HAVE_LLVM #ifdefs (Marek Olšák, 2017-01-06; 2 files, -13/+20)

  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>

* gallium/radeon: clean up HAVE_LLVM #ifdefs in r600_get_llvm_processor_name (Marek Olšák, 2017-01-06; 1 file, -17/+11)

  Reviewed-by: Nicolai Hähnle <[email protected]>

* i965: Properly flush in hsw_pause_transform_feedback(). (Kenneth Graunke, 2017-01-06; 1 file, -0/+3)

  Fixes a number of transform feedback tests when run with Linux 4.8,
  which allows us to use the MI_LOAD_REGISTER_REG command, at which point
  we started using this new broken path.

  ES3-CTS.functional.transform_feedback.array_element.interleaved.lines.*
  and Piglit's arb_transform_feedback2/draw-auto are both fixed by this
  patch, for example.

  Thanks to Chris Wilson for catching this mistake!

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99030
  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Anuj Phogat <[email protected]>

* i965: Fix texturing in the vec4 TCS and GS backends. (Kenneth Graunke, 2017-01-06; 1 file, -2/+12)

  We were failing to zero m0.2 of the sampler message header for TCS and
  GS messages in the simple case.  fs_generator has done this for about a
  year now, but we missed it in vec4_generator.

  Fixes ES31-CTS.core.texture_cube_map_array.sampling,
  GL45-CTS.texture_cube_map_array.sampling, and many
  dEQP-GLES31.functional.shaders.opaque_type_indexing.sampler subtests:
  - dynamically_uniform.tessellation_control.isampler3d
  - dynamically_uniform.tessellation_control.isamplercube
  - dynamically_uniform.tessellation_control.sampler2d
  - dynamically_uniform.tessellation_control.usamplercube
  - dynamically_uniform.tessellation_control.sampler2darray
  - dynamically_uniform.tessellation_control.isampler2darray
  - dynamically_uniform.tessellation_control.usampler3d
  - dynamically_uniform.tessellation_control.usampler2darray
  - dynamically_uniform.tessellation_control.usampler2d
  - dynamically_uniform.tessellation_control.sampler3d
  - dynamically_uniform.tessellation_control.samplercube
  - dynamically_uniform.tessellation_control.isampler2d
  - uniform.tessellation_control.isampler3d
  - uniform.tessellation_control.isamplercube
  - uniform.tessellation_control.usampler2d
  - uniform.tessellation_control.usampler3d
  - uniform.tessellation_control.sampler2darray
  - uniform.tessellation_control.isampler2darray
  - uniform.tessellation_control.usampler2darray
  - uniform.tessellation_control.sampler2d
  - uniform.tessellation_control.usamplercube
  - uniform.tessellation_control.sampler3d
  - uniform.tessellation_control.samplercube
  - uniform.tessellation_control.isampler2d

  Cc: [email protected]
  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>

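  A sketch of the header setup being fixed, mirroring what fs_generator
  already does (the brw_eu helper usage here is assumed and the
  surrounding vec4_generator code is more involved):

      static void
      emit_sampler_header_sketch(struct brw_codegen *p, struct brw_reg header)
      {
         brw_push_insn_state(p);
         brw_set_default_mask_control(p, BRW_MASK_DISABLE);

         /* Start from the default message header in g0... */
         brw_MOV(p, header, brw_vec8_grf(0, 0));

         /* ...then explicitly zero m0.2, which otherwise carries stale
          * bits from g0 that confuse the sampler for some message types. */
         brw_set_default_exec_size(p, BRW_EXECUTE_1);
         brw_MOV(p, get_element_ud(header, 2), brw_imm_ud(0));

         brw_pop_insn_state(p);
      }
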
* swr: [rasterizer core] rename OutputMerger functions (Tim Rowley, 2017-01-06; 2 files, -9/+9)

  Reviewed-by: Bruce Cherniak <[email protected]>

* swr: [rasterizer core] fix SIMD16 Transpose_16_16 (Tim Rowley, 2017-01-06; 1 file, -2/+2)

  Fix incorrect swizzling in SIMD16 Transpose_16_16 breaking the
  two-channel 16-bpc formats like R16G16_FLOAT.

  Reviewed-by: Bruce Cherniak <[email protected]>

* swr: [rasterizer core] fix SIMD16 output merger (Tim Rowley, 2017-01-06; 2 files, -16/+22)

  Honor the colorHottileEnable mask when accessing colorBuffer pointers.

  Reviewed-by: Bruce Cherniak <[email protected]>

* swr: [rasterizer core] fix SIMD16 PackTraits pack() and unpack() (Tim Rowley, 2017-01-06; 3 files, -48/+82)

  Fix routines for 8-bit and 16-bit formats used by optimized tile store.

  Reviewed-by: Bruce Cherniak <[email protected]>

* swr: [rasterizer core] fix SIMD16 transpose functions (Tim Rowley, 2017-01-06; 3 files, -113/+225)

  Fixed the Transpose_16 methods of the following formats:
    Transpose8_8_8_8
    Transpose8_8
    Transpose32_32
    Transpose16_16_16_16
    Transpose16_16_16
    Transpose16_16

  Reviewed-by: Bruce Cherniak <[email protected]>

* swr: [rasterizer core] whitespace adjustments (Tim Rowley, 2017-01-06; 1 file, -2/+1)

  Reviewed-by: Bruce Cherniak <[email protected]>

* i965: Don't set EmitNoMainReturn. (Kenneth Graunke, 2017-01-05; 1 file, -1/+0)

  A while ago, we stopped using Luca's GLSL IR lower_jumps pass in favor
  of nir_lower_returns().  Marek's commit d3cb79e043338b0e55a3fba8df652f3
  put it in do_common_optimization, which resulted in us calling it again.
  Dropping the EmitNoMainReturn setting makes us skip that pass again.

  Apparently that pass doesn't work properly, because this fixes Piglit's
  tests/spec/glsl-1.10/execution/vs-nested-return-sibling-loop.

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99287
  Signed-off-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>

* vc4: Rewrite T image handling based on calling the LT handler. (Eric Anholt, 2017-01-05; 1 file, -34/+75)

  The T images are composed of effectively swizzled-around blocks of LT
  (4x4 utile) images, so we can reduce the t_utile_address() calls by 16x
  by calling into the simpler LT loop.

  This also adds support for calling down with non-utile-aligned
  coordinates, which will be part of lifting the utile alignment
  requirement on our callers and avoiding the RMW on non-utile-aligned
  stores.

  Improves 1024x1024 TexSubImage by 2.55014% +/- 1.18584% (n=46)
  Improves 1024x1024 GetTexImage by 2.242% +/- 0.880954% (n=32)

* vc4: Move the utile_width/height functions to header inlines. (Eric Anholt, 2017-01-05; 2 files, -37/+36)

  I want these inlined in the callers, particularly with the tiling
  changes coming up, but we're not building with lto so some caller would
  suffer.

* vc4: Make the load/store utile functions static. (Eric Anholt, 2017-01-05; 2 files, -4/+2)

  They don't have any other callers outside of this file, and I'm hoping
  they get inlined soon.

* vc4: Simplify the load/store utile functions. (Eric Anholt, 2017-01-05; 1 file, -10/+22)

  They now have less of a dependency on the cpp, and don't have to do a
  divide.

  Hacking up mesa-demos teximage to do only one subtest and not draw
  points, I saw 1024x1024 glTexSubImage2D() improve by 4.86939% +/-
  1.40408% (n=30) and glGetTexImage() by 2.18978% +/- 0.140268% (n=5).

* vc4: Reuse a list function to simplify bufmgr code. (Eric Anholt, 2017-01-05; 1 file, -11/+2)

* vc4: Flush the job early if we're referencing too many BOs. (Eric Anholt, 2017-01-05; 3 files, -0/+16)

  If we get up toward 256MB (or whatever the CMA area size is),
  VC4_GEM_CREATE will start throwing errors.  Even if we don't trigger
  that, when we flush, the kernel's BO allocation for the CLs or bin
  memory may end up throwing an error, at which point our job won't get
  rendered at all.  Just flush early (at half of the maximum CMA size) so
  that hopefully we never get to that point.

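  A self-contained sketch of that heuristic (types, names, and the exact
  threshold are illustrative; the real driver tracks the referenced BO
  size on its job state):

      #include <stdint.h>

      /* Stand-ins for the driver's own types. */
      struct bo_sketch  { uint32_t size; };
      struct job_sketch { uint32_t bo_space; };

      /* Flush once a job references more than half of a typical 256 MB CMA
       * pool, so allocations made at flush time don't fail. */
      #define MAX_JOB_BO_SPACE (128u * 1024 * 1024)

      static void flush_job(struct job_sketch *job)
      {
         /* Submit the CLs to the kernel and reset the accounting. */
         job->bo_space = 0;
      }

      static void job_reference_bo(struct job_sketch *job,
                                   const struct bo_sketch *bo)
      {
         job->bo_space += bo->size;
         if (job->bo_space > MAX_JOB_BO_SPACE)
            flush_job(job);
      }
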
* st/mesa/glsl: move SamplerTargets to gl_program (Timothy Arceri, 2017-01-06; 6 files, -13/+16)

  This will allow us to simplify the handling of samplers by storing them
  in a single location rather than duplicating them in both
  gl_linked_shader and gl_program.

  Reviewed-by: Eric Anholt <[email protected]>

* st/mesa/glsl: set SamplersUsed directly in gl_program (Timothy Arceri, 2017-01-06; 5 files, -6/+3)

  Reviewed-by: Eric Anholt <[email protected]>

* mesa/glsl: set sampler units directly in gl_program (Timothy Arceri, 2017-01-06; 4 files, -17/+7)

  Now that we create gl_program earlier, there is no need to mess about
  copying things to gl_linked_shader and then to gl_program.

  Reviewed-by: Eric Anholt <[email protected]>

* mesa: simplify sampler setting code (Timothy Arceri, 2017-01-06; 1 file, -22/+11)

  There is no need to loop over active samplers; the code above this would
  have already exited if the sampler was inactive, or errored if the count
  was larger than the uniform's array size.

  Reviewed-by: Eric Anholt <[email protected]>

* mesa/glsl: set num_textures per stage directly in shader_info (Timothy Arceri, 2017-01-06; 6 files, -6/+5)

  Reviewed-by: Eric Anholt <[email protected]>

* mesa: make _CurrentFragmentProgram a gl_program struct pointer (Timothy Arceri, 2017-01-06; 6 files, -30/+22)

  Making this point to a gl_program struct rather than a gl_shader_program
  struct will allow us to later also make the CurrentProgram array hold
  gl_program structs, which in turn will allow for code simplification.

  Reviewed-by: Eric Anholt <[email protected]>

* i965: stop passing gl_shader_program to the precompile and codegen functions (Timothy Arceri, 2017-01-06; 12 files, -87/+31)

  We no longer need it.  While we are at it, we mark the vs, gs, and wm
  codegen functions as static.

  Reviewed-by: Eric Anholt <[email protected]>

* mesa/glsl: remove hack to reset sampler units to zero (Timothy Arceri, 2017-01-06; 3 files, -20/+16)

  Now that we have the is_arb_asm flag we can just skip the
  initialisation.

  V2: remove hack from standalone compiler where it was never needed since
  it only compiles glsl shaders.

  Reviewed-by: Eric Anholt <[email protected]>

* i965: make use of new is_arb_asm flag (Timothy Arceri, 2017-01-06; 2 files, -13/+11)

  Reviewed-by: Eric Anholt <[email protected]>

* st/mesa/glsl: add new is_arb_asm flag in gl_program (Timothy Arceri, 2017-01-06; 13 files, -34/+45)

  Set the flag via the _mesa_init_gl_program() and NewProgram() helpers.

  In i965 we currently check for the existence of gl_shader_program to
  decide if this is an ARB assembly style program or not.  Adding a flag
  makes the code clearer and will help remove a dependency on
  gl_shader_program in the i965 codegen functions.

  Also this will allow us to skip initialising sampler units for linked
  shaders; we currently memset it to zero again during linking.

  Reviewed-by: Eric Anholt <[email protected]>

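  A sketch of the plumbing this describes (the parameter placement and the
  surrounding initialisation are assumptions; only the general shape is
  intended):

      /* _mesa_init_gl_program() takes the flag so that every NewProgram()
       * hook can record whether the program comes from ARB_*_program
       * assembly or from GLSL. */
      struct gl_program *
      _mesa_init_gl_program(struct gl_program *prog, GLenum target,
                            GLuint id, bool is_arb_asm)
      {
         if (!prog)
            return NULL;

         memset(prog, 0, sizeof(*prog));
         prog->Id = id;
         prog->Target = target;
         prog->is_arb_asm = is_arb_asm;
         /* ...remaining default initialisation unchanged... */
         return prog;
      }
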