* i965: fs: Add fixup for textureSize on Gen6/7 (Chris Forbes, 2012-12-14, 1 file, -0/+11)
  V2: Moved up into emit(ir_texture *) to avoid duplication and fix ordering for Gen7; Gen6 math quirks moved into previous patches. Tested on Gen6 only; passes all the cube_map_array piglits.
  V3: Fixed weird whitespace
  V4: Use sampler->type; otherwise broken on arrays of samplers.
  v5: Minor style fixes (by anholt)
  Signed-off-by: Chris Forbes <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>

* i965: fs: fix gen6+ math operands in one place (Chris Forbes, 2012-12-14, 2 files, -28/+33)
  V4: Fix various style nits as pointed out by Eric, and expand IMM operands on both Gen6 and Gen7.
  v5: minor style nits (by anholt)
  Signed-off-by: Chris Forbes <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>

* i965: vs: Add fixup for textureSize with cube array samplers (Chris Forbes, 2012-12-14, 1 file, -0/+13)
  V3: Fixed weird whitespace
  V4: Use sampler's type rather than variable's type; otherwise broken with arrays of samplers. (Thanks Eric)
  v5: Fix a couple more style nits (by anholt)
  Signed-off-by: Chris Forbes <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>

* i965/vs: Fix gen6+ math operand quirks in one place (Chris Forbes, 2012-12-14, 2 files, -34/+28)
  This causes immediate values to get moved to a temp on gen7, which is needed for an upcoming change but hadn't happened in the visitor until then.
  v2: Drop gen > 7 checks (doesn't exist), and style-fix comments (changes by anholt).
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>

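Roughly, the fixup described above amounts to detecting immediate operands and loading them into a temporary register before the math instruction is emitted. The following is a toy, self-contained C sketch of that idea; the reg/enum types, the MOV printing, and the register numbering are invented for illustration and are not the actual i965 visitor code.

    #include <stdio.h>

    /* Toy stand-ins for the backend's register types (hypothetical). */
    enum reg_file { GRF, IMM };

    struct reg {
       enum reg_file file;
       int nr;       /* GRF number (unused for IMM) */
       float imm;    /* immediate value (unused for GRF) */
    };

    static int next_grf = 16;

    /* Load an immediate into a fresh temporary GRF with a MOV. */
    static struct reg move_to_temp(struct reg src)
    {
       struct reg tmp = { GRF, next_grf++, 0.0f };
       printf("MOV  grf%d, %g\n", tmp.nr, src.imm);
       return tmp;
    }

    /* Gen6/7 math instructions are picky about their operands; in this
     * sketch, immediates get expanded to a temporary register in one
     * place instead of at every emit site. */
    static struct reg fix_math_operand(struct reg src)
    {
       return src.file == IMM ? move_to_temp(src) : src;
    }

    int main(void)
    {
       struct reg base = { GRF, 2, 0.0f };
       struct reg exponent = { IMM, 0, 4.0f };

       base = fix_math_operand(base);
       exponent = fix_math_operand(exponent);
       printf("MATH pow  grf%d, grf%d, grf%d\n", next_grf, base.nr, exponent.nr);
       return 0;
    }
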
* i965: Add various plumbing for cubemap arrays (Chris Forbes, 2012-12-14, 5 files, -3/+11)
  V4: Fixed style nits
  Signed-off-by: Chris Forbes <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>

* i965/fs: Add empirically-determined instruction latencies for gen7. (Eric Anholt, 2012-12-14, 1 file, -3/+179)
  v2: Actually switch on the other math instructions mentioned in the comment.
  v3: Add timing data for textureSize(), and clean up some long comment lines.
  Testing shader_time of fs16 shaders on a few frames of various apps:
    nexuiz improved by 2.9% +/- 1.5% (n=10)
    no difference on GLB2.5 (n=36, outliers removed)
    no difference on GLB2.7 (n=25)
    etqw improved by 2.6% +/- 2.2% (n=25)
    no difference on lightsmark (n=25)
  Acked-by: Kenneth Graunke <[email protected]>

* i965/fs: Fix the clock increment in scheduling. (Eric Anholt, 2012-12-14, 1 file, -3/+15)
  I've tested this to be true with various ALU ops on gen7 (with the exception of MADs, which go at either 3 or 4 cycles per dispatch).
  Acked-by: Kenneth Graunke <[email protected]>

* i965/fs: Move the old gen4 bspec-based scheduling info to a helper func. (Eric Anholt, 2012-12-14, 1 file, -33/+41)
  For gen7 everything changes, and we have actual information on latency.
  Acked-by: Kenneth Graunke <[email protected]>

* i965/fs: Set up gen7 UBO loads as sends from GRFs. (Eric Anholt, 2012-12-14, 5 files, -7/+114)
  This gives the instruction scheduler a chance to schedule between the loads, whereas before it was restricted due to the dependencies between the MRFs for setting them up.
  For one shader in gles3conform, it goes from getting stuck in register allocation for as long as anybody's bothered to leave it running down to 23 seconds, thanks to the LIFO scheduling.
  Acked-by: Kenneth Graunke <[email protected]>

* i965/fs: Before reg alloc, schedule instructions to reduce live ranges. (Eric Anholt, 2012-12-14, 1 file, -6/+41)
  This came from an idea by Ben Segovia. 16-wide pixel shaders are very important for latency hiding on i965, so we want to try really hard to get them.
  If scheduling an instruction makes some set of instructions available, those are probably the ones that make the instruction's result dead. By choosing those first, we'll have a tendency to reduce the amount of live data as opposed to creating more.
  Previously, we were sometimes getting this behavior out of the scheduler, which was what produced the scheduler's original performance wins on lightsmark. Unfortunately, that was mostly an accident of the lame instruction latency information that I had, which made it impossible to fix the actual scheduling for performance. Now that we've fixed the scheduling for setup for register allocation, we can safely update the latency parameters for the final schedule.
  In shader-db, we lose 37 16-wide shaders, but gain 90 new ones. 4 shaders that were spilling change how many registers spill, for a reduction of 70/3899 instructions.
  v2: Simplify the new loop.
  Acked-by: Kenneth Graunke <[email protected]>

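The heuristic described above is essentially LIFO selection from the ready list: instructions that just became available are likely the consumers of the value just computed, so scheduling them next tends to kill that value quickly and keep fewer values live. Below is a minimal, self-contained C sketch of that idea using an invented node structure and a three-instruction example; it is not the real fs instruction scheduler.

    #include <stdio.h>

    /* Hypothetical scheduling node, not Mesa's actual data structure. */
    struct node {
       const char *name;
       struct node *next;        /* link in the ready list */
       int unmet_deps;           /* producers not yet scheduled */
       int num_children;
       struct node **children;   /* consumers of this node's result */
    };

    /* The ready list is used as a stack (LIFO): instructions that just
     * became available are chosen first, since they are likely the ones
     * that consume, and thereby kill, the value just produced. */
    static struct node *ready;

    static void mark_ready(struct node *n) { n->next = ready; ready = n; }

    static struct node *pop_ready(void)
    {
       struct node *n = ready;
       if (n)
          ready = n->next;
       return n;
    }

    static void schedule_one(struct node *n)
    {
       printf("schedule %s\n", n->name);
       for (int i = 0; i < n->num_children; i++)
          if (--n->children[i]->unmet_deps == 0)
             mark_ready(n->children[i]);
    }

    int main(void)
    {
       struct node consumer = { "consumer of A", NULL, 1, 0, NULL };
       struct node *kids[] = { &consumer };
       struct node producer = { "producer A", NULL, 0, 1, kids };
       struct node unrelated = { "unrelated op", NULL, 0, 0, NULL };

       mark_ready(&unrelated);
       mark_ready(&producer);

       /* Schedules producer A, then its consumer (freshly ready, chosen
        * LIFO), and only then the unrelated op. */
       for (struct node *n = pop_ready(); n; n = pop_ready())
          schedule_one(n);
       return 0;
    }
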
* i965/fs: Add some optional debug printfs to scheduling. (Eric Anholt, 2012-12-14, 1 file, -0/+21)
  Seeing when instructions become available to schedule is really useful.
  Acked-by: Kenneth Graunke <[email protected]>

* i965/fs: Schedule instructions both before and after register allocation. (Eric Anholt, 2012-12-14, 3 files, -18/+78)
  Acked-by: Kenneth Graunke <[email protected]>

* i965: Make sure that the shader_time report at context destroy happens. (Eric Anholt, 2012-12-14, 1 file, -0/+3)
  Otherwise, you end up with some report from within a second of context destroy, which is not what you really want for testing the impact of changes.

* i965: Print a total time for the different shader stages. (Eric Anholt, 2012-12-14, 1 file, -10/+38)
  Sometimes I've got a patch for a performance optimization that's not showing a statistically significant performance difference on reported FPS, but still seems like a good idea because it ought to reduce time spent in the shader. If I can see the total number of cycles spent in the shader stage being optimized, it may show that the patch is still worthwhile (or point out that it's actually broken in some way).

* i965: Scale shader_time to compensate for resets. (Eric Anholt, 2012-12-14, 4 files, -9/+83)
  Some shaders experience resets more than others, which skews the numbers reported. Attempt to correct for this by linearly scaling according to the number of resets that happen.
  Note that this will not be accurate if invocations of shaders have varying times and longer invocations are more likely to reset. However, this should at least be better than the previous situation.

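The commit does not spell out the exact formula, but one straightforward reading of "linearly scaling according to the number of resets" is to scale the accumulated time by the ratio of total invocations to the invocations that actually reported, assuming both counts are tracked. A hedged sketch with hypothetical function and parameter names:

    #include <stdint.h>

    /*
     * If only `written` out of `written + reset` invocations managed to
     * add their time delta to `accumulated`, scale the total up linearly
     * so it estimates what all invocations together would have reported.
     */
    static uint64_t
    scale_shader_time(uint64_t accumulated, uint64_t written, uint64_t reset)
    {
       if (written == 0)
          return 0;
       return accumulated * (written + reset) / written;
    }

For example, 1000 cycles accumulated across 80 reporting invocations with 20 resets would scale to 1250 cycles.
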
* i965: Adjust the split between shader_time_end() and shader_time_write(). (Eric Anholt, 2012-12-14, 4 files, -51/+55)
  I'm about to emit other kinds of writes besides time deltas, and it turns out with the frequency of resets, we couldn't really use the old time delta write() function more than once in a shader.

* glsl/linker: Pack between varyings. (Paul Berry, 2012-12-14, 1 file, -15/+35)
  This patch implements varying packing between varyings.
  Previously, each varying occupied components 0 through N-1 of its assigned varying slot, so there was no way to pack two varyings into the same slot. For example, if the varyings were a float, a vec2, a vec3, and another vec2, they would be stored as follows:

    <----slot1----> <----slot2----> <----slot3----> <----slot4---->
    *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *    slots
    flt x   x   x   <vec2-> x   x   <--vec3---> x   <vec2-> x   x    varyings

  (Each * represents a varying component, and the "x"s represent wasted space).
  This change packs the varyings together to eliminate wasted space between varyings, like so:

    <----slot1----> <----slot2----> <----slot3----> <----slot4---->
    *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *    slots
    <vec2-> <vec2-> flt <--vec3---> x   x   x   x   x   x   x   x    varyings

  Note that we take advantage of the sort order introduced in previous patches (vec4's first, then vec2's, then scalars, then vec3's) to minimize how often a varying is "double parked" (split across varying slots).
  Reviewed-by: Eric Anholt <[email protected]>
  v2: Skip varying packing if ctx->Const.DisableVaryingPacking is true.

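To make the packed layout above concrete, here is a small, self-contained C program that performs the same greedy slot/component assignment on the example varyings (two vec2s, a float, and a vec3, already in the sorted order from the earlier patch). The names and the program itself are purely illustrative; the real packing lives in the GLSL linker.

    #include <stdio.h>

    int main(void)
    {
       /* Example varyings in sorted order: vec2, vec2, float, vec3. */
       const char *names[] = { "a_vec2", "b_vec2", "c_float", "d_vec3" };
       const int sizes[]   = { 2, 2, 1, 3 };   /* components per varying */
       int cursor = 0;                         /* next free component overall */

       for (int i = 0; i < 4; i++) {
          int slot = cursor / 4;       /* which vec4 varying slot */
          int component = cursor % 4;  /* offset within the slot */
          int last = component + sizes[i] - 1;

          /* A last component greater than 3 would mean the varying spills
           * into the next slot, i.e. it is "double parked". */
          printf("%s -> slot %d, components %d..%d\n",
                 names[i], slot, component, last);
          cursor += sizes[i];
       }
       return 0;
    }

The output (slots numbered from 0) mirrors the packed diagram above: both vec2s land in the first slot, and the float plus the vec3 fill the second.
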
* glsl/linker: Pack within compound varyings. (Paul Berry, 2012-12-14, 1 file, -37/+56)
  This patch implements varying packing within varyings that are composed of multiple vectors of size less than 4 (e.g. arrays of vec2's, or matrices with height less than 4).
  Previously, such varyings used up a full 4-wide varying slot for each constituent vector, meaning that some of the components of each varying slot went unused. For example, a mat4x3 would be stored as follows:

    <----slot1----> <----slot2----> <----slot3----> <----slot4---->
    *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *    slots
    <-column1-> x   <-column2-> x   <-column3-> x   <-column4-> x    matrix

  (Each * represents a varying component, and the "x"s represent wasted space).
  In addition to wasting precious varying components, this layout complicated transform feedback, since the constituents of the varying are expected to be output to the transform feedback buffer contiguously (e.g. without gaps between the columns, in the case of a matrix).
  This change packs the constituents of each varying together so that all wasted space is at the end. For the mat4x3 example, this looks like so:

    <----slot1----> <----slot2----> <----slot3----> <----slot4---->
    *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *    slots
    <-column1-> <-column2-> <-column3-> <-column4-> x   x   x   x    matrix

  Note that matrix columns 2 and 3 now cross a boundary between varying slots (a characteristic I call "double parking" of a varying).
  We don't bother trying to eliminate the wasted space at the end of the varying, since the patch that follows will take care of that.
  Since compiler back-ends don't (yet) support this packed layout, the lower_packed_varyings function is used to rewrite the shader into a form where each varying occupies a full varying slot. Later, if we add native back-end support for varying packing, we can make this lowering pass optional.
  Reviewed-by: Eric Anholt <[email protected]>
  v2: Skip varying packing if ctx->Const.DisableVaryingPacking is true.

* gallium: Disable varying packing on hardware with <=8 texture indirections. (Paul Berry, 2012-12-14, 1 file, -0/+14)
  In practice this will disable varying packing on R300, R400, i915g, and nv30.
  Reviewed-by: Marek Olšák <[email protected]>

* mesa: Add an option so driver can opt out of varying packing. (Paul Berry, 2012-12-14, 1 file, -0/+11)
  On hardware that supports a limited number of texture indirections, varying packing will consume an extra texture indirection, since ALU operations are needed in the fragment shader to unpack the varyings before any texturing can be done.
  This patch introduces a new driver option, ctx->Const.DisableVaryingPacking, which can be used by a driver to opt out of varying packing if the extra texture indirection is costly enough to outweigh the advantages of packing varyings.
  Reviewed-by: Marek Olšák <[email protected]>

* glsl: Add a lowering pass for packing varyings. (Paul Berry, 2012-12-14, 3 files, -0/+368)
  This lowering pass generates GLSL code that manually packs varyings into vec4 slots, for the benefit of back-ends that don't support packed varyings natively.
  No functional change--the lowering pass is not yet used.
  Reviewed-by: Eric Anholt <[email protected]>
  v2: Don't use ir_hierarchical_visitor--just loop over instructions directly. Also, make the names of the packed varyings include the names of the original varyings that were packed into them.

* glsl/linker: Sort varyings by packing class, then vector size. (Paul Berry, 2012-12-14, 1 file, -0/+104)
  This patch paves the way for varying packing by adding a sorting step before varying assignment, which sorts the varyings into an order that increases the likelihood of being able to find an efficient packing.
  First, varyings are sorted into "packing classes" by considering attributes that can't be mixed during varying packing--at the moment this includes base type (float/int/uint/bool) and interpolation mode (smooth/noperspective/flat/centroid), though later we will hopefully be able to relax some of these restrictions. The number of packing classes places an upper limit on the amount of space that must be wasted by varying packing, since in theory a shader might have 4n+1 components worth of varyings in each of m packing classes, resulting in 3m components worth of wasted space.
  Then, within each packing class, varyings are sorted by vector size, with vec4's coming first, then vec2's, then scalars, and then finally vec3's. The motivation for this order is that it ensures that the only vectors that might be "double parked" (with part of the vector in one varying slot and the remainder in another) are vec3's.
  Note that the varyings aren't actually packed yet, merely placed in an order that will facilitate packing.
  Reviewed-by: Eric Anholt <[email protected]>

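A compact illustration of the ordering described above, written as a qsort() comparator in C. The varying_info struct and the integer packing_class encoding are assumptions made for the example; only the ordering itself (group by packing class, then vec4, vec2, scalar, vec3) comes from the commit.

    #include <stdio.h>
    #include <stdlib.h>

    struct varying_info {
       int packing_class;    /* base type + interpolation mode folded into an int */
       int vector_elements;  /* 1..4 */
    };

    /* Rank vector sizes so that vec4s come first, then vec2s, then
     * scalars, and finally vec3s (the only size that may end up
     * double parked). */
    static int size_rank(int elements)
    {
       switch (elements) {
       case 4: return 0;
       case 2: return 1;
       case 1: return 2;
       default: return 3;   /* vec3 */
       }
    }

    static int compare_varyings(const void *a, const void *b)
    {
       const struct varying_info *va = a, *vb = b;

       if (va->packing_class != vb->packing_class)
          return va->packing_class - vb->packing_class;
       return size_rank(va->vector_elements) - size_rank(vb->vector_elements);
    }

    int main(void)
    {
       struct varying_info v[] = {
          { 0, 3 }, { 1, 2 }, { 0, 4 }, { 0, 1 }, { 1, 4 },
       };

       qsort(v, sizeof(v) / sizeof(v[0]), sizeof(v[0]), compare_varyings);
       for (unsigned i = 0; i < sizeof(v) / sizeof(v[0]); i++)
          printf("class %d, %d components\n",
                 v[i].packing_class, v[i].vector_elements);
       return 0;
    }
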
* glsl/linker: Subdivide the first phase of varying assignment. (Paul Berry, 2012-12-14, 1 file, -44/+163)
  This patch further subdivides the loop that assigns varying locations into two phases: one phase to match up the varyings between shader stages, and one phase to assign them varying locations. In between the two phases the matched varyings are stored in a new data structure called varying_matches. This will free us to be able to assign varying locations in any order, which will pave the way for packing varyings.
  Note that the new varying_matches::assign_locations() function returns the number of varying slots that were used; this return value will be used in a future patch.
  Reviewed-by: Eric Anholt <[email protected]>

* glsl/linker: Defer recording transform feedback locations. (Paul Berry, 2012-12-14, 1 file, -55/+48)
  This patch subdivides the loop that assigns varying locations into two phases: one phase to match up varyings between shader stages (and assign them varying locations), and a second phase to record the varying assignments for use by transform feedback.
  This paves the way for varying packing, which will require us to further subdivide the first phase.
  In addition, it lets us avoid a clumsy O(n^2) algorithm, since we can now record the locations of all transform feedback varyings in a single pass through the tfeedback_decls array, rather than have to iterate through the array after assigning each varying.
  Reviewed-by: Eric Anholt <[email protected]>

* glsl: Create a field to store fractional varying locations. (Paul Berry, 2012-12-14, 3 files, -2/+14)
  Currently, the location of each varying is recorded in ir_variable as a multiple of the size of a vec4. In order to pack varyings, we need to be able to record, e.g. that a vec2 is stored in the second half of a varying slot rather than the first half.
  This patch introduces a field ir_variable::location_frac, which represents the offset within a vec4 where a varying's value is stored. Varyings that are not subject to packing will always have a location_frac value of zero.
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Ian Romanick <[email protected]>

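As a small illustration of what the new field encodes: a vec2 packed into the z/w components of varying slot 3 would carry location = 3 and location_frac = 2. The struct below is a toy stand-in, not ir_variable itself.

    /* Toy stand-in for the relevant part of ir_variable. */
    struct packed_varying_location {
       int location;            /* which vec4 varying slot */
       unsigned location_frac;  /* first component within that slot, 0..3 */
    };

    /* vec2 stored in the z/w half of slot 3. */
    static const struct packed_varying_location example = { 3, 2 };

    /* Flat component index across all slots: location * 4 + location_frac. */
    static unsigned flat_component(struct packed_varying_location loc)
    {
       return (unsigned)loc.location * 4 + loc.location_frac;
    }
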
* glsl/linker: Make separate ir_variable field to mean "unmatched". (Paul Berry, 2012-12-14, 2 files, -4/+23)
  Previously, the linker used a value of -1 in ir_variable::location to denote a generic input or output of the shader that had not yet been matched up to a variable in another pipeline stage.
  This patch introduces a new ir_variable field, is_unmatched_generic_inout, for that purpose. In future patches, this will allow us to separate the process of matching varyings between shader stages from the process of assigning locations to those varyings. That will in turn pave the way for packing varyings.
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Ian Romanick <[email protected]>

* glsl/linker: Always invalidate shader ins/outs, even in corner cases. (Paul Berry, 2012-12-14, 1 file, -12/+31)
  Previously, link_invalidate_variable_locations() was only called during assign_attribute_or_color_locations() and assign_varying_locations(). This meant that in the corner case when there was only a vertex shader, and varyings were being captured by transform feedback, link_invalidate_variable_locations() wasn't being called for the varyings.
  This patch migrates the calls to link_invalidate_variable_locations() to link_shaders(), so that they will be called in all circumstances. In addition, it modifies the call semantics so that link_invalidate_variable_locations() need only be called once per shader stage (rather than once for inputs and once for outputs).
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Ian Romanick <[email protected]>

* glsl/lower_clip_distance: Update symbol table. (Paul Berry, 2012-12-14, 3 files, -5/+10)
  This patch modifies the clip distance lowering pass so that the new symbol it generates (glClipDistanceMESA) is added to the shader's symbol table.
  This will allow a later patch to modify the linker so that it finds transform feedback varyings using the symbol table rather than having to iterate through all the declarations in the shader.
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Ian Romanick <[email protected]>

* android: build fix for libmesa_glsl_utils (Tapani Pälli, 2012-12-14, 1 file, -0/+4)
  hash_table.c compilation requires the ralloc.h include path.
  Signed-off-by: Tapani Pälli <[email protected]>
  Reviewed-by: Chad Versace <[email protected]>

* mesa: minor indentation fixes in texcompress_etc.c (Brian Paul, 2012-12-14, 1 file, -17/+17)
* mesa: remove old swrast-based compressed texel fetch code (Brian Paul, 2012-12-14, 8 files, -692/+1)
* swrast: use new core Mesa compressed texel fetch functions (Brian Paul, 2012-12-14, 2 files, -87/+110)
* mesa: reimplement _mesa_decompress_image() using new tex fetch code (Brian Paul, 2012-12-14, 1 file, -103/+7)
* mesa: added _mesa_get_compressed_fetch_func() (Brian Paul, 2012-12-14, 2 files, -0/+36)
* mesa: add new texel fetch code for etc formats (Brian Paul, 2012-12-14, 2 files, -0/+280)
* mesa: add new texel fetch code for rgtc formats (Brian Paul, 2012-12-14, 2 files, -0/+166)
* mesa: add new texel fetch code for fxt formats (Brian Paul, 2012-12-14, 2 files, -0/+45)
* mesa: add new texel fetch code for dxt formats (Brian Paul, 2012-12-14, 2 files, -1/+110)

* mesa: add compressed_fetch_func typedef (Brian Paul, 2012-12-14, 1 file, -0/+9)
  This is a first step in removing the swrast-related code in core Mesa's texture compression files.

* swrast: merge get_texel_fetch_func() and set_fetch_functions() (Brian Paul, 2012-12-14, 1 file, -26/+20)
  No real need for separate functions anymore.

* swrast: make _mesa_get_texel_fetch_func() static (Brian Paul, 2012-12-14, 2 files, -7/+4)
  Not called from any other file.

* draw/llvmpipe: fix transform feedback position + enable other extensions (Dave Airlie, 2012-12-14, 6 files, -8/+27)
  This builds on the previous draw/softpipe patch.
  llvmpipe does streamout calls after the clip/viewport stages, but we have the pre-clip position stored for later use, so when we are doing transform feedback and it's the position vertex, grab the vertex from the stored pre-clip position.
  The perfect fix is probably to add a codegen transform feedback stage in between the shader and clip stages, but this is good enough for now.
  Reviewed-by: Roland Scheidegger <[email protected]>
  Signed-off-by: Dave Airlie <[email protected]>

* draw: add support for later transform feedback extensions (Dave Airlie, 2012-12-14, 3 files, -6/+17)
  This adds support to draw for the new features of transform feedback.
  a) fix count_from_stream_output, using max_index+1 for now, but it looks like it should be valid as it's derived from the vertex elements/vbo.
  b) fix striding and dst offsets in output buffers - was just wrong before.
  c) fix crash if tfb is suspended (so.num_targets == 0)
  This also enables the new features on softpipe. It should be possible to enable them on llvmpipe as well after this commit, but would need to schedule piglit runs.
  Signed-off-by: Dave Airlie <[email protected]>

* clover: Fix build since removal of pipe_surface::usage (Tom Stellard, 2012-12-13, 1 file, -1/+0)
  by commit 25409c6da8163d9acb386511aef0c11577c7aadb

* r600g/radeonsi: Silence warnings (Maxence Le Dore, 2012-12-13, 5 files, -30/+49)
  Reviewed-by: Tom Stellard <[email protected]>

* clover: Add support for compiler flags (Tom Stellard, 2012-12-13, 5 files, -12/+71)
  Reviewed-by: Francisco Jerez <[email protected]>

* clover: Don't erase build info of devices not being built (Tom Stellard, 2012-12-13, 1 file, -2/+2)
  Every call to _cl_program::build() was erasing the binaries and logs for every device associated with the program. This is incorrect because it is possible to build a program for only a subset of devices, and so any device not being built should not have this information erased.
  Reviewed-by: Francisco Jerez <[email protected]>

* r600g: use load_ar checks with llvm output. (Vincent Lejeune, 2012-12-13, 1 file, -0/+6)
  Reviewed-by: Tom Stellard <[email protected]>

* build: Fix AX_PROG_{CC,CXX}_FOR_BUILD macros (Thierry Reding, 2012-12-13, 2 files, -52/+23)
  Override the cross_compiling and ac_tool_prefix variables by reassigning to them instead of redefining the macros. Redefining them will actually cause the variable names to be replaced instead of their content.
  Furthermore, push the definition of CPPFLAGS before running the checks for the build tools to avoid the host CPPFLAGS from leaking into the build CPPFLAGS.
  While at it, drop the redefinition of AC_TRY_COMPILER, which hasn't been used since autoconf 2.50, and make sure that all definitions are properly popped when done (LDFLAGS, ac_cv_prog_CPP, ac_cv_prog_CXXCPP).
  Acked-by: Matt Turner <[email protected]>
  Signed-off-by: Thierry Reding <[email protected]>

* gallivm: fix texel fetch for array textures (Roland Scheidegger, 2012-12-13, 1 file, -17/+38)
  Since we don't call lp_build_sample_common() in the texel fetch path, we missed the layer fixup code. If someone had tried to do texelFetch with array textures, it would have crashed for sure.
  Not really tested (the piglit test that exercises texelFetch with array samplers can't run with llvmpipe for now).
  Reviewed-by: José Fonseca <[email protected]>