| Commit message | Author | Age | Files | Lines |
| |
The initial helpers add support for removing unused varyings between
stages.
V2:
- Moved the io mask helper function into this file rather than
nir.h so that it isn't used elsewhere, since it doesn't handle
all corner cases.
- Use bitmask rather than hash table to handle tcs outputs (Ken)
Reviewed-by: Kenneth Graunke <[email protected]>
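The core test, roughly: build per-stage bitmasks of the varying slots the
producer writes and the consumer reads, and treat a slot as removable when
it is written but never read. A minimal C sketch of that check, with
invented names (the real helpers also deal with TCS outputs and other
corner cases the bitmask alone doesn't cover):

#include <stdbool.h>
#include <stdint.h>

static bool
varying_slot_is_dead(uint64_t producer_outputs_written,
                     uint64_t consumer_inputs_read,
                     unsigned location)
{
   /* One bit per varying slot, mirroring the per-stage io masks. */
   const uint64_t bit = UINT64_C(1) << location;
   return (producer_outputs_written & bit) && !(consumer_inputs_read & bit);
}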
| |
Having this separate just makes the code harder to follow, and
requires an extra walk of the IR.
Reviewed-by: Thomas Helland <[email protected]>
| |
The functionality is used by glsl and mesa, with the latter already
depending on the former.
With this in place the src/util/ static library libmesautil.la no longer
has a C++ dependency. Thus objects which use it (like libEGL) don't need
the C++ link.
Cc: "17.2" <[email protected]>
Fixes: 02cc35937277 ("egl/wayland: Use linux-dmabuf interface for buffers")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101851
Signed-off-by: Emil Velikov <[email protected]>
Suggested-by: Jason Ekstrand <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Tested-by: Mike Lothian <[email protected]>
Tested-by: James Harvey <[email protected]>
| |
This is a further lowering of default-block uniform loads that transforms
load_uniform intrinsics into load_ubo intrinsics. This simplifies the rest
of the backend.
v2: transform from load_uniform instead of straight from variables
Reviewed-by: Eric Anholt <[email protected]>
| |
This pass is a replacement for the nir_lower_samplers pass, which has the
advantage of keeping sampler references as derefs. This allows a unified
treatment of texture instructions and image intrinsics in the backend.
| |
i965 will want these to be scalar operations.
Reviewed-by: Connor Abbott <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Specifically, constant fold intrinsics from ARB_shader_group_vote, but I
suspect it'll be useful for other things in the future.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
The old table based spirv_*_to_string functions would return NULL for
any values "inside" the table that didn't have entries. The tables also
needed to be updated by hand each time a new spirv.h was imported.
Generate the file instead.
v2: Make this script work more like src/mesa/main/format_fallback.py.
Suggested by Jason. Remove SCons support. Suggested by Jason and
Emil. Put all the build work in Makefile.nir.am in lieu of adding a new
Makefile.spirv.am. Suggested by Emil. Add support for Android builds
based on code provided by Emil.
Signed-off-by: Ian Romanick <[email protected]>
Suggested-by: Jason Ekstrand <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
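The generated functions are, in effect, exhaustive switches over the values
that actually appear in spirv.h, with everything else mapping to NULL. A
hypothetical sketch of that shape (spirv_capability_to_string and its
unsigned parameter are assumptions here, not the generated code itself):

#include <stddef.h>

const char *
spirv_capability_to_string(unsigned cap)
{
   switch (cap) {
   case 0: return "Matrix";   /* SpvCapabilityMatrix */
   case 1: return "Shader";   /* SpvCapabilityShader */
   /* ... one case per value present in spirv.h ... */
   default: return NULL;      /* holes and unknown values */
   }
}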
| |
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
| |
This is equivalent to what mesa/st does in glsl_to_tgsi. For most hw
there isn't a particularly good reason to treat these differently.
Signed-off-by: Rob Clark <[email protected]>
Acked-by: Jason Ekstrand <[email protected]>
| |
The algorithms used by this pass, especially for division, are heavily
based on the work Ian Romanick did for the similar int64 lowering pass
in the GLSL compiler.
v2: Properly handle vectors
v3: Get rid of log2_denom stuff. Since we're using bcsel, we do all the
calculations anyway and this is just extra instructions.
v4:
- Add back in the log2_denom stuff since it's needed for ensuring that
the shifts don't overflow.
- Rework the looping part of the pass to be easier to expand.
Reviewed-by: Matt Turner <[email protected]>
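For reference, the underlying division scheme is ordinary restoring
(shift-and-subtract) long division; here it is sketched with plain C
integers rather than the 32-bit ops and bcsel chains the pass actually
emits (udiv64_by_shifts is an invented name, illustrative only):

#include <stdint.h>

static uint64_t
udiv64_by_shifts(uint64_t n, uint64_t d, uint64_t *rem)
{
   uint64_t q = 0, r = 0;

   if (d == 0) {                /* undefined in GLSL; pick a convention */
      *rem = n;
      return ~0ull;
   }

   for (int i = 63; i >= 0; i--) {
      r = (r << 1) | ((n >> i) & 1);   /* bring down the next dividend bit */
      if (r >= d) {                    /* the pass selects with bcsel here */
         r -= d;
         q |= 1ull << i;
      }
   }

   *rem = r;
   return q;
}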
| |
There's nothing "double" about it other than, perhaps, the fact that it
packs two 32-bit values.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
This uses disk_cache.c to write out a serialization of various
state that's required in order to successfully load and use a
binary written out by a driver's backend; this state is referred to as
"metadata" throughout the implementation.
This initial version is intended to work with all stages besides
compute.
This patch is based on the initial work done by Carl.
V2: extend the file's doxygen comment to cover some of the
design decisions.
V3:
- skip cache for fixed function shaders
- add int64 support
- fix glsl IR program parameter caching/restore and cache the
parameter values which are used by gallium backends.
- use new link status enum
V4:
- add compute program support
Reviewed-by: Nicolai Hähnle <[email protected]>
| |
v2: Rename lower_64bit.cpp and lower_64bit_test.cpp to lower_int64.
Suggested by Matt.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
These functions are directly available in shaders. A #define is added
to detect their presence. This allows these functions to be tested using
piglit regardless of whether the driver uses them for lowering. The
GLSL spec says that functions and macros beginning with __ are reserved
for use by the implementation... hey, that's us!
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
This tries to move comparisons (a common source of boolean values)
closer to their first use. For GPUs which use condition codes,
this can eliminate a lot of temporary booleans and comparisons
which reload the condition code register based on a boolean.
V2: (Timothy Arceri)
- fix moving comparisons for phis so we don't end up with:
vec1 32 ssa_227 = phi block_34: ssa_1, block_38: ssa_240
vec1 32 ssa_235 = feq ssa_227, ssa_1
vec1 32 ssa_230 = phi block_34: ssa_221, block_38: ssa_235
- add nir_op_i2b/nir_op_f2b to the list of comparisons.
V3: (Timothy Arceri)
- tidy up suggested by Jason.
- add inot/fnot to move comparison list
V4: (Jason Ekstrand)
- clean up move_comparison_source
- get rid of the tuple
- rework phi handling
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]> [v1]
Reviewed-by: Jason Ekstrand <[email protected]>
| |
In Vulkan, we always have both the TCS and TES available in the same
pipeline, so we can simply use the TCS OutputVertices execution mode
value as the TES PatchVertices built-in.
For GLSL, we handle this in the linker. But we could use this pass
in the case when both TCS and TES are linked together, if we wanted.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
| |
Reviewed-by: Timothy Arceri <[email protected]>
| |
This matches the naming of nir_lower_vars_to_ssa, the other to-SSA pass.
| |
When shaders come in from SPIR-V, we handle continue blocks by placing
the contents of the continue inside of a "if (!first_iteration)". We do
this so that we can properly handle the fact that continues in SPIR-V
jump to the continue block at the end of the loop rather than jumping
directly to the top of the loop like they do in NIR. In particular, the
increment step of a simple for loop ends up in the continue block. This
pass looks for this case in loops that don't actually have any continues
and moves the continue contents to the end of the loop instead. We need
this because loop unrolling doesn't work if the increment is inside of a
condition.
Reviewed-by: Timothy Arceri <[email protected]>
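In C terms the transformation looks roughly like the following (a
hypothetical analogue, not the pass's actual NIR):

#include <stdbool.h>
#include <stdio.h>

/* Before: the increment lives in a guarded continue block at the top of
 * the loop body, skipped on the first iteration. */
static void spirv_style_loop(void)
{
   int i = 0;
   bool first_iteration = true;
   while (true) {
      if (!first_iteration)
         i++;                     /* increment hidden behind a condition */
      first_iteration = false;
      if (i >= 4)
         break;
      printf("body %d\n", i);
   }
}

/* After: the continue contents move to the end of the loop, giving the
 * unroller a plain induction step to analyze. */
static void nir_style_loop(void)
{
   for (int i = 0; i < 4; i++)
      printf("body %d\n", i);
}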
| |
Reviewed-by: Timothy Arceri <[email protected]>
| |
V2:
- tidy ups suggested by Connor.
- tidy up cloning logic and handle copy propagation
based on a suggestion by Connor.
- use nir_ssa_def_rewrite_uses to fix up lcssa phis
suggested by Connor.
- add support for complex loop unrolling (two terminators)
- handle the case where the ssa def's use outside the loop is already a phi
- support unrolling loops with multiple terminators when the trip count
is known for each terminator
V3:
- set correct num_components when creating phi in complex unroll
- rewrite the remap table update based on Jason's suggestions.
- remove the unneeded extract_loop_body() helper as suggested by Jason.
- simplify the lcssa phi fix-up code for simple loops as per Jason's suggestions.
- use mem context to keep track of hash table memory as suggested by Jason.
- move is_{complex,simple}_loop helpers to the unroll code
- require nir_metadata_block_index
- partially rewrote complex unroll to be simpler and easier to follow.
V4:
- use rzalloc() when creating a nir_phi_src without setting pred right away;
fixes a regression caused by ralloc() no longer zeroing memory.
V5:
- simplify calling of complex_unroll()
- use new loop terminator fields to get the break/continue from blocks
and simplify loop unrolling code
- handle slightly less trivial loop terminators: if branches can
now contain instructions, but only a single block.
- use nir_print-style IR snippets in the unroll function descriptions
- add a better explanation, and a variable, for why we need to clone
additional times when the second terminator is the limiting
terminator.
- partially convert out of ssa before unrolling loops (suggested by Jason)
v6:
- remove unused nir_builder
- use Jason's new from-SSA helper
- tidy/fixup cursor use
- unroll terminators that contain control flow correctly
- unroll complex loops with control flow before the terminators
correctly
Reviewed-by: Jason Ekstrand <[email protected]>
| |
V2: Do a "depth first search" to convert to LCSSA
V3: Small comment fixup
V4: Rebase, adapt to removal of function overloads
V5: Rebase, adapt to relocation of nir to compiler/nir
Still need to adapt to potential if-uses
Work around nir_validate issue
V6 (Timothy):
- tidy lcssa and stop leaking memory
- don't rewrite the src for the lcssa phi node
- validate lcssa phi srcs to avoid postvalidate assert
- don't add new phi if one already exists
- more lcssa phi validation fixes
- Rather than marking ssa defs inside a loop just mark blocks inside
a loop. This is simpler and fixes lcssa for intrinsics which do
not have a destination.
- don't create LCSSA phis for loops we won't unroll
- require loop metadata for lcssa pass
- handle the case where the ssa def's use outside the loop is already a phi
V7: (Timothy)
- pass indirect mask to metadata call
v8: (Timothy)
- make convert to lcssa a helper function rather than a nir pass
- replace the inside-loop bitset with on-the-fly block index logic.
- remove lcssa phi validation special cases
- inline code from useless helpers, suggested by Jason.
- always do lcssa on loops, suggested by Jason.
- stop making lcssa phis special. Add as many sources as the block
has predecessors, as suggested by Jason.
V9: (Timothy)
- fix regression with the is_lcssa_phi field not being initialised
to false now that ralloc() doesn't zero out memory.
V10: (Timothy)
- remove extra braces in SSA example, pointed out by Topi
V11: (Timothy)
- add missing support for LCSSA phis in if conditions.
V12: (Timothy)
- small tidy up suggested by Jason.
- always create lcssa phi even if it just points to an lcssa
phi from an inner loop
Reviewed-by: Jason Ekstrand <[email protected]>
| |
This pass detects induction variables and calculates the
trip count of loops to be used for loop unrolling.
V2: Rebase, adapt to removal of function overloads
V3: (Timothy Arceri)
- don't try to find trip count if loop terminator conditional is a phi
- fix trip count for do-while loops
- replace conditional type != alu assert with return
- disable unrolling of loops with continues
- multiple fixes to memory allocation, stop leaking and don't destroy
structs we want to use for unrolling.
- fix iteration count bugs when induction var not on RHS of condition
- add FIXME for && conditions
- calculate trip count for unsigned induction/limit vars
V4: (Timothy Arceri)
- count instructions in a loop
- set the limiting_terminator even if we can't find the trip count for
all terminators. This is needed for complex unrolling where we handle
2 terminators and the trip count is unknown for one of them.
- restructure structs so we don't keep information that isn't required after
analysis, and remove dead fields.
- force unrolling in some cases as per the rules in the GLSL IR pass
V5: (Timothy Arceri)
- fix metadata mask value 0x10 vs 0x16
V6: (Timothy Arceri)
- merge the loop_variable and nir_loop_variable structs and lists, as suggested by Jason
- remove the induction var hash table and store a pointer to the induction information in
the loop_variable, as suggested by Jason.
- use lowercase list_addtail(), as suggested by Jason.
- tidy up init_loop_block() as per Jason's suggestions.
- replace the switch with nir_op_infos[alu->op].num_inputs == 2 in
is_var_basic_induction_var(), as suggested by Jason.
- use nir_block_last_instr() in, and rename, foreach_cf_node_ex_loop(), as suggested
by Jason.
- fix the else check in is_trivial_loop_terminator() as per Connor's suggestions.
- simplify the offset for induction variables incremented before the exit condition is
checked.
- replace nir_op_isub check with assert() as it should have been lowered away.
V7: (Timothy Arceri)
- use rzalloc() on nir_loop struct creation. Worked previously because ralloc()
was broken and always zeroed the struct.
- fix cf_node_find_loop_jumps() to find jumps when loops contain
nested if statements. Code is tidier as a result.
V8: (Timothy Arceri)
- move is_trivial_loop_terminator() to nir.h so we can use it to assert in
the loop unroll pass
- fix the analysis to not bail when looking for the terminator when the break is in the else
rather than the if
- added new loop terminator fields: break_block, continue_from_block and
continue_from_then so we don't have to gather these when doing unrolling.
- get the correct array length when forcing unrolling of variable-indexed
arrays that are the same size as the iteration count
- add support for induction variables of type float
- update the trivial loop terminator check to allow an if containing
instructions as long as both branches contain only a single
block.
V9: (Timothy)
- bunch of tidy ups and simplifications suggested by Jason.
- rewrote trivial terminator detection, now the only restriction is there
must be no nested jumps, anything else goes.
- rewrote the iteration test to use nir_eval_const_opcode().
- count instruction properly even when forcing an unroll.
- bunch of other tidy ups and simplifications.
V10: (Timothy)
- some trivial tidy ups suggested by Jason.
- conditional fix for break inside continue branch by Jason.
Reviewed-by: Jason Ekstrand <[email protected]>
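For the simplest shape, an up-counting induction variable with constant
start, limit and step, the trip count the analysis computes reduces to a
ceiling division, as in this hedged sketch (the pass evaluates the
equivalent on NIR constants via nir_eval_const_opcode()):

#include <stdint.h>

static int32_t
simple_trip_count(int32_t start, int32_t limit, int32_t step)
{
   /* Canonical "for (i = start; i < limit; i += step)" with step > 0. */
   if (step <= 0 || start >= limit)
      return 0;   /* not taken, or not a shape this sketch handles */
   return (limit - start + step - 1) / step;   /* ceil((limit - start) / step) */
}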
| |
Very soon this visitor will get more complicated. The users of the
existing ir_variable_refcount visitor won't need the coming
functionality, and this use doesn't need much of the functionality of
ir_variable_refcount.
v2: ir_array_refcount_visitor::get_variable_entry cannot return NULL, so
don't check it. Suggested by Timothy.
Signed-off-by: Ian Romanick <[email protected]>
Cc: [email protected]
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Reviewed-by: Iago Toral Quiroga <[email protected]>
| |
v2: Use nir_is_per_vertex_io() rather than is_arrays_of_arrays().
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
| |
It's not dependent on GLSL and it can be useful for shader caches that don't
deal with GLSL.
v2: address review comments
v3: keep the other 3 lines in configure.ac
Reviewed-by: Emil Velikov <[email protected]>
| |
Otherwise it'll be missing from the release tarball.
Signed-off-by: Emil Velikov <[email protected]>
| |
This is only in libstandalone currently because it will only be used in
the stand-alone compiler.
v2: Change the signature of the generated function. The ir_factory is
created in the generator, and an availability predicate is taken as a
parameter.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
| |
This is ported from GLSL and converts
if (cond)
discard;
into
discard_if(cond);
This removes a block, but is also needed by radv
to work around a bug in the LLVM backend.
v2: handle if (a) discard_if(b) (nha)
cleanup and drop pointless loop (Matt)
make sure there are no dependent phis (Eric)
v3: make sure only one instruction in the then block.
v4: remove sneaky tabs, add cursor init (Eric)
Reviewed-by: Eric Anholt <[email protected]>
Cc: "13.0" <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
| |
Otherwise it'll be missing from the tarball.
Fixes: 094fe3a9591 ("nir: move nir_shader_info to a common compiler header")
Signed-off-by: Emil Velikov <[email protected]>
| |
This temporary helper is no longer needed now that we have finished
refactoring common shader metadata.
Reviewed-by: Jason Ekstrand <[email protected]>
| |
This function is added here to ease refactoring towards using the new shared
shader_info. Once refactoring is complete and values are set directly it
will be removed.
We call it from _mesa_copy_linked_program_data() rather than glsl_to_nir()
so that the values will be set for all drivers. In order to do this some
calls need to be moved around so that we make sure to call
do_set_program_inouts() before _mesa_copy_linked_program_data()
Reviewed-by: Jason Ekstrand <[email protected]>
| |
Once upon a time, this was used to extract prototypes from the shader
containing GLSL built-in functions. This was removed by f5692f45 in
November 2010 for Mesa 7.10.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
| |
This code provides for an on-disk cache of objects. Objects are stored
and retrieved via names that are arbitrary 20-byte sequences
(intended to be SHA-1 hashes of something identifying the
content). The directory used for the cache can be specified by means
of environment variables in the following priority order:
$MESA_GLSL_CACHE_DIR
$XDG_CACHE_HOME/mesa
<user-home-directory>/.cache/mesa
By default the cache will be limited to a maximum size of 1GB. The
environment variable:
$MESA_GLSL_CACHE_MAX_SIZE
can be set (at the time of GL context creation) to choose some other
size. This variable is a number that can optionally be followed by
'K', 'M', or 'G' to select a size in kilobytes, megabytes, or
gigabytes. By default, an unadorned value will be interpreted as
gigabytes.
The cache will be entirely disabled at runtime if the variable
MESA_GLSL_CACHE_DISABLE is set at the time of GL context creation.
Many thanks to Kristian Høgsberg <[email protected]> for the initial
implementation of code that led to this patch. In particular, the idea
of using an mmapped file (indexed by a portion of the SHA-1) for the
efficient implementation of cache_has_key was entirely his
idea. Kristian also provided some very helpful advice in discussions
regarding various race conditions to be avoided in this code.
Signed-off-by: Timothy Arceri <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
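As an illustration of the size handling described above, a value such as
"512M" or a bare "2" could be parsed along these lines (a hypothetical
sketch; parse_cache_max_size is not the actual disk_cache.c code):

#include <stdint.h>
#include <stdlib.h>

static uint64_t
parse_cache_max_size(const char *str)
{
   char *end;
   uint64_t size = strtoull(str, &end, 10);

   switch (*end) {
   case 'K': case 'k': size *= UINT64_C(1) << 10; break;
   case 'M': case 'm': size *= UINT64_C(1) << 20; break;
   case 'G': case 'g': size *= UINT64_C(1) << 30; break;
   case '\0':          size *= UINT64_C(1) << 30; break;  /* bare number: gigabytes */
   }
   return size;
}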
| |
Immediately prior to this patch,
diff -wud src/glsl/ir_constant_expression.cpp \
src/glsl/ir_expression_operation_constant.h
should be "minimal."
v3: With much help from José Fonseca, fix the SCons build.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
'diff -ud' is clean.
v2: Massive rebase.
v3: With much help from José Fonseca, fix the SCons build.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Acked-by: Dylan Baker <[email protected]>
| |
No change except to the copyright symbol. The next patch will generate
this file with Python, and Unicode + Python = pure rage.
v2: Massive rebase.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
There are differences in where end-of-line comments are placed, but
'diff -wud' is clean.
v2: Massive rebase.
v3: With much help from José Fonseca, fix SCons build.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Acked-by: Dylan Baker <[email protected]>
| |
Many GPUs cannot handle GL_KHR_blend_equation_advanced natively, and
need to emulate it in the pixel shader. This lowering pass implements
all the necessary math for advanced blending. It fetches the existing
framebuffer value using the MESA_shader_framebuffer_fetch built-in
variables, and the previous commit's state var uniform to select
which equation to use.
This is done at the GLSL IR level to make it easy for all drivers to
implement the GL_KHR_blend_equation_advanced extension and share code.
Drivers need to hook up MESA_shader_framebuffer_fetch functionality:
1. Hook up the fb_fetch_output variable
2. Implement BlendBarrier()
Then to get KHR_blend_equation_advanced, they simply need to:
3. Disable hardware blending based on ctx->Color._AdvancedBlendEnabled
4. Call this lowering pass.
Very little driver specific code should be required.
v2: Handle multiple output variables per render target (which may exist
due to ARB_enhanced_layouts), and array variables (even with one
render target, we might have out vec4 color[1]), and non-vec4
variables (it's easier than finding spec text to justify not
handling it). Thanks to Francisco Jerez for the feedback.
v3: Lower main returns so that we have a single exit point where we
can add our blending epilogue (caught by Francisco Jerez).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
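For a sense of the math the pass emits, the MULTIPLY equation from
KHR_blend_equation_advanced combines the unpremultiplied colors with
f(Cs,Cd) = Cs*Cd and weights the terms by the source and destination
alphas. A C sketch of the formula only (the pass generates the equivalent
GLSL IR; these helper names are invented):

struct rgba { float r, g, b, a; };

/* f(Cs,Cd)*As*Ad + Cs*As*(1-Ad) + Cd*Ad*(1-As), with f(Cs,Cd) = Cs*Cd */
static float
multiply_channel(float cs, float cd, float as, float ad)
{
   return cs * cd * as * ad + cs * as * (1.0f - ad) + cd * ad * (1.0f - as);
}

static struct rgba
blend_multiply(struct rgba src, struct rgba dst)
{
   struct rgba out;
   out.r = multiply_channel(src.r, dst.r, src.a, dst.a);
   out.g = multiply_channel(src.g, dst.g, src.a, dst.a);
   out.b = multiply_channel(src.b, dst.b, src.a, dst.a);
   out.a = src.a + dst.a * (1.0f - src.a);
   return out;
}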
| |
vc4 wants to have per-scalar IO load/stores so that dead code elimination
can happen on a more granular basis, which it has been doing in the
backend using a multiplication by 4 of the intrinsic's driver_location.
We can represent it properly in the NIR using the first_component field,
though.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
No change except to the copyright symbol. The next patch will generate
this file with Python, and Unicode + Python = pure rage.
v2: Massive rebase... I guess a lot can change in a year.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
This pass is similar to propagate_invariance in the GLSL compiler. The
real "output" of this pass is that any algebraic operations which are
eventually consumed by an invariant variable get marked as "exact".
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Cc: "12.0" <[email protected]>
| |
Signed-off-by: Jason Ekstrand <[email protected]>
Cc: "12.0" <[email protected]>
| |
Fixes: dfbae7d64f4 ("nir/algebraic: support for power-of-two optimizations")
Signed-off-by: Emil Velikov <[email protected]>
| |
Right now libglsl.la depends on libnir.la so putting it in libnir.la
adds a dependency on libglsl.la that goes the wrong direction.
Reviewed-by: Emil Velikov <[email protected]>
Reviewed-by: Kristian Høgsberg <[email protected]>
| |
Split standalone glsl_compiler into a libstandalone.la and a thin
main.cpp. This way drivers can re-use the glsl standalone frontend in
their own standalone compilers.
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>