author     Francisco Jerez <[email protected]>   2015-02-16 13:38:39 +0200
committer  Francisco Jerez <[email protected]>   2015-02-23 20:55:40 +0200
commit     f80af89d48f2c9045c7e0438a1b35d09be3e84d5 (patch)
tree       761fbb6143f5d77e3389517741a0372842fe1475
parent     34c93fd7f119fa824062e05377de849b8a2da0e6 (diff)
ra: Disable round-robin strategy for optimistically colorable nodes.
The round-robin allocation strategy is expected to decrease the amount of
false dependencies created by the register allocator and give the post-RA
scheduling pass more freedom to move instructions around.  On the other hand
it has the disadvantage of increasing fragmentation and decreasing the number
of equally-colored nearby nodes, which increases the likelihood of failure in
the presence of optimistically colorable nodes.

This patch disables the round-robin strategy for optimistically colorable
nodes.  These typically arise in situations of high register pressure or for
registers with large live intervals; in both cases the task of the
instruction scheduler shouldn't be constrained excessively by the dense
packing of those nodes, and a spill (or, on Intel hardware, a fall-back to
SIMD8 mode) is invariably worse than a slightly less optimal scheduling.

Shader-db results on the i965 driver:

total instructions in shared programs: 5488539 -> 5488489 (-0.00%)
instructions in affected programs:     1121 -> 1071 (-4.46%)
helped:                                1
HURT:                                  0
GAINED:                                49
LOST:                                  5

v2: Re-enable round-robin already for the lowest one of the nodes pushed
    optimistically onto the stack (Connor).
v3: Use UINT_MAX instead of ~0, open-code MIN2 (Jason, Connor).

Reviewed-by: Connor Abbott <[email protected]>
-rw-r--r--   src/util/register_allocate.c   24
1 file changed, 23 insertions(+), 1 deletion(-)
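For readers unfamiliar with the optimistic (Briggs-style) coloring the commit
message refers to, here is a minimal, self-contained sketch, not Mesa code;
toy_graph, toy_simplify, toy_push, K_REGS and N_NODES are hypothetical names.
Trivially colorable nodes (degree < k) are pushed onto the stack first, and
once the allocator gets stuck the remaining nodes are pushed optimistically,
with the start of that region of the stack recorded much like the patch's
stack_optimistic_start.

/* Toy sketch, not the Mesa implementation: Briggs-style simplification that
 * records where the optimistically pushed region of the stack begins. */
#include <limits.h>
#include <stdbool.h>

#define K_REGS   4   /* number of colors/registers (hypothetical) */
#define N_NODES  8   /* number of interference-graph nodes (hypothetical) */

struct toy_graph {
   unsigned adjacency[N_NODES];      /* bitmask of interfering nodes */
   unsigned degree[N_NODES];         /* current degree in the graph */
   bool in_stack[N_NODES];
   int reg[N_NODES];                 /* assigned color, -1 while unassigned */
   unsigned stack[N_NODES];
   unsigned stack_count;
   unsigned stack_optimistic_start;  /* UINT_MAX while nothing is optimistic */
   bool round_robin;
};

static void toy_push(struct toy_graph *g, unsigned n)
{
   g->stack[g->stack_count++] = n;
   g->in_stack[n] = true;
   /* A real allocator would also decrement the degree of n's neighbors. */
}

void toy_simplify(struct toy_graph *g)
{
   bool progress = true;

   g->stack_count = 0;
   g->stack_optimistic_start = UINT_MAX;

   while (progress) {
      progress = false;

      /* Push every node that is trivially colorable (degree < k). */
      for (unsigned n = 0; n < N_NODES; n++) {
         if (!g->in_stack[n] && g->degree[n] < K_REGS) {
            toy_push(g, n);
            progress = true;
         }
      }

      /* Stuck: push one remaining node optimistically and remember where
       * the optimistic region of the stack starts, as the patch does. */
      if (!progress) {
         for (unsigned n = 0; n < N_NODES; n++) {
            if (!g->in_stack[n]) {
               if (g->stack_optimistic_start == UINT_MAX)
                  g->stack_optimistic_start = g->stack_count;
               toy_push(g, n);
               progress = true;
               break;
            }
         }
      }
   }
}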
diff --git a/src/util/register_allocate.c b/src/util/register_allocate.c
index 684ee5d6cf5..2ad8c3ce11a 100644
--- a/src/util/register_allocate.c
+++ b/src/util/register_allocate.c
@@ -168,6 +168,12 @@ struct ra_graph {

    unsigned int *stack;
    unsigned int stack_count;
+
+   /**
+    * Tracks the start of the set of optimistically-colored registers in the
+    * stack.
+    */
+   unsigned int stack_optimistic_start;
 };

 /**
@@ -454,6 +460,7 @@ static void
 ra_simplify(struct ra_graph *g)
 {
    bool progress = true;
+   unsigned int stack_optimistic_start = UINT_MAX;
    int i;

    while (progress) {
@@ -482,6 +489,9 @@ ra_simplify(struct ra_graph *g)
       }

       if (!progress && best_optimistic_node != ~0U) {
+         if (stack_optimistic_start == UINT_MAX)
+            stack_optimistic_start = g->stack_count;
+
          decrement_q(g, best_optimistic_node);
          g->stack[g->stack_count] = best_optimistic_node;
          g->stack_count++;
@@ -489,6 +499,8 @@ ra_simplify(struct ra_graph *g)
          progress = true;
       }
    }
+
+   g->stack_optimistic_start = stack_optimistic_start;
 }

 /**
@@ -542,7 +554,17 @@ ra_select(struct ra_graph *g)
       g->nodes[n].reg = r;
       g->stack_count--;

-      if (g->regs->round_robin)
+      /* Rotate the starting point except for any nodes above the lowest
+       * optimistically colorable node.  The likelihood that we will succeed
+       * at allocating optimistically colorable nodes is highly dependent on
+       * the way that the previous nodes popped off the stack are laid out.
+       * The round-robin strategy increases the fragmentation of the register
+       * file and decreases the number of nearby nodes assigned to the same
+       * color, what increases the likelihood of spilling with respect to the
+       * dense packing strategy.
+       */
+      if (g->regs->round_robin &&
+          g->stack_count - 1 <= g->stack_optimistic_start)
          start_search_reg = r + 1;
    }
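To see what the new condition buys, the toy select loop below (again
hypothetical and building on the toy_graph sketch above, not Mesa's
ra_select) pops nodes off the stack and keeps rotating the round-robin start
register only while the next node to be popped is at or below the lowest
optimistically pushed node; from there on the start register stays put, so
the optimistic nodes are packed densely and a suitable register is more
likely to be free when a high-degree node comes off the stack.

/* Toy select loop (hypothetical, not Mesa's ra_select), using struct
 * toy_graph from the earlier sketch. */
bool toy_select(struct toy_graph *g)
{
   int start_search_reg = 0;

   while (g->stack_count != 0) {
      unsigned n = g->stack[g->stack_count - 1];
      int r = -1;

      /* Find a register, starting at start_search_reg, that no
       * already-colored neighbor of n is using. */
      for (int i = 0; i < K_REGS; i++) {
         int c = (start_search_reg + i) % K_REGS;
         bool available = true;

         for (unsigned m = 0; m < N_NODES; m++) {
            if ((g->adjacency[n] & (1u << m)) && g->reg[m] == c) {
               available = false;
               break;
            }
         }
         if (available) {
            r = c;
            break;
         }
      }

      if (r < 0)
         return false;   /* optimistic node failed: spill (or SIMD8 fallback) */

      g->reg[n] = r;
      g->stack_count--;

      /* Same gate as the patch: keep rotating only while the next node to
       * pop is at or below the lowest optimistically pushed node; above
       * that, pack densely. */
      if (g->round_robin &&
          g->stack_count - 1 <= g->stack_optimistic_start)
         start_search_reg = r + 1;
   }

   return true;
}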