[PATCH] sched: avoid div in rebalance_tick
Avoid an expensive integer divide 3 times per CPU per tick.

A userspace test of this loop went from 26ns down to 19ns on a G5, and
from 123ns down to 28ns on a P3.

(Also avoid a variable bit shift, as suggested by Alan. The effect
of this wasn't noticeable on the CPUs I tested with.)

Signed-off-by: Nick Piggin <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Alan Cox <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Nick Piggin authored and Linus Torvalds committed Feb 12, 2007
1 parent 0a9ac38 commit ff91691
Showing 1 changed file with 5 additions and 3 deletions.
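The timing numbers quoted in the log message come from a userspace test of the averaging loop changed below; that harness is not part of the patch. As a rough illustration only, a minimal standalone benchmark of the patched form of the loop might look like this (fake_rq, the iteration count, and the varying load value are all invented for the sketch):

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Simplified stand-in for the few struct rq fields the loop touches. */
struct fake_rq {
	unsigned long raw_weighted_load;
	unsigned long cpu_load[3];
};

/* The averaging loop in its patched form: scale += scale instead of a
 * variable shift, and >> i instead of a divide by scale. */
static void update_load(struct fake_rq *rq)
{
	unsigned long this_load = rq->raw_weighted_load;
	unsigned int i, scale;

	for (i = 0, scale = 1; i < 3; i++, scale += scale) {
		unsigned long old_load = rq->cpu_load[i];
		unsigned long new_load = this_load;

		if (new_load > old_load)
			new_load += scale - 1;
		rq->cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}
}

int main(void)
{
	struct fake_rq rq = { .raw_weighted_load = 128 };
	const long iters = 10 * 1000 * 1000;
	struct timespec t0, t1;
	long n;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (n = 0; n < iters; n++) {
		/* Vary the input a little so the work is not constant-folded. */
		rq.raw_weighted_load = 128 + (n & 7);
		update_load(&rq);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.1f ns per call (cpu_load: %lu %lu %lu)\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / iters,
	       rq.cpu_load[0], rq.cpu_load[1], rq.cpu_load[2]);
	return 0;
}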
kernel/sched.c (5 additions & 3 deletions)
@@ -2897,14 +2897,16 @@ static void active_load_balance(struct rq *busiest_rq, int busiest_cpu)
 static void update_load(struct rq *this_rq)
 {
 	unsigned long this_load;
-	int i, scale;
+	unsigned int i, scale;
 
 	this_load = this_rq->raw_weighted_load;
 
 	/* Update our load: */
-	for (i = 0, scale = 1; i < 3; i++, scale <<= 1) {
+	for (i = 0, scale = 1; i < 3; i++, scale += scale) {
 		unsigned long old_load, new_load;
 
+		/* scale is effectively 1 << i now, and >> i divides by scale */
+
 		old_load = this_rq->cpu_load[i];
 		new_load = this_load;
 		/*
@@ -2914,7 +2916,7 @@ static void update_load(struct rq *this_rq)
 		 */
 		if (new_load > old_load)
 			new_load += scale-1;
-		this_rq->cpu_load[i] = (old_load*(scale-1) + new_load) / scale;
+		this_rq->cpu_load[i] = (old_load*(scale-1) + new_load) >> i;
 	}
 }
 
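Since scale is only ever 1, 2, or 4 here, it is always equal to 1 << i when the body runs, and for unsigned values dividing by a power of two and shifting right by i give exactly the same result; likewise scale += scale walks through the same 1, 2, 4 sequence as scale <<= 1 without needing a variable-count shift. A throwaway check of that equivalence (not part of the patch, illustration only):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long old_load, this_load;

	/* Run the old (divide) and new (shift) expressions side by side over
	 * a range of inputs; they must agree for every i in 0..2. */
	for (old_load = 0; old_load < 2048; old_load++) {
		for (this_load = 0; this_load < 2048; this_load++) {
			unsigned int i, scale;

			for (i = 0, scale = 1; i < 3; i++, scale += scale) {
				unsigned long new_load = this_load;
				unsigned long v;

				if (new_load > old_load)
					new_load += scale - 1;
				v = old_load * (scale - 1) + new_load;
				assert(v / scale == (v >> i));
			}
		}
	}
	printf("divide and shift forms agree\n");
	return 0;
}

Rounding behaviour is also unchanged: with raw_weighted_load held at 128 and cpu_load[2] starting at 0, the i == 2 term steps through 32, 56, 74, 88, ... toward 128 under both the old and the new expression.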
