sched/numa: Don't scale the imbalance
It's far too easy to get ridiculously large imbalance pct when you
scale it like that. Use a fixed 125% for now.

Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
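
To make the commit message concrete, here is a small user-space sketch of the helper this commit removes. The distance table (10 for local, 160 for the farthest level) and the assumption that sched_domains_numa_scale equals the local node distance are illustrative only, not values taken from this commit; real distances come from the machine's firmware (SLIT) tables.

/*
 * Sketch only: mimics the removed numa_scale() with made-up NUMA
 * distances (10 = local, 160 = farthest level).
 */
#include <stdio.h>

static unsigned long sched_domains_numa_scale = 10;             /* assumed: local distance */
static int sched_domains_numa_distance[] = { 10, 20, 40, 160 }; /* assumed example SLIT values */

static unsigned long numa_scale(unsigned long x, int level)
{
        return x * sched_domains_numa_distance[level] / sched_domains_numa_scale;
}

int main(void)
{
        for (int level = 0; level < 4; level++)
                printf("level %d: imbalance_pct = %lu\n",
                       level, 100 + numa_scale(25, level));
        /*
         * Prints 125, 150, 200, 500 -- the last is the "ridiculously
         * large" case the commit message refers to; a fixed 125 avoids it.
         */
        return 0;
}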
Peter Zijlstra authored and Ingo Molnar committed May 14, 2012
1 parent 04f733b commit 870a0bb
Showing 1 changed file with 1 addition and 6 deletions.
7 changes: 1 addition & 6 deletions kernel/sched/core.c
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
@@ -6261,11 +6261,6 @@ static int *sched_domains_numa_distance;
 static struct cpumask ***sched_domains_numa_masks;
 static int sched_domains_curr_level;
 
-static inline unsigned long numa_scale(unsigned long x, int level)
-{
-        return x * sched_domains_numa_distance[level] / sched_domains_numa_scale;
-}
-
 static inline int sd_local_flags(int level)
 {
         if (sched_domains_numa_distance[level] > REMOTE_DISTANCE)
@@ -6286,7 +6281,7 @@ sd_numa_init(struct sched_domain_topology_level *tl, int cpu)
                 .min_interval = sd_weight,
                 .max_interval = 2*sd_weight,
                 .busy_factor = 32,
-                .imbalance_pct = 100 + numa_scale(25, level),
+                .imbalance_pct = 125,
                 .cache_nice_tries = 2,
                 .busy_idx = 3,
                 .idle_idx = 2,
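
For context, imbalance_pct is the threshold the load balancer uses when comparing the busiest group's load against the local group's. A rough, simplified sketch of that comparison (not the exact find_busiest_group() code of this kernel version) shows why a fixed 125 versus a scaled value like 500 matters:

/*
 * Simplified sketch of how imbalance_pct gates balancing: the busiest
 * group must exceed the local group's load by (imbalance_pct - 100)
 * percent before tasks are moved.  With 125 that is a 25% margin; with
 * a scaled value like 500 it would take 5x the local load.
 */
static int worth_balancing(unsigned long busiest_load,
                           unsigned long local_load,
                           unsigned int imbalance_pct)
{
        return 100 * busiest_load > imbalance_pct * local_load;
}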