android_kernel_motorola_sm6225/kernel/sched
Peter Zijlstra c22402a2f7 sched/fair: Let minimally loaded cpu balance the group
Currently we let the leftmost (or first idle) cpu ascend the
sched_domain tree and perform load-balancing. The result is that the
busiest cpu in the group might end up performing this function and
pulling more load to itself. The next load-balance pass will then try to
equalize this again.

Change this to pick the least loaded cpu to perform higher domain
balancing.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-v8zlrmgmkne3bkcy9dej1fvm@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:00:51 +02:00
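
The idea behind the change can be sketched as follows. This is a minimal,
user-space illustration rather than the actual kernel/sched/fair.c code;
struct cpu_stat, pick_balance_cpu_old() and pick_balance_cpu_new() are
hypothetical names used only for this example.

#include <stdio.h>
#include <stddef.h>
#include <limits.h>

/* Hypothetical per-cpu statistics, used only for this illustration. */
struct cpu_stat {
	int cpu;            /* cpu id */
	unsigned int load;  /* weighted load of this cpu's runqueue */
	int idle;           /* non-zero if the cpu is currently idle */
};

/*
 * Old behaviour (simplified): the first idle cpu, or failing that the
 * leftmost cpu, ascends the sched_domain tree and balances the group.
 */
int pick_balance_cpu_old(const struct cpu_stat *cpus, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (cpus[i].idle)
			return cpus[i].cpu;
	return n ? cpus[0].cpu : -1;
}

/*
 * New behaviour (simplified): the least loaded cpu in the group does the
 * higher-domain balancing, so the busiest cpu never pulls extra load
 * onto itself.
 */
int pick_balance_cpu_new(const struct cpu_stat *cpus, size_t n)
{
	unsigned int min_load = UINT_MAX;
	int balance_cpu = -1;

	for (size_t i = 0; i < n; i++) {
		if (cpus[i].load < min_load) {
			min_load = cpus[i].load;
			balance_cpu = cpus[i].cpu;
		}
	}
	return balance_cpu;
}

int main(void)
{
	/* cpu 0 is the busiest; no cpu in the group is idle. */
	struct cpu_stat group[] = {
		{ .cpu = 0, .load = 900, .idle = 0 },
		{ .cpu = 1, .load = 400, .idle = 0 },
		{ .cpu = 2, .load =  50, .idle = 0 },
	};
	size_t n = sizeof(group) / sizeof(group[0]);

	printf("old policy picks cpu %d\n", pick_balance_cpu_old(group, n));
	printf("new policy picks cpu %d\n", pick_balance_cpu_new(group, n));
	return 0;
}

With no cpu idle, the old policy falls back to the leftmost cpu 0, which is
also the busiest; the new policy picks cpu 2, the least loaded member of the
group, to perform the higher-domain balance.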
auto_group.c sched: Clean up parameter passing of proc_sched_autogroup_set_nice() 2012-03-02 12:23:49 +01:00
auto_group.h
clock.c
core.c sched: Fix OOPS when build_sched_domains() percpu allocation fails 2012-04-26 12:54:53 +02:00
cpupri.c kernel-doc: fix kernel-doc warnings in sched 2012-01-23 08:44:54 -08:00
cpupri.h
debug.c sched: Change rq->nr_running to unsigned int 2012-05-09 15:00:49 +02:00
fair.c sched/fair: Let minimally loaded cpu balance the group 2012-05-09 15:00:51 +02:00
features.h sched: Fix more load-balancing fallout 2012-04-26 12:54:52 +02:00
idle_task.c sched: Update documentation and comments 2012-05-07 15:04:18 +02:00
Makefile
rt.c Merge branch 'tip/sched/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into sched/core 2012-04-14 15:12:04 +02:00
sched.h sched: Change rq->nr_running to unsigned int 2012-05-09 15:00:49 +02:00
stats.c sched: Remove sched_switch 2012-01-27 13:28:53 +01:00
stats.h
stop_task.c