1 Index: kernel-2.6.28/Documentation/scheduler/sched-BFS.txt
2 ===================================================================
4 +++ kernel-2.6.28/Documentation/scheduler/sched-BFS.txt
9 ++ BFS - The Brain Fuck Scheduler by Con Kolivas.
13 ++ The goal of the Brain Fuck Scheduler, referred to as BFS from here on, is to
14 ++ completely do away with the complex designs of the past for the cpu process
15 ++ scheduler and instead implement one that is very simple in basic design.
16 ++ The main focus of BFS is to achieve excellent desktop interactivity and
17 ++ responsiveness without heuristics and tuning knobs that are difficult to
18 ++ understand, impossible to model and predict the effect of, and when tuned to
19 ++ one workload cause massive detriment to another.
24 ++ BFS is best described as a single runqueue, O(n) lookup, earliest effective
25 ++ virtual deadline first design, loosely based on EEVDF (earliest eligible virtual
26 ++ deadline first) and my previous Staircase Deadline scheduler. Each component
27 ++ shall be described in turn to explain its significance and the reasoning
28 ++ behind it. The codebase, when the first stable version was released, was
29 ++ approximately 9000 fewer lines of code than the existing mainline Linux
30 ++ kernel scheduler (in 2.6.31). This does not even take into account the
31 ++ removal of documentation and the cgroups code that is not used.
35 ++ The single runqueue refers to the queued but not running processes for the
36 ++ entire system, regardless of the number of CPUs. The reason for going back to
37 ++ a single runqueue design is that once multiple runqueues are introduced,
38 ++ per-CPU or otherwise, complex interactions follow: each runqueue is
39 ++ responsible for the scheduling latency and fairness only of the tasks on its
40 ++ own runqueue, so any throughput advantage of keeping tasks CPU local comes
41 ++ at a cost elsewhere. A very complex balancing system is required to achieve,
42 ++ at best, some semblance of fairness across CPUs, and it can only maintain
43 ++ relatively low latency for tasks bound to the same CPUs, not across them. To
44 ++ increase fairness and decrease latency across CPUs, the advantage of local
45 ++ runqueue locking, which makes for better scalability, is lost because
46 ++ multiple locks must be grabbed.
48 ++ A significant feature of BFS is that all accounting is done purely based on CPU
49 ++ used and nowhere is sleep time used in any way to determine entitlement or
50 ++ interactivity. Interactivity "estimators" that use some kind of sleep/run
51 ++ algorithm are doomed to fail to detect all interactive tasks, and to falsely tag
52 ++ tasks that aren't interactive as being so. The reason for this is that it is
53 ++ close to impossible to determine, while a task is sleeping, whether it is
54 ++ doing so voluntarily, as in a userspace application waiting for input in the
55 ++ form of a mouse click or otherwise, or involuntarily, because it is waiting for
56 ++ another thread, process, I/O, kernel activity or whatever. Thus, such an
57 ++ estimator will introduce corner cases, and more heuristics will be required to
58 ++ cope with those corner cases, introducing more corner cases and failed
59 ++ interactivity detection and so on. Interactivity in BFS is built into the design
60 ++ by virtue of the fact that tasks that are waking up have not used up their quota
61 ++ of CPU time, and have earlier effective deadlines, thereby making it very likely
62 ++ they will preempt any CPU bound task of equivalent nice level. See below for
63 ++ more information on the virtual deadline mechanism. Even if they do not preempt
64 ++ a running task, because the rr interval guarantees a bounded upper limit on how
65 ++ long a task will wait, it will be scheduled within a timeframe that does not
66 ++ cause visible interface jitter.
73 ++ BFS inserts tasks into each relevant queue as an O(1) insertion into a doubly
74 ++ linked list. On insertion, *every* running queue is checked to see if the newly
75 ++ queued task can run on any idle queue, or preempt the lowest running task on the
76 ++ system. This is how the cross-CPU scheduling of BFS achieves significantly lower
77 ++ latency per extra CPU the system has. In this case the lookup is, in the worst
78 ++ case scenario, O(n) where n is the number of CPUs on the system.
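++
++ As a rough standalone illustration of that scan (all names here are
++ hypothetical, not the actual BFS symbols), a waking task takes an idle CPU
++ if one exists, otherwise it may preempt the CPU running the task with the
++ latest deadline:
++
++	#include <stdbool.h>
++
++	#define NR_CPUS 4
++
++	struct cpu_state {
++		bool idle;
++		unsigned long curr_deadline;	/* of the running task */
++	};
++
++	/* Returns the CPU to take or preempt, or -1 to just stay queued. */
++	static int best_cpu_for(struct cpu_state cpus[], unsigned long deadline)
++	{
++		unsigned long latest = 0;
++		int cpu, target = -1;
++
++		for (cpu = 0; cpu < NR_CPUS; cpu++) {
++			if (cpus[cpu].idle)
++				return cpu;	/* an idle CPU always wins */
++			if (cpus[cpu].curr_deadline > latest) {
++				latest = cpus[cpu].curr_deadline;
++				target = cpu;
++			}
++		}
++		/* preempt only if the waking task's deadline is earlier */
++		return deadline < latest ? target : -1;
++	}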
82 ++ BFS has one single lock protecting the process local data of every task in the
83 ++ global queue. Thus every insertion, removal and modification of task data in the
84 ++ global runqueue needs to grab the global lock. However, once a task is taken by
85 ++ a CPU, the CPU has its own local data copy of the running process' accounting
86 ++ information which only that CPU accesses and modifies (such as during a
87 ++ timer tick) thus allowing the accounting data to be updated lockless. Once a
88 ++ CPU has taken a task to run, it removes it from the global queue. Thus the
89 ++ global queue only ever has, at most,
91 ++ (number of tasks requesting cpu time) - (number of logical CPUs) + 1
93 ++ tasks in the global queue. This value is relevant for the time taken to look up
94 ++ tasks during scheduling. This number will increase if many tasks have a CPU
95 ++ affinity set in their policy limiting which CPUs they may run on, and those
96 ++ tasks outnumber the CPUs. The +1 is because when rescheduling a task, the CPU's
97 ++ currently running task is put back on the queue. Lookup will be described after
98 ++ the virtual deadline mechanism is explained.
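++
++ For example, with 4 logical CPUs and 12 tasks requesting CPU time, at most
++ 12 - 4 + 1 = 9 tasks sit in the global queue to be scanned during lookup.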
102 ++ The key to achieving low latency, scheduling fairness, and "nice level"
103 ++ distribution in BFS is entirely in the virtual deadline mechanism. The one
104 ++ tunable in BFS is the rr_interval, or "round robin interval". This is the
105 ++ maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy)
106 ++ tasks of the same nice level will be running for, or looking at it the other
107 ++ way around, the longest duration two tasks of the same nice level will be
108 ++ delayed for. When a task requests cpu time, it is given a quota (time_slice)
109 ++ equal to the rr_interval and a virtual deadline. The virtual deadline is
110 ++ offset from the current time in jiffies by this equation:
112 ++ jiffies + (prio_ratio * rr_interval)
114 ++ The prio_ratio is determined as a ratio compared to the baseline of nice -20
115 ++ and increases by 10% per nice level. The deadline is a virtual one only in that
116 ++ no guarantee is placed that a task will actually be scheduled by this time, but
117 ++ it is used to compare which task should go next. There are three components to
118 ++ how a task is next chosen. First is time_slice expiration. If a task runs out
119 ++ of its time_slice, it is descheduled, the time_slice is refilled, and the
120 ++ deadline reset to that formula above. Second is sleep, where a task no longer
121 ++ is requesting CPU for whatever reason. The time_slice and deadline are _not_
122 ++ adjusted in this case and are just carried over for when the task is next
123 ++ scheduled. Third is preemption, and that is when a newly waking task is deemed
124 ++ higher priority than a currently running task on any cpu by virtue of the fact
125 ++ that it has an earlier virtual deadline than the currently running task. The
126 ++ earlier deadline is the key to which task is next chosen for the first and
127 ++ second cases. Once a task is descheduled, it is put back on the queue, and an
128 ++ O(n) lookup of all queued-but-not-running tasks is done to determine which has
129 ++ the earliest deadline and that task is chosen to receive CPU next. The one
130 ++ caveat to this is that if a deadline has already passed (jiffies is greater
131 ++ than the deadline), the tasks are chosen in FIFO (first in first out) order as
132 ++ the deadlines are old and their absolute value becomes decreasingly relevant
133 ++ apart from being a flag that they have been asleep and deserve CPU time ahead
134 ++ of all later deadlines.
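++
++ A minimal sketch of the quota and deadline refill, assuming the prio_ratio
++ compounds by 10% per nice level above the nice -20 baseline (an assumption
++ for illustration; the helper names are not the actual kernel symbols):
++
++	/* prio_ratio in percent: 100 at nice -20, 10% more per level. */
++	static unsigned long prio_ratio(int nice)
++	{
++		unsigned long ratio = 100;
++		int i;
++
++		for (i = -20; i < nice; i++)
++			ratio = ratio * 11 / 10;
++		return ratio;
++	}
++
++	static unsigned long new_deadline(unsigned long now_jiffies, int nice,
++					  unsigned long rr_interval)
++	{
++		return now_jiffies + prio_ratio(nice) * rr_interval / 100;
++	}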
136 ++ The CPU proportion of different nice tasks works out to be approximately the
138 ++ (prio_ratio difference)^2
140 ++ The reason it is squared is that a task's deadline does not change while it is
141 ++ running unless it runs out of time_slice. Thus, even if the time actually
142 ++ passes the deadline of another task that is queued, it will not get CPU time
143 ++ unless the current running task deschedules, and the time "base" (jiffies) is
144 ++ constantly moving.
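++
++ As a worked example under the same compounding assumption as the sketch
++ above: between nice 0 and nice 5 the prio_ratio difference is roughly
++ 1.1^5 ~= 1.61, so the nice 0 task receives approximately 1.61^2 ~= 2.6
++ times the CPU of the nice 5 task.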
148 ++ BFS has 103 priority queues. 100 of these are dedicated to the static priority
149 ++ of realtime tasks, and the remaining 3 are, in order of best to worst priority,
150 ++ SCHED_ISO (isochronous), SCHED_NORMAL, and SCHED_IDLEPRIO (idle priority
151 ++ scheduling). When a task of these priorities is queued, a bitmap of running
152 ++ priorities is set showing which of these priorities has tasks waiting for CPU
153 ++ time. When a CPU is made to reschedule, the lookup for the next task to get
154 ++ CPU time is performed in the following way:
156 ++ First the bitmap is checked to see what static priority tasks are queued. If
157 ++ any realtime priorities are found, the corresponding queue is checked and the
158 ++ first task listed there is taken (provided CPU affinity is suitable) and lookup
159 ++ is complete. If the priority corresponds to SCHED_ISO, tasks are likewise
160 ++ taken in FIFO order (as they behave like SCHED_RR). If the priority corresponds
161 ++ to either SCHED_NORMAL or SCHED_IDLEPRIO, then the lookup becomes O(n). At this
162 ++ stage, every task in the runlist that corresponds to that priority is checked
163 ++ to see which has the earliest set deadline, and (provided it has suitable CPU
164 ++ affinity) it is taken off the runqueue and given the CPU. If a task has an
165 ++ expired deadline, it is taken and the rest of the lookup aborted (as they are
166 ++ chosen in FIFO order).
168 ++ Thus, the lookup is O(n) in the worst case only, where n is as described
169 ++ earlier, as tasks may be chosen before the whole task list is looked over.
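++
++ The lookup can be pictured with the following toy sketch (illustrative
++ structures only; CPU affinity checks and the expired-deadline FIFO
++ shortcut are omitted for brevity). Realtime and ISO priorities return
++ the queue head FIFO-style, while SCHED_NORMAL and SCHED_IDLEPRIO scan
++ their list for the earliest deadline:
++
++	#include <stddef.h>
++
++	#define PRIO_LIMIT 103	/* 100 RT + ISO + NORMAL + IDLEPRIO */
++
++	struct toy_task {
++		unsigned long deadline;
++		struct toy_task *next;
++	};
++
++	static struct toy_task *queues[PRIO_LIMIT];
++	static unsigned char bitmap[PRIO_LIMIT];	/* nonzero = queued */
++
++	static struct toy_task *next_task(void)
++	{
++		int prio;
++
++		for (prio = 0; prio < PRIO_LIMIT; prio++) {
++			struct toy_task *p, *edt = NULL;
++
++			if (!bitmap[prio])
++				continue;
++			if (prio <= 100)	/* RT queues and ISO: FIFO */
++				return queues[prio];
++			for (p = queues[prio]; p; p = p->next)
++				if (!edt || p->deadline < edt->deadline)
++					edt = p;	/* O(n) scan */
++			return edt;
++		}
++		return NULL;
++	}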
174 ++ The major limitation of BFS is scalability, as separate runqueue designs
175 ++ have less lock contention as the number of CPUs rises. However, even they
176 ++ do not scale linearly, as multiple runqueues must be locked concurrently
177 ++ on such designs to achieve fair CPU balancing, some sort of nice-level
178 ++ fairness across CPUs, and low enough latency for tasks on a busy CPU when
179 ++ other CPUs would be more suitable. BFS has the advantage that it requires no
181 ++ balancing algorithm whatsoever, as balancing occurs by proxy simply because
182 ++ all CPUs draw off the global runqueue, in priority and deadline order. Despite
183 ++ the fact that scalability is _not_ the prime concern of BFS, it both shows very
184 ++ good scalability to smaller numbers of CPUs and is likely a more scalable design
185 ++ at these numbers of CPUs.
187 ++ It also has some very low overhead scalability features built into the design,
188 ++ added where their overhead was deemed so marginal as to be worthwhile.
189 ++ The first is the local copy of the running process' data to the CPU it's running
190 ++ on to allow that data to be updated lockless where possible. Then there is
191 ++ deference paid to the last CPU a task was running on, by trying that CPU first
192 ++ when looking for an idle CPU to use the next time it's scheduled. Finally there
193 ++ is the notion of cache locality beyond the last running CPU. The sched_domains
194 ++ information is used to determine the relative virtual "cache distance" that
195 ++ other CPUs have from the last CPU a task was running on. CPUs with shared
196 ++ caches, such as SMT siblings, or multicore CPUs with shared caches, are treated
197 ++ as cache local. CPUs without shared caches are treated as not cache local, and
198 ++ CPUs on different NUMA nodes are treated as very distant. This "relative cache
199 ++ distance" is used by modifying the virtual deadline value when doing lookups.
200 ++ Effectively, the deadline is unaltered between "cache local" CPUs, doubled for
201 ++ "cache distant" CPUs, and quadrupled for "very distant" CPUs. The reasoning
202 ++ behind the doubling of deadlines is as follows. The real cost of migrating a
203 ++ task from one CPU to another is entirely dependent on the cache footprint of
204 ++ the task, how cache intensive the task is, how long it's been running on that
205 ++ CPU to take up the bulk of its cache, how big the CPU cache is, how fast and
206 ++ how layered the CPU cache is, how fast a context switch is... and so on. In
207 ++ other words, it's close to random in the real world where we do more than just
208 ++ one sole workload. The only thing we can be sure of is that it's not free. So
209 ++ BFS uses the principle that an idle CPU is a wasted CPU and utilising idle CPUs
210 ++ is more important than cache locality, and cache locality only plays a part
211 ++ after that. Doubling the effective deadline is based on the premise that the
212 ++ "cache local" CPUs will tend to work on the same tasks up to double the number
213 ++ of cache local CPUs, and once the workload is beyond that amount, it is likely
214 ++ that none of the tasks are cache warm anywhere anyway. The quadrupling for NUMA
215 ++ is a value I pulled out of my arse.
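++
++ A minimal sketch of that locality weighting as described above
++ (illustrative names only, not the kernel's):
++
++	enum cache_distance { CACHE_LOCAL, CACHE_DISTANT, CACHE_VERY_DISTANT };
++
++	/* Deadline as seen from a candidate CPU during lookup. */
++	static unsigned long weighted_deadline(unsigned long deadline,
++					       enum cache_distance d)
++	{
++		switch (d) {
++		case CACHE_DISTANT:
++			return deadline << 1;	/* doubled */
++		case CACHE_VERY_DISTANT:
++			return deadline << 2;	/* quadrupled (NUMA) */
++		default:
++			return deadline;	/* unaltered when local */
++		}
++	}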
217 ++ When choosing an idle CPU for a waking task, the cache locality is determined
218 ++ according to where the task last ran and then idle CPUs are ranked from best
219 ++ to worst to choose the most suitable idle CPU based on cache locality, NUMA
220 ++ node locality and hyperthread sibling busyness. They are chosen in the
221 ++ following preference (if idle):
223 ++ * Same core, idle or busy cache, idle threads
224 ++ * Other core, same cache, idle or busy cache, idle threads.
225 ++ * Same node, other CPU, idle cache, idle threads.
226 ++ * Same node, other CPU, busy cache, idle threads.
227 ++ * Same core, busy threads.
228 ++ * Other core, same cache, busy threads.
229 ++ * Same node, other CPU, busy threads.
230 ++ * Other node, other CPU, idle cache, idle threads.
231 ++ * Other node, other CPU, busy cache, idle threads.
232 ++ * Other node, other CPU, busy threads.
234 ++ This also shows the SMT or "hyperthread" awareness in the design, which will
235 ++ choose a real idle core first, before a logical SMT sibling whose physical
236 ++ CPU already has tasks running.
238 ++ Early benchmarking of BFS suggested scalability dropped off at the 16 CPU mark.
239 ++ However this benchmarking was performed on an earlier design that was far less
240 ++ scalable than the current one so it's hard to know how scalable it is in terms
241 ++ of both CPUs (due to the global runqueue) and heavily loaded machines (due to
242 ++ O(n) lookup) at this stage. Note that in terms of scalability, the number of
243 ++ _logical_ CPUs matters, not the number of _physical_ CPUs. Thus, a dual (2x)
244 ++ quad core (4x) hyperthreaded (2x) machine is effectively a 16x. Newer benchmark
245 ++ results are very promising indeed, without needing to tweak any knobs, features
246 ++ or options. Benchmark contributions are most welcome.
251 ++ As the initial prime target audience for BFS was the average desktop user, it
252 ++ was designed to not need tweaking, tuning or have features set to obtain benefit
253 ++ from it. Thus the number of knobs and features has been kept to an absolute
254 ++ minimum and should not require extra user input for the vast majority of cases.
255 ++ There are precisely 2 tunables and 2 extra scheduling policies: the rr_interval
256 ++ and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO policies. In addition
257 ++ to this, BFS also uses sub-tick accounting. What BFS does _not_ now feature is
258 ++ support for CGROUPS. The average user should neither need to know what these
259 ++ are, nor should they need to be using them to have good desktop behaviour.
263 ++ There is only one "scheduler" tunable, the round robin interval. This can be
264 ++ accessed in:
266 ++ /proc/sys/kernel/rr_interval
268 ++ The value is in milliseconds, and the default value is set to 6 on a
269 ++ uniprocessor machine, and automatically set to a progressively higher value on
270 ++ multiprocessor machines. The reasoning behind increasing the value on more CPUs
271 ++ is that the effective latency is decreased by virtue of there being more CPUs on
272 ++ BFS (for reasons explained above), and increasing the value allows for less
273 ++ cache contention and more throughput. Valid values are from 1 to 5000.
274 ++ Decreasing the value will decrease latencies at the cost of decreasing
275 ++ throughput, while increasing it will improve throughput, but at the cost of
276 ++ worsening latencies. The accuracy of the rr interval is limited by HZ resolution
277 ++ of the kernel configuration. Thus, the worst case latencies are usually slightly
278 ++ higher than this actual value. The default value of 6 is not an arbitrary one.
279 ++ It is based on the fact that humans can detect jitter at approximately 7ms, so
280 ++ aiming for much lower latencies is pointless under most circumstances. It is
281 ++ worth noting this fact when comparing the latency performance of BFS to other
282 ++ schedulers. Worst case latencies being higher than 7ms are far worse than
283 ++ average latencies not being in the microsecond range.
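++
++ For example, to trade some throughput for lower latency by halving the
++ uniprocessor default:
++
++	echo 3 > /proc/sys/kernel/rr_interval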
285 ++ Isochronous scheduling.
287 ++ Isochronous scheduling is a unique scheduling policy designed to provide
288 ++ near-real-time performance to unprivileged (ie non-root) users without the
289 ++ ability to starve the machine indefinitely. Isochronous tasks (which means
290 ++ "same time") are set using, for example, the schedtool application like so:
292 ++ schedtool -I -e amarok
294 ++ This will start the audio application "amarok" as SCHED_ISO. How SCHED_ISO works
295 ++ is that it has a priority level between true realtime tasks and SCHED_NORMAL,
296 ++ allowing them to preempt all normal tasks in a SCHED_RR fashion (ie,
297 ++ if multiple SCHED_ISO tasks are running, they purely round robin at rr_interval
298 ++ rate). However if ISO tasks run for more than a tunable finite amount of time,
299 ++ they are then demoted back to SCHED_NORMAL scheduling. This finite amount of
300 ++ time is the percentage of _total CPU_ available across the machine, configurable
301 ++ as a percentage in the following "resource handling" tunable (as opposed to a
302 ++ scheduler tunable):
304 ++ /proc/sys/kernel/iso_cpu
306 ++ and is set to 70% by default. It is calculated over a rolling 5 second average.
307 ++ Because it is the total CPU available, it means that on a multi CPU machine, it
308 ++ is possible to have an ISO task running with realtime scheduling indefinitely on
309 ++ just one CPU, as the other CPUs will be available. Setting this to 100 is the
310 ++ equivalent of giving all users SCHED_RR access and setting it to 0 removes the
311 ++ ability to run any pseudo-realtime tasks.
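++
++ For example, to allow unprivileged pseudo-realtime tasks up to 90% of
++ total CPU:
++
++	echo 90 > /proc/sys/kernel/iso_cpu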
313 ++ A feature of BFS is that it detects when an application tries to obtain a
314 ++ realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the
315 ++ appropriate privileges to use those policies. When it detects this, it will
316 ++ give the task SCHED_ISO policy instead. Thus it is transparent to the user.
317 ++ Because some applications constantly set their policy as well as their nice
318 ++ level, there is potential for them to undo the user's command line override
319 ++ that set the policy to SCHED_ISO. To counter this, once
320 ++ a task has been set to SCHED_ISO policy, it needs superuser privileges to set
321 ++ it back to SCHED_NORMAL. This will ensure the task remains ISO and all child
322 ++ processes and threads will also inherit the ISO policy.
324 ++ Idleprio scheduling.
326 ++ Idleprio scheduling is a scheduling policy designed to give out CPU to a task
327 ++ _only_ when the CPU would be otherwise idle. The idea behind this is to allow
328 ++ ultra low priority tasks to be run in the background that have virtually no
329 ++ effect on the foreground tasks. This is ideally suited to distributed computing
330 ++ clients (like setiathome, folding, mprime etc) but can also be used to start
331 ++ a video encode and so on without slowing down other tasks. To prevent tasks
332 ++ under this policy from grabbing shared resources and holding them indefinitely,
333 ++ if BFS detects a state where the task is waiting on I/O, or the machine is
334 ++ about to suspend to ram and so on, it will transiently schedule them as SCHED_NORMAL. As
335 ++ per the Isochronous task management, once a task has been scheduled as IDLEPRIO,
336 ++ it cannot be put back to SCHED_NORMAL without superuser privileges. Tasks can
337 ++ be set to start as SCHED_IDLEPRIO with the schedtool command like so:
339 ++ schedtool -D -e ./mprime
341 ++ Subtick accounting.
343 ++ It is surprisingly difficult to get accurate CPU accounting, and in many cases,
344 ++ the accounting is done by simply determining what is happening at the precise
345 ++ moment a timer tick fires off. This becomes increasingly inaccurate as the
346 ++ timer tick frequency (HZ) is lowered. It is possible to create an application
347 ++ which uses almost 100% CPU, yet by being descheduled at the right time, records
348 ++ zero CPU usage. While the main problem with this is that there are possible
349 ++ security implications, it is also difficult to determine how much CPU a task
350 ++ really does use. BFS tries to use the sub-tick accounting from the TSC clock,
351 ++ where possible, to determine real CPU usage. This is not entirely reliable, but
352 ++ is far more likely to produce accurate CPU usage data than the existing designs
353 ++ and will not show tasks as consuming no CPU usage when they actually are. Thus,
354 ++ the amount of CPU reported as being used by BFS will more accurately represent
355 ++ how much CPU the task itself is using (as is shown for example by the 'time'
356 ++ application), so the reported values may be quite different to other schedulers.
357 ++ Values reported as the 'load' are more prone to problems with this design, but
358 ++ per process values are closer to real usage. When comparing throughput of BFS
359 ++ to other designs, it is important to compare the actual completed work in terms
360 ++ of total wall clock time taken and total work done, rather than the reported
361 ++ "cpu usage".
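++
++ A toy sketch of the idea: charge each task the nanoseconds it actually ran
++ between context switches, read from a fine-grained clock, instead of
++ crediting whole ticks to whoever is running when the tick fires
++ (illustrative names, not the kernel's):
++
++	#include <stdint.h>
++
++	struct toy_acct {
++		uint64_t last_ran;	/* ns timestamp at switch-in */
++		uint64_t sched_time;	/* total ns actually executed */
++	};
++
++	/* Called at switch-out with a TSC-derived timestamp. */
++	static void account_subtick(struct toy_acct *t, uint64_t now_ns)
++	{
++		t->sched_time += now_ns - t->last_ran;
++	}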
364 ++ Con Kolivas <kernel@kolivas.org> Thu Dec 3 2009
365 Index: kernel-2.6.28/arch/powerpc/platforms/cell/spufs/sched.c
366 ===================================================================
367 --- kernel-2.6.28.orig/arch/powerpc/platforms/cell/spufs/sched.c
368 +++ kernel-2.6.28/arch/powerpc/platforms/cell/spufs/sched.c
369 @@ -62,11 +62,6 @@ static struct timer_list spusched_timer;
370 static struct timer_list spuloadavg_timer;
373 - * Priority of a normal, non-rt, non-niced'd process (aka nice level 0).
375 -#define NORMAL_PRIO 120
378 * Frequency of the spu scheduler tick. By default we do one SPU scheduler
379 * tick for every 10 CPU scheduler ticks.
381 Index: kernel-2.6.28/fs/proc/base.c
382 ===================================================================
383 --- kernel-2.6.28.orig/fs/proc/base.c
384 +++ kernel-2.6.28/fs/proc/base.c
385 @@ -336,7 +336,7 @@ static int proc_pid_wchan(struct task_st
386 static int proc_pid_schedstat(struct task_struct *task, char *buffer)
388 return sprintf(buffer, "%llu %llu %lu\n",
389 - task->sched_info.cpu_time,
390 + tsk_seruntime(task),
391 task->sched_info.run_delay,
392 task->sched_info.pcount);
394 Index: kernel-2.6.28/include/linux/init_task.h
395 ===================================================================
396 --- kernel-2.6.28.orig/include/linux/init_task.h
397 +++ kernel-2.6.28/include/linux/init_task.h
398 @@ -47,6 +47,11 @@ extern struct files_struct init_files;
399 .posix_timers = LIST_HEAD_INIT(sig.posix_timers), \
400 .cpu_timers = INIT_CPU_TIMERS(sig.cpu_timers), \
401 .rlim = INIT_RLIMITS, \
403 + .cputime = INIT_CPUTIME, \
405 + .lock = __SPIN_LOCK_UNLOCKED(sig.cputimer.lock), \
409 extern struct nsproxy init_nsproxy;
410 @@ -117,6 +122,67 @@ extern struct group_info init_groups;
411 * INIT_TASK is used to set up the first task table, touch at
412 * your own risk!. Base=0, limit=0x1fffff (=2MB)
414 +#ifdef CONFIG_SCHED_BFS
415 +#define INIT_TASK(tsk) \
418 + .stack = &init_thread_info, \
419 + .usage = ATOMIC_INIT(2), \
420 + .flags = PF_KTHREAD, \
421 + .lock_depth = -1, \
422 + .prio = NORMAL_PRIO, \
423 + .static_prio = MAX_PRIO-20, \
424 + .normal_prio = NORMAL_PRIO, \
426 + .policy = SCHED_NORMAL, \
427 + .cpus_allowed = CPU_MASK_ALL, \
429 + .active_mm = &init_mm, \
430 + .run_list = LIST_HEAD_INIT(tsk.run_list), \
431 + .time_slice = HZ, \
432 + .tasks = LIST_HEAD_INIT(tsk.tasks), \
433 + .ptraced = LIST_HEAD_INIT(tsk.ptraced), \
434 + .ptrace_entry = LIST_HEAD_INIT(tsk.ptrace_entry), \
435 + .real_parent = &tsk, \
437 + .children = LIST_HEAD_INIT(tsk.children), \
438 + .sibling = LIST_HEAD_INIT(tsk.sibling), \
439 + .group_leader = &tsk, \
440 + .group_info = &init_groups, \
441 + .cap_effective = CAP_INIT_EFF_SET, \
442 + .cap_inheritable = CAP_INIT_INH_SET, \
443 + .cap_permitted = CAP_FULL_SET, \
444 + .cap_bset = CAP_INIT_BSET, \
445 + .securebits = SECUREBITS_DEFAULT, \
446 + .user = INIT_USER, \
447 + .comm = "swapper", \
448 + .thread = INIT_THREAD, \
450 + .files = &init_files, \
451 + .signal = &init_signals, \
452 + .sighand = &init_sighand, \
453 + .nsproxy = &init_nsproxy, \
455 + .list = LIST_HEAD_INIT(tsk.pending.list), \
456 + .signal = {{0}}}, \
457 + .blocked = {{0}}, \
458 + .alloc_lock = __SPIN_LOCK_UNLOCKED(tsk.alloc_lock), \
459 + .journal_info = NULL, \
460 + .cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \
461 + .fs_excl = ATOMIC_INIT(0), \
462 + .pi_lock = __SPIN_LOCK_UNLOCKED(tsk.pi_lock), \
464 + [PIDTYPE_PID] = INIT_PID_LINK(PIDTYPE_PID), \
465 + [PIDTYPE_PGID] = INIT_PID_LINK(PIDTYPE_PGID), \
466 + [PIDTYPE_SID] = INIT_PID_LINK(PIDTYPE_SID), \
468 + .dirties = INIT_PROP_LOCAL_SINGLE(dirties), \
470 + INIT_TRACE_IRQFLAGS \
473 +#else /* CONFIG_SCHED_BFS */
475 #define INIT_TASK(tsk) \
478 @@ -181,7 +247,7 @@ extern struct group_info init_groups;
479 INIT_TRACE_IRQFLAGS \
483 +#endif /* CONFIG_SCHED_BFS */
485 #define INIT_CPU_TIMERS(cpu_timers) \
487 Index: kernel-2.6.28/include/linux/ioprio.h
488 ===================================================================
489 --- kernel-2.6.28.orig/include/linux/ioprio.h
490 +++ kernel-2.6.28/include/linux/ioprio.h
491 @@ -64,6 +64,8 @@ static inline int task_ioprio_class(stru
493 static inline int task_nice_ioprio(struct task_struct *task)
495 + if (iso_task(task))
497 return (task_nice(task) + 20) / 5;
500 Index: kernel-2.6.28/include/linux/kernel_stat.h
501 ===================================================================
502 --- kernel-2.6.28.orig/include/linux/kernel_stat.h
503 +++ kernel-2.6.28/include/linux/kernel_stat.h
504 @@ -67,10 +67,16 @@ static inline unsigned int kstat_irqs(un
507 extern unsigned long long task_delta_exec(struct task_struct *);
508 -extern void account_user_time(struct task_struct *, cputime_t);
509 -extern void account_user_time_scaled(struct task_struct *, cputime_t);
510 -extern void account_system_time(struct task_struct *, int, cputime_t);
511 -extern void account_system_time_scaled(struct task_struct *, cputime_t);
512 -extern void account_steal_time(struct task_struct *, cputime_t);
513 +extern void account_user_time(struct task_struct *, cputime_t, cputime_t);
514 +extern void account_system_time(struct task_struct *, int, cputime_t, cputime_t);
515 +extern void account_steal_time(cputime_t);
516 +extern void account_idle_time(cputime_t);
518 +extern void account_process_tick(struct task_struct *, int user);
519 +extern void account_steal_ticks(unsigned long ticks);
520 +extern void account_idle_ticks(unsigned long ticks);
522 +extern void account_user_time_scaled(struct task_struct *, cputime_t, cputime_t);
523 +extern void account_system_time_scaled(struct task_struct *, cputime_t, cputime_t);
525 #endif /* _LINUX_KERNEL_STAT_H */
526 Index: kernel-2.6.28/include/linux/sched.h
527 ===================================================================
528 --- kernel-2.6.28.orig/include/linux/sched.h
529 +++ kernel-2.6.28/include/linux/sched.h
533 #define SCHED_BATCH 3
534 -/* SCHED_ISO: reserved but not implemented yet */
535 +/* SCHED_ISO: Implemented on BFS only */
537 +#ifdef CONFIG_SCHED_BFS
539 +#define SCHED_IDLEPRIO SCHED_IDLE
540 +#define SCHED_MAX (SCHED_IDLEPRIO)
541 +#define SCHED_RANGE(policy) ((policy) <= SCHED_MAX)
546 @@ -247,7 +253,6 @@ extern asmlinkage void schedule_tail(str
547 extern void init_idle(struct task_struct *idle, int cpu);
548 extern void init_idle_bootup_task(struct task_struct *idle);
550 -extern int runqueue_is_locked(void);
551 extern void task_rq_unlock_wait(struct task_struct *p);
553 extern cpumask_t nohz_cpu_mask;
554 @@ -456,16 +461,27 @@ struct task_cputime {
555 #define virt_exp utime
556 #define sched_exp sum_exec_runtime
558 +#define INIT_CPUTIME \
559 + (struct task_cputime) { \
560 + .utime = cputime_zero, \
561 + .stime = cputime_zero, \
562 + .sum_exec_runtime = 0, \
566 - * struct thread_group_cputime - thread group interval timer counts
567 - * @totals: thread group interval timers; substructure for
568 - * uniprocessor kernel, per-cpu for SMP kernel.
569 + * struct thread_group_cputimer - thread group interval timer counts
570 + * @cputime: thread group interval timers.
571 + * @running: non-zero when there are timers running and
572 + * @cputime receives updates.
573 + * @lock: lock for fields in this struct.
575 * This structure contains the version of task_cputime, above, that is
576 - * used for thread group CPU clock calculations.
577 + * used for thread group CPU timer calculations.
579 -struct thread_group_cputime {
580 - struct task_cputime *totals;
581 +struct thread_group_cputimer {
582 + struct task_cputime cputime;
588 @@ -514,10 +530,10 @@ struct signal_struct {
589 cputime_t it_prof_incr, it_virt_incr;
592 - * Thread group totals for process CPU clocks.
593 - * See thread_group_cputime(), et al, for details.
594 + * Thread group totals for process CPU timers.
595 + * See thread_group_cputimer(), et al, for details.
597 - struct thread_group_cputime cputime;
598 + struct thread_group_cputimer cputimer;
600 /* Earliest-expiration cache. */
601 struct task_cputime cputime_expires;
602 @@ -554,7 +570,7 @@ struct signal_struct {
603 * Live threads maintain their own counters and add to these
604 * in __exit_signal, except for the group leader.
606 - cputime_t cutime, cstime;
607 + cputime_t utime, stime, cutime, cstime;
610 unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw;
611 @@ -563,6 +579,14 @@ struct signal_struct {
612 struct task_io_accounting ioac;
615 + * Cumulative ns of scheduled CPU time of dead threads in the
616 + * group, not including a zombie group leader. (This only differs
617 + * from jiffies_to_ns(utime + stime) if sched_clock uses something
618 + * other than jiffies.)
620 + unsigned long long sum_sched_runtime;
623 * We don't bother to synchronize most readers of this at all,
624 * because there is no reader checking a limit that actually needs
625 * to get both rlim_cur and rlim_max atomically, and either one
626 @@ -1081,17 +1105,31 @@ struct task_struct {
628 int lock_depth; /* BKL lock depth */
630 +#ifndef CONFIG_SCHED_BFS
632 #ifdef __ARCH_WANT_UNLOCKED_CTXSW
636 +#else /* CONFIG_SCHED_BFS */
640 int prio, static_prio, normal_prio;
641 unsigned int rt_priority;
642 +#ifdef CONFIG_SCHED_BFS
643 + int time_slice, first_time_slice;
644 + unsigned long deadline;
645 + struct list_head run_list;
647 + u64 sched_time; /* sched_clock time spent running */
649 + unsigned long rt_timeout;
650 +#else /* CONFIG_SCHED_BFS */
651 const struct sched_class *sched_class;
652 struct sched_entity se;
653 struct sched_rt_entity rt;
656 #ifdef CONFIG_PREEMPT_NOTIFIERS
657 /* list of struct preempt_notifier: */
658 @@ -1114,6 +1152,9 @@ struct task_struct {
661 cpumask_t cpus_allowed;
662 +#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_SCHED_BFS)
663 + cpumask_t unplugged_mask;
666 #ifdef CONFIG_PREEMPT_RCU
667 int rcu_read_lock_nesting;
668 @@ -1174,6 +1215,9 @@ struct task_struct {
669 int __user *clear_child_tid; /* CLONE_CHILD_CLEARTID */
671 cputime_t utime, stime, utimescaled, stimescaled;
672 +#ifdef CONFIG_SCHED_BFS
673 + unsigned long utime_pc, stime_pc;
676 cputime_t prev_utime, prev_stime;
677 unsigned long nvcsw, nivcsw; /* context switch counts */
678 @@ -1358,6 +1402,64 @@ struct task_struct {
679 struct list_head *scm_work_list;
682 +#ifdef CONFIG_SCHED_BFS
683 +extern int grunqueue_is_locked(void);
684 +extern void grq_unlock_wait(void);
685 +#define tsk_seruntime(t) ((t)->sched_time)
686 +#define tsk_rttimeout(t) ((t)->rt_timeout)
687 +#define task_rq_unlock_wait(tsk) grq_unlock_wait()
689 +static inline void set_oom_timeslice(struct task_struct *p)
691 + p->time_slice = HZ;
694 +static inline void tsk_cpus_current(struct task_struct *p)
698 +#define runqueue_is_locked() grunqueue_is_locked()
700 +static inline void print_scheduler_version(void)
702 + printk(KERN_INFO"BFS CPU scheduler v0.316 by Con Kolivas ported by ToAsTcfh.\n");
705 +static inline int iso_task(struct task_struct *p)
707 + return (p->policy == SCHED_ISO);
710 +extern int runqueue_is_locked(void);
711 +extern void task_rq_unlock_wait(struct task_struct *p);
712 +#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
713 +#define tsk_rttimeout(t) ((t)->rt.timeout)
715 +static inline void sched_exit(struct task_struct *p)
719 +static inline void set_oom_timeslice(struct task_struct *p)
721 + p->rt.time_slice = HZ;
724 +static inline void tsk_cpus_current(struct task_struct *p)
726 + p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed;
729 +static inline void print_scheduler_version(void)
731 + printk(KERN_INFO"CFS CPU scheduler.\n");
734 +static inline int iso_task(struct task_struct *p)
741 * Priority of a process goes from 0..MAX_PRIO-1, valid RT
742 * priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH
743 @@ -1373,9 +1475,19 @@ struct task_struct {
745 #define MAX_USER_RT_PRIO 100
746 #define MAX_RT_PRIO MAX_USER_RT_PRIO
748 +#define DEFAULT_PRIO (MAX_RT_PRIO + 20)
750 +#ifdef CONFIG_SCHED_BFS
751 +#define PRIO_RANGE (40)
752 +#define MAX_PRIO (MAX_RT_PRIO + PRIO_RANGE)
753 +#define ISO_PRIO (MAX_RT_PRIO)
754 +#define NORMAL_PRIO (MAX_RT_PRIO + 1)
755 +#define IDLE_PRIO (MAX_RT_PRIO + 2)
756 +#define PRIO_LIMIT ((IDLE_PRIO) + 1)
757 +#else /* CONFIG_SCHED_BFS */
758 #define MAX_PRIO (MAX_RT_PRIO + 40)
759 -#define DEFAULT_PRIO (MAX_RT_PRIO + 20)
760 +#define NORMAL_PRIO DEFAULT_PRIO
761 +#endif /* CONFIG_SCHED_BFS */
763 static inline int rt_prio(int prio)
765 @@ -1643,7 +1755,7 @@ task_sched_runtime(struct task_struct *t
766 extern unsigned long long thread_group_sched_runtime(struct task_struct *task);
768 /* sched_exec is called by processes performing an exec */
770 +#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_BFS)
771 extern void sched_exec(void);
773 #define sched_exec() {}
774 @@ -1792,6 +1904,9 @@ extern void wake_up_new_task(struct task
775 static inline void kick_process(struct task_struct *tsk) { }
777 extern void sched_fork(struct task_struct *p, int clone_flags);
778 +#ifdef CONFIG_SCHED_BFS
779 +extern void sched_exit(struct task_struct *p);
781 extern void sched_dead(struct task_struct *p);
783 extern int in_group_p(gid_t);
784 @@ -2141,25 +2256,18 @@ static inline int spin_needbreak(spinloc
786 * Thread group CPU time accounting.
789 -extern int thread_group_cputime_alloc(struct task_struct *);
790 -extern void thread_group_cputime(struct task_struct *, struct task_cputime *);
791 +void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times);
792 +void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times);
794 static inline void thread_group_cputime_init(struct signal_struct *sig)
796 - sig->cputime.totals = NULL;
799 -static inline int thread_group_cputime_clone_thread(struct task_struct *curr)
801 - if (curr->signal->cputime.totals)
803 - return thread_group_cputime_alloc(curr);
804 + sig->cputimer.cputime = INIT_CPUTIME;
805 + spin_lock_init(&sig->cputimer.lock);
806 + sig->cputimer.running = 0;
809 static inline void thread_group_cputime_free(struct signal_struct *sig)
811 - free_percpu(sig->cputime.totals);
815 Index: kernel-2.6.28/init/Kconfig
816 ===================================================================
817 --- kernel-2.6.28.orig/init/Kconfig
818 +++ kernel-2.6.28/init/Kconfig
819 @@ -18,6 +18,19 @@ config DEFCONFIG_LIST
824 + bool "BFS cpu scheduler"
826 + The Brain Fuck CPU Scheduler for excellent interactivity and
827 + responsiveness on the desktop and solid scalability on normal
828 + hardware. Not recommended for 4096 CPUs.
830 + Currently incompatible with the Group CPU scheduler.
837 bool "Prompt for development and/or incomplete code/drivers"
839 @@ -332,7 +345,7 @@ config HAVE_UNSTABLE_SCHED_CLOCK
842 bool "Group CPU scheduler"
843 - depends on EXPERIMENTAL
844 + depends on EXPERIMENTAL && !SCHED_BFS
847 This feature lets CPU scheduler recognize task groups and control CPU
848 @@ -381,7 +394,7 @@ endchoice
850 config CGROUP_CPUACCT
851 bool "Simple CPU accounting cgroup subsystem"
853 + depends on CGROUPS && !SCHED_BFS
855 Provides a simple Resource Controller for monitoring the
856 total CPU consumed by the tasks in a cgroup
857 Index: kernel-2.6.28/init/main.c
858 ===================================================================
859 --- kernel-2.6.28.orig/init/main.c
860 +++ kernel-2.6.28/init/main.c
861 @@ -800,6 +800,9 @@ static int noinline init_post(void)
862 system_state = SYSTEM_RUNNING;
863 numa_default_policy();
865 + print_scheduler_version();
868 if (sys_open((const char __user *) "/dev/console", O_RDWR, 0) < 0)
869 printk(KERN_WARNING "Warning: unable to open an initial console.\n");
871 Index: kernel-2.6.28/kernel/delayacct.c
872 ===================================================================
873 --- kernel-2.6.28.orig/kernel/delayacct.c
874 +++ kernel-2.6.28/kernel/delayacct.c
875 @@ -127,7 +127,7 @@ int __delayacct_add_tsk(struct taskstats
877 t1 = tsk->sched_info.pcount;
878 t2 = tsk->sched_info.run_delay;
879 - t3 = tsk->sched_info.cpu_time;
880 + t3 = tsk_seruntime(tsk);
884 Index: kernel-2.6.28/kernel/exit.c
885 ===================================================================
886 --- kernel-2.6.28.orig/kernel/exit.c
887 +++ kernel-2.6.28/kernel/exit.c
888 @@ -112,6 +112,8 @@ static void __exit_signal(struct task_st
889 * We won't ever get here for the group leader, since it
890 * will have been the last reference on the signal_struct.
892 + sig->utime = cputime_add(sig->utime, task_utime(tsk));
893 + sig->stime = cputime_add(sig->stime, task_stime(tsk));
894 sig->gtime = cputime_add(sig->gtime, task_gtime(tsk));
895 sig->min_flt += tsk->min_flt;
896 sig->maj_flt += tsk->maj_flt;
897 @@ -120,6 +122,7 @@ static void __exit_signal(struct task_st
898 sig->inblock += task_io_get_inblock(tsk);
899 sig->oublock += task_io_get_oublock(tsk);
900 task_io_accounting_add(&sig->ioac, &tsk->ioac);
901 + sig->sum_sched_runtime += tsk_seruntime(tsk);
902 sig = NULL; /* Marker for below. */
905 Index: kernel-2.6.28/kernel/fork.c
906 ===================================================================
907 --- kernel-2.6.28.orig/kernel/fork.c
908 +++ kernel-2.6.28/kernel/fork.c
909 @@ -806,14 +806,15 @@ static int copy_signal(unsigned long clo
912 if (clone_flags & CLONE_THREAD) {
913 - ret = thread_group_cputime_clone_thread(current);
914 - if (likely(!ret)) {
915 - atomic_inc(&current->signal->count);
916 - atomic_inc(&current->signal->live);
919 + atomic_inc(&current->signal->count);
920 + atomic_inc(&current->signal->live);
923 sig = kmem_cache_alloc(signal_cachep, GFP_KERNEL);
926 + posix_cpu_timers_init_group(sig);
931 @@ -843,21 +844,20 @@ static int copy_signal(unsigned long clo
932 sig->tty_old_pgrp = NULL;
935 - sig->cutime = sig->cstime = cputime_zero;
936 + sig->utime = sig->stime = sig->cutime = sig->cstime = cputime_zero;
937 sig->gtime = cputime_zero;
938 sig->cgtime = cputime_zero;
939 sig->nvcsw = sig->nivcsw = sig->cnvcsw = sig->cnivcsw = 0;
940 sig->min_flt = sig->maj_flt = sig->cmin_flt = sig->cmaj_flt = 0;
941 sig->inblock = sig->oublock = sig->cinblock = sig->coublock = 0;
942 task_io_accounting_init(&sig->ioac);
943 + sig->sum_sched_runtime = 0;
944 taskstats_tgid_init(sig);
946 task_lock(current->group_leader);
947 memcpy(sig->rlim, current->signal->rlim, sizeof sig->rlim);
948 task_unlock(current->group_leader);
950 - posix_cpu_timers_init_group(sig);
952 acct_init_pacct(&sig->pacct);
955 @@ -1207,7 +1207,7 @@ static struct task_struct *copy_process(
956 * parent's CPU). This avoids alot of nasty races.
958 p->cpus_allowed = current->cpus_allowed;
959 - p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed;
960 + tsk_cpus_current(p);
961 if (unlikely(!cpu_isset(task_cpu(p), p->cpus_allowed) ||
962 !cpu_online(task_cpu(p))))
963 set_task_cpu(p, smp_processor_id());
964 Index: kernel-2.6.28/kernel/itimer.c
965 ===================================================================
966 --- kernel-2.6.28.orig/kernel/itimer.c
967 +++ kernel-2.6.28/kernel/itimer.c
968 @@ -62,7 +62,7 @@ int do_getitimer(int which, struct itime
969 struct task_cputime cputime;
972 - thread_group_cputime(tsk, &cputime);
973 + thread_group_cputimer(tsk, &cputime);
974 utime = cputime.utime;
975 if (cputime_le(cval, utime)) { /* about to fire */
976 cval = jiffies_to_cputime(1);
977 @@ -82,7 +82,7 @@ int do_getitimer(int which, struct itime
978 struct task_cputime times;
981 - thread_group_cputime(tsk, &times);
982 + thread_group_cputimer(tsk, &times);
983 ptime = cputime_add(times.utime, times.stime);
984 if (cputime_le(cval, ptime)) { /* about to fire */
985 cval = jiffies_to_cputime(1);
986 Index: kernel-2.6.28/kernel/kthread.c
987 ===================================================================
988 --- kernel-2.6.28.orig/kernel/kthread.c
989 +++ kernel-2.6.28/kernel/kthread.c
991 #include <linux/mutex.h>
992 #include <trace/sched.h>
994 -#define KTHREAD_NICE_LEVEL (-5)
995 +#define KTHREAD_NICE_LEVEL (0)
997 static DEFINE_SPINLOCK(kthread_create_lock);
998 static LIST_HEAD(kthread_create_list);
999 @@ -179,7 +179,6 @@ void kthread_bind(struct task_struct *k,
1001 set_task_cpu(k, cpu);
1002 k->cpus_allowed = cpumask_of_cpu(cpu);
1003 - k->rt.nr_cpus_allowed = 1;
1004 k->flags |= PF_THREAD_BOUND;
1006 EXPORT_SYMBOL(kthread_bind);
1007 Index: kernel-2.6.28/kernel/posix-cpu-timers.c
1008 ===================================================================
1009 --- kernel-2.6.28.orig/kernel/posix-cpu-timers.c
1010 +++ kernel-2.6.28/kernel/posix-cpu-timers.c
1012 #include <linux/kernel_stat.h>
1015 - * Allocate the thread_group_cputime structure appropriately and fill in the
1016 - * current values of the fields. Called from copy_signal() via
1017 - * thread_group_cputime_clone_thread() when adding a second or subsequent
1018 - * thread to a thread group. Assumes interrupts are enabled when called.
1020 -int thread_group_cputime_alloc(struct task_struct *tsk)
1022 - struct signal_struct *sig = tsk->signal;
1023 - struct task_cputime *cputime;
1026 - * If we have multiple threads and we don't already have a
1027 - * per-CPU task_cputime struct (checked in the caller), allocate
1028 - * one and fill it in with the times accumulated so far. We may
1029 - * race with another thread so recheck after we pick up the sighand
1032 - cputime = alloc_percpu(struct task_cputime);
1033 - if (cputime == NULL)
1035 - spin_lock_irq(&tsk->sighand->siglock);
1036 - if (sig->cputime.totals) {
1037 - spin_unlock_irq(&tsk->sighand->siglock);
1038 - free_percpu(cputime);
1041 - sig->cputime.totals = cputime;
1042 - cputime = per_cpu_ptr(sig->cputime.totals, smp_processor_id());
1043 - cputime->utime = tsk->utime;
1044 - cputime->stime = tsk->stime;
1045 - cputime->sum_exec_runtime = tsk->se.sum_exec_runtime;
1046 - spin_unlock_irq(&tsk->sighand->siglock);
1051 - * thread_group_cputime - Sum the thread group time fields across all CPUs.
1053 - * @tsk: The task we use to identify the thread group.
1054 - * @times: task_cputime structure in which we return the summed fields.
1056 - * Walk the list of CPUs to sum the per-CPU time fields in the thread group
1059 -void thread_group_cputime(
1060 - struct task_struct *tsk,
1061 - struct task_cputime *times)
1063 - struct signal_struct *sig;
1065 - struct task_cputime *tot;
1067 - sig = tsk->signal;
1068 - if (unlikely(!sig) || !sig->cputime.totals) {
1069 - times->utime = tsk->utime;
1070 - times->stime = tsk->stime;
1071 - times->sum_exec_runtime = tsk->se.sum_exec_runtime;
1074 - times->stime = times->utime = cputime_zero;
1075 - times->sum_exec_runtime = 0;
1076 - for_each_possible_cpu(i) {
1077 - tot = per_cpu_ptr(tsk->signal->cputime.totals, i);
1078 - times->utime = cputime_add(times->utime, tot->utime);
1079 - times->stime = cputime_add(times->stime, tot->stime);
1080 - times->sum_exec_runtime += tot->sum_exec_runtime;
1085 * Called after updating RLIMIT_CPU to set timer expiration if necessary.
1087 void update_rlimit_cpu(unsigned long rlim_new)
1088 @@ -300,6 +230,71 @@ static int cpu_clock_sample(const clocki
1092 +void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
1094 + struct sighand_struct *sighand;
1095 + struct signal_struct *sig;
1096 + struct task_struct *t;
1098 + *times = INIT_CPUTIME;
1101 + sighand = rcu_dereference(tsk->sighand);
1105 + sig = tsk->signal;
1109 + times->utime = cputime_add(times->utime, t->utime);
1110 + times->stime = cputime_add(times->stime, t->stime);
1111 + times->sum_exec_runtime += tsk_seruntime(t);
1113 + t = next_thread(t);
1114 + } while (t != tsk);
1116 + times->utime = cputime_add(times->utime, sig->utime);
1117 + times->stime = cputime_add(times->stime, sig->stime);
1118 + times->sum_exec_runtime += sig->sum_sched_runtime;
1120 + rcu_read_unlock();
1123 +static void update_gt_cputime(struct task_cputime *a, struct task_cputime *b)
1125 + if (cputime_gt(b->utime, a->utime))
1126 + a->utime = b->utime;
1128 + if (cputime_gt(b->stime, a->stime))
1129 + a->stime = b->stime;
1131 + if (b->sum_exec_runtime > a->sum_exec_runtime)
1132 + a->sum_exec_runtime = b->sum_exec_runtime;
1135 +void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times)
1137 + struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;
1138 + struct task_cputime sum;
1139 + unsigned long flags;
1141 + spin_lock_irqsave(&cputimer->lock, flags);
1142 + if (!cputimer->running) {
1143 + cputimer->running = 1;
1145 + * The POSIX timer interface allows for absolute time expiry
1146 + * values through the TIMER_ABSTIME flag, therefore we have
1147 + * to synchronize the timer to the clock every time we start
1150 + thread_group_cputime(tsk, &sum);
1151 + update_gt_cputime(&cputimer->cputime, &sum);
1153 + *times = cputimer->cputime;
1154 + spin_unlock_irqrestore(&cputimer->lock, flags);
1158 * Sample a process (thread group) clock for the given group_leader task.
1159 * Must be called with tasklist_lock held for reading.
1160 @@ -521,16 +516,17 @@ static void cleanup_timers(struct list_h
1161 void posix_cpu_timers_exit(struct task_struct *tsk)
1163 cleanup_timers(tsk->cpu_timers,
1164 - tsk->utime, tsk->stime, tsk->se.sum_exec_runtime);
1165 + tsk->utime, tsk->stime, tsk_seruntime(tsk));
1168 void posix_cpu_timers_exit_group(struct task_struct *tsk)
1170 - struct task_cputime cputime;
1171 + struct signal_struct *const sig = tsk->signal;
1173 - thread_group_cputime(tsk, &cputime);
1174 cleanup_timers(tsk->signal->cpu_timers,
1175 - cputime.utime, cputime.stime, cputime.sum_exec_runtime);
1176 + cputime_add(tsk->utime, sig->utime),
1177 + cputime_add(tsk->stime, sig->stime),
1178 + tsk_seruntime(tsk) + sig->sum_sched_runtime);
1181 static void clear_dead_task(struct k_itimer *timer, union cpu_time_count now)
1182 @@ -687,6 +683,33 @@ static void cpu_timer_fire(struct k_itim
1186 + * Sample a process (thread group) timer for the given group_leader task.
1187 + * Must be called with tasklist_lock held for reading.
1189 +static int cpu_timer_sample_group(const clockid_t which_clock,
1190 + struct task_struct *p,
1191 + union cpu_time_count *cpu)
1193 + struct task_cputime cputime;
1195 + thread_group_cputimer(p, &cputime);
1196 + switch (CPUCLOCK_WHICH(which_clock)) {
1199 + case CPUCLOCK_PROF:
1200 + cpu->cpu = cputime_add(cputime.utime, cputime.stime);
1202 + case CPUCLOCK_VIRT:
1203 + cpu->cpu = cputime.utime;
1205 + case CPUCLOCK_SCHED:
1206 + cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p);
1213 * Guts of sys_timer_settime for CPU timers.
1214 * This is called with the timer locked and interrupts disabled.
1215 * If we return TIMER_RETRY, it's necessary to release the timer's lock
1216 @@ -747,7 +770,7 @@ int posix_cpu_timer_set(struct k_itimer
1217 if (CPUCLOCK_PERTHREAD(timer->it_clock)) {
1218 cpu_clock_sample(timer->it_clock, p, &val);
1220 - cpu_clock_sample_group(timer->it_clock, p, &val);
1221 + cpu_timer_sample_group(timer->it_clock, p, &val);
1225 @@ -895,7 +918,7 @@ void posix_cpu_timer_get(struct k_itimer
1226 read_unlock(&tasklist_lock);
1229 - cpu_clock_sample_group(timer->it_clock, p, &now);
1230 + cpu_timer_sample_group(timer->it_clock, p, &now);
1231 clear_dead = (unlikely(p->exit_state) &&
1232 thread_group_empty(p));
1234 @@ -957,6 +980,7 @@ static void check_thread_timers(struct t
1236 struct list_head *timers = tsk->cpu_timers;
1237 struct signal_struct *const sig = tsk->signal;
1238 + unsigned long soft;
1241 tsk->cputime_expires.prof_exp = cputime_zero;
1242 @@ -994,7 +1018,7 @@ static void check_thread_timers(struct t
1243 struct cpu_timer_list *t = list_first_entry(timers,
1244 struct cpu_timer_list,
1246 - if (!--maxfire || tsk->se.sum_exec_runtime < t->expires.sched) {
1247 + if (!--maxfire || tsk_seruntime(tsk) < t->expires.sched) {
1248 tsk->cputime_expires.sched_exp = t->expires.sched;
1251 @@ -1005,12 +1029,13 @@ static void check_thread_timers(struct t
1253 * Check for the special case thread timers.
1255 - if (sig->rlim[RLIMIT_RTTIME].rlim_cur != RLIM_INFINITY) {
1256 - unsigned long hard = sig->rlim[RLIMIT_RTTIME].rlim_max;
1257 - unsigned long *soft = &sig->rlim[RLIMIT_RTTIME].rlim_cur;
1258 + soft = ACCESS_ONCE(sig->rlim[RLIMIT_RTTIME].rlim_cur);
1259 + if (soft != RLIM_INFINITY) {
1260 + unsigned long hard =
1261 + ACCESS_ONCE(sig->rlim[RLIMIT_RTTIME].rlim_max);
1263 if (hard != RLIM_INFINITY &&
1264 - tsk->rt.timeout > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) {
1265 + tsk_rttimeout(tsk) > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) {
1267 * At the hard limit, we just die.
1268 * No need to calculate anything else now.
1269 @@ -1018,14 +1043,13 @@ static void check_thread_timers(struct t
1270 __group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
1273 - if (tsk->rt.timeout > DIV_ROUND_UP(*soft, USEC_PER_SEC/HZ)) {
1274 + if (tsk_rttimeout(tsk) > DIV_ROUND_UP(soft, USEC_PER_SEC/HZ)) {
1276 * At the soft limit, send a SIGXCPU every second.
1278 - if (sig->rlim[RLIMIT_RTTIME].rlim_cur
1279 - < sig->rlim[RLIMIT_RTTIME].rlim_max) {
1280 - sig->rlim[RLIMIT_RTTIME].rlim_cur +=
1283 + soft += USEC_PER_SEC;
1284 + sig->rlim[RLIMIT_RTTIME].rlim_cur = soft;
1287 "RT Watchdog Timeout: %s[%d]\n",
1288 @@ -1035,6 +1059,19 @@ static void check_thread_timers(struct t
1292 +static void stop_process_timers(struct task_struct *tsk)
1294 + struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;
1295 + unsigned long flags;
1297 + if (!cputimer->running)
1300 + spin_lock_irqsave(&cputimer->lock, flags);
1301 + cputimer->running = 0;
1302 + spin_unlock_irqrestore(&cputimer->lock, flags);
1306 * Check for any per-thread CPU timers that have fired and move them
1307 * off the tsk->*_timers list onto the firing list. Per-thread timers
1308 @@ -1058,13 +1095,15 @@ static void check_process_timers(struct
1309 sig->rlim[RLIMIT_CPU].rlim_cur == RLIM_INFINITY &&
1310 list_empty(&timers[CPUCLOCK_VIRT]) &&
1311 cputime_eq(sig->it_virt_expires, cputime_zero) &&
1312 - list_empty(&timers[CPUCLOCK_SCHED]))
1313 + list_empty(&timers[CPUCLOCK_SCHED])) {
1314 + stop_process_timers(tsk);
1319 * Collect the current process totals.
1321 - thread_group_cputime(tsk, &cputime);
1322 + thread_group_cputimer(tsk, &cputime);
1323 utime = cputime.utime;
1324 ptime = cputime_add(utime, cputime.stime);
1325 sum_sched_runtime = cputime.sum_exec_runtime;
1326 @@ -1235,7 +1274,7 @@ void posix_cpu_timer_schedule(struct k_i
1327 clear_dead_task(timer, now);
1330 - cpu_clock_sample_group(timer->it_clock, p, &now);
1331 + cpu_timer_sample_group(timer->it_clock, p, &now);
1332 bump_cpu_timer(timer, now);
1333 /* Leave the tasklist_lock locked for the call below. */
1335 @@ -1319,7 +1358,7 @@ static inline int fastpath_timer_check(s
1336 struct task_cputime task_sample = {
1337 .utime = tsk->utime,
1338 .stime = tsk->stime,
1339 - .sum_exec_runtime = tsk->se.sum_exec_runtime
1340 + .sum_exec_runtime = tsk_seruntime(tsk)
1343 if (task_cputime_expired(&task_sample, &tsk->cputime_expires))
1344 @@ -1330,7 +1369,7 @@ static inline int fastpath_timer_check(s
1345 if (!task_cputime_zero(&sig->cputime_expires)) {
1346 struct task_cputime group_sample;
1348 - thread_group_cputime(tsk, &group_sample);
1349 + thread_group_cputimer(tsk, &group_sample);
1350 if (task_cputime_expired(&group_sample, &sig->cputime_expires))
1353 @@ -1412,7 +1451,7 @@ void set_process_cpu_timer(struct task_s
1354 struct list_head *head;
1356 BUG_ON(clock_idx == CPUCLOCK_SCHED);
1357 - cpu_clock_sample_group(clock_idx, tsk, &now);
1358 + cpu_timer_sample_group(clock_idx, tsk, &now);
1361 if (!cputime_eq(*oldval, cputime_zero)) {
1362 Index: kernel-2.6.28/kernel/sched.c
1363 ===================================================================
1364 --- kernel-2.6.28.orig/kernel/sched.c
1365 +++ kernel-2.6.28/kernel/sched.c
1367 +#ifdef CONFIG_SCHED_BFS
1368 +#include "sched_bfs.c"
1373 @@ -4252,7 +4255,6 @@ void account_steal_time(struct task_stru
1375 if (p == rq->idle) {
1376 p->stime = cputime_add(p->stime, steal);
1377 - account_group_system_time(p, steal);
1378 if (atomic_read(&rq->nr_iowait) > 0)
1379 cpustat->iowait = cputime64_add(cpustat->iowait, tmp);
1381 @@ -4388,7 +4390,7 @@ void __kprobes sub_preempt_count(int val
1385 - if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
1386 + if (DEBUG_LOCKS_WARN_ON(val > preempt_count() - (!!kernel_locked())))
1389 * Is the spinlock portion underflowing?
1390 @@ -9437,3 +9439,4 @@ struct cgroup_subsys cpuacct_subsys = {
1391 .subsys_id = cpuacct_subsys_id,
1393 #endif /* CONFIG_CGROUP_CPUACCT */
1394 +#endif /* CONFIG_SCHED_BFS */
1395 Index: kernel-2.6.28/kernel/sched_bfs.c
1396 ===================================================================
1398 +++ kernel-2.6.28/kernel/sched_bfs.c
1401 + * kernel/sched_bfs.c, was sched.c
1403 + * Kernel scheduler and related syscalls
1405 + * Copyright (C) 1991-2002 Linus Torvalds
1407 + * 1996-12-23 Modified by Dave Grothe to fix bugs in semaphores and
1408 + * make semaphores SMP safe
1409 + * 1998-11-19 Implemented schedule_timeout() and related stuff
1410 + * by Andrea Arcangeli
1411 + * 2002-01-04 New ultra-scalable O(1) scheduler by Ingo Molnar:
1412 + * hybrid priority-list and round-robin design with
1413 + * an array-switch method of distributing timeslices
1414 + * and per-CPU runqueues. Cleanups and useful suggestions
1415 + * by Davide Libenzi, preemptible kernel bits by Robert Love.
1416 + * 2003-09-03 Interactivity tuning by Con Kolivas.
1417 + * 2004-04-02 Scheduler domains code by Nick Piggin
1418 + * 2007-04-15 Work begun on replacing all interactivity tuning with a
1419 + * fair scheduling design by Con Kolivas.
1420 + * 2007-05-05 Load balancing (smp-nice) and other improvements
1421 + * by Peter Williams
1422 + * 2007-05-06 Interactivity improvements to CFS by Mike Galbraith
1423 + * 2007-07-01 Group scheduling enhancements by Srivatsa Vaddagiri
1424 + * 2007-11-29 RT balancing improvements by Steven Rostedt, Gregory Haskins,
1425 + * Thomas Gleixner, Mike Kravetz
1426 + * now Brainfuck deadline scheduling policy by Con Kolivas deletes
1427 + * a whole lot of those previous things.
1430 +#include <linux/mm.h>
1431 +#include <linux/module.h>
1432 +#include <linux/nmi.h>
1433 +#include <linux/init.h>
1434 +#include <asm/uaccess.h>
1435 +#include <linux/highmem.h>
1436 +#include <linux/smp_lock.h>
1437 +#include <asm/mmu_context.h>
1438 +#include <linux/interrupt.h>
1439 +#include <linux/capability.h>
1440 +#include <linux/completion.h>
1441 +#include <linux/kernel_stat.h>
1442 +#include <linux/debug_locks.h>
1443 +#include <linux/security.h>
1444 +#include <linux/notifier.h>
1445 +#include <linux/profile.h>
1446 +#include <linux/freezer.h>
1447 +#include <linux/vmalloc.h>
1448 +#include <linux/blkdev.h>
1449 +#include <linux/delay.h>
1450 +#include <linux/smp.h>
1451 +#include <linux/threads.h>
1452 +#include <linux/timer.h>
1453 +#include <linux/rcupdate.h>
1454 +#include <linux/cpu.h>
1455 +#include <linux/cpuset.h>
1456 +#include <linux/cpumask.h>
1457 +#include <linux/percpu.h>
1458 +#include <linux/kthread.h>
1459 +#include <linux/seq_file.h>
1460 +#include <linux/syscalls.h>
1461 +#include <linux/times.h>
1462 +#include <linux/tsacct_kern.h>
1463 +#include <linux/kprobes.h>
1464 +#include <linux/delayacct.h>
1465 +#include <linux/reciprocal_div.h>
1466 +#include <linux/log2.h>
1467 +#include <linux/bootmem.h>
1468 +#include <linux/ftrace.h>
1469 +#include <asm/irq_regs.h>
1470 +#include <asm/tlb.h>
1471 +#include <asm/unistd.h>
1473 +#define rt_prio(prio) unlikely((prio) < MAX_RT_PRIO)
1474 +#define rt_task(p) rt_prio((p)->prio)
1475 +#define rt_queue(rq) rt_prio((rq)->rq_prio)
1476 +#define batch_task(p) (unlikely((p)->policy == SCHED_BATCH))
1477 +#define is_rt_policy(policy) ((policy) == SCHED_FIFO || \
1478 + (policy) == SCHED_RR)
1479 +#define has_rt_policy(p) unlikely(is_rt_policy((p)->policy))
1480 +#define idleprio_task(p) unlikely((p)->policy == SCHED_IDLEPRIO)
1481 +#define iso_task(p) unlikely((p)->policy == SCHED_ISO)
1482 +#define iso_queue(rq) unlikely((rq)->rq_policy == SCHED_ISO)
1483 +#define ISO_PERIOD ((5 * HZ * num_online_cpus()) + 1)
1486 + * Convert user-nice values [ -20 ... 0 ... 19 ]
1487 + * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ], and back.
1490 +#define NICE_TO_PRIO(nice) (MAX_RT_PRIO + (nice) + 20)
1491 +#define PRIO_TO_NICE(prio) ((prio) - MAX_RT_PRIO - 20)
1492 +#define TASK_NICE(p) PRIO_TO_NICE((p)->static_prio)
1495 + * 'User priority' is the nice value converted to something we
1496 + * can work with better when scaling various scheduler parameters,
1497 + * it's a [ 0 ... 39 ] range.
1499 +#define USER_PRIO(p) ((p)-MAX_RT_PRIO)
1500 +#define TASK_USER_PRIO(p) USER_PRIO((p)->static_prio)
1501 +#define MAX_USER_PRIO (USER_PRIO(MAX_PRIO))
1502 +#define SCHED_PRIO(p) ((p)+MAX_RT_PRIO)
1504 +/* Some helpers for converting to/from various scales. */
1505 +#define JIFFIES_TO_NS(TIME) ((TIME) * (1000000000 / HZ))
1506 +#define MS_TO_NS(TIME) ((TIME) * 1000000)
1507 +#define MS_TO_US(TIME) ((TIME) * 1000)
1511 + * Divide a load by a sched group cpu_power : (load / sg->__cpu_power)
1512 + * Since cpu_power is a 'constant', we can use a reciprocal divide.
1514 +static inline u32 sg_div_cpu_power(const struct sched_group *sg, u32 load)
1516 + return reciprocal_divide(load, sg->reciprocal_cpu_power);
1520 + * Each time a sched group cpu_power is changed,
1521 + * we must compute its reciprocal value
1523 +static inline void sg_inc_cpu_power(struct sched_group *sg, u32 val)
1525 + sg->__cpu_power += val;
1526 + sg->reciprocal_cpu_power = reciprocal_value(sg->__cpu_power);
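The pair of helpers above trade a hardware divide for a multiply: reciprocal_value() precomputes roughly (2^32 / divisor), and reciprocal_divide() then costs one 64-bit multiply and a shift per "division". A minimal sketch of the pattern using only the linux/reciprocal_div.h helpers (illustrative, not a hunk of this patch):

	static u32 div_by_cpu_power_sketch(u32 load, u32 cpu_power)
	{
		/* Precomputed once, whenever cpu_power changes */
		u32 recip = reciprocal_value(cpu_power);

		/* Each "division" is now a multiply and a shift */
		return reciprocal_divide(load, recip);	/* ~= load / cpu_power */
	}

This is also why sg_inc_cpu_power() recomputes reciprocal_cpu_power on every change: the cached reciprocal is only valid for a single divisor.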
1531 + * This is the time all tasks within the same priority round robin.
1532 + * Value is in ms and set to a minimum of 6ms. Scales with number of cpus.
1533 + * Tunable via /proc interface.
1535 +int rr_interval __read_mostly = 6;
1538 + * sched_iso_cpu - sysctl which determines the cpu percentage SCHED_ISO tasks
1539 + * are allowed to run as real time tasks over a period of five seconds. This
1540 + * is the total over all online cpus.
1542 +int sched_iso_cpu __read_mostly = 70;
1545 + * The relative length of deadline for each priority (nice) level.
1547 +static int prio_ratios[PRIO_RANGE] __read_mostly;
1550 + * The quota handed out to tasks of all priority levels when refilling their time_slice.
1553 +static inline unsigned long timeslice(void)
1555 + return MS_TO_US(rr_interval);
1559 + * The global runqueue data that all CPUs work off. All data is protected by grq.lock.
1564 + unsigned long nr_running;
1565 + unsigned long nr_uninterruptible;
1566 + unsigned long long nr_switches;
1567 + struct list_head queue[PRIO_LIMIT];
1568 + DECLARE_BITMAP(prio_bitmap, PRIO_LIMIT + 1);
1570 + int iso_refractory;
1572 + unsigned long qnr; /* queued not running */
1573 + cpumask_t cpu_idle_map;
1577 +/* There can be only one */
1578 +static struct global_rq grq;
1581 + * This is the main, per-CPU runqueue data structure.
1582 + * This data should only be modified by the local cpu.
1586 +#ifdef CONFIG_NO_HZ
1587 + unsigned char in_nohz_recently;
1591 + struct task_struct *curr, *idle;
1592 + struct mm_struct *prev_mm;
1594 + /* Stored data about rq->curr to work outside grq lock */
1595 + unsigned long rq_deadline;
1596 + unsigned int rq_policy;
1597 + int rq_time_slice;
1601 + /* Accurate timekeeping data */
1602 + u64 timekeep_clock;
1603 + unsigned long user_pc, nice_pc, irq_pc, softirq_pc, system_pc,
1604 + iowait_pc, idle_pc;
1605 + atomic_t nr_iowait;
1608 + int cpu; /* cpu of this runqueue */
1611 + struct root_domain *rd;
1612 + struct sched_domain *sd;
1613 + unsigned long *cpu_locality; /* CPU relative cache distance */
1614 +#ifdef CONFIG_SCHED_SMT
1615 + int (*siblings_idle)(unsigned long cpu);
1616 + /* See if all smt siblings are idle */
1617 + cpumask_t smt_siblings;
1619 +#ifdef CONFIG_SCHED_MC
1620 + int (*cache_idle)(unsigned long cpu);
1621 + /* See if all cache siblings are idle */
1622 + cpumask_t cache_siblings;
1627 +#ifdef CONFIG_SCHEDSTATS
1629 + /* latency stats */
1630 + struct sched_info rq_sched_info;
1632 + /* sys_sched_yield() stats */
1633 + unsigned int yld_exp_empty;
1634 + unsigned int yld_act_empty;
1635 + unsigned int yld_both_empty;
1636 + unsigned int yld_count;
1638 + /* schedule() stats */
1639 + unsigned int sched_switch;
1640 + unsigned int sched_count;
1641 + unsigned int sched_goidle;
1643 + /* try_to_wake_up() stats */
1644 + unsigned int ttwu_count;
1645 + unsigned int ttwu_local;
1648 + unsigned int bkl_count;
1652 +static DEFINE_PER_CPU(struct rq, runqueues) ____cacheline_aligned_in_smp;
1653 +static DEFINE_MUTEX(sched_hotcpu_mutex);
1658 + * We add the notion of a root-domain which will be used to define per-domain
1659 + * variables. Each exclusive cpuset essentially defines an island domain by
1660 + * fully partitioning the member cpus from any other cpuset. Whenever a new
1661 + * exclusive cpuset is created, we also create and attach a new root-domain object.
1665 +struct root_domain {
1666 + atomic_t refcount;
1671 + * The "RT overload" flag: it gets set if a CPU has more than
1672 + * one runnable RT task.
1674 + cpumask_t rto_mask;
1675 + atomic_t rto_count;
1679 + * By default the system creates a single root-domain with all cpus as
1680 + * members (mimicking the global state we have today).
1682 +static struct root_domain def_root_domain;
1685 +static inline int cpu_of(struct rq *rq)
1695 + * The domain tree (rq->sd) is protected by RCU's quiescent state transition.
1696 + * See detach_destroy_domains: synchronize_sched for details.
1698 + * The domain tree of any CPU may only be accessed from within
1699 + * preempt-disabled sections.
1701 +#define for_each_domain(cpu, __sd) \
1702 + for (__sd = rcu_dereference(cpu_rq(cpu)->sd); __sd; __sd = __sd->parent)
1705 +#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
1706 +#define this_rq() (&__get_cpu_var(runqueues))
1707 +#define task_rq(p) cpu_rq(task_cpu(p))
1708 +#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
1709 +#else /* CONFIG_SMP */
1710 +static struct rq *uprq;
1711 +#define cpu_rq(cpu) (uprq)
1712 +#define this_rq() (uprq)
1713 +#define task_rq(p) (uprq)
1714 +#define cpu_curr(cpu) ((uprq)->curr)
1717 +#include "sched_stats.h"
1719 +#ifndef prepare_arch_switch
1720 +# define prepare_arch_switch(next) do { } while (0)
1722 +#ifndef finish_arch_switch
1723 +# define finish_arch_switch(prev) do { } while (0)
1727 + * All common locking functions performed on grq.lock. rq->clock is local to
1728 + * the cpu accessing it so it can be modified just with interrupts disabled,
1729 + * but looking up task_rq must be done under grq.lock to be safe.
1731 +static inline void update_rq_clock(struct rq *rq)
1733 + rq->clock = sched_clock_cpu(cpu_of(rq));
1736 +static inline int task_running(struct task_struct *p)
1741 +static inline void grq_lock(void)
1742 + __acquires(grq.lock)
1744 + spin_lock(&grq.lock);
1747 +static inline void grq_unlock(void)
1748 + __releases(grq.lock)
1750 + spin_unlock(&grq.lock);
1753 +static inline void grq_lock_irq(void)
1754 + __acquires(grq.lock)
1756 + spin_lock_irq(&grq.lock);
1759 +static inline void time_lock_grq(struct rq *rq)
1760 + __acquires(grq.lock)
1762 + update_rq_clock(rq);
1766 +static inline void grq_unlock_irq(void)
1767 + __releases(grq.lock)
1769 + spin_unlock_irq(&grq.lock);
1772 +static inline void grq_lock_irqsave(unsigned long *flags)
1773 + __acquires(grq.lock)
1775 + spin_lock_irqsave(&grq.lock, *flags);
1778 +static inline void grq_unlock_irqrestore(unsigned long *flags)
1779 + __releases(grq.lock)
1781 + spin_unlock_irqrestore(&grq.lock, *flags);
1784 +static inline struct rq
1785 +*task_grq_lock(struct task_struct *p, unsigned long *flags)
1786 + __acquires(grq.lock)
1788 + grq_lock_irqsave(flags);
1789 + return task_rq(p);
1792 +static inline struct rq
1793 +*time_task_grq_lock(struct task_struct *p, unsigned long *flags)
1794 + __acquires(grq.lock)
1796 + struct rq *rq = task_grq_lock(p, flags);
1797 + update_rq_clock(rq);
1801 +static inline struct rq *task_grq_lock_irq(struct task_struct *p)
1802 + __acquires(grq.lock)
1805 + return task_rq(p);
1808 +static inline void time_task_grq_lock_irq(struct task_struct *p)
1809 + __acquires(grq.lock)
1811 + struct rq *rq = task_grq_lock_irq(p);
1812 + update_rq_clock(rq);
1815 +static inline void task_grq_unlock_irq(void)
1816 + __releases(grq.lock)
1821 +static inline void task_grq_unlock(unsigned long *flags)
1822 + __releases(grq.lock)
1824 + grq_unlock_irqrestore(flags);
1828 + * grunqueue_is_locked
1830 + * Returns true if the global runqueue is locked.
1831 + * This interface allows printk to be called with the runqueue lock
1832 + * held and know whether or not it is OK to wake up the klogd.
1834 +inline int grunqueue_is_locked(void)
1836 + return spin_is_locked(&grq.lock);
1839 +inline void grq_unlock_wait(void)
1840 + __releases(grq.lock)
1842 + smp_mb(); /* spin-unlock-wait is not a full memory barrier */
1843 + spin_unlock_wait(&grq.lock);
1846 +static inline void time_grq_lock(struct rq *rq, unsigned long *flags)
1847 + __acquires(grq.lock)
1849 + local_irq_save(*flags);
1850 + time_lock_grq(rq);
1853 +static inline struct rq *__task_grq_lock(struct task_struct *p)
1854 + __acquires(grq.lock)
1857 + return task_rq(p);
1860 +static inline void __task_grq_unlock(void)
1861 + __releases(grq.lock)
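A minimal usage sketch of the locking family above (illustrative only): each lock variant pairs with its matching unlock, and the rq returned by task_grq_lock() is only meaningful while grq.lock is held, since tasks can change CPUs once the lock is dropped.

	unsigned long flags;
	struct rq *rq;

	rq = task_grq_lock(p, &flags);	/* spin_lock_irqsave on grq.lock */
	/* ... inspect or modify p and its rq under the global lock ... */
	task_grq_unlock(&flags);	/* spin_unlock_irqrestore */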
1866 +#ifndef __ARCH_WANT_UNLOCKED_CTXSW
1867 +static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
1871 +static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
1873 +#ifdef CONFIG_DEBUG_SPINLOCK
1874 + /* this is a valid case when another task releases the spinlock */
1875 + grq.lock.owner = current;
1878 + * If we are tracking spinlock dependencies then we have to
1879 + * fix up the runqueue lock - which gets 'carried over' from
1880 + * prev into current:
1882 + spin_acquire(&grq.lock.dep_map, 0, 0, _THIS_IP_);
1887 +#else /* __ARCH_WANT_UNLOCKED_CTXSW */
1889 +static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
1891 +#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
1898 +static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
1901 +#ifndef __ARCH_WANT_INTERRUPTS_ON_CTXSW
1902 + local_irq_enable();
1905 +#endif /* __ARCH_WANT_UNLOCKED_CTXSW */
1908 + * A task that is queued but not running will be on the grq run list.
1909 + * A task that is not running or queued will not be on the grq run list.
1910 + * A task that is currently running will have ->oncpu set but not on the grq run list.
1913 +static inline int task_queued(struct task_struct *p)
1915 + return (!list_empty(&p->run_list));
1919 + * Removing from the global runqueue. Enter with grq locked.
1921 +static void dequeue_task(struct task_struct *p)
1923 + list_del_init(&p->run_list);
1924 + if (list_empty(grq.queue + p->prio))
1925 + __clear_bit(p->prio, grq.prio_bitmap);
1929 + * When a task is freshly forked, the first_time_slice flag is set to say
1930 + * it has taken time_slice from its parent and if it exits on this first
1931 + * time_slice it can return its time_slice back to the parent.
1933 +static inline void reset_first_time_slice(struct task_struct *p)
1935 + if (unlikely(p->first_time_slice))
1936 + p->first_time_slice = 0;
1940 + * To determine if it's safe for a task of SCHED_IDLEPRIO to actually run as
1941 + * an idle task, we ensure none of the following conditions are met.
1943 +static int idleprio_suitable(struct task_struct *p)
1945 + return (!freezing(p) && !signal_pending(p) &&
1946 + !(task_contributes_to_load(p)) && !(p->flags & (PF_EXITING)));
1950 + * To determine if a task of SCHED_ISO can run in pseudo-realtime, we check
1951 + * that the iso_refractory flag is not set.
1953 +static int isoprio_suitable(void)
1955 + return !grq.iso_refractory;
1959 + * Adding to the global runqueue. Enter with grq locked.
1961 +static void enqueue_task(struct task_struct *p)
1963 + if (!rt_task(p)) {
1964 + /* Check it hasn't gotten rt from PI */
1965 + if ((idleprio_task(p) && idleprio_suitable(p)) ||
1966 + (iso_task(p) && isoprio_suitable()))
1967 + p->prio = p->normal_prio;
1969 + p->prio = NORMAL_PRIO;
1971 + __set_bit(p->prio, grq.prio_bitmap);
1972 + list_add_tail(&p->run_list, grq.queue + p->prio);
1973 + sched_info_queued(p);
1976 +/* Only idle task does this as a real time task */
1977 +static inline void enqueue_task_head(struct task_struct *p)
1979 + __set_bit(p->prio, grq.prio_bitmap);
1980 + list_add(&p->run_list, grq.queue + p->prio);
1981 + sched_info_queued(p);
1984 +static inline void requeue_task(struct task_struct *p)
1986 + sched_info_queued(p);
1990 + * Returns the relative length of deadline compared to the shortest
1991 + * deadline, which is that of nice -20.
1993 +static inline int task_prio_ratio(struct task_struct *p)
1995 + return prio_ratios[TASK_USER_PRIO(p)];
1999 + * task_timeslice - all tasks of all priorities get the exact same timeslice
2000 + * length. CPU distribution is handled by giving different deadlines to
2001 + * tasks of different priorities.
2003 +static inline int task_timeslice(struct task_struct *p)
2005 + return (rr_interval * task_prio_ratio(p) / 100);
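For concreteness: prio_ratios[] is initialised at boot in a hunk not shown here. A hedged sketch of the conventional BFS scheme, in which each nice level's ratio is roughly 10% larger than the previous one (the constants below are an assumption, not quoted from this patch):

	static void fill_prio_ratios_sketch(void)
	{
		int i;

		prio_ratios[0] = 100;	/* nice -20: the baseline */
		for (i = 1; i < PRIO_RANGE; i++)
			prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;
	}

Under that assumption, task_timeslice() for a nice -20 task with the default rr_interval of 6 is 6 * 100 / 100 = 6, and a nice 19 task's value works out on the order of 40 times longer.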
2010 + * qnr is the "queued but not running" count which is the total number of
2011 + * tasks on the global runqueue list waiting for cpu time but not actually
2012 + * currently running on a cpu.
2014 +static inline void inc_qnr(void)
2019 +static inline void dec_qnr(void)
2024 +static inline int queued_notrunning(void)
2030 + * The cpu_idle_map stores a bitmap of all the cpus currently idle to
2031 + * allow easy lookup of whether any suitable idle cpus are available.
2033 +static inline void set_cpuidle_map(unsigned long cpu)
2035 + cpu_set(cpu, grq.cpu_idle_map);
2038 +static inline void clear_cpuidle_map(unsigned long cpu)
2040 + cpu_clear(cpu, grq.cpu_idle_map);
2043 +static int suitable_idle_cpus(struct task_struct *p)
2045 + return (cpus_intersects(p->cpus_allowed, grq.cpu_idle_map));
2048 +static void resched_task(struct task_struct *p);
2050 +#define CPUIDLE_CACHE_BUSY (1)
2051 +#define CPUIDLE_DIFF_CPU (2)
2052 +#define CPUIDLE_THREAD_BUSY (4)
2053 +#define CPUIDLE_DIFF_NODE (8)
2056 + * The best idle CPU is chosen according to the CPUIDLE ranking above where the
2057 + * lowest value would give the most suitable CPU to schedule p onto next. We
2058 + * iterate from the last CPU upwards instead of using for_each_cpu_mask so as
2059 + * to be able to break out immediately if the last CPU is idle. The order works
2060 + * out to be the following:
2062 + * Same core, idle or busy cache, idle threads
2063 + * Other core, same cache, idle or busy cache, idle threads.
2064 + * Same node, other CPU, idle cache, idle threads.
2065 + * Same node, other CPU, busy cache, idle threads.
2066 + * Same core, busy threads.
2067 + * Other core, same cache, busy threads.
2068 + * Same node, other CPU, busy threads.
2069 + * Other node, other CPU, idle cache, idle threads.
2070 + * Other node, other CPU, busy cache, idle threads.
2071 + * Other node, other CPU, busy threads.
2073 +static void resched_best_idle(struct task_struct *p)
2075 + unsigned long cpu_tmp, best_cpu, best_ranking;
2076 + cpumask_t tmpmask;
2080 + cpus_and(tmpmask, p->cpus_allowed, grq.cpu_idle_map);
2081 + iterate = cpus_weight(tmpmask);
2082 + best_cpu = task_cpu(p);
2084 + * Start below the last CPU and work up with next_cpu_nr as the last
2085 + * CPU might not be idle or affinity might not allow it.
2087 + cpu_tmp = best_cpu - 1;
2088 + rq = cpu_rq(best_cpu);
2089 + best_ranking = ~0UL;
2092 + unsigned long ranking;
2093 + struct rq *tmp_rq;
2096 + cpu_tmp = next_cpu_nr(cpu_tmp, tmpmask);
2097 + if (cpu_tmp >= nr_cpu_ids) {
2099 + cpu_tmp = next_cpu_nr(cpu_tmp, tmpmask);
2101 + tmp_rq = cpu_rq(cpu_tmp);
2103 + if (rq->cpu_locality[cpu_tmp]) {
2105 + if (rq->cpu_locality[cpu_tmp] > 1)
2106 + ranking |= CPUIDLE_DIFF_NODE;
2108 + ranking |= CPUIDLE_DIFF_CPU;
2110 +#ifdef CONFIG_SCHED_MC
2111 + if (!(tmp_rq->cache_idle(cpu_tmp)))
2112 + ranking |= CPUIDLE_CACHE_BUSY;
2114 +#ifdef CONFIG_SCHED_SMT
2115 + if (!(tmp_rq->siblings_idle(cpu_tmp)))
2116 + ranking |= CPUIDLE_THREAD_BUSY;
2118 + if (ranking < best_ranking) {
2119 + best_cpu = cpu_tmp;
2122 + best_ranking = ranking;
2124 + } while (--iterate > 0);
2126 + resched_task(cpu_rq(best_cpu)->curr);
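To make the ranking arithmetic concrete, a few totals derived directly from the flag values defined above (lower totals win):

	same CPU, idle cache, idle threads:                        0
	same CPU, busy cache:                CPUIDLE_CACHE_BUSY  = 1
	other CPU, same node, all idle:      CPUIDLE_DIFF_CPU    = 2
	other CPU, same node, busy cache and threads:  2 | 1 | 4 = 7
	other node, busy cache and threads:        8 | 2 | 1 | 4 = 15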
2129 +static inline void resched_suitable_idle(struct task_struct *p)
2131 + if (suitable_idle_cpus(p))
2132 + resched_best_idle(p);
2136 + * The cpu cache locality difference between CPUs is used to determine how far
2137 + * to offset the virtual deadline. "One" difference in locality means that one
2138 + * timeslice difference is allowed longer for the cpu local tasks. This is
2139 + * enough in the common case when tasks are up to 2* number of CPUs to keep
2140 + * tasks within their shared cache CPUs only. CPUs on different nodes or not
2141 + * even in this domain (NUMA) have "3" difference, allowing 4 times longer
2142 + * deadlines before being taken onto another cpu, allowing for 2* the double
2143 + * seen by separate CPUs above.
2144 + * Simple summary: Virtual deadlines are equal on shared cache CPUs, double
2145 + * on separate CPUs and quadruple in separate NUMA nodes.
2148 +cache_distance(struct rq *task_rq, struct rq *rq, struct task_struct *p)
2150 + return rq->cpu_locality[cpu_of(task_rq)] * task_timeslice(p);
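A worked instance of the offset, assuming the default rr_interval of 6 so that task_timeslice() is 6 for a nice -20 task (locality values per the comment above):

	offset = cpu_locality * task_timeslice(p)
	       = 0 * 6 = 0     shared cache: deadlines compared as-is
	       = 1 * 6 = 6     separate CPUs, same node: one timeslice later
	       = 3 * 6 = 18    separate NUMA nodes: three timeslices later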
2152 +#else /* CONFIG_SMP */
2153 +static inline void inc_qnr(void)
2157 +static inline void dec_qnr(void)
2161 +static inline int queued_notrunning(void)
2163 + return grq.nr_running;
2166 +static inline void set_cpuidle_map(unsigned long cpu)
2170 +static inline void clear_cpuidle_map(unsigned long cpu)
2174 +/* Always called from a busy cpu on UP */
2175 +static inline int suitable_idle_cpus(struct task_struct *p)
2177 + return uprq->curr == uprq->idle;
2180 +static inline void resched_suitable_idle(struct task_struct *p)
2185 +cache_distance(struct rq *task_rq, struct rq *rq, struct task_struct *p)
2189 +#endif /* CONFIG_SMP */
2192 + * activate_idle_task - move idle task to the _front_ of runqueue.
2194 +static inline void activate_idle_task(struct task_struct *p)
2196 + enqueue_task_head(p);
2201 +static inline int normal_prio(struct task_struct *p)
2203 + if (has_rt_policy(p))
2204 + return MAX_RT_PRIO - 1 - p->rt_priority;
2205 + if (idleprio_task(p))
2209 + return NORMAL_PRIO;
2213 + * Calculate the current priority, i.e. the priority
2214 + * taken into account by the scheduler. This value might
2215 + * be boosted by RT tasks as it will be RT if the task got
2216 + * RT-boosted. If not then it returns p->normal_prio.
2218 +static int effective_prio(struct task_struct *p)
2220 + p->normal_prio = normal_prio(p);
2222 + * If we are RT tasks or we were boosted to RT priority,
2223 + * keep the priority unchanged. Otherwise, update priority
2224 + * to the normal priority:
2226 + if (!rt_prio(p->prio))
2227 + return p->normal_prio;
2232 + * activate_task - move a task to the runqueue. Enter with grq locked.
2234 +static void activate_task(struct task_struct *p, struct rq *rq)
2236 + update_rq_clock(rq);
2239 + * Sleep time is in units of nanosecs, so shift by 20 to get a
2240 + * milliseconds-range estimation of the amount of time that the task spent sleeping:
2243 + if (unlikely(prof_on == SLEEP_PROFILING)) {
2244 + if (p->state == TASK_UNINTERRUPTIBLE)
2245 + profile_hits(SLEEP_PROFILING, (void *)get_wchan(p),
2246 + (rq->clock - p->last_ran) >> 20);
2249 + p->prio = effective_prio(p);
2250 + if (task_contributes_to_load(p))
2251 + grq.nr_uninterruptible--;
2258 + * deactivate_task - If it's running, it's not on the grq and we can just
2259 + * decrement the nr_running. Enter with grq locked.
2261 +static inline void deactivate_task(struct task_struct *p)
2263 + if (task_contributes_to_load(p))
2264 + grq.nr_uninterruptible++;
2269 +void set_task_cpu(struct task_struct *p, unsigned int cpu)
2272 + * After ->cpu is set up to a new value, task_grq_lock(p, ...) can be
2273 + * successfully executed on another CPU. We must ensure that updates of
2274 + * per-task data have been completed by this moment.
2277 + task_thread_info(p)->cpu = cpu;
2282 + * Move a task off the global queue and take it to a cpu where it will
2283 + * become the running task.
2285 +static inline void take_task(struct rq *rq, struct task_struct *p)
2287 + set_task_cpu(p, cpu_of(rq));
2293 + * Returns a descheduling task to the grq runqueue unless it is being deactivated.
2296 +static inline void return_task(struct task_struct *p, int deactivate)
2299 + deactivate_task(p);
2307 + * resched_task - mark a task 'to be rescheduled now'.
2309 + * On UP this means the setting of the need_resched flag, on SMP it
2310 + * might also involve a cross-CPU call to trigger the scheduler on the target CPU.
2315 +#ifndef tsk_is_polling
2316 +#define tsk_is_polling(t) test_tsk_thread_flag(t, TIF_POLLING_NRFLAG)
2319 +static void resched_task(struct task_struct *p)
2323 + assert_spin_locked(&grq.lock);
2325 + if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED)))
2328 + set_tsk_thread_flag(p, TIF_NEED_RESCHED);
2330 + cpu = task_cpu(p);
2331 + if (cpu == smp_processor_id())
2334 + /* NEED_RESCHED must be visible before we test polling */
2336 + if (!tsk_is_polling(p))
2337 + smp_send_reschedule(cpu);
2341 +static inline void resched_task(struct task_struct *p)
2343 + assert_spin_locked(&grq.lock);
2344 + set_tsk_need_resched(p);
2349 + * task_curr - is this task currently executing on a CPU?
2350 + * @p: the task in question.
2352 +inline int task_curr(const struct task_struct *p)
2354 + return cpu_curr(task_cpu(p)) == p;
2358 +struct migration_req {
2359 + struct list_head list;
2361 + struct task_struct *task;
2364 + struct completion done;
2368 + * wait_task_inactive - wait for a thread to unschedule.
2370 + * If @match_state is nonzero, it's the @p->state value just checked and
2371 + * not expected to change. If it changes, i.e. @p might have woken up,
2372 + * then return zero. When we succeed in waiting for @p to be off its CPU,
2373 + * we return a positive number (its total switch count). If a second call
2374 + * a short while later returns the same number, the caller can be sure that
2375 + * @p has remained unscheduled the whole time.
2377 + * The caller must ensure that the task *will* unschedule sometime soon,
2378 + * else this function might spin for a *long* time. This function can't
2379 + * be called with interrupts off, or it may introduce deadlock with
2380 + * smp_call_function() if an IPI is sent by the same process we are
2381 + * waiting to become inactive.
2383 +unsigned long wait_task_inactive(struct task_struct *p, long match_state)
2385 + unsigned long flags;
2386 + int running, on_rq;
2387 + unsigned long ncsw;
2392 + * We do the initial early heuristics without holding
2393 + * any task-queue locks at all. We'll only try to get
2394 + * the runqueue lock when things look like they will
2395 + * work out! In the unlikely event rq is dereferenced
2396 + * since we're lockless, grab it again.
2401 + if (unlikely(!rq))
2403 +#else /* CONFIG_SMP */
2407 + * If the task is actively running on another CPU
2408 + * still, just relax and busy-wait without holding any locks.
2411 + * NOTE! Since we don't hold any locks, it's not
2412 + * even sure that "rq" stays as the right runqueue!
2413 + * But we don't care, since this will return false
2414 + * if the runqueue has changed and p is actually now
2415 + * running somewhere else!
2417 + while (task_running(p) && p == rq->curr) {
2418 + if (match_state && unlikely(p->state != match_state))
2424 + * Ok, time to look more closely! We need the grq
2425 + * lock now, to be *sure*. If we're wrong, we'll
2426 + * just go back and repeat.
2428 + rq = task_grq_lock(p, &flags);
2429 + running = task_running(p);
2430 + on_rq = task_queued(p);
2432 + if (!match_state || p->state == match_state) {
2433 + ncsw = p->nivcsw + p->nvcsw;
2434 + if (unlikely(!ncsw))
2437 + task_grq_unlock(&flags);
2440 + * If it changed from the expected state, bail out now.
2442 + if (unlikely(!ncsw))
2446 + * Was it really running after all now that we
2447 + * checked with the proper locks actually held?
2449 + * Oops. Go back and try again..
2451 + if (unlikely(running)) {
2457 + * It's not enough that it's not actively running,
2458 + * it must be off the runqueue _entirely_, and not preempted!
2461 + * So if it was still runnable (but just not actively
2462 + * running right now), it's preempted, and we should
2463 + * yield - it could be a while.
2465 + if (unlikely(on_rq)) {
2466 + schedule_timeout_uninterruptible(1);
2471 + * Ahh, all good. It wasn't running, and it wasn't
2472 + * runnable, which means that it will never become
2473 + * running in the future either. We're all done!
2482 + * kick_process - kick a running thread to enter/exit the kernel
2483 + * @p: the to-be-kicked thread
2485 + * Cause a process which is running on another CPU to enter
2486 + * kernel-mode, without any delay. (to get signals handled.)
2488 + * NOTE: this function doesn't have to take the runqueue lock,
2489 + * because all it wants to ensure is that the remote task enters
2490 + * the kernel. If the IPI races and the task has been migrated
2491 + * to another CPU then no harm is done and the purpose has been
2492 + * achieved as well.
2494 +void kick_process(struct task_struct *p)
2498 + preempt_disable();
2499 + cpu = task_cpu(p);
2500 + if ((cpu != smp_processor_id()) && task_curr(p))
2501 + smp_send_reschedule(cpu);
2506 +#define rq_idle(rq) ((rq)->rq_prio == PRIO_LIMIT)
2507 +#define task_idle(p) ((p)->prio == PRIO_LIMIT)
2510 + * RT tasks preempt purely on priority. SCHED_NORMAL tasks preempt on the
2511 + * basis of earlier deadlines. SCHED_BATCH, ISO and IDLEPRIO don't preempt
2512 + * between themselves, they cooperatively multitask. An idle rq scores as
2513 + * prio PRIO_LIMIT so it is always preempted. latest_deadline and
2514 + * highest_prio_rq are initialised only to silence the compiler. When
2515 + * all else is equal, still prefer this_rq.
2518 +static void try_preempt(struct task_struct *p, struct rq *this_rq)
2520 + struct rq *highest_prio_rq = this_rq;
2521 + unsigned long latest_deadline, cpu;
2525 + if (suitable_idle_cpus(p)) {
2526 + resched_best_idle(p);
2530 + cpus_and(tmp, cpu_online_map, p->cpus_allowed);
2531 + latest_deadline = 0;
2532 + highest_prio = -1;
2534 + for_each_cpu_mask_nr(cpu, tmp) {
2535 + unsigned long offset_deadline;
2540 + rq_prio = rq->rq_prio;
2541 + if (rq_prio < highest_prio)
2544 + offset_deadline = rq->rq_deadline -
2545 + cache_distance(this_rq, rq, p);
2547 + if (rq_prio > highest_prio ||
2548 + (time_after(offset_deadline, latest_deadline) ||
2549 + (offset_deadline == latest_deadline && this_rq == rq))) {
2550 + latest_deadline = offset_deadline;
2551 + highest_prio = rq_prio;
2552 + highest_prio_rq = rq;
2556 + if (p->prio > highest_prio || (p->prio == highest_prio &&
2557 + p->policy == SCHED_NORMAL && !time_before(p->deadline, latest_deadline)))
2560 + /* p gets to preempt highest_prio_rq->curr */
2561 + resched_task(highest_prio_rq->curr);
2564 +#else /* CONFIG_SMP */
2565 +static void try_preempt(struct task_struct *p, struct rq *this_rq)
2567 + if (p->prio < uprq->rq_prio ||
2568 + (p->prio == uprq->rq_prio && p->policy == SCHED_NORMAL &&
2569 + time_before(p->deadline, uprq->rq_deadline)))
2570 + resched_task(uprq->curr);
2573 +#endif /* CONFIG_SMP */
2576 + * try_to_wake_up - wake up a thread
2577 + * @p: the to-be-woken-up thread
2578 + * @state: the mask of task states that can be woken
2579 + * @sync: do a synchronous wakeup?
2581 + * Put it on the run-queue if it's not already there. The "current"
2582 + * thread is always on the run-queue (except when the actual
2583 + * re-schedule is in progress), and as such you're allowed to do
2584 + * the simpler "current->state = TASK_RUNNING" to mark yourself
2585 + * runnable without the overhead of this.
2587 + * returns failure only if the task is already active.
2589 +static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
2591 + unsigned long flags;
2595 +	/* This barrier is undocumented, probably for p->state? Damn it. */
2599 + * No need to do time_lock_grq as we only need to update the rq clock
2600 + * if we activate the task
2602 + rq = task_grq_lock(p, &flags);
2604 +	/* state is a volatile long, why? I don't understand. */
2605 + if (!((unsigned int)p->state & state))
2608 + if (task_queued(p) || task_running(p))
2611 + activate_task(p, rq);
2613 + * Sync wakeups (i.e. those types of wakeups where the waker
2614 + * has indicated that it will leave the CPU in short order)
2615 + * don't trigger a preemption if there are no idle cpus,
2616 + * instead waiting for current to deschedule.
2618 + if (!sync || suitable_idle_cpus(p))
2619 + try_preempt(p, rq);
2623 + trace_mark(kernel_sched_wakeup,
2624 + "pid %d state %ld ## rq %p task %p rq->curr %p",
2625 + p->pid, p->state, rq, p, rq->curr);
2626 + p->state = TASK_RUNNING;
2628 + task_grq_unlock(&flags);
2633 + * wake_up_process - Wake up a specific process
2634 + * @p: The process to be woken up.
2636 + * Attempt to wake up the nominated process and move it to the set of runnable
2637 + * processes. Returns 1 if the process was woken up, 0 if it was already
2640 + * It may be assumed that this function implies a write memory barrier before
2641 + * changing the task state if and only if any tasks are woken up.
2643 +int wake_up_process(struct task_struct *p)
2645 + return try_to_wake_up(p, TASK_ALL, 0);
2647 +EXPORT_SYMBOL(wake_up_process);
2649 +int wake_up_state(struct task_struct *p, unsigned int state)
2651 + return try_to_wake_up(p, state, 0);
2655 + * Perform scheduler related setup for a newly forked process p.
2656 + * p is forked by current.
2658 +void sched_fork(struct task_struct *p, int clone_flags)
2660 + int cpu = get_cpu();
2663 +#ifdef CONFIG_PREEMPT_NOTIFIERS
2664 + INIT_HLIST_HEAD(&p->preempt_notifiers);
2667 + * We mark the process as running here, but have not actually
2668 + * inserted it onto the runqueue yet. This guarantees that
2669 + * nobody will actually run it, and a signal or other external
2670 + * event cannot wake it up and insert it on the runqueue either.
2672 + p->state = TASK_RUNNING;
2673 + set_task_cpu(p, cpu);
2675 + /* Should be reset in fork.c but done here for ease of bfs patching */
2676 + p->sched_time = p->stime_pc = p->utime_pc = 0;
2679 + * Make sure we do not leak PI boosting priority to the child:
2681 + p->prio = current->normal_prio;
2683 + INIT_LIST_HEAD(&p->run_list);
2684 +#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
2685 + if (unlikely(sched_info_on()))
2686 + memset(&p->sched_info, 0, sizeof(p->sched_info));
2691 +#ifdef CONFIG_PREEMPT
2692 + /* Want to start with kernel preemption disabled. */
2693 + task_thread_info(p)->preempt_count = 1;
2695 + if (unlikely(p->policy == SCHED_FIFO))
2698 + * Share the timeslice between parent and child, thus the
2699 + * total amount of pending timeslices in the system doesn't change,
2700 + * resulting in more scheduling fairness. If it's negative, it won't
2701 + * matter since that's the same as being 0. current's time_slice is
2702 + * actually in rq_time_slice when it's running.
2704 + rq = task_grq_lock_irq(current);
2705 + if (likely(rq->rq_time_slice > 0)) {
2706 + rq->rq_time_slice /= 2;
2708 + * The remainder of the first timeslice might be recovered by
2709 + * the parent if the child exits early enough.
2711 + p->first_time_slice = 1;
2713 + p->time_slice = rq->rq_time_slice;
2714 + task_grq_unlock_irq();
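Worked example of the split above, assuming the parent holds a full 6000us quota (rr_interval = 6):

	parent: rq->rq_time_slice  6000us -> 3000us
	child:  p->time_slice             =  3000us   (total in flight unchanged)

If the child exits before using that first slice, first_time_slice lets sched_exit() return the remainder to the parent.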
2720 + * wake_up_new_task - wake up a newly created task for the first time.
2722 + * This function will do some initial scheduler statistics housekeeping
2723 + * that must be done for every newly created context, then puts the task
2724 + * on the runqueue and wakes it.
2726 +void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
2728 + struct task_struct *parent;
2729 + unsigned long flags;
2732 +	rq = task_grq_lock(p, &flags);
2733 + parent = p->parent;
2734 + BUG_ON(p->state != TASK_RUNNING);
2735 + /* Unnecessary but small chance that the parent changed cpus */
2736 + set_task_cpu(p, task_cpu(parent));
2737 + activate_task(p, rq);
2738 + trace_mark(kernel_sched_wakeup_new,
2739 + "pid %d state %ld ## rq %p task %p rq->curr %p",
2740 + p->pid, p->state, rq, p, rq->curr);
2741 + if (!(clone_flags & CLONE_VM) && rq->curr == parent &&
2742 + !suitable_idle_cpus(p)) {
2744 + * The VM isn't cloned, so we're in a good position to
2745 + * do child-runs-first in anticipation of an exec. This
2746 + * usually avoids a lot of COW overhead.
2748 + resched_task(parent);
2750 + try_preempt(p, rq);
2751 + task_grq_unlock(&flags);
2755 + * Potentially available exiting-child timeslices are
2756 + * retrieved here - this way the parent does not get
2757 + * penalised for creating too many threads.
2759 + * (this cannot be used to 'generate' timeslices
2760 + * artificially, because any timeslice recovered here
2761 + * was given away by the parent in the first place.)
2763 +void sched_exit(struct task_struct *p)
2765 + struct task_struct *parent;
2766 + unsigned long flags;
2769 + if (unlikely(p->first_time_slice)) {
2770 + int *par_tslice, *p_tslice;
2772 + parent = p->parent;
2773 + par_tslice = &parent->time_slice;
2774 + p_tslice = &p->time_slice;
2776 + rq = task_grq_lock(parent, &flags);
2777 +	/* The real time_slice of the "curr" task is on the rq var. */
2778 + if (p == rq->curr)
2779 + p_tslice = &rq->rq_time_slice;
2780 + else if (parent == task_rq(parent)->curr)
2781 + par_tslice = &rq->rq_time_slice;
2783 + *par_tslice += *p_tslice;
2784 + if (unlikely(*par_tslice > timeslice()))
2785 + *par_tslice = timeslice();
2786 + task_grq_unlock(&flags);
2790 +#ifdef CONFIG_PREEMPT_NOTIFIERS
2793 + * preempt_notifier_register - tell me when current is being preempted & rescheduled
2794 + * @notifier: notifier struct to register
2796 +void preempt_notifier_register(struct preempt_notifier *notifier)
2798 +	hlist_add_head(&notifier->link, &current->preempt_notifiers);
2800 +EXPORT_SYMBOL_GPL(preempt_notifier_register);
2803 + * preempt_notifier_unregister - no longer interested in preemption notifications
2804 + * @notifier: notifier struct to unregister
2806 + * This is safe to call from within a preemption notifier.
2808 +void preempt_notifier_unregister(struct preempt_notifier *notifier)
2810 +	hlist_del(&notifier->link);
2812 +EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
2814 +static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
2816 + struct preempt_notifier *notifier;
2817 + struct hlist_node *node;
2819 + hlist_for_each_entry(notifier, node, &curr->preempt_notifiers, link)
2820 + notifier->ops->sched_in(notifier, raw_smp_processor_id());
2824 +fire_sched_out_preempt_notifiers(struct task_struct *curr,
2825 + struct task_struct *next)
2827 + struct preempt_notifier *notifier;
2828 + struct hlist_node *node;
2830 + hlist_for_each_entry(notifier, node, &curr->preempt_notifiers, link)
2831 + notifier->ops->sched_out(notifier, next);
2834 +#else /* !CONFIG_PREEMPT_NOTIFIERS */
2836 +static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
2841 +fire_sched_out_preempt_notifiers(struct task_struct *curr,
2842 + struct task_struct *next)
2846 +#endif /* CONFIG_PREEMPT_NOTIFIERS */
2849 + * prepare_task_switch - prepare to switch tasks
2850 + * @rq: the runqueue preparing to switch
2851 + * @next: the task we are going to switch to.
2853 + * This is called with the rq lock held and interrupts off. It must
2854 + * be paired with a subsequent finish_task_switch after the context switch.
2857 + * prepare_task_switch sets up locking and calls architecture specific hooks.
2861 +prepare_task_switch(struct rq *rq, struct task_struct *prev,
2862 + struct task_struct *next)
2864 + fire_sched_out_preempt_notifiers(prev, next);
2865 + prepare_lock_switch(rq, next);
2866 + prepare_arch_switch(next);
2870 + * finish_task_switch - clean up after a task-switch
2871 + * @rq: runqueue associated with task-switch
2872 + * @prev: the thread we just switched away from.
2874 + * finish_task_switch must be called after the context switch, paired
2875 + * with a prepare_task_switch call before the context switch.
2876 + * finish_task_switch will reconcile locking set up by prepare_task_switch,
2877 + * and do any other architecture-specific cleanup actions.
2879 + * Note that we may have delayed dropping an mm in context_switch(). If
2880 + * so, we finish that here outside of the runqueue lock. (Doing it
2881 + * with the lock held can cause deadlocks; see schedule() for details.)
2884 +static inline void finish_task_switch(struct rq *rq, struct task_struct *prev)
2885 + __releases(grq.lock)
2887 + struct mm_struct *mm = rq->prev_mm;
2890 + rq->prev_mm = NULL;
2893 + * A task struct has one reference for the use as "current".
2894 + * If a task dies, then it sets TASK_DEAD in tsk->state and calls
2895 + * schedule one last time. The schedule call will never return, and
2896 + * the scheduled task must drop that reference.
2897 + * The test for TASK_DEAD must occur while the runqueue locks are
2898 + * still held, otherwise prev could be scheduled on another cpu, die
2899 + * there before we look at prev->state, and then the reference would
2900 + * be dropped twice.
2901 + * Manfred Spraul <manfred@colorfullife.com>
2903 + prev_state = prev->state;
2904 + finish_arch_switch(prev);
2905 + finish_lock_switch(rq, prev);
2907 + fire_sched_in_preempt_notifiers(current);
2910 + if (unlikely(prev_state == TASK_DEAD)) {
2912 + * Remove function-return probe instances associated with this
2913 + * task and put them back on the free list.
2915 + kprobe_flush_task(prev);
2916 + put_task_struct(prev);
2921 + * schedule_tail - first thing a freshly forked thread must call.
2922 + * @prev: the thread we just switched away from.
2924 +asmlinkage void schedule_tail(struct task_struct *prev)
2925 + __releases(grq.lock)
2927 + struct rq *rq = this_rq();
2929 + finish_task_switch(rq, prev);
2930 +#ifdef __ARCH_WANT_UNLOCKED_CTXSW
2931 + /* In this case, finish_task_switch does not reenable preemption */
2934 + if (current->set_child_tid)
2935 + put_user(current->pid, current->set_child_tid);
2939 + * context_switch - switch to the new MM and the new
2940 + * thread's register state.
2943 +context_switch(struct rq *rq, struct task_struct *prev,
2944 + struct task_struct *next)
2946 + struct mm_struct *mm, *oldmm;
2948 + prepare_task_switch(rq, prev, next);
2949 + trace_mark(kernel_sched_schedule,
2950 + "prev_pid %d next_pid %d prev_state %ld "
2951 + "## rq %p prev %p next %p",
2952 + prev->pid, next->pid, prev->state,
2955 + oldmm = prev->active_mm;
2957 + * For paravirt, this is coupled with an exit in switch_to to
2958 + * combine the page table reload and the switch backend into one hypercall.
2961 + arch_enter_lazy_cpu_mode();
2963 + if (unlikely(!mm)) {
2964 + next->active_mm = oldmm;
2965 + atomic_inc(&oldmm->mm_count);
2966 + enter_lazy_tlb(oldmm, next);
2968 + switch_mm(oldmm, mm, next);
2970 + if (unlikely(!prev->mm)) {
2971 + prev->active_mm = NULL;
2972 + rq->prev_mm = oldmm;
2975 + * The runqueue lock will be released by the next
2976 + * task (which is an invalid locking op but in the case
2977 + * of the scheduler it's an obvious special-case), so we
2978 + * do an early lockdep release here:
2980 +#ifndef __ARCH_WANT_UNLOCKED_CTXSW
2981 + spin_release(&grq.lock.dep_map, 1, _THIS_IP_);
2984 + /* Here we just switch the register state and the stack. */
2985 + switch_to(prev, next, prev);
2989 + * this_rq must be evaluated again because prev may have moved
2990 + * CPUs since it called schedule(), thus the 'rq' on its stack
2991 + * frame will be invalid.
2993 + finish_task_switch(this_rq(), prev);
2997 + * nr_running, nr_uninterruptible and nr_context_switches:
2999 + * externally visible scheduler statistics: current number of runnable
3000 + * threads, current number of uninterruptible-sleeping threads, total
3001 + * number of context switches performed since bootup. All are measured
3002 + * without grabbing the grq lock but the occasional inaccurate result
3003 + * doesn't matter so long as it's positive.
3005 +unsigned long nr_running(void)
3007 + long nr = grq.nr_running;
3009 + if (unlikely(nr < 0))
3011 + return (unsigned long)nr;
3014 +unsigned long nr_uninterruptible(void)
3016 + long nu = grq.nr_uninterruptible;
3018 + if (unlikely(nu < 0))
3023 +unsigned long long nr_context_switches(void)
3025 + long long ns = grq.nr_switches;
3027 + /* This is of course impossible */
3028 + if (unlikely(ns < 0))
3030 + return (long long)ns;
3033 +unsigned long nr_iowait(void)
3035 + unsigned long i, sum = 0;
3037 + for_each_possible_cpu(i)
3038 + sum += atomic_read(&cpu_rq(i)->nr_iowait);
3043 +unsigned long nr_active(void)
3045 + return nr_running() + nr_uninterruptible();
3048 +DEFINE_PER_CPU(struct kernel_stat, kstat);
3050 +EXPORT_PER_CPU_SYMBOL(kstat);
3053 + * On each tick, see what percentage of that tick was attributed to each
3054 + * component and add the percentage to the _pc values. Once a _pc value has
3055 + * accumulated one tick's worth, account for that. This means the total
3056 + * percentage of load components will always be 100 per tick.
3058 +static void pc_idle_time(struct rq *rq, unsigned long pc)
3060 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3061 + cputime64_t tmp = cputime_to_cputime64(jiffies_to_cputime(1));
3063 + if (atomic_read(&rq->nr_iowait) > 0) {
3064 + rq->iowait_pc += pc;
3065 + if (rq->iowait_pc >= 100) {
3066 + rq->iowait_pc %= 100;
3067 + cpustat->iowait = cputime64_add(cpustat->iowait, tmp);
3070 + rq->idle_pc += pc;
3071 + if (rq->idle_pc >= 100) {
3072 + rq->idle_pc %= 100;
3073 + cpustat->idle = cputime64_add(cpustat->idle, tmp);
3079 +pc_system_time(struct rq *rq, struct task_struct *p, int hardirq_offset,
3080 + unsigned long pc, unsigned long ns)
3082 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3083 + cputime_t one_jiffy = jiffies_to_cputime(1);
3084 + cputime_t one_jiffy_scaled = cputime_to_scaled(one_jiffy);
3085 + cputime64_t tmp = cputime_to_cputime64(one_jiffy);
3087 + p->stime_pc += pc;
3088 + if (p->stime_pc >= 100) {
3089 + p->stime_pc -= 100;
3090 + p->stime = cputime_add(p->stime, one_jiffy);
3091 + p->stimescaled = cputime_add(p->stimescaled, one_jiffy_scaled);
3092 + acct_update_integrals(p);
3094 + p->sched_time += ns;
3096 + if (hardirq_count() - hardirq_offset)
3098 + else if (softirq_count()) {
3099 + rq->softirq_pc += pc;
3100 + if (rq->softirq_pc >= 100) {
3101 + rq->softirq_pc %= 100;
3102 + cpustat->softirq = cputime64_add(cpustat->softirq, tmp);
3105 + rq->system_pc += pc;
3106 + if (rq->system_pc >= 100) {
3107 + rq->system_pc %= 100;
3108 + cpustat->system = cputime64_add(cpustat->system, tmp);
3113 +static void pc_user_time(struct rq *rq, struct task_struct *p,
3114 + unsigned long pc, unsigned long ns)
3116 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3117 + cputime_t one_jiffy = jiffies_to_cputime(1);
3118 + cputime_t one_jiffy_scaled = cputime_to_scaled(one_jiffy);
3119 + cputime64_t tmp = cputime_to_cputime64(one_jiffy);
3121 + p->utime_pc += pc;
3122 + if (p->utime_pc >= 100) {
3123 + p->utime_pc -= 100;
3124 + p->utime = cputime_add(p->utime, one_jiffy);
3125 + p->utimescaled = cputime_add(p->utimescaled, one_jiffy_scaled);
3126 + acct_update_integrals(p);
3128 + p->sched_time += ns;
3130 + if (TASK_NICE(p) > 0 || idleprio_task(p)) {
3131 + rq->nice_pc += pc;
3132 + if (rq->nice_pc >= 100) {
3133 + rq->nice_pc %= 100;
3134 + cpustat->nice = cputime64_add(cpustat->nice, tmp);
3137 + rq->user_pc += pc;
3138 + if (rq->user_pc >= 100) {
3139 + rq->user_pc %= 100;
3140 + cpustat->user = cputime64_add(cpustat->user, tmp);
3145 +/* Convert nanoseconds to percentage of one tick. */
3146 +#define NS_TO_PC(NS) (NS * 100 / JIFFIES_TO_NS(1))
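Worked example of the carry scheme, assuming HZ = 1000 so that JIFFIES_TO_NS(1) is 1,000,000:

	NS_TO_PC(250000) = 250000 * 100 / 1000000 = 25

Four such quarter-tick intervals accumulate 100 in the relevant _pc counter, at which point one full jiffy of cputime is banked and the counter wraps, so the per-tick component percentages always sum to 100.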
3149 + * This is called on clock ticks and on context switches.
3150 + * Bank in p->sched_time the ns elapsed since the last tick or switch.
3151 + * CPU scheduler quota accounting is also performed here in microseconds.
3152 + * The value returned from sched_clock() occasionally gives bogus values so
3153 + * some sanity checking is required. Time is supposed to be banked all the
3154 + * time so default to half a tick to make up for when sched_clock reverts
3155 + * to just returning jiffies, and for hardware that can't do tsc.
3158 +update_cpu_clock(struct rq *rq, struct task_struct *p, int tick)
3160 + long account_ns = rq->clock - rq->timekeep_clock;
3161 + struct task_struct *idle = rq->idle;
3162 + unsigned long account_pc;
3164 + if (unlikely(account_ns < 0))
3167 + account_pc = NS_TO_PC(account_ns);
3170 + int user_tick = user_mode(get_irq_regs());
3172 + /* Accurate tick timekeeping */
3174 + pc_user_time(rq, p, account_pc, account_ns);
3175 + else if (p != idle || (irq_count() != HARDIRQ_OFFSET))
3176 + pc_system_time(rq, p, HARDIRQ_OFFSET,
3177 + account_pc, account_ns);
3179 + pc_idle_time(rq, account_pc);
3181 + /* Accurate subtick timekeeping */
3183 + pc_idle_time(rq, account_pc);
3185 + pc_user_time(rq, p, account_pc, account_ns);
3188 + /* time_slice accounting is done in usecs to avoid overflow on 32bit */
3189 + if (rq->rq_policy != SCHED_FIFO && p != idle) {
3190 + long time_diff = rq->clock - rq->rq_last_ran;
3193 + * There should be at most one jiffy's worth, and it should never be
3194 + * negative or overflow. time_diff is only used for internal scheduler
3195 + * time_slice accounting.
3197 + if (unlikely(time_diff <= 0))
3198 + time_diff = JIFFIES_TO_NS(1) / 2;
3199 + else if (unlikely(time_diff > JIFFIES_TO_NS(1)))
3200 + time_diff = JIFFIES_TO_NS(1);
3202 + rq->rq_time_slice -= time_diff / 1000;
3204 + rq->rq_last_ran = rq->timekeep_clock = rq->clock;
3208 + * Return any ns on the sched_clock that have not yet been accounted in
3209 + * @p in case that task is currently running.
3211 + * Called with task_grq_lock() held.
3213 +static u64 do_task_delta_exec(struct task_struct *p, struct rq *rq)
3217 + if (p == rq->curr) {
3218 + update_rq_clock(rq);
3219 + ns = rq->clock - rq->rq_last_ran;
3220 + if (unlikely((s64)ns < 0))
3227 +unsigned long long task_delta_exec(struct task_struct *p)
3229 + unsigned long flags;
3233 + rq = task_grq_lock(p, &flags);
3234 + ns = do_task_delta_exec(p, rq);
3235 + task_grq_unlock(&flags);
3241 + * Return accounted runtime for the task.
3242 + * In case the task is currently running, return the runtime plus current's
3243 + * pending runtime that have not been accounted yet.
3245 +unsigned long long task_sched_runtime(struct task_struct *p)
3247 + unsigned long flags;
3248 + u64 ns, delta_exec;
3251 + rq = task_grq_lock(p, &flags);
3252 + ns = p->sched_time;
3253 + if (p == rq->curr) {
3254 + update_rq_clock(rq);
3255 + delta_exec = rq->clock - rq->rq_last_ran;
3256 + if (likely((s64)delta_exec > 0))
3259 + task_grq_unlock(&flags);
3265 + * Return sum_exec_runtime for the thread group.
3266 + * In case the task is currently running, return the sum plus current's
3267 + * pending runtime that have not been accounted yet.
3269 + * Note that the thread group might have other running tasks as well,
3270 + * so the return value does not include other pending runtime that other
3271 + * running tasks might have.
3273 +unsigned long long thread_group_sched_runtime(struct task_struct *p)
3275 + struct task_cputime totals;
3276 + unsigned long flags;
3280 + rq = task_grq_lock(p, &flags);
3281 + thread_group_cputime(p, &totals);
3282 + ns = totals.sum_exec_runtime + do_task_delta_exec(p, rq);
3283 + task_grq_unlock(&flags);
3288 +/* Compatibility crap for removal */
3289 +void account_user_time(struct task_struct *p, cputime_t cputime,
3290 + cputime_t cputime_scaled)
3294 +void account_idle_time(cputime_t cputime)
3299 + * Account guest cpu time to a process.
3300 + * @p: the process that the cpu time gets accounted to
3301 + * @cputime: the cpu time spent in virtual machine since the last update
3302 + * @cputime_scaled: cputime scaled by cpu frequency
3304 +static void account_guest_time(struct task_struct *p, cputime_t cputime,
3305 + cputime_t cputime_scaled)
3308 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3310 + tmp = cputime_to_cputime64(cputime);
3312 + /* Add guest time to process. */
3313 + p->utime = cputime_add(p->utime, cputime);
3314 + p->utimescaled = cputime_add(p->utimescaled, cputime_scaled);
3315 + p->gtime = cputime_add(p->gtime, cputime);
3317 + /* Add guest time to cpustat. */
3318 + cpustat->user = cputime64_add(cpustat->user, tmp);
3319 + cpustat->guest = cputime64_add(cpustat->guest, tmp);
3323 + * Account system cpu time to a process.
3324 + * @p: the process that the cpu time gets accounted to
3325 + * @hardirq_offset: the offset to subtract from hardirq_count()
3326 + * @cputime: the cpu time spent in kernel space since the last update
3327 + * @cputime_scaled: cputime scaled by cpu frequency
3328 + * This is for guest only now.
3330 +void account_system_time(struct task_struct *p, int hardirq_offset,
3331 + cputime_t cputime, cputime_t cputime_scaled)
3334 + if ((p->flags & PF_VCPU) && (irq_count() - hardirq_offset == 0))
3335 + account_guest_time(p, cputime, cputime_scaled);
3339 + * Account for involuntary wait time.
3340 + * @cputime: the cpu time spent in involuntary wait
3342 +void account_steal_time(cputime_t cputime)
3344 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3345 + cputime64_t cputime64 = cputime_to_cputime64(cputime);
3347 + cpustat->steal = cputime64_add(cpustat->steal, cputime64);
3351 + * Account for idle time.
3352 + * @cputime: the cpu time spent in idle wait
3354 +static void account_idle_times(cputime_t cputime)
3356 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3357 + cputime64_t cputime64 = cputime_to_cputime64(cputime);
3358 + struct rq *rq = this_rq();
3360 + if (atomic_read(&rq->nr_iowait) > 0)
3361 + cpustat->iowait = cputime64_add(cpustat->iowait, cputime64);
3363 + cpustat->idle = cputime64_add(cpustat->idle, cputime64);
3366 +#ifndef CONFIG_VIRT_CPU_ACCOUNTING
3368 +void account_process_tick(struct task_struct *p, int user_tick)
3373 + * Account multiple ticks of steal time.
3375 + * @ticks: number of stolen ticks
3377 +void account_steal_ticks(unsigned long ticks)
3379 + account_steal_time(jiffies_to_cputime(ticks));
3383 + * Account multiple ticks of idle time.
3384 + * @ticks: number of idle ticks
3386 +void account_idle_ticks(unsigned long ticks)
3388 + account_idle_times(jiffies_to_cputime(ticks));
3393 + * Functions to test for when SCHED_ISO tasks have used their allocated
3394 + * quota as real time scheduling and convert them back to SCHED_NORMAL.
3395 + * Where possible, the data is tested lockless, to avoid grabbing grq_lock
3396 + * because the occasional inaccurate result won't matter. However the
3397 + * tick data is only ever modified under lock. iso_refractory is only ever
3398 + * set to 0 or 1, so it's not worth grabbing the lock yet again for that.
3400 +static void set_iso_refractory(void)
3402 + grq.iso_refractory = 1;
3405 +static void clear_iso_refractory(void)
3407 + grq.iso_refractory = 0;
3411 + * Test if SCHED_ISO tasks have run longer than their allotted period as RT
3412 + * tasks and set the refractory flag if necessary. There is 10% hysteresis
3413 + * for unsetting the flag.
3415 +static unsigned int test_ret_isorefractory(struct rq *rq)
3417 + if (likely(!grq.iso_refractory)) {
3418 + if (grq.iso_ticks / ISO_PERIOD > sched_iso_cpu)
3419 + set_iso_refractory();
3421 + if (grq.iso_ticks / ISO_PERIOD < (sched_iso_cpu * 90 / 100))
3422 + clear_iso_refractory();
3424 + return grq.iso_refractory;
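With the default sched_iso_cpu of 70, the thresholds above work out to:

	set refractory:   grq.iso_ticks / ISO_PERIOD > 70   (ISO used over 70% cpu)
	clear refractory: grq.iso_ticks / ISO_PERIOD < 63   (70 * 90 / 100)

so once demoted, SCHED_ISO tasks stay demoted until their usage decays a further 10% below the cap, which stops the flag flapping at the boundary.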
3427 +static void iso_tick(void)
3430 + grq.iso_ticks += 100;
3434 +/* No SCHED_ISO task was running so decrease grq.iso_ticks */
3435 +static inline void no_iso_tick(void)
3437 + if (grq.iso_ticks) {
3439 + grq.iso_ticks -= grq.iso_ticks / ISO_PERIOD + 1;
3440 + if (unlikely(grq.iso_refractory && grq.iso_ticks /
3441 + ISO_PERIOD < (sched_iso_cpu * 90 / 100)))
3442 + clear_iso_refractory();
3447 +static int rq_running_iso(struct rq *rq)
3449 + return rq->rq_prio == ISO_PRIO;
3452 +/* This manages tasks that have run out of timeslice during a scheduler_tick */
3453 +static void task_running_tick(struct rq *rq)
3455 + struct task_struct *p;
3458 + * If a SCHED_ISO task is running we increment the iso_ticks. In
3459 + * order to prevent SCHED_ISO tasks from causing starvation in the
3460 + * presence of true RT tasks we account those as iso_ticks as well.
3462 + if ((rt_queue(rq) || (iso_queue(rq) && !grq.iso_refractory))) {
3463 + if (grq.iso_ticks <= (ISO_PERIOD * 100) - 100)
3468 + if (iso_queue(rq)) {
3469 + if (unlikely(test_ret_isorefractory(rq))) {
3470 + if (rq_running_iso(rq)) {
3472 + * SCHED_ISO task is running as RT and limit
3473 + * has been hit. Force it to reschedule as
3474 + * SCHED_NORMAL by zeroing its time_slice
3476 + rq->rq_time_slice = 0;
3481 + /* SCHED_FIFO tasks never run out of timeslice. */
3482 + if (rq_idle(rq) || rq->rq_time_slice > 0 || rq->rq_policy == SCHED_FIFO)
3485 + /* p->time_slice <= 0. We only modify task_struct under grq lock */
3489 + set_tsk_need_resched(p);
3493 +void wake_up_idle_cpu(int cpu);
3496 + * This function gets called by the timer code, with HZ frequency.
3497 + * We call it with interrupts disabled. The data modified is all
3498 + * local to struct rq so we don't need to grab grq lock.
3500 +void scheduler_tick(void)
3502 + int cpu = smp_processor_id();
3503 + struct rq *rq = cpu_rq(cpu);
3505 + sched_clock_tick();
3506 + update_rq_clock(rq);
3507 + update_cpu_clock(rq, rq->curr, 1);
3509 + task_running_tick(rq);
3514 +#if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
3515 + defined(CONFIG_PREEMPT_TRACER))
3517 +static inline unsigned long get_parent_ip(unsigned long addr)
3519 + if (in_lock_functions(addr)) {
3520 + addr = CALLER_ADDR2;
3521 + if (in_lock_functions(addr))
3522 + addr = CALLER_ADDR3;
3527 +void __kprobes add_preempt_count(int val)
3529 +#ifdef CONFIG_DEBUG_PREEMPT
3533 + if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
3536 + preempt_count() += val;
3537 +#ifdef CONFIG_DEBUG_PREEMPT
3539 + * Spinlock count overflowing soon?
3541 + DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
3542 + PREEMPT_MASK - 10);
3544 + if (preempt_count() == val)
3545 + trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
3547 +EXPORT_SYMBOL(add_preempt_count);
3549 +void __kprobes sub_preempt_count(int val)
3551 +#ifdef CONFIG_DEBUG_PREEMPT
3555 + if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
3558 + * Is the spinlock portion underflowing?
3560 + if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
3561 + !(preempt_count() & PREEMPT_MASK)))
3565 + if (preempt_count() == val)
3566 + trace_preempt_on(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
3567 + preempt_count() -= val;
3569 +EXPORT_SYMBOL(sub_preempt_count);
3573 + * Deadline is "now" in jiffies + (offset by priority). Setting the deadline
3574 + * is the key to everything. It distributes cpu fairly amongst tasks of the
3575 + * same nice value, it proportions cpu according to nice level, it means the
3576 + * task that last woke up the longest ago has the earliest deadline, thus
3577 + * ensuring that interactive tasks get low latency on wake up. The CPU
3578 + * proportion works out to the square of the virtual deadline difference, so
3579 + * this equation gives a nice 19 task about 3% CPU compared to nice 0.
3581 +static inline int prio_deadline_diff(int user_prio)
3583 + return (prio_ratios[user_prio] * rr_interval * HZ / (1000 * 100)) ? : 1;
3586 +static inline int task_deadline_diff(struct task_struct *p)
3588 + return prio_deadline_diff(TASK_USER_PRIO(p));
3591 +static inline int static_deadline_diff(int static_prio)
3593 + return prio_deadline_diff(USER_PRIO(static_prio));
3596 +static inline int longest_deadline_diff(void)
3598 + return prio_deadline_diff(39);
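The claim above that nice 19 ends up with about 3% of the CPU of nice 0 can be checked standalone. The sketch below assumes prio_ratios[] is seeded at 128 and grows by roughly 10% per nice level, and an rr_interval of 6ms; both are assumptions made to keep the sketch self-contained (the patch sets these up elsewhere). It uses gcc's ?: extension, as the code above does:

#include <stdio.h>

#define HZ              1000    /* illustrative */
#define PRIO_RANGE      40

static int rr_interval = 6;     /* ms; assumed default */
static int prio_ratios[PRIO_RANGE];

static int prio_deadline_diff(int user_prio)
{
        return (prio_ratios[user_prio] * rr_interval * HZ / (1000 * 100)) ? : 1;
}

int main(void)
{
        double d0, d19;
        int i;

        prio_ratios[0] = 128;   /* assumed seeding, ~10% growth per level */
        for (i = 1; i < PRIO_RANGE; i++)
                prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;

        d0 = prio_deadline_diff(20);    /* nice 0 is user prio 20 */
        d19 = prio_deadline_diff(39);   /* nice 19 is user prio 39 */
        printf("deadline offsets: nice 0 = %g, nice 19 = %g jiffies\n", d0, d19);
        printf("implied nice 19 share vs nice 0: %.1f%%\n",
               100.0 * (d0 / d19) * (d0 / d19));
        return 0;
}

With these assumed values it prints a share of about 2.7%, which is where the "about 3%" figure comes from: the ratio of the deadline offsets is ~6x, and the CPU proportion goes as its square.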
3602 + * The time_slice is only refilled when it is empty and that is when we set a
3603 + * new deadline.
3605 +static inline void time_slice_expired(struct task_struct *p)
3607 + reset_first_time_slice(p);
3608 + p->time_slice = timeslice();
3609 + p->deadline = jiffies + task_deadline_diff(p);
3612 +static inline void check_deadline(struct task_struct *p)
3614 + if (p->time_slice <= 0)
3615 + time_slice_expired(p);
3619 + * O(n) lookup of all tasks in the global runqueue. This is where the real
3620 + * brainfuck lies: lock contention and the O(n) scan. It is not truly O(n), as
3621 + * only the queued, but not running, tasks are scanned, and it is O(n) queued
3622 + * in the worst case only because the right task can often be found before
3623 + * scanning all of them.
3624 + * Tasks are selected in this order:
3625 + * Real time tasks are selected purely by their static priority and in the
3626 + * order they were queued, so the lowest value idx, and the first queued task
3627 + * of that priority value is chosen.
3628 + * If no real time tasks are found, the SCHED_ISO priority is checked, and
3629 + * all SCHED_ISO tasks have the same priority value, so they're selected by
3630 + * the earliest deadline value.
3631 + * If no SCHED_ISO tasks are found, SCHED_NORMAL tasks are selected by the
3632 + * earliest deadline.
3633 + * Finally if no SCHED_NORMAL tasks are found, SCHED_IDLEPRIO tasks are
3634 + * selected by the earliest deadline.
3635 + * Once a deadline has expired (jiffies has passed it), tasks are chosen in
3636 + * FIFO order. Note that very few tasks will stay FIFO for very long because
3637 + * they only end up that way if they sleep for a long time, or if there are
3638 + * enough fully cpu bound tasks to push the load to ~8 higher than the number
3639 + * of CPUs.
3641 +static inline struct
3642 +task_struct *earliest_deadline_task(struct rq *rq, struct task_struct *idle)
3644 + unsigned long dl, earliest_deadline = 0; /* Initialise to silence compiler */
3645 + struct task_struct *p, *edt;
3646 + unsigned int cpu = cpu_of(rq);
3647 + struct list_head *queue;
3652 + idx = find_next_bit(grq.prio_bitmap, PRIO_LIMIT, idx);
3653 + if (idx >= PRIO_LIMIT)
3655 + queue = grq.queue + idx;
3656 + list_for_each_entry(p, queue, run_list) {
3657 + /* Make sure cpu affinity is ok */
3658 + if (!cpu_isset(cpu, p->cpus_allowed))
3660 + if (idx < MAX_RT_PRIO) {
3661 + /* We found an rt task */
3666 + dl = p->deadline + cache_distance(task_rq(p), rq, p);
3669 + * Look for tasks with old deadlines and pick them in FIFO
3670 + * order, taking the first one found.
3672 + if (time_is_before_jiffies(dl)) {
3678 + * No rt tasks. Find the earliest deadline task. Now we're in
3679 + * O(n) territory. This is what we silenced the compiler for:
3680 + * edt will always start as idle.
3682 + if (edt == idle ||
3683 + time_before(dl, earliest_deadline)) {
3684 + earliest_deadline = dl;
3688 + if (edt == idle) {
3689 + if (++idx < PRIO_LIMIT)
3694 + take_task(rq, edt);
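A toy model of this selection order may help: a flat array of FIFO buckets stands in for grq.queue and its priority bitmap, with locking, affinity and the cache_distance() offset all glossed over. Types and the bucket numbering are invented for illustration (103 buckets matching 100 RT levels plus ISO, NORMAL and IDLE):

#include <stdio.h>

#define MAX_RT_PRIO     100
#define PRIO_LIMIT      103     /* 100 RT levels + ISO + NORMAL + IDLE */
#define NQ              8       /* max tasks per bucket in this toy */

struct task {
        const char *name;
        long deadline;          /* in jiffies; ignored for RT buckets */
};

/* queue[idx][] holds tasks in FIFO arrival order, like grq.queue + idx */
static struct task *queue[PRIO_LIMIT][NQ];

/* wrap-safe "a is before b", like the kernel's time_before() */
static int time_before(long a, long b)
{
        return a - b < 0;
}

static struct task *pick_next(long jiffies)
{
        int idx, i;

        for (idx = 0; idx < PRIO_LIMIT; idx++) {
                struct task *edt = NULL;

                for (i = 0; i < NQ && queue[idx][i]; i++) {
                        struct task *p = queue[idx][i];

                        if (idx < MAX_RT_PRIO)
                                return p;       /* RT: first queued wins */
                        if (time_before(p->deadline, jiffies))
                                return p;       /* expired: FIFO, first found */
                        if (!edt || time_before(p->deadline, edt->deadline))
                                edt = p;        /* else track earliest deadline */
                }
                if (edt)
                        return edt;
        }
        return NULL;    /* nothing queued: run the idle task */
}

int main(void)
{
        struct task a = { "A", 1005 }, b = { "B", 1002 }, c = { "C", 990 };

        queue[101][0] = &a;     /* bucket 101 ~ SCHED_NORMAL here */
        queue[101][1] = &b;
        queue[101][2] = &c;
        /* C's deadline has passed, so the FIFO scan returns it immediately */
        printf("picked: %s\n", pick_next(1000)->name);
        return 0;
}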
3700 + * Print scheduling while atomic bug:
3702 +static noinline void __schedule_bug(struct task_struct *prev)
3704 + struct pt_regs *regs = get_irq_regs();
3706 + printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
3707 + prev->comm, prev->pid, preempt_count());
3709 + debug_show_held_locks(prev);
3711 + if (irqs_disabled())
3712 + print_irqtrace_events(prev);
3721 + * Various schedule()-time debugging checks and statistics:
3723 +static inline void schedule_debug(struct task_struct *prev)
3726 + * Test if we are atomic. Since do_exit() needs to call into
3727 + * schedule() atomically, we ignore that path for now.
3728 + * Otherwise, whine if we are scheduling when we should not be.
3730 + if (unlikely(in_atomic_preempt_off() && !prev->exit_state))
3731 + __schedule_bug(prev);
3733 + profile_hit(SCHED_PROFILING, __builtin_return_address(0));
3735 + schedstat_inc(this_rq(), sched_count);
3736 +#ifdef CONFIG_SCHEDSTATS
3737 + if (unlikely(prev->lock_depth >= 0)) {
3738 + schedstat_inc(this_rq(), bkl_count);
3739 + schedstat_inc(prev, sched_info.bkl_count);
3745 + * The currently running task's information is all stored in rq local data
3746 + * which is only modified by the local CPU, thereby allowing the data to be
3747 + * changed without grabbing the grq lock.
3749 +static inline void set_rq_task(struct rq *rq, struct task_struct *p)
3751 + rq->rq_time_slice = p->time_slice;
3752 + rq->rq_deadline = p->deadline;
3753 + rq->rq_last_ran = p->last_ran;
3754 + rq->rq_policy = p->policy;
3755 + rq->rq_prio = p->prio;
3758 +static void reset_rq_task(struct rq *rq, struct task_struct *p)
3760 + rq->rq_policy = p->policy;
3761 + rq->rq_prio = p->prio;
3765 + * schedule() is the main scheduler function.
3767 +asmlinkage void __sched schedule(void)
3769 + struct task_struct *prev, *next, *idle;
3770 + unsigned long *switch_count;
3771 + int deactivate, cpu;
3775 + preempt_disable();
3777 + cpu = smp_processor_id();
3780 + rcu_qsctr_inc(cpu);
3782 + switch_count = &prev->nivcsw;
3784 + release_kernel_lock(prev);
3785 +need_resched_nonpreemptible:
3788 + schedule_debug(prev);
3790 + local_irq_disable();
3791 + update_rq_clock(rq);
3792 + update_cpu_clock(rq, prev, 0);
3795 + clear_tsk_need_resched(prev);
3797 + if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
3798 + if (unlikely(signal_pending_state(prev->state, prev)))
3799 + prev->state = TASK_RUNNING;
3802 + switch_count = &prev->nvcsw;
3805 + if (prev != idle) {
3806 + /* Update all the information stored on struct rq */
3807 + prev->time_slice = rq->rq_time_slice;
3808 + prev->deadline = rq->rq_deadline;
3809 + check_deadline(prev);
3810 + return_task(prev, deactivate);
3811 + /* Task changed affinity off this cpu */
3812 + if (unlikely(!cpus_intersects(prev->cpus_allowed,
3813 + cpumask_of_cpu(cpu))))
3814 + resched_suitable_idle(prev);
3817 + if (likely(queued_notrunning())) {
3818 + next = earliest_deadline_task(rq, idle);
3821 + schedstat_inc(rq, sched_goidle);
3825 + prefetch_stack(next);
3827 + if (task_idle(next))
3828 + set_cpuidle_map(cpu);
3830 + clear_cpuidle_map(cpu);
3832 + prev->last_ran = rq->clock;
3834 + if (likely(prev != next)) {
3835 + sched_info_switch(prev, next);
3837 + set_rq_task(rq, next);
3838 + grq.nr_switches++;
3844 + context_switch(rq, prev, next); /* unlocks the grq */
3846 + * the context switch might have flipped the stack from under
3847 + * us, hence refresh the local variables.
3849 + cpu = smp_processor_id();
3855 + if (unlikely(reacquire_kernel_lock(current) < 0))
3856 + goto need_resched_nonpreemptible;
3857 + preempt_enable_no_resched();
3858 + if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))
3859 + goto need_resched;
3861 +EXPORT_SYMBOL(schedule);
3863 +#ifdef CONFIG_PREEMPT
3865 + * this is the entry point to schedule() from in-kernel preemption
3866 + * off of preempt_enable. Kernel preemptions off return from interrupt
3867 + * occur there and call schedule directly.
3869 +asmlinkage void __sched preempt_schedule(void)
3871 + struct thread_info *ti = current_thread_info();
3874 + * If there is a non-zero preempt_count or interrupts are disabled,
3875 + * we do not want to preempt the current task. Just return..
3877 + if (likely(ti->preempt_count || irqs_disabled()))
3881 + add_preempt_count(PREEMPT_ACTIVE);
3883 + sub_preempt_count(PREEMPT_ACTIVE);
3886 + * Check again in case we missed a preemption opportunity
3887 + * between schedule and now.
3890 + } while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
3892 +EXPORT_SYMBOL(preempt_schedule);
3895 + * this is the entry point to schedule() from kernel preemption
3896 + * off of irq context.
3897 + * Note that this is called and returns with irqs disabled. This will
3898 + * protect us against recursive calling from irq.
3900 +asmlinkage void __sched preempt_schedule_irq(void)
3902 + struct thread_info *ti = current_thread_info();
3904 + /* Catch callers which need to be fixed */
3905 + BUG_ON(ti->preempt_count || !irqs_disabled());
3908 + add_preempt_count(PREEMPT_ACTIVE);
3909 + local_irq_enable();
3911 + local_irq_disable();
3912 + sub_preempt_count(PREEMPT_ACTIVE);
3915 + * Check again in case we missed a preemption opportunity
3916 + * between schedule and now.
3919 + } while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
3922 +#endif /* CONFIG_PREEMPT */
3924 +int default_wake_function(wait_queue_t *curr, unsigned mode, int sync,
3927 + return try_to_wake_up(curr->private, mode, sync);
3929 +EXPORT_SYMBOL(default_wake_function);
3932 + * The core wakeup function. Non-exclusive wakeups (nr_exclusive == 0) just
3933 + * wake everything up. If it's an exclusive wakeup (nr_exclusive == small +ve
3934 + * number) then we wake all the non-exclusive tasks and one exclusive task.
3936 + * There are circumstances in which we can try to wake a task which has already
3937 + * started to run but is not in state TASK_RUNNING. try_to_wake_up() returns
3938 + * zero in this (rare) case, and we handle it by continuing to scan the queue.
3940 +void __wake_up_common(wait_queue_head_t *q, unsigned int mode,
3941 + int nr_exclusive, int sync, void *key)
3943 + struct list_head *tmp, *next;
3945 + list_for_each_safe(tmp, next, &q->task_list) {
3946 + wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list);
3947 + unsigned int flags = curr->flags;
3949 + if (curr->func(curr, mode, sync, key) &&
3950 + (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
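The exclusive-wakeup rule is compact enough to model directly: every non-exclusive waiter wakes, and the scan stops after nr_exclusive exclusive waiters have woken. Because exclusive waiters are queued at the tail (__add_wait_queue_tail() below), non-exclusive waiters always wake first. A userspace rendering with invented types:

#include <stdio.h>

#define WQ_FLAG_EXCLUSIVE       0x01

struct waiter {
        const char *name;
        unsigned int flags;
};

static int wake(struct waiter *w)
{
        printf("woke %s\n", w->name);
        return 1;       /* stands in for curr->func() reporting success */
}

static void wake_up_common(struct waiter *q, int n, int nr_exclusive)
{
        int i;

        for (i = 0; i < n; i++) {
                if (wake(&q[i]) &&
                    (q[i].flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
                        break;
        }
}

int main(void)
{
        struct waiter q[] = {
                { "reader1", 0 },
                { "writer1", WQ_FLAG_EXCLUSIVE },
                { "writer2", WQ_FLAG_EXCLUSIVE },       /* stays asleep */
        };

        wake_up_common(q, 3, 1);        /* wakes reader1 and writer1 only */
        return 0;
}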
3956 + * __wake_up - wake up threads blocked on a waitqueue.
3957 + * @q: the waitqueue
3958 + * @mode: which threads
3959 + * @nr_exclusive: how many wake-one or wake-many threads to wake up
3960 + * @key: is directly passed to the wakeup function
3962 + * It may be assumed that this function implies a write memory barrier before
3963 + * changing the task state if and only if any tasks are woken up.
3965 +void __wake_up(wait_queue_head_t *q, unsigned int mode,
3966 + int nr_exclusive, void *key)
3968 + unsigned long flags;
3970 + spin_lock_irqsave(&q->lock, flags);
3971 + __wake_up_common(q, mode, nr_exclusive, 0, key);
3972 + spin_unlock_irqrestore(&q->lock, flags);
3974 +EXPORT_SYMBOL(__wake_up);
3977 + * Same as __wake_up but called with the spinlock in wait_queue_head_t held.
3979 +void __wake_up_locked(wait_queue_head_t *q, unsigned int mode)
3981 + __wake_up_common(q, mode, 1, 0, NULL);
3985 + * __wake_up_sync - wake up threads blocked on a waitqueue.
3986 + * @q: the waitqueue
3987 + * @mode: which threads
3988 + * @nr_exclusive: how many wake-one or wake-many threads to wake up
3990 + * The sync wakeup differs in that the waker knows that it will schedule
3991 + * away soon, so while the target thread will be woken up, it will not
3992 + * be migrated to another CPU - ie. the two threads are 'synchronised'
3993 + * with each other. This can prevent needless bouncing between CPUs.
3995 + * On UP it can prevent extra preemption.
3997 +void __wake_up_sync(wait_queue_head_t *q, unsigned int mode, int nr_exclusive)
3999 + unsigned long flags;
4005 + if (unlikely(!nr_exclusive))
4008 + spin_lock_irqsave(&q->lock, flags);
4009 + __wake_up_common(q, mode, nr_exclusive, sync, NULL);
4010 + spin_unlock_irqrestore(&q->lock, flags);
4012 +EXPORT_SYMBOL_GPL(__wake_up_sync); /* For internal use only */
4014 +void complete(struct completion *x)
4016 + unsigned long flags;
4018 + spin_lock_irqsave(&x->wait.lock, flags);
4020 + __wake_up_common(&x->wait, TASK_NORMAL, 1, 0, NULL);
4021 + spin_unlock_irqrestore(&x->wait.lock, flags);
4023 +EXPORT_SYMBOL(complete);
4025 +void complete_all(struct completion *x)
4027 + unsigned long flags;
4029 + spin_lock_irqsave(&x->wait.lock, flags);
4030 + x->done += UINT_MAX/2;
4031 + __wake_up_common(&x->wait, TASK_NORMAL, 0, 0, NULL);
4032 + spin_unlock_irqrestore(&x->wait.lock, flags);
4034 +EXPORT_SYMBOL(complete_all);
4036 +static inline long __sched
4037 +do_wait_for_common(struct completion *x, long timeout, int state)
4040 + DECLARE_WAITQUEUE(wait, current);
4042 + wait.flags |= WQ_FLAG_EXCLUSIVE;
4043 + __add_wait_queue_tail(&x->wait, &wait);
4045 + if ((state == TASK_INTERRUPTIBLE &&
4046 + signal_pending(current)) ||
4047 + (state == TASK_KILLABLE &&
4048 + fatal_signal_pending(current))) {
4049 + timeout = -ERESTARTSYS;
4052 + __set_current_state(state);
4053 + spin_unlock_irq(&x->wait.lock);
4054 + timeout = schedule_timeout(timeout);
4055 + spin_lock_irq(&x->wait.lock);
4056 + } while (!x->done && timeout);
4057 + __remove_wait_queue(&x->wait, &wait);
4062 + return timeout ?: 1;
4065 +static long __sched
4066 +wait_for_common(struct completion *x, long timeout, int state)
4070 + spin_lock_irq(&x->wait.lock);
4071 + timeout = do_wait_for_common(x, timeout, state);
4072 + spin_unlock_irq(&x->wait.lock);
4076 +void __sched wait_for_completion(struct completion *x)
4078 + wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_UNINTERRUPTIBLE);
4080 +EXPORT_SYMBOL(wait_for_completion);
4082 +unsigned long __sched
4083 +wait_for_completion_timeout(struct completion *x, unsigned long timeout)
4085 + return wait_for_common(x, timeout, TASK_UNINTERRUPTIBLE);
4087 +EXPORT_SYMBOL(wait_for_completion_timeout);
4089 +int __sched wait_for_completion_interruptible(struct completion *x)
4091 + long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_INTERRUPTIBLE);
4092 + if (t == -ERESTARTSYS)
4096 +EXPORT_SYMBOL(wait_for_completion_interruptible);
4098 +unsigned long __sched
4099 +wait_for_completion_interruptible_timeout(struct completion *x,
4100 + unsigned long timeout)
4102 + return wait_for_common(x, timeout, TASK_INTERRUPTIBLE);
4104 +EXPORT_SYMBOL(wait_for_completion_interruptible_timeout);
4106 +int __sched wait_for_completion_killable(struct completion *x)
4108 + long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_KILLABLE);
4109 + if (t == -ERESTARTSYS)
4113 +EXPORT_SYMBOL(wait_for_completion_killable);
4116 + * try_wait_for_completion - try to decrement a completion without blocking
4117 + * @x: completion structure
4119 + * Returns: 0 if a decrement cannot be done without blocking
4120 + * 1 if a decrement succeeded.
4122 + * If a completion is being used as a counting completion,
4123 + * attempt to decrement the counter without blocking. This
4124 + * enables us to avoid waiting if the resource the completion
4125 + * is protecting is not available.
4127 +bool try_wait_for_completion(struct completion *x)
4131 + spin_lock_irq(&x->wait.lock);
4136 + spin_unlock_irq(&x->wait.lock);
4139 +EXPORT_SYMBOL(try_wait_for_completion);
4142 + * completion_done - Test to see if a completion has any waiters
4143 + * @x: completion structure
4145 + * Returns: 0 if there are waiters (wait_for_completion() in progress)
4146 + * 1 if there are no waiters.
4149 +bool completion_done(struct completion *x)
4153 + spin_lock_irq(&x->wait.lock);
4156 + spin_unlock_irq(&x->wait.lock);
4159 +EXPORT_SYMBOL(completion_done);
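For reference, this is the shape of code that consumes the completion API above. It is a sketch, not a standalone program: my_dev, my_irq() and my_wait_for_hw() are invented names, and irq registration and re-initialisation races are glossed over:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/interrupt.h>

struct my_dev {                         /* invented for illustration */
        struct completion done;
};

static irqreturn_t my_irq(int irq, void *data)
{
        struct my_dev *dev = data;

        complete(&dev->done);           /* wake exactly one waiter */
        return IRQ_HANDLED;
}

static int my_wait_for_hw(struct my_dev *dev)
{
        init_completion(&dev->done);
        /* ... kick off the hardware operation here ... */
        if (!wait_for_completion_timeout(&dev->done, HZ))
                return -ETIMEDOUT;      /* no completion within one second */
        return 0;
}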
4161 +static long __sched
4162 +sleep_on_common(wait_queue_head_t *q, int state, long timeout)
4164 + unsigned long flags;
4165 + wait_queue_t wait;
4167 + init_waitqueue_entry(&wait, current);
4169 + __set_current_state(state);
4171 + spin_lock_irqsave(&q->lock, flags);
4172 + __add_wait_queue(q, &wait);
4173 + spin_unlock(&q->lock);
4174 + timeout = schedule_timeout(timeout);
4175 + spin_lock_irq(&q->lock);
4176 + __remove_wait_queue(q, &wait);
4177 + spin_unlock_irqrestore(&q->lock, flags);
4182 +void __sched interruptible_sleep_on(wait_queue_head_t *q)
4184 + sleep_on_common(q, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
4186 +EXPORT_SYMBOL(interruptible_sleep_on);
4189 +interruptible_sleep_on_timeout(wait_queue_head_t *q, long timeout)
4191 + return sleep_on_common(q, TASK_INTERRUPTIBLE, timeout);
4193 +EXPORT_SYMBOL(interruptible_sleep_on_timeout);
4195 +void __sched sleep_on(wait_queue_head_t *q)
4197 + sleep_on_common(q, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
4199 +EXPORT_SYMBOL(sleep_on);
4201 +long __sched sleep_on_timeout(wait_queue_head_t *q, long timeout)
4203 + return sleep_on_common(q, TASK_UNINTERRUPTIBLE, timeout);
4205 +EXPORT_SYMBOL(sleep_on_timeout);
4207 +#ifdef CONFIG_RT_MUTEXES
4210 + * rt_mutex_setprio - set the current priority of a task
4212 + * @prio: prio value (kernel-internal form)
4214 + * This function changes the 'effective' priority of a task. It does
4215 + * not touch ->normal_prio like __setscheduler().
4217 + * Used by the rt_mutex code to implement priority inheritance logic.
4219 +void rt_mutex_setprio(struct task_struct *p, int prio)
4221 + unsigned long flags;
4222 + int queued, oldprio;
4225 + BUG_ON(prio < 0 || prio > MAX_PRIO);
4227 + rq = time_task_grq_lock(p, &flags);
4229 + oldprio = p->prio;
4230 + queued = task_queued(p);
4234 + if (task_running(p) && prio > oldprio)
4238 + try_preempt(p, rq);
4241 + task_grq_unlock(&flags);
4247 + * Adjust the deadline for when the priority is to change, before it's
4250 +static inline void adjust_deadline(struct task_struct *p, int new_prio)
4252 + p->deadline += static_deadline_diff(new_prio) - task_deadline_diff(p);
4255 +void set_user_nice(struct task_struct *p, long nice)
4257 + int queued, new_static, old_static;
4258 + unsigned long flags;
4261 + if (TASK_NICE(p) == nice || nice < -20 || nice > 19)
4263 + new_static = NICE_TO_PRIO(nice);
4265 + * We have to be careful, if called from sys_setpriority(),
4266 + * the task might be in the middle of scheduling on another CPU.
4268 + rq = time_task_grq_lock(p, &flags);
4270 + * The RT priorities are set via sched_setscheduler(), but we still
4271 + * allow the 'normal' nice value to be set - but as expected
4272 + * it won't have any effect on scheduling until the task
4273 + * becomes SCHED_NORMAL/SCHED_BATCH again:
4275 + if (has_rt_policy(p)) {
4276 + p->static_prio = new_static;
4279 + queued = task_queued(p);
4283 + adjust_deadline(p, new_static);
4284 + old_static = p->static_prio;
4285 + p->static_prio = new_static;
4286 + p->prio = effective_prio(p);
4290 + if (new_static < old_static)
4291 + try_preempt(p, rq);
4292 + } else if (task_running(p)) {
4293 + reset_rq_task(rq, p);
4294 + if (old_static < new_static)
4298 + task_grq_unlock(&flags);
4300 +EXPORT_SYMBOL(set_user_nice);
4303 + * can_nice - check if a task can reduce its nice value
4305 + * @nice: nice value
4307 +int can_nice(const struct task_struct *p, const int nice)
4309 + /* convert nice value [19,-20] to rlimit style value [1,40] */
4310 + int nice_rlim = 20 - nice;
4312 + return (nice_rlim <= p->signal->rlim[RLIMIT_NICE].rlim_cur ||
4313 + capable(CAP_SYS_NICE));
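The [19,-20] to [1,40] conversion reads backwards at first glance; a standalone check of the mapping:

#include <assert.h>

static int nice_rlim(int nice)
{
        return 20 - nice;       /* the conversion used by can_nice() */
}

int main(void)
{
        assert(nice_rlim(19) == 1);     /* weakest nice, smallest limit */
        assert(nice_rlim(0) == 20);
        assert(nice_rlim(-20) == 40);   /* strongest nice needs RLIMIT_NICE 40 */
        return 0;
}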
4316 +#ifdef __ARCH_WANT_SYS_NICE
4319 + * sys_nice - change the priority of the current process.
4320 + * @increment: priority increment
4322 + * sys_setpriority is a more generic, but much slower function that
4323 + * does similar things.
4325 +asmlinkage long sys_nice(int increment)
4327 + long nice, retval;
4330 + * Setpriority might change our priority at the same moment.
4331 + * We don't have to worry. Conceptually one call occurs first
4332 + * and we have a single winner.
4334 + if (increment < -40)
4336 + if (increment > 40)
4339 + nice = PRIO_TO_NICE(current->static_prio) + increment;
4345 + if (increment < 0 && !can_nice(current, nice))
4348 + retval = security_task_setnice(current, nice);
4352 + set_user_nice(current, nice);
4359 + * task_prio - return the priority value of a given task.
4360 + * @p: the task in question.
4362 + * This is the priority value as seen by users in /proc.
4363 + * RT tasks are offset by -100. Normal tasks are centered around 1, value goes
4364 + * from 0 (SCHED_ISO) up to 82 (nice +19 SCHED_IDLEPRIO).
4366 +int task_prio(const struct task_struct *p)
4368 + int delta, prio = p->prio - MAX_RT_PRIO;
4370 + /* rt tasks and iso tasks */
4374 + delta = (p->deadline - jiffies) * 40 / longest_deadline_diff();
4375 + if (delta > 0 && delta <= 80)
4377 + if (idleprio_task(p))
4384 + * task_nice - return the nice value of a given task.
4385 + * @p: the task in question.
4387 +int task_nice(const struct task_struct *p)
4389 + return TASK_NICE(p);
4391 +EXPORT_SYMBOL_GPL(task_nice);
4394 + * idle_cpu - is a given cpu idle currently?
4395 + * @cpu: the processor in question.
4397 +int idle_cpu(int cpu)
4399 + return cpu_curr(cpu) == cpu_rq(cpu)->idle;
4403 + * idle_task - return the idle task for a given cpu.
4404 + * @cpu: the processor in question.
4406 +struct task_struct *idle_task(int cpu)
4408 + return cpu_rq(cpu)->idle;
4412 + * find_process_by_pid - find a process with a matching PID value.
4413 + * @pid: the pid in question.
4415 +static inline struct task_struct *find_process_by_pid(pid_t pid)
4417 + return pid ? find_task_by_vpid(pid) : current;
4420 +/* Actually do priority change: must hold grq lock. */
4422 +__setscheduler(struct task_struct *p, struct rq *rq, int policy, int prio)
4424 + int oldrtprio, oldprio;
4426 + BUG_ON(task_queued(p));
4428 + p->policy = policy;
4429 + oldrtprio = p->rt_priority;
4430 + p->rt_priority = prio;
4431 + p->normal_prio = normal_prio(p);
4432 + oldprio = p->prio;
4433 + /* we are holding p->pi_lock already */
4434 + p->prio = rt_mutex_getprio(p);
4435 + if (task_running(p)) {
4436 + reset_rq_task(rq, p);
4437 + /* Resched only if we might now be preempted */
4438 + if (p->prio > oldprio || p->rt_priority > oldrtprio)
4443 +static int __sched_setscheduler(struct task_struct *p, int policy,
4444 + struct sched_param *param, bool user)
4446 + struct sched_param zero_param = { .sched_priority = 0 };
4447 + int queued, retval, oldpolicy = -1;
4448 + unsigned long flags, rlim_rtprio = 0;
4451 + /* may grab non-irq protected spin_locks */
4452 + BUG_ON(in_interrupt());
4454 + if (is_rt_policy(policy) && !capable(CAP_SYS_NICE)) {
4455 + unsigned long lflags;
4457 + if (!lock_task_sighand(p, &lflags))
4459 + rlim_rtprio = p->signal->rlim[RLIMIT_RTPRIO].rlim_cur;
4460 + unlock_task_sighand(p, &lflags);
4464 + * If the caller requested an RT policy without having the
4465 + * necessary rights, we downgrade the policy to SCHED_ISO.
4466 + * We also set the parameter to zero to pass the checks.
4468 + policy = SCHED_ISO;
4469 + param = &zero_param;
4472 + /* double check policy once rq lock held */
4474 + policy = oldpolicy = p->policy;
4475 + else if (!SCHED_RANGE(policy))
4478 + * Valid priorities for SCHED_FIFO and SCHED_RR are
4479 + * 1..MAX_USER_RT_PRIO-1, valid priority for SCHED_NORMAL and
4480 + * SCHED_BATCH is 0.
4482 + if (param->sched_priority < 0 ||
4483 + (p->mm && param->sched_priority > MAX_USER_RT_PRIO-1) ||
4484 + (!p->mm && param->sched_priority > MAX_RT_PRIO-1))
4486 + if (is_rt_policy(policy) != (param->sched_priority != 0))
4490 + * Allow unprivileged RT tasks to decrease priority:
4492 + if (user && !capable(CAP_SYS_NICE)) {
4493 + if (is_rt_policy(policy)) {
4494 + /* can't set/change the rt policy */
4495 + if (policy != p->policy && !rlim_rtprio)
4498 + /* can't increase priority */
4499 + if (param->sched_priority > p->rt_priority &&
4500 + param->sched_priority > rlim_rtprio)
4503 + switch (p->policy) {
4505 + * Can only downgrade policies but not back to
4509 + if (policy == SCHED_ISO)
4511 + if (policy == SCHED_NORMAL)
4515 + if (policy == SCHED_BATCH)
4518 + * ANDROID: Allow tasks to move between
4519 + * SCHED_NORMAL <-> SCHED_BATCH
4521 + if (policy == SCHED_NORMAL)
4523 + if (policy != SCHED_IDLEPRIO)
4526 + case SCHED_IDLEPRIO:
4527 + if (policy == SCHED_IDLEPRIO)
4535 + /* can't change other user's priorities */
4536 + if ((current->euid != p->euid) &&
4537 + (current->euid != p->uid))
4541 + retval = security_task_setscheduler(p, policy, param);
4545 + * make sure no PI-waiters arrive (or leave) while we are
4546 + * changing the priority of the task:
4548 + spin_lock_irqsave(&p->pi_lock, flags);
4550 + * To be able to change p->policy safely, the appropriate
4551 + * runqueue lock must be held.
4553 + rq = __task_grq_lock(p);
4554 + /* recheck policy now with rq lock held */
4555 + if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
4556 + __task_grq_unlock();
4557 + spin_unlock_irqrestore(&p->pi_lock, flags);
4558 + policy = oldpolicy = -1;
4561 + update_rq_clock(rq);
4562 + queued = task_queued(p);
4565 + __setscheduler(p, rq, policy, param->sched_priority);
4568 + try_preempt(p, rq);
4570 + __task_grq_unlock();
4571 + spin_unlock_irqrestore(&p->pi_lock, flags);
4573 + rt_mutex_adjust_pi(p);
4579 + * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
4580 + * @p: the task in question.
4581 + * @policy: new policy.
4582 + * @param: structure containing the new RT priority.
4584 + * NOTE that the task may be already dead.
4586 +int sched_setscheduler(struct task_struct *p, int policy,
4587 + struct sched_param *param)
4589 + return __sched_setscheduler(p, policy, param, true);
4592 +EXPORT_SYMBOL_GPL(sched_setscheduler);
4595 + * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
4596 + * @p: the task in question.
4597 + * @policy: new policy.
4598 + * @param: structure containing the new RT priority.
4600 + * Just like sched_setscheduler, only don't bother checking if the
4601 + * current context has permission. For example, this is needed in
4602 + * stop_machine(): we create temporary high priority worker threads,
4603 + * but our caller might not have that capability.
4605 +int sched_setscheduler_nocheck(struct task_struct *p, int policy,
4606 + struct sched_param *param)
4608 + return __sched_setscheduler(p, policy, param, false);
4612 +do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
4614 + struct sched_param lparam;
4615 + struct task_struct *p;
4618 + if (!param || pid < 0)
4620 + if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
4625 + p = find_process_by_pid(pid);
4627 + retval = sched_setscheduler(p, policy, &lparam);
4628 + rcu_read_unlock();
4634 + * sys_sched_setscheduler - set/change the scheduler policy and RT priority
4635 + * @pid: the pid in question.
4636 + * @policy: new policy.
4637 + * @param: structure containing the new RT priority.
4639 +asmlinkage long sys_sched_setscheduler(pid_t pid, int policy,
4640 + struct sched_param __user *param)
4642 + /* negative values for policy are not valid */
4646 + return do_sched_setscheduler(pid, policy, param);
4650 + * sys_sched_setparam - set/change the RT priority of a thread
4651 + * @pid: the pid in question.
4652 + * @param: structure containing the new RT priority.
4654 +asmlinkage long sys_sched_setparam(pid_t pid, struct sched_param __user *param)
4656 + return do_sched_setscheduler(pid, -1, param);
4660 + * sys_sched_getscheduler - get the policy (scheduling class) of a thread
4661 + * @pid: the pid in question.
4663 +asmlinkage long sys_sched_getscheduler(pid_t pid)
4665 + struct task_struct *p;
4666 + int retval = -EINVAL;
4669 + goto out_nounlock;
4672 + read_lock(&tasklist_lock);
4673 + p = find_process_by_pid(pid);
4675 + retval = security_task_getscheduler(p);
4677 + retval = p->policy;
4679 + read_unlock(&tasklist_lock);
4686 + * sys_sched_getparam - get the RT priority of a thread
4687 + * @pid: the pid in question.
4688 + * @param: structure containing the RT priority.
4690 +asmlinkage long sys_sched_getparam(pid_t pid, struct sched_param __user *param)
4692 + struct sched_param lp;
4693 + struct task_struct *p;
4694 + int retval = -EINVAL;
4696 + if (!param || pid < 0)
4697 + goto out_nounlock;
4699 + read_lock(&tasklist_lock);
4700 + p = find_process_by_pid(pid);
4705 + retval = security_task_getscheduler(p);
4709 + lp.sched_priority = p->rt_priority;
4710 + read_unlock(&tasklist_lock);
4713 + * This one might sleep, we cannot do it with a spinlock held ...
4715 + retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
4721 + read_unlock(&tasklist_lock);
4725 +long sched_setaffinity(pid_t pid, const cpumask_t *in_mask)
4727 + cpumask_t cpus_allowed;
4728 + cpumask_t new_mask = *in_mask;
4729 + struct task_struct *p;
4732 + get_online_cpus();
4733 + read_lock(&tasklist_lock);
4735 + p = find_process_by_pid(pid);
4737 + read_unlock(&tasklist_lock);
4738 + put_online_cpus();
4743 + * It is not safe to call set_cpus_allowed with the
4744 + * tasklist_lock held. We will bump the task_struct's
4745 + * usage count and then drop tasklist_lock.
4747 + get_task_struct(p);
4748 + read_unlock(&tasklist_lock);
4751 + if ((current->euid != p->euid) && (current->euid != p->uid) &&
4752 + !capable(CAP_SYS_NICE))
4755 + retval = security_task_setscheduler(p, 0, NULL);
4759 + cpuset_cpus_allowed(p, &cpus_allowed);
4760 + cpus_and(new_mask, new_mask, cpus_allowed);
4762 + retval = set_cpus_allowed_ptr(p, &new_mask);
4765 + cpuset_cpus_allowed(p, &cpus_allowed);
4766 + if (!cpus_subset(new_mask, cpus_allowed)) {
4768 + * We must have raced with a concurrent cpuset
4769 + * update. Just reset the cpus_allowed to the
4770 + * cpuset's cpus_allowed
4772 + new_mask = cpus_allowed;
4777 + put_task_struct(p);
4778 + put_online_cpus();
4782 +static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
4783 + cpumask_t *new_mask)
4785 + if (len < sizeof(cpumask_t)) {
4786 + memset(new_mask, 0, sizeof(cpumask_t));
4787 + } else if (len > sizeof(cpumask_t)) {
4788 + len = sizeof(cpumask_t);
4790 + return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
4794 + * sys_sched_setaffinity - set the cpu affinity of a process
4795 + * @pid: pid of the process
4796 + * @len: length in bytes of the bitmask pointed to by user_mask_ptr
4797 + * @user_mask_ptr: user-space pointer to the new cpu mask
4799 +asmlinkage long sys_sched_setaffinity(pid_t pid, unsigned int len,
4800 + unsigned long __user *user_mask_ptr)
4802 + cpumask_t new_mask;
4805 + retval = get_user_cpu_mask(user_mask_ptr, len, &new_mask);
4809 + return sched_setaffinity(pid, &new_mask);
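The userspace side of this syscall is the usual glibc wrapper. For instance, pinning the calling process to CPU 0 (illustrative only):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);       /* allow CPU 0 only */
        if (sched_setaffinity(0, sizeof(set), &set))    /* 0: this process */
                perror("sched_setaffinity");
        return 0;
}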
4812 +long sched_getaffinity(pid_t pid, cpumask_t *mask)
4814 + struct task_struct *p;
4817 + get_online_cpus();
4818 + read_lock(&tasklist_lock);
4821 + p = find_process_by_pid(pid);
4825 + retval = security_task_getscheduler(p);
4829 + cpus_and(*mask, p->cpus_allowed, cpu_online_map);
4832 + read_unlock(&tasklist_lock);
4833 + put_online_cpus();
4839 + * sys_sched_getaffinity - get the cpu affinity of a process
4840 + * @pid: pid of the process
4841 + * @len: length in bytes of the bitmask pointed to by user_mask_ptr
4842 + * @user_mask_ptr: user-space pointer to hold the current cpu mask
4844 +asmlinkage long sys_sched_getaffinity(pid_t pid, unsigned int len,
4845 + unsigned long __user *user_mask_ptr)
4850 + if (len < sizeof(cpumask_t))
4853 + ret = sched_getaffinity(pid, &mask);
4857 + if (copy_to_user(user_mask_ptr, &mask, sizeof(cpumask_t)))
4860 + return sizeof(cpumask_t);
4864 + * sys_sched_yield - yield the current processor to other threads.
4866 + * This function yields the current CPU to other tasks. It does this by
4867 + * scheduling away the current task. If it still has the earliest deadline
4868 + * it will be scheduled again as the next task.
4870 +asmlinkage long sys_sched_yield(void)
4872 + struct task_struct *p;
4876 + rq = task_grq_lock_irq(p);
4877 + schedstat_inc(rq, yld_count);
4881 + * Since we are going to call schedule() anyway, there's
4882 + * no need to preempt or enable interrupts:
4884 + __release(grq.lock);
4885 + spin_release(&grq.lock.dep_map, 1, _THIS_IP_);
4886 + _raw_spin_unlock(&grq.lock);
4887 + preempt_enable_no_resched();
4894 +static void __cond_resched(void)
4896 +	/* NOT a real fix but will make voluntary preempt work. (A stupid thing.) */
4897 + if (unlikely(system_state != SYSTEM_RUNNING))
4899 +#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
4900 + __might_sleep(__FILE__, __LINE__);
4903 + * The BKS might be reacquired before we have dropped
4904 + * PREEMPT_ACTIVE, which could trigger a second
4905 + * cond_resched() call.
4908 + add_preempt_count(PREEMPT_ACTIVE);
4910 + sub_preempt_count(PREEMPT_ACTIVE);
4911 + } while (need_resched());
4914 +int __sched _cond_resched(void)
4916 + if (need_resched() && !(preempt_count() & PREEMPT_ACTIVE) &&
4917 + system_state == SYSTEM_RUNNING) {
4923 +EXPORT_SYMBOL(_cond_resched);
4926 + * cond_resched_lock() - if a reschedule is pending, drop the given lock,
4927 + * call schedule, and on return reacquire the lock.
4929 + * This works OK both with and without CONFIG_PREEMPT. We do strange low-level
4930 + * operations here to prevent schedule() from being called twice (once via
4931 + * spin_unlock(), once by hand).
4933 +int cond_resched_lock(spinlock_t *lock)
4935 + int resched = need_resched() && system_state == SYSTEM_RUNNING;
4938 + if (spin_needbreak(lock) || resched) {
4939 + spin_unlock(lock);
4940 + if (resched && need_resched())
4949 +EXPORT_SYMBOL(cond_resched_lock);
4951 +int __sched cond_resched_softirq(void)
4953 + BUG_ON(!in_softirq());
4955 + if (need_resched() && system_state == SYSTEM_RUNNING) {
4956 + local_bh_enable();
4958 + local_bh_disable();
4963 +EXPORT_SYMBOL(cond_resched_softirq);
4966 + * yield - yield the current processor to other threads.
4968 + * This is a shortcut for kernel-space yielding - it marks the
4969 + * thread runnable and calls sys_sched_yield().
4971 +void __sched yield(void)
4973 + set_current_state(TASK_RUNNING);
4974 + sys_sched_yield();
4976 +EXPORT_SYMBOL(yield);
4979 + * This task is about to go to sleep on IO. Increment rq->nr_iowait so
4980 + * that process accounting knows that this is a task in IO wait state.
4982 + * But don't do that if it is a deliberate, throttling IO wait (this task
4983 + * has set its backing_dev_info: the queue against which it should throttle)
4985 +void __sched io_schedule(void)
4987 + struct rq *rq = &__raw_get_cpu_var(runqueues);
4989 + delayacct_blkio_start();
4990 + atomic_inc(&rq->nr_iowait);
4992 + atomic_dec(&rq->nr_iowait);
4993 + delayacct_blkio_end();
4995 +EXPORT_SYMBOL(io_schedule);
4997 +long __sched io_schedule_timeout(long timeout)
4999 + struct rq *rq = &__raw_get_cpu_var(runqueues);
5002 + delayacct_blkio_start();
5003 + atomic_inc(&rq->nr_iowait);
5004 + ret = schedule_timeout(timeout);
5005 + atomic_dec(&rq->nr_iowait);
5006 + delayacct_blkio_end();
5011 + * sys_sched_get_priority_max - return maximum RT priority.
5012 + * @policy: scheduling class.
5014 + * this syscall returns the maximum rt_priority that can be used
5015 + * by a given scheduling class.
5017 +asmlinkage long sys_sched_get_priority_max(int policy)
5019 + int ret = -EINVAL;
5024 + ret = MAX_USER_RT_PRIO-1;
5026 + case SCHED_NORMAL:
5029 + case SCHED_IDLEPRIO:
5037 + * sys_sched_get_priority_min - return minimum RT priority.
5038 + * @policy: scheduling class.
5040 + * this syscall returns the minimum rt_priority that can be used
5041 + * by a given scheduling class.
5043 +asmlinkage long sys_sched_get_priority_min(int policy)
5045 + int ret = -EINVAL;
5052 + case SCHED_NORMAL:
5055 + case SCHED_IDLEPRIO:
5063 + * sys_sched_rr_get_interval - return the default timeslice of a process.
5064 + * @pid: pid of the process.
5065 + * @interval: userspace pointer to the timeslice value.
5067 + * this syscall writes the default timeslice value of a given process
5068 + * into the user-space timespec buffer. A value of '0' means infinity.
5071 +long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval)
5073 + struct task_struct *p;
5075 + struct timespec t;
5081 + read_lock(&tasklist_lock);
5082 + p = find_process_by_pid(pid);
5086 + retval = security_task_getscheduler(p);
5090 + t = ns_to_timespec(p->policy == SCHED_FIFO ? 0 :
5091 + MS_TO_NS(task_timeslice(p)));
5092 + read_unlock(&tasklist_lock);
5093 + retval = copy_to_user(interval, &t, sizeof(t)) ? -EFAULT : 0;
5097 + read_unlock(&tasklist_lock);
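From userspace the same value comes back through sched_rr_get_interval(2); per the code above, SCHED_FIFO tasks report 0 ("infinity") and everything else reports its MS_TO_NS-scaled timeslice:

#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
        struct timespec ts;

        if (sched_rr_get_interval(0, &ts))      /* 0: the calling process */
                perror("sched_rr_get_interval");
        else
                printf("timeslice: %ld.%09ld s\n",
                       (long)ts.tv_sec, ts.tv_nsec);
        return 0;
}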
5101 +static const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
5103 +void sched_show_task(struct task_struct *p)
5105 + unsigned long free = 0;
5108 + state = p->state ? __ffs(p->state) + 1 : 0;
5109 + printk(KERN_INFO "%-13.13s %c", p->comm,
5110 + state < sizeof(stat_nam) - 1 ? stat_nam[state] : '?');
5111 +#if BITS_PER_LONG == 32
5112 + if (state == TASK_RUNNING)
5113 + printk(KERN_CONT " running ");
5115 + printk(KERN_CONT " %08lx ", thread_saved_pc(p));
5117 + if (state == TASK_RUNNING)
5118 + printk(KERN_CONT " running task ");
5120 + printk(KERN_CONT " %016lx ", thread_saved_pc(p));
5122 +#ifdef CONFIG_DEBUG_STACK_USAGE
5124 + unsigned long *n = end_of_stack(p);
5127 + free = (unsigned long)n - (unsigned long)end_of_stack(p);
5130 + printk(KERN_CONT "%5lu %5d %6d\n", free,
5131 + task_pid_nr(p), task_pid_nr(p->real_parent));
5133 + show_stack(p, NULL);
5136 +void show_state_filter(unsigned long state_filter)
5138 + struct task_struct *g, *p;
5140 +#if BITS_PER_LONG == 32
5142 + " task PC stack pid father\n");
5145 + " task PC stack pid father\n");
5147 + read_lock(&tasklist_lock);
5148 + do_each_thread(g, p) {
5150 + * reset the NMI-timeout, listing all files on a slow
5151 + * console might take a lot of time:
5153 + touch_nmi_watchdog();
5154 + if (!state_filter || (p->state & state_filter))
5155 + sched_show_task(p);
5156 + } while_each_thread(g, p);
5158 + touch_all_softlockup_watchdogs();
5160 + read_unlock(&tasklist_lock);
5162 + * Only show locks if all tasks are dumped:
5164 + if (state_filter == -1)
5165 + debug_show_all_locks();
5169 + * init_idle - set up an idle thread for a given CPU
5170 + * @idle: task in question
5171 + * @cpu: cpu the idle task belongs to
5173 + * NOTE: this function does not set the idle thread's NEED_RESCHED
5174 + * flag, to make booting more robust.
5176 +void init_idle(struct task_struct *idle, int cpu)
5178 + struct rq *rq = cpu_rq(cpu);
5179 + unsigned long flags;
5181 + time_grq_lock(rq, &flags);
5182 + idle->last_ran = rq->clock;
5183 + idle->state = TASK_RUNNING;
5184 + /* Setting prio to illegal value shouldn't matter when never queued */
5185 + idle->prio = PRIO_LIMIT;
5186 + set_rq_task(rq, idle);
5187 + idle->cpus_allowed = cpumask_of_cpu(cpu);
5188 + set_task_cpu(idle, cpu);
5189 + rq->curr = rq->idle = idle;
5191 + set_cpuidle_map(cpu);
5192 +#ifdef CONFIG_HOTPLUG_CPU
5193 + idle->unplugged_mask = CPU_MASK_NONE;
5195 + grq_unlock_irqrestore(&flags);
5197 + /* Set the preempt count _outside_ the spinlocks! */
5198 +#if defined(CONFIG_PREEMPT) && !defined(CONFIG_PREEMPT_BKL)
5199 + task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
5201 + task_thread_info(idle)->preempt_count = 0;
5206 + * In a system that switches off the HZ timer nohz_cpu_mask
5207 + * indicates which cpus entered this state. This is used
5208 + * in the rcu update to wait only for active cpus. For systems
5209 + * which do not switch off the HZ timer nohz_cpu_mask should
5210 + * always be CPU_MASK_NONE.
5212 +cpumask_t nohz_cpu_mask = CPU_MASK_NONE;
5215 +#ifdef CONFIG_NO_HZ
5217 + atomic_t load_balancer;
5218 + cpumask_t cpu_mask;
5219 +} nohz ____cacheline_aligned = {
5220 + .load_balancer = ATOMIC_INIT(-1),
5221 + .cpu_mask = CPU_MASK_NONE,
5225 + * This routine will try to nominate the ilb (idle load balancing)
5226 + * owner among the cpus whose ticks are stopped. ilb owner will do the idle
5227 + * load balancing on behalf of all those cpus. If all the cpus in the system
5228 + * go into this tickless mode, then there will be no ilb owner (as there is
5229 + * no need for one) and all the cpus will sleep till the next wakeup event
5232 + * For the ilb owner, tick is not stopped. And this tick will be used
5233 + * for idle load balancing. ilb owner will still be part of nohz.cpu_mask.
5236 + * While stopping the tick, this cpu will become the ilb owner if there
5237 + * is no other owner. And will be the owner till that cpu becomes busy
5238 + * or if all cpus in the system stop their ticks at which point
5239 + * there is no need for ilb owner.
5241 + * When the ilb owner becomes busy, it nominates another owner, during the
5242 + * next busy scheduler_tick()
5244 +int select_nohz_load_balancer(int stop_tick)
5246 + int cpu = smp_processor_id();
5249 + cpu_set(cpu, nohz.cpu_mask);
5250 + cpu_rq(cpu)->in_nohz_recently = 1;
5253 + * If we are going offline and still the leader, give up!
5255 + if (!cpu_active(cpu) &&
5256 + atomic_read(&nohz.load_balancer) == cpu) {
5257 + if (atomic_cmpxchg(&nohz.load_balancer, cpu, -1) != cpu)
5262 + /* time for ilb owner also to sleep */
5263 + if (cpus_weight(nohz.cpu_mask) == num_online_cpus()) {
5264 + if (atomic_read(&nohz.load_balancer) == cpu)
5265 + atomic_set(&nohz.load_balancer, -1);
5269 + if (atomic_read(&nohz.load_balancer) == -1) {
5270 + /* make me the ilb owner */
5271 + if (atomic_cmpxchg(&nohz.load_balancer, -1, cpu) == -1)
5273 + } else if (atomic_read(&nohz.load_balancer) == cpu)
5276 + if (!cpu_isset(cpu, nohz.cpu_mask))
5279 + cpu_clear(cpu, nohz.cpu_mask);
5281 + if (atomic_read(&nohz.load_balancer) == cpu)
5282 + if (atomic_cmpxchg(&nohz.load_balancer, cpu, -1) != cpu)
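Stripped of the nohz bookkeeping, the ilb-owner election is a compare-and-swap race on nohz.load_balancer: whoever swings it from -1 to their cpu id becomes owner, and ownership is dropped the same way. A minimal userspace rendering of the pattern with pthreads standing in for cpus (build with -pthread):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int load_balancer = ATOMIC_VAR_INIT(-1);

static void *try_become_ilb(void *arg)
{
        int cpu = (int)(long)arg, expected = -1;

        /* mirrors atomic_cmpxchg(&nohz.load_balancer, -1, cpu) above */
        if (atomic_compare_exchange_strong(&load_balancer, &expected, cpu))
                printf("cpu %d is now the ilb owner\n", cpu);
        return NULL;
}

int main(void)
{
        pthread_t t[4];
        long i;

        for (i = 0; i < 4; i++)
                pthread_create(&t[i], NULL, try_become_ilb, (void *)i);
        for (i = 0; i < 4; i++)
                pthread_join(t[i], NULL);
        printf("owner: %d\n", atomic_load(&load_balancer));
        return 0;
}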
5289 + * When add_timer_on() enqueues a timer into the timer wheel of an
5290 + * idle CPU then this timer might expire before the next timer event
5291 + * which is scheduled to wake up that CPU. In case of a completely
5292 + * idle system the next event might even be infinite time into the
5293 + * future. wake_up_idle_cpu() ensures that the CPU is woken up and
5294 + * leaves the inner idle loop so the newly added timer is taken into
5295 + * account when the CPU goes back to idle and evaluates the timer
5296 + * wheel for the next timer event.
5298 +void wake_up_idle_cpu(int cpu)
5300 + struct task_struct *idle;
5303 + if (cpu == smp_processor_id())
5310 + * This is safe, as this function is called with the timer
5311 + * wheel base lock of (cpu) held. When the CPU is on the way
5312 + * to idle and has not yet set rq->curr to idle then it will
5313 + * be serialised on the timer wheel base lock and take the new
5314 + * timer into account automatically.
5316 + if (unlikely(rq->curr != idle))
5320 + * We can set TIF_RESCHED on the idle task of the other CPU
5321 + * lockless. The worst case is that the other CPU runs the
5322 + * idle task through an additional NOOP schedule()
5324 + set_tsk_thread_flag(idle, TIF_NEED_RESCHED);
5326 + /* NEED_RESCHED must be visible before we test polling */
5328 + if (!tsk_is_polling(idle))
5329 + smp_send_reschedule(cpu);
5332 +#endif /* CONFIG_NO_HZ */
5335 + * Change a given task's CPU affinity. Migrate the thread to a
5336 + * proper CPU and schedule it away if the CPU it's executing on
5337 + * is removed from the allowed bitmask.
5339 + * NOTE: the caller must have a valid reference to the task, the
5340 + * task must not exit() & deallocate itself prematurely. The
5341 + * call is not atomic; no spinlocks may be held.
5343 +int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
5345 + unsigned long flags;
5346 + int running_wrong = 0;
5351 + rq = task_grq_lock(p, &flags);
5352 + if (!cpus_intersects(*new_mask, cpu_online_map)) {
5357 + if (unlikely((p->flags & PF_THREAD_BOUND) && p != current &&
5358 + !cpus_equal(p->cpus_allowed, *new_mask))) {
5363 + queued = task_queued(p);
5365 + p->cpus_allowed = *new_mask;
5367 + /* Can the task run on the task's current CPU? If so, we're done */
5368 + if (cpu_isset(task_cpu(p), *new_mask))
5371 + if (task_running(p)) {
5372 + /* Task is running on the wrong cpu now, reschedule it. */
5373 + set_tsk_need_resched(p);
5374 + running_wrong = 1;
5376 + set_task_cpu(p, any_online_cpu(*new_mask));
5380 + try_preempt(p, rq);
5381 + task_grq_unlock(&flags);
5383 + if (running_wrong)
5388 +EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
5390 +#ifdef CONFIG_HOTPLUG_CPU
5391 +/* Schedules idle task to be the next runnable task on current CPU.
5392 + * It does so by boosting its priority to highest possible.
5393 + * Used by CPU offline code.
5395 +void sched_idle_next(void)
5397 + int this_cpu = smp_processor_id();
5398 + struct rq *rq = cpu_rq(this_cpu);
5399 + struct task_struct *idle = rq->idle;
5400 + unsigned long flags;
5402 + /* cpu has to be offline */
5403 + BUG_ON(cpu_online(this_cpu));
5406 + * Strictly not necessary since rest of the CPUs are stopped by now
5407 + * and interrupts disabled on the current cpu.
5409 + time_grq_lock(rq, &flags);
5411 + __setscheduler(idle, rq, SCHED_FIFO, MAX_RT_PRIO - 1);
5413 + activate_idle_task(idle);
5414 + set_tsk_need_resched(rq->curr);
5416 + grq_unlock_irqrestore(&flags);
5420 + * Ensures that the idle task is using init_mm right before its cpu goes offline.
5423 +void idle_task_exit(void)
5425 + struct mm_struct *mm = current->active_mm;
5427 + BUG_ON(cpu_online(smp_processor_id()));
5429 + if (mm != &init_mm)
5430 + switch_mm(mm, &init_mm, current);
5434 +#endif /* CONFIG_HOTPLUG_CPU */
5436 +#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
5438 +static struct ctl_table sd_ctl_dir[] = {
5440 + .procname = "sched_domain",
5446 +static struct ctl_table sd_ctl_root[] = {
5448 + .ctl_name = CTL_KERN,
5449 + .procname = "kernel",
5451 + .child = sd_ctl_dir,
5456 +static struct ctl_table *sd_alloc_ctl_entry(int n)
5458 + struct ctl_table *entry =
5459 + kcalloc(n, sizeof(struct ctl_table), GFP_KERNEL);
5464 +static void sd_free_ctl_entry(struct ctl_table **tablep)
5466 + struct ctl_table *entry;
5469 + * In the intermediate directories, both the child directory and
5470 + * procname are dynamically allocated and could fail but the mode
5471 + * will always be set. In the lowest directory the names are
5472 + * static strings and all have proc handlers.
5474 + for (entry = *tablep; entry->mode; entry++) {
5476 + sd_free_ctl_entry(&entry->child);
5477 + if (entry->proc_handler == NULL)
5478 + kfree(entry->procname);
5486 +set_table_entry(struct ctl_table *entry,
5487 + const char *procname, void *data, int maxlen,
5488 + mode_t mode, proc_handler *proc_handler)
5490 + entry->procname = procname;
5491 + entry->data = data;
5492 + entry->maxlen = maxlen;
5493 + entry->mode = mode;
5494 + entry->proc_handler = proc_handler;
5497 +static struct ctl_table *
5498 +sd_alloc_ctl_domain_table(struct sched_domain *sd)
5500 + struct ctl_table *table = sd_alloc_ctl_entry(12);
5502 + if (table == NULL)
5505 + set_table_entry(&table[0], "min_interval", &sd->min_interval,
5506 + sizeof(long), 0644, proc_doulongvec_minmax);
5507 + set_table_entry(&table[1], "max_interval", &sd->max_interval,
5508 + sizeof(long), 0644, proc_doulongvec_minmax);
5509 + set_table_entry(&table[2], "busy_idx", &sd->busy_idx,
5510 + sizeof(int), 0644, proc_dointvec_minmax);
5511 + set_table_entry(&table[3], "idle_idx", &sd->idle_idx,
5512 + sizeof(int), 0644, proc_dointvec_minmax);
5513 + set_table_entry(&table[4], "newidle_idx", &sd->newidle_idx,
5514 + sizeof(int), 0644, proc_dointvec_minmax);
5515 + set_table_entry(&table[5], "wake_idx", &sd->wake_idx,
5516 + sizeof(int), 0644, proc_dointvec_minmax);
5517 + set_table_entry(&table[6], "forkexec_idx", &sd->forkexec_idx,
5518 + sizeof(int), 0644, proc_dointvec_minmax);
5519 + set_table_entry(&table[7], "busy_factor", &sd->busy_factor,
5520 + sizeof(int), 0644, proc_dointvec_minmax);
5521 + set_table_entry(&table[8], "imbalance_pct", &sd->imbalance_pct,
5522 + sizeof(int), 0644, proc_dointvec_minmax);
5523 + set_table_entry(&table[9], "cache_nice_tries",
5524 + &sd->cache_nice_tries,
5525 + sizeof(int), 0644, proc_dointvec_minmax);
5526 + set_table_entry(&table[10], "flags", &sd->flags,
5527 + sizeof(int), 0644, proc_dointvec_minmax);
5528 + /* &table[11] is terminator */
5533 +static ctl_table *sd_alloc_ctl_cpu_table(int cpu)
5535 + struct ctl_table *entry, *table;
5536 + struct sched_domain *sd;
5537 + int domain_num = 0, i;
5540 + for_each_domain(cpu, sd)
5542 + entry = table = sd_alloc_ctl_entry(domain_num + 1);
5543 + if (table == NULL)
5547 + for_each_domain(cpu, sd) {
5548 + snprintf(buf, 32, "domain%d", i);
5549 + entry->procname = kstrdup(buf, GFP_KERNEL);
5550 + entry->mode = 0555;
5551 + entry->child = sd_alloc_ctl_domain_table(sd);
5558 +static struct ctl_table_header *sd_sysctl_header;
5559 +static void register_sched_domain_sysctl(void)
5561 + int i, cpu_num = num_online_cpus();
5562 + struct ctl_table *entry = sd_alloc_ctl_entry(cpu_num + 1);
5565 + WARN_ON(sd_ctl_dir[0].child);
5566 + sd_ctl_dir[0].child = entry;
5568 + if (entry == NULL)
5571 + for_each_online_cpu(i) {
5572 + snprintf(buf, 32, "cpu%d", i);
5573 + entry->procname = kstrdup(buf, GFP_KERNEL);
5574 + entry->mode = 0555;
5575 + entry->child = sd_alloc_ctl_cpu_table(i);
5579 + WARN_ON(sd_sysctl_header);
5580 + sd_sysctl_header = register_sysctl_table(sd_ctl_root);
5583 +/* may be called multiple times per register */
5584 +static void unregister_sched_domain_sysctl(void)
5586 + if (sd_sysctl_header)
5587 + unregister_sysctl_table(sd_sysctl_header);
5588 + sd_sysctl_header = NULL;
5589 + if (sd_ctl_dir[0].child)
5590 + sd_free_ctl_entry(&sd_ctl_dir[0].child);
5593 +static void register_sched_domain_sysctl(void)
5596 +static void unregister_sched_domain_sysctl(void)
5601 +static void set_rq_online(struct rq *rq)
5603 + if (!rq->online) {
5604 + cpu_set(cpu_of(rq), rq->rd->online);
5609 +static void set_rq_offline(struct rq *rq)
5612 + cpu_clear(cpu_of(rq), rq->rd->online);
5617 +#ifdef CONFIG_HOTPLUG_CPU
5619 + * This cpu is going down, so walk over the tasklist and find tasks that can
5620 + * only run on this cpu and remove their affinity. Store their affinity mask
5621 + * in unplugged_mask so it can be restored once their correct cpu is online. No
5622 + * need to do anything special since they'll just move on next reschedule if
5623 + * they're running.
5625 +static void remove_cpu(unsigned long cpu)
5627 + struct task_struct *p, *t;
5629 + read_lock(&tasklist_lock);
5631 + do_each_thread(t, p) {
5632 + cpumask_t cpus_remaining;
5634 + cpus_and(cpus_remaining, p->cpus_allowed, cpu_online_map);
5635 + cpu_clear(cpu, cpus_remaining);
5636 + if (cpus_empty(cpus_remaining)) {
5637 + p->unplugged_mask = p->cpus_allowed;
5638 + p->cpus_allowed = cpu_possible_map;
5640 + } while_each_thread(t, p);
5642 + read_unlock(&tasklist_lock);
5646 + * This cpu is coming up so add it to the cpus_allowed.
5648 +static void add_cpu(unsigned long cpu)
5650 + struct task_struct *p, *t;
5652 + read_lock(&tasklist_lock);
5654 + do_each_thread(t, p) {
5655 + /* Have we taken all the cpus from the unplugged_mask back */
5656 + if (cpus_empty(p->unplugged_mask))
5659 +		/* Was this cpu in the unplugged_mask? */
5660 + if (cpu_isset(cpu, p->unplugged_mask)) {
5661 + cpu_set(cpu, p->cpus_allowed);
5662 + if (cpus_subset(p->unplugged_mask, p->cpus_allowed)) {
5664 + * Have we set more than the unplugged_mask?
5665 + * If so, that means we have remnants set from
5666 + * the unplug/plug cycle and need to remove
5667 + * them. Then clear the unplugged_mask as we've
5668 + * set all the cpus back.
5670 + p->cpus_allowed = p->unplugged_mask;
5671 + cpus_clear(p->unplugged_mask);
5674 + } while_each_thread(t, p);
5676 + read_unlock(&tasklist_lock);
5679 +static void add_cpu(unsigned long cpu)
5685 + * migration_call - callback that gets triggered when a CPU is added.
5687 +static int __cpuinit
5688 +migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
5690 + struct task_struct *idle;
5691 + int cpu = (long)hcpu;
5692 + unsigned long flags;
5697 + case CPU_UP_PREPARE:
5698 + case CPU_UP_PREPARE_FROZEN:
5702 + case CPU_ONLINE_FROZEN:
5703 + /* Update our root-domain */
5705 + grq_lock_irqsave(&flags);
5707 + BUG_ON(!cpu_isset(cpu, rq->rd->span));
5709 + set_rq_online(rq);
5712 + grq_unlock_irqrestore(&flags);
5715 +#ifdef CONFIG_HOTPLUG_CPU
5716 + case CPU_UP_CANCELED:
5717 + case CPU_UP_CANCELED_FROZEN:
5721 + case CPU_DEAD_FROZEN:
5722 + cpuset_lock(); /* around calls to cpuset_cpus_allowed_lock() */
5725 + /* Idle task back to normal (off runqueue, low prio) */
5728 + return_task(idle, 1);
5729 + idle->static_prio = MAX_PRIO;
5730 + __setscheduler(idle, rq, SCHED_NORMAL, 0);
5731 + idle->prio = PRIO_LIMIT;
5732 + set_rq_task(rq, idle);
5733 + update_rq_clock(rq);
5739 + case CPU_DYING_FROZEN:
5741 + grq_lock_irqsave(&flags);
5743 + BUG_ON(!cpu_isset(cpu, rq->rd->span));
5744 + set_rq_offline(rq);
5746 + grq_unlock_irqrestore(&flags);
5753 +/* Register at highest priority so that task migration (migrate_all_tasks)
5754 + * happens before everything else.
5756 +static struct notifier_block __cpuinitdata migration_notifier = {
5757 + .notifier_call = migration_call,
5761 +int __init migration_init(void)
5763 + void *cpu = (void *)(long)smp_processor_id();
5766 + /* Start one for the boot CPU: */
5767 + err = migration_call(&migration_notifier, CPU_UP_PREPARE, cpu);
5768 + BUG_ON(err == NOTIFY_BAD);
5769 + migration_call(&migration_notifier, CPU_ONLINE, cpu);
5770 + register_cpu_notifier(&migration_notifier);
5774 +early_initcall(migration_init);
5778 + * sched_domains_mutex serialises calls to arch_init_sched_domains,
5779 + * detach_destroy_domains and partition_sched_domains.
5781 +static DEFINE_MUTEX(sched_domains_mutex);
5785 +#ifdef CONFIG_SCHED_DEBUG
5787 +static inline const char *sd_level_to_string(enum sched_domain_level lvl)
5792 + case SD_LV_SIBLING:
5800 + case SD_LV_ALLNODES:
5801 + return "ALLNODES";
5809 +static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level,
5810 + cpumask_t *groupmask)
5812 + struct sched_group *group = sd->groups;
5815 + cpulist_scnprintf(str, sizeof(str), sd->span);
5816 + cpus_clear(*groupmask);
5818 + printk(KERN_DEBUG "%*s domain %d: ", level, "", level);
5820 + if (!(sd->flags & SD_LOAD_BALANCE)) {
5821 + printk("does not load-balance\n");
5823 + printk(KERN_ERR "ERROR: !SD_LOAD_BALANCE domain"
5828 + printk(KERN_CONT "span %s level %s\n",
5829 + str, sd_level_to_string(sd->level));
5831 + if (!cpu_isset(cpu, sd->span)) {
5832 + printk(KERN_ERR "ERROR: domain->span does not contain "
5835 + if (!cpu_isset(cpu, group->cpumask)) {
5836 + printk(KERN_ERR "ERROR: domain->groups does not contain"
5840 + printk(KERN_DEBUG "%*s groups:", level + 1, "");
5844 + printk(KERN_ERR "ERROR: group is NULL\n");
5848 + if (!group->__cpu_power) {
5849 + printk(KERN_CONT "\n");
5850 + printk(KERN_ERR "ERROR: domain->cpu_power not "
5855 + if (!cpus_weight(group->cpumask)) {
5856 + printk(KERN_CONT "\n");
5857 + printk(KERN_ERR "ERROR: empty group\n");
5861 + if (cpus_intersects(*groupmask, group->cpumask)) {
5862 + printk(KERN_CONT "\n");
5863 + printk(KERN_ERR "ERROR: repeated CPUs\n");
5867 + cpus_or(*groupmask, *groupmask, group->cpumask);
5869 + cpulist_scnprintf(str, sizeof(str), group->cpumask);
5870 + printk(KERN_CONT " %s", str);
5872 + group = group->next;
5873 + } while (group != sd->groups);
5874 + printk(KERN_CONT "\n");
5876 + if (!cpus_equal(sd->span, *groupmask))
5877 + printk(KERN_ERR "ERROR: groups don't span domain->span\n");
5879 + if (sd->parent && !cpus_subset(*groupmask, sd->parent->span))
5880 + printk(KERN_ERR "ERROR: parent span is not a superset "
5881 + "of domain->span\n");
5885 +static void sched_domain_debug(struct sched_domain *sd, int cpu)
5887 + cpumask_t *groupmask;
5891 + printk(KERN_DEBUG "CPU%d attaching NULL sched-domain.\n", cpu);
5895 + printk(KERN_DEBUG "CPU%d attaching sched-domain:\n", cpu);
5897 + groupmask = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
5899 + printk(KERN_DEBUG "Cannot load-balance (out of memory)\n");
5904 + if (sched_domain_debug_one(sd, cpu, level, groupmask))
5913 +#else /* !CONFIG_SCHED_DEBUG */
5914 +# define sched_domain_debug(sd, cpu) do { } while (0)
5915 +#endif /* CONFIG_SCHED_DEBUG */
5917 +static int sd_degenerate(struct sched_domain *sd)
5919 + if (cpus_weight(sd->span) == 1)
5922 + /* Following flags need at least 2 groups */
5923 + if (sd->flags & (SD_LOAD_BALANCE |
5924 + SD_BALANCE_NEWIDLE |
5927 + SD_SHARE_CPUPOWER |
5928 + SD_SHARE_PKG_RESOURCES)) {
5929 + if (sd->groups != sd->groups->next)
5933 + /* Following flags don't use groups */
5934 + if (sd->flags & (SD_WAKE_IDLE |
5943 +sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
5945 + unsigned long cflags = sd->flags, pflags = parent->flags;
5947 + if (sd_degenerate(parent))
5950 + if (!cpus_equal(sd->span, parent->span))
5953 + /* Does parent contain flags not in child? */
5954 + /* WAKE_BALANCE is a subset of WAKE_AFFINE */
5955 + if (cflags & SD_WAKE_AFFINE)
5956 + pflags &= ~SD_WAKE_BALANCE;
5957 + /* Flags needing groups don't count if only 1 group in parent */
5958 + if (parent->groups == parent->groups->next) {
5959 + pflags &= ~(SD_LOAD_BALANCE |
5960 + SD_BALANCE_NEWIDLE |
5963 + SD_SHARE_CPUPOWER |
5964 + SD_SHARE_PKG_RESOURCES);
5966 + if (~cflags & pflags)
5972 +static void rq_attach_root(struct rq *rq, struct root_domain *rd)
5974 + unsigned long flags;
5976 + grq_lock_irqsave(&flags);
5979 + struct root_domain *old_rd = rq->rd;
5981 + if (cpu_isset(cpu_of(rq), old_rd->online))
5982 + set_rq_offline(rq);
5984 + cpu_clear(cpu_of(rq), old_rd->span);
5986 + if (atomic_dec_and_test(&old_rd->refcount))
5990 + atomic_inc(&rd->refcount);
5993 + cpu_set(cpu_of(rq), rd->span);
5994 + if (cpu_isset(cpu_of(rq), cpu_online_map))
5995 + set_rq_online(rq);
5997 + grq_unlock_irqrestore(&flags);
6000 +static void init_rootdomain(struct root_domain *rd)
6002 + memset(rd, 0, sizeof(*rd));
6004 + cpus_clear(rd->span);
6005 + cpus_clear(rd->online);
6008 +static void init_defrootdomain(void)
6010 + init_rootdomain(&def_root_domain);
6012 + atomic_set(&def_root_domain.refcount, 1);
6015 +static struct root_domain *alloc_rootdomain(void)
6017 + struct root_domain *rd;
6019 + rd = kmalloc(sizeof(*rd), GFP_KERNEL);
6023 + init_rootdomain(rd);
6029 + * Attach the domain 'sd' to 'cpu' as its base domain. Callers must
6030 + * hold the hotplug lock.
6033 +cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
6035 + struct rq *rq = cpu_rq(cpu);
6036 + struct sched_domain *tmp;
6038 + /* Remove the sched domains which do not contribute to scheduling. */
6039 + for (tmp = sd; tmp; tmp = tmp->parent) {
6040 + struct sched_domain *parent = tmp->parent;
6043 + if (sd_parent_degenerate(tmp, parent)) {
6044 + tmp->parent = parent->parent;
6045 + if (parent->parent)
6046 + parent->parent->child = tmp;
6050 + if (sd && sd_degenerate(sd)) {
6056 + sched_domain_debug(sd, cpu);
6058 + rq_attach_root(rq, rd);
6059 + rcu_assign_pointer(rq->sd, sd);
6062 +/* cpus with isolated domains */
6063 +static cpumask_t cpu_isolated_map = CPU_MASK_NONE;
6065 +/* Setup the mask of cpus configured for isolated domains */
6066 +static int __init isolated_cpu_setup(char *str)
6068 + static int __initdata ints[NR_CPUS];
6071 + str = get_options(str, ARRAY_SIZE(ints), ints);
6072 + cpus_clear(cpu_isolated_map);
6073 + for (i = 1; i <= ints[0]; i++)
6074 + if (ints[i] < NR_CPUS)
6075 + cpu_set(ints[i], cpu_isolated_map);
6079 +__setup("isolcpus=", isolated_cpu_setup);
6082 + * init_sched_build_groups takes the cpumask we wish to span, and a pointer
6083 + * to a function which identifies what group (along with sched group) a CPU
6084 + * belongs to. The return value of group_fn must be >= 0 and < NR_CPUS
6085 + * (due to the fact that we keep track of groups covered with a cpumask_t).
6087 + * init_sched_build_groups will build a circular linked list of the groups
6088 + * covered by the given span, and will set each group's ->cpumask correctly,
6089 + * and ->cpu_power to 0.
6092 +init_sched_build_groups(const cpumask_t *span, const cpumask_t *cpu_map,
6093 + int (*group_fn)(int cpu, const cpumask_t *cpu_map,
6094 + struct sched_group **sg,
6095 + cpumask_t *tmpmask),
6096 + cpumask_t *covered, cpumask_t *tmpmask)
6098 + struct sched_group *first = NULL, *last = NULL;
6101 + cpus_clear(*covered);
6103 + for_each_cpu_mask_nr(i, *span) {
6104 + struct sched_group *sg;
6105 + int group = group_fn(i, cpu_map, &sg, tmpmask);
6108 + if (cpu_isset(i, *covered))
6111 + cpus_clear(sg->cpumask);
6112 + sg->__cpu_power = 0;
6114 + for_each_cpu_mask_nr(j, *span) {
6115 + if (group_fn(j, cpu_map, NULL, tmpmask) != group)
6118 + cpu_set(j, *covered);
6119 + cpu_set(j, sg->cpumask);
6127 + last->next = first;
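For illustration, a minimal group_fn (hypothetical, not part of this patch) satisfying the contract above could map every CPU to its own singleton group; init_sched_build_groups() would then link one group per spanned CPU into the circular list:

	/* Hypothetical example only -- not part of the patch. */
	static struct sched_group example_groups[NR_CPUS];

	static int example_group_fn(int cpu, const cpumask_t *cpu_map,
				    struct sched_group **sg, cpumask_t *unused)
	{
		if (sg)
			*sg = &example_groups[cpu];
		return cpu;	/* >= 0 and < NR_CPUS, as required */
	}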
6130 +#define SD_NODES_PER_DOMAIN 16
6135 + * find_next_best_node - find the next node to include in a sched_domain
6136 + * @node: node whose sched_domain we're building
6137 + * @used_nodes: nodes already in the sched_domain
6139 + * Find the next node to include in a given scheduling domain. Simply
6140 + * finds the closest node not already in the @used_nodes map.
6142 + * Should use nodemask_t.
6144 +static int find_next_best_node(int node, nodemask_t *used_nodes)
6146 + int i, n, val, min_val, best_node = 0;
6148 + min_val = INT_MAX;
6150 + for (i = 0; i < nr_node_ids; i++) {
6151 + /* Start at @node */
6152 + n = (node + i) % nr_node_ids;
6154 + if (!nr_cpus_node(n))
6157 + /* Skip already used nodes */
6158 + if (node_isset(n, *used_nodes))
6161 + /* Simple min distance search */
6162 + val = node_distance(node, n);
6164 + if (val < min_val) {
6170 + node_set(best_node, *used_nodes);
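The search is a plain minimum over the node_distance() row, offset so scanning starts from @node. A standalone sketch (userspace, with a made-up 4-node distance table) shows the selection order:

	#include <limits.h>
	#include <stdio.h>

	#define NODES 4
	/* hypothetical node_distance() table */
	static const int dist[NODES][NODES] = {
		{  0, 20, 10, 40 },
		{ 20,  0, 30, 20 },
		{ 10, 30,  0, 20 },
		{ 40, 20, 20,  0 },
	};

	static int next_best(int node, int *used)
	{
		int i, min_val = INT_MAX, best = 0;

		for (i = 0; i < NODES; i++) {
			int n = (node + i) % NODES;	/* start at @node */

			if (used[n])			/* skip already used nodes */
				continue;
			if (dist[node][n] < min_val) {
				min_val = dist[node][n];
				best = n;
			}
		}
		used[best] = 1;
		return best;
	}

	int main(void)
	{
		int used[NODES] = { 1, 0, 0, 0 };	/* node 0 already in the domain */
		int i;

		/* picks 2 (distance 10), then 1 (20), then 3 (40) */
		for (i = 1; i < NODES; i++)
			printf("%d\n", next_best(0, used));
		return 0;
	}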
6175 + * sched_domain_node_span - get a cpumask for a node's sched_domain
6176 + * @node: node whose cpumask we're constructing
6177 + * @span: resulting cpumask
6179 + * Given a node, construct a good cpumask for its sched_domain to span. It
6180 + * should be one that prevents unnecessary balancing, but also spreads tasks
6183 +static void sched_domain_node_span(int node, cpumask_t *span)
6185 + nodemask_t used_nodes;
6186 + node_to_cpumask_ptr(nodemask, node);
6189 + cpus_clear(*span);
6190 + nodes_clear(used_nodes);
6192 + cpus_or(*span, *span, *nodemask);
6193 + node_set(node, used_nodes);
6195 + for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
6196 + int next_node = find_next_best_node(node, &used_nodes);
6198 + node_to_cpumask_ptr_next(nodemask, next_node);
6199 + cpus_or(*span, *span, *nodemask);
6202 +#endif /* CONFIG_NUMA */
6204 +int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
6207 + * SMT sched-domains:
6209 +#ifdef CONFIG_SCHED_SMT
6210 +static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
6211 +static DEFINE_PER_CPU(struct sched_group, sched_group_cpus);
6214 +cpu_to_cpu_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6215 + cpumask_t *unused)
6218 + *sg = &per_cpu(sched_group_cpus, cpu);
6221 +#endif /* CONFIG_SCHED_SMT */
6224 + * multi-core sched-domains:
6226 +#ifdef CONFIG_SCHED_MC
6227 +static DEFINE_PER_CPU(struct sched_domain, core_domains);
6228 +static DEFINE_PER_CPU(struct sched_group, sched_group_core);
6229 +#endif /* CONFIG_SCHED_MC */
6231 +#if defined(CONFIG_SCHED_MC) && defined(CONFIG_SCHED_SMT)
6233 +cpu_to_core_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6238 + *mask = per_cpu(cpu_sibling_map, cpu);
6239 + cpus_and(*mask, *mask, *cpu_map);
6240 + group = first_cpu(*mask);
6242 + *sg = &per_cpu(sched_group_core, group);
6245 +#elif defined(CONFIG_SCHED_MC)
6247 +cpu_to_core_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6248 + cpumask_t *unused)
6251 + *sg = &per_cpu(sched_group_core, cpu);
6256 +static DEFINE_PER_CPU(struct sched_domain, phys_domains);
6257 +static DEFINE_PER_CPU(struct sched_group, sched_group_phys);
6260 +cpu_to_phys_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6264 +#ifdef CONFIG_SCHED_MC
6265 + *mask = cpu_coregroup_map(cpu);
6266 + cpus_and(*mask, *mask, *cpu_map);
6267 + group = first_cpu(*mask);
6268 +#elif defined(CONFIG_SCHED_SMT)
6269 + *mask = per_cpu(cpu_sibling_map, cpu);
6270 + cpus_and(*mask, *mask, *cpu_map);
6271 + group = first_cpu(*mask);
6276 + *sg = &per_cpu(sched_group_phys, group);
6282 + * The init_sched_build_groups can't handle what we want to do with node
6283 + * groups, so roll our own. Now each node has its own list of groups which
6284 + * gets dynamically allocated.
6286 +static DEFINE_PER_CPU(struct sched_domain, node_domains);
6287 +static struct sched_group ***sched_group_nodes_bycpu;
6289 +static DEFINE_PER_CPU(struct sched_domain, allnodes_domains);
6290 +static DEFINE_PER_CPU(struct sched_group, sched_group_allnodes);
6292 +static int cpu_to_allnodes_group(int cpu, const cpumask_t *cpu_map,
6293 + struct sched_group **sg, cpumask_t *nodemask)
6297 + *nodemask = node_to_cpumask(cpu_to_node(cpu));
6298 + cpus_and(*nodemask, *nodemask, *cpu_map);
6299 + group = first_cpu(*nodemask);
6302 + *sg = &per_cpu(sched_group_allnodes, group);
6306 +static void init_numa_sched_groups_power(struct sched_group *group_head)
6308 + struct sched_group *sg = group_head;
6314 + for_each_cpu_mask_nr(j, sg->cpumask) {
6315 + struct sched_domain *sd;
6317 + sd = &per_cpu(phys_domains, j);
6318 + if (j != first_cpu(sd->groups->cpumask)) {
6320 + * Only add "power" once for each
6321 + * physical package.
6326 + sg_inc_cpu_power(sg, sd->groups->__cpu_power);
6329 + } while (sg != group_head);
6331 +#endif /* CONFIG_NUMA */
6334 +/* Free memory allocated for various sched_group structures */
6335 +static void free_sched_groups(const cpumask_t *cpu_map, cpumask_t *nodemask)
6339 + for_each_cpu_mask_nr(cpu, *cpu_map) {
6340 + struct sched_group **sched_group_nodes
6341 + = sched_group_nodes_bycpu[cpu];
6343 + if (!sched_group_nodes)
6346 + for (i = 0; i < nr_node_ids; i++) {
6347 + struct sched_group *oldsg, *sg = sched_group_nodes[i];
6349 + *nodemask = node_to_cpumask(i);
6350 + cpus_and(*nodemask, *nodemask, *cpu_map);
6351 + if (cpus_empty(*nodemask))
6361 + if (oldsg != sched_group_nodes[i])
6364 + kfree(sched_group_nodes);
6365 + sched_group_nodes_bycpu[cpu] = NULL;
6368 +#else /* !CONFIG_NUMA */
6369 +static void free_sched_groups(const cpumask_t *cpu_map, cpumask_t *nodemask)
6372 +#endif /* CONFIG_NUMA */
6375 + * Initialise sched groups cpu_power.
6377 + * cpu_power indicates the capacity of sched group, which is used while
6378 + * distributing the load between different sched groups in a sched domain.
6379 + * Typically cpu_power for all the groups in a sched domain will be the same unless
6380 + * there are asymmetries in the topology. If there are asymmetries, the group
6381 + * having more cpu_power will pick up more load compared to the group having
6384 + * cpu_power will be a multiple of SCHED_LOAD_SCALE. This multiple represents
6385 + * the maximum number of tasks a group can handle in the presence of other idle
6386 + * or lightly loaded groups in the same sched domain.
6388 +static void init_sched_groups_power(int cpu, struct sched_domain *sd)
6390 + struct sched_domain *child;
6391 + struct sched_group *group;
6393 + WARN_ON(!sd || !sd->groups);
6395 + if (cpu != first_cpu(sd->groups->cpumask))
6398 + child = sd->child;
6400 + sd->groups->__cpu_power = 0;
6403 + * For perf policy, if the groups in child domain share resources
6404 + * (for example cores sharing some portions of the cache hierarchy
6405 + * or SMT), then set this domain groups cpu_power such that each group
6406 + * can handle only one task, when there are other idle groups in the
6407 + * same sched domain.
6409 + if (!child || (!(sd->flags & SD_POWERSAVINGS_BALANCE) &&
6411 + (SD_SHARE_CPUPOWER | SD_SHARE_PKG_RESOURCES)))) {
6412 + sg_inc_cpu_power(sd->groups, SCHED_LOAD_SCALE);
6417 + * add cpu_power of each child group to this groups cpu_power
6419 + group = child->groups;
6421 + sg_inc_cpu_power(sd->groups, group->__cpu_power);
6422 + group = group->next;
6423 + } while (group != child->groups);
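A worked example may help (topology, and the usual SCHED_LOAD_SCALE of 1024, assumed for illustration):

	/*
	 * - SMT level: no child domain, so each sibling group is rated at 1024.
	 * - Core level above SMT: the child sets SD_SHARE_CPUPOWER, so under the
	 *   default performance policy each core group is also rated at just
	 *   1024 -- a two-thread core handles one task while sibling groups in
	 *   the same domain are idle.
	 * - Node level above independent physical CPUs: child group powers are
	 *   summed instead, e.g. 4 packages * 1024 = 4096.
	 */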
6427 + * Initialisers for schedule domains
6428 + * Non-inlined to reduce accumulated stack pressure in build_sched_domains()
6431 +#define SD_INIT(sd, type) sd_init_##type(sd)
6432 +#define SD_INIT_FUNC(type) \
6433 +static noinline void sd_init_##type(struct sched_domain *sd) \
6435 + memset(sd, 0, sizeof(*sd)); \
6436 + *sd = SD_##type##_INIT; \
6437 + sd->level = SD_LV_##type; \
6442 + SD_INIT_FUNC(ALLNODES)
6443 + SD_INIT_FUNC(NODE)
6445 +#ifdef CONFIG_SCHED_SMT
6446 + SD_INIT_FUNC(SIBLING)
6448 +#ifdef CONFIG_SCHED_MC
6453 + * To minimize stack usage, kmalloc room for cpumasks and share the
6454 + * space as the usage in build_sched_domains() dictates. Used only
6455 + * if the amount of space is significant.
6458 + cpumask_t tmpmask; /* make this one first */
6460 + cpumask_t nodemask;
6461 + cpumask_t this_sibling_map;
6462 + cpumask_t this_core_map;
6464 + cpumask_t send_covered;
6467 + cpumask_t domainspan;
6468 + cpumask_t covered;
6469 + cpumask_t notcovered;
6474 +#define SCHED_CPUMASK_ALLOC 1
6475 +#define SCHED_CPUMASK_FREE(v) kfree(v)
6476 +#define SCHED_CPUMASK_DECLARE(v) struct allmasks *v
6478 +#define SCHED_CPUMASK_ALLOC 0
6479 +#define SCHED_CPUMASK_FREE(v)
6480 +#define SCHED_CPUMASK_DECLARE(v) struct allmasks _v, *v = &_v
6483 +#define SCHED_CPUMASK_VAR(v, a) cpumask_t *v = (cpumask_t *) \
6484 + ((unsigned long)(a) + offsetof(struct allmasks, v))
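The SCHED_CPUMASK_VAR() pointer arithmetic can be sketched standalone (userspace, names hypothetical): one allocation backs every scratch mask, and each "variable" is just the base address plus the member's offset:

	#include <stddef.h>
	#include <stdio.h>

	struct allmasks_demo {
		unsigned long tmpmask;
		unsigned long nodemask;
		unsigned long covered;
	};

	#define DEMO_VAR(v, a) unsigned long *v = (unsigned long *) \
			((unsigned long)(a) + offsetof(struct allmasks_demo, v))

	int main(void)
	{
		struct allmasks_demo all;
		DEMO_VAR(nodemask, &all);

		*nodemask = 0xff;
		printf("%lx\n", all.nodemask);	/* prints ff: same storage */
		return 0;
	}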
6486 +static int default_relax_domain_level = -1;
6488 +static int __init setup_relax_domain_level(char *str)
6490 + unsigned long val;
6492 + val = simple_strtoul(str, NULL, 0);
6493 + if (val < SD_LV_MAX)
6494 + default_relax_domain_level = val;
6498 +__setup("relax_domain_level=", setup_relax_domain_level);
6500 +static void set_domain_attribute(struct sched_domain *sd,
6501 + struct sched_domain_attr *attr)
6505 + if (!attr || attr->relax_domain_level < 0) {
6506 + if (default_relax_domain_level < 0)
6509 + request = default_relax_domain_level;
6511 + request = attr->relax_domain_level;
6512 + if (request < sd->level) {
6513 + /* turn off idle balance on this domain */
6514 + sd->flags &= ~(SD_WAKE_IDLE|SD_BALANCE_NEWIDLE);
6516 + /* turn on idle balance on this domain */
6517 + sd->flags |= (SD_WAKE_IDLE_FAR|SD_BALANCE_NEWIDLE);
6522 + * Build sched domains for a given set of cpus and attach the sched domains
6523 + * to the individual cpus
6525 +static int __build_sched_domains(const cpumask_t *cpu_map,
6526 + struct sched_domain_attr *attr)
6529 + struct root_domain *rd;
6530 + SCHED_CPUMASK_DECLARE(allmasks);
6531 + cpumask_t *tmpmask;
6533 + struct sched_group **sched_group_nodes = NULL;
6534 + int sd_allnodes = 0;
6537 + * Allocate the per-node list of sched groups
6539 + sched_group_nodes = kcalloc(nr_node_ids, sizeof(struct sched_group *),
6541 + if (!sched_group_nodes) {
6542 + printk(KERN_WARNING "Can not alloc sched group node list\n");
6547 + rd = alloc_rootdomain();
6549 + printk(KERN_WARNING "Cannot alloc root domain\n");
6551 + kfree(sched_group_nodes);
6556 +#if SCHED_CPUMASK_ALLOC
6557 + /* get space for all scratch cpumask variables */
6558 + allmasks = kmalloc(sizeof(*allmasks), GFP_KERNEL);
6560 + printk(KERN_WARNING "Cannot alloc cpumask array\n");
6563 + kfree(sched_group_nodes);
6568 + tmpmask = (cpumask_t *)allmasks;
6572 + sched_group_nodes_bycpu[first_cpu(*cpu_map)] = sched_group_nodes;
6576 + * Set up domains for cpus specified by the cpu_map.
6578 + for_each_cpu_mask_nr(i, *cpu_map) {
6579 + struct sched_domain *sd = NULL, *p;
6580 + SCHED_CPUMASK_VAR(nodemask, allmasks);
6582 + *nodemask = node_to_cpumask(cpu_to_node(i));
6583 + cpus_and(*nodemask, *nodemask, *cpu_map);
6586 + if (cpus_weight(*cpu_map) >
6587 + SD_NODES_PER_DOMAIN*cpus_weight(*nodemask)) {
6588 + sd = &per_cpu(allnodes_domains, i);
6589 + SD_INIT(sd, ALLNODES);
6590 + set_domain_attribute(sd, attr);
6591 + sd->span = *cpu_map;
6592 + cpu_to_allnodes_group(i, cpu_map, &sd->groups, tmpmask);
6598 + sd = &per_cpu(node_domains, i);
6599 + SD_INIT(sd, NODE);
6600 + set_domain_attribute(sd, attr);
6601 + sched_domain_node_span(cpu_to_node(i), &sd->span);
6605 + cpus_and(sd->span, sd->span, *cpu_map);
6609 + sd = &per_cpu(phys_domains, i);
6611 + set_domain_attribute(sd, attr);
6612 + sd->span = *nodemask;
6616 + cpu_to_phys_group(i, cpu_map, &sd->groups, tmpmask);
6618 +#ifdef CONFIG_SCHED_MC
6620 + sd = &per_cpu(core_domains, i);
6622 + set_domain_attribute(sd, attr);
6623 + sd->span = cpu_coregroup_map(i);
6624 + cpus_and(sd->span, sd->span, *cpu_map);
6627 + cpu_to_core_group(i, cpu_map, &sd->groups, tmpmask);
6630 +#ifdef CONFIG_SCHED_SMT
6632 + sd = &per_cpu(cpu_domains, i);
6633 + SD_INIT(sd, SIBLING);
6634 + set_domain_attribute(sd, attr);
6635 + sd->span = per_cpu(cpu_sibling_map, i);
6636 + cpus_and(sd->span, sd->span, *cpu_map);
6639 + cpu_to_cpu_group(i, cpu_map, &sd->groups, tmpmask);
6643 +#ifdef CONFIG_SCHED_SMT
6644 + /* Set up CPU (sibling) groups */
6645 + for_each_cpu_mask_nr(i, *cpu_map) {
6646 + SCHED_CPUMASK_VAR(this_sibling_map, allmasks);
6647 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6649 + *this_sibling_map = per_cpu(cpu_sibling_map, i);
6650 + cpus_and(*this_sibling_map, *this_sibling_map, *cpu_map);
6651 + if (i != first_cpu(*this_sibling_map))
6654 + init_sched_build_groups(this_sibling_map, cpu_map,
6655 + &cpu_to_cpu_group,
6656 + send_covered, tmpmask);
6660 +#ifdef CONFIG_SCHED_MC
6661 + /* Set up multi-core groups */
6662 + for_each_cpu_mask_nr(i, *cpu_map) {
6663 + SCHED_CPUMASK_VAR(this_core_map, allmasks);
6664 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6666 + *this_core_map = cpu_coregroup_map(i);
6667 + cpus_and(*this_core_map, *this_core_map, *cpu_map);
6668 + if (i != first_cpu(*this_core_map))
6671 + init_sched_build_groups(this_core_map, cpu_map,
6672 + &cpu_to_core_group,
6673 + send_covered, tmpmask);
6677 + /* Set up physical groups */
6678 + for (i = 0; i < nr_node_ids; i++) {
6679 + SCHED_CPUMASK_VAR(nodemask, allmasks);
6680 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6682 + *nodemask = node_to_cpumask(i);
6683 + cpus_and(*nodemask, *nodemask, *cpu_map);
6684 + if (cpus_empty(*nodemask))
6687 + init_sched_build_groups(nodemask, cpu_map,
6688 + &cpu_to_phys_group,
6689 + send_covered, tmpmask);
6693 + /* Set up node groups */
6694 + if (sd_allnodes) {
6695 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6697 + init_sched_build_groups(cpu_map, cpu_map,
6698 + &cpu_to_allnodes_group,
6699 + send_covered, tmpmask);
6702 + for (i = 0; i < nr_node_ids; i++) {
6703 + /* Set up node groups */
6704 + struct sched_group *sg, *prev;
6705 + SCHED_CPUMASK_VAR(nodemask, allmasks);
6706 + SCHED_CPUMASK_VAR(domainspan, allmasks);
6707 + SCHED_CPUMASK_VAR(covered, allmasks);
6710 + *nodemask = node_to_cpumask(i);
6711 + cpus_clear(*covered);
6713 + cpus_and(*nodemask, *nodemask, *cpu_map);
6714 + if (cpus_empty(*nodemask)) {
6715 + sched_group_nodes[i] = NULL;
6719 + sched_domain_node_span(i, domainspan);
6720 + cpus_and(*domainspan, *domainspan, *cpu_map);
6722 + sg = kmalloc_node(sizeof(struct sched_group), GFP_KERNEL, i);
6724 + printk(KERN_WARNING "Can not alloc domain group for "
6728 + sched_group_nodes[i] = sg;
6729 + for_each_cpu_mask_nr(j, *nodemask) {
6730 + struct sched_domain *sd;
6732 + sd = &per_cpu(node_domains, j);
6735 + sg->__cpu_power = 0;
6736 + sg->cpumask = *nodemask;
6738 + cpus_or(*covered, *covered, *nodemask);
6741 + for (j = 0; j < nr_node_ids; j++) {
6742 + SCHED_CPUMASK_VAR(notcovered, allmasks);
6743 + int n = (i + j) % nr_node_ids;
6744 + node_to_cpumask_ptr(pnodemask, n);
6746 + cpus_complement(*notcovered, *covered);
6747 + cpus_and(*tmpmask, *notcovered, *cpu_map);
6748 + cpus_and(*tmpmask, *tmpmask, *domainspan);
6749 + if (cpus_empty(*tmpmask))
6752 + cpus_and(*tmpmask, *tmpmask, *pnodemask);
6753 + if (cpus_empty(*tmpmask))
6756 + sg = kmalloc_node(sizeof(struct sched_group),
6759 + printk(KERN_WARNING
6760 + "Can not alloc domain group for node %d\n", j);
6763 + sg->__cpu_power = 0;
6764 + sg->cpumask = *tmpmask;
6765 + sg->next = prev->next;
6766 + cpus_or(*covered, *covered, *tmpmask);
6773 + /* Calculate CPU power for physical packages and nodes */
6774 +#ifdef CONFIG_SCHED_SMT
6775 + for_each_cpu_mask_nr(i, *cpu_map) {
6776 + struct sched_domain *sd = &per_cpu(cpu_domains, i);
6778 + init_sched_groups_power(i, sd);
6781 +#ifdef CONFIG_SCHED_MC
6782 + for_each_cpu_mask_nr(i, *cpu_map) {
6783 + struct sched_domain *sd = &per_cpu(core_domains, i);
6785 + init_sched_groups_power(i, sd);
6789 + for_each_cpu_mask_nr(i, *cpu_map) {
6790 + struct sched_domain *sd = &per_cpu(phys_domains, i);
6792 + init_sched_groups_power(i, sd);
6796 + for (i = 0; i < nr_node_ids; i++)
6797 + init_numa_sched_groups_power(sched_group_nodes[i]);
6799 + if (sd_allnodes) {
6800 + struct sched_group *sg;
6802 + cpu_to_allnodes_group(first_cpu(*cpu_map), cpu_map, &sg,
6804 + init_numa_sched_groups_power(sg);
6808 + /* Attach the domains */
6809 + for_each_cpu_mask_nr(i, *cpu_map) {
6810 + struct sched_domain *sd;
6811 +#ifdef CONFIG_SCHED_SMT
6812 + sd = &per_cpu(cpu_domains, i);
6813 +#elif defined(CONFIG_SCHED_MC)
6814 + sd = &per_cpu(core_domains, i);
6816 + sd = &per_cpu(phys_domains, i);
6818 + cpu_attach_domain(sd, rd, i);
6821 + SCHED_CPUMASK_FREE((void *)allmasks);
6826 + free_sched_groups(cpu_map, tmpmask);
6827 + SCHED_CPUMASK_FREE((void *)allmasks);
6832 +static int build_sched_domains(const cpumask_t *cpu_map)
6834 + return __build_sched_domains(cpu_map, NULL);
6837 +static cpumask_t *doms_cur; /* current sched domains */
6838 +static int ndoms_cur; /* number of sched domains in 'doms_cur' */
6839 +static struct sched_domain_attr *dattr_cur;
6840 +				/* attributes of custom domains in 'doms_cur' */
6843 + * Special case: If a kmalloc of a doms_cur partition (array of
6844 + * cpumask_t) fails, then fallback to a single sched domain,
6845 + * as determined by the single cpumask_t fallback_doms.
6847 +static cpumask_t fallback_doms;
6849 +void __attribute__((weak)) arch_update_cpu_topology(void)
6854 + * Set up scheduler domains and groups. Callers must hold the hotplug lock.
6855 + * For now this just excludes isolated cpus, but could be used to
6856 + * exclude other special cases in the future.
6858 +static int arch_init_sched_domains(const cpumask_t *cpu_map)
6862 + arch_update_cpu_topology();
6864 + doms_cur = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
6866 + doms_cur = &fallback_doms;
6867 + cpus_andnot(*doms_cur, *cpu_map, cpu_isolated_map);
6869 + err = build_sched_domains(doms_cur);
6870 + register_sched_domain_sysctl();
6875 +static void arch_destroy_sched_domains(const cpumask_t *cpu_map,
6876 + cpumask_t *tmpmask)
6878 + free_sched_groups(cpu_map, tmpmask);
6882 + * Detach sched domains from a group of cpus specified in cpu_map
6883 + * These cpus will now be attached to the NULL domain
6885 +static void detach_destroy_domains(const cpumask_t *cpu_map)
6887 + cpumask_t tmpmask;
6890 + unregister_sched_domain_sysctl();
6892 + for_each_cpu_mask_nr(i, *cpu_map)
6893 + cpu_attach_domain(NULL, &def_root_domain, i);
6894 + synchronize_sched();
6895 + arch_destroy_sched_domains(cpu_map, &tmpmask);
6898 +/* handle null as "default" */
6899 +static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
6900 + struct sched_domain_attr *new, int idx_new)
6902 + struct sched_domain_attr tmp;
6908 + tmp = SD_ATTR_INIT;
6909 + return !memcmp(cur ? (cur + idx_cur) : &tmp,
6910 + new ? (new + idx_new) : &tmp,
6911 + sizeof(struct sched_domain_attr));
6915 + * Partition sched domains as specified by the 'ndoms_new'
6916 + * cpumasks in the array doms_new[] of cpumasks. This compares
6917 + * doms_new[] to the current sched domain partitioning, doms_cur[].
6918 + * It destroys each deleted domain and builds each new domain.
6920 + * 'doms_new' is an array of cpumask_t's of length 'ndoms_new'.
6921 + * The masks must not intersect (overlap). We set up one
6922 + * sched domain for each mask. CPUs not in any of the cpumasks will
6923 + * not be load balanced. If the same cpumask appears both in the
6924 + * current 'doms_cur' domains and in the new 'doms_new', we can leave
6927 + * The passed in 'doms_new' should be kmalloc'd. This routine takes
6928 + * ownership of it and will kfree it when done with it. If the caller
6929 + * failed the kmalloc call, then it can pass in doms_new == NULL,
6930 + * and partition_sched_domains() will fall back to the single partition
6931 + * 'fallback_doms'; this also forces the domains to be rebuilt.
6933 + * If doms_new==NULL it will be replaced with cpu_online_map.
6934 + * ndoms_new==0 is a special case for destroying existing domains.
6935 + * It will not create the default domain.
6937 + * Call with hotplug lock held
6939 +void partition_sched_domains(int ndoms_new, cpumask_t *doms_new,
6940 + struct sched_domain_attr *dattr_new)
6944 + mutex_lock(&sched_domains_mutex);
6946 + /* always unregister in case we don't destroy any domains */
6947 + unregister_sched_domain_sysctl();
6949 + n = doms_new ? ndoms_new : 0;
6951 + /* Destroy deleted domains */
6952 + for (i = 0; i < ndoms_cur; i++) {
6953 + for (j = 0; j < n; j++) {
6954 + if (cpus_equal(doms_cur[i], doms_new[j])
6955 + && dattrs_equal(dattr_cur, i, dattr_new, j))
6958 + /* no match - a current sched domain not in new doms_new[] */
6959 + detach_destroy_domains(doms_cur + i);
6964 + if (doms_new == NULL) {
6966 + doms_new = &fallback_doms;
6967 + cpus_andnot(doms_new[0], cpu_online_map, cpu_isolated_map);
6971 + /* Build new domains */
6972 + for (i = 0; i < ndoms_new; i++) {
6973 + for (j = 0; j < ndoms_cur; j++) {
6974 + if (cpus_equal(doms_new[i], doms_cur[j])
6975 + && dattrs_equal(dattr_new, i, dattr_cur, j))
6978 + /* no match - add a new doms_new */
6979 + __build_sched_domains(doms_new + i,
6980 + dattr_new ? dattr_new + i : NULL);
6985 + /* Remember the new sched domains */
6986 + if (doms_cur != &fallback_doms)
6988 + kfree(dattr_cur); /* kfree(NULL) is safe */
6989 + doms_cur = doms_new;
6990 + dattr_cur = dattr_new;
6991 + ndoms_cur = ndoms_new;
6993 + register_sched_domain_sysctl();
6995 + mutex_unlock(&sched_domains_mutex);
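A hypothetical caller sketch (CPU numbers assumed) of the contract above: split an 8-CPU machine into two partitions, handing ownership of the kmalloc'd array to partition_sched_domains():

	static void example_two_partitions(void)
	{
		cpumask_t *doms = kmalloc(2 * sizeof(cpumask_t), GFP_KERNEL);
		int i;

		if (!doms)
			return;	/* could also pass NULL to force fallback_doms */
		cpus_clear(doms[0]);
		cpus_clear(doms[1]);
		for (i = 0; i < 4; i++)
			cpu_set(i, doms[0]);	/* CPUs 0-3 */
		for (i = 4; i < 8; i++)
			cpu_set(i, doms[1]);	/* CPUs 4-7 */

		get_online_cpus();	/* "Call with hotplug lock held" */
		partition_sched_domains(2, doms, NULL);
		put_online_cpus();
	}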
6998 +#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
6999 +int arch_reinit_sched_domains(void)
7001 + get_online_cpus();
7003 + /* Destroy domains first to force the rebuild */
7004 + partition_sched_domains(0, NULL, NULL);
7006 + rebuild_sched_domains();
7007 + put_online_cpus();
7012 +static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt)
7016 + if (buf[0] != '0' && buf[0] != '1')
7020 + sched_smt_power_savings = (buf[0] == '1');
7022 + sched_mc_power_savings = (buf[0] == '1');
7024 + ret = arch_reinit_sched_domains();
7026 + return ret ? ret : count;
7029 +#ifdef CONFIG_SCHED_MC
7030 +static ssize_t sched_mc_power_savings_show(struct sysdev_class *class,
7033 + return sprintf(page, "%u\n", sched_mc_power_savings);
7035 +static ssize_t sched_mc_power_savings_store(struct sysdev_class *class,
7036 + const char *buf, size_t count)
7038 + return sched_power_savings_store(buf, count, 0);
7040 +static SYSDEV_CLASS_ATTR(sched_mc_power_savings, 0644,
7041 + sched_mc_power_savings_show,
7042 + sched_mc_power_savings_store);
7045 +#ifdef CONFIG_SCHED_SMT
7046 +static ssize_t sched_smt_power_savings_show(struct sysdev_class *dev,
7049 + return sprintf(page, "%u\n", sched_smt_power_savings);
7051 +static ssize_t sched_smt_power_savings_store(struct sysdev_class *dev,
7052 + const char *buf, size_t count)
7054 + return sched_power_savings_store(buf, count, 1);
7056 +static SYSDEV_CLASS_ATTR(sched_smt_power_savings, 0644,
7057 + sched_smt_power_savings_show,
7058 + sched_smt_power_savings_store);
7061 +int sched_create_sysfs_power_savings_entries(struct sysdev_class *cls)
7065 +#ifdef CONFIG_SCHED_SMT
7066 + if (smt_capable())
7067 + err = sysfs_create_file(&cls->kset.kobj,
7068 + &attr_sched_smt_power_savings.attr);
7070 +#ifdef CONFIG_SCHED_MC
7071 + if (!err && mc_capable())
7072 + err = sysfs_create_file(&cls->kset.kobj,
7073 + &attr_sched_mc_power_savings.attr);
7077 +#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
7079 +#ifndef CONFIG_CPUSETS
7081 + * Add online and remove offline CPUs from the scheduler domains.
7082 + * When cpusets are enabled they take over this function.
7084 +static int update_sched_domains(struct notifier_block *nfb,
7085 + unsigned long action, void *hcpu)
7089 + case CPU_ONLINE_FROZEN:
7091 + case CPU_DEAD_FROZEN:
7092 + partition_sched_domains(1, NULL, NULL);
7096 + return NOTIFY_DONE;
7101 +static int update_runtime(struct notifier_block *nfb,
7102 + unsigned long action, void *hcpu)
7105 + case CPU_DOWN_PREPARE:
7106 + case CPU_DOWN_PREPARE_FROZEN:
7109 + case CPU_DOWN_FAILED:
7110 + case CPU_DOWN_FAILED_FROZEN:
7112 + case CPU_ONLINE_FROZEN:
7116 + return NOTIFY_DONE;
7120 +#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
7122 + * Cheaper version of the below functions in case support for SMT and MC is
7123 + * compiled in but CPUs have no siblings.
7125 +static int sole_cpu_idle(unsigned long cpu)
7127 + return rq_idle(cpu_rq(cpu));
7130 +#ifdef CONFIG_SCHED_SMT
7131 +/* All this CPU's SMT siblings are idle */
7132 +static int siblings_cpu_idle(unsigned long cpu)
7134 + return cpus_subset(cpu_rq(cpu)->smt_siblings,
7135 + grq.cpu_idle_map);
7138 +#ifdef CONFIG_SCHED_MC
7139 +/* All this CPU's shared cache siblings are idle */
7140 +static int cache_cpu_idle(unsigned long cpu)
7142 + return cpus_subset(cpu_rq(cpu)->cache_siblings,
7143 + grq.cpu_idle_map);
7147 +void __init sched_init_smp(void)
7149 + struct sched_domain *sd;
7152 + cpumask_t non_isolated_cpus;
7154 +#if defined(CONFIG_NUMA)
7155 + sched_group_nodes_bycpu = kzalloc(nr_cpu_ids * sizeof(void **),
7157 + BUG_ON(sched_group_nodes_bycpu == NULL);
7159 + get_online_cpus();
7160 + mutex_lock(&sched_domains_mutex);
7161 + arch_init_sched_domains(&cpu_online_map);
7162 + cpus_andnot(non_isolated_cpus, cpu_possible_map, cpu_isolated_map);
7163 + if (cpus_empty(non_isolated_cpus))
7164 + cpu_set(smp_processor_id(), non_isolated_cpus);
7165 + mutex_unlock(&sched_domains_mutex);
7166 + put_online_cpus();
7168 +#ifndef CONFIG_CPUSETS
7169 + /* XXX: Theoretical race here - CPU may be hotplugged now */
7170 + hotcpu_notifier(update_sched_domains, 0);
7173 + /* RT runtime code needs to handle some hotplug events */
7174 + hotcpu_notifier(update_runtime, 0);
7176 + /* Move init over to a non-isolated CPU */
7177 + if (set_cpus_allowed_ptr(current, &non_isolated_cpus) < 0)
7181 +	 * Assume that every added cpu gives us slightly less overall latency,
7182 +	 * allowing us to increase the base rr_interval, but in a non-linear
7185 + rr_interval *= 1 + ilog2(num_online_cpus());
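A worked example of the scaling (ilog2() is a floor log2):

	/*
	 * cpus:        1  2  4  8  16  32
	 * multiplier:  1  2  3  4   5   6
	 * i.e. the base interval grows with the log of the CPU count,
	 * not linearly with it.
	 */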
7189 + * Set up the relative cache distance of each online cpu from each
7190 + * other in a simple array for quick lookup. Locality is determined
7191 + * by the closest sched_domain that CPUs are separated by. CPUs with
7192 + * shared cache in SMT and MC are treated as local. Separate CPUs
7193 + * (within the same package or physically) within the same node are
7194 + * treated as not local. CPUs not even in the same domain (different
7195 + * nodes) are treated as very distant.
7197 + for_each_online_cpu(cpu) {
7198 + struct rq *rq = cpu_rq(cpu);
7199 + for_each_domain(cpu, sd) {
7200 + unsigned long locality;
7203 +#ifdef CONFIG_SCHED_SMT
7204 + if (sd->level == SD_LV_SIBLING) {
7205 + for_each_cpu_mask_nr(other_cpu, sd->span)
7206 + cpu_set(other_cpu, rq->smt_siblings);
7209 +#ifdef CONFIG_SCHED_MC
7210 + if (sd->level == SD_LV_MC) {
7211 + for_each_cpu_mask_nr(other_cpu, sd->span)
7212 + cpu_set(other_cpu, rq->cache_siblings);
7215 + if (sd->level <= SD_LV_MC)
7217 + else if (sd->level <= SD_LV_NODE)
7222 + for_each_cpu_mask_nr(other_cpu, sd->span) {
7223 + if (locality < rq->cpu_locality[other_cpu])
7224 + rq->cpu_locality[other_cpu] = locality;
7229 + * Each runqueue has its own function in case it doesn't have
7230 +		 * siblings of its own, allowing mixed topologies.
7232 +#ifdef CONFIG_SCHED_SMT
7233 + if (cpus_weight(rq->smt_siblings) > 1)
7234 + rq->siblings_idle = siblings_cpu_idle;
7236 +#ifdef CONFIG_SCHED_MC
7237 + if (cpus_weight(rq->cache_siblings) > 1)
7238 + rq->cache_idle = cache_cpu_idle;
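A worked example of the resulting table (topology assumed for illustration):

		/*
		 * On a one-node, two-socket box where CPUs 0,1 are SMT
		 * siblings in socket 0 and CPUs 2,3 share socket 1,
		 * CPU 0 ends up with cpu_locality[] = { 0, 1, 2, 2 }:
		 * itself 0, its SMT sibling local (1), the other socket's
		 * CPUs one step out (2), and any CPU on a different node
		 * would keep the "very distant" default of 3.
		 */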
7244 +void __init sched_init_smp(void)
7247 +#endif /* CONFIG_SMP */
7249 +int in_sched_functions(unsigned long addr)
7251 + return in_lock_functions(addr) ||
7252 + (addr >= (unsigned long)__sched_text_start
7253 + && addr < (unsigned long)__sched_text_end);
7256 +void __init sched_init(void)
7261 + prio_ratios[0] = 100;
7262 + for (i = 1 ; i < PRIO_RANGE ; i++)
7263 + prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;
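A worked example of the series this loop produces:

	/*
	 * prio_ratios[] = 100, 110, 121, 133, 146, 160, ... -- each nice
	 * level is entitled to ~10% more CPU than the one below it
	 * (integer-truncated powers of 1.1, scaled by 100).
	 */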
7265 + spin_lock_init(&grq.lock);
7267 + init_defrootdomain();
7269 + uprq = &per_cpu(runqueues, 0);
7271 + for_each_possible_cpu(i) {
7273 + rq->user_pc = rq->nice_pc = rq->softirq_pc = rq->system_pc =
7274 + rq->iowait_pc = rq->idle_pc = 0;
7280 + rq_attach_root(rq, &def_root_domain);
7282 + atomic_set(&rq->nr_iowait, 0);
7288 + * Set the base locality for cpu cache distance calculation to
7289 + * "distant" (3). Make sure the distance from a CPU to itself is 0.
7291 + for_each_possible_cpu(i) {
7295 +#ifdef CONFIG_SCHED_SMT
7296 + cpus_clear(rq->smt_siblings);
7297 + cpu_set(i, rq->smt_siblings);
7298 + rq->siblings_idle = sole_cpu_idle;
7299 + cpu_set(i, rq->smt_siblings);
7301 +#ifdef CONFIG_SCHED_MC
7302 + cpus_clear(rq->cache_siblings);
7303 + cpu_set(i, rq->cache_siblings);
7304 + rq->cache_idle = sole_cpu_idle;
7305 + cpu_set(i, rq->cache_siblings);
7307 + rq->cpu_locality = alloc_bootmem(nr_cpu_ids * sizeof(unsigned long));
7308 + for_each_possible_cpu(j) {
7310 + rq->cpu_locality[j] = 0;
7312 + rq->cpu_locality[j] = 3;
7317 + for (i = 0; i < PRIO_LIMIT; i++)
7318 + INIT_LIST_HEAD(grq.queue + i);
7319 + /* delimiter for bitsearch */
7320 + __set_bit(PRIO_LIMIT, grq.prio_bitmap);
7322 +#ifdef CONFIG_PREEMPT_NOTIFIERS
7323 + INIT_HLIST_HEAD(&init_task.preempt_notifiers);
7326 +#ifdef CONFIG_RT_MUTEXES
7327 + plist_head_init(&init_task.pi_waiters, &init_task.pi_lock);
7331 + * The boot idle thread does lazy MMU switching as well:
7333 + atomic_inc(&init_mm.mm_count);
7334 + enter_lazy_tlb(&init_mm, current);
7337 + * Make us the idle thread. Technically, schedule() should not be
7338 + * called from this thread, however somewhere below it might be,
7339 + * but because we are the idle thread, we just pick up running again
7340 + * when this runqueue becomes "idle".
7342 + init_idle(current, smp_processor_id());
7345 +#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
7346 +void __might_sleep(char *file, int line)
7349 + static unsigned long prev_jiffy; /* ratelimiting */
7351 + if ((in_atomic() || irqs_disabled()) &&
7352 + system_state == SYSTEM_RUNNING && !oops_in_progress) {
7353 + if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
7355 + prev_jiffy = jiffies;
7356 + printk(KERN_ERR "BUG: sleeping function called from invalid"
7357 + " context at %s:%d\n", file, line);
7358 + printk("in_atomic():%d, irqs_disabled():%d\n",
7359 + in_atomic(), irqs_disabled());
7360 + debug_show_held_locks(current);
7361 + if (irqs_disabled())
7362 + print_irqtrace_events(current);
7367 +EXPORT_SYMBOL(__might_sleep);
7370 +#ifdef CONFIG_MAGIC_SYSRQ
7371 +void normalize_rt_tasks(void)
7373 + struct task_struct *g, *p;
7374 + unsigned long flags;
7378 + read_lock_irq(&tasklist_lock);
7380 + do_each_thread(g, p) {
7381 + if (!rt_task(p) && !iso_task(p))
7384 + spin_lock_irqsave(&p->pi_lock, flags);
7385 + rq = __task_grq_lock(p);
7386 + update_rq_clock(rq);
7388 + queued = task_queued(p);
7391 + __setscheduler(p, rq, SCHED_NORMAL, 0);
7394 + try_preempt(p, rq);
7397 + __task_grq_unlock();
7398 + spin_unlock_irqrestore(&p->pi_lock, flags);
7399 + } while_each_thread(g, p);
7401 + read_unlock_irq(&tasklist_lock);
7403 +#endif /* CONFIG_MAGIC_SYSRQ */
7407 + * These functions are only useful for the IA64 MCA handling.
7409 + * They can only be called when the whole system has been
7410 + * stopped - every CPU needs to be quiescent, and no scheduling
7411 + * activity can take place. Using them for anything else would
7412 + * be a serious bug, and as a result, they aren't even visible
7413 + * under any other configuration.
7417 + * curr_task - return the current task for a given cpu.
7418 + * @cpu: the processor in question.
7420 + * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
7422 +struct task_struct *curr_task(int cpu)
7424 + return cpu_curr(cpu);
7428 + * set_curr_task - set the current task for a given cpu.
7429 + * @cpu: the processor in question.
7430 + * @p: the task pointer to set.
7432 + * Description: This function must only be used when non-maskable interrupts
7433 + * are serviced on a separate stack. It allows the architecture to switch the
7434 + * notion of the current task on a cpu in a non-blocking manner. This function
7435 + * must be called with all CPUs synchronised and interrupts disabled. The
7436 + * caller must save the original value of the current task (see
7437 + * curr_task() above) and restore that value before reenabling interrupts and
7438 + * re-starting the system.
7440 + * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
7442 +void set_curr_task(int cpu, struct task_struct *p)
7444 + cpu_curr(cpu) = p;
7450 + * Use precise platform statistics if available:
7452 +#ifdef CONFIG_VIRT_CPU_ACCOUNTING
7453 +cputime_t task_utime(struct task_struct *p)
7458 +cputime_t task_stime(struct task_struct *p)
7463 +cputime_t task_utime(struct task_struct *p)
7465 + clock_t utime = cputime_to_clock_t(p->utime),
7466 + total = utime + cputime_to_clock_t(p->stime);
7469 + temp = (u64)nsec_to_clock_t(p->sched_time);
7473 + do_div(temp, total);
7475 + utime = (clock_t)temp;
7477 + p->prev_utime = max(p->prev_utime, clock_t_to_cputime(utime));
7478 + return p->prev_utime;
7481 +cputime_t task_stime(struct task_struct *p)
7485 + stime = nsec_to_clock_t(p->sched_time) -
7486 + cputime_to_clock_t(task_utime(p));
7489 + p->prev_stime = max(p->prev_stime, clock_t_to_cputime(stime));
7491 + return p->prev_stime;
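A worked example of the scaling above (numbers assumed for illustration):

/*
 * If the tick samples recorded utime = 30 and stime = 10 clock ticks
 * but the scheduler-measured sched_time corresponds to 60 ticks, the
 * 3:1 user/system ratio is applied to the accurate total:
 *   task_utime() = 60 * 30 / 40 = 45
 *   task_stime() = 60 - 45     = 15
 * The max() against prev_utime/prev_stime keeps both values monotonic
 * between calls.
 */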
7495 +inline cputime_t task_gtime(struct task_struct *p)
7500 +void __cpuinit init_idle_bootup_task(struct task_struct *idle)
7503 +#ifdef CONFIG_SCHED_DEBUG
7504 +void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
7507 +void proc_sched_set_task(struct task_struct *p)
7510 Index: kernel-2.6.28/kernel/sched_stats.h
7511 ===================================================================
7512 --- kernel-2.6.28.orig/kernel/sched_stats.h
7513 +++ kernel-2.6.28/kernel/sched_stats.h
7514 @@ -296,20 +296,21 @@ sched_info_switch(struct task_struct *pr
7515 static inline void account_group_user_time(struct task_struct *tsk,
7518 - struct signal_struct *sig;
7519 + struct thread_group_cputimer *cputimer;
7521 /* tsk == current, ensure it is safe to use ->signal */
7522 if (unlikely(tsk->exit_state))
7525 - sig = tsk->signal;
7526 - if (sig->cputime.totals) {
7527 - struct task_cputime *times;
7528 + cputimer = &tsk->signal->cputimer;
7530 - times = per_cpu_ptr(sig->cputime.totals, get_cpu());
7531 - times->utime = cputime_add(times->utime, cputime);
7532 - put_cpu_no_resched();
7534 + if (!cputimer->running)
7537 + spin_lock(&cputimer->lock);
7538 + cputimer->cputime.utime =
7539 + cputime_add(cputimer->cputime.utime, cputime);
7540 + spin_unlock(&cputimer->lock);
7544 @@ -325,20 +326,21 @@ static inline void account_group_user_ti
7545 static inline void account_group_system_time(struct task_struct *tsk,
7548 - struct signal_struct *sig;
7549 + struct thread_group_cputimer *cputimer;
7551 /* tsk == current, ensure it is safe to use ->signal */
7552 if (unlikely(tsk->exit_state))
7555 - sig = tsk->signal;
7556 - if (sig->cputime.totals) {
7557 - struct task_cputime *times;
7558 + cputimer = &tsk->signal->cputimer;
7560 + if (!cputimer->running)
7563 - times = per_cpu_ptr(sig->cputime.totals, get_cpu());
7564 - times->stime = cputime_add(times->stime, cputime);
7565 - put_cpu_no_resched();
7567 + spin_lock(&cputimer->lock);
7568 + cputimer->cputime.stime =
7569 + cputime_add(cputimer->cputime.stime, cputime);
7570 + spin_unlock(&cputimer->lock);
7574 @@ -354,6 +356,7 @@ static inline void account_group_system_
7575 static inline void account_group_exec_runtime(struct task_struct *tsk,
7576 unsigned long long ns)
7578 + struct thread_group_cputimer *cputimer;
7579 struct signal_struct *sig;
7582 @@ -362,11 +365,12 @@ static inline void account_group_exec_ru
7586 - if (sig->cputime.totals) {
7587 - struct task_cputime *times;
7588 + cputimer = &sig->cputimer;
7590 + if (!cputimer->running)
7593 - times = per_cpu_ptr(sig->cputime.totals, get_cpu());
7594 - times->sum_exec_runtime += ns;
7595 - put_cpu_no_resched();
7597 + spin_lock(&cputimer->lock);
7598 + cputimer->cputime.sum_exec_runtime += ns;
7599 + spin_unlock(&cputimer->lock);
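All three accessors above follow the same shape: an unlocked fast-path test of ->running, then a short spinlocked accumulate into the shared struct. A standalone sketch (userspace pthreads, names hypothetical):

	#include <pthread.h>

	struct demo_cputimer {
		pthread_spinlock_t lock;	/* init with pthread_spin_init() */
		int running;
		unsigned long long utime;
	};

	static void demo_account_utime(struct demo_cputimer *ct,
				       unsigned long long ticks)
	{
		/* cheap unlocked check: nothing to do while no timer is armed */
		if (!ct->running)
			return;

		pthread_spin_lock(&ct->lock);
		ct->utime += ticks;
		pthread_spin_unlock(&ct->lock);
	}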
7601 Index: kernel-2.6.28/kernel/signal.c
7602 ===================================================================
7603 --- kernel-2.6.28.orig/kernel/signal.c
7604 +++ kernel-2.6.28/kernel/signal.c
7605 @@ -1342,7 +1342,6 @@ int do_notify_parent(struct task_struct
7606 struct siginfo info;
7607 unsigned long flags;
7608 struct sighand_struct *psig;
7609 - struct task_cputime cputime;
7613 @@ -1373,9 +1372,10 @@ int do_notify_parent(struct task_struct
7615 info.si_uid = tsk->uid;
7617 - thread_group_cputime(tsk, &cputime);
7618 - info.si_utime = cputime_to_jiffies(cputime.utime);
7619 - info.si_stime = cputime_to_jiffies(cputime.stime);
7620 + info.si_utime = cputime_to_clock_t(cputime_add(tsk->utime,
7621 + tsk->signal->utime));
7622 + info.si_stime = cputime_to_clock_t(cputime_add(tsk->stime,
7623 + tsk->signal->stime));
7625 info.si_status = tsk->exit_code & 0x7f;
7626 if (tsk->exit_code & 0x80)
7627 Index: kernel-2.6.28/kernel/sysctl.c
7628 ===================================================================
7629 --- kernel-2.6.28.orig/kernel/sysctl.c
7630 +++ kernel-2.6.28/kernel/sysctl.c
7631 @@ -86,11 +86,6 @@ extern int sysctl_nr_open_min, sysctl_nr
7632 extern int rcutorture_runnable;
7633 #endif /* #ifdef CONFIG_RCU_TORTURE_TEST */
7635 -/* Constants used for minimum and maximum */
7636 -#if defined(CONFIG_HIGHMEM) || defined(CONFIG_DETECT_SOFTLOCKUP)
7637 -static int one = 1;
7640 #ifdef CONFIG_DETECT_SOFTLOCKUP
7641 static int sixty = 60;
7642 static int neg_one = -1;
7643 @@ -101,8 +96,14 @@ static int two = 2;
7647 -static int one_hundred = 100;
7649 +static int __read_mostly one = 1;
7650 +static int __read_mostly one_hundred = 100;
7651 +#ifdef CONFIG_SCHED_BFS
7652 +extern int rr_interval;
7653 +extern int sched_iso_cpu;
7654 +static int __read_mostly five_thousand = 5000;
7656 /* this is needed for the proc_dointvec_minmax for [fs_]overflow UID and GID */
7657 static int maxolduid = 65535;
7658 static int minolduid;
7659 @@ -227,7 +228,7 @@ static struct ctl_table root_table[] = {
7663 -#ifdef CONFIG_SCHED_DEBUG
7664 +#if defined(CONFIG_SCHED_DEBUG) && !defined(CONFIG_SCHED_BFS)
7665 static int min_sched_granularity_ns = 100000; /* 100 usecs */
7666 static int max_sched_granularity_ns = NSEC_PER_SEC; /* 1 second */
7667 static int min_wakeup_granularity_ns; /* 0 usecs */
7668 @@ -235,6 +236,7 @@ static int max_wakeup_granularity_ns = N
7671 static struct ctl_table kern_table[] = {
7672 +#ifndef CONFIG_SCHED_BFS
7673 #ifdef CONFIG_SCHED_DEBUG
7675 .ctl_name = CTL_UNNUMBERED,
7676 @@ -344,6 +346,7 @@ static struct ctl_table kern_table[] = {
7678 .proc_handler = &proc_dointvec,
7680 +#endif /* !CONFIG_SCHED_BFS */
7681 #ifdef CONFIG_PROVE_LOCKING
7683 .ctl_name = CTL_UNNUMBERED,
7684 @@ -719,6 +722,30 @@ static struct ctl_table kern_table[] = {
7685 .proc_handler = &proc_dointvec,
7688 +#ifdef CONFIG_SCHED_BFS
7690 + .ctl_name = CTL_UNNUMBERED,
7691 + .procname = "rr_interval",
7692 + .data = &rr_interval,
7693 + .maxlen = sizeof (int),
7695 + .proc_handler = &proc_dointvec_minmax,
7696 + .strategy = &sysctl_intvec,
7698 + .extra2 = &five_thousand,
7701 + .ctl_name = CTL_UNNUMBERED,
7702 + .procname = "iso_cpu",
7703 + .data = &sched_iso_cpu,
7704 + .maxlen = sizeof (int),
7706 + .proc_handler = &proc_dointvec_minmax,
7707 + .strategy = &sysctl_intvec,
7709 + .extra2 = &one_hundred,
7712 #if defined(CONFIG_S390) && defined(CONFIG_SMP)
7714 .ctl_name = KERN_SPIN_RETRY,
7715 Index: kernel-2.6.28/kernel/time/tick-sched.c
7716 ===================================================================
7717 --- kernel-2.6.28.orig/kernel/time/tick-sched.c
7718 +++ kernel-2.6.28/kernel/time/tick-sched.c
7719 @@ -447,6 +447,7 @@ void tick_nohz_restart_sched_tick(void)
7720 tick_do_update_jiffies64(now);
7721 cpu_clear(cpu, nohz_cpu_mask);
7725 * We stopped the tick in idle. Update process times would miss the
7726 * time we slept as update_process_times does only a 1 tick
7727 @@ -457,10 +458,7 @@ void tick_nohz_restart_sched_tick(void)
7728 * We might be one off. Do not randomly account a huge number of ticks!
7730 if (ticks && ticks < LONG_MAX) {
7731 - add_preempt_count(HARDIRQ_OFFSET);
7732 - account_system_time(current, HARDIRQ_OFFSET,
7733 - jiffies_to_cputime(ticks));
7734 - sub_preempt_count(HARDIRQ_OFFSET);
7735 + account_idle_ticks(ticks);
7738 touch_softlockup_watchdog();
7739 Index: kernel-2.6.28/kernel/timer.c
7740 ===================================================================
7741 --- kernel-2.6.28.orig/kernel/timer.c
7742 +++ kernel-2.6.28/kernel/timer.c
7743 @@ -1021,20 +1021,21 @@ unsigned long get_next_timer_interrupt(u
7748 #ifndef CONFIG_VIRT_CPU_ACCOUNTING
7749 -void account_process_tick(struct task_struct *p, int user_tick)
7751 - cputime_t one_jiffy = jiffies_to_cputime(1);
7757 -	account_user_time(p, one_jiffy);
7759 -	account_user_time_scaled(p, cputime_to_scaled(one_jiffy));
7761 -	account_system_time(p, HARDIRQ_OFFSET, one_jiffy);
7763 -	account_system_time_scaled(p, cputime_to_scaled(one_jiffy));
7770 * Called from the timer interrupt handler to charge one tick to the current
7771 @@ -1045,7 +1046,7 @@ void update_process_times(int user_tick)
7772 struct task_struct *p = current;
7773 int cpu = smp_processor_id();
7775 - /* Note: this timer irq context must be accounted for as well. */
7776 + /* Accounting is done within sched_bfs.c */
7777 account_process_tick(p, user_tick);
7779 if (rcu_pending(cpu))
7780 @@ -1098,8 +1099,7 @@ static inline void calc_load(unsigned lo
7783 * This function runs timers and the timer-tq in bottom half context.
7785  */
7786 static void run_timer_softirq(struct softirq_action *h)
7788 struct tvec_base *base = __get_cpu_var(tvec_bases);
7790 Index: kernel-2.6.28/kernel/workqueue.c
7791 ===================================================================
7792 --- kernel-2.6.28.orig/kernel/workqueue.c
7793 +++ kernel-2.6.28/kernel/workqueue.c
7794 @@ -323,7 +323,6 @@ static int worker_thread(void *__cwq)
7795 if (cwq->wq->freezeable)
7798 - set_user_nice(current, -5);
7801 prepare_to_wait(&cwq->more_work, &wait, TASK_INTERRUPTIBLE);
7802 Index: kernel-2.6.28/mm/oom_kill.c
7803 ===================================================================
7804 --- kernel-2.6.28.orig/mm/oom_kill.c
7805 +++ kernel-2.6.28/mm/oom_kill.c
7806 @@ -334,7 +334,7 @@ static void __oom_kill_task(struct task_
7807 * all the memory it needs. That way it should be able to
7808 * exit() and clear out its resources quickly...
7810 - p->rt.time_slice = HZ;
7811 + set_oom_timeslice(p);
7812 set_tsk_thread_flag(p, TIF_MEMDIE);
7814 force_sig(SIGKILL, p);