1 diff --git a/Documentation/scheduler/sched-BFS.txt b/Documentation/scheduler/sched-BFS.txt
5 +++ b/Documentation/scheduler/sched-BFS.txt
10 ++ BFS - The Brain Fuck Scheduler by Con Kolivas.
14 ++ The goal of the Brain Fuck Scheduler, referred to as BFS from here on, is to
15 ++ completely do away with the complex designs of the past for the cpu process
16 ++ scheduler and instead implement one that is very simple in basic design.
17 ++ The main focus of BFS is to achieve excellent desktop interactivity and
18 ++ responsiveness without heuristics and tuning knobs that are difficult to
19 ++ understand, impossible to model and predict the effect of, and when tuned to
20 ++ one workload cause massive detriment to another.
25 ++ BFS is best described as a single runqueue, O(n) lookup, earliest effective
26 ++ virtual deadline first design, loosely based on EEVDF (earliest eligible virtual
27 ++ deadline first) and my previous Staircase Deadline scheduler. Each component
28 ++ is described below, along with its significance and the reasoning behind it.
29 ++ The codebase, when the first stable version was released, was approximately
30 ++ 9000 fewer lines of code than the existing mainline linux kernel scheduler (in
31 ++ 2.6.31). This does not even take into account the removal of documentation and
32 ++ the cgroups code that is not used.
36 ++ The single runqueue refers to the queued but not running processes for the
37 ++ entire system, regardless of the number of CPUs. The reason for going back to
38 ++ a single runqueue design is that once multiple runqueues are introduced,
39 ++ per-CPU or otherwise, complex interactions arise: each runqueue is responsible
40 ++ for the scheduling latency and fairness of only the tasks on its own runqueue,
41 ++ so any throughput advantage of keeping tasks CPU-local brings disadvantages
42 ++ when fairness and low latency must be achieved across multiple CPUs. A very
43 ++ complex balancing system is required to achieve, at best, some semblance of
44 ++ fairness across CPUs, and it can only maintain relatively low latency for
45 ++ tasks bound to the same CPUs, not across them. To increase that fairness and
46 ++ lower the latency across CPUs, the advantage of local runqueue locking, which
47 ++ makes for better scalability, is lost because multiple locks must be grabbed.
49 ++ A significant feature of BFS is that all accounting is done purely based on CPU
50 ++ used and nowhere is sleep time used in any way to determine entitlement or
51 ++ interactivity. Interactivity "estimators" that use some kind of sleep/run
52 ++ algorithm are doomed to fail to detect all interactive tasks, and to falsely tag
53 ++ tasks that aren't interactive as being so. The reason for this is that it is
54 ++ close to impossible to determine, when a task is sleeping, whether it is
55 ++ doing it voluntarily, as in a userspace application waiting for input in the
56 ++ form of a mouse click or otherwise, or involuntarily, because it is waiting for
57 ++ another thread, process, I/O, kernel activity or whatever. Thus, such an
58 ++ estimator will introduce corner cases, and more heuristics will be required to
59 ++ cope with those corner cases, introducing more corner cases and failed
60 ++ interactivity detection and so on. Interactivity in BFS is built into the design
61 ++ by virtue of the fact that tasks that are waking up have not used up their quota
62 ++ of CPU time, and have earlier effective deadlines, thereby making it very likely
63 ++ they will preempt any CPU bound task of equivalent nice level. See below for
64 ++ more information on the virtual deadline mechanism. Even if they do not preempt
65 ++ a running task, because the rr interval guarantees a bounded upper limit on
66 ++ how long a task will wait, it will be scheduled within a timeframe
67 ++ that will not cause visible interface jitter.
74 ++ BFS inserts tasks into each relevant queue as an O(1) insertion into a doubly
75 ++ linked list. On insertion, *every* running queue is checked to see if the newly
76 ++ queued task can run on any idle queue, or preempt the lowest running task on the
77 ++ system. This is how the cross-CPU scheduling of BFS achieves significantly lower
78 ++ latency per extra CPU the system has. In this case the lookup is, in the worst
79 ++ case scenario, O(n) where n is the number of CPUs on the system.
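As a rough illustration of the insertion and cross-CPU preemption check described
above, here is a minimal userspace model (not the kernel code; the grq list, the
structure layout and try_preempt() are purely illustrative):

    /* Simplified model of BFS task insertion and the O(n)-over-CPUs
     * preemption check.  All names are illustrative only. */
    #include <stdbool.h>

    #define NR_CPUS 4

    struct bfs_task {
        struct bfs_task *next, *prev;   /* doubly linked run list */
        unsigned long deadline;         /* virtual deadline in jiffies */
    };

    struct cpu_state {
        bool idle;
        unsigned long running_deadline; /* deadline of its running task */
    };

    static struct bfs_task grq_head = { &grq_head, &grq_head, 0 };
    static struct cpu_state cpus[NR_CPUS];

    /* O(1) insertion at the tail of the single global run list. */
    static void grq_enqueue(struct bfs_task *p)
    {
        p->prev = grq_head.prev;
        p->next = &grq_head;
        grq_head.prev->next = p;
        grq_head.prev = p;
    }

    /* O(n) over CPUs: prefer an idle CPU, otherwise preempt the CPU whose
     * running task has the latest (worst) deadline, if ours is earlier. */
    static int try_preempt(struct bfs_task *p)
    {
        int cpu, worst = -1;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
            if (cpus[cpu].idle)
                return cpu;
            if (worst < 0 ||
                cpus[cpu].running_deadline > cpus[worst].running_deadline)
                worst = cpu;
        }
        return p->deadline < cpus[worst].running_deadline ? worst : -1;
    }

    int main(void)
    {
        struct bfs_task t = { .deadline = 100 };

        cpus[1].idle = true;
        grq_enqueue(&t);
        return try_preempt(&t) == 1 ? 0 : 1;    /* the idle CPU wins */
    }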
83 ++ BFS has one single lock protecting the process local data of every task in the
84 ++ global queue. Thus every insertion, removal and modification of task data in the
85 ++ global runqueue needs to grab the global lock. However, once a task is taken by
86 ++ a CPU, the CPU has its own local data copy of the running process' accounting
87 ++ information which only that CPU accesses and modifies (such as during a
88 ++ timer tick) thus allowing the accounting data to be updated lockless. Once a
89 ++ CPU has taken a task to run, it removes it from the global queue. Thus the
90 ++ global queue only ever has, at most,
92 ++ (number of tasks requesting cpu time) - (number of logical CPUs) + 1
94 ++ tasks in the global queue. This value is relevant for the time taken to look up
95 ++ tasks during scheduling. This count can increase if tasks that have CPU affinity
96 ++ set in their policy to limit which CPUs they're allowed to run on outnumber
97 ++ the number of CPUs. The +1 is because when rescheduling a task, the CPU's
98 ++ currently running task is put back on the queue. Lookup will be described after
99 ++ the virtual deadline mechanism is explained.
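For example, on a 4 CPU machine with 10 tasks requesting CPU time, the global
queue never holds more than 10 - 4 + 1 = 7 tasks at once. The locking pattern
itself can be sketched roughly as below (a userspace model using a pthread mutex
in place of the global runqueue lock; all names are hypothetical):

    /* One global lock guards the shared queue; each CPU keeps a local
     * pointer to the task it is running and updates its accounting
     * without taking the lock.  Illustrative model only. */
    #include <pthread.h>

    struct task {
        struct task *next;
        unsigned long long ns_used;     /* owned by the running CPU */
    };

    static pthread_mutex_t grq_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct task *grq;            /* queued-but-not-running tasks */

    struct cpu_rq {
        struct task *curr;              /* only this CPU touches it */
    };

    /* Taking a task off the global queue needs the global lock. */
    static struct task *grab_task(struct cpu_rq *rq)
    {
        pthread_mutex_lock(&grq_lock);
        rq->curr = grq;
        if (grq)
            grq = grq->next;
        pthread_mutex_unlock(&grq_lock);
        return rq->curr;
    }

    /* Timer tick accounting: the task was removed from the global queue,
     * so no other CPU looks at it and no lock is required. */
    static void account_tick(struct cpu_rq *rq, unsigned long long delta_ns)
    {
        if (rq->curr)
            rq->curr->ns_used += delta_ns;
    }

    int main(void)
    {
        struct task t = { 0 };
        struct cpu_rq rq = { 0 };

        grq = &t;
        grab_task(&rq);
        account_tick(&rq, 1000);
        return 0;
    }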
103 ++ The key to achieving low latency, scheduling fairness, and "nice level"
104 ++ distribution in BFS is entirely in the virtual deadline mechanism. The one
105 ++ tunable in BFS is the rr_interval, or "round robin interval". This is the
106 ++ maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy)
107 ++ tasks of the same nice level will be running for, or looking at it the other
108 ++ way around, the longest duration two tasks of the same nice level will be
109 ++ delayed for. When a task requests cpu time, it is given a quota (time_slice)
110 ++ equal to the rr_interval and a virtual deadline. The virtual deadline is
111 ++ offset from the current time in jiffies by this equation:
113 ++ jiffies + (prio_ratio * rr_interval)
115 ++ The prio_ratio is determined as a ratio compared to the baseline of nice -20
116 ++ and increases by 10% per nice level. The deadline is a virtual one only in that
117 ++ no guarantee is placed that a task will actually be scheduled by this time, but
118 ++ it is used to compare which task should go next. There are three components to
119 ++ how a task is next chosen. First is time_slice expiration. If a task runs out
120 ++ of its time_slice, it is descheduled, the time_slice is refilled, and the
121 ++ deadline reset to that formula above. Second is sleep, where a task no longer
122 ++ is requesting CPU for whatever reason. The time_slice and deadline are _not_
123 ++ adjusted in this case and are just carried over for when the task is next
124 ++ scheduled. Third is preemption, and that is when a newly waking task is deemed
125 ++ higher priority than a currently running task on any cpu by virtue of the fact
126 ++ that it has an earlier virtual deadline than the currently running task. The
127 ++ earlier deadline is the key to which task is next chosen for the first and
128 ++ second cases. Once a task is descheduled, it is put back on the queue, and an
129 ++ O(n) lookup of all queued-but-not-running tasks is done to determine which has
130 ++ the earliest deadline and that task is chosen to receive CPU next. The one
131 ++ caveat to this is that if a deadline has already passed (jiffies is greater
132 ++ than the deadline), the tasks are chosen in FIFO (first in first out) order as
133 ++ the deadlines are old and their absolute value becomes decreasingly relevant
134 ++ apart from being a flag that they have been asleep and deserve CPU time ahead
135 ++ of all later deadlines.
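A compact sketch of that arithmetic (a userspace model; the baseline ratio of 128
and the helper names are assumptions for illustration, not taken from the patch):

    /* Model of: deadline = jiffies + prio_ratio * rr_interval, with
     * prio_ratio growing by 10% per nice level from the nice -20 baseline.
     * The baseline value of 128 is illustrative. */
    #include <stdio.h>

    #define PRIO_RANGE 40               /* nice -20 .. +19 */

    static int prio_ratios[PRIO_RANGE];
    static const int rr_interval = 6;   /* ms, the default */

    static void init_prio_ratios(void)
    {
        int i;

        prio_ratios[0] = 128;
        for (i = 1; i < PRIO_RANGE; i++)
            prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;
    }

    static unsigned long virtual_deadline(unsigned long jiffies_now, int nice)
    {
        return jiffies_now + prio_ratios[nice + 20] * rr_interval;
    }

    int main(void)
    {
        init_prio_ratios();
        /* A waking nice 0 task gets an earlier deadline than a nice 10
         * task queued at the same instant, so it is picked first. */
        printf("nice 0:  %lu\n", virtual_deadline(1000, 0));
        printf("nice 10: %lu\n", virtual_deadline(1000, 10));
        return 0;
    }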
137 ++ The CPU proportion of different nice tasks works out to be approximately the
139 ++ (prio_ratio difference)^2
141 ++ The reason it is squared is that a task's deadline does not change while it is
142 ++ running unless it runs out of time_slice. Thus, even if time actually passes
143 ++ the deadline of another queued task, that task will not get CPU time unless
144 ++ the currently running task deschedules, while the time "base" (jiffies) keeps
145 ++ moving constantly.
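As a rough worked example of that squared relationship (illustrative numbers only,
using the 10% per nice level growth of prio_ratio described above):

    prio_ratio(nice 0) / prio_ratio(nice -5)  ~= 1.1^5  ~= 1.61
    CPU share(nice -5) : CPU share(nice 0)    ~= 1.61^2 ~= 2.6 : 1

so a fully CPU bound nice -5 task ends up with roughly two and a half times the
CPU of a competing, fully CPU bound, nice 0 task.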
149 ++ BFS has 103 priority queues. 100 of these are dedicated to the static priority
150 ++ of realtime tasks, and the remaining 3 are, in order of best to worst priority,
151 ++ SCHED_ISO (isochronous), SCHED_NORMAL, and SCHED_IDLEPRIO (idle priority
152 ++ scheduling). When a task of these priorities is queued, a bitmap of running
153 ++ priorities is set showing which of these priorities has tasks waiting for CPU
154 ++ time. When a CPU is made to reschedule, the lookup for the next task to get
155 ++ CPU time is performed in the following way:
157 ++ First the bitmap is checked to see what static priority tasks are queued. If
158 ++ any realtime priorities are found, the corresponding queue is checked and the
159 ++ first task listed there is taken (provided CPU affinity is suitable) and lookup
160 ++ is complete. If the priority corresponds to SCHED_ISO tasks, they are also
161 ++ taken in FIFO order (as they behave like SCHED_RR). If the priority corresponds
162 ++ to either SCHED_NORMAL or SCHED_IDLEPRIO, then the lookup becomes O(n). At this
163 ++ stage, every task in the runlist that corresponds to that priority is checked
164 ++ to see which has the earliest set deadline, and (provided it has suitable CPU
165 ++ affinity) it is taken off the runqueue and given the CPU. If a task has an
166 ++ expired deadline, it is taken and the rest of the lookup aborted (as they are
167 ++ chosen in FIFO order).
169 ++ Thus, the lookup is O(n) in the worst case only, where n is as described
170 ++ earlier, as tasks may be chosen before the whole task list is looked over.
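The lookup described above can be modelled in userspace roughly as follows (the
103 queues are kept, but the priority values 100..102 used for ISO/NORMAL/IDLEPRIO
and all function names are illustrative assumptions):

    #include <stdint.h>

    #define PRIO_LIMIT 103              /* 100 realtime + ISO + NORMAL + IDLEPRIO */

    struct qtask {
        struct qtask *next;
        unsigned long deadline;
    };

    static uint64_t prio_bitmap[2];             /* bit set => tasks queued there */
    static struct qtask *queue[PRIO_LIMIT];     /* one list head per priority */

    /* The kernel would use a find-first-bit primitive; a plain loop keeps
     * this model obvious. */
    static int first_queued_prio(void)
    {
        int prio;

        for (prio = 0; prio < PRIO_LIMIT; prio++)
            if (prio_bitmap[prio / 64] & (1ULL << (prio % 64)))
                return prio;
        return -1;
    }

    /* O(n) scan used for the SCHED_NORMAL/SCHED_IDLEPRIO levels: earliest
     * deadline wins, but the first already-expired deadline found is taken
     * immediately (FIFO behaviour for old deadlines). */
    static struct qtask *earliest_deadline(struct qtask *head, unsigned long now)
    {
        struct qtask *p, *best = head;

        for (p = head; p; p = p->next) {
            if (p->deadline <= now)
                return p;
            if (p->deadline < best->deadline)
                best = p;
        }
        return best;
    }

    int main(void)
    {
        struct qtask b = { 0, 250 }, a = { &b, 300 };   /* list: a -> b */

        queue[101] = &a;                    /* 101 = SCHED_NORMAL in this model */
        prio_bitmap[101 / 64] |= 1ULL << (101 % 64);

        if (first_queued_prio() != 101)
            return 1;
        return earliest_deadline(queue[101], 200) == &b ? 0 : 1;
    }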
175 ++ The major limitation of BFS will be that of scalability, as the separate
176 ++ runqueue designs will have less lock contention as the number of CPUs rises.
177 ++ However they do not scale linearly even with separate runqueues as multiple
178 ++ runqueues will need to be locked concurrently on such designs to be able to
179 ++ achieve fair CPU balancing, to try and achieve some sort of nice-level fairness
180 ++ across CPUs, and to achieve low enough latency for tasks on a busy CPU when
181 ++ other CPUs would be more suited. BFS has the advantage that it requires no
182 ++ balancing algorithm whatsoever, as balancing occurs by proxy simply because
183 ++ all CPUs draw off the global runqueue, in priority and deadline order. Despite
184 ++ the fact that scalability is _not_ the prime concern of BFS, it both shows very
185 ++ good scalability to smaller numbers of CPUs and is likely a more scalable design
186 ++ at these numbers of CPUs.
188 ++ It also has some very low overhead scalability features built into the design,
189 ++ added only where their overhead was deemed so marginal that they're worthwhile.
190 ++ The first is the local copy of the running process' data to the CPU it's running
191 ++ on to allow that data to be updated lockless where possible. Then there is
192 ++ deference paid to the last CPU a task was running on, by trying that CPU first
193 ++ when looking for an idle CPU to use the next time it's scheduled. Finally there
194 ++ is the notion of cache locality beyond the last running CPU. The sched_domains
195 ++ information is used to determine the relative virtual "cache distance" that
196 ++ other CPUs have from the last CPU a task was running on. CPUs with shared
197 ++ caches, such as SMT siblings, or multicore CPUs with shared caches, are treated
198 ++ as cache local. CPUs without shared caches are treated as not cache local, and
199 ++ CPUs on different NUMA nodes are treated as very distant. This "relative cache
200 ++ distance" is used by modifying the virtual deadline value when doing lookups.
201 ++ Effectively, the deadline is unaltered between "cache local" CPUs, doubled for
202 ++ "cache distant" CPUs, and quadrupled for "very distant" CPUs. The reasoning
203 ++ behind the doubling of deadlines is as follows. The real cost of migrating a
204 ++ task from one CPU to another is entirely dependent on the cache footprint of
205 ++ the task, how cache intensive the task is, how long it's been running on that
206 ++ CPU to take up the bulk of its cache, how big the CPU cache is, how fast and
207 ++ how layered the CPU cache is, how fast a context switch is... and so on. In
208 ++ other words, it's close to random in the real world where we do more than just
209 ++ one sole workload. The only thing we can be sure of is that it's not free. So
210 ++ BFS uses the principle that an idle CPU is a wasted CPU and utilising idle CPUs
211 ++ is more important than cache locality, and cache locality only plays a part
212 ++ after that. Doubling the effective deadline is based on the premise that the
213 ++ "cache local" CPUs will tend to work on the same tasks up to double the number
214 ++ of cache local CPUs, and once the workload is beyond that amount, it is likely
215 ++ that none of the tasks are cache warm anywhere anyway. The quadrupling for NUMA
216 ++ is a value I pulled out of my arse.
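One plausible way to express that weighting in code is sketched below (an
assumption for illustration only; whether BFS scales the absolute deadline value
or just the remaining offset, as done here, is not specified above):

    /* Sketch of the locality weighting: a remote CPU compares the task's
     * virtual deadline after scaling it by the "cache distance"
     * (x1 cache local, x2 cache distant, x4 different NUMA node). */
    enum cache_distance { LOCALE_LOCAL = 1, LOCALE_DISTANT = 2, LOCALE_NUMA = 4 };

    static unsigned long scaled_deadline(unsigned long deadline, unsigned long now,
                                         enum cache_distance dist)
    {
        /* Scale only the part of the deadline still in the future, so an
         * already-expired deadline stays expired regardless of locality. */
        if (deadline <= now)
            return deadline;
        return now + (deadline - now) * dist;
    }

    int main(void)
    {
        /* A task whose deadline is 60 jiffies away looks 120 away to a CPU
         * with a different cache and 240 away to a different NUMA node. */
        unsigned long now = 1000, dl = 1060;

        return scaled_deadline(dl, now, LOCALE_NUMA) == 1240 ? 0 : 1;
    }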
218 ++ When choosing an idle CPU for a waking task, the cache locality is determined
219 ++ according to where the task last ran and then idle CPUs are ranked from best
220 ++ to worst to choose the most suitable idle CPU based on cache locality, NUMA
221 ++ node locality and hyperthread sibling busyness. They are chosen in the
222 ++ following preference (if idle):
224 ++ * Same core, idle or busy cache, idle threads
225 ++ * Other core, same cache, idle or busy cache, idle threads.
226 ++ * Same node, other CPU, idle cache, idle threads.
227 ++ * Same node, other CPU, busy cache, idle threads.
228 ++ * Same core, busy threads.
229 ++ * Other core, same cache, busy threads.
230 ++ * Same node, other CPU, busy threads.
231 ++ * Other node, other CPU, idle cache, idle threads.
232 ++ * Other node, other CPU, busy cache, idle threads.
233 ++ * Other node, other CPU, busy threads.
235 ++ This shows the SMT or "hyperthread" awareness in the design as well which will
236 ++ choose a real idle core first before a logical SMT sibling which already has
237 ++ tasks on the physical CPU.
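A standalone sketch that reproduces the ordering of that preference list by
scoring the properties it names (lower rank is more preferred; the weights and
structure are illustrative, the kernel derives the same information from
sched_domains):

    struct cpu_candidate {
        int same_node;      /* same NUMA node as the task's last CPU */
        int same_core;      /* same physical core (SMT sibling) */
        int same_cache;     /* shares a cache with the last CPU */
        int cache_idle;     /* that cache has nothing else running on it */
        int threads_idle;   /* its SMT sibling threads are idle */
    };

    static int locality_rank(const struct cpu_candidate *c)
    {
        int rank = 0;

        if (!c->same_node)
            rank += 16;                     /* other NUMA node: least preferred */
        if (!c->threads_idle)
            rank += 8;                      /* busy sibling threads */
        if (!c->same_core)
            rank += c->same_cache ? 2 : 4;  /* how far away the cache is */
        if (!c->cache_idle)
            rank += 1;
        return rank;
    }

    int main(void)
    {
        struct cpu_candidate same_core_idle = { 1, 1, 1, 1, 1 };
        struct cpu_candidate other_node_idle = { 0, 0, 0, 1, 1 };

        /* an idle sibling on the same core beats an idle CPU on another node */
        return locality_rank(&same_core_idle) < locality_rank(&other_node_idle)
                ? 0 : 1;
    }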
239 ++ Early benchmarking of BFS suggested scalability dropped off at the 16 CPU mark.
240 ++ However this benchmarking was performed on an earlier design that was far less
241 ++ scalable than the current one so it's hard to know how scalable it is in terms
242 ++ of both CPUs (due to the global runqueue) and heavily loaded machines (due to
243 ++ O(n) lookup) at this stage. Note that in terms of scalability, the number of
244 ++ _logical_ CPUs matters, not the number of _physical_ CPUs. Thus, a dual (2x)
245 ++ quad core (4x) hyperthreaded (2x) machine is effectively a 16x machine. Newer benchmark
246 ++ results are very promising indeed, without needing to tweak any knobs, features
247 ++ or options. Benchmark contributions are most welcome.
252 ++ As the initial prime target audience for BFS was the average desktop user, it
253 ++ was designed to not need tweaking, tuning or have features set to obtain benefit
254 ++ from it. Thus the number of knobs and features has been kept to an absolute
255 ++ minimum and should not require extra user input for the vast majority of cases.
256 ++ There are precisely 2 tunables and 2 extra scheduling policies: the rr_interval
257 ++ and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO policies. In addition
258 ++ to this, BFS also uses sub-tick accounting. What BFS does _not_ now feature is
259 ++ support for CGROUPS. The average user should neither need to know what these
260 ++ are, nor should they need to be using them to have good desktop behaviour.
264 ++ There is only one "scheduler" tunable, the round robin interval. This can be
267 ++ /proc/sys/kernel/rr_interval
269 ++ The value is in milliseconds, and the default value is set to 6 on a
270 ++ uniprocessor machine, and automatically set to a progressively higher value on
271 ++ multiprocessor machines. The reasoning behind increasing the value on more CPUs
272 ++ is that the effective latency is decreased by virtue of there being more CPUs on
273 ++ BFS (for reasons explained above), and increasing the value allows for less
274 ++ cache contention and more throughput. Valid values are from 1 to 5000.
275 ++ Decreasing the value will decrease latencies at the cost of decreasing
276 ++ throughput, while increasing it will improve throughput, but at the cost of
277 ++ worsening latencies. The accuracy of the rr interval is limited by HZ resolution
278 ++ of the kernel configuration. Thus, the worst case latencies are usually slightly
279 ++ higher than this actual value. The default value of 6 is not an arbitrary one.
280 ++ It is based on the fact that humans can detect jitter at approximately 7ms, so
281 ++ aiming for much lower latencies is pointless under most circumstances. It is
282 ++ worth noting this fact when comparing the latency performance of BFS to other
283 ++ schedulers. Worst case latencies being higher than 7ms are far worse than
284 ++ average latencies not being in the microsecond range.
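For reference, the tunable behaves like any other sysctl, so it can be inspected
and changed at runtime, for example:

	cat /proc/sys/kernel/rr_interval
	echo 3 > /proc/sys/kernel/rr_interval

(the write requires root; a lower value trades throughput for latency, as
described above).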
286 ++ Isochronous scheduling.
288 ++ Isochronous scheduling is a unique scheduling policy designed to provide
289 ++ near-real-time performance to unprivileged (ie non-root) users without the
290 ++ ability to starve the machine indefinitely. Isochronous tasks (which means
291 ++ "same time") are set using, for example, the schedtool application like so:
293 ++ schedtool -I -e amarok
295 ++ This will start the audio application "amarok" as SCHED_ISO. How SCHED_ISO works
296 ++ is that it has a priority level between true realtime tasks and SCHED_NORMAL
297 ++ which allows ISO tasks to preempt all normal tasks, in a SCHED_RR fashion (ie,
298 ++ if multiple SCHED_ISO tasks are running, they purely round robin at rr_interval
299 ++ rate). However if ISO tasks run for more than a tunable finite amount of time,
300 ++ they are then demoted back to SCHED_NORMAL scheduling. This finite amount of
301 ++ time is the percentage of _total CPU_ available across the machine, configurable
302 ++ as a percentage in the following "resource handling" tunable (as opposed to a
303 ++ scheduler tunable):
305 ++ /proc/sys/kernel/iso_cpu
307 ++ and is set to 70% by default. It is calculated over a rolling 5 second average.
308 ++ Because it is the total CPU available, it means that on a multi CPU machine, it
309 ++ is possible to have an ISO task running as realtime scheduling indefinitely on
310 ++ just one CPU, as the other CPUs will be available. Setting this to 100 is the
311 ++ equivalent of giving all users SCHED_RR access and setting it to 0 removes the
312 ++ ability to run any pseudo-realtime tasks.
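The budget check can be modelled roughly as below (a fixed accounting window is
used here for simplicity, where the real scheduler keeps a rolling 5 second
average; the HZ value and all names are assumptions for illustration):

    /* Windowed model of the ISO CPU budget. */
    #define HZ 1000                     /* ticks per second in this model */

    static int iso_cpu = 70;            /* mirrors /proc/sys/kernel/iso_cpu */
    static unsigned long window_ticks, iso_ticks;

    /* Called once per scheduler tick: ran_iso says whether a SCHED_ISO task
     * was running anywhere during that tick. */
    static void iso_tick(int ran_iso, int online_cpus)
    {
        unsigned long period = 5UL * HZ * online_cpus;

        window_ticks++;
        if (ran_iso)
            iso_ticks++;
        if (window_ticks >= period)     /* restart the accounting window */
            window_ticks = iso_ticks = 0;
    }

    /* ISO tasks are demoted to SCHED_NORMAL while this returns nonzero. */
    static int iso_over_limit(int online_cpus)
    {
        unsigned long period = 5UL * HZ * online_cpus;

        return iso_ticks * 100 > period * (unsigned long)iso_cpu;
    }

    int main(void)
    {
        unsigned long i;

        /* One CPU, ISO running flat out: the limit trips after 70% of 5s. */
        for (i = 0; i < 4 * HZ; i++)
            iso_tick(1, 1);
        return iso_over_limit(1) ? 0 : 1;
    }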
314 ++ A feature of BFS is that it detects when an application tries to obtain a
315 ++ realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the
316 ++ appropriate privileges to use those policies. When it detects this, it will
317 ++ give the task SCHED_ISO policy instead. Thus it is transparent to the user.
318 ++ Because some applications constantly set their policy as well as their nice
319 ++ level, there is potential for them to undo the SCHED_ISO override specified by
320 ++ the user on the command line. To counter this, once
321 ++ a task has been set to SCHED_ISO policy, it needs superuser privileges to set
322 ++ it back to SCHED_NORMAL. This will ensure the task remains ISO and all child
323 ++ processes and threads will also inherit the ISO policy.
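A small userspace test of that behaviour might look like the following sketch
(SCHED_ISO is not exposed by the C library headers, so the value 4 reserved in
the kernel's sched.h is defined by hand; on a non-BFS kernel the call simply
fails with EPERM and the policy stays unchanged):

    #include <sched.h>
    #include <stdio.h>

    #ifndef SCHED_ISO
    #define SCHED_ISO 4
    #endif

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 1 };

        /* An unprivileged caller asks for a realtime policy... */
        if (sched_setscheduler(0, SCHED_RR, &sp) != 0)
            perror("sched_setscheduler");

        /* ...and under BFS (as described above) ends up with SCHED_ISO. */
        if (sched_getscheduler(0) == SCHED_ISO)
            printf("running as SCHED_ISO\n");
        return 0;
    }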
325 ++ Idleprio scheduling.
327 ++ Idleprio scheduling is a scheduling policy designed to give out CPU to a task
328 ++ _only_ when the CPU would be otherwise idle. The idea behind this is to allow
329 ++ ultra low priority tasks to be run in the background that have virtually no
330 ++ effect on the foreground tasks. This is ideally suited to distributed computing
331 ++ clients (like setiathome, folding, mprime etc) but can also be used to start
332 ++ a video encode or so on without any slowdown of other tasks. To prevent tasks
333 ++ under this policy from grabbing shared resources and holding them indefinitely,
334 ++ the scheduler will transiently schedule such a task as SCHED_NORMAL if it detects
335 ++ that the task is waiting on I/O, the machine is about to suspend to ram, and so on. As
336 ++ per the Isochronous task management, once a task has been scheduled as IDLEPRIO,
337 ++ it cannot be put back to SCHED_NORMAL without superuser privileges. Tasks can
338 ++ be set to start as SCHED_IDLEPRIO with the schedtool command like so:
340 ++ schedtool -D -e ./mprime
342 ++ Subtick accounting.
344 ++ It is surprisingly difficult to get accurate CPU accounting, and in many cases,
345 ++ the accounting is done by simply determining what is happening at the precise
346 ++ moment a timer tick fires off. This becomes increasingly inaccurate as the
347 ++ timer tick frequency (HZ) is lowered. It is possible to create an application
348 ++ which uses almost 100% CPU, yet by being descheduled at the right time, records
349 ++ zero CPU usage. While the main problem with this is that there are possible
350 ++ security implications, it is also difficult to determine how much CPU a task
351 ++ really does use. BFS tries to use the sub-tick accounting from the TSC clock,
352 ++ where possible, to determine real CPU usage. This is not entirely reliable, but
353 ++ is far more likely to produce accurate CPU usage data than the existing designs
354 ++ and will not show tasks as consuming no CPU usage when they actually are. Thus,
355 ++ the amount of CPU reported as being used by BFS will more accurately represent
356 ++ how much CPU the task itself is using (as is shown for example by the 'time'
357 ++ application), so the reported values may be quite different to other schedulers.
358 ++ Values reported as the 'load' are more prone to problems with this design, but
359 ++ per process values are closer to real usage. When comparing throughput of BFS
360 ++ to other designs, it is important to compare the actual completed work in terms
361 ++ of total wall clock time taken and total work done, rather than the reported
365 ++ Con Kolivas <kernel@kolivas.org> Thu Dec 3 2009
366 diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c
367 index 2ad914c..f6da979 100644
368 --- a/arch/powerpc/platforms/cell/spufs/sched.c
369 +++ b/arch/powerpc/platforms/cell/spufs/sched.c
370 @@ -62,11 +62,6 @@ static struct timer_list spusched_timer;
371 static struct timer_list spuloadavg_timer;
374 - * Priority of a normal, non-rt, non-niced'd process (aka nice level 0).
376 -#define NORMAL_PRIO 120
379 * Frequency of the spu scheduler tick. By default we do one SPU scheduler
380 * tick for every 10 CPU scheduler ticks.
382 diff --git a/fs/proc/base.c b/fs/proc/base.c
383 index d467760..8f7ccde 100644
386 @@ -347,7 +347,7 @@ static int proc_pid_wchan(struct task_struct *task, char *buffer)
387 static int proc_pid_schedstat(struct task_struct *task, char *buffer)
389 return sprintf(buffer, "%llu %llu %lu\n",
390 - task->sched_info.cpu_time,
391 + tsk_seruntime(task),
392 task->sched_info.run_delay,
393 task->sched_info.pcount);
395 diff --git a/include/linux/init_task.h b/include/linux/init_task.h
396 index 23fd890..85552e9 100644
397 --- a/include/linux/init_task.h
398 +++ b/include/linux/init_task.h
399 @@ -47,6 +47,11 @@ extern struct files_struct init_files;
400 .posix_timers = LIST_HEAD_INIT(sig.posix_timers), \
401 .cpu_timers = INIT_CPU_TIMERS(sig.cpu_timers), \
402 .rlim = INIT_RLIMITS, \
404 + .cputime = INIT_CPUTIME, \
406 + .lock = __SPIN_LOCK_UNLOCKED(sig.cputimer.lock), \
410 extern struct nsproxy init_nsproxy;
411 @@ -117,6 +122,67 @@ extern struct group_info init_groups;
412 * INIT_TASK is used to set up the first task table, touch at
413 * your own risk!. Base=0, limit=0x1fffff (=2MB)
415 +#ifdef CONFIG_SCHED_BFS
416 +#define INIT_TASK(tsk) \
419 + .stack = &init_thread_info, \
420 + .usage = ATOMIC_INIT(2), \
421 + .flags = PF_KTHREAD, \
422 + .lock_depth = -1, \
423 + .prio = NORMAL_PRIO, \
424 + .static_prio = MAX_PRIO-20, \
425 + .normal_prio = NORMAL_PRIO, \
427 + .policy = SCHED_NORMAL, \
428 + .cpus_allowed = CPU_MASK_ALL, \
430 + .active_mm = &init_mm, \
431 + .run_list = LIST_HEAD_INIT(tsk.run_list), \
432 + .time_slice = HZ, \
433 + .tasks = LIST_HEAD_INIT(tsk.tasks), \
434 + .ptraced = LIST_HEAD_INIT(tsk.ptraced), \
435 + .ptrace_entry = LIST_HEAD_INIT(tsk.ptrace_entry), \
436 + .real_parent = &tsk, \
438 + .children = LIST_HEAD_INIT(tsk.children), \
439 + .sibling = LIST_HEAD_INIT(tsk.sibling), \
440 + .group_leader = &tsk, \
441 + .group_info = &init_groups, \
442 + .cap_effective = CAP_INIT_EFF_SET, \
443 + .cap_inheritable = CAP_INIT_INH_SET, \
444 + .cap_permitted = CAP_FULL_SET, \
445 + .cap_bset = CAP_INIT_BSET, \
446 + .securebits = SECUREBITS_DEFAULT, \
447 + .user = INIT_USER, \
448 + .comm = "swapper", \
449 + .thread = INIT_THREAD, \
451 + .files = &init_files, \
452 + .signal = &init_signals, \
453 + .sighand = &init_sighand, \
454 + .nsproxy = &init_nsproxy, \
456 + .list = LIST_HEAD_INIT(tsk.pending.list), \
457 + .signal = {{0}}}, \
458 + .blocked = {{0}}, \
459 + .alloc_lock = __SPIN_LOCK_UNLOCKED(tsk.alloc_lock), \
460 + .journal_info = NULL, \
461 + .cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \
462 + .fs_excl = ATOMIC_INIT(0), \
463 + .pi_lock = __SPIN_LOCK_UNLOCKED(tsk.pi_lock), \
465 + [PIDTYPE_PID] = INIT_PID_LINK(PIDTYPE_PID), \
466 + [PIDTYPE_PGID] = INIT_PID_LINK(PIDTYPE_PGID), \
467 + [PIDTYPE_SID] = INIT_PID_LINK(PIDTYPE_SID), \
469 + .dirties = INIT_PROP_LOCAL_SINGLE(dirties), \
471 + INIT_TRACE_IRQFLAGS \
474 +#else /* CONFIG_SCHED_BFS */
476 #define INIT_TASK(tsk) \
479 @@ -181,7 +247,7 @@ extern struct group_info init_groups;
480 INIT_TRACE_IRQFLAGS \
484 +#endif /* CONFIG_SCHED_BFS */
486 #define INIT_CPU_TIMERS(cpu_timers) \
488 diff --git a/include/linux/ioprio.h b/include/linux/ioprio.h
489 index f98a656..b342d9d 100644
490 --- a/include/linux/ioprio.h
491 +++ b/include/linux/ioprio.h
492 @@ -64,6 +64,8 @@ static inline int task_ioprio_class(struct io_context *ioc)
494 static inline int task_nice_ioprio(struct task_struct *task)
496 + if (iso_task(task))
498 return (task_nice(task) + 20) / 5;
501 diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
502 index 4a145ca..c0c4a92 100644
503 --- a/include/linux/kernel_stat.h
504 +++ b/include/linux/kernel_stat.h
505 @@ -67,10 +67,16 @@ static inline unsigned int kstat_irqs(unsigned int irq)
508 extern unsigned long long task_delta_exec(struct task_struct *);
509 -extern void account_user_time(struct task_struct *, cputime_t);
510 -extern void account_user_time_scaled(struct task_struct *, cputime_t);
511 -extern void account_system_time(struct task_struct *, int, cputime_t);
512 -extern void account_system_time_scaled(struct task_struct *, cputime_t);
513 -extern void account_steal_time(struct task_struct *, cputime_t);
514 +extern void account_user_time(struct task_struct *, cputime_t, cputime_t);
515 +extern void account_system_time(struct task_struct *, int, cputime_t, cputime_t);
516 +extern void account_steal_time(cputime_t);
517 +extern void account_idle_time(cputime_t);
519 +extern void account_process_tick(struct task_struct *, int user);
520 +extern void account_steal_ticks(unsigned long ticks);
521 +extern void account_idle_ticks(unsigned long ticks);
523 +extern void account_user_time_scaled(struct task_struct *, cputime_t, cputime_t);
524 +extern void account_system_time_scaled(struct task_struct *, cputime_t, cputime_t);
526 #endif /* _LINUX_KERNEL_STAT_H */
527 diff --git a/include/linux/sched.h b/include/linux/sched.h
528 index 3883c32..1b682f2 100644
529 --- a/include/linux/sched.h
530 +++ b/include/linux/sched.h
534 #define SCHED_BATCH 3
535 -/* SCHED_ISO: reserved but not implemented yet */
536 +/* SCHED_ISO: Implemented on BFS only */
538 +#ifdef CONFIG_SCHED_BFS
540 +#define SCHED_IDLEPRIO SCHED_IDLE
541 +#define SCHED_MAX (SCHED_IDLEPRIO)
542 +#define SCHED_RANGE(policy) ((policy) <= SCHED_MAX)
547 @@ -246,7 +252,6 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
548 extern void init_idle(struct task_struct *idle, int cpu);
549 extern void init_idle_bootup_task(struct task_struct *idle);
551 -extern int runqueue_is_locked(void);
552 extern void task_rq_unlock_wait(struct task_struct *p);
554 extern cpumask_t nohz_cpu_mask;
555 @@ -455,16 +460,27 @@ struct task_cputime {
556 #define virt_exp utime
557 #define sched_exp sum_exec_runtime
559 +#define INIT_CPUTIME \
560 + (struct task_cputime) { \
561 + .utime = cputime_zero, \
562 + .stime = cputime_zero, \
563 + .sum_exec_runtime = 0, \
567 - * struct thread_group_cputime - thread group interval timer counts
568 - * @totals: thread group interval timers; substructure for
569 - * uniprocessor kernel, per-cpu for SMP kernel.
570 + * struct thread_group_cputimer - thread group interval timer counts
571 + * @cputime: thread group interval timers.
572 + * @running: non-zero when there are timers running and
573 + * @cputime receives updates.
574 + * @lock: lock for fields in this struct.
576 * This structure contains the version of task_cputime, above, that is
577 - * used for thread group CPU clock calculations.
578 + * used for thread group CPU timer calculations.
580 -struct thread_group_cputime {
581 - struct task_cputime *totals;
582 +struct thread_group_cputimer {
583 + struct task_cputime cputime;
589 @@ -513,10 +529,10 @@ struct signal_struct {
590 cputime_t it_prof_incr, it_virt_incr;
593 - * Thread group totals for process CPU clocks.
594 - * See thread_group_cputime(), et al, for details.
595 + * Thread group totals for process CPU timers.
596 + * See thread_group_cputimer(), et al, for details.
598 - struct thread_group_cputime cputime;
599 + struct thread_group_cputimer cputimer;
601 /* Earliest-expiration cache. */
602 struct task_cputime cputime_expires;
603 @@ -553,7 +569,7 @@ struct signal_struct {
604 * Live threads maintain their own counters and add to these
605 * in __exit_signal, except for the group leader.
607 - cputime_t cutime, cstime;
608 + cputime_t utime, stime, cutime, cstime;
611 unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw;
612 @@ -562,6 +578,14 @@ struct signal_struct {
613 struct task_io_accounting ioac;
616 + * Cumulative ns of scheduled CPU time of dead threads in the
617 + * group, not including a zombie group leader, (This only differs
618 + * from jiffies_to_ns(utime + stime) if sched_clock uses something
619 + * other than jiffies.)
621 + unsigned long long sum_sched_runtime;
624 * We don't bother to synchronize most readers of this at all,
625 * because there is no reader checking a limit that actually needs
626 * to get both rlim_cur and rlim_max atomically, and either one
627 @@ -1080,17 +1104,31 @@ struct task_struct {
629 int lock_depth; /* BKL lock depth */
631 +#ifndef CONFIG_SCHED_BFS
633 #ifdef __ARCH_WANT_UNLOCKED_CTXSW
637 +#else /* CONFIG_SCHED_BFS */
641 int prio, static_prio, normal_prio;
642 unsigned int rt_priority;
643 +#ifdef CONFIG_SCHED_BFS
644 + int time_slice, first_time_slice;
645 + unsigned long deadline;
646 + struct list_head run_list;
648 + u64 sched_time; /* sched_clock time spent running */
650 + unsigned long rt_timeout;
651 +#else /* CONFIG_SCHED_BFS */
652 const struct sched_class *sched_class;
653 struct sched_entity se;
654 struct sched_rt_entity rt;
657 #ifdef CONFIG_PREEMPT_NOTIFIERS
658 /* list of struct preempt_notifier: */
659 @@ -1113,6 +1151,9 @@ struct task_struct {
662 cpumask_t cpus_allowed;
663 +#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_SCHED_BFS)
664 + cpumask_t unplugged_mask;
667 #ifdef CONFIG_PREEMPT_RCU
668 int rcu_read_lock_nesting;
669 @@ -1173,6 +1214,9 @@ struct task_struct {
670 int __user *clear_child_tid; /* CLONE_CHILD_CLEARTID */
672 cputime_t utime, stime, utimescaled, stimescaled;
673 +#ifdef CONFIG_SCHED_BFS
674 + unsigned long utime_pc, stime_pc;
677 cputime_t prev_utime, prev_stime;
678 unsigned long nvcsw, nivcsw; /* context switch counts */
679 @@ -1357,6 +1401,64 @@ struct task_struct {
680 struct list_head *scm_work_list;
683 +#ifdef CONFIG_SCHED_BFS
684 +extern int grunqueue_is_locked(void);
685 +extern void grq_unlock_wait(void);
686 +#define tsk_seruntime(t) ((t)->sched_time)
687 +#define tsk_rttimeout(t) ((t)->rt_timeout)
688 +#define task_rq_unlock_wait(tsk) grq_unlock_wait()
690 +static inline void set_oom_timeslice(struct task_struct *p)
692 + p->time_slice = HZ;
695 +static inline void tsk_cpus_current(struct task_struct *p)
699 +#define runqueue_is_locked() grunqueue_is_locked()
701 +static inline void print_scheduler_version(void)
703 + printk(KERN_INFO"BFS CPU scheduler v0.316 by Con Kolivas ported by ToAsTcfh.\n");
706 +static inline int iso_task(struct task_struct *p)
708 + return (p->policy == SCHED_ISO);
711 +extern int runqueue_is_locked(void);
712 +extern void task_rq_unlock_wait(struct task_struct *p);
713 +#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
714 +#define tsk_rttimeout(t) ((t)->rt.timeout)
716 +static inline void sched_exit(struct task_struct *p)
720 +static inline void set_oom_timeslice(struct task_struct *p)
722 + p->rt.time_slice = HZ;
725 +static inline void tsk_cpus_current(struct task_struct *p)
727 + p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed;
730 +static inline void print_scheduler_version(void)
732 + printk(KERN_INFO"CFS CPU scheduler.\n");
735 +static inline int iso_task(struct task_struct *p)
742 * Priority of a process goes from 0..MAX_PRIO-1, valid RT
743 * priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH
744 @@ -1372,9 +1474,19 @@ struct task_struct {
746 #define MAX_USER_RT_PRIO 100
747 #define MAX_RT_PRIO MAX_USER_RT_PRIO
749 +#define DEFAULT_PRIO (MAX_RT_PRIO + 20)
751 +#ifdef CONFIG_SCHED_BFS
752 +#define PRIO_RANGE (40)
753 +#define MAX_PRIO (MAX_RT_PRIO + PRIO_RANGE)
754 +#define ISO_PRIO (MAX_RT_PRIO)
755 +#define NORMAL_PRIO (MAX_RT_PRIO + 1)
756 +#define IDLE_PRIO (MAX_RT_PRIO + 2)
757 +#define PRIO_LIMIT ((IDLE_PRIO) + 1)
758 +#else /* CONFIG_SCHED_BFS */
759 #define MAX_PRIO (MAX_RT_PRIO + 40)
760 -#define DEFAULT_PRIO (MAX_RT_PRIO + 20)
761 +#define NORMAL_PRIO DEFAULT_PRIO
762 +#endif /* CONFIG_SCHED_BFS */
764 static inline int rt_prio(int prio)
766 @@ -1642,7 +1754,7 @@ task_sched_runtime(struct task_struct *task);
767 extern unsigned long long thread_group_sched_runtime(struct task_struct *task);
769 /* sched_exec is called by processes performing an exec */
771 +#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_BFS)
772 extern void sched_exec(void);
774 #define sched_exec() {}
775 @@ -1791,6 +1903,9 @@ extern void wake_up_new_task(struct task_struct *tsk,
776 static inline void kick_process(struct task_struct *tsk) { }
778 extern void sched_fork(struct task_struct *p, int clone_flags);
779 +#ifdef CONFIG_SCHED_BFS
780 +extern void sched_exit(struct task_struct *p);
782 extern void sched_dead(struct task_struct *p);
784 extern int in_group_p(gid_t);
785 @@ -2140,25 +2255,18 @@ static inline int spin_needbreak(spinlock_t *lock)
787 * Thread group CPU time accounting.
790 -extern int thread_group_cputime_alloc(struct task_struct *);
791 -extern void thread_group_cputime(struct task_struct *, struct task_cputime *);
792 +void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times);
793 +void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times);
795 static inline void thread_group_cputime_init(struct signal_struct *sig)
797 - sig->cputime.totals = NULL;
800 -static inline int thread_group_cputime_clone_thread(struct task_struct *curr)
802 - if (curr->signal->cputime.totals)
804 - return thread_group_cputime_alloc(curr);
805 + sig->cputimer.cputime = INIT_CPUTIME;
806 + spin_lock_init(&sig->cputimer.lock);
807 + sig->cputimer.running = 0;
810 static inline void thread_group_cputime_free(struct signal_struct *sig)
812 - free_percpu(sig->cputime.totals);
816 diff --git a/init/Kconfig b/init/Kconfig
817 index f763762..12b3a4a 100644
820 @@ -18,6 +18,19 @@ config DEFCONFIG_LIST
825 + bool "BFS cpu scheduler"
827 + The Brain Fuck CPU Scheduler for excellent interactivity and
828 + responsiveness on the desktop and solid scalability on normal
829 + hardware. Not recommended for 4096 CPUs.
831 + Currently incompatible with the Group CPU scheduler.
838 bool "Prompt for development and/or incomplete code/drivers"
840 @@ -332,7 +345,7 @@ config HAVE_UNSTABLE_SCHED_CLOCK
843 bool "Group CPU scheduler"
844 - depends on EXPERIMENTAL
845 + depends on EXPERIMENTAL && !SCHED_BFS
848 This feature lets CPU scheduler recognize task groups and control CPU
849 @@ -381,7 +394,7 @@ endchoice
851 config CGROUP_CPUACCT
852 bool "Simple CPU accounting cgroup subsystem"
854 + depends on CGROUPS && !SCHED_BFS
856 Provides a simple Resource Controller for monitoring the
857 total CPU consumed by the tasks in a cgroup
858 diff --git a/init/main.c b/init/main.c
859 index 7e117a2..ea6d26c 100644
862 @@ -800,6 +800,9 @@ static int noinline init_post(void)
863 system_state = SYSTEM_RUNNING;
864 numa_default_policy();
866 + print_scheduler_version();
869 if (sys_open((const char __user *) "/dev/console", O_RDWR, 0) < 0)
870 printk(KERN_WARNING "Warning: unable to open an initial console.\n");
872 diff --git a/kernel/delayacct.c b/kernel/delayacct.c
873 index b3179da..cbdc400 100644
874 --- a/kernel/delayacct.c
875 +++ b/kernel/delayacct.c
876 @@ -127,7 +127,7 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
878 t1 = tsk->sched_info.pcount;
879 t2 = tsk->sched_info.run_delay;
880 - t3 = tsk->sched_info.cpu_time;
881 + t3 = tsk_seruntime(tsk);
885 diff --git a/kernel/exit.c b/kernel/exit.c
886 index 2d8be7e..7413c2a 100644
889 @@ -112,6 +112,8 @@ static void __exit_signal(struct task_struct *tsk)
890 * We won't ever get here for the group leader, since it
891 * will have been the last reference on the signal_struct.
893 + sig->utime = cputime_add(sig->utime, task_utime(tsk));
894 + sig->stime = cputime_add(sig->stime, task_stime(tsk));
895 sig->gtime = cputime_add(sig->gtime, task_gtime(tsk));
896 sig->min_flt += tsk->min_flt;
897 sig->maj_flt += tsk->maj_flt;
898 @@ -120,6 +122,7 @@ static void __exit_signal(struct task_struct *tsk)
899 sig->inblock += task_io_get_inblock(tsk);
900 sig->oublock += task_io_get_oublock(tsk);
901 task_io_accounting_add(&sig->ioac, &tsk->ioac);
902 + sig->sum_sched_runtime += tsk_seruntime(tsk);
903 sig = NULL; /* Marker for below. */
906 diff --git a/kernel/fork.c b/kernel/fork.c
907 index 495da2e..fe5befb 100644
910 @@ -806,14 +806,15 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
913 if (clone_flags & CLONE_THREAD) {
914 - ret = thread_group_cputime_clone_thread(current);
915 - if (likely(!ret)) {
916 - atomic_inc(&current->signal->count);
917 - atomic_inc(&current->signal->live);
920 + atomic_inc(&current->signal->count);
921 + atomic_inc(&current->signal->live);
924 sig = kmem_cache_alloc(signal_cachep, GFP_KERNEL);
927 + posix_cpu_timers_init_group(sig);
932 @@ -843,21 +844,20 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
933 sig->tty_old_pgrp = NULL;
936 - sig->cutime = sig->cstime = cputime_zero;
937 + sig->utime = sig->stime = sig->cutime = sig->cstime = cputime_zero;
938 sig->gtime = cputime_zero;
939 sig->cgtime = cputime_zero;
940 sig->nvcsw = sig->nivcsw = sig->cnvcsw = sig->cnivcsw = 0;
941 sig->min_flt = sig->maj_flt = sig->cmin_flt = sig->cmaj_flt = 0;
942 sig->inblock = sig->oublock = sig->cinblock = sig->coublock = 0;
943 task_io_accounting_init(&sig->ioac);
944 + sig->sum_sched_runtime = 0;
945 taskstats_tgid_init(sig);
947 task_lock(current->group_leader);
948 memcpy(sig->rlim, current->signal->rlim, sizeof sig->rlim);
949 task_unlock(current->group_leader);
951 - posix_cpu_timers_init_group(sig);
953 acct_init_pacct(&sig->pacct);
956 @@ -1211,7 +1211,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
957 * parent's CPU). This avoids alot of nasty races.
959 p->cpus_allowed = current->cpus_allowed;
960 - p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed;
961 + tsk_cpus_current(p);
962 if (unlikely(!cpu_isset(task_cpu(p), p->cpus_allowed) ||
963 !cpu_online(task_cpu(p))))
964 set_task_cpu(p, smp_processor_id());
965 diff --git a/kernel/itimer.c b/kernel/itimer.c
966 index db7c358..14294c0 100644
967 --- a/kernel/itimer.c
968 +++ b/kernel/itimer.c
969 @@ -62,7 +62,7 @@ int do_getitimer(int which, struct itimerval *value)
970 struct task_cputime cputime;
973 - thread_group_cputime(tsk, &cputime);
974 + thread_group_cputimer(tsk, &cputime);
975 utime = cputime.utime;
976 if (cputime_le(cval, utime)) { /* about to fire */
977 cval = jiffies_to_cputime(1);
978 @@ -82,7 +82,7 @@ int do_getitimer(int which, struct itimerval *value)
979 struct task_cputime times;
982 - thread_group_cputime(tsk, &times);
983 + thread_group_cputimer(tsk, &times);
984 ptime = cputime_add(times.utime, times.stime);
985 if (cputime_le(cval, ptime)) { /* about to fire */
986 cval = jiffies_to_cputime(1);
987 diff --git a/kernel/kthread.c b/kernel/kthread.c
988 index 8e7a7ce..af9eace 100644
989 --- a/kernel/kthread.c
990 +++ b/kernel/kthread.c
992 #include <linux/mutex.h>
993 #include <trace/sched.h>
995 -#define KTHREAD_NICE_LEVEL (-5)
996 +#define KTHREAD_NICE_LEVEL (0)
998 static DEFINE_SPINLOCK(kthread_create_lock);
999 static LIST_HEAD(kthread_create_list);
1000 @@ -179,7 +179,6 @@ void kthread_bind(struct task_struct *k, unsigned int cpu)
1002 set_task_cpu(k, cpu);
1003 k->cpus_allowed = cpumask_of_cpu(cpu);
1004 - k->rt.nr_cpus_allowed = 1;
1005 k->flags |= PF_THREAD_BOUND;
1007 EXPORT_SYMBOL(kthread_bind);
1008 diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
1009 index 4e5288a..d1eef76 100644
1010 --- a/kernel/posix-cpu-timers.c
1011 +++ b/kernel/posix-cpu-timers.c
1013 #include <linux/kernel_stat.h>
1016 - * Allocate the thread_group_cputime structure appropriately and fill in the
1017 - * current values of the fields. Called from copy_signal() via
1018 - * thread_group_cputime_clone_thread() when adding a second or subsequent
1019 - * thread to a thread group. Assumes interrupts are enabled when called.
1021 -int thread_group_cputime_alloc(struct task_struct *tsk)
1023 - struct signal_struct *sig = tsk->signal;
1024 - struct task_cputime *cputime;
1027 - * If we have multiple threads and we don't already have a
1028 - * per-CPU task_cputime struct (checked in the caller), allocate
1029 - * one and fill it in with the times accumulated so far. We may
1030 - * race with another thread so recheck after we pick up the sighand
1033 - cputime = alloc_percpu(struct task_cputime);
1034 - if (cputime == NULL)
1036 - spin_lock_irq(&tsk->sighand->siglock);
1037 - if (sig->cputime.totals) {
1038 - spin_unlock_irq(&tsk->sighand->siglock);
1039 - free_percpu(cputime);
1042 - sig->cputime.totals = cputime;
1043 - cputime = per_cpu_ptr(sig->cputime.totals, smp_processor_id());
1044 - cputime->utime = tsk->utime;
1045 - cputime->stime = tsk->stime;
1046 - cputime->sum_exec_runtime = tsk->se.sum_exec_runtime;
1047 - spin_unlock_irq(&tsk->sighand->siglock);
1052 - * thread_group_cputime - Sum the thread group time fields across all CPUs.
1054 - * @tsk: The task we use to identify the thread group.
1055 - * @times: task_cputime structure in which we return the summed fields.
1057 - * Walk the list of CPUs to sum the per-CPU time fields in the thread group
1060 -void thread_group_cputime(
1061 - struct task_struct *tsk,
1062 - struct task_cputime *times)
1064 - struct signal_struct *sig;
1066 - struct task_cputime *tot;
1068 - sig = tsk->signal;
1069 - if (unlikely(!sig) || !sig->cputime.totals) {
1070 - times->utime = tsk->utime;
1071 - times->stime = tsk->stime;
1072 - times->sum_exec_runtime = tsk->se.sum_exec_runtime;
1075 - times->stime = times->utime = cputime_zero;
1076 - times->sum_exec_runtime = 0;
1077 - for_each_possible_cpu(i) {
1078 - tot = per_cpu_ptr(tsk->signal->cputime.totals, i);
1079 - times->utime = cputime_add(times->utime, tot->utime);
1080 - times->stime = cputime_add(times->stime, tot->stime);
1081 - times->sum_exec_runtime += tot->sum_exec_runtime;
1086 * Called after updating RLIMIT_CPU to set timer expiration if necessary.
1088 void update_rlimit_cpu(unsigned long rlim_new)
1089 @@ -294,12 +224,77 @@ static int cpu_clock_sample(const clockid_t which_clock, struct task_struct *p,
1090 cpu->cpu = virt_ticks(p);
1092 case CPUCLOCK_SCHED:
1093 - cpu->sched = task_sched_runtime(p);
1094 + cpu->sched = task_sched_runtime(p);
1100 +void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
1102 + struct sighand_struct *sighand;
1103 + struct signal_struct *sig;
1104 + struct task_struct *t;
1106 + *times = INIT_CPUTIME;
1109 + sighand = rcu_dereference(tsk->sighand);
1113 + sig = tsk->signal;
1117 + times->utime = cputime_add(times->utime, t->utime);
1118 + times->stime = cputime_add(times->stime, t->stime);
1119 + times->sum_exec_runtime += tsk_seruntime(t);
1121 + t = next_thread(t);
1122 + } while (t != tsk);
1124 + times->utime = cputime_add(times->utime, sig->utime);
1125 + times->stime = cputime_add(times->stime, sig->stime);
1126 + times->sum_exec_runtime += sig->sum_sched_runtime;
1128 + rcu_read_unlock();
1131 +static void update_gt_cputime(struct task_cputime *a, struct task_cputime *b)
1133 + if (cputime_gt(b->utime, a->utime))
1134 + a->utime = b->utime;
1136 + if (cputime_gt(b->stime, a->stime))
1137 + a->stime = b->stime;
1139 + if (b->sum_exec_runtime > a->sum_exec_runtime)
1140 + a->sum_exec_runtime = b->sum_exec_runtime;
1143 +void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times)
1145 + struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;
1146 + struct task_cputime sum;
1147 + unsigned long flags;
1149 + spin_lock_irqsave(&cputimer->lock, flags);
1150 + if (!cputimer->running) {
1151 + cputimer->running = 1;
1153 + * The POSIX timer interface allows for absolute time expiry
1154 + * values through the TIMER_ABSTIME flag, therefore we have
1155 + * to synchronize the timer to the clock every time we start
1158 + thread_group_cputime(tsk, &sum);
1159 + update_gt_cputime(&cputimer->cputime, &sum);
1161 + *times = cputimer->cputime;
1162 + spin_unlock_irqrestore(&cputimer->lock, flags);
1166 * Sample a process (thread group) clock for the given group_leader task.
1167 * Must be called with tasklist_lock held for reading.
1168 @@ -520,16 +515,17 @@ static void cleanup_timers(struct list_head *head,
1169 void posix_cpu_timers_exit(struct task_struct *tsk)
1171 cleanup_timers(tsk->cpu_timers,
1172 - tsk->utime, tsk->stime, tsk->se.sum_exec_runtime);
1173 + tsk->utime, tsk->stime, tsk_seruntime(tsk));
1176 void posix_cpu_timers_exit_group(struct task_struct *tsk)
1178 - struct task_cputime cputime;
1179 + struct signal_struct *const sig = tsk->signal;
1181 - thread_group_cputime(tsk, &cputime);
1182 cleanup_timers(tsk->signal->cpu_timers,
1183 - cputime.utime, cputime.stime, cputime.sum_exec_runtime);
1184 + cputime_add(tsk->utime, sig->utime),
1185 + cputime_add(tsk->stime, sig->stime),
1186 + tsk_seruntime(tsk) + sig->sum_sched_runtime);
1189 static void clear_dead_task(struct k_itimer *timer, union cpu_time_count now)
1190 @@ -686,6 +682,33 @@ static void cpu_timer_fire(struct k_itimer *timer)
1194 + * Sample a process (thread group) timer for the given group_leader task.
1195 + * Must be called with tasklist_lock held for reading.
1197 +static int cpu_timer_sample_group(const clockid_t which_clock,
1198 + struct task_struct *p,
1199 + union cpu_time_count *cpu)
1201 + struct task_cputime cputime;
1203 + thread_group_cputimer(p, &cputime);
1204 + switch (CPUCLOCK_WHICH(which_clock)) {
1207 + case CPUCLOCK_PROF:
1208 + cpu->cpu = cputime_add(cputime.utime, cputime.stime);
1210 + case CPUCLOCK_VIRT:
1211 + cpu->cpu = cputime.utime;
1213 + case CPUCLOCK_SCHED:
1214 + cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p);
1221 * Guts of sys_timer_settime for CPU timers.
1222 * This is called with the timer locked and interrupts disabled.
1223 * If we return TIMER_RETRY, it's necessary to release the timer's lock
1224 @@ -746,7 +769,7 @@ int posix_cpu_timer_set(struct k_itimer *timer, int flags,
1225 if (CPUCLOCK_PERTHREAD(timer->it_clock)) {
1226 cpu_clock_sample(timer->it_clock, p, &val);
1228 - cpu_clock_sample_group(timer->it_clock, p, &val);
1229 + cpu_timer_sample_group(timer->it_clock, p, &val);
1233 @@ -894,7 +917,7 @@ void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec *itp)
1234 read_unlock(&tasklist_lock);
1237 - cpu_clock_sample_group(timer->it_clock, p, &now);
1238 + cpu_timer_sample_group(timer->it_clock, p, &now);
1239 clear_dead = (unlikely(p->exit_state) &&
1240 thread_group_empty(p));
1242 @@ -956,6 +979,7 @@ static void check_thread_timers(struct task_struct *tsk,
1244 struct list_head *timers = tsk->cpu_timers;
1245 struct signal_struct *const sig = tsk->signal;
1246 + unsigned long soft;
1249 tsk->cputime_expires.prof_exp = cputime_zero;
1250 @@ -993,7 +1017,7 @@ static void check_thread_timers(struct task_struct *tsk,
1251 struct cpu_timer_list *t = list_first_entry(timers,
1252 struct cpu_timer_list,
1254 - if (!--maxfire || tsk->se.sum_exec_runtime < t->expires.sched) {
1255 + if (!--maxfire || tsk_seruntime(tsk) < t->expires.sched) {
1256 tsk->cputime_expires.sched_exp = t->expires.sched;
1259 @@ -1004,12 +1028,13 @@ static void check_thread_timers(struct task_struct *tsk,
1261 * Check for the special case thread timers.
1263 - if (sig->rlim[RLIMIT_RTTIME].rlim_cur != RLIM_INFINITY) {
1264 - unsigned long hard = sig->rlim[RLIMIT_RTTIME].rlim_max;
1265 - unsigned long *soft = &sig->rlim[RLIMIT_RTTIME].rlim_cur;
1266 + soft = ACCESS_ONCE(sig->rlim[RLIMIT_RTTIME].rlim_cur);
1267 + if (soft != RLIM_INFINITY) {
1268 + unsigned long hard =
1269 + ACCESS_ONCE(sig->rlim[RLIMIT_RTTIME].rlim_max);
1271 if (hard != RLIM_INFINITY &&
1272 - tsk->rt.timeout > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) {
1273 + tsk_rttimeout(tsk) > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) {
1275 * At the hard limit, we just die.
1276 * No need to calculate anything else now.
1277 @@ -1017,14 +1042,13 @@ static void check_thread_timers(struct task_struct *tsk,
1278 __group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
1281 - if (tsk->rt.timeout > DIV_ROUND_UP(*soft, USEC_PER_SEC/HZ)) {
1282 + if (tsk_rttimeout(tsk) > DIV_ROUND_UP(soft, USEC_PER_SEC/HZ)) {
1284 * At the soft limit, send a SIGXCPU every second.
1286 - if (sig->rlim[RLIMIT_RTTIME].rlim_cur
1287 - < sig->rlim[RLIMIT_RTTIME].rlim_max) {
1288 - sig->rlim[RLIMIT_RTTIME].rlim_cur +=
1291 + soft += USEC_PER_SEC;
1292 + sig->rlim[RLIMIT_RTTIME].rlim_cur = soft;
1295 "RT Watchdog Timeout: %s[%d]\n",
1296 @@ -1034,6 +1058,19 @@ static void check_thread_timers(struct task_struct *tsk,
1300 +static void stop_process_timers(struct task_struct *tsk)
1302 + struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;
1303 + unsigned long flags;
1305 + if (!cputimer->running)
1308 + spin_lock_irqsave(&cputimer->lock, flags);
1309 + cputimer->running = 0;
1310 + spin_unlock_irqrestore(&cputimer->lock, flags);
1314 * Check for any per-thread CPU timers that have fired and move them
1315 * off the tsk->*_timers list onto the firing list. Per-thread timers
1316 @@ -1057,13 +1094,15 @@ static void check_process_timers(struct task_struct *tsk,
1317 sig->rlim[RLIMIT_CPU].rlim_cur == RLIM_INFINITY &&
1318 list_empty(&timers[CPUCLOCK_VIRT]) &&
1319 cputime_eq(sig->it_virt_expires, cputime_zero) &&
1320 - list_empty(&timers[CPUCLOCK_SCHED]))
1321 + list_empty(&timers[CPUCLOCK_SCHED])) {
1322 + stop_process_timers(tsk);
1327 * Collect the current process totals.
1329 - thread_group_cputime(tsk, &cputime);
1330 + thread_group_cputimer(tsk, &cputime);
1331 utime = cputime.utime;
1332 ptime = cputime_add(utime, cputime.stime);
1333 sum_sched_runtime = cputime.sum_exec_runtime;
1334 @@ -1234,7 +1273,7 @@ void posix_cpu_timer_schedule(struct k_itimer *timer)
1335 clear_dead_task(timer, now);
1338 - cpu_clock_sample_group(timer->it_clock, p, &now);
1339 + cpu_timer_sample_group(timer->it_clock, p, &now);
1340 bump_cpu_timer(timer, now);
1341 /* Leave the tasklist_lock locked for the call below. */
1343 @@ -1318,7 +1357,7 @@ static inline int fastpath_timer_check(struct task_struct *tsk)
1344 struct task_cputime task_sample = {
1345 .utime = tsk->utime,
1346 .stime = tsk->stime,
1347 - .sum_exec_runtime = tsk->se.sum_exec_runtime
1348 + .sum_exec_runtime = tsk_seruntime(tsk)
1351 if (task_cputime_expired(&task_sample, &tsk->cputime_expires))
1352 @@ -1329,7 +1368,7 @@ static inline int fastpath_timer_check(struct task_struct *tsk)
1353 if (!task_cputime_zero(&sig->cputime_expires)) {
1354 struct task_cputime group_sample;
1356 - thread_group_cputime(tsk, &group_sample);
1357 + thread_group_cputimer(tsk, &group_sample);
1358 if (task_cputime_expired(&group_sample, &sig->cputime_expires))
1361 @@ -1411,7 +1450,7 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clock_idx,
1362 struct list_head *head;
1364 BUG_ON(clock_idx == CPUCLOCK_SCHED);
1365 - cpu_clock_sample_group(clock_idx, tsk, &now);
1366 + cpu_timer_sample_group(clock_idx, tsk, &now);
1369 if (!cputime_eq(*oldval, cputime_zero)) {
1370 diff --git a/kernel/sched.c b/kernel/sched.c
1371 index e4bb1dd..2869e03 100644
1372 --- a/kernel/sched.c
1373 +++ b/kernel/sched.c
1375 +#ifdef CONFIG_SCHED_BFS
1376 +#include "sched_bfs.c"
1381 @@ -4203,7 +4206,6 @@ void account_steal_time(struct task_struct *p, cputime_t steal)
1383 if (p == rq->idle) {
1384 p->stime = cputime_add(p->stime, steal);
1385 - account_group_system_time(p, steal);
1386 if (atomic_read(&rq->nr_iowait) > 0)
1387 cpustat->iowait = cputime64_add(cpustat->iowait, tmp);
1389 @@ -4339,7 +4341,7 @@ void __kprobes sub_preempt_count(int val)
1393 - if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
1394 + if (DEBUG_LOCKS_WARN_ON(val > preempt_count() - (!!kernel_locked())))
1397 * Is the spinlock portion underflowing?
1398 @@ -9388,3 +9390,4 @@ struct cgroup_subsys cpuacct_subsys = {
1399 .subsys_id = cpuacct_subsys_id,
1401 #endif /* CONFIG_CGROUP_CPUACCT */
1402 +#endif /* CONFIG_SCHED_BFS */
1403 diff --git a/kernel/sched_bfs.c b/kernel/sched_bfs.c
1404 new file mode 100644
1405 index 0000000..7cc1752
1407 +++ b/kernel/sched_bfs.c
1410 + * kernel/sched_bfs.c, was sched.c
1412 + * Kernel scheduler and related syscalls
1414 + * Copyright (C) 1991-2002 Linus Torvalds
1416 + * 1996-12-23 Modified by Dave Grothe to fix bugs in semaphores and
1417 + * make semaphores SMP safe
1418 + * 1998-11-19 Implemented schedule_timeout() and related stuff
1419 + * by Andrea Arcangeli
1420 + * 2002-01-04 New ultra-scalable O(1) scheduler by Ingo Molnar:
1421 + * hybrid priority-list and round-robin design with
1422 + * an array-switch method of distributing timeslices
1423 + * and per-CPU runqueues. Cleanups and useful suggestions
1424 + * by Davide Libenzi, preemptible kernel bits by Robert Love.
1425 + * 2003-09-03 Interactivity tuning by Con Kolivas.
1426 + * 2004-04-02 Scheduler domains code by Nick Piggin
1427 + * 2007-04-15 Work begun on replacing all interactivity tuning with a
1428 + * fair scheduling design by Con Kolivas.
1429 + * 2007-05-05 Load balancing (smp-nice) and other improvements
1430 + * by Peter Williams
1431 + * 2007-05-06 Interactivity improvements to CFS by Mike Galbraith
1432 + * 2007-07-01 Group scheduling enhancements by Srivatsa Vaddagiri
1433 + * 2007-11-29 RT balancing improvements by Steven Rostedt, Gregory Haskins,
1434 + * Thomas Gleixner, Mike Kravetz
1435 + * now Brainfuck deadline scheduling policy by Con Kolivas deletes
1436 + * a whole lot of those previous things.
1439 +#include <linux/mm.h>
1440 +#include <linux/module.h>
1441 +#include <linux/nmi.h>
1442 +#include <linux/init.h>
1443 +#include <asm/uaccess.h>
1444 +#include <linux/highmem.h>
1445 +#include <linux/smp_lock.h>
1446 +#include <asm/mmu_context.h>
1447 +#include <linux/interrupt.h>
1448 +#include <linux/capability.h>
1449 +#include <linux/completion.h>
1450 +#include <linux/kernel_stat.h>
1451 +#include <linux/debug_locks.h>
1452 +#include <linux/security.h>
1453 +#include <linux/notifier.h>
1454 +#include <linux/profile.h>
1455 +#include <linux/freezer.h>
1456 +#include <linux/vmalloc.h>
1457 +#include <linux/blkdev.h>
1458 +#include <linux/delay.h>
1459 +#include <linux/smp.h>
1460 +#include <linux/threads.h>
1461 +#include <linux/timer.h>
1462 +#include <linux/rcupdate.h>
1463 +#include <linux/cpu.h>
1464 +#include <linux/cpuset.h>
1465 +#include <linux/cpumask.h>
1466 +#include <linux/percpu.h>
1467 +#include <linux/kthread.h>
1468 +#include <linux/seq_file.h>
1469 +#include <linux/syscalls.h>
1470 +#include <linux/times.h>
1471 +#include <linux/tsacct_kern.h>
1472 +#include <linux/kprobes.h>
1473 +#include <linux/delayacct.h>
1474 +#include <linux/reciprocal_div.h>
1475 +#include <linux/log2.h>
1476 +#include <linux/bootmem.h>
1477 +#include <linux/ftrace.h>
1478 +#include <asm/irq_regs.h>
1479 +#include <asm/tlb.h>
1480 +#include <asm/unistd.h>
1482 +#define rt_prio(prio) unlikely((prio) < MAX_RT_PRIO)
1483 +#define rt_task(p) rt_prio((p)->prio)
1484 +#define rt_queue(rq) rt_prio((rq)->rq_prio)
1485 +#define batch_task(p) (unlikely((p)->policy == SCHED_BATCH))
1486 +#define is_rt_policy(policy) ((policy) == SCHED_FIFO || \
1487 + (policy) == SCHED_RR)
1488 +#define has_rt_policy(p) unlikely(is_rt_policy((p)->policy))
1489 +#define idleprio_task(p) unlikely((p)->policy == SCHED_IDLEPRIO)
1490 +#define iso_task(p) unlikely((p)->policy == SCHED_ISO)
1491 +#define iso_queue(rq) unlikely((rq)->rq_policy == SCHED_ISO)
1492 +#define ISO_PERIOD ((5 * HZ * num_online_cpus()) + 1)
1495 + * Convert user-nice values [ -20 ... 0 ... 19 ]
1496 + * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
1499 +#define NICE_TO_PRIO(nice) (MAX_RT_PRIO + (nice) + 20)
1500 +#define PRIO_TO_NICE(prio) ((prio) - MAX_RT_PRIO - 20)
1501 +#define TASK_NICE(p) PRIO_TO_NICE((p)->static_prio)
1504 + * 'User priority' is the nice value converted to something we
1505 + * can work with better when scaling various scheduler parameters,
1506 + * it's a [ 0 ... 39 ] range.
1508 +#define USER_PRIO(p) ((p)-MAX_RT_PRIO)
1509 +#define TASK_USER_PRIO(p) USER_PRIO((p)->static_prio)
1510 +#define MAX_USER_PRIO (USER_PRIO(MAX_PRIO))
1511 +#define SCHED_PRIO(p) ((p)+MAX_RT_PRIO)
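
For reference, the mappings above work out as follows. An illustrative userspace sketch, not part of the patch, assuming the usual MAX_RT_PRIO of 100 and a 40 level nice range:

#include <stdio.h>

/* Assumed values for the sketch; the kernel gets these from its own headers. */
#define MAX_RT_PRIO  100
#define MAX_PRIO     (MAX_RT_PRIO + 40)

#define NICE_TO_PRIO(nice)  (MAX_RT_PRIO + (nice) + 20)
#define PRIO_TO_NICE(prio)  ((prio) - MAX_RT_PRIO - 20)
#define USER_PRIO(p)        ((p) - MAX_RT_PRIO)
#define MAX_USER_PRIO       (USER_PRIO(MAX_PRIO))

int main(void)
{
	/* nice -20, 0 and 19 become static priorities 100, 120 and 139 */
	printf("%d %d %d\n", NICE_TO_PRIO(-20), NICE_TO_PRIO(0), NICE_TO_PRIO(19));
	/* and back: static priority 139 is nice 19, user priority 39 of 40 levels */
	printf("%d %d %d\n", PRIO_TO_NICE(139), USER_PRIO(139), MAX_USER_PRIO);
	return 0;
}
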
1513 +/* Some helpers for converting to/from various scales. */
1514 +#define JIFFIES_TO_NS(TIME) ((TIME) * (1000000000 / HZ))
1515 +#define MS_TO_NS(TIME) ((TIME) * 1000000)
1516 +#define MS_TO_US(TIME) ((TIME) * 1000)
1520 + * Divide a load by a sched group cpu_power : (load / sg->__cpu_power)
1521 + * Since cpu_power is a 'constant', we can use a reciprocal divide.
1523 +static inline u32 sg_div_cpu_power(const struct sched_group *sg, u32 load)
1525 + return reciprocal_divide(load, sg->reciprocal_cpu_power);
1529 + * Each time a sched group cpu_power is changed,
1530 + * we must compute its reciprocal value
1532 +static inline void sg_inc_cpu_power(struct sched_group *sg, u32 val)
1534 + sg->__cpu_power += val;
1535 + sg->reciprocal_cpu_power = reciprocal_value(sg->__cpu_power);
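
The point of the reciprocal here is that cpu_power changes rarely while the division happens on hot paths, so the divide is precomputed once and replaced by a multiply and a shift. A self-contained userspace sketch of the same technique (these helpers only mirror what the kernel's linux/reciprocal_div.h provides, they are not its code):

#include <stdio.h>
#include <stdint.h>

/* ceil(2^32 / d), computed once whenever the divisor changes */
static uint32_t recip_value(uint32_t d)
{
	return (uint32_t)(((1ULL << 32) + d - 1) / d);
}

/* a / d approximated as (a * recip) >> 32 */
static uint32_t recip_divide(uint32_t a, uint32_t recip)
{
	return (uint32_t)(((uint64_t)a * recip) >> 32);
}

int main(void)
{
	uint32_t cpu_power = 1178;              /* arbitrary "constant" divisor */
	uint32_t recip = recip_value(cpu_power);
	uint32_t load = 400000;

	/* hot path: no hardware divide; both print 339 here */
	printf("%u %u\n", recip_divide(load, recip), load / cpu_power);
	return 0;
}
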
1540 + * This is the time over which all tasks of the same priority round robin.
1541 + * Value is in ms and set to a minimum of 6ms. Scales with number of cpus.
1542 + * Tunable via /proc interface.
1544 +int rr_interval __read_mostly = 6;
1547 + * sched_iso_cpu - sysctl which determines the cpu percentage SCHED_ISO tasks
1548 + * are allowed to run as real time tasks, averaged over a rolling five
1549 + * second period. This is the total over all online cpus.
1551 +int sched_iso_cpu __read_mostly = 70;
1554 + * The relative length of deadline for each priority (nice) level.
1556 +static int prio_ratios[PRIO_RANGE] __read_mostly;
1559 + * The quota handed out to tasks of all priority levels when refilling their
1562 +static inline unsigned long timeslice(void)
1564 + return MS_TO_US(rr_interval);
1568 + * The global runqueue data that all CPUs work off. All data is protected
1573 + unsigned long nr_running;
1574 + unsigned long nr_uninterruptible;
1575 + unsigned long long nr_switches;
1576 + struct list_head queue[PRIO_LIMIT];
1577 + DECLARE_BITMAP(prio_bitmap, PRIO_LIMIT + 1);
1579 + int iso_refractory;
1581 + unsigned long qnr; /* queued not running */
1582 + cpumask_t cpu_idle_map;
1586 +/* There can be only one */
1587 +static struct global_rq grq;
1590 + * This is the main, per-CPU runqueue data structure.
1591 + * This data should only be modified by the local cpu.
1595 +#ifdef CONFIG_NO_HZ
1596 + unsigned char in_nohz_recently;
1600 + struct task_struct *curr, *idle;
1601 + struct mm_struct *prev_mm;
1603 + /* Stored data about rq->curr to work outside grq lock */
1604 + unsigned long rq_deadline;
1605 + unsigned int rq_policy;
1606 + int rq_time_slice;
1610 + /* Accurate timekeeping data */
1611 + u64 timekeep_clock;
1612 + unsigned long user_pc, nice_pc, irq_pc, softirq_pc, system_pc,
1613 + iowait_pc, idle_pc;
1614 + atomic_t nr_iowait;
1617 + int cpu; /* cpu of this runqueue */
1620 + struct root_domain *rd;
1621 + struct sched_domain *sd;
1622 + unsigned long *cpu_locality; /* CPU relative cache distance */
1623 +#ifdef CONFIG_SCHED_SMT
1624 + int (*siblings_idle)(unsigned long cpu);
1625 + /* See if all smt siblings are idle */
1626 + cpumask_t smt_siblings;
1628 +#ifdef CONFIG_SCHED_MC
1629 + int (*cache_idle)(unsigned long cpu);
1630 + /* See if all cache siblings are idle */
1631 + cpumask_t cache_siblings;
1636 +#ifdef CONFIG_SCHEDSTATS
1638 + /* latency stats */
1639 + struct sched_info rq_sched_info;
1641 + /* sys_sched_yield() stats */
1642 + unsigned int yld_exp_empty;
1643 + unsigned int yld_act_empty;
1644 + unsigned int yld_both_empty;
1645 + unsigned int yld_count;
1647 + /* schedule() stats */
1648 + unsigned int sched_switch;
1649 + unsigned int sched_count;
1650 + unsigned int sched_goidle;
1652 + /* try_to_wake_up() stats */
1653 + unsigned int ttwu_count;
1654 + unsigned int ttwu_local;
1657 + unsigned int bkl_count;
1661 +static DEFINE_PER_CPU(struct rq, runqueues) ____cacheline_aligned_in_smp;
1662 +static DEFINE_MUTEX(sched_hotcpu_mutex);
1667 + * We add the notion of a root-domain which will be used to define per-domain
1668 + * variables. Each exclusive cpuset essentially defines an island domain by
1669 + * fully partitioning the member cpus from any other cpuset. Whenever a new
1670 + * exclusive cpuset is created, we also create and attach a new root-domain
1674 +struct root_domain {
1675 + atomic_t refcount;
1680 + * The "RT overload" flag: it gets set if a CPU has more than
1681 + * one runnable RT task.
1683 + cpumask_t rto_mask;
1684 + atomic_t rto_count;
1688 + * By default the system creates a single root-domain with all cpus as
1689 + * members (mimicking the global state we have today).
1691 +static struct root_domain def_root_domain;
1694 +static inline int cpu_of(struct rq *rq)
1704 + * The domain tree (rq->sd) is protected by RCU's quiescent state transition.
1705 + * See detach_destroy_domains: synchronize_sched for details.
1707 + * The domain tree of any CPU may only be accessed from within
1708 + * preempt-disabled sections.
1710 +#define for_each_domain(cpu, __sd) \
1711 + for (__sd = rcu_dereference(cpu_rq(cpu)->sd); __sd; __sd = __sd->parent)
1714 +#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
1715 +#define this_rq() (&__get_cpu_var(runqueues))
1716 +#define task_rq(p) cpu_rq(task_cpu(p))
1717 +#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
1718 +#else /* CONFIG_SMP */
1719 +static struct rq *uprq;
1720 +#define cpu_rq(cpu) (uprq)
1721 +#define this_rq() (uprq)
1722 +#define task_rq(p) (uprq)
1723 +#define cpu_curr(cpu) ((uprq)->curr)
1726 +#include "sched_stats.h"
1728 +#ifndef prepare_arch_switch
1729 +# define prepare_arch_switch(next) do { } while (0)
1731 +#ifndef finish_arch_switch
1732 +# define finish_arch_switch(prev) do { } while (0)
1736 + * All common locking functions performed on grq.lock. rq->clock is local to
1737 + * the cpu accessing it so it can be modified just with interrupts disabled,
1738 + * but looking up task_rq must be done under grq.lock to be safe.
1740 +static inline void update_rq_clock(struct rq *rq)
1742 + rq->clock = sched_clock_cpu(cpu_of(rq));
1745 +static inline int task_running(struct task_struct *p)
1750 +static inline void grq_lock(void)
1751 + __acquires(grq.lock)
1753 + spin_lock(&grq.lock);
1756 +static inline void grq_unlock(void)
1757 + __releases(grq.lock)
1759 + spin_unlock(&grq.lock);
1762 +static inline void grq_lock_irq(void)
1763 + __acquires(grq.lock)
1765 + spin_lock_irq(&grq.lock);
1768 +static inline void time_lock_grq(struct rq *rq)
1769 + __acquires(grq.lock)
1771 + update_rq_clock(rq);
1775 +static inline void grq_unlock_irq(void)
1776 + __releases(grq.lock)
1778 + spin_unlock_irq(&grq.lock);
1781 +static inline void grq_lock_irqsave(unsigned long *flags)
1782 + __acquires(grq.lock)
1784 + spin_lock_irqsave(&grq.lock, *flags);
1787 +static inline void grq_unlock_irqrestore(unsigned long *flags)
1788 + __releases(grq.lock)
1790 + spin_unlock_irqrestore(&grq.lock, *flags);
1793 +static inline struct rq
1794 +*task_grq_lock(struct task_struct *p, unsigned long *flags)
1795 + __acquires(grq.lock)
1797 + grq_lock_irqsave(flags);
1798 + return task_rq(p);
1801 +static inline struct rq
1802 +*time_task_grq_lock(struct task_struct *p, unsigned long *flags)
1803 + __acquires(grq.lock)
1805 + struct rq *rq = task_grq_lock(p, flags);
1806 + update_rq_clock(rq);
1810 +static inline struct rq *task_grq_lock_irq(struct task_struct *p)
1811 + __acquires(grq.lock)
1814 + return task_rq(p);
1817 +static inline void time_task_grq_lock_irq(struct task_struct *p)
1818 + __acquires(grq.lock)
1820 + struct rq *rq = task_grq_lock_irq(p);
1821 + update_rq_clock(rq);
1824 +static inline void task_grq_unlock_irq(void)
1825 + __releases(grq.lock)
1830 +static inline void task_grq_unlock(unsigned long *flags)
1831 + __releases(grq.lock)
1833 + grq_unlock_irqrestore(flags);
1837 + * grunqueue_is_locked
1839 + * Returns true if the global runqueue is locked.
1840 + * This interface allows printk to be called with the runqueue lock
1841 + * held and know whether or not it is OK to wake up the klogd.
1843 +inline int grunqueue_is_locked(void)
1845 + return spin_is_locked(&grq.lock);
1848 +inline void grq_unlock_wait(void)
1849 + __releases(grq.lock)
1851 + smp_mb(); /* spin-unlock-wait is not a full memory barrier */
1852 + spin_unlock_wait(&grq.lock);
1855 +static inline void time_grq_lock(struct rq *rq, unsigned long *flags)
1856 + __acquires(grq.lock)
1858 + local_irq_save(*flags);
1859 + time_lock_grq(rq);
1862 +static inline struct rq *__task_grq_lock(struct task_struct *p)
1863 + __acquires(grq.lock)
1866 + return task_rq(p);
1869 +static inline void __task_grq_unlock(void)
1870 + __releases(grq.lock)
1875 +#ifndef __ARCH_WANT_UNLOCKED_CTXSW
1876 +static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
1880 +static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
1882 +#ifdef CONFIG_DEBUG_SPINLOCK
1883 + /* this is a valid case when another task releases the spinlock */
1884 + grq.lock.owner = current;
1887 + * If we are tracking spinlock dependencies then we have to
1888 + * fix up the runqueue lock - which gets 'carried over' from
1889 + * prev into current:
1891 + spin_acquire(&grq.lock.dep_map, 0, 0, _THIS_IP_);
1896 +#else /* __ARCH_WANT_UNLOCKED_CTXSW */
1898 +static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
1900 +#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
1907 +static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
1910 +#ifndef __ARCH_WANT_INTERRUPTS_ON_CTXSW
1911 + local_irq_enable();
1914 +#endif /* __ARCH_WANT_UNLOCKED_CTXSW */
1917 + * A task that is queued but not running will be on the grq run list.
1918 + * A task that is not running or queued will not be on the grq run list.
1919 + * A task that is currently running will have ->oncpu set but not on the
1922 +static inline int task_queued(struct task_struct *p)
1924 + return (!list_empty(&p->run_list));
1928 + * Removing from the global runqueue. Enter with grq locked.
1930 +static void dequeue_task(struct task_struct *p)
1932 + list_del_init(&p->run_list);
1933 + if (list_empty(grq.queue + p->prio))
1934 + __clear_bit(p->prio, grq.prio_bitmap);
1938 + * When a task is freshly forked, the first_time_slice flag is set to say
1939 + * it has taken time_slice from its parent and if it exits on this first
1940 + * time_slice it can return its time_slice back to the parent.
1942 +static inline void reset_first_time_slice(struct task_struct *p)
1944 + if (unlikely(p->first_time_slice))
1945 + p->first_time_slice = 0;
1949 + * To determine if it's safe for a task of SCHED_IDLEPRIO to actually run as
1950 + * an idle task, we ensure none of the following conditions are met.
1952 +static int idleprio_suitable(struct task_struct *p)
1954 + return (!freezing(p) && !signal_pending(p) &&
1955 + !(task_contributes_to_load(p)) && !(p->flags & (PF_EXITING)));
1959 + * To determine if a task of SCHED_ISO can run in pseudo-realtime, we check
1960 + * that the iso_refractory flag is not set.
1962 +static int isoprio_suitable(void)
1964 + return !grq.iso_refractory;
1968 + * Adding to the global runqueue. Enter with grq locked.
1970 +static void enqueue_task(struct task_struct *p)
1972 + if (!rt_task(p)) {
1973 + /* Check it hasn't gotten rt from PI */
1974 + if ((idleprio_task(p) && idleprio_suitable(p)) ||
1975 + (iso_task(p) && isoprio_suitable()))
1976 + p->prio = p->normal_prio;
1978 + p->prio = NORMAL_PRIO;
1980 + __set_bit(p->prio, grq.prio_bitmap);
1981 + list_add_tail(&p->run_list, grq.queue + p->prio);
1982 + sched_info_queued(p);
1985 +/* Only idle task does this as a real time task */
1986 +static inline void enqueue_task_head(struct task_struct *p)
1988 + __set_bit(p->prio, grq.prio_bitmap);
1989 + list_add(&p->run_list, grq.queue + p->prio);
1990 + sched_info_queued(p);
1993 +static inline void requeue_task(struct task_struct *p)
1995 + sched_info_queued(p);
1999 + * Returns the relative length of deadline all compared to the shortest
2000 + * deadline which is that of nice -20.
2002 +static inline int task_prio_ratio(struct task_struct *p)
2004 + return prio_ratios[TASK_USER_PRIO(p)];
2008 + * task_timeslice - all tasks of all priorities get the exact same timeslice
2009 + * length. CPU distribution is handled by giving different deadlines to
2010 + * tasks of different priorities.
2012 +static inline int task_timeslice(struct task_struct *p)
2014 + return (rr_interval * task_prio_ratio(p) / 100);
2019 + * qnr is the "queued but not running" count which is the total number of
2020 + * tasks on the global runqueue list waiting for cpu time but not actually
2021 + * currently running on a cpu.
2023 +static inline void inc_qnr(void)
2028 +static inline void dec_qnr(void)
2033 +static inline int queued_notrunning(void)
2039 + * The cpu_idle_map stores a bitmap of all the cpus currently idle to
2040 + * allow easy lookup of whether any suitable idle cpus are available.
2042 +static inline void set_cpuidle_map(unsigned long cpu)
2044 + cpu_set(cpu, grq.cpu_idle_map);
2047 +static inline void clear_cpuidle_map(unsigned long cpu)
2049 + cpu_clear(cpu, grq.cpu_idle_map);
2052 +static int suitable_idle_cpus(struct task_struct *p)
2054 + return (cpus_intersects(p->cpus_allowed, grq.cpu_idle_map));
2057 +static void resched_task(struct task_struct *p);
2059 +#define CPUIDLE_CACHE_BUSY (1)
2060 +#define CPUIDLE_DIFF_CPU (2)
2061 +#define CPUIDLE_THREAD_BUSY (4)
2062 +#define CPUIDLE_DIFF_NODE (8)
2065 + * The best idle CPU is chosen according to the CPUIDLE ranking above where the
2066 + * lowest value would give the most suitable CPU to schedule p onto next. We
2067 + * iterate from the last CPU upwards instead of using for_each_cpu_mask so as
2068 + * to be able to break out immediately if the last CPU is idle. The order works
2069 + * out to be the following:
2071 + * Same core, idle or busy cache, idle threads
2072 + * Other core, same cache, idle or busy cache, idle threads.
2073 + * Same node, other CPU, idle cache, idle threads.
2074 + * Same node, other CPU, busy cache, idle threads.
2075 + * Same core, busy threads.
2076 + * Other core, same cache, busy threads.
2077 + * Same node, other CPU, busy threads.
2078 + * Other node, other CPU, idle cache, idle threads.
2079 + * Other node, other CPU, busy cache, idle threads.
2080 + * Other node, other CPU, busy threads.
2082 +static void resched_best_idle(struct task_struct *p)
2084 + unsigned long cpu_tmp, best_cpu, best_ranking;
2085 + cpumask_t tmpmask;
2089 + cpus_and(tmpmask, p->cpus_allowed, grq.cpu_idle_map);
2090 + iterate = cpus_weight(tmpmask);
2091 + best_cpu = task_cpu(p);
2093 + * Start below the last CPU and work up with next_cpu_nr as the last
2094 + * CPU might not be idle or affinity might not allow it.
2096 + cpu_tmp = best_cpu - 1;
2097 + rq = cpu_rq(best_cpu);
2098 + best_ranking = ~0UL;
2101 + unsigned long ranking;
2102 + struct rq *tmp_rq;
2105 + cpu_tmp = next_cpu_nr(cpu_tmp, tmpmask);
2106 + if (cpu_tmp >= nr_cpu_ids) {
2108 + cpu_tmp = next_cpu_nr(cpu_tmp, tmpmask);
2110 + tmp_rq = cpu_rq(cpu_tmp);
2112 + if (rq->cpu_locality[cpu_tmp]) {
2114 + if (rq->cpu_locality[cpu_tmp] > 1)
2115 + ranking |= CPUIDLE_DIFF_NODE;
2117 + ranking |= CPUIDLE_DIFF_CPU;
2119 +#ifdef CONFIG_SCHED_MC
2120 + if (!(tmp_rq->cache_idle(cpu_tmp)))
2121 + ranking |= CPUIDLE_CACHE_BUSY;
2123 +#ifdef CONFIG_SCHED_SMT
2124 + if (!(tmp_rq->siblings_idle(cpu_tmp)))
2125 + ranking |= CPUIDLE_THREAD_BUSY;
2127 + if (ranking < best_ranking) {
2128 + best_cpu = cpu_tmp;
2131 + best_ranking = ranking;
2133 + } while (--iterate > 0);
2135 + resched_task(cpu_rq(best_cpu)->curr);
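
An illustrative sketch of how the ranking values combine, mirroring the flag logic shown above with made up locality and idleness inputs (userspace only, not part of the patch):

#include <stdio.h>

#define CPUIDLE_CACHE_BUSY   (1)
#define CPUIDLE_DIFF_CPU     (2)
#define CPUIDLE_THREAD_BUSY  (4)
#define CPUIDLE_DIFF_NODE    (8)

/* Same shape as the ranking built inside resched_best_idle() above. */
static unsigned long rank(int locality, int cache_idle, int threads_idle)
{
	unsigned long ranking = 0;

	if (locality) {
		if (locality > 1)
			ranking |= CPUIDLE_DIFF_NODE;
		ranking |= CPUIDLE_DIFF_CPU;
	}
	if (!cache_idle)
		ranking |= CPUIDLE_CACHE_BUSY;
	if (!threads_idle)
		ranking |= CPUIDLE_THREAD_BUSY;
	return ranking;
}

int main(void)
{
	/* lower is better: the idle sibling sharing our cache wins */
	printf("%lu\n", rank(0, 1, 1));  /* 0:  same cache, everything idle */
	printf("%lu\n", rank(1, 0, 1));  /* 3:  same node, busy cache       */
	printf("%lu\n", rank(3, 0, 0));  /* 15: other node, everything busy */
	return 0;
}
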
2138 +static inline void resched_suitable_idle(struct task_struct *p)
2140 + if (suitable_idle_cpus(p))
2141 + resched_best_idle(p);
2145 + * The cpu cache locality difference between CPUs is used to determine how far
2146 + * to offset the virtual deadline. "One" difference in locality means that one
2147 + * timeslice difference is allowed longer for the cpu local tasks. This is
2148 + * enough in the common case when tasks are up to 2* number of CPUs to keep
2149 + * tasks within their shared cache CPUs only. CPUs on different nodes or not
2150 + * even in this domain (NUMA) have "3" difference, allowing 4 times longer
2151 + * deadlines before being taken onto another cpu, allowing for 2* the double
2152 + * seen by separate CPUs above.
2153 + * Simple summary: Virtual deadlines are equal on shared cache CPUs, double
2154 + * on separate CPUs and quadruple in separate NUMA nodes.
2157 +cache_distance(struct rq *task_rq, struct rq *rq, struct task_struct *p)
2159 + return rq->cpu_locality[cpu_of(task_rq)] * task_timeslice(p);
2161 +#else /* CONFIG_SMP */
2162 +static inline void inc_qnr(void)
2166 +static inline void dec_qnr(void)
2170 +static inline int queued_notrunning(void)
2172 + return grq.nr_running;
2175 +static inline void set_cpuidle_map(unsigned long cpu)
2179 +static inline void clear_cpuidle_map(unsigned long cpu)
2183 +/* Always called from a busy cpu on UP */
2184 +static inline int suitable_idle_cpus(struct task_struct *p)
2186 + return uprq->curr == uprq->idle;
2189 +static inline void resched_suitable_idle(struct task_struct *p)
2194 +cache_distance(struct rq *task_rq, struct rq *rq, struct task_struct *p)
2198 +#endif /* CONFIG_SMP */
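
To put rough numbers on the cache_distance() arithmetic above (illustrative only): the offset is simply the locality value multiplied by the waking task's task_timeslice(), so the deadline seen from a CPU sharing the task's cache is unchanged (locality 0), from a different CPU in the same node it is pushed out by one extra timeslice worth (locality 1), and from a CPU in a different NUMA node by three timeslices (locality 3), which is what yields the equal, double and quadruple effective deadlines of the simple summary.
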
2201 + * activate_idle_task - move idle task to the _front_ of runqueue.
2203 +static inline void activate_idle_task(struct task_struct *p)
2205 + enqueue_task_head(p);
2210 +static inline int normal_prio(struct task_struct *p)
2212 + if (has_rt_policy(p))
2213 + return MAX_RT_PRIO - 1 - p->rt_priority;
2214 + if (idleprio_task(p))
2218 + return NORMAL_PRIO;
2222 + * Calculate the current priority, i.e. the priority
2223 + * taken into account by the scheduler. This value might
2224 + * be boosted by RT tasks as it will be RT if the task got
2225 + * RT-boosted. If not then it returns p->normal_prio.
2227 +static int effective_prio(struct task_struct *p)
2229 + p->normal_prio = normal_prio(p);
2231 + * If we are RT tasks or we were boosted to RT priority,
2232 + * keep the priority unchanged. Otherwise, update priority
2233 + * to the normal priority:
2235 + if (!rt_prio(p->prio))
2236 + return p->normal_prio;
2241 + * activate_task - move a task to the runqueue. Enter with grq locked.
2243 +static void activate_task(struct task_struct *p, struct rq *rq)
2245 + update_rq_clock(rq);
2248 + * Sleep time is in units of nanosecs, so shift by 20 to get a
2249 + * milliseconds-range estimation of the amount of time that the task
2252 + if (unlikely(prof_on == SLEEP_PROFILING)) {
2253 + if (p->state == TASK_UNINTERRUPTIBLE)
2254 + profile_hits(SLEEP_PROFILING, (void *)get_wchan(p),
2255 + (rq->clock - p->last_ran) >> 20);
2258 + p->prio = effective_prio(p);
2259 + if (task_contributes_to_load(p))
2260 + grq.nr_uninterruptible--;
2267 + * deactivate_task - If it's running, it's not on the grq and we can just
2268 + * decrement the nr_running. Enter with grq locked.
2270 +static inline void deactivate_task(struct task_struct *p)
2272 + if (task_contributes_to_load(p))
2273 + grq.nr_uninterruptible++;
2278 +void set_task_cpu(struct task_struct *p, unsigned int cpu)
2281 + * After ->cpu is set up to a new value, task_grq_lock(p, ...) can be
2282 + * successfully executed on another CPU. We must ensure that updates of
2283 + * per-task data have been completed by this moment.
2286 + task_thread_info(p)->cpu = cpu;
2291 + * Move a task off the global queue and take it to a cpu where it will
2292 + * become the running task.
2294 +static inline void take_task(struct rq *rq, struct task_struct *p)
2296 + set_task_cpu(p, cpu_of(rq));
2302 + * Returns a descheduling task to the grq runqueue unless it is being
2305 +static inline void return_task(struct task_struct *p, int deactivate)
2308 + deactivate_task(p);
2316 + * resched_task - mark a task 'to be rescheduled now'.
2318 + * On UP this means the setting of the need_resched flag, on SMP it
2319 + * might also involve a cross-CPU call to trigger the scheduler on
2324 +#ifndef tsk_is_polling
2325 +#define tsk_is_polling(t) test_tsk_thread_flag(t, TIF_POLLING_NRFLAG)
2328 +static void resched_task(struct task_struct *p)
2332 + assert_spin_locked(&grq.lock);
2334 + if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED)))
2337 + set_tsk_thread_flag(p, TIF_NEED_RESCHED);
2339 + cpu = task_cpu(p);
2340 + if (cpu == smp_processor_id())
2343 + /* NEED_RESCHED must be visible before we test polling */
2345 + if (!tsk_is_polling(p))
2346 + smp_send_reschedule(cpu);
2350 +static inline void resched_task(struct task_struct *p)
2352 + assert_spin_locked(&grq.lock);
2353 + set_tsk_need_resched(p);
2358 + * task_curr - is this task currently executing on a CPU?
2359 + * @p: the task in question.
2361 +inline int task_curr(const struct task_struct *p)
2363 + return cpu_curr(task_cpu(p)) == p;
2367 +struct migration_req {
2368 + struct list_head list;
2370 + struct task_struct *task;
2373 + struct completion done;
2377 + * wait_task_inactive - wait for a thread to unschedule.
2379 + * If @match_state is nonzero, it's the @p->state value just checked and
2380 + * not expected to change. If it changes, i.e. @p might have woken up,
2381 + * then return zero. When we succeed in waiting for @p to be off its CPU,
2382 + * we return a positive number (its total switch count). If a second call
2383 + * a short while later returns the same number, the caller can be sure that
2384 + * @p has remained unscheduled the whole time.
2386 + * The caller must ensure that the task *will* unschedule sometime soon,
2387 + * else this function might spin for a *long* time. This function can't
2388 + * be called with interrupts off, or it may introduce deadlock with
2389 + * smp_call_function() if an IPI is sent by the same process we are
2390 + * waiting to become inactive.
2392 +unsigned long wait_task_inactive(struct task_struct *p, long match_state)
2394 + unsigned long flags;
2395 + int running, on_rq;
2396 + unsigned long ncsw;
2401 + * We do the initial early heuristics without holding
2402 + * any task-queue locks at all. We'll only try to get
2403 + * the runqueue lock when things look like they will
2404 + * work out! In the unlikely event rq is dereferenced
2405 + * since we're lockless, grab it again.
2410 + if (unlikely(!rq))
2412 +#else /* CONFIG_SMP */
2416 + * If the task is actively running on another CPU
2417 + * still, just relax and busy-wait without holding
2420 + * NOTE! Since we don't hold any locks, it's not
2421 + * even sure that "rq" stays as the right runqueue!
2422 + * But we don't care, since this will return false
2423 + * if the runqueue has changed and p is actually now
2424 + * running somewhere else!
2426 + while (task_running(p) && p == rq->curr) {
2427 + if (match_state && unlikely(p->state != match_state))
2433 + * Ok, time to look more closely! We need the grq
2434 + * lock now, to be *sure*. If we're wrong, we'll
2435 + * just go back and repeat.
2437 + rq = task_grq_lock(p, &flags);
2438 + running = task_running(p);
2439 + on_rq = task_queued(p);
2441 + if (!match_state || p->state == match_state) {
2442 + ncsw = p->nivcsw + p->nvcsw;
2443 + if (unlikely(!ncsw))
2446 + task_grq_unlock(&flags);
2449 + * If it changed from the expected state, bail out now.
2451 + if (unlikely(!ncsw))
2455 + * Was it really running after all now that we
2456 + * checked with the proper locks actually held?
2458 + * Oops. Go back and try again..
2460 + if (unlikely(running)) {
2466 + * It's not enough that it's not actively running,
2467 + * it must be off the runqueue _entirely_, and not
2470 + * So if it was still runnable (but just not actively
2471 + * running right now), it's preempted, and we should
2472 + * yield - it could be a while.
2474 + if (unlikely(on_rq)) {
2475 + schedule_timeout_uninterruptible(1);
2480 + * Ahh, all good. It wasn't running, and it wasn't
2481 + * runnable, which means that it will never become
2482 + * running in the future either. We're all done!
2491 + * kick_process - kick a running thread to enter/exit the kernel
2492 + * @p: the to-be-kicked thread
2494 + * Cause a process which is running on another CPU to enter
2495 + * kernel-mode, without any delay. (to get signals handled.)
2497 + * NOTE: this function doesn't have to take the runqueue lock,
2498 + * because all it wants to ensure is that the remote task enters
2499 + * the kernel. If the IPI races and the task has been migrated
2500 + * to another CPU then no harm is done and the purpose has been
2501 + * achieved as well.
2503 +void kick_process(struct task_struct *p)
2507 + preempt_disable();
2508 + cpu = task_cpu(p);
2509 + if ((cpu != smp_processor_id()) && task_curr(p))
2510 + smp_send_reschedule(cpu);
2515 +#define rq_idle(rq) ((rq)->rq_prio == PRIO_LIMIT)
2516 +#define task_idle(p) ((p)->prio == PRIO_LIMIT)
2519 + * RT tasks preempt purely on priority. SCHED_NORMAL tasks preempt on the
2520 + * basis of earlier deadlines. SCHED_BATCH, ISO and IDLEPRIO don't preempt
2521 + * between themselves, they cooperatively multitask. An idle rq scores as
2522 + * prio PRIO_LIMIT so it is always preempted. latest_deadline and
2523 + * highest_prio_rq are initialised only to silence the compiler. When
2524 + * all else is equal, still prefer this_rq.
2527 +static void try_preempt(struct task_struct *p, struct rq *this_rq)
2529 + struct rq *highest_prio_rq = this_rq;
2530 + unsigned long latest_deadline, cpu;
2534 + if (suitable_idle_cpus(p)) {
2535 + resched_best_idle(p);
2539 + cpus_and(tmp, cpu_online_map, p->cpus_allowed);
2540 + latest_deadline = 0;
2541 + highest_prio = -1;
2543 + for_each_cpu_mask_nr(cpu, tmp) {
2544 + unsigned long offset_deadline;
2549 + rq_prio = rq->rq_prio;
2550 + if (rq_prio < highest_prio)
2553 + offset_deadline = rq->rq_deadline -
2554 + cache_distance(this_rq, rq, p);
2556 + if (rq_prio > highest_prio ||
2557 + (time_after(offset_deadline, latest_deadline) ||
2558 + (offset_deadline == latest_deadline && this_rq == rq))) {
2559 + latest_deadline = offset_deadline;
2560 + highest_prio = rq_prio;
2561 + highest_prio_rq = rq;
2565 + if (p->prio > highest_prio || (p->prio == highest_prio &&
2566 + p->policy == SCHED_NORMAL && !time_before(p->deadline, latest_deadline)))
2569 + /* p gets to preempt highest_prio_rq->curr */
2570 + resched_task(highest_prio_rq->curr);
2573 +#else /* CONFIG_SMP */
2574 +static void try_preempt(struct task_struct *p, struct rq *this_rq)
2576 + if (p->prio < uprq->rq_prio ||
2577 + (p->prio == uprq->rq_prio && p->policy == SCHED_NORMAL &&
2578 + time_before(p->deadline, uprq->rq_deadline)))
2579 + resched_task(uprq->curr);
2582 +#endif /* CONFIG_SMP */
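
As a concrete illustration of the rules above: all SCHED_NORMAL tasks share the same prio value, so between them preemption reduces to the deadline comparison, and a freshly woken nice 0 task, whose deadline offset is several times shorter than a nice 19 task's (see prio_deadline_diff() further down), will normally preempt a CPU that is running a CPU bound nice 19 task. RT tasks skip the deadline test and preempt purely on priority, while SCHED_BATCH, ISO and IDLEPRIO tasks never preempt others of their own class, as noted above.
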
2585 + * try_to_wake_up - wake up a thread
2586 + * @p: the to-be-woken-up thread
2587 + * @state: the mask of task states that can be woken
2588 + * @sync: do a synchronous wakeup?
2590 + * Put it on the run-queue if it's not already there. The "current"
2591 + * thread is always on the run-queue (except when the actual
2592 + * re-schedule is in progress), and as such you're allowed to do
2593 + * the simpler "current->state = TASK_RUNNING" to mark yourself
2594 + * runnable without the overhead of this.
2596 + * returns failure only if the task is already active.
2598 +static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
2600 + unsigned long flags;
2604 + /* This barrier is undocumented, probably for p->state? Damn it. */
2608 + * No need to do time_lock_grq as we only need to update the rq clock
2609 + * if we activate the task
2611 + rq = task_grq_lock(p, &flags);
2613 + /* state is a volatile long, why I don't know */
2614 + if (!((unsigned int)p->state & state))
2617 + if (task_queued(p) || task_running(p))
2620 + activate_task(p, rq);
2622 + * Sync wakeups (i.e. those types of wakeups where the waker
2623 + * has indicated that it will leave the CPU in short order)
2624 + * don't trigger a preemption if there are no idle cpus,
2625 + * instead waiting for current to deschedule.
2627 + if (!sync || suitable_idle_cpus(p))
2628 + try_preempt(p, rq);
2632 + trace_mark(kernel_sched_wakeup,
2633 + "pid %d state %ld ## rq %p task %p rq->curr %p",
2634 + p->pid, p->state, rq, p, rq->curr);
2635 + p->state = TASK_RUNNING;
2637 + task_grq_unlock(&flags);
2642 + * wake_up_process - Wake up a specific process
2643 + * @p: The process to be woken up.
2645 + * Attempt to wake up the nominated process and move it to the set of runnable
2646 + * processes. Returns 1 if the process was woken up, 0 if it was already
2649 + * It may be assumed that this function implies a write memory barrier before
2650 + * changing the task state if and only if any tasks are woken up.
2652 +int wake_up_process(struct task_struct *p)
2654 + return try_to_wake_up(p, TASK_ALL, 0);
2656 +EXPORT_SYMBOL(wake_up_process);
2658 +int wake_up_state(struct task_struct *p, unsigned int state)
2660 + return try_to_wake_up(p, state, 0);
2664 + * Perform scheduler related setup for a newly forked process p.
2665 + * p is forked by current.
2667 +void sched_fork(struct task_struct *p, int clone_flags)
2669 + int cpu = get_cpu();
2672 +#ifdef CONFIG_PREEMPT_NOTIFIERS
2673 + INIT_HLIST_HEAD(&p->preempt_notifiers);
2676 + * We mark the process as running here, but have not actually
2677 + * inserted it onto the runqueue yet. This guarantees that
2678 + * nobody will actually run it, and a signal or other external
2679 + * event cannot wake it up and insert it on the runqueue either.
2681 + p->state = TASK_RUNNING;
2682 + set_task_cpu(p, cpu);
2684 + /* Should be reset in fork.c but done here for ease of bfs patching */
2685 + p->sched_time = p->stime_pc = p->utime_pc = 0;
2688 + * Make sure we do not leak PI boosting priority to the child:
2690 + p->prio = current->normal_prio;
2692 + INIT_LIST_HEAD(&p->run_list);
2693 +#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
2694 + if (unlikely(sched_info_on()))
2695 + memset(&p->sched_info, 0, sizeof(p->sched_info));
2700 +#ifdef CONFIG_PREEMPT
2701 + /* Want to start with kernel preemption disabled. */
2702 + task_thread_info(p)->preempt_count = 1;
2704 + if (unlikely(p->policy == SCHED_FIFO))
2707 + * Share the timeslice between parent and child, thus the
2708 + * total amount of pending timeslices in the system doesn't change,
2709 + * resulting in more scheduling fairness. If it's negative, it won't
2710 + * matter since that's the same as being 0. current's time_slice is
2711 + * actually in rq_time_slice when it's running.
2713 + rq = task_grq_lock_irq(current);
2714 + if (likely(rq->rq_time_slice > 0)) {
2715 + rq->rq_time_slice /= 2;
2717 + * The remainder of the first timeslice might be recovered by
2718 + * the parent if the child exits early enough.
2720 + p->first_time_slice = 1;
2722 + p->time_slice = rq->rq_time_slice;
2723 + task_grq_unlock_irq();
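
For example, with rr_interval at its default of 6 the refill quota is 6000us; a parent that has 4000us of its slice left when it forks keeps 2000us and the child starts with the other 2000us, and because first_time_slice is set, sched_exit() below can hand that share back to the parent if the child exits before its slice is ever refilled.
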
2729 + * wake_up_new_task - wake up a newly created task for the first time.
2731 + * This function will do some initial scheduler statistics housekeeping
2732 + * that must be done for every newly created context, then puts the task
2733 + * on the runqueue and wakes it.
2735 +void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
2737 + struct task_struct *parent;
2738 + unsigned long flags;
2741 + rq = task_grq_lock(p, &flags);
2742 + parent = p->parent;
2743 + BUG_ON(p->state != TASK_RUNNING);
2744 + /* Unnecessary but small chance that the parent changed cpus */
2745 + set_task_cpu(p, task_cpu(parent));
2746 + activate_task(p, rq);
2747 + trace_mark(kernel_sched_wakeup_new,
2748 + "pid %d state %ld ## rq %p task %p rq->curr %p",
2749 + p->pid, p->state, rq, p, rq->curr);
2750 + if (!(clone_flags & CLONE_VM) && rq->curr == parent &&
2751 + !suitable_idle_cpus(p)) {
2753 + * The VM isn't cloned, so we're in a good position to
2754 + * do child-runs-first in anticipation of an exec. This
2755 + * usually avoids a lot of COW overhead.
2757 + resched_task(parent);
2759 + try_preempt(p, rq);
2760 + task_grq_unlock(&flags);
2764 + * Potentially available exiting-child timeslices are
2765 + * retrieved here - this way the parent does not get
2766 + * penalised for creating too many threads.
2768 + * (this cannot be used to 'generate' timeslices
2769 + * artificially, because any timeslice recovered here
2770 + * was given away by the parent in the first place.)
2772 +void sched_exit(struct task_struct *p)
2774 + struct task_struct *parent;
2775 + unsigned long flags;
2778 + if (unlikely(p->first_time_slice)) {
2779 + int *par_tslice, *p_tslice;
2781 + parent = p->parent;
2782 + par_tslice = &parent->time_slice;
2783 + p_tslice = &p->time_slice;
2785 + rq = task_grq_lock(parent, &flags);
2786 + /* The real time_slice of the "curr" task is on the rq var. */
2787 + if (p == rq->curr)
2788 + p_tslice = &rq->rq_time_slice;
2789 + else if (parent == task_rq(parent)->curr)
2790 + par_tslice = &rq->rq_time_slice;
2792 + *par_tslice += *p_tslice;
2793 + if (unlikely(*par_tslice > timeslice()))
2794 + *par_tslice = timeslice();
2795 + task_grq_unlock(&flags);
2799 +#ifdef CONFIG_PREEMPT_NOTIFIERS
2802 + * preempt_notifier_register - tell me when current is being preempted & rescheduled
2803 + * @notifier: notifier struct to register
2805 +void preempt_notifier_register(struct preempt_notifier *notifier)
2807 + hlist_add_head(&notifier->link, &current->preempt_notifiers);
2809 +EXPORT_SYMBOL_GPL(preempt_notifier_register);
2812 + * preempt_notifier_unregister - no longer interested in preemption notifications
2813 + * @notifier: notifier struct to unregister
2815 + * This is safe to call from within a preemption notifier.
2817 +void preempt_notifier_unregister(struct preempt_notifier *notifier)
2819 + hlist_del(&notifier->link);
2821 +EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
2823 +static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
2825 + struct preempt_notifier *notifier;
2826 + struct hlist_node *node;
2828 + hlist_for_each_entry(notifier, node, &curr->preempt_notifiers, link)
2829 + notifier->ops->sched_in(notifier, raw_smp_processor_id());
2833 +fire_sched_out_preempt_notifiers(struct task_struct *curr,
2834 + struct task_struct *next)
2836 + struct preempt_notifier *notifier;
2837 + struct hlist_node *node;
2839 + hlist_for_each_entry(notifier, node, &curr->preempt_notifiers, link)
2840 + notifier->ops->sched_out(notifier, next);
2843 +#else /* !CONFIG_PREEMPT_NOTIFIERS */
2845 +static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
2850 +fire_sched_out_preempt_notifiers(struct task_struct *curr,
2851 + struct task_struct *next)
2855 +#endif /* CONFIG_PREEMPT_NOTIFIERS */
2858 + * prepare_task_switch - prepare to switch tasks
2859 + * @rq: the runqueue preparing to switch
2860 + * @next: the task we are going to switch to.
2862 + * This is called with the rq lock held and interrupts off. It must
2863 + * be paired with a subsequent finish_task_switch after the context
2866 + * prepare_task_switch sets up locking and calls architecture specific
2870 +prepare_task_switch(struct rq *rq, struct task_struct *prev,
2871 + struct task_struct *next)
2873 + fire_sched_out_preempt_notifiers(prev, next);
2874 + prepare_lock_switch(rq, next);
2875 + prepare_arch_switch(next);
2879 + * finish_task_switch - clean up after a task-switch
2880 + * @rq: runqueue associated with task-switch
2881 + * @prev: the thread we just switched away from.
2883 + * finish_task_switch must be called after the context switch, paired
2884 + * with a prepare_task_switch call before the context switch.
2885 + * finish_task_switch will reconcile locking set up by prepare_task_switch,
2886 + * and do any other architecture-specific cleanup actions.
2888 + * Note that we may have delayed dropping an mm in context_switch(). If
2889 + * so, we finish that here outside of the runqueue lock. (Doing it
2890 + * with the lock held can cause deadlocks; see schedule() for
2893 +static inline void finish_task_switch(struct rq *rq, struct task_struct *prev)
2894 + __releases(grq.lock)
2896 + struct mm_struct *mm = rq->prev_mm;
2899 + rq->prev_mm = NULL;
2902 + * A task struct has one reference for the use as "current".
2903 + * If a task dies, then it sets TASK_DEAD in tsk->state and calls
2904 + * schedule one last time. The schedule call will never return, and
2905 + * the scheduled task must drop that reference.
2906 + * The test for TASK_DEAD must occur while the runqueue locks are
2907 + * still held, otherwise prev could be scheduled on another cpu, die
2908 + * there before we look at prev->state, and then the reference would
2909 + * be dropped twice.
2910 + * Manfred Spraul <manfred@colorfullife.com>
2912 + prev_state = prev->state;
2913 + finish_arch_switch(prev);
2914 + finish_lock_switch(rq, prev);
2916 + fire_sched_in_preempt_notifiers(current);
2919 + if (unlikely(prev_state == TASK_DEAD)) {
2921 + * Remove function-return probe instances associated with this
2922 + * task and put them back on the free list.
2924 + kprobe_flush_task(prev);
2925 + put_task_struct(prev);
2930 + * schedule_tail - first thing a freshly forked thread must call.
2931 + * @prev: the thread we just switched away from.
2933 +asmlinkage void schedule_tail(struct task_struct *prev)
2934 + __releases(grq.lock)
2936 + struct rq *rq = this_rq();
2938 + finish_task_switch(rq, prev);
2939 +#ifdef __ARCH_WANT_UNLOCKED_CTXSW
2940 + /* In this case, finish_task_switch does not reenable preemption */
2943 + if (current->set_child_tid)
2944 + put_user(current->pid, current->set_child_tid);
2948 + * context_switch - switch to the new MM and the new
2949 + * thread's register state.
2952 +context_switch(struct rq *rq, struct task_struct *prev,
2953 + struct task_struct *next)
2955 + struct mm_struct *mm, *oldmm;
2957 + prepare_task_switch(rq, prev, next);
2958 + trace_mark(kernel_sched_schedule,
2959 + "prev_pid %d next_pid %d prev_state %ld "
2960 + "## rq %p prev %p next %p",
2961 + prev->pid, next->pid, prev->state,
2964 + oldmm = prev->active_mm;
2966 + * For paravirt, this is coupled with an exit in switch_to to
2967 + * combine the page table reload and the switch backend into
2970 + arch_enter_lazy_cpu_mode();
2972 + if (unlikely(!mm)) {
2973 + next->active_mm = oldmm;
2974 + atomic_inc(&oldmm->mm_count);
2975 + enter_lazy_tlb(oldmm, next);
2977 + switch_mm(oldmm, mm, next);
2979 + if (unlikely(!prev->mm)) {
2980 + prev->active_mm = NULL;
2981 + rq->prev_mm = oldmm;
2984 + * The runqueue lock will be released by the next
2985 + * task (which is an invalid locking op but in the case
2986 + * of the scheduler it's an obvious special-case), so we
2987 + * do an early lockdep release here:
2989 +#ifndef __ARCH_WANT_UNLOCKED_CTXSW
2990 + spin_release(&grq.lock.dep_map, 1, _THIS_IP_);
2993 + /* Here we just switch the register state and the stack. */
2994 + switch_to(prev, next, prev);
2998 + * this_rq must be evaluated again because prev may have moved
2999 + * CPUs since it called schedule(), thus the 'rq' on its stack
3000 + * frame will be invalid.
3002 + finish_task_switch(this_rq(), prev);
3006 + * nr_running, nr_uninterruptible and nr_context_switches:
3008 + * externally visible scheduler statistics: current number of runnable
3009 + * threads, current number of uninterruptible-sleeping threads, total
3010 + * number of context switches performed since bootup. All are measured
3011 + * without grabbing the grq lock but the occasional inaccurate result
3012 + * doesn't matter so long as it's positive.
3014 +unsigned long nr_running(void)
3016 + long nr = grq.nr_running;
3018 + if (unlikely(nr < 0))
3020 + return (unsigned long)nr;
3023 +unsigned long nr_uninterruptible(void)
3025 + long nu = grq.nr_uninterruptible;
3027 + if (unlikely(nu < 0))
3032 +unsigned long long nr_context_switches(void)
3034 + long long ns = grq.nr_switches;
3036 + /* This is of course impossible */
3037 + if (unlikely(ns < 0))
3039 + return (long long)ns;
3042 +unsigned long nr_iowait(void)
3044 + unsigned long i, sum = 0;
3046 + for_each_possible_cpu(i)
3047 + sum += atomic_read(&cpu_rq(i)->nr_iowait);
3052 +unsigned long nr_active(void)
3054 + return nr_running() + nr_uninterruptible();
3057 +DEFINE_PER_CPU(struct kernel_stat, kstat);
3059 +EXPORT_PER_CPU_SYMBOL(kstat);
3062 + * On each tick, see what percentage of that tick was attributed to each
3063 + * component and add the percentage to the _pc values. Once a _pc value has
3064 + * accumulated one tick's worth, account for that. This means the total
3065 + * percentage of load components will always be 100 per tick.
3067 +static void pc_idle_time(struct rq *rq, unsigned long pc)
3069 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3070 + cputime64_t tmp = cputime_to_cputime64(jiffies_to_cputime(1));
3072 + if (atomic_read(&rq->nr_iowait) > 0) {
3073 + rq->iowait_pc += pc;
3074 + if (rq->iowait_pc >= 100) {
3075 + rq->iowait_pc %= 100;
3076 + cpustat->iowait = cputime64_add(cpustat->iowait, tmp);
3079 + rq->idle_pc += pc;
3080 + if (rq->idle_pc >= 100) {
3081 + rq->idle_pc %= 100;
3082 + cpustat->idle = cputime64_add(cpustat->idle, tmp);
3088 +pc_system_time(struct rq *rq, struct task_struct *p, int hardirq_offset,
3089 + unsigned long pc, unsigned long ns)
3091 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3092 + cputime_t one_jiffy = jiffies_to_cputime(1);
3093 + cputime_t one_jiffy_scaled = cputime_to_scaled(one_jiffy);
3094 + cputime64_t tmp = cputime_to_cputime64(one_jiffy);
3096 + p->stime_pc += pc;
3097 + if (p->stime_pc >= 100) {
3098 + p->stime_pc -= 100;
3099 + p->stime = cputime_add(p->stime, one_jiffy);
3100 + p->stimescaled = cputime_add(p->stimescaled, one_jiffy_scaled);
3101 + acct_update_integrals(p);
3103 + p->sched_time += ns;
3105 + if (hardirq_count() - hardirq_offset)
3107 + else if (softirq_count()) {
3108 + rq->softirq_pc += pc;
3109 + if (rq->softirq_pc >= 100) {
3110 + rq->softirq_pc %= 100;
3111 + cpustat->softirq = cputime64_add(cpustat->softirq, tmp);
3114 + rq->system_pc += pc;
3115 + if (rq->system_pc >= 100) {
3116 + rq->system_pc %= 100;
3117 + cpustat->system = cputime64_add(cpustat->system, tmp);
3122 +static void pc_user_time(struct rq *rq, struct task_struct *p,
3123 + unsigned long pc, unsigned long ns)
3125 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3126 + cputime_t one_jiffy = jiffies_to_cputime(1);
3127 + cputime_t one_jiffy_scaled = cputime_to_scaled(one_jiffy);
3128 + cputime64_t tmp = cputime_to_cputime64(one_jiffy);
3130 + p->utime_pc += pc;
3131 + if (p->utime_pc >= 100) {
3132 + p->utime_pc -= 100;
3133 + p->utime = cputime_add(p->utime, one_jiffy);
3134 + p->utimescaled = cputime_add(p->utimescaled, one_jiffy_scaled);
3135 + acct_update_integrals(p);
3137 + p->sched_time += ns;
3139 + if (TASK_NICE(p) > 0 || idleprio_task(p)) {
3140 + rq->nice_pc += pc;
3141 + if (rq->nice_pc >= 100) {
3142 + rq->nice_pc %= 100;
3143 + cpustat->nice = cputime64_add(cpustat->nice, tmp);
3146 + rq->user_pc += pc;
3147 + if (rq->user_pc >= 100) {
3148 + rq->user_pc %= 100;
3149 + cpustat->user = cputime64_add(cpustat->user, tmp);
3154 +/* Convert nanoseconds to percentage of one tick. */
3155 +#define NS_TO_PC(NS) (NS * 100 / JIFFIES_TO_NS(1))
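
A userspace sketch of the accumulate-and-carry accounting this macro feeds, following the shape of pc_user_time() above and assuming HZ of 1000 purely for the arithmetic (not part of the patch):

#include <stdio.h>

#define HZ                1000                 /* assumed for this example */
#define JIFFIES_TO_NS(t)  ((t) * (1000000000 / HZ))
#define NS_TO_PC(ns)      ((ns) * 100 / JIFFIES_TO_NS(1))

int main(void)
{
	unsigned long utime_pc = 0, utime_jiffies = 0;
	/* three schedule points inside one tick, 400us of user time each */
	unsigned long deltas[] = { 400000, 400000, 400000 };

	for (int i = 0; i < 3; i++) {
		utime_pc += NS_TO_PC(deltas[i]);  /* 40 per cent per update */
		if (utime_pc >= 100) {            /* a full tick has accumulated */
			utime_pc -= 100;
			utime_jiffies++;
		}
	}
	/* 120 per cent accumulated in total: one jiffy banked, 20 carried over */
	printf("%lu jiffy, %lu%% carried\n", utime_jiffies, utime_pc);
	return 0;
}
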
3158 + * This is called on clock ticks and on context switches.
3159 + * Bank in p->sched_time the ns elapsed since the last tick or switch.
3160 + * CPU scheduler quota accounting is also performed here in microseconds.
3161 + * sched_clock() occasionally returns bogus values, so
3162 + * some sanity checking is required. Time is supposed to be banked all the
3163 + * time so default to half a tick to make up for when sched_clock reverts
3164 + * to just returning jiffies, and for hardware that can't do tsc.
3167 +update_cpu_clock(struct rq *rq, struct task_struct *p, int tick)
3169 + long account_ns = rq->clock - rq->timekeep_clock;
3170 + struct task_struct *idle = rq->idle;
3171 + unsigned long account_pc;
3173 + if (unlikely(account_ns < 0))
3176 + account_pc = NS_TO_PC(account_ns);
3179 + int user_tick = user_mode(get_irq_regs());
3181 + /* Accurate tick timekeeping */
3183 + pc_user_time(rq, p, account_pc, account_ns);
3184 + else if (p != idle || (irq_count() != HARDIRQ_OFFSET))
3185 + pc_system_time(rq, p, HARDIRQ_OFFSET,
3186 + account_pc, account_ns);
3188 + pc_idle_time(rq, account_pc);
3190 + /* Accurate subtick timekeeping */
3192 + pc_idle_time(rq, account_pc);
3194 + pc_user_time(rq, p, account_pc, account_ns);
3197 + /* time_slice accounting is done in usecs to avoid overflow on 32bit */
3198 + if (rq->rq_policy != SCHED_FIFO && p != idle) {
3199 + long time_diff = rq->clock - rq->rq_last_ran;
3202 + * There should be at most one jiffy's worth, and it should not be
3203 + * negative or overflow. time_diff is only used for internal scheduler
3204 + * time_slice accounting.
3206 + if (unlikely(time_diff <= 0))
3207 + time_diff = JIFFIES_TO_NS(1) / 2;
3208 + else if (unlikely(time_diff > JIFFIES_TO_NS(1)))
3209 + time_diff = JIFFIES_TO_NS(1);
3211 + rq->rq_time_slice -= time_diff / 1000;
3213 + rq->rq_last_ran = rq->timekeep_clock = rq->clock;
3217 + * Return any ns on the sched_clock that have not yet been accounted in
3218 + * @p in case that task is currently running.
3220 + * Called with task_grq_lock() held.
3222 +static u64 do_task_delta_exec(struct task_struct *p, struct rq *rq)
3226 + if (p == rq->curr) {
3227 + update_rq_clock(rq);
3228 + ns = rq->clock - rq->rq_last_ran;
3229 + if (unlikely((s64)ns < 0))
3236 +unsigned long long task_delta_exec(struct task_struct *p)
3238 + unsigned long flags;
3242 + rq = task_grq_lock(p, &flags);
3243 + ns = do_task_delta_exec(p, rq);
3244 + task_grq_unlock(&flags);
3250 + * Return accounted runtime for the task.
3251 + * In case the task is currently running, return the runtime plus current's
3252 + * pending runtime that has not been accounted yet.
3254 +unsigned long long task_sched_runtime(struct task_struct *p)
3256 + unsigned long flags;
3257 + u64 ns, delta_exec;
3260 + rq = task_grq_lock(p, &flags);
3261 + ns = p->sched_time;
3262 + if (p == rq->curr) {
3263 + update_rq_clock(rq);
3264 + delta_exec = rq->clock - rq->rq_last_ran;
3265 + if (likely((s64)delta_exec > 0))
3268 + task_grq_unlock(&flags);
3274 + * Return sum_exec_runtime for the thread group.
3275 + * In case the task is currently running, return the sum plus current's
3276 + * pending runtime that has not been accounted yet.
3278 + * Note that the thread group might have other running tasks as well,
3279 + * so the return value does not include other pending runtime that other
3280 + * running tasks might have.
3282 +unsigned long long thread_group_sched_runtime(struct task_struct *p)
3284 + struct task_cputime totals;
3285 + unsigned long flags;
3289 + rq = task_grq_lock(p, &flags);
3290 + thread_group_cputime(p, &totals);
3291 + ns = totals.sum_exec_runtime + do_task_delta_exec(p, rq);
3292 + task_grq_unlock(&flags);
3297 +/* Compatibility crap for removal */
3298 +void account_user_time(struct task_struct *p, cputime_t cputime,
3299 + cputime_t cputime_scaled)
3303 +void account_idle_time(cputime_t cputime)
3308 + * Account guest cpu time to a process.
3309 + * @p: the process that the cpu time gets accounted to
3310 + * @cputime: the cpu time spent in virtual machine since the last update
3311 + * @cputime_scaled: cputime scaled by cpu frequency
3313 +static void account_guest_time(struct task_struct *p, cputime_t cputime,
3314 + cputime_t cputime_scaled)
3317 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3319 + tmp = cputime_to_cputime64(cputime);
3321 + /* Add guest time to process. */
3322 + p->utime = cputime_add(p->utime, cputime);
3323 + p->utimescaled = cputime_add(p->utimescaled, cputime_scaled);
3324 + p->gtime = cputime_add(p->gtime, cputime);
3326 + /* Add guest time to cpustat. */
3327 + cpustat->user = cputime64_add(cpustat->user, tmp);
3328 + cpustat->guest = cputime64_add(cpustat->guest, tmp);
3332 + * Account system cpu time to a process.
3333 + * @p: the process that the cpu time gets accounted to
3334 + * @hardirq_offset: the offset to subtract from hardirq_count()
3335 + * @cputime: the cpu time spent in kernel space since the last update
3336 + * @cputime_scaled: cputime scaled by cpu frequency
3337 + * This is for guest only now.
3339 +void account_system_time(struct task_struct *p, int hardirq_offset,
3340 + cputime_t cputime, cputime_t cputime_scaled)
3343 + if ((p->flags & PF_VCPU) && (irq_count() - hardirq_offset == 0))
3344 + account_guest_time(p, cputime, cputime_scaled);
3348 + * Account for involuntary wait time.
3349 + * @steal: the cpu time spent in involuntary wait
3351 +void account_steal_time(cputime_t cputime)
3353 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3354 + cputime64_t cputime64 = cputime_to_cputime64(cputime);
3356 + cpustat->steal = cputime64_add(cpustat->steal, cputime64);
3360 + * Account for idle time.
3361 + * @cputime: the cpu time spent in idle wait
3363 +static void account_idle_times(cputime_t cputime)
3365 + struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
3366 + cputime64_t cputime64 = cputime_to_cputime64(cputime);
3367 + struct rq *rq = this_rq();
3369 + if (atomic_read(&rq->nr_iowait) > 0)
3370 + cpustat->iowait = cputime64_add(cpustat->iowait, cputime64);
3372 + cpustat->idle = cputime64_add(cpustat->idle, cputime64);
3375 +#ifndef CONFIG_VIRT_CPU_ACCOUNTING
3377 +void account_process_tick(struct task_struct *p, int user_tick)
3382 + * Account multiple ticks of steal time.
3383 + * @p: the process from which the cpu time has been stolen
3384 + * @ticks: number of stolen ticks
3386 +void account_steal_ticks(unsigned long ticks)
3388 + account_steal_time(jiffies_to_cputime(ticks));
3392 + * Account multiple ticks of idle time.
3393 + * @ticks: number of idle ticks
3395 +void account_idle_ticks(unsigned long ticks)
3397 + account_idle_times(jiffies_to_cputime(ticks));
3402 + * Functions to test for when SCHED_ISO tasks have used their allocated
3403 + * quota as real time scheduling and convert them back to SCHED_NORMAL.
3404 + * Where possible, the data is tested lockless, to avoid grabbing grq_lock
3405 + * because the occasional inaccurate result won't matter. However the
3406 + * tick data is only ever modified under lock. iso_refractory is simply
3407 + * set to 0 or 1 so it's not worth grabbing the lock yet again for that.
3409 +static void set_iso_refractory(void)
3411 + grq.iso_refractory = 1;
3414 +static void clear_iso_refractory(void)
3416 + grq.iso_refractory = 0;
3420 + * Test if SCHED_ISO tasks have run longer than their allotted period as RT
3421 + * tasks and set the refractory flag if necessary. There is 10% hysteresis
3422 + * for unsetting the flag.
3424 +static unsigned int test_ret_isorefractory(struct rq *rq)
3426 + if (likely(!grq.iso_refractory)) {
3427 + if (grq.iso_ticks / ISO_PERIOD > sched_iso_cpu)
3428 + set_iso_refractory();
3430 + if (grq.iso_ticks / ISO_PERIOD < (sched_iso_cpu * 90 / 100))
3431 + clear_iso_refractory();
3433 + return grq.iso_refractory;
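
As a worked example of the threshold above (illustrative numbers only): with HZ=250 and 4 online cpus, ISO_PERIOD is 5 * 250 * 4 + 1 = 5001. Each tick on which an ISO (or RT) queue is running adds 100 to grq.iso_ticks and each tick without one decays it by roughly a 1/ISO_PERIOD share, so grq.iso_ticks / ISO_PERIOD approximates the percentage of the last five seconds, summed over all cpus, spent running ISO tasks. With the default sched_iso_cpu of 70 the refractory flag is set once that figure exceeds 70, and with the 10% hysteresis it is only cleared again when the figure drops below 63.
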
3436 +static void iso_tick(void)
3439 + grq.iso_ticks += 100;
3443 +/* No SCHED_ISO task was running so decrease grq.iso_ticks */
3444 +static inline void no_iso_tick(void)
3446 + if (grq.iso_ticks) {
3448 + grq.iso_ticks -= grq.iso_ticks / ISO_PERIOD + 1;
3449 + if (unlikely(grq.iso_refractory && grq.iso_ticks /
3450 + ISO_PERIOD < (sched_iso_cpu * 90 / 100)))
3451 + clear_iso_refractory();
3456 +static int rq_running_iso(struct rq *rq)
3458 + return rq->rq_prio == ISO_PRIO;
3461 +/* This manages tasks that have run out of timeslice during a scheduler_tick */
3462 +static void task_running_tick(struct rq *rq)
3464 + struct task_struct *p;
3467 + * If a SCHED_ISO task is running we increment the iso_ticks. In
3468 + * order to prevent SCHED_ISO tasks from causing starvation in the
3469 + * presence of true RT tasks we account those as iso_ticks as well.
3471 + if ((rt_queue(rq) || (iso_queue(rq) && !grq.iso_refractory))) {
3472 + if (grq.iso_ticks <= (ISO_PERIOD * 100) - 100)
3477 + if (iso_queue(rq)) {
3478 + if (unlikely(test_ret_isorefractory(rq))) {
3479 + if (rq_running_iso(rq)) {
3481 + * SCHED_ISO task is running as RT and limit
3482 + * has been hit. Force it to reschedule as
3483 + * SCHED_NORMAL by zeroing its time_slice
3485 + rq->rq_time_slice = 0;
3490 + /* SCHED_FIFO tasks never run out of timeslice. */
3491 + if (rq_idle(rq) || rq->rq_time_slice > 0 || rq->rq_policy == SCHED_FIFO)
3494 + /* p->time_slice <= 0. We only modify task_struct under grq lock */
3498 + set_tsk_need_resched(p);
3502 +void wake_up_idle_cpu(int cpu);
3505 + * This function gets called by the timer code, with HZ frequency.
3506 + * We call it with interrupts disabled. The data modified is all
3507 + * local to struct rq so we don't need to grab grq lock.
3509 +void scheduler_tick(void)
3511 + int cpu = smp_processor_id();
3512 + struct rq *rq = cpu_rq(cpu);
3514 + sched_clock_tick();
3515 + update_rq_clock(rq);
3516 + update_cpu_clock(rq, rq->curr, 1);
3518 + task_running_tick(rq);
3523 +#if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
3524 + defined(CONFIG_PREEMPT_TRACER))
3526 +static inline unsigned long get_parent_ip(unsigned long addr)
3528 + if (in_lock_functions(addr)) {
3529 + addr = CALLER_ADDR2;
3530 + if (in_lock_functions(addr))
3531 + addr = CALLER_ADDR3;
3536 +void __kprobes add_preempt_count(int val)
3538 +#ifdef CONFIG_DEBUG_PREEMPT
3542 + if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
3545 + preempt_count() += val;
3546 +#ifdef CONFIG_DEBUG_PREEMPT
3548 + * Spinlock count overflowing soon?
3550 + DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
3551 + PREEMPT_MASK - 10);
3553 + if (preempt_count() == val)
3554 + trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
3556 +EXPORT_SYMBOL(add_preempt_count);
3558 +void __kprobes sub_preempt_count(int val)
3560 +#ifdef CONFIG_DEBUG_PREEMPT
3564 + if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
3567 + * Is the spinlock portion underflowing?
3569 + if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
3570 + !(preempt_count() & PREEMPT_MASK)))
3574 + if (preempt_count() == val)
3575 + trace_preempt_on(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
3576 + preempt_count() -= val;
3578 +EXPORT_SYMBOL(sub_preempt_count);
3582 + * Deadline is "now" in jiffies + (offset by priority). Setting the deadline
3583 + * is the key to everything. It distributes cpu fairly amongst tasks of the
3584 + * same nice value, proportions cpu according to nice level, and means the
3585 + * task that woke up longest ago has the earliest deadline, thus
3586 + * ensuring that interactive tasks get low latency on wake up. The CPU
3587 + * proportion works out to the square of the virtual deadline difference, so
3588 + * this equation will give nice 19 about 3% CPU compared to nice 0.
3590 +static inline int prio_deadline_diff(int user_prio)
3592 + return (prio_ratios[user_prio] * rr_interval * HZ / (1000 * 100)) ? : 1;
3595 +static inline int task_deadline_diff(struct task_struct *p)
3597 + return prio_deadline_diff(TASK_USER_PRIO(p));
3600 +static inline int static_deadline_diff(int static_prio)
3602 + return prio_deadline_diff(USER_PRIO(static_prio));
3605 +static inline int longest_deadline_diff(void)
3607 + return prio_deadline_diff(39);
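The deadline offset is plain integer arithmetic, so the comment above can be sanity-checked outside the kernel. Below is a minimal userspace sketch, assuming for illustration a prio_ratios[] table that grows roughly 10% per priority level from a base of 128, an rr_interval of 6 ms and HZ of 1000; the real table and tunables are set up elsewhere in the patch, so treat the printed numbers as indicative only.

#include <stdio.h>

int main(void)
{
        /* All values below are illustrative assumptions, not the patch's. */
        const int HZ = 1000;
        const int rr_interval = 6;      /* round robin interval in ms */
        int prio_ratios[40];
        int up;

        prio_ratios[0] = 128;           /* assumed base for user_prio 0 */
        for (up = 1; up < 40; up++)     /* assume ~10% longer per level */
                prio_ratios[up] = prio_ratios[up - 1] * 11 / 10;

        for (up = 0; up < 40; up++)
                printf("user_prio %2d (nice %+3d): deadline offset %d jiffies\n",
                       up, up - 20,
                       (prio_ratios[up] * rr_interval * HZ / (1000 * 100)) ? : 1);
        return 0;
}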
3611 + * The time_slice is only refilled when it is empty and that is when we set a
3614 +static inline void time_slice_expired(struct task_struct *p)
3616 + reset_first_time_slice(p);
3617 + p->time_slice = timeslice();
3618 + p->deadline = jiffies + task_deadline_diff(p);
3621 +static inline void check_deadline(struct task_struct *p)
3623 + if (p->time_slice <= 0)
3624 + time_slice_expired(p);
3628 + * O(n) lookup of all tasks in the global runqueue. The real brainfuck
3629 + * of lock contention and O(n). It's not really O(n) as only the queued,
3630 + * but not running tasks are scanned, and it is only O(n) of the queued tasks
3631 + * in the worst case scenario, because the right task can often be found before scanning all of
3633 + * Tasks are selected in this order:
3634 + * Real time tasks are selected purely by their static priority and in the
3635 + * order they were queued, so the lowest value idx, and the first queued task
3636 + * of that priority value is chosen.
3637 + * If no real time tasks are found, the SCHED_ISO priority is checked, and
3638 + * all SCHED_ISO tasks have the same priority value, so they're selected by
3639 + * the earliest deadline value.
3640 + * If no SCHED_ISO tasks are found, SCHED_NORMAL tasks are selected by the
3641 + * earliest deadline.
3642 + * Finally if no SCHED_NORMAL tasks are found, SCHED_IDLEPRIO tasks are
3643 + * selected by the earliest deadline.
3644 + * Once deadlines are expired (jiffies has passed it) tasks are chosen in FIFO
3645 + * order. Note that very few tasks will be FIFO for very long because they
3646 + * only end up that way if they sleep for long or if there are enough fully
3647 + * cpu bound tasks to push the load to ~8 higher than the number of CPUs for
3650 +static inline struct
3651 +task_struct *earliest_deadline_task(struct rq *rq, struct task_struct *idle)
3653 + unsigned long dl, earliest_deadline = 0; /* Initialise to silence compiler */
3654 + struct task_struct *p, *edt;
3655 + unsigned int cpu = cpu_of(rq);
3656 + struct list_head *queue;
3661 + idx = find_next_bit(grq.prio_bitmap, PRIO_LIMIT, idx);
3662 + if (idx >= PRIO_LIMIT)
3664 + queue = grq.queue + idx;
3665 + list_for_each_entry(p, queue, run_list) {
3666 + /* Make sure cpu affinity is ok */
3667 + if (!cpu_isset(cpu, p->cpus_allowed))
3669 + if (idx < MAX_RT_PRIO) {
3670 + /* We found an rt task */
3675 + dl = p->deadline + cache_distance(task_rq(p), rq, p);
3678 + * Look for tasks with old deadlines and pick them in FIFO
3679 + * order, taking the first one found.
3681 + if (time_is_before_jiffies(dl)) {
3687 + * No rt tasks. Find the earliest deadline task. Now we're in
3688 + * O(n) territory. This is what we silenced the compiler for:
3689 + * edt will always start as idle.
3691 + if (edt == idle ||
3692 + time_before(dl, earliest_deadline)) {
3693 + earliest_deadline = dl;
3697 + if (edt == idle) {
3698 + if (++idx < PRIO_LIMIT)
3703 + take_task(rq, edt);
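To make the selection rules above concrete, here is a hypothetical, heavily simplified userspace model of the non-realtime part of the scan: an already-expired deadline is taken first in queue (FIFO) order, otherwise the earliest deadline wins. The task names and the fixed "now" value are made up, and realtime/ISO handling is assumed to have happened before this point.

#include <stdio.h>

struct toy_task {
        const char *name;
        long deadline;                  /* jiffy-like value */
};

/* Expired deadlines win in FIFO order; otherwise earliest deadline wins. */
static const struct toy_task *pick(const struct toy_task *q, int n, long now)
{
        const struct toy_task *edt = NULL;
        int i;

        for (i = 0; i < n; i++) {
                if (q[i].deadline < now)        /* already expired */
                        return &q[i];
                if (!edt || q[i].deadline < edt->deadline)
                        edt = &q[i];
        }
        return edt;
}

int main(void)
{
        const struct toy_task queue[] = {
                { "encoder", 140 },     /* CPU hog, deadline pushed out */
                { "editor",  110 },     /* recently woken, earlier deadline */
        };

        printf("picked: %s\n", pick(queue, 2, 100)->name);
        return 0;
}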
3709 + * Print scheduling while atomic bug:
3711 +static noinline void __schedule_bug(struct task_struct *prev)
3713 + struct pt_regs *regs = get_irq_regs();
3715 + printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
3716 + prev->comm, prev->pid, preempt_count());
3718 + debug_show_held_locks(prev);
3720 + if (irqs_disabled())
3721 + print_irqtrace_events(prev);
3730 + * Various schedule()-time debugging checks and statistics:
3732 +static inline void schedule_debug(struct task_struct *prev)
3735 + * Test if we are atomic. Since do_exit() needs to call into
3736 + * schedule() atomically, we ignore that path for now.
3737 + * Otherwise, whine if we are scheduling when we should not be.
3739 + if (unlikely(in_atomic_preempt_off() && !prev->exit_state))
3740 + __schedule_bug(prev);
3742 + profile_hit(SCHED_PROFILING, __builtin_return_address(0));
3744 + schedstat_inc(this_rq(), sched_count);
3745 +#ifdef CONFIG_SCHEDSTATS
3746 + if (unlikely(prev->lock_depth >= 0)) {
3747 + schedstat_inc(this_rq(), bkl_count);
3748 + schedstat_inc(prev, sched_info.bkl_count);
3754 + * The currently running task's information is all stored in rq local data
3755 + * which is only modified by the local CPU, thereby allowing the data to be
3756 + * changed without grabbing the grq lock.
3758 +static inline void set_rq_task(struct rq *rq, struct task_struct *p)
3760 + rq->rq_time_slice = p->time_slice;
3761 + rq->rq_deadline = p->deadline;
3762 + rq->rq_last_ran = p->last_ran;
3763 + rq->rq_policy = p->policy;
3764 + rq->rq_prio = p->prio;
3767 +static void reset_rq_task(struct rq *rq, struct task_struct *p)
3769 + rq->rq_policy = p->policy;
3770 + rq->rq_prio = p->prio;
3774 + * schedule() is the main scheduler function.
3776 +asmlinkage void __sched schedule(void)
3778 + struct task_struct *prev, *next, *idle;
3779 + unsigned long *switch_count;
3780 + int deactivate, cpu;
3784 + preempt_disable();
3786 + cpu = smp_processor_id();
3789 + rcu_qsctr_inc(cpu);
3791 + switch_count = &prev->nivcsw;
3793 + release_kernel_lock(prev);
3794 +need_resched_nonpreemptible:
3797 + schedule_debug(prev);
3799 + local_irq_disable();
3800 + update_rq_clock(rq);
3801 + update_cpu_clock(rq, prev, 0);
3804 + clear_tsk_need_resched(prev);
3806 + if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
3807 + if (unlikely(signal_pending_state(prev->state, prev)))
3808 + prev->state = TASK_RUNNING;
3811 + switch_count = &prev->nvcsw;
3814 + if (prev != idle) {
3815 + /* Update all the information stored on struct rq */
3816 + prev->time_slice = rq->rq_time_slice;
3817 + prev->deadline = rq->rq_deadline;
3818 + check_deadline(prev);
3819 + return_task(prev, deactivate);
3820 + /* Task changed affinity off this cpu */
3821 + if (unlikely(!cpus_intersects(prev->cpus_allowed,
3822 + cpumask_of_cpu(cpu))))
3823 + resched_suitable_idle(prev);
3826 + if (likely(queued_notrunning())) {
3827 + next = earliest_deadline_task(rq, idle);
3830 + schedstat_inc(rq, sched_goidle);
3834 + prefetch_stack(next);
3836 + if (task_idle(next))
3837 + set_cpuidle_map(cpu);
3839 + clear_cpuidle_map(cpu);
3841 + prev->last_ran = rq->clock;
3843 + if (likely(prev != next)) {
3844 + sched_info_switch(prev, next);
3846 + set_rq_task(rq, next);
3847 + grq.nr_switches++;
3853 + context_switch(rq, prev, next); /* unlocks the grq */
3855 + * the context switch might have flipped the stack from under
3856 + * us, hence refresh the local variables.
3858 + cpu = smp_processor_id();
3864 + if (unlikely(reacquire_kernel_lock(current) < 0))
3865 + goto need_resched_nonpreemptible;
3866 + preempt_enable_no_resched();
3867 + if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))
3868 + goto need_resched;
3870 +EXPORT_SYMBOL(schedule);
3872 +#ifdef CONFIG_PREEMPT
3874 + * this is the entry point to schedule() from in-kernel preemption
3875 + * off of preempt_enable. Kernel preemptions off return from interrupt
3876 + * occur there and call schedule directly.
3878 +asmlinkage void __sched preempt_schedule(void)
3880 + struct thread_info *ti = current_thread_info();
3883 + * If there is a non-zero preempt_count or interrupts are disabled,
3884 + * we do not want to preempt the current task. Just return..
3886 + if (likely(ti->preempt_count || irqs_disabled()))
3890 + add_preempt_count(PREEMPT_ACTIVE);
3892 + sub_preempt_count(PREEMPT_ACTIVE);
3895 + * Check again in case we missed a preemption opportunity
3896 + * between schedule and now.
3899 + } while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
3901 +EXPORT_SYMBOL(preempt_schedule);
3904 + * this is the entry point to schedule() from kernel preemption
3905 + * off of irq context.
3906 + * Note that this is called and returns with irqs disabled. This will
3907 + * protect us against recursive calling from irq.
3909 +asmlinkage void __sched preempt_schedule_irq(void)
3911 + struct thread_info *ti = current_thread_info();
3913 + /* Catch callers which need to be fixed */
3914 + BUG_ON(ti->preempt_count || !irqs_disabled());
3917 + add_preempt_count(PREEMPT_ACTIVE);
3918 + local_irq_enable();
3920 + local_irq_disable();
3921 + sub_preempt_count(PREEMPT_ACTIVE);
3924 + * Check again in case we missed a preemption opportunity
3925 + * between schedule and now.
3928 + } while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
3931 +#endif /* CONFIG_PREEMPT */
3933 +int default_wake_function(wait_queue_t *curr, unsigned mode, int sync,
3936 + return try_to_wake_up(curr->private, mode, sync);
3938 +EXPORT_SYMBOL(default_wake_function);
3941 + * The core wakeup function. Non-exclusive wakeups (nr_exclusive == 0) just
3942 + * wake everything up. If it's an exclusive wakeup (nr_exclusive == small +ve
3943 + * number) then we wake all the non-exclusive tasks and one exclusive task.
3945 + * There are circumstances in which we can try to wake a task which has already
3946 + * started to run but is not in state TASK_RUNNING. try_to_wake_up() returns
3947 + * zero in this (rare) case, and we handle it by continuing to scan the queue.
3949 +void __wake_up_common(wait_queue_head_t *q, unsigned int mode,
3950 + int nr_exclusive, int sync, void *key)
3952 + struct list_head *tmp, *next;
3954 + list_for_each_safe(tmp, next, &q->task_list) {
3955 + wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list);
3956 + unsigned int flags = curr->flags;
3958 + if (curr->func(curr, mode, sync, key) &&
3959 + (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
3965 + * __wake_up - wake up threads blocked on a waitqueue.
3966 + * @q: the waitqueue
3967 + * @mode: which threads
3968 + * @nr_exclusive: how many wake-one or wake-many threads to wake up
3969 + * @key: is directly passed to the wakeup function
3971 + * It may be assumed that this function implies a write memory barrier before
3972 + * changing the task state if and only if any tasks are woken up.
3974 +void __wake_up(wait_queue_head_t *q, unsigned int mode,
3975 + int nr_exclusive, void *key)
3977 + unsigned long flags;
3979 + spin_lock_irqsave(&q->lock, flags);
3980 + __wake_up_common(q, mode, nr_exclusive, 0, key);
3981 + spin_unlock_irqrestore(&q->lock, flags);
3983 +EXPORT_SYMBOL(__wake_up);
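As a usage sketch (not code from this patch), the ordinary wait-queue helpers below all funnel into __wake_up_common(): wake_up() wakes the non-exclusive waiters plus one exclusive waiter, while wake_up_all() passes nr_exclusive == 0 and wakes everybody. The demo_* names and the demo_ready flag are invented for illustration.

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
static int demo_ready;

/* Hypothetical consumer: sleeps as an exclusive waiter until demo_ready. */
static void demo_wait(void)
{
        DEFINE_WAIT(wait);

        for (;;) {
                prepare_to_wait_exclusive(&demo_wq, &wait, TASK_INTERRUPTIBLE);
                if (demo_ready)
                        break;
                schedule();
        }
        finish_wait(&demo_wq, &wait);
}

/* Hypothetical producer. */
static void demo_signal(void)
{
        demo_ready = 1;
        wake_up(&demo_wq);      /* non-exclusive waiters plus one exclusive */
        /* wake_up_all(&demo_wq) would wake every waiter instead */
}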
3986 + * Same as __wake_up but called with the spinlock in wait_queue_head_t held.
3988 +void __wake_up_locked(wait_queue_head_t *q, unsigned int mode)
3990 + __wake_up_common(q, mode, 1, 0, NULL);
3994 + * __wake_up_sync - wake up threads blocked on a waitqueue.
3995 + * @q: the waitqueue
3996 + * @mode: which threads
3997 + * @nr_exclusive: how many wake-one or wake-many threads to wake up
3999 + * The sync wakeup differs in that the waker knows that it will schedule
4000 + * away soon, so while the target thread will be woken up, it will not
4001 + * be migrated to another CPU - ie. the two threads are 'synchronised'
4002 + * with each other. This can prevent needless bouncing between CPUs.
4004 + * On UP it can prevent extra preemption.
4006 +void __wake_up_sync(wait_queue_head_t *q, unsigned int mode, int nr_exclusive)
4008 + unsigned long flags;
4014 + if (unlikely(!nr_exclusive))
4017 + spin_lock_irqsave(&q->lock, flags);
4018 + __wake_up_common(q, mode, nr_exclusive, sync, NULL);
4019 + spin_unlock_irqrestore(&q->lock, flags);
4021 +EXPORT_SYMBOL_GPL(__wake_up_sync); /* For internal use only */
4023 +void complete(struct completion *x)
4025 + unsigned long flags;
4027 + spin_lock_irqsave(&x->wait.lock, flags);
4029 + __wake_up_common(&x->wait, TASK_NORMAL, 1, 0, NULL);
4030 + spin_unlock_irqrestore(&x->wait.lock, flags);
4032 +EXPORT_SYMBOL(complete);
4034 +void complete_all(struct completion *x)
4036 + unsigned long flags;
4038 + spin_lock_irqsave(&x->wait.lock, flags);
4039 + x->done += UINT_MAX/2;
4040 + __wake_up_common(&x->wait, TASK_NORMAL, 0, 0, NULL);
4041 + spin_unlock_irqrestore(&x->wait.lock, flags);
4043 +EXPORT_SYMBOL(complete_all);
4045 +static inline long __sched
4046 +do_wait_for_common(struct completion *x, long timeout, int state)
4049 + DECLARE_WAITQUEUE(wait, current);
4051 + wait.flags |= WQ_FLAG_EXCLUSIVE;
4052 + __add_wait_queue_tail(&x->wait, &wait);
4054 + if ((state == TASK_INTERRUPTIBLE &&
4055 + signal_pending(current)) ||
4056 + (state == TASK_KILLABLE &&
4057 + fatal_signal_pending(current))) {
4058 + timeout = -ERESTARTSYS;
4061 + __set_current_state(state);
4062 + spin_unlock_irq(&x->wait.lock);
4063 + timeout = schedule_timeout(timeout);
4064 + spin_lock_irq(&x->wait.lock);
4065 + } while (!x->done && timeout);
4066 + __remove_wait_queue(&x->wait, &wait);
4071 + return timeout ?: 1;
4074 +static long __sched
4075 +wait_for_common(struct completion *x, long timeout, int state)
4079 + spin_lock_irq(&x->wait.lock);
4080 + timeout = do_wait_for_common(x, timeout, state);
4081 + spin_unlock_irq(&x->wait.lock);
4085 +void __sched wait_for_completion(struct completion *x)
4087 + wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_UNINTERRUPTIBLE);
4089 +EXPORT_SYMBOL(wait_for_completion);
4091 +unsigned long __sched
4092 +wait_for_completion_timeout(struct completion *x, unsigned long timeout)
4094 + return wait_for_common(x, timeout, TASK_UNINTERRUPTIBLE);
4096 +EXPORT_SYMBOL(wait_for_completion_timeout);
4098 +int __sched wait_for_completion_interruptible(struct completion *x)
4100 + long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_INTERRUPTIBLE);
4101 + if (t == -ERESTARTSYS)
4105 +EXPORT_SYMBOL(wait_for_completion_interruptible);
4107 +unsigned long __sched
4108 +wait_for_completion_interruptible_timeout(struct completion *x,
4109 + unsigned long timeout)
4111 + return wait_for_common(x, timeout, TASK_INTERRUPTIBLE);
4113 +EXPORT_SYMBOL(wait_for_completion_interruptible_timeout);
4115 +int __sched wait_for_completion_killable(struct completion *x)
4117 + long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_KILLABLE);
4118 + if (t == -ERESTARTSYS)
4122 +EXPORT_SYMBOL(wait_for_completion_killable);
4125 + * try_wait_for_completion - try to decrement a completion without blocking
4126 + * @x: completion structure
4128 + * Returns: 0 if a decrement cannot be done without blocking
4129 + * 1 if a decrement succeeded.
4131 + * If a completion is being used as a counting completion,
4132 + * attempt to decrement the counter without blocking. This
4133 + * enables us to avoid waiting if the resource the completion
4134 + * is protecting is not available.
4136 +bool try_wait_for_completion(struct completion *x)
4140 + spin_lock_irq(&x->wait.lock);
4145 + spin_unlock_irq(&x->wait.lock);
4148 +EXPORT_SYMBOL(try_wait_for_completion);
4151 + * completion_done - Test to see if a completion has any waiters
4152 + * @x: completion structure
4154 + * Returns: 0 if there are waiters (wait_for_completion() in progress)
4155 + * 1 if there are no waiters.
4158 +bool completion_done(struct completion *x)
4162 + spin_lock_irq(&x->wait.lock);
4165 + spin_unlock_irq(&x->wait.lock);
4168 +EXPORT_SYMBOL(completion_done);
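A minimal usage sketch for the completion interfaces above (the setup_done name and the producer/consumer split are invented for illustration): one side blocks in wait_for_completion(), the other signals with complete(), and try_wait_for_completion() is the non-blocking probe.

#include <linux/completion.h>
#include <linux/types.h>

static DECLARE_COMPLETION(setup_done);

static void consumer_wait(void)
{
        /* Sleeps until complete() has been called at least once. */
        wait_for_completion(&setup_done);
}

static bool consumer_poll(void)
{
        /* Consumes one "done" count if available, never blocks. */
        return try_wait_for_completion(&setup_done);
}

static void producer_signal(void)
{
        complete(&setup_done);          /* wakes one waiter */
}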
4170 +static long __sched
4171 +sleep_on_common(wait_queue_head_t *q, int state, long timeout)
4173 + unsigned long flags;
4174 + wait_queue_t wait;
4176 + init_waitqueue_entry(&wait, current);
4178 + __set_current_state(state);
4180 + spin_lock_irqsave(&q->lock, flags);
4181 + __add_wait_queue(q, &wait);
4182 + spin_unlock(&q->lock);
4183 + timeout = schedule_timeout(timeout);
4184 + spin_lock_irq(&q->lock);
4185 + __remove_wait_queue(q, &wait);
4186 + spin_unlock_irqrestore(&q->lock, flags);
4191 +void __sched interruptible_sleep_on(wait_queue_head_t *q)
4193 + sleep_on_common(q, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
4195 +EXPORT_SYMBOL(interruptible_sleep_on);
4198 +interruptible_sleep_on_timeout(wait_queue_head_t *q, long timeout)
4200 + return sleep_on_common(q, TASK_INTERRUPTIBLE, timeout);
4202 +EXPORT_SYMBOL(interruptible_sleep_on_timeout);
4204 +void __sched sleep_on(wait_queue_head_t *q)
4206 + sleep_on_common(q, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
4208 +EXPORT_SYMBOL(sleep_on);
4210 +long __sched sleep_on_timeout(wait_queue_head_t *q, long timeout)
4212 + return sleep_on_common(q, TASK_UNINTERRUPTIBLE, timeout);
4214 +EXPORT_SYMBOL(sleep_on_timeout);
4216 +#ifdef CONFIG_RT_MUTEXES
4219 + * rt_mutex_setprio - set the current priority of a task
4221 + * @prio: prio value (kernel-internal form)
4223 + * This function changes the 'effective' priority of a task. It does
4224 + * not touch ->normal_prio like __setscheduler().
4226 + * Used by the rt_mutex code to implement priority inheritance logic.
4228 +void rt_mutex_setprio(struct task_struct *p, int prio)
4230 + unsigned long flags;
4231 + int queued, oldprio;
4234 + BUG_ON(prio < 0 || prio > MAX_PRIO);
4236 + rq = time_task_grq_lock(p, &flags);
4238 + oldprio = p->prio;
4239 + queued = task_queued(p);
4243 + if (task_running(p) && prio > oldprio)
4247 + try_preempt(p, rq);
4250 + task_grq_unlock(&flags);
4256 + * Adjust the deadline for when the priority is to change, before it's
4259 +static inline void adjust_deadline(struct task_struct *p, int new_prio)
4261 + p->deadline += static_deadline_diff(new_prio) - task_deadline_diff(p);
4264 +void set_user_nice(struct task_struct *p, long nice)
4266 + int queued, new_static, old_static;
4267 + unsigned long flags;
4270 + if (TASK_NICE(p) == nice || nice < -20 || nice > 19)
4272 + new_static = NICE_TO_PRIO(nice);
4274 + * We have to be careful, if called from sys_setpriority(),
4275 + * the task might be in the middle of scheduling on another CPU.
4277 + rq = time_task_grq_lock(p, &flags);
4279 + * The RT priorities are set via sched_setscheduler(), but we still
4280 + * allow the 'normal' nice value to be set - but as expected
4281 + * it won't have any effect on scheduling as long as the task's policy
4282 + * is not SCHED_NORMAL/SCHED_BATCH:
4284 + if (has_rt_policy(p)) {
4285 + p->static_prio = new_static;
4288 + queued = task_queued(p);
4292 + adjust_deadline(p, new_static);
4293 + old_static = p->static_prio;
4294 + p->static_prio = new_static;
4295 + p->prio = effective_prio(p);
4299 + if (new_static < old_static)
4300 + try_preempt(p, rq);
4301 + } else if (task_running(p)) {
4302 + reset_rq_task(rq, p);
4303 + if (old_static < new_static)
4307 + task_grq_unlock(&flags);
4309 +EXPORT_SYMBOL(set_user_nice);
4312 + * can_nice - check if a task can reduce its nice value
4314 + * @nice: nice value
4316 +int can_nice(const struct task_struct *p, const int nice)
4318 + /* convert nice value [19,-20] to rlimit style value [1,40] */
4319 + int nice_rlim = 20 - nice;
4321 + return (nice_rlim <= p->signal->rlim[RLIMIT_NICE].rlim_cur ||
4322 + capable(CAP_SYS_NICE));
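From userspace, this check is what decides whether a nice change is allowed: lowering priority (a positive increment) is always permitted, while raising it back needs CAP_SYS_NICE or a sufficient RLIMIT_NICE. A small illustrative program, not part of the patch:

#include <unistd.h>
#include <stdio.h>
#include <errno.h>

int main(void)
{
        int val;

        errno = 0;
        val = nice(5);          /* always allowed: lowers priority */
        if (val == -1 && errno)
                perror("nice(5)");
        else
                printf("nice value now %d\n", val);

        errno = 0;
        val = nice(-5);         /* may fail without CAP_SYS_NICE/RLIMIT_NICE */
        if (val == -1 && errno)
                perror("nice(-5)");
        else
                printf("nice value now %d\n", val);
        return 0;
}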
4325 +#ifdef __ARCH_WANT_SYS_NICE
4328 + * sys_nice - change the priority of the current process.
4329 + * @increment: priority increment
4331 + * sys_setpriority is a more generic, but much slower function that
4332 + * does similar things.
4334 +asmlinkage long sys_nice(int increment)
4336 + long nice, retval;
4339 + * Setpriority might change our priority at the same moment.
4340 + * We don't have to worry. Conceptually one call occurs first
4341 + * and we have a single winner.
4343 + if (increment < -40)
4345 + if (increment > 40)
4348 + nice = PRIO_TO_NICE(current->static_prio) + increment;
4354 + if (increment < 0 && !can_nice(current, nice))
4357 + retval = security_task_setnice(current, nice);
4361 + set_user_nice(current, nice);
4368 + * task_prio - return the priority value of a given task.
4369 + * @p: the task in question.
4371 + * This is the priority value as seen by users in /proc.
4372 + * RT tasks are offset by -100. Normal tasks are centered around 1, value goes
4373 + * from 0 (SCHED_ISO) up to 82 (nice +19 SCHED_IDLEPRIO).
4375 +int task_prio(const struct task_struct *p)
4377 + int delta, prio = p->prio - MAX_RT_PRIO;
4379 + /* rt tasks and iso tasks */
4383 + delta = (p->deadline - jiffies) * 40 / longest_deadline_diff();
4384 + if (delta > 0 && delta <= 80)
4386 + if (idleprio_task(p))
4393 + * task_nice - return the nice value of a given task.
4394 + * @p: the task in question.
4396 +int task_nice(const struct task_struct *p)
4398 + return TASK_NICE(p);
4400 +EXPORT_SYMBOL_GPL(task_nice);
4403 + * idle_cpu - is a given cpu idle currently?
4404 + * @cpu: the processor in question.
4406 +int idle_cpu(int cpu)
4408 + return cpu_curr(cpu) == cpu_rq(cpu)->idle;
4412 + * idle_task - return the idle task for a given cpu.
4413 + * @cpu: the processor in question.
4415 +struct task_struct *idle_task(int cpu)
4417 + return cpu_rq(cpu)->idle;
4421 + * find_process_by_pid - find a process with a matching PID value.
4422 + * @pid: the pid in question.
4424 +static inline struct task_struct *find_process_by_pid(pid_t pid)
4426 + return pid ? find_task_by_vpid(pid) : current;
4429 +/* Actually do priority change: must hold grq lock. */
4431 +__setscheduler(struct task_struct *p, struct rq *rq, int policy, int prio)
4433 + int oldrtprio, oldprio;
4435 + BUG_ON(task_queued(p));
4437 + p->policy = policy;
4438 + oldrtprio = p->rt_priority;
4439 + p->rt_priority = prio;
4440 + p->normal_prio = normal_prio(p);
4441 + oldprio = p->prio;
4442 + /* we are holding p->pi_lock already */
4443 + p->prio = rt_mutex_getprio(p);
4444 + if (task_running(p)) {
4445 + reset_rq_task(rq, p);
4446 + /* Resched only if we might now be preempted */
4447 + if (p->prio > oldprio || p->rt_priority > oldrtprio)
4452 +static int __sched_setscheduler(struct task_struct *p, int policy,
4453 + struct sched_param *param, bool user)
4455 + struct sched_param zero_param = { .sched_priority = 0 };
4456 + int queued, retval, oldpolicy = -1;
4457 + unsigned long flags, rlim_rtprio = 0;
4460 + /* may grab non-irq protected spin_locks */
4461 + BUG_ON(in_interrupt());
4463 + if (is_rt_policy(policy) && !capable(CAP_SYS_NICE)) {
4464 + unsigned long lflags;
4466 + if (!lock_task_sighand(p, &lflags))
4468 + rlim_rtprio = p->signal->rlim[RLIMIT_RTPRIO].rlim_cur;
4469 + unlock_task_sighand(p, &lflags);
4473 + * If the caller requested an RT policy without having the
4474 + * necessary rights, we downgrade the policy to SCHED_ISO.
4475 + * We also set the parameter to zero to pass the checks.
4477 + policy = SCHED_ISO;
4478 + param = &zero_param;
4481 + /* double check policy once rq lock held */
4483 + policy = oldpolicy = p->policy;
4484 + else if (!SCHED_RANGE(policy))
4487 + * Valid priorities for SCHED_FIFO and SCHED_RR are
4488 + * 1..MAX_USER_RT_PRIO-1, valid priority for SCHED_NORMAL and
4489 + * SCHED_BATCH is 0.
4491 + if (param->sched_priority < 0 ||
4492 + (p->mm && param->sched_priority > MAX_USER_RT_PRIO-1) ||
4493 + (!p->mm && param->sched_priority > MAX_RT_PRIO-1))
4495 + if (is_rt_policy(policy) != (param->sched_priority != 0))
4499 + * Allow unprivileged RT tasks to decrease priority:
4501 + if (user && !capable(CAP_SYS_NICE)) {
4502 + if (is_rt_policy(policy)) {
4503 + /* can't set/change the rt policy */
4504 + if (policy != p->policy && !rlim_rtprio)
4507 + /* can't increase priority */
4508 + if (param->sched_priority > p->rt_priority &&
4509 + param->sched_priority > rlim_rtprio)
4512 + switch (p->policy) {
4514 + * Can only downgrade policies but not back to
4518 + if (policy == SCHED_ISO)
4520 + if (policy == SCHED_NORMAL)
4524 + if (policy == SCHED_BATCH)
4527 + * ANDROID: Allow tasks to move between
4528 + * SCHED_NORMAL <-> SCHED_BATCH
4530 + if (policy == SCHED_NORMAL)
4532 + if (policy != SCHED_IDLEPRIO)
4535 + case SCHED_IDLEPRIO:
4536 + if (policy == SCHED_IDLEPRIO)
4544 + /* can't change other user's priorities */
4545 + if ((current->euid != p->euid) &&
4546 + (current->euid != p->uid))
4550 + retval = security_task_setscheduler(p, policy, param);
4554 + * make sure no PI-waiters arrive (or leave) while we are
4555 + * changing the priority of the task:
4557 + spin_lock_irqsave(&p->pi_lock, flags);
4559 + * To be able to change p->policy safely, the appropriate
4560 + * runqueue lock must be held.
4562 + rq = __task_grq_lock(p);
4563 + /* recheck policy now with rq lock held */
4564 + if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
4565 + __task_grq_unlock();
4566 + spin_unlock_irqrestore(&p->pi_lock, flags);
4567 + policy = oldpolicy = -1;
4570 + update_rq_clock(rq);
4571 + queued = task_queued(p);
4574 + __setscheduler(p, rq, policy, param->sched_priority);
4577 + try_preempt(p, rq);
4579 + __task_grq_unlock();
4580 + spin_unlock_irqrestore(&p->pi_lock, flags);
4582 + rt_mutex_adjust_pi(p);
4588 + * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
4589 + * @p: the task in question.
4590 + * @policy: new policy.
4591 + * @param: structure containing the new RT priority.
4593 + * NOTE that the task may be already dead.
4595 +int sched_setscheduler(struct task_struct *p, int policy,
4596 + struct sched_param *param)
4598 + return __sched_setscheduler(p, policy, param, true);
4601 +EXPORT_SYMBOL_GPL(sched_setscheduler);
4604 + * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
4605 + * @p: the task in question.
4606 + * @policy: new policy.
4607 + * @param: structure containing the new RT priority.
4609 + * Just like sched_setscheduler, only don't bother checking if the
4610 + * current context has permission. For example, this is needed in
4611 + * stop_machine(): we create temporary high priority worker threads,
4612 + * but our caller might not have that capability.
4614 +int sched_setscheduler_nocheck(struct task_struct *p, int policy,
4615 + struct sched_param *param)
4617 + return __sched_setscheduler(p, policy, param, false);
4621 +do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
4623 + struct sched_param lparam;
4624 + struct task_struct *p;
4627 + if (!param || pid < 0)
4629 + if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
4634 + p = find_process_by_pid(pid);
4636 + retval = sched_setscheduler(p, policy, &lparam);
4637 + rcu_read_unlock();
4643 + * sys_sched_setscheduler - set/change the scheduler policy and RT priority
4644 + * @pid: the pid in question.
4645 + * @policy: new policy.
4646 + * @param: structure containing the new RT priority.
4648 +asmlinkage long sys_sched_setscheduler(pid_t pid, int policy,
4649 + struct sched_param __user *param)
4651 + /* negative values for policy are not valid */
4655 + return do_sched_setscheduler(pid, policy, param);
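From userspace the same path is reached through the libc wrapper; the sketch below is illustrative only, with an arbitrary priority and minimal error handling. Note that, as described above, an unprivileged RT request may be downgraded to SCHED_ISO by this scheduler rather than rejected outright.

#include <sched.h>
#include <stdio.h>

int main(void)
{
        struct sched_param sp = { .sched_priority = 10 };

        if (sched_setscheduler(0, SCHED_RR, &sp) == -1)
                perror("sched_setscheduler");
        printf("policy is now %d\n", sched_getscheduler(0));
        return 0;
}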
4659 + * sys_sched_setparam - set/change the RT priority of a thread
4660 + * @pid: the pid in question.
4661 + * @param: structure containing the new RT priority.
4663 +asmlinkage long sys_sched_setparam(pid_t pid, struct sched_param __user *param)
4665 + return do_sched_setscheduler(pid, -1, param);
4669 + * sys_sched_getscheduler - get the policy (scheduling class) of a thread
4670 + * @pid: the pid in question.
4672 +asmlinkage long sys_sched_getscheduler(pid_t pid)
4674 + struct task_struct *p;
4675 + int retval = -EINVAL;
4678 + goto out_nounlock;
4681 + read_lock(&tasklist_lock);
4682 + p = find_process_by_pid(pid);
4684 + retval = security_task_getscheduler(p);
4686 + retval = p->policy;
4688 + read_unlock(&tasklist_lock);
4695 + * sys_sched_getparam - get the RT priority of a thread
4696 + * @pid: the pid in question.
4697 + * @param: structure containing the RT priority.
4699 +asmlinkage long sys_sched_getparam(pid_t pid, struct sched_param __user *param)
4701 + struct sched_param lp;
4702 + struct task_struct *p;
4703 + int retval = -EINVAL;
4705 + if (!param || pid < 0)
4706 + goto out_nounlock;
4708 + read_lock(&tasklist_lock);
4709 + p = find_process_by_pid(pid);
4714 + retval = security_task_getscheduler(p);
4718 + lp.sched_priority = p->rt_priority;
4719 + read_unlock(&tasklist_lock);
4722 + * This one might sleep, we cannot do it with a spinlock held ...
4724 + retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
4730 + read_unlock(&tasklist_lock);
4734 +long sched_setaffinity(pid_t pid, const cpumask_t *in_mask)
4736 + cpumask_t cpus_allowed;
4737 + cpumask_t new_mask = *in_mask;
4738 + struct task_struct *p;
4741 + get_online_cpus();
4742 + read_lock(&tasklist_lock);
4744 + p = find_process_by_pid(pid);
4746 + read_unlock(&tasklist_lock);
4747 + put_online_cpus();
4752 + * It is not safe to call set_cpus_allowed with the
4753 + * tasklist_lock held. We will bump the task_struct's
4754 + * usage count and then drop tasklist_lock.
4756 + get_task_struct(p);
4757 + read_unlock(&tasklist_lock);
4760 + if ((current->euid != p->euid) && (current->euid != p->uid) &&
4761 + !capable(CAP_SYS_NICE))
4764 + retval = security_task_setscheduler(p, 0, NULL);
4768 + cpuset_cpus_allowed(p, &cpus_allowed);
4769 + cpus_and(new_mask, new_mask, cpus_allowed);
4771 + retval = set_cpus_allowed_ptr(p, &new_mask);
4774 + cpuset_cpus_allowed(p, &cpus_allowed);
4775 + if (!cpus_subset(new_mask, cpus_allowed)) {
4777 + * We must have raced with a concurrent cpuset
4778 + * update. Just reset the cpus_allowed to the
4779 + * cpuset's cpus_allowed
4781 + new_mask = cpus_allowed;
4786 + put_task_struct(p);
4787 + put_online_cpus();
4791 +static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
4792 + cpumask_t *new_mask)
4794 + if (len < sizeof(cpumask_t)) {
4795 + memset(new_mask, 0, sizeof(cpumask_t));
4796 + } else if (len > sizeof(cpumask_t)) {
4797 + len = sizeof(cpumask_t);
4799 + return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
4803 + * sys_sched_setaffinity - set the cpu affinity of a process
4804 + * @pid: pid of the process
4805 + * @len: length in bytes of the bitmask pointed to by user_mask_ptr
4806 + * @user_mask_ptr: user-space pointer to the new cpu mask
4808 +asmlinkage long sys_sched_setaffinity(pid_t pid, unsigned int len,
4809 + unsigned long __user *user_mask_ptr)
4811 + cpumask_t new_mask;
4814 + retval = get_user_cpu_mask(user_mask_ptr, len, &new_mask);
4818 + return sched_setaffinity(pid, &new_mask);
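A userspace counterpart using the glibc wrapper and the CPU_* macros; pinning the calling process to CPU 0 is an arbitrary illustrative choice and error handling is minimal:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof(set), &set) == -1)
                perror("sched_setaffinity");
        return 0;
}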
4821 +long sched_getaffinity(pid_t pid, cpumask_t *mask)
4823 + struct task_struct *p;
4826 + get_online_cpus();
4827 + read_lock(&tasklist_lock);
4830 + p = find_process_by_pid(pid);
4834 + retval = security_task_getscheduler(p);
4838 + cpus_and(*mask, p->cpus_allowed, cpu_online_map);
4841 + read_unlock(&tasklist_lock);
4842 + put_online_cpus();
4848 + * sys_sched_getaffinity - get the cpu affinity of a process
4849 + * @pid: pid of the process
4850 + * @len: length in bytes of the bitmask pointed to by user_mask_ptr
4851 + * @user_mask_ptr: user-space pointer to hold the current cpu mask
4853 +asmlinkage long sys_sched_getaffinity(pid_t pid, unsigned int len,
4854 + unsigned long __user *user_mask_ptr)
4859 + if (len < sizeof(cpumask_t))
4862 + ret = sched_getaffinity(pid, &mask);
4866 + if (copy_to_user(user_mask_ptr, &mask, sizeof(cpumask_t)))
4869 + return sizeof(cpumask_t);
4873 + * sys_sched_yield - yield the current processor to other threads.
4875 + * This function yields the current CPU to other tasks. It does this by
4876 + * scheduling away the current task. If it still has the earliest deadline
4877 + * it will be scheduled again as the next task.
4879 +asmlinkage long sys_sched_yield(void)
4881 + struct task_struct *p;
4885 + rq = task_grq_lock_irq(p);
4886 + schedstat_inc(rq, yld_count);
4890 + * Since we are going to call schedule() anyway, there's
4891 + * no need to preempt or enable interrupts:
4893 + __release(grq.lock);
4894 + spin_release(&grq.lock.dep_map, 1, _THIS_IP_);
4895 + _raw_spin_unlock(&grq.lock);
4896 + preempt_enable_no_resched();
4903 +static void __cond_resched(void)
4905 + /* NOT a real fix but will make voluntary preempt work. (A silly thing to do.) */
4906 + if (unlikely(system_state != SYSTEM_RUNNING))
4908 +#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
4909 + __might_sleep(__FILE__, __LINE__);
4912 + * The BKS might be reacquired before we have dropped
4913 + * PREEMPT_ACTIVE, which could trigger a second
4914 + * cond_resched() call.
4917 + add_preempt_count(PREEMPT_ACTIVE);
4919 + sub_preempt_count(PREEMPT_ACTIVE);
4920 + } while (need_resched());
4923 +int __sched _cond_resched(void)
4925 + if (need_resched() && !(preempt_count() & PREEMPT_ACTIVE) &&
4926 + system_state == SYSTEM_RUNNING) {
4932 +EXPORT_SYMBOL(_cond_resched);
4935 + * cond_resched_lock() - if a reschedule is pending, drop the given lock,
4936 + * call schedule, and on return reacquire the lock.
4938 + * This works OK both with and without CONFIG_PREEMPT. We do strange low-level
4939 + * operations here to prevent schedule() from being called twice (once via
4940 + * spin_unlock(), once by hand).
4942 +int cond_resched_lock(spinlock_t *lock)
4944 + int resched = need_resched() && system_state == SYSTEM_RUNNING;
4947 + if (spin_needbreak(lock) || resched) {
4948 + spin_unlock(lock);
4949 + if (resched && need_resched())
4958 +EXPORT_SYMBOL(cond_resched_lock);
4960 +int __sched cond_resched_softirq(void)
4962 + BUG_ON(!in_softirq());
4964 + if (need_resched() && system_state == SYSTEM_RUNNING) {
4965 + local_bh_enable();
4967 + local_bh_disable();
4972 +EXPORT_SYMBOL(cond_resched_softirq);
4975 + * yield - yield the current processor to other threads.
4977 + * This is a shortcut for kernel-space yielding - it marks the
4978 + * thread runnable and calls sys_sched_yield().
4980 +void __sched yield(void)
4982 + set_current_state(TASK_RUNNING);
4983 + sys_sched_yield();
4985 +EXPORT_SYMBOL(yield);
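The userspace equivalent of this path is sched_yield(2); under the design above the caller is simply rescheduled and, if it still holds the earliest deadline, may run again immediately. A trivial illustrative call:

#include <sched.h>

int main(void)
{
        /* Give up the CPU; we may be picked again at once if still earliest. */
        sched_yield();
        return 0;
}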
4988 + * This task is about to go to sleep on IO. Increment rq->nr_iowait so
4989 + * that process accounting knows that this is a task in IO wait state.
4991 + * But don't do that if it is a deliberate, throttling IO wait (this task
4992 + * has set its backing_dev_info: the queue against which it should throttle)
4994 +void __sched io_schedule(void)
4996 + struct rq *rq = &__raw_get_cpu_var(runqueues);
4998 + delayacct_blkio_start();
4999 + atomic_inc(&rq->nr_iowait);
5001 + atomic_dec(&rq->nr_iowait);
5002 + delayacct_blkio_end();
5004 +EXPORT_SYMBOL(io_schedule);
5006 +long __sched io_schedule_timeout(long timeout)
5008 + struct rq *rq = &__raw_get_cpu_var(runqueues);
5011 + delayacct_blkio_start();
5012 + atomic_inc(&rq->nr_iowait);
5013 + ret = schedule_timeout(timeout);
5014 + atomic_dec(&rq->nr_iowait);
5015 + delayacct_blkio_end();
5020 + * sys_sched_get_priority_max - return maximum RT priority.
5021 + * @policy: scheduling class.
5023 + * this syscall returns the maximum rt_priority that can be used
5024 + * by a given scheduling class.
5026 +asmlinkage long sys_sched_get_priority_max(int policy)
5028 + int ret = -EINVAL;
5033 + ret = MAX_USER_RT_PRIO-1;
5035 + case SCHED_NORMAL:
5038 + case SCHED_IDLEPRIO:
5046 + * sys_sched_get_priority_min - return minimum RT priority.
5047 + * @policy: scheduling class.
5049 + * this syscall returns the minimum rt_priority that can be used
5050 + * by a given scheduling class.
5052 +asmlinkage long sys_sched_get_priority_min(int policy)
5054 + int ret = -EINVAL;
5061 + case SCHED_NORMAL:
5064 + case SCHED_IDLEPRIO:
5072 + * sys_sched_rr_get_interval - return the default timeslice of a process.
5073 + * @pid: pid of the process.
5074 + * @interval: userspace pointer to the timeslice value.
5076 + * this syscall writes the default timeslice value of a given process
5077 + * into the user-space timespec buffer. A value of '0' means infinity.
5080 +long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval)
5082 + struct task_struct *p;
5084 + struct timespec t;
5090 + read_lock(&tasklist_lock);
5091 + p = find_process_by_pid(pid);
5095 + retval = security_task_getscheduler(p);
5099 + t = ns_to_timespec(p->policy == SCHED_FIFO ? 0 :
5100 + MS_TO_NS(task_timeslice(p)));
5101 + read_unlock(&tasklist_lock);
5102 + retval = copy_to_user(interval, &t, sizeof(t)) ? -EFAULT : 0;
5106 + read_unlock(&tasklist_lock);
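Userspace can read the same value back through sched_rr_get_interval(2); the small illustrative program below prints it (a SCHED_FIFO task reports 0, meaning an effectively infinite slice):

#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
        struct timespec ts;

        if (sched_rr_get_interval(0, &ts) == 0)
                printf("timeslice: %ld.%09ld seconds\n",
                       (long)ts.tv_sec, ts.tv_nsec);
        else
                perror("sched_rr_get_interval");
        return 0;
}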
5110 +static const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
5112 +void sched_show_task(struct task_struct *p)
5114 + unsigned long free = 0;
5117 + state = p->state ? __ffs(p->state) + 1 : 0;
5118 + printk(KERN_INFO "%-13.13s %c", p->comm,
5119 + state < sizeof(stat_nam) - 1 ? stat_nam[state] : '?');
5120 +#if BITS_PER_LONG == 32
5121 + if (state == TASK_RUNNING)
5122 + printk(KERN_CONT " running ");
5124 + printk(KERN_CONT " %08lx ", thread_saved_pc(p));
5126 + if (state == TASK_RUNNING)
5127 + printk(KERN_CONT " running task ");
5129 + printk(KERN_CONT " %016lx ", thread_saved_pc(p));
5131 +#ifdef CONFIG_DEBUG_STACK_USAGE
5133 + unsigned long *n = end_of_stack(p);
5136 + free = (unsigned long)n - (unsigned long)end_of_stack(p);
5139 + printk(KERN_CONT "%5lu %5d %6d\n", free,
5140 + task_pid_nr(p), task_pid_nr(p->real_parent));
5142 + show_stack(p, NULL);
5145 +void show_state_filter(unsigned long state_filter)
5147 + struct task_struct *g, *p;
5149 +#if BITS_PER_LONG == 32
5151 + " task PC stack pid father\n");
5154 + " task PC stack pid father\n");
5156 + read_lock(&tasklist_lock);
5157 + do_each_thread(g, p) {
5159 + * reset the NMI-timeout, listing all files on a slow
5160 + * console might take a lot of time:
5162 + touch_nmi_watchdog();
5163 + if (!state_filter || (p->state & state_filter))
5164 + sched_show_task(p);
5165 + } while_each_thread(g, p);
5167 + touch_all_softlockup_watchdogs();
5169 + read_unlock(&tasklist_lock);
5171 + * Only show locks if all tasks are dumped:
5173 + if (state_filter == -1)
5174 + debug_show_all_locks();
5178 + * init_idle - set up an idle thread for a given CPU
5179 + * @idle: task in question
5180 + * @cpu: cpu the idle task belongs to
5182 + * NOTE: this function does not set the idle thread's NEED_RESCHED
5183 + * flag, to make booting more robust.
5185 +void init_idle(struct task_struct *idle, int cpu)
5187 + struct rq *rq = cpu_rq(cpu);
5188 + unsigned long flags;
5190 + time_grq_lock(rq, &flags);
5191 + idle->last_ran = rq->clock;
5192 + idle->state = TASK_RUNNING;
5193 + /* Setting prio to illegal value shouldn't matter when never queued */
5194 + idle->prio = PRIO_LIMIT;
5195 + set_rq_task(rq, idle);
5196 + idle->cpus_allowed = cpumask_of_cpu(cpu);
5197 + set_task_cpu(idle, cpu);
5198 + rq->curr = rq->idle = idle;
5200 + set_cpuidle_map(cpu);
5201 +#ifdef CONFIG_HOTPLUG_CPU
5202 + idle->unplugged_mask = CPU_MASK_NONE;
5204 + grq_unlock_irqrestore(&flags);
5206 + /* Set the preempt count _outside_ the spinlocks! */
5207 +#if defined(CONFIG_PREEMPT) && !defined(CONFIG_PREEMPT_BKL)
5208 + task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
5210 + task_thread_info(idle)->preempt_count = 0;
5215 + * In a system that switches off the HZ timer nohz_cpu_mask
5216 + * indicates which cpus entered this state. This is used
5217 + * in the rcu update to wait only for active cpus. For systems
5218 + * which do not switch off the HZ timer nohz_cpu_mask should
5219 + * always be CPU_MASK_NONE.
5221 +cpumask_t nohz_cpu_mask = CPU_MASK_NONE;
5224 +#ifdef CONFIG_NO_HZ
5226 + atomic_t load_balancer;
5227 + cpumask_t cpu_mask;
5228 +} nohz ____cacheline_aligned = {
5229 + .load_balancer = ATOMIC_INIT(-1),
5230 + .cpu_mask = CPU_MASK_NONE,
5234 + * This routine will try to nominate the ilb (idle load balancing)
5235 + * owner among the cpus whose ticks are stopped. ilb owner will do the idle
5236 + * load balancing on behalf of all those cpus. If all the cpus in the system
5237 + * go into this tickless mode, then there will be no ilb owner (as there is
5238 + * no need for one) and all the cpus will sleep till the next wakeup event
5241 + * For the ilb owner, the tick is not stopped, and this tick will be used
5242 + * for idle load balancing. ilb owner will still be part of
5245 + * While stopping the tick, this cpu will become the ilb owner if there
5246 + * is no other owner. And will be the owner till that cpu becomes busy
5247 + * or if all cpus in the system stop their ticks at which point
5248 + * there is no need for ilb owner.
5250 + * When the ilb owner becomes busy, it nominates another owner, during the
5251 + * next busy scheduler_tick()
5253 +int select_nohz_load_balancer(int stop_tick)
5255 + int cpu = smp_processor_id();
5258 + cpu_set(cpu, nohz.cpu_mask);
5259 + cpu_rq(cpu)->in_nohz_recently = 1;
5262 + * If we are going offline and still the leader, give up!
5264 + if (!cpu_active(cpu) &&
5265 + atomic_read(&nohz.load_balancer) == cpu) {
5266 + if (atomic_cmpxchg(&nohz.load_balancer, cpu, -1) != cpu)
5271 + /* time for ilb owner also to sleep */
5272 + if (cpus_weight(nohz.cpu_mask) == num_online_cpus()) {
5273 + if (atomic_read(&nohz.load_balancer) == cpu)
5274 + atomic_set(&nohz.load_balancer, -1);
5278 + if (atomic_read(&nohz.load_balancer) == -1) {
5279 + /* make me the ilb owner */
5280 + if (atomic_cmpxchg(&nohz.load_balancer, -1, cpu) == -1)
5282 + } else if (atomic_read(&nohz.load_balancer) == cpu)
5285 + if (!cpu_isset(cpu, nohz.cpu_mask))
5288 + cpu_clear(cpu, nohz.cpu_mask);
5290 + if (atomic_read(&nohz.load_balancer) == cpu)
5291 + if (atomic_cmpxchg(&nohz.load_balancer, cpu, -1) != cpu)
5298 + * When add_timer_on() enqueues a timer into the timer wheel of an
5299 + * idle CPU then this timer might expire before the next timer event
5300 + * which is scheduled to wake up that CPU. In case of a completely
5301 + * idle system the next event might even be infinite time into the
5302 + * future. wake_up_idle_cpu() ensures that the CPU is woken up and
5303 + * leaves the inner idle loop so the newly added timer is taken into
5304 + * account when the CPU goes back to idle and evaluates the timer
5305 + * wheel for the next timer event.
5307 +void wake_up_idle_cpu(int cpu)
5309 + struct task_struct *idle;
5312 + if (cpu == smp_processor_id())
5319 + * This is safe, as this function is called with the timer
5320 + * wheel base lock of (cpu) held. When the CPU is on the way
5321 + * to idle and has not yet set rq->curr to idle then it will
5322 + * be serialised on the timer wheel base lock and take the new
5323 + * timer into account automatically.
5325 + if (unlikely(rq->curr != idle))
5329 + * We can set TIF_RESCHED on the idle task of the other CPU
5330 + * lockless. The worst case is that the other CPU runs the
5331 + * idle task through an additional NOOP schedule()
5333 + set_tsk_thread_flag(idle, TIF_NEED_RESCHED);
5335 + /* NEED_RESCHED must be visible before we test polling */
5337 + if (!tsk_is_polling(idle))
5338 + smp_send_reschedule(cpu);
5341 +#endif /* CONFIG_NO_HZ */
5344 + * Change a given task's CPU affinity. Migrate the thread to a
5345 + * proper CPU and schedule it away if the CPU it's executing on
5346 + * is removed from the allowed bitmask.
5348 + * NOTE: the caller must have a valid reference to the task, the
5349 + * task must not exit() & deallocate itself prematurely. The
5350 + * call is not atomic; no spinlocks may be held.
5352 +int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
5354 + unsigned long flags;
5355 + int running_wrong = 0;
5360 + rq = task_grq_lock(p, &flags);
5361 + if (!cpus_intersects(*new_mask, cpu_online_map)) {
5366 + if (unlikely((p->flags & PF_THREAD_BOUND) && p != current &&
5367 + !cpus_equal(p->cpus_allowed, *new_mask))) {
5372 + queued = task_queued(p);
5374 + p->cpus_allowed = *new_mask;
5376 + /* Can the task run on the task's current CPU? If so, we're done */
5377 + if (cpu_isset(task_cpu(p), *new_mask))
5380 + if (task_running(p)) {
5381 + /* Task is running on the wrong cpu now, reschedule it. */
5382 + set_tsk_need_resched(p);
5383 + running_wrong = 1;
5385 + set_task_cpu(p, any_online_cpu(*new_mask));
5389 + try_preempt(p, rq);
5390 + task_grq_unlock(&flags);
5392 + if (running_wrong)
5397 +EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
5399 +#ifdef CONFIG_HOTPLUG_CPU
5400 +/* Schedules idle task to be the next runnable task on current CPU.
5401 + * It does so by boosting its priority to highest possible.
5402 + * Used by CPU offline code.
5404 +void sched_idle_next(void)
5406 + int this_cpu = smp_processor_id();
5407 + struct rq *rq = cpu_rq(this_cpu);
5408 + struct task_struct *idle = rq->idle;
5409 + unsigned long flags;
5411 + /* cpu has to be offline */
5412 + BUG_ON(cpu_online(this_cpu));
5415 + * Strictly not necessary since rest of the CPUs are stopped by now
5416 + * and interrupts disabled on the current cpu.
5418 + time_grq_lock(rq, &flags);
5420 + __setscheduler(idle, rq, SCHED_FIFO, MAX_RT_PRIO - 1);
5422 + activate_idle_task(idle);
5423 + set_tsk_need_resched(rq->curr);
5425 + grq_unlock_irqrestore(&flags);
5429 + * Ensures that the idle task is using init_mm right before its cpu goes
5432 +void idle_task_exit(void)
5434 + struct mm_struct *mm = current->active_mm;
5436 + BUG_ON(cpu_online(smp_processor_id()));
5438 + if (mm != &init_mm)
5439 + switch_mm(mm, &init_mm, current);
5443 +#endif /* CONFIG_HOTPLUG_CPU */
5445 +#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
5447 +static struct ctl_table sd_ctl_dir[] = {
5449 + .procname = "sched_domain",
5455 +static struct ctl_table sd_ctl_root[] = {
5457 + .ctl_name = CTL_KERN,
5458 + .procname = "kernel",
5460 + .child = sd_ctl_dir,
5465 +static struct ctl_table *sd_alloc_ctl_entry(int n)
5467 + struct ctl_table *entry =
5468 + kcalloc(n, sizeof(struct ctl_table), GFP_KERNEL);
5473 +static void sd_free_ctl_entry(struct ctl_table **tablep)
5475 + struct ctl_table *entry;
5478 + * In the intermediate directories, both the child directory and
5479 + * procname are dynamically allocated and could fail but the mode
5480 + * will always be set. In the lowest directory the names are
5481 + * static strings and all have proc handlers.
5483 + for (entry = *tablep; entry->mode; entry++) {
5485 + sd_free_ctl_entry(&entry->child);
5486 + if (entry->proc_handler == NULL)
5487 + kfree(entry->procname);
5495 +set_table_entry(struct ctl_table *entry,
5496 + const char *procname, void *data, int maxlen,
5497 + mode_t mode, proc_handler *proc_handler)
5499 + entry->procname = procname;
5500 + entry->data = data;
5501 + entry->maxlen = maxlen;
5502 + entry->mode = mode;
5503 + entry->proc_handler = proc_handler;
5506 +static struct ctl_table *
5507 +sd_alloc_ctl_domain_table(struct sched_domain *sd)
5509 + struct ctl_table *table = sd_alloc_ctl_entry(12);
5511 + if (table == NULL)
5514 + set_table_entry(&table[0], "min_interval", &sd->min_interval,
5515 + sizeof(long), 0644, proc_doulongvec_minmax);
5516 + set_table_entry(&table[1], "max_interval", &sd->max_interval,
5517 + sizeof(long), 0644, proc_doulongvec_minmax);
5518 + set_table_entry(&table[2], "busy_idx", &sd->busy_idx,
5519 + sizeof(int), 0644, proc_dointvec_minmax);
5520 + set_table_entry(&table[3], "idle_idx", &sd->idle_idx,
5521 + sizeof(int), 0644, proc_dointvec_minmax);
5522 + set_table_entry(&table[4], "newidle_idx", &sd->newidle_idx,
5523 + sizeof(int), 0644, proc_dointvec_minmax);
5524 + set_table_entry(&table[5], "wake_idx", &sd->wake_idx,
5525 + sizeof(int), 0644, proc_dointvec_minmax);
5526 + set_table_entry(&table[6], "forkexec_idx", &sd->forkexec_idx,
5527 + sizeof(int), 0644, proc_dointvec_minmax);
5528 + set_table_entry(&table[7], "busy_factor", &sd->busy_factor,
5529 + sizeof(int), 0644, proc_dointvec_minmax);
5530 + set_table_entry(&table[8], "imbalance_pct", &sd->imbalance_pct,
5531 + sizeof(int), 0644, proc_dointvec_minmax);
5532 + set_table_entry(&table[9], "cache_nice_tries",
5533 + &sd->cache_nice_tries,
5534 + sizeof(int), 0644, proc_dointvec_minmax);
5535 + set_table_entry(&table[10], "flags", &sd->flags,
5536 + sizeof(int), 0644, proc_dointvec_minmax);
5537 + /* &table[11] is terminator */
5542 +static ctl_table *sd_alloc_ctl_cpu_table(int cpu)
5544 + struct ctl_table *entry, *table;
5545 + struct sched_domain *sd;
5546 + int domain_num = 0, i;
5549 + for_each_domain(cpu, sd)
5551 + entry = table = sd_alloc_ctl_entry(domain_num + 1);
5552 + if (table == NULL)
5556 + for_each_domain(cpu, sd) {
5557 + snprintf(buf, 32, "domain%d", i);
5558 + entry->procname = kstrdup(buf, GFP_KERNEL);
5559 + entry->mode = 0555;
5560 + entry->child = sd_alloc_ctl_domain_table(sd);
5567 +static struct ctl_table_header *sd_sysctl_header;
5568 +static void register_sched_domain_sysctl(void)
5570 + int i, cpu_num = num_online_cpus();
5571 + struct ctl_table *entry = sd_alloc_ctl_entry(cpu_num + 1);
5574 + WARN_ON(sd_ctl_dir[0].child);
5575 + sd_ctl_dir[0].child = entry;
5577 + if (entry == NULL)
5580 + for_each_online_cpu(i) {
5581 + snprintf(buf, 32, "cpu%d", i);
5582 + entry->procname = kstrdup(buf, GFP_KERNEL);
5583 + entry->mode = 0555;
5584 + entry->child = sd_alloc_ctl_cpu_table(i);
5588 + WARN_ON(sd_sysctl_header);
5589 + sd_sysctl_header = register_sysctl_table(sd_ctl_root);
5592 +/* may be called multiple times per register */
5593 +static void unregister_sched_domain_sysctl(void)
5595 + if (sd_sysctl_header)
5596 + unregister_sysctl_table(sd_sysctl_header);
5597 + sd_sysctl_header = NULL;
5598 + if (sd_ctl_dir[0].child)
5599 + sd_free_ctl_entry(&sd_ctl_dir[0].child);
5602 +static void register_sched_domain_sysctl(void)
5605 +static void unregister_sched_domain_sysctl(void)
5610 +static void set_rq_online(struct rq *rq)
5612 + if (!rq->online) {
5613 + cpu_set(cpu_of(rq), rq->rd->online);
5618 +static void set_rq_offline(struct rq *rq)
5621 + cpu_clear(cpu_of(rq), rq->rd->online);
5626 +#ifdef CONFIG_HOTPLUG_CPU
5628 + * This cpu is going down, so walk over the tasklist and find tasks that can
5629 + * only run on this cpu and remove their affinity. Store their old affinity in
5630 + * unplugged_mask so it can be restored once their correct cpu is online. No
5631 + * need to do anything special since they'll just move on next reschedule if
5632 + * they're running.
5634 +static void remove_cpu(unsigned long cpu)
5636 + struct task_struct *p, *t;
5638 + read_lock(&tasklist_lock);
5640 + do_each_thread(t, p) {
5641 + cpumask_t cpus_remaining;
5643 + cpus_and(cpus_remaining, p->cpus_allowed, cpu_online_map);
5644 + cpu_clear(cpu, cpus_remaining);
5645 + if (cpus_empty(cpus_remaining)) {
5646 + p->unplugged_mask = p->cpus_allowed;
5647 + p->cpus_allowed = cpu_possible_map;
5649 + } while_each_thread(t, p);
5651 + read_unlock(&tasklist_lock);
5655 + * This cpu is coming up so add it to the cpus_allowed.
5657 +static void add_cpu(unsigned long cpu)
5659 + struct task_struct *p, *t;
5661 + read_lock(&tasklist_lock);
5663 + do_each_thread(t, p) {
5664 + /* Have we taken all the cpus from the unplugged_mask back? */
5665 + if (cpus_empty(p->unplugged_mask))
5668 + /* Was this cpu in the unplugged_mask mask */
5669 + if (cpu_isset(cpu, p->unplugged_mask)) {
5670 + cpu_set(cpu, p->cpus_allowed);
5671 + if (cpus_subset(p->unplugged_mask, p->cpus_allowed)) {
5673 + * Have we set more than the unplugged_mask?
5674 + * If so, that means we have remnants set from
5675 + * the unplug/plug cycle and need to remove
5676 + * them. Then clear the unplugged_mask as we've
5677 + * set all the cpus back.
5679 + p->cpus_allowed = p->unplugged_mask;
5680 + cpus_clear(p->unplugged_mask);
5683 + } while_each_thread(t, p);
5685 + read_unlock(&tasklist_lock);
5688 +static void add_cpu(unsigned long cpu)
5694 + * migration_call - callback that gets triggered when a CPU is added.
5696 +static int __cpuinit
5697 +migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
5699 + struct task_struct *idle;
5700 + int cpu = (long)hcpu;
5701 + unsigned long flags;
5706 + case CPU_UP_PREPARE:
5707 + case CPU_UP_PREPARE_FROZEN:
5711 + case CPU_ONLINE_FROZEN:
5712 + /* Update our root-domain */
5714 + grq_lock_irqsave(&flags);
5716 + BUG_ON(!cpu_isset(cpu, rq->rd->span));
5718 + set_rq_online(rq);
5721 + grq_unlock_irqrestore(&flags);
5724 +#ifdef CONFIG_HOTPLUG_CPU
5725 + case CPU_UP_CANCELED:
5726 + case CPU_UP_CANCELED_FROZEN:
5730 + case CPU_DEAD_FROZEN:
5731 + cpuset_lock(); /* around calls to cpuset_cpus_allowed_lock() */
5734 + /* Idle task back to normal (off runqueue, low prio) */
5737 + return_task(idle, 1);
5738 + idle->static_prio = MAX_PRIO;
5739 + __setscheduler(idle, rq, SCHED_NORMAL, 0);
5740 + idle->prio = PRIO_LIMIT;
5741 + set_rq_task(rq, idle);
5742 + update_rq_clock(rq);
5748 + case CPU_DYING_FROZEN:
5750 + grq_lock_irqsave(&flags);
5752 + BUG_ON(!cpu_isset(cpu, rq->rd->span));
5753 + set_rq_offline(rq);
5755 + grq_unlock_irqrestore(&flags);
5762 +/* Register at highest priority so that task migration (migrate_all_tasks)
5763 + * happens before everything else.
5765 +static struct notifier_block __cpuinitdata migration_notifier = {
5766 + .notifier_call = migration_call,
5770 +int __init migration_init(void)
5772 + void *cpu = (void *)(long)smp_processor_id();
5775 + /* Start one for the boot CPU: */
5776 + err = migration_call(&migration_notifier, CPU_UP_PREPARE, cpu);
5777 + BUG_ON(err == NOTIFY_BAD);
5778 + migration_call(&migration_notifier, CPU_ONLINE, cpu);
5779 + register_cpu_notifier(&migration_notifier);
5783 +early_initcall(migration_init);
5787 + * sched_domains_mutex serialises calls to arch_init_sched_domains,
5788 + * detach_destroy_domains and partition_sched_domains.
5790 +static DEFINE_MUTEX(sched_domains_mutex);
5794 +#ifdef CONFIG_SCHED_DEBUG
5796 +static inline const char *sd_level_to_string(enum sched_domain_level lvl)
5801 + case SD_LV_SIBLING:
5809 + case SD_LV_ALLNODES:
5810 + return "ALLNODES";
5818 +static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level,
5819 + cpumask_t *groupmask)
5821 + struct sched_group *group = sd->groups;
5824 + cpulist_scnprintf(str, sizeof(str), sd->span);
5825 + cpus_clear(*groupmask);
5827 + printk(KERN_DEBUG "%*s domain %d: ", level, "", level);
5829 + if (!(sd->flags & SD_LOAD_BALANCE)) {
5830 + printk("does not load-balance\n");
5832 + printk(KERN_ERR "ERROR: !SD_LOAD_BALANCE domain"
5837 + printk(KERN_CONT "span %s level %s\n",
5838 + str, sd_level_to_string(sd->level));
5840 + if (!cpu_isset(cpu, sd->span)) {
5841 + printk(KERN_ERR "ERROR: domain->span does not contain "
5844 + if (!cpu_isset(cpu, group->cpumask)) {
5845 + printk(KERN_ERR "ERROR: domain->groups does not contain"
5849 + printk(KERN_DEBUG "%*s groups:", level + 1, "");
5853 + printk(KERN_ERR "ERROR: group is NULL\n");
5857 + if (!group->__cpu_power) {
5858 + printk(KERN_CONT "\n");
5859 + printk(KERN_ERR "ERROR: domain->cpu_power not "
5864 + if (!cpus_weight(group->cpumask)) {
5865 + printk(KERN_CONT "\n");
5866 + printk(KERN_ERR "ERROR: empty group\n");
5870 + if (cpus_intersects(*groupmask, group->cpumask)) {
5871 + printk(KERN_CONT "\n");
5872 + printk(KERN_ERR "ERROR: repeated CPUs\n");
5876 + cpus_or(*groupmask, *groupmask, group->cpumask);
5878 + cpulist_scnprintf(str, sizeof(str), group->cpumask);
5879 + printk(KERN_CONT " %s", str);
5881 + group = group->next;
5882 + } while (group != sd->groups);
5883 + printk(KERN_CONT "\n");
5885 + if (!cpus_equal(sd->span, *groupmask))
5886 + printk(KERN_ERR "ERROR: groups don't span domain->span\n");
5888 + if (sd->parent && !cpus_subset(*groupmask, sd->parent->span))
5889 + printk(KERN_ERR "ERROR: parent span is not a superset "
5890 + "of domain->span\n");
5894 +static void sched_domain_debug(struct sched_domain *sd, int cpu)
5896 + cpumask_t *groupmask;
5900 + printk(KERN_DEBUG "CPU%d attaching NULL sched-domain.\n", cpu);
5904 + printk(KERN_DEBUG "CPU%d attaching sched-domain:\n", cpu);
5906 + groupmask = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
5908 + printk(KERN_DEBUG "Cannot load-balance (out of memory)\n");
5913 + if (sched_domain_debug_one(sd, cpu, level, groupmask))
5922 +#else /* !CONFIG_SCHED_DEBUG */
5923 +# define sched_domain_debug(sd, cpu) do { } while (0)
5924 +#endif /* CONFIG_SCHED_DEBUG */
5926 +static int sd_degenerate(struct sched_domain *sd)
5928 + if (cpus_weight(sd->span) == 1)
5931 + /* Following flags need at least 2 groups */
5932 + if (sd->flags & (SD_LOAD_BALANCE |
5933 + SD_BALANCE_NEWIDLE |
5936 + SD_SHARE_CPUPOWER |
5937 + SD_SHARE_PKG_RESOURCES)) {
5938 + if (sd->groups != sd->groups->next)
5942 + /* Following flags don't use groups */
5943 + if (sd->flags & (SD_WAKE_IDLE |
5952 +sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
5954 + unsigned long cflags = sd->flags, pflags = parent->flags;
5956 + if (sd_degenerate(parent))
5959 + if (!cpus_equal(sd->span, parent->span))
5962 + /* Does parent contain flags not in child? */
5963 + /* WAKE_BALANCE is a subset of WAKE_AFFINE */
5964 + if (cflags & SD_WAKE_AFFINE)
5965 + pflags &= ~SD_WAKE_BALANCE;
5966 + /* Flags needing groups don't count if only 1 group in parent */
5967 + if (parent->groups == parent->groups->next) {
5968 + pflags &= ~(SD_LOAD_BALANCE |
5969 + SD_BALANCE_NEWIDLE |
5972 + SD_SHARE_CPUPOWER |
5973 + SD_SHARE_PKG_RESOURCES);
5975 + if (~cflags & pflags)
5981 +static void rq_attach_root(struct rq *rq, struct root_domain *rd)
5983 + unsigned long flags;
5985 + grq_lock_irqsave(&flags);
5988 + struct root_domain *old_rd = rq->rd;
5990 + if (cpu_isset(cpu_of(rq), old_rd->online))
5991 + set_rq_offline(rq);
5993 + cpu_clear(cpu_of(rq), old_rd->span);
5995 + if (atomic_dec_and_test(&old_rd->refcount))
5999 + atomic_inc(&rd->refcount);
6002 + cpu_set(cpu_of(rq), rd->span);
6003 + if (cpu_isset(cpu_of(rq), cpu_online_map))
6004 + set_rq_online(rq);
6006 + grq_unlock_irqrestore(&flags);
6009 +static void init_rootdomain(struct root_domain *rd)
6011 + memset(rd, 0, sizeof(*rd));
6013 + cpus_clear(rd->span);
6014 + cpus_clear(rd->online);
6017 +static void init_defrootdomain(void)
6019 + init_rootdomain(&def_root_domain);
6021 + atomic_set(&def_root_domain.refcount, 1);
6024 +static struct root_domain *alloc_rootdomain(void)
6026 + struct root_domain *rd;
6028 + rd = kmalloc(sizeof(*rd), GFP_KERNEL);
6032 + init_rootdomain(rd);
6038 + * Attach the domain 'sd' to 'cpu' as its base domain. Callers must
6039 + * hold the hotplug lock.
6042 +cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
6044 + struct rq *rq = cpu_rq(cpu);
6045 + struct sched_domain *tmp;
6047 + /* Remove the sched domains which do not contribute to scheduling. */
6048 + for (tmp = sd; tmp; tmp = tmp->parent) {
6049 + struct sched_domain *parent = tmp->parent;
6052 + if (sd_parent_degenerate(tmp, parent)) {
6053 + tmp->parent = parent->parent;
6054 + if (parent->parent)
6055 + parent->parent->child = tmp;
6059 + if (sd && sd_degenerate(sd)) {
6065 + sched_domain_debug(sd, cpu);
6067 + rq_attach_root(rq, rd);
6068 + rcu_assign_pointer(rq->sd, sd);
6071 +/* cpus with isolated domains */
6072 +static cpumask_t cpu_isolated_map = CPU_MASK_NONE;
6074 +/* Setup the mask of cpus configured for isolated domains */
6075 +static int __init isolated_cpu_setup(char *str)
6077 + static int __initdata ints[NR_CPUS];
6080 + str = get_options(str, ARRAY_SIZE(ints), ints);
6081 + cpus_clear(cpu_isolated_map);
6082 + for (i = 1; i <= ints[0]; i++)
6083 + if (ints[i] < NR_CPUS)
6084 + cpu_set(ints[i], cpu_isolated_map);
6088 +__setup("isolcpus=", isolated_cpu_setup);
6091 + * init_sched_build_groups takes the cpumask we wish to span, and a pointer
6092 + * to a function which identifies what group (along with its sched group) a CPU
6093 + * belongs to. The return value of group_fn must be >= 0 and < NR_CPUS
6094 + * (due to the fact that we keep track of groups covered with a cpumask_t).
6096 + * init_sched_build_groups will build a circular linked list of the groups
6097 + * covered by the given span, and will set each group's ->cpumask correctly,
6098 + * and ->cpu_power to 0.
6101 +init_sched_build_groups(const cpumask_t *span, const cpumask_t *cpu_map,
6102 + int (*group_fn)(int cpu, const cpumask_t *cpu_map,
6103 + struct sched_group **sg,
6104 + cpumask_t *tmpmask),
6105 + cpumask_t *covered, cpumask_t *tmpmask)
6107 + struct sched_group *first = NULL, *last = NULL;
6110 + cpus_clear(*covered);
6112 + for_each_cpu_mask_nr(i, *span) {
6113 + struct sched_group *sg;
6114 + int group = group_fn(i, cpu_map, &sg, tmpmask);
6117 + if (cpu_isset(i, *covered))
6120 + cpus_clear(sg->cpumask);
6121 + sg->__cpu_power = 0;
6123 + for_each_cpu_mask_nr(j, *span) {
6124 + if (group_fn(j, cpu_map, NULL, tmpmask) != group)
6127 + cpu_set(j, *covered);
6128 + cpu_set(j, sg->cpumask);
6136 + last->next = first;
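As an aside, the group_fn contract documented above is easiest to see with a toy callback. The sketch below is purely illustrative (demo_groups, demo_group_fn and demo_walk_groups are not part of this patch); it only shows the shape of the callback and how the circular list linked up by init_sched_build_groups() would be walked.

/* Illustrative only: one group per CPU, kept in a hypothetical static array. */
static struct sched_group demo_groups[NR_CPUS];

static int demo_group_fn(int cpu, const cpumask_t *cpu_map,
			 struct sched_group **sg, cpumask_t *unused)
{
	if (sg)
		*sg = &demo_groups[cpu];
	return cpu;	/* must be >= 0 and < NR_CPUS */
}

/* The groups come back as a circular singly-linked list. */
static void demo_walk_groups(struct sched_group *head)
{
	struct sched_group *sg = head;

	do {
		/* sg->cpumask entries are disjoint and together cover the span */
		sg = sg->next;
	} while (sg != head);
}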
6139 +#define SD_NODES_PER_DOMAIN 16
6144 + * find_next_best_node - find the next node to include in a sched_domain
6145 + * @node: node whose sched_domain we're building
6146 + * @used_nodes: nodes already in the sched_domain
6148 + * Find the next node to include in a given scheduling domain. Simply
6149 + * finds the closest node not already in the @used_nodes map.
6151 + * Should use nodemask_t.
6153 +static int find_next_best_node(int node, nodemask_t *used_nodes)
6155 + int i, n, val, min_val, best_node = 0;
6157 + min_val = INT_MAX;
6159 + for (i = 0; i < nr_node_ids; i++) {
6160 + /* Start at @node */
6161 + n = (node + i) % nr_node_ids;
6163 + if (!nr_cpus_node(n))
6166 + /* Skip already used nodes */
6167 + if (node_isset(n, *used_nodes))
6170 + /* Simple min distance search */
6171 + val = node_distance(node, n);
6173 + if (val < min_val) {
6179 + node_set(best_node, *used_nodes);
6184 + * sched_domain_node_span - get a cpumask for a node's sched_domain
6185 + * @node: node whose cpumask we're constructing
6186 + * @span: resulting cpumask
6188 + * Given a node, construct a good cpumask for its sched_domain to span. It
6189 + * should be one that prevents unnecessary balancing, but also spreads tasks
6192 +static void sched_domain_node_span(int node, cpumask_t *span)
6194 + nodemask_t used_nodes;
6195 + node_to_cpumask_ptr(nodemask, node);
6198 + cpus_clear(*span);
6199 + nodes_clear(used_nodes);
6201 + cpus_or(*span, *span, *nodemask);
6202 + node_set(node, used_nodes);
6204 + for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
6205 + int next_node = find_next_best_node(node, &used_nodes);
6207 + node_to_cpumask_ptr_next(nodemask, next_node);
6208 + cpus_or(*span, *span, *nodemask);
6211 +#endif /* CONFIG_NUMA */
6213 +int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
6216 + * SMT sched-domains:
6218 +#ifdef CONFIG_SCHED_SMT
6219 +static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
6220 +static DEFINE_PER_CPU(struct sched_group, sched_group_cpus);
6223 +cpu_to_cpu_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6224 + cpumask_t *unused)
6227 + *sg = &per_cpu(sched_group_cpus, cpu);
6230 +#endif /* CONFIG_SCHED_SMT */
6233 + * multi-core sched-domains:
6235 +#ifdef CONFIG_SCHED_MC
6236 +static DEFINE_PER_CPU(struct sched_domain, core_domains);
6237 +static DEFINE_PER_CPU(struct sched_group, sched_group_core);
6238 +#endif /* CONFIG_SCHED_MC */
6240 +#if defined(CONFIG_SCHED_MC) && defined(CONFIG_SCHED_SMT)
6242 +cpu_to_core_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6247 + *mask = per_cpu(cpu_sibling_map, cpu);
6248 + cpus_and(*mask, *mask, *cpu_map);
6249 + group = first_cpu(*mask);
6251 + *sg = &per_cpu(sched_group_core, group);
6254 +#elif defined(CONFIG_SCHED_MC)
6256 +cpu_to_core_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6257 + cpumask_t *unused)
6260 + *sg = &per_cpu(sched_group_core, cpu);
6265 +static DEFINE_PER_CPU(struct sched_domain, phys_domains);
6266 +static DEFINE_PER_CPU(struct sched_group, sched_group_phys);
6269 +cpu_to_phys_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
6273 +#ifdef CONFIG_SCHED_MC
6274 + *mask = cpu_coregroup_map(cpu);
6275 + cpus_and(*mask, *mask, *cpu_map);
6276 + group = first_cpu(*mask);
6277 +#elif defined(CONFIG_SCHED_SMT)
6278 + *mask = per_cpu(cpu_sibling_map, cpu);
6279 + cpus_and(*mask, *mask, *cpu_map);
6280 + group = first_cpu(*mask);
6285 + *sg = &per_cpu(sched_group_phys, group);
6291 + * The init_sched_build_groups can't handle what we want to do with node
6292 + * groups, so roll our own. Now each node has its own list of groups which
6293 + * gets dynamically allocated.
6295 +static DEFINE_PER_CPU(struct sched_domain, node_domains);
6296 +static struct sched_group ***sched_group_nodes_bycpu;
6298 +static DEFINE_PER_CPU(struct sched_domain, allnodes_domains);
6299 +static DEFINE_PER_CPU(struct sched_group, sched_group_allnodes);
6301 +static int cpu_to_allnodes_group(int cpu, const cpumask_t *cpu_map,
6302 + struct sched_group **sg, cpumask_t *nodemask)
6306 + *nodemask = node_to_cpumask(cpu_to_node(cpu));
6307 + cpus_and(*nodemask, *nodemask, *cpu_map);
6308 + group = first_cpu(*nodemask);
6311 + *sg = &per_cpu(sched_group_allnodes, group);
6315 +static void init_numa_sched_groups_power(struct sched_group *group_head)
6317 + struct sched_group *sg = group_head;
6323 + for_each_cpu_mask_nr(j, sg->cpumask) {
6324 + struct sched_domain *sd;
6326 + sd = &per_cpu(phys_domains, j);
6327 + if (j != first_cpu(sd->groups->cpumask)) {
6329 + * Only add "power" once for each
6330 + * physical package.
6335 + sg_inc_cpu_power(sg, sd->groups->__cpu_power);
6338 + } while (sg != group_head);
6340 +#endif /* CONFIG_NUMA */
6343 +/* Free memory allocated for various sched_group structures */
6344 +static void free_sched_groups(const cpumask_t *cpu_map, cpumask_t *nodemask)
6348 + for_each_cpu_mask_nr(cpu, *cpu_map) {
6349 + struct sched_group **sched_group_nodes
6350 + = sched_group_nodes_bycpu[cpu];
6352 + if (!sched_group_nodes)
6355 + for (i = 0; i < nr_node_ids; i++) {
6356 + struct sched_group *oldsg, *sg = sched_group_nodes[i];
6358 + *nodemask = node_to_cpumask(i);
6359 + cpus_and(*nodemask, *nodemask, *cpu_map);
6360 + if (cpus_empty(*nodemask))
6370 + if (oldsg != sched_group_nodes[i])
6373 + kfree(sched_group_nodes);
6374 + sched_group_nodes_bycpu[cpu] = NULL;
6377 +#else /* !CONFIG_NUMA */
6378 +static void free_sched_groups(const cpumask_t *cpu_map, cpumask_t *nodemask)
6381 +#endif /* CONFIG_NUMA */
6384 + * Initialise sched groups cpu_power.
6386 + * cpu_power indicates the capacity of a sched group, which is used while
6387 + * distributing the load between different sched groups in a sched domain.
6388 + * Typically cpu_power for all the groups in a sched domain will be the same
6389 + * unless there are asymmetries in the topology. If there are asymmetries, the
6390 + * group having more cpu_power will pick up more load compared to the group having
6393 + * cpu_power will be a multiple of SCHED_LOAD_SCALE. This multiple represents
6394 + * the maximum number of tasks a group can handle in the presence of other idle
6395 + * or lightly loaded groups in the same sched domain.
6397 +static void init_sched_groups_power(int cpu, struct sched_domain *sd)
6399 + struct sched_domain *child;
6400 + struct sched_group *group;
6402 + WARN_ON(!sd || !sd->groups);
6404 + if (cpu != first_cpu(sd->groups->cpumask))
6407 + child = sd->child;
6409 + sd->groups->__cpu_power = 0;
6412 + * For the performance policy, if the groups in the child domain share resources
6413 + * (for example cores sharing some portions of the cache hierarchy,
6414 + * or SMT), then set this domain's groups' cpu_power such that each group
6415 + * can handle only one task when there are other idle groups in the
6416 + * same sched domain.
6418 + if (!child || (!(sd->flags & SD_POWERSAVINGS_BALANCE) &&
6420 + (SD_SHARE_CPUPOWER | SD_SHARE_PKG_RESOURCES)))) {
6421 + sg_inc_cpu_power(sd->groups, SCHED_LOAD_SCALE);
6426 +	 * add cpu_power of each child group to this group's cpu_power
6428 + group = child->groups;
6430 + sg_inc_cpu_power(sd->groups, group->__cpu_power);
6431 + group = group->next;
6432 + } while (group != child->groups);
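The branches above reduce to a small amount of arithmetic; a hedged worked example follows (the SCHED_LOAD_SCALE value of 1024 and the example topology are assumptions, not taken from this hunk):

/*
 * Illustrative only, assuming SCHED_LOAD_SCALE == 1024:
 *
 *  - A bottom-level domain (no child), e.g. a SIBLING domain, simply gets
 *        __cpu_power = 1024.
 *  - Under the performance policy, a domain whose child groups share CPU
 *    power or package resources (SMT siblings, cores sharing cache) is also
 *    clamped to 1024: one task's worth, so idle peer groups are preferred.
 *  - Otherwise the group's power is the sum of its child groups, e.g. a NODE
 *    level group over two independent physical packages ends up with
 *        __cpu_power = 1024 + 1024 = 2048.
 */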
6436 + * Initialisers for schedule domains
6437 + * Non-inlined to reduce accumulated stack pressure in build_sched_domains()
6440 +#define SD_INIT(sd, type) sd_init_##type(sd)
6441 +#define SD_INIT_FUNC(type) \
6442 +static noinline void sd_init_##type(struct sched_domain *sd) \
6444 + memset(sd, 0, sizeof(*sd)); \
6445 + *sd = SD_##type##_INIT; \
6446 + sd->level = SD_LV_##type; \
6451 + SD_INIT_FUNC(ALLNODES)
6452 + SD_INIT_FUNC(NODE)
6454 +#ifdef CONFIG_SCHED_SMT
6455 + SD_INIT_FUNC(SIBLING)
6457 +#ifdef CONFIG_SCHED_MC
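For readers following the macro above, this is approximately what one generated initialiser looks like after expansion. The expansion below is illustrative (shown for NODE, whose SD_NODE_INIT template is provided per-architecture):

/* Approximate expansion of SD_INIT_FUNC(NODE): */
static noinline void sd_init_NODE(struct sched_domain *sd)
{
	memset(sd, 0, sizeof(*sd));
	*sd = SD_NODE_INIT;	/* arch-supplied template of flags and tunables */
	sd->level = SD_LV_NODE;
}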
6462 + * To minimize stack usage, kmalloc room for cpumasks and share the
6463 + * space as the usage in build_sched_domains() dictates. Used only
6464 + * if the amount of space is significant.
6467 + cpumask_t tmpmask; /* make this one first */
6469 + cpumask_t nodemask;
6470 + cpumask_t this_sibling_map;
6471 + cpumask_t this_core_map;
6473 + cpumask_t send_covered;
6476 + cpumask_t domainspan;
6477 + cpumask_t covered;
6478 + cpumask_t notcovered;
6483 +#define SCHED_CPUMASK_ALLOC 1
6484 +#define SCHED_CPUMASK_FREE(v) kfree(v)
6485 +#define SCHED_CPUMASK_DECLARE(v) struct allmasks *v
6487 +#define SCHED_CPUMASK_ALLOC 0
6488 +#define SCHED_CPUMASK_FREE(v)
6489 +#define SCHED_CPUMASK_DECLARE(v) struct allmasks _v, *v = &_v
6492 +#define SCHED_CPUMASK_VAR(v, a) cpumask_t *v = (cpumask_t *) \
6493 + ((unsigned long)(a) + offsetof(struct allmasks, v))
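To make the offsetof() trick above concrete, a declaration such as SCHED_CPUMASK_VAR(nodemask, allmasks) expands to roughly the following, so every scratch mask is just a pointer into the single struct allmasks allocation (kmalloc'd or on-stack) rather than a separate on-stack cpumask_t. This expansion is shown for illustration only.

/* Approximate expansion of SCHED_CPUMASK_VAR(nodemask, allmasks): */
cpumask_t *nodemask = (cpumask_t *)
	((unsigned long)(allmasks) + offsetof(struct allmasks, nodemask));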
6495 +static int default_relax_domain_level = -1;
6497 +static int __init setup_relax_domain_level(char *str)
6499 + unsigned long val;
6501 + val = simple_strtoul(str, NULL, 0);
6502 + if (val < SD_LV_MAX)
6503 + default_relax_domain_level = val;
6507 +__setup("relax_domain_level=", setup_relax_domain_level);
6509 +static void set_domain_attribute(struct sched_domain *sd,
6510 + struct sched_domain_attr *attr)
6514 + if (!attr || attr->relax_domain_level < 0) {
6515 + if (default_relax_domain_level < 0)
6518 + request = default_relax_domain_level;
6520 + request = attr->relax_domain_level;
6521 + if (request < sd->level) {
6522 + /* turn off idle balance on this domain */
6523 + sd->flags &= ~(SD_WAKE_IDLE|SD_BALANCE_NEWIDLE);
6525 + /* turn on idle balance on this domain */
6526 + sd->flags |= (SD_WAKE_IDLE_FAR|SD_BALANCE_NEWIDLE);
6531 + * Build sched domains for a given set of cpus and attach the sched domains
6532 + * to the individual cpus
6534 +static int __build_sched_domains(const cpumask_t *cpu_map,
6535 + struct sched_domain_attr *attr)
6538 + struct root_domain *rd;
6539 + SCHED_CPUMASK_DECLARE(allmasks);
6540 + cpumask_t *tmpmask;
6542 + struct sched_group **sched_group_nodes = NULL;
6543 + int sd_allnodes = 0;
6546 + * Allocate the per-node list of sched groups
6548 + sched_group_nodes = kcalloc(nr_node_ids, sizeof(struct sched_group *),
6550 + if (!sched_group_nodes) {
6551 + printk(KERN_WARNING "Can not alloc sched group node list\n");
6556 + rd = alloc_rootdomain();
6558 + printk(KERN_WARNING "Cannot alloc root domain\n");
6560 + kfree(sched_group_nodes);
6565 +#if SCHED_CPUMASK_ALLOC
6566 + /* get space for all scratch cpumask variables */
6567 + allmasks = kmalloc(sizeof(*allmasks), GFP_KERNEL);
6569 + printk(KERN_WARNING "Cannot alloc cpumask array\n");
6572 + kfree(sched_group_nodes);
6577 + tmpmask = (cpumask_t *)allmasks;
6581 + sched_group_nodes_bycpu[first_cpu(*cpu_map)] = sched_group_nodes;
6585 + * Set up domains for cpus specified by the cpu_map.
6587 + for_each_cpu_mask_nr(i, *cpu_map) {
6588 + struct sched_domain *sd = NULL, *p;
6589 + SCHED_CPUMASK_VAR(nodemask, allmasks);
6591 + *nodemask = node_to_cpumask(cpu_to_node(i));
6592 + cpus_and(*nodemask, *nodemask, *cpu_map);
6595 + if (cpus_weight(*cpu_map) >
6596 + SD_NODES_PER_DOMAIN*cpus_weight(*nodemask)) {
6597 + sd = &per_cpu(allnodes_domains, i);
6598 + SD_INIT(sd, ALLNODES);
6599 + set_domain_attribute(sd, attr);
6600 + sd->span = *cpu_map;
6601 + cpu_to_allnodes_group(i, cpu_map, &sd->groups, tmpmask);
6607 + sd = &per_cpu(node_domains, i);
6608 + SD_INIT(sd, NODE);
6609 + set_domain_attribute(sd, attr);
6610 + sched_domain_node_span(cpu_to_node(i), &sd->span);
6614 + cpus_and(sd->span, sd->span, *cpu_map);
6618 + sd = &per_cpu(phys_domains, i);
6620 + set_domain_attribute(sd, attr);
6621 + sd->span = *nodemask;
6625 + cpu_to_phys_group(i, cpu_map, &sd->groups, tmpmask);
6627 +#ifdef CONFIG_SCHED_MC
6629 + sd = &per_cpu(core_domains, i);
6631 + set_domain_attribute(sd, attr);
6632 + sd->span = cpu_coregroup_map(i);
6633 + cpus_and(sd->span, sd->span, *cpu_map);
6636 + cpu_to_core_group(i, cpu_map, &sd->groups, tmpmask);
6639 +#ifdef CONFIG_SCHED_SMT
6641 + sd = &per_cpu(cpu_domains, i);
6642 + SD_INIT(sd, SIBLING);
6643 + set_domain_attribute(sd, attr);
6644 + sd->span = per_cpu(cpu_sibling_map, i);
6645 + cpus_and(sd->span, sd->span, *cpu_map);
6648 + cpu_to_cpu_group(i, cpu_map, &sd->groups, tmpmask);
6652 +#ifdef CONFIG_SCHED_SMT
6653 + /* Set up CPU (sibling) groups */
6654 + for_each_cpu_mask_nr(i, *cpu_map) {
6655 + SCHED_CPUMASK_VAR(this_sibling_map, allmasks);
6656 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6658 + *this_sibling_map = per_cpu(cpu_sibling_map, i);
6659 + cpus_and(*this_sibling_map, *this_sibling_map, *cpu_map);
6660 + if (i != first_cpu(*this_sibling_map))
6663 + init_sched_build_groups(this_sibling_map, cpu_map,
6664 + &cpu_to_cpu_group,
6665 + send_covered, tmpmask);
6669 +#ifdef CONFIG_SCHED_MC
6670 + /* Set up multi-core groups */
6671 + for_each_cpu_mask_nr(i, *cpu_map) {
6672 + SCHED_CPUMASK_VAR(this_core_map, allmasks);
6673 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6675 + *this_core_map = cpu_coregroup_map(i);
6676 + cpus_and(*this_core_map, *this_core_map, *cpu_map);
6677 + if (i != first_cpu(*this_core_map))
6680 + init_sched_build_groups(this_core_map, cpu_map,
6681 + &cpu_to_core_group,
6682 + send_covered, tmpmask);
6686 + /* Set up physical groups */
6687 + for (i = 0; i < nr_node_ids; i++) {
6688 + SCHED_CPUMASK_VAR(nodemask, allmasks);
6689 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6691 + *nodemask = node_to_cpumask(i);
6692 + cpus_and(*nodemask, *nodemask, *cpu_map);
6693 + if (cpus_empty(*nodemask))
6696 + init_sched_build_groups(nodemask, cpu_map,
6697 + &cpu_to_phys_group,
6698 + send_covered, tmpmask);
6702 + /* Set up node groups */
6703 + if (sd_allnodes) {
6704 + SCHED_CPUMASK_VAR(send_covered, allmasks);
6706 + init_sched_build_groups(cpu_map, cpu_map,
6707 + &cpu_to_allnodes_group,
6708 + send_covered, tmpmask);
6711 + for (i = 0; i < nr_node_ids; i++) {
6712 + /* Set up node groups */
6713 + struct sched_group *sg, *prev;
6714 + SCHED_CPUMASK_VAR(nodemask, allmasks);
6715 + SCHED_CPUMASK_VAR(domainspan, allmasks);
6716 + SCHED_CPUMASK_VAR(covered, allmasks);
6719 + *nodemask = node_to_cpumask(i);
6720 + cpus_clear(*covered);
6722 + cpus_and(*nodemask, *nodemask, *cpu_map);
6723 + if (cpus_empty(*nodemask)) {
6724 + sched_group_nodes[i] = NULL;
6728 + sched_domain_node_span(i, domainspan);
6729 + cpus_and(*domainspan, *domainspan, *cpu_map);
6731 + sg = kmalloc_node(sizeof(struct sched_group), GFP_KERNEL, i);
6733 + printk(KERN_WARNING "Can not alloc domain group for "
6737 + sched_group_nodes[i] = sg;
6738 + for_each_cpu_mask_nr(j, *nodemask) {
6739 + struct sched_domain *sd;
6741 + sd = &per_cpu(node_domains, j);
6744 + sg->__cpu_power = 0;
6745 + sg->cpumask = *nodemask;
6747 + cpus_or(*covered, *covered, *nodemask);
6750 + for (j = 0; j < nr_node_ids; j++) {
6751 + SCHED_CPUMASK_VAR(notcovered, allmasks);
6752 + int n = (i + j) % nr_node_ids;
6753 + node_to_cpumask_ptr(pnodemask, n);
6755 + cpus_complement(*notcovered, *covered);
6756 + cpus_and(*tmpmask, *notcovered, *cpu_map);
6757 + cpus_and(*tmpmask, *tmpmask, *domainspan);
6758 + if (cpus_empty(*tmpmask))
6761 + cpus_and(*tmpmask, *tmpmask, *pnodemask);
6762 + if (cpus_empty(*tmpmask))
6765 + sg = kmalloc_node(sizeof(struct sched_group),
6768 + printk(KERN_WARNING
6769 + "Can not alloc domain group for node %d\n", j);
6772 + sg->__cpu_power = 0;
6773 + sg->cpumask = *tmpmask;
6774 + sg->next = prev->next;
6775 + cpus_or(*covered, *covered, *tmpmask);
6782 + /* Calculate CPU power for physical packages and nodes */
6783 +#ifdef CONFIG_SCHED_SMT
6784 + for_each_cpu_mask_nr(i, *cpu_map) {
6785 + struct sched_domain *sd = &per_cpu(cpu_domains, i);
6787 + init_sched_groups_power(i, sd);
6790 +#ifdef CONFIG_SCHED_MC
6791 + for_each_cpu_mask_nr(i, *cpu_map) {
6792 + struct sched_domain *sd = &per_cpu(core_domains, i);
6794 + init_sched_groups_power(i, sd);
6798 + for_each_cpu_mask_nr(i, *cpu_map) {
6799 + struct sched_domain *sd = &per_cpu(phys_domains, i);
6801 + init_sched_groups_power(i, sd);
6805 + for (i = 0; i < nr_node_ids; i++)
6806 + init_numa_sched_groups_power(sched_group_nodes[i]);
6808 + if (sd_allnodes) {
6809 + struct sched_group *sg;
6811 + cpu_to_allnodes_group(first_cpu(*cpu_map), cpu_map, &sg,
6813 + init_numa_sched_groups_power(sg);
6817 + /* Attach the domains */
6818 + for_each_cpu_mask_nr(i, *cpu_map) {
6819 + struct sched_domain *sd;
6820 +#ifdef CONFIG_SCHED_SMT
6821 + sd = &per_cpu(cpu_domains, i);
6822 +#elif defined(CONFIG_SCHED_MC)
6823 + sd = &per_cpu(core_domains, i);
6825 + sd = &per_cpu(phys_domains, i);
6827 + cpu_attach_domain(sd, rd, i);
6830 + SCHED_CPUMASK_FREE((void *)allmasks);
6835 + free_sched_groups(cpu_map, tmpmask);
6836 + SCHED_CPUMASK_FREE((void *)allmasks);
6841 +static int build_sched_domains(const cpumask_t *cpu_map)
6843 + return __build_sched_domains(cpu_map, NULL);
6846 +static cpumask_t *doms_cur; /* current sched domains */
6847 +static int ndoms_cur; /* number of sched domains in 'doms_cur' */
6848 +static struct sched_domain_attr *dattr_cur;
6849 +				/* attributes of custom domains in 'doms_cur' */
6852 + * Special case: If a kmalloc of a doms_cur partition (array of
6853 + * cpumask_t) fails, then fall back to a single sched domain,
6854 + * as determined by the single cpumask_t fallback_doms.
6856 +static cpumask_t fallback_doms;
6858 +void __attribute__((weak)) arch_update_cpu_topology(void)
6863 + * Set up scheduler domains and groups. Callers must hold the hotplug lock.
6864 + * For now this just excludes isolated cpus, but could be used to
6865 + * exclude other special cases in the future.
6867 +static int arch_init_sched_domains(const cpumask_t *cpu_map)
6871 + arch_update_cpu_topology();
6873 + doms_cur = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
6875 + doms_cur = &fallback_doms;
6876 + cpus_andnot(*doms_cur, *cpu_map, cpu_isolated_map);
6878 + err = build_sched_domains(doms_cur);
6879 + register_sched_domain_sysctl();
6884 +static void arch_destroy_sched_domains(const cpumask_t *cpu_map,
6885 + cpumask_t *tmpmask)
6887 + free_sched_groups(cpu_map, tmpmask);
6891 + * Detach sched domains from a group of cpus specified in cpu_map
6892 + * These cpus will now be attached to the NULL domain
6894 +static void detach_destroy_domains(const cpumask_t *cpu_map)
6896 + cpumask_t tmpmask;
6899 + unregister_sched_domain_sysctl();
6901 + for_each_cpu_mask_nr(i, *cpu_map)
6902 + cpu_attach_domain(NULL, &def_root_domain, i);
6903 + synchronize_sched();
6904 + arch_destroy_sched_domains(cpu_map, &tmpmask);
6907 +/* handle null as "default" */
6908 +static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
6909 + struct sched_domain_attr *new, int idx_new)
6911 + struct sched_domain_attr tmp;
6917 + tmp = SD_ATTR_INIT;
6918 + return !memcmp(cur ? (cur + idx_cur) : &tmp,
6919 + new ? (new + idx_new) : &tmp,
6920 + sizeof(struct sched_domain_attr));
6924 + * Partition sched domains as specified by the 'ndoms_new'
6925 + * cpumasks in the array doms_new[] of cpumasks. This compares
6926 + * doms_new[] to the current sched domain partitioning, doms_cur[].
6927 + * It destroys each deleted domain and builds each new domain.
6929 + * 'doms_new' is an array of cpumask_t's of length 'ndoms_new'.
6930 + * The masks don't intersect (don't overlap). We should set up one
6931 + * sched domain for each mask. CPUs not in any of the cpumasks will
6932 + * not be load balanced. If the same cpumask appears both in the
6933 + * current 'doms_cur' domains and in the new 'doms_new', we can leave
6936 + * The passed in 'doms_new' should be kmalloc'd. This routine takes
6937 + * ownership of it and will kfree it when done with it. If the caller
6938 + * failed the kmalloc call, then it can pass in doms_new == NULL,
6939 + * and partition_sched_domains() will fall back to the single partition
6940 + * 'fallback_doms'; this also forces the domains to be rebuilt.
6942 + * If doms_new==NULL it will be replaced with cpu_online_map.
6943 + * ndoms_new==0 is a special case for destroying existing domains.
6944 + * It will not create the default domain.
6946 + * Call with hotplug lock held
6948 +void partition_sched_domains(int ndoms_new, cpumask_t *doms_new,
6949 + struct sched_domain_attr *dattr_new)
6953 + mutex_lock(&sched_domains_mutex);
6955 + /* always unregister in case we don't destroy any domains */
6956 + unregister_sched_domain_sysctl();
6958 + n = doms_new ? ndoms_new : 0;
6960 + /* Destroy deleted domains */
6961 + for (i = 0; i < ndoms_cur; i++) {
6962 + for (j = 0; j < n; j++) {
6963 + if (cpus_equal(doms_cur[i], doms_new[j])
6964 + && dattrs_equal(dattr_cur, i, dattr_new, j))
6967 + /* no match - a current sched domain not in new doms_new[] */
6968 + detach_destroy_domains(doms_cur + i);
6973 + if (doms_new == NULL) {
6975 + doms_new = &fallback_doms;
6976 + cpus_andnot(doms_new[0], cpu_online_map, cpu_isolated_map);
6980 + /* Build new domains */
6981 + for (i = 0; i < ndoms_new; i++) {
6982 + for (j = 0; j < ndoms_cur; j++) {
6983 + if (cpus_equal(doms_new[i], doms_cur[j])
6984 + && dattrs_equal(dattr_new, i, dattr_cur, j))
6987 + /* no match - add a new doms_new */
6988 + __build_sched_domains(doms_new + i,
6989 + dattr_new ? dattr_new + i : NULL);
6994 + /* Remember the new sched domains */
6995 + if (doms_cur != &fallback_doms)
6997 + kfree(dattr_cur); /* kfree(NULL) is safe */
6998 + doms_cur = doms_new;
6999 + dattr_cur = dattr_new;
7000 + ndoms_cur = ndoms_new;
7002 + register_sched_domain_sysctl();
7004 + mutex_unlock(&sched_domains_mutex);
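A hedged sketch of how a caller might use the interface documented above; demo_repartition(), the two-island split and the error handling are illustrative assumptions (in mainline the cpuset code is the usual caller), not part of this patch.

/* Illustrative caller: split CPUs 0 and 1 into two load-balancing islands. */
static void demo_repartition(void)
{
	cpumask_t *doms = kmalloc(2 * sizeof(cpumask_t), GFP_KERNEL);

	get_online_cpus();	/* partition_sched_domains() wants the hotplug lock */
	if (!doms) {
		/* doms_new == NULL falls back to the single fallback_doms partition */
		partition_sched_domains(1, NULL, NULL);
	} else {
		cpus_clear(doms[0]);
		cpus_clear(doms[1]);
		cpu_set(0, doms[0]);
		cpu_set(1, doms[1]);
		/* takes ownership of 'doms' and kfrees it when done */
		partition_sched_domains(2, doms, NULL);
	}
	put_online_cpus();
}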
7007 +#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
7008 +int arch_reinit_sched_domains(void)
7010 + get_online_cpus();
7012 + /* Destroy domains first to force the rebuild */
7013 + partition_sched_domains(0, NULL, NULL);
7015 + rebuild_sched_domains();
7016 + put_online_cpus();
7021 +static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt)
7025 + if (buf[0] != '0' && buf[0] != '1')
7029 + sched_smt_power_savings = (buf[0] == '1');
7031 + sched_mc_power_savings = (buf[0] == '1');
7033 + ret = arch_reinit_sched_domains();
7035 + return ret ? ret : count;
7038 +#ifdef CONFIG_SCHED_MC
7039 +static ssize_t sched_mc_power_savings_show(struct sysdev_class *class,
7042 + return sprintf(page, "%u\n", sched_mc_power_savings);
7044 +static ssize_t sched_mc_power_savings_store(struct sysdev_class *class,
7045 + const char *buf, size_t count)
7047 + return sched_power_savings_store(buf, count, 0);
7049 +static SYSDEV_CLASS_ATTR(sched_mc_power_savings, 0644,
7050 + sched_mc_power_savings_show,
7051 + sched_mc_power_savings_store);
7054 +#ifdef CONFIG_SCHED_SMT
7055 +static ssize_t sched_smt_power_savings_show(struct sysdev_class *dev,
7058 + return sprintf(page, "%u\n", sched_smt_power_savings);
7060 +static ssize_t sched_smt_power_savings_store(struct sysdev_class *dev,
7061 + const char *buf, size_t count)
7063 + return sched_power_savings_store(buf, count, 1);
7065 +static SYSDEV_CLASS_ATTR(sched_smt_power_savings, 0644,
7066 + sched_smt_power_savings_show,
7067 + sched_smt_power_savings_store);
7070 +int sched_create_sysfs_power_savings_entries(struct sysdev_class *cls)
7074 +#ifdef CONFIG_SCHED_SMT
7075 + if (smt_capable())
7076 + err = sysfs_create_file(&cls->kset.kobj,
7077 + &attr_sched_smt_power_savings.attr);
7079 +#ifdef CONFIG_SCHED_MC
7080 + if (!err && mc_capable())
7081 + err = sysfs_create_file(&cls->kset.kobj,
7082 + &attr_sched_mc_power_savings.attr);
7086 +#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
7088 +#ifndef CONFIG_CPUSETS
7090 + * Add online and remove offline CPUs from the scheduler domains.
7091 + * When cpusets are enabled they take over this function.
7093 +static int update_sched_domains(struct notifier_block *nfb,
7094 + unsigned long action, void *hcpu)
7098 + case CPU_ONLINE_FROZEN:
7100 + case CPU_DEAD_FROZEN:
7101 + partition_sched_domains(1, NULL, NULL);
7105 + return NOTIFY_DONE;
7110 +static int update_runtime(struct notifier_block *nfb,
7111 + unsigned long action, void *hcpu)
7114 + case CPU_DOWN_PREPARE:
7115 + case CPU_DOWN_PREPARE_FROZEN:
7118 + case CPU_DOWN_FAILED:
7119 + case CPU_DOWN_FAILED_FROZEN:
7121 + case CPU_ONLINE_FROZEN:
7125 + return NOTIFY_DONE;
7129 +#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
7131 + * Cheaper version of the below functions in case support for SMT and MC is
7132 + * compiled in but CPUs have no siblings.
7134 +static int sole_cpu_idle(unsigned long cpu)
7136 + return rq_idle(cpu_rq(cpu));
7139 +#ifdef CONFIG_SCHED_SMT
7140 +/* All this CPU's SMT siblings are idle */
7141 +static int siblings_cpu_idle(unsigned long cpu)
7143 + return cpus_subset(cpu_rq(cpu)->smt_siblings,
7144 + grq.cpu_idle_map);
7147 +#ifdef CONFIG_SCHED_MC
7148 +/* All this CPU's shared cache siblings are idle */
7149 +static int cache_cpu_idle(unsigned long cpu)
7151 + return cpus_subset(cpu_rq(cpu)->cache_siblings,
7152 + grq.cpu_idle_map);
7156 +void __init sched_init_smp(void)
7158 + struct sched_domain *sd;
7161 + cpumask_t non_isolated_cpus;
7163 +#if defined(CONFIG_NUMA)
7164 + sched_group_nodes_bycpu = kzalloc(nr_cpu_ids * sizeof(void **),
7166 + BUG_ON(sched_group_nodes_bycpu == NULL);
7168 + get_online_cpus();
7169 + mutex_lock(&sched_domains_mutex);
7170 + arch_init_sched_domains(&cpu_online_map);
7171 + cpus_andnot(non_isolated_cpus, cpu_possible_map, cpu_isolated_map);
7172 + if (cpus_empty(non_isolated_cpus))
7173 + cpu_set(smp_processor_id(), non_isolated_cpus);
7174 + mutex_unlock(&sched_domains_mutex);
7175 + put_online_cpus();
7177 +#ifndef CONFIG_CPUSETS
7178 + /* XXX: Theoretical race here - CPU may be hotplugged now */
7179 + hotcpu_notifier(update_sched_domains, 0);
7182 + /* RT runtime code needs to handle some hotplug events */
7183 + hotcpu_notifier(update_runtime, 0);
7185 + /* Move init over to a non-isolated CPU */
7186 + if (set_cpus_allowed_ptr(current, &non_isolated_cpus) < 0)
7190 +	 * Assume that every added cpu gives us slightly less overall latency,
7191 +	 * allowing us to increase the base rr_interval, but in a non-linear
7194 + rr_interval *= 1 + ilog2(num_online_cpus());
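As a worked example of the scaling above (the default rr_interval of 6ms is an assumption about the rest of this patch, not shown in this hunk):

/*
 *   1 CPU   : 6 * (1 + ilog2(1))  = 6 * 1 = 6ms
 *   2 CPUs  : 6 * (1 + ilog2(2))  = 6 * 2 = 12ms
 *   4 CPUs  : 6 * (1 + ilog2(4))  = 6 * 3 = 18ms
 *   16 CPUs : 6 * (1 + ilog2(16)) = 6 * 5 = 30ms
 */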
7198 + * Set up the relative cache distance of each online cpu from each
7199 + * other in a simple array for quick lookup. Locality is determined
7200 + * by the closest sched_domain that CPUs are separated by. CPUs with
7201 + * shared cache in SMT and MC are treated as local. Separate CPUs
7202 + * (within the same package or physically) within the same node are
7203 + * treated as not local. CPUs not even in the same domain (different
7204 + * nodes) are treated as very distant.
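Concretely, the loop below fills rq->cpu_locality with small integers. The matrix here is an illustrative assumption for a made-up 4-CPU box (cpu0/cpu1 are SMT siblings sharing cache, cpu2 is a separate package in the same node, cpu3 sits in another node), with the values inferred from the level checks further down (0 = same CPU, 1 = shared cache, 2 = same node, 3 = other node):

/*
 *   rq(cpu0)->cpu_locality[]:   cpu0  cpu1  cpu2  cpu3
 *                                 0     1     2     3
 */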
7206 + for_each_online_cpu(cpu) {
7207 + struct rq *rq = cpu_rq(cpu);
7208 + for_each_domain(cpu, sd) {
7209 + unsigned long locality;
7212 +#ifdef CONFIG_SCHED_SMT
7213 + if (sd->level == SD_LV_SIBLING) {
7214 + for_each_cpu_mask_nr(other_cpu, sd->span)
7215 + cpu_set(other_cpu, rq->smt_siblings);
7218 +#ifdef CONFIG_SCHED_MC
7219 + if (sd->level == SD_LV_MC) {
7220 + for_each_cpu_mask_nr(other_cpu, sd->span)
7221 + cpu_set(other_cpu, rq->cache_siblings);
7224 + if (sd->level <= SD_LV_MC)
7226 + else if (sd->level <= SD_LV_NODE)
7231 + for_each_cpu_mask_nr(other_cpu, sd->span) {
7232 + if (locality < rq->cpu_locality[other_cpu])
7233 + rq->cpu_locality[other_cpu] = locality;
7238 + * Each runqueue has its own function in case it doesn't have
7239 +		 * siblings of its own, allowing mixed topologies.
7241 +#ifdef CONFIG_SCHED_SMT
7242 + if (cpus_weight(rq->smt_siblings) > 1)
7243 + rq->siblings_idle = siblings_cpu_idle;
7245 +#ifdef CONFIG_SCHED_MC
7246 + if (cpus_weight(rq->cache_siblings) > 1)
7247 + rq->cache_idle = cache_cpu_idle;
7253 +void __init sched_init_smp(void)
7256 +#endif /* CONFIG_SMP */
7258 +int in_sched_functions(unsigned long addr)
7260 + return in_lock_functions(addr) ||
7261 + (addr >= (unsigned long)__sched_text_start
7262 + && addr < (unsigned long)__sched_text_end);
7265 +void __init sched_init(void)
7270 + prio_ratios[0] = 100;
7271 + for (i = 1 ; i < PRIO_RANGE ; i++)
7272 + prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;
7274 + spin_lock_init(&grq.lock);
7276 + init_defrootdomain();
7278 + uprq = &per_cpu(runqueues, 0);
7280 + for_each_possible_cpu(i) {
7282 + rq->user_pc = rq->nice_pc = rq->softirq_pc = rq->system_pc =
7283 + rq->iowait_pc = rq->idle_pc = 0;
7289 + rq_attach_root(rq, &def_root_domain);
7291 + atomic_set(&rq->nr_iowait, 0);
7297 + * Set the base locality for cpu cache distance calculation to
7298 + * "distant" (3). Make sure the distance from a CPU to itself is 0.
7300 + for_each_possible_cpu(i) {
7304 +#ifdef CONFIG_SCHED_SMT
7305 + cpus_clear(rq->smt_siblings);
7306 + cpu_set(i, rq->smt_siblings);
7307 + rq->siblings_idle = sole_cpu_idle;
7308 + cpu_set(i, rq->smt_siblings);
7310 +#ifdef CONFIG_SCHED_MC
7311 + cpus_clear(rq->cache_siblings);
7312 + cpu_set(i, rq->cache_siblings);
7313 + rq->cache_idle = sole_cpu_idle;
7314 + cpu_set(i, rq->cache_siblings);
7316 + rq->cpu_locality = alloc_bootmem(nr_cpu_ids * sizeof(unsigned long));
7317 + for_each_possible_cpu(j) {
7319 + rq->cpu_locality[j] = 0;
7321 + rq->cpu_locality[j] = 3;
7326 + for (i = 0; i < PRIO_LIMIT; i++)
7327 + INIT_LIST_HEAD(grq.queue + i);
7328 + /* delimiter for bitsearch */
7329 + __set_bit(PRIO_LIMIT, grq.prio_bitmap);
7331 +#ifdef CONFIG_PREEMPT_NOTIFIERS
7332 + INIT_HLIST_HEAD(&init_task.preempt_notifiers);
7335 +#ifdef CONFIG_RT_MUTEXES
7336 + plist_head_init(&init_task.pi_waiters, &init_task.pi_lock);
7340 + * The boot idle thread does lazy MMU switching as well:
7342 + atomic_inc(&init_mm.mm_count);
7343 + enter_lazy_tlb(&init_mm, current);
7346 + * Make us the idle thread. Technically, schedule() should not be
7347 + * called from this thread, however somewhere below it might be,
7348 + * but because we are the idle thread, we just pick up running again
7349 + * when this runqueue becomes "idle".
7351 + init_idle(current, smp_processor_id());
7354 +#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
7355 +void __might_sleep(char *file, int line)
7358 + static unsigned long prev_jiffy; /* ratelimiting */
7360 + if ((in_atomic() || irqs_disabled()) &&
7361 + system_state == SYSTEM_RUNNING && !oops_in_progress) {
7362 + if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
7364 + prev_jiffy = jiffies;
7365 + printk(KERN_ERR "BUG: sleeping function called from invalid"
7366 + " context at %s:%d\n", file, line);
7367 + printk("in_atomic():%d, irqs_disabled():%d\n",
7368 + in_atomic(), irqs_disabled());
7369 + debug_show_held_locks(current);
7370 + if (irqs_disabled())
7371 + print_irqtrace_events(current);
7376 +EXPORT_SYMBOL(__might_sleep);
7379 +#ifdef CONFIG_MAGIC_SYSRQ
7380 +void normalize_rt_tasks(void)
7382 + struct task_struct *g, *p;
7383 + unsigned long flags;
7387 + read_lock_irq(&tasklist_lock);
7389 + do_each_thread(g, p) {
7390 + if (!rt_task(p) && !iso_task(p))
7393 + spin_lock_irqsave(&p->pi_lock, flags);
7394 + rq = __task_grq_lock(p);
7395 + update_rq_clock(rq);
7397 + queued = task_queued(p);
7400 + __setscheduler(p, rq, SCHED_NORMAL, 0);
7403 + try_preempt(p, rq);
7406 + __task_grq_unlock();
7407 + spin_unlock_irqrestore(&p->pi_lock, flags);
7408 + } while_each_thread(g, p);
7410 + read_unlock_irq(&tasklist_lock);
7412 +#endif /* CONFIG_MAGIC_SYSRQ */
7416 + * These functions are only useful for the IA64 MCA handling.
7418 + * They can only be called when the whole system has been
7419 + * stopped - every CPU needs to be quiescent, and no scheduling
7420 + * activity can take place. Using them for anything else would
7421 + * be a serious bug, and as a result, they aren't even visible
7422 + * under any other configuration.
7426 + * curr_task - return the current task for a given cpu.
7427 + * @cpu: the processor in question.
7429 + * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
7431 +struct task_struct *curr_task(int cpu)
7433 + return cpu_curr(cpu);
7437 + * set_curr_task - set the current task for a given cpu.
7438 + * @cpu: the processor in question.
7439 + * @p: the task pointer to set.
7441 + * Description: This function must only be used when non-maskable interrupts
7442 + * are serviced on a separate stack. It allows the architecture to switch the
7443 + * notion of the current task on a cpu in a non-blocking manner. This function
7444 + * must be called with all CPUs synchronised and interrupts disabled, and the
7445 + * caller must save the original value of the current task (see
7446 + * curr_task() above) and restore that value before reenabling interrupts and
7447 + * re-starting the system.
7449 + * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
7451 +void set_curr_task(int cpu, struct task_struct *p)
7453 + cpu_curr(cpu) = p;
7459 + * Use precise platform statistics if available:
7461 +#ifdef CONFIG_VIRT_CPU_ACCOUNTING
7462 +cputime_t task_utime(struct task_struct *p)
7467 +cputime_t task_stime(struct task_struct *p)
7472 +cputime_t task_utime(struct task_struct *p)
7474 + clock_t utime = cputime_to_clock_t(p->utime),
7475 + total = utime + cputime_to_clock_t(p->stime);
7478 + temp = (u64)nsec_to_clock_t(p->sched_time);
7482 + do_div(temp, total);
7484 + utime = (clock_t)temp;
7486 + p->prev_utime = max(p->prev_utime, clock_t_to_cputime(utime));
7487 + return p->prev_utime;
7490 +cputime_t task_stime(struct task_struct *p)
7494 + stime = nsec_to_clock_t(p->sched_time) -
7495 + cputime_to_clock_t(task_utime(p));
7498 + p->prev_stime = max(p->prev_stime, clock_t_to_cputime(stime));
7500 + return p->prev_stime;
7504 +inline cputime_t task_gtime(struct task_struct *p)
7509 +void __cpuinit init_idle_bootup_task(struct task_struct *idle)
7512 +#ifdef CONFIG_SCHED_DEBUG
7513 +void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
7516 +void proc_sched_set_task(struct task_struct *p)
7519 diff --git a/kernel/sched_stats.h b/kernel/sched_stats.h
7520 index 7dbf72a..90fba60 100644
7521 --- a/kernel/sched_stats.h
7522 +++ b/kernel/sched_stats.h
7523 @@ -296,20 +296,21 @@ sched_info_switch(struct task_struct *prev, struct task_struct *next)
7524 static inline void account_group_user_time(struct task_struct *tsk,
7527 - struct signal_struct *sig;
7528 + struct thread_group_cputimer *cputimer;
7530 /* tsk == current, ensure it is safe to use ->signal */
7531 if (unlikely(tsk->exit_state))
7534 - sig = tsk->signal;
7535 - if (sig->cputime.totals) {
7536 - struct task_cputime *times;
7537 + cputimer = &tsk->signal->cputimer;
7539 - times = per_cpu_ptr(sig->cputime.totals, get_cpu());
7540 - times->utime = cputime_add(times->utime, cputime);
7541 - put_cpu_no_resched();
7543 + if (!cputimer->running)
7546 + spin_lock(&cputimer->lock);
7547 + cputimer->cputime.utime =
7548 + cputime_add(cputimer->cputime.utime, cputime);
7549 + spin_unlock(&cputimer->lock);
7553 @@ -325,20 +326,21 @@ static inline void account_group_user_time(struct task_struct *tsk,
7554 static inline void account_group_system_time(struct task_struct *tsk,
7557 - struct signal_struct *sig;
7558 + struct thread_group_cputimer *cputimer;
7560 /* tsk == current, ensure it is safe to use ->signal */
7561 if (unlikely(tsk->exit_state))
7564 - sig = tsk->signal;
7565 - if (sig->cputime.totals) {
7566 - struct task_cputime *times;
7567 + cputimer = &tsk->signal->cputimer;
7569 - times = per_cpu_ptr(sig->cputime.totals, get_cpu());
7570 - times->stime = cputime_add(times->stime, cputime);
7571 - put_cpu_no_resched();
7573 + if (!cputimer->running)
7576 + spin_lock(&cputimer->lock);
7577 + cputimer->cputime.stime =
7578 + cputime_add(cputimer->cputime.stime, cputime);
7579 + spin_unlock(&cputimer->lock);
7583 @@ -354,6 +356,7 @@ static inline void account_group_system_time(struct task_struct *tsk,
7584 static inline void account_group_exec_runtime(struct task_struct *tsk,
7585 unsigned long long ns)
7587 + struct thread_group_cputimer *cputimer;
7588 struct signal_struct *sig;
7591 @@ -362,11 +365,12 @@ static inline void account_group_exec_runtime(struct task_struct *tsk,
7595 - if (sig->cputime.totals) {
7596 - struct task_cputime *times;
7597 + cputimer = &sig->cputimer;
7599 - times = per_cpu_ptr(sig->cputime.totals, get_cpu());
7600 - times->sum_exec_runtime += ns;
7601 - put_cpu_no_resched();
7603 + if (!cputimer->running)
7606 + spin_lock(&cputimer->lock);
7607 + cputimer->cputime.sum_exec_runtime += ns;
7608 + spin_unlock(&cputimer->lock);
7610 diff --git a/kernel/signal.c b/kernel/signal.c
7611 index 4530fc6..85abaea 100644
7612 --- a/kernel/signal.c
7613 +++ b/kernel/signal.c
7614 @@ -1342,7 +1342,6 @@ int do_notify_parent(struct task_struct *tsk, int sig)
7615 struct siginfo info;
7616 unsigned long flags;
7617 struct sighand_struct *psig;
7618 - struct task_cputime cputime;
7622 @@ -1373,9 +1372,10 @@ int do_notify_parent(struct task_struct *tsk, int sig)
7624 info.si_uid = tsk->uid;
7626 - thread_group_cputime(tsk, &cputime);
7627 - info.si_utime = cputime_to_jiffies(cputime.utime);
7628 - info.si_stime = cputime_to_jiffies(cputime.stime);
7629 + info.si_utime = cputime_to_clock_t(cputime_add(tsk->utime,
7630 + tsk->signal->utime));
7631 + info.si_stime = cputime_to_clock_t(cputime_add(tsk->stime,
7632 + tsk->signal->stime));
7634 info.si_status = tsk->exit_code & 0x7f;
7635 if (tsk->exit_code & 0x80)
7636 diff --git a/kernel/sysctl.c b/kernel/sysctl.c
7637 index 3d56fe7..1fe0a2d 100644
7638 --- a/kernel/sysctl.c
7639 +++ b/kernel/sysctl.c
7640 @@ -86,11 +86,6 @@ extern int sysctl_nr_open_min, sysctl_nr_open_max;
7641 extern int rcutorture_runnable;
7642 #endif /* #ifdef CONFIG_RCU_TORTURE_TEST */
7644 -/* Constants used for minimum and maximum */
7645 -#if defined(CONFIG_HIGHMEM) || defined(CONFIG_DETECT_SOFTLOCKUP)
7646 -static int one = 1;
7649 #ifdef CONFIG_DETECT_SOFTLOCKUP
7650 static int sixty = 60;
7651 static int neg_one = -1;
7652 @@ -101,8 +96,14 @@ static int two = 2;
7656 -static int one_hundred = 100;
7658 +static int __read_mostly one = 1;
7659 +static int __read_mostly one_hundred = 100;
7660 +#ifdef CONFIG_SCHED_BFS
7661 +extern int rr_interval;
7662 +extern int sched_iso_cpu;
7663 +static int __read_mostly five_thousand = 5000;
7665 /* this is needed for the proc_dointvec_minmax for [fs_]overflow UID and GID */
7666 static int maxolduid = 65535;
7667 static int minolduid;
7668 @@ -227,7 +228,7 @@ static struct ctl_table root_table[] = {
7672 -#ifdef CONFIG_SCHED_DEBUG
7673 +#if defined(CONFIG_SCHED_DEBUG) && !defined(CONFIG_SCHED_BFS)
7674 static int min_sched_granularity_ns = 100000; /* 100 usecs */
7675 static int max_sched_granularity_ns = NSEC_PER_SEC; /* 1 second */
7676 static int min_wakeup_granularity_ns; /* 0 usecs */
7677 @@ -235,6 +236,7 @@ static int max_wakeup_granularity_ns = NSEC_PER_SEC; /* 1 second */
7680 static struct ctl_table kern_table[] = {
7681 +#ifndef CONFIG_SCHED_BFS
7682 #ifdef CONFIG_SCHED_DEBUG
7684 .ctl_name = CTL_UNNUMBERED,
7685 @@ -344,6 +346,7 @@ static struct ctl_table kern_table[] = {
7687 .proc_handler = &proc_dointvec,
7689 +#endif /* !CONFIG_SCHED_BFS */
7690 #ifdef CONFIG_PROVE_LOCKING
7692 .ctl_name = CTL_UNNUMBERED,
7693 @@ -719,6 +722,30 @@ static struct ctl_table kern_table[] = {
7694 .proc_handler = &proc_dointvec,
7697 +#ifdef CONFIG_SCHED_BFS
7699 + .ctl_name = CTL_UNNUMBERED,
7700 + .procname = "rr_interval",
7701 + .data = &rr_interval,
7702 + .maxlen = sizeof (int),
7704 + .proc_handler = &proc_dointvec_minmax,
7705 + .strategy = &sysctl_intvec,
7707 + .extra2 = &five_thousand,
7710 + .ctl_name = CTL_UNNUMBERED,
7711 + .procname = "iso_cpu",
7712 + .data = &sched_iso_cpu,
7713 + .maxlen = sizeof (int),
7715 + .proc_handler = &proc_dointvec_minmax,
7716 + .strategy = &sysctl_intvec,
7718 + .extra2 = &one_hundred,
7721 #if defined(CONFIG_S390) && defined(CONFIG_SMP)
7723 .ctl_name = KERN_SPIN_RETRY,
7724 diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
7725 index ef1586d..5677d7f 100644
7726 --- a/kernel/time/tick-sched.c
7727 +++ b/kernel/time/tick-sched.c
7728 @@ -447,6 +447,7 @@ void tick_nohz_restart_sched_tick(void)
7729 tick_do_update_jiffies64(now);
7730 cpu_clear(cpu, nohz_cpu_mask);
7734 * We stopped the tick in idle. Update process times would miss the
7735 * time we slept as update_process_times does only a 1 tick
7736 @@ -457,10 +458,7 @@ void tick_nohz_restart_sched_tick(void)
7737 * We might be one off. Do not randomly account a huge number of ticks!
7739 if (ticks && ticks < LONG_MAX) {
7740 - add_preempt_count(HARDIRQ_OFFSET);
7741 - account_system_time(current, HARDIRQ_OFFSET,
7742 - jiffies_to_cputime(ticks));
7743 - sub_preempt_count(HARDIRQ_OFFSET);
7744 + account_idle_ticks(ticks);
7747 touch_softlockup_watchdog();
7748 diff --git a/kernel/timer.c b/kernel/timer.c
7749 index 15e4f90..f62d67b 100644
7750 --- a/kernel/timer.c
7751 +++ b/kernel/timer.c
7752 @@ -1021,20 +1021,21 @@ unsigned long get_next_timer_interrupt(unsigned long now)
7757 #ifndef CONFIG_VIRT_CPU_ACCOUNTING
7758 -void account_process_tick(struct task_struct *p, int user_tick)
7760 - cputime_t one_jiffy = jiffies_to_cputime(1);
7761 +//void account_process_tick(struct task_struct *p, int user_tick)
7763 +// cputime_t one_jiffy = jiffies_to_cputime(1);
7766 - account_user_time(p, one_jiffy);
7767 +// account_user_time(p, one_jiffy);
7768 -	account_user_time_scaled(p, cputime_to_scaled(one_jiffy));
7769 +//	account_user_time_scaled(p, cputime_to_scaled(one_jiffy));
7770 - account_system_time(p, HARDIRQ_OFFSET, one_jiffy);
7771 +// account_system_time(p, HARDIRQ_OFFSET, one_jiffy);
7772 -	account_system_time_scaled(p, cputime_to_scaled(one_jiffy));
7773 +//	account_system_time_scaled(p, cputime_to_scaled(one_jiffy));
7779 * Called from the timer interrupt handler to charge one tick to the current
7780 @@ -1045,7 +1046,7 @@ void update_process_times(int user_tick)
7781 struct task_struct *p = current;
7782 int cpu = smp_processor_id();
7784 - /* Note: this timer irq context must be accounted for as well. */
7785 + /* Accounting is done within sched_bfs.c */
7786 account_process_tick(p, user_tick);
7788 if (rcu_pending(cpu))
7789 @@ -1098,8 +1099,7 @@ static inline void calc_load(unsigned long ticks)
7792 * This function runs timers and the timer-tq in bottom half context.
7794  */
7795 static void run_timer_softirq(struct softirq_action *h)
7797 struct tvec_base *base = __get_cpu_var(tvec_bases);
7799 diff --git a/kernel/workqueue.c b/kernel/workqueue.c
7800 index d4dc69d..9041f86 100644
7801 --- a/kernel/workqueue.c
7802 +++ b/kernel/workqueue.c
7803 @@ -323,7 +323,6 @@ static int worker_thread(void *__cwq)
7804 if (cwq->wq->freezeable)
7807 - set_user_nice(current, -5);
7810 prepare_to_wait(&cwq->more_work, &wait, TASK_INTERRUPTIBLE);
7811 diff --git a/mm/oom_kill.c b/mm/oom_kill.c
7812 index a0a0190..4d35180 100644
7815 @@ -334,7 +334,7 @@ static void __oom_kill_task(struct task_struct *p, int verbose)
7816 * all the memory it needs. That way it should be able to
7817 * exit() and clear out its resources quickly...
7819 - p->rt.time_slice = HZ;
7820 + set_oom_timeslice(p);
7821 set_tsk_thread_flag(p, TIF_MEMDIE);
7823 force_sig(SIGKILL, p);