[PATCH] sched: uninline task_rq_lock()
author	Oleg Nesterov <[email protected]>
	Tue, 27 Jun 2006 09:54:42 +0000 (02:54 -0700)
committer	Linus Torvalds <[email protected]>
	Wed, 28 Jun 2006 00:32:45 +0000 (17:32 -0700)
Uninlining task_rq_lock() saves 543 bytes from sched.o (gcc 3.3.3): the
function has many call sites in sched.c, so emitting one out-of-line copy
is smaller than duplicating its body at each caller.

Signed-off-by: Oleg Nesterov <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Nick Piggin <[email protected]>
Cc: Con Kolivas <[email protected]>
Cc: Peter Williams <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
kernel/sched.c

index 54fa282657cc098cf47b1dd6e3bfc2fc9315410b..19c0d5d16fef1b84c310a54ff0f0869072f7d2a8 100644 (file)
@@ -359,7 +359,7 @@ static inline void finish_lock_switch(runqueue_t *rq, task_t *prev)
  * interrupts.  Note the ordering: we can safely lookup the task_rq without
  * explicitly disabling preemption.
  */
-static inline runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
+static runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
        __acquires(rq->lock)
 {
        struct runqueue *rq;