perf: Use task_ctx_sched_out()
author Peter Zijlstra <[email protected]>
Fri, 8 Jan 2016 09:02:37 +0000 (10:02 +0100)
committer Ingo Molnar <[email protected]>
Thu, 21 Jan 2016 17:54:21 +0000 (18:54 +0100)
We already have a function that does exactly what we want here; use it. This
reduces the amount of cpuctx->task_ctx muckery.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
kernel/events/core.c

index 0679e73f5f639566accd52ce083c443584020055..12f1d4a52da9f817ad58123505e7b4c004b42d46 100644
@@ -2545,8 +2545,7 @@ unlock:
 
        if (do_switch) {
                raw_spin_lock(&ctx->lock);
-               ctx_sched_out(ctx, cpuctx, EVENT_ALL);
-               cpuctx->task_ctx = NULL;
+               task_ctx_sched_out(cpuctx, ctx);
                raw_spin_unlock(&ctx->lock);
        }
 }