uprobes: Change prepare_uretprobe() to (try to) flush the dead frames
author Oleg Nesterov <[email protected]>
Tue, 21 Jul 2015 13:40:23 +0000 (15:40 +0200)
committer Ingo Molnar <[email protected]>
Fri, 31 Jul 2015 08:38:05 +0000 (10:38 +0200)
Change prepare_uretprobe() to flush the return_instances for which
arch_uretprobe_is_alive() is false. This is not needed for correctness,
but it helps to avoid spurious failures caused by hitting
MAX_URETPROBE_DEPTH.

Note: in this case arch_uretprobe_is_alive() can return a false
positive, because the stack can grow again after longjmp().
Unfortunately, the kernel cannot solve this problem completely; see
the next patch.

Tested-by: Pratyush Anand <[email protected]>
Signed-off-by: Oleg Nesterov <[email protected]>
Acked-by: Srikar Dronamraju <[email protected]>
Acked-by: Anton Arapov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
kernel/events/uprobes.c

index 93d939c80cd92e73d018f52ea6b1d40a4a18d9aa..7e61c8ca27e04c47e10d6a1cf4e38b9e0aa05d19 100644 (file)
@@ -1511,6 +1511,16 @@ static unsigned long get_trampoline_vaddr(void)
        return trampoline_vaddr;
 }
 
+static void cleanup_return_instances(struct uprobe_task *utask, struct pt_regs *regs)
+{
+       struct return_instance *ri = utask->return_instances;
+       while (ri && !arch_uretprobe_is_alive(ri, regs)) {
+               ri = free_ret_instance(ri);
+               utask->depth--;
+       }
+       utask->return_instances = ri;
+}
+
 static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 {
        struct return_instance *ri;
@@ -1541,6 +1551,9 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
        if (orig_ret_vaddr == -1)
                goto fail;
 
+       /* drop the entries invalidated by longjmp() */
+       cleanup_return_instances(utask, regs);
+
        /*
         * We don't want to keep trampoline address in stack, rather keep the
         * original return address of first caller thru all the consequent