perf/core: Fix refcount bug and potential UAF in perf_mmap

Syzkaller reported a "refcount_t: addition on 0; use-after-free"
warning in perf_mmap().

The issue is caused by a race condition between a failing mmap() setup
and a concurrent mmap() on a dependent event (e.g., using output
redirection).

In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
event->rb with the mmap_mutex held. The mutex is then released to
perform map_range().

If map_range() fails, perf_mmap_close() is called to clean up.
However, since the mutex was dropped, another thread attaching to
this event (via inherited events or output redirection) can acquire
the mutex, observe the valid event->rb pointer, and attempt to
increment its reference count. If the cleanup path has already
dropped the reference count to zero, this results in a
use-after-free or refcount saturation warning.

Fix this by extending the scope of mmap_mutex to cover the
map_range() call. This makes the ring buffer initialization and
mapping (or the cleanup on failure) effectively atomic, so other
threads can no longer observe a half-initialized or dying ring
buffer.

Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260202162057.7237-1-yuhaocheng035@gmail.com
Haocheng Yu 2026-02-03 00:20:56 +08:00 committed by Peter Zijlstra
parent 6a8a48644c
commit 77de62ad3d


@@ -7465,29 +7465,29 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
 
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
 
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
 
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }