From a9d18c4a0c2be3d5e7dcedbedc32a9998b1e5515 Mon Sep 17 00:00:00 2001
From: Matthew Lugg
Date: Wed, 18 Feb 2026 14:51:59 +0100
Subject: [PATCH] std.heap.PageAllocator: avoid mremaps which may reserve
 potential stack space

Linux's approach to mapping the main thread's stack is quite odd: it
essentially tries to select an mmap address (assuming unhinted mmap calls)
that does not cover the region of virtual address space into which the stack
*would* grow (based on the stack rlimit), but it doesn't actually *prevent*
those pages from being mapped. It also doesn't try particularly hard: it's
been observed that the first (unhinted) mmap call in a simple application is
usually placed at an address within a gigabyte or two of the stack, which is
close enough to make issues somewhat likely. In particular, if we get an
address which is close-ish to the stack, and then `mremap` it without the
MAYMOVE flag, we are *very* likely to map pages in this "theoretical stack
region". This is a particular problem on loongarch64, where the initial mmap
address is empirically only around 200 megabytes from the stack (whereas on
most other 64-bit targets it's closer to a gigabyte).

To work around this, we just need to avoid mremap in some cases.
Unfortunately, this system call isn't used too heavily by musl or glibc, so
design issues like this can and do exist without being caught.

So, when `PageAllocator.resize` is called, let's not try to `mremap` to grow
the pages. We can still call `mremap` in the `PageAllocator.remap` path,
because in that case we can set the `MAYMOVE` flag, which empirically appears
to make the Linux kernel avoid the problematic "theoretical stack region".
---
 lib/std/heap/PageAllocator.zig | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/lib/std/heap/PageAllocator.zig b/lib/std/heap/PageAllocator.zig
index 66abe7d4da..db736036b3 100644
--- a/lib/std/heap/PageAllocator.zig
+++ b/lib/std/heap/PageAllocator.zig
@@ -225,7 +225,10 @@ pub fn realloc(uncasted_memory: []u8, alignment: Alignment, new_len: usize, may_
     if (new_size_aligned == page_aligned_len) return memory.ptr;
 
-    if (posix.MREMAP != void) {
+    // When the stack grows down, only use `mremap` if the allocation may move.
+    // Otherwise, we might grow the allocation and intrude on virtual address
+    // space which we want to keep available to the stack.
+    if (posix.MREMAP != void and (stack_direction == .up or may_move)) {
         // TODO: if the next_mmap_addr_hint is within the remapped range, update it
         const new_memory = posix.mremap(memory.ptr, page_aligned_len, new_size_aligned, .{ .MAYMOVE = may_move }, null) catch return null;
         return new_memory.ptr;
     }