linux/kernel/entry
Jinjie Ruan 31c9387d0d entry: Inline syscall_exit_work() and syscall_trace_enter()
After switching ARM64 to the generic entry code, syscall_exit_work()
showed up as a profiling hotspot because it is not inlined.

Inlining both syscall_trace_enter() and syscall_exit_work() provides a
performance gain when any of the work items is enabled. With audit enabled
this results in a ~4% performance gain for perf bench basic syscall on
a kunpeng920 system:

    | Metric     | Baseline    | Inlined     | Change  |
    | ---------- | ----------- | ----------- | ------  |
    | Total time | 2.353 [sec] | 2.264 [sec] |  ↓3.8%  |
    | usecs/op   | 0.235374    | 0.226472    |  ↓3.8%  |
    | ops/sec    | 4,248,588   | 4,415,554   |  ↑3.9%  |

Small gains can be observed on x86 as well, but there the generated code
is optimized for the work case, which is counterproductive for
high-performance scenarios where such entry/exit work is usually avoided.

Avoid this by marking the work check in syscall_enter_from_user_mode_work()
unlikely, which is what the corresponding check in the exit path does
already.

[ tglx: Massage changelog and add the unlikely() ]

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Link: https://patch.msgid.link/20260128031934.3906955-14-ruanjinjie@huawei.com
2026-01-30 15:38:10 +01:00
    | File                    | Last commit                                                        | Date                      |
    | ----------------------- | ------------------------------------------------------------------ | ------------------------- |
    | common.c                | rseq: Switch to TIF_RSEQ if supported                              | 2025-11-04 08:35:37 +01:00 |
    | Makefile                | entry: Rename "kvm" entry code assets to "virt" to genericize APIs | 2025-09-30 22:50:18 +00:00 |
    | syscall-common.c        | entry: Inline syscall_exit_work() and syscall_trace_enter()        | 2026-01-30 15:38:10 +01:00 |
    | syscall_user_dispatch.c | entry: Inline syscall_exit_work() and syscall_trace_enter()        | 2026-01-30 15:38:10 +01:00 |
    | virt.c                  | entry: Rename "kvm" entry code assets to "virt" to genericize APIs | 2025-09-30 22:50:18 +00:00 |