linux/kernel/sched
Peter Zijlstra 7dadeaa6e8 sched: Further restrict the preemption modes
The introduction of PREEMPT_LAZY was for multiple reasons:

  - PREEMPT_RT suffered from over-scheduling, hurting performance compared to
    !PREEMPT_RT.

  - the introduction of (more) features that rely on preemption, such as
    folio_zero_user(), which can perform large memset()s without preemption
    checks.

    (Xen already had a horrible hack to deal with long running hypercalls)

  - the endless and uncontrolled sprinkling of cond_resched() -- mostly cargo
    cult, or added in response to hard-to-replicate workloads.

By moving to a model that is fundamentally preemptible, these issues become
manageable without the need to introduce more horrible hacks.

Since this is a requirement, limit PREEMPT_NONE to architectures that do not
support preemption at all. Further limit PREEMPT_VOLUNTARY to those
architectures that do not yet have PREEMPT_LAZY support (with the eventual goal
of making this the empty set and completely removing voluntary preemption and
cond_resched() -- notably, VOLUNTARY is already limited to !ARCH_NO_PREEMPT).
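The restriction described above amounts to a Kconfig dependency change, roughly like the following sketch (illustrative, not the literal patch; symbol names follow kernel/Kconfig.preempt):

```
# Sketch of the new restrictions (assumed shape, not the actual diff)

config PREEMPT_NONE
	bool "No Forced Preemption (Server)"
	# only offered where the architecture cannot preempt at all
	depends on ARCH_NO_PREEMPT

config PREEMPT_VOLUNTARY
	bool "Voluntary Kernel Preemption (Desktop)"
	depends on !ARCH_NO_PREEMPT
	# only offered until the architecture gains PREEMPT_LAZY support
	depends on !ARCH_HAS_PREEMPT_LAZY
```

On architectures that select ARCH_HAS_PREEMPT_LAZY, only PREEMPT (full) and PREEMPT_LAZY then remain selectable.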

This leaves up-to-date architectures (arm64, loongarch, powerpc, riscv, s390,
x86) with only two preemption models: full and lazy.

While Lazy has been the recommended setting for a while, not all distributions
have managed to make the switch yet. Force things along. Keep the patch minimal
in case hard-to-address regressions pop up.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://patch.msgid.link/20251219101502.GB1132199@noisy.programming.kicks-ass.net
2026-01-08 12:43:57 +01:00
autogroup.c cgroup: Rename cgroup lifecycle hooks to cgroup_task_*() 2025-11-03 11:46:18 -10:00
autogroup.h sched: Clean up and standardize #if/#else/#endif markers in sched/autogroup.[ch] 2025-06-13 08:47:14 +02:00
build_policy.c sched_ext: Move internal type and accessor definitions to ext_internal.h 2025-09-03 11:33:28 -10:00
build_utility.c sched/smp: Make SMP unconditional 2025-06-13 08:47:18 +02:00
clock.c sched: Clean up and standardize #if/#else/#endif markers in sched/clock.c 2025-06-13 08:47:14 +02:00
completion.c sched: Make clangd usable 2025-06-11 11:20:53 +02:00
core.c sched: Further restrict the preemption modes 2026-01-08 12:43:57 +01:00
core_sched.c sched: Make clangd usable 2025-06-11 11:20:53 +02:00
cpuacct.c sched: Make clangd usable 2025-06-11 11:20:53 +02:00
cpudeadline.c sched/deadline: only set free_cpus for online runqueues 2025-10-16 11:13:49 +02:00
cpudeadline.h sched/deadline: only set free_cpus for online runqueues 2025-10-16 11:13:49 +02:00
cpufreq.c sched: Make clangd usable 2025-06-11 11:20:53 +02:00
cpufreq_schedutil.c sched: Clean up and standardize #if/#else/#endif markers in sched/cpufreq_schedutil.c 2025-06-13 08:47:15 +02:00
cpupri.c sched: Make clangd usable 2025-06-11 11:20:53 +02:00
cpupri.h sched/smp: Make SMP unconditional 2025-06-13 08:47:18 +02:00
cputime.c seqlock: Change thread_group_cputime() to use scoped_seqlock_read() 2025-10-21 12:31:57 +02:00
deadline.c sched/core: Rework sched_class::wakeup_preempt() and rq_modified_*() 2025-12-17 10:53:25 +01:00
debug.c sched: Further restrict the preemption modes 2026-01-08 12:43:57 +01:00
ext.c sched/core: Rework sched_class::wakeup_preempt() and rq_modified_*() 2025-12-17 10:53:25 +01:00
ext.h sched_ext: Use cgroup_lock/unlock() to synchronize against cgroup operations 2025-09-03 11:36:07 -10:00
ext_idle.c sched_ext: Wrap kfunc args in struct to prepare for aux__prog 2025-10-13 08:49:29 -10:00
ext_idle.h sched_ext: Always use SMP versions in kernel/sched/ext_idle.h 2025-06-13 14:47:59 -10:00
ext_internal.h sched_ext: Implement load balancer for bypass mode 2025-11-12 06:43:44 -10:00
fair.c sched/fair: Use cpumask_weight_and() in sched_balance_find_dst_group() 2026-01-08 12:43:56 +01:00
features.h sched/fair: Proportional newidle balance 2025-11-17 17:13:16 +01:00
idle.c sched/core: Rework sched_class::wakeup_preempt() and rq_modified_*() 2025-12-17 10:53:25 +01:00
isolation.c sched/isolation: Force housekeeping if isolcpus and nohz_full don't leave any 2025-11-20 20:17:31 +01:00
loadavg.c Merge branch 'tip/sched/urgent' 2025-07-14 17:16:28 +02:00
Makefile tracing: Disable branch profiling in noinstr code 2025-03-22 09:49:26 +01:00
membarrier.c rseq: Simplify the event notification 2025-11-04 08:30:09 +01:00
pelt.c sched: Clean up and standardize #if/#else/#endif markers in sched/pelt.[ch] 2025-06-13 08:47:17 +02:00
pelt.h sched/fair: Switch to task based throttle model 2025-09-03 10:03:14 +02:00
psi.c sched/psi: Fix psi_seq initialization 2025-08-04 10:51:22 -07:00
rq-offsets.c sched: Make migrate_{en,dis}able() inline 2025-09-25 09:57:16 +02:00
rt.c sched/core: Rework sched_class::wakeup_preempt() and rq_modified_*() 2025-12-17 10:53:25 +01:00
sched-pelt.h sched: Make clangd usable 2025-06-11 11:20:53 +02:00
sched.h sched: Reorder some fields in struct rq 2026-01-08 12:43:56 +01:00
smp.h sched: Make clangd usable 2025-06-11 11:20:53 +02:00
stats.c sched/smp: Use the SMP version of schedstats 2025-06-13 08:47:21 +02:00
stats.h sched/core: Fix psi_dequeue() for Proxy Execution 2025-12-06 10:13:16 +01:00
stop_task.c sched/core: Rework sched_class::wakeup_preempt() and rq_modified_*() 2025-12-17 10:53:25 +01:00
swait.c sched: Make clangd usable 2025-06-11 11:20:53 +02:00
syscalls.c Updates for the interrupt core and treewide cleanups: 2025-12-02 09:14:26 -08:00
topology.c sched/fair: Proportional newidle balance 2025-11-17 17:13:16 +01:00
wait.c ARM: 2025-07-30 17:14:01 -07:00
wait_bit.c sched: Make clangd usable 2025-06-11 11:20:53 +02:00