
Merge tag 'sched-urgent-2026-03-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fixes from Ingo Molnar:

 - Fix zero_vruntime tracking when there's a single task running

 - Fix slice protection logic

 - Fix the ->vprot logic for reniced tasks

 - Fix lag clamping in mixed slice workloads

 - Fix an objtool uaccess warning (and bug) in the
   !CONFIG_RSEQ_SLICE_EXTENSION case caused by unexpected un-inlining,
   which triggers with older compilers

 - Fix a comment in the rseq registration rseq_size bound check code

 - Fix a legacy RSEQ ABI quirk that handled 32-byte area sizes
   differently: the feature size has now naturally reached that special
   size, which we want to avoid. The visible ugliness of the new
   reserved field will be removed the next time the RSEQ area is
   extended.

* tag 'sched-urgent-2026-03-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  rseq: slice ext: Ensure rseq feature size differs from original rseq size
  rseq: Clarify rseq registration rseq_size bound check comment
  sched/core: Fix wakeup_preempt's next_class tracking
  rseq: Mark rseq_arm_slice_extension_timer() __always_inline
  sched/fair: Fix lag clamp
  sched/eevdf: Update se->vprot in reweight_entity()
  sched/fair: Only set slice protection at pick time
  sched/fair: Fix zero_vruntime tracking
Linus Torvalds 2026-03-01 11:09:24 -08:00
commit 6170625149
10 changed files with 172 additions and 52 deletions


@@ -87,10 +87,17 @@ struct rseq_slice_ctrl {
 };
 
 /*
- * struct rseq is aligned on 4 * 8 bytes to ensure it is always
- * contained within a single cache-line.
- *
- * A single struct rseq per thread is allowed.
+ * The original size and alignment of the allocation for struct rseq is
+ * 32 bytes.
+ *
+ * The allocation size needs to be greater or equal to
+ * max(getauxval(AT_RSEQ_FEATURE_SIZE), 32), and the allocation needs to
+ * be aligned on max(getauxval(AT_RSEQ_ALIGN), 32).
+ *
+ * As an alternative, userspace is allowed to use both the original size
+ * and alignment of 32 bytes for backward compatibility.
+ *
+ * A single active struct rseq registration per thread is allowed.
  */
 struct rseq {
 	/*
@@ -180,10 +187,21 @@ struct rseq {
 	 */
 	struct rseq_slice_ctrl slice_ctrl;
 
+	/*
+	 * Before rseq became extensible, its original size was 32 bytes even
+	 * though the active rseq area was only 20 bytes.
+	 * Exposing a 32 bytes feature size would make life needlessly painful
+	 * for userspace. Therefore, add a reserved byte after byte 32
+	 * to bump the rseq feature size from 32 to 33.
+	 * The next field to be added to the rseq area will be larger
+	 * than one byte, and will replace this reserved byte.
+	 */
+	__u8 __reserved;
+
 	/*
 	 * Flexible array member at end of structure, after last feature field.
 	 */
 	char end[];
-} __attribute__((aligned(4 * sizeof(__u64))));
+} __attribute__((aligned(32)));
 
 #endif /* _UAPI_LINUX_RSEQ_H */