mirror of
https://github.com/torvalds/linux.git
synced 2026-03-08 05:44:45 +01:00
mm: clarify lazy_mmu sleeping constraints
The lazy MMU mode documentation makes clear that an implementation should not assume that preemption is disabled or any lock is held upon entry to the mode; however, it says nothing about what code using the lazy MMU interface should expect. In practice, sleeping is forbidden (for generic code) while the lazy MMU mode is active: say it explicitly.

Link: https://lkml.kernel.org/r/20251215150323.2218608-6-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David S. Miller <davem@davemloft.net>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This commit is contained in:
parent
442bf488b9
commit
f2be745071
1 changed file with 9 additions and 5 deletions
@@ -225,11 +225,15 @@ static inline int pmd_dirty(pmd_t pmd)
  * up to date.
  *
  * In the general case, no lock is guaranteed to be held between entry and exit
- * of the lazy mode. So the implementation must assume preemption may be enabled
- * and cpu migration is possible; it must take steps to be robust against this.
- * (In practice, for user PTE updates, the appropriate page table lock(s) are
- * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
- * and the mode cannot be used in interrupt context.
+ * of the lazy mode. (In practice, for user PTE updates, the appropriate page
+ * table lock(s) are held, but for kernel PTE updates, no lock is held).
+ * The implementation must therefore assume preemption may be enabled upon
+ * entry to the mode and cpu migration is possible; it must take steps to be
+ * robust against this. An implementation may handle this by disabling
+ * preemption, as a consequence generic code may not sleep while the lazy MMU
+ * mode is active.
+ *
+ * Nesting is not permitted and the mode cannot be used in interrupt context.
  */
 #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 static inline void arch_enter_lazy_mmu_mode(void) {}