mm/memory: handle non-split locks correctly in zap_empty_pte_table()

While we handle pte_lockptr() == pmd_lockptr() correctly in
zap_pte_table_if_empty(), we don't handle it in zap_empty_pte_table():
the caller already holds that very lock as the PTE table lock, so the
spin_trylock() always fails, needlessly forcing us onto the slow path.

So let's handle the scenario where pte_lockptr() == pmd_lockptr() better,
which can only happen if CONFIG_SPLIT_PTE_PTLOCKS is not set.
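
For reference, both lock helpers then collapse to the single per-MM
lock; a simplified sketch of the !CONFIG_SPLIT_PTE_PTLOCKS case (not
the literal include/linux/mm.h definitions):

	/*
	 * Without split PTE ptlocks (and thus without split PMD
	 * ptlocks), every page table in the MM is serialized by the
	 * same lock, so the PTE and PMD lock pointers are identical.
	 */
	static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
	{
		return &mm->page_table_lock;
	}

	static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
	{
		return &mm->page_table_lock;
	}

Since kernel spinlocks are not recursive, a spin_trylock() on a lock we
already hold can only fail.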

This only becomes relevant once we enable CONFIG_PT_RECLAIM on
architectures other than x86-64.

Link: https://lkml.kernel.org/r/20260119220708.3438514-3-david@kernel.org
Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
@@ -1830,16 +1830,18 @@ static bool pte_table_reclaim_possible(unsigned long start, unsigned long end,
 	return details && details->reclaim_pt && (end - start >= PMD_SIZE);
 }
 
-static bool zap_empty_pte_table(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval)
+static bool zap_empty_pte_table(struct mm_struct *mm, pmd_t *pmd,
+				spinlock_t *ptl, pmd_t *pmdval)
 {
 	spinlock_t *pml = pmd_lockptr(mm, pmd);
 
-	if (!spin_trylock(pml))
+	if (ptl != pml && !spin_trylock(pml))
 		return false;
 
 	*pmdval = pmdp_get(pmd);
 	pmd_clear(pmd);
-	spin_unlock(pml);
+	if (ptl != pml)
+		spin_unlock(pml);
 	return true;
 }
 
@@ -1931,7 +1933,7 @@ retry:
 	 * from being repopulated by another thread.
 	 */
 	if (can_reclaim_pt && direct_reclaim && addr == end)
-		direct_reclaim = zap_empty_pte_table(mm, pmd, &pmdval);
+		direct_reclaim = zap_empty_pte_table(mm, pmd, ptl, &pmdval);
 
 	add_mm_rss_vec(mm, rss);
 	lazy_mmu_mode_disable();