
Merge tag 'mm-hotfixes-stable-2026-02-26-14-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
 "12 hotfixes.  7 are cc:stable.  8 are for MM.

  All are singletons - please see the changelogs for details"

* tag 'mm-hotfixes-stable-2026-02-26-14-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  MAINTAINERS: update Yosry Ahmed's email address
  mailmap: add entry for Daniele Alessandrelli
  mm: fix NULL NODE_DATA dereference for memoryless nodes on boot
  mm/tracing: rss_stat: ensure curr is false from kthread context
  mm/kfence: fix KASAN hardware tag faults during late enablement
  mm/damon/core: disallow non-power of two min_region_sz
  Squashfs: check metadata block offset is within range
  MAINTAINERS, mailmap: update e-mail address for Vlastimil Babka
  liveupdate: luo_file: remember retrieve() status
  mm: thp: deny THP for files on anonymous inodes
  mm: change vma_alloc_folio_noprof() macro to inline function
  mm/kfence: disable KFENCE upon KASAN HW tags enablement
Linus Torvalds 2026-02-26 15:27:41 -08:00
commit 69062f234a
13 changed files with 102 additions and 42 deletions

@@ -215,6 +215,7 @@ Daniel Lezcano <daniel.lezcano@kernel.org> <daniel.lezcano@free.fr>
Daniel Lezcano <daniel.lezcano@kernel.org> <daniel.lezcano@linexp.org>
Daniel Lezcano <daniel.lezcano@kernel.org> <dlezcano@fr.ibm.com>
Daniel Thompson <danielt@kernel.org> <daniel.thompson@linaro.org>
Daniele Alessandrelli <daniele.alessandrelli@gmail.com> <daniele.alessandrelli@intel.com>
Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com>
David Brownell <david-b@pacbell.net>
David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
@@ -880,6 +881,7 @@ Vivien Didelot <vivien.didelot@gmail.com> <vivien.didelot@savoirfairelinux.com>
Vlad Dogaru <ddvlad@gmail.com> <vlad.dogaru@intel.com>
Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@parallels.com>
Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@virtuozzo.com>
Vlastimil Babka <vbabka@kernel.org> <vbabka@suse.cz>
WangYuli <wangyuli@aosc.io> <wangyl5933@chinaunicom.cn>
WangYuli <wangyuli@aosc.io> <wangyuli@deepin.org>
Weiwen Hu <huweiwen@linux.alibaba.com> <sehuww@mail.scut.edu.cn>
@@ -894,7 +896,8 @@ Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn>
Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com>
Yixun Lan <dlan@kernel.org> <dlan@gentoo.org>
Yixun Lan <dlan@kernel.org> <yixun.lan@amlogic.com>
Yosry Ahmed <yosry.ahmed@linux.dev> <yosryahmed@google.com>
Yosry Ahmed <yosry@kernel.org> <yosryahmed@google.com>
Yosry Ahmed <yosry@kernel.org> <yosry.ahmed@linux.dev>
Yu-Chun Lin <eleanor.lin@realtek.com> <eleanor15x@gmail.com>
Yusuke Goda <goda.yusuke@renesas.com>
Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>

@@ -16654,7 +16654,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Vlastimil Babka <vbabka@kernel.org>
R: Mike Rapoport <rppt@kernel.org>
R: Suren Baghdasaryan <surenb@google.com>
R: Michal Hocko <mhocko@suse.com>
@@ -16784,7 +16784,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Vlastimil Babka <vbabka@kernel.org>
R: Mike Rapoport <rppt@kernel.org>
R: Suren Baghdasaryan <surenb@google.com>
R: Michal Hocko <mhocko@suse.com>
@@ -16839,7 +16839,7 @@ F: mm/oom_kill.c
MEMORY MANAGEMENT - PAGE ALLOCATOR
M: Andrew Morton <akpm@linux-foundation.org>
M: Vlastimil Babka <vbabka@suse.cz>
M: Vlastimil Babka <vbabka@kernel.org>
R: Suren Baghdasaryan <surenb@google.com>
R: Michal Hocko <mhocko@suse.com>
R: Brendan Jackman <jackmanb@google.com>
@@ -16885,7 +16885,7 @@ M: David Hildenbrand <david@kernel.org>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Rik van Riel <riel@surriel.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Vlastimil Babka <vbabka@kernel.org>
R: Harry Yoo <harry.yoo@oracle.com>
R: Jann Horn <jannh@google.com>
L: linux-mm@kvack.org
@@ -16984,7 +16984,7 @@ MEMORY MAPPING
M: Andrew Morton <akpm@linux-foundation.org>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Vlastimil Babka <vbabka@kernel.org>
R: Jann Horn <jannh@google.com>
R: Pedro Falcato <pfalcato@suse.de>
L: linux-mm@kvack.org
@@ -17014,7 +17014,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: Suren Baghdasaryan <surenb@google.com>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Vlastimil Babka <vbabka@kernel.org>
R: Shakeel Butt <shakeel.butt@linux.dev>
L: linux-mm@kvack.org
S: Maintained
@@ -17030,7 +17030,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
M: David Hildenbrand <david@kernel.org>
R: Vlastimil Babka <vbabka@suse.cz>
R: Vlastimil Babka <vbabka@kernel.org>
R: Jann Horn <jannh@google.com>
L: linux-mm@kvack.org
S: Maintained
@@ -23172,7 +23172,7 @@ K: \b(?i:rust)\b
RUST [ALLOC]
M: Danilo Krummrich <dakr@kernel.org>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Vlastimil Babka <vbabka@suse.cz>
R: Vlastimil Babka <vbabka@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Uladzislau Rezki <urezki@gmail.com>
L: rust-for-linux@vger.kernel.org
@@ -24348,7 +24348,7 @@ F: Documentation/devicetree/bindings/nvmem/layouts/kontron,sl28-vpd.yaml
F: drivers/nvmem/layouts/sl28vpd.c
SLAB ALLOCATOR
M: Vlastimil Babka <vbabka@suse.cz>
M: Vlastimil Babka <vbabka@kernel.org>
M: Andrew Morton <akpm@linux-foundation.org>
R: Christoph Lameter <cl@gentwo.org>
R: David Rientjes <rientjes@google.com>
@@ -29184,7 +29184,7 @@ K: zstd
ZSWAP COMPRESSED SWAP CACHING
M: Johannes Weiner <hannes@cmpxchg.org>
M: Yosry Ahmed <yosry.ahmed@linux.dev>
M: Yosry Ahmed <yosry@kernel.org>
M: Nhat Pham <nphamcs@gmail.com>
R: Chengming Zhou <chengming.zhou@linux.dev>
L: linux-mm@kvack.org

@@ -344,6 +344,9 @@ int squashfs_read_metadata(struct super_block *sb, void *buffer,
if (unlikely(length < 0))
return -EIO;
if (unlikely(*offset < 0 || *offset >= SQUASHFS_METADATA_SIZE))
return -EIO;
while (length) {
entry = squashfs_cache_get(sb, msblk->block_cache, *block, 0);
if (entry->error) {

@@ -339,8 +339,11 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
{
return folio_alloc_noprof(gfp, order);
}
#define vma_alloc_folio_noprof(gfp, order, vma, addr) \
folio_alloc_noprof(gfp, order)
static inline struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
struct vm_area_struct *vma, unsigned long addr)
{
return folio_alloc_noprof(gfp, order);
}
#endif
#define alloc_pages(...) alloc_hooks(alloc_pages_noprof(__VA_ARGS__))

@@ -23,8 +23,11 @@ struct file;
/**
* struct liveupdate_file_op_args - Arguments for file operation callbacks.
* @handler: The file handler being called.
* @retrieved: The retrieve status for the 'can_finish / finish'
* operation.
* @retrieve_status: The retrieve status for the 'can_finish / finish'
* operation. A value of 0 means the retrieve has not been
* attempted, a positive value means the retrieve was
* successful, and a negative value means the retrieve failed,
* and the value is the error code of the call.
* @file: The file object. For retrieve: [OUT] The callback sets
* this to the new file. For other ops: [IN] The caller sets
* this to the file being operated on.
@@ -40,7 +43,7 @@ struct file;
*/
struct liveupdate_file_op_args {
struct liveupdate_file_handler *handler;
bool retrieved;
int retrieve_status;
struct file *file;
u64 serialized_data;
void *private_data;

@@ -440,7 +440,13 @@ TRACE_EVENT(rss_stat,
TP_fast_assign(
__entry->mm_id = mm_ptr_to_hash(mm);
__entry->curr = !!(current->mm == mm);
/*
* curr is true if the mm matches the current task's mm_struct.
* Since kthreads (PF_KTHREAD) have no mm_struct of their own
* but can borrow one via kthread_use_mm(), we must filter them
* out to avoid incorrectly attributing the RSS update to them.
*/
__entry->curr = current->mm == mm && !(current->flags & PF_KTHREAD);
__entry->member = member;
__entry->size = (percpu_counter_sum_positive(&mm->rss_stat[member])
<< PAGE_SHIFT);

@@ -134,9 +134,12 @@ static LIST_HEAD(luo_file_handler_list);
* state that is not preserved. Set by the handler's .preserve()
* callback, and must be freed in the handler's .unpreserve()
* callback.
* @retrieved: A flag indicating whether a user/kernel in the new kernel has
* @retrieve_status: Status code indicating whether a user/kernel in the new kernel has
* successfully called retrieve() on this file. This prevents
* multiple retrieval attempts.
* multiple retrieval attempts. A value of 0 means a retrieve()
* has not been attempted, a positive value means the retrieve()
* was successful, and a negative value means the retrieve()
* failed, and the value is the error code of the call.
* @mutex: A mutex that protects the fields of this specific instance
* (e.g., @retrieved, @file), ensuring that operations like
* retrieving or finishing a file are atomic.
@@ -161,7 +164,7 @@ struct luo_file {
struct file *file;
u64 serialized_data;
void *private_data;
bool retrieved;
int retrieve_status;
struct mutex mutex;
struct list_head list;
u64 token;
@@ -298,7 +301,6 @@ int luo_preserve_file(struct luo_file_set *file_set, u64 token, int fd)
luo_file->file = file;
luo_file->fh = fh;
luo_file->token = token;
luo_file->retrieved = false;
mutex_init(&luo_file->mutex);
args.handler = fh;
@@ -577,7 +579,12 @@ int luo_retrieve_file(struct luo_file_set *file_set, u64 token,
return -ENOENT;
guard(mutex)(&luo_file->mutex);
if (luo_file->retrieved) {
if (luo_file->retrieve_status < 0) {
/* Retrieve was attempted and it failed. Return the error code. */
return luo_file->retrieve_status;
}
if (luo_file->retrieve_status > 0) {
/*
* Someone is asking for this file again, so get a reference
* for them.
@@ -590,16 +597,19 @@
args.handler = luo_file->fh;
args.serialized_data = luo_file->serialized_data;
err = luo_file->fh->ops->retrieve(&args);
if (!err) {
luo_file->file = args.file;
/* Get reference so we can keep this file in LUO until finish */
get_file(luo_file->file);
*filep = luo_file->file;
luo_file->retrieved = true;
if (err) {
/* Keep the error code for later use. */
luo_file->retrieve_status = err;
return err;
}
return err;
luo_file->file = args.file;
/* Get reference so we can keep this file in LUO until finish */
get_file(luo_file->file);
*filep = luo_file->file;
luo_file->retrieve_status = 1;
return 0;
}
static int luo_file_can_finish_one(struct luo_file_set *file_set,
@@ -615,7 +625,7 @@ static int luo_file_can_finish_one(struct luo_file_set *file_set,
args.handler = luo_file->fh;
args.file = luo_file->file;
args.serialized_data = luo_file->serialized_data;
args.retrieved = luo_file->retrieved;
args.retrieve_status = luo_file->retrieve_status;
can_finish = luo_file->fh->ops->can_finish(&args);
}
@@ -632,7 +642,7 @@ static void luo_file_finish_one(struct luo_file_set *file_set,
args.handler = luo_file->fh;
args.file = luo_file->file;
args.serialized_data = luo_file->serialized_data;
args.retrieved = luo_file->retrieved;
args.retrieve_status = luo_file->retrieve_status;
luo_file->fh->ops->finish(&args);
luo_flb_file_finish(luo_file->fh);
@@ -788,7 +798,6 @@ int luo_file_deserialize(struct luo_file_set *file_set,
luo_file->file = NULL;
luo_file->serialized_data = file_ser[i].data;
luo_file->token = file_ser[i].token;
luo_file->retrieved = false;
mutex_init(&luo_file->mutex);
list_add_tail(&luo_file->list, &file_set->files_list);
}

@@ -1252,6 +1252,9 @@ int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
{
int err;
if (!is_power_of_2(src->min_region_sz))
return -EINVAL;
err = damon_commit_schemes(dst, src);
if (err)
return err;

@@ -94,6 +94,9 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
inode = file_inode(vma->vm_file);
if (IS_ANON_FILE(inode))
return false;
return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
}

@@ -13,6 +13,7 @@
#include <linux/hash.h>
#include <linux/irq_work.h>
#include <linux/jhash.h>
#include <linux/kasan-enabled.h>
#include <linux/kcsan-checks.h>
#include <linux/kfence.h>
#include <linux/kmemleak.h>
@@ -916,6 +917,20 @@ void __init kfence_alloc_pool_and_metadata(void)
if (!kfence_sample_interval)
return;
/*
* If KASAN hardware tags are enabled, disable KFENCE, because it
* does not support MTE yet.
*/
if (kasan_hw_tags_enabled()) {
pr_info("disabled as KASAN HW tags are enabled\n");
if (__kfence_pool) {
memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
__kfence_pool = NULL;
}
kfence_sample_interval = 0;
return;
}
/*
* If the pool has already been initialized by arch, there is no need to
* re-allocate the memory pool.
@@ -989,14 +1004,14 @@ static int kfence_init_late(void)
#ifdef CONFIG_CONTIG_ALLOC
struct page *pages;
pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node,
NULL);
pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL | __GFP_SKIP_KASAN,
first_online_node, NULL);
if (!pages)
return -ENOMEM;
__kfence_pool = page_to_virt(pages);
pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,
NULL);
pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL | __GFP_SKIP_KASAN,
first_online_node, NULL);
if (pages)
kfence_metadata_init = page_to_virt(pages);
#else
@@ -1006,11 +1021,13 @@ static int kfence_init_late(void)
return -EINVAL;
}
__kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);
__kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE,
GFP_KERNEL | __GFP_SKIP_KASAN);
if (!__kfence_pool)
return -ENOMEM;
kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);
kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE,
GFP_KERNEL | __GFP_SKIP_KASAN);
#endif
if (!kfence_metadata_init)

@@ -326,7 +326,12 @@ static void memfd_luo_finish(struct liveupdate_file_op_args *args)
struct memfd_luo_folio_ser *folios_ser;
struct memfd_luo_ser *ser;
if (args->retrieved)
/*
* If retrieve was successful, nothing to do. If it failed, retrieve()
* already cleaned up everything it could. So nothing to do there
* either. Only need to clean up when retrieve was not called.
*/
if (args->retrieve_status)
return;
ser = phys_to_virt(args->serialized_data);

@@ -1896,7 +1896,11 @@ static void __init free_area_init(void)
for_each_node(nid) {
pg_data_t *pgdat;
if (!node_online(nid))
/*
* If an architecture has not allocated node data for
* this node, presume the node is memoryless or offline.
*/
if (!NODE_DATA(nid))
alloc_offline_node_data(nid);
pgdat = NODE_DATA(nid);

@@ -6928,7 +6928,8 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
{
const gfp_t reclaim_mask = __GFP_IO | __GFP_FS | __GFP_RECLAIM;
const gfp_t action_mask = __GFP_COMP | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
__GFP_ZERO | __GFP_ZEROTAGS | __GFP_SKIP_ZERO;
__GFP_ZERO | __GFP_ZEROTAGS | __GFP_SKIP_ZERO |
__GFP_SKIP_KASAN;
const gfp_t cc_action_mask = __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
/*