Merge v6.16-rc2 into timers/ptp

to pick up the __GENMASK() fix; otherwise the AUX clock VDSO patches fail
to compile for compat.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Thomas Gleixner 2025-07-09 11:51:34 +02:00
commit 068f7b64bf
323 changed files with 2382 additions and 1386 deletions


@ -426,6 +426,9 @@ Krzysztof Wilczyński <kwilczynski@kernel.org> <krzysztof.wilczynski@linux.com>
Krzysztof Wilczyński <kwilczynski@kernel.org> <kw@linux.com>
Kshitiz Godara <quic_kgodara@quicinc.com> <kgodara@codeaurora.org>
Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.com>
Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.co.jp>
Kuniyuki Iwashima <kuniyu@google.com> <kuni1840@gmail.com>
Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org>
Lee Jones <lee@kernel.org> <joneslee@google.com>
Lee Jones <lee@kernel.org> <lee.jones@canonical.com>
@ -719,6 +722,7 @@ Srinivas Ramana <quic_sramana@quicinc.com> <sramana@codeaurora.org>
Sriram R <quic_srirrama@quicinc.com> <srirrama@codeaurora.org>
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech>
Stanislav Fomichev <sdf@fomichev.me> <sdf@google.com>
Stanislav Fomichev <sdf@fomichev.me> <stfomichev@gmail.com>
Stefan Wahren <wahrenst@gmx.net> <stefan.wahren@i2se.com>
Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr>
Stephen Hemminger <stephen@networkplumber.org> <shemminger@linux-foundation.org>


@ -270,6 +270,8 @@ configured for Unix Extensions (and the client has not disabled
illegal Windows/NTFS/SMB characters to a remap range (this mount parameter
is the default for SMB3). This remap (``mapposix``) range is also
compatible with Mac (and "Services for Mac" on some older Windows).
When POSIX Extensions for SMB 3.1.1 are negotiated, remapping is automatically
disabled.
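
For instance, a minimal usage sketch that selects this remapping explicitly at
mount time (server, share, mount point, and credentials are placeholders)::

    mount -t cifs //server/share /mnt -o username=user,mapposix
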
CIFS VFS Mount Options
======================


@ -352,6 +352,83 @@ For reaching best IO performance, ublk server should align its segment
parameter of `struct ublk_param_segment` with backend for avoiding
unnecessary IO split, which usually hurts io_uring performance.
Auto Buffer Registration
------------------------
The ``UBLK_F_AUTO_BUF_REG`` feature automatically handles buffer registration
and unregistration for I/O requests, which simplifies the buffer management
process and reduces overhead in the ublk server implementation.
This is another feature flag for using zero copy, and it is compatible with
``UBLK_F_SUPPORT_ZERO_COPY``.
Feature Overview
~~~~~~~~~~~~~~~~
This feature automatically registers request buffers to the io_uring context
before delivering I/O commands to the ublk server and unregisters them when
completing I/O commands. This eliminates the need for manual buffer
registration/unregistration via ``UBLK_IO_REGISTER_IO_BUF`` and
``UBLK_IO_UNREGISTER_IO_BUF`` commands, so I/O handling in the ublk server
no longer depends on these two uring_cmd operations.
I/Os can't be issued concurrently to io_uring if there is any dependency
among them. Removing the dependency on the buffer registration and
unregistration commands therefore not only simplifies the ublk server
implementation, but also makes concurrent I/O handling possible.
Usage Requirements
~~~~~~~~~~~~~~~~~~
1. The ublk server must create a sparse buffer table on the same ``io_ring_ctx``
used for ``UBLK_IO_FETCH_REQ`` and ``UBLK_IO_COMMIT_AND_FETCH_REQ``. If
uring_cmd is issued on a different ``io_ring_ctx``, manual buffer
unregistration is required.
2. Buffer registration data must be passed via uring_cmd's ``sqe->addr`` with the
   following structure::

    struct ublk_auto_buf_reg {
            __u16 index;      /* Buffer index for registration */
            __u8 flags;       /* Registration flags */
            __u8 reserved0;   /* Reserved for future use */
            __u32 reserved1;  /* Reserved for future use */
    };
   ublk_auto_buf_reg_to_sqe_addr() converts the above structure into
   ``sqe->addr`` (see the sketch after this list).
3. All reserved fields in ``ublk_auto_buf_reg`` must be zeroed.
4. Optional flags can be passed via ``ublk_auto_buf_reg.flags``.
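
As an illustration only (not code from the kernel tree), a ublk server might
fill the structure and prepare the SQE along these lines; the one-slot-per-tag
index scheme and the surrounding ``sqe``/``tag`` variables are assumptions of
the sketch::

    struct ublk_auto_buf_reg reg = {
            /* hypothetical scheme: one registered-buffer slot per tag */
            .index = tag,
            /* optional, see Fallback Behavior below */
            .flags = UBLK_AUTO_BUF_REG_FALLBACK,
            /* reserved0 and reserved1 must stay zero */
    };

    sqe->addr = ublk_auto_buf_reg_to_sqe_addr(&reg);
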
Fallback Behavior
~~~~~~~~~~~~~~~~~
If auto buffer registration fails:
1. When ``UBLK_AUTO_BUF_REG_FALLBACK`` is enabled:
- The uring_cmd is completed
- ``UBLK_IO_F_NEED_REG_BUF`` is set in ``ublksrv_io_desc.op_flags``
- The ublk server must handle the failure itself, for example by registering
  the buffer manually, or by using the user copy feature to retrieve the data
  for the ublk I/O (see the sketch below)
2. If fallback is not enabled:
- The ublk I/O request fails silently
- The uring_cmd won't be completed
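
A minimal sketch of the fallback check, assuming ``iod`` points at the
request's ``ublksrv_io_desc``; the helper name is hypothetical and stands in
for whatever manual-registration path the server implements::

    if (iod->op_flags & UBLK_IO_F_NEED_REG_BUF) {
            /*
             * Auto-registration failed: take the manual path, e.g.
             * issue UBLK_IO_REGISTER_IO_BUF before touching the data.
             */
            ret = ublk_register_io_buf(q, tag);
    }
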
Limitations
~~~~~~~~~~~
- Requires same ``io_ring_ctx`` for all operations
- May require manual buffer management in fallback cases
- The io_ring_ctx buffer table has a maximum size of 16K entries, which may
  not be enough when many ublk devices are handled by a single io_ring_ctx
  and each device has a very large queue depth
References
==========


@ -15,7 +15,7 @@ description: |
Some peripherals such as PWM have their I/O go through the 4 "GPIOs".
maintainers:
- Jianlong Huang <jianlong.huang@starfivetech.com>
- Hal Feng <hal.feng@starfivetech.com>
properties:
compatible:


@ -18,7 +18,7 @@ description: |
any GPIO can be set up to be controlled by any of the peripherals.
maintainers:
- Jianlong Huang <jianlong.huang@starfivetech.com>
- Hal Feng <hal.feng@starfivetech.com>
properties:
compatible:


@ -584,7 +584,6 @@ encoded manner. The codes are the following:
ms may share
gd stack segment grows down
pf pure PFN range
dw disabled write to the mapped file
lo pages are locked in memory
io memory mapped I/O area
sr sequential read advise provided
@ -607,8 +606,11 @@ encoded manner. The codes are the following:
mt arm64 MTE allocation tags are enabled
um userfaultfd missing tracking
uw userfaultfd wr-protect tracking
ui userfaultfd minor fault
ss shadow/guarded control stack page
sl sealed
lf lock on fault pages
dp always lazily freeable mapping
== =======================================
Note that there is no guarantee that every flag and associated mnemonic will


@ -4555,6 +4555,7 @@ BPF [NETWORKING] (tcx & tc BPF, sock_addr)
M: Martin KaFai Lau <martin.lau@linux.dev>
M: Daniel Borkmann <daniel@iogearbox.net>
R: John Fastabend <john.fastabend@gmail.com>
R: Stanislav Fomichev <sdf@fomichev.me>
L: bpf@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
@ -6254,6 +6255,7 @@ F: include/linux/cpuhotplug.h
F: include/linux/smpboot.h
F: kernel/cpu.c
F: kernel/smpboot.*
F: rust/helper/cpu.c
F: rust/kernel/cpu.rs
CPU IDLE TIME MANAGEMENT FRAMEWORK
@ -15919,6 +15921,7 @@ R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Nico Pache <npache@redhat.com>
R: Ryan Roberts <ryan.roberts@arm.com>
R: Dev Jain <dev.jain@arm.com>
R: Barry Song <baohua@kernel.org>
L: linux-mm@kvack.org
S: Maintained
W: http://www.linux-mm.org
@ -17493,7 +17496,7 @@ F: tools/testing/selftests/net/srv6*
NETWORKING [TCP]
M: Eric Dumazet <edumazet@google.com>
M: Neal Cardwell <ncardwell@google.com>
R: Kuniyuki Iwashima <kuniyu@amazon.com>
R: Kuniyuki Iwashima <kuniyu@google.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/net_cachelines/tcp_sock.rst
@ -17523,7 +17526,7 @@ F: net/tls/*
NETWORKING [SOCKETS]
M: Eric Dumazet <edumazet@google.com>
M: Kuniyuki Iwashima <kuniyu@amazon.com>
M: Kuniyuki Iwashima <kuniyu@google.com>
M: Paolo Abeni <pabeni@redhat.com>
M: Willem de Bruijn <willemb@google.com>
S: Maintained
@ -17538,7 +17541,7 @@ F: net/core/scm.c
F: net/socket.c
NETWORKING [UNIX SOCKETS]
M: Kuniyuki Iwashima <kuniyu@amazon.com>
M: Kuniyuki Iwashima <kuniyu@google.com>
S: Maintained
F: include/net/af_unix.h
F: include/net/netns/unix.h
@ -23661,7 +23664,6 @@ F: include/dt-bindings/clock/starfive?jh71*.h
STARFIVE JH71X0 PINCTRL DRIVERS
M: Emil Renner Berthing <kernel@esmil.dk>
M: Jianlong Huang <jianlong.huang@starfivetech.com>
M: Hal Feng <hal.feng@starfivetech.com>
L: linux-gpio@vger.kernel.org
S: Maintained
@ -26967,6 +26969,7 @@ M: David S. Miller <davem@davemloft.net>
M: Jakub Kicinski <kuba@kernel.org>
M: Jesper Dangaard Brouer <hawk@kernel.org>
M: John Fastabend <john.fastabend@gmail.com>
R: Stanislav Fomichev <sdf@fomichev.me>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
@ -26988,6 +26991,7 @@ M: Björn Töpel <bjorn@kernel.org>
M: Magnus Karlsson <magnus.karlsson@intel.com>
M: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
R: Jonathan Lemon <jonathan.lemon@gmail.com>
R: Stanislav Fomichev <sdf@fomichev.me>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 16
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Baby Opossum Posse
# *DOCUMENTATION*
@ -1832,12 +1832,9 @@ rustfmtcheck: rustfmt
# Misc
# ---------------------------------------------------------------------------
# Run misc checks when ${KBUILD_EXTRA_WARN} contains 1
PHONY += misc-check
ifneq ($(findstring 1,$(KBUILD_EXTRA_WARN)),)
misc-check:
$(Q)$(srctree)/scripts/misc-check
endif
all: misc-check


@ -327,7 +327,7 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -144,7 +144,7 @@
#define ARC_AUX_AGU_MOD2 0x5E2
#define ARC_AUX_AGU_MOD3 0x5E3
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <soc/arc/arc_aux.h>


@ -6,7 +6,7 @@
#ifndef _ASM_ARC_ATOMIC_H
#define _ASM_ARC_ATOMIC_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
#include <linux/compiler.h>
@ -31,6 +31,6 @@
#include <asm/atomic64-arcv2.h>
#endif
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@ -137,12 +137,9 @@ ATOMIC64_OPS(xor, xor, xor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP
static inline s64
arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
static inline u64 __arch_cmpxchg64_relaxed(volatile void *ptr, u64 old, u64 new)
{
s64 prev;
smp_mb();
u64 prev;
__asm__ __volatile__(
"1: llockd %0, [%1] \n"
@ -152,14 +149,12 @@ arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
" bnz 1b \n"
"2: \n"
: "=&r"(prev)
: "r"(ptr), "ir"(expected), "r"(new)
: "cc"); /* memory clobber comes from smp_mb() */
smp_mb();
: "r"(ptr), "ir"(old), "r"(new)
: "memory", "cc");
return prev;
}
#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
#define arch_cmpxchg64_relaxed __arch_cmpxchg64_relaxed
static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
{


@ -10,7 +10,7 @@
#error only <linux/bitops.h> can be included directly
#endif
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/types.h>
#include <linux/compiler.h>
@ -192,6 +192,6 @@ static inline __attribute__ ((const)) unsigned long __ffs(unsigned long x)
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic-setbit.h>
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@ -6,7 +6,7 @@
#ifndef _ASM_ARC_BUG_H
#define _ASM_ARC_BUG_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/ptrace.h>
@ -29,6 +29,6 @@ void die(const char *str, struct pt_regs *regs, unsigned long address);
#include <asm-generic/bug.h>
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@ -23,7 +23,7 @@
*/
#define ARC_UNCACHED_ADDR_SPACE 0xc0000000
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/build_bug.h>
@ -65,7 +65,7 @@
extern int ioc_enable;
extern unsigned long perip_base, perip_end;
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
/* Instruction cache related Auxiliary registers */
#define ARC_REG_IC_BCR 0x77 /* Build Config reg */


@ -9,7 +9,7 @@
#ifndef _ASM_ARC_CURRENT_H
#define _ASM_ARC_CURRENT_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#ifdef CONFIG_ARC_CURR_IN_REG
@ -20,6 +20,6 @@ register struct task_struct *curr_arc asm("gp");
#include <asm-generic/current.h>
#endif /* ! CONFIG_ARC_CURR_IN_REG */
#endif /* ! __ASSEMBLY__ */
#endif /* ! __ASSEMBLER__ */
#endif /* _ASM_ARC_CURRENT_H */


@ -11,7 +11,7 @@
#define DSP_CTRL_DISABLED_ALL 0
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
/* clobbers r5 register */
.macro DSP_EARLY_INIT


@ -7,7 +7,7 @@
#ifndef __ASM_ARC_DSP_H
#define __ASM_ARC_DSP_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/*
* DSP-related saved registers - need to be saved only when you are
@ -24,6 +24,6 @@ struct dsp_callee_regs {
#endif
};
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __ASM_ARC_DSP_H */


@ -6,7 +6,7 @@
#ifndef _ASM_ARC_DWARF_H
#define _ASM_ARC_DWARF_H
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#ifdef ARC_DW2_UNWIND_AS_CFI
@ -38,6 +38,6 @@
#endif /* !ARC_DW2_UNWIND_AS_CFI */
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif /* _ASM_ARC_DWARF_H */


@ -13,7 +13,7 @@
#include <asm/processor.h> /* For VMALLOC_START */
#include <asm/mmu.h>
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
#ifdef CONFIG_ISA_ARCOMPACT
#include <asm/entry-compact.h> /* ISA specific bits */
@ -146,7 +146,7 @@
#endif /* CONFIG_ARC_CURR_IN_REG */
#else /* !__ASSEMBLY__ */
#else /* !__ASSEMBLER__ */
extern void do_signal(struct pt_regs *);
extern void do_notify_resume(struct pt_regs *);


@ -50,7 +50,7 @@
#define ISA_INIT_STATUS_BITS (STATUS_IE_MASK | __AD_ENB | \
(ARCV2_IRQ_DEF_PRIO << 1))
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/*
* Save IRQ state and disable IRQs
@ -170,6 +170,6 @@ static inline void arc_softirq_clear(int irq)
seti
.endm
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@ -40,7 +40,7 @@
#define ISA_INIT_STATUS_BITS STATUS_IE_MASK
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/******************************************************************
* IRQ Control Macros
@ -196,6 +196,6 @@ static inline int arch_irqs_disabled(void)
flag \scratch
.endm
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@ -2,7 +2,7 @@
#ifndef _ASM_ARC_JUMP_LABEL_H
#define _ASM_ARC_JUMP_LABEL_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/stringify.h>
#include <linux/types.h>
@ -68,5 +68,5 @@ struct jump_entry {
jump_label_t key;
};
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@ -12,7 +12,7 @@
#define __ALIGN .align 4
#define __ALIGN_STR __stringify(__ALIGN)
#ifdef __ASSEMBLY__
#ifdef __ASSEMBLER__
.macro ST2 e, o, off
#ifdef CONFIG_ARC_HAS_LL64
@ -61,7 +61,7 @@
CFI_ENDPROC ASM_NL \
.size name, .-name
#else /* !__ASSEMBLY__ */
#else /* !__ASSEMBLER__ */
#ifdef CONFIG_ARC_HAS_ICCM
#define __arcfp_code __section(".text.arcfp")
@ -75,6 +75,6 @@
#define __arcfp_data __section(".data")
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@ -69,7 +69,7 @@
#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
struct mm_struct;
extern int pae40_exist_but_not_enab(void);
@ -100,6 +100,6 @@ static inline void mmu_setup_pgd(struct mm_struct *mm, void *pgd)
sr \reg, [ARC_REG_PID]
.endm
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@ -6,7 +6,7 @@
#ifndef _ASM_ARC_MMU_H
#define _ASM_ARC_MMU_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/threads.h> /* NR_CPUS */


@ -19,7 +19,7 @@
#endif /* CONFIG_ARC_HAS_PAE40 */
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#define clear_page(paddr) memset((paddr), 0, PAGE_SIZE)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
@ -136,6 +136,6 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
#include <asm-generic/memory_model.h> /* page_to_pfn, pfn_to_page */
#include <asm-generic/getorder.h>
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@ -75,7 +75,7 @@
* This is to enable COW mechanism
*/
/* xwr */
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#define pte_write(pte) (pte_val(pte) & _PAGE_WRITE)
#define pte_dirty(pte) (pte_val(pte) & _PAGE_DIRTY)
@ -130,7 +130,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}
@ -142,6 +142,6 @@ PTE_BIT_FUNC(swp_clear_exclusive, &= ~(_PAGE_SWP_EXCLUSIVE));
#include <asm/hugepage.h>
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@ -85,7 +85,7 @@
#define PTRS_PER_PTE BIT(PMD_SHIFT - PAGE_SHIFT)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#if CONFIG_PGTABLE_LEVELS > 3
#include <asm-generic/pgtable-nop4d.h>
@ -181,6 +181,6 @@
#define pmd_leaf(x) (pmd_val(x) & _PAGE_HW_SZ)
#endif
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif


@ -19,7 +19,7 @@
*/
#define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
extern char empty_zero_page[PAGE_SIZE];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
@ -29,6 +29,6 @@ extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
/* to cope with aliasing VIPT cache */
#define HAVE_ARCH_UNMAPPED_AREA
#endif /* __ASSEMBLY__ */
#endif /* __ASSEMBLER__ */
#endif


@ -11,7 +11,7 @@
#ifndef __ASM_ARC_PROCESSOR_H
#define __ASM_ARC_PROCESSOR_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <asm/ptrace.h>
#include <asm/dsp.h>
@ -66,7 +66,7 @@ extern void start_thread(struct pt_regs * regs, unsigned long pc,
extern unsigned int __get_wchan(struct task_struct *p);
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
/*
* Default System Memory Map on ARC


@ -10,7 +10,7 @@
#include <uapi/asm/ptrace.h>
#include <linux/compiler.h>
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
typedef union {
struct {
@ -172,6 +172,6 @@ static inline unsigned long regs_get_register(struct pt_regs *regs,
extern int syscall_trace_enter(struct pt_regs *);
extern void syscall_trace_exit(struct pt_regs *);
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* __ASM_PTRACE_H */


@ -6,7 +6,7 @@
#ifndef _ASM_ARC_SWITCH_TO_H
#define _ASM_ARC_SWITCH_TO_H
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/sched.h>
#include <asm/dsp-impl.h>


@ -24,7 +24,7 @@
#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
#define THREAD_SHIFT (PAGE_SHIFT << THREAD_SIZE_ORDER)
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
#include <linux/thread_info.h>
@ -62,7 +62,7 @@ static inline __attribute_const__ struct thread_info *current_thread_info(void)
return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
/*
* thread information flags


@ -14,7 +14,7 @@
#define PTRACE_GET_THREAD_AREA 25
#ifndef __ASSEMBLY__
#ifndef __ASSEMBLER__
/*
* Userspace ABI: Register state needed by
* -ptrace (gdbserver)
@ -53,6 +53,6 @@ struct user_regs_arcv2 {
unsigned long r30, r58, r59;
};
#endif /* !__ASSEMBLY__ */
#endif /* !__ASSEMBLER__ */
#endif /* _UAPI__ASM_ARC_PTRACE_H */


@ -241,15 +241,6 @@ static int cmp_eh_frame_hdr_table_entries(const void *p1, const void *p2)
return (e1->start > e2->start) - (e1->start < e2->start);
}
static void swap_eh_frame_hdr_table_entries(void *p1, void *p2, int size)
{
struct eh_frame_hdr_table_entry *e1 = p1;
struct eh_frame_hdr_table_entry *e2 = p2;
swap(e1->start, e2->start);
swap(e1->fde, e2->fde);
}
static void init_unwind_hdr(struct unwind_table *table,
void *(*alloc) (unsigned long))
{
@ -345,7 +336,7 @@ static void init_unwind_hdr(struct unwind_table *table,
sort(header->table,
n,
sizeof(*header->table),
cmp_eh_frame_hdr_table_entries, swap_eh_frame_hdr_table_entries);
cmp_eh_frame_hdr_table_entries, NULL);
table->hdrsz = hdrSize;
smp_wmb();


@ -301,7 +301,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(swp) __pte((swp).val)
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_isset(pte, L_PTE_SWP_EXCLUSIVE);
}


@ -1107,14 +1107,36 @@ static inline u64 *___ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
#define ctxt_sys_reg(c,r) (*__ctxt_sys_reg(c,r))
u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *, enum vcpu_sysreg, u64);
#define __vcpu_sys_reg(v,r) \
(*({ \
#define __vcpu_assign_sys_reg(v, r, val) \
do { \
const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \
u64 *__r = __ctxt_sys_reg(ctxt, (r)); \
u64 __v = (val); \
if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \
*__r = kvm_vcpu_apply_reg_masks((v), (r), *__r);\
__r; \
}))
__v = kvm_vcpu_apply_reg_masks((v), (r), __v); \
\
ctxt_sys_reg(ctxt, (r)) = __v; \
} while (0)
#define __vcpu_rmw_sys_reg(v, r, op, val) \
do { \
const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \
u64 __v = ctxt_sys_reg(ctxt, (r)); \
__v op (val); \
if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \
__v = kvm_vcpu_apply_reg_masks((v), (r), __v); \
\
ctxt_sys_reg(ctxt, (r)) = __v; \
} while (0)
#define __vcpu_sys_reg(v,r) \
({ \
const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \
u64 __v = ctxt_sys_reg(ctxt, (r)); \
if (vcpu_has_nv((v)) && (r) >= __SANITISED_REG_START__) \
__v = kvm_vcpu_apply_reg_masks((v), (r), __v); \
__v; \
})
u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);


@ -563,7 +563,7 @@ static inline pte_t pte_swp_mkexclusive(pte_t pte)
return set_pte_bit(pte, __pgprot(PTE_SWP_EXCLUSIVE));
}
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & PTE_SWP_EXCLUSIVE;
}


@ -108,16 +108,16 @@ static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl)
switch(arch_timer_ctx_index(ctxt)) {
case TIMER_VTIMER:
__vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = ctl;
__vcpu_assign_sys_reg(vcpu, CNTV_CTL_EL0, ctl);
break;
case TIMER_PTIMER:
__vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl;
__vcpu_assign_sys_reg(vcpu, CNTP_CTL_EL0, ctl);
break;
case TIMER_HVTIMER:
__vcpu_sys_reg(vcpu, CNTHV_CTL_EL2) = ctl;
__vcpu_assign_sys_reg(vcpu, CNTHV_CTL_EL2, ctl);
break;
case TIMER_HPTIMER:
__vcpu_sys_reg(vcpu, CNTHP_CTL_EL2) = ctl;
__vcpu_assign_sys_reg(vcpu, CNTHP_CTL_EL2, ctl);
break;
default:
WARN_ON(1);
@ -130,16 +130,16 @@ static void timer_set_cval(struct arch_timer_context *ctxt, u64 cval)
switch(arch_timer_ctx_index(ctxt)) {
case TIMER_VTIMER:
__vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = cval;
__vcpu_assign_sys_reg(vcpu, CNTV_CVAL_EL0, cval);
break;
case TIMER_PTIMER:
__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval;
__vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, cval);
break;
case TIMER_HVTIMER:
__vcpu_sys_reg(vcpu, CNTHV_CVAL_EL2) = cval;
__vcpu_assign_sys_reg(vcpu, CNTHV_CVAL_EL2, cval);
break;
case TIMER_HPTIMER:
__vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = cval;
__vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, cval);
break;
default:
WARN_ON(1);
@ -1036,7 +1036,7 @@ void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu)
if (vcpu_has_nv(vcpu)) {
struct arch_timer_offset *offs = &vcpu_vtimer(vcpu)->offset;
offs->vcpu_offset = &__vcpu_sys_reg(vcpu, CNTVOFF_EL2);
offs->vcpu_offset = __ctxt_sys_reg(&vcpu->arch.ctxt, CNTVOFF_EL2);
offs->vm_offset = &vcpu->kvm->arch.timer_data.poffset;
}


@ -216,9 +216,9 @@ void kvm_debug_set_guest_ownership(struct kvm_vcpu *vcpu)
void kvm_debug_handle_oslar(struct kvm_vcpu *vcpu, u64 val)
{
if (val & OSLAR_EL1_OSLK)
__vcpu_sys_reg(vcpu, OSLSR_EL1) |= OSLSR_EL1_OSLK;
__vcpu_rmw_sys_reg(vcpu, OSLSR_EL1, |=, OSLSR_EL1_OSLK);
else
__vcpu_sys_reg(vcpu, OSLSR_EL1) &= ~OSLSR_EL1_OSLK;
__vcpu_rmw_sys_reg(vcpu, OSLSR_EL1, &=, ~OSLSR_EL1_OSLK);
preempt_disable();
kvm_arch_vcpu_put(vcpu);


@ -103,8 +103,8 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
fp_state.sve_state = vcpu->arch.sve_state;
fp_state.sve_vl = vcpu->arch.sve_max_vl;
fp_state.sme_state = NULL;
fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR);
fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR);
fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR);
fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR);
fp_state.fp_type = &vcpu->arch.fp_type;
if (vcpu_has_sve(vcpu))


@ -37,7 +37,7 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
if (unlikely(vcpu_has_nv(vcpu)))
vcpu_write_sys_reg(vcpu, val, reg);
else if (!__vcpu_write_sys_reg_to_cpu(val, reg))
__vcpu_sys_reg(vcpu, reg) = val;
__vcpu_assign_sys_reg(vcpu, reg, val);
}
static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long target_mode,
@ -51,7 +51,7 @@ static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long target_mode,
} else if (has_vhe()) {
write_sysreg_el1(val, SYS_SPSR);
} else {
__vcpu_sys_reg(vcpu, SPSR_EL1) = val;
__vcpu_assign_sys_reg(vcpu, SPSR_EL1, val);
}
}


@ -45,7 +45,7 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
if (!vcpu_el1_is_32bit(vcpu))
return;
__vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
__vcpu_assign_sys_reg(vcpu, FPEXC32_EL2, read_sysreg(fpexc32_el2));
}
static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
@ -456,7 +456,7 @@ static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)
*/
if (vcpu_has_sve(vcpu)) {
zcr_el1 = read_sysreg_el1(SYS_ZCR);
__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr_el1;
__vcpu_assign_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu), zcr_el1);
/*
* The guest's state is always saved using the guest's max VL.


@ -307,11 +307,11 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq);
vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq);
__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
__vcpu_assign_sys_reg(vcpu, DACR32_EL2, read_sysreg(dacr32_el2));
__vcpu_assign_sys_reg(vcpu, IFSR32_EL2, read_sysreg(ifsr32_el2));
if (has_vhe() || kvm_debug_regs_in_use(vcpu))
__vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
__vcpu_assign_sys_reg(vcpu, DBGVCR32_EL2, read_sysreg(dbgvcr32_el2));
}
static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)


@ -26,7 +26,7 @@ void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
{
__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
__vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR));
/*
* On saving/restoring guest sve state, always use the maximum VL for
* the guest. The layout of the data when saving the sve state depends
@ -79,7 +79,7 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
has_fpmr = kvm_has_fpmr(kern_hyp_va(vcpu->kvm));
if (has_fpmr)
__vcpu_sys_reg(vcpu, FPMR) = read_sysreg_s(SYS_FPMR);
__vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR));
if (system_supports_sve())
__hyp_sve_restore_host();


@ -223,9 +223,9 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
*/
val = read_sysreg_el0(SYS_CNTP_CVAL);
if (map.direct_ptimer == vcpu_ptimer(vcpu))
__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = val;
__vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, val);
if (map.direct_ptimer == vcpu_hptimer(vcpu))
__vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = val;
__vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, val);
offset = read_sysreg_s(SYS_CNTPOFF_EL2);


@ -18,17 +18,17 @@
static void __sysreg_save_vel2_state(struct kvm_vcpu *vcpu)
{
/* These registers are common with EL1 */
__vcpu_sys_reg(vcpu, PAR_EL1) = read_sysreg(par_el1);
__vcpu_sys_reg(vcpu, TPIDR_EL1) = read_sysreg(tpidr_el1);
__vcpu_assign_sys_reg(vcpu, PAR_EL1, read_sysreg(par_el1));
__vcpu_assign_sys_reg(vcpu, TPIDR_EL1, read_sysreg(tpidr_el1));
__vcpu_sys_reg(vcpu, ESR_EL2) = read_sysreg_el1(SYS_ESR);
__vcpu_sys_reg(vcpu, AFSR0_EL2) = read_sysreg_el1(SYS_AFSR0);
__vcpu_sys_reg(vcpu, AFSR1_EL2) = read_sysreg_el1(SYS_AFSR1);
__vcpu_sys_reg(vcpu, FAR_EL2) = read_sysreg_el1(SYS_FAR);
__vcpu_sys_reg(vcpu, MAIR_EL2) = read_sysreg_el1(SYS_MAIR);
__vcpu_sys_reg(vcpu, VBAR_EL2) = read_sysreg_el1(SYS_VBAR);
__vcpu_sys_reg(vcpu, CONTEXTIDR_EL2) = read_sysreg_el1(SYS_CONTEXTIDR);
__vcpu_sys_reg(vcpu, AMAIR_EL2) = read_sysreg_el1(SYS_AMAIR);
__vcpu_assign_sys_reg(vcpu, ESR_EL2, read_sysreg_el1(SYS_ESR));
__vcpu_assign_sys_reg(vcpu, AFSR0_EL2, read_sysreg_el1(SYS_AFSR0));
__vcpu_assign_sys_reg(vcpu, AFSR1_EL2, read_sysreg_el1(SYS_AFSR1));
__vcpu_assign_sys_reg(vcpu, FAR_EL2, read_sysreg_el1(SYS_FAR));
__vcpu_assign_sys_reg(vcpu, MAIR_EL2, read_sysreg_el1(SYS_MAIR));
__vcpu_assign_sys_reg(vcpu, VBAR_EL2, read_sysreg_el1(SYS_VBAR));
__vcpu_assign_sys_reg(vcpu, CONTEXTIDR_EL2, read_sysreg_el1(SYS_CONTEXTIDR));
__vcpu_assign_sys_reg(vcpu, AMAIR_EL2, read_sysreg_el1(SYS_AMAIR));
/*
* In VHE mode those registers are compatible between EL1 and EL2,
@ -46,21 +46,21 @@ static void __sysreg_save_vel2_state(struct kvm_vcpu *vcpu)
* are always trapped, ensuring that the in-memory
* copy is always up-to-date. A small blessing...
*/
__vcpu_sys_reg(vcpu, SCTLR_EL2) = read_sysreg_el1(SYS_SCTLR);
__vcpu_sys_reg(vcpu, TTBR0_EL2) = read_sysreg_el1(SYS_TTBR0);
__vcpu_sys_reg(vcpu, TTBR1_EL2) = read_sysreg_el1(SYS_TTBR1);
__vcpu_sys_reg(vcpu, TCR_EL2) = read_sysreg_el1(SYS_TCR);
__vcpu_assign_sys_reg(vcpu, SCTLR_EL2, read_sysreg_el1(SYS_SCTLR));
__vcpu_assign_sys_reg(vcpu, TTBR0_EL2, read_sysreg_el1(SYS_TTBR0));
__vcpu_assign_sys_reg(vcpu, TTBR1_EL2, read_sysreg_el1(SYS_TTBR1));
__vcpu_assign_sys_reg(vcpu, TCR_EL2, read_sysreg_el1(SYS_TCR));
if (ctxt_has_tcrx(&vcpu->arch.ctxt)) {
__vcpu_sys_reg(vcpu, TCR2_EL2) = read_sysreg_el1(SYS_TCR2);
__vcpu_assign_sys_reg(vcpu, TCR2_EL2, read_sysreg_el1(SYS_TCR2));
if (ctxt_has_s1pie(&vcpu->arch.ctxt)) {
__vcpu_sys_reg(vcpu, PIRE0_EL2) = read_sysreg_el1(SYS_PIRE0);
__vcpu_sys_reg(vcpu, PIR_EL2) = read_sysreg_el1(SYS_PIR);
__vcpu_assign_sys_reg(vcpu, PIRE0_EL2, read_sysreg_el1(SYS_PIRE0));
__vcpu_assign_sys_reg(vcpu, PIR_EL2, read_sysreg_el1(SYS_PIR));
}
if (ctxt_has_s1poe(&vcpu->arch.ctxt))
__vcpu_sys_reg(vcpu, POR_EL2) = read_sysreg_el1(SYS_POR);
__vcpu_assign_sys_reg(vcpu, POR_EL2, read_sysreg_el1(SYS_POR));
}
/*
@ -70,13 +70,13 @@ static void __sysreg_save_vel2_state(struct kvm_vcpu *vcpu)
*/
val = read_sysreg_el1(SYS_CNTKCTL);
val &= CNTKCTL_VALID_BITS;
__vcpu_sys_reg(vcpu, CNTHCTL_EL2) &= ~CNTKCTL_VALID_BITS;
__vcpu_sys_reg(vcpu, CNTHCTL_EL2) |= val;
__vcpu_rmw_sys_reg(vcpu, CNTHCTL_EL2, &=, ~CNTKCTL_VALID_BITS);
__vcpu_rmw_sys_reg(vcpu, CNTHCTL_EL2, |=, val);
}
__vcpu_sys_reg(vcpu, SP_EL2) = read_sysreg(sp_el1);
__vcpu_sys_reg(vcpu, ELR_EL2) = read_sysreg_el1(SYS_ELR);
__vcpu_sys_reg(vcpu, SPSR_EL2) = read_sysreg_el1(SYS_SPSR);
__vcpu_assign_sys_reg(vcpu, SP_EL2, read_sysreg(sp_el1));
__vcpu_assign_sys_reg(vcpu, ELR_EL2, read_sysreg_el1(SYS_ELR));
__vcpu_assign_sys_reg(vcpu, SPSR_EL2, read_sysreg_el1(SYS_SPSR));
}
static void __sysreg_restore_vel2_state(struct kvm_vcpu *vcpu)


@ -1757,7 +1757,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
out:
for (enum vcpu_sysreg sr = __SANITISED_REG_START__; sr < NR_SYS_REGS; sr++)
(void)__vcpu_sys_reg(vcpu, sr);
__vcpu_rmw_sys_reg(vcpu, sr, |=, 0);
return 0;
}


@ -178,7 +178,7 @@ static void kvm_pmu_set_pmc_value(struct kvm_pmc *pmc, u64 val, bool force)
val |= lower_32_bits(val);
}
__vcpu_sys_reg(vcpu, reg) = val;
__vcpu_assign_sys_reg(vcpu, reg, val);
/* Recreate the perf event to reflect the updated sample_period */
kvm_pmu_create_perf_event(pmc);
@ -204,7 +204,7 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
void kvm_pmu_set_counter_value_user(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
{
kvm_pmu_release_perf_event(kvm_vcpu_idx_to_pmc(vcpu, select_idx));
__vcpu_sys_reg(vcpu, counter_index_to_reg(select_idx)) = val;
__vcpu_assign_sys_reg(vcpu, counter_index_to_reg(select_idx), val);
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
}
@ -239,7 +239,7 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
reg = counter_index_to_reg(pmc->idx);
__vcpu_sys_reg(vcpu, reg) = val;
__vcpu_assign_sys_reg(vcpu, reg, val);
kvm_pmu_release_perf_event(pmc);
}
@ -503,14 +503,14 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
reg = __vcpu_sys_reg(vcpu, counter_index_to_reg(i)) + 1;
if (!kvm_pmc_is_64bit(pmc))
reg = lower_32_bits(reg);
__vcpu_sys_reg(vcpu, counter_index_to_reg(i)) = reg;
__vcpu_assign_sys_reg(vcpu, counter_index_to_reg(i), reg);
/* No overflow? move on */
if (kvm_pmc_has_64bit_overflow(pmc) ? reg : lower_32_bits(reg))
continue;
/* Mark overflow */
__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, BIT(i));
if (kvm_pmu_counter_can_chain(pmc))
kvm_pmu_counter_increment(vcpu, BIT(i + 1),
@ -556,7 +556,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
perf_event->attr.sample_period = period;
perf_event->hw.sample_period = period;
__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, BIT(idx));
if (kvm_pmu_counter_can_chain(pmc))
kvm_pmu_counter_increment(vcpu, BIT(idx + 1),
@ -602,7 +602,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
/* The reset bits don't indicate any state, and shouldn't be saved. */
__vcpu_sys_reg(vcpu, PMCR_EL0) = val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P);
__vcpu_assign_sys_reg(vcpu, PMCR_EL0, (val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P)));
if (val & ARMV8_PMU_PMCR_C)
kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
@ -779,7 +779,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 reg;
reg = counter_index_to_evtreg(pmc->idx);
__vcpu_sys_reg(vcpu, reg) = data & kvm_pmu_evtyper_mask(vcpu->kvm);
__vcpu_assign_sys_reg(vcpu, reg, (data & kvm_pmu_evtyper_mask(vcpu->kvm)));
kvm_pmu_create_perf_event(pmc);
}
@ -914,9 +914,9 @@ void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
{
u64 mask = kvm_pmu_implemented_counter_mask(vcpu);
__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask;
__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask;
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask;
__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, mask);
__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, mask);
__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, mask);
kvm_pmu_reprogram_counter_mask(vcpu, mask);
}
@ -1038,7 +1038,7 @@ static void kvm_arm_set_nr_counters(struct kvm *kvm, unsigned int nr)
u64 val = __vcpu_sys_reg(vcpu, MDCR_EL2);
val &= ~MDCR_EL2_HPMN;
val |= FIELD_PREP(MDCR_EL2_HPMN, kvm->arch.nr_pmu_counters);
__vcpu_sys_reg(vcpu, MDCR_EL2) = val;
__vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);
}
}
}


@ -228,7 +228,7 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
* to reverse-translate virtual EL2 system registers for a
* non-VHE guest hypervisor.
*/
__vcpu_sys_reg(vcpu, reg) = val;
__vcpu_assign_sys_reg(vcpu, reg, val);
switch (reg) {
case CNTHCTL_EL2:
@ -263,7 +263,7 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
return;
memory_write:
__vcpu_sys_reg(vcpu, reg) = val;
__vcpu_assign_sys_reg(vcpu, reg, val);
}
/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
@ -605,7 +605,7 @@ static int set_oslsr_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
if ((val ^ rd->val) & ~OSLSR_EL1_OSLK)
return -EINVAL;
__vcpu_sys_reg(vcpu, rd->reg) = val;
__vcpu_assign_sys_reg(vcpu, rd->reg, val);
return 0;
}
@ -791,7 +791,7 @@ static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
mask |= GENMASK(n - 1, 0);
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= mask;
__vcpu_rmw_sys_reg(vcpu, r->reg, &=, mask);
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -799,7 +799,7 @@ static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0);
__vcpu_rmw_sys_reg(vcpu, r->reg, &=, GENMASK(31, 0));
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -811,7 +811,7 @@ static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
return 0;
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= kvm_pmu_evtyper_mask(vcpu->kvm);
__vcpu_rmw_sys_reg(vcpu, r->reg, &=, kvm_pmu_evtyper_mask(vcpu->kvm));
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -819,7 +819,7 @@ static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= PMSELR_EL0_SEL_MASK;
__vcpu_rmw_sys_reg(vcpu, r->reg, &=, PMSELR_EL0_SEL_MASK);
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -835,7 +835,7 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
* The value of PMCR.N field is included when the
* vCPU register is read via kvm_vcpu_read_pmcr().
*/
__vcpu_sys_reg(vcpu, r->reg) = pmcr;
__vcpu_assign_sys_reg(vcpu, r->reg, pmcr);
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -907,7 +907,7 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return false;
if (p->is_write)
__vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
__vcpu_assign_sys_reg(vcpu, PMSELR_EL0, p->regval);
else
/* return PMSELR.SEL field */
p->regval = __vcpu_sys_reg(vcpu, PMSELR_EL0)
@ -1076,7 +1076,7 @@ static int set_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 va
{
u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
__vcpu_sys_reg(vcpu, r->reg) = val & mask;
__vcpu_assign_sys_reg(vcpu, r->reg, val & mask);
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
return 0;
@ -1103,10 +1103,10 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
val = p->regval & mask;
if (r->Op2 & 0x1)
/* accessing PMCNTENSET_EL0 */
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, |=, val);
else
/* accessing PMCNTENCLR_EL0 */
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
__vcpu_rmw_sys_reg(vcpu, PMCNTENSET_EL0, &=, ~val);
kvm_pmu_reprogram_counter_mask(vcpu, val);
} else {
@ -1129,10 +1129,10 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (r->Op2 & 0x1)
/* accessing PMINTENSET_EL1 */
__vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, |=, val);
else
/* accessing PMINTENCLR_EL1 */
__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
__vcpu_rmw_sys_reg(vcpu, PMINTENSET_EL1, &=, ~val);
} else {
p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
}
@ -1151,10 +1151,10 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (p->is_write) {
if (r->CRm & 0x2)
/* accessing PMOVSSET_EL0 */
__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, |=, (p->regval & mask));
else
/* accessing PMOVSCLR_EL0 */
__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
__vcpu_rmw_sys_reg(vcpu, PMOVSSET_EL0, &=, ~(p->regval & mask));
} else {
p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
}
@ -1185,8 +1185,8 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (!vcpu_mode_priv(vcpu))
return undef_access(vcpu, p, r);
__vcpu_sys_reg(vcpu, PMUSERENR_EL0) =
p->regval & ARMV8_PMU_USERENR_MASK;
__vcpu_assign_sys_reg(vcpu, PMUSERENR_EL0,
(p->regval & ARMV8_PMU_USERENR_MASK));
} else {
p->regval = __vcpu_sys_reg(vcpu, PMUSERENR_EL0)
& ARMV8_PMU_USERENR_MASK;
@ -1237,7 +1237,7 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
if (!kvm_supports_32bit_el0())
val |= ARMV8_PMU_PMCR_LC;
__vcpu_sys_reg(vcpu, r->reg) = val;
__vcpu_assign_sys_reg(vcpu, r->reg, val);
kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
return 0;
@ -2213,7 +2213,7 @@ static u64 reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
if (kvm_has_mte(vcpu->kvm))
clidr |= 2ULL << CLIDR_TTYPE_SHIFT(loc);
__vcpu_sys_reg(vcpu, r->reg) = clidr;
__vcpu_assign_sys_reg(vcpu, r->reg, clidr);
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -2227,7 +2227,7 @@ static int set_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
if ((val & CLIDR_EL1_RES0) || (!(ctr_el0 & CTR_EL0_IDC) && idc))
return -EINVAL;
__vcpu_sys_reg(vcpu, rd->reg) = val;
__vcpu_assign_sys_reg(vcpu, rd->reg, val);
return 0;
}
@ -2404,7 +2404,7 @@ static bool access_sp_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
{
if (p->is_write)
__vcpu_sys_reg(vcpu, SP_EL1) = p->regval;
__vcpu_assign_sys_reg(vcpu, SP_EL1, p->regval);
else
p->regval = __vcpu_sys_reg(vcpu, SP_EL1);
@ -2428,7 +2428,7 @@ static bool access_spsr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
{
if (p->is_write)
__vcpu_sys_reg(vcpu, SPSR_EL1) = p->regval;
__vcpu_assign_sys_reg(vcpu, SPSR_EL1, p->regval);
else
p->regval = __vcpu_sys_reg(vcpu, SPSR_EL1);
@ -2440,7 +2440,7 @@ static bool access_cntkctl_el12(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
{
if (p->is_write)
__vcpu_sys_reg(vcpu, CNTKCTL_EL1) = p->regval;
__vcpu_assign_sys_reg(vcpu, CNTKCTL_EL1, p->regval);
else
p->regval = __vcpu_sys_reg(vcpu, CNTKCTL_EL1);
@ -2454,7 +2454,9 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
if (!cpus_have_final_cap(ARM64_HAS_HCR_NV1))
val |= HCR_E2H;
return __vcpu_sys_reg(vcpu, r->reg) = val;
__vcpu_assign_sys_reg(vcpu, r->reg, val);
return __vcpu_sys_reg(vcpu, r->reg);
}
static unsigned int __el2_visibility(const struct kvm_vcpu *vcpu,
@ -2625,7 +2627,7 @@ static bool access_mdcr(struct kvm_vcpu *vcpu,
u64_replace_bits(val, hpmn, MDCR_EL2_HPMN);
}
__vcpu_sys_reg(vcpu, MDCR_EL2) = val;
__vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);
/*
* Request a reload of the PMU to enable/disable the counters
@ -2754,7 +2756,7 @@ static int set_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
static u64 reset_mdcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
__vcpu_sys_reg(vcpu, r->reg) = vcpu->kvm->arch.nr_pmu_counters;
__vcpu_assign_sys_reg(vcpu, r->reg, vcpu->kvm->arch.nr_pmu_counters);
return vcpu->kvm->arch.nr_pmu_counters;
}
@ -4790,7 +4792,7 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
r->reset(vcpu, r);
if (r->reg >= __SANITISED_REG_START__ && r->reg < NR_SYS_REGS)
(void)__vcpu_sys_reg(vcpu, r->reg);
__vcpu_rmw_sys_reg(vcpu, r->reg, |=, 0);
}
set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags);
@ -5012,7 +5014,7 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
if (r->set_user) {
ret = (r->set_user)(vcpu, r, val);
} else {
__vcpu_sys_reg(vcpu, r->reg) = val;
__vcpu_assign_sys_reg(vcpu, r->reg, val);
ret = 0;
}


@ -137,7 +137,7 @@ static inline u64 reset_unknown(struct kvm_vcpu *vcpu,
{
BUG_ON(!r->reg);
BUG_ON(r->reg >= NR_SYS_REGS);
__vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
__vcpu_assign_sys_reg(vcpu, r->reg, 0x1de7ec7edbadc0deULL);
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -145,7 +145,7 @@ static inline u64 reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
BUG_ON(!r->reg);
BUG_ON(r->reg >= NR_SYS_REGS);
__vcpu_sys_reg(vcpu, r->reg) = r->val;
__vcpu_assign_sys_reg(vcpu, r->reg, r->val);
return __vcpu_sys_reg(vcpu, r->reg);
}


@ -356,12 +356,12 @@ void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
val = __vcpu_sys_reg(vcpu, ICH_HCR_EL2);
val &= ~ICH_HCR_EL2_EOIcount_MASK;
val |= (s_cpu_if->vgic_hcr & ICH_HCR_EL2_EOIcount_MASK);
__vcpu_sys_reg(vcpu, ICH_HCR_EL2) = val;
__vcpu_sys_reg(vcpu, ICH_VMCR_EL2) = s_cpu_if->vgic_vmcr;
__vcpu_assign_sys_reg(vcpu, ICH_HCR_EL2, val);
__vcpu_assign_sys_reg(vcpu, ICH_VMCR_EL2, s_cpu_if->vgic_vmcr);
for (i = 0; i < 4; i++) {
__vcpu_sys_reg(vcpu, ICH_AP0RN(i)) = s_cpu_if->vgic_ap0r[i];
__vcpu_sys_reg(vcpu, ICH_AP1RN(i)) = s_cpu_if->vgic_ap1r[i];
__vcpu_assign_sys_reg(vcpu, ICH_AP0RN(i), s_cpu_if->vgic_ap0r[i]);
__vcpu_assign_sys_reg(vcpu, ICH_AP1RN(i), s_cpu_if->vgic_ap1r[i]);
}
for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) {
@ -370,7 +370,7 @@ void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
val &= ~ICH_LR_STATE;
val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE;
__vcpu_sys_reg(vcpu, ICH_LRN(i)) = val;
__vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val);
s_cpu_if->vgic_lr[i] = 0;
}


@ -200,7 +200,7 @@ static inline pte_t pte_mkyoung(pte_t pte)
return pte;
}
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -387,7 +387,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
(((type & 0x1f) << 1) | \
((offset & 0x3ffff8) << 10) | ((offset & 0x7) << 7)) })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -301,7 +301,7 @@ static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
#define __pmd_to_swp_entry(pmd) ((swp_entry_t) { pmd_val(pmd) })
#define __swp_entry_to_pmd(x) ((pmd_t) { (x).val | _PAGE_HUGE })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -268,7 +268,7 @@ extern pgd_t kernel_pg_dir[PTRS_PER_PGD];
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) (__pte((x).val))
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -185,7 +185,7 @@ extern pgd_t kernel_pg_dir[128];
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -169,7 +169,7 @@ extern pgd_t kernel_pg_dir[PTRS_PER_PGD];
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -398,7 +398,7 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 2 })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 2 })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -534,7 +534,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
#endif
#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte.pte_low & _PAGE_SWP_EXCLUSIVE;
}
@ -551,7 +551,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
return pte;
}
#else
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -259,7 +259,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
#define __swp_entry_to_pte(swp) ((pte_t) { (swp).val })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -411,7 +411,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -425,7 +425,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -365,7 +365,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 3 })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 3 })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -693,7 +693,7 @@ static inline pte_t pte_swp_mkexclusive(pte_t pte)
return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SWP_EXCLUSIVE));
}
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return !!(pte_raw(pte) & cpu_to_be64(_PAGE_SWP_EXCLUSIVE));
}


@ -286,7 +286,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -521,6 +521,15 @@ static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
return -EINVAL;
}
/*
* Map complete page to the paste address. So the user
* space should pass 0ULL to the offset parameter.
*/
if (vma->vm_pgoff) {
pr_debug("Page offset unsupported to map paste address\n");
return -EINVAL;
}
/* Ensure instance has an open send window */
if (!txwin) {
pr_err("No send window open?\n");


@ -48,11 +48,15 @@ static ssize_t memtrace_read(struct file *filp, char __user *ubuf,
static int memtrace_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct memtrace_entry *ent = filp->private_data;
unsigned long ent_nrpages = ent->size >> PAGE_SHIFT;
unsigned long vma_nrpages = vma_pages(vma);
if (ent->size < vma->vm_end - vma->vm_start)
/* The requested page offset should be within object's page count */
if (vma->vm_pgoff >= ent_nrpages)
return -EINVAL;
if (vma->vm_pgoff << PAGE_SHIFT >= ent->size)
/* The requested mapping range should remain within the bounds */
if (vma_nrpages > ent_nrpages - vma->vm_pgoff)
return -EINVAL;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);


@ -1028,7 +1028,7 @@ static inline pud_t pud_modify(pud_t pud, pgprot_t newprot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -915,7 +915,7 @@ static inline int pmd_protnone(pmd_t pmd)
}
#endif
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -470,7 +470,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
/* In both cases, we borrow bit 6 to store the exclusive marker in swap PTEs. */
#define _PAGE_SWP_EXCLUSIVE _PAGE_USER
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte.pte_low & _PAGE_SWP_EXCLUSIVE;
}


@ -348,7 +348,7 @@ static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & SRMMU_SWP_EXCLUSIVE;
}


@ -1023,7 +1023,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -314,7 +314,7 @@ extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr);
((swp_entry_t) { pte_val(pte_mkuptodate(pte)) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_get_bits(pte, _PAGE_SWP_EXCLUSIVE);
}


@ -1561,7 +1561,7 @@ static inline pte_t pte_swp_mkexclusive(pte_t pte)
return pte_set_flags(pte, _PAGE_SWP_EXCLUSIVE);
}
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_flags(pte) & _PAGE_SWP_EXCLUSIVE;
}


@ -299,3 +299,27 @@ struct smp_ops smp_ops = {
.send_call_func_single_ipi = native_send_call_func_single_ipi,
};
EXPORT_SYMBOL_GPL(smp_ops);
int arch_cpu_rescan_dead_smt_siblings(void)
{
enum cpuhp_smt_control old = cpu_smt_control;
int ret;
/*
* If SMT has been disabled and SMT siblings are in HLT, bring them back
* online and offline them again so that they end up in MWAIT proper.
*
* Called with hotplug enabled.
*/
if (old != CPU_SMT_DISABLED && old != CPU_SMT_FORCE_DISABLED)
return 0;
ret = cpuhp_smt_enable();
if (ret)
return ret;
ret = cpuhp_smt_disable(old);
return ret;
}
EXPORT_SYMBOL_GPL(arch_cpu_rescan_dead_smt_siblings);


@ -1244,6 +1244,10 @@ void play_dead_common(void)
local_irq_disable();
}
/*
* We need to flush the caches before going to sleep, lest we have
* dirty data in our caches when we come back up.
*/
void __noreturn mwait_play_dead(unsigned int eax_hint)
{
struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
@ -1289,50 +1293,6 @@ void __noreturn mwait_play_dead(unsigned int eax_hint)
}
}
/*
* We need to flush the caches before going to sleep, lest we have
* dirty data in our caches when we come back up.
*/
static inline void mwait_play_dead_cpuid_hint(void)
{
unsigned int eax, ebx, ecx, edx;
unsigned int highest_cstate = 0;
unsigned int highest_subcstate = 0;
int i;
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
return;
if (!this_cpu_has(X86_FEATURE_MWAIT))
return;
if (!this_cpu_has(X86_FEATURE_CLFLUSH))
return;
eax = CPUID_LEAF_MWAIT;
ecx = 0;
native_cpuid(&eax, &ebx, &ecx, &edx);
/*
* eax will be 0 if EDX enumeration is not valid.
* Initialized below to cstate, sub_cstate value when EDX is valid.
*/
if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED)) {
eax = 0;
} else {
edx >>= MWAIT_SUBSTATE_SIZE;
for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) {
if (edx & MWAIT_SUBSTATE_MASK) {
highest_cstate = i;
highest_subcstate = edx & MWAIT_SUBSTATE_MASK;
}
}
eax = (highest_cstate << MWAIT_SUBSTATE_SIZE) |
(highest_subcstate - 1);
}
mwait_play_dead(eax);
}
/*
* Kick all "offline" CPUs out of mwait on kexec(). See comment in
* mwait_play_dead().
@ -1383,9 +1343,9 @@ void native_play_dead(void)
play_dead_common();
tboot_shutdown(TB_SHUTDOWN_WFS);
mwait_play_dead_cpuid_hint();
if (cpuidle_play_dead())
hlt_play_dead();
/* Below returns only on error. */
cpuidle_play_dead();
hlt_play_dead();
}
#else /* ... !CONFIG_HOTPLUG_CPU */


@ -4896,12 +4896,16 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
{
u64 error_code = PFERR_GUEST_FINAL_MASK;
u8 level = PG_LEVEL_4K;
u64 direct_bits;
u64 end;
int r;
if (!vcpu->kvm->arch.pre_fault_allowed)
return -EOPNOTSUPP;
if (kvm_is_gfn_alias(vcpu->kvm, gpa_to_gfn(range->gpa)))
return -EINVAL;
/*
* reload is efficient when called repeatedly, so we can do it on
* every iteration.
@ -4910,15 +4914,18 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
if (r)
return r;
direct_bits = 0;
if (kvm_arch_has_private_mem(vcpu->kvm) &&
kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
error_code |= PFERR_PRIVATE_ACCESS;
else
direct_bits = gfn_to_gpa(kvm_gfn_direct_bits(vcpu->kvm));
/*
* Shadow paging uses GVA for kvm page fault, so restrict to
* two-dimensional paging.
*/
r = kvm_tdp_map_page(vcpu, range->gpa, error_code, &level);
r = kvm_tdp_map_page(vcpu, range->gpa | direct_bits, error_code, &level);
if (r < 0)
return r;


@ -2871,6 +2871,33 @@ void __init sev_set_cpu_caps(void)
}
}
static bool is_sev_snp_initialized(void)
{
struct sev_user_data_snp_status *status;
struct sev_data_snp_addr buf;
bool initialized = false;
int ret, error = 0;
status = snp_alloc_firmware_page(GFP_KERNEL | __GFP_ZERO);
if (!status)
return false;
buf.address = __psp_pa(status);
ret = sev_do_cmd(SEV_CMD_SNP_PLATFORM_STATUS, &buf, &error);
if (ret) {
pr_err("SEV: SNP_PLATFORM_STATUS failed ret=%d, fw_error=%d (%#x)\n",
ret, error, error);
goto out;
}
initialized = !!status->state;
out:
snp_free_firmware_page(status);
return initialized;
}
void __init sev_hardware_setup(void)
{
unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count;
@ -2975,6 +3002,14 @@ void __init sev_hardware_setup(void)
sev_snp_supported = sev_snp_enabled && cc_platform_has(CC_ATTR_HOST_SEV_SNP);
out:
if (sev_enabled) {
init_args.probe = true;
if (sev_platform_init(&init_args))
sev_supported = sev_es_supported = sev_snp_supported = false;
else if (sev_snp_supported)
sev_snp_supported = is_sev_snp_initialized();
}
if (boot_cpu_has(X86_FEATURE_SEV))
pr_info("SEV %s (ASIDs %u - %u)\n",
sev_supported ? min_sev_asid <= max_sev_asid ? "enabled" :
@@ -3001,15 +3036,6 @@ out:
sev_supported_vmsa_features = 0;
if (sev_es_debug_swap_enabled)
sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
if (!sev_enabled)
return;
/*
* Do both SNP and SEV initialization at KVM module load.
*/
init_args.probe = true;
sev_platform_init(&init_args);
}
void sev_hardware_unsetup(void)
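Net effect of the moves above: platform init now happens inside sev_hardware_setup(), a failed init clears every SEV support flag, and SNP is only advertised if the firmware reports an initialized state. A hypothetical condensation, with stub probes standing in for sev_platform_init() and the SNP_PLATFORM_STATUS query:

#include <stdbool.h>

/* Illustration only: both probes below are stand-ins, not driver APIs. */
static bool platform_init_ok(void)	{ return true; }
static bool snp_fw_initialized(void)	{ return true; }

static void gate_sev_flags(bool *sev, bool *sev_es, bool *sev_snp)
{
	if (!*sev)
		return;
	if (!platform_init_ok())
		*sev = *sev_es = *sev_snp = false;
	else if (*sev_snp)
		*sev_snp = snp_fw_initialized();
}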

View file

@@ -192,7 +192,8 @@ out:
int arch_resume_nosmt(void)
{
int ret = 0;
int ret;
/*
* We reached this while coming out of hibernation. This means
* that SMT siblings are sleeping in hlt, as mwait is not safe
@@ -206,18 +207,10 @@ int arch_resume_nosmt(void)
* Called with hotplug disabled.
*/
cpu_hotplug_enable();
if (cpu_smt_control == CPU_SMT_DISABLED ||
cpu_smt_control == CPU_SMT_FORCE_DISABLED) {
enum cpuhp_smt_control old = cpu_smt_control;
ret = cpuhp_smt_enable();
if (ret)
goto out;
ret = cpuhp_smt_disable(old);
if (ret)
goto out;
}
out:
ret = arch_cpu_rescan_dead_smt_siblings();
cpu_hotplug_disable();
return ret;
}

View file

@@ -349,7 +349,7 @@ ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_swp_exclusive(pte_t pte)
static inline bool pte_swp_exclusive(pte_t pte)
{
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
}

View file

@@ -998,20 +998,20 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
if (!plug || rq_list_empty(&plug->mq_list))
return false;
rq_list_for_each(&plug->mq_list, rq) {
if (rq->q == q) {
if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
BIO_MERGE_OK)
return true;
break;
}
rq = plug->mq_list.tail;
if (rq->q == q)
return blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
BIO_MERGE_OK;
else if (!plug->multiple_queues)
return false;
/*
* Only keep iterating plug list for merges if we have multiple
* queues
*/
if (!plug->multiple_queues)
break;
rq_list_for_each(&plug->mq_list, rq) {
if (rq->q != q)
continue;
if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
BIO_MERGE_OK)
return true;
break;
}
return false;
}
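The rework first probes only the most recently plugged request (the common case, and the only candidate when a single queue is plugged) and falls back to a full list walk only when requests from multiple queues are batched. The control-flow shape, reduced to a toy with hypothetical types (not block-layer code):

struct toy_rq { struct toy_rq *next; int q; };

/* Returns 1 if a merge candidate on queue q exists. */
static int toy_plug_merge(struct toy_rq *tail, int q, int multiple_queues)
{
	struct toy_rq *rq;

	if (!tail)
		return 0;
	if (tail->q == q)		/* fast path: only the tail is tried */
		return 1;
	if (!multiple_queues)		/* one queue plugged: tail was it */
		return 0;
	for (rq = tail; rq; rq = rq->next)
		if (rq->q == q)
			return 1;	/* slow path: first same-queue hit */
	return 0;
}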

View file

@@ -1225,6 +1225,7 @@ void blk_zone_write_plug_bio_endio(struct bio *bio)
if (bio_flagged(bio, BIO_EMULATES_ZONE_APPEND)) {
bio->bi_opf &= ~REQ_OP_MASK;
bio->bi_opf |= REQ_OP_ZONE_APPEND;
bio_clear_flag(bio, BIO_EMULATES_ZONE_APPEND);
}
/*
@@ -1306,7 +1307,6 @@ again:
spin_unlock_irqrestore(&zwplug->lock, flags);
bdev = bio->bi_bdev;
submit_bio_noacct_nocheck(bio);
/*
* blk-mq devices will reuse the extra reference on the request queue
@@ -1314,8 +1314,12 @@ again:
* path for BIO-based devices will not do that. So drop this extra
* reference here.
*/
if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO))
if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) {
bdev->bd_disk->fops->submit_bio(bio);
blk_queue_exit(bdev->bd_disk->queue);
} else {
blk_mq_submit_bio(bio);
}
put_zwplug:
/* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */

View file

@@ -566,7 +566,7 @@ static int __init crypto_hkdf_module_init(void)
static void __exit crypto_hkdf_module_exit(void) {}
module_init(crypto_hkdf_module_init);
late_initcall(crypto_hkdf_module_init);
module_exit(crypto_hkdf_module_exit);
MODULE_LICENSE("GPL");

View file

@@ -126,8 +126,8 @@ struct psp_device *aie2m_psp_create(struct drm_device *ddev, struct psp_config *
psp->ddev = ddev;
memcpy(psp->psp_regs, conf->psp_regs, sizeof(psp->psp_regs));
psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN) + PSP_FW_ALIGN;
psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz, GFP_KERNEL);
psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN);
psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz + PSP_FW_ALIGN, GFP_KERNEL);
if (!psp->fw_buffer) {
drm_err(ddev, "no memory for fw buffer");
return NULL;
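The fix keeps fw_buf_sz equal to the aligned firmware size and over-allocates by one alignment unit only for the allocation itself, instead of folding the slack into the size that is later used as the buffer length. A generic user-space sketch of that allocate-then-align pattern (hypothetical helper):

#include <stdint.h>
#include <stdlib.h>

/*
 * Hypothetical illustration: allocate sz plus one alignment unit of
 * slack, return a pointer rounded up to 'align' (a power of two).
 * 'sz' stays the usable length, as psp->fw_buf_sz now does.
 */
static void *alloc_aligned(size_t sz, size_t align, void **raw)
{
	uintptr_t p;

	*raw = malloc(sz + align);
	if (!*raw)
		return NULL;
	p = ((uintptr_t)*raw + align - 1) & ~(uintptr_t)(align - 1);
	return (void *)p;
}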

View file

@@ -33,7 +33,7 @@
static DEFINE_MUTEX(isolated_cpus_lock);
static DEFINE_MUTEX(round_robin_lock);
static unsigned long power_saving_mwait_eax;
static unsigned int power_saving_mwait_eax;
static unsigned char tsc_detected_unstable;
static unsigned char tsc_marked_unstable;

View file

@@ -883,19 +883,16 @@ static int __init einj_init(void)
}
einj_dev = faux_device_create("acpi-einj", NULL, &einj_device_ops);
if (!einj_dev)
return -ENODEV;
einj_initialized = true;
if (einj_dev)
einj_initialized = true;
return 0;
}
static void __exit einj_exit(void)
{
if (einj_initialized)
faux_device_destroy(einj_dev);
faux_device_destroy(einj_dev);
}
module_init(einj_init);

View file

@@ -476,7 +476,7 @@ bool cppc_allow_fast_switch(void)
struct cpc_desc *cpc_ptr;
int cpu;
for_each_possible_cpu(cpu) {
for_each_present_cpu(cpu) {
cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF];
if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&

View file

@@ -23,8 +23,10 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/suspend.h>
#include <linux/acpi.h>
#include <linux/dmi.h>
@@ -2031,6 +2033,21 @@ void __init acpi_ec_ecdt_probe(void)
goto out;
}
if (!strstarts(ecdt_ptr->id, "\\")) {
/*
* The ECDT table on some MSI notebooks contains invalid data, together
* with an empty ID string ("").
*
* Section 5.2.15 of the ACPI specification requires the ID string to be
* a "fully qualified reference to the (...) embedded controller device",
* so this string always has to start with a backslash.
*
* By verifying this we can avoid such faulty ECDT tables in a safe way.
*/
pr_err(FW_BUG "Ignoring ECDT due to invalid ID string \"%s\"\n", ecdt_ptr->id);
goto out;
}
ec = acpi_ec_alloc();
if (!ec)
goto out;
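strstarts() as used above is just a bounded prefix compare, so the whole sanity check costs one byte of the ID string. Its rough user-space equivalent:

#include <stdbool.h>
#include <string.h>

/* Rough equivalent of the kernel's strstarts() used in the check above. */
static bool starts_with(const char *str, const char *prefix)
{
	return strncmp(str, prefix, strlen(prefix)) == 0;
}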

View file

@@ -175,6 +175,12 @@ bool processor_physically_present(acpi_handle handle);
static inline void acpi_early_processor_control_setup(void) {}
#endif
#ifdef CONFIG_ACPI_PROCESSOR_CSTATE
void acpi_idle_rescan_dead_smt_siblings(void);
#else
static inline void acpi_idle_rescan_dead_smt_siblings(void) {}
#endif
/* --------------------------------------------------------------------------
Embedded Controller
-------------------------------------------------------------------------- */

View file

@@ -279,6 +279,9 @@ static int __init acpi_processor_driver_init(void)
* after acpi_cppc_processor_probe() has been called for all online CPUs
*/
acpi_processor_init_invariance_cppc();
acpi_idle_rescan_dead_smt_siblings();
return 0;
err:
driver_unregister(&acpi_processor_driver);

View file

@@ -24,6 +24,8 @@
#include <acpi/processor.h>
#include <linux/context_tracking.h>
#include "internal.h"
/*
* Include the apic definitions for x86 to have the APIC timer related defines
* available also for UP (on SMP it gets magically included via linux/smp.h).
@@ -55,6 +57,12 @@ struct cpuidle_driver acpi_idle_driver = {
};
#ifdef CONFIG_ACPI_PROCESSOR_CSTATE
void acpi_idle_rescan_dead_smt_siblings(void)
{
if (cpuidle_get_driver() == &acpi_idle_driver)
arch_cpu_rescan_dead_smt_siblings();
}
static
DEFINE_PER_CPU(struct acpi_processor_cx * [CPUIDLE_STATE_MAX], acpi_cstate);

View file

@@ -666,6 +666,13 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"),
},
},
{
/* MACHENIKE L16P/L16P */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "MACHENIKE"),
DMI_MATCH(DMI_BOARD_NAME, "L16P"),
},
},
{
/*
* TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the

View file

@@ -86,6 +86,7 @@ static struct device_driver faux_driver = {
.name = "faux_driver",
.bus = &faux_bus_type,
.probe_type = PROBE_FORCE_SYNCHRONOUS,
.suppress_bind_attrs = true,
};
static void faux_device_release(struct device *dev)
@@ -169,7 +170,7 @@ struct faux_device *faux_device_create_with_groups(const char *name,
* successful is almost impossible to determine by the caller.
*/
if (!dev->driver) {
dev_err(dev, "probe did not succeed, tearing down the device\n");
dev_dbg(dev, "probe did not succeed, tearing down the device\n");
faux_device_destroy(faux_dev);
faux_dev = NULL;
}

View file

@@ -1248,12 +1248,6 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
lo->lo_flags &= ~LOOP_SET_STATUS_CLEARABLE_FLAGS;
lo->lo_flags |= (info->lo_flags & LOOP_SET_STATUS_SETTABLE_FLAGS);
if (size_changed) {
loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
lo->lo_backing_file);
loop_set_size(lo, new_size);
}
/* update the direct I/O flag if lo_offset changed */
loop_update_dio(lo);
@@ -1261,6 +1255,11 @@ out_unfreeze:
blk_mq_unfreeze_queue(lo->lo_queue, memflags);
if (partscan)
clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
if (!err && size_changed) {
loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
lo->lo_backing_file);
loop_set_size(lo, new_size);
}
out_unlock:
mutex_unlock(&lo->lo_mutex);
if (partscan)

View file

@@ -396,8 +396,13 @@ static int btintel_pcie_submit_rx(struct btintel_pcie_data *data)
static int btintel_pcie_start_rx(struct btintel_pcie_data *data)
{
int i, ret;
struct rxq *rxq = &data->rxq;
for (i = 0; i < BTINTEL_PCIE_RX_MAX_QUEUE; i++) {
/* Post (BTINTEL_PCIE_RX_DESCS_COUNT - 3) buffers to work around
 * hardware issues that lead to a race condition in the firmware.
 */
for (i = 0; i < rxq->count - 3; i++) {
ret = btintel_pcie_submit_rx(data);
if (ret)
return ret;
@@ -1782,8 +1787,8 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
* + size of index * number of queues (2) * number of index array types (4)
* + size of context information
*/
total = (sizeof(struct tfd) + sizeof(struct urbd0) + sizeof(struct frbd)
+ sizeof(struct urbd1)) * BTINTEL_DESCS_COUNT;
total = (sizeof(struct tfd) + sizeof(struct urbd0)) * BTINTEL_PCIE_TX_DESCS_COUNT;
total += (sizeof(struct frbd) + sizeof(struct urbd1)) * BTINTEL_PCIE_RX_DESCS_COUNT;
/* Add the sum of size of index array and size of ci struct */
total += (sizeof(u16) * BTINTEL_PCIE_NUM_QUEUES * 4) + sizeof(struct ctx_info);
@@ -1808,36 +1813,36 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
data->dma_v_addr = v_addr;
/* Setup descriptor count */
data->txq.count = BTINTEL_DESCS_COUNT;
data->rxq.count = BTINTEL_DESCS_COUNT;
data->txq.count = BTINTEL_PCIE_TX_DESCS_COUNT;
data->rxq.count = BTINTEL_PCIE_RX_DESCS_COUNT;
/* Setup tfds */
data->txq.tfds_p_addr = p_addr;
data->txq.tfds = v_addr;
p_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
v_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
p_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
v_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
/* Setup urbd0 */
data->txq.urbd0s_p_addr = p_addr;
data->txq.urbd0s = v_addr;
p_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
v_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
p_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
v_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
/* Setup FRBD*/
data->rxq.frbds_p_addr = p_addr;
data->rxq.frbds = v_addr;
p_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
v_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
p_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
v_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
/* Setup urbd1 */
data->rxq.urbd1s_p_addr = p_addr;
data->rxq.urbd1s = v_addr;
p_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
v_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
p_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
v_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
/* Setup data buffers for txq */
err = btintel_pcie_setup_txq_bufs(data, &data->txq);

View file

@@ -154,8 +154,11 @@ enum msix_mbox_int_causes {
/* Default interrupt timeout in msec */
#define BTINTEL_DEFAULT_INTR_TIMEOUT_MS 3000
/* The number of descriptors in TX/RX queues */
#define BTINTEL_DESCS_COUNT 16
/* The number of descriptors in TX queues */
#define BTINTEL_PCIE_TX_DESCS_COUNT 32
/* The number of descriptors in RX queues */
#define BTINTEL_PCIE_RX_DESCS_COUNT 64
/* Number of Queue for TX and RX
* It indicates the index of the IA(Index Array)
@@ -177,9 +180,6 @@ enum {
/* Doorbell vector for TFD */
#define BTINTEL_PCIE_TX_DB_VEC 0
/* Number of pending RX requests for downlink */
#define BTINTEL_PCIE_RX_MAX_QUEUE 6
/* Doorbell vector for FRBD */
#define BTINTEL_PCIE_RX_DB_VEC 513

View file

@@ -26,9 +26,9 @@ fn find_supply_name_exact(dev: &Device, name: &str) -> Option<CString> {
}
/// Finds supply name for the CPU from DT.
fn find_supply_names(dev: &Device, cpu: u32) -> Option<KVec<CString>> {
fn find_supply_names(dev: &Device, cpu: cpu::CpuId) -> Option<KVec<CString>> {
// Try "cpu0" for older DTs, fallback to "cpu".
let name = (cpu == 0)
let name = (cpu.as_u32() == 0)
.then(|| find_supply_name_exact(dev, "cpu0"))
.flatten()
.or_else(|| find_supply_name_exact(dev, "cpu"))?;

View file

@@ -1118,7 +1118,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
* Catch exporters making buffers inaccessible even when
* attachments preventing that exist.
*/
WARN_ON_ONCE(ret == EBUSY);
WARN_ON_ONCE(ret == -EBUSY);
if (ret)
return ERR_PTR(ret);
}
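The one-character fix matters because kernel APIs return negative errno values, so comparing against positive EBUSY made the warning unreachable. A user-space demonstration of the convention:

#include <errno.h>
#include <stdio.h>

int main(void)
{
	int ret = -EBUSY;	/* kernel-style: errors are negative */

	printf("ret == EBUSY:  %d\n", ret == EBUSY);	/* 0, never fires */
	printf("ret == -EBUSY: %d\n", ret == -EBUSY);	/* 1 */
	return 0;
}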

Some files were not shown because too many files have changed in this diff