Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.19-rc9).

Conflicts (no adjacent changes):

drivers/net/ethernet/spacemit/k1_emac.c
  3125fc1701 ("net: spacemit: k1-emac: fix jumbo frame support")
  f66086798f ("net: spacemit: Remove broken flow control support")
https://lore.kernel.org/aYIysFIE9ooavWia@sirena.org.uk

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2026-01-22 20:13:25 -08:00
commit a182a62ff7
187 changed files with 1545 additions and 815 deletions


@ -34,6 +34,7 @@ Alexander Lobakin <alobakin@pm.me> <alobakin@marvell.com>
Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <alexander.mikhalitsyn@virtuozzo.com>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@canonical.com>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@futurfusion.io>
Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin.ext@nsn.com>
Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@gmx.de>
Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@nokia.com>
@ -786,7 +787,8 @@ Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com> <subashab@codeaurora.
Subbaraman Narayanamurthy <quic_subbaram@quicinc.com> <subbaram@codeaurora.org>
Subhash Jadavani <subhashj@codeaurora.org>
Sudarshan Rajagopalan <quic_sudaraja@quicinc.com> <sudaraja@codeaurora.org>
Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep Holla <sudeep.holla@kernel.org> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep Holla <sudeep.holla@kernel.org> <sudeep.holla@arm.com>
Sumit Garg <sumit.garg@kernel.org> <sumit.garg@linaro.org>
Sumit Semwal <sumit.semwal@ti.com>
Surabhi Vishnoi <quic_svishnoi@quicinc.com> <svishnoi@codeaurora.org>
@ -851,6 +853,7 @@ Valentin Schneider <vschneid@redhat.com> <valentin.schneider@arm.com>
Veera Sundaram Sankaran <quic_veeras@quicinc.com> <veeras@codeaurora.org>
Veerabhadrarao Badiganti <quic_vbadigan@quicinc.com> <vbadigan@codeaurora.org>
Venkateswara Naralasetty <quic_vnaralas@quicinc.com> <vnaralas@codeaurora.org>
Viacheslav Bocharov <v@baodeep.com> <adeep@lexina.in>
Vikash Garodia <vikash.garodia@oss.qualcomm.com> <vgarodia@codeaurora.org>
Vikash Garodia <vikash.garodia@oss.qualcomm.com> <quic_vgarodia@quicinc.com>
Vincent Mailhol <mailhol@kernel.org> <mailhol.vincent@wanadoo.fr>


@ -7,13 +7,3 @@ Description:
signals when the PCI layer is able to support establishment of
link encryption and other device-security features coordinated
through a platform tsm.
What: /sys/class/tsm/tsmN/streamH.R.E
Contact: linux-pci@vger.kernel.org
Description:
(RO) When a host bridge has established a secure connection via
the platform TSM, this symlink appears. Its primary purpose is to
provide a system-global view of TSM resource consumption across
host bridges. The link points to the endpoint PCI device and
matches the same link published by the host bridge. See
Documentation/ABI/testing/sysfs-devices-pci-host-bridge.


@ -3472,6 +3472,11 @@ Kernel parameters
If there are multiple matching configurations changing
the same attribute, the last one is used.
liveupdate= [KNL,EARLY]
Format: <bool>
Enable Live Update Orchestrator (LUO).
Default: off.
load_ramdisk= [RAM] [Deprecated]
lockd.nlm_grace_period=P [NFS] Assign grace period.


@ -44,6 +44,7 @@ properties:
- items:
- enum:
- fsl,imx94-sai
- fsl,imx952-sai
- const: fsl,imx95-sai
reg:


@ -335,7 +335,7 @@ F: tools/power/acpi/
ACPI FOR ARM64 (ACPI/arm64)
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
M: Hanjun Guo <guohanjun@huawei.com>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-acpi@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
@ -351,7 +351,7 @@ F: drivers/acpi/riscv/
F: include/linux/acpi_rimt.h
ACPI PCC(Platform Communication Channel) MAILBOX DRIVER
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-acpi@vger.kernel.org
S: Supported
F: drivers/mailbox/pcc.c
@ -2747,14 +2747,14 @@ F: arch/arm/include/asm/hardware/dec21285.h
F: arch/arm/mach-footbridge/
ARM/FREESCALE IMX / MXC ARM ARCHITECTURE
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
M: Sascha Hauer <s.hauer@pengutronix.de>
R: Pengutronix Kernel Team <kernel@pengutronix.de>
R: Fabio Estevam <festevam@gmail.com>
L: imx@lists.linux.dev
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
F: Documentation/devicetree/bindings/firmware/fsl*
F: Documentation/devicetree/bindings/firmware/nxp*
F: arch/arm/boot/dts/nxp/imx/
@ -2769,22 +2769,22 @@ N: mxs
N: \bmxc[^\d]
ARM/FREESCALE LAYERSCAPE ARM ARCHITECTURE
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
F: arch/arm/boot/dts/nxp/ls/
F: arch/arm64/boot/dts/freescale/fsl-*
F: arch/arm64/boot/dts/freescale/qoriq-*
ARM/FREESCALE VYBRID ARM ARCHITECTURE
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
M: Sascha Hauer <s.hauer@pengutronix.de>
R: Pengutronix Kernel Team <kernel@pengutronix.de>
R: Stefan Agner <stefan@agner.ch>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
F: arch/arm/boot/dts/nxp/vf/
F: arch/arm/mach-imx/*vf610*
@ -3681,7 +3681,7 @@ N: uniphier
ARM/VERSATILE EXPRESS PLATFORM
M: Liviu Dudau <liviu.dudau@arm.com>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
@ -6514,7 +6514,7 @@ F: drivers/i2c/busses/i2c-cp2615.c
CPU FREQUENCY DRIVERS - VEXPRESS SPC ARM BIG LITTLE
M: Viresh Kumar <viresh.kumar@linaro.org>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-pm@vger.kernel.org
S: Maintained
W: http://www.arm.com/products/processors/technologies/biglittleprocessing.php
@ -6610,7 +6610,7 @@ F: include/linux/platform_data/cpuidle-exynos.h
CPUIDLE DRIVER - ARM PSCI
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
M: Ulf Hansson <ulf.hansson@linaro.org>
L: linux-pm@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -9819,7 +9819,7 @@ F: include/uapi/linux/firewire*.h
F: tools/firewire/
FIRMWARE FRAMEWORK FOR ARMV8-A
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/firmware/arm_ffa/
@ -10517,7 +10517,7 @@ S: Maintained
F: scripts/gendwarfksyms/
GENERIC ARCHITECTURE TOPOLOGY
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: drivers/base/arch_topology.c
@ -11371,6 +11371,11 @@ F: Documentation/ABI/testing/sysfs-devices-platform-kunpeng_hccs
F: drivers/soc/hisilicon/kunpeng_hccs.c
F: drivers/soc/hisilicon/kunpeng_hccs.h
HISILICON SOC HHA DRIVER
M: Yushan Wang <wangyushan12@huawei.com>
S: Maintained
F: drivers/cache/hisi_soc_hha.c
HISILICON LPC BUS DRIVER
M: Jay Fang <f.fangjian@huawei.com>
S: Maintained
@ -15096,7 +15101,7 @@ F: drivers/mailbox/arm_mhuv2.c
F: include/linux/mailbox/arm_mhuv2_message.h
MAILBOX ARM MHUv3
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
M: Cristian Marussi <cristian.marussi@arm.com>
L: linux-kernel@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -20589,7 +20594,7 @@ F: drivers/pinctrl/pinctrl-amd.c
PIN CONTROLLER - FREESCALE
M: Dong Aisheng <aisheng.dong@nxp.com>
M: Fabio Estevam <festevam@gmail.com>
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
M: Jacky Bai <ping.bai@nxp.com>
R: Pengutronix Kernel Team <kernel@pengutronix.de>
R: NXP S32 Linux Team <s32@nxp.com>
@ -20985,6 +20990,18 @@ F: Documentation/devicetree/bindings/net/pse-pd/
F: drivers/net/pse-pd/
F: net/ethtool/pse-pd.c
PSP SECURITY PROTOCOL
M: Daniel Zahka <daniel.zahka@gmail.com>
M: Jakub Kicinski <kuba@kernel.org>
M: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
F: Documentation/netlink/specs/psp.yaml
F: Documentation/networking/psp.rst
F: include/net/psp/
F: include/net/psp.h
F: include/uapi/linux/psp.h
F: net/psp/
K: struct\ psp(_assoc|_dev|hdr)\b
PSTORE FILESYSTEM
M: Kees Cook <kees@kernel.org>
R: Tony Luck <tony.luck@intel.com>
@ -23644,7 +23661,7 @@ F: include/uapi/linux/sed*
SECURE MONITOR CALL(SMC) CALLING CONVENTION (SMCCC)
M: Mark Rutland <mark.rutland@arm.com>
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/firmware/smccc/
@ -25408,7 +25425,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd.git
F: drivers/mfd/syscon.c
SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
R: Cristian Marussi <cristian.marussi@arm.com>
L: arm-scmi@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -26560,7 +26577,7 @@ F: samples/tsm-mr/
TRUSTED SERVICES TEE DRIVER
M: Balint Dobszay <balint.dobszay@arm.com>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: trusted-services@lists.trustedfirmware.org
S: Maintained


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc7
EXTRAVERSION = -rc8
NAME = Baby Opossum Posse
# *DOCUMENTATION*
@ -1624,7 +1624,8 @@ MRPROPER_FILES += include/config include/generated \
certs/x509.genkey \
vmlinux-gdb.py \
rpmbuild \
rust/libmacros.so rust/libmacros.dylib
rust/libmacros.so rust/libmacros.dylib \
rust/libpin_init_internal.so rust/libpin_init_internal.dylib
# clean - Delete most, but leave enough to build external modules
#


@ -723,7 +723,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
dpage = pfn_to_page(uvmem_pfn);
dpage->zone_device_data = pvt;
zone_device_page_init(dpage, 0);
zone_device_page_init(dpage, &kvmppc_uvmem_pgmap, 0);
return dpage;
out_clear:
spin_lock(&kvmppc_uvmem_bitmap_lock);


@ -75,26 +75,12 @@ static u32 __init_or_module sifive_errata_probe(unsigned long archid,
return cpu_req_errata;
}
static void __init_or_module warn_miss_errata(u32 miss_errata)
{
int i;
pr_warn("----------------------------------------------------------------\n");
pr_warn("WARNING: Missing the following errata may cause potential issues\n");
for (i = 0; i < ERRATA_SIFIVE_NUMBER; i++)
if (miss_errata & 0x1 << i)
pr_warn("\tSiFive Errata[%d]:%s\n", i, errata_list[i].name);
pr_warn("Please enable the corresponding Kconfig to apply them\n");
pr_warn("----------------------------------------------------------------\n");
}
void sifive_errata_patch_func(struct alt_entry *begin, struct alt_entry *end,
unsigned long archid, unsigned long impid,
unsigned int stage)
{
struct alt_entry *alt;
u32 cpu_req_errata;
u32 cpu_apply_errata = 0;
u32 tmp;
BUILD_BUG_ON(ERRATA_SIFIVE_NUMBER >= RISCV_VENDOR_EXT_ALTERNATIVES_BASE);
@ -118,10 +104,6 @@ void sifive_errata_patch_func(struct alt_entry *begin, struct alt_entry *end,
patch_text_nosync(ALT_OLD_PTR(alt), ALT_ALT_PTR(alt),
alt->alt_len);
mutex_unlock(&text_mutex);
cpu_apply_errata |= tmp;
}
}
if (stage != RISCV_ALTERNATIVES_MODULE &&
cpu_apply_errata != cpu_req_errata)
warn_miss_errata(cpu_req_errata - cpu_apply_errata);
}


@ -2,7 +2,7 @@
#ifndef __ASM_COMPAT_H
#define __ASM_COMPAT_H
#define COMPAT_UTS_MACHINE "riscv\0\0"
#define COMPAT_UTS_MACHINE "riscv32\0\0"
/*
* Architecture specific compatibility types


@ -20,7 +20,7 @@ extern void * const sys_call_table[];
extern void * const compat_sys_call_table[];
/*
* Only the low 32 bits of orig_r0 are meaningful, so we return int.
* Only the low 32 bits of orig_a0 are meaningful, so we return int.
* This importantly ignores the high bits on 64-bit, so comparisons
* sign-extend the low 32 bits.
*/


@ -145,14 +145,14 @@ struct arch_ext_priv {
long (*save)(struct pt_regs *regs, void __user *sc_vec);
};
struct arch_ext_priv arch_ext_list[] = {
static struct arch_ext_priv arch_ext_list[] = {
{
.magic = RISCV_V_MAGIC,
.save = &save_v_state,
},
};
const size_t nr_arch_exts = ARRAY_SIZE(arch_ext_list);
static const size_t nr_arch_exts = ARRAY_SIZE(arch_ext_list);
static long restore_sigcontext(struct pt_regs *regs,
struct sigcontext __user *sc)
@ -297,7 +297,7 @@ static long setup_sigcontext(struct rt_sigframe __user *frame,
} else {
err |= __put_user(arch_ext->magic, &sc_ext_ptr->magic);
err |= __put_user(ext_size, &sc_ext_ptr->size);
sc_ext_ptr = (void *)sc_ext_ptr + ext_size;
sc_ext_ptr = (void __user *)sc_ext_ptr + ext_size;
}
}
/* Write zero to fp-reserved space and check it on restore_sigcontext */


@ -42,7 +42,7 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
unsigned int level;
pte_t *pte = lookup_address(addr, &level);
pteval_t val;
pteval_t val, new;
if (WARN_ON(!pte || level != PG_LEVEL_4K))
return false;
@ -57,11 +57,12 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
return true;
/*
* Otherwise, invert the entire PTE. This avoids writing out an
* Otherwise, flip the Present bit, taking care to avoid writing an
* L1TF-vulnerable PTE (not present, without the high address bits
* set).
*/
set_pte(pte, __pte(~val));
new = val ^ _PAGE_PRESENT;
set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
/*
* If the page was protected (non-present) and we're making it
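The replacement above flips only the Present bit instead of inverting the whole PTE, so a non-present entry keeps its high address bits set (avoiding an L1TF-vulnerable encoding). A minimal userspace sketch of the idea, with a made-up bit position standing in for `_PAGE_PRESENT` and ignoring the `flip_protnone_guard()` detail:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the real kernel constant. */
#define PAGE_PRESENT 0x1ULL

/* Toggle only the Present bit, leaving the PFN and all other bits
 * intact, so a now-non-present PTE still carries its address bits. */
static uint64_t toggle_present(uint64_t val)
{
	return val ^ PAGE_PRESENT;
}
```

Applying the toggle twice restores the original value, which is what lets kfence protect and unprotect the same page repeatedly.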


@ -514,7 +514,8 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
*/
spin_lock_irq(&kvm->irqfds.lock);
if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI) {
if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI ||
WARN_ON_ONCE(irqfd->irq_bypass_vcpu)) {
ret = kvm_pi_update_irte(irqfd, NULL);
if (ret)
pr_info("irq bypass consumer (eventfd %p) unregistration fails: %d\n",


@ -376,6 +376,7 @@ void avic_init_vmcb(struct vcpu_svm *svm, struct vmcb *vmcb)
static int avic_init_backing_page(struct kvm_vcpu *vcpu)
{
u32 max_id = x2avic_enabled ? x2avic_max_physical_id : AVIC_MAX_PHYSICAL_ID;
struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
struct vcpu_svm *svm = to_svm(vcpu);
u32 id = vcpu->vcpu_id;
@ -388,8 +389,7 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
* avic_vcpu_load() expects to be called if and only if the vCPU has
* fully initialized AVIC.
*/
if ((!x2avic_enabled && id > AVIC_MAX_PHYSICAL_ID) ||
(id > x2avic_max_physical_id)) {
if (id > max_id) {
kvm_set_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_PHYSICAL_ID_TOO_BIG);
vcpu->arch.apic->apicv_active = false;
return 0;
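The rewrite above computes the applicable limit once, so a single comparison replaces the old two-clause check. A rough sketch of the simplified logic, using an illustrative limit value rather than the real APIC constants:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative value only; the real limit comes from the AVIC spec. */
#define AVIC_MAX_PHYSICAL_ID 255u

/* Pick the applicable limit up front; one comparison then suffices. */
static bool id_too_big(uint32_t id, bool x2avic_enabled,
		       uint32_t x2avic_max_physical_id)
{
	uint32_t max_id = x2avic_enabled ? x2avic_max_physical_id
					 : AVIC_MAX_PHYSICAL_ID;

	return id > max_id;
}
```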


@ -5284,6 +5284,8 @@ static __init void svm_set_cpu_caps(void)
*/
kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
kvm_setup_xss_caps();
}
static __init int svm_hardware_setup(void)


@ -8051,6 +8051,8 @@ static __init void vmx_set_cpu_caps(void)
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
}
kvm_setup_xss_caps();
}
static bool vmx_is_io_intercepted(struct kvm_vcpu *vcpu,


@ -9953,6 +9953,23 @@ static struct notifier_block pvclock_gtod_notifier = {
};
#endif
void kvm_setup_xss_caps(void)
{
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
kvm_caps.supported_xss = 0;
if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
!kvm_cpu_cap_has(X86_FEATURE_IBT))
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
}
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_setup_xss_caps);
static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
{
memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
@ -10125,19 +10142,6 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
if (!tdp_enabled)
kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
kvm_caps.supported_xss = 0;
if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
!kvm_cpu_cap_has(X86_FEATURE_IBT))
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
}
if (kvm_caps.has_tsc_control) {
/*
* Make sure the user can only configure tsc_khz values that


@ -471,6 +471,8 @@ extern struct kvm_host_values kvm_host;
extern bool enable_pmu;
void kvm_setup_xss_caps(void);
/*
* Get a filtered version of KVM's supported XCR0 that strips out dynamic
* features for which the current process doesn't (yet) have permission to use.


@ -1662,6 +1662,7 @@ static void destroy_sysfs(struct rnbd_clt_dev *dev,
/* To avoid deadlock firstly remove itself */
sysfs_remove_file_self(&dev->kobj, sysfs_self);
kobject_del(&dev->kobj);
kobject_put(&dev->kobj);
}
}


@ -142,6 +142,12 @@ static const struct of_device_id simple_pm_bus_of_match[] = {
{ .compatible = "simple-mfd", .data = ONLY_BUS },
{ .compatible = "isa", .data = ONLY_BUS },
{ .compatible = "arm,amba-bus", .data = ONLY_BUS },
{ .compatible = "fsl,ls1021a-scfg", },
{ .compatible = "fsl,ls1043a-scfg", },
{ .compatible = "fsl,ls1046a-scfg", },
{ .compatible = "fsl,ls1088a-isc", },
{ .compatible = "fsl,ls2080a-isc", },
{ .compatible = "fsl,lx2160a-isc", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);


@ -263,6 +263,7 @@ static const struct of_device_id qcom_cpufreq_ipq806x_match_list[] __maybe_unuse
{ .compatible = "qcom,ipq8066", .data = (const void *)QCOM_ID_IPQ8066 },
{ .compatible = "qcom,ipq8068", .data = (const void *)QCOM_ID_IPQ8068 },
{ .compatible = "qcom,ipq8069", .data = (const void *)QCOM_ID_IPQ8069 },
{ /* sentinel */ }
};
static int qcom_cpufreq_ipq8064_name_version(struct device *cpu_dev,


@ -19,12 +19,6 @@
MODULE_IMPORT_NS("PCI_IDE");
#define TIO_DEFAULT_NR_IDE_STREAMS 1
static uint nr_ide_streams = TIO_DEFAULT_NR_IDE_STREAMS;
module_param_named(ide_nr, nr_ide_streams, uint, 0644);
MODULE_PARM_DESC(ide_nr, "Set the maximum number of IDE streams per PHB");
#define dev_to_sp(dev) ((struct sp_device *)dev_get_drvdata(dev))
#define dev_to_psp(dev) ((struct psp_device *)(dev_to_sp(dev)->psp_data))
#define dev_to_sev(dev) ((struct sev_device *)(dev_to_psp(dev)->sev_data))
@ -193,7 +187,6 @@ static void streams_teardown(struct pci_ide **ide)
static int stream_alloc(struct pci_dev *pdev, struct pci_ide **ide,
unsigned int tc)
{
struct pci_dev *rp = pcie_find_root_port(pdev);
struct pci_ide *ide1;
if (ide[tc]) {
@ -201,17 +194,11 @@ static int stream_alloc(struct pci_dev *pdev, struct pci_ide **ide,
return -EBUSY;
}
/* FIXME: find a better way */
if (nr_ide_streams != TIO_DEFAULT_NR_IDE_STREAMS)
pci_notice(pdev, "Enable non-default %d streams", nr_ide_streams);
pci_ide_set_nr_streams(to_pci_host_bridge(rp->bus->bridge), nr_ide_streams);
ide1 = pci_ide_stream_alloc(pdev);
if (!ide1)
return -EFAULT;
/* Blindly assign streamid=0 to TC=0, and so on */
ide1->stream_id = tc;
ide1->stream_id = ide1->host_bridge_stream;
ide[tc] = ide1;


@ -173,20 +173,14 @@ static void split_transaction_timeout_callback(struct timer_list *timer)
}
}
static void start_split_transaction_timeout(struct fw_transaction *t,
struct fw_card *card)
// card->transactions.lock should be acquired in advance for the linked list.
static void start_split_transaction_timeout(struct fw_transaction *t, unsigned int delta)
{
unsigned long delta;
if (list_empty(&t->link) || WARN_ON(t->is_split_transaction))
return;
t->is_split_transaction = true;
// NOTE: This can be without irqsave when we can guarantee that __fw_send_request() for
// local destination never runs in any type of IRQ context.
scoped_guard(spinlock_irqsave, &card->split_timeout.lock)
delta = card->split_timeout.jiffies;
mod_timer(&t->split_timeout_timer, jiffies + delta);
}
@ -207,13 +201,20 @@ static void transmit_complete_callback(struct fw_packet *packet,
break;
case ACK_PENDING:
{
unsigned int delta;
// NOTE: This can be without irqsave when we can guarantee that __fw_send_request() for
// local destination never runs in any type of IRQ context.
scoped_guard(spinlock_irqsave, &card->split_timeout.lock) {
t->split_timeout_cycle =
compute_split_timeout_timestamp(card, packet->timestamp) & 0xffff;
delta = card->split_timeout.jiffies;
}
start_split_transaction_timeout(t, card);
// NOTE: This can be without irqsave when we can guarantee that __fw_send_request() for
// local destination never runs in any type of IRQ context.
scoped_guard(spinlock_irqsave, &card->transactions.lock)
start_split_transaction_timeout(t, delta);
break;
}
case ACK_BUSY_X:


@ -301,12 +301,10 @@ static struct brcmstb_gpio_bank *brcmstb_gpio_hwirq_to_bank(
struct brcmstb_gpio_priv *priv, irq_hw_number_t hwirq)
{
struct brcmstb_gpio_bank *bank;
int i = 0;
/* banks are in descending order */
list_for_each_entry_reverse(bank, &priv->bank_list, node) {
i += bank->chip.gc.ngpio;
if (hwirq < i)
list_for_each_entry(bank, &priv->bank_list, node) {
if (hwirq >= bank->chip.gc.offset &&
hwirq < (bank->chip.gc.offset + bank->chip.gc.ngpio))
return bank;
}
return NULL;
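The fixed lookup walks the bank list in natural order and tests each bank's own `[offset, offset + ngpio)` window, instead of accumulating sizes while iterating in reverse. A sketch with a simplified stand-in for the per-bank fields:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the gpio_chip fields used above. */
struct bank {
	unsigned int offset; /* first hwirq handled by this bank */
	unsigned int ngpio;  /* number of lines in the bank */
};

/* Return the bank whose [offset, offset + ngpio) window contains
 * hwirq, or NULL when no bank matches. */
static struct bank *hwirq_to_bank(struct bank *banks, size_t n,
				  unsigned long hwirq)
{
	for (size_t i = 0; i < n; i++) {
		if (hwirq >= banks[i].offset &&
		    hwirq < banks[i].offset + banks[i].ngpio)
			return &banks[i];
	}
	return NULL;
}
```

Because each bank carries its own offset, the result no longer depends on the list being sorted in descending order.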


@ -799,10 +799,13 @@ static struct platform_device omap_mpuio_device = {
static inline void omap_mpuio_init(struct gpio_bank *bank)
{
platform_set_drvdata(&omap_mpuio_device, bank);
static bool registered;
if (platform_driver_register(&omap_mpuio_driver) == 0)
(void) platform_device_register(&omap_mpuio_device);
platform_set_drvdata(&omap_mpuio_device, bank);
if (!registered) {
(void)platform_device_register(&omap_mpuio_device);
registered = true;
}
}
/*---------------------------------------------------------------------*/
@ -1575,13 +1578,24 @@ static struct platform_driver omap_gpio_driver = {
*/
static int __init omap_gpio_drv_reg(void)
{
return platform_driver_register(&omap_gpio_driver);
int ret;
ret = platform_driver_register(&omap_mpuio_driver);
if (ret)
return ret;
ret = platform_driver_register(&omap_gpio_driver);
if (ret)
platform_driver_unregister(&omap_mpuio_driver);
return ret;
}
postcore_initcall(omap_gpio_drv_reg);
static void __exit omap_gpio_exit(void)
{
platform_driver_unregister(&omap_gpio_driver);
platform_driver_unregister(&omap_mpuio_driver);
}
module_exit(omap_gpio_exit);


@ -914,6 +914,8 @@ static void pca953x_irq_shutdown(struct irq_data *d)
clear_bit(hwirq, chip->irq_trig_fall);
clear_bit(hwirq, chip->irq_trig_level_low);
clear_bit(hwirq, chip->irq_trig_level_high);
pca953x_irq_mask(d);
}
static void pca953x_irq_print_chip(struct irq_data *data, struct seq_file *p)


@ -18,7 +18,6 @@
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/pinctrl/consumer.h>
#include <linux/pinctrl/pinconf-generic.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
@ -164,12 +163,6 @@ static int rockchip_gpio_set_direction(struct gpio_chip *chip,
unsigned long flags;
u32 data = input ? 0 : 1;
if (input)
pinctrl_gpio_direction_input(chip, offset);
else
pinctrl_gpio_direction_output(chip, offset);
raw_spin_lock_irqsave(&bank->slock, flags);
rockchip_gpio_writel_bit(bank, offset, data, bank->gpio_regs->port_ddr);
raw_spin_unlock_irqrestore(&bank->slock, flags);
@ -593,7 +586,6 @@ static int rockchip_gpiolib_register(struct rockchip_pin_bank *bank)
gc->ngpio = bank->nr_pins;
gc->label = bank->name;
gc->parent = bank->dev;
gc->can_sleep = true;
ret = gpiochip_add_data(gc, bank);
if (ret) {


@ -35,7 +35,7 @@
struct sprd_gpio {
struct gpio_chip chip;
void __iomem *base;
spinlock_t lock;
raw_spinlock_t lock;
int irq;
};
@ -54,7 +54,7 @@ static void sprd_gpio_update(struct gpio_chip *chip, unsigned int offset,
unsigned long flags;
u32 tmp;
spin_lock_irqsave(&sprd_gpio->lock, flags);
raw_spin_lock_irqsave(&sprd_gpio->lock, flags);
tmp = readl_relaxed(base + reg);
if (val)
@ -63,7 +63,7 @@ static void sprd_gpio_update(struct gpio_chip *chip, unsigned int offset,
tmp &= ~BIT(SPRD_GPIO_BIT(offset));
writel_relaxed(tmp, base + reg);
spin_unlock_irqrestore(&sprd_gpio->lock, flags);
raw_spin_unlock_irqrestore(&sprd_gpio->lock, flags);
}
static int sprd_gpio_read(struct gpio_chip *chip, unsigned int offset, u16 reg)
@ -236,7 +236,7 @@ static int sprd_gpio_probe(struct platform_device *pdev)
if (IS_ERR(sprd_gpio->base))
return PTR_ERR(sprd_gpio->base);
spin_lock_init(&sprd_gpio->lock);
raw_spin_lock_init(&sprd_gpio->lock);
sprd_gpio->chip.label = dev_name(&pdev->dev);
sprd_gpio->chip.ngpio = SPRD_GPIO_NR;


@ -1682,10 +1682,10 @@ static void gpio_virtuser_device_config_group_release(struct config_item *item)
{
struct gpio_virtuser_device *dev = to_gpio_virtuser_device(item);
guard(mutex)(&dev->lock);
if (gpio_virtuser_device_is_live(dev))
gpio_virtuser_device_deactivate(dev);
scoped_guard(mutex, &dev->lock) {
if (gpio_virtuser_device_is_live(dev))
gpio_virtuser_device_deactivate(dev);
}
mutex_destroy(&dev->lock);
ida_free(&gpio_virtuser_ida, dev->id);


@ -1104,6 +1104,7 @@ acpi_gpio_adr_space_handler(u32 function, acpi_physical_address address,
unsigned int pin = agpio->pin_table[i];
struct acpi_gpio_connection *conn;
struct gpio_desc *desc;
u16 word, shift;
bool found;
mutex_lock(&achip->conn_lock);
@ -1158,10 +1159,22 @@ acpi_gpio_adr_space_handler(u32 function, acpi_physical_address address,
mutex_unlock(&achip->conn_lock);
if (function == ACPI_WRITE)
gpiod_set_raw_value_cansleep(desc, !!(*value & BIT(i)));
else
*value |= (u64)gpiod_get_raw_value_cansleep(desc) << i;
/*
* For the cases when OperationRegion() consists of more than
* 64 bits calculate the word and bit shift to use that one to
* access the value.
*/
word = i / 64;
shift = i % 64;
if (function == ACPI_WRITE) {
gpiod_set_raw_value_cansleep(desc, value[word] & BIT_ULL(shift));
} else {
if (gpiod_get_raw_value_cansleep(desc))
value[word] |= BIT_ULL(shift);
else
value[word] &= ~BIT_ULL(shift);
}
}
out:
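The `word`/`shift` arithmetic above treats `value` as an array of 64-bit words, so OperationRegions wider than 64 bits index into the correct word instead of overflowing a single `u64`. A minimal sketch of that indexing, with a local `BIT_ULL` stand-in:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BIT_ULL(n) (1ULL << (n))

/* Set or clear bit i in a multi-word value, as the handler above
 * does for regions wider than 64 bits. */
static void set_bit_multiword(uint64_t *value, unsigned int i, bool on)
{
	unsigned int word = i / 64, shift = i % 64;

	if (on)
		value[word] |= BIT_ULL(shift);
	else
		value[word] &= ~BIT_ULL(shift);
}

static bool test_bit_multiword(const uint64_t *value, unsigned int i)
{
	return value[i / 64] & BIT_ULL(i % 64);
}
```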


@ -498,8 +498,13 @@ void amdgpu_gmc_filter_faults_remove(struct amdgpu_device *adev, uint64_t addr,
if (adev->irq.retry_cam_enabled)
return;
else if (adev->irq.ih1.ring_size)
ih = &adev->irq.ih1;
else if (adev->irq.ih_soft.enabled)
ih = &adev->irq.ih_soft;
else
return;
ih = &adev->irq.ih1;
/* Get the WPTR of the last entry in IH ring */
last_wptr = amdgpu_ih_get_wptr(adev, ih);
/* Order wptr with ring data. */


@ -235,7 +235,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
amdgpu_ring_ib_begin(ring);
if (ring->funcs->emit_gfx_shadow)
if (ring->funcs->emit_gfx_shadow && adev->gfx.cp_gfx_shadow)
amdgpu_ring_emit_gfx_shadow(ring, shadow_va, csa_va, gds_va,
init_shadow, vmid);
@ -291,7 +291,8 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
fence_flags | AMDGPU_FENCE_FLAG_64BIT);
}
if (ring->funcs->emit_gfx_shadow && ring->funcs->init_cond_exec) {
if (ring->funcs->emit_gfx_shadow && ring->funcs->init_cond_exec &&
adev->gfx.cp_gfx_shadow) {
amdgpu_ring_emit_gfx_shadow(ring, 0, 0, 0, false, 0);
amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
}


@ -6879,7 +6879,7 @@ static int gfx_v10_0_kgq_init_queue(struct amdgpu_ring *ring, bool reset)
memcpy_toio(mqd, adev->gfx.me.mqd_backup[mqd_idx], sizeof(*mqd));
/* reset the ring */
ring->wptr = 0;
*ring->wptr_cpu_addr = 0;
atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
amdgpu_ring_clear_ring(ring);
}


@ -4201,7 +4201,7 @@ static int gfx_v11_0_kgq_init_queue(struct amdgpu_ring *ring, bool reset)
memcpy_toio(mqd, adev->gfx.me.mqd_backup[mqd_idx], sizeof(*mqd));
/* reset the ring */
ring->wptr = 0;
*ring->wptr_cpu_addr = 0;
atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
amdgpu_ring_clear_ring(ring);
}
@ -6823,11 +6823,12 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
struct amdgpu_fence *timedout_fence)
{
struct amdgpu_device *adev = ring->adev;
bool use_mmio = false;
int r;
amdgpu_ring_reset_helper_begin(ring, timedout_fence);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio);
if (r) {
dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r);
@ -6836,16 +6837,18 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
r = gfx_v11_0_kgq_init_queue(ring, true);
if (r) {
dev_err(adev->dev, "failed to init kgq\n");
return r;
}
if (use_mmio) {
r = gfx_v11_0_kgq_init_queue(ring, true);
if (r) {
dev_err(adev->dev, "failed to init kgq\n");
return r;
}
r = amdgpu_mes_map_legacy_queue(adev, ring);
if (r) {
dev_err(adev->dev, "failed to remap kgq\n");
return r;
r = amdgpu_mes_map_legacy_queue(adev, ring);
if (r) {
dev_err(adev->dev, "failed to remap kgq\n");
return r;
}
}
return amdgpu_ring_reset_helper_end(ring, timedout_fence);


@ -3079,7 +3079,7 @@ static int gfx_v12_0_kgq_init_queue(struct amdgpu_ring *ring, bool reset)
memcpy_toio(mqd, adev->gfx.me.mqd_backup[mqd_idx], sizeof(*mqd));
/* reset the ring */
ring->wptr = 0;
*ring->wptr_cpu_addr = 0;
atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
amdgpu_ring_clear_ring(ring);
}
@ -5297,11 +5297,12 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
struct amdgpu_fence *timedout_fence)
{
struct amdgpu_device *adev = ring->adev;
bool use_mmio = false;
int r;
amdgpu_ring_reset_helper_begin(ring, timedout_fence);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio);
if (r) {
dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r);
r = gfx_v12_reset_gfx_pipe(ring);
@ -5309,16 +5310,18 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
r = gfx_v12_0_kgq_init_queue(ring, true);
if (r) {
dev_err(adev->dev, "failed to init kgq\n");
return r;
}
if (use_mmio) {
r = gfx_v12_0_kgq_init_queue(ring, true);
if (r) {
dev_err(adev->dev, "failed to init kgq\n");
return r;
}
r = amdgpu_mes_map_legacy_queue(adev, ring);
if (r) {
dev_err(adev->dev, "failed to remap kgq\n");
return r;
r = amdgpu_mes_map_legacy_queue(adev, ring);
if (r) {
dev_err(adev->dev, "failed to remap kgq\n");
return r;
}
}
return amdgpu_ring_reset_helper_end(ring, timedout_fence);


@ -225,7 +225,13 @@ static u32 soc21_get_config_memsize(struct amdgpu_device *adev)
static u32 soc21_get_xclk(struct amdgpu_device *adev)
{
return adev->clock.spll.reference_freq;
u32 reference_clock = adev->clock.spll.reference_freq;
/* reference clock is actually 99.81 Mhz rather than 100 Mhz */
if ((adev->flags & AMD_IS_APU) && reference_clock == 10000)
return 9981;
return reference_clock;
}


@ -217,7 +217,7 @@ svm_migrate_get_vram_page(struct svm_range *prange, unsigned long pfn)
page = pfn_to_page(pfn);
svm_range_bo_ref(prange->svm_bo);
page->zone_device_data = prange->svm_bo;
zone_device_page_init(page, 0);
zone_device_page_init(page, page_pgmap(page), 0);
}
static void


@ -7754,10 +7754,12 @@ static void amdgpu_dm_connector_destroy(struct drm_connector *connector)
drm_dp_mst_topology_mgr_destroy(&aconnector->mst_mgr);
/* Cancel and flush any pending HDMI HPD debounce work */
cancel_delayed_work_sync(&aconnector->hdmi_hpd_debounce_work);
if (aconnector->hdmi_prev_sink) {
dc_sink_release(aconnector->hdmi_prev_sink);
aconnector->hdmi_prev_sink = NULL;
if (aconnector->hdmi_hpd_debounce_delay_ms) {
cancel_delayed_work_sync(&aconnector->hdmi_hpd_debounce_work);
if (aconnector->hdmi_prev_sink) {
dc_sink_release(aconnector->hdmi_prev_sink);
aconnector->hdmi_prev_sink = NULL;
}
}
if (aconnector->bl_idx != -1) {


@ -80,15 +80,15 @@ int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
enum ip_power_state pwr_state = gate ? POWER_STATE_OFF : POWER_STATE_ON;
bool is_vcn = block_type == AMD_IP_BLOCK_TYPE_VCN;
mutex_lock(&adev->pm.mutex);
if (atomic_read(&adev->pm.pwr_state[block_type]) == pwr_state &&
(!is_vcn || adev->vcn.num_vcn_inst == 1)) {
dev_dbg(adev->dev, "IP block%d already in the target %s state!",
block_type, gate ? "gate" : "ungate");
return 0;
goto out_unlock;
}
mutex_lock(&adev->pm.mutex);
switch (block_type) {
case AMD_IP_BLOCK_TYPE_UVD:
case AMD_IP_BLOCK_TYPE_VCE:
@ -115,6 +115,7 @@ int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
if (!ret)
atomic_set(&adev->pm.pwr_state[block_type], pwr_state);
out_unlock:
mutex_unlock(&adev->pm.mutex);
return ret;


@ -56,6 +56,7 @@
#define SMUQ10_TO_UINT(x) ((x) >> 10)
#define SMUQ10_FRAC(x) ((x) & 0x3ff)
#define SMUQ10_ROUND(x) ((SMUQ10_TO_UINT(x)) + ((SMUQ10_FRAC(x)) >= 0x200))
#define SMU_V13_SOFT_FREQ_ROUND(x) ((x) + 1)
extern const int pmfw_decoded_link_speed[5];
extern const int pmfw_decoded_link_width[7];
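The Q10 helpers in this header implement round-to-nearest on fixed-point values with 10 fractional bits; 0x200 is exactly one half in Q10. A compilable sketch with the macros copied verbatim from the hunk above:

```c
/* Q10 fixed-point helpers as defined in the SMU v13 header: integer
 * part, fractional part, and round-to-nearest (0x200 == 0.5 in Q10). */
#define SMUQ10_TO_UINT(x) ((x) >> 10)
#define SMUQ10_FRAC(x) ((x) & 0x3ff)
#define SMUQ10_ROUND(x) ((SMUQ10_TO_UINT(x)) + ((SMUQ10_FRAC(x)) >= 0x200))
```

For example, 1536 is 1.5 in Q10: truncation gives 1, rounding gives 2.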


@ -57,6 +57,7 @@ extern const int decoded_link_width[8];
#define DECODE_GEN_SPEED(gen_speed_idx) (decoded_link_speed[gen_speed_idx])
#define DECODE_LANE_WIDTH(lane_width_idx) (decoded_link_width[lane_width_idx])
#define SMU_V14_SOFT_FREQ_ROUND(x) ((x) + 1)
struct smu_14_0_max_sustainable_clocks {
uint32_t display_clock;


@ -1555,6 +1555,7 @@ int smu_v13_0_set_soft_freq_limited_range(struct smu_context *smu,
return clk_id;
if (max > 0) {
max = SMU_V13_SOFT_FREQ_ROUND(max);
if (automatic)
param = (uint32_t)((clk_id << 16) | 0xffff);
else


@ -1178,6 +1178,7 @@ int smu_v14_0_set_soft_freq_limited_range(struct smu_context *smu,
return clk_id;
if (max > 0) {
max = SMU_V14_SOFT_FREQ_ROUND(max);
if (automatic)
param = (uint32_t)((clk_id << 16) | 0xffff);
else


@ -960,16 +960,21 @@ int drm_gem_change_handle_ioctl(struct drm_device *dev, void *data,
{
struct drm_gem_change_handle *args = data;
struct drm_gem_object *obj;
int ret;
int handle, ret;
if (!drm_core_check_feature(dev, DRIVER_GEM))
return -EOPNOTSUPP;
/* idr_alloc() limitation. */
if (args->new_handle > INT_MAX)
return -EINVAL;
handle = args->new_handle;
obj = drm_gem_object_lookup(file_priv, args->handle);
if (!obj)
return -ENOENT;
if (args->handle == args->new_handle) {
if (args->handle == handle) {
ret = 0;
goto out;
}
@ -977,18 +982,19 @@ int drm_gem_change_handle_ioctl(struct drm_device *dev, void *data,
mutex_lock(&file_priv->prime.lock);
spin_lock(&file_priv->table_lock);
ret = idr_alloc(&file_priv->object_idr, obj,
args->new_handle, args->new_handle + 1, GFP_NOWAIT);
ret = idr_alloc(&file_priv->object_idr, obj, handle, handle + 1,
GFP_NOWAIT);
spin_unlock(&file_priv->table_lock);
if (ret < 0)
goto out_unlock;
if (obj->dma_buf) {
ret = drm_prime_add_buf_handle(&file_priv->prime, obj->dma_buf, args->new_handle);
ret = drm_prime_add_buf_handle(&file_priv->prime, obj->dma_buf,
handle);
if (ret < 0) {
spin_lock(&file_priv->table_lock);
idr_remove(&file_priv->object_idr, args->new_handle);
idr_remove(&file_priv->object_idr, handle);
spin_unlock(&file_priv->table_lock);
goto out_unlock;
}
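The new `INT_MAX` check exists because the ioctl's handle is a u32 while `idr_alloc()` takes `int` bounds, so the value must be rejected before it is narrowed. A hedged sketch of just that validation step (the helper name is illustrative and the errno constant is inlined):

```c
#include <limits.h>
#include <stdint.h>

#define EINVAL_SKETCH 22 /* stand-in for the kernel's EINVAL */

/* Sketch of the check added to drm_gem_change_handle_ioctl(): a u32
 * handle from userspace must fit in a non-negative int before it can
 * be handed to idr_alloc(). */
static int validate_handle_sketch(uint32_t new_handle, int *out)
{
	if (new_handle > INT_MAX)
		return -EINVAL_SKETCH;
	*out = (int)new_handle;
	return 0;
}
```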


@ -197,7 +197,7 @@ static void drm_pagemap_get_devmem_page(struct page *page,
struct drm_pagemap_zdd *zdd)
{
page->zone_device_data = drm_pagemap_zdd_get(zdd);
zone_device_page_init(page, 0);
zone_device_page_init(page, page_pgmap(page), 0);
}
/**


@ -528,6 +528,13 @@ static const struct component_ops imx_tve_ops = {
.bind = imx_tve_bind,
};
static void imx_tve_put_device(void *_dev)
{
struct device *dev = _dev;
put_device(dev);
}
static int imx_tve_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -549,6 +556,12 @@ static int imx_tve_probe(struct platform_device *pdev)
if (ddc_node) {
tve->ddc = of_find_i2c_adapter_by_node(ddc_node);
of_node_put(ddc_node);
if (tve->ddc) {
ret = devm_add_action_or_reset(dev, imx_tve_put_device,
&tve->ddc->dev);
if (ret)
return ret;
}
}
tve->mode = of_get_tve_mode(np);


@ -501,8 +501,6 @@ static const struct adreno_reglist a690_hwcg[] = {
{REG_A6XX_RBBM_CLOCK_CNTL_GMU_GX, 0x00000222},
{REG_A6XX_RBBM_CLOCK_DELAY_GMU_GX, 0x00000111},
{REG_A6XX_RBBM_CLOCK_HYST_GMU_GX, 0x00000555},
{REG_A6XX_GPU_GMU_AO_GMU_CGC_DELAY_CNTL, 0x10111},
{REG_A6XX_GPU_GMU_AO_GMU_CGC_HYST_CNTL, 0x5555},
{}
};


@ -425,7 +425,7 @@ nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm, bool is_large)
order = ilog2(DMEM_CHUNK_NPAGES);
}
zone_device_folio_init(folio, order);
zone_device_folio_init(folio, page_pgmap(folio_page(folio, 0)), order);
return page;
}


@ -6,6 +6,7 @@ config DRM_TYR
depends on RUST
depends on ARM || ARM64 || COMPILE_TEST
depends on !GENERIC_ATOMIC64 # for IOMMU_IO_PGTABLE_LPAE
depends on COMMON_CLK
default n
help
Rust DRM driver for ARM Mali CSF-based GPUs.


@ -347,11 +347,10 @@ static bool is_bound(struct xe_config_group_device *dev)
return false;
ret = pci_get_drvdata(pdev);
pci_dev_put(pdev);
if (ret)
pci_dbg(pdev, "Already bound to driver\n");
pci_dev_put(pdev);
return ret;
}


@ -984,8 +984,6 @@ void xe_device_remove(struct xe_device *xe)
{
xe_display_unregister(xe);
xe_nvm_fini(xe);
drm_dev_unplug(&xe->drm);
xe_bo_pci_dev_remove_all(xe);


@ -190,9 +190,9 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
goto err_syncs;
}
if (xe_exec_queue_is_parallel(q)) {
err = copy_from_user(addresses, addresses_user, sizeof(u64) *
q->width);
if (args->num_batch_buffer && xe_exec_queue_is_parallel(q)) {
err = copy_from_user(addresses, addresses_user,
sizeof(u64) * q->width);
if (err) {
err = -EFAULT;
goto err_syncs;


@ -1185,7 +1185,7 @@ static ssize_t setup_invalidate_state_cache_wa(struct xe_lrc *lrc,
return -ENOSPC;
*cmd++ = MI_LOAD_REGISTER_IMM | MI_LRI_NUM_REGS(1);
*cmd++ = CS_DEBUG_MODE1(0).addr;
*cmd++ = CS_DEBUG_MODE2(0).addr;
*cmd++ = _MASKED_BIT_ENABLE(INSTRUCTION_STATE_CACHE_INVALIDATE);
return cmd - batch;


@ -83,6 +83,27 @@ static bool xe_nvm_writable_override(struct xe_device *xe)
return writable_override;
}
static void xe_nvm_fini(void *arg)
{
struct xe_device *xe = arg;
struct intel_dg_nvm_dev *nvm = xe->nvm;
if (!xe->info.has_gsc_nvm)
return;
/* No access to internal NVM from VFs */
if (IS_SRIOV_VF(xe))
return;
/* Nvm pointer should not be NULL here */
if (WARN_ON(!nvm))
return;
auxiliary_device_delete(&nvm->aux_dev);
auxiliary_device_uninit(&nvm->aux_dev);
xe->nvm = NULL;
}
int xe_nvm_init(struct xe_device *xe)
{
struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
@ -132,39 +153,17 @@ int xe_nvm_init(struct xe_device *xe)
ret = auxiliary_device_init(aux_dev);
if (ret) {
drm_err(&xe->drm, "xe-nvm aux init failed %d\n", ret);
goto err;
kfree(nvm);
xe->nvm = NULL;
return ret;
}
ret = auxiliary_device_add(aux_dev);
if (ret) {
drm_err(&xe->drm, "xe-nvm aux add failed %d\n", ret);
auxiliary_device_uninit(aux_dev);
goto err;
xe->nvm = NULL;
return ret;
}
return 0;
err:
kfree(nvm);
xe->nvm = NULL;
return ret;
}
void xe_nvm_fini(struct xe_device *xe)
{
struct intel_dg_nvm_dev *nvm = xe->nvm;
if (!xe->info.has_gsc_nvm)
return;
/* No access to internal NVM from VFs */
if (IS_SRIOV_VF(xe))
return;
/* Nvm pointer should not be NULL here */
if (WARN_ON(!nvm))
return;
auxiliary_device_delete(&nvm->aux_dev);
auxiliary_device_uninit(&nvm->aux_dev);
xe->nvm = NULL;
return devm_add_action_or_reset(xe->drm.dev, xe_nvm_fini, xe);
}


@ -10,6 +10,4 @@ struct xe_device;
int xe_nvm_init(struct xe_device *xe);
void xe_nvm_fini(struct xe_device *xe);
#endif


@ -342,7 +342,6 @@ static const struct xe_device_desc lnl_desc = {
.has_display = true,
.has_flat_ccs = 1,
.has_pxp = true,
.has_mem_copy_instr = true,
.max_gt_per_tile = 2,
.needs_scratch = true,
.va_bits = 48,
@ -363,7 +362,6 @@ static const struct xe_device_desc bmg_desc = {
.has_heci_cscfi = 1,
.has_late_bind = true,
.has_sriov = true,
.has_mem_copy_instr = true,
.max_gt_per_tile = 2,
.needs_scratch = true,
.subplatforms = (const struct xe_subplatform_desc[]) {
@ -380,7 +378,6 @@ static const struct xe_device_desc ptl_desc = {
.has_display = true,
.has_flat_ccs = 1,
.has_sriov = true,
.has_mem_copy_instr = true,
.max_gt_per_tile = 2,
.needs_scratch = true,
.needs_shared_vf_gt_wq = true,
@ -393,7 +390,6 @@ static const struct xe_device_desc nvls_desc = {
.dma_mask_size = 46,
.has_display = true,
.has_flat_ccs = 1,
.has_mem_copy_instr = true,
.max_gt_per_tile = 2,
.require_force_probe = true,
.va_bits = 48,
@ -675,7 +671,6 @@ static int xe_info_init_early(struct xe_device *xe,
xe->info.has_pxp = desc->has_pxp;
xe->info.has_sriov = xe_configfs_primary_gt_allowed(to_pci_dev(xe->drm.dev)) &&
desc->has_sriov;
xe->info.has_mem_copy_instr = desc->has_mem_copy_instr;
xe->info.skip_guc_pc = desc->skip_guc_pc;
xe->info.skip_mtcfg = desc->skip_mtcfg;
xe->info.skip_pcode = desc->skip_pcode;
@ -864,6 +859,7 @@ static int xe_info_init(struct xe_device *xe,
xe->info.has_range_tlb_inval = graphics_desc->has_range_tlb_inval;
xe->info.has_usm = graphics_desc->has_usm;
xe->info.has_64bit_timestamp = graphics_desc->has_64bit_timestamp;
xe->info.has_mem_copy_instr = GRAPHICS_VER(xe) >= 20;
xe_info_probe_tile_count(xe);


@ -46,7 +46,6 @@ struct xe_device_desc {
u8 has_late_bind:1;
u8 has_llc:1;
u8 has_mbx_power_limits:1;
u8 has_mem_copy_instr:1;
u8 has_pxp:1;
u8 has_sriov:1;
u8 needs_scratch:1;


@ -1078,6 +1078,9 @@ static int tegra241_vcmdq_hw_init_user(struct tegra241_vcmdq *vcmdq)
{
char header[64];
/* Reset VCMDQ */
tegra241_vcmdq_hw_deinit(vcmdq);
/* Configure the vcmdq only; User space does the enabling */
writeq_relaxed(vcmdq->cmdq.q.q_base, REG_VCMDQ_PAGE1(vcmdq, BASE));


@ -931,6 +931,8 @@ static __maybe_unused int __unmap_range(struct pt_range *range, void *arg,
struct pt_table_p *table)
{
struct pt_state pts = pt_init(range, level, table);
unsigned int flush_start_index = UINT_MAX;
unsigned int flush_end_index = UINT_MAX;
struct pt_unmap_args *unmap = arg;
unsigned int num_oas = 0;
unsigned int start_index;
@ -986,6 +988,9 @@ static __maybe_unused int __unmap_range(struct pt_range *range, void *arg,
iommu_pages_list_add(&unmap->free_list,
pts.table_lower);
pt_clear_entries(&pts, ilog2(1));
if (pts.index < flush_start_index)
flush_start_index = pts.index;
flush_end_index = pts.index + 1;
}
pts.index++;
} else {
@ -999,7 +1004,10 @@ start_oa:
num_contig_lg2 = pt_entry_num_contig_lg2(&pts);
pt_clear_entries(&pts, num_contig_lg2);
num_oas += log2_to_int(num_contig_lg2);
if (pts.index < flush_start_index)
flush_start_index = pts.index;
pts.index += log2_to_int(num_contig_lg2);
flush_end_index = pts.index;
}
if (pts.index >= pts.end_index)
break;
@ -1007,7 +1015,8 @@ start_oa:
} while (true);
unmap->unmapped += log2_mul(num_oas, pt_table_item_lg2sz(&pts));
flush_writes_range(&pts, start_index, pts.index);
if (flush_start_index != flush_end_index)
flush_writes_range(&pts, flush_start_index, flush_end_index);
return ret;
}
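The `flush_start_index`/`flush_end_index` pair added in this hunk tracks the span of entries actually cleared, using `UINT_MAX` as the "nothing cleared yet" sentinel; because the walk visits indices in increasing order, the end can simply be overwritten each time. A standalone sketch of that pattern (struct and helper names are illustrative):

```c
#include <limits.h>

/* Sketch of the dirty-range tracking added to __unmap_range():
 * start and end both begin at UINT_MAX, and the final flush is
 * skipped when they are still equal (nothing was cleared). */
struct flush_range {
	unsigned int start;
	unsigned int end;
};

static void range_init(struct flush_range *r)
{
	r->start = UINT_MAX;
	r->end = UINT_MAX;
}

/* Indices are visited in increasing order, so only the minimum start
 * needs a comparison; the end is the last index cleared plus one. */
static void range_note(struct flush_range *r, unsigned int index,
		       unsigned int nr_entries)
{
	if (index < r->start)
		r->start = index;
	r->end = index + nr_entries;
}

static int range_needs_flush(const struct flush_range *r)
{
	return r->start != r->end;
}
```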


@ -289,6 +289,7 @@ static void batch_clear(struct pfn_batch *batch)
batch->end = 0;
batch->pfns[0] = 0;
batch->npfns[0] = 0;
batch->kind = 0;
}
/*


@ -168,40 +168,34 @@ ls_extirq_parse_map(struct ls_extirq_data *priv, struct device_node *node)
return 0;
}
static int __init
ls_extirq_of_init(struct device_node *node, struct device_node *parent)
static int ls_extirq_probe(struct platform_device *pdev)
{
struct irq_domain *domain, *parent_domain;
struct device_node *node, *parent;
struct device *dev = &pdev->dev;
struct ls_extirq_data *priv;
int ret;
node = dev->of_node;
parent = of_irq_find_parent(node);
if (!parent)
return dev_err_probe(dev, -ENODEV, "Failed to get IRQ parent node\n");
parent_domain = irq_find_host(parent);
if (!parent_domain) {
pr_err("Cannot find parent domain\n");
ret = -ENODEV;
goto err_irq_find_host;
}
if (!parent_domain)
return dev_err_probe(dev, -EPROBE_DEFER, "Cannot find parent domain\n");
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
ret = -ENOMEM;
goto err_alloc_priv;
}
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return dev_err_probe(dev, -ENOMEM, "Failed to allocate memory\n");
/*
* All extirq OF nodes are under a scfg/syscon node with
* the 'ranges' property
*/
priv->intpcr = of_iomap(node, 0);
if (!priv->intpcr) {
pr_err("Cannot ioremap OF node %pOF\n", node);
ret = -ENOMEM;
goto err_iomap;
}
priv->intpcr = devm_of_iomap(dev, node, 0, NULL);
if (!priv->intpcr)
return dev_err_probe(dev, -ENOMEM, "Cannot ioremap OF node %pOF\n", node);
ret = ls_extirq_parse_map(priv, node);
if (ret)
goto err_parse_map;
return dev_err_probe(dev, ret, "Failed to parse IRQ map\n");
priv->big_endian = of_device_is_big_endian(node->parent);
priv->is_ls1021a_or_ls1043a = of_device_is_compatible(node, "fsl,ls1021a-extirq") ||
@ -210,23 +204,26 @@ ls_extirq_of_init(struct device_node *node, struct device_node *parent)
domain = irq_domain_create_hierarchy(parent_domain, 0, priv->nirq, of_fwnode_handle(node),
&extirq_domain_ops, priv);
if (!domain) {
ret = -ENOMEM;
goto err_add_hierarchy;
}
if (!domain)
return dev_err_probe(dev, -ENOMEM, "Failed to add IRQ domain\n");
return 0;
err_add_hierarchy:
err_parse_map:
iounmap(priv->intpcr);
err_iomap:
kfree(priv);
err_alloc_priv:
err_irq_find_host:
return ret;
}
IRQCHIP_DECLARE(ls1021a_extirq, "fsl,ls1021a-extirq", ls_extirq_of_init);
IRQCHIP_DECLARE(ls1043a_extirq, "fsl,ls1043a-extirq", ls_extirq_of_init);
IRQCHIP_DECLARE(ls1088a_extirq, "fsl,ls1088a-extirq", ls_extirq_of_init);
static const struct of_device_id ls_extirq_dt_ids[] = {
{ .compatible = "fsl,ls1021a-extirq" },
{ .compatible = "fsl,ls1043a-extirq" },
{ .compatible = "fsl,ls1088a-extirq" },
{}
};
MODULE_DEVICE_TABLE(of, ls_extirq_dt_ids);
static struct platform_driver ls_extirq_driver = {
.probe = ls_extirq_probe,
.driver = {
.name = "ls-extirq",
.of_match_table = ls_extirq_dt_ids,
}
};
builtin_platform_driver(ls_extirq_driver);


@ -1107,17 +1107,13 @@ static void detached_dev_do_request(struct bcache_device *d,
if (bio_op(orig_bio) == REQ_OP_DISCARD &&
!bdev_max_discard_sectors(dc->bdev)) {
bio_end_io_acct(orig_bio, start_time);
bio_endio(orig_bio);
return;
}
clone_bio = bio_alloc_clone(dc->bdev, orig_bio, GFP_NOIO,
&d->bio_detached);
if (!clone_bio) {
orig_bio->bi_status = BLK_STS_RESOURCE;
bio_endio(orig_bio);
return;
}
ddip = container_of(clone_bio, struct detached_dev_io_private, bio);
/* Count on the bcache device */


@ -215,7 +215,7 @@ static const struct spinand_info esmt_c8_spinand_table[] = {
SPINAND_FACT_OTP_INFO(2, 0, &f50l1g41lb_fact_otp_ops)),
SPINAND_INFO("F50D1G41LB",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x11, 0x7f,
0x7f),
0x7f, 0x7f),
NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(1, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,


@ -1089,6 +1089,9 @@ static int adin1110_check_spi(struct adin1110_priv *priv)
reset_gpio = devm_gpiod_get_optional(&priv->spidev->dev, "reset",
GPIOD_OUT_LOW);
if (IS_ERR(reset_gpio))
return dev_err_probe(&priv->spidev->dev, PTR_ERR(reset_gpio),
"failed to get reset gpio\n");
if (reset_gpio) {
/* MISO pin is used for internal configuration, can't have
* anyone else disturbing the SDO line.


@ -3505,6 +3505,23 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
*/
netdev->netdev_ops = &lionetdevops;
lio = GET_LIO(netdev);
memset(lio, 0, sizeof(struct lio));
lio->ifidx = ifidx_or_pfnum;
props = &octeon_dev->props[i];
props->gmxport = resp->cfg_info.linfo.gmxport;
props->netdev = netdev;
/* Point to the properties for octeon device to which this
* interface belongs.
*/
lio->oct_dev = octeon_dev;
lio->octprops = props;
lio->netdev = netdev;
retval = netif_set_real_num_rx_queues(netdev, num_oqueues);
if (retval) {
dev_err(&octeon_dev->pci_dev->dev,
@ -3521,16 +3538,6 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
goto setup_nic_dev_free;
}
lio = GET_LIO(netdev);
memset(lio, 0, sizeof(struct lio));
lio->ifidx = ifidx_or_pfnum;
props = &octeon_dev->props[i];
props->gmxport = resp->cfg_info.linfo.gmxport;
props->netdev = netdev;
lio->linfo.num_rxpciq = num_oqueues;
lio->linfo.num_txpciq = num_iqueues;
for (j = 0; j < num_oqueues; j++) {
@ -3596,13 +3603,6 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
netdev->min_mtu = LIO_MIN_MTU_SIZE;
netdev->max_mtu = LIO_MAX_MTU_SIZE;
/* Point to the properties for octeon device to which this
* interface belongs.
*/
lio->oct_dev = octeon_dev;
lio->octprops = props;
lio->netdev = netdev;
dev_dbg(&octeon_dev->pci_dev->dev,
"if%d gmx: %d hw_addr: 0x%llx\n", i,
lio->linfo.gmxport, CVM_CAST64(lio->linfo.hw_addr));
@ -3750,6 +3750,7 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
if (!devlink) {
device_unlock(&octeon_dev->pci_dev->dev);
dev_err(&octeon_dev->pci_dev->dev, "devlink alloc failed\n");
i--;
goto setup_nic_dev_free;
}
@ -3765,11 +3766,11 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
setup_nic_dev_free:
while (i--) {
do {
dev_err(&octeon_dev->pci_dev->dev,
"NIC ifidx:%d Setup failed\n", i);
liquidio_destroy_nic_device(octeon_dev, i);
}
} while (i--);
setup_nic_dev_done:


@ -2212,11 +2212,11 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
setup_nic_dev_free:
while (i--) {
do {
dev_err(&octeon_dev->pci_dev->dev,
"NIC ifidx:%d Setup failed\n", i);
liquidio_destroy_nic_device(octeon_dev, i);
}
} while (i--);
setup_nic_dev_done:
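The switch from `while (i--)` to `do { ... } while (i--)` in the error path above (together with the `i--` before the devlink `goto`) changes how many interfaces get torn down. A small sketch contrasting the two loop shapes, using hypothetical counter helpers rather than the driver's destroy call:

```c
/* Sketch contrasting the two cleanup-loop shapes in
 * setup_nic_devices(): with i naming the highest interface index to
 * destroy, while (i--) covers indices i-1 .. 0, whereas
 * do { ... } while (i--) also covers index i itself and always runs
 * at least once. */
static int teardown_count_while(int i)
{
	int n = 0;

	while (i--)
		n++;	/* visits indices i-1 .. 0 */
	return n;
}

static int teardown_count_do(int i)
{
	int n = 0;

	do {
		n++;	/* visits indices i .. 0 */
	} while (i--);
	return n;
}
```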


@ -1531,6 +1531,10 @@ static irqreturn_t dpaa2_switch_irq0_handler_thread(int irq_num, void *arg)
}
if_id = (status & 0xFFFF0000) >> 16;
if (if_id >= ethsw->sw_attr.num_ifs) {
dev_err(dev, "Invalid if_id %d in IRQ status\n", if_id);
goto out;
}
port_priv = ethsw->ports[if_id];
if (status & DPSW_IRQ_EVENT_LINK_CHANGED)
@ -3024,6 +3028,12 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
goto err_close;
}
if (!ethsw->sw_attr.num_ifs) {
dev_err(dev, "DPSW device has no interfaces\n");
err = -ENODEV;
goto err_close;
}
err = dpsw_get_api_version(ethsw->mc_io, 0,
&ethsw->major,
&ethsw->minor);
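The bounds check added above matters because `if_id` is decoded from a hardware status register and then used to index `ethsw->ports[]`. A sketch of that decode-and-validate step (the helper name is illustrative):

```c
#include <stdint.h>

/* Sketch of the IRQ-status decode hardened in the dpaa2-switch hunk:
 * the interface id sits in the top 16 bits of the status word and
 * must be checked against num_ifs before indexing the ports array. */
static int decode_if_id_sketch(uint32_t status, uint16_t num_ifs)
{
	uint32_t if_id = (status & 0xFFFF0000u) >> 16;

	return if_id < num_ifs ? (int)if_id : -1;
}
```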


@ -2512,10 +2512,13 @@ int enetc_configure_si(struct enetc_ndev_priv *priv)
struct enetc_hw *hw = &si->hw;
int err;
/* set SI cache attributes */
enetc_wr(hw, ENETC_SICAR0,
ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
enetc_wr(hw, ENETC_SICAR1, ENETC_SICAR_MSI);
if (is_enetc_rev1(si)) {
/* set SI cache attributes */
enetc_wr(hw, ENETC_SICAR0,
ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
enetc_wr(hw, ENETC_SICAR1, ENETC_SICAR_MSI);
}
/* enable SI */
enetc_wr(hw, ENETC_SIMR, ENETC_SIMR_EN);


@ -59,10 +59,10 @@ static void enetc4_pf_set_si_primary_mac(struct enetc_hw *hw, int si,
if (si != 0) {
__raw_writel(upper, hw->port + ENETC4_PSIPMAR0(si));
__raw_writew(lower, hw->port + ENETC4_PSIPMAR1(si));
__raw_writel(lower, hw->port + ENETC4_PSIPMAR1(si));
} else {
__raw_writel(upper, hw->port + ENETC4_PMAR0);
__raw_writew(lower, hw->port + ENETC4_PMAR1);
__raw_writel(lower, hw->port + ENETC4_PMAR1);
}
}
@ -73,7 +73,7 @@ static void enetc4_pf_get_si_primary_mac(struct enetc_hw *hw, int si,
u16 lower;
upper = __raw_readl(hw->port + ENETC4_PSIPMAR0(si));
lower = __raw_readw(hw->port + ENETC4_PSIPMAR1(si));
lower = __raw_readl(hw->port + ENETC4_PSIPMAR1(si));
put_unaligned_le32(upper, addr);
put_unaligned_le16(lower, addr + 4);


@ -74,10 +74,6 @@ int enetc4_setup_cbdr(struct enetc_si *si)
if (!user->ring)
return -ENOMEM;
/* set CBDR cache attributes */
enetc_wr(hw, ENETC_SICAR2,
ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
regs.pir = hw->reg + ENETC_SICBDRPIR;
regs.cir = hw->reg + ENETC_SICBDRCIR;
regs.mr = hw->reg + ENETC_SICBDRMR;


@ -708,13 +708,24 @@ struct enetc_cmd_rfse {
#define ENETC_RFSE_EN BIT(15)
#define ENETC_RFSE_MODE_BD 2
static inline void enetc_get_primary_mac_addr(struct enetc_hw *hw, u8 *addr)
{
u32 upper;
u16 lower;
upper = __raw_readl(hw->reg + ENETC_SIPMAR0);
lower = __raw_readl(hw->reg + ENETC_SIPMAR1);
put_unaligned_le32(upper, addr);
put_unaligned_le16(lower, addr + 4);
}
static inline void enetc_load_primary_mac_addr(struct enetc_hw *hw,
struct net_device *ndev)
{
u8 addr[ETH_ALEN] __aligned(4);
u8 addr[ETH_ALEN];
*(u32 *)addr = __raw_readl(hw->reg + ENETC_SIPMAR0);
*(u16 *)(addr + 4) = __raw_readw(hw->reg + ENETC_SIPMAR1);
enetc_get_primary_mac_addr(hw, addr);
eth_hw_addr_set(ndev, addr);
}
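`enetc_get_primary_mac_addr()` above assembles a 6-byte MAC from two register words via `put_unaligned_le32`/`put_unaligned_le16`. A portable sketch of that byte layout with explicit little-endian stores (helper names are illustrative):

```c
#include <stdint.h>

/* Sketch of the byte layout used by enetc_get_primary_mac_addr():
 * 32 bits of SIPMAR0 land in addr[0..3] and the low 16 bits of
 * SIPMAR1 in addr[4..5], both little-endian. */
static void put_le32_sketch(uint8_t *p, uint32_t v)
{
	p[0] = v & 0xff;
	p[1] = (v >> 8) & 0xff;
	p[2] = (v >> 16) & 0xff;
	p[3] = (v >> 24) & 0xff;
}

static void put_le16_sketch(uint8_t *p, uint16_t v)
{
	p[0] = v & 0xff;
	p[1] = (v >> 8) & 0xff;
}

static void mac_from_regs_sketch(uint32_t upper, uint16_t lower,
				 uint8_t addr[6])
{
	put_le32_sketch(addr, upper);
	put_le16_sketch(addr + 4, lower);
}
```

For instance, upper 0x33221100 and lower 0x5544 yield the MAC 00:11:22:33:44:55.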


@ -152,11 +152,13 @@ gve_get_ethtool_stats(struct net_device *netdev,
u64 tmp_rx_pkts, tmp_rx_hsplit_pkt, tmp_rx_bytes, tmp_rx_hsplit_bytes,
tmp_rx_skb_alloc_fail, tmp_rx_buf_alloc_fail,
tmp_rx_desc_err_dropped_pkt, tmp_rx_hsplit_unsplit_pkt,
tmp_tx_pkts, tmp_tx_bytes;
tmp_tx_pkts, tmp_tx_bytes,
tmp_xdp_tx_errors, tmp_xdp_redirect_errors;
u64 rx_buf_alloc_fail, rx_desc_err_dropped_pkt, rx_hsplit_unsplit_pkt,
rx_pkts, rx_hsplit_pkt, rx_skb_alloc_fail, rx_bytes, tx_pkts, tx_bytes,
tx_dropped;
int stats_idx, base_stats_idx, max_stats_idx;
tx_dropped, xdp_tx_errors, xdp_redirect_errors;
int rx_base_stats_idx, max_rx_stats_idx, max_tx_stats_idx;
int stats_idx, stats_region_len, nic_stats_len;
struct stats *report_stats;
int *rx_qid_to_stats_idx;
int *tx_qid_to_stats_idx;
@ -198,6 +200,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0,
rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0,
rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0,
xdp_tx_errors = 0, xdp_redirect_errors = 0,
ring = 0;
ring < priv->rx_cfg.num_queues; ring++) {
if (priv->rx) {
@ -215,6 +218,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
rx->rx_desc_err_dropped_pkt;
tmp_rx_hsplit_unsplit_pkt =
rx->rx_hsplit_unsplit_pkt;
tmp_xdp_tx_errors = rx->xdp_tx_errors;
tmp_xdp_redirect_errors =
rx->xdp_redirect_errors;
} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
start));
rx_pkts += tmp_rx_pkts;
@ -224,6 +230,8 @@ gve_get_ethtool_stats(struct net_device *netdev,
rx_buf_alloc_fail += tmp_rx_buf_alloc_fail;
rx_desc_err_dropped_pkt += tmp_rx_desc_err_dropped_pkt;
rx_hsplit_unsplit_pkt += tmp_rx_hsplit_unsplit_pkt;
xdp_tx_errors += tmp_xdp_tx_errors;
xdp_redirect_errors += tmp_xdp_redirect_errors;
}
}
for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0;
@ -249,8 +257,8 @@ gve_get_ethtool_stats(struct net_device *netdev,
data[i++] = rx_bytes;
data[i++] = tx_bytes;
/* total rx dropped packets */
data[i++] = rx_skb_alloc_fail + rx_buf_alloc_fail +
rx_desc_err_dropped_pkt;
data[i++] = rx_skb_alloc_fail + rx_desc_err_dropped_pkt +
xdp_tx_errors + xdp_redirect_errors;
data[i++] = tx_dropped;
data[i++] = priv->tx_timeo_cnt;
data[i++] = rx_skb_alloc_fail;
@ -265,20 +273,38 @@ gve_get_ethtool_stats(struct net_device *netdev,
data[i++] = priv->stats_report_trigger_cnt;
i = GVE_MAIN_STATS_LEN;
/* For rx cross-reporting stats, start from nic rx stats in report */
base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues +
GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues;
/* The boundary between driver stats and NIC stats shifts if there are
* stopped queues.
*/
base_stats_idx += NIC_RX_STATS_REPORT_NUM * num_stopped_rxqs +
NIC_TX_STATS_REPORT_NUM * num_stopped_txqs;
max_stats_idx = NIC_RX_STATS_REPORT_NUM *
(priv->rx_cfg.num_queues - num_stopped_rxqs) +
base_stats_idx;
rx_base_stats_idx = 0;
max_rx_stats_idx = 0;
max_tx_stats_idx = 0;
stats_region_len = priv->stats_report_len -
sizeof(struct gve_stats_report);
nic_stats_len = (NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues +
NIC_TX_STATS_REPORT_NUM * num_tx_queues) * sizeof(struct stats);
if (unlikely((stats_region_len -
nic_stats_len) % sizeof(struct stats))) {
net_err_ratelimited("Starting index of NIC stats should be multiple of stats size");
} else {
/* For rx cross-reporting stats,
* start from nic rx stats in report
*/
rx_base_stats_idx = (stats_region_len - nic_stats_len) /
sizeof(struct stats);
/* The boundary between driver stats and NIC stats
* shifts if there are stopped queues
*/
rx_base_stats_idx += NIC_RX_STATS_REPORT_NUM *
num_stopped_rxqs + NIC_TX_STATS_REPORT_NUM *
num_stopped_txqs;
max_rx_stats_idx = NIC_RX_STATS_REPORT_NUM *
(priv->rx_cfg.num_queues - num_stopped_rxqs) +
rx_base_stats_idx;
max_tx_stats_idx = NIC_TX_STATS_REPORT_NUM *
(num_tx_queues - num_stopped_txqs) +
max_rx_stats_idx;
}
/* Preprocess the stats report for rx, map queue id to start index */
skip_nic_stats = false;
for (stats_idx = base_stats_idx; stats_idx < max_stats_idx;
for (stats_idx = rx_base_stats_idx; stats_idx < max_rx_stats_idx;
stats_idx += NIC_RX_STATS_REPORT_NUM) {
u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name);
u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id);
@ -311,6 +337,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
tmp_rx_desc_err_dropped_pkt =
rx->rx_desc_err_dropped_pkt;
tmp_xdp_tx_errors = rx->xdp_tx_errors;
tmp_xdp_redirect_errors =
rx->xdp_redirect_errors;
} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
start));
data[i++] = tmp_rx_bytes;
@ -321,8 +350,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
data[i++] = rx->rx_frag_alloc_cnt;
/* rx dropped packets */
data[i++] = tmp_rx_skb_alloc_fail +
tmp_rx_buf_alloc_fail +
tmp_rx_desc_err_dropped_pkt;
tmp_rx_desc_err_dropped_pkt +
tmp_xdp_tx_errors +
tmp_xdp_redirect_errors;
data[i++] = rx->rx_copybreak_pkt;
data[i++] = rx->rx_copied_pkt;
/* stats from NIC */
@ -354,14 +384,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
}
/* For tx cross-reporting stats, start from nic tx stats in report */
base_stats_idx = max_stats_idx;
max_stats_idx = NIC_TX_STATS_REPORT_NUM *
(num_tx_queues - num_stopped_txqs) +
max_stats_idx;
/* Preprocess the stats report for tx, map queue id to start index */
skip_nic_stats = false;
for (stats_idx = base_stats_idx; stats_idx < max_stats_idx;
/* NIC TX stats start right after NIC RX stats */
for (stats_idx = max_rx_stats_idx; stats_idx < max_tx_stats_idx;
stats_idx += NIC_TX_STATS_REPORT_NUM) {
u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name);
u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id);


@ -283,9 +283,9 @@ static int gve_alloc_stats_report(struct gve_priv *priv)
int tx_stats_num, rx_stats_num;
tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) *
gve_num_tx_queues(priv);
priv->tx_cfg.max_queues;
rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) *
priv->rx_cfg.num_queues;
priv->rx_cfg.max_queues;
priv->stats_report_len = struct_size(priv->stats_report, stats,
size_add(tx_stats_num, rx_stats_num));
priv->stats_report =


@ -9030,7 +9030,6 @@ int i40e_open(struct net_device *netdev)
TCP_FLAG_FIN |
TCP_FLAG_CWR) >> 16);
wr32(&pf->hw, I40E_GLLAN_TSOMSK_L, be32_to_cpu(TCP_FLAG_CWR) >> 16);
udp_tunnel_get_rx_info(netdev);
return 0;
}


@ -3314,18 +3314,20 @@ static irqreturn_t ice_misc_intr_thread_fn(int __always_unused irq, void *data)
if (ice_is_reset_in_progress(pf->state))
goto skip_irq;
if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread)) {
/* Process outstanding Tx timestamps. If there is more work,
* re-arm the interrupt to trigger again.
*/
if (ice_ptp_process_ts(pf) == ICE_TX_TSTAMP_WORK_PENDING) {
wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M);
ice_flush(hw);
}
}
if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread))
ice_ptp_process_ts(pf);
skip_irq:
ice_irq_dynamic_ena(hw, NULL, NULL);
ice_flush(hw);
if (ice_ptp_tx_tstamps_pending(pf)) {
/* If any new Tx timestamps happened while in interrupt,
* re-arm the interrupt to trigger it again.
*/
wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M);
ice_flush(hw);
}
return IRQ_HANDLED;
}
@ -7863,6 +7865,9 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
/* Restore timestamp mode settings after VSI rebuild */
ice_ptp_restore_timestamp_mode(pf);
/* Start PTP periodic work after VSI is fully rebuilt */
ice_ptp_queue_work(pf);
return;
err_vsi_rebuild:
@ -9713,9 +9718,6 @@ int ice_open_internal(struct net_device *netdev)
netdev_err(netdev, "Failed to open VSI 0x%04X on switch 0x%04X\n",
vsi->vsi_num, vsi->vsw->sw_id);
/* Update existing tunnels information */
udp_tunnel_get_rx_info(netdev);
return err;
}


@ -573,6 +573,9 @@ static void ice_ptp_process_tx_tstamp(struct ice_ptp_tx *tx)
pf = ptp_port_to_pf(ptp_port);
hw = &pf->hw;
if (!tx->init)
return;
/* Read the Tx ready status first */
if (tx->has_ready_bitmap) {
err = ice_get_phy_tx_tstamp_ready(hw, tx->block, &tstamp_ready);
@ -674,14 +677,9 @@ skip_ts_read:
pf->ptp.tx_hwtstamp_good += tstamp_good;
}
/**
* ice_ptp_tx_tstamp_owner - Process Tx timestamps for all ports on the device
* @pf: Board private structure
*/
static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
static void ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
{
struct ice_ptp_port *port;
unsigned int i;
mutex_lock(&pf->adapter->ports.lock);
list_for_each_entry(port, &pf->adapter->ports.ports, list_node) {
@ -693,49 +691,6 @@ static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
ice_ptp_process_tx_tstamp(tx);
}
mutex_unlock(&pf->adapter->ports.lock);
for (i = 0; i < ICE_GET_QUAD_NUM(pf->hw.ptp.num_lports); i++) {
u64 tstamp_ready;
int err;
/* Read the Tx ready status first */
err = ice_get_phy_tx_tstamp_ready(&pf->hw, i, &tstamp_ready);
if (err)
break;
else if (tstamp_ready)
return ICE_TX_TSTAMP_WORK_PENDING;
}
return ICE_TX_TSTAMP_WORK_DONE;
}
/**
* ice_ptp_tx_tstamp - Process Tx timestamps for this function.
* @tx: Tx tracking structure to initialize
*
* Returns: ICE_TX_TSTAMP_WORK_PENDING if there are any outstanding incomplete
* Tx timestamps, or ICE_TX_TSTAMP_WORK_DONE otherwise.
*/
static enum ice_tx_tstamp_work ice_ptp_tx_tstamp(struct ice_ptp_tx *tx)
{
bool more_timestamps;
unsigned long flags;
if (!tx->init)
return ICE_TX_TSTAMP_WORK_DONE;
/* Process the Tx timestamp tracker */
ice_ptp_process_tx_tstamp(tx);
/* Check if there are outstanding Tx timestamps */
spin_lock_irqsave(&tx->lock, flags);
more_timestamps = tx->init && !bitmap_empty(tx->in_use, tx->len);
spin_unlock_irqrestore(&tx->lock, flags);
if (more_timestamps)
return ICE_TX_TSTAMP_WORK_PENDING;
return ICE_TX_TSTAMP_WORK_DONE;
}
/**
@ -1379,9 +1334,12 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
/* Do not reconfigure E810 or E830 PHY */
return;
case ICE_MAC_GENERIC:
case ICE_MAC_GENERIC_3K_E825:
ice_ptp_port_phy_restart(ptp_port);
return;
case ICE_MAC_GENERIC_3K_E825:
if (linkup)
ice_ptp_port_phy_restart(ptp_port);
return;
default:
dev_warn(ice_pf_to_dev(pf), "%s: Unknown PHY type\n", __func__);
}
@ -2695,32 +2653,94 @@ s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
return idx + tx->offset;
}
/**
* ice_ptp_process_ts - Process the PTP Tx timestamps
* @pf: Board private structure
*
* Returns: ICE_TX_TSTAMP_WORK_PENDING if there are any outstanding Tx
* timestamps that need processing, and ICE_TX_TSTAMP_WORK_DONE otherwise.
*/
enum ice_tx_tstamp_work ice_ptp_process_ts(struct ice_pf *pf)
void ice_ptp_process_ts(struct ice_pf *pf)
{
switch (pf->ptp.tx_interrupt_mode) {
case ICE_PTP_TX_INTERRUPT_NONE:
/* This device has the clock owner handle timestamps for it */
return ICE_TX_TSTAMP_WORK_DONE;
return;
case ICE_PTP_TX_INTERRUPT_SELF:
/* This device handles its own timestamps */
return ice_ptp_tx_tstamp(&pf->ptp.port.tx);
ice_ptp_process_tx_tstamp(&pf->ptp.port.tx);
return;
case ICE_PTP_TX_INTERRUPT_ALL:
/* This device handles timestamps for all ports */
return ice_ptp_tx_tstamp_owner(pf);
ice_ptp_tx_tstamp_owner(pf);
return;
default:
WARN_ONCE(1, "Unexpected Tx timestamp interrupt mode %u\n",
pf->ptp.tx_interrupt_mode);
return ICE_TX_TSTAMP_WORK_DONE;
return;
}
}
static bool ice_port_has_timestamps(struct ice_ptp_tx *tx)
{
bool more_timestamps;
scoped_guard(spinlock_irqsave, &tx->lock) {
if (!tx->init)
return false;
more_timestamps = !bitmap_empty(tx->in_use, tx->len);
}
return more_timestamps;
}
static bool ice_any_port_has_timestamps(struct ice_pf *pf)
{
struct ice_ptp_port *port;
scoped_guard(mutex, &pf->adapter->ports.lock) {
list_for_each_entry(port, &pf->adapter->ports.ports,
list_node) {
struct ice_ptp_tx *tx = &port->tx;
if (ice_port_has_timestamps(tx))
return true;
}
}
return false;
}
/**
* ice_ptp_tx_tstamps_pending - Check if any Tx timestamps are outstanding
* @pf: Board private structure
*
* Returns: true if either the software tracker or the PHY Tx ready bitmap
* indicates outstanding timestamps, false otherwise.
*/
bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf)
{
struct ice_hw *hw = &pf->hw;
unsigned int i;
/* Check software indicator */
switch (pf->ptp.tx_interrupt_mode) {
case ICE_PTP_TX_INTERRUPT_NONE:
return false;
case ICE_PTP_TX_INTERRUPT_SELF:
if (ice_port_has_timestamps(&pf->ptp.port.tx))
return true;
break;
case ICE_PTP_TX_INTERRUPT_ALL:
if (ice_any_port_has_timestamps(pf))
return true;
break;
default:
WARN_ONCE(1, "Unexpected Tx timestamp interrupt mode %u\n",
pf->ptp.tx_interrupt_mode);
break;
}
/* Check hardware indicator */
for (i = 0; i < ICE_GET_QUAD_NUM(hw->ptp.num_lports); i++) {
u64 tstamp_ready = 0;
int err;
err = ice_get_phy_tx_tstamp_ready(&pf->hw, i, &tstamp_ready);
if (err || tstamp_ready)
return true;
}
return false;
}
/**
* ice_ptp_ts_irq - Process the PTP Tx timestamps in IRQ context
* @pf: Board private structure
@ -2770,7 +2790,9 @@ irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf)
return IRQ_WAKE_THREAD;
case ICE_MAC_E830:
/* E830 can read timestamps in the top half using rd32() */
if (ice_ptp_process_ts(pf) == ICE_TX_TSTAMP_WORK_PENDING) {
ice_ptp_process_ts(pf);
if (ice_ptp_tx_tstamps_pending(pf)) {
/* Timestamps were already processed above; if
* more work remains, re-arm the interrupt to
* trigger again.
*/
@ -2849,6 +2871,20 @@ static void ice_ptp_periodic_work(struct kthread_work *work)
msecs_to_jiffies(err ? 10 : 500));
}
/**
* ice_ptp_queue_work - Queue PTP periodic work for a PF
* @pf: Board private structure
*
* Helper function to queue PTP periodic work after VSI rebuild completes.
* This ensures that PTP work only runs when VSI structures are ready.
*/
void ice_ptp_queue_work(struct ice_pf *pf)
{
if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags) &&
pf->ptp.state == ICE_PTP_READY)
kthread_queue_delayed_work(pf->ptp.kworker, &pf->ptp.work, 0);
}
/**
* ice_ptp_prepare_rebuild_sec - Prepare second NAC for PTP reset or rebuild
* @pf: Board private structure
@ -2867,10 +2903,15 @@ static void ice_ptp_prepare_rebuild_sec(struct ice_pf *pf, bool rebuild,
struct ice_pf *peer_pf = ptp_port_to_pf(port);
if (!ice_is_primary(&peer_pf->hw)) {
if (rebuild)
if (rebuild) {
/* TODO: When implementing rebuild=true:
* 1. Ensure secondary PFs' VSIs are rebuilt
* 2. Call ice_ptp_queue_work(peer_pf) after VSI rebuild
*/
ice_ptp_rebuild(peer_pf, reset_type);
else
} else {
ice_ptp_prepare_for_reset(peer_pf, reset_type);
}
}
}
}
@ -3016,9 +3057,6 @@ void ice_ptp_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
ptp->state = ICE_PTP_READY;
/* Start periodic work going */
kthread_queue_delayed_work(ptp->kworker, &ptp->work, 0);
dev_info(ice_pf_to_dev(pf), "PTP reset successful\n");
return;
@ -3223,8 +3261,9 @@ static void ice_ptp_init_tx_interrupt_mode(struct ice_pf *pf)
{
switch (pf->hw.mac_type) {
case ICE_MAC_GENERIC:
/* E822 based PHY has the clock owner process the interrupt
* for all ports.
case ICE_MAC_GENERIC_3K_E825:
/* E82x hardware has the clock owner process timestamps for
* all ports.
*/
if (ice_pf_src_tmr_owned(pf))
pf->ptp.tx_interrupt_mode = ICE_PTP_TX_INTERRUPT_ALL;

View file

@ -304,8 +304,9 @@ void ice_ptp_extts_event(struct ice_pf *pf);
s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb);
void ice_ptp_req_tx_single_tstamp(struct ice_ptp_tx *tx, u8 idx);
void ice_ptp_complete_tx_single_tstamp(struct ice_ptp_tx *tx);
enum ice_tx_tstamp_work ice_ptp_process_ts(struct ice_pf *pf);
void ice_ptp_process_ts(struct ice_pf *pf);
irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf);
bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf);
u64 ice_ptp_read_src_clk_reg(struct ice_pf *pf,
struct ptp_system_timestamp *sts);
@ -317,6 +318,7 @@ void ice_ptp_prepare_for_reset(struct ice_pf *pf,
void ice_ptp_init(struct ice_pf *pf);
void ice_ptp_release(struct ice_pf *pf);
void ice_ptp_link_change(struct ice_pf *pf, bool linkup);
void ice_ptp_queue_work(struct ice_pf *pf);
#else /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
static inline int ice_ptp_hwtstamp_get(struct net_device *netdev,
@ -345,16 +347,18 @@ static inline void ice_ptp_req_tx_single_tstamp(struct ice_ptp_tx *tx, u8 idx)
static inline void ice_ptp_complete_tx_single_tstamp(struct ice_ptp_tx *tx) { }
static inline bool ice_ptp_process_ts(struct ice_pf *pf)
{
return true;
}
static inline void ice_ptp_process_ts(struct ice_pf *pf) { }
static inline irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf)
{
return IRQ_HANDLED;
}
static inline bool ice_ptp_tx_tstamps_pending(struct ice_pf *pf)
{
return false;
}
static inline u64 ice_ptp_read_src_clk_reg(struct ice_pf *pf,
struct ptp_system_timestamp *sts)
{
@ -383,6 +387,10 @@ static inline void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
{
}
static inline void ice_ptp_queue_work(struct ice_pf *pf)
{
}
static inline int ice_ptp_clock_index(struct ice_pf *pf)
{
return -1;

View file

@ -12,6 +12,7 @@
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
@ -38,7 +39,7 @@
#define EMAC_DEFAULT_BUFSIZE 1536
#define EMAC_RX_BUF_2K 2048
#define EMAC_RX_BUF_4K 4096
#define EMAC_RX_BUF_MAX FIELD_MAX(RX_DESC_1_BUFFER_SIZE_1_MASK)
/* Tuning parameters from SpacemiT */
#define EMAC_TX_FRAMES 64
@ -193,7 +194,7 @@ static void emac_reset_hw(struct emac_priv *priv)
static void emac_init_hw(struct emac_priv *priv)
{
u32 rxirq = 0, dma = 0;
u32 rxirq = 0, dma = 0, frame_sz;
regmap_set_bits(priv->regmap_apmu,
priv->regmap_apmu_offset + APMU_EMAC_CTRL_REG,
@ -218,6 +219,15 @@ static void emac_init_hw(struct emac_priv *priv)
DEFAULT_TX_THRESHOLD);
emac_wr(priv, MAC_RECEIVE_PACKET_START_THRESHOLD, DEFAULT_RX_THRESHOLD);
/* Set maximum frame size and jabber size based on configured MTU,
* accounting for Ethernet header, double VLAN tags, and FCS.
*/
frame_sz = priv->ndev->mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN;
emac_wr(priv, MAC_MAXIMUM_FRAME_SIZE, frame_sz);
emac_wr(priv, MAC_TRANSMIT_JABBER_SIZE, frame_sz);
emac_wr(priv, MAC_RECEIVE_JABBER_SIZE, frame_sz);
/* RX IRQ mitigation */
rxirq = FIELD_PREP(MREGBIT_RECEIVE_IRQ_FRAME_COUNTER_MASK,
EMAC_RX_FRAMES);
@ -908,14 +918,14 @@ static int emac_change_mtu(struct net_device *ndev, int mtu)
return -EBUSY;
}
frame_len = mtu + ETH_HLEN + ETH_FCS_LEN;
frame_len = mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN;
if (frame_len <= EMAC_DEFAULT_BUFSIZE)
priv->dma_buf_sz = EMAC_DEFAULT_BUFSIZE;
else if (frame_len <= EMAC_RX_BUF_2K)
priv->dma_buf_sz = EMAC_RX_BUF_2K;
else
priv->dma_buf_sz = EMAC_RX_BUF_4K;
priv->dma_buf_sz = EMAC_RX_BUF_MAX;
ndev->mtu = mtu;
@ -1917,7 +1927,7 @@ static int emac_probe(struct platform_device *pdev)
ndev->hw_features = NETIF_F_SG;
ndev->features |= ndev->hw_features;
ndev->max_mtu = EMAC_RX_BUF_4K - (ETH_HLEN + ETH_FCS_LEN);
ndev->max_mtu = EMAC_RX_BUF_MAX - (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN);
ndev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS;
priv = netdev_priv(ndev);

View file

@ -8093,7 +8093,7 @@ int stmmac_suspend(struct device *dev)
u32 chan;
if (!ndev || !netif_running(ndev))
return 0;
goto suspend_bsp;
mutex_lock(&priv->lock);
@ -8132,6 +8132,7 @@ int stmmac_suspend(struct device *dev)
if (stmmac_fpe_supported(priv))
ethtool_mmsv_stop(&priv->fpe_cfg.mmsv);
suspend_bsp:
if (priv->plat->suspend)
return priv->plat->suspend(dev, priv->plat->bsp_priv);

View file

@ -305,12 +305,19 @@ static int cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num)
return 0;
}
static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
static void cpsw_ndo_set_rx_mode_work(struct work_struct *work)
{
struct cpsw_priv *priv = netdev_priv(ndev);
struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work);
struct cpsw_common *cpsw = priv->cpsw;
struct net_device *ndev = priv->ndev;
int slave_port = -1;
rtnl_lock();
if (!netif_running(ndev))
goto unlock_rtnl;
netif_addr_lock_bh(ndev);
if (cpsw->data.dual_emac)
slave_port = priv->emac_port + 1;
@ -318,7 +325,7 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
/* Enable promiscuous mode */
cpsw_set_promiscious(ndev, true);
cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, slave_port);
return;
goto unlock_addr;
} else {
/* Disable promiscuous mode */
cpsw_set_promiscious(ndev, false);
@ -331,6 +338,18 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
/* add/remove mcast address either for real netdev or for vlan */
__hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr,
cpsw_del_mc_addr);
unlock_addr:
netif_addr_unlock_bh(ndev);
unlock_rtnl:
rtnl_unlock();
}
static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
{
struct cpsw_priv *priv = netdev_priv(ndev);
schedule_work(&priv->rx_mode_work);
}
static unsigned int cpsw_rxbuf_total_len(unsigned int len)
@ -1472,6 +1491,7 @@ static int cpsw_probe_dual_emac(struct cpsw_priv *priv)
priv_sl2->ndev = ndev;
priv_sl2->dev = &ndev->dev;
priv_sl2->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
INIT_WORK(&priv_sl2->rx_mode_work, cpsw_ndo_set_rx_mode_work);
if (is_valid_ether_addr(data->slave_data[1].mac_addr)) {
memcpy(priv_sl2->mac_addr, data->slave_data[1].mac_addr,
@ -1653,6 +1673,7 @@ static int cpsw_probe(struct platform_device *pdev)
priv->dev = dev;
priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
priv->emac_port = 0;
INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work);
if (is_valid_ether_addr(data->slave_data[0].mac_addr)) {
memcpy(priv->mac_addr, data->slave_data[0].mac_addr, ETH_ALEN);
@ -1758,6 +1779,8 @@ clean_runtime_disable_ret:
static void cpsw_remove(struct platform_device *pdev)
{
struct cpsw_common *cpsw = platform_get_drvdata(pdev);
struct net_device *ndev;
struct cpsw_priv *priv;
int i, ret;
ret = pm_runtime_resume_and_get(&pdev->dev);
@ -1770,9 +1793,15 @@ static void cpsw_remove(struct platform_device *pdev)
return;
}
for (i = 0; i < cpsw->data.slaves; i++)
if (cpsw->slaves[i].ndev)
unregister_netdev(cpsw->slaves[i].ndev);
for (i = 0; i < cpsw->data.slaves; i++) {
ndev = cpsw->slaves[i].ndev;
if (!ndev)
continue;
priv = netdev_priv(ndev);
unregister_netdev(ndev);
disable_work_sync(&priv->rx_mode_work);
}
cpts_release(cpsw->cpts);
cpdma_ctlr_destroy(cpsw->dma);

View file

@ -248,16 +248,22 @@ static int cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num)
return 0;
}
static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
static void cpsw_ndo_set_rx_mode_work(struct work_struct *work)
{
struct cpsw_priv *priv = netdev_priv(ndev);
struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work);
struct cpsw_common *cpsw = priv->cpsw;
struct net_device *ndev = priv->ndev;
rtnl_lock();
if (!netif_running(ndev))
goto unlock_rtnl;
netif_addr_lock_bh(ndev);
if (ndev->flags & IFF_PROMISC) {
/* Enable promiscuous mode */
cpsw_set_promiscious(ndev, true);
cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, priv->emac_port);
return;
goto unlock_addr;
}
/* Disable promiscuous mode */
@ -270,6 +276,18 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
/* add/remove mcast address either for real netdev or for vlan */
__hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr,
cpsw_del_mc_addr);
unlock_addr:
netif_addr_unlock_bh(ndev);
unlock_rtnl:
rtnl_unlock();
}
static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
{
struct cpsw_priv *priv = netdev_priv(ndev);
schedule_work(&priv->rx_mode_work);
}
static unsigned int cpsw_rxbuf_total_len(unsigned int len)
@ -1398,6 +1416,7 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
priv->emac_port = i + 1;
priv->tx_packet_min = CPSW_MIN_PACKET_SIZE;
INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work);
if (is_valid_ether_addr(slave_data->mac_addr)) {
ether_addr_copy(priv->mac_addr, slave_data->mac_addr);
@ -1447,13 +1466,18 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
static void cpsw_unregister_ports(struct cpsw_common *cpsw)
{
struct net_device *ndev;
struct cpsw_priv *priv;
int i = 0;
for (i = 0; i < cpsw->data.slaves; i++) {
if (!cpsw->slaves[i].ndev)
ndev = cpsw->slaves[i].ndev;
if (!ndev)
continue;
unregister_netdev(cpsw->slaves[i].ndev);
priv = netdev_priv(ndev);
unregister_netdev(ndev);
disable_work_sync(&priv->rx_mode_work);
}
}

View file

@ -391,6 +391,7 @@ struct cpsw_priv {
u32 tx_packet_min;
struct cpsw_ale_ratelimit ale_bc_ratelimit;
struct cpsw_ale_ratelimit ale_mc_ratelimit;
struct work_struct rx_mode_work;
};
#define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)

View file

@ -1567,9 +1567,10 @@ destroy_macvlan_port:
/* the macvlan port may be freed by macvlan_uninit when fail to register.
* so we destroy the macvlan port only when it's valid.
*/
if (create && macvlan_port_get_rtnl(lowerdev)) {
if (macvlan_port_get_rtnl(lowerdev)) {
macvlan_flush_sources(port, vlan);
macvlan_port_destroy(port->dev);
if (create)
macvlan_port_destroy(port->dev);
}
return err;
}

View file

@ -479,6 +479,8 @@ static void sfp_quirk_ubnt_uf_instant(const struct sfp_eeprom_id *id,
linkmode_zero(caps->link_modes);
linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
caps->link_modes);
phy_interface_zero(caps->interfaces);
__set_bit(PHY_INTERFACE_MODE_1000BASEX, caps->interfaces);
}
#define SFP_QUIRK(_v, _p, _s, _f) \

View file

@ -8530,19 +8530,6 @@ static int rtl8152_system_resume(struct r8152 *tp)
usb_submit_urb(tp->intr_urb, GFP_NOIO);
}
/* If the device is RTL8152_INACCESSIBLE here then we should do a
* reset. This is important because the usb_lock_device_for_reset()
* that happens as a result of usb_queue_reset_device() will silently
* fail if the device was suspended or if too much time passed.
*
* NOTE: The device is locked here so we can directly do the reset.
* We don't need usb_lock_device_for_reset() because that's just a
* wrapper over device_lock() and device_resume() (which calls us)
* does that for us.
*/
if (test_bit(RTL8152_INACCESSIBLE, &tp->flags))
usb_reset_device(tp->udev);
return 0;
}
@ -8653,19 +8640,33 @@ static int rtl8152_suspend(struct usb_interface *intf, pm_message_t message)
static int rtl8152_resume(struct usb_interface *intf)
{
struct r8152 *tp = usb_get_intfdata(intf);
bool runtime_resume = test_bit(SELECTIVE_SUSPEND, &tp->flags);
int ret;
mutex_lock(&tp->control);
rtl_reset_ocp_base(tp);
if (test_bit(SELECTIVE_SUSPEND, &tp->flags))
if (runtime_resume)
ret = rtl8152_runtime_resume(tp);
else
ret = rtl8152_system_resume(tp);
mutex_unlock(&tp->control);
/* If the device is RTL8152_INACCESSIBLE here then we should do a
* reset. This is important because the usb_lock_device_for_reset()
* that happens as a result of usb_queue_reset_device() will silently
* fail if the device was suspended or if too much time passed.
*
* NOTE: The device is locked here so we can directly do the reset.
* We don't need usb_lock_device_for_reset() because that's just a
* wrapper over device_lock() and device_resume() (which calls us)
* does that for us.
*/
if (!runtime_resume && test_bit(RTL8152_INACCESSIBLE, &tp->flags))
usb_reset_device(tp->udev);
return ret;
}

View file

@ -55,8 +55,6 @@ void iwl_mld_cleanup_vif(void *data, u8 *mac, struct ieee80211_vif *vif)
ieee80211_iter_keys(mld->hw, vif, iwl_mld_cleanup_keys_iter, NULL);
wiphy_delayed_work_cancel(mld->wiphy, &mld_vif->mlo_scan_start_wk);
CLEANUP_STRUCT(mld_vif);
}

View file

@ -1840,6 +1840,8 @@ static int iwl_mld_move_sta_state_down(struct iwl_mld *mld,
wiphy_work_cancel(mld->wiphy, &mld_vif->emlsr.unblock_tpt_wk);
wiphy_delayed_work_cancel(mld->wiphy,
&mld_vif->emlsr.check_tpt_wk);
wiphy_delayed_work_cancel(mld->wiphy,
&mld_vif->mlo_scan_start_wk);
iwl_mld_reset_cca_40mhz_workaround(mld, vif);
iwl_mld_smps_workaround(mld, vif, true);

View file

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2012-2014, 2018-2025 Intel Corporation
* Copyright (C) 2012-2014, 2018-2026 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@ -3214,6 +3214,8 @@ void iwl_mvm_fast_suspend(struct iwl_mvm *mvm)
IWL_DEBUG_WOWLAN(mvm, "Starting fast suspend flow\n");
iwl_mvm_pause_tcm(mvm, true);
mvm->fast_resume = true;
set_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
@ -3270,6 +3272,8 @@ int iwl_mvm_fast_resume(struct iwl_mvm *mvm)
mvm->trans->state = IWL_TRANS_NO_FW;
}
iwl_mvm_resume_tcm(mvm);
out:
clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
mvm->fast_resume = false;

View file

@ -806,8 +806,8 @@ static void nvme_unmap_data(struct request *req)
if (!blk_rq_dma_unmap(req, dma_dev, &iod->dma_state, iod->total_len,
map)) {
if (nvme_pci_cmd_use_sgl(&iod->cmd))
nvme_free_sgls(req, iod->descriptors[0],
&iod->cmd.common.dptr.sgl, attrs);
nvme_free_sgls(req, &iod->cmd.common.dptr.sgl,
iod->descriptors[0], attrs);
else
nvme_free_prps(req, attrs);
}

View file

@ -180,9 +180,10 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
static void nvmet_bio_done(struct bio *bio)
{
struct nvmet_req *req = bio->bi_private;
blk_status_t blk_status = bio->bi_status;
nvmet_req_complete(req, blk_to_nvme_status(req, bio->bi_status));
nvmet_req_bio_put(req, bio);
nvmet_req_complete(req, blk_to_nvme_status(req, blk_status));
}
#ifdef CONFIG_BLK_DEV_INTEGRITY

View file

@ -157,13 +157,19 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
phys_addr_t base, size;
int i, len;
const __be32 *prop;
bool nomap;
bool nomap, default_cma;
prop = of_flat_dt_get_addr_size_prop(node, "reg", &len);
if (!prop)
return -ENOENT;
nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL);
if (default_cma && cma_skip_dt_default_reserved_mem()) {
pr_err("Skipping dt linux,cma-default for \"cma=\" kernel param.\n");
return -EINVAL;
}
for (i = 0; i < len; i++) {
u64 b, s;
@ -248,10 +254,13 @@ void __init fdt_scan_reserved_mem_reg_nodes(void)
fdt_for_each_subnode(child, fdt, node) {
const char *uname;
bool default_cma = of_get_flat_dt_prop(child, "linux,cma-default", NULL);
u64 b, s;
if (!of_fdt_device_is_available(fdt, child))
continue;
if (default_cma && cma_skip_dt_default_reserved_mem())
continue;
if (!of_flat_dt_get_addr_size(child, "reg", &b, &s))
continue;
@ -389,7 +398,7 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam
phys_addr_t base = 0, align = 0, size;
int i, len;
const __be32 *prop;
bool nomap;
bool nomap, default_cma;
int ret;
prop = of_get_flat_dt_prop(node, "size", &len);
@ -413,6 +422,12 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam
}
nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL);
if (default_cma && cma_skip_dt_default_reserved_mem()) {
pr_err("Skipping dt linux,cma-default for \"cma=\" kernel param.\n");
return -EINVAL;
}
/* Need adjust the alignment to satisfy the CMA requirement */
if (IS_ENABLED(CONFIG_CMA)

View file

@ -11,7 +11,6 @@
#include <linux/pci_regs.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
#include <linux/tsm.h>
#include "pci.h"
@ -168,7 +167,7 @@ void pci_ide_init(struct pci_dev *pdev)
for (u16 i = 0; i < nr_streams; i++) {
int pos = __sel_ide_offset(ide_cap, nr_link_ide, i, nr_ide_mem);
pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CAP, &val);
pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CTL, &val);
if (val & PCI_IDE_SEL_CTL_EN)
continue;
val &= ~PCI_IDE_SEL_CTL_ID;
@ -283,8 +282,8 @@ struct pci_ide *pci_ide_stream_alloc(struct pci_dev *pdev)
/* for SR-IOV case, cover all VFs */
num_vf = pci_num_vf(pdev);
if (num_vf)
rid_end = PCI_DEVID(pci_iov_virtfn_bus(pdev, num_vf),
pci_iov_virtfn_devfn(pdev, num_vf));
rid_end = PCI_DEVID(pci_iov_virtfn_bus(pdev, num_vf - 1),
pci_iov_virtfn_devfn(pdev, num_vf - 1));
else
rid_end = pci_dev_id(pdev);
@ -373,9 +372,6 @@ void pci_ide_stream_release(struct pci_ide *ide)
if (ide->partner[PCI_IDE_EP].enable)
pci_ide_stream_disable(pdev, ide);
if (ide->tsm_dev)
tsm_ide_stream_unregister(ide);
if (ide->partner[PCI_IDE_RP].setup)
pci_ide_stream_teardown(rp, ide);

View file

@ -3545,10 +3545,9 @@ static int rockchip_pmx_set(struct pinctrl_dev *pctldev, unsigned selector,
return 0;
}
static int rockchip_pmx_gpio_set_direction(struct pinctrl_dev *pctldev,
struct pinctrl_gpio_range *range,
unsigned offset,
bool input)
static int rockchip_pmx_gpio_request_enable(struct pinctrl_dev *pctldev,
struct pinctrl_gpio_range *range,
unsigned int offset)
{
struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
struct rockchip_pin_bank *bank;
@ -3562,7 +3561,7 @@ static const struct pinmux_ops rockchip_pmx_ops = {
.get_function_name = rockchip_pmx_get_func_name,
.get_function_groups = rockchip_pmx_get_groups,
.set_mux = rockchip_pmx_set,
.gpio_set_direction = rockchip_pmx_gpio_set_direction,
.gpio_request_enable = rockchip_pmx_gpio_request_enable,
};
/*

View file

@ -302,6 +302,13 @@ static const struct dmi_system_id fwbug_list[] = {
DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"),
}
},
{
.ident = "MECHREVO Wujie 15X Pro",
.driver_data = &quirk_spurious_8042,
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "WUJIE Series-X5SP4NAG"),
}
},
{}
};

View file

@ -207,7 +207,12 @@ static ssize_t cmpc_accel_sensitivity_show_v4(struct device *dev,
acpi = to_acpi_device(dev);
inputdev = dev_get_drvdata(&acpi->dev);
if (!inputdev)
return -ENXIO;
accel = dev_get_drvdata(&inputdev->dev);
if (!accel)
return -ENXIO;
return sysfs_emit(buf, "%d\n", accel->sensitivity);
}
@ -224,7 +229,12 @@ static ssize_t cmpc_accel_sensitivity_store_v4(struct device *dev,
acpi = to_acpi_device(dev);
inputdev = dev_get_drvdata(&acpi->dev);
if (!inputdev)
return -ENXIO;
accel = dev_get_drvdata(&inputdev->dev);
if (!accel)
return -ENXIO;
r = kstrtoul(buf, 0, &sensitivity);
if (r)
@ -256,7 +266,12 @@ static ssize_t cmpc_accel_g_select_show_v4(struct device *dev,
acpi = to_acpi_device(dev);
inputdev = dev_get_drvdata(&acpi->dev);
if (!inputdev)
return -ENXIO;
accel = dev_get_drvdata(&inputdev->dev);
if (!accel)
return -ENXIO;
return sysfs_emit(buf, "%d\n", accel->g_select);
}
@ -273,7 +288,12 @@ static ssize_t cmpc_accel_g_select_store_v4(struct device *dev,
acpi = to_acpi_device(dev);
inputdev = dev_get_drvdata(&acpi->dev);
if (!inputdev)
return -ENXIO;
accel = dev_get_drvdata(&inputdev->dev);
if (!accel)
return -ENXIO;
r = kstrtoul(buf, 0, &g_select);
if (r)
@ -302,6 +322,8 @@ static int cmpc_accel_open_v4(struct input_dev *input)
acpi = to_acpi_device(input->dev.parent);
accel = dev_get_drvdata(&input->dev);
if (!accel)
return -ENXIO;
cmpc_accel_set_sensitivity_v4(acpi->handle, accel->sensitivity);
cmpc_accel_set_g_select_v4(acpi->handle, accel->g_select);
@ -549,7 +571,12 @@ static ssize_t cmpc_accel_sensitivity_show(struct device *dev,
acpi = to_acpi_device(dev);
inputdev = dev_get_drvdata(&acpi->dev);
if (!inputdev)
return -ENXIO;
accel = dev_get_drvdata(&inputdev->dev);
if (!accel)
return -ENXIO;
return sysfs_emit(buf, "%d\n", accel->sensitivity);
}
@ -566,7 +593,12 @@ static ssize_t cmpc_accel_sensitivity_store(struct device *dev,
acpi = to_acpi_device(dev);
inputdev = dev_get_drvdata(&acpi->dev);
if (!inputdev)
return -ENXIO;
accel = dev_get_drvdata(&inputdev->dev);
if (!accel)
return -ENXIO;
r = kstrtoul(buf, 0, &sensitivity);
if (r)

View file

@ -696,6 +696,11 @@ static int hp_init_bios_package_attribute(enum hp_wmi_data_type attr_type,
return ret;
}
if (!str_value || !str_value[0]) {
pr_debug("Ignoring attribute with empty name\n");
goto pack_attr_exit;
}
/* All duplicate attributes found are ignored */
duplicate = kset_find_obj(temp_kset, str_value);
if (duplicate) {

View file

@ -316,7 +316,7 @@ static int intel_plr_probe(struct auxiliary_device *auxdev, const struct auxilia
snprintf(name, sizeof(name), "domain%d", i);
dentry = debugfs_create_dir(name, plr->dbgfs_dir);
debugfs_create_file("status", 0444, dentry, &plr->die_info[i],
debugfs_create_file("status", 0644, dentry, &plr->die_info[i],
&plr_status_fops);
}

View file

@ -449,7 +449,7 @@ static int telem_pss_states_show(struct seq_file *s, void *unused)
for (index = 0; index < debugfs_conf->pss_ltr_evts; index++) {
seq_printf(s, "%-32s\t%u\n",
debugfs_conf->pss_ltr_data[index].name,
pss_s0ix_wakeup[index]);
pss_ltr_blkd[index]);
}
seq_puts(s, "\n--------------------------------------\n");
@ -459,7 +459,7 @@ static int telem_pss_states_show(struct seq_file *s, void *unused)
for (index = 0; index < debugfs_conf->pss_wakeup_evts; index++) {
seq_printf(s, "%-32s\t%u\n",
debugfs_conf->pss_wakeup[index].name,
pss_ltr_blkd[index]);
pss_s0ix_wakeup[index]);
}
return 0;

View file

@ -610,7 +610,7 @@ static int telemetry_setup(struct platform_device *pdev)
/* Get telemetry Info */
events = (read_buf & TELEM_INFO_SRAMEVTS_MASK) >>
TELEM_INFO_SRAMEVTS_SHIFT;
event_regs = read_buf & TELEM_INFO_SRAMEVTS_MASK;
event_regs = read_buf & TELEM_INFO_NENABLES_MASK;
if ((events < TELEM_MAX_EVENTS_SRAM) ||
(event_regs < TELEM_MAX_EVENTS_SRAM)) {
dev_err(&pdev->dev, "PSS:Insufficient Space for SRAM Trace\n");

View file

@ -766,6 +766,7 @@ static const struct intel_vsec_platform_info lnl_info = {
#define PCI_DEVICE_ID_INTEL_VSEC_LNL_M 0x647d
#define PCI_DEVICE_ID_INTEL_VSEC_PTL 0xb07d
#define PCI_DEVICE_ID_INTEL_VSEC_WCL 0xfd7d
#define PCI_DEVICE_ID_INTEL_VSEC_NVL 0xd70d
static const struct pci_device_id intel_vsec_pci_ids[] = {
{ PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) },
@ -778,6 +779,7 @@ static const struct pci_device_id intel_vsec_pci_ids[] = {
{ PCI_DEVICE_DATA(INTEL, VSEC_LNL_M, &lnl_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_PTL, &mtl_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_WCL, &mtl_info) },
{ PCI_DEVICE_DATA(INTEL, VSEC_NVL, &mtl_info) },
{ }
};
MODULE_DEVICE_TABLE(pci, intel_vsec_pci_ids);
