Compare commits


No commits in common. "master" and "v7.0-rc2" have entirely different histories.

518 changed files with 2740 additions and 6453 deletions


@ -219,7 +219,6 @@ Daniele Alessandrelli <daniele.alessandrelli@gmail.com> <daniele.alessandrelli@i
Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com>
David Brownell <david-b@pacbell.net>
David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org>
David Gow <david@davidgow.net> <davidgow@google.com>
David Heidelberg <david@ixit.cz> <d.okias@gmail.com>
David Hildenbrand <david@kernel.org> <david@redhat.com>
David Rheinsberg <david@readahead.eu> <dh.herrmann@gmail.com>
@ -354,7 +353,6 @@ Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@opinsys.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
Jason Xing <kerneljasonxing@gmail.com> <kernelxing@tencent.com>
<javier@osg.samsung.com> <javier.martinez@collabora.co.uk>
Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com>
Jayachandran C <c.jayachandran@gmail.com> <jayachandranc@netlogicmicro.com>
@ -403,7 +401,6 @@ Jiri Slaby <jirislaby@kernel.org> <xslaby@fi.muni.cz>
Jisheng Zhang <jszhang@kernel.org> <jszhang@marvell.com>
Jisheng Zhang <jszhang@kernel.org> <Jisheng.Zhang@synaptics.com>
Jishnu Prakash <quic_jprakash@quicinc.com> <jprakash@codeaurora.org>
Joe Damato <joe@dama.to> <jdamato@fastly.com>
Joel Granados <joel.granados@kernel.org> <j.granados@samsung.com>
Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>


@ -1242,10 +1242,6 @@ N: Veaceslav Falico
E: vfalico@gmail.com
D: Co-maintainer and co-author of the network bonding driver.
N: Thomas Falcon
E: tlfalcon@linux.ibm.com
D: Initial author of the IBM ibmvnic network driver
N: János Farkas
E: chexum@shadow.banki.hu
D: romfs, various (mostly networking) fixes
@ -2419,10 +2415,6 @@ S: Am Muehlenweg 38
S: D53424 Remagen
S: Germany
N: Jonathan Lemon
E: jonathan.lemon@gmail.com
D: OpenCompute PTP clock driver (ptp_ocp)
N: Colin Leroy
E: colin@colino.net
W: http://www.geekounet.org/


@ -1,4 +1,4 @@
What: /sys/bus/platform/devices/INOU0000:XX/fn_lock
What: /sys/bus/platform/devices/INOU0000:XX/fn_lock_toggle_enable
Date: November 2025
KernelVersion: 6.19
Contact: Armin Wolf <W_Armin@gmx.de>
@ -8,15 +8,15 @@ Description:
Reading this file returns the current enable status of the FN lock functionality.
What: /sys/bus/platform/devices/INOU0000:XX/super_key_enable
What: /sys/bus/platform/devices/INOU0000:XX/super_key_toggle_enable
Date: November 2025
KernelVersion: 6.19
Contact: Armin Wolf <W_Armin@gmx.de>
Description:
Allows userspace applications to enable/disable the super key of the integrated
keyboard by writing "1"/"0" into this file.
Allows userspace applications to enable/disable the super key functionality
of the integrated keyboard by writing "1"/"0" into this file.
Reading this file returns the current enable status of the super key.
Reading this file returns the current enable status of the super key functionality.
What: /sys/bus/platform/devices/INOU0000:XX/touchpad_toggle_enable
Date: November 2025


@ -74,7 +74,6 @@
TPM TPM drivers are enabled.
UMS USB Mass Storage support is enabled.
USB USB support is enabled.
NVME NVMe support is enabled.
USBHID USB Human Interface Device support is enabled.
V4L Video For Linux support is enabled.
VGA The VGA console has been enabled.
@ -4788,18 +4787,6 @@ Kernel parameters
This can be set from sysctl after boot.
See Documentation/admin-guide/sysctl/vm.rst for details.
nvme.quirks= [NVME] A list of quirk entries to augment the built-in
nvme quirk list. List entries are separated by a
'-' character.
Each entry has the form VendorID:ProductID:quirk_names.
The IDs are 4-digits hex numbers and quirk_names is a
list of quirk names separated by commas. A quirk name
can be prefixed by '^', meaning that the specified
quirk must be disabled.
Example:
nvme.quirks=7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi
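The entry syntax above can be sketched with a small parser; this is purely illustrative (the function name and return shape are invented here, it is not the kernel's parser):

```python
def parse_nvme_quirks(param):
    """Parse a string in the nvme.quirks= format described above.

    Returns a list of (vendor, product, enabled, disabled) tuples,
    where enabled/disabled are sets of quirk names.
    """
    entries = []
    for entry in param.split("-"):          # entries separated by '-'
        vendor, product, names = entry.split(":")
        enabled, disabled = set(), set()
        for name in names.split(","):       # quirk names separated by ','
            if name.startswith("^"):        # '^' prefix disables the quirk
                disabled.add(name[1:])
            else:
                enabled.add(name)
        entries.append((int(vendor, 16), int(product, 16), enabled, disabled))
    return entries


quirks = parse_nvme_quirks("7710:2267:bogus_nid,^identify_cns-9900:7711:broken_msi")
```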
ohci1394_dma=early [HW,EARLY] enable debugging via the ohci1394 driver.
See Documentation/core-api/debugging-via-ohci1394.rst for more
info.


@ -24,7 +24,7 @@ Keyboard settings
The ``uniwill-laptop`` driver allows the user to enable/disable:
- the FN lock and super key of the integrated keyboard
- the FN and super key lock functionality of the integrated keyboard
- the touchpad toggle functionality of the integrated touchpad
See Documentation/ABI/testing/sysfs-driver-uniwill-laptop for details.


@ -16,6 +16,7 @@ description: |
properties:
compatible:
enum:
- kontron,sa67mcu-hwmon
- kontron,sl28cpld-fan
reg:


@ -87,7 +87,6 @@ required:
allOf:
- $ref: can-controller.yaml#
- $ref: /schemas/memory-controllers/mc-peripheral-props.yaml
- if:
properties:
compatible:


@ -23,7 +23,6 @@ properties:
enum:
- nvidia,tegra210-audio-graph-card
- nvidia,tegra186-audio-graph-card
- nvidia,tegra238-audio-graph-card
- nvidia,tegra264-audio-graph-card
clocks:


@ -20,7 +20,6 @@ properties:
- renesas,r9a07g044-ssi # RZ/G2{L,LC}
- renesas,r9a07g054-ssi # RZ/V2L
- renesas,r9a08g045-ssi # RZ/G3S
- renesas,r9a08g046-ssi # RZ/G3L
- const: renesas,rz-ssi
reg:


@ -57,7 +57,7 @@ Supported chips:
- https://ww1.microchip.com/downloads/en/DeviceDoc/EMC1438%20DS%20Rev.%201.0%20(04-29-10).pdf
Author:
Kalhan Trisal <kalhan.trisal@intel.com>
Kalhan Trisal <kalhan.trisal@intel.com>
Description


@ -220,6 +220,7 @@ Hardware Monitoring Kernel Drivers
q54sj108a2
qnap-mcu-hwmon
raspberrypi-hwmon
sa67
sbrmi
sbtsi_temp
sch5627


@ -0,0 +1,41 @@
.. SPDX-License-Identifier: GPL-2.0-only
Kernel driver sa67mcu
=====================
Supported chips:
* Kontron sa67mcu
Prefix: 'sa67mcu'
Datasheet: not available
Author: Michael Walle <mwalle@kernel.org>
Description
-----------
The sa67mcu is a board management controller which also exposes a hardware
monitoring controller.
The controller has two voltage sensors and one temperature sensor. The values
are held in two 8-bit registers which form one 16-bit value. Reading the lower
byte also captures the high byte to make the access atomic. The unit of the
voltage sensors is 1 mV and the unit of the temperature sensor is 0.1 degC.
Sysfs entries
-------------
The following attributes are supported.
======================= ========================================================
in0_label "VDDIN"
in0_input Measured VDDIN voltage.
in1_label "VDD_RTC"
in1_input Measured VDD_RTC voltage.
temp1_input MCU temperature. Roughly the board temperature.
======================= ========================================================
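The byte-pair register layout described above can be sketched as follows; the register addresses and little-endian byte order here are assumptions for illustration, since the datasheet is not available:

```python
def read_voltage_mv(regs, low_addr):
    """Combine a low/high 8-bit register pair into one 16-bit reading.

    Sketch of the layout described above. On the real device, reading
    the low byte latches the high byte, so the 16-bit value is
    consistent; this model just combines the two bytes.
    """
    low = regs[low_addr]
    high = regs[low_addr + 1]
    return (high << 8) | low  # result in 1 mV units


regs = {0x10: 0x4C, 0x11: 0x0D}  # hypothetical register file
print(read_voltage_mv(regs, 0x10))  # 0x0D4C = 3404 -> 3.404 V
```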


@ -152,7 +152,7 @@ operations:
- compound-ops
-
name: threads-set
doc: set the maximum number of running threads
doc: set the number of running threads
attribute-set: server
flags: [admin-perm]
do:
@ -165,7 +165,7 @@ operations:
- min-threads
-
name: threads-get
doc: get the maximum number of running threads
doc: get the number of running threads
attribute-set: server
do:
reply:


@ -2372,10 +2372,6 @@ quirk_flags
audible volume
* bit 25: ``mixer_capture_min_mute``
Similar to bit 24 but for capture streams
* bit 26: ``skip_iface_setup``
Skip the probe-time interface setup (usb_set_interface,
init_pitch, init_sample_rate); redundant with
snd_usb_endpoint_prepare() at stream-open time
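Composing a quirk_flags value from bit numbers like those listed above is plain bit arithmetic; the constant names below are invented for illustration, only the bit positions come from the text:

```python
# Bits from the quirk_flags list above (names are ad-hoc, not the driver's).
MIXER_CAPTURE_MIN_MUTE = 1 << 25  # bit 25
SKIP_IFACE_SETUP = 1 << 26        # bit 26

# A module would be loaded with the OR of the desired bits, e.g.
# snd-usb-audio quirk_flags=0x6000000
quirk_flags = MIXER_CAPTURE_MIN_MUTE | SKIP_IFACE_SETUP
print(hex(quirk_flags))  # 0x6000000
```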
This module supports multiple devices, autoprobe and hotplugging.


@ -993,8 +993,10 @@ F: Documentation/devicetree/bindings/thermal/amazon,al-thermal.yaml
F: drivers/thermal/thermal_mmio.c
AMAZON ETHERNET DRIVERS
M: Shay Agroskin <shayagr@amazon.com>
M: Arthur Kiyanovski <akiyano@amazon.com>
M: David Arinzon <darinzon@amazon.com>
R: David Arinzon <darinzon@amazon.com>
R: Saeed Bishara <saeedb@amazon.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
@ -4615,6 +4617,7 @@ F: drivers/bluetooth/
BLUETOOTH SUBSYSTEM
M: Marcel Holtmann <marcel@holtmann.org>
M: Johan Hedberg <johan.hedberg@gmail.com>
M: Luiz Augusto von Dentz <luiz.dentz@gmail.com>
L: linux-bluetooth@vger.kernel.org
S: Supported
@ -10168,8 +10171,8 @@ F: drivers/i2c/busses/i2c-cpm.c
FREESCALE IMX / MXC FEC DRIVER
M: Wei Fang <wei.fang@nxp.com>
R: Frank Li <frank.li@nxp.com>
R: Shenwei Wang <shenwei.wang@nxp.com>
R: Clark Wang <xiaoning.wang@nxp.com>
L: imx@lists.linux.dev
L: netdev@vger.kernel.org
S: Maintained
@ -10481,7 +10484,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
F: Documentation/trace/ftrace*
F: arch/*/*/*/*ftrace*
F: arch/*/*/*ftrace*
F: include/*/*ftrace*
F: include/*/ftrace.h
F: kernel/trace/fgraph.c
F: kernel/trace/ftrace*
F: samples/ftrace
@ -12213,6 +12216,7 @@ IBM Power SRIOV Virtual NIC Device Driver
M: Haren Myneni <haren@linux.ibm.com>
M: Rick Lindsley <ricklind@linux.ibm.com>
R: Nick Child <nnac123@linux.ibm.com>
R: Thomas Falcon <tlfalcon@linux.ibm.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/ibm/ibmvnic.*
@ -13938,7 +13942,7 @@ F: fs/smb/server/
KERNEL UNIT TESTING FRAMEWORK (KUnit)
M: Brendan Higgins <brendan.higgins@linux.dev>
M: David Gow <david@davidgow.net>
M: David Gow <davidgow@google.com>
R: Rae Moar <raemoar63@gmail.com>
L: linux-kselftest@vger.kernel.org
L: kunit-dev@googlegroups.com
@ -14758,7 +14762,7 @@ F: drivers/misc/lis3lv02d/
F: drivers/platform/x86/hp/hp_accel.c
LIST KUNIT TEST
M: David Gow <david@davidgow.net>
M: David Gow <davidgow@google.com>
L: linux-kselftest@vger.kernel.org
L: kunit-dev@googlegroups.com
S: Maintained
@ -15371,8 +15375,10 @@ F: drivers/crypto/marvell/
F: include/linux/soc/marvell/octeontx2/
MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
M: Mirko Lindner <mlindner@marvell.com>
M: Stephen Hemminger <stephen@networkplumber.org>
L: netdev@vger.kernel.org
S: Orphan
S: Odd fixes
F: drivers/net/ethernet/marvell/sk*
MARVELL LIBERTAS WIRELESS DRIVER
@ -15469,6 +15475,7 @@ MARVELL OCTEONTX2 RVU ADMIN FUNCTION DRIVER
M: Sunil Goutham <sgoutham@marvell.com>
M: Linu Cherian <lcherian@marvell.com>
M: Geetha sowjanya <gakula@marvell.com>
M: Jerin Jacob <jerinj@marvell.com>
M: hariprasad <hkelam@marvell.com>
M: Subbaraya Sundeep <sbhatta@marvell.com>
L: netdev@vger.kernel.org
@ -15483,7 +15490,7 @@ S: Supported
F: drivers/perf/marvell_pem_pmu.c
MARVELL PRESTERA ETHERNET SWITCH DRIVER
M: Elad Nachman <enachman@marvell.com>
M: Taras Chornyi <taras.chornyi@plvision.eu>
S: Supported
W: https://github.com/Marvell-switching/switchdev-prestera
F: drivers/net/ethernet/marvell/prestera/
@ -16157,6 +16164,7 @@ F: drivers/dma/mediatek/
MEDIATEK ETHERNET DRIVER
M: Felix Fietkau <nbd@nbd.name>
M: Sean Wang <sean.wang@mediatek.com>
M: Lorenzo Bianconi <lorenzo@kernel.org>
L: netdev@vger.kernel.org
S: Maintained
@ -16349,6 +16357,8 @@ F: include/soc/mediatek/smi.h
MEDIATEK SWITCH DRIVER
M: Chester A. Unal <chester.a.unal@arinc9.com>
M: Daniel Golle <daniel@makrotopia.org>
M: DENG Qingfang <dqfext@gmail.com>
M: Sean Wang <sean.wang@mediatek.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/dsa/mt7530-mdio.c
@ -19216,6 +19226,8 @@ F: tools/objtool/
OCELOT ETHERNET SWITCH DRIVER
M: Vladimir Oltean <vladimir.oltean@nxp.com>
M: Claudiu Manoil <claudiu.manoil@nxp.com>
M: Alexandre Belloni <alexandre.belloni@bootlin.com>
M: UNGLinuxDriver@microchip.com
L: netdev@vger.kernel.org
S: Supported
@ -19801,6 +19813,7 @@ F: arch/*/boot/dts/
F: include/dt-bindings/
OPENCOMPUTE PTP CLOCK DRIVER
M: Jonathan Lemon <jonathan.lemon@gmail.com>
M: Vadim Fedorenko <vadim.fedorenko@linux.dev>
L: netdev@vger.kernel.org
S: Maintained
@ -20108,8 +20121,9 @@ F: Documentation/devicetree/bindings/pci/marvell,armada-3700-pcie.yaml
F: drivers/pci/controller/pci-aardvark.c
PCI DRIVER FOR ALTERA PCIE IP
M: Joyce Ooi <joyce.ooi@intel.com>
L: linux-pci@vger.kernel.org
S: Orphan
S: Supported
F: Documentation/devicetree/bindings/pci/altr,pcie-root-port.yaml
F: drivers/pci/controller/pcie-altera.c
@ -20354,8 +20368,9 @@ S: Supported
F: Documentation/PCI/pci-error-recovery.rst
PCI MSI DRIVER FOR ALTERA MSI IP
M: Joyce Ooi <joyce.ooi@intel.com>
L: linux-pci@vger.kernel.org
S: Orphan
S: Supported
F: Documentation/devicetree/bindings/interrupt-controller/altr,msi-controller.yaml
F: drivers/pci/controller/pcie-altera-msi.c
@ -21442,8 +21457,9 @@ S: Supported
F: drivers/scsi/qedi/
QLOGIC QL4xxx ETHERNET DRIVER
M: Manish Chopra <manishc@marvell.com>
L: netdev@vger.kernel.org
S: Orphan
S: Maintained
F: drivers/net/ethernet/qlogic/qed/
F: drivers/net/ethernet/qlogic/qede/
F: include/linux/qed/
@ -24320,6 +24336,7 @@ F: Documentation/devicetree/bindings/interrupt-controller/kontron,sl28cpld-intc.
F: Documentation/devicetree/bindings/pwm/kontron,sl28cpld-pwm.yaml
F: Documentation/devicetree/bindings/watchdog/kontron,sl28cpld-wdt.yaml
F: drivers/gpio/gpio-sl28cpld.c
F: drivers/hwmon/sa67mcu-hwmon.c
F: drivers/hwmon/sl28cpld-hwmon.c
F: drivers/irqchip/irq-sl28cpld.c
F: drivers/pwm/pwm-sl28cpld.c


@ -1497,13 +1497,13 @@ ifneq ($(wildcard $(resolve_btfids_O)),)
$(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
endif
PHONY += objtool_clean objtool_mrproper
PHONY += objtool_clean
objtool_O = $(abspath $(objtree))/tools/objtool
objtool_clean objtool_mrproper:
objtool_clean:
ifneq ($(wildcard $(objtool_O)),)
$(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) $(patsubst objtool_%,%,$@)
$(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) clean
endif
tools/: FORCE
@ -1686,7 +1686,7 @@ PHONY += $(mrproper-dirs) mrproper
$(mrproper-dirs):
$(Q)$(MAKE) $(clean)=$(patsubst _mrproper_%,%,$@)
mrproper: clean objtool_mrproper $(mrproper-dirs)
mrproper: clean $(mrproper-dirs)
$(call cmd,rmfiles)
@find . $(RCS_FIND_IGNORE) \
\( -name '*.rmeta' \) \


@ -71,7 +71,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -123,7 +123,6 @@ SECTIONS
_end = . ;
STABS_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -21,7 +21,6 @@ SECTIONS
COMMON_DISCARDS
*(.ARM.exidx*)
*(.ARM.extab*)
*(.modinfo)
*(.note.*)
*(.rel.*)
*(.printk_index)


@ -154,7 +154,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ARM_DETAILS
ARM_ASSERTS


@ -153,7 +153,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ARM_DETAILS
ARM_ASSERTS


@ -91,9 +91,8 @@ __XCHG_GEN(_mb)
#define __xchg_wrapper(sfx, ptr, x) \
({ \
__typeof__(*(ptr)) __ret; \
__ret = (__force __typeof__(*(ptr))) \
__arch_xchg##sfx((__force unsigned long)(x), (ptr), \
sizeof(*(ptr))); \
__ret = (__typeof__(*(ptr))) \
__arch_xchg##sfx((unsigned long)(x), (ptr), sizeof(*(ptr))); \
__ret; \
})
@ -176,10 +175,9 @@ __CMPXCHG_GEN(_mb)
#define __cmpxchg_wrapper(sfx, ptr, o, n) \
({ \
__typeof__(*(ptr)) __ret; \
__ret = (__force __typeof__(*(ptr))) \
__cmpxchg##sfx((ptr), (__force unsigned long)(o), \
(__force unsigned long)(n), \
sizeof(*(ptr))); \
__ret = (__typeof__(*(ptr))) \
__cmpxchg##sfx((ptr), (unsigned long)(o), \
(unsigned long)(n), sizeof(*(ptr))); \
__ret; \
})


@ -50,11 +50,11 @@
#define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
#define _PAGE_KERNEL (PROT_NORMAL | PTE_DIRTY)
#define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY | PTE_DIRTY)
#define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY | PTE_DIRTY)
#define _PAGE_KERNEL_EXEC ((PROT_NORMAL & ~PTE_PXN) | PTE_DIRTY)
#define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT | PTE_DIRTY)
#define _PAGE_KERNEL (PROT_NORMAL)
#define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
#define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
#define _PAGE_KERNEL_EXEC (PROT_NORMAL & ~PTE_PXN)
#define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
#define _PAGE_SHARED (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
#define _PAGE_SHARED_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)


@ -2,10 +2,6 @@
#ifndef _ASM_RUNTIME_CONST_H
#define _ASM_RUNTIME_CONST_H
#ifdef MODULE
#error "Cannot use runtime-const infrastructure from modules"
#endif
#include <asm/cacheflush.h>
/* Sigh. You can still run arm64 in BE mode */


@ -349,7 +349,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
HEAD_SYMBOLS


@ -599,27 +599,6 @@ void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
}
EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
static bool contpte_all_subptes_match_access_flags(pte_t *ptep, pte_t entry)
{
pte_t *cont_ptep = contpte_align_down(ptep);
/*
* PFNs differ per sub-PTE. Match only bits consumed by
* __ptep_set_access_flags(): AF, DIRTY and write permission.
*/
const pteval_t cmp_mask = PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY;
pteval_t entry_cmp = pte_val(entry) & cmp_mask;
int i;
for (i = 0; i < CONT_PTES; i++) {
pteval_t pte_cmp = pte_val(__ptep_get(cont_ptep + i)) & cmp_mask;
if (pte_cmp != entry_cmp)
return false;
}
return true;
}
int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t entry, int dirty)
@ -629,37 +608,13 @@ int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
int i;
/*
* Check whether all sub-PTEs in the CONT block already match the
* requested access flags/write permission, using raw per-PTE values
* rather than the gathered ptep_get() view.
*
* __ptep_set_access_flags() can update AF, dirty and write
* permission, but only to make the mapping more permissive.
*
* ptep_get() gathers AF/dirty state across the whole CONT block,
* which is correct for a CPU with FEAT_HAFDBS. But page-table
* walkers that evaluate each descriptor individually (e.g. a CPU
* without DBM support, or an SMMU without HTTU, or with HA/HD
* disabled in CD.TCR) can keep faulting on the target sub-PTE if
* only a sibling has been updated. Gathering can therefore cause
* false no-ops when only a sibling has been updated:
* - write faults: target still has PTE_RDONLY (needs PTE_RDONLY cleared)
* - read faults: target still lacks PTE_AF
*
* Per Arm ARM (DDI 0487) D8.7.1, any sub-PTE in a CONT range may
* become the effective cached translation, so all entries must have
* consistent attributes. Check the full CONT block before returning
* no-op, and when any sub-PTE mismatches, proceed to update the whole
* range.
* Gather the access/dirty bits for the contiguous range. If nothing has
changed, it's a no-op.
*/
if (contpte_all_subptes_match_access_flags(ptep, entry))
orig_pte = pte_mknoncont(ptep_get(ptep));
if (pte_val(orig_pte) == pte_val(entry))
return 0;
/*
* Use raw target pte (not gathered) for write-bit unfold decision.
*/
orig_pte = pte_mknoncont(__ptep_get(ptep));
/*
* We can fix up access/dirty bits without having to unfold the contig
* range. But if the write bit is changing, we must unfold.


@ -109,7 +109,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -62,7 +62,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
.hexagon.attributes 0 : { *(.hexagon.attributes) }


@ -147,7 +147,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
#ifdef CONFIG_EFI_STUB


@ -85,7 +85,6 @@ SECTIONS {
_end = .;
STABS_DEBUG
MODINFO
ELF_DETAILS
/* Sections to be discarded */


@ -58,7 +58,6 @@ SECTIONS
_end = . ;
STABS_DEBUG
MODINFO
ELF_DETAILS
/* Sections to be discarded */


@ -51,7 +51,6 @@ __init_begin = .;
_end = . ;
STABS_DEBUG
MODINFO
ELF_DETAILS
/* Sections to be discarded */


@ -217,7 +217,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
/* These must appear regardless of . */


@ -57,7 +57,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -101,7 +101,6 @@ SECTIONS
/* Throw in the debugging sections */
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
/* Sections to be discarded -- must be last */


@ -90,7 +90,6 @@ SECTIONS
/* Sections to be discarded */
DISCARDS
/DISCARD/ : {
*(.modinfo)
#ifdef CONFIG_64BIT
/* temporary hack until binutils is fixed to not emit these
* for static binaries


@ -85,7 +85,7 @@ extern void __update_cache(pte_t pte);
printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, (unsigned long)pgd_val(e))
/* This is the size of the initially mapped kernel memory */
#if defined(CONFIG_64BIT) || defined(CONFIG_KALLSYMS)
#if defined(CONFIG_64BIT)
#define KERNEL_INITIAL_ORDER 26 /* 1<<26 = 64MB */
#else
#define KERNEL_INITIAL_ORDER 25 /* 1<<25 = 32MB */


@ -56,7 +56,6 @@ ENTRY(parisc_kernel_start)
.import __bss_start,data
.import __bss_stop,data
.import __end,data
load32 PA(__bss_start),%r3
load32 PA(__bss_stop),%r4
@ -150,11 +149,7 @@ $cpu_ok:
* everything ... it will get remapped correctly later */
ldo 0+_PAGE_KERNEL_RWX(%r0),%r3 /* Hardwired 0 phys addr start */
load32 (1<<(KERNEL_INITIAL_ORDER-PAGE_SHIFT)),%r11 /* PFN count */
load32 PA(_end),%r1
SHRREG %r1,PAGE_SHIFT,%r1 /* %r1 is PFN count for _end symbol */
cmpb,<<,n %r11,%r1,1f
copy %r1,%r11 /* %r1 PFN count smaller than %r11 */
1: load32 PA(pg0),%r1
load32 PA(pg0),%r1
$pgt_fill_loop:
STREGM %r3,ASM_PTE_ENTRY_SIZE(%r1)


@ -120,6 +120,14 @@ void __init setup_arch(char **cmdline_p)
#endif
printk(KERN_CONT ".\n");
/*
* Check if initial kernel page mappings are sufficient.
* panic early if not, else we may access kernel functions
* and variables which can't be reached.
*/
if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
panic("KERNEL_INITIAL_ORDER too small!");
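The sufficiency check above amounts to one comparison between the physical address of `_end` and the initially mapped size; a sketch, using the 64-bit and 32-bit orders from the pgtable.h excerpt earlier in this diff:

```python
def kernel_initial_size(order):
    """KERNEL_INITIAL_SIZE = 1 << KERNEL_INITIAL_ORDER (a model, not kernel code)."""
    return 1 << order


def mapping_sufficient(end_pa, order):
    """Mirror of the check above: the physical address of _end must fall
    inside the initially mapped region, or boot must panic."""
    return end_pa < kernel_initial_size(order)


assert kernel_initial_size(26) == 64 * 1024 * 1024  # 1<<26 = 64MB (64-bit)
assert kernel_initial_size(25) == 32 * 1024 * 1024  # 1<<25 = 32MB (32-bit)
assert not mapping_sufficient(70 * 1024 * 1024, 26)  # would panic early
```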
#ifdef CONFIG_64BIT
if(parisc_narrow_firmware) {
printk(KERN_INFO "Kernel is using PDC in 32-bit mode.\n");
@ -271,18 +279,6 @@ void __init start_parisc(void)
int ret, cpunum;
struct pdc_coproc_cfg coproc_cfg;
/*
* Check if initial kernel page mapping is sufficient.
* Print warning if not, because we may access kernel functions and
* variables which can't be reached yet through the initial mappings.
* Note that the panic() and printk() functions are not functional
* yet, so we need to use direct iodc() firmware calls instead.
*/
const char warn1[] = "CRITICAL: Kernel may crash because "
"KERNEL_INITIAL_ORDER is too small.\n";
if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
pdc_iodc_print(warn1, sizeof(warn1) - 1);
/* check QEMU/SeaBIOS marker in PAGE0 */
running_on_qemu = (memcmp(&PAGE0->pad0, "SeaBIOS", 8) == 0);


@ -165,7 +165,6 @@ SECTIONS
_end = . ;
STABS_DEBUG
MODINFO
ELF_DETAILS
.note 0 : { *(.note) }


@ -212,13 +212,6 @@ struct pci_dev *of_create_pci_dev(struct device_node *node,
dev->error_state = pci_channel_io_normal;
dev->dma_mask = 0xffffffff;
/*
* Assume 64-bit addresses for MSI initially. Will be changed to 32-bit
* if MSI (rather than MSI-X) capability does not have
* PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.
*/
dev->msi_addr_mask = DMA_BIT_MASK(64);
/* Early fixups, before probing the BARs */
pci_fixup_device(pci_fixup_early, dev);


@ -397,7 +397,6 @@ SECTIONS
_end = . ;
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -170,7 +170,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
.riscv.attributes 0 : { *(.riscv.attributes) }


@ -159,7 +159,7 @@ static __always_inline void __stackleak_poison(unsigned long erase_low,
" j 4f\n"
"3: mvc 8(1,%[addr]),0(%[addr])\n"
"4:"
: [addr] "+&a" (erase_low), [count] "+&a" (count), [tmp] "=&a" (tmp)
: [addr] "+&a" (erase_low), [count] "+&d" (count), [tmp] "=&a" (tmp)
: [poison] "d" (poison)
: "memory", "cc"
);


@ -221,7 +221,6 @@ SECTIONS
/* Debugging sections. */
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
/*


@ -28,8 +28,8 @@ static void xor_xc_2(unsigned long bytes, unsigned long * __restrict p1,
" j 3f\n"
"2: xc 0(1,%1),0(%2)\n"
"3:"
: "+a" (bytes), "+a" (p1), "+a" (p2)
: : "0", "cc", "memory");
: : "d" (bytes), "a" (p1), "a" (p2)
: "0", "cc", "memory");
}
static void xor_xc_3(unsigned long bytes, unsigned long * __restrict p1,
@ -54,7 +54,7 @@ static void xor_xc_3(unsigned long bytes, unsigned long * __restrict p1,
"2: xc 0(1,%1),0(%2)\n"
"3: xc 0(1,%1),0(%3)\n"
"4:"
: "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3)
: "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3)
: : "0", "cc", "memory");
}
@ -85,7 +85,7 @@ static void xor_xc_4(unsigned long bytes, unsigned long * __restrict p1,
"3: xc 0(1,%1),0(%3)\n"
"4: xc 0(1,%1),0(%4)\n"
"5:"
: "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4)
: "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4)
: : "0", "cc", "memory");
}
@ -96,6 +96,7 @@ static void xor_xc_5(unsigned long bytes, unsigned long * __restrict p1,
const unsigned long * __restrict p5)
{
asm volatile(
" larl 1,2f\n"
" aghi %0,-1\n"
" jm 6f\n"
" srlg 0,%0,8\n"
@ -121,7 +122,7 @@ static void xor_xc_5(unsigned long bytes, unsigned long * __restrict p1,
"4: xc 0(1,%1),0(%4)\n"
"5: xc 0(1,%1),0(%5)\n"
"6:"
: "+a" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4),
: "+d" (bytes), "+a" (p1), "+a" (p2), "+a" (p3), "+a" (p4),
"+a" (p5)
: : "0", "cc", "memory");
}


@ -89,7 +89,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -355,13 +355,6 @@ static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm,
dev->error_state = pci_channel_io_normal;
dev->dma_mask = 0xffffffff;
/*
* Assume 64-bit addresses for MSI initially. Will be changed to 32-bit
* if MSI (rather than MSI-X) capability does not have
* PCI_MSI_FLAGS_64BIT. Can also be overridden by driver.
*/
dev->msi_addr_mask = DMA_BIT_MASK(64);
if (of_node_name_eq(node, "pci")) {
/* a PCI-PCI bridge */
dev->hdr_type = PCI_HEADER_TYPE_BRIDGE;


@ -191,7 +191,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -172,7 +172,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -113,7 +113,6 @@ SECTIONS
STABS_DEBUG
DWARF_DEBUG
MODINFO
ELF_DETAILS
DISCARDS


@ -113,7 +113,6 @@ vmlinux-objs-$(CONFIG_EFI_SBAT) += $(obj)/sbat.o
ifdef CONFIG_EFI_SBAT
$(obj)/sbat.o: $(CONFIG_EFI_SBAT_FILE)
AFLAGS_sbat.o += -I $(srctree)
endif
$(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE


@ -28,17 +28,17 @@
#include "sev.h"
static struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
struct ghcb *boot_ghcb __section(".data");
struct ghcb *boot_ghcb;
#undef __init
#define __init
#define __BOOT_COMPRESSED
u8 snp_vmpl __section(".data");
u16 ghcb_version __section(".data");
u8 snp_vmpl;
u16 ghcb_version;
u64 boot_svsm_caa_pa __section(".data");
u64 boot_svsm_caa_pa;
/* Include code for early handlers */
#include "../../boot/startup/sev-shared.c"
@ -188,7 +188,6 @@ bool sev_es_check_ghcb_fault(unsigned long address)
MSR_AMD64_SNP_RESERVED_BIT13 | \
MSR_AMD64_SNP_RESERVED_BIT15 | \
MSR_AMD64_SNP_SECURE_AVIC | \
MSR_AMD64_SNP_RESERVED_BITS19_22 | \
MSR_AMD64_SNP_RESERVED_MASK)
#ifdef CONFIG_AMD_SECURE_AVIC


@ -88,7 +88,7 @@ SECTIONS
/DISCARD/ : {
*(.dynamic) *(.dynsym) *(.dynstr) *(.dynbss)
*(.hash) *(.gnu.hash)
*(.note.*) *(.modinfo)
*(.note.*)
}
.got.plt (INFO) : {


@ -31,7 +31,7 @@ static u32 cpuid_std_range_max __ro_after_init;
static u32 cpuid_hyp_range_max __ro_after_init;
static u32 cpuid_ext_range_max __ro_after_init;
bool sev_snp_needs_sfw __section(".data");
bool sev_snp_needs_sfw;
void __noreturn
sev_es_terminate(unsigned int set, unsigned int reason)


@ -89,7 +89,6 @@ static const char * const sev_status_feat_names[] = {
[MSR_AMD64_SNP_VMSA_REG_PROT_BIT] = "VMSARegProt",
[MSR_AMD64_SNP_SMT_PROT_BIT] = "SMTProt",
[MSR_AMD64_SNP_SECURE_AVIC_BIT] = "SecureAVIC",
[MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT] = "IBPBOnEntry",
};
/*


@ -35,38 +35,9 @@
#endif
.endm
/*
* WARNING:
*
* A bug in the libgcc unwinder as of at least gcc 15.2 (2026) means that
* the unwinder fails to recognize the signal frame flag.
*
* There is a hacky legacy fallback path in libgcc which ends up
* getting invoked instead. It happens to work as long as BOTH of the
* following conditions are true:
*
1. There is at least one byte before each of the sigreturn
* functions which falls outside any function. This is enforced by
* an explicit nop instruction before the ALIGN.
* 2. The code sequences between the entry point up to and including
* the int $0x80 below need to match EXACTLY. Do not change them
* in any way. The exact byte sequences are:
*
* __kernel_sigreturn:
* 0: 58 pop %eax
* 1: b8 77 00 00 00 mov $0x77,%eax
* 6: cd 80 int $0x80
*
* __kernel_rt_sigreturn:
* 0: b8 ad 00 00 00 mov $0xad,%eax
* 5: cd 80 int $0x80
*
* For details, see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124050
*/
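The exact byte sequences quoted in the comment can be written down and sanity-checked against the disassembly it gives; this sketch is derived only from that disassembly:

```python
# Byte sequences from the comment above; per the comment they must not
# change in any way, since libgcc's fallback path matches them exactly.
KERNEL_SIGRETURN = bytes([0x58,                    # 0: pop  %eax
                          0xb8, 0x77, 0, 0, 0,     # 1: mov  $0x77,%eax
                          0xcd, 0x80])             # 6: int  $0x80
KERNEL_RT_SIGRETURN = bytes([0xb8, 0xad, 0, 0, 0,  # 0: mov  $0xad,%eax
                             0xcd, 0x80])          # 5: int  $0x80

# Offsets of 'int $0x80' match the listing in the comment.
assert KERNEL_SIGRETURN.index(0xcd) == 6
assert KERNEL_RT_SIGRETURN.index(0xcd) == 5
```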
.text
.globl __kernel_sigreturn
.type __kernel_sigreturn,@function
nop /* libgcc hack: see comment above */
ALIGN
__kernel_sigreturn:
STARTPROC_SIGNAL_FRAME IA32_SIGFRAME_sigcontext
@ -81,7 +52,6 @@ SYM_INNER_LABEL(vdso32_sigreturn_landing_pad, SYM_L_GLOBAL)
.globl __kernel_rt_sigreturn
.type __kernel_rt_sigreturn,@function
nop /* libgcc hack: see comment above */
ALIGN
__kernel_rt_sigreturn:
STARTPROC_SIGNAL_FRAME IA32_RT_SIGFRAME_sigcontext


@ -740,10 +740,7 @@
#define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT)
#define MSR_AMD64_SNP_SECURE_AVIC_BIT 18
#define MSR_AMD64_SNP_SECURE_AVIC BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT)
#define MSR_AMD64_SNP_RESERVED_BITS19_22 GENMASK_ULL(22, 19)
#define MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT 23
#define MSR_AMD64_SNP_IBPB_ON_ENTRY BIT_ULL(MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT)
#define MSR_AMD64_SNP_RESV_BIT 24
#define MSR_AMD64_SNP_RESV_BIT 19
#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT)
#define MSR_AMD64_SAVIC_CONTROL 0xc0010138
#define MSR_AMD64_SAVIC_EN_BIT 0
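GENMASK_ULL(h, l) produces a 64-bit mask with bits l through h set; a small model (an assumption of the standard kernel semantics, not code from this diff) makes the old and new reserved-bit layouts above easy to check:

```python
def genmask_ull(h, l):
    """Model of the kernel's GENMASK_ULL(h, l): bits l..h inclusive set."""
    return ((1 << (h - l + 1)) - 1) << l


# Old layout: a reserved hole at bits 19-22, IBPB_ON_ENTRY at 23, tail
# mask starting at bit 24.
assert genmask_ull(22, 19) == 0x780000
# New layout: one reserved mask covering everything from bit 19 up.
assert genmask_ull(63, 19) == (2**64 - 1) & ~((1 << 19) - 1)
```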


@ -22,7 +22,6 @@ extern int numa_off;
*/
extern s16 __apicid_to_node[MAX_LOCAL_APIC];
extern nodemask_t numa_nodes_parsed __initdata;
extern nodemask_t numa_phys_nodes_parsed __initdata;
static inline void set_apicid_to_node(int apicid, s16 node)
{
@ -49,7 +48,6 @@ extern void __init init_cpu_to_node(void);
extern void numa_add_cpu(unsigned int cpu);
extern void numa_remove_cpu(unsigned int cpu);
extern void init_gi_nodes(void);
extern int num_phys_nodes(void);
#else /* CONFIG_NUMA */
static inline void numa_set_node(int cpu, int node) { }
static inline void numa_clear_node(int cpu) { }
@ -57,10 +55,6 @@ static inline void init_cpu_to_node(void) { }
static inline void numa_add_cpu(unsigned int cpu) { }
static inline void numa_remove_cpu(unsigned int cpu) { }
static inline void init_gi_nodes(void) { }
static inline int num_phys_nodes(void)
{
return 1;
}
#endif /* CONFIG_NUMA */
#ifdef CONFIG_DEBUG_PER_CPU_MAPS


@ -19,8 +19,10 @@
extern p4d_t level4_kernel_pgt[512];
extern p4d_t level4_ident_pgt[512];
extern pud_t level3_kernel_pgt[512];
extern pud_t level3_ident_pgt[512];
extern pmd_t level2_kernel_pgt[512];
extern pmd_t level2_fixmap_pgt[512];
extern pmd_t level2_ident_pgt[512];
extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM];
extern pgd_t init_top_pgt[];


@ -155,7 +155,6 @@ extern unsigned int __max_logical_packages;
extern unsigned int __max_threads_per_core;
extern unsigned int __num_threads_per_package;
extern unsigned int __num_cores_per_package;
extern unsigned int __num_nodes_per_package;
const char *get_topology_cpu_type_name(struct cpuinfo_x86 *c);
enum x86_topology_cpu_type get_topology_cpu_type(struct cpuinfo_x86 *c);
@ -180,11 +179,6 @@ static inline unsigned int topology_num_threads_per_package(void)
return __num_threads_per_package;
}
static inline unsigned int topology_num_nodes_per_package(void)
{
return __num_nodes_per_package;
}
#ifdef CONFIG_X86_LOCAL_APIC
int topology_get_logical_id(u32 apicid, enum x86_topology_domains at_level);
#else


@ -95,9 +95,6 @@ EXPORT_SYMBOL(__max_dies_per_package);
unsigned int __max_logical_packages __ro_after_init = 1;
EXPORT_SYMBOL(__max_logical_packages);
unsigned int __num_nodes_per_package __ro_after_init = 1;
EXPORT_SYMBOL(__num_nodes_per_package);
unsigned int __num_cores_per_package __ro_after_init = 1;
EXPORT_SYMBOL(__num_cores_per_package);


@@ -364,7 +364,7 @@ void arch_mon_domain_online(struct rdt_resource *r, struct rdt_l3_mon_domain *d)
msr_clear_bit(MSR_RMID_SNC_CONFIG, 0);
}
/* CPU models that support SNC and MSR_RMID_SNC_CONFIG */
/* CPU models that support MSR_RMID_SNC_CONFIG */
static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
X86_MATCH_VFM(INTEL_ICELAKE_X, 0),
X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, 0),
@@ -375,14 +375,40 @@ static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
{}
};
/*
* There isn't a simple hardware bit that indicates whether a CPU is running
* in Sub-NUMA Cluster (SNC) mode. Infer the state by comparing the
* number of CPUs sharing the L3 cache with CPU0 to the number of CPUs in
* the same NUMA node as CPU0.
* It is not possible to accurately determine SNC state if the system is
* booted with a maxcpus=N parameter. That distorts the ratio of SNC nodes
* to L3 caches. It will be OK if the system is booted with hyperthreading
* disabled (since this doesn't affect the ratio).
*/
static __init int snc_get_config(void)
{
int ret = topology_num_nodes_per_package();
struct cacheinfo *ci = get_cpu_cacheinfo_level(0, RESCTRL_L3_CACHE);
const cpumask_t *node0_cpumask;
int cpus_per_node, cpus_per_l3;
int ret;
if (ret > 1 && !x86_match_cpu(snc_cpu_ids)) {
pr_warn("CoD enabled system? Resctrl not supported\n");
if (!x86_match_cpu(snc_cpu_ids) || !ci)
return 1;
}
cpus_read_lock();
if (num_online_cpus() != num_present_cpus())
pr_warn("Some CPUs offline, SNC detection may be incorrect\n");
cpus_read_unlock();
node0_cpumask = cpumask_of_node(cpu_to_node(0));
cpus_per_node = cpumask_weight(node0_cpumask);
cpus_per_l3 = cpumask_weight(&ci->shared_cpu_map);
if (!cpus_per_node || !cpus_per_l3)
return 1;
ret = cpus_per_l3 / cpus_per_node;
/* sanity check: Only valid results are 1, 2, 3, 4, 6 */
switch (ret) {


@@ -31,7 +31,6 @@
#include <asm/mpspec.h>
#include <asm/msr.h>
#include <asm/smp.h>
#include <asm/numa.h>
#include "cpu.h"
@@ -493,19 +492,11 @@ void __init topology_init_possible_cpus(void)
set_nr_cpu_ids(allowed);
cnta = domain_weight(TOPO_PKG_DOMAIN);
__max_logical_packages = cnta;
pr_info("Max. logical packages: %3u\n", __max_logical_packages);
cntb = num_phys_nodes();
__num_nodes_per_package = DIV_ROUND_UP(cntb, cnta);
pr_info("Max. logical nodes: %3u\n", cntb);
pr_info("Num. nodes per package:%3u\n", __num_nodes_per_package);
cntb = domain_weight(TOPO_DIE_DOMAIN);
__max_logical_packages = cnta;
__max_dies_per_package = 1U << (get_count_order(cntb) - get_count_order(cnta));
pr_info("Max. logical packages: %3u\n", cnta);
pr_info("Max. logical dies: %3u\n", cntb);
pr_info("Max. dies per package: %3u\n", __max_dies_per_package);


@@ -616,10 +616,38 @@ SYM_DATA(early_recursion_flag, .long 0)
.data
#if defined(CONFIG_XEN_PV) || defined(CONFIG_PVH)
SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
.org init_top_pgt + L4_PAGE_OFFSET*8, 0
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
.org init_top_pgt + L4_START_KERNEL*8, 0
/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
.fill PTI_USER_PGD_FILL,8,0
SYM_DATA_END(init_top_pgt)
SYM_DATA_START_PAGE_ALIGNED(level3_ident_pgt)
.quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
.fill 511, 8, 0
SYM_DATA_END(level3_ident_pgt)
SYM_DATA_START_PAGE_ALIGNED(level2_ident_pgt)
/*
* Since I easily can, map the first 1G.
* Don't set NX because code runs from these pages.
*
* Note: This sets _PAGE_GLOBAL regardless of whether
* the CPU supports it or it is enabled. But,
* the CPU should ignore the bit.
*/
PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
SYM_DATA_END(level2_ident_pgt)
#else
SYM_DATA_START_PTI_ALIGNED(init_top_pgt)
.fill 512,8,0
.fill PTI_USER_PGD_FILL,8,0
SYM_DATA_END(init_top_pgt)
#endif
SYM_DATA_START_PAGE_ALIGNED(level4_kernel_pgt)
.fill 511,8,0


@@ -468,6 +468,13 @@ static int x86_cluster_flags(void)
}
#endif
/*
* Set if a package/die has multiple NUMA nodes inside.
* AMD Magny-Cours, Intel Cluster-on-Die, and Intel
* Sub-NUMA Clustering have this.
*/
static bool x86_has_numa_in_package;
static struct sched_domain_topology_level x86_topology[] = {
SDTL_INIT(tl_smt_mask, cpu_smt_flags, SMT),
#ifdef CONFIG_SCHED_CLUSTER
@@ -489,7 +496,7 @@ static void __init build_sched_topology(void)
* PKG domain since the NUMA domains will auto-magically create the
* right spanning domains based on the SLIT.
*/
if (topology_num_nodes_per_package() > 1) {
if (x86_has_numa_in_package) {
unsigned int pkgdom = ARRAY_SIZE(x86_topology) - 2;
memset(&x86_topology[pkgdom], 0, sizeof(x86_topology[pkgdom]));
@@ -506,149 +513,33 @@
}
#ifdef CONFIG_NUMA
/*
* Test if the on-trace cluster at (N,N) is symmetric.
* Uses upper triangle iteration to avoid obvious duplicates.
*/
static bool slit_cluster_symmetric(int N)
static int sched_avg_remote_distance;
static int avg_remote_numa_distance(void)
{
int u = topology_num_nodes_per_package();
int i, j;
int distance, nr_remote, total_distance;
for (int k = 0; k < u; k++) {
for (int l = k; l < u; l++) {
if (node_distance(N + k, N + l) !=
node_distance(N + l, N + k))
return false;
if (sched_avg_remote_distance > 0)
return sched_avg_remote_distance;
nr_remote = 0;
total_distance = 0;
for_each_node_state(i, N_CPU) {
for_each_node_state(j, N_CPU) {
distance = node_distance(i, j);
if (distance >= REMOTE_DISTANCE) {
nr_remote++;
total_distance += distance;
}
}
}
if (nr_remote)
sched_avg_remote_distance = total_distance / nr_remote;
else
sched_avg_remote_distance = REMOTE_DISTANCE;
return true;
}
/*
* Return the package-id of the cluster, or ~0 if indeterminate.
* Each node in the on-trace cluster should have the same package-id.
*/
static u32 slit_cluster_package(int N)
{
int u = topology_num_nodes_per_package();
u32 pkg_id = ~0;
for (int n = 0; n < u; n++) {
const struct cpumask *cpus = cpumask_of_node(N + n);
int cpu;
for_each_cpu(cpu, cpus) {
u32 id = topology_logical_package_id(cpu);
if (pkg_id == ~0)
pkg_id = id;
if (pkg_id != id)
return ~0;
}
}
return pkg_id;
}
/*
* Validate the SLIT table is of the form expected for SNC, specifically:
*
* - each on-trace cluster should be symmetric,
* - each on-trace cluster should have a unique package-id.
*
* If you NUMA_EMU on top of SNC, you get to keep the pieces.
*/
static bool slit_validate(void)
{
int u = topology_num_nodes_per_package();
u32 pkg_id, prev_pkg_id = ~0;
for (int pkg = 0; pkg < topology_max_packages(); pkg++) {
int n = pkg * u;
/*
* Ensure the on-trace cluster is symmetric and each cluster
* has a different package id.
*/
if (!slit_cluster_symmetric(n))
return false;
pkg_id = slit_cluster_package(n);
if (pkg_id == ~0)
return false;
if (pkg && pkg_id == prev_pkg_id)
return false;
prev_pkg_id = pkg_id;
}
return true;
}
/*
* Compute a sanitized SLIT table for SNC; notably SNC-3 can end up with
* asymmetric off-trace clusters, reflecting physical asymmetries. However
* this leads to 'unfortunate' sched_domain configurations.
*
* For example dual socket GNR with SNC-3:
*
* node distances:
* node 0 1 2 3 4 5
* 0: 10 15 17 21 28 26
* 1: 15 10 15 23 26 23
* 2: 17 15 10 26 23 21
* 3: 21 28 26 10 15 17
* 4: 23 26 23 15 10 15
* 5: 26 23 21 17 15 10
*
* Fix things up by averaging out the off-trace clusters; resulting in:
*
* node 0 1 2 3 4 5
* 0: 10 15 17 24 24 24
* 1: 15 10 15 24 24 24
* 2: 17 15 10 24 24 24
* 3: 24 24 24 10 15 17
* 4: 24 24 24 15 10 15
* 5: 24 24 24 17 15 10
*/
static int slit_cluster_distance(int i, int j)
{
static int slit_valid = -1;
int u = topology_num_nodes_per_package();
long d = 0;
int x, y;
if (slit_valid < 0) {
slit_valid = slit_validate();
if (!slit_valid)
pr_err(FW_BUG "SLIT table doesn't have the expected form for SNC -- fixup disabled!\n");
else
pr_info("Fixing up SNC SLIT table.\n");
}
/*
* Is this a unit cluster on the trace?
*/
if ((i / u) == (j / u) || !slit_valid)
return node_distance(i, j);
/*
* Off-trace cluster.
*
* Notably average out the symmetric pair of off-trace clusters to
* ensure the resulting SLIT table is symmetric.
*/
x = i - (i % u);
y = j - (j % u);
for (i = x; i < x + u; i++) {
for (j = y; j < y + u; j++) {
d += node_distance(i, j);
d += node_distance(j, i);
}
}
return d / (2*u*u);
return sched_avg_remote_distance;
}
int arch_sched_node_distance(int from, int to)
@@ -658,14 +549,34 @@ int arch_sched_node_distance(int from, int to)
switch (boot_cpu_data.x86_vfm) {
case INTEL_GRANITERAPIDS_X:
case INTEL_ATOM_DARKMONT_X:
if (topology_max_packages() == 1 ||
topology_num_nodes_per_package() < 3)
if (!x86_has_numa_in_package || topology_max_packages() == 1 ||
d < REMOTE_DISTANCE)
return d;
/*
* Handle SNC-3 asymmetries.
* With SNC enabled, there could be too many levels of remote
* NUMA node distances, creating NUMA domain levels
* including local nodes and partial remote nodes.
*
* Trim finer distance tuning for NUMA nodes in remote package
* for the purpose of building sched domains. Group NUMA nodes
* in the remote package in the same sched group.
* Simplify NUMA domains and avoid extra NUMA levels including
* different remote NUMA nodes and local nodes.
*
* GNR and CWF don't expect systems with more than 2 packages
* and more than 2 hops between packages. Single average remote
* distance won't be appropriate if there are more than 2
* packages as average distance to different remote packages
* could be different.
*/
return slit_cluster_distance(from, to);
WARN_ONCE(topology_max_packages() > 2,
"sched: Expect only up to 2 packages for GNR or CWF, "
"but saw %d packages when building sched domains.",
topology_max_packages());
d = avg_remote_numa_distance();
}
return d;
}
@@ -695,7 +606,7 @@ void set_cpu_sibling_map(int cpu)
o = &cpu_data(i);
if (match_pkg(c, o) && !topology_same_node(c, o))
WARN_ON_ONCE(topology_num_nodes_per_package() == 1);
x86_has_numa_in_package = true;
if ((i == cpu) || (has_smt && match_smt(c, o)))
link_mask(topology_sibling_cpumask, cpu, i);


@@ -427,7 +427,6 @@ SECTIONS
.llvm_bb_addr_map : { *(.llvm_bb_addr_map) }
#endif
MODINFO
ELF_DETAILS
DISCARDS


@@ -48,8 +48,6 @@ s16 __apicid_to_node[MAX_LOCAL_APIC] = {
[0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
};
nodemask_t numa_phys_nodes_parsed __initdata;
int numa_cpu_node(int cpu)
{
u32 apicid = early_per_cpu(x86_cpu_to_apicid, cpu);
@@ -59,11 +57,6 @@ int numa_cpu_node(int cpu)
return NUMA_NO_NODE;
}
int __init num_phys_nodes(void)
{
return bitmap_weight(numa_phys_nodes_parsed.bits, MAX_NUMNODES);
}
cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];
EXPORT_SYMBOL(node_to_cpumask_map);
@@ -217,7 +210,6 @@ static int __init dummy_numa_init(void)
0LLU, PFN_PHYS(max_pfn) - 1);
node_set(0, numa_nodes_parsed);
node_set(0, numa_phys_nodes_parsed);
numa_add_memblk(0, 0, PFN_PHYS(max_pfn));
return 0;


@@ -57,7 +57,6 @@ acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa)
}
set_apicid_to_node(apic_id, node);
node_set(node, numa_nodes_parsed);
node_set(node, numa_phys_nodes_parsed);
pr_debug("SRAT: PXM %u -> APIC 0x%04x -> Node %u\n", pxm, apic_id, node);
}
@@ -98,7 +97,6 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
set_apicid_to_node(apic_id, node);
node_set(node, numa_nodes_parsed);
node_set(node, numa_phys_nodes_parsed);
pr_debug("SRAT: PXM %u -> APIC 0x%02x -> Node %u\n", pxm, apic_id, node);
}


@@ -25,6 +25,11 @@ struct hvm_start_info __initdata pvh_start_info;
const unsigned int __initconst pvh_start_info_sz = sizeof(pvh_start_info);
static u64 __init pvh_get_root_pointer(void)
{
return pvh_start_info.rsdp_paddr;
}
/*
* Xen guests are able to obtain the memory map from the hypervisor via the
* HYPERVISOR_memory_op hypercall.
@@ -90,7 +95,7 @@ static void __init init_pvh_bootparams(bool xen_guest)
pvh_bootparams.hdr.version = (2 << 8) | 12;
pvh_bootparams.hdr.type_of_loader = ((xen_guest ? 0x9 : 0xb) << 4) | 0;
pvh_bootparams.acpi_rsdp_addr = pvh_start_info.rsdp_paddr;
x86_init.acpi.get_root_pointer = pvh_get_root_pointer;
}
/*


@@ -392,7 +392,7 @@ static void __init xen_init_capabilities(void)
/*
* Xen PV would need some work to support PCID: CR3 handling as well
* as xen_flush_tlb_multi() would need updating.
* as xen_flush_tlb_others() would need updating.
*/
setup_clear_cpu_cap(X86_FEATURE_PCID);


@@ -105,9 +105,6 @@ pte_t xen_make_pte_init(pteval_t pte);
static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
#endif
static pud_t level3_ident_pgt[PTRS_PER_PUD] __page_aligned_bss;
static pmd_t level2_ident_pgt[PTRS_PER_PMD] __page_aligned_bss;
/*
* Protects atomic reservation decrease/increase against concurrent increases.
* Also protects non-atomic updates of current_pages and balloon lists.
@@ -1780,12 +1777,6 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
/* Zap identity mapping */
init_top_pgt[0] = __pgd(0);
init_top_pgt[pgd_index(__PAGE_OFFSET_BASE_L4)].pgd =
__pa_symbol(level3_ident_pgt) + _KERNPG_TABLE_NOENC;
init_top_pgt[pgd_index(__START_KERNEL_map)].pgd =
__pa_symbol(level3_kernel_pgt) + _PAGE_TABLE_NOENC;
level3_ident_pgt[0].pud = __pa_symbol(level2_ident_pgt) + _KERNPG_TABLE_NOENC;
/* Pre-constructed entries are in pfn, so convert to mfn */
/* L4[273] -> level3_ident_pgt */
/* L4[511] -> level3_kernel_pgt */


@@ -398,7 +398,8 @@ static struct bio *bio_copy_kern(struct request *rq, void *data, unsigned int le
if (op_is_write(op))
memcpy(page_address(page), p, bytes);
__bio_add_page(bio, page, bytes, 0);
if (bio_add_page(bio, page, bytes, 0) < bytes)
break;
len -= bytes;
p += bytes;


@@ -4793,45 +4793,38 @@ static void blk_mq_update_queue_map(struct blk_mq_tag_set *set)
}
}
static struct blk_mq_tags **blk_mq_prealloc_tag_set_tags(
struct blk_mq_tag_set *set,
int new_nr_hw_queues)
static int blk_mq_realloc_tag_set_tags(struct blk_mq_tag_set *set,
int new_nr_hw_queues)
{
struct blk_mq_tags **new_tags;
int i;
if (set->nr_hw_queues >= new_nr_hw_queues)
return NULL;
goto done;
new_tags = kcalloc_node(new_nr_hw_queues, sizeof(struct blk_mq_tags *),
GFP_KERNEL, set->numa_node);
if (!new_tags)
return ERR_PTR(-ENOMEM);
return -ENOMEM;
if (set->tags)
memcpy(new_tags, set->tags, set->nr_hw_queues *
sizeof(*set->tags));
kfree(set->tags);
set->tags = new_tags;
for (i = set->nr_hw_queues; i < new_nr_hw_queues; i++) {
if (blk_mq_is_shared_tags(set->flags)) {
new_tags[i] = set->shared_tags;
} else {
new_tags[i] = blk_mq_alloc_map_and_rqs(set, i,
set->queue_depth);
if (!new_tags[i])
goto out_unwind;
if (!__blk_mq_alloc_map_and_rqs(set, i)) {
while (--i >= set->nr_hw_queues)
__blk_mq_free_map_and_rqs(set, i);
return -ENOMEM;
}
cond_resched();
}
return new_tags;
out_unwind:
while (--i >= set->nr_hw_queues) {
if (!blk_mq_is_shared_tags(set->flags))
blk_mq_free_map_and_rqs(set, new_tags[i], i);
}
kfree(new_tags);
return ERR_PTR(-ENOMEM);
done:
set->nr_hw_queues = new_nr_hw_queues;
return 0;
}
/*
@@ -5120,7 +5113,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
unsigned int memflags;
int i;
struct xarray elv_tbl;
struct blk_mq_tags **new_tags;
bool queues_frozen = false;
lockdep_assert_held(&set->tag_list_lock);
@@ -5155,18 +5147,11 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
if (blk_mq_elv_switch_none(q, &elv_tbl))
goto switch_back;
new_tags = blk_mq_prealloc_tag_set_tags(set, nr_hw_queues);
if (IS_ERR(new_tags))
goto switch_back;
list_for_each_entry(q, &set->tag_list, tag_set_list)
blk_mq_freeze_queue_nomemsave(q);
queues_frozen = true;
if (new_tags) {
kfree(set->tags);
set->tags = new_tags;
}
set->nr_hw_queues = nr_hw_queues;
if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)
goto switch_back;
fallback:
blk_mq_update_queue_map(set);


@@ -78,14 +78,8 @@ queue_requests_store(struct gendisk *disk, const char *page, size_t count)
/*
* Serialize updating nr_requests with concurrent queue_requests_store()
* and switching elevator.
*
* Use trylock to avoid circular lock dependency with kernfs active
* reference during concurrent disk deletion:
* update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del)
* kn->active -> update_nr_hwq_lock (via this sysfs write path)
*/
if (!down_write_trylock(&set->update_nr_hwq_lock))
return -EBUSY;
down_write(&set->update_nr_hwq_lock);
if (nr == q->nr_requests)
goto unlock;


@@ -807,16 +807,7 @@ ssize_t elv_iosched_store(struct gendisk *disk, const char *buf,
elv_iosched_load_module(ctx.name);
ctx.type = elevator_find_get(ctx.name);
/*
* Use trylock to avoid circular lock dependency with kernfs active
* reference during concurrent disk deletion:
* update_nr_hwq_lock -> kn->active (via del_gendisk -> kobject_del)
* kn->active -> update_nr_hwq_lock (via this sysfs write path)
*/
if (!down_read_trylock(&set->update_nr_hwq_lock)) {
ret = -EBUSY;
goto out;
}
down_read(&set->update_nr_hwq_lock);
if (!blk_queue_no_elv_switch(q)) {
ret = elevator_change(q, &ctx);
if (!ret)
@@ -826,7 +817,6 @@ ssize_t elv_iosched_store(struct gendisk *disk, const char *buf,
}
up_read(&set->update_nr_hwq_lock);
out:
if (ctx.type)
elevator_put(ctx.type);
return ret;


@@ -876,6 +876,8 @@ config CRYPTO_BLAKE2B
- blake2b-384
- blake2b-512
Used by the btrfs filesystem.
See https://blake2.net for further information.
config CRYPTO_CMAC
@@ -963,6 +965,7 @@ config CRYPTO_SHA256
10118-3), including HMAC support.
This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP).
Used by the btrfs filesystem, Ceph, NFS, and SMB.
config CRYPTO_SHA512
tristate "SHA-384 and SHA-512"
@@ -1036,6 +1039,8 @@ config CRYPTO_XXHASH
Extremely fast, working at speeds close to RAM limits.
Used by the btrfs filesystem.
endmenu
menu "CRCs (cyclic redundancy checks)"
@@ -1053,6 +1058,8 @@ config CRYPTO_CRC32C
on Communications, Vol. 41, No. 6, June 1993, selected for use with
iSCSI.
Used by btrfs, ext4, jbd2, NVMeoF/TCP, and iSCSI.
config CRYPTO_CRC32
tristate "CRC32"
select CRYPTO_HASH
@@ -1060,6 +1067,8 @@ config CRYPTO_CRC32
help
CRC32 CRC algorithm (IEEE 802.3)
Used by RoCEv2 and f2fs.
endmenu
menu "Compression"


@@ -4132,7 +4132,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha224),cbc(aes))",
.generic_driver = "authenc(hmac-sha224-lib,cbc(aes-lib))",
.generic_driver = "authenc(hmac-sha224-lib,cbc(aes-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha224_aes_cbc_tv_temp)
@@ -4194,7 +4194,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha384),cbc(aes))",
.generic_driver = "authenc(hmac-sha384-lib,cbc(aes-lib))",
.generic_driver = "authenc(hmac-sha384-lib,cbc(aes-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha384_aes_cbc_tv_temp)


@@ -186,13 +186,13 @@ aie2_sched_resp_handler(void *handle, void __iomem *data, size_t size)
cmd_abo = job->cmd_bo;
if (unlikely(job->job_timeout)) {
amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT);
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT);
ret = -EINVAL;
goto out;
}
if (unlikely(!data) || unlikely(size != sizeof(u32))) {
amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT);
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT);
ret = -EINVAL;
goto out;
}
@@ -202,7 +202,7 @@ aie2_sched_resp_handler(void *handle, void __iomem *data, size_t size)
if (status == AIE2_STATUS_SUCCESS)
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_COMPLETED);
else
amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ERROR);
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR);
out:
aie2_sched_notify(job);
@@ -244,13 +244,13 @@ aie2_sched_cmdlist_resp_handler(void *handle, void __iomem *data, size_t size)
cmd_abo = job->cmd_bo;
if (unlikely(job->job_timeout)) {
amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_TIMEOUT);
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT);
ret = -EINVAL;
goto out;
}
if (unlikely(!data) || unlikely(size != sizeof(u32) * 3)) {
amdxdna_cmd_set_error(cmd_abo, job, 0, ERT_CMD_STATE_ABORT);
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT);
ret = -EINVAL;
goto out;
}
@@ -270,12 +270,19 @@ aie2_sched_cmdlist_resp_handler(void *handle, void __iomem *data, size_t size)
fail_cmd_idx, fail_cmd_status);
if (fail_cmd_status == AIE2_STATUS_SUCCESS) {
amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ABORT);
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT);
ret = -EINVAL;
} else {
amdxdna_cmd_set_error(cmd_abo, job, fail_cmd_idx, ERT_CMD_STATE_ERROR);
goto out;
}
amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ERROR);
if (amdxdna_cmd_get_op(cmd_abo) == ERT_CMD_CHAIN) {
struct amdxdna_cmd_chain *cc = amdxdna_cmd_get_payload(cmd_abo, NULL);
cc->error_index = fail_cmd_idx;
if (cc->error_index >= cc->command_count)
cc->error_index = 0;
}
out:
aie2_sched_notify(job);
return ret;


@@ -40,8 +40,11 @@ static int aie2_send_mgmt_msg_wait(struct amdxdna_dev_hdl *ndev,
return -ENODEV;
ret = xdna_send_msg_wait(xdna, ndev->mgmt_chann, msg);
if (ret == -ETIME)
aie2_destroy_mgmt_chann(ndev);
if (ret == -ETIME) {
xdna_mailbox_stop_channel(ndev->mgmt_chann);
xdna_mailbox_destroy_channel(ndev->mgmt_chann);
ndev->mgmt_chann = NULL;
}
if (!ret && *hdl->status != AIE2_STATUS_SUCCESS) {
XDNA_ERR(xdna, "command opcode 0x%x failed, status 0x%x",
@@ -293,20 +296,13 @@ int aie2_create_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwctx *hwct
}
intr_reg = i2x.mb_head_ptr_reg + 4;
hwctx->priv->mbox_chann = xdna_mailbox_alloc_channel(ndev->mbox);
hwctx->priv->mbox_chann = xdna_mailbox_create_channel(ndev->mbox, &x2i, &i2x,
intr_reg, ret);
if (!hwctx->priv->mbox_chann) {
XDNA_ERR(xdna, "Not able to create channel");
ret = -EINVAL;
goto del_ctx_req;
}
ret = xdna_mailbox_start_channel(hwctx->priv->mbox_chann, &x2i, &i2x,
intr_reg, ret);
if (ret) {
XDNA_ERR(xdna, "Not able to create channel");
ret = -EINVAL;
goto free_channel;
}
ndev->hwctx_num++;
XDNA_DBG(xdna, "Mailbox channel irq: %d, msix_id: %d", ret, resp.msix_id);
@@ -314,8 +310,6 @@ int aie2_create_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwctx *hwct
return 0;
free_channel:
xdna_mailbox_free_channel(hwctx->priv->mbox_chann);
del_ctx_req:
aie2_destroy_context_req(ndev, hwctx->fw_ctx_id);
return ret;
@@ -331,7 +325,7 @@ int aie2_destroy_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwctx *hwc
xdna_mailbox_stop_channel(hwctx->priv->mbox_chann);
ret = aie2_destroy_context_req(ndev, hwctx->fw_ctx_id);
xdna_mailbox_free_channel(hwctx->priv->mbox_chann);
xdna_mailbox_destroy_channel(hwctx->priv->mbox_chann);
XDNA_DBG(xdna, "Destroyed fw ctx %d", hwctx->fw_ctx_id);
hwctx->priv->mbox_chann = NULL;
hwctx->fw_ctx_id = -1;
@@ -920,20 +914,6 @@ void aie2_msg_init(struct amdxdna_dev_hdl *ndev)
ndev->exec_msg_ops = &legacy_exec_message_ops;
}
void aie2_destroy_mgmt_chann(struct amdxdna_dev_hdl *ndev)
{
struct amdxdna_dev *xdna = ndev->xdna;
drm_WARN_ON(&xdna->ddev, !mutex_is_locked(&xdna->dev_lock));
if (!ndev->mgmt_chann)
return;
xdna_mailbox_stop_channel(ndev->mgmt_chann);
xdna_mailbox_free_channel(ndev->mgmt_chann);
ndev->mgmt_chann = NULL;
}
static inline struct amdxdna_gem_obj *
aie2_cmdlist_get_cmd_buf(struct amdxdna_sched_job *job)
{


@@ -330,7 +330,9 @@ static void aie2_hw_stop(struct amdxdna_dev *xdna)
aie2_runtime_cfg(ndev, AIE2_RT_CFG_CLK_GATING, NULL);
aie2_mgmt_fw_fini(ndev);
aie2_destroy_mgmt_chann(ndev);
xdna_mailbox_stop_channel(ndev->mgmt_chann);
xdna_mailbox_destroy_channel(ndev->mgmt_chann);
ndev->mgmt_chann = NULL;
drmm_kfree(&xdna->ddev, ndev->mbox);
ndev->mbox = NULL;
aie2_psp_stop(ndev->psp_hdl);
@@ -361,29 +363,10 @@ static int aie2_hw_start(struct amdxdna_dev *xdna)
}
pci_set_master(pdev);
mbox_res.ringbuf_base = ndev->sram_base;
mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar);
mbox_res.mbox_base = ndev->mbox_base;
mbox_res.mbox_size = MBOX_SIZE(ndev);
mbox_res.name = "xdna_mailbox";
ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res);
if (!ndev->mbox) {
XDNA_ERR(xdna, "failed to create mailbox device");
ret = -ENODEV;
goto disable_dev;
}
ndev->mgmt_chann = xdna_mailbox_alloc_channel(ndev->mbox);
if (!ndev->mgmt_chann) {
XDNA_ERR(xdna, "failed to alloc channel");
ret = -ENODEV;
goto disable_dev;
}
ret = aie2_smu_init(ndev);
if (ret) {
XDNA_ERR(xdna, "failed to init smu, ret %d", ret);
goto free_channel;
goto disable_dev;
}
ret = aie2_psp_start(ndev->psp_hdl);
@@ -398,6 +381,18 @@ static int aie2_hw_start(struct amdxdna_dev *xdna)
goto stop_psp;
}
mbox_res.ringbuf_base = ndev->sram_base;
mbox_res.ringbuf_size = pci_resource_len(pdev, xdna->dev_info->sram_bar);
mbox_res.mbox_base = ndev->mbox_base;
mbox_res.mbox_size = MBOX_SIZE(ndev);
mbox_res.name = "xdna_mailbox";
ndev->mbox = xdnam_mailbox_create(&xdna->ddev, &mbox_res);
if (!ndev->mbox) {
XDNA_ERR(xdna, "failed to create mailbox device");
ret = -ENODEV;
goto stop_psp;
}
mgmt_mb_irq = pci_irq_vector(pdev, ndev->mgmt_chan_idx);
if (mgmt_mb_irq < 0) {
ret = mgmt_mb_irq;
@@ -406,13 +401,13 @@ static int aie2_hw_start(struct amdxdna_dev *xdna)
}
xdna_mailbox_intr_reg = ndev->mgmt_i2x.mb_head_ptr_reg + 4;
ret = xdna_mailbox_start_channel(ndev->mgmt_chann,
&ndev->mgmt_x2i,
&ndev->mgmt_i2x,
xdna_mailbox_intr_reg,
mgmt_mb_irq);
if (ret) {
XDNA_ERR(xdna, "failed to start management mailbox channel");
ndev->mgmt_chann = xdna_mailbox_create_channel(ndev->mbox,
&ndev->mgmt_x2i,
&ndev->mgmt_i2x,
xdna_mailbox_intr_reg,
mgmt_mb_irq);
if (!ndev->mgmt_chann) {
XDNA_ERR(xdna, "failed to create management mailbox channel");
ret = -EINVAL;
goto stop_psp;
}
@@ -420,41 +415,38 @@ static int aie2_hw_start(struct amdxdna_dev *xdna)
ret = aie2_mgmt_fw_init(ndev);
if (ret) {
XDNA_ERR(xdna, "initial mgmt firmware failed, ret %d", ret);
goto stop_fw;
goto destroy_mgmt_chann;
}
ret = aie2_pm_init(ndev);
if (ret) {
XDNA_ERR(xdna, "failed to init pm, ret %d", ret);
goto stop_fw;
goto destroy_mgmt_chann;
}
ret = aie2_mgmt_fw_query(ndev);
if (ret) {
XDNA_ERR(xdna, "failed to query fw, ret %d", ret);
goto stop_fw;
goto destroy_mgmt_chann;
}
ret = aie2_error_async_events_alloc(ndev);
if (ret) {
XDNA_ERR(xdna, "Allocate async events failed, ret %d", ret);
goto stop_fw;
goto destroy_mgmt_chann;
}
ndev->dev_status = AIE2_DEV_START;
return 0;
stop_fw:
aie2_suspend_fw(ndev);
destroy_mgmt_chann:
xdna_mailbox_stop_channel(ndev->mgmt_chann);
xdna_mailbox_destroy_channel(ndev->mgmt_chann);
stop_psp:
aie2_psp_stop(ndev->psp_hdl);
fini_smu:
aie2_smu_fini(ndev);
free_channel:
xdna_mailbox_free_channel(ndev->mgmt_chann);
ndev->mgmt_chann = NULL;
disable_dev:
pci_disable_device(pdev);


@@ -303,7 +303,6 @@ int aie2_get_array_async_error(struct amdxdna_dev_hdl *ndev,
/* aie2_message.c */
void aie2_msg_init(struct amdxdna_dev_hdl *ndev);
void aie2_destroy_mgmt_chann(struct amdxdna_dev_hdl *ndev);
int aie2_suspend_fw(struct amdxdna_dev_hdl *ndev);
int aie2_resume_fw(struct amdxdna_dev_hdl *ndev);
int aie2_set_runtime_cfg(struct amdxdna_dev_hdl *ndev, u32 type, u64 value);


@@ -135,33 +135,6 @@ u32 amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo)
return INVALID_CU_IDX;
}
int amdxdna_cmd_set_error(struct amdxdna_gem_obj *abo,
struct amdxdna_sched_job *job, u32 cmd_idx,
enum ert_cmd_state error_state)
{
struct amdxdna_client *client = job->hwctx->client;
struct amdxdna_cmd *cmd = abo->mem.kva;
struct amdxdna_cmd_chain *cc = NULL;
cmd->header &= ~AMDXDNA_CMD_STATE;
cmd->header |= FIELD_PREP(AMDXDNA_CMD_STATE, error_state);
if (amdxdna_cmd_get_op(abo) == ERT_CMD_CHAIN) {
cc = amdxdna_cmd_get_payload(abo, NULL);
cc->error_index = (cmd_idx < cc->command_count) ? cmd_idx : 0;
abo = amdxdna_gem_get_obj(client, cc->data[0], AMDXDNA_BO_CMD);
if (!abo)
return -EINVAL;
cmd = abo->mem.kva;
}
memset(cmd->data, 0xff, abo->mem.size - sizeof(*cmd));
if (cc)
amdxdna_gem_put_obj(abo);
return 0;
}
/*
* This should be called in close() and remove(). DO NOT call in other syscalls.
* This guarantee that when hwctx and resources will be released, if user

View file

@@ -167,9 +167,6 @@ amdxdna_cmd_get_state(struct amdxdna_gem_obj *abo)
void *amdxdna_cmd_get_payload(struct amdxdna_gem_obj *abo, u32 *size);
u32 amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo);
int amdxdna_cmd_set_error(struct amdxdna_gem_obj *abo,
struct amdxdna_sched_job *job, u32 cmd_idx,
enum ert_cmd_state error_state);
void amdxdna_sched_job_cleanup(struct amdxdna_sched_job *job);
void amdxdna_hwctx_remove_all(struct amdxdna_client *client);


@@ -460,49 +460,26 @@ msg_id_failed:
return ret;
}
struct mailbox_channel *xdna_mailbox_alloc_channel(struct mailbox *mb)
struct mailbox_channel *
xdna_mailbox_create_channel(struct mailbox *mb,
const struct xdna_mailbox_chann_res *x2i,
const struct xdna_mailbox_chann_res *i2x,
u32 iohub_int_addr,
int mb_irq)
{
struct mailbox_channel *mb_chann;
int ret;
if (!is_power_of_2(x2i->rb_size) || !is_power_of_2(i2x->rb_size)) {
pr_err("Ring buf size must be power of 2");
return NULL;
}
mb_chann = kzalloc_obj(*mb_chann);
if (!mb_chann)
return NULL;
INIT_WORK(&mb_chann->rx_work, mailbox_rx_worker);
mb_chann->work_q = create_singlethread_workqueue(MAILBOX_NAME);
if (!mb_chann->work_q) {
MB_ERR(mb_chann, "Create workqueue failed");
goto free_chann;
}
mb_chann->mb = mb;
return mb_chann;
free_chann:
kfree(mb_chann);
return NULL;
}
void xdna_mailbox_free_channel(struct mailbox_channel *mb_chann)
{
destroy_workqueue(mb_chann->work_q);
kfree(mb_chann);
}
int
xdna_mailbox_start_channel(struct mailbox_channel *mb_chann,
const struct xdna_mailbox_chann_res *x2i,
const struct xdna_mailbox_chann_res *i2x,
u32 iohub_int_addr,
int mb_irq)
{
int ret;
if (!is_power_of_2(x2i->rb_size) || !is_power_of_2(i2x->rb_size)) {
pr_err("Ring buf size must be power of 2");
return -EINVAL;
}
mb_chann->msix_irq = mb_irq;
mb_chann->iohub_int_addr = iohub_int_addr;
memcpy(&mb_chann->res[CHAN_RES_X2I], x2i, sizeof(*x2i));
@@ -512,37 +489,61 @@ xdna_mailbox_start_channel(struct mailbox_channel *mb_chann,
mb_chann->x2i_tail = mailbox_get_tailptr(mb_chann, CHAN_RES_X2I);
mb_chann->i2x_head = mailbox_get_headptr(mb_chann, CHAN_RES_I2X);
INIT_WORK(&mb_chann->rx_work, mailbox_rx_worker);
mb_chann->work_q = create_singlethread_workqueue(MAILBOX_NAME);
if (!mb_chann->work_q) {
MB_ERR(mb_chann, "Create workqueue failed");
goto free_and_out;
}
/* Everything looks good. Time to enable the irq handler */
ret = request_irq(mb_irq, mailbox_irq_handler, 0, MAILBOX_NAME, mb_chann);
if (ret) {
MB_ERR(mb_chann, "Failed to request irq %d ret %d", mb_irq, ret);
return ret;
goto destroy_wq;
}
mb_chann->bad_state = false;
mailbox_reg_write(mb_chann, mb_chann->iohub_int_addr, 0);
MB_DBG(mb_chann, "Mailbox channel started (irq: %d)", mb_chann->msix_irq);
MB_DBG(mb_chann, "Mailbox channel created (irq: %d)", mb_chann->msix_irq);
return mb_chann;
destroy_wq:
destroy_workqueue(mb_chann->work_q);
free_and_out:
kfree(mb_chann);
return NULL;
}
int xdna_mailbox_destroy_channel(struct mailbox_channel *mb_chann)
{
struct mailbox_msg *mb_msg;
unsigned long msg_id;
MB_DBG(mb_chann, "IRQ disabled and RX work cancelled");
free_irq(mb_chann->msix_irq, mb_chann);
destroy_workqueue(mb_chann->work_q);
/* We can clean up and release resources */
xa_for_each(&mb_chann->chan_xa, msg_id, mb_msg)
mailbox_release_msg(mb_chann, mb_msg);
xa_destroy(&mb_chann->chan_xa);
MB_DBG(mb_chann, "Mailbox channel destroyed, irq: %d", mb_chann->msix_irq);
kfree(mb_chann);
return 0;
}
void xdna_mailbox_stop_channel(struct mailbox_channel *mb_chann)
{
struct mailbox_msg *mb_msg;
unsigned long msg_id;
/* Disable an irq and wait. This might sleep. */
free_irq(mb_chann->msix_irq, mb_chann);
disable_irq(mb_chann->msix_irq);
/* Cancel RX work and wait for it to finish */
drain_workqueue(mb_chann->work_q);
/* We can clean up and release resources */
xa_for_each(&mb_chann->chan_xa, msg_id, mb_msg)
mailbox_release_msg(mb_chann, mb_msg);
xa_destroy(&mb_chann->chan_xa);
MB_DBG(mb_chann, "Mailbox channel stopped, irq: %d", mb_chann->msix_irq);
cancel_work_sync(&mb_chann->rx_work);
MB_DBG(mb_chann, "IRQ disabled and RX work cancelled");
}
struct mailbox *xdnam_mailbox_create(struct drm_device *ddev,

@@ -74,16 +74,9 @@ struct mailbox *xdnam_mailbox_create(struct drm_device *ddev,
const struct xdna_mailbox_res *res);
/*
* xdna_mailbox_alloc_channel() -- alloc a mailbox channel
* xdna_mailbox_create_channel() -- Create a mailbox channel instance
*
* @mb: mailbox handle
*/
struct mailbox_channel *xdna_mailbox_alloc_channel(struct mailbox *mb);
/*
* xdna_mailbox_start_channel() -- start a mailbox channel instance
*
* @mb_chann: the handle return from xdna_mailbox_alloc_channel()
* @mailbox: the handle return from xdna_mailbox_create()
* @x2i: host to firmware mailbox resources
* @i2x: firmware to host mailbox resources
* @xdna_mailbox_intr_reg: register addr of MSI-X interrupt
@@ -91,24 +84,28 @@ struct mailbox_channel *xdna_mailbox_alloc_channel(struct mailbox *mb);
*
* Return: If success, return a handle of mailbox channel. Otherwise, return NULL.
*/
int
xdna_mailbox_start_channel(struct mailbox_channel *mb_chann,
const struct xdna_mailbox_chann_res *x2i,
const struct xdna_mailbox_chann_res *i2x,
u32 xdna_mailbox_intr_reg,
int mb_irq);
struct mailbox_channel *
xdna_mailbox_create_channel(struct mailbox *mailbox,
const struct xdna_mailbox_chann_res *x2i,
const struct xdna_mailbox_chann_res *i2x,
u32 xdna_mailbox_intr_reg,
int mb_irq);
/*
* xdna_mailbox_free_channel() -- free mailbox channel
* xdna_mailbox_destroy_channel() -- destroy mailbox channel
*
* @mailbox_chann: the handle return from xdna_mailbox_create_channel()
*
* Return: if success, return 0. otherwise return error code
*/
void xdna_mailbox_free_channel(struct mailbox_channel *mailbox_chann);
int xdna_mailbox_destroy_channel(struct mailbox_channel *mailbox_chann);
/*
* xdna_mailbox_stop_channel() -- stop mailbox channel
*
* @mailbox_chann: the handle return from xdna_mailbox_create_channel()
*
* Return: if success, return 0. otherwise return error code
*/
void xdna_mailbox_stop_channel(struct mailbox_channel *mailbox_chann);

@@ -67,7 +67,7 @@ const struct dpm_clk_freq npu1_dpm_clk_table[] = {
static const struct aie2_fw_feature_tbl npu1_fw_feature_table[] = {
{ .major = 5, .min_minor = 7 },
{ .features = BIT_U64(AIE2_NPU_COMMAND), .major = 5, .min_minor = 8 },
{ .features = BIT_U64(AIE2_NPU_COMMAND), .min_minor = 8 },
{ 0 }
};

@@ -245,14 +245,11 @@ static int calc_sizes(struct drm_device *ddev,
((st->ifm.stride_kernel >> 1) & 0x1) + 1;
u32 stride_x = ((st->ifm.stride_kernel >> 5) & 0x2) +
(st->ifm.stride_kernel & 0x1) + 1;
s32 ifm_height = st->ofm.height[2] * stride_y +
u32 ifm_height = st->ofm.height[2] * stride_y +
st->ifm.height[2] - (st->ifm.pad_top + st->ifm.pad_bottom);
s32 ifm_width = st->ofm.width * stride_x +
u32 ifm_width = st->ofm.width * stride_x +
st->ifm.width - (st->ifm.pad_left + st->ifm.pad_right);
if (ifm_height < 0 || ifm_width < 0)
return -EINVAL;
len = feat_matrix_length(info, &st->ifm, ifm_width,
ifm_height, st->ifm.depth);
dev_dbg(ddev->dev, "op %d: IFM:%d:0x%llx-0x%llx\n",
@@ -420,10 +417,7 @@ static int ethosu_gem_cmdstream_copy_and_validate(struct drm_device *ddev,
return ret;
break;
case NPU_OP_ELEMENTWISE:
use_scale = ethosu_is_u65(edev) ?
(st.ifm2.broadcast & 0x80) :
(st.ifm2.broadcast == 8);
use_ifm2 = !(use_scale || (param == 5) ||
use_ifm2 = !((st.ifm2.broadcast == 8) || (param == 5) ||
(param == 6) || (param == 7) || (param == 0x24));
use_ifm = st.ifm.broadcast != 8;
ret = calc_sizes_elemwise(ddev, info, cmd, &st, use_ifm, use_ifm2);

@@ -143,10 +143,17 @@ out:
return ret;
}
static void ethosu_job_err_cleanup(struct ethosu_job *job)
static void ethosu_job_cleanup(struct kref *ref)
{
struct ethosu_job *job = container_of(ref, struct ethosu_job,
refcount);
unsigned int i;
pm_runtime_put_autosuspend(job->dev->base.dev);
dma_fence_put(job->done_fence);
dma_fence_put(job->inference_done_fence);
for (i = 0; i < job->region_cnt; i++)
drm_gem_object_put(job->region_bo[i]);
@@ -155,19 +162,6 @@ static void ethosu_job_err_cleanup(struct ethosu_job *job)
kfree(job);
}
static void ethosu_job_cleanup(struct kref *ref)
{
struct ethosu_job *job = container_of(ref, struct ethosu_job,
refcount);
pm_runtime_put_autosuspend(job->dev->base.dev);
dma_fence_put(job->done_fence);
dma_fence_put(job->inference_done_fence);
ethosu_job_err_cleanup(job);
}
static void ethosu_job_put(struct ethosu_job *job)
{
kref_put(&job->refcount, ethosu_job_cleanup);
@@ -460,16 +454,12 @@ static int ethosu_ioctl_submit_job(struct drm_device *dev, struct drm_file *file
}
}
ret = ethosu_job_push(ejob);
if (!ret) {
ethosu_job_put(ejob);
return 0;
}
out_cleanup_job:
if (ret)
drm_sched_job_cleanup(&ejob->base);
out_put_job:
ethosu_job_err_cleanup(ejob);
ethosu_job_put(ejob);
return ret;
}

@@ -379,9 +379,8 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
{{"_CPC", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Ints/Bufs) */
PACKAGE_INFO(ACPI_PTYPE1_VAR,
ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER |
ACPI_RTYPE_PACKAGE, 0, 0, 0, 0),
PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER | ACPI_RTYPE_BUFFER, 0,
0, 0, 0),
{{"_CR3", METHOD_0ARGS, /* ACPI 6.0 */
METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},

@@ -1456,6 +1456,15 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
if (!adev || !acpi_match_device_ids(adev, special_pm_ids))
return 0;
/*
* Skip devices whose ACPI companions don't support power management and
* don't have a wakeup GPE.
*/
if (!acpi_device_power_manageable(adev) && !acpi_device_can_wakeup(adev)) {
dev_dbg(dev, "No ACPI power management or wakeup GPE\n");
return 0;
}
/*
* Only attach the power domain to the first device if the
* companion is shared by multiple. This is to prevent doing power

@@ -4189,7 +4189,6 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
ATA_QUIRK_FIRMWARE_WARN },
/* Seagate disks with LPM issues */
{ "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM },
{ "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM },
/* drives which fail FPDMA_AA activation (some may freeze afterwards)
@@ -4232,7 +4231,6 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
/* Devices that do not need bridging limits applied */
{ "MTRON MSP-SATA*", NULL, ATA_QUIRK_BRIDGE_OK },
{ "BUFFALO HD-QSU2/R5", NULL, ATA_QUIRK_BRIDGE_OK },
{ "QEMU HARDDISK", "2.5+", ATA_QUIRK_BRIDGE_OK },
/* Devices which aren't very happy with higher link speeds */
{ "WD My Book", NULL, ATA_QUIRK_1_5_GBPS },

@@ -647,7 +647,7 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
break;
}
if (i < ATA_MAX_QUEUE && qc == ap->deferred_qc) {
if (qc == ap->deferred_qc) {
/*
* This is a deferred command that timed out while
* waiting for the command queue to drain. Since the qc
@@ -659,7 +659,6 @@ */
*/
WARN_ON_ONCE(qc->flags & ATA_QCFLAG_ACTIVE);
ap->deferred_qc = NULL;
cancel_work(&ap->deferred_qc_work);
set_host_byte(scmd, DID_TIME_OUT);
scsi_eh_finish_cmd(scmd, &ap->eh_done_q);
} else if (i < ATA_MAX_QUEUE) {

@@ -1699,7 +1699,6 @@ void ata_scsi_requeue_deferred_qc(struct ata_port *ap)
scmd = qc->scsicmd;
ap->deferred_qc = NULL;
cancel_work(&ap->deferred_qc_work);
ata_qc_free(qc);
scmd->result = (DID_SOFT_ERROR << 16);
scsi_done(scmd);

@@ -179,10 +179,19 @@ void device_release_driver_internal(struct device *dev, const struct device_driver *drv,
void driver_detach(const struct device_driver *drv);
void driver_deferred_probe_del(struct device *dev);
void device_set_deferred_probe_reason(const struct device *dev, struct va_format *vaf);
static inline int driver_match_device_locked(const struct device_driver *drv,
struct device *dev)
{
device_lock_assert(dev);
return drv->bus->match ? drv->bus->match(dev, drv) : 1;
}
static inline int driver_match_device(const struct device_driver *drv,
struct device *dev)
{
return drv->bus->match ? drv->bus->match(dev, drv) : 1;
guard(device)(dev);
return driver_match_device_locked(drv, dev);
}
static inline void dev_sync_state(struct device *dev)

@@ -928,7 +928,7 @@ static int __device_attach_driver(struct device_driver *drv, void *_data)
bool async_allowed;
int ret;
ret = driver_match_device(drv, dev);
ret = driver_match_device_locked(drv, dev);
if (ret == 0) {
/* no match */
return 0;

@@ -52,10 +52,9 @@ static int atmel_sha204a_rng_read_nonblocking(struct hwrng *rng, void *data,
rng->priv = 0;
} else {
work_data = kmalloc_obj(*work_data, GFP_ATOMIC);
if (!work_data) {
atomic_dec(&i2c_priv->tfm_count);
if (!work_data)
return -ENOMEM;
}
work_data->ctx = i2c_priv;
work_data->client = i2c_priv->client;

@@ -378,9 +378,9 @@ void sev_tsm_init_locked(struct sev_device *sev, void *tio_status_page)
return;
error_exit:
kfree(t);
pr_err("Failed to enable SEV-TIO: ret=%d en=%d initdone=%d SEV=%d\n",
ret, t->tio_en, t->tio_init_done, boot_cpu_has(X86_FEATURE_SEV));
kfree(t);
}
void sev_tsm_uninit(struct sev_device *sev)

@@ -1105,12 +1105,15 @@ struct page *snp_alloc_hv_fixed_pages(unsigned int num_2mb_pages)
{
struct psp_device *psp_master = psp_get_master_device();
struct snp_hv_fixed_pages_entry *entry;
struct sev_device *sev;
unsigned int order;
struct page *page;
if (!psp_master)
if (!psp_master || !psp_master->sev_data)
return NULL;
sev = psp_master->sev_data;
order = get_order(PMD_SIZE * num_2mb_pages);
/*
@@ -1123,8 +1126,7 @@ struct page *snp_alloc_hv_fixed_pages(unsigned int num_2mb_pages)
* This API uses SNP_INIT_EX to transition allocated pages to HV_Fixed
* page state, fail if SNP is already initialized.
*/
if (psp_master->sev_data &&
((struct sev_device *)psp_master->sev_data)->snp_initialized)
if (sev->snp_initialized)
return NULL;
/* Re-use freed pages that match the request */
@@ -1160,7 +1162,7 @@ void snp_free_hv_fixed_pages(struct page *page)
struct psp_device *psp_master = psp_get_master_device();
struct snp_hv_fixed_pages_entry *entry, *nentry;
if (!psp_master)
if (!psp_master || !psp_master->sev_data)
return;
/*

@@ -1439,10 +1439,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
*process_info = info;
}
if (cmpxchg(&vm->process_info, NULL, *process_info) != NULL) {
ret = -EINVAL;
goto already_acquired;
}
vm->process_info = *process_info;
/* Validate page directory and attach eviction fence */
ret = amdgpu_bo_reserve(vm->root.bo, true);
@@ -1482,7 +1479,6 @@ validate_pd_fail:
amdgpu_bo_unreserve(vm->root.bo);
reserve_pd_fail:
vm->process_info = NULL;
already_acquired:
if (info) {
dma_fence_put(&info->eviction_fence->base);
*process_info = NULL;

Some files were not shown because too many files have changed in this diff.