ASoC: codec: Remove ak4641/pxa2xx-ac97 and convert to GPIO descriptors

Merge series from "Peng Fan (OSS)" <peng.fan@oss.nxp.com>:

The main goal is to convert drivers to use GPIO descriptors. While reading
the code, I think it is time to remove the ak4641 and pxa2xx-ac97 drivers;
more information can be found in the commit log of each patch.
Then we only need to convert sound/arm/pxa2xx-ac97-lib.c to use GPIO
descriptors. I do not have hardware to test the pxa2xx AC97.
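The conversion this series performs follows the usual legacy-GPIO-to-descriptor pattern. As a rough, hedged sketch (the field, label, and lookup names below are illustrative, not taken from the actual patches), such a change in a driver's probe path typically looks like:

```diff
-	/* legacy API: integer GPIO number, usually from platform data */
-	ret = gpio_request(pdata->reset_gpio, "ac97-reset");
-	if (ret)
-		return ret;
-	gpio_direction_output(pdata->reset_gpio, 1);
+	/* descriptor API: GPIO looked up by function name ("reset"),
+	 * with direction and initial level set at lookup time */
+	priv->reset_gpiod = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+	if (IS_ERR(priv->reset_gpiod))
+		return PTR_ERR(priv->reset_gpiod);
```

Descriptors encode polarity in the firmware description and are released automatically through `devm_`, which is why such conversions usually also delete the matching `gpio_free()` calls.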
This commit is contained in:
Mark Brown 2026-01-28 00:37:49 +00:00
commit 1924bd68a0
GPG key ID: 24D68B725D5487D0
394 changed files with 3884 additions and 2726 deletions

@@ -12,6 +12,7 @@
#
Aaron Durbin <adurbin@google.com>
Abel Vesa <abelvesa@kernel.org> <abel.vesa@nxp.com>
Abel Vesa <abelvesa@kernel.org> <abel.vesa@linaro.org>
Abel Vesa <abelvesa@kernel.org> <abelvesa@gmail.com>
Abhijeet Dharmapurikar <quic_adharmap@quicinc.com> <adharmap@codeaurora.org>
Abhinav Kumar <quic_abhinavk@quicinc.com> <abhinavk@codeaurora.org>
@@ -878,6 +879,8 @@ Wolfram Sang <wsa@kernel.org> <wsa@the-dreams.de>
Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com>
Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn>
Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com>
Yixun Lan <dlan@kernel.org> <dlan@gentoo.org>
Yixun Lan <dlan@kernel.org> <yixun.lan@amlogic.com>
Yosry Ahmed <yosry.ahmed@linux.dev> <yosryahmed@google.com>
Yu-Chun Lin <eleanor.lin@realtek.com> <eleanor15x@gmail.com>
Yusuke Goda <goda.yusuke@renesas.com>

@@ -2231,6 +2231,10 @@ S: Markham, Ontario
S: L3R 8B2
S: Canada
N: Krzysztof Kozlowski
E: krzk@kernel.org
D: NFC network subsystem and drivers maintainer
N: Christian Krafft
D: PowerPC Cell support

@@ -105,7 +105,7 @@ information.
Manual fan control on the other hand, is not exposed directly by the AWCC
interface. Instead it let's us control a fan `boost` value. This `boost` value
has the following aproximate behavior over the fan pwm:
has the following approximate behavior over the fan pwm:
::

@@ -494,6 +494,10 @@ memory allocations.
The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
When CONFIG_MEM_ALLOC_PROFILING_DEBUG=y, this control is read-only to avoid
warnings produced by allocations made while profiling is disabled and freed
when it's enabled.
memory_failure_early_kill
=========================

@@ -7,7 +7,9 @@ ISA string ordering in /proc/cpuinfo
------------------------------------
The canonical order of ISA extension names in the ISA string is defined in
chapter 27 of the unprivileged specification.
Chapter 27 of the RISC-V Instruction Set Manual Volume I Unprivileged ISA
(Document Version 20191213).
The specification uses vague wording, such as should, when it comes to ordering,
so for our purposes the following rules apply:

@@ -14,7 +14,7 @@ set of mailbox registers.
More details on the interface can be found in chapter
"7 Host System Management Port (HSMP)" of the family/model PPR
Eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
Eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
HSMP interface is supported on EPYC line of server CPUs and MI300A (APU).
@@ -185,7 +185,7 @@ what happened. The transaction returns 0 on success.
More details on the interface and message definitions can be found in chapter
"7 Host System Management Port (HSMP)" of the respective family/model PPR
eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
User space C-APIs are made available by linking against the esmi library,
which is provided by the E-SMS project https://www.amd.com/en/developer/e-sms.html.

@@ -11,7 +11,7 @@ maintainers:
- Jitao shi <jitao.shi@mediatek.com>
description: |
MediaTek DP and eDP are different hardwares and there are some features
MediaTek DP and eDP are different hardware and there are some features
which are not supported for eDP. For example, audio is not supported for
eDP. Therefore, we need to use two different compatibles to describe them.
In addition, We just need to enable the power domain of DP, so the clock

@@ -74,6 +74,37 @@ allOf:
- description: aggre UFS CARD AXI clock
- description: RPMH CC IPA clock
- if:
properties:
compatible:
contains:
enum:
- qcom,sa8775p-config-noc
- qcom,sa8775p-dc-noc
- qcom,sa8775p-gem-noc
- qcom,sa8775p-gpdsp-anoc
- qcom,sa8775p-lpass-ag-noc
- qcom,sa8775p-mmss-noc
- qcom,sa8775p-nspa-noc
- qcom,sa8775p-nspb-noc
- qcom,sa8775p-pcie-anoc
- qcom,sa8775p-system-noc
then:
properties:
clocks: false
- if:
properties:
compatible:
contains:
enum:
- qcom,sa8775p-clk-virt
- qcom,sa8775p-mc-virt
then:
properties:
reg: false
clocks: false
unevaluatedProperties: false
examples:

@@ -15,7 +15,7 @@ and SB Temperature Sensor Interface (SB-TSI)).
More details on the interface can be found in chapter
"5 Advanced Platform Management Link (APML)" of the family/model PPR [1]_.
.. [1] https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
.. [1] https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
SBRMI device

@@ -33,6 +33,16 @@ Boot parameter:
sysctl:
/proc/sys/vm/mem_profiling
1: Enable memory profiling.
0: Disable memory profiling.
The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
When CONFIG_MEM_ALLOC_PROFILING_DEBUG=y, this control is read-only to avoid
warnings produced by allocations made while profiling is disabled and freed
when it's enabled.
Runtime info:
/proc/allocinfo

@@ -39,6 +39,8 @@ attribute-sets:
-
name: ipproto
type: u8
checks:
min: 1
-
name: type
type: u8

@@ -0,0 +1,41 @@
.. SPDX-License-Identifier: GPL-2.0
Linux kernel project continuity
===============================
The Linux kernel development project is widely distributed, with over
100 maintainers each working to keep changes moving through their own
repositories. The final step, though, is a centralized one where changes
are pulled into the mainline repository. That is normally done by Linus
Torvalds but, as was demonstrated by the 4.19 release in 2018, there are
others who can do that work when the need arises.
Should the maintainers of that repository become unwilling or unable to
do that work going forward (including facilitating a transition), the
project will need to find one or more replacements without delay. The
process by which that will be done is listed below. $ORGANIZER is the
last Maintainer Summit organizer or the current Linux Foundation (LF)
Technical Advisory Board (TAB) Chair as a backup.
- Within 72 hours, $ORGANIZER will open a discussion with the invitees
of the most recently concluded Maintainers Summit. A meeting of those
invitees and the TAB, either online or in-person, will be set as soon
as possible in a way that maximizes the number of people who can
participate.
- If there has been no Maintainers Summit in the last 15 months, the set of
invitees for this meeting will be determined by the TAB.
- The invitees to this meeting may bring in other maintainers as needed.
- This meeting, chaired by $ORGANIZER, will consider options for the
ongoing management of the top-level kernel repository consistent with
the expectation that it maximizes the long term health of the project
and its community.
- Within two weeks, a representative of this group will communicate to the
broader community, using the ksummit@lists.linux.dev mailing list, what
the next steps will be.
The Linux Foundation, as guided by the TAB, will take the steps
necessary to support and implement this plan.

@@ -68,6 +68,7 @@ beyond).
stable-kernel-rules
management-style
researcher-guidelines
conclave
Dealing with bugs
-----------------

@@ -363,6 +363,18 @@ just do it. As a result, a sequence of smaller series gets merged quicker and
with better review coverage. Re-posting large series also increases the mailing
list traffic.
Limit patches outstanding on mailing list
-----------------------------------------
Avoid having more than 15 patches, across all series, outstanding for
review on the mailing list for a single tree. In other words, a maximum of
15 patches under review on net, and a maximum of 15 patches under review on
net-next.
This limit is intended to focus developer effort on testing patches before
upstream review. Aiding the quality of upstream submissions, and easing the
load on reviewers.
.. _rcs:
Local variable ordering ("reverse xmas tree", "RCS")

@@ -3132,6 +3132,7 @@ F: drivers/*/*ma35*
K: ma35d1
ARM/NUVOTON NPCM ARCHITECTURE
M: Andrew Jeffery <andrew@codeconstruct.com.au>
M: Avi Fishman <avifishman70@gmail.com>
M: Tomer Maimon <tmaimon77@gmail.com>
M: Tali Perry <tali.perry1@gmail.com>
@@ -3140,6 +3141,7 @@ R: Nancy Yuen <yuenn@google.com>
R: Benjamin Fair <benjaminfair@google.com>
L: openbmc@lists.ozlabs.org (moderated for non-subscribers)
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/bmc/linux.git
F: Documentation/devicetree/bindings/*/*/*npcm*
F: Documentation/devicetree/bindings/*/*npcm*
F: Documentation/devicetree/bindings/rtc/nuvoton,nct3018y.yaml
@@ -13170,6 +13172,7 @@ F: Documentation/devicetree/bindings/interconnect/
F: Documentation/driver-api/interconnect.rst
F: drivers/interconnect/
F: include/dt-bindings/interconnect/
F: include/linux/interconnect-clk.h
F: include/linux/interconnect-provider.h
F: include/linux/interconnect.h
@@ -18484,9 +18487,8 @@ F: include/uapi/linux/nexthop.h
F: net/ipv4/nexthop.c
NFC SUBSYSTEM
M: Krzysztof Kozlowski <krzk@kernel.org>
L: netdev@vger.kernel.org
S: Maintained
S: Orphan
F: Documentation/devicetree/bindings/net/nfc/
F: drivers/nfc/
F: include/net/nfc/
@@ -21103,6 +21105,10 @@ S: Maintained
F: rust/helpers/pwm.c
F: rust/kernel/pwm.rs
PWM SUBSYSTEM DRIVERS [RUST]
R: Michal Wilczynski <m.wilczynski@samsung.com>
F: drivers/pwm/*.rs
PXA GPIO DRIVER
M: Robert Jarzmik <robert.jarzmik@free.fr>
L: linux-gpio@vger.kernel.org
@@ -22532,7 +22538,7 @@ F: drivers/mailbox/riscv-sbi-mpxy-mbox.c
F: include/linux/mailbox/riscv-rpmi-message.h
RISC-V SPACEMIT SoC Support
M: Yixun Lan <dlan@gentoo.org>
M: Yixun Lan <dlan@kernel.org>
L: linux-riscv@lists.infradead.org
L: spacemit@lists.linux.dev
S: Maintained

@@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Baby Opossum Posse
# *DOCUMENTATION*

@@ -54,6 +54,7 @@
&mdio0 {
pinctrl-0 = <&miim_a_pins>;
pinctrl-names = "default";
reset-gpios = <&gpio 53 GPIO_ACTIVE_LOW>;
status = "okay";
ext_phy0: ethernet-phy@7 {

@@ -527,7 +527,7 @@
interrupts = <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&pmc PMC_TYPE_PERIPHERAL 37>;
#address-cells = <1>;
#size-cells = <1>;
#size-cells = <0>;
dmas = <&dma0 AT91_XDMAC_DT_PERID(12)>,
<&dma0 AT91_XDMAC_DT_PERID(11)>;
dma-names = "tx", "rx";
@@ -676,7 +676,7 @@
flx9: flexcom@e2820000 {
compatible = "microchip,sama7d65-flexcom", "atmel,sama5d2-flexcom";
reg = <0xe2820000 0x200>;
ranges = <0x0 0xe281c000 0x800>;
ranges = <0x0 0xe2820000 0x800>;
clocks = <&pmc PMC_TYPE_PERIPHERAL 43>;
#address-cells = <1>;
#size-cells = <1>;

@@ -30,7 +30,6 @@ config ARCH_NPCM7XX
select ARM_ERRATA_764369 if SMP
select ARM_ERRATA_720789
select ARM_ERRATA_754322
select ARM_ERRATA_794072
select PL310_ERRATA_588369
select PL310_ERRATA_727915
select MFD_SYSCON

@@ -202,19 +202,6 @@
nvidia,outputs = <&dsia &dsib &sor0 &sor1>;
nvidia,head = <0>;
interconnects = <&mc TEGRA210_MC_DISPLAY0A &emc>,
<&mc TEGRA210_MC_DISPLAY0B &emc>,
<&mc TEGRA210_MC_DISPLAY0C &emc>,
<&mc TEGRA210_MC_DISPLAYHC &emc>,
<&mc TEGRA210_MC_DISPLAYD &emc>,
<&mc TEGRA210_MC_DISPLAYT &emc>;
interconnect-names = "wina",
"winb",
"winc",
"cursor",
"wind",
"wint";
};
dc@54240000 {
@@ -230,15 +217,6 @@
nvidia,outputs = <&dsia &dsib &sor0 &sor1>;
nvidia,head = <1>;
interconnects = <&mc TEGRA210_MC_DISPLAY0AB &emc>,
<&mc TEGRA210_MC_DISPLAY0BB &emc>,
<&mc TEGRA210_MC_DISPLAY0CB &emc>,
<&mc TEGRA210_MC_DISPLAYHCB &emc>;
interconnect-names = "wina",
"winb",
"winc",
"cursor";
};
dsia: dsi@54300000 {
@@ -1052,7 +1030,6 @@
#iommu-cells = <1>;
#reset-cells = <1>;
#interconnect-cells = <1>;
};
emc: external-memory-controller@7001b000 {
@@ -1066,7 +1043,6 @@
nvidia,memory-controller = <&mc>;
operating-points-v2 = <&emc_icc_dvfs_opp_table>;
#interconnect-cells = <0>;
#cooling-cells = <2>;
};

@@ -5788,8 +5788,12 @@
clocks = <&rpmhcc RPMH_CXO_CLK>;
clock-names = "xo";
power-domains = <&rpmhpd SC8280XP_NSP>;
power-domain-names = "nsp";
power-domains = <&rpmhpd SC8280XP_NSP>,
<&rpmhpd SC8280XP_CX>,
<&rpmhpd SC8280XP_MXC>;
power-domain-names = "nsp",
"cx",
"mxc";
memory-region = <&pil_nsp0_mem>;
@@ -5919,8 +5923,12 @@
clocks = <&rpmhcc RPMH_CXO_CLK>;
clock-names = "xo";
power-domains = <&rpmhpd SC8280XP_NSP>;
power-domain-names = "nsp";
power-domains = <&rpmhpd SC8280XP_NSP>,
<&rpmhpd SC8280XP_CX>,
<&rpmhpd SC8280XP_MXC>;
power-domain-names = "nsp",
"cx",
"mxc";
memory-region = <&pil_nsp1_mem>;

@@ -31,9 +31,9 @@
};
&display_panel {
status = "okay";
compatible = "samsung,sofef00-ams628nw01", "samsung,sofef00";
compatible = "samsung,sofef00";
status = "okay";
};
&bq27441_fg {

@@ -4133,8 +4133,6 @@
usb_1: usb@a600000 {
compatible = "qcom,sm8550-dwc3", "qcom,snps-dwc3";
reg = <0x0 0x0a600000 0x0 0xfc100>;
#address-cells = <1>;
#size-cells = <0>;
clocks = <&gcc GCC_CFG_NOC_USB3_PRIM_AXI_CLK>,
<&gcc GCC_USB30_PRIM_MASTER_CLK>,

@@ -5150,9 +5150,6 @@
dma-coherent;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
ports {

@@ -1399,10 +1399,10 @@
<&gcc GCC_AGGRE_UFS_PHY_AXI_CLK>,
<&gcc GCC_UFS_PHY_AHB_CLK>,
<&gcc GCC_UFS_PHY_UNIPRO_CORE_CLK>,
<&gcc GCC_UFS_PHY_ICE_CORE_CLK>,
<&rpmhcc RPMH_CXO_CLK>,
<&gcc GCC_UFS_PHY_TX_SYMBOL_0_CLK>,
<&gcc GCC_UFS_PHY_RX_SYMBOL_0_CLK>;
<&gcc GCC_UFS_PHY_RX_SYMBOL_0_CLK>,
<&gcc GCC_UFS_PHY_ICE_CORE_CLK>;
clock-names = "core_clk",
"bus_aggr_clk",
"iface_clk",

@@ -199,7 +199,7 @@
compatible = "brcm,bcm43455-fmac", "brcm,bcm4329-fmac";
reg = <1>;
interrupt-parent = <&gpio0>;
interrupts = <RK_PA3 GPIO_ACTIVE_HIGH>;
interrupts = <RK_PA3 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "host-wake";
pinctrl-names = "default";
pinctrl-0 = <&wifi_host_wake>;

@@ -14,7 +14,8 @@
joystick_mux_controller: mux-controller {
compatible = "gpio-mux";
pinctrl = <&mux_en_pins>;
pinctrl-0 = <&mux_en_pins>;
pinctrl-names = "default";
#mux-control-cells = <0>;
mux-gpios = <&gpio3 RK_PB3 GPIO_ACTIVE_LOW>,

@@ -424,9 +424,7 @@
&pcie0 {
ep-gpios = <&gpio2 RK_PD4 GPIO_ACTIVE_HIGH>;
max-link-speed = <2>;
num-lanes = <2>;
pinctrl-names = "default";
status = "okay";
vpcie12v-supply = <&vcc12v_dcin>;

@@ -71,7 +71,6 @@
};
&pcie0 {
max-link-speed = <1>;
num-lanes = <1>;
vpcie3v3-supply = <&vcc3v3_sys>;
};

@@ -969,7 +969,6 @@
};
&spi1 {
max-freq = <10000000>;
status = "okay";
spiflash: flash@0 {

@@ -40,13 +40,13 @@
button-up {
label = "Volume Up";
linux,code = <KEY_VOLUMEUP>;
press-threshold-microvolt = <100000>;
press-threshold-microvolt = <2000>;
};
button-down {
label = "Volume Down";
linux,code = <KEY_VOLUMEDOWN>;
press-threshold-microvolt = <600000>;
press-threshold-microvolt = <300000>;
};
};

@@ -483,7 +483,7 @@
pinctrl-names = "default";
pinctrl-0 = <&q7_thermal_pin &bios_disable_override_hog_pin>;
gpios {
gpio-pins {
bios_disable_override_hog_pin: bios-disable-override-hog-pin {
rockchip,pins =
<3 RK_PD5 RK_FUNC_GPIO &pcfg_pull_down>;

@@ -529,11 +529,11 @@
rockchip,pins = <1 RK_PC5 RK_FUNC_GPIO &pcfg_pull_up>;
};
vsel1_gpio: vsel1-gpio {
vsel1_gpio: vsel1-gpio-pin {
rockchip,pins = <1 RK_PC1 RK_FUNC_GPIO &pcfg_pull_down>;
};
vsel2_gpio: vsel2-gpio {
vsel2_gpio: vsel2-gpio-pin {
rockchip,pins = <1 RK_PB6 RK_FUNC_GPIO &pcfg_pull_down>;
};
};

@@ -11,7 +11,6 @@
#include "rk3568-wolfvision-pf5-display.dtsi"
&st7789 {
compatible = "jasonic,jt240mhqs-hwt-ek-e3",
"sitronix,st7789v";
compatible = "jasonic,jt240mhqs-hwt-ek-e3";
rotation = <270>;
};

@@ -201,6 +201,7 @@
pinctrl-names = "default";
pinctrl-0 = <&hp_det_l>;
simple-audio-card,bitclock-master = <&masterdai>;
simple-audio-card,format = "i2s";
simple-audio-card,hp-det-gpios = <&gpio2 RK_PD6 GPIO_ACTIVE_LOW>;
simple-audio-card,mclk-fs = <256>;
@@ -211,15 +212,16 @@
"Headphones", "HPOR",
"IN1P", "Microphone Jack";
simple-audio-card,widgets =
"Headphone", "Headphone Jack",
"Headphone", "Headphones",
"Microphone", "Microphone Jack";
simple-audio-card,codec {
sound-dai = <&rt5616>;
};
simple-audio-card,cpu {
masterdai: simple-audio-card,cpu {
sound-dai = <&sai2>;
system-clock-frequency = <12288000>;
};
};
};
@@ -727,10 +729,12 @@
rt5616: audio-codec@1b {
compatible = "realtek,rt5616";
reg = <0x1b>;
assigned-clocks = <&cru CLK_SAI2_MCLKOUT>;
assigned-clocks = <&cru CLK_SAI2_MCLKOUT_TO_IO>;
assigned-clock-rates = <12288000>;
clocks = <&cru CLK_SAI2_MCLKOUT>;
clocks = <&cru CLK_SAI2_MCLKOUT_TO_IO>;
clock-names = "mclk";
pinctrl-0 = <&sai2m0_mclk>;
pinctrl-names = "default";
#sound-dai-cells = <0>;
};
};

@@ -1261,7 +1261,7 @@
gpu: gpu@27800000 {
compatible = "rockchip,rk3576-mali", "arm,mali-bifrost";
reg = <0x0 0x27800000 0x0 0x200000>;
reg = <0x0 0x27800000 0x0 0x20000>;
assigned-clocks = <&scmi_clk SCMI_CLK_GPU>;
assigned-clock-rates = <198000000>;
clocks = <&cru CLK_GPU>;

@@ -1200,7 +1200,7 @@
status = "disabled";
};
rknn_mmu_1: iommu@fdac9000 {
rknn_mmu_1: iommu@fdaca000 {
compatible = "rockchip,rk3588-iommu", "rockchip,rk3568-iommu";
reg = <0x0 0xfdaca000 0x0 0x100>;
interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH 0>;
@@ -1230,7 +1230,7 @@
status = "disabled";
};
rknn_mmu_2: iommu@fdad9000 {
rknn_mmu_2: iommu@fdada000 {
compatible = "rockchip,rk3588-iommu", "rockchip,rk3568-iommu";
reg = <0x0 0xfdada000 0x0 0x100>;
interrupts = <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH 0>;

@@ -300,6 +300,8 @@ void kvm_get_kimage_voffset(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void kvm_compute_final_ctr_el0(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void kvm_pan_patch_el2_entry(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
u64 elr_phys, u64 par, uintptr_t vcpu, u64 far, u64 hpfar);

@@ -119,22 +119,6 @@ static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
return (unsigned long *)&vcpu->arch.hcr_el2;
}
static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu)
{
vcpu->arch.hcr_el2 &= ~HCR_TWE;
if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
vcpu->kvm->arch.vgic.nassgireq)
vcpu->arch.hcr_el2 &= ~HCR_TWI;
else
vcpu->arch.hcr_el2 |= HCR_TWI;
}
static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
{
vcpu->arch.hcr_el2 |= HCR_TWE;
vcpu->arch.hcr_el2 |= HCR_TWI;
}
static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.vsesr_el2;

@@ -87,7 +87,15 @@ typedef u64 kvm_pte_t;
#define KVM_PTE_LEAF_ATTR_HI_SW GENMASK(58, 55)
#define KVM_PTE_LEAF_ATTR_HI_S1_XN BIT(54)
#define __KVM_PTE_LEAF_ATTR_HI_S1_XN BIT(54)
#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN BIT(54)
#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN BIT(53)
#define KVM_PTE_LEAF_ATTR_HI_S1_XN \
({ cpus_have_final_cap(ARM64_KVM_HVHE) ? \
(__KVM_PTE_LEAF_ATTR_HI_S1_UXN | \
__KVM_PTE_LEAF_ATTR_HI_S1_PXN) : \
__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
#define KVM_PTE_LEAF_ATTR_HI_S2_XN GENMASK(54, 53)
@@ -293,8 +301,8 @@ typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end,
* children.
* @KVM_PGTABLE_WALK_SHARED: Indicates the page-tables may be shared
* with other software walkers.
* @KVM_PGTABLE_WALK_HANDLE_FAULT: Indicates the page-table walk was
* invoked from a fault handler.
* @KVM_PGTABLE_WALK_IGNORE_EAGAIN: Don't terminate the walk early if
* the walker returns -EAGAIN.
* @KVM_PGTABLE_WALK_SKIP_BBM_TLBI: Visit and update table entries
* without Break-before-make's
* TLB invalidation.
@@ -307,7 +315,7 @@ enum kvm_pgtable_walk_flags {
KVM_PGTABLE_WALK_TABLE_PRE = BIT(1),
KVM_PGTABLE_WALK_TABLE_POST = BIT(2),
KVM_PGTABLE_WALK_SHARED = BIT(3),
KVM_PGTABLE_WALK_HANDLE_FAULT = BIT(4),
KVM_PGTABLE_WALK_IGNORE_EAGAIN = BIT(4),
KVM_PGTABLE_WALK_SKIP_BBM_TLBI = BIT(5),
KVM_PGTABLE_WALK_SKIP_CMO = BIT(6),
};

@@ -91,7 +91,8 @@
*/
#define pstate_field(op1, op2) ((op1) << Op1_shift | (op2) << Op2_shift)
#define PSTATE_Imm_shift CRm_shift
#define SET_PSTATE(x, r) __emit_inst(0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift))
#define ENCODE_PSTATE(x, r) (0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift))
#define SET_PSTATE(x, r) __emit_inst(ENCODE_PSTATE(x, r))
#define PSTATE_PAN pstate_field(0, 4)
#define PSTATE_UAO pstate_field(0, 3)

@@ -402,7 +402,7 @@ int swsusp_arch_suspend(void)
* Memory allocated by get_safe_page() will be dealt with by the hibernate code,
* we don't need to free it here.
*/
int swsusp_arch_resume(void)
int __nocfi swsusp_arch_resume(void)
{
int rc;
void *zero_page;

@@ -86,6 +86,7 @@ KVM_NVHE_ALIAS(kvm_patch_vector_branch);
KVM_NVHE_ALIAS(kvm_update_va_mask);
KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
KVM_NVHE_ALIAS(kvm_pan_patch_el2_entry);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable);
KVM_NVHE_ALIAS(spectre_bhb_patch_wa3);

@@ -968,20 +968,18 @@ static int sve_set_common(struct task_struct *target,
vq = sve_vq_from_vl(task_get_vl(target, type));
/* Enter/exit streaming mode */
if (system_supports_sme()) {
switch (type) {
case ARM64_VEC_SVE:
target->thread.svcr &= ~SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SVE);
break;
case ARM64_VEC_SME:
target->thread.svcr |= SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SME);
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
switch (type) {
case ARM64_VEC_SVE:
target->thread.svcr &= ~SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SVE);
break;
case ARM64_VEC_SME:
target->thread.svcr |= SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SME);
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
/* Always zero V regs, FPSR, and FPCR */

@@ -449,12 +449,28 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
if (user->sve_size < SVE_SIG_CONTEXT_SIZE(vq))
return -EINVAL;
if (sm) {
sme_alloc(current, false);
if (!current->thread.sme_state)
return -ENOMEM;
}
sve_alloc(current, true);
if (!current->thread.sve_state) {
clear_thread_flag(TIF_SVE);
return -ENOMEM;
}
if (sm) {
current->thread.svcr |= SVCR_SM_MASK;
set_thread_flag(TIF_SME);
} else {
current->thread.svcr &= ~SVCR_SM_MASK;
set_thread_flag(TIF_SVE);
}
current->thread.fp_type = FP_STATE_SVE;
err = __copy_from_user(current->thread.sve_state,
(char __user const *)user->sve +
SVE_SIG_REGS_OFFSET,
@@ -462,12 +478,6 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
if (err)
return -EFAULT;
if (flags & SVE_SIG_FLAG_SM)
current->thread.svcr |= SVCR_SM_MASK;
else
set_thread_flag(TIF_SVE);
current->thread.fp_type = FP_STATE_SVE;
err = read_fpsimd_context(&fpsimd, user);
if (err)
return err;
@@ -576,6 +586,10 @@ static int restore_za_context(struct user_ctxs *user)
if (user->za_size < ZA_SIG_CONTEXT_SIZE(vq))
return -EINVAL;
sve_alloc(current, false);
if (!current->thread.sve_state)
return -ENOMEM;
sme_alloc(current, true);
if (!current->thread.sme_state) {
current->thread.svcr &= ~SVCR_ZA_MASK;

@@ -569,6 +569,7 @@ static bool kvm_vcpu_should_clear_twi(struct kvm_vcpu *vcpu)
return kvm_wfi_trap_policy == KVM_WFX_NOTRAP;
return single_task_running() &&
vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 &&
(atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
vcpu->kvm->arch.vgic.nassgireq);
}

@@ -403,6 +403,7 @@ static int walk_s1(struct kvm_vcpu *vcpu, struct s1_walk_info *wi,
struct s1_walk_result *wr, u64 va)
{
u64 va_top, va_bottom, baddr, desc, new_desc, ipa;
struct kvm_s2_trans s2_trans = {};
int level, stride, ret;
level = wi->sl;
@@ -420,8 +421,6 @@ static int walk_s1(struct kvm_vcpu *vcpu, struct s1_walk_info *wi,
ipa = baddr | index;
if (wi->s2) {
struct kvm_s2_trans s2_trans = {};
ret = kvm_walk_nested_s2(vcpu, ipa, &s2_trans);
if (ret) {
fail_s1_walk(wr,
@@ -515,6 +514,11 @@ static int walk_s1(struct kvm_vcpu *vcpu, struct s1_walk_info *wi,
new_desc |= PTE_AF;
if (new_desc != desc) {
if (wi->s2 && !kvm_s2_trans_writable(&s2_trans)) {
fail_s1_walk(wr, ESR_ELx_FSC_PERM_L(level), true);
return -EPERM;
}
ret = kvm_swap_s1_desc(vcpu, ipa, desc, new_desc, wi);
if (ret)
return ret;

@@ -126,7 +126,9 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
add x1, x1, #VCPU_CONTEXT
ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
alternative_cb ARM64_ALWAYS_SYSTEM, kvm_pan_patch_el2_entry
nop
alternative_cb_end
// Store the guest regs x2 and x3
stp x2, x3, [x1, #CPU_XREG_OFFSET(2)]

@@ -854,7 +854,7 @@ static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
return false;
}
static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu)
{
/*
* Check for the conditions of Cortex-A510's #2077057. When these occur

@@ -180,6 +180,9 @@ static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt)
/* Propagate WFx trapping flags */
hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
} else {
memcpy(&hyp_vcpu->vcpu.arch.fgt, hyp_vcpu->host_vcpu->arch.fgt,
sizeof(hyp_vcpu->vcpu.arch.fgt));
}
}

@@ -172,7 +172,6 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
/* Trust the host for non-protected vcpu features. */
vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2;
memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt));
return 0;
}

@@ -211,7 +211,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
{
const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
synchronize_vcpu_pstate(vcpu, exit_code);
synchronize_vcpu_pstate(vcpu);
/*
* Some guests (e.g., protected VMs) are not be allowed to run in

@@ -144,7 +144,7 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
* page table walk.
*/
if (r == -EAGAIN)
return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
return walker->flags & KVM_PGTABLE_WALK_IGNORE_EAGAIN;
return !r;
}
@@ -1262,7 +1262,8 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
{
return stage2_update_leaf_attrs(pgt, addr, size, 0,
KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W,
NULL, NULL, 0);
NULL, NULL,
KVM_PGTABLE_WALK_IGNORE_EAGAIN);
}
void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,

@@ -536,7 +536,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
{
synchronize_vcpu_pstate(vcpu, exit_code);
synchronize_vcpu_pstate(vcpu);
/*
* If we were in HYP context on entry, adjust the PSTATE view

@@ -497,7 +497,7 @@ static int share_pfn_hyp(u64 pfn)
this->count = 1;
rb_link_node(&this->node, parent, node);
rb_insert_color(&this->node, &hyp_shared_pfns);
ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn, 1);
ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
unlock:
mutex_unlock(&hyp_shared_pfns_lock);
@@ -523,7 +523,7 @@ static int unshare_pfn_hyp(u64 pfn)
rb_erase(&this->node, &hyp_shared_pfns);
kfree(this);
ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn, 1);
ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
unlock:
mutex_unlock(&hyp_shared_pfns_lock);
@@ -1563,14 +1563,12 @@ static void adjust_nested_exec_perms(struct kvm *kvm,
*prot &= ~KVM_PGTABLE_PROT_PX;
}
#define KVM_PGTABLE_WALK_MEMABORT_FLAGS (KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED)
static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_s2_trans *nested,
struct kvm_memory_slot *memslot, bool is_perm)
{
bool write_fault, exec_fault, writable;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt;
unsigned long mmu_seq;
@@ -1665,7 +1663,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_pgtable *pgt;
struct page *page;
vm_flags_t vm_flags;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
if (fault_is_perm)
fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1933,7 +1931,7 @@ out_unlock:
/* Resolve the access fault by making the page young again. */
static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
{
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
struct kvm_s2_mmu *mmu;
trace_kvm_access_fault(fault_ipa);

@@ -4668,7 +4668,10 @@ static void perform_access(struct kvm_vcpu *vcpu,
* that we don't know how to handle. This certainly qualifies
* as a gross bug that should be fixed right away.
*/
BUG_ON(!r->access);
if (!r->access) {
bad_trap(vcpu, params, r, "register access");
return;
}
/* Skip instruction if instructed so */
if (likely(r->access(vcpu, params, r)))

@@ -296,3 +296,31 @@ void kvm_compute_final_ctr_el0(struct alt_instr *alt,
generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0),
origptr, updptr, nr_inst);
}
void kvm_pan_patch_el2_entry(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
/*
* If we're running at EL1 without hVHE, then SCTLR_EL2.SPAN means
* nothing to us (it is RES1), and we don't need to set PSTATE.PAN
* to anything useful.
*/
if (!is_kernel_in_hyp_mode() && !cpus_have_cap(ARM64_KVM_HVHE))
return;
/*
* Leap of faith: at this point, we must be running VHE one way or
* another, and FEAT_PAN is required to be implemented. If KVM
* explodes at runtime because your system does not abide by this
* requirement, call your favourite HW vendor, they have screwed up.
*
* We don't expect hVHE to access any userspace mapping, so always
* set PSTATE.PAN on enty. Same thing if we have PAN enabled on an
* EL2 kernel. Only force it to 0 if we have not configured PAN in
* the kernel (and you know this is really silly).
*/
if (cpus_have_cap(ARM64_KVM_HVHE) || IS_ENABLED(CONFIG_ARM64_PAN))
*updptr = cpu_to_le32(ENCODE_PSTATE(1, PAN));
else
*updptr = cpu_to_le32(ENCODE_PSTATE(0, PAN));
}

@@ -84,6 +84,7 @@ config ERRATA_STARFIVE_JH7100
select DMA_GLOBAL_POOL
select RISCV_DMA_NONCOHERENT
select RISCV_NONSTANDARD_CACHE_OPS
select CACHEMAINT_FOR_DMA
select SIFIVE_CCACHE
default n
help

@@ -97,13 +97,23 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm, unsigne
*/
#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
/*
* Use a temporary variable for the output of the asm goto to avoid a
* triggering an LLVM assertion due to sign extending the output when
* it is used in later function calls:
* https://github.com/llvm/llvm-project/issues/143795
*/
#define __get_user_asm(insn, x, ptr, label) \
do { \
u64 __tmp; \
asm_goto_output( \
"1:\n" \
" " insn " %0, %1\n" \
_ASM_EXTABLE_UACCESS_ERR(1b, %l2, %0) \
: "=&r" (x) \
: "m" (*(ptr)) : : label)
: "=&r" (__tmp) \
: "m" (*(ptr)) : : label); \
(x) = (__typeof__(x))(unsigned long)__tmp; \
} while (0)
#else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
#define __get_user_asm(insn, x, ptr, label) \
do { \

@@ -51,10 +51,11 @@ void suspend_restore_csrs(struct suspend_context *context)
#ifdef CONFIG_MMU
if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SSTC)) {
csr_write(CSR_STIMECMP, context->stimecmp);
#if __riscv_xlen < 64
csr_write(CSR_STIMECMP, ULONG_MAX);
csr_write(CSR_STIMECMPH, context->stimecmph);
#endif
csr_write(CSR_STIMECMP, context->stimecmp);
}
csr_write(CSR_SATP, context->satp);
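The RV32 STIMECMP hunks in this merge all enforce the same write ordering: park the low half at `ULONG_MAX` first, then write the high half, then the real low half, so the 64-bit comparator never transiently holds a value below the intended deadline. A minimal userspace sketch of that ordering, with plain variables standing in for the CSRs:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated 32-bit halves of a 64-bit timer comparator. */
static uint32_t timecmp_lo, timecmp_hi;

static uint64_t timecmp_read(void)
{
	return ((uint64_t)timecmp_hi << 32) | timecmp_lo;
}

/*
 * Write a 64-bit deadline via two 32-bit registers without ever
 * exposing a transient value below the old or new deadline:
 * 1) lo = 0xFFFFFFFF pushes the comparator into the far future,
 * 2) hi = new high half (composed value still >= the deadline),
 * 3) lo = new low half completes the write.
 */
static void timecmp_write(uint64_t ncycles)
{
	timecmp_lo = 0xFFFFFFFFu;
	timecmp_hi = (uint32_t)(ncycles >> 32);
	timecmp_lo = (uint32_t)ncycles;
}
```

Writing low-then-high in the opposite order (as the removed lines did) briefly composes old-high with new-low, which can fire a spurious timer interrupt.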

View file

@ -72,8 +72,9 @@ static int kvm_riscv_vcpu_timer_cancel(struct kvm_vcpu_timer *t)
static int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles)
{
#if defined(CONFIG_32BIT)
ncsr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF);
ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
ncsr_write(CSR_VSTIMECMPH, ncycles >> 32);
ncsr_write(CSR_VSTIMECMP, (u32)ncycles);
#else
ncsr_write(CSR_VSTIMECMP, ncycles);
#endif
@ -307,8 +308,9 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
return;
#if defined(CONFIG_32BIT)
ncsr_write(CSR_VSTIMECMP, (u32)t->next_cycles);
ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
ncsr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32));
ncsr_write(CSR_VSTIMECMP, (u32)(t->next_cycles));
#else
ncsr_write(CSR_VSTIMECMP, t->next_cycles);
#endif

View file

@ -137,6 +137,15 @@ SECTIONS
}
_end = .;
/* Sections to be discarded */
/DISCARD/ : {
COMMON_DISCARDS
*(.eh_frame)
*(*__ksymtab*)
*(___kcrctab*)
*(.modinfo)
}
DWARF_DEBUG
ELF_DETAILS
@ -161,12 +170,4 @@ SECTIONS
*(.rela.*) *(.rela_*)
}
ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) detected!")
/* Sections to be discarded */
/DISCARD/ : {
COMMON_DISCARDS
*(.eh_frame)
*(*__ksymtab*)
*(___kcrctab*)
}
}

View file

@ -28,7 +28,7 @@ KBUILD_CFLAGS_VDSO := $(filter-out -mno-pic-data-is-text-relative,$(KBUILD_CFLAG
KBUILD_CFLAGS_VDSO := $(filter-out -munaligned-symbols,$(KBUILD_CFLAGS_VDSO))
KBUILD_CFLAGS_VDSO := $(filter-out -fno-asynchronous-unwind-tables,$(KBUILD_CFLAGS_VDSO))
KBUILD_CFLAGS_VDSO += -fPIC -fno-common -fno-builtin -fasynchronous-unwind-tables
KBUILD_CFLAGS_VDSO += -fno-stack-protector
KBUILD_CFLAGS_VDSO += -fno-stack-protector $(DISABLE_KSTACK_ERASE)
ldflags-y := -shared -soname=linux-vdso.so.1 \
--hash-style=both --build-id=sha1 -T

View file

@ -1574,13 +1574,22 @@ static inline bool intel_pmu_has_bts_period(struct perf_event *event, u64 period
struct hw_perf_event *hwc = &event->hw;
unsigned int hw_event, bts_event;
if (event->attr.freq)
/*
* Only use BTS for fixed rate period==1 events.
*/
if (event->attr.freq || period != 1)
return false;
/*
* BTS doesn't virtualize.
*/
if (event->attr.exclude_host)
return false;
hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
return hw_event == bts_event && period == 1;
return hw_event == bts_event;
}
static inline bool intel_pmu_has_bts(struct perf_event *event)

View file

@ -42,10 +42,34 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
unsigned int level;
pte_t *pte = lookup_address(addr, &level);
pteval_t val;
if (WARN_ON(!pte || level != PG_LEVEL_4K))
return false;
val = pte_val(*pte);
/*
* protect requires making the page not-present. If the PTE is
* already in the right state, there's nothing to do.
*/
if (protect != !!(val & _PAGE_PRESENT))
return true;
/*
* Otherwise, invert the entire PTE. This avoids writing out an
* L1TF-vulnerable PTE (not present, without the high address bits
* set).
*/
set_pte(pte, __pte(~val));
/*
* If the page was protected (non-present) and we're making it
* present, there is no need to flush the TLB at all.
*/
if (!protect)
return true;
/*
* We need to avoid IPIs, as we may get KFENCE allocations or faults
* with interrupts disabled. Therefore, the below is best-effort, and
@ -53,11 +77,6 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
* lazy fault handling takes care of faults after the page is PRESENT.
*/
if (protect)
set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
else
set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
/*
* Flush this CPU's TLB, assuming whoever did the allocation/free is
* likely to continue running on this CPU.
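The inversion above writes `~val` instead of just clearing `_PAGE_PRESENT`, so a not-present PTE never carries its true high address bits, which is the L1TF-vulnerable pattern the comment warns about. A toy model of the invariant, using a made-up present bit rather than the real x86 PTE layout:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_PRESENT 0x1ull	/* stand-in for _PAGE_PRESENT */

/*
 * Toggle a toy PTE between present and not-present by inverting
 * the whole value, as the kfence hunk does with set_pte().
 */
static uint64_t pte_flip(uint64_t val)
{
	return ~val;
}
```

Because inversion is an involution, flipping twice restores the original PTE, and while the page is protected the stored address bits are the complement of the real ones.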

View file

@ -821,8 +821,6 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
force_sig_pkuerr((void __user *)address, pkey);
else
force_sig_fault(SIGSEGV, si_code, (void __user *)address);
local_irq_disable();
}
static noinline void
@ -1474,15 +1472,12 @@ handle_page_fault(struct pt_regs *regs, unsigned long error_code,
do_kern_addr_fault(regs, error_code, address);
} else {
do_user_addr_fault(regs, error_code, address);
/*
* User address page fault handling might have reenabled
* interrupts. Fixing up all potential exit points of
* do_user_addr_fault() and its leaf functions is just not
* doable w/o creating an unholy mess or turning the code
* upside down.
*/
local_irq_disable();
}
/*
* page fault handling might have reenabled interrupts,
* make sure to disable them again.
*/
local_irq_disable();
}
DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)

View file

@ -1480,7 +1480,7 @@ EXPORT_SYMBOL_GPL(blk_rq_is_poll);
static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
{
do {
blk_hctx_poll(rq->q, rq->mq_hctx, NULL, 0);
blk_hctx_poll(rq->q, rq->mq_hctx, NULL, BLK_POLL_ONESHOT);
cond_resched();
} while (!completion_done(wait));
}

View file

@ -1957,6 +1957,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
disk->nr_zones = args->nr_zones;
if (args->nr_conv_zones >= disk->nr_zones) {
queue_limits_cancel_update(q);
pr_warn("%s: Invalid number of conventional zones %u / %u\n",
disk->disk_name, args->nr_conv_zones, disk->nr_zones);
ret = -ENODEV;

View file

@ -169,6 +169,9 @@ static int crypto_authenc_esn_encrypt(struct aead_request *req)
struct scatterlist *src, *dst;
int err;
if (assoclen < 8)
return -EINVAL;
sg_init_table(areq_ctx->src, 2);
src = scatterwalk_ffwd(areq_ctx->src, req->src, assoclen);
dst = src;
@ -256,6 +259,9 @@ static int crypto_authenc_esn_decrypt(struct aead_request *req)
u32 tmp[2];
int err;
if (assoclen < 8)
return -EINVAL;
cryptlen -= authsize;
if (req->src != dst)

View file

@ -2094,13 +2094,13 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ap->flags & ATA_FLAG_EM)
ap->em_message_type = hpriv->em_msg_type;
ahci_mark_external_port(ap);
ahci_update_initial_lpm_policy(ap);
/* disabled/not-implemented port */
if (!(hpriv->port_map & (1 << i)))
if (!(hpriv->port_map & (1 << i))) {
ap->ops = &ata_dummy_port_ops;
} else {
ahci_mark_external_port(ap);
ahci_update_initial_lpm_policy(ap);
}
}
/* apply workaround for ASUS P5W DH Deluxe mainboard */

View file

@ -2872,7 +2872,8 @@ static void ata_dev_config_lpm(struct ata_device *dev)
static void ata_dev_print_features(struct ata_device *dev)
{
if (!(dev->flags & ATA_DFLAG_FEATURES_MASK))
if (!(dev->flags & ATA_DFLAG_FEATURES_MASK) && !dev->cpr_log &&
!ata_id_has_hipm(dev->id) && !ata_id_has_dipm(dev->id))
return;
ata_dev_info(dev,
@ -3116,6 +3117,11 @@ int ata_dev_configure(struct ata_device *dev)
ata_mode_string(xfer_mask),
cdb_intr_string, atapi_an_string,
dma_dir_string);
ata_dev_config_lpm(dev);
if (print_info)
ata_dev_print_features(dev);
}
/* determine max_sectors */

View file

@ -909,7 +909,7 @@ static bool ata_scsi_lpm_supported(struct ata_port *ap)
struct ata_link *link;
struct ata_device *dev;
if (ap->flags & ATA_FLAG_NO_LPM)
if ((ap->flags & ATA_FLAG_NO_LPM) || !ap->ops->set_lpm)
return false;
ata_for_each_link(link, ap, EDGE) {

View file

@ -548,6 +548,8 @@ static DEVICE_ATTR_RW(state_synced);
static void device_unbind_cleanup(struct device *dev)
{
devres_release_all(dev);
if (dev->driver->p_cb.post_unbind_rust)
dev->driver->p_cb.post_unbind_rust(dev);
arch_teardown_dma_ops(dev);
kfree(dev->dma_range_map);
dev->dma_range_map = NULL;

View file

@ -95,12 +95,13 @@ static int regcache_maple_write(struct regmap *map, unsigned int reg,
mas_unlock(&mas);
if (ret == 0) {
kfree(lower);
kfree(upper);
if (ret) {
kfree(entry);
return ret;
}
return ret;
kfree(lower);
kfree(upper);
return 0;
}
static int regcache_maple_drop(struct regmap *map, unsigned int min,

View file

@ -408,9 +408,11 @@ static void regmap_lock_hwlock_irq(void *__map)
static void regmap_lock_hwlock_irqsave(void *__map)
{
struct regmap *map = __map;
unsigned long flags = 0;
hwspin_lock_timeout_irqsave(map->hwlock, UINT_MAX,
&map->spinlock_flags);
&flags);
map->spinlock_flags = flags;
}
static void regmap_unlock_hwlock(void *__map)

View file

@ -2885,6 +2885,15 @@ static struct ublk_device *ublk_get_device_from_id(int idx)
return ub;
}
static bool ublk_validate_user_pid(struct ublk_device *ub, pid_t ublksrv_pid)
{
rcu_read_lock();
ublksrv_pid = pid_nr(find_vpid(ublksrv_pid));
rcu_read_unlock();
return ub->ublksrv_tgid == ublksrv_pid;
}
static int ublk_ctrl_start_dev(struct ublk_device *ub,
const struct ublksrv_ctrl_cmd *header)
{
@ -2953,7 +2962,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub,
if (wait_for_completion_interruptible(&ub->completion) != 0)
return -EINTR;
if (ub->ublksrv_tgid != ublksrv_pid)
if (!ublk_validate_user_pid(ub, ublksrv_pid))
return -EINVAL;
mutex_lock(&ub->mutex);
@ -2972,7 +2981,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub,
disk->fops = &ub_fops;
disk->private_data = ub;
ub->dev_info.ublksrv_pid = ublksrv_pid;
ub->dev_info.ublksrv_pid = ub->ublksrv_tgid;
ub->ub_disk = disk;
ublk_apply_params(ub);
@ -3320,12 +3329,32 @@ static int ublk_ctrl_stop_dev(struct ublk_device *ub)
static int ublk_ctrl_get_dev_info(struct ublk_device *ub,
const struct ublksrv_ctrl_cmd *header)
{
struct task_struct *p;
struct pid *pid;
struct ublksrv_ctrl_dev_info dev_info;
pid_t init_ublksrv_tgid = ub->dev_info.ublksrv_pid;
void __user *argp = (void __user *)(unsigned long)header->addr;
if (header->len < sizeof(struct ublksrv_ctrl_dev_info) || !header->addr)
return -EINVAL;
if (copy_to_user(argp, &ub->dev_info, sizeof(ub->dev_info)))
memcpy(&dev_info, &ub->dev_info, sizeof(dev_info));
dev_info.ublksrv_pid = -1;
if (init_ublksrv_tgid > 0) {
rcu_read_lock();
pid = find_pid_ns(init_ublksrv_tgid, &init_pid_ns);
p = pid_task(pid, PIDTYPE_TGID);
if (p) {
int vnr = task_tgid_vnr(p);
if (vnr)
dev_info.ublksrv_pid = vnr;
}
rcu_read_unlock();
}
if (copy_to_user(argp, &dev_info, sizeof(dev_info)))
return -EFAULT;
return 0;
@ -3470,7 +3499,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__,
header->dev_id);
if (ub->ublksrv_tgid != ublksrv_pid)
if (!ublk_validate_user_pid(ub, ublksrv_pid))
return -EINVAL;
mutex_lock(&ub->mutex);
@ -3481,7 +3510,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
ret = -EBUSY;
goto out_unlock;
}
ub->dev_info.ublksrv_pid = ublksrv_pid;
ub->dev_info.ublksrv_pid = ub->ublksrv_tgid;
ub->dev_info.state = UBLK_S_DEV_LIVE;
pr_devel("%s: new ublksrv_pid %d, dev id %d\n",
__func__, ublksrv_pid, header->dev_id);

View file

@ -50,8 +50,9 @@ static int riscv_clock_next_event(unsigned long delta,
if (static_branch_likely(&riscv_sstc_available)) {
#if defined(CONFIG_32BIT)
csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
csr_write(CSR_STIMECMP, ULONG_MAX);
csr_write(CSR_STIMECMPH, next_tval >> 32);
csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
#else
csr_write(CSR_STIMECMP, next_tval);
#endif

View file

@ -1155,7 +1155,7 @@ static int do_chaninfo_ioctl(struct comedi_device *dev,
for (i = 0; i < s->n_chan; i++) {
int x;
x = (dev->minor << 28) | (it->subdev << 24) | (i << 16) |
x = (it->subdev << 24) | (i << 16) |
(s->range_table_list[i]->length);
if (put_user(x, it->rangelist + i))
return -EFAULT;

View file

@ -330,6 +330,7 @@ static int dmm32at_ai_cmdtest(struct comedi_device *dev,
static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
{
unsigned long irq_flags;
unsigned char lo1, lo2, hi2;
unsigned short both2;
@ -342,6 +343,9 @@ static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
/* set counter clocks to 10MHz, disable all aux dio */
outb(0, dev->iobase + DMM32AT_CTRDIO_CFG_REG);
/* serialize access to control register and paged registers */
spin_lock_irqsave(&dev->spinlock, irq_flags);
/* get access to the clock regs */
outb(DMM32AT_CTRL_PAGE_8254, dev->iobase + DMM32AT_CTRL_REG);
@ -354,6 +358,8 @@ static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
outb(lo2, dev->iobase + DMM32AT_CLK2);
outb(hi2, dev->iobase + DMM32AT_CLK2);
spin_unlock_irqrestore(&dev->spinlock, irq_flags);
/* enable the ai conversion interrupt and the clock to start scans */
outb(DMM32AT_INTCLK_ADINT |
DMM32AT_INTCLK_CLKEN | DMM32AT_INTCLK_CLKSEL,
@ -363,13 +369,19 @@ static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
static int dmm32at_ai_cmd(struct comedi_device *dev, struct comedi_subdevice *s)
{
struct comedi_cmd *cmd = &s->async->cmd;
unsigned long irq_flags;
int ret;
dmm32at_ai_set_chanspec(dev, s, cmd->chanlist[0], cmd->chanlist_len);
/* serialize access to control register and paged registers */
spin_lock_irqsave(&dev->spinlock, irq_flags);
/* reset the interrupt just in case */
outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG);
spin_unlock_irqrestore(&dev->spinlock, irq_flags);
/*
* wait for circuit to settle
* we don't have the 'insn' here but it's not needed
@ -429,8 +441,13 @@ static irqreturn_t dmm32at_isr(int irq, void *d)
comedi_handle_events(dev, s);
}
/* serialize access to control register and paged registers */
spin_lock(&dev->spinlock);
/* reset the interrupt */
outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG);
spin_unlock(&dev->spinlock);
return IRQ_HANDLED;
}
@ -481,14 +498,25 @@ static int dmm32at_ao_insn_write(struct comedi_device *dev,
static int dmm32at_8255_io(struct comedi_device *dev,
int dir, int port, int data, unsigned long regbase)
{
unsigned long irq_flags;
int ret;
/* serialize access to control register and paged registers */
spin_lock_irqsave(&dev->spinlock, irq_flags);
/* get access to the DIO regs */
outb(DMM32AT_CTRL_PAGE_8255, dev->iobase + DMM32AT_CTRL_REG);
if (dir) {
outb(data, dev->iobase + regbase + port);
return 0;
ret = 0;
} else {
ret = inb(dev->iobase + regbase + port);
}
return inb(dev->iobase + regbase + port);
spin_unlock_irqrestore(&dev->spinlock, irq_flags);
return ret;
}
/* Make sure the board is there and put it to a known state */

View file

@ -52,7 +52,7 @@ int do_rangeinfo_ioctl(struct comedi_device *dev,
const struct comedi_lrange *lr;
struct comedi_subdevice *s;
subd = (it->range_type >> 24) & 0xf;
subd = (it->range_type >> 24) & 0xff;
chan = (it->range_type >> 16) & 0xff;
if (!dev->attached)
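The mask change above matters because `range_type` packs the subdevice index into bits 24-31; masking with `0xf` silently dropped the top four bits for subdevice numbers of 16 and above. A self-contained pack/unpack sketch of that layout (field widths taken from the hunk, helper names invented):

```c
#include <assert.h>
#include <stdint.h>

/* range_type layout: [31:24] subdev, [23:16] chan, [15:0] length */
static uint32_t pack_range_type(uint32_t subdev, uint32_t chan,
				uint32_t length)
{
	return (subdev << 24) | (chan << 16) | (length & 0xFFFF);
}

static uint32_t unpack_subdev(uint32_t range_type)
{
	/* was "& 0xf", which truncated subdevice numbers >= 16 */
	return (range_type >> 24) & 0xff;
}
```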

View file

@ -83,10 +83,8 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin,
if (ref->pin != pin)
continue;
reg = dpll_pin_registration_find(ref, ops, priv, cookie);
if (reg) {
refcount_inc(&ref->refcount);
return 0;
}
if (reg)
return -EEXIST;
ref_exists = true;
break;
}
@ -164,10 +162,8 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll,
if (ref->dpll != dpll)
continue;
reg = dpll_pin_registration_find(ref, ops, priv, cookie);
if (reg) {
refcount_inc(&ref->refcount);
return 0;
}
if (reg)
return -EEXIST;
ref_exists = true;
break;
}
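Both dpll hunks above change duplicate registration from "bump the refcount and succeed" to a hard `-EEXIST` failure, surfacing double-registration bugs to the caller. A minimal sketch of that guard with a flat array in place of the kernel's xarray (all names hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define MAX_REGS 8

struct registration {
	const void *ops;
	const void *priv;
};

static struct registration regs[MAX_REGS];
static int nregs;

/* Reject an identical (ops, priv) pair instead of refcounting it. */
static int pin_register(const void *ops, const void *priv)
{
	for (int i = 0; i < nregs; i++)
		if (regs[i].ops == ops && regs[i].priv == priv)
			return -EEXIST;
	if (nregs == MAX_REGS)
		return -ENOSPC;
	regs[nregs].ops = ops;
	regs[nregs].priv = priv;
	nregs++;
	return 0;
}
```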

View file

@ -74,10 +74,10 @@ struct mm_struct efi_mm = {
.page_table_lock = __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
.mmlist = LIST_HEAD_INIT(efi_mm.mmlist),
.user_ns = &init_user_ns,
.cpu_bitmap = { [BITS_TO_LONGS(NR_CPUS)] = 0},
#ifdef CONFIG_SCHED_MM_CID
.mm_cid.lock = __RAW_SPIN_LOCK_UNLOCKED(efi_mm.mm_cid.lock),
#endif
.flexible_array = MM_STRUCT_FLEXIBLE_ARRAY_INIT,
};
struct workqueue_struct *efi_rts_wq;

View file

@ -2549,6 +2549,7 @@ static int lineinfo_changed_notify(struct notifier_block *nb,
ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
if (!ctx) {
pr_err("Failed to allocate memory for line info notification\n");
fput(fp);
return NOTIFY_DONE;
}
@ -2696,7 +2697,7 @@ static int gpio_chrdev_open(struct inode *inode, struct file *file)
cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
if (!cdev)
return -ENODEV;
return -ENOMEM;
cdev->watched_lines = bitmap_zalloc(gdev->ngpio, GFP_KERNEL);
if (!cdev->watched_lines)
@ -2796,13 +2797,18 @@ int gpiolib_cdev_register(struct gpio_device *gdev, dev_t devt)
return -ENOMEM;
ret = cdev_device_add(&gdev->chrdev, &gdev->dev);
if (ret)
if (ret) {
destroy_workqueue(gdev->line_state_wq);
return ret;
}
guard(srcu)(&gdev->srcu);
gc = srcu_dereference(gdev->chip, &gdev->srcu);
if (!gc)
if (!gc) {
cdev_device_del(&gdev->chrdev, &gdev->dev);
destroy_workqueue(gdev->line_state_wq);
return -ENODEV;
}
gpiochip_dbg(gc, "added GPIO chardev (%d:%d)\n", MAJOR(devt), gdev->id);
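The gpiolib hunk above plugs two leaks by unwinding in reverse order: every failure after `create_workqueue()` must now destroy the workqueue, and a failure after `cdev_device_add()` must also delete the chardev. A compact model of that unwind pattern with fake setup/teardown steps (all names invented for illustration):

```c
#include <assert.h>

static int wq_up, cdev_up;

static int fake_wq_create(void)	{ wq_up = 1; return 0; }
static void fake_wq_destroy(void) { wq_up = 0; }
static int fake_cdev_add(int fail)
{
	if (fail)
		return -1;
	cdev_up = 1;
	return 0;
}

/* Each failure path tears down exactly what earlier steps set up. */
static int register_chardev(int fail_cdev)
{
	int ret;

	ret = fake_wq_create();
	if (ret)
		return ret;

	ret = fake_cdev_add(fail_cdev);
	if (ret) {
		fake_wq_destroy();	/* undo step 1 */
		return ret;
	}
	return 0;
}
```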

View file

@ -515,7 +515,7 @@ int gpio_device_setup_shared(struct gpio_device *gdev)
{
struct gpio_shared_entry *entry;
struct gpio_shared_ref *ref;
unsigned long *flags;
struct gpio_desc *desc;
int ret;
list_for_each_entry(entry, &gpio_shared_list, list) {
@ -543,15 +543,17 @@ int gpio_device_setup_shared(struct gpio_device *gdev)
if (list_count_nodes(&entry->refs) <= 1)
continue;
flags = &gdev->descs[entry->offset].flags;
desc = &gdev->descs[entry->offset];
__set_bit(GPIOD_FLAG_SHARED, flags);
__set_bit(GPIOD_FLAG_SHARED, &desc->flags);
/*
* Shared GPIOs are not requested via the normal path. Make
* them inaccessible to anyone even before we register the
* chip.
*/
__set_bit(GPIOD_FLAG_REQUESTED, flags);
ret = gpiod_request_commit(desc, "shared");
if (ret)
return ret;
pr_debug("GPIO %u owned by %s is shared by multiple consumers\n",
entry->offset, gpio_device_get_label(gdev));
@ -562,8 +564,10 @@ int gpio_device_setup_shared(struct gpio_device *gdev)
ref->con_id ?: "(none)");
ret = gpio_shared_make_adev(gdev, entry, ref);
if (ret)
if (ret) {
gpiod_free_commit(desc);
return ret;
}
}
}
@ -579,6 +583,8 @@ void gpio_device_teardown_shared(struct gpio_device *gdev)
if (!device_match_fwnode(&gdev->dev, entry->fwnode))
continue;
gpiod_free_commit(&gdev->descs[entry->offset]);
list_for_each_entry(ref, &entry->refs, list) {
guard(mutex)(&ref->lock);

View file

@ -2453,7 +2453,7 @@ EXPORT_SYMBOL_GPL(gpiochip_remove_pin_ranges);
* on each other, and help provide better diagnostics in debugfs.
* They're called even less than the "set direction" calls.
*/
static int gpiod_request_commit(struct gpio_desc *desc, const char *label)
int gpiod_request_commit(struct gpio_desc *desc, const char *label)
{
unsigned int offset;
int ret;
@ -2515,7 +2515,7 @@ int gpiod_request(struct gpio_desc *desc, const char *label)
return ret;
}
static void gpiod_free_commit(struct gpio_desc *desc)
void gpiod_free_commit(struct gpio_desc *desc)
{
unsigned long flags;

View file

@ -244,7 +244,9 @@ DEFINE_CLASS(gpio_chip_guard,
struct gpio_desc *desc)
int gpiod_request(struct gpio_desc *desc, const char *label);
int gpiod_request_commit(struct gpio_desc *desc, const char *label);
void gpiod_free(struct gpio_desc *desc);
void gpiod_free_commit(struct gpio_desc *desc);
static inline int gpiod_request_user(struct gpio_desc *desc, const char *label)
{

View file

@ -210,7 +210,7 @@ config DRM_GPUVM
config DRM_GPUSVM
tristate
depends on DRM && DEVICE_PRIVATE
depends on DRM
select HMM_MIRROR
select MMU_NOTIFIER
help

View file

@ -108,8 +108,10 @@ obj-$(CONFIG_DRM_EXEC) += drm_exec.o
obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o
drm_gpusvm_helper-y := \
drm_gpusvm.o\
drm_gpusvm.o
drm_gpusvm_helper-$(CONFIG_ZONE_DEVICE) += \
drm_pagemap.o
obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o

View file

@ -763,7 +763,7 @@ void amdgpu_fence_save_wptr(struct amdgpu_fence *af)
}
static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring,
u64 start_wptr, u32 end_wptr)
u64 start_wptr, u64 end_wptr)
{
unsigned int first_idx = start_wptr & ring->buf_mask;
unsigned int last_idx = end_wptr & ring->buf_mask;

View file

@ -733,8 +733,10 @@ int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready) {
if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid)
return 0;
if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid) {
r = 0;
goto error_unlock_reset;
}
if (adev->gmc.flush_tlb_needs_extra_type_2)
adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,

View file

@ -302,7 +302,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
if (job && job->vmid)
amdgpu_vmid_reset(adev, ring->vm_hub, job->vmid);
amdgpu_ring_undo(ring);
return r;
goto free_fence;
}
*f = &af->base;
/* get a ref for the job */

View file

@ -217,8 +217,11 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
if (!entity)
return 0;
return drm_sched_job_init(&(*job)->base, entity, 1, owner,
drm_client_id);
r = drm_sched_job_init(&(*job)->base, entity, 1, owner, drm_client_id);
if (!r)
return 0;
kfree((*job)->hw_vm_fence);
err_fence:
kfree((*job)->hw_fence);

View file

@ -278,7 +278,6 @@ static void gfx_v12_0_select_se_sh(struct amdgpu_device *adev, u32 se_num,
u32 sh_num, u32 instance, int xcc_id);
static u32 gfx_v12_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev);
static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure);
static void gfx_v12_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
uint32_t val);
static int gfx_v12_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev);
@ -4634,16 +4633,6 @@ static int gfx_v12_0_ring_preempt_ib(struct amdgpu_ring *ring)
return r;
}
static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring,
bool start,
bool secure)
{
uint32_t v = secure ? FRAME_TMZ : 0;
amdgpu_ring_write(ring, PACKET3(PACKET3_FRAME_CONTROL, 0));
amdgpu_ring_write(ring, v | FRAME_CMD(start ? 0 : 1));
}
static void gfx_v12_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg,
uint32_t reg_val_offs)
{
@ -5520,7 +5509,6 @@ static const struct amdgpu_ring_funcs gfx_v12_0_ring_funcs_gfx = {
.emit_cntxcntl = gfx_v12_0_ring_emit_cntxcntl,
.init_cond_exec = gfx_v12_0_ring_emit_init_cond_exec,
.preempt_ib = gfx_v12_0_ring_preempt_ib,
.emit_frame_cntl = gfx_v12_0_ring_emit_frame_cntl,
.emit_wreg = gfx_v12_0_ring_emit_wreg,
.emit_reg_wait = gfx_v12_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,

View file

@ -120,8 +120,7 @@ static inline bool kfd_dbg_has_gws_support(struct kfd_node *dev)
&& dev->kfd->mec2_fw_version < 0x1b6) ||
(KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1)
&& dev->kfd->mec2_fw_version < 0x30) ||
(KFD_GC_VERSION(dev) >= IP_VERSION(11, 0, 0) &&
KFD_GC_VERSION(dev) < IP_VERSION(12, 0, 0)))
kfd_dbg_has_cwsr_workaround(dev))
return false;
/* Assume debugging and cooperative launch supported otherwise. */

View file

@ -79,7 +79,6 @@ int amdgpu_dm_initialize_default_pipeline(struct drm_plane *plane, struct drm_pr
goto cleanup;
list->type = ops[i]->base.id;
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[i]->base.id);
i++;
@ -197,6 +196,9 @@ int amdgpu_dm_initialize_default_pipeline(struct drm_plane *plane, struct drm_pr
goto cleanup;
drm_colorop_set_next_property(ops[i-1], ops[i]);
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[0]->base.id);
return 0;
cleanup:

View file

@ -248,8 +248,6 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
struct vblank_control_work *vblank_work =
container_of(work, struct vblank_control_work, work);
struct amdgpu_display_manager *dm = vblank_work->dm;
struct amdgpu_device *adev = drm_to_adev(dm->ddev);
int r;
mutex_lock(&dm->dc_lock);
@ -279,16 +277,7 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
if (dm->active_vblank_irq_count == 0) {
dc_post_update_surfaces_to_stream(dm->dc);
r = amdgpu_dpm_pause_power_profile(adev, true);
if (r)
dev_warn(adev->dev, "failed to set default power profile mode\n");
dc_allow_idle_optimizations(dm->dc, true);
r = amdgpu_dpm_pause_power_profile(adev, false);
if (r)
dev_warn(adev->dev, "failed to restore the power profile mode\n");
}
mutex_unlock(&dm->dc_lock);

View file

@ -915,13 +915,19 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
struct amdgpu_dm_connector *amdgpu_dm_connector;
const struct dc_link *dc_link;
use_polling |= connector->polled != DRM_CONNECTOR_POLL_HPD;
if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
continue;
amdgpu_dm_connector = to_amdgpu_dm_connector(connector);
/*
* Analog connectors may be hot-plugged unlike other connector
* types that don't support HPD. Only poll analog connectors.
*/
use_polling |=
amdgpu_dm_connector->dc_link &&
dc_connector_supports_analog(amdgpu_dm_connector->dc_link->link_id.id);
dc_link = amdgpu_dm_connector->dc_link;
/*

View file

@ -1790,12 +1790,13 @@ dm_atomic_plane_get_property(struct drm_plane *plane,
static int
dm_plane_init_colorops(struct drm_plane *plane)
{
struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES];
struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {};
struct drm_device *dev = plane->dev;
struct amdgpu_device *adev = drm_to_adev(dev);
struct dc *dc = adev->dm.dc;
int len = 0;
int ret;
int ret = 0;
int i;
if (plane->type == DRM_PLANE_TYPE_CURSOR)
return 0;
@ -1806,7 +1807,7 @@ dm_plane_init_colorops(struct drm_plane *plane)
if (ret) {
drm_err(plane->dev, "Failed to create color pipeline for plane %d: %d\n",
plane->base.id, ret);
return ret;
goto out;
}
len++;
@ -1814,7 +1815,11 @@ dm_plane_init_colorops(struct drm_plane *plane)
drm_plane_create_color_pipeline_property(plane, pipelines, len);
}
return 0;
out:
for (i = 0; i < len; i++)
kfree(pipelines[i].name);
return ret;
}
#endif

View file

@ -2273,8 +2273,6 @@ static int si_populate_smc_tdp_limits(struct amdgpu_device *adev,
if (scaling_factor == 0)
return -EINVAL;
memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE));
ret = si_calculate_adjusted_tdp_limits(adev,
false, /* ??? */
adev->pm.dpm.tdp_adjustment,
@ -2283,6 +2281,12 @@ static int si_populate_smc_tdp_limits(struct amdgpu_device *adev,
if (ret)
return ret;
if (adev->pdev->device == 0x6611 && adev->pdev->revision == 0x87) {
/* Workaround buggy powertune on Radeon 430 and 520. */
tdp_limit = 32;
near_tdp_limit = 28;
}
smc_table->dpm2Params.TDPLimit =
cpu_to_be32(si_scale_power_for_smc(tdp_limit, scaling_factor) * 1000);
smc_table->dpm2Params.NearTDPLimit =
@ -2328,16 +2332,8 @@ static int si_populate_smc_tdp_limits_2(struct amdgpu_device *adev,
if (ni_pi->enable_power_containment) {
SISLANDS_SMC_STATETABLE *smc_table = &si_pi->smc_statetable;
u32 scaling_factor = si_get_smc_power_scaling_factor(adev);
int ret;
memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE));
smc_table->dpm2Params.NearTDPLimit =
cpu_to_be32(si_scale_power_for_smc(adev->pm.dpm.near_tdp_limit_adjusted, scaling_factor) * 1000);
smc_table->dpm2Params.SafePowerLimit =
cpu_to_be32(si_scale_power_for_smc((adev->pm.dpm.near_tdp_limit_adjusted * SISLANDS_DPM2_TDP_SAFE_LIMIT_PERCENT) / 100, scaling_factor) * 1000);
ret = amdgpu_si_copy_bytes_to_smc(adev,
(si_pi->state_table_start +
offsetof(SISLANDS_SMC_STATETABLE, dpm2Params) +
@ -3473,10 +3469,15 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
(adev->pdev->revision == 0x80) ||
(adev->pdev->revision == 0x81) ||
(adev->pdev->revision == 0x83) ||
(adev->pdev->revision == 0x87) ||
(adev->pdev->revision == 0x87 &&
adev->pdev->device != 0x6611) ||
(adev->pdev->device == 0x6604) ||
(adev->pdev->device == 0x6605)) {
max_sclk = 75000;
} else if (adev->pdev->revision == 0x87 &&
adev->pdev->device == 0x6611) {
/* Radeon 430 and 520 */
max_sclk = 78000;
}
}
@ -7600,12 +7601,12 @@ static int si_dpm_set_interrupt_state(struct amdgpu_device *adev,
case AMDGPU_IRQ_STATE_DISABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
case AMDGPU_IRQ_STATE_ENABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
default:
break;
@ -7617,12 +7618,12 @@ static int si_dpm_set_interrupt_state(struct amdgpu_device *adev,
case AMDGPU_IRQ_STATE_DISABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
case AMDGPU_IRQ_STATE_ENABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
default:
break;

View file

@ -2062,33 +2062,41 @@ struct dw_dp *dw_dp_bind(struct device *dev, struct drm_encoder *encoder,
}
ret = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
if (ret)
if (ret) {
dev_err_probe(dev, ret, "Failed to attach bridge\n");
goto unregister_aux;
}
dw_dp_init_hw(dp);
ret = phy_init(dp->phy);
if (ret) {
dev_err_probe(dev, ret, "phy init failed\n");
return ERR_PTR(ret);
goto unregister_aux;
}
ret = devm_add_action_or_reset(dev, dw_dp_phy_exit, dp);
if (ret)
return ERR_PTR(ret);
goto unregister_aux;
dp->irq = platform_get_irq(pdev, 0);
if (dp->irq < 0)
return ERR_PTR(ret);
if (dp->irq < 0) {
ret = dp->irq;
goto unregister_aux;
}
ret = devm_request_threaded_irq(dev, dp->irq, NULL, dw_dp_irq,
IRQF_ONESHOT, dev_name(dev), dp);
if (ret) {
dev_err_probe(dev, ret, "failed to request irq\n");
return ERR_PTR(ret);
goto unregister_aux;
}
return dp;
unregister_aux:
drm_dp_aux_unregister(&dp->aux);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(dw_dp_bind);

Some files were not shown because too many files have changed in this diff.