ASoC: Fixes for v7.0 merge window

A reasonably small set of fixes and quirks that came in during the merge
 window; there's one more pending that I'll send tomorrow if you didn't
 send a PR already.
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmmWTggACgkQJNaLcl1U
 h9CxjQgAgHYHlQ0DKFUH6JwF2So4F9iLZhAHrLZtLSrFPui2lNln5JVauSPdsemU
 S9G2R8caez5wRHdqEWtNTz7Gjv5v/3KjGG4M8EdnAGncvJDtHb9JFWh8RKY28wik
 HZ23fRfXuZhmcyhczepC094Ix70jzfcpq03YYswSQEXx4lLj+olmRwarCVT8hE/i
 kDF7q6gaFnmarHvKFwme4u1GxQkESm2+YNtYNRrcycdSTO7OTlOY4RK3BBfP3Kba
 PdYoHsAQgIUGwIkI3bVlYUAP/Vg6jUeC2Xv7GMQ8MbeYVU4D44uf+kwrq0Yq7pK4
 5a2DjPQIDajX46agS8er1Anm7xel9g==
 =yCRC
 -----END PGP SIGNATURE-----

Merge tag 'asoc-fix-v7.0-merge-window' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus

ASoC: Fixes for v7.0 merge window

A reasonably small set of fixes and quirks that came in during the merge
window; there's one more pending that I'll send tomorrow if you didn't
send a PR already.
Takashi Iwai 2026-02-19 12:08:48 +01:00
commit d08008f196
227 changed files with 2030 additions and 1018 deletions

@ -34,6 +34,7 @@ Alexander Lobakin <alobakin@pm.me> <alobakin@marvell.com>
Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <alexander.mikhalitsyn@virtuozzo.com>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@canonical.com>
Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@futurfusion.io>
Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin.ext@nsn.com>
Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@gmx.de>
Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@nokia.com>
@ -786,7 +787,8 @@ Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com> <subashab@codeaurora.
Subbaraman Narayanamurthy <quic_subbaram@quicinc.com> <subbaram@codeaurora.org>
Subhash Jadavani <subhashj@codeaurora.org>
Sudarshan Rajagopalan <quic_sudaraja@quicinc.com> <sudaraja@codeaurora.org>
Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep Holla <sudeep.holla@kernel.org> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Sudeep Holla <sudeep.holla@kernel.org> <sudeep.holla@arm.com>
Sumit Garg <sumit.garg@kernel.org> <sumit.garg@linaro.org>
Sumit Semwal <sumit.semwal@ti.com>
Surabhi Vishnoi <quic_svishnoi@quicinc.com> <svishnoi@codeaurora.org>

@ -7,13 +7,3 @@ Description:
signals when the PCI layer is able to support establishment of
link encryption and other device-security features coordinated
through a platform tsm.
What: /sys/class/tsm/tsmN/streamH.R.E
Contact: linux-pci@vger.kernel.org
Description:
(RO) When a host bridge has established a secure connection via
the platform TSM, symlink appears. The primary function of this
is have a system global review of TSM resource consumption
across host bridges. The link points to the endpoint PCI device
and matches the same link published by the host bridge. See
Documentation/ABI/testing/sysfs-devices-pci-host-bridge.

@ -3472,6 +3472,11 @@ Kernel parameters
If there are multiple matching configurations changing
the same attribute, the last one is used.
liveupdate= [KNL,EARLY]
Format: <bool>
Enable Live Update Orchestrator (LUO).
Default: off.
load_ramdisk= [RAM] [Deprecated]
lockd.nlm_grace_period=P [NFS] Assign grace period.

@ -21,10 +21,10 @@ properties:
reg:
maxItems: 1
avdd-supply:
AVDD-supply:
description: Analog power supply
dvdd-supply:
DVDD-supply:
description: Digital power supply
reset-gpios:
@ -60,7 +60,7 @@ allOf:
properties:
dsd-path: false
additionalProperties: false
unevaluatedProperties: false
examples:
- |

@ -19,10 +19,10 @@ properties:
reg:
maxItems: 1
avdd-supply:
AVDD-supply:
description: A 1.8V supply that powers up the AVDD pin.
dvdd-supply:
DVDD-supply:
description: A 1.2V supply that powers up the DVDD pin.
reset-gpios:
@ -32,7 +32,10 @@ required:
- compatible
- reg
additionalProperties: false
allOf:
- $ref: dai-common.yaml#
unevaluatedProperties: false
examples:
- |

@ -335,7 +335,7 @@ F: tools/power/acpi/
ACPI FOR ARM64 (ACPI/arm64)
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
M: Hanjun Guo <guohanjun@huawei.com>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-acpi@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
@ -351,7 +351,7 @@ F: drivers/acpi/riscv/
F: include/linux/acpi_rimt.h
ACPI PCC(Platform Communication Channel) MAILBOX DRIVER
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-acpi@vger.kernel.org
S: Supported
F: drivers/mailbox/pcc.c
@ -2754,14 +2754,14 @@ F: arch/arm/include/asm/hardware/dec21285.h
F: arch/arm/mach-footbridge/
ARM/FREESCALE IMX / MXC ARM ARCHITECTURE
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
M: Sascha Hauer <s.hauer@pengutronix.de>
R: Pengutronix Kernel Team <kernel@pengutronix.de>
R: Fabio Estevam <festevam@gmail.com>
L: imx@lists.linux.dev
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
F: Documentation/devicetree/bindings/firmware/fsl*
F: Documentation/devicetree/bindings/firmware/nxp*
F: arch/arm/boot/dts/nxp/imx/
@ -2776,22 +2776,22 @@ N: mxs
N: \bmxc[^\d]
ARM/FREESCALE LAYERSCAPE ARM ARCHITECTURE
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
F: arch/arm/boot/dts/nxp/ls/
F: arch/arm64/boot/dts/freescale/fsl-*
F: arch/arm64/boot/dts/freescale/qoriq-*
ARM/FREESCALE VYBRID ARM ARCHITECTURE
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
M: Sascha Hauer <s.hauer@pengutronix.de>
R: Pengutronix Kernel Team <kernel@pengutronix.de>
R: Stefan Agner <stefan@agner.ch>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/frank.li/linux.git
F: arch/arm/boot/dts/nxp/vf/
F: arch/arm/mach-imx/*vf610*
@ -3688,7 +3688,7 @@ N: uniphier
ARM/VERSATILE EXPRESS PLATFORM
M: Liviu Dudau <liviu.dudau@arm.com>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
@ -6520,7 +6520,7 @@ F: drivers/i2c/busses/i2c-cp2615.c
CPU FREQUENCY DRIVERS - VEXPRESS SPC ARM BIG LITTLE
M: Viresh Kumar <viresh.kumar@linaro.org>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-pm@vger.kernel.org
S: Maintained
W: http://www.arm.com/products/processors/technologies/biglittleprocessing.php
@ -6616,7 +6616,7 @@ F: include/linux/platform_data/cpuidle-exynos.h
CPUIDLE DRIVER - ARM PSCI
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
M: Ulf Hansson <ulf.hansson@linaro.org>
L: linux-pm@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -9822,7 +9822,7 @@ F: include/uapi/linux/firewire*.h
F: tools/firewire/
FIRMWARE FRAMEWORK FOR ARMV8-A
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/firmware/arm_ffa/
@ -10520,7 +10520,7 @@ S: Maintained
F: scripts/gendwarfksyms/
GENERIC ARCHITECTURE TOPOLOGY
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: drivers/base/arch_topology.c
@ -11382,6 +11382,11 @@ F: Documentation/ABI/testing/sysfs-devices-platform-kunpeng_hccs
F: drivers/soc/hisilicon/kunpeng_hccs.c
F: drivers/soc/hisilicon/kunpeng_hccs.h
HISILICON SOC HHA DRIVER
M: Yushan Wang <wangyushan12@huawei.com>
S: Maintained
F: drivers/cache/hisi_soc_hha.c
HISILICON LPC BUS DRIVER
M: Jay Fang <f.fangjian@huawei.com>
S: Maintained
@ -15107,7 +15112,7 @@ F: drivers/mailbox/arm_mhuv2.c
F: include/linux/mailbox/arm_mhuv2_message.h
MAILBOX ARM MHUv3
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
M: Cristian Marussi <cristian.marussi@arm.com>
L: linux-kernel@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -20584,7 +20589,7 @@ F: drivers/pinctrl/pinctrl-amd.c
PIN CONTROLLER - FREESCALE
M: Dong Aisheng <aisheng.dong@nxp.com>
M: Fabio Estevam <festevam@gmail.com>
M: Shawn Guo <shawnguo@kernel.org>
M: Frank Li <Frank.Li@nxp.com>
M: Jacky Bai <ping.bai@nxp.com>
R: Pengutronix Kernel Team <kernel@pengutronix.de>
R: NXP S32 Linux Team <s32@nxp.com>
@ -20980,6 +20985,18 @@ F: Documentation/devicetree/bindings/net/pse-pd/
F: drivers/net/pse-pd/
F: net/ethtool/pse-pd.c
PSP SECURITY PROTOCOL
M: Daniel Zahka <daniel.zahka@gmail.com>
M: Jakub Kicinski <kuba@kernel.org>
M: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
F: Documentation/netlink/specs/psp.yaml
F: Documentation/networking/psp.rst
F: include/net/psp/
F: include/net/psp.h
F: include/uapi/linux/psp.h
F: net/psp/
K: struct\ psp(_assoc|_dev|hdr)\b
PSTORE FILESYSTEM
M: Kees Cook <kees@kernel.org>
R: Tony Luck <tony.luck@intel.com>
@ -23637,7 +23654,7 @@ F: include/uapi/linux/sed*
SECURE MONITOR CALL(SMC) CALLING CONVENTION (SMCCC)
M: Mark Rutland <mark.rutland@arm.com>
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/firmware/smccc/
@ -25401,7 +25418,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd.git
F: drivers/mfd/syscon.c
SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
R: Cristian Marussi <cristian.marussi@arm.com>
L: arm-scmi@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -26553,7 +26570,7 @@ F: samples/tsm-mr/
TRUSTED SERVICES TEE DRIVER
M: Balint Dobszay <balint.dobszay@arm.com>
M: Sudeep Holla <sudeep.holla@arm.com>
M: Sudeep Holla <sudeep.holla@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: trusted-services@lists.trustedfirmware.org
S: Maintained

@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc8
EXTRAVERSION =
NAME = Baby Opossum Posse
# *DOCUMENTATION*

@ -42,7 +42,10 @@ static inline void *memset32(uint32_t *p, uint32_t v, __kernel_size_t n)
extern void *__memset64(uint64_t *, uint32_t low, __kernel_size_t, uint32_t hi);
static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
{
return __memset64(p, v, n * 8, v >> 32);
if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
return __memset64(p, v, n * 8, v >> 32);
else
return __memset64(p, v >> 32, n * 8, v);
}
/*

@ -42,7 +42,7 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
unsigned int level;
pte_t *pte = lookup_address(addr, &level);
pteval_t val;
pteval_t val, new;
if (WARN_ON(!pte || level != PG_LEVEL_4K))
return false;
@ -57,11 +57,12 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
return true;
/*
* Otherwise, invert the entire PTE. This avoids writing out an
* Otherwise, flip the Present bit, taking care to avoid writing an
* L1TF-vulnerable PTE (not present, without the high address bits
* set).
*/
set_pte(pte, __pte(~val));
new = val ^ _PAGE_PRESENT;
set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
/*
* If the page was protected (non-present) and we're making it

@ -140,7 +140,7 @@ unsigned long vmware_hypercall3(unsigned long cmd, unsigned long in1,
"b" (in1),
"c" (cmd),
"d" (0)
: "cc", "memory");
: "di", "si", "cc", "memory");
return out0;
}
@ -165,7 +165,7 @@ unsigned long vmware_hypercall4(unsigned long cmd, unsigned long in1,
"b" (in1),
"c" (cmd),
"d" (0)
: "cc", "memory");
: "di", "si", "cc", "memory");
return out0;
}

@ -514,7 +514,8 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
*/
spin_lock_irq(&kvm->irqfds.lock);
if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI) {
if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI ||
WARN_ON_ONCE(irqfd->irq_bypass_vcpu)) {
ret = kvm_pi_update_irte(irqfd, NULL);
if (ret)
pr_info("irq bypass consumer (eventfd %p) unregistration fails: %d\n",

@ -376,6 +376,7 @@ void avic_init_vmcb(struct vcpu_svm *svm, struct vmcb *vmcb)
static int avic_init_backing_page(struct kvm_vcpu *vcpu)
{
u32 max_id = x2avic_enabled ? x2avic_max_physical_id : AVIC_MAX_PHYSICAL_ID;
struct kvm_svm *kvm_svm = to_kvm_svm(vcpu->kvm);
struct vcpu_svm *svm = to_svm(vcpu);
u32 id = vcpu->vcpu_id;
@ -388,8 +389,7 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
* avic_vcpu_load() expects to be called if and only if the vCPU has
* fully initialized AVIC.
*/
if ((!x2avic_enabled && id > AVIC_MAX_PHYSICAL_ID) ||
(id > x2avic_max_physical_id)) {
if (id > max_id) {
kvm_set_apicv_inhibit(vcpu->kvm, APICV_INHIBIT_REASON_PHYSICAL_ID_TOO_BIG);
vcpu->arch.apic->apicv_active = false;
return 0;

@ -5284,6 +5284,8 @@ static __init void svm_set_cpu_caps(void)
*/
kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
kvm_setup_xss_caps();
}
static __init int svm_hardware_setup(void)

@ -8051,6 +8051,8 @@ static __init void vmx_set_cpu_caps(void)
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
}
kvm_setup_xss_caps();
}
static bool vmx_is_io_intercepted(struct kvm_vcpu *vcpu,

@ -9953,6 +9953,23 @@ static struct notifier_block pvclock_gtod_notifier = {
};
#endif
void kvm_setup_xss_caps(void)
{
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
kvm_caps.supported_xss = 0;
if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
!kvm_cpu_cap_has(X86_FEATURE_IBT))
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
}
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_setup_xss_caps);
static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
{
memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
@ -10125,19 +10142,6 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
if (!tdp_enabled)
kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
kvm_caps.supported_xss = 0;
if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
!kvm_cpu_cap_has(X86_FEATURE_IBT))
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
kvm_cpu_cap_clear(X86_FEATURE_IBT);
kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
}
if (kvm_caps.has_tsc_control) {
/*
* Make sure the user can only configure tsc_khz values that

@ -471,6 +471,8 @@ extern struct kvm_host_values kvm_host;
extern bool enable_pmu;
void kvm_setup_xss_caps(void);
/*
* Get a filtered version of KVM's supported XCR0 that strips out dynamic
* features for which the current process doesn't (yet) have permission to use.

@ -2991,6 +2991,10 @@ static void binder_set_txn_from_error(struct binder_transaction *t, int id,
* @t: the binder transaction that failed
* @data_size: the user provided data size for the transaction
* @error: enum binder_driver_return_protocol returned to sender
*
* Note that t->buffer is not safe to access here, as it may have been
* released (or not yet allocated). Callers should guarantee all the
* transaction items used here are safe to access.
*/
static void binder_netlink_report(struct binder_proc *proc,
struct binder_transaction *t,
@ -3780,6 +3784,14 @@ static void binder_transaction(struct binder_proc *proc,
goto err_dead_proc_or_thread;
}
} else {
/*
* Make a transaction copy. It is not safe to access 't' after
* binder_proc_transaction() reported a pending frozen. The
* target could thaw and consume the transaction at any point.
* Instead, use a safe 't_copy' for binder_netlink_report().
*/
struct binder_transaction t_copy = *t;
BUG_ON(target_node == NULL);
BUG_ON(t->buffer->async_transaction != 1);
return_error = binder_proc_transaction(t, target_proc, NULL);
@ -3790,7 +3802,7 @@ static void binder_transaction(struct binder_proc *proc,
*/
if (return_error == BR_TRANSACTION_PENDING_FROZEN) {
tcomplete->type = BINDER_WORK_TRANSACTION_PENDING;
binder_netlink_report(proc, t, tr->data_size,
binder_netlink_report(proc, &t_copy, tr->data_size,
return_error);
}
binder_enqueue_thread_work(thread, tcomplete);
@ -3812,8 +3824,9 @@ static void binder_transaction(struct binder_proc *proc,
return;
err_dead_proc_or_thread:
binder_txn_error("%d:%d dead process or thread\n",
thread->pid, proc->pid);
binder_txn_error("%d:%d %s process or thread\n",
proc->pid, thread->pid,
return_error == BR_FROZEN_REPLY ? "frozen" : "dead");
return_error_line = __LINE__;
binder_dequeue_work(proc, tcomplete);
err_translate_failed:

@ -132,8 +132,8 @@ static int binderfs_binder_device_create(struct inode *ref_inode,
mutex_lock(&binderfs_minors_mutex);
if (++info->device_count <= info->mount_opts.max)
minor = ida_alloc_max(&binderfs_minors,
use_reserve ? BINDERFS_MAX_MINOR :
BINDERFS_MAX_MINOR_CAPPED,
use_reserve ? BINDERFS_MAX_MINOR - 1 :
BINDERFS_MAX_MINOR_CAPPED - 1,
GFP_KERNEL);
else
minor = -ENOSPC;
@ -391,12 +391,6 @@ static int binderfs_binder_ctl_create(struct super_block *sb)
if (!device)
return -ENOMEM;
/* If we have already created a binder-control node, return. */
if (info->control_dentry) {
ret = 0;
goto out;
}
ret = -ENOMEM;
inode = new_inode(sb);
if (!inode)
@ -405,8 +399,8 @@ static int binderfs_binder_ctl_create(struct super_block *sb)
/* Reserve a new minor number for the new device. */
mutex_lock(&binderfs_minors_mutex);
minor = ida_alloc_max(&binderfs_minors,
use_reserve ? BINDERFS_MAX_MINOR :
BINDERFS_MAX_MINOR_CAPPED,
use_reserve ? BINDERFS_MAX_MINOR - 1 :
BINDERFS_MAX_MINOR_CAPPED - 1,
GFP_KERNEL);
mutex_unlock(&binderfs_minors_mutex);
if (minor < 0) {
@ -431,7 +425,8 @@ static int binderfs_binder_ctl_create(struct super_block *sb)
inode->i_private = device;
info->control_dentry = dentry;
d_add(dentry, inode);
d_make_persistent(dentry, inode);
dput(dentry);
return 0;

@ -39,6 +39,10 @@ use core::{
sync::atomic::{AtomicU32, Ordering},
};
fn is_aligned(value: usize, to: usize) -> bool {
value % to == 0
}
/// Stores the layout of the scatter-gather entries. This is used during the `translate_objects`
/// call and is discarded when it returns.
struct ScatterGatherState {
@ -69,17 +73,24 @@ struct ScatterGatherEntry {
}
/// This entry specifies that a fixup should happen at `target_offset` of the
/// buffer. If `skip` is nonzero, then the fixup is a `binder_fd_array_object`
/// and is applied later. Otherwise if `skip` is zero, then the size of the
/// fixup is `sizeof::<u64>()` and `pointer_value` is written to the buffer.
struct PointerFixupEntry {
/// The number of bytes to skip, or zero for a `binder_buffer_object` fixup.
skip: usize,
/// The translated pointer to write when `skip` is zero.
pointer_value: u64,
/// The offset at which the value should be written. The offset is relative
/// to the original buffer.
target_offset: usize,
/// buffer.
enum PointerFixupEntry {
/// A fixup for a `binder_buffer_object`.
Fixup {
/// The translated pointer to write.
pointer_value: u64,
/// The offset at which the value should be written. The offset is relative
/// to the original buffer.
target_offset: usize,
},
/// A skip for a `binder_fd_array_object`.
Skip {
/// The number of bytes to skip.
skip: usize,
/// The offset at which the skip should happen. The offset is relative
/// to the original buffer.
target_offset: usize,
},
}
/// Return type of `apply_and_validate_fixup_in_parent`.
@ -762,8 +773,7 @@ impl Thread {
parent_entry.fixup_min_offset = info.new_min_offset;
parent_entry.pointer_fixups.push(
PointerFixupEntry {
skip: 0,
PointerFixupEntry::Fixup {
pointer_value: buffer_ptr_in_user_space,
target_offset: info.target_offset,
},
@ -789,6 +799,10 @@ impl Thread {
let num_fds = usize::try_from(obj.num_fds).map_err(|_| EINVAL)?;
let fds_len = num_fds.checked_mul(size_of::<u32>()).ok_or(EINVAL)?;
if !is_aligned(parent_offset, size_of::<u32>()) {
return Err(EINVAL.into());
}
let info = sg_state.validate_parent_fixup(parent_index, parent_offset, fds_len)?;
view.alloc.info_add_fd_reserve(num_fds)?;
@ -803,13 +817,16 @@ impl Thread {
}
};
if !is_aligned(parent_entry.sender_uaddr, size_of::<u32>()) {
return Err(EINVAL.into());
}
parent_entry.fixup_min_offset = info.new_min_offset;
parent_entry
.pointer_fixups
.push(
PointerFixupEntry {
PointerFixupEntry::Skip {
skip: fds_len,
pointer_value: 0,
target_offset: info.target_offset,
},
GFP_KERNEL,
@ -820,6 +837,7 @@ impl Thread {
.sender_uaddr
.checked_add(parent_offset)
.ok_or(EINVAL)?;
let mut fda_bytes = KVec::new();
UserSlice::new(UserPtr::from_addr(fda_uaddr as _), fds_len)
.read_all(&mut fda_bytes, GFP_KERNEL)?;
@ -871,17 +889,21 @@ impl Thread {
let mut reader =
UserSlice::new(UserPtr::from_addr(sg_entry.sender_uaddr), sg_entry.length).reader();
for fixup in &mut sg_entry.pointer_fixups {
let fixup_len = if fixup.skip == 0 {
size_of::<u64>()
} else {
fixup.skip
let (fixup_len, fixup_offset) = match fixup {
PointerFixupEntry::Fixup { target_offset, .. } => {
(size_of::<u64>(), *target_offset)
}
PointerFixupEntry::Skip {
skip,
target_offset,
} => (*skip, *target_offset),
};
let target_offset_end = fixup.target_offset.checked_add(fixup_len).ok_or(EINVAL)?;
if fixup.target_offset < end_of_previous_fixup || offset_end < target_offset_end {
let target_offset_end = fixup_offset.checked_add(fixup_len).ok_or(EINVAL)?;
if fixup_offset < end_of_previous_fixup || offset_end < target_offset_end {
pr_warn!(
"Fixups oob {} {} {} {}",
fixup.target_offset,
fixup_offset,
end_of_previous_fixup,
offset_end,
target_offset_end
@ -890,13 +912,13 @@ impl Thread {
}
let copy_off = end_of_previous_fixup;
let copy_len = fixup.target_offset - end_of_previous_fixup;
let copy_len = fixup_offset - end_of_previous_fixup;
if let Err(err) = alloc.copy_into(&mut reader, copy_off, copy_len) {
pr_warn!("Failed copying into alloc: {:?}", err);
return Err(err.into());
}
if fixup.skip == 0 {
let res = alloc.write::<u64>(fixup.target_offset, &fixup.pointer_value);
if let PointerFixupEntry::Fixup { pointer_value, .. } = fixup {
let res = alloc.write::<u64>(fixup_offset, pointer_value);
if let Err(err) = res {
pr_warn!("Failed copying ptr into alloc: {:?}", err);
return Err(err.into());
@ -949,25 +971,30 @@ impl Thread {
let data_size = trd.data_size.try_into().map_err(|_| EINVAL)?;
let aligned_data_size = ptr_align(data_size).ok_or(EINVAL)?;
let offsets_size = trd.offsets_size.try_into().map_err(|_| EINVAL)?;
let aligned_offsets_size = ptr_align(offsets_size).ok_or(EINVAL)?;
let buffers_size = tr.buffers_size.try_into().map_err(|_| EINVAL)?;
let aligned_buffers_size = ptr_align(buffers_size).ok_or(EINVAL)?;
let offsets_size: usize = trd.offsets_size.try_into().map_err(|_| EINVAL)?;
let buffers_size: usize = tr.buffers_size.try_into().map_err(|_| EINVAL)?;
let aligned_secctx_size = match secctx.as_ref() {
Some((_offset, ctx)) => ptr_align(ctx.len()).ok_or(EINVAL)?,
None => 0,
};
if !is_aligned(offsets_size, size_of::<u64>()) {
return Err(EINVAL.into());
}
if !is_aligned(buffers_size, size_of::<u64>()) {
return Err(EINVAL.into());
}
// This guarantees that at least `sizeof(usize)` bytes will be allocated.
let len = usize::max(
aligned_data_size
.checked_add(aligned_offsets_size)
.and_then(|sum| sum.checked_add(aligned_buffers_size))
.checked_add(offsets_size)
.and_then(|sum| sum.checked_add(buffers_size))
.and_then(|sum| sum.checked_add(aligned_secctx_size))
.ok_or(ENOMEM)?,
size_of::<usize>(),
size_of::<u64>(),
);
let secctx_off = aligned_data_size + aligned_offsets_size + aligned_buffers_size;
let secctx_off = aligned_data_size + offsets_size + buffers_size;
let mut alloc =
match to_process.buffer_alloc(debug_id, len, is_oneway, self.process.task.pid()) {
Ok(alloc) => alloc,
@ -999,13 +1026,13 @@ impl Thread {
}
let offsets_start = aligned_data_size;
let offsets_end = aligned_data_size + aligned_offsets_size;
let offsets_end = aligned_data_size + offsets_size;
// This state is used for BINDER_TYPE_PTR objects.
let sg_state = sg_state.insert(ScatterGatherState {
unused_buffer_space: UnusedBufferSpace {
offset: offsets_end,
limit: len,
limit: offsets_end + buffers_size,
},
sg_entries: KVec::new(),
ancestors: KVec::new(),
@ -1014,12 +1041,16 @@ impl Thread {
// Traverse the objects specified.
let mut view = AllocationView::new(&mut alloc, data_size);
for (index, index_offset) in (offsets_start..offsets_end)
.step_by(size_of::<usize>())
.step_by(size_of::<u64>())
.enumerate()
{
let offset = view.alloc.read(index_offset)?;
let offset: usize = view
.alloc
.read::<u64>(index_offset)?
.try_into()
.map_err(|_| EINVAL)?;
if offset < end_of_previous_object {
if offset < end_of_previous_object || !is_aligned(offset, size_of::<u32>()) {
pr_warn!("Got transaction with invalid offset.");
return Err(EINVAL.into());
}
@ -1051,7 +1082,7 @@ impl Thread {
}
// Update the indexes containing objects to clean up.
let offset_after_object = index_offset + size_of::<usize>();
let offset_after_object = index_offset + size_of::<u64>();
view.alloc
.set_info_offsets(offsets_start..offset_after_object);
}

@ -132,8 +132,8 @@ static int binderfs_binder_device_create(struct inode *ref_inode,
mutex_lock(&binderfs_minors_mutex);
if (++info->device_count <= info->mount_opts.max)
minor = ida_alloc_max(&binderfs_minors,
use_reserve ? BINDERFS_MAX_MINOR :
BINDERFS_MAX_MINOR_CAPPED,
use_reserve ? BINDERFS_MAX_MINOR - 1 :
BINDERFS_MAX_MINOR_CAPPED - 1,
GFP_KERNEL);
else
minor = -ENOSPC;
@ -408,8 +408,8 @@ static int binderfs_binder_ctl_create(struct super_block *sb)
/* Reserve a new minor number for the new device. */
mutex_lock(&binderfs_minors_mutex);
minor = ida_alloc_max(&binderfs_minors,
use_reserve ? BINDERFS_MAX_MINOR :
BINDERFS_MAX_MINOR_CAPPED,
use_reserve ? BINDERFS_MAX_MINOR - 1 :
BINDERFS_MAX_MINOR_CAPPED - 1,
GFP_KERNEL);
mutex_unlock(&binderfs_minors_mutex);
if (minor < 0) {

@ -1225,28 +1225,16 @@ static int loop_clr_fd(struct loop_device *lo)
}
static int
loop_set_status(struct loop_device *lo, blk_mode_t mode,
struct block_device *bdev, const struct loop_info64 *info)
loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
{
int err;
bool partscan = false;
bool size_changed = false;
unsigned int memflags;
/*
* If we don't hold exclusive handle for the device, upgrade to it
* here to avoid changing device under exclusive owner.
*/
if (!(mode & BLK_OPEN_EXCL)) {
err = bd_prepare_to_claim(bdev, loop_set_status, NULL);
if (err)
goto out_reread_partitions;
}
err = mutex_lock_killable(&lo->lo_mutex);
if (err)
goto out_abort_claiming;
return err;
if (lo->lo_state != Lo_bound) {
err = -ENXIO;
goto out_unlock;
@ -1285,10 +1273,6 @@ out_unfreeze:
}
out_unlock:
mutex_unlock(&lo->lo_mutex);
out_abort_claiming:
if (!(mode & BLK_OPEN_EXCL))
bd_abort_claiming(bdev, loop_set_status);
out_reread_partitions:
if (partscan)
loop_reread_partitions(lo);
@ -1368,9 +1352,7 @@ loop_info64_to_old(const struct loop_info64 *info64, struct loop_info *info)
}
static int
loop_set_status_old(struct loop_device *lo, blk_mode_t mode,
struct block_device *bdev,
const struct loop_info __user *arg)
loop_set_status_old(struct loop_device *lo, const struct loop_info __user *arg)
{
struct loop_info info;
struct loop_info64 info64;
@ -1378,19 +1360,17 @@ loop_set_status_old(struct loop_device *lo, blk_mode_t mode,
if (copy_from_user(&info, arg, sizeof (struct loop_info)))
return -EFAULT;
loop_info64_from_old(&info, &info64);
return loop_set_status(lo, mode, bdev, &info64);
return loop_set_status(lo, &info64);
}
static int
loop_set_status64(struct loop_device *lo, blk_mode_t mode,
struct block_device *bdev,
const struct loop_info64 __user *arg)
loop_set_status64(struct loop_device *lo, const struct loop_info64 __user *arg)
{
struct loop_info64 info64;
if (copy_from_user(&info64, arg, sizeof (struct loop_info64)))
return -EFAULT;
return loop_set_status(lo, mode, bdev, &info64);
return loop_set_status(lo, &info64);
}
static int
@ -1569,14 +1549,14 @@ static int lo_ioctl(struct block_device *bdev, blk_mode_t mode,
case LOOP_SET_STATUS:
err = -EPERM;
if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN))
err = loop_set_status_old(lo, mode, bdev, argp);
err = loop_set_status_old(lo, argp);
break;
case LOOP_GET_STATUS:
return loop_get_status_old(lo, argp);
case LOOP_SET_STATUS64:
err = -EPERM;
if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN))
err = loop_set_status64(lo, mode, bdev, argp);
err = loop_set_status64(lo, argp);
break;
case LOOP_GET_STATUS64:
return loop_get_status64(lo, argp);
@ -1670,9 +1650,8 @@ loop_info64_to_compat(const struct loop_info64 *info64,
}
static int
loop_set_status_compat(struct loop_device *lo, blk_mode_t mode,
struct block_device *bdev,
const struct compat_loop_info __user *arg)
loop_set_status_compat(struct loop_device *lo,
const struct compat_loop_info __user *arg)
{
struct loop_info64 info64;
int ret;
@ -1680,7 +1659,7 @@ loop_set_status_compat(struct loop_device *lo, blk_mode_t mode,
ret = loop_info64_from_compat(arg, &info64);
if (ret < 0)
return ret;
return loop_set_status(lo, mode, bdev, &info64);
return loop_set_status(lo, &info64);
}
static int
@ -1706,7 +1685,7 @@ static int lo_compat_ioctl(struct block_device *bdev, blk_mode_t mode,
switch(cmd) {
case LOOP_SET_STATUS:
err = loop_set_status_compat(lo, mode, bdev,
err = loop_set_status_compat(lo,
(const struct compat_loop_info __user *)arg);
break;
case LOOP_GET_STATUS:

@ -3495,11 +3495,29 @@ static void rbd_img_object_requests(struct rbd_img_request *img_req)
rbd_assert(!need_exclusive_lock(img_req) ||
__rbd_is_lock_owner(rbd_dev));
if (rbd_img_is_write(img_req)) {
rbd_assert(!img_req->snapc);
if (test_bit(IMG_REQ_CHILD, &img_req->flags)) {
rbd_assert(!rbd_img_is_write(img_req));
} else {
struct request *rq = blk_mq_rq_from_pdu(img_req);
u64 off = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
u64 len = blk_rq_bytes(rq);
u64 mapping_size;
down_read(&rbd_dev->header_rwsem);
img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
mapping_size = rbd_dev->mapping.size;
if (rbd_img_is_write(img_req)) {
rbd_assert(!img_req->snapc);
img_req->snapc =
ceph_get_snap_context(rbd_dev->header.snapc);
}
up_read(&rbd_dev->header_rwsem);
if (unlikely(off + len > mapping_size)) {
rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)",
off, len, mapping_size);
img_req->pending.result = -EIO;
return;
}
}
for_each_obj_request(img_req, obj_req) {
@ -4725,7 +4743,6 @@ static void rbd_queue_workfn(struct work_struct *work)
struct request *rq = blk_mq_rq_from_pdu(img_request);
u64 offset = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
u64 length = blk_rq_bytes(rq);
u64 mapping_size;
int result;
/* Ignore/skip any zero-length requests */
@ -4738,17 +4755,9 @@ static void rbd_queue_workfn(struct work_struct *work)
blk_mq_start_request(rq);
down_read(&rbd_dev->header_rwsem);
mapping_size = rbd_dev->mapping.size;
rbd_img_capture_header(img_request);
up_read(&rbd_dev->header_rwsem);
if (offset + length > mapping_size) {
rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)", offset,
length, mapping_size);
result = -EIO;
goto err_img_request;
}
dout("%s rbd_dev %p img_req %p %s %llu~%llu\n", __func__, rbd_dev,
img_request, obj_op_name(op_type), offset, length);


@ -19,12 +19,6 @@
MODULE_IMPORT_NS("PCI_IDE");
#define TIO_DEFAULT_NR_IDE_STREAMS 1
static uint nr_ide_streams = TIO_DEFAULT_NR_IDE_STREAMS;
module_param_named(ide_nr, nr_ide_streams, uint, 0644);
MODULE_PARM_DESC(ide_nr, "Set the maximum number of IDE streams per PHB");
#define dev_to_sp(dev) ((struct sp_device *)dev_get_drvdata(dev))
#define dev_to_psp(dev) ((struct psp_device *)(dev_to_sp(dev)->psp_data))
#define dev_to_sev(dev) ((struct sev_device *)(dev_to_psp(dev)->sev_data))
@ -193,7 +187,6 @@ static void streams_teardown(struct pci_ide **ide)
static int stream_alloc(struct pci_dev *pdev, struct pci_ide **ide,
unsigned int tc)
{
struct pci_dev *rp = pcie_find_root_port(pdev);
struct pci_ide *ide1;
if (ide[tc]) {
@ -201,17 +194,11 @@ static int stream_alloc(struct pci_dev *pdev, struct pci_ide **ide,
return -EBUSY;
}
/* FIXME: find a better way */
if (nr_ide_streams != TIO_DEFAULT_NR_IDE_STREAMS)
pci_notice(pdev, "Enable non-default %d streams", nr_ide_streams);
pci_ide_set_nr_streams(to_pci_host_bridge(rp->bus->bridge), nr_ide_streams);
ide1 = pci_ide_stream_alloc(pdev);
if (!ide1)
return -EFAULT;
/* Blindly assign streamid=0 to TC=0, and so on */
ide1->stream_id = tc;
ide1->stream_id = ide1->host_bridge_stream;
ide[tc] = ide1;


@ -263,7 +263,7 @@ static int loongson_gpio_init_irqchip(struct platform_device *pdev,
chip->irq.num_parents = data->intr_num;
chip->irq.parents = devm_kcalloc(&pdev->dev, data->intr_num,
sizeof(*chip->irq.parents), GFP_KERNEL);
if (!chip->parent)
if (!chip->irq.parents)
return -ENOMEM;
for (i = 0; i < data->intr_num; i++) {


@ -1359,6 +1359,7 @@ static int acpi_gpio_package_count(const union acpi_object *obj)
while (element < end) {
switch (element->type) {
case ACPI_TYPE_LOCAL_REFERENCE:
case ACPI_TYPE_STRING:
element += 3;
fallthrough;
case ACPI_TYPE_INTEGER:


@ -1920,21 +1920,21 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
/* Make sure restore workers don't access the BO any more */
mutex_lock(&process_info->lock);
list_del(&mem->validate_list);
if (!list_empty(&mem->validate_list))
list_del_init(&mem->validate_list);
mutex_unlock(&process_info->lock);
/* Cleanup user pages and MMU notifiers */
if (amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm)) {
amdgpu_hmm_unregister(mem->bo);
mutex_lock(&process_info->notifier_lock);
amdgpu_hmm_range_free(mem->range);
mutex_unlock(&process_info->notifier_lock);
}
ret = reserve_bo_and_cond_vms(mem, NULL, BO_VM_ALL, &ctx);
if (unlikely(ret))
return ret;
/* Cleanup user pages and MMU notifiers */
if (amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm)) {
amdgpu_hmm_unregister(mem->bo);
amdgpu_hmm_range_free(mem->range);
mem->range = NULL;
}
amdgpu_amdkfd_remove_eviction_fence(mem->bo,
process_info->eviction_fence);
pr_debug("Release VA 0x%llx - 0x%llx\n", mem->va,


@ -2405,9 +2405,6 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
return -ENODEV;
}
if (amdgpu_aspm == -1 && !pcie_aspm_enabled(pdev))
amdgpu_aspm = 0;
if (amdgpu_virtual_display ||
amdgpu_device_asic_has_dc_support(pdev, flags & AMD_ASIC_MASK))
supports_atomic = true;


@ -1671,7 +1671,7 @@ static int mes_v11_0_hw_init(struct amdgpu_ip_block *ip_block)
if (r)
goto failure;
if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x50) {
if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x52) {
r = mes_v11_0_set_hw_resources_1(&adev->mes);
if (r) {
DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r);


@ -105,9 +105,12 @@ void cm_helper_program_gamcor_xfer_func(
#define NUMBER_REGIONS 32
#define NUMBER_SW_SEGMENTS 16
bool cm3_helper_translate_curve_to_hw_format(
const struct dc_transfer_func *output_tf,
struct pwl_params *lut_params, bool fixpoint)
#define DC_LOGGER \
ctx->logger
bool cm3_helper_translate_curve_to_hw_format(struct dc_context *ctx,
const struct dc_transfer_func *output_tf,
struct pwl_params *lut_params, bool fixpoint)
{
struct curve_points3 *corner_points;
struct pwl_result_data *rgb_resulted;
@ -163,6 +166,11 @@ bool cm3_helper_translate_curve_to_hw_format(
hw_points += (1 << seg_distr[k]);
}
// DCN3+ have 257 pts in lieu of no separate slope registers
// Prior HW had 256 base+slope pairs
// Shaper LUT (i.e. fixpoint == true) is still 256 bases and 256 deltas
hw_points = fixpoint ? (hw_points - 1) : hw_points;
j = 0;
for (k = 0; k < (region_end - region_start); k++) {
increment = NUMBER_SW_SEGMENTS / (1 << seg_distr[k]);
@ -223,8 +231,6 @@ bool cm3_helper_translate_curve_to_hw_format(
corner_points[1].green.slope = dc_fixpt_zero;
corner_points[1].blue.slope = dc_fixpt_zero;
// DCN3+ have 257 pts in lieu of no separate slope registers
// Prior HW had 256 base+slope pairs
lut_params->hw_points_num = hw_points + 1;
k = 0;
@ -248,6 +254,10 @@ bool cm3_helper_translate_curve_to_hw_format(
if (fixpoint == true) {
i = 1;
while (i != hw_points + 2) {
uint32_t red_clamp;
uint32_t green_clamp;
uint32_t blue_clamp;
if (i >= hw_points) {
if (dc_fixpt_lt(rgb_plus_1->red, rgb->red))
rgb_plus_1->red = dc_fixpt_add(rgb->red,
@ -260,9 +270,20 @@ bool cm3_helper_translate_curve_to_hw_format(
rgb_minus_1->delta_blue);
}
rgb->delta_red_reg = dc_fixpt_clamp_u0d10(rgb->delta_red);
rgb->delta_green_reg = dc_fixpt_clamp_u0d10(rgb->delta_green);
rgb->delta_blue_reg = dc_fixpt_clamp_u0d10(rgb->delta_blue);
rgb->delta_red = dc_fixpt_sub(rgb_plus_1->red, rgb->red);
rgb->delta_green = dc_fixpt_sub(rgb_plus_1->green, rgb->green);
rgb->delta_blue = dc_fixpt_sub(rgb_plus_1->blue, rgb->blue);
red_clamp = dc_fixpt_clamp_u0d14(rgb->delta_red);
green_clamp = dc_fixpt_clamp_u0d14(rgb->delta_green);
blue_clamp = dc_fixpt_clamp_u0d14(rgb->delta_blue);
if (red_clamp >> 10 || green_clamp >> 10 || blue_clamp >> 10)
DC_LOG_ERROR("Losing delta precision while programming shaper LUT.");
rgb->delta_red_reg = red_clamp & 0x3ff;
rgb->delta_green_reg = green_clamp & 0x3ff;
rgb->delta_blue_reg = blue_clamp & 0x3ff;
rgb->red_reg = dc_fixpt_clamp_u0d14(rgb->red);
rgb->green_reg = dc_fixpt_clamp_u0d14(rgb->green);
rgb->blue_reg = dc_fixpt_clamp_u0d14(rgb->blue);


@ -59,7 +59,7 @@ void cm_helper_program_gamcor_xfer_func(
const struct pwl_params *params,
const struct dcn3_xfer_func_reg *reg);
bool cm3_helper_translate_curve_to_hw_format(
bool cm3_helper_translate_curve_to_hw_format(struct dc_context *ctx,
const struct dc_transfer_func *output_tf,
struct pwl_params *lut_params, bool fixpoint);


@ -239,7 +239,7 @@ bool dcn30_set_blend_lut(
if (plane_state->blend_tf.type == TF_TYPE_HWPWL)
blend_lut = &plane_state->blend_tf.pwl;
else if (plane_state->blend_tf.type == TF_TYPE_DISTRIBUTED_POINTS) {
result = cm3_helper_translate_curve_to_hw_format(
result = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
&plane_state->blend_tf, &dpp_base->regamma_params, false);
if (!result)
return result;
@ -334,8 +334,9 @@ bool dcn30_set_input_transfer_func(struct dc *dc,
if (plane_state->in_transfer_func.type == TF_TYPE_HWPWL)
params = &plane_state->in_transfer_func.pwl;
else if (plane_state->in_transfer_func.type == TF_TYPE_DISTRIBUTED_POINTS &&
cm3_helper_translate_curve_to_hw_format(&plane_state->in_transfer_func,
&dpp_base->degamma_params, false))
cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
&plane_state->in_transfer_func,
&dpp_base->degamma_params, false))
params = &dpp_base->degamma_params;
result = dpp_base->funcs->dpp_program_gamcor_lut(dpp_base, params);
@ -406,7 +407,7 @@ bool dcn30_set_output_transfer_func(struct dc *dc,
params = &stream->out_transfer_func.pwl;
else if (pipe_ctx->stream->out_transfer_func.type ==
TF_TYPE_DISTRIBUTED_POINTS &&
cm3_helper_translate_curve_to_hw_format(
cm3_helper_translate_curve_to_hw_format(stream->ctx,
&stream->out_transfer_func,
&mpc->blender_params, false))
params = &mpc->blender_params;


@ -486,8 +486,9 @@ bool dcn32_set_mcm_luts(
if (plane_state->blend_tf.type == TF_TYPE_HWPWL)
lut_params = &plane_state->blend_tf.pwl;
else if (plane_state->blend_tf.type == TF_TYPE_DISTRIBUTED_POINTS) {
result = cm3_helper_translate_curve_to_hw_format(&plane_state->blend_tf,
&dpp_base->regamma_params, false);
result = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
&plane_state->blend_tf,
&dpp_base->regamma_params, false);
if (!result)
return result;
@ -501,9 +502,9 @@ bool dcn32_set_mcm_luts(
lut_params = &plane_state->in_shaper_func.pwl;
else if (plane_state->in_shaper_func.type == TF_TYPE_DISTRIBUTED_POINTS) {
// TODO: dpp_base replace
ASSERT(false);
cm3_helper_translate_curve_to_hw_format(&plane_state->in_shaper_func,
&dpp_base->shaper_params, true);
cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
&plane_state->in_shaper_func,
&dpp_base->shaper_params, true);
lut_params = &dpp_base->shaper_params;
}
@ -543,8 +544,9 @@ bool dcn32_set_input_transfer_func(struct dc *dc,
if (plane_state->in_transfer_func.type == TF_TYPE_HWPWL)
params = &plane_state->in_transfer_func.pwl;
else if (plane_state->in_transfer_func.type == TF_TYPE_DISTRIBUTED_POINTS &&
cm3_helper_translate_curve_to_hw_format(&plane_state->in_transfer_func,
&dpp_base->degamma_params, false))
cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
&plane_state->in_transfer_func,
&dpp_base->degamma_params, false))
params = &dpp_base->degamma_params;
dpp_base->funcs->dpp_program_gamcor_lut(dpp_base, params);
@ -575,7 +577,7 @@ bool dcn32_set_output_transfer_func(struct dc *dc,
params = &stream->out_transfer_func.pwl;
else if (pipe_ctx->stream->out_transfer_func.type ==
TF_TYPE_DISTRIBUTED_POINTS &&
cm3_helper_translate_curve_to_hw_format(
cm3_helper_translate_curve_to_hw_format(stream->ctx,
&stream->out_transfer_func,
&mpc->blender_params, false))
params = &mpc->blender_params;


@ -430,7 +430,7 @@ void dcn401_populate_mcm_luts(struct dc *dc,
if (mcm_luts.lut1d_func->type == TF_TYPE_HWPWL)
m_lut_params.pwl = &mcm_luts.lut1d_func->pwl;
else if (mcm_luts.lut1d_func->type == TF_TYPE_DISTRIBUTED_POINTS) {
rval = cm3_helper_translate_curve_to_hw_format(
rval = cm3_helper_translate_curve_to_hw_format(mpc->ctx,
mcm_luts.lut1d_func,
&dpp_base->regamma_params, false);
m_lut_params.pwl = rval ? &dpp_base->regamma_params : NULL;
@ -450,7 +450,7 @@ void dcn401_populate_mcm_luts(struct dc *dc,
m_lut_params.pwl = &mcm_luts.shaper->pwl;
else if (mcm_luts.shaper->type == TF_TYPE_DISTRIBUTED_POINTS) {
ASSERT(false);
rval = cm3_helper_translate_curve_to_hw_format(
rval = cm3_helper_translate_curve_to_hw_format(mpc->ctx,
mcm_luts.shaper,
&dpp_base->regamma_params, true);
m_lut_params.pwl = rval ? &dpp_base->regamma_params : NULL;
@ -627,8 +627,9 @@ bool dcn401_set_mcm_luts(struct pipe_ctx *pipe_ctx,
if (plane_state->blend_tf.type == TF_TYPE_HWPWL)
lut_params = &plane_state->blend_tf.pwl;
else if (plane_state->blend_tf.type == TF_TYPE_DISTRIBUTED_POINTS) {
rval = cm3_helper_translate_curve_to_hw_format(&plane_state->blend_tf,
&dpp_base->regamma_params, false);
rval = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
&plane_state->blend_tf,
&dpp_base->regamma_params, false);
lut_params = rval ? &dpp_base->regamma_params : NULL;
}
result = mpc->funcs->program_1dlut(mpc, lut_params, mpcc_id);
@ -639,8 +640,9 @@ bool dcn401_set_mcm_luts(struct pipe_ctx *pipe_ctx,
lut_params = &plane_state->in_shaper_func.pwl;
else if (plane_state->in_shaper_func.type == TF_TYPE_DISTRIBUTED_POINTS) {
// TODO: dpp_base replace
rval = cm3_helper_translate_curve_to_hw_format(&plane_state->in_shaper_func,
&dpp_base->shaper_params, true);
rval = cm3_helper_translate_curve_to_hw_format(plane_state->ctx,
&plane_state->in_shaper_func,
&dpp_base->shaper_params, true);
lut_params = rval ? &dpp_base->shaper_params : NULL;
}
result &= mpc->funcs->program_shaper(mpc, lut_params, mpcc_id);
@ -674,7 +676,7 @@ bool dcn401_set_output_transfer_func(struct dc *dc,
params = &stream->out_transfer_func.pwl;
else if (pipe_ctx->stream->out_transfer_func.type ==
TF_TYPE_DISTRIBUTED_POINTS &&
cm3_helper_translate_curve_to_hw_format(
cm3_helper_translate_curve_to_hw_format(stream->ctx,
&stream->out_transfer_func,
&mpc->blender_params, false))
params = &mpc->blender_params;


@ -8,6 +8,7 @@
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <drm/bridge/dw_hdmi.h>
#include <sound/asoundef.h>
@ -33,6 +34,7 @@
struct imx8mp_hdmi_pai {
struct regmap *regmap;
struct device *dev;
};
static void imx8mp_hdmi_pai_enable(struct dw_hdmi *dw_hdmi, int channel,
@ -43,6 +45,9 @@ static void imx8mp_hdmi_pai_enable(struct dw_hdmi *dw_hdmi, int channel,
struct imx8mp_hdmi_pai *hdmi_pai = pdata->priv_audio;
int val;
if (pm_runtime_resume_and_get(hdmi_pai->dev) < 0)
return;
/* PAI set control extended */
val = WTMK_HIGH(3) | WTMK_LOW(3);
val |= NUM_CH(channel);
@ -85,6 +90,8 @@ static void imx8mp_hdmi_pai_disable(struct dw_hdmi *dw_hdmi)
/* Stop PAI */
regmap_write(hdmi_pai->regmap, HTX_PAI_CTRL, 0);
pm_runtime_put_sync(hdmi_pai->dev);
}
static const struct regmap_config imx8mp_hdmi_pai_regmap_config = {
@ -101,6 +108,7 @@ static int imx8mp_hdmi_pai_bind(struct device *dev, struct device *master, void
struct imx8mp_hdmi_pai *hdmi_pai;
struct resource *res;
void __iomem *base;
int ret;
hdmi_pai = devm_kzalloc(dev, sizeof(*hdmi_pai), GFP_KERNEL);
if (!hdmi_pai)
@ -121,6 +129,13 @@ static int imx8mp_hdmi_pai_bind(struct device *dev, struct device *master, void
plat_data->disable_audio = imx8mp_hdmi_pai_disable;
plat_data->priv_audio = hdmi_pai;
hdmi_pai->dev = dev;
ret = devm_pm_runtime_enable(dev);
if (ret < 0) {
dev_err(dev, "failed to enable PM runtime: %d\n", ret);
return ret;
}
return 0;
}


@ -250,7 +250,6 @@ static irqreturn_t gma_irq_handler(int irq, void *arg)
void gma_irq_preinstall(struct drm_device *dev)
{
struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
struct drm_crtc *crtc;
unsigned long irqflags;
spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags);
@ -261,15 +260,10 @@ void gma_irq_preinstall(struct drm_device *dev)
PSB_WSGX32(0x00000000, PSB_CR_EVENT_HOST_ENABLE);
PSB_RSGX32(PSB_CR_EVENT_HOST_ENABLE);
drm_for_each_crtc(crtc, dev) {
struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
if (vblank->enabled) {
u32 mask = drm_crtc_index(crtc) ? _PSB_VSYNC_PIPEB_FLAG :
_PSB_VSYNC_PIPEA_FLAG;
dev_priv->vdc_irq_mask |= mask;
}
}
if (dev->vblank[0].enabled)
dev_priv->vdc_irq_mask |= _PSB_VSYNC_PIPEA_FLAG;
if (dev->vblank[1].enabled)
dev_priv->vdc_irq_mask |= _PSB_VSYNC_PIPEB_FLAG;
/* Revisit this area - want per device masks ? */
if (dev_priv->ops->hotplug)
@ -284,8 +278,8 @@ void gma_irq_preinstall(struct drm_device *dev)
void gma_irq_postinstall(struct drm_device *dev)
{
struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
struct drm_crtc *crtc;
unsigned long irqflags;
unsigned int i;
spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags);
@ -298,13 +292,11 @@ void gma_irq_postinstall(struct drm_device *dev)
PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R);
PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM);
drm_for_each_crtc(crtc, dev) {
struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
if (vblank->enabled)
gma_enable_pipestat(dev_priv, drm_crtc_index(crtc), PIPE_VBLANK_INTERRUPT_ENABLE);
for (i = 0; i < dev->num_crtcs; ++i) {
if (dev->vblank[i].enabled)
gma_enable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
else
gma_disable_pipestat(dev_priv, drm_crtc_index(crtc), PIPE_VBLANK_INTERRUPT_ENABLE);
gma_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
}
if (dev_priv->ops->hotplug_enable)
@ -345,8 +337,8 @@ void gma_irq_uninstall(struct drm_device *dev)
{
struct drm_psb_private *dev_priv = to_drm_psb_private(dev);
struct pci_dev *pdev = to_pci_dev(dev->dev);
struct drm_crtc *crtc;
unsigned long irqflags;
unsigned int i;
if (!dev_priv->irq_enabled)
return;
@ -358,11 +350,9 @@ void gma_irq_uninstall(struct drm_device *dev)
PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM);
drm_for_each_crtc(crtc, dev) {
struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
if (vblank->enabled)
gma_disable_pipestat(dev_priv, drm_crtc_index(crtc), PIPE_VBLANK_INTERRUPT_ENABLE);
for (i = 0; i < dev->num_crtcs; ++i) {
if (dev->vblank[i].enabled)
gma_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
}
dev_priv->vdc_irq_mask &= _PSB_IRQ_SGX_FLAG |


@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/delay.h>
#include <linux/iopoll.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_edid.h>
@ -12,7 +13,7 @@
void mgag200_bmc_stop_scanout(struct mga_device *mdev)
{
u8 tmp;
int iter_max;
int ret;
/*
* 1 - The first step is to inform the BMC of an upcoming mode
@ -42,30 +43,22 @@ void mgag200_bmc_stop_scanout(struct mga_device *mdev)
/*
* 3a- The third step is to verify if there is an active scan.
* We are waiting for a 0 on remhsyncsts <XSPAREREG<0>).
* We are waiting for a 0 on remhsyncsts (XSPAREREG<0>).
*/
iter_max = 300;
while (!(tmp & 0x1) && iter_max) {
WREG8(DAC_INDEX, MGA1064_SPAREREG);
tmp = RREG8(DAC_DATA);
udelay(1000);
iter_max--;
}
ret = read_poll_timeout(RREG_DAC, tmp, !(tmp & 0x1),
1000, 300000, false,
MGA1064_SPAREREG);
if (ret == -ETIMEDOUT)
return;
/*
* 3b- This step occurs only if the remove is actually
* 3b- This step occurs only if the remote BMC is actually
* scanning. We are waiting for the end of the frame which is
* a 1 on remvsyncsts (XSPAREREG<1>)
*/
if (iter_max) {
iter_max = 300;
while ((tmp & 0x2) && iter_max) {
WREG8(DAC_INDEX, MGA1064_SPAREREG);
tmp = RREG8(DAC_DATA);
udelay(1000);
iter_max--;
}
}
(void)read_poll_timeout(RREG_DAC, tmp, (tmp & 0x2),
1000, 300000, false,
MGA1064_SPAREREG);
}
void mgag200_bmc_start_scanout(struct mga_device *mdev)


@ -111,6 +111,12 @@
#define DAC_INDEX 0x3c00
#define DAC_DATA 0x3c0a
#define RREG_DAC(reg) \
({ \
WREG8(DAC_INDEX, reg); \
RREG8(DAC_DATA); \
})
#define WREG_DAC(reg, v) \
do { \
WREG8(DAC_INDEX, reg); \


@ -11,7 +11,7 @@ struct nvif_client {
int nvif_client_ctor(struct nvif_client *parent, const char *name, struct nvif_client *);
void nvif_client_dtor(struct nvif_client *);
int nvif_client_suspend(struct nvif_client *);
int nvif_client_suspend(struct nvif_client *, bool);
int nvif_client_resume(struct nvif_client *);
/*XXX*/


@ -8,7 +8,7 @@ struct nvif_driver {
const char *name;
int (*init)(const char *name, u64 device, const char *cfg,
const char *dbg, void **priv);
int (*suspend)(void *priv);
int (*suspend)(void *priv, bool runtime);
int (*resume)(void *priv);
int (*ioctl)(void *priv, void *data, u32 size, void **hack);
void __iomem *(*map)(void *priv, u64 handle, u32 size);


@ -2,6 +2,7 @@
#ifndef __NVKM_DEVICE_H__
#define __NVKM_DEVICE_H__
#include <core/oclass.h>
#include <core/suspend_state.h>
#include <core/intr.h>
enum nvkm_subdev_type;
@ -93,7 +94,7 @@ struct nvkm_device_func {
void *(*dtor)(struct nvkm_device *);
int (*preinit)(struct nvkm_device *);
int (*init)(struct nvkm_device *);
void (*fini)(struct nvkm_device *, bool suspend);
void (*fini)(struct nvkm_device *, enum nvkm_suspend_state suspend);
int (*irq)(struct nvkm_device *);
resource_size_t (*resource_addr)(struct nvkm_device *, enum nvkm_bar_id);
resource_size_t (*resource_size)(struct nvkm_device *, enum nvkm_bar_id);


@ -20,7 +20,7 @@ struct nvkm_engine_func {
int (*oneinit)(struct nvkm_engine *);
int (*info)(struct nvkm_engine *, u64 mthd, u64 *data);
int (*init)(struct nvkm_engine *);
int (*fini)(struct nvkm_engine *, bool suspend);
int (*fini)(struct nvkm_engine *, enum nvkm_suspend_state suspend);
int (*reset)(struct nvkm_engine *);
int (*nonstall)(struct nvkm_engine *);
void (*intr)(struct nvkm_engine *);


@ -2,6 +2,7 @@
#ifndef __NVKM_OBJECT_H__
#define __NVKM_OBJECT_H__
#include <core/oclass.h>
#include <core/suspend_state.h>
struct nvkm_event;
struct nvkm_gpuobj;
struct nvkm_uevent;
@ -27,7 +28,7 @@ enum nvkm_object_map {
struct nvkm_object_func {
void *(*dtor)(struct nvkm_object *);
int (*init)(struct nvkm_object *);
int (*fini)(struct nvkm_object *, bool suspend);
int (*fini)(struct nvkm_object *, enum nvkm_suspend_state suspend);
int (*mthd)(struct nvkm_object *, u32 mthd, void *data, u32 size);
int (*ntfy)(struct nvkm_object *, u32 mthd, struct nvkm_event **);
int (*map)(struct nvkm_object *, void *argv, u32 argc,
@ -49,7 +50,7 @@ int nvkm_object_new(const struct nvkm_oclass *, void *data, u32 size,
void nvkm_object_del(struct nvkm_object **);
void *nvkm_object_dtor(struct nvkm_object *);
int nvkm_object_init(struct nvkm_object *);
int nvkm_object_fini(struct nvkm_object *, bool suspend);
int nvkm_object_fini(struct nvkm_object *, enum nvkm_suspend_state);
int nvkm_object_mthd(struct nvkm_object *, u32 mthd, void *data, u32 size);
int nvkm_object_ntfy(struct nvkm_object *, u32 mthd, struct nvkm_event **);
int nvkm_object_map(struct nvkm_object *, void *argv, u32 argc,


@ -13,7 +13,7 @@ struct nvkm_oproxy {
struct nvkm_oproxy_func {
void (*dtor[2])(struct nvkm_oproxy *);
int (*init[2])(struct nvkm_oproxy *);
int (*fini[2])(struct nvkm_oproxy *, bool suspend);
int (*fini[2])(struct nvkm_oproxy *, enum nvkm_suspend_state suspend);
};
void nvkm_oproxy_ctor(const struct nvkm_oproxy_func *,


@ -40,7 +40,7 @@ struct nvkm_subdev_func {
int (*oneinit)(struct nvkm_subdev *);
int (*info)(struct nvkm_subdev *, u64 mthd, u64 *data);
int (*init)(struct nvkm_subdev *);
int (*fini)(struct nvkm_subdev *, bool suspend);
int (*fini)(struct nvkm_subdev *, enum nvkm_suspend_state suspend);
void (*intr)(struct nvkm_subdev *);
};
@ -65,7 +65,7 @@ void nvkm_subdev_unref(struct nvkm_subdev *);
int nvkm_subdev_preinit(struct nvkm_subdev *);
int nvkm_subdev_oneinit(struct nvkm_subdev *);
int nvkm_subdev_init(struct nvkm_subdev *);
int nvkm_subdev_fini(struct nvkm_subdev *, bool suspend);
int nvkm_subdev_fini(struct nvkm_subdev *, enum nvkm_suspend_state suspend);
int nvkm_subdev_info(struct nvkm_subdev *, u64, u64 *);
void nvkm_subdev_intr(struct nvkm_subdev *);


@ -0,0 +1,11 @@
/* SPDX-License-Identifier: MIT */
#ifndef __NVKM_SUSPEND_STATE_H__
#define __NVKM_SUSPEND_STATE_H__
enum nvkm_suspend_state {
NVKM_POWEROFF,
NVKM_SUSPEND,
NVKM_RUNTIME_SUSPEND,
};
#endif


@ -44,6 +44,9 @@ typedef void (*nvkm_gsp_event_func)(struct nvkm_gsp_event *, void *repv, u32 rep
* NVKM_GSP_RPC_REPLY_NOWAIT - If specified, immediately return to the
* caller after the GSP RPC command is issued.
*
 * NVKM_GSP_RPC_REPLY_NOSEQ - If specified, behaves exactly like NOWAIT,
 * but does not emit an RPC sequence number.
*
* NVKM_GSP_RPC_REPLY_RECV - If specified, wait and receive the entire GSP
* RPC message after the GSP RPC command is issued.
*
@ -53,6 +56,7 @@ typedef void (*nvkm_gsp_event_func)(struct nvkm_gsp_event *, void *repv, u32 rep
*/
enum nvkm_gsp_rpc_reply_policy {
NVKM_GSP_RPC_REPLY_NOWAIT = 0,
NVKM_GSP_RPC_REPLY_NOSEQ,
NVKM_GSP_RPC_REPLY_RECV,
NVKM_GSP_RPC_REPLY_POLL,
};
@ -242,6 +246,8 @@ struct nvkm_gsp {
/* The size of the registry RPC */
size_t registry_rpc_size;
u32 rpc_seq;
#ifdef CONFIG_DEBUG_FS
/*
* Logging buffers in debugfs. The wrapper objects need to remain


@ -352,8 +352,6 @@ nouveau_user_framebuffer_create(struct drm_device *dev,
static const struct drm_mode_config_funcs nouveau_mode_config_funcs = {
.fb_create = nouveau_user_framebuffer_create,
.atomic_commit = drm_atomic_helper_commit,
.atomic_check = drm_atomic_helper_check,
};


@ -983,7 +983,7 @@ nouveau_do_suspend(struct nouveau_drm *drm, bool runtime)
}
NV_DEBUG(drm, "suspending object tree...\n");
ret = nvif_client_suspend(&drm->_client);
ret = nvif_client_suspend(&drm->_client, runtime);
if (ret)
goto fail_client;


@ -62,10 +62,16 @@ nvkm_client_resume(void *priv)
}
static int
nvkm_client_suspend(void *priv)
nvkm_client_suspend(void *priv, bool runtime)
{
struct nvkm_client *client = priv;
return nvkm_object_fini(&client->object, true);
enum nvkm_suspend_state state;
if (runtime)
state = NVKM_RUNTIME_SUSPEND;
else
state = NVKM_SUSPEND;
return nvkm_object_fini(&client->object, state);
}
static int


@ -30,9 +30,9 @@
#include <nvif/if0000.h>
int
nvif_client_suspend(struct nvif_client *client)
nvif_client_suspend(struct nvif_client *client, bool runtime)
{
return client->driver->suspend(client->object.priv);
return client->driver->suspend(client->object.priv, runtime);
}
int


@ -41,7 +41,7 @@ nvkm_engine_reset(struct nvkm_engine *engine)
if (engine->func->reset)
return engine->func->reset(engine);
nvkm_subdev_fini(&engine->subdev, false);
nvkm_subdev_fini(&engine->subdev, NVKM_POWEROFF);
return nvkm_subdev_init(&engine->subdev);
}
@ -98,7 +98,7 @@ nvkm_engine_info(struct nvkm_subdev *subdev, u64 mthd, u64 *data)
}
static int
nvkm_engine_fini(struct nvkm_subdev *subdev, bool suspend)
nvkm_engine_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_engine *engine = nvkm_engine(subdev);
if (engine->func->fini)


@ -141,7 +141,7 @@ nvkm_ioctl_new(struct nvkm_client *client,
}
ret = -EEXIST;
}
nvkm_object_fini(object, false);
nvkm_object_fini(object, NVKM_POWEROFF);
}
nvkm_object_del(&object);
@ -160,7 +160,7 @@ nvkm_ioctl_del(struct nvkm_client *client,
nvif_ioctl(object, "delete size %d\n", size);
if (!(ret = nvif_unvers(ret, &data, &size, args->none))) {
nvif_ioctl(object, "delete\n");
nvkm_object_fini(object, false);
nvkm_object_fini(object, NVKM_POWEROFF);
nvkm_object_del(&object);
}


@ -142,13 +142,25 @@ nvkm_object_bind(struct nvkm_object *object, struct nvkm_gpuobj *gpuobj,
}
int
nvkm_object_fini(struct nvkm_object *object, bool suspend)
nvkm_object_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
const char *action = suspend ? "suspend" : "fini";
const char *action;
struct nvkm_object *child;
s64 time;
int ret;
switch (suspend) {
case NVKM_POWEROFF:
default:
action = "fini";
break;
case NVKM_SUSPEND:
action = "suspend";
break;
case NVKM_RUNTIME_SUSPEND:
action = "runtime";
break;
}
nvif_debug(object, "%s children...\n", action);
time = ktime_to_us(ktime_get());
list_for_each_entry_reverse(child, &object->tree, head) {
@ -212,11 +224,11 @@ nvkm_object_init(struct nvkm_object *object)
fail_child:
list_for_each_entry_continue_reverse(child, &object->tree, head)
nvkm_object_fini(child, false);
nvkm_object_fini(child, NVKM_POWEROFF);
fail:
nvif_error(object, "init failed with %d\n", ret);
if (object->func->fini)
object->func->fini(object, false);
object->func->fini(object, NVKM_POWEROFF);
return ret;
}


@ -87,7 +87,7 @@ nvkm_oproxy_uevent(struct nvkm_object *object, void *argv, u32 argc,
}
static int
nvkm_oproxy_fini(struct nvkm_object *object, bool suspend)
nvkm_oproxy_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nvkm_oproxy *oproxy = nvkm_oproxy(object);
int ret;


@ -51,12 +51,24 @@ nvkm_subdev_info(struct nvkm_subdev *subdev, u64 mthd, u64 *data)
}
int
nvkm_subdev_fini(struct nvkm_subdev *subdev, bool suspend)
nvkm_subdev_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_device *device = subdev->device;
const char *action = suspend ? "suspend" : subdev->use.enabled ? "fini" : "reset";
const char *action;
s64 time;
switch (suspend) {
case NVKM_POWEROFF:
default:
action = subdev->use.enabled ? "fini" : "reset";
break;
case NVKM_SUSPEND:
action = "suspend";
break;
case NVKM_RUNTIME_SUSPEND:
action = "runtime";
break;
}
nvkm_trace(subdev, "%s running...\n", action);
time = ktime_to_us(ktime_get());
@ -186,7 +198,7 @@ void
nvkm_subdev_unref(struct nvkm_subdev *subdev)
{
if (refcount_dec_and_mutex_lock(&subdev->use.refcount, &subdev->use.mutex)) {
nvkm_subdev_fini(subdev, false);
nvkm_subdev_fini(subdev, NVKM_POWEROFF);
mutex_unlock(&subdev->use.mutex);
}
}


@ -73,7 +73,7 @@ nvkm_uevent_mthd(struct nvkm_object *object, u32 mthd, void *argv, u32 argc)
}
static int
nvkm_uevent_fini(struct nvkm_object *object, bool suspend)
nvkm_uevent_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nvkm_uevent *uevent = nvkm_uevent(object);


@ -46,7 +46,7 @@ ga100_ce_nonstall(struct nvkm_engine *engine)
}
int
ga100_ce_fini(struct nvkm_engine *engine, bool suspend)
ga100_ce_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
{
nvkm_inth_block(&engine->subdev.inth);
return 0;


@ -14,7 +14,7 @@ extern const struct nvkm_object_func gv100_ce_cclass;
int ga100_ce_oneinit(struct nvkm_engine *);
int ga100_ce_init(struct nvkm_engine *);
int ga100_ce_fini(struct nvkm_engine *, bool);
int ga100_ce_fini(struct nvkm_engine *, enum nvkm_suspend_state);
int ga100_ce_nonstall(struct nvkm_engine *);
u32 gb202_ce_grce_mask(struct nvkm_device *);


@ -2936,13 +2936,25 @@ nvkm_device_engine(struct nvkm_device *device, int type, int inst)
}
int
nvkm_device_fini(struct nvkm_device *device, bool suspend)
nvkm_device_fini(struct nvkm_device *device, enum nvkm_suspend_state suspend)
{
const char *action = suspend ? "suspend" : "fini";
const char *action;
struct nvkm_subdev *subdev;
int ret;
s64 time;
switch (suspend) {
case NVKM_POWEROFF:
default:
action = "fini";
break;
case NVKM_SUSPEND:
action = "suspend";
break;
case NVKM_RUNTIME_SUSPEND:
action = "runtime";
break;
}
nvdev_trace(device, "%s running...\n", action);
time = ktime_to_us(ktime_get());
@ -3032,7 +3044,7 @@ nvkm_device_init(struct nvkm_device *device)
if (ret)
return ret;
nvkm_device_fini(device, false);
nvkm_device_fini(device, NVKM_POWEROFF);
nvdev_trace(device, "init running...\n");
time = ktime_to_us(ktime_get());
@ -3060,9 +3072,9 @@ nvkm_device_init(struct nvkm_device *device)
fail_subdev:
list_for_each_entry_from(subdev, &device->subdev, head)
nvkm_subdev_fini(subdev, false);
nvkm_subdev_fini(subdev, NVKM_POWEROFF);
fail:
nvkm_device_fini(device, false);
nvkm_device_fini(device, NVKM_POWEROFF);
nvdev_error(device, "init failed with %d\n", ret);
return ret;


@ -1605,10 +1605,10 @@ nvkm_device_pci_irq(struct nvkm_device *device)
}
static void
nvkm_device_pci_fini(struct nvkm_device *device, bool suspend)
nvkm_device_pci_fini(struct nvkm_device *device, enum nvkm_suspend_state suspend)
{
struct nvkm_device_pci *pdev = nvkm_device_pci(device);
if (suspend) {
if (suspend != NVKM_POWEROFF) {
pci_disable_device(pdev->pdev);
pdev->suspend = true;
}


@ -56,5 +56,5 @@ int nvkm_device_ctor(const struct nvkm_device_func *,
const char *name, const char *cfg, const char *dbg,
struct nvkm_device *);
int nvkm_device_init(struct nvkm_device *);
int nvkm_device_fini(struct nvkm_device *, bool suspend);
int nvkm_device_fini(struct nvkm_device *, enum nvkm_suspend_state suspend);
#endif


@ -218,7 +218,7 @@ nvkm_udevice_map(struct nvkm_object *object, void *argv, u32 argc,
}
static int
nvkm_udevice_fini(struct nvkm_object *object, bool suspend)
nvkm_udevice_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nvkm_udevice *udev = nvkm_udevice(object);
struct nvkm_device *device = udev->device;


@ -99,13 +99,13 @@ nvkm_disp_intr(struct nvkm_engine *engine)
}
static int
nvkm_disp_fini(struct nvkm_engine *engine, bool suspend)
nvkm_disp_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
{
struct nvkm_disp *disp = nvkm_disp(engine);
struct nvkm_outp *outp;
if (disp->func->fini)
disp->func->fini(disp, suspend);
disp->func->fini(disp, suspend != NVKM_POWEROFF);
list_for_each_entry(outp, &disp->outps, head) {
if (outp->func->fini)


@ -128,7 +128,7 @@ nvkm_disp_chan_child_get(struct nvkm_object *object, int index, struct nvkm_ocla
}
static int
nvkm_disp_chan_fini(struct nvkm_object *object, bool suspend)
nvkm_disp_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nvkm_disp_chan *chan = nvkm_disp_chan(object);


@ -93,13 +93,13 @@ nvkm_falcon_intr(struct nvkm_engine *engine)
}
static int
nvkm_falcon_fini(struct nvkm_engine *engine, bool suspend)
nvkm_falcon_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
{
struct nvkm_falcon *falcon = nvkm_falcon(engine);
struct nvkm_device *device = falcon->engine.subdev.device;
const u32 base = falcon->addr;
if (!suspend) {
if (suspend == NVKM_POWEROFF) {
nvkm_memory_unref(&falcon->core);
if (falcon->external) {
vfree(falcon->data.data);

@@ -122,7 +122,7 @@ nvkm_fifo_class_get(struct nvkm_oclass *oclass, int index, const struct nvkm_dev
}
static int
-nvkm_fifo_fini(struct nvkm_engine *engine, bool suspend)
+nvkm_fifo_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
{
struct nvkm_fifo *fifo = nvkm_fifo(engine);
struct nvkm_runl *runl;

@@ -72,7 +72,7 @@ struct nvkm_uobj {
};
static int
-nvkm_uchan_object_fini_1(struct nvkm_oproxy *oproxy, bool suspend)
+nvkm_uchan_object_fini_1(struct nvkm_oproxy *oproxy, enum nvkm_suspend_state suspend)
{
struct nvkm_uobj *uobj = container_of(oproxy, typeof(*uobj), oproxy);
struct nvkm_chan *chan = uobj->chan;
@@ -87,7 +87,7 @@ nvkm_uchan_object_fini_1(struct nvkm_oproxy *oproxy, bool suspend)
nvkm_chan_cctx_bind(chan, ectx->engn, NULL);
if (refcount_dec_and_test(&ectx->uses))
-nvkm_object_fini(ectx->object, false);
+nvkm_object_fini(ectx->object, NVKM_POWEROFF);
mutex_unlock(&chan->cgrp->mutex);
}
@@ -269,7 +269,7 @@ nvkm_uchan_map(struct nvkm_object *object, void *argv, u32 argc,
}
static int
-nvkm_uchan_fini(struct nvkm_object *object, bool suspend)
+nvkm_uchan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nvkm_chan *chan = nvkm_uchan(object)->chan;

@@ -168,11 +168,11 @@ nvkm_gr_init(struct nvkm_engine *engine)
}
static int
-nvkm_gr_fini(struct nvkm_engine *engine, bool suspend)
+nvkm_gr_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
{
struct nvkm_gr *gr = nvkm_gr(engine);
if (gr->func->fini)
-return gr->func->fini(gr, suspend);
+return gr->func->fini(gr, suspend != NVKM_POWEROFF);
return 0;
}

@@ -2330,7 +2330,7 @@ gf100_gr_reset(struct nvkm_gr *base)
WARN_ON(gf100_gr_fecs_halt_pipeline(gr));
-subdev->func->fini(subdev, false);
+subdev->func->fini(subdev, NVKM_POWEROFF);
nvkm_mc_disable(device, subdev->type, subdev->inst);
if (gr->func->gpccs.reset)
gr->func->gpccs.reset(gr);

@@ -1158,7 +1158,7 @@ nv04_gr_chan_dtor(struct nvkm_object *object)
}
static int
-nv04_gr_chan_fini(struct nvkm_object *object, bool suspend)
+nv04_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nv04_gr_chan *chan = nv04_gr_chan(object);
struct nv04_gr *gr = chan->gr;

@@ -951,7 +951,7 @@ nv10_gr_context_switch(struct nv10_gr *gr)
}
static int
-nv10_gr_chan_fini(struct nvkm_object *object, bool suspend)
+nv10_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nv10_gr_chan *chan = nv10_gr_chan(object);
struct nv10_gr *gr = chan->gr;

@@ -27,7 +27,7 @@ nv20_gr_chan_init(struct nvkm_object *object)
}
int
-nv20_gr_chan_fini(struct nvkm_object *object, bool suspend)
+nv20_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nv20_gr_chan *chan = nv20_gr_chan(object);
struct nv20_gr *gr = chan->gr;

@@ -31,5 +31,5 @@ struct nv20_gr_chan {
void *nv20_gr_chan_dtor(struct nvkm_object *);
int nv20_gr_chan_init(struct nvkm_object *);
-int nv20_gr_chan_fini(struct nvkm_object *, bool);
+int nv20_gr_chan_fini(struct nvkm_object *, enum nvkm_suspend_state);
#endif

@@ -89,7 +89,7 @@ nv40_gr_chan_bind(struct nvkm_object *object, struct nvkm_gpuobj *parent,
}
static int
-nv40_gr_chan_fini(struct nvkm_object *object, bool suspend)
+nv40_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nv40_gr_chan *chan = nv40_gr_chan(object);
struct nv40_gr *gr = chan->gr;
@@ -101,7 +101,7 @@ nv40_gr_chan_fini(struct nvkm_object *object, bool suspend)
nvkm_mask(device, 0x400720, 0x00000001, 0x00000000);
if (nvkm_rd32(device, 0x40032c) == inst) {
-if (suspend) {
+if (suspend != NVKM_POWEROFF) {
nvkm_wr32(device, 0x400720, 0x00000000);
nvkm_wr32(device, 0x400784, inst);
nvkm_mask(device, 0x400310, 0x00000020, 0x00000020);

@@ -65,7 +65,7 @@ nv44_mpeg_chan_bind(struct nvkm_object *object, struct nvkm_gpuobj *parent,
}
static int
-nv44_mpeg_chan_fini(struct nvkm_object *object, bool suspend)
+nv44_mpeg_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nv44_mpeg_chan *chan = nv44_mpeg_chan(object);

@@ -37,7 +37,7 @@ nvkm_sec2_finimsg(void *priv, struct nvfw_falcon_msg *hdr)
}
static int
-nvkm_sec2_fini(struct nvkm_engine *engine, bool suspend)
+nvkm_sec2_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
{
struct nvkm_sec2 *sec2 = nvkm_sec2(engine);
struct nvkm_subdev *subdev = &sec2->engine.subdev;

@@ -76,7 +76,7 @@ nvkm_xtensa_intr(struct nvkm_engine *engine)
}
static int
-nvkm_xtensa_fini(struct nvkm_engine *engine, bool suspend)
+nvkm_xtensa_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
{
struct nvkm_xtensa *xtensa = nvkm_xtensa(engine);
struct nvkm_device *device = xtensa->engine.subdev.device;
@@ -85,7 +85,7 @@ nvkm_xtensa_fini(struct nvkm_engine *engine, bool suspend)
nvkm_wr32(device, base + 0xd84, 0); /* INTR_EN */
nvkm_wr32(device, base + 0xd94, 0); /* FIFO_CTRL */
-if (!suspend)
+if (suspend == NVKM_POWEROFF)
nvkm_memory_unref(&xtensa->gpu_fw);
return 0;
}

@@ -182,7 +182,7 @@ nvkm_acr_managed_falcon(struct nvkm_device *device, enum nvkm_acr_lsf_id id)
}
static int
-nvkm_acr_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_acr_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
if (!subdev->use.enabled)
return 0;

@@ -90,7 +90,7 @@ nvkm_bar_bar2_init(struct nvkm_device *device)
}
static int
-nvkm_bar_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_bar_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_bar *bar = nvkm_bar(subdev);

@@ -577,7 +577,7 @@ nvkm_clk_read(struct nvkm_clk *clk, enum nv_clk_src src)
}
static int
-nvkm_clk_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_clk_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_clk *clk = nvkm_clk(subdev);
flush_work(&clk->work);

@@ -67,11 +67,11 @@ nvkm_devinit_post(struct nvkm_devinit *init)
}
static int
-nvkm_devinit_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_devinit_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_devinit *init = nvkm_devinit(subdev);
/* force full reinit on resume */
-if (suspend)
+if (suspend != NVKM_POWEROFF)
init->post = true;
return 0;
}

@@ -51,7 +51,7 @@ nvkm_fault_intr(struct nvkm_subdev *subdev)
}
static int
-nvkm_fault_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_fault_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_fault *fault = nvkm_fault(subdev);
if (fault->func->fini)

@@ -56,7 +56,7 @@ nvkm_ufault_map(struct nvkm_object *object, void *argv, u32 argc,
}
static int
-nvkm_ufault_fini(struct nvkm_object *object, bool suspend)
+nvkm_ufault_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
{
struct nvkm_fault_buffer *buffer = nvkm_fault_buffer(object);
buffer->fault->func->buffer.fini(buffer);

@@ -144,7 +144,7 @@ nvkm_gpio_intr(struct nvkm_subdev *subdev)
}
static int
-nvkm_gpio_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_gpio_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_gpio *gpio = nvkm_gpio(subdev);
u32 mask = (1ULL << gpio->func->lines) - 1;

@@ -48,7 +48,7 @@ nvkm_gsp_intr_stall(struct nvkm_gsp *gsp, enum nvkm_subdev_type type, int inst)
}
static int
-nvkm_gsp_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_gsp_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_gsp *gsp = nvkm_gsp(subdev);

@@ -17,7 +17,7 @@
#include <nvhw/ref/gh100/dev_riscv_pri.h>
int
-gh100_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
+gh100_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend)
{
struct nvkm_falcon *falcon = &gsp->falcon;
int ret, time = 4000;

@@ -59,7 +59,7 @@ struct nvkm_gsp_func {
void (*dtor)(struct nvkm_gsp *);
int (*oneinit)(struct nvkm_gsp *);
int (*init)(struct nvkm_gsp *);
-int (*fini)(struct nvkm_gsp *, bool suspend);
+int (*fini)(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
int (*reset)(struct nvkm_gsp *);
struct {
@@ -75,7 +75,7 @@ int tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *);
void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *);
int tu102_gsp_oneinit(struct nvkm_gsp *);
int tu102_gsp_init(struct nvkm_gsp *);
-int tu102_gsp_fini(struct nvkm_gsp *, bool suspend);
+int tu102_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
int tu102_gsp_reset(struct nvkm_gsp *);
u64 tu102_gsp_wpr_heap_size(struct nvkm_gsp *);
@@ -87,12 +87,12 @@ int ga102_gsp_reset(struct nvkm_gsp *);
int gh100_gsp_oneinit(struct nvkm_gsp *);
int gh100_gsp_init(struct nvkm_gsp *);
-int gh100_gsp_fini(struct nvkm_gsp *, bool suspend);
+int gh100_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
void r535_gsp_dtor(struct nvkm_gsp *);
int r535_gsp_oneinit(struct nvkm_gsp *);
int r535_gsp_init(struct nvkm_gsp *);
-int r535_gsp_fini(struct nvkm_gsp *, bool suspend);
+int r535_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int,
struct nvkm_gsp **);

@@ -208,7 +208,7 @@ r535_fbsr_resume(struct nvkm_gsp *gsp)
}
static int
-r535_fbsr_suspend(struct nvkm_gsp *gsp)
+r535_fbsr_suspend(struct nvkm_gsp *gsp, bool runtime)
{
struct nvkm_subdev *subdev = &gsp->subdev;
struct nvkm_device *device = subdev->device;

@@ -704,7 +704,7 @@ r535_gsp_rpc_set_registry(struct nvkm_gsp *gsp)
build_registry(gsp, rpc);
-return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOWAIT);
+return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOSEQ);
fail:
clean_registry(gsp);
@@ -921,7 +921,7 @@ r535_gsp_set_system_info(struct nvkm_gsp *gsp)
info->pciConfigMirrorSize = device->pci->func->cfg.size;
r535_gsp_acpi_info(gsp, &info->acpiMethodData);
-return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOWAIT);
+return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOSEQ);
}
static int
@@ -1721,7 +1721,7 @@ r535_gsp_sr_data_size(struct nvkm_gsp *gsp)
}
int
-r535_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
+r535_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend)
{
struct nvkm_rm *rm = gsp->rm;
int ret;
@@ -1748,7 +1748,7 @@ r535_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
sr->sysmemAddrOfSuspendResumeData = gsp->sr.radix3.lvl0.addr;
sr->sizeOfSuspendResumeData = len;
-ret = rm->api->fbsr->suspend(gsp);
+ret = rm->api->fbsr->suspend(gsp, suspend == NVKM_RUNTIME_SUSPEND);
if (ret) {
nvkm_gsp_mem_dtor(&gsp->sr.meta);
nvkm_gsp_radix3_dtor(gsp, &gsp->sr.radix3);

@@ -557,6 +557,7 @@ r535_gsp_rpc_handle_reply(struct nvkm_gsp *gsp, u32 fn,
switch (policy) {
case NVKM_GSP_RPC_REPLY_NOWAIT:
+case NVKM_GSP_RPC_REPLY_NOSEQ:
break;
case NVKM_GSP_RPC_REPLY_RECV:
reply = r535_gsp_msg_recv(gsp, fn, gsp_rpc_len);
@@ -588,6 +589,11 @@ r535_gsp_rpc_send(struct nvkm_gsp *gsp, void *payload,
rpc->data, rpc->length - sizeof(*rpc), true);
}
+if (policy == NVKM_GSP_RPC_REPLY_NOSEQ)
+rpc->sequence = 0;
+else
+rpc->sequence = gsp->rpc_seq++;
ret = r535_gsp_cmdq_push(gsp, rpc);
if (ret)
return ERR_PTR(ret);

@@ -62,7 +62,7 @@ r570_fbsr_resume(struct nvkm_gsp *gsp)
}
static int
-r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size)
+r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size, bool runtime)
{
NV2080_CTRL_INTERNAL_FBSR_INIT_PARAMS *ctrl;
struct nvkm_gsp_object memlist;
@@ -81,7 +81,7 @@ r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size)
ctrl->hClient = gsp->internal.client.object.handle;
ctrl->hSysMem = memlist.handle;
ctrl->sysmemAddrOfSuspendResumeData = gsp->sr.meta.addr;
-ctrl->bEnteringGcoffState = 1;
+ctrl->bEnteringGcoffState = runtime ? 1 : 0;
ret = nvkm_gsp_rm_ctrl_wr(&gsp->internal.device.subdevice, ctrl);
if (ret)
@@ -92,7 +92,7 @@ r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size)
}
static int
-r570_fbsr_suspend(struct nvkm_gsp *gsp)
+r570_fbsr_suspend(struct nvkm_gsp *gsp, bool runtime)
{
struct nvkm_subdev *subdev = &gsp->subdev;
struct nvkm_device *device = subdev->device;
@@ -133,7 +133,7 @@ r570_fbsr_suspend(struct nvkm_gsp *gsp)
return ret;
/* Initialise FBSR on RM. */
-ret = r570_fbsr_init(gsp, &gsp->sr.fbsr, size);
+ret = r570_fbsr_init(gsp, &gsp->sr.fbsr, size, runtime);
if (ret) {
nvkm_gsp_sg_free(device, &gsp->sr.fbsr);
return ret;

@@ -176,7 +176,7 @@ r570_gsp_set_system_info(struct nvkm_gsp *gsp)
info->bIsPrimary = video_is_primary_device(device->dev);
info->bPreserveVideoMemoryAllocations = false;
-return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOWAIT);
+return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOSEQ);
}
static void

@@ -78,7 +78,7 @@ struct nvkm_rm_api {
} *device;
const struct nvkm_rm_api_fbsr {
-int (*suspend)(struct nvkm_gsp *);
+int (*suspend)(struct nvkm_gsp *, bool runtime);
void (*resume)(struct nvkm_gsp *);
} *fbsr;

@@ -161,7 +161,7 @@ tu102_gsp_reset(struct nvkm_gsp *gsp)
}
int
-tu102_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
+tu102_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend)
{
u32 mbox0 = 0xff, mbox1 = 0xff;
int ret;

@@ -135,7 +135,7 @@ nvkm_i2c_intr(struct nvkm_subdev *subdev)
}
static int
-nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_i2c_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_i2c *i2c = nvkm_i2c(subdev);
struct nvkm_i2c_pad *pad;

@@ -176,7 +176,7 @@ nvkm_instmem_boot(struct nvkm_instmem *imem)
}
static int
-nvkm_instmem_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_instmem_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_instmem *imem = nvkm_instmem(subdev);
int ret;

@@ -74,7 +74,7 @@ nvkm_pci_rom_shadow(struct nvkm_pci *pci, bool shadow)
}
static int
-nvkm_pci_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_pci_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_pci *pci = nvkm_pci(subdev);

@@ -77,7 +77,7 @@ nvkm_pmu_intr(struct nvkm_subdev *subdev)
}
static int
-nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_pmu_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_pmu *pmu = nvkm_pmu(subdev);

@@ -341,15 +341,15 @@ nvkm_therm_intr(struct nvkm_subdev *subdev)
}
static int
-nvkm_therm_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_therm_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_therm *therm = nvkm_therm(subdev);
if (therm->func->fini)
therm->func->fini(therm);
-nvkm_therm_fan_fini(therm, suspend);
-nvkm_therm_sensor_fini(therm, suspend);
+nvkm_therm_fan_fini(therm, suspend != NVKM_POWEROFF);
+nvkm_therm_sensor_fini(therm, suspend != NVKM_POWEROFF);
if (suspend) {
therm->suspend = therm->mode;

@@ -149,7 +149,7 @@ nvkm_timer_intr(struct nvkm_subdev *subdev)
}
static int
-nvkm_timer_fini(struct nvkm_subdev *subdev, bool suspend)
+nvkm_timer_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
{
struct nvkm_timer *tmr = nvkm_timer(subdev);
tmr->func->alarm_fini(tmr);

Some files were not shown because too many files have changed in this diff Show more