Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.19-rc8).

No adjacent changes.

Conflicts:

drivers/net/ethernet/spacemit/k1_emac.c
  2c84959167 ("net: spacemit: Check for netif_carrier_ok() in emac_stats_update()")
  f66086798f ("net: spacemit: Remove broken flow control support")
https://lore.kernel.org/aXjAqZA3iEWD_DGM@sirena.org.uk

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2026-01-22 20:13:25 -08:00
commit a010fe8d86
288 changed files with 2584 additions and 1350 deletions


@ -105,7 +105,7 @@ information.
Manual fan control on the other hand, is not exposed directly by the AWCC
interface. Instead it let's us control a fan `boost` value. This `boost` value
has the following aproximate behavior over the fan pwm:
has the following approximate behavior over the fan pwm:
::
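A minimal user-space sketch of driving such a boost value, assuming the driver exposes it through the standard hwmon pwm interface (the hwmon index and a 1:1 boost-to-pwm1 mapping are assumptions; the curve above is only approximate):

  /* Hypothetical: write a fan boost value via hwmon sysfs. */
  #include <stdio.h>

  int main(void)
  {
          /* path is an assumption; locate the right hwmon index first */
          FILE *f = fopen("/sys/class/hwmon/hwmon3/pwm1", "w");

          if (!f) {
                  perror("open pwm1");
                  return 1;
          }
          fprintf(f, "%d\n", 128);        /* mid-range boost */
          fclose(f);
          return 0;
  }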


@ -231,6 +231,8 @@ eventually gets pushed out to disk. This tunable is used to define when dirty
inode is old enough to be eligible for writeback by the kernel flusher threads.
And, it is also used as the interval to wakeup dirtytime_writeback thread.
Setting this to zero disables periodic dirtytime writeback.
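For the dirtytime_expire_seconds tunable described above, a minimal sketch of reading and changing it from C (the /proc/sys/vm path follows the usual sysctl layout; adjust if your tree places it elsewhere, and note the write needs root):

  #include <stdio.h>

  int main(void)
  {
          unsigned int secs = 0;
          FILE *f = fopen("/proc/sys/vm/dirtytime_expire_seconds", "r+");

          if (!f) {
                  perror("dirtytime_expire_seconds");
                  return 1;
          }
          if (fscanf(f, "%u", &secs) == 1)
                  printf("current: %u seconds\n", secs);
          rewind(f);
          fprintf(f, "%u\n", 0u); /* 0 disables periodic dirtytime writeback */
          fclose(f);
          return 0;
  }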
dirty_writeback_centisecs
=========================


@ -7,7 +7,9 @@ ISA string ordering in /proc/cpuinfo
------------------------------------
The canonical order of ISA extension names in the ISA string is defined in
chapter 27 of the unprivileged specification.
Chapter 27 of the RISC-V Instruction Set Manual Volume I Unprivileged ISA
(Document Version 20191213).
The specification uses vague wording, such as should, when it comes to ordering,
so for our purposes the following rules apply:


@ -14,7 +14,7 @@ set of mailbox registers.
More details on the interface can be found in chapter
"7 Host System Management Port (HSMP)" of the family/model PPR
Eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
Eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
HSMP interface is supported on EPYC line of server CPUs and MI300A (APU).
@ -185,7 +185,7 @@ what happened. The transaction returns 0 on success.
More details on the interface and message definitions can be found in chapter
"7 Host System Management Port (HSMP)" of the respective family/model PPR
eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
User space C-APIs are made available by linking against the esmi library,
which is provided by the E-SMS project https://www.amd.com/en/developer/e-sms.html.
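A sketch of issuing one such mailbox transaction from user space, assuming the amd_hsmp character device interface (/dev/hsmp with struct hsmp_message and HSMP_IOCTL_CMD from <asm/amd_hsmp.h>); the message ID below is a placeholder to be taken from the PPR chapter cited above:

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <asm/amd_hsmp.h>

  int main(void)
  {
          struct hsmp_message msg = { 0 };
          int fd = open("/dev/hsmp", O_RDWR);

          if (fd < 0) {
                  perror("open /dev/hsmp");
                  return 1;
          }
          msg.msg_id = 1;         /* placeholder: pick an ID from the PPR */
          msg.num_args = 0;
          msg.response_sz = 1;    /* expect one response word back in args[] */
          msg.sock_ind = 0;       /* socket 0 */
          if (ioctl(fd, HSMP_IOCTL_CMD, &msg) < 0) {
                  perror("HSMP transaction");     /* returns 0 on success */
                  close(fd);
                  return 1;
          }
          printf("response[0] = 0x%x\n", msg.args[0]);
          close(fd);
          return 0;
  }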


@ -11,7 +11,7 @@ maintainers:
- Jitao shi <jitao.shi@mediatek.com>
description: |
MediaTek DP and eDP are different hardwares and there are some features
MediaTek DP and eDP are different hardware and there are some features
which are not supported for eDP. For example, audio is not supported for
eDP. Therefore, we need to use two different compatibles to describe them.
In addition, We just need to enable the power domain of DP, so the clock


@ -74,6 +74,37 @@ allOf:
- description: aggre UFS CARD AXI clock
- description: RPMH CC IPA clock
- if:
properties:
compatible:
contains:
enum:
- qcom,sa8775p-config-noc
- qcom,sa8775p-dc-noc
- qcom,sa8775p-gem-noc
- qcom,sa8775p-gpdsp-anoc
- qcom,sa8775p-lpass-ag-noc
- qcom,sa8775p-mmss-noc
- qcom,sa8775p-nspa-noc
- qcom,sa8775p-nspb-noc
- qcom,sa8775p-pcie-anoc
- qcom,sa8775p-system-noc
then:
properties:
clocks: false
- if:
properties:
compatible:
contains:
enum:
- qcom,sa8775p-clk-virt
- qcom,sa8775p-mc-virt
then:
properties:
reg: false
clocks: false
unevaluatedProperties: false
examples:


@ -88,7 +88,7 @@ patternProperties:
pcie1_clkreq, pcie1_wakeup, pmic0, pmic1, ptp, ptp_clk,
ptp_trig, pwm0, pwm1, pwm2, pwm3, rgmii, sdio0, sdio_sb, smi,
spi_cs1, spi_cs2, spi_cs3, spi_quad, uart1, uart2,
usb2_drvvbus1, usb32_drvvbus ]
usb2_drvvbus1, usb32_drvvbus0 ]
function:
enum: [ drvbus, emmc, gpio, i2c, jtag, led, mii, mii_err, onewire,


@ -15,7 +15,7 @@ and SB Temperature Sensor Interface (SB-TSI)).
More details on the interface can be found in chapter
"5 Advanced Platform Management Link (APML)" of the family/model PPR [1]_.
.. [1] https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
.. [1] https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50
SBRMI device


@ -0,0 +1,41 @@
.. SPDX-License-Identifier: GPL-2.0
Linux kernel project continuity
===============================
The Linux kernel development project is widely distributed, with over
100 maintainers each working to keep changes moving through their own
repositories. The final step, though, is a centralized one where changes
are pulled into the mainline repository. That is normally done by Linus
Torvalds but, as was demonstrated by the 4.19 release in 2018, there are
others who can do that work when the need arises.
Should the maintainers of that repository become unwilling or unable to
do that work going forward (including facilitating a transition), the
project will need to find one or more replacements without delay. The
process by which that will be done is listed below. $ORGANIZER is the
last Maintainer Summit organizer or the current Linux Foundation (LF)
Technical Advisory Board (TAB) Chair as a backup.
- Within 72 hours, $ORGANIZER will open a discussion with the invitees
of the most recently concluded Maintainers Summit. A meeting of those
invitees and the TAB, either online or in-person, will be set as soon
as possible in a way that maximizes the number of people who can
participate.
- If there has been no Maintainers Summit in the last 15 months, the set of
invitees for this meeting will be determined by the TAB.
- The invitees to this meeting may bring in other maintainers as needed.
- This meeting, chaired by $ORGANIZER, will consider options for the
ongoing management of the top-level kernel repository consistent with
the expectation that it maximizes the long term health of the project
and its community.
- Within two weeks, a representative of this group will communicate to the
broader community, using the ksummit@lists.linux.dev mailing list, what
the next steps will be.
The Linux Foundation, as guided by the TAB, will take the steps
necessary to support and implement this plan.


@ -68,6 +68,7 @@ beyond).
stable-kernel-rules
management-style
researcher-guidelines
conclave
Dealing with bugs
-----------------


@ -9260,7 +9260,6 @@ F: drivers/scsi/be2iscsi/
EMULEX 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER (be2net)
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>
L: netdev@vger.kernel.org
S: Maintained
W: http://www.emulex.com
@ -13167,6 +13166,7 @@ F: Documentation/devicetree/bindings/interconnect/
F: Documentation/driver-api/interconnect.rst
F: drivers/interconnect/
F: include/dt-bindings/interconnect/
F: include/linux/interconnect-clk.h
F: include/linux/interconnect-provider.h
F: include/linux/interconnect.h


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -670,7 +670,6 @@ CONFIG_PINCTRL_LPASS_LPI=m
CONFIG_PINCTRL_SC7280_LPASS_LPI=m
CONFIG_PINCTRL_SM6115_LPASS_LPI=m
CONFIG_PINCTRL_SM8250_LPASS_LPI=m
CONFIG_PINCTRL_SM8350_LPASS_LPI=m
CONFIG_PINCTRL_SM8450_LPASS_LPI=m
CONFIG_PINCTRL_SC8280XP_LPASS_LPI=m
CONFIG_PINCTRL_SM8550_LPASS_LPI=m


@ -300,6 +300,8 @@ void kvm_get_kimage_voffset(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void kvm_compute_final_ctr_el0(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void kvm_pan_patch_el2_entry(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt,
u64 elr_phys, u64 par, uintptr_t vcpu, u64 far, u64 hpfar);


@ -119,22 +119,6 @@ static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
return (unsigned long *)&vcpu->arch.hcr_el2;
}
static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu)
{
vcpu->arch.hcr_el2 &= ~HCR_TWE;
if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
vcpu->kvm->arch.vgic.nassgireq)
vcpu->arch.hcr_el2 &= ~HCR_TWI;
else
vcpu->arch.hcr_el2 |= HCR_TWI;
}
static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
{
vcpu->arch.hcr_el2 |= HCR_TWE;
vcpu->arch.hcr_el2 |= HCR_TWI;
}
static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.vsesr_el2;


@ -87,7 +87,15 @@ typedef u64 kvm_pte_t;
#define KVM_PTE_LEAF_ATTR_HI_SW GENMASK(58, 55)
#define KVM_PTE_LEAF_ATTR_HI_S1_XN BIT(54)
#define __KVM_PTE_LEAF_ATTR_HI_S1_XN BIT(54)
#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN BIT(54)
#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN BIT(53)
#define KVM_PTE_LEAF_ATTR_HI_S1_XN \
({ cpus_have_final_cap(ARM64_KVM_HVHE) ? \
(__KVM_PTE_LEAF_ATTR_HI_S1_UXN | \
__KVM_PTE_LEAF_ATTR_HI_S1_PXN) : \
__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
#define KVM_PTE_LEAF_ATTR_HI_S2_XN GENMASK(54, 53)
@ -293,8 +301,8 @@ typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end,
* children.
* @KVM_PGTABLE_WALK_SHARED: Indicates the page-tables may be shared
* with other software walkers.
* @KVM_PGTABLE_WALK_HANDLE_FAULT: Indicates the page-table walk was
* invoked from a fault handler.
* @KVM_PGTABLE_WALK_IGNORE_EAGAIN: Don't terminate the walk early if
* the walker returns -EAGAIN.
* @KVM_PGTABLE_WALK_SKIP_BBM_TLBI: Visit and update table entries
* without Break-before-make's
* TLB invalidation.
@ -307,7 +315,7 @@ enum kvm_pgtable_walk_flags {
KVM_PGTABLE_WALK_TABLE_PRE = BIT(1),
KVM_PGTABLE_WALK_TABLE_POST = BIT(2),
KVM_PGTABLE_WALK_SHARED = BIT(3),
KVM_PGTABLE_WALK_HANDLE_FAULT = BIT(4),
KVM_PGTABLE_WALK_IGNORE_EAGAIN = BIT(4),
KVM_PGTABLE_WALK_SKIP_BBM_TLBI = BIT(5),
KVM_PGTABLE_WALK_SKIP_CMO = BIT(6),
};


@ -91,7 +91,8 @@
*/
#define pstate_field(op1, op2) ((op1) << Op1_shift | (op2) << Op2_shift)
#define PSTATE_Imm_shift CRm_shift
#define SET_PSTATE(x, r) __emit_inst(0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift))
#define ENCODE_PSTATE(x, r) (0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift))
#define SET_PSTATE(x, r) __emit_inst(ENCODE_PSTATE(x, r))
#define PSTATE_PAN pstate_field(0, 4)
#define PSTATE_UAO pstate_field(0, 3)
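Splitting the raw encoding out of SET_PSTATE() lets alternative-patching code emit the instruction word directly instead of inline-assembling it; the kvm_pan_patch_el2_entry hunk later in this merge uses it exactly this way:

  /* in an alternatives callback: patch in the equivalent of SET_PSTATE_PAN(1) */
  *updptr = cpu_to_le32(ENCODE_PSTATE(1, PAN));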


@ -402,7 +402,7 @@ int swsusp_arch_suspend(void)
* Memory allocated by get_safe_page() will be dealt with by the hibernate code,
* we don't need to free it here.
*/
int swsusp_arch_resume(void)
int __nocfi swsusp_arch_resume(void)
{
int rc;
void *zero_page;


@ -86,6 +86,7 @@ KVM_NVHE_ALIAS(kvm_patch_vector_branch);
KVM_NVHE_ALIAS(kvm_update_va_mask);
KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
KVM_NVHE_ALIAS(kvm_pan_patch_el2_entry);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable);
KVM_NVHE_ALIAS(spectre_bhb_patch_wa3);


@ -968,20 +968,18 @@ static int sve_set_common(struct task_struct *target,
vq = sve_vq_from_vl(task_get_vl(target, type));
/* Enter/exit streaming mode */
if (system_supports_sme()) {
switch (type) {
case ARM64_VEC_SVE:
target->thread.svcr &= ~SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SVE);
break;
case ARM64_VEC_SME:
target->thread.svcr |= SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SME);
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
switch (type) {
case ARM64_VEC_SVE:
target->thread.svcr &= ~SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SVE);
break;
case ARM64_VEC_SME:
target->thread.svcr |= SVCR_SM_MASK;
set_tsk_thread_flag(target, TIF_SME);
break;
default:
WARN_ON_ONCE(1);
return -EINVAL;
}
/* Always zero V regs, FPSR, and FPCR */


@ -449,12 +449,28 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
if (user->sve_size < SVE_SIG_CONTEXT_SIZE(vq))
return -EINVAL;
if (sm) {
sme_alloc(current, false);
if (!current->thread.sme_state)
return -ENOMEM;
}
sve_alloc(current, true);
if (!current->thread.sve_state) {
clear_thread_flag(TIF_SVE);
return -ENOMEM;
}
if (sm) {
current->thread.svcr |= SVCR_SM_MASK;
set_thread_flag(TIF_SME);
} else {
current->thread.svcr &= ~SVCR_SM_MASK;
set_thread_flag(TIF_SVE);
}
current->thread.fp_type = FP_STATE_SVE;
err = __copy_from_user(current->thread.sve_state,
(char __user const *)user->sve +
SVE_SIG_REGS_OFFSET,
@ -462,12 +478,6 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
if (err)
return -EFAULT;
if (flags & SVE_SIG_FLAG_SM)
current->thread.svcr |= SVCR_SM_MASK;
else
set_thread_flag(TIF_SVE);
current->thread.fp_type = FP_STATE_SVE;
err = read_fpsimd_context(&fpsimd, user);
if (err)
return err;
@ -576,6 +586,10 @@ static int restore_za_context(struct user_ctxs *user)
if (user->za_size < ZA_SIG_CONTEXT_SIZE(vq))
return -EINVAL;
sve_alloc(current, false);
if (!current->thread.sve_state)
return -ENOMEM;
sme_alloc(current, true);
if (!current->thread.sme_state) {
current->thread.svcr &= ~SVCR_ZA_MASK;


@ -569,6 +569,7 @@ static bool kvm_vcpu_should_clear_twi(struct kvm_vcpu *vcpu)
return kvm_wfi_trap_policy == KVM_WFX_NOTRAP;
return single_task_running() &&
vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 &&
(atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
vcpu->kvm->arch.vgic.nassgireq);
}
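This predicate absorbs the HCR_TWI policy that the deleted vcpu_clear_wfx_traps() helper used to apply inline; a sketch of how a caller would act on it, mirroring the removed code:

  if (kvm_vcpu_should_clear_twi(vcpu))
          vcpu->arch.hcr_el2 &= ~HCR_TWI;
  else
          vcpu->arch.hcr_el2 |= HCR_TWI;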


@ -403,6 +403,7 @@ static int walk_s1(struct kvm_vcpu *vcpu, struct s1_walk_info *wi,
struct s1_walk_result *wr, u64 va)
{
u64 va_top, va_bottom, baddr, desc, new_desc, ipa;
struct kvm_s2_trans s2_trans = {};
int level, stride, ret;
level = wi->sl;
@ -420,8 +421,6 @@ static int walk_s1(struct kvm_vcpu *vcpu, struct s1_walk_info *wi,
ipa = baddr | index;
if (wi->s2) {
struct kvm_s2_trans s2_trans = {};
ret = kvm_walk_nested_s2(vcpu, ipa, &s2_trans);
if (ret) {
fail_s1_walk(wr,
@ -515,6 +514,11 @@ static int walk_s1(struct kvm_vcpu *vcpu, struct s1_walk_info *wi,
new_desc |= PTE_AF;
if (new_desc != desc) {
if (wi->s2 && !kvm_s2_trans_writable(&s2_trans)) {
fail_s1_walk(wr, ESR_ELx_FSC_PERM_L(level), true);
return -EPERM;
}
ret = kvm_swap_s1_desc(vcpu, ipa, desc, new_desc, wi);
if (ret)
return ret;


@ -126,7 +126,9 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
add x1, x1, #VCPU_CONTEXT
ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
alternative_cb ARM64_ALWAYS_SYSTEM, kvm_pan_patch_el2_entry
nop
alternative_cb_end
// Store the guest regs x2 and x3
stp x2, x3, [x1, #CPU_XREG_OFFSET(2)]


@ -854,7 +854,7 @@ static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
return false;
}
static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu)
{
/*
* Check for the conditions of Cortex-A510's #2077057. When these occur


@ -180,6 +180,9 @@ static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt)
/* Propagate WFx trapping flags */
hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
} else {
memcpy(&hyp_vcpu->vcpu.arch.fgt, hyp_vcpu->host_vcpu->arch.fgt,
sizeof(hyp_vcpu->vcpu.arch.fgt));
}
}


@ -172,7 +172,6 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
/* Trust the host for non-protected vcpu features. */
vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2;
memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt));
return 0;
}


@ -211,7 +211,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
{
const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
synchronize_vcpu_pstate(vcpu, exit_code);
synchronize_vcpu_pstate(vcpu);
/*
* Some guests (e.g., protected VMs) are not be allowed to run in


@ -144,7 +144,7 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
* page table walk.
*/
if (r == -EAGAIN)
return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT);
return walker->flags & KVM_PGTABLE_WALK_IGNORE_EAGAIN;
return !r;
}
@ -1262,7 +1262,8 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
{
return stage2_update_leaf_attrs(pgt, addr, size, 0,
KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W,
NULL, NULL, 0);
NULL, NULL,
KVM_PGTABLE_WALK_IGNORE_EAGAIN);
}
void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,


@ -536,7 +536,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
{
synchronize_vcpu_pstate(vcpu, exit_code);
synchronize_vcpu_pstate(vcpu);
/*
* If we were in HYP context on entry, adjust the PSTATE view


@ -497,7 +497,7 @@ static int share_pfn_hyp(u64 pfn)
this->count = 1;
rb_link_node(&this->node, parent, node);
rb_insert_color(&this->node, &hyp_shared_pfns);
ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn, 1);
ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
unlock:
mutex_unlock(&hyp_shared_pfns_lock);
@ -523,7 +523,7 @@ static int unshare_pfn_hyp(u64 pfn)
rb_erase(&this->node, &hyp_shared_pfns);
kfree(this);
ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn, 1);
ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
unlock:
mutex_unlock(&hyp_shared_pfns_lock);
@ -1563,14 +1563,12 @@ static void adjust_nested_exec_perms(struct kvm *kvm,
*prot &= ~KVM_PGTABLE_PROT_PX;
}
#define KVM_PGTABLE_WALK_MEMABORT_FLAGS (KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED)
static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_s2_trans *nested,
struct kvm_memory_slot *memslot, bool is_perm)
{
bool write_fault, exec_fault, writable;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt;
unsigned long mmu_seq;
@ -1665,7 +1663,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_pgtable *pgt;
struct page *page;
vm_flags_t vm_flags;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
if (fault_is_perm)
fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@ -1933,7 +1931,7 @@ out_unlock:
/* Resolve the access fault by making the page young again. */
static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
{
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
struct kvm_s2_mmu *mmu;
trace_kvm_access_fault(fault_ipa);


@ -4668,7 +4668,10 @@ static void perform_access(struct kvm_vcpu *vcpu,
* that we don't know how to handle. This certainly qualifies
* as a gross bug that should be fixed right away.
*/
BUG_ON(!r->access);
if (!r->access) {
bad_trap(vcpu, params, r, "register access");
return;
}
/* Skip instruction if instructed so */
if (likely(r->access(vcpu, params, r)))


@ -296,3 +296,31 @@ void kvm_compute_final_ctr_el0(struct alt_instr *alt,
generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0),
origptr, updptr, nr_inst);
}
void kvm_pan_patch_el2_entry(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
/*
* If we're running at EL1 without hVHE, then SCTLR_EL2.SPAN means
* nothing to us (it is RES1), and we don't need to set PSTATE.PAN
* to anything useful.
*/
if (!is_kernel_in_hyp_mode() && !cpus_have_cap(ARM64_KVM_HVHE))
return;
/*
* Leap of faith: at this point, we must be running VHE one way or
* another, and FEAT_PAN is required to be implemented. If KVM
* explodes at runtime because your system does not abide by this
* requirement, call your favourite HW vendor, they have screwed up.
*
* We don't expect hVHE to access any userspace mapping, so always
* set PSTATE.PAN on enty. Same thing if we have PAN enabled on an
* EL2 kernel. Only force it to 0 if we have not configured PAN in
* the kernel (and you know this is really silly).
*/
if (cpus_have_cap(ARM64_KVM_HVHE) || IS_ENABLED(CONFIG_ARM64_PAN))
*updptr = cpu_to_le32(ENCODE_PSTATE(1, PAN));
else
*updptr = cpu_to_le32(ENCODE_PSTATE(0, PAN));
}


@ -84,6 +84,7 @@ config ERRATA_STARFIVE_JH7100
select DMA_GLOBAL_POOL
select RISCV_DMA_NONCOHERENT
select RISCV_NONSTANDARD_CACHE_OPS
select CACHEMAINT_FOR_DMA
select SIFIVE_CCACHE
default n
help


@ -97,13 +97,23 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm, unsigne
*/
#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
/*
* Use a temporary variable for the output of the asm goto to avoid a
* triggering an LLVM assertion due to sign extending the output when
* it is used in later function calls:
* https://github.com/llvm/llvm-project/issues/143795
*/
#define __get_user_asm(insn, x, ptr, label) \
do { \
u64 __tmp; \
asm_goto_output( \
"1:\n" \
" " insn " %0, %1\n" \
_ASM_EXTABLE_UACCESS_ERR(1b, %l2, %0) \
: "=&r" (x) \
: "m" (*(ptr)) : : label)
: "=&r" (__tmp) \
: "m" (*(ptr)) : : label); \
(x) = (__typeof__(x))(unsigned long)__tmp; \
} while (0)
#else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
#define __get_user_asm(insn, x, ptr, label) \
do { \


@ -51,10 +51,11 @@ void suspend_restore_csrs(struct suspend_context *context)
#ifdef CONFIG_MMU
if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SSTC)) {
csr_write(CSR_STIMECMP, context->stimecmp);
#if __riscv_xlen < 64
csr_write(CSR_STIMECMP, ULONG_MAX);
csr_write(CSR_STIMECMPH, context->stimecmph);
#endif
csr_write(CSR_STIMECMP, context->stimecmp);
}
csr_write(CSR_SATP, context->satp);
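The reordering above (and the matching KVM and clocksource hunks below) encodes one rv32 idiom: when a 64-bit compare value is split across two 32-bit CSRs, park the low half at ULONG_MAX first so the comparator cannot transiently match while the high half changes; condensed:

  /* rv32: safely program a 64-bit timer compare value (next) */
  csr_write(CSR_STIMECMP, ULONG_MAX);           /* no match possible */
  csr_write(CSR_STIMECMPH, (u32)(next >> 32));  /* set the high half */
  csr_write(CSR_STIMECMP, (u32)next);           /* arm the real low half */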


@ -72,8 +72,9 @@ static int kvm_riscv_vcpu_timer_cancel(struct kvm_vcpu_timer *t)
static int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles)
{
#if defined(CONFIG_32BIT)
ncsr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF);
ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
ncsr_write(CSR_VSTIMECMPH, ncycles >> 32);
ncsr_write(CSR_VSTIMECMP, (u32)ncycles);
#else
ncsr_write(CSR_VSTIMECMP, ncycles);
#endif
@ -307,8 +308,9 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
return;
#if defined(CONFIG_32BIT)
ncsr_write(CSR_VSTIMECMP, (u32)t->next_cycles);
ncsr_write(CSR_VSTIMECMP, ULONG_MAX);
ncsr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32));
ncsr_write(CSR_VSTIMECMP, (u32)(t->next_cycles));
#else
ncsr_write(CSR_VSTIMECMP, t->next_cycles);
#endif


@ -137,6 +137,15 @@ SECTIONS
}
_end = .;
/* Sections to be discarded */
/DISCARD/ : {
COMMON_DISCARDS
*(.eh_frame)
*(*__ksymtab*)
*(___kcrctab*)
*(.modinfo)
}
DWARF_DEBUG
ELF_DETAILS
@ -161,12 +170,4 @@ SECTIONS
*(.rela.*) *(.rela_*)
}
ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) detected!")
/* Sections to be discarded */
/DISCARD/ : {
COMMON_DISCARDS
*(.eh_frame)
*(*__ksymtab*)
*(___kcrctab*)
}
}


@ -28,7 +28,7 @@ KBUILD_CFLAGS_VDSO := $(filter-out -mno-pic-data-is-text-relative,$(KBUILD_CFLAG
KBUILD_CFLAGS_VDSO := $(filter-out -munaligned-symbols,$(KBUILD_CFLAGS_VDSO))
KBUILD_CFLAGS_VDSO := $(filter-out -fno-asynchronous-unwind-tables,$(KBUILD_CFLAGS_VDSO))
KBUILD_CFLAGS_VDSO += -fPIC -fno-common -fno-builtin -fasynchronous-unwind-tables
KBUILD_CFLAGS_VDSO += -fno-stack-protector
KBUILD_CFLAGS_VDSO += -fno-stack-protector $(DISABLE_KSTACK_ERASE)
ldflags-y := -shared -soname=linux-vdso.so.1 \
--hash-style=both --build-id=sha1 -T


@ -1574,13 +1574,22 @@ static inline bool intel_pmu_has_bts_period(struct perf_event *event, u64 period
struct hw_perf_event *hwc = &event->hw;
unsigned int hw_event, bts_event;
if (event->attr.freq)
/*
* Only use BTS for fixed rate period==1 events.
*/
if (event->attr.freq || period != 1)
return false;
/*
* BTS doesn't virtualize.
*/
if (event->attr.exclude_host)
return false;
hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
return hw_event == bts_event && period == 1;
return hw_event == bts_event;
}
static inline bool intel_pmu_has_bts(struct perf_event *event)


@ -821,8 +821,6 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
force_sig_pkuerr((void __user *)address, pkey);
else
force_sig_fault(SIGSEGV, si_code, (void __user *)address);
local_irq_disable();
}
static noinline void
@ -1474,15 +1472,12 @@ handle_page_fault(struct pt_regs *regs, unsigned long error_code,
do_kern_addr_fault(regs, error_code, address);
} else {
do_user_addr_fault(regs, error_code, address);
/*
* User address page fault handling might have reenabled
* interrupts. Fixing up all potential exit points of
* do_user_addr_fault() and its leaf functions is just not
* doable w/o creating an unholy mess or turning the code
* upside down.
*/
local_irq_disable();
}
/*
* page fault handling might have reenabled interrupts,
* make sure to disable them again.
*/
local_irq_disable();
}
DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)


@ -1480,7 +1480,7 @@ EXPORT_SYMBOL_GPL(blk_rq_is_poll);
static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
{
do {
blk_hctx_poll(rq->q, rq->mq_hctx, NULL, 0);
blk_hctx_poll(rq->q, rq->mq_hctx, NULL, BLK_POLL_ONESHOT);
cond_resched();
} while (!completion_done(wait));
}


@ -1957,6 +1957,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
disk->nr_zones = args->nr_zones;
if (args->nr_conv_zones >= disk->nr_zones) {
queue_limits_cancel_update(q);
pr_warn("%s: Invalid number of conventional zones %u / %u\n",
disk->disk_name, args->nr_conv_zones, disk->nr_zones);
ret = -ENODEV;


@ -169,6 +169,9 @@ static int crypto_authenc_esn_encrypt(struct aead_request *req)
struct scatterlist *src, *dst;
int err;
if (assoclen < 8)
return -EINVAL;
sg_init_table(areq_ctx->src, 2);
src = scatterwalk_ffwd(areq_ctx->src, req->src, assoclen);
dst = src;
@ -256,6 +259,9 @@ static int crypto_authenc_esn_decrypt(struct aead_request *req)
u32 tmp[2];
int err;
if (assoclen < 8)
return -EINVAL;
cryptlen -= authsize;
if (req->src != dst)


@ -548,6 +548,8 @@ static DEVICE_ATTR_RW(state_synced);
static void device_unbind_cleanup(struct device *dev)
{
devres_release_all(dev);
if (dev->driver->p_cb.post_unbind_rust)
dev->driver->p_cb.post_unbind_rust(dev);
arch_teardown_dma_ops(dev);
kfree(dev->dma_range_map);
dev->dma_range_map = NULL;


@ -95,12 +95,13 @@ static int regcache_maple_write(struct regmap *map, unsigned int reg,
mas_unlock(&mas);
if (ret == 0) {
kfree(lower);
kfree(upper);
if (ret) {
kfree(entry);
return ret;
}
return ret;
kfree(lower);
kfree(upper);
return 0;
}
static int regcache_maple_drop(struct regmap *map, unsigned int min,


@ -408,9 +408,11 @@ static void regmap_lock_hwlock_irq(void *__map)
static void regmap_lock_hwlock_irqsave(void *__map)
{
struct regmap *map = __map;
unsigned long flags = 0;
hwspin_lock_timeout_irqsave(map->hwlock, UINT_MAX,
&map->spinlock_flags);
&flags);
map->spinlock_flags = flags;
}
static void regmap_unlock_hwlock(void *__map)


@ -2885,6 +2885,15 @@ static struct ublk_device *ublk_get_device_from_id(int idx)
return ub;
}
static bool ublk_validate_user_pid(struct ublk_device *ub, pid_t ublksrv_pid)
{
rcu_read_lock();
ublksrv_pid = pid_nr(find_vpid(ublksrv_pid));
rcu_read_unlock();
return ub->ublksrv_tgid == ublksrv_pid;
}
static int ublk_ctrl_start_dev(struct ublk_device *ub,
const struct ublksrv_ctrl_cmd *header)
{
@ -2953,7 +2962,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub,
if (wait_for_completion_interruptible(&ub->completion) != 0)
return -EINTR;
if (ub->ublksrv_tgid != ublksrv_pid)
if (!ublk_validate_user_pid(ub, ublksrv_pid))
return -EINVAL;
mutex_lock(&ub->mutex);
@ -2972,7 +2981,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub,
disk->fops = &ub_fops;
disk->private_data = ub;
ub->dev_info.ublksrv_pid = ublksrv_pid;
ub->dev_info.ublksrv_pid = ub->ublksrv_tgid;
ub->ub_disk = disk;
ublk_apply_params(ub);
@ -3320,12 +3329,32 @@ static int ublk_ctrl_stop_dev(struct ublk_device *ub)
static int ublk_ctrl_get_dev_info(struct ublk_device *ub,
const struct ublksrv_ctrl_cmd *header)
{
struct task_struct *p;
struct pid *pid;
struct ublksrv_ctrl_dev_info dev_info;
pid_t init_ublksrv_tgid = ub->dev_info.ublksrv_pid;
void __user *argp = (void __user *)(unsigned long)header->addr;
if (header->len < sizeof(struct ublksrv_ctrl_dev_info) || !header->addr)
return -EINVAL;
if (copy_to_user(argp, &ub->dev_info, sizeof(ub->dev_info)))
memcpy(&dev_info, &ub->dev_info, sizeof(dev_info));
dev_info.ublksrv_pid = -1;
if (init_ublksrv_tgid > 0) {
rcu_read_lock();
pid = find_pid_ns(init_ublksrv_tgid, &init_pid_ns);
p = pid_task(pid, PIDTYPE_TGID);
if (p) {
int vnr = task_tgid_vnr(p);
if (vnr)
dev_info.ublksrv_pid = vnr;
}
rcu_read_unlock();
}
if (copy_to_user(argp, &dev_info, sizeof(dev_info)))
return -EFAULT;
return 0;
@ -3470,7 +3499,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__,
header->dev_id);
if (ub->ublksrv_tgid != ublksrv_pid)
if (!ublk_validate_user_pid(ub, ublksrv_pid))
return -EINVAL;
mutex_lock(&ub->mutex);
@ -3481,7 +3510,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
ret = -EBUSY;
goto out_unlock;
}
ub->dev_info.ublksrv_pid = ublksrv_pid;
ub->dev_info.ublksrv_pid = ub->ublksrv_tgid;
ub->dev_info.state = UBLK_S_DEV_LIVE;
pr_devel("%s: new ublksrv_pid %d, dev id %d\n",
__func__, ublksrv_pid, header->dev_id);
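Both helpers added in this file are PID-namespace translations; condensed to the two directions, using only the pid API the diff itself relies on (variable names here are illustrative):

  struct task_struct *p;
  pid_t global, vnr = -1;

  /* caller-namespace PID -> global PID number (ublk_validate_user_pid) */
  rcu_read_lock();
  global = pid_nr(find_vpid(user_pid));
  rcu_read_unlock();

  /* global TGID -> number as seen from the reader's PID namespace
   * (ublk_ctrl_get_dev_info) */
  rcu_read_lock();
  p = pid_task(find_pid_ns(global, &init_pid_ns), PIDTYPE_TGID);
  if (p)
          vnr = task_tgid_vnr(p);
  rcu_read_unlock();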


@ -50,8 +50,9 @@ static int riscv_clock_next_event(unsigned long delta,
if (static_branch_likely(&riscv_sstc_available)) {
#if defined(CONFIG_32BIT)
csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
csr_write(CSR_STIMECMP, ULONG_MAX);
csr_write(CSR_STIMECMPH, next_tval >> 32);
csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
#else
csr_write(CSR_STIMECMP, next_tval);
#endif


@ -1155,7 +1155,7 @@ static int do_chaninfo_ioctl(struct comedi_device *dev,
for (i = 0; i < s->n_chan; i++) {
int x;
x = (dev->minor << 28) | (it->subdev << 24) | (i << 16) |
x = (it->subdev << 24) | (i << 16) |
(s->range_table_list[i]->length);
if (put_user(x, it->rangelist + i))
return -EFAULT;


@ -330,6 +330,7 @@ static int dmm32at_ai_cmdtest(struct comedi_device *dev,
static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
{
unsigned long irq_flags;
unsigned char lo1, lo2, hi2;
unsigned short both2;
@ -342,6 +343,9 @@ static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
/* set counter clocks to 10MHz, disable all aux dio */
outb(0, dev->iobase + DMM32AT_CTRDIO_CFG_REG);
/* serialize access to control register and paged registers */
spin_lock_irqsave(&dev->spinlock, irq_flags);
/* get access to the clock regs */
outb(DMM32AT_CTRL_PAGE_8254, dev->iobase + DMM32AT_CTRL_REG);
@ -354,6 +358,8 @@ static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
outb(lo2, dev->iobase + DMM32AT_CLK2);
outb(hi2, dev->iobase + DMM32AT_CLK2);
spin_unlock_irqrestore(&dev->spinlock, irq_flags);
/* enable the ai conversion interrupt and the clock to start scans */
outb(DMM32AT_INTCLK_ADINT |
DMM32AT_INTCLK_CLKEN | DMM32AT_INTCLK_CLKSEL,
@ -363,13 +369,19 @@ static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec)
static int dmm32at_ai_cmd(struct comedi_device *dev, struct comedi_subdevice *s)
{
struct comedi_cmd *cmd = &s->async->cmd;
unsigned long irq_flags;
int ret;
dmm32at_ai_set_chanspec(dev, s, cmd->chanlist[0], cmd->chanlist_len);
/* serialize access to control register and paged registers */
spin_lock_irqsave(&dev->spinlock, irq_flags);
/* reset the interrupt just in case */
outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG);
spin_unlock_irqrestore(&dev->spinlock, irq_flags);
/*
* wait for circuit to settle
* we don't have the 'insn' here but it's not needed
@ -429,8 +441,13 @@ static irqreturn_t dmm32at_isr(int irq, void *d)
comedi_handle_events(dev, s);
}
/* serialize access to control register and paged registers */
spin_lock(&dev->spinlock);
/* reset the interrupt */
outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG);
spin_unlock(&dev->spinlock);
return IRQ_HANDLED;
}
@ -481,14 +498,25 @@ static int dmm32at_ao_insn_write(struct comedi_device *dev,
static int dmm32at_8255_io(struct comedi_device *dev,
int dir, int port, int data, unsigned long regbase)
{
unsigned long irq_flags;
int ret;
/* serialize access to control register and paged registers */
spin_lock_irqsave(&dev->spinlock, irq_flags);
/* get access to the DIO regs */
outb(DMM32AT_CTRL_PAGE_8255, dev->iobase + DMM32AT_CTRL_REG);
if (dir) {
outb(data, dev->iobase + regbase + port);
return 0;
ret = 0;
} else {
ret = inb(dev->iobase + regbase + port);
}
return inb(dev->iobase + regbase + port);
spin_unlock_irqrestore(&dev->spinlock, irq_flags);
return ret;
}
/* Make sure the board is there and put it to a known state */
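Every hunk in this file applies one rule: the 8254/8255 registers sit behind a page select in the control register, so each select-then-access sequence, including the ISR's interrupt reset, must hold dev->spinlock to keep another context from flipping the page mid-sequence; the pattern, condensed:

  spin_lock_irqsave(&dev->spinlock, flags);
  outb(DMM32AT_CTRL_PAGE_8254, dev->iobase + DMM32AT_CTRL_REG); /* select page */
  /* ... access the paged registers ... */
  spin_unlock_irqrestore(&dev->spinlock, flags);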


@ -52,7 +52,7 @@ int do_rangeinfo_ioctl(struct comedi_device *dev,
const struct comedi_lrange *lr;
struct comedi_subdevice *s;
subd = (it->range_type >> 24) & 0xf;
subd = (it->range_type >> 24) & 0xff;
chan = (it->range_type >> 16) & 0xff;
if (!dev->attached)
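Taken together with the do_chaninfo_ioctl hunk earlier (which drops the minor number from bits 28-31), the packed range_type layout both fixes agree on is roughly:

  /* sketch of the packed word:
   *   bits 24..31  subdevice  -> (x >> 24) & 0xff
   *   bits 16..23  channel    -> (x >> 16) & 0xff
   *   bits  0..15  range table length
   */
  x = (subdev << 24) | (chan << 16) | length;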


@ -2549,6 +2549,7 @@ static int lineinfo_changed_notify(struct notifier_block *nb,
ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
if (!ctx) {
pr_err("Failed to allocate memory for line info notification\n");
fput(fp);
return NOTIFY_DONE;
}
@ -2696,7 +2697,7 @@ static int gpio_chrdev_open(struct inode *inode, struct file *file)
cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
if (!cdev)
return -ENODEV;
return -ENOMEM;
cdev->watched_lines = bitmap_zalloc(gdev->ngpio, GFP_KERNEL);
if (!cdev->watched_lines)
@ -2796,13 +2797,18 @@ int gpiolib_cdev_register(struct gpio_device *gdev, dev_t devt)
return -ENOMEM;
ret = cdev_device_add(&gdev->chrdev, &gdev->dev);
if (ret)
if (ret) {
destroy_workqueue(gdev->line_state_wq);
return ret;
}
guard(srcu)(&gdev->srcu);
gc = srcu_dereference(gdev->chip, &gdev->srcu);
if (!gc)
if (!gc) {
cdev_device_del(&gdev->chrdev, &gdev->dev);
destroy_workqueue(gdev->line_state_wq);
return -ENODEV;
}
gpiochip_dbg(gc, "added GPIO chardev (%d:%d)\n", MAJOR(devt), gdev->id);


@ -515,7 +515,7 @@ int gpio_device_setup_shared(struct gpio_device *gdev)
{
struct gpio_shared_entry *entry;
struct gpio_shared_ref *ref;
unsigned long *flags;
struct gpio_desc *desc;
int ret;
list_for_each_entry(entry, &gpio_shared_list, list) {
@ -543,15 +543,17 @@ int gpio_device_setup_shared(struct gpio_device *gdev)
if (list_count_nodes(&entry->refs) <= 1)
continue;
flags = &gdev->descs[entry->offset].flags;
desc = &gdev->descs[entry->offset];
__set_bit(GPIOD_FLAG_SHARED, flags);
__set_bit(GPIOD_FLAG_SHARED, &desc->flags);
/*
* Shared GPIOs are not requested via the normal path. Make
* them inaccessible to anyone even before we register the
* chip.
*/
__set_bit(GPIOD_FLAG_REQUESTED, flags);
ret = gpiod_request_commit(desc, "shared");
if (ret)
return ret;
pr_debug("GPIO %u owned by %s is shared by multiple consumers\n",
entry->offset, gpio_device_get_label(gdev));
@ -562,8 +564,10 @@ int gpio_device_setup_shared(struct gpio_device *gdev)
ref->con_id ?: "(none)");
ret = gpio_shared_make_adev(gdev, entry, ref);
if (ret)
if (ret) {
gpiod_free_commit(desc);
return ret;
}
}
}
@ -579,6 +583,8 @@ void gpio_device_teardown_shared(struct gpio_device *gdev)
if (!device_match_fwnode(&gdev->dev, entry->fwnode))
continue;
gpiod_free_commit(&gdev->descs[entry->offset]);
list_for_each_entry(ref, &entry->refs, list) {
guard(mutex)(&ref->lock);


@ -2453,7 +2453,7 @@ EXPORT_SYMBOL_GPL(gpiochip_remove_pin_ranges);
* on each other, and help provide better diagnostics in debugfs.
* They're called even less than the "set direction" calls.
*/
static int gpiod_request_commit(struct gpio_desc *desc, const char *label)
int gpiod_request_commit(struct gpio_desc *desc, const char *label)
{
unsigned int offset;
int ret;
@ -2515,7 +2515,7 @@ int gpiod_request(struct gpio_desc *desc, const char *label)
return ret;
}
static void gpiod_free_commit(struct gpio_desc *desc)
void gpiod_free_commit(struct gpio_desc *desc)
{
unsigned long flags;


@ -244,7 +244,9 @@ DEFINE_CLASS(gpio_chip_guard,
struct gpio_desc *desc)
int gpiod_request(struct gpio_desc *desc, const char *label);
int gpiod_request_commit(struct gpio_desc *desc, const char *label);
void gpiod_free(struct gpio_desc *desc);
void gpiod_free_commit(struct gpio_desc *desc);
static inline int gpiod_request_user(struct gpio_desc *desc, const char *label)
{


@ -210,7 +210,7 @@ config DRM_GPUVM
config DRM_GPUSVM
tristate
depends on DRM && DEVICE_PRIVATE
depends on DRM
select HMM_MIRROR
select MMU_NOTIFIER
help


@ -108,8 +108,10 @@ obj-$(CONFIG_DRM_EXEC) += drm_exec.o
obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o
drm_gpusvm_helper-y := \
drm_gpusvm.o\
drm_gpusvm.o
drm_gpusvm_helper-$(CONFIG_ZONE_DEVICE) += \
drm_pagemap.o
obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o


@ -763,7 +763,7 @@ void amdgpu_fence_save_wptr(struct amdgpu_fence *af)
}
static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring,
u64 start_wptr, u32 end_wptr)
u64 start_wptr, u64 end_wptr)
{
unsigned int first_idx = start_wptr & ring->buf_mask;
unsigned int last_idx = end_wptr & ring->buf_mask;


@ -733,8 +733,10 @@ int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready) {
if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid)
return 0;
if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid) {
r = 0;
goto error_unlock_reset;
}
if (adev->gmc.flush_tlb_needs_extra_type_2)
adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,


@ -302,7 +302,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
if (job && job->vmid)
amdgpu_vmid_reset(adev, ring->vm_hub, job->vmid);
amdgpu_ring_undo(ring);
return r;
goto free_fence;
}
*f = &af->base;
/* get a ref for the job */


@ -217,8 +217,11 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
if (!entity)
return 0;
return drm_sched_job_init(&(*job)->base, entity, 1, owner,
drm_client_id);
r = drm_sched_job_init(&(*job)->base, entity, 1, owner, drm_client_id);
if (!r)
return 0;
kfree((*job)->hw_vm_fence);
err_fence:
kfree((*job)->hw_fence);


@ -278,7 +278,6 @@ static void gfx_v12_0_select_se_sh(struct amdgpu_device *adev, u32 se_num,
u32 sh_num, u32 instance, int xcc_id);
static u32 gfx_v12_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev);
static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure);
static void gfx_v12_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
uint32_t val);
static int gfx_v12_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev);
@ -4634,16 +4633,6 @@ static int gfx_v12_0_ring_preempt_ib(struct amdgpu_ring *ring)
return r;
}
static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring,
bool start,
bool secure)
{
uint32_t v = secure ? FRAME_TMZ : 0;
amdgpu_ring_write(ring, PACKET3(PACKET3_FRAME_CONTROL, 0));
amdgpu_ring_write(ring, v | FRAME_CMD(start ? 0 : 1));
}
static void gfx_v12_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg,
uint32_t reg_val_offs)
{
@ -5520,7 +5509,6 @@ static const struct amdgpu_ring_funcs gfx_v12_0_ring_funcs_gfx = {
.emit_cntxcntl = gfx_v12_0_ring_emit_cntxcntl,
.init_cond_exec = gfx_v12_0_ring_emit_init_cond_exec,
.preempt_ib = gfx_v12_0_ring_preempt_ib,
.emit_frame_cntl = gfx_v12_0_ring_emit_frame_cntl,
.emit_wreg = gfx_v12_0_ring_emit_wreg,
.emit_reg_wait = gfx_v12_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,


@ -120,8 +120,7 @@ static inline bool kfd_dbg_has_gws_support(struct kfd_node *dev)
&& dev->kfd->mec2_fw_version < 0x1b6) ||
(KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1)
&& dev->kfd->mec2_fw_version < 0x30) ||
(KFD_GC_VERSION(dev) >= IP_VERSION(11, 0, 0) &&
KFD_GC_VERSION(dev) < IP_VERSION(12, 0, 0)))
kfd_dbg_has_cwsr_workaround(dev))
return false;
/* Assume debugging and cooperative launch supported otherwise. */


@ -79,7 +79,6 @@ int amdgpu_dm_initialize_default_pipeline(struct drm_plane *plane, struct drm_pr
goto cleanup;
list->type = ops[i]->base.id;
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[i]->base.id);
i++;
@ -197,6 +196,9 @@ int amdgpu_dm_initialize_default_pipeline(struct drm_plane *plane, struct drm_pr
goto cleanup;
drm_colorop_set_next_property(ops[i-1], ops[i]);
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[0]->base.id);
return 0;
cleanup:


@ -248,8 +248,6 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
struct vblank_control_work *vblank_work =
container_of(work, struct vblank_control_work, work);
struct amdgpu_display_manager *dm = vblank_work->dm;
struct amdgpu_device *adev = drm_to_adev(dm->ddev);
int r;
mutex_lock(&dm->dc_lock);
@ -279,16 +277,7 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
if (dm->active_vblank_irq_count == 0) {
dc_post_update_surfaces_to_stream(dm->dc);
r = amdgpu_dpm_pause_power_profile(adev, true);
if (r)
dev_warn(adev->dev, "failed to set default power profile mode\n");
dc_allow_idle_optimizations(dm->dc, true);
r = amdgpu_dpm_pause_power_profile(adev, false);
if (r)
dev_warn(adev->dev, "failed to restore the power profile mode\n");
}
mutex_unlock(&dm->dc_lock);


@ -915,13 +915,19 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
struct amdgpu_dm_connector *amdgpu_dm_connector;
const struct dc_link *dc_link;
use_polling |= connector->polled != DRM_CONNECTOR_POLL_HPD;
if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
continue;
amdgpu_dm_connector = to_amdgpu_dm_connector(connector);
/*
* Analog connectors may be hot-plugged unlike other connector
* types that don't support HPD. Only poll analog connectors.
*/
use_polling |=
amdgpu_dm_connector->dc_link &&
dc_connector_supports_analog(amdgpu_dm_connector->dc_link->link_id.id);
dc_link = amdgpu_dm_connector->dc_link;
/*


@ -1790,12 +1790,13 @@ dm_atomic_plane_get_property(struct drm_plane *plane,
static int
dm_plane_init_colorops(struct drm_plane *plane)
{
struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES];
struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {};
struct drm_device *dev = plane->dev;
struct amdgpu_device *adev = drm_to_adev(dev);
struct dc *dc = adev->dm.dc;
int len = 0;
int ret;
int ret = 0;
int i;
if (plane->type == DRM_PLANE_TYPE_CURSOR)
return 0;
@ -1806,7 +1807,7 @@ dm_plane_init_colorops(struct drm_plane *plane)
if (ret) {
drm_err(plane->dev, "Failed to create color pipeline for plane %d: %d\n",
plane->base.id, ret);
return ret;
goto out;
}
len++;
@ -1814,7 +1815,11 @@ dm_plane_init_colorops(struct drm_plane *plane)
drm_plane_create_color_pipeline_property(plane, pipelines, len);
}
return 0;
out:
for (i = 0; i < len; i++)
kfree(pipelines[i].name);
return ret;
}
#endif


@ -2273,8 +2273,6 @@ static int si_populate_smc_tdp_limits(struct amdgpu_device *adev,
if (scaling_factor == 0)
return -EINVAL;
memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE));
ret = si_calculate_adjusted_tdp_limits(adev,
false, /* ??? */
adev->pm.dpm.tdp_adjustment,
@ -2283,6 +2281,12 @@ static int si_populate_smc_tdp_limits(struct amdgpu_device *adev,
if (ret)
return ret;
if (adev->pdev->device == 0x6611 && adev->pdev->revision == 0x87) {
/* Workaround buggy powertune on Radeon 430 and 520. */
tdp_limit = 32;
near_tdp_limit = 28;
}
smc_table->dpm2Params.TDPLimit =
cpu_to_be32(si_scale_power_for_smc(tdp_limit, scaling_factor) * 1000);
smc_table->dpm2Params.NearTDPLimit =
@ -2328,16 +2332,8 @@ static int si_populate_smc_tdp_limits_2(struct amdgpu_device *adev,
if (ni_pi->enable_power_containment) {
SISLANDS_SMC_STATETABLE *smc_table = &si_pi->smc_statetable;
u32 scaling_factor = si_get_smc_power_scaling_factor(adev);
int ret;
memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE));
smc_table->dpm2Params.NearTDPLimit =
cpu_to_be32(si_scale_power_for_smc(adev->pm.dpm.near_tdp_limit_adjusted, scaling_factor) * 1000);
smc_table->dpm2Params.SafePowerLimit =
cpu_to_be32(si_scale_power_for_smc((adev->pm.dpm.near_tdp_limit_adjusted * SISLANDS_DPM2_TDP_SAFE_LIMIT_PERCENT) / 100, scaling_factor) * 1000);
ret = amdgpu_si_copy_bytes_to_smc(adev,
(si_pi->state_table_start +
offsetof(SISLANDS_SMC_STATETABLE, dpm2Params) +
@ -3473,10 +3469,15 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
(adev->pdev->revision == 0x80) ||
(adev->pdev->revision == 0x81) ||
(adev->pdev->revision == 0x83) ||
(adev->pdev->revision == 0x87) ||
(adev->pdev->revision == 0x87 &&
adev->pdev->device != 0x6611) ||
(adev->pdev->device == 0x6604) ||
(adev->pdev->device == 0x6605)) {
max_sclk = 75000;
} else if (adev->pdev->revision == 0x87 &&
adev->pdev->device == 0x6611) {
/* Radeon 430 and 520 */
max_sclk = 78000;
}
}
@ -7600,12 +7601,12 @@ static int si_dpm_set_interrupt_state(struct amdgpu_device *adev,
case AMDGPU_IRQ_STATE_DISABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
case AMDGPU_IRQ_STATE_ENABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
default:
break;
@ -7617,12 +7618,12 @@ static int si_dpm_set_interrupt_state(struct amdgpu_device *adev,
case AMDGPU_IRQ_STATE_DISABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
case AMDGPU_IRQ_STATE_ENABLE:
cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT);
cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK;
WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int);
WREG32(mmCG_THERMAL_INT, cg_thermal_int);
break;
default:
break;


@ -2062,33 +2062,41 @@ struct dw_dp *dw_dp_bind(struct device *dev, struct drm_encoder *encoder,
}
ret = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
if (ret)
if (ret) {
dev_err_probe(dev, ret, "Failed to attach bridge\n");
goto unregister_aux;
}
dw_dp_init_hw(dp);
ret = phy_init(dp->phy);
if (ret) {
dev_err_probe(dev, ret, "phy init failed\n");
return ERR_PTR(ret);
goto unregister_aux;
}
ret = devm_add_action_or_reset(dev, dw_dp_phy_exit, dp);
if (ret)
return ERR_PTR(ret);
goto unregister_aux;
dp->irq = platform_get_irq(pdev, 0);
if (dp->irq < 0)
return ERR_PTR(ret);
if (dp->irq < 0) {
ret = dp->irq;
goto unregister_aux;
}
ret = devm_request_threaded_irq(dev, dp->irq, NULL, dw_dp_irq,
IRQF_ONESHOT, dev_name(dev), dp);
if (ret) {
dev_err_probe(dev, ret, "failed to request irq\n");
return ERR_PTR(ret);
goto unregister_aux;
}
return dp;
unregister_aux:
drm_dp_aux_unregister(&dp->aux);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(dw_dp_bind);


@ -34,11 +34,19 @@ int _intel_color_pipeline_plane_init(struct drm_plane *plane, struct drm_prop_en
return ret;
list->type = colorop->base.base.id;
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", colorop->base.base.id);
/* TODO: handle failures and clean up */
prev_op = &colorop->base;
colorop = intel_colorop_create(INTEL_PLANE_CB_CSC);
ret = drm_plane_colorop_ctm_3x4_init(dev, &colorop->base, plane,
DRM_COLOROP_FLAG_ALLOW_BYPASS);
if (ret)
return ret;
drm_colorop_set_next_property(prev_op, &colorop->base);
prev_op = &colorop->base;
if (DISPLAY_VER(display) >= 35 &&
intel_color_crtc_has_3dlut(display, pipe) &&
plane->type == DRM_PLANE_TYPE_PRIMARY) {
@ -55,15 +63,6 @@ int _intel_color_pipeline_plane_init(struct drm_plane *plane, struct drm_prop_en
prev_op = &colorop->base;
}
colorop = intel_colorop_create(INTEL_PLANE_CB_CSC);
ret = drm_plane_colorop_ctm_3x4_init(dev, &colorop->base, plane,
DRM_COLOROP_FLAG_ALLOW_BYPASS);
if (ret)
return ret;
drm_colorop_set_next_property(prev_op, &colorop->base);
prev_op = &colorop->base;
colorop = intel_colorop_create(INTEL_PLANE_CB_POST_CSC_LUT);
ret = drm_plane_colorop_curve_1d_lut_init(dev, &colorop->base, plane,
PLANE_GAMMA_SIZE,
@ -74,6 +73,8 @@ int _intel_color_pipeline_plane_init(struct drm_plane *plane, struct drm_prop_en
drm_colorop_set_next_property(prev_op, &colorop->base);
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", list->type);
return 0;
}
@ -81,9 +82,10 @@ int intel_color_pipeline_plane_init(struct drm_plane *plane, enum pipe pipe)
{
struct drm_device *dev = plane->dev;
struct intel_display *display = to_intel_display(dev);
struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES];
struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {};
int len = 0;
int ret;
int ret = 0;
int i;
/* Currently expose pipeline only for HDR planes */
if (!icl_is_hdr_plane(display, to_intel_plane(plane)->id))
@ -92,8 +94,14 @@ int intel_color_pipeline_plane_init(struct drm_plane *plane, enum pipe pipe)
/* Add pipeline consisting of transfer functions */
ret = _intel_color_pipeline_plane_init(plane, &pipelines[len], pipe);
if (ret)
return ret;
goto out;
len++;
return drm_plane_create_color_pipeline_property(plane, pipelines, len);
ret = drm_plane_create_color_pipeline_property(plane, pipelines, len);
for (i = 0; i < len; i++)
kfree(pipelines[i].name);
out:
return ret;
}


@ -137,6 +137,7 @@ update_logtype(struct pvr_device *pvr_dev, u32 group_mask)
struct rogue_fwif_kccb_cmd cmd;
int idx;
int err;
int slot;
if (group_mask)
fw_trace->tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_TRACE | group_mask;
@ -154,8 +155,13 @@ update_logtype(struct pvr_device *pvr_dev, u32 group_mask)
cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_LOGTYPE_UPDATE;
cmd.kccb_flags = 0;
err = pvr_kccb_send_cmd(pvr_dev, &cmd, NULL);
err = pvr_kccb_send_cmd(pvr_dev, &cmd, &slot);
if (err)
goto err_drm_dev_exit;
err = pvr_kccb_wait_for_completion(pvr_dev, slot, HZ, NULL);
err_drm_dev_exit:
drm_dev_exit(idx);
err_up_read:


@ -8,7 +8,7 @@ config DRM_MEDIATEK
depends on OF
depends on MTK_MMSYS
select DRM_CLIENT_SELECTION
select DRM_GEM_DMA_HELPER if DRM_FBDEV_EMULATION
select DRM_GEM_DMA_HELPER
select DRM_KMS_HELPER
select DRM_DISPLAY_HELPER
select DRM_BRIDGE_CONNECTOR


@ -836,20 +836,6 @@ static int mtk_dpi_bridge_attach(struct drm_bridge *bridge,
enum drm_bridge_attach_flags flags)
{
struct mtk_dpi *dpi = bridge_to_dpi(bridge);
int ret;
dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 1, -1);
if (IS_ERR(dpi->next_bridge)) {
ret = PTR_ERR(dpi->next_bridge);
if (ret == -EPROBE_DEFER)
return ret;
/* Old devicetree has only one endpoint */
dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 0, 0);
if (IS_ERR(dpi->next_bridge))
return dev_err_probe(dpi->dev, PTR_ERR(dpi->next_bridge),
"Failed to get bridge\n");
}
return drm_bridge_attach(encoder, dpi->next_bridge,
&dpi->bridge, flags);
@ -1319,6 +1305,15 @@ static int mtk_dpi_probe(struct platform_device *pdev)
if (dpi->irq < 0)
return dpi->irq;
dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 1, -1);
if (IS_ERR(dpi->next_bridge) && PTR_ERR(dpi->next_bridge) == -ENODEV) {
/* Old devicetree has only one endpoint */
dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 0, 0);
}
if (IS_ERR(dpi->next_bridge))
return dev_err_probe(dpi->dev, PTR_ERR(dpi->next_bridge),
"Failed to get bridge\n");
platform_set_drvdata(pdev, dpi);
dpi->bridge.of_node = dev->of_node;


@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015 MediaTek Inc.
* Copyright (c) 2025 Collabora Ltd.
* AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
*/
#include <linux/dma-buf.h>
@ -18,24 +20,64 @@
static int mtk_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
static const struct vm_operations_struct vm_ops = {
.open = drm_gem_vm_open,
.close = drm_gem_vm_close,
};
static void mtk_gem_free_object(struct drm_gem_object *obj)
{
struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);
struct mtk_drm_private *priv = obj->dev->dev_private;
if (dma_obj->sgt)
drm_prime_gem_destroy(obj, dma_obj->sgt);
else
dma_free_wc(priv->dma_dev, dma_obj->base.size,
dma_obj->vaddr, dma_obj->dma_addr);
/* release file pointer to gem object. */
drm_gem_object_release(obj);
kfree(dma_obj);
}
/*
* Allocate a sg_table for this GEM object.
* Note: Both the table's contents, and the sg_table itself must be freed by
* the caller.
* Returns a pointer to the newly allocated sg_table, or an ERR_PTR() error.
*/
static struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);
struct mtk_drm_private *priv = obj->dev->dev_private;
struct sg_table *sgt;
int ret;
sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
if (!sgt)
return ERR_PTR(-ENOMEM);
ret = dma_get_sgtable(priv->dma_dev, sgt, dma_obj->vaddr,
dma_obj->dma_addr, obj->size);
if (ret) {
DRM_ERROR("failed to allocate sgt, %d\n", ret);
kfree(sgt);
return ERR_PTR(ret);
}
return sgt;
}
static const struct drm_gem_object_funcs mtk_gem_object_funcs = {
.free = mtk_gem_free_object,
.print_info = drm_gem_dma_object_print_info,
.get_sg_table = mtk_gem_prime_get_sg_table,
.vmap = mtk_gem_prime_vmap,
.vunmap = mtk_gem_prime_vunmap,
.vmap = drm_gem_dma_object_vmap,
.mmap = mtk_gem_object_mmap,
.vm_ops = &vm_ops,
.vm_ops = &drm_gem_dma_vm_ops,
};
static struct mtk_gem_obj *mtk_gem_init(struct drm_device *dev,
unsigned long size)
static struct drm_gem_dma_object *mtk_gem_init(struct drm_device *dev,
unsigned long size, bool private)
{
struct mtk_gem_obj *mtk_gem_obj;
struct drm_gem_dma_object *dma_obj;
int ret;
size = round_up(size, PAGE_SIZE);
@ -43,86 +85,65 @@ static struct mtk_gem_obj *mtk_gem_init(struct drm_device *dev,
if (size == 0)
return ERR_PTR(-EINVAL);
mtk_gem_obj = kzalloc(sizeof(*mtk_gem_obj), GFP_KERNEL);
if (!mtk_gem_obj)
dma_obj = kzalloc(sizeof(*dma_obj), GFP_KERNEL);
if (!dma_obj)
return ERR_PTR(-ENOMEM);
mtk_gem_obj->base.funcs = &mtk_gem_object_funcs;
dma_obj->base.funcs = &mtk_gem_object_funcs;
ret = drm_gem_object_init(dev, &mtk_gem_obj->base, size);
if (ret < 0) {
if (private) {
ret = 0;
drm_gem_private_object_init(dev, &dma_obj->base, size);
} else {
ret = drm_gem_object_init(dev, &dma_obj->base, size);
}
if (ret) {
DRM_ERROR("failed to initialize gem object\n");
kfree(mtk_gem_obj);
kfree(dma_obj);
return ERR_PTR(ret);
}
return mtk_gem_obj;
return dma_obj;
}
struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev,
size_t size, bool alloc_kmap)
static struct drm_gem_dma_object *mtk_gem_create(struct drm_device *dev, size_t size)
{
struct mtk_drm_private *priv = dev->dev_private;
struct mtk_gem_obj *mtk_gem;
struct drm_gem_dma_object *dma_obj;
struct drm_gem_object *obj;
int ret;
mtk_gem = mtk_gem_init(dev, size);
if (IS_ERR(mtk_gem))
return ERR_CAST(mtk_gem);
dma_obj = mtk_gem_init(dev, size, false);
if (IS_ERR(dma_obj))
return ERR_CAST(dma_obj);
obj = &mtk_gem->base;
obj = &dma_obj->base;
mtk_gem->dma_attrs = DMA_ATTR_WRITE_COMBINE;
if (!alloc_kmap)
mtk_gem->dma_attrs |= DMA_ATTR_NO_KERNEL_MAPPING;
mtk_gem->cookie = dma_alloc_attrs(priv->dma_dev, obj->size,
&mtk_gem->dma_addr, GFP_KERNEL,
mtk_gem->dma_attrs);
if (!mtk_gem->cookie) {
dma_obj->vaddr = dma_alloc_wc(priv->dma_dev, obj->size,
&dma_obj->dma_addr,
GFP_KERNEL | __GFP_NOWARN);
if (!dma_obj->vaddr) {
DRM_ERROR("failed to allocate %zx byte dma buffer", obj->size);
ret = -ENOMEM;
goto err_gem_free;
}
if (alloc_kmap)
mtk_gem->kvaddr = mtk_gem->cookie;
DRM_DEBUG_DRIVER("cookie = %p dma_addr = %pad size = %zu\n",
mtk_gem->cookie, &mtk_gem->dma_addr,
DRM_DEBUG_DRIVER("vaddr = %p dma_addr = %pad size = %zu\n",
dma_obj->vaddr, &dma_obj->dma_addr,
size);
return mtk_gem;
return dma_obj;
err_gem_free:
drm_gem_object_release(obj);
kfree(mtk_gem);
kfree(dma_obj);
return ERR_PTR(ret);
}
void mtk_gem_free_object(struct drm_gem_object *obj)
{
struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
struct mtk_drm_private *priv = obj->dev->dev_private;
if (mtk_gem->sg)
drm_prime_gem_destroy(obj, mtk_gem->sg);
else
dma_free_attrs(priv->dma_dev, obj->size, mtk_gem->cookie,
mtk_gem->dma_addr, mtk_gem->dma_attrs);
/* release file pointer to gem object. */
drm_gem_object_release(obj);
kfree(mtk_gem);
}
int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
struct mtk_gem_obj *mtk_gem;
struct drm_gem_dma_object *dma_obj;
int ret;
args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
@ -135,25 +156,25 @@ int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
args->size = args->pitch;
args->size *= args->height;
mtk_gem = mtk_gem_create(dev, args->size, false);
if (IS_ERR(mtk_gem))
return PTR_ERR(mtk_gem);
dma_obj = mtk_gem_create(dev, args->size);
if (IS_ERR(dma_obj))
return PTR_ERR(dma_obj);
/*
* Allocate an id in the idr table where the obj is registered;
* the handle carries the id that userspace sees.
*/
ret = drm_gem_handle_create(file_priv, &mtk_gem->base, &args->handle);
ret = drm_gem_handle_create(file_priv, &dma_obj->base, &args->handle);
if (ret)
goto err_handle_create;
/* drop reference from allocate - handle holds it now. */
drm_gem_object_put(&mtk_gem->base);
drm_gem_object_put(&dma_obj->base);
return 0;
err_handle_create:
mtk_gem_free_object(&mtk_gem->base);
mtk_gem_free_object(&dma_obj->base);
return ret;
}
@ -161,129 +182,50 @@ static int mtk_gem_object_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma)
{
int ret;
struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);
struct mtk_drm_private *priv = obj->dev->dev_private;
int ret;
/*
* Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the
* whole buffer from the start.
*/
vma->vm_pgoff = 0;
vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
/*
* dma_alloc_attrs() allocated a struct page table for mtk_gem, so clear
* the VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
*/
vm_flags_set(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
vm_flags_mod(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie,
mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs);
ret = dma_mmap_wc(priv->dma_dev, vma, dma_obj->vaddr,
dma_obj->dma_addr, obj->size);
if (ret)
drm_gem_vm_close(vma);
return ret;
}
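vm_flags_mod() takes a set mask and a clear mask, so the VM_PFNMAP bit installed by drm_gem_mmap_obj()/drm_gem_mmap() can be dropped in the same call that sets the other flags. A minimal sketch (function name is illustrative):

#include <linux/mm.h>

static void mmap_fixup_flags(struct vm_area_struct *vma)
{
	/* same effect as vm_flags_set() followed by vm_flags_clear() */
	vm_flags_mod(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);
}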
/*
* Allocate a sg_table for this GEM object.
* Note: Both the table's contents and the sg_table itself must be freed by
* the caller.
* Returns a pointer to the newly allocated sg_table, or an ERR_PTR() error.
*/
struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
struct mtk_drm_private *priv = obj->dev->dev_private;
struct sg_table *sgt;
int ret;
sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
if (!sgt)
return ERR_PTR(-ENOMEM);
ret = dma_get_sgtable_attrs(priv->dma_dev, sgt, mtk_gem->cookie,
mtk_gem->dma_addr, obj->size,
mtk_gem->dma_attrs);
if (ret) {
DRM_ERROR("failed to allocate sgt, %d\n", ret);
kfree(sgt);
return ERR_PTR(ret);
}
return sgt;
}
struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach, struct sg_table *sg)
struct dma_buf_attachment *attach, struct sg_table *sgt)
{
struct mtk_gem_obj *mtk_gem;
struct drm_gem_dma_object *dma_obj;
/* check if the entries in the sg_table are contiguous */
if (drm_prime_get_contiguous_size(sg) < attach->dmabuf->size) {
if (drm_prime_get_contiguous_size(sgt) < attach->dmabuf->size) {
DRM_ERROR("sg_table is not contiguous");
return ERR_PTR(-EINVAL);
}
mtk_gem = mtk_gem_init(dev, attach->dmabuf->size);
if (IS_ERR(mtk_gem))
return ERR_CAST(mtk_gem);
dma_obj = mtk_gem_init(dev, attach->dmabuf->size, true);
if (IS_ERR(dma_obj))
return ERR_CAST(dma_obj);
mtk_gem->dma_addr = sg_dma_address(sg->sgl);
mtk_gem->sg = sg;
dma_obj->dma_addr = sg_dma_address(sgt->sgl);
dma_obj->sgt = sgt;
return &mtk_gem->base;
}
int mtk_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
{
struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
struct sg_table *sgt = NULL;
unsigned int npages;
if (mtk_gem->kvaddr)
goto out;
sgt = mtk_gem_prime_get_sg_table(obj);
if (IS_ERR(sgt))
return PTR_ERR(sgt);
npages = obj->size >> PAGE_SHIFT;
mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL);
if (!mtk_gem->pages) {
sg_free_table(sgt);
kfree(sgt);
return -ENOMEM;
}
drm_prime_sg_to_page_array(sgt, mtk_gem->pages, npages);
mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP,
pgprot_writecombine(PAGE_KERNEL));
if (!mtk_gem->kvaddr) {
sg_free_table(sgt);
kfree(sgt);
kfree(mtk_gem->pages);
return -ENOMEM;
}
sg_free_table(sgt);
kfree(sgt);
out:
iosys_map_set_vaddr(map, mtk_gem->kvaddr);
return 0;
}
void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
{
struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
void *vaddr = map->vaddr;
if (!mtk_gem->pages)
return;
vunmap(vaddr);
mtk_gem->kvaddr = NULL;
kfree(mtk_gem->pages);
return &dma_obj->base;
}

View file

@ -7,42 +7,11 @@
#define _MTK_GEM_H_
#include <drm/drm_gem.h>
#include <drm/drm_gem_dma_helper.h>
/*
* mtk drm buffer structure.
*
* @base: a gem object.
* - a new handle to this gem object would be created
* by drm_gem_handle_create().
* @cookie: the return value of dma_alloc_attrs(), keep it for dma_free_attrs()
* @kvaddr: kernel virtual address of gem buffer.
* @dma_addr: dma address of gem buffer.
* @dma_attrs: dma attributes of gem buffer.
*
* P.S. this object would be transferred to user as kms_bo.handle so
* user can access the buffer through kms_bo.handle.
*/
struct mtk_gem_obj {
struct drm_gem_object base;
void *cookie;
void *kvaddr;
dma_addr_t dma_addr;
unsigned long dma_attrs;
struct sg_table *sg;
struct page **pages;
};
#define to_mtk_gem_obj(x) container_of(x, struct mtk_gem_obj, base)
void mtk_gem_free_object(struct drm_gem_object *gem);
struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev, size_t size,
bool alloc_kmap);
int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
struct drm_mode_create_dumb *args);
struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach, struct sg_table *sg);
int mtk_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
#endif
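For context, the helper object that replaces struct mtk_gem_obj is declared in include/drm/drm_gem_dma_helper.h; a paraphrased field list (hedged against the exact kernel version):

struct drm_gem_dma_object {
	struct drm_gem_object base;
	dma_addr_t dma_addr;		/* was mtk_gem_obj.dma_addr */
	struct sg_table *sgt;		/* was mtk_gem_obj.sg (imported buffers) */
	void *vaddr;			/* replaces cookie/kvaddr */
	bool map_noncoherent;
};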

View file

@ -303,7 +303,7 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi, struct platform_device
return dev_err_probe(dev, ret, "Failed to get clocks\n");
hdmi->irq = platform_get_irq(pdev, 0);
if (!hdmi->irq)
if (hdmi->irq < 0)
return hdmi->irq;
hdmi->regs = device_node_to_regmap(dev->of_node);
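platform_get_irq() returns a positive IRQ number on success and a negative errno on failure; it never returns 0, so the old `if (!hdmi->irq)` check could never catch an error. The `< 0` test also requires the signed field type introduced in the following hunk. A minimal sketch:

	int irq = platform_get_irq(pdev, 0);

	if (irq < 0)
		return irq;	/* propagates e.g. -EPROBE_DEFER */
	hdmi->irq = irq;	/* field must be int, not unsigned int */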

View file

@ -168,7 +168,7 @@ struct mtk_hdmi {
bool audio_enable;
bool powered;
bool enabled;
unsigned int irq;
int irq;
enum hdmi_hpd_state hpd;
hdmi_codec_plugged_cb plugged_cb;
struct device *codec_dev;

View file

@ -66,11 +66,19 @@ static int mtk_ddc_check_and_rise_low_bus(struct mtk_hdmi_ddc *ddc)
return 0;
}
static int mtk_ddc_wr_one(struct mtk_hdmi_ddc *ddc, u16 addr_id,
u16 offset_id, u8 *wr_data)
static int mtk_ddcm_write_hdmi(struct mtk_hdmi_ddc *ddc, u16 addr_id,
u16 offset_id, u16 data_cnt, u8 *wr_data)
{
u32 val;
int ret;
int ret, i;
/* Don't allow transfers larger than the transfer FIFO size
* (16 bytes)
*/
if (data_cnt > 16) {
dev_err(ddc->dev, "Invalid DDCM write request\n");
return -EINVAL;
}
/* If down, rise bus for write operation */
mtk_ddc_check_and_rise_low_bus(ddc);
@ -78,16 +86,21 @@ static int mtk_ddc_wr_one(struct mtk_hdmi_ddc *ddc, u16 addr_id,
regmap_update_bits(ddc->regs, HPD_DDC_CTRL, HPD_DDC_DELAY_CNT,
FIELD_PREP(HPD_DDC_DELAY_CNT, DDC2_DLY_CNT));
/* In case there is no payload data, just do a single write for the
* address only
*/
if (wr_data) {
regmap_write(ddc->regs, SI2C_CTRL,
FIELD_PREP(SI2C_ADDR, SI2C_ADDR_READ) |
FIELD_PREP(SI2C_WDATA, *wr_data) |
SI2C_WR);
/* Fill transfer fifo with payload data */
for (i = 0; i < data_cnt; i++) {
regmap_write(ddc->regs, SI2C_CTRL,
FIELD_PREP(SI2C_ADDR, SI2C_ADDR_READ) |
FIELD_PREP(SI2C_WDATA, wr_data[i]) |
SI2C_WR);
}
}
regmap_write(ddc->regs, DDC_CTRL,
FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_SEQ_WRITE) |
FIELD_PREP(DDC_CTRL_DIN_CNT, wr_data == NULL ? 0 : 1) |
FIELD_PREP(DDC_CTRL_DIN_CNT, wr_data == NULL ? 0 : data_cnt) |
FIELD_PREP(DDC_CTRL_OFFSET, offset_id) |
FIELD_PREP(DDC_CTRL_ADDR, addr_id));
usleep_range(1000, 1250);
@ -96,6 +109,11 @@ static int mtk_ddc_wr_one(struct mtk_hdmi_ddc *ddc, u16 addr_id,
!(val & DDC_I2C_IN_PROG), 500, 1000);
if (ret) {
dev_err(ddc->dev, "DDC I2C write timeout\n");
/* Abort transfer if it is still in progress */
regmap_update_bits(ddc->regs, DDC_CTRL, DDC_CTRL_CMD,
FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_ABORT_XFER));
return ret;
}
@ -179,6 +197,11 @@ static int mtk_ddcm_read_hdmi(struct mtk_hdmi_ddc *ddc, u16 uc_dev,
500 * (temp_length + 5));
if (ret) {
dev_err(ddc->dev, "Timeout waiting for DDC I2C\n");
/* Abort transfer if it is still in progress */
regmap_update_bits(ddc->regs, DDC_CTRL, DDC_CTRL_CMD,
FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_ABORT_XFER));
return ret;
}
@ -250,24 +273,9 @@ static int mtk_hdmi_fg_ddc_data_read(struct mtk_hdmi_ddc *ddc, u16 b_dev,
static int mtk_hdmi_ddc_fg_data_write(struct mtk_hdmi_ddc *ddc, u16 b_dev,
u8 data_addr, u16 data_cnt, u8 *pr_data)
{
int i, ret;
regmap_set_bits(ddc->regs, HDCP2X_POL_CTRL, HDCP2X_DIS_POLL_EN);
/*
* In case there is no payload data, just do a single write for the
* address only
*/
if (data_cnt == 0)
return mtk_ddc_wr_one(ddc, b_dev, data_addr, NULL);
i = 0;
do {
ret = mtk_ddc_wr_one(ddc, b_dev, data_addr + i, pr_data + i);
if (ret)
return ret;
} while (++i < data_cnt);
return 0;
return mtk_ddcm_write_hdmi(ddc, b_dev, data_addr, data_cnt, pr_data);
}
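mtk_ddcm_write_hdmi() now rejects payloads above the 16-byte FIFO; a hypothetical caller-side sketch (not part of this patch) of how a larger payload could be split into FIFO-sized writes:

static int ddcm_write_chunked(struct mtk_hdmi_ddc *ddc, u16 addr_id,
			      u16 offset_id, u16 data_cnt, u8 *wr_data)
{
	u16 done = 0;

	while (done < data_cnt) {
		u16 chunk = min_t(u16, data_cnt - done, 16);
		int ret = mtk_ddcm_write_hdmi(ddc, addr_id, offset_id + done,
					      chunk, wr_data + done);

		if (ret)
			return ret;
		done += chunk;
	}
	return 0;
}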
static int mtk_hdmi_ddc_v2_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs, int num)

View file

@ -1120,9 +1120,10 @@ static void mtk_hdmi_v2_hpd_disable(struct drm_bridge *bridge)
mtk_hdmi_v2_disable(hdmi);
}
static int mtk_hdmi_v2_hdmi_tmds_char_rate_valid(const struct drm_bridge *bridge,
const struct drm_display_mode *mode,
unsigned long long tmds_rate)
static enum drm_mode_status
mtk_hdmi_v2_hdmi_tmds_char_rate_valid(const struct drm_bridge *bridge,
const struct drm_display_mode *mode,
unsigned long long tmds_rate)
{
if (mode->clock < MTK_HDMI_V2_CLOCK_MIN)
return MODE_CLOCK_LOW;

View file

@ -11,13 +11,13 @@
#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_gem_atomic_helper.h>
#include <drm/drm_gem_dma_helper.h>
#include <drm/drm_print.h>
#include <linux/align.h>
#include "mtk_crtc.h"
#include "mtk_ddp_comp.h"
#include "mtk_drm_drv.h"
#include "mtk_gem.h"
#include "mtk_plane.h"
static const u64 modifiers[] = {
@ -114,8 +114,8 @@ static void mtk_plane_update_new_state(struct drm_plane_state *new_state,
struct mtk_plane_state *mtk_plane_state)
{
struct drm_framebuffer *fb = new_state->fb;
struct drm_gem_dma_object *dma_obj;
struct drm_gem_object *gem;
struct mtk_gem_obj *mtk_gem;
unsigned int pitch, format;
u64 modifier;
dma_addr_t addr;
@ -124,8 +124,8 @@ static void mtk_plane_update_new_state(struct drm_plane_state *new_state,
int offset;
gem = fb->obj[0];
mtk_gem = to_mtk_gem_obj(gem);
addr = mtk_gem->dma_addr;
dma_obj = to_drm_gem_dma_obj(gem);
addr = dma_obj->dma_addr;
pitch = fb->pitches[0];
format = fb->format->format;
modifier = fb->modifier;

View file

@ -1,28 +1,81 @@
/* SPDX-License-Identifier: MIT */
#ifndef __NVBIOS_CONN_H__
#define __NVBIOS_CONN_H__
/*
* An enumeration of all the possible VBIOS connector types defined
* by Nvidia at
* https://nvidia.github.io/open-gpu-doc/DCB/DCB-4.x-Specification.html.
*
* [1] Nvidia's documentation actually claims DCB_CONNECTOR_HDMI_0 is a "3-Pin
* DIN Stereo Connector". This seems very likely to be a documentation typo
* or some sort of funny historical baggage, because we've treated this
* connector type as HDMI for years without issue.
* TODO: Check with Nvidia what's actually happening here.
*/
enum dcb_connector_type {
DCB_CONNECTOR_VGA = 0x00,
DCB_CONNECTOR_TV_0 = 0x10,
DCB_CONNECTOR_TV_1 = 0x11,
DCB_CONNECTOR_TV_3 = 0x13,
DCB_CONNECTOR_DVI_I = 0x30,
DCB_CONNECTOR_DVI_D = 0x31,
DCB_CONNECTOR_DMS59_0 = 0x38,
DCB_CONNECTOR_DMS59_1 = 0x39,
DCB_CONNECTOR_LVDS = 0x40,
DCB_CONNECTOR_LVDS_SPWG = 0x41,
DCB_CONNECTOR_DP = 0x46,
DCB_CONNECTOR_eDP = 0x47,
DCB_CONNECTOR_mDP = 0x48,
DCB_CONNECTOR_HDMI_0 = 0x60,
DCB_CONNECTOR_HDMI_1 = 0x61,
DCB_CONNECTOR_HDMI_C = 0x63,
DCB_CONNECTOR_DMS59_DP0 = 0x64,
DCB_CONNECTOR_DMS59_DP1 = 0x65,
DCB_CONNECTOR_WFD = 0x70,
DCB_CONNECTOR_USB_C = 0x71,
DCB_CONNECTOR_NONE = 0xff
/* Analog outputs */
DCB_CONNECTOR_VGA = 0x00, // VGA 15-pin connector
DCB_CONNECTOR_DVI_A = 0x01, // DVI-A
DCB_CONNECTOR_POD_VGA = 0x02, // Pod - VGA 15-pin connector
DCB_CONNECTOR_TV_0 = 0x10, // TV - Composite Out
DCB_CONNECTOR_TV_1 = 0x11, // TV - S-Video Out
DCB_CONNECTOR_TV_2 = 0x12, // TV - S-Video Breakout - Composite
DCB_CONNECTOR_TV_3 = 0x13, // HDTV Component - YPrPb
DCB_CONNECTOR_TV_SCART = 0x14, // TV - SCART Connector
DCB_CONNECTOR_TV_SCART_D = 0x16, // TV - Composite SCART over D-connector
DCB_CONNECTOR_TV_DTERM = 0x17, // HDTV - D-connector (EIAJ4120)
DCB_CONNECTOR_POD_TV_3 = 0x18, // Pod - HDTV - YPrPb
DCB_CONNECTOR_POD_TV_1 = 0x19, // Pod - S-Video
DCB_CONNECTOR_POD_TV_0 = 0x1a, // Pod - Composite
/* DVI digital outputs */
DCB_CONNECTOR_DVI_I_TV_1 = 0x20, // DVI-I-TV-S-Video
DCB_CONNECTOR_DVI_I_TV_0 = 0x21, // DVI-I-TV-Composite
DCB_CONNECTOR_DVI_I_TV_2 = 0x22, // DVI-I-TV-S-Video Breakout-Composite
DCB_CONNECTOR_DVI_I = 0x30, // DVI-I
DCB_CONNECTOR_DVI_D = 0x31, // DVI-D
DCB_CONNECTOR_DVI_ADC = 0x32, // Apple Display Connector (ADC)
DCB_CONNECTOR_DMS59_0 = 0x38, // LFH-DVI-I-1
DCB_CONNECTOR_DMS59_1 = 0x39, // LFH-DVI-I-2
DCB_CONNECTOR_BNC = 0x3c, // BNC Connector [for SDI?]
/* LVDS / TMDS digital outputs */
DCB_CONNECTOR_LVDS = 0x40, // LVDS-SPWG-Attached [is this name correct?]
DCB_CONNECTOR_LVDS_SPWG = 0x41, // LVDS-OEM-Attached (non-removable)
DCB_CONNECTOR_LVDS_REM = 0x42, // LVDS-SPWG-Detached [following naming above]
DCB_CONNECTOR_LVDS_SPWG_REM = 0x43, // LVDS-OEM-Detached (removable)
DCB_CONNECTOR_TMDS = 0x45, // TMDS-OEM-Attached (non-removable)
/* DP digital outputs */
DCB_CONNECTOR_DP = 0x46, // DisplayPort External Connector
DCB_CONNECTOR_eDP = 0x47, // DisplayPort Internal Connector
DCB_CONNECTOR_mDP = 0x48, // DisplayPort (Mini) External Connector
/* Dock outputs (not used) */
DCB_CONNECTOR_DOCK_VGA_0 = 0x50, // VGA 15-pin if not docked
DCB_CONNECTOR_DOCK_VGA_1 = 0x51, // VGA 15-pin if docked
DCB_CONNECTOR_DOCK_DVI_I_0 = 0x52, // DVI-I if not docked
DCB_CONNECTOR_DOCK_DVI_I_1 = 0x53, // DVI-I if docked
DCB_CONNECTOR_DOCK_DVI_D_0 = 0x54, // DVI-D if not docked
DCB_CONNECTOR_DOCK_DVI_D_1 = 0x55, // DVI-D if docked
DCB_CONNECTOR_DOCK_DP_0 = 0x56, // DisplayPort if not docked
DCB_CONNECTOR_DOCK_DP_1 = 0x57, // DisplayPort if docked
DCB_CONNECTOR_DOCK_mDP_0 = 0x58, // DisplayPort (Mini) if not docked
DCB_CONNECTOR_DOCK_mDP_1 = 0x59, // DisplayPort (Mini) if docked
/* HDMI? digital outputs */
DCB_CONNECTOR_HDMI_0 = 0x60, // HDMI? See [1] in top-level enum comment above
DCB_CONNECTOR_HDMI_1 = 0x61, // HDMI-A connector
DCB_CONNECTOR_SPDIF = 0x62, // Audio S/PDIF connector
DCB_CONNECTOR_HDMI_C = 0x63, // HDMI-C (Mini) connector
/* Misc. digital outputs */
DCB_CONNECTOR_DMS59_DP0 = 0x64, // LFH-DP-1
DCB_CONNECTOR_DMS59_DP1 = 0x65, // LFH-DP-2
DCB_CONNECTOR_WFD = 0x70, // Virtual connector for Wifi Display (WFD)
DCB_CONNECTOR_USB_C = 0x71, // [DP over USB-C; not present in docs]
DCB_CONNECTOR_NONE = 0xff // Skip Entry
};
struct nvbios_connT {

View file

@ -352,6 +352,8 @@ nouveau_user_framebuffer_create(struct drm_device *dev,
static const struct drm_mode_config_funcs nouveau_mode_config_funcs = {
.fb_create = nouveau_user_framebuffer_create,
.atomic_commit = drm_atomic_helper_commit,
.atomic_check = drm_atomic_helper_check,
};

View file

@ -191,27 +191,60 @@ nvkm_uconn_new(const struct nvkm_oclass *oclass, void *argv, u32 argc, struct nv
spin_lock(&disp->client.lock);
if (!conn->object.func) {
switch (conn->info.type) {
case DCB_CONNECTOR_VGA : args->v0.type = NVIF_CONN_V0_VGA; break;
case DCB_CONNECTOR_TV_0 :
case DCB_CONNECTOR_TV_1 :
case DCB_CONNECTOR_TV_3 : args->v0.type = NVIF_CONN_V0_TV; break;
case DCB_CONNECTOR_DMS59_0 :
case DCB_CONNECTOR_DMS59_1 :
case DCB_CONNECTOR_DVI_I : args->v0.type = NVIF_CONN_V0_DVI_I; break;
case DCB_CONNECTOR_DVI_D : args->v0.type = NVIF_CONN_V0_DVI_D; break;
case DCB_CONNECTOR_LVDS : args->v0.type = NVIF_CONN_V0_LVDS; break;
case DCB_CONNECTOR_LVDS_SPWG: args->v0.type = NVIF_CONN_V0_LVDS_SPWG; break;
case DCB_CONNECTOR_DMS59_DP0:
case DCB_CONNECTOR_DMS59_DP1:
case DCB_CONNECTOR_DP :
case DCB_CONNECTOR_mDP :
case DCB_CONNECTOR_USB_C : args->v0.type = NVIF_CONN_V0_DP; break;
case DCB_CONNECTOR_eDP : args->v0.type = NVIF_CONN_V0_EDP; break;
case DCB_CONNECTOR_HDMI_0 :
case DCB_CONNECTOR_HDMI_1 :
case DCB_CONNECTOR_HDMI_C : args->v0.type = NVIF_CONN_V0_HDMI; break;
/* VGA */
case DCB_CONNECTOR_DVI_A :
case DCB_CONNECTOR_POD_VGA :
case DCB_CONNECTOR_VGA : args->v0.type = NVIF_CONN_V0_VGA; break;
/* TV */
case DCB_CONNECTOR_TV_0 :
case DCB_CONNECTOR_TV_1 :
case DCB_CONNECTOR_TV_2 :
case DCB_CONNECTOR_TV_SCART :
case DCB_CONNECTOR_TV_SCART_D :
case DCB_CONNECTOR_TV_DTERM :
case DCB_CONNECTOR_POD_TV_3 :
case DCB_CONNECTOR_POD_TV_1 :
case DCB_CONNECTOR_POD_TV_0 :
case DCB_CONNECTOR_TV_3 : args->v0.type = NVIF_CONN_V0_TV; break;
/* DVI */
case DCB_CONNECTOR_DVI_I_TV_1 :
case DCB_CONNECTOR_DVI_I_TV_0 :
case DCB_CONNECTOR_DVI_I_TV_2 :
case DCB_CONNECTOR_DVI_ADC :
case DCB_CONNECTOR_DMS59_0 :
case DCB_CONNECTOR_DMS59_1 :
case DCB_CONNECTOR_DVI_I : args->v0.type = NVIF_CONN_V0_DVI_I; break;
case DCB_CONNECTOR_TMDS :
case DCB_CONNECTOR_DVI_D : args->v0.type = NVIF_CONN_V0_DVI_D; break;
/* LVDS */
case DCB_CONNECTOR_LVDS : args->v0.type = NVIF_CONN_V0_LVDS; break;
case DCB_CONNECTOR_LVDS_SPWG : args->v0.type = NVIF_CONN_V0_LVDS_SPWG; break;
/* DP */
case DCB_CONNECTOR_DMS59_DP0 :
case DCB_CONNECTOR_DMS59_DP1 :
case DCB_CONNECTOR_DP :
case DCB_CONNECTOR_mDP :
case DCB_CONNECTOR_USB_C : args->v0.type = NVIF_CONN_V0_DP; break;
case DCB_CONNECTOR_eDP : args->v0.type = NVIF_CONN_V0_EDP; break;
/* HDMI */
case DCB_CONNECTOR_HDMI_0 :
case DCB_CONNECTOR_HDMI_1 :
case DCB_CONNECTOR_HDMI_C : args->v0.type = NVIF_CONN_V0_HDMI; break;
/*
* Dock & unused outputs.
* BNC, SPDIF, WFD, and detached LVDS go here.
*/
default:
WARN_ON(1);
nvkm_warn(&disp->engine.subdev,
"unimplemented connector type 0x%02x\n",
conn->info.type);
args->v0.type = NVIF_CONN_V0_VGA;
ret = -EINVAL;
break;
}

View file

@ -37,7 +37,6 @@ static int vkms_initialize_color_pipeline(struct drm_plane *plane, struct drm_pr
goto cleanup;
list->type = ops[i]->base.id;
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[i]->base.id);
i++;
@ -88,6 +87,8 @@ static int vkms_initialize_color_pipeline(struct drm_plane *plane, struct drm_pr
drm_colorop_set_next_property(ops[i - 1], ops[i]);
list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[0]->base.id);
return 0;
cleanup:
@ -103,18 +104,18 @@ cleanup:
int vkms_initialize_colorops(struct drm_plane *plane)
{
struct drm_prop_enum_list pipeline;
int ret;
struct drm_prop_enum_list pipeline = {};
int ret = 0;
/* Add color pipeline */
ret = vkms_initialize_color_pipeline(plane, &pipeline);
if (ret)
return ret;
goto out;
/* Create COLOR_PIPELINE property and attach */
ret = drm_plane_create_color_pipeline_property(plane, &pipeline, 1);
if (ret)
return ret;
return 0;
kfree(pipeline.name);
out:
return ret;
}
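The `= {}` initializer keeps pipeline.name NULL until the pipeline is built, and the name can be freed once the property exists because the DRM property code keeps its own copy of enum names. A minimal sketch of the flow (build_pipeline() is a hypothetical stand-in for vkms_initialize_color_pipeline()):

	struct drm_prop_enum_list pipeline = {};
	int ret;

	ret = build_pipeline(plane, &pipeline);		/* sets pipeline.name */
	if (ret)
		goto out;

	ret = drm_plane_create_color_pipeline_property(plane, &pipeline, 1);
	kfree(pipeline.name);				/* property holds its own copy */
out:
	return ret;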

View file

@ -39,7 +39,7 @@ config DRM_XE
select DRM_TTM
select DRM_TTM_HELPER
select DRM_EXEC
select DRM_GPUSVM if !UML && DEVICE_PRIVATE
select DRM_GPUSVM if !UML
select DRM_GPUVM
select DRM_SCHED
select MMU_NOTIFIER
@ -80,8 +80,9 @@ config DRM_XE_GPUSVM
bool "Enable CPU to GPU address mirroring"
depends on DRM_XE
depends on !UML
depends on DEVICE_PRIVATE
depends on ZONE_DEVICE
default y
select DEVICE_PRIVATE
select DRM_GPUSVM
help
Enable this option if you want support for CPU to GPU address

View file

@ -1055,6 +1055,7 @@ static long xe_bo_shrink_purge(struct ttm_operation_ctx *ctx,
unsigned long *scanned)
{
struct xe_device *xe = ttm_to_xe_device(bo->bdev);
struct ttm_tt *tt = bo->ttm;
long lret;
/* Fake move to system, without copying data. */
@ -1079,8 +1080,10 @@ static long xe_bo_shrink_purge(struct ttm_operation_ctx *ctx,
.writeback = false,
.allow_move = false});
if (lret > 0)
if (lret > 0) {
xe_ttm_tt_account_subtract(xe, bo->ttm);
update_global_total_pages(bo->bdev, -(long)tt->num_pages);
}
return lret;
}
@ -1166,8 +1169,10 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
if (needs_rpm)
xe_pm_runtime_put(xe);
if (lret > 0)
if (lret > 0) {
xe_ttm_tt_account_subtract(xe, tt);
update_global_total_pages(bo->bdev, -(long)tt->num_pages);
}
out_unref:
xe_bo_put(xe_bo);

View file

@ -256,14 +256,64 @@ static ssize_t wedged_mode_show(struct file *f, char __user *ubuf,
return simple_read_from_buffer(ubuf, size, pos, buf, len);
}
static int __wedged_mode_set_reset_policy(struct xe_gt *gt, enum xe_wedged_mode mode)
{
bool enable_engine_reset;
int ret;
enable_engine_reset = (mode != XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET);
ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads,
enable_engine_reset);
if (ret)
xe_gt_err(gt, "Failed to update GuC ADS scheduler policy (%pe)\n", ERR_PTR(ret));
return ret;
}
static int wedged_mode_set_reset_policy(struct xe_device *xe, enum xe_wedged_mode mode)
{
struct xe_gt *gt;
int ret;
u8 id;
guard(xe_pm_runtime)(xe);
for_each_gt(gt, xe, id) {
ret = __wedged_mode_set_reset_policy(gt, mode);
if (ret) {
if (id > 0) {
xe->wedged.inconsistent_reset = true;
drm_err(&xe->drm, "Inconsistent reset policy state between GTs\n");
}
return ret;
}
}
xe->wedged.inconsistent_reset = false;
return 0;
}
static bool wedged_mode_needs_policy_update(struct xe_device *xe, enum xe_wedged_mode mode)
{
if (xe->wedged.inconsistent_reset)
return true;
if (xe->wedged.mode == mode)
return false;
if (xe->wedged.mode == XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET ||
mode == XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET)
return true;
return false;
}
static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
size_t size, loff_t *pos)
{
struct xe_device *xe = file_inode(f)->i_private;
struct xe_gt *gt;
u32 wedged_mode;
ssize_t ret;
u8 id;
ret = kstrtouint_from_user(ubuf, size, 0, &wedged_mode);
if (ret)
@ -272,22 +322,14 @@ static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
if (wedged_mode > 2)
return -EINVAL;
if (xe->wedged.mode == wedged_mode)
return size;
if (wedged_mode_needs_policy_update(xe, wedged_mode)) {
ret = wedged_mode_set_reset_policy(xe, wedged_mode);
if (ret)
return ret;
}
xe->wedged.mode = wedged_mode;
xe_pm_runtime_get(xe);
for_each_gt(gt, xe, id) {
ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads);
if (ret) {
xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n");
xe_pm_runtime_put(xe);
return -EIO;
}
}
xe_pm_runtime_put(xe);
return size;
}
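A hypothetical userspace sketch of driving this interface (the debugfs path is illustrative and depends on the DRM card index):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* mode 2: wedge on any hang, engine resets disabled */
	int fd = open("/sys/kernel/debug/dri/0/wedged_mode", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "2", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}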

View file

@ -44,6 +44,22 @@ struct xe_pat_ops;
struct xe_pxp;
struct xe_vram_region;
/**
* enum xe_wedged_mode - possible wedged modes
* @XE_WEDGED_MODE_NEVER: Device will never be declared wedged.
* @XE_WEDGED_MODE_UPON_CRITICAL_ERROR: Device will be declared wedged only
* when critical error occurs like GT reset failure or firmware failure.
* This is the default mode.
* @XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET: Device will be declared wedged on
* any hang. In this mode, engine resets are disabled to avoid automatic
* recovery attempts. This mode is primarily intended for debugging hangs.
*/
enum xe_wedged_mode {
XE_WEDGED_MODE_NEVER = 0,
XE_WEDGED_MODE_UPON_CRITICAL_ERROR = 1,
XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET = 2,
};
#define XE_BO_INVALID_OFFSET LONG_MAX
#define GRAPHICS_VER(xe) ((xe)->info.graphics_verx100 / 100)
@ -587,6 +603,8 @@ struct xe_device {
int mode;
/** @wedged.method: Recovery method to be sent in the drm device wedged uevent */
unsigned long method;
/** @wedged.inconsistent_reset: Inconsistent reset policy state between GTs */
bool inconsistent_reset;
} wedged;
/** @bo_device: Struct to control async free of BOs */

View file

@ -328,6 +328,7 @@ struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe
* @xe: Xe device.
* @tile: tile which bind exec queue belongs to.
* @flags: exec queue creation flags
* @user_vm: The user VM which this exec queue belongs to
* @extensions: exec queue creation extensions
*
* Normalize bind exec queue creation. Bind exec queue is tied to migration VM
@ -341,6 +342,7 @@ struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe
*/
struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
struct xe_tile *tile,
struct xe_vm *user_vm,
u32 flags, u64 extensions)
{
struct xe_gt *gt = tile->primary_gt;
@ -377,6 +379,9 @@ struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
xe_exec_queue_put(q);
return ERR_PTR(err);
}
if (user_vm)
q->user_vm = xe_vm_get(user_vm);
}
return q;
@ -407,6 +412,11 @@ void xe_exec_queue_destroy(struct kref *ref)
xe_exec_queue_put(eq);
}
if (q->user_vm) {
xe_vm_put(q->user_vm);
q->user_vm = NULL;
}
q->ops->destroy(q);
}
@ -742,6 +752,22 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
XE_IOCTL_DBG(xe, eci[0].engine_instance != 0))
return -EINVAL;
vm = xe_vm_lookup(xef, args->vm_id);
if (XE_IOCTL_DBG(xe, !vm))
return -ENOENT;
err = down_read_interruptible(&vm->lock);
if (err) {
xe_vm_put(vm);
return err;
}
if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
up_read(&vm->lock);
xe_vm_put(vm);
return -ENOENT;
}
for_each_tile(tile, xe, id) {
struct xe_exec_queue *new;
@ -749,9 +775,11 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
if (id)
flags |= EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD;
new = xe_exec_queue_create_bind(xe, tile, flags,
new = xe_exec_queue_create_bind(xe, tile, vm, flags,
args->extensions);
if (IS_ERR(new)) {
up_read(&vm->lock);
xe_vm_put(vm);
err = PTR_ERR(new);
if (q)
goto put_exec_queue;
@ -763,6 +791,8 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
list_add_tail(&new->multi_gt_list,
&q->multi_gt_link);
}
up_read(&vm->lock);
xe_vm_put(vm);
} else {
logical_mask = calc_validate_logical_mask(xe, eci,
args->width,

View file

@ -28,6 +28,7 @@ struct xe_exec_queue *xe_exec_queue_create_class(struct xe_device *xe, struct xe
u32 flags, u64 extensions);
struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
struct xe_tile *tile,
struct xe_vm *user_vm,
u32 flags, u64 extensions);
void xe_exec_queue_fini(struct xe_exec_queue *q);

View file

@ -54,6 +54,12 @@ struct xe_exec_queue {
struct kref refcount;
/** @vm: VM (address space) for this exec queue */
struct xe_vm *vm;
/**
* @user_vm: User VM (address space) for this exec queue (bind queues
* only)
*/
struct xe_vm *user_vm;
/** @class: class of this exec queue */
enum xe_engine_class class;
/**

View file

@ -322,7 +322,7 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
else
ggtt->pt_ops = &xelp_pt_ops;
ggtt->wq = alloc_workqueue("xe-ggtt-wq", 0, WQ_MEM_RECLAIM);
ggtt->wq = alloc_workqueue("xe-ggtt-wq", WQ_MEM_RECLAIM, 0);
if (!ggtt->wq)
return -ENOMEM;
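alloc_workqueue()'s second parameter is the WQ_* flags and the third is max_active; the old call passed WQ_MEM_RECLAIM in the max_active slot, so the queue was created without a reclaim rescuer. Sketch of the corrected call:

#include <linux/workqueue.h>

static struct workqueue_struct *ggtt_wq_create(void)
{
	/* flags = WQ_MEM_RECLAIM, max_active = 0 (default limit) */
	return alloc_workqueue("xe-ggtt-wq", WQ_MEM_RECLAIM, 0);
}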

View file

@ -41,10 +41,10 @@ struct xe_gt_sriov_vf_runtime {
};
/**
* xe_gt_sriov_vf_migration - VF migration data.
* struct xe_gt_sriov_vf_migration - VF migration data.
*/
struct xe_gt_sriov_vf_migration {
/** @migration: VF migration recovery worker */
/** @worker: VF migration recovery worker */
struct work_struct worker;
/** @lock: Protects recovery_queued, teardown */
spinlock_t lock;

View file

@ -983,16 +983,17 @@ static int guc_ads_action_update_policies(struct xe_guc_ads *ads, u32 policy_off
/**
* xe_guc_ads_scheduler_policy_toggle_reset - Toggle reset policy
* @ads: Additional data structures object
* @enable_engine_reset: true to enable engine resets, false otherwise
*
* This function updates the GuC's engine reset policy based on wedged.mode.
* This function updates the GuC's engine reset policy.
*
* Return: 0 on success, and negative error code otherwise.
*/
int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads)
int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads,
bool enable_engine_reset)
{
struct guc_policies *policies;
struct xe_guc *guc = ads_to_guc(ads);
struct xe_device *xe = ads_to_xe(ads);
CLASS(xe_guc_buf, buf)(&guc->buf, sizeof(*policies));
if (!xe_guc_buf_is_valid(buf))
@ -1004,10 +1005,11 @@ int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads)
policies->dpc_promote_time = ads_blob_read(ads, policies.dpc_promote_time);
policies->max_num_work_items = ads_blob_read(ads, policies.max_num_work_items);
policies->is_valid = 1;
if (xe->wedged.mode == 2)
policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET;
else
if (enable_engine_reset)
policies->global_flags &= ~GLOBAL_POLICY_DISABLE_ENGINE_RESET;
else
policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET;
return guc_ads_action_update_policies(ads, xe_guc_buf_flush(buf));
}

View file

@ -6,6 +6,8 @@
#ifndef _XE_GUC_ADS_H_
#define _XE_GUC_ADS_H_
#include <linux/types.h>
struct xe_guc_ads;
int xe_guc_ads_init(struct xe_guc_ads *ads);
@ -13,6 +15,7 @@ int xe_guc_ads_init_post_hwconfig(struct xe_guc_ads *ads);
void xe_guc_ads_populate(struct xe_guc_ads *ads);
void xe_guc_ads_populate_minimal(struct xe_guc_ads *ads);
void xe_guc_ads_populate_post_load(struct xe_guc_ads *ads);
int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads);
int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads,
bool enable_engine_reset);
#endif

View file

@ -15,10 +15,12 @@
#define XE_LB_MAX_PAYLOAD_SIZE SZ_4K
/**
* xe_late_bind_fw_id - enum to determine late binding fw index
* enum xe_late_bind_fw_id - enum to determine late binding fw index
*/
enum xe_late_bind_fw_id {
/** @XE_LB_FW_FAN_CONTROL: Fan control */
XE_LB_FW_FAN_CONTROL = 0,
/** @XE_LB_FW_MAX_ID: Number of IDs */
XE_LB_FW_MAX_ID
};
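This hunk, like the xe_gt_sriov_vf_migration one above, is a kernel-doc syntax fix: the summary line must carry the enum/struct keyword and each constant needs an @-entry, otherwise scripts/kernel-doc warns during the docs build. An illustrative sketch (names invented):

/**
 * enum example_fw_id - example of well-formed enum kernel-doc
 * @EXAMPLE_FW_A: first entry
 * @EXAMPLE_FW_MAX_ID: number of IDs
 */
enum example_fw_id {
	EXAMPLE_FW_A = 0,
	EXAMPLE_FW_MAX_ID
};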

View file

@ -1050,6 +1050,9 @@ static ssize_t setup_utilization_wa(struct xe_lrc *lrc,
{
u32 *cmd = batch;
if (IS_SRIOV_VF(gt_to_xe(lrc->gt)))
return 0;
if (xe_gt_WARN_ON(lrc->gt, max_len < 12))
return -ENOSPC;

View file

@ -2445,7 +2445,7 @@ void xe_migrate_job_lock(struct xe_migrate *m, struct xe_exec_queue *q)
if (is_migrate)
mutex_lock(&m->job_mutex);
else
xe_vm_assert_held(q->vm); /* User queue VMs should be locked */
xe_vm_assert_held(q->user_vm); /* User queue VMs should be locked */
}
/**
@ -2463,7 +2463,7 @@ void xe_migrate_job_unlock(struct xe_migrate *m, struct xe_exec_queue *q)
if (is_migrate)
mutex_unlock(&m->job_mutex);
else
xe_vm_assert_held(q->vm); /* User queue VMs should be locked */
xe_vm_assert_held(q->user_vm); /* User queue VMs should be locked */
}
#if IS_ENABLED(CONFIG_PROVE_LOCKING)

View file

@ -346,7 +346,7 @@ int xe_sriov_vf_ccs_init(struct xe_device *xe)
flags = EXEC_QUEUE_FLAG_KERNEL |
EXEC_QUEUE_FLAG_PERMANENT |
EXEC_QUEUE_FLAG_MIGRATE;
q = xe_exec_queue_create_bind(xe, tile, flags, 0);
q = xe_exec_queue_create_bind(xe, tile, NULL, flags, 0);
if (IS_ERR(q)) {
err = PTR_ERR(q);
goto err_ret;

Some files were not shown because too many files have changed in this diff