pci-v7.0-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmmKJO4UHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vzotA/+OGSOPOs9hWd+OwNF5Dm2WA81yG/3
 K3Jx5uMuPoSjduMbPhVcib02Mr6YDJTa6WlYNVa76ADs2G6HxcVMFHutlYudSVcl
 umSF48FnyeH1LTba88dRoVj4DB47Cue+BfhYY2L0ZtxmjQq/NRuDFAaGBh54uNeF
 Gcdgr52QlM01n1X6yKvl7vE9gPdcPH80L256ssHAm6oSOHI1SPc6gqEKUUD02f8G
 FtzfTUAq/cWYjlY3VoS5GKtdHxFYuXqC5WfbURhJ11o/nVJY9k1Zx8n4eI1tmAtN
 7q692xjWSQJZlzepOBBEyjFUpIiy80tZ43z2ptRRBeI/n/qMmGPAov/g4MzegBWG
 IAEHTAp/xx1Wra1ynr7RNvYVcPpXm2TEim8gIGah9DkHbNgbu7ing+OO7DnQuyfD
 2h4hGD2622o6uikqkwzVd4mYuIcFu7SA6yROZhFn83BRnz0QOQienDrDlvOB8XCV
 EodLAOMc2KClvOmmriFMy11PH7MFFoXexV6KS83VfDJHi4+XzBsy0w6TXTohcA9s
 JTPIkSWqf/u6SrdLjXlFGyyJ2/KCgRiXFIBhhtYBMhDuuO7nG+mcSVzMa1PT0s6C
 PF+QoT7sJof/5VMJ4o3BgPrPkD3CQICrlt8XIt5I8ngsy6RZRQ5rt+pUix7Shcn8
 DgcunuINYfQtkfw=
 =LIjp
 -----END PGP SIGNATURE-----

Merge tag 'pci-v7.0-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Don't try to enable Extended Tags on VFs since that bit is Reserved
     and causes misleading log messages (Håkon Bugge)

   - Initialize Endpoint Read Completion Boundary to match Root Port,
     regardless of ACPI _HPX (Håkon Bugge)

   - Apply _HPX PCIe Setting Record only to AER configuration, and only
     when OS owns PCIe hotplug but not AER, to avoid clobbering Extended
     Tag and Relaxed Ordering settings (Håkon Bugge)

  Resource management:

   - Move CardBus code to setup-cardbus.c and only build it when
     CONFIG_CARDBUS is set (Ilpo Järvinen)

   - Fix bridge window alignment with optional resources, where
     additional alignment requirement was previously lost (Ilpo
     Järvinen)

   - Stop over-estimating bridge window sizes since windows are now
     assigned without any gaps between them (Ilpo Järvinen)

   - Increase resource MAX_IORES_LEVEL to avoid /proc/iomem flattening
     for nested bridges and endpoints (Ilpo Järvinen)

   - Add pbus_mem_size_optional() to handle sizes of optional resources
     (SR-IOV VF BARs, expansion ROMs, bridge windows) (Ilpo Järvinen)

   - Don't claim disabled bridge windows to avoid spurious claim
     failures (Ilpo Järvinen)

  Driver binding:

   - Fix device reference leak in pcie_port_remove_service() (Uwe
     Kleine-König)

   - Move pcie_port_bus_match() and pcie_port_bus_type to PCIe-specific
     portdrv.c (Uwe Kleine-König)

   - Convert portdrv to use pcie_port_bus_type.probe() and .remove()
     callbacks so .probe() and .remove() can eventually be removed from
     struct device_driver (Uwe Kleine-König)

  Error handling:

   - Clear stale errors on reporting agents upon probe so they don't
     look like recent errors (Lukas Wunner)

   - Add generic RAS tracepoint for hotplug events (Shuai Xue)

   - Add RAS tracepoint for link speed changes (Shuai Xue)

  Power management:

   - Avoid redundant delay on transition from D3hot to D3cold if the
     device was already in D3hot (Brian Norris)

   - Prevent runtime suspend until devices are fully initialized to
     avoid saving incompletely configured device state (Brian Norris)

  Power control:

   - Add power_on/off callbacks with generic signature to pwrseq,
     tc9563, and slot drivers so they can be used by pwrctrl core
     (Manivannan Sadhasivam)

   - Add PCIe M.2 connector support to the slot pwrctrl driver
     (Manivannan Sadhasivam)

   - Switch to pwrctrl interfaces to create, destroy, and power on/off
     devices, calling them from host controller drivers instead of the
     PCI core (Manivannan Sadhasivam)

   - Drop qcom .assert_perst() callbacks since this is now done by the
     controller driver instead of the pwrctrl driver (Manivannan
     Sadhasivam)

  Virtualization:

   - Remove an incorrect unlock in pci_slot_trylock() error handling
     (Jinhui Guo)

   - Lock the bridge device for slot reset (Keith Busch)

   - Enable ACS after IOMMU configuration on OF platforms so ACS is
     enabled on all devices; previously the first device enumerated
     (typically a Root Port) didn't have ACS enabled (Manivannan
     Sadhasivam)

   - Disable ACS Source Validation for IDT 0x80b5 and 0x8090 switches to
     work around hardware erratum; previously ACS SV was only
     temporarily disabled, which worked for enumeration but not after
     reset (Manivannan Sadhasivam)

  Peer-to-peer DMA:

   - Release per-CPU pgmap ref when vm_insert_page() fails to avoid hang
     when removing the PCI device (Hou Tao)

   - Remove incorrect p2pmem_alloc_mmap() warning about page refcount
     (Hou Tao)

  Endpoint framework:

   - Add configfs sub-groups synchronously to avoid NULL pointer
     dereference when racing with removal (Liu Song)

   - Fix swapped parameters in pci_{primary/secondary}_epc_epf_unlink()
     functions (Manikanta Maddireddy)

  ASPEED PCIe controller driver:

   - Add ASPEED Root Complex DT binding and driver (Jacky Chou)

  Freescale i.MX6 PCIe controller driver:

   - Add DT binding and driver support for an optional external refclock
     in addition to the refclock from the internal PLL (Richard Zhu)

   - Fix CLKREQ# control so host asserts it during enumeration and
     Endpoints can use it afterwards to exit the L1.2 link state
     (Richard Zhu)

  NVIDIA Tegra PCIe controller driver:

   - Export irq_domain_free_irqs() to allow PCI/MSI drivers that tear
     down MSI domains to be built as modules (Aaron Kling)

   - Allow pci-tegra to be built as a module (Aaron Kling)

  NVIDIA Tegra194 PCIe controller driver:

   - Relax Kconfig so tegra194 can be built for platforms beyond
     Tegra194 (Vidya Sagar)

  Qualcomm PCIe controller driver:

   - Merge SC8180x DT binding into SM8150 (Krzysztof Kozlowski)

   - Move SDX55, SDM845, QCS404, IPQ5018, IPQ6018, IPQ8074 Gen3,
     IPQ8074, IPQ4019, IPQ9574, APQ8064, MSM8996, APQ8084 to dedicated
     schema (Krzysztof Kozlowski)

   - Add DT binding and driver support for SA8255p Endpoint being
     configured by firmware (Mrinmay Sarkar)

   - Parse PERST# from all PCIe bridge nodes for future platforms that
     will have PERST# in Switch Downstream Ports as well as in Root
     Ports (Manivannan Sadhasivam)

  Renesas RZ/G3S PCIe controller driver:

   - Use pci_generic_config_write() since the writability provided by
     the custom wrapper is unnecessary (Claudiu Beznea)

  SOPHGO PCIe controller driver:

   - Disable ASPM L0s and L1 on Sophgo SG2044 PCIe Root Ports (Inochi
     Amaoto)

  Synopsys DesignWare PCIe controller driver:

   - Extend PCI_FIND_NEXT_CAP() and PCI_FIND_NEXT_EXT_CAP() to return a
     pointer to the preceding Capability, to allow removal of
     Capabilities that are advertised but not fully implemented (Qiang
     Yu)

   - Remove MSI and MSI-X Capabilities in platforms that can't support
     them, so the PCI core automatically falls back to INTx (Qiang Yu)

   - Add ASPM L1.1 and L1.2 Substates context to debugfs ltssm_status
     for drivers that support this (Shawn Lin)

   - Skip PME_Turn_Off broadcast and L2/L3 transition during suspend if
     link is not up to avoid an unnecessary timeout (Manivannan
     Sadhasivam)

   - Revert dw-rockchip, qcom, and DWC core changes that used link-up
     IRQs to trigger enumeration instead of waiting for link to be up
     because the PCI core doesn't allocate bus number space for
     hierarchies that might be attached (Niklas Cassel)

   - Make endpoint iATU entry for MSI permanent instead of programming
     it dynamically, which is slow and racy with respect to other
     concurrent traffic, e.g., eDMA (Koichiro Den)

   - Use iMSI-RX MSI target address when possible to fix endpoints using
     32-bit MSI (Shawn Lin)

   - Allow DWC host controller driver probe to continue if device is not
     found or found but inactive; only fail when there's an error with
     the link (Manivannan Sadhasivam)

   - For controllers like NXP i.MX6QP and i.MX7D, where LTSSM registers
     are not accessible after PME_Turn_Off, simply wait 10ms instead of
     polling for L2/L3 Ready (Richard Zhu)

   - Use multiple iATU entries to map large bridge windows and DMA
     ranges when necessary instead of failing (Samuel Holland)

   - Add EPC dynamic_inbound_mapping feature bit for Endpoint
     Controllers that can update BAR inbound address translation without
     requiring EPF driver to clear/reset the BAR first, and advertise it
     for DWC-based Endpoints (Koichiro Den)

   - Add EPC subrange_mapping feature bit for Endpoint Controllers that
     can map multiple independent inbound regions in a single BAR,
     implement subrange mapping, advertise it for DWC-based Endpoints,
     and add Endpoint selftests for it (Koichiro Den)

   - Make resizable BARs work for Endpoint multi-PF configurations;
     previously it only worked for PF 0 (Aksh Garg)

   - Fix Endpoint non-PF 0 support for BAR configuration, ATU mappings,
     and Address Match Mode (Aksh Garg)

   - Set up iATU when ECAM is enabled; previously IO and MEM outbound
     windows weren't programmed, and ECAM-related iATU entries weren't
     restored after suspend/resume, so config accesses failed (Krishna
     Chaitanya Chundru)

  Miscellaneous:

   - Use system_percpu_wq and WQ_PERCPU to explicitly request per-CPU
     work so WQ_UNBOUND can eventually be removed (Marco Crivellari)"

* tag 'pci-v7.0-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (176 commits)
  PCI/bwctrl: Disable BW controller on Intel P45 using a quirk
  PCI: Disable ACS SV for IDT 0x8090 switch
  PCI: Disable ACS SV for IDT 0x80b5 switch
  PCI: Cache ACS Capabilities register
  PCI: Enable ACS after configuring IOMMU for OF platforms
  PCI: Add ACS quirk for Pericom PI7C9X2G404 switches [12d8:b404]
  PCI: Add ACS quirk for Qualcomm Hamoa & Glymur
  PCI: Use device_lock_assert() to verify device lock is held
  PCI: Use lockdep_assert_held(pci_bus_sem) to verify lock is held
  PCI: Fix pci_slot_lock() device locking
  PCI: Fix pci_slot_trylock() error handling
  PCI: Mark Nvidia GB10 to avoid bus reset
  PCI: Mark ASM1164 SATA controller to avoid bus reset
  PCI: host-generic: Avoid reporting incorrect 'missing reg property' error
  PCI/PME: Replace RMW of Root Status register with direct write
  PCI/AER: Clear stale errors on reporting agents upon probe
  PCI: Don't claim disabled bridge windows
  PCI: rzg3s-host: Fix device node reference leak in rzg3s_pcie_host_parse_port()
  PCI: dwc: Fix missing iATU setup when ECAM is enabled
  PCI: dwc: Clean up iATU index usage in dw_pcie_iatu_setup()
  ...
This commit is contained in:
Linus Torvalds 2026-02-11 17:20:38 -08:00
commit 1c2b4a4c2b
118 changed files with 6650 additions and 2513 deletions

@@ -95,6 +95,30 @@ by the PCI endpoint function driver.
Register space of the function driver is usually configured
using this API.
Some endpoint controllers also support calling pci_epc_set_bar() again
for the same BAR (without calling pci_epc_clear_bar()) to update inbound
address translations after the host has programmed the BAR base address.
Endpoint function drivers can check this capability via the
dynamic_inbound_mapping EPC feature bit.
When pci_epf_bar.num_submap is non-zero, the endpoint function driver is
requesting BAR subrange mapping using pci_epf_bar.submap. This requires
the EPC to advertise support via the subrange_mapping EPC feature bit.
When an EPF driver wants to make use of the inbound subrange mapping
feature, it requires that the BAR base address has been programmed by
the host during enumeration. Thus, it needs to call pci_epc_set_bar()
twice for the same BAR (requires dynamic_inbound_mapping): first with
num_submap set to zero and configuring the BAR size, then after the PCIe
link is up and the host enumerates the endpoint and programs the BAR
base address, again with num_submap set to non-zero value.
Note that when making use of the inbound subrange mapping feature, the
EPF driver must not call pci_epc_clear_bar() between the two
pci_epc_set_bar() calls, because clearing the BAR can clear/disable the
BAR register or BAR decode on the endpoint while the host still expects
the assigned BAR address to remain valid.
* pci_epc_clear_bar()
The PCI endpoint function driver should use pci_epc_clear_bar() to reset
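
The two-step pci_epc_set_bar() flow described above can be sketched with a small Python toy model of the call-ordering rules; pci_epc_set_bar(), pci_epc_clear_bar(), dynamic_inbound_mapping, and subrange_mapping are names from the documentation, while the MockEPC class and its -1 error return are purely illustrative:

```python
# Toy model of the two-step pci_epc_set_bar() flow for inbound subrange
# mapping. It encodes only the call-ordering rules from the text above;
# MockEPC, its feature strings, and the -1 error return are illustrative,
# not the kernel API.

class MockEPC:
    def __init__(self, features):
        self.features = set(features)
        self.bar_programmed = set()  # BARs that are currently sized/enabled

    def set_bar(self, bar, num_submap=0):
        if num_submap == 0:
            # First call: configure the BAR size before link up.
            self.bar_programmed.add(bar)
            return 0
        # Second call: add subrange mappings after the host has
        # enumerated the endpoint and programmed the BAR base address.
        if bar not in self.bar_programmed:
            return -1  # BAR was cleared (or never set): host address lost
        if "dynamic_inbound_mapping" not in self.features:
            return -1  # EPC cannot update translation without a BAR reset
        if "subrange_mapping" not in self.features:
            return -1  # EPC cannot map multiple inbound regions in one BAR
        return 0

    def clear_bar(self, bar):
        # Clearing disables BAR decode; it must NOT be called between
        # the two set_bar() calls of the subrange flow.
        self.bar_programmed.discard(bar)

epc = MockEPC({"dynamic_inbound_mapping", "subrange_mapping"})
assert epc.set_bar(0) == 0                # step 1: size the BAR
assert epc.set_bar(0, num_submap=2) == 0  # step 2: map two subranges
```

In this model, calling clear_bar() between the two set_bar() calls, or using an EPC without both feature bits, makes the second call fail, mirroring the constraints stated in the documentation.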

@@ -84,6 +84,25 @@ device, the following commands can be used::
# echo 32 > functions/pci_epf_test/func1/msi_interrupts
# echo 2048 > functions/pci_epf_test/func1/msix_interrupts
By default, pci-epf-test uses the following BAR sizes::
# grep . functions/pci_epf_test/func1/pci_epf_test.0/bar?_size
functions/pci_epf_test/func1/pci_epf_test.0/bar0_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar1_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar2_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar3_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar4_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar5_size:1048576
The user can override a default value using e.g.::
# echo 1048576 > functions/pci_epf_test/func1/pci_epf_test.0/bar1_size
Overriding the default BAR sizes can only be done before binding the
pci-epf-test device to a PCI endpoint controller driver.
Note: Some endpoint controllers might have fixed-size BARs or reserved BARs;
for such controllers, the corresponding BAR size in configfs will be ignored.
Binding pci-epf-test Device to EP Controller
--------------------------------------------

@@ -52,14 +52,14 @@ pci-epf-vntb device, the following commands can be used::
# cd /sys/kernel/config/pci_ep/
# mkdir functions/pci_epf_vntb/func1
-The "mkdir func1" above creates the pci-epf-ntb function device that will
+The "mkdir func1" above creates the pci-epf-vntb function device that will
be probed by pci_epf_vntb driver.
The PCI endpoint framework populates the directory with the following
configurable fields::
-# ls functions/pci_epf_ntb/func1
-baseclass_code deviceid msi_interrupts pci-epf-ntb.0
+# ls functions/pci_epf_vntb/func1
+baseclass_code deviceid msi_interrupts pci-epf-vntb.0
progif_code secondary subsys_id vendorid
cache_line_size interrupt_pin msix_interrupts primary
revid subclass_code subsys_vendor_id
@@ -111,13 +111,13 @@ A sample configuration for virtual NTB driver for virtual PCI bus::
# echo 0x080A > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_pid
# echo 0x10 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vbus_number
-Binding pci-epf-ntb Device to EP Controller
+Binding pci-epf-vntb Device to EP Controller
--------------------------------------------
NTB function device should be attached to PCI endpoint controllers
connected to the host.
-# ln -s controllers/5f010000.pcie_ep functions/pci-epf-ntb/func1/primary
+# ln -s controllers/5f010000.pcie_ep functions/pci_epf_vntb/func1/primary
Once the above step is completed, the PCI endpoint controllers are ready to
establish a link with the host.
@@ -139,7 +139,7 @@ lspci Output at Host side
-------------------------
Note that the devices listed here correspond to the values populated in
-"Creating pci-epf-ntb Device" section above::
+"Creating pci-epf-vntb Device" section above::
# lspci
00:00.0 PCI bridge: Freescale Semiconductor Inc Device 0000 (rev 01)
@@ -152,7 +152,7 @@ lspci Output at EP Side / Virtual PCI bus
-----------------------------------------
Note that the devices listed here correspond to the values populated in
-"Creating pci-epf-ntb Device" section above::
+"Creating pci-epf-vntb Device" section above::
# lspci
10:00.0 Unassigned class [ffff]: Dawicontrol Computersysteme GmbH Device 1234 (rev ff)

@@ -98,7 +98,7 @@ function::
which allocates up to max_vecs interrupt vectors for a PCI device. It
returns the number of vectors allocated or a negative error. If the device
-has a requirements for a minimum number of vectors the driver can pass a
+has a requirement for a minimum number of vectors the driver can pass a
min_vecs argument set to this limit, and the PCI core will return -ENOSPC
if it can't meet the minimum number of vectors.
@@ -127,7 +127,7 @@ not be able to allocate as many vectors for MSI as it could for MSI-X. On
some platforms, MSI interrupts must all be targeted at the same set of CPUs
whereas MSI-X interrupts can all be targeted at different CPUs.
-If a device supports neither MSI-X or MSI it will fall back to a single
+If a device supports neither MSI-X nor MSI it will fall back to a single
legacy IRQ vector.
The typical usage of MSI or MSI-X interrupts is to allocate as many vectors
@@ -203,7 +203,7 @@ How to tell whether MSI/MSI-X is enabled on a device
----------------------------------------------------
Using 'lspci -v' (as root) may show some devices with "MSI", "Message
-Signalled Interrupts" or "MSI-X" capabilities. Each of these capabilities
+Signaled Interrupts" or "MSI-X" capabilities. Each of these capabilities
has an 'Enable' flag which is followed with either "+" (enabled)
or "-" (disabled).
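
The min_vecs/max_vecs contract of pci_alloc_irq_vectors() described above can be sketched as a toy model; alloc_irq_vectors() and the available-vector count below are illustrative stand-ins, not the kernel implementation:

```python
# Toy model of the pci_alloc_irq_vectors() contract: up to max_vecs
# vectors are granted, and the call fails with -ENOSPC if fewer than
# min_vecs are available. The function name and the "available"
# parameter are illustrative, not the kernel API.

ENOSPC = -28  # the kernel returns -ENOSPC when min_vecs cannot be met

def alloc_irq_vectors(available, min_vecs, max_vecs):
    """Grant up to max_vecs of the platform's available vectors,
    or fail with -ENOSPC if even min_vecs cannot be satisfied."""
    granted = min(available, max_vecs)
    if granted < min_vecs:
        return ENOSPC
    return granted

assert alloc_irq_vectors(available=32, min_vecs=4, max_vecs=16) == 16
assert alloc_irq_vectors(available=8, min_vecs=4, max_vecs=16) == 8
assert alloc_irq_vectors(available=2, min_vecs=4, max_vecs=16) == ENOSPC
```

A driver that can work with any number of vectors down to one would pass min_vecs=1, taking whatever the platform grants.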

@@ -0,0 +1,182 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/aspeed,ast2600-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: ASPEED PCIe Root Complex Controller
maintainers:
- Jacky Chou <jacky_chou@aspeedtech.com>
description:
The ASPEED PCIe Root Complex controller provides PCI Express Root Complex
functionality for ASPEED SoCs, such as the AST2600 and AST2700.
This controller enables connectivity to PCIe endpoint devices, supporting
memory and I/O windows, MSI and INTx interrupts, and integration with
the SoC's clock, reset, and pinctrl subsystems. On AST2600, the PCIe Root
Port device number is always 8.
properties:
compatible:
enum:
- aspeed,ast2600-pcie
- aspeed,ast2700-pcie
reg:
maxItems: 1
ranges:
minItems: 2
maxItems: 2
interrupts:
maxItems: 1
description: INTx and MSI interrupt
resets:
items:
- description: PCIe controller reset
reset-names:
items:
- const: h2x
aspeed,ahbc:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Phandle to the ASPEED AHB Controller (AHBC) syscon node.
This reference is used by the PCIe controller to access
system-level configuration registers related to the AHB bus
and to enable AHB access for the PCIe controller.
aspeed,pciecfg:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Phandle to the ASPEED PCIe configuration syscon node.
This reference allows the PCIe controller to access
SoC-specific PCIe configuration registers. Other functions, such
as the PCIe RC and PCIe EP, use this common register block to
configure the SoC interfaces.
interrupt-controller: true
patternProperties:
"^pcie@[0-9a-f]+,0$":
type: object
$ref: /schemas/pci/pci-pci-bridge.yaml#
properties:
reg:
maxItems: 1
resets:
items:
- description: PERST# signal
reset-names:
items:
- const: perst
clocks:
maxItems: 1
phys:
maxItems: 1
required:
- resets
- reset-names
- clocks
- phys
- ranges
unevaluatedProperties: false
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/interrupt-controller/msi-controller.yaml#
- if:
properties:
compatible:
contains:
const: aspeed,ast2600-pcie
then:
required:
- aspeed,ahbc
else:
properties:
aspeed,ahbc: false
- if:
properties:
compatible:
contains:
const: aspeed,ast2700-pcie
then:
required:
- aspeed,pciecfg
else:
properties:
aspeed,pciecfg: false
required:
- reg
- interrupts
- bus-range
- ranges
- resets
- reset-names
- msi-controller
- interrupt-controller
- interrupt-map-mask
- interrupt-map
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/ast2600-clock.h>
pcie0: pcie@1e770000 {
compatible = "aspeed,ast2600-pcie";
device_type = "pci";
reg = <0x1e770000 0x100>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
bus-range = <0x00 0xff>;
ranges = <0x01000000 0x0 0x00018000 0x00018000 0x0 0x00008000
0x02000000 0x0 0x60000000 0x60000000 0x0 0x20000000>;
resets = <&syscon ASPEED_RESET_H2X>;
reset-names = "h2x";
#interrupt-cells = <1>;
msi-controller;
aspeed,ahbc = <&ahbc>;
interrupt-controller;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie0 0>,
<0 0 0 2 &pcie0 1>,
<0 0 0 3 &pcie0 2>,
<0 0 0 4 &pcie0 3>;
pcie@8,0 {
compatible = "pciclass,0604";
reg = <0x00004000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
resets = <&syscon ASPEED_RESET_PCIE_RC_O>;
reset-names = "perst";
clocks = <&syscon ASPEED_CLK_GATE_BCLK>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pcierc1_default>;
phys = <&pcie_phy1>;
ranges;
};
};

@@ -44,7 +44,7 @@ properties:
clock-names:
minItems: 3
-maxItems: 5
+maxItems: 6
interrupts:
minItems: 1
@@ -212,14 +212,17 @@ allOf:
then:
properties:
clocks:
-maxItems: 5
+minItems: 5
+maxItems: 6
clock-names:
minItems: 5
items:
- const: pcie
- const: pcie_bus
- const: pcie_phy
- const: pcie_aux
- const: ref
- const: extref # Optional
unevaluatedProperties: false

@@ -32,6 +32,8 @@ properties:
minItems: 1
maxItems: 3
msi-parent: true
required:
- compatible
- reg

@@ -48,6 +48,7 @@ properties:
oneOf:
- items:
- enum:
- mediatek,mt7981-pcie
- mediatek,mt7986-pcie
- mediatek,mt8188-pcie
- mediatek,mt8195-pcie

@@ -0,0 +1,170 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-apq8064.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm APQ8064/IPQ8064 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-apq8064
- qcom,pcie-ipq8064
- qcom,pcie-ipq8064-v2
reg:
maxItems: 4
reg-names:
items:
- const: dbi
- const: elbi
- const: parf
- const: config
clocks:
minItems: 3
maxItems: 5
clock-names:
minItems: 3
items:
- const: core # Clocks the pcie hw block
- const: iface # Configuration AHB clock
- const: phy
- const: aux
- const: ref
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
resets:
minItems: 5
maxItems: 6
reset-names:
minItems: 5
items:
- const: axi
- const: ahb
- const: por
- const: pci
- const: phy
- const: ext
vdda-supply:
description: A phandle to the core analog power supply
vdda_phy-supply:
description: A phandle to the core analog power supply for PHY
vdda_refclk-supply:
description: A phandle to the core analog power supply for IC which generates reference clock
required:
- resets
- reset-names
- vdda-supply
- vdda_phy-supply
- vdda_refclk-supply
allOf:
- $ref: qcom,pcie-common.yaml#
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-apq8064
then:
properties:
clocks:
maxItems: 3
clock-names:
maxItems: 3
resets:
maxItems: 5
reset-names:
maxItems: 5
else:
properties:
clocks:
minItems: 5
clock-names:
minItems: 5
resets:
minItems: 6
reset-names:
minItems: 6
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-msm8960.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/qcom,gcc-msm8960.h>
pcie@1b500000 {
compatible = "qcom,pcie-apq8064";
reg = <0x1b500000 0x1000>,
<0x1b502000 0x80>,
<0x1b600000 0x100>,
<0x0ff00000 0x100000>;
reg-names = "dbi", "elbi", "parf", "config";
ranges = <0x81000000 0x0 0x00000000 0x0fe00000 0x0 0x00100000>, /* I/O */
<0x82000000 0x0 0x08000000 0x08000000 0x0 0x07e00000>; /* mem */
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
clocks = <&gcc PCIE_A_CLK>,
<&gcc PCIE_H_CLK>,
<&gcc PCIE_PHY_REF_CLK>;
clock-names = "core", "iface", "phy";
interrupts = <GIC_SPI 238 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
resets = <&gcc PCIE_ACLK_RESET>,
<&gcc PCIE_HCLK_RESET>,
<&gcc PCIE_POR_RESET>,
<&gcc PCIE_PCI_RESET>,
<&gcc PCIE_PHY_RESET>;
reset-names = "axi", "ahb", "por", "pci", "phy";
perst-gpios = <&tlmm_pinmux 27 GPIO_ACTIVE_LOW>;
vdda-supply = <&pm8921_s3>;
vdda_phy-supply = <&pm8921_lvs6>;
vdda_refclk-supply = <&v3p3_fixed>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};

@@ -0,0 +1,109 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-apq8084.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm APQ8084 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-apq8084
reg:
minItems: 4
maxItems: 5
reg-names:
minItems: 4
items:
- const: parf
- const: dbi
- const: elbi
- const: config
- const: mhi
clocks:
maxItems: 4
clock-names:
items:
- const: iface # Configuration AHB clock
- const: master_bus # Master AXI clock
- const: slave_bus # Slave AXI clock
- const: aux
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
resets:
maxItems: 1
reset-names:
items:
- const: core
vdda-supply:
description: A phandle to the core analog power supply
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/gpio/gpio.h>
pcie@fc520000 {
compatible = "qcom,pcie-apq8084";
reg = <0xfc520000 0x2000>,
<0xff000000 0x1000>,
<0xff001000 0x1000>,
<0xff002000 0x2000>;
reg-names = "parf", "dbi", "elbi", "config";
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x81000000 0 0 0xff200000 0 0x00100000>,
<0x82000000 0 0x00300000 0xff300000 0 0x00d00000>;
interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 244 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc 0 245 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc 0 247 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc 0 248 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&gcc 324>,
<&gcc 325>,
<&gcc 327>,
<&gcc 323>;
clock-names = "iface", "master_bus", "slave_bus", "aux";
resets = <&gcc 81>;
reset-names = "core";
power-domains = <&gcc 1>;
vdda-supply = <&pma8084_l3>;
phys = <&pciephy0>;
phy-names = "pciephy";
perst-gpios = <&tlmm 70 GPIO_ACTIVE_LOW>;
pinctrl-0 = <&pcie0_pins_default>;
pinctrl-names = "default";
};

@@ -0,0 +1,146 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq4019.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm IPQ4019 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-ipq4019
reg:
maxItems: 4
reg-names:
items:
- const: dbi
- const: elbi
- const: parf
- const: config
clocks:
maxItems: 3
clock-names:
items:
- const: aux
- const: master_bus # Master AXI clock
- const: slave_bus # Slave AXI clock
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
resets:
maxItems: 12
reset-names:
items:
- const: axi_m # AXI master reset
- const: axi_s # AXI slave reset
- const: pipe
- const: axi_m_vmid
- const: axi_s_xpu
- const: parf
- const: phy
- const: axi_m_sticky # AXI master sticky reset
- const: pipe_sticky
- const: pwr
- const: ahb
- const: phy_ahb
required:
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-ipq4019.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
pcie@40000000 {
compatible = "qcom,pcie-ipq4019";
reg = <0x40000000 0xf1d>,
<0x40000f20 0xa8>,
<0x80000 0x2000>,
<0x40100000 0x1000>;
reg-names = "dbi", "elbi", "parf", "config";
ranges = <0x81000000 0x0 0x00000000 0x40200000 0x0 0x00100000>,
<0x82000000 0x0 0x40300000 0x40300000 0x0 0x00d00000>;
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
clocks = <&gcc GCC_PCIE_AHB_CLK>,
<&gcc GCC_PCIE_AXI_M_CLK>,
<&gcc GCC_PCIE_AXI_S_CLK>;
clock-names = "aux",
"master_bus",
"slave_bus";
interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
resets = <&gcc PCIE_AXI_M_ARES>,
<&gcc PCIE_AXI_S_ARES>,
<&gcc PCIE_PIPE_ARES>,
<&gcc PCIE_AXI_M_VMIDMT_ARES>,
<&gcc PCIE_AXI_S_XPU_ARES>,
<&gcc PCIE_PARF_XPU_ARES>,
<&gcc PCIE_PHY_ARES>,
<&gcc PCIE_AXI_M_STICKY_ARES>,
<&gcc PCIE_PIPE_STICKY_ARES>,
<&gcc PCIE_PWR_ARES>,
<&gcc PCIE_AHB_ARES>,
<&gcc PCIE_PHY_AHB_ARES>;
reset-names = "axi_m",
"axi_s",
"pipe",
"axi_m_vmid",
"axi_s_xpu",
"parf",
"phy",
"axi_m_sticky",
"pipe_sticky",
"pwr",
"ahb",
"phy_ahb";
perst-gpios = <&tlmm 38 GPIO_ACTIVE_LOW>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};

@@ -0,0 +1,189 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq5018.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm IPQ5018 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-ipq5018
reg:
minItems: 5
maxItems: 6
reg-names:
minItems: 5
items:
- const: dbi
- const: elbi
- const: atu
- const: parf
- const: config
- const: mhi
clocks:
maxItems: 6
clock-names:
items:
- const: iface # PCIe to SysNOC BIU clock
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: ahb
- const: aux
- const: axi_bridge
interrupts:
maxItems: 9
interrupt-names:
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
resets:
maxItems: 8
reset-names:
items:
- const: pipe
- const: sleep
- const: sticky # Core sticky reset
- const: axi_m # AXI master reset
- const: axi_s # AXI slave reset
- const: ahb
- const: axi_m_sticky # AXI master sticky reset
- const: axi_s_sticky # AXI slave sticky reset
required:
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-ipq5018.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/qcom,gcc-ipq5018.h>
pcie@a0000000 {
compatible = "qcom,pcie-ipq5018";
reg = <0xa0000000 0xf1d>,
<0xa0000f20 0xa8>,
<0xa0001000 0x1000>,
<0x00080000 0x3000>,
<0xa0100000 0x1000>,
<0x00083000 0x1000>;
reg-names = "dbi",
"elbi",
"atu",
"parf",
"config",
"mhi";
ranges = <0x01000000 0 0x00000000 0xa0200000 0 0x00100000>,
<0x02000000 0 0xa0300000 0xa0300000 0 0x10000000>;
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <2>;
#address-cells = <3>;
#size-cells = <2>;
/* The controller supports Gen3, but the connected PHY is Gen2-capable */
max-link-speed = <2>;
clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
<&gcc GCC_PCIE0_AXI_M_CLK>,
<&gcc GCC_PCIE0_AXI_S_CLK>,
<&gcc GCC_PCIE0_AHB_CLK>,
<&gcc GCC_PCIE0_AUX_CLK>,
<&gcc GCC_PCIE0_AXI_S_BRIDGE_CLK>;
clock-names = "iface",
"axi_m",
"axi_s",
"ahb",
"aux",
"axi_bridge";
msi-map = <0x0 &v2m0 0x0 0xff8>;
interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0",
"msi1",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7",
"global";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc 0 GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc 0 GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc 0 GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
phys = <&pcie0_phy>;
phy-names = "pciephy";
resets = <&gcc GCC_PCIE0_PIPE_ARES>,
<&gcc GCC_PCIE0_SLEEP_ARES>,
<&gcc GCC_PCIE0_CORE_STICKY_ARES>,
<&gcc GCC_PCIE0_AXI_MASTER_ARES>,
<&gcc GCC_PCIE0_AXI_SLAVE_ARES>,
<&gcc GCC_PCIE0_AHB_ARES>,
<&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>,
<&gcc GCC_PCIE0_AXI_SLAVE_STICKY_ARES>;
reset-names = "pipe",
"sleep",
"sticky",
"axi_m",
"axi_s",
"ahb",
"axi_m_sticky",
"axi_s_sticky";
perst-gpios = <&tlmm 15 GPIO_ACTIVE_LOW>;
wake-gpios = <&tlmm 16 GPIO_ACTIVE_LOW>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};
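Each of the binding schemas in this series can be exercised locally before submission; a minimal sketch, assuming a configured kernel tree with the `dtschema` tools installed (the schema path shown is illustrative, inferred from the `qcom,pcie-ipq5018` compatible):

```shell
# Validate a single binding schema and its embedded example against the
# dtschema meta-schemas (run from the top of a kernel tree).
make dt_binding_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/pci/qcom,pcie-ipq5018.yaml

# Check the compiled DTBs against the same schema.
make dtbs_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/pci/qcom,pcie-ipq5018.yaml
```

`DT_SCHEMA_FILES` restricts the check to one schema, which keeps the run fast while iterating on a binding split like this one.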


@@ -0,0 +1,179 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq6018.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm IPQ6018 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-ipq6018
- qcom,pcie-ipq8074-gen3
reg:
minItems: 5
maxItems: 6
reg-names:
minItems: 5
items:
- const: dbi
- const: elbi
- const: atu
- const: parf
- const: config
- const: mhi
clocks:
maxItems: 5
clock-names:
items:
- const: iface # PCIe to SysNOC BIU clock
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: axi_bridge
- const: rchng
interrupts:
maxItems: 9
interrupt-names:
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
resets:
maxItems: 8
reset-names:
items:
- const: pipe
- const: sleep
- const: sticky # Core sticky reset
- const: axi_m # AXI master reset
- const: axi_s # AXI slave reset
- const: ahb
- const: axi_m_sticky # AXI master sticky reset
- const: axi_s_sticky # AXI slave sticky reset
required:
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-ipq6018.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/qcom,gcc-ipq6018.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@20000000 {
compatible = "qcom,pcie-ipq6018";
reg = <0x0 0x20000000 0x0 0xf1d>,
<0x0 0x20000f20 0x0 0xa8>,
<0x0 0x20001000 0x0 0x1000>,
<0x0 0x80000 0x0 0x4000>,
<0x0 0x20100000 0x0 0x1000>;
reg-names = "dbi", "elbi", "atu", "parf", "config";
ranges = <0x81000000 0x0 0x00000000 0x0 0x20200000 0x0 0x10000>,
<0x82000000 0x0 0x20220000 0x0 0x20220000 0x0 0xfde0000>;
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
max-link-speed = <3>;
#address-cells = <3>;
#size-cells = <2>;
clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
<&gcc GCC_PCIE0_AXI_M_CLK>,
<&gcc GCC_PCIE0_AXI_S_CLK>,
<&gcc GCC_PCIE0_AXI_S_BRIDGE_CLK>,
<&gcc PCIE0_RCHNG_CLK>;
clock-names = "iface",
"axi_m",
"axi_s",
"axi_bridge",
"rchng";
interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0",
"msi1",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7",
"global";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc 0 0 GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc 0 0 GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
phys = <&pcie_phy>;
phy-names = "pciephy";
resets = <&gcc GCC_PCIE0_PIPE_ARES>,
<&gcc GCC_PCIE0_SLEEP_ARES>,
<&gcc GCC_PCIE0_CORE_STICKY_ARES>,
<&gcc GCC_PCIE0_AXI_MASTER_ARES>,
<&gcc GCC_PCIE0_AXI_SLAVE_ARES>,
<&gcc GCC_PCIE0_AHB_ARES>,
<&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>,
<&gcc GCC_PCIE0_AXI_SLAVE_STICKY_ARES>;
reset-names = "pipe",
"sleep",
"sticky",
"axi_m",
"axi_s",
"ahb",
"axi_m_sticky",
"axi_s_sticky";
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};
};


@@ -0,0 +1,165 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq8074.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm IPQ8074 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-ipq8074
reg:
maxItems: 4
reg-names:
items:
- const: dbi
- const: elbi
- const: parf
- const: config
clocks:
maxItems: 5
clock-names:
items:
- const: iface # PCIe to SysNOC BIU clock
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: ahb
- const: aux
interrupts:
maxItems: 9
interrupt-names:
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
resets:
maxItems: 7
reset-names:
items:
- const: pipe
- const: sleep
- const: sticky # Core sticky reset
- const: axi_m # AXI master reset
- const: axi_s # AXI slave reset
- const: ahb
- const: axi_m_sticky # AXI master sticky reset
required:
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-ipq8074.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
pcie@10000000 {
compatible = "qcom,pcie-ipq8074";
reg = <0x10000000 0xf1d>,
<0x10000f20 0xa8>,
<0x00088000 0x2000>,
<0x10100000 0x1000>;
reg-names = "dbi", "elbi", "parf", "config";
ranges = <0x81000000 0x0 0x00000000 0x10200000 0x0 0x10000>, /* I/O */
<0x82000000 0x0 0x10220000 0x10220000 0x0 0xfde0000>; /* MEM */
device_type = "pci";
linux,pci-domain = <1>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
max-link-speed = <2>;
#address-cells = <3>;
#size-cells = <2>;
clocks = <&gcc GCC_SYS_NOC_PCIE1_AXI_CLK>,
<&gcc GCC_PCIE1_AXI_M_CLK>,
<&gcc GCC_PCIE1_AXI_S_CLK>,
<&gcc GCC_PCIE1_AHB_CLK>,
<&gcc GCC_PCIE1_AUX_CLK>;
clock-names = "iface",
"axi_m",
"axi_s",
"ahb",
"aux";
interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0",
"msi1",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7",
"global";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc 0 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc 0 GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
phys = <&pcie_qmp1>;
phy-names = "pciephy";
resets = <&gcc GCC_PCIE1_PIPE_ARES>,
<&gcc GCC_PCIE1_SLEEP_ARES>,
<&gcc GCC_PCIE1_CORE_STICKY_ARES>,
<&gcc GCC_PCIE1_AXI_MASTER_ARES>,
<&gcc GCC_PCIE1_AXI_SLAVE_ARES>,
<&gcc GCC_PCIE1_AHB_ARES>,
<&gcc GCC_PCIE1_AXI_MASTER_STICKY_ARES>;
reset-names = "pipe",
"sleep",
"sticky",
"axi_m",
"axi_s",
"ahb",
"axi_m_sticky";
perst-gpios = <&tlmm 58 GPIO_ACTIVE_LOW>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};


@@ -0,0 +1,183 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq9574.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm IPQ9574 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
oneOf:
- enum:
- qcom,pcie-ipq9574
- items:
- enum:
- qcom,pcie-ipq5332
- qcom,pcie-ipq5424
- const: qcom,pcie-ipq9574
reg:
maxItems: 6
reg-names:
items:
- const: dbi
- const: elbi
- const: atu
- const: parf
- const: config
- const: mhi
clocks:
maxItems: 6
clock-names:
items:
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: axi_bridge
- const: rchng
- const: ahb
- const: aux
interrupts:
minItems: 8
maxItems: 9
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
resets:
maxItems: 8
reset-names:
items:
- const: pipe
- const: sticky # Core sticky reset
- const: axi_s_sticky # AXI Slave Sticky reset
- const: axi_s # AXI slave reset
- const: axi_m_sticky # AXI Master Sticky reset
- const: axi_m # AXI master reset
- const: aux
- const: ahb
required:
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,ipq9574-gcc.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interconnect/qcom,ipq9574.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/qcom,ipq9574-gcc.h>
pcie@10000000 {
compatible = "qcom,pcie-ipq9574";
reg = <0x10000000 0xf1d>,
<0x10000f20 0xa8>,
<0x10001000 0x1000>,
<0x000f8000 0x4000>,
<0x10100000 0x1000>,
<0x000fe000 0x1000>;
reg-names = "dbi",
"elbi",
"atu",
"parf",
"config",
"mhi";
ranges = <0x01000000 0x0 0x00000000 0x10200000 0x0 0x100000>,
<0x02000000 0x0 0x10300000 0x10300000 0x0 0x7d00000>;
device_type = "pci";
linux,pci-domain = <1>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
clocks = <&gcc GCC_PCIE1_AXI_M_CLK>,
<&gcc GCC_PCIE1_AXI_S_CLK>,
<&gcc GCC_PCIE1_AXI_S_BRIDGE_CLK>,
<&gcc GCC_PCIE1_RCHNG_CLK>,
<&gcc GCC_PCIE1_AHB_CLK>,
<&gcc GCC_PCIE1_AUX_CLK>;
clock-names = "axi_m",
"axi_s",
"axi_bridge",
"rchng",
"ahb",
"aux";
interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
<&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;
interconnect-names = "pcie-mem", "cpu-pcie";
interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0",
"msi1",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc 0 GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc 0 GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc 0 GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
resets = <&gcc GCC_PCIE1_PIPE_ARES>,
<&gcc GCC_PCIE1_CORE_STICKY_ARES>,
<&gcc GCC_PCIE1_AXI_S_STICKY_ARES>,
<&gcc GCC_PCIE1_AXI_S_ARES>,
<&gcc GCC_PCIE1_AXI_M_STICKY_ARES>,
<&gcc GCC_PCIE1_AXI_M_ARES>,
<&gcc GCC_PCIE1_AUX_ARES>,
<&gcc GCC_PCIE1_AHB_ARES>;
reset-names = "pipe",
"sticky",
"axi_s_sticky",
"axi_s",
"axi_m_sticky",
"axi_m",
"aux",
"ahb";
phys = <&pcie1_phy>;
phy-names = "pciephy";
perst-gpios = <&tlmm 26 GPIO_ACTIVE_LOW>;
wake-gpios = <&tlmm 27 GPIO_ACTIVE_LOW>;
};


@@ -0,0 +1,156 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-msm8996.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm MSM8996 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
oneOf:
- enum:
- qcom,pcie-msm8996
- items:
- const: qcom,pcie-msm8998
- const: qcom,pcie-msm8996
reg:
minItems: 4
maxItems: 5
reg-names:
minItems: 4
items:
- const: parf
- const: dbi
- const: elbi
- const: config
- const: mhi
clocks:
maxItems: 5
clock-names:
items:
- const: pipe # Pipe Clock driving internal logic
- const: aux
- const: cfg
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
interrupts:
minItems: 8
maxItems: 9
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
vdda-supply:
description: A phandle to the core analog power supply
vddpe-3v3-supply:
description: A phandle to the PCIe endpoint power supply
required:
- power-domains
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-msm8996.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
pcie@600000 {
compatible = "qcom,pcie-msm8996";
reg = <0x00600000 0x2000>,
<0x0c000000 0xf1d>,
<0x0c000f20 0xa8>,
<0x0c100000 0x100000>;
reg-names = "parf", "dbi", "elbi", "config";
ranges = <0x01000000 0x0 0x00000000 0x0c200000 0x0 0x100000>,
<0x02000000 0x0 0x0c300000 0x0c300000 0x0 0xd00000>;
device_type = "pci";
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
linux,pci-domain = <0>;
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
<&gcc GCC_PCIE_0_AUX_CLK>,
<&gcc GCC_PCIE_0_CFG_AHB_CLK>,
<&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_AXI_CLK>;
clock-names = "pipe",
"aux",
"cfg",
"bus_master",
"bus_slave";
interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0",
"msi1",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 244 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc GIC_SPI 248 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
pinctrl-names = "default", "sleep";
pinctrl-0 = <&pcie0_state_on>;
pinctrl-1 = <&pcie0_state_off>;
phys = <&pciephy_0>;
phy-names = "pciephy";
power-domains = <&gcc PCIE0_GDSC>;
perst-gpios = <&tlmm 35 GPIO_ACTIVE_LOW>;
vddpe-3v3-supply = <&wlan_en>;
vdda-supply = <&vreg_l28a_0p925>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};


@@ -0,0 +1,131 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-qcs404.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm QCS404 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-qcs404
reg:
maxItems: 4
reg-names:
items:
- const: dbi
- const: elbi
- const: parf
- const: config
clocks:
maxItems: 4
clock-names:
items:
- const: iface # AHB clock
- const: aux
- const: master_bus # AXI Master clock
- const: slave_bus # AXI Slave clock
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
resets:
maxItems: 6
reset-names:
items:
- const: axi_m # AXI Master reset
- const: axi_s # AXI Slave reset
- const: axi_m_sticky # AXI Master Sticky reset
- const: pipe_sticky
- const: pwr
- const: ahb
required:
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-qcs404.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
pcie@10000000 {
compatible = "qcom,pcie-qcs404";
reg = <0x10000000 0xf1d>,
<0x10000f20 0xa8>,
<0x07780000 0x2000>,
<0x10001000 0x2000>;
reg-names = "dbi", "elbi", "parf", "config";
ranges = <0x81000000 0x0 0x00000000 0x10003000 0x0 0x00010000>, /* I/O */
<0x82000000 0x0 0x10013000 0x10013000 0x0 0x007ed000>; /* memory */
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
clocks = <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
<&gcc GCC_PCIE_0_AUX_CLK>,
<&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_AXI_CLK>;
clock-names = "iface", "aux", "master_bus", "slave_bus";
interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc GIC_SPI 224 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc GIC_SPI 267 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc GIC_SPI 268 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
phys = <&pcie_phy>;
phy-names = "pciephy";
perst-gpios = <&tlmm 43 GPIO_ACTIVE_LOW>;
resets = <&gcc GCC_PCIE_0_AXI_MASTER_ARES>,
<&gcc GCC_PCIE_0_AXI_SLAVE_ARES>,
<&gcc GCC_PCIE_0_AXI_MASTER_STICKY_ARES>,
<&gcc GCC_PCIE_0_CORE_STICKY_ARES>,
<&gcc GCC_PCIE_0_BCR>,
<&gcc GCC_PCIE_0_AHB_ARES>;
reset-names = "axi_m",
"axi_s",
"axi_m_sticky",
"pipe_sticky",
"pwr",
"ahb";
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};


@@ -1,168 +0,0 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-sc8180x.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SC8180x PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
description:
Qualcomm SC8180x SoC PCIe root complex controller is based on the Synopsys
DesignWare PCIe IP.
properties:
compatible:
const: qcom,pcie-sc8180x
reg:
minItems: 5
maxItems: 6
reg-names:
minItems: 5
items:
- const: parf # Qualcomm specific registers
- const: dbi # DesignWare PCIe registers
- const: elbi # External local bus interface registers
- const: atu # ATU address space
- const: config # PCIe configuration space
- const: mhi # MHI registers
clocks:
minItems: 6
maxItems: 6
clock-names:
items:
- const: pipe # PIPE clock
- const: aux # Auxiliary clock
- const: cfg # Configuration clock
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a # Slave Q2A clock
interrupts:
minItems: 8
maxItems: 9
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
resets:
maxItems: 1
reset-names:
items:
- const: pci
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-sc8180x.h>
#include <dt-bindings/interconnect/qcom,sc8180x.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@1c00000 {
compatible = "qcom,pcie-sc8180x";
reg = <0 0x01c00000 0 0x3000>,
<0 0x60000000 0 0xf1d>,
<0 0x60000f20 0 0xa8>,
<0 0x60001000 0 0x1000>,
<0 0x60100000 0 0x100000>;
reg-names = "parf",
"dbi",
"elbi",
"atu",
"config";
ranges = <0x01000000 0x0 0x60200000 0x0 0x60200000 0x0 0x100000>,
<0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>;
bus-range = <0x00 0xff>;
device_type = "pci";
linux,pci-domain = <0>;
num-lanes = <2>;
#address-cells = <3>;
#size-cells = <2>;
assigned-clocks = <&gcc GCC_PCIE_0_AUX_CLK>;
assigned-clock-rates = <19200000>;
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
<&gcc GCC_PCIE_0_AUX_CLK>,
<&gcc GCC_PCIE_0_CFG_AHB_CLK>,
<&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>;
clock-names = "pipe",
"aux",
"cfg",
"bus_master",
"bus_slave",
"slave_q2a";
dma-coherent;
interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0",
"msi1",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7",
"global";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc 0 150 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
interconnects = <&aggre2_noc MASTER_PCIE 0 &mc_virt SLAVE_EBI_CH0 0>,
<&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_0 0>;
interconnect-names = "pcie-mem", "cpu-pcie";
iommu-map = <0x0 &apps_smmu 0x1d80 0x1>,
<0x100 &apps_smmu 0x1d81 0x1>;
phys = <&pcie0_phy>;
phy-names = "pciephy";
power-domains = <&gcc PCIE_0_GDSC>;
resets = <&gcc GCC_PCIE_0_BCR>;
reset-names = "pci";
};
};


@@ -0,0 +1,190 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-sdm845.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SDM845 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-sdm845
reg:
minItems: 4
maxItems: 5
reg-names:
minItems: 4
items:
- const: parf
- const: dbi
- const: elbi
- const: config
- const: mhi
clocks:
minItems: 7
maxItems: 8
clock-names:
minItems: 7
items:
- const: pipe
- const: aux
- const: cfg
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a
- enum: [ ref, tbu ]
- const: tbu
interrupts:
minItems: 8
maxItems: 9
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
resets:
maxItems: 1
reset-names:
items:
- const: pci
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-sdm845.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@1c00000 {
compatible = "qcom,pcie-sdm845";
reg = <0x0 0x01c00000 0x0 0x2000>,
<0x0 0x60000000 0x0 0xf1d>,
<0x0 0x60000f20 0x0 0xa8>,
<0x0 0x60100000 0x0 0x100000>,
<0x0 0x01c07000 0x0 0x1000>;
reg-names = "parf", "dbi", "elbi", "config", "mhi";
ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
<0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0xd00000>;
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
<&gcc GCC_PCIE_0_AUX_CLK>,
<&gcc GCC_PCIE_0_CFG_AHB_CLK>,
<&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_AXI_CLK>,
<&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>,
<&gcc GCC_AGGRE_NOC_PCIE_TBU_CLK>;
clock-names = "pipe",
"aux",
"cfg",
"bus_master",
"bus_slave",
"slave_q2a",
"tbu";
interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0",
"msi1",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7",
"global";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc 0 0 GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc 0 0 GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc 0 0 GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
iommu-map = <0x0 &apps_smmu 0x1c10 0x1>,
<0x100 &apps_smmu 0x1c11 0x1>,
<0x200 &apps_smmu 0x1c12 0x1>,
<0x300 &apps_smmu 0x1c13 0x1>,
<0x400 &apps_smmu 0x1c14 0x1>,
<0x500 &apps_smmu 0x1c15 0x1>,
<0x600 &apps_smmu 0x1c16 0x1>,
<0x700 &apps_smmu 0x1c17 0x1>,
<0x800 &apps_smmu 0x1c18 0x1>,
<0x900 &apps_smmu 0x1c19 0x1>,
<0xa00 &apps_smmu 0x1c1a 0x1>,
<0xb00 &apps_smmu 0x1c1b 0x1>,
<0xc00 &apps_smmu 0x1c1c 0x1>,
<0xd00 &apps_smmu 0x1c1d 0x1>,
<0xe00 &apps_smmu 0x1c1e 0x1>,
<0xf00 &apps_smmu 0x1c1f 0x1>;
power-domains = <&gcc PCIE_0_GDSC>;
phys = <&pcie0_phy>;
phy-names = "pciephy";
resets = <&gcc GCC_PCIE_0_BCR>;
reset-names = "pci";
perst-gpios = <&tlmm 35 GPIO_ACTIVE_LOW>;
wake-gpios = <&tlmm 134 GPIO_ACTIVE_HIGH>;
vddpe-3v3-supply = <&pcie0_3p3v_dual>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};
};


@@ -0,0 +1,172 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-sdx55.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SDX55 PCI Express Root Complex
maintainers:
- Bjorn Andersson <andersson@kernel.org>
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
enum:
- qcom,pcie-sdx55
reg:
minItems: 5
maxItems: 6
reg-names:
minItems: 5
items:
- const: parf
- const: dbi
- const: elbi
- const: atu
- const: config
- const: mhi
clocks:
maxItems: 7
clock-names:
items:
- const: pipe
- const: aux
- const: cfg
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a
- const: sleep
interrupts:
maxItems: 8
interrupt-names:
items:
- const: msi
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: msi8
resets:
maxItems: 1
reset-names:
items:
- const: pci
required:
- power-domains
- resets
- reset-names
allOf:
- $ref: qcom,pcie-common.yaml#
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,gcc-sdx55.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
pcie@1c00000 {
compatible = "qcom,pcie-sdx55";
reg = <0x01c00000 0x3000>,
<0x40000000 0xf1d>,
<0x40000f20 0xc8>,
<0x40001000 0x1000>,
<0x40100000 0x100000>;
reg-names = "parf",
"dbi",
"elbi",
"atu",
"config";
ranges = <0x01000000 0x0 0x00000000 0x40200000 0x0 0x100000>,
<0x02000000 0x0 0x40300000 0x40300000 0x0 0x3fd00000>;
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi",
"msi2",
"msi3",
"msi4",
"msi5",
"msi6",
"msi7",
"msi8";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
<0 0 0 2 &intc GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
<0 0 0 3 &intc GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
<0 0 0 4 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
clocks = <&gcc GCC_PCIE_PIPE_CLK>,
<&gcc GCC_PCIE_AUX_CLK>,
<&gcc GCC_PCIE_CFG_AHB_CLK>,
<&gcc GCC_PCIE_MSTR_AXI_CLK>,
<&gcc GCC_PCIE_SLV_AXI_CLK>,
<&gcc GCC_PCIE_SLV_Q2A_AXI_CLK>,
<&gcc GCC_PCIE_SLEEP_CLK>;
clock-names = "pipe",
"aux",
"cfg",
"bus_master",
"bus_slave",
"slave_q2a",
"sleep";
assigned-clocks = <&gcc GCC_PCIE_AUX_CLK>;
assigned-clock-rates = <19200000>;
iommu-map = <0x0 &apps_smmu 0x0200 0x1>,
<0x100 &apps_smmu 0x0201 0x1>,
<0x200 &apps_smmu 0x0202 0x1>,
<0x300 &apps_smmu 0x0203 0x1>,
<0x400 &apps_smmu 0x0204 0x1>;
power-domains = <&gcc PCIE_GDSC>;
phys = <&pcie_phy>;
phy-names = "pciephy";
resets = <&gcc GCC_PCIE_BCR>;
reset-names = "pci";
perst-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>;
wake-gpios = <&tlmm 53 GPIO_ACTIVE_HIGH>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
bus-range = <0x01 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
};
};


@@ -17,6 +17,7 @@ description:
properties:
compatible:
oneOf:
- const: qcom,pcie-sc8180x
- const: qcom,pcie-sm8150
- items:
- enum:


@@ -16,7 +16,12 @@ description:
properties:
compatible:
const: qcom,pcie-x1e80100
oneOf:
- const: qcom,pcie-x1e80100
- items:
- enum:
- qcom,glymur-pcie
- const: qcom,pcie-x1e80100
reg:
minItems: 6


@@ -1,782 +0,0 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm PCI express root complex
maintainers:
- Bjorn Andersson <bjorn.andersson@linaro.org>
- Manivannan Sadhasivam <mani@kernel.org>
description: |
Qualcomm PCIe root complex controller is based on the Synopsys DesignWare
PCIe IP.
properties:
compatible:
oneOf:
- enum:
- qcom,pcie-apq8064
- qcom,pcie-apq8084
- qcom,pcie-ipq4019
- qcom,pcie-ipq5018
- qcom,pcie-ipq6018
- qcom,pcie-ipq8064
- qcom,pcie-ipq8064-v2
- qcom,pcie-ipq8074
- qcom,pcie-ipq8074-gen3
- qcom,pcie-ipq9574
- qcom,pcie-msm8996
- qcom,pcie-qcs404
- qcom,pcie-sdm845
- qcom,pcie-sdx55
- items:
- enum:
- qcom,pcie-ipq5332
- qcom,pcie-ipq5424
- const: qcom,pcie-ipq9574
- items:
- const: qcom,pcie-msm8998
- const: qcom,pcie-msm8996
reg:
minItems: 4
maxItems: 6
reg-names:
minItems: 4
maxItems: 6
interrupts:
minItems: 1
maxItems: 9
interrupt-names:
minItems: 1
maxItems: 9
iommu-map:
minItems: 1
maxItems: 16
# Common definitions for clocks, clock-names and reset.
# Platform constraints are described later.
clocks:
minItems: 3
maxItems: 13
clock-names:
minItems: 3
maxItems: 13
dma-coherent: true
interconnects:
maxItems: 2
interconnect-names:
items:
- const: pcie-mem
- const: cpu-pcie
resets:
minItems: 1
maxItems: 12
reset-names:
minItems: 1
maxItems: 12
vdda-supply:
description: A phandle to the core analog power supply
vdda_phy-supply:
description: A phandle to the core analog power supply for PHY
vdda_refclk-supply:
description: A phandle to the core analog power supply for IC which generates reference clock
vddpe-3v3-supply:
description: A phandle to the PCIe endpoint power supply
phys:
maxItems: 1
phy-names:
items:
- const: pciephy
power-domains:
maxItems: 1
perst-gpios:
description: GPIO controlled connection to PERST# signal
maxItems: 1
required-opps:
maxItems: 1
wake-gpios:
description: GPIO controlled connection to WAKE# signal
maxItems: 1
required:
- compatible
- reg
- reg-names
- interrupt-map-mask
- interrupt-map
- clocks
- clock-names
anyOf:
- required:
- interrupts
- interrupt-names
- "#interrupt-cells"
- required:
- msi-map
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-apq8064
- qcom,pcie-ipq4019
- qcom,pcie-ipq8064
- qcom,pcie-ipq8064-v2
- qcom,pcie-ipq8074
- qcom,pcie-qcs404
then:
properties:
reg:
minItems: 4
maxItems: 4
reg-names:
items:
- const: dbi # DesignWare PCIe registers
- const: elbi # External local bus interface registers
- const: parf # Qualcomm specific registers
- const: config # PCIe configuration space
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq5018
- qcom,pcie-ipq6018
- qcom,pcie-ipq8074-gen3
- qcom,pcie-ipq9574
then:
properties:
reg:
minItems: 5
maxItems: 6
reg-names:
minItems: 5
items:
- const: dbi # DesignWare PCIe registers
- const: elbi # External local bus interface registers
- const: atu # ATU address space
- const: parf # Qualcomm specific registers
- const: config # PCIe configuration space
- const: mhi # MHI registers
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-apq8084
- qcom,pcie-msm8996
- qcom,pcie-sdm845
then:
properties:
reg:
minItems: 4
maxItems: 5
reg-names:
minItems: 4
items:
- const: parf # Qualcomm specific registers
- const: dbi # DesignWare PCIe registers
- const: elbi # External local bus interface registers
- const: config # PCIe configuration space
- const: mhi # MHI registers
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-sdx55
then:
properties:
reg:
minItems: 5
maxItems: 6
reg-names:
minItems: 5
items:
- const: parf # Qualcomm specific registers
- const: dbi # DesignWare PCIe registers
- const: elbi # External local bus interface registers
- const: atu # ATU address space
- const: config # PCIe configuration space
- const: mhi # MHI registers
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-apq8064
- qcom,pcie-ipq8064
- qcom,pcie-ipq8064v2
then:
properties:
clocks:
minItems: 3
maxItems: 5
clock-names:
minItems: 3
items:
- const: core # Clocks the pcie hw block
- const: iface # Configuration AHB clock
- const: phy # Clocks the pcie PHY block
- const: aux # Clocks the pcie AUX block, not on apq8064
- const: ref # Clocks the pcie ref block, not on apq8064
resets:
minItems: 5
maxItems: 6
reset-names:
minItems: 5
items:
- const: axi # AXI reset
- const: ahb # AHB reset
- const: por # POR reset
- const: pci # PCI reset
- const: phy # PHY reset
- const: ext # EXT reset, not on apq8064
required:
- vdda-supply
- vdda_phy-supply
- vdda_refclk-supply
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-apq8084
then:
properties:
clocks:
minItems: 4
maxItems: 4
clock-names:
items:
- const: iface # Configuration AHB clock
- const: master_bus # Master AXI clock
- const: slave_bus # Slave AXI clock
- const: aux # Auxiliary (AUX) clock
resets:
maxItems: 1
reset-names:
items:
- const: core # Core reset
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq4019
then:
properties:
clocks:
minItems: 3
maxItems: 3
clock-names:
items:
- const: aux # Auxiliary (AUX) clock
- const: master_bus # Master AXI clock
- const: slave_bus # Slave AXI clock
resets:
minItems: 12
maxItems: 12
reset-names:
items:
- const: axi_m # AXI master reset
- const: axi_s # AXI slave reset
- const: pipe # PIPE reset
- const: axi_m_vmid # VMID reset
- const: axi_s_xpu # XPU reset
- const: parf # PARF reset
- const: phy # PHY reset
- const: axi_m_sticky # AXI sticky reset
- const: pipe_sticky # PIPE sticky reset
- const: pwr # PWR reset
- const: ahb # AHB reset
- const: phy_ahb # PHY AHB reset
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq5018
then:
properties:
clocks:
minItems: 6
maxItems: 6
clock-names:
items:
- const: iface # PCIe to SysNOC BIU clock
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: ahb # AHB clock
- const: aux # Auxiliary clock
- const: axi_bridge # AXI bridge clock
resets:
minItems: 8
maxItems: 8
reset-names:
items:
- const: pipe # PIPE reset
- const: sleep # Sleep reset
- const: sticky # Core sticky reset
- const: axi_m # AXI master reset
- const: axi_s # AXI slave reset
- const: ahb # AHB reset
- const: axi_m_sticky # AXI master sticky reset
- const: axi_s_sticky # AXI slave sticky reset
interrupts:
minItems: 9
maxItems: 9
interrupt-names:
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-msm8996
then:
properties:
clocks:
minItems: 5
maxItems: 5
clock-names:
items:
- const: pipe # Pipe Clock driving internal logic
- const: aux # Auxiliary (AUX) clock
- const: cfg # Configuration clock
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
resets: false
reset-names: false
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq8074
then:
properties:
clocks:
minItems: 5
maxItems: 5
clock-names:
items:
- const: iface # PCIe to SysNOC BIU clock
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: ahb # AHB clock
- const: aux # Auxiliary clock
resets:
minItems: 7
maxItems: 7
reset-names:
items:
- const: pipe # PIPE reset
- const: sleep # Sleep reset
- const: sticky # Core Sticky reset
- const: axi_m # AXI Master reset
- const: axi_s # AXI Slave reset
- const: ahb # AHB Reset
- const: axi_m_sticky # AXI Master Sticky reset
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq6018
- qcom,pcie-ipq8074-gen3
then:
properties:
clocks:
minItems: 5
maxItems: 5
clock-names:
items:
- const: iface # PCIe to SysNOC BIU clock
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: axi_bridge # AXI bridge clock
- const: rchng
resets:
minItems: 8
maxItems: 8
reset-names:
items:
- const: pipe # PIPE reset
- const: sleep # Sleep reset
- const: sticky # Core Sticky reset
- const: axi_m # AXI Master reset
- const: axi_s # AXI Slave reset
- const: ahb # AHB Reset
- const: axi_m_sticky # AXI Master Sticky reset
- const: axi_s_sticky # AXI Slave Sticky reset
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq9574
then:
properties:
clocks:
minItems: 6
maxItems: 6
clock-names:
items:
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: axi_bridge
- const: rchng
- const: ahb
- const: aux
resets:
minItems: 8
maxItems: 8
reset-names:
items:
- const: pipe # PIPE reset
- const: sticky # Core Sticky reset
- const: axi_s_sticky # AXI Slave Sticky reset
- const: axi_s # AXI Slave reset
- const: axi_m_sticky # AXI Master Sticky reset
- const: axi_m # AXI Master reset
- const: aux # AUX Reset
- const: ahb # AHB Reset
interrupts:
minItems: 8
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-qcs404
then:
properties:
clocks:
minItems: 4
maxItems: 4
clock-names:
items:
- const: iface # AHB clock
- const: aux # Auxiliary clock
- const: master_bus # AXI Master clock
- const: slave_bus # AXI Slave clock
resets:
minItems: 6
maxItems: 6
reset-names:
items:
- const: axi_m # AXI Master reset
- const: axi_s # AXI Slave reset
- const: axi_m_sticky # AXI Master Sticky reset
- const: pipe_sticky # PIPE sticky reset
- const: pwr # PWR reset
- const: ahb # AHB reset
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-sdm845
then:
oneOf:
# Unfortunately the "optional" ref clock is used in the middle of the list
- properties:
clocks:
minItems: 8
maxItems: 8
clock-names:
items:
- const: pipe # PIPE clock
- const: aux # Auxiliary clock
- const: cfg # Configuration clock
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a # Slave Q2A clock
- const: ref # REFERENCE clock
- const: tbu # PCIe TBU clock
- properties:
clocks:
minItems: 7
maxItems: 7
clock-names:
items:
- const: pipe # PIPE clock
- const: aux # Auxiliary clock
- const: cfg # Configuration clock
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a # Slave Q2A clock
- const: tbu # PCIe TBU clock
properties:
resets:
maxItems: 1
reset-names:
items:
- const: pci # PCIe core reset
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-sdx55
then:
properties:
clocks:
minItems: 7
maxItems: 7
clock-names:
items:
- const: pipe # PIPE clock
- const: aux # Auxiliary clock
- const: cfg # Configuration clock
- const: bus_master # Master AXI clock
- const: bus_slave # Slave AXI clock
- const: slave_q2a # Slave Q2A clock
- const: sleep # PCIe Sleep clock
resets:
maxItems: 1
reset-names:
items:
- const: pci # PCIe core reset
- if:
not:
properties:
compatible:
contains:
enum:
- qcom,pcie-apq8064
- qcom,pcie-ipq4019
- qcom,pcie-ipq5018
- qcom,pcie-ipq8064
- qcom,pcie-ipq8064v2
- qcom,pcie-ipq8074
- qcom,pcie-ipq8074-gen3
- qcom,pcie-ipq9574
- qcom,pcie-qcs404
then:
required:
- power-domains
- if:
not:
properties:
compatible:
contains:
enum:
- qcom,pcie-msm8996
then:
required:
- resets
- reset-names
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq6018
- qcom,pcie-ipq8074
- qcom,pcie-ipq8074-gen3
- qcom,pcie-msm8996
- qcom,pcie-msm8998
- qcom,pcie-sdm845
then:
oneOf:
- properties:
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
- properties:
interrupts:
minItems: 8
maxItems: 9
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- const: global
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-apq8064
- qcom,pcie-apq8084
- qcom,pcie-ipq4019
- qcom,pcie-ipq8064
- qcom,pcie-ipq8064v2
- qcom,pcie-qcs404
then:
properties:
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
pcie@1b500000 {
compatible = "qcom,pcie-ipq8064";
reg = <0x1b500000 0x1000>,
<0x1b502000 0x80>,
<0x1b600000 0x100>,
<0x0ff00000 0x100000>;
reg-names = "dbi", "elbi", "parf", "config";
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x81000000 0 0 0x0fe00000 0 0x00100000>,
<0x82000000 0 0 0x08000000 0 0x07e00000>;
interrupts = <GIC_SPI 238 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 36 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc 0 37 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc 0 38 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc 0 39 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&gcc 41>,
<&gcc 43>,
<&gcc 44>,
<&gcc 42>,
<&gcc 248>;
clock-names = "core", "iface", "phy", "aux", "ref";
resets = <&gcc 27>,
<&gcc 26>,
<&gcc 25>,
<&gcc 24>,
<&gcc 23>,
<&gcc 22>;
reset-names = "axi", "ahb", "por", "pci", "phy", "ext";
pinctrl-0 = <&pcie_pins_default>;
pinctrl-names = "default";
vdda-supply = <&pm8921_s3>;
vdda_phy-supply = <&pm8921_lvs6>;
vdda_refclk-supply = <&ext_3p3v>;
};
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/gpio/gpio.h>
pcie@fc520000 {
compatible = "qcom,pcie-apq8084";
reg = <0xfc520000 0x2000>,
<0xff000000 0x1000>,
<0xff001000 0x1000>,
<0xff002000 0x2000>;
reg-names = "parf", "dbi", "elbi", "config";
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x81000000 0 0 0xff200000 0 0x00100000>,
<0x82000000 0 0x00300000 0xff300000 0 0x00d00000>;
interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 244 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc 0 245 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc 0 247 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc 0 248 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&gcc 324>,
<&gcc 325>,
<&gcc 327>,
<&gcc 323>;
clock-names = "iface", "master_bus", "slave_bus", "aux";
resets = <&gcc 81>;
reset-names = "core";
power-domains = <&gcc 1>;
vdda-supply = <&pma8084_l3>;
phys = <&pciephy0>;
phy-names = "pciephy";
perst-gpios = <&tlmm 70 GPIO_ACTIVE_LOW>;
pinctrl-0 = <&pcie0_pins_default>;
pinctrl-names = "default";
};
...


@@ -0,0 +1,110 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,sa8255p-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm firmware managed PCIe Endpoint Controller
description:
Qualcomm SA8255p SoC PCIe endpoint controller is based on the Synopsys
DesignWare PCIe IP which is managed by firmware.
maintainers:
- Manivannan Sadhasivam <mani@kernel.org>
properties:
compatible:
const: qcom,sa8255p-pcie-ep
reg:
items:
- description: Qualcomm-specific PARF configuration registers
- description: DesignWare PCIe registers
- description: External local bus interface registers
- description: Address Translation Unit (ATU) registers
- description: Memory region used to map remote RC address space
- description: BAR memory region
- description: DMA register space
reg-names:
items:
- const: parf
- const: dbi
- const: elbi
- const: atu
- const: addr_space
- const: mmio
- const: dma
interrupts:
items:
- description: PCIe Global interrupt
- description: PCIe Doorbell interrupt
- description: DMA interrupt
interrupt-names:
items:
- const: global
- const: doorbell
- const: dma
iommus:
maxItems: 1
reset-gpios:
description: GPIO used as PERST# input signal
maxItems: 1
wake-gpios:
description: GPIO used as WAKE# output signal
maxItems: 1
power-domains:
maxItems: 1
dma-coherent: true
num-lanes:
default: 2
required:
- compatible
- reg
- reg-names
- interrupts
- interrupt-names
- reset-gpios
- power-domains
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie1_ep: pcie-ep@1c10000 {
compatible = "qcom,sa8255p-pcie-ep";
reg = <0x0 0x01c10000 0x0 0x3000>,
<0x0 0x60000000 0x0 0xf20>,
<0x0 0x60000f20 0x0 0xa8>,
<0x0 0x60001000 0x0 0x4000>,
<0x0 0x60200000 0x0 0x100000>,
<0x0 0x01c13000 0x0 0x1000>,
<0x0 0x60005000 0x0 0x2000>;
reg-names = "parf", "dbi", "elbi", "atu", "addr_space", "mmio", "dma";
interrupts = <GIC_SPI 518 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 474 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "global", "doorbell", "dma";
reset-gpios = <&tlmm 4 GPIO_ACTIVE_LOW>;
wake-gpios = <&tlmm 5 GPIO_ACTIVE_LOW>;
dma-coherent;
iommus = <&pcie_smmu 0x80 0x7f>;
power-domains = <&scmi6_pd 1>;
num-lanes = <4>;
};
};


@@ -105,6 +105,12 @@ properties:
define it with this name (for instance pipe, core and aux can
be connected to a single source of the periodic signal).
const: ref
- description:
Some dwc wrappers (like i.MX95 PCIes) have two reference clock
inputs, one from an internal PLL, the other from an off-chip crystal
oscillator. If present, 'extref' refers to a reference clock from
an external oscillator.
const: extref
- description:
Clock for the PHY registers interface. Originally this is
a PHY-viewport-based interface, but some platforms may have


@@ -51,7 +51,7 @@ properties:
phy-names:
const: pcie-phy
interrupt-controller:
legacy-interrupt-controller:
type: object
additionalProperties: false
@@ -111,7 +111,7 @@ examples:
<0 0 0 3 &pcie_intc 2>,
<0 0 0 4 &pcie_intc 3>;
pcie_intc: interrupt-controller {
pcie_intc: legacy-interrupt-controller {
#address-cells = <0>;
interrupt-controller;
#interrupt-cells = <1>;


@@ -0,0 +1,74 @@
.. SPDX-License-Identifier: GPL-2.0
===========================
Subsystem Trace Points: PCI
===========================
Overview
========
The PCI tracing system provides tracepoints to monitor critical hardware events
that can impact system performance and reliability. These events normally show
up here:
/sys/kernel/tracing/events/pci
See include/trace/events/pci.h for the event definitions.
Available Tracepoints
=====================
pci_hp_event
------------
Monitors PCI hotplug events including card insertion/removal and link
state changes.
::
pci_hp_event "%s slot:%s, event:%s\n"
**Event Types**:
* ``LINK_UP`` - PCIe link established
* ``LINK_DOWN`` - PCIe link lost
* ``CARD_PRESENT`` - Card detected in slot
* ``CARD_NOT_PRESENT`` - Card removed from slot
**Example Usage**::
# Enable the tracepoint
echo 1 > /sys/kernel/debug/tracing/events/pci/pci_hp_event/enable
# Monitor events (the following output is generated when a device is hotplugged)
cat /sys/kernel/debug/tracing/trace_pipe
irq/51-pciehp-88 [001] ..... 1311.177459: pci_hp_event: 0000:00:02.0 slot:10, event:CARD_PRESENT
irq/51-pciehp-88 [001] ..... 1311.177566: pci_hp_event: 0000:00:02.0 slot:10, event:LINK_UP
pcie_link_event
---------------
Monitors PCIe link speed changes and provides detailed link status information.
::
pcie_link_event "%s type:%d, reason:%d, cur_bus_speed:%d, max_bus_speed:%d, width:%u, flit_mode:%u, status:%s\n"
**Parameters**:
* ``type`` - PCIe device type (4=Root Port, etc.)
* ``reason`` - Reason for link change:
- ``0`` - Link retrain
- ``1`` - Bus enumeration
- ``2`` - Bandwidth notification enable
- ``3`` - Bandwidth notification IRQ
- ``4`` - Hotplug event
**Example Usage**::
# Enable the tracepoint
echo 1 > /sys/kernel/debug/tracing/events/pci/pcie_link_event/enable
# Monitor events (the following output is generated when a device is hotplugged)
cat /sys/kernel/debug/tracing/trace_pipe
irq/51-pciehp-88 [001] ..... 381.545386: pcie_link_event: 0000:00:02.0 type:4, reason:4, cur_bus_speed:20, max_bus_speed:23, width:1, flit_mode:0, status:DLLLA


@@ -54,6 +54,7 @@ applications.
events-power
events-nmi
events-msr
events-pci
boottime-trace
histogram
histogram-design


@@ -3922,6 +3922,14 @@ S: Maintained
F: Documentation/devicetree/bindings/media/aspeed,video-engine.yaml
F: drivers/media/platform/aspeed/
ASPEED PCIE CONTROLLER DRIVER
M: Jacky Chou <jacky_chou@aspeedtech.com>
L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers)
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/aspeed,ast2600-pcie.yaml
F: drivers/pci/controller/pcie-aspeed.c
ASUS EC HARDWARE MONITOR DRIVER
M: Eugene Shalygin <eugene.shalygin@gmail.com>
L: linux-hwmon@vger.kernel.org
@@ -20503,6 +20511,7 @@ L: linux-pci@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
F: Documentation/devicetree/bindings/pci/qcom,sa8255p-pcie-ep.yaml
F: drivers/pci/controller/dwc/pcie-qcom-common.c
F: drivers/pci/controller/dwc/pcie-qcom-ep.c


@@ -336,6 +336,7 @@ void tegra_cpuidle_pcie_irqs_in_use(void)
pr_info("disabling CC6 state, since PCIe IRQs are in use\n");
tegra_cpuidle_disable_state(TEGRA_CC6);
}
EXPORT_SYMBOL_GPL(tegra_cpuidle_pcie_irqs_in_use);
static void tegra_cpuidle_setup_tegra114_c7_state(void)
{


@@ -39,6 +39,8 @@
#define COMMAND_COPY BIT(5)
#define COMMAND_ENABLE_DOORBELL BIT(6)
#define COMMAND_DISABLE_DOORBELL BIT(7)
#define COMMAND_BAR_SUBRANGE_SETUP BIT(8)
#define COMMAND_BAR_SUBRANGE_CLEAR BIT(9)
#define PCI_ENDPOINT_TEST_STATUS 0x8
#define STATUS_READ_SUCCESS BIT(0)
@@ -55,6 +57,10 @@
#define STATUS_DOORBELL_ENABLE_FAIL BIT(11)
#define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12)
#define STATUS_DOORBELL_DISABLE_FAIL BIT(13)
#define STATUS_BAR_SUBRANGE_SETUP_SUCCESS BIT(14)
#define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15)
#define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16)
#define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17)
#define PCI_ENDPOINT_TEST_LOWER_SRC_ADDR 0x0c
#define PCI_ENDPOINT_TEST_UPPER_SRC_ADDR 0x10
@@ -77,6 +83,7 @@
#define CAP_MSI BIT(1)
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
#define CAP_SUBRANGE_MAPPING BIT(4)
#define PCI_ENDPOINT_TEST_DB_BAR 0x34
#define PCI_ENDPOINT_TEST_DB_OFFSET 0x38
@@ -100,6 +107,8 @@
#define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588
#define PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB 2
static DEFINE_IDA(pci_endpoint_test_ida);
#define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \
@@ -414,6 +423,193 @@ static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
return 0;
}
static u8 pci_endpoint_test_subrange_sig_byte(enum pci_barno barno,
unsigned int subno)
{
return 0x50 + (barno * 8) + subno;
}
static u8 pci_endpoint_test_subrange_test_byte(enum pci_barno barno,
unsigned int subno)
{
return 0xa0 + (barno * 8) + subno;
}
static int pci_endpoint_test_bar_subrange_cmd(struct pci_endpoint_test *test,
enum pci_barno barno, u32 command,
u32 ok_bit, u32 fail_bit)
{
struct pci_dev *pdev = test->pdev;
struct device *dev = &pdev->dev;
int irq_type = test->irq_type;
u32 status;
if (irq_type < PCITEST_IRQ_TYPE_INTX ||
irq_type > PCITEST_IRQ_TYPE_MSIX) {
dev_err(dev, "Invalid IRQ type\n");
return -EINVAL;
}
reinit_completion(&test->irq_raised);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_STATUS, 0);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
/* Reuse SIZE as a command parameter: bar number. */
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, barno);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND, command);
if (!wait_for_completion_timeout(&test->irq_raised,
msecs_to_jiffies(1000)))
return -ETIMEDOUT;
status = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
if (status & fail_bit)
return -EIO;
if (!(status & ok_bit))
return -EIO;
return 0;
}
static int pci_endpoint_test_bar_subrange_setup(struct pci_endpoint_test *test,
enum pci_barno barno)
{
return pci_endpoint_test_bar_subrange_cmd(test, barno,
COMMAND_BAR_SUBRANGE_SETUP,
STATUS_BAR_SUBRANGE_SETUP_SUCCESS,
STATUS_BAR_SUBRANGE_SETUP_FAIL);
}
static int pci_endpoint_test_bar_subrange_clear(struct pci_endpoint_test *test,
enum pci_barno barno)
{
return pci_endpoint_test_bar_subrange_cmd(test, barno,
COMMAND_BAR_SUBRANGE_CLEAR,
STATUS_BAR_SUBRANGE_CLEAR_SUCCESS,
STATUS_BAR_SUBRANGE_CLEAR_FAIL);
}
static int pci_endpoint_test_bar_subrange(struct pci_endpoint_test *test,
enum pci_barno barno)
{
u32 nsub = PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB;
struct device *dev = &test->pdev->dev;
size_t sub_size, buf_size;
resource_size_t bar_size;
void __iomem *bar_addr;
void *read_buf = NULL;
int ret, clear_ret;
size_t off, chunk;
u32 i, exp, val;
u8 pattern;
if (!(test->ep_caps & CAP_SUBRANGE_MAPPING))
return -EOPNOTSUPP;
/*
* The test register BAR is not safe to reprogram and write/read
* over its full size. BAR_TEST already special-cases it to a tiny
* range. For subrange mapping tests, let's simply skip it.
*/
if (barno == test->test_reg_bar)
return -EBUSY;
bar_size = pci_resource_len(test->pdev, barno);
if (!bar_size)
return -ENODATA;
bar_addr = test->bar[barno];
if (!bar_addr)
return -ENOMEM;
ret = pci_endpoint_test_bar_subrange_setup(test, barno);
if (ret)
return ret;
if (bar_size % nsub || bar_size / nsub > SIZE_MAX) {
ret = -EINVAL;
goto out_clear;
}
sub_size = bar_size / nsub;
if (sub_size < sizeof(u32)) {
ret = -ENOSPC;
goto out_clear;
}
/* Limit the temporary buffer size */
buf_size = min_t(size_t, sub_size, SZ_1M);
read_buf = kmalloc(buf_size, GFP_KERNEL);
if (!read_buf) {
ret = -ENOMEM;
goto out_clear;
}
/*
* Step 1: verify EP-provided signature per subrange. This detects
* whether the EP actually applied the submap order.
*/
for (i = 0; i < nsub; i++) {
exp = (u32)pci_endpoint_test_subrange_sig_byte(barno, i) *
0x01010101U;
val = ioread32(bar_addr + (i * sub_size));
if (val != exp) {
dev_err(dev,
"BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n",
barno, i, (size_t)i * sub_size, exp, val);
ret = -EIO;
goto out_clear;
}
val = ioread32(bar_addr + (i * sub_size) + sub_size - sizeof(u32));
if (val != exp) {
dev_err(dev,
"BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n",
barno, i,
((size_t)i * sub_size) + sub_size - sizeof(u32),
exp, val);
ret = -EIO;
goto out_clear;
}
}
/* Step 2: write unique pattern per subrange (write all first). */
for (i = 0; i < nsub; i++) {
pattern = pci_endpoint_test_subrange_test_byte(barno, i);
memset_io(bar_addr + (i * sub_size), pattern, sub_size);
}
/* Step 3: read back and verify (read all after all writes). */
for (i = 0; i < nsub; i++) {
pattern = pci_endpoint_test_subrange_test_byte(barno, i);
for (off = 0; off < sub_size; off += chunk) {
void *bad;
chunk = min_t(size_t, buf_size, sub_size - off);
memcpy_fromio(read_buf, bar_addr + (i * sub_size) + off,
chunk);
bad = memchr_inv(read_buf, pattern, chunk);
if (bad) {
size_t bad_off = (u8 *)bad - (u8 *)read_buf;
dev_err(dev,
"BAR%d subrange%u data mismatch @%#zx (pattern %#02x)\n",
barno, i, (size_t)i * sub_size + off + bad_off,
pattern);
ret = -EIO;
goto out_clear;
}
}
}
out_clear:
kfree(read_buf);
clear_ret = pci_endpoint_test_bar_subrange_clear(test, barno);
return ret ?: clear_ret;
}
static int pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
{
u32 val;
@@ -936,12 +1132,17 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
switch (cmd) {
case PCITEST_BAR:
case PCITEST_BAR_SUBRANGE:
bar = arg;
if (bar <= NO_BAR || bar > BAR_5)
goto ret;
if (is_am654_pci_dev(pdev) && bar == BAR_0)
goto ret;
ret = pci_endpoint_test_bar(test, bar);
if (cmd == PCITEST_BAR)
ret = pci_endpoint_test_bar(test, bar);
else
ret = pci_endpoint_test_bar_subrange(test, bar);
break;
case PCITEST_BARS:
ret = pci_endpoint_test_bars(test);


@@ -39,6 +39,7 @@ obj-$(CONFIG_PCI_TSM) += tsm.o
obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o
obj-$(CONFIG_PCI_NPEM) += npem.o
obj-$(CONFIG_PCIE_TPH) += tph.o
obj-$(CONFIG_CARDBUS) += setup-cardbus.o
# Endpoint library must be initialized before its users
obj-$(CONFIG_PCI_ENDPOINT) += endpoint/
@@ -47,3 +48,6 @@ obj-y += controller/
obj-y += switch/
subdir-ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
CFLAGS_trace.o := -I$(src)
obj-$(CONFIG_TRACING) += trace.o


@@ -15,6 +15,7 @@
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/proc_fs.h>
#include <linux/slab.h>
@@ -344,7 +345,6 @@ void __weak pcibios_bus_add_device(struct pci_dev *pdev) { }
void pci_bus_add_device(struct pci_dev *dev)
{
struct device_node *dn = dev->dev.of_node;
struct platform_device *pdev;
/*
* Can not put in pci_device_add yet because resources
@@ -362,22 +362,11 @@ void pci_bus_add_device(struct pci_dev *dev)
pci_save_state(dev);
/*
* If the PCI device is associated with a pwrctrl device with a
* power supply, create a device link between the PCI device and
* pwrctrl device. This ensures that pwrctrl drivers are probed
* before PCI client drivers.
* Enable runtime PM, which potentially allows the device to
* suspend immediately, only after the PCI state has been
* configured completely.
*/
pdev = of_find_device_by_node(dn);
if (pdev) {
if (of_pci_supply_present(dn)) {
if (!device_link_add(&dev->dev, &pdev->dev,
DL_FLAG_AUTOREMOVE_CONSUMER)) {
pci_err(dev, "failed to add device link to power control device %s\n",
pdev->name);
}
}
put_device(&pdev->dev);
}
pm_runtime_enable(&dev->dev);
if (!dn || of_device_is_available(dn))
pci_dev_allow_binding(dev);


@@ -58,6 +58,22 @@ config PCI_VERSATILE
bool "ARM Versatile PB PCI controller"
depends on ARCH_VERSATILE || COMPILE_TEST
config PCIE_ASPEED
bool "ASPEED PCIe controller"
depends on ARCH_ASPEED || COMPILE_TEST
depends on OF
depends on PCI_MSI
select IRQ_MSI_LIB
help
Enable this option to support the PCIe controller found on ASPEED
SoCs.
This driver provides initialization and management for PCIe
Root Complex functionality, including INTx and MSI support.
Select Y if your platform uses an ASPEED SoC and requires PCIe
connectivity.
config PCIE_BRCMSTB
tristate "Broadcom Brcmstb PCIe controller"
depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCMBCA || \
@@ -232,7 +248,7 @@ config PCI_HYPERV_INTERFACE
driver.
config PCI_TEGRA
bool "NVIDIA Tegra PCIe controller"
tristate "NVIDIA Tegra PCIe controller"
depends on ARCH_TEGRA || COMPILE_TEST
depends on PCI_MSI
select IRQ_MSI_LIB
@@ -243,6 +259,7 @@ config PCI_TEGRA
config PCIE_RCAR_HOST
bool "Renesas R-Car PCIe controller (host mode)"
depends on ARCH_RENESAS || COMPILE_TEST
depends on OF
depends on PCI_MSI
select IRQ_MSI_LIB
help


@@ -40,6 +40,7 @@ obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o
obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
obj-$(CONFIG_PCIE_ASPEED) += pcie-aspeed.o
# pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
obj-y += dwc/


@@ -620,9 +620,11 @@ static int j721e_pcie_probe(struct platform_device *pdev)
gpiod_set_value_cansleep(pcie->reset_gpio, 1);
}
ret = cdns_pcie_host_setup(rc);
if (ret < 0)
goto err_pcie_setup;
if (IS_ENABLED(CONFIG_PCI_J721E_HOST)) {
ret = cdns_pcie_host_setup(rc);
if (ret < 0)
goto err_pcie_setup;
}
break;
case PCI_MODE_EP:
@@ -632,9 +634,11 @@
goto err_get_sync;
}
ret = cdns_pcie_ep_setup(ep);
if (ret < 0)
goto err_pcie_setup;
if (IS_ENABLED(CONFIG_PCI_J721E_EP)) {
ret = cdns_pcie_ep_setup(ep);
if (ret < 0)
goto err_pcie_setup;
}
break;
}
@@ -659,10 +663,11 @@ static void j721e_pcie_remove(struct platform_device *pdev)
struct cdns_pcie_ep *ep;
struct cdns_pcie_rc *rc;
if (pcie->mode == PCI_MODE_RC) {
if (IS_ENABLED(CONFIG_PCI_J721E_HOST) &&
pcie->mode == PCI_MODE_RC) {
rc = container_of(cdns_pcie, struct cdns_pcie_rc, pcie);
cdns_pcie_host_disable(rc);
} else {
} else if (IS_ENABLED(CONFIG_PCI_J721E_EP)) {
ep = container_of(cdns_pcie, struct cdns_pcie_ep, pcie);
cdns_pcie_ep_disable(ep);
}
@@ -728,10 +733,12 @@ static int j721e_pcie_resume_noirq(struct device *dev)
gpiod_set_value_cansleep(pcie->reset_gpio, 1);
}
ret = cdns_pcie_host_link_setup(rc);
if (ret < 0) {
clk_disable_unprepare(pcie->refclk);
return ret;
if (IS_ENABLED(CONFIG_PCI_J721E_HOST)) {
ret = cdns_pcie_host_link_setup(rc);
if (ret < 0) {
clk_disable_unprepare(pcie->refclk);
return ret;
}
}
/*
@@ -741,10 +748,12 @@
for (enum cdns_pcie_rp_bar bar = RP_BAR0; bar <= RP_NO_BAR; bar++)
rc->avail_ib_bar[bar] = true;
ret = cdns_pcie_host_init(rc);
if (ret) {
clk_disable_unprepare(pcie->refclk);
return ret;
if (IS_ENABLED(CONFIG_PCI_J721E_HOST)) {
ret = cdns_pcie_host_init(rc);
if (ret) {
clk_disable_unprepare(pcie->refclk);
return ret;
}
}
}


@@ -173,11 +173,21 @@ int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
const struct list_head *b)
{
struct resource_entry *entry1, *entry2;
u64 size1, size2;
entry1 = container_of(a, struct resource_entry, node);
entry2 = container_of(b, struct resource_entry, node);
return resource_size(entry2->res) - resource_size(entry1->res);
size1 = resource_size(entry1->res);
size2 = resource_size(entry2->res);
if (size1 > size2)
return -1;
if (size1 < size2)
return 1;
return 0;
}
EXPORT_SYMBOL_GPL(cdns_pcie_host_dma_ranges_cmp);


@@ -13,13 +13,13 @@
u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap)
{
return PCI_FIND_NEXT_CAP(cdns_pcie_read_cfg, PCI_CAPABILITY_LIST,
cap, pcie);
cap, NULL, pcie);
}
EXPORT_SYMBOL_GPL(cdns_pcie_find_capability);
u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap)
{
return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, pcie);
return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, NULL, pcie);
}
EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability);


@@ -228,7 +228,7 @@ config PCIE_TEGRA194
config PCIE_TEGRA194_HOST
tristate "NVIDIA Tegra194 (and later) PCIe controller (host mode)"
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
depends on (ARCH_TEGRA && ARM64) || COMPILE_TEST
depends on PCI_MSI
select PCIE_DW_HOST
select PHY_TEGRA194_P2U
@@ -243,7 +243,7 @@ config PCIE_TEGRA194_HOST
config PCIE_TEGRA194_EP
tristate "NVIDIA Tegra194 (and later) PCIe controller (endpoint mode)"
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
depends on (ARCH_TEGRA && ARM64) || COMPILE_TEST
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PHY_TEGRA194_P2U


@@ -424,6 +424,7 @@ static int dra7xx_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features dra7xx_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
};


@@ -52,6 +52,8 @@
#define IMX95_PCIE_REF_CLKEN BIT(23)
#define IMX95_PCIE_PHY_CR_PARA_SEL BIT(9)
#define IMX95_PCIE_SS_RW_REG_1 0xf4
#define IMX95_PCIE_CLKREQ_OVERRIDE_EN BIT(8)
#define IMX95_PCIE_CLKREQ_OVERRIDE_VAL BIT(9)
#define IMX95_PCIE_SYS_AUX_PWR_DET BIT(31)
#define IMX95_PE0_GEN_CTRL_1 0x1050
@@ -114,6 +116,7 @@ enum imx_pcie_variants {
#define IMX_PCIE_FLAG_BROKEN_SUSPEND BIT(9)
#define IMX_PCIE_FLAG_HAS_LUT BIT(10)
#define IMX_PCIE_FLAG_8GT_ECN_ERR051586 BIT(11)
#define IMX_PCIE_FLAG_SKIP_L23_READY BIT(12)
#define imx_check_flag(pci, val) (pci->drvdata->flags & val)
@@ -136,6 +139,7 @@ struct imx_pcie_drvdata {
int (*enable_ref_clk)(struct imx_pcie *pcie, bool enable);
int (*core_reset)(struct imx_pcie *pcie, bool assert);
int (*wait_pll_lock)(struct imx_pcie *pcie);
void (*clr_clkreq_override)(struct imx_pcie *pcie);
const struct dw_pcie_host_ops *ops;
};
@@ -149,6 +153,8 @@ struct imx_pcie {
struct gpio_desc *reset_gpiod;
struct clk_bulk_data *clks;
int num_clks;
bool supports_clkreq;
bool enable_ext_refclk;
struct regmap *iomuxc_gpr;
u16 msi_ctrl;
u32 controller_id;
@@ -241,6 +247,8 @@ static unsigned int imx_pcie_grp_offset(const struct imx_pcie *imx_pcie)
static int imx95_pcie_init_phy(struct imx_pcie *imx_pcie)
{
bool ext = imx_pcie->enable_ext_refclk;
/*
* ERR051624: The Controller Without Vaux Cannot Exit L23 Ready
* Through Beacon or PERST# De-assertion
@@ -259,13 +267,12 @@ static int imx95_pcie_init_phy(struct imx_pcie *imx_pcie)
IMX95_PCIE_PHY_CR_PARA_SEL,
IMX95_PCIE_PHY_CR_PARA_SEL);
-regmap_update_bits(imx_pcie->iomuxc_gpr,
-IMX95_PCIE_PHY_GEN_CTRL,
-IMX95_PCIE_REF_USE_PAD, 0);
-regmap_update_bits(imx_pcie->iomuxc_gpr,
-IMX95_PCIE_SS_RW_REG_0,
+regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_PHY_GEN_CTRL,
+ext ? IMX95_PCIE_REF_USE_PAD : 0,
+IMX95_PCIE_REF_USE_PAD);
+regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_0,
IMX95_PCIE_REF_CLKEN,
-IMX95_PCIE_REF_CLKEN);
+ext ? 0 : IMX95_PCIE_REF_CLKEN);
return 0;
}
@@ -685,7 +692,7 @@ static int imx6q_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
return 0;
}
-static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
+static void imx8mm_pcie_clkreq_override(struct imx_pcie *imx_pcie, bool enable)
{
int offset = imx_pcie_grp_offset(imx_pcie);
@@ -695,6 +702,11 @@ static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN,
enable ? IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN : 0);
}
static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
{
imx8mm_pcie_clkreq_override(imx_pcie, enable);
return 0;
}
@@ -706,6 +718,32 @@ static int imx7d_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
return 0;
}
static void imx95_pcie_clkreq_override(struct imx_pcie *imx_pcie, bool enable)
{
regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_1,
IMX95_PCIE_CLKREQ_OVERRIDE_EN,
enable ? IMX95_PCIE_CLKREQ_OVERRIDE_EN : 0);
regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_1,
IMX95_PCIE_CLKREQ_OVERRIDE_VAL,
enable ? IMX95_PCIE_CLKREQ_OVERRIDE_VAL : 0);
}
static int imx95_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
{
imx95_pcie_clkreq_override(imx_pcie, enable);
return 0;
}
static void imx8mm_pcie_clr_clkreq_override(struct imx_pcie *imx_pcie)
{
imx8mm_pcie_clkreq_override(imx_pcie, false);
}
static void imx95_pcie_clr_clkreq_override(struct imx_pcie *imx_pcie)
{
imx95_pcie_clkreq_override(imx_pcie, false);
}
static int imx_pcie_clk_enable(struct imx_pcie *imx_pcie)
{
struct dw_pcie *pci = imx_pcie->pci;
@@ -1322,6 +1360,12 @@ static void imx_pcie_host_post_init(struct dw_pcie_rp *pp)
dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
dw_pcie_dbi_ro_wr_dis(pci);
}
/* Clear CLKREQ# override if supports_clkreq is true and link is up */
if (dw_pcie_link_up(pci) && imx_pcie->supports_clkreq) {
if (imx_pcie->drvdata->clr_clkreq_override)
imx_pcie->drvdata->clr_clkreq_override(imx_pcie);
}
}
/*
@@ -1387,6 +1431,7 @@ static int imx_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features imx8m_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
@@ -1396,6 +1441,7 @@ static const struct pci_epc_features imx8m_pcie_epc_features = {
};
static const struct pci_epc_features imx8q_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
@@ -1416,6 +1462,7 @@ static const struct pci_epc_features imx8q_pcie_epc_features = {
* BAR5 | Enable | 32-bit | 64 KB | Programmable Size
*/
static const struct pci_epc_features imx95_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_64K, },
.align = SZ_4K,
@@ -1602,7 +1649,7 @@ static int imx_pcie_probe(struct platform_device *pdev)
struct imx_pcie *imx_pcie;
struct device_node *np;
struct device_node *node = dev->of_node;
-int ret, domain;
+int i, ret, domain;
u16 val;
imx_pcie = devm_kzalloc(dev, sizeof(*imx_pcie), GFP_KERNEL);
@@ -1653,6 +1700,9 @@ static int imx_pcie_probe(struct platform_device *pdev)
if (imx_pcie->num_clks < 0)
return dev_err_probe(dev, imx_pcie->num_clks,
"failed to get clocks\n");
for (i = 0; i < imx_pcie->num_clks; i++)
if (strncmp(imx_pcie->clks[i].id, "extref", 6) == 0)
imx_pcie->enable_ext_refclk = true;
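The loop above detects an external reference clock by scanning the clock bulk data for an "extref" id. A minimal standalone sketch of the same prefix scan (the clock names in the test are illustrative, not taken from a device tree):

```c
#include <string.h>

/* Return 1 if any clock id starts with "extref", mirroring the
 * strncmp()-based scan over clk_bulk_data ids in the probe path. */
static int has_ext_refclk(const char *const ids[], int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (strncmp(ids[i], "extref", 6) == 0)
			return 1;
	return 0;
}
```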
if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_PHYDRV)) {
imx_pcie->phy = devm_phy_get(dev, "pcie-phy");
@@ -1740,6 +1790,7 @@ static int imx_pcie_probe(struct platform_device *pdev)
/* Limit link speed */
pci->max_link_speed = 1;
of_property_read_u32(node, "fsl,max-link-speed", &pci->max_link_speed);
imx_pcie->supports_clkreq = of_property_read_bool(node, "supports-clkreq");
ret = devm_regulator_get_enable_optional(&pdev->dev, "vpcie3v3aux");
if (ret < 0 && ret != -ENODEV)
@@ -1777,6 +1828,8 @@ static int imx_pcie_probe(struct platform_device *pdev)
*/
imx_pcie_add_lut_by_rid(imx_pcie, 0);
} else {
if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_SKIP_L23_READY))
pci->pp.skip_l23_ready = true;
pci->pp.use_atu_msg = true;
ret = dw_pcie_host_init(&pci->pp);
if (ret < 0)
@@ -1838,6 +1891,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.variant = IMX6QP,
.flags = IMX_PCIE_FLAG_IMX_PHY |
IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND |
IMX_PCIE_FLAG_SKIP_L23_READY |
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
.dbi_length = 0x200,
.gpr = "fsl,imx6q-iomuxc-gpr",
@@ -1854,6 +1908,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.variant = IMX7D,
.flags = IMX_PCIE_FLAG_SUPPORTS_SUSPEND |
IMX_PCIE_FLAG_HAS_APP_RESET |
IMX_PCIE_FLAG_SKIP_L23_READY |
IMX_PCIE_FLAG_HAS_PHY_RESET,
.gpr = "fsl,imx7d-iomuxc-gpr",
.mode_off[0] = IOMUXC_GPR12,
@@ -1873,6 +1928,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.mode_mask[1] = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE,
.init_phy = imx8mq_pcie_init_phy,
.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
.clr_clkreq_override = imx8mm_pcie_clr_clkreq_override,
},
[IMX8MM] = {
.variant = IMX8MM,
@@ -1883,6 +1939,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
.clr_clkreq_override = imx8mm_pcie_clr_clkreq_override,
},
[IMX8MP] = {
.variant = IMX8MP,
@@ -1893,6 +1950,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
.clr_clkreq_override = imx8mm_pcie_clr_clkreq_override,
},
[IMX8Q] = {
.variant = IMX8Q,
@@ -1913,6 +1971,8 @@ static const struct imx_pcie_drvdata drvdata[] = {
.core_reset = imx95_pcie_core_reset,
.init_phy = imx95_pcie_init_phy,
.wait_pll_lock = imx95_pcie_wait_for_phy_pll_lock,
.enable_ref_clk = imx95_pcie_enable_ref_clk,
.clr_clkreq_override = imx95_pcie_clr_clkreq_override,
},
[IMX8MQ_EP] = {
.variant = IMX8MQ_EP,
@@ -1969,6 +2029,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.core_reset = imx95_pcie_core_reset,
.wait_pll_lock = imx95_pcie_wait_for_phy_pll_lock,
.epc_features = &imx95_pcie_epc_features,
.enable_ref_clk = imx95_pcie_enable_ref_clk,
.mode = DW_PCIE_EP_TYPE,
},
};

@@ -930,6 +930,7 @@ static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features ks_pcie_am654_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.msix_capable = true,
.bar[BAR_0] = { .type = BAR_RESERVED, },

@@ -370,6 +370,7 @@ static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features artpec6_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
};

@@ -443,63 +443,13 @@ static ssize_t counter_value_read(struct file *file, char __user *buf,
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static const char *ltssm_status_string(enum dw_pcie_ltssm ltssm)
{
const char *str;
switch (ltssm) {
#define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3);
default:
str = "DW_PCIE_LTSSM_UNKNOWN";
break;
}
return str + strlen("DW_PCIE_LTSSM_");
}
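The removed ltssm_status_string() above relies on a stringification macro per case plus stripping the common prefix at return. The pattern is generally useful; a sketch with a toy enum (names here are illustrative, not the DWC LTSSM states):

```c
#include <string.h>

enum state { STATE_DETECT, STATE_POLLING, STATE_L0 };

/* Stringify an enum value via a case macro, then skip the shared
 * "STATE_" prefix so only the interesting suffix remains. */
static const char *state_name(enum state s)
{
	const char *str;

	switch (s) {
#define STATE_NAME(n) case n: str = #n; break
	STATE_NAME(STATE_DETECT);
	STATE_NAME(STATE_POLLING);
	STATE_NAME(STATE_L0);
#undef STATE_NAME
	default:
		str = "STATE_UNKNOWN";
		break;
	}

	return str + strlen("STATE_");
}
```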
static int ltssm_status_show(struct seq_file *s, void *v)
{
struct dw_pcie *pci = s->private;
enum dw_pcie_ltssm val;
val = dw_pcie_get_ltssm(pci);
-seq_printf(s, "%s (0x%02x)\n", ltssm_status_string(val), val);
+seq_printf(s, "%s (0x%02x)\n", dw_pcie_ltssm_status_string(val), val);
return 0;
}

@@ -72,47 +72,15 @@ EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar);
static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
{
return PCI_FIND_NEXT_CAP(dw_pcie_ep_read_cfg, PCI_CAPABILITY_LIST,
-cap, ep, func_no);
+cap, NULL, ep, func_no);
}
/**
* dw_pcie_ep_hide_ext_capability - Hide a capability from the linked list
* @pci: DWC PCI device
* @prev_cap: Capability preceding the capability that should be hidden
* @cap: Capability that should be hidden
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap)
static u16 dw_pcie_ep_find_ext_capability(struct dw_pcie_ep *ep,
u8 func_no, u8 cap)
{
u16 prev_cap_offset, cap_offset;
u32 prev_cap_header, cap_header;
prev_cap_offset = dw_pcie_find_ext_capability(pci, prev_cap);
if (!prev_cap_offset)
return -EINVAL;
prev_cap_header = dw_pcie_readl_dbi(pci, prev_cap_offset);
cap_offset = PCI_EXT_CAP_NEXT(prev_cap_header);
cap_header = dw_pcie_readl_dbi(pci, cap_offset);
/* cap must immediately follow prev_cap. */
if (PCI_EXT_CAP_ID(cap_header) != cap)
return -EINVAL;
/* Clear next ptr. */
prev_cap_header &= ~GENMASK(31, 20);
/* Set next ptr to next ptr of cap. */
prev_cap_header |= cap_header & GENMASK(31, 20);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, prev_cap_offset, prev_cap_header);
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
return PCI_FIND_NEXT_EXT_CAP(dw_pcie_ep_read_cfg, 0,
cap, NULL, ep, func_no);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_hide_ext_capability);
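The removed dw_pcie_ep_hide_ext_capability() splices a capability out of the extended-capability list by copying the 12-bit next pointer (header bits 31:20) of the hidden capability into its predecessor's header. The header manipulation can be sketched standalone (masks mirror PCI_EXT_CAP_ID()/PCI_EXT_CAP_NEXT(); the header values in the test are hypothetical):

```c
#include <stdint.h>

/* PCIe extended capability header: bits 15:0 capability ID,
 * bits 19:16 version, bits 31:20 next-capability offset. */
#define EXT_CAP_ID(hdr)   ((hdr) & 0xffffu)
#define EXT_CAP_NEXT(hdr) (((hdr) >> 20) & 0xfffu)
#define EXT_CAP_NEXT_MASK (0xfffu << 20)

/* Splice the capability described by cap_hdr out of the list:
 * clear the previous header's next pointer, then inherit the
 * hidden capability's next pointer, leaving it unreachable. */
static uint32_t hide_next_cap(uint32_t prev_hdr, uint32_t cap_hdr)
{
	prev_hdr &= ~EXT_CAP_NEXT_MASK;          /* clear next ptr      */
	prev_hdr |= cap_hdr & EXT_CAP_NEXT_MASK; /* inherit cap's next  */
	return prev_hdr;
}
```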
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *hdr)
@@ -139,18 +107,23 @@ static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
return 0;
}
static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
dma_addr_t parent_bus_addr, enum pci_barno bar,
size_t size)
/* BAR Match Mode inbound iATU mapping */
static int dw_pcie_ep_ib_atu_bar(struct dw_pcie_ep *ep, u8 func_no, int type,
dma_addr_t parent_bus_addr, enum pci_barno bar,
size_t size)
{
int ret;
u32 free_win;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
-if (!ep->bar_to_atu[bar])
+if (!ep_func)
+return -EINVAL;
+if (!ep_func->bar_to_atu[bar])
free_win = find_first_zero_bit(ep->ib_window_map, pci->num_ib_windows);
else
-free_win = ep->bar_to_atu[bar] - 1;
+free_win = ep_func->bar_to_atu[bar] - 1;
if (free_win >= pci->num_ib_windows) {
dev_err(pci->dev, "No free inbound window\n");
@@ -168,12 +141,190 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
* Always increment free_win before assignment, since value 0 is used to identify
* unallocated mapping.
*/
-ep->bar_to_atu[bar] = free_win + 1;
+ep_func->bar_to_atu[bar] = free_win + 1;
set_bit(free_win, ep->ib_window_map);
return 0;
}
static void dw_pcie_ep_clear_ib_maps(struct dw_pcie_ep *ep, u8 func_no, enum pci_barno bar)
{
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct device *dev = pci->dev;
unsigned int i, num;
u32 atu_index;
u32 *indexes;
if (!ep_func)
return;
/* Tear down the BAR Match Mode mapping, if any. */
if (ep_func->bar_to_atu[bar]) {
atu_index = ep_func->bar_to_atu[bar] - 1;
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index);
clear_bit(atu_index, ep->ib_window_map);
ep_func->bar_to_atu[bar] = 0;
}
/* Tear down all Address Match Mode mappings, if any. */
indexes = ep_func->ib_atu_indexes[bar];
num = ep_func->num_ib_atu_indexes[bar];
ep_func->ib_atu_indexes[bar] = NULL;
ep_func->num_ib_atu_indexes[bar] = 0;
if (!indexes)
return;
for (i = 0; i < num; i++) {
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, indexes[i]);
clear_bit(indexes[i], ep->ib_window_map);
}
devm_kfree(dev, indexes);
}
static u64 dw_pcie_ep_read_bar_assigned(struct dw_pcie_ep *ep, u8 func_no,
enum pci_barno bar, int flags)
{
u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar);
u32 lo, hi;
u64 addr;
lo = dw_pcie_ep_readl_dbi(ep, func_no, reg);
if (flags & PCI_BASE_ADDRESS_SPACE)
return lo & PCI_BASE_ADDRESS_IO_MASK;
addr = lo & PCI_BASE_ADDRESS_MEM_MASK;
if (!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
return addr;
hi = dw_pcie_ep_readl_dbi(ep, func_no, reg + 4);
return addr | ((u64)hi << 32);
}
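dw_pcie_ep_read_bar_assigned() above reconstructs the host-assigned BAR address from the raw 32-bit BAR registers (upper half read from the next register for a 64-bit memory BAR). The decode logic as a standalone sketch, with masks mirroring the PCI_BASE_ADDRESS_* definitions (the register values in the test are hypothetical):

```c
#include <stdint.h>

#define BAR_SPACE_IO    0x1u           /* bit 0: I/O space indicator   */
#define BAR_MEM_TYPE_64 0x4u           /* bit 2: 64-bit memory BAR     */
#define BAR_IO_MASK     (~0x3u)        /* I/O BAR address mask         */
#define BAR_MEM_MASK    (~0xfu)        /* memory BAR address mask      */

/* Decode a BAR from its raw register values; 'hi' holds the upper
 * 32 address bits and is only consulted for a 64-bit memory BAR. */
static uint64_t decode_bar(uint32_t lo, uint32_t hi)
{
	uint64_t addr;

	if (lo & BAR_SPACE_IO)
		return lo & BAR_IO_MASK;

	addr = lo & BAR_MEM_MASK;
	if (!(lo & BAR_MEM_TYPE_64))
		return addr;

	return addr | ((uint64_t)hi << 32);
}
```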
static int dw_pcie_ep_validate_submap(struct dw_pcie_ep *ep,
const struct pci_epf_bar_submap *submap,
unsigned int num_submap, size_t bar_size)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
u32 align = pci->region_align;
size_t off = 0;
unsigned int i;
size_t size;
if (!align || !IS_ALIGNED(bar_size, align))
return -EINVAL;
/*
* The submap array order defines the BAR layout (submap[0] starts
* at offset 0 and each entry immediately follows the previous
* one). Here, validate that it forms a strict, gapless
* decomposition of the BAR:
* - each entry has a non-zero size
* - sizes, implicit offsets and phys_addr are aligned to
* pci->region_align
* - each entry lies within the BAR range
* - the entries exactly cover the whole BAR
*
* Note: dw_pcie_prog_inbound_atu() also checks alignment for the
* PCI address and the target phys_addr, but validating up-front
* avoids partially programming iATU windows in vain.
*/
for (i = 0; i < num_submap; i++) {
size = submap[i].size;
if (!size)
return -EINVAL;
if (!IS_ALIGNED(size, align) || !IS_ALIGNED(off, align))
return -EINVAL;
if (!IS_ALIGNED(submap[i].phys_addr, align))
return -EINVAL;
if (off > bar_size || size > bar_size - off)
return -EINVAL;
off += size;
}
if (off != bar_size)
return -EINVAL;
return 0;
}
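The validation rules listed in the comment above (non-zero, aligned, in-range entries that exactly cover the BAR) can be exercised standalone. A simplified sketch, with a plain struct standing in for pci_epf_bar_submap and -1 standing in for -EINVAL:

```c
#include <stddef.h>
#include <stdint.h>

struct submap { uint64_t phys_addr; size_t size; };

/* Check that the submap entries form a strict, gapless cover of a
 * BAR: every size, implicit offset, and phys_addr aligned to 'align',
 * no entry extending past the BAR, and the total exactly equal to
 * the BAR size. Returns 0 on success, -1 on any violation. */
static int validate_submap(const struct submap *s, unsigned int n,
			   size_t bar_size, size_t align)
{
	size_t off = 0;
	unsigned int i;

	if (!align || bar_size % align)
		return -1;

	for (i = 0; i < n; i++) {
		if (!s[i].size || s[i].size % align || off % align)
			return -1;
		if (s[i].phys_addr % align)
			return -1;
		if (off > bar_size || s[i].size > bar_size - off)
			return -1;
		off += s[i].size;
	}

	return off == bar_size ? 0 : -1;
}
```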
/* Address Match Mode inbound iATU mapping */
static int dw_pcie_ep_ib_atu_addr(struct dw_pcie_ep *ep, u8 func_no, int type,
const struct pci_epf_bar *epf_bar)
{
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
const struct pci_epf_bar_submap *submap = epf_bar->submap;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
struct device *dev = pci->dev;
u64 pci_addr, parent_bus_addr;
u64 size, base, off = 0;
int free_win, ret;
unsigned int i;
u32 *indexes;
if (!ep_func || !epf_bar->num_submap || !submap || !epf_bar->size)
return -EINVAL;
ret = dw_pcie_ep_validate_submap(ep, submap, epf_bar->num_submap,
epf_bar->size);
if (ret)
return ret;
base = dw_pcie_ep_read_bar_assigned(ep, func_no, bar, epf_bar->flags);
if (!base) {
dev_err(dev,
"BAR%u not assigned, cannot set up sub-range mappings\n",
bar);
return -EINVAL;
}
indexes = devm_kcalloc(dev, epf_bar->num_submap, sizeof(*indexes),
GFP_KERNEL);
if (!indexes)
return -ENOMEM;
ep_func->ib_atu_indexes[bar] = indexes;
ep_func->num_ib_atu_indexes[bar] = 0;
for (i = 0; i < epf_bar->num_submap; i++) {
size = submap[i].size;
parent_bus_addr = submap[i].phys_addr;
if (off > (~0ULL) - base) {
ret = -EINVAL;
goto err;
}
pci_addr = base + off;
off += size;
free_win = find_first_zero_bit(ep->ib_window_map,
pci->num_ib_windows);
if (free_win >= pci->num_ib_windows) {
ret = -ENOSPC;
goto err;
}
ret = dw_pcie_prog_inbound_atu(pci, free_win, type,
parent_bus_addr, pci_addr, size);
if (ret)
goto err;
set_bit(free_win, ep->ib_window_map);
indexes[i] = free_win;
ep_func->num_ib_atu_indexes[bar] = i + 1;
}
return 0;
err:
dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
return ret;
}
static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep,
struct dw_pcie_ob_atu_cfg *atu)
{
@@ -204,35 +355,34 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
-u32 atu_index = ep->bar_to_atu[bar] - 1;
+struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
-if (!ep->bar_to_atu[bar])
+if (!ep_func || !ep_func->epf_bar[bar])
return;
__dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags);
-dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index);
-clear_bit(atu_index, ep->ib_window_map);
-ep->epf_bar[bar] = NULL;
-ep->bar_to_atu[bar] = 0;
+dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
+ep_func->epf_bar[bar] = NULL;
}
-static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie *pci,
+static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie_ep *ep, u8 func_no,
enum pci_barno bar)
{
u32 reg, bar_index;
unsigned int offset, nbars;
int i;
-offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
+offset = dw_pcie_ep_find_ext_capability(ep, func_no, PCI_EXT_CAP_ID_REBAR);
if (!offset)
return offset;
-reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg);
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) {
-reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
bar_index = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, reg);
if (bar_index == bar)
return offset;
@@ -253,7 +403,7 @@ static int dw_pcie_ep_set_bar_resizable(struct dw_pcie_ep *ep, u8 func_no,
u32 rebar_cap, rebar_ctrl;
int ret;
-rebar_offset = dw_pcie_ep_get_rebar_offset(pci, bar);
+rebar_offset = dw_pcie_ep_get_rebar_offset(ep, func_no, bar);
if (!rebar_offset)
return -EINVAL;
@@ -283,16 +433,16 @@ static int dw_pcie_ep_set_bar_resizable(struct dw_pcie_ep *ep, u8 func_no,
* 1 MB to 128 TB. Bits 31:16 in PCI_REBAR_CTRL define "supported sizes"
* bits for sizes 256 TB to 8 EB. Disallow sizes 256 TB to 8 EB.
*/
-rebar_ctrl = dw_pcie_readl_dbi(pci, rebar_offset + PCI_REBAR_CTRL);
+rebar_ctrl = dw_pcie_ep_readl_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL);
rebar_ctrl &= ~GENMASK(31, 16);
-dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl);
+dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl);
/*
* The "selected size" (bits 13:8) in PCI_REBAR_CTRL are automatically
* updated when writing PCI_REBAR_CAP, see "Figure 3-26 Resizable BAR
* Example for 32-bit Memory BAR0" in DWC EP databook 5.96a.
*/
-dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CAP, rebar_cap);
+dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CAP, rebar_cap);
dw_pcie_dbi_ro_wr_dis(pci);
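Per the comment above, bits 31:4 of PCI_REBAR_CAP advertise supported sizes from 1 MB to 128 TB, one bit per power of two. A sketch of that encoding (assumes a power-of-two size in MB; this is illustrative, not the kernel's pci_epc_bar_size_to_rebar_cap()):

```c
#include <stdint.h>

/* Resizable BAR capability: bit (4 + n) of PCI_REBAR_CAP advertises
 * support for a BAR size of 2^n MB (1 MB at bit 4, up to 128 TB
 * at bit 31). Input must be a power-of-two size in MB. */
static uint32_t rebar_cap_for_size(uint64_t size_mb)
{
	uint32_t bit = 4;

	while (size_mb > 1) {
		size_mb >>= 1;
		bit++;
	}
	return 1u << bit;
}
```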
@@ -341,12 +491,16 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
enum pci_barno bar = epf_bar->barno;
size_t size = epf_bar->size;
enum pci_epc_bar_type bar_type;
int flags = epf_bar->flags;
int ret, type;
if (!ep_func)
return -EINVAL;
/*
* DWC does not allow BAR pairs to overlap, e.g. you cannot combine BARs
* 1 and 2 to form a 64-bit BAR.
@@ -360,21 +514,38 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
* calling clear_bar() would clear the BAR's PCI address assigned by the
* host).
*/
-if (ep->epf_bar[bar]) {
+if (ep_func->epf_bar[bar]) {
/*
* We can only dynamically change a BAR if the new BAR size and
* BAR flags do not differ from the existing configuration.
*/
-if (ep->epf_bar[bar]->barno != bar ||
-ep->epf_bar[bar]->size != size ||
-ep->epf_bar[bar]->flags != flags)
+if (ep_func->epf_bar[bar]->barno != bar ||
+ep_func->epf_bar[bar]->size != size ||
+ep_func->epf_bar[bar]->flags != flags)
return -EINVAL;
/*
* When dynamically changing a BAR, tear down any existing
* mappings before re-programming.
*/
if (ep_func->epf_bar[bar]->num_submap || epf_bar->num_submap)
dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
/*
* When dynamically changing a BAR, skip writing the BAR reg, as
* that would clear the BAR's PCI address assigned by the host.
*/
goto config_atu;
} else {
/*
* Subrange mapping is an update-only operation. The BAR
* must have been configured once without submaps so that
* subsequent set_bar() calls can update inbound mappings
* without touching the BAR register (and clobbering the
* host-assigned address).
*/
if (epf_bar->num_submap)
return -EINVAL;
}
bar_type = dw_pcie_ep_get_bar_type(ep, bar);
@@ -408,12 +579,16 @@ config_atu:
else
type = PCIE_ATU_TYPE_IO;
-ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar,
-size);
+if (epf_bar->num_submap)
+ret = dw_pcie_ep_ib_atu_addr(ep, func_no, type, epf_bar);
+else
+ret = dw_pcie_ep_ib_atu_bar(ep, func_no, type,
+epf_bar->phys_addr, bar, size);
if (ret)
return ret;
-ep->epf_bar[bar] = epf_bar;
+ep_func->epf_bar[bar] = epf_bar;
return 0;
}
@@ -601,6 +776,16 @@ static void dw_pcie_ep_stop(struct pci_epc *epc)
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
/*
* Tear down the dedicated outbound window used for MSI
* generation. This avoids leaking an iATU window across
* endpoint stop/start cycles.
*/
if (ep->msi_iatu_mapped) {
dw_pcie_ep_unmap_addr(epc, 0, 0, ep->msi_mem_phys);
ep->msi_iatu_mapped = false;
}
dw_pcie_stop_link(pci);
}
@@ -702,15 +887,38 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
msg_addr = ((u64)msg_addr_upper) << 32 | msg_addr_lower;
msg_addr = dw_pcie_ep_align_addr(epc, msg_addr, &map_size, &offset);
ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
map_size);
if (ret)
return ret;
/*
* Program the outbound iATU once and keep it enabled.
*
* The spec warns that updating iATU registers while there are
* operations in flight on the AXI bridge interface is not
* supported, so avoid reprogramming the region for every MSI;
* in particular, do not unmap it immediately after the writel().
*/
if (!ep->msi_iatu_mapped) {
ret = dw_pcie_ep_map_addr(epc, func_no, 0,
ep->msi_mem_phys, msg_addr,
map_size);
if (ret)
return ret;
ep->msi_iatu_mapped = true;
ep->msi_msg_addr = msg_addr;
ep->msi_map_size = map_size;
} else if (WARN_ON_ONCE(ep->msi_msg_addr != msg_addr ||
ep->msi_map_size != map_size)) {
/*
* The host changed the MSI target address or the required
* mapping size changed. Reprogramming the iATU at runtime is
* unsafe on this controller, so bail out instead of trying to
* update the existing region.
*/
return -EINVAL;
}
writel(msg_data | (interrupt_num - 1), ep->msi_mem + offset);
dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msi_irq);
@@ -775,7 +983,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset);
tbl_offset &= PCI_MSIX_TABLE_OFFSET;
-msix_tbl = ep->epf_bar[bir]->addr + tbl_offset;
+msix_tbl = ep_func->epf_bar[bir]->addr + tbl_offset;
msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr;
msg_data = msix_tbl[(interrupt_num - 1)].msg_data;
vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl;
@@ -836,20 +1044,20 @@ void dw_pcie_ep_deinit(struct dw_pcie_ep *ep)
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_deinit);
-static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
+static void dw_pcie_ep_init_rebar_registers(struct dw_pcie_ep *ep, u8 func_no)
{
-struct dw_pcie_ep *ep = &pci->ep;
-unsigned int offset;
-unsigned int nbars;
+struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+unsigned int offset, nbars;
enum pci_barno bar;
u32 reg, i, val;
-offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
+if (!ep_func)
+return;
-dw_pcie_dbi_ro_wr_en(pci);
+offset = dw_pcie_ep_find_ext_capability(ep, func_no, PCI_EXT_CAP_ID_REBAR);
if (offset) {
-reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg);
/*
@@ -870,16 +1078,28 @@ static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
* the controller when RESBAR_CAP_REG is written, which
* is why RESBAR_CAP_REG is written here.
*/
-val = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+val = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
bar = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, val);
-if (ep->epf_bar[bar])
-pci_epc_bar_size_to_rebar_cap(ep->epf_bar[bar]->size, &val);
+if (ep_func->epf_bar[bar])
+pci_epc_bar_size_to_rebar_cap(ep_func->epf_bar[bar]->size, &val);
else
val = BIT(4);
-dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, val);
+dw_pcie_ep_writel_dbi(ep, func_no, offset + PCI_REBAR_CAP, val);
}
}
}
static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
{
struct dw_pcie_ep *ep = &pci->ep;
u8 funcs = ep->epc->max_functions;
u8 func_no;
dw_pcie_dbi_ro_wr_en(pci);
for (func_no = 0; func_no < funcs; func_no++)
dw_pcie_ep_init_rebar_registers(ep, func_no);
dw_pcie_setup(pci);
dw_pcie_dbi_ro_wr_dis(pci);
@@ -967,6 +1187,18 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
if (ep->ops->init)
ep->ops->init(ep);
/*
* PCIe r6.0, section 7.9.15 states that for endpoints that support
* PTM, this capability structure is required in exactly one
* function, which controls the PTM behavior of all PTM capable
* functions. This indicates the PTM capability structure
* represents controller-level registers rather than per-function
* registers.
*
* Therefore, PTM capability registers are configured using the
* standard DBI accessors, instead of func_no indexed per-function
* accessors.
*/
ptm_cap_base = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM);
/*
@@ -1087,6 +1319,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
struct device *dev = pci->dev;
INIT_LIST_HEAD(&ep->func_list);
ep->msi_iatu_mapped = false;
ep->msi_msg_addr = 0;
ep->msi_map_size = 0;
epc = devm_pci_epc_create(dev, &epc_ops);
if (IS_ERR(epc)) {

@@ -244,7 +244,7 @@ void dw_pcie_msi_init(struct dw_pcie_rp *pp)
u64 msi_target = (u64)pp->msi_data;
u32 ctrl, num_ctrls;
-if (!pci_msi_enabled() || !pp->has_msi_ctrl)
+if (!pci_msi_enabled() || !pp->use_imsi_rx)
return;
num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
@@ -356,10 +356,20 @@ int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
* order not to miss MSI TLPs from those devices the MSI target
* address has to be within the lowest 4GB.
*
-* Note until there is a better alternative found the reservation is
-* done by allocating from the artificially limited DMA-coherent
-* memory.
+* Per DWC databook r6.21a, section 3.10.2.3, the incoming MWr TLP
+* targeting the MSI_CTRL_ADDR is terminated by the iMSI-RX and never
+* appears on the AXI bus. So MSI_CTRL_ADDR address doesn't need to be
+* mapped and can be any memory that doesn't get allocated for the BAR
+* memory. Since most of the platforms provide 32-bit address for
+* 'config' region, try cfg0_base as the first option for the MSI target
+* address if it's a 32-bit address. Otherwise, try 32-bit and 64-bit
+* coherent memory allocation one by one.
*/
if (!(pp->cfg0_base & GENMASK_ULL(63, 32))) {
pp->msi_data = pp->cfg0_base;
return 0;
}
ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
if (!ret)
msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
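The "fits below 4 GiB" test above is a mask check against the upper 32 address bits. A standalone sketch, with GENMASK_ULL(63, 32) open-coded (illustrative, not kernel code):

```c
#include <stdint.h>

/* Equivalent of GENMASK_ULL(63, 32): bits 63..32 set. */
#define UPPER32_MASK (~0ull << 32)

/* An MSI target address must sit below 4 GiB so that 32-bit-only
 * endpoints can still reach it; an existing 32-bit bus address
 * (such as the config-space base) can then be reused directly. */
static int fits_in_32bit(uint64_t addr)
{
	return !(addr & UPPER32_MASK);
}
```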
@@ -582,15 +592,15 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
}
if (pci_msi_enabled()) {
-pp->has_msi_ctrl = !(pp->ops->msi_init ||
+pp->use_imsi_rx = !(pp->ops->msi_init ||
of_property_present(np, "msi-parent") ||
of_property_present(np, "msi-map"));
/*
-* For the has_msi_ctrl case the default assignment is handled
+* For the use_imsi_rx case the default assignment is handled
* in the dw_pcie_msi_host_init().
*/
-if (!pp->has_msi_ctrl && !pp->num_vectors) {
+if (!pp->use_imsi_rx && !pp->num_vectors) {
pp->num_vectors = MSI_DEF_NUM_VECTORS;
} else if (pp->num_vectors > MAX_MSI_IRQS) {
dev_err(dev, "Invalid number of vectors\n");
@@ -602,7 +612,7 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
ret = pp->ops->msi_init(pp);
if (ret < 0)
goto err_deinit_host;
-} else if (pp->has_msi_ctrl) {
+} else if (pp->use_imsi_rx) {
ret = dw_pcie_msi_host_init(pp);
if (ret < 0)
goto err_deinit_host;
@@ -620,14 +630,6 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (ret)
goto err_free_msi;
if (pp->ecam_enabled) {
ret = dw_pcie_config_ecam_iatu(pp);
if (ret) {
dev_err(dev, "Failed to configure iATU in ECAM mode\n");
goto err_free_msi;
}
}
/*
* Allocate the resource for MSG TLP before programming the iATU
* outbound window in dw_pcie_setup_rc(). Since the allocation depends
@@ -655,13 +657,12 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
}
/*
* Note: Skip the link up delay only when a Link Up IRQ is present.
* If there is no Link Up IRQ, we should not bypass the delay
* because that would require users to manually rescan for devices.
* Only fail on timeout error. Other errors indicate the device may
* become available later, so continue without failing.
*/
if (!pp->use_linkup_irq)
/* Ignore errors, the link may come up later */
dw_pcie_wait_for_link(pci);
ret = dw_pcie_wait_for_link(pci);
if (ret == -ETIMEDOUT)
goto err_stop_link;
ret = pci_host_probe(bridge);
if (ret)
@@ -681,7 +682,7 @@ err_remove_edma:
dw_pcie_edma_remove(pci);
err_free_msi:
-if (pp->has_msi_ctrl)
+if (pp->use_imsi_rx)
dw_pcie_free_msi(pp);
err_deinit_host:
@@ -709,7 +710,7 @@ void dw_pcie_host_deinit(struct dw_pcie_rp *pp)
dw_pcie_edma_remove(pci);
-if (pp->has_msi_ctrl)
+if (pp->use_imsi_rx)
dw_pcie_free_msi(pp);
if (pp->ops->deinit)
@@ -846,19 +847,10 @@ static void __iomem *dw_pcie_ecam_conf_map_bus(struct pci_bus *bus, unsigned int
return pci->dbi_base + where;
}
static int dw_pcie_op_assert_perst(struct pci_bus *bus, bool assert)
{
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
return dw_pcie_assert_perst(pci, assert);
}
static struct pci_ops dw_pcie_ops = {
.map_bus = dw_pcie_own_conf_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
.assert_perst = dw_pcie_op_assert_perst,
};
static struct pci_ops dw_pcie_ecam_ops = {
@@ -872,9 +864,10 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dw_pcie_ob_atu_cfg atu = { 0 };
struct resource_entry *entry;
int ob_iatu_index;
int ib_iatu_index;
int i, ret;
/* Note the very first outbound ATU is used for CFG IOs */
if (!pci->num_ob_windows) {
dev_err(pci->dev, "No outbound iATU found\n");
return -EINVAL;
@@ -890,37 +883,74 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
for (i = 0; i < pci->num_ib_windows; i++)
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, i);
i = 0;
/*
* NOTE: For outbound address translation, outbound iATU at index 0 is
* reserved for CFG IOs (dw_pcie_other_conf_map_bus()), thus start at
* index 1.
*
* If using ECAM, outbound iATU at index 0 and index 1 is reserved for
* CFG IOs.
*/
if (pp->ecam_enabled) {
ob_iatu_index = 2;
ret = dw_pcie_config_ecam_iatu(pp);
if (ret) {
dev_err(pci->dev, "Failed to configure iATU in ECAM mode\n");
return ret;
}
} else {
ob_iatu_index = 1;
}
resource_list_for_each_entry(entry, &pp->bridge->windows) {
resource_size_t res_size;
if (resource_type(entry->res) != IORESOURCE_MEM)
continue;
if (pci->num_ob_windows <= ++i)
break;
atu.index = i;
atu.type = PCIE_ATU_TYPE_MEM;
atu.parent_bus_addr = entry->res->start - pci->parent_bus_offset;
atu.pci_addr = entry->res->start - entry->offset;
/* Adjust iATU size if MSG TLP region was allocated before */
if (pp->msg_res && pp->msg_res->parent == entry->res)
atu.size = resource_size(entry->res) -
res_size = resource_size(entry->res) -
resource_size(pp->msg_res);
else
atu.size = resource_size(entry->res);
res_size = resource_size(entry->res);
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret) {
dev_err(pci->dev, "Failed to set MEM range %pr\n",
entry->res);
return ret;
while (res_size > 0) {
/*
* Return failure if we run out of windows in the
* middle. Otherwise, we would end up only partially
* mapping a single resource.
*/
if (ob_iatu_index >= pci->num_ob_windows) {
dev_err(pci->dev, "Cannot add outbound window for region: %pr\n",
entry->res);
return -ENOMEM;
}
atu.index = ob_iatu_index;
atu.size = MIN(pci->region_limit + 1, res_size);
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret) {
dev_err(pci->dev, "Failed to set MEM range %pr\n",
entry->res);
return ret;
}
ob_iatu_index++;
atu.parent_bus_addr += atu.size;
atu.pci_addr += atu.size;
res_size -= atu.size;
}
}
if (pp->io_size) {
if (pci->num_ob_windows > ++i) {
atu.index = i;
if (ob_iatu_index < pci->num_ob_windows) {
atu.index = ob_iatu_index;
atu.type = PCIE_ATU_TYPE_IO;
atu.parent_bus_addr = pp->io_base - pci->parent_bus_offset;
atu.pci_addr = pp->io_bus_addr;
@@ -932,40 +962,71 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
entry->res);
return ret;
}
ob_iatu_index++;
} else {
/*
* If there are not enough outbound windows to give I/O
* space its own iATU, the outbound iATU at index 0 will
* be shared between I/O space and CFG IOs, by
* temporarily reconfiguring the iATU to CFG space, in
* order to do a CFG IO, and then immediately restoring
* it to I/O space. This is only implemented when using
* dw_pcie_other_conf_map_bus(), which is not the case
* when using ECAM.
*/
if (pp->ecam_enabled) {
dev_err(pci->dev, "Cannot add outbound window for I/O\n");
return -ENOMEM;
}
pp->cfg0_io_shared = true;
}
}
if (pci->num_ob_windows <= i)
dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n",
pci->num_ob_windows);
if (pp->use_atu_msg) {
if (ob_iatu_index >= pci->num_ob_windows) {
dev_err(pci->dev, "Cannot add outbound window for MSG TLP\n");
return -ENOMEM;
}
pp->msg_atu_index = ob_iatu_index++;
}
pp->msg_atu_index = i;
i = 0;
ib_iatu_index = 0;
resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) {
resource_size_t res_start, res_size, window_size;
if (resource_type(entry->res) != IORESOURCE_MEM)
continue;
if (pci->num_ib_windows <= i)
break;
res_size = resource_size(entry->res);
res_start = entry->res->start;
while (res_size > 0) {
/*
* Return failure if we run out of windows in the
* middle. Otherwise, we would end up only partially
* mapping a single resource.
*/
if (ib_iatu_index >= pci->num_ib_windows) {
dev_err(pci->dev, "Cannot add inbound window for region: %pr\n",
entry->res);
return -ENOMEM;
}
ret = dw_pcie_prog_inbound_atu(pci, i++, PCIE_ATU_TYPE_MEM,
entry->res->start,
entry->res->start - entry->offset,
resource_size(entry->res));
if (ret) {
dev_err(pci->dev, "Failed to set DMA range %pr\n",
entry->res);
return ret;
window_size = MIN(pci->region_limit + 1, res_size);
ret = dw_pcie_prog_inbound_atu(pci, ib_iatu_index,
PCIE_ATU_TYPE_MEM, res_start,
res_start - entry->offset, window_size);
if (ret) {
dev_err(pci->dev, "Failed to set DMA range %pr\n",
entry->res);
return ret;
}
ib_iatu_index++;
res_start += window_size;
res_size -= window_size;
}
}
if (pci->num_ib_windows <= i)
dev_warn(pci->dev, "Dma-ranges exceed inbound iATU size (%u)\n",
pci->num_ib_windows);
return 0;
}
@@ -1087,7 +1148,7 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
* the platform uses its own address translation component rather than
* ATU, so we should not program the ATU here.
*/
if (pp->bridge->child_ops == &dw_child_pcie_ops) {
if (pp->bridge->child_ops == &dw_child_pcie_ops || pp->ecam_enabled) {
ret = dw_pcie_iatu_setup(pp);
if (ret)
return ret;
@@ -1104,6 +1165,17 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
dw_pcie_dbi_ro_wr_dis(pci);
/*
* The iMSI-RX module does not support receiving MSI or MSI-X generated
* by the Root Port. If iMSI-RX is used as the MSI controller, remove
* the MSI and MSI-X capabilities of the Root Port to allow the drivers
* to fall back to INTx instead.
*/
if (pp->use_imsi_rx) {
dw_pcie_remove_capability(pci, PCI_CAP_ID_MSI);
dw_pcie_remove_capability(pci, PCI_CAP_ID_MSIX);
}
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
@@ -1147,8 +1219,11 @@ static int dw_pcie_pme_turn_off(struct dw_pcie *pci)
int dw_pcie_suspend_noirq(struct dw_pcie *pci)
{
u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
int ret = 0;
u32 val;
int ret;
if (!dw_pcie_link_up(pci))
goto stop_link;
/*
* If L1SS is supported, then do not put the link into L2 as some
@@ -1165,6 +1240,16 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
return ret;
}
/*
* Some SoCs do not support reading the LTSSM register after
* PME_Turn_Off broadcast. For those SoCs, skip waiting for L2/L3 Ready
* state and wait 10ms as recommended in PCIe spec r6.0, sec 5.3.3.2.1.
*/
if (pci->pp.skip_l23_ready) {
mdelay(PCIE_PME_TO_L2_TIMEOUT_US / 1000);
goto stop_link;
}
ret = read_poll_timeout(dw_pcie_get_ltssm, val,
val == DW_PCIE_LTSSM_L2_IDLE ||
val <= DW_PCIE_LTSSM_DETECT_WAIT,
@@ -1183,6 +1268,7 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
*/
udelay(1);
stop_link:
dw_pcie_stop_link(pci);
if (pci->pp.ops->deinit)
pci->pp.ops->deinit(&pci->pp);
@@ -1220,6 +1306,9 @@ int dw_pcie_resume_noirq(struct dw_pcie *pci)
if (ret)
return ret;
if (pci->pp.ops->post_init)
pci->pp.ops->post_init(&pci->pp);
return ret;
}
EXPORT_SYMBOL_GPL(dw_pcie_resume_noirq);

@@ -61,6 +61,7 @@ static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features dw_plat_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.msix_capable = true,
};

@@ -226,16 +226,70 @@ void dw_pcie_version_detect(struct dw_pcie *pci)
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
{
return PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap,
pci);
NULL, pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
{
return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, pci);
return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, NULL, pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap)
{
u8 cap_pos, pre_pos, next_pos;
u16 reg;
cap_pos = PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap,
&pre_pos, pci);
if (!cap_pos)
return;
reg = dw_pcie_readw_dbi(pci, cap_pos);
next_pos = (reg & 0xff00) >> 8;
dw_pcie_dbi_ro_wr_en(pci);
if (pre_pos == PCI_CAPABILITY_LIST)
dw_pcie_writeb_dbi(pci, PCI_CAPABILITY_LIST, next_pos);
else
dw_pcie_writeb_dbi(pci, pre_pos + 1, next_pos);
dw_pcie_dbi_ro_wr_dis(pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_remove_capability);
void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap)
{
int cap_pos, next_pos, pre_pos;
u32 pre_header, header;
cap_pos = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, &pre_pos, pci);
if (!cap_pos)
return;
header = dw_pcie_readl_dbi(pci, cap_pos);
/*
* If the first cap at offset PCI_CFG_SPACE_SIZE is removed,
* only set its capid to zero as it cannot be skipped.
*/
if (cap_pos == PCI_CFG_SPACE_SIZE) {
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, cap_pos, header & 0xffff0000);
dw_pcie_dbi_ro_wr_dis(pci);
return;
}
pre_header = dw_pcie_readl_dbi(pci, pre_pos);
next_pos = PCI_EXT_CAP_NEXT(header);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, pre_pos,
(pre_header & 0xfffff) | (next_pos << 20));
dw_pcie_dbi_ro_wr_dis(pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_remove_ext_capability);
static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id,
u16 vsec_id)
{
@@ -246,7 +300,7 @@ static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id,
return 0;
while ((vsec = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, vsec,
PCI_EXT_CAP_ID_VNDR, pci))) {
PCI_EXT_CAP_ID_VNDR, NULL, pci))) {
header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER);
if (PCI_VNDR_HEADER_ID(header) == vsec_id)
return vsec;
@@ -478,6 +532,9 @@ int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
u32 retries, val;
u64 limit_addr;
if (atu->index >= pci->num_ob_windows)
return -ENOSPC;
limit_addr = parent_bus_addr + atu->size - 1;
if ((limit_addr & ~pci->region_limit) != (parent_bus_addr & ~pci->region_limit) ||
@@ -551,6 +608,9 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
u64 limit_addr = pci_addr + size - 1;
u32 retries, val;
if (index >= pci->num_ib_windows)
return -ENOSPC;
if ((limit_addr & ~pci->region_limit) != (pci_addr & ~pci->region_limit) ||
!IS_ALIGNED(parent_bus_addr, pci->region_align) ||
!IS_ALIGNED(pci_addr, pci->region_align) || !size) {
@@ -639,9 +699,69 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index)
dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0);
}
const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm)
{
const char *str;
switch (ltssm) {
#define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_2);
default:
str = "DW_PCIE_LTSSM_UNKNOWN";
break;
}
return str + strlen("DW_PCIE_LTSSM_");
}
/**
* dw_pcie_wait_for_link - Wait for the PCIe link to be up
* @pci: DWC instance
*
* Returns: 0 if link is up, -ENODEV if device is not found, -EIO if the device
* is found but not active, and -ETIMEDOUT if the link fails to come up for other
* reasons.
*/
int dw_pcie_wait_for_link(struct dw_pcie *pci)
{
u32 offset, val;
u32 offset, val, ltssm;
int retries;
/* Check if the link is up or not */
@@ -653,7 +773,29 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
}
if (retries >= PCIE_LINK_WAIT_MAX_RETRIES) {
dev_info(pci->dev, "Phy link never came up\n");
/*
* If the link is in Detect.Quiet or Detect.Active state, it
* indicates that no device is detected.
*/
ltssm = dw_pcie_get_ltssm(pci);
if (ltssm == DW_PCIE_LTSSM_DETECT_QUIET ||
ltssm == DW_PCIE_LTSSM_DETECT_ACT) {
dev_info(pci->dev, "Device not found\n");
return -ENODEV;
/*
* If the link is in POLL.{Active/Compliance} state, then the
* device is found to be connected to the bus, but it is not
* active, i.e., the device firmware might not be initialized yet.
*/
} else if (ltssm == DW_PCIE_LTSSM_POLL_ACTIVE ||
ltssm == DW_PCIE_LTSSM_POLL_COMPLIANCE) {
dev_info(pci->dev, "Device found, but not active\n");
return -EIO;
}
dev_err(pci->dev, "Link failed to come up. LTSSM: %s\n",
dw_pcie_ltssm_status_string(ltssm));
return -ETIMEDOUT;
}

@@ -305,6 +305,10 @@
/* Default eDMA LLP memory size */
#define DMA_LLP_MEM_SIZE PAGE_SIZE
/* Common struct pci_epc_feature bits among DWC EP glue drivers */
#define DWC_EPC_COMMON_FEATURES .dynamic_inbound_mapping = true, \
.subrange_mapping = true
struct dw_pcie;
struct dw_pcie_rp;
struct dw_pcie_ep;
@@ -388,6 +392,10 @@ enum dw_pcie_ltssm {
DW_PCIE_LTSSM_RCVRY_EQ2 = 0x22,
DW_PCIE_LTSSM_RCVRY_EQ3 = 0x23,
/* Vendor glue drivers provide pseudo L1 substates from get_ltssm() */
DW_PCIE_LTSSM_L1_1 = 0x141,
DW_PCIE_LTSSM_L1_2 = 0x142,
DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF,
};
@@ -412,7 +420,7 @@ struct dw_pcie_host_ops {
};
struct dw_pcie_rp {
bool has_msi_ctrl:1;
bool use_imsi_rx:1;
bool cfg0_io_shared:1;
u64 cfg0_base;
void __iomem *va_cfg0_base;
@@ -434,11 +442,11 @@ struct dw_pcie_rp {
bool use_atu_msg;
int msg_atu_index;
struct resource *msg_res;
bool use_linkup_irq;
struct pci_eq_presets presets;
struct pci_config_window *cfg;
bool ecam_enabled;
bool native_ecam;
bool skip_l23_ready;
};
struct dw_pcie_ep_ops {
@@ -463,6 +471,12 @@ struct dw_pcie_ep_func {
u8 func_no;
u8 msi_cap; /* MSI capability offset */
u8 msix_cap; /* MSI-X capability offset */
u8 bar_to_atu[PCI_STD_NUM_BARS];
struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
/* Only for Address Match Mode inbound iATU */
u32 *ib_atu_indexes[PCI_STD_NUM_BARS];
unsigned int num_ib_atu_indexes[PCI_STD_NUM_BARS];
};
struct dw_pcie_ep {
@@ -472,13 +486,16 @@ struct dw_pcie_ep {
phys_addr_t phys_base;
size_t addr_size;
size_t page_size;
u8 bar_to_atu[PCI_STD_NUM_BARS];
phys_addr_t *outbound_addr;
unsigned long *ib_window_map;
unsigned long *ob_window_map;
void __iomem *msi_mem;
phys_addr_t msi_mem_phys;
struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
/* MSI outbound iATU state */
bool msi_iatu_mapped;
u64 msi_msg_addr;
size_t msi_map_size;
};
struct dw_pcie_ops {
@@ -493,7 +510,6 @@ struct dw_pcie_ops {
enum dw_pcie_ltssm (*get_ltssm)(struct dw_pcie *pcie);
int (*start_link)(struct dw_pcie *pcie);
void (*stop_link)(struct dw_pcie *pcie);
int (*assert_perst)(struct dw_pcie *pcie, bool assert);
};
struct debugfs_info {
@@ -562,6 +578,8 @@ void dw_pcie_version_detect(struct dw_pcie *pci);
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap);
void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci);
u16 dw_pcie_find_ptm_capability(struct dw_pcie *pci);
@@ -798,14 +816,6 @@ static inline void dw_pcie_stop_link(struct dw_pcie *pci)
pci->ops->stop_link(pci);
}
static inline int dw_pcie_assert_perst(struct dw_pcie *pci, bool assert)
{
if (pci->ops && pci->ops->assert_perst)
return pci->ops->assert_perst(pci, assert);
return 0;
}
static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
{
u32 val;
@@ -818,6 +828,8 @@ static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
return (enum dw_pcie_ltssm)FIELD_GET(PORT_LOGIC_LTSSM_STATE_MASK, val);
}
const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm);
#ifdef CONFIG_PCIE_DW_HOST
int dw_pcie_suspend_noirq(struct dw_pcie *pci);
int dw_pcie_resume_noirq(struct dw_pcie *pci);
@@ -896,7 +908,6 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
u16 interrupt_num);
void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar);
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap);
struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no);
#else
@@ -954,12 +965,6 @@ static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
{
}
static inline int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci,
u8 prev_cap, u8 cap)
{
return 0;
}
static inline struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no)
{

@@ -68,6 +68,11 @@
#define PCIE_CLKREQ_NOT_READY FIELD_PREP_WM16(BIT(0), 0)
#define PCIE_CLKREQ_PULL_DOWN FIELD_PREP_WM16(GENMASK(13, 12), 1)
/* RASDES TBA information */
#define PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN 0x154
#define PCIE_CLIENT_CDM_RASDES_TBA_L1_1 BIT(4)
#define PCIE_CLIENT_CDM_RASDES_TBA_L1_2 BIT(5)
/* Hot Reset Control Register */
#define PCIE_CLIENT_HOT_RESET_CTRL 0x180
#define PCIE_LTSSM_APP_DLY2_EN BIT(1)
@@ -80,6 +85,8 @@
#define PCIE_LINKUP_MASK GENMASK(17, 16)
#define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0)
#define PCIE_TYPE0_HDR_DBI2_OFFSET 0x100000
struct rockchip_pcie {
struct dw_pcie pci;
void __iomem *apb_base;
@@ -181,11 +188,26 @@ static int rockchip_pcie_init_irq_domain(struct rockchip_pcie *rockchip)
return 0;
}
static u32 rockchip_pcie_get_ltssm(struct rockchip_pcie *rockchip)
static u32 rockchip_pcie_get_ltssm_reg(struct rockchip_pcie *rockchip)
{
return rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS);
}
static enum dw_pcie_ltssm rockchip_pcie_get_ltssm(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
u32 val = rockchip_pcie_readl_apb(rockchip,
PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN);
if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_1)
return DW_PCIE_LTSSM_L1_1;
if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_2)
return DW_PCIE_LTSSM_L1_2;
return rockchip_pcie_get_ltssm_reg(rockchip) & PCIE_LTSSM_STATUS_MASK;
}
static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip)
{
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM,
@@ -201,7 +223,7 @@ static void rockchip_pcie_disable_ltssm(struct rockchip_pcie *rockchip)
static bool rockchip_pcie_link_up(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
u32 val = rockchip_pcie_get_ltssm(rockchip);
u32 val = rockchip_pcie_get_ltssm_reg(rockchip);
return FIELD_GET(PCIE_LINKUP_MASK, val) == PCIE_LINKUP;
}
@@ -292,6 +314,8 @@ static int rockchip_pcie_host_init(struct dw_pcie_rp *pp)
if (irq < 0)
return irq;
pci->dbi_base2 = pci->dbi_base + PCIE_TYPE0_HDR_DBI2_OFFSET;
ret = rockchip_pcie_init_irq_domain(rockchip);
if (ret < 0)
dev_err(dev, "failed to init irq domain\n");
@@ -302,6 +326,10 @@ static int rockchip_pcie_host_init(struct dw_pcie_rp *pp)
rockchip_pcie_configure_l1ss(pci);
rockchip_pcie_enable_l0s(pci);
/* Disable the Root Port's BAR0 and BAR1 as they report a bogus size */
dw_pcie_writel_dbi2(pci, PCI_BASE_ADDRESS_0, 0x0);
dw_pcie_writel_dbi2(pci, PCI_BASE_ADDRESS_1, 0x0);
return 0;
}
@@ -327,9 +355,7 @@ static void rockchip_pcie_ep_hide_broken_ats_cap_rk3588(struct dw_pcie_ep *ep)
if (!of_device_is_compatible(dev->of_node, "rockchip,rk3588-pcie-ep"))
return;
if (dw_pcie_ep_hide_ext_capability(pci, PCI_EXT_CAP_ID_SECPCI,
PCI_EXT_CAP_ID_ATS))
dev_err(dev, "failed to hide ATS capability\n");
dw_pcie_remove_ext_capability(pci, PCI_EXT_CAP_ID_ATS);
}
static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep)
@@ -364,6 +390,7 @@ static int rockchip_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
@@ -384,6 +411,7 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
* BARs) would be overwritten, resulting in (all other BARs) no longer working.
*/
static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
@@ -485,36 +513,9 @@ static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = rockchip_pcie_link_up,
.start_link = rockchip_pcie_start_link,
.stop_link = rockchip_pcie_stop_link,
.get_ltssm = rockchip_pcie_get_ltssm,
};
static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg)
{
struct rockchip_pcie *rockchip = arg;
struct dw_pcie *pci = &rockchip->pci;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = pci->dev;
u32 reg;
reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC);
rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg);
dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip));
if (reg & PCIE_RDLH_LINK_UP_CHGED) {
if (rockchip_pcie_link_up(pci)) {
msleep(PCIE_RESET_CONFIG_WAIT_MS);
dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
/* Rescan the bus to enumerate endpoint devices */
pci_lock_rescan_remove();
pci_rescan_bus(pp->bridge->bus);
pci_unlock_rescan_remove();
}
}
return IRQ_HANDLED;
}
static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
{
struct rockchip_pcie *rockchip = arg;
@@ -526,7 +527,7 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg);
dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip));
dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm_reg(rockchip));
if (reg & PCIE_LINK_REQ_RST_NOT_INT) {
dev_dbg(dev, "hot reset or link-down reset\n");
@@ -547,29 +548,14 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
return IRQ_HANDLED;
}
static int rockchip_pcie_configure_rc(struct platform_device *pdev,
struct rockchip_pcie *rockchip)
static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip)
{
struct device *dev = &pdev->dev;
struct dw_pcie_rp *pp;
int irq, ret;
u32 val;
if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST))
return -ENODEV;
irq = platform_get_irq_byname(pdev, "sys");
if (irq < 0)
return irq;
ret = devm_request_threaded_irq(dev, irq, NULL,
rockchip_pcie_rc_sys_irq_thread,
IRQF_ONESHOT, "pcie-sys-rc", rockchip);
if (ret) {
dev_err(dev, "failed to request PCIe sys IRQ\n");
return ret;
}
/* LTSSM enable control mode */
val = FIELD_PREP_WM16(PCIE_LTSSM_ENABLE_ENHANCE, 1);
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
@@ -580,19 +566,8 @@ static int rockchip_pcie_configure_rc(struct platform_device *pdev,
pp = &rockchip->pci.pp;
pp->ops = &rockchip_pcie_host_ops;
pp->use_linkup_irq = true;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
}
/* unmask DLL up/down indicator */
val = FIELD_PREP_WM16(PCIE_RDLH_LINK_UP_CHGED, 0);
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC);
return ret;
return dw_pcie_host_init(pp);
}
static int rockchip_pcie_configure_ep(struct platform_device *pdev,
@@ -711,7 +686,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
switch (data->mode) {
case DW_PCIE_RC_TYPE:
ret = rockchip_pcie_configure_rc(pdev, rockchip);
ret = rockchip_pcie_configure_rc(rockchip);
if (ret)
goto deinit_clk;
break;

@@ -309,6 +309,7 @@ static int keembay_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features keembay_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.msix_capable = true,
.bar[BAR_0] = { .only_64bit = true, },

@@ -282,12 +282,12 @@ static int s32g_pcie_parse_ports(struct device *dev, struct s32g_pcie *s32g_pp)
ret = s32g_pcie_parse_port(s32g_pp, of_port);
if (ret)
goto err_port;
break;
}
err_port:
list_for_each_entry_safe(port, tmp, &s32g_pp->ports, list)
list_del(&port->list);
if (ret)
list_for_each_entry_safe(port, tmp, &s32g_pp->ports, list)
list_del(&port->list);
return ret;
}

@@ -168,11 +168,13 @@ enum qcom_pcie_ep_link_status {
* @hdma_support: HDMA support on this SoC
* @override_no_snoop: Override NO_SNOOP attribute in TLP to enable cache snooping
* @disable_mhi_ram_parity_check: Disable MHI RAM data parity error check
* @firmware_managed: Set if the controller is firmware managed
*/
struct qcom_pcie_ep_cfg {
bool hdma_support;
bool override_no_snoop;
bool disable_mhi_ram_parity_check;
bool firmware_managed;
};
/**
@@ -377,6 +379,14 @@ err_disable_clk:
static void qcom_pcie_disable_resources(struct qcom_pcie_ep *pcie_ep)
{
struct device *dev = pcie_ep->pci.dev;
pm_runtime_put(dev);
/* Skip resource disablement if controller is firmware-managed */
if (pcie_ep->cfg && pcie_ep->cfg->firmware_managed)
return;
icc_set_bw(pcie_ep->icc_mem, 0, 0);
phy_power_off(pcie_ep->phy);
phy_exit(pcie_ep->phy);
@@ -390,12 +400,24 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
u32 val, offset;
int ret;
ret = qcom_pcie_enable_resources(pcie_ep);
if (ret) {
dev_err(dev, "Failed to enable resources: %d\n", ret);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
dev_err(dev, "Failed to enable device: %d\n", ret);
return ret;
}
/* Skip resource enablement if controller is firmware-managed */
if (pcie_ep->cfg && pcie_ep->cfg->firmware_managed)
goto skip_resources_enable;
ret = qcom_pcie_enable_resources(pcie_ep);
if (ret) {
dev_err(dev, "Failed to enable resources: %d\n", ret);
pm_runtime_put(dev);
return ret;
}
skip_resources_enable:
/* Perform cleanup that requires refclk */
pci_epc_deinit_notify(pci->ep.epc);
dw_pcie_ep_cleanup(&pci->ep);
@@ -630,6 +652,17 @@ static int qcom_pcie_ep_get_resources(struct platform_device *pdev,
return ret;
}
pcie_ep->reset = devm_gpiod_get(dev, "reset", GPIOD_IN);
if (IS_ERR(pcie_ep->reset))
return PTR_ERR(pcie_ep->reset);
pcie_ep->wake = devm_gpiod_get_optional(dev, "wake", GPIOD_OUT_LOW);
if (IS_ERR(pcie_ep->wake))
return PTR_ERR(pcie_ep->wake);
if (pcie_ep->cfg && pcie_ep->cfg->firmware_managed)
return 0;
pcie_ep->num_clks = devm_clk_bulk_get_all(dev, &pcie_ep->clks);
if (pcie_ep->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
@@ -640,14 +673,6 @@ static int qcom_pcie_ep_get_resources(struct platform_device *pdev,
if (IS_ERR(pcie_ep->core_reset))
return PTR_ERR(pcie_ep->core_reset);
pcie_ep->reset = devm_gpiod_get(dev, "reset", GPIOD_IN);
if (IS_ERR(pcie_ep->reset))
return PTR_ERR(pcie_ep->reset);
pcie_ep->wake = devm_gpiod_get_optional(dev, "wake", GPIOD_OUT_LOW);
if (IS_ERR(pcie_ep->wake))
return PTR_ERR(pcie_ep->wake);
pcie_ep->phy = devm_phy_optional_get(dev, "pciephy");
if (IS_ERR(pcie_ep->phy))
ret = PTR_ERR(pcie_ep->phy);
@@ -820,6 +845,7 @@ static void qcom_pcie_ep_init_debugfs(struct qcom_pcie_ep *pcie_ep)
}
static const struct pci_epc_features qcom_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.align = SZ_4K,
@@ -874,6 +900,12 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pcie_ep);
pm_runtime_get_noresume(dev);
pm_runtime_set_active(dev);
ret = devm_pm_runtime_enable(dev);
if (ret)
return ret;
ret = qcom_pcie_ep_get_resources(pdev, pcie_ep);
if (ret)
return ret;
@@ -894,6 +926,12 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
goto err_disable_irqs;
}
ret = pm_runtime_put_sync(dev);
if (ret < 0) {
dev_err(dev, "Failed to suspend device: %d\n", ret);
goto err_disable_irqs;
}
pcie_ep->debugfs = debugfs_create_dir(name, NULL);
qcom_pcie_ep_init_debugfs(pcie_ep);
@@ -930,7 +968,15 @@ static const struct qcom_pcie_ep_cfg cfg_1_34_0 = {
.disable_mhi_ram_parity_check = true,
};
static const struct qcom_pcie_ep_cfg cfg_1_34_0_fw_managed = {
.hdma_support = true,
.override_no_snoop = true,
.disable_mhi_ram_parity_check = true,
.firmware_managed = true,
};
static const struct of_device_id qcom_pcie_ep_match[] = {
{ .compatible = "qcom,sa8255p-pcie-ep", .data = &cfg_1_34_0_fw_managed},
{ .compatible = "qcom,sa8775p-pcie-ep", .data = &cfg_1_34_0},
{ .compatible = "qcom,sdx55-pcie-ep", },
{ .compatible = "qcom,sm8450-pcie-ep", },

@@ -24,6 +24,7 @@
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/pci-pwrctrl.h>
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
@@ -55,9 +56,6 @@
#define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8
#define PARF_Q2A_FLUSH 0x1ac
#define PARF_LTSSM 0x1b0
#define PARF_INT_ALL_STATUS 0x224
#define PARF_INT_ALL_CLEAR 0x228
#define PARF_INT_ALL_MASK 0x22c
#define PARF_SID_OFFSET 0x234
#define PARF_BDF_TRANSLATE_CFG 0x24c
#define PARF_DBI_BASE_ADDR_V2 0x350
@@ -134,10 +132,6 @@
/* PARF_LTSSM register fields */
#define LTSSM_EN BIT(8)
/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
#define PARF_INT_ALL_LINK_UP BIT(13)
#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)
/* PARF_NO_SNOOP_OVERRIDE register fields */
#define WR_NO_SNOOP_OVERRIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERRIDE_EN BIT(3)
@@ -267,10 +261,15 @@ struct qcom_pcie_cfg {
bool no_l0s;
};
struct qcom_pcie_perst {
struct list_head list;
struct gpio_desc *desc;
};
struct qcom_pcie_port {
struct list_head list;
struct gpio_desc *reset;
struct phy *phy;
struct list_head perst;
};
struct qcom_pcie {
@@ -289,27 +288,30 @@ struct qcom_pcie {
#define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
static void qcom_perst_assert(struct qcom_pcie *pcie, bool assert)
static void __qcom_pcie_perst_assert(struct qcom_pcie *pcie, bool assert)
{
struct qcom_pcie_perst *perst;
struct qcom_pcie_port *port;
int val = assert ? 1 : 0;
list_for_each_entry(port, &pcie->ports, list)
gpiod_set_value_cansleep(port->reset, val);
list_for_each_entry(port, &pcie->ports, list) {
list_for_each_entry(perst, &port->perst, list)
gpiod_set_value_cansleep(perst->desc, val);
}
usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
}
static void qcom_ep_reset_assert(struct qcom_pcie *pcie)
static void qcom_pcie_perst_assert(struct qcom_pcie *pcie)
{
qcom_perst_assert(pcie, true);
__qcom_pcie_perst_assert(pcie, true);
}
static void qcom_ep_reset_deassert(struct qcom_pcie *pcie)
static void qcom_pcie_perst_deassert(struct qcom_pcie *pcie)
{
/* Ensure that PERST has been asserted for at least 100 ms */
/* Ensure that PERST# has been asserted for at least 100 ms */
msleep(PCIE_T_PVPERL_MS);
qcom_perst_assert(pcie, false);
__qcom_pcie_perst_assert(pcie, false);
}
static int qcom_pcie_start_link(struct dw_pcie *pci)
@@ -641,18 +643,6 @@ static int qcom_pcie_post_init_1_0_0(struct qcom_pcie *pcie)
return 0;
}
static int qcom_pcie_assert_perst(struct dw_pcie *pci, bool assert)
{
struct qcom_pcie *pcie = to_qcom_pcie(pci);
if (assert)
qcom_ep_reset_assert(pcie);
else
qcom_ep_reset_deassert(pcie);
return 0;
}
static void qcom_pcie_2_3_2_ltssm_enable(struct qcom_pcie *pcie)
{
u32 val;
@@ -1299,7 +1289,7 @@ static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
struct qcom_pcie *pcie = to_qcom_pcie(pci);
int ret;
qcom_ep_reset_assert(pcie);
qcom_pcie_perst_assert(pcie);
ret = pcie->cfg->ops->init(pcie);
if (ret)
@@ -1309,15 +1299,25 @@ static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
if (ret)
goto err_deinit;
+ret = pci_pwrctrl_create_devices(pci->dev);
+if (ret)
+goto err_disable_phy;
+ret = pci_pwrctrl_power_on_devices(pci->dev);
+if (ret)
+goto err_pwrctrl_destroy;
if (pcie->cfg->ops->post_init) {
ret = pcie->cfg->ops->post_init(pcie);
if (ret)
-goto err_disable_phy;
+goto err_pwrctrl_power_off;
}
qcom_pcie_clear_aspm_l0s(pcie->pci);
dw_pcie_remove_capability(pcie->pci, PCI_CAP_ID_MSIX);
dw_pcie_remove_ext_capability(pcie->pci, PCI_EXT_CAP_ID_DPC);
-qcom_ep_reset_deassert(pcie);
+qcom_pcie_perst_deassert(pcie);
if (pcie->cfg->ops->config_sid) {
ret = pcie->cfg->ops->config_sid(pcie);
@ -1328,7 +1328,12 @@ static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
return 0;
err_assert_reset:
-qcom_ep_reset_assert(pcie);
+qcom_pcie_perst_assert(pcie);
+err_pwrctrl_power_off:
+pci_pwrctrl_power_off_devices(pci->dev);
+err_pwrctrl_destroy:
+if (ret != -EPROBE_DEFER)
+pci_pwrctrl_destroy_devices(pci->dev);
err_disable_phy:
qcom_pcie_phy_power_off(pcie);
err_deinit:
@ -1342,7 +1347,13 @@ static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp)
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct qcom_pcie *pcie = to_qcom_pcie(pci);
-qcom_ep_reset_assert(pcie);
+qcom_pcie_perst_assert(pcie);
+/*
+ * No need to destroy pwrctrl devices as this function only gets called
+ * during system suspend as of now.
+ */
+pci_pwrctrl_power_off_devices(pci->dev);
qcom_pcie_phy_power_off(pcie);
pcie->cfg->ops->deinit(pcie);
}
@ -1496,7 +1507,6 @@ static const struct qcom_pcie_cfg cfg_fw_managed = {
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = qcom_pcie_link_up,
.start_link = qcom_pcie_start_link,
-.assert_perst = qcom_pcie_assert_perst,
};
static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
@ -1635,37 +1645,11 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
qcom_pcie_link_transition_count);
}
-static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
-{
-struct qcom_pcie *pcie = data;
-struct dw_pcie_rp *pp = &pcie->pci->pp;
-struct device *dev = pcie->pci->dev;
-u32 status = readl_relaxed(pcie->parf + PARF_INT_ALL_STATUS);
-writel_relaxed(status, pcie->parf + PARF_INT_ALL_CLEAR);
-if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) {
-msleep(PCIE_RESET_CONFIG_WAIT_MS);
-dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
-/* Rescan the bus to enumerate endpoint devices */
-pci_lock_rescan_remove();
-pci_rescan_bus(pp->bridge->bus);
-pci_unlock_rescan_remove();
-qcom_pcie_icc_opp_update(pcie);
-} else {
-dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n",
-status);
-}
-return IRQ_HANDLED;
-}
static void qcom_pci_free_msi(void *ptr)
{
struct dw_pcie_rp *pp = (struct dw_pcie_rp *)ptr;
-if (pp && pp->has_msi_ctrl)
+if (pp && pp->use_imsi_rx)
dw_pcie_free_msi(pp);
}
@ -1689,7 +1673,7 @@ static int qcom_pcie_ecam_host_init(struct pci_config_window *cfg)
if (ret)
return ret;
-pp->has_msi_ctrl = true;
+pp->use_imsi_rx = true;
dw_pcie_msi_init(pp);
return devm_add_action_or_reset(dev, qcom_pci_free_msi, pp);
@ -1704,19 +1688,59 @@ static const struct pci_ecam_ops pci_qcom_ecam_ops = {
}
};
/* Parse PERST# from all nodes in depth first manner starting from @np */
static int qcom_pcie_parse_perst(struct qcom_pcie *pcie,
struct qcom_pcie_port *port,
struct device_node *np)
{
struct device *dev = pcie->pci->dev;
struct qcom_pcie_perst *perst;
struct gpio_desc *reset;
int ret;
if (!of_find_property(np, "reset-gpios", NULL))
goto parse_child_node;
reset = devm_fwnode_gpiod_get(dev, of_fwnode_handle(np), "reset",
GPIOD_OUT_HIGH, "PERST#");
if (IS_ERR(reset)) {
/*
* FIXME: GPIOLIB currently supports exclusive GPIO access only.
* Non exclusive access is broken. But shared PERST# requires
* non-exclusive access. So once GPIOLIB properly supports it,
* implement it here.
*/
if (PTR_ERR(reset) == -EBUSY)
dev_err(dev, "Shared PERST# is not supported\n");
return PTR_ERR(reset);
}
perst = devm_kzalloc(dev, sizeof(*perst), GFP_KERNEL);
if (!perst)
return -ENOMEM;
INIT_LIST_HEAD(&perst->list);
perst->desc = reset;
list_add_tail(&perst->list, &port->perst);
parse_child_node:
for_each_available_child_of_node_scoped(np, child) {
ret = qcom_pcie_parse_perst(pcie, port, child);
if (ret)
return ret;
}
return 0;
}
static int qcom_pcie_parse_port(struct qcom_pcie *pcie, struct device_node *node)
{
struct device *dev = pcie->pci->dev;
struct qcom_pcie_port *port;
-struct gpio_desc *reset;
struct phy *phy;
int ret;
-reset = devm_fwnode_gpiod_get(dev, of_fwnode_handle(node),
-"reset", GPIOD_OUT_HIGH, "PERST#");
-if (IS_ERR(reset))
-return PTR_ERR(reset);
phy = devm_of_phy_get(dev, node, NULL);
if (IS_ERR(phy))
return PTR_ERR(phy);
@ -1729,7 +1753,12 @@ static int qcom_pcie_parse_port(struct qcom_pcie *pcie, struct device_node *node
if (ret)
return ret;
-port->reset = reset;
+INIT_LIST_HEAD(&port->perst);
+ret = qcom_pcie_parse_perst(pcie, port, node);
+if (ret)
+return ret;
port->phy = phy;
INIT_LIST_HEAD(&port->list);
list_add_tail(&port->list, &pcie->ports);
@ -1739,9 +1768,10 @@ static int qcom_pcie_parse_port(struct qcom_pcie *pcie, struct device_node *node
static int qcom_pcie_parse_ports(struct qcom_pcie *pcie)
{
+struct qcom_pcie_perst *perst, *tmp_perst;
+struct qcom_pcie_port *port, *tmp_port;
struct device *dev = pcie->pci->dev;
-struct qcom_pcie_port *port, *tmp;
-int ret = -ENOENT;
+int ret = -ENODEV;
for_each_available_child_of_node_scoped(dev->of_node, of_port) {
if (!of_node_is_type(of_port, "pci"))
@ -1754,7 +1784,9 @@ static int qcom_pcie_parse_ports(struct qcom_pcie *pcie)
return ret;
err_port_del:
-list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+list_for_each_entry_safe(port, tmp_port, &pcie->ports, list) {
+list_for_each_entry_safe(perst, tmp_perst, &port->perst, list)
+list_del(&perst->list);
phy_exit(port->phy);
list_del(&port->list);
}
@ -1765,6 +1797,7 @@ err_port_del:
static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie)
{
struct device *dev = pcie->pci->dev;
struct qcom_pcie_perst *perst;
struct qcom_pcie_port *port;
struct gpio_desc *reset;
struct phy *phy;
@ -1786,27 +1819,35 @@ static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie)
if (!port)
return -ENOMEM;
-port->reset = reset;
+perst = devm_kzalloc(dev, sizeof(*perst), GFP_KERNEL);
+if (!perst)
+return -ENOMEM;
port->phy = phy;
INIT_LIST_HEAD(&port->list);
list_add_tail(&port->list, &pcie->ports);
+perst->desc = reset;
+INIT_LIST_HEAD(&port->perst);
+INIT_LIST_HEAD(&perst->list);
+list_add_tail(&perst->list, &port->perst);
return 0;
}
static int qcom_pcie_probe(struct platform_device *pdev)
{
+struct qcom_pcie_perst *perst, *tmp_perst;
+struct qcom_pcie_port *port, *tmp_port;
const struct qcom_pcie_cfg *pcie_cfg;
unsigned long max_freq = ULONG_MAX;
-struct qcom_pcie_port *port, *tmp;
struct device *dev = &pdev->dev;
struct dev_pm_opp *opp;
struct qcom_pcie *pcie;
struct dw_pcie_rp *pp;
struct resource *res;
struct dw_pcie *pci;
int ret, irq;
-char *name;
int ret;
pcie_cfg = of_device_get_match_data(dev);
if (!pcie_cfg) {
@ -1939,7 +1980,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
ret = qcom_pcie_parse_ports(pcie);
if (ret) {
-if (ret != -ENOENT) {
+if (ret != -ENODEV) {
dev_err_probe(pci->dev, ret,
"Failed to parse Root Port: %d\n", ret);
goto err_pm_runtime_put;
@ -1957,37 +1998,12 @@ static int qcom_pcie_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pcie);
irq = platform_get_irq_byname_optional(pdev, "global");
if (irq > 0)
pp->use_linkup_irq = true;
ret = dw_pcie_host_init(pp);
if (ret) {
-dev_err(dev, "cannot initialize host\n");
+dev_err_probe(dev, ret, "cannot initialize host\n");
goto err_phy_exit;
}
-name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_global_irq%d",
-pci_domain_nr(pp->bridge->bus));
-if (!name) {
-ret = -ENOMEM;
-goto err_host_deinit;
-}
-if (irq > 0) {
-ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
-qcom_pcie_global_irq_thread,
-IRQF_ONESHOT, name, pcie);
-if (ret) {
-dev_err_probe(&pdev->dev, ret,
-"Failed to request Global IRQ\n");
-goto err_host_deinit;
-}
-writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7,
-pcie->parf + PARF_INT_ALL_MASK);
-}
qcom_pcie_icc_opp_update(pcie);
if (pcie->mhi)
@ -1995,10 +2011,10 @@ static int qcom_pcie_probe(struct platform_device *pdev)
return 0;
-err_host_deinit:
-dw_pcie_host_deinit(pp);
err_phy_exit:
-list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+list_for_each_entry_safe(port, tmp_port, &pcie->ports, list) {
+list_for_each_entry_safe(perst, tmp_perst, &port->perst, list)
+list_del(&perst->list);
phy_exit(port->phy);
list_del(&port->list);
}


@ -420,6 +420,7 @@ static int rcar_gen4_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },


@ -161,6 +161,22 @@ static void sophgo_pcie_msi_enable(struct dw_pcie_rp *pp)
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
static void sophgo_pcie_disable_l0s_l1(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
u32 offset, val;
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
dw_pcie_dbi_ro_wr_en(pci);
val = dw_pcie_readl_dbi(pci, PCI_EXP_LNKCAP + offset);
val &= ~(PCI_EXP_LNKCAP_ASPM_L0S | PCI_EXP_LNKCAP_ASPM_L1);
dw_pcie_writel_dbi(pci, PCI_EXP_LNKCAP + offset, val);
dw_pcie_dbi_ro_wr_dis(pci);
}
static int sophgo_pcie_host_init(struct dw_pcie_rp *pp)
{
int irq;
@ -171,6 +187,8 @@ static int sophgo_pcie_host_init(struct dw_pcie_rp *pp)
irq_set_chained_handler_and_data(irq, sophgo_pcie_intx_handler, pp);
sophgo_pcie_disable_l0s_l1(pp);
sophgo_pcie_msi_enable(pp);
return 0;


@ -70,6 +70,7 @@ static int stm32_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features stm32_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.align = SZ_64K,
};


@ -1988,6 +1988,7 @@ static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features tegra_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M,


@ -420,6 +420,7 @@ static const struct uniphier_pcie_ep_soc_data uniphier_pro5_data = {
.init = uniphier_pcie_pro5_init_ep,
.wait = NULL,
.features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,
@ -438,6 +439,7 @@ static const struct uniphier_pcie_ep_soc_data uniphier_nx1_data = {
.init = uniphier_pcie_nx1_init_ep,
.wait = uniphier_pcie_nx1_wait_ep,
.features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,


@ -32,7 +32,7 @@ struct pci_config_window *pci_host_common_ecam_create(struct device *dev,
err = of_address_to_resource(dev->of_node, 0, &cfgres);
if (err) {
-dev_err(dev, "missing \"reg\" property\n");
+dev_err(dev, "missing or malformed \"reg\" property\n");
return ERR_PTR(err);
}


@ -2545,12 +2545,6 @@ static const struct seq_operations tegra_pcie_ports_sops = {
DEFINE_SEQ_ATTRIBUTE(tegra_pcie_ports);
static void tegra_pcie_debugfs_exit(struct tegra_pcie *pcie)
{
debugfs_remove_recursive(pcie->debugfs);
pcie->debugfs = NULL;
}
static void tegra_pcie_debugfs_init(struct tegra_pcie *pcie)
{
pcie->debugfs = debugfs_create_dir("pcie", NULL);
@ -2624,29 +2618,6 @@ put_resources:
return err;
}
static void tegra_pcie_remove(struct platform_device *pdev)
{
struct tegra_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct tegra_pcie_port *port, *tmp;
if (IS_ENABLED(CONFIG_DEBUG_FS))
tegra_pcie_debugfs_exit(pcie);
pci_stop_root_bus(host->bus);
pci_remove_root_bus(host->bus);
pm_runtime_put_sync(pcie->dev);
pm_runtime_disable(pcie->dev);
if (IS_ENABLED(CONFIG_PCI_MSI))
tegra_pcie_msi_teardown(pcie);
tegra_pcie_put_resources(pcie);
list_for_each_entry_safe(port, tmp, &pcie->ports, list)
tegra_pcie_port_free(port);
}
static int tegra_pcie_pm_suspend(struct device *dev)
{
struct tegra_pcie *pcie = dev_get_drvdata(dev);
@ -2750,6 +2721,8 @@ static struct platform_driver tegra_pcie_driver = {
.pm = &tegra_pcie_pm_ops,
},
.probe = tegra_pcie_probe,
.remove = tegra_pcie_remove,
};
module_platform_driver(tegra_pcie_driver);
builtin_platform_driver(tegra_pcie_driver);
MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA PCI host controller driver");
MODULE_LICENSE("GPL");

File diff suppressed because it is too large


@ -585,8 +585,10 @@ static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port,
if (IS_ENABLED(CONFIG_PCI_MSI)) {
ret = mtk_pcie_allocate_msi_domains(port);
-if (ret)
+if (ret) {
+irq_domain_remove(port->irq_domain);
return ret;
+}
}
return 0;


@ -73,6 +73,7 @@
#define RZG3S_PCI_PINTRCVIE_INTX(i) BIT(i)
#define RZG3S_PCI_PINTRCVIE_MSI BIT(4)
/* Register is R/W1C, it doesn't require locking. */
#define RZG3S_PCI_PINTRCVIS 0x114
#define RZG3S_PCI_PINTRCVIS_INTX(i) BIT(i)
#define RZG3S_PCI_PINTRCVIS_MSI BIT(4)
@ -114,6 +115,8 @@
#define RZG3S_PCI_MSIRE_ENA BIT(0)
#define RZG3S_PCI_MSIRM(id) (0x608 + (id) * 0x10)
/* Register is R/W1C, it doesn't require locking. */
#define RZG3S_PCI_MSIRS(id) (0x60c + (id) * 0x10)
#define RZG3S_PCI_AWBASEL(id) (0x1000 + (id) * 0x20)
@ -439,28 +442,9 @@ static void __iomem *rzg3s_pcie_root_map_bus(struct pci_bus *bus,
return host->pcie + where;
}
-/* Serialized by 'pci_lock' */
-static int rzg3s_pcie_root_write(struct pci_bus *bus, unsigned int devfn,
-int where, int size, u32 val)
-{
-struct rzg3s_pcie_host *host = bus->sysdata;
-int ret;
-/* Enable access control to the CFGU */
-writel_relaxed(RZG3S_PCI_PERM_CFG_HWINIT_EN,
-host->axi + RZG3S_PCI_PERM);
-ret = pci_generic_config_write(bus, devfn, where, size, val);
-/* Disable access control to the CFGU */
-writel_relaxed(0, host->axi + RZG3S_PCI_PERM);
-return ret;
-}
static struct pci_ops rzg3s_pcie_root_ops = {
.read = pci_generic_config_read,
-.write = rzg3s_pcie_root_write,
+.write = pci_generic_config_write,
.map_bus = rzg3s_pcie_root_map_bus,
};
@ -526,8 +510,6 @@ static void rzg3s_pcie_msi_irq_ack(struct irq_data *d)
u8 reg_bit = d->hwirq % RZG3S_PCI_MSI_INT_PER_REG;
u8 reg_id = d->hwirq / RZG3S_PCI_MSI_INT_PER_REG;
-guard(raw_spinlock_irqsave)(&host->hw_lock);
writel_relaxed(BIT(reg_bit), host->axi + RZG3S_PCI_MSIRS(reg_id));
}
@ -859,8 +841,6 @@ static void rzg3s_pcie_intx_irq_ack(struct irq_data *d)
{
struct rzg3s_pcie_host *host = irq_data_get_irq_chip_data(d);
-guard(raw_spinlock_irqsave)(&host->hw_lock);
rzg3s_pcie_update_bits(host->axi, RZG3S_PCI_PINTRCVIS,
RZG3S_PCI_PINTRCVIS_INTX(d->hwirq),
RZG3S_PCI_PINTRCVIS_INTX(d->hwirq));
@ -1065,14 +1045,14 @@ static int rzg3s_pcie_config_init(struct rzg3s_pcie_host *host)
writel_relaxed(0xffffffff, host->pcie + RZG3S_PCI_CFG_BARMSK00L);
writel_relaxed(0xffffffff, host->pcie + RZG3S_PCI_CFG_BARMSK00U);
-/* Disable access control to the CFGU */
-writel_relaxed(0, host->axi + RZG3S_PCI_PERM);
/* Update bus info */
writeb_relaxed(primary_bus, host->pcie + PCI_PRIMARY_BUS);
writeb_relaxed(secondary_bus, host->pcie + PCI_SECONDARY_BUS);
writeb_relaxed(subordinate_bus, host->pcie + PCI_SUBORDINATE_BUS);
+/* Disable access control to the CFGU */
+writel_relaxed(0, host->axi + RZG3S_PCI_PERM);
return 0;
}
@ -1162,7 +1142,8 @@ static int rzg3s_pcie_resets_prepare_and_get(struct rzg3s_pcie_host *host)
static int rzg3s_pcie_host_parse_port(struct rzg3s_pcie_host *host)
{
-struct device_node *of_port = of_get_next_child(host->dev->of_node, NULL);
+struct device_node *of_port __free(device_node) =
+of_get_next_child(host->dev->of_node, NULL);
struct rzg3s_pcie_port *port = &host->port;
int ret;


@ -302,9 +302,10 @@ static int xilinx_allocate_msi_domains(struct xilinx_pcie *pcie)
return 0;
}
-static void xilinx_free_msi_domains(struct xilinx_pcie *pcie)
+static void xilinx_free_irq_domains(struct xilinx_pcie *pcie)
{
irq_domain_remove(pcie->msi_domain);
+irq_domain_remove(pcie->leg_domain);
}
/* INTx Functions */
@ -480,8 +481,10 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie *pcie)
phys_addr_t pa = ALIGN_DOWN(virt_to_phys(pcie), SZ_4K);
ret = xilinx_allocate_msi_domains(pcie);
-if (ret)
+if (ret) {
+irq_domain_remove(pcie->leg_domain);
return ret;
+}
pcie_write(pcie, upper_32_bits(pa), XILINX_PCIE_REG_MSIBASE1);
pcie_write(pcie, lower_32_bits(pa), XILINX_PCIE_REG_MSIBASE2);
@ -600,7 +603,7 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
err = pci_host_probe(bridge);
if (err)
-xilinx_free_msi_domains(pcie);
+xilinx_free_irq_domains(pcie);
return err;
}


@ -55,7 +55,7 @@ struct starfive_jh7110_pcie {
struct reset_control *resets;
struct clk_bulk_data *clks;
struct regmap *reg_syscon;
-struct gpio_desc *power_gpio;
+struct regulator *vpcie3v3;
struct gpio_desc *reset_gpio;
struct phy *phy;
@ -153,11 +153,13 @@ static int starfive_pcie_parse_dt(struct starfive_jh7110_pcie *pcie,
return dev_err_probe(dev, PTR_ERR(pcie->reset_gpio),
"failed to get perst-gpio\n");
-pcie->power_gpio = devm_gpiod_get_optional(dev, "enable",
-GPIOD_OUT_LOW);
-if (IS_ERR(pcie->power_gpio))
-return dev_err_probe(dev, PTR_ERR(pcie->power_gpio),
-"failed to get power-gpio\n");
+pcie->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
+if (IS_ERR(pcie->vpcie3v3)) {
+if (PTR_ERR(pcie->vpcie3v3) != -ENODEV)
+return dev_err_probe(dev, PTR_ERR(pcie->vpcie3v3),
+"failed to get vpcie3v3 regulator\n");
+pcie->vpcie3v3 = NULL;
+}
return 0;
}
@ -270,8 +272,8 @@ static void starfive_pcie_host_deinit(struct plda_pcie_rp *plda)
container_of(plda, struct starfive_jh7110_pcie, plda);
starfive_pcie_clk_rst_deinit(pcie);
-if (pcie->power_gpio)
-gpiod_set_value_cansleep(pcie->power_gpio, 0);
+if (pcie->vpcie3v3)
+regulator_disable(pcie->vpcie3v3);
starfive_pcie_disable_phy(pcie);
}
@ -304,8 +306,11 @@ static int starfive_pcie_host_init(struct plda_pcie_rp *plda)
if (ret)
return ret;
-if (pcie->power_gpio)
-gpiod_set_value_cansleep(pcie->power_gpio, 1);
+if (pcie->vpcie3v3) {
+ret = regulator_enable(pcie->vpcie3v3);
+if (ret)
+dev_err_probe(dev, ret, "failed to enable vpcie3v3 regulator\n");
+}
if (pcie->reset_gpio)
gpiod_set_value_cansleep(pcie->reset_gpio, 1);


@ -469,9 +469,6 @@ static int pcim_add_mapping_to_legacy_table(struct pci_dev *pdev,
if (!legacy_iomap_table)
return -ENOMEM;
-/* The legacy mechanism doesn't allow for duplicate mappings. */
-WARN_ON(legacy_iomap_table[bar]);
legacy_iomap_table[bar] = mapping;
return 0;


@ -686,7 +686,7 @@ static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
goto err_release_tx;
}
-epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", 0, 0);
+epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", WQ_PERCPU, 0);
if (!epf_mhi->dma_wq) {
ret = -ENOMEM;
goto err_release_rx;


@ -2124,8 +2124,13 @@ static int __init epf_ntb_init(void)
{
int ret;
-kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM |
-WQ_HIGHPRI, 0);
+kpcintb_workqueue = alloc_workqueue("kpcintb",
+WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0);
if (!kpcintb_workqueue) {
pr_err("Failed to allocate kpcintb workqueue\n");
return -ENOMEM;
}
ret = pci_epf_register_driver(&epf_ntb_driver);
if (ret) {
destroy_workqueue(kpcintb_workqueue);


@ -33,6 +33,8 @@
#define COMMAND_COPY BIT(5)
#define COMMAND_ENABLE_DOORBELL BIT(6)
#define COMMAND_DISABLE_DOORBELL BIT(7)
#define COMMAND_BAR_SUBRANGE_SETUP BIT(8)
#define COMMAND_BAR_SUBRANGE_CLEAR BIT(9)
#define STATUS_READ_SUCCESS BIT(0)
#define STATUS_READ_FAIL BIT(1)
@ -48,6 +50,10 @@
#define STATUS_DOORBELL_ENABLE_FAIL BIT(11)
#define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12)
#define STATUS_DOORBELL_DISABLE_FAIL BIT(13)
#define STATUS_BAR_SUBRANGE_SETUP_SUCCESS BIT(14)
#define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15)
#define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16)
#define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17)
#define FLAG_USE_DMA BIT(0)
@ -57,12 +63,16 @@
#define CAP_MSI BIT(1)
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
#define CAP_SUBRANGE_MAPPING BIT(4)
#define PCI_EPF_TEST_BAR_SUBRANGE_NSUB 2
static struct workqueue_struct *kpcitest_workqueue;
struct pci_epf_test {
void *reg[PCI_STD_NUM_BARS];
struct pci_epf *epf;
struct config_group group;
enum pci_barno test_reg_bar;
size_t msix_table_offset;
struct delayed_work cmd_handler;
@ -76,6 +86,7 @@ struct pci_epf_test {
bool dma_private;
const struct pci_epc_features *epc_features;
struct pci_epf_bar db_bar;
size_t bar_size[PCI_STD_NUM_BARS];
};
struct pci_epf_test_reg {
@ -102,7 +113,8 @@ static struct pci_epf_header test_header = {
.interrupt_pin = PCI_INTERRUPT_INTA,
};
-static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 };
+/* default BAR sizes, can be overridden by the user using configfs */
+static size_t default_bar_size[] = { 131072, 131072, 131072, 131072, 131072, 1048576 };
static void pci_epf_test_dma_callback(void *param)
{
@ -806,6 +818,155 @@ set_status_err:
reg->status = cpu_to_le32(status);
}
static u8 pci_epf_test_subrange_sig_byte(enum pci_barno barno,
unsigned int subno)
{
return 0x50 + (barno * 8) + subno;
}
static void pci_epf_test_bar_subrange_setup(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
struct pci_epf_bar_submap *submap, *old_submap;
struct pci_epf *epf = epf_test->epf;
struct pci_epc *epc = epf->epc;
struct pci_epf_bar *bar;
unsigned int nsub = PCI_EPF_TEST_BAR_SUBRANGE_NSUB, old_nsub;
/* reg->size carries BAR number for BAR_SUBRANGE_* commands. */
enum pci_barno barno = le32_to_cpu(reg->size);
u32 status = le32_to_cpu(reg->status);
unsigned int i, phys_idx;
size_t sub_size;
u8 *addr;
int ret;
if (barno >= PCI_STD_NUM_BARS) {
dev_err(&epf->dev, "Invalid barno: %d\n", barno);
goto err;
}
/* Host side should've avoided test_reg_bar, this is a safeguard. */
if (barno == epf_test->test_reg_bar) {
dev_err(&epf->dev, "test_reg_bar cannot be used for subrange test\n");
goto err;
}
if (!epf_test->epc_features->dynamic_inbound_mapping ||
!epf_test->epc_features->subrange_mapping) {
dev_err(&epf->dev, "epc driver does not support subrange mapping\n");
goto err;
}
bar = &epf->bar[barno];
if (!bar->size || !bar->addr) {
dev_err(&epf->dev, "bar size/addr (%zu/%p) is invalid\n",
bar->size, bar->addr);
goto err;
}
if (bar->size % nsub) {
dev_err(&epf->dev, "BAR size %zu is not divisible by %u\n",
bar->size, nsub);
goto err;
}
sub_size = bar->size / nsub;
submap = kcalloc(nsub, sizeof(*submap), GFP_KERNEL);
if (!submap)
goto err;
for (i = 0; i < nsub; i++) {
/* Swap the two halves so RC can verify ordering. */
phys_idx = i ^ 1;
submap[i].phys_addr = bar->phys_addr + (phys_idx * sub_size);
submap[i].size = sub_size;
}
old_submap = bar->submap;
old_nsub = bar->num_submap;
bar->submap = submap;
bar->num_submap = nsub;
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);
if (ret) {
dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);
bar->submap = old_submap;
bar->num_submap = old_nsub;
kfree(submap);
goto err;
}
kfree(old_submap);
/*
* Fill deterministic signatures into the physical regions that
* each BAR subrange maps to. RC verifies these to ensure the
* submap order is really applied.
*/
addr = (u8 *)bar->addr;
for (i = 0; i < nsub; i++) {
phys_idx = i ^ 1;
memset(addr + (phys_idx * sub_size),
pci_epf_test_subrange_sig_byte(barno, i),
sub_size);
}
status |= STATUS_BAR_SUBRANGE_SETUP_SUCCESS;
reg->status = cpu_to_le32(status);
return;
err:
status |= STATUS_BAR_SUBRANGE_SETUP_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_bar_subrange_clear(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
struct pci_epf *epf = epf_test->epf;
struct pci_epf_bar_submap *submap;
struct pci_epc *epc = epf->epc;
/* reg->size carries BAR number for BAR_SUBRANGE_* commands. */
enum pci_barno barno = le32_to_cpu(reg->size);
u32 status = le32_to_cpu(reg->status);
struct pci_epf_bar *bar;
unsigned int nsub;
int ret;
if (barno >= PCI_STD_NUM_BARS) {
dev_err(&epf->dev, "Invalid barno: %d\n", barno);
goto err;
}
bar = &epf->bar[barno];
submap = bar->submap;
nsub = bar->num_submap;
if (!submap || !nsub)
goto err;
bar->submap = NULL;
bar->num_submap = 0;
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);
if (ret) {
bar->submap = submap;
bar->num_submap = nsub;
dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);
goto err;
}
kfree(submap);
status |= STATUS_BAR_SUBRANGE_CLEAR_SUCCESS;
reg->status = cpu_to_le32(status);
return;
err:
status |= STATUS_BAR_SUBRANGE_CLEAR_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_cmd_handler(struct work_struct *work)
{
u32 command;
@ -861,6 +1022,14 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
pci_epf_test_disable_doorbell(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_BAR_SUBRANGE_SETUP:
pci_epf_test_bar_subrange_setup(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_BAR_SUBRANGE_CLEAR:
pci_epf_test_bar_subrange_clear(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
default:
dev_err(dev, "Invalid command 0x%x\n", command);
break;
@ -933,6 +1102,10 @@ static void pci_epf_test_set_capabilities(struct pci_epf *epf)
if (epf_test->epc_features->intx_capable)
caps |= CAP_INTX;
if (epf_test->epc_features->dynamic_inbound_mapping &&
epf_test->epc_features->subrange_mapping)
caps |= CAP_SUBRANGE_MAPPING;
reg->caps = cpu_to_le32(caps);
}
@ -1070,7 +1243,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
if (epc_features->bar[bar].type == BAR_FIXED)
test_reg_size = epc_features->bar[bar].fixed_size;
else
-test_reg_size = bar_size[bar];
+test_reg_size = epf_test->bar_size[bar];
base = pci_epf_alloc_space(epf, test_reg_size, bar,
epc_features, PRIMARY_INTERFACE);
@ -1142,6 +1315,94 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
pci_epf_test_free_space(epf);
}
#define PCI_EPF_TEST_BAR_SIZE_R(_name, _id) \
static ssize_t pci_epf_test_##_name##_show(struct config_item *item, \
char *page) \
{ \
struct config_group *group = to_config_group(item); \
struct pci_epf_test *epf_test = \
container_of(group, struct pci_epf_test, group); \
\
return sysfs_emit(page, "%zu\n", epf_test->bar_size[_id]); \
}
#define PCI_EPF_TEST_BAR_SIZE_W(_name, _id) \
static ssize_t pci_epf_test_##_name##_store(struct config_item *item, \
const char *page, \
size_t len) \
{ \
struct config_group *group = to_config_group(item); \
struct pci_epf_test *epf_test = \
container_of(group, struct pci_epf_test, group); \
int val, ret; \
\
/* \
* BAR sizes can only be modified before binding to an EPC, \
* because pci_epf_test_alloc_space() is called in .bind(). \
*/ \
if (epf_test->epf->epc) \
return -EOPNOTSUPP; \
\
ret = kstrtouint(page, 0, &val); \
if (ret) \
return ret; \
\
if (!is_power_of_2(val)) \
return -EINVAL; \
\
epf_test->bar_size[_id] = val; \
\
return len; \
}
PCI_EPF_TEST_BAR_SIZE_R(bar0_size, BAR_0)
PCI_EPF_TEST_BAR_SIZE_W(bar0_size, BAR_0)
PCI_EPF_TEST_BAR_SIZE_R(bar1_size, BAR_1)
PCI_EPF_TEST_BAR_SIZE_W(bar1_size, BAR_1)
PCI_EPF_TEST_BAR_SIZE_R(bar2_size, BAR_2)
PCI_EPF_TEST_BAR_SIZE_W(bar2_size, BAR_2)
PCI_EPF_TEST_BAR_SIZE_R(bar3_size, BAR_3)
PCI_EPF_TEST_BAR_SIZE_W(bar3_size, BAR_3)
PCI_EPF_TEST_BAR_SIZE_R(bar4_size, BAR_4)
PCI_EPF_TEST_BAR_SIZE_W(bar4_size, BAR_4)
PCI_EPF_TEST_BAR_SIZE_R(bar5_size, BAR_5)
PCI_EPF_TEST_BAR_SIZE_W(bar5_size, BAR_5)
CONFIGFS_ATTR(pci_epf_test_, bar0_size);
CONFIGFS_ATTR(pci_epf_test_, bar1_size);
CONFIGFS_ATTR(pci_epf_test_, bar2_size);
CONFIGFS_ATTR(pci_epf_test_, bar3_size);
CONFIGFS_ATTR(pci_epf_test_, bar4_size);
CONFIGFS_ATTR(pci_epf_test_, bar5_size);
static struct configfs_attribute *pci_epf_test_attrs[] = {
&pci_epf_test_attr_bar0_size,
&pci_epf_test_attr_bar1_size,
&pci_epf_test_attr_bar2_size,
&pci_epf_test_attr_bar3_size,
&pci_epf_test_attr_bar4_size,
&pci_epf_test_attr_bar5_size,
NULL,
};
static const struct config_item_type pci_epf_test_group_type = {
.ct_attrs = pci_epf_test_attrs,
.ct_owner = THIS_MODULE,
};
static struct config_group *pci_epf_test_add_cfs(struct pci_epf *epf,
struct config_group *group)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct config_group *epf_group = &epf_test->group;
struct device *dev = &epf->dev;
config_group_init_type_name(epf_group, dev_name(dev),
&pci_epf_test_group_type);
return epf_group;
}
static const struct pci_epf_device_id pci_epf_test_ids[] = {
{
.name = "pci_epf_test",
@ -1154,6 +1415,7 @@ static int pci_epf_test_probe(struct pci_epf *epf,
{
struct pci_epf_test *epf_test;
struct device *dev = &epf->dev;
enum pci_barno bar;
epf_test = devm_kzalloc(dev, sizeof(*epf_test), GFP_KERNEL);
if (!epf_test)
@ -1161,6 +1423,8 @@ static int pci_epf_test_probe(struct pci_epf *epf,
epf->header = &test_header;
epf_test->epf = epf;
for (bar = BAR_0; bar < PCI_STD_NUM_BARS; bar++)
epf_test->bar_size[bar] = default_bar_size[bar];
INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler);
@ -1173,6 +1437,7 @@ static int pci_epf_test_probe(struct pci_epf *epf,
static const struct pci_epf_ops ops = {
.unbind = pci_epf_test_unbind,
.bind = pci_epf_test_bind,
.add_cfs = pci_epf_test_add_cfs,
};
static struct pci_epf_driver test_driver = {
@ -1188,7 +1453,7 @@ static int __init pci_epf_test_init(void)
int ret;
kpcitest_workqueue = alloc_workqueue("kpcitest",
-WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0);
if (!kpcitest_workqueue) {
pr_err("Failed to allocate the kpcitest work queue\n");
return -ENOMEM;


@ -1651,8 +1651,13 @@ static int __init epf_ntb_init(void)
{
int ret;
-kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM |
-WQ_HIGHPRI, 0);
+kpcintb_workqueue = alloc_workqueue("kpcintb",
+WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0);
if (!kpcintb_workqueue) {
pr_err("Failed to allocate kpcintb workqueue\n");
return -ENOMEM;
}
ret = pci_epf_register_driver(&epf_ntb_driver);
if (ret) {
destroy_workqueue(kpcintb_workqueue);


@ -23,7 +23,6 @@ struct pci_epf_group {
struct config_group group;
struct config_group primary_epc_group;
struct config_group secondary_epc_group;
-struct delayed_work cfs_work;
struct pci_epf *epf;
int index;
};
@ -69,8 +68,8 @@ static int pci_secondary_epc_epf_link(struct config_item *epf_item,
return 0;
}
-static void pci_secondary_epc_epf_unlink(struct config_item *epc_item,
-struct config_item *epf_item)
+static void pci_secondary_epc_epf_unlink(struct config_item *epf_item,
+struct config_item *epc_item)
{
struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
@ -103,7 +102,7 @@ static struct config_group
secondary_epc_group = &epf_group->secondary_epc_group;
config_group_init_type_name(secondary_epc_group, "secondary",
&pci_secondary_epc_type);
-configfs_register_group(&epf_group->group, secondary_epc_group);
+configfs_add_default_group(secondary_epc_group, &epf_group->group);
return secondary_epc_group;
}
@ -133,8 +132,8 @@ static int pci_primary_epc_epf_link(struct config_item *epf_item,
return 0;
}
-static void pci_primary_epc_epf_unlink(struct config_item *epc_item,
-struct config_item *epf_item)
+static void pci_primary_epc_epf_unlink(struct config_item *epf_item,
+struct config_item *epc_item)
{
struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
@ -166,7 +165,7 @@ static struct config_group
config_group_init_type_name(primary_epc_group, "primary",
&pci_primary_epc_type);
-configfs_register_group(&epf_group->group, primary_epc_group);
+configfs_add_default_group(primary_epc_group, &epf_group->group);
return primary_epc_group;
}
@ -570,15 +569,13 @@ static void pci_ep_cfs_add_type_group(struct pci_epf_group *epf_group)
return;
}
-configfs_register_group(&epf_group->group, group);
+configfs_add_default_group(group, &epf_group->group);
}
-static void pci_epf_cfs_work(struct work_struct *work)
+static void pci_epf_cfs_add_sub_groups(struct pci_epf_group *epf_group)
{
-struct pci_epf_group *epf_group;
struct config_group *group;
-epf_group = container_of(work, struct pci_epf_group, cfs_work.work);
group = pci_ep_cfs_add_primary_group(epf_group);
if (IS_ERR(group)) {
pr_err("failed to create 'primary' EPC interface\n");
@ -637,9 +634,7 @@ static struct config_group *pci_epf_make(struct config_group *group,
kfree(epf_name);
-INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work);
-queue_delayed_work(system_wq, &epf_group->cfs_work,
-msecs_to_jiffies(1));
+pci_epf_cfs_add_sub_groups(epf_group);
return &epf_group->group;


@@ -596,6 +596,14 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
if (!epc_features)
return -EINVAL;
if (epf_bar->num_submap && !epf_bar->submap)
return -EINVAL;
if (epf_bar->num_submap &&
!(epc_features->dynamic_inbound_mapping &&
epc_features->subrange_mapping))
return -EINVAL;
if (epc_features->bar[bar].type == BAR_RESIZABLE &&
(epf_bar->size < SZ_1M || (u64)epf_bar->size > (SZ_128G * 1024)))
return -EINVAL;


@@ -19,6 +19,7 @@
#include <linux/types.h>
#include <linux/pm_runtime.h>
#include <linux/pci.h>
#include <trace/events/pci.h>
#include "../pci.h"
#include "pciehp.h"
@@ -244,12 +245,20 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
case ON_STATE:
ctrl->state = POWEROFF_STATE;
mutex_unlock(&ctrl->state_lock);
-		if (events & PCI_EXP_SLTSTA_DLLSC)
+		if (events & PCI_EXP_SLTSTA_DLLSC) {
 			ctrl_info(ctrl, "Slot(%s): Link Down\n",
 				  slot_name(ctrl));
-		if (events & PCI_EXP_SLTSTA_PDC)
+			trace_pci_hp_event(pci_name(ctrl->pcie->port),
+					   slot_name(ctrl),
+					   PCI_HOTPLUG_LINK_DOWN);
+		}
+		if (events & PCI_EXP_SLTSTA_PDC) {
 			ctrl_info(ctrl, "Slot(%s): Card not present\n",
 				  slot_name(ctrl));
+			trace_pci_hp_event(pci_name(ctrl->pcie->port),
+					   slot_name(ctrl),
+					   PCI_HOTPLUG_CARD_NOT_PRESENT);
+		}
pciehp_disable_slot(ctrl, SURPRISE_REMOVAL);
break;
default:
@@ -269,6 +278,9 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
INDICATOR_NOOP);
ctrl_info(ctrl, "Slot(%s): Card not present\n",
slot_name(ctrl));
trace_pci_hp_event(pci_name(ctrl->pcie->port),
slot_name(ctrl),
PCI_HOTPLUG_CARD_NOT_PRESENT);
}
mutex_unlock(&ctrl->state_lock);
return;
@@ -281,12 +293,19 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
case OFF_STATE:
ctrl->state = POWERON_STATE;
mutex_unlock(&ctrl->state_lock);
-		if (present)
+		if (present) {
 			ctrl_info(ctrl, "Slot(%s): Card present\n",
 				  slot_name(ctrl));
-		if (link_active)
-			ctrl_info(ctrl, "Slot(%s): Link Up\n",
-				  slot_name(ctrl));
+			trace_pci_hp_event(pci_name(ctrl->pcie->port),
+					   slot_name(ctrl),
+					   PCI_HOTPLUG_CARD_PRESENT);
+		}
+		if (link_active) {
+			ctrl_info(ctrl, "Slot(%s): Link Up\n", slot_name(ctrl));
+			trace_pci_hp_event(pci_name(ctrl->pcie->port),
+					   slot_name(ctrl),
+					   PCI_HOTPLUG_LINK_UP);
+		}
ctrl->request_result = pciehp_enable_slot(ctrl);
break;
default:


@@ -320,7 +320,8 @@ int pciehp_check_link_status(struct controller *ctrl)
}
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA2, &linksta2);
-	__pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status, linksta2);
+	__pcie_update_link_speed(ctrl->pcie->port->subordinate, PCIE_HOTPLUG,
+				 lnk_status, linksta2);
if (!found) {
ctrl_info(ctrl, "Slot(%s): No device found\n",


@@ -802,7 +802,7 @@ static struct pnv_php_slot *pnv_php_alloc_slot(struct device_node *dn)
}
/* Allocate workqueue for this slot's interrupt handling */
-	php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
+	php_slot->wq = alloc_workqueue("pciehp-%s", WQ_PERCPU, 0, php_slot->name);
if (!php_slot->wq) {
SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
kfree(php_slot->name);


@@ -80,7 +80,8 @@ static int init_slots(struct controller *ctrl)
slot->device = ctrl->slot_device_offset + i;
slot->number = ctrl->first_slot + (ctrl->slot_num_inc * i);
-		slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number);
+		slot->wq = alloc_workqueue("shpchp-%d", WQ_PERCPU, 0,
+					   slot->number);
if (!slot->wq) {
retval = -ENOMEM;
goto error_slot;


@@ -495,7 +495,9 @@ static ssize_t sriov_numvfs_store(struct device *dev,
if (num_vfs == 0) {
/* disable VFs */
pci_lock_rescan_remove();
ret = pdev->driver->sriov_configure(pdev, 0);
pci_unlock_rescan_remove();
goto exit;
}
@@ -507,7 +509,9 @@
goto exit;
}
pci_lock_rescan_remove();
ret = pdev->driver->sriov_configure(pdev, num_vfs);
pci_unlock_rescan_remove();
if (ret < 0)
goto exit;
@@ -629,18 +633,15 @@ static int sriov_add_vfs(struct pci_dev *dev, u16 num_vfs)
if (dev->no_vf_scan)
return 0;
pci_lock_rescan_remove();
for (i = 0; i < num_vfs; i++) {
rc = pci_iov_add_virtfn(dev, i);
if (rc)
goto failed;
}
pci_unlock_rescan_remove();
return 0;
failed:
while (i--)
pci_iov_remove_virtfn(dev, i);
pci_unlock_rescan_remove();
return rc;
}
@@ -765,10 +766,8 @@ static void sriov_del_vfs(struct pci_dev *dev)
struct pci_sriov *iov = dev->sriov;
int i;
pci_lock_rescan_remove();
for (i = 0; i < iov->num_VFs; i++)
pci_iov_remove_virtfn(dev, i);
pci_unlock_rescan_remove();
}
static void sriov_disable(struct pci_dev *dev)


@@ -867,6 +867,7 @@ bool of_pci_supply_present(struct device_node *np)
return false;
}
EXPORT_SYMBOL_GPL(of_pci_supply_present);
#endif /* CONFIG_PCI */


@@ -147,11 +147,19 @@ static int p2pmem_alloc_mmap(struct file *filp, struct kobject *kobj,
* we have just allocated the page no one else should be
* using it.
*/
-	VM_WARN_ON_ONCE_PAGE(!page_ref_count(page), page);
+	VM_WARN_ON_ONCE_PAGE(page_ref_count(page), page);
set_page_count(page, 1);
ret = vm_insert_page(vma, vaddr, page);
if (ret) {
gen_pool_free(p2pdma->pool, (uintptr_t)kaddr, len);
/*
* Reset the page count. We don't use put_page()
* because we don't want to trigger the
* p2pdma_folio_free() path.
*/
set_page_count(page, 0);
percpu_ref_put(ref);
return ret;
}
percpu_ref_get(ref);


@@ -272,21 +272,6 @@ static acpi_status decode_type1_hpx_record(union acpi_object *record,
return AE_OK;
}
static bool pcie_root_rcb_set(struct pci_dev *dev)
{
struct pci_dev *rp = pcie_find_root_port(dev);
u16 lnkctl;
if (!rp)
return false;
pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &lnkctl);
if (lnkctl & PCI_EXP_LNKCTL_RCB)
return true;
return false;
}
/* _HPX PCI Express Setting Record (Type 2) */
struct hpx_type2 {
u32 revision;
@@ -312,6 +297,7 @@ static void program_hpx_type2(struct pci_dev *dev, struct hpx_type2 *hpx)
{
int pos;
u32 reg32;
const struct pci_host_bridge *host;
if (!hpx)
return;
@@ -319,6 +305,15 @@ static void program_hpx_type2(struct pci_dev *dev, struct hpx_type2 *hpx)
if (!pci_is_pcie(dev))
return;
host = pci_find_host_bridge(dev->bus);
/*
* Only do the _HPX Type 2 programming if OS owns PCIe native
* hotplug but not AER.
*/
if (!host->native_pcie_hotplug || host->native_aer)
return;
if (hpx->revision > 1) {
pci_warn(dev, "PCIe settings rev %d not supported\n",
hpx->revision);
@@ -326,33 +321,27 @@ static void program_hpx_type2(struct pci_dev *dev, struct hpx_type2 *hpx)
}
/*
* Don't allow _HPX to change MPS or MRRS settings. We manage
* those to make sure they're consistent with the rest of the
* platform.
* We only allow _HPX to program DEVCTL bits related to AER, namely
* PCI_EXP_DEVCTL_CERE, PCI_EXP_DEVCTL_NFERE, PCI_EXP_DEVCTL_FERE,
* and PCI_EXP_DEVCTL_URRE.
*
* The rest of DEVCTL is managed by the OS to make sure it's
* consistent with the rest of the platform.
*/
-	hpx->pci_exp_devctl_and |= PCI_EXP_DEVCTL_PAYLOAD |
-				   PCI_EXP_DEVCTL_READRQ;
-	hpx->pci_exp_devctl_or &= ~(PCI_EXP_DEVCTL_PAYLOAD |
-				    PCI_EXP_DEVCTL_READRQ);
+	hpx->pci_exp_devctl_and |= ~PCI_EXP_AER_FLAGS;
+	hpx->pci_exp_devctl_or &= PCI_EXP_AER_FLAGS;
/* Initialize Device Control Register */
pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
~hpx->pci_exp_devctl_and, hpx->pci_exp_devctl_or);
-	/* Initialize Link Control Register */
+	/* Log if _HPX attempts to modify Link Control Register */
 	if (pcie_cap_has_lnkctl(dev)) {
-		/*
-		 * If the Root Port supports Read Completion Boundary of
-		 * 128, set RCB to 128.  Otherwise, clear it.
-		 */
-		hpx->pci_exp_lnkctl_and |= PCI_EXP_LNKCTL_RCB;
-		hpx->pci_exp_lnkctl_or &= ~PCI_EXP_LNKCTL_RCB;
-		if (pcie_root_rcb_set(dev))
-			hpx->pci_exp_lnkctl_or |= PCI_EXP_LNKCTL_RCB;
-		pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
-			~hpx->pci_exp_lnkctl_and, hpx->pci_exp_lnkctl_or);
+		if (hpx->pci_exp_lnkctl_and != 0xffff ||
+		    hpx->pci_exp_lnkctl_or != 0)
+			pci_info(dev, "_HPX attempts Link Control setting (AND %#06x OR %#06x)\n",
+				 hpx->pci_exp_lnkctl_and,
+				 hpx->pci_exp_lnkctl_or);
 	}
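An aside on the DEVCTL clamping in this hunk: `pci_exp_devctl_and`/`pci_exp_devctl_or` form a clear-and-set mask pair, and ORing `~PCI_EXP_AER_FLAGS` into the AND mask while ANDing the OR mask with `PCI_EXP_AER_FLAGS` guarantees _HPX can only touch the four AER enable bits. A standalone userspace sketch of the same trick (the bit values match `include/uapi/linux/pci_regs.h`; the helper name is invented):

```c
#include <assert.h>
#include <stdint.h>

/* AER-related Device Control bits, values as in include/uapi/linux/pci_regs.h */
#define DEVCTL_CERE	0x0001	/* Correctable Error Reporting Enable */
#define DEVCTL_NFERE	0x0002	/* Non-Fatal Error Reporting Enable */
#define DEVCTL_FERE	0x0004	/* Fatal Error Reporting Enable */
#define DEVCTL_URRE	0x0008	/* Unsupported Request Reporting Enable */
#define AER_FLAGS	(DEVCTL_CERE | DEVCTL_NFERE | DEVCTL_FERE | DEVCTL_URRE)

/*
 * Clamp a firmware-supplied AND/OR mask pair so only the AER bits can
 * change, then apply it clear-and-set style to a DEVCTL value.
 */
static uint16_t apply_hpx_devctl(uint16_t devctl, uint16_t and_mask,
				 uint16_t or_mask)
{
	and_mask |= (uint16_t)~AER_FLAGS;	/* never clear non-AER bits */
	or_mask &= AER_FLAGS;			/* never set non-AER bits */
	return (devctl & and_mask) | or_mask;
}
```

Even a hostile mask pair of (0x0000, 0xffff) can then only flip the four AER bits; everything else in DEVCTL stays OS-owned.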
/* Find Advanced Error Reporting Enhanced Capability */


@@ -1679,6 +1679,14 @@ static int pci_dma_configure(struct device *dev)
ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev));
}
/*
* Attempt to enable ACS regardless of capability because some Root
* Ports (e.g. those quirked with *_intel_pch_acs_*) do not have
* the standard ACS capability but still support ACS via those
* quirks.
*/
pci_enable_acs(to_pci_dev(dev));
pci_put_host_bridge_device(bridge);
/* @drv may not be valid when we're called from the IOMMU layer */
@@ -1730,34 +1738,6 @@ const struct bus_type pci_bus_type = {
};
EXPORT_SYMBOL(pci_bus_type);
#ifdef CONFIG_PCIEPORTBUS
static int pcie_port_bus_match(struct device *dev, const struct device_driver *drv)
{
struct pcie_device *pciedev;
const struct pcie_port_service_driver *driver;
if (drv->bus != &pcie_port_bus_type || dev->bus != &pcie_port_bus_type)
return 0;
pciedev = to_pcie_device(dev);
driver = to_service_driver(drv);
if (driver->service != pciedev->service)
return 0;
if (driver->port_type != PCIE_ANY_PORT &&
driver->port_type != pci_pcie_type(pciedev->port))
return 0;
return 1;
}
const struct bus_type pcie_port_bus_type = {
.name = "pci_express",
.match = pcie_port_bus_match,
};
#endif
static int __init pci_driver_init(void)
{
int ret;


@@ -181,7 +181,7 @@ static ssize_t resource_show(struct device *dev, struct device_attribute *attr,
struct resource zerores = {};
/* For backwards compatibility */
-	if (i >= PCI_BRIDGE_RESOURCES && i <= PCI_BRIDGE_RESOURCE_END &&
+	if (pci_resource_is_bridge_win(i) &&
res->flags & (IORESOURCE_UNSET | IORESOURCE_DISABLED))
res = &zerores;


@@ -14,6 +14,7 @@
#include <linux/dmi.h>
#include <linux/init.h>
#include <linux/iommu.h>
#include <linux/lockdep.h>
#include <linux/msi.h>
#include <linux/of.h>
#include <linux/pci.h>
@@ -101,12 +102,6 @@ bool pci_reset_supported(struct pci_dev *dev)
int pci_domains_supported = 1;
#endif
#define DEFAULT_CARDBUS_IO_SIZE (256)
#define DEFAULT_CARDBUS_MEM_SIZE (64*1024*1024)
/* pci=cbmemsize=nnM,cbiosize=nn can override this */
unsigned long pci_cardbus_io_size = DEFAULT_CARDBUS_IO_SIZE;
unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE;
#define DEFAULT_HOTPLUG_IO_SIZE (256)
#define DEFAULT_HOTPLUG_MMIO_SIZE (2*1024*1024)
#define DEFAULT_HOTPLUG_MMIO_PREF_SIZE (2*1024*1024)
@@ -428,7 +423,7 @@ found:
static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn,
u8 pos, int cap)
{
-	return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, bus, devfn);
+	return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, NULL, bus, devfn);
}
u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap)
@@ -533,7 +528,7 @@ u16 pci_find_next_ext_capability(struct pci_dev *dev, u16 start, int cap)
return 0;
 	return PCI_FIND_NEXT_EXT_CAP(pci_bus_read_config, start, cap,
-				     dev->bus, dev->devfn);
+				     NULL, dev->bus, dev->devfn);
}
EXPORT_SYMBOL_GPL(pci_find_next_ext_capability);
@@ -602,7 +597,7 @@ static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap)
mask = HT_5BIT_CAP_MASK;
 	pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, pos,
-				PCI_CAP_ID_HT, dev->bus, dev->devfn);
+				PCI_CAP_ID_HT, NULL, dev->bus, dev->devfn);
while (pos) {
rc = pci_read_config_byte(dev, pos + 3, &cap);
if (rc != PCIBIOS_SUCCESSFUL)
@@ -613,7 +608,7 @@ static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap)
pos = PCI_FIND_NEXT_CAP(pci_bus_read_config,
pos + PCI_CAP_LIST_NEXT,
-					PCI_CAP_ID_HT, dev->bus,
+					PCI_CAP_ID_HT, NULL, dev->bus,
dev->devfn);
}
@@ -894,7 +889,6 @@ static const char *disable_acs_redir_param;
static const char *config_acs_param;
struct pci_acs {
u16 cap;
u16 ctrl;
u16 fw_ctrl;
};
@@ -997,27 +991,27 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
static void pci_std_enable_acs(struct pci_dev *dev, struct pci_acs *caps)
{
/* Source Validation */
-	caps->ctrl |= (caps->cap & PCI_ACS_SV);
+	caps->ctrl |= (dev->acs_capabilities & PCI_ACS_SV);
 	/* P2P Request Redirect */
-	caps->ctrl |= (caps->cap & PCI_ACS_RR);
+	caps->ctrl |= (dev->acs_capabilities & PCI_ACS_RR);
 	/* P2P Completion Redirect */
-	caps->ctrl |= (caps->cap & PCI_ACS_CR);
+	caps->ctrl |= (dev->acs_capabilities & PCI_ACS_CR);
 	/* Upstream Forwarding */
-	caps->ctrl |= (caps->cap & PCI_ACS_UF);
+	caps->ctrl |= (dev->acs_capabilities & PCI_ACS_UF);
 	/* Enable Translation Blocking for external devices and noats */
 	if (pci_ats_disabled() || dev->external_facing || dev->untrusted)
-		caps->ctrl |= (caps->cap & PCI_ACS_TB);
+		caps->ctrl |= (dev->acs_capabilities & PCI_ACS_TB);
}
/**
* pci_enable_acs - enable ACS if hardware supports it
* @dev: the PCI device
*/
-static void pci_enable_acs(struct pci_dev *dev)
+void pci_enable_acs(struct pci_dev *dev)
{
struct pci_acs caps;
bool enable_acs = false;
@@ -1033,7 +1027,6 @@ static void pci_enable_acs(struct pci_dev *dev)
if (!pos)
return;
pci_read_config_word(dev, pos + PCI_ACS_CAP, &caps.cap);
pci_read_config_word(dev, pos + PCI_ACS_CTRL, &caps.ctrl);
caps.fw_ctrl = caps.ctrl;
@@ -1490,6 +1483,9 @@ static int pci_set_low_power_state(struct pci_dev *dev, pci_power_t state, bool
|| (state == PCI_D2 && !dev->d2_support))
return -EIO;
if (dev->current_state == state)
return 0;
pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
if (PCI_POSSIBLE_ERROR(pmcsr)) {
pci_err(dev, "Unable to change power state from %s to %s, device inaccessible\n",
@@ -2258,7 +2254,7 @@ void pcie_clear_device_status(struct pci_dev *dev)
*/
void pcie_clear_root_pme_status(struct pci_dev *dev)
{
-	pcie_capability_set_dword(dev, PCI_EXP_RTSTA, PCI_EXP_RTSTA_PME);
+	pcie_capability_write_dword(dev, PCI_EXP_RTSTA, PCI_EXP_RTSTA_PME);
}
/**
@@ -3198,8 +3194,14 @@ void pci_pm_init(struct pci_dev *dev)
poweron:
pci_pm_power_up_and_verify_state(dev);
pm_runtime_forbid(&dev->dev);
/*
* Runtime PM will be enabled for the device when it has been fully
* configured, but since its parent and suppliers may suspend in
* the meantime, prevent them from doing so by changing the
* device's runtime PM status to "active".
*/
pm_runtime_set_active(&dev->dev);
pm_runtime_enable(&dev->dev);
}
static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop)
@@ -3516,7 +3518,7 @@ void pci_configure_ari(struct pci_dev *dev)
static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags)
{
int pos;
u16 cap, ctrl;
u16 ctrl;
pos = pdev->acs_cap;
if (!pos)
@@ -3527,8 +3529,7 @@ static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags)
* or only required if controllable. Features missing from the
* capability field can therefore be assumed as hard-wired enabled.
*/
pci_read_config_word(pdev, pos + PCI_ACS_CAP, &cap);
acs_flags &= (cap | PCI_ACS_EC);
acs_flags &= (pdev->acs_capabilities | PCI_ACS_EC);
pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl);
return (ctrl & acs_flags) == acs_flags;
@@ -3649,15 +3650,15 @@ bool pci_acs_path_enabled(struct pci_dev *start,
*/
void pci_acs_init(struct pci_dev *dev)
{
-	dev->acs_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS);
+	int pos;
-	/*
-	 * Attempt to enable ACS regardless of capability because some Root
-	 * Ports (e.g. those quirked with *_intel_pch_acs_*) do not have
-	 * the standard ACS capability but still support ACS via those
-	 * quirks.
-	 */
-	pci_enable_acs(dev);
+	dev->acs_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS);
+	pos = dev->acs_cap;
+	if (!pos)
+		return;
+	pci_read_config_word(dev, pos + PCI_ACS_CAP, &dev->acs_capabilities);
+	pci_disable_broken_acs_cap(dev);
}
/**
@@ -4584,7 +4585,7 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
* Link Speed.
*/
if (pdev->subordinate)
-		pcie_update_link_speed(pdev->subordinate);
+		pcie_update_link_speed(pdev->subordinate, PCIE_LINK_RETRAIN);
return rc;
}
@@ -4656,7 +4657,7 @@ bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
* spec says 100 ms, but firmware can lower it and we allow drivers to
* increase it as well.
*
- * Called with @pci_bus_sem locked for reading.
+ * Context: Called with @pci_bus_sem locked for reading.
*/
static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
{
@@ -4664,6 +4665,8 @@ static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
int min_delay = 100;
int max_delay = 0;
lockdep_assert_held(&pci_bus_sem);
list_for_each_entry(pdev, &bus->devices, bus_list) {
if (pdev->d3cold_delay < min_delay)
min_delay = pdev->d3cold_delay;
@@ -5018,6 +5021,7 @@ static void pci_dev_save_and_disable(struct pci_dev *dev)
* races with ->remove() by the device lock, which must be held by
* the caller.
*/
device_lock_assert(&dev->dev);
if (err_handler && err_handler->reset_prepare)
err_handler->reset_prepare(dev);
else if (dev->driver)
@@ -5088,7 +5092,9 @@ const struct pci_reset_fn_method pci_reset_fn_methods[] = {
* device including MSI, bus mastering, BARs, decoding IO and memory spaces,
* etc.
*
* Returns 0 if the device function was successfully reset or negative if the
* Context: The caller must hold the device lock.
*
* Return: 0 if the device function was successfully reset or negative if the
* device doesn't support resetting a single function.
*/
int __pci_reset_function_locked(struct pci_dev *dev)
@@ -5097,6 +5103,7 @@ int __pci_reset_function_locked(struct pci_dev *dev)
const struct pci_reset_fn_method *method;
might_sleep();
device_lock_assert(&dev->dev);
/*
* A reset method returns -ENOTTY if it doesn't support this device and
@@ -5219,13 +5226,17 @@ EXPORT_SYMBOL_GPL(pci_reset_function);
* over the reset. It also differs from pci_reset_function() in that it
* requires the PCI device lock to be held.
*
* Returns 0 if the device function was successfully reset or negative if the
* Context: The caller must hold the device lock.
*
* Return: 0 if the device function was successfully reset or negative if the
* device doesn't support resetting a single function.
*/
int pci_reset_function_locked(struct pci_dev *dev)
{
int rc;
device_lock_assert(&dev->dev);
if (!pci_reset_supported(dev))
return -ENOTTY;
@@ -5341,10 +5352,9 @@ unlock:
/* Do any devices on or below this slot prevent a bus reset? */
static bool pci_slot_resettable(struct pci_slot *slot)
{
-	struct pci_dev *dev;
+	struct pci_dev *dev, *bridge = slot->bus->self;
-	if (slot->bus->self &&
-	    (slot->bus->self->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET))
+	if (bridge && (bridge->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET))
return false;
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
@@ -5361,7 +5371,10 @@ static bool pci_slot_resettable(struct pci_slot *slot)
/* Lock devices from the top of the tree down */
static void pci_slot_lock(struct pci_slot *slot)
{
-	struct pci_dev *dev;
+	struct pci_dev *dev, *bridge = slot->bus->self;
+	if (bridge)
+		pci_dev_lock(bridge);
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
@@ -5376,7 +5389,7 @@ static void pci_slot_unlock(struct pci_slot *slot)
/* Unlock devices from the bottom of the tree up */
static void pci_slot_unlock(struct pci_slot *slot)
{
-	struct pci_dev *dev;
+	struct pci_dev *dev, *bridge = slot->bus->self;
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
@@ -5386,21 +5399,25 @@ static void pci_slot_unlock(struct pci_slot *slot)
else
pci_dev_unlock(dev);
}
if (bridge)
pci_dev_unlock(bridge);
}
/* Return 1 on successful lock, 0 on contention */
static int pci_slot_trylock(struct pci_slot *slot)
{
-	struct pci_dev *dev;
+	struct pci_dev *dev, *bridge = slot->bus->self;
+	if (bridge && !pci_dev_trylock(bridge))
+		return 0;
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
continue;
if (dev->subordinate) {
if (!pci_bus_trylock(dev->subordinate)) {
pci_dev_unlock(dev);
if (!pci_bus_trylock(dev->subordinate))
goto unlock;
}
} else if (!pci_dev_trylock(dev))
goto unlock;
}
@@ -5416,6 +5433,9 @@ unlock:
else
pci_dev_unlock(dev);
}
if (bridge)
pci_dev_unlock(bridge);
return 0;
}
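The trylock path above follows a take-top-down, roll-back-bottom-up pattern: the bridge is locked first, and contention anywhere releases everything already taken before reporting failure. A minimal userspace sketch of the same shape (all names and types invented; no relation to the real pci_dev locking API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy lock: "held" stands in for a real mutex's locked state. */
struct lock { bool held; };

static bool trylock(struct lock *l)
{
	if (l->held)
		return false;
	l->held = true;
	return true;
}

static void unlock(struct lock *l) { l->held = false; }

/*
 * Take the bridge lock, then each device lock top-down.  On contention,
 * release already-taken device locks bottom-up, then the bridge lock,
 * and report failure so the caller can retry later.
 */
static bool lock_all(struct lock *bridge, struct lock *devs, int n)
{
	int i;

	if (bridge && !trylock(bridge))
		return false;
	for (i = 0; i < n; i++) {
		if (!trylock(&devs[i])) {
			while (i--)
				unlock(&devs[i]);	/* bottom-up rollback */
			if (bridge)
				unlock(bridge);
			return false;
		}
	}
	return true;
}
```

The fixed top-down acquisition order is what makes the pattern deadlock-free against concurrent lockers following the same order.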
@@ -6642,7 +6662,7 @@ static void of_pci_bus_release_domain_nr(struct device *parent, int domain_nr)
return;
/* Release domain from IDA where it was allocated. */
-	if (of_get_pci_domain_nr(parent->of_node) == domain_nr)
+	if (parent && of_get_pci_domain_nr(parent->of_node) == domain_nr)
ida_free(&pci_domain_nr_static_ida, domain_nr);
else
ida_free(&pci_domain_nr_dynamic_ida, domain_nr);
@@ -6681,7 +6701,9 @@ static int __init pci_setup(char *str)
if (k)
*k++ = 0;
if (*str && (str = pcibios_setup(str)) && *str) {
-		if (!strcmp(str, "nomsi")) {
+		if (!pci_setup_cardbus(str)) {
+			/* Function handled the parameters */
+		} else if (!strcmp(str, "nomsi")) {
pci_no_msi();
} else if (!strncmp(str, "noats", 5)) {
pr_info("PCIe: ATS is disabled\n");
@@ -6700,10 +6722,6 @@
pcie_ari_disabled = true;
} else if (!strncmp(str, "notph", 5)) {
pci_no_tph();
} else if (!strncmp(str, "cbiosize=", 9)) {
pci_cardbus_io_size = memparse(str + 9, &str);
} else if (!strncmp(str, "cbmemsize=", 10)) {
pci_cardbus_mem_size = memparse(str + 10, &str);
} else if (!strncmp(str, "resource_alignment=", 19)) {
resource_alignment_param = str + 19;
} else if (!strncmp(str, "ecrc=", 5)) {


@@ -5,6 +5,7 @@
#include <linux/align.h>
#include <linux/bitfield.h>
#include <linux/pci.h>
#include <trace/events/pci.h>
struct pcie_tlp_log;
@@ -63,6 +64,18 @@
#define PCIE_LINK_WAIT_MAX_RETRIES 10
#define PCIE_LINK_WAIT_SLEEP_MS 90
/* Format of TLP; PCIe r7.0, sec 2.2.1 */
#define PCIE_TLP_FMT_3DW_NO_DATA 0x00 /* 3DW header, no data */
#define PCIE_TLP_FMT_4DW_NO_DATA 0x01 /* 4DW header, no data */
#define PCIE_TLP_FMT_3DW_DATA 0x02 /* 3DW header, with data */
#define PCIE_TLP_FMT_4DW_DATA 0x03 /* 4DW header, with data */
/* Type of TLP; PCIe r7.0, sec 2.2.1 */
#define PCIE_TLP_TYPE_CFG0_RD 0x04 /* Config Type 0 Read Request */
#define PCIE_TLP_TYPE_CFG0_WR 0x04 /* Config Type 0 Write Request */
#define PCIE_TLP_TYPE_CFG1_RD 0x05 /* Config Type 1 Read Request */
#define PCIE_TLP_TYPE_CFG1_WR 0x05 /* Config Type 1 Write Request */
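Note that in these new defines a config read and write share one type value (0x04 for Type 0, 0x05 for Type 1); only the fmt field's data bit tells them apart. A hedged sketch of decoding that from header DW0, assuming the usual layout (fmt in bits [31:29], type in bits [28:24], per PCIe spec sec 2.2.1); the helper names are invented:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* TLP fmt values: bit 1 = 4DW header, bit 0 here shown via the pairs below */
#define TLP_FMT_3DW_NO_DATA	0x00
#define TLP_FMT_4DW_NO_DATA	0x01
#define TLP_FMT_3DW_DATA	0x02
#define TLP_FMT_4DW_DATA	0x03
#define TLP_TYPE_CFG0		0x04	/* Config Type 0 read and write */

static uint8_t tlp_fmt(uint32_t dw0)  { return (dw0 >> 29) & 0x7; }
static uint8_t tlp_type(uint32_t dw0) { return (dw0 >> 24) & 0x1f; }

/* A Config Type 0 Write is the CFG0 type with a data-carrying fmt. */
static bool is_cfg0_write(uint32_t dw0)
{
	return tlp_type(dw0) == TLP_TYPE_CFG0 &&
	       (tlp_fmt(dw0) == TLP_FMT_3DW_DATA ||
		tlp_fmt(dw0) == TLP_FMT_4DW_DATA);
}
```

The same fmt test distinguishes CFG1 reads from writes; only the type value changes.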
/* Message Routing (r[2:0]); PCIe r6.0, sec 2.2.8 */
#define PCIE_MSG_TYPE_R_RC 0
#define PCIE_MSG_TYPE_R_ADDR 1
@@ -84,10 +97,16 @@
#define PCIE_MSG_CODE_DEASSERT_INTC 0x26
#define PCIE_MSG_CODE_DEASSERT_INTD 0x27
/* Cpl. status of Complete; PCIe r7.0, sec 2.2.9.1 */
#define PCIE_CPL_STS_SUCCESS 0x00 /* Successful Completion */
#define PCI_BUS_BRIDGE_IO_WINDOW 0
#define PCI_BUS_BRIDGE_MEM_WINDOW 1
#define PCI_BUS_BRIDGE_PREF_MEM_WINDOW 2
#define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \
PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE)
extern const unsigned char pcie_link_speed[];
extern bool pci_early_dump;
@@ -103,17 +122,21 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
* @read_cfg: Function pointer for reading PCI config space
* @start: Starting position to begin search
* @cap: Capability ID to find
* @prev_ptr: Pointer to store position of preceding capability (optional)
* @args: Arguments to pass to read_cfg function
*
- * Search the capability list in PCI config space to find @cap.
+ * Search the capability list in PCI config space to find @cap. If
+ * found, update *prev_ptr with the position of the preceding capability
+ * (if prev_ptr != NULL).
* Implements TTL (time-to-live) protection against infinite loops.
*
* Return: Position of the capability if found, 0 otherwise.
*/
-#define PCI_FIND_NEXT_CAP(read_cfg, start, cap, args...) \
+#define PCI_FIND_NEXT_CAP(read_cfg, start, cap, prev_ptr, args...) \
({ \
int __ttl = PCI_FIND_CAP_TTL; \
u8 __id, __found_pos = 0; \
u8 __id, __found_pos = 0; \
u8 __prev_pos = (start); \
u8 __pos = (start); \
u16 __ent; \
\
@@ -132,9 +155,12 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
\
if (__id == (cap)) { \
__found_pos = __pos; \
if (prev_ptr != NULL) \
*(u8 *)prev_ptr = __prev_pos; \
break; \
} \
\
__prev_pos = __pos; \
__pos = FIELD_GET(PCI_CAP_LIST_NEXT_MASK, __ent); \
} \
__found_pos; \
@@ -146,21 +172,26 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
* @read_cfg: Function pointer for reading PCI config space
* @start: Starting position to begin search (0 for initial search)
* @cap: Extended capability ID to find
* @prev_ptr: Pointer to store position of preceding capability (optional)
* @args: Arguments to pass to read_cfg function
*
* Search the extended capability list in PCI config space to find @cap.
* If found, update *prev_ptr with the position of the preceding capability
* (if prev_ptr != NULL).
* Implements TTL protection against infinite loops using a calculated
* maximum search count.
*
* Return: Position of the capability if found, 0 otherwise.
*/
-#define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, args...) \
+#define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, prev_ptr, args...) \
({ \
u16 __pos = (start) ?: PCI_CFG_SPACE_SIZE; \
u16 __found_pos = 0; \
u16 __prev_pos; \
int __ttl, __ret; \
u32 __header; \
\
__prev_pos = __pos; \
__ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; \
while (__ttl-- > 0 && __pos >= PCI_CFG_SPACE_SIZE) { \
__ret = read_cfg##_dword(args, __pos, &__header); \
@@ -172,9 +203,12 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
\
if (PCI_EXT_CAP_ID(__header) == (cap) && __pos != start) {\
__found_pos = __pos; \
if (prev_ptr != NULL) \
*(u16 *)prev_ptr = __prev_pos; \
break; \
} \
\
__prev_pos = __pos; \
__pos = PCI_EXT_CAP_NEXT(__header); \
} \
__found_pos; \
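Both macros implement the same guarded list walk: follow next pointers, remember the previous node for `prev_ptr`, and bound the loop with a TTL so a corrupted (cyclic) capability list cannot hang the CPU. A toy standalone version of that walk, using an array-indexed fake config space with invented offsets and capability IDs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TTL 48	/* matches the spirit of PCI_FIND_CAP_TTL, value invented */

/* Fake config space: each entry is a (capability id, next offset) pair. */
struct cap { uint8_t id, next; };

/*
 * Walk the list starting at @start looking for @id.  Returns the offset
 * of the match (0 if none) and, when @prev_ptr is non-NULL, stores the
 * offset of the preceding node there.  The TTL bounds the walk so a
 * cyclic next chain terminates instead of looping forever.
 */
static uint8_t find_cap(const struct cap *cfg, uint8_t start, uint8_t id,
			uint8_t *prev_ptr)
{
	uint8_t prev = start, pos = start;
	int ttl = TTL;

	while (pos && ttl-- > 0) {
		if (cfg[pos].id == id) {
			if (prev_ptr)
				*prev_ptr = prev;
			return pos;
		}
		prev = pos;
		pos = cfg[pos].next;	/* may loop on broken hardware */
	}
	return 0;
}
```

The prev-pointer output is what lets a caller unlink a capability from the chain without a second walk.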
@@ -242,6 +276,7 @@ void pci_config_pm_runtime_put(struct pci_dev *dev);
void pci_pm_power_up_and_verify_state(struct pci_dev *pci_dev);
void pci_pm_init(struct pci_dev *dev);
void pci_ea_init(struct pci_dev *dev);
bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub);
void pci_msi_init(struct pci_dev *dev);
void pci_msix_init(struct pci_dev *dev);
bool pci_bridge_d3_possible(struct pci_dev *dev);
@@ -376,8 +411,40 @@ extern unsigned long pci_hotplug_io_size;
extern unsigned long pci_hotplug_mmio_size;
extern unsigned long pci_hotplug_mmio_pref_size;
extern unsigned long pci_hotplug_bus_size;
extern unsigned long pci_cardbus_io_size;
extern unsigned long pci_cardbus_mem_size;
static inline bool pci_is_cardbus_bridge(struct pci_dev *dev)
{
return dev->hdr_type == PCI_HEADER_TYPE_CARDBUS;
}
#ifdef CONFIG_CARDBUS
unsigned long pci_cardbus_resource_alignment(struct resource *res);
int pci_bus_size_cardbus_bridge(struct pci_bus *bus,
struct list_head *realloc_head);
int pci_cardbus_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
u32 buses, int max,
unsigned int available_buses, int pass);
int pci_setup_cardbus(char *str);
#else
static inline unsigned long pci_cardbus_resource_alignment(struct resource *res)
{
return 0;
}
static inline int pci_bus_size_cardbus_bridge(struct pci_bus *bus,
struct list_head *realloc_head)
{
return -EOPNOTSUPP;
}
static inline int pci_cardbus_scan_bridge_extend(struct pci_bus *bus,
struct pci_dev *dev,
u32 buses, int max,
unsigned int available_buses,
int pass)
{
return max;
}
static inline int pci_setup_cardbus(char *str) { return -ENOENT; }
#endif /* CONFIG_CARDBUS */
/**
* pci_match_one_device - Tell if a PCI device structure has a matching
@@ -432,7 +499,6 @@ bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl,
int rrs_timeout);
bool pci_bus_generic_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl,
int rrs_timeout);
int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *pl, int rrs_timeout);
int pci_setup_device(struct pci_dev *dev);
void __pci_size_stdbars(struct pci_dev *dev, int count,
@@ -440,6 +506,10 @@ void __pci_size_stdbars(struct pci_dev *dev, int count,
int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
struct resource *res, unsigned int reg, u32 *sizes);
void pci_configure_ari(struct pci_dev *dev);
int pci_dev_res_add_to_list(struct list_head *head, struct pci_dev *dev,
struct resource *res, resource_size_t add_size,
resource_size_t min_align);
void __pci_bus_size_bridges(struct pci_bus *bus,
struct list_head *realloc_head);
void __pci_bus_assign_resources(const struct pci_bus *bus,
@@ -452,6 +522,11 @@ void pci_walk_bus_locked(struct pci_bus *top,
const char *pci_resource_name(struct pci_dev *dev, unsigned int i);
bool pci_resource_is_optional(const struct pci_dev *dev, int resno);
static inline bool pci_resource_is_bridge_win(int resno)
{
return resno >= PCI_BRIDGE_RESOURCES &&
resno <= PCI_BRIDGE_RESOURCE_END;
}
/**
* pci_resource_num - Reverse lookup resource number from device resources
@@ -475,6 +550,7 @@ static inline int pci_resource_num(const struct pci_dev *dev,
return resno;
}
void pbus_validate_busn(struct pci_bus *bus);
struct resource *pbus_select_window(struct pci_bus *bus,
const struct resource *res);
void pci_reassigndev_resource_alignment(struct pci_dev *dev);
@@ -555,12 +631,28 @@ const char *pci_speed_string(enum pci_bus_speed speed);
void __pcie_print_link_status(struct pci_dev *dev, bool verbose);
void pcie_report_downtraining(struct pci_dev *dev);
-static inline void __pcie_update_link_speed(struct pci_bus *bus, u16 linksta, u16 linksta2)
+enum pcie_link_change_reason {
+	PCIE_LINK_RETRAIN,
+	PCIE_ADD_BUS,
+	PCIE_BWCTRL_ENABLE,
+	PCIE_BWCTRL_IRQ,
+	PCIE_HOTPLUG,
+};
+static inline void __pcie_update_link_speed(struct pci_bus *bus,
+					    enum pcie_link_change_reason reason,
+					    u16 linksta, u16 linksta2)
{
bus->cur_bus_speed = pcie_link_speed[linksta & PCI_EXP_LNKSTA_CLS];
bus->flit_mode = (linksta2 & PCI_EXP_LNKSTA2_FLIT) ? 1 : 0;
trace_pcie_link_event(bus,
reason,
FIELD_GET(PCI_EXP_LNKSTA_NLW, linksta),
linksta & PCI_EXP_LNKSTA_LINK_STATUS_MASK);
}
-void pcie_update_link_speed(struct pci_bus *bus);
+void pcie_update_link_speed(struct pci_bus *bus, enum pcie_link_change_reason reason);
/* Single Root I/O Virtualization */
struct pci_sriov {
@@ -924,8 +1016,6 @@ static inline void pci_suspend_ptm(struct pci_dev *dev) { }
static inline void pci_resume_ptm(struct pci_dev *dev) { }
#endif
unsigned long pci_cardbus_resource_alignment(struct resource *);
static inline resource_size_t pci_resource_alignment(struct pci_dev *dev,
struct resource *res)
{
@@ -939,10 +1029,12 @@ static inline resource_size_t pci_resource_alignment(struct pci_dev *dev,
}
void pci_acs_init(struct pci_dev *dev);
void pci_enable_acs(struct pci_dev *dev);
#ifdef CONFIG_PCI_QUIRKS
int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
int pci_dev_specific_enable_acs(struct pci_dev *dev);
int pci_dev_specific_disable_acs_redir(struct pci_dev *dev);
void pci_disable_broken_acs_cap(struct pci_dev *pdev);
int pcie_failed_link_retrain(struct pci_dev *dev);
#else
static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev,
@@ -958,6 +1050,7 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
{
return -ENOTTY;
}
static inline void pci_disable_broken_acs_cap(struct pci_dev *dev) { }
static inline int pcie_failed_link_retrain(struct pci_dev *dev)
{
return -ENOTTY;


@@ -239,9 +239,6 @@ void pcie_ecrc_get_policy(char *str)
}
#endif /* CONFIG_PCIE_ECRC */
#define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \
PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE)
int pcie_aer_is_native(struct pci_dev *dev)
{
struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
@@ -1608,6 +1605,20 @@ static void aer_disable_irq(struct pci_dev *pdev)
pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
}
static int clear_status_iter(struct pci_dev *dev, void *data)
{
u16 devctl;
/* Skip if pci_enable_pcie_error_reporting() hasn't been called yet */
pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &devctl);
if (!(devctl & PCI_EXP_AER_FLAGS))
return 0;
pci_aer_clear_status(dev);
pcie_clear_device_status(dev);
return 0;
}
/**
* aer_enable_rootport - enable Root Port's interrupts when receiving messages
* @rpc: pointer to a Root Port data structure
@@ -1629,9 +1640,19 @@ static void aer_enable_rootport(struct aer_rpc *rpc)
pcie_capability_clear_word(pdev, PCI_EXP_RTCTL,
SYSTEM_ERROR_INTR_ON_MESG_MASK);
/* Clear error status */
/* Clear error status of this Root Port or RCEC */
pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, &reg32);
pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, reg32);
/* Clear error status of agents reporting to this Root Port or RCEC */
if (reg32 & AER_ERR_STATUS_MASK) {
if (pci_pcie_type(pdev) == PCI_EXP_TYPE_RC_EC)
pcie_walk_rcec(pdev, clear_status_iter, NULL);
else if (pdev->subordinate)
pci_walk_bus(pdev->subordinate, clear_status_iter,
NULL);
}
pci_read_config_dword(pdev, aer + PCI_ERR_COR_STATUS, &reg32);
pci_write_config_dword(pdev, aer + PCI_ERR_COR_STATUS, reg32);
pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &reg32);


@@ -199,7 +199,7 @@ static void pcie_bwnotif_enable(struct pcie_device *srv)
* Update after enabling notifications & clearing status bits ensures
* link speed is up to date.
*/
pcie_update_link_speed(port->subordinate, PCIE_BWCTRL_ENABLE);
}
static void pcie_bwnotif_disable(struct pci_dev *port)
@@ -234,7 +234,7 @@ static irqreturn_t pcie_bwnotif_irq(int irq, void *context)
* speed (inside pcie_update_link_speed()) after LBMS has been
* cleared to avoid missing link speed changes.
*/
pcie_update_link_speed(port->subordinate, PCIE_BWCTRL_IRQ);
return IRQ_HANDLED;
}
@@ -250,6 +250,9 @@ static int pcie_bwnotif_probe(struct pcie_device *srv)
struct pci_dev *port = srv->port;
int ret;
if (port->no_bw_notif)
return -ENODEV;
/* Can happen if we run out of bus numbers during enumeration. */
if (!port->subordinate)
return -ENODEV;


@@ -508,23 +508,35 @@ static void pcie_port_device_remove(struct pci_dev *dev)
pci_free_irq_vectors(dev);
}
static int pcie_port_bus_match(struct device *dev, const struct device_driver *drv)
{
struct pcie_device *pciedev = to_pcie_device(dev);
const struct pcie_port_service_driver *driver = to_service_driver(drv);
if (driver->service != pciedev->service)
return 0;
if (driver->port_type != PCIE_ANY_PORT &&
driver->port_type != pci_pcie_type(pciedev->port))
return 0;
return 1;
}
/**
* pcie_port_bus_probe - probe driver for given PCI Express port service
* @dev: PCI Express port service device to probe against
*
* If PCI Express port service driver is registered with
* pcie_port_service_register(), this function will be called by the driver core
* whenever match is found between the driver and a port service device.
*/
static int pcie_port_bus_probe(struct device *dev)
{
struct pcie_device *pciedev;
struct pcie_port_service_driver *driver;
int status;
if (!dev || !dev->driver)
return -ENODEV;
driver = to_service_driver(dev->driver);
if (!driver || !driver->probe)
return -ENODEV;
@@ -539,7 +551,7 @@ static int pcie_port_probe_service(struct device *dev)
}
/**
* pcie_port_bus_remove - detach driver from given PCI Express port service
* @dev: PCI Express port service device to handle
*
* If PCI Express port service driver is registered with
@@ -547,33 +559,25 @@ static int pcie_port_probe_service(struct device *dev)
* when device_unregister() is called for the port service device associated
* with the driver.
*/
static void pcie_port_bus_remove(struct device *dev)
{
struct pcie_device *pciedev;
struct pcie_port_service_driver *driver;
if (!dev || !dev->driver)
return;
pciedev = to_pcie_device(dev);
driver = to_service_driver(dev->driver);
if (driver && driver->remove)
driver->remove(pciedev);
put_device(dev);
}
const struct bus_type pcie_port_bus_type = {
.name = "pci_express",
.match = pcie_port_bus_match,
.probe = pcie_port_bus_probe,
.remove = pcie_port_bus_remove,
};
/**
* pcie_port_service_register - register PCI Express port service driver
@@ -586,9 +590,6 @@ int pcie_port_service_register(struct pcie_port_service_driver *new)
new->driver.name = new->name;
new->driver.bus = &pcie_port_bus_type;
return driver_register(&new->driver);
}


@@ -542,8 +542,10 @@ struct pci_ptm_debugfs *pcie_ptm_create_debugfs(struct device *dev, void *pdata,
return NULL;
dirname = devm_kasprintf(dev, GFP_KERNEL, "pcie_ptm_%s", dev_name(dev));
if (!dirname) {
kfree(ptm_debugfs);
return NULL;
}
ptm_debugfs->debugfs = debugfs_create_dir(dirname, NULL);
ptm_debugfs->pdata = pdata;
@@ -574,6 +576,7 @@ void pcie_ptm_destroy_debugfs(struct pci_ptm_debugfs *ptm_debugfs)
mutex_destroy(&ptm_debugfs->lock);
debugfs_remove_recursive(ptm_debugfs->debugfs);
kfree(ptm_debugfs);
}
EXPORT_SYMBOL_GPL(pcie_ptm_destroy_debugfs);
#endif


@@ -14,6 +14,7 @@
#include <linux/platform_device.h>
#include <linux/pci_hotplug.h>
#include <linux/slab.h>
#include <linux/sprintf.h>
#include <linux/module.h>
#include <linux/cpumask.h>
#include <linux/aer.h>
@@ -22,11 +23,9 @@
#include <linux/irqdomain.h>
#include <linux/pm_runtime.h>
#include <linux/bitfield.h>
#include <trace/events/pci.h>
#include "pci.h"
static struct resource busn_resource = {
.name = "PCI busn",
.start = 0,
@@ -287,8 +286,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
if ((sizeof(pci_bus_addr_t) < 8 || sizeof(resource_size_t) < 8)
&& sz64 > 0x100000000ULL) {
res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
resource_set_range(res, 0, 0);
pci_err(dev, "%s: can't handle BAR larger than 4GB (size %#010llx)\n",
res_name, (unsigned long long)sz64);
goto out;
@@ -297,8 +295,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
if ((sizeof(pci_bus_addr_t) < 8) && l) {
/* Above 32-bit boundary; try to reallocate */
res->flags |= IORESOURCE_UNSET;
resource_set_range(res, 0, sz64);
pci_info(dev, "%s: can't handle BAR above 4GB (bus address %#010llx)\n",
res_name, (unsigned long long)l64);
goto out;
@@ -525,8 +522,8 @@ static void pci_read_bridge_windows(struct pci_dev *bridge)
pci_read_config_dword(bridge, PCI_PRIMARY_BUS, &buses);
res.flags = IORESOURCE_BUS;
res.start = FIELD_GET(PCI_SECONDARY_BUS_MASK, buses);
res.end = FIELD_GET(PCI_SUBORDINATE_BUS_MASK, buses);
pci_info(bridge, "PCI bridge to %pR%s\n", &res,
bridge->transparent ? " (subtractive decode)" : "");
@@ -824,14 +821,16 @@ const char *pci_speed_string(enum pci_bus_speed speed)
}
EXPORT_SYMBOL_GPL(pci_speed_string);
void pcie_update_link_speed(struct pci_bus *bus,
enum pcie_link_change_reason reason)
{
struct pci_dev *bridge = bus->self;
u16 linksta, linksta2;
pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta);
pcie_capability_read_word(bridge, PCI_EXP_LNKSTA2, &linksta2);
__pcie_update_link_speed(bus, reason, linksta, linksta2);
}
EXPORT_SYMBOL_GPL(pcie_update_link_speed);
@@ -918,7 +917,7 @@ static void pci_set_bus_speed(struct pci_bus *bus)
pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap);
bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS];
pcie_update_link_speed(bus, PCIE_ADD_BUS);
}
}
@@ -1313,6 +1312,26 @@ static void pci_enable_rrs_sv(struct pci_dev *pdev)
static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
unsigned int available_buses);
void pbus_validate_busn(struct pci_bus *bus)
{
struct pci_bus *upstream = bus->parent;
struct pci_dev *bridge = bus->self;
/* Check that all devices are accessible */
while (upstream->parent) {
if ((bus->busn_res.end > upstream->busn_res.end) ||
(bus->number > upstream->busn_res.end) ||
(bus->number < upstream->number) ||
(bus->busn_res.end < upstream->number)) {
pci_info(bridge, "devices behind bridge are unusable because %pR cannot be assigned for them\n",
&bus->busn_res);
break;
}
upstream = upstream->parent;
}
}
/**
* pci_ea_fixed_busnrs() - Read fixed Secondary and Subordinate bus
* numbers from EA capability.
@@ -1324,7 +1343,7 @@ static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
* and subordinate bus numbers, return true with the bus numbers in @sec
* and @sub. Otherwise return false.
*/
bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub)
{
int ea, offset;
u32 dw;
@@ -1378,8 +1397,7 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
int pass)
{
struct pci_bus *child;
u32 buses;
u16 bctl;
u8 primary, secondary, subordinate;
int broken = 0;
@@ -1394,9 +1412,9 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
pm_runtime_get_sync(&dev->dev);
pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses);
primary = FIELD_GET(PCI_PRIMARY_BUS_MASK, buses);
secondary = FIELD_GET(PCI_SECONDARY_BUS_MASK, buses);
subordinate = FIELD_GET(PCI_SUBORDINATE_BUS_MASK, buses);
pci_dbg(dev, "scanning [bus %02x-%02x] behind bridge, pass %d\n",
secondary, subordinate, pass);
@@ -1423,8 +1441,15 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
if (pci_is_cardbus_bridge(dev)) {
max = pci_cardbus_scan_bridge_extend(bus, dev, buses, max,
available_buses,
pass);
goto out;
}
if ((secondary || subordinate) &&
!pcibios_assign_all_busses() && !broken) {
unsigned int cmax, buses;
/*
@@ -1466,7 +1491,7 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
* do in the second pass.
*/
if (!pass) {
if (pcibios_assign_all_busses() || broken)
/*
* Temporarily disable forwarding of the
@@ -1477,7 +1502,7 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
* ranges.
*/
pci_write_config_dword(dev, PCI_PRIMARY_BUS,
buses & PCI_SEC_LATENCY_TIMER_MASK);
goto out;
}
@@ -1508,59 +1533,16 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
if (available_buses)
available_buses--;
buses = (buses & PCI_SEC_LATENCY_TIMER_MASK) |
FIELD_PREP(PCI_PRIMARY_BUS_MASK, child->primary) |
FIELD_PREP(PCI_SECONDARY_BUS_MASK, child->busn_res.start) |
FIELD_PREP(PCI_SUBORDINATE_BUS_MASK, child->busn_res.end);
/* We need to blast all three values with a single write */
pci_write_config_dword(dev, PCI_PRIMARY_BUS, buses);
child->bridge_ctl = bctl;
max = pci_scan_child_bus_extend(child, available_buses);
/*
* Set subordinate bus number to its real value.
@@ -1572,23 +1554,10 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
pci_bus_update_busn_res_end(child, max);
pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max);
}
scnprintf(child->name, sizeof(child->name), "PCI Bus %04x:%02x",
pci_domain_nr(bus), child->number);
pbus_validate_busn(child);
out:
/* Clear errors in the Secondary Status Register */
@@ -2277,7 +2246,8 @@ int pci_configure_extended_tags(struct pci_dev *dev, void *ign)
u16 ctl;
int ret;
/* PCI_EXP_DEVCTL_EXT_TAG is RsvdP in VFs */
if (!pci_is_pcie(dev) || dev->is_virtfn)
return 0;
ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
@@ -2417,6 +2387,37 @@ static void pci_configure_serr(struct pci_dev *dev)
}
}
static void pci_configure_rcb(struct pci_dev *dev)
{
struct pci_dev *rp;
u16 rp_lnkctl;
/*
* Per PCIe r7.0, sec 7.5.3.7, RCB is only meaningful in Root Ports
* (where it is read-only), Endpoints, and Bridges. It may only be
* set for Endpoints and Bridges if it is set in the Root Port. For
* Endpoints, it is 'RsvdP' for Virtual Functions.
*/
if (!pci_is_pcie(dev) ||
pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
pci_pcie_type(dev) == PCI_EXP_TYPE_UPSTREAM ||
pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
pci_pcie_type(dev) == PCI_EXP_TYPE_RC_EC ||
dev->is_virtfn)
return;
/* Root Port often not visible to virtualized guests */
rp = pcie_find_root_port(dev);
if (!rp)
return;
pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &rp_lnkctl);
pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_RCB,
(rp_lnkctl & PCI_EXP_LNKCTL_RCB) ?
PCI_EXP_LNKCTL_RCB : 0);
}
static void pci_configure_device(struct pci_dev *dev)
{
pci_configure_mps(dev);
@@ -2426,6 +2427,7 @@ static void pci_configure_device(struct pci_dev *dev)
pci_configure_aspm_l1ss(dev);
pci_configure_eetlp_prefix(dev);
pci_configure_serr(dev);
pci_configure_rcb(dev);
pci_acpi_program_hp_params(dev);
}
@@ -2554,72 +2556,10 @@ bool pci_bus_generic_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
int timeout)
{
#ifdef CONFIG_PCI_QUIRKS
struct pci_dev *bridge = bus->self;
/*
* Certain IDT switches have an issue where they improperly trigger
* ACS Source Validation errors on completions for config reads.
*/
if (bridge && bridge->vendor == PCI_VENDOR_ID_IDT &&
bridge->device == 0x80b5)
return pci_idt_bus_quirk(bus, devfn, l, timeout);
#endif
return pci_bus_generic_read_dev_vendor_id(bus, devfn, l, timeout);
}
EXPORT_SYMBOL(pci_bus_read_dev_vendor_id);
/*
* Read the config data for a PCI device, sanity-check it,
* and fill in the dev structure.
@@ -2629,15 +2569,6 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
struct pci_dev *dev;
u32 l;
if (!pci_bus_read_dev_vendor_id(bus, devfn, &l, 60*1000))
return NULL;


@@ -13,6 +13,7 @@ config PCI_PWRCTRL_PWRSEQ
config PCI_PWRCTRL_SLOT
tristate "PCI Power Control driver for PCI slots"
select POWER_SEQUENCING
select PCI_PWRCTRL
help
Say Y here to enable the PCI Power Control driver to control the power


@@ -3,14 +3,22 @@
* Copyright (C) 2024 Linaro Ltd.
*/
#define dev_fmt(fmt) "pwrctrl: " fmt
#include <linux/device.h>
#include <linux/export.h>
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/of_graph.h>
#include <linux/of_platform.h>
#include <linux/pci.h>
#include <linux/pci-pwrctrl.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include <linux/slab.h>
#include "../pci.h"
static int pci_pwrctrl_notify(struct notifier_block *nb, unsigned long action,
void *data)
{
@@ -38,16 +46,6 @@ static int pci_pwrctrl_notify(struct notifier_block *nb, unsigned long action,
return NOTIFY_DONE;
}
/**
* pci_pwrctrl_init() - Initialize the PCI power control context struct
*
@@ -57,7 +55,7 @@ static void rescan_work_func(struct work_struct *work)
void pci_pwrctrl_init(struct pci_pwrctrl *pwrctrl, struct device *dev)
{
pwrctrl->dev = dev;
dev_set_drvdata(dev, pwrctrl);
}
EXPORT_SYMBOL_GPL(pci_pwrctrl_init);
@@ -87,8 +85,6 @@ int pci_pwrctrl_device_set_ready(struct pci_pwrctrl *pwrctrl)
if (ret)
return ret;
return 0;
}
EXPORT_SYMBOL_GPL(pci_pwrctrl_device_set_ready);
@@ -101,8 +97,6 @@ EXPORT_SYMBOL_GPL(pci_pwrctrl_device_set_ready);
*/
void pci_pwrctrl_device_unset_ready(struct pci_pwrctrl *pwrctrl)
{
/*
* We don't have to delete the link here. Typically, this function
* is only called when the power control device is being detached. If
@@ -145,6 +139,242 @@ int devm_pci_pwrctrl_device_set_ready(struct device *dev,
}
EXPORT_SYMBOL_GPL(devm_pci_pwrctrl_device_set_ready);
static int __pci_pwrctrl_power_off_device(struct device *dev)
{
struct pci_pwrctrl *pwrctrl = dev_get_drvdata(dev);
if (!pwrctrl)
return 0;
return pwrctrl->power_off(pwrctrl);
}
static void pci_pwrctrl_power_off_device(struct device_node *np)
{
struct platform_device *pdev;
int ret;
for_each_available_child_of_node_scoped(np, child)
pci_pwrctrl_power_off_device(child);
pdev = of_find_device_by_node(np);
if (!pdev)
return;
if (device_is_bound(&pdev->dev)) {
ret = __pci_pwrctrl_power_off_device(&pdev->dev);
if (ret)
dev_err(&pdev->dev, "Failed to power off device: %d", ret);
}
platform_device_put(pdev);
}
/**
* pci_pwrctrl_power_off_devices - Power off pwrctrl devices
*
* @parent: PCI host controller device
*
* Recursively traverse all pwrctrl devices for the devicetree hierarchy
* below the specified PCI host controller and power them off in a depth
* first manner.
*/
void pci_pwrctrl_power_off_devices(struct device *parent)
{
struct device_node *np = parent->of_node;
for_each_available_child_of_node_scoped(np, child)
pci_pwrctrl_power_off_device(child);
}
EXPORT_SYMBOL_GPL(pci_pwrctrl_power_off_devices);
static int __pci_pwrctrl_power_on_device(struct device *dev)
{
struct pci_pwrctrl *pwrctrl = dev_get_drvdata(dev);
if (!pwrctrl)
return 0;
return pwrctrl->power_on(pwrctrl);
}
/*
* Power on the devices in a depth first manner. Before powering on the device,
* make sure its driver is bound.
*/
static int pci_pwrctrl_power_on_device(struct device_node *np)
{
struct platform_device *pdev;
int ret;
for_each_available_child_of_node_scoped(np, child) {
ret = pci_pwrctrl_power_on_device(child);
if (ret)
return ret;
}
pdev = of_find_device_by_node(np);
if (!pdev)
return 0;
if (device_is_bound(&pdev->dev)) {
ret = __pci_pwrctrl_power_on_device(&pdev->dev);
} else {
/* FIXME: Use blocking wait instead of probe deferral */
dev_dbg(&pdev->dev, "driver is not bound\n");
ret = -EPROBE_DEFER;
}
platform_device_put(pdev);
return ret;
}
/**
* pci_pwrctrl_power_on_devices - Power on pwrctrl devices
*
* @parent: PCI host controller device
*
* Recursively traverse all pwrctrl devices for the devicetree hierarchy
* below the specified PCI host controller and power them on in a depth
* first manner. On error, all powered on devices will be powered off.
*
* Return: 0 on success, -EPROBE_DEFER if any pwrctrl driver is not bound, an
* appropriate error code otherwise.
*/
int pci_pwrctrl_power_on_devices(struct device *parent)
{
struct device_node *np = parent->of_node;
struct device_node *child = NULL;
int ret;
for_each_available_child_of_node(np, child) {
ret = pci_pwrctrl_power_on_device(child);
if (ret)
goto err_power_off;
}
return 0;
err_power_off:
for_each_available_child_of_node_scoped(np, tmp) {
if (tmp == child)
break;
pci_pwrctrl_power_off_device(tmp);
}
of_node_put(child);
return ret;
}
EXPORT_SYMBOL_GPL(pci_pwrctrl_power_on_devices);
static int pci_pwrctrl_create_device(struct device_node *np,
struct device *parent)
{
struct platform_device *pdev;
int ret;
for_each_available_child_of_node_scoped(np, child) {
ret = pci_pwrctrl_create_device(child, parent);
if (ret)
return ret;
}
/* Bail out if the platform device is already available for the node */
pdev = of_find_device_by_node(np);
if (pdev) {
platform_device_put(pdev);
return 0;
}
/*
* Sanity check to make sure that the node has the compatible property
* to allow driver binding.
*/
if (!of_property_present(np, "compatible"))
return 0;
/*
* Check whether the pwrctrl device really needs to be created or not.
* This is decided based on at least one of the power supplies defined
* in the devicetree node of the device or the graph property.
*/
if (!of_pci_supply_present(np) && !of_graph_is_present(np)) {
dev_dbg(parent, "Skipping OF node: %s\n", np->name);
return 0;
}
/* Now create the pwrctrl device */
pdev = of_platform_device_create(np, NULL, parent);
if (!pdev) {
dev_err(parent, "Failed to create pwrctrl device for node: %s\n", np->name);
return -EINVAL;
}
return 0;
}
/**
* pci_pwrctrl_create_devices - Create pwrctrl devices
*
* @parent: PCI host controller device
*
* Recursively create pwrctrl devices for the devicetree hierarchy below
* the specified PCI host controller in a depth first manner. On error, all
* created devices will be destroyed.
*
* Return: 0 on success, negative error number on error.
*/
int pci_pwrctrl_create_devices(struct device *parent)
{
int ret;
for_each_available_child_of_node_scoped(parent->of_node, child) {
ret = pci_pwrctrl_create_device(child, parent);
if (ret) {
pci_pwrctrl_destroy_devices(parent);
return ret;
}
}
return 0;
}
EXPORT_SYMBOL_GPL(pci_pwrctrl_create_devices);
static void pci_pwrctrl_destroy_device(struct device_node *np)
{
struct platform_device *pdev;
for_each_available_child_of_node_scoped(np, child)
pci_pwrctrl_destroy_device(child);
pdev = of_find_device_by_node(np);
if (!pdev)
return;
of_device_unregister(pdev);
platform_device_put(pdev);
of_node_clear_flag(np, OF_POPULATED);
}
/**
* pci_pwrctrl_destroy_devices - Destroy pwrctrl devices
*
* @parent: PCI host controller device
*
* Recursively destroy pwrctrl devices for the devicetree hierarchy below
* the specified PCI host controller in a depth first manner.
*/
void pci_pwrctrl_destroy_devices(struct device *parent)
{
struct device_node *np = parent->of_node;
for_each_available_child_of_node_scoped(np, child)
pci_pwrctrl_destroy_device(child);
}
EXPORT_SYMBOL_GPL(pci_pwrctrl_destroy_devices);
MODULE_AUTHOR("Bartosz Golaszewski <bartosz.golaszewski@linaro.org>");
MODULE_DESCRIPTION("PCI Device Power Control core driver");
MODULE_LICENSE("GPL");


@@ -13,12 +13,12 @@
#include <linux/slab.h>
#include <linux/types.h>
struct pwrseq_pwrctrl {
struct pci_pwrctrl pwrctrl;
struct pwrseq_desc *pwrseq;
};
struct pwrseq_pwrctrl_pdata {
const char *target;
/*
* Called before doing anything else to perform device-specific
@@ -27,7 +27,7 @@ struct pci_pwrctrl_pwrseq_pdata {
int (*validate_device)(struct device *dev);
};
static int pwrseq_pwrctrl_qcm_wcn_validate_device(struct device *dev)
{
/*
* Old device trees for some platforms already define wifi nodes for
@@ -47,22 +47,38 @@ static int pci_pwrctrl_pwrseq_qcm_wcn_validate_device(struct device *dev)
return 0;
}
static const struct pwrseq_pwrctrl_pdata pwrseq_pwrctrl_qcom_wcn_pdata = {
.target = "wlan",
.validate_device = pwrseq_pwrctrl_qcm_wcn_validate_device,
};
static int pwrseq_pwrctrl_power_on(struct pci_pwrctrl *pwrctrl)
{
struct pwrseq_pwrctrl *pwrseq = container_of(pwrctrl,
struct pwrseq_pwrctrl, pwrctrl);
return pwrseq_power_on(pwrseq->pwrseq);
}
static int pwrseq_pwrctrl_power_off(struct pci_pwrctrl *pwrctrl)
{
struct pwrseq_pwrctrl *pwrseq = container_of(pwrctrl,
struct pwrseq_pwrctrl, pwrctrl);
return pwrseq_power_off(pwrseq->pwrseq);
}
static void devm_pwrseq_pwrctrl_power_off(void *data)
{
struct pwrseq_pwrctrl *pwrseq = data;
pwrseq_pwrctrl_power_off(&pwrseq->pwrctrl);
}
static int pwrseq_pwrctrl_probe(struct platform_device *pdev)
{
const struct pwrseq_pwrctrl_pdata *pdata;
struct pwrseq_pwrctrl *pwrseq;
struct device *dev = &pdev->dev;
int ret;
@@ -76,28 +92,26 @@ static int pci_pwrctrl_pwrseq_probe(struct platform_device *pdev)
return ret;
}
pwrseq = devm_kzalloc(dev, sizeof(*pwrseq), GFP_KERNEL);
if (!pwrseq)
return -ENOMEM;
pwrseq->pwrseq = devm_pwrseq_get(dev, pdata->target);
if (IS_ERR(pwrseq->pwrseq))
return dev_err_probe(dev, PTR_ERR(pwrseq->pwrseq),
"Failed to get the power sequencer\n");
ret = devm_add_action_or_reset(dev, devm_pwrseq_pwrctrl_power_off,
pwrseq);
if (ret)
return ret;
pwrseq->pwrctrl.power_on = pwrseq_pwrctrl_power_on;
pwrseq->pwrctrl.power_off = pwrseq_pwrctrl_power_off;
pci_pwrctrl_init(&pwrseq->pwrctrl, dev);
ret = devm_pci_pwrctrl_device_set_ready(dev, &pwrseq->pwrctrl);
if (ret)
return dev_err_probe(dev, ret,
"Failed to register the pwrctrl wrapper\n");
@@ -105,34 +119,34 @@ static int pci_pwrctrl_pwrseq_probe(struct platform_device *pdev)
return 0;
}
static const struct of_device_id pwrseq_pwrctrl_of_match[] = {
{
/* ATH11K in QCA6390 package. */
.compatible = "pci17cb,1101",
.data = &pwrseq_pwrctrl_qcom_wcn_pdata,
},
{
/* ATH11K in WCN6855 package. */
.compatible = "pci17cb,1103",
.data = &pwrseq_pwrctrl_qcom_wcn_pdata,
},
{
/* ATH12K in WCN7850 package. */
.compatible = "pci17cb,1107",
.data = &pwrseq_pwrctrl_qcom_wcn_pdata,
},
{ }
};
MODULE_DEVICE_TABLE(of, pwrseq_pwrctrl_of_match);
static struct platform_driver pwrseq_pwrctrl_driver = {
.driver = {
.name = "pci-pwrctrl-pwrseq",
.of_match_table = pwrseq_pwrctrl_of_match,
},
.probe = pwrseq_pwrctrl_probe,
};
module_platform_driver(pwrseq_pwrctrl_driver);
MODULE_AUTHOR("Bartosz Golaszewski <bartosz.golaszewski@linaro.org>");
MODULE_DESCRIPTION("Generic PCI Power Control module for power sequenced devices");


@@ -59,7 +59,7 @@
#define TC9563_POWER_CONTROL_OVREN 0x82b2c8
#define TC9563_GPIO_MASK 0xfffffff3
#define TC9563_GPIO_DEASSERT_BITS 0xc /* Clear to deassert GPIO */
#define TC9563_TX_MARGIN_MIN_UA 400000
@@ -69,7 +69,7 @@
*/
#define TC9563_OSC_STAB_DELAY_US (10 * USEC_PER_MSEC)
#define TC9563_L0S_L1_DELAY_UNIT_NS 256 /* Each unit represents 256 ns */
struct tc9563_pwrctrl_reg_setting {
unsigned int offset;
@@ -105,13 +105,13 @@ static const char *const tc9563_supply_names[TC9563_PWRCTL_MAX_SUPPLY] = {
"vddio18",
};
struct tc9563_pwrctrl {
struct pci_pwrctrl pwrctrl;
struct regulator_bulk_data supplies[TC9563_PWRCTL_MAX_SUPPLY];
struct tc9563_pwrctrl_cfg cfg[TC9563_MAX];
struct gpio_desc *reset_gpio;
struct i2c_adapter *adapter;
struct i2c_client *client;
};
/*
@@ -217,7 +218,7 @@ static int tc9563_pwrctrl_i2c_read(struct i2c_client *client,
}
static int tc9563_pwrctrl_i2c_bulk_write(struct i2c_client *client,
const struct tc9563_pwrctrl_reg_setting *seq,
int len)
{
int ret, i;
@@ -230,10 +231,10 @@ static int tc9563_pwrctrl_i2c_bulk_write(struct i2c_client *client,
return 0;
}
static int tc9563_pwrctrl_disable_port(struct tc9563_pwrctrl *tc9563,
enum tc9563_pwrctrl_ports port)
{
struct tc9563_pwrctrl_cfg *cfg = &tc9563->cfg[port];
const struct tc9563_pwrctrl_reg_setting *seq;
int ret, len;
@@ -248,16 +249,17 @@ static int tc9563_pwrctrl_disable_port(struct tc9563_pwrctrl_ctx *ctx,
len = ARRAY_SIZE(dsp2_pwroff_seq);
}
ret = tc9563_pwrctrl_i2c_bulk_write(tc9563->client, seq, len);
if (ret)
return ret;
return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, common_pwroff_seq,
ARRAY_SIZE(common_pwroff_seq));
}
static int tc9563_pwrctrl_set_l0s_l1_entry_delay(struct tc9563_pwrctrl *tc9563,
enum tc9563_pwrctrl_ports port,
bool is_l1, u32 ns)
{
u32 rd_val, units;
int ret;
@@ -269,30 +271,38 @@ static int tc9563_pwrctrl_set_l0s_l1_entry_delay(struct tc9563_pwrctrl_ctx *ctx,
units = ns / TC9563_L0S_L1_DELAY_UNIT_NS;
if (port == TC9563_ETHERNET) {
ret = tc9563_pwrctrl_i2c_read(ctx->client, TC9563_EMBEDDED_ETH_DELAY, &rd_val);
ret = tc9563_pwrctrl_i2c_read(tc9563->client,
TC9563_EMBEDDED_ETH_DELAY,
&rd_val);
if (ret)
return ret;
if (is_l1)
rd_val = u32_replace_bits(rd_val, units, TC9563_ETH_L1_DELAY_MASK);
rd_val = u32_replace_bits(rd_val, units,
TC9563_ETH_L1_DELAY_MASK);
else
rd_val = u32_replace_bits(rd_val, units, TC9563_ETH_L0S_DELAY_MASK);
rd_val = u32_replace_bits(rd_val, units,
TC9563_ETH_L0S_DELAY_MASK);
return tc9563_pwrctrl_i2c_write(ctx->client, TC9563_EMBEDDED_ETH_DELAY, rd_val);
return tc9563_pwrctrl_i2c_write(tc9563->client,
TC9563_EMBEDDED_ETH_DELAY,
rd_val);
}
ret = tc9563_pwrctrl_i2c_write(ctx->client, TC9563_PORT_SELECT, BIT(port));
ret = tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_PORT_SELECT,
BIT(port));
if (ret)
return ret;
return tc9563_pwrctrl_i2c_write(ctx->client,
is_l1 ? TC9563_PORT_L1_DELAY : TC9563_PORT_L0S_DELAY, units);
return tc9563_pwrctrl_i2c_write(tc9563->client,
is_l1 ? TC9563_PORT_L1_DELAY : TC9563_PORT_L0S_DELAY,
units);
}
static int tc9563_pwrctrl_set_tx_amplitude(struct tc9563_pwrctrl_ctx *ctx,
static int tc9563_pwrctrl_set_tx_amplitude(struct tc9563_pwrctrl *tc9563,
enum tc9563_pwrctrl_ports port)
{
u32 amp = ctx->cfg[port].tx_amp;
u32 amp = tc9563->cfg[port].tx_amp;
int port_access;
if (amp < TC9563_TX_MARGIN_MIN_UA)
@@ -321,13 +331,14 @@ static int tc9563_pwrctrl_set_tx_amplitude(struct tc9563_pwrctrl_ctx *ctx,
{TC9563_TX_MARGIN, amp},
};
return tc9563_pwrctrl_i2c_bulk_write(ctx->client, tx_amp_seq, ARRAY_SIZE(tx_amp_seq));
return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, tx_amp_seq,
ARRAY_SIZE(tx_amp_seq));
}
static int tc9563_pwrctrl_disable_dfe(struct tc9563_pwrctrl_ctx *ctx,
static int tc9563_pwrctrl_disable_dfe(struct tc9563_pwrctrl *tc9563,
enum tc9563_pwrctrl_ports port)
{
struct tc9563_pwrctrl_cfg *cfg = &ctx->cfg[port];
struct tc9563_pwrctrl_cfg *cfg = &tc9563->cfg[port];
int port_access, lane_access = 0x3;
u32 phy_rate = 0x21;
@@ -364,14 +375,14 @@ static int tc9563_pwrctrl_disable_dfe(struct tc9563_pwrctrl_ctx *ctx,
{TC9563_PHY_RATE_CHANGE_OVERRIDE, 0x0},
};
return tc9563_pwrctrl_i2c_bulk_write(ctx->client,
disable_dfe_seq, ARRAY_SIZE(disable_dfe_seq));
return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, disable_dfe_seq,
ARRAY_SIZE(disable_dfe_seq));
}
static int tc9563_pwrctrl_set_nfts(struct tc9563_pwrctrl_ctx *ctx,
static int tc9563_pwrctrl_set_nfts(struct tc9563_pwrctrl *tc9563,
enum tc9563_pwrctrl_ports port)
{
u8 *nfts = ctx->cfg[port].nfts;
u8 *nfts = tc9563->cfg[port].nfts;
struct tc9563_pwrctrl_reg_setting nfts_seq[] = {
{TC9563_NFTS_2_5_GT, nfts[0]},
{TC9563_NFTS_5_GT, nfts[1]},
@@ -381,30 +392,35 @@ static int tc9563_pwrctrl_set_nfts(struct tc9563_pwrctrl_ctx *ctx,
if (!nfts[0])
return 0;
ret = tc9563_pwrctrl_i2c_write(ctx->client, TC9563_PORT_SELECT, BIT(port));
ret = tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_PORT_SELECT,
BIT(port));
if (ret)
return ret;
return tc9563_pwrctrl_i2c_bulk_write(ctx->client, nfts_seq, ARRAY_SIZE(nfts_seq));
return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, nfts_seq,
ARRAY_SIZE(nfts_seq));
}
static int tc9563_pwrctrl_assert_deassert_reset(struct tc9563_pwrctrl_ctx *ctx, bool deassert)
static int tc9563_pwrctrl_assert_deassert_reset(struct tc9563_pwrctrl *tc9563,
bool deassert)
{
int ret, val;
ret = tc9563_pwrctrl_i2c_write(ctx->client, TC9563_GPIO_CONFIG, TC9563_GPIO_MASK);
ret = tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_GPIO_CONFIG,
TC9563_GPIO_MASK);
if (ret)
return ret;
val = deassert ? TC9563_GPIO_DEASSERT_BITS : 0;
return tc9563_pwrctrl_i2c_write(ctx->client, TC9563_RESET_GPIO, val);
return tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_RESET_GPIO, val);
}
static int tc9563_pwrctrl_parse_device_dt(struct tc9563_pwrctrl_ctx *ctx, struct device_node *node,
static int tc9563_pwrctrl_parse_device_dt(struct tc9563_pwrctrl *tc9563,
struct device_node *node,
enum tc9563_pwrctrl_ports port)
{
struct tc9563_pwrctrl_cfg *cfg = &ctx->cfg[port];
struct tc9563_pwrctrl_cfg *cfg = &tc9563->cfg[port];
int ret;
/* Disable port if the status of the port is disabled. */
@@ -434,128 +450,137 @@ static int tc9563_pwrctrl_parse_device_dt(struct tc9563_pwrctrl_ctx *ctx, struct
return 0;
}
static void tc9563_pwrctrl_power_off(struct tc9563_pwrctrl_ctx *ctx)
static int tc9563_pwrctrl_power_off(struct pci_pwrctrl *pwrctrl)
{
gpiod_set_value(ctx->reset_gpio, 1);
struct tc9563_pwrctrl *tc9563 = container_of(pwrctrl,
struct tc9563_pwrctrl, pwrctrl);
regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
gpiod_set_value(tc9563->reset_gpio, 1);
regulator_bulk_disable(ARRAY_SIZE(tc9563->supplies), tc9563->supplies);
return 0;
}
static int tc9563_pwrctrl_bring_up(struct tc9563_pwrctrl_ctx *ctx)
static int tc9563_pwrctrl_power_on(struct pci_pwrctrl *pwrctrl)
{
struct tc9563_pwrctrl *tc9563 = container_of(pwrctrl,
struct tc9563_pwrctrl, pwrctrl);
struct device *dev = tc9563->pwrctrl.dev;
struct tc9563_pwrctrl_cfg *cfg;
int ret, i;
ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
ret = regulator_bulk_enable(ARRAY_SIZE(tc9563->supplies),
tc9563->supplies);
if (ret < 0)
return dev_err_probe(ctx->pwrctrl.dev, ret, "cannot enable regulators\n");
return dev_err_probe(dev, ret, "cannot enable regulators\n");
gpiod_set_value(ctx->reset_gpio, 0);
gpiod_set_value(tc9563->reset_gpio, 0);
fsleep(TC9563_OSC_STAB_DELAY_US);
ret = tc9563_pwrctrl_assert_deassert_reset(ctx, false);
ret = tc9563_pwrctrl_assert_deassert_reset(tc9563, false);
if (ret)
goto power_off;
for (i = 0; i < TC9563_MAX; i++) {
cfg = &ctx->cfg[i];
ret = tc9563_pwrctrl_disable_port(ctx, i);
cfg = &tc9563->cfg[i];
ret = tc9563_pwrctrl_disable_port(tc9563, i);
if (ret) {
dev_err(ctx->pwrctrl.dev, "Disabling port failed\n");
dev_err(dev, "Disabling port failed\n");
goto power_off;
}
ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(ctx, i, false, cfg->l0s_delay);
ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(tc9563, i, false, cfg->l0s_delay);
if (ret) {
dev_err(ctx->pwrctrl.dev, "Setting L0s entry delay failed\n");
dev_err(dev, "Setting L0s entry delay failed\n");
goto power_off;
}
ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(ctx, i, true, cfg->l1_delay);
ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(tc9563, i, true, cfg->l1_delay);
if (ret) {
dev_err(ctx->pwrctrl.dev, "Setting L1 entry delay failed\n");
dev_err(dev, "Setting L1 entry delay failed\n");
goto power_off;
}
ret = tc9563_pwrctrl_set_tx_amplitude(ctx, i);
ret = tc9563_pwrctrl_set_tx_amplitude(tc9563, i);
if (ret) {
dev_err(ctx->pwrctrl.dev, "Setting Tx amplitude failed\n");
dev_err(dev, "Setting Tx amplitude failed\n");
goto power_off;
}
ret = tc9563_pwrctrl_set_nfts(ctx, i);
ret = tc9563_pwrctrl_set_nfts(tc9563, i);
if (ret) {
dev_err(ctx->pwrctrl.dev, "Setting N_FTS failed\n");
dev_err(dev, "Setting N_FTS failed\n");
goto power_off;
}
ret = tc9563_pwrctrl_disable_dfe(ctx, i);
ret = tc9563_pwrctrl_disable_dfe(tc9563, i);
if (ret) {
dev_err(ctx->pwrctrl.dev, "Disabling DFE failed\n");
dev_err(dev, "Disabling DFE failed\n");
goto power_off;
}
}
ret = tc9563_pwrctrl_assert_deassert_reset(ctx, true);
ret = tc9563_pwrctrl_assert_deassert_reset(tc9563, true);
if (!ret)
return 0;
power_off:
tc9563_pwrctrl_power_off(ctx);
tc9563_pwrctrl_power_off(&tc9563->pwrctrl);
return ret;
}
static int tc9563_pwrctrl_probe(struct platform_device *pdev)
{
struct pci_host_bridge *bridge = to_pci_host_bridge(pdev->dev.parent);
struct pci_bus *bus = bridge->bus;
struct device_node *node = pdev->dev.of_node;
struct device *dev = &pdev->dev;
enum tc9563_pwrctrl_ports port;
struct tc9563_pwrctrl_ctx *ctx;
struct tc9563_pwrctrl *tc9563;
struct device_node *i2c_node;
int ret, addr;
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
tc9563 = devm_kzalloc(dev, sizeof(*tc9563), GFP_KERNEL);
if (!tc9563)
return -ENOMEM;
ret = of_property_read_u32_index(pdev->dev.of_node, "i2c-parent", 1, &addr);
ret = of_property_read_u32_index(node, "i2c-parent", 1, &addr);
if (ret)
return dev_err_probe(dev, ret, "Failed to read i2c-parent property\n");
i2c_node = of_parse_phandle(dev->of_node, "i2c-parent", 0);
ctx->adapter = of_find_i2c_adapter_by_node(i2c_node);
tc9563->adapter = of_find_i2c_adapter_by_node(i2c_node);
of_node_put(i2c_node);
if (!ctx->adapter)
if (!tc9563->adapter)
return dev_err_probe(dev, -EPROBE_DEFER, "Failed to find I2C adapter\n");
ctx->client = i2c_new_dummy_device(ctx->adapter, addr);
if (IS_ERR(ctx->client)) {
tc9563->client = i2c_new_dummy_device(tc9563->adapter, addr);
if (IS_ERR(tc9563->client)) {
dev_err(dev, "Failed to create I2C client\n");
i2c_put_adapter(ctx->adapter);
return PTR_ERR(ctx->client);
put_device(&tc9563->adapter->dev);
return PTR_ERR(tc9563->client);
}
for (int i = 0; i < ARRAY_SIZE(tc9563_supply_names); i++)
ctx->supplies[i].supply = tc9563_supply_names[i];
tc9563->supplies[i].supply = tc9563_supply_names[i];
ret = devm_regulator_bulk_get(dev, TC9563_PWRCTL_MAX_SUPPLY, ctx->supplies);
ret = devm_regulator_bulk_get(dev, TC9563_PWRCTL_MAX_SUPPLY,
tc9563->supplies);
if (ret) {
dev_err_probe(dev, ret, "failed to get supply regulator\n");
goto remove_i2c;
}
ctx->reset_gpio = devm_gpiod_get(dev, "resx", GPIOD_OUT_HIGH);
if (IS_ERR(ctx->reset_gpio)) {
ret = dev_err_probe(dev, PTR_ERR(ctx->reset_gpio), "failed to get resx GPIO\n");
tc9563->reset_gpio = devm_gpiod_get(dev, "resx", GPIOD_OUT_HIGH);
if (IS_ERR(tc9563->reset_gpio)) {
ret = dev_err_probe(dev, PTR_ERR(tc9563->reset_gpio), "failed to get resx GPIO\n");
goto remove_i2c;
}
pci_pwrctrl_init(&ctx->pwrctrl, dev);
pci_pwrctrl_init(&tc9563->pwrctrl, dev);
port = TC9563_USP;
ret = tc9563_pwrctrl_parse_device_dt(ctx, pdev->dev.of_node, port);
ret = tc9563_pwrctrl_parse_device_dt(tc9563, node, port);
if (ret) {
dev_err(dev, "failed to parse device tree properties: %d\n", ret);
goto remove_i2c;
@@ -563,18 +588,20 @@ static int tc9563_pwrctrl_probe(struct platform_device *pdev)
/*
* Downstream ports are always children of the upstream port.
* The first node represents DSP1, the second node represents DSP2, and so on.
* The first node represents DSP1, the second node represents DSP2,
* and so on.
*/
for_each_child_of_node_scoped(pdev->dev.of_node, child) {
for_each_child_of_node_scoped(node, child) {
port++;
ret = tc9563_pwrctrl_parse_device_dt(ctx, child, port);
ret = tc9563_pwrctrl_parse_device_dt(tc9563, child, port);
if (ret)
break;
/* Embedded Ethernet devices are under DSP3 */
if (port == TC9563_DSP3) {
for_each_child_of_node_scoped(child, child1) {
port++;
ret = tc9563_pwrctrl_parse_device_dt(ctx, child1, port);
ret = tc9563_pwrctrl_parse_device_dt(tc9563,
child1, port);
if (ret)
break;
}
@@ -585,45 +612,32 @@ static int tc9563_pwrctrl_probe(struct platform_device *pdev)
goto remove_i2c;
}
if (bridge->ops->assert_perst) {
ret = bridge->ops->assert_perst(bus, true);
if (ret)
goto remove_i2c;
}
tc9563->pwrctrl.power_on = tc9563_pwrctrl_power_on;
tc9563->pwrctrl.power_off = tc9563_pwrctrl_power_off;
ret = tc9563_pwrctrl_bring_up(ctx);
if (ret)
goto remove_i2c;
if (bridge->ops->assert_perst) {
ret = bridge->ops->assert_perst(bus, false);
if (ret)
goto power_off;
}
ret = devm_pci_pwrctrl_device_set_ready(dev, &ctx->pwrctrl);
ret = devm_pci_pwrctrl_device_set_ready(dev, &tc9563->pwrctrl);
if (ret)
goto power_off;
platform_set_drvdata(pdev, ctx);
return 0;
power_off:
tc9563_pwrctrl_power_off(ctx);
tc9563_pwrctrl_power_off(&tc9563->pwrctrl);
remove_i2c:
i2c_unregister_device(ctx->client);
i2c_put_adapter(ctx->adapter);
i2c_unregister_device(tc9563->client);
put_device(&tc9563->adapter->dev);
return ret;
}
static void tc9563_pwrctrl_remove(struct platform_device *pdev)
{
struct tc9563_pwrctrl_ctx *ctx = platform_get_drvdata(pdev);
struct pci_pwrctrl *pwrctrl = dev_get_drvdata(&pdev->dev);
struct tc9563_pwrctrl *tc9563 = container_of(pwrctrl,
struct tc9563_pwrctrl, pwrctrl);
tc9563_pwrctrl_power_off(ctx);
i2c_unregister_device(ctx->client);
i2c_put_adapter(ctx->adapter);
tc9563_pwrctrl_power_off(&tc9563->pwrctrl);
i2c_unregister_device(tc9563->client);
put_device(&tc9563->adapter->dev);
}
static const struct of_device_id tc9563_pwrctrl_of_match[] = {
@@ -8,36 +8,84 @@
#include <linux/device.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/of_graph.h>
#include <linux/pci-pwrctrl.h>
#include <linux/platform_device.h>
#include <linux/pwrseq/consumer.h>
#include <linux/regulator/consumer.h>
#include <linux/slab.h>
struct pci_pwrctrl_slot_data {
struct pci_pwrctrl ctx;
struct slot_pwrctrl {
struct pci_pwrctrl pwrctrl;
struct regulator_bulk_data *supplies;
int num_supplies;
struct clk *clk;
struct pwrseq_desc *pwrseq;
};
static void devm_pci_pwrctrl_slot_power_off(void *data)
static int slot_pwrctrl_power_on(struct pci_pwrctrl *pwrctrl)
{
struct pci_pwrctrl_slot_data *slot = data;
struct slot_pwrctrl *slot = container_of(pwrctrl,
struct slot_pwrctrl, pwrctrl);
int ret;
if (slot->pwrseq) {
pwrseq_power_on(slot->pwrseq);
return 0;
}
ret = regulator_bulk_enable(slot->num_supplies, slot->supplies);
if (ret < 0) {
dev_err(slot->pwrctrl.dev, "Failed to enable slot regulators\n");
return ret;
}
return clk_prepare_enable(slot->clk);
}
static int slot_pwrctrl_power_off(struct pci_pwrctrl *pwrctrl)
{
struct slot_pwrctrl *slot = container_of(pwrctrl,
struct slot_pwrctrl, pwrctrl);
if (slot->pwrseq) {
pwrseq_power_off(slot->pwrseq);
return 0;
}
regulator_bulk_disable(slot->num_supplies, slot->supplies);
clk_disable_unprepare(slot->clk);
return 0;
}
static void devm_slot_pwrctrl_release(void *data)
{
struct slot_pwrctrl *slot = data;
slot_pwrctrl_power_off(&slot->pwrctrl);
regulator_bulk_free(slot->num_supplies, slot->supplies);
}
static int pci_pwrctrl_slot_probe(struct platform_device *pdev)
static int slot_pwrctrl_probe(struct platform_device *pdev)
{
struct pci_pwrctrl_slot_data *slot;
struct slot_pwrctrl *slot;
struct device *dev = &pdev->dev;
struct clk *clk;
int ret;
slot = devm_kzalloc(dev, sizeof(*slot), GFP_KERNEL);
if (!slot)
return -ENOMEM;
if (of_graph_is_present(dev_of_node(dev))) {
slot->pwrseq = devm_pwrseq_get(dev, "pcie");
if (IS_ERR(slot->pwrseq))
return dev_err_probe(dev, PTR_ERR(slot->pwrseq),
"Failed to get the power sequencer\n");
goto skip_resources;
}
ret = of_regulator_bulk_get_all(dev, dev_of_node(dev),
&slot->supplies);
if (ret < 0) {
@@ -46,49 +94,46 @@ static int pci_pwrctrl_slot_probe(struct platform_device *pdev)
}
slot->num_supplies = ret;
ret = regulator_bulk_enable(slot->num_supplies, slot->supplies);
if (ret < 0) {
dev_err_probe(dev, ret, "Failed to enable slot regulators\n");
regulator_bulk_free(slot->num_supplies, slot->supplies);
return ret;
}
ret = devm_add_action_or_reset(dev, devm_pci_pwrctrl_slot_power_off,
slot);
if (ret)
return ret;
clk = devm_clk_get_optional_enabled(dev, NULL);
if (IS_ERR(clk)) {
return dev_err_probe(dev, PTR_ERR(clk),
slot->clk = devm_clk_get_optional(dev, NULL);
if (IS_ERR(slot->clk)) {
return dev_err_probe(dev, PTR_ERR(slot->clk),
"Failed to enable slot clock\n");
}
pci_pwrctrl_init(&slot->ctx, dev);
skip_resources:
slot->pwrctrl.power_on = slot_pwrctrl_power_on;
slot->pwrctrl.power_off = slot_pwrctrl_power_off;
ret = devm_pci_pwrctrl_device_set_ready(dev, &slot->ctx);
ret = devm_add_action_or_reset(dev, devm_slot_pwrctrl_release, slot);
if (ret)
return ret;
pci_pwrctrl_init(&slot->pwrctrl, dev);
ret = devm_pci_pwrctrl_device_set_ready(dev, &slot->pwrctrl);
if (ret)
return dev_err_probe(dev, ret, "Failed to register pwrctrl driver\n");
return 0;
}
static const struct of_device_id pci_pwrctrl_slot_of_match[] = {
static const struct of_device_id slot_pwrctrl_of_match[] = {
{
.compatible = "pciclass,0604",
},
{ }
};
MODULE_DEVICE_TABLE(of, pci_pwrctrl_slot_of_match);
MODULE_DEVICE_TABLE(of, slot_pwrctrl_of_match);
static struct platform_driver pci_pwrctrl_slot_driver = {
static struct platform_driver slot_pwrctrl_driver = {
.driver = {
.name = "pci-pwrctrl-slot",
.of_match_table = pci_pwrctrl_slot_of_match,
.of_match_table = slot_pwrctrl_of_match,
},
.probe = pci_pwrctrl_slot_probe,
.probe = slot_pwrctrl_probe,
};
module_platform_driver(pci_pwrctrl_slot_driver);
module_platform_driver(slot_pwrctrl_driver);
MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
MODULE_DESCRIPTION("Generic PCI Power Control driver for PCI Slots");
@@ -1360,6 +1360,16 @@ static void quirk_transparent_bridge(struct pci_dev *dev)
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82380FB, quirk_transparent_bridge);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TOSHIBA, 0x605, quirk_transparent_bridge);
/*
* Enabling Link Bandwidth Management Interrupts (BW notifications) can cause
* boot hangs on P45.
*/
static void quirk_p45_bw_notifications(struct pci_dev *dev)
{
dev->no_bw_notif = 1;
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e21, quirk_p45_bw_notifications);
/*
* Common misconfiguration of the MediaGX/Geode PCI master that will reduce
* PCI bandwidth from 70MB/s to 25MB/s. See the GXM/GXLV/GX1 datasheets
@@ -3749,6 +3759,14 @@ static void quirk_no_bus_reset(struct pci_dev *dev)
dev->dev_flags |= PCI_DEV_FLAGS_NO_BUS_RESET;
}
/*
* After asserting Secondary Bus Reset to downstream devices via a GB10
* Root Port, the link may not retrain correctly.
* https://lore.kernel.org/r/20251113084441.2124737-1-Johnny-CC.Chang@mediatek.com
*/
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, 0x22CE, quirk_no_bus_reset);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, 0x22D0, quirk_no_bus_reset);
/*
* Some NVIDIA GPU devices do not work with bus reset, SBR needs to be
* prevented for those affected devices.
@@ -3792,6 +3810,16 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CAVIUM, 0xa100, quirk_no_bus_reset);
*/
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TI, 0xb005, quirk_no_bus_reset);
/*
* Reports from users making use of PCI device assignment with ASM1164
* controllers indicate an issue with bus reset where the device fails to
* retrain. The issue appears more common in configurations with multiple
* controllers. The device does indicate PM reset support (NoSoftRst-),
* therefore this still leaves a viable reset method.
* https://forum.proxmox.com/threads/problems-with-pcie-passthrough-with-two-identical-devices.149003/
*/
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ASMEDIA, 0x1164, quirk_no_bus_reset);
static void quirk_no_pm_reset(struct pci_dev *dev)
{
/*
@@ -4470,6 +4498,16 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_BROADCOM, 0x9000,
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_BROADCOM, 0x9084,
quirk_bridge_cavm_thrx2_pcie_root);
/*
* AST1150 doesn't use a real PCI bus and always forwards the requester ID
* from downstream devices.
*/
static void quirk_aspeed_pci_bridge_no_alias(struct pci_dev *pdev)
{
pdev->dev_flags |= PCI_DEV_FLAGS_PCI_BRIDGE_NO_ALIAS;
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ASPEED, 0x1150, quirk_aspeed_pci_bridge_no_alias);
/*
* Intersil/Techwell TW686[4589]-based video capture cards have an empty (zero)
* class code. Fix it.
@@ -5124,6 +5162,10 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs },
/* QCOM SA8775P root port */
{ PCI_VENDOR_ID_QCOM, 0x0115, pci_quirk_qcom_rp_acs },
/* QCOM Hamoa root port */
{ PCI_VENDOR_ID_QCOM, 0x0111, pci_quirk_qcom_rp_acs },
/* QCOM Glymur root port */
{ PCI_VENDOR_ID_QCOM, 0x0120, pci_quirk_qcom_rp_acs },
/* HXT SD4800 root ports. The ACS design is same as QCOM QDF2xxx */
{ PCI_VENDOR_ID_HXT, 0x0401, pci_quirk_qcom_rp_acs },
/* Intel PCH root ports */
@@ -5598,6 +5640,7 @@ static void quirk_no_ext_tags(struct pci_dev *pdev)
pci_walk_bus(bridge->bus, pci_configure_extended_tags, NULL);
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1004, quirk_no_ext_tags);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1005, quirk_no_ext_tags);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0132, quirk_no_ext_tags);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0140, quirk_no_ext_tags);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0141, quirk_no_ext_tags);
@@ -5796,7 +5839,7 @@ DECLARE_PCI_FIXUP_CLASS_RESUME_EARLY(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
/*
* Some IDT switches incorrectly flag an ACS Source Validation error on
* completions for config read requests even though PCIe r4.0, sec
* completions for config read requests even though PCIe r7.0, sec
* 6.12.1.1, says that completions are never affected by ACS Source
* Validation. Here's the text of IDT 89H32H8G3-YC, erratum #36:
*
@@ -5809,44 +5852,20 @@ DECLARE_PCI_FIXUP_CLASS_RESUME_EARLY(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
*
* The workaround suggested by IDT is to issue a config write to the
* downstream device before issuing the first config read. This allows the
* downstream device to capture its bus and device numbers (see PCIe r4.0,
* sec 2.2.9), thus avoiding the ACS error on the completion.
* downstream device to capture its bus and device numbers (see PCIe r7.0,
* sec 2.2.9.1), thus avoiding the ACS error on the completion.
*
* However, we don't know when the device is ready to accept the config
* write, so we do config reads until we receive a non-Config Request Retry
* Status, then do the config write.
*
* To avoid hitting the erratum when doing the config reads, we disable ACS
* SV around this process.
* write, and the issue affects resets of the switch as well as enumeration,
* so disable use of ACS SV for these devices altogether.
*/
int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *l, int timeout)
void pci_disable_broken_acs_cap(struct pci_dev *pdev)
{
int pos;
u16 ctrl = 0;
bool found;
struct pci_dev *bridge = bus->self;
pos = bridge->acs_cap;
/* Disable ACS SV before initial config reads */
if (pos) {
pci_read_config_word(bridge, pos + PCI_ACS_CTRL, &ctrl);
if (ctrl & PCI_ACS_SV)
pci_write_config_word(bridge, pos + PCI_ACS_CTRL,
ctrl & ~PCI_ACS_SV);
if (pdev->vendor == PCI_VENDOR_ID_IDT &&
(pdev->device == 0x80b5 || pdev->device == 0x8090)) {
pci_info(pdev, "Disabling broken ACS SV; downstream device isolation reduced\n");
pdev->acs_capabilities &= ~PCI_ACS_SV;
}
found = pci_bus_generic_read_dev_vendor_id(bus, devfn, l, timeout);
/* Write Vendor ID (read-only) so the endpoint latches its bus/dev */
if (found)
pci_bus_write_config_word(bus, devfn, PCI_VENDOR_ID, 0);
/* Re-enable ACS_SV if it was previously enabled */
if (ctrl & PCI_ACS_SV)
pci_write_config_word(bridge, pos + PCI_ACS_CTRL, ctrl);
return found;
}
/*
@@ -6205,6 +6224,10 @@ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2303,
pci_fixup_pericom_acs_store_forward);
DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2303,
pci_fixup_pericom_acs_store_forward);
DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0xb404,
pci_fixup_pericom_acs_store_forward);
DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0xb404,
pci_fixup_pericom_acs_store_forward);
static void nvidia_ion_ahci_fixup(struct pci_dev *pdev)
{
@@ -17,25 +17,6 @@ static void pci_free_resources(struct pci_dev *dev)
}
}
static void pci_pwrctrl_unregister(struct device *dev)
{
struct device_node *np;
struct platform_device *pdev;
np = dev_of_node(dev);
if (!np)
return;
pdev = of_find_device_by_node(np);
if (!pdev)
return;
of_device_unregister(pdev);
put_device(&pdev->dev);
of_node_clear_flag(np, OF_POPULATED);
}
static void pci_stop_dev(struct pci_dev *dev)
{
pci_pme_active(dev, false);
@@ -73,7 +54,6 @@ static void pci_destroy_dev(struct pci_dev *dev)
pci_ide_destroy(dev);
pcie_aspm_exit_link_state(dev);
pci_bridge_d3_update(dev);
pci_pwrctrl_unregister(&dev->dev);
pci_free_resources(dev);
put_device(&dev->dev);
}
@@ -86,6 +86,8 @@ int pci_for_each_dma_alias(struct pci_dev *pdev,
case PCI_EXP_TYPE_DOWNSTREAM:
continue;
case PCI_EXP_TYPE_PCI_BRIDGE:
if (tmp->dev_flags & PCI_DEV_FLAGS_PCI_BRIDGE_NO_ALIAS)
continue;
ret = fn(tmp,
PCI_DEVID(tmp->subordinate->number,
PCI_DEVFN(0, 0)), data);

File diff suppressed because it is too large

drivers/pci/setup-cardbus.c (new file)

@@ -0,0 +1,306 @@
// SPDX-License-Identifier: GPL-2.0
/*
* CardBus bridge setup routines.
*/
#include <linux/bitfield.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#include <linux/sprintf.h>
#include <linux/types.h>
#include "pci.h"
#define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */
#define CARDBUS_RESERVE_BUSNR 3
#define DEFAULT_CARDBUS_IO_SIZE SZ_256
#define DEFAULT_CARDBUS_MEM_SIZE SZ_64M
/* pci=cbmemsize=nnM,cbiosize=nn can override this */
static unsigned long pci_cardbus_io_size = DEFAULT_CARDBUS_IO_SIZE;
static unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE;
unsigned long pci_cardbus_resource_alignment(struct resource *res)
{
if (res->flags & IORESOURCE_IO)
return pci_cardbus_io_size;
if (res->flags & IORESOURCE_MEM)
return pci_cardbus_mem_size;
return 0;
}
int pci_bus_size_cardbus_bridge(struct pci_bus *bus,
struct list_head *realloc_head)
{
struct pci_dev *bridge = bus->self;
struct resource *b_res;
resource_size_t b_res_3_size = pci_cardbus_mem_size * 2;
u16 ctrl;
b_res = &bridge->resource[PCI_CB_BRIDGE_IO_0_WINDOW];
if (resource_assigned(b_res))
goto handle_b_res_1;
/*
* Reserve some resources for CardBus. We reserve a fixed amount
* of bus space for CardBus bridges.
*/
resource_set_range(b_res, pci_cardbus_io_size, pci_cardbus_io_size);
b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
if (realloc_head) {
b_res->end -= pci_cardbus_io_size;
pci_dev_res_add_to_list(realloc_head, bridge, b_res,
pci_cardbus_io_size,
pci_cardbus_io_size);
}
handle_b_res_1:
b_res = &bridge->resource[PCI_CB_BRIDGE_IO_1_WINDOW];
if (resource_assigned(b_res))
goto handle_b_res_2;
resource_set_range(b_res, pci_cardbus_io_size, pci_cardbus_io_size);
b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
if (realloc_head) {
b_res->end -= pci_cardbus_io_size;
pci_dev_res_add_to_list(realloc_head, bridge, b_res,
pci_cardbus_io_size,
pci_cardbus_io_size);
}
handle_b_res_2:
/* MEM1 must not be pref MMIO */
pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM1) {
ctrl &= ~PCI_CB_BRIDGE_CTL_PREFETCH_MEM1;
pci_write_config_word(bridge, PCI_CB_BRIDGE_CONTROL, ctrl);
pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
}
/* Check whether prefetchable memory is supported by this bridge. */
pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
if (!(ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0)) {
ctrl |= PCI_CB_BRIDGE_CTL_PREFETCH_MEM0;
pci_write_config_word(bridge, PCI_CB_BRIDGE_CONTROL, ctrl);
pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
}
b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_0_WINDOW];
if (resource_assigned(b_res))
goto handle_b_res_3;
/*
* If we have prefetchable memory support, allocate two regions.
* Otherwise, allocate one region of twice the size.
*/
if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0) {
resource_set_range(b_res, pci_cardbus_mem_size,
pci_cardbus_mem_size);
b_res->flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH |
IORESOURCE_STARTALIGN;
if (realloc_head) {
b_res->end -= pci_cardbus_mem_size;
pci_dev_res_add_to_list(realloc_head, bridge, b_res,
pci_cardbus_mem_size,
pci_cardbus_mem_size);
}
/* Reduce that to half */
b_res_3_size = pci_cardbus_mem_size;
}
handle_b_res_3:
b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_1_WINDOW];
if (resource_assigned(b_res))
goto handle_done;
resource_set_range(b_res, pci_cardbus_mem_size, b_res_3_size);
b_res->flags |= IORESOURCE_MEM | IORESOURCE_STARTALIGN;
if (realloc_head) {
b_res->end -= b_res_3_size;
pci_dev_res_add_to_list(realloc_head, bridge, b_res,
b_res_3_size, pci_cardbus_mem_size);
}
handle_done:
return 0;
}
void pci_setup_cardbus_bridge(struct pci_bus *bus)
{
struct pci_dev *bridge = bus->self;
struct resource *res;
struct pci_bus_region region;
pci_info(bridge, "CardBus bridge to %pR\n",
&bus->busn_res);
res = bus->resource[0];
pcibios_resource_to_bus(bridge->bus, &region, res);
if (resource_assigned(res) && res->flags & IORESOURCE_IO) {
/*
* The IO resource is allocated a range twice as large as it
* would normally need. This allows us to set both IO regs.
*/
pci_info(bridge, " bridge window %pR\n", res);
pci_write_config_dword(bridge, PCI_CB_IO_BASE_0,
region.start);
pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_0,
region.end);
}
res = bus->resource[1];
pcibios_resource_to_bus(bridge->bus, &region, res);
if (resource_assigned(res) && res->flags & IORESOURCE_IO) {
pci_info(bridge, " bridge window %pR\n", res);
pci_write_config_dword(bridge, PCI_CB_IO_BASE_1,
region.start);
pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_1,
region.end);
}
res = bus->resource[2];
pcibios_resource_to_bus(bridge->bus, &region, res);
if (resource_assigned(res) && res->flags & IORESOURCE_MEM) {
pci_info(bridge, " bridge window %pR\n", res);
pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0,
region.start);
pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_0,
region.end);
}
res = bus->resource[3];
pcibios_resource_to_bus(bridge->bus, &region, res);
if (resource_assigned(res) && res->flags & IORESOURCE_MEM) {
pci_info(bridge, " bridge window %pR\n", res);
pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1,
region.start);
pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_1,
region.end);
}
}
EXPORT_SYMBOL(pci_setup_cardbus_bridge);
int pci_setup_cardbus(char *str)
{
if (!strncmp(str, "cbiosize=", 9)) {
pci_cardbus_io_size = memparse(str + 9, &str);
return 0;
} else if (!strncmp(str, "cbmemsize=", 10)) {
pci_cardbus_mem_size = memparse(str + 10, &str);
return 0;
}
return -ENOENT;
}
int pci_cardbus_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
u32 buses, int max,
unsigned int available_buses, int pass)
{
struct pci_bus *child;
bool fixed_buses;
u8 fixed_sec, fixed_sub;
int next_busnr;
u32 i, j = 0;
/*
* We need to assign a number to this bus, which we always do in the
* second pass.
*/
if (!pass) {
/*
* Temporarily disable forwarding of the configuration
* cycles on all bridges in this bus segment to avoid
* possible conflicts in the second pass between two bridges
* programmed with overlapping bus ranges.
*/
pci_write_config_dword(dev, PCI_PRIMARY_BUS,
buses & PCI_SEC_LATENCY_TIMER_MASK);
return max;
}
/* Clear errors */
pci_write_config_word(dev, PCI_STATUS, 0xffff);
/* Read bus numbers from EA Capability (if present) */
fixed_buses = pci_ea_fixed_busnrs(dev, &fixed_sec, &fixed_sub);
if (fixed_buses)
next_busnr = fixed_sec;
else
next_busnr = max + 1;
/*
* Prevent assigning a bus number that already exists. This can
* happen when a bridge is hot-plugged, so in this case we only
* re-scan this bus.
*/
child = pci_find_bus(pci_domain_nr(bus), next_busnr);
if (!child) {
child = pci_add_new_bus(bus, dev, next_busnr);
if (!child)
return max;
pci_bus_insert_busn_res(child, next_busnr, bus->busn_res.end);
}
max++;
if (available_buses)
available_buses--;
buses = (buses & PCI_SEC_LATENCY_TIMER_MASK) |
FIELD_PREP(PCI_PRIMARY_BUS_MASK, child->primary) |
FIELD_PREP(PCI_SECONDARY_BUS_MASK, child->busn_res.start) |
FIELD_PREP(PCI_SUBORDINATE_BUS_MASK, child->busn_res.end);
/*
* yenta.c forces a secondary latency timer of 176.
* Copy that behaviour here.
*/
buses &= ~PCI_SEC_LATENCY_TIMER_MASK;
buses |= FIELD_PREP(PCI_SEC_LATENCY_TIMER_MASK, CARDBUS_LATENCY_TIMER);
/* We need to blast all three values with a single write */
pci_write_config_dword(dev, PCI_PRIMARY_BUS, buses);
/*
* For CardBus bridges, we leave 4 bus numbers as cards with a
* PCI-to-PCI bridge can be inserted later.
*/
for (i = 0; i < CARDBUS_RESERVE_BUSNR; i++) {
struct pci_bus *parent = bus;
if (pci_find_bus(pci_domain_nr(bus), max + i + 1))
break;
while (parent->parent) {
if (!pcibios_assign_all_busses() &&
(parent->busn_res.end > max) &&
(parent->busn_res.end <= max + i)) {
j = 1;
}
parent = parent->parent;
}
if (j) {
/*
* Often, there are two CardBus bridges -- try to
* leave one valid bus number for each one.
*/
i /= 2;
break;
}
}
max += i;
/*
* Set subordinate bus number to its real value. If fixed
* subordinate bus number exists from EA capability then use it.
*/
if (fixed_buses)
max = fixed_sub;
pci_bus_update_busn_res_end(child, max);
pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max);
scnprintf(child->name, sizeof(child->name), "PCI CardBus %04x:%02x",
pci_domain_nr(bus), child->number);
pbus_validate_busn(child);
return max;
}

Some files were not shown because too many files have changed in this diff