Merge branch 'pci/controller/dwc'

- Extend PCI_FIND_NEXT_CAP() and PCI_FIND_NEXT_EXT_CAP() to return the
  position of the preceding Capability (Qiang Yu)

- Add dw_pcie_remove_capability() and dw_pcie_remove_ext_capability() to
  remove Capabilities that are advertised but not fully implemented (Qiang
  Yu)

- Remove MSI and MSI-X Capabilities for DWC controllers in platforms that
  can't support them, so we automatically fall back to INTx (Qiang Yu)

- Remove MSI-X and DPC Capabilities for Qualcomm platforms that advertise
  but don't support them (Qiang Yu)

- Remove duplicate dw_pcie_ep_hide_ext_capability() function and replace
  with dw_pcie_remove_ext_capability() (Qiang Yu)

- Add ASPM L1.1 and L1.2 Substates context to debugfs ltssm_status for
  drivers that support this (Shawn Lin)

- Skip PME_Turn_Off broadcast and L2/L3 transition during suspend if link
  is not up to avoid an unnecessary timeout (Manivannan Sadhasivam)

- Revert dw-rockchip, qcom, and DWC core changes that used link-up IRQs to
  trigger enumeration instead of waiting for the link to come up, because
  the PCI core doesn't allocate bus number space for hierarchies that might
  be attached (Niklas Cassel)

- Make endpoint iATU entry for MSI permanent instead of programming it
  dynamically, which is slow and racy with respect to other concurrent
  traffic, e.g., eDMA (Koichiro Den)

- Use iMSI-RX MSI target address when possible to fix endpoints using
  32-bit MSI (Shawn Lin)

- Make dw_pcie_ltssm_status_string() available and use it for logging
  errors in dw_pcie_wait_for_link() (Manivannan Sadhasivam)

- Return -ENODEV when dw_pcie_wait_for_link() finds no devices, -EIO for
  device present but inactive, -ETIMEDOUT for other failures, so callers
  can handle these cases differently (Manivannan Sadhasivam)

- Allow DWC host controller driver probe to continue if device is not found
  or found but inactive; only fail when there's an error with the link
  (Manivannan Sadhasivam)

- For controllers like NXP i.MX6QP and i.MX7D, where LTSSM registers are
  not accessible after PME_Turn_Off, simply wait 10ms instead of polling
  for L2/L3 Ready (Richard Zhu)

- Use multiple iATU entries to map large bridge windows and DMA ranges when
  necessary instead of failing (Samuel Holland)

- Rename struct dw_pcie_rp.has_msi_ctrl to .use_imsi_rx for clarity (Qiang
  Yu)

- Add EPC dynamic_inbound_mapping feature bit for Endpoint Controllers that
  can update BAR inbound address translation without requiring EPF driver
  to clear/reset the BAR first, and advertise it for DWC-based Endpoints
  (Koichiro Den)

- Add EPC subrange_mapping feature bit for Endpoint Controllers that can
  map multiple independent inbound regions in a single BAR, implement
  subrange mapping, advertise it for DWC-based Endpoints, and add Endpoint
  selftests for it (Koichiro Den)

- Allow overriding default BAR sizes for pci-epf-test (Niklas Cassel)

- Make resizable BARs work for Endpoint multi-PF configurations; previously
  it only worked for PF 0 (Aksh Garg)

- Fix Endpoint non-PF 0 support for BAR configuration, ATU mappings, and
  Address Match Mode (Aksh Garg)

- Fix issues with outbound iATU index assignment that caused iATU index to
  be out of bounds (Niklas Cassel)

- Clean up iATU index tracking to be consistent (Niklas Cassel)

- Set up iATU when ECAM is enabled; previously IO and MEM outbound windows
  weren't programmed, and ECAM-related iATU entries weren't restored after
  suspend/resume, so config accesses failed (Krishna Chaitanya Chundru)

* pci/controller/dwc:
  PCI: dwc: Fix missing iATU setup when ECAM is enabled
  PCI: dwc: Clean up iATU index usage in dw_pcie_iatu_setup()
  PCI: dwc: Fix msg_atu_index assignment
  PCI: dwc: ep: Add comment explaining controller level PTM access in multi PF setup
  PCI: dwc: ep: Add per-PF BAR and inbound ATU mapping support
  PCI: dwc: ep: Fix resizable BAR support for multi-PF configurations
  PCI: endpoint: pci-epf-test: Allow overriding default BAR sizes
  selftests: pci_endpoint: Add BAR subrange mapping test case
  misc: pci_endpoint_test: Add BAR subrange mapping test case
  PCI: endpoint: pci-epf-test: Add BAR subrange mapping test support
  Documentation: PCI: endpoint: Clarify pci_epc_set_bar() usage
  PCI: dwc: ep: Support BAR subrange inbound mapping via Address Match Mode iATU
  PCI: dwc: Advertise dynamic inbound mapping support
  PCI: endpoint: Add BAR subrange mapping support
  PCI: endpoint: Add dynamic_inbound_mapping EPC feature
  PCI: dwc: Rename dw_pcie_rp::has_msi_ctrl to dw_pcie_rp::use_imsi_rx for clarity
  PCI: dwc: Fix grammar and formatting for comment in dw_pcie_remove_ext_capability()
  PCI: dwc: Use multiple iATU windows for mapping large bridge windows and DMA ranges
  PCI: dwc: Remove duplicate dw_pcie_ep_hide_ext_capability() function
  PCI: dwc: Skip waiting for L2/L3 Ready if dw_pcie_rp::skip_l23_wait is true
  PCI: dwc: Fail dw_pcie_host_init() if dw_pcie_wait_for_link() returns -ETIMEDOUT
  PCI: dwc: Rework the error print of dw_pcie_wait_for_link()
  PCI: dwc: Rename and move ltssm_status_string() to pcie-designware.c
  PCI: dwc: Return -EIO from dw_pcie_wait_for_link() if device is not active
  PCI: dwc: Return -ENODEV from dw_pcie_wait_for_link() if device is not found
  PCI: dwc: Use cfg0_base as iMSI-RX target address to support 32-bit MSI devices
  PCI: dwc: ep: Cache MSI outbound iATU mapping
  Revert "PCI: dwc: Don't wait for link up if driver can detect Link Up event"
  Revert "PCI: qcom: Enumerate endpoints based on Link up event in 'global_irq' interrupt"
  Revert "PCI: qcom: Enable MSI interrupts together with Link up if 'Global IRQ' is supported"
  Revert "PCI: qcom: Don't wait for link if we can detect Link Up"
  Revert "PCI: dw-rockchip: Enumerate endpoints based on dll_link_up IRQ"
  Revert "PCI: dw-rockchip: Don't wait for link since we can detect Link Up"
  PCI: dwc: Skip PME_Turn_Off broadcast and L2/L3 transition during suspend if link is not up
  PCI: dw-rockchip: Change get_ltssm() to provide L1 Substates info
  PCI: dwc: Add L1 Substates context to ltssm_status of debugfs
  PCI: qcom: Remove DPC Extended Capability
  PCI: qcom: Remove MSI-X Capability for Root Ports
  PCI: dwc: Remove MSI/MSIX capability for Root Port if iMSI-RX is used as MSI controller
  PCI: dwc: Add new APIs to remove standard and extended Capability
  PCI: Add preceding capability position support in PCI_FIND_NEXT_*_CAP macros
commit 93c398be49
Bjorn Helgaas 2026-02-06 17:09:34 -06:00
30 changed files with 1296 additions and 350 deletions


@@ -95,6 +95,30 @@ by the PCI endpoint function driver.
Register space of the function driver is usually configured
using this API.
Some endpoint controllers also support calling pci_epc_set_bar() again
for the same BAR (without calling pci_epc_clear_bar()) to update inbound
address translations after the host has programmed the BAR base address.
Endpoint function drivers can check this capability via the
dynamic_inbound_mapping EPC feature bit.
When pci_epf_bar.num_submap is non-zero, the endpoint function driver is
requesting BAR subrange mapping using pci_epf_bar.submap. This requires
the EPC to advertise support via the subrange_mapping EPC feature bit.
When an EPF driver wants to make use of the inbound subrange mapping
feature, it requires that the BAR base address has been programmed by
the host during enumeration. Thus, it needs to call pci_epc_set_bar()
twice for the same BAR (requires dynamic_inbound_mapping): first with
num_submap set to zero and configuring the BAR size, then after the PCIe
link is up and the host enumerates the endpoint and programs the BAR
base address, again with num_submap set to non-zero value.
Note that when making use of the inbound subrange mapping feature, the
EPF driver must not call pci_epc_clear_bar() between the two
pci_epc_set_bar() calls, because clearing the BAR can clear/disable the
BAR register or BAR decode on the endpoint while the host still expects
the assigned BAR address to remain valid.
* pci_epc_clear_bar()
The PCI endpoint function driver should use pci_epc_clear_bar() to reset


@@ -84,6 +84,25 @@ device, the following commands can be used::
# echo 32 > functions/pci_epf_test/func1/msi_interrupts
# echo 2048 > functions/pci_epf_test/func1/msix_interrupts
By default, pci-epf-test uses the following BAR sizes::
# grep . functions/pci_epf_test/func1/pci_epf_test.0/bar?_size
functions/pci_epf_test/func1/pci_epf_test.0/bar0_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar1_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar2_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar3_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar4_size:131072
functions/pci_epf_test/func1/pci_epf_test.0/bar5_size:1048576
The user can override a default value using e.g.::
# echo 1048576 > functions/pci_epf_test/func1/pci_epf_test.0/bar1_size
Overriding the default BAR sizes can only be done before binding the
pci-epf-test device to a PCI endpoint controller driver.
Note: Some endpoint controllers might have fixed-size BARs or reserved BARs;
for such controllers, the corresponding BAR size in configfs will be ignored.
Binding pci-epf-test Device to EP Controller
--------------------------------------------


@@ -39,6 +39,8 @@
#define COMMAND_COPY BIT(5)
#define COMMAND_ENABLE_DOORBELL BIT(6)
#define COMMAND_DISABLE_DOORBELL BIT(7)
#define COMMAND_BAR_SUBRANGE_SETUP BIT(8)
#define COMMAND_BAR_SUBRANGE_CLEAR BIT(9)
#define PCI_ENDPOINT_TEST_STATUS 0x8
#define STATUS_READ_SUCCESS BIT(0)
@@ -55,6 +57,10 @@
#define STATUS_DOORBELL_ENABLE_FAIL BIT(11)
#define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12)
#define STATUS_DOORBELL_DISABLE_FAIL BIT(13)
#define STATUS_BAR_SUBRANGE_SETUP_SUCCESS BIT(14)
#define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15)
#define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16)
#define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17)
#define PCI_ENDPOINT_TEST_LOWER_SRC_ADDR 0x0c
#define PCI_ENDPOINT_TEST_UPPER_SRC_ADDR 0x10
@@ -77,6 +83,7 @@
#define CAP_MSI BIT(1)
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
#define CAP_SUBRANGE_MAPPING BIT(4)
#define PCI_ENDPOINT_TEST_DB_BAR 0x34
#define PCI_ENDPOINT_TEST_DB_OFFSET 0x38
@@ -100,6 +107,8 @@
#define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588
#define PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB 2
static DEFINE_IDA(pci_endpoint_test_ida);
#define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \
@@ -414,6 +423,193 @@ static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
return 0;
}
static u8 pci_endpoint_test_subrange_sig_byte(enum pci_barno barno,
unsigned int subno)
{
return 0x50 + (barno * 8) + subno;
}
static u8 pci_endpoint_test_subrange_test_byte(enum pci_barno barno,
unsigned int subno)
{
return 0xa0 + (barno * 8) + subno;
}
static int pci_endpoint_test_bar_subrange_cmd(struct pci_endpoint_test *test,
enum pci_barno barno, u32 command,
u32 ok_bit, u32 fail_bit)
{
struct pci_dev *pdev = test->pdev;
struct device *dev = &pdev->dev;
int irq_type = test->irq_type;
u32 status;
if (irq_type < PCITEST_IRQ_TYPE_INTX ||
irq_type > PCITEST_IRQ_TYPE_MSIX) {
dev_err(dev, "Invalid IRQ type\n");
return -EINVAL;
}
reinit_completion(&test->irq_raised);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_STATUS, 0);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
/* Reuse SIZE as a command parameter: bar number. */
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, barno);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND, command);
if (!wait_for_completion_timeout(&test->irq_raised,
msecs_to_jiffies(1000)))
return -ETIMEDOUT;
status = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
if (status & fail_bit)
return -EIO;
if (!(status & ok_bit))
return -EIO;
return 0;
}
static int pci_endpoint_test_bar_subrange_setup(struct pci_endpoint_test *test,
enum pci_barno barno)
{
return pci_endpoint_test_bar_subrange_cmd(test, barno,
COMMAND_BAR_SUBRANGE_SETUP,
STATUS_BAR_SUBRANGE_SETUP_SUCCESS,
STATUS_BAR_SUBRANGE_SETUP_FAIL);
}
static int pci_endpoint_test_bar_subrange_clear(struct pci_endpoint_test *test,
enum pci_barno barno)
{
return pci_endpoint_test_bar_subrange_cmd(test, barno,
COMMAND_BAR_SUBRANGE_CLEAR,
STATUS_BAR_SUBRANGE_CLEAR_SUCCESS,
STATUS_BAR_SUBRANGE_CLEAR_FAIL);
}
static int pci_endpoint_test_bar_subrange(struct pci_endpoint_test *test,
enum pci_barno barno)
{
u32 nsub = PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB;
struct device *dev = &test->pdev->dev;
size_t sub_size, buf_size;
resource_size_t bar_size;
void __iomem *bar_addr;
void *read_buf = NULL;
int ret, clear_ret;
size_t off, chunk;
u32 i, exp, val;
u8 pattern;
if (!(test->ep_caps & CAP_SUBRANGE_MAPPING))
return -EOPNOTSUPP;
/*
* The test register BAR is not safe to reprogram and write/read
* over its full size. BAR_TEST already special-cases it to a tiny
* range. For subrange mapping tests, let's simply skip it.
*/
if (barno == test->test_reg_bar)
return -EBUSY;
bar_size = pci_resource_len(test->pdev, barno);
if (!bar_size)
return -ENODATA;
bar_addr = test->bar[barno];
if (!bar_addr)
return -ENOMEM;
ret = pci_endpoint_test_bar_subrange_setup(test, barno);
if (ret)
return ret;
if (bar_size % nsub || bar_size / nsub > SIZE_MAX) {
ret = -EINVAL;
goto out_clear;
}
sub_size = bar_size / nsub;
if (sub_size < sizeof(u32)) {
ret = -ENOSPC;
goto out_clear;
}
/* Limit the temporary buffer size */
buf_size = min_t(size_t, sub_size, SZ_1M);
read_buf = kmalloc(buf_size, GFP_KERNEL);
if (!read_buf) {
ret = -ENOMEM;
goto out_clear;
}
/*
* Step 1: verify EP-provided signature per subrange. This detects
* whether the EP actually applied the submap order.
*/
for (i = 0; i < nsub; i++) {
exp = (u32)pci_endpoint_test_subrange_sig_byte(barno, i) *
0x01010101U;
val = ioread32(bar_addr + (i * sub_size));
if (val != exp) {
dev_err(dev,
"BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n",
barno, i, (size_t)i * sub_size, exp, val);
ret = -EIO;
goto out_clear;
}
val = ioread32(bar_addr + (i * sub_size) + sub_size - sizeof(u32));
if (val != exp) {
dev_err(dev,
"BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n",
barno, i,
((size_t)i * sub_size) + sub_size - sizeof(u32),
exp, val);
ret = -EIO;
goto out_clear;
}
}
/* Step 2: write unique pattern per subrange (write all first). */
for (i = 0; i < nsub; i++) {
pattern = pci_endpoint_test_subrange_test_byte(barno, i);
memset_io(bar_addr + (i * sub_size), pattern, sub_size);
}
/* Step 3: read back and verify (read all after all writes). */
for (i = 0; i < nsub; i++) {
pattern = pci_endpoint_test_subrange_test_byte(barno, i);
for (off = 0; off < sub_size; off += chunk) {
void *bad;
chunk = min_t(size_t, buf_size, sub_size - off);
memcpy_fromio(read_buf, bar_addr + (i * sub_size) + off,
chunk);
bad = memchr_inv(read_buf, pattern, chunk);
if (bad) {
size_t bad_off = (u8 *)bad - (u8 *)read_buf;
dev_err(dev,
"BAR%d subrange%u data mismatch @%#zx (pattern %#02x)\n",
barno, i, (size_t)i * sub_size + off + bad_off,
pattern);
ret = -EIO;
goto out_clear;
}
}
}
out_clear:
kfree(read_buf);
clear_ret = pci_endpoint_test_bar_subrange_clear(test, barno);
return ret ?: clear_ret;
}
static int pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
{
u32 val;
@@ -936,12 +1132,17 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
switch (cmd) {
case PCITEST_BAR:
case PCITEST_BAR_SUBRANGE:
bar = arg;
if (bar <= NO_BAR || bar > BAR_5)
goto ret;
if (is_am654_pci_dev(pdev) && bar == BAR_0)
goto ret;
ret = pci_endpoint_test_bar(test, bar);
if (cmd == PCITEST_BAR)
ret = pci_endpoint_test_bar(test, bar);
else
ret = pci_endpoint_test_bar_subrange(test, bar);
break;
case PCITEST_BARS:
ret = pci_endpoint_test_bars(test);


@@ -13,13 +13,13 @@
u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap)
{
return PCI_FIND_NEXT_CAP(cdns_pcie_read_cfg, PCI_CAPABILITY_LIST,
cap, pcie);
cap, NULL, pcie);
}
EXPORT_SYMBOL_GPL(cdns_pcie_find_capability);
u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap)
{
return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, pcie);
return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, NULL, pcie);
}
EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability);


@@ -424,6 +424,7 @@ static int dra7xx_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features dra7xx_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
};


@@ -114,6 +114,7 @@ enum imx_pcie_variants {
#define IMX_PCIE_FLAG_BROKEN_SUSPEND BIT(9)
#define IMX_PCIE_FLAG_HAS_LUT BIT(10)
#define IMX_PCIE_FLAG_8GT_ECN_ERR051586 BIT(11)
#define IMX_PCIE_FLAG_SKIP_L23_READY BIT(12)
#define imx_check_flag(pci, val) (pci->drvdata->flags & val)
@@ -1387,6 +1388,7 @@ static int imx_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features imx8m_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
@@ -1396,6 +1398,7 @@ static const struct pci_epc_features imx8m_pcie_epc_features = {
};
static const struct pci_epc_features imx8q_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
@@ -1416,6 +1419,7 @@ static const struct pci_epc_features imx8q_pcie_epc_features = {
* BAR5 | Enable | 32-bit | 64 KB | Programmable Size
*/
static const struct pci_epc_features imx95_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_64K, },
.align = SZ_4K,
@@ -1777,6 +1781,8 @@ static int imx_pcie_probe(struct platform_device *pdev)
*/
imx_pcie_add_lut_by_rid(imx_pcie, 0);
} else {
if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_SKIP_L23_READY))
pci->pp.skip_l23_ready = true;
pci->pp.use_atu_msg = true;
ret = dw_pcie_host_init(&pci->pp);
if (ret < 0)
@@ -1838,6 +1844,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.variant = IMX6QP,
.flags = IMX_PCIE_FLAG_IMX_PHY |
IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND |
IMX_PCIE_FLAG_SKIP_L23_READY |
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
.dbi_length = 0x200,
.gpr = "fsl,imx6q-iomuxc-gpr",
@@ -1854,6 +1861,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
.variant = IMX7D,
.flags = IMX_PCIE_FLAG_SUPPORTS_SUSPEND |
IMX_PCIE_FLAG_HAS_APP_RESET |
IMX_PCIE_FLAG_SKIP_L23_READY |
IMX_PCIE_FLAG_HAS_PHY_RESET,
.gpr = "fsl,imx7d-iomuxc-gpr",
.mode_off[0] = IOMUXC_GPR12,


@@ -930,6 +930,7 @@ static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features ks_pcie_am654_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.msix_capable = true,
.bar[BAR_0] = { .type = BAR_RESERVED, },


@@ -370,6 +370,7 @@ static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features artpec6_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
};


@@ -443,63 +443,13 @@ static ssize_t counter_value_read(struct file *file, char __user *buf,
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static const char *ltssm_status_string(enum dw_pcie_ltssm ltssm)
{
const char *str;
switch (ltssm) {
#define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3);
default:
str = "DW_PCIE_LTSSM_UNKNOWN";
break;
}
return str + strlen("DW_PCIE_LTSSM_");
}
static int ltssm_status_show(struct seq_file *s, void *v)
{
struct dw_pcie *pci = s->private;
enum dw_pcie_ltssm val;
val = dw_pcie_get_ltssm(pci);
seq_printf(s, "%s (0x%02x)\n", ltssm_status_string(val), val);
seq_printf(s, "%s (0x%02x)\n", dw_pcie_ltssm_status_string(val), val);
return 0;
}


@@ -72,47 +72,15 @@ EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar);
static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
{
return PCI_FIND_NEXT_CAP(dw_pcie_ep_read_cfg, PCI_CAPABILITY_LIST,
cap, ep, func_no);
cap, NULL, ep, func_no);
}
/**
* dw_pcie_ep_hide_ext_capability - Hide a capability from the linked list
* @pci: DWC PCI device
* @prev_cap: Capability preceding the capability that should be hidden
* @cap: Capability that should be hidden
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap)
static u16 dw_pcie_ep_find_ext_capability(struct dw_pcie_ep *ep,
u8 func_no, u8 cap)
{
u16 prev_cap_offset, cap_offset;
u32 prev_cap_header, cap_header;
prev_cap_offset = dw_pcie_find_ext_capability(pci, prev_cap);
if (!prev_cap_offset)
return -EINVAL;
prev_cap_header = dw_pcie_readl_dbi(pci, prev_cap_offset);
cap_offset = PCI_EXT_CAP_NEXT(prev_cap_header);
cap_header = dw_pcie_readl_dbi(pci, cap_offset);
/* cap must immediately follow prev_cap. */
if (PCI_EXT_CAP_ID(cap_header) != cap)
return -EINVAL;
/* Clear next ptr. */
prev_cap_header &= ~GENMASK(31, 20);
/* Set next ptr to next ptr of cap. */
prev_cap_header |= cap_header & GENMASK(31, 20);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, prev_cap_offset, prev_cap_header);
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
return PCI_FIND_NEXT_EXT_CAP(dw_pcie_ep_read_cfg, 0,
cap, NULL, ep, func_no);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_hide_ext_capability);
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *hdr)
@@ -139,18 +107,23 @@ static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
return 0;
}
static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
dma_addr_t parent_bus_addr, enum pci_barno bar,
size_t size)
/* BAR Match Mode inbound iATU mapping */
static int dw_pcie_ep_ib_atu_bar(struct dw_pcie_ep *ep, u8 func_no, int type,
dma_addr_t parent_bus_addr, enum pci_barno bar,
size_t size)
{
int ret;
u32 free_win;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep->bar_to_atu[bar])
if (!ep_func)
return -EINVAL;
if (!ep_func->bar_to_atu[bar])
free_win = find_first_zero_bit(ep->ib_window_map, pci->num_ib_windows);
else
free_win = ep->bar_to_atu[bar] - 1;
free_win = ep_func->bar_to_atu[bar] - 1;
if (free_win >= pci->num_ib_windows) {
dev_err(pci->dev, "No free inbound window\n");
@@ -168,12 +141,190 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
* Always increment free_win before assignment, since value 0 is used to identify
* unallocated mapping.
*/
ep->bar_to_atu[bar] = free_win + 1;
ep_func->bar_to_atu[bar] = free_win + 1;
set_bit(free_win, ep->ib_window_map);
return 0;
}
static void dw_pcie_ep_clear_ib_maps(struct dw_pcie_ep *ep, u8 func_no, enum pci_barno bar)
{
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct device *dev = pci->dev;
unsigned int i, num;
u32 atu_index;
u32 *indexes;
if (!ep_func)
return;
/* Tear down the BAR Match Mode mapping, if any. */
if (ep_func->bar_to_atu[bar]) {
atu_index = ep_func->bar_to_atu[bar] - 1;
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index);
clear_bit(atu_index, ep->ib_window_map);
ep_func->bar_to_atu[bar] = 0;
}
/* Tear down all Address Match Mode mappings, if any. */
indexes = ep_func->ib_atu_indexes[bar];
num = ep_func->num_ib_atu_indexes[bar];
ep_func->ib_atu_indexes[bar] = NULL;
ep_func->num_ib_atu_indexes[bar] = 0;
if (!indexes)
return;
for (i = 0; i < num; i++) {
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, indexes[i]);
clear_bit(indexes[i], ep->ib_window_map);
}
devm_kfree(dev, indexes);
}
static u64 dw_pcie_ep_read_bar_assigned(struct dw_pcie_ep *ep, u8 func_no,
enum pci_barno bar, int flags)
{
u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar);
u32 lo, hi;
u64 addr;
lo = dw_pcie_ep_readl_dbi(ep, func_no, reg);
if (flags & PCI_BASE_ADDRESS_SPACE)
return lo & PCI_BASE_ADDRESS_IO_MASK;
addr = lo & PCI_BASE_ADDRESS_MEM_MASK;
if (!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
return addr;
hi = dw_pcie_ep_readl_dbi(ep, func_no, reg + 4);
return addr | ((u64)hi << 32);
}
static int dw_pcie_ep_validate_submap(struct dw_pcie_ep *ep,
const struct pci_epf_bar_submap *submap,
unsigned int num_submap, size_t bar_size)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
u32 align = pci->region_align;
size_t off = 0;
unsigned int i;
size_t size;
if (!align || !IS_ALIGNED(bar_size, align))
return -EINVAL;
/*
* The submap array order defines the BAR layout (submap[0] starts
* at offset 0 and each entry immediately follows the previous
* one). Here, validate that it forms a strict, gapless
* decomposition of the BAR:
* - each entry has a non-zero size
* - sizes, implicit offsets and phys_addr are aligned to
* pci->region_align
* - each entry lies within the BAR range
* - the entries exactly cover the whole BAR
*
* Note: dw_pcie_prog_inbound_atu() also checks alignment for the
* PCI address and the target phys_addr, but validating up-front
* avoids partially programming iATU windows in vain.
*/
for (i = 0; i < num_submap; i++) {
size = submap[i].size;
if (!size)
return -EINVAL;
if (!IS_ALIGNED(size, align) || !IS_ALIGNED(off, align))
return -EINVAL;
if (!IS_ALIGNED(submap[i].phys_addr, align))
return -EINVAL;
if (off > bar_size || size > bar_size - off)
return -EINVAL;
off += size;
}
if (off != bar_size)
return -EINVAL;
return 0;
}
/* Address Match Mode inbound iATU mapping */
static int dw_pcie_ep_ib_atu_addr(struct dw_pcie_ep *ep, u8 func_no, int type,
const struct pci_epf_bar *epf_bar)
{
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
const struct pci_epf_bar_submap *submap = epf_bar->submap;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
struct device *dev = pci->dev;
u64 pci_addr, parent_bus_addr;
u64 size, base, off = 0;
int free_win, ret;
unsigned int i;
u32 *indexes;
if (!ep_func || !epf_bar->num_submap || !submap || !epf_bar->size)
return -EINVAL;
ret = dw_pcie_ep_validate_submap(ep, submap, epf_bar->num_submap,
epf_bar->size);
if (ret)
return ret;
base = dw_pcie_ep_read_bar_assigned(ep, func_no, bar, epf_bar->flags);
if (!base) {
dev_err(dev,
"BAR%u not assigned, cannot set up sub-range mappings\n",
bar);
return -EINVAL;
}
indexes = devm_kcalloc(dev, epf_bar->num_submap, sizeof(*indexes),
GFP_KERNEL);
if (!indexes)
return -ENOMEM;
ep_func->ib_atu_indexes[bar] = indexes;
ep_func->num_ib_atu_indexes[bar] = 0;
for (i = 0; i < epf_bar->num_submap; i++) {
size = submap[i].size;
parent_bus_addr = submap[i].phys_addr;
if (off > (~0ULL) - base) {
ret = -EINVAL;
goto err;
}
pci_addr = base + off;
off += size;
free_win = find_first_zero_bit(ep->ib_window_map,
pci->num_ib_windows);
if (free_win >= pci->num_ib_windows) {
ret = -ENOSPC;
goto err;
}
ret = dw_pcie_prog_inbound_atu(pci, free_win, type,
parent_bus_addr, pci_addr, size);
if (ret)
goto err;
set_bit(free_win, ep->ib_window_map);
indexes[i] = free_win;
ep_func->num_ib_atu_indexes[bar] = i + 1;
}
return 0;
err:
dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
return ret;
}
static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep,
struct dw_pcie_ob_atu_cfg *atu)
{
@@ -204,35 +355,34 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
u32 atu_index = ep->bar_to_atu[bar] - 1;
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep->bar_to_atu[bar])
if (!ep_func || !ep_func->epf_bar[bar])
return;
__dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags);
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index);
clear_bit(atu_index, ep->ib_window_map);
ep->epf_bar[bar] = NULL;
ep->bar_to_atu[bar] = 0;
dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
ep_func->epf_bar[bar] = NULL;
}
static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie *pci,
static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie_ep *ep, u8 func_no,
enum pci_barno bar)
{
u32 reg, bar_index;
unsigned int offset, nbars;
int i;
offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
offset = dw_pcie_ep_find_ext_capability(ep, func_no, PCI_EXT_CAP_ID_REBAR);
if (!offset)
return offset;
reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg);
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) {
reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
bar_index = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, reg);
if (bar_index == bar)
return offset;
@@ -253,7 +403,7 @@ static int dw_pcie_ep_set_bar_resizable(struct dw_pcie_ep *ep, u8 func_no,
u32 rebar_cap, rebar_ctrl;
int ret;
rebar_offset = dw_pcie_ep_get_rebar_offset(pci, bar);
rebar_offset = dw_pcie_ep_get_rebar_offset(ep, func_no, bar);
if (!rebar_offset)
return -EINVAL;
@@ -283,16 +433,16 @@ static int dw_pcie_ep_set_bar_resizable(struct dw_pcie_ep *ep, u8 func_no,
* 1 MB to 128 TB. Bits 31:16 in PCI_REBAR_CTRL define "supported sizes"
* bits for sizes 256 TB to 8 EB. Disallow sizes 256 TB to 8 EB.
*/
rebar_ctrl = dw_pcie_readl_dbi(pci, rebar_offset + PCI_REBAR_CTRL);
rebar_ctrl = dw_pcie_ep_readl_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL);
rebar_ctrl &= ~GENMASK(31, 16);
dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl);
dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl);
/*
* The "selected size" (bits 13:8) in PCI_REBAR_CTRL are automatically
* updated when writing PCI_REBAR_CAP, see "Figure 3-26 Resizable BAR
* Example for 32-bit Memory BAR0" in DWC EP databook 5.96a.
*/
dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CAP, rebar_cap);
dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CAP, rebar_cap);
dw_pcie_dbi_ro_wr_dis(pci);
@@ -341,12 +491,16 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
enum pci_barno bar = epf_bar->barno;
size_t size = epf_bar->size;
enum pci_epc_bar_type bar_type;
int flags = epf_bar->flags;
int ret, type;
if (!ep_func)
return -EINVAL;
/*
* DWC does not allow BAR pairs to overlap, e.g. you cannot combine BARs
* 1 and 2 to form a 64-bit BAR.
@ -360,21 +514,38 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
* calling clear_bar() would clear the BAR's PCI address assigned by the
* host).
*/
if (ep->epf_bar[bar]) {
if (ep_func->epf_bar[bar]) {
/*
* We can only dynamically change a BAR if the new BAR size and
* BAR flags do not differ from the existing configuration.
*/
if (ep->epf_bar[bar]->barno != bar ||
ep->epf_bar[bar]->size != size ||
ep->epf_bar[bar]->flags != flags)
if (ep_func->epf_bar[bar]->barno != bar ||
ep_func->epf_bar[bar]->size != size ||
ep_func->epf_bar[bar]->flags != flags)
return -EINVAL;
/*
* When dynamically changing a BAR, tear down any existing
* mappings before re-programming.
*/
if (ep_func->epf_bar[bar]->num_submap || epf_bar->num_submap)
dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
/*
* When dynamically changing a BAR, skip writing the BAR reg, as
* that would clear the BAR's PCI address assigned by the host.
*/
goto config_atu;
} else {
/*
* Subrange mapping is an update-only operation. The BAR
* must have been configured once without submaps so that
* subsequent set_bar() calls can update inbound mappings
* without touching the BAR register (and clobbering the
* host-assigned address).
*/
if (epf_bar->num_submap)
return -EINVAL;
}
bar_type = dw_pcie_ep_get_bar_type(ep, bar);
@ -408,12 +579,16 @@ config_atu:
else
type = PCIE_ATU_TYPE_IO;
ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar,
size);
if (epf_bar->num_submap)
ret = dw_pcie_ep_ib_atu_addr(ep, func_no, type, epf_bar);
else
ret = dw_pcie_ep_ib_atu_bar(ep, func_no, type,
epf_bar->phys_addr, bar, size);
if (ret)
return ret;
ep->epf_bar[bar] = epf_bar;
ep_func->epf_bar[bar] = epf_bar;
return 0;
}
@ -601,6 +776,16 @@ static void dw_pcie_ep_stop(struct pci_epc *epc)
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
/*
* Tear down the dedicated outbound window used for MSI
* generation. This avoids leaking an iATU window across
* endpoint stop/start cycles.
*/
if (ep->msi_iatu_mapped) {
dw_pcie_ep_unmap_addr(epc, 0, 0, ep->msi_mem_phys);
ep->msi_iatu_mapped = false;
}
dw_pcie_stop_link(pci);
}
@ -702,15 +887,38 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
msg_addr = ((u64)msg_addr_upper) << 32 | msg_addr_lower;
msg_addr = dw_pcie_ep_align_addr(epc, msg_addr, &map_size, &offset);
ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
map_size);
if (ret)
return ret;
/*
 * Program the outbound iATU once and keep it enabled.
 *
 * The spec warns that updating iATU registers while there are
 * operations in flight on the AXI bridge interface is not
 * supported, so avoid reprogramming the region on every MSI; in
 * particular, do not unmap it immediately after the writel().
 */
if (!ep->msi_iatu_mapped) {
ret = dw_pcie_ep_map_addr(epc, func_no, 0,
ep->msi_mem_phys, msg_addr,
map_size);
if (ret)
return ret;
ep->msi_iatu_mapped = true;
ep->msi_msg_addr = msg_addr;
ep->msi_map_size = map_size;
} else if (WARN_ON_ONCE(ep->msi_msg_addr != msg_addr ||
ep->msi_map_size != map_size)) {
/*
* The host changed the MSI target address or the required
* mapping size changed. Reprogramming the iATU at runtime is
* unsafe on this controller, so bail out instead of trying to
* update the existing region.
*/
return -EINVAL;
}
writel(msg_data | (interrupt_num - 1), ep->msi_mem + offset);
dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msi_irq);
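The "program once, then verify" pattern above can be sketched in isolation (names here are illustrative, not the driver's; errno values stand in for the kernel macros):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Map the MSI outbound window on first use, and refuse to silently
 * reprogram a live iATU if the host later hands out a different
 * target address or a different mapping size.
 */
struct msi_window {
	bool mapped;
	uint64_t addr;
	size_t size;
};

static int msi_window_prepare(struct msi_window *w, uint64_t addr, size_t size)
{
	if (!w->mapped) {
		/* first MSI: program the window once (simulated) */
		w->mapped = true;
		w->addr = addr;
		w->size = size;
		return 0;
	}
	/* reprogramming a live iATU is unsafe on this controller */
	if (w->addr != addr || w->size != size)
		return -1;	/* the driver returns -EINVAL here */
	return 0;
}
```

Repeated MSIs with an unchanged target take the cheap path; a changed target is rejected rather than racing in-flight AXI traffic.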
@ -775,7 +983,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset);
tbl_offset &= PCI_MSIX_TABLE_OFFSET;
msix_tbl = ep->epf_bar[bir]->addr + tbl_offset;
msix_tbl = ep_func->epf_bar[bir]->addr + tbl_offset;
msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr;
msg_data = msix_tbl[(interrupt_num - 1)].msg_data;
vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl;
@ -836,20 +1044,20 @@ void dw_pcie_ep_deinit(struct dw_pcie_ep *ep)
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_deinit);
static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
static void dw_pcie_ep_init_rebar_registers(struct dw_pcie_ep *ep, u8 func_no)
{
struct dw_pcie_ep *ep = &pci->ep;
unsigned int offset;
unsigned int nbars;
struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
unsigned int offset, nbars;
enum pci_barno bar;
u32 reg, i, val;
offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
if (!ep_func)
return;
dw_pcie_dbi_ro_wr_en(pci);
offset = dw_pcie_ep_find_ext_capability(ep, func_no, PCI_EXT_CAP_ID_REBAR);
if (offset) {
reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg);
/*
@ -870,16 +1078,28 @@ static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
* the controller when RESBAR_CAP_REG is written, which
* is why RESBAR_CAP_REG is written here.
*/
val = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
val = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
bar = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, val);
if (ep->epf_bar[bar])
pci_epc_bar_size_to_rebar_cap(ep->epf_bar[bar]->size, &val);
if (ep_func->epf_bar[bar])
pci_epc_bar_size_to_rebar_cap(ep_func->epf_bar[bar]->size, &val);
else
val = BIT(4);
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, val);
dw_pcie_ep_writel_dbi(ep, func_no, offset + PCI_REBAR_CAP, val);
}
}
}
static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
{
struct dw_pcie_ep *ep = &pci->ep;
u8 funcs = ep->epc->max_functions;
u8 func_no;
dw_pcie_dbi_ro_wr_en(pci);
for (func_no = 0; func_no < funcs; func_no++)
dw_pcie_ep_init_rebar_registers(ep, func_no);
dw_pcie_setup(pci);
dw_pcie_dbi_ro_wr_dis(pci);
@ -967,6 +1187,18 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
if (ep->ops->init)
ep->ops->init(ep);
/*
* PCIe r6.0, section 7.9.15 states that for endpoints that support
* PTM, this capability structure is required in exactly one
* function, which controls the PTM behavior of all PTM capable
* functions. This indicates the PTM capability structure
* represents controller-level registers rather than per-function
* registers.
*
* Therefore, PTM capability registers are configured using the
* standard DBI accessors, instead of func_no indexed per-function
* accessors.
*/
ptm_cap_base = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM);
/*
@ -1087,6 +1319,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
struct device *dev = pci->dev;
INIT_LIST_HEAD(&ep->func_list);
ep->msi_iatu_mapped = false;
ep->msi_msg_addr = 0;
ep->msi_map_size = 0;
epc = devm_pci_epc_create(dev, &epc_ops);
if (IS_ERR(epc)) {

View file

@ -255,7 +255,7 @@ void dw_pcie_msi_init(struct dw_pcie_rp *pp)
u64 msi_target = (u64)pp->msi_data;
u32 ctrl, num_ctrls;
if (!pci_msi_enabled() || !pp->has_msi_ctrl)
if (!pci_msi_enabled() || !pp->use_imsi_rx)
return;
num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
@ -367,10 +367,20 @@ int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
* order not to miss MSI TLPs from those devices the MSI target
* address has to be within the lowest 4GB.
*
* Note until there is a better alternative found the reservation is
* done by allocating from the artificially limited DMA-coherent
* memory.
* Per DWC databook r6.21a, section 3.10.2.3, an incoming MWr TLP
* targeting MSI_CTRL_ADDR is terminated by the iMSI-RX and never
* appears on the AXI bus, so the MSI target address doesn't need to be
* mapped and can be any address that is not allocated for BAR memory.
* Since most platforms provide a 32-bit address for the 'config'
* region, try cfg0_base first as the MSI target address if it is a
* 32-bit address. Otherwise, try 32-bit and then 64-bit coherent
* memory allocation.
*/
if (!(pp->cfg0_base & GENMASK_ULL(63, 32))) {
pp->msi_data = pp->cfg0_base;
return 0;
}
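The selection logic above reduces to a small decision that can be sketched standalone (illustrative names; the mask plays the role of GENMASK_ULL(63, 32)):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Pick the MSI target address: since iMSI-RX terminates the MWr TLP
 * before it reaches the AXI bus, any address not used for BAR memory
 * works. A 32-bit cfg0 base is preferred so that 32-bit-only
 * endpoints can still reach the target.
 */
static uint64_t pick_msi_target(uint64_t cfg0_base, uint64_t fallback)
{
	if (!(cfg0_base & 0xffffffff00000000ULL))
		return cfg0_base;	/* fits in 32 bits: use as-is */
	return fallback;		/* e.g. a coherent allocation */
}
```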
ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
if (!ret)
msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data,
@ -593,15 +603,15 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
}
if (pci_msi_enabled()) {
pp->has_msi_ctrl = !(pp->ops->msi_init ||
pp->use_imsi_rx = !(pp->ops->msi_init ||
of_property_present(np, "msi-parent") ||
of_property_present(np, "msi-map"));
/*
* For the has_msi_ctrl case the default assignment is handled
* For the use_imsi_rx case the default assignment is handled
* in the dw_pcie_msi_host_init().
*/
if (!pp->has_msi_ctrl && !pp->num_vectors) {
if (!pp->use_imsi_rx && !pp->num_vectors) {
pp->num_vectors = MSI_DEF_NUM_VECTORS;
} else if (pp->num_vectors > MAX_MSI_IRQS) {
dev_err(dev, "Invalid number of vectors\n");
@ -613,7 +623,7 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
ret = pp->ops->msi_init(pp);
if (ret < 0)
goto err_deinit_host;
} else if (pp->has_msi_ctrl) {
} else if (pp->use_imsi_rx) {
ret = dw_pcie_msi_host_init(pp);
if (ret < 0)
goto err_deinit_host;
@ -631,14 +641,6 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (ret)
goto err_free_msi;
if (pp->ecam_enabled) {
ret = dw_pcie_config_ecam_iatu(pp);
if (ret) {
dev_err(dev, "Failed to configure iATU in ECAM mode\n");
goto err_free_msi;
}
}
/*
* Allocate the resource for MSG TLP before programming the iATU
* outbound window in dw_pcie_setup_rc(). Since the allocation depends
@ -666,13 +668,12 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
}
/*
* Note: Skip the link up delay only when a Link Up IRQ is present.
* If there is no Link Up IRQ, we should not bypass the delay
* because that would require users to manually rescan for devices.
* Only fail on timeout error. Other errors indicate the device may
* become available later, so continue without failing.
*/
if (!pp->use_linkup_irq)
/* Ignore errors, the link may come up later */
dw_pcie_wait_for_link(pci);
ret = dw_pcie_wait_for_link(pci);
if (ret == -ETIMEDOUT)
goto err_stop_link;
ret = pci_host_probe(bridge);
if (ret)
@ -692,7 +693,7 @@ err_remove_edma:
dw_pcie_edma_remove(pci);
err_free_msi:
if (pp->has_msi_ctrl)
if (pp->use_imsi_rx)
dw_pcie_free_msi(pp);
err_deinit_host:
@ -720,7 +721,7 @@ void dw_pcie_host_deinit(struct dw_pcie_rp *pp)
dw_pcie_edma_remove(pci);
if (pp->has_msi_ctrl)
if (pp->use_imsi_rx)
dw_pcie_free_msi(pp);
if (pp->ops->deinit)
@ -874,9 +875,10 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dw_pcie_ob_atu_cfg atu = { 0 };
struct resource_entry *entry;
int ob_iatu_index;
int ib_iatu_index;
int i, ret;
/* Note the very first outbound ATU is used for CFG IOs */
if (!pci->num_ob_windows) {
dev_err(pci->dev, "No outbound iATU found\n");
return -EINVAL;
@ -892,37 +894,74 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
for (i = 0; i < pci->num_ib_windows; i++)
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, i);
i = 0;
/*
* NOTE: For outbound address translation, outbound iATU at index 0 is
* reserved for CFG IOs (dw_pcie_other_conf_map_bus()), thus start at
* index 1.
*
* If using ECAM, the outbound iATUs at indices 0 and 1 are both
* reserved for CFG IOs.
*/
if (pp->ecam_enabled) {
ob_iatu_index = 2;
ret = dw_pcie_config_ecam_iatu(pp);
if (ret) {
dev_err(pci->dev, "Failed to configure iATU in ECAM mode\n");
return ret;
}
} else {
ob_iatu_index = 1;
}
resource_list_for_each_entry(entry, &pp->bridge->windows) {
resource_size_t res_size;
if (resource_type(entry->res) != IORESOURCE_MEM)
continue;
if (pci->num_ob_windows <= ++i)
break;
atu.index = i;
atu.type = PCIE_ATU_TYPE_MEM;
atu.parent_bus_addr = entry->res->start - pci->parent_bus_offset;
atu.pci_addr = entry->res->start - entry->offset;
/* Adjust iATU size if MSG TLP region was allocated before */
if (pp->msg_res && pp->msg_res->parent == entry->res)
atu.size = resource_size(entry->res) -
res_size = resource_size(entry->res) -
resource_size(pp->msg_res);
else
atu.size = resource_size(entry->res);
res_size = resource_size(entry->res);
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret) {
dev_err(pci->dev, "Failed to set MEM range %pr\n",
entry->res);
return ret;
while (res_size > 0) {
/*
* Return failure if we run out of windows in the
* middle. Otherwise, we would end up only partially
* mapping a single resource.
*/
if (ob_iatu_index >= pci->num_ob_windows) {
dev_err(pci->dev, "Cannot add outbound window for region: %pr\n",
entry->res);
return -ENOMEM;
}
atu.index = ob_iatu_index;
atu.size = MIN(pci->region_limit + 1, res_size);
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret) {
dev_err(pci->dev, "Failed to set MEM range %pr\n",
entry->res);
return ret;
}
ob_iatu_index++;
atu.parent_bus_addr += atu.size;
atu.pci_addr += atu.size;
res_size -= atu.size;
}
}
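The splitting loop above can be sketched in isolation (illustrative names; -1 stands in for the kernel's -ENOMEM):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Carve one memory resource into windows no larger than the iATU
 * region limit, failing outright if the windows run out mid-resource,
 * since a partially mapped resource would be unusable.
 */
static int split_into_windows(uint64_t res_size, uint64_t max_window,
			      int first_index, int num_windows,
			      int *windows_used)
{
	int index = first_index;

	while (res_size > 0) {
		uint64_t window;

		if (index >= num_windows)
			return -1;	/* ran out of iATU windows */
		window = res_size < max_window ? res_size : max_window;
		index++;		/* program one window (simulated) */
		res_size -= window;
	}
	*windows_used = index - first_index;
	return 0;
}
```

For example, a 5 GB resource with 4 GB windows consumes two iATU entries; with only 1 GB windows and three entries left, the loop fails before programming a partial mapping.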
if (pp->io_size) {
if (pci->num_ob_windows > ++i) {
atu.index = i;
if (ob_iatu_index < pci->num_ob_windows) {
atu.index = ob_iatu_index;
atu.type = PCIE_ATU_TYPE_IO;
atu.parent_bus_addr = pp->io_base - pci->parent_bus_offset;
atu.pci_addr = pp->io_bus_addr;
@ -934,40 +973,71 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
entry->res);
return ret;
}
ob_iatu_index++;
} else {
/*
* If there are not enough outbound windows to give I/O
* space its own iATU, the outbound iATU at index 0 will
* be shared between I/O space and CFG IOs, by
* temporarily reconfiguring the iATU to CFG space, in
* order to do a CFG IO, and then immediately restoring
* it to I/O space. This is only implemented when using
* dw_pcie_other_conf_map_bus(), which is not the case
* when using ECAM.
*/
if (pp->ecam_enabled) {
dev_err(pci->dev, "Cannot add outbound window for I/O\n");
return -ENOMEM;
}
pp->cfg0_io_shared = true;
}
}
if (pci->num_ob_windows <= i)
dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n",
pci->num_ob_windows);
if (pp->use_atu_msg) {
if (ob_iatu_index >= pci->num_ob_windows) {
dev_err(pci->dev, "Cannot add outbound window for MSG TLP\n");
return -ENOMEM;
}
pp->msg_atu_index = ob_iatu_index++;
}
pp->msg_atu_index = i;
i = 0;
ib_iatu_index = 0;
resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) {
resource_size_t res_start, res_size, window_size;
if (resource_type(entry->res) != IORESOURCE_MEM)
continue;
if (pci->num_ib_windows <= i)
break;
res_size = resource_size(entry->res);
res_start = entry->res->start;
while (res_size > 0) {
/*
* Return failure if we run out of windows in the
* middle. Otherwise, we would end up only partially
* mapping a single resource.
*/
if (ib_iatu_index >= pci->num_ib_windows) {
dev_err(pci->dev, "Cannot add inbound window for region: %pr\n",
entry->res);
return -ENOMEM;
}
ret = dw_pcie_prog_inbound_atu(pci, i++, PCIE_ATU_TYPE_MEM,
entry->res->start,
entry->res->start - entry->offset,
resource_size(entry->res));
if (ret) {
dev_err(pci->dev, "Failed to set DMA range %pr\n",
entry->res);
return ret;
window_size = MIN(pci->region_limit + 1, res_size);
ret = dw_pcie_prog_inbound_atu(pci, ib_iatu_index,
PCIE_ATU_TYPE_MEM, res_start,
res_start - entry->offset, window_size);
if (ret) {
dev_err(pci->dev, "Failed to set DMA range %pr\n",
entry->res);
return ret;
}
ib_iatu_index++;
res_start += window_size;
res_size -= window_size;
}
}
if (pci->num_ib_windows <= i)
dev_warn(pci->dev, "Dma-ranges exceed inbound iATU size (%u)\n",
pci->num_ib_windows);
return 0;
}
@ -1089,7 +1159,7 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
* the platform uses its own address translation component rather than
* ATU, so we should not program the ATU here.
*/
if (pp->bridge->child_ops == &dw_child_pcie_ops) {
if (pp->bridge->child_ops == &dw_child_pcie_ops || pp->ecam_enabled) {
ret = dw_pcie_iatu_setup(pp);
if (ret)
return ret;
@ -1106,6 +1176,17 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
dw_pcie_dbi_ro_wr_dis(pci);
/*
* The iMSI-RX module does not support receiving MSI or MSI-X generated
* by the Root Port. If iMSI-RX is used as the MSI controller, remove
* the MSI and MSI-X capabilities of the Root Port to allow the drivers
* to fall back to INTx instead.
*/
if (pp->use_imsi_rx) {
dw_pcie_remove_capability(pci, PCI_CAP_ID_MSI);
dw_pcie_remove_capability(pci, PCI_CAP_ID_MSIX);
}
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
@ -1149,8 +1230,11 @@ static int dw_pcie_pme_turn_off(struct dw_pcie *pci)
int dw_pcie_suspend_noirq(struct dw_pcie *pci)
{
u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
int ret = 0;
u32 val;
int ret;
if (!dw_pcie_link_up(pci))
goto stop_link;
/*
* If L1SS is supported, then do not put the link into L2 as some
@ -1167,6 +1251,16 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
return ret;
}
/*
* Some SoCs do not support reading the LTSSM register after
* PME_Turn_Off broadcast. For those SoCs, skip waiting for L2/L3 Ready
* state and wait 10ms as recommended in PCIe spec r6.0, sec 5.3.3.2.1.
*/
if (pci->pp.skip_l23_ready) {
mdelay(PCIE_PME_TO_L2_TIMEOUT_US / 1000);
goto stop_link;
}
ret = read_poll_timeout(dw_pcie_get_ltssm, val,
val == DW_PCIE_LTSSM_L2_IDLE ||
val <= DW_PCIE_LTSSM_DETECT_WAIT,
@ -1185,6 +1279,7 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
*/
udelay(1);
stop_link:
dw_pcie_stop_link(pci);
if (pci->pp.ops->deinit)
pci->pp.ops->deinit(&pci->pp);

View file

@ -61,6 +61,7 @@ static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features dw_plat_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.msix_capable = true,
};

View file

@ -226,16 +226,70 @@ void dw_pcie_version_detect(struct dw_pcie *pci)
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
{
return PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap,
pci);
NULL, pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
{
return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, pci);
return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, NULL, pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap)
{
u8 cap_pos, pre_pos, next_pos;
u16 reg;
cap_pos = PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap,
&pre_pos, pci);
if (!cap_pos)
return;
reg = dw_pcie_readw_dbi(pci, cap_pos);
next_pos = (reg & 0xff00) >> 8;
dw_pcie_dbi_ro_wr_en(pci);
if (pre_pos == PCI_CAPABILITY_LIST)
dw_pcie_writeb_dbi(pci, PCI_CAPABILITY_LIST, next_pos);
else
dw_pcie_writeb_dbi(pci, pre_pos + 1, next_pos);
dw_pcie_dbi_ro_wr_dis(pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_remove_capability);
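The unlinking performed above can be sketched against a flat byte-array model of config space (illustrative helper, not the DBI accessors): each capability stores its ID at `pos` and the next pointer at `pos + 1`, with the list head at PCI_CAPABILITY_LIST (0x34).

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_CAP_LIST 0x34	/* PCI_CAPABILITY_LIST */

/* Remove a capability by ID by pointing its predecessor past it. */
static void demo_remove_cap(uint8_t *cfg, uint8_t id)
{
	uint8_t prev_next_off = DEMO_CAP_LIST;
	uint8_t pos = cfg[DEMO_CAP_LIST];

	while (pos) {
		if (cfg[pos] == id) {
			cfg[prev_next_off] = cfg[pos + 1];
			return;
		}
		prev_next_off = pos + 1;
		pos = cfg[pos + 1];
	}
}
```

This is why PCI_FIND_NEXT_CAP() was extended to return the preceding capability: the unlink needs the predecessor's next-pointer offset, which for the first entry is PCI_CAPABILITY_LIST itself.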
void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap)
{
int cap_pos, next_pos, pre_pos;
u32 pre_header, header;
cap_pos = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, &pre_pos, pci);
if (!cap_pos)
return;
header = dw_pcie_readl_dbi(pci, cap_pos);
/*
 * If the first cap, at offset PCI_CFG_SPACE_SIZE, is removed, it
 * cannot be unlinked from the list; instead, only clear its
 * capability ID so it reads as invalid.
 */
if (cap_pos == PCI_CFG_SPACE_SIZE) {
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, cap_pos, header & 0xffff0000);
dw_pcie_dbi_ro_wr_dis(pci);
return;
}
pre_header = dw_pcie_readl_dbi(pci, pre_pos);
next_pos = PCI_EXT_CAP_NEXT(header);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, pre_pos,
(pre_header & 0xfffff) | (next_pos << 20));
dw_pcie_dbi_ro_wr_dis(pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_remove_ext_capability);
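The bit arithmetic above follows the extended capability header layout: bits 15:0 hold the capability ID, 19:16 the version, and 31:20 the next pointer, which is why the code keeps the low 20 bits of the predecessor and splices in a new next offset. A standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Replace the next pointer (bits 31:20), keeping ID and version. */
static uint32_t ext_cap_set_next(uint32_t header, uint32_t next_pos)
{
	return (header & 0xfffff) | (next_pos << 20);
}

/* Extract the next pointer; offsets are dword-aligned, as in
 * the kernel's PCI_EXT_CAP_NEXT() macro. */
static uint32_t ext_cap_next(uint32_t header)
{
	return (header >> 20) & 0xffc;
}
```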
static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id,
u16 vsec_id)
{
@ -246,7 +300,7 @@ static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id,
return 0;
while ((vsec = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, vsec,
PCI_EXT_CAP_ID_VNDR, pci))) {
PCI_EXT_CAP_ID_VNDR, NULL, pci))) {
header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER);
if (PCI_VNDR_HEADER_ID(header) == vsec_id)
return vsec;
@ -478,6 +532,9 @@ int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
u32 retries, val;
u64 limit_addr;
if (atu->index >= pci->num_ob_windows)
return -ENOSPC;
limit_addr = parent_bus_addr + atu->size - 1;
if ((limit_addr & ~pci->region_limit) != (parent_bus_addr & ~pci->region_limit) ||
@ -551,6 +608,9 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
u64 limit_addr = pci_addr + size - 1;
u32 retries, val;
if (index >= pci->num_ib_windows)
return -ENOSPC;
if ((limit_addr & ~pci->region_limit) != (pci_addr & ~pci->region_limit) ||
!IS_ALIGNED(parent_bus_addr, pci->region_align) ||
!IS_ALIGNED(pci_addr, pci->region_align) || !size) {
@ -639,9 +699,69 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index)
dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0);
}
const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm)
{
const char *str;
switch (ltssm) {
#define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_2);
default:
str = "DW_PCIE_LTSSM_UNKNOWN";
break;
}
return str + strlen("DW_PCIE_LTSSM_");
}
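The stringify-and-strip trick used by dw_pcie_ltssm_status_string() is worth isolating: the `#` operator turns the enumerator into its own name, and the shared prefix is skipped by pointer arithmetic on the literal. A minimal sketch with a hypothetical enum:

```c
#include <assert.h>
#include <string.h>

enum demo_state { DEMO_STATE_IDLE, DEMO_STATE_RUN };

static const char *demo_state_string(enum demo_state s)
{
	const char *str;

	switch (s) {
#define DEMO_STATE_NAME(n) case n: str = #n; break
	DEMO_STATE_NAME(DEMO_STATE_IDLE);
	DEMO_STATE_NAME(DEMO_STATE_RUN);
#undef DEMO_STATE_NAME
	default:
		str = "DEMO_STATE_UNKNOWN";
		break;
	}
	/* skip the common "DEMO_STATE_" prefix */
	return str + strlen("DEMO_STATE_");
}
```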
/**
* dw_pcie_wait_for_link - Wait for the PCIe link to be up
* @pci: DWC instance
*
* Returns: 0 if link is up, -ENODEV if device is not found, -EIO if the device
* is found but not active and -ETIMEDOUT if the link fails to come up for other
* reasons.
*/
int dw_pcie_wait_for_link(struct dw_pcie *pci)
{
u32 offset, val;
u32 offset, val, ltssm;
int retries;
/* Check if the link is up or not */
@ -653,7 +773,29 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
}
if (retries >= PCIE_LINK_WAIT_MAX_RETRIES) {
dev_info(pci->dev, "Phy link never came up\n");
/*
* If the link is in Detect.Quiet or Detect.Active state, it
* indicates that no device is detected.
*/
ltssm = dw_pcie_get_ltssm(pci);
if (ltssm == DW_PCIE_LTSSM_DETECT_QUIET ||
ltssm == DW_PCIE_LTSSM_DETECT_ACT) {
dev_info(pci->dev, "Device not found\n");
return -ENODEV;
/*
 * If the link is in Polling.Active or Polling.Compliance state,
 * then a device is connected to the bus but is not active, i.e.,
 * the device firmware might not be initialized yet.
 */
} else if (ltssm == DW_PCIE_LTSSM_POLL_ACTIVE ||
ltssm == DW_PCIE_LTSSM_POLL_COMPLIANCE) {
dev_info(pci->dev, "Device found, but not active\n");
return -EIO;
}
dev_err(pci->dev, "Link failed to come up. LTSSM: %s\n",
dw_pcie_ltssm_status_string(ltssm));
return -ETIMEDOUT;
}
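The classification above boils down to a small LTSSM-to-errno mapping, sketched here with illustrative enum values and literal errno numbers standing in for the kernel macros:

```c
#include <assert.h>

enum demo_ltssm { DETECT_QUIET, DETECT_ACT, POLL_ACTIVE,
		  POLL_COMPLIANCE, L0 };

/*
 * Detect.* means no device was seen, Polling.* means a device is
 * present but not yet active, and anything else is a plain timeout.
 */
static int link_wait_errno(enum demo_ltssm s)
{
	switch (s) {
	case DETECT_QUIET:
	case DETECT_ACT:
		return -19;	/* -ENODEV: no device detected */
	case POLL_ACTIVE:
	case POLL_COMPLIANCE:
		return -5;	/* -EIO: device found but inactive */
	default:
		return -110;	/* -ETIMEDOUT: other link failure */
	}
}
```

The distinct return values are what lets dw_pcie_host_init() continue probing for -ENODEV/-EIO while still failing hard on -ETIMEDOUT.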

View file

@ -305,6 +305,10 @@
/* Default eDMA LLP memory size */
#define DMA_LLP_MEM_SIZE PAGE_SIZE
/* Common struct pci_epc_feature bits among DWC EP glue drivers */
#define DWC_EPC_COMMON_FEATURES .dynamic_inbound_mapping = true, \
.subrange_mapping = true
struct dw_pcie;
struct dw_pcie_rp;
struct dw_pcie_ep;
@ -388,6 +392,10 @@ enum dw_pcie_ltssm {
DW_PCIE_LTSSM_RCVRY_EQ2 = 0x22,
DW_PCIE_LTSSM_RCVRY_EQ3 = 0x23,
/* Vendor glue drivers provide pseudo L1 substates from get_ltssm() */
DW_PCIE_LTSSM_L1_1 = 0x141,
DW_PCIE_LTSSM_L1_2 = 0x142,
DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF,
};
@ -412,7 +420,7 @@ struct dw_pcie_host_ops {
};
struct dw_pcie_rp {
bool has_msi_ctrl:1;
bool use_imsi_rx:1;
bool cfg0_io_shared:1;
u64 cfg0_base;
void __iomem *va_cfg0_base;
@ -434,11 +442,11 @@ struct dw_pcie_rp {
bool use_atu_msg;
int msg_atu_index;
struct resource *msg_res;
bool use_linkup_irq;
struct pci_eq_presets presets;
struct pci_config_window *cfg;
bool ecam_enabled;
bool native_ecam;
bool skip_l23_ready;
};
struct dw_pcie_ep_ops {
@ -463,6 +471,12 @@ struct dw_pcie_ep_func {
u8 func_no;
u8 msi_cap; /* MSI capability offset */
u8 msix_cap; /* MSI-X capability offset */
u8 bar_to_atu[PCI_STD_NUM_BARS];
struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
/* Only for Address Match Mode inbound iATU */
u32 *ib_atu_indexes[PCI_STD_NUM_BARS];
unsigned int num_ib_atu_indexes[PCI_STD_NUM_BARS];
};
struct dw_pcie_ep {
@ -472,13 +486,16 @@ struct dw_pcie_ep {
phys_addr_t phys_base;
size_t addr_size;
size_t page_size;
u8 bar_to_atu[PCI_STD_NUM_BARS];
phys_addr_t *outbound_addr;
unsigned long *ib_window_map;
unsigned long *ob_window_map;
void __iomem *msi_mem;
phys_addr_t msi_mem_phys;
struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
/* MSI outbound iATU state */
bool msi_iatu_mapped;
u64 msi_msg_addr;
size_t msi_map_size;
};
struct dw_pcie_ops {
@ -561,6 +578,8 @@ void dw_pcie_version_detect(struct dw_pcie *pci);
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap);
void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci);
u16 dw_pcie_find_ptm_capability(struct dw_pcie *pci);
@ -809,6 +828,8 @@ static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci)
return (enum dw_pcie_ltssm)FIELD_GET(PORT_LOGIC_LTSSM_STATE_MASK, val);
}
const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm);
#ifdef CONFIG_PCIE_DW_HOST
int dw_pcie_suspend_noirq(struct dw_pcie *pci);
int dw_pcie_resume_noirq(struct dw_pcie *pci);
@ -890,7 +911,6 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
u16 interrupt_num);
void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar);
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap);
struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no);
#else
@ -948,12 +968,6 @@ static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
{
}
static inline int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci,
u8 prev_cap, u8 cap)
{
return 0;
}
static inline struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no)
{

View file

@ -68,6 +68,11 @@
#define PCIE_CLKREQ_NOT_READY FIELD_PREP_WM16(BIT(0), 0)
#define PCIE_CLKREQ_PULL_DOWN FIELD_PREP_WM16(GENMASK(13, 12), 1)
/* RASDES TBA information */
#define PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN 0x154
#define PCIE_CLIENT_CDM_RASDES_TBA_L1_1 BIT(4)
#define PCIE_CLIENT_CDM_RASDES_TBA_L1_2 BIT(5)
/* Hot Reset Control Register */
#define PCIE_CLIENT_HOT_RESET_CTRL 0x180
#define PCIE_LTSSM_APP_DLY2_EN BIT(1)
@ -181,11 +186,26 @@ static int rockchip_pcie_init_irq_domain(struct rockchip_pcie *rockchip)
return 0;
}
static u32 rockchip_pcie_get_ltssm(struct rockchip_pcie *rockchip)
static u32 rockchip_pcie_get_ltssm_reg(struct rockchip_pcie *rockchip)
{
return rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS);
}
static enum dw_pcie_ltssm rockchip_pcie_get_ltssm(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
u32 val = rockchip_pcie_readl_apb(rockchip,
PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN);
if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_1)
return DW_PCIE_LTSSM_L1_1;
if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_2)
return DW_PCIE_LTSSM_L1_2;
return rockchip_pcie_get_ltssm_reg(rockchip) & PCIE_LTSSM_STATUS_MASK;
}
static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip)
{
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM,
@ -201,7 +221,7 @@ static void rockchip_pcie_disable_ltssm(struct rockchip_pcie *rockchip)
static bool rockchip_pcie_link_up(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
u32 val = rockchip_pcie_get_ltssm(rockchip);
u32 val = rockchip_pcie_get_ltssm_reg(rockchip);
return FIELD_GET(PCIE_LINKUP_MASK, val) == PCIE_LINKUP;
}
@ -327,9 +347,7 @@ static void rockchip_pcie_ep_hide_broken_ats_cap_rk3588(struct dw_pcie_ep *ep)
if (!of_device_is_compatible(dev->of_node, "rockchip,rk3588-pcie-ep"))
return;
if (dw_pcie_ep_hide_ext_capability(pci, PCI_EXT_CAP_ID_SECPCI,
PCI_EXT_CAP_ID_ATS))
dev_err(dev, "failed to hide ATS capability\n");
dw_pcie_remove_ext_capability(pci, PCI_EXT_CAP_ID_ATS);
}
static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep)
@ -364,6 +382,7 @@ static int rockchip_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
@ -384,6 +403,7 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
* BARs) would be overwritten, resulting in (all other BARs) no longer working.
*/
static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
@ -485,36 +505,9 @@ static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = rockchip_pcie_link_up,
.start_link = rockchip_pcie_start_link,
.stop_link = rockchip_pcie_stop_link,
.get_ltssm = rockchip_pcie_get_ltssm,
};
static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg)
{
struct rockchip_pcie *rockchip = arg;
struct dw_pcie *pci = &rockchip->pci;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = pci->dev;
u32 reg;
reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC);
rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg);
dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip));
if (reg & PCIE_RDLH_LINK_UP_CHGED) {
if (rockchip_pcie_link_up(pci)) {
msleep(PCIE_RESET_CONFIG_WAIT_MS);
dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
/* Rescan the bus to enumerate endpoint devices */
pci_lock_rescan_remove();
pci_rescan_bus(pp->bridge->bus);
pci_unlock_rescan_remove();
}
}
return IRQ_HANDLED;
}
static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
{
struct rockchip_pcie *rockchip = arg;
@ -526,7 +519,7 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg);
dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip));
dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm_reg(rockchip));
if (reg & PCIE_LINK_REQ_RST_NOT_INT) {
dev_dbg(dev, "hot reset or link-down reset\n");
@ -547,29 +540,14 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
return IRQ_HANDLED;
}
static int rockchip_pcie_configure_rc(struct platform_device *pdev,
struct rockchip_pcie *rockchip)
static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip)
{
struct device *dev = &pdev->dev;
struct dw_pcie_rp *pp;
int irq, ret;
u32 val;
if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST))
return -ENODEV;
irq = platform_get_irq_byname(pdev, "sys");
if (irq < 0)
return irq;
ret = devm_request_threaded_irq(dev, irq, NULL,
rockchip_pcie_rc_sys_irq_thread,
IRQF_ONESHOT, "pcie-sys-rc", rockchip);
if (ret) {
dev_err(dev, "failed to request PCIe sys IRQ\n");
return ret;
}
/* LTSSM enable control mode */
val = FIELD_PREP_WM16(PCIE_LTSSM_ENABLE_ENHANCE, 1);
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
@@ -580,19 +558,8 @@ static int rockchip_pcie_configure_rc(struct platform_device *pdev,
pp = &rockchip->pci.pp;
pp->ops = &rockchip_pcie_host_ops;
pp->use_linkup_irq = true;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
}
/* unmask DLL up/down indicator */
val = FIELD_PREP_WM16(PCIE_RDLH_LINK_UP_CHGED, 0);
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC);
-return ret;
+return dw_pcie_host_init(pp);
}
static int rockchip_pcie_configure_ep(struct platform_device *pdev,
@@ -711,7 +678,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
switch (data->mode) {
case DW_PCIE_RC_TYPE:
-ret = rockchip_pcie_configure_rc(pdev, rockchip);
+ret = rockchip_pcie_configure_rc(rockchip);
if (ret)
goto deinit_clk;
break;


@@ -309,6 +309,7 @@ static int keembay_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features keembay_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.msix_capable = true,
.bar[BAR_0] = { .only_64bit = true, },


@@ -820,6 +820,7 @@ static void qcom_pcie_ep_init_debugfs(struct qcom_pcie_ep *pcie_ep)
}
static const struct pci_epc_features qcom_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.align = SZ_4K,


@@ -56,9 +56,6 @@
#define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8
#define PARF_Q2A_FLUSH 0x1ac
#define PARF_LTSSM 0x1b0
#define PARF_INT_ALL_STATUS 0x224
#define PARF_INT_ALL_CLEAR 0x228
#define PARF_INT_ALL_MASK 0x22c
#define PARF_SID_OFFSET 0x234
#define PARF_BDF_TRANSLATE_CFG 0x24c
#define PARF_DBI_BASE_ADDR_V2 0x350
@@ -135,10 +132,6 @@
/* PARF_LTSSM register fields */
#define LTSSM_EN BIT(8)
/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
#define PARF_INT_ALL_LINK_UP BIT(13)
#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)
/* PARF_NO_SNOOP_OVERRIDE register fields */
#define WR_NO_SNOOP_OVERRIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERRIDE_EN BIT(3)
@@ -1313,6 +1306,9 @@ static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
goto err_pwrctrl_power_off;
}
dw_pcie_remove_capability(pcie->pci, PCI_CAP_ID_MSIX);
dw_pcie_remove_ext_capability(pcie->pci, PCI_EXT_CAP_ID_DPC);
qcom_ep_reset_deassert(pcie);
if (pcie->cfg->ops->config_sid) {
@@ -1640,37 +1636,11 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
qcom_pcie_link_transition_count);
}
static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
{
struct qcom_pcie *pcie = data;
struct dw_pcie_rp *pp = &pcie->pci->pp;
struct device *dev = pcie->pci->dev;
u32 status = readl_relaxed(pcie->parf + PARF_INT_ALL_STATUS);
writel_relaxed(status, pcie->parf + PARF_INT_ALL_CLEAR);
if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) {
msleep(PCIE_RESET_CONFIG_WAIT_MS);
dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
/* Rescan the bus to enumerate endpoint devices */
pci_lock_rescan_remove();
pci_rescan_bus(pp->bridge->bus);
pci_unlock_rescan_remove();
qcom_pcie_icc_opp_update(pcie);
} else {
dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n",
status);
}
return IRQ_HANDLED;
}
static void qcom_pci_free_msi(void *ptr)
{
struct dw_pcie_rp *pp = (struct dw_pcie_rp *)ptr;
-if (pp && pp->has_msi_ctrl)
+if (pp && pp->use_imsi_rx)
dw_pcie_free_msi(pp);
}
@@ -1694,7 +1664,7 @@ static int qcom_pcie_ecam_host_init(struct pci_config_window *cfg)
if (ret)
return ret;
-pp->has_msi_ctrl = true;
+pp->use_imsi_rx = true;
dw_pcie_msi_init(pp);
return devm_add_action_or_reset(dev, qcom_pci_free_msi, pp);
@@ -1810,8 +1780,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
struct dw_pcie_rp *pp;
struct resource *res;
struct dw_pcie *pci;
-int ret, irq;
-char *name;
+int ret;
pcie_cfg = of_device_get_match_data(dev);
if (!pcie_cfg) {
@@ -1962,37 +1931,12 @@ static int qcom_pcie_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pcie);
irq = platform_get_irq_byname_optional(pdev, "global");
if (irq > 0)
pp->use_linkup_irq = true;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err_probe(dev, ret, "cannot initialize host\n");
goto err_phy_exit;
}
name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_global_irq%d",
pci_domain_nr(pp->bridge->bus));
if (!name) {
ret = -ENOMEM;
goto err_host_deinit;
}
if (irq > 0) {
ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
qcom_pcie_global_irq_thread,
IRQF_ONESHOT, name, pcie);
if (ret) {
dev_err_probe(&pdev->dev, ret,
"Failed to request Global IRQ\n");
goto err_host_deinit;
}
writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7,
pcie->parf + PARF_INT_ALL_MASK);
}
qcom_pcie_icc_opp_update(pcie);
if (pcie->mhi)
@@ -2000,8 +1944,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
return 0;
err_host_deinit:
dw_pcie_host_deinit(pp);
err_phy_exit:
list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
phy_exit(port->phy);


@@ -420,6 +420,7 @@ static int rcar_gen4_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },


@@ -70,6 +70,7 @@ static int stm32_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features stm32_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.align = SZ_64K,
};


@@ -1988,6 +1988,7 @@ static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
}
static const struct pci_epc_features tegra_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M,


@@ -420,6 +420,7 @@ static const struct uniphier_pcie_ep_soc_data uniphier_pro5_data = {
.init = uniphier_pcie_pro5_init_ep,
.wait = NULL,
.features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,
@@ -438,6 +439,7 @@ static const struct uniphier_pcie_ep_soc_data uniphier_nx1_data = {
.init = uniphier_pcie_nx1_init_ep,
.wait = uniphier_pcie_nx1_wait_ep,
.features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,


@@ -33,6 +33,8 @@
#define COMMAND_COPY BIT(5)
#define COMMAND_ENABLE_DOORBELL BIT(6)
#define COMMAND_DISABLE_DOORBELL BIT(7)
#define COMMAND_BAR_SUBRANGE_SETUP BIT(8)
#define COMMAND_BAR_SUBRANGE_CLEAR BIT(9)
#define STATUS_READ_SUCCESS BIT(0)
#define STATUS_READ_FAIL BIT(1)
@@ -48,6 +50,10 @@
#define STATUS_DOORBELL_ENABLE_FAIL BIT(11)
#define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12)
#define STATUS_DOORBELL_DISABLE_FAIL BIT(13)
#define STATUS_BAR_SUBRANGE_SETUP_SUCCESS BIT(14)
#define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15)
#define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16)
#define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17)
#define FLAG_USE_DMA BIT(0)
@@ -57,12 +63,16 @@
#define CAP_MSI BIT(1)
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
#define CAP_SUBRANGE_MAPPING BIT(4)
#define PCI_EPF_TEST_BAR_SUBRANGE_NSUB 2
static struct workqueue_struct *kpcitest_workqueue;
struct pci_epf_test {
void *reg[PCI_STD_NUM_BARS];
struct pci_epf *epf;
struct config_group group;
enum pci_barno test_reg_bar;
size_t msix_table_offset;
struct delayed_work cmd_handler;
@@ -76,6 +86,7 @@ struct pci_epf_test {
bool dma_private;
const struct pci_epc_features *epc_features;
struct pci_epf_bar db_bar;
size_t bar_size[PCI_STD_NUM_BARS];
};
struct pci_epf_test_reg {
@@ -102,7 +113,8 @@ static struct pci_epf_header test_header = {
.interrupt_pin = PCI_INTERRUPT_INTA,
};
-static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 };
+/* default BAR sizes, can be overridden by the user using configfs */
+static size_t default_bar_size[] = { 131072, 131072, 131072, 131072, 131072, 1048576 };
static void pci_epf_test_dma_callback(void *param)
{
@@ -806,6 +818,155 @@ set_status_err:
reg->status = cpu_to_le32(status);
}
static u8 pci_epf_test_subrange_sig_byte(enum pci_barno barno,
unsigned int subno)
{
return 0x50 + (barno * 8) + subno;
}
static void pci_epf_test_bar_subrange_setup(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
struct pci_epf_bar_submap *submap, *old_submap;
struct pci_epf *epf = epf_test->epf;
struct pci_epc *epc = epf->epc;
struct pci_epf_bar *bar;
unsigned int nsub = PCI_EPF_TEST_BAR_SUBRANGE_NSUB, old_nsub;
/* reg->size carries BAR number for BAR_SUBRANGE_* commands. */
enum pci_barno barno = le32_to_cpu(reg->size);
u32 status = le32_to_cpu(reg->status);
unsigned int i, phys_idx;
size_t sub_size;
u8 *addr;
int ret;
if (barno >= PCI_STD_NUM_BARS) {
dev_err(&epf->dev, "Invalid barno: %d\n", barno);
goto err;
}
/* Host side should've avoided test_reg_bar, this is a safeguard. */
if (barno == epf_test->test_reg_bar) {
dev_err(&epf->dev, "test_reg_bar cannot be used for subrange test\n");
goto err;
}
if (!epf_test->epc_features->dynamic_inbound_mapping ||
!epf_test->epc_features->subrange_mapping) {
dev_err(&epf->dev, "epc driver does not support subrange mapping\n");
goto err;
}
bar = &epf->bar[barno];
if (!bar->size || !bar->addr) {
dev_err(&epf->dev, "bar size/addr (%zu/%p) is invalid\n",
bar->size, bar->addr);
goto err;
}
if (bar->size % nsub) {
dev_err(&epf->dev, "BAR size %zu is not divisible by %u\n",
bar->size, nsub);
goto err;
}
sub_size = bar->size / nsub;
submap = kcalloc(nsub, sizeof(*submap), GFP_KERNEL);
if (!submap)
goto err;
for (i = 0; i < nsub; i++) {
/* Swap the two halves so RC can verify ordering. */
phys_idx = i ^ 1;
submap[i].phys_addr = bar->phys_addr + (phys_idx * sub_size);
submap[i].size = sub_size;
}
old_submap = bar->submap;
old_nsub = bar->num_submap;
bar->submap = submap;
bar->num_submap = nsub;
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);
if (ret) {
dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);
bar->submap = old_submap;
bar->num_submap = old_nsub;
kfree(submap);
goto err;
}
kfree(old_submap);
/*
* Fill deterministic signatures into the physical regions that
* each BAR subrange maps to. RC verifies these to ensure the
* submap order is really applied.
*/
addr = (u8 *)bar->addr;
for (i = 0; i < nsub; i++) {
phys_idx = i ^ 1;
memset(addr + (phys_idx * sub_size),
pci_epf_test_subrange_sig_byte(barno, i),
sub_size);
}
status |= STATUS_BAR_SUBRANGE_SETUP_SUCCESS;
reg->status = cpu_to_le32(status);
return;
err:
status |= STATUS_BAR_SUBRANGE_SETUP_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_bar_subrange_clear(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
struct pci_epf *epf = epf_test->epf;
struct pci_epf_bar_submap *submap;
struct pci_epc *epc = epf->epc;
/* reg->size carries BAR number for BAR_SUBRANGE_* commands. */
enum pci_barno barno = le32_to_cpu(reg->size);
u32 status = le32_to_cpu(reg->status);
struct pci_epf_bar *bar;
unsigned int nsub;
int ret;
if (barno >= PCI_STD_NUM_BARS) {
dev_err(&epf->dev, "Invalid barno: %d\n", barno);
goto err;
}
bar = &epf->bar[barno];
submap = bar->submap;
nsub = bar->num_submap;
if (!submap || !nsub)
goto err;
bar->submap = NULL;
bar->num_submap = 0;
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);
if (ret) {
bar->submap = submap;
bar->num_submap = nsub;
dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);
goto err;
}
kfree(submap);
status |= STATUS_BAR_SUBRANGE_CLEAR_SUCCESS;
reg->status = cpu_to_le32(status);
return;
err:
status |= STATUS_BAR_SUBRANGE_CLEAR_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_cmd_handler(struct work_struct *work)
{
u32 command;
@@ -861,6 +1022,14 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
pci_epf_test_disable_doorbell(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_BAR_SUBRANGE_SETUP:
pci_epf_test_bar_subrange_setup(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_BAR_SUBRANGE_CLEAR:
pci_epf_test_bar_subrange_clear(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
default:
dev_err(dev, "Invalid command 0x%x\n", command);
break;
@@ -933,6 +1102,10 @@ static void pci_epf_test_set_capabilities(struct pci_epf *epf)
if (epf_test->epc_features->intx_capable)
caps |= CAP_INTX;
if (epf_test->epc_features->dynamic_inbound_mapping &&
epf_test->epc_features->subrange_mapping)
caps |= CAP_SUBRANGE_MAPPING;
reg->caps = cpu_to_le32(caps);
}
@@ -1070,7 +1243,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
if (epc_features->bar[bar].type == BAR_FIXED)
test_reg_size = epc_features->bar[bar].fixed_size;
else
-test_reg_size = bar_size[bar];
+test_reg_size = epf_test->bar_size[bar];
base = pci_epf_alloc_space(epf, test_reg_size, bar,
epc_features, PRIMARY_INTERFACE);
@@ -1142,6 +1315,94 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
pci_epf_test_free_space(epf);
}
#define PCI_EPF_TEST_BAR_SIZE_R(_name, _id) \
static ssize_t pci_epf_test_##_name##_show(struct config_item *item, \
char *page) \
{ \
struct config_group *group = to_config_group(item); \
struct pci_epf_test *epf_test = \
container_of(group, struct pci_epf_test, group); \
\
return sysfs_emit(page, "%zu\n", epf_test->bar_size[_id]); \
}
#define PCI_EPF_TEST_BAR_SIZE_W(_name, _id) \
static ssize_t pci_epf_test_##_name##_store(struct config_item *item, \
const char *page, \
size_t len) \
{ \
struct config_group *group = to_config_group(item); \
struct pci_epf_test *epf_test = \
container_of(group, struct pci_epf_test, group); \
unsigned int val; \
int ret; \
\
/* \
* BAR sizes can only be modified before binding to an EPC, \
* because pci_epf_test_alloc_space() is called in .bind(). \
*/ \
if (epf_test->epf->epc) \
return -EOPNOTSUPP; \
\
ret = kstrtouint(page, 0, &val); \
if (ret) \
return ret; \
\
if (!is_power_of_2(val)) \
return -EINVAL; \
\
epf_test->bar_size[_id] = val; \
\
return len; \
}
PCI_EPF_TEST_BAR_SIZE_R(bar0_size, BAR_0)
PCI_EPF_TEST_BAR_SIZE_W(bar0_size, BAR_0)
PCI_EPF_TEST_BAR_SIZE_R(bar1_size, BAR_1)
PCI_EPF_TEST_BAR_SIZE_W(bar1_size, BAR_1)
PCI_EPF_TEST_BAR_SIZE_R(bar2_size, BAR_2)
PCI_EPF_TEST_BAR_SIZE_W(bar2_size, BAR_2)
PCI_EPF_TEST_BAR_SIZE_R(bar3_size, BAR_3)
PCI_EPF_TEST_BAR_SIZE_W(bar3_size, BAR_3)
PCI_EPF_TEST_BAR_SIZE_R(bar4_size, BAR_4)
PCI_EPF_TEST_BAR_SIZE_W(bar4_size, BAR_4)
PCI_EPF_TEST_BAR_SIZE_R(bar5_size, BAR_5)
PCI_EPF_TEST_BAR_SIZE_W(bar5_size, BAR_5)
CONFIGFS_ATTR(pci_epf_test_, bar0_size);
CONFIGFS_ATTR(pci_epf_test_, bar1_size);
CONFIGFS_ATTR(pci_epf_test_, bar2_size);
CONFIGFS_ATTR(pci_epf_test_, bar3_size);
CONFIGFS_ATTR(pci_epf_test_, bar4_size);
CONFIGFS_ATTR(pci_epf_test_, bar5_size);
static struct configfs_attribute *pci_epf_test_attrs[] = {
&pci_epf_test_attr_bar0_size,
&pci_epf_test_attr_bar1_size,
&pci_epf_test_attr_bar2_size,
&pci_epf_test_attr_bar3_size,
&pci_epf_test_attr_bar4_size,
&pci_epf_test_attr_bar5_size,
NULL,
};
static const struct config_item_type pci_epf_test_group_type = {
.ct_attrs = pci_epf_test_attrs,
.ct_owner = THIS_MODULE,
};
static struct config_group *pci_epf_test_add_cfs(struct pci_epf *epf,
struct config_group *group)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct config_group *epf_group = &epf_test->group;
struct device *dev = &epf->dev;
config_group_init_type_name(epf_group, dev_name(dev),
&pci_epf_test_group_type);
return epf_group;
}
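The configfs attributes above are meant to be driven from userspace before the function is bound to an EPC. A rough usage sketch follows; the paths are illustrative only, and the inner attribute-group directory name comes from the EPF device name, so it varies by setup and should be checked with `ls`:

```shell
# Illustrative sketch; adjust paths for your platform.
mount -t configfs none /sys/kernel/config
cd /sys/kernel/config/pci_ep

# Creating the function instance also creates a sub-directory (named
# after the EPF device) holding the bar[0-5]_size attributes.
mkdir functions/pci_epf_test/func1
ls functions/pci_epf_test/func1/        # locate the attribute group

# Sizes must be powers of two, and must be written before binding the
# function to an EPC (the store callback returns -EOPNOTSUPP afterwards).
# The "func1" group name below is an assumption; use the directory found above.
echo 524288 > functions/pci_epf_test/func1/func1/bar0_size
```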
static const struct pci_epf_device_id pci_epf_test_ids[] = {
{
.name = "pci_epf_test",
@@ -1154,6 +1415,7 @@ static int pci_epf_test_probe(struct pci_epf *epf,
{
struct pci_epf_test *epf_test;
struct device *dev = &epf->dev;
enum pci_barno bar;
epf_test = devm_kzalloc(dev, sizeof(*epf_test), GFP_KERNEL);
if (!epf_test)
@@ -1161,6 +1423,8 @@ static int pci_epf_test_probe(struct pci_epf *epf,
epf->header = &test_header;
epf_test->epf = epf;
for (bar = BAR_0; bar < PCI_STD_NUM_BARS; bar++)
epf_test->bar_size[bar] = default_bar_size[bar];
INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler);
@@ -1173,6 +1437,7 @@ static int pci_epf_test_probe(struct pci_epf *epf,
static const struct pci_epf_ops ops = {
.unbind = pci_epf_test_unbind,
.bind = pci_epf_test_bind,
.add_cfs = pci_epf_test_add_cfs,
};
static struct pci_epf_driver test_driver = {


@@ -596,6 +596,14 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
if (!epc_features)
return -EINVAL;
if (epf_bar->num_submap && !epf_bar->submap)
return -EINVAL;
if (epf_bar->num_submap &&
!(epc_features->dynamic_inbound_mapping &&
epc_features->subrange_mapping))
return -EINVAL;
if (epc_features->bar[bar].type == BAR_RESIZABLE &&
(epf_bar->size < SZ_1M || (u64)epf_bar->size > (SZ_128G * 1024)))
return -EINVAL;


@@ -421,7 +421,7 @@ found:
static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn,
u8 pos, int cap)
{
-return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, bus, devfn);
+return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, NULL, bus, devfn);
}
u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap)
@@ -526,7 +526,7 @@ u16 pci_find_next_ext_capability(struct pci_dev *dev, u16 start, int cap)
return 0;
return PCI_FIND_NEXT_EXT_CAP(pci_bus_read_config, start, cap,
-dev->bus, dev->devfn);
+NULL, dev->bus, dev->devfn);
}
EXPORT_SYMBOL_GPL(pci_find_next_ext_capability);
@@ -595,7 +595,7 @@ static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap)
mask = HT_5BIT_CAP_MASK;
pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, pos,
-PCI_CAP_ID_HT, dev->bus, dev->devfn);
+PCI_CAP_ID_HT, NULL, dev->bus, dev->devfn);
while (pos) {
rc = pci_read_config_byte(dev, pos + 3, &cap);
if (rc != PCIBIOS_SUCCESSFUL)
@@ -606,7 +606,7 @@ static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap)
pos = PCI_FIND_NEXT_CAP(pci_bus_read_config,
pos + PCI_CAP_LIST_NEXT,
-PCI_CAP_ID_HT, dev->bus,
+PCI_CAP_ID_HT, NULL, dev->bus,
dev->devfn);
}


@@ -122,17 +122,21 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
* @read_cfg: Function pointer for reading PCI config space
* @start: Starting position to begin search
* @cap: Capability ID to find
* @prev_ptr: Pointer to store position of preceding capability (optional)
* @args: Arguments to pass to read_cfg function
*
-* Search the capability list in PCI config space to find @cap.
+* Search the capability list in PCI config space to find @cap. If
+* found, update *prev_ptr with the position of the preceding capability
+* (if prev_ptr != NULL)
* Implements TTL (time-to-live) protection against infinite loops.
*
* Return: Position of the capability if found, 0 otherwise.
*/
-#define PCI_FIND_NEXT_CAP(read_cfg, start, cap, args...) \
+#define PCI_FIND_NEXT_CAP(read_cfg, start, cap, prev_ptr, args...) \
({ \
int __ttl = PCI_FIND_CAP_TTL; \
u8 __id, __found_pos = 0; \
u8 __prev_pos = (start); \
u8 __pos = (start); \
u16 __ent; \
\
@@ -151,9 +155,12 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
\
if (__id == (cap)) { \
__found_pos = __pos; \
if (prev_ptr != NULL) \
*(u8 *)prev_ptr = __prev_pos; \
break; \
} \
\
__prev_pos = __pos; \
__pos = FIELD_GET(PCI_CAP_LIST_NEXT_MASK, __ent); \
} \
__found_pos; \
@@ -165,21 +172,26 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
* @read_cfg: Function pointer for reading PCI config space
* @start: Starting position to begin search (0 for initial search)
* @cap: Extended capability ID to find
* @prev_ptr: Pointer to store position of preceding capability (optional)
* @args: Arguments to pass to read_cfg function
*
* Search the extended capability list in PCI config space to find @cap.
* If found, update *prev_ptr with the position of the preceding capability
* (if prev_ptr != NULL)
* Implements TTL protection against infinite loops using a calculated
* maximum search count.
*
* Return: Position of the capability if found, 0 otherwise.
*/
-#define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, args...) \
+#define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, prev_ptr, args...) \
({ \
u16 __pos = (start) ?: PCI_CFG_SPACE_SIZE; \
u16 __found_pos = 0; \
u16 __prev_pos; \
int __ttl, __ret; \
u32 __header; \
\
__prev_pos = __pos; \
__ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; \
while (__ttl-- > 0 && __pos >= PCI_CFG_SPACE_SIZE) { \
__ret = read_cfg##_dword(args, __pos, &__header); \
@@ -191,9 +203,12 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
\
if (PCI_EXT_CAP_ID(__header) == (cap) && __pos != start) {\
__found_pos = __pos; \
if (prev_ptr != NULL) \
*(u16 *)prev_ptr = __prev_pos; \
break; \
} \
\
__prev_pos = __pos; \
__pos = PCI_EXT_CAP_NEXT(__header); \
} \
__found_pos; \


@@ -223,6 +223,13 @@ struct pci_epc_bar_desc {
/**
* struct pci_epc_features - features supported by a EPC device per function
* @linkup_notifier: indicate if the EPC device can notify EPF driver on link up
* @dynamic_inbound_mapping: indicate if the EPC device supports updating
* inbound mappings for an already configured BAR
* (i.e. allow calling pci_epc_set_bar() again
* without first calling pci_epc_clear_bar())
* @subrange_mapping: indicate if the EPC device can map inbound subranges for a
* BAR. This feature depends on @dynamic_inbound_mapping
* feature.
* @msi_capable: indicate if the endpoint function has MSI capability
* @msix_capable: indicate if the endpoint function has MSI-X capability
* @intx_capable: indicate if the endpoint can raise INTx interrupts
@@ -231,6 +238,8 @@ struct pci_epc_bar_desc {
*/
struct pci_epc_features {
unsigned int linkup_notifier : 1;
unsigned int dynamic_inbound_mapping : 1;
unsigned int subrange_mapping : 1;
unsigned int msi_capable : 1;
unsigned int msix_capable : 1;
unsigned int intx_capable : 1;


@@ -110,6 +110,22 @@ struct pci_epf_driver {
#define to_pci_epf_driver(drv) container_of_const((drv), struct pci_epf_driver, driver)
/**
* struct pci_epf_bar_submap - BAR subrange for inbound mapping
* @phys_addr: target physical/DMA address for this subrange
* @size: the size of the subrange to be mapped
*
* When pci_epf_bar.num_submap is >0, pci_epf_bar.submap describes the
* complete BAR layout. This allows an EPC driver to program multiple
* inbound translation windows for a single BAR when supported by the
* controller. The array order defines the BAR layout (submap[0] at offset
* 0, and each immediately follows the previous one).
*/
struct pci_epf_bar_submap {
dma_addr_t phys_addr;
size_t size;
};
/**
* struct pci_epf_bar - represents the BAR of EPF device
* @phys_addr: physical address that should be mapped to the BAR
@@ -119,6 +135,9 @@ struct pci_epf_driver {
* requirement
* @barno: BAR number
* @flags: flags that are set for the BAR
* @num_submap: number of entries in @submap
* @submap: array of subrange descriptors allocated by the caller. See
* struct pci_epf_bar_submap for the semantics in detail.
*/
struct pci_epf_bar {
dma_addr_t phys_addr;
@@ -127,6 +146,10 @@ struct pci_epf_bar {
size_t mem_size;
enum pci_barno barno;
int flags;
/* Optional sub-range mapping */
unsigned int num_submap;
struct pci_epf_bar_submap *submap;
};
/**


@@ -22,6 +22,7 @@
#define PCITEST_GET_IRQTYPE _IO('P', 0x9)
#define PCITEST_BARS _IO('P', 0xa)
#define PCITEST_DOORBELL _IO('P', 0xb)
#define PCITEST_BAR_SUBRANGE _IO('P', 0xc)
#define PCITEST_CLEAR_IRQ _IO('P', 0x10)
#define PCITEST_IRQ_TYPE_UNDEFINED -1


@@ -70,6 +70,23 @@ TEST_F(pci_ep_bar, BAR_TEST)
EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno);
}
TEST_F(pci_ep_bar, BAR_SUBRANGE_TEST)
{
int ret;
pci_ep_ioctl(PCITEST_SET_IRQTYPE, PCITEST_IRQ_TYPE_AUTO);
ASSERT_EQ(0, ret) TH_LOG("Can't set AUTO IRQ type");
pci_ep_ioctl(PCITEST_BAR_SUBRANGE, variant->barno);
if (ret == -ENODATA)
SKIP(return, "BAR is disabled");
if (ret == -EBUSY)
SKIP(return, "BAR is test register space");
if (ret == -EOPNOTSUPP)
SKIP(return, "Subrange map is not supported");
EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno);
}
FIXTURE(pci_ep_basic)
{
int fd;