Including fixes from CAN, netfilter and wireless.

Current release - new code bugs:
 
  - sched: cake: fixup cake_mq rate adjustment for diffserv config
 
  - wifi: fix missing ieee80211_eml_params member initialization
 
 Previous releases - regressions:
 
  - tcp: give up on stronger sk_rcvbuf checks (for now)
 
 Previous releases - always broken:
 
  - net: fix rcu_tasks stall in threaded busypoll
 
  - sched: fq: clear q->band_pkt_count[] in fq_reset()
 
  - sched: only allow act_ct to bind to clsact/ingress qdiscs and
    shared blocks
 
  - bridge: check relevant per-VLAN options in VLAN range grouping
 
  - xsk: fix fragment node deletion to prevent buffer leak
 
 Misc:
 
  - spring cleanup of inactive maintainers
 
 Signed-off-by: Jakub Kicinski <kuba@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE6jPA+I1ugmIBA4hXMUZtbf5SIrsFAmmptYEACgkQMUZtbf5S
 Irsraw/+L+L512Sbh1UlmbZjhT+AQkERHNkkfUMfXAeVb4uwHOqaydVdffvqRlTT
 zOK8Oqzqf5ojRezDZ02skXnJTh39MF9IFiugF9JHApxwT2ALv0S7PXPFUJeRQeAY
 +OiLT5+iy8wMfM6eryL6OtpM9PC8zwzH32oCYd5m4Ixf90Woj5G7x8Vooz7wUg1n
 0cAliam8QLIRBrKXqctf7J8n23AK+WcrLcAt58J+qWCGqiiXdJXMvWXv1PjQ7vs/
 KZysy0QaGwh3rw+5SquXmXwjhNIvvs58v3NV/4QbBdIKfJ5uYpTpyVgXJBQ6B4Jv
 8SATHNwGbuUHuZl8OHn9ysaPCE3ZuD5pMnHbLnbKR6fyic95GxMIx/BNAOVvvwOH
 l+GWEqch8hy6r+BVAJsoSEJzIf9aqUAlEhy0wEhVOP15yn5RWfMRQKpAaD6JKQYm
 0Q6i+PsdS8xaANcUzi1Ec6aqyaX+iIBY6srE/twU3PW23Uv2ejqAG89x4s7t9LPu
 GdMQ+iAEsR8Auph8Y5mshs4e9MrdlD3jzPCiFhkrqncWl/UcPpBgmHlD80vkTa1/
 miMyYG5wq3g9pAFT43aAuoE85K6ZdIW0xGp3wGYMiW8Zy6Ea5EdnM2Wg8kbi/om0
 W0pjfcI/2FInsZqK0g/PDeccWFKxl8C1SnfNDvy9rJHBwMkZHm4=
 =XGBM
 -----END PGP SIGNATURE-----

Merge tag 'net-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from CAN, netfilter and wireless.

  Current release - new code bugs:

   - sched: cake: fixup cake_mq rate adjustment for diffserv config

   - wifi: fix missing ieee80211_eml_params member initialization

  Previous releases - regressions:

   - tcp: give up on stronger sk_rcvbuf checks (for now)

  Previous releases - always broken:

   - net: fix rcu_tasks stall in threaded busypoll

   - sched:
      - fq: clear q->band_pkt_count[] in fq_reset()
      - only allow act_ct to bind to clsact/ingress qdiscs and shared
        blocks

   - bridge: check relevant per-VLAN options in VLAN range grouping

   - xsk: fix fragment node deletion to prevent buffer leak

  Misc:

   - spring cleanup of inactive maintainers"

* tag 'net-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (138 commits)
  xdp: produce a warning when calculated tailroom is negative
  net: enetc: use truesize as XDP RxQ info frag_size
  libeth, idpf: use truesize as XDP RxQ info frag_size
  i40e: use xdp.frame_sz as XDP RxQ info frag_size
  i40e: fix registering XDP RxQ info
  ice: change XDP RxQ frag_size from DMA write length to xdp.frame_sz
  ice: fix rxq info registering in mbuf packets
  xsk: introduce helper to determine rxq->frag_size
  xdp: use modulo operation to calculate XDP frag tailroom
  selftests/tc-testing: Add tests exercising act_ife metalist replace behaviour
  net/sched: act_ife: Fix metalist update behavior
  selftests: net: add test for IPv4 route with loopback IPv6 nexthop
  net: ipv6: fix panic when IPv4 route references loopback IPv6 nexthop
  net: vxlan: fix nd_tbl NULL dereference when IPv6 is disabled
  net: bridge: fix nd_tbl NULL dereference when IPv6 is disabled
  MAINTAINERS: remove Thomas Falcon from IBM ibmvnic
  MAINTAINERS: remove Claudiu Manoil and Alexandre Belloni from Ocelot switch
  MAINTAINERS: replace Taras Chornyi with Elad Nachman for Marvell Prestera
  MAINTAINERS: remove Jonathan Lemon from OpenCompute PTP
  MAINTAINERS: replace Clark Wang with Frank Li for Freescale FEC
  ...
Linus Torvalds 2026-03-05 11:00:46 -08:00
commit abacaf5599
159 changed files with 2163 additions and 847 deletions


@ -353,6 +353,7 @@ Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@opinsys.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com>
Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
Jason Xing <kerneljasonxing@gmail.com> <kernelxing@tencent.com>
<javier@osg.samsung.com> <javier.martinez@collabora.co.uk>
Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com>
Jayachandran C <c.jayachandran@gmail.com> <jayachandranc@netlogicmicro.com>
@ -401,6 +402,7 @@ Jiri Slaby <jirislaby@kernel.org> <xslaby@fi.muni.cz>
Jisheng Zhang <jszhang@kernel.org> <jszhang@marvell.com>
Jisheng Zhang <jszhang@kernel.org> <Jisheng.Zhang@synaptics.com>
Jishnu Prakash <quic_jprakash@quicinc.com> <jprakash@codeaurora.org>
Joe Damato <joe@dama.to> <jdamato@fastly.com>
Joel Granados <joel.granados@kernel.org> <j.granados@samsung.com>
Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>


@ -1242,6 +1242,10 @@ N: Veaceslav Falico
E: vfalico@gmail.com
D: Co-maintainer and co-author of the network bonding driver.
N: Thomas Falcon
E: tlfalcon@linux.ibm.com
D: Initial author of the IBM ibmvnic network driver
N: János Farkas
E: chexum@shadow.banki.hu
D: romfs, various (mostly networking) fixes
@ -2415,6 +2419,10 @@ S: Am Muehlenweg 38
S: D53424 Remagen
S: Germany
N: Jonathan Lemon
E: jonathan.lemon@gmail.com
D: OpenCompute PTP clock driver (ptp_ocp)
N: Colin Leroy
E: colin@colino.net
W: http://www.geekounet.org/


@ -87,6 +87,7 @@ required:
allOf:
- $ref: can-controller.yaml#
- $ref: /schemas/memory-controllers/mc-peripheral-props.yaml
- if:
properties:
compatible:


@ -993,10 +993,8 @@ F: Documentation/devicetree/bindings/thermal/amazon,al-thermal.yaml
F: drivers/thermal/thermal_mmio.c
AMAZON ETHERNET DRIVERS
M: Shay Agroskin <shayagr@amazon.com>
M: Arthur Kiyanovski <akiyano@amazon.com>
R: David Arinzon <darinzon@amazon.com>
R: Saeed Bishara <saeedb@amazon.com>
M: David Arinzon <darinzon@amazon.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
@ -4617,7 +4615,6 @@ F: drivers/bluetooth/
BLUETOOTH SUBSYSTEM
M: Marcel Holtmann <marcel@holtmann.org>
M: Johan Hedberg <johan.hedberg@gmail.com>
M: Luiz Augusto von Dentz <luiz.dentz@gmail.com>
L: linux-bluetooth@vger.kernel.org
S: Supported
@ -10171,8 +10168,8 @@ F: drivers/i2c/busses/i2c-cpm.c
FREESCALE IMX / MXC FEC DRIVER
M: Wei Fang <wei.fang@nxp.com>
R: Frank Li <frank.li@nxp.com>
R: Shenwei Wang <shenwei.wang@nxp.com>
R: Clark Wang <xiaoning.wang@nxp.com>
L: imx@lists.linux.dev
L: netdev@vger.kernel.org
S: Maintained
@ -12216,7 +12213,6 @@ IBM Power SRIOV Virtual NIC Device Driver
M: Haren Myneni <haren@linux.ibm.com>
M: Rick Lindsley <ricklind@linux.ibm.com>
R: Nick Child <nnac123@linux.ibm.com>
R: Thomas Falcon <tlfalcon@linux.ibm.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/ibm/ibmvnic.*
@ -15375,10 +15371,8 @@ F: drivers/crypto/marvell/
F: include/linux/soc/marvell/octeontx2/
MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
M: Mirko Lindner <mlindner@marvell.com>
M: Stephen Hemminger <stephen@networkplumber.org>
L: netdev@vger.kernel.org
S: Odd fixes
S: Orphan
F: drivers/net/ethernet/marvell/sk*
MARVELL LIBERTAS WIRELESS DRIVER
@ -15475,7 +15469,6 @@ MARVELL OCTEONTX2 RVU ADMIN FUNCTION DRIVER
M: Sunil Goutham <sgoutham@marvell.com>
M: Linu Cherian <lcherian@marvell.com>
M: Geetha sowjanya <gakula@marvell.com>
M: Jerin Jacob <jerinj@marvell.com>
M: hariprasad <hkelam@marvell.com>
M: Subbaraya Sundeep <sbhatta@marvell.com>
L: netdev@vger.kernel.org
@ -15490,7 +15483,7 @@ S: Supported
F: drivers/perf/marvell_pem_pmu.c
MARVELL PRESTERA ETHERNET SWITCH DRIVER
M: Taras Chornyi <taras.chornyi@plvision.eu>
M: Elad Nachman <enachman@marvell.com>
S: Supported
W: https://github.com/Marvell-switching/switchdev-prestera
F: drivers/net/ethernet/marvell/prestera/
@ -16164,7 +16157,6 @@ F: drivers/dma/mediatek/
MEDIATEK ETHERNET DRIVER
M: Felix Fietkau <nbd@nbd.name>
M: Sean Wang <sean.wang@mediatek.com>
M: Lorenzo Bianconi <lorenzo@kernel.org>
L: netdev@vger.kernel.org
S: Maintained
@ -16357,8 +16349,6 @@ F: include/soc/mediatek/smi.h
MEDIATEK SWITCH DRIVER
M: Chester A. Unal <chester.a.unal@arinc9.com>
M: Daniel Golle <daniel@makrotopia.org>
M: DENG Qingfang <dqfext@gmail.com>
M: Sean Wang <sean.wang@mediatek.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/dsa/mt7530-mdio.c
@ -19226,8 +19216,6 @@ F: tools/objtool/
OCELOT ETHERNET SWITCH DRIVER
M: Vladimir Oltean <vladimir.oltean@nxp.com>
M: Claudiu Manoil <claudiu.manoil@nxp.com>
M: Alexandre Belloni <alexandre.belloni@bootlin.com>
M: UNGLinuxDriver@microchip.com
L: netdev@vger.kernel.org
S: Supported
@ -19813,7 +19801,6 @@ F: arch/*/boot/dts/
F: include/dt-bindings/
OPENCOMPUTE PTP CLOCK DRIVER
M: Jonathan Lemon <jonathan.lemon@gmail.com>
M: Vadim Fedorenko <vadim.fedorenko@linux.dev>
L: netdev@vger.kernel.org
S: Maintained
@ -21457,9 +21444,8 @@ S: Supported
F: drivers/scsi/qedi/
QLOGIC QL4xxx ETHERNET DRIVER
M: Manish Chopra <manishc@marvell.com>
L: netdev@vger.kernel.org
S: Maintained
S: Orphan
F: drivers/net/ethernet/qlogic/qed/
F: drivers/net/ethernet/qlogic/qede/
F: include/linux/qed/


@ -324,7 +324,7 @@ static bool bond_sk_check(struct bonding *bond)
}
}
bool bond_xdp_check(struct bonding *bond, int mode)
bool __bond_xdp_check(int mode, int xmit_policy)
{
switch (mode) {
case BOND_MODE_ROUNDROBIN:
@ -335,7 +335,7 @@ bool bond_xdp_check(struct bonding *bond, int mode)
/* vlan+srcmac is not supported with XDP as in most cases the 802.1q
* payload is not in the packet due to hardware offload.
*/
if (bond->params.xmit_policy != BOND_XMIT_POLICY_VLAN_SRCMAC)
if (xmit_policy != BOND_XMIT_POLICY_VLAN_SRCMAC)
return true;
fallthrough;
default:
@ -343,6 +343,11 @@ bool bond_xdp_check(struct bonding *bond, int mode)
}
}
bool bond_xdp_check(struct bonding *bond, int mode)
{
return __bond_xdp_check(mode, bond->params.xmit_policy);
}
/*---------------------------------- VLAN -----------------------------------*/
/* In the following 2 functions, bond_vlan_rx_add_vid and bond_vlan_rx_kill_vid,


@ -1575,6 +1575,8 @@ static int bond_option_fail_over_mac_set(struct bonding *bond,
static int bond_option_xmit_hash_policy_set(struct bonding *bond,
const struct bond_opt_value *newval)
{
if (bond->xdp_prog && !__bond_xdp_check(BOND_MODE(bond), newval->value))
return -EOPNOTSUPP;
netdev_dbg(bond->dev, "Setting xmit hash policy to %s (%llu)\n",
newval->string, newval->value);
bond->params.xmit_policy = newval->value;
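A minimal userspace model of the refactor above: pulling the mode/policy compatibility test into a pure helper lets the same rule gate both XDP attach and a later xmit-hash-policy change. The mode and policy constants below are illustrative stand-ins, not the kernel's definitions.

```c
#include <assert.h>
#include <stdbool.h>

enum { MODE_ROUNDROBIN, MODE_XOR, MODE_OTHER };
enum { POLICY_LAYER2, POLICY_VLAN_SRCMAC };

/* Pure check, analogous to __bond_xdp_check(): depends only on its
 * arguments, so the option setter can test a *candidate* policy. */
static bool xdp_check(int mode, int xmit_policy)
{
	switch (mode) {
	case MODE_ROUNDROBIN:
		return true;
	case MODE_XOR:
		/* vlan+srcmac hashing needs headers XDP may not see */
		return xmit_policy != POLICY_VLAN_SRCMAC;
	default:
		return false;
	}
}

/* Called from the option setter: refuse a policy change that would
 * break an already-attached XDP program. */
static int set_xmit_policy(bool xdp_attached, int mode, int new_policy)
{
	if (xdp_attached && !xdp_check(mode, new_policy))
		return -95; /* -EOPNOTSUPP */
	return 0;       /* would apply the new policy here */
}
```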


@ -241,6 +241,7 @@ static int __init dummy_can_init(void)
dev->netdev_ops = &dummy_can_netdev_ops;
dev->ethtool_ops = &dummy_can_ethtool_ops;
dev->flags |= IFF_ECHO; /* enable echo handling */
priv = netdev_priv(dev);
priv->can.bittiming_const = &dummy_can_bittiming_const;
priv->can.bitrate_max = 20 * MEGA /* BPS */;


@ -1214,6 +1214,7 @@ static int mcp251x_open(struct net_device *net)
{
struct mcp251x_priv *priv = netdev_priv(net);
struct spi_device *spi = priv->spi;
bool release_irq = false;
unsigned long flags = 0;
int ret;
@ -1257,12 +1258,24 @@ static int mcp251x_open(struct net_device *net)
return 0;
out_free_irq:
free_irq(spi->irq, priv);
/* The IRQ handler might be running, and if so it will be waiting
* for the lock. But free_irq() must wait for the handler to finish
* so calling it here would deadlock.
*
* Setting priv->force_quit will let the handler exit right away
 * without any access to the hardware. This makes it safe to call
* free_irq() after the lock is released.
*/
priv->force_quit = 1;
release_irq = true;
mcp251x_hw_sleep(spi);
out_close:
mcp251x_power_enable(priv->transceiver, 0);
close_candev(net);
mutex_unlock(&priv->mcp_lock);
if (release_irq)
free_irq(spi->irq, priv);
return ret;
}
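The comment in the hunk above describes a classic teardown ordering problem: the handler may be blocked on a lock the teardown path holds, so waiting for the handler (free_irq) under that lock deadlocks. A userspace sketch of the same pattern with pthreads, where the quit flag is set before the lock is released and the join stands in for free_irq() — names and structure are illustrative, not the driver's:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct fake_priv {
	pthread_mutex_t lock;
	volatile bool force_quit;
	bool touched_hw_after_quit;
};

/* Stand-in for the IRQ handler: blocks on the lock, then would touch
 * hardware unless told to quit. */
static void *irq_handler(void *arg)
{
	struct fake_priv *p = arg;

	pthread_mutex_lock(&p->lock);
	if (!p->force_quit)
		p->touched_hw_after_quit = true; /* would access dead hardware */
	pthread_mutex_unlock(&p->lock);
	return NULL;
}

/* Teardown: set force_quit *before* dropping the lock, then it is safe
 * to wait for the handler (the free_irq() analogue) after the unlock. */
static int run_teardown(void)
{
	struct fake_priv p = { .force_quit = false };
	pthread_t tid;

	pthread_mutex_init(&p.lock, NULL);
	pthread_mutex_lock(&p.lock);            /* teardown owns the lock */
	pthread_create(&tid, NULL, irq_handler, &p);

	p.force_quit = true;                    /* handler will bail out */
	pthread_mutex_unlock(&p.lock);
	pthread_join(tid, NULL);                /* stands in for free_irq() */
	pthread_mutex_destroy(&p.lock);

	return p.touched_hw_after_quit ? -1 : 0;
}
```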


@ -445,6 +445,11 @@ static void ems_usb_read_bulk_callback(struct urb *urb)
start = CPC_HEADER_SIZE;
while (msg_count) {
if (start + CPC_MSG_HEADER_LEN > urb->actual_length) {
netdev_err(netdev, "format error\n");
break;
}
msg = (struct ems_cpc_msg *)&ibuf[start];
switch (msg->type) {
@ -474,7 +479,7 @@ static void ems_usb_read_bulk_callback(struct urb *urb)
start += CPC_MSG_HEADER_LEN + msg->length;
msg_count--;
if (start > urb->transfer_buffer_length) {
if (start > urb->actual_length) {
netdev_err(netdev, "format error\n");
break;
}
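The two ems_usb checks above follow one rule: when walking variable-length messages in a URB buffer, validate against the byte count the device actually delivered (actual_length), not the buffer capacity, and verify the header fits before dereferencing it. A self-contained sketch of that loop with an invented 2-byte {type, length} header format (not the CPC wire format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MSG_HDR_LEN 2 /* hypothetical: one type byte + one length byte */

/* Count well-formed messages in buf, never reading past actual_length.
 * A truncated header or payload stops the walk instead of overreading. */
static int count_msgs(const uint8_t *buf, size_t actual_length)
{
	size_t start = 0;
	int count = 0;

	while (start + MSG_HDR_LEN <= actual_length) {
		uint8_t len = buf[start + 1];

		if (start + MSG_HDR_LEN + len > actual_length)
			break;  /* truncated payload: format error */
		start += MSG_HDR_LEN + len; /* advances even if len == 0 */
		count++;
	}
	return count;
}
```

The same bounds discipline shows up in the ucan hunk further down, which additionally rejects zero-length messages the device should never send.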


@ -272,6 +272,9 @@ struct esd_usb {
struct usb_anchor rx_submitted;
unsigned int rx_pipe;
unsigned int tx_pipe;
int net_count;
u32 version;
int rxinitdone;
@ -537,7 +540,7 @@ static void esd_usb_read_bulk_callback(struct urb *urb)
}
resubmit_urb:
usb_fill_bulk_urb(urb, dev->udev, usb_rcvbulkpipe(dev->udev, 1),
usb_fill_bulk_urb(urb, dev->udev, dev->rx_pipe,
urb->transfer_buffer, ESD_USB_RX_BUFFER_SIZE,
esd_usb_read_bulk_callback, dev);
@ -626,9 +629,7 @@ static int esd_usb_send_msg(struct esd_usb *dev, union esd_usb_msg *msg)
{
int actual_length;
return usb_bulk_msg(dev->udev,
usb_sndbulkpipe(dev->udev, 2),
msg,
return usb_bulk_msg(dev->udev, dev->tx_pipe, msg,
msg->hdr.len * sizeof(u32), /* convert to # of bytes */
&actual_length,
1000);
@ -639,12 +640,8 @@ static int esd_usb_wait_msg(struct esd_usb *dev,
{
int actual_length;
return usb_bulk_msg(dev->udev,
usb_rcvbulkpipe(dev->udev, 1),
msg,
sizeof(*msg),
&actual_length,
1000);
return usb_bulk_msg(dev->udev, dev->rx_pipe, msg,
sizeof(*msg), &actual_length, 1000);
}
static int esd_usb_setup_rx_urbs(struct esd_usb *dev)
@ -677,8 +674,7 @@ static int esd_usb_setup_rx_urbs(struct esd_usb *dev)
urb->transfer_dma = buf_dma;
usb_fill_bulk_urb(urb, dev->udev,
usb_rcvbulkpipe(dev->udev, 1),
usb_fill_bulk_urb(urb, dev->udev, dev->rx_pipe,
buf, ESD_USB_RX_BUFFER_SIZE,
esd_usb_read_bulk_callback, dev);
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
@ -903,7 +899,7 @@ static netdev_tx_t esd_usb_start_xmit(struct sk_buff *skb,
/* hnd must not be 0 - MSB is stripped in txdone handling */
msg->tx.hnd = BIT(31) | i; /* returned in TX done message */
usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 2), buf,
usb_fill_bulk_urb(urb, dev->udev, dev->tx_pipe, buf,
msg->hdr.len * sizeof(u32), /* convert to # of bytes */
esd_usb_write_bulk_callback, context);
@ -1298,10 +1294,16 @@ done:
static int esd_usb_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
struct usb_endpoint_descriptor *ep_in, *ep_out;
struct esd_usb *dev;
union esd_usb_msg *msg;
int i, err;
err = usb_find_common_endpoints(intf->cur_altsetting, &ep_in, &ep_out,
NULL, NULL);
if (err)
return err;
dev = kzalloc_obj(*dev);
if (!dev) {
err = -ENOMEM;
@ -1309,6 +1311,8 @@ static int esd_usb_probe(struct usb_interface *intf,
}
dev->udev = interface_to_usbdev(intf);
dev->rx_pipe = usb_rcvbulkpipe(dev->udev, ep_in->bEndpointAddress);
dev->tx_pipe = usb_sndbulkpipe(dev->udev, ep_out->bEndpointAddress);
init_usb_anchor(&dev->rx_submitted);


@ -1461,12 +1461,18 @@ static void es58x_read_bulk_callback(struct urb *urb)
}
resubmit_urb:
usb_anchor_urb(urb, &es58x_dev->rx_urbs);
ret = usb_submit_urb(urb, GFP_ATOMIC);
if (!ret)
return;
usb_unanchor_urb(urb);
if (ret == -ENODEV) {
for (i = 0; i < es58x_dev->num_can_ch; i++)
if (es58x_dev->netdev[i])
netif_device_detach(es58x_dev->netdev[i]);
} else if (ret)
} else
dev_err_ratelimited(dev,
"Failed resubmitting read bulk urb: %pe\n",
ERR_PTR(ret));


@ -413,6 +413,7 @@ static void f81604_read_bulk_callback(struct urb *urb)
{
struct f81604_can_frame *frame = urb->transfer_buffer;
struct net_device *netdev = urb->context;
struct f81604_port_priv *priv = netdev_priv(netdev);
int ret;
if (!netif_device_present(netdev))
@ -445,10 +446,15 @@ static void f81604_read_bulk_callback(struct urb *urb)
f81604_process_rx_packet(netdev, frame);
resubmit_urb:
usb_anchor_urb(urb, &priv->urbs_anchor);
ret = usb_submit_urb(urb, GFP_ATOMIC);
if (!ret)
return;
usb_unanchor_urb(urb);
if (ret == -ENODEV)
netif_device_detach(netdev);
else if (ret)
else
netdev_err(netdev,
"%s: failed to resubmit read bulk urb: %pe\n",
__func__, ERR_PTR(ret));
@ -620,6 +626,12 @@ static void f81604_read_int_callback(struct urb *urb)
netdev_info(netdev, "%s: Int URB aborted: %pe\n", __func__,
ERR_PTR(urb->status));
if (urb->actual_length < sizeof(*data)) {
netdev_warn(netdev, "%s: short int URB: %u < %zu\n",
__func__, urb->actual_length, sizeof(*data));
goto resubmit_urb;
}
switch (urb->status) {
case 0: /* success */
break;
@ -646,10 +658,15 @@ static void f81604_read_int_callback(struct urb *urb)
f81604_handle_tx(priv, data);
resubmit_urb:
usb_anchor_urb(urb, &priv->urbs_anchor);
ret = usb_submit_urb(urb, GFP_ATOMIC);
if (!ret)
return;
usb_unanchor_urb(urb);
if (ret == -ENODEV)
netif_device_detach(netdev);
else if (ret)
else
netdev_err(netdev, "%s: failed to resubmit int urb: %pe\n",
__func__, ERR_PTR(ret));
}
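The es58x and f81604 hunks converge on the same resubmit pattern: anchor the URB before submitting and unanchor only if submission fails, so teardown code that kills anchored URBs always sees an in-flight one. A sketch with the anchor modeled as a counter and submit() as a stub for usb_submit_urb() — purely illustrative:

```c
#include <assert.h>

struct anchor { int count; };

static void anchor_urb(struct anchor *a)   { a->count++; }
static void unanchor_urb(struct anchor *a) { a->count--; }

/* Anchor first so the URB is visible to teardown before it is in
 * flight; back out only on submission failure. */
static int resubmit(struct anchor *a, int (*submit)(void))
{
	int ret;

	anchor_urb(a);
	ret = submit();
	if (ret)
		unanchor_urb(a); /* failed: nothing in flight to track */
	return ret;
}

static int submit_ok(void)   { return 0; }
static int submit_fail(void) { return -19; /* -ENODEV */ }

/* How many URBs remain anchored after one resubmit attempt. */
static int anchored_after(int (*submit)(void))
{
	struct anchor a = { 0 };

	resubmit(&a, submit);
	return a.count;
}
```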
@ -874,9 +891,27 @@ static void f81604_write_bulk_callback(struct urb *urb)
if (!netif_device_present(netdev))
return;
if (urb->status)
netdev_info(netdev, "%s: Tx URB error: %pe\n", __func__,
ERR_PTR(urb->status));
if (!urb->status)
return;
switch (urb->status) {
case -ENOENT:
case -ECONNRESET:
case -ESHUTDOWN:
return;
default:
break;
}
if (net_ratelimit())
netdev_err(netdev, "%s: Tx URB error: %pe\n", __func__,
ERR_PTR(urb->status));
can_free_echo_skb(netdev, 0, NULL);
netdev->stats.tx_dropped++;
netdev->stats.tx_errors++;
netif_wake_queue(netdev);
}
static void f81604_clear_reg_work(struct work_struct *work)


@ -772,9 +772,8 @@ device_detach:
}
}
static int gs_usb_set_bittiming(struct net_device *netdev)
static int gs_usb_set_bittiming(struct gs_can *dev)
{
struct gs_can *dev = netdev_priv(netdev);
struct can_bittiming *bt = &dev->can.bittiming;
struct gs_device_bittiming dbt = {
.prop_seg = cpu_to_le32(bt->prop_seg),
@ -791,9 +790,8 @@ static int gs_usb_set_bittiming(struct net_device *netdev)
GFP_KERNEL);
}
static int gs_usb_set_data_bittiming(struct net_device *netdev)
static int gs_usb_set_data_bittiming(struct gs_can *dev)
{
struct gs_can *dev = netdev_priv(netdev);
struct can_bittiming *bt = &dev->can.fd.data_bittiming;
struct gs_device_bittiming dbt = {
.prop_seg = cpu_to_le32(bt->prop_seg),
@ -1057,6 +1055,20 @@ static int gs_can_open(struct net_device *netdev)
if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
flags |= GS_CAN_MODE_HW_TIMESTAMP;
rc = gs_usb_set_bittiming(dev);
if (rc) {
netdev_err(netdev, "failed to set bittiming: %pe\n", ERR_PTR(rc));
goto out_usb_kill_anchored_urbs;
}
if (ctrlmode & CAN_CTRLMODE_FD) {
rc = gs_usb_set_data_bittiming(dev);
if (rc) {
netdev_err(netdev, "failed to set data bittiming: %pe\n", ERR_PTR(rc));
goto out_usb_kill_anchored_urbs;
}
}
/* finally start device */
dev->can.state = CAN_STATE_ERROR_ACTIVE;
dm.flags = cpu_to_le32(flags);
@ -1370,7 +1382,6 @@ static struct gs_can *gs_make_candev(unsigned int channel,
dev->can.state = CAN_STATE_STOPPED;
dev->can.clock.freq = le32_to_cpu(bt_const.fclk_can);
dev->can.bittiming_const = &dev->bt_const;
dev->can.do_set_bittiming = gs_usb_set_bittiming;
dev->can.ctrlmode_supported = CAN_CTRLMODE_CC_LEN8_DLC;
@ -1394,7 +1405,6 @@ static struct gs_can *gs_make_candev(unsigned int channel,
* GS_CAN_FEATURE_BT_CONST_EXT is set.
*/
dev->can.fd.data_bittiming_const = &dev->bt_const;
dev->can.fd.do_set_data_bittiming = gs_usb_set_data_bittiming;
}
if (feature & GS_CAN_FEATURE_TERMINATION) {


@ -748,7 +748,7 @@ static void ucan_read_bulk_callback(struct urb *urb)
len = le16_to_cpu(m->len);
/* check sanity (length of content) */
if (urb->actual_length - pos < len) {
if ((len == 0) || (urb->actual_length - pos < len)) {
netdev_warn(up->netdev,
"invalid message (short; no data; l:%d)\n",
urb->actual_length);


@ -769,7 +769,7 @@ static int rtl8365mb_phy_ocp_write(struct realtek_priv *priv, int phy,
out:
rtl83xx_unlock(priv);
return 0;
return ret;
}
static int rtl8365mb_phy_read(struct realtek_priv *priv, int phy, int regnum)


@ -431,7 +431,7 @@
#define MAC_SSIR_SSINC_INDEX 16
#define MAC_SSIR_SSINC_WIDTH 8
#define MAC_TCR_SS_INDEX 29
#define MAC_TCR_SS_WIDTH 2
#define MAC_TCR_SS_WIDTH 3
#define MAC_TCR_TE_INDEX 0
#define MAC_TCR_TE_WIDTH 1
#define MAC_TCR_VNE_INDEX 24
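The one-character change above matters because xgbe-style register helpers mask a field as ((1 << width) - 1) << index, so declaring the 3-bit SS field as 2 bits wide silently truncates the value written. A sketch with simplified stand-ins for those helpers (the INDEX/WIDTH values mirror the header; the functions are not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define MAC_TCR_SS_INDEX 29
#define MAC_TCR_SS_WIDTH 3

/* Write val into the [idx, idx+width) field of reg. */
static uint32_t set_bits(uint32_t reg, unsigned int idx, unsigned int width,
			 uint32_t val)
{
	uint32_t mask = ((1u << width) - 1) << idx;

	return (reg & ~mask) | ((val << idx) & mask);
}

/* Read the [idx, idx+width) field of reg. */
static uint32_t get_bits(uint32_t reg, unsigned int idx, unsigned int width)
{
	return (reg >> idx) & ((1u << width) - 1);
}
```

With the old width of 2, writing the 3-bit value 5 (0b101) keeps only the low two field bits that survive the mask, so a round-trip no longer returns 5.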


@ -1120,7 +1120,6 @@ int xgbe_powerdown(struct net_device *netdev, unsigned int caller)
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
struct xgbe_hw_if *hw_if = &pdata->hw_if;
unsigned long flags;
DBGPR("-->xgbe_powerdown\n");
@ -1131,8 +1130,6 @@ int xgbe_powerdown(struct net_device *netdev, unsigned int caller)
return -EINVAL;
}
spin_lock_irqsave(&pdata->lock, flags);
if (caller == XGMAC_DRIVER_CONTEXT)
netif_device_detach(netdev);
@ -1148,8 +1145,6 @@ int xgbe_powerdown(struct net_device *netdev, unsigned int caller)
pdata->power_down = 1;
spin_unlock_irqrestore(&pdata->lock, flags);
DBGPR("<--xgbe_powerdown\n");
return 0;
@ -1159,7 +1154,6 @@ int xgbe_powerup(struct net_device *netdev, unsigned int caller)
{
struct xgbe_prv_data *pdata = netdev_priv(netdev);
struct xgbe_hw_if *hw_if = &pdata->hw_if;
unsigned long flags;
DBGPR("-->xgbe_powerup\n");
@ -1170,8 +1164,6 @@ int xgbe_powerup(struct net_device *netdev, unsigned int caller)
return -EINVAL;
}
spin_lock_irqsave(&pdata->lock, flags);
pdata->power_down = 0;
xgbe_napi_enable(pdata, 0);
@ -1186,8 +1178,6 @@ int xgbe_powerup(struct net_device *netdev, unsigned int caller)
xgbe_start_timers(pdata);
spin_unlock_irqrestore(&pdata->lock, flags);
DBGPR("<--xgbe_powerup\n");
return 0;


@ -76,7 +76,6 @@ struct xgbe_prv_data *xgbe_alloc_pdata(struct device *dev)
pdata->netdev = netdev;
pdata->dev = dev;
spin_lock_init(&pdata->lock);
spin_lock_init(&pdata->xpcs_lock);
mutex_init(&pdata->rss_mutex);
spin_lock_init(&pdata->tstamp_lock);


@ -1004,9 +1004,6 @@ struct xgbe_prv_data {
unsigned int pp3;
unsigned int pp4;
/* Overall device lock */
spinlock_t lock;
/* XPCS indirect addressing lock */
spinlock_t xpcs_lock;
unsigned int xpcs_window_def_reg;


@ -1533,7 +1533,7 @@ static irqreturn_t dpaa2_switch_irq0_handler_thread(int irq_num, void *arg)
if_id = (status & 0xFFFF0000) >> 16;
if (if_id >= ethsw->sw_attr.num_ifs) {
dev_err(dev, "Invalid if_id %d in IRQ status\n", if_id);
goto out;
goto out_clear;
}
port_priv = ethsw->ports[if_id];
@ -1553,6 +1553,7 @@ static irqreturn_t dpaa2_switch_irq0_handler_thread(int irq_num, void *arg)
dpaa2_switch_port_connect_mac(port_priv);
}
out_clear:
err = dpsw_clear_irq_status(ethsw->mc_io, 0, ethsw->dpsw_handle,
DPSW_IRQ_INDEX_IF, status);
if (err)

View file

@ -3467,7 +3467,7 @@ static int enetc_int_vector_init(struct enetc_ndev_priv *priv, int i,
priv->rx_ring[i] = bdr;
err = __xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0,
ENETC_RXB_DMA_SIZE_XDP);
ENETC_RXB_TRUESIZE);
if (err)
goto free_vector;


@ -33,6 +33,7 @@
/* Extended Device Control */
#define E1000_CTRL_EXT_LPCD 0x00000004 /* LCD Power Cycle Done */
#define E1000_CTRL_EXT_DPG_EN 0x00000008 /* Dynamic Power Gating Enable */
#define E1000_CTRL_EXT_SDP3_DATA 0x00000080 /* Value of SW Definable Pin 3 */
#define E1000_CTRL_EXT_FORCE_SMBUS 0x00000800 /* Force SMBus mode */
#define E1000_CTRL_EXT_EE_RST 0x00002000 /* Reinitialize from EEPROM */


@ -117,7 +117,8 @@ enum e1000_boards {
board_pch_cnp,
board_pch_tgp,
board_pch_adp,
board_pch_mtp
board_pch_mtp,
board_pch_ptp
};
struct e1000_ps_page {
@ -527,6 +528,7 @@ extern const struct e1000_info e1000_pch_cnp_info;
extern const struct e1000_info e1000_pch_tgp_info;
extern const struct e1000_info e1000_pch_adp_info;
extern const struct e1000_info e1000_pch_mtp_info;
extern const struct e1000_info e1000_pch_ptp_info;
extern const struct e1000_info e1000_es2_info;
void e1000e_ptp_init(struct e1000_adapter *adapter);


@ -118,8 +118,6 @@ struct e1000_hw;
#define E1000_DEV_ID_PCH_ARL_I219_V24 0x57A1
#define E1000_DEV_ID_PCH_PTP_I219_LM25 0x57B3
#define E1000_DEV_ID_PCH_PTP_I219_V25 0x57B4
#define E1000_DEV_ID_PCH_PTP_I219_LM26 0x57B5
#define E1000_DEV_ID_PCH_PTP_I219_V26 0x57B6
#define E1000_DEV_ID_PCH_PTP_I219_LM27 0x57B7
#define E1000_DEV_ID_PCH_PTP_I219_V27 0x57B8
#define E1000_DEV_ID_PCH_NVL_I219_LM29 0x57B9


@ -528,7 +528,7 @@ static s32 e1000_init_phy_params_pchlan(struct e1000_hw *hw)
phy->id = e1000_phy_unknown;
if (hw->mac.type == e1000_pch_mtp) {
if (hw->mac.type == e1000_pch_mtp || hw->mac.type == e1000_pch_ptp) {
phy->retry_count = 2;
e1000e_enable_phy_retry(hw);
}
@ -4932,6 +4932,15 @@ static s32 e1000_reset_hw_ich8lan(struct e1000_hw *hw)
reg |= E1000_KABGTXD_BGSQLBIAS;
ew32(KABGTXD, reg);
/* The hardware reset value of the DPG_EN bit is 1.
* Clear DPG_EN to prevent unexpected autonomous power gating.
*/
if (hw->mac.type >= e1000_pch_ptp) {
reg = er32(CTRL_EXT);
reg &= ~E1000_CTRL_EXT_DPG_EN;
ew32(CTRL_EXT, reg);
}
return 0;
}
@ -6208,3 +6217,23 @@ const struct e1000_info e1000_pch_mtp_info = {
.phy_ops = &ich8_phy_ops,
.nvm_ops = &spt_nvm_ops,
};
const struct e1000_info e1000_pch_ptp_info = {
.mac = e1000_pch_ptp,
.flags = FLAG_IS_ICH
| FLAG_HAS_WOL
| FLAG_HAS_HW_TIMESTAMP
| FLAG_HAS_CTRLEXT_ON_LOAD
| FLAG_HAS_AMT
| FLAG_HAS_FLASH
| FLAG_HAS_JUMBO_FRAMES
| FLAG_APME_IN_WUC,
.flags2 = FLAG2_HAS_PHY_STATS
| FLAG2_HAS_EEE,
.pba = 26,
.max_hw_frame_size = 9022,
.get_variants = e1000_get_variants_ich8lan,
.mac_ops = &ich8_mac_ops,
.phy_ops = &ich8_phy_ops,
.nvm_ops = &spt_nvm_ops,
};


@ -55,6 +55,7 @@ static const struct e1000_info *e1000_info_tbl[] = {
[board_pch_tgp] = &e1000_pch_tgp_info,
[board_pch_adp] = &e1000_pch_adp_info,
[board_pch_mtp] = &e1000_pch_mtp_info,
[board_pch_ptp] = &e1000_pch_ptp_info,
};
struct e1000_reg_info {
@ -7922,14 +7923,12 @@ static const struct pci_device_id e1000_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ARL_I219_LM24), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ARL_I219_V24), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM25), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V25), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM26), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V26), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM27), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V27), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_LM29), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_V29), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM25), board_pch_ptp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V25), board_pch_ptp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_LM27), board_pch_ptp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_PTP_I219_V27), board_pch_ptp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_LM29), board_pch_ptp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_NVL_I219_V29), board_pch_ptp },
{ 0, 0, 0, 0, 0, 0, 0 } /* terminate list */
};


@ -3569,6 +3569,7 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
u16 pf_q = vsi->base_queue + ring->queue_index;
struct i40e_hw *hw = &vsi->back->hw;
struct i40e_hmc_obj_rxq rx_ctx;
u32 xdp_frame_sz;
int err = 0;
bool ok;
@ -3578,49 +3579,47 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
memset(&rx_ctx, 0, sizeof(rx_ctx));
ring->rx_buf_len = vsi->rx_buf_len;
xdp_frame_sz = i40e_rx_pg_size(ring) / 2;
/* XDP RX-queue info only needed for RX rings exposed to XDP */
if (ring->vsi->type != I40E_VSI_MAIN)
goto skip;
if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->queue_index,
ring->q_vector->napi.napi_id,
ring->rx_buf_len);
if (err)
return err;
}
ring->xsk_pool = i40e_xsk_pool(ring);
if (ring->xsk_pool) {
xdp_rxq_info_unreg(&ring->xdp_rxq);
xdp_frame_sz = xsk_pool_get_rx_frag_step(ring->xsk_pool);
ring->rx_buf_len = xsk_pool_get_rx_frame_size(ring->xsk_pool);
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->queue_index,
ring->q_vector->napi.napi_id,
ring->rx_buf_len);
xdp_frame_sz);
if (err)
return err;
err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
MEM_TYPE_XSK_BUFF_POOL,
NULL);
if (err)
return err;
goto unreg_xdp;
dev_info(&vsi->back->pdev->dev,
"Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
ring->queue_index);
} else {
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->queue_index,
ring->q_vector->napi.napi_id,
xdp_frame_sz);
if (err)
return err;
err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
MEM_TYPE_PAGE_SHARED,
NULL);
if (err)
return err;
goto unreg_xdp;
}
skip:
xdp_init_buff(&ring->xdp, i40e_rx_pg_size(ring) / 2, &ring->xdp_rxq);
xdp_init_buff(&ring->xdp, xdp_frame_sz, &ring->xdp_rxq);
rx_ctx.dbuff = DIV_ROUND_UP(ring->rx_buf_len,
BIT_ULL(I40E_RXQ_CTX_DBUFF_SHIFT));
@ -3654,7 +3653,8 @@ skip:
dev_info(&vsi->back->pdev->dev,
"Failed to clear LAN Rx queue context on Rx ring %d (pf_q %d), error: %d\n",
ring->queue_index, pf_q, err);
return -ENOMEM;
err = -ENOMEM;
goto unreg_xdp;
}
/* set the context in the HMC */
@ -3663,7 +3663,8 @@ skip:
dev_info(&vsi->back->pdev->dev,
"Failed to set LAN Rx queue context on Rx ring %d (pf_q %d), error: %d\n",
ring->queue_index, pf_q, err);
return -ENOMEM;
err = -ENOMEM;
goto unreg_xdp;
}
/* configure Rx buffer alignment */
@ -3671,7 +3672,8 @@ skip:
if (I40E_2K_TOO_SMALL_WITH_PADDING) {
dev_info(&vsi->back->pdev->dev,
"2k Rx buffer is too small to fit standard MTU and skb_shared_info\n");
return -EOPNOTSUPP;
err = -EOPNOTSUPP;
goto unreg_xdp;
}
clear_ring_build_skb_enabled(ring);
} else {
@ -3701,6 +3703,11 @@ skip:
}
return 0;
unreg_xdp:
if (ring->vsi->type == I40E_VSI_MAIN)
xdp_rxq_info_unreg(&ring->xdp_rxq);
return err;
}
/**


@ -88,7 +88,7 @@ TRACE_EVENT(i40e_napi_poll,
__entry->rx_clean_complete = rx_clean_complete;
__entry->tx_clean_complete = tx_clean_complete;
__entry->irq_num = q->irq_num;
__entry->curr_cpu = get_cpu();
__entry->curr_cpu = smp_processor_id();
__assign_str(qname);
__assign_str(dev_name);
__assign_bitmask(irq_affinity, cpumask_bits(&q->affinity_mask),

View file

@ -1470,6 +1470,9 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
if (!rx_ring->rx_bi)
return;
if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
if (rx_ring->xsk_pool) {
i40e_xsk_clean_rx_ring(rx_ring);
goto skip_free;
@ -1527,8 +1530,6 @@ skip_free:
void i40e_free_rx_resources(struct i40e_ring *rx_ring)
{
i40e_clean_rx_ring(rx_ring);
if (rx_ring->vsi->type == I40E_VSI_MAIN)
xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
rx_ring->xdp_prog = NULL;
kfree(rx_ring->rx_bi);
rx_ring->rx_bi = NULL;


@ -2793,7 +2793,22 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
netdev->watchdog_timeo = 5 * HZ;
netdev->min_mtu = ETH_MIN_MTU;
netdev->max_mtu = LIBIE_MAX_MTU;
/* PF/VF API: vf_res->max_mtu is max frame size (not MTU).
* Convert to MTU.
*/
if (!adapter->vf_res->max_mtu) {
netdev->max_mtu = LIBIE_MAX_MTU;
} else if (adapter->vf_res->max_mtu < LIBETH_RX_LL_LEN + ETH_MIN_MTU ||
adapter->vf_res->max_mtu >
LIBETH_RX_LL_LEN + LIBIE_MAX_MTU) {
netdev_warn_once(adapter->netdev,
"invalid max frame size %d from PF, using default MTU %d",
adapter->vf_res->max_mtu, LIBIE_MAX_MTU);
netdev->max_mtu = LIBIE_MAX_MTU;
} else {
netdev->max_mtu = adapter->vf_res->max_mtu - LIBETH_RX_LL_LEN;
}
if (!is_valid_ether_addr(adapter->hw.mac.addr)) {
dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n",
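A model of the iavf conversion above: the PF reports a maximum *frame* size while the netdev wants a maximum *MTU*, so the link-layer overhead must be subtracted and out-of-range values fall back to the driver default. The 18-byte overhead (14-byte Ethernet header + 4-byte FCS) and the 9702 default are assumptions for illustration, not the libeth/libie constants:

```c
#include <assert.h>

#define LL_LEN 18           /* assumed: Ethernet header + FCS */
#define MIN_MTU 68          /* ETH_MIN_MTU */
#define MAX_MTU_DEFAULT 9702 /* assumed driver default */

/* Convert a PF-reported max frame size to a netdev max MTU,
 * distrusting zero or out-of-range reports. */
static int frame_size_to_max_mtu(int max_frame)
{
	if (max_frame == 0)
		return MAX_MTU_DEFAULT; /* PF did not report one */
	if (max_frame < LL_LEN + MIN_MTU ||
	    max_frame > LL_LEN + MAX_MTU_DEFAULT)
		return MAX_MTU_DEFAULT; /* invalid: warn and use default */
	return max_frame - LL_LEN;
}
```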


@ -987,6 +987,7 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset);
void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
int ice_plug_aux_dev(struct ice_pf *pf);
void ice_unplug_aux_dev(struct ice_pf *pf);
void ice_rdma_finalize_setup(struct ice_pf *pf);
int ice_init_rdma(struct ice_pf *pf);
void ice_deinit_rdma(struct ice_pf *pf);
bool ice_is_wol_supported(struct ice_hw *hw);


@ -124,6 +124,8 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
if (vsi->type == ICE_VSI_VF) {
ice_calc_vf_reg_idx(vsi->vf, q_vector);
goto out;
} else if (vsi->type == ICE_VSI_LB) {
goto skip_alloc;
} else if (vsi->type == ICE_VSI_CTRL && vsi->vf) {
struct ice_vsi *ctrl_vsi = ice_get_vf_ctrl_vsi(pf, vsi);
@@ -659,33 +661,22 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
{
struct device *dev = ice_pf_to_dev(ring->vsi->back);
u32 num_bufs = ICE_DESC_UNUSED(ring);
u32 rx_buf_len;
int err;
if (ring->vsi->type == ICE_VSI_PF || ring->vsi->type == ICE_VSI_SF) {
if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index,
ring->q_vector->napi.napi_id,
ring->rx_buf_len);
if (err)
return err;
}
if (ring->vsi->type == ICE_VSI_PF || ring->vsi->type == ICE_VSI_SF ||
ring->vsi->type == ICE_VSI_LB) {
ice_rx_xsk_pool(ring);
err = ice_realloc_rx_xdp_bufs(ring, ring->xsk_pool);
if (err)
return err;
if (ring->xsk_pool) {
xdp_rxq_info_unreg(&ring->xdp_rxq);
rx_buf_len =
xsk_pool_get_rx_frame_size(ring->xsk_pool);
u32 frag_size =
xsk_pool_get_rx_frag_step(ring->xsk_pool);
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index,
ring->q_vector->napi.napi_id,
rx_buf_len);
frag_size);
if (err)
return err;
err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
@@ -702,14 +693,13 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
if (err)
return err;
if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index,
ring->q_vector->napi.napi_id,
ring->rx_buf_len);
if (err)
goto err_destroy_fq;
}
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index,
ring->q_vector->napi.napi_id,
ring->truesize);
if (err)
goto err_destroy_fq;
xdp_rxq_info_attach_page_pool(&ring->xdp_rxq,
ring->pp);
}


@@ -1816,6 +1816,7 @@ static bool ice_should_retry_sq_send_cmd(u16 opcode)
case ice_aqc_opc_lldp_stop:
case ice_aqc_opc_lldp_start:
case ice_aqc_opc_lldp_filter_ctrl:
case ice_aqc_opc_sff_eeprom:
return true;
}
@@ -1841,6 +1842,7 @@ ice_sq_send_cmd_retry(struct ice_hw *hw, struct ice_ctl_q_info *cq,
{
struct libie_aq_desc desc_cpy;
bool is_cmd_for_retry;
u8 *buf_cpy = NULL;
u8 idx = 0;
u16 opcode;
int status;
@@ -1850,8 +1852,11 @@ ice_sq_send_cmd_retry(struct ice_hw *hw, struct ice_ctl_q_info *cq,
memset(&desc_cpy, 0, sizeof(desc_cpy));
if (is_cmd_for_retry) {
/* All retryable cmds are direct, without buf. */
WARN_ON(buf);
if (buf) {
buf_cpy = kmemdup(buf, buf_size, GFP_KERNEL);
if (!buf_cpy)
return -ENOMEM;
}
memcpy(&desc_cpy, desc, sizeof(desc_cpy));
}
@@ -1863,12 +1868,14 @@ ice_sq_send_cmd_retry(struct ice_hw *hw, struct ice_ctl_q_info *cq,
hw->adminq.sq_last_status != LIBIE_AQ_RC_EBUSY)
break;
if (buf_cpy)
memcpy(buf, buf_cpy, buf_size);
memcpy(desc, &desc_cpy, sizeof(desc_cpy));
msleep(ICE_SQ_SEND_DELAY_TIME_MS);
} while (++idx < ICE_SQ_SEND_MAX_EXECUTE);
kfree(buf_cpy);
return status;
}
@@ -6391,7 +6398,7 @@ int ice_lldp_fltr_add_remove(struct ice_hw *hw, struct ice_vsi *vsi, bool add)
struct ice_aqc_lldp_filter_ctrl *cmd;
struct libie_aq_desc desc;
if (vsi->type != ICE_VSI_PF || !ice_fw_supports_lldp_fltr_ctrl(hw))
if (!ice_fw_supports_lldp_fltr_ctrl(hw))
return -EOPNOTSUPP;
cmd = libie_aq_raw(&desc);


@@ -1289,6 +1289,10 @@ static u64 ice_loopback_test(struct net_device *netdev)
test_vsi->netdev = netdev;
tx_ring = test_vsi->tx_rings[0];
rx_ring = test_vsi->rx_rings[0];
/* Dummy q_vector and napi. Fill the minimum required for
* ice_rxq_pp_create().
*/
rx_ring->q_vector->napi.dev = netdev;
if (ice_lbtest_prepare_rings(test_vsi)) {
ret = 2;
@@ -3328,7 +3332,7 @@ process_rx:
rx_rings = kzalloc_objs(*rx_rings, vsi->num_rxq);
if (!rx_rings) {
err = -ENOMEM;
goto done;
goto free_xdp;
}
ice_for_each_rxq(vsi, i) {
@@ -3338,6 +3342,7 @@ process_rx:
rx_rings[i].cached_phctime = pf->ptp.cached_phc_time;
rx_rings[i].desc = NULL;
rx_rings[i].xdp_buf = NULL;
rx_rings[i].xdp_rxq = (struct xdp_rxq_info){ };
/* this is to allow wr32 to have something to write to
* during early allocation of Rx buffers
@@ -3355,7 +3360,7 @@ rx_unwind:
}
kfree(rx_rings);
err = -ENOMEM;
goto free_tx;
goto free_xdp;
}
}
@@ -3407,6 +3412,13 @@ process_link:
}
goto done;
free_xdp:
if (xdp_rings) {
ice_for_each_xdp_txq(vsi, i)
ice_free_tx_ring(&xdp_rings[i]);
kfree(xdp_rings);
}
free_tx:
/* error cleanup if the Rx allocations failed after getting Tx */
if (tx_rings) {
@@ -4505,7 +4517,7 @@ ice_get_module_eeprom(struct net_device *netdev,
u8 addr = ICE_I2C_EEPROM_DEV_ADDR;
struct ice_hw *hw = &pf->hw;
bool is_sfp = false;
unsigned int i, j;
unsigned int i;
u16 offset = 0;
u8 page = 0;
int status;
@@ -4547,26 +4559,19 @@ ice_get_module_eeprom(struct net_device *netdev,
if (page == 0 || !(data[0x2] & 0x4)) {
u32 copy_len;
/* If i2c bus is busy due to slow page change or
* link management access, call can fail. This is normal.
* So we retry this a few times.
*/
for (j = 0; j < 4; j++) {
status = ice_aq_sff_eeprom(hw, 0, addr, offset, page,
!is_sfp, value,
SFF_READ_BLOCK_SIZE,
0, NULL);
netdev_dbg(netdev, "SFF %02X %02X %02X %X = %02X%02X%02X%02X.%02X%02X%02X%02X (%X)\n",
addr, offset, page, is_sfp,
value[0], value[1], value[2], value[3],
value[4], value[5], value[6], value[7],
status);
if (status) {
usleep_range(1500, 2500);
memset(value, 0, SFF_READ_BLOCK_SIZE);
continue;
}
break;
status = ice_aq_sff_eeprom(hw, 0, addr, offset, page,
!is_sfp, value,
SFF_READ_BLOCK_SIZE,
0, NULL);
netdev_dbg(netdev, "SFF %02X %02X %02X %X = %02X%02X%02X%02X.%02X%02X%02X%02X (%pe)\n",
addr, offset, page, is_sfp,
value[0], value[1], value[2], value[3],
value[4], value[5], value[6], value[7],
ERR_PTR(status));
if (status) {
netdev_err(netdev, "%s: error reading module EEPROM: status %pe\n",
__func__, ERR_PTR(status));
return status;
}
/* Make sure we have enough room for the new block */


@@ -360,6 +360,39 @@ void ice_unplug_aux_dev(struct ice_pf *pf)
auxiliary_device_uninit(adev);
}
/**
* ice_rdma_finalize_setup - Complete RDMA setup after VSI is ready
* @pf: ptr to ice_pf
*
* Sets VSI-dependent information and plugs aux device.
* Must be called after ice_init_rdma(), ice_vsi_rebuild(), and
* ice_dcb_rebuild() complete.
*/
void ice_rdma_finalize_setup(struct ice_pf *pf)
{
struct device *dev = ice_pf_to_dev(pf);
struct iidc_rdma_priv_dev_info *privd;
int ret;
if (!ice_is_rdma_ena(pf) || !pf->cdev_info)
return;
privd = pf->cdev_info->iidc_priv;
if (!privd || !pf->vsi || !pf->vsi[0] || !pf->vsi[0]->netdev)
return;
/* Assign VSI info now that VSI is valid */
privd->netdev = pf->vsi[0]->netdev;
privd->vport_id = pf->vsi[0]->vsi_num;
/* Update QoS info after DCB has been rebuilt */
ice_setup_dcb_qos_info(pf, &privd->qos_info);
ret = ice_plug_aux_dev(pf);
if (ret)
dev_warn(dev, "Failed to plug RDMA aux device: %d\n", ret);
}
/**
* ice_init_rdma - initializes PF for RDMA use
* @pf: ptr to ice_pf
@@ -398,22 +431,14 @@ int ice_init_rdma(struct ice_pf *pf)
}
cdev->iidc_priv = privd;
privd->netdev = pf->vsi[0]->netdev;
privd->hw_addr = (u8 __iomem *)pf->hw.hw_addr;
cdev->pdev = pf->pdev;
privd->vport_id = pf->vsi[0]->vsi_num;
pf->cdev_info->rdma_protocol |= IIDC_RDMA_PROTOCOL_ROCEV2;
ice_setup_dcb_qos_info(pf, &privd->qos_info);
ret = ice_plug_aux_dev(pf);
if (ret)
goto err_plug_aux_dev;
return 0;
err_plug_aux_dev:
pf->cdev_info->adev = NULL;
xa_erase(&ice_aux_id, pf->aux_idx);
err_alloc_xa:
kfree(privd);
err_privd_alloc:
@@ -432,7 +457,6 @@ void ice_deinit_rdma(struct ice_pf *pf)
if (!ice_is_rdma_ena(pf))
return;
ice_unplug_aux_dev(pf);
xa_erase(&ice_aux_id, pf->aux_idx);
kfree(pf->cdev_info->iidc_priv);
kfree(pf->cdev_info);


@@ -107,10 +107,6 @@ static int ice_vsi_alloc_arrays(struct ice_vsi *vsi)
if (!vsi->rxq_map)
goto err_rxq_map;
/* There is no need to allocate q_vectors for a loopback VSI. */
if (vsi->type == ICE_VSI_LB)
return 0;
/* allocate memory for q_vector pointers */
vsi->q_vectors = devm_kcalloc(dev, vsi->num_q_vectors,
sizeof(*vsi->q_vectors), GFP_KERNEL);
@@ -241,6 +237,8 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi)
case ICE_VSI_LB:
vsi->alloc_txq = 1;
vsi->alloc_rxq = 1;
/* A dummy q_vector, no actual IRQ. */
vsi->num_q_vectors = 1;
break;
default:
dev_warn(ice_pf_to_dev(pf), "Unknown VSI type %d\n", vsi_type);
@@ -2426,14 +2424,21 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
}
break;
case ICE_VSI_LB:
ret = ice_vsi_alloc_rings(vsi);
ret = ice_vsi_alloc_q_vectors(vsi);
if (ret)
goto unroll_vsi_init;
ret = ice_vsi_alloc_rings(vsi);
if (ret)
goto unroll_alloc_q_vector;
ret = ice_vsi_alloc_ring_stats(vsi);
if (ret)
goto unroll_vector_base;
/* Simply map the dummy q_vector to the only rx_ring */
vsi->rx_rings[0]->q_vector = vsi->q_vectors[0];
break;
default:
/* clean up the resources and exit */


@@ -5138,6 +5138,9 @@ int ice_load(struct ice_pf *pf)
if (err)
goto err_init_rdma;
/* Finalize RDMA: VSI already created, assign info and plug device */
ice_rdma_finalize_setup(pf);
ice_service_task_restart(pf);
clear_bit(ICE_DOWN, pf->state);
@@ -5169,6 +5172,7 @@ void ice_unload(struct ice_pf *pf)
devl_assert_locked(priv_to_devlink(pf));
ice_unplug_aux_dev(pf);
ice_deinit_rdma(pf);
ice_deinit_features(pf);
ice_tc_indir_block_unregister(vsi);
@@ -5595,6 +5599,7 @@ static int ice_suspend(struct device *dev)
*/
disabled = ice_service_task_stop(pf);
ice_unplug_aux_dev(pf);
ice_deinit_rdma(pf);
/* Already suspended?, then there is nothing to do */
@@ -7859,7 +7864,7 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
ice_health_clear(pf);
ice_plug_aux_dev(pf);
ice_rdma_finalize_setup(pf);
if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG))
ice_lag_rebuild(pf);


@@ -560,7 +560,9 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
i = 0;
}
if (rx_ring->vsi->type == ICE_VSI_PF &&
if ((rx_ring->vsi->type == ICE_VSI_PF ||
rx_ring->vsi->type == ICE_VSI_SF ||
rx_ring->vsi->type == ICE_VSI_LB) &&
xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) {
xdp_rxq_info_detach_mem_model(&rx_ring->xdp_rxq);
xdp_rxq_info_unreg(&rx_ring->xdp_rxq);


@@ -899,6 +899,9 @@ void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)
u16 ntc = rx_ring->next_to_clean;
u16 ntu = rx_ring->next_to_use;
if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
while (ntc != ntu) {
struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc);


@@ -307,9 +307,6 @@ static int idpf_del_flow_steer(struct net_device *netdev,
vport_config = vport->adapter->vport_config[np->vport_idx];
user_config = &vport_config->user_config;
if (!idpf_sideband_action_ena(vport, fsp))
return -EOPNOTSUPP;
rule = kzalloc_flex(*rule, rule_info, 1);
if (!rule)
return -ENOMEM;


@@ -1318,6 +1318,7 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
free_rss_key:
kfree(rss_data->rss_key);
rss_data->rss_key = NULL;
free_qreg_chunks:
idpf_vport_deinit_queue_reg_chunks(adapter->vport_config[idx]);
free_vector_idxs:


@@ -1314,6 +1314,9 @@ static void idpf_txq_group_rel(struct idpf_q_vec_rsrc *rsrc)
struct idpf_txq_group *txq_grp = &rsrc->txq_grps[i];
for (unsigned int j = 0; j < txq_grp->num_txq; j++) {
if (!txq_grp->txqs[j])
continue;
if (idpf_queue_has(FLOW_SCH_EN, txq_grp->txqs[j])) {
kfree(txq_grp->txqs[j]->refillq);
txq_grp->txqs[j]->refillq = NULL;
@@ -1339,6 +1342,9 @@ static void idpf_txq_group_rel(struct idpf_q_vec_rsrc *rsrc)
*/
static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp)
{
if (!rx_qgrp->splitq.bufq_sets)
return;
for (unsigned int i = 0; i < rx_qgrp->splitq.num_bufq_sets; i++) {
struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[i];
@@ -2336,7 +2342,7 @@ void idpf_wait_for_sw_marker_completion(const struct idpf_tx_queue *txq)
do {
struct idpf_splitq_4b_tx_compl_desc *tx_desc;
struct idpf_tx_queue *target;
struct idpf_tx_queue *target = NULL;
u32 ctype_gen, id;
tx_desc = flow ? &complq->comp[ntc].common :
@@ -2356,14 +2362,14 @@ void idpf_wait_for_sw_marker_completion(const struct idpf_tx_queue *txq)
target = complq->txq_grp->txqs[id];
idpf_queue_clear(SW_MARKER, target);
if (target == txq)
break;
next:
if (unlikely(++ntc == complq->desc_count)) {
ntc = 0;
gen_flag = !gen_flag;
}
if (target == txq)
break;
} while (time_before(jiffies, timeout));
idpf_queue_assign(GEN_CHK, complq, gen_flag);
@@ -4059,7 +4065,7 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport,
continue;
name = kasprintf(GFP_KERNEL, "%s-%s-%s-%d", drv_name, if_name,
vec_name, vidx);
vec_name, vector);
err = request_irq(irq_num, idpf_vport_intr_clean_queues, 0,
name, q_vector);


@@ -47,12 +47,16 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg)
{
const struct idpf_vport *vport = rxq->q_vector->vport;
const struct idpf_q_vec_rsrc *rsrc;
u32 frag_size = 0;
bool split;
int err;
if (idpf_queue_has(XSK, rxq))
frag_size = rxq->bufq_sets[0].bufq.truesize;
err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx,
rxq->q_vector->napi.napi_id,
rxq->rx_buf_size);
frag_size);
if (err)
return err;


@@ -403,6 +403,7 @@ int idpf_xskfq_init(struct idpf_buf_queue *bufq)
bufq->pending = fq.pending;
bufq->thresh = fq.thresh;
bufq->rx_buf_size = fq.buf_len;
bufq->truesize = fq.truesize;
if (!idpf_xskfq_refill(bufq))
netdev_err(bufq->pool->netdev,


@@ -524,6 +524,16 @@ bool igb_xmit_zc(struct igb_ring *tx_ring, struct xsk_buff_pool *xsk_pool)
return nb_pkts < budget;
}
static u32 igb_sw_irq_prep(struct igb_q_vector *q_vector)
{
u32 eics = 0;
if (!napi_if_scheduled_mark_missed(&q_vector->napi))
eics = q_vector->eims_value;
return eics;
}
int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
{
struct igb_adapter *adapter = netdev_priv(dev);
@@ -542,20 +552,32 @@ int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
ring = adapter->tx_ring[qid];
if (test_bit(IGB_RING_FLAG_TX_DISABLED, &ring->flags))
return -ENETDOWN;
if (!READ_ONCE(ring->xsk_pool))
return -EINVAL;
if (!napi_if_scheduled_mark_missed(&ring->q_vector->napi)) {
if (flags & XDP_WAKEUP_TX) {
if (test_bit(IGB_RING_FLAG_TX_DISABLED, &ring->flags))
return -ENETDOWN;
eics |= igb_sw_irq_prep(ring->q_vector);
}
if (flags & XDP_WAKEUP_RX) {
/* If IGB_FLAG_QUEUE_PAIRS is active, the q_vector
* and NAPI are shared between RX and TX.
* If NAPI is already running it would be marked as missed
* from the TX path, making this RX call a NOP.
*/
ring = adapter->rx_ring[qid];
eics |= igb_sw_irq_prep(ring->q_vector);
}
if (eics) {
/* Cause software interrupt */
if (adapter->flags & IGB_FLAG_HAS_MSIX) {
eics |= ring->q_vector->eims_value;
if (adapter->flags & IGB_FLAG_HAS_MSIX)
wr32(E1000_EICS, eics);
} else {
else
wr32(E1000_ICS, E1000_ICS_RXDMT0);
}
}
return 0;


@@ -6906,28 +6906,29 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
return nxmit;
}
static void igc_trigger_rxtxq_interrupt(struct igc_adapter *adapter,
struct igc_q_vector *q_vector)
static u32 igc_sw_irq_prep(struct igc_q_vector *q_vector)
{
struct igc_hw *hw = &adapter->hw;
u32 eics = 0;
eics |= q_vector->eims_value;
wr32(IGC_EICS, eics);
if (!napi_if_scheduled_mark_missed(&q_vector->napi))
eics = q_vector->eims_value;
return eics;
}
int igc_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
{
struct igc_adapter *adapter = netdev_priv(dev);
struct igc_q_vector *q_vector;
struct igc_hw *hw = &adapter->hw;
struct igc_ring *ring;
u32 eics = 0;
if (test_bit(__IGC_DOWN, &adapter->state))
return -ENETDOWN;
if (!igc_xdp_is_enabled(adapter))
return -ENXIO;
/* Check if queue_id is valid. Tx and Rx queue numbers are always same */
if (queue_id >= adapter->num_rx_queues)
return -EINVAL;
@@ -6936,9 +6937,22 @@ int igc_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
if (!ring->xsk_pool)
return -ENXIO;
q_vector = adapter->q_vector[queue_id];
if (!napi_if_scheduled_mark_missed(&q_vector->napi))
igc_trigger_rxtxq_interrupt(adapter, q_vector);
if (flags & XDP_WAKEUP_RX)
eics |= igc_sw_irq_prep(ring->q_vector);
if (flags & XDP_WAKEUP_TX) {
/* If IGC_FLAG_QUEUE_PAIRS is active, the q_vector
* and NAPI are shared between RX and TX.
* If NAPI is already running it would be marked as missed
* from the RX path, making this TX call a NOP.
*/
ring = adapter->tx_ring[queue_id];
eics |= igc_sw_irq_prep(ring->q_vector);
}
if (eics)
/* Cause software interrupt */
wr32(IGC_EICS, eics);
return 0;
}


@@ -550,7 +550,8 @@ static void igc_ptp_free_tx_buffer(struct igc_adapter *adapter,
tstamp->buffer_type = 0;
/* Trigger txrx interrupt for transmit completion */
igc_xsk_wakeup(adapter->netdev, tstamp->xsk_queue_index, 0);
igc_xsk_wakeup(adapter->netdev, tstamp->xsk_queue_index,
XDP_WAKEUP_TX);
return;
}


@@ -852,7 +852,8 @@ static s32 ixgbevf_check_mac_link_vf(struct ixgbe_hw *hw,
if (!mac->get_link_status)
goto out;
if (hw->mac.type == ixgbe_mac_e610_vf) {
if (hw->mac.type == ixgbe_mac_e610_vf &&
hw->api_version >= ixgbe_mbox_api_16) {
ret_val = ixgbevf_get_pf_link_state(hw, speed, link_up);
if (ret_val)
goto out;


@@ -167,6 +167,7 @@ int libeth_xskfq_create(struct libeth_xskfq *fq)
fq->pending = fq->count;
fq->thresh = libeth_xdp_queue_threshold(fq->count);
fq->buf_len = xsk_pool_get_rx_frame_size(fq->pool);
fq->truesize = xsk_pool_get_rx_frag_step(fq->pool);
return 0;
}


@@ -1049,6 +1049,10 @@ void libie_fwlog_deinit(struct libie_fwlog *fwlog)
{
int status;
/* if FW logging isn't supported it means no configuration was done */
if (!libie_fwlog_supported(fwlog))
return;
/* make sure FW logging is disabled to not put the FW in a weird state
* for the next driver load
*/


@@ -553,6 +553,36 @@ static void octep_clean_irqs(struct octep_device *oct)
octep_free_ioq_vectors(oct);
}
/**
* octep_update_pkt() - Update IQ/OQ IN/OUT_CNT registers.
*
* @iq: Octeon Tx queue data structure.
* @oq: Octeon Rx queue data structure.
*/
static void octep_update_pkt(struct octep_iq *iq, struct octep_oq *oq)
{
u32 pkts_pend = READ_ONCE(oq->pkts_pending);
u32 last_pkt_count = READ_ONCE(oq->last_pkt_count);
u32 pkts_processed = READ_ONCE(iq->pkts_processed);
u32 pkt_in_done = READ_ONCE(iq->pkt_in_done);
netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no);
if (pkts_processed) {
writel(pkts_processed, iq->inst_cnt_reg);
readl(iq->inst_cnt_reg);
WRITE_ONCE(iq->pkt_in_done, (pkt_in_done - pkts_processed));
WRITE_ONCE(iq->pkts_processed, 0);
}
if (last_pkt_count - pkts_pend) {
writel(last_pkt_count - pkts_pend, oq->pkts_sent_reg);
readl(oq->pkts_sent_reg);
WRITE_ONCE(oq->last_pkt_count, pkts_pend);
}
/* Flush the previous writes before writing to RESEND bit */
smp_wmb();
}
/**
* octep_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue.
*
@@ -561,21 +591,6 @@ static void octep_clean_irqs(struct octep_device *oct)
*/
static void octep_enable_ioq_irq(struct octep_iq *iq, struct octep_oq *oq)
{
u32 pkts_pend = oq->pkts_pending;
netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no);
if (iq->pkts_processed) {
writel(iq->pkts_processed, iq->inst_cnt_reg);
iq->pkt_in_done -= iq->pkts_processed;
iq->pkts_processed = 0;
}
if (oq->last_pkt_count - pkts_pend) {
writel(oq->last_pkt_count - pkts_pend, oq->pkts_sent_reg);
oq->last_pkt_count = pkts_pend;
}
/* Flush the previous writes before writing to RESEND bit */
wmb();
writeq(1UL << OCTEP_OQ_INTR_RESEND_BIT, oq->pkts_sent_reg);
writeq(1UL << OCTEP_IQ_INTR_RESEND_BIT, iq->inst_cnt_reg);
}
@@ -601,7 +616,8 @@ static int octep_napi_poll(struct napi_struct *napi, int budget)
if (tx_pending || rx_done >= budget)
return budget;
napi_complete(napi);
octep_update_pkt(ioq_vector->iq, ioq_vector->oq);
napi_complete_done(napi, rx_done);
octep_enable_ioq_irq(ioq_vector->iq, ioq_vector->oq);
return rx_done;
}


@@ -324,10 +324,16 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
struct octep_oq *oq)
{
u32 pkt_count, new_pkts;
u32 last_pkt_count, pkts_pending;
pkt_count = readl(oq->pkts_sent_reg);
new_pkts = pkt_count - oq->last_pkt_count;
last_pkt_count = READ_ONCE(oq->last_pkt_count);
new_pkts = pkt_count - last_pkt_count;
if (pkt_count < last_pkt_count) {
dev_err(oq->dev, "OQ-%u pkt_count(%u) < oq->last_pkt_count(%u)\n",
oq->q_no, pkt_count, last_pkt_count);
}
/* Clear the hardware packets counter register if the rx queue is
* being processed continuously within a single interrupt and
* reached half its max value.
@@ -338,8 +344,9 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
pkt_count = readl(oq->pkts_sent_reg);
new_pkts += pkt_count;
}
oq->last_pkt_count = pkt_count;
oq->pkts_pending += new_pkts;
WRITE_ONCE(oq->last_pkt_count, pkt_count);
pkts_pending = READ_ONCE(oq->pkts_pending);
WRITE_ONCE(oq->pkts_pending, (pkts_pending + new_pkts));
return new_pkts;
}
@@ -414,7 +421,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
u16 rx_ol_flags;
u32 read_idx;
read_idx = oq->host_read_idx;
read_idx = READ_ONCE(oq->host_read_idx);
rx_bytes = 0;
desc_used = 0;
for (pkt = 0; pkt < pkts_to_process; pkt++) {
@@ -499,7 +506,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
napi_gro_receive(oq->napi, skb);
}
oq->host_read_idx = read_idx;
WRITE_ONCE(oq->host_read_idx, read_idx);
oq->refill_count += desc_used;
oq->stats->packets += pkt;
oq->stats->bytes += rx_bytes;
@@ -522,22 +529,26 @@ int octep_oq_process_rx(struct octep_oq *oq, int budget)
{
u32 pkts_available, pkts_processed, total_pkts_processed;
struct octep_device *oct = oq->octep_dev;
u32 pkts_pending;
pkts_available = 0;
pkts_processed = 0;
total_pkts_processed = 0;
while (total_pkts_processed < budget) {
/* update pending count only when current one exhausted */
if (oq->pkts_pending == 0)
pkts_pending = READ_ONCE(oq->pkts_pending);
if (pkts_pending == 0)
octep_oq_check_hw_for_pkts(oct, oq);
pkts_pending = READ_ONCE(oq->pkts_pending);
pkts_available = min(budget - total_pkts_processed,
oq->pkts_pending);
pkts_pending);
if (!pkts_available)
break;
pkts_processed = __octep_oq_process_rx(oct, oq,
pkts_available);
oq->pkts_pending -= pkts_processed;
pkts_pending = READ_ONCE(oq->pkts_pending);
WRITE_ONCE(oq->pkts_pending, (pkts_pending - pkts_processed));
total_pkts_processed += pkts_processed;
}


@@ -285,29 +285,46 @@ static void octep_vf_clean_irqs(struct octep_vf_device *oct)
octep_vf_free_ioq_vectors(oct);
}
/**
* octep_vf_update_pkt() - Update IQ/OQ IN/OUT_CNT registers.
*
* @iq: Octeon Tx queue data structure.
* @oq: Octeon Rx queue data structure.
*/
static void octep_vf_update_pkt(struct octep_vf_iq *iq, struct octep_vf_oq *oq)
{
u32 pkts_pend = READ_ONCE(oq->pkts_pending);
u32 last_pkt_count = READ_ONCE(oq->last_pkt_count);
u32 pkts_processed = READ_ONCE(iq->pkts_processed);
u32 pkt_in_done = READ_ONCE(iq->pkt_in_done);
netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no);
if (pkts_processed) {
writel(pkts_processed, iq->inst_cnt_reg);
readl(iq->inst_cnt_reg);
WRITE_ONCE(iq->pkt_in_done, (pkt_in_done - pkts_processed));
WRITE_ONCE(iq->pkts_processed, 0);
}
if (last_pkt_count - pkts_pend) {
writel(last_pkt_count - pkts_pend, oq->pkts_sent_reg);
readl(oq->pkts_sent_reg);
WRITE_ONCE(oq->last_pkt_count, pkts_pend);
}
/* Flush the previous writes before writing to RESEND bit */
smp_wmb();
}
/**
* octep_vf_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue.
*
* @iq: Octeon Tx queue data structure.
* @oq: Octeon Rx queue data structure.
*/
static void octep_vf_enable_ioq_irq(struct octep_vf_iq *iq, struct octep_vf_oq *oq)
static void octep_vf_enable_ioq_irq(struct octep_vf_iq *iq,
struct octep_vf_oq *oq)
{
u32 pkts_pend = oq->pkts_pending;
netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no);
if (iq->pkts_processed) {
writel(iq->pkts_processed, iq->inst_cnt_reg);
iq->pkt_in_done -= iq->pkts_processed;
iq->pkts_processed = 0;
}
if (oq->last_pkt_count - pkts_pend) {
writel(oq->last_pkt_count - pkts_pend, oq->pkts_sent_reg);
oq->last_pkt_count = pkts_pend;
}
/* Flush the previous writes before writing to RESEND bit */
smp_wmb();
writeq(1UL << OCTEP_VF_OQ_INTR_RESEND_BIT, oq->pkts_sent_reg);
writeq(1UL << OCTEP_VF_IQ_INTR_RESEND_BIT, iq->inst_cnt_reg);
}
@@ -333,6 +350,7 @@ static int octep_vf_napi_poll(struct napi_struct *napi, int budget)
if (tx_pending || rx_done >= budget)
return budget;
octep_vf_update_pkt(ioq_vector->iq, ioq_vector->oq);
if (likely(napi_complete_done(napi, rx_done)))
octep_vf_enable_ioq_irq(ioq_vector->iq, ioq_vector->oq);


@@ -325,9 +325,16 @@ static int octep_vf_oq_check_hw_for_pkts(struct octep_vf_device *oct,
struct octep_vf_oq *oq)
{
u32 pkt_count, new_pkts;
u32 last_pkt_count, pkts_pending;
pkt_count = readl(oq->pkts_sent_reg);
new_pkts = pkt_count - oq->last_pkt_count;
last_pkt_count = READ_ONCE(oq->last_pkt_count);
new_pkts = pkt_count - last_pkt_count;
if (pkt_count < last_pkt_count) {
dev_err(oq->dev, "OQ-%u pkt_count(%u) < oq->last_pkt_count(%u)\n",
oq->q_no, pkt_count, last_pkt_count);
}
/* Clear the hardware packets counter register if the rx queue is
* being processed continuously within a single interrupt and
@@ -339,8 +346,9 @@ static int octep_vf_oq_check_hw_for_pkts(struct octep_vf_device *oct,
pkt_count = readl(oq->pkts_sent_reg);
new_pkts += pkt_count;
}
oq->last_pkt_count = pkt_count;
oq->pkts_pending += new_pkts;
WRITE_ONCE(oq->last_pkt_count, pkt_count);
pkts_pending = READ_ONCE(oq->pkts_pending);
WRITE_ONCE(oq->pkts_pending, (pkts_pending + new_pkts));
return new_pkts;
}
@@ -369,7 +377,7 @@ static int __octep_vf_oq_process_rx(struct octep_vf_device *oct,
struct sk_buff *skb;
u32 read_idx;
read_idx = oq->host_read_idx;
read_idx = READ_ONCE(oq->host_read_idx);
rx_bytes = 0;
desc_used = 0;
for (pkt = 0; pkt < pkts_to_process; pkt++) {
@@ -463,7 +471,7 @@ static int __octep_vf_oq_process_rx(struct octep_vf_device *oct,
napi_gro_receive(oq->napi, skb);
}
oq->host_read_idx = read_idx;
WRITE_ONCE(oq->host_read_idx, read_idx);
oq->refill_count += desc_used;
oq->stats->packets += pkt;
oq->stats->bytes += rx_bytes;
@@ -486,22 +494,26 @@ int octep_vf_oq_process_rx(struct octep_vf_oq *oq, int budget)
{
u32 pkts_available, pkts_processed, total_pkts_processed;
struct octep_vf_device *oct = oq->octep_vf_dev;
u32 pkts_pending;
pkts_available = 0;
pkts_processed = 0;
total_pkts_processed = 0;
while (total_pkts_processed < budget) {
/* update pending count only when current one exhausted */
if (oq->pkts_pending == 0)
pkts_pending = READ_ONCE(oq->pkts_pending);
if (pkts_pending == 0)
octep_vf_oq_check_hw_for_pkts(oct, oq);
pkts_pending = READ_ONCE(oq->pkts_pending);
pkts_available = min(budget - total_pkts_processed,
oq->pkts_pending);
pkts_pending);
if (!pkts_available)
break;
pkts_processed = __octep_vf_oq_process_rx(oct, oq,
pkts_available);
oq->pkts_pending -= pkts_processed;
pkts_pending = READ_ONCE(oq->pkts_pending);
WRITE_ONCE(oq->pkts_pending, (pkts_pending - pkts_processed));
total_pkts_processed += pkts_processed;
}


@@ -3748,12 +3748,21 @@ static int mtk_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
mtk_stop(dev);
old_prog = rcu_replace_pointer(eth->prog, prog, lockdep_rtnl_is_held());
if (netif_running(dev) && need_update) {
int err;
err = mtk_open(dev);
if (err) {
rcu_assign_pointer(eth->prog, old_prog);
return err;
}
}
if (old_prog)
bpf_prog_put(old_prog);
if (netif_running(dev) && need_update)
return mtk_open(dev);
return 0;
}


@@ -1770,8 +1770,14 @@ static void mana_poll_tx_cq(struct mana_cq *cq)
ndev = txq->ndev;
apc = netdev_priv(ndev);
/* Limit CQEs polled to 4 wraparounds of the CQ to ensure the
* doorbell can be rung in time for the hardware's requirement
* of at least one doorbell ring every 8 wraparounds.
*/
comp_read = mana_gd_poll_cq(cq->gdma_cq, completions,
CQE_POLLING_BUFFER);
min((cq->gdma_cq->queue_size /
COMP_ENTRY_SIZE) * 4,
CQE_POLLING_BUFFER));
if (comp_read < 1)
return;
@@ -2156,7 +2162,14 @@ static void mana_poll_rx_cq(struct mana_cq *cq)
struct mana_rxq *rxq = cq->rxq;
int comp_read, i;
comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, CQE_POLLING_BUFFER);
/* Limit CQEs polled to 4 wraparounds of the CQ to ensure the
* doorbell can be rung in time for the hardware's requirement
* of at least one doorbell ring every 8 wraparounds.
*/
comp_read = mana_gd_poll_cq(cq->gdma_cq, comp,
min((cq->gdma_cq->queue_size /
COMP_ENTRY_SIZE) * 4,
CQE_POLLING_BUFFER));
WARN_ON_ONCE(comp_read > CQE_POLLING_BUFFER);
rxq->xdp_flush = false;
@@ -2201,11 +2214,11 @@ static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
mana_gd_ring_cq(gdma_queue, SET_ARM_BIT);
cq->work_done_since_doorbell = 0;
napi_complete_done(&cq->napi, w);
} else if (cq->work_done_since_doorbell >
cq->gdma_cq->queue_size / COMP_ENTRY_SIZE * 4) {
} else if (cq->work_done_since_doorbell >=
(cq->gdma_cq->queue_size / COMP_ENTRY_SIZE) * 4) {
/* MANA hardware requires at least one doorbell ring every 8
* wraparounds of CQ even if there is no need to arm the CQ.
* This driver rings the doorbell as soon as we have exceeded
* This driver rings the doorbell as soon as it has processed
* 4 wraparounds.
*/
mana_gd_ring_cq(gdma_queue, 0);


@@ -323,6 +323,7 @@ struct stmmac_priv {
void __iomem *ptpaddr;
void __iomem *estaddr;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
unsigned int num_double_vlans;
int sfty_irq;
int sfty_ce_irq;
int sfty_ue_irq;


@@ -156,6 +156,7 @@ static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue);
static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue);
static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode,
u32 rxmode, u32 chan);
static int stmmac_vlan_restore(struct stmmac_priv *priv);
#ifdef CONFIG_DEBUG_FS
static const struct net_device_ops stmmac_netdev_ops;
@@ -4107,6 +4108,8 @@ static int __stmmac_open(struct net_device *dev,
phylink_start(priv->phylink);
stmmac_vlan_restore(priv);
ret = stmmac_request_irq(dev);
if (ret)
goto irq_error;
@@ -6766,6 +6769,9 @@ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
hash = 0;
}
if (!netif_running(priv->dev))
return 0;
return stmmac_update_vlan_hash(priv, priv->hw, hash, pmatch, is_double);
}
@@ -6775,6 +6781,7 @@ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid)
{
struct stmmac_priv *priv = netdev_priv(ndev);
unsigned int num_double_vlans;
bool is_double = false;
int ret;
@@ -6786,7 +6793,8 @@ static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid
is_double = true;
set_bit(vid, priv->active_vlans);
ret = stmmac_vlan_update(priv, is_double);
num_double_vlans = priv->num_double_vlans + is_double;
ret = stmmac_vlan_update(priv, num_double_vlans);
if (ret) {
clear_bit(vid, priv->active_vlans);
goto err_pm_put;
@@ -6794,9 +6802,15 @@ static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid
if (priv->hw->num_vlan) {
ret = stmmac_add_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid);
if (ret)
if (ret) {
clear_bit(vid, priv->active_vlans);
stmmac_vlan_update(priv, priv->num_double_vlans);
goto err_pm_put;
}
}
priv->num_double_vlans = num_double_vlans;
err_pm_put:
pm_runtime_put(priv->device);
@@ -6809,6 +6823,7 @@ err_pm_put:
static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vid)
{
struct stmmac_priv *priv = netdev_priv(ndev);
unsigned int num_double_vlans;
bool is_double = false;
int ret;
@@ -6820,14 +6835,23 @@ static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vi
is_double = true;
clear_bit(vid, priv->active_vlans);
num_double_vlans = priv->num_double_vlans - is_double;
ret = stmmac_vlan_update(priv, num_double_vlans);
if (ret) {
set_bit(vid, priv->active_vlans);
goto del_vlan_error;
}
if (priv->hw->num_vlan) {
ret = stmmac_del_hw_vlan_rx_fltr(priv, ndev, priv->hw, proto, vid);
if (ret)
if (ret) {
set_bit(vid, priv->active_vlans);
stmmac_vlan_update(priv, priv->num_double_vlans);
goto del_vlan_error;
}
}
ret = stmmac_vlan_update(priv, is_double);
priv->num_double_vlans = num_double_vlans;
del_vlan_error:
pm_runtime_put(priv->device);
@@ -6835,6 +6859,23 @@ del_vlan_error:
return ret;
}
static int stmmac_vlan_restore(struct stmmac_priv *priv)
{
int ret;
if (!(priv->dev->features & NETIF_F_VLAN_FEATURES))
return 0;
if (priv->hw->num_vlan)
stmmac_restore_hw_vlan_rx_fltr(priv, priv->dev, priv->hw);
ret = stmmac_vlan_update(priv, priv->num_double_vlans);
if (ret)
netdev_err(priv->dev, "Failed to restore VLANs\n");
return ret;
}
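The stmmac hunks above follow a commit-on-success pattern: the prospective `num_double_vlans` is computed into a local first, the hardware is programmed, and the cached count is written back only after every step has succeeded, with earlier steps rolled back on error. A standalone sketch of that pattern, with a simulated hardware call standing in for `stmmac_vlan_update()` (names and the `hw_update_ok` knob are illustrative, not driver API):

```c
#include <assert.h>

/* Toy model of the commit-on-success fix: compute the new count into a
 * local, try the "hardware" update, and only commit the cached state
 * when the update succeeded. */
struct vlan_state { int num_double_vlans; };

static int hw_update_ok;        /* test knob: simulate HW success/failure */

static int hw_vlan_update(int num_double_vlans)
{
        (void)num_double_vlans;
        return hw_update_ok ? 0 : -1;
}

static int add_double_vlan(struct vlan_state *st)
{
        int num = st->num_double_vlans + 1;
        int ret = hw_vlan_update(num);

        if (ret)
                return ret;             /* cached state left untouched */
        st->num_double_vlans = num;     /* commit only on success */
        return 0;
}
```

On failure the cached count never moves, which is exactly why the patch defers the `priv->num_double_vlans = num_double_vlans;` store to the end of the function.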
static int stmmac_bpf(struct net_device *dev, struct netdev_bpf *bpf)
{
struct stmmac_priv *priv = netdev_priv(dev);
@ -8259,10 +8300,10 @@ int stmmac_resume(struct device *dev)
stmmac_init_coalesce(priv);
phylink_rx_clk_stop_block(priv->phylink);
stmmac_set_rx_mode(ndev);
stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw);
phylink_rx_clk_stop_unblock(priv->phylink);
stmmac_vlan_restore(priv);
stmmac_enable_all_queues(priv);
stmmac_enable_all_dma_irq(priv);


@ -76,7 +76,9 @@ static int vlan_add_hw_rx_fltr(struct net_device *dev,
}
hw->vlan_filter[0] = vid;
vlan_write_single(dev, vid);
if (netif_running(dev))
vlan_write_single(dev, vid);
return 0;
}
@ -97,12 +99,15 @@ static int vlan_add_hw_rx_fltr(struct net_device *dev,
return -EPERM;
}
ret = vlan_write_filter(dev, hw, index, val);
if (netif_running(dev)) {
ret = vlan_write_filter(dev, hw, index, val);
if (ret)
return ret;
}
if (!ret)
hw->vlan_filter[index] = val;
hw->vlan_filter[index] = val;
return ret;
return 0;
}
static int vlan_del_hw_rx_fltr(struct net_device *dev,
@ -115,7 +120,9 @@ static int vlan_del_hw_rx_fltr(struct net_device *dev,
if (hw->num_vlan == 1) {
if ((hw->vlan_filter[0] & VLAN_TAG_VID) == vid) {
hw->vlan_filter[0] = 0;
vlan_write_single(dev, 0);
if (netif_running(dev))
vlan_write_single(dev, 0);
}
return 0;
}
@ -124,25 +131,23 @@ static int vlan_del_hw_rx_fltr(struct net_device *dev,
for (i = 0; i < hw->num_vlan; i++) {
if ((hw->vlan_filter[i] & VLAN_TAG_DATA_VEN) &&
((hw->vlan_filter[i] & VLAN_TAG_DATA_VID) == vid)) {
ret = vlan_write_filter(dev, hw, i, 0);
if (!ret)
hw->vlan_filter[i] = 0;
else
return ret;
if (netif_running(dev)) {
ret = vlan_write_filter(dev, hw, i, 0);
if (ret)
return ret;
}
hw->vlan_filter[i] = 0;
}
}
return ret;
return 0;
}
static void vlan_restore_hw_rx_fltr(struct net_device *dev,
struct mac_device_info *hw)
{
void __iomem *ioaddr = hw->pcsr;
u32 value;
u32 hash;
u32 val;
int i;
/* Single Rx VLAN Filter */
@ -152,19 +157,8 @@ static void vlan_restore_hw_rx_fltr(struct net_device *dev,
}
/* Extended Rx VLAN Filter Enable */
for (i = 0; i < hw->num_vlan; i++) {
if (hw->vlan_filter[i] & VLAN_TAG_DATA_VEN) {
val = hw->vlan_filter[i];
vlan_write_filter(dev, hw, i, val);
}
}
hash = readl(ioaddr + VLAN_HASH_TABLE);
if (hash & VLAN_VLHT) {
value = readl(ioaddr + VLAN_TAG);
value |= VLAN_VTHM;
writel(value, ioaddr + VLAN_TAG);
}
for (i = 0; i < hw->num_vlan; i++)
vlan_write_filter(dev, hw, i, hw->vlan_filter[i]);
}
static void vlan_update_hash(struct mac_device_info *hw, u32 hash,
@ -183,6 +177,10 @@ static void vlan_update_hash(struct mac_device_info *hw, u32 hash,
value |= VLAN_EDVLP;
value |= VLAN_ESVL;
value |= VLAN_DOVLTC;
} else {
value &= ~VLAN_EDVLP;
value &= ~VLAN_ESVL;
value &= ~VLAN_DOVLTC;
}
writel(value, ioaddr + VLAN_TAG);
@ -193,6 +191,10 @@ static void vlan_update_hash(struct mac_device_info *hw, u32 hash,
value |= VLAN_EDVLP;
value |= VLAN_ESVL;
value |= VLAN_DOVLTC;
} else {
value &= ~VLAN_EDVLP;
value &= ~VLAN_ESVL;
value &= ~VLAN_DOVLTC;
}
writel(value | perfect_match, ioaddr + VLAN_TAG);


@ -391,7 +391,7 @@ static void am65_cpsw_nuss_ndo_slave_set_rx_mode(struct net_device *ndev)
cpsw_ale_set_allmulti(common->ale,
ndev->flags & IFF_ALLMULTI, port->port_id);
port_mask = ALE_PORT_HOST;
port_mask = BIT(port->port_id) | ALE_PORT_HOST;
/* Clear all mcast from ALE */
cpsw_ale_flush_multicast(common->ale, port_mask, -1);


@ -450,14 +450,13 @@ static void cpsw_ale_flush_mcast(struct cpsw_ale *ale, u32 *ale_entry,
ale->port_mask_bits);
if ((mask & port_mask) == 0)
return; /* ports dont intersect, not interested */
mask &= ~port_mask;
mask &= (~port_mask | ALE_PORT_HOST);
/* free if only remaining port is host port */
if (mask)
if (mask == 0x0 || mask == ALE_PORT_HOST)
cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);
else
cpsw_ale_set_port_mask(ale_entry, mask,
ale->port_mask_bits);
else
cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);
}
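The rewritten `cpsw_ale_flush_mcast()` logic is pure bit arithmetic: the host-port bit now survives the mask, and the entry is freed only when nothing but the host port remains. A self-contained model of that decision (the `ALE_PORT_HOST` value is assumed for illustration):

```c
#include <assert.h>

#define ALE_PORT_HOST 0x1       /* assumed bit value for illustration */

/* Mirror of the fixed flush decision: -1 means "ports don't intersect,
 * not interested", 1 means "free the entry", 0 means "keep it with the
 * reduced mask". */
static int should_free_entry(unsigned int mask, unsigned int port_mask)
{
        if ((mask & port_mask) == 0)
                return -1;              /* ports don't intersect */
        mask &= (~port_mask | ALE_PORT_HOST);
        return (mask == 0x0 || mask == ALE_PORT_HOST);
}
```

With `port_mask = BIT(port_id) | ALE_PORT_HOST` (as the am65-cpsw hunk above now passes), an entry whose only other member is the flushed port gets freed even though the host bit is still set.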
int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask, int vid)


@ -273,6 +273,14 @@ static int prueth_emac_common_start(struct prueth *prueth)
if (ret)
goto disable_class;
/* Reset link state to force reconfiguration in
* emac_adjust_link(). Without this, if the link was already up
* before restart, emac_adjust_link() won't detect any state
* change and will skip critical configuration like writing
* speed to firmware.
*/
emac->link = 0;
mutex_lock(&emac->ndev->phydev->lock);
emac_adjust_link(emac->ndev);
mutex_unlock(&emac->ndev->phydev->lock);


@ -617,7 +617,7 @@ static ssize_t sysdata_release_enabled_show(struct config_item *item,
bool release_enabled;
dynamic_netconsole_mutex_lock();
release_enabled = !!(nt->sysdata_fields & SYSDATA_TASKNAME);
release_enabled = !!(nt->sysdata_fields & SYSDATA_RELEASE);
dynamic_netconsole_mutex_unlock();
return sysfs_emit(buf, "%d\n", release_enabled);


@ -10054,6 +10054,7 @@ static const struct usb_device_id rtl8152_table[] = {
{ USB_DEVICE(VENDOR_ID_DLINK, 0xb301) },
{ USB_DEVICE(VENDOR_ID_DELL, 0xb097) },
{ USB_DEVICE(VENDOR_ID_ASUS, 0x1976) },
{ USB_DEVICE(VENDOR_ID_TRENDNET, 0xe02b) },
{}
};


@ -2130,6 +2130,11 @@ static bool route_shortcircuit(struct net_device *dev, struct sk_buff *skb)
{
struct ipv6hdr *pip6;
/* check if nd_tbl is not initialized due to
* ipv6.disable=1 set during boot
*/
if (!ipv6_stub->nd_tbl)
return false;
if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
return false;
pip6 = ipv6_hdr(skb);


@ -5430,7 +5430,7 @@ int ath12k_mac_op_get_txpower(struct ieee80211_hw *hw,
ar->last_tx_power_update))
goto send_tx_power;
params.pdev_id = ar->pdev->pdev_id;
params.pdev_id = ath12k_mac_get_target_pdev_id(ar);
params.vdev_id = arvif->vdev_id;
params.stats_id = WMI_REQUEST_PDEV_STAT;
ret = ath12k_mac_get_fw_stats(ar, &params);
@ -13452,7 +13452,7 @@ void ath12k_mac_op_sta_statistics(struct ieee80211_hw *hw,
/* TODO: Use real NF instead of default one. */
signal = rate_info.rssi_comb;
params.pdev_id = ar->pdev->pdev_id;
params.pdev_id = ath12k_mac_get_target_pdev_id(ar);
params.vdev_id = 0;
params.stats_id = WMI_REQUEST_VDEV_STAT;
@ -13580,7 +13580,7 @@ void ath12k_mac_op_link_sta_statistics(struct ieee80211_hw *hw,
spin_unlock_bh(&ar->ab->dp->dp_lock);
if (!signal && ahsta->ahvif->vdev_type == WMI_VDEV_TYPE_STA) {
params.pdev_id = ar->pdev->pdev_id;
params.pdev_id = ath12k_mac_get_target_pdev_id(ar);
params.vdev_id = 0;
params.stats_id = WMI_REQUEST_VDEV_STAT;


@ -8241,8 +8241,6 @@ static int ath12k_wmi_tlv_fw_stats_data_parse(struct ath12k_base *ab,
struct ath12k_fw_stats *stats = parse->stats;
struct ath12k *ar;
struct ath12k_link_vif *arvif;
struct ieee80211_sta *sta;
struct ath12k_sta *ahsta;
struct ath12k_link_sta *arsta;
int i, ret = 0;
const void *data = ptr;
@ -8278,21 +8276,19 @@ static int ath12k_wmi_tlv_fw_stats_data_parse(struct ath12k_base *ab,
arvif = ath12k_mac_get_arvif(ar, le32_to_cpu(src->vdev_id));
if (arvif) {
sta = ieee80211_find_sta_by_ifaddr(ath12k_ar_to_hw(ar),
arvif->bssid,
NULL);
if (sta) {
ahsta = ath12k_sta_to_ahsta(sta);
arsta = &ahsta->deflink;
spin_lock_bh(&ab->base_lock);
arsta = ath12k_link_sta_find_by_addr(ab, arvif->bssid);
if (arsta) {
arsta->rssi_beacon = le32_to_cpu(src->beacon_snr);
ath12k_dbg(ab, ATH12K_DBG_WMI,
"wmi stats vdev id %d snr %d\n",
src->vdev_id, src->beacon_snr);
} else {
ath12k_dbg(ab, ATH12K_DBG_WMI,
"not found station bssid %pM for vdev stat\n",
arvif->bssid);
ath12k_warn(ab,
"not found link sta with bssid %pM for vdev stat\n",
arvif->bssid);
}
spin_unlock_bh(&ab->base_lock);
}
data += sizeof(*src);
@ -8363,8 +8359,6 @@ static int ath12k_wmi_tlv_rssi_chain_parse(struct ath12k_base *ab,
struct ath12k_fw_stats *stats = parse->stats;
struct ath12k_link_vif *arvif;
struct ath12k_link_sta *arsta;
struct ieee80211_sta *sta;
struct ath12k_sta *ahsta;
struct ath12k *ar;
int vdev_id;
int j;
@ -8400,19 +8394,15 @@ static int ath12k_wmi_tlv_rssi_chain_parse(struct ath12k_base *ab,
"stats bssid %pM vif %p\n",
arvif->bssid, arvif->ahvif->vif);
sta = ieee80211_find_sta_by_ifaddr(ath12k_ar_to_hw(ar),
arvif->bssid,
NULL);
if (!sta) {
ath12k_dbg(ab, ATH12K_DBG_WMI,
"not found station of bssid %pM for rssi chain\n",
arvif->bssid);
guard(spinlock_bh)(&ab->base_lock);
arsta = ath12k_link_sta_find_by_addr(ab, arvif->bssid);
if (!arsta) {
ath12k_warn(ab,
"not found link sta with bssid %pM for rssi chain\n",
arvif->bssid);
return -EPROTO;
}
ahsta = ath12k_sta_to_ahsta(sta);
arsta = &ahsta->deflink;
BUILD_BUG_ON(ARRAY_SIZE(arsta->chain_signal) >
ARRAY_SIZE(stats_rssi->rssi_avg_beacon));


@ -413,6 +413,7 @@ mt76_connac2_mac_write_txwi_80211(struct mt76_dev *dev, __le32 *txwi,
u32 val;
if (ieee80211_is_action(fc) &&
skb->len >= IEEE80211_MIN_ACTION_SIZE + 1 + 1 + 2 &&
mgmt->u.action.category == WLAN_CATEGORY_BACK &&
mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) {
u16 capab = le16_to_cpu(mgmt->u.action.u.addba_req.capab);


@ -668,6 +668,7 @@ mt7925_mac_write_txwi_80211(struct mt76_dev *dev, __le32 *txwi,
u32 val;
if (ieee80211_is_action(fc) &&
skb->len >= IEEE80211_MIN_ACTION_SIZE + 1 &&
mgmt->u.action.category == WLAN_CATEGORY_BACK &&
mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ)
tid = MT_TX_ADDBA;


@ -800,6 +800,7 @@ mt7996_mac_write_txwi_80211(struct mt7996_dev *dev, __le32 *txwi,
u32 val;
if (ieee80211_is_action(fc) &&
skb->len >= IEEE80211_MIN_ACTION_SIZE + 1 &&
mgmt->u.action.category == WLAN_CATEGORY_BACK &&
mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ) {
if (is_mt7990(&dev->mt76))


@ -668,7 +668,7 @@ static int rsi_mac80211_config(struct ieee80211_hw *hw,
struct rsi_hw *adapter = hw->priv;
struct rsi_common *common = adapter->priv;
struct ieee80211_conf *conf = &hw->conf;
int status = -EOPNOTSUPP;
int status = 0;
mutex_lock(&common->mutex);


@ -264,12 +264,14 @@ int cw1200_wow_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
wiphy_err(priv->hw->wiphy,
"PM request failed: %d. WoW is disabled.\n", ret);
cw1200_wow_resume(hw);
mutex_unlock(&priv->conf_mutex);
return -EBUSY;
}
/* Force resume if event is coming from the device. */
if (atomic_read(&priv->bh_rx)) {
cw1200_wow_resume(hw);
mutex_unlock(&priv->conf_mutex);
return -EAGAIN;
}


@ -1875,6 +1875,8 @@ static int __maybe_unused wl1271_op_resume(struct ieee80211_hw *hw)
wl->wow_enabled);
WARN_ON(!wl->wow_enabled);
mutex_lock(&wl->mutex);
ret = pm_runtime_force_resume(wl->dev);
if (ret < 0) {
wl1271_error("ELP wakeup failure!");
@ -1891,8 +1893,6 @@ static int __maybe_unused wl1271_op_resume(struct ieee80211_hw *hw)
run_irq_work = true;
spin_unlock_irqrestore(&wl->wl_lock, flags);
mutex_lock(&wl->mutex);
/* test the recovery flag before calling any SDIO functions */
pending_recovery = test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS,
&wl->flags);


@ -16,22 +16,26 @@
*/
#define INDIRECT_CALL_1(f, f1, ...) \
({ \
likely(f == f1) ? f1(__VA_ARGS__) : f(__VA_ARGS__); \
typeof(f) __f1 = (f); \
likely(__f1 == f1) ? f1(__VA_ARGS__) : __f1(__VA_ARGS__); \
})
#define INDIRECT_CALL_2(f, f2, f1, ...) \
({ \
likely(f == f2) ? f2(__VA_ARGS__) : \
INDIRECT_CALL_1(f, f1, __VA_ARGS__); \
typeof(f) __f2 = (f); \
likely(__f2 == f2) ? f2(__VA_ARGS__) : \
INDIRECT_CALL_1(__f2, f1, __VA_ARGS__); \
})
#define INDIRECT_CALL_3(f, f3, f2, f1, ...) \
({ \
likely(f == f3) ? f3(__VA_ARGS__) : \
INDIRECT_CALL_2(f, f2, f1, __VA_ARGS__); \
typeof(f) __f3 = (f); \
likely(__f3 == f3) ? f3(__VA_ARGS__) : \
INDIRECT_CALL_2(__f3, f2, f1, __VA_ARGS__); \
})
#define INDIRECT_CALL_4(f, f4, f3, f2, f1, ...) \
({ \
likely(f == f4) ? f4(__VA_ARGS__) : \
INDIRECT_CALL_3(f, f3, f2, f1, __VA_ARGS__); \
typeof(f) __f4 = (f); \
likely(__f4 == f4) ? f4(__VA_ARGS__) : \
INDIRECT_CALL_3(__f4, f3, f2, f1, __VA_ARGS__); \
})
#define INDIRECT_CALLABLE_DECLARE(f) f


@ -4711,7 +4711,7 @@ static inline u32 netif_msg_init(int debug_value, int default_msg_enable_bits)
static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
{
spin_lock(&txq->_xmit_lock);
/* Pairs with READ_ONCE() in __dev_queue_xmit() */
/* Pairs with READ_ONCE() in netif_tx_owned() */
WRITE_ONCE(txq->xmit_lock_owner, cpu);
}
@ -4729,7 +4729,7 @@ static inline void __netif_tx_release(struct netdev_queue *txq)
static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
{
spin_lock_bh(&txq->_xmit_lock);
/* Pairs with READ_ONCE() in __dev_queue_xmit() */
/* Pairs with READ_ONCE() in netif_tx_owned() */
WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
}
@ -4738,7 +4738,7 @@ static inline bool __netif_tx_trylock(struct netdev_queue *txq)
bool ok = spin_trylock(&txq->_xmit_lock);
if (likely(ok)) {
/* Pairs with READ_ONCE() in __dev_queue_xmit() */
/* Pairs with READ_ONCE() in netif_tx_owned() */
WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
}
return ok;
@ -4746,14 +4746,14 @@ static inline bool __netif_tx_trylock(struct netdev_queue *txq)
static inline void __netif_tx_unlock(struct netdev_queue *txq)
{
/* Pairs with READ_ONCE() in __dev_queue_xmit() */
/* Pairs with READ_ONCE() in netif_tx_owned() */
WRITE_ONCE(txq->xmit_lock_owner, -1);
spin_unlock(&txq->_xmit_lock);
}
static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
{
/* Pairs with READ_ONCE() in __dev_queue_xmit() */
/* Pairs with READ_ONCE() in netif_tx_owned() */
WRITE_ONCE(txq->xmit_lock_owner, -1);
spin_unlock_bh(&txq->_xmit_lock);
}
@ -4846,6 +4846,23 @@ static inline void netif_tx_disable(struct net_device *dev)
local_bh_enable();
}
#ifndef CONFIG_PREEMPT_RT
static inline bool netif_tx_owned(struct netdev_queue *txq, unsigned int cpu)
{
/* Other cpus might concurrently change txq->xmit_lock_owner
* to -1 or to their cpu id, but not to our id.
*/
return READ_ONCE(txq->xmit_lock_owner) == cpu;
}
#else
static inline bool netif_tx_owned(struct netdev_queue *txq, unsigned int cpu)
{
return rt_mutex_owner(&txq->_xmit_lock.lock) == current;
}
#endif
static inline void netif_addr_lock(struct net_device *dev)
{
unsigned char nest_level = 0;
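The new `netif_tx_owned()` helper only compares the recorded owner with the local CPU id, which is what makes the lockless `READ_ONCE()` access safe: other CPUs may concurrently set the owner field to -1 or to their own id, but never to ours. A toy single-threaded model of that recursion check (a simplified struct, not the real `netdev_queue`):

```c
#include <assert.h>

/* Toy model: a queue records which CPU holds its xmit lock (-1 when
 * unowned); a transmit attempt from the owning CPU is flagged as
 * recursion instead of deadlocking on the lock. */
struct txq { int xmit_lock_owner; };

static int netif_tx_owned(const struct txq *txq, int cpu)
{
        return txq->xmit_lock_owner == cpu;
}
```

This is the check `__dev_queue_xmit()` now calls in place of its open-coded `READ_ONCE(txq->xmit_lock_owner) != cpu` comparison, with a `rt_mutex_owner()` variant on PREEMPT_RT where the owner field does not exist.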


@ -32,6 +32,7 @@
#define VENDOR_ID_DLINK 0x2001
#define VENDOR_ID_DELL 0x413c
#define VENDOR_ID_ASUS 0x0b05
#define VENDOR_ID_TRENDNET 0x20f4
#if IS_REACHABLE(CONFIG_USB_RTL8152)
extern u8 rtl8152_get_version(struct usb_interface *intf);


@ -70,6 +70,7 @@ struct tc_action {
#define TCA_ACT_FLAGS_REPLACE (1U << (TCA_ACT_FLAGS_USER_BITS + 2))
#define TCA_ACT_FLAGS_NO_RTNL (1U << (TCA_ACT_FLAGS_USER_BITS + 3))
#define TCA_ACT_FLAGS_AT_INGRESS (1U << (TCA_ACT_FLAGS_USER_BITS + 4))
#define TCA_ACT_FLAGS_AT_INGRESS_OR_CLSACT (1U << (TCA_ACT_FLAGS_USER_BITS + 5))
/* Update lastuse only if needed, to avoid dirtying a cache line.
* We use a temp variable to avoid fetching jiffies twice.


@ -699,6 +699,7 @@ void bond_debug_register(struct bonding *bond);
void bond_debug_unregister(struct bonding *bond);
void bond_debug_reregister(struct bonding *bond);
const char *bond_mode_name(int mode);
bool __bond_xdp_check(int mode, int xmit_policy);
bool bond_xdp_check(struct bonding *bond, int mode);
void bond_setup(struct net_device *bond_dev);
unsigned int bond_get_num_tx_queues(void);


@ -175,7 +175,7 @@ static inline bool inet6_match(const struct net *net, const struct sock *sk,
{
if (!net_eq(sock_net(sk), net) ||
sk->sk_family != AF_INET6 ||
sk->sk_portpair != ports ||
READ_ONCE(sk->sk_portpair) != ports ||
!ipv6_addr_equal(&sk->sk_v6_daddr, saddr) ||
!ipv6_addr_equal(&sk->sk_v6_rcv_saddr, daddr))
return false;


@ -345,7 +345,7 @@ static inline bool inet_match(const struct net *net, const struct sock *sk,
int dif, int sdif)
{
if (!net_eq(sock_net(sk), net) ||
sk->sk_portpair != ports ||
READ_ONCE(sk->sk_portpair) != ports ||
sk->sk_addrpair != cookie)
return false;


@ -101,7 +101,7 @@ static inline void ipcm_init_sk(struct ipcm_cookie *ipcm,
ipcm->oif = READ_ONCE(inet->sk.sk_bound_dev_if);
ipcm->addr = inet->inet_saddr;
ipcm->protocol = inet->inet_num;
ipcm->protocol = READ_ONCE(inet->inet_num);
}
#define IPCB(skb) ((struct inet_skb_parm*)((skb)->cb))


@ -559,7 +559,7 @@ static inline u32 fib_multipath_hash_from_keys(const struct net *net,
siphash_aligned_key_t hash_key;
u32 mp_seed;
mp_seed = READ_ONCE(net->ipv4.sysctl_fib_multipath_hash_seed).mp_seed;
mp_seed = READ_ONCE(net->ipv4.sysctl_fib_multipath_hash_seed.mp_seed);
fib_multipath_hash_construct_key(&hash_key, mp_seed);
return flow_hash_from_keys_seed(keys, &hash_key);


@ -597,6 +597,7 @@ __libeth_xsk_run_pass(struct libeth_xdp_buff *xdp,
* @pending: current number of XSkFQEs to refill
* @thresh: threshold below which the queue is refilled
* @buf_len: HW-writeable length per each buffer
* @truesize: step between consecutive buffers, 0 if none exists
* @nid: ID of the closest NUMA node with memory
*/
struct libeth_xskfq {
@ -614,6 +615,8 @@ struct libeth_xskfq {
u32 thresh;
u32 buf_len;
u32 truesize;
int nid;
};


@ -320,11 +320,13 @@ static inline void *nft_elem_priv_cast(const struct nft_elem_priv *priv)
* @NFT_ITER_UNSPEC: unspecified, to catch errors
* @NFT_ITER_READ: read-only iteration over set elements
* @NFT_ITER_UPDATE: iteration under mutex to update set element state
* @NFT_ITER_UPDATE_CLONE: clone set before iteration under mutex to update element
*/
enum nft_iter_type {
NFT_ITER_UNSPEC,
NFT_ITER_READ,
NFT_ITER_UPDATE,
NFT_ITER_UPDATE_CLONE,
};
struct nft_set;
@ -1861,6 +1863,11 @@ struct nft_trans_gc {
struct rcu_head rcu;
};
static inline int nft_trans_gc_space(const struct nft_trans_gc *trans)
{
return NFT_TRANS_GC_BATCHCOUNT - trans->count;
}
static inline void nft_ctx_update(struct nft_ctx *ctx,
const struct nft_trans *trans)
{


@ -778,13 +778,23 @@ static inline bool skb_skip_tc_classify(struct sk_buff *skb)
static inline void qdisc_reset_all_tx_gt(struct net_device *dev, unsigned int i)
{
struct Qdisc *qdisc;
bool nolock;
for (; i < dev->num_tx_queues; i++) {
qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc);
if (qdisc) {
nolock = qdisc->flags & TCQ_F_NOLOCK;
if (nolock)
spin_lock_bh(&qdisc->seqlock);
spin_lock_bh(qdisc_lock(qdisc));
qdisc_reset(qdisc);
spin_unlock_bh(qdisc_lock(qdisc));
if (nolock) {
clear_bit(__QDISC_STATE_MISSED, &qdisc->state);
clear_bit(__QDISC_STATE_DRAINING, &qdisc->state);
spin_unlock_bh(&qdisc->seqlock);
}
}
}
}


@ -5,16 +5,47 @@
#include <linux/types.h>
struct net;
extern struct net init_net;
union tcp_seq_and_ts_off {
struct {
u32 seq;
u32 ts_off;
};
u64 hash64;
};
u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
__be16 dport);
u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
__be16 sport, __be16 dport);
u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr);
u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
__be16 sport, __be16 dport);
u32 secure_tcpv6_ts_off(const struct net *net,
const __be32 *saddr, const __be32 *daddr);
union tcp_seq_and_ts_off
secure_tcp_seq_and_ts_off(const struct net *net, __be32 saddr, __be32 daddr,
__be16 sport, __be16 dport);
static inline u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
__be16 sport, __be16 dport)
{
union tcp_seq_and_ts_off ts;
ts = secure_tcp_seq_and_ts_off(&init_net, saddr, daddr,
sport, dport);
return ts.seq;
}
union tcp_seq_and_ts_off
secure_tcpv6_seq_and_ts_off(const struct net *net, const __be32 *saddr,
const __be32 *daddr,
__be16 sport, __be16 dport);
static inline u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
__be16 sport, __be16 dport)
{
union tcp_seq_and_ts_off ts;
ts = secure_tcpv6_seq_and_ts_off(&init_net, saddr, daddr,
sport, dport);
return ts.seq;
}
#endif /* _NET_SECURE_SEQ */
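The new `union tcp_seq_and_ts_off` derives both the initial sequence number and the timestamp offset from a single 64-bit hash by aliasing it with two u32 fields, so one siphash invocation yields both values. A small sketch of the aliasing (which half of the hash lands in `seq` is endian-dependent, so only endian-neutral properties are exercised below):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace mirror of the union: one 64-bit hash viewed either whole
 * or as two 32-bit halves (anonymous struct members need C11). */
union tcp_seq_and_ts_off {
        struct {
                uint32_t seq;
                uint32_t ts_off;
        };
        uint64_t hash64;
};

static union tcp_seq_and_ts_off split_hash(uint64_t hash)
{
        union tcp_seq_and_ts_off v;

        v.hash64 = hash;
        return v;
}
```

The inline `secure_tcp_seq()` wrappers above then simply discard the `ts_off` half for callers that only need the sequence number.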


@ -32,6 +32,7 @@ struct tcf_gate_params {
s32 tcfg_clockid;
size_t num_entries;
struct list_head entries;
struct rcu_head rcu;
};
#define GATE_ACT_GATE_OPEN BIT(0)
@ -39,7 +40,7 @@ struct tcf_gate_params {
struct tcf_gate {
struct tc_action common;
struct tcf_gate_params param;
struct tcf_gate_params __rcu *param;
u8 current_gate_status;
ktime_t current_close_time;
u32 current_entry_octets;
@ -51,47 +52,65 @@ struct tcf_gate {
#define to_gate(a) ((struct tcf_gate *)a)
static inline struct tcf_gate_params *tcf_gate_params_locked(const struct tc_action *a)
{
struct tcf_gate *gact = to_gate(a);
return rcu_dereference_protected(gact->param,
lockdep_is_held(&gact->tcf_lock));
}
static inline s32 tcf_gate_prio(const struct tc_action *a)
{
struct tcf_gate_params *p;
s32 tcfg_prio;
tcfg_prio = to_gate(a)->param.tcfg_priority;
p = tcf_gate_params_locked(a);
tcfg_prio = p->tcfg_priority;
return tcfg_prio;
}
static inline u64 tcf_gate_basetime(const struct tc_action *a)
{
struct tcf_gate_params *p;
u64 tcfg_basetime;
tcfg_basetime = to_gate(a)->param.tcfg_basetime;
p = tcf_gate_params_locked(a);
tcfg_basetime = p->tcfg_basetime;
return tcfg_basetime;
}
static inline u64 tcf_gate_cycletime(const struct tc_action *a)
{
struct tcf_gate_params *p;
u64 tcfg_cycletime;
tcfg_cycletime = to_gate(a)->param.tcfg_cycletime;
p = tcf_gate_params_locked(a);
tcfg_cycletime = p->tcfg_cycletime;
return tcfg_cycletime;
}
static inline u64 tcf_gate_cycletimeext(const struct tc_action *a)
{
struct tcf_gate_params *p;
u64 tcfg_cycletimeext;
tcfg_cycletimeext = to_gate(a)->param.tcfg_cycletime_ext;
p = tcf_gate_params_locked(a);
tcfg_cycletimeext = p->tcfg_cycletime_ext;
return tcfg_cycletimeext;
}
static inline u32 tcf_gate_num_entries(const struct tc_action *a)
{
struct tcf_gate_params *p;
u32 num_entries;
num_entries = to_gate(a)->param.num_entries;
p = tcf_gate_params_locked(a);
num_entries = p->num_entries;
return num_entries;
}
@ -105,7 +124,7 @@ static inline struct action_gate_entry
u32 num_entries;
int i = 0;
p = &to_gate(a)->param;
p = tcf_gate_params_locked(a);
num_entries = p->num_entries;
list_for_each_entry(entry, &p->entries, list)


@ -13,15 +13,13 @@ struct tcf_ife_params {
u8 eth_src[ETH_ALEN];
u16 eth_type;
u16 flags;
struct list_head metalist;
struct rcu_head rcu;
};
struct tcf_ife_info {
struct tc_action common;
struct tcf_ife_params __rcu *params;
/* list of metaids allowed */
struct list_head metalist;
};
#define to_ife(a) ((struct tcf_ife_info *)a)


@ -43,6 +43,7 @@
#include <net/dst.h>
#include <net/mptcp.h>
#include <net/xfrm.h>
#include <net/secure_seq.h>
#include <linux/seq_file.h>
#include <linux/memcontrol.h>
@ -2464,8 +2465,9 @@ struct tcp_request_sock_ops {
struct flowi *fl,
struct request_sock *req,
u32 tw_isn);
u32 (*init_seq)(const struct sk_buff *skb);
u32 (*init_ts_off)(const struct net *net, const struct sk_buff *skb);
union tcp_seq_and_ts_off (*init_seq_and_ts_off)(
const struct net *net,
const struct sk_buff *skb);
int (*send_synack)(const struct sock *sk, struct dst_entry *dst,
struct flowi *fl, struct request_sock *req,
struct tcp_fastopen_cookie *foc,


@ -51,6 +51,11 @@ static inline u32 xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool)
return xsk_pool_get_chunk_size(pool) - xsk_pool_get_headroom(pool);
}
static inline u32 xsk_pool_get_rx_frag_step(struct xsk_buff_pool *pool)
{
return pool->unaligned ? 0 : xsk_pool_get_chunk_size(pool);
}
static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
struct xdp_rxq_info *rxq)
{
@ -122,7 +127,7 @@ static inline void xsk_buff_free(struct xdp_buff *xdp)
goto out;
list_for_each_entry_safe(pos, tmp, xskb_list, list_node) {
list_del(&pos->list_node);
list_del_init(&pos->list_node);
xp_free(pos);
}
@ -157,7 +162,7 @@ static inline struct xdp_buff *xsk_buff_get_frag(const struct xdp_buff *first)
frag = list_first_entry_or_null(&xskb->pool->xskb_list,
struct xdp_buff_xsk, list_node);
if (frag) {
list_del(&frag->list_node);
list_del_init(&frag->list_node);
ret = &frag->xdp;
}
@ -168,7 +173,7 @@ static inline void xsk_buff_del_frag(struct xdp_buff *xdp)
{
struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
list_del(&xskb->list_node);
list_del_init(&xskb->list_node);
}
static inline struct xdp_buff *xsk_buff_get_head(struct xdp_buff *first)
@ -337,6 +342,11 @@ static inline u32 xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool)
return 0;
}
static inline u32 xsk_pool_get_rx_frag_step(struct xsk_buff_pool *pool)
{
return 0;
}
static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
struct xdp_rxq_info *rxq)
{
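The xsk hunks swap `list_del()` for `list_del_init()`, which re-points the removed node at itself so it can later be tested for emptiness or deleted again without touching stale neighbours. A minimal circular-list model of that semantic (a simplified re-derivation for illustration, not the kernel's `list.h`):

```c
#include <assert.h>

/* Minimal circular doubly linked list, just enough to demonstrate
 * list_del_init(): after removal the node refers to itself, so
 * repeated deletion or emptiness checks stay well-defined. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *head)
{
        n->next = head->next;
        n->prev = head;
        head->next->prev = n;
        head->next = n;
}

static void list_del_init(struct list_head *n)
{
        n->prev->next = n->next;
        n->next->prev = n->prev;
        INIT_LIST_HEAD(n);      /* node now points back at itself */
}

static int list_empty(const struct list_head *h) { return h->next == h; }
```

A plain `list_del()` would leave the removed fragment node with dangling pointers, which is how the buffer leak fixed here could arise when a fragment was deleted twice along different paths.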


@ -1260,24 +1260,28 @@ static void lec_arp_clear_vccs(struct lec_arp_table *entry)
struct lec_vcc_priv *vpriv = LEC_VCC_PRIV(vcc);
struct net_device *dev = (struct net_device *)vcc->proto_data;
vcc->pop = vpriv->old_pop;
if (vpriv->xoff)
netif_wake_queue(dev);
kfree(vpriv);
vcc->user_back = NULL;
vcc->push = entry->old_push;
vcc_release_async(vcc, -EPIPE);
if (vpriv) {
vcc->pop = vpriv->old_pop;
if (vpriv->xoff)
netif_wake_queue(dev);
kfree(vpriv);
vcc->user_back = NULL;
vcc->push = entry->old_push;
vcc_release_async(vcc, -EPIPE);
}
entry->vcc = NULL;
}
if (entry->recv_vcc) {
struct atm_vcc *vcc = entry->recv_vcc;
struct lec_vcc_priv *vpriv = LEC_VCC_PRIV(vcc);
kfree(vpriv);
vcc->user_back = NULL;
if (vpriv) {
kfree(vpriv);
vcc->user_back = NULL;
entry->recv_vcc->push = entry->old_recv_push;
vcc_release_async(entry->recv_vcc, -EPIPE);
entry->recv_vcc->push = entry->old_recv_push;
vcc_release_async(entry->recv_vcc, -EPIPE);
}
entry->recv_vcc = NULL;
}
}


@ -111,7 +111,15 @@ static bool batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh,
/* unsupported WiFi driver version */
goto default_throughput;
real_netdev = batadv_get_real_netdev(hard_iface->net_dev);
/* only use rtnl_trylock because the elp worker will be cancelled while
* the rtnl_lock is held. the cancel_delayed_work_sync() would otherwise
* wait forever when the elp work_item was started and it is then also
* trying to rtnl_lock
*/
if (!rtnl_trylock())
return false;
real_netdev = __batadv_get_real_netdev(hard_iface->net_dev);
rtnl_unlock();
if (!real_netdev)
goto default_throughput;


@ -204,7 +204,7 @@ static bool batadv_is_valid_iface(const struct net_device *net_dev)
}
/**
* batadv_get_real_netdevice() - check if the given netdev struct is a virtual
* __batadv_get_real_netdev() - check if the given netdev struct is a virtual
* interface on top of another 'real' interface
* @netdev: the device to check
*
@ -214,7 +214,7 @@ static bool batadv_is_valid_iface(const struct net_device *net_dev)
* Return: the 'real' net device or the original net device and NULL in case
* of an error.
*/
static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
struct net_device *__batadv_get_real_netdev(struct net_device *netdev)
{
struct batadv_hard_iface *hard_iface = NULL;
struct net_device *real_netdev = NULL;
@ -267,7 +267,7 @@ struct net_device *batadv_get_real_netdev(struct net_device *net_device)
struct net_device *real_netdev;
rtnl_lock();
real_netdev = batadv_get_real_netdevice(net_device);
real_netdev = __batadv_get_real_netdev(net_device);
rtnl_unlock();
return real_netdev;
@ -336,7 +336,7 @@ static u32 batadv_wifi_flags_evaluate(struct net_device *net_device)
if (batadv_is_cfg80211_netdev(net_device))
wifi_flags |= BATADV_HARDIF_WIFI_CFG80211_DIRECT;
real_netdev = batadv_get_real_netdevice(net_device);
real_netdev = __batadv_get_real_netdev(net_device);
if (!real_netdev)
return wifi_flags;


@ -67,6 +67,7 @@ enum batadv_hard_if_bcast {
extern struct notifier_block batadv_hard_if_notifier;
struct net_device *__batadv_get_real_netdev(struct net_device *net_device);
struct net_device *batadv_get_real_netdev(struct net_device *net_device);
bool batadv_is_cfg80211_hardif(struct batadv_hard_iface *hard_iface);
bool batadv_is_wifi_hardif(struct batadv_hard_iface *hard_iface);


@ -74,7 +74,7 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
eth_hdr(skb)->h_proto == htons(ETH_P_RARP)) &&
br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED)) {
br_do_proxy_suppress_arp(skb, br, vid, NULL);
} else if (IS_ENABLED(CONFIG_IPV6) &&
} else if (ipv6_mod_enabled() &&
skb->protocol == htons(ETH_P_IPV6) &&
br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED) &&
pskb_may_pull(skb, sizeof(struct ipv6hdr) +


@ -170,7 +170,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
(skb->protocol == htons(ETH_P_ARP) ||
skb->protocol == htons(ETH_P_RARP))) {
br_do_proxy_suppress_arp(skb, br, vid, p);
} else if (IS_ENABLED(CONFIG_IPV6) &&
} else if (ipv6_mod_enabled() &&
skb->protocol == htons(ETH_P_IPV6) &&
br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED) &&
pskb_may_pull(skb, sizeof(struct ipv6hdr) +


@ -1344,6 +1344,16 @@ br_multicast_ctx_options_equal(const struct net_bridge_mcast *brmctx1,
true;
}
static inline bool
br_multicast_port_ctx_options_equal(const struct net_bridge_mcast_port *pmctx1,
const struct net_bridge_mcast_port *pmctx2)
{
return br_multicast_ngroups_get(pmctx1) ==
br_multicast_ngroups_get(pmctx2) &&
br_multicast_ngroups_get_max(pmctx1) ==
br_multicast_ngroups_get_max(pmctx2);
}
static inline bool
br_multicast_ctx_matches_vlan_snooping(const struct net_bridge_mcast *brmctx)
{


@ -43,9 +43,29 @@ bool br_vlan_opts_eq_range(const struct net_bridge_vlan *v_curr,
u8 range_mc_rtr = br_vlan_multicast_router(range_end);
u8 curr_mc_rtr = br_vlan_multicast_router(v_curr);
return v_curr->state == range_end->state &&
__vlan_tun_can_enter_range(v_curr, range_end) &&
curr_mc_rtr == range_mc_rtr;
if (v_curr->state != range_end->state)
return false;
if (!__vlan_tun_can_enter_range(v_curr, range_end))
return false;
if (curr_mc_rtr != range_mc_rtr)
return false;
/* Check user-visible priv_flags that affect output */
if ((v_curr->priv_flags ^ range_end->priv_flags) &
(BR_VLFLAG_NEIGH_SUPPRESS_ENABLED | BR_VLFLAG_MCAST_ENABLED))
return false;
#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
if (!br_vlan_is_master(v_curr) &&
!br_multicast_port_ctx_vlan_disabled(&v_curr->port_mcast_ctx) &&
!br_multicast_port_ctx_options_equal(&v_curr->port_mcast_ctx,
&range_end->port_mcast_ctx))
return false;
#endif
return true;
}
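The `priv_flags` comparison added to `br_vlan_opts_eq_range()` uses the XOR-and-mask idiom: two VLANs may be grouped into one netlink range only if they agree on every user-visible flag bit, while bits outside the mask are ignored. A standalone sketch (flag values assumed for illustration):

```c
#include <assert.h>

#define BR_VLFLAG_NEIGH_SUPPRESS_ENABLED 0x10   /* assumed values */
#define BR_VLFLAG_MCAST_ENABLED          0x20   /* for illustration */

/* Two flag words can enter the same range iff they do not differ in
 * any masked (user-visible) bit: (a ^ b) has a 1 exactly where a and
 * b disagree. */
static int flags_can_range(unsigned int a, unsigned int b)
{
        unsigned int mask = BR_VLFLAG_NEIGH_SUPPRESS_ENABLED |
                            BR_VLFLAG_MCAST_ENABLED;

        return ((a ^ b) & mask) == 0;
}
```

This is the per-VLAN options check called out in the cover letter ("check relevant per-VLAN options in VLAN range grouping"): without it, VLANs with differing neighbor-suppression or mcast state were wrongly folded into one dumped range.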
bool br_vlan_opts_fill(struct sk_buff *skb, const struct net_bridge_vlan *v,


@ -1176,6 +1176,7 @@ static int bcm_rx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
if (!op)
return -ENOMEM;
spin_lock_init(&op->bcm_tx_lock);
op->can_id = msg_head->can_id;
op->nframes = msg_head->nframes;
op->cfsiz = CFSIZ(msg_head->flags);


@ -3987,7 +3987,7 @@ static struct sk_buff *validate_xmit_unreadable_skb(struct sk_buff *skb,
if (shinfo->nr_frags > 0) {
niov = netmem_to_net_iov(skb_frag_netmem(&shinfo->frags[0]));
if (net_is_devmem_iov(niov) &&
net_devmem_iov_binding(niov)->dev != dev)
READ_ONCE(net_devmem_iov_binding(niov)->dev) != dev)
goto out_free;
}
@ -4818,10 +4818,7 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
if (dev->flags & IFF_UP) {
int cpu = smp_processor_id(); /* ok because BHs are off */
/* Other cpus might concurrently change txq->xmit_lock_owner
* to -1 or to their cpu id, but not to our id.
*/
if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
if (!netif_tx_owned(txq, cpu)) {
bool is_list = false;
if (dev_xmit_recursion())
@ -7794,11 +7791,12 @@ static int napi_thread_wait(struct napi_struct *napi)
return -1;
}
static void napi_threaded_poll_loop(struct napi_struct *napi, bool busy_poll)
static void napi_threaded_poll_loop(struct napi_struct *napi,
unsigned long *busy_poll_last_qs)
{
unsigned long last_qs = busy_poll_last_qs ? *busy_poll_last_qs : jiffies;
struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
struct softnet_data *sd;
unsigned long last_qs = jiffies;
for (;;) {
bool repoll = false;
@ -7827,12 +7825,12 @@ static void napi_threaded_poll_loop(struct napi_struct *napi, bool busy_poll)
/* When busy poll is enabled, the old packets are not flushed in
* napi_complete_done. So flush them here.
*/
if (busy_poll)
if (busy_poll_last_qs)
gro_flush_normal(&napi->gro, HZ >= 1000);
local_bh_enable();
/* Call cond_resched here to avoid watchdog warnings. */
if (repoll || busy_poll) {
if (repoll || busy_poll_last_qs) {
rcu_softirq_qs_periodic(last_qs);
cond_resched();
}
@ -7840,11 +7838,15 @@ static void napi_threaded_poll_loop(struct napi_struct *napi, bool busy_poll)
if (!repoll)
break;
}
if (busy_poll_last_qs)
*busy_poll_last_qs = last_qs;
}
static int napi_threaded_poll(void *data)
{
struct napi_struct *napi = data;
unsigned long last_qs = jiffies;
bool want_busy_poll;
bool in_busy_poll;
unsigned long val;
@ -7862,7 +7864,7 @@ static int napi_threaded_poll(void *data)
assign_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state,
want_busy_poll);
napi_threaded_poll_loop(napi, want_busy_poll);
napi_threaded_poll_loop(napi, want_busy_poll ? &last_qs : NULL);
}
return 0;
@ -13175,7 +13177,7 @@ static void run_backlog_napi(unsigned int cpu)
{
struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
napi_threaded_poll_loop(&sd->backlog, false);
napi_threaded_poll_loop(&sd->backlog, NULL);
}
static void backlog_napi_setup(unsigned int cpu)

Some files were not shown because too many files have changed in this diff.