Merge tag 'net-7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from IPsec, Bluetooth and netfilter

  Current release - regressions:

   - wifi: fix dev_alloc_name() return value check

   - rds: fix recursive lock in rds_tcp_conn_slots_available

  Current release - new code bugs:

   - vsock: lock down child_ns_mode as write-once

  Previous releases - regressions:

   - core:
      - do not pass flow_id to set_rps_cpu()
      - consume xmit errors of GSO frames

   - netconsole: avoid OOB reads, msg is not nul-terminated

   - netfilter: h323: fix OOB read in decode_choice()

   - tcp: re-enable acceptance of FIN packets when RWIN is 0

   - udplite: fix null-ptr-deref in __udp_enqueue_schedule_skb().

   - wifi: brcmfmac: fix potential kernel oops when probe fails

   - phy: register phy led_triggers during probe to avoid AB-BA deadlock

   - eth:
      - bnxt_en: fix deleting of Ntuple filters
      - wan: farsync: fix use-after-free bugs caused by unfinished tasklets
      - xscale: check for PTP support properly

  Previous releases - always broken:

   - tcp: fix potential race in tcp_v6_syn_recv_sock()

   - kcm: fix zero-frag skb in frag_list on partial sendmsg error

   - xfrm:
      - fix race condition in espintcp_close()
      - always flush state and policy upon NETDEV_UNREGISTER event

   - bluetooth:
      - purge error queues in socket destructors
      - fix response to L2CAP_ECRED_CONN_REQ

   - eth:
      - mlx5:
         - fix circular locking dependency in dump
         - fix "scheduling while atomic" in IPsec MAC address query
      - gve: fix incorrect buffer cleanup for QPL
      - team: avoid NETDEV_CHANGEMTU event when unregistering slave
      - usb: validate USB endpoints"

* tag 'net-7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (72 commits)
  netfilter: nf_conntrack_h323: fix OOB read in decode_choice()
  dpaa2-switch: validate num_ifs to prevent out-of-bounds write
  net: consume xmit errors of GSO frames
  vsock: document write-once behavior of the child_ns_mode sysctl
  vsock: lock down child_ns_mode as write-once
  selftests/vsock: change tests to respect write-once child ns mode
  net/mlx5e: Fix "scheduling while atomic" in IPsec MAC address query
  net/mlx5: Fix missing devlink lock in SRIOV enable error path
  net/mlx5: E-switch, Clear legacy flag when moving to switchdev
  net/mlx5: LAG, disable MPESW in lag_disable_change()
  net/mlx5: DR, Fix circular locking dependency in dump
  selftests: team: Add a reference count leak test
  team: avoid NETDEV_CHANGEMTU event when unregistering slave
  net: mana: Fix double destroy_workqueue on service rescan PCI path
  MAINTAINERS: Update maintainer entry for QUALCOMM ETHQOS ETHERNET DRIVER
  dpll: zl3073x: Remove redundant cleanup in devm_dpll_init()
  selftests/net: packetdrill: Verify acceptance of FIN packets when RWIN is 0
  tcp: re-enable acceptance of FIN packets when RWIN is 0
  vsock: Use container_of() to get net namespace in sysctl handlers
  net: usb: kaweth: validate USB endpoints
  ...
commit b9c8fc2cae
Author: Linus Torvalds
Date:   2026-02-26 08:00:13 -08:00

88 changed files with 829 additions and 334 deletions

@@ -594,6 +594,9 @@ Values:
 their sockets will only be able to connect within their own
 namespace.
 
+The first write to ``child_ns_mode`` locks its value. Subsequent writes of the
+same value succeed, but writing a different value returns ``-EBUSY``.
+
 Changing ``child_ns_mode`` only affects namespaces created after the change;
 it does not modify the current namespace or any existing children.

@@ -1292,7 +1292,6 @@ F: include/trace/events/amdxdna.h
 F: include/uapi/drm/amdxdna_accel.h
 
 AMD XGBE DRIVER
-M: "Shyam Sundar S K" <Shyam-sundar.S-k@amd.com>
 M: Raju Rangoju <Raju.Rangoju@amd.com>
 L: netdev@vger.kernel.org
 S: Maintained
@@ -6219,14 +6218,13 @@ S: Supported
 F: drivers/scsi/snic/
 
 CISCO VIC ETHERNET NIC DRIVER
-M: Christian Benvenuti <benve@cisco.com>
 M: Satish Kharat <satishkh@cisco.com>
 S: Maintained
 F: drivers/net/ethernet/cisco/enic/
 
 CISCO VIC LOW LATENCY NIC DRIVER
-M: Christian Benvenuti <benve@cisco.com>
 M: Nelson Escobar <neescoba@cisco.com>
+M: Satish Kharat <satishkh@cisco.com>
 S: Supported
 F: drivers/infiniband/hw/usnic/
 
@@ -14412,9 +14410,9 @@ LANTIQ PEF2256 DRIVER
 M: Herve Codina <herve.codina@bootlin.com>
 S: Maintained
 F: Documentation/devicetree/bindings/net/lantiq,pef2256.yaml
-F: drivers/net/wan/framer/pef2256/
+F: drivers/net/wan/framer/
 F: drivers/pinctrl/pinctrl-pef2256.c
-F: include/linux/framer/pef2256.h
+F: include/linux/framer/
 
 LASI 53c700 driver for PARISC
 M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
@@ -21695,7 +21693,7 @@ S: Maintained
 F: drivers/net/ethernet/qualcomm/emac/
 
 QUALCOMM ETHQOS ETHERNET DRIVER
-M: Vinod Koul <vkoul@kernel.org>
+M: Mohd Ayaan Anwar <mohd.anwar@oss.qualcomm.com>
 L: netdev@vger.kernel.org
 L: linux-arm-msm@vger.kernel.org
 S: Maintained

@@ -2046,19 +2046,23 @@ retry:
 	}
 
 out:
-	if (ret && retries < MAX_INIT_RETRIES) {
-		bt_dev_warn(hdev, "Retry BT power ON:%d", retries);
+	if (ret) {
 		qca_power_shutdown(hu);
-		if (hu->serdev) {
-			serdev_device_close(hu->serdev);
-			ret = serdev_device_open(hu->serdev);
-			if (ret) {
-				bt_dev_err(hdev, "failed to open port");
-				return ret;
+
+		if (retries < MAX_INIT_RETRIES) {
+			bt_dev_warn(hdev, "Retry BT power ON:%d", retries);
+			if (hu->serdev) {
+				serdev_device_close(hu->serdev);
+				ret = serdev_device_open(hu->serdev);
+				if (ret) {
+					bt_dev_err(hdev, "failed to open port");
+					return ret;
+				}
 			}
+			retries++;
+			goto retry;
 		}
-		retries++;
-		goto retry;
+		return ret;
 	}
 
 	/* Setup bdaddr */

@@ -981,11 +981,7 @@ zl3073x_devm_dpll_init(struct zl3073x_dev *zldev, u8 num_dplls)
 	}
 
 	/* Add devres action to release DPLL related resources */
-	rc = devm_add_action_or_reset(zldev->dev, zl3073x_dev_dpll_fini, zldev);
-	if (rc)
-		goto error;
-
-	return 0;
+	return devm_add_action_or_reset(zldev->dev, zl3073x_dev_dpll_fini, zldev);
 
 error:
 	zl3073x_dev_dpll_fini(zldev);
@@ -1026,6 +1022,7 @@ int zl3073x_dev_probe(struct zl3073x_dev *zldev,
 			 "Unknown or non-match chip ID: 0x%0x\n",
 			 id);
 	}
+	zldev->chip_id = id;
 
 	/* Read revision, firmware version and custom config version */
 	rc = zl3073x_read_u16(zldev, ZL_REG_REVISION, &revision);

@@ -35,6 +35,7 @@ struct zl3073x_dpll;
  * @dev: pointer to device
  * @regmap: regmap to access device registers
  * @multiop_lock: to serialize multiple register operations
+ * @chip_id: chip ID read from hardware
  * @ref: array of input references' invariants
  * @out: array of outs' invariants
  * @synth: array of synths' invariants
@@ -48,6 +49,7 @@ struct zl3073x_dev {
 	struct device *dev;
 	struct regmap *regmap;
 	struct mutex multiop_lock;
+	u16 chip_id;
 
 	/* Invariants */
 	struct zl3073x_ref ref[ZL3073X_NUM_REFS];
@@ -144,6 +146,32 @@ int zl3073x_write_hwreg_seq(struct zl3073x_dev *zldev,
 int zl3073x_ref_phase_offsets_update(struct zl3073x_dev *zldev, int channel);
 
+/**
+ * zl3073x_dev_is_ref_phase_comp_32bit - check ref phase comp register size
+ * @zldev: pointer to zl3073x device
+ *
+ * Some chip IDs have a 32-bit wide ref_phase_offset_comp register instead
+ * of the default 48-bit.
+ *
+ * Return: true if the register is 32-bit, false if 48-bit
+ */
+static inline bool
+zl3073x_dev_is_ref_phase_comp_32bit(struct zl3073x_dev *zldev)
+{
+	switch (zldev->chip_id) {
+	case 0x0E30:
+	case 0x0E93:
+	case 0x0E94:
+	case 0x0E95:
+	case 0x0E96:
+	case 0x0E97:
+	case 0x1F60:
+		return true;
+	default:
+		return false;
+	}
+}
+
 static inline bool
 zl3073x_is_n_pin(u8 id)
 {

@@ -475,8 +475,11 @@ zl3073x_dpll_input_pin_phase_adjust_get(const struct dpll_pin *dpll_pin,
 	ref_id = zl3073x_input_pin_ref_get(pin->id);
 	ref = zl3073x_ref_state_get(zldev, ref_id);
 
-	/* Perform sign extension for 48bit signed value */
-	phase_comp = sign_extend64(ref->phase_comp, 47);
+	/* Perform sign extension based on register width */
+	if (zl3073x_dev_is_ref_phase_comp_32bit(zldev))
+		phase_comp = sign_extend64(ref->phase_comp, 31);
+	else
+		phase_comp = sign_extend64(ref->phase_comp, 47);
 
 	/* Reverse two's complement negation applied during set and convert
 	 * to 32bit signed int

@@ -121,8 +121,16 @@ int zl3073x_ref_state_fetch(struct zl3073x_dev *zldev, u8 index)
 		return rc;
 
 	/* Read phase compensation register */
-	rc = zl3073x_read_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
-			      &ref->phase_comp);
+	if (zl3073x_dev_is_ref_phase_comp_32bit(zldev)) {
+		u32 val;
+
+		rc = zl3073x_read_u32(zldev, ZL_REG_REF_PHASE_OFFSET_COMP_32,
+				      &val);
+		ref->phase_comp = val;
+	} else {
+		rc = zl3073x_read_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
+				      &ref->phase_comp);
+	}
 	if (rc)
 		return rc;
@@ -179,9 +187,16 @@ int zl3073x_ref_state_set(struct zl3073x_dev *zldev, u8 index,
 	if (!rc && dref->sync_ctrl != ref->sync_ctrl)
 		rc = zl3073x_write_u8(zldev, ZL_REG_REF_SYNC_CTRL,
 				      ref->sync_ctrl);
-	if (!rc && dref->phase_comp != ref->phase_comp)
-		rc = zl3073x_write_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
-				       ref->phase_comp);
+	if (!rc && dref->phase_comp != ref->phase_comp) {
+		if (zl3073x_dev_is_ref_phase_comp_32bit(zldev))
+			rc = zl3073x_write_u32(zldev,
+					       ZL_REG_REF_PHASE_OFFSET_COMP_32,
+					       ref->phase_comp);
+		else
+			rc = zl3073x_write_u48(zldev,
+					       ZL_REG_REF_PHASE_OFFSET_COMP,
+					       ref->phase_comp);
+	}
 	if (rc)
 		return rc;

@@ -194,6 +194,7 @@
 #define ZL_REF_CONFIG_DIFF_EN		BIT(2)
 
 #define ZL_REG_REF_PHASE_OFFSET_COMP	ZL_REG(10, 0x28, 6)
+#define ZL_REG_REF_PHASE_OFFSET_COMP_32	ZL_REG(10, 0x28, 4)
 
 #define ZL_REG_REF_SYNC_CTRL		ZL_REG(10, 0x2e, 1)
 #define ZL_REF_SYNC_CTRL_MODE		GENMASK(2, 0)

@@ -2278,6 +2278,12 @@ int sja1105_static_config_reload(struct sja1105_private *priv,
 	 * change it through the dynamic interface later.
 	 */
 	dsa_switch_for_each_available_port(dp, ds) {
+		/* May be called during unbind when we unoffload a VLAN-aware
+		 * bridge from port 1 while port 0 was already torn down
+		 */
+		if (!dp->pl)
+			continue;
+
 		phylink_replay_link_begin(dp->pl);
 		mac[dp->index].speed = priv->info->port_speed[SJA1105_SPEED_AUTO];
 	}
@@ -2334,7 +2340,8 @@ int sja1105_static_config_reload(struct sja1105_private *priv,
 	}
 
 	dsa_switch_for_each_available_port(dp, ds)
-		phylink_replay_link_end(dp->pl);
+		if (dp->pl)
+			phylink_replay_link_end(dp->pl);
 
 	rc = sja1105_reload_cbs(priv);
 	if (rc < 0)

@@ -6232,6 +6232,9 @@ int bnxt_hwrm_cfa_ntuple_filter_free(struct bnxt *bp,
 	int rc;
 
 	set_bit(BNXT_FLTR_FW_DELETED, &fltr->base.state);
+	if (!test_bit(BNXT_STATE_OPEN, &bp->state))
+		return 0;
+
 	rc = hwrm_req_init(bp, req, HWRM_CFA_NTUPLE_FILTER_FREE);
 	if (rc)
 		return rc;
@@ -10879,12 +10882,10 @@ void bnxt_del_one_rss_ctx(struct bnxt *bp, struct bnxt_rss_ctx *rss_ctx,
 	struct bnxt_ntuple_filter *ntp_fltr;
 	int i;
 
-	if (netif_running(bp->dev)) {
-		bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);
-		for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {
-			if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)
-				bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);
-		}
+	bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);
+	for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {
+		if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)
+			bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);
 	}
 
 	if (!all)
 		return;

@@ -3034,6 +3034,13 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
 		goto err_close;
 	}
 
+	if (ethsw->sw_attr.num_ifs >= DPSW_MAX_IF) {
+		dev_err(dev, "DPSW num_ifs %u exceeds max %u\n",
+			ethsw->sw_attr.num_ifs, DPSW_MAX_IF);
+		err = -EINVAL;
+		goto err_close;
+	}
+
 	err = dpsw_get_api_version(ethsw->mc_io, 0,
 				   &ethsw->major,
 				   &ethsw->minor);

@@ -167,6 +167,25 @@ gve_free_pending_packet(struct gve_tx_ring *tx,
 	}
 }
 
+static void gve_unmap_packet(struct device *dev,
+			     struct gve_tx_pending_packet_dqo *pkt)
+{
+	int i;
+
+	if (!pkt->num_bufs)
+		return;
+
+	/* SKB linear portion is guaranteed to be mapped */
+	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
+			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
+	for (i = 1; i < pkt->num_bufs; i++) {
+		netmem_dma_unmap_page_attrs(dev, dma_unmap_addr(pkt, dma[i]),
+					    dma_unmap_len(pkt, len[i]),
+					    DMA_TO_DEVICE, 0);
+	}
+	pkt->num_bufs = 0;
+}
+
 /* gve_tx_free_desc - Cleans up all pending tx requests and buffers.
  */
 static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
@@ -176,21 +195,12 @@ static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
 	for (i = 0; i < tx->dqo.num_pending_packets; i++) {
 		struct gve_tx_pending_packet_dqo *cur_state =
 			&tx->dqo.pending_packets[i];
-		int j;
 
-		for (j = 0; j < cur_state->num_bufs; j++) {
-			if (j == 0) {
-				dma_unmap_single(tx->dev,
-					dma_unmap_addr(cur_state, dma[j]),
-					dma_unmap_len(cur_state, len[j]),
-					DMA_TO_DEVICE);
-			} else {
-				dma_unmap_page(tx->dev,
-					dma_unmap_addr(cur_state, dma[j]),
-					dma_unmap_len(cur_state, len[j]),
-					DMA_TO_DEVICE);
-			}
-		}
+		if (tx->dqo.qpl)
+			gve_free_tx_qpl_bufs(tx, cur_state);
+		else
+			gve_unmap_packet(tx->dev, cur_state);
+
 		if (cur_state->skb) {
 			dev_consume_skb_any(cur_state->skb);
 			cur_state->skb = NULL;
@@ -1154,22 +1164,6 @@ static void remove_from_list(struct gve_tx_ring *tx,
 	}
 }
 
-static void gve_unmap_packet(struct device *dev,
-			     struct gve_tx_pending_packet_dqo *pkt)
-{
-	int i;
-
-	/* SKB linear portion is guaranteed to be mapped */
-	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
-			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
-	for (i = 1; i < pkt->num_bufs; i++) {
-		netmem_dma_unmap_page_attrs(dev, dma_unmap_addr(pkt, dma[i]),
-					    dma_unmap_len(pkt, len[i]),
-					    DMA_TO_DEVICE, 0);
-	}
-	pkt->num_bufs = 0;
-}
-
 /* Completion types and expected behavior:
  *  No Miss compl + Packet compl = Packet completed normally.
  *  Miss compl + Re-inject compl = Packet completed normally.

@@ -259,7 +259,6 @@ static void mlx5e_ipsec_init_limits(struct mlx5e_ipsec_sa_entry *sa_entry,
 static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
 				  struct mlx5_accel_esp_xfrm_attrs *attrs)
 {
-	struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
 	struct mlx5e_ipsec_addr *addrs = &attrs->addrs;
 	struct net_device *netdev = sa_entry->dev;
 	struct xfrm_state *x = sa_entry->x;
@@ -276,7 +275,7 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
 	    attrs->type != XFRM_DEV_OFFLOAD_PACKET)
 		return;
 
-	mlx5_query_mac_address(mdev, addr);
+	ether_addr_copy(addr, netdev->dev_addr);
 	switch (attrs->dir) {
 	case XFRM_DEV_OFFLOAD_IN:
 		src = attrs->dmac;

@@ -4068,6 +4068,8 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
 
 	if (mlx5_mode == MLX5_ESWITCH_LEGACY)
 		esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY;
+	if (mlx5_mode == MLX5_ESWITCH_OFFLOADS)
+		esw->dev->priv.flags &= ~MLX5_PRIV_FLAGS_SWITCH_LEGACY;
 	mlx5_eswitch_disable_locked(esw);
 	if (mlx5_mode == MLX5_ESWITCH_OFFLOADS) {
 		if (mlx5_devlink_trap_get_num_active(esw->dev)) {

@@ -1869,8 +1869,12 @@ void mlx5_lag_disable_change(struct mlx5_core_dev *dev)
 	mutex_lock(&ldev->lock);
 	ldev->mode_changes_in_progress++;
-	if (__mlx5_lag_is_active(ldev))
-		mlx5_disable_lag(ldev);
+	if (__mlx5_lag_is_active(ldev)) {
+		if (ldev->mode == MLX5_LAG_MODE_MPESW)
+			mlx5_lag_disable_mpesw(ldev);
+		else
+			mlx5_disable_lag(ldev);
+	}
 
 	mutex_unlock(&ldev->lock);
 	mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);

@@ -65,7 +65,7 @@ err_metadata:
 	return err;
 }
 
-static int enable_mpesw(struct mlx5_lag *ldev)
+static int mlx5_lag_enable_mpesw(struct mlx5_lag *ldev)
 {
 	struct mlx5_core_dev *dev0;
 	int err;
@@ -126,7 +126,7 @@ err_add_devices:
 	return err;
 }
 
-static void disable_mpesw(struct mlx5_lag *ldev)
+void mlx5_lag_disable_mpesw(struct mlx5_lag *ldev)
 {
 	if (ldev->mode == MLX5_LAG_MODE_MPESW) {
 		mlx5_mpesw_metadata_cleanup(ldev);
@@ -152,9 +152,9 @@ static void mlx5_mpesw_work(struct work_struct *work)
 	}
 
 	if (mpesww->op == MLX5_MPESW_OP_ENABLE)
-		mpesww->result = enable_mpesw(ldev);
+		mpesww->result = mlx5_lag_enable_mpesw(ldev);
 	else if (mpesww->op == MLX5_MPESW_OP_DISABLE)
-		disable_mpesw(ldev);
+		mlx5_lag_disable_mpesw(ldev);
 unlock:
 	mutex_unlock(&ldev->lock);
 	mlx5_devcom_comp_unlock(devcom);

@@ -31,6 +31,11 @@ int mlx5_lag_mpesw_do_mirred(struct mlx5_core_dev *mdev,
 bool mlx5_lag_is_mpesw(struct mlx5_core_dev *dev);
 void mlx5_lag_mpesw_disable(struct mlx5_core_dev *dev);
 int mlx5_lag_mpesw_enable(struct mlx5_core_dev *dev);
+#ifdef CONFIG_MLX5_ESWITCH
+void mlx5_lag_disable_mpesw(struct mlx5_lag *ldev);
+#else
+static inline void mlx5_lag_disable_mpesw(struct mlx5_lag *ldev) {}
+#endif /* CONFIG_MLX5_ESWITCH */
 
 #ifdef CONFIG_MLX5_ESWITCH
 void mlx5_mpesw_speed_update_work(struct work_struct *work);

@@ -193,7 +193,9 @@ static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
 	err = pci_enable_sriov(pdev, num_vfs);
 	if (err) {
 		mlx5_core_warn(dev, "pci_enable_sriov failed : %d\n", err);
+		devl_lock(devlink);
 		mlx5_device_disable_sriov(dev, num_vfs, true, true);
+		devl_unlock(devlink);
 	}
 	return err;
 }

@@ -1051,8 +1051,8 @@ static int dr_dump_domain_all(struct seq_file *file, struct mlx5dr_domain *dmn)
 	struct mlx5dr_table *tbl;
 	int ret;
 
+	mutex_lock(&dmn->dump_info.dbg_mutex);
 	mlx5dr_domain_lock(dmn);
-	mutex_lock(&dmn->dump_info.dbg_mutex);
 
 	ret = dr_dump_domain(file, dmn);
 	if (ret < 0)
@@ -1065,8 +1065,8 @@ static int dr_dump_domain_all(struct seq_file *file, struct mlx5dr_domain *dmn)
 	}
 
 unlock_mutex:
-	mlx5dr_domain_unlock(dmn);
 	mutex_unlock(&dmn->dump_info.dbg_mutex);
+	mlx5dr_domain_unlock(dmn);
 	return ret;
 }

@@ -1946,7 +1946,10 @@ static void mana_gd_cleanup(struct pci_dev *pdev)
 
 	mana_gd_remove_irqs(pdev);
 
-	destroy_workqueue(gc->service_wq);
+	if (gc->service_wq) {
+		destroy_workqueue(gc->service_wq);
+		gc->service_wq = NULL;
+	}
 
 	dev_dbg(&pdev->dev, "mana gdma cleanup successful\n");
 }

@@ -3757,7 +3757,9 @@ void mana_rdma_remove(struct gdma_dev *gd)
 	}
 
 	WRITE_ONCE(gd->rdma_teardown, true);
-	flush_workqueue(gc->service_wq);
+
+	if (gc->service_wq)
+		flush_workqueue(gc->service_wq);
 
 	if (gd->adev)
 		remove_adev(gd);

@@ -853,6 +853,7 @@ static int stmmac_init_timestamping(struct stmmac_priv *priv)
 	netdev_info(priv->dev,
 		    "IEEE 1588-2008 Advanced Timestamp supported\n");
 
+	memset(&priv->tstamp_config, 0, sizeof(priv->tstamp_config));
 	priv->hwts_tx_en = 0;
 	priv->hwts_rx_en = 0;

@@ -403,15 +403,12 @@ static int ixp4xx_hwtstamp_set(struct net_device *netdev,
 	int ret;
 	int ch;
 
-	if (!cpu_is_ixp46x())
-		return -EOPNOTSUPP;
-
 	if (!netif_running(netdev))
 		return -EINVAL;
 
 	ret = ixp46x_ptp_find(&port->timesync_regs, &port->phc_index);
 	if (ret)
-		return ret;
+		return -EOPNOTSUPP;
 
 	ch = PORT2CHANNEL(port);
 	regs = port->timesync_regs;

@@ -232,6 +232,9 @@ static struct ixp_clock ixp_clock;
 
 int ixp46x_ptp_find(struct ixp46x_ts_regs *__iomem *regs, int *phc_index)
 {
+	if (!cpu_is_ixp46x())
+		return -ENODEV;
+
 	*regs = ixp_clock.regs;
 	*phc_index = ptp_clock_index(ixp_clock.ptp_clock);

@@ -1679,7 +1679,8 @@ static void send_msg_no_fragmentation(struct netconsole_target *nt,
 
 	if (release_len) {
 		release = init_utsname()->release;
-		scnprintf(nt->buf, MAX_PRINT_CHUNK, "%s,%s", release, msg);
+		scnprintf(nt->buf, MAX_PRINT_CHUNK, "%s,%.*s", release,
+			  msg_len, msg);
 		msg_len += release_len;
 	} else {
 		memcpy(nt->buf, msg, msg_len);

@@ -70,37 +70,56 @@ static void ovpn_tcp_to_userspace(struct ovpn_peer *peer, struct sock *sk,
 	peer->tcp.sk_cb.sk_data_ready(sk);
 }
 
+static struct sk_buff *ovpn_tcp_skb_packet(const struct ovpn_peer *peer,
+					   struct sk_buff *orig_skb,
+					   const int pkt_len, const int pkt_off)
+{
+	struct sk_buff *ovpn_skb;
+	int err;
+
+	/* create a new skb with only the content of the current packet */
+	ovpn_skb = netdev_alloc_skb(peer->ovpn->dev, pkt_len);
+	if (unlikely(!ovpn_skb))
+		goto err;
+
+	skb_copy_header(ovpn_skb, orig_skb);
+	err = skb_copy_bits(orig_skb, pkt_off, skb_put(ovpn_skb, pkt_len),
+			    pkt_len);
+	if (unlikely(err)) {
+		net_warn_ratelimited("%s: skb_copy_bits failed for peer %u\n",
+				     netdev_name(peer->ovpn->dev), peer->id);
+		kfree_skb(ovpn_skb);
+		goto err;
+	}
+
+	consume_skb(orig_skb);
+	return ovpn_skb;
+err:
+	kfree_skb(orig_skb);
+	return NULL;
+}
+
 static void ovpn_tcp_rcv(struct strparser *strp, struct sk_buff *skb)
 {
 	struct ovpn_peer *peer = container_of(strp, struct ovpn_peer, tcp.strp);
 	struct strp_msg *msg = strp_msg(skb);
-	size_t pkt_len = msg->full_len - 2;
-	size_t off = msg->offset + 2;
+	int pkt_len = msg->full_len - 2;
 	u8 opcode;
 
-	/* ensure skb->data points to the beginning of the openvpn packet */
-	if (!pskb_pull(skb, off)) {
-		net_warn_ratelimited("%s: packet too small for peer %u\n",
-				     netdev_name(peer->ovpn->dev), peer->id);
-		goto err;
-	}
-
-	/* strparser does not trim the skb for us, therefore we do it now */
-	if (pskb_trim(skb, pkt_len) != 0) {
-		net_warn_ratelimited("%s: trimming skb failed for peer %u\n",
-				     netdev_name(peer->ovpn->dev), peer->id);
-		goto err;
-	}
-
-	/* we need the first 4 bytes of data to be accessible
+	/* we need at least 4 bytes of data in the packet
 	 * to extract the opcode and the key ID later on
 	 */
-	if (!pskb_may_pull(skb, OVPN_OPCODE_SIZE)) {
+	if (unlikely(pkt_len < OVPN_OPCODE_SIZE)) {
 		net_warn_ratelimited("%s: packet too small to fetch opcode for peer %u\n",
 				     netdev_name(peer->ovpn->dev), peer->id);
 		goto err;
 	}
 
+	/* extract the packet into a new skb */
+	skb = ovpn_tcp_skb_packet(peer, skb, pkt_len, msg->offset + 2);
+	if (unlikely(!skb))
+		goto err;
+
 	/* DATA_V2 packets are handled in kernel, the rest goes to user space */
 	opcode = ovpn_opcode_from_skb(skb, 0);
 	if (unlikely(opcode != OVPN_DATA_V2)) {
@@ -113,7 +132,7 @@ static void ovpn_tcp_rcv(struct strparser *strp, struct sk_buff *skb)
 		/* The packet size header must be there when sending the packet
 		 * to userspace, therefore we put it back
 		 */
-		skb_push(skb, 2);
+		*(__be16 *)__skb_push(skb, sizeof(u16)) = htons(pkt_len);
 		ovpn_tcp_to_userspace(peer, strp->sk, skb);
 		return;
 	}

@@ -1866,8 +1866,6 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
 		goto error;
 
 	phy_resume(phydev);
-	if (!phydev->is_on_sfp_module)
-		phy_led_triggers_register(phydev);
 
 	/**
 	 * If the external phy used by current mac interface is managed by
@@ -1982,9 +1980,6 @@ void phy_detach(struct phy_device *phydev)
 	phydev->phy_link_change = NULL;
 	phydev->phylink = NULL;
 
-	if (!phydev->is_on_sfp_module)
-		phy_led_triggers_unregister(phydev);
-
 	if (phydev->mdio.dev.driver)
 		module_put(phydev->mdio.dev.driver->owner);
@@ -3778,16 +3773,27 @@ static int phy_probe(struct device *dev)
 	/* Set the state to READY by default */
 	phydev->state = PHY_READY;
 
+	/* Register the PHY LED triggers */
+	if (!phydev->is_on_sfp_module)
+		phy_led_triggers_register(phydev);
+
 	/* Get the LEDs from the device tree, and instantiate standard
 	 * LEDs for them.
 	 */
-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev))
+	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev)) {
 		err = of_phy_leds(phydev);
+		if (err)
+			goto out;
+	}
+
+	return 0;
 
 out:
+	if (!phydev->is_on_sfp_module)
+		phy_led_triggers_unregister(phydev);
+
 	/* Re-assert the reset signal on error */
-	if (err)
-		phy_device_reset(phydev, 1);
+	phy_device_reset(phydev, 1);
 
 	return err;
 }
@@ -3801,6 +3807,9 @@ static int phy_remove(struct device *dev)
 	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev))
 		phy_leds_unregister(phydev);
 
+	if (!phydev->is_on_sfp_module)
+		phy_led_triggers_unregister(phydev);
+
 	phydev->state = PHY_DOWN;
 
 	phy_cleanup_ports(phydev);


@@ -375,7 +375,7 @@ static int qca807x_gpio_get(struct gpio_chip *gc, unsigned int offset)
 	reg = QCA807X_MMD7_LED_FORCE_CTRL(offset);
 	val = phy_read_mmd(priv->phy, MDIO_MMD_AN, reg);
-	return FIELD_GET(QCA807X_GPIO_FORCE_MODE_MASK, val);
+	return !!FIELD_GET(QCA807X_GPIO_FORCE_MODE_MASK, val);
 }
 static int qca807x_gpio_set(struct gpio_chip *gc, unsigned int offset, int value)


@@ -1290,7 +1290,7 @@ err_set_mtu:
 static void __team_port_change_port_removed(struct team_port *port);
-static int team_port_del(struct team *team, struct net_device *port_dev)
+static int team_port_del(struct team *team, struct net_device *port_dev, bool unregister)
 {
 	struct net_device *dev = team->dev;
 	struct team_port *port;
@@ -1328,7 +1328,13 @@ static int team_port_del(struct team *team, struct net_device *port_dev, bool unregister)
 	__team_port_change_port_removed(port);
 	team_port_set_orig_dev_addr(port);
-	dev_set_mtu(port_dev, port->orig.mtu);
+	if (unregister) {
+		netdev_lock_ops(port_dev);
+		__netif_set_mtu(port_dev, port->orig.mtu);
+		netdev_unlock_ops(port_dev);
+	} else {
+		dev_set_mtu(port_dev, port->orig.mtu);
+	}
 	kfree_rcu(port, rcu);
 	netdev_info(dev, "Port device %s removed\n", portname);
 	netdev_compute_master_upper_features(team->dev, true);
@@ -1632,7 +1638,7 @@ static void team_uninit(struct net_device *dev)
 	ASSERT_RTNL();
 	list_for_each_entry_safe(port, tmp, &team->port_list, list)
-		team_port_del(team, port->dev);
+		team_port_del(team, port->dev, false);
 	__team_change_mode(team, NULL); /* cleanup */
 	__team_options_unregister(team, team_options, ARRAY_SIZE(team_options));
@@ -1931,7 +1937,16 @@ static int team_del_slave(struct net_device *dev, struct net_device *port_dev)
 	ASSERT_RTNL();
-	return team_port_del(team, port_dev);
+	return team_port_del(team, port_dev, false);
+}
+
+static int team_del_slave_on_unregister(struct net_device *dev, struct net_device *port_dev)
+{
+	struct team *team = netdev_priv(dev);
+
+	ASSERT_RTNL();
+
+	return team_port_del(team, port_dev, true);
 }
 static netdev_features_t team_fix_features(struct net_device *dev,
@@ -2924,7 +2939,7 @@ static int team_device_event(struct notifier_block *unused,
 						  !!netif_oper_up(port->dev));
 		break;
 	case NETDEV_UNREGISTER:
-		team_del_slave(port->team->dev, dev);
+		team_del_slave_on_unregister(port->team->dev, dev);
 		break;
 	case NETDEV_FEAT_CHANGE:
 		if (!port->team->notifier_ctx) {
@@ -2997,3 +3012,4 @@ MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Jiri Pirko <jpirko@redhat.com>");
 MODULE_DESCRIPTION("Ethernet team device driver");
 MODULE_ALIAS_RTNL_LINK(DRV_NAME);
+MODULE_IMPORT_NS("NETDEV_INTERNAL");


@@ -132,11 +132,18 @@ kalmia_bind(struct usbnet *dev, struct usb_interface *intf)
 {
 	int status;
 	u8 ethernet_addr[ETH_ALEN];
+	static const u8 ep_addr[] = {
+		1 | USB_DIR_IN,
+		2 | USB_DIR_OUT,
+		0};
 	/* Don't bind to AT command interface */
 	if (intf->cur_altsetting->desc.bInterfaceClass != USB_CLASS_VENDOR_SPEC)
 		return -EINVAL;
+	if (!usb_check_bulk_endpoints(intf, ep_addr))
+		return -ENODEV;
 	dev->in = usb_rcvbulkpipe(dev->udev, 0x81 & USB_ENDPOINT_NUMBER_MASK);
 	dev->out = usb_sndbulkpipe(dev->udev, 0x02 & USB_ENDPOINT_NUMBER_MASK);
 	dev->status = NULL;


@@ -765,7 +765,6 @@ static void kaweth_set_rx_mode(struct net_device *net)
 	netdev_dbg(net, "Setting Rx mode to %d\n", packet_filter_bitmap);
-	netif_stop_queue(net);
 	if (net->flags & IFF_PROMISC) {
 		packet_filter_bitmap |= KAWETH_PACKET_FILTER_PROMISCUOUS;
@@ -775,7 +774,6 @@ static void kaweth_set_rx_mode(struct net_device *net)
 	}
 	kaweth->packet_filter_bitmap = packet_filter_bitmap;
-	netif_wake_queue(net);
 }
 /****************************************************************
@@ -885,6 +883,13 @@ static int kaweth_probe(
 	const eth_addr_t bcast_addr = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
 	int result = 0;
 	int rv = -EIO;
+	static const u8 bulk_ep_addr[] = {
+		1 | USB_DIR_IN,
+		2 | USB_DIR_OUT,
+		0};
+	static const u8 int_ep_addr[] = {
+		3 | USB_DIR_IN,
+		0};
 	dev_dbg(dev,
 		"Kawasaki Device Probe (Device number:%d): 0x%4.4x:0x%4.4x:0x%4.4x\n",
@@ -898,6 +903,12 @@ static int kaweth_probe(
 		(int)udev->descriptor.bLength,
 		(int)udev->descriptor.bDescriptorType);
+	if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
+	    !usb_check_int_endpoints(intf, int_ep_addr)) {
+		dev_err(dev, "couldn't find required endpoints\n");
+		return -ENODEV;
+	}
 	netdev = alloc_etherdev(sizeof(*kaweth));
 	if (!netdev)
 		return -ENOMEM;


@@ -2094,8 +2094,6 @@ static int lan78xx_mdio_init(struct lan78xx_net *dev)
 		dev->mdiobus->phy_mask = ~(1 << 1);
 		break;
 	case ID_REV_CHIP_ID_7801_:
-		/* scan thru PHYAD[2..0] */
-		dev->mdiobus->phy_mask = ~(0xFF);
 		break;
 	}


@@ -28,6 +28,17 @@ static const char driver_name[] = "pegasus";
 			BMSR_100FULL | BMSR_ANEGCAPABLE)
 #define CARRIER_CHECK_DELAY (2 * HZ)
+/*
+ * USB endpoints.
+ */
+enum pegasus_usb_ep {
+	PEGASUS_USB_EP_CONTROL = 0,
+	PEGASUS_USB_EP_BULK_IN = 1,
+	PEGASUS_USB_EP_BULK_OUT = 2,
+	PEGASUS_USB_EP_INT_IN = 3,
+};
 static bool loopback;
 static bool mii_mode;
 static char *devid;
@@ -542,7 +553,7 @@ static void read_bulk_callback(struct urb *urb)
 		goto tl_sched;
 goon:
 	usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
-			  usb_rcvbulkpipe(pegasus->usb, 1),
+			  usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN),
 			  pegasus->rx_skb->data, PEGASUS_MTU,
 			  read_bulk_callback, pegasus);
 	rx_status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC);
@@ -582,7 +593,7 @@ static void rx_fixup(struct tasklet_struct *t)
 		return;
 	}
 	usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
-			  usb_rcvbulkpipe(pegasus->usb, 1),
+			  usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN),
 			  pegasus->rx_skb->data, PEGASUS_MTU,
 			  read_bulk_callback, pegasus);
 try_again:
@@ -710,7 +721,7 @@ static netdev_tx_t pegasus_start_xmit(struct sk_buff *skb,
 	((__le16 *) pegasus->tx_buff)[0] = cpu_to_le16(l16);
 	skb_copy_from_linear_data(skb, pegasus->tx_buff + 2, skb->len);
 	usb_fill_bulk_urb(pegasus->tx_urb, pegasus->usb,
-			  usb_sndbulkpipe(pegasus->usb, 2),
+			  usb_sndbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_OUT),
 			  pegasus->tx_buff, count,
 			  write_bulk_callback, pegasus);
 	if ((res = usb_submit_urb(pegasus->tx_urb, GFP_ATOMIC))) {
@@ -801,8 +812,19 @@ static void unlink_all_urbs(pegasus_t *pegasus)
 static int alloc_urbs(pegasus_t *pegasus)
 {
+	static const u8 bulk_ep_addr[] = {
+		1 | USB_DIR_IN,
+		2 | USB_DIR_OUT,
+		0};
+	static const u8 int_ep_addr[] = {
+		3 | USB_DIR_IN,
+		0};
 	int res = -ENOMEM;
+	if (!usb_check_bulk_endpoints(pegasus->intf, bulk_ep_addr) ||
+	    !usb_check_int_endpoints(pegasus->intf, int_ep_addr))
+		return -ENODEV;
 	pegasus->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
 	if (!pegasus->rx_urb) {
 		return res;
@@ -837,7 +859,7 @@ static int pegasus_open(struct net_device *net)
 	set_registers(pegasus, EthID, 6, net->dev_addr);
 	usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
-			  usb_rcvbulkpipe(pegasus->usb, 1),
+			  usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN),
 			  pegasus->rx_skb->data, PEGASUS_MTU,
 			  read_bulk_callback, pegasus);
 	if ((res = usb_submit_urb(pegasus->rx_urb, GFP_KERNEL))) {
@@ -848,7 +870,7 @@ static int pegasus_open(struct net_device *net)
 	}
 	usb_fill_int_urb(pegasus->intr_urb, pegasus->usb,
-			 usb_rcvintpipe(pegasus->usb, 3),
+			 usb_rcvintpipe(pegasus->usb, PEGASUS_USB_EP_INT_IN),
 			 pegasus->intr_buff, sizeof(pegasus->intr_buff),
 			 intr_callback, pegasus, pegasus->intr_interval);
 	if ((res = usb_submit_urb(pegasus->intr_urb, GFP_KERNEL))) {
@@ -1133,16 +1155,31 @@ static int pegasus_probe(struct usb_interface *intf,
 	pegasus_t *pegasus;
 	int dev_index = id - pegasus_ids;
 	int res = -ENOMEM;
+	static const u8 bulk_ep_addr[] = {
+		PEGASUS_USB_EP_BULK_IN | USB_DIR_IN,
+		PEGASUS_USB_EP_BULK_OUT | USB_DIR_OUT,
+		0};
+	static const u8 int_ep_addr[] = {
+		PEGASUS_USB_EP_INT_IN | USB_DIR_IN,
+		0};
 	if (pegasus_blacklisted(dev))
 		return -ENODEV;
+	/* Verify that all required endpoints are present */
+	if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
+	    !usb_check_int_endpoints(intf, int_ep_addr)) {
+		dev_err(&intf->dev, "Missing or invalid endpoints\n");
+		return -ENODEV;
+	}
 	net = alloc_etherdev(sizeof(struct pegasus));
 	if (!net)
 		goto out;
 	pegasus = netdev_priv(net);
 	pegasus->dev_index = dev_index;
+	pegasus->intf = intf;
 	res = alloc_urbs(pegasus);
 	if (res < 0) {
@@ -1154,7 +1191,6 @@ static int pegasus_probe(struct usb_interface *intf,
 	INIT_DELAYED_WORK(&pegasus->carrier_check, check_carrier);
-	pegasus->intf = intf;
 	pegasus->usb = dev;
 	pegasus->net = net;


@@ -2550,6 +2550,8 @@ fst_remove_one(struct pci_dev *pdev)
 	fst_disable_intr(card);
 	free_irq(card->irq, card);
+	tasklet_kill(&fst_tx_task);
+	tasklet_kill(&fst_int_task);
 	iounmap(card->ctlmem);
 	iounmap(card->mem);


@@ -951,11 +951,10 @@ int brcmf_sdiod_probe(struct brcmf_sdio_dev *sdiodev)
 		goto out;
 	/* try to attach to the target device */
-	sdiodev->bus = brcmf_sdio_probe(sdiodev);
-	if (IS_ERR(sdiodev->bus)) {
-		ret = PTR_ERR(sdiodev->bus);
+	ret = brcmf_sdio_probe(sdiodev);
+	if (ret)
 		goto out;
-	}
 	brcmf_sdiod_host_fixup(sdiodev->func2->card->host);
 out:
 	if (ret)


@@ -4445,7 +4445,7 @@ brcmf_sdio_prepare_fw_request(struct brcmf_sdio *bus)
 	return fwreq;
 }
-struct brcmf_sdio *brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev)
+int brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev)
 {
 	int ret;
 	struct brcmf_sdio *bus;
@@ -4551,11 +4551,12 @@ struct brcmf_sdio *brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev)
 		goto fail;
 	}
-	return bus;
+	return 0;
 fail:
 	brcmf_sdio_remove(bus);
-	return ERR_PTR(ret);
+	sdiodev->bus = NULL;
+	return ret;
 }
 /* Detach and free everything */


@@ -358,7 +358,7 @@ void brcmf_sdiod_freezer_uncount(struct brcmf_sdio_dev *sdiodev);
 int brcmf_sdiod_probe(struct brcmf_sdio_dev *sdiodev);
 int brcmf_sdiod_remove(struct brcmf_sdio_dev *sdiodev);
-struct brcmf_sdio *brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev);
+int brcmf_sdio_probe(struct brcmf_sdio_dev *sdiodev);
 void brcmf_sdio_remove(struct brcmf_sdio *bus);
 void brcmf_sdio_isr(struct brcmf_sdio *bus, bool in_isr);


@@ -799,8 +799,8 @@ static void lbs_free_adapter(struct lbs_private *priv)
 {
 	lbs_free_cmd_buffer(priv);
 	kfifo_free(&priv->event_fifo);
-	timer_delete(&priv->command_timer);
-	timer_delete(&priv->tx_lockup_timer);
+	timer_delete_sync(&priv->command_timer);
+	timer_delete_sync(&priv->tx_lockup_timer);
 }
 static const struct net_device_ops lbs_netdev_ops = {


@@ -3148,7 +3148,7 @@ struct wireless_dev *mwifiex_add_virtual_intf(struct wiphy *wiphy,
 	SET_NETDEV_DEV(dev, adapter->dev);
 	ret = dev_alloc_name(dev, name);
-	if (ret)
+	if (ret < 0)
 		goto err_alloc_name;
 	priv->dfs_cac_workqueue = alloc_workqueue("MWIFIEX_DFS_CAC-%s",


@@ -628,6 +628,7 @@ static void pn533_usb_disconnect(struct usb_interface *interface)
 	usb_free_urb(phy->out_urb);
 	usb_free_urb(phy->ack_urb);
 	kfree(phy->ack_buffer);
+	usb_put_dev(phy->udev);
 	nfc_info(&interface->dev, "NXP PN533 NFC device disconnected\n");
 }


@@ -276,10 +276,19 @@ static inline bool vsock_net_mode_global(struct vsock_sock *vsk)
 	return vsock_net_mode(sock_net(sk_vsock(vsk))) == VSOCK_NET_MODE_GLOBAL;
 }
-static inline void vsock_net_set_child_mode(struct net *net,
+static inline bool vsock_net_set_child_mode(struct net *net,
 					    enum vsock_net_mode mode)
 {
-	WRITE_ONCE(net->vsock.child_ns_mode, mode);
+	int new_locked = mode + 1;
+	int old_locked = 0; /* unlocked */
+
+	if (try_cmpxchg(&net->vsock.child_ns_mode_locked,
+			&old_locked, new_locked)) {
+		WRITE_ONCE(net->vsock.child_ns_mode, mode);
+		return true;
+	}
+
+	return old_locked == new_locked;
 }
 static inline enum vsock_net_mode vsock_net_child_mode(struct net *net)


@@ -284,9 +284,9 @@ struct l2cap_conn_rsp {
 #define L2CAP_CR_LE_BAD_KEY_SIZE	0x0007
 #define L2CAP_CR_LE_ENCRYPTION		0x0008
 #define L2CAP_CR_LE_INVALID_SCID	0x0009
-#define L2CAP_CR_LE_SCID_IN_USE		0X000A
-#define L2CAP_CR_LE_UNACCEPT_PARAMS	0X000B
-#define L2CAP_CR_LE_INVALID_PARAMS	0X000C
+#define L2CAP_CR_LE_SCID_IN_USE		0x000A
+#define L2CAP_CR_LE_UNACCEPT_PARAMS	0x000B
+#define L2CAP_CR_LE_INVALID_PARAMS	0x000C
 /* connect/create channel status */
 #define L2CAP_CS_NO_INFO		0x0000
@@ -493,6 +493,8 @@ struct l2cap_ecred_reconf_req {
 #define L2CAP_RECONF_SUCCESS		0x0000
 #define L2CAP_RECONF_INVALID_MTU	0x0001
 #define L2CAP_RECONF_INVALID_MPS	0x0002
+#define L2CAP_RECONF_INVALID_CID	0x0003
+#define L2CAP_RECONF_INVALID_PARAMS	0x0004
 struct l2cap_ecred_reconf_rsp {
 	__le16 result;


@@ -42,7 +42,9 @@ struct inet_connection_sock_af_ops {
 				      struct request_sock *req,
 				      struct dst_entry *dst,
 				      struct request_sock *req_unhash,
-				      bool *own_req);
+				      bool *own_req,
+				      void (*opt_child_init)(struct sock *newsk,
+							     const struct sock *sk));
 	u16	    net_header_len;
 	int	    (*setsockopt)(struct sock *sk, int level, int optname,
 				  sockptr_t optval, unsigned int optlen);


@@ -17,5 +17,8 @@ struct netns_vsock {
 	enum vsock_net_mode mode;
 	enum vsock_net_mode child_ns_mode;
+
+	/* 0 = unlocked, 1 = locked to global, 2 = locked to local */
+	int child_ns_mode_locked;
 };
 #endif /* __NET_NET_NAMESPACE_VSOCK_H */


@@ -2098,7 +2098,7 @@ static inline int sk_rx_queue_get(const struct sock *sk)
 static inline void sk_set_socket(struct sock *sk, struct socket *sock)
 {
-	sk->sk_socket = sock;
+	WRITE_ONCE(sk->sk_socket, sock);
 	if (sock) {
 		WRITE_ONCE(sk->sk_uid, SOCK_INODE(sock)->i_uid);
 		WRITE_ONCE(sk->sk_ino, SOCK_INODE(sock)->i_ino);


@@ -544,7 +544,9 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 				  struct request_sock *req,
 				  struct dst_entry *dst,
 				  struct request_sock *req_unhash,
-				  bool *own_req);
+				  bool *own_req,
+				  void (*opt_child_init)(struct sock *newsk,
+							 const struct sock *sk));
 int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb);
 int tcp_v4_connect(struct sock *sk, struct sockaddr_unsized *uaddr, int addr_len);
 int tcp_connect(struct sock *sk);


@@ -2166,6 +2166,7 @@ static void hci_sock_destruct(struct sock *sk)
 	mgmt_cleanup(sk);
 	skb_queue_purge(&sk->sk_receive_queue);
 	skb_queue_purge(&sk->sk_write_queue);
+	skb_queue_purge(&sk->sk_error_queue);
 }
 static const struct proto_ops hci_sock_ops = {


@@ -4592,7 +4592,7 @@ static int hci_le_set_host_features_sync(struct hci_dev *hdev)
 {
 	int err;
-	if (iso_capable(hdev)) {
+	if (cis_capable(hdev)) {
 		/* Connected Isochronous Channels (Host Support) */
 		err = hci_le_set_host_feature_sync(hdev, 32,
 						   (iso_enabled(hdev) ? 0x01 :


@@ -746,6 +746,7 @@ static void iso_sock_destruct(struct sock *sk)
 	skb_queue_purge(&sk->sk_receive_queue);
 	skb_queue_purge(&sk->sk_write_queue);
+	skb_queue_purge(&sk->sk_error_queue);
 }
 static void iso_sock_cleanup_listen(struct sock *parent)


@@ -4916,6 +4916,13 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
 		goto response_unlock;
 	}
+	/* Check if Key Size is sufficient for the security level */
+	if (!l2cap_check_enc_key_size(conn->hcon, pchan)) {
+		result = L2CAP_CR_LE_BAD_KEY_SIZE;
+		chan = NULL;
+		goto response_unlock;
+	}
 	/* Check for valid dynamic CID range */
 	if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {
 		result = L2CAP_CR_LE_INVALID_SCID;
@@ -5051,13 +5058,15 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
 	struct l2cap_chan *chan, *pchan;
 	u16 mtu, mps;
 	__le16 psm;
-	u8 result, len = 0;
+	u8 result, rsp_len = 0;
 	int i, num_scid;
 	bool defer = false;
 	if (!enable_ecred)
 		return -EINVAL;
+	memset(pdu, 0, sizeof(*pdu));
 	if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {
 		result = L2CAP_CR_LE_INVALID_PARAMS;
 		goto response;
@@ -5066,6 +5075,9 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
 	cmd_len -= sizeof(*req);
 	num_scid = cmd_len / sizeof(u16);
+	/* Always respond with the same number of scids as in the request */
+	rsp_len = cmd_len;
 	if (num_scid > L2CAP_ECRED_MAX_CID) {
 		result = L2CAP_CR_LE_INVALID_PARAMS;
 		goto response;
@@ -5075,7 +5087,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
 	mps = __le16_to_cpu(req->mps);
 	if (mtu < L2CAP_ECRED_MIN_MTU || mps < L2CAP_ECRED_MIN_MPS) {
-		result = L2CAP_CR_LE_UNACCEPT_PARAMS;
+		result = L2CAP_CR_LE_INVALID_PARAMS;
 		goto response;
 	}
@@ -5095,8 +5107,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
 	BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);
-	memset(pdu, 0, sizeof(*pdu));
 	/* Check if we have socket listening on psm */
 	pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,
 					 &conn->hcon->dst, LE_LINK);
@@ -5109,7 +5119,16 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
 	if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
 				     SMP_ALLOW_STK)) {
-		result = L2CAP_CR_LE_AUTHENTICATION;
+		result = pchan->sec_level == BT_SECURITY_MEDIUM ?
+			 L2CAP_CR_LE_ENCRYPTION : L2CAP_CR_LE_AUTHENTICATION;
+		goto unlock;
+	}
+
+	/* Check if the listening channel has set an output MTU then the
+	 * requested MTU shall be less than or equal to that value.
+	 */
+	if (pchan->omtu && mtu < pchan->omtu) {
+		result = L2CAP_CR_LE_UNACCEPT_PARAMS;
 		goto unlock;
 	}
@@ -5121,7 +5140,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
 		BT_DBG("scid[%d] 0x%4.4x", i, scid);
 		pdu->dcid[i] = 0x0000;
-		len += sizeof(*pdu->dcid);
 		/* Check for valid dynamic CID range */
 		if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {
@@ -5188,7 +5206,7 @@ response:
 		return 0;
 	l2cap_send_cmd(conn, cmd->ident, L2CAP_ECRED_CONN_RSP,
-		       sizeof(*pdu) + len, pdu);
+		       sizeof(*pdu) + rsp_len, pdu);
 	return 0;
 }
@@ -5310,14 +5328,14 @@ static inline int l2cap_ecred_reconf_req(struct l2cap_conn *conn,
 	struct l2cap_ecred_reconf_req *req = (void *) data;
 	struct l2cap_ecred_reconf_rsp rsp;
 	u16 mtu, mps, result;
-	struct l2cap_chan *chan;
+	struct l2cap_chan *chan[L2CAP_ECRED_MAX_CID] = {};
 	int i, num_scid;
 	if (!enable_ecred)
 		return -EINVAL;
-	if (cmd_len < sizeof(*req) || cmd_len - sizeof(*req) % sizeof(u16)) {
-		result = L2CAP_CR_LE_INVALID_PARAMS;
+	if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {
+		result = L2CAP_RECONF_INVALID_CID;
 		goto respond;
 	}
@@ -5327,42 +5345,69 @@ static inline int l2cap_ecred_reconf_req(struct l2cap_conn *conn,
 	BT_DBG("mtu %u mps %u", mtu, mps);
 	if (mtu < L2CAP_ECRED_MIN_MTU) {
-		result = L2CAP_RECONF_INVALID_MTU;
+		result = L2CAP_RECONF_INVALID_PARAMS;
 		goto respond;
 	}
 	if (mps < L2CAP_ECRED_MIN_MPS) {
-		result = L2CAP_RECONF_INVALID_MPS;
+		result = L2CAP_RECONF_INVALID_PARAMS;
 		goto respond;
 	}
 	cmd_len -= sizeof(*req);
 	num_scid = cmd_len / sizeof(u16);
+	if (num_scid > L2CAP_ECRED_MAX_CID) {
+		result = L2CAP_RECONF_INVALID_PARAMS;
+		goto respond;
+	}
 	result = L2CAP_RECONF_SUCCESS;
+	/* Check if each SCID, MTU and MPS are valid */
 	for (i = 0; i < num_scid; i++) {
 		u16 scid;
 		scid = __le16_to_cpu(req->scid[i]);
-		if (!scid)
-			return -EPROTO;
-		chan = __l2cap_get_chan_by_dcid(conn, scid);
-		if (!chan)
-			continue;
-		/* If the MTU value is decreased for any of the included
-		 * channels, then the receiver shall disconnect all
-		 * included channels.
-		 */
-		if (chan->omtu > mtu) {
-			BT_ERR("chan %p decreased MTU %u -> %u", chan,
-			       chan->omtu, mtu);
-			result = L2CAP_RECONF_INVALID_MTU;
-		}
-		chan->omtu = mtu;
-		chan->remote_mps = mps;
+		if (!scid) {
+			result = L2CAP_RECONF_INVALID_CID;
+			goto respond;
+		}
+		chan[i] = __l2cap_get_chan_by_dcid(conn, scid);
+		if (!chan[i]) {
+			result = L2CAP_RECONF_INVALID_CID;
+			goto respond;
+		}
+		/* The MTU field shall be greater than or equal to the greatest
+		 * current MTU size of these channels.
+		 */
+		if (chan[i]->omtu > mtu) {
+			BT_ERR("chan %p decreased MTU %u -> %u", chan[i],
+			       chan[i]->omtu, mtu);
+			result = L2CAP_RECONF_INVALID_MTU;
+			goto respond;
+		}
+		/* If more than one channel is being configured, the MPS field
+		 * shall be greater than or equal to the current MPS size of
+		 * each of these channels. If only one channel is being
+		 * configured, the MPS field may be less than the current MPS
+		 * of that channel.
+		 */
+		if (chan[i]->remote_mps >= mps && i) {
+			BT_ERR("chan %p decreased MPS %u -> %u", chan[i],
+			       chan[i]->remote_mps, mps);
+			result = L2CAP_RECONF_INVALID_MPS;
+			goto respond;
+		}
+	}
+
+	/* Commit the new MTU and MPS values after checking they are valid */
+	for (i = 0; i < num_scid; i++) {
+		chan[i]->omtu = mtu;
+		chan[i]->remote_mps = mps;
 	}
 respond:


@@ -1029,10 +1029,17 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
 			break;
 		}
-		/* Setting is not supported as it's the remote side that
-		 * decides this.
-		 */
-		err = -EPERM;
+		/* Only allow setting output MTU when not connected */
+		if (sk->sk_state == BT_CONNECTED) {
+			err = -EISCONN;
+			break;
+		}
+
+		err = copy_safe_from_sockptr(&mtu, sizeof(mtu), optval, optlen);
+		if (err)
+			break;
+
+		chan->omtu = mtu;
 		break;
 	case BT_RCVMTU:
@@ -1816,6 +1823,7 @@ static void l2cap_sock_destruct(struct sock *sk)
 	skb_queue_purge(&sk->sk_receive_queue);
 	skb_queue_purge(&sk->sk_write_queue);
+	skb_queue_purge(&sk->sk_error_queue);
 }
 static void l2cap_skb_msg_name(struct sk_buff *skb, void *msg_name,


@@ -470,6 +470,7 @@ static void sco_sock_destruct(struct sock *sk)
 	skb_queue_purge(&sk->sk_receive_queue);
 	skb_queue_purge(&sk->sk_write_queue);
+	skb_queue_purge(&sk->sk_error_queue);
 }
 static void sco_sock_cleanup_listen(struct sock *parent)


@ -4822,6 +4822,8 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
* to -1 or to their cpu id, but not to our id. * to -1 or to their cpu id, but not to our id.
*/ */
if (READ_ONCE(txq->xmit_lock_owner) != cpu) { if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
bool is_list = false;
if (dev_xmit_recursion()) if (dev_xmit_recursion())
goto recursion_alert; goto recursion_alert;
@ -4832,17 +4834,28 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
HARD_TX_LOCK(dev, txq, cpu); HARD_TX_LOCK(dev, txq, cpu);
if (!netif_xmit_stopped(txq)) { if (!netif_xmit_stopped(txq)) {
is_list = !!skb->next;
dev_xmit_recursion_inc(); dev_xmit_recursion_inc();
skb = dev_hard_start_xmit(skb, dev, txq, &rc); skb = dev_hard_start_xmit(skb, dev, txq, &rc);
dev_xmit_recursion_dec(); dev_xmit_recursion_dec();
if (dev_xmit_complete(rc)) {
HARD_TX_UNLOCK(dev, txq); /* GSO segments a single SKB into
goto out; * a list of frames. TCP expects error
} * to mean none of the data was sent.
*/
if (is_list)
rc = NETDEV_TX_OK;
} }
HARD_TX_UNLOCK(dev, txq); HARD_TX_UNLOCK(dev, txq);
if (!skb) /* xmit completed */
goto out;
net_crit_ratelimited("Virtual device %s asks to queue packet!\n", net_crit_ratelimited("Virtual device %s asks to queue packet!\n",
dev->name); dev->name);
/* NETDEV_TX_BUSY or queue was stopped */
if (!is_list)
rc = -ENETDOWN;
} else { } else {
/* Recursion is detected! It is possible, /* Recursion is detected! It is possible,
* unfortunately * unfortunately
@ -4850,10 +4863,10 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
recursion_alert: recursion_alert:
net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n", net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n",
dev->name); dev->name);
rc = -ENETDOWN;
} }
} }
rc = -ENETDOWN;
rcu_read_unlock_bh(); rcu_read_unlock_bh();
dev_core_stats_tx_dropped_inc(dev); dev_core_stats_tx_dropped_inc(dev);
@@ -4992,8 +5005,7 @@ static bool rps_flow_is_active(struct rps_dev_flow *rflow,
 static struct rps_dev_flow *
 set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
-	    struct rps_dev_flow *rflow, u16 next_cpu, u32 hash,
-	    u32 flow_id)
+	    struct rps_dev_flow *rflow, u16 next_cpu, u32 hash)
 {
 	if (next_cpu < nr_cpu_ids) {
 		u32 head;
@@ -5004,6 +5016,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		struct rps_dev_flow *tmp_rflow;
 		unsigned int tmp_cpu;
 		u16 rxq_index;
+		u32 flow_id;
 		int rc;

 		/* Should we steer this flow to a different hardware queue? */
@@ -5019,6 +5032,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		if (!flow_table)
 			goto out;

+		flow_id = rfs_slot(hash, flow_table);
 		tmp_rflow = &flow_table->flows[flow_id];
 		tmp_cpu = READ_ONCE(tmp_rflow->cpu);
@@ -5066,7 +5080,6 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 	struct rps_dev_flow_table *flow_table;
 	struct rps_map *map;
 	int cpu = -1;
-	u32 flow_id;
 	u32 tcpu;
 	u32 hash;
@@ -5113,8 +5126,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		/* OK, now we know there is a match,
 		 * we can look at the local (per receive queue) flow table
 		 */
-		flow_id = rfs_slot(hash, flow_table);
-		rflow = &flow_table->flows[flow_id];
+		rflow = &flow_table->flows[rfs_slot(hash, flow_table)];
 		tcpu = rflow->cpu;

 		/*
@@ -5133,8 +5145,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		    ((int)(READ_ONCE(per_cpu(softnet_data, tcpu).input_queue_head) -
 		    rflow->last_qtail)) >= 0)) {
 			tcpu = next_cpu;
-			rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash,
-					    flow_id);
+			rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash);
 		}

 		if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {


@@ -5590,15 +5590,28 @@ static void __skb_complete_tx_timestamp(struct sk_buff *skb,
 static bool skb_may_tx_timestamp(struct sock *sk, bool tsonly)
 {
-	bool ret;
+	struct socket *sock;
+	struct file *file;
+	bool ret = false;

 	if (likely(tsonly || READ_ONCE(sock_net(sk)->core.sysctl_tstamp_allow_data)))
 		return true;

-	read_lock_bh(&sk->sk_callback_lock);
-	ret = sk->sk_socket && sk->sk_socket->file &&
-	      file_ns_capable(sk->sk_socket->file, &init_user_ns, CAP_NET_RAW);
-	read_unlock_bh(&sk->sk_callback_lock);
+	/* The sk pointer remains valid as long as the skb is. The sk_socket and
+	 * file pointer may become NULL if the socket is closed. Both structures
+	 * (including file->cred) are RCU freed which means they can be accessed
+	 * within a RCU read section.
+	 */
+	rcu_read_lock();
+	sock = READ_ONCE(sk->sk_socket);
+	if (!sock)
+		goto out;
+	file = READ_ONCE(sock->file);
+	if (!file)
+		goto out;
+	ret = file_ns_capable(file, &init_user_ns, CAP_NET_RAW);
+out:
+	rcu_read_unlock();
 	return ret;
 }


@@ -203,7 +203,7 @@ struct sock *tcp_get_cookie_sock(struct sock *sk, struct sk_buff *skb,
 	bool own_req;

 	child = icsk->icsk_af_ops->syn_recv_sock(sk, skb, req, dst,
-						 NULL, &own_req);
+						 NULL, &own_req, NULL);
 	if (child) {
 		refcount_set(&req->rsk_refcnt, 1);
 		sock_rps_save_rxhash(child, skb);


@@ -333,7 +333,7 @@ static struct sock *tcp_fastopen_create_child(struct sock *sk,
 	bool own_req;

 	child = inet_csk(sk)->icsk_af_ops->syn_recv_sock(sk, skb, req, NULL,
-							 NULL, &own_req);
+							 NULL, &own_req, NULL);
 	if (!child)
 		return NULL;


@@ -4858,15 +4858,24 @@ static enum skb_drop_reason tcp_disordered_ack_check(const struct sock *sk,
  */
 static enum skb_drop_reason tcp_sequence(const struct sock *sk,
-					 u32 seq, u32 end_seq)
+					 u32 seq, u32 end_seq,
+					 const struct tcphdr *th)
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
+	u32 seq_limit;

 	if (before(end_seq, tp->rcv_wup))
 		return SKB_DROP_REASON_TCP_OLD_SEQUENCE;

-	if (after(end_seq, tp->rcv_nxt + tcp_receive_window(tp))) {
-		if (after(seq, tp->rcv_nxt + tcp_receive_window(tp)))
+	seq_limit = tp->rcv_nxt + tcp_receive_window(tp);
+	if (unlikely(after(end_seq, seq_limit))) {
+		/* Some stacks are known to handle FIN incorrectly; allow the
+		 * FIN to extend beyond the window and check it in detail later.
+		 */
+		if (!after(end_seq - th->fin, seq_limit))
+			return SKB_NOT_DROPPED_YET;
+
+		if (after(seq, seq_limit))
 			return SKB_DROP_REASON_TCP_INVALID_SEQUENCE;

 		/* Only accept this packet if receive queue is empty. */
@@ -6379,7 +6388,8 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
 step1:
 	/* Step 1: check sequence number */
-	reason = tcp_sequence(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
+	reason = tcp_sequence(sk, TCP_SKB_CB(skb)->seq,
+			      TCP_SKB_CB(skb)->end_seq, th);
 	if (reason) {
 		/* RFC793, page 37: "In all states except SYN-SENT, all reset
 		 * (RST) segments are validated by checking their SEQ-fields."
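The right-edge check above can be sketched in plain C. This is a userspace model, not the kernel code: `after32()` stands in for the kernel's wrap-safe `after()` macro, and the FIN allowance mirrors the `end_seq - th->fin` term, which lets a FIN sit exactly one byte past an otherwise-closed window.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "a is strictly after b" on 32-bit sequence numbers,
 * modeled on the kernel's after() macro. */
static bool after32(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

/* Returns true when the segment's right edge is acceptable.
 * rcv_nxt + window is the seq_limit from the patch above; when the
 * segment carries a FIN, the FIN byte may extend one past the limit. */
static bool seg_fits_window(uint32_t end_seq, uint32_t rcv_nxt,
			    uint32_t window, bool fin)
{
	uint32_t seq_limit = rcv_nxt + window;

	if (!after32(end_seq, seq_limit))
		return true;
	/* allow only the FIN itself to overshoot the window */
	return !after32(end_seq - (fin ? 1 : 0), seq_limit);
}
```

With a zero receive window (`rcv_nxt == seq_limit`), a bare FIN at `rcv_nxt` now passes the sequence check instead of being dropped, which is the regression the patch fixes.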


@@ -1705,7 +1705,9 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 				  struct request_sock *req,
 				  struct dst_entry *dst,
 				  struct request_sock *req_unhash,
-				  bool *own_req)
+				  bool *own_req,
+				  void (*opt_child_init)(struct sock *newsk,
+							 const struct sock *sk))
 {
 	struct inet_request_sock *ireq;
 	bool found_dup_sk = false;
@@ -1757,6 +1759,10 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 	}
 	sk_setup_caps(newsk, dst);

+#if IS_ENABLED(CONFIG_IPV6)
+	if (opt_child_init)
+		opt_child_init(newsk, sk);
+#endif
 	tcp_ca_openreq_child(newsk, dst);

 	tcp_sync_mss(newsk, dst4_mtu(dst));


@@ -925,7 +925,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 	 * socket is created, wait for troubles.
 	 */
 	child = inet_csk(sk)->icsk_af_ops->syn_recv_sock(sk, skb, req, NULL,
-							 req, &own_req);
+							 req, &own_req, NULL);
 	if (!child)
 		goto listen_overflow;


@@ -20,10 +20,9 @@ EXPORT_SYMBOL(udplite_table);
 /* Designate sk as UDP-Lite socket */
 static int udplite_sk_init(struct sock *sk)
 {
-	udp_init_sock(sk);
 	pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "
 		     "please contact the netdev mailing list\n");
-	return 0;
+	return udp_init_sock(sk);
 }

 static int udplite_rcv(struct sk_buff *skb)


@@ -1312,11 +1312,48 @@ static void tcp_v6_restore_cb(struct sk_buff *skb)
 		sizeof(struct inet6_skb_parm));
 }

+/* Called from tcp_v4_syn_recv_sock() for v6_mapped children. */
+static void tcp_v6_mapped_child_init(struct sock *newsk, const struct sock *sk)
+{
+	struct inet_sock *newinet = inet_sk(newsk);
+	struct ipv6_pinfo *newnp;
+
+	newinet->pinet6 = newnp = tcp_inet6_sk(newsk);
+	newinet->ipv6_fl_list = NULL;
+
+	memcpy(newnp, tcp_inet6_sk(sk), sizeof(struct ipv6_pinfo));
+	newnp->saddr = newsk->sk_v6_rcv_saddr;
+
+	inet_csk(newsk)->icsk_af_ops = &ipv6_mapped;
+	if (sk_is_mptcp(newsk))
+		mptcpv6_handle_mapped(newsk, true);
+	newsk->sk_backlog_rcv = tcp_v4_do_rcv;
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
+	tcp_sk(newsk)->af_specific = &tcp_sock_ipv6_mapped_specific;
+#endif
+
+	newnp->ipv6_mc_list = NULL;
+	newnp->ipv6_ac_list = NULL;
+	newnp->pktoptions = NULL;
+	newnp->opt = NULL;
+	/* tcp_v4_syn_recv_sock() has initialized newinet->mc_{index,ttl} */
+	newnp->mcast_oif = newinet->mc_index;
+	newnp->mcast_hops = newinet->mc_ttl;
+	newnp->rcv_flowinfo = 0;
+	if (inet6_test_bit(REPFLOW, sk))
+		newnp->flow_label = 0;
+}
+
 static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 					 struct request_sock *req,
 					 struct dst_entry *dst,
 					 struct request_sock *req_unhash,
-					 bool *own_req)
+					 bool *own_req,
+					 void (*opt_child_init)(struct sock *newsk,
+								const struct sock *sk))
 {
 	const struct ipv6_pinfo *np = tcp_inet6_sk(sk);
 	struct inet_request_sock *ireq;
@@ -1332,61 +1369,10 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
 #endif
 	struct flowi6 fl6;

-	if (skb->protocol == htons(ETH_P_IP)) {
-		/*
-		 *	v6 mapped
-		 */
-		newsk = tcp_v4_syn_recv_sock(sk, skb, req, dst,
-					     req_unhash, own_req);
-
-		if (!newsk)
-			return NULL;
-
-		newinet = inet_sk(newsk);
-		newinet->pinet6 = tcp_inet6_sk(newsk);
-		newinet->ipv6_fl_list = NULL;
-		newnp = tcp_inet6_sk(newsk);
-		newtp = tcp_sk(newsk);
-
-		memcpy(newnp, np, sizeof(struct ipv6_pinfo));
-
-		newnp->saddr = newsk->sk_v6_rcv_saddr;
-
-		inet_csk(newsk)->icsk_af_ops = &ipv6_mapped;
-		if (sk_is_mptcp(newsk))
-			mptcpv6_handle_mapped(newsk, true);
-		newsk->sk_backlog_rcv = tcp_v4_do_rcv;
-#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
-		newtp->af_specific = &tcp_sock_ipv6_mapped_specific;
-#endif
-
-		newnp->ipv6_mc_list = NULL;
-		newnp->ipv6_ac_list = NULL;
-		newnp->pktoptions = NULL;
-		newnp->opt = NULL;
-		newnp->mcast_oif = inet_iif(skb);
-		newnp->mcast_hops = ip_hdr(skb)->ttl;
-		newnp->rcv_flowinfo = 0;
-		if (inet6_test_bit(REPFLOW, sk))
-			newnp->flow_label = 0;
-
-		/*
-		 * No need to charge this sock to the relevant IPv6 refcnt debug socks count
-		 * here, tcp_create_openreq_child now does this for us, see the comment in
-		 * that function for the gory details. -acme
-		 */
-
-		/* It is tricky place. Until this moment IPv4 tcp
-		   worked with IPv6 icsk.icsk_af_ops.
-		   Sync it now.
-		 */
-		tcp_sync_mss(newsk, inet_csk(newsk)->icsk_pmtu_cookie);
-
-		return newsk;
-	}
+	if (skb->protocol == htons(ETH_P_IP))
+		return tcp_v4_syn_recv_sock(sk, skb, req, dst,
+					    req_unhash, own_req,
+					    tcp_v6_mapped_child_init);

 	ireq = inet_rsk(req);

 	if (sk_acceptq_is_full(sk))


@@ -16,10 +16,9 @@
 static int udplitev6_sk_init(struct sock *sk)
 {
-	udpv6_init_sock(sk);
 	pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "
 		     "please contact the netdev mailing list\n");
-	return 0;
+	return udpv6_init_sock(sk);
 }

 static int udplitev6_rcv(struct sk_buff *skb)


@@ -57,6 +57,7 @@ static int xfrm6_get_saddr(xfrm_address_t *saddr,
 	struct dst_entry *dst;
 	struct net_device *dev;
 	struct inet6_dev *idev;
+	int err;

 	dst = xfrm6_dst_lookup(params);
 	if (IS_ERR(dst))
@@ -68,9 +69,11 @@ static int xfrm6_get_saddr(xfrm_address_t *saddr,
 		return -EHOSTUNREACH;
 	}
 	dev = idev->dev;
-	ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0,
-			   &saddr->in6);
+	err = ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0,
+				 &saddr->in6);
 	dst_release(dst);
+	if (err)
+		return -EHOSTUNREACH;
 	return 0;
 }


@@ -628,7 +628,7 @@ retry:
 			skb = txm->frag_skb;
 		}

-		if (WARN_ON(!skb_shinfo(skb)->nr_frags) ||
+		if (WARN_ON_ONCE(!skb_shinfo(skb)->nr_frags) ||
 		    WARN_ON_ONCE(!skb_frag_page(&skb_shinfo(skb)->frags[0]))) {
 			ret = -EINVAL;
 			goto out;
@@ -749,7 +749,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 {
 	struct sock *sk = sock->sk;
 	struct kcm_sock *kcm = kcm_sk(sk);
-	struct sk_buff *skb = NULL, *head = NULL;
+	struct sk_buff *skb = NULL, *head = NULL, *frag_prev = NULL;
 	size_t copy, copied = 0;
 	long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
 	int eor = (sock->type == SOCK_DGRAM) ?
@@ -824,6 +824,7 @@ start:
 			else
 				skb->next = tskb;

+			frag_prev = skb;
 			skb = tskb;
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 			continue;
@@ -933,6 +934,22 @@ partial_message:
 out_error:
 	kcm_push(kcm);

+	/* When MAX_SKB_FRAGS was reached, a new skb was allocated and
+	 * linked into the frag_list before data copy. If the copy
+	 * subsequently failed, this skb has zero frags. Remove it from
+	 * the frag_list to prevent kcm_write_msgs from later hitting
+	 * WARN_ON(!skb_shinfo(skb)->nr_frags).
+	 */
+	if (frag_prev && !skb_shinfo(skb)->nr_frags) {
+		if (head == frag_prev)
+			skb_shinfo(head)->frag_list = NULL;
+		else
+			frag_prev->next = NULL;
+		kfree_skb(skb);
+		/* Update skb as it may be saved in partial_message via goto */
+		skb = frag_prev;
+	}
+
 	if (sock->type == SOCK_SEQPACKET) {
 		/* Wrote some bytes before encountering an
 		 * error, return partial success.
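The unlink logic in the comment above can be modeled with a toy singly linked list. This is an illustrative sketch, not kernel code: `toy_skb`, `drop_empty_tail`, and the fields are simplified stand-ins for `sk_buff`, the frag_list pointer, and `nr_frags`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy model of the kcm_sendmsg error-path cleanup: "head" owns a
 * frag_list of follow-up buffers; if the most recently linked buffer
 * ended up with zero frags after a failed copy, unlink and free it so
 * a later writer never encounters an empty buffer. */
struct toy_skb {
	struct toy_skb *next;      /* next buffer in head's frag_list */
	struct toy_skb *frag_list; /* only meaningful on head */
	int nr_frags;
};

/* Returns the skb the caller should keep working with. */
static struct toy_skb *drop_empty_tail(struct toy_skb *head,
				       struct toy_skb *frag_prev,
				       struct toy_skb *skb)
{
	if (frag_prev && skb->nr_frags == 0) {
		if (head == frag_prev)
			head->frag_list = NULL; /* skb was the first fragment */
		else
			frag_prev->next = NULL; /* skb was a later fragment */
		free(skb);
		return frag_prev; /* continue with the previous skb */
	}
	return skb;
}
```

The key invariant, matching the patch, is that after cleanup no buffer reachable from `head` has `nr_frags == 0`.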


@@ -281,6 +281,7 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
 	struct ieee80211_bss_conf *old[IEEE80211_MLD_MAX_NUM_LINKS];
 	struct ieee80211_link_data *old_data[IEEE80211_MLD_MAX_NUM_LINKS];
 	bool use_deflink = old_links == 0; /* set for error case */
+	bool non_sta = sdata->vif.type != NL80211_IFTYPE_STATION;

 	lockdep_assert_wiphy(sdata->local->hw.wiphy);
@@ -337,6 +338,7 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
 		link = links[link_id];
 		ieee80211_link_init(sdata, link_id, &link->data, &link->conf);
 		ieee80211_link_setup(&link->data);
+		ieee80211_set_wmm_default(&link->data, true, non_sta);
 	}

 	if (new_links == 0)


@@ -1635,6 +1635,9 @@ static void mesh_rx_csa_frame(struct ieee80211_sub_if_data *sdata,
 	if (!mesh_matches_local(sdata, elems))
 		goto free;

+	if (!elems->mesh_chansw_params_ie)
+		goto free;
+
 	ifmsh->chsw_ttl = elems->mesh_chansw_params_ie->mesh_ttl;
 	if (!--ifmsh->chsw_ttl)
 		fwd_csa = false;


@@ -7085,6 +7085,9 @@ static void ieee80211_ml_reconfiguration(struct ieee80211_sub_if_data *sdata,
 		control = le16_to_cpu(prof->control);
 		link_id = control & IEEE80211_MLE_STA_RECONF_CONTROL_LINK_ID;

+		if (link_id >= IEEE80211_MLD_MAX_NUM_LINKS)
+			continue;
+
 		removed_links |= BIT(link_id);

 		/* the MAC address should not be included, but handle it */


@@ -808,7 +808,9 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
 					  struct request_sock *req,
 					  struct dst_entry *dst,
 					  struct request_sock *req_unhash,
-					  bool *own_req)
+					  bool *own_req,
+					  void (*opt_child_init)(struct sock *newsk,
+								 const struct sock *sk))
 {
 	struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk);
 	struct mptcp_subflow_request_sock *subflow_req;
@@ -855,7 +857,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
 create_child:
 	child = listener->icsk_af_ops->syn_recv_sock(sk, skb, req, dst,
-						     req_unhash, own_req);
+						     req_unhash, own_req, opt_child_init);

 	if (child && *own_req) {
 		struct mptcp_subflow_context *ctx = mptcp_subflow_ctx(child);


@@ -796,7 +796,7 @@ static int decode_choice(struct bitstr *bs, const struct field_t *f,
 	if (ext || (son->attr & OPEN)) {
 		BYTE_ALIGN(bs);
-		if (nf_h323_error_boundary(bs, len, 0))
+		if (nf_h323_error_boundary(bs, 2, 0))
 			return H323_ERROR_BOUND;
 		len = get_len(bs);
 		if (nf_h323_error_boundary(bs, len, 0))


@@ -166,9 +166,46 @@ static void psp_write_headers(struct net *net, struct sk_buff *skb, __be32 spi,
 {
 	struct udphdr *uh = udp_hdr(skb);
 	struct psphdr *psph = (struct psphdr *)(uh + 1);
+	const struct sock *sk = skb->sk;

 	uh->dest = htons(PSP_DEFAULT_UDP_PORT);
-	uh->source = udp_flow_src_port(net, skb, 0, 0, false);
+
+	/* A bit of theory: Selection of the source port.
+	 *
+	 * We need some entropy, so that multiple flows use different
+	 * source ports for better RSS spreading at the receiver.
+	 *
+	 * We also need that all packets belonging to one TCP flow
+	 * use the same source port through their duration,
+	 * so that all these packets land in the same receive queue.
+	 *
+	 * udp_flow_src_port() is using sk_txhash, inherited from
+	 * skb_set_hash_from_sk() call in __tcp_transmit_skb().
+	 * This field is subject to reshuffling, thanks to
+	 * sk_rethink_txhash() calls in various TCP functions.
+	 *
+	 * Instead, use sk->sk_hash which is constant through
+	 * the whole flow duration.
+	 */
+	if (likely(sk)) {
+		u32 hash = sk->sk_hash;
+		int min, max;
+
+		/* These operations are cheap, no need to cache the result
+		 * in another socket field.
+		 */
+		inet_get_local_port_range(net, &min, &max);
+
+		/* Since this is being sent on the wire obfuscate hash a bit
+		 * to minimize possibility that any useful information to an
+		 * attacker is leaked. Only upper 16 bits are relevant in the
+		 * computation for 16 bit port value because we use a
+		 * reciprocal divide.
+		 */
+		hash ^= hash << 16;
+		uh->source = htons((((u64)hash * (max - min)) >> 32) + min);
+	} else {
+		uh->source = udp_flow_src_port(net, skb, 0, 0, false);
+	}
+
 	uh->check = 0;
 	uh->len = htons(udp_len);
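The multiply-shift ("reciprocal divide") port selection described in the comment above can be illustrated in userspace. This is a sketch only: the port-range values are made up rather than read from a real netns, and `psp_pick_src_port` is a hypothetical helper name.

```c
#include <assert.h>
#include <stdint.h>

/* Map a flow-constant 32-bit hash onto [min, max). Folding with
 * "hash ^= hash << 16" mixes both halves of the hash into the upper
 * 16 bits, which are the only bits the multiply-shift consumes when
 * producing a 16-bit port. */
static uint16_t psp_pick_src_port(uint32_t hash, int min, int max)
{
	hash ^= hash << 16;
	/* (hash / 2^32) * (max - min) + min, computed without a divide */
	return (uint16_t)((((uint64_t)hash * (uint32_t)(max - min)) >> 32) + min);
}
```

Because the input hash is constant for the life of the flow, the result is deterministic per flow (all segments land in one receive queue) while different flows still spread across the port range.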


@@ -455,6 +455,9 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
 		rcu_read_unlock();
 	}

+	/* we do not hold the socket lock here but it is safe because
+	 * fan-out is disabled when calling conn_slots_available()
+	 */
 	if (conn->c_trans->conn_slots_available)
 		conn->c_trans->conn_slots_available(conn, false);
 }


@@ -59,30 +59,12 @@ void rds_tcp_keepalive(struct socket *sock)
 static int
 rds_tcp_get_peer_sport(struct socket *sock)
 {
-	union {
-		struct sockaddr_storage storage;
-		struct sockaddr addr;
-		struct sockaddr_in sin;
-		struct sockaddr_in6 sin6;
-	} saddr;
-	int sport;
+	struct sock *sk = sock->sk;

-	if (kernel_getpeername(sock, &saddr.addr) >= 0) {
-		switch (saddr.addr.sa_family) {
-		case AF_INET:
-			sport = ntohs(saddr.sin.sin_port);
-			break;
-		case AF_INET6:
-			sport = ntohs(saddr.sin6.sin6_port);
-			break;
-		default:
-			sport = -1;
-		}
-	} else {
-		sport = -1;
-	}
+	if (!sk)
+		return -1;

-	return sport;
+	return ntohs(READ_ONCE(inet_sk(sk)->inet_dport));
 }

 /* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the


@@ -124,7 +124,9 @@ static struct sock *smc_tcp_syn_recv_sock(const struct sock *sk,
 					  struct request_sock *req,
 					  struct dst_entry *dst,
 					  struct request_sock *req_unhash,
-					  bool *own_req)
+					  bool *own_req,
+					  void (*opt_child_init)(struct sock *newsk,
+								 const struct sock *sk))
 {
 	struct smc_sock *smc;
 	struct sock *child;
@@ -142,7 +144,7 @@ static struct sock *smc_tcp_syn_recv_sock(const struct sock *sk,
 	/* passthrough to original syn recv sock fct */
 	child = smc->ori_af_ops->syn_recv_sock(sk, skb, req, dst, req_unhash,
-					       own_req);
+					       own_req, opt_child_init);
 	/* child must not inherit smc or its ops */
 	if (child) {
 		rcu_assign_sk_user_data(child, NULL);


@@ -674,7 +674,7 @@ static void __sock_release(struct socket *sock, struct inode *inode)
 		iput(SOCK_INODE(sock));
 		return;
 	}
-	sock->file = NULL;
+	WRITE_ONCE(sock->file, NULL);
 }

 /**


@@ -348,7 +348,8 @@ static bool tipc_service_insert_publ(struct net *net,
 	/* Return if the publication already exists */
 	list_for_each_entry(_p, &sr->all_publ, all_publ) {
-		if (_p->key == key && (!_p->sk.node || _p->sk.node == node)) {
+		if (_p->key == key && _p->sk.ref == p->sk.ref &&
+		    (!_p->sk.node || _p->sk.node == node)) {
 			pr_debug("Failed to bind duplicate %u,%u,%u/%u:%u/%u\n",
 				 p->sr.type, p->sr.lower, p->sr.upper,
 				 node, p->sk.ref, key);
@@ -388,7 +389,8 @@ static struct publication *tipc_service_remove_publ(struct service_range *r,
 	u32 node = sk->node;

 	list_for_each_entry(p, &r->all_publ, all_publ) {
-		if (p->key != key || (node && node != p->sk.node))
+		if (p->key != key || p->sk.ref != sk->ref ||
+		    (node && node != p->sk.node))
 			continue;
 		list_del(&p->all_publ);
 		list_del(&p->local_publ);


@@ -2533,7 +2533,7 @@ void tls_sw_cancel_work_tx(struct tls_context *tls_ctx)
 	set_bit(BIT_TX_CLOSING, &ctx->tx_bitmask);
 	set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask);
-	cancel_delayed_work_sync(&ctx->tx_work.work);
+	disable_delayed_work_sync(&ctx->tx_work.work);
 }

 void tls_sw_release_resources_tx(struct sock *sk)


@@ -90,16 +90,20 @@
  *
  * - /proc/sys/net/vsock/ns_mode (read-only) reports the current namespace's
  *   mode, which is set at namespace creation and immutable thereafter.
- * - /proc/sys/net/vsock/child_ns_mode (writable) controls what mode future
+ * - /proc/sys/net/vsock/child_ns_mode (write-once) controls what mode future
  *   child namespaces will inherit when created. The initial value matches
  *   the namespace's own ns_mode.
  *
  * Changing child_ns_mode only affects newly created namespaces, not the
  * current namespace or existing children. A "local" namespace cannot set
- * child_ns_mode to "global". At namespace creation, ns_mode is inherited
- * from the parent's child_ns_mode.
+ * child_ns_mode to "global". child_ns_mode is write-once, so that it may be
+ * configured and locked down by a namespace manager. Writing a different
+ * value after the first write returns -EBUSY. At namespace creation, ns_mode
+ * is inherited from the parent's child_ns_mode.
  *
- * The init_net mode is "global" and cannot be modified.
+ * The init_net mode is "global" and cannot be modified. The init_net
+ * child_ns_mode is also write-once, so an init process (e.g. systemd) can
+ * set it to "local" to ensure all new namespaces inherit local mode.
  *
  * The modes affect the allocation and accessibility of CIDs as follows:
  *
@@ -2825,7 +2829,7 @@ static int vsock_net_mode_string(const struct ctl_table *table, int write,
 	if (write)
 		return -EPERM;

-	net = current->nsproxy->net_ns;
+	net = container_of(table->data, struct net, vsock.mode);

 	return __vsock_net_mode_string(table, write, buffer, lenp, ppos,
 				       vsock_net_mode(net), NULL);
@@ -2838,7 +2842,7 @@ static int vsock_net_child_mode_string(const struct ctl_table *table, int write,
 	struct net *net;
 	int ret;

-	net = current->nsproxy->net_ns;
+	net = container_of(table->data, struct net, vsock.child_ns_mode);

 	ret = __vsock_net_mode_string(table, write, buffer, lenp, ppos,
 				      vsock_net_child_mode(net), &new_mode);
@@ -2853,7 +2857,8 @@ static int vsock_net_child_mode_string(const struct ctl_table *table, int write,
 		    new_mode == VSOCK_NET_MODE_GLOBAL)
 			return -EPERM;

-		vsock_net_set_child_mode(net, new_mode);
+		if (!vsock_net_set_child_mode(net, new_mode))
+			return -EBUSY;
 	}

 	return 0;
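The write-once semantics described in the comment block above can be modeled as a tiny latch. This is an illustrative toy, not the kernel implementation: `toy_ns`, `toy_set_child_mode`, and the mode enum are invented names, and the model only captures the "first write locks the value, a later differing write fails" rule, not the separate local-cannot-set-global check.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for a netns child_ns_mode sysctl value. */
enum toy_mode { TOY_MODE_GLOBAL, TOY_MODE_LOCAL };

struct toy_ns {
	enum toy_mode child_ns_mode;
	bool written; /* has child_ns_mode been explicitly written yet? */
};

/* Returns true on success; false models the -EBUSY the sysctl handler
 * returns when a different value is written after the first write. */
static bool toy_set_child_mode(struct toy_ns *ns, enum toy_mode mode)
{
	if (ns->written && ns->child_ns_mode != mode)
		return false;
	ns->child_ns_mode = mode;
	ns->written = true;
	return true;
}
```

A namespace manager can thus write the mode once at setup time and rely on later writers being unable to change it.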


@@ -1211,6 +1211,7 @@ void wiphy_unregister(struct wiphy *wiphy)
 	/* this has nothing to do now but make sure it's gone */
 	cancel_work_sync(&rdev->wiphy_work);
+	cancel_work_sync(&rdev->rfkill_block);
 	cancel_work_sync(&rdev->conn_work);
 	flush_work(&rdev->event_work);
 	cancel_delayed_work_sync(&rdev->dfs_update_channels_wk);


@@ -239,14 +239,14 @@ int ieee80211_radiotap_iterator_next(
 	default:
 		if (!iterator->current_namespace ||
 		    iterator->_arg_index >= iterator->current_namespace->n_bits) {
-			if (iterator->current_namespace == &radiotap_ns)
-				return -ENOENT;
 			align = 0;
 		} else {
 			align = iterator->current_namespace->align_size[iterator->_arg_index].align;
 			size = iterator->current_namespace->align_size[iterator->_arg_index].size;
 		}

 		if (!align) {
+			if (iterator->current_namespace == &radiotap_ns)
+				return -ENOENT;
 			/* skip all subsequent data */
 			iterator->_arg = iterator->_next_ns_data;
 			/* give up on this namespace */


@@ -683,7 +683,7 @@ static int cfg80211_wext_siwencodeext(struct net_device *dev,
 	idx = erq->flags & IW_ENCODE_INDEX;
 	if (cipher == WLAN_CIPHER_SUITE_AES_CMAC) {
-		if (idx < 4 || idx > 5) {
+		if (idx < 5 || idx > 6) {
 			idx = wdev->wext.default_mgmt_key;
 			if (idx < 0)
 				return -EINVAL;


@@ -536,7 +536,7 @@ static void espintcp_close(struct sock *sk, long timeout)
 	sk->sk_prot = &tcp_prot;
 	barrier();

-	cancel_work_sync(&ctx->work);
+	disable_work_sync(&ctx->work);

 	strp_done(&ctx->strp);
 	skb_queue_purge(&ctx->out_queue);


@@ -544,6 +544,14 @@ static int xfrm_dev_down(struct net_device *dev)
 	return NOTIFY_DONE;
 }

+static int xfrm_dev_unregister(struct net_device *dev)
+{
+	xfrm_dev_state_flush(dev_net(dev), dev, true);
+	xfrm_dev_policy_flush(dev_net(dev), dev, true);
+
+	return NOTIFY_DONE;
+}
+
 static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void *ptr)
 {
 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
@@ -556,8 +564,10 @@ static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void
 		return xfrm_api_check(dev);

 	case NETDEV_DOWN:
-	case NETDEV_UNREGISTER:
 		return xfrm_dev_down(dev);
+
+	case NETDEV_UNREGISTER:
+		return xfrm_dev_unregister(dev);
 	}
 	return NOTIFY_DONE;
 }


@@ -3801,8 +3801,8 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
 		struct xfrm_tmpl *tp[XFRM_MAX_DEPTH];
 		struct xfrm_tmpl *stp[XFRM_MAX_DEPTH];
 		struct xfrm_tmpl **tpp = tp;
+		int i, k = 0;
 		int ti = 0;
-		int i, k;

 		sp = skb_sec_path(skb);
 		if (!sp)
@@ -3828,6 +3828,12 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
 			tpp = stp;
 		}

+		if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET && sp == &dummy)
+			/* This policy template was already checked by HW
+			 * and secpath was removed in __xfrm_policy_check2.
+			 */
+			goto out;
+
 		/* For each tunnel xfrm, find the first matching tmpl.
 		 * For each tmpl before that, find corresponding xfrm.
 		 * Order is _important_. Later we will implement
@@ -3837,7 +3843,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
 		 * verified to allow them to be skipped in future policy
 		 * checks (e.g. nested tunnels).
 		 */
-		for (i = xfrm_nr-1, k = 0; i >= 0; i--) {
+		for (i = xfrm_nr - 1; i >= 0; i--) {
 			k = xfrm_policy_ok(tpp[i], sp, k, family, if_id);
 			if (k < 0) {
 				if (k < -1)
@@ -3853,6 +3859,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
 				goto reject;
 		}

+out:
 		xfrm_pols_put(pols, npols);
 		sp->verified_cnt = k;


@@ -4,13 +4,15 @@
 import datetime
 import random
 import re
+import time
 
 from lib.py import ksft_run, ksft_pr, ksft_exit
 from lib.py import ksft_eq, ksft_ne, ksft_ge, ksft_in, ksft_lt, ksft_true, ksft_raises
 from lib.py import NetDrvEpEnv
 from lib.py import EthtoolFamily, NetdevFamily
 from lib.py import KsftSkipEx, KsftFailEx
+from lib.py import ksft_disruptive
 from lib.py import rand_port
-from lib.py import ethtool, ip, defer, GenerateTraffic, CmdExitFailure
+from lib.py import cmd, ethtool, ip, defer, GenerateTraffic, CmdExitFailure, wait_file
def _rss_key_str(key): def _rss_key_str(key):
@@ -809,6 +811,98 @@ def test_rss_default_context_rule(cfg):
                              'noise' : (0, 1) })
 
 
+@ksft_disruptive
+def test_rss_context_persist_ifupdown(cfg, pre_down=False):
+    """
+    Test that RSS contexts and their associated ntuple filters persist across
+    an interface down/up cycle.
+    """
+    require_ntuple(cfg)
+
+    qcnt = len(_get_rx_cnts(cfg))
+    if qcnt < 6:
+        try:
+            ethtool(f"-L {cfg.ifname} combined 6")
+            defer(ethtool, f"-L {cfg.ifname} combined {qcnt}")
+        except Exception as exc:
+            raise KsftSkipEx("Not enough queues for the test") from exc
+
+    ethtool(f"-X {cfg.ifname} equal 2")
+    defer(ethtool, f"-X {cfg.ifname} default")
+
+    ifup = defer(ip, f"link set dev {cfg.ifname} up")
+    if pre_down:
+        ip(f"link set dev {cfg.ifname} down")
+
+    try:
+        ctx1_id = ethtool_create(cfg, "-X", "context new start 2 equal 2")
+        defer(ethtool, f"-X {cfg.ifname} context {ctx1_id} delete")
+    except CmdExitFailure as exc:
+        raise KsftSkipEx("Create context not supported with interface down") from exc
+    ctx2_id = ethtool_create(cfg, "-X", "context new start 4 equal 2")
+    defer(ethtool, f"-X {cfg.ifname} context {ctx2_id} delete")
+
+    port_ctx2 = rand_port()
+    flow = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port_ctx2} context {ctx2_id}"
+    ntuple_id = ethtool_create(cfg, "-N", flow)
+    defer(ethtool, f"-N {cfg.ifname} delete {ntuple_id}")
+
+    if not pre_down:
+        ip(f"link set dev {cfg.ifname} down")
+    ifup.exec()
+
+    wait_file(f"/sys/class/net/{cfg.ifname}/carrier",
+              lambda x: x.strip() == "1", deadline=20)
+
+    remote_addr = cfg.remote_addr_v[cfg.addr_ipver]
+    for _ in range(10):
+        if cmd(f"ping -c 1 -W 1 {remote_addr}", fail=False).ret == 0:
+            break
+        time.sleep(1)
+    else:
+        raise KsftSkipEx("Cannot reach remote host after interface up")
+
+    ctxs = cfg.ethnl.rss_get({'header': {'dev-name': cfg.ifname}}, dump=True)
+    data1 = [c for c in ctxs if c.get('context') == ctx1_id]
+    ksft_eq(len(data1), 1, f"Context {ctx1_id} should persist after ifup")
+    data2 = [c for c in ctxs if c.get('context') == ctx2_id]
+    ksft_eq(len(data2), 1, f"Context {ctx2_id} should persist after ifup")
+
+    _ntuple_rule_check(cfg, ntuple_id, ctx2_id)
+
+    cnts = _get_rx_cnts(cfg)
+    GenerateTraffic(cfg).wait_pkts_and_stop(20000)
+    cnts = _get_rx_cnts(cfg, prev=cnts)
+    main_traffic = sum(cnts[0:2])
+    ksft_ge(main_traffic, 18000, f"Main context traffic distribution: {cnts}")
+    ksft_lt(sum(cnts[2:6]), 500, f"Other context queues should be mostly empty: {cnts}")
+
+    _send_traffic_check(cfg, port_ctx2, f"context {ctx2_id}",
+                        {'target': (4, 5),
+                         'noise': (0, 1),
+                         'empty': (2, 3)})
+
+
+def test_rss_context_persist_create_and_ifdown(cfg):
+    """
+    Create RSS contexts then cycle the interface down and up.
+    """
+    test_rss_context_persist_ifupdown(cfg, pre_down=False)
+
+
+def test_rss_context_persist_ifdown_and_create(cfg):
+    """
+    Bring interface down first, then create RSS contexts and bring up.
+    """
+    test_rss_context_persist_ifupdown(cfg, pre_down=True)
+
+
 def main() -> None:
     with NetDrvEpEnv(__file__, nsim_test=False) as cfg:
         cfg.context_cnt = None
@@ -823,7 +917,9 @@ def main() -> None:
                   test_rss_context_out_of_order, test_rss_context4_create_with_cfg,
                   test_flow_add_context_missing,
                   test_delete_rss_context_busy, test_rss_ntuple_addition,
-                  test_rss_default_context_rule],
+                  test_rss_default_context_rule,
+                  test_rss_context_persist_create_and_ifdown,
+                  test_rss_context_persist_ifdown_and_create],
                  args=(cfg, ))
         ksft_exit()
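The new persistence test waits for the link, then retries a ping using Python's `for`/`else` idiom: the `else` branch runs only when the loop finishes without hitting `break`, i.e. when every attempt failed. A minimal standalone sketch of that retry pattern (`wait_until_reachable` and `attempt_once` are hypothetical names, not part of the selftest library):

```python
import time

def wait_until_reachable(attempt_once, retries=10, delay=0.01):
    """Call attempt_once up to `retries` times; raise if all attempts fail."""
    for _ in range(retries):
        if attempt_once():
            break          # reachable -- skips the else branch below
        time.sleep(delay)
    else:
        # only reached when the loop ran to completion without `break`
        raise TimeoutError("host unreachable after retries")

# Succeeds on the third attempt:
results = iter([False, False, True])
wait_until_reachable(lambda: next(results))
print("reachable")
```

The selftest raises `KsftSkipEx` in the `else` branch instead, so an unreachable remote skips the test rather than failing it.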


@@ -5,6 +5,7 @@ TEST_PROGS := \
 	dev_addr_lists.sh \
 	options.sh \
 	propagation.sh \
+	refleak.sh \
 	# end of TEST_PROGS
 
 TEST_INCLUDES := \


@@ -0,0 +1,17 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# shellcheck disable=SC2154
+
+lib_dir=$(dirname "$0")
+source "$lib_dir"/../../../net/lib.sh
+
+trap cleanup_all_ns EXIT
+
+# Test that there is no reference count leak and that dummy1 can be deleted.
+# https://lore.kernel.org/netdev/4d69abe1-ca8d-4f0b-bcf8-13899b211e57@I-love.SAKURA.ne.jp/
+setup_ns ns1 ns2
+ip -n "$ns1" link add name team1 type team
+ip -n "$ns1" link add name dummy1 mtu 1499 type dummy
+ip -n "$ns1" link set dev dummy1 master team1
+ip -n "$ns1" link set dev dummy1 netns "$ns2"
+ip -n "$ns2" link del dev dummy1


@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+// Some TCP stacks send FINs even though the window is closed. We break
+// a possible FIN/ACK loop by accepting the FIN.
+
+--mss=1000
+
+`./defaults.sh`
+
+// Establish a connection.
+ +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+ +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+ +0 setsockopt(3, SOL_SOCKET, SO_RCVBUF, [20000], 4) = 0
+ +0 bind(3, ..., ...) = 0
+ +0 listen(3, 1) = 0
+
+ +0 < S 0:0(0) win 32792 <mss 1000,nop,wscale 7>
+ +0 > S. 0:0(0) ack 1 <mss 1460,nop,wscale 0>
+ +0 < . 1:1(0) ack 1 win 257
+ +0 accept(3, ..., ...) = 4
+
+ +0 < P. 1:60001(60000) ack 1 win 257
+  * > . 1:1(0) ack 60001 win 0
+
+ +0 < F. 60001:60001(0) ack 1 win 257
+ +0 > . 1:1(0) ack 60002 win 0
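The packetdrill script's final ACK is worth unpacking: the 60000 bytes of data fill the receive window (hence `win 0`), and the FIN arrives at sequence 60001. A FIN consumes one sequence number, so accepting it means acknowledging 60002 even though the window is still closed. A tiny model of that cumulative-ACK arithmetic (`ack_for` is an illustrative helper, not packetdrill or kernel API):

```python
def ack_for(seg_start, payload_len, fin=False):
    """Sequence number a receiver ACKs after accepting this segment.
    A FIN occupies one sequence number of its own."""
    return seg_start + payload_len + (1 if fin else 0)

print(ack_for(1, 60000))            # 60001: after the 60000-byte burst
print(ack_for(60001, 0, fin=True))  # 60002: the FIN itself, despite win 0
```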


@@ -210,16 +210,21 @@ check_result() {
 }
 
 add_namespaces() {
-	local orig_mode
-
-	orig_mode=$(cat /proc/sys/net/vsock/child_ns_mode)
-
-	for mode in "${NS_MODES[@]}"; do
-		echo "${mode}" > /proc/sys/net/vsock/child_ns_mode
-		ip netns add "${mode}0" 2>/dev/null
-		ip netns add "${mode}1" 2>/dev/null
-	done
-
-	echo "${orig_mode}" > /proc/sys/net/vsock/child_ns_mode
+	ip netns add "global-parent" 2>/dev/null
+	echo "global" | ip netns exec "global-parent" \
+		tee /proc/sys/net/vsock/child_ns_mode &>/dev/null
+	ip netns add "local-parent" 2>/dev/null
+	echo "local" | ip netns exec "local-parent" \
+		tee /proc/sys/net/vsock/child_ns_mode &>/dev/null
+
+	nsenter --net=/var/run/netns/global-parent \
+		ip netns add "global0" 2>/dev/null
+	nsenter --net=/var/run/netns/global-parent \
+		ip netns add "global1" 2>/dev/null
+	nsenter --net=/var/run/netns/local-parent \
+		ip netns add "local0" 2>/dev/null
+	nsenter --net=/var/run/netns/local-parent \
+		ip netns add "local1" 2>/dev/null
 }
 
 init_namespaces() {
@@ -237,6 +242,8 @@ del_namespaces() {
 		log_host "removed ns ${mode}0"
 		log_host "removed ns ${mode}1"
 	done
+	ip netns del "global-parent" &>/dev/null
+	ip netns del "local-parent" &>/dev/null
 }
 
 vm_ssh() {
@@ -287,7 +294,7 @@ check_args() {
 }
 
 check_deps() {
-	for dep in vng ${QEMU} busybox pkill ssh ss socat; do
+	for dep in vng ${QEMU} busybox pkill ssh ss socat nsenter; do
 		if [[ ! -x $(command -v "${dep}") ]]; then
 			echo -e "skip: dependency ${dep} not found!\n"
 			exit "${KSFT_SKIP}"
@@ -1231,12 +1238,8 @@ test_ns_local_same_cid_ok() {
 }
 
 test_ns_host_vsock_child_ns_mode_ok() {
-	local orig_mode
-	local rc
-
-	orig_mode=$(cat /proc/sys/net/vsock/child_ns_mode)
-	rc="${KSFT_PASS}"
+	local rc="${KSFT_PASS}"
 
 	for mode in "${NS_MODES[@]}"; do
 		local ns="${mode}0"
@@ -1246,15 +1249,13 @@ test_ns_host_vsock_child_ns_mode_ok() {
 			continue
 		fi
 
-		if ! echo "${mode}" > /proc/sys/net/vsock/child_ns_mode; then
-			log_host "child_ns_mode should be writable to ${mode}"
+		if ! echo "${mode}" | ip netns exec "${ns}" \
+			tee /proc/sys/net/vsock/child_ns_mode &>/dev/null; then
 			rc="${KSFT_FAIL}"
 			continue
 		fi
 	done
 
-	echo "${orig_mode}" > /proc/sys/net/vsock/child_ns_mode
-
 	return "${rc}"
 }
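The vsock test rework above follows from the "lock down child_ns_mode as write-once" fix noted in the cover letter: the test can no longer toggle the host's `/proc/sys/net/vsock/child_ns_mode` back and forth, so it now sets the mode once inside freshly created parent namespaces. A toy model of write-once semantics (the `WriteOnce` class is purely illustrative, not kernel or selftest code):

```python
# Sketch: a value that may be set at most once, mirroring the sysctl's
# new behavior -- further writes are rejected rather than applied.
class WriteOnce:
    def __init__(self, default):
        self._value = default
        self._written = False

    @property
    def value(self):
        return self._value

    def write(self, new):
        if self._written:
            raise PermissionError("child_ns_mode may only be set once")
        self._value = new
        self._written = True

mode = WriteOnce("global")
mode.write("local")   # first write succeeds
print(mode.value)     # local
```

This is why `add_namespaces()` writes the mode immediately after creating each parent namespace, and why the old save-and-restore of `orig_mode` was dropped.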