Merge tag 'net-6.16-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Including fixes from Bluetooth, CAN, WiFi and Netfilter.
More code here than I would have liked. That said, better now than
next week. Nothing particularly scary stands out. The improvements to
the OpenVPN input validation are a bit large, but better to get them
in before the code makes it to a final release. Some of the changes we
got from sub-trees could have been split better between the fix and
the -next refactoring, IMHO; that has been communicated.
We have one known regression in a TI AM65 board not getting link. The
investigation is going a bit slowly, as a number of people are on
vacation. We'll try to wrap it up, but we don't think it should hold
up the release.
Current release - fix to a fix:
- Bluetooth: L2CAP: fix attempting to adjust outgoing MTU, it broke
some headphones and speakers
Current release - regressions:
- wifi: ath12k: fix packets received in WBM error ring with REO LUT
enabled, fix Rx performance regression
- wifi: iwlwifi:
- fix crash due to a botched indexing conversion
- mask reserved bits in chan_state_active_bitmap, avoid FW assert()
Current release - new code bugs:
- nf_conntrack: fix crash due to removal of uninitialised entry
- eth: airoha: fix potential UaF in airoha_npu_get()
Previous releases - regressions:
- net: fix segmentation after TCP/UDP fraglist GRO
- af_packet: fix the SO_SNDTIMEO constraint not taking effect and a
potential soft lockup waiting for a completion
- rpl: fix UaF in rpl_do_srh_inline() for sneaky skb geometry
- virtio-net: fix recursive rtnl_lock() during probe()
- eth: stmmac: populate entire system_counterval_t in get_time_fn()
- eth: libwx: fix a number of crashes in the driver Rx path
- hv_netvsc: prevent IPv6 addrconf after IFF_SLAVE lost that meaning
Previous releases - always broken:
- mptcp: fix races in handling connection fallback to pure TCP
- rxrpc: assorted error handling and race fixes
- sched: another batch of "security" fixes for qdiscs (QFQ, HTB)
- tls: always refresh the queue when reading sock, avoid UaF
- phy: don't register LEDs for genphy, avoid deadlock
- Bluetooth: btintel: check if controller is ISO capable on
btintel_classify_pkt_type(), work around FW returning incorrect
capabilities
Misc:
- make OpenVPN Netlink input checking more strict before it makes it
to a final release
- wifi: cfg80211: remove scan request n_channels __counted_by, it's
only yielding false positives"
* tag 'net-6.16-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (66 commits)
rxrpc: Fix to use conn aborts for conn-wide failures
rxrpc: Fix transmission of an abort in response to an abort
rxrpc: Fix notification vs call-release vs recvmsg
rxrpc: Fix recv-recv race of completed call
rxrpc: Fix irq-disabled in local_bh_enable()
selftests/tc-testing: Test htb_dequeue_tree with deactivation and row emptying
net/sched: Return NULL when htb_lookup_leaf encounters an empty rbtree
net: bridge: Do not offload IGMP/MLD messages
selftests: Add test cases for vlan_filter modification during runtime
net: vlan: fix VLAN 0 refcount imbalance of toggling filtering during runtime
tls: always refresh the queue when reading sock
virtio-net: fix recursived rtnl_lock() during probe()
net/mlx5: Update the list of the PCI supported devices
hv_netvsc: Set VF priv_flags to IFF_NO_ADDRCONF before open to prevent IPv6 addrconf
phonet/pep: Move call to pn_skb_get_dst_sockaddr() earlier in pep_sock_accept()
Bluetooth: L2CAP: Fix attempting to adjust outgoing MTU
netfilter: nf_conntrack: fix crash due to removal of uninitialised entry
net: fix segmentation after TCP/UDP fraglist GRO
ipv6: mcast: Delay put pmc->idev in mld_del_delrec()
net: airoha: fix potential use-after-free in airoha_npu_get()
...
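Much of the Bluetooth driver churn in the diffs below is one mechanical conversion: drivers used to poke quirk bits directly in the `hdev->quirks` bitmap with `set_bit()`/`clear_bit()`, and now go through `hci_set_quirk()`/`hci_clear_quirk()` accessors instead. A minimal self-contained sketch of that accessor pattern, with simplified stand-ins for the kernel's `struct hci_dev` and quirk constants (these are illustrative definitions, not the real kernel ones):

```c
#include <assert.h>

/* Stand-in quirk IDs; the real HCI_QUIRK_* values live in the kernel. */
enum { HCI_QUIRK_RESET_ON_CLOSE = 3, HCI_QUIRK_INVALID_BDADDR = 5 };

/* Stand-in for the kernel's struct hci_dev. */
struct hci_dev {
	unsigned long quirks;	/* bitmap of HCI_QUIRK_* flags */
};

/* Old style (before this merge):
 *	set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
 * New style: accessors hide how the quirk flags are stored. */
static void hci_set_quirk(struct hci_dev *hdev, int quirk)
{
	hdev->quirks |= 1UL << quirk;
}

static void hci_clear_quirk(struct hci_dev *hdev, int quirk)
{
	hdev->quirks &= ~(1UL << quirk);
}

static int hci_test_quirk(const struct hci_dev *hdev, int quirk)
{
	return !!(hdev->quirks & (1UL << quirk));
}
```

The behavior is unchanged; only the call sites move from open-coded bit operations to named helpers, which is why the diffs are one-line substitutions.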
commit 6832a9317e
113 changed files with 1591 additions and 549 deletions
@@ -160,6 +160,66 @@ attribute-sets:
         name: link-tx-packets
         type: uint
         doc: Number of packets transmitted at the transport level
+  -
+    name: peer-new-input
+    subset-of: peer
+    attributes:
+      -
+        name: id
+      -
+        name: remote-ipv4
+      -
+        name: remote-ipv6
+      -
+        name: remote-ipv6-scope-id
+      -
+        name: remote-port
+      -
+        name: socket
+      -
+        name: vpn-ipv4
+      -
+        name: vpn-ipv6
+      -
+        name: local-ipv4
+      -
+        name: local-ipv6
+      -
+        name: keepalive-interval
+      -
+        name: keepalive-timeout
+  -
+    name: peer-set-input
+    subset-of: peer
+    attributes:
+      -
+        name: id
+      -
+        name: remote-ipv4
+      -
+        name: remote-ipv6
+      -
+        name: remote-ipv6-scope-id
+      -
+        name: remote-port
+      -
+        name: vpn-ipv4
+      -
+        name: vpn-ipv6
+      -
+        name: local-ipv4
+      -
+        name: local-ipv6
+      -
+        name: keepalive-interval
+      -
+        name: keepalive-timeout
+  -
+    name: peer-del-input
+    subset-of: peer
+    attributes:
+      -
+        name: id
   -
     name: keyconf
     attributes:

@@ -216,6 +276,33 @@ attribute-sets:
           obtain the actual cipher IV
         checks:
           exact-len: nonce-tail-size
+  -
+    name: keyconf-get
+    subset-of: keyconf
+    attributes:
+      -
+        name: peer-id
+      -
+        name: slot
+      -
+        name: key-id
+      -
+        name: cipher-alg
+  -
+    name: keyconf-swap-input
+    subset-of: keyconf
+    attributes:
+      -
+        name: peer-id
+  -
+    name: keyconf-del-input
+    subset-of: keyconf
+    attributes:
+      -
+        name: peer-id
+      -
+        name: slot
   -
     name: ovpn
     attributes:

@@ -235,12 +322,66 @@ attribute-sets:
         type: nest
         doc: Peer specific cipher configuration
         nested-attributes: keyconf
+  -
+    name: ovpn-peer-new-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: peer
+        nested-attributes: peer-new-input
+  -
+    name: ovpn-peer-set-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: peer
+        nested-attributes: peer-set-input
+  -
+    name: ovpn-peer-del-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: peer
+        nested-attributes: peer-del-input
+  -
+    name: ovpn-keyconf-get
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: keyconf
+        nested-attributes: keyconf-get
+  -
+    name: ovpn-keyconf-swap-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: keyconf
+        nested-attributes: keyconf-swap-input
+  -
+    name: ovpn-keyconf-del-input
+    subset-of: ovpn
+    attributes:
+      -
+        name: ifindex
+      -
+        name: keyconf
+        nested-attributes: keyconf-del-input
 
 operations:
   list:
     -
       name: peer-new
-      attribute-set: ovpn
+      attribute-set: ovpn-peer-new-input
       flags: [ admin-perm ]
       doc: Add a remote peer
       do:

@@ -252,7 +393,7 @@ operations:
           - peer
     -
       name: peer-set
-      attribute-set: ovpn
+      attribute-set: ovpn-peer-set-input
       flags: [ admin-perm ]
       doc: modify a remote peer
       do:

@@ -286,7 +427,7 @@ operations:
           - peer
     -
       name: peer-del
-      attribute-set: ovpn
+      attribute-set: ovpn-peer-del-input
       flags: [ admin-perm ]
       doc: Delete existing remote peer
       do:

@@ -316,7 +457,7 @@ operations:
           - keyconf
     -
       name: key-get
-      attribute-set: ovpn
+      attribute-set: ovpn-keyconf-get
       flags: [ admin-perm ]
      doc: Retrieve non-sensitive data about peer key and cipher
       do:

@@ -331,7 +472,7 @@ operations:
           - keyconf
     -
       name: key-swap
-      attribute-set: ovpn
+      attribute-set: ovpn-keyconf-swap-input
       flags: [ admin-perm ]
       doc: Swap primary and secondary session keys for a specific peer
       do:

@@ -350,7 +491,7 @@ operations:
       mcgrp: peers
     -
       name: key-del
-      attribute-set: ovpn
+      attribute-set: ovpn-keyconf-del-input
       flags: [ admin-perm ]
       doc: Delete cipher key for a specific peer
       do:
@@ -670,7 +670,7 @@ static int bfusb_probe(struct usb_interface *intf, const struct usb_device_id *i
 	hdev->flush = bfusb_flush;
 	hdev->send  = bfusb_send_frame;
 
-	set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS);
 
 	if (hci_register_dev(hdev) < 0) {
 		BT_ERR("Can't register HCI device");
@@ -398,7 +398,7 @@ static int bpa10x_probe(struct usb_interface *intf,
 	hdev->send = bpa10x_send_frame;
 	hdev->set_diag = bpa10x_set_diag;
 
-	set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 	err = hci_register_dev(hdev);
 	if (err < 0) {
@@ -135,7 +135,7 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
 		if (btbcm_set_bdaddr_from_efi(hdev) != 0) {
 			bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
 				    &bda->bdaddr);
-			set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 		}
 	}
 

@@ -467,7 +467,7 @@ static int btbcm_print_controller_features(struct hci_dev *hdev)
 
 	/* Read DMI and disable broken Read LE Min/Max Tx Power */
 	if (dmi_first_match(disable_broken_read_transmit_power))
-		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER);
 
 	return 0;
 }

@@ -706,7 +706,7 @@ int btbcm_finalize(struct hci_dev *hdev, bool *fw_load_done, bool use_autobaud_m
 
 	btbcm_check_bdaddr(hdev);
 
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
 
 	return 0;
 }

@@ -769,7 +769,7 @@ int btbcm_setup_apple(struct hci_dev *hdev)
 		kfree_skb(skb);
 	}
 
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
 
 	return 0;
 }
@@ -88,7 +88,7 @@ int btintel_check_bdaddr(struct hci_dev *hdev)
 	if (!bacmp(&bda->bdaddr, BDADDR_INTEL)) {
 		bt_dev_err(hdev, "Found Intel default device address (%pMR)",
 			   &bda->bdaddr);
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 
 	kfree_skb(skb);

@@ -2027,7 +2027,7 @@ static int btintel_download_fw(struct hci_dev *hdev,
 	 */
 	if (!bacmp(&params->otp_bdaddr, BDADDR_ANY)) {
 		bt_dev_info(hdev, "No device address configured");
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}
 
 download:

@@ -2295,7 +2295,7 @@ static int btintel_prepare_fw_download_tlv(struct hci_dev *hdev,
 		 */
 		if (!bacmp(&ver->otp_bd_addr, BDADDR_ANY)) {
 			bt_dev_info(hdev, "No device address configured");
-			set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 		}
 	}
 

@@ -2670,7 +2670,7 @@ static u8 btintel_classify_pkt_type(struct hci_dev *hdev, struct sk_buff *skb)
 	 * Distinguish ISO data packets form ACL data packets
 	 * based on their connection handle value range.
 	 */
-	if (hci_skb_pkt_type(skb) == HCI_ACLDATA_PKT) {
+	if (iso_capable(hdev) && hci_skb_pkt_type(skb) == HCI_ACLDATA_PKT) {
 		__u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);
 
 		if (hci_handle(handle) >= BTINTEL_ISODATA_HANDLE_BASE)

@@ -3435,9 +3435,9 @@ static int btintel_setup_combined(struct hci_dev *hdev)
 	}
 
 	/* Apply the common HCI quirks for Intel device */
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-	set_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG);
 
 	/* Set up the quality report callback for Intel devices */
 	hdev->set_quality_report = btintel_set_quality_report;

@@ -3475,8 +3475,8 @@ static int btintel_setup_combined(struct hci_dev *hdev)
 		 */
 		if (!btintel_test_flag(hdev,
 				       INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
-			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
-				&hdev->quirks);
+			hci_set_quirk(hdev,
+				      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		err = btintel_legacy_rom_setup(hdev, &ver);
 		break;

@@ -3491,11 +3491,11 @@ static int btintel_setup_combined(struct hci_dev *hdev)
 		 *
 		 * All Legacy bootloader devices support WBS
 		 */
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
-			&hdev->quirks);
+		hci_set_quirk(hdev,
+			      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		/* These variants don't seem to support LE Coded PHY */
-		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
 
 		/* Setup MSFT Extension support */
 		btintel_set_msft_opcode(hdev, ver.hw_variant);

@@ -3571,10 +3571,10 @@ static int btintel_setup_combined(struct hci_dev *hdev)
 		 *
 		 * All Legacy bootloader devices support WBS
 		 */
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		/* These variants don't seem to support LE Coded PHY */
-		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
 
 		/* Setup MSFT Extension support */
 		btintel_set_msft_opcode(hdev, ver.hw_variant);

@@ -3600,7 +3600,7 @@ static int btintel_setup_combined(struct hci_dev *hdev)
 		 *
 		 * All TLV based devices support WBS
 		 */
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		/* Setup MSFT Extension support */
 		btintel_set_msft_opcode(hdev,
@@ -2081,9 +2081,9 @@ static int btintel_pcie_setup_internal(struct hci_dev *hdev)
 	}
 
 	/* Apply the common HCI quirks for Intel device */
-	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-	set_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG);
 
 	/* Set up the quality report callback for Intel devices */
 	hdev->set_quality_report = btintel_set_quality_report;

@@ -2123,7 +2123,7 @@ static int btintel_pcie_setup_internal(struct hci_dev *hdev)
 	 *
 	 * All TLV based devices support WBS
 	 */
-	set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	/* Setup MSFT Extension support */
 	btintel_set_msft_opcode(hdev,
@@ -1141,7 +1141,7 @@ static int btmtksdio_setup(struct hci_dev *hdev)
 	}
 
 	/* Enable WBS with mSBC codec */
-	set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	/* Enable GPIO reset mechanism */
 	if (bdev->reset) {

@@ -1384,7 +1384,7 @@ static int btmtksdio_probe(struct sdio_func *func,
 	SET_HCIDEV_DEV(hdev, &func->dev);
 
 	hdev->manufacturer = 70;
-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
 
 	sdio_set_drvdata(func, bdev);
@@ -872,7 +872,7 @@ static int btmtkuart_probe(struct serdev_device *serdev)
 	SET_HCIDEV_DEV(hdev, &serdev->dev);
 
 	hdev->manufacturer = 70;
-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
 
 	if (btmtkuart_is_standalone(bdev)) {
 		err = clk_prepare_enable(bdev->osc);
@@ -1807,7 +1807,7 @@ static int nxp_serdev_probe(struct serdev_device *serdev)
 				      "local-bd-address",
 				      (u8 *)&ba, sizeof(ba));
 	if (bacmp(&ba, BDADDR_ANY))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	if (hci_register_dev(hdev) < 0) {
 		dev_err(&serdev->dev, "Can't register HCI device\n");
@@ -739,7 +739,7 @@ static int qca_check_bdaddr(struct hci_dev *hdev, const struct qca_fw_config *co
 
 	bda = (struct hci_rp_read_bd_addr *)skb->data;
 	if (!bacmp(&bda->bdaddr, &config->bdaddr))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	kfree_skb(skb);
@@ -117,7 +117,7 @@ static int btqcomsmd_setup(struct hci_dev *hdev)
 	/* Devices do not have persistent storage for BD address. Retrieve
 	 * it from the firmware node property.
 	 */
-	set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 
 	return 0;
 }
@@ -1287,7 +1287,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
 	/* Enable controller to do both LE scan and BR/EDR inquiry
 	 * simultaneously.
 	 */
-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 
 	/* Enable central-peripheral role (able to create new connections with
 	 * an existing connection in slave role).

@@ -1301,7 +1301,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
 	case CHIP_ID_8851B:
 	case CHIP_ID_8922A:
 	case CHIP_ID_8852BT:
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 		/* RTL8852C needs to transmit mSBC data continuously without
 		 * the zero length of USB packets for the ALT 6 supported chips

@@ -1312,7 +1312,8 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
 		if (btrtl_dev->project_id == CHIP_ID_8852A ||
 		    btrtl_dev->project_id == CHIP_ID_8852B ||
 		    btrtl_dev->project_id == CHIP_ID_8852C)
-			set_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks);
+			hci_set_quirk(hdev,
+				      HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER);
 
 		hci_set_aosp_capable(hdev);
 		break;

@@ -1331,8 +1332,7 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
 		 * but it doesn't support any features from page 2 -
 		 * it either responds with garbage or with error status
 		 */
-		set_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
-			&hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2);
 		break;
 	default:
 		break;
@@ -327,7 +327,7 @@ static int btsdio_probe(struct sdio_func *func,
 	hdev->send = btsdio_send_frame;
 
 	if (func->vendor == 0x0104 && func->device == 0x00c5)
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 	err = hci_register_dev(hdev);
 	if (err < 0) {
@@ -2472,18 +2472,18 @@ static int btusb_setup_csr(struct hci_dev *hdev)
 	 * Probably will need to be expanded in the future;
 	 * without these the controller will lock up.
 	 */
-	set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks);
-	set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &hdev->quirks);
-	set_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL);
+	hci_set_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_VOICE_SETTING);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE);
 
 	/* Clear the reset quirk since this is not an actual
 	 * early Bluetooth 1.1 device from CSR.
 	 */
-	clear_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
-	clear_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+	hci_clear_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
+	hci_clear_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 
 	/*
 	 * Special workaround for these BT 4.0 chip clones, and potentially more:
@@ -3192,6 +3192,32 @@ static const struct qca_device_info qca_devices_table[] = {
 	{ 0x00190200, 40, 4, 16 }, /* WCN785x 2.0 */
 };
 
+static u16 qca_extract_board_id(const struct qca_version *ver)
+{
+	u16 flag = le16_to_cpu(ver->flag);
+	u16 board_id = 0;
+
+	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
+		/* The board_id should be split into two bytes
+		 * The 1st byte is chip ID, and the 2nd byte is platform ID
+		 * For example, board ID 0x010A, 0x01 is platform ID. 0x0A is chip ID
+		 * we have several platforms, and platform IDs are continuously added
+		 * Platform ID:
+		 * 0x00 is for Mobile
+		 * 0x01 is for X86
+		 * 0x02 is for Automotive
+		 * 0x03 is for Consumer electronic
+		 */
+		board_id = (ver->chip_id << 8) + ver->platform_id;
+	}
+
+	/* Take 0xffff as invalid board ID */
+	if (board_id == 0xffff)
+		board_id = 0;
+
+	return board_id;
+}
+
 static int btusb_qca_send_vendor_req(struct usb_device *udev, u8 request,
 				     void *data, u16 size)
 {
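The new qca_extract_board_id() helper above packs the board ID as `(chip_id << 8) + platform_id`, with two guards: the ID only exists when the firmware flag signals a multi-NVM build, and an all-ones value is treated as invalid. A self-contained sketch of that logic, outside the kernel (the QCA_FLAG_MULTI_NVM value here is assumed for illustration, and the fixed-width types stand in for the kernel's u16/u8):

```c
#include <assert.h>
#include <stdint.h>

#define QCA_FLAG_MULTI_NVM 0x80	/* assumed value, for illustration only */

/* Mirrors the packing in qca_extract_board_id(): chip ID in the high
 * byte, platform ID in the low byte; 0 when the firmware image does
 * not carry per-board NVM data or when the ID is the 0xffff marker. */
static uint16_t extract_board_id(uint16_t flag, uint8_t chip_id,
				 uint8_t platform_id)
{
	uint16_t board_id = 0;

	/* Only multi-NVM firmware encodes a board ID at all. */
	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM)
		board_id = (uint16_t)((chip_id << 8) + platform_id);

	/* Treat all-ones as "no board ID". */
	if (board_id == 0xffff)
		board_id = 0;

	return board_id;
}
```

Factoring this out lets the NVM filename builder below ask for the board ID once instead of re-deriving it inline.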
@@ -3348,44 +3374,28 @@ static void btusb_generate_qca_nvm_name(char *fwname, size_t max_size,
 					const struct qca_version *ver)
 {
 	u32 rom_version = le32_to_cpu(ver->rom_version);
-	u16 flag = le16_to_cpu(ver->flag);
+	const char *variant;
+	int len;
+	u16 board_id;
 
-	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
-		/* The board_id should be split into two bytes
-		 * The 1st byte is chip ID, and the 2nd byte is platform ID
-		 * For example, board ID 0x010A, 0x01 is platform ID. 0x0A is chip ID
-		 * we have several platforms, and platform IDs are continuously added
-		 * Platform ID:
-		 * 0x00 is for Mobile
-		 * 0x01 is for X86
-		 * 0x02 is for Automotive
-		 * 0x03 is for Consumer electronic
-		 */
-		u16 board_id = (ver->chip_id << 8) + ver->platform_id;
-		const char *variant;
+	board_id = qca_extract_board_id(ver);
 
-		switch (le32_to_cpu(ver->ram_version)) {
-		case WCN6855_2_0_RAM_VERSION_GF:
-		case WCN6855_2_1_RAM_VERSION_GF:
-			variant = "_gf";
-			break;
-		default:
-			variant = "";
-			break;
-		}
-
-		if (board_id == 0) {
-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s.bin",
-				rom_version, variant);
-		} else {
-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s_%04x.bin",
-				rom_version, variant, board_id);
-		}
-	} else {
-		snprintf(fwname, max_size, "qca/nvm_usb_%08x.bin",
-			rom_version);
+	switch (le32_to_cpu(ver->ram_version)) {
+	case WCN6855_2_0_RAM_VERSION_GF:
+	case WCN6855_2_1_RAM_VERSION_GF:
+		variant = "_gf";
+		break;
+	default:
+		variant = NULL;
+		break;
 	}
+
+	len = snprintf(fwname, max_size, "qca/nvm_usb_%08x", rom_version);
+	if (variant)
+		len += snprintf(fwname + len, max_size - len, "%s", variant);
+	if (board_id)
+		len += snprintf(fwname + len, max_size - len, "_%04x", board_id);
+	len += snprintf(fwname + len, max_size - len, ".bin");
 }
 
 static int btusb_setup_qca_load_nvm(struct hci_dev *hdev,
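The refactored filename builder above replaces three format-string branches with incremental appends: one snprintf per optional component, each advancing an offset so the next piece lands after the last. A standalone sketch of that append pattern (build_nvm_name is a hypothetical stand-in for the kernel helper, with plain stdint types):

```c
#include <stdint.h>
#include <stdio.h>

/* Builds "qca/nvm_usb_<rom>[variant][_<board>].bin" the way the
 * refactored helper does: a base snprintf, then optional suffixes
 * appended at fwname + len so each branch stays a single line. */
static void build_nvm_name(char *fwname, size_t max_size,
			   uint32_t rom_version, const char *variant,
			   uint16_t board_id)
{
	int len;

	len = snprintf(fwname, max_size, "qca/nvm_usb_%08x", rom_version);
	if (variant)		/* e.g. "_gf" for the GF RAM variants */
		len += snprintf(fwname + len, max_size - len, "%s", variant);
	if (board_id)		/* 0 means "no per-board NVM" */
		len += snprintf(fwname + len, max_size - len, "_%04x", board_id);
	snprintf(fwname + len, max_size - len, ".bin");
}
```

The design win is that adding another optional suffix is one more `if` rather than doubling the number of complete format strings.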
@@ -3494,7 +3504,7 @@ static int btusb_setup_qca(struct hci_dev *hdev)
 	/* Mark HCI_OP_ENHANCED_SETUP_SYNC_CONN as broken as it doesn't seem to
 	 * work with the likes of HSP/HFP mSBC.
 	 */
-	set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN);
 
 	return 0;
 }

@@ -4008,10 +4018,10 @@ static int btusb_probe(struct usb_interface *intf,
 	}
 #endif
 	if (id->driver_info & BTUSB_CW6622)
-		set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
 
 	if (id->driver_info & BTUSB_BCM2045)
-		set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY);
 
 	if (id->driver_info & BTUSB_BCM92035)
 		hdev->setup = btusb_setup_bcm92035;

@@ -4068,8 +4078,8 @@ static int btusb_probe(struct usb_interface *intf,
 		hdev->reset = btmtk_reset_sync;
 		hdev->set_bdaddr = btmtk_set_bdaddr;
 		hdev->send = btusb_send_frame_mtk;
-		set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
-		set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN);
+		hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
 		data->recv_acl = btmtk_usb_recv_acl;
 		data->suspend = btmtk_usb_suspend;
 		data->resume = btmtk_usb_resume;

@@ -4077,20 +4087,20 @@ static int btusb_probe(struct usb_interface *intf,
 	}
 
 	if (id->driver_info & BTUSB_SWAVE) {
-		set_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS);
 	}
 
 	if (id->driver_info & BTUSB_INTEL_BOOT) {
 		hdev->manufacturer = 2;
-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
 	}
 
 	if (id->driver_info & BTUSB_ATH3012) {
 		data->setup_on_usb = btusb_setup_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-		set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+		hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
 	}
 
 	if (id->driver_info & BTUSB_QCA_ROME) {

@@ -4098,7 +4108,7 @@ static int btusb_probe(struct usb_interface *intf,
 		hdev->shutdown = btusb_shutdown_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
 		hdev->reset = btusb_qca_reset;
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 		btusb_check_needs_reset_resume(intf);
 	}

@@ -4112,7 +4122,7 @@ static int btusb_probe(struct usb_interface *intf,
 		hdev->shutdown = btusb_shutdown_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_wcn6855;
 		hdev->reset = btusb_qca_reset;
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
 		hci_set_msft_opcode(hdev, 0xFD70);
 	}

@@ -4140,35 +4150,35 @@ static int btusb_probe(struct usb_interface *intf,
 
 	if (id->driver_info & BTUSB_ACTIONS_SEMI) {
 		/* Support is advertised, but not implemented */
-		set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &hdev->quirks);
-		set_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_SCAN);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_CREATE_CONN);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT);
 	}
 
 	if (!reset)
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
 
 	if (force_scofix || id->driver_info & BTUSB_WRONG_SCO_MTU) {
 		if (!disable_scofix)
-			set_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_FIXUP_BUFFER_SIZE);
 	}
 
 	if (id->driver_info & BTUSB_BROKEN_ISOC)
 		data->isoc = NULL;
 
 	if (id->driver_info & BTUSB_WIDEBAND_SPEECH)
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 
 	if (id->driver_info & BTUSB_INVALID_LE_STATES)
-		set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES);
 
 	if (id->driver_info & BTUSB_DIGIANSWER) {
 		data->cmdreq_type = USB_TYPE_VENDOR;
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
|
||||
}
|
||||
|
||||
if (id->driver_info & BTUSB_CSR) {
|
||||
|
|
@ -4177,10 +4187,10 @@ static int btusb_probe(struct usb_interface *intf,
|
|||
|
||||
/* Old firmware would otherwise execute USB reset */
|
||||
if (bcdDevice < 0x117)
|
||||
set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
|
||||
hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);
|
||||
|
||||
/* This must be set first in case we disable it for fakes */
|
||||
set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
|
||||
hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
|
||||
|
||||
/* Fake CSR devices with broken commands */
|
||||
if (le16_to_cpu(udev->descriptor.idVendor) == 0x0a12 &&
|
||||
|
|
@ -4193,7 +4203,7 @@ static int btusb_probe(struct usb_interface *intf,
|
|||
|
||||
/* New sniffer firmware has crippled HCI interface */
|
||||
if (le16_to_cpu(udev->descriptor.bcdDevice) > 0x997)
|
||||
set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
|
||||
hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);
|
||||
}
|
||||
|
||||
if (id->driver_info & BTUSB_INTEL_BOOT) {
|
||||
|
|
|
|||
|
|
@@ -424,7 +424,7 @@ static int aml_check_bdaddr(struct hci_dev *hdev)

 	if (!bacmp(&paddr->bdaddr, AML_BDADDR_DEFAULT)) {
 		bt_dev_info(hdev, "amlbt using default bdaddr (%pM)", &paddr->bdaddr);
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}

 exit:

@@ -643,8 +643,8 @@ static int bcm_setup(struct hci_uart *hu)
 	 * Allow the bootloader to set a valid address through the
 	 * device tree.
 	 */
-	if (test_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hu->hdev->quirks);
+	if (hci_test_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR))
+		hci_set_quirk(hu->hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);

 	if (!bcm_request_irq(bcm))
 		err = bcm_setup_sleep(hu);

@@ -1435,7 +1435,7 @@ static int bcm4377_check_bdaddr(struct bcm4377_data *bcm4377)

 	bda = (struct hci_rp_read_bd_addr *)skb->data;
 	if (!bcm4377_is_valid_bdaddr(bcm4377, &bda->bdaddr))
-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &bcm4377->hdev->quirks);
+		hci_set_quirk(bcm4377->hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);

 	kfree_skb(skb);
 	return 0;

@@ -2389,13 +2389,13 @@ static int bcm4377_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	hdev->setup = bcm4377_hci_setup;

 	if (bcm4377->hw->broken_mws_transport_config)
-		set_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG);
 	if (bcm4377->hw->broken_ext_scan)
-		set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_EXT_SCAN);
 	if (bcm4377->hw->broken_le_coded)
-		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_CODED);
 	if (bcm4377->hw->broken_le_ext_adv_report_phy)
-		set_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY);

 	pci_set_drvdata(pdev, bcm4377);
 	hci_set_drvdata(hdev, bcm4377);

@@ -660,7 +660,7 @@ static int intel_setup(struct hci_uart *hu)
 	 */
 	if (!bacmp(&params.otp_bdaddr, BDADDR_ANY)) {
 		bt_dev_info(hdev, "No device address configured");
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_INVALID_BDADDR);
 	}

 	/* With this Intel bootloader only the hardware variant and device

@@ -667,13 +667,13 @@ static int hci_uart_register_dev(struct hci_uart *hu)
 	SET_HCIDEV_DEV(hdev, hu->tty->dev);

 	if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);

 	if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG);

 	if (!test_bit(HCI_UART_RESET_ON_INIT, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE);

 	/* Only call open() for the protocol after hdev is fully initialized as
 	 * open() (or a timer/workqueue it starts) may attempt to reference it.

@@ -649,11 +649,11 @@ static int ll_setup(struct hci_uart *hu)
 		/* This means that there was an error getting the BD address
 		 * during probe, so mark the device as having a bad address.
 		 */
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks);
+		hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR);
 	} else if (bacmp(&lldev->bdaddr, BDADDR_ANY)) {
 		err = ll_set_bdaddr(hu->hdev, &lldev->bdaddr);
 		if (err)
-			set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks);
+			hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR);
 	}

 	/* Operational speed if any */

@@ -439,7 +439,7 @@ static int nokia_setup(struct hci_uart *hu)

 	if (btdev->man_id == NOKIA_ID_BCM2048) {
 		hu->hdev->set_bdaddr = btbcm_set_bdaddr;
-		set_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks);
+		hci_set_quirk(hu->hdev, HCI_QUIRK_INVALID_BDADDR);
 		dev_dbg(dev, "bcm2048 has invalid bluetooth address!");
 	}

@@ -1892,7 +1892,7 @@ static int qca_setup(struct hci_uart *hu)
 	/* Enable controller to do both LE scan and BR/EDR inquiry
 	 * simultaneously.
 	 */
-	set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);

 	switch (soc_type) {
 	case QCA_QCA2066:

@@ -1944,7 +1944,7 @@ retry:
 	case QCA_WCN7850:
 		qcadev = serdev_device_get_drvdata(hu->serdev);
 		if (qcadev->bdaddr_property_broken)
-			set_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_BDADDR_PROPERTY_BROKEN);

 		hci_set_aosp_capable(hdev);

@@ -2487,7 +2487,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
 	hdev = qcadev->serdev_hu.hdev;

 	if (power_ctrl_enabled) {
-		set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
 		hdev->shutdown = qca_power_off;
 	}

@@ -2496,11 +2496,11 @@ static int qca_serdev_probe(struct serdev_device *serdev)
 		 * be queried via hci. Same with the valid le states quirk.
 		 */
 		if (data->capabilities & QCA_CAP_WIDEBAND_SPEECH)
-			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
-				&hdev->quirks);
+			hci_set_quirk(hdev,
+				      HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);

 		if (!(data->capabilities & QCA_CAP_VALID_LE_STATES))
-			set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
+			hci_set_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES);
 	}

 	return 0;

@@ -2550,7 +2550,7 @@ static void qca_serdev_shutdown(struct device *dev)
 		 * invoked and the SOC is already in the initial state, so
 		 * don't also need to send the VSC.
 		 */
-		if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks) ||
+		if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP) ||
 		    hci_dev_test_flag(hdev, HCI_SETUP))
 			return;

@@ -152,7 +152,7 @@ static int hci_uart_close(struct hci_dev *hdev)
 	 * BT SOC is completely powered OFF during BT OFF, holding port
 	 * open may drain the battery.
 	 */
-	if (test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks)) {
+	if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP)) {
 		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
 		serdev_device_close(hu->serdev);
 	}

@@ -358,13 +358,13 @@ int hci_uart_register_device_priv(struct hci_uart *hu,
 	SET_HCIDEV_DEV(hdev, &hu->serdev->dev);

 	if (test_bit(HCI_UART_NO_SUSPEND_NOTIFIER, &hu->flags))
-		set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER);

 	if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);

 	if (test_bit(HCI_UART_EXT_CONFIG, &hu->hdev_flags))
-		set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG);

 	if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
 		return 0;

@@ -415,16 +415,16 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
 	hdev->get_codec_config_data = vhci_get_codec_config_data;
 	hdev->wakeup = vhci_wakeup;
 	hdev->setup = vhci_setup;
-	set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
-	set_bit(HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED, &hdev->quirks);
+	hci_set_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP);
+	hci_set_quirk(hdev, HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED);

 	/* bit 6 is for external configuration */
 	if (opcode & 0x40)
-		set_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG);

 	/* bit 7 is for raw device */
 	if (opcode & 0x80)
-		set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_RAW_DEVICE);

 	if (hci_register_dev(hdev) < 0) {
 		BT_ERR("Can't register HCI device");

@@ -327,17 +327,17 @@ static int virtbt_probe(struct virtio_device *vdev)
 		hdev->setup = virtbt_setup_intel;
 		hdev->shutdown = virtbt_shutdown_generic;
 		hdev->set_bdaddr = virtbt_set_bdaddr_intel;
-		set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 		break;

 	case VIRTIO_BT_CONFIG_VENDOR_REALTEK:
 		hdev->manufacturer = 93;
 		hdev->setup = virtbt_setup_realtek;
 		hdev->shutdown = virtbt_shutdown_generic;
-		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
-		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+		hci_set_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY);
+		hci_set_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED);
 		break;
 	}
 }
@@ -343,21 +343,19 @@ static void tcan4x5x_get_dt_data(struct m_can_classdev *cdev)
 		of_property_read_bool(cdev->dev->of_node, "ti,nwkrq-voltage-vio");
 }

-static int tcan4x5x_get_gpios(struct m_can_classdev *cdev,
-			      const struct tcan4x5x_version_info *version_info)
+static int tcan4x5x_get_gpios(struct m_can_classdev *cdev)
 {
 	struct tcan4x5x_priv *tcan4x5x = cdev_to_priv(cdev);
 	int ret;

-	if (version_info->has_wake_pin) {
-		tcan4x5x->device_wake_gpio = devm_gpiod_get(cdev->dev, "device-wake",
-							    GPIOD_OUT_HIGH);
-		if (IS_ERR(tcan4x5x->device_wake_gpio)) {
-			if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER)
-				return -EPROBE_DEFER;
-
-			tcan4x5x_disable_wake(cdev);
-		}
+	tcan4x5x->device_wake_gpio = devm_gpiod_get_optional(cdev->dev,
+							     "device-wake",
+							     GPIOD_OUT_HIGH);
+	if (IS_ERR(tcan4x5x->device_wake_gpio)) {
+		if (PTR_ERR(tcan4x5x->device_wake_gpio) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+
+		tcan4x5x->device_wake_gpio = NULL;
 	}

 	tcan4x5x->reset_gpio = devm_gpiod_get_optional(cdev->dev, "reset",
@@ -369,14 +367,31 @@ static int tcan4x5x_get_gpios(struct m_can_classdev *cdev)
 	if (ret)
 		return ret;

-	if (version_info->has_state_pin) {
-		tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev,
-								      "device-state",
-								      GPIOD_IN);
-		if (IS_ERR(tcan4x5x->device_state_gpio)) {
-			tcan4x5x->device_state_gpio = NULL;
-			tcan4x5x_disable_state(cdev);
-		}
-	}
+	tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev,
+							      "device-state",
+							      GPIOD_IN);
+	if (IS_ERR(tcan4x5x->device_state_gpio))
+		tcan4x5x->device_state_gpio = NULL;

 	return 0;
 }

+static int tcan4x5x_check_gpios(struct m_can_classdev *cdev,
+				const struct tcan4x5x_version_info *version_info)
+{
+	struct tcan4x5x_priv *tcan4x5x = cdev_to_priv(cdev);
+	int ret;
+
+	if (version_info->has_wake_pin && !tcan4x5x->device_wake_gpio) {
+		ret = tcan4x5x_disable_wake(cdev);
+		if (ret)
+			return ret;
+	}
+
+	if (version_info->has_state_pin && !tcan4x5x->device_state_gpio) {
+		ret = tcan4x5x_disable_state(cdev);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}

@@ -468,15 +483,21 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
 		goto out_m_can_class_free_dev;
 	}

+	ret = tcan4x5x_get_gpios(mcan_class);
+	if (ret) {
+		dev_err(&spi->dev, "Getting gpios failed %pe\n", ERR_PTR(ret));
+		goto out_power;
+	}
+
 	version_info = tcan4x5x_find_version(priv);
 	if (IS_ERR(version_info)) {
 		ret = PTR_ERR(version_info);
 		goto out_power;
 	}

-	ret = tcan4x5x_get_gpios(mcan_class, version_info);
+	ret = tcan4x5x_check_gpios(mcan_class, version_info);
 	if (ret) {
-		dev_err(&spi->dev, "Getting gpios failed %pe\n", ERR_PTR(ret));
+		dev_err(&spi->dev, "Checking gpios failed %pe\n", ERR_PTR(ret));
 		goto out_power;
 	}

@@ -401,12 +401,13 @@ struct airoha_npu *airoha_npu_get(struct device *dev, dma_addr_t *stats_addr)
 		return ERR_PTR(-ENODEV);

 	pdev = of_find_device_by_node(np);
-	of_node_put(np);

 	if (!pdev) {
 		dev_err(dev, "cannot find device node %s\n", np->name);
+		of_node_put(np);
 		return ERR_PTR(-ENODEV);
 	}
+	of_node_put(np);

 	if (!try_module_get(THIS_MODULE)) {
 		dev_err(dev, "failed to get the device driver module\n");
@@ -189,13 +189,14 @@ struct fm10k_q_vector {
 	struct fm10k_ring_container rx, tx;

 	struct napi_struct napi;
+	struct rcu_head rcu;	/* to avoid race with update stats on free */

 	cpumask_t affinity_mask;
 	char name[IFNAMSIZ + 9];

 #ifdef CONFIG_DEBUG_FS
 	struct dentry *dbg_q_vector;
 #endif /* CONFIG_DEBUG_FS */
-	struct rcu_head rcu;	/* to avoid race with update stats on free */

 	/* for dynamic allocation of rings associated with this q_vector */
 	struct fm10k_ring ring[] ____cacheline_internodealigned_in_smp;

@@ -945,6 +945,7 @@ struct i40e_q_vector {
 	u16 reg_idx;		/* register index of the interrupt */

 	struct napi_struct napi;
+	struct rcu_head rcu;	/* to avoid race with update stats on free */

 	struct i40e_ring_container rx;
 	struct i40e_ring_container tx;

@@ -955,7 +956,6 @@ struct i40e_q_vector {
 	cpumask_t affinity_mask;
 	struct irq_affinity_notify affinity_notify;

-	struct rcu_head rcu;	/* to avoid race with update stats on free */
 	char name[I40E_INT_NAME_STR_LEN];
 	bool arm_wb_state;
 	bool in_busy_poll;

@@ -606,7 +606,7 @@ void ice_debugfs_fwlog_init(struct ice_pf *pf)

 	pf->ice_debugfs_pf_fwlog = debugfs_create_dir("fwlog",
 						      pf->ice_debugfs_pf);
-	if (IS_ERR(pf->ice_debugfs_pf))
+	if (IS_ERR(pf->ice_debugfs_pf_fwlog))
 		goto err_create_module_files;

 	fw_modules_dir = debugfs_create_dir("modules",

@@ -2226,7 +2226,8 @@ bool ice_lag_is_switchdev_running(struct ice_pf *pf)
 	struct ice_lag *lag = pf->lag;
 	struct net_device *tmp_nd;

-	if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) || !lag)
+	if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) ||
+	    !lag || !lag->upper_netdev)
 		return false;

 	rcu_read_lock();

@@ -507,9 +507,10 @@ struct ixgbe_q_vector {
 	struct ixgbe_ring_container rx, tx;

 	struct napi_struct napi;
+	struct rcu_head rcu;	/* to avoid race with update stats on free */

 	cpumask_t affinity_mask;
 	int numa_node;
-	struct rcu_head rcu;	/* to avoid race with update stats on free */
 	char name[IFNAMSIZ + 9];

 	/* for dynamic allocation of rings associated with this q_vector */
@@ -1154,8 +1154,9 @@ static void mlx5e_lro_update_tcp_hdr(struct mlx5_cqe64 *cqe, struct tcphdr *tcp)
 	}
 }

-static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
-				 u32 cqe_bcnt)
+static unsigned int mlx5e_lro_update_hdr(struct sk_buff *skb,
+					 struct mlx5_cqe64 *cqe,
+					 u32 cqe_bcnt)
 {
 	struct ethhdr *eth = (struct ethhdr *)(skb->data);
 	struct tcphdr *tcp;

@@ -1205,6 +1206,8 @@ static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
 		tcp->check = tcp_v6_check(payload_len, &ipv6->saddr,
 					  &ipv6->daddr, check);
 	}
+
+	return (unsigned int)((unsigned char *)tcp + tcp->doff * 4 - skb->data);
 }

 static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index)
@@ -1561,8 +1564,9 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
 	mlx5e_macsec_offload_handle_rx_skb(netdev, skb, cqe);

 	if (lro_num_seg > 1) {
-		mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
-		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt, lro_num_seg);
+		unsigned int hdrlen = mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
+
+		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt - hdrlen, lro_num_seg);
 		/* Subtract one since we already counted this as one
 		 * "regular" packet in mlx5e_complete_rx_cqe()
 		 */
@@ -2257,6 +2257,7 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
 	{ PCI_VDEVICE(MELLANOX, 0x1021) }, /* ConnectX-7 */
 	{ PCI_VDEVICE(MELLANOX, 0x1023) }, /* ConnectX-8 */
 	{ PCI_VDEVICE(MELLANOX, 0x1025) }, /* ConnectX-9 */
+	{ PCI_VDEVICE(MELLANOX, 0x1027) }, /* ConnectX-10 */
 	{ PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */
 	{ PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */
 	{ PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
@@ -433,6 +433,12 @@ static int intel_crosststamp(ktime_t *device,
 		return -ETIMEDOUT;
 	}

+	*system = (struct system_counterval_t) {
+		.cycles = 0,
+		.cs_id = CSID_X86_ART,
+		.use_nsecs = false,
+	};
+
 	num_snapshot = (readl(ioaddr + GMAC_TIMESTAMP_STATUS) &
 			GMAC_TIMESTAMP_ATSNS_MASK) >>
 			GMAC_TIMESTAMP_ATSNS_SHIFT;
@@ -448,7 +454,7 @@ static int intel_crosststamp(ktime_t *device,
 	}

 	system->cycles *= intel_priv->crossts_adj;
-	system->cs_id = CSID_X86_ART;

 	priv->plat->flags &= ~STMMAC_FLAG_INT_SNAPSHOT_EN;

 	return 0;
@@ -1912,7 +1912,6 @@ static void wx_configure_rx_ring(struct wx *wx,
 				 struct wx_ring *ring)
 {
 	u16 reg_idx = ring->reg_idx;
-	union wx_rx_desc *rx_desc;
 	u64 rdba = ring->dma;
 	u32 rxdctl;

@@ -1942,9 +1941,9 @@ static void wx_configure_rx_ring(struct wx *wx,
 	memset(ring->rx_buffer_info, 0,
 	       sizeof(struct wx_rx_buffer) * ring->count);

-	/* initialize Rx descriptor 0 */
-	rx_desc = WX_RX_DESC(ring, 0);
-	rx_desc->wb.upper.length = 0;
+	/* reset ntu and ntc to place SW in sync with hardware */
+	ring->next_to_clean = 0;
+	ring->next_to_use = 0;

 	/* enable receive descriptor ring */
 	wr32m(wx, WX_PX_RR_CFG(reg_idx),

@@ -2778,6 +2777,8 @@ void wx_update_stats(struct wx *wx)
 		hwstats->fdirmiss += rd32(wx, WX_RDB_FDIR_MISS);
 	}

+	/* qmprc is not cleared on read, manual reset it */
+	hwstats->qmprc = 0;
 	for (i = wx->num_vfs * wx->num_rx_queues_per_pool;
 	     i < wx->mac.max_rx_queues; i++)
 		hwstats->qmprc += rd32(wx, WX_PX_MPRC(i));
@@ -174,10 +174,6 @@ static void wx_dma_sync_frag(struct wx_ring *rx_ring,
 				      skb_frag_off(frag),
 				      skb_frag_size(frag),
 				      DMA_FROM_DEVICE);
-
-	/* If the page was released, just unmap it. */
-	if (unlikely(WX_CB(skb)->page_released))
-		page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
 }

 static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring,

@@ -227,10 +223,6 @@ static void wx_put_rx_buffer(struct wx_ring *rx_ring,
 			     struct sk_buff *skb,
 			     int rx_buffer_pgcnt)
 {
-	if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma)
-		/* the page has been released from the ring */
-		WX_CB(skb)->page_released = true;
-
 	/* clear contents of rx_buffer */
 	rx_buffer->page = NULL;
 	rx_buffer->skb = NULL;

@@ -315,7 +307,7 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_ring,
 		return false;
 	dma = page_pool_get_dma_addr(page);

-	bi->page_dma = dma;
+	bi->dma = dma;
 	bi->page = page;
 	bi->page_offset = 0;

@@ -352,7 +344,7 @@ void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count)
 					 DMA_FROM_DEVICE);

 		rx_desc->read.pkt_addr =
-			cpu_to_le64(bi->page_dma + bi->page_offset);
+			cpu_to_le64(bi->dma + bi->page_offset);

 		rx_desc++;
 		bi++;
@@ -365,6 +357,8 @@ void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count)

 		/* clear the status bits for the next_to_use descriptor */
 		rx_desc->wb.upper.status_error = 0;
+		/* clear the length for the next_to_use descriptor */
+		rx_desc->wb.upper.length = 0;

 		cleaned_count--;
 	} while (cleaned_count);
@@ -2423,9 +2417,6 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring)
 		if (rx_buffer->skb) {
 			struct sk_buff *skb = rx_buffer->skb;

-			if (WX_CB(skb)->page_released)
-				page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
-
 			dev_kfree_skb(skb);
 		}

@@ -2449,6 +2440,9 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring)
 		}
 	}

+	/* Zero out the descriptor ring */
+	memset(rx_ring->desc, 0, rx_ring->size);
+
 	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;

@@ -909,7 +909,6 @@ enum wx_reset_type {
 struct wx_cb {
 	dma_addr_t dma;
 	u16 append_cnt;		/* number of skb's appended */
-	bool page_released;
 	bool dma_released;
 };

@@ -998,7 +997,6 @@ struct wx_tx_buffer {
 struct wx_rx_buffer {
 	struct sk_buff *skb;
 	dma_addr_t dma;
-	dma_addr_t page_dma;
 	struct page *page;
 	unsigned int page_offset;
 };
@@ -286,7 +286,7 @@ static void xemaclite_aligned_read(u32 *src_ptr, u8 *dest_ptr,

 		/* Read the remaining data */
 		for (; length > 0; length--)
-			*to_u8_ptr = *from_u8_ptr;
+			*to_u8_ptr++ = *from_u8_ptr++;
 	}
 }

@@ -2317,8 +2317,11 @@ static int netvsc_prepare_bonding(struct net_device *vf_netdev)
 	if (!ndev)
 		return NOTIFY_DONE;

-	/* set slave flag before open to prevent IPv6 addrconf */
+	/* Set slave flag and no addrconf flag before open
+	 * to prevent IPv6 addrconf.
+	 */
 	vf_netdev->flags |= IFF_SLAVE;
+	vf_netdev->priv_flags |= IFF_NO_ADDRCONF;
 	return NOTIFY_DONE;
 }

@@ -62,6 +62,13 @@ static void ovpn_netdev_write(struct ovpn_peer *peer, struct sk_buff *skb)
 	unsigned int pkt_len;
 	int ret;

+	/*
+	 * GSO state from the transport layer is not valid for the tunnel/data
+	 * path. Reset all GSO fields to prevent any further GSO processing
+	 * from entering an inconsistent state.
+	 */
+	skb_gso_reset(skb);
+
 	/* we can't guarantee the packet wasn't corrupted before entering the
 	 * VPN, therefore we give other layers a chance to check that
 	 */
@@ -29,6 +29,22 @@ const struct nla_policy ovpn_keyconf_nl_policy[OVPN_A_KEYCONF_DECRYPT_DIR + 1] = {
 	[OVPN_A_KEYCONF_DECRYPT_DIR] = NLA_POLICY_NESTED(ovpn_keydir_nl_policy),
 };

+const struct nla_policy ovpn_keyconf_del_input_nl_policy[OVPN_A_KEYCONF_SLOT + 1] = {
+	[OVPN_A_KEYCONF_PEER_ID] = NLA_POLICY_FULL_RANGE(NLA_U32, &ovpn_a_keyconf_peer_id_range),
+	[OVPN_A_KEYCONF_SLOT] = NLA_POLICY_MAX(NLA_U32, 1),
+};
+
+const struct nla_policy ovpn_keyconf_get_nl_policy[OVPN_A_KEYCONF_CIPHER_ALG + 1] = {
+	[OVPN_A_KEYCONF_PEER_ID] = NLA_POLICY_FULL_RANGE(NLA_U32, &ovpn_a_keyconf_peer_id_range),
+	[OVPN_A_KEYCONF_SLOT] = NLA_POLICY_MAX(NLA_U32, 1),
+	[OVPN_A_KEYCONF_KEY_ID] = NLA_POLICY_MAX(NLA_U32, 7),
+	[OVPN_A_KEYCONF_CIPHER_ALG] = NLA_POLICY_MAX(NLA_U32, 2),
+};
+
+const struct nla_policy ovpn_keyconf_swap_input_nl_policy[OVPN_A_KEYCONF_PEER_ID + 1] = {
+	[OVPN_A_KEYCONF_PEER_ID] = NLA_POLICY_FULL_RANGE(NLA_U32, &ovpn_a_keyconf_peer_id_range),
+};
+
 const struct nla_policy ovpn_keydir_nl_policy[OVPN_A_KEYDIR_NONCE_TAIL + 1] = {
 	[OVPN_A_KEYDIR_CIPHER_KEY] = NLA_POLICY_MAX_LEN(256),
 	[OVPN_A_KEYDIR_NONCE_TAIL] = NLA_POLICY_EXACT_LEN(OVPN_NONCE_TAIL_SIZE),

@@ -60,16 +76,49 @@ const struct nla_policy ovpn_peer_nl_policy[OVPN_A_PEER_LINK_TX_PACKETS + 1] = {
 	[OVPN_A_PEER_LINK_TX_PACKETS] = { .type = NLA_UINT, },
 };

+const struct nla_policy ovpn_peer_del_input_nl_policy[OVPN_A_PEER_ID + 1] = {
+	[OVPN_A_PEER_ID] = NLA_POLICY_FULL_RANGE(NLA_U32, &ovpn_a_peer_id_range),
+};
+
+const struct nla_policy ovpn_peer_new_input_nl_policy[OVPN_A_PEER_KEEPALIVE_TIMEOUT + 1] = {
+	[OVPN_A_PEER_ID] = NLA_POLICY_FULL_RANGE(NLA_U32, &ovpn_a_peer_id_range),
+	[OVPN_A_PEER_REMOTE_IPV4] = { .type = NLA_BE32, },
+	[OVPN_A_PEER_REMOTE_IPV6] = NLA_POLICY_EXACT_LEN(16),
+	[OVPN_A_PEER_REMOTE_IPV6_SCOPE_ID] = { .type = NLA_U32, },
+	[OVPN_A_PEER_REMOTE_PORT] = NLA_POLICY_MIN(NLA_BE16, 1),
+	[OVPN_A_PEER_SOCKET] = { .type = NLA_U32, },
+	[OVPN_A_PEER_VPN_IPV4] = { .type = NLA_BE32, },
+	[OVPN_A_PEER_VPN_IPV6] = NLA_POLICY_EXACT_LEN(16),
+	[OVPN_A_PEER_LOCAL_IPV4] = { .type = NLA_BE32, },
+	[OVPN_A_PEER_LOCAL_IPV6] = NLA_POLICY_EXACT_LEN(16),
+	[OVPN_A_PEER_KEEPALIVE_INTERVAL] = { .type = NLA_U32, },
+	[OVPN_A_PEER_KEEPALIVE_TIMEOUT] = { .type = NLA_U32, },
+};
+
+const struct nla_policy ovpn_peer_set_input_nl_policy[OVPN_A_PEER_KEEPALIVE_TIMEOUT + 1] = {
+	[OVPN_A_PEER_ID] = NLA_POLICY_FULL_RANGE(NLA_U32, &ovpn_a_peer_id_range),
+	[OVPN_A_PEER_REMOTE_IPV4] = { .type = NLA_BE32, },
+	[OVPN_A_PEER_REMOTE_IPV6] = NLA_POLICY_EXACT_LEN(16),
+	[OVPN_A_PEER_REMOTE_IPV6_SCOPE_ID] = { .type = NLA_U32, },
+	[OVPN_A_PEER_REMOTE_PORT] = NLA_POLICY_MIN(NLA_BE16, 1),
+	[OVPN_A_PEER_VPN_IPV4] = { .type = NLA_BE32, },
+	[OVPN_A_PEER_VPN_IPV6] = NLA_POLICY_EXACT_LEN(16),
+	[OVPN_A_PEER_LOCAL_IPV4] = { .type = NLA_BE32, },
+	[OVPN_A_PEER_LOCAL_IPV6] = NLA_POLICY_EXACT_LEN(16),
+	[OVPN_A_PEER_KEEPALIVE_INTERVAL] = { .type = NLA_U32, },
+	[OVPN_A_PEER_KEEPALIVE_TIMEOUT] = { .type = NLA_U32, },
+};
+
 /* OVPN_CMD_PEER_NEW - do */
 static const struct nla_policy ovpn_peer_new_nl_policy[OVPN_A_PEER + 1] = {
 	[OVPN_A_IFINDEX] = { .type = NLA_U32, },
-	[OVPN_A_PEER] = NLA_POLICY_NESTED(ovpn_peer_nl_policy),
+	[OVPN_A_PEER] = NLA_POLICY_NESTED(ovpn_peer_new_input_nl_policy),
 };

 /* OVPN_CMD_PEER_SET - do */
 static const struct nla_policy ovpn_peer_set_nl_policy[OVPN_A_PEER + 1] = {
 	[OVPN_A_IFINDEX] = { .type = NLA_U32, },
-	[OVPN_A_PEER] = NLA_POLICY_NESTED(ovpn_peer_nl_policy),
+	[OVPN_A_PEER] = NLA_POLICY_NESTED(ovpn_peer_set_input_nl_policy),
 };

 /* OVPN_CMD_PEER_GET - do */

@@ -86,7 +135,7 @@ static const struct nla_policy ovpn_peer_get_dump_nl_policy[OVPN_A_IFINDEX + 1]
 /* OVPN_CMD_PEER_DEL - do */
 static const struct nla_policy ovpn_peer_del_nl_policy[OVPN_A_PEER + 1] = {
 	[OVPN_A_IFINDEX] = { .type = NLA_U32, },
-	[OVPN_A_PEER] = NLA_POLICY_NESTED(ovpn_peer_nl_policy),
+	[OVPN_A_PEER] = NLA_POLICY_NESTED(ovpn_peer_del_input_nl_policy),
 };

 /* OVPN_CMD_KEY_NEW - do */

@@ -98,19 +147,19 @@ static const struct nla_policy ovpn_key_new_nl_policy[OVPN_A_KEYCONF + 1] = {
 /* OVPN_CMD_KEY_GET - do */
 static const struct nla_policy ovpn_key_get_nl_policy[OVPN_A_KEYCONF + 1] = {
 	[OVPN_A_IFINDEX] = { .type = NLA_U32, },
-	[OVPN_A_KEYCONF] = NLA_POLICY_NESTED(ovpn_keyconf_nl_policy),
+	[OVPN_A_KEYCONF] = NLA_POLICY_NESTED(ovpn_keyconf_get_nl_policy),
 };

 /* OVPN_CMD_KEY_SWAP - do */
 static const struct nla_policy ovpn_key_swap_nl_policy[OVPN_A_KEYCONF + 1] = {
 	[OVPN_A_IFINDEX] = { .type = NLA_U32, },
-	[OVPN_A_KEYCONF] = NLA_POLICY_NESTED(ovpn_keyconf_nl_policy),
+	[OVPN_A_KEYCONF] = NLA_POLICY_NESTED(ovpn_keyconf_swap_input_nl_policy),
 };

 /* OVPN_CMD_KEY_DEL - do */
 static const struct nla_policy ovpn_key_del_nl_policy[OVPN_A_KEYCONF + 1] = {
 	[OVPN_A_IFINDEX] = { .type = NLA_U32, },
-	[OVPN_A_KEYCONF] = NLA_POLICY_NESTED(ovpn_keyconf_nl_policy),
+	[OVPN_A_KEYCONF] = NLA_POLICY_NESTED(ovpn_keyconf_del_input_nl_policy),
 };

 /* Ops table for ovpn */
@@ -13,8 +13,14 @@

 /* Common nested types */
 extern const struct nla_policy ovpn_keyconf_nl_policy[OVPN_A_KEYCONF_DECRYPT_DIR + 1];
+extern const struct nla_policy ovpn_keyconf_del_input_nl_policy[OVPN_A_KEYCONF_SLOT + 1];
+extern const struct nla_policy ovpn_keyconf_get_nl_policy[OVPN_A_KEYCONF_CIPHER_ALG + 1];
+extern const struct nla_policy ovpn_keyconf_swap_input_nl_policy[OVPN_A_KEYCONF_PEER_ID + 1];
 extern const struct nla_policy ovpn_keydir_nl_policy[OVPN_A_KEYDIR_NONCE_TAIL + 1];
 extern const struct nla_policy ovpn_peer_nl_policy[OVPN_A_PEER_LINK_TX_PACKETS + 1];
+extern const struct nla_policy ovpn_peer_del_input_nl_policy[OVPN_A_PEER_ID + 1];
+extern const struct nla_policy ovpn_peer_new_input_nl_policy[OVPN_A_PEER_KEEPALIVE_TIMEOUT + 1];
+extern const struct nla_policy ovpn_peer_set_input_nl_policy[OVPN_A_PEER_KEEPALIVE_TIMEOUT + 1];

 int ovpn_nl_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
		     struct genl_info *info);
@@ -352,7 +352,7 @@ int ovpn_nl_peer_new_doit(struct sk_buff *skb, struct genl_info *info)
 		return -EINVAL;

 	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
-			       ovpn_peer_nl_policy, info->extack);
+			       ovpn_peer_new_input_nl_policy, info->extack);
 	if (ret)
 		return ret;

@@ -476,7 +476,7 @@ int ovpn_nl_peer_set_doit(struct sk_buff *skb, struct genl_info *info)
 		return -EINVAL;

 	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
-			       ovpn_peer_nl_policy, info->extack);
+			       ovpn_peer_set_input_nl_policy, info->extack);
 	if (ret)
 		return ret;

@@ -654,7 +654,7 @@ int ovpn_nl_peer_get_doit(struct sk_buff *skb, struct genl_info *info)
 	struct ovpn_peer *peer;
 	struct sk_buff *msg;
 	u32 peer_id;
-	int ret;
+	int ret, i;

 	if (GENL_REQ_ATTR_CHECK(info, OVPN_A_PEER))
 		return -EINVAL;

@@ -668,6 +668,23 @@ int ovpn_nl_peer_get_doit(struct sk_buff *skb, struct genl_info *info)
 			     OVPN_A_PEER_ID))
 		return -EINVAL;

+	/* OVPN_CMD_PEER_GET expects only the PEER_ID, therefore
+	 * ensure that the user hasn't specified any other attribute.
+	 *
+	 * Unfortunately this check cannot be performed via netlink
+	 * spec/policy and must be open-coded.
+	 */
+	for (i = 0; i < OVPN_A_PEER_MAX + 1; i++) {
+		if (i == OVPN_A_PEER_ID)
+			continue;
+
+		if (attrs[i]) {
+			NL_SET_ERR_MSG_FMT_MOD(info->extack,
+					       "unexpected attribute %u", i);
+			return -EINVAL;
+		}
+	}
+
 	peer_id = nla_get_u32(attrs[OVPN_A_PEER_ID]);
 	peer = ovpn_peer_get_by_id(ovpn, peer_id);
 	if (!peer) {

@@ -768,7 +785,7 @@ int ovpn_nl_peer_del_doit(struct sk_buff *skb, struct genl_info *info)
 		return -EINVAL;

 	ret = nla_parse_nested(attrs, OVPN_A_PEER_MAX, info->attrs[OVPN_A_PEER],
-			       ovpn_peer_nl_policy, info->extack);
+			       ovpn_peer_del_input_nl_policy, info->extack);
 	if (ret)
 		return ret;

@@ -969,14 +986,14 @@ int ovpn_nl_key_get_doit(struct sk_buff *skb, struct genl_info *info)
 	struct ovpn_peer *peer;
 	struct sk_buff *msg;
 	u32 peer_id;
-	int ret;
+	int ret, i;

 	if (GENL_REQ_ATTR_CHECK(info, OVPN_A_KEYCONF))
 		return -EINVAL;

 	ret = nla_parse_nested(attrs, OVPN_A_KEYCONF_MAX,
 			       info->attrs[OVPN_A_KEYCONF],
-			       ovpn_keyconf_nl_policy, info->extack);
+			       ovpn_keyconf_get_nl_policy, info->extack);
 	if (ret)
 		return ret;

@@ -988,6 +1005,24 @@ int ovpn_nl_key_get_doit(struct sk_buff *skb, struct genl_info *info)
 			     OVPN_A_KEYCONF_SLOT))
 		return -EINVAL;

+	/* OVPN_CMD_KEY_GET expects only the PEER_ID and the SLOT, therefore
+	 * ensure that the user hasn't specified any other attribute.
+	 *
+	 * Unfortunately this check cannot be performed via netlink
+	 * spec/policy and must be open-coded.
+	 */
+	for (i = 0; i < OVPN_A_KEYCONF_MAX + 1; i++) {
+		if (i == OVPN_A_KEYCONF_PEER_ID ||
+		    i == OVPN_A_KEYCONF_SLOT)
+			continue;
+
+		if (attrs[i]) {
+			NL_SET_ERR_MSG_FMT_MOD(info->extack,
+					       "unexpected attribute %u", i);
+			return -EINVAL;
+		}
+	}
+
 	peer_id = nla_get_u32(attrs[OVPN_A_KEYCONF_PEER_ID]);
 	peer = ovpn_peer_get_by_id(ovpn, peer_id);
 	if (!peer) {

@@ -1037,7 +1072,7 @@ int ovpn_nl_key_swap_doit(struct sk_buff *skb, struct genl_info *info)

 	ret = nla_parse_nested(attrs, OVPN_A_KEYCONF_MAX,
 			       info->attrs[OVPN_A_KEYCONF],
-			       ovpn_keyconf_nl_policy, info->extack);
+			       ovpn_keyconf_swap_input_nl_policy, info->extack);
 	if (ret)
 		return ret;

@@ -1074,7 +1109,7 @@ int ovpn_nl_key_del_doit(struct sk_buff *skb, struct genl_info *info)

 	ret = nla_parse_nested(attrs, OVPN_A_KEYCONF_MAX,
 			       info->attrs[OVPN_A_KEYCONF],
-			       ovpn_keyconf_nl_policy, info->extack);
+			       ovpn_keyconf_del_input_nl_policy, info->extack);
 	if (ret)
 		return ret;
@@ -344,6 +344,7 @@ void ovpn_udp_send_skb(struct ovpn_peer *peer, struct sock *sk,
 	int ret;

 	skb->dev = peer->ovpn->dev;
+	skb->mark = READ_ONCE(sk->sk_mark);
 	/* no checksum performed at this layer */
 	skb->ip_summed = CHECKSUM_NONE;
@@ -3416,7 +3416,8 @@ static int phy_probe(struct device *dev)
 	/* Get the LEDs from the device tree, and instantiate standard
 	 * LEDs for them.
 	 */
-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS))
+	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) &&
+	    !phy_driver_is_genphy_10g(phydev))
 		err = of_phy_leds(phydev);

 out:

@@ -3433,7 +3434,8 @@ static int phy_remove(struct device *dev)

 	cancel_delayed_work_sync(&phydev->state_queue);

-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS))
+	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) &&
+	    !phy_driver_is_genphy_10g(phydev))
 		phy_leds_unregister(phydev);

 	phydev->state = PHY_DOWN;
@@ -689,6 +689,10 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf)
 			status);
 		return -ENODEV;
 	}
+	if (!dev->status) {
+		dev_err(&dev->udev->dev, "No status endpoint found");
+		return -ENODEV;
+	}
 	/* Initialize sierra private data */
 	priv = kzalloc(sizeof *priv, GFP_KERNEL);
 	if (!priv)
@@ -7059,7 +7059,7 @@ static int virtnet_probe(struct virtio_device *vdev)
 	   otherwise get link status from config. */
 	netif_carrier_off(dev);
 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
-		virtnet_config_changed_work(&vi->config_work);
+		virtio_config_changed(vi->vdev);
 	} else {
 		vi->status = VIRTIO_NET_S_LINK_UP;
 		virtnet_update_settings(vi);
@@ -1060,7 +1060,6 @@ int ath12k_dp_rx_peer_tid_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_
 	}

 	rx_tid = &peer->rx_tid[tid];
-	paddr_aligned = rx_tid->qbuf.paddr_aligned;
 	/* Update the tid queue if it is already setup */
 	if (rx_tid->active) {
 		ret = ath12k_peer_rx_tid_reo_update(ar, peer, rx_tid,

@@ -1072,6 +1071,7 @@ int ath12k_dp_rx_peer_tid_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_
 	}

 	if (!ab->hw_params->reoq_lut_support) {
+		paddr_aligned = rx_tid->qbuf.paddr_aligned;
 		ret = ath12k_wmi_peer_rx_reorder_queue_setup(ar, vdev_id,
 							     peer_mac,
 							     paddr_aligned, tid,

@@ -1098,6 +1098,7 @@ int ath12k_dp_rx_peer_tid_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_
 		return ret;
 	}

+	paddr_aligned = rx_tid->qbuf.paddr_aligned;
 	if (ab->hw_params->reoq_lut_support) {
 		/* Update the REO queue LUT at the corresponding peer id
 		 * and tid with qaddr.
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2025 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
  */

@@ -754,7 +754,7 @@ struct iwl_lari_config_change_cmd_v10 {
  *	according to the BIOS definitions.
  *	For LARI cmd version 11 - bits 0:4 are supported.
  *	For LARI cmd version 12 - bits 0:6 are supported and bits 7:31 are
- *	reserved. No need to mask out the reserved bits.
+ *	reserved.
  * @force_disable_channels_bitmap: Bitmap of disabled bands/channels.
  *	Each bit represents a set of channels in a specific band that should be
  *	disabled

@@ -787,6 +787,7 @@ struct iwl_lari_config_change_cmd {
 /* Activate UNII-1 (5.2GHz) for World Wide */
 #define ACTIVATE_5G2_IN_WW_MASK			BIT(4)
 #define CHAN_STATE_ACTIVE_BITMAP_CMD_V11	0x1F
+#define CHAN_STATE_ACTIVE_BITMAP_CMD_V12	0x7F

 /**
  * struct iwl_pnvm_init_complete_ntfy - PNVM initialization complete
@@ -614,6 +614,7 @@ int iwl_fill_lari_config(struct iwl_fw_runtime *fwrt,

 	ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ACTIVATE_CHANNEL, &value);
 	if (!ret) {
+		value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V12;
 		if (cmd_ver < 8)
 			value &= ~ACTIVATE_5G2_IN_WW_MASK;
@@ -251,8 +251,10 @@ void iwl_mld_configure_lari(struct iwl_mld *mld)
 			cpu_to_le32(value &= DSM_UNII4_ALLOW_BITMAP);

 	ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ACTIVATE_CHANNEL, &value);
-	if (!ret)
+	if (!ret) {
+		value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V12;
 		cmd.chan_state_active_bitmap = cpu_to_le32(value);
+	}

 	ret = iwl_bios_get_dsm(fwrt, DSM_FUNC_ENABLE_6E, &value);
 	if (!ret)
@@ -546,8 +546,10 @@ again:
 	}

 	if (WARN_ON(trans->do_top_reset &&
-		    trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_SC))
-		return -EINVAL;
+		    trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_SC)) {
+		ret = -EINVAL;
+		goto out;
+	}

 	/* we need to wait later - set state */
 	if (trans->do_top_reset)
@@ -2101,10 +2101,10 @@ static void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans,

 	bc_ent = cpu_to_le16(len | (sta_id << 12));

-	scd_bc_tbl[txq_id * BC_TABLE_SIZE + write_ptr].tfd_offset = bc_ent;
+	scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + write_ptr].tfd_offset = bc_ent;

 	if (write_ptr < TFD_QUEUE_SIZE_BC_DUP)
-		scd_bc_tbl[txq_id * BC_TABLE_SIZE + TFD_QUEUE_SIZE_MAX + write_ptr].tfd_offset =
+		scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + TFD_QUEUE_SIZE_MAX + write_ptr].tfd_offset =
 			bc_ent;
 }

@@ -2328,10 +2328,10 @@ static void iwl_txq_gen1_inval_byte_cnt_tbl(struct iwl_trans *trans,

 	bc_ent = cpu_to_le16(1 | (sta_id << 12));

-	scd_bc_tbl[txq_id * BC_TABLE_SIZE + read_ptr].tfd_offset = bc_ent;
+	scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + read_ptr].tfd_offset = bc_ent;

 	if (read_ptr < TFD_QUEUE_SIZE_BC_DUP)
-		scd_bc_tbl[txq_id * BC_TABLE_SIZE + TFD_QUEUE_SIZE_MAX + read_ptr].tfd_offset =
+		scd_bc_tbl[txq_id * TFD_QUEUE_BC_SIZE + TFD_QUEUE_SIZE_MAX + read_ptr].tfd_offset =
 			bc_ent;
 }
@@ -377,6 +377,8 @@ enum {
 	 * This quirk must be set before hci_register_dev is called.
 	 */
 	HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE,
+
+	__HCI_NUM_QUIRKS,
 };

 /* HCI device flags */
@@ -464,7 +464,7 @@ struct hci_dev {

 	unsigned int	auto_accept_delay;

-	unsigned long	quirks;
+	DECLARE_BITMAP(quirk_flags, __HCI_NUM_QUIRKS);

 	atomic_t	cmd_cnt;
 	unsigned int	acl_cnt;

@@ -656,6 +656,10 @@ struct hci_dev {
 	u8 (*classify_pkt_type)(struct hci_dev *hdev, struct sk_buff *skb);
 };

+#define hci_set_quirk(hdev, nr) set_bit((nr), (hdev)->quirk_flags)
+#define hci_clear_quirk(hdev, nr) clear_bit((nr), (hdev)->quirk_flags)
+#define hci_test_quirk(hdev, nr) test_bit((nr), (hdev)->quirk_flags)
+
 #define HCI_PHY_HANDLE(handle)	(handle & 0xff)

 enum conn_reasons {

@@ -829,20 +833,20 @@ extern struct mutex hci_cb_list_lock;
 #define hci_dev_test_and_clear_flag(hdev, nr)  test_and_clear_bit((nr), (hdev)->dev_flags)
 #define hci_dev_test_and_change_flag(hdev, nr) test_and_change_bit((nr), (hdev)->dev_flags)

-#define hci_dev_clear_volatile_flags(hdev)			\
-	do {							\
-		hci_dev_clear_flag(hdev, HCI_LE_SCAN);		\
-		hci_dev_clear_flag(hdev, HCI_LE_ADV);		\
-		hci_dev_clear_flag(hdev, HCI_LL_RPA_RESOLUTION);\
-		hci_dev_clear_flag(hdev, HCI_PERIODIC_INQ);	\
-		hci_dev_clear_flag(hdev, HCI_QUALITY_REPORT);	\
+#define hci_dev_clear_volatile_flags(hdev)				\
+	do {								\
+		hci_dev_clear_flag((hdev), HCI_LE_SCAN);		\
+		hci_dev_clear_flag((hdev), HCI_LE_ADV);			\
+		hci_dev_clear_flag((hdev), HCI_LL_RPA_RESOLUTION);	\
+		hci_dev_clear_flag((hdev), HCI_PERIODIC_INQ);		\
+		hci_dev_clear_flag((hdev), HCI_QUALITY_REPORT);		\
 	} while (0)

 #define hci_dev_le_state_simultaneous(hdev) \
-	(!test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) && \
-	 (hdev->le_states[4] & 0x08) &&	/* Central */ \
-	 (hdev->le_states[4] & 0x40) &&	/* Peripheral */ \
-	 (hdev->le_states[3] & 0x10))	/* Simultaneous */
+	(!hci_test_quirk((hdev), HCI_QUIRK_BROKEN_LE_STATES) && \
+	 ((hdev)->le_states[4] & 0x08) &&	/* Central */ \
+	 ((hdev)->le_states[4] & 0x40) &&	/* Peripheral */ \
+	 ((hdev)->le_states[3] & 0x10))	/* Simultaneous */

 /* ----- HCI interface to upper protocols ----- */
 int l2cap_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr);

@@ -1931,8 +1935,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
 			     ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_2M))

 #define le_coded_capable(dev) (((dev)->le_features[1] & HCI_LE_PHY_CODED) && \
-			       !test_bit(HCI_QUIRK_BROKEN_LE_CODED, \
-					 &(dev)->quirks))
+			       !hci_test_quirk((dev), \
+					       HCI_QUIRK_BROKEN_LE_CODED))

 #define scan_coded(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_CODED) || \
			 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED))

@@ -1940,31 +1944,31 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
 #define ll_privacy_capable(dev) ((dev)->le_features[0] & HCI_LE_LL_PRIVACY)

 #define privacy_mode_capable(dev) (ll_privacy_capable(dev) && \
-				   (hdev->commands[39] & 0x04))
+				   ((dev)->commands[39] & 0x04))

 #define read_key_size_capable(dev) \
 	((dev)->commands[20] & 0x10 && \
-	 !test_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks))
+	 !hci_test_quirk((dev), HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE))

 #define read_voice_setting_capable(dev) \
 	((dev)->commands[9] & 0x04 && \
-	 !test_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &(dev)->quirks))
+	 !hci_test_quirk((dev), HCI_QUIRK_BROKEN_READ_VOICE_SETTING))

 /* Use enhanced synchronous connection if command is supported and its quirk
  * has not been set.
  */
 #define enhanced_sync_conn_capable(dev) \
 	(((dev)->commands[29] & 0x08) && \
-	 !test_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &(dev)->quirks))
+	 !hci_test_quirk((dev), HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN))

 /* Use ext scanning if set ext scan param and ext scan enable is supported */
 #define use_ext_scan(dev) (((dev)->commands[37] & 0x20) && \
			   ((dev)->commands[37] & 0x40) && \
-			   !test_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &(dev)->quirks))
+			   !hci_test_quirk((dev), HCI_QUIRK_BROKEN_EXT_SCAN))

 /* Use ext create connection if command is supported */
 #define use_ext_conn(dev) (((dev)->commands[37] & 0x80) && \
-			   !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &(dev)->quirks))
+			   !hci_test_quirk((dev), HCI_QUIRK_BROKEN_EXT_CREATE_CONN))
 /* Extended advertising support */
 #define ext_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_EXT_ADV))

@@ -1979,8 +1983,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
  */
 #define use_enhanced_conn_complete(dev) ((ll_privacy_capable(dev) || \
					  ext_adv_capable(dev)) && \
-					 !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, \
-						   &(dev)->quirks))
+					 !hci_test_quirk((dev), \
+							 HCI_QUIRK_BROKEN_EXT_CREATE_CONN))

 /* Periodic advertising support */
 #define per_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_PERIODIC_ADV))

@@ -1997,7 +2001,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
 #define sync_recv_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER)

 #define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \
-	(!test_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &(dev)->quirks)))
+	(!hci_test_quirk((dev), HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG)))

 /* ----- HCI protocols ----- */
 #define HCI_PROTO_DEFER             0x01
@@ -2690,7 +2690,7 @@ struct cfg80211_scan_request {
 	s8 tsf_report_link_id;

 	/* keep last */
-	struct ieee80211_channel *channels[] __counted_by(n_channels);
+	struct ieee80211_channel *channels[];
 };

 static inline void get_random_mask_addr(u8 *buf, const u8 *addr, const u8 *mask)
@@ -306,8 +306,19 @@ static inline bool nf_ct_is_expired(const struct nf_conn *ct)
 /* use after obtaining a reference count */
 static inline bool nf_ct_should_gc(const struct nf_conn *ct)
 {
-	return nf_ct_is_expired(ct) && nf_ct_is_confirmed(ct) &&
-	       !nf_ct_is_dying(ct);
+	if (!nf_ct_is_confirmed(ct))
+		return false;
+
+	/* load ct->timeout after is_confirmed() test.
+	 * Pairs with __nf_conntrack_confirm() which:
+	 * 1. Increases ct->timeout value
+	 * 2. Inserts ct into rcu hlist
+	 * 3. Sets the confirmed bit
+	 * 4. Unlocks the hlist lock
+	 */
+	smp_acquire__after_ctrl_dep();
+
+	return nf_ct_is_expired(ct) && !nf_ct_is_dying(ct);
 }

 #define NF_CT_DAY	(86400 * HZ)
@@ -1142,11 +1142,6 @@ int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set);
 int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);
 void nf_tables_unbind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);

-struct nft_hook;
-void nf_tables_chain_device_notify(const struct nft_chain *chain,
-				   const struct nft_hook *hook,
-				   const struct net_device *dev, int event);
-
 enum nft_chain_types {
 	NFT_CHAIN_T_DEFAULT	= 0,
 	NFT_CHAIN_T_ROUTE,
@@ -322,20 +322,24 @@
 	EM(rxrpc_call_put_kernel,		"PUT kernel ") \
 	EM(rxrpc_call_put_poke,			"PUT poke ") \
 	EM(rxrpc_call_put_recvmsg,		"PUT recvmsg ") \
+	EM(rxrpc_call_put_release_recvmsg_q,	"PUT rls-rcmq") \
 	EM(rxrpc_call_put_release_sock,		"PUT rls-sock") \
 	EM(rxrpc_call_put_release_sock_tba,	"PUT rls-sk-a") \
 	EM(rxrpc_call_put_sendmsg,		"PUT sendmsg ") \
 	EM(rxrpc_call_put_unnotify,		"PUT unnotify") \
 	EM(rxrpc_call_put_userid_exists,	"PUT u-exists") \
 	EM(rxrpc_call_put_userid,		"PUT user-id ") \
 	EM(rxrpc_call_see_accept,		"SEE accept ") \
 	EM(rxrpc_call_see_activate_client,	"SEE act-clnt") \
+	EM(rxrpc_call_see_already_released,	"SEE alrdy-rl") \
 	EM(rxrpc_call_see_connect_failed,	"SEE con-fail") \
 	EM(rxrpc_call_see_connected,		"SEE connect ") \
 	EM(rxrpc_call_see_conn_abort,		"SEE conn-abt") \
 	EM(rxrpc_call_see_discard,		"SEE discard ") \
 	EM(rxrpc_call_see_disconnected,		"SEE disconn ") \
 	EM(rxrpc_call_see_distribute_error,	"SEE dist-err") \
 	EM(rxrpc_call_see_input,		"SEE input ") \
+	EM(rxrpc_call_see_notify_released,	"SEE nfy-rlsd") \
 	EM(rxrpc_call_see_recvmsg,		"SEE recvmsg ") \
 	EM(rxrpc_call_see_release,		"SEE release ") \
 	EM(rxrpc_call_see_userid_exists,	"SEE u-exists") \
 	EM(rxrpc_call_see_waiting_call,		"SEE q-conn ") \
@@ -142,8 +142,6 @@ enum nf_tables_msg_types {
 	NFT_MSG_DESTROYOBJ,
 	NFT_MSG_DESTROYFLOWTABLE,
 	NFT_MSG_GETSETELEM_RESET,
-	NFT_MSG_NEWDEV,
-	NFT_MSG_DELDEV,
 	NFT_MSG_MAX,
 };

@@ -1786,18 +1784,10 @@ enum nft_synproxy_attributes {
  * enum nft_device_attributes - nf_tables device netlink attributes
  *
  * @NFTA_DEVICE_NAME: name of this device (NLA_STRING)
- * @NFTA_DEVICE_TABLE: table containing the flowtable or chain hooking into the device (NLA_STRING)
- * @NFTA_DEVICE_FLOWTABLE: flowtable hooking into the device (NLA_STRING)
- * @NFTA_DEVICE_CHAIN: chain hooking into the device (NLA_STRING)
- * @NFTA_DEVICE_SPEC: hook spec matching the device (NLA_STRING)
  */
 enum nft_devices_attributes {
 	NFTA_DEVICE_UNSPEC,
 	NFTA_DEVICE_NAME,
-	NFTA_DEVICE_TABLE,
-	NFTA_DEVICE_FLOWTABLE,
-	NFTA_DEVICE_CHAIN,
-	NFTA_DEVICE_SPEC,
 	__NFTA_DEVICE_MAX
 };
 #define NFTA_DEVICE_MAX	(__NFTA_DEVICE_MAX - 1)
@@ -25,8 +25,6 @@ enum nfnetlink_groups {
 #define NFNLGRP_ACCT_QUOTA		NFNLGRP_ACCT_QUOTA
 	NFNLGRP_NFTRACE,
 #define NFNLGRP_NFTRACE			NFNLGRP_NFTRACE
-	NFNLGRP_NFT_DEV,
-#define NFNLGRP_NFT_DEV			NFNLGRP_NFT_DEV
 	__NFNLGRP_MAX,
 };
 #define NFNLGRP_MAX	(__NFNLGRP_MAX - 1)
@@ -357,6 +357,35 @@ static int __vlan_device_event(struct net_device *dev, unsigned long event)
 	return err;
 }

+static void vlan_vid0_add(struct net_device *dev)
+{
+	struct vlan_info *vlan_info;
+	int err;
+
+	if (!(dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+		return;
+
+	pr_info("adding VLAN 0 to HW filter on device %s\n", dev->name);
+
+	err = vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
+	if (err)
+		return;
+
+	vlan_info = rtnl_dereference(dev->vlan_info);
+	vlan_info->auto_vid0 = true;
+}
+
+static void vlan_vid0_del(struct net_device *dev)
+{
+	struct vlan_info *vlan_info = rtnl_dereference(dev->vlan_info);
+
+	if (!vlan_info || !vlan_info->auto_vid0)
+		return;
+
+	vlan_info->auto_vid0 = false;
+	vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
+}
+
 static int vlan_device_event(struct notifier_block *unused, unsigned long event,
			     void *ptr)
 {

@@ -378,15 +407,10 @@ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
 		return notifier_from_errno(err);
 	}

-	if ((event == NETDEV_UP) &&
-	    (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
-		pr_info("adding VLAN 0 to HW filter on device %s\n",
-			dev->name);
-		vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
-	}
-	if (event == NETDEV_DOWN &&
-	    (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
-		vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
+	if (event == NETDEV_UP)
+		vlan_vid0_add(dev);
+	else if (event == NETDEV_DOWN)
+		vlan_vid0_del(dev);

 	vlan_info = rtnl_dereference(dev->vlan_info);
 	if (!vlan_info)
@@ -33,6 +33,7 @@ struct vlan_info {
 	struct vlan_group grp;
 	struct list_head vid_list;
 	unsigned int nr_vids;
+	bool auto_vid0;
 	struct rcu_head rcu;
 };
@@ -2654,7 +2654,7 @@ int hci_register_dev(struct hci_dev *hdev)
 	/* Devices that are marked for raw-only usage are unconfigured
 	 * and should not be included in normal operation.
 	 */
-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
 		hci_dev_set_flag(hdev, HCI_UNCONFIGURED);

 	/* Mark Remote Wakeup connection flag as supported if driver has wakeup

@@ -2784,7 +2784,7 @@ int hci_register_suspend_notifier(struct hci_dev *hdev)
 	int ret = 0;

 	if (!hdev->suspend_notifier.notifier_call &&
-	    !test_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks)) {
+	    !hci_test_quirk(hdev, HCI_QUIRK_NO_SUSPEND_NOTIFIER)) {
 		hdev->suspend_notifier.notifier_call = hci_suspend_notifier;
 		ret = register_pm_notifier(&hdev->suspend_notifier);
 	}
@@ -38,7 +38,7 @@ static ssize_t __name ## _read(struct file *file, \
 	struct hci_dev *hdev = file->private_data; \
 	char buf[3]; \
 \
-	buf[0] = test_bit(__quirk, &hdev->quirks) ? 'Y' : 'N'; \
+	buf[0] = test_bit(__quirk, hdev->quirk_flags) ? 'Y' : 'N'; \
 	buf[1] = '\n'; \
 	buf[2] = '\0'; \
 	return simple_read_from_buffer(user_buf, count, ppos, buf, 2); \

@@ -59,10 +59,10 @@ static ssize_t __name ## _write(struct file *file, \
 	if (err) \
 		return err; \
 \
-	if (enable == test_bit(__quirk, &hdev->quirks)) \
+	if (enable == test_bit(__quirk, hdev->quirk_flags)) \
 		return -EALREADY; \
 \
-	change_bit(__quirk, &hdev->quirks); \
+	change_bit(__quirk, hdev->quirk_flags); \
 \
	return count; \
 } \

@@ -1356,7 +1356,7 @@ static ssize_t vendor_diag_write(struct file *file, const char __user *user_buf,
 	 * for the vendor callback. Instead just store the desired value and
 	 * the setting will be programmed when the controller gets powered on.
 	 */
-	if (test_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks) &&
+	if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG) &&
 	    (!test_bit(HCI_RUNNING, &hdev->flags) ||
	     hci_dev_test_flag(hdev, HCI_USER_CHANNEL)))
		goto done;
@@ -908,8 +908,8 @@ static u8 hci_cc_read_local_ext_features(struct hci_dev *hdev, void *data,
 		return rp->status;

 	if (hdev->max_page < rp->max_page) {
-		if (test_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2,
-			     &hdev->quirks))
+		if (hci_test_quirk(hdev,
+				   HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2))
 			bt_dev_warn(hdev, "broken local ext features page 2");
 		else
 			hdev->max_page = rp->max_page;

@@ -936,7 +936,7 @@ static u8 hci_cc_read_buffer_size(struct hci_dev *hdev, void *data,
 	hdev->acl_pkts = __le16_to_cpu(rp->acl_max_pkt);
 	hdev->sco_pkts = __le16_to_cpu(rp->sco_max_pkt);

-	if (test_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks)) {
+	if (hci_test_quirk(hdev, HCI_QUIRK_FIXUP_BUFFER_SIZE)) {
 		hdev->sco_mtu = 64;
 		hdev->sco_pkts = 8;
 	}

@@ -2971,7 +2971,7 @@ static void hci_inquiry_complete_evt(struct hci_dev *hdev, void *data,
 		 * state to indicate completion.
 		 */
 		if (!hci_dev_test_flag(hdev, HCI_LE_SCAN) ||
-		    !test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks))
+		    !hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY))
 			hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
 		goto unlock;
 	}

@@ -2990,7 +2990,7 @@ static void hci_inquiry_complete_evt(struct hci_dev *hdev, void *data,
 		 * state to indicate completion.
 		 */
 		if (!hci_dev_test_flag(hdev, HCI_LE_SCAN) ||
-		    !test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks))
+		    !hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY))
 			hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
 	}

@@ -3614,8 +3614,7 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, void *data,
 	/* We skip the WRITE_AUTH_PAYLOAD_TIMEOUT for ATS2851 based controllers
 	 * to avoid unexpected SMP command errors when pairing.
 	 */
-	if (test_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT,
-		     &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT))
 		goto notify;

 	/* Set the default Authenticated Payload Timeout after

@@ -5914,7 +5913,7 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
 	 * while we have an existing one in peripheral role.
 	 */
 	if (hdev->conn_hash.le_num_peripheral > 0 &&
-	    (test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) ||
+	    (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_LE_STATES) ||
 	     !(hdev->le_states[3] & 0x10)))
 		return NULL;

@@ -6310,8 +6309,8 @@ static void hci_le_ext_adv_report_evt(struct hci_dev *hdev, void *data,
 		evt_type = __le16_to_cpu(info->type) & LE_EXT_ADV_EVT_TYPE_MASK;
 		legacy_evt_type = ext_evt_type_to_legacy(hdev, evt_type);

-		if (test_bit(HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY,
-			     &hdev->quirks)) {
+		if (hci_test_quirk(hdev,
+				   HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY)) {
 			info->primary_phy &= 0x1f;
 			info->secondary_phy &= 0x1f;
 		}
@@ -393,7 +393,7 @@ static void le_scan_disable(struct work_struct *work)
 	if (hdev->discovery.type != DISCOV_TYPE_INTERLEAVED)
 		goto _return;
 
-	if (test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks)) {
+	if (hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) {
 		if (!test_bit(HCI_INQUIRY, &hdev->flags) &&
 		    hdev->discovery.state != DISCOVERY_RESOLVING)
 			goto discov_stopped;
@@ -3587,7 +3587,7 @@ static void hci_dev_get_bd_addr_from_property(struct hci_dev *hdev)
 	if (ret < 0 || !bacmp(&ba, BDADDR_ANY))
 		return;
 
-	if (test_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_BDADDR_PROPERTY_BROKEN))
 		baswap(&hdev->public_addr, &ba);
 	else
 		bacpy(&hdev->public_addr, &ba);
@@ -3662,7 +3662,7 @@ static int hci_init0_sync(struct hci_dev *hdev)
 	bt_dev_dbg(hdev, "");
 
 	/* Reset */
-	if (!test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) {
+	if (!hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE)) {
 		err = hci_reset_sync(hdev);
 		if (err)
 			return err;
@@ -3675,7 +3675,7 @@ static int hci_unconf_init_sync(struct hci_dev *hdev)
 {
 	int err;
 
-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
 		return 0;
 
 	err = hci_init0_sync(hdev);
@@ -3718,7 +3718,7 @@ static int hci_read_local_cmds_sync(struct hci_dev *hdev)
 	 * supported commands.
 	 */
 	if (hdev->hci_ver > BLUETOOTH_VER_1_1 &&
-	    !test_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks))
+	    !hci_test_quirk(hdev, HCI_QUIRK_BROKEN_LOCAL_COMMANDS))
 		return __hci_cmd_sync_status(hdev, HCI_OP_READ_LOCAL_COMMANDS,
 					     0, NULL, HCI_CMD_TIMEOUT);
 
@@ -3732,7 +3732,7 @@ static int hci_init1_sync(struct hci_dev *hdev)
 	bt_dev_dbg(hdev, "");
 
 	/* Reset */
-	if (!test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks)) {
+	if (!hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE)) {
 		err = hci_reset_sync(hdev);
 		if (err)
 			return err;
@@ -3795,7 +3795,7 @@ static int hci_set_event_filter_sync(struct hci_dev *hdev, u8 flt_type,
 	if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED))
 		return 0;
 
-	if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL))
 		return 0;
 
 	memset(&cp, 0, sizeof(cp));
@@ -3822,7 +3822,7 @@ static int hci_clear_event_filter_sync(struct hci_dev *hdev)
 	 * a hci_set_event_filter_sync() call succeeds, but we do
 	 * the check both for parity and as a future reminder.
 	 */
-	if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL))
 		return 0;
 
 	return hci_set_event_filter_sync(hdev, HCI_FLT_CLEAR_ALL, 0x00,
@@ -3846,7 +3846,7 @@ static int hci_write_sync_flowctl_sync(struct hci_dev *hdev)
 
 	/* Check if the controller supports SCO and HCI_OP_WRITE_SYNC_FLOWCTL */
 	if (!lmp_sco_capable(hdev) || !(hdev->commands[10] & BIT(4)) ||
-	    !test_bit(HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED, &hdev->quirks))
+	    !hci_test_quirk(hdev, HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED))
 		return 0;
 
 	memset(&cp, 0, sizeof(cp));
@@ -3921,7 +3921,7 @@ static int hci_write_inquiry_mode_sync(struct hci_dev *hdev)
 	u8 mode;
 
 	if (!lmp_inq_rssi_capable(hdev) &&
-	    !test_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks))
+	    !hci_test_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE))
 		return 0;
 
 	/* If Extended Inquiry Result events are supported, then
@@ -4111,7 +4111,7 @@ static int hci_set_event_mask_sync(struct hci_dev *hdev)
 	}
 
 	if (lmp_inq_rssi_capable(hdev) ||
-	    test_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_FIXUP_INQUIRY_MODE))
 		events[4] |= 0x02; /* Inquiry Result with RSSI */
 
 	if (lmp_ext_feat_capable(hdev))
@@ -4163,7 +4163,7 @@ static int hci_read_stored_link_key_sync(struct hci_dev *hdev)
 	struct hci_cp_read_stored_link_key cp;
 
 	if (!(hdev->commands[6] & 0x20) ||
-	    test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY))
 		return 0;
 
 	memset(&cp, 0, sizeof(cp));
@@ -4212,7 +4212,7 @@ static int hci_read_def_err_data_reporting_sync(struct hci_dev *hdev)
 {
 	if (!(hdev->commands[18] & 0x04) ||
 	    !(hdev->features[0][6] & LMP_ERR_DATA_REPORTING) ||
-	    test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING))
 		return 0;
 
 	return __hci_cmd_sync_status(hdev, HCI_OP_READ_DEF_ERR_DATA_REPORTING,
@@ -4226,7 +4226,7 @@ static int hci_read_page_scan_type_sync(struct hci_dev *hdev)
 	 * this command in the bit mask of supported commands.
 	 */
 	if (!(hdev->commands[13] & 0x01) ||
-	    test_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE))
 		return 0;
 
 	return __hci_cmd_sync_status(hdev, HCI_OP_READ_PAGE_SCAN_TYPE,
@@ -4421,7 +4421,7 @@ static int hci_le_read_adv_tx_power_sync(struct hci_dev *hdev)
 static int hci_le_read_tx_power_sync(struct hci_dev *hdev)
 {
 	if (!(hdev->commands[38] & 0x80) ||
-	    test_bit(HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER))
 		return 0;
 
 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_READ_TRANSMIT_POWER,
@@ -4464,7 +4464,7 @@ static int hci_le_set_rpa_timeout_sync(struct hci_dev *hdev)
 	__le16 timeout = cpu_to_le16(hdev->rpa_timeout);
 
 	if (!(hdev->commands[35] & 0x04) ||
-	    test_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT))
 		return 0;
 
 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_RPA_TIMEOUT,
@@ -4609,7 +4609,7 @@ static int hci_delete_stored_link_key_sync(struct hci_dev *hdev)
 	 * just disable this command.
 	 */
 	if (!(hdev->commands[6] & 0x80) ||
-	    test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_STORED_LINK_KEY))
 		return 0;
 
 	memset(&cp, 0, sizeof(cp));
@@ -4735,7 +4735,7 @@ static int hci_set_err_data_report_sync(struct hci_dev *hdev)
 
 	if (!(hdev->commands[18] & 0x08) ||
 	    !(hdev->features[0][6] & LMP_ERR_DATA_REPORTING) ||
-	    test_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks))
+	    hci_test_quirk(hdev, HCI_QUIRK_BROKEN_ERR_DATA_REPORTING))
 		return 0;
 
 	if (enabled == hdev->err_data_reporting)
@@ -4948,7 +4948,7 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
 	size_t i;
 
 	if (!hci_dev_test_flag(hdev, HCI_SETUP) &&
-	    !test_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks))
+	    !hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_SETUP))
 		return 0;
 
 	bt_dev_dbg(hdev, "");
@@ -4959,7 +4959,7 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
 	ret = hdev->setup(hdev);
 
 	for (i = 0; i < ARRAY_SIZE(hci_broken_table); i++) {
-		if (test_bit(hci_broken_table[i].quirk, &hdev->quirks))
+		if (hci_test_quirk(hdev, hci_broken_table[i].quirk))
 			bt_dev_warn(hdev, "%s", hci_broken_table[i].desc);
 	}
 
@@ -4967,10 +4967,10 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
 	 * BD_ADDR invalid before creating the HCI device or in
 	 * its setup callback.
 	 */
-	invalid_bdaddr = test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) ||
-			 test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+	invalid_bdaddr = hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) ||
+			 hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY);
 	if (!ret) {
-		if (test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks) &&
+		if (hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY) &&
 		    !bacmp(&hdev->public_addr, BDADDR_ANY))
 			hci_dev_get_bd_addr_from_property(hdev);
 
@@ -4992,7 +4992,7 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
 		 * In case any of them is set, the controller has to
 		 * start up as unconfigured.
 		 */
-		if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) ||
+		if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) ||
 		    invalid_bdaddr)
 			hci_dev_set_flag(hdev, HCI_UNCONFIGURED);
 
@@ -5052,7 +5052,7 @@ static int hci_dev_init_sync(struct hci_dev *hdev)
 	 * then they need to be reprogrammed after the init procedure
 	 * completed.
 	 */
-	if (test_bit(HCI_QUIRK_NON_PERSISTENT_DIAG, &hdev->quirks) &&
+	if (hci_test_quirk(hdev, HCI_QUIRK_NON_PERSISTENT_DIAG) &&
 	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
 	    hci_dev_test_flag(hdev, HCI_VENDOR_DIAG) && hdev->set_diag)
 		ret = hdev->set_diag(hdev, true);
@@ -5309,7 +5309,7 @@ int hci_dev_close_sync(struct hci_dev *hdev)
 	/* Reset device */
 	skb_queue_purge(&hdev->cmd_q);
 	atomic_set(&hdev->cmd_cnt, 1);
-	if (test_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks) &&
+	if (hci_test_quirk(hdev, HCI_QUIRK_RESET_ON_CLOSE) &&
 	    !auto_off && !hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
 		set_bit(HCI_INIT, &hdev->flags);
 		hci_reset_sync(hdev);
@@ -5959,7 +5959,7 @@ static int hci_active_scan_sync(struct hci_dev *hdev, uint16_t interval)
 		own_addr_type = ADDR_LE_DEV_PUBLIC;
 
 	if (hci_is_adv_monitoring(hdev) ||
-	    (test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks) &&
+	    (hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER) &&
 	     hdev->discovery.result_filtering)) {
 		/* Duplicate filter should be disabled when some advertisement
 		 * monitor is activated, otherwise AdvMon can only receive one
@@ -6022,8 +6022,7 @@ int hci_start_discovery_sync(struct hci_dev *hdev)
 	 * and LE scanning are done sequentially with separate
 	 * timeouts.
 	 */
-	if (test_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY,
-		     &hdev->quirks)) {
+	if (hci_test_quirk(hdev, HCI_QUIRK_SIMULTANEOUS_DISCOVERY)) {
 		timeout = msecs_to_jiffies(DISCOV_LE_TIMEOUT);
 		/* During simultaneous discovery, we double LE scan
 		 * interval. We must leave some time for the controller
@@ -6100,7 +6099,7 @@ static int hci_update_event_filter_sync(struct hci_dev *hdev)
 	/* Some fake CSR controllers lock up after setting this type of
 	 * filter, so avoid sending the request altogether.
 	 */
-	if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL))
 		return 0;
 
 	/* Always clear event filter when starting */
@@ -6815,8 +6814,8 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
 		return 0;
 	}
 
-	/* No privacy so use a public address. */
-	*own_addr_type = ADDR_LE_DEV_PUBLIC;
+	/* No privacy, use the current address */
+	hci_copy_identity_address(hdev, rand_addr, own_addr_type);
 
 	return 0;
 }
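The hunks above all apply the same mechanical conversion, replacing open-coded `test_bit(QUIRK, &hdev->quirks)` with a `hci_test_quirk(hdev, QUIRK)` accessor. A minimal userspace sketch of the pattern (the struct and the accessor body are stand-ins, not the kernel's actual definitions in net/bluetooth):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's struct hci_dev. */
struct hci_dev {
	unsigned long quirks;	/* one bit per quirk */
};

/* Sketch of the accessor shape the patch converts callers to:
 * the quirk test goes through one helper instead of open-coded
 * test_bit() on the quirks word. */
static bool hci_test_quirk(const struct hci_dev *hdev, unsigned int quirk)
{
	return (hdev->quirks >> quirk) & 1UL;
}
```

Centralizing the test in one helper lets the quirk storage change later without touching every call site, which is presumably why the sub-tree did the sweep.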
@@ -3520,12 +3520,28 @@ done:
 	/* Configure output options and let the other side know
 	 * which ones we don't like. */
 
-	/* If MTU is not provided in configure request, use the most recently
-	 * explicitly or implicitly accepted value for the other direction,
-	 * or the default value.
+	/* If MTU is not provided in configure request, try adjusting it
+	 * to the current output MTU if it has been set
+	 *
+	 * Bluetooth Core 6.1, Vol 3, Part A, Section 4.5
+	 *
+	 * Each configuration parameter value (if any is present) in an
+	 * L2CAP_CONFIGURATION_RSP packet reflects an 'adjustment' to a
+	 * configuration parameter value that has been sent (or, in case
+	 * of default values, implied) in the corresponding
+	 * L2CAP_CONFIGURATION_REQ packet.
 	 */
-	if (mtu == 0)
-		mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU;
+	if (!mtu) {
+		/* Only adjust for ERTM channels as for older modes the
+		 * remote stack may not be able to detect that the
+		 * adjustment causing it to silently drop packets.
+		 */
+		if (chan->mode == L2CAP_MODE_ERTM &&
+		    chan->omtu && chan->omtu != L2CAP_DEFAULT_MTU)
+			mtu = chan->omtu;
+		else
+			mtu = L2CAP_DEFAULT_MTU;
+	}
 
 	if (mtu < L2CAP_DEFAULT_MIN_MTU)
 		result = L2CAP_CONF_UNACCEPT;
@@ -1703,6 +1703,9 @@ static void l2cap_sock_resume_cb(struct l2cap_chan *chan)
 {
 	struct sock *sk = chan->data;
 
+	if (!sk)
+		return;
+
 	if (test_and_clear_bit(FLAG_PENDING_SECURITY, &chan->flags)) {
 		sk->sk_state = BT_CONNECTED;
 		chan->state = BT_CONNECTED;
@@ -464,7 +464,7 @@ static int read_index_list(struct sock *sk, struct hci_dev *hdev, void *data,
 			/* Devices marked as raw-only are neither configured
 			 * nor unconfigured controllers.
 			 */
-			if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
+			if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE))
 				continue;
 
 			if (!hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
@@ -522,7 +522,7 @@ static int read_unconf_index_list(struct sock *sk, struct hci_dev *hdev,
 			/* Devices marked as raw-only are neither configured
 			 * nor unconfigured controllers.
 			 */
-			if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
+			if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE))
 				continue;
 
 			if (hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
@@ -576,7 +576,7 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
 			/* Devices marked as raw-only are neither configured
 			 * nor unconfigured controllers.
 			 */
-			if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
+			if (hci_test_quirk(d, HCI_QUIRK_RAW_DEVICE))
 				continue;
 
 			if (hci_dev_test_flag(d, HCI_UNCONFIGURED))
@@ -612,12 +612,12 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
 
 static bool is_configured(struct hci_dev *hdev)
 {
-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) &&
+	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) &&
 	    !hci_dev_test_flag(hdev, HCI_EXT_CONFIGURED))
 		return false;
 
-	if ((test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) ||
-	     test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) &&
+	if ((hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) ||
+	     hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY)) &&
 	    !bacmp(&hdev->public_addr, BDADDR_ANY))
 		return false;
 
@@ -628,12 +628,12 @@ static __le32 get_missing_options(struct hci_dev *hdev)
 {
 	u32 options = 0;
 
-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) &&
+	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) &&
 	    !hci_dev_test_flag(hdev, HCI_EXT_CONFIGURED))
 		options |= MGMT_OPTION_EXTERNAL_CONFIG;
 
-	if ((test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) ||
-	     test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) &&
+	if ((hci_test_quirk(hdev, HCI_QUIRK_INVALID_BDADDR) ||
+	     hci_test_quirk(hdev, HCI_QUIRK_USE_BDADDR_PROPERTY)) &&
 	    !bacmp(&hdev->public_addr, BDADDR_ANY))
 		options |= MGMT_OPTION_PUBLIC_ADDRESS;
 
@@ -669,7 +669,7 @@ static int read_config_info(struct sock *sk, struct hci_dev *hdev,
 	memset(&rp, 0, sizeof(rp));
 	rp.manufacturer = cpu_to_le16(hdev->manufacturer);
 
-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG))
 		options |= MGMT_OPTION_EXTERNAL_CONFIG;
 
 	if (hdev->set_bdaddr)
@@ -828,8 +828,7 @@ static u32 get_supported_settings(struct hci_dev *hdev)
 		if (lmp_sc_capable(hdev))
 			settings |= MGMT_SETTING_SECURE_CONN;
 
-		if (test_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
-			     &hdev->quirks))
+		if (hci_test_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED))
 			settings |= MGMT_SETTING_WIDEBAND_SPEECH;
 	}
 
@@ -841,8 +840,7 @@ static u32 get_supported_settings(struct hci_dev *hdev)
 		settings |= MGMT_SETTING_ADVERTISING;
 	}
 
-	if (test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks) ||
-	    hdev->set_bdaddr)
+	if (hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG) || hdev->set_bdaddr)
 		settings |= MGMT_SETTING_CONFIGURATION;
 
 	if (cis_central_capable(hdev))
@@ -4307,7 +4305,7 @@ static int set_wideband_speech(struct sock *sk, struct hci_dev *hdev,
 
 	bt_dev_dbg(hdev, "sock %p", sk);
 
-	if (!test_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks))
+	if (!hci_test_quirk(hdev, HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED))
 		return mgmt_cmd_status(sk, hdev->id,
 				       MGMT_OP_SET_WIDEBAND_SPEECH,
 				       MGMT_STATUS_NOT_SUPPORTED);
@@ -7935,7 +7933,7 @@ static int set_external_config(struct sock *sk, struct hci_dev *hdev,
 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_EXTERNAL_CONFIG,
 				       MGMT_STATUS_INVALID_PARAMS);
 
-	if (!test_bit(HCI_QUIRK_EXTERNAL_CONFIG, &hdev->quirks))
+	if (!hci_test_quirk(hdev, HCI_QUIRK_EXTERNAL_CONFIG))
 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_EXTERNAL_CONFIG,
 				       MGMT_STATUS_NOT_SUPPORTED);
 
@@ -9338,7 +9336,7 @@ void mgmt_index_added(struct hci_dev *hdev)
 {
 	struct mgmt_ev_ext_index ev;
 
-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
 		return;
 
 	if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
@@ -9362,7 +9360,7 @@ void mgmt_index_removed(struct hci_dev *hdev)
 	struct mgmt_ev_ext_index ev;
 	struct cmd_lookup match = { NULL, hdev, MGMT_STATUS_INVALID_INDEX };
 
-	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+	if (hci_test_quirk(hdev, HCI_QUIRK_RAW_DEVICE))
 		return;
 
 	mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
@@ -10089,7 +10087,7 @@ static bool is_filter_match(struct hci_dev *hdev, s8 rssi, u8 *eir,
 	if (hdev->discovery.rssi != HCI_RSSI_INVALID &&
 	    (rssi == HCI_RSSI_INVALID ||
 	     (rssi < hdev->discovery.rssi &&
-	      !test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks))))
+	      !hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER))))
 		return false;
 
 	if (hdev->discovery.uuid_count != 0) {
@@ -10107,7 +10105,7 @@ static bool is_filter_match(struct hci_dev *hdev, s8 rssi, u8 *eir,
 	/* If duplicate filtering does not report RSSI changes, then restart
 	 * scanning to ensure updated result with updated RSSI values.
 	 */
-	if (test_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks)) {
+	if (hci_test_quirk(hdev, HCI_QUIRK_STRICT_DUPLICATE_FILTER)) {
 		/* Validate RSSI value against the RSSI threshold once more. */
 		if (hdev->discovery.rssi != HCI_RSSI_INVALID &&
 		    rssi < hdev->discovery.rssi)
@@ -989,7 +989,7 @@ static void msft_monitor_device_evt(struct hci_dev *hdev, struct sk_buff *skb)
 
 	handle_data = msft_find_handle_data(hdev, ev->monitor_handle, false);
 
-	if (!test_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks)) {
+	if (!hci_test_quirk(hdev, HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER)) {
 		if (!handle_data)
 			return;
 		mgmt_handle = handle_data->mgmt_handle;
@@ -1379,7 +1379,7 @@ static void smp_timeout(struct work_struct *work)
 
 	bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
 
-	hci_disconnect(conn->hcon, HCI_ERROR_REMOTE_USER_TERM);
+	hci_disconnect(conn->hcon, HCI_ERROR_AUTH_FAILURE);
 }
 
 static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
@@ -2977,8 +2977,25 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
 	if (code > SMP_CMD_MAX)
 		goto drop;
 
-	if (smp && !test_and_clear_bit(code, &smp->allow_cmd))
+	if (smp && !test_and_clear_bit(code, &smp->allow_cmd)) {
+		/* If there is a context and the command is not allowed consider
+		 * it a failure so the session is cleanup properly.
+		 */
+		switch (code) {
+		case SMP_CMD_IDENT_INFO:
+		case SMP_CMD_IDENT_ADDR_INFO:
+		case SMP_CMD_SIGN_INFO:
+			/* 3.6.1. Key distribution and generation
+			 *
+			 * A device may reject a distributed key by sending the
+			 * Pairing Failed command with the reason set to
+			 * "Key Rejected".
+			 */
+			smp_failure(conn, SMP_KEY_REJECTED);
+			break;
+		}
 		goto drop;
+	}
 
 	/* If we don't have a context the only allowed commands are
 	 * pairing request and security request.
@@ -138,6 +138,7 @@ struct smp_cmd_keypress_notify {
 #define SMP_NUMERIC_COMP_FAILED		0x0c
 #define SMP_BREDR_PAIRING_IN_PROGRESS	0x0d
 #define SMP_CROSS_TRANSP_NOT_ALLOWED	0x0e
+#define SMP_KEY_REJECTED		0x0f
 
 #define SMP_MIN_ENC_KEY_SIZE		7
 #define SMP_MAX_ENC_KEY_SIZE		16
@@ -17,6 +17,9 @@ static bool nbp_switchdev_can_offload_tx_fwd(const struct net_bridge_port *p,
 	if (!static_branch_unlikely(&br_switchdev_tx_fwd_offload))
 		return false;
 
+	if (br_multicast_igmp_type(skb))
+		return false;
+
 	return (p->flags & BR_TX_FWD_OFFLOAD) &&
 	       (p->hwdom != BR_INPUT_SKB_CB(skb)->src_hwdom);
 }
@@ -359,6 +359,7 @@ struct sk_buff *tcp_gro_receive(struct list_head *head, struct sk_buff *skb,
 		flush |= skb->ip_summed != p->ip_summed;
 		flush |= skb->csum_level != p->csum_level;
 		flush |= NAPI_GRO_CB(p)->count >= 64;
+		skb_set_network_header(skb, skb_gro_receive_network_offset(skb));
 
 		if (flush || skb_gro_receive_list(p, skb))
 			mss = 1;
@@ -767,6 +767,7 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
 				NAPI_GRO_CB(skb)->flush = 1;
 				return NULL;
 			}
+			skb_set_network_header(skb, skb_gro_receive_network_offset(skb));
 			ret = skb_gro_receive_list(p, skb);
 		} else {
 			skb_gro_postpull_rcsum(skb, uh,
@@ -807,8 +807,8 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
 		} else {
 			im->mca_crcount = idev->mc_qrv;
 		}
-		in6_dev_put(pmc->idev);
 		ip6_mc_clear_src(pmc);
+		in6_dev_put(pmc->idev);
 		kfree_rcu(pmc, rcu);
 	}
 }
@@ -129,13 +129,13 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
 			     struct dst_entry *cache_dst)
 {
 	struct ipv6_rpl_sr_hdr *isrh, *csrh;
-	const struct ipv6hdr *oldhdr;
+	struct ipv6hdr oldhdr;
 	struct ipv6hdr *hdr;
 	unsigned char *buf;
 	size_t hdrlen;
 	int err;
 
-	oldhdr = ipv6_hdr(skb);
+	memcpy(&oldhdr, ipv6_hdr(skb), sizeof(oldhdr));
 
 	buf = kcalloc(struct_size(srh, segments.addr, srh->segments_left), 2, GFP_ATOMIC);
 	if (!buf)
@@ -147,7 +147,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
 	memcpy(isrh, srh, sizeof(*isrh));
 	memcpy(isrh->rpl_segaddr, &srh->rpl_segaddr[1],
 	       (srh->segments_left - 1) * 16);
-	isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr->daddr;
+	isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr.daddr;
 
 	ipv6_rpl_srh_compress(csrh, isrh, &srh->rpl_segaddr[0],
 			      isrh->segments_left - 1);
@@ -169,7 +169,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
 	skb_mac_header_rebuild(skb);
 
 	hdr = ipv6_hdr(skb);
-	memmove(hdr, oldhdr, sizeof(*hdr));
+	memmove(hdr, &oldhdr, sizeof(*hdr));
 	isrh = (void *)hdr + sizeof(*hdr);
 	memcpy(isrh, csrh, hdrlen);
 
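The rpl_do_srh_inline() hunks above fix the UaF by replacing a pointer into the packet headroom with a by-value copy: once the skb is grown, the old header memory can move and a saved pointer into it dangles. A userspace sketch of the same pattern, with a plain heap buffer and a stub struct standing in for the skb and struct ipv6hdr:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for struct ipv6hdr; only the field we need. */
struct ipv6hdr_stub {
	unsigned int daddr;
};

/* Snapshot the header *by value* before the buffer may be reallocated,
 * then read only the snapshot afterwards - mirroring the fix above. */
static unsigned int read_daddr_safely(unsigned char **buf, size_t *len)
{
	struct ipv6hdr_stub oldhdr;
	unsigned char *bigger;

	memcpy(&oldhdr, *buf, sizeof(oldhdr));	/* copy, not a pointer */

	/* The reallocation may move the memory; a pointer saved into the
	 * old buffer (the pre-fix code) would now be dangling. */
	bigger = realloc(*buf, *len * 2);
	if (bigger) {
		*buf = bigger;
		*len *= 2;
	}

	return oldhdr.daddr;	/* still valid: reads the local copy */
}
```

The kernel fix is the same idea with `memcpy(&oldhdr, ipv6_hdr(skb), sizeof(oldhdr))` taken before the skb geometry changes.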
@@ -978,8 +978,9 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
 		if (subflow->mp_join)
 			goto reset;
 		subflow->mp_capable = 0;
+		if (!mptcp_try_fallback(ssk))
+			goto reset;
 		pr_fallback(msk);
-		mptcp_do_fallback(ssk);
 		return false;
 	}
 
@@ -765,8 +765,14 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
 
 	pr_debug("fail_seq=%llu\n", fail_seq);
 
-	if (!READ_ONCE(msk->allow_infinite_fallback))
+	/* After accepting the fail, we can't create any other subflows */
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_infinite_fallback) {
+		spin_unlock_bh(&msk->fallback_lock);
 		return;
+	}
+	msk->allow_subflows = false;
+	spin_unlock_bh(&msk->fallback_lock);
 
 	if (!subflow->fail_tout) {
 		pr_debug("send MP_FAIL response and infinite map\n");
@@ -560,10 +560,9 @@ static bool mptcp_check_data_fin(struct sock *sk)
 
 static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
 {
-	if (READ_ONCE(msk->allow_infinite_fallback)) {
+	if (mptcp_try_fallback(ssk)) {
 		MPTCP_INC_STATS(sock_net(ssk),
 				MPTCP_MIB_DSSCORRUPTIONFALLBACK);
-		mptcp_do_fallback(ssk);
 	} else {
 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DSSCORRUPTIONRESET);
 		mptcp_subflow_reset(ssk);
@@ -792,7 +791,7 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk)
 {
 	mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq);
-	WRITE_ONCE(msk->allow_infinite_fallback, false);
+	msk->allow_infinite_fallback = false;
 	mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);
 }
 
@@ -803,6 +802,14 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
 	if (sk->sk_state != TCP_ESTABLISHED)
 		return false;
 
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_subflows) {
+		spin_unlock_bh(&msk->fallback_lock);
+		return false;
+	}
+	mptcp_subflow_joined(msk, ssk);
+	spin_unlock_bh(&msk->fallback_lock);
+
 	/* attach to msk socket only after we are sure we will deal with it
 	 * at close time
 	 */
@@ -811,7 +818,6 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
 
 	mptcp_subflow_ctx(ssk)->subflow_id = msk->subflow_id++;
 	mptcp_sockopt_sync_locked(msk, ssk);
-	mptcp_subflow_joined(msk, ssk);
 	mptcp_stop_tout_timer(sk);
 	__mptcp_propagate_sndbuf(sk, ssk);
 	return true;
@@ -1136,10 +1142,14 @@ static void mptcp_update_infinite_map(struct mptcp_sock *msk,
 	mpext->infinite_map = 1;
 	mpext->data_len = 0;
 
+	if (!mptcp_try_fallback(ssk)) {
+		mptcp_subflow_reset(ssk);
+		return;
+	}
+
 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPTX);
 	mptcp_subflow_ctx(ssk)->send_infinite_map = 0;
 	pr_fallback(msk);
-	mptcp_do_fallback(ssk);
 }
 
 #define MPTCP_MAX_GSO_SIZE	(GSO_LEGACY_MAX_SIZE - (MAX_TCP_HEADER + 1))
@@ -2543,9 +2553,9 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
 
 static void __mptcp_retrans(struct sock *sk)
 {
+	struct mptcp_sendmsg_info info = { .data_lock_held = true, };
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	struct mptcp_subflow_context *subflow;
-	struct mptcp_sendmsg_info info = {};
 	struct mptcp_data_frag *dfrag;
 	struct sock *ssk;
 	int ret, err;
@@ -2590,6 +2600,18 @@ static void __mptcp_retrans(struct sock *sk)
 		info.sent = 0;
 		info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len :
 						dfrag->already_sent;
+
+		/*
+		 * make the whole retrans decision, xmit, disallow
+		 * fallback atomic
+		 */
+		spin_lock_bh(&msk->fallback_lock);
+		if (__mptcp_check_fallback(msk)) {
+			spin_unlock_bh(&msk->fallback_lock);
+			release_sock(ssk);
+			return;
+		}
+
 		while (info.sent < info.limit) {
 			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
 			if (ret <= 0)
@@ -2603,8 +2625,9 @@ static void __mptcp_retrans(struct sock *sk)
 			len = max(copied, len);
 			tcp_push(ssk, 0, info.mss_now, tcp_sk(ssk)->nonagle,
 				 info.size_goal);
-			WRITE_ONCE(msk->allow_infinite_fallback, false);
+			msk->allow_infinite_fallback = false;
 		}
+		spin_unlock_bh(&msk->fallback_lock);
 
 		release_sock(ssk);
 	}
@@ -2730,7 +2753,8 @@ static void __mptcp_init_sock(struct sock *sk)
 	WRITE_ONCE(msk->first, NULL);
 	inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
 	WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
-	WRITE_ONCE(msk->allow_infinite_fallback, true);
+	msk->allow_infinite_fallback = true;
+	msk->allow_subflows = true;
 	msk->recovery = false;
 	msk->subflow_id = 1;
 	msk->last_data_sent = tcp_jiffies32;
@@ -2738,6 +2762,7 @@ static void __mptcp_init_sock(struct sock *sk)
 	msk->last_ack_recv = tcp_jiffies32;
 
 	mptcp_pm_data_init(msk);
+	spin_lock_init(&msk->fallback_lock);
 
 	/* re-use the csk retrans timer for MPTCP-level retrans */
 	timer_setup(&msk->sk.icsk_retransmit_timer, mptcp_retransmit_timer, 0);
@@ -3117,7 +3142,16 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 	 * subflow
 	 */
 	mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
 
+	/* The first subflow is already in TCP_CLOSE status, the following
+	 * can't overlap with a fallback anymore
+	 */
+	spin_lock_bh(&msk->fallback_lock);
+	msk->allow_subflows = true;
+	msk->allow_infinite_fallback = true;
 	WRITE_ONCE(msk->flags, 0);
+	spin_unlock_bh(&msk->fallback_lock);
+
 	msk->cb_flags = 0;
 	msk->recovery = false;
 	WRITE_ONCE(msk->can_ack, false);
@@ -3524,7 +3558,13 @@ bool mptcp_finish_join(struct sock *ssk)
 
 	/* active subflow, already present inside the conn_list */
 	if (!list_empty(&subflow->node)) {
+		spin_lock_bh(&msk->fallback_lock);
+		if (!msk->allow_subflows) {
+			spin_unlock_bh(&msk->fallback_lock);
+			return false;
+		}
 		mptcp_subflow_joined(msk, ssk);
+		spin_unlock_bh(&msk->fallback_lock);
 		mptcp_propagate_sndbuf(parent, ssk);
 		return true;
 	}
@ -346,10 +346,16 @@ struct mptcp_sock {
|
|||
u64 rtt_us; /* last maximum rtt of subflows */
|
||||
} rcvq_space;
|
||||
u8 scaling_ratio;
|
||||
bool allow_subflows;
|
||||
|
||||
u32 subflow_id;
|
||||
u32 setsockopt_seq;
|
||||
char ca_name[TCP_CA_NAME_MAX];
|
||||
|
||||
spinlock_t fallback_lock; /* protects fallback,
|
||||
* allow_infinite_fallback and
|
||||
* allow_join
|
||||
*/
|
||||
};
|
||||
|
||||
#define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
|
||||
|
|
@ -1216,15 +1222,22 @@ static inline bool mptcp_check_fallback(const struct sock *sk)
|
|||
return __mptcp_check_fallback(msk);
|
||||
}
|
||||
|
||||
static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
|
||||
static inline bool __mptcp_try_fallback(struct mptcp_sock *msk)
|
||||
{
|
||||
if (__mptcp_check_fallback(msk)) {
|
||||
pr_debug("TCP fallback already done (msk=%p)\n", msk);
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback)))
|
||||
return;
|
||||
spin_lock_bh(&msk->fallback_lock);
|
||||
if (!msk->allow_infinite_fallback) {
|
||||
spin_unlock_bh(&msk->fallback_lock);
|
||||
return false;
|
||||
}
|
||||
|
||||
msk->allow_subflows = false;
|
||||
set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
|
||||
spin_unlock_bh(&msk->fallback_lock);
|
||||
return true;
|
||||
}
|
||||
|
||||
static inline bool __mptcp_has_initial_subflow(const struct mptcp_sock *msk)
|
||||
|
|
@@ -1236,14 +1249,15 @@ static inline bool __mptcp_has_initial_subflow(const struct mptcp_sock *msk)
 					TCPF_SYN_RECV | TCPF_LISTEN));
 }
 
-static inline void mptcp_do_fallback(struct sock *ssk)
+static inline bool mptcp_try_fallback(struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct sock *sk = subflow->conn;
 	struct mptcp_sock *msk;
 
 	msk = mptcp_sk(sk);
-	__mptcp_do_fallback(msk);
+	if (!__mptcp_try_fallback(msk))
+		return false;
 	if (READ_ONCE(msk->snd_data_fin_enable) && !(ssk->sk_shutdown & SEND_SHUTDOWN)) {
 		gfp_t saved_allocation = ssk->sk_allocation;
 
@@ -1255,6 +1269,7 @@ static inline void mptcp_do_fallback(struct sock *ssk)
 		tcp_shutdown(ssk, SEND_SHUTDOWN);
 		ssk->sk_allocation = saved_allocation;
 	}
+	return true;
 }
 
 #define pr_fallback(a)	pr_debug("%s:fallback to TCP (msk=%p)\n", __func__, a)
@@ -1264,7 +1279,7 @@ static inline void mptcp_subflow_early_fallback(struct mptcp_sock *msk,
 {
 	pr_fallback(msk);
 	subflow->request_mptcp = 0;
-	__mptcp_do_fallback(msk);
+	WARN_ON_ONCE(!__mptcp_try_fallback(msk));
 }
 
 static inline bool mptcp_check_infinite_map(struct sk_buff *skb)
@@ -544,9 +544,11 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
 	mptcp_get_options(skb, &mp_opt);
 	if (subflow->request_mptcp) {
 		if (!(mp_opt.suboptions & OPTION_MPTCP_MPC_SYNACK)) {
+			if (!mptcp_try_fallback(sk))
+				goto do_reset;
+
 			MPTCP_INC_STATS(sock_net(sk),
 					MPTCP_MIB_MPCAPABLEACTIVEFALLBACK);
-			mptcp_do_fallback(sk);
 			pr_fallback(msk);
 			goto fallback;
 		}
@@ -1300,20 +1302,29 @@ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ss
 		mptcp_schedule_work(sk);
 }
 
-static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+static bool mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	unsigned long fail_tout;
 
+	/* we are really failing, prevent any later subflow join */
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_infinite_fallback) {
+		spin_unlock_bh(&msk->fallback_lock);
+		return false;
+	}
+	msk->allow_subflows = false;
+	spin_unlock_bh(&msk->fallback_lock);
+
 	/* graceful failure can happen only on the MPC subflow */
 	if (WARN_ON_ONCE(ssk != READ_ONCE(msk->first)))
-		return;
+		return false;
 
 	/* since the close timeout take precedence on the fail one,
 	 * no need to start the latter when the first is already set
 	 */
 	if (sock_flag((struct sock *)msk, SOCK_DEAD))
-		return;
+		return true;
 
 	/* we don't need extreme accuracy here, use a zero fail_tout as special
 	 * value meaning no fail timeout at all;
@@ -1325,6 +1336,7 @@ static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
 	tcp_send_ack(ssk);
 
 	mptcp_reset_tout_timer(msk, subflow->fail_tout);
+	return true;
 }
 
 static bool subflow_check_data_avail(struct sock *ssk)
@@ -1385,17 +1397,16 @@ fallback:
 		    (subflow->mp_join || subflow->valid_csum_seen)) {
 			subflow->send_mp_fail = 1;
 
-			if (!READ_ONCE(msk->allow_infinite_fallback)) {
+			if (!mptcp_subflow_fail(msk, ssk)) {
 				subflow->reset_transient = 0;
 				subflow->reset_reason = MPTCP_RST_EMIDDLEBOX;
 				goto reset;
 			}
-			mptcp_subflow_fail(msk, ssk);
 			WRITE_ONCE(subflow->data_avail, true);
 			return true;
 		}
 
-		if (!READ_ONCE(msk->allow_infinite_fallback)) {
+		if (!mptcp_try_fallback(ssk)) {
 			/* fatal protocol error, close the socket.
 			 * subflow_error_report() will introduce the appropriate barriers
 			 */
@@ -1413,8 +1424,6 @@ reset:
 			WRITE_ONCE(subflow->data_avail, false);
 			return false;
 		}
-
-		mptcp_do_fallback(ssk);
 	}
 
 	skb = skb_peek(&ssk->sk_receive_queue);
@@ -1679,7 +1688,6 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_pm_local *local,
 	/* discard the subflow socket */
 	mptcp_sock_graft(ssk, sk->sk_socket);
 	iput(SOCK_INODE(sf));
-	WRITE_ONCE(msk->allow_infinite_fallback, false);
 	mptcp_stop_tout_timer(sk);
 	return 0;
 
@@ -1851,7 +1859,7 @@ static void subflow_state_change(struct sock *sk)
 
 	msk = mptcp_sk(parent);
 	if (subflow_simultaneous_connect(sk)) {
-		mptcp_do_fallback(sk);
+		WARN_ON_ONCE(!mptcp_try_fallback(sk));
 		pr_fallback(msk);
 		subflow->conn_finished = 1;
 		mptcp_propagate_state(parent, sk, subflow, NULL);
@@ -1124,6 +1124,12 @@ static int nf_ct_resolve_clash_harder(struct sk_buff *skb, u32 repl_idx)
 
 	hlist_nulls_add_head_rcu(&loser_ct->tuplehash[IP_CT_DIR_REPLY].hnnode,
 				 &nf_conntrack_hash[repl_idx]);
+	/* confirmed bit must be set after hlist add, not before:
+	 * loser_ct can still be visible to other cpu due to
+	 * SLAB_TYPESAFE_BY_RCU.
+	 */
+	smp_mb__before_atomic();
+	set_bit(IPS_CONFIRMED_BIT, &loser_ct->status);
 
 	NF_CT_STAT_INC(net, clash_resolve);
 	return NF_ACCEPT;
@@ -1260,8 +1266,6 @@ __nf_conntrack_confirm(struct sk_buff *skb)
 	 * user context, else we insert an already 'dead' hash, blocking
 	 * further use of that particular connection -JM.
 	 */
-	ct->status |= IPS_CONFIRMED;
-
 	if (unlikely(nf_ct_is_dying(ct))) {
 		NF_CT_STAT_INC(net, insert_failed);
 		goto dying;
@@ -1293,7 +1297,7 @@ chaintoolong:
 		}
 	}
 
-	/* Timer relative to confirmation time, not original
+	/* Timeout is relative to confirmation time, not original
 	   setting time, otherwise we'd get timer wrap in
 	   weird delay cases. */
 	ct->timeout += nfct_time_stamp;
@@ -1301,11 +1305,21 @@ chaintoolong:
 	__nf_conntrack_insert_prepare(ct);
 
 	/* Since the lookup is lockless, hash insertion must be done after
-	 * starting the timer and setting the CONFIRMED bit. The RCU barriers
-	 * guarantee that no other CPU can find the conntrack before the above
-	 * stores are visible.
+	 * setting ct->timeout. The RCU barriers guarantee that no other CPU
+	 * can find the conntrack before the above stores are visible.
 	 */
 	__nf_conntrack_hash_insert(ct, hash, reply_hash);
+
+	/* IPS_CONFIRMED unset means 'ct not (yet) in hash', conntrack lookups
+	 * skip entries that lack this bit. This happens when a CPU is looking
+	 * at a stale entry that is being recycled due to SLAB_TYPESAFE_BY_RCU
+	 * or when another CPU encounters this entry right after the insertion
+	 * but before the set-confirm-bit below. This bit must not be set until
+	 * after __nf_conntrack_hash_insert().
+	 */
+	smp_mb__before_atomic();
+	set_bit(IPS_CONFIRMED_BIT, &ct->status);
+
 	nf_conntrack_double_unlock(hash, reply_hash);
 	local_bh_enable();
@@ -9686,64 +9686,6 @@ struct nf_hook_ops *nft_hook_find_ops_rcu(const struct nft_hook *hook,
 }
 EXPORT_SYMBOL_GPL(nft_hook_find_ops_rcu);
 
-static void
-nf_tables_device_notify(const struct nft_table *table, int attr,
-			const char *name, const struct nft_hook *hook,
-			const struct net_device *dev, int event)
-{
-	struct net *net = dev_net(dev);
-	struct nlmsghdr *nlh;
-	struct sk_buff *skb;
-	u16 flags = 0;
-
-	if (!nfnetlink_has_listeners(net, NFNLGRP_NFT_DEV))
-		return;
-
-	skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
-	if (!skb)
-		goto err;
-
-	event = event == NETDEV_REGISTER ? NFT_MSG_NEWDEV : NFT_MSG_DELDEV;
-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
-	nlh = nfnl_msg_put(skb, 0, 0, event, flags, table->family,
-			   NFNETLINK_V0, nft_base_seq(net));
-	if (!nlh)
-		goto err;
-
-	if (nla_put_string(skb, NFTA_DEVICE_TABLE, table->name) ||
-	    nla_put_string(skb, attr, name) ||
-	    nla_put(skb, NFTA_DEVICE_SPEC, hook->ifnamelen, hook->ifname) ||
-	    nla_put_string(skb, NFTA_DEVICE_NAME, dev->name))
-		goto err;
-
-	nlmsg_end(skb, nlh);
-	nfnetlink_send(skb, net, 0, NFNLGRP_NFT_DEV,
-		       nlmsg_report(nlh), GFP_KERNEL);
-	return;
-err:
-	if (skb)
-		kfree_skb(skb);
-	nfnetlink_set_err(net, 0, NFNLGRP_NFT_DEV, -ENOBUFS);
-}
-
-void
-nf_tables_chain_device_notify(const struct nft_chain *chain,
-			      const struct nft_hook *hook,
-			      const struct net_device *dev, int event)
-{
-	nf_tables_device_notify(chain->table, NFTA_DEVICE_CHAIN,
-				chain->name, hook, dev, event);
-}
-
-static void
-nf_tables_flowtable_device_notify(const struct nft_flowtable *ft,
-				  const struct nft_hook *hook,
-				  const struct net_device *dev, int event)
-{
-	nf_tables_device_notify(ft->table, NFTA_DEVICE_FLOWTABLE,
-				ft->name, hook, dev, event);
-}
-
 static int nft_flowtable_event(unsigned long event, struct net_device *dev,
 			       struct nft_flowtable *flowtable, bool changename)
 {
@@ -9791,7 +9733,6 @@ static int nft_flowtable_event(unsigned long event, struct net_device *dev,
 			list_add_tail_rcu(&ops->list, &hook->ops_list);
 			break;
 		}
-		nf_tables_flowtable_device_notify(flowtable, hook, dev, event);
 		break;
 	}
 	return 0;
@@ -127,6 +127,9 @@ static int nf_trace_fill_ct_info(struct sk_buff *nlskb,
 		if (nla_put_be32(nlskb, NFTA_TRACE_CT_ID, (__force __be32)id))
 			return -1;
 
+		/* Kernel implementation detail, withhold this from userspace for now */
+		status &= ~IPS_NAT_CLASH;
+
 		if (status && nla_put_be32(nlskb, NFTA_TRACE_CT_STATUS, htonl(status)))
 			return -1;
 	}
@@ -86,7 +86,6 @@ static const int nfnl_group2type[NFNLGRP_MAX+1] = {
 	[NFNLGRP_NFTABLES]	= NFNL_SUBSYS_NFTABLES,
 	[NFNLGRP_ACCT_QUOTA]	= NFNL_SUBSYS_ACCT,
 	[NFNLGRP_NFTRACE]	= NFNL_SUBSYS_NFTABLES,
-	[NFNLGRP_NFT_DEV]	= NFNL_SUBSYS_NFTABLES,
 };
 
 static struct nfnl_net *nfnl_pernet(struct net *net)
@@ -363,8 +363,6 @@ static int nft_netdev_event(unsigned long event, struct net_device *dev,
 			list_add_tail_rcu(&ops->list, &hook->ops_list);
 			break;
 		}
-		nf_tables_chain_device_notify(&basechain->chain,
-					      hook, dev, event);
 		break;
 	}
 	return 0;
@@ -2785,7 +2785,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
 	int len_sum = 0;
 	int status = TP_STATUS_AVAILABLE;
 	int hlen, tlen, copylen = 0;
-	long timeo = 0;
+	long timeo;
 
 	mutex_lock(&po->pg_vec_lock);
@@ -2839,22 +2839,28 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
 	if ((size_max > dev->mtu + reserve + VLAN_HLEN) && !vnet_hdr_sz)
 		size_max = dev->mtu + reserve + VLAN_HLEN;
 
+	timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
 	reinit_completion(&po->skb_completion);
 
 	do {
 		ph = packet_current_frame(po, &po->tx_ring,
 					  TP_STATUS_SEND_REQUEST);
 		if (unlikely(ph == NULL)) {
-			if (need_wait && skb) {
-				timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
+			/* Note: packet_read_pending() might be slow if we
+			 * have to call it as it's per_cpu variable, but in
+			 * fast-path we don't have to call it, only when ph
+			 * is NULL, we need to check the pending_refcnt.
+			 */
+			if (need_wait && packet_read_pending(&po->tx_ring)) {
 				timeo = wait_for_completion_interruptible_timeout(&po->skb_completion, timeo);
 				if (timeo <= 0) {
 					err = !timeo ? -ETIMEDOUT : -ERESTARTSYS;
 					goto out_put;
 				}
-			}
-			/* check for additional frames */
-			continue;
+				/* check for additional frames */
+				continue;
+			} else
+				break;
 		}
 
 		skb = NULL;
@@ -2943,14 +2949,7 @@ tpacket_error:
 		}
 		packet_increment_head(&po->tx_ring);
 		len_sum += tp_len;
-	} while (likely((ph != NULL) ||
-		 /* Note: packet_read_pending() might be slow if we have
-		  * to call it as it's per_cpu variable, but in fast-path
-		  * we already short-circuit the loop with the first
-		  * condition, and luckily don't have to go that path
-		  * anyway.
-		  */
-		 (need_wait && packet_read_pending(&po->tx_ring))));
+	} while (1);
 
 	err = len_sum;
 	goto out_put;
@@ -826,6 +826,7 @@ static struct sock *pep_sock_accept(struct sock *sk,
 	}
 
 	/* Check for duplicate pipe handle */
+	pn_skb_get_dst_sockaddr(skb, &dst);
 	newsk = pep_find_pipe(&pn->hlist, &dst, pipe_handle);
 	if (unlikely(newsk)) {
 		__sock_put(newsk);
@@ -850,7 +851,6 @@ static struct sock *pep_sock_accept(struct sock *sk,
 	newsk->sk_destruct = pipe_destruct;
 
 	newpn = pep_sk(newsk);
-	pn_skb_get_dst_sockaddr(skb, &dst);
 	pn_skb_get_src_sockaddr(skb, &src);
 	newpn->pn_sk.sobject = pn_sockaddr_get_object(&dst);
 	newpn->pn_sk.dobject = pn_sockaddr_get_object(&src);
@@ -44,6 +44,7 @@ enum rxrpc_skb_mark {
 	RXRPC_SKB_MARK_SERVICE_CONN_SECURED, /* Service connection response has been verified */
 	RXRPC_SKB_MARK_REJECT_BUSY,	/* Reject with BUSY */
 	RXRPC_SKB_MARK_REJECT_ABORT,	/* Reject with ABORT (code in skb->priority) */
+	RXRPC_SKB_MARK_REJECT_CONN_ABORT, /* Reject with connection ABORT (code in skb->priority) */
 };
 
 /*
@@ -1253,6 +1254,8 @@ int rxrpc_encap_rcv(struct sock *, struct sk_buff *);
 void rxrpc_error_report(struct sock *);
 bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
 			s32 abort_code, int err);
+bool rxrpc_direct_conn_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+			     s32 abort_code, int err);
 int rxrpc_io_thread(void *data);
 void rxrpc_post_response(struct rxrpc_connection *conn, struct sk_buff *skb);
 static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
@@ -1383,6 +1386,7 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *,
 					 const struct sockaddr_rxrpc *);
 struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
 				     struct sockaddr_rxrpc *srx, gfp_t gfp);
+void rxrpc_assess_MTU_size(struct rxrpc_local *local, struct rxrpc_peer *peer);
 struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t,
 				    enum rxrpc_peer_trace);
 void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer);
@@ -219,6 +219,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
 	tail = b->call_backlog_tail;
 	while (CIRC_CNT(head, tail, size) > 0) {
 		struct rxrpc_call *call = b->call_backlog[tail];
+		rxrpc_see_call(call, rxrpc_call_see_discard);
 		rcu_assign_pointer(call->socket, rx);
 		if (rx->app_ops &&
 		    rx->app_ops->discard_new_call) {
@@ -373,8 +374,8 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
 	spin_lock(&rx->incoming_lock);
 	if (rx->sk.sk_state == RXRPC_SERVER_LISTEN_DISABLED ||
 	    rx->sk.sk_state == RXRPC_CLOSE) {
-		rxrpc_direct_abort(skb, rxrpc_abort_shut_down,
-				   RX_INVALID_OPERATION, -ESHUTDOWN);
+		rxrpc_direct_conn_abort(skb, rxrpc_abort_shut_down,
+					RX_INVALID_OPERATION, -ESHUTDOWN);
 		goto no_call;
 	}
 
@@ -406,6 +407,7 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
 
 	spin_unlock(&rx->incoming_lock);
 	read_unlock_irq(&local->services_lock);
+	rxrpc_assess_MTU_size(local, call->peer);
 
 	if (hlist_unhashed(&call->error_link)) {
 		spin_lock_irq(&call->peer->lock);
@@ -420,12 +422,12 @@ bool rxrpc_new_incoming_call(struct rxrpc_local *local,
 
 unsupported_service:
 	read_unlock_irq(&local->services_lock);
-	return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
-				  RX_INVALID_OPERATION, -EOPNOTSUPP);
+	return rxrpc_direct_conn_abort(skb, rxrpc_abort_service_not_offered,
+				       RX_INVALID_OPERATION, -EOPNOTSUPP);
 unsupported_security:
 	read_unlock_irq(&local->services_lock);
-	return rxrpc_direct_abort(skb, rxrpc_abort_service_not_offered,
-				  RX_INVALID_OPERATION, -EKEYREJECTED);
+	return rxrpc_direct_conn_abort(skb, rxrpc_abort_service_not_offered,
+				       RX_INVALID_OPERATION, -EKEYREJECTED);
 no_call:
 	spin_unlock(&rx->incoming_lock);
 	read_unlock_irq(&local->services_lock);
|||
|
|
@ -561,7 +561,7 @@ static void rxrpc_cleanup_rx_buffers(struct rxrpc_call *call)
|
|||
void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
|
||||
{
|
||||
struct rxrpc_connection *conn = call->conn;
|
||||
bool put = false, putu = false;
|
||||
bool putu = false;
|
||||
|
||||
_enter("{%d,%d}", call->debug_id, refcount_read(&call->ref));
|
||||
|
||||
|
|
@@ -573,23 +573,13 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
 
 	rxrpc_put_call_slot(call);
 
-	/* Make sure we don't get any more notifications */
+	/* Note that at this point, the call may still be on or may have been
+	 * added back on to the socket receive queue. recvmsg() must discard
+	 * released calls. The CALL_RELEASED flag should prevent further
+	 * notifications.
+	 */
 	spin_lock_irq(&rx->recvmsg_lock);
-
-	if (!list_empty(&call->recvmsg_link)) {
-		_debug("unlinking once-pending call %p { e=%lx f=%lx }",
-		       call, call->events, call->flags);
-		list_del(&call->recvmsg_link);
-		put = true;
-	}
-
-	/* list_empty() must return false in rxrpc_notify_socket() */
-	call->recvmsg_link.next = NULL;
-	call->recvmsg_link.prev = NULL;
-
 	spin_unlock_irq(&rx->recvmsg_lock);
-	if (put)
-		rxrpc_put_call(call, rxrpc_call_put_unnotify);
 
 	write_lock(&rx->call_lock);
@@ -638,6 +628,12 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
 		rxrpc_put_call(call, rxrpc_call_put_release_sock);
 	}
 
+	while ((call = list_first_entry_or_null(&rx->recvmsg_q,
+						struct rxrpc_call, recvmsg_link))) {
+		list_del_init(&call->recvmsg_link);
+		rxrpc_put_call(call, rxrpc_call_put_release_recvmsg_q);
+	}
+
 	_leave("");
 }
@@ -97,6 +97,20 @@ bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
 	return false;
 }
 
+/*
+ * Directly produce a connection abort from a packet.
+ */
+bool rxrpc_direct_conn_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
+			     s32 abort_code, int err)
+{
+	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+
+	trace_rxrpc_abort(0, why, sp->hdr.cid, 0, sp->hdr.seq, abort_code, err);
+	skb->mark = RXRPC_SKB_MARK_REJECT_CONN_ABORT;
+	skb->priority = abort_code;
+	return false;
+}
+
 static bool rxrpc_bad_message(struct sk_buff *skb, enum rxrpc_abort_reason why)
 {
 	return rxrpc_direct_abort(skb, why, RX_PROTOCOL_ERROR, -EBADMSG);
@@ -814,6 +814,9 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
 	__be32 code;
 	int ret, ioc;
 
+	if (sp->hdr.type == RXRPC_PACKET_TYPE_ABORT)
+		return; /* Never abort an abort. */
+
 	rxrpc_see_skb(skb, rxrpc_skb_see_reject);
 
 	iov[0].iov_base = &whdr;
@@ -826,7 +829,13 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
 	msg.msg_controllen = 0;
 	msg.msg_flags	= 0;
 
-	memset(&whdr, 0, sizeof(whdr));
+	whdr = (struct rxrpc_wire_header) {
+		.epoch		= htonl(sp->hdr.epoch),
+		.cid		= htonl(sp->hdr.cid),
+		.callNumber	= htonl(sp->hdr.callNumber),
+		.serviceId	= htons(sp->hdr.serviceId),
+		.flags		= ~sp->hdr.flags & RXRPC_CLIENT_INITIATED,
+	};
 
 	switch (skb->mark) {
 	case RXRPC_SKB_MARK_REJECT_BUSY:
@@ -834,6 +843,9 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
 		size = sizeof(whdr);
 		ioc = 1;
 		break;
+	case RXRPC_SKB_MARK_REJECT_CONN_ABORT:
+		whdr.callNumber = 0;
+		fallthrough;
 	case RXRPC_SKB_MARK_REJECT_ABORT:
 		whdr.type = RXRPC_PACKET_TYPE_ABORT;
 		code = htonl(skb->priority);
@@ -847,14 +859,6 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
 	if (rxrpc_extract_addr_from_skb(&srx, skb) == 0) {
 		msg.msg_namelen = srx.transport_len;
 
-		whdr.epoch	= htonl(sp->hdr.epoch);
-		whdr.cid	= htonl(sp->hdr.cid);
-		whdr.callNumber	= htonl(sp->hdr.callNumber);
-		whdr.serviceId	= htons(sp->hdr.serviceId);
-		whdr.flags	= sp->hdr.flags;
-		whdr.flags	^= RXRPC_CLIENT_INITIATED;
-		whdr.flags	&= RXRPC_CLIENT_INITIATED;
-
 		iov_iter_kvec(&msg.msg_iter, WRITE, iov, ioc, size);
 		ret = do_udp_sendmsg(local->socket, &msg, size);
 		if (ret < 0)
@@ -149,8 +149,7 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *local,
 * assess the MTU size for the network interface through which this peer is
 * reached
 */
-static void rxrpc_assess_MTU_size(struct rxrpc_local *local,
-				  struct rxrpc_peer *peer)
+void rxrpc_assess_MTU_size(struct rxrpc_local *local, struct rxrpc_peer *peer)
 {
 	struct net *net = local->net;
 	struct dst_entry *dst;
@@ -277,8 +276,6 @@ static void rxrpc_init_peer(struct rxrpc_local *local, struct rxrpc_peer *peer,
 
 	peer->hdrsize += sizeof(struct rxrpc_wire_header);
 	peer->max_data = peer->if_mtu - peer->hdrsize;
-
-	rxrpc_assess_MTU_size(local, peer);
 }
 
 /*
@@ -297,6 +294,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_local *local,
 	if (peer) {
 		memcpy(&peer->srx, srx, sizeof(*srx));
 		rxrpc_init_peer(local, peer, hash_key);
+		rxrpc_assess_MTU_size(local, peer);
 	}
 
 	_leave(" = %p", peer);
@@ -29,6 +29,10 @@ void rxrpc_notify_socket(struct rxrpc_call *call)
 
 	if (!list_empty(&call->recvmsg_link))
 		return;
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_notify_released);
+		return;
+	}
 
 	rcu_read_lock();
@@ -447,6 +451,16 @@ try_again:
 		goto try_again;
 	}
 
+	rxrpc_see_call(call, rxrpc_call_see_recvmsg);
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_already_released);
+		list_del_init(&call->recvmsg_link);
+		spin_unlock_irq(&rx->recvmsg_lock);
+		release_sock(&rx->sk);
+		trace_rxrpc_recvmsg(call->debug_id, rxrpc_recvmsg_unqueue, 0);
+		rxrpc_put_call(call, rxrpc_call_put_recvmsg);
+		goto try_again;
+	}
 	if (!(flags & MSG_PEEK))
 		list_del_init(&call->recvmsg_link);
 	else
@@ -470,8 +484,13 @@ try_again:
 
 	release_sock(&rx->sk);
 
-	if (test_bit(RXRPC_CALL_RELEASED, &call->flags))
-		BUG();
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_already_released);
+		mutex_unlock(&call->user_mutex);
+		if (!(flags & MSG_PEEK))
+			rxrpc_put_call(call, rxrpc_call_put_recvmsg);
+		goto try_again;
+	}
 
 	ret = rxrpc_recvmsg_user_id(call, msg, flags);
 	if (ret < 0)
@@ -140,15 +140,15 @@ const struct rxrpc_security *rxrpc_get_incoming_security(struct rxrpc_sock *rx,
 
 	sec = rxrpc_security_lookup(sp->hdr.securityIndex);
 	if (!sec) {
-		rxrpc_direct_abort(skb, rxrpc_abort_unsupported_security,
-				   RX_INVALID_OPERATION, -EKEYREJECTED);
+		rxrpc_direct_conn_abort(skb, rxrpc_abort_unsupported_security,
+					RX_INVALID_OPERATION, -EKEYREJECTED);
 		return NULL;
 	}
 
 	if (sp->hdr.securityIndex != RXRPC_SECURITY_NONE &&
 	    !rx->securities) {
-		rxrpc_direct_abort(skb, rxrpc_abort_no_service_key,
-				   sec->no_key_abort, -EKEYREJECTED);
+		rxrpc_direct_conn_abort(skb, rxrpc_abort_no_service_key,
+					sec->no_key_abort, -EKEYREJECTED);
 		return NULL;
 	}
 
@@ -821,7 +821,9 @@ static struct htb_class *htb_lookup_leaf(struct htb_prio *hprio, const int prio)
 		u32 *pid;
 	} stk[TC_HTB_MAXDEPTH], *sp = stk;
 
-	BUG_ON(!hprio->row.rb_node);
+	if (unlikely(!hprio->row.rb_node))
+		return NULL;
+
 	sp->root = hprio->row.rb_node;
 	sp->pptr = &hprio->ptr;
 	sp->pid  = &hprio->last_ptr_id;
Some files were not shown because too many files have changed in this diff.