From: David Bauer
Date: Thu, 29 Jan 2026 23:23:20 +0000 (+0100)
Subject: wifi: mt76: don't return TXQ when exceeding max non-AQL packets
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=964f870e090e9c88a41e2890333421204cc0bdf4;p=thirdparty%2Flinux.git

wifi: mt76: don't return TXQ when exceeding max non-AQL packets

mt76_txq_send_burst checks whether the number of non-AQL frames exceeds
the maximum. In this case, the queue is handed back via
ieee80211_return_txq when iterating over the scheduled TXQs in
mt76_txq_schedule_list. This has the effect of inserting said TXQ at
the head of the list, so the loop fetches the same TXQ again, which
terminates the scheduling round. TXQs following in the list thus never
get scheduled for transmission. This can manifest as high latency, low
throughput, or broken connections for the affected STAs.

Check whether the non-AQL packet count exceeds the limit and do not
return the TXQ in this case. Reschedule all TXQs for the STA once the
non-AQL limit can be satisfied again.
Signed-off-by: David Bauer
Link: https://patch.msgid.link/20260129232321.276575-1-mail@david-bauer.net
Signed-off-by: Felix Fietkau
---

diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
index 9ec6d0b53a84a..0753acf2eccb8 100644
--- a/drivers/net/wireless/mediatek/mt76/tx.c
+++ b/drivers/net/wireless/mediatek/mt76/tx.c
@@ -227,7 +227,9 @@ mt76_tx_check_non_aql(struct mt76_dev *dev, struct mt76_wcid *wcid,
 		      struct sk_buff *skb)
 {
 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+	struct ieee80211_sta *sta;
 	int pending;
+	int i;
 
 	if (!wcid || info->tx_time_est)
 		return;
@@ -235,6 +237,17 @@ mt76_tx_check_non_aql(struct mt76_dev *dev, struct mt76_wcid *wcid,
 	pending = atomic_dec_return(&wcid->non_aql_packets);
 	if (pending < 0)
 		atomic_cmpxchg(&wcid->non_aql_packets, pending, 0);
+
+	sta = wcid_to_sta(wcid);
+	if (!sta || pending != MT_MAX_NON_AQL_PKT - 1)
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(sta->txq); i++) {
+		if (!sta->txq[i])
+			continue;
+
+		ieee80211_schedule_txq(dev->hw, sta->txq[i]);
+	}
 }
 
 void __mt76_tx_complete_skb(struct mt76_dev *dev, u16 wcid_idx, struct sk_buff *skb,
@@ -542,6 +555,9 @@ mt76_txq_schedule_list(struct mt76_phy *phy, enum mt76_txq_id qid)
 		if (!wcid || test_bit(MT_WCID_FLAG_PS, &wcid->flags))
 			continue;
 
+		if (atomic_read(&wcid->non_aql_packets) >= MT_MAX_NON_AQL_PKT)
+			continue;
+
 		phy = mt76_dev_phy(dev, wcid->phy_idx);
 		if (test_bit(MT76_RESET, &phy->state) || phy->offchannel)
 			continue;