From: Sasha Levin Date: Sat, 31 Aug 2024 23:13:51 +0000 (-0400) Subject: Fixes for 5.15 X-Git-Tag: v4.19.321~33 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=63b8f95ec9af0c2cf16102f320d7c4878c3bc5f5;p=thirdparty%2Fkernel%2Fstable-queue.git Fixes for 5.15 Signed-off-by: Sasha Levin --- diff --git a/queue-5.15/dmaengine-dw-add-memory-bus-width-verification.patch b/queue-5.15/dmaengine-dw-add-memory-bus-width-verification.patch new file mode 100644 index 00000000000..0a81cf46866 --- /dev/null +++ b/queue-5.15/dmaengine-dw-add-memory-bus-width-verification.patch @@ -0,0 +1,186 @@ +From 26325d8bcd65485e29e285f732ba2a35d38c464e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 2 Aug 2024 10:50:47 +0300 +Subject: dmaengine: dw: Add memory bus width verification + +From: Serge Semin + +[ Upstream commit d04b21bfa1c50a2ade4816cab6fdc91827b346b1 ] + +Currently in case of the DEV_TO_MEM or MEM_TO_DEV DMA transfers the memory +data width (single transfer width) is determined based on the buffer +length, buffer base address or DMA master-channel max address width +capability. It isn't enough in case of the channel disabling prior the +block transfer is finished. Here is what DW AHB DMA IP-core databook says +regarding the port suspension (DMA-transfer pause) implementation in the +controller: + +"When CTLx.SRC_TR_WIDTH < CTLx.DST_TR_WIDTH and the CFGx.CH_SUSP bit is +high, the CFGx.FIFO_EMPTY is asserted once the contents of the FIFO do not +permit a single word of CTLx.DST_TR_WIDTH to be formed. However, there may +still be data in the channel FIFO, but not enough to form a single +transfer of CTLx.DST_TR_WIDTH. In this scenario, once the channel is +disabled, the remaining data in the channel FIFO is not transferred to the +destination peripheral." + +So in case if the port gets to be suspended and then disabled it's +possible to have the data silently discarded even though the controller +reported that FIFO is empty and the CTLx.BLOCK_TS indicated the dropped +data already received from the source device. This looks as if the data +somehow got lost on a way from the peripheral device to memory and causes +problems for instance in the DW APB UART driver, which pauses and disables +the DMA-transfer as soon as the recv data timeout happens. Here is the way +it looks: + + Memory <------- DMA FIFO <------ UART FIFO <---------------- UART + DST_TR_WIDTH -+--------| | | + | | | | No more data + Current lvl -+--------| |---------+- DMA-burst lvl + | | |---------+- Leftover data + | | |---------+- SRC_TR_WIDTH + -+--------+-------+---------+ + +In the example above: no more data is getting received over the UART port +and BLOCK_TS is not even close to be fully received; some data is left in +the UART FIFO, but not enough to perform a bursted DMA-xfer to the DMA +FIFO; some data is left in the DMA FIFO, but not enough to be passed +further to the system memory in a single transfer. In this situation the +8250 UART driver catches the recv timeout interrupt, pauses the +DMA-transfer and terminates it completely, after which the IRQ handler +manually fetches the leftover data from the UART FIFO into the +recv-buffer. But since the DMA-channel has been disabled with the data +left in the DMA FIFO, that data will be just discarded and the recv-buffer +will have a gap of the "current lvl" size in the recv-buffer at the tail +of the lately received data portion. So the data will be lost just due to +the misconfigured DMA transfer. 
+ +Note this is only relevant for the case of the transfer suspension and +_disabling_. No problem will happen if the transfer will be re-enabled +afterwards or the block transfer is fully completed. In the later case the +"FIFO flush mode" will be executed at the transfer final stage in order to +push out the data left in the DMA FIFO. + +In order to fix the denoted problem the DW AHB DMA-engine driver needs to +make sure that the _bursted_ source transfer width is greater or equal to +the single destination transfer (note the HW databook describes more +strict constraint than actually required). Since the peripheral-device +side is prescribed by the client driver logic, the memory-side can be only +used for that. The solution can be easily implemented for the DEV_TO_MEM +transfers just by adjusting the memory-channel address width. Sadly it's +not that easy for the MEM_TO_DEV transfers since the mem-to-dma burst size +is normally dynamically determined by the controller. So the only thing +that can be done is to make sure that memory-side address width is greater +than the peripheral device address width. + +Fixes: a09820043c9e ("dw_dmac: autoconfigure data_width or get it via platform data") +Signed-off-by: Serge Semin +Acked-by: Andy Shevchenko +Link: https://lore.kernel.org/r/20240802075100.6475-3-fancer.lancer@gmail.com +Signed-off-by: Vinod Koul +Signed-off-by: Sasha Levin +--- + drivers/dma/dw/core.c | 51 +++++++++++++++++++++++++++++++++++++------ + 1 file changed, 44 insertions(+), 7 deletions(-) + +diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c +index 128c194d65b6d..0beafcee72673 100644 +--- a/drivers/dma/dw/core.c ++++ b/drivers/dma/dw/core.c +@@ -625,12 +625,10 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, + struct dw_desc *prev; + struct dw_desc *first; + u32 ctllo, ctlhi; +- u8 m_master = dwc->dws.m_master; +- u8 lms = DWC_LLP_LMS(m_master); ++ u8 lms = DWC_LLP_LMS(dwc->dws.m_master); + dma_addr_t reg; + unsigned int reg_width; + unsigned int mem_width; +- unsigned int data_width = dw->pdata->data_width[m_master]; + unsigned int i; + struct scatterlist *sg; + size_t total_len = 0; +@@ -664,7 +662,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, + mem = sg_dma_address(sg); + len = sg_dma_len(sg); + +- mem_width = __ffs(data_width | mem | len); ++ mem_width = __ffs(sconfig->src_addr_width | mem | len); + + slave_sg_todev_fill_desc: + desc = dwc_desc_get(dwc); +@@ -724,7 +722,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, + lli_write(desc, sar, reg); + lli_write(desc, dar, mem); + lli_write(desc, ctlhi, ctlhi); +- mem_width = __ffs(data_width | mem); ++ mem_width = __ffs(sconfig->dst_addr_width | mem); + lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width)); + desc->len = dlen; + +@@ -816,6 +814,41 @@ static int dwc_verify_p_buswidth(struct dma_chan *chan) + return 0; + } + ++static int dwc_verify_m_buswidth(struct dma_chan *chan) ++{ ++ struct dw_dma_chan *dwc = to_dw_dma_chan(chan); ++ struct dw_dma *dw = to_dw_dma(chan->device); ++ u32 reg_width, reg_burst, mem_width; ++ ++ mem_width = dw->pdata->data_width[dwc->dws.m_master]; ++ ++ /* ++ * It's possible to have a data portion locked in the DMA FIFO in case ++ * of the channel suspension. Subsequent channel disabling will cause ++ * that data silent loss. 
In order to prevent that maintain the src and ++ * dst transfer widths coherency by means of the relation: ++ * (CTLx.SRC_TR_WIDTH * CTLx.SRC_MSIZE >= CTLx.DST_TR_WIDTH) ++ * Look for the details in the commit message that brings this change. ++ * ++ * Note the DMA configs utilized in the calculations below must have ++ * been verified to have correct values by this method call. ++ */ ++ if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV) { ++ reg_width = dwc->dma_sconfig.dst_addr_width; ++ if (mem_width < reg_width) ++ return -EINVAL; ++ ++ dwc->dma_sconfig.src_addr_width = mem_width; ++ } else if (dwc->dma_sconfig.direction == DMA_DEV_TO_MEM) { ++ reg_width = dwc->dma_sconfig.src_addr_width; ++ reg_burst = rounddown_pow_of_two(dwc->dma_sconfig.src_maxburst); ++ ++ dwc->dma_sconfig.dst_addr_width = min(mem_width, reg_width * reg_burst); ++ } ++ ++ return 0; ++} ++ + static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig) + { + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); +@@ -825,14 +858,18 @@ static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig) + memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig)); + + dwc->dma_sconfig.src_maxburst = +- clamp(dwc->dma_sconfig.src_maxburst, 0U, dwc->max_burst); ++ clamp(dwc->dma_sconfig.src_maxburst, 1U, dwc->max_burst); + dwc->dma_sconfig.dst_maxburst = +- clamp(dwc->dma_sconfig.dst_maxburst, 0U, dwc->max_burst); ++ clamp(dwc->dma_sconfig.dst_maxburst, 1U, dwc->max_burst); + + ret = dwc_verify_p_buswidth(chan); + if (ret) + return ret; + ++ ret = dwc_verify_m_buswidth(chan); ++ if (ret) ++ return ret; ++ + dw->encode_maxburst(dwc, &dwc->dma_sconfig.src_maxburst); + dw->encode_maxburst(dwc, &dwc->dma_sconfig.dst_maxburst); + +-- +2.43.0 + diff --git a/queue-5.15/dmaengine-dw-add-peripheral-bus-width-verification.patch b/queue-5.15/dmaengine-dw-add-peripheral-bus-width-verification.patch new file mode 100644 index 00000000000..131fd70d460 --- /dev/null +++ b/queue-5.15/dmaengine-dw-add-peripheral-bus-width-verification.patch @@ -0,0 +1,112 @@ +From cdbfafbb4160d91a70466e88aa26e7612ba19e01 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 2 Aug 2024 10:50:46 +0300 +Subject: dmaengine: dw: Add peripheral bus width verification + +From: Serge Semin + +[ Upstream commit b336268dde75cb09bd795cb24893d52152a9191f ] + +Currently the src_addr_width and dst_addr_width fields of the +dma_slave_config structure are mapped to the CTLx.SRC_TR_WIDTH and +CTLx.DST_TR_WIDTH fields of the peripheral bus side in order to have the +properly aligned data passed to the target device. It's done just by +converting the passed peripheral bus width to the encoded value using the +__ffs() function. This implementation has several problematic sides: + +1. __ffs() is undefined if no bit exist in the passed value. Thus if the +specified addr-width is DMA_SLAVE_BUSWIDTH_UNDEFINED, __ffs() may return +unexpected value depending on the platform-specific implementation. + +2. DW AHB DMA-engine permits having the power-of-2 transfer width limited +by the DMAH_Mk_HDATA_WIDTH IP-core synthesize parameter. Specifying +bus-width out of that constraints scope will definitely cause unexpected +result since the destination reg will be only partly touched than the +client driver implied. + +Let's fix all of that by adding the peripheral bus width verification +method and calling it in dwc_config() which is supposed to be executed +before preparing any transfer. 
The new method will make sure that the +passed source or destination address width is valid and if undefined then +the driver will just fallback to the 1-byte width transfer. + +Fixes: 029a40e97d0d ("dmaengine: dw: provide DMA capabilities") +Signed-off-by: Serge Semin +Acked-by: Andy Shevchenko +Link: https://lore.kernel.org/r/20240802075100.6475-2-fancer.lancer@gmail.com +Signed-off-by: Vinod Koul +Signed-off-by: Sasha Levin +--- + drivers/dma/dw/core.c | 38 ++++++++++++++++++++++++++++++++++++++ + 1 file changed, 38 insertions(+) + +diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c +index 7ab83fe601ede..128c194d65b6d 100644 +--- a/drivers/dma/dw/core.c ++++ b/drivers/dma/dw/core.c +@@ -16,6 +16,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -783,10 +784,43 @@ bool dw_dma_filter(struct dma_chan *chan, void *param) + } + EXPORT_SYMBOL_GPL(dw_dma_filter); + ++static int dwc_verify_p_buswidth(struct dma_chan *chan) ++{ ++ struct dw_dma_chan *dwc = to_dw_dma_chan(chan); ++ struct dw_dma *dw = to_dw_dma(chan->device); ++ u32 reg_width, max_width; ++ ++ if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV) ++ reg_width = dwc->dma_sconfig.dst_addr_width; ++ else if (dwc->dma_sconfig.direction == DMA_DEV_TO_MEM) ++ reg_width = dwc->dma_sconfig.src_addr_width; ++ else /* DMA_MEM_TO_MEM */ ++ return 0; ++ ++ max_width = dw->pdata->data_width[dwc->dws.p_master]; ++ ++ /* Fall-back to 1-byte transfer width if undefined */ ++ if (reg_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) ++ reg_width = DMA_SLAVE_BUSWIDTH_1_BYTE; ++ else if (!is_power_of_2(reg_width) || reg_width > max_width) ++ return -EINVAL; ++ else /* bus width is valid */ ++ return 0; ++ ++ /* Update undefined addr width value */ ++ if (dwc->dma_sconfig.direction == DMA_MEM_TO_DEV) ++ dwc->dma_sconfig.dst_addr_width = reg_width; ++ else /* DMA_DEV_TO_MEM */ ++ dwc->dma_sconfig.src_addr_width = reg_width; ++ ++ return 0; ++} ++ + static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig) + { + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + struct dw_dma *dw = to_dw_dma(chan->device); ++ int ret; + + memcpy(&dwc->dma_sconfig, sconfig, sizeof(*sconfig)); + +@@ -795,6 +829,10 @@ static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig) + dwc->dma_sconfig.dst_maxburst = + clamp(dwc->dma_sconfig.dst_maxburst, 0U, dwc->max_burst); + ++ ret = dwc_verify_p_buswidth(chan); ++ if (ret) ++ return ret; ++ + dw->encode_maxburst(dwc, &dwc->dma_sconfig.src_maxburst); + dw->encode_maxburst(dwc, &dwc->dma_sconfig.dst_maxburst); + +-- +2.43.0 + diff --git a/queue-5.15/ethtool-check-device-is-present-when-getting-link-se.patch b/queue-5.15/ethtool-check-device-is-present-when-getting-link-se.patch new file mode 100644 index 00000000000..4ada678e3a6 --- /dev/null +++ b/queue-5.15/ethtool-check-device-is-present-when-getting-link-se.patch @@ -0,0 +1,79 @@ +From fbda198d1f30458e04de3e4d0182c2d165b26bdf Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 23 Aug 2024 16:26:58 +1000 +Subject: ethtool: check device is present when getting link settings + +From: Jamie Bainbridge + +[ Upstream commit a699781c79ecf6cfe67fb00a0331b4088c7c8466 ] + +A sysfs reader can race with a device reset or removal, attempting to +read device state when the device is not actually present. 
eg: + + [exception RIP: qed_get_current_link+17] + #8 [ffffb9e4f2907c48] qede_get_link_ksettings at ffffffffc07a994a [qede] + #9 [ffffb9e4f2907cd8] __rh_call_get_link_ksettings at ffffffff992b01a3 + #10 [ffffb9e4f2907d38] __ethtool_get_link_ksettings at ffffffff992b04e4 + #11 [ffffb9e4f2907d90] duplex_show at ffffffff99260300 + #12 [ffffb9e4f2907e38] dev_attr_show at ffffffff9905a01c + #13 [ffffb9e4f2907e50] sysfs_kf_seq_show at ffffffff98e0145b + #14 [ffffb9e4f2907e68] seq_read at ffffffff98d902e3 + #15 [ffffb9e4f2907ec8] vfs_read at ffffffff98d657d1 + #16 [ffffb9e4f2907f00] ksys_read at ffffffff98d65c3f + #17 [ffffb9e4f2907f38] do_syscall_64 at ffffffff98a052fb + + crash> struct net_device.state ffff9a9d21336000 + state = 5, + +state 5 is __LINK_STATE_START (0b1) and __LINK_STATE_NOCARRIER (0b100). +The device is not present, note lack of __LINK_STATE_PRESENT (0b10). + +This is the same sort of panic as observed in commit 4224cfd7fb65 +("net-sysfs: add check for netdevice being present to speed_show"). + +There are many other callers of __ethtool_get_link_ksettings() which +don't have a device presence check. + +Move this check into ethtool to protect all callers. + +Fixes: d519e17e2d01 ("net: export device speed and duplex via sysfs") +Fixes: 4224cfd7fb65 ("net-sysfs: add check for netdevice being present to speed_show") +Signed-off-by: Jamie Bainbridge +Link: https://patch.msgid.link/8bae218864beaa44ed01628140475b9bf641c5b0.1724393671.git.jamie.bainbridge@gmail.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/core/net-sysfs.c | 2 +- + net/ethtool/ioctl.c | 3 +++ + 2 files changed, 4 insertions(+), 1 deletion(-) + +diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c +index e9ea0695efb42..173ea92124f8c 100644 +--- a/net/core/net-sysfs.c ++++ b/net/core/net-sysfs.c +@@ -214,7 +214,7 @@ static ssize_t speed_show(struct device *dev, + if (!rtnl_trylock()) + return restart_syscall(); + +- if (netif_running(netdev) && netif_device_present(netdev)) { ++ if (netif_running(netdev)) { + struct ethtool_link_ksettings cmd; + + if (!__ethtool_get_link_ksettings(netdev, &cmd)) +diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c +index 53e2ef6ada8f3..1e9e70a633d1c 100644 +--- a/net/ethtool/ioctl.c ++++ b/net/ethtool/ioctl.c +@@ -433,6 +433,9 @@ int __ethtool_get_link_ksettings(struct net_device *dev, + if (!dev->ethtool_ops->get_link_ksettings) + return -EOPNOTSUPP; + ++ if (!netif_device_present(dev)) ++ return -ENODEV; ++ + memset(link_ksettings, 0, sizeof(*link_ksettings)); + return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings); + } +-- +2.43.0 + diff --git a/queue-5.15/gtp-fix-a-potential-null-pointer-dereference.patch b/queue-5.15/gtp-fix-a-potential-null-pointer-dereference.patch new file mode 100644 index 00000000000..c5b9ef01c4d --- /dev/null +++ b/queue-5.15/gtp-fix-a-potential-null-pointer-dereference.patch @@ -0,0 +1,47 @@ +From e9d816b2799e2b72e050fed6616fcbfa98d1ddbc Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 25 Aug 2024 12:16:38 -0700 +Subject: gtp: fix a potential NULL pointer dereference + +From: Cong Wang + +[ Upstream commit defd8b3c37b0f9cb3e0f60f47d3d78d459d57fda ] + +When sockfd_lookup() fails, gtp_encap_enable_socket() returns a +NULL pointer, but its callers only check for error pointers thus miss +the NULL pointer case. + +Fix it by returning an error pointer with the error code carried from +sockfd_lookup(). + +(I found this bug during code inspection.) 
+ +Fixes: 1e3a3abd8b28 ("gtp: make GTP sockets in gtp_newlink optional") +Cc: Andreas Schultz +Cc: Harald Welte +Signed-off-by: Cong Wang +Reviewed-by: Simon Horman +Reviewed-by: Pablo Neira Ayuso +Link: https://patch.msgid.link/20240825191638.146748-1-xiyou.wangcong@gmail.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/gtp.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c +index 3bc9149e23a7c..40c94df382e54 100644 +--- a/drivers/net/gtp.c ++++ b/drivers/net/gtp.c +@@ -817,7 +817,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type, + sock = sockfd_lookup(fd, &err); + if (!sock) { + pr_debug("gtp socket fd=%d not found\n", fd); +- return NULL; ++ return ERR_PTR(err); + } + + sk = sock->sk; +-- +2.43.0 + diff --git a/queue-5.15/net-busy-poll-use-ktime_get_ns-instead-of-local_cloc.patch b/queue-5.15/net-busy-poll-use-ktime_get_ns-instead-of-local_cloc.patch new file mode 100644 index 00000000000..d9d6dee4ce8 --- /dev/null +++ b/queue-5.15/net-busy-poll-use-ktime_get_ns-instead-of-local_cloc.patch @@ -0,0 +1,48 @@ +From 5f0a872628046c88dabad47b9e00d2f1b3593aa3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 27 Aug 2024 11:49:16 +0000 +Subject: net: busy-poll: use ktime_get_ns() instead of local_clock() + +From: Eric Dumazet + +[ Upstream commit 0870b0d8b393dde53106678a1e2cec9dfa52f9b7 ] + +Typically, busy-polling durations are below 100 usec. + +When/if the busy-poller thread migrates to another cpu, +local_clock() can be off by +/-2msec or more for small +values of HZ, depending on the platform. + +Use ktimer_get_ns() to ensure deterministic behavior, +which is the whole point of busy-polling. + +Fixes: 060212928670 ("net: add low latency socket poll") +Fixes: 9a3c71aa8024 ("net: convert low latency sockets to sched_clock()") +Fixes: 37089834528b ("sched, net: Fixup busy_loop_us_clock()") +Signed-off-by: Eric Dumazet +Cc: Mina Almasry +Cc: Willem de Bruijn +Reviewed-by: Joe Damato +Link: https://patch.msgid.link/20240827114916.223377-1-edumazet@google.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + include/net/busy_poll.h | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h +index 3459a04a3d61c..2c37aa0a4ccb9 100644 +--- a/include/net/busy_poll.h ++++ b/include/net/busy_poll.h +@@ -63,7 +63,7 @@ static inline bool sk_can_busy_loop(struct sock *sk) + static inline unsigned long busy_loop_current_time(void) + { + #ifdef CONFIG_NET_RX_BUSY_POLL +- return (unsigned long)(local_clock() >> 10); ++ return (unsigned long)(ktime_get_ns() >> 10); + #else + return 0; + #endif +-- +2.43.0 + diff --git a/queue-5.15/nfc-pn533-add-poll-mod-list-filling-check.patch b/queue-5.15/nfc-pn533-add-poll-mod-list-filling-check.patch new file mode 100644 index 00000000000..6cb3e970fcf --- /dev/null +++ b/queue-5.15/nfc-pn533-add-poll-mod-list-filling-check.patch @@ -0,0 +1,62 @@ +From 86f251332fcf95487d8f5b45c9bf9ee9968b8ed4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 27 Aug 2024 11:48:22 +0300 +Subject: nfc: pn533: Add poll mod list filling check + +From: Aleksandr Mishin + +[ Upstream commit febccb39255f9df35527b88c953b2e0deae50e53 ] + +In case of im_protocols value is 1 and tm_protocols value is 0 this +combination successfully passes the check +'if (!im_protocols && !tm_protocols)' in the nfc_start_poll(). 
+But then after pn533_poll_create_mod_list() call in pn533_start_poll() +poll mod list will remain empty and dev->poll_mod_count will remain 0 +which lead to division by zero. + +Normally no im protocol has value 1 in the mask, so this combination is +not expected by driver. But these protocol values actually come from +userspace via Netlink interface (NFC_CMD_START_POLL operation). So a +broken or malicious program may pass a message containing a "bad" +combination of protocol parameter values so that dev->poll_mod_count +is not incremented inside pn533_poll_create_mod_list(), thus leading +to division by zero. +Call trace looks like: +nfc_genl_start_poll() + nfc_start_poll() + ->start_poll() + pn533_start_poll() + +Add poll mod list filling check. + +Found by Linux Verification Center (linuxtesting.org) with SVACE. + +Fixes: dfccd0f58044 ("NFC: pn533: Add some polling entropy") +Signed-off-by: Aleksandr Mishin +Acked-by: Krzysztof Kozlowski +Link: https://patch.msgid.link/20240827084822.18785-1-amishin@t-argos.ru +Signed-off-by: Paolo Abeni +Signed-off-by: Sasha Levin +--- + drivers/nfc/pn533/pn533.c | 5 +++++ + 1 file changed, 5 insertions(+) + +diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c +index 939d27652a4c9..fceae9c127602 100644 +--- a/drivers/nfc/pn533/pn533.c ++++ b/drivers/nfc/pn533/pn533.c +@@ -1725,6 +1725,11 @@ static int pn533_start_poll(struct nfc_dev *nfc_dev, + } + + pn533_poll_create_mod_list(dev, im_protocols, tm_protocols); ++ if (!dev->poll_mod_count) { ++ nfc_err(dev->dev, ++ "Poll mod list is empty\n"); ++ return -EINVAL; ++ } + + /* Do not always start polling from the same modulation */ + get_random_bytes(&rand_mod, sizeof(rand_mod)); +-- +2.43.0 + diff --git a/queue-5.15/phy-xilinx-add-runtime-pm-support.patch b/queue-5.15/phy-xilinx-add-runtime-pm-support.patch new file mode 100644 index 00000000000..7598d6da85d --- /dev/null +++ b/queue-5.15/phy-xilinx-add-runtime-pm-support.patch @@ -0,0 +1,117 @@ +From db193a5607c8cc8865b7c1c5ef5c19847cb1f7a6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 13 Jun 2023 19:32:49 +0530 +Subject: phy: xilinx: add runtime PM support + +From: Piyush Mehta + +[ Upstream commit b3db66f624468ab4a0385586bc7f4221e477d6b2 ] + +Added Runtime power management support to the xilinx phy driver and using +DEFINE_RUNTIME_DEV_PM_OPS new macros allows the compiler to remove the +unused dev_pm_ops structure and related functions if !CONFIG_PM without +the need to mark the functions __maybe_unused. 
+ +Signed-off-by: Piyush Mehta +Link: https://lore.kernel.org/r/20230613140250.3018947-2-piyush.mehta@amd.com +Signed-off-by: Vinod Koul +Stable-dep-of: 5af9b304bc60 ("phy: xilinx: phy-zynqmp: Fix SGMII linkup failure on resume") +Signed-off-by: Sasha Levin +--- + drivers/phy/xilinx/phy-zynqmp.c | 35 ++++++++++++++++++++++++++------- + 1 file changed, 28 insertions(+), 7 deletions(-) + +diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c +index 9be9535ad7ab7..964d8087fcf46 100644 +--- a/drivers/phy/xilinx/phy-zynqmp.c ++++ b/drivers/phy/xilinx/phy-zynqmp.c +@@ -21,6 +21,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -821,7 +822,7 @@ static struct phy *xpsgtr_xlate(struct device *dev, + * Power Management + */ + +-static int __maybe_unused xpsgtr_suspend(struct device *dev) ++static int xpsgtr_runtime_suspend(struct device *dev) + { + struct xpsgtr_dev *gtr_dev = dev_get_drvdata(dev); + unsigned int i; +@@ -836,7 +837,7 @@ static int __maybe_unused xpsgtr_suspend(struct device *dev) + return 0; + } + +-static int __maybe_unused xpsgtr_resume(struct device *dev) ++static int xpsgtr_runtime_resume(struct device *dev) + { + struct xpsgtr_dev *gtr_dev = dev_get_drvdata(dev); + unsigned int icm_cfg0, icm_cfg1; +@@ -877,10 +878,8 @@ static int __maybe_unused xpsgtr_resume(struct device *dev) + return err; + } + +-static const struct dev_pm_ops xpsgtr_pm_ops = { +- SET_SYSTEM_SLEEP_PM_OPS(xpsgtr_suspend, xpsgtr_resume) +-}; +- ++static DEFINE_RUNTIME_DEV_PM_OPS(xpsgtr_pm_ops, xpsgtr_runtime_suspend, ++ xpsgtr_runtime_resume, NULL); + /* + * Probe & Platform Driver + */ +@@ -1006,6 +1005,16 @@ static int xpsgtr_probe(struct platform_device *pdev) + ret = PTR_ERR(provider); + goto err_clk_put; + } ++ ++ pm_runtime_set_active(gtr_dev->dev); ++ pm_runtime_enable(gtr_dev->dev); ++ ++ ret = pm_runtime_resume_and_get(gtr_dev->dev); ++ if (ret < 0) { ++ pm_runtime_disable(gtr_dev->dev); ++ goto err_clk_put; ++ } ++ + return 0; + + err_clk_put: +@@ -1015,6 +1024,17 @@ static int xpsgtr_probe(struct platform_device *pdev) + return ret; + } + ++static int xpsgtr_remove(struct platform_device *pdev) ++{ ++ struct xpsgtr_dev *gtr_dev = platform_get_drvdata(pdev); ++ ++ pm_runtime_disable(gtr_dev->dev); ++ pm_runtime_put_noidle(gtr_dev->dev); ++ pm_runtime_set_suspended(gtr_dev->dev); ++ ++ return 0; ++} ++ + static const struct of_device_id xpsgtr_of_match[] = { + { .compatible = "xlnx,zynqmp-psgtr", }, + { .compatible = "xlnx,zynqmp-psgtr-v1.1", }, +@@ -1024,10 +1044,11 @@ MODULE_DEVICE_TABLE(of, xpsgtr_of_match); + + static struct platform_driver xpsgtr_driver = { + .probe = xpsgtr_probe, ++ .remove = xpsgtr_remove, + .driver = { + .name = "xilinx-psgtr", + .of_match_table = xpsgtr_of_match, +- .pm = &xpsgtr_pm_ops, ++ .pm = pm_ptr(&xpsgtr_pm_ops), + }, + }; + +-- +2.43.0 + diff --git a/queue-5.15/phy-xilinx-phy-zynqmp-dynamic-clock-support-for-powe.patch b/queue-5.15/phy-xilinx-phy-zynqmp-dynamic-clock-support-for-powe.patch new file mode 100644 index 00000000000..98d75e31969 --- /dev/null +++ b/queue-5.15/phy-xilinx-phy-zynqmp-dynamic-clock-support-for-powe.patch @@ -0,0 +1,202 @@ +From 47fc8fddfcd45a15411a9410289f960cad4399ef Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 13 Jun 2023 19:32:50 +0530 +Subject: phy: xilinx: phy-zynqmp: dynamic clock support for power-save + +From: Piyush Mehta + +[ Upstream commit 25d70083351318b44ae699d92c042dcb18a738ea ] + +Enabling clock for all the lanes consumes power even PHY is active or +inactive. 
To resolve this, enable/disable clocks in phy_init/phy_exit. + +By default clock is disabled for all the lanes. Whenever phy_init called +from USB, SATA, or display driver, etc. It enabled the required clock +for requested lane. On phy_exit cycle, it disabled clock for the active +PHYs. + +During the suspend/resume cycle, each USB/ SATA/ display driver called +phy_exit/phy_init individually. It disabled clock on exit, and enabled +on initialization for the active PHYs. + +Signed-off-by: Piyush Mehta +Link: https://lore.kernel.org/r/20230613140250.3018947-3-piyush.mehta@amd.com +Signed-off-by: Vinod Koul +Stable-dep-of: 5af9b304bc60 ("phy: xilinx: phy-zynqmp: Fix SGMII linkup failure on resume") +Signed-off-by: Sasha Levin +--- + drivers/phy/xilinx/phy-zynqmp.c | 61 ++++++++------------------------- + 1 file changed, 15 insertions(+), 46 deletions(-) + +diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c +index 964d8087fcf46..a8782aad62ca4 100644 +--- a/drivers/phy/xilinx/phy-zynqmp.c ++++ b/drivers/phy/xilinx/phy-zynqmp.c +@@ -573,6 +573,10 @@ static int xpsgtr_phy_init(struct phy *phy) + + mutex_lock(>r_dev->gtr_mutex); + ++ /* Configure and enable the clock when peripheral phy_init call */ ++ if (clk_prepare_enable(gtr_dev->clk[gtr_phy->lane])) ++ goto out; ++ + /* Skip initialization if not required. */ + if (!xpsgtr_phy_init_required(gtr_phy)) + goto out; +@@ -617,9 +621,13 @@ static int xpsgtr_phy_init(struct phy *phy) + static int xpsgtr_phy_exit(struct phy *phy) + { + struct xpsgtr_phy *gtr_phy = phy_get_drvdata(phy); ++ struct xpsgtr_dev *gtr_dev = gtr_phy->dev; + + gtr_phy->skip_phy_init = false; + ++ /* Ensure that disable clock only, which configure for lane */ ++ clk_disable_unprepare(gtr_dev->clk[gtr_phy->lane]); ++ + return 0; + } + +@@ -825,15 +833,11 @@ static struct phy *xpsgtr_xlate(struct device *dev, + static int xpsgtr_runtime_suspend(struct device *dev) + { + struct xpsgtr_dev *gtr_dev = dev_get_drvdata(dev); +- unsigned int i; + + /* Save the snapshot ICM_CFG registers. 
*/ + gtr_dev->saved_icm_cfg0 = xpsgtr_read(gtr_dev, ICM_CFG0); + gtr_dev->saved_icm_cfg1 = xpsgtr_read(gtr_dev, ICM_CFG1); + +- for (i = 0; i < ARRAY_SIZE(gtr_dev->clk); i++) +- clk_disable_unprepare(gtr_dev->clk[i]); +- + return 0; + } + +@@ -843,13 +847,6 @@ static int xpsgtr_runtime_resume(struct device *dev) + unsigned int icm_cfg0, icm_cfg1; + unsigned int i; + bool skip_phy_init; +- int err; +- +- for (i = 0; i < ARRAY_SIZE(gtr_dev->clk); i++) { +- err = clk_prepare_enable(gtr_dev->clk[i]); +- if (err) +- goto err_clk_put; +- } + + icm_cfg0 = xpsgtr_read(gtr_dev, ICM_CFG0); + icm_cfg1 = xpsgtr_read(gtr_dev, ICM_CFG1); +@@ -870,12 +867,6 @@ static int xpsgtr_runtime_resume(struct device *dev) + gtr_dev->phys[i].skip_phy_init = skip_phy_init; + + return 0; +- +-err_clk_put: +- while (i--) +- clk_disable_unprepare(gtr_dev->clk[i]); +- +- return err; + } + + static DEFINE_RUNTIME_DEV_PM_OPS(xpsgtr_pm_ops, xpsgtr_runtime_suspend, +@@ -887,7 +878,6 @@ static DEFINE_RUNTIME_DEV_PM_OPS(xpsgtr_pm_ops, xpsgtr_runtime_suspend, + static int xpsgtr_get_ref_clocks(struct xpsgtr_dev *gtr_dev) + { + unsigned int refclk; +- int ret; + + for (refclk = 0; refclk < ARRAY_SIZE(gtr_dev->refclk_sscs); ++refclk) { + unsigned long rate; +@@ -898,19 +888,14 @@ static int xpsgtr_get_ref_clocks(struct xpsgtr_dev *gtr_dev) + snprintf(name, sizeof(name), "ref%u", refclk); + clk = devm_clk_get_optional(gtr_dev->dev, name); + if (IS_ERR(clk)) { +- ret = dev_err_probe(gtr_dev->dev, PTR_ERR(clk), +- "Failed to get reference clock %u\n", +- refclk); +- goto err_clk_put; ++ return dev_err_probe(gtr_dev->dev, PTR_ERR(clk), ++ "Failed to get ref clock %u\n", ++ refclk); + } + + if (!clk) + continue; + +- ret = clk_prepare_enable(clk); +- if (ret) +- goto err_clk_put; +- + gtr_dev->clk[refclk] = clk; + + /* +@@ -930,18 +915,11 @@ static int xpsgtr_get_ref_clocks(struct xpsgtr_dev *gtr_dev) + dev_err(gtr_dev->dev, + "Invalid rate %lu for reference clock %u\n", + rate, refclk); +- ret = -EINVAL; +- goto err_clk_put; ++ return -EINVAL; + } + } + + return 0; +- +-err_clk_put: +- while (refclk--) +- clk_disable_unprepare(gtr_dev->clk[refclk]); +- +- return ret; + } + + static int xpsgtr_probe(struct platform_device *pdev) +@@ -950,7 +928,6 @@ static int xpsgtr_probe(struct platform_device *pdev) + struct xpsgtr_dev *gtr_dev; + struct phy_provider *provider; + unsigned int port; +- unsigned int i; + int ret; + + gtr_dev = devm_kzalloc(&pdev->dev, sizeof(*gtr_dev), GFP_KERNEL); +@@ -990,8 +967,7 @@ static int xpsgtr_probe(struct platform_device *pdev) + phy = devm_phy_create(&pdev->dev, np, &xpsgtr_phyops); + if (IS_ERR(phy)) { + dev_err(&pdev->dev, "failed to create PHY\n"); +- ret = PTR_ERR(phy); +- goto err_clk_put; ++ return PTR_ERR(phy); + } + + gtr_phy->phy = phy; +@@ -1002,8 +978,7 @@ static int xpsgtr_probe(struct platform_device *pdev) + provider = devm_of_phy_provider_register(&pdev->dev, xpsgtr_xlate); + if (IS_ERR(provider)) { + dev_err(&pdev->dev, "registering provider failed\n"); +- ret = PTR_ERR(provider); +- goto err_clk_put; ++ return PTR_ERR(provider); + } + + pm_runtime_set_active(gtr_dev->dev); +@@ -1012,16 +987,10 @@ static int xpsgtr_probe(struct platform_device *pdev) + ret = pm_runtime_resume_and_get(gtr_dev->dev); + if (ret < 0) { + pm_runtime_disable(gtr_dev->dev); +- goto err_clk_put; ++ return ret; + } + + return 0; +- +-err_clk_put: +- for (i = 0; i < ARRAY_SIZE(gtr_dev->clk); i++) +- clk_disable_unprepare(gtr_dev->clk[i]); +- +- return ret; + } + + static int xpsgtr_remove(struct platform_device 
*pdev) +-- +2.43.0 + diff --git a/queue-5.15/phy-xilinx-phy-zynqmp-fix-sgmii-linkup-failure-on-re.patch b/queue-5.15/phy-xilinx-phy-zynqmp-fix-sgmii-linkup-failure-on-re.patch new file mode 100644 index 00000000000..6027f4fefdb --- /dev/null +++ b/queue-5.15/phy-xilinx-phy-zynqmp-fix-sgmii-linkup-failure-on-re.patch @@ -0,0 +1,140 @@ +From e540110596600ace9f60586d2b29b9c5827cc472 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 5 Aug 2024 11:29:07 +0530 +Subject: phy: xilinx: phy-zynqmp: Fix SGMII linkup failure on resume + +From: Piyush Mehta + +[ Upstream commit 5af9b304bc6010723c02f74de0bfd24ff19b1a10 ] + +On a few Kria KR260 Robotics Starter Kit the PS-GEM SGMII linkup is not +happening after the resume. This is because serdes registers are reset +when FPD is off (in suspend state) and needs to be reprogrammed in the +resume path with the same default initialization as done in the first +stage bootloader psu_init routine. + +To address the failure introduce a set of serdes registers to be saved in +the suspend path and then restore it on resume. + +Fixes: 4a33bea00314 ("phy: zynqmp: Add PHY driver for the Xilinx ZynqMP Gigabit Transceiver") +Signed-off-by: Piyush Mehta +Signed-off-by: Radhey Shyam Pandey +Link: https://lore.kernel.org/r/1722837547-2578381-1-git-send-email-radhey.shyam.pandey@amd.com +Signed-off-by: Vinod Koul +Signed-off-by: Sasha Levin +--- + drivers/phy/xilinx/phy-zynqmp.c | 56 +++++++++++++++++++++++++++++++++ + 1 file changed, 56 insertions(+) + +diff --git a/drivers/phy/xilinx/phy-zynqmp.c b/drivers/phy/xilinx/phy-zynqmp.c +index a8782aad62ca4..75b0f9f31c81f 100644 +--- a/drivers/phy/xilinx/phy-zynqmp.c ++++ b/drivers/phy/xilinx/phy-zynqmp.c +@@ -166,6 +166,24 @@ + /* Timeout values */ + #define TIMEOUT_US 1000 + ++/* Lane 0/1/2/3 offset */ ++#define DIG_8(n) ((0x4000 * (n)) + 0x1074) ++#define ILL13(n) ((0x4000 * (n)) + 0x1994) ++#define DIG_10(n) ((0x4000 * (n)) + 0x107c) ++#define RST_DLY(n) ((0x4000 * (n)) + 0x19a4) ++#define BYP_15(n) ((0x4000 * (n)) + 0x1038) ++#define BYP_12(n) ((0x4000 * (n)) + 0x102c) ++#define MISC3(n) ((0x4000 * (n)) + 0x19ac) ++#define EQ11(n) ((0x4000 * (n)) + 0x1978) ++ ++static u32 save_reg_address[] = { ++ /* Lane 0/1/2/3 Register */ ++ DIG_8(0), ILL13(0), DIG_10(0), RST_DLY(0), BYP_15(0), BYP_12(0), MISC3(0), EQ11(0), ++ DIG_8(1), ILL13(1), DIG_10(1), RST_DLY(1), BYP_15(1), BYP_12(1), MISC3(1), EQ11(1), ++ DIG_8(2), ILL13(2), DIG_10(2), RST_DLY(2), BYP_15(2), BYP_12(2), MISC3(2), EQ11(2), ++ DIG_8(3), ILL13(3), DIG_10(3), RST_DLY(3), BYP_15(3), BYP_12(3), MISC3(3), EQ11(3), ++}; ++ + struct xpsgtr_dev; + + /** +@@ -214,6 +232,7 @@ struct xpsgtr_phy { + * @tx_term_fix: fix for GT issue + * @saved_icm_cfg0: stored value of ICM CFG0 register + * @saved_icm_cfg1: stored value of ICM CFG1 register ++ * @saved_regs: registers to be saved/restored during suspend/resume + */ + struct xpsgtr_dev { + struct device *dev; +@@ -226,6 +245,7 @@ struct xpsgtr_dev { + bool tx_term_fix; + unsigned int saved_icm_cfg0; + unsigned int saved_icm_cfg1; ++ u32 *saved_regs; + }; + + /* +@@ -299,6 +319,32 @@ static inline void xpsgtr_clr_set_phy(struct xpsgtr_phy *gtr_phy, + writel((readl(addr) & ~clr) | set, addr); + } + ++/** ++ * xpsgtr_save_lane_regs - Saves registers on suspend ++ * @gtr_dev: pointer to phy controller context structure ++ */ ++static void xpsgtr_save_lane_regs(struct xpsgtr_dev *gtr_dev) ++{ ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(save_reg_address); i++) ++ gtr_dev->saved_regs[i] = xpsgtr_read(gtr_dev, ++ 
save_reg_address[i]); ++} ++ ++/** ++ * xpsgtr_restore_lane_regs - Restores registers on resume ++ * @gtr_dev: pointer to phy controller context structure ++ */ ++static void xpsgtr_restore_lane_regs(struct xpsgtr_dev *gtr_dev) ++{ ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(save_reg_address); i++) ++ xpsgtr_write(gtr_dev, save_reg_address[i], ++ gtr_dev->saved_regs[i]); ++} ++ + /* + * Hardware Configuration + */ +@@ -838,6 +884,8 @@ static int xpsgtr_runtime_suspend(struct device *dev) + gtr_dev->saved_icm_cfg0 = xpsgtr_read(gtr_dev, ICM_CFG0); + gtr_dev->saved_icm_cfg1 = xpsgtr_read(gtr_dev, ICM_CFG1); + ++ xpsgtr_save_lane_regs(gtr_dev); ++ + return 0; + } + +@@ -848,6 +896,8 @@ static int xpsgtr_runtime_resume(struct device *dev) + unsigned int i; + bool skip_phy_init; + ++ xpsgtr_restore_lane_regs(gtr_dev); ++ + icm_cfg0 = xpsgtr_read(gtr_dev, ICM_CFG0); + icm_cfg1 = xpsgtr_read(gtr_dev, ICM_CFG1); + +@@ -990,6 +1040,12 @@ static int xpsgtr_probe(struct platform_device *pdev) + return ret; + } + ++ gtr_dev->saved_regs = devm_kmalloc(gtr_dev->dev, ++ sizeof(save_reg_address), ++ GFP_KERNEL); ++ if (!gtr_dev->saved_regs) ++ return -ENOMEM; ++ + return 0; + } + +-- +2.43.0 + diff --git a/queue-5.15/pm-core-add-export-_gpl-_simple_dev_pm_ops-macros.patch b/queue-5.15/pm-core-add-export-_gpl-_simple_dev_pm_ops-macros.patch new file mode 100644 index 00000000000..f433e9bce04 --- /dev/null +++ b/queue-5.15/pm-core-add-export-_gpl-_simple_dev_pm_ops-macros.patch @@ -0,0 +1,95 @@ +From efeb5e0d8f744671309a87af4e8bc7767e0695b2 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 7 Jan 2022 18:17:20 +0000 +Subject: PM: core: Add EXPORT[_GPL]_SIMPLE_DEV_PM_OPS macros + +From: Paul Cercueil + +[ Upstream commit 0ae101fdd3297b7165755340e05386f1e1379709 ] + +These macros are defined conditionally, according to CONFIG_PM: +- if CONFIG_PM is enabled, these macros resolve to + DEFINE_SIMPLE_DEV_PM_OPS(), and the dev_pm_ops symbol will be + exported. + +- if CONFIG_PM is disabled, these macros will result in a dummy static + dev_pm_ops to be created with the __maybe_unused flag. The dev_pm_ops + will then be discarded by the compiler, along with the provided + callback functions if they are not used anywhere else. + +In the second case, the symbol is not exported, which should be +perfectly fine - users of the symbol should all use the pm_ptr() or +pm_sleep_ptr() macro, so the dev_pm_ops marked as "extern" in the +client's code will never be accessed. + +Signed-off-by: Paul Cercueil +Acked-by: Jonathan Cameron +Reviewed-by: Ulf Hansson +Signed-off-by: Rafael J. 
Wysocki +Stable-dep-of: 5af9b304bc60 ("phy: xilinx: phy-zynqmp: Fix SGMII linkup failure on resume") +Signed-off-by: Sasha Levin +--- + include/linux/pm.h | 35 ++++++++++++++++++++++++++++++++--- + 1 file changed, 32 insertions(+), 3 deletions(-) + +diff --git a/include/linux/pm.h b/include/linux/pm.h +index 452c1ed902b75..c3665382b9f8c 100644 +--- a/include/linux/pm.h ++++ b/include/linux/pm.h +@@ -8,6 +8,7 @@ + #ifndef _LINUX_PM_H + #define _LINUX_PM_H + ++#include + #include + #include + #include +@@ -357,14 +358,42 @@ struct dev_pm_ops { + #define SET_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) + #endif + ++#define _DEFINE_DEV_PM_OPS(name, \ ++ suspend_fn, resume_fn, \ ++ runtime_suspend_fn, runtime_resume_fn, idle_fn) \ ++const struct dev_pm_ops name = { \ ++ SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ ++ RUNTIME_PM_OPS(runtime_suspend_fn, runtime_resume_fn, idle_fn) \ ++} ++ ++#ifdef CONFIG_PM ++#define _EXPORT_DEV_PM_OPS(name, suspend_fn, resume_fn, runtime_suspend_fn, \ ++ runtime_resume_fn, idle_fn, sec) \ ++ _DEFINE_DEV_PM_OPS(name, suspend_fn, resume_fn, runtime_suspend_fn, \ ++ runtime_resume_fn, idle_fn); \ ++ _EXPORT_SYMBOL(name, sec) ++#else ++#define _EXPORT_DEV_PM_OPS(name, suspend_fn, resume_fn, runtime_suspend_fn, \ ++ runtime_resume_fn, idle_fn, sec) \ ++static __maybe_unused _DEFINE_DEV_PM_OPS(__static_##name, suspend_fn, \ ++ resume_fn, runtime_suspend_fn, \ ++ runtime_resume_fn, idle_fn) ++#endif ++ + /* + * Use this if you want to use the same suspend and resume callbacks for suspend + * to RAM and hibernation. ++ * ++ * If the underlying dev_pm_ops struct symbol has to be exported, use ++ * EXPORT_SIMPLE_DEV_PM_OPS() or EXPORT_GPL_SIMPLE_DEV_PM_OPS() instead. + */ + #define DEFINE_SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \ +-const struct dev_pm_ops name = { \ +- SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ +-} ++ _DEFINE_DEV_PM_OPS(name, suspend_fn, resume_fn, NULL, NULL, NULL) ++ ++#define EXPORT_SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \ ++ _EXPORT_DEV_PM_OPS(name, suspend_fn, resume_fn, NULL, NULL, NULL, "") ++#define EXPORT_GPL_SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \ ++ _EXPORT_DEV_PM_OPS(name, suspend_fn, resume_fn, NULL, NULL, NULL, "_gpl") + + /* Deprecated. Use DEFINE_SIMPLE_DEV_PM_OPS() instead. */ + #define SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \ +-- +2.43.0 + diff --git a/queue-5.15/pm-core-remove-define_universal_dev_pm_ops-macro.patch b/queue-5.15/pm-core-remove-define_universal_dev_pm_ops-macro.patch new file mode 100644 index 00000000000..08a2dc4332c --- /dev/null +++ b/queue-5.15/pm-core-remove-define_universal_dev_pm_ops-macro.patch @@ -0,0 +1,78 @@ +From 942cd5285793148bfaa99021edf820f25017f3d9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 7 Jan 2022 18:17:18 +0000 +Subject: PM: core: Remove DEFINE_UNIVERSAL_DEV_PM_OPS() macro + +From: Paul Cercueil + +[ Upstream commit 3f4b32511a77bc5a05cfbf26fec94c4e1b1cf46a ] + +The deprecated UNIVERSAL_DEV_PM_OPS() macro uses the provided callbacks +for both runtime PM and system sleep, which is very likely to be a +mistake, as a system sleep can be triggered while a given device is +already PM-suspended, which would cause the suspend callback to be +called twice. + +The amount of users of UNIVERSAL_DEV_PM_OPS() is also tiny (16 +occurences) compared to the number of places where +SET_SYSTEM_SLEEP_PM_OPS() is used with pm_runtime_force_suspend() and +pm_runtime_force_resume(), which makes me think that none of these cases +are actually valid. 
+ +As the new macro DEFINE_UNIVERSAL_DEV_PM_OPS() which was introduced to +replace UNIVERSAL_DEV_PM_OPS() is currently unused, remove it before +someone starts to use it in yet another invalid case. + +Signed-off-by: Paul Cercueil +Acked-by: Jonathan Cameron +Reviewed-by: Ulf Hansson +Signed-off-by: Rafael J. Wysocki +Stable-dep-of: 5af9b304bc60 ("phy: xilinx: phy-zynqmp: Fix SGMII linkup failure on resume") +Signed-off-by: Sasha Levin +--- + include/linux/pm.h | 21 ++++++++------------- + 1 file changed, 8 insertions(+), 13 deletions(-) + +diff --git a/include/linux/pm.h b/include/linux/pm.h +index d1c19f5b1380f..452c1ed902b75 100644 +--- a/include/linux/pm.h ++++ b/include/linux/pm.h +@@ -366,6 +366,12 @@ const struct dev_pm_ops name = { \ + SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ + } + ++/* Deprecated. Use DEFINE_SIMPLE_DEV_PM_OPS() instead. */ ++#define SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \ ++const struct dev_pm_ops __maybe_unused name = { \ ++ SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ ++} ++ + /* + * Use this for defining a set of PM operations to be used in all situations + * (system suspend, hibernation or runtime PM). +@@ -378,20 +384,9 @@ const struct dev_pm_ops name = { \ + * suspend and "early" resume callback pointers, .suspend_late() and + * .resume_early(), to the same routines as .runtime_suspend() and + * .runtime_resume(), respectively (and analogously for hibernation). ++ * ++ * Deprecated. You most likely don't want this macro. + */ +-#define DEFINE_UNIVERSAL_DEV_PM_OPS(name, suspend_fn, resume_fn, idle_fn) \ +-static const struct dev_pm_ops name = { \ +- SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ +- RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \ +-} +- +-/* Deprecated. Use DEFINE_SIMPLE_DEV_PM_OPS() instead. */ +-#define SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \ +-const struct dev_pm_ops __maybe_unused name = { \ +- SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ +-} +- +-/* Deprecated. Use DEFINE_UNIVERSAL_DEV_PM_OPS() instead. */ + #define UNIVERSAL_DEV_PM_OPS(name, suspend_fn, resume_fn, idle_fn) \ + const struct dev_pm_ops __maybe_unused name = { \ + SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ +-- +2.43.0 + diff --git a/queue-5.15/pm-runtime-add-define_runtime_dev_pm_ops-macro.patch b/queue-5.15/pm-runtime-add-define_runtime_dev_pm_ops-macro.patch new file mode 100644 index 00000000000..de06047f88e --- /dev/null +++ b/queue-5.15/pm-runtime-add-define_runtime_dev_pm_ops-macro.patch @@ -0,0 +1,71 @@ +From 411ba007ef603aa956dfce28621e34c0ec95f628 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 7 Jan 2022 18:17:21 +0000 +Subject: PM: runtime: Add DEFINE_RUNTIME_DEV_PM_OPS() macro + +From: Paul Cercueil + +[ Upstream commit 9d8619190031af0a314bee865262d8975473e4dd ] + +A lot of drivers create a dev_pm_ops struct with the system sleep +suspend/resume callbacks set to pm_runtime_force_suspend() and +pm_runtime_force_resume(). + +These drivers can now use the DEFINE_RUNTIME_DEV_PM_OPS() macro, which +will use pm_runtime_force_{suspend,resume}() as the system sleep +callbacks, while having the same dead code removal characteristic that +is already provided by DEFINE_SIMPLE_DEV_PM_OPS(). + +Signed-off-by: Paul Cercueil +Acked-by: Jonathan Cameron +Reviewed-by: Ulf Hansson +Signed-off-by: Rafael J. 
Wysocki +Stable-dep-of: 5af9b304bc60 ("phy: xilinx: phy-zynqmp: Fix SGMII linkup failure on resume") +Signed-off-by: Sasha Levin +--- + include/linux/pm.h | 3 ++- + include/linux/pm_runtime.h | 14 ++++++++++++++ + 2 files changed, 16 insertions(+), 1 deletion(-) + +diff --git a/include/linux/pm.h b/include/linux/pm.h +index c3665382b9f8c..b8578e1f7c110 100644 +--- a/include/linux/pm.h ++++ b/include/linux/pm.h +@@ -414,7 +414,8 @@ const struct dev_pm_ops __maybe_unused name = { \ + * .resume_early(), to the same routines as .runtime_suspend() and + * .runtime_resume(), respectively (and analogously for hibernation). + * +- * Deprecated. You most likely don't want this macro. ++ * Deprecated. You most likely don't want this macro. Use ++ * DEFINE_RUNTIME_DEV_PM_OPS() instead. + */ + #define UNIVERSAL_DEV_PM_OPS(name, suspend_fn, resume_fn, idle_fn) \ + const struct dev_pm_ops __maybe_unused name = { \ +diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h +index 7efb105183134..9a10b6bac4a71 100644 +--- a/include/linux/pm_runtime.h ++++ b/include/linux/pm_runtime.h +@@ -22,6 +22,20 @@ + usage_count */ + #define RPM_AUTO 0x08 /* Use autosuspend_delay */ + ++/* ++ * Use this for defining a set of PM operations to be used in all situations ++ * (system suspend, hibernation or runtime PM). ++ * ++ * Note that the behaviour differs from the deprecated UNIVERSAL_DEV_PM_OPS() ++ * macro, which uses the provided callbacks for both runtime PM and system ++ * sleep, while DEFINE_RUNTIME_DEV_PM_OPS() uses pm_runtime_force_suspend() ++ * and pm_runtime_force_resume() for its system sleep callbacks. ++ */ ++#define DEFINE_RUNTIME_DEV_PM_OPS(name, suspend_fn, resume_fn, idle_fn) \ ++ _DEFINE_DEV_PM_OPS(name, pm_runtime_force_suspend, \ ++ pm_runtime_force_resume, suspend_fn, \ ++ resume_fn, idle_fn) ++ + #ifdef CONFIG_PM + extern struct workqueue_struct *pm_wq; + +-- +2.43.0 + diff --git a/queue-5.15/series b/queue-5.15/series index 89d0fd20593..52133a98145 100644 --- a/queue-5.15/series +++ b/queue-5.15/series @@ -186,3 +186,15 @@ cgroup-cpuset-prevent-uaf-in-proc_cpuset_show.patch net-rds-fix-possible-deadlock-in-rds_message_put.patch ksmbd-the-buffer-of-smb2-query-dir-response-has-at-l.patch soundwire-stream-fix-programming-slave-ports-for-non-continous-port-maps.patch +pm-core-remove-define_universal_dev_pm_ops-macro.patch +pm-core-add-export-_gpl-_simple_dev_pm_ops-macros.patch +pm-runtime-add-define_runtime_dev_pm_ops-macro.patch +phy-xilinx-add-runtime-pm-support.patch +phy-xilinx-phy-zynqmp-dynamic-clock-support-for-powe.patch +phy-xilinx-phy-zynqmp-fix-sgmii-linkup-failure-on-re.patch +dmaengine-dw-add-peripheral-bus-width-verification.patch +dmaengine-dw-add-memory-bus-width-verification.patch +ethtool-check-device-is-present-when-getting-link-se.patch +gtp-fix-a-potential-null-pointer-dereference.patch +net-busy-poll-use-ktime_get_ns-instead-of-local_cloc.patch +nfc-pn533-add-poll-mod-list-filling-check.patch
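
For context on the two dmaengine/dw fixes queued above, here is a minimal, hypothetical sketch of the client side: a peripheral driver filling struct dma_slave_config before the dw_dmac dwc_config() verification runs. It is not part of any queued patch; the function name, channel and FIFO address are illustrative assumptions, while dmaengine_slave_config(), struct dma_slave_config and the enum values are standard kernel API.

#include <linux/dmaengine.h>

/*
 * Hypothetical DEV_TO_MEM configuration for a peripheral serviced by the
 * DW AHB DMA controller.  With the two queued dmaengine/dw fixes applied,
 * dwc_config() clamps the burst lengths into [1, max_burst], rejects a
 * non-power-of-two or oversized peripheral bus width, and bounds the
 * memory-side width so a suspended-then-disabled channel cannot silently
 * drop residue left in the DMA FIFO.
 */
static int example_setup_rx_dma(struct dma_chan *chan, dma_addr_t fifo_phys)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_DEV_TO_MEM,
		.src_addr	= fifo_phys,	/* peripheral RX FIFO register */
		.src_addr_width	= DMA_SLAVE_BUSWIDTH_1_BYTE,
		.src_maxburst	= 8,
		/*
		 * dst_addr_width is intentionally left as
		 * DMA_SLAVE_BUSWIDTH_UNDEFINED (0): the dw driver now derives
		 * a bounded memory-side width from the memory master's data
		 * width and the source width/burst in dwc_config().
		 */
	};

	return dmaengine_slave_config(chan, &cfg);
}

A MEM_TO_DEV setup would fill dst_addr, dst_addr_width and dst_maxburst instead; after the memory-bus-width fix, a dst_addr_width larger than the memory master's data width makes dwc_config() return -EINVAL, and leaving the peripheral-side width undefined falls back to 1-byte transfers rather than feeding an undefined value to __ffs().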