From: Sasha Levin
Date: Sun, 19 Jan 2025 23:10:06 +0000 (-0500)
Subject: Fixes for 6.12
X-Git-Tag: v6.6.73~37
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=a532af64912c13967b6f894ab2ccf8d4fa0c1b86;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 6.12

Signed-off-by: Sasha Levin
---
diff --git a/queue-6.12/acpi-resource-acpi_dev_irq_override-check-dmi-match-.patch b/queue-6.12/acpi-resource-acpi_dev_irq_override-check-dmi-match-.patch
new file mode 100644
index 0000000000..bd9f91163c
--- /dev/null
+++ b/queue-6.12/acpi-resource-acpi_dev_irq_override-check-dmi-match-.patch
@@ -0,0 +1,48 @@
+From 445547ec22c7aa06eebe1f5f5d3b761ce1771f1f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 28 Dec 2024 17:52:53 +0100
+Subject: ACPI: resource: acpi_dev_irq_override(): Check DMI match last
+
+From: Hans de Goede
+
+[ Upstream commit cd4a7b2e6a2437a5502910c08128ea3bad55a80b ]
+
+acpi_dev_irq_override() gets called approx. 30 times during boot (15 legacy
+IRQs * 2 override_table entries). Of these 30 calls at max 1 will match
+the non DMI checks done by acpi_dev_irq_override(). The dmi_check_system()
+check is by far the most expensive check done by acpi_dev_irq_override(),
+make this call the last check done by acpi_dev_irq_override() so that it
+will be called at max 1 time instead of 30 times.
+
+Signed-off-by: Hans de Goede
+Reviewed-by: Mario Limonciello
+Link: https://patch.msgid.link/20241228165253.42584-1-hdegoede@redhat.com
+[ rjw: Subject edit ]
+Signed-off-by: Rafael J. Wysocki
+Signed-off-by: Sasha Levin
+---
+ drivers/acpi/resource.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index d27a3bf96f80d..90aaec923889c 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -689,11 +689,11 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
+ 	for (i = 0; i < ARRAY_SIZE(override_table); i++) {
+ 		const struct irq_override_cmp *entry = &override_table[i];
+ 
+-		if (dmi_check_system(entry->system) &&
+-		    entry->irq == gsi &&
++		if (entry->irq == gsi &&
+ 		    entry->triggering == triggering &&
+ 		    entry->polarity == polarity &&
+-		    entry->shareable == shareable)
++		    entry->shareable == shareable &&
++		    dmi_check_system(entry->system))
+ 			return entry->override;
+ 	}
+ 
+-- 
+2.39.5
+
diff --git a/queue-6.12/afs-fix-merge-preference-rule-failure-condition.patch b/queue-6.12/afs-fix-merge-preference-rule-failure-condition.patch
new file mode 100644
index 0000000000..8385dc7d71
--- /dev/null
+++ b/queue-6.12/afs-fix-merge-preference-rule-failure-condition.patch
@@ -0,0 +1,63 @@
+From a213d709cf2c914864ac1e1c954eb988a68e3aed Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 7 Jan 2025 14:52:32 +0000
+Subject: afs: Fix merge preference rule failure condition
+
+From: Lizhi Xu
+
+[ Upstream commit 17a4fde81d3a7478d97d15304a6d61094a10c2e3 ]
+
+syzbot reported a lock held when returning to userspace[1]. This is
+because if argc is less than 0 and the function returns directly, the held
+inode lock is not released.
+
+Fix this by store the error in ret and jump to done to clean up instead of
+returning directly.
+
+[dh: Modified Lizhi Xu's original patch to make it honour the error code
+from afs_split_string()]
+
+[1]
+WARNING: lock held when returning to user space!
+6.13.0-rc3-syzkaller-00209-g499551201b5f #0 Not tainted
+------------------------------------------------
+syz-executor133/5823 is leaving the kernel with locks still held!
+1 lock held by syz-executor133/5823:
+ #0: ffff888071cffc00 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
+ #0: ffff888071cffc00 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: afs_proc_addr_prefs_write+0x2bb/0x14e0 fs/afs/addr_prefs.c:388
+
+Reported-by: syzbot+76f33569875eb708e575@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=76f33569875eb708e575
+Signed-off-by: Lizhi Xu
+Signed-off-by: David Howells
+Link: https://lore.kernel.org/r/20241226012616.2348907-1-lizhi.xu@windriver.com/
+Link: https://lore.kernel.org/r/529850.1736261552@warthog.procyon.org.uk
+Tested-by: syzbot+76f33569875eb708e575@syzkaller.appspotmail.com
+cc: Marc Dionne
+cc: linux-afs@lists.infradead.org
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ fs/afs/addr_prefs.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/fs/afs/addr_prefs.c b/fs/afs/addr_prefs.c
+index a189ff8a5034e..c0384201b8feb 100644
+--- a/fs/afs/addr_prefs.c
++++ b/fs/afs/addr_prefs.c
+@@ -413,8 +413,10 @@ int afs_proc_addr_prefs_write(struct file *file, char *buf, size_t size)
+ 
+ 	do {
+ 		argc = afs_split_string(&buf, argv, ARRAY_SIZE(argv));
+-		if (argc < 0)
+-			return argc;
++		if (argc < 0) {
++			ret = argc;
++			goto done;
++		}
+ 		if (argc < 2)
+ 			goto inval;
+ 
+-- 
+2.39.5
+
diff --git a/queue-6.12/cachefiles-parse-the-secctx-immediately.patch b/queue-6.12/cachefiles-parse-the-secctx-immediately.patch
new file mode 100644
index 0000000000..0ff165f9be
--- /dev/null
+++ b/queue-6.12/cachefiles-parse-the-secctx-immediately.patch
@@ -0,0 +1,136 @@
+From 6d97c3e6210cfccbebce6423e76badd01c00626e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 13 Dec 2024 13:50:05 +0000
+Subject: cachefiles: Parse the "secctx" immediately
+
+From: Max Kellermann
+
+[ Upstream commit e5a8b6446c0d370716f193771ccacf3260a57534 ]
+
+Instead of storing an opaque string, call security_secctx_to_secid()
+right in the "secctx" command handler and store only the numeric
+"secid". This eliminates an unnecessary string allocation and allows
+the daemon to receive errors when writing the "secctx" command instead
+of postponing the error to the "bind" command handler. For example,
+if the kernel was built without `CONFIG_SECURITY`, "bind" will return
+`EOPNOTSUPP`, but the daemon doesn't know why. With this patch, the
+"secctx" will instead return `EOPNOTSUPP` which is the right context
+for this error.
+
+This patch adds a boolean flag `have_secid` because I'm not sure if we
+can safely assume that zero is the special secid value for "not set".
+This appears to be true for SELinux, Smack and AppArmor, but since
+this attribute is not documented, I'm unable to derive a stable
+guarantee for that.
+
+Signed-off-by: Max Kellermann
+Signed-off-by: David Howells
+Link: https://lore.kernel.org/r/20241209141554.638708-1-max.kellermann@ionos.com/
+Link: https://lore.kernel.org/r/20241213135013.2964079-6-dhowells@redhat.com
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ fs/cachefiles/daemon.c   | 14 +++++++-------
+ fs/cachefiles/internal.h |  3 ++-
+ fs/cachefiles/security.c |  6 +++---
+ 3 files changed, 12 insertions(+), 11 deletions(-)
+
+diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c
+index 89b11336a8369..1806bff8e59bc 100644
+--- a/fs/cachefiles/daemon.c
++++ b/fs/cachefiles/daemon.c
+@@ -15,6 +15,7 @@
+ #include 
+ #include 
+ #include 
++#include 
+ #include 
+ #include 
+ #include 
+@@ -576,7 +577,7 @@ static int cachefiles_daemon_dir(struct cachefiles_cache *cache, char *args)
+  */
+ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
+ {
+-	char *secctx;
++	int err;
+ 
+ 	_enter(",%s", args);
+ 
+@@ -585,16 +586,16 @@ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (cache->secctx) {
++	if (cache->have_secid) {
+ 		pr_err("Second security context specified\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	secctx = kstrdup(args, GFP_KERNEL);
+-	if (!secctx)
+-		return -ENOMEM;
++	err = security_secctx_to_secid(args, strlen(args), &cache->secid);
++	if (err)
++		return err;
+ 
+-	cache->secctx = secctx;
++	cache->have_secid = true;
+ 	return 0;
+ }
+ 
+@@ -820,7 +821,6 @@ static void cachefiles_daemon_unbind(struct cachefiles_cache *cache)
+ 	put_cred(cache->cache_cred);
+ 
+ 	kfree(cache->rootdirname);
+-	kfree(cache->secctx);
+ 	kfree(cache->tag);
+ 
+ 	_leave("");
+diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
+index 7b99bd98de75b..38c236e38cef8 100644
+--- a/fs/cachefiles/internal.h
++++ b/fs/cachefiles/internal.h
+@@ -122,7 +122,6 @@ struct cachefiles_cache {
+ #define CACHEFILES_STATE_CHANGED	3	/* T if state changed (poll trigger) */
+ #define CACHEFILES_ONDEMAND_MODE	4	/* T if in on-demand read mode */
+ 	char				*rootdirname;	/* name of cache root directory */
+-	char				*secctx;	/* LSM security context */
+ 	char				*tag;		/* cache binding tag */
+ 	refcount_t			unbind_pincount;/* refcount to do daemon unbind */
+ 	struct xarray			reqs;		/* xarray of pending on-demand requests */
+@@ -130,6 +129,8 @@ struct cachefiles_cache {
+ 	struct xarray			ondemand_ids;	/* xarray for ondemand_id allocation */
+ 	u32				ondemand_id_next;
+ 	u32				msg_id_next;
++	u32				secid;		/* LSM security id */
++	bool				have_secid;	/* whether "secid" was set */
+ };
+ 
+ static inline bool cachefiles_in_ondemand_mode(struct cachefiles_cache *cache)
+diff --git a/fs/cachefiles/security.c b/fs/cachefiles/security.c
+index fe777164f1d89..fc6611886b3b5 100644
+--- a/fs/cachefiles/security.c
++++ b/fs/cachefiles/security.c
+@@ -18,7 +18,7 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
+ 	struct cred *new;
+ 	int ret;
+ 
+-	_enter("{%s}", cache->secctx);
++	_enter("{%u}", cache->have_secid ? cache->secid : 0);
+ 
+ 	new = prepare_kernel_cred(current);
+ 	if (!new) {
+@@ -26,8 +26,8 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
+ 		goto error;
+ 	}
+ 
+-	if (cache->secctx) {
+-		ret = set_security_override_from_ctx(new, cache->secctx);
++	if (cache->have_secid) {
++		ret = set_security_override(new, cache->secid);
+ 		if (ret < 0) {
+ 			put_cred(new);
+ 			pr_err("Security denies permission to nominate security context: error %d\n",
+-- 
+2.39.5
+
diff --git a/queue-6.12/fs-fix-missing-declaration-of-init_files.patch b/queue-6.12/fs-fix-missing-declaration-of-init_files.patch
new file mode 100644
index 0000000000..04cce445e6
--- /dev/null
+++ b/queue-6.12/fs-fix-missing-declaration-of-init_files.patch
@@ -0,0 +1,37 @@
+From 6590e2c3170fa6242ba0d1eb2d69f7d7f2023ef9 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 17 Dec 2024 07:18:36 +0000
+Subject: fs: fix missing declaration of init_files
+
+From: Zhang Kunbo
+
+[ Upstream commit 2b2fc0be98a828cf33a88a28e9745e8599fb05cf ]
+
+fs/file.c should include include/linux/init_task.h for
+ declaration of init_files. This fixes the sparse warning:
+
+fs/file.c:501:21: warning: symbol 'init_files' was not declared. Should it be static?
+
+Signed-off-by: Zhang Kunbo
+Link: https://lore.kernel.org/r/20241217071836.2634868-1-zhangkunbo@huawei.com
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ fs/file.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/fs/file.c b/fs/file.c
+index eb093e7369720..4cb952541dd03 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -21,6 +21,7 @@
+ #include 
+ #include 
+ #include 
++#include 
+ 
+ #include "internal.h"
+ 
+-- 
+2.39.5
+
diff --git a/queue-6.12/fs-qnx6-fix-building-with-gcc-15.patch b/queue-6.12/fs-qnx6-fix-building-with-gcc-15.patch
new file mode 100644
index 0000000000..63130872ad
--- /dev/null
+++ b/queue-6.12/fs-qnx6-fix-building-with-gcc-15.patch
@@ -0,0 +1,77 @@
+From d9d8184c8ad681fe992658f1b583e8cc163e91ec Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 5 Oct 2024 01:21:32 +0530
+Subject: fs/qnx6: Fix building with GCC 15
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Brahmajit Das
+
+[ Upstream commit 989e0cdc0f18a594b25cabc60426d29659aeaf58 ]
+
+qnx6_checkroot() had been using weirdly spelled initializer - it needed
+to initialize 3-element arrays of char and it used NUL-padded
+3-character string literals (i.e. 4-element initializers, with
+completely pointless zeroes at the end).
+
+That had been spotted by gcc-15[*]; prior to that gcc quietly dropped
+the 4th element of initializers.
+
+However, none of that had been needed in the first place - all this
+array is used for is checking that the first directory entry in root
+directory is "." and the second - "..". The check had been expressed as
+a loop, using that match_root[] array. Since there is no chance that we
+ever want to extend that list of entries, the entire thing is much too
+fancy for its own good; what we need is just a couple of explicit
+memcmp() and that's it.
+
+[*]: fs/qnx6/inode.c: In function ‘qnx6_checkroot’:
+fs/qnx6/inode.c:182:41: error: initializer-string for array of ‘char’ is too long [-Werror=unterminated-string-initialization]
+  182 |         static char match_root[2][3] = {".\0\0", "..\0"};
+      |                                         ^~~~~~~
+fs/qnx6/inode.c:182:50: error: initializer-string for array of ‘char’ is too long [-Werror=unterminated-string-initialization]
+  182 |         static char match_root[2][3] = {".\0\0", "..\0"};
+      |                                                  ^~~~~~
+
+Signed-off-by: Brahmajit Das
+Link: https://lore.kernel.org/r/20241004195132.1393968-1-brahmajit.xyz@gmail.com
+Acked-by: Al Viro
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ fs/qnx6/inode.c | 11 ++++-------
+ 1 file changed, 4 insertions(+), 7 deletions(-)
+
+diff --git a/fs/qnx6/inode.c b/fs/qnx6/inode.c
+index 85925ec0051a9..3310d1ad4d0e9 100644
+--- a/fs/qnx6/inode.c
++++ b/fs/qnx6/inode.c
+@@ -179,8 +179,7 @@ static int qnx6_statfs(struct dentry *dentry, struct kstatfs *buf)
+  */
+ static const char *qnx6_checkroot(struct super_block *s)
+ {
+-	static char match_root[2][3] = {".\0\0", "..\0"};
+-	int i, error = 0;
++	int error = 0;
+ 	struct qnx6_dir_entry *dir_entry;
+ 	struct inode *root = d_inode(s->s_root);
+ 	struct address_space *mapping = root->i_mapping;
+@@ -189,11 +188,9 @@ static const char *qnx6_checkroot(struct super_block *s)
+ 	if (IS_ERR(folio))
+ 		return "error reading root directory";
+ 	dir_entry = kmap_local_folio(folio, 0);
+-	for (i = 0; i < 2; i++) {
+-		/* maximum 3 bytes - due to match_root limitation */
+-		if (strncmp(dir_entry[i].de_fname, match_root[i], 3))
+-			error = 1;
+-	}
++	if (memcmp(dir_entry[0].de_fname, ".", 2) ||
++	    memcmp(dir_entry[1].de_fname, "..", 3))
++		error = 1;
+ 	folio_release_kmap(folio, dir_entry);
+ 	if (error)
+ 		return "error reading root directory.";
+-- 
+2.39.5
+
diff --git a/queue-6.12/gpio-sim-lock-up-configfs-that-an-instantiated-devic.patch b/queue-6.12/gpio-sim-lock-up-configfs-that-an-instantiated-devic.patch
new file mode 100644
index 0000000000..4859521ef8
--- /dev/null
+++ b/queue-6.12/gpio-sim-lock-up-configfs-that-an-instantiated-devic.patch
@@ -0,0 +1,94 @@
+From 2aa0b92ddd89e5292dfbce9ecba68397acf0b52e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 3 Jan 2025 23:18:29 +0900
+Subject: gpio: sim: lock up configfs that an instantiated device depends on
+
+From: Koichiro Den
+
+[ Upstream commit 8bd76b3d3f3af7ac2898b6a27ad90c444fec418f ]
+
+Once a sim device is instantiated and actively used, allowing rmdir for
+its configfs serves no purpose and can be confusing. Effectively,
+arbitrary users start depending on its existence.
+
+Make the subsystem itself depend on the configfs entry for a sim device
+while it is in active use.
+
+Signed-off-by: Koichiro Den
+Link: https://lore.kernel.org/r/20250103141829.430662-5-koichiro.den@canonical.com
+Signed-off-by: Bartosz Golaszewski
+Signed-off-by: Sasha Levin
+---
+ drivers/gpio/gpio-sim.c | 48 +++++++++++++++++++++++++++++++++++------
+ 1 file changed, 41 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index dcca1d7f173e5..deedacdeb2395 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -1030,6 +1030,30 @@ static void gpio_sim_device_deactivate(struct gpio_sim_device *dev)
+ 	dev->pdev = NULL;
+ }
+ 
++static void
++gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock)
++{
++	struct configfs_subsystem *subsys = dev->group.cg_subsys;
++	struct gpio_sim_bank *bank;
++	struct gpio_sim_line *line;
++
++	/*
++	 * The device only needs to depend on leaf line entries. This is
++	 * sufficient to lock up all the configfs entries that the
++	 * instantiated, alive device depends on.
++	 */
++	list_for_each_entry(bank, &dev->bank_list, siblings) {
++		list_for_each_entry(line, &bank->line_list, siblings) {
++			if (lock)
++				WARN_ON(configfs_depend_item_unlocked(
++						subsys, &line->group.cg_item));
++			else
++				configfs_undepend_item_unlocked(
++						&line->group.cg_item);
++		}
++	}
++}
++
+ static ssize_t
+ gpio_sim_device_config_live_store(struct config_item *item,
+ 				  const char *page, size_t count)
+@@ -1042,14 +1066,24 @@ gpio_sim_device_config_live_store(struct config_item *item,
+ 	if (ret)
+ 		return ret;
+ 
+-	guard(mutex)(&dev->lock);
++	if (live)
++		gpio_sim_device_lockup_configfs(dev, true);
+ 
+-	if (live == gpio_sim_device_is_live(dev))
+-		ret = -EPERM;
+-	else if (live)
+-		ret = gpio_sim_device_activate(dev);
+-	else
+-		gpio_sim_device_deactivate(dev);
++	scoped_guard(mutex, &dev->lock) {
++		if (live == gpio_sim_device_is_live(dev))
++			ret = -EPERM;
++		else if (live)
++			ret = gpio_sim_device_activate(dev);
++		else
++			gpio_sim_device_deactivate(dev);
++	}
++
++	/*
++	 * Undepend is required only if device disablement (live == 0)
++	 * succeeds or if device enablement (live == 1) fails.
++	 */
++	if (live == !!ret)
++		gpio_sim_device_lockup_configfs(dev, false);
+ 
+ 	return ret ?: count;
+ }
+-- 
+2.39.5
+
diff --git a/queue-6.12/gpio-virtuser-lock-up-configfs-that-an-instantiated-.patch b/queue-6.12/gpio-virtuser-lock-up-configfs-that-an-instantiated-.patch
new file mode 100644
index 0000000000..ac6c0a0c30
--- /dev/null
+++ b/queue-6.12/gpio-virtuser-lock-up-configfs-that-an-instantiated-.patch
@@ -0,0 +1,96 @@
+From 3fbd41c833cdc4b0867f1b6873f43ab5fb255f48 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 3 Jan 2025 23:18:28 +0900
+Subject: gpio: virtuser: lock up configfs that an instantiated device depends
+ on
+
+From: Koichiro Den
+
+[ Upstream commit c7c434c1dba955005f5161dae73f09c0a922cfa7 ]
+
+Once a virtuser device is instantiated and actively used, allowing rmdir
+for its configfs serves no purpose and can be confusing. Userspace
+interacts with the virtual consumer at arbitrary times, meaning it
+depends on its existence.
+
+Make the subsystem itself depend on the configfs entry for a virtuser
+device while it is in active use.
+
+Signed-off-by: Koichiro Den
+Link: https://lore.kernel.org/r/20250103141829.430662-4-koichiro.den@canonical.com
+Signed-off-by: Bartosz Golaszewski
+Signed-off-by: Sasha Levin
+---
+ drivers/gpio/gpio-virtuser.c | 47 ++++++++++++++++++++++++++++++------
+ 1 file changed, 40 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/gpio/gpio-virtuser.c b/drivers/gpio/gpio-virtuser.c
+index d6244f0d3bc75..e89f299f21400 100644
+--- a/drivers/gpio/gpio-virtuser.c
++++ b/drivers/gpio/gpio-virtuser.c
+@@ -1546,6 +1546,30 @@ gpio_virtuser_device_deactivate(struct gpio_virtuser_device *dev)
+ 	dev->pdev = NULL;
+ }
+ 
++static void
++gpio_virtuser_device_lockup_configfs(struct gpio_virtuser_device *dev, bool lock)
++{
++	struct configfs_subsystem *subsys = dev->group.cg_subsys;
++	struct gpio_virtuser_lookup_entry *entry;
++	struct gpio_virtuser_lookup *lookup;
++
++	/*
++	 * The device only needs to depend on leaf lookup entries. This is
++	 * sufficient to lock up all the configfs entries that the
++	 * instantiated, alive device depends on.
++	 */
++	list_for_each_entry(lookup, &dev->lookup_list, siblings) {
++		list_for_each_entry(entry, &lookup->entry_list, siblings) {
++			if (lock)
++				WARN_ON(configfs_depend_item_unlocked(
++						subsys, &entry->group.cg_item));
++			else
++				configfs_undepend_item_unlocked(
++						&entry->group.cg_item);
++		}
++	}
++}
++
+ static ssize_t
+ gpio_virtuser_device_config_live_store(struct config_item *item,
+ 				       const char *page, size_t count)
+@@ -1558,15 +1582,24 @@ gpio_virtuser_device_config_live_store(struct config_item *item,
+ 	if (ret)
+ 		return ret;
+ 
+-	guard(mutex)(&dev->lock);
++	if (live)
++		gpio_virtuser_device_lockup_configfs(dev, true);
+ 
+-	if (live == gpio_virtuser_device_is_live(dev))
+-		return -EPERM;
++	scoped_guard(mutex, &dev->lock) {
++		if (live == gpio_virtuser_device_is_live(dev))
++			ret = -EPERM;
++		else if (live)
++			ret = gpio_virtuser_device_activate(dev);
++		else
++			gpio_virtuser_device_deactivate(dev);
++	}
+ 
+-	if (live)
+-		ret = gpio_virtuser_device_activate(dev);
+-	else
+-		gpio_virtuser_device_deactivate(dev);
++	/*
++	 * Undepend is required only if device disablement (live == 0)
++	 * succeeds or if device enablement (live == 1) fails.
++	 */
++	if (live == !!ret)
++		gpio_virtuser_device_lockup_configfs(dev, false);
+ 
+ 	return ret ?: count;
+ }
+-- 
+2.39.5
+
diff --git a/queue-6.12/hfs-sanity-check-the-root-record.patch b/queue-6.12/hfs-sanity-check-the-root-record.patch
new file mode 100644
index 0000000000..2aa106879f
--- /dev/null
+++ b/queue-6.12/hfs-sanity-check-the-root-record.patch
@@ -0,0 +1,56 @@
+From 017cfd9a39027c2ac40dcf726d636efa780c9926 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 30 Nov 2024 21:14:19 -0800
+Subject: hfs: Sanity check the root record
+
+From: Leo Stone
+
+[ Upstream commit b905bafdea21a75d75a96855edd9e0b6051eee30 ]
+
+In the syzbot reproducer, the hfs_cat_rec for the root dir has type
+HFS_CDR_FIL after being read with hfs_bnode_read() in hfs_super_fill().
+This indicates it should be used as an hfs_cat_file, which is 102 bytes.
+Only the first 70 bytes of that struct are initialized, however,
+because the entrylength passed into hfs_bnode_read() is still the length of
+a directory record. This causes uninitialized values to be used later on,
+when the hfs_cat_rec union is treated as the larger hfs_cat_file struct.
+
+Add a check to make sure the retrieved record has the correct type
+for the root directory (HFS_CDR_DIR), and make sure we load the correct
+number of bytes for a directory record.
+
+Reported-by: syzbot+2db3c7526ba68f4ea776@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=2db3c7526ba68f4ea776
+Tested-by: syzbot+2db3c7526ba68f4ea776@syzkaller.appspotmail.com
+Tested-by: Leo Stone
+Signed-off-by: Leo Stone
+Link: https://lore.kernel.org/r/20241201051420.77858-1-leocstone@gmail.com
+Reviewed-by: Jan Kara
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ fs/hfs/super.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/fs/hfs/super.c b/fs/hfs/super.c
+index eeac99765f0d6..cf13b5cc10848 100644
+--- a/fs/hfs/super.c
++++ b/fs/hfs/super.c
+@@ -419,11 +419,13 @@ static int hfs_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto bail_no_root;
+ 	res = hfs_cat_find_brec(sb, HFS_ROOT_CNID, &fd);
+ 	if (!res) {
+-		if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) {
++		if (fd.entrylength != sizeof(rec.dir)) {
+ 			res = -EIO;
+ 			goto bail_hfs_find;
+ 		}
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength);
++		if (rec.type != HFS_CDR_DIR)
++			res = -EIO;
+ 	}
+ 	if (res)
+ 		goto bail_hfs_find;
+-- 
+2.39.5
+
diff --git a/queue-6.12/iomap-avoid-avoid-truncating-64-bit-offset-to-32-bit.patch b/queue-6.12/iomap-avoid-avoid-truncating-64-bit-offset-to-32-bit.patch
new file mode 100644
index 0000000000..aabf9ca70b
--- /dev/null
+++ b/queue-6.12/iomap-avoid-avoid-truncating-64-bit-offset-to-32-bit.patch
@@ -0,0 +1,39 @@
+From c38e829840f4adbc1a0dbc94157d5c360cdffd6b Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 8 Jan 2025 20:11:50 -0800
+Subject: iomap: avoid avoid truncating 64-bit offset to 32 bits
+
+From: Marco Nelissen
+
+[ Upstream commit c13094b894de289514d84b8db56d1f2931a0bade ]
+
+on 32-bit kernels, iomap_write_delalloc_scan() was inadvertently using a
+32-bit position due to folio_next_index() returning an unsigned long.
+This could lead to an infinite loop when writing to an xfs filesystem.
+
+Signed-off-by: Marco Nelissen
+Link: https://lore.kernel.org/r/20250109041253.2494374-1-marco.nelissen@gmail.com
+Reviewed-by: Darrick J. Wong
+Reviewed-by: Christoph Hellwig
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ fs/iomap/buffered-io.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 25d1ede6bb0eb..1bad460275ebe 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1138,7 +1138,7 @@ static void iomap_write_delalloc_scan(struct inode *inode,
+ 			start_byte, end_byte, iomap, punch);
+ 
+ 		/* move offset to start of next folio in range */
+-		start_byte = folio_next_index(folio) << PAGE_SHIFT;
++		start_byte = folio_pos(folio) + folio_size(folio);
+ 		folio_unlock(folio);
+ 		folio_put(folio);
+ 	}
+-- 
+2.39.5
+
diff --git a/queue-6.12/kheaders-ignore-silly-rename-files.patch b/queue-6.12/kheaders-ignore-silly-rename-files.patch
new file mode 100644
index 0000000000..854500381e
--- /dev/null
+++ b/queue-6.12/kheaders-ignore-silly-rename-files.patch
@@ -0,0 +1,60 @@
+From 591328e25b2074fcff39ebe41e245f185910fa45 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 13 Dec 2024 13:50:01 +0000
+Subject: kheaders: Ignore silly-rename files
+
+From: David Howells
+
+[ Upstream commit 973b710b8821c3401ad7a25360c89e94b26884ac ]
+
+Tell tar to ignore silly-rename files (".__afs*" and ".nfs*") when building
+the header archive. These occur when a file that is open is unlinked
+locally, but hasn't yet been closed. Such files are visible to the user
+via the getdents() syscall and so programs may want to do things with them.
+
+During the kernel build, such files may be made during the processing of
+header files and the cleanup may get deferred by fput() which may result in
+tar seeing these files when it reads the directory, but they may have
+disappeared by the time it tries to open them, causing tar to fail with an
+error. Further, we don't want to include them in the tarball if they still
+exist.
+
+With CONFIG_HEADERS_INSTALL=y, something like the following may be seen:
+
+   find: './kernel/.tmp_cpio_dir/include/dt-bindings/reset/.__afs2080': No such file or directory
+   tar: ./include/linux/greybus/.__afs3C95: File removed before we read it
+
+The find warning doesn't seem to cause a problem.
+
+Fix this by telling tar when called from in gen_kheaders.sh to exclude such
+files. This only affects afs and nfs; cifs uses the Windows Hidden
+attribute to prevent the file from being seen.
+
+Signed-off-by: David Howells
+Link: https://lore.kernel.org/r/20241213135013.2964079-2-dhowells@redhat.com
+cc: Masahiro Yamada
+cc: Marc Dionne
+cc: linux-afs@lists.infradead.org
+cc: linux-nfs@vger.kernel.org
+cc: linux-kernel@vger.kernel.org
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ kernel/gen_kheaders.sh | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index 383fd43ac6122..7e1340da5acae 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -89,6 +89,7 @@ find $cpio_dir -type f -print0 |
+ 
+ # Create archive and try to normalize metadata for reproducibility.
+ tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
++    --exclude=".__afs*" --exclude=".nfs*" \
+     --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
+     -I $XZ -cf $tarfile -C $cpio_dir/ . > /dev/null
+ 
+-- 
+2.39.5
+
diff --git a/queue-6.12/mac802154-check-local-interfaces-before-deleting-sda.patch b/queue-6.12/mac802154-check-local-interfaces-before-deleting-sda.patch
new file mode 100644
index 0000000000..3be76d369a
--- /dev/null
+++ b/queue-6.12/mac802154-check-local-interfaces-before-deleting-sda.patch
@@ -0,0 +1,100 @@
+From 4917f64d31b481029ce14a7c53edba6bc9961915 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 13 Nov 2024 17:51:29 +0800
+Subject: mac802154: check local interfaces before deleting sdata list
+
+From: Lizhi Xu
+
+[ Upstream commit eb09fbeb48709fe66c0d708aed81e910a577a30a ]
+
+syzkaller reported a corrupted list in ieee802154_if_remove. [1]
+
+Remove an IEEE 802.15.4 network interface after unregister an IEEE 802.15.4
+hardware device from the system.
+
+CPU0					CPU1
+====					====
+genl_family_rcv_msg_doit		ieee802154_unregister_hw
+ieee802154_del_iface			ieee802154_remove_interfaces
+rdev_del_virtual_intf_deprecated	list_del(&sdata->list)
+ieee802154_if_remove
+list_del_rcu
+
+The net device has been unregistered, since the rcu grace period,
+unregistration must be run before ieee802154_if_remove.
+
+To avoid this issue, add a check for local->interfaces before deleting
+sdata list.
+
+[1]
+kernel BUG at lib/list_debug.c:58!
+Oops: invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI
+CPU: 0 UID: 0 PID: 6277 Comm: syz-executor157 Not tainted 6.12.0-rc6-syzkaller-00005-g557329bcecc2 #0
+Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
+RIP: 0010:__list_del_entry_valid_or_report+0xf4/0x140 lib/list_debug.c:56
+Code: e8 a1 7e 00 07 90 0f 0b 48 c7 c7 e0 37 60 8c 4c 89 fe e8 8f 7e 00 07 90 0f 0b 48 c7 c7 40 38 60 8c 4c 89 fe e8 7d 7e 00 07 90 <0f> 0b 48 c7 c7 a0 38 60 8c 4c 89 fe e8 6b 7e 00 07 90 0f 0b 48 c7
+RSP: 0018:ffffc9000490f3d0 EFLAGS: 00010246
+RAX: 000000000000004e RBX: dead000000000122 RCX: d211eee56bb28d00
+RDX: 0000000000000000 RSI: 0000000080000000 RDI: 0000000000000000
+RBP: ffff88805b278dd8 R08: ffffffff8174a12c R09: 1ffffffff2852f0d
+R10: dffffc0000000000 R11: fffffbfff2852f0e R12: dffffc0000000000
+R13: dffffc0000000000 R14: dead000000000100 R15: ffff88805b278cc0
+FS:  0000555572f94380(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
+CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+CR2: 000056262e4a3000 CR3: 0000000078496000 CR4: 00000000003526f0
+DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
+Call Trace:
+ 
+ __list_del_entry_valid include/linux/list.h:124 [inline]
+ __list_del_entry include/linux/list.h:215 [inline]
+ list_del_rcu include/linux/rculist.h:157 [inline]
+ ieee802154_if_remove+0x86/0x1e0 net/mac802154/iface.c:687
+ rdev_del_virtual_intf_deprecated net/ieee802154/rdev-ops.h:24 [inline]
+ ieee802154_del_iface+0x2c0/0x5c0 net/ieee802154/nl-phy.c:323
+ genl_family_rcv_msg_doit net/netlink/genetlink.c:1115 [inline]
+ genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
+ genl_rcv_msg+0xb14/0xec0 net/netlink/genetlink.c:1210
+ netlink_rcv_skb+0x1e3/0x430 net/netlink/af_netlink.c:2551
+ genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
+ netlink_unicast_kernel net/netlink/af_netlink.c:1331 [inline]
+ netlink_unicast+0x7f6/0x990 net/netlink/af_netlink.c:1357
+ netlink_sendmsg+0x8e4/0xcb0 net/netlink/af_netlink.c:1901
+ sock_sendmsg_nosec net/socket.c:729 [inline]
+ __sock_sendmsg+0x221/0x270 net/socket.c:744
+ ____sys_sendmsg+0x52a/0x7e0 net/socket.c:2607
+ ___sys_sendmsg net/socket.c:2661 [inline]
+ __sys_sendmsg+0x292/0x380 net/socket.c:2690
+ do_syscall_x64 arch/x86/entry/common.c:52 [inline]
+ do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
+ entry_SYSCALL_64_after_hwframe+0x77/0x7f
+
+Reported-and-tested-by: syzbot+985f827280dc3a6e7e92@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=985f827280dc3a6e7e92
+Signed-off-by: Lizhi Xu
+Reviewed-by: Miquel Raynal
+Link: https://lore.kernel.org/20241113095129.1457225-1-lizhi.xu@windriver.com
+Signed-off-by: Stefan Schmidt
+Signed-off-by: Sasha Levin
+---
+ net/mac802154/iface.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/net/mac802154/iface.c b/net/mac802154/iface.c
+index c0e2da5072bea..9e4631fade90c 100644
+--- a/net/mac802154/iface.c
++++ b/net/mac802154/iface.c
+@@ -684,6 +684,10 @@ void ieee802154_if_remove(struct ieee802154_sub_if_data *sdata)
+ 	ASSERT_RTNL();
+ 
+ 	mutex_lock(&sdata->local->iflist_mtx);
++	if (list_empty(&sdata->local->interfaces)) {
++		mutex_unlock(&sdata->local->iflist_mtx);
++		return;
++	}
+ 	list_del_rcu(&sdata->list);
+ 	mutex_unlock(&sdata->local->iflist_mtx);
+ 
+-- 
+2.39.5
+
diff --git a/queue-6.12/netfs-fix-non-contiguous-donation-between-completed-.patch b/queue-6.12/netfs-fix-non-contiguous-donation-between-completed-.patch
new file mode 100644
index 0000000000..298d9b4be7
--- /dev/null
+++ b/queue-6.12/netfs-fix-non-contiguous-donation-between-completed-.patch
@@ -0,0 +1,82 @@
+From d712fb757e6ea8e0a3bd69826237cc850d986808 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 13 Dec 2024 13:50:02 +0000
+Subject: netfs: Fix non-contiguous donation between completed reads
+
+From: David Howells
+
+[ Upstream commit c8b90d40d5bba8e6fba457b8a7c10d3c0d467e37 ]
+
+When a read subrequest finishes, if it doesn't have sufficient coverage to
+complete the folio(s) covering either side of it, it will donate the excess
+coverage to the adjacent subrequests on either side, offloading
+responsibility for unlocking the folio(s) covered to them.
+
+Now, preference is given to donating down to a lower file offset over
+donating up because that check is done first - but there's no check that
+the lower subreq is actually contiguous, and so we can end up donating
+incorrectly.
+
+The scenario seen[1] is that an 8MiB readahead request spanning four 2MiB
+folios is split into eight 1MiB subreqs (numbered 1 through 8). These
+terminate in the order 1,6,2,5,3,7,4,8. What happens is:
+
+ - 1 donates to 2
+ - 6 donates to 5
+ - 2 completes, unlocking the first folio (with 1).
+ - 5 completes, unlocking the third folio (with 6).
+ - 3 donates to 4
+ - 7 donates to 4 incorrectly
+ - 4 completes, unlocking the second folio (with 3), but can't use
+   the excess from 7.
+ - 8 donates to 4, also incorrectly.
+
+Fix this by preventing downward donation if the subreqs are not contiguous
+(in the example above, 7 donates to 4 across the gap left by 5 and 6).
+
+Reported-by: Shyam Prasad N
+Closes: https://lore.kernel.org/r/CANT5p=qBwjBm-D8soFVVtswGEfmMtQXVW83=TNfUtvyHeFQZBA@mail.gmail.com/
+Signed-off-by: David Howells
+Link: https://lore.kernel.org/r/526707.1733224486@warthog.procyon.org.uk/ [1]
+Link: https://lore.kernel.org/r/20241213135013.2964079-3-dhowells@redhat.com
+cc: Steve French
+cc: Paulo Alcantara
+cc: Jeff Layton
+cc: linux-cifs@vger.kernel.org
+cc: netfs@lists.linux.dev
+cc: linux-fsdevel@vger.kernel.org
+Signed-off-by: Christian Brauner
+Signed-off-by: Sasha Levin
+---
+ fs/netfs/read_collect.c | 9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index e70eb4ea21c03..a44132c986538 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -249,16 +249,17 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
+ 
+ 	/* Deal with the trickiest case: that this subreq is in the middle of a
+ 	 * folio, not touching either edge, but finishes first.  In such a
+-	 * case, we donate to the previous subreq, if there is one, so that the
+-	 * donation is only handled when that completes - and remove this
+-	 * subreq from the list.
++	 * case, we donate to the previous subreq, if there is one and if it is
++	 * contiguous, so that the donation is only handled when that completes
++	 * - and remove this subreq from the list.
+ 	 *
+ 	 * If the previous subreq finished first, we will have acquired their
+ 	 * donation and should be able to unlock folios and/or donate nextwards.
 */ + if (!subreq->consumed && + !prev_donated && +- !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { ++ !list_is_first(&subreq->rreq_link, &rreq->subrequests) && ++ subreq->start == prev->start + prev->len) { + prev = list_prev_entry(subreq, rreq_link); + WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); + subreq->start += subreq->len; +-- +2.39.5 + diff --git a/queue-6.12/nvmet-propagate-npwg-topology.patch b/queue-6.12/nvmet-propagate-npwg-topology.patch new file mode 100644 index 0000000000..c2fcf1a8ba --- /dev/null +++ b/queue-6.12/nvmet-propagate-npwg-topology.patch @@ -0,0 +1,39 @@ +From fa5f7770ca55b61db396207d085d0d00f06ab312 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 17 Dec 2024 18:33:25 -0800 +Subject: nvmet: propagate npwg topology + +From: Luis Chamberlain + +[ Upstream commit b579d6fdc3a9149bb4d2b3133cc0767130ed13e6 ] + +Ensure we propagate npwg to the target as well instead +of assuming it's the same as logical blocks per physical block. + +This ensures that devices with large IUs have their information +properly propagated to the target. + +Signed-off-by: Luis Chamberlain +Reviewed-by: Sagi Grimberg +Signed-off-by: Keith Busch +Signed-off-by: Sasha Levin +--- + drivers/nvme/target/io-cmd-bdev.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c +index 0bda83d0fc3e0..eaf31c823cbe8 100644 +--- a/drivers/nvme/target/io-cmd-bdev.c ++++ b/drivers/nvme/target/io-cmd-bdev.c +@@ -36,7 +36,7 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id) + */ + id->nsfeat |= 1 << 4; + /* NPWG = Namespace Preferred Write Granularity. 0's based */ +- id->npwg = lpp0b; ++ id->npwg = to0based(bdev_io_min(bdev) / bdev_logical_block_size(bdev)); + /* NPWA = Namespace Preferred Write Alignment. 0's based */ + id->npwa = id->npwg; + /* NPDG = Namespace Preferred Deallocate Granularity.
0's based */ +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-intel-power-domains-add-clearwater-fore.patch b/queue-6.12/platform-x86-intel-power-domains-add-clearwater-fore.patch new file mode 100644 index 0000000000..f5169a1431 --- /dev/null +++ b/queue-6.12/platform-x86-intel-power-domains-add-clearwater-fore.patch @@ -0,0 +1,39 @@ +From 50a1f17cc45544e35a0c1b1f5b83c76b94a03558 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 3 Jan 2025 07:52:53 -0800 +Subject: platform/x86/intel: power-domains: Add Clearwater Forest support +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Srinivas Pandruvada + +[ Upstream commit bee9a0838fd223823e5a6d85c055ab1691dc738e ] + +Add Clearwater Forest support (INTEL_ATOM_DARKMONT_X) to tpmi_cpu_ids +to support domain id mappings. + +Signed-off-by: Srinivas Pandruvada +Link: https://lore.kernel.org/r/20250103155255.1488139-1-srinivas.pandruvada@linux.intel.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/intel/tpmi_power_domains.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/platform/x86/intel/tpmi_power_domains.c b/drivers/platform/x86/intel/tpmi_power_domains.c +index 0609a8320f7ec..12fb0943b5dc3 100644 +--- a/drivers/platform/x86/intel/tpmi_power_domains.c ++++ b/drivers/platform/x86/intel/tpmi_power_domains.c +@@ -81,6 +81,7 @@ static const struct x86_cpu_id tpmi_cpu_ids[] = { + X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, NULL), + X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, NULL), + X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, NULL), ++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, NULL), + X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, NULL), + X86_MATCH_VFM(INTEL_PANTHERCOVE_X, NULL), + {} +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-isst-add-clearwater-forest-to-support-l.patch b/queue-6.12/platform-x86-isst-add-clearwater-forest-to-support-l.patch new file mode 100644 index 0000000000..a4a9618bf3 --- /dev/null +++ 
b/queue-6.12/platform-x86-isst-add-clearwater-forest-to-support-l.patch @@ -0,0 +1,39 @@ +From 02f7348667dfc0ea8595d9f5a5e0a37f0d9cd393 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 3 Jan 2025 07:52:54 -0800 +Subject: platform/x86: ISST: Add Clearwater Forest to support list +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Srinivas Pandruvada + +[ Upstream commit cc1ff7bc1bb378e7c46992c977b605e97d908801 ] + +Add Clearwater Forest (INTEL_ATOM_DARKMONT_X) to SST support list by +adding to isst_cpu_ids. + +Signed-off-by: Srinivas Pandruvada +Link: https://lore.kernel.org/r/20250103155255.1488139-2-srinivas.pandruvada@linux.intel.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/intel/speed_select_if/isst_if_common.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +index 1e46e30dae966..dbcd3087aaa4b 100644 +--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c ++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +@@ -804,6 +804,7 @@ EXPORT_SYMBOL_GPL(isst_if_cdev_unregister); + static const struct x86_cpu_id isst_cpu_ids[] = { + X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, SST_HPM_SUPPORTED), + X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, SST_HPM_SUPPORTED), ++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, SST_HPM_SUPPORTED), + X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, 0), + X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, SST_HPM_SUPPORTED), + X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, SST_HPM_SUPPORTED), +-- +2.39.5 + diff --git a/queue-6.12/poll_wait-add-mb-to-fix-theoretical-race-between-wai.patch b/queue-6.12/poll_wait-add-mb-to-fix-theoretical-race-between-wai.patch new file mode 100644 index 0000000000..0df2b0ced7 --- /dev/null +++ b/queue-6.12/poll_wait-add-mb-to-fix-theoretical-race-between-wai.patch @@ -0,0 +1,67 @@ +From 
268279373d5cc4dee295299c9a797b22eeefa052 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 7 Jan 2025 17:27:17 +0100 +Subject: poll_wait: add mb() to fix theoretical race between + waitqueue_active() and .poll() + +From: Oleg Nesterov + +[ Upstream commit cacd9ae4bf801ff4125d8961bb9a3ba955e51680 ] + +As the comment above waitqueue_active() explains, it can only be used +if both waker and waiter have mb()'s that pair with each other. However +__pollwait() is broken in this respect. + +This is not pipe-specific, but let's look at pipe_poll() for example: + + poll_wait(...); // -> __pollwait() -> add_wait_queue() + + LOAD(pipe->head); + LOAD(pipe->head); + +In theory these LOAD()'s can leak into the critical section inside +add_wait_queue() and can happen before list_add(entry, wq_head), in this +case pipe_poll() can race with wakeup_pipe_readers/writers which do + + smp_mb(); + if (waitqueue_active(wq_head)) + wake_up_interruptible(wq_head); + +There are more __pollwait()-like functions (grep init_poll_funcptr), and +it seems that at least ep_ptable_queue_proc() has the same problem, so the +patch adds smp_mb() into poll_wait(). + +Link: https://lore.kernel.org/all/20250102163320.GA17691@redhat.com/ +Signed-off-by: Oleg Nesterov +Link: https://lore.kernel.org/r/20250107162717.GA18922@redhat.com +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + include/linux/poll.h | 10 +++++++++- + 1 file changed, 9 insertions(+), 1 deletion(-) + +diff --git a/include/linux/poll.h b/include/linux/poll.h +index d1ea4f3714a84..fc641b50f1298 100644 +--- a/include/linux/poll.h ++++ b/include/linux/poll.h +@@ -41,8 +41,16 @@ typedef struct poll_table_struct { + + static inline void poll_wait(struct file * filp, wait_queue_head_t * wait_address, poll_table *p) + { +- if (p && p->_qproc && wait_address) ++ if (p && p->_qproc && wait_address) { + p->_qproc(filp, wait_address, p); ++ /* ++ * This memory barrier is paired in the wq_has_sleeper(). 
++ * See the comment above prepare_to_wait(), we need to ++ * ensure that subsequent tests in this thread can't be ++ * reordered with __add_wait_queue() in _qproc() paths. ++ */ ++ smp_mb(); ++ } + } + + /* +-- +2.39.5 + diff --git a/queue-6.12/rdma-bnxt_re-fix-to-export-port-num-to-ib_query_qp.patch b/queue-6.12/rdma-bnxt_re-fix-to-export-port-num-to-ib_query_qp.patch new file mode 100644 index 0000000000..3e84bd1900 --- /dev/null +++ b/queue-6.12/rdma-bnxt_re-fix-to-export-port-num-to-ib_query_qp.patch @@ -0,0 +1,81 @@ +From 239f4fe016a9cd4cb0085f36cf642535cdf2394c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 11 Dec 2024 14:09:30 +0530 +Subject: RDMA/bnxt_re: Fix to export port num to ib_query_qp + +From: Hongguang Gao + +[ Upstream commit 34db8ec931b84d1426423f263b1927539e73b397 ] + +Current driver implementation doesn't populate the port_num +field in query_qp. Adding the code to convert internal firmware +port id to ibv defined port number and export it. + +Reviewed-by: Saravanan Vajravel +Reviewed-by: Kalesh AP +Signed-off-by: Hongguang Gao +Signed-off-by: Selvin Xavier +Link: https://patch.msgid.link/20241211083931.968831-5-kalesh-anakkur.purayil@broadcom.com +Signed-off-by: Leon Romanovsky +Signed-off-by: Sasha Levin +--- + drivers/infiniband/hw/bnxt_re/ib_verbs.c | 1 + + drivers/infiniband/hw/bnxt_re/ib_verbs.h | 4 ++++ + drivers/infiniband/hw/bnxt_re/qplib_fp.c | 1 + + drivers/infiniband/hw/bnxt_re/qplib_fp.h | 1 + + 4 files changed, 7 insertions(+) + +diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c +index b20cffcc3e7d2..14e434ff51ede 100644 +--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c ++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c +@@ -2269,6 +2269,7 @@ int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr, + qp_attr->retry_cnt = qplib_qp->retry_cnt; + qp_attr->rnr_retry = qplib_qp->rnr_retry; + qp_attr->min_rnr_timer = qplib_qp->min_rnr_timer; ++ qp_attr->port_num = 
__to_ib_port_num(qplib_qp->port_id); + qp_attr->rq_psn = qplib_qp->rq.psn; + qp_attr->max_rd_atomic = qplib_qp->max_rd_atomic; + qp_attr->sq_psn = qplib_qp->sq.psn; +diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h +index b789e47ec97a8..9cd8f770d1b27 100644 +--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h ++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h +@@ -264,6 +264,10 @@ void bnxt_re_dealloc_ucontext(struct ib_ucontext *context); + int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma); + void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry); + ++static inline u32 __to_ib_port_num(u16 port_id) ++{ ++ return (u32)port_id + 1; ++} + + unsigned long bnxt_re_lock_cqs(struct bnxt_re_qp *qp); + void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags); +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +index 828e2f9808012..613b5fc70e13e 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c ++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +@@ -1479,6 +1479,7 @@ int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp) + qp->dest_qpn = le32_to_cpu(sb->dest_qp_id); + memcpy(qp->smac, sb->src_mac, 6); + qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id); ++ qp->port_id = le16_to_cpu(sb->port_id); + bail: + dma_free_coherent(&rcfw->pdev->dev, sbuf.size, + sbuf.sb, sbuf.dma_addr); +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h +index d8c71c024613b..6f02954eb1429 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h ++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h +@@ -298,6 +298,7 @@ struct bnxt_qplib_qp { + u32 dest_qpn; + u8 smac[6]; + u16 vlan_id; ++ u16 port_id; + u8 nw_type; + struct bnxt_qplib_ah ah; + +-- +2.39.5 + diff --git a/queue-6.12/sched_ext-fix-dsq_local_on-selftest.patch b/queue-6.12/sched_ext-fix-dsq_local_on-selftest.patch new file mode 100644 index 
0000000000..4d5ea581db --- /dev/null +++ b/queue-6.12/sched_ext-fix-dsq_local_on-selftest.patch @@ -0,0 +1,74 @@ +From 52849923c02940c8d557561f9981c7148db9d51b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 24 Dec 2024 14:09:15 -1000 +Subject: sched_ext: Fix dsq_local_on selftest + +From: Tejun Heo + +[ Upstream commit ce2b93fc1dfa1c82f2576aa571731c4e5dcc8dd7 ] + +The dsp_local_on selftest expects the scheduler to fail by trying to +schedule an e.g. CPU-affine task to the wrong CPU. However, this isn't +guaranteed to happen in the 1 second window that the test is running. +Besides, it's odd to have this particular exception path tested when there +are no other tests that verify that the interface is working at all - e.g. +the test would pass if dsp_local_on interface is completely broken and fails +on any attempt. + +Flip the test so that it verifies that the feature works. While at it, fix a +typo in the info message. + +Signed-off-by: Tejun Heo +Reported-by: Ihor Solodrai +Link: http://lkml.kernel.org/r/Z1n9v7Z6iNJ-wKmq@slm.duckdns.org +Signed-off-by: Tejun Heo +Signed-off-by: Sasha Levin +--- + tools/testing/selftests/sched_ext/dsp_local_on.bpf.c | 5 ++++- + tools/testing/selftests/sched_ext/dsp_local_on.c | 5 +++-- + 2 files changed, 7 insertions(+), 3 deletions(-) + +diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c +index 6325bf76f47ee..fbda6bf546712 100644 +--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c ++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c +@@ -43,7 +43,10 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev) + if (!p) + return; + +- target = bpf_get_prandom_u32() % nr_cpus; ++ if (p->nr_cpus_allowed == nr_cpus) ++ target = bpf_get_prandom_u32() % nr_cpus; ++ else ++ target = scx_bpf_task_cpu(p); + + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0); + bpf_task_release(p); +diff --git 
a/tools/testing/selftests/sched_ext/dsp_local_on.c b/tools/testing/selftests/sched_ext/dsp_local_on.c +index 472851b568548..0ff27e57fe430 100644 +--- a/tools/testing/selftests/sched_ext/dsp_local_on.c ++++ b/tools/testing/selftests/sched_ext/dsp_local_on.c +@@ -34,9 +34,10 @@ static enum scx_test_status run(void *ctx) + /* Just sleeping is fine, plenty of scheduling events happening */ + sleep(1); + +- SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_ERROR)); + bpf_link__destroy(link); + ++ SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG)); ++ + return SCX_TEST_PASS; + } + +@@ -50,7 +51,7 @@ static void cleanup(void *ctx) + struct scx_test dsp_local_on = { + .name = "dsp_local_on", + .description = "Verify we can directly dispatch tasks to a local DSQs " +- "from osp.dispatch()", ++ "from ops.dispatch()", + .setup = setup, + .run = run, + .cleanup = cleanup, +-- +2.39.5 + diff --git a/queue-6.12/sched_ext-keep-running-prev-when-prev-scx.slice-0.patch b/queue-6.12/sched_ext-keep-running-prev-when-prev-scx.slice-0.patch new file mode 100644 index 0000000000..a21a49ebd5 --- /dev/null +++ b/queue-6.12/sched_ext-keep-running-prev-when-prev-scx.slice-0.patch @@ -0,0 +1,70 @@ +From 41549ee72057203db4bab53a1c28c32865d71649 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 8 Jan 2025 16:47:10 +0800 +Subject: sched_ext: keep running prev when prev->scx.slice != 0 + +From: Henry Huang + +[ Upstream commit 30dd3b13f9de612ef7328ccffcf1a07d0d40ab51 ] + +When %SCX_OPS_ENQ_LAST is set and prev->scx.slice != 0, +@prev will be dispatched into the local DSQ in put_prev_task_scx(). +However, pick_task_scx() is executed before put_prev_task_scx(), +so it will not pick @prev. +Set %SCX_RQ_BAL_KEEP in balance_one() to ensure that pick_task_scx() +can pick @prev.
+ +Signed-off-by: Henry Huang +Acked-by: Andrea Righi +Signed-off-by: Tejun Heo +Signed-off-by: Sasha Levin +--- + kernel/sched/ext.c | 11 +++++++---- + 1 file changed, 7 insertions(+), 4 deletions(-) + +diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c +index f928a67a07d29..4c4681cb9337b 100644 +--- a/kernel/sched/ext.c ++++ b/kernel/sched/ext.c +@@ -2630,6 +2630,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev) + { + struct scx_dsp_ctx *dspc = this_cpu_ptr(scx_dsp_ctx); + bool prev_on_scx = prev->sched_class == &ext_sched_class; ++ bool prev_on_rq = prev->scx.flags & SCX_TASK_QUEUED; + int nr_loops = SCX_DSP_MAX_LOOPS; + + lockdep_assert_rq_held(rq); +@@ -2662,8 +2663,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev) + * See scx_ops_disable_workfn() for the explanation on the + * bypassing test. + */ +- if ((prev->scx.flags & SCX_TASK_QUEUED) && +- prev->scx.slice && !scx_rq_bypassing(rq)) { ++ if (prev_on_rq && prev->scx.slice && !scx_rq_bypassing(rq)) { + rq->scx.flags |= SCX_RQ_BAL_KEEP; + goto has_tasks; + } +@@ -2696,6 +2696,10 @@ static int balance_one(struct rq *rq, struct task_struct *prev) + + flush_dispatch_buf(rq); + ++ if (prev_on_rq && prev->scx.slice) { ++ rq->scx.flags |= SCX_RQ_BAL_KEEP; ++ goto has_tasks; ++ } + if (rq->scx.local_dsq.nr) + goto has_tasks; + if (consume_global_dsq(rq)) +@@ -2721,8 +2725,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev) + * Didn't find another task to run. Keep running @prev unless + * %SCX_OPS_ENQ_LAST is in effect. 
+ */ +- if ((prev->scx.flags & SCX_TASK_QUEUED) && +- (!static_branch_unlikely(&scx_ops_enq_last) || ++ if (prev_on_rq && (!static_branch_unlikely(&scx_ops_enq_last) || + scx_rq_bypassing(rq))) { + rq->scx.flags |= SCX_RQ_BAL_KEEP; + goto has_tasks; +-- +2.39.5 + diff --git a/queue-6.12/scsi-ufs-core-honor-runtime-system-pm-levels-if-set-.patch b/queue-6.12/scsi-ufs-core-honor-runtime-system-pm-levels-if-set-.patch new file mode 100644 index 0000000000..abbe8f80b0 --- /dev/null +++ b/queue-6.12/scsi-ufs-core-honor-runtime-system-pm-levels-if-set-.patch @@ -0,0 +1,50 @@ +From a052fb5f12ea4230e0e07b738138a19155417d25 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Dec 2024 22:20:42 +0530 +Subject: scsi: ufs: core: Honor runtime/system PM levels if set by host + controller drivers + +From: Manivannan Sadhasivam + +[ Upstream commit bb9850704c043e48c86cc9df90ee102e8a338229 ] + +Otherwise, the default levels will override the levels set by the host +controller drivers. + +Signed-off-by: Manivannan Sadhasivam +Link: https://lore.kernel.org/r/20241219-ufs-qcom-suspend-fix-v3-2-63c4b95a70b9@linaro.org +Reviewed-by: Bart Van Assche +Signed-off-by: Martin K. Petersen +Signed-off-by: Sasha Levin +--- + drivers/ufs/core/ufshcd.c | 9 ++++++--- + 1 file changed, 6 insertions(+), 3 deletions(-) + +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c +index 05b936ad353be..6cc9e61cca07d 100644 +--- a/drivers/ufs/core/ufshcd.c ++++ b/drivers/ufs/core/ufshcd.c +@@ -10589,14 +10589,17 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) + } + + /* +- * Set the default power management level for runtime and system PM. ++ * Set the default power management level for runtime and system PM if ++ * not set by the host controller drivers. + * Default power saving mode is to keep UFS link in Hibern8 state + * and UFS device in sleep state. 
+ */ +- hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( ++ if (!hba->rpm_lvl) ++ hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( + UFS_SLEEP_PWR_MODE, + UIC_LINK_HIBERN8_STATE); +- hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( ++ if (!hba->spm_lvl) ++ hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( + UFS_SLEEP_PWR_MODE, + UIC_LINK_HIBERN8_STATE); + +-- +2.39.5 + diff --git a/queue-6.12/scx-fix-maximal-bpf-selftest-prog.patch b/queue-6.12/scx-fix-maximal-bpf-selftest-prog.patch new file mode 100644 index 0000000000..a04440b169 --- /dev/null +++ b/queue-6.12/scx-fix-maximal-bpf-selftest-prog.patch @@ -0,0 +1,63 @@ +From cb308567ecd4bdd09bcfa08d8f9232030fddcb12 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 9 Dec 2024 09:29:24 -0600 +Subject: scx: Fix maximal BPF selftest prog + +From: David Vernet + +[ Upstream commit b8f614207b0d5e4abd6df8d5cb3cc11f009d1d93 ] + +maximal.bpf.c is still dispatching to and consuming from SCX_DSQ_GLOBAL. +Let's have it use its own DSQ to avoid any runtime errors. 
+ +Signed-off-by: David Vernet +Tested-by: Andrea Righi +Signed-off-by: Tejun Heo +Signed-off-by: Sasha Levin +--- + tools/testing/selftests/sched_ext/maximal.bpf.c | 8 +++++--- + 1 file changed, 5 insertions(+), 3 deletions(-) + +diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c +index 4c005fa718103..430f5e13bf554 100644 +--- a/tools/testing/selftests/sched_ext/maximal.bpf.c ++++ b/tools/testing/selftests/sched_ext/maximal.bpf.c +@@ -12,6 +12,8 @@ + + char _license[] SEC("license") = "GPL"; + ++#define DSQ_ID 0 ++ + s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu, + u64 wake_flags) + { +@@ -20,7 +22,7 @@ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu, + + void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags) + { +- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); ++ scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags); + } + + void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags) +@@ -28,7 +30,7 @@ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags) + + void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev) + { +- scx_bpf_dsq_move_to_local(SCX_DSQ_GLOBAL); ++ scx_bpf_dsq_move_to_local(DSQ_ID); + } + + void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags) +@@ -123,7 +125,7 @@ void BPF_STRUCT_OPS(maximal_cgroup_set_weight, struct cgroup *cgrp, u32 weight) + + s32 BPF_STRUCT_OPS_SLEEPABLE(maximal_init) + { +- return 0; ++ return scx_bpf_create_dsq(DSQ_ID, -1); + } + + void BPF_STRUCT_OPS(maximal_exit, struct scx_exit_info *info) +-- +2.39.5 + diff --git a/queue-6.12/selftests-sched_ext-fix-build-after-renames-in-sched.patch b/queue-6.12/selftests-sched_ext-fix-build-after-renames-in-sched.patch new file mode 100644 index 0000000000..271e0a1796 --- /dev/null +++ b/queue-6.12/selftests-sched_ext-fix-build-after-renames-in-sched.patch 
@@ -0,0 +1,238 @@ +From a9e7930540bc8ee05d28baf4c94265eeb2736094 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 21 Nov 2024 21:40:17 +0000 +Subject: selftests/sched_ext: fix build after renames in sched_ext API + +From: Ihor Solodrai + +[ Upstream commit ef7009decc30eb2515a64253791d61b72229c119 ] + +The selftests are failing to build on current tip of bpf-next and +sched_ext [1]. This has broken BPF CI [2] after merge from upstream. + +Use appropriate function names in the selftests according to the +recent changes in the sched_ext API [3]. + +[1] https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=fc39fb56917bb3cb53e99560ca3612a84456ada2 +[2] https://github.com/kernel-patches/bpf/actions/runs/11959327258/job/33340923745 +[3] https://lore.kernel.org/all/20241109194853.580310-1-tj@kernel.org/ + +Signed-off-by: Ihor Solodrai +Acked-by: Andrea Righi +Acked-by: David Vernet +Signed-off-by: Tejun Heo +Signed-off-by: Sasha Levin +--- + .../testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c | 2 +- + .../selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c | 4 ++-- + tools/testing/selftests/sched_ext/dsp_local_on.bpf.c | 2 +- + .../selftests/sched_ext/enq_select_cpu_fails.bpf.c | 2 +- + tools/testing/selftests/sched_ext/exit.bpf.c | 4 ++-- + tools/testing/selftests/sched_ext/maximal.bpf.c | 4 ++-- + tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c | 2 +- + .../selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c | 2 +- + .../testing/selftests/sched_ext/select_cpu_dispatch.bpf.c | 2 +- + .../selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c | 2 +- + .../selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c | 4 ++-- + tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c | 8 ++++---- + 12 files changed, 19 insertions(+), 19 deletions(-) + +diff --git a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c +index 37d9bf6fb7458..6f4c3f5a1c5d9 100644 +--- 
a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c ++++ b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c +@@ -20,7 +20,7 @@ s32 BPF_STRUCT_OPS(ddsp_bogus_dsq_fail_select_cpu, struct task_struct *p, + * If we dispatch to a bogus DSQ that will fall back to the + * builtin global DSQ, we fail gracefully. + */ +- scx_bpf_dispatch_vtime(p, 0xcafef00d, SCX_SLICE_DFL, ++ scx_bpf_dsq_insert_vtime(p, 0xcafef00d, SCX_SLICE_DFL, + p->scx.dsq_vtime, 0); + return cpu; + } +diff --git a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c +index dffc97d9cdf14..e4a55027778fd 100644 +--- a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c ++++ b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c +@@ -17,8 +17,8 @@ s32 BPF_STRUCT_OPS(ddsp_vtimelocal_fail_select_cpu, struct task_struct *p, + + if (cpu >= 0) { + /* Shouldn't be allowed to vtime dispatch to a builtin DSQ. */ +- scx_bpf_dispatch_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, +- p->scx.dsq_vtime, 0); ++ scx_bpf_dsq_insert_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, ++ p->scx.dsq_vtime, 0); + return cpu; + } + +diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c +index 6a7db1502c29e..6325bf76f47ee 100644 +--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c ++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c +@@ -45,7 +45,7 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev) + + target = bpf_get_prandom_u32() % nr_cpus; + +- scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0); ++ scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0); + bpf_task_release(p); + } + +diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c +index 1efb50d61040a..a7cf868d5e311 100644 +--- 
a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c ++++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c +@@ -31,7 +31,7 @@ void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p, + /* Can only call from ops.select_cpu() */ + scx_bpf_select_cpu_dfl(p, 0, 0, &found); + +- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); ++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); + } + + SEC(".struct_ops.link") +diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c +index d75d4faf07f6d..4bc36182d3ffc 100644 +--- a/tools/testing/selftests/sched_ext/exit.bpf.c ++++ b/tools/testing/selftests/sched_ext/exit.bpf.c +@@ -33,7 +33,7 @@ void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags) + if (exit_point == EXIT_ENQUEUE) + EXIT_CLEANLY(); + +- scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags); ++ scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags); + } + + void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p) +@@ -41,7 +41,7 @@ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p) + if (exit_point == EXIT_DISPATCH) + EXIT_CLEANLY(); + +- scx_bpf_consume(DSQ_ID); ++ scx_bpf_dsq_move_to_local(DSQ_ID); + } + + void BPF_STRUCT_OPS(exit_enable, struct task_struct *p) +diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c +index 4d4cd8d966dba..4c005fa718103 100644 +--- a/tools/testing/selftests/sched_ext/maximal.bpf.c ++++ b/tools/testing/selftests/sched_ext/maximal.bpf.c +@@ -20,7 +20,7 @@ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu, + + void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags) + { +- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); ++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); + } + + void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags) 
+@@ -28,7 +28,7 @@ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags) + + void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev) + { +- scx_bpf_consume(SCX_DSQ_GLOBAL); ++ scx_bpf_dsq_move_to_local(SCX_DSQ_GLOBAL); + } + + void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags) +diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c +index f171ac4709706..13d0f5be788d1 100644 +--- a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c ++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c +@@ -30,7 +30,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_enqueue, struct task_struct *p, + } + scx_bpf_put_idle_cpumask(idle_mask); + +- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); ++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); + } + + SEC(".struct_ops.link") +diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c +index 9efdbb7da9288..815f1d5d61ac4 100644 +--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c ++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c +@@ -67,7 +67,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p, + saw_local = true; + } + +- scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, enq_flags); ++ scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, enq_flags); + } + + s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_init_task, +diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c +index 59bfc4f36167a..4bb99699e9209 100644 +--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c ++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c +@@ -29,7 +29,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_select_cpu, struct task_struct *p, + cpu = prev_cpu; + 
+ dispatch:
+-	scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, 0);
++	scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, 0);
+ 	return cpu;
+ }
+ 
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+index 3bbd5fcdfb18e..2a75de11b2cfd 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+@@ -18,7 +18,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_bad_dsq_select_cpu, struct task_struct *p
+ 		   s32 prev_cpu, u64 wake_flags)
+ {
+ 	/* Dispatching to a random DSQ should fail. */
+-	scx_bpf_dispatch(p, 0xcafef00d, SCX_SLICE_DFL, 0);
++	scx_bpf_dsq_insert(p, 0xcafef00d, SCX_SLICE_DFL, 0);
+ 
+ 	return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+index 0fda57fe0ecfa..99d075695c974 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+@@ -18,8 +18,8 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_dbl_dsp_select_cpu, struct task_struct *p
+ 		   s32 prev_cpu, u64 wake_flags)
+ {
+ 	/* Dispatching twice in a row is disallowed. */
+-	scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+-	scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++	scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++	scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+ 
+ 	return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+index e6c67bcf5e6e3..bfcb96cd4954b 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+@@ -2,8 +2,8 @@
+ /*
+  * A scheduler that validates that enqueue flags are properly stored and
+  * applied at dispatch time when a task is directly dispatched from
+- * ops.select_cpu(). We validate this by using scx_bpf_dispatch_vtime(), and
+- * making the test a very basic vtime scheduler.
++ * ops.select_cpu(). We validate this by using scx_bpf_dsq_insert_vtime(),
++ * and making the test a very basic vtime scheduler.
+  *
+  * Copyright (c) 2024 Meta Platforms, Inc. and affiliates.
+  * Copyright (c) 2024 David Vernet
+@@ -47,13 +47,13 @@ s32 BPF_STRUCT_OPS(select_cpu_vtime_select_cpu, struct task_struct *p,
+ 	cpu = prev_cpu;
+ 	scx_bpf_test_and_clear_cpu_idle(cpu);
+ ddsp:
+-	scx_bpf_dispatch_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
++	scx_bpf_dsq_insert_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
+ 	return cpu;
+ }
+ 
+ void BPF_STRUCT_OPS(select_cpu_vtime_dispatch, s32 cpu, struct task_struct *p)
+ {
+-	if (scx_bpf_consume(VTIME_DSQ))
++	if (scx_bpf_dsq_move_to_local(VTIME_DSQ))
+ 		consumed = true;
+ }
+ 
+-- 
+2.39.5
+
diff --git a/queue-6.12/selftests-tc-testing-reduce-rshift-value.patch b/queue-6.12/selftests-tc-testing-reduce-rshift-value.patch
new file mode 100644
index 0000000000..4deaa6223b
--- /dev/null
+++ b/queue-6.12/selftests-tc-testing-reduce-rshift-value.patch
@@ -0,0 +1,41 @@
+From 2b38551d99e6b8c2949aaf5b84b21f4db9c39b1f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 3 Jan 2025 10:24:58 -0800
+Subject: selftests: tc-testing: reduce rshift value
+
+From: Jakub Kicinski
+
+[ Upstream commit e95274dfe86490ec2a5633035c24b2de6722841f ]
+
+After previous change rshift >= 32 is no longer allowed.
+Modify the test to use 31, the test doesn't seem to send
+any traffic so the exact value shouldn't matter.
+
+Reviewed-by: Eric Dumazet
+Link: https://patch.msgid.link/20250103182458.1213486-1-kuba@kernel.org
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ tools/testing/selftests/tc-testing/tc-tests/filters/flow.json | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json b/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
+index 58189327f6444..383fbda07245c 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
+@@ -78,10 +78,10 @@
+         "setup": [
+             "$TC qdisc add dev $DEV1 ingress"
+         ],
+-        "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0xff",
++        "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0x1f",
+         "expExitCode": "0",
+         "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 protocol ip prio 1 flow",
+-        "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 255 baseclass",
++        "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 31 baseclass",
+         "matchCount": "1",
+         "teardown": [
+             "$TC qdisc del dev $DEV1 ingress"
+-- 
+2.39.5
+
diff --git a/queue-6.12/series b/queue-6.12/series
index 10cbc1c7fc..723da6da7d 100644
--- a/queue-6.12/series
+++ b/queue-6.12/series
@@ -46,3 +46,26 @@ i2c-rcar-fix-nack-handling-when-being-a-target.patch
 i2c-testunit-on-errors-repeat-nack-until-stop.patch
 hwmon-ltc2991-fix-mixed-signed-unsigned-in-div_round.patch
 smb-client-fix-double-free-of-tcp_server_info-hostna.patch
+mac802154-check-local-interfaces-before-deleting-sda.patch
+hfs-sanity-check-the-root-record.patch
+fs-qnx6-fix-building-with-gcc-15.patch
+fs-fix-missing-declaration-of-init_files.patch
+kheaders-ignore-silly-rename-files.patch
+netfs-fix-non-contiguous-donation-between-completed-.patch
+cachefiles-parse-the-secctx-immediately.patch
+scsi-ufs-core-honor-runtime-system-pm-levels-if-set-.patch
+gpio-virtuser-lock-up-configfs-that-an-instantiated-.patch
+gpio-sim-lock-up-configfs-that-an-instantiated-devic.patch
+selftests-tc-testing-reduce-rshift-value.patch
+platform-x86-intel-power-domains-add-clearwater-fore.patch
+platform-x86-isst-add-clearwater-forest-to-support-l.patch
+acpi-resource-acpi_dev_irq_override-check-dmi-match-.patch
+sched_ext-keep-running-prev-when-prev-scx.slice-0.patch
+iomap-avoid-avoid-truncating-64-bit-offset-to-32-bit.patch
+afs-fix-merge-preference-rule-failure-condition.patch
+poll_wait-add-mb-to-fix-theoretical-race-between-wai.patch
+selftests-sched_ext-fix-build-after-renames-in-sched.patch
+scx-fix-maximal-bpf-selftest-prog.patch
+rdma-bnxt_re-fix-to-export-port-num-to-ib_query_qp.patch
+sched_ext-fix-dsq_local_on-selftest.patch
+nvmet-propagate-npwg-topology.patch