From: Greg Kroah-Hartman
Date: Sun, 14 Apr 2013 18:22:30 +0000 (-0700)
Subject: remove two 3.0-stable patches
X-Git-Tag: v3.0.74~12
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=c66f2a61d848ef0ac4b6989f249a74112b3a7af1;p=thirdparty%2Fkernel%2Fstable-queue.git

remove two 3.0-stable patches
---

diff --git a/queue-3.0/kobject-fix-kset_find_obj-race-with-concurrent-last-kobject_put.patch b/queue-3.0/kobject-fix-kset_find_obj-race-with-concurrent-last-kobject_put.patch
deleted file mode 100644
index efea5719fa6..00000000000
--- a/queue-3.0/kobject-fix-kset_find_obj-race-with-concurrent-last-kobject_put.patch
+++ /dev/null
@@ -1,98 +0,0 @@
-From a49b7e82cab0f9b41f483359be83f44fbb6b4979 Mon Sep 17 00:00:00 2001
-From: Linus Torvalds
-Date: Sat, 13 Apr 2013 15:15:30 -0700
-Subject: kobject: fix kset_find_obj() race with concurrent last kobject_put()
-
-From: Linus Torvalds
-
-commit a49b7e82cab0f9b41f483359be83f44fbb6b4979 upstream.
-
-Anatol Pomozov identified a race condition that hits module unloading
-and re-loading.  To quote Anatol:
-
- "This is a race condition that exists between kset_find_obj() and
- kobject_put().  kset_find_obj() might return a kobject that has a
- refcount equal to 0 if this kobject is being freed by kobject_put()
- in another thread.
-
- Here is the timeline for the crash, in case kset_find_obj() searches
- for an object that nobody holds while another thread is doing
- kobject_put() on the same kobject:
-
-    THREAD A (calls kset_find_obj())     THREAD B (calls kobject_put())
-
-    spin_lock()
-                                         atomic_dec_return(kobj->kref), counter gets zero here
-                                         ... starts kobject cleanup ....
-                                         spin_lock() // WAIT thread A in kobj_kset_leave()
-    iterate over kset->list
-    atomic_inc(kobj->kref) (counter becomes 1)
-    spin_unlock()
-                                         spin_lock() // taken
-                                         // it does not know that thread A increased counter so it
-                                         remove obj from list
-                                         spin_unlock()
-                                         vfree(module) // frees module object with containing kobj
-
-    // kobj points to freed memory area!!
-    kobject_put(kobj) // OOPS!!!!
-
- The race above happens because module.c tries to use kset_find_obj()
- when somebody unloads a module.  The module.c code was introduced in
- commit 6494a93d55fa"
-
-Anatol supplied a patch specific to module.c that worked around the
-problem by simply not using kset_find_obj() at all, but rather than make
-a local band-aid, this just fixes kset_find_obj() to be thread-safe
-using the proper model of refusing to get a new reference if the
-refcount has already dropped to zero.
-
-See examples of this proper refcount handling not only in the kref
-documentation, but in various other equivalent uses of this pattern by
-grepping for atomic_inc_not_zero().
-
-[ Side note: the module race does indicate that module loading and
-  unloading is not properly serialized wrt sysfs information using the
-  module mutex.  That may require further thought, but this is the
-  correct fix at the kobject layer regardless. ]
-
-Reported-analyzed-and-tested-by: Anatol Pomozov
-Cc: Al Viro
-Signed-off-by: Linus Torvalds
-Signed-off-by: Greg Kroah-Hartman
-
----
- lib/kobject.c |   11 +++++++++--
- 1 file changed, 9 insertions(+), 2 deletions(-)
-
---- a/lib/kobject.c
-+++ b/lib/kobject.c
-@@ -531,6 +531,13 @@ struct kobject *kobject_get(struct kobje
- 	return kobj;
- }
- 
-+static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
-+{
-+	if (!kref_get_unless_zero(&kobj->kref))
-+		kobj = NULL;
-+	return kobj;
-+}
-+
- /*
-  * kobject_cleanup - free kobject resources.
-  * @kobj: object to cleanup
-@@ -779,13 +786,13 @@ struct kobject *kset_find_obj_hinted(str
- 	if (!kobject_name(k) || strcmp(kobject_name(k), name))
- 		goto slow_search;
- 
--	ret = kobject_get(k);
-+	ret = kobject_get_unless_zero(k);
- 	goto unlock_exit;
- 
- slow_search:
- 	list_for_each_entry(k, &kset->list, entry) {
- 		if (kobject_name(k) && !strcmp(kobject_name(k), name)) {
--			ret = kobject_get(k);
-+			ret = kobject_get_unless_zero(k);
- 			break;
- 		}
- 	}
diff --git a/queue-3.0/kref-implement-kref_get_unless_zero-v3.patch b/queue-3.0/kref-implement-kref_get_unless_zero-v3.patch
deleted file mode 100644
index 6bd216f4d05..00000000000
--- a/queue-3.0/kref-implement-kref_get_unless_zero-v3.patch
+++ /dev/null
@@ -1,56 +0,0 @@
-From 4b20db3de8dab005b07c74161cb041db8c5ff3a7 Mon Sep 17 00:00:00 2001
-From: Thomas Hellstrom
-Date: Tue, 6 Nov 2012 11:31:49 +0000
-Subject: kref: Implement kref_get_unless_zero v3
-
-From: Thomas Hellstrom
-
-commit 4b20db3de8dab005b07c74161cb041db8c5ff3a7 upstream.
-
-This function is intended to simplify locking around refcounting for
-objects that can be looked up from a lookup structure, and which are
-removed from that lookup structure in the object destructor.
-Operations on such objects require at least a read lock around
-lookup + kref_get, and a write lock around kref_put + remove from lookup
-structure.  Furthermore, RCU implementations become extremely tricky.
-With a lookup followed by a kref_get_unless_zero *with return value check*,
-locking in the kref_put path can be deferred to the actual removal from
-the lookup structure and RCU lookups become trivial.
-
-v2: Formatting fixes.
-v3: Invert the return value.
-
-Signed-off-by: Thomas Hellstrom
-Signed-off-by: Dave Airlie
-Signed-off-by: Greg Kroah-Hartman
-
-diff --git a/include/linux/kref.h b/include/linux/kref.h
-index 65af688..4972e6e 100644
---- a/include/linux/kref.h
-+++ b/include/linux/kref.h
-@@ -111,4 +111,25 @@ static inline int kref_put_mutex(struct kref *kref,
- 	}
- 	return 0;
- }
-+
-+/**
-+ * kref_get_unless_zero - Increment refcount for object unless it is zero.
-+ * @kref: object.
-+ *
-+ * Return non-zero if the increment succeeded.  Otherwise return 0.
-+ *
-+ * This function is intended to simplify locking around refcounting for
-+ * objects that can be looked up from a lookup structure, and which are
-+ * removed from that lookup structure in the object destructor.
-+ * Operations on such objects require at least a read lock around
-+ * lookup + kref_get, and a write lock around kref_put + remove from lookup
-+ * structure.  Furthermore, RCU implementations become extremely tricky.
-+ * With a lookup followed by a kref_get_unless_zero *with return value check*,
-+ * locking in the kref_put path can be deferred to the actual removal from
-+ * the lookup structure and RCU lookups become trivial.
-+ */
-+static inline int __must_check kref_get_unless_zero(struct kref *kref)
-+{
-+	return atomic_add_unless(&kref->refcount, 1, 0);
-+}
- #endif /* _KREF_H_ */
diff --git a/queue-3.0/series b/queue-3.0/series
index caa615596e1..b605f40367f 100644
--- a/queue-3.0/series
+++ b/queue-3.0/series
@@ -4,5 +4,3 @@ asoc-wm8903-fix-the-bypass-to-hp-lineout-when-no-dac-or-adc-is-running.patch
 tracing-fix-double-free-when-function-profile-init-failed.patch
 pm-reboot-call-syscore_shutdown-after-disable_nonboot_cpus.patch
 target-fix-incorrect-fallthrough-of-alua-standby-offline-transition-cdbs.patch
-kref-implement-kref_get_unless_zero-v3.patch
-kobject-fix-kset_find_obj-race-with-concurrent-last-kobject_put.patch