[thirdparty/kernel/stable-queue.git] / releases / 3.8.8 / kobject-fix-kset_find_obj-race-with-concurrent-last-kobject_put.patch
From a49b7e82cab0f9b41f483359be83f44fbb6b4979 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Sat, 13 Apr 2013 15:15:30 -0700
Subject: kobject: fix kset_find_obj() race with concurrent last kobject_put()

From: Linus Torvalds <torvalds@linux-foundation.org>

commit a49b7e82cab0f9b41f483359be83f44fbb6b4979 upstream.

Anatol Pomozov identified a race condition that hits module unloading
and re-loading.  To quote Anatol:
 "This is a race condition that exists between kset_find_obj() and
  kobject_put().  kset_find_obj() might return a kobject that has a
  refcount equal to 0 if that kobject is being freed by kobject_put()
  in another thread.

  Here is the timeline for the crash in the case where kset_find_obj()
  searches for an object that nobody holds and another thread is doing
  kobject_put() on the same kobject:

  THREAD A (calls kset_find_obj())     THREAD B (calls kobject_put())
  spin_lock()
                                       atomic_dec_return(kobj->kref), counter gets zero here
                                       ... starts kobject cleanup ....
                                       spin_lock() // WAIT thread A in kobj_kset_leave()
  iterate over kset->list
  atomic_inc(kobj->kref) (counter becomes 1)
  spin_unlock()
                                       spin_lock() // taken
                                       // it does not know that thread A increased counter so it
                                       remove obj from list
                                       spin_unlock()
                                       vfree(module) // frees module object with containing kobj

  // kobj points to freed memory area!!
  kobject_put(kobj) // OOPS!!!!
  The race above happens because module.c tries to use kset_find_obj()
  when somebody unloads a module.  The module.c code was introduced in
  commit 6494a93d55fa"

Anatol supplied a patch specific to module.c that worked around the
problem by simply not using kset_find_obj() at all, but rather than
make a local band-aid, this just fixes kset_find_obj() to be
thread-safe, using the proper model of refusing to take a new
reference if the refcount has already dropped to zero.

See examples of this proper refcount handling not only in the kref
documentation, but in various other equivalent uses of this pattern by
grepping for atomic_inc_not_zero().

[ Side note: the module race does indicate that module loading and
  unloading is not properly serialized wrt sysfs information using the
  module mutex.  That may require further thought, but this is the
  correct fix at the kobject layer regardless. ]

Reported-analyzed-and-tested-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 lib/kobject.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/lib/kobject.c
+++ b/lib/kobject.c
@@ -529,6 +529,13 @@ struct kobject *kobject_get(struct kobje
 	return kobj;
 }
 
+static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
+{
+	if (!kref_get_unless_zero(&kobj->kref))
+		kobj = NULL;
+	return kobj;
+}
+
 /*
  * kobject_cleanup - free kobject resources.
  * @kobj: object to cleanup
@@ -751,7 +758,7 @@ struct kobject *kset_find_obj(struct kse
 
 	list_for_each_entry(k, &kset->list, entry) {
 		if (kobject_name(k) && !strcmp(kobject_name(k), name)) {
-			ret = kobject_get(k);
+			ret = kobject_get_unless_zero(k);
 			break;
 		}
 	}