From foo@baz Thu Mar 14 23:20:15 PDT 2019
From: Vlad Buslov <vladbu@mellanox.com>
Date: Wed, 6 Mar 2019 16:22:12 +0200
Subject: net: sched: flower: insert new filter to idr after setting its mask

From: Vlad Buslov <vladbu@mellanox.com>

[ Upstream commit ecb3dea400d3beaf611ce76ac7a51d4230492cf2 ]
When adding a new filter to the flower classifier, fl_change() inserts it
into handle_idr before initializing filter extensions and assigning it a
mask. Normally this ordering doesn't matter because all flower classifier
ops callbacks assume rtnl lock protection. However, when a filter has an
action whose kernel module is not loaded, the rtnl lock is released before
the call to request_module(). During this time the filter can be accessed
by a concurrent task before its initialization is completed, which can
lead to a crash.
Example case of NULL pointer dereference in concurrent dump:

        mask = &f->mask->key; <- NULL ptr
Extension initialization and mask assignment don't depend on fnew->handle
that is allocated by idr_alloc_u32(). Move idr allocation code after action
creation and mask assignment in fl_change() to prevent concurrent access
to a not fully initialized filter when the rtnl lock is released to load
the action kernel module.
Fixes: 01683a146999 ("net: sched: refactor flower walk to iterate over idr")
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/sched/cls_flower.c |   43 ++++++++++++++++++++++---------------------
 1 file changed, 22 insertions(+), 21 deletions(-)
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -1213,46 +1213,46 @@ static int fl_change(struct net *net, st
-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
-				    INT_MAX, GFP_KERNEL);
-		/* user specifies a handle and it doesn't exist */
-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
-				    handle, GFP_KERNEL);
-	fnew->handle = handle;
 	if (tb[TCA_FLOWER_FLAGS]) {
 		fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
 		if (!tc_flags_valid(fnew->flags)) {
 	err = fl_set_parms(net, tp, fnew, mask, base, tb, tca[TCA_RATE], ovr,
 			   tp->chain->tmplt_priv, extack);
 	err = fl_check_assign_mask(head, fnew, fold, mask);
+		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+				    INT_MAX, GFP_KERNEL);
+	} else if (!fold) {
+		/* user specifies a handle and it doesn't exist */
+		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+				    handle, GFP_KERNEL);
+	fnew->handle = handle;
 	if (!fold && fl_lookup(fnew->mask, &fnew->mkey)) {
 	err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
 				     fnew->mask->filter_ht_params);
 	if (!tc_skip_hw(fnew->flags)) {
 		err = fl_hw_replace_filter(tp, fnew, extack);
@@ -1291,12 +1291,13 @@ errout_mask_ht:
 	rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
 			       fnew->mask->filter_ht_params);
-	fl_mask_put(head, fnew->mask, false);
 	idr_remove(&head->handle_idr, fnew->handle);
+	fl_mask_put(head, fnew->mask, false);
 	tcf_exts_destroy(&fnew->exts);