From: Sasha Levin Date: Tue, 29 Oct 2019 11:33:40 +0000 (-0400) Subject: fixes for 4.4 X-Git-Tag: v4.4.199~49 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=cd7c09bbb80cfd4fe8635c997fa775d75c7c61a3;p=thirdparty%2Fkernel%2Fstable-queue.git fixes for 4.4 Signed-off-by: Sasha Levin --- diff --git a/queue-4.4/dm-snapshot-introduce-account_start_copy-and-account.patch b/queue-4.4/dm-snapshot-introduce-account_start_copy-and-account.patch new file mode 100644 index 00000000000..39560cbc22c --- /dev/null +++ b/queue-4.4/dm-snapshot-introduce-account_start_copy-and-account.patch @@ -0,0 +1,73 @@ +From ea7fd15463078743e042228e039918de37d3ed65 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 2 Oct 2019 06:14:17 -0400 +Subject: dm snapshot: introduce account_start_copy() and account_end_copy() + +From: Mikulas Patocka + +[ Upstream commit a2f83e8b0c82c9500421a26c49eb198b25fcdea3 ] + +This simple refactoring moves code for modifying the semaphore cow_count +into separate functions to prepare for changes that will extend these +methods to provide for a more sophisticated mechanism for COW +throttling. + +Signed-off-by: Mikulas Patocka +Reviewed-by: Nikos Tsironis +Signed-off-by: Mike Snitzer +Signed-off-by: Sasha Levin +--- + drivers/md/dm-snap.c | 16 +++++++++++++--- + 1 file changed, 13 insertions(+), 3 deletions(-) + +diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c +index 7c8b5fdf4d4e5..2437ca7e43687 100644 +--- a/drivers/md/dm-snap.c ++++ b/drivers/md/dm-snap.c +@@ -1400,6 +1400,16 @@ static void snapshot_dtr(struct dm_target *ti) + kfree(s); + } + ++static void account_start_copy(struct dm_snapshot *s) ++{ ++ down(&s->cow_count); ++} ++ ++static void account_end_copy(struct dm_snapshot *s) ++{ ++ up(&s->cow_count); ++} ++ + /* + * Flush a list of buffers. + */ +@@ -1584,7 +1594,7 @@ static void copy_callback(int read_err, unsigned long write_err, void *context) + } + list_add(&pe->out_of_order_entry, lh); + } +- up(&s->cow_count); ++ account_end_copy(s); + } + + /* +@@ -1608,7 +1618,7 @@ static void start_copy(struct dm_snap_pending_exception *pe) + dest.count = src.count; + + /* Hand over to kcopyd */ +- down(&s->cow_count); ++ account_start_copy(s); + dm_kcopyd_copy(s->kcopyd_client, &src, 1, &dest, 0, copy_callback, pe); + } + +@@ -1629,7 +1639,7 @@ static void start_full_bio(struct dm_snap_pending_exception *pe, + pe->full_bio_end_io = bio->bi_end_io; + pe->full_bio_private = bio->bi_private; + +- down(&s->cow_count); ++ account_start_copy(s); + callback_data = dm_kcopyd_prepare_callback(s->kcopyd_client, + copy_callback, pe); + +-- +2.20.1 + diff --git a/queue-4.4/dm-snapshot-rework-cow-throttling-to-fix-deadlock.patch b/queue-4.4/dm-snapshot-rework-cow-throttling-to-fix-deadlock.patch new file mode 100644 index 00000000000..65f52eb520a --- /dev/null +++ b/queue-4.4/dm-snapshot-rework-cow-throttling-to-fix-deadlock.patch @@ -0,0 +1,246 @@ +From 163bbfd6416bd60bc51e7af76bc4ad3f727f91cb Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 2 Oct 2019 06:15:53 -0400 +Subject: dm snapshot: rework COW throttling to fix deadlock + +From: Mikulas Patocka + +[ Upstream commit b21555786f18cd77f2311ad89074533109ae3ffa ] + +Commit 721b1d98fb517a ("dm snapshot: Fix excessive memory usage and +workqueue stalls") introduced a semaphore to limit the maximum number of +in-flight kcopyd (COW) jobs. + +The implementation of this throttling mechanism is prone to a deadlock: + +1. One or more threads write to the origin device causing COW, which is + performed by kcopyd. + +2. 
At some point some of these threads might reach the s->cow_count + semaphore limit and block in down(&s->cow_count), holding a read lock + on _origins_lock. + +3. Someone tries to acquire a write lock on _origins_lock, e.g., + snapshot_ctr(), which blocks because the threads at step (2) already + hold a read lock on it. + +4. A COW operation completes and kcopyd runs dm-snapshot's completion + callback, which ends up calling pending_complete(). + pending_complete() tries to resubmit any deferred origin bios. This + requires acquiring a read lock on _origins_lock, which blocks. + + This happens because the read-write semaphore implementation gives + priority to writers, meaning that as soon as a writer tries to enter + the critical section, no readers will be allowed in, until all + writers have completed their work. + + So, pending_complete() waits for the writer at step (3) to acquire + and release the lock. This writer waits for the readers at step (2) + to release the read lock and those readers wait for + pending_complete() (the kcopyd thread) to signal the s->cow_count + semaphore: DEADLOCK. + +The above was thoroughly analyzed and documented by Nikos Tsironis as +part of his initial proposal for fixing this deadlock, see: +https://www.redhat.com/archives/dm-devel/2019-October/msg00001.html + +Fix this deadlock by reworking COW throttling so that it waits without +holding any locks. Add a variable 'in_progress' that counts how many +kcopyd jobs are running. A function wait_for_in_progress() will sleep if +'in_progress' is over the limit. It drops _origins_lock in order to +avoid the deadlock. + +Reported-by: Guruswamy Basavaiah +Reported-by: Nikos Tsironis +Reviewed-by: Nikos Tsironis +Tested-by: Nikos Tsironis +Fixes: 721b1d98fb51 ("dm snapshot: Fix excessive memory usage and workqueue stalls") +Cc: stable@vger.kernel.org # v5.0+ +Depends-on: 4a3f111a73a8c ("dm snapshot: introduce account_start_copy() and account_end_copy()") +Signed-off-by: Mikulas Patocka +Signed-off-by: Mike Snitzer +Signed-off-by: Sasha Levin +--- + drivers/md/dm-snap.c | 80 +++++++++++++++++++++++++++++++++++--------- + 1 file changed, 64 insertions(+), 16 deletions(-) + +diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c +index 2437ca7e43687..98950c4bf939a 100644 +--- a/drivers/md/dm-snap.c ++++ b/drivers/md/dm-snap.c +@@ -19,7 +19,6 @@ + #include + #include + #include +-#include + + #include "dm.h" + +@@ -106,8 +105,8 @@ struct dm_snapshot { + /* The on disk metadata handler */ + struct dm_exception_store *store; + +- /* Maximum number of in-flight COW jobs. */ +- struct semaphore cow_count; ++ unsigned in_progress; ++ wait_queue_head_t in_progress_wait; + + struct dm_kcopyd_client *kcopyd_client; + +@@ -158,8 +157,8 @@ struct dm_snapshot { + */ + #define DEFAULT_COW_THRESHOLD 2048 + +-static int cow_threshold = DEFAULT_COW_THRESHOLD; +-module_param_named(snapshot_cow_threshold, cow_threshold, int, 0644); ++static unsigned cow_threshold = DEFAULT_COW_THRESHOLD; ++module_param_named(snapshot_cow_threshold, cow_threshold, uint, 0644); + MODULE_PARM_DESC(snapshot_cow_threshold, "Maximum number of chunks being copied on write"); + + DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(snapshot_copy_throttle, +@@ -1207,7 +1206,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) + goto bad_hash_tables; + } + +- sema_init(&s->cow_count, (cow_threshold > 0) ? 
cow_threshold : INT_MAX); ++ init_waitqueue_head(&s->in_progress_wait); + + s->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle); + if (IS_ERR(s->kcopyd_client)) { +@@ -1397,17 +1396,54 @@ static void snapshot_dtr(struct dm_target *ti) + + dm_put_device(ti, s->origin); + ++ WARN_ON(s->in_progress); ++ + kfree(s); + } + + static void account_start_copy(struct dm_snapshot *s) + { +- down(&s->cow_count); ++ spin_lock(&s->in_progress_wait.lock); ++ s->in_progress++; ++ spin_unlock(&s->in_progress_wait.lock); + } + + static void account_end_copy(struct dm_snapshot *s) + { +- up(&s->cow_count); ++ spin_lock(&s->in_progress_wait.lock); ++ BUG_ON(!s->in_progress); ++ s->in_progress--; ++ if (likely(s->in_progress <= cow_threshold) && ++ unlikely(waitqueue_active(&s->in_progress_wait))) ++ wake_up_locked(&s->in_progress_wait); ++ spin_unlock(&s->in_progress_wait.lock); ++} ++ ++static bool wait_for_in_progress(struct dm_snapshot *s, bool unlock_origins) ++{ ++ if (unlikely(s->in_progress > cow_threshold)) { ++ spin_lock(&s->in_progress_wait.lock); ++ if (likely(s->in_progress > cow_threshold)) { ++ /* ++ * NOTE: this throttle doesn't account for whether ++ * the caller is servicing an IO that will trigger a COW ++ * so excess throttling may result for chunks not required ++ * to be COW'd. But if cow_threshold was reached, extra ++ * throttling is unlikely to negatively impact performance. ++ */ ++ DECLARE_WAITQUEUE(wait, current); ++ __add_wait_queue(&s->in_progress_wait, &wait); ++ __set_current_state(TASK_UNINTERRUPTIBLE); ++ spin_unlock(&s->in_progress_wait.lock); ++ if (unlock_origins) ++ up_read(&_origins_lock); ++ io_schedule(); ++ remove_wait_queue(&s->in_progress_wait, &wait); ++ return false; ++ } ++ spin_unlock(&s->in_progress_wait.lock); ++ } ++ return true; + } + + /* +@@ -1425,7 +1461,7 @@ static void flush_bios(struct bio *bio) + } + } + +-static int do_origin(struct dm_dev *origin, struct bio *bio); ++static int do_origin(struct dm_dev *origin, struct bio *bio, bool limit); + + /* + * Flush a list of buffers. +@@ -1438,7 +1474,7 @@ static void retry_origin_bios(struct dm_snapshot *s, struct bio *bio) + while (bio) { + n = bio->bi_next; + bio->bi_next = NULL; +- r = do_origin(s->origin, bio); ++ r = do_origin(s->origin, bio, false); + if (r == DM_MAPIO_REMAPPED) + generic_make_request(bio); + bio = n; +@@ -1730,8 +1766,11 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) + if (!s->valid) + return -EIO; + +- /* FIXME: should only take write lock if we need +- * to copy an exception */ ++ if (bio_data_dir(bio) == WRITE) { ++ while (unlikely(!wait_for_in_progress(s, false))) ++ ; /* wait_for_in_progress() has slept */ ++ } ++ + mutex_lock(&s->lock); + + if (!s->valid || (unlikely(s->snapshot_overflowed) && bio_rw(bio) == WRITE)) { +@@ -1879,7 +1918,7 @@ redirect_to_origin: + + if (bio_rw(bio) == WRITE) { + mutex_unlock(&s->lock); +- return do_origin(s->origin, bio); ++ return do_origin(s->origin, bio, false); + } + + out_unlock: +@@ -2215,15 +2254,24 @@ next_snapshot: + /* + * Called on a write from the origin driver. 
+ */ +-static int do_origin(struct dm_dev *origin, struct bio *bio) ++static int do_origin(struct dm_dev *origin, struct bio *bio, bool limit) + { + struct origin *o; + int r = DM_MAPIO_REMAPPED; + ++again: + down_read(&_origins_lock); + o = __lookup_origin(origin->bdev); +- if (o) ++ if (o) { ++ if (limit) { ++ struct dm_snapshot *s; ++ list_for_each_entry(s, &o->snapshots, list) ++ if (unlikely(!wait_for_in_progress(s, true))) ++ goto again; ++ } ++ + r = __origin_write(&o->snapshots, bio->bi_iter.bi_sector, bio); ++ } + up_read(&_origins_lock); + + return r; +@@ -2336,7 +2384,7 @@ static int origin_map(struct dm_target *ti, struct bio *bio) + dm_accept_partial_bio(bio, available_sectors); + + /* Only tell snapshots if this is a write */ +- return do_origin(o->dev, bio); ++ return do_origin(o->dev, bio, true); + } + + /* +-- +2.20.1 + diff --git a/queue-4.4/dm-snapshot-use-mutex-instead-of-rw_semaphore.patch b/queue-4.4/dm-snapshot-use-mutex-instead-of-rw_semaphore.patch new file mode 100644 index 00000000000..c75951d07d3 --- /dev/null +++ b/queue-4.4/dm-snapshot-use-mutex-instead-of-rw_semaphore.patch @@ -0,0 +1,335 @@ +From 90d86a8bec891e934edd7ec2efdca6e935a8e932 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 23 Nov 2017 16:15:43 -0500 +Subject: dm snapshot: use mutex instead of rw_semaphore + +From: Mikulas Patocka + +[ Upstream commit ae1093be5a0ef997833e200a0dafb9ed0b1ff4fe ] + +The rw_semaphore is acquired for read only in two places, neither is +performance-critical. So replace it with a mutex -- which is more +efficient. + +Signed-off-by: Mikulas Patocka +Signed-off-by: Mike Snitzer +Signed-off-by: Sasha Levin +--- + drivers/md/dm-snap.c | 84 +++++++++++++++++++++++--------------------- + 1 file changed, 43 insertions(+), 41 deletions(-) + +diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c +index 5d3797728b9cf..7c8b5fdf4d4e5 100644 +--- a/drivers/md/dm-snap.c ++++ b/drivers/md/dm-snap.c +@@ -48,7 +48,7 @@ struct dm_exception_table { + }; + + struct dm_snapshot { +- struct rw_semaphore lock; ++ struct mutex lock; + + struct dm_dev *origin; + struct dm_dev *cow; +@@ -457,9 +457,9 @@ static int __find_snapshots_sharing_cow(struct dm_snapshot *snap, + if (!bdev_equal(s->cow->bdev, snap->cow->bdev)) + continue; + +- down_read(&s->lock); ++ mutex_lock(&s->lock); + active = s->active; +- up_read(&s->lock); ++ mutex_unlock(&s->lock); + + if (active) { + if (snap_src) +@@ -927,7 +927,7 @@ static int remove_single_exception_chunk(struct dm_snapshot *s) + int r; + chunk_t old_chunk = s->first_merging_chunk + s->num_merging_chunks - 1; + +- down_write(&s->lock); ++ mutex_lock(&s->lock); + + /* + * Process chunks (and associated exceptions) in reverse order +@@ -942,7 +942,7 @@ static int remove_single_exception_chunk(struct dm_snapshot *s) + b = __release_queued_bios_after_merge(s); + + out: +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + if (b) + flush_bios(b); + +@@ -1001,9 +1001,9 @@ static void snapshot_merge_next_chunks(struct dm_snapshot *s) + if (linear_chunks < 0) { + DMERR("Read error in exception store: " + "shutting down merge"); +- down_write(&s->lock); ++ mutex_lock(&s->lock); + s->merge_failed = 1; +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + } + goto shut; + } +@@ -1044,10 +1044,10 @@ static void snapshot_merge_next_chunks(struct dm_snapshot *s) + previous_count = read_pending_exceptions_done_count(); + } + +- down_write(&s->lock); ++ mutex_lock(&s->lock); + s->first_merging_chunk = old_chunk; + s->num_merging_chunks = linear_chunks; +- 
up_write(&s->lock); ++ mutex_unlock(&s->lock); + + /* Wait until writes to all 'linear_chunks' drain */ + for (i = 0; i < linear_chunks; i++) +@@ -1089,10 +1089,10 @@ static void merge_callback(int read_err, unsigned long write_err, void *context) + return; + + shut: +- down_write(&s->lock); ++ mutex_lock(&s->lock); + s->merge_failed = 1; + b = __release_queued_bios_after_merge(s); +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + error_bios(b); + + merge_shutdown(s); +@@ -1191,7 +1191,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) + s->exception_start_sequence = 0; + s->exception_complete_sequence = 0; + INIT_LIST_HEAD(&s->out_of_order_list); +- init_rwsem(&s->lock); ++ mutex_init(&s->lock); + INIT_LIST_HEAD(&s->list); + spin_lock_init(&s->pe_lock); + s->state_bits = 0; +@@ -1358,9 +1358,9 @@ static void snapshot_dtr(struct dm_target *ti) + /* Check whether exception handover must be cancelled */ + (void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL); + if (snap_src && snap_dest && (s == snap_src)) { +- down_write(&snap_dest->lock); ++ mutex_lock(&snap_dest->lock); + snap_dest->valid = 0; +- up_write(&snap_dest->lock); ++ mutex_unlock(&snap_dest->lock); + DMERR("Cancelling snapshot handover."); + } + up_read(&_origins_lock); +@@ -1391,6 +1391,8 @@ static void snapshot_dtr(struct dm_target *ti) + + dm_exception_store_destroy(s->store); + ++ mutex_destroy(&s->lock); ++ + dm_put_device(ti, s->cow); + + dm_put_device(ti, s->origin); +@@ -1478,7 +1480,7 @@ static void pending_complete(void *context, int success) + + if (!success) { + /* Read/write error - snapshot is unusable */ +- down_write(&s->lock); ++ mutex_lock(&s->lock); + __invalidate_snapshot(s, -EIO); + error = 1; + goto out; +@@ -1486,14 +1488,14 @@ static void pending_complete(void *context, int success) + + e = alloc_completed_exception(GFP_NOIO); + if (!e) { +- down_write(&s->lock); ++ mutex_lock(&s->lock); + __invalidate_snapshot(s, -ENOMEM); + error = 1; + goto out; + } + *e = pe->e; + +- down_write(&s->lock); ++ mutex_lock(&s->lock); + if (!s->valid) { + free_completed_exception(e); + error = 1; +@@ -1520,7 +1522,7 @@ out: + } + increment_pending_exceptions_done_count(); + +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + + /* Submit any pending write bios */ + if (error) { +@@ -1720,7 +1722,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) + + /* FIXME: should only take write lock if we need + * to copy an exception */ +- down_write(&s->lock); ++ mutex_lock(&s->lock); + + if (!s->valid || (unlikely(s->snapshot_overflowed) && bio_rw(bio) == WRITE)) { + r = -EIO; +@@ -1742,9 +1744,9 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) + if (bio_rw(bio) == WRITE) { + pe = __lookup_pending_exception(s, chunk); + if (!pe) { +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + pe = alloc_pending_exception(s); +- down_write(&s->lock); ++ mutex_lock(&s->lock); + + if (!s->valid || s->snapshot_overflowed) { + free_pending_exception(pe); +@@ -1779,7 +1781,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) + bio->bi_iter.bi_size == + (s->store->chunk_size << SECTOR_SHIFT)) { + pe->started = 1; +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + start_full_bio(pe, bio); + goto out; + } +@@ -1789,7 +1791,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) + if (!pe->started) { + /* this is protected by snap->lock */ + pe->started = 1; +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + start_copy(pe); + goto out; + } +@@ 
-1799,7 +1801,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) + } + + out_unlock: +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + out: + return r; + } +@@ -1835,7 +1837,7 @@ static int snapshot_merge_map(struct dm_target *ti, struct bio *bio) + + chunk = sector_to_chunk(s->store, bio->bi_iter.bi_sector); + +- down_write(&s->lock); ++ mutex_lock(&s->lock); + + /* Full merging snapshots are redirected to the origin */ + if (!s->valid) +@@ -1866,12 +1868,12 @@ redirect_to_origin: + bio->bi_bdev = s->origin->bdev; + + if (bio_rw(bio) == WRITE) { +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + return do_origin(s->origin, bio); + } + + out_unlock: +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + + return r; + } +@@ -1902,7 +1904,7 @@ static int snapshot_preresume(struct dm_target *ti) + down_read(&_origins_lock); + (void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL); + if (snap_src && snap_dest) { +- down_read(&snap_src->lock); ++ mutex_lock(&snap_src->lock); + if (s == snap_src) { + DMERR("Unable to resume snapshot source until " + "handover completes."); +@@ -1912,7 +1914,7 @@ static int snapshot_preresume(struct dm_target *ti) + "source is suspended."); + r = -EINVAL; + } +- up_read(&snap_src->lock); ++ mutex_unlock(&snap_src->lock); + } + up_read(&_origins_lock); + +@@ -1958,11 +1960,11 @@ static void snapshot_resume(struct dm_target *ti) + + (void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL); + if (snap_src && snap_dest) { +- down_write(&snap_src->lock); +- down_write_nested(&snap_dest->lock, SINGLE_DEPTH_NESTING); ++ mutex_lock(&snap_src->lock); ++ mutex_lock_nested(&snap_dest->lock, SINGLE_DEPTH_NESTING); + __handover_exceptions(snap_src, snap_dest); +- up_write(&snap_dest->lock); +- up_write(&snap_src->lock); ++ mutex_unlock(&snap_dest->lock); ++ mutex_unlock(&snap_src->lock); + } + + up_read(&_origins_lock); +@@ -1977,9 +1979,9 @@ static void snapshot_resume(struct dm_target *ti) + /* Now we have correct chunk size, reregister */ + reregister_snapshot(s); + +- down_write(&s->lock); ++ mutex_lock(&s->lock); + s->active = 1; +- up_write(&s->lock); ++ mutex_unlock(&s->lock); + } + + static uint32_t get_origin_minimum_chunksize(struct block_device *bdev) +@@ -2019,7 +2021,7 @@ static void snapshot_status(struct dm_target *ti, status_type_t type, + switch (type) { + case STATUSTYPE_INFO: + +- down_write(&snap->lock); ++ mutex_lock(&snap->lock); + + if (!snap->valid) + DMEMIT("Invalid"); +@@ -2044,7 +2046,7 @@ static void snapshot_status(struct dm_target *ti, status_type_t type, + DMEMIT("Unknown"); + } + +- up_write(&snap->lock); ++ mutex_unlock(&snap->lock); + + break; + +@@ -2110,7 +2112,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector, + if (dm_target_is_snapshot_merge(snap->ti)) + continue; + +- down_write(&snap->lock); ++ mutex_lock(&snap->lock); + + /* Only deal with valid and active snapshots */ + if (!snap->valid || !snap->active) +@@ -2137,9 +2139,9 @@ static int __origin_write(struct list_head *snapshots, sector_t sector, + + pe = __lookup_pending_exception(snap, chunk); + if (!pe) { +- up_write(&snap->lock); ++ mutex_unlock(&snap->lock); + pe = alloc_pending_exception(snap); +- down_write(&snap->lock); ++ mutex_lock(&snap->lock); + + if (!snap->valid) { + free_pending_exception(pe); +@@ -2182,7 +2184,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector, + } + + next_snapshot: +- up_write(&snap->lock); ++ mutex_unlock(&snap->lock); + + if (pe_to_start_now) { + 
start_copy(pe_to_start_now); +-- +2.20.1 + diff --git a/queue-4.4/dm-use-kzalloc-for-all-structs-with-embedded-biosets.patch b/queue-4.4/dm-use-kzalloc-for-all-structs-with-embedded-biosets.patch new file mode 100644 index 00000000000..3bcdfb17527 --- /dev/null +++ b/queue-4.4/dm-use-kzalloc-for-all-structs-with-embedded-biosets.patch @@ -0,0 +1,109 @@ +From 5e0038a37d056e1b7f699bacba8762968decdfa1 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 5 Jun 2018 05:26:33 -0400 +Subject: dm: Use kzalloc for all structs with embedded biosets/mempools + +From: Kent Overstreet + +[ Upstream commit d377535405686f735b90a8ad4ba269484cd7c96e ] + +mempool_init()/bioset_init() require that the mempools/biosets be zeroed +first; they probably should not _require_ this, but not allocating those +structs with kzalloc is a fairly nonsensical thing to do (calling +mempool_exit()/bioset_exit() on an uninitialized mempool/bioset is legal +and safe, but only works if said memory was zeroed.) + +Acked-by: Mike Snitzer +Signed-off-by: Kent Overstreet +Signed-off-by: Jens Axboe +Signed-off-by: Sasha Levin +--- + drivers/md/dm-bio-prison.c | 2 +- + drivers/md/dm-io.c | 2 +- + drivers/md/dm-kcopyd.c | 2 +- + drivers/md/dm-region-hash.c | 2 +- + drivers/md/dm-snap.c | 2 +- + drivers/md/dm-thin.c | 2 +- + 6 files changed, 6 insertions(+), 6 deletions(-) + +diff --git a/drivers/md/dm-bio-prison.c b/drivers/md/dm-bio-prison.c +index 03af174485d30..fa2432a89bace 100644 +--- a/drivers/md/dm-bio-prison.c ++++ b/drivers/md/dm-bio-prison.c +@@ -32,7 +32,7 @@ static struct kmem_cache *_cell_cache; + */ + struct dm_bio_prison *dm_bio_prison_create(void) + { +- struct dm_bio_prison *prison = kmalloc(sizeof(*prison), GFP_KERNEL); ++ struct dm_bio_prison *prison = kzalloc(sizeof(*prison), GFP_KERNEL); + + if (!prison) + return NULL; +diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c +index 1b84d2890fbf1..ad9a470e5382e 100644 +--- a/drivers/md/dm-io.c ++++ b/drivers/md/dm-io.c +@@ -50,7 +50,7 @@ struct dm_io_client *dm_io_client_create(void) + struct dm_io_client *client; + unsigned min_ios = dm_get_reserved_bio_based_ios(); + +- client = kmalloc(sizeof(*client), GFP_KERNEL); ++ client = kzalloc(sizeof(*client), GFP_KERNEL); + if (!client) + return ERR_PTR(-ENOMEM); + +diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c +index 04248394843e8..09df2c688ba9c 100644 +--- a/drivers/md/dm-kcopyd.c ++++ b/drivers/md/dm-kcopyd.c +@@ -827,7 +827,7 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *thro + int r = -ENOMEM; + struct dm_kcopyd_client *kc; + +- kc = kmalloc(sizeof(*kc), GFP_KERNEL); ++ kc = kzalloc(sizeof(*kc), GFP_KERNEL); + if (!kc) + return ERR_PTR(-ENOMEM); + +diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c +index 74cb7b991d41d..a93a4e6839999 100644 +--- a/drivers/md/dm-region-hash.c ++++ b/drivers/md/dm-region-hash.c +@@ -179,7 +179,7 @@ struct dm_region_hash *dm_region_hash_create( + ; + nr_buckets >>= 1; + +- rh = kmalloc(sizeof(*rh), GFP_KERNEL); ++ rh = kzalloc(sizeof(*rh), GFP_KERNEL); + if (!rh) { + DMERR("unable to allocate region hash memory"); + return ERR_PTR(-ENOMEM); +diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c +index 98950c4bf939a..510b0cf430a8a 100644 +--- a/drivers/md/dm-snap.c ++++ b/drivers/md/dm-snap.c +@@ -1137,7 +1137,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) + origin_mode = FMODE_WRITE; + } + +- s = kmalloc(sizeof(*s), GFP_KERNEL); ++ s = kzalloc(sizeof(*s), GFP_KERNEL); + if (!s) 
{ + ti->error = "Cannot allocate private snapshot structure"; + r = -ENOMEM; +diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c +index d52ea584e0bc1..4d7eae3d32b02 100644 +--- a/drivers/md/dm-thin.c ++++ b/drivers/md/dm-thin.c +@@ -2882,7 +2882,7 @@ static struct pool *pool_create(struct mapped_device *pool_md, + return (struct pool *)pmd; + } + +- pool = kmalloc(sizeof(*pool), GFP_KERNEL); ++ pool = kzalloc(sizeof(*pool), GFP_KERNEL); + if (!pool) { + *error = "Error allocating memory for pool"; + err_p = ERR_PTR(-ENOMEM); +-- +2.20.1 + diff --git a/queue-4.4/efi-cper-fix-endianness-of-pcie-class-code.patch b/queue-4.4/efi-cper-fix-endianness-of-pcie-class-code.patch new file mode 100644 index 00000000000..948fa9c40fb --- /dev/null +++ b/queue-4.4/efi-cper-fix-endianness-of-pcie-class-code.patch @@ -0,0 +1,61 @@ +From 56d896ca7927dcd85510bbef2397c1fb263e28bc Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 2 Oct 2019 18:58:58 +0200 +Subject: efi/cper: Fix endianness of PCIe class code + +From: Lukas Wunner + +[ Upstream commit 6fb9367a15d1a126d222d738b2702c7958594a5f ] + +The CPER parser assumes that the class code is big endian, but at least +on this edk2-derived Intel Purley platform it's little endian: + + efi: EFI v2.50 by EDK II BIOS ID:PLYDCRB1.86B.0119.R05.1701181843 + DMI: Intel Corporation PURLEY/PURLEY, BIOS PLYDCRB1.86B.0119.R05.1701181843 01/18/2017 + + {1}[Hardware Error]: device_id: 0000:5d:00.0 + {1}[Hardware Error]: slot: 0 + {1}[Hardware Error]: secondary_bus: 0x5e + {1}[Hardware Error]: vendor_id: 0x8086, device_id: 0x2030 + {1}[Hardware Error]: class_code: 000406 + ^^^^^^ (should be 060400) + +Signed-off-by: Lukas Wunner +Signed-off-by: Ard Biesheuvel +Cc: Ben Dooks +Cc: Dave Young +Cc: Jarkko Sakkinen +Cc: Jerry Snitselaar +Cc: Linus Torvalds +Cc: Lyude Paul +Cc: Matthew Garrett +Cc: Octavian Purdila +Cc: Peter Jones +Cc: Peter Zijlstra +Cc: Scott Talbert +Cc: Thomas Gleixner +Cc: linux-efi@vger.kernel.org +Cc: linux-integrity@vger.kernel.org +Link: https://lkml.kernel.org/r/20191002165904.8819-2-ard.biesheuvel@linaro.org +Signed-off-by: Ingo Molnar +Signed-off-by: Sasha Levin +--- + drivers/firmware/efi/cper.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c +index f40f7df4b7344..c0e54396f2502 100644 +--- a/drivers/firmware/efi/cper.c ++++ b/drivers/firmware/efi/cper.c +@@ -375,7 +375,7 @@ static void cper_print_pcie(const char *pfx, const struct cper_sec_pcie *pcie, + printk("%s""vendor_id: 0x%04x, device_id: 0x%04x\n", pfx, + pcie->device_id.vendor_id, pcie->device_id.device_id); + p = pcie->device_id.class_code; +- printk("%s""class_code: %02x%02x%02x\n", pfx, p[0], p[1], p[2]); ++ printk("%s""class_code: %02x%02x%02x\n", pfx, p[2], p[1], p[0]); + } + if (pcie->validation_bits & CPER_PCIE_VALID_SERIAL_NUMBER) + printk("%s""serial number: 0x%04x, 0x%04x\n", pfx, +-- +2.20.1 + diff --git a/queue-4.4/efi-x86-do-not-clean-dummy-variable-in-kexec-path.patch b/queue-4.4/efi-x86-do-not-clean-dummy-variable-in-kexec-path.patch new file mode 100644 index 00000000000..82889d0b14f --- /dev/null +++ b/queue-4.4/efi-x86-do-not-clean-dummy-variable-in-kexec-path.patch @@ -0,0 +1,61 @@ +From c756afb1af21d1f1a98c6280a32e7556b40748a9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 2 Oct 2019 18:59:04 +0200 +Subject: efi/x86: Do not clean dummy variable in kexec path + +From: Dave Young + +[ Upstream commit 2ecb7402cfc7f22764e7bbc80790e66eadb20560 ] + +kexec reboot fails randomly in 
UEFI based KVM guest. The firmware +just resets while calling efi_delete_dummy_variable(); Unfortunately +I don't know how to debug the firmware, it is also possible a potential +problem on real hardware as well although nobody reproduced it. + +The intention of the efi_delete_dummy_variable is to trigger garbage collection +when entering virtual mode. But SetVirtualAddressMap can only run once +for each physical reboot, thus kexec_enter_virtual_mode() is not necessarily +a good place to clean a dummy object. + +Drop the efi_delete_dummy_variable so that kexec reboot can work. + +Signed-off-by: Dave Young +Signed-off-by: Ard Biesheuvel +Acked-by: Matthew Garrett +Cc: Ben Dooks +Cc: Jarkko Sakkinen +Cc: Jerry Snitselaar +Cc: Linus Torvalds +Cc: Lukas Wunner +Cc: Lyude Paul +Cc: Octavian Purdila +Cc: Peter Jones +Cc: Peter Zijlstra +Cc: Scott Talbert +Cc: Thomas Gleixner +Cc: linux-efi@vger.kernel.org +Cc: linux-integrity@vger.kernel.org +Link: https://lkml.kernel.org/r/20191002165904.8819-8-ard.biesheuvel@linaro.org +Signed-off-by: Ingo Molnar +Signed-off-by: Sasha Levin +--- + arch/x86/platform/efi/efi.c | 3 --- + 1 file changed, 3 deletions(-) + +diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c +index ad285404ea7f5..4bc352fc08f19 100644 +--- a/arch/x86/platform/efi/efi.c ++++ b/arch/x86/platform/efi/efi.c +@@ -859,9 +859,6 @@ static void __init kexec_enter_virtual_mode(void) + + if (efi_enabled(EFI_OLD_MEMMAP) && (__supported_pte_mask & _PAGE_NX)) + runtime_code_page_mkexec(); +- +- /* clean DUMMY object */ +- efi_delete_dummy_variable(); + #endif + } + +-- +2.20.1 + diff --git a/queue-4.4/exec-load_script-do-not-exec-truncated-interpreter-p.patch b/queue-4.4/exec-load_script-do-not-exec-truncated-interpreter-p.patch new file mode 100644 index 00000000000..8cc55fe5d57 --- /dev/null +++ b/queue-4.4/exec-load_script-do-not-exec-truncated-interpreter-p.patch @@ -0,0 +1,120 @@ +From 23900eca377a8877b5b85b283c320a4ceb8f5cc5 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 18 Feb 2019 16:36:48 -0800 +Subject: exec: load_script: Do not exec truncated interpreter path + +From: Kees Cook + +[ Upstream commit b5372fe5dc84235dbe04998efdede3c4daa866a9 ] + +Commit 8099b047ecc4 ("exec: load_script: don't blindly truncate +shebang string") was trying to protect against a confused exec of a +truncated interpreter path. However, it was overeager and also refused +to truncate arguments as well, which broke userspace, and it was +reverted. This attempts the protection again, but allows arguments to +remain truncated. In an effort to improve readability, helper functions +and comments have been added. 
+ +Co-developed-by: Linus Torvalds +Signed-off-by: Kees Cook +Cc: Andrew Morton +Cc: Oleg Nesterov +Cc: Samuel Dionne-Riel +Cc: Richard Weinberger +Cc: Graham Christensen +Cc: Michal Hocko +Signed-off-by: Linus Torvalds +Signed-off-by: Sasha Levin +--- + fs/binfmt_script.c | 57 ++++++++++++++++++++++++++++++++++++++-------- + 1 file changed, 48 insertions(+), 9 deletions(-) + +diff --git a/fs/binfmt_script.c b/fs/binfmt_script.c +index afdf4e3cafc2a..37c2093a24d3c 100644 +--- a/fs/binfmt_script.c ++++ b/fs/binfmt_script.c +@@ -14,14 +14,31 @@ + #include + #include + ++static inline bool spacetab(char c) { return c == ' ' || c == '\t'; } ++static inline char *next_non_spacetab(char *first, const char *last) ++{ ++ for (; first <= last; first++) ++ if (!spacetab(*first)) ++ return first; ++ return NULL; ++} ++static inline char *next_terminator(char *first, const char *last) ++{ ++ for (; first <= last; first++) ++ if (spacetab(*first) || !*first) ++ return first; ++ return NULL; ++} ++ + static int load_script(struct linux_binprm *bprm) + { + const char *i_arg, *i_name; +- char *cp; ++ char *cp, *buf_end; + struct file *file; + char interp[BINPRM_BUF_SIZE]; + int retval; + ++ /* Not ours to exec if we don't start with "#!". */ + if ((bprm->buf[0] != '#') || (bprm->buf[1] != '!')) + return -ENOEXEC; + +@@ -34,18 +51,40 @@ static int load_script(struct linux_binprm *bprm) + if (bprm->interp_flags & BINPRM_FLAGS_PATH_INACCESSIBLE) + return -ENOENT; + +- /* +- * This section does the #! interpretation. +- * Sorta complicated, but hopefully it will work. -TYT +- */ +- ++ /* Release since we are not mapping a binary into memory. */ + allow_write_access(bprm->file); + fput(bprm->file); + bprm->file = NULL; + +- bprm->buf[BINPRM_BUF_SIZE - 1] = '\0'; +- if ((cp = strchr(bprm->buf, '\n')) == NULL) +- cp = bprm->buf+BINPRM_BUF_SIZE-1; ++ /* ++ * This section handles parsing the #! line into separate ++ * interpreter path and argument strings. We must be careful ++ * because bprm->buf is not yet guaranteed to be NUL-terminated ++ * (though the buffer will have trailing NUL padding when the ++ * file size was smaller than the buffer size). ++ * ++ * We do not want to exec a truncated interpreter path, so either ++ * we find a newline (which indicates nothing is truncated), or ++ * we find a space/tab/NUL after the interpreter path (which ++ * itself may be preceded by spaces/tabs). Truncating the ++ * arguments is fine: the interpreter can re-read the script to ++ * parse them on its own. ++ */ ++ buf_end = bprm->buf + sizeof(bprm->buf) - 1; ++ cp = strnchr(bprm->buf, sizeof(bprm->buf), '\n'); ++ if (!cp) { ++ cp = next_non_spacetab(bprm->buf + 2, buf_end); ++ if (!cp) ++ return -ENOEXEC; /* Entire buf is spaces/tabs */ ++ /* ++ * If there is no later space/tab/NUL we must assume the ++ * interpreter path is truncated. ++ */ ++ if (!next_terminator(cp, buf_end)) ++ return -ENOEXEC; ++ cp = buf_end; ++ } ++ /* NUL-terminate the buffer and any trailing spaces/tabs. 
*/ + *cp = '\0'; + while (cp > bprm->buf) { + cp--; +-- +2.20.1 + diff --git a/queue-4.4/fs-cifs-mute-wunused-const-variable-message.patch b/queue-4.4/fs-cifs-mute-wunused-const-variable-message.patch new file mode 100644 index 00000000000..7833e78363b --- /dev/null +++ b/queue-4.4/fs-cifs-mute-wunused-const-variable-message.patch @@ -0,0 +1,43 @@ +From d3475e3e0cce7115666fb8469d710fb8fd82adf1 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 1 Oct 2019 16:34:13 +0900 +Subject: fs: cifs: mute -Wunused-const-variable message + +From: Austin Kim + +[ Upstream commit dd19c106a36690b47bb1acc68372f2b472b495b8 ] + +After 'Initial git repository build' commit, +'mapping_table_ERRHRD' variable has not been used. + +So 'mapping_table_ERRHRD' const variable could be removed +to mute below warning message: + + fs/cifs/netmisc.c:120:40: warning: unused variable 'mapping_table_ERRHRD' [-Wunused-const-variable] + static const struct smb_to_posix_error mapping_table_ERRHRD[] = { + ^ +Signed-off-by: Austin Kim +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/cifs/netmisc.c | 4 ---- + 1 file changed, 4 deletions(-) + +diff --git a/fs/cifs/netmisc.c b/fs/cifs/netmisc.c +index cc88f4f0325ef..bed9733302279 100644 +--- a/fs/cifs/netmisc.c ++++ b/fs/cifs/netmisc.c +@@ -130,10 +130,6 @@ static const struct smb_to_posix_error mapping_table_ERRSRV[] = { + {0, 0} + }; + +-static const struct smb_to_posix_error mapping_table_ERRHRD[] = { +- {0, 0} +-}; +- + /* + * Convert a string containing text IPv4 or IPv6 address to binary form. + * +-- +2.20.1 + diff --git a/queue-4.4/fs-ocfs2-fix-a-possible-null-pointer-dereference-in-.patch b/queue-4.4/fs-ocfs2-fix-a-possible-null-pointer-dereference-in-.patch new file mode 100644 index 00000000000..6b64653cc09 --- /dev/null +++ b/queue-4.4/fs-ocfs2-fix-a-possible-null-pointer-dereference-in-.patch @@ -0,0 +1,59 @@ +From 0d13ac117046326f2c8430a8bca051314674edef Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 6 Oct 2019 17:57:57 -0700 +Subject: fs: ocfs2: fix a possible null-pointer dereference in + ocfs2_info_scan_inode_alloc() + +From: Jia-Ju Bai + +[ Upstream commit 2abb7d3b12d007c30193f48bebed781009bebdd2 ] + +In ocfs2_info_scan_inode_alloc(), there is an if statement on line 283 +to check whether inode_alloc is NULL: + + if (inode_alloc) + +When inode_alloc is NULL, it is used on line 287: + + ocfs2_inode_lock(inode_alloc, &bh, 0); + ocfs2_inode_lock_full_nested(inode, ...) + struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); + +Thus, a possible null-pointer dereference may occur. + +To fix this bug, inode_alloc is checked on line 286. + +This bug is found by a static analysis tool STCheck written by us. 
+ +Link: http://lkml.kernel.org/r/20190726033717.32359-1-baijiaju1990@gmail.com +Signed-off-by: Jia-Ju Bai +Reviewed-by: Joseph Qi +Cc: Mark Fasheh +Cc: Joel Becker +Cc: Junxiao Bi +Cc: Changwei Ge +Cc: Gang He +Cc: Jun Piao +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Sasha Levin +--- + fs/ocfs2/ioctl.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/fs/ocfs2/ioctl.c b/fs/ocfs2/ioctl.c +index 3cb097ccce607..79232296b7d2b 100644 +--- a/fs/ocfs2/ioctl.c ++++ b/fs/ocfs2/ioctl.c +@@ -289,7 +289,7 @@ static int ocfs2_info_scan_inode_alloc(struct ocfs2_super *osb, + if (inode_alloc) + mutex_lock(&inode_alloc->i_mutex); + +- if (o2info_coherent(&fi->ifi_req)) { ++ if (inode_alloc && o2info_coherent(&fi->ifi_req)) { + status = ocfs2_inode_lock(inode_alloc, &bh, 0); + if (status < 0) { + mlog_errno(status); +-- +2.20.1 + diff --git a/queue-4.4/fs-ocfs2-fix-possible-null-pointer-dereferences-in-o.patch b/queue-4.4/fs-ocfs2-fix-possible-null-pointer-dereferences-in-o.patch new file mode 100644 index 00000000000..64cddbcd415 --- /dev/null +++ b/queue-4.4/fs-ocfs2-fix-possible-null-pointer-dereferences-in-o.patch @@ -0,0 +1,131 @@ +From def432508d6cc39093a9fbfe2404061c4dd355d9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 6 Oct 2019 17:57:50 -0700 +Subject: fs: ocfs2: fix possible null-pointer dereferences in + ocfs2_xa_prepare_entry() + +From: Jia-Ju Bai + +[ Upstream commit 56e94ea132bb5c2c1d0b60a6aeb34dcb7d71a53d ] + +In ocfs2_xa_prepare_entry(), there is an if statement on line 2136 to +check whether loc->xl_entry is NULL: + + if (loc->xl_entry) + +When loc->xl_entry is NULL, it is used on line 2158: + + ocfs2_xa_add_entry(loc, name_hash); + loc->xl_entry->xe_name_hash = cpu_to_le32(name_hash); + loc->xl_entry->xe_name_offset = cpu_to_le16(loc->xl_size); + +and line 2164: + + ocfs2_xa_add_namevalue(loc, xi); + loc->xl_entry->xe_value_size = cpu_to_le64(xi->xi_value_len); + loc->xl_entry->xe_name_len = xi->xi_name_len; + +Thus, possible null-pointer dereferences may occur. + +To fix these bugs, if loc-xl_entry is NULL, ocfs2_xa_prepare_entry() +abnormally returns with -EINVAL. + +These bugs are found by a static analysis tool STCheck written by us. + +[akpm@linux-foundation.org: remove now-unused ocfs2_xa_add_entry()] +Link: http://lkml.kernel.org/r/20190726101447.9153-1-baijiaju1990@gmail.com +Signed-off-by: Jia-Ju Bai +Reviewed-by: Joseph Qi +Cc: Mark Fasheh +Cc: Joel Becker +Cc: Junxiao Bi +Cc: Changwei Ge +Cc: Gang He +Cc: Jun Piao +Cc: Stephen Rothwell +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Sasha Levin +--- + fs/ocfs2/xattr.c | 56 ++++++++++++++++++++---------------------------- + 1 file changed, 23 insertions(+), 33 deletions(-) + +diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c +index 06faa608e5622..dfa6d45dc4dc4 100644 +--- a/fs/ocfs2/xattr.c ++++ b/fs/ocfs2/xattr.c +@@ -1475,18 +1475,6 @@ static int ocfs2_xa_check_space(struct ocfs2_xa_loc *loc, + return loc->xl_ops->xlo_check_space(loc, xi); + } + +-static void ocfs2_xa_add_entry(struct ocfs2_xa_loc *loc, u32 name_hash) +-{ +- loc->xl_ops->xlo_add_entry(loc, name_hash); +- loc->xl_entry->xe_name_hash = cpu_to_le32(name_hash); +- /* +- * We can't leave the new entry's xe_name_offset at zero or +- * add_namevalue() will go nuts. We set it to the size of our +- * storage so that it can never be less than any other entry. 
+- */ +- loc->xl_entry->xe_name_offset = cpu_to_le16(loc->xl_size); +-} +- + static void ocfs2_xa_add_namevalue(struct ocfs2_xa_loc *loc, + struct ocfs2_xattr_info *xi) + { +@@ -2118,29 +2106,31 @@ static int ocfs2_xa_prepare_entry(struct ocfs2_xa_loc *loc, + if (rc) + goto out; + +- if (loc->xl_entry) { +- if (ocfs2_xa_can_reuse_entry(loc, xi)) { +- orig_value_size = loc->xl_entry->xe_value_size; +- rc = ocfs2_xa_reuse_entry(loc, xi, ctxt); +- if (rc) +- goto out; +- goto alloc_value; +- } ++ if (!loc->xl_entry) { ++ rc = -EINVAL; ++ goto out; ++ } + +- if (!ocfs2_xattr_is_local(loc->xl_entry)) { +- orig_clusters = ocfs2_xa_value_clusters(loc); +- rc = ocfs2_xa_value_truncate(loc, 0, ctxt); +- if (rc) { +- mlog_errno(rc); +- ocfs2_xa_cleanup_value_truncate(loc, +- "overwriting", +- orig_clusters); +- goto out; +- } ++ if (ocfs2_xa_can_reuse_entry(loc, xi)) { ++ orig_value_size = loc->xl_entry->xe_value_size; ++ rc = ocfs2_xa_reuse_entry(loc, xi, ctxt); ++ if (rc) ++ goto out; ++ goto alloc_value; ++ } ++ ++ if (!ocfs2_xattr_is_local(loc->xl_entry)) { ++ orig_clusters = ocfs2_xa_value_clusters(loc); ++ rc = ocfs2_xa_value_truncate(loc, 0, ctxt); ++ if (rc) { ++ mlog_errno(rc); ++ ocfs2_xa_cleanup_value_truncate(loc, ++ "overwriting", ++ orig_clusters); ++ goto out; + } +- ocfs2_xa_wipe_namevalue(loc); +- } else +- ocfs2_xa_add_entry(loc, name_hash); ++ } ++ ocfs2_xa_wipe_namevalue(loc); + + /* + * If we get here, we have a blank entry. Fill it. We grow our +-- +2.20.1 + diff --git a/queue-4.4/iio-fix-center-temperature-of-bmc150-accel-core.patch b/queue-4.4/iio-fix-center-temperature-of-bmc150-accel-core.patch new file mode 100644 index 00000000000..549ab7c6d72 --- /dev/null +++ b/queue-4.4/iio-fix-center-temperature-of-bmc150-accel-core.patch @@ -0,0 +1,39 @@ +From 9146488386c91855b46de48b5a9984979643d499 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 29 Aug 2019 07:29:41 +0200 +Subject: iio: fix center temperature of bmc150-accel-core + +From: Pascal Bouwmann + +[ Upstream commit 6c59a962e081df6d8fe43325bbfabec57e0d4751 ] + +The center temperature of the supported devices stored in the constant +BMC150_ACCEL_TEMP_CENTER_VAL is not 24 degrees but 23 degrees. + +It seems that some datasheets were inconsistent on this value leading +to the error. For most usecases will only make minor difference so +not queued for stable. 
+ +Signed-off-by: Pascal Bouwmann +Signed-off-by: Jonathan Cameron +Signed-off-by: Sasha Levin +--- + drivers/iio/accel/bmc150-accel-core.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c +index c7122919a8c0e..ec7ddf8673497 100644 +--- a/drivers/iio/accel/bmc150-accel-core.c ++++ b/drivers/iio/accel/bmc150-accel-core.c +@@ -126,7 +126,7 @@ + #define BMC150_ACCEL_SLEEP_1_SEC 0x0F + + #define BMC150_ACCEL_REG_TEMP 0x08 +-#define BMC150_ACCEL_TEMP_CENTER_VAL 24 ++#define BMC150_ACCEL_TEMP_CENTER_VAL 23 + + #define BMC150_ACCEL_AXIS_TO_REG(axis) (BMC150_ACCEL_REG_XOUT_L + (axis * 2)) + #define BMC150_AUTO_SUSPEND_DELAY_MS 2000 +-- +2.20.1 + diff --git a/queue-4.4/mips-fw-sni-fix-out-of-bounds-init-of-o32-stack.patch b/queue-4.4/mips-fw-sni-fix-out-of-bounds-init-of-o32-stack.patch new file mode 100644 index 00000000000..4e708c5905a --- /dev/null +++ b/queue-4.4/mips-fw-sni-fix-out-of-bounds-init-of-o32-stack.patch @@ -0,0 +1,38 @@ +From fc898efdfddebb91c0a1865aef797cb4fb5626f1 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 9 Oct 2019 17:10:56 +0200 +Subject: MIPS: fw: sni: Fix out of bounds init of o32 stack + +From: Thomas Bogendoerfer + +[ Upstream commit efcb529694c3b707dc0471b312944337ba16e4dd ] + +Use ARRAY_SIZE to caluculate the top of the o32 stack. + +Signed-off-by: Thomas Bogendoerfer +Signed-off-by: Paul Burton +Cc: Ralf Baechle +Cc: James Hogan +Cc: linux-mips@vger.kernel.org +Cc: linux-kernel@vger.kernel.org +Signed-off-by: Sasha Levin +--- + arch/mips/fw/sni/sniprom.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/arch/mips/fw/sni/sniprom.c b/arch/mips/fw/sni/sniprom.c +index 6aa264b9856ac..7c6151d412bd7 100644 +--- a/arch/mips/fw/sni/sniprom.c ++++ b/arch/mips/fw/sni/sniprom.c +@@ -42,7 +42,7 @@ + + /* O32 stack has to be 8-byte aligned. */ + static u64 o32_stk[4096]; +-#define O32_STK &o32_stk[sizeof(o32_stk)] ++#define O32_STK (&o32_stk[ARRAY_SIZE(o32_stk)]) + + #define __PROM_O32(fun, arg) fun arg __asm__(#fun); \ + __asm__(#fun " = call_o32") +-- +2.20.1 + diff --git a/queue-4.4/nfsv4-fix-leak-of-clp-cl_acceptor-string.patch b/queue-4.4/nfsv4-fix-leak-of-clp-cl_acceptor-string.patch new file mode 100644 index 00000000000..38eced5b0ab --- /dev/null +++ b/queue-4.4/nfsv4-fix-leak-of-clp-cl_acceptor-string.patch @@ -0,0 +1,60 @@ +From fbfa2de5c0acb4a3e81a7937f6134420d62b0ceb Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 4 Oct 2019 09:58:54 -0400 +Subject: NFSv4: Fix leak of clp->cl_acceptor string + +From: Chuck Lever + +[ Upstream commit 1047ec868332034d1fbcb2fae19fe6d4cb869ff2 ] + +Our client can issue multiple SETCLIENTID operations to the same +server in some circumstances. Ensure that calls to +nfs4_proc_setclientid() after the first one do not overwrite the +previously allocated cl_acceptor string. + +unreferenced object 0xffff888461031800 (size 32): + comm "mount.nfs", pid 2227, jiffies 4294822467 (age 1407.749s) + hex dump (first 32 bytes): + 6e 66 73 40 6b 6c 69 6d 74 2e 69 62 2e 31 30 31 nfs@klimt.ib.101 + 35 67 72 61 6e 67 65 72 2e 6e 65 74 00 00 00 00 5granger.net.... 
+ backtrace: + [<00000000ab820188>] __kmalloc+0x128/0x176 + [<00000000eeaf4ec8>] gss_stringify_acceptor+0xbd/0x1a7 [auth_rpcgss] + [<00000000e85e3382>] nfs4_proc_setclientid+0x34e/0x46c [nfsv4] + [<000000003d9cf1fa>] nfs40_discover_server_trunking+0x7a/0xed [nfsv4] + [<00000000b81c3787>] nfs4_discover_server_trunking+0x81/0x244 [nfsv4] + [<000000000801b55f>] nfs4_init_client+0x1b0/0x238 [nfsv4] + [<00000000977daf7f>] nfs4_set_client+0xfe/0x14d [nfsv4] + [<0000000053a68a2a>] nfs4_create_server+0x107/0x1db [nfsv4] + [<0000000088262019>] nfs4_remote_mount+0x2c/0x59 [nfsv4] + [<00000000e84a2fd0>] legacy_get_tree+0x2d/0x4c + [<00000000797e947c>] vfs_get_tree+0x20/0xc7 + [<00000000ecabaaa8>] fc_mount+0xe/0x36 + [<00000000f15fafc2>] vfs_kern_mount+0x74/0x8d + [<00000000a3ff4e26>] nfs_do_root_mount+0x8a/0xa3 [nfsv4] + [<00000000d1c2b337>] nfs4_try_mount+0x58/0xad [nfsv4] + [<000000004c9bddee>] nfs_fs_mount+0x820/0x869 [nfs] + +Fixes: f11b2a1cfbf5 ("nfs4: copy acceptor name from context ... ") +Signed-off-by: Chuck Lever +Signed-off-by: Anna Schumaker +Signed-off-by: Sasha Levin +--- + fs/nfs/nfs4proc.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index d1816ee0c11be..900a62a9ad4e5 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -5255,6 +5255,7 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program, + } + status = task->tk_status; + if (setclientid.sc_cred) { ++ kfree(clp->cl_acceptor); + clp->cl_acceptor = rpcauth_stringify_acceptor(setclientid.sc_cred); + put_rpccred(setclientid.sc_cred); + } +-- +2.20.1 + diff --git a/queue-4.4/perf-map-fix-overlapped-map-handling.patch b/queue-4.4/perf-map-fix-overlapped-map-handling.patch new file mode 100644 index 00000000000..5a007a8f895 --- /dev/null +++ b/queue-4.4/perf-map-fix-overlapped-map-handling.patch @@ -0,0 +1,119 @@ +From fe2c40d6d331d19ca53da10cc56c96e066fefb31 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 28 Sep 2019 01:39:00 +0000 +Subject: perf map: Fix overlapped map handling + +From: Steve MacLean + +[ Upstream commit ee212d6ea20887c0ef352be8563ca13dbf965906 ] + +Whenever an mmap/mmap2 event occurs, the map tree must be updated to add a new +entry. If a new map overlaps a previous map, the overlapped section of the +previous map is effectively unmapped, but the non-overlapping sections are +still valid. + +maps__fixup_overlappings() is responsible for creating any new map entries from +the previously overlapped map. It optionally creates a before and an after map. + +When creating the after map the existing code failed to adjust the map.pgoff. +This meant the new after map would incorrectly calculate the file offset +for the ip. This results in incorrect symbol name resolution for any ip in the +after region. + +Make maps__fixup_overlappings() correctly populate map.pgoff. + +Add an assert that new mapping matches old mapping at the beginning of +the after map. + +Committer-testing: + +Validated correct parsing of libcoreclr.so symbols from .NET Core 3.0 preview9 +(which didn't strip symbols). 
+ +Preparation: + + ~/dotnet3.0-preview9/dotnet new webapi -o perfSymbol + cd perfSymbol + ~/dotnet3.0-preview9/dotnet publish + perf record ~/dotnet3.0-preview9/dotnet \ + bin/Debug/netcoreapp3.0/publish/perfSymbol.dll + ^C + +Before: + + perf script --show-mmap-events 2>&1 | grep -e MMAP -e unknown |\ + grep libcoreclr.so | head -n 4 + dotnet 1907 373352.698780: PERF_RECORD_MMAP2 1907/1907: \ + [0x7fe615726000(0x768000) @ 0 08:02 5510620 765057155]: \ + r-xp .../3.0.0-preview9-19423-09/libcoreclr.so + dotnet 1907 373352.701091: PERF_RECORD_MMAP2 1907/1907: \ + [0x7fe615974000(0x1000) @ 0x24e000 08:02 5510620 765057155]: \ + rwxp .../3.0.0-preview9-19423-09/libcoreclr.so + dotnet 1907 373352.701241: PERF_RECORD_MMAP2 1907/1907: \ + [0x7fe615c42000(0x1000) @ 0x51c000 08:02 5510620 765057155]: \ + rwxp .../3.0.0-preview9-19423-09/libcoreclr.so + dotnet 1907 373352.705249: 250000 cpu-clock: \ + 7fe6159a1f99 [unknown] \ + (.../3.0.0-preview9-19423-09/libcoreclr.so) + +After: + + perf script --show-mmap-events 2>&1 | grep -e MMAP -e unknown |\ + grep libcoreclr.so | head -n 4 + dotnet 1907 373352.698780: PERF_RECORD_MMAP2 1907/1907: \ + [0x7fe615726000(0x768000) @ 0 08:02 5510620 765057155]: \ + r-xp .../3.0.0-preview9-19423-09/libcoreclr.so + dotnet 1907 373352.701091: PERF_RECORD_MMAP2 1907/1907: \ + [0x7fe615974000(0x1000) @ 0x24e000 08:02 5510620 765057155]: \ + rwxp .../3.0.0-preview9-19423-09/libcoreclr.so + dotnet 1907 373352.701241: PERF_RECORD_MMAP2 1907/1907: \ + [0x7fe615c42000(0x1000) @ 0x51c000 08:02 5510620 765057155]: \ + rwxp .../3.0.0-preview9-19423-09/libcoreclr.so + +All the [unknown] symbols were resolved. + +Signed-off-by: Steve MacLean +Tested-by: Brian Robbins +Acked-by: Jiri Olsa +Cc: Alexander Shishkin +Cc: Andi Kleen +Cc: Davidlohr Bueso +Cc: Eric Saint-Etienne +Cc: John Keeping +Cc: John Salem +Cc: Leo Yan +Cc: Mark Rutland +Cc: Namhyung Kim +Cc: Peter Zijlstra +Cc: Song Liu +Cc: Stephane Eranian +Cc: Tom McDonald +Link: http://lore.kernel.org/lkml/BN8PR21MB136270949F22A6A02335C238F7800@BN8PR21MB1362.namprd21.prod.outlook.com +Signed-off-by: Arnaldo Carvalho de Melo +Signed-off-by: Sasha Levin +--- + tools/perf/util/map.c | 3 +++ + 1 file changed, 3 insertions(+) + +diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c +index afc6b56cf749b..97c0684588d99 100644 +--- a/tools/perf/util/map.c ++++ b/tools/perf/util/map.c +@@ -1,4 +1,5 @@ + #include "symbol.h" ++#include + #include + #include + #include +@@ -702,6 +703,8 @@ static int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp + } + + after->start = map->end; ++ after->pgoff += map->end - pos->start; ++ assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end)); + __map_groups__insert(pos->groups, after); + if (verbose >= 2) + map__fprintf(after, fp); +-- +2.20.1 + diff --git a/queue-4.4/rdma-iwcm-fix-a-lock-inversion-issue.patch b/queue-4.4/rdma-iwcm-fix-a-lock-inversion-issue.patch new file mode 100644 index 00000000000..5ef11cfca9f --- /dev/null +++ b/queue-4.4/rdma-iwcm-fix-a-lock-inversion-issue.patch @@ -0,0 +1,87 @@ +From 37178da8b733bf1efbe14511774ee051f242911f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 30 Sep 2019 16:16:54 -0700 +Subject: RDMA/iwcm: Fix a lock inversion issue + +From: Bart Van Assche + +[ Upstream commit b66f31efbdad95ec274345721d99d1d835e6de01 ] + +This patch fixes the lock inversion complaint: + +============================================ +WARNING: possible recursive locking detected +5.3.0-rc7-dbg+ #1 Not tainted 
+-------------------------------------------- +kworker/u16:6/171 is trying to acquire lock: +00000000035c6e6c (&id_priv->handler_mutex){+.+.}, at: rdma_destroy_id+0x78/0x4a0 [rdma_cm] + +but task is already holding lock: +00000000bc7c307d (&id_priv->handler_mutex){+.+.}, at: iw_conn_req_handler+0x151/0x680 [rdma_cm] + +other info that might help us debug this: + Possible unsafe locking scenario: + + CPU0 + ---- + lock(&id_priv->handler_mutex); + lock(&id_priv->handler_mutex); + + *** DEADLOCK *** + + May be due to missing lock nesting notation + +3 locks held by kworker/u16:6/171: + #0: 00000000e2eaa773 ((wq_completion)iw_cm_wq){+.+.}, at: process_one_work+0x472/0xac0 + #1: 000000001efd357b ((work_completion)(&work->work)#3){+.+.}, at: process_one_work+0x476/0xac0 + #2: 00000000bc7c307d (&id_priv->handler_mutex){+.+.}, at: iw_conn_req_handler+0x151/0x680 [rdma_cm] + +stack backtrace: +CPU: 3 PID: 171 Comm: kworker/u16:6 Not tainted 5.3.0-rc7-dbg+ #1 +Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 +Workqueue: iw_cm_wq cm_work_handler [iw_cm] +Call Trace: + dump_stack+0x8a/0xd6 + __lock_acquire.cold+0xe1/0x24d + lock_acquire+0x106/0x240 + __mutex_lock+0x12e/0xcb0 + mutex_lock_nested+0x1f/0x30 + rdma_destroy_id+0x78/0x4a0 [rdma_cm] + iw_conn_req_handler+0x5c9/0x680 [rdma_cm] + cm_work_handler+0xe62/0x1100 [iw_cm] + process_one_work+0x56d/0xac0 + worker_thread+0x7a/0x5d0 + kthread+0x1bc/0x210 + ret_from_fork+0x24/0x30 + +This is not a bug as there are actually two lock classes here. + +Link: https://lore.kernel.org/r/20190930231707.48259-3-bvanassche@acm.org +Fixes: de910bd92137 ("RDMA/cma: Simplify locking needed for serialization of callbacks") +Signed-off-by: Bart Van Assche +Reviewed-by: Jason Gunthorpe +Signed-off-by: Jason Gunthorpe +Signed-off-by: Sasha Levin +--- + drivers/infiniband/core/cma.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c +index 1454290078def..8ad9c6b04769d 100644 +--- a/drivers/infiniband/core/cma.c ++++ b/drivers/infiniband/core/cma.c +@@ -1976,9 +1976,10 @@ static int iw_conn_req_handler(struct iw_cm_id *cm_id, + conn_id->cm_id.iw = NULL; + cma_exch(conn_id, RDMA_CM_DESTROYING); + mutex_unlock(&conn_id->handler_mutex); ++ mutex_unlock(&listen_id->handler_mutex); + cma_deref_id(conn_id); + rdma_destroy_id(&conn_id->id); +- goto out; ++ return ret; + } + + mutex_unlock(&conn_id->handler_mutex); +-- +2.20.1 + diff --git a/queue-4.4/sc16is7xx-fix-for-unexpected-interrupt-8.patch b/queue-4.4/sc16is7xx-fix-for-unexpected-interrupt-8.patch new file mode 100644 index 00000000000..d30569e136e --- /dev/null +++ b/queue-4.4/sc16is7xx-fix-for-unexpected-interrupt-8.patch @@ -0,0 +1,121 @@ +From 47858e2760c1ce50638965ecb92287782e3defd8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 12 Sep 2018 15:31:56 +0100 +Subject: sc16is7xx: Fix for "Unexpected interrupt: 8" + +From: Phil Elwell + +[ Upstream commit 30ec514d440cf2c472c8e4b0079af2c731f71a3e ] + +The SC16IS752 has an Enhanced Feature Register which is aliased at the +same address as the Interrupt Identification Register; accessing it +requires that a magic value is written to the Line Configuration +Register. If an interrupt is raised while the EFR is mapped in then +the ISR won't be able to access the IIR, leading to the "Unexpected +interrupt" error messages. 
+ +Avoid the problem by claiming a mutex around accesses to the EFR +register, also claiming the mutex in the interrupt handler work +item (this is equivalent to disabling interrupts to interlock against +a non-threaded interrupt handler). + +See: https://github.com/raspberrypi/linux/issues/2529 + +Signed-off-by: Phil Elwell +Signed-off-by: Greg Kroah-Hartman +Signed-off-by: Sasha Levin +--- + drivers/tty/serial/sc16is7xx.c | 28 ++++++++++++++++++++++++++++ + 1 file changed, 28 insertions(+) + +diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c +index 032f3c13b8c45..a3dfefa33e3c1 100644 +--- a/drivers/tty/serial/sc16is7xx.c ++++ b/drivers/tty/serial/sc16is7xx.c +@@ -332,6 +332,7 @@ struct sc16is7xx_port { + struct kthread_worker kworker; + struct task_struct *kworker_task; + struct kthread_work irq_work; ++ struct mutex efr_lock; + struct sc16is7xx_one p[0]; + }; + +@@ -496,6 +497,21 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud) + div /= 4; + } + ++ /* In an amazing feat of design, the Enhanced Features Register shares ++ * the address of the Interrupt Identification Register, and is ++ * switched in by writing a magic value (0xbf) to the Line Control ++ * Register. Any interrupt firing during this time will see the EFR ++ * where it expects the IIR to be, leading to "Unexpected interrupt" ++ * messages. ++ * ++ * Prevent this possibility by claiming a mutex while accessing the ++ * EFR, and claiming the same mutex from within the interrupt handler. ++ * This is similar to disabling the interrupt, but that doesn't work ++ * because the bulk of the interrupt processing is run as a workqueue ++ * job in thread context. ++ */ ++ mutex_lock(&s->efr_lock); ++ + lcr = sc16is7xx_port_read(port, SC16IS7XX_LCR_REG); + + /* Open the LCR divisors for configuration */ +@@ -511,6 +527,8 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud) + /* Put LCR back to the normal mode */ + sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr); + ++ mutex_unlock(&s->efr_lock); ++ + sc16is7xx_port_update(port, SC16IS7XX_MCR_REG, + SC16IS7XX_MCR_CLKSEL_BIT, + prescaler); +@@ -693,6 +711,8 @@ static void sc16is7xx_ist(struct kthread_work *ws) + { + struct sc16is7xx_port *s = to_sc16is7xx_port(ws, irq_work); + ++ mutex_lock(&s->efr_lock); ++ + while (1) { + bool keep_polling = false; + int i; +@@ -702,6 +722,8 @@ static void sc16is7xx_ist(struct kthread_work *ws) + if (!keep_polling) + break; + } ++ ++ mutex_unlock(&s->efr_lock); + } + + static irqreturn_t sc16is7xx_irq(int irq, void *dev_id) +@@ -888,6 +910,9 @@ static void sc16is7xx_set_termios(struct uart_port *port, + if (!(termios->c_cflag & CREAD)) + port->ignore_status_mask |= SC16IS7XX_LSR_BRK_ERROR_MASK; + ++ /* As above, claim the mutex while accessing the EFR. 
++	mutex_lock(&s->efr_lock);
++
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+ 			     SC16IS7XX_LCR_CONF_MODE_B);
+ 
+@@ -909,6 +934,8 @@ static void sc16is7xx_set_termios(struct uart_port *port,
+ 	/* Update LCR register */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr);
+ 
++	mutex_unlock(&s->efr_lock);
++
+ 	/* Get baud rate generator configuration */
+ 	baud = uart_get_baud_rate(port, termios, old,
+ 				  port->uartclk / 16 / 4 / 0xffff,
+@@ -1172,6 +1199,7 @@ static int sc16is7xx_probe(struct device *dev,
+ 	s->regmap = regmap;
+ 	s->devtype = devtype;
+ 	dev_set_drvdata(dev, s);
++	mutex_init(&s->efr_lock);
+ 
+ 	init_kthread_worker(&s->kworker);
+ 	init_kthread_work(&s->irq_work, sc16is7xx_ist);
+-- 
+2.20.1
+
diff --git a/queue-4.4/scripts-setlocalversion-improve-dirty-check-with-git.patch b/queue-4.4/scripts-setlocalversion-improve-dirty-check-with-git.patch
new file mode 100644
index 00000000000..c8810520830
--- /dev/null
+++ b/queue-4.4/scripts-setlocalversion-improve-dirty-check-with-git.patch
@@ -0,0 +1,69 @@
+From 0ebdfbb27c552e0181bec8d173693fe9f5b58ea5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 14 Nov 2018 18:11:18 -0800
+Subject: scripts/setlocalversion: Improve -dirty check with git-status
+ --no-optional-locks
+
+From: Brian Norris
+
+[ Upstream commit ff64dd4857303dd5550faed9fd598ac90f0f2238 ]
+
+git-diff-index does not refresh the index for you, so using it for a
+"-dirty" check can give misleading results. Commit 6147b1cf19651
+("scripts/setlocalversion: git: Make -dirty check more robust") tried to
+fix this by switching to git-status, but it overlooked the fact that
+git-status also writes to the .git directory of the source tree, which
+is definitely not kosher for an out-of-tree (O=) build. That is getting
+reverted.
+
+Fortunately, git-status now supports avoiding writing to the index via
+the --no-optional-locks flag, as of git 2.14. It still calculates an
+up-to-date index, but it avoids writing it out to the .git directory.
+
+So, let's retry the solution from commit 6147b1cf19651 using this new
+flag first, and if it fails, we assume this is an older version of git
+and just use the old git-diff-index method.
+
+It's hairy to get the 'grep -vq' (inverted matching) correct by stashing
+the output of git-status (you have to be careful about the difference
+between "empty stdin" and "blank line on stdin"), so just pipe the output
+directly to grep and use a regex that's good enough for both the
+git-status and git-diff-index version.
+
+Cc: Christian Kujau
+Cc: Guenter Roeck
+Suggested-by: Alexander Kapshuk
+Signed-off-by: Brian Norris
+Tested-by: Genki Sky
+Signed-off-by: Masahiro Yamada
+Signed-off-by: Sasha Levin
+---
+ scripts/setlocalversion | 12 ++++++++++--
+ 1 file changed, 10 insertions(+), 2 deletions(-)
+
+diff --git a/scripts/setlocalversion b/scripts/setlocalversion
+index 966dd3924ea9c..aa28c3f298093 100755
+--- a/scripts/setlocalversion
++++ b/scripts/setlocalversion
+@@ -72,8 +72,16 @@ scm_version()
+ 		printf -- '-svn%s' "`git svn find-rev $head`"
+ 	fi
+ 
+-	# Check for uncommitted changes
+-	if git diff-index --name-only HEAD | grep -qv "^scripts/package"; then
++	# Check for uncommitted changes.
++	# First, with git-status, but --no-optional-locks is only
++	# supported in git >= 2.14, so fall back to git-diff-index if
++	# it fails. Note that git-diff-index does not refresh the
++	# index, so it may give misleading results. See
++	# git-update-index(1), git-diff-index(1), and git-status(1).
++	if {
++		git --no-optional-locks status -uno --porcelain 2>/dev/null ||
++		git diff-index --name-only HEAD
++	} | grep -qvE '^(.. )?scripts/package'; then
+ 		printf '%s' -dirty
+ 	fi
+ 
+-- 
+2.20.1
+
diff --git a/queue-4.4/serial-mctrl_gpio-check-for-null-pointer.patch b/queue-4.4/serial-mctrl_gpio-check-for-null-pointer.patch
new file mode 100644
index 00000000000..dc06c2bc46a
--- /dev/null
+++ b/queue-4.4/serial-mctrl_gpio-check-for-null-pointer.patch
@@ -0,0 +1,40 @@
+From 0314affc4d3a37186f364827be7706360387e525 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 6 Oct 2019 11:33:11 -0500
+Subject: serial: mctrl_gpio: Check for NULL pointer
+
+From: Adam Ford
+
+[ Upstream commit 37e3ab00e4734acc15d96b2926aab55c894f4d9c ]
+
+When using mctrl_gpio_to_gpiod, it dereferences gpios into a single
+requested GPIO. This dereferencing can break if gpios is NULL,
+so this patch adds a NULL check before dereferencing it. If
+gpios is NULL, this function will also return NULL.
+
+Signed-off-by: Adam Ford
+Reviewed-by: Yegor Yefremov
+Link: https://lore.kernel.org/r/20191006163314.23191-1-aford173@gmail.com
+Signed-off-by: Greg Kroah-Hartman
+Signed-off-by: Sasha Levin
+---
+ drivers/tty/serial/serial_mctrl_gpio.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/drivers/tty/serial/serial_mctrl_gpio.c b/drivers/tty/serial/serial_mctrl_gpio.c
+index 02147361eaa94..2b5329a3d716a 100644
+--- a/drivers/tty/serial/serial_mctrl_gpio.c
++++ b/drivers/tty/serial/serial_mctrl_gpio.c
+@@ -67,6 +67,9 @@ EXPORT_SYMBOL_GPL(mctrl_gpio_set);
+ struct gpio_desc *mctrl_gpio_to_gpiod(struct mctrl_gpios *gpios,
+ 				      enum mctrl_gpio_idx gidx)
+ {
++	if (gpios == NULL)
++		return NULL;
++
+ 	return gpios->gpio[gidx];
+ }
+ EXPORT_SYMBOL_GPL(mctrl_gpio_to_gpiod);
+-- 
+2.20.1
+
diff --git a/queue-4.4/series b/queue-4.4/series
new file mode 100644
index 00000000000..c6d9cdf6634
--- /dev/null
+++ b/queue-4.4/series
@@ -0,0 +1,21 @@
+dm-snapshot-use-mutex-instead-of-rw_semaphore.patch
+dm-snapshot-introduce-account_start_copy-and-account.patch
+dm-snapshot-rework-cow-throttling-to-fix-deadlock.patch
+dm-use-kzalloc-for-all-structs-with-embedded-biosets.patch
+sc16is7xx-fix-for-unexpected-interrupt-8.patch
+x86-cpu-add-atom-tremont-jacobsville.patch
+scripts-setlocalversion-improve-dirty-check-with-git.patch
+usb-handle-warm-reset-port-requests-on-hub-resume.patch
+exec-load_script-do-not-exec-truncated-interpreter-p.patch
+iio-fix-center-temperature-of-bmc150-accel-core.patch
+perf-map-fix-overlapped-map-handling.patch
+rdma-iwcm-fix-a-lock-inversion-issue.patch
+fs-cifs-mute-wunused-const-variable-message.patch
+serial-mctrl_gpio-check-for-null-pointer.patch
+efi-cper-fix-endianness-of-pcie-class-code.patch
+efi-x86-do-not-clean-dummy-variable-in-kexec-path.patch
+fs-ocfs2-fix-possible-null-pointer-dereferences-in-o.patch
+fs-ocfs2-fix-a-possible-null-pointer-dereference-in-.patch
+mips-fw-sni-fix-out-of-bounds-init-of-o32-stack.patch
+nfsv4-fix-leak-of-clp-cl_acceptor-string.patch
+tracing-initialize-iter-seq-after-zeroing-in-tracing.patch
diff --git a/queue-4.4/tracing-initialize-iter-seq-after-zeroing-in-tracing.patch b/queue-4.4/tracing-initialize-iter-seq-after-zeroing-in-tracing.patch
new file mode 100644
index 00000000000..0efd4b68c87
--- /dev/null
+++ b/queue-4.4/tracing-initialize-iter-seq-after-zeroing-in-tracing.patch
@@ -0,0 +1,82 @@
+From 404a5e8f6317189123f52fde7c507f01a55faccf Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 11 Oct 2019 16:21:34 +0200
+Subject: tracing: Initialize iter->seq after zeroing in tracing_read_pipe()
+
+From: Petr Mladek
+
+[ Upstream commit d303de1fcf344ff7c15ed64c3f48a991c9958775 ]
+
+A customer reported the following softlockup:
+
+[899688.160002] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [test.sh:16464]
+[899688.160002] CPU: 0 PID: 16464 Comm: test.sh Not tainted 4.12.14-6.23-azure #1 SLE12-SP4
+[899688.160002] RIP: 0010:up_write+0x1a/0x30
+[899688.160002] Kernel panic - not syncing: softlockup: hung tasks
+[899688.160002] RIP: 0010:up_write+0x1a/0x30
+[899688.160002] RSP: 0018:ffffa86784d4fde8 EFLAGS: 00000257 ORIG_RAX: ffffffffffffff12
+[899688.160002] RAX: ffffffff970fea00 RBX: 0000000000000001 RCX: 0000000000000000
+[899688.160002] RDX: ffffffff00000001 RSI: 0000000000000080 RDI: ffffffff970fea00
+[899688.160002] RBP: ffffffffffffffff R08: ffffffffffffffff R09: 0000000000000000
+[899688.160002] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8b59014720d8
+[899688.160002] R13: ffff8b59014720c0 R14: ffff8b5901471090 R15: ffff8b5901470000
+[899688.160002] tracing_read_pipe+0x336/0x3c0
+[899688.160002] __vfs_read+0x26/0x140
+[899688.160002] vfs_read+0x87/0x130
+[899688.160002] SyS_read+0x42/0x90
+[899688.160002] do_syscall_64+0x74/0x160
+
+It caught the process in the middle of trace_access_unlock(). There is
+no loop. So, it must be looping in the caller tracing_read_pipe()
+via the "waitagain" label.
+
+Crashdump analysis uncovered that iter->seq was completely zeroed
+at this point, including iter->seq.seq.size. It means that
+print_trace_line() was never able to print anything and
+there was no forward progress.
+
+The culprit seems to be in the code:
+
+	/* reset all but tr, trace, and overruns */
+	memset(&iter->seq, 0,
+	       sizeof(struct trace_iterator) -
+	       offsetof(struct trace_iterator, seq));
+
+It was added by commit 53d0aa773053ab182877 ("ftrace:
+add logic to record overruns") back in v2.6.27-rc1,
+when iter->seq looked like:
+
+	struct trace_seq {
+		unsigned char	buffer[PAGE_SIZE];
+		unsigned int	len;
+	};
+
+There was no "size" variable and zeroing was perfectly fine.
+
+The solution is to reinitialize the structure after or without
+zeroing.
+
+Link: http://lkml.kernel.org/r/20191011142134.11997-1-pmladek@suse.com
+
+Signed-off-by: Petr Mladek
+Signed-off-by: Steven Rostedt (VMware)
+Signed-off-by: Sasha Levin
+---
+ kernel/trace/trace.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index c6e4e3e7f6850..6176dc89b32cf 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4803,6 +4803,7 @@ waitagain:
+ 	       sizeof(struct trace_iterator) -
+ 	       offsetof(struct trace_iterator, seq));
+ 	cpumask_clear(iter->started);
++	trace_seq_init(&iter->seq);
+ 	iter->pos = -1;
+ 
+ 	trace_event_read_lock();
+-- 
+2.20.1
+
diff --git a/queue-4.4/usb-handle-warm-reset-port-requests-on-hub-resume.patch b/queue-4.4/usb-handle-warm-reset-port-requests-on-hub-resume.patch
new file mode 100644
index 00000000000..ef26d3ea364
--- /dev/null
+++ b/queue-4.4/usb-handle-warm-reset-port-requests-on-hub-resume.patch
@@ -0,0 +1,55 @@
+From 13c9aa829ed14de36848bf3bef525734981bf326 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 1 Feb 2019 13:52:31 +0100
+Subject: usb: handle warm-reset port requests on hub resume
+
+From: Jan-Marek Glogowski
+
+[ Upstream commit 4fdc1790e6a9ef22399c6bc6e63b80f4609f3b7e ]
+
+On plug-in of my USB-C device, its USB_SS_PORT_LS_SS_INACTIVE
+link state bit is set. Grepping the kernel for this bit shows
+that the port status requests a warm reset this way.
+
+This only happens if it's the only device on the root hub;
+the hub therefore resumes and the HCD's status_urb isn't yet available.
+If a warm-reset request is detected, this sets the hub's event_bits,
+which prevents any auto-suspend and allows the hub's workqueue
+to warm-reset the port later in port_event.
+
+Signed-off-by: Jan-Marek Glogowski
+Acked-by: Alan Stern
+Signed-off-by: Greg Kroah-Hartman
+Signed-off-by: Sasha Levin
+---
+ drivers/usb/core/hub.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 5c274c5440da4..11881c5a1fb0c 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -102,6 +102,8 @@ EXPORT_SYMBOL_GPL(ehci_cf_port_reset_rwsem);
+ static void hub_release(struct kref *kref);
+ static int usb_reset_and_verify_device(struct usb_device *udev);
+ static int hub_port_disable(struct usb_hub *hub, int port1, int set_state);
++static bool hub_port_warm_reset_required(struct usb_hub *hub, int port1,
++		u16 portstatus);
+ 
+ static inline char *portspeed(struct usb_hub *hub, int portstatus)
+ {
+@@ -1092,6 +1094,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 					USB_PORT_FEAT_ENABLE);
+ 		}
+ 
++		/* Make sure a warm-reset request is handled by port_event */
++		if (type == HUB_RESUME &&
++		    hub_port_warm_reset_required(hub, port1, portstatus))
++			set_bit(port1, hub->event_bits);
++
+ 		/*
+ 		 * Add debounce if USB3 link is in polling/link training state.
+ 		 * Link will automatically transition to Enabled state after
+-- 
+2.20.1
+
diff --git a/queue-4.4/x86-cpu-add-atom-tremont-jacobsville.patch b/queue-4.4/x86-cpu-add-atom-tremont-jacobsville.patch
new file mode 100644
index 00000000000..8eadcd770ef
--- /dev/null
+++ b/queue-4.4/x86-cpu-add-atom-tremont-jacobsville.patch
@@ -0,0 +1,60 @@
+From 8298fe4f1f4b73dfb31efbaca1f0b380af991f00 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 25 Jan 2019 11:59:01 -0800
+Subject: x86/cpu: Add Atom Tremont (Jacobsville)
+
+From: Kan Liang
+
+[ Upstream commit 00ae831dfe4474ef6029558f5eb3ef0332d80043 ]
+
+Add the Atom Tremont model number to the Intel family list.
+
+[ Tony: Also update comment at head of file to say "_X" suffix is
+  also used for microserver parts. ]
+
+Signed-off-by: Kan Liang
+Signed-off-by: Qiuxu Zhuo
+Signed-off-by: Tony Luck
+Signed-off-by: Borislav Petkov
+Cc: Andy Shevchenko
+Cc: Aristeu Rozanski
+Cc: "H. Peter Anvin"
+Cc: Ingo Molnar
+Cc: linux-edac
+Cc: Mauro Carvalho Chehab
+Cc: Megha Dey
+Cc: Peter Zijlstra
+Cc: Qiuxu Zhuo
+Cc: Rajneesh Bhardwaj
+Cc: Thomas Gleixner
+Cc: x86-ml
+Link: https://lkml.kernel.org/r/20190125195902.17109-4-tony.luck@intel.com
+Signed-off-by: Sasha Levin
+---
+ arch/x86/include/asm/intel-family.h | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 6801f958e2542..aaa0bd820cf4d 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -5,7 +5,7 @@
+  * "Big Core" Processors (Branded as Core, Xeon, etc...)
+  *
+  * The "_X" parts are generally the EP and EX Xeons, or the
+- * "Extreme" ones, like Broadwell-E.
++ * "Extreme" ones, like Broadwell-E, or Atom microserver.
+  *
+  * Things ending in "2" are usually because we have no better
+  * name for them. There's no processor called "WESTMERE2".
+@@ -67,6 +67,7 @@ + #define INTEL_FAM6_ATOM_GOLDMONT 0x5C /* Apollo Lake */ + #define INTEL_FAM6_ATOM_GOLDMONT_X 0x5F /* Denverton */ + #define INTEL_FAM6_ATOM_GOLDMONT_PLUS 0x7A /* Gemini Lake */ ++#define INTEL_FAM6_ATOM_TREMONT_X 0x86 /* Jacobsville */ + + /* Xeon Phi */ + +-- +2.20.1 +