Fix occasional decoder errors decoding trace data collected in snapshot
mode.
Snapshot mode can take successive snapshots of trace which might overlap.
The decoder checks whether there is an overlap but only looks at the
current and previous buffer. However buffers that do not contain
synchronization (i.e. PSB) packets cannot be decoded or used for overlap
checking. That means the decoder actually needs to check overlaps between
the current buffer and the previous buffer that contained usable data.
Make that change.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Link: http://lkml.kernel.org/r/1474641528-18776-10-git-send-email-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The ramoops buffer may be mapped as either I/O memory or uncached
memory. On ARM64, this results in a device-type (strongly-ordered)
mapping. Since unaligned accesses to device-type memory will
generate an alignment fault (regardless of whether or not strict
alignment checking is enabled), it is not safe to use memcpy().
memcpy_fromio() is guaranteed to only use aligned accesses, so use
that instead.
persistent_ram_update() uses vmap / iomap based on whether the buffer is in
a memory region or a reserved region. However, both map it as non-cacheable
memory. For armv8 specifically, non-cacheable mapping requests use a
memory type that has to be accessed aligned to the request size. memcpy()
doesn't guarantee that.
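For illustration only (a userspace sketch, not the pstore code), here is a copy
helper that, like memcpy_fromio(), only ever issues naturally aligned 4-byte
loads or single-byte loads from the source, whereas a plain memcpy() is free to
use wider, possibly unaligned accesses:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    void copy_from_mapped(void *dst, const volatile void *src, size_t len)
    {
        uint8_t *d = dst;
        const volatile uint8_t *s = src;

        /* single-byte loads until the source is 4-byte aligned */
        while (len && ((uintptr_t)s & 3)) {
            *d++ = *s++;
            len--;
        }
        /* naturally aligned 4-byte loads for the bulk of the copy */
        while (len >= 4) {
            uint32_t w = *(const volatile uint32_t *)s;

            memcpy(d, &w, sizeof(w));
            d += 4;
            s += 4;
            len -= 4;
        }
        /* trailing single-byte loads */
        while (len--)
            *d++ = *s++;
    }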
I have here a FPGA behind PCIe which exports SRAM which I use for
pstore. Now it seems that the FPGA no longer supports cmpxchg based
updates and writes back 0xff…ff and returns the same. This leads to
a crash during a crash, rendering pstore useless.
Since I doubt that there is much benefit from using cmpxchg() here, I am
dropping this atomic access and use the spinlock based version.
Cc: Anton Vorontsov <anton@enomsg.org> Cc: Colin Cross <ccross@android.com> Cc: Kees Cook <keescook@chromium.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Rabin Vincent <rabinv@axis.com> Tested-by: Rabin Vincent <rabinv@axis.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Guenter Roeck <linux@roeck-us.net>
[kees: remove "_locked" suffix since it's the only option now] Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit 34f0ec82e0a9 ("pstore: Correct the max_dump_cnt clearing of
ramoops") sets ->max_dump_cnt to zero before looping over ->przs, but we
didn't use it before that either.
And since commit ee1d267423a1 ("pstore: add pstore unregister") we free
that memory on rmmod.
But even then, we looped until a NULL pointer or ERR. I don't see where
it is ensured that the last member is NULL. Let's try this instead:
simplify error recovery and freeing. Clean up in the error case where
resources were allocated, and then rely on ->max_dump_cnt in the free
path.
Cc: Anton Vorontsov <anton@enomsg.org> Cc: Colin Cross <ccross@android.com> Cc: Kees Cook <keescook@chromium.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Acked-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Architecturally we need to keep __gp below 0x1000000.
But because of ftrace and tracepoint support, the RO_DATA_SECTION now gets much
bigger than it was before. By moving the linkage tables before RO_DATA_SECTION
we can avoid __gp being positioned at too high an address.
The config option HAVE_UNSTABLE_SCHED_CLOCK is set automatically when compiling
for SMP. There is no need to clear the stable-clock flag via
clear_sched_clock_stable() when starting secondary CPUs, and even worse,
clearing it triggers wrong self-detected CPU stall warnings on 64bit Mako
machines.
Increase the initial kernel default page mapping size for SMP kernels to 32MB
and add a runtime check which panics early if the kernel is bigger than the
initial mapping size.
This fixes boot crashes of 32bit SMP kernels. Due to the introduction of huge
page support in kernel 4.4 and its required initial kernel layout in memory, a
32bit SMP kernel usually got bigger (in layout, not size) than 16MB.
'best' is always less than or equal to 'pos', so `best - pos' returns
a negative value which is then cast to `unsigned int'
and passed to __cpufreq_driver_target()->acpi_cpufreq_target()
for policy->freq_table selection. This results in
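As a userspace illustration of the cast (hypothetical values, not the cpufreq
code):

    #include <stdio.h>

    int main(void)
    {
        int best = 3, pos = 5;            /* best <= pos, as described above */
        unsigned int index = best - pos;  /* -2 wraps around to 4294967294 */

        printf("index = %u\n", index);
        return 0;
    }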
There is a requirement that the MSR MSR_PM_ENABLE must be set to 0x01 before
reading MSR_HWP_CAPABILITIES on a given CPU. If cpufreq init() is
scheduled on a CPU which is not the same as policy->cpu, or migrates to a
different CPU before calling the MSR read for MSR_HWP_CAPABILITIES, it
is possible that MSR_PM_ENABLE was not set to 0x01 on that CPU.
This will cause a GP fault. So like other places in this path,
rdmsrl_on_cpu() should be used instead of rdmsrl().
Moreover the scope of MSR_HWP_CAPABILITIES is per thread, so it
should be read from the same CPU for which MSR_HWP_REQUEST is
being set.
Commit d352cf47d93e (cpufreq: conservative: Do not use transition
notifications) overlooked the case when the "frequency step" used
by the conservative governor is small relative to the distances
between the available frequencies and broke the algorithm by
using policy->cur instead of the previously requested frequency
when computing the next one.
As a result, the governor may not be able to go outside of a narrow
range between two consecutive available frequencies.
Fix the problem by making the governor save the previously requested
frequency and select the next one relative to that value (unless it is
out of range, in which case policy->cur will be used instead).
Fixes: d352cf47d93e (cpufreq: conservative: Do not use transition notifications) Link: https://bugzilla.kernel.org/show_bug.cgi?id=177171 Reported-and-tested-by: Aleksey Rybalkin <aleksey@rybalkin.org> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Now that the cpufreq-dt-platdev is used to create the cpufreq-dt platform
device for all OMAP platforms and the platform code that did it
before has been removed, add ti,am33xx and ti,dra7xx, which previously
relied on the removed platform code, to the machine list in
cpufreq-dt-platdev.
Fixes: 7694ca6e1d6f (cpufreq: omap: Use generic platdev driver) Signed-off-by: Dave Gerlach <d-gerlach@ti.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
of_irq_get[_byname]() return 0 iff irq_create_of_mapping() call fails.
Returning both error code and 0 on failure is a sign of a misdesigned API,
it makes the failure check unnecessarily complex and error prone. We should
rely on the platform IRQ resource in this case, not return 0, especially
as 0 can be a valid IRQ resource too...
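A kernel-style fragment sketching the fallback described above (an illustration
of the idea, not the exact platform_get_irq() code):

    struct resource *r;
    int ret;

    ret = of_irq_get(dev->dev.of_node, num);
    if (ret > 0 || ret == -EPROBE_DEFER)
        return ret;

    /* ret == 0 or another error: fall back to the platform IRQ resource */
    r = platform_get_resource(dev, IORESOURCE_IRQ, num);
    return r ? r->start : -ENXIO;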
Fixes: aff008ad813c ("platform_get_irq: Revert to platform_get_resource if of_irq_get fails") Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The tegra_pcie_phy_disable() path called pads_writel() with arguments in
the wrong order. Swap them to be the "value, offset" order expected by
pads_writel().
Similar to the AR93xx and the AR94xx series, the AR95xx also have the same
quirk for the Bus Reset. It will lead to instant system reset if the
device is assigned via VFIO to a KVM VM. I've been able to reproduce this
behavior with a MikroTik R11e-2HnD.
Fixes: c3e59ee4e766 ("PCI: Mark Atheros AR93xx to avoid bus reset") Signed-off-by: Maik Broemme <mbroemme@libmpq.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Potentially overflowing expression 1000000 * data->timeout_clks with
type unsigned int is evaluated using 32-bit arithmetic, and then used
in a context that expects an expression of type unsigned long long.
To avoid overflow, cast 1000000U to type unsigned long long.
Special thanks to Coverity.
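A userspace sketch of the same class of overflow (the 5000 is a made-up value):

    #include <stdio.h>

    int main(void)
    {
        unsigned int timeout_clks = 5000;  /* hypothetical value */
        unsigned long long wrong = 1000000 * timeout_clks;  /* 32-bit multiply wraps */
        unsigned long long right = (unsigned long long)1000000U * timeout_clks;

        printf("wrong=%llu right=%llu\n", wrong, right);
        return 0;
    }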
CMD23 aka SET_BLOCK_COUNT was introduced with MMC v3.1.
Older versions of the specification allowed terminating
multi-block transfers only with CMD12.
The patch fixes the following problem:
mmc0: new MMC card at address 0001
mmcblk0: mmc0:0001 SDMB-16 15.3 MiB
mmcblk0: timed out sending SET_BLOCK_COUNT command, card status 0x400900
...
blk_update_request: I/O error, dev mmcblk0, sector 0
Buffer I/O error on dev mmcblk0, logical block 0, async page read
mmcblk0: unable to read partition table
Some RTL8821AE devices sold in Great Britain have the country code of
0x25 encoded in their EEPROM. This value is not tested in the routine
that establishes the regulatory info for the chip. The fix is to set
this code to have the same capabilities as the EU countries. In addition,
the channels allowed for COUNTRY_CODE_ETSI were more properly suited
for China and Israel, not the EU. This problem has also been fixed.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Firmware is running a watchdog timer for tracking the copy engine ring index
and write index. Whenever both indices are stuck at the same location for a
given duration, the watchdog will be triggered to assert the target. While
updating the copy engine destination ring write index, the driver ensures that
the write index will not be the same as the read index by finding the delta
between these two indices (CE_RING_DELTA).
HTT target to host copy engine (CE5) is a special case where ring buffers
will be reused and the delta check is not applied while updating the write
index. In a rare scenario, whenever the CE5 ring is full, both indices will be
referring to the same location, and this is causing the CE ring stuck issue
explained above. This issue was originally reported on IPQ4019 during long
hour stress testing and during Veriwave max clients test suites. The same
issue is also observed in other chips as well. Fix this by ensuring that the
write index is one less than the read index, which means that the full ring is
available for receiving data.
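A generic ring-buffer sketch (not the ath10k code) of the invariant described
above: treating the ring as full when the write index is one slot behind the
read index keeps the two indices from aliasing while data is still pending.

    #include <stdbool.h>
    #include <stdio.h>

    struct ring {
        unsigned int read;
        unsigned int write;
        unsigned int size;   /* number of slots, power of two */
    };

    static bool ring_full(const struct ring *r)
    {
        return ((r->write + 1) & (r->size - 1)) == r->read;
    }

    static bool ring_empty(const struct ring *r)
    {
        return r->write == r->read;
    }

    int main(void)
    {
        struct ring r = { .read = 5, .write = 4, .size = 8 };

        printf("full=%d empty=%d\n", ring_full(&r), ring_empty(&r));
        return 0;
    }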
Tested-by: Tamizh chelvam <c_traja@qti.qualcomm.com> Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There are two devfreq_event_get_drvdata() definitions in devfreq-event.h.
When CONFIG_PM_DEVFREQ_EVENT is disabled, this leads to a build failure. So
remove the duplicate devfreq_event_get_drvdata() function.
The current clock tree only implements the minimal set of differences
between the i.MX6Q and the i.MX6DL, but that doesn't really reflect
reality.
Apply the following fixes to match the RM:
- DL has no GPU3D_SHADER_SEL/PODF, the shader domain is clocked by
GPU3D_CORE
- GPU3D_SHADER_SEL/PODF has been repurposed as GPU2D_CORE_SEL/PODF
- GPU2D_CORE_SEL/PODF has been repurposed as MLB_SEL/PODF
Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Acked-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Initialize the GPU clock muxes to sane inputs. Until now they have
not been changed from their default values, which means that both
GPU3D shader and GPU2D core were fed by clock inputs whose rates
exceed the maximum allowed frequency of the cores by as much as
200MHz.
This fixes a severe GPU stability issue on i.MX6DL.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Acked-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
http://www.ti.com/lit/pdf/SWCZ010:
DCDC o/p voltage can go higher than programmed value
Impact:
VDD1, VDD2, and VIO output programmed voltage level can go higher than
expected or crash, when coming out of PFM to PWM mode or using DVFS.
Description:
When DCDC CLK SYNC bits are 11/01:
* VIO 3-MHz oscillator is the source clock of the digital core and input
clock of VDD1 and VDD2
* Turn-on of VDD1 and VDD2 HSD PFET is synchronized or at a constant
phase shift
* Current pulled through VCC1+VCC2 is Iload(VDD1) + Iload(VDD2)
* The 3 HSD PFET will be turned-on at the same time, causing the highest
possible switching noise on the application. This noise level depends
on the layout, the VBAT level, and the load current. The noise level
increases with improper layout.
When DCDC CLK SYNC bits are 00:
* VIO 3-MHz oscillator is the source clock of digital core
* VDD1 and VDD2 are running on their own 3-MHz oscillator
* Current pulled through VCC1+VCC2 is the average of Iload(VDD1) + Iload(VDD2)
* The switching noise of the 3 SMPS will be randomly spread over time,
causing lower overall switching noise.
Workaround:
Set DCDCCTRL_REG[1:0]= 00.
Signed-off-by: Jan Remmet <j.remmet@phytec.de> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
From the beginning of gpio-mpc8xxx.c, "handle_level_irq" has been
used to handle GPIO interrupts on the PowerPC/Layerscape
platforms. But actually, almost all PowerPC/Layerscape platforms
assert an interrupt request upon either a high-to-low change or
any change on the state of the signal.
So "handle_level_irq" is not reasonable for PowerPC/Layerscape
GPIO interrupts; it should be "handle_edge_irq". Otherwise the system
may lose some interrupts from the pin's state changes.
Signed-off-by: Liu Gang <Gang.Liu@nxp.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
While debugging timeouts happening in my application workload (ScyllaDB), I have
observed calls to open() taking a long time, ranging everywhere from 2 seconds -
the first ones that are enough to time out my application - to more than 30
seconds.
The problem seems to happen because XFS may block on pending metadata updates
under certain circumstances, and that's confirmed with the following backtrace
taken by the offcputime tool (iovisor/bcc):
Inspecting my run with blktrace, I can see that the xfsaild kthread exhibits very
high "Dispatch wait" times, in the dozens of seconds range and consistent with
the open() times I saw in that run.
Still from the blktrace output, we can, after searching a bit, identify the
request that wasn't dispatched:
8,0 11 152 81.092472813 804 A WM 141698288 + 8 <- (8,1) 141696240
8,0 11 153 81.092472889 804 Q WM 141698288 + 8 [xfsaild/sda1]
8,0 11 154 81.092473207 804 G WM 141698288 + 8 [xfsaild/sda1]
8,0 11 206 81.092496118 804 I WM 141698288 + 8 ( 22911) [xfsaild/sda1]
<==== 'I' means Inserted (into the IO scheduler) ===================================>
8,0 0 289372 96.718761435 0 D WM 141698288 + 8 (15626265317) [swapper/0]
<==== Only 15s later the CFQ scheduler dispatches the request ======================>
As we can see above, in this particular example CFQ took 15 seconds to dispatch
this request. Going back to the full trace, we can see that the xfsaild queue
had plenty of opportunity to run, and it was selected as the active queue many
times. It would just always be preempted by something else (example):
8,0 1 0 81.117912979 0 m N cfq1618SN / insert_request
8,0 1 0 81.117913419 0 m N cfq1618SN / add_to_rr
8,0 1 0 81.117914044 0 m N cfq1618SN / preempt
8,0 1 0 81.117914398 0 m N cfq767A / slice expired t=1
8,0 1 0 81.117914755 0 m N cfq767A / resid=40
8,0 1 0 81.117915340 0 m N / served: vt=1948520448 min_vt=1948520448
8,0 1 0 81.117915858 0 m N cfq767A / sl_used=1 disp=0 charge=0 iops=1 sect=0
where cfq767 is the xfsaild queue and cfq1618 corresponds to one of the ScyllaDB
IO dispatchers.
The requests preempting the xfsaild queue are synchronous requests. That's a
characteristic of ScyllaDB workloads, as we only ever issue O_DIRECT requests.
While it can be argued that preempting ASYNC requests in favor of SYNC is part
of the CFQ logic, I don't believe that doing so for 15+ seconds is anyone's
goal.
Moreover, unless I am misunderstanding something, that breaks the expectation
set by the "fifo_expire_async" tunable, which in my system is set to the
default.
Looking at the code, it seems to me that the issue is that after we make
an async queue active, there is no guarantee that it will execute any request.
When the queue itself tests whether it may dispatch via cfq_may_dispatch(), it
can bail if it sees SYNC requests in flight. An incoming request from another
queue can also preempt it in such a situation before we have the chance to
execute anything (as seen in the trace above).
This patch sets the must_dispatch flag if we notice that we have requests
that are already fifo_expired. This flag is always cleared after
cfq_dispatch_request() returns from cfq_dispatch_requests(), so it won't pin
the queue for subsequent requests (unless they are themselves expired).
Care is taken during preempt to still allow rt requests to preempt us
regardless.
Testing my workload with this patch applied produces much better results.
From the application side I see no timeouts, and the open() latency histogram
generated by systemtap looks much better, with the worst outlier at 131ms:
Commit 209851649dc4 "acpi: nfit: Add support for hot-add" added
support for _FIT notifications, but it neglected to verify the
notification event code matches the one in the ACPI spec for
"NFIT Update". Currently there is only one code in the spec, but
once additional codes are added, older kernels (without this fix)
will misbehave by assuming all event notifications are for an
NFIT Update.
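A hedged sketch of the kind of check being added (the constant name and its
0x80 value come from the ACPI spec description of the "NFIT Update"
notification and are assumptions here, not code copied from the driver):

    #define NFIT_UPDATE_NOTIFY_EVENT 0x80   /* assumed spec value */

    static void nfit_notify_sketch(void *nfit_desc, unsigned int event)
    {
        if (event != NFIT_UPDATE_NOTIFY_EVENT)
            return;   /* not an NFIT Update: ignore the notification */

        /* ... re-evaluate _FIT and handle hot-added regions ... */
    }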
Fixes: 209851649dc4 ("acpi: nfit: Add support for hot-add") Cc: <linux-acpi@vger.kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Reported-by: Linda Knippers <linda.knippers@hpe.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Before commit a325725633c2 ("drm: Lobotomize set_busid nonsense for !pci
drivers"), several DRM drivers for platform devices used to expose an
explicit "drm_driver.set_busid" callback, invariably backed by
drm_platform_set_busid().
Commit a325725633c2 removed drm_platform_set_busid(), along with the
referring .set_busid field initializations. This was justified because
interchangeable functionality had been implemented in drm_dev_alloc() /
drm_dev_init(), which DRM_IOCTL_SET_VERSION would rely on going forward.
However, commit a325725633c2 also removed drm_virtio_set_busid(), for
which the same consolidation was not appropriate: this .set_busid callback
had been implemented with drm_pci_set_busid(), and not
drm_platform_set_busid(). The error regressed Xorg/xserver on QEMU's
"virtio-vga" card; the drmGetBusid() function from libdrm would no longer
return stable PCI identifiers like "pci:0000:00:02.0", but rather unstable
platform ones like "virtio0".
Reinstate drm_virtio_set_busid() with judicious use of
A NULL-pointer dereference happens in cachefiles_mark_object_inactive()
when it tries to read i_blocks so that it can tell the cachefilesd daemon
how much space it's making available.
The problem is that cachefiles_drop_object() calls
cachefiles_mark_object_inactive() after calling cachefiles_delete_object()
because the object being marked active staves off attempts to (re-)use the
file at that filename until after it has been deleted. This means that
d_inode is NULL by the time we come to try to access it.
To fix the problem, have the caller of cachefiles_mark_object_inactive()
supply the number of blocks freed up.
This fixes a bug where the permission was not properly checked in
overlayfs. The testcase is ltp/utimensat01.
It is also cleaner and safer to do the permission checking in the vfs
helper instead of the caller.
This patch introduces an additional ia_valid flag ATTR_TOUCH (since
touch(1) is the most obvious user of utimes(NULL)) that is passed into
notify_change whenever the conditions for this special permission checking
mode are met.
After backporting commit ee44b4bc054a ("dlm: use sctp 1-to-1 API")
series to a kernel with an older workqueue which didn't use RCU yet, it
was noticed that we are freeing the workqueues in dlm_lowcomms_stop()
too early as free_conn() will try to access that memory for canceling
the queued works if any.
This issue was introduced by commit 0d737a8cfd83, as before it no such
attempt to cancel the queued works was performed, so the issue was
not present.
This patch fixes it by simply inverting the free order.
This patch changes the p8_ghash driver to use ghash-generic as a fixed
fallback implementation. This allows the correct value of descsize to be
defined directly in its shash_alg structure and avoids problems with
incorrect buffer sizes when its state is exported or imported.
Reported-by: Jan Stancek <jstancek@redhat.com> Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module") Signed-off-by: Marcelo Cerri <marcelo.cerri@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When zeroing blocks for DAX allocations, we also have to unmap aliases
in the block device mappings. Otherwise writeback can overwrite zeros
with stale data from block device page cache.
Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently when doing a DAX hole punch with ext4 we fail to do a writeback.
This is because the logic around filemap_write_and_wait_range() in
ext4_punch_hole() only looks for dirty page cache pages in the radix tree,
not for dirty DAX exceptional entries.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pages' buffers are cleared after ext4 delayed block allocation fails;
however, their pte_dirty flags are not cleaned. If the pages are later
unmapped, then according to pte_dirty, unmap_page_range() may try to call
__set_page_dirty(), which may lead to the BUG_ON at
mpage_prepare_extent_to_map(): head = page_buffers(page);.
This patch just calls clear_page_dirty_for_io() to clean pte_dirty
at mpage_release_unused_pages() for mmapped pages.
Steps to reproduce the bug:
(1) mmap a file in ext4
addr = (char *)mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED,
fd, 0);
memset(addr, 'i', 4096);
Now, ext4_do_update_inode() clears high 16-bit fields of uid/gid
of deleted and evicted inode to fix up interoperability with old
kernels. However, it checks only i_dtime of an inode to determine
whether the inode was deleted and evicted, and this is very risky,
because i_dtime can be used for the pointer maintaining orphan inode
list, too. We need to further check whether the i_dtime is being
used for the orphan inode list even if the i_dtime is not NULL.
We found that high 16-bit fields of uid/gid of inode are unintentionally
and permanently cleared when the inode truncation is just triggered,
but not finished, and the inode metadata, whose high uid/gid bits are
cleared, is written on disk, and the sudden power-off follows that
in order.
Online defragging of encrypted files is not currently implemented.
However, the move extent ioctl can still return successfully when
called. For example, this occurs when xfstest ext4/020 is run on an
encrypted file system, resulting in a corrupted test file and a
corresponding test failure.
Until the proper functionality is implemented, fail the move extent
ioctl if either the original or donor file is encrypted.
Thomas has reported a lockdep splat hitting in
add_transaction_credits(). The problem is that that function calls
jbd2_might_wait_for_commit() while holding j_state_lock which is wrong
(we do not really wait for transaction commit while holding that lock).
Fix the problem by moving jbd2_might_wait_for_commit() into places where
we are ready to wait for transaction commit and thus j_state_lock is
unlocked.
Fixes: 1eaa566d368b214d99cbb973647c1b0b8102a9ae Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The filesystem used in this case is ubifs, but it can be triggered on
many other filesystems.
When preadv64() is called with offset=0xffffffff000, a page with
index=0xffffffff will be added to the radix tree of ->mapping. Then
this page can be found in ->mapping with pagevec_lookup(). After that,
truncate_inode_pages_range(), which is called in ftruncate(), will fall
into an infinite loop:
- find a page with index=0xffffffff, since index>=end, this page won't
be truncated
- index++, and index become 0
- the page with index=0xffffffff will be found again
The data type of index is unsigned long, so on a 64-bit architecture index
won't overflow to 0 in this case and the dead loop won't happen there; the
loop above is hit on 32-bit, where unsigned long is 32 bits wide.
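A minimal userspace illustration of the difference (the constant mirrors the
page index above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t idx32 = 0xffffffffu;    /* index width on a 32-bit kernel */
        uint64_t idx64 = 0xffffffffull;  /* index width on a 64-bit kernel */

        idx32++;
        idx64++;
        printf("32-bit: 0x%x\n", (unsigned int)idx32);          /* wraps to 0x0 */
        printf("64-bit: 0x%llx\n", (unsigned long long)idx64);  /* 0x100000000 */
        return 0;
    }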
Since truncate_inode_pages_range() is executed while holding the lock of
inode->i_rwsem, any operation related to this lock will be blocked,
and a hung task will happen, e.g.:
INFO: task truncate_test:3364 blocked for more than 120 seconds.
...
call_rwsem_down_write_failed+0x17/0x30
generic_file_write_iter+0x32/0x1c0
ubifs_write_iter+0xcc/0x170
__vfs_write+0xc4/0x120
vfs_write+0xb2/0x1b0
SyS_write+0x46/0xa0
The page with index=0xffffffff added to ->mapping is useless. Fix this
by checking the read position before allocating pages.
Link: http://lkml.kernel.org/r/1475151010-40166-1-git-send-email-fangwei1@huawei.com Signed-off-by: Wei Fang <fangwei1@huawei.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Dave Chinner <david@fromorbit.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Patch series "mm/hugetlb: memory offline issues with hugepages", v4.
This addresses several issues with hugepages and memory offline. While
the first patch fixes a panic, and is therefore rather important, the
last patch is just a performance optimization.
The second patch fixes a theoretical issue with reserved hugepages,
while still leaving some ugly usability issue, see description.
This patch (of 3):
dissolve_free_huge_pages() will either run into the VM_BUG_ON() or a
list corruption and addressing exception when trying to set a memory
block offline that is part (but not the first part) of a "gigantic"
hugetlb page with a size > memory block size.
When no other smaller hugetlb page sizes are present, the VM_BUG_ON()
will trigger directly. In the other case we will run into an addressing
exception later, because dissolve_free_huge_page() will not work on the
head page of the compound hugetlb page which will result in a NULL
hstate from page_hstate().
To fix this, first remove the VM_BUG_ON() because it is wrong, and then
use the compound head page in dissolve_free_huge_page(). This means
that an unused pre-allocated gigantic page that has any part of itself
inside the memory block that is going offline will be dissolved
completely. Losing an unused gigantic hugepage is preferable to failing
the memory offline, for example in the situation where a (possibly
faulty) memory DIMM needs to go offline.
Fixes: c8721bbb ("mm: memory-hotplug: enable memory hotplug to handle hugepage") Link: http://lkml.kernel.org/r/20160926172811.94033-2-gerald.schaefer@de.ibm.com Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Rui Teng <rui.teng@linux.vnet.ibm.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") introduced a
race:
sem_lock has a fast path that allows parallel simple operations.
There are two reasons why a simple operation cannot run in parallel:
- a non-simple operations is ongoing (sma->sem_perm.lock held)
- a complex operation is sleeping (sma->complex_count != 0)
As both facts are stored independently, a thread can bypass the current
checks by sleeping in the right positions. See below for more details
(or kernel bugzilla 105651).
The patch fixes that by creating one variable (complex_mode)
that tracks both reasons why parallel operations are not possible.
The patch also updates stale documentation regarding the locking.
With regards to stable kernels:
The patch is required for all kernels that include the
commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") (3.10?)
The alternative is to revert the patch that introduced the race.
The patch is safe for backporting, i.e. it makes no assumptions
about memory barriers in spin_unlock_wait().
Background:
Here is the race of the current implementation:
Thread A: (simple op)
- does the first "sma->complex_count == 0" test
Thread B: (complex op)
- does sem_lock(): This includes an array scan. But the scan can't
find Thread A, because Thread A does not own sem->lock yet.
- the thread does the operation, increases complex_count,
drops sem_lock, sleeps
Thread A:
- spin_lock(&sem->lock), spin_is_locked(sma->sem_perm.lock)
- sleeps before the complex_count test
Thread C: (complex op)
- does sem_lock (no array scan, complex_count==1)
- wakes up Thread B.
- decrements complex_count
Thread A:
- does the complex_count test
Bug:
Now both thread A and thread C operate on the same array, without
any synchronization.
If a VFC port gets unmapped in the VIOS, it may not respond with a CRQ
init complete following H_REG_CRQ. If this occurs, we can end up having
called scsi_block_requests() without a resulting unblock until the init
complete happens, which may never occur, and we end up hanging I/O
requests. This patch ensures the host action stays set to
IBMVFC_HOST_ACTION_TGT_DEL so we move all rports into devloss state and
unblock unless we receive an init complete.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Acked-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Do the user_len check first and then the ver_addr allocation so that we
save ourselves the kfree() on the error path when user_len is >
ARCMSR_API_DATA_BUFLEN.
Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Marco Grassi <marco.gra@gmail.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Tomas Henzl <thenzl@redhat.com> Cc: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We need to put an upper bound on "user_len" so the memcpy() doesn't
overflow.
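A generic userspace sketch of the idea behind these two fixes (the buffer
length is a placeholder, not the real ARCMSR_API_DATA_BUFLEN value): validate
the user-supplied length before allocating or copying.

    #include <stdlib.h>
    #include <string.h>

    #define API_DATA_BUFLEN 1032   /* placeholder size */

    void *dup_user_payload(const void *src, size_t user_len)
    {
        void *buf;

        if (user_len > API_DATA_BUFLEN)   /* bound the length first */
            return NULL;

        buf = malloc(API_DATA_BUFLEN);
        if (!buf)
            return NULL;

        memcpy(buf, src, user_len);       /* cannot overflow buf now */
        return buf;
    }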
Reported-by: Marco Grassi <marco.gra@gmail.com> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Seth Forshee reports that in 4.8-rcN some automounts are failing
because the credentials requesting the automount changed.
The relevant call path is:
follow_automount()
->d_automount
autofs4_d_automount
autofs4_mount_wait
autofs4_wait
In autofs4_wait, wq_uid and wq_gid are set to current_uid() and
current_gid() respectively. With follow_automount() now overriding creds,
the uid that we export to userspace changes, and that breaks existing
setups.
To remove the regression set wq_uid and wq_gid from
current_real_cred()->uid and current_real_cred()->gid respectively.
This restores the current behavior as current->real_cred is identical
to current->cred except when override creds are used.
Fixes: aeaa4a79ff6a ("fs: Call d_automount with the filesystems creds") Reported-by: Seth Forshee <seth.forshee@canonical.com> Tested-by: Seth Forshee <seth.forshee@canonical.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If we hold the superblock lock while calling reiserfs_quota_on_mount(), we can
deadlock our own worker - mount blocks kworker/3:2, sleeps forever more.
Signed-off-by: Mike Galbraith <efault@gmx.de> Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Mike Galbraith <mgalbraith@suse.de> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The memcpy() call overwrites the content of the *data pointer instead of
filling the newly-allocated memory (whose pointer is held in *data).
Fix this by merging the kzalloc+memcpy pair into a single kmemdup() call.
Thanks to Joe Perches for suggesting the use of kmemdup().
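A minimal userspace analogue of the fixed pattern (a sketch, not the sst code):
allocate and copy in one helper, and store the result through the output
pointer rather than memcpy()ing into the pointer variable itself.

    #include <stdlib.h>
    #include <string.h>

    /* userspace stand-in for kmemdup() */
    static void *dup_block(const void *src, size_t len)
    {
        void *p = malloc(len);

        if (p)
            memcpy(p, src, len);
        return p;
    }

    int get_block(void **data, const void *src, size_t len)
    {
        *data = dup_block(src, len);   /* fill *data, don't memcpy() into data */
        return *data ? 0 : -1;
    }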
Fixes: 60dc8dbacb00 ("ASoC: Intel: sst: Add some helper functions") Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In the FLL parameter calculation, the maximum FVCO should be chosen.
This patch fixes the bug of the wrong FVCO being chosen.
Signed-off-by: John Hsu <KCHSU0@nuvoton.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Flowrings contain skbs waiting for transmission that were passed to us
by netif. It means we checked every one of them looking for the 802.1x
Ethernet type. When deleting a flowring we have to use a freeing function
that will check for the 802.1x type as well.
Freeing skbs without a proper check was leading to the counter not being
properly decreased. This was triggering a WARNING every time
brcmf_netdev_wait_pend8021x was called.
Signed-off-by: Rafał Miłecki <rafal@milecki.pl> Acked-by: Arend van Spriel <arend@broadcom.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The struct cfg80211_pmksa defines its bssid field as:
const u8 *bssid;
contrary to struct brcmf_pmksa, which uses:
u8 bssid[ETH_ALEN];
Therefore in brcmf_cfg80211_del_pmksa(), &pmksa->bssid takes the address
of this field (of type u8**), not the one of its content (which would be
u8*). Remove the & operator to make brcmf_dbg("%pM") and memcmp()
behave as expected.
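A standalone illustration of the mix-up (hypothetical struct names): for a
pointer member, &s->bssid is the address of the pointer itself, while for an
array member it aliases the data.

    #include <stdio.h>

    struct with_ptr   { const unsigned char *bssid; };
    struct with_array { unsigned char bssid[6]; };

    int main(void)
    {
        unsigned char mac[6] = { 0 };
        struct with_ptr   p = { .bssid = mac };
        struct with_array a = { { 0 } };

        printf("pointer member: bssid=%p &bssid=%p\n",
               (void *)p.bssid, (void *)&p.bssid);
        printf("array member:   bssid=%p &bssid=%p\n",
               (void *)a.bssid, (void *)&a.bssid);
        return 0;
    }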
This bug has been found using a custom static checker (which checks the
usage of %p... attributes at build time). It has been introduced in
commit 6c404f34f2bd ("brcmfmac: Cleanup pmksa cache handling code"),
which replaced pmksa->bssid by &pmksa->bssid while refactoring the code,
without modifying struct cfg80211_pmksa definition.
Replace &pmk[i].bssid with pmk[i].bssid too to make the code clearer,
this change does not affect the semantic.
Fixes: 6c404f34f2bd ("brcmfmac: Cleanup pmksa cache handling code") Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This is a long-standing bug in the way shadow entries are accounted in
the radix tree nodes. The shrinker needs to know when radix tree nodes
contain only shadow entries, no pages, so node->count is split in half
to count shadows in the upper bits and pages in the lower bits.
Unfortunately, the radix tree implementation doesn't know of this and
assumes all entries are in node->count. When there is a shadow entry
directly in root->rnode and the tree is later extended, the radix tree
implementation will copy that entry into the new node and bump its
node->count, i.e. increases the page count bits. Once the shadow gets
removed and we subtract from the upper counter, node->count underflows
and triggers the warning. Afterwards, without node->count reaching 0
again, the radix tree node is leaked.
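An illustrative sketch of such a split counter (the field widths are made up,
not the kernel's actual layout): generic code that bumps the whole counter only
ever increments the page half, so a later decrement of the shadow half can
underflow it.

    /* pages counted in the low half, shadow entries in the high half */
    #define PAGES_MASK   0x0000ffffu
    #define SHADOW_SHIFT 16

    static unsigned int count_pages(unsigned int count)
    {
        return count & PAGES_MASK;
    }

    static unsigned int count_shadows(unsigned int count)
    {
        return count >> SHADOW_SHIFT;
    }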
Limit shadow entries to when we have actual radix tree nodes and can
count them properly. That means we lose the ability to detect refaults
from files that had only the first page faulted in at eviction time.
Fixes: 449dd6984d0e ("mm: keep page cache radix tree nodes in check") Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reported-and-tested-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Jan Kara <jack@suse.cz> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Although rare, it's possible to hit PCI error early on device
probe, meaning possibly some structs are not entirely initialized,
and some might even be completely uninitialized, leading to NULL
pointer dereference.
The i40e driver currently presents a "bad" behavior if the device hits
such an early PCI error: firstly, the struct i40e_pf might not be
attached to the pci_dev yet, leading to a NULL pointer dereference on
access to pf->state.
Even checking if the struct is NULL and avoiding the access in that
case isn't enough, since the driver cannot recover from PCI error
that early; in our experiments we saw multiple failures on kernel
log, like:
[549.664] i40e 0007:01:00.1: Initial pf_reset failed: -15
[549.664] i40e: probe of 0007:01:00.1 failed with error -15
[...]
[871.644] i40e 0007:01:00.1: The driver for the device stopped because the
device firmware failed to init. Try updating your NVM image.
[871.644] i40e: probe of 0007:01:00.1 failed with error -32
[...]
[872.516] i40e 0007:01:00.0: ARQ: Unknown event 0x0000 ignored
Between the first probe failure (error -15) and the second (error -32)
another PCI error happened due to the first bad probe. Also, driver
started to flood console with those ARQ event messages.
This patch will prevent these issues by allowing error recovery
mechanism to remove the failed device from the system instead of
trying to recover from early PCI errors during device probe.
Signed-off-by: Guilherme G Piccoli <gpiccoli@linux.vnet.ibm.com> Acked-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit 22f2ac51b6d6 ("mm: workingset: fix crash in shadow node shrinker
caused by replace_page_cache_page()") switched replace_page_cache() from
raw radix tree operations to page_cache_tree_insert() but didn't take
into account that the latter function, unlike the raw radix tree op,
handles mapping->nrpages. As a result, that counter is bumped for each
page replacement rather than balanced out even.
The mapping->nrpages counter is used to skip needless radix tree walks
when invalidating, truncating, syncing inodes without pages, as well as
statistics for userspace. Since the error is positive, we'll do more
page cache tree walks than necessary; we won't miss a necessary one.
And we'll report more buffer pages to userspace than there are. The
error is limited to fuse inodes.
Fixes: 22f2ac51b6d6 ("mm: workingset: fix crash in shadow node shrinker caused by replace_page_cache_page()") Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Miklos Szeredi <miklos@szeredi.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fuse allowed VFS to set mode in setattr in order to clear suid/sgid on
chown and truncate, and (since writeback_cache) write. The problem with
this is that it'll potentially restore a stale mode.
The proper fix would be to let the filesystems do the suid/sgid clearing on
the relevant operations. Possibly some are already doing it but there's no
way we can detect this.
So fix this by refreshing and recalculating the mode. Do this only if
ATTR_KILL_S[UG]ID is set to not destroy performance for writes. This is
still racy but the size of the window is reduced.
Without "default_permissions" the userspace filesystem's lookup operation
needs to perform the check for search permission on the directory.
If the directory does not allow search for everyone (this is quite rare) then
the userspace filesystem has to set the entry timeout to zero to make sure
permission checks are always performed.
Changing the mode bits of the directory should also invalidate the
(previously cached) dentry to make sure the next lookup will have a chance
of updating the timeout, if needed.
The original commit, which added support for the Armada CP110 system
controller, used global variables for storing all clock information. It worked
fine for the Armada 7k SoC, with a single CP110 block. After the dual-CP110
Armada 8k was introduced, the data got overwritten and corrupted.
This patch fixes the issue by allocating resources dynamically in the
driver probe and storing it as platform drvdata.
Fixes: d3da3eaef7f4 ("clk: mvebu: new driver for Armada CP110 system ...") Signed-off-by: Marcin Wojtas <mw@semihalf.com> Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The Armada CP110 system controller comprises its own routine responsible
for registering gate clocks. Among others, the 'flags' field in
struct clk_init_data was not set and contained random values, which
may cause unpredicted behavior.
This patch fixes the problem by resetting all fields of clk_init_data
before assigning values for all gated clocks of Armada 7k/8k SoCs family.
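A minimal illustration of the general point (not the mvebu code): an on-stack
init structure must be explicitly zeroed, otherwise any field that is never
assigned holds an indeterminate value.

    struct init_data_sketch {
        const char *name;
        unsigned long flags;
    };

    void register_gate_sketch(void)
    {
        struct init_data_sketch init = { 0 };  /* every field starts at zero */

        init.name = "gate";
        /* init.flags is guaranteed to be 0 here, not stack garbage */
    }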
Fixes: d3da3eaef7f4 ("clk: mvebu: new driver for Armada CP110 system ...") Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There is a bug in the deferred ack logic that causes a race with the
destroy of a QP.
A packet causes a deferred ack to be pended by putting the QP
into an rcd queue.
A return from the driver interrupt processing will process that rcd
queue of QPs and attempt to do a direct send of the ack. At this
point no locks are held and the above QP could now be put in the reset
state in the qp destroy logic. A refcount protects the QP while it
is in the rcd queue so it isn't going anywhere yet.
If the direct send fails to allocate a pio buffer,
hfi1_schedule_send() is called to trigger sending an ack from the
send engine. There is no state test in that code path.
The refcount is then dropped from the driver.c caller
potentially allowing the qp destroy to continue from its
refcount wait in parallel with the workqueue scheduling of the qp.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The device lock was unnecessarily obtained in the bus rescan work before the
amthif client search. That causes incorrect lock ordering and a task
hang:
...
[88004.613213] INFO: task kworker/1:14:21832 blocked for more than 120 seconds.
...
[88004.645934] Workqueue: events mei_cl_bus_rescan_work
...
The correct lock order is
cl_bus_lock
device_lock
me_clients_rwsem
Move the device_lock into the amthif init function that is called
after me_clients_rwsem is released.
This fixes regression introduced by commit:
commit 025fb792bac3 ("mei: split amthif client init from end of clients enumeration")
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There are two separate issues that can lead to corrupted free space
trees.
1. The free space tree bitmaps had an endianness issue on big-endian
systems which is fixed by an earlier patch in this series.
2. btrfs-progs before v4.7.3 modified filesystems without updating the
free space tree.
To catch both of these issues at once, we need to force the free space
tree to be rebuilt. To do so, add a FREE_SPACE_TREE_VALID compat_ro bit.
If the bit isn't set, we know that it was either produced by a broken
big-endian kernel or may have been corrupted by btrfs-progs.
This also provides us with a way to add rudimentary read-write support
for the free space tree to btrfs-progs: it can just clear this bit and
have the kernel rebuild the free space tree.
We moved the code for creating the free space tree the first time that
it's enabled, but didn't move the clearing code along with it. This
breaks my (undocumented) intention that `mount -o
clear_cache,space_cache=v2` would clear the free space tree and then
recreate it.
Fixes: 511711af91f2 ("btrfs: don't run delayed references while we are creating the free space tree") Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Tested-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In convert_free_space_to_{bitmaps,extents}(), we buffer the free space
bitmaps in memory and copy them directly to/from the extent buffers with
{read,write}_extent_buffer(). The extent buffer bitmap helpers use byte
granularity, which is equivalent to a little-endian bitmap. This means
that on big-endian systems, the in-memory bitmaps will be written to
disk byte-swapped. To fix this, use byte-granularity for the bitmaps in
memory.
Fixes: a5ed91828518 ("Btrfs: implement the free space B-tree") Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Tested-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Greear reported:
> I see lots of instability as soon as I load up the carl9710 NIC.
> My application is going to be poking at it's debugfs files...
>
> BUG: KASAN: slab-out-of-bounds in carl9170_debugfs_read+0xd5/0x2a0
> [carl9170] at addr 0xffff8801bc1208b0
> Read of size 8 by task btserver/5888
> =======================================================================
> BUG kmalloc-256 (Tainted: G W ): kasan: bad access detected
> -----------------------------------------------------------------------
>
> INFO: Allocated in seq_open+0x50/0x100 age=2690 cpu=2 pid=772
>...
This breakage was caused by the introduction of intermediate
fops in debugfs by commit 9fd4dcece43a
("debugfs: prevent access to possibly dead file_operations at file open")
Thankfully, the original/real fops are still available in d_fsdata.
Reported-by: Ben Greear <greearb@candelatech.com> Signed-off-by: Christian Lamparter <chunkeey@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch fixes a crash that happens because b43legacy's
debugfs code expects file->f_op to be a pointer to its own
b43legacy_debugfs_fops struct. This is no longer the case
since commit 9fd4dcece43a
("debugfs: prevent access to possibly dead file_operations at file open")
This patch fixes a crash that happens because b43's
debugfs code expects file->f_op to be a pointer to
its own b43_debugfs_fops struct. This is no longer
the case since commit 9fd4dcece43a
("debugfs: prevent access to possibly dead file_operations at file open")
This patch introduces an accessor which can be used
by the users of debugfs (drivers, fs, ...) to get the
original file_operations struct. It also removes the
REAL_FOPS_DEREF macro in file.c and converts the code
to use the public version.
Previously, REAL_FOPS_DEREF was only available within
the file.c of debugfs. But having a public getter
available for debugfs users is important as some
drivers (carl9170 and b43) use the pointer of the
original file_operations in conjunction with container_of()
within their debugfs implementations.
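A hedged sketch of how a driver-side handler might use the accessor (the driver
structure is hypothetical, and the accessor is assumed to be the
debugfs_real_fops() helper this series introduces):

    static ssize_t my_debugfs_read(struct file *file, char __user *userbuf,
                                   size_t count, loff_t *ppos)
    {
        const struct file_operations *real_fops = debugfs_real_fops(file);
        struct my_debugfs_fops *dfops =
            container_of(real_fops, struct my_debugfs_fops, fops);

        /* ... use the driver-private data reachable from dfops ... */
        return 0;
    }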
Commit d9676fa152c83b ("ARCv2: Enable LOCKDEP") changed
local_save_flags() to not return the raw STATUS32 but a value encoded such
that it can be fed directly to the CLRI/SETI instructions.
However the STATUS32.E[] field was not captured correctly, as it corresponds
to bits [4:1] in the register and not [3:0].
At the end of arc_init_IRQ() the STATUS32.IE flag is going to be affected by
the "flag" instruction, but "flag" never touches the IE flag on ARCv2. So the
"kflag" instruction must be used instead of "flag".
There are calls to serial8250_rpm_{get|put}() in __do_stop_tx_rs485() that are
certainly placed in a wrong location. I dunno how it had been tested with
runtime PM enabled because it is an obvious "sleep in atomic context" error.
Besides that serial8250_rpm_get() is called immediately after an IO just
happened. It implies that the device is already powered on, see implementation
of serial8250_em485_rts_after_send() and serial8250_clear_fifos() for the
details.
No bug has been seen due to this because, as I can guess, auto suspend mode
is in use and the scheduled transition to suspend happens quite a bit later
than is needed for a few writes to the port. It might be possible to trigger
a warning if stop_tx_timer fires when the device is suspended.
Refactor the code to use runtime PM only in case of timer function.
It should check data->pclk, not data->clk, when getting apb_pclk.
Fixes: c8ed99d4f6a8("serial: 8250_dw: Add support for deferred probing") Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Tested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Since commit 18dfef9c7f87 ("serial: atmel: convert to irq handling
provided mctrl-gpio"), interrupts from GPIOs are not disabled any more
when the serial port is closed, leading to an oops when the one of the
input pin is toggled (CTS/DSR/DCD/RNG).
This is only the case if those pins are used as GPIOs, i.e. declared
like that:
usart1: serial@f8020000 {
/* CTS and DTS will be handled by GPIO */
status = "okay";
rts-gpios = <&pioB 17 GPIO_ACTIVE_LOW>;
cts-gpios = <&pioB 16 GPIO_ACTIVE_LOW>;
dtr-gpios = <&pioB 14 GPIO_ACTIVE_LOW>;
dsr-gpios = <&pioC 31 GPIO_ACTIVE_LOW>;
rng-gpios = <&pioB 12 GPIO_ACTIVE_LOW>;
dcd-gpios = <&pioB 15 GPIO_ACTIVE_LOW>;
};
That's because modem interrupts used to be freed in atmel_shutdown().
After commit 18dfef9c7f87 ("serial: atmel: convert to irq handling
provided mctrl-gpio"), this code was just removed.
Calling atmel_disable_ms() disables the interrupts and everything works
fine again.
Tested on at91sam9g35-cm
(This patch doesn't apply on -stable kernels, fixes for 4.4 and 4.7 will
be sent after this one is applied.)
Signed-off-by: Richard Genoud <richard.genoud@gmail.com> Fixes: 18dfef9c7f87 ("serial: atmel: convert to irq handling provided mctrl-gpio") Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The USR2_DCDIN bit is tested for in the usr1 register. As the name
suggests, the usr2 register should be used instead. This fixes
reading the carrier detect status.
This is an ancient bug that was actually attempted to be fixed once
(badly) by me eleven years ago in commit 4ceb5db9757a ("Fix
get_user_pages() race for write access") but that was then undone due to
problems on s390 by commit f33ea7f404e5 ("fix get_user_pages bug").
In the meantime, the s390 situation has long been fixed, and we can now
fix it by checking the pte_dirty() bit properly (and do it better). The
s390 dirty bit was implemented in abf09bed3cce ("s390/mm: implement
software dirty bits") which made it into v3.9. Earlier kernels will
have to look at the page state itself.
Also, the VM has become more scalable, and what used a purely
theoretical race back then has become easier to trigger.
To fix it, we introduce a new internal FOLL_COW flag to mark the "yes,
we already did a COW" rather than play racy games with FOLL_WRITE that
is very fundamental, and then use the pte dirty flag to validate that
the FOLL_COW flag is still valid.
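Based on the description above, the validation could look roughly like this
hedged sketch (not necessarily the exact helper that was merged):

    /* Writing through a read-only pte via FOLL_FORCE is only allowed once
     * the COW has actually happened, which the pte dirty bit attests to. */
    static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
    {
        return pte_write(pte) ||
               ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
    }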
Commit 2a6fba6 "xfs: only return -errno or success from attr ->put_listent"
changes the return value of __xfs_xattr_put_listent to 0 in the case where
there is insufficient space in the buffer, assuming that setting
context->count to -1 would be enough, but all of the ->put_listent callers
only check seen_enough. This results in a failed assertion:
XFS: Assertion failed: context->count >= 0, file: fs/xfs/xfs_xattr.c, line: 175
in the insufficient buffer size case.
This is only reproducible with at least 2 xattrs and only when the buffer
gets depleted before the last one.
Furthermore, if the buffer size is such that it is enough to hold the last
xattr's name, but not enough to hold the sum of the preceding xattr names,
listxattr won't fail with ERANGE, but will succeed, returning the last
xattr's name without the first character. The first character ends up
overwriting data stored at (context->alist - 1).
Signed-off-by: Artem Savkov <asavkov@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Cc: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Runtime PM should be configured already once we call device_add. See
also the description in this mail thread
https://lists.linuxfoundation.org/pipermail/linux-pm/2009-November/023198.html
or the order of calls e.g. in usb_new_device.
The changed order also helps to avoid scenarios where runtime pm for
&shost->shost_gendev is activated whilst the parent is suspended,
resulting in error message "runtime PM trying to activate child device
hostx but parent yyy is not active".
In addition properly reverse the runtime pm calls in the error path.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The rcar_fcp_enable() function immediately returns successfully when the
FCP device pointer is NULL to avoid forcing the users to check the FCP
device manually before every call. However, the stub version of the
function used when the FCP driver is disabled returns -ENOSYS
unconditionally, resulting in a different API contract for the two
versions of the function.
As a user that requires FCP support will fail at probe time when calling
rcar_fcp_get() if the FCP driver is disabled, the stub version of the
rcar_fcp_enable() function will only be called with a NULL FCP device.
We can thus return 0 unconditionally to align the behaviour with the
normal version of the function.
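A sketch of the aligned stub as described (kernel-style, with the Kconfig guard
around it implied):

    /* With the FCP driver disabled, the stub simply succeeds, matching the
     * NULL-device behaviour of the real implementation. */
    static inline int rcar_fcp_enable(struct rcar_fcp_device *fcp)
    {
        return 0;
    }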
The req_canceled() callback is used by tpm_transmit() periodically to
check whether the request has been canceled while it is receiving a
response from the TPM.
The TPM_CRB_CTRL_CANCEL register was cleared already in the crb_cancel
callback, which has two consequences:
* Cancel might not happen.
* req_canceled() always returns zero.
A better place to clear the register is when starting to send a new
command. The behavior of TPM_CRB_CTRL_CANCEL is described in the
section 5.5.3.6 of the PTP specification.
Unseal and load operations should be done as an atomic operation. This
commit introduces unlocked tpm_transmit() so that tpm2_unseal_trusted()
can do the locking by itself.
Fixes: 0fe5480303a1 ("keys, trusted: seal/unseal with TPM 2.0 chips") Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ima tries to call ->setxattr() on overlayfs dentry after having locked
underlying inode, which results in a deadlock.
Reported-by: Krisztian Litkey <kli@iki.fi> Fixes: 4bacc9c9234c ("overlayfs: Make f_path always point to the overlay and f_inode to the underlay") Signed-off-by: Miklos Szeredi <mszeredi@redhat.com> Cc: Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We know that 'ret = 0' because it has been tested a few lines above.
So, if 'kzalloc' fails, 0 will be returned instead of an error code.
Return -ENOMEM instead.
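A kernel-style sketch of the pattern being fixed (names are illustrative):

    dev = kzalloc(sizeof(*dev), GFP_KERNEL);
    if (!dev)
        return -ENOMEM;   /* not "return ret", which is still 0 here */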
Fixes: a0d46a3dfdc3 ("ARM: cpuidle: Register per cpuidle device") Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The Qualcomm SPMI GPIO and MPP lines are problematic: they
are fetched from the main MFD driver with platform_get_irq(),
which means that at this point they will all be assigned the
flags set up for the interrupts in the device tree.
That is problematic since these are flagged as rising edge,
and at this point the interrupt descriptor is assigned a
rising edge, while the only thing the GPIO/MPP drivers really
do is issue irq_get_irqchip_state() on the line to read it
out and to provide a .to_irq() helper for *other* IRQ
consumers.
If another device tree node tries to flag the same IRQ
for use as something else than rising edge, the kernel
irqdomain core will protest like this:
type mismatch, failed to map hwirq-NN for <FOO>!
Which is what happens when the device tree defines two
contradictory flags for the same interrupt line.
To work around this and alleviate the problem, assign 0
as flag for the interrupts taken by the PM GPIO and MPP
drivers. This will lead to the flag being unset, and a
second consumer requesting rising, falling, both or level
interrupts will be respected. This is what the qcom-pm*.dtsi
files already do.
Switched to using the symbolic name IRQ_TYPE_NONE so that
this is more readable.
This misconfiguration was caused by copy/pasting the
APQ8064 set-up; the latter has been fixed in a separate
patch.
Tested with one of the SPMI GPIOs: after this I can
successfully request one of these GPIOs as falling edge
from the device tree.
Fixes: 0840ea9e4457 ("ARM: dts: add GPIO and MPP to MSM8660 PMIC") Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Cc: Stephen Boyd <sboyd@codeaurora.org> Cc: Björn Andersson <bjorn.andersson@linaro.org> Cc: Ivan T. Ivanov <ivan.ivanov@linaro.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Andy Gross <andy.gross@linaro.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The Qualcomm PMIC GPIO and MPP lines are problematic: they
are fetched from the main MFD driver with platform_get_irq(),
which means that at this point they will all be assigned the
flags set up for the interrupts in the device tree.
That is problematic since these are flagged as rising edge,
and at this point the interrupt descriptor is assigned a
rising edge, while the only thing the GPIO/MPP drivers really
do is issue irq_get_irqchip_state() on the line to read it
out and to provide a .to_irq() helper for *other* IRQ
consumers.
If another device tree node tries to flag the same IRQ
for use as something else than rising edge, the kernel
irqdomain core will protest like this:
type mismatch, failed to map hwirq-NN for <FOO>!
Which is what happens when the device tree defines two
contradictory flags for the same interrupt line.
To work around this and alleviate the problem, assign 0
as flag for the interrupts taken by the PM GPIO and MPP
drivers. This will lead to the flag being unset, and a
second consumer requesting rising, falling, both or level
interrupts will be respected. This is what the qcom-pm*.dtsi
files already do.
Switched to using the symbolic name IRQ_TYPE_NONE so that
this is more readable.
Fixes: bce360469676 ("ARM: dts: apq8064: add pm8921 mpp support") Fixes: 874443fe9e33 ("ARM: dts: apq8064: Add pm8921 mfd and its gpio node") Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Cc: Stephen Boyd <sboyd@codeaurora.org> Cc: Björn Andersson <bjorn.andersson@linaro.org> Cc: Ivan T. Ivanov <ivan.ivanov@linaro.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Andy Gross <andy.gross@linaro.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>