In an environment where the KDC is running Active Directory, the
exported composite name field returned in the context could be large
enough to span a page boundary. Attaching a scratch buffer to the
decoding xdr_stream helps deal with those cases.
The case where we saw this was actually due to behavior that's been
fixed in newer gss-proxy versions, but we're fixing it here too.
Signed-off-by: Scott Mayhew <smayhew@redhat.com> Reviewed-by: Simo Sorce <simo@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If we find a non-confirmed openowner we jump to exit the function, but do
not set an error value. Fix this by factoring out a helper to do the
check and properly set the error from nfsd4_validate_stateid.
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit df52699e4fcef ("NFSv4.1: Don't cache deviceids that have no
notifications") causes the Linux NFS client to stop caching deviceids
unless a server pretends to support deviceid notifications. While this
behavior is stupid and the language around this area in rfc5661 is a
mess clarified by an errata that I submitted, Trond insists on this
behavior. Not caching deviceids degrades block layout performance
massively as a GETDEVICEINFO is fairly expensive.
So add this hack to make the Linux client happy again.
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NUMA balancing is meant to be disabled by default on UMA machines but
the check is using nr_node_ids (highest node) instead of
num_online_nodes (online nodes).
The consequences are that a UMA machine with a node ID of 1 or higher
will enable NUMA balancing. This will incur useless overhead due to
minor faults with the impact depending on the workload. These are the
impact on the stats when running a kernel build on a single node machine
whose node ID happened to be 1:
                              vanilla   patched
NUMA base PTE updates         5113158         0
NUMA huge PMD updates             643         0
NUMA page range updates       5442374         0
NUMA hint faults              2109622         0
NUMA hint local faults        2109622         0
NUMA hint local percent           100       100
NUMA pages migrated                 0         0
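A minimal sketch of the intended check (the surrounding function and
helper names are assumptions, not the exact upstream code):

	/* Decide the NUMA balancing default from the number of online
	 * nodes, not from the highest node ID: nr_node_ids is 2 on a UMA
	 * machine whose only node happens to be node 1. */
	static void check_numabalancing_default(void)
	{
		if (num_online_nodes() > 1)	/* was: nr_node_ids > 1 */
			set_numabalancing_state(true);
	}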
root->ino_ida is used for kernfs inode number allocations. Since IDA has
a layered structure, different IDs can reside on the same layer, which
is currently accounted to some memory cgroup. The problem is that each
kmem cache of a memory cgroup has its own directory on sysfs (under
/sys/fs/kernel/<cache-name>/cgroup). If the inode number of such a
directory or any file in it gets allocated from a layer accounted to the
cgroup which the cache is created for, the cgroup will get pinned for
good, because one has to free all kmem allocations accounted to a cgroup
in order to release it and destroy all its kmem caches. That said we
must not account layers of ino_ida to any memory cgroup.
Since per net init operations may create new sysfs entries directly
(e.g. lo device) or indirectly (nf_conntrack creates a new kmem cache
per each namespace, which, in turn, creates new sysfs entries), an easy
way to reproduce this issue is by creating network namespace(s) from
inside a kmem-active memory cgroup.
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Acked-by: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Greg Thelen <gthelen@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Not all kmem allocations should be accounted to memcg. The following
patch gives an example where accounting a certain type of allocation to
memcg can effectively result in a memory leak. This patch adds the
__GFP_NOACCOUNT flag which, if passed to kmalloc and friends, will force
the allocation to go through the root cgroup. It will be used by the
next patch.
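As a hedged illustration of how a caller would use the flag (not taken
from the follow-up patch itself):

	/* Charge this allocation to the root cgroup regardless of the
	 * caller's memcg, so the object can never pin a dying cgroup. */
	void *obj = kmalloc(size, GFP_KERNEL | __GFP_NOACCOUNT);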
Note that, since with kmemleak enabled each kmalloc implies yet another
allocation from the kmemleak_object cache, we add __GFP_NOACCOUNT to
gfp_kmemleak_mask.
Alternatively, we could introduce a per kmem cache flag disabling
accounting for all allocations of a particular kind, but (a) we would not
be able to bypass accounting for kmalloc then and (b) a kmem cache with
this flag set could not be merged with a kmem cache without this flag,
which would increase the number of global caches and therefore
fragmentation even if the memory cgroup controller is not used.
Despite its generic name, __GFP_NOACCOUNT currently disables accounting
only for kmem allocations, while user page allocations are always
charged. To catch abuse of this flag, a warning is issued on an attempt
to pass it to mem_cgroup_try_charge.
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Cc: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Greg Thelen <gthelen@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
On architectures where the stack grows upwards (CONFIG_STACK_GROWSUP=y,
currently parisc and metag only) stack randomization sometimes leads to crashes
when the stack ulimit is set to lower values than STACK_RND_MASK (which is 8 MB
by default if not defined in arch-specific headers).
The problem is that when the stack vm_area_struct is set up in
fs/exec.c, the additional space needed for the stack randomization (as
defined by the value of STACK_RND_MASK) is not yet taken into account,
and as such, when the stack randomization code adds a random offset to
the stack start, the stack effectively becomes smaller than what the
user defined via rlimit_max(RLIMIT_STACK), which then sometimes leads
to out-of-stack situations and crashes.
This patch fixes it by adding the maximum possible amount of memory (based on
STACK_RND_MASK) which theoretically could be added by the stack randomization
code to the initial stack size. That way, the user-defined stack size is always
guaranteed to be at minimum what is defined via rlimit_max(RLIMIT_STACK).
This bug is currently not visible on the metag architecture, because on metag
STACK_RND_MASK is defined to 0 which effectively disables stack randomization.
The changes to fs/exec.c are inside an "#ifdef CONFIG_STACK_GROWSUP"
section, so they do not affect other platforms beside those where the
stack grows upwards (parisc and metag).
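Roughly, the fix amounts to something like the following in the
CONFIG_STACK_GROWSUP branch of setup_arg_pages() (a hedged sketch, the
variable name is illustrative):

	/* Reserve room for the largest offset the randomization code may
	 * add, so the usable stack never drops below the user's rlimit. */
	stack_size += STACK_RND_MASK << PAGE_SHIFT;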
I've discovered a case where both arm and arm64 will miss a ptrace
syscall-exit that they should report. If the syscall is entered
without TIF_SYSCALL_TRACE set, then it goes on the fast path. It's
then possible to have TIF_SYSCALL_TRACE added in the middle of the
syscall, but ret_fast_syscall doesn't check this flag again.
Fix this by always checking for a syscall trace in the fast exit path.
Reported-by: Josh Stone <jistone@redhat.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch sets the display clock correctly. If the display clock isn't
set correctly, the messages below appear and the display controller
doesn't work correctly.
exynos-drm: No connectors reported connected with modes
[drm] Cannot find any crtc or sizes - going 1024x768
Fixes: abc0b1447d49 ("drm: Perform basic sanity checks on probed modes") Signed-off-by: Inki Dae <inki.dae@samsung.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Kukjin Kim <kgene@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
According to the imx27 documentation, fec has a 4 Kbyte
memory space map. Moreover, the current 16 Kbyte mapping
overlaps the SCC (Security Controller) memory register
space. So, we reduce the memory register space to 4 Kbyte.
An ERR_PTR was dereferenced during sub-domain parsing if the parent
domain could not be obtained (because of an invalid phandle or deferred
registration of the parent domain).
The Exynos power domain code checked whether
of_genpd_get_from_provider() returned NULL and in that case skipped
that power domain node. However, this function returns an ERR_PTR or a
valid pointer, not NULL.
Fixes: 0f7807518fe1 ("ARM: EXYNOS: add support for sub-power domains") Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Kukjin Kim <kgene@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
At boot time we round the memblock limit down to section size in an
attempt to ensure that we will have mapped this RAM with section
mappings prior to allocating from it. When mapping RAM we iterate over
PMD-sized chunks, creating these section mappings.
Section mappings are only created when the end of a chunk is aligned to
section size. Unfortunately, with classic page tables (where PMD_SIZE is
2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
size the first 1M will not be mapped despite having been accounted for
in the memblock limit. This has been observed to result in page tables
being allocated from unmapped memory, causing boot-time hangs.
This patch modifies the memblock limit rounding to always round down to
PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
will round the memblock limit down to a 2M boundary, matching the limits
on section mappings, and preventing allocations from unmapped memory.
For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
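A hedged sketch of the rounding (helper names per the generic kernel
API):

	/* Round down to a PMD boundary: with classic page tables this is
	 * a 2M granule, matching where section mappings can actually
	 * start. */
	memblock_limit = round_down(memblock_limit, PMD_SIZE);
	memblock_set_current_limit(memblock_limit);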
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Stefan Agner <stefan@agner.ch> Tested-by: Stefan Agner <stefan@agner.ch> Acked-by: Laura Abbott <labbott@redhat.com> Tested-by: Hans de Goede <hdegoede@redhat.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
A block plug callback could sleep, so we introduced a parameter
'from_schedule' that the corresponding drivers can use to distinguish a
schedule plug flush from a plug finish. Unfortunately io_schedule_out
still uses blk_flush_plug(). This causes the output below (note, I
added a might_sleep() in raid1_unplug to make it trigger faster, but
the whole thing doesn't matter if I add might_sleep). In raid1/10, this
can cause a deadlock.
This patch makes io_schedule_out always use blk_schedule_flush_plug.
This should only impact drivers (as far as I know, raid 1/10) which are
sensitive to the 'from_schedule' parameter.
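A hedged sketch of the change in the io_schedule path:

	/* Flush the plug via the scheduler-safe helper, which passes
	 * from_schedule=true so drivers may defer work that could sleep. */
	if (blk_needs_flush_plug(current))
		blk_schedule_flush_plug(current);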
Git commit 152125b7a882df36a55a8eadbea6d0edf1461ee7
"s390/mm: implement dirty bits for large segment table entries"
broke the pmd_pfn function: it changed the return value from
'unsigned long' to 'int'. This breaks all machine configurations
with memory above the 8TB line.
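A hedged sketch of the corrected helper (the body is illustrative; the
point is the return type):

	/* Must return unsigned long: an 'int' truncates PFNs once memory
	 * goes past the 8TB line (pfn >= 2^31 with 4K pages). */
	static inline unsigned long pmd_pfn(pmd_t pmd)
	{
		return pmd_val(pmd) >> PAGE_SHIFT;	/* body simplified */
	}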
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
No matter how the driver manages its NAPI context, there's no way
sending frames to it from a timer can be correct, since it would
corrupt the internal GRO lists.
To avoid that, always use the non-NAPI path when releasing frames
from the timer.
Reported-by: Jean Trivelly <jean.trivelly@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Multithreaded tests showed that the icv buffer in the current ghash
implementation is not handled correctly. Moving this working ghash
buffer value to the descriptor context fixed this. The code is tested
and verified with a multithreaded application via the af_alg interface.
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Gerald Schaefer <geraldsc@linux.vnet.ibm.com> Reported-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch fixes an inverted return value of the gpio get_direction
function.
The wrong value causes the direction sysfs entry and GPIO debugfs file
to indicate incorrect GPIO direction settings. In some cases it also
prevents setting GPIO output values.
The problem is also present in all other stable kernel versions since
linux-3.12.
This code calls cpu_resume() using a straight branch (b), so
now that we have moved cpu_resume() back to .text, this should
be moved there as well. Any direct references to symbols that will
remain in the .data section are replaced with explicit PC-relative
references.
Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add the USB ID to link the D-Link DWA 130 USB Wifi adapter
to the rt2830 driver.
Signed-off-by: Scott Branden <sbranden@broadcom.com> Signed-off-by: Pieter Truter <ptruter@broadcom.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Cc: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Consider "(u64)insn1.imm << 32 | imm" in the arm64 JIT. Since imm is
signed 32-bit, it is sign-extended to 64-bit, losing the high 32 bits.
The fix is to convert imm to u32 first, which will be zero-extended to
u64 implicitly.
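A hedged sketch of the corrected assembly of the 64-bit immediate
(field names as used in the commit text):

	/* Zero-extend each 32-bit half before combining; OR-ing the
	 * signed imm directly would sign-extend it and corrupt the high
	 * word. */
	u64 imm64 = ((u64)(u32)insn1.imm << 32) | (u32)imm;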
Cc: Zi Shen Lim <zlim.lnx@gmail.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Fixes: 30d3d94cc3d5 ("arm64: bpf: add 'load 64-bit immediate' instruction") Signed-off-by: Xi Wang <xi.wang@gmail.com>
[will: removed non-arm64 bits and redundant casting] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The queued TRIM problems appear to be generic to Samsung's firmware and
not tied to a particular model. A recent update to the 840 EVO firmware
introduced the same issue as we saw on 850 Pro.
Blacklist queued TRIM on all 800-series drives while we work this issue
with Samsung.
Reported-by: Günter Waller <g.wal@web.de> Reported-by: Sven Köhler <sven.koehler@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When the LPM policy is set to ATA_LPM_MAX_POWER, the device might
generate a spurious PHY event that causes errors on the link.
Ignore this event if it occurred within 10s after the policy change.
The timeout was chosen after observing that on a Dell XPS13 9333 these
spurious events can occur up to roughly 6s after the policy change.
Avoton AHCI occasionally sees drive probe timeouts at driver load time.
When this happens SCR_STATUS indicates device detected, but no D2H FIS
reception. Reset the internal link state machines by bouncing
port-enable in the PCS register when this occurs.
Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The journal revoke block recovery code does not check r_count for
sanity, which means that an evil value of r_count could result in
the kernel reading off the end of the revoke table and into whatever
garbage lies beyond. This could crash the kernel, so fix that.
However, in testing this fix, I discovered that the code to write
out the revoke tables also was not correctly checking to see if the
block was full -- the current offset check is fine so long as the
revoke table space size is a multiple of the record size, but this
is not true when either journal_csum_v[23] are set.
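A hedged sketch of the recovery-side sanity check (the real fix also
accounts for the header size and record alignment):

	rcount = be32_to_cpu(header->r_count);
	/* Refuse a revoke block whose claimed record area would run past
	 * the end of the journal block. */
	if (rcount > journal->j_blocksize)
		return -EINVAL;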
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently when journal restart fails, we'll have the h_transaction of
the handle set to NULL to indicate that the handle has been effectively
aborted. We handle this situation quietly in jbd2_journal_stop() and
just free the handle and exit, because everything else has been done
before we attempted (and failed) to restart the journal.
Unfortunately there are a number of problems with that approach
introduced with commit
41a5b913197c "jbd2: invalidate handle if jbd2_journal_restart()
fails"
First of all in ext4 jbd2_journal_stop() will be called through
__ext4_journal_stop() where we would try to get a hold of the superblock
by dereferencing h_transaction which in this case would lead to NULL
pointer dereference and crash.
In addition we're going to free the handle regardless of the refcount
which is bad as well, because others up the call chain will still
reference the handle so we might potentially reference already freed
memory.
Moreover it's expected that we'll get an aborted handle as well as a
detached handle in some of the journalling functions as the error
propagates up the stack, so it's unnecessary to call WARN_ON every time
we get a detached handle.
And finally we might leak some memory by forgetting to free reserved
handle in jbd2_journal_stop() in the case where handle was detached from
the transaction (h_transaction is NULL).
Fix the NULL pointer dereference in __ext4_journal_stop() by just
calling jbd2_journal_stop() quietly as suggested by Jan Kara. Also fix
the potential memory leak in jbd2_journal_stop() and use proper
handle refcounting before we attempt to free it to avoid use-after-free
issues.
And finally remove all WARN_ON(!transaction) from the code so that we do
not get random traces when something goes wrong because when journal
restart fails we will get to some of those functions.
Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
A read() from a pty master may mistakenly indicate EOF (errno == -EIO)
after the pty slave has closed, even though input data remains to be read.
For example,
Several conditions are required to trigger this race:
1. the ldisc read buffer must become full so the input worker exits
2. the read() count parameter must be >= 4096 so the ldisc read buffer
is empty
3. the subsequent read() occurs before the kicked worker has processed
more input
However, the underlying cause of the race is that data is pipelined, while
tty state is not; ie., data already written by the pty slave end is not
yet visible to the pty master end, but state changes by the pty slave end
are visible to the pty master end immediately.
Pipeline the TTY_OTHER_CLOSED state through input worker to the reader.
1. Introduce TTY_OTHER_DONE which is set by the input worker when
TTY_OTHER_CLOSED is set and either the input buffers are flushed or
input processing has completed. Readers/polls are woken when
TTY_OTHER_DONE is set.
2. Reader/poll checks TTY_OTHER_DONE instead of TTY_OTHER_CLOSED.
3. A new input worker is started from pty_close() after setting
TTY_OTHER_CLOSED, which ensures the TTY_OTHER_DONE state will be
set if the last input worker is already finished (or just about to
exit).
Remove tty_flush_to_ldisc(); no in-tree callers.
Fixes: 52bce7f8d4fc ("pty, n_tty: Simplify input processing on final close")
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=96311 BugLink: http://bugs.launchpad.net/bugs/1429756 Reported-by: Andy Whitcroft <apw@canonical.com> Reported-by: H.J. Lu <hjl.tools@gmail.com> Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When gsmtty_remove puts the dlci, it will cause a memory leak if
dlci->port's refcount is zero. So do the cleanup work in the .cleanup
callback instead.
The dlci will be put for the last time via one of two call chains:
1) gsmld_close -> gsm_cleanup_mux -> gsm_dlci_release -> dlci_put
2) gsmld_remove -> dlci_put
so there is a race, and whether the memory leak occurs depends on it.
In call chain 2 we hit the memory leak, as the comment below explains.
Recent toolchains force the TOC to be 256 byte aligned. We need
to enforce this alignment in our linker script, otherwise pointers
to our TOC variables (__toc_start, __prom_init_toc_start) could
be incorrect.
If they are bad, we die a few hundred instructions into boot.
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Before 69111bac42f5 ("powerpc: Replace __get_cpu_var uses"), in
save_mce_event, index got the value of mce_nest_count, and
mce_nest_count was incremented *after* index was set.
However, that patch changed the behaviour so that mce_nest count was
incremented *before* setting index.
This causes an off-by-one error, as get_mce_event sets index as
mce_nest_count - 1 before reading mce_event. Thus get_mce_event reads
bogus data, causing warnings like
"Machine Check Exception, Unknown event version 0 !"
and breaking MCEs handling.
Restore the old behaviour and unbreak MCE handling by subtracting one
from the newly incremented value.
The same broken change occurred in machine_check_queue_event (which
sets a queue read by machine_check_process_queued_event). Fix that too,
unbreaking the printing of MCE information.
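A hedged sketch of the save_mce_event() side of the fix:

	/* Keep 'index' equal to the pre-increment value so it still names
	 * the slot this event is written into, matching get_mce_event(). */
	index = __this_cpu_inc_return(mce_nest_count) - 1;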
Fixes: 69111bac42f5 ("powerpc: Replace __get_cpu_var uses") CC: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> CC: Christoph Lameter <cl@linux.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Most kernel code assumes that the interface array in
struct usb_configuration is NULL terminated.
When a gadget is composed with configfs, the configuration
structure may be reused for different sets of functions.
This bug happens because purge_configs_funcs() sets
only next_interface_id to 0. The interface array still
contains pointers to already freed interfaces. If on the
second try we add fewer interfaces than before, we may
access unallocated memory when trying to get the
interface descriptors.
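A hedged sketch of the extra cleanup in purge_configs_funcs(), assuming
'c' is the struct usb_configuration being purged:

	c->next_interface_id = 0;
	/* Also forget the stale interface pointers so a later bind with
	 * fewer functions cannot walk into freed memory. */
	memset(c->interface, 0, sizeof(c->interface));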
Signed-off-by: Krzysztof Opasiak <k.opasiak@samsung.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Without this flag some versions of these enclosures do not work.
Reported-and-tested-by: Christian Schaller <cschalle@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Samsung has just released a portable USB3 SSD, coming in a very small
and nice form factor. Its USB ID is 04e8:8001, which unfortunately is
already used by the Palm Visor driver for the Samsung I330 phone cradle.
Having pl2303 or visor pick up this device ID results in conflicts with
the usb-storage driver, which handles the newly released portable USB3
SSD.
To work around this conflict, I've dug up a mailing list post [1] from a
long time ago, in which a user posts the full USB descriptor
information. The most specific value in this appears to be the interface
class, which has value 255 (0xff). Since usb-storage requires an
interface class of 0x8, I believe it's correct to disambiguate the two
devices by matching on 0xff inside visor.
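A hedged sketch of such a match entry (USB_DEVICE_INTERFACE_CLASS is
the generic macro from <linux/usb.h>; the vendor/product defines are
assumptions):

	/* Claim only interface class 0xff, leaving the 04e8:8001 USB3 SSD
	 * (interface class 0x08) to usb-storage. */
	{ USB_DEVICE_INTERFACE_CLASS(SAMSUNG_VENDOR_ID,
				     SAMSUNG_SCH_I330_ID, 0xff) },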
If the xHCI host controller has died (i.e., the device was removed) or
suffered another serious fatal error (STS_FATAL), then xhci_irq should
handle this condition with IRQ_HANDLED instead of -ESHUTDOWN.
Signed-off-by: Joe Lawrence <joe.lawrence@stratus.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Our event ring consists of only one segment, and we risk filling
the event ring in case we get isoc transfers with short intervals
such as webcams that fill a TD every microframe (125us)
With 64 TRB segment size one usb camera could fill the event ring in 8ms.
A setup with several cameras and other devices can fill up the
event ring as it is shared between all devices.
This has occurred when uvcvideo queues 5 * 32TD URBs which then
get cancelled when the video mode changes. The cancelled URBs are returned
in the xhci interrupt context and blocks the interrupt handler from
handling the new events.
A full event ring will block xhci from scheduling traffic and affect
all devices connected to the xhci; they will see errors such as Missed
Service Intervals for isoc devices, and Split transaction errors for
LS/FS interrupt devices.
Increasing TRBS_PER_SEGMENT will also increase the default endpoint
ring size, which is welcome as for most isoc transfers we had to
dynamically expand the endpoint ring anyway to be able to queue the
5 * 32 TDs uvcvideo queues.
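A hedged sketch of the change (the new value here is an assumption; the
text above only states the old 64-TRB size):

	/* Larger segments give the shared event ring, and the default
	 * endpoint rings, more headroom before dynamic expansion is
	 * needed. */
	#define TRBS_PER_SEGMENT	256	/* previously 64 */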
Isoc TDs usually consist of one TRB, sometimes two. When all goes well we
receive only one success event for a TD, and move the dequeue pointer to
the next TD.
This fails if the TD consists of two TRBs and we get a transfer error
on the first TRB; we will then see two events for that TD.
Fix this by making sure the event we get is for the last TRB in that TD
before moving the dequeue pointer to the next TD. This will resolve some
of the uvc and dvb issues with the
"ERROR Transfer event TRB DMA ptr not part of current TD" error message
See https://bugzilla.redhat.com/show_bug.cgi?id=1025672
We need to put() the reference to the scsi host that we got in
pscsi_configure_device(). In VIRTUAL_HOST mode it is associated with
the dev_virt, not the hba_virt.
Signed-off-by: Andy Grover <agrover@redhat.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fix a regression introduced by commit <29ef8a53542a>. After it, writing
AT commands to /dev/GCT-ATM0 is unsuccessful (no echo, no response)
and dmesg shows "gdmtty: invalid payload : 1 16 f011".
Before that commit the value of dummy_cnt was only the padding size.
After switching to ALIGN() this value is increased by its first
argument, so the following usage of this variable needs correction.
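A hedged sketch of the corrected padding computation (dummy_cnt per the
description; the length variable name is assumed):

	/* ALIGN() returns the padded length, not the padding itself, so
	 * subtract the original length to get back a pure padding count. */
	dummy_cnt = ALIGN(payload_size, 4) - payload_size;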
The string iwpm_ulib_name is recorded in an nlmsg as a netlink
attribute. Without this fix, parsing of the nlmsg by the userspace port
mapper service fails because of an unknown attribute length, causing
the port mapper service not to register the client that sent the nlmsg.
According to the RM of wm8958, BCLK DIV 348 doesn't exist, correct it
to 384.
Signed-off-by: Zidan Wang <zidan.wang@freescale.com> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
It should be "RINPUT3" instead of "LINPUT3" route to "Right Input
Mixer".
Signed-off-by: Zidan Wang <zidan.wang@freescale.com> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When there is a prefix specified, currently we add this prefix to
widget->name, but not to widget->sname.
This causes a failure in snd_soc_dapm_link_dai_widgets():
if (!w->sname || !strstr(w->sname, dai_w->name))
because dai_w->name has the prefix added, but w->sname does not.
We should also add the prefix to the stream name.
set_dai_fmt_both() callback is called from snd_soc_runtime_set_dai_fmt()
which is called from snd_soc_register_card(), but at this time codec
is not powered on yet. Replace direct i2c write with regcache write.
Fixes: 5f0acedddf533c (ASoC: rx1950_uda1380: Use static DAI format setup) Fixes: 5cc10b9b77c234 (ASoC: h1940_uda1380: Use static DAI format setup) Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
mc13xxx_reg_rmw() won't change any bit if passing 0 to the mask field.
Pass AUDIO_SSI_SEL instead of 0 for the mask field to set AUDIO_SSI_SEL
bit.
Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Through the regression report, it was revealed that the
tpacpi_led_set() call to the thinkpad_acpi helper doesn't only toggle
the mute LED but actually mutes the sound. This contradicts the
expectation and rather confuses users.
According to Henrique, it's not trivial to judge which TP model
behaves "LED-only" and which model does something more intrusive, as
Lenovo's implementations vary model by model. So, for safety reasons,
we should revert the patch for now.
The module notifier call chain for MODULE_STATE_COMING was moved up before
the parsing of args, into the complete_formation() call. But if the module failed
to load after that, the notifier call chain for MODULE_STATE_GOING was
never called and that prevented the users of those call chains from
cleaning up anything that was allocated.
Link: http://lkml.kernel.org/r/554C52B9.9060700@gmail.com Reported-by: Pontus Fuchs <pontus.fuchs@gmail.com> Fixes: 4982223e51e8 "module: set nx before marking module MODULE_STATE_COMING" Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When we find that a child has died while we'd been trying to ascend,
we should go into the first live sibling itself, rather than its
sibling.
The off-by-one in question had been introduced in "deal with deadlock
in d_walk()" and the fix needs to be backported to all branches that
commit has been backported to.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If the specified maximum length of the string is a multiple of unsigned
long, we would load one long behind the specified maximum. If that
happens to be in a next page, we can hit a page fault although we were
not expected to.
Fix the off-by-one bug in the test whether we are at the end of the
specified range.
Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The function brcmf_msgbuf_get_pktid() may return a NULL pointer so
the callers should check the return pointer before accessing it to
avoid the crash below (see [1]):
Before commit 035a61c314eb ("clk: Make clk API return per-user
struct clk instances") we acquired the enable_lock in
__clk_set_parent_{before,after}() by means of calling
clk_enable(). After commit 035a61c314eb we use clk_core_enable()
in place of the clk_enable(), and clk_core_enable() doesn't
acquire the enable_lock. This opens up a race condition between
clk_set_parent() and clk_enable(). Fix it.
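One hedged way to close the race in __clk_set_parent_{before,after}()
is to take the enable lock explicitly around the core call (helper
names per drivers/clk/clk.c; treat this as a sketch, not the exact
fix):

	unsigned long flags;

	/* clk_enable() used to take enable_lock for us; clk_core_enable()
	 * does not, so acquire it before touching the enable count. */
	flags = clk_enable_lock();
	clk_core_enable(core);
	clk_enable_unlock(flags);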
Fixes: 035a61c314eb ("clk: Make clk API return per-user struct clk instances") Cc: Mike Turquette <mturquette@linaro.org> Cc: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Dong Aisheng <aisheng.dong@freescale.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit ae43b3289186 ("ARM: 8202/1: dmaengine: pl330: Add runtime Power
Management support v12") added pm support for the pl330 dma driver, but
it causes the clock for the Exynos5420 MDMA0 DMA controller to be gated
during suspend, and this in turn causes its parent clock aclk266_g2d to
be gated. But the clock needs to be ungated prior to suspend to allow
the system to be suspended and resumed correctly.
Add GATE_BUS_TOP register to the list of registers to be restored when
the system enters into a suspend state so aclk266_g2d will be ungated.
Thanks to Abhilash Kesavan for figuring out that this was the issue.
Fixes: ae43b32 ("ARM: 8202/1: dmaengine: pl330: Add runtime Power Management support v12") Signed-off-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk> Tested-by: Kevin Hilman <khilman@linaro.org> Tested-by: Abhilash Kesavan <a.kesavan@samsung.com> Acked-by: Tomasz Figa <tomasz.figa@gmail.com> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The following error message is seen when loading the nct6683 driver
with DEBUG_LOCK_ALLOC enabled.
BUG: key ffff88040b2f0030 not in .data!
------------[ cut here ]------------
WARNING: CPU: 0 PID: 186 at kernel/locking/lockdep.c:2988
lockdep_init_map+0x469/0x630()
DEBUG_LOCKS_WARN_ON(1)
Caused by a missing call to sysfs_attr_init() when initializing
sysfs attributes.
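A hedged sketch of the missing call (the attribute variable name is
illustrative; the same applies to the nct6775 report below):

	/* Dynamically allocated attributes need their lockdep class key
	 * initialized before they are registered with sysfs. */
	sysfs_attr_init(&attr->dev_attr.attr);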
The following error message is seen when loading the nct6775 driver
with DEBUG_LOCK_ALLOC enabled.
BUG: key ffff88040b2f0030 not in .data!
------------[ cut here ]------------
WARNING: CPU: 0 PID: 186 at kernel/locking/lockdep.c:2988
lockdep_init_map+0x469/0x630()
DEBUG_LOCKS_WARN_ON(1)
Caused by a missing call to sysfs_attr_init() when initializing
sysfs attributes.
The VREFN channel is bipolar, not unipolar. Small negative values do
occur (e.g., -1mV), and unsigned conversion maps them incorrectly to
large positive values (about +1V), so fix this.
Signed-off-by: Thomas Betker <thomas.betker@rohde-schwarz.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
According to hardware team there should be some delay after
setting channel number, start mode and before setting START.
Add a one microsecond delay for this purpose.
At present we are incorrectly setting the register to 0x1 to power up
the ADC. Since it is an active high power down register, we need to set
the register to 0x0 to actually power up. Conversely, writing 0x1 to the
register powers it down.
This commit adds a couple of helpers to make the code clearer and then
use them to do the power-up/power-down properly.
When some of the ADC channels are reserved for remote CPUs,
the scan index and the corresponding channel number don't
match. This leads to conversion on the incorrect channel during
triggered capture.
Fix this by using a scan index to channel mapping encoded
in the iio_chan_spec for this purpose while starting conversion
on a particular ADC channel in trigger handler.
Also, the channel_map is not really used anywhere but in probe(), so
no need to keep track of it. Remove it from device structure.
While here, add 1 to number of channels to register timestamp channel
with the IIO core.
With 'dx' equal to 0.625V and a 15-bit ADC, calculations overflow
when the difference against GND is ~20% of the ADC range. Fix this.
Signed-off-by: Ivan T. Ivanov <ivan.ivanov@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently in_proximity_(null)_raw is being presented as a raw sysfs
attribute, and the same applies to the scan_elements.
The modifier doesn't apply to this channel.
In SPI mode the transfer buffer is locked with a mutex. However this
mutex is only initialized after the probe, but some transfers need to
be done during the probe.
To fix this bug we move the mutex initialization to the beginning of
the device probe.
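A hedged sketch of the reordering (the field names follow the
st_sensors transfer-buffer layout and are assumptions):

	/* Initialize the transfer-buffer lock first: probe itself already
	 * performs SPI transfers that take this mutex. */
	mutex_init(&sdata->tb.buf_lock);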
Signed-off-by: Alban Bedel <alban.bedel@avionic-design.de> Acked-by: Denis Ciocca <denis.ciocca@st.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit 65de7654d39c70c2b ("iio: iio: Fix iio_channel_read return if
channel havn't info") added a check for valid info masks.
This patch adds missing channel info masks for all ADC channels.
Otherwise, iio_read_channel_raw() would return -EINVAL when called
by consumer drivers.
Note that the change of _processed to _raw actually fixes an ABI abuse
in the original driver where it was used to avoid some special handling
rather than because it was correct.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When configured via device tree, the associated iio device needs to be
measuring voltage for the conversion to resistance to be correct.
Return -EINVAL if that is not the case.
A non-percpu VIRQ (e.g., VIRQ_CONSOLE) may be freed on a different
VCPU than it is bound to. This can result in a race between
handle_percpu_irq() and removing the action in __free_irq() because
handle_percpu_irq() does not take desc->lock. The interrupt handler
sees a NULL action and oopses.
Only use the percpu chip/handler for per-CPU VIRQs (like VIRQ_TIMER).
When setting a block group read-only we may end up allocating a system
chunk through check_system_chunk(), and we were not doing it while
holding the chunk mutex. That is a problem if a concurrent chunk
allocation is happening through do_chunk_alloc(), as it means both
block groups can end up using the same logical addresses and physical
regions in the device(s). So make sure we hold the chunk mutex.
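A hedged sketch of the locking around the system chunk allocation
(function signature per the btrfs code of that era):

	mutex_lock(&fs_info->chunk_mutex);
	/* Allocate the system chunk while no other allocator can hand out
	 * the same logical or physical range. */
	check_system_chunk(trans, root, alloc_flags);
	mutex_unlock(&fs_info->chunk_mutex);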
Fixes: 2f0810880f08 ("btrfs: delete chunk allocation attemp when setting block group ro")
Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
.. which introduced a regression that prevented all lingering requests
requeued in kick_requests() from ever being sent to the OSDs, resulting
in a lot of missed notifies. In retrospect it's pretty obvious that
r_req_lru_item item in the case of lingering requests can be used not
only for notarget, but also for unsent linkage due to how tightly
actual map and enqueue operations are coupled in __map_request().
The assertion that was being silenced is taken care of in the previous
("libceph: request a new osdmap if lingering request maps to no osd")
commit: by always kicking homeless lingering requests we ensure that
none of them ends up on the notarget list outside of the critical
section guarded by request_mutex.
This commit does two things. First, if there are any homeless
lingering requests, we now request a new osdmap even if the osdmap that
is being processed brought no changes, i.e. if a given lingering
request turned homeless in one of the previous epochs and remained
homeless in the current epoch. Not doing so leaves us with a stale
osdmap and as a result we may miss our window for reestablishing the
watch and lose notifies.
MON=1 OSD=1:
# cat linger-needmap.sh
#!/bin/bash
rbd create --size 1 test
DEV=$(rbd map test)
ceph osd out 0
rbd map dne/dne # obtain a new osdmap as a side effect (!)
sleep 1
ceph osd in 0
rbd resize --size 2 test
# rbd info test | grep size -> 2M
# blockdev --getsize $DEV -> 1M
N.B.: Not obtaining a new osdmap in between "osd out" and "osd in"
above is enough to make it miss that resize notify, but that is a
bug^Wlimitation of ceph watch/notify v1.
Second, homeless lingering requests are now kicked just like those
lingering requests whose mapping has changed. This is mainly to
recognize that a homeless lingering request makes no sense and to
preserve the invariant that a registered lingering request is not
sitting on any of r_req_lru_item lists. This spares us a WARN_ON,
which commit ba9d114ec557 ("libceph: clear r_req_lru_item in
__unregister_linger_request()") tried to fix the _wrong_ way.
Fix broken probe of da9052 regulators, which since commit b3f6c73db732
("mfd: da9052-core: Fix platform-device id collision") use a
non-deterministic platform-device id to retrieve static regulator
information. Fortunately, adequate error handling was in place so probe
would simply fail with an error message.
Update the mfd-cell ids to be zero-based and use those to identify the
cells when probing the regulator devices.
Fixes: b3f6c73db732 ("mfd: da9052-core: Fix platform-device id collision") Signed-off-by: Johan Hovold <johan@kernel.org> Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
OpenWRT folks reported that overlayfs fails to mount if the upper fs is
full, because the workdir can't be created. Workdir creation can fail
for various other reasons too.
There's no reason that the mount itself should fail, overlayfs can work
fine without a workdir, as long as the overlay isn't modified.
So mount it read-only and don't allow remounting read-write.
Add a couple of WARN_ON()s for the impossible case of workdir being used
despite being read-only.
When removing an opaque directory we can't just call rmdir() to check for
emptiness, because the directory will need to be replaced with a whiteout.
The replacement is done with RENAME_EXCHANGE, which doesn't check
emptiness.
Solution is just to check emptiness by reading the directory. In the
future we could add a new rename flag to check for emptiness even for
RENAME_EXCHANGE to optimize this case.
This bug has been there since day 1; addresses in the top guest physical
page weren't considered valid. You could map that page (the check in
check_gpte() is correct), but if a guest tried to put a pagetable there
we'd check that address manually when walking it, and kill the guest.
It was missed when we converted everything in XFS to use negative error
numbers, so fix it now. Bug introduced in 3.17 by commit 2451337 ("xfs:
global error sign conversion"), and should go back to stable kernels.
Thanks to Brian Foster for noticing it.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>