In order to successfully decode Intel PT traces, context switch events
are needed from the moment the trace starts. Currently that is ensured
by using the 'immediate' flag which enables the switch event when it is
opened.
However, since commit 86c2786994bd ("perf intel-pt: Add support for
PERF_RECORD_SWITCH") that might not always happen. When tracing
system-wide the context switch event is added to the tracking event
which was not set as 'immediate'. Change that so it is.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Fixes: 86c2786994bd ("perf intel-pt: Add support for PERF_RECORD_SWITCH") Link: http://lkml.kernel.org/r/1471245784-22580-1-git-send-email-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
User space tools have no idea how to parse "TICK_DEP_BIT_SCHED" or the other
symbols used to do the bit shifting. The reason is that the conversion was
done using the TICK_DEP_MASK_* symbols, which are just macros that
expand to the bit shift itself (with the exception of NONE, which was
converted properly because it doesn't use bits and is defined as zero).
The TICK_DEP_BIT_* values need to be declared with TRACE_DEFINE_ENUM() in order
to be properly converted, so that user space tools can parse this event.
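The fix, sketched below, is to declare each TICK_DEP_BIT_* value with TRACE_DEFINE_ENUM() in the timer trace event header so the tracing infrastructure exports the numeric values to user space (the exact list of bits shown here is an assumption):
TRACE_DEFINE_ENUM(TICK_DEP_BIT_POSIX_TIMER);
TRACE_DEFINE_ENUM(TICK_DEP_BIT_PERF_EVENTS);
TRACE_DEFINE_ENUM(TICK_DEP_BIT_SCHED);
TRACE_DEFINE_ENUM(TICK_DEP_BIT_CLOCK_UNSTABLE);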
There are multiple cases in vfio_pci_set_ctx_trigger_single() where
we assume we can safely read from our data pointer without actually
checking whether the user has passed any data via the count field.
VFIO_IRQ_SET_DATA_NONE in particular is entirely broken since we
attempt to pull an int32_t file descriptor out before even checking
the data type. The other data types assume the data pointer contains
one element of their type as well.
In part this is good news because we were previously restricted from
doing much sanitization of parameters because it was missed in the
past and we didn't want to break existing users. Clearly DATA_NONE
is completely broken, so it must not have any users and we can fix
it up completely. For DATA_BOOL and DATA_EVENTFD, we'll just
protect ourselves, returning error when count is zero since we
previously would have oopsed.
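A sketch of the reworked checks (abridged, based on the vfio_pci interrupt code; treat it as illustrative rather than the exact patch):
static int vfio_pci_set_ctx_trigger_single(struct eventfd_ctx **ctx,
					   unsigned int count, uint32_t flags,
					   void *data)
{
	/* DATA_NONE/DATA_BOOL enable loopback testing */
	if (flags & VFIO_IRQ_SET_DATA_NONE) {
		if (*ctx) {
			if (count) {		/* no data needed, just signal */
				eventfd_signal(*ctx, 1);
			} else {		/* count == 0 means "disable" */
				eventfd_ctx_put(*ctx);
				*ctx = NULL;
			}
			return 0;
		}
	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
		uint8_t trigger;

		if (!count)			/* previously oopsed reading *data */
			return -EINVAL;

		trigger = *(uint8_t *)data;
		if (trigger && *ctx)
			eventfd_signal(*ctx, 1);
		return 0;
	}
	/* VFIO_IRQ_SET_DATA_EVENTFD: likewise check count before reading the fd */
	return -EINVAL;
}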
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reported-by: Chris Thompson <the_cartographer@hotmail.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With debugobjects enabled and using SLAB_DESTROY_BY_RCU, when a
kmem_cache_node is destroyed the call_rcu() may trigger a slab
allocation to fill the debug object pool (__debug_object_init:fill_pool).
Everywhere but during kmem_cache_destroy(), discard_slab() is performed
outside of the kmem_cache_node->list_lock and avoids a lockdep warning
about potential recursion:
=============================================
[ INFO: possible recursive locking detected ]
4.8.0-rc1-gfxbench+ #1 Tainted: G U
---------------------------------------------
rmmod/8895 is trying to acquire lock:
(&(&n->list_lock)->rlock){-.-...}, at: [<ffffffff811c80d7>] get_partial_node.isra.63+0x47/0x430
but task is already holding lock:
(&(&n->list_lock)->rlock){-.-...}, at: [<ffffffff811cbda4>] __kmem_cache_shutdown+0x54/0x320
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&(&n->list_lock)->rlock);
lock(&(&n->list_lock)->rlock);
*** DEADLOCK ***
May be due to missing lock nesting notation
5 locks held by rmmod/8895:
#0: (&dev->mutex){......}, at: driver_detach+0x42/0xc0
#1: (&dev->mutex){......}, at: driver_detach+0x50/0xc0
#2: (cpu_hotplug.dep_map){++++++}, at: get_online_cpus+0x2d/0x80
#3: (slab_mutex){+.+.+.}, at: kmem_cache_destroy+0x3c/0x220
#4: (&(&n->list_lock)->rlock){-.-...}, at: __kmem_cache_shutdown+0x54/0x320
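A sketch of the fix in the SLUB shutdown path, following the description above: collect empty slabs on a local list while holding n->list_lock and only call discard_slab() after it is dropped, so the debugobjects call_rcu()/fill_pool() allocation can never nest inside the lock (details abridged):
static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
	LIST_HEAD(discard);
	struct page *page, *h;

	spin_lock_irq(&n->list_lock);
	list_for_each_entry_safe(page, h, &n->partial, lru) {
		if (!page->inuse) {
			remove_partial(n, page);
			/* defer the actual free until the lock is dropped */
			list_add(&page->lru, &discard);
		} else {
			list_slab_objects(s, page, "Objects remaining on shutdown");
		}
	}
	spin_unlock_irq(&n->list_lock);

	/* discard_slab() outside of n->list_lock, as everywhere else */
	list_for_each_entry_safe(page, h, &discard, lru)
		discard_slab(s, page);
}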
Fixes: 52b4b950b507 ("mm: slab: free kmem_cache_node after destroy sysfs file") Link: http://lkml.kernel.org/r/1470759070-18743-1-git-send-email-chris@chris-wilson.co.uk Reported-by: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Dmitry Safonov <dsafonov@virtuozzo.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dave Gordon <david.s.gordon@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When using the indirect buffers feature, 'desc' is allocated in
virtqueue_add() but isn't freed before leaving on a ring full error,
causing a memory leak.
For example, it seems rather clear that this can trigger
with virtio net if mergeable buffers are not used.
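A sketch of the fix in virtqueue_add() (abridged; the surrounding logic is the existing ring-full bail-out path):
	if (vq->vq.num_free < descs_used) {
		pr_debug("Can't add buf len %i - avail = %i\n",
			 descs_used, vq->vq.num_free);
		/* ... existing "kick the host" handling for OUT buffers ... */
		if (indirect)
			kfree(desc);	/* this was leaked before the fix */
		END_USE(vq);
		return -ENOSPC;
	}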
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When building gccgo in userspace, errno.h gets parsed and the go include file
sysinfo.go is generated.
Since EREFUSED is defined to the same value as ECONNREFUSED, and ECONNREFUSED
is defined later on in errno.h, this leads to go complaining that EREFUSED
isn't defined yet.
Fix this trivial problem by moving the define of EREFUSED down after
ECONNREFUSED in errno.h (and clean up the indenting while touching this line).
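After the move, the EREFUSED definition simply follows ECONNREFUSED (sketched; exact placement and surrounding comments per the real header):
#define EREFUSED	ECONNREFUSED	/* now placed after the ECONNREFUSED definition */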
According to UEFI 2.6 section 7.5.3, the capsule should be in contiguous
virtual memory and firmware may consume the capsule immediately. To
correctly implement this functionality, the kernel driver needs to vmap
the entire capsule at the time it is made available to firmware.
The virtual mapping of the capsule update has been changed from kmap(),
which was only mapping the first page of the update, to vmap(), which
maps the entire data payload.
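Conceptually the change is the following (sketched with illustrative variable names, not the exact driver code):
/* before: only the first page of the capsule was visible */
cap_hdr = kmap(cap_info->pages[0]);

/* after: map the whole payload into one contiguous virtual range */
cap_hdr = vmap(cap_info->pages, cap_info->index, VM_MAP, PAGE_KERNEL);
if (!cap_hdr)
	return -ENOMEM;
ret = efi_capsule_update(cap_hdr, cap_info->pages);
vunmap(cap_hdr);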
Signed-off-by: Austin Christ <austinwc@codeaurora.org> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk> Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Reviewed-by: Lee, Chun-Yi <jlee@suse.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Bryan O'Donoghue <pure.logic@nexus-software.ie> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kweh Hock Leong <hock.leong.kweh@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/1470912120-22831-3-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
AT_VECTOR_SIZE_ARCH should be defined with the maximum number of
NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined
for arm64 at all even though ARCH_DLINFO will contain one NEW_AUX_ENT
for the VDSO address.
This shouldn't be a problem, as AT_VECTOR_SIZE_BASE includes space for
AT_BASE_PLATFORM, which arm64 doesn't use, but let's define it now and add
the comment above ARCH_DLINFO found in several other architectures to
remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to
date.
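The addition itself is tiny; for arm64's uapi auxvec.h it amounts to (sketched):
/* AT_SYSINFO_EHDR (the vDSO address) is the only arch-specific entry */
#define AT_VECTOR_SIZE_ARCH 1 /* entries in ARCH_DLINFO */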
Fixes: f668cd1673aa ("arm64: ELF definitions") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
For SKL and later Intel chips, we control the power well per codec
basis via link_power callback since the commit [03b135cebc47: ALSA:
hda - remove dependency on i915 power well for SKL].
However, there are a few exceptional cases where the gfx registers are
accessed from the audio driver: namely the wakeup override bit
toggling at (both system and runtime) resume. This seems to cause a
kernel warning when the register is accessed while the power well is
down (and likely results in bogus register accesses).
This patch puts the proper power up / down sequence around the resume
code so that the wakeup bit is fiddled properly while the power is
up. (The other callback, sync_audio_rate, is used only in the PCM
callback, so the power is guaranteed to be on there.)
Also, by this proper power up/down, the instantaneous flip of wakeup
bit in the resume callback that was introduced by the commit
[033ea349a7cd: ALSA: hda - Fix Skylake codec timeout] becomes
superfluous, as snd_hdac_display_power() already does it. So we can
clean it up together.
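The intended sequence in the resume path looks roughly like the sketch below; snd_hdac_display_power() is the existing helper, while the wakeup-bit helper name here is purely illustrative:
snd_hdac_display_power(bus, true);	/* make sure the power well is up */
hda_set_codec_wakeup_override(bus);	/* illustrative: toggle the wakeup bit */
snd_hdac_display_power(bus, false);	/* release the power reference again */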
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96214 Fixes: 03b135cebc47 ('ALSA: hda - remove dependency on i915 power well for SKL') Tested-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
VF0610 does not support reading the sample rate which leads to many
lines of "cannot get freq at ep 0x82". This patch adds the USB ID
(0x041E:4080) to snd_usb_get_sample_rate_quirk() list.
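The quirk list is a switch on the USB ID in snd_usb_get_sample_rate_quirk(); the addition would look roughly like:
switch (chip->usb_id) {
case USB_ID(0x041e, 0x4080): /* Creative Live Cam VF0610 */
	return true;	/* don't try to read the sample rate back */
}
return false;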
It's possible to have simultaneous upcalls for the same UIDs but
different GSS services. In that case, we need to allow the upcall to
gssd to proceed so that the same context is not used by two different
GSS services. Some servers lock the use of a context to the GSS
service.
If the connect attempt immediately fails with an EADDRNOTAVAIL error, then
that means our choice of source port number was bad.
This error is expected when we set the SO_REUSEPORT socket option and we
have 2 sockets sharing the same source and destination address and port
combinations.
The unit tests crash when hotplug races the previous probe. This race
requires that the loading of the nfit_test module be terminated with
SIGTERM, and the module to be unloaded while the ars scan is still
running.
In contrast to the normal nfit driver, the unit test calls
acpi_nfit_init() twice to simulate hotplug, whereas the nominal case
goes through the acpi_nfit_notify() event handler. The
acpi_nfit_notify() path is careful to flush the previous region
registration before servicing the hotplug event. The unit test was
missing this guarantee.
This problem has actually been in the UV code for a while, but we didn't
catch it until recently, because we had been relying on EFI_OLD_MEMMAP
to allow our systems to boot for a period of time. We noticed the issue
when trying to kexec a recent community kernel, where we hit this NULL
pointer dereference in efi_sync_low_kernel_mappings():
[ 0.337515] BUG: unable to handle kernel NULL pointer dereference at 0000000000000880
[ 0.346276] IP: [<ffffffff8105df8d>] efi_sync_low_kernel_mappings+0x5d/0x1b0
The problem doesn't show up with EFI_OLD_MEMMAP because we skip the
chunk of setup_efi_state() that sets the efi_loader_signature for the
kexec'd kernel. When the kexec'd kernel boots, it won't set EFI_BOOT in
setup_arch, so we completely avoid the bug.
We always kexec with noefi on the command line, so this shouldn't be an
issue, but since we're not actually checking for efi_runtime_disabled in
uv_bios_init(), we end up trying to do EFI runtime callbacks when we
shouldn't be. This patch just adds a check for efi_runtime_disabled in
uv_bios_init() so that we don't map in uv_systab when runtime_disabled ==
true.
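A sketch of the guard (message text illustrative, surrounding uv_systab setup elided):
void __init uv_bios_init(void)
{
	if (efi_runtime_disabled()) {
		pr_info("UV: EFI runtime services disabled, skipping uv_systab\n");
		return;
	}
	/* ... existing code that maps and validates uv_systab ... */
}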
Signed-off-by: Alex Thorlton <athorlton@sgi.com> Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Borislav Petkov <bp@suse.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Travis <travis@sgi.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russ Anderson <rja@sgi.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/1470912120-22831-2-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Since the instruction decoder now supports EVEX-encoded instructions, two fixes
are needed to correctly handle them in uprobes.
The extended bits for the MODRM.rm field need to be sanitized just like we do
for VEX3, to avoid encoding the wrong register for register-relative access.
EVEX has _two_ extended bits: b and x. Theoretically, EVEX.x should be
ignored by the CPU (since GPRs go only up to 15, not 31), but let's be
paranoid here: proper encoding for register-relative access
should have EVEX.x = 1.
Secondly, we should fetch vex.vvvv for EVEX too.
This is now super easy because the instruction decoder populates
vex_prefix.bytes[2] for all flavors of (e)vex encodings, even for VEX2.
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jim Keniston <jkenisto@us.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: linux-kernel@vger.kernel.org Fixes: 8a764a875fe3 ("x86/asm/decoder: Create artificial 3rd byte for 2-byte VEX") Link: http://lkml.kernel.org/r/20160811154521.20469-1-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Usually current->mm (and therefore mm->pgd) stays the same during the
lifetime of a task so it does not matter if a task gets preempted during
the read and write of the CR3.
But then, there is this scenario on x86-UP:
TaskA is in do_exit() and exit_mm() sets current->mm = NULL followed by:
At this point current->mm is NULL but current->active_mm still points to
the "old" mm.
Let's preempt taskA _after_ native_read_cr3() by taskB. TaskB has its
own mm so CR3 has changed.
Now preempt back to taskA. TaskA has no ->mm set so it borrows taskB's
mm and so CR3 remains unchanged. Once taskA gets active it continues
where it was interrupted and that means it writes its old CR3 value
back. Everything is fine because userland won't need its memory
anymore.
Now the fun part:
Let's preempt taskA one more time and get back to taskB. This
time switch_mm() won't do a thing because oldmm (->active_mm)
is the same as mm (as per context_switch()). So we remain
with a bad CR3 / PGD and return to userland.
The next thing that happens is handle_mm_fault() with an address for
the execution of its code in userland. handle_mm_fault() realizes that
it has a PTE with proper rights so it returns doing nothing. But the
CPU looks at the wrong PGD and insists that something is wrong and
faults again. And again. And one more time…
This pagefault circle continues until the scheduler gets tired of it and
puts another task on the CPU. It gets a little more difficult if the task is an
RT task with a high priority. The system will either freeze or be
fixed up by the software watchdog thread, which usually runs at RT-max prio.
But waiting for the watchdog will increase the latency of the RT task
which is no good.
Fix this by disabling preemption across the critical code section.
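The critical section is the CR3 read-modify-write in the TLB flush helper; the fix is essentially (sketched):
static inline void __native_flush_tlb(void)
{
	/*
	 * If current->mm == NULL then we borrow a mm which may change
	 * during a task switch and therefore we must not be preempted
	 * while we write CR3 back:
	 */
	preempt_disable();
	native_write_cr3(native_read_cr3());
	preempt_enable();
}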
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Rik van Riel <riel@redhat.com> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Borislav Petkov <bp@suse.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/1470404259-26290-1-git-send-email-bigeasy@linutronix.de
[ Prettified the changelog. ] Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch fixes an issue where extcon_set_cable_state_() can cause a
"BUG: scheduling while atomic", because this driver calls
extcon_set_cable_state_() in the interrupt handler and mutex_lock()
can end up being called, as in the following call trace.
So, this patch adds a workqueue function to resolve this issue.
Signing a module should only make it trusted by the specific kernel it
was built for, not anything else. If a module signing key is used for
multiple ABI-incompatible kernels, the modules need to include enough
version information to distinguish them.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signing a module should only make it trusted by the specific kernel it
was built for, not anything else. Loading a signed module meant for a
kernel with a different ABI could have interesting effects.
Therefore, treat all signatures as invalid when a module is
force-loaded.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When the corrupt_bio_byte feature was introduced it caused READ bios to
no longer be errored with -EIO during the down_interval. This had to do
with the complexity of needing to submit READs if the corrupt_bio_byte
feature was used.
Fix it so READ bios are properly errored with -EIO; doing so early in
flakey_map() as long as there isn't a match for the corrupt_bio_byte
feature.
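A sketch of that early-out in flakey_map(), within the down_interval handling (field names follow the dm-flakey context struct and are abridged here):
/* bio submitted while the device is "down" */
if (bio_data_dir(bio) == READ) {
	/* unless reads may be corrupted (or writes dropped), error them */
	if (!fc->corrupt_bio_byte && !test_bit(DROP_WRITES, &fc->flags))
		return -EIO;
	goto map_bio;
}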
As per the code flow, s3c_rtc_setfreq() gets called with the RTC clock disabled,
and in set_freq we perform hardware register reads/writes, which results in a
kernel crash on the Exynos7 platform while probing the RTC driver.
Below is code flow:
s3c_rtc_probe()
clk_prepare_enable(info->rtc_clk) // rtc clock enabled
s3c_rtc_gettime() // will enable clk if not done, and disable it upon exit
s3c_rtc_setfreq() //then this will be called with clk disabled
This patch takes care of the issue by adding s3c_rtc_{enable/disable}_clk calls in
s3c_rtc_setfreq().
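A sketch of the fix (the body of the frequency programming is elided; the point is bracketing it with the clock helpers named above):
static int s3c_rtc_setfreq(struct s3c_rtc *info, int freq)
{
	int ret;

	s3c_rtc_enable_clk(info);		/* registers need the clock running */
	ret = info->data->set_freq(info, freq);	/* illustrative callback name */
	s3c_rtc_disable_clk(info);

	return ret;
}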
The lpfc_sli4_scmd_to_wqidx_distr() function expects the scsi_cmnd
'lpfc_cmd->pCmd' not to be null, and point to the midlayer command.
That's not true in the .eh_(device|target|bus)_reset_handler path,
because lpfc_send_taskmgmt() sends commands not from the midlayer, so
does not set 'lpfc_cmd->pCmd'.
That is true in the .queuecommand path because lpfc_queuecommand()
stores the scsi_cmnd from midlayer in lpfc_cmd->pCmd; and lpfc_cmd is
stored by lpfc_scsi_prep_cmnd() in piocbq->context1 -- which is passed
to lpfc_sli4_scmd_to_wqidx_distr() as lpfc_cmd parameter.
This problem can be hit on SCSI EH, and immediately with sg_reset.
These 2 test-cases demonstrate the problem/fix with next-20160601.
Fixes: 8b0dff14164d ("lpfc: Add support for using block multi-queue") Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Acked-by: James Smart <james.smart@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently, using the system workqueue allows the maximum number of parallel
_Qxx executions to exceed 255. This violates the method reentrancy
limit in ACPICA and generates the following error log:
ACPI Error: Method reached maximum reentrancy limit (255) (20150818/dsmethod-341)
This patch creates a separate workqueue and limits the number of parallel
_Qxx evaluations to a configurable value (which can be tuned against the
number of online CPUs).
Since EC events are handled after driver probe, we can create the workqueue
in acpi_ec_init().
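The idea, sketched with illustrative names: a dedicated workqueue whose max_active caps the number of concurrent _Qxx evaluations, created from acpi_ec_init():
static struct workqueue_struct *ec_query_wq;

int __init acpi_ec_init(void)
{
	/* ec_max_queries is the tunable bound on parallel _Qxx work items */
	ec_query_wq = alloc_workqueue("kec_query", 0, ec_max_queries);
	if (!ec_query_wq)
		return -ENODEV;
	/* ... the rest of the existing EC initialisation ... */
	return 0;
}

/* the event handler side then uses queue_work(ec_query_wq, &query->work) */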
Fixes: 02b771b64b73 (ACPI / EC: Fix an issue caused by the serialized _Qxx evaluations) Link: https://bugzilla.kernel.org/show_bug.cgi?id=135691 Reported-and-tested-by: Helen Buus <ubuntu@hbuus.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
On the Intel Merrifield platform several PCI devices have a bogus configuration,
i.e. IRQ0 had been assigned to a few of them. These are the PCI root bridge,
eMMC0, the HS UART common registers, PWM, and HDMI. The actual interrupt line
can be allocated to one device exclusively, in our case to eMMC0; the rest
should cope without it, and basically the known drivers for them do not use an
interrupt line at all.
Rework the IRQ0 workaround, which was previously done to avoid a conflict between
eMMC0 and the HS UART common registers, to behave differently based on the device
in question, i.e. allocate the interrupt line to eMMC0, but silently skip interrupt
allocation for the rest, except the HS UART common registers, which are not used
anyway. With this rework, the IOSF MBI driver in particular can now be used.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Fixes: 39d9b77b8deb ("x86/pci/intel_mid_pci: Work around for IRQ0 assignment") Link: http://lkml.kernel.org/r/1465842481-136852-1-git-send-email-andriy.shevchenko@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To begin with, we prefer to use the MIPS clockevent device, so we decrease the
rating of the HPET clockevent device.
For hpet, if HPET_MIN_PROG_DELTA (minimum delta of hpet programming) is
too small and HPET_MIN_CYCLES (threshold of -ETIME checking) is too
large, then hpet_next_event() can easily return -ETIME. After commit c6eb3f70d44828 ("hrtimer: Get rid of hrtimer softirq") this will cause
an RCU stall.
So, HPET_MIN_PROG_DELTA must be sufficient that we don't re-trip the
-ETIME check -- if we do, we will return -ETIME, forward the next event
time, try to set it, return -ETIME again, and basically lock the system
up. Meanwhile, HPET_MIN_CYCLES doesn't need to be too large, 16 cycles
is enough.
This solution is similar to commit f9eccf24615672 ("clocksource/drivers
/vt8500: Increase the minimum delta").
As a side note, this patch also ensures that the HPET count/compare values are 32 bits wide.
Some ASUS laptops were shipped with touchpads that need to be woken up
before we try to switch them into absolute reporting mode; otherwise the
touchpad fails to work while flooding the logs with:
elan_i2c i2c-ELAN1000:00: invalid report id data (1)
Among affected devices are Asus E202SA, N552VW, X456UF, UX305CA, and
others. We detect such devices by checking the IC type and product ID
numbers and adjusting order of operations accordingly.
We are in atomic context and must not sleep.
Sleeping here is possible since malloc() maps
to kmalloc() with GFP_KERNEL.
Fixes: b6024b21 ("um: extend fpstate to _xstate to support YMM registers") Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If a command with a Simple task attribute fails, target_setup_cmd_from_cdb
returns TCM_UNSUPPORTED_SCSI_OPCODE or TCM_INVALID_CDB_FIELD.
In those cases, where target_setup_cmd_from_cdb returns an error, we
never get far enough to call target_execute_cmd, which is what increments simple_cmds.
Since simple_cmds isn't incremented, the result of the failure from
target_setup_cmd_from_cdb causes transport_generic_request_failure to
decrement simple_cmds, due to call to transport_complete_task_attr.
With this dev->simple_cmds or dev->dev_ordered_sync is now -1, not 0.
So when a subsequent command with an Ordered Task is sent, it causes
a hang, since dev->simple_cmds is at -1.
Tested-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com> Signed-off-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com> Tested-by: Michael Cyr <mikecyr@linux.vnet.ibm.com> Signed-off-by: Michael Cyr <mikecyr@linux.vnet.ibm.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
max_discard_sectors is only 32 bits, and some non-SCSI backend
devices will set it to the maximum 0xffffffff, so we can end up
overflowing during the max_unmap_lba_count calculation.
This fixes a regression caused by "target: Fix WRITE_SAME/DISCARD
conversion to linux 512b sectors", which can result in extra discards
being sent due to the overflow causing max_unmap_lba_count to be smaller
than what the backing device can actually support.
Signed-off-by: Mike Christie <mchristi@redhat.com> Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch fixes a race in iscsit_release_commands_from_conn() ->
iscsit_free_cmd() -> transport_generic_free_cmd() + wait_for_tasks=1,
where CMD_T_FABRIC_STOP could end up being set after the final
kref_put() is called from core_tmr_abort_task() context.
This results in transport_generic_free_cmd() blocking indefinitely
on se_cmd->cmd_wait_comp, because the target_release_cmd_kref()
check for CMD_T_FABRIC_STOP returns false.
To address this bug, make iscsit_release_commands_from_conn()
do list_splice and set CMD_T_FABRIC_STOP early while holding
iscsi_conn->cmd_lock. Also make iscsit_aborted_task() only
remove iscsi_cmd_t if CMD_T_FABRIC_STOP has not already been
set.
Finally in target_release_cmd_kref(), only honor fabric_stop
if CMD_T_ABORTED has been set.
Cc: Mike Christie <mchristi@redhat.com> Cc: Quinn Tran <quinn.tran@qlogic.com> Cc: Himanshu Madhani <himanshu.madhani@qlogic.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Tested-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
During transport_generic_free_cmd() with a concurrent TMR
ABORT_TASK and shutdown CMD_T_FABRIC_STOP bit set, the
caller will be blocked on se_cmd->cmd_wait_stop completion
until the final kref_put() -> target_release_cmd_kref()
has been invoked to call complete().
However, when ABORT_TASK is completed with FUNCTION_COMPLETE
in core_tmr_abort_task(), the aborted se_cmd will have already
been removed from se_sess->sess_cmd_list via list_del_init().
This results in target_release_cmd_kref() hitting the
legacy list_empty() == true check, invoking ->release_cmd()
but skipping complete() to wakeup se_cmd->cmd_wait_stop
blocked earlier in transport_generic_free_cmd() code.
To address this bug, it's safe to go ahead and drop the
original list_empty() check so that fabric_stop invokes
the complete() as expected, since list_del_init() can
safely be used on an empty list.
Cc: Mike Christie <mchristi@redhat.com> Cc: Quinn Tran <quinn.tran@qlogic.com> Cc: Himanshu Madhani <himanshu.madhani@qlogic.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Tested-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If a command with a Simple task attribute is failed due to a Unit
Attention, then a subsequent command with an Ordered task attribute
will hang forever. The reason for this is that the Unit Attention
status is checked for in target_setup_cmd_from_cdb, before the call
to target_execute_cmd, which calls target_handle_task_attr, which
in turn increments dev->simple_cmds.
However, transport_generic_request_failure still calls
transport_complete_task_attr, which will decrement dev->simple_cmds.
In this case, simple_cmds is now -1. So when a command with the
Ordered task attribute is sent, target_handle_task_attr sees that
dev->simple_cmds is not 0, so it decides it can't execute the
command until all the (nonexistent) Simple commands have completed.
Reported-by: Michael Cyr <mikecyr@linux.vnet.ibm.com> Tested-by: Michael Cyr <mikecyr@linux.vnet.ibm.com> Reported-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com> Tested-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When the volume resize operation shrinks a volume,
LEBs will be unmapped. Since unmapping will not erase these
LEBs immediately we have to wait for that operation to finish.
Otherwise in case of a power cut right after writing the new
volume table the UBI attach process can find more LEBs than the
volume table knows. This will render the UBI image unattachable.
Fix this issue by waiting for the erase to complete and writing the new
volume table afterwards.
Reported-by: Boris Brezillon <boris.brezillon@free-electrons.com> Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Some but not all callers of rdma_rw_ctx_init() zero-initialize
struct rdma_rw_ctx. Hence make rdma_rw_ctx_init() initialize all
work request fields that will be read by ib_post_send().
The outstanding_pi counter may overflow and, as a result, may indicate that
there are no elements in the queue. The effect of doing this is that the
MAD layer will get stuck waiting for completions. The MAD layer will
think that the QP is full - because it didn't receive these completions.
This fix changes it so the outstanding_pi number is increased
with 32-bit wraparound and is not limited to max_send_wr so
that the difference between outstanding_pi and outstanding_ci will
really indicate the number of outstanding completions.
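Conceptually (names shortened for illustration), both indices become free-running 32-bit counters and the number of outstanding completions is always their difference, which remains correct across wraparound:
u32 pi;	/* producer index: bumped for every posted send WR      */
u32 ci;	/* consumer index: bumped for every received completion */

/* post path: only the difference is compared against the limit */
if (pi - ci >= max_outstanding)
	return -ENOMEM;		/* send queue really is full */
pi++;				/* free-running, wraps naturally */

/* completion path */
ci++;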
One of the machines has ALC255 on it, another one has ALC298 on it.
The machine with the ALC298 codec also has the speaker volume
problem, so we add the fixup chained to ALC298_FIXUP_SPK_VOLUME rather
than adding a group of pin definitions in the pin quirk table, since
the speaker volume problem does not happen on other machines yet.
We have a Dell AIO on which we can't adjust the speaker's volume.
The problem is that the speaker is connected to an Audio Output node without
Amp-out capability. To fix it, we change it to be connected to a node with
Amp-out capability.
On powerpc servers with large memory (32TB), we observed several soft
lockups for hugepages under stress tests.
The call traces are as follows:
1.
get_page_from_freelist+0x2d8/0xd50
__alloc_pages_nodemask+0x180/0xc20
alloc_fresh_huge_page+0xb0/0x190
set_max_huge_pages+0x164/0x3b0
nand_do_write_ops() determines if it is writing a partial page with the
formula:
part_pagewr = (column || writelen < (mtd->writesize - 1))
When 'writelen' is exactly 1 byte less than the NAND page size the formula
equates to zero, so the code doesn't process it as a partial write,
although it should.
As a consequence the function remains in the while(1) loop with 'writelen'
becoming 0xffffffff and iterating endlessly.
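The fix is to drop the off-by-one from the comparison so that a write of exactly (writesize - 1) bytes is also treated as a partial page:
part_pagewr = (column || writelen < mtd->writesize);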
The bug may not be easy to reproduce in Linux since user space tools
usually force the padding or round-up the write size to a page-size
multiple.
This was discovered in U-Boot where the issue can be reproduced by
writing any size that is 1 byte less than a page-size multiple.
For example, on a NAND with 2K page (0x800):
=> nand erase.part <partition>
=> nand write $loadaddr <partition> 7ff
[Editor's note: the bug was added in commit 29072b96078f, but moved
around in commit 66507c7bc8895 ("mtd: nand: Add support to use nand_base
poi databuf as bounce buffer")]
Commit 09954bad4 ("floppy: refactor open() flags handling"), as a
side-effect, causes open(/dev/fdX, O_ACCMODE) to fail. It turns out that
this is being used by the setfdprm userspace tool for ioctl-only open().
Reintroduce the original behavior wrt !(FMODE_READ|FMODE_WRITE)
modes, while still keeping the original O_NDELAY bug fixed.
The name for a bdi of a gendisk is derived from the gendisk's devt.
However, since the gendisk is destroyed before the bdi it leaves a
window where a new gendisk could dynamically reuse the same devt while a
bdi with the same name is still live. Arrange for the bdi to hold a
reference against its "owner" disk device while it is registered.
Otherwise we can hit sysfs duplicate name collisions like the following:
Reported-by: Yi Zhang <yizhan@redhat.com> Tested-by: Yi Zhang <yizhan@redhat.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fixed up missing 0 return in bdi_register_owner().
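A sketch of the helper (close to, though not necessarily identical to, the final bdi_register_owner(); a matching put_device() in the bdi unregister path drops the reference):
int bdi_register_owner(struct backing_dev_info *bdi, struct device *owner)
{
	int rc;

	rc = bdi_register(bdi, NULL, "%u:%u",
			  MAJOR(owner->devt), MINOR(owner->devt));
	if (rc)
		return rc;

	/* pin the disk device so its devt can't be reused while the bdi lives */
	bdi->owner = owner;
	get_device(owner);
	return 0;	/* the missing return mentioned above */
}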
When a bio is cloned, the newly created bio must be associated with
the same blkcg as the original bio (if BLK_CGROUP is enabled). If
this operation is not performed, then the new bio is not associated
with any group, and the group of the current task is returned when
the group of the bio is requested.
Depending on the cloning frequency, this may cause a large
percentage of the bios belonging to a given group to be treated
as if belonging to other groups (in most cases as if belonging to
the root group). The expected group isolation may thereby be broken.
This commit adds the missing association in bio-cloning functions.
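The association is a small helper called from the cloning paths (sketched; bio_associate_blkcg() is the existing CONFIG_BLK_CGROUP API):
#ifdef CONFIG_BLK_CGROUP
/* called from __bio_clone_fast() and bio_clone_bioset() */
static void bio_clone_blkcg_association(struct bio *dst, struct bio *src)
{
	if (src->bi_css)
		WARN_ON(bio_associate_blkcg(dst, src->bi_css));
}
#endif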
Fixes: da2f0f74cf7d ("Btrfs: add support for blkio controllers") Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Reviewed-by: Nikolay Borisov <kernel@kyup.com> Reviewed-by: Jeff Moyer <jmoyer@redhat.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The LNKGET based atomic sequence in __cmpxchg_u32 has slightly incorrect
constraints for the return value which under certain circumstances can
allow an address unit register to be used as the first operand of a CMP
instruction. This isn't a valid instruction however as the encodings
only allow a data unit to be specified. This would result in an
assembler error like the following:
Error: failed to assemble instruction: "CMP A0.2,D0Ar6"
Fix by changing the constraint from "=&da" (assigned, early clobbered,
data or address unit register) to "=&d" (data unit register only).
The constraint for the second operand, "bd" (an op2 register where op1
is a data unit register and the instruction supports O2R) is already
correct assuming the first operand is a data unit register.
Other cases of CMP in inline asm have had their constraints checked, and
appear to all be fine.
Fixes: 6006c0d8ce94 ("metag: Atomics, locks and bitops") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-metag@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
glibc recently did a sync up (94e73c95d9b5 "elf.h: Sync with the gabi
webpage") that added a #define for EM_METAG but did not add relocations
This triggers build errors:
scripts/recordmcount.c: In function 'do_file':
scripts/recordmcount.c:466:28: error: 'R_METAG_ADDR32' undeclared (first use in this function)
case EM_METAG: reltype = R_METAG_ADDR32;
^~~~~~~~~~~~~~
scripts/recordmcount.c:466:28: note: each undeclared identifier is reported only once for each function it appears in
scripts/recordmcount.c:468:20: error: 'R_METAG_NONE' undeclared (first use in this function)
rel_type_nop = R_METAG_NONE;
^~~~~~~~~~~~
Work around this change with some more #ifdefery for the relocations.
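The workaround guards the relocation defines separately from EM_METAG, so they are provided whenever the system elf.h lacks them (the numeric values follow the Meta ELF ABI and are shown for illustration):
#ifndef R_METAG_ADDR32
/* not yet in the standard system elf.h */
# define R_METAG_ADDR32	2
# define R_METAG_NONE	3
#endif

#ifndef EM_METAG
# define EM_METAG	174
#endif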
The balloon has a special mechanism that is subscribed to the OOM
notification and leads to deflation by a fixed number of pages.
That number is always fixed, even when the balloon is fully deflated.
But leak_balloon() did not expect that the number of pages to deflate
could be more than were taken, and raised a "BUG" in balloon_page_dequeue()
when the page list was empty.
So, the simplest solution is to check that the number of released
pages is less than or equal to the number of taken pages.
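The clamp in leak_balloon() is a one-liner (field names follow the virtio_balloon driver):
/* We can only do one array's worth of pfns at a time... */
num = min(num, ARRAY_SIZE(vb->pfns));
/* ...and we can't release more pages than the balloon actually holds. */
num = min(num, (size_t)vb->num_pages);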
Signed-off-by: Konstantin Neumoin <kneumoin@virtuozzo.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This reverts commit 013dd9e03872
("drm/i915/dp: fall back to 18 bpp when sink capability is unknown")
This commit introduced a regression into stable kernels,
as it reduces output color depth to 6 bpc for any video
sink connected to a Displayport connector if that sink
doesn't report a specific color depth via EDID, or if
our EDID parser doesn't actually recognize the proper
bpc from EDID.
Affected are active DisplayPort->VGA converters and
active DisplayPort->DVI converters. Both should be
able to handle 8 bpc, but are degraded to 6 bpc with
this patch.
The reverted commit was meant to fix
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=105331
A followup patch implements a fix for that specific bug, which is
caused by a faulty EDID of the affected DP panel, by adding a new
EDID quirk for that panel.
DP 18 bpp fallback handling and other improvements to
DP sink bpc detection will be handled for future
kernels in a separate series of patches.
Please backport to stable.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Acked-by: Jani Nikula <jani.nikula@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
SNB (and IVB too I suppose) starts to misbehave if the GPU gets stuck
in an infinite batch buffer loop. The GPU apparently hogs something
critical and CPUs start to lose interrupts and whatnot. We can keep
the system limping along by unmasking some interrupts in
GEN6_PMINTRMSK. The EI up interrupt has been previously chosen for
that task, so let's never mask it.
Bugzilla https://bugzilla.kernel.org/show_bug.cgi?id=105331
reports that the "AEO model 0" display is driven with 8 bpc
without dithering by default, which looks bad because that
panel is apparently a 6 bpc DP panel with faulty EDID.
A fix for this was made by commit 013dd9e03872
("drm/i915/dp: fall back to 18 bpp when sink capability is unknown").
That commit triggers new regressions in precision for DP->DVI and
DP->VGA displays. A patch is out to revert that commit, but it will
revert video output for the AEO model 0 panel to 8 bpc without
dithering.
The EDID 1.3 of that panel, as decoded from the xrandr output
attached to that bugzilla bug report, is somewhat faulty, and beyond
other problems also sets the "DFP 1.x compliant TMDS" bit, which
according to DFP spec means to drive the panel with 8 bpc and
no dithering in absence of other colorimetry information.
Try to make the original bug reporter happy despite the
faulty EDID by adding a quirk to mark that panel as 6 bpc,
so 6 bpc output with dithering creates a nice picture.
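The quirk is a single new entry in the drm_edid quirk table, roughly as below (EDID_QUIRK_FORCE_6BPC is assumed to be a quirk flag introduced for this purpose):
/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
{ "AEO", 0, EDID_QUIRK_FORCE_6BPC },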
Tested by injecting the edid from the fdo bug into a DP connector
via drm_kms_helper.edid_firmware and verifying the 6 bpc + dithering
is selected.
This patch should be backported to stable.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Restore the correct behaviour (as in check msg.reply) when aux
->transfer() returns 0. It got removed in
commit 82922da39190 ("drm/dp_helper: Retry aux transactions on all errors")
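The restored logic, sketched from the description above: a zero-byte ->transfer() return only counts as progress if the sink actually ACKed, otherwise it is converted back into an error so the retry loop continues:
ret = aux->transfer(aux, &msg);
if (ret >= 0) {
	native_reply = msg.reply & DP_AUX_NATIVE_REPLY_MASK;
	if (native_reply == DP_AUX_NATIVE_REPLY_ACK) {
		if (ret == size)
			goto unlock;	/* full transfer, we're done */
		ret = -EPROTO;		/* short/zero transfer despite ACK */
	} else {
		ret = -EIO;		/* defer/nack: keep retrying */
	}
}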
Now I can actually dump the "entire" DPCD on a Dell UP2314Q with
ddrescue. It has some offsets in the DPCD that can't be read
for some reason; all you get is defers. Previously ddrescue would
just give up at the first unreadable offset, on account of
read() returning 0 meaning EOF. Here's the ddrescue log
for the interested:
0x00000000 0x00001400 +
0x00001400 0x00000030 -
0x00001430 0x000001D0 +
0x00001600 0x00000030 -
0x00001630 0x0001F9D0 +
0x00021000 0x00000001 -
0x00021001 0x000DEFFF +
Cc: Lyude <cpaul@redhat.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch Fixes: 82922da39190 ("drm/dp_helper: Retry aux transactions on all errors") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Because we are using a custom crtc_state structure, we must override the
reset helper to allocate the correct amount of memory.
Fixes: 4e257d9eee23 ("drm/rockchip: get rid of rockchip_drm_crtc_mode_config") Signed-off-by: John Keeping <john@metanate.com> Signed-off-by: Mark Yao <mark.yao@rock-chips.com> Reviewed-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Update CDCLK_FREQ on BDW after changing the cdclk frequency. Not sure
if this is a late addition to the spec, or if I simply overlooked this
step when writing the original code.
This is what Bspec has to say about CDCLK_FREQ:
"Program this field to the CD clock frequency minus one. This is used to
generate a divided down clock for miscellaneous timers in display."
And the "Broadwell Sequences for Changing CD Clock Frequency" section
clarifies this further:
"For CD clock 337.5 MHz, program 337 decimal.
For CD clock 450 MHz, program 449 decimal.
For CD clock 540 MHz, program 539 decimal.
For CD clock 675 MHz, program 674 decimal."
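In code terms that is one extra register write at the end of the BDW cdclk programming (sketched, with cdclk in kHz):
/* "Program this field to the CD clock frequency minus one." */
I915_WRITE(CDCLK_FREQ, DIV_ROUND_CLOSEST(cdclk, 1000) - 1);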
Revert "drm: Avoid the double clflush on the last cache line in
drm_clflush_virt_range()", as we have observed issues with serialisation
of the clflush operations on Baytrail+ Atoms with partial updates.
Applying the double flush on the last cacheline forces that clflush to be
ordered with respect to the previous clflush, and the mfence then protects
against prefetches crossing the clflush boundary.
The same issue can be demonstrated in userspace with igt/gem_exec_flush.
Fixes: afcd950cafea6 (drm: Avoid the double clflush on the last cache...)
Testcase: igt/gem_concurrent_blit
Testcase: igt/gem_partial_pread_pwrite
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92845 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: dri-devel@lists.freedesktop.org Cc: Akash Goel <akash.goel@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jason Ekstrand <jason.ekstrand@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1467880930-23082-6-git-send-email-chris@chris-wilson.co.uk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The patch f045f459d925 ("drm/nouveau/fbcon: fix out-of-bounds memory accesses")
tries to fix some out-of-bounds memory accesses. Unfortunately, the patch breaks the
display when using fonts with a width that is not divisible by 8.
The monochrome bitmap for each character is stored in memory by lines from top
to bottom. Each line is padded to a full byte.
For example, for a 22x11 font, each line is padded to 16 bits, so each
character consumes 44 bytes in total, that is 11 32-bit words. The patch
f045f459d925 changed the logic to "dsize = ALIGN(image->width *
image->height, 32) >> 5", which is just 8 words; this is incorrect and
causes display corruption.
This patch adds the necessary padding of lines to whole bytes (multiples of 8 bits).
This patch should be backported to stable kernels where f045f459d925 was
backported.
DRM_CONNECTOR_POLL_CONNECT only enables polling for connections, not
disconnections. Because of this, we end up losing hotplug polling for
analog connectors once they get connected.
Easy way to reproduce:
- Grab a machine with a radeon GPU and a VGA port
- Plug a monitor into the VGA port, wait for it to update the connector
from disconnected to connected
- Disconnect the monitor on VGA, a hotplug event is never sent for the
removal of the connector.
Originally, only using DRM_CONNECTOR_POLL_CONNECT might have been a good
idea since doing VGA polling can sometimes result in having to mess with
the DAC voltages to figure out whether or not there's actually something
there since VGA doesn't have HPD. Doing this would have the potential of
showing visible artifacts on the screen every time we ran a poll while a
VGA display was connected. Luckily, radeon_vga_detect() only resorts to
this sort of polling if the poll is forced, and DRM's polling helper
doesn't force its polls.
Additionally, this removes some assignments to connector->polled that
weren't actually doing anything.
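For the affected analog connectors the polling flags simply become:
/* poll for both plug and unplug; VGA has no HPD line */
connector->polled = DRM_CONNECTOR_POLL_CONNECT |
		    DRM_CONNECTOR_POLL_DISCONNECT;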
Just about all of amdgpu's connector probing functions try to acquire
runtime PM refs. If we try to do this in the context of
amdgpu_resume_kms by calling drm_helper_hpd_irq_event(), we end up
deadlocking the system.
Since we're guaranteed to be holding the spinlock for RPM in
amdgpu_resume_kms, and we already know the GPU is in working order, we
need to prevent the RPM helpers from trying to run during the initial
connector reprobe on resume.
There's a couple of solutions I've explored for fixing this, but this
one by far seems to be the simplest and most reliable (plus I'm pretty
sure that's what disable_depth is there for anyway).
Reproduction recipe:
- Get any laptop dual GPUs using PRIME
- Make sure runtime PM is enabled for amdgpu
- Boot the machine
- If the machine managed to boot without hanging, switch out of X to
another VT. This should definitely cause X to hang infinitely.
Changes since v1:
- add appropriate #ifdef checks for CONFIG_PM. This is not very
useful, but it appears some kernel test suites test compiling amdgpu
with CONFIG_PM disabled, which results in this patch breaking the builds
if we don't include this #ifdef
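The guard around the initial reprobe then looks like this (sketch; dev is the drm_device being resumed):
#ifdef CONFIG_PM
	dev->dev->power.disable_depth++;	/* keep RPM helpers quiescent */
#endif
	drm_helper_hpd_irq_event(dev);		/* reprobe connectors once */
#ifdef CONFIG_PM
	dev->dev->power.disable_depth--;
#endif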
Cc: Alex Deucher <alexdeucher@gmail.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Lyude <cpaul@redhat.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
DRM_CONNECTOR_POLL_CONNECT only enables polling for connections, not
disconnections. Because of this, we end up losing hotplug polling for
analog connectors once they get connected.
Easy way to reproduce:
- Grab a machine with an AMD GPU and a VGA port
- Plug a monitor into the VGA port, wait for it to update the connector
from disconnected to connected
- Disconnect the monitor on VGA, a hotplug event is never sent for the
removal of the connector.
Originally, only using DRM_CONNECTOR_POLL_CONNECT might have been a good
idea since doing VGA polling can sometimes result in having to mess with
the DAC voltages to figure out whether or not there's actually something
there since VGA doesn't have HPD. Doing this would have the potential of
showing visible artifacts on the screen every time we ran a poll while a
VGA display was connected. Luckily, amdgpu_vga_detect() only resorts to
this sort of polling if the poll is forced, and DRM's polling helper
doesn't force its polls.
Additionally, this removes some assignments to connector->polled that
weren't actually doing anything.
Commit e93762bbf681 ("w1: masters: omap_hdq: add support for 1-wire
mode") added a statement to clear the hdq_irqstatus flags in
hdq_read_byte().
If the hdq reading process is scheduled slowly or interrupts are
disabled for a while the hardware read activity might already be
finished on entry of hdq_read_byte(). And hdq_isr() already has set the
hdq_irqstatus to 0x6 (can be seen in debug mode) denoting that both, the
TXCOMPLETE and RXCOMPLETE interrupts occurred in parallel.
This means there is no need to wait and the hdq_read_byte() can just
read the byte from the hdq controller.
By resetting hdq_irqstatus to 0, the read process is forced to always
wait again (because the if statement always succeeds), but the
hardware will not issue another RXCOMPLETE interrupt. This results in a
false timeout.
It seems risky to always rely on the caller to ensure the socket's
address family is correct before passing it to the NetLabel kAPI,
especially since we see at least one LSM which didn't. Add address
family checks to the *_delattr() functions to help prevent future
problems.
Reported-by: Maninder Singh <maninder1.s@samsung.com> Signed-off-by: Paul Moore <paul@paul-moore.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Unprivileged users can't use hierarchies if they create them, as they do not
have privileges to the root directory.
Which means the only thing a hierarchy created by an unprivileged user
is good for is expanding the number of cgroup links in every css_set,
which is a DoS attack.
We could allow hierarchies to be created in namespaces in the initial
user namespace. Unfortunately there is only a single namespace for
the names of hierarchies, so that is likely to create more confusion
than not.
So do the simple thing and restrict hierarchy creation to the initial
cgroup namespace.
With cgroup_threadgroup_rwsem held it is guaranteed that cgroup_post_fork
and copy_cgroup_ns will reference the same css_set from the process calling
fork.
Without such an interlock the process, after fork, could reference one
css_set from its new cgroup namespace and another css_set from
task->cgroups, which is semantically nonsensical.
If "clone(CLONE_NEWCGROUP...)" is called, it results in a valid lockdep
splat.
In __cgroup_proc_write the lock ordering is:
cgroup_mutex -- through cgroup_kn_lock_live
cgroup_threadgroup_rwsem
In copy_process the guts of clone the lock ordering is:
cgroup_threadgroup_rwsem -- through threadgroup_change_begin
cgroup_mutex -- through copy_namespaces -- copy_cgroup_ns
lockdep reports some different call chains for the first ordering of
cgroup_mutex and cgroup_threadgroup_rwsem, but they are harder to trace.
This is most definitely deadlock potential under the right
circumstances.
Fix this by skipping the cgroup_mutex and making the locking in
copy_cgroup_ns mirror the locking in cgroup_post_fork, which also runs
during fork under the cgroup_threadgroup_rwsem.
The patch that this was preparing for made it into neither v4.7 nor
v4.8, so we should back this out as well to avoid the opposite
warning:
arch/arm/configs/aspeed_g5_defconfig:62:warning: symbol value '1' invalid for PRINTK_TIME
arch/arm/configs/aspeed_g4_defconfig:61:warning: symbol value '1' invalid for PRINTK_TIME
Sorry for not catching this earlier.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: 0ef659a30055 ("ARM: aspeed: adapt defconfigs for new CONFIG_PRINTK_TIME") Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
c90bb7b enabled the high speed UARTs of the Jetson TK1. Due to a merge
quirk, wrong addresses were introduced. Fix it and use the correct
addresses.
Thierry let me know that there is another patch (b5896f67ab3c in
linux-next) in preparation which removes all the '0,' prefixes of unit
addresses on Tegra124 and is planned to go upstream in 4.8, so
this patch will get reverted then.
But for the moment, this patch is necessary to fix current misbehaviour.
Clearly QEMU is very permissive in how its PL310 model may be set up,
but the real hardware turns out to be far more particular about things
actually being correct. Fix up the DT description so that the real
thing actually boots:
- The arm,data-latency and arm,tag-latency properties need 3 cells to
be valid, otherwise we end up retaining the default 8-cycle latencies
which leads pretty quickly to lockup.
- The arm,dirty-latency property is only relevant to L210/L220, so get
rid of it.
- The cache geometry override also leads to lockup and/or general
misbehaviour. Irritatingly, the manual doesn't state the actual PL310
configuration, but based on the boardfile code and poking registers
from the Boot Monitor, it would seem to be 8 sets of 16KB ways.
With that, we can successfully boot to enjoy the fun of mismatched FPUs...
Signed-off-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
It seems that recent kernels have a shorter timeout when scanning for
ethernet PHYs, causing us to hit a timeout on boards where the PHY's
regulator gets enabled just before scanning, which leads to non-working
ethernet.
A 10ms startup delay seems to be enough to fix it; this commit adds a
20ms startup delay just to be safe.
This has been tested on a sun4i-a10-a1000 and sun5i-a10s-wobo-i5 board,
both of which have non-working ethernet on recent kernels without this
fix.
Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When an L2 cache controller is used in a system that provides hardware
coherency, the entire set of outer cache operations is useless and can be
skipped. Moreover, on some systems, it is harmful as it causes
deadlocks between the Marvell coherency mechanism, the Marvell PCIe
controller and the Cortex-A9.
In the current kernel implementation, the outer cache flush range
operation is triggered by the dma_alloc function.
This operation can take place at runtime, and in some circumstances may
lead to the PCIe/PL310 deadlock on Armada 375/38x SoCs.
This patch extends the __dma_clear_buffer() function to receive a
boolean argument related to the coherency of the system. The same
thing is done for the calling functions.
Reported-by: Nadav Haklai <nadavh@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There is a double fetch problem in audit_log_single_execve_arg()
where we first check the execve(2) arguments for any "bad" characters
which would require hex encoding and then re-fetch the arguments for
logging in the audit record[1]. Of course this leaves a window of
opportunity for an unsavory application to munge with the data.
This patch reworks things by only fetching the argument data once[2]
into a buffer where it is scanned and logged into the audit
records(s). In addition to fixing the double fetch, this patch
improves on the original code in a few other ways: better handling
of large arguments which require encoding, stricter record length
checking, and some performance improvements (completely unverified,
but we got rid of some strlen() calls, that's got to be a good
thing).
As part of the development of this patch, I've also created a basic
regression test for the audit-testsuite, the test can be tracked on
GitHub at the following link:
[1] If you pay careful attention, there is actually a triple fetch
problem due to a strnlen_user() call at the top of the function.
[2] This is a tiny white lie, we do make a call to strnlen_user()
prior to fetching the argument data. I don't like it, but due to the
way the audit record is structured we really have no choice unless we
copy the entire argument at once (which would require a rather
wasteful allocation). The good news is that with this patch the
kernel no longer relies on this strnlen_user() value for anything
beyond recording it in the log, we also update it with a trustworthy
value whenever possible.
Reported-by: Pengfei Wang <wpengfeinudt@gmail.com> Signed-off-by: Paul Moore <paul@paul-moore.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Not doing so might cause IO-Page-Faults when a device uses
an alias request-id and the alias-dte is left in a lower
page-mode which does not cover the address allocated from
the iova-allocator.