This patch replaces the use of "struct can_frame::can_dlc" with "struct
canfd_frame::len" to access the frame's length. As it is ensured that
both structures have a compatible memory layout for this member, this is
no functional change. Further, this compatibility is documented in a
comment.
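For reference, a minimal sketch of the two UAPI structures (simplified from
include/uapi/linux/can.h) showing why the member access is interchangeable;
treat the exact member lists as abbreviated:

struct can_frame {
	canid_t can_id;  /* 32 bit CAN_ID + EFF/RTR/ERR flags */
	__u8    can_dlc; /* payload length, at offset 4 */
	__u8    data[8] __attribute__((aligned(8)));
};

struct canfd_frame {
	canid_t can_id;  /* 32 bit CAN_ID + EFF/RTR/ERR flags */
	__u8    len;     /* payload length, also at offset 4 */
	__u8    flags;   /* additional flags for CAN FD */
	__u8    __res0;
	__u8    __res1;
	__u8    data[64] __attribute__((aligned(8)));
};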
This patch factors out all non-sending parts of can_get_echo_skb() into
a separate function, __can_get_echo_skb(), so that it can be re-used in
an upcoming patch.
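A minimal sketch of the intended split; the helper's body is omitted and the
details here are illustrative rather than the exact diff:

struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx,
				   u8 *len_ptr);

unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
{
	struct sk_buff *skb;
	u8 len;

	/* the lookup/cleanup parts now live in __can_get_echo_skb() */
	skb = __can_get_echo_skb(dev, idx, &len);
	if (!skb)
		return 0;

	netif_rx(skb);	/* only the sending part stays here */
	return len;
}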
Unlock the MB irrespective of whether the reception method is FIFO or
timestamp based. It is optional but recommended to unlock the mailbox as
soon as possible and make it available for reception.
Reported-by: Alexander Stein <alexander.stein@systec-electronic.com> Signed-off-by: Pankaj Bansal <pankaj.bansal@nxp.com> Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If vesafb attaches to the AST device, it configures the framebuffer memory
for uncached access by default. When ast.ko later tries to attach itself to
the device, it wants to use write-combining on the framebuffer memory, but
vesafb's existing configuration for uncached access takes precedence. This
results in reduced performance.
Removing the framebuffer's configuration before loading the AST driver fixes
the problem. Other DRM drivers already contain equivalent code.
Link: https://bugzilla.opensuse.org/show_bug.cgi?id=1112963 Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Cc: <stable@vger.kernel.org> Tested-by: Y.C. Chen <yc_chen@aspeedtech.com> Reviewed-by: Jean Delvare <jdelvare@suse.de> Tested-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The value of pitches is not correct while calling mode_set.
The issue has been observed so far on the following systems:
- Debian8 with XFCE Desktop
- Ubuntu with KDE Desktop
- SUSE15 with KDE Desktop
Signed-off-by: Y.C. Chen <yc_chen@aspeedtech.com> Cc: <stable@vger.kernel.org> Tested-by: Jean Delvare <jdelvare@suse.de> Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
I have a Thinkpad X220 Tablet in my hands that is losing vblank
interrupts whenever LP3 watermarks are used.
If I nudge the latency value written to the WM3 register just
by one in either direction the problem disappears. That to me
suggests that the punit will not enter the corresponding
powersave mode (MPLL shutdown IIRC) unless the latency value
in the register matches exactly what we read from SSKPD. I.e.
it's not really a latency value but rather just a cookie
by which the punit can identify the desired power saving state.
On HSW/BDW this was changed such that we actually just write
the WM level number into those bits, which makes much more
sense given the observed behaviour.
We could try to handle this by disallowing LP3 watermarks
only when vblank interrupts are enabled but we'd first have
to prove that only vblank interrupts are affected, which
seems unlikely. Also we can't grab the wm mutex from the
vblank enable/disable hooks because those are called with
various spinlocks held. Thus we'd have to redesign the
watermark locking. So to play it safe and keep the code
simple we simply disable LP3 watermarks on all SNB machines.
To do that we simply zero out the latency values for
watermark level 3, and we adjust the watermark computation
to check for that. The behaviour now matches that of the
g4x/vlv/skl wm code in the presence of a zeroed latency
value.
v2: s/USHRT_MAX/U32_MAX/ for consistency with the types (Chris)
Cc: stable@vger.kernel.org Cc: Chris Wilson <chris@chris-wilson.co.uk> Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101269
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103713 Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20181114173440.6730-1-ville.syrjala@linux.intel.com
(cherry picked from commit 03981c6ebec4fc7056b9b45f847393aeac90d060) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drm_atomic_helper_setup_commit() auto-completes commit->flip_done when
state->legacy_cursor_update is true, but we know for sure that we want
a sync update when we call drm_atomic_helper_setup_commit() from
vc4_atomic_commit().
Explicitly set state->legacy_cursor_update to false to prevent this
auto-completion.
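A minimal sketch of the idea, assuming it sits in vc4_atomic_commit() right
before the helper call (surrounding code omitted):

/* We want a synchronous update; never let the helper auto-complete
 * flip_done just because this looks like a legacy cursor update.
 */
state->legacy_cursor_update = false;

ret = drm_atomic_helper_setup_commit(state, nonblock);
if (ret)
	return ret;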
Fixes: 184d3cf4f738 ("drm/vc4: Use wait_for_flip_done() instead of wait_for_vblanks()") Cc: <stable@vger.kernel.org> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Reviewed-by: Eric Anholt <eric@anholt.net> Link: https://patchwork.freedesktop.org/patch/msgid/20181115105852.9844-2-boris.brezillon@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Writeback connectors do not produce any on-screen output and require
special care for use. Such connectors are hidden from enumeration in
DRM resources by default, but they are still picked-up by fbdev.
This makes rather little sense since fbdev is not really adapted for
dealing with writeback.
Moreover, this is also a source of issues when userspace disables the
CRTC (and associated plane) without detaching the CRTC from the
connector (which is hidden by default). In this case, the connector is
still using the CRTC, leading to an "enabled/connectors mismatch" and
eventually the failure of the associated atomic commit. This situation
happens with VC4 testing under IGT GPU Tools.
Filter out writeback connectors in the fbdev helper to solve this.
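A hedged sketch of such filtering in the fbdev helper's connector iteration;
the __drm_fb_helper_add_one_connector() call is an assumption based on the
4.19-era drm_fb_helper code:

drm_connector_list_iter_begin(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
	/* writeback connectors produce no on-screen output: skip them */
	if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
		continue;

	ret = __drm_fb_helper_add_one_connector(fb_helper, connector);
	if (ret)
		break;
}
drm_connector_list_iter_end(&conn_iter);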
Signed-off-by: Paul Kocialkowski <paul.kocialkowski@bootlin.com> Reviewed-by: Boris Brezillon <boris.brezillon@bootlin.com> Reviewed-by: Maxime Ripard <maxime.ripard@bootlin.com> Tested-by: Maxime Ripard <maxime.ripard@bootlin.com> Fixes: 935774cd71fe ("drm: Add writeback connector type") Cc: <stable@vger.kernel.org> # v4.19+ Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20181115163248.21168-1-paul.kocialkowski@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
syzkaller was able to hit the WARN_ON(sock_owned_by_user(sk));
in tcp_close()
While a socket is being closed, it is very possible other
threads find it in rtnetlink dump.
tcp_get_info() will acquire the socket lock for a short amount
of time (slow = lock_sock_fast(sk)/unlock_sock_fast(sk, slow);),
enough to trigger the warning.
Fixes: 67db3e4bfbc9 ("tcp: no longer hold ehash lock while calling tcp_get_info()") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We clear the pte temporarily during read/modify/write update of the pte.
If we take a page fault while the pte is cleared, the application can get
SIGBUS. One such case is with remap_pfn_range without a backing
vm_ops->fault callback. do_fault will return SIGBUS in that case.
Slub does not call kmalloc_slab() for sizes > KMALLOC_MAX_CACHE_SIZE,
instead it falls back to kmalloc_large().
For slab KMALLOC_MAX_CACHE_SIZE == KMALLOC_MAX_SIZE and it calls
kmalloc_slab() for all allocations relying on NULL return value for
over-sized allocations.
This inconsistency leads to unwanted warnings from kmalloc_slab() for
over-sized allocations for slab. Returning NULL for failed allocations is
the expected behavior.
Make slub and slab code consistent by checking size >
KMALLOC_MAX_CACHE_SIZE in slab before calling kmalloc_slab().
While we are here also fix the check in kmalloc_slab(). We should check
against KMALLOC_MAX_CACHE_SIZE rather than KMALLOC_MAX_SIZE. It all kinda
worked because for slab the constants are the same, and slub always checks
the size against KMALLOC_MAX_CACHE_SIZE before kmalloc_slab(). But if we
get there with size > KMALLOC_MAX_CACHE_SIZE anyhow bad things will
happen. For example, in case of a newly introduced bug in slub code.
Also move the check in kmalloc_slab() from function entry to the size >
192 case. This partially compensates for the additional check in slab
code and makes slub code a bit faster (at least theoretically).
Also drop __GFP_NOWARN in the warning check. This warning means a bug in
slab code itself, user-passed flags have nothing to do with it.
Nothing of this affects slob.
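A hedged sketch of the resulting kmalloc_slab() shape (simplified; the
DMA/reclaim cache variants and the new slab-side size check are omitted):

struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
{
	unsigned int index;

	if (size <= 192) {
		if (!size)
			return ZERO_SIZE_PTR;
		index = size_index[size_index_elem(size)];
	} else {
		/* check moved here from function entry; no __GFP_NOWARN test */
		if (WARN_ON_ONCE(size > KMALLOC_MAX_CACHE_SIZE))
			return NULL;
		index = fls(size - 1);
	}

	return kmalloc_caches[index];
}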
Link: http://lkml.kernel.org/r/20180927171502.226522-1-dvyukov@gmail.com Signed-off-by: Dmitry Vyukov <dvyukov@google.com> Reported-by: syzbot+87829a10073277282ad1@syzkaller.appspotmail.com Reported-by: syzbot+ef4e8fc3a06e9019bb40@syzkaller.appspotmail.com Reported-by: syzbot+6e438f4036df52cbb863@syzkaller.appspotmail.com Reported-by: syzbot+8574471d8734457d98aa@syzkaller.appspotmail.com Reported-by: syzbot+af1504df0807a083dbd9@syzkaller.appspotmail.com Acked-by: Christoph Lameter <cl@linux.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
syzkaller triggered a use-after-free [1], caused by a combination of
skb_get() in llc_conn_state_process() and usage of sk_eat_skb()
sk_eat_skb() is assuming the skb about to be freed is only used by
the current thread. TCP/DCCP stacks enforce this because current
thread holds the socket lock.
llc_conn_state_process() wants to make sure skb does not disappear,
and holds a reference on the skb it manipulates. But as soon as this
skb is added to socket receive queue, another thread can consume it.
This means that llc must use regular skb_unlink() and kfree_skb()
so that both producer and consumer can safely work on the same skb.
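A minimal sketch of the resulting consumer-side pattern (e.g. in
llc_ui_recvmsg(); simplified, and the syzkaller report follows below):

if (!(flags & MSG_PEEK)) {
	/* unlink under the queue lock, then drop only our reference */
	skb_unlink(skb, &sk->sk_receive_queue);
	kfree_skb(skb);
}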
[1]
BUG: KASAN: use-after-free in atomic_read include/asm-generic/atomic-instrumented.h:21 [inline]
BUG: KASAN: use-after-free in refcount_read include/linux/refcount.h:43 [inline]
BUG: KASAN: use-after-free in skb_unref include/linux/skbuff.h:967 [inline]
BUG: KASAN: use-after-free in kfree_skb+0xb7/0x580 net/core/skbuff.c:655
Read of size 4 at addr ffff8801d1f6fba4 by task ksoftirqd/1/18
Memory state around the buggy address:
 ffff8801d1f6fa80: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
 ffff8801d1f6fb00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8801d1f6fb80: fb fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc
                               ^
 ffff8801d1f6fc00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801d1f6fc80: fb fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When alloc_percpu() fails, sdp gets freed but sb->s_fs_info still points
to the same address. Move the assignment after that error check so that
s_fs_info can only point to a valid sdp or NULL, which is checked for
later in the error path, in gfs2_kill_super().
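A hedged sketch of the reordering in init_sbd() (field names as used in
fs/gfs2; the remaining initialization is elided):

sdp = kzalloc(sizeof(struct gfs2_sbd), GFP_NOFS);
if (!sdp)
	return NULL;

sdp->sd_lkstats = alloc_percpu(struct gfs2_pcpu_lkstats);
if (!sdp->sd_lkstats) {
	kfree(sdp);
	return NULL;	/* s_fs_info never pointed at the freed sdp */
}
sb->s_fs_info = sdp;	/* assign only after the allocations succeed */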
Reported-by: syzbot+dcb8b3587445007f5808@syzkaller.appspotmail.com Signed-off-by: Andrew Price <anprice@redhat.com> Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
If a transport is removed by asconf but there still are some chunks with
this transport queued on the out_chunk_list, later a use-after-free issue
will be caused when accessing this transport from these chunks in
sctp_outq_flush().
This is an old bug, we fix it by clearing the transport of these chunks
in out_chunk_list when removing a transport in sctp_assoc_rm_peer().
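A minimal sketch of the cleanup added in sctp_assoc_rm_peer(), assuming the
usual outqueue field names:

/* de-reference the dying transport from any not-yet-flushed chunks */
list_for_each_entry(chunk, &asoc->outqueue.out_chunk_list, list)
	if (chunk->transport == peer)
		chunk->transport = NULL;	/* outq flush picks a new one */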
Reported-by: syzbot+56a40ceee5fb35932f4d@syzkaller.appspotmail.com Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
syzbot is reporting too large memory allocation at bfs_fill_super() [1].
Since file system image is corrupted such that bfs_sb->s_start == 0,
bfs_fill_super() is trying to allocate 8MB of contiguous memory. Fix
this by adding a sanity check on bfs_sb->s_start, together with
__GFP_NOWARN and a printf().
synaptics_detect() does not check whether sending commands to the
device succeeds and instead relies on getting unique data from the
device. Let's make sure we seed the entire buffer with zeroes so that
we will not use garbage on the stack that just happens to be 0x47.
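A minimal sketch of the fix in synaptics_detect(), assuming the buffer is
the usual 4-byte parameter array and keeping the unchecked command calls:

u8 param[4] = { 0 };	/* was uninitialized stack memory */

/* magic knock: errors are (still) ignored, so param may stay untouched */
ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO);

return param[1] == 0x47 ? 0 : -ENODEV;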
Reported-by: syzbot+13cb3b01d0784e4ffc3f@syzkaller.appspotmail.com Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
syzbot is hitting a warning at str_read() [1] because the len parameter can
become larger than KMALLOC_MAX_SIZE. We don't need to emit a warning for
this case.
The voltage range (min, max) provided in the device tree is from
the data manual and is pretty big, catering to a wide range of devices.
On a i2c read/write failure the regulator_set_voltage_triplet function
falls back to set voltage between min and max. The min value from Device
Tree can be less than the optimal value and in that case that can lead
to a hang or crash. Hence set the u_volt_min dynamically to the optimal
voltage value.
Driver can report IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ so it's
important to provide valid & complete info about supported bands for
each channel. By default no support for 160 MHz should be assumed unless
firmware reports it for a given channel later.
This fixes info passed to the userspace. Without that change userspace
could try to use invalid channel and fail to start an interface.
Signed-off-by: Rafał Miłecki <rafal@milecki.pl> Cc: stable@vger.kernel.org Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When the firmware starts, it doesn't have any regulatory
information, hence it uses the world wide limitations. The
driver can feed the firmware with previous knowledge that
was kept in the driver, but the firmware may still not
update its internal tables.
This happens when we start a BSS interface, and then the
firmware can change the regulatory tables based on our
location and it'll use more lenient, location specific
rules. Then, if the firmware is shut down (when the
interface is brought down), and then an AP interface is
created, the firmware will forget the country specific
rules.
The host will think that we are in a certain country that
may allow channels and will try to teach the firmware about
our location, but the firmware may still refuse to drop
the world wide limitations and apply country specific rules
because it was just re-started.
In this case, the firmware will reply with MCC_RESP_ILLEGAL
to the MCC_UPDATE_CMD. In that case, iwlwifi needs to let
the upper layers (cfg80211 / hostapd) know that the channel
list they know about has been updated.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=201105
The oldest firmware supported by iwlmvm does support getting
the average beacon RSSI. Enable the sta_statistics() call
from mac80211 even on older firmware versions.
From coreboot/BIOS:
Name ("WGDS", Package() {
Revision,
Package() {
DomainType, // 0x7:WiFi ==> We miss this one.
WgdsWiFiSarDeltaGroup1PowerMax1, // Group 1 FCC 2400 Max
WgdsWiFiSarDeltaGroup1PowerChainA1, // Group 1 FCC 2400 A Offset
WgdsWiFiSarDeltaGroup1PowerChainB1, // Group 1 FCC 2400 B Offset
WgdsWiFiSarDeltaGroup1PowerMax2, // Group 1 FCC 5200 Max
WgdsWiFiSarDeltaGroup1PowerChainA2, // Group 1 FCC 5200 A Offset
WgdsWiFiSarDeltaGroup1PowerChainB2, // Group 1 FCC 5200 B Offset
WgdsWiFiSarDeltaGroup2PowerMax1, // Group 2 EC Jap 2400 Max
WgdsWiFiSarDeltaGroup2PowerChainA1, // Group 2 EC Jap 2400 A Offset
WgdsWiFiSarDeltaGroup2PowerChainB1, // Group 2 EC Jap 2400 B Offset
WgdsWiFiSarDeltaGroup2PowerMax2, // Group 2 EC Jap 5200 Max
WgdsWiFiSarDeltaGroup2PowerChainA2, // Group 2 EC Jap 5200 A Offset
WgdsWiFiSarDeltaGroup2PowerChainB2, // Group 2 EC Jap 5200 B Offset
WgdsWiFiSarDeltaGroup3PowerMax1, // Group 3 ROW 2400 Max
WgdsWiFiSarDeltaGroup3PowerChainA1, // Group 3 ROW 2400 A Offset
WgdsWiFiSarDeltaGroup3PowerChainB1, // Group 3 ROW 2400 B Offset
WgdsWiFiSarDeltaGroup3PowerMax2, // Group 3 ROW 5200 Max
WgdsWiFiSarDeltaGroup3PowerChainA2, // Group 3 ROW 5200 A Offset
WgdsWiFiSarDeltaGroup3PowerChainB2, // Group 3 ROW 5200 B Offset
}
})
When reading the ACPI data to find the WGDS, the DATA_SIZE check never
matches.
As the above format shows, the table provides 19 numbers, but our driver
hardcodes the expected size as 18.
Fix the expected size so the check passes and the data can be parsed into
our wgds table.
Then we will see:
iwlwifi 0000:01:00.0: U iwl_mvm_sar_geo_init Sending GEO_TX_POWER_LIMIT
iwlwifi 0000:01:00.0: U iwl_mvm_sar_geo_init SAR geographic profile[0]
Band[0]: chain A = 68 chain B = 69 max_tx_power = 54
iwlwifi 0000:01:00.0: U iwl_mvm_sar_geo_init SAR geographic profile[0]
Band[1]: chain A = 48 chain B = 49 max_tx_power = 70
iwlwifi 0000:01:00.0: U iwl_mvm_sar_geo_init SAR geographic profile[1]
Band[0]: chain A = 51 chain B = 67 max_tx_power = 50
iwlwifi 0000:01:00.0: U iwl_mvm_sar_geo_init SAR geographic profile[1]
Band[1]: chain A = 69 chain B = 70 max_tx_power = 68
iwlwifi 0000:01:00.0: U iwl_mvm_sar_geo_init SAR geographic profile[2]
Band[0]: chain A = 49 chain B = 50 max_tx_power = 48
iwlwifi 0000:01:00.0: U iwl_mvm_sar_geo_init SAR geographic profile[2]
Band[1]: chain A = 52 chain B = 53 max_tx_power = 51
Cc: stable@vger.kernel.org # 4.12+ Fixes: a6bff3cb19b7 ("iwlwifi: mvm: add GEO_TX_POWER_LIMIT cmd for geographic tx power table") Signed-off-by: Matt Chen <matt.chen@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The change corrects the error path in gpiochip_add_data_with_key()
by avoiding a call to ida_simple_remove() if ida_simple_get() returns
an error.
Note that ida_simple_remove()/ida_free() throws a BUG() if the id argument
is negative, which makes it easy to verify the fix by fuzzing the return
value of ida_simple_get().
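A minimal sketch of the corrected error path (the label name is illustrative):

gdev->id = ida_simple_get(&gpio_ida, 0, 0, GFP_KERNEL);
if (gdev->id < 0) {
	ret = gdev->id;
	goto err_free_gdev;	/* never reach ida_simple_remove() with a negative id */
}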
Fixes: ff2b13592299 ("gpio: make the gpiochip a real device") Cc: stable@vger.kernel.org # v4.6+ Signed-off-by: Vladimir Zapolskiy <vz@mleia.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
GLK firmware can indicate that the tuning value will be restored after
runtime suspend, but not actually do that. Add a workaround that detects
such cases, and lets the driver do re-tuning instead.
The card detect IRQ does not work with modern BIOS (that want
to use _DSD to provide the card detect GPIO to the driver).
Details:
The mmc core provides the mmc_gpiod_request_cd() API to let host drivers
request the gpio descriptor for the "card detect" pin.
This pin is specified in the ACPI for the SDHC device:
* Either as a resource using _CRS. This is a method used by legacy BIOS.
(The driver needs to tell which resource index).
* Or as a named property ("cd-gpios"/"cd-gpio") in _DSD (which internally
points to an entry in _CRS). This way, the driver can lookup using a
string. This is what modern BIOS prefer to use.
This API finally results in a call to the following code:
struct gpio_desc *acpi_find_gpio(..., const char *con_id,...)
{
...
/* Lookup gpio (using "<con_id>-gpio") in the _DSD */
...
if (!acpi_can_fallback_to_crs(adev, con_id))
return ERR_PTR(-ENOENT);
...
/* Falling back to _CRS is allowed, Lookup gpio in the _CRS */
...
}
Note that this means that if the ACPI has _DSD properties, the kernel
will never use _CRS for the lookup (Because acpi_can_fallback_to_crs()
will always be false for any device that has _DSD entries).
The SDHCI driver is thus currently broken on a modern BIOS, even if
BIOS provides both _CRS (for index based lookup) and _DSD entries (for
string based lookup). Ironically, none of these will be used for the
lookup currently because:
* Since the con_id is NULL, acpi_find_gpio() does not find a matching
entry in DSDT. (The DSDT entry has the property name = "cd-gpios")
* Because the ACPI tables contain _DSD entries, acpi_can_fallback_to_crs()
returns false (because device properties have been populated from
_DSD), thus the _CRS is never used for the lookup.
Fix:
Try "cd" for lookup in the _DSD before falling back to using NULL so
as to try looking up in the _CRS.
I've tested this patch successfully with both Legacy BIOS (that
provide only _CRS method) as well as modern BIOS (that provide both
_CRS and _DSD). Also the use of "cd" appears to be fairly consistent
across other users of this API (other MMC host controller drivers).
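A hedged sketch of the resulting lookup order; the flag and debounce
arguments are illustrative and the real patch may retry on different errors:

err = mmc_gpiod_request_cd(host->mmc, "cd", 0, false, 0, NULL);
if (err)
	/* no "cd-gpios" _DSD property found: retry with NULL so the core
	 * may fall back to the index-based _CRS lookup on legacy BIOS
	 */
	err = mmc_gpiod_request_cd(host->mmc, NULL, 0, false, 0, NULL);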
Link: https://lkml.org/lkml/2018/9/25/1113 Signed-off-by: Rajat Jain <rajatja@google.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Fixes: f10e4bf6632b ("gpio: acpi: Even more tighten up ACPI GPIO lookups") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
At the request of the reporter, the Linux kernel security team offers to
postpone the publishing of a fix for up to 5 business days from the date
of a report.
While it is generally undesirable to keep a fix private after it has
been developed, this short window is intended to allow distributions to
package the fix into their kernel builds and permits early inclusion of
the security team in the case of a co-ordinated disclosure with other
parties. Unfortunately, discussions with major Linux distributions and
cloud providers have revealed that 5 business days is not sufficient to
achieve either of these two goals.
As an example, cloud providers need to roll out KVM security fixes to a
global fleet of hosts with sufficient early ramp-up and monitoring. An
end-to-end timeline of less than two weeks dramatically cuts into the
amount of early validation and increases the chance of guest-visible
regressions.
The consequence of this timeline mismatch is that security issues are
commonly fixed without the involvement of the Linux kernel security team
and are instead analysed and addressed by an ad-hoc group of developers
across companies contributing to Linux. In some cases, mainline (and
therefore the official stable kernels) can be left to languish for
extended periods of time. This undermines the Linux kernel security
process and puts upstream developers in a difficult position should they
find themselves involved with an undisclosed security problem that they
are unable to report due to restrictions from their employer.
To accommodate the needs of these users of the Linux kernel and
encourage them to engage with the Linux security team when security
issues are first uncovered, extend the maximum period for which fixes
may be delayed to 7 calendar days, or 14 calendar days in exceptional
cases, where the logistics of QA and large scale rollouts specifically
need to be accommodated. This brings parity with the linux-distros@
maximum embargo period of 14 calendar days.
Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Amit Shah <aams@amazon.com> Cc: Laura Abbott <labbott@redhat.com> Acked-by: Kees Cook <keescook@chromium.org> Co-developed-by: Thomas Gleixner <tglx@linutronix.de> Co-developed-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The Linux kernel security team has been accused of rejecting the idea of
security embargoes. This is incorrect, and could dissuade people from
reporting security issues to us under the false assumption that the
issue would leak prematurely.
Clarify the handling of embargoed information in our process
documentation.
Co-developed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Kees Cook <keescook@chromium.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sasha has somehow been convinced into helping me with the stable kernel
maintenance. Codify this slip in good judgement before he realizes what
he really signed up for :)
PCM OSS layer may allocate a few temporary buffers, one for the core
read/write and another for the conversions via plugins. Currently
both are allocated via vmalloc(). But as the allocation size is
equivalent with the PCM period size, the required size might be quite
small, depending on the application.
This patch replaces these vmalloc() calls with kvzalloc() for covering
small period sizes better. Also, we use "z"-alloc variant here for
addressing the possible uninitialized access reported by syzkaller.
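A minimal sketch of the substitution, with illustrative variable names:

buf = kvzalloc(period_bytes, GFP_KERNEL);	/* was: vmalloc(period_bytes) */
if (!buf)
	return -ENOMEM;

/* ... use the buffer for the period-sized copy/conversion ... */

kvfree(buf);					/* was: vfree(buf) */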
USB3 roothub might autosuspend before a plugged USB3 device is detected,
causing USB3 device enumeration failure.
USB3 devices don't show up as connected and enabled until USB3 link training
completes. On a fast booting platform with a slow USB3 link training the
link might reach the connected enabled state just as the bus is suspending.
If this device is discovered first time by the xhci_bus_suspend() routine
it will be put to U3 suspended state like the other ports which failed to
suspend earlier.
The hub thread will notice the connect change and resume the bus,
moving the port back to U0.
This U0 -> U3 -> U0 transition right after being connected seems to be
too much for some devices, causing them to first go to SS.Inactive state,
and finally end up stuck in a polling state with reset asserted.
Fix this by failing the bus suspend if a port has a connect change or is
in a polling state in xhci_bus_suspend().
Don't do any port changes until all ports are checked; buffer all port
changes and only write them in the end if suspend can proceed.
Implement a workaround for ThunderX2 Errata-129 (documented in
"CN99XX Known Issues" available at the Cavium support site).
As per ThunderX2 Errata-129, a USB 2 device may come up as USB 1:
if a connection to a USB 1 device is followed by another connection to
a USB 2 device, the link will come up as USB 1 for the USB 2 device.
Resolution: reset the PHY after the USB 1 device is disconnected.
The PHY reset sequence is done using private registers in XHCI register
space. After the PHY is reset we check for the PLL lock status and retry
the operation if it fails. From our tests, retrying 4 times is sufficient.
Add a new quirk flag XHCI_RESET_PLL_ON_DISCONNECT to invoke the workaround
in handle_xhci_port_status().
This definition is passed to msecs_to_jiffies, so it is expressed in
milliseconds. According to the comments, the max rexit timeout should be
20ms. Align with the comments to properly calculate the delay.
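A hedged sketch of the intent; the macro and completion names follow the
existing xhci-hub code but should be treated as illustrative:

#define XHCI_MAX_REXIT_TIMEOUT_MS	20	/* milliseconds, per the comment */

t = wait_for_completion_timeout(&bus_state->rexit_done[wIndex],
				msecs_to_jiffies(XHCI_MAX_REXIT_TIMEOUT_MS));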
Realtek USB3.0 Card Reader [0bda:0328] reports wrong port status on
Cannon lake PCH USB3.1 xHCI [8086:a36d] after resume from S3,
after clearing the port reset it works fine.
Since this device is registered on the USB3 roothub at boot,
when the port status reports not-SuperSpeed, xhci_get_port_status will call
an uninitialized completion in bus_state[0].
The kernel will hang because of the NULL pointer.
Restrict the USB2 resume status check to the USB2 roothub to fix the hang issue.
Observed "TRB completion code (27)" error which corresponds to Stopped -
Length Invalid error(xhci spec section 4.17.4) while connecting USB to
SATA bridge.
It looks like this case was not considered when the patch below [1] was
committed. Hence add a new check, which prevents the
invalid byte size error.
[1] ade2e3a xhci: handle transfer events without TRB pointer
Cc: <stable@vger.kernel.org> Signed-off-by: Sandeep Singh <sandeep.singh@amd.com>
cc: Nehal Shah <Nehal-bakulchandra.Shah@amd.com>
cc: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
At xhci removal the USB3 hcd (shared_hcd) is removed before the primary
USB2 hcd. Interrupts for port status changes may still occur for USB3
ports after the shared_hcd is freed, causing NULL pointer dereference.
Check if xhci->shared_hcd is still valid before handling USB3 port events.
Cc: <stable@vger.kernel.org> Reported-by: Peter Chen <peter.chen@nxp.com> Tested-by: Jack Pham <jackp@codeaurora.org> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ensure that the shared_hcd pointer is valid when calling usb_put_hcd().
The shared_hcd is removed and freed in xhci by first calling
usb_remove_hcd(xhci->shared_hcd), and later
usb_put_hcd(xhci->shared_hcd).
After commit fe190ed0d602 ("xhci: Do not halt the host until both HCD have
disconnected their devices.") the shared_hcd was never properly put as
xhci->shared_hcd was set to NULL before usb_put_hcd(xhci->shared_hcd) was
called.
shared_hcd (USB3) is removed before primary hcd (USB2).
While removing the primary hcd we might need to handle xhci interrupts
to cleanly remove last USB2 devices, therefore we need to set
xhci->shared_hcd to NULL before removing the primary hcd to let xhci
interrupt handler know shared_hcd is no longer available.
xhci-plat.c, xhci-histb.c and xhci-mtk first create both their hcds before
adding them. So to keep the correct reverse removal order, use a temporary
shared_hcd variable for them.
For more details see commit 4ac53087d6d4 ("usb: xhci: plat: Create both
HCDs before adding them")
Fixes: fe190ed0d602 ("xhci: Do not halt the host until both HCD have disconnected their devices.") Cc: Joel Stanley <joel@jms.id.au> Cc: Chunfeng Yun <chunfeng.yun@mediatek.com> Cc: Thierry Reding <treding@nvidia.com> Cc: Jianguo Sun <sunjianguo1@huawei.com> Cc: <stable@vger.kernel.org> Reported-by: Jack Pham <jackp@codeaurora.org> Tested-by: Jack Pham <jackp@codeaurora.org> Tested-by: Peter Chen <peter.chen@nxp.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In the dwc3_pci_quirks() function, the gpiod lookup table is only registered
for the Baytrail SoC. But in dwc3_pci_remove(), we try to unregister it
without any checks. This leads to a NULL pointer dereference in
gpiod_remove_lookup_table() when unloading the module on non-Baytrail
SoCs. This patch fixes this issue.
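A minimal sketch of the guarded removal, mirroring the Baytrail check on the
probe side (identifier names as used in dwc3-pci, treated as assumptions):

if (dwc->pci->device == PCI_DEVICE_ID_INTEL_BYT)
	gpiod_remove_lookup_table(&platform_bytcr_gpios);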
Fixes: 5741022cbdf3 ("usb: dwc3: pci: Add GPIO lookup table on platforms
without ACPI GPIO resources") Cc: <stable@vger.kernel.org> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Current check for the last extra TRB for zero and unaligned transfers
does not account for isoc OUT. The last TRB of the Buffer Descriptor for
isoc OUT transfers will be retired with HWO=0. As a result, we won't
return early. The req->remaining will be updated to include the BUFSIZ
count of the extra TRB, and the actual number of transferred bytes
calculation will be wrong.
To fix this, check whether it's a short or zero packet and the last TRB
chain bit to return early.
This will clear the USB_PORT_FEAT_C_CONNECTION bit in case of a hub port reset
only if a device was attached to the hub port before resetting the hub port.
Using a Lenovo T480s attached to the ultra dock it was not possible to detect
some usb-c devices at the dock usb-c ports because the hub_port_reset code
will clear the USB_PORT_FEAT_C_CONNECTION bit after the actual hub port reset.
Using this device combo the USB_PORT_FEAT_C_CONNECTION bit was set between the
actual hub port reset and the clear of the USB_PORT_FEAT_C_CONNECTION bit.
This ends up clearing the USB_PORT_FEAT_C_CONNECTION bit after the
new device was attached, so that it was not detected.
This patch will not clear the USB_PORT_FEAT_C_CONNECTION bit if there is
currently no device attached to the port before the hub port reset.
This will avoid clearing the connection bit for new attached devices.
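A minimal sketch of the conditional clear in hub_port_reset() (simplified):

/* only clear the change bit if a device was attached before the reset */
if (udev)
	usb_clear_port_feature(hub->hdev, port1,
			       USB_PORT_FEAT_C_CONNECTION);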
When building with CONFIG_EFI and CONFIG_EFI_STUB on ARM, the libstub
Makefile would use -mno-single-pic-base without checking it was
supported by the compiler. As the ARM (32-bit) clang backend does not
support this flag, the build would fail.
This changes the Makefile to check the compiler's support for
-mno-single-pic-base before using it, similar to c1c386681bd7 ("ARM:
8767/1: add support for building ARM kernel with clang").
Signed-off-by: Alistair Strachan <astrachan@google.com> Reviewed-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Previously, when a HID client such as the Steam Client was running, this
driver disabled its input device to avoid doubling the input events.
While it worked mostly fine, some games got confused by the idle gamepad,
and switched to two player mode, or asked the user to choose which gamepad
to use. Other games just crashed, probably a bug in Unity [1].
With this commit, when a HID client starts, the input device is removed;
when the HID client ends the input device is recreated.
According to vendor sdk, vco calibration has to be executed
for each channel configuration whereas mcu calibration has to be
performed during channel scanning. This patch fixes the mt76x0
monitor mode issue since in that configuration vco calibration
was never executed.
skb_can_coalesce() allows coalescing neighboring slab objects into
a single frag:
return page == skb_frag_page(frag) &&
off == frag->page_offset + skb_frag_size(frag);
ceph_tcp_sendpage() can be handed slab pages. One example of this is
XFS: it passes down sector sized slab objects for its metadata I/O. If
the kernel client is co-located on the OSD node, the skb may go through
loopback and pop on the receive side with the exact same set of frags.
When tcp_recvmsg() attempts to copy out such a frag, hardened usercopy
complains because the size exceeds the object's allocated size:
Although skb_can_coalesce() could be taught to return false if the
resulting frag would cross a slab object boundary, we already have
a fallback for non-refcounted pages. Utilize it for slab pages too.
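A minimal sketch of the idea: extend the existing non-refcounted-page
fallback in ceph_tcp_sendpage() to slab pages as well (simplified):

if (page_count(page) >= 1 && !PageSlab(page))
	sendpage = sock->ops->sendpage;
else
	sendpage = sock_no_sendpage;	/* copies data instead of taking a page ref */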
The PixArt OEM mice are known for disconnecting every minute in
runlevel 1 or 3 if they are not always polled. So add quirk
ALWAYS_POLL for this one as well.
The PixArt OEM mice are known for disconnecting every minute in
runlevel 1 or 3 if they are not always polled. So add quirk
ALWAYS_POLL for two Primax mice as well.
0x4e22 is the Dell MS111-P and 0x4d0f is the unbranded HP Portia
mouse HP 697738-001. Both were built until approx. 2014.
Those were the standard mice from those vendors and are still
around - even as new old stock.
When a UHID_CREATE command is written to the uhid char device, a
copy_from_user() is done from a user pointer embedded in the command.
When the address limit is KERNEL_DS, e.g. as is the case during
sys_sendfile(), this can read from kernel memory. Alternatively,
information can be leaked from a setuid binary that is tricked to write
to the file descriptor. Therefore, forbid UHID_CREATE in these cases.
No other commands in uhid_char_write() are affected by this bug and
UHID_CREATE is marked as "obsolete", so apply the restriction to
UHID_CREATE only rather than to uhid_char_write() entirely.
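A hedged sketch of the restriction in uhid_char_write()'s dispatch; the exact
predicates and helper names in the real patch may differ:

case UHID_CREATE:
	/* the request embeds a __user pointer we later copy_from_user() */
	if (file->f_cred != current_cred() || uaccess_kernel()) {
		ret = -EACCES;
		goto unlock;
	}
	ret = uhid_dev_create(uhid, &uhid->input_buf);
	break;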
Thanks to Dmitry Vyukov for adding uhid definitions to syzkaller and to
Jann Horn for commit 9da3f2b740544 ("x86/fault: BUG() when uaccess
helpers fault on kernel addresses"), allowing this bug to be found.
Many HP AMD based laptops contain an SMB0001 device like this:
Device (SMBD)
{
Name (_HID, "SMB0001") // _HID: Hardware ID
Name (_CRS, ResourceTemplate () // _CRS: Current Resource Settings
{
IO (Decode16,
0x0B20, // Range Minimum
0x0B20, // Range Maximum
0x20, // Alignment
0x20, // Length
)
IRQ (Level, ActiveLow, Shared, )
{7}
})
}
The legacy style IRQ resource here causes acpi_dev_get_irqresource() to
be called with legacy=true and this message to show in dmesg:
ACPI: IRQ 7 override to edge, high
This causes issues when later on the AMD0030 GPIO device gets enumerated:
Device (GPIO)
{
Name (_HID, "AMDI0030") // _HID: Hardware ID
Name (_CID, "AMDI0030") // _CID: Compatible ID
Name (_UID, Zero) // _UID: Unique ID
Method (_CRS, 0, NotSerialized) // _CRS: Current Resource Settings
{
Name (RBUF, ResourceTemplate ()
{
Interrupt (ResourceConsumer, Level, ActiveLow, Shared, ,, )
{
0x00000007,
}
Memory32Fixed (ReadWrite,
0xFED81500, // Address Base
0x00000400, // Address Length
)
})
Return (RBUF) /* \_SB_.GPIO._CRS.RBUF */
}
}
Now acpi_dev_get_irqresource() gets called with legacy=false, but because
of the earlier override of the trigger-type acpi_register_gsi() returns
-EBUSY (because we try to register the same interrupt with a different
trigger-type) and we end up setting IORESOURCE_DISABLED in the flags.
The setting of IORESOURCE_DISABLED causes platform_get_irq() to call
acpi_irq_get() which is not implemented on x86 and returns -EINVAL,
resulting in the following in dmesg:
amd_gpio AMDI0030:00: Failed to get gpio IRQ: -22
amd_gpio: probe of AMDI0030:00 failed with error -22
The SMB0001 is a "virtual" device in the sense that the only way the OS
interacts with it is through calling a couple of methods to do SMBus
transfers. As such it is weird that it has IO and IRQ resources at all,
because the driver for it is not expected to ever access the hardware
directly.
The Linux driver for the SMB0001 device directly binds to the acpi_device
through the acpi_bus, so we do not need to instantiate a platform_device
for this ACPI device. This commit adds the SMB0001 HID to the
forbidden_id_list, avoiding the instantiating of a platform_device for it.
Not instantiating a platform_device means we will no longer call
acpi_dev_get_irqresource() for the legacy IRQ resource fixing the probe of
the AMDI0030 device failing.
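A hedged sketch of the new entry in acpi_platform.c's forbidden_id_list (the
pre-existing entries are abbreviated and may not be exact):

static const struct acpi_device_id forbidden_id_list[] = {
	{"PNP0000",  0},	/* PIC */
	{"PNP0100",  0},	/* Timer */
	{"PNP0200",  0},	/* AT DMA Controller */
	{"ACPI0009", 0},	/* IOxAPIC */
	{"ACPI000A", 0},	/* IOAPIC */
	{"SMB0001",  0},	/* ACPI SMBUS virtual device */
	{"", 0},
};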
Fix this by sanitizing req.gid before calling macro GID_TO_GRU, which
uses it to index gru_base.
Notice that given that speculation windows are large, the policy is
to kill the speculation on the first load and not worry if it can be
completed with a dependent load/store [1].
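A minimal sketch of the sanitization, using the standard helper from
<linux/nospec.h> (the bound and variable names are assumptions):

if (req.gid >= gru_max_gids)
	return -EINVAL;
/* prevent speculative indexing of gru_base[] with an out-of-range gid */
req.gid = array_index_nospec(req.gid, gru_max_gids);

gru = GID_TO_GRU(req.gid);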
Use the new of_get_compatible_child() helper to lookup the nfc child
node instead of using of_find_compatible_node(), which searches the
entire tree from a given start node and thus can return an unrelated
(i.e. non-child) node.
This also addresses a potential use-after-free (e.g. after probe
deferral) as the tree-wide helper drops a reference to its first
argument (i.e. the node of the device being probed).
While at it, also fix a related nfc-node reference leak.
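A hedged sketch of the lookup change (the compatible string is illustrative):

np = of_get_compatible_child(dev->of_node, "atmel,sama5d3-nfc");
if (!np) {
	dev_err(dev, "Could not find NFC child node\n");
	return -ENODEV;
}

/* ... use the node ... */

of_node_put(np);	/* also fixes the nfc-node reference leak */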
Fixes: f88fc122cc34 ("mtd: nand: Cleanup/rework the atmel_nand driver") Cc: stable <stable@vger.kernel.org> # 4.11 Cc: Nicolas Ferre <nicolas.ferre@microchip.com> Cc: Josh Wu <rainyfeeling@outlook.com> Cc: Boris Brezillon <boris.brezillon@bootlin.com> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Passing a timeout of zero to the synchronous serdev_device_write()
helper currently does not mean to wait forever (unlike passing zero to
serdev_device_wait_until_sent()). Instead, if there's insufficient
room in the write buffer, we'd end up with an incomplete write.
Passing a timeout of zero to the synchronous serdev_device_write()
helper currently does not mean to wait forever (unlike passing zero to
serdev_device_wait_until_sent()). Instead, if there's insufficient
room in the write buffer, we'd end up with an incomplete write.
Fixes: 37768b054f20 ("gnss: add generic serial driver") Cc: stable <stable@vger.kernel.org> # 4.19 Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
After building the kernel with Clang, the following section mismatch
warning appears:
WARNING: vmlinux.o(.text+0x3bf19a6): Section mismatch in reference from
the function ssc_probe() to the function
.init.text:atmel_ssc_get_driver_data()
The function ssc_probe() references
the function __init atmel_ssc_get_driver_data().
This is often because ssc_probe lacks a __init
annotation or the annotation of atmel_ssc_get_driver_data is wrong.
Remove __init from atmel_ssc_get_driver_data to get rid of the mismatch.
Following on from this patch: https://lkml.org/lkml/2017/11/3/516,
Corsair K70 LUX RGB keyboards also require the DELAY_INIT quirk to
start correctly at boot.
Dmesg output:
usb 1-6: string descriptor 0 read error: -110
usb 1-6: New USB device found, idVendor=1b1c, idProduct=1b33
usb 1-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-6: can't set config #1, error -110
Devices connected under Terminus Technology Inc. Hub (1a40:0101) may
fail to work after the system resumes from suspend:
[ 206.063325] usb 3-2.4: reset full-speed USB device number 4 using xhci_hcd
[ 206.143691] usb 3-2.4: device descriptor read/64, error -32
[ 206.351671] usb 3-2.4: device descriptor read/64, error -32
Some experiments indicate that the USB devices connected to the hub are
innocent; it's the hub itself that is to blame. The hub needs extra
delay time after it resets its port.
Hence wait for an extra delay if the device is connected to this quirky
hub.
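A hedged sketch of the corresponding quirk-table entry; the quirk flag name
is an assumption:

/* Terminus Technology Inc. Hub */
{ USB_DEVICE(0x1a40, 0x0101), .driver_info = USB_QUIRK_HUB_SLOW_RESET },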
Raydium USB touchscreen fails to set config if LPM is enabled:
[ 2.030658] usb 1-8: New USB device found, idVendor=2386, idProduct=3119
[ 2.030659] usb 1-8: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 2.030660] usb 1-8: Product: Raydium Touch System
[ 2.030661] usb 1-8: Manufacturer: Raydium Corporation
[ 7.132209] usb 1-8: can't set config #1, error -110
Same behavior can be observed on 2386:3114.
Raydium claims the touchscreen supports LPM under Windows, so I used
Microsoft USB Test Tools (MUTT) [1] to check its LPM status. MUTT shows
that the LPM doesn't work under Windows, either. So let's just disable LPM
for Raydium touchscreens.
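A minimal sketch of the quirk entries, using the existing USB_QUIRK_NO_LPM
flag:

/* Raydium Touchscreen */
{ USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM },
{ USB_DEVICE(0x2386, 0x3119), .driver_info = USB_QUIRK_NO_LPM },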
The cdc-acm kernel module currently does not support the Hiro (Conexant)
H05228 USB modem. The patch below adds the device specific information:
idVendor 0x0572
idProduct 0x1349
I was trying to solve a double free but I introduced a more serious
NULL dereference bug. The problem is that if there is an IRQ which
triggers immediately, then we need "info->uio_dev" but it's not set yet.
This patch puts the original initialization back to how it was and just
sets info->uio_dev to NULL on the error path so it should solve both
the Oops and the double free.
Sparse highlighted it, and appears to be a pure bug (from vs to).
./arch/riscv/include/asm/uaccess.h:403:35: warning: incorrect type in argument 1 (different address spaces)
./arch/riscv/include/asm/uaccess.h:403:39: warning: incorrect type in argument 2 (different address spaces)
./arch/riscv/include/asm/uaccess.h:409:37: warning: incorrect type in argument 1 (different address spaces)
./arch/riscv/include/asm/uaccess.h:409:41: warning: incorrect type in argument 2 (different address spaces)
Re-enable OCTEON USB driver which is needed on older hardware
(e.g. EdgeRouter Lite) for mass storage etc. This got accidentally
deleted when config options were changed for OCTEON2/3 USB.
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: f922bc0ad08b ("MIPS: Octeon: cavium_octeon_defconfig: Enable more drivers")
Patchwork: https://patchwork.linux-mips.org/patch/21077/ Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: stable@vger.kernel.org # 4.14+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Patch ad608fbcf166 changed how events were subscribed to address an issue
elsewhere. As a side effect of that change, the "add" callback was called
before the event subscription was added to the list of subscribed events,
causing the first event queued by the add callback (and possibly other
events arriving soon afterwards) to be lost.
Fix this by adding the subscription to the list before calling the "add"
callback, and clean up afterwards if that fails.
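A hedged sketch of the reordering in v4l2_event_subscribe(); locking is
elided and the unsubscribe helper name is an assumption:

/* subscribe first, so events queued by ->add() are not lost */
list_add(&sev->list, &fh->subscribed);

if (sev->ops && sev->ops->add) {
	ret = sev->ops->add(sev, elems);
	if (ret) {
		/* roll the subscription back on failure */
		__v4l2_event_unsubscribe(fh, sev);
		return ret;
	}
}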
Fixes: ad608fbcf166 ("media: v4l: event: Prevent freeing event subscriptions while accessed") Reported-by: Dave Stevenson <dave.stevenson@raspberrypi.org> Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com> Tested-by: Dave Stevenson <dave.stevenson@raspberrypi.org> Reviewed-by: Hans Verkuil <hans.verkuil@cisco.com> Tested-by: Hans Verkuil <hans.verkuil@cisco.com> Cc: stable@vger.kernel.org (for 4.14 and up) Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
modify_ldt(2) leaves the old LDT mapped after switching over to the new
one. The old LDT gets freed and the pages can be re-used.
Leaving the mapping in place can have security implications. The mapping is
present in the userspace page tables and Meltdown-like attacks can read
these freed and possibly reused pages.
It's relatively simple to fix: unmap the old LDT and flush TLB before
freeing the old LDT memory.
This further allows to avoid flushing the TLB in map_ldt_struct() as the
slot is unmapped and flushed by unmap_ldt_struct() or has never been mapped
at all.
[ tglx: Massaged changelog and removed the needless line breaks ]
Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: bp@alien8.de Cc: hpa@zytor.com Cc: dave.hansen@linux.intel.com Cc: luto@kernel.org Cc: peterz@infradead.org Cc: boris.ostrovsky@oracle.com Cc: jgross@suse.com Cc: bhe@redhat.com Cc: willy@infradead.org Cc: linux-mm@kvack.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181026122856.66224-3-kirill.shutemov@linux.intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
On 5-level paging the LDT remap area is placed in the middle of the KASLR
randomization region and it can overlap with the direct mapping, the
vmalloc or the vmap area.
The LDT mapping is per mm, so it cannot be moved into the P4D page table
next to the CPU_ENTRY_AREA without complicating PGD table allocation for
5-level paging.
The 4 PGD slot gap just before the direct mapping is reserved for
hypervisors, so it cannot be used.
Move the direct mapping one slot deeper and use the resulting gap for the
LDT remap area. The resulting layout is the same for 4 and 5 level paging.
[ tglx: Massaged changelog ]
Fixes: f55f0501cbf6 ("x86/pti: Put the LDT in its own PGD if PTI is on") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: bp@alien8.de Cc: hpa@zytor.com Cc: dave.hansen@linux.intel.com Cc: peterz@infradead.org Cc: boris.ostrovsky@oracle.com Cc: jgross@suse.com Cc: bhe@redhat.com Cc: willy@infradead.org Cc: linux-mm@kvack.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181026122856.66224-2-kirill.shutemov@linux.intel.com Signed-off-by: Sasha Levin <sashal@kernel.org>
While git appears to be able to handle this situation, a monitored
build environment (such as the one used for Chrome OS kernel builds)
may detect it and bail out with an access violation error. On top of
that, the attempted write access suggests that git _will_ write to the
file even if a build output directory is specified. Users may have the
reasonable expectation that the source repository remains untouched in
that situation.
Fixes: 6147b1cf19651 ("scripts/setlocalversion: git: Make -dirty check more robust") Cc: Genki Sky <sky@genki.is> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Brian Norris <briannorris@chromium.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
Since commit b41d920acff8 ("kbuild: deb-pkg: split generating packaging
and build"), the build version of the kernel contained in a deb package
is too low by 1.
Prior to the bad commit, the kernel was built first, then the number
in .version file was read out, and written into the debian control file.
Now, the debian control file is created before the kernel is actually
compiled, which is causing the version number mismatch.
Let the mkdebian script pass KBUILD_BUILD_VERSION=${revision} to require
the build system to use the specified version number.
Packets with marked invalid IP/UDP/TCP checksums were considered as good
by the driver. The error was in the logic that processes offload bits in
the RX descriptor.
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
Fixed a conditional mistake due to which macvlan unicast
item number 32 was not added to the unicast filter.
The consequence is that when exactly 32 macvlans are created
on the NIC, the last created macvlan receives no traffic because
its MAC was not registered in HW.
Fixes: 94b3b542303f ("net: aquantia: vlan unicast address list correct handling") Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Tested-by: Nikita Danilov <nikita.danilov@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
An IOMMU fault may occur on an unbind/bind or if_down/if_up sequence.
Although the driver disables the rings on down, this is not enough.
Due to internal HW design, during subsequent initialization
the NIC sometimes may reuse RX descriptors cache and write to the
host memory from the descriptor cache.
That gets caught by the IOMMU on the host.
This patch invalidates the descriptor cache in NIC on interface down
to prevent writing to the cached descriptors and to the memory pointed
in those descriptors.
Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com> Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
qed_sp_destroy_request() API was added for SPQ users that need to
free/return the entry they acquired in their error flows.
Signed-off-by: Denis Bolotin <denis.bolotin@cavium.com> Signed-off-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
When there are no SPQ entries left in the free_pool, new entries are
allocated and are added to the unlimited list. When an entry in the pool
is available, the content is copied from the original entry, and the new
entry is sent to the device. qed_spq_post() is not aware of that, so the
additional entry is stored in the original entry as p_post_ent, which can
later be returned to the pool.
Signed-off-by: Denis Bolotin <denis.bolotin@cavium.com> Signed-off-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
Free the allocated SPQ entry or return the acquired SPQ entry to the free
list in error flows.
Signed-off-by: Denis Bolotin <denis.bolotin@cavium.com> Signed-off-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
Since commit bacd75cfac8a ("i40e/i40evf: Add capability exchange for
outer checksum", 2017-04-06) the i40e driver has not reported support
for IP-in-IP offloads. This likely occurred due to a bad rebase, as the
commit extracts hw_enc_features into its own variable. As part of this
change, it dropped the NETIF_F_FSO_IPXIP flags from the
netdev->hw_enc_features. This was unfortunately not caught during code
review.
Fix this by adding back the missing feature flags.
For reference, NETIF_F_GSO_IPXIP4 was added in commit 7e13318daa4a
("net: define gso types for IPx over IPv4 and IPv6", 2016-05-20),
replacing NETIF_F_GSO_IPIP and NETIF_F_GSO_SIT.
NETIF_F_GSO_IPXIP6 was added in commit bf2d1df39502 ("intel: Add support
for IPv6 IP-in-IP offload", 2016-05-20).
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
Since the req_speeds field in struct ice_link_status is a u8,
req_speeds & ICE_AQ_LINK_SPEED_40GB always returns 0. This was caught
by a coverity scan.
Fix this by changing req_speeds to be u16.
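A minimal sketch of the type change (other fields elided);
ICE_AQ_LINK_SPEED_40GB needs a bit above what a u8 can hold:

struct ice_link_status {
	/* ... other fields unchanged ... */
	u16 req_speeds;		/* was: u8 req_speeds */
};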
Reported-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
Currently if the driver does a TSO offload the bytecount sent to
netdev_tx_sent_queue will be incorrect. This is because in ice_tso we
overwrite the initial value that we set in ice_tx_map. This creates a
mismatch between the Tx and Tx clean flow. In the Tx clean flow we
calculate the bytecount (called total_bytes) as we clean the
descriptors so the value used in the Tx clean path is correct. Fix this
by using += in ice_tso instead of =. This fixes the mismatch in
bytecount mentioned above.
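A minimal sketch of the one-character fix in ice_tso(); the field names
follow the driver but are quoted from memory:

/* accumulate on top of the value set in ice_tx_map() instead of overwriting it */
first->bytecount += (first->gso_segs - 1) * off->header_len;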
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
# perf record -e '{ref-cycles,cycles}:S' -a sleep 1
# perf script
non matching sample_id_all
That's because we disable sample_id_all bit for non-sampling group
members. We can't do that, because it needs to be the same over the
whole event list. This patch keeps it untouched again.
Reported-by: Andi Kleen <andi@firstfloor.org> Tested-by: Andi Kleen <andi@firstfloor.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20180923150420.27327-1-jolsa@kernel.org Fixes: e9add8bac6c6 ("perf evsel: Disable write_backward for leader sampling group events") Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
Currently jvmti agent can not be used because function scnprintf is not
present in the agent libperf-jvmti.so. As a result the JVM when using
such agent to record JITed code profiling information will fail on
looking up scnprintf:
java: symbol lookup error: lib/libperf-jvmti.so: undefined symbol: scnprintf
This commit fixes that by reverting to the use of snprintf, that can be
looked up, instead of scnprintf, adding a proper check for the returned
value in order to print a better error message when the jitdump file
pathname is too long. Checking the returned value also helps to comply
with some recent gcc versions, like gcc8, which will fail due to
truncated writing checks related to the -Werror=format-truncation= flag.
When running on linux-next (8c60c36d0b8c ("Add linux-next specific files
for 20181019")) + CONFIG_PROVE_LOCKING=y on a big.LITTLE system (e.g.
Juno or HiKey960), we get the following report:
The static_key in question is 'sched_asym_cpucapacity' introduced by
commit:
df054e8445a4 ("sched/topology: Add static_key for asymmetric CPU capacity optimizations")
In this particular case, we enable it because smp_prepare_cpus() will
end up fetching the capacity-dmips-mhz entry from the devicetree,
so we already have some asymmetry detected when entering sched_init_smp().
This didn't get detected in tip/sched/core because we were missing:
commit cb538267ea1e ("jump_label/lockdep: Assert we hold the hotplug lock for _cpuslocked() operations")
Calls to build_sched_domains() post sched_init_smp() will hold the
hotplug lock, it just so happens that this very first call is a
special case. As stated by a comment in sched_init_smp(), "There's no
userspace yet to cause hotplug operations" so this is a harmless
warning.
However, to both respect the semantics of underlying
callees and make lockdep happy, take the hotplug lock in
sched_init_smp(). This also satisfies the comment atop
sched_init_domains() that says "Callers must hold the hotplug lock".
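A minimal sketch of the locking added around the first domain build in
sched_init_smp():

cpus_read_lock();
mutex_lock(&sched_domains_mutex);
sched_init_domains(cpu_active_mask);
mutex_unlock(&sched_domains_mutex);
cpus_read_unlock();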
We need to enable runtime PM on this i2c controller before populating
child devices with i2c_add_adapter(). Otherwise, if a child device uses
runtime PM and stays runtime PM enabled we'll get the following warning
at boot.
Enabling runtime PM for inactive device (a98000.i2c) with active children
Let's move the runtime PM enabling and setup to before we add the
adapter, so that this device can respond to runtime PM requests from
children.
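A hedged sketch of the reordered probe path (error handling abbreviated;
identifier names as in the GENI I2C driver, treated as assumptions):

pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
pm_runtime_use_autosuspend(gi2c->se.dev);
pm_runtime_enable(gi2c->se.dev);

ret = i2c_add_adapter(&gi2c->adap);
if (ret) {
	pm_runtime_disable(gi2c->se.dev);
	return ret;
}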
Fixes: 37692de5d523 ("i2c: i2c-qcom-geni: Add bus driver for the Qualcomm GENI I2C controller") Signed-off-by: Stephen Boyd <swboyd@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
Whenever we update ns_head info, we need to make sure it is still
compatible with all underlying backing devices because although nvme
multipath doesn't have any explicit use of these limits, other devices
can still be stacked on top of it which may rely on the underlying limits.
Start with unlimited stacking limits, and every info update iterate over
siblings and adjust queue limits.
of_dma_configure() was *supposed* to be following the same logic as
acpi_dma_configure() and only setting bus_dma_mask if some range was
specified by the firmware. However, it seems that subtlety got lost in
the process of fitting it into the differently-shaped control flow, and
as a result the force_dma==true case ends up always setting the bus mask
to the 32-bit default, which is not what anyone wants.
Make sure we only touch it if the DT actually said so.
Fixes: 6c2fb2ea7636 ("of/device: Set bus DMA mask as appropriate") Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi> Reported-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi> Tested-by: John Stultz <john.stultz@linaro.org> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Tested-by: Robert Richter <robert.richter@cavium.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>