commit 34b76fcaee upstream.
[Based on a patch from Johan, mangled by gregkh to keep things in line]
Fix up the variable usage in the set_termios call.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Cc: Preston Fick <preston.fick@silabs.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7f482fc88a upstream.
This fix changes the way baudrates are set on the CP210x devices from
Silicon Labs. The CP2101/2/3 will respond to both a GET/SET_BAUDDIV
command, and GET/SET_BAUDRATE command, while CP2104 and higher devices
only respond to GET/SET_BAUDRATE. The current cp210x.ko driver in
kernel version 3.2.0 only implements the GET/SET_BAUDDIV command.
This patch implements the two new codes for the GET/SET_BAUDRATE
commands. Then there is a change in the way that the baudrate is
assigned or retrieved. This is done according to the CP210x USB
specification in AN571. This document can be found here:
http://www.silabs.com/pages/DownloadDoc.aspx?FILEURL=Support%20Documents/TechnicalDocs/AN571.pdf&src=DocumentationWebPart
Sections 5.3/5.4 describe the USB packets for the old baudrate method.
Sections 5.5/5.6 describe the USB packets for the new method. This
patch also implements the new request scheme, and eliminates the
unnecessary baudrate calculations since it uses the "actual baudrate"
method.
This patch solves the problem reported for the CP2104 in bug 42586,
and also keeps support for all other devices (CP2101/2/3).
This patchfile is also attached to the bug report on
bugzilla.kernel.org. This patch has been developed and tested on the
3.2.0 mainline kernel version under Ubuntu 10.11.
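For illustration, a minimal user-space sketch of the difference between the
two schemes (the 0x384000 baud-generator constant and the request names are
taken from AN571 as assumptions, not from the driver source):

#include <stdio.h>

#define BAUD_RATE_GEN_FREQ 0x384000 /* 3,686,400 Hz generator, per AN571 (assumed) */

int main(void)
{
	unsigned int requested = 115200;

	/* old scheme (CP2101/2/3, AN571 5.3/5.4): SET_BAUDDIV takes a divisor */
	unsigned int divisor = BAUD_RATE_GEN_FREQ / requested;
	printf("SET_BAUDDIV divisor=%u -> actual %u baud\n",
	       divisor, BAUD_RATE_GEN_FREQ / divisor);

	/* new scheme (CP2104+, AN571 5.5/5.6): SET_BAUDRATE carries the
	 * requested rate itself, and the device reports the actual rate
	 * back, so no divisor arithmetic is needed in the driver */
	printf("SET_BAUDRATE value=%u\n", requested);
	return 0;
}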
Signed-off-by: Preston Fick <preston.fick@silabs.com>
[duplicate patch also sent by Johan - gregkh]
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 791b7d7cf6 upstream.
This device is an Oscilloscope/Logic Analyzer/Pattern Generator/TDR,
using a Silabs CP2103 USB to UART Bridge.
Signed-off-by: Renato Caldas <rmsc@fe.up.pt>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 8a622e71f5 ]
The md5 key is added to the socket via the remote address, so the
remote address should be used to find the md5 key when sending out
a reset packet.
Signed-off-by: shawnlu <shawn.lu@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 5b35e1e6e9 ]
This commit fixes tcp_trim_head() to recalculate the number of
segments in the skb with the skb's existing MSS, so trimming the head
causes the skb segment count to be monotonically non-increasing - it
should stay the same or go down, but not increase.
Previously tcp_trim_head() used the current MSS of the connection. But
if there was a decrease in MSS between original transmission and ACK
(e.g. due to PMTUD), this could cause tcp_trim_head() to
counter-intuitively increase the segment count when trimming bytes off
the head of an skb. This violated assumptions in tcp_tso_acked() that
tcp_trim_head() only decreases the packet count, so that packets_acked
in tcp_tso_acked() could underflow, leading tcp_clean_rtx_queue() to
pass u32 pkts_acked values as large as 0xffffffff to
ca_ops->pkts_acked().
As an aside, if tcp_trim_head() had really wanted the skb to reflect
the current MSS, it should have called tcp_set_skb_tso_segs()
unconditionally, since a decrease in MSS would mean that a
single-packet skb should now be sliced into multiple segments.
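A minimal user-space sketch of the arithmetic (the lengths and MSS values are
made up for illustration; in the kernel the count is set by
tcp_set_skb_tso_segs()):

#include <stdio.h>

/* rough stand-in for the skb segment count: ceil(len / mss) */
static unsigned int segs(unsigned int len, unsigned int mss)
{
	return (len + mss - 1) / mss;
}

int main(void)
{
	unsigned int len = 3000, orig_mss = 1460, cur_mss = 500;

	printf("original skb: %u segs\n", segs(len, orig_mss));                   /* 3 */
	/* recounting the trimmed skb with the new, smaller MSS can *increase*
	 * the count, which is what broke the tcp_tso_acked() assumption */
	printf("trim 100B, current MSS:  %u segs\n", segs(len - 100, cur_mss));   /* 6 */
	/* recounting with the skb's existing MSS stays non-increasing */
	printf("trim 100B, existing MSS: %u segs\n", segs(len - 100, orig_mss));  /* 2 */
	return 0;
}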
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Nandita Dukkipati <nanditad@google.com>
Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit efc3dbc374 ]
rds_sock_info() triggers locking warnings because we try to perform a
local_bh_enable() (via sock_i_ino()) while hardware interrupts are
disabled (via taking rds_sock_lock).
There is no reason for rds_sock_lock to be a hardware IRQ disabling
lock, none of these access paths run in hardware interrupt context.
Therefore making it a BH disabling lock is safe and sufficient to
fix this bug.
Reported-by: Kumar Sanghvi <kumaras@chelsio.com>
Reported-by: Josh Boyer <jwboyer@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit d00a9dd21b ]
Several problems fixed in this patch :
1) The target of the conditional jump taken when a divide by 0 is
performed by a bpf program is wrong.
2) We must 'generate' the full function prologue/epilogue at pass=0,
or else we can stop too early in pass=1 if the proglen doesn't change
(if the increase of the prologue/epilogue equals the decrease of all
instruction lengths because some jumps are converted to near jumps).
3) Change the wrong length detection at the end of code generation to
issue a more explicit message, no need for a full stack trace.
Reported-by: Phil Oester <kernel@linuxace.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 68315801db ]
When a packet is received on an L2TP IP socket (L2TPv3 IP link
encapsulation), the l2tpip socket's backlog_rcv function calls
xfrm4_policy_check(). This is not necessary, since it was called
before the skb was added to the backlog. With CONFIG_NET_NS enabled,
xfrm4_policy_check() will oops if skb->dev is null, so this trivial
patch removes the call.
This bug has always been present, but only when CONFIG_NET_NS is
enabled does it cause problems. Most users are probably using UDP
encapsulation for L2TP, hence the problem has only recently
surfaced.
EIP: 0060:[<c12bb62b>] EFLAGS: 00210246 CPU: 0
EIP is at l2tp_ip_recvmsg+0xd4/0x2a7
EAX: 00000001 EBX: d77b5180 ECX: 00000000 EDX: 00200246
ESI: 00000000 EDI: d63cbd30 EBP: d63cbd18 ESP: d63cbcf4
DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
Call Trace:
[<c1218568>] sock_common_recvmsg+0x31/0x46
[<c1215c92>] __sock_recvmsg_nosec+0x45/0x4d
[<c12163a1>] __sock_recvmsg+0x31/0x3b
[<c1216828>] sock_recvmsg+0x96/0xab
[<c10b2693>] ? might_fault+0x47/0x81
[<c10b2693>] ? might_fault+0x47/0x81
[<c1167fd0>] ? _copy_from_user+0x31/0x115
[<c121e8c8>] ? copy_from_user+0x8/0xa
[<c121ebd6>] ? verify_iovec+0x3e/0x78
[<c1216604>] __sys_recvmsg+0x10a/0x1aa
[<c1216792>] ? sock_recvmsg+0x0/0xab
[<c105a99b>] ? __lock_acquire+0xbdf/0xbee
[<c12d5a99>] ? do_page_fault+0x193/0x375
[<c10d1200>] ? fcheck_files+0x9b/0xca
[<c10d1259>] ? fget_light+0x2a/0x9c
[<c1216bbb>] sys_recvmsg+0x2b/0x43
[<c1218145>] sys_socketcall+0x16d/0x1a5
[<c11679f0>] ? trace_hardirqs_on_thunk+0xc/0x10
[<c100305f>] sysenter_do_call+0x12/0x38
Code: c6 05 8c ea a8 c1 01 e8 0c d4 d9 ff 85 f6 74 07 3e ff 86 80 00 00 00 b9 17 b6 2b c1 ba 01 00 00 00 b8 78 ed 48 c1 e8 23 f6 d9 ff <ff> 76 0c 68 28 e3 30 c1 68 2d 44 41 c1 e8 89 57 01 00 83 c4 0c
Signed-off-by: James Chapman <jchapman@katalix.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit b924551bed ]
bond_alb_init_slave() is called from bond_enslave() and sets the slave's MAC
address. This is done differently for TLB and ALB modes.
bond->alb_info.rlb_enabled is used to discriminate between the two modes but
this flag may be uninitialized if the slave is being enslaved prior to calling
bond_open() -> bond_alb_initialize() on the master.
It turns out all the callers of alb_set_slave_mac_addr() pass
bond->alb_info.rlb_enabled as the hw parameter.
This patch cleans up the unnecessary parameter of alb_set_slave_mac_addr() and
makes the function decide based on the bonding mode instead, which fixes the
above problem.
Reported-by: Narendra K <Narendra_K@Dell.com>
Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 8a8ee9aff6 ]
caif is a subsystem and as such it needs to register with
register_pernet_subsys instead of register_pernet_device.
Among other problems using register_pernet_device was resulting in
net_generic being called before the caif_net structure was allocated.
This has been causing net_generic to fail with either BUG_ON's or by
returning NULL pointers.
An uglier problem that could be caused is packets in flight while the
subsystem is shutting down.
To remove confusion, also remove the cruft caused by inappropriately
trying to fix this bug.
With the aid of the previous patch I have tested this patch and
confirmed that using register_pernet_subsys makes the failure go away as
it should.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Tested-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 5ee4433efe ]
By definition net_generic should never be called when it can return
NULL. Fail conspicuously with a BUG_ON to make it clear when people mess
up that a NULL return should never happen.
Recently there was a bug in the CAIF subsystem where it was registered
with register_pernet_device instead of register_pernet_subsys. It was
erroneously concluded that net_generic could validly return NULL and
that net_assign_generic was buggy (when it was just inefficient).
Hopefully this BUG_ON will prevent people from coming to similar erroneous
conclusions in the future.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Tested-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 073862ba5d ]
When a new net namespace is created, we should attach to it a "struct
net_generic" with enough slots (even empty), or we can hit the following
BUG_ON() :
[ 200.752016] kernel BUG at include/net/netns/generic.h:40!
...
[ 200.752016] [<ffffffff825c3cea>] ? get_cfcnfg+0x3a/0x180
[ 200.752016] [<ffffffff821cf0b0>] ? lockdep_rtnl_is_held+0x10/0x20
[ 200.752016] [<ffffffff825c41be>] caif_device_notify+0x2e/0x530
[ 200.752016] [<ffffffff810d61b7>] notifier_call_chain+0x67/0x110
[ 200.752016] [<ffffffff810d67c1>] raw_notifier_call_chain+0x11/0x20
[ 200.752016] [<ffffffff821bae82>] call_netdevice_notifiers+0x32/0x60
[ 200.752016] [<ffffffff821c2b26>] register_netdevice+0x196/0x300
[ 200.752016] [<ffffffff821c2ca9>] register_netdev+0x19/0x30
[ 200.752016] [<ffffffff81c1c67a>] loopback_net_init+0x4a/0xa0
[ 200.752016] [<ffffffff821b5e62>] ops_init+0x42/0x180
[ 200.752016] [<ffffffff821b600b>] setup_net+0x6b/0x100
[ 200.752016] [<ffffffff821b6466>] copy_net_ns+0x86/0x110
[ 200.752016] [<ffffffff810d5789>] create_new_namespaces+0xd9/0x190
net_alloc_generic() should take into account the maximum index into the
ptr array, as a subsystem might use net_generic() anytime.
This also reduces the number of reallocations in net_assign_generic().
Reported-by: Sasha Levin <levinsasha928@gmail.com>
Tested-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Sjur Brændeland <sjur.brandeland@stericsson.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 15699e6faf upstream.
The probe does not strictly require the USB_CDC_DMM_TYPE
descriptor, which is a good thing as it makes the driver
usable on non-conforming interfaces. A user could e.g.
bind it to a CDC ECM interface by using the new_id and
bind sysfs files. But this would fail with a 0 buffer length
due to the missing descriptor.
Fix this by defining a reasonable fallback size: the minimum
device receive buffer size required by the CDC WMC standard,
revision 1.1.
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 655e247daf upstream.
As it turns out, there was a mismatch between the allocated inbuf size
(desc->bMaxPacketSize0, typically something like 64) and the length we
specified in the URB (desc->wMaxCommand, typically something like 2048).
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Cc: Oliver Neukum <oliver@neukum.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 62aaf24dc1 upstream.
wdm_disconnect() waits for the mutex held by wdm_read() before
calling wake_up_all(). This causes a deadlock, preventing device removal
from completing. Do the wake_up_all() before we start waiting for the locks.
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Cc: Oliver Neukum <oliver@neukum.org>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 86b2bbfdbd upstream.
Properly clamp temperature limits set by the user. Without this fix,
attempts to write temperature limits above the maximum supported by
the chip (255 degrees Celsius) would arbitrarily and unexpectedly
result in the limit being set to 0 degree Celsius.
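A rough user-space stand-in for the fix (the driver itself would use a kernel
clamping helper such as clamp_val; the function below is an invented
illustration):

#include <stdio.h>

/* clamp a requested limit into the range the 8-bit register can hold,
 * instead of letting an out-of-range value wrap around to 0 */
static long clamp_temp(long val, long lo, long hi)
{
	if (val < lo)
		return lo;
	if (val > hi)
		return hi;
	return val;
}

int main(void)
{
	printf("%ld\n", clamp_temp(300, 0, 255)); /* 255, not 0 */
	printf("%ld\n", clamp_temp(-10, 0, 255)); /* 0 */
	printf("%ld\n", clamp_temp(85, 0, 255));  /* unchanged */
	return 0;
}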
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cf840551a8 upstream.
When a TD length mismatch is found during isoc TRB enqueue, it directly
returns -EINVAL. However, isoc transfer is partially enqueued at this time,
and the ring should be cleared.
This should be backported to kernels as old as 2.6.36, which contain the
commit 522989a27c "xhci: Fix failed
enqueue in the middle of isoch TD."
Signed-off-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d0cd5d482b upstream.
The xHCI hub port code gets passed a zero-based port number by the USB
core. It then adds one to it in order to find a device slot by port number
and device speed by calling xhci_find_slot_id_by_port. That function
clearly states it requires a one-based port number. The xHCI port
status change event handler was using a zero-based port number that it
got from find_faked_portnum_from_hw_portnum, not a one-based port
number. This led to the doorbells never being rung for a device after
a resume, or worse, a different device with the same speed having its
doorbell rung (which could lead to bad power management in the xHCI host
controller).
This patch should be backported to kernels as old as 2.6.39.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Acked-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2492c6e645 upstream.
Add missing iounmap in error handling code, in a case where the function
already performs iounmap on some other execution path.
A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression e;
statement S,S1;
int ret;
@@
e = \(ioremap\|ioremap_nocache\)(...)
... when != iounmap(e)
if (<+...e...+>) S
... when any
when != iounmap(e)
*if (...)
{ ... when != iounmap(e)
return ...; }
... when any
iounmap(e);
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1097ccebe6 upstream.
This changes the max length for the USB seven segment Delcom device from
6 to 8. Delcom has both 6 and 8 variants, and using 8 works fine with
devices which only have 6.
Signed-off-by: Harrison Metzger <harrisonmetz@gmail.com>
Signed-off-by: Stuart Pook <stuart@acm.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bf9c05d5b6 upstream.
The assignment of handle in vmw_framebuffer_create_handle doesn't
actually do anything useful and is incorrectly assigning an integer
value to a pointer argument. It appears that this is a typo and should
be dereferencing handle rather than assigning to it directly. This
fixes a bug where an undefined handle value is potentially returned to
user-space.
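A hypothetical user-space sketch of this bug class, not the vmwgfx code
itself: an out-parameter has to be dereferenced for the value to reach the
caller.

#include <stdint.h>
#include <stdio.h>

static void create_handle_buggy(uint32_t *handle, uint32_t id)
{
	handle = (uint32_t *)(uintptr_t)id;	/* only rebinds the local pointer */
	(void)handle;
}

static void create_handle_fixed(uint32_t *handle, uint32_t id)
{
	*handle = id;				/* value actually reaches the caller */
}

int main(void)
{
	uint32_t h = 0xdeadbeef;		/* stands in for the undefined value */

	create_handle_buggy(&h, 42);
	printf("buggy: 0x%x\n", h);		/* still 0xdeadbeef */
	create_handle_fixed(&h, 42);
	printf("fixed: 0x%x\n", h);		/* 0x2a */
	return 0;
}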
Signed-off-by: Ryan Mallon <rmallon@gmail.com>
Reviewed-by: Jakob Bornecrantz<jakob@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 26aa38cafa upstream.
There was an error in the jsm driver that would cause it to be unable to
recover after a second error was detected.
At the first error, the device recovers properly:
[72521.485691] EEH: Detected PCI bus error on device 0003:02:00.0
[72521.485695] EEH: This PCI device has failed 1 times in the last hour:
...
[72532.035693] ttyn3 at MMIO 0x0 (irq = 49) is a jsm
[72532.105689] jsm: Port 3 added
However, at the second error, it cascades until EEH disables the device:
[72631.229549] Call Trace:
...
[72641.725687] jsm: Port 3 added
[72641.725695] EEH: Detected PCI bus error on device 0003:02:00.0
[72641.725698] EEH: This PCI device has failed 3 times in the last hour:
This was caused by the PCI state not being saved after the first
restore. Therefore, at the second recovery the PCI state would not be
restored.
Signed-off-by: Lucas Kannebley Tavares <lucaskt@linux.vnet.ibm.com>
Signed-off-by: Breno Leitao <brenohl@br.ibm.com>
Acked-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ef605fdb33 upstream.
Protect against pl011_console_write() and the interrupt for
the console UART running concurrently on different CPUs.
Otherwise the console_write could spin for a long time
waiting for the UART to become not busy, while the other
CPU continuously services UART interrupts and keeps the
UART busy.
The checks for sysrq and oops_in_progress are taken
from 8250.c.
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Reviewed-by: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Reviewed-by: Bibek Basu <bibek.basu@stericsson.com>
Reviewed-by: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0eee50af5b upstream.
Commit 74c2107759 (serial: Use block_til_ready helper) and its fixup
3f582b8c11 (serial: fix termios settings in open) introduced a
regression on UV systems. The serial eventually freezes while being
used. It's completely unpredictable and sometimes needs a heap of
traffic to happen first.
To reproduce this, yast installation was used as it turned out to be
pretty reliable in reproducing. Especially during installation process
where one doesn't have an SSH daemon running. And no monitor as the HW
is completely headless. So this was fun to find. Given the machine
doesn't boot on vanilla before 2.6.36 final. (And the commits above
are older.)
Unless there is some bad race in the code, the hardware seems to be
pretty broken. Otherwise pure MSR read should not cause such a bug,
or?
So to prevent the bug, revert to the old behavior. I.e. read modem
status only if we really have to -- for non-CLOCAL set serials.
Non-CLOCAL works on this hardware OK, I tried. See? I don't.
And document that shit.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
References: https://lkml.org/lkml/2011/12/6/573
References: https://bugzilla.novell.com/show_bug.cgi?id=718518
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d443d8499 upstream.
Calling edge_remove_sysfs_attrs from edge_disconnect is too late
as the device has already been removed from sysfs.
Do the simple and obvious thing and make edge_remove_sysfs_attrs
the port_remove method.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Reported-by: Wolfgang Frisch <wfpub@roembden.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e8537bd2c4 upstream.
Using separate read and write mutexes for locking is sufficient to make the
driver accept simultaneous read and write. This improves usability a lot.
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Cc: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c428b70c1e upstream.
wdm_in_callback() will also touch this field, so we cannot change it without locking
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Acked-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fc216ec363 upstream.
I tested this against 2.6.39 in the Ubuntu kernel; however, I see the IDs
are not in the latest 3.2 git.
This adds IDs for the FTDI controller in the Rainforest Automation
Zigbee dongle.
Signed-off-by: Peter Naulls <peter@chocky.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 108e02b129 upstream.
Fix regression introduced by commit b1ffb4c851 ("USB: Fix Corruption
issue in USB ftdi driver ftdi_sio.c") which caused the termios settings
to no longer be initialised at open. Consequently it was no longer
possible to set the port to the default speed of 9600 baud without first
changing to another baud rate and back again.
Reported-by: Roland Ramthun <mail@roland-ramthun.de>
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Tested-by: Roland Ramthun <mail@roland-ramthun.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit eb833a9e09 upstream.
Return EINVAL if new baud_base does not match the current one.
The baud_base is device specific and can not be changed. This restores
the old (pre-2005) behaviour which was changed due to a
misunderstanding regarding this fact (see
https://lkml.org/lkml/2005/1/20/84).
Reported-by: Torbjörn Lofterud <torbjorn@pi.nxs.se>
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 612539e81f upstream.
On v7, we use the same cache maintenance instructions for data lines
as for unified lines. This was not the case for v6, where HARVARD_CACHE
was defined to indicate the L1 cache topology.
This patch removes the erroneous compile-time check for HARVARD_CACHE in
proc-v7.S, ensuring that we perform I-side invalidation at boot.
Reported-and-Acked-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Catalin Marinas <Catalin.Marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f2c0d0266c upstream.
syslog-ng versions before 3.3.0beta1 (2011-05-12) assume that
CAP_SYS_ADMIN is sufficient to access syslog, so ever since CAP_SYSLOG
was introduced (2010-11-25) they have triggered a warning.
Commit ee24aebffb ("cap_syslog: accept CAP_SYS_ADMIN for now")
improved matters a little by making syslog-ng work again, just keeping
the WARN_ONCE(). But still, this is a warning that writes a stack trace
we don't care about to syslog, sets a taint flag, and alarms sysadmins
when nothing worse has happened than use of an old userspace with a
recent kernel.
Convert the WARN_ONCE to a printk_once to avoid that while continuing to
give userspace developers a hint that this is an unwanted
backward-compatibility feature and won't be around forever.
Reported-by: Ralf Hildebrandt <ralf.hildebrandt@charite.de>
Reported-by: Niels <zorglub_olsen@hotmail.com>
Reported-by: Paweł Sikora <pluto@agmk.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Liked-by: Gergely Nagy <algernon@madhouse-project.org>
Acked-by: Serge Hallyn <serge@hallyn.com>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3b25eb690e upstream.
The refactoring of Realtek codec driver in 3.2 kernel caused a
regression for ASUS A6Rp laptop; it doesn't give any output.
The reason was that this machine has a secret master mute (or EAPD)
control via NID 0x0f VREF. Setting VREF50 on this node makes the
sound work again.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42588
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5b68edc91c upstream.
We've decided to provide CPU family specific container files
(starting with CPU family 15h). E.g. for family 15h we have to
load microcode_amd_fam15h.bin instead of microcode_amd.bin.
The rationale is that starting with family 15h the patch size is larger
than 2KB, which was hard coded as the maximum patch size in various
microcode loaders (not just Linux).
Container files which include patches larger than 2KB cause
different kinds of trouble with such old patch loaders. Thus we
have to ensure that the default container file provides only
patches with size less than 2KB.
Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: <stable@kernel.org>
Link: http://lkml.kernel.org/r/20120120164412.GD24508@alberich.amd.com
[ documented the naming convention and tidied the code a bit. ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b1c770c273 upstream
When finding the longest extent in an AG, we read the value directly
out of the AGF buffer without endian conversion. This will give an
incorrect length, resulting in FITRIM operations potentially not
trimming everything that they should.
Note, for 3.0-stable this has been modified to apply to
fs/xfs/linux-2.6/xfs_discard.c instead of fs/xfs/xfs_discard.c. -bpm
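A user-space sketch of why the raw read goes wrong; the in-kernel fix would
use be32_to_cpu() on the on-disk AGF field, while the helper and values below
are illustrative only:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* interpret 4 bytes as a big-endian value regardless of host endianness */
static uint32_t read_be32(const uint8_t *p)
{
	return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
	       (uint32_t)p[2] << 8  | (uint32_t)p[3];
}

int main(void)
{
	/* on-disk big-endian encoding of a longest free extent of 256 blocks */
	uint8_t raw[4] = { 0x00, 0x00, 0x01, 0x00 };
	uint32_t no_conversion;

	memcpy(&no_conversion, raw, sizeof(no_conversion));
	printf("raw read:  %u blocks\n", no_conversion);  /* 65536 on little-endian */
	printf("converted: %u blocks\n", read_be32(raw)); /* 256 everywhere */
	return 0;
}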
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4b90a603a1 upstream.
When the ahash driver returns -EBUSY, AH4/6 input functions return
NET_XMIT_DROP, presumably copied from the output code path. But
returning transmit codes on input doesn't make a lot of sense.
Since NET_XMIT_DROP is a positive int, this gets interpreted as
the next header type (i.e., success). As that can only end badly,
remove the check.
Signed-off-by: Nick Bowler <nbowler@elliptictech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 30fb6aa740 upstream.
Multiple users of the function tracer can register their functions
with the ftrace_ops structure. The accounting within ftrace will
update the counter on each function record that is being traced.
When the ftrace_ops filtering adds or removes functions, the
function records will be updated accordingly if the ftrace_ops is
still registered.
When an ftrace_ops is removed, the counters of the function records
that the ftrace_ops traces are decremented. When they reach zero,
the functions that they represent are modified to stop calling the
mcount code.
When changes are made, the code is updated via stop_machine() with
a command passed to the function to tell it what to do. There are
ENABLE and DISABLE commands that tell the called function to enable
or disable the functions. But ENABLE is really a misnomer, as it
should just update the records: records that have been enabled and
now have a count of zero should be disabled.
The DISABLE command is used to disable all functions regardless of
their counter values. This is the big off switch and is not the
complement of the ENABLE command.
To make matters worse, when an ftrace_ops is unregistered and there
is another ftrace_ops registered, neither the DISABLE nor the
ENABLE command is set when calling into the stop_machine() function,
and the records will not be updated to match their counters. A command
is passed to that function that will update the mcount code to call
the registered callback directly if it is the only one left. This
means that the ftrace_ops that is still registered will have its callback
called by all functions that have been set for it as well as the ftrace_ops
that was just unregistered.
Here's a way to trigger this bug. Compile the kernel with
CONFIG_FUNCTION_PROFILER set and with CONFIG_FUNCTION_GRAPH not set:
CONFIG_FUNCTION_PROFILER=y
# CONFIG_FUNCTION_GRAPH is not set
This will force the function profiler to use the function tracer instead
of the function graph tracer.
# cd /sys/kernel/debug/tracing
# echo schedule > set_ftrace_filter
# echo function > current_tracer
# cat set_ftrace_filter
schedule
# cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 692/68108025 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
kworker/0:2-909 [000] .... 531.235574: schedule <-worker_thread
<idle>-0 [001] .N.. 531.235575: schedule <-cpu_idle
kworker/0:2-909 [000] .... 531.235597: schedule <-worker_thread
sshd-2563 [001] .... 531.235647: schedule <-schedule_hrtimeout_range_clock
# echo 1 > function_profile_enabled
# echo 0 > function_profile_enabled
# cat set_ftrace_filter
schedule
# cat trace
# tracer: function
#
# entries-in-buffer/entries-written: 159701/118821262 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
<idle>-0 [002] ...1 604.870655: local_touch_nmi <-cpu_idle
<idle>-0 [002] d..1 604.870655: enter_idle <-cpu_idle
<idle>-0 [002] d..1 604.870656: atomic_notifier_call_chain <-enter_idle
<idle>-0 [002] d..1 604.870656: __atomic_notifier_call_chain <-atomic_notifier_call_chain
The same problem could have happened with the trace_probe_ops,
but they are modified with the set_ftrace_filter file, which does the
update when the file is closed.
The simple solution is to change ENABLE to UPDATE and call it every
time an ftrace_ops is unregistered.
Link: http://lkml.kernel.org/r/1323105776-26961-3-git-send-email-jolsa@redhat.com
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 072126f452 upstream.
Currently, if set_ftrace_filter() is called when the ftrace_ops is
active, the function filters will not be updated. They will only be updated
when tracing is disabled and re-enabled.
Update the functions immediately during set_ftrace_filter().
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 41fb61c2d0 upstream.
Whenever the hash of the ftrace_ops is updated, the record counts
must be balanced. This requires disabling the records that are set
in the original hash, and then enabling the records that are set
in the updated hash.
Moving the update into ftrace_hash_move() removes the bug where the
hash was updated but the records were not, which results in ftrace
triggering a warning and disabling itself because the ftrace_ops filter
is updated while the ftrace_ops was registered, and then the failure
happens when the ftrace_ops is unregistered.
The current code will not trigger this bug, but new code will.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 51fc6dc8f9 upstream.
For rounds 16--79, W[i] only depends on W[i - 2], W[i - 7], W[i - 15] and W[i - 16].
Consequently, keeping the whole W[80] array on the stack is unnecessary;
only 16 values are really needed.
Using W[16] instead of W[80] greatly reduces stack usage
(~750 bytes to ~340 bytes on x86_64).
Line by line explanation:
* BLEND_OP
array is "circular" now, all indexes have to be modulo 16.
Round number is positive, so remainder operation should be
without surprises.
* initial full message scheduling is trimmed to first 16 values which
come from data block, the rest is calculated before it's needed.
* original loop body is unrolled version of new SHA512_0_15 and
SHA512_16_79 macros, unrolling was done to not do explicit variable
renaming. Otherwise it's the very same code after preprocessing.
See sha1_transform() code which does the same trick.
The patch survives the in-tree crypto test and the original bug report test
(ping flood with hmac(sha512)).
See FIPS 180-2 for SHA-512 definition
http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf
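An illustrative stand-alone rendering of the circular-schedule idea (the
small sigma functions follow FIPS 180-2; this is a sketch, not the kernel's
sha512_generic.c code, and the initial W values are arbitrary):

#include <stdint.h>
#include <stdio.h>

static uint64_t ror64(uint64_t x, unsigned int n)
{
	return (x >> n) | (x << (64 - n));
}

/* small sigma functions as defined in FIPS 180-2 for SHA-512 */
static uint64_t s0(uint64_t x) { return ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7); }
static uint64_t s1(uint64_t x) { return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6); }

int main(void)
{
	/* only a 16-entry window is kept; in the real code the first 16
	 * words come from the input block */
	uint64_t W[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };

	for (int i = 16; i < 80; i++)
		/* the BLEND_OP idea: every index is taken modulo 16, so
		 * W[i] overwrites the slot of W[i - 16], which is no
		 * longer needed */
		W[i & 15] += s1(W[(i - 2) & 15]) + W[(i - 7) & 15] +
			     s0(W[(i - 15) & 15]);

	printf("W[79 & 15] = %llu\n", (unsigned long long)W[79 & 15]);
	return 0;
}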
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 84e31fdb7c upstream.
commit f9e2bca6c2
aka "crypto: sha512 - Move message schedule W[80] to static percpu area"
created a global message schedule area.
If sha512_update is ever entered twice, the hash will be silently
calculated incorrectly.
Probably the easiest way to notice incorrect hashes being calculated is
to run 2 ping floods over AH with hmac(sha512):
#!/usr/sbin/setkey -f
flush;
spdflush;
add IP1 IP2 ah 25 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000025;
add IP2 IP1 ah 52 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000052;
spdadd IP1 IP2 any -P out ipsec ah/transport//require;
spdadd IP2 IP1 any -P in ipsec ah/transport//require;
XfrmInStateProtoError will start ticking with -EBADMSG being returned
from ah_input(). This never happens with, say, hmac(sha1).
With the patch applied (on BOTH sides), XfrmInStateProtoError does not tick
with multiple bidirectional ping flood streams, just as it doesn't tick
with SHA-1.
After this patch sha512_transform() will start using ~750 bytes of stack on x86_64.
This is OK for simple loads; for something heavier, stack reduction will be
done separately.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 598781d711 upstream.
If the master tries to authenticate a client using drm_authmagic and
that client has already closed its drm file descriptor,
either wilfully or because it was terminated, the
call to drm_authmagic will dereference a stale pointer into kmalloc'ed memory
and corrupt it.
Typically this results in a hard system hang.
This patch fixes that problem by removing any authentication tokens
(struct drm_magic_entry) open for a file descriptor when that file
descriptor is closed.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 58ded24f0f upstream.
If pages passed to the eCryptfs extent-based crypto functions are not
mapped and the module parameter ecryptfs_verbosity=1 was specified at
loading time, a NULL pointer dereference will occur.
Note that this wouldn't happen on a production system, as you wouldn't
pass ecryptfs_verbosity=1 on a production system. It leaks private
information to the system logs and is for debugging only.
The debugging info printed in these messages is no longer very useful
and rather than doing a kmap() in these debugging paths, it will be
better to simply remove the debugging paths completely.
https://launchpad.net/bugs/913651
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a261a03904 upstream.
Most filesystems call inode_change_ok() very early in ->setattr(), but
eCryptfs didn't call it at all. It allowed the lower filesystem to make
the call in its ->setattr() function. Then, eCryptfs would copy the
appropriate inode attributes from the lower inode to the eCryptfs inode.
This patch changes that and actually calls inode_change_ok() on the
eCryptfs inode, fairly early in ecryptfs_setattr(). Ideally, the call
would happen earlier in ecryptfs_setattr(), but there are some possible
inode initialization steps that must happen first.
Since the call was already being made on the lower inode, the change in
functionality should be minimal, except for the case of a file extending
truncate call. In that case, inode_newsize_ok() was never being
called on the eCryptfs inode. Rather than inode_newsize_ok() catching
maximum file size errors early on, eCryptfs would encrypt zeroed pages
and write them to the lower filesystem until the lower filesystem's
write path caught the error in generic_write_checks(). This patch
introduces a new function, called ecryptfs_inode_newsize_ok(), which
checks if the new lower file size is within the appropriate limits when
the truncate operation will be growing the lower file.
In summary this change prevents eCryptfs truncate operations (and the
resulting page encryptions), which would exceed the lower filesystem
limits or FSIZE rlimits, from ever starting.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Reviewed-by: Li Wang <liwang@nudt.edu.cn>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e6f0d7690 upstream.
ecryptfs_write() handles the truncation of eCryptfs inodes. It grabs a
page, zeroes out the appropriate portions, and then encrypts the page
before writing it to the lower filesystem. It was unkillable and due to
the lack of sparse file support could result in tying up a large portion
of system resources, while encrypting pages of zeros, with no way for
the truncate operation to be stopped from userspace.
This patch adds the ability for ecryptfs_write() to detect a pending
fatal signal and return as gracefully as possible. The intent is to
leave the lower file in a usable state, while still allowing a user to
break out of the encryption loop. If a pending fatal signal is detected,
the eCryptfs inode size is updated to reflect the modified inode size
and then -EINTR is returned.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 30373dc0c8 upstream.
Print inode on metadata read failure. The only real
way of dealing with metadata read failures is to delete
the underlying file system file. Having the inode
allows one to `find . -inum INODE`.
[tyhicks@canonical.com: Removed some minor not-for-stable parts]
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit db10e55651 upstream.
A malicious count value specified when writing to /dev/ecryptfs may
result in a very large kernel memory allocation.
This patch peeks at the specified packet payload size, adds that to the
size of the packet headers and compares the result with the write count
value. The resulting maximum memory allocation size is approximately 532
bytes.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Reported-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b4ead019af upstream.
The recent change of the power-widget handling for IDT codecs caused
silent output from the docking-station line-out jack. This was
partially fixed by the commit f2cbba7602
"ALSA: hda - Fix the lost power-setup of seconary pins after PM resume".
But the line-out on the docking-station is still silent when booted
with the jack plugged even by this fix.
The remaining bug is that the power-widget is set off in stac92xx_init()
because the pins in cfg->line_out_pins[] aren't checked there properly
but only hp_pins[] are checked in is_nid_hp_pin().
This patch fixes the problem by checking both HP and line-out pins
and setting the power-map correctly.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42637
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1f5d78dc48 upstream.
We switched to dynamic debugging in commit
56e46742e8 but did not take into account that
we no longer control whether a specific message is enabled or not.
So now we take the "dbg_lock" and release it in every debugging macro, which
makes them not so light-weight.
This commit removes the "dbg_lock" protection from the debugging macros to
fix the issue.
The downside is that now our DBGKEY() stuff is broken, but this is not
critical at all and will be fixed later.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 687875fb7d upstream.
Fix the following NULL ptr dereference caused by
cat /sys/devices/system/memory/memory0/removable
Pid: 13979, comm: sed Not tainted 3.0.13-0.5-default #1 IBM BladeCenter LS21 -[7971PAM]-/Server Blade
RIP: __count_immobile_pages+0x4/0x100
Process sed (pid: 13979, threadinfo ffff880221c36000, task ffff88022e788480)
Call Trace:
is_pageblock_removable_nolock+0x34/0x40
is_mem_section_removable+0x74/0xf0
show_mem_removable+0x41/0x70
sysfs_read_file+0xfe/0x1c0
vfs_read+0xc7/0x130
sys_read+0x53/0xa0
system_call_fastpath+0x16/0x1b
We are crashing because we are trying to dereference NULL zone which
came from pfn=0 (struct page ffffea0000000000). According to the boot
log this page is marked reserved:
e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
and early_node_map confirms that:
early_node_map[3] active PFN ranges
1: 0x00000010 -> 0x0000009c
1: 0x00000100 -> 0x000bffa3
1: 0x00100000 -> 0x00240000
The problem is that memory_present works in PAGE_SECTION_MASK aligned
blocks so the reserved range sneaks into the section as well. This
also means that free_area_init_node will not take care of those reserved
pages and they stay uninitialized.
When we try to read the removable status we walk through all available
sections and hope that the zone is valid for all pages in the section.
But this is not true in this case as the zone and nid are not initialized.
We have only one node in this particular case and it is marked as node=1
(rather than 0) and that made the problem visible because page_to_nid will
return 0 and there are no zones on the node.
Let's check that the zone is valid and that the given pfn falls into its
boundaries and mark the section not removable. This might cause some
false positives, probably, but we do not have any sane way to find out
whether the page is reserved by the platform or it is just not used for
whatever other reasons.
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 85e72aa538 upstream.
/proc/pid/clear_refs is used to clear the Referenced and YOUNG bits for
pages and corresponding page table entries of the task with PID pid, which
includes any special mappings inserted into the page tables in order to
provide things like vDSOs and user helper functions.
On ARM this causes a problem because the vectors page is mapped as a
global mapping and since ec706dab ("ARM: add a vma entry for the user
accessible vector page"), a VMA is also inserted into each task for this
page to aid unwinding through signals and syscall restarts. Since the
vectors page is required for handling faults, clearing the YOUNG bit (and
subsequently writing a faulting pte) means that we lose the vectors page
*globally* and cannot fault it back in. This results in a system deadlock
on the next exception.
To see this problem in action, just run:
$ echo 1 > /proc/self/clear_refs
on an ARM platform (as any user) and watch your system hang. I think this
has been the case since 2.6.37.
This patch avoids clearing the aforementioned bits for reserved pages,
therefore leaving the vectors page intact on ARM. Since reserved pages
are not candidates for swap, this change should not have any impact on the
usefulness of clear_refs.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Reported-by: Moussa Ba <moussaba@micron.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c25a785d66 upstream.
If the provided system call number is equal to __NR_syscalls, the
current check will pass and a function pointer just after the system
call table may be called, since sys_call_table is an array with total
size __NR_syscalls.
Whether or not this is a security bug depends on what the compiler puts
immediately after the system call table. It's likely that this won't do
anything bad because there is an additional NULL check on the syscall
entry, but if there happens to be a non-NULL value immediately after the
system call table, this may result in local privilege escalation.
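A hypothetical user-space sketch of the off-by-one (table size, names and the
exact form of the buggy check are invented for illustration):

#include <stdio.h>

#define NR_SYSCALLS 3u

static int sys_a(void) { return 0; }
static int sys_b(void) { return 1; }
static int sys_c(void) { return 2; }

/* exactly NR_SYSCALLS entries; whatever the linker places right after
 * the table is out of bounds */
static int (*const sys_call_table[NR_SYSCALLS])(void) = { sys_a, sys_b, sys_c };

int main(void)
{
	unsigned int nr = 3;	/* == NR_SYSCALLS */

	/* a too-permissive bound check lets nr == NR_SYSCALLS through ... */
	if (nr <= NR_SYSCALLS)
		printf("buggy check: would index slot %u of a %u-entry table\n",
		       nr, NR_SYSCALLS);

	/* ... while the strict check rejects it */
	if (nr < NR_SYSCALLS)
		printf("fixed check: dispatching syscall %u\n", nr);
	else
		printf("fixed check: rejected out-of-range syscall %u\n", nr);
	return 0;
}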
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Eugene Teo <eugeneteo@kernel.sg>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f42af6c486 upstream.
Since commit
"7488876... dt/net: Eliminate users of of_platform_{,un}register_driver"
there are two platform drivers named "mdio-gpio" registered.
I renamed the OF variant to "mdio-ofgpio".
Signed-off-by: Dirk Eibach <eibach@gdsys.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fe0fe83585 upstream.
As mandated by the standard. In case of an IO error, a pNFS
objects layout driver must return its layout. This is because
all device errors are reported to the server as part of the
layout return buffer.
This is implemented the same way PNFS_LAYOUTRET_ON_SETATTR
is done, through a bit flag on the pnfs_layoutdriver_type->flags
member. The flag is set by the layout driver that wants a
layout_return performed at pnfs_ld_{write,read}_done in case
of an error.
(Though I have not defined a wrapper like pnfs_ld_layoutret_on_setattr
because this code is never called outside of pnfs.c and pnfs IO
paths)
Without this patch 3.[0-2] Kernels leak memory and have an annoying
WARN_ON after every IO error utilizing the pnfs-obj driver.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5c0b4129c0 upstream.
Some time along the way pNFS IO errors were switched to
communicate with a special iodata->pnfs_error member instead
of the regular RPC members. But objlayout was not switched
over.
Fix that!
Without this fix any IO error causes a hang, because IO is not
switched to the MDS and pages are never cleared or read.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dfd00c4c8f upstream.
Some devices can generate an interrupt without properly setting a bit in
the INT_SOURCE_CSR register (a spurious interrupt), which will cause the
IRQ line to be disabled by the interrupt controller driver.
We discovered that clearing INT_MASK_CSR stops such behaviour. We
previously first read that register, then cleared all known interrupt
source bits and did not touch reserved bits. After this patch, we write
the whole register content (I believe writing to reserved bits on that
register will not cause any problems; I tested that on my rt2800pci
device).
This fixes a very bad performance problem, practically making the device
unusable (since it worked without interrupts), reported in:
https://bugzilla.redhat.com/show_bug.cgi?id=658451
We previously tried to work around that issue in commit
4ba7d99978 "rt2800pci: handle spurious
interrupts", but it was reverted in commit
82e5fc2a34
as something that would prevent detecting real spurious interrupts.
Reported-and-tested-by: Amir Hedayaty <hedayaty@gmail.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d059f9fa84 upstream.
Move the call to enable_timeouts() forward so that
BAU_MISC_CONTROL is initialized before using it in
calculate_destination_timeout().
Fix the calculation of a BAU destination timeout
for UV2 (in calculate_destination_timeout()).
Signed-off-by: Cliff Wickman <cpw@sgi.com>
Link: http://lkml.kernel.org/r/20120116211848.GB5767@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2727b17539 upstream.
Correct OMAP_I2C_SYSC_REG offset in omap4 register map.
Offset 0x20 is reserved and OMAP_I2C_SYSC_REG has 0x10 as offset.
Signed-off-by: Alexander Aring <a.aring@phytec.de>
[khilman@ti.com: minor changelog edits]
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 895f302252 upstream.
The target code was not setting the additional sense length field in the
sense data it returned, which meant that at least the Linux stack
ignored the ASC/ASCQ fields. For example, without this patch, on a
tcm_loop device:
# sg_raw -v /dev/sda 2 0 0 0 0 0
gives
cdb to send: 02 00 00 00 00 00
SCSI Status: Check Condition
Sense Information:
Fixed format, current; Sense key: Illegal Request
Raw sense data (in hex):
70 00 05 00 00 00 00 00
while after the patch we correctly get the following (which matches what
a regular disk returns):
cdb to send: 02 00 00 00 00 00
SCSI Status: Check Condition
Sense Information:
Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
Raw sense data (in hex):
70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00
00 00
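The raw dumps above map directly onto the fixed-format sense layout; a small
user-space sketch building the corrected buffer (offsets per SPC, values
copied from the hex dump above; this is not the target core's own code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint8_t sense[18];

	memset(sense, 0, sizeof(sense));
	sense[0]  = 0x70;	/* fixed format, current error */
	sense[2]  = 0x05;	/* sense key: ILLEGAL REQUEST */
	sense[7]  = 0x0a;	/* additional sense length: 10 more bytes follow;
				 * leaving this 0 is what made initiators ignore
				 * the ASC/ASCQ bytes below */
	sense[12] = 0x20;	/* ASC: INVALID COMMAND OPERATION CODE */
	sense[13] = 0x00;	/* ASCQ */

	for (size_t i = 0; i < sizeof(sense); i++)
		printf("%02x ", sense[i]);
	printf("\n");
	return 0;
}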
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ce136176fe upstream.
Current SCSI specs say that the "response format" field in the standard
INQUIRY response should be set to 2, and all the real SCSI devices I
have do put 2 here. So let's do that too.
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cced5041ed upstream.
sym53c8xx_slave_destroy unconditionally assumes that sym53c8xx_slave_alloc has
successfully allocated a sym_lcb. This can lead to a NULL pointer dereference
(exposed by commit 4e6c82b).
Signed-off-by: Stratos Psomadakis <psomas@gentoo.org>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d640113fe8 upstream.
For a UP processor, it is likely that no _MAT method or MADT table is
defined. So currently acpi_get_cpuid(...) always returns -1 for a UP
processor. This is wrong. It should return a valid value for CPU0.
On the other hand, the BIOS may define multiple CPU handles even for a UP
processor, for example:
Scope (_PR)
{
Processor (CPU0, 0x00, 0x00000410, 0x06) {}
Processor (CPU1, 0x01, 0x00000410, 0x06) {}
Processor (CPU2, 0x02, 0x00000410, 0x06) {}
Processor (CPU3, 0x03, 0x00000410, 0x06) {}
}
We should only return a valid value for CPU0's acpi handle,
and return an invalid value for the others.
http://marc.info/?t=132329819900003&r=1&w=2
Reported-and-tested-by: wallak@free.fr
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit da4d8b287a upstream.
The call to acpi_os_validate_address in acpi_ds_get_region_arguments was
removed by mistake in commit 9ad19ac (ACPICA: Split large dsopcode and
dsload.c files).
Put it back.
Reported-and-bisected-by: Luca Tettamanti <kronos.it@gmail.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9f10f6a520 upstream.
In SRAT v1, we had 8bit proximity domain (PXM) fields; SRAT v2 provides
32bits for these. The new fields were reserved before.
According to the ACPI spec, the OS must disregard reserved fields.
ia64 did handle the PXM fields almost consistently, but depending on
sgi's sn2 platform. This patch leaves the sn2 logic in, but does also
use 16/32 bits for PXM if the SRAT has rev 2 or higher.
The patch also adds __init to the two pxm accessor functions, as they
access __initdata now and are called from an __init function only anyway.
Note that the code only uses 16 bits for the PXM field in the processor
proximity field; the patch does not address this as 16 bits are more than
enough.
Signed-off-by: Kurt Garloff <kurt@garloff.de>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cd298f60a2 upstream.
In SRAT v1, we had 8bit proximity domain (PXM) fields; SRAT v2 provides
32bits for these. The new fields were reserved before.
According to the ACPI spec, the OS must disregard reserved fields.
x86/x86-64 was rather inconsistent prior to this patch; it used 8 bits
for the pxm field in cpu_affinity, but 32 bits in mem_affinity.
This patch makes it consistent: Either use 8 bits consistently (SRAT
rev 1 or lower) or 32 bits (SRAT rev 2 or higher).
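A stand-alone sketch of the idea (the structure layout is simplified and the
names are invented; the real parsing lives in the arch ACPI NUMA code):

#include <stdint.h>
#include <stdio.h>

/* simplified stand-in for the SRAT processor affinity structure: an 8-bit
 * PXM field plus three bytes that were reserved in rev 1 */
struct cpu_affinity {
	uint8_t proximity_domain_lo;
	uint8_t proximity_domain_hi[3];
};

static uint32_t get_pxm(const struct cpu_affinity *pa, int srat_revision)
{
	uint32_t pxm = pa->proximity_domain_lo;

	/* only trust the formerly-reserved bytes on SRAT rev 2 or higher */
	if (srat_revision >= 2)
		pxm |= (uint32_t)pa->proximity_domain_hi[0] << 8 |
		       (uint32_t)pa->proximity_domain_hi[1] << 16 |
		       (uint32_t)pa->proximity_domain_hi[2] << 24;
	return pxm;
}

int main(void)
{
	struct cpu_affinity pa = {
		.proximity_domain_lo = 0x12,
		.proximity_domain_hi = { 0x34, 0x00, 0x00 },
	};

	printf("rev 1: pxm=0x%x\n", get_pxm(&pa, 1));	/* 0x12 */
	printf("rev 2: pxm=0x%x\n", get_pxm(&pa, 2));	/* 0x3412 */
	return 0;
}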
cc: x86@kernel.org
Signed-off-by: Kurt Garloff <kurt@garloff.de>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8df0eb7c9d upstream.
In SRAT v1, we had 8bit proximity domain (PXM) fields; SRAT v2 provides
32bits for these. The new fields were reserved before.
According to the ACPI spec, the OS must disregard reserved fields.
In order to know whether or not to do so, we must know what version the SRAT
table has.
This patch stores the SRAT table revision for later consumption
by arch specific __init functions.
Signed-off-by: Kurt Garloff <kurt@garloff.de>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 39a74fdedd upstream.
smp_call_function() only lets all other CPUs execute a specific function,
while we expect all CPUs to do so in intel_idle. Without the fix, we could
have one CPU which has auto_demotion enabled or has no broadcast timer
setup. Usually we don't see an impact because auto demotion just harms
power and the intel_idle init is called on CPU 0, where the broadcast timer
delivers interrupts, but this still could be a problem.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5c2a9f06a9 upstream.
kvm -cpu host passes the original cpuid info to the guest.
The latest kvm versions seem to return true for the mwait_leaf cpuid
function on recent Intel CPUs. But they do not return mwait
C-states (mwait_substates); instead zero is returned.
While real CPUs seem to always return non-zero values, the intel_idle
driver should not become active in the kvm (mwait_substates == 0)
case and should bail out.
Otherwise a Null pointer exception will happen later when the
cpuidle subsystem tries to get active:
[0.984807] BUG: unable to handle kernel NULL pointer dereference at (null)
[0.984807] IP: [<(null)>] (null)
...
[0.984807][<ffffffff8143cf34>] ? cpuidle_idle_call+0xb4/0x340
[0.984807][<ffffffff8159e7bc>] ? __atomic_notifier_call_chain+0x4c/0x70
[0.984807][<ffffffff81001198>] ? cpu_idle+0x78/0xd0
Reference:
https://bugzilla.novell.com/show_bug.cgi?id=726296
Signed-off-by: Thomas Renninger <trenn@suse.de>
CC: Bruno Friedmann <bruno@ioda-net.ch>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f0e48b6bd4 upstream.
The two DACs for the front output and the surround/center/LFE/back
outputs are wired up out of phase, so when channels are duplicated,
their sound can cancel out each other and result in a weaker bass
response. To fix this, reverse the polarity of the neutron flow to
the front output.
Reported-and-tested-by: Daniel Hill <daniel@enemyplanet.geek.nz>
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e268337dfe upstream.
Jüri Aedla reported that the /proc/<pid>/mem handling really isn't very
robust, and it also doesn't match the permission checking of any of the
other related files.
This changes it to do the permission checks at open time, and instead of
tracking the process, it tracks the VM at the time of the open. That
simplifies the code a lot, but does mean that if you hold the file
descriptor open over an execve(), you'll continue to read from the _old_
VM.
That is different from our previous behavior, but much simpler. If
somebody actually finds a load where this matters, we'll need to revert
this commit.
I suspect that nobody will ever notice - because the process mapping
addresses will also have changed as part of the execve. So you cannot
actually usefully access the fd across a VM change simply because all
the offsets for IO would have changed too.
Reported-by: Jüri Aedla <asd@ut.ee>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0bfc96cb77 upstream.
[ Changes with respect to 3.3: return -ENOTTY from scsi_verify_blk_ioctl
and -ENOIOCTLCMD from sd_compat_ioctl. ]
Linux allows executing the SG_IO ioctl on a partition or LVM volume, and
will pass the command to the underlying block device. This is
well-known, but it is also a large security problem when (via Unix
permissions, ACLs, SELinux or a combination thereof) a program or user
needs to be granted access only to part of the disk.
This patch lets partitions forward a small set of harmless ioctls;
others are logged with printk so that we can see which ioctls are
actually sent. In my tests only CDROM_GET_CAPABILITY actually occurred.
Of course it was being sent to a (partition on a) hard disk, so it would
have failed with ENOTTY and the patch isn't changing anything in
practice. Still, I'm treating it specially to avoid spamming the logs.
In principle, this restriction should include programs running with
CAP_SYS_RAWIO. If for example I let a program access /dev/sda2 and
/dev/sdb, it still should not be able to read/write outside the
boundaries of /dev/sda2 independent of the capabilities. However, for
now programs with CAP_SYS_RAWIO will still be allowed to send the
ioctls. Their actions will still be logged.
This patch does not affect the non-libata IDE driver. That driver
however already tests for bd != bd->bd_contains before issuing some
ioctl; it could be restricted further to forbid these ioctls even for
programs running with CAP_SYS_ADMIN/CAP_SYS_RAWIO.
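A rough sketch of the whitelisting idea described above (the function name,
the exact set of forwarded ioctls, and the logging details are illustrative,
not the real patch):

/* sketch: refuse "dangerous" SCSI ioctls when issued on a partition */
static int verify_partition_ioctl(struct block_device *bd, unsigned int cmd)
{
	if (!bd || bd == bd->bd_contains)
		return 0;			/* whole device: allow as before */

	switch (cmd) {
	case CDROM_GET_CAPABILITY:		/* harmless, seen in practice */
		return 0;
	default:
		if (capable(CAP_SYS_RAWIO))
			return 0;		/* still allowed for now, but logged */
		printk_ratelimited(KERN_WARNING
			"%s: rejecting ioctl 0x%x on a partition\n",
			current->comm, cmd);
		return -ENOTTY;
	}
}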
Cc: linux-scsi@vger.kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Cc: James Bottomley <JBottomley@parallels.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[ Make it also print the command name when warning - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c3e0ef9a29 upstream.
For 32-bit architectures using standard jiffies the idletime calculation
in uptime_proc_show will quickly overflow. It takes (2^32 / HZ) seconds
of idle-time, or e.g. 12.45 days with no load on a quad-core with HZ=1000.
Switch to 64-bit calculations.
Cc: Michael Abbott <michael.abbott@diamond.ac.uk>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 66f06127f3 upstream.
Just another eGalax device.
Please note that adding this device to have_special_driver
in hid-core.c is not required anymore.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@enac.fr>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb9ff21072 upstream.
This patch adds the USB ID for the touchpanel in the Acer Iconia W500. The panel
supports up to five fingers, which requires adding a new panel type.
Signed-off-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@enac.fr>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e36f690b37 upstream.
This is just a renaming of USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH{N}
to USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_{PID} to handle more eGalax
devices.
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@enac.fr>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b7ea81a58a upstream.
The AH4/6 ahash input callbacks read out the nexthdr field from the AH
header *after* they overwrite that header. This is obviously not going
to end well. Fix it up.
Signed-off-by: Nick Bowler <nbowler@elliptictech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 069294e813 upstream.
The AH4/6 ahash output callbacks pass nexthdr to xfrm_output_resume
instead of the error code. This appears to be a copy+paste error from
the input case, where nexthdr is expected. This causes the driver to
continuously add AH headers to the datagram until either an allocation
fails and the packet is dropped or the ahash driver hits a synchronous
fallback and the resulting monstrosity is transmitted.
Correct this issue by simply passing the error code unadulterated.
Signed-off-by: Nick Bowler <nbowler@elliptictech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eaf5f90735 upstream.
Two (or more) concurrent calls of shrink_dcache_parent() on the same dentry may
cause shrink_dcache_parent() to loop forever.
Here's what appears to happen:
1 - CPU0: select_parent(P) finds C and puts it on dispose list, returns 1
2 - CPU1: select_parent(P) locks P->d_lock
3 - CPU0: shrink_dentry_list() locks C->d_lock
dentry_kill(C) tries to lock P->d_lock but fails, unlocks C->d_lock
4 - CPU1: select_parent(P) locks C->d_lock,
moves C from dispose list being processed on CPU0 to the new
dispose list, returns 1
5 - CPU0: shrink_dentry_list() finds dispose list empty, returns
6 - Goto 2 with CPU0 and CPU1 switched
Basically select_parent() steals the dentry from shrink_dentry_list() and thinks
it found a new one, causing shrink_dentry_list() to think it's making progress
and loop over and over.
One way to trigger this is to have udev call stat() on a sysfs file while it
is going away.
A file in /lib/udev/rules.d/ containing only this one rule seems to do the trick:
ATTR{vendor}=="0x8086", ATTR{device}=="0x10ca", ENV{PCI_SLOT_NAME}="%k", ENV{MATCHADDR}="$attr{address}", RUN+="/bin/true"
Then execute the following loop:
while true; do
echo -bond0 > /sys/class/net/bonding_masters
echo +bond0 > /sys/class/net/bonding_masters
echo -bond1 > /sys/class/net/bonding_masters
echo +bond1 > /sys/class/net/bonding_masters
done
One fix would be to check all callers and prevent concurrent calls to
shrink_dcache_parent(). But I think a better solution is to stop the
stealing behavior.
This patch adds a new dentry flag that is set when the dentry is added to the
dispose list. The flag is cleared in dentry_lru_del() in case the dentry gets a
new reference just before being pruned.
If the dentry has this flag, select_parent() will skip it and let
shrink_dentry_list() retry pruning it. With select_parent() skipping those
dentries there will not be the appearance of progress (new dentries found) when
there is none, hence shrink_dcache_parent() will not loop forever.
The flag is also set in prune_dcache_sb() for consistency, as suggested by
Linus.
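As a sketch, the skip logic in select_parent() around the dispose-list
handling could look roughly like this (the flag name follows the description
above; details are simplified):

/* sketch: never steal a dentry that is already on someone's dispose list */
if (dentry->d_flags & DCACHE_SHRINK_LIST)
	continue;			/* let shrink_dentry_list() retry it */

dentry->d_flags |= DCACHE_SHRINK_LIST;	/* set when queued for disposal */
list_move_tail(&dentry->d_lru, dispose);
found++;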
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 806e23e95f upstream.
There is a potential integer overflow in uvc_ioctl_ctrl_map(). When a
large xmap->menu_count is passed from the userspace, the subsequent call
to kmalloc() will allocate a buffer smaller than expected.
map->menu_count and map->menu_info would later be used in a loop (e.g.
in uvc_query_v4l2_ctrl), which leads to out-of-bound access.
The patch checks the ioctl argument and returns -EINVAL for zero or too
large values in xmap->menu_count.
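A minimal sketch of that check, assuming an upper-bound constant (the name and
exact limit are illustrative):

/* sketch: reject bogus counts before sizing the kmalloc() */
if (xmap->menu_count == 0 ||
    xmap->menu_count > UVC_MAX_MENU_ENTRIES)	/* illustrative limit */
	return -EINVAL;

size = xmap->menu_count * sizeof(*map->menu_info);
map->menu_info = kmalloc(size, GFP_KERNEL);
if (!map->menu_info)
	return -ENOMEM;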
Signed-off-by: Haogang Chen <haogangchen@gmail.com>
[laurent.pinchart@ideasonboard.com Prevent excessive memory consumption]
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2e885057b7 upstream.
In ELF64, the sh_flags field is 64-bits wide. recordmcount was
erroneously treating it as a 32-bit wide field. For little endian
objects this works because the flags of interest (SHF_EXECINSTR)
reside in the lower 32 bits of the word, and you get the same result
with either a 32-bit or 64-bit read. Big endian objects on the
other hand do not work at all with this error.
The fix: Correctly treat sh_flags as 64-bits wide in elf64 objects.
The symptom I observed was that my
__start_mcount_loc..__stop_mcount_loc was empty even though ftrace
function tracing was enabled.
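The essence of the fix, as a sketch (get64() stands in for recordmcount's
endian-aware 64-bit reader; the helper and function names are illustrative):

/* sketch: sh_flags is 64 bits wide in ELF64, so read the whole field */
Elf64_Xword sh_flags = get64(&shdr64->sh_flags);	/* was a 32-bit read */

if (sh_flags & SHF_EXECINSTR)
	do_func(shdr64);	/* illustrative: record mcount call sites */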
Link: http://lkml.kernel.org/r/1324345362-12230-1-git-send-email-ddaney.cavm@gmail.com
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit da517a08ac upstream.
SGI UV systems print a message during boot:
UV: Found <num> blades
Due to packaging changes, the blade count is not accurate on the next
generation of the platform. This patch corrects the count.
Signed-off-by: Jack Steiner <steiner@sgi.com>
Link: http://lkml.kernel.org/r/20120106191900.GA19772@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fed474857e upstream.
Removing the parent of a watched file results in "kernel BUG at
fs/notify/mark.c:139".
To reproduce
add "-w /tmp/audit/dir/watched_file" to audit.rules
rm -rf /tmp/audit/dir
This is caused by fsnotify_destroy_mark() being called without an
extra reference taken by the caller.
Reported by Francesco Cosoleto here:
https://bugzilla.novell.com/show_bug.cgi?id=689860
Fix by removing the BUG_ON and adding a comment about not accessing mark after
the iput.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b4f36f88b3 upstream.
Socket callbacks use svc_xprt_enqueue() to add an xprt to a
pool->sp_sockets list. In normal operation a server thread will later
come along and take the xprt off that list. On shutdown, after all the
threads have exited, we instead manually walk the sv_tempsocks and
sv_permsocks lists to find all the xprt's and delete them.
So the sp_sockets lists don't really matter any more. As a result,
we've mostly just ignored them and hoped they would go away.
Which has gotten us into trouble; witness for example ebc63e531c
"svcrpc: fix list-corrupting race on nfsd shutdown", the result of Ben
Greear noticing that a still-running svc_xprt_enqueue() could re-add an
xprt to an sp_sockets list just before it was deleted. The fix was to
remove it from the list at the end of svc_delete_xprt(). But that only
made corruption less likely--I can see nothing that prevents a
svc_xprt_enqueue() from adding another xprt to the list at the same
moment that we're removing this xprt from the list. In fact, despite
the earlier xpo_detach(), I don't even see what guarantees that
svc_xprt_enqueue() couldn't still be running on this xprt.
So, instead, note that svc_xprt_enqueue() essentially does:
lock sp_lock
if XPT_BUSY unset
add to sp_sockets
unlock sp_lock
So, if we do:
set XPT_BUSY on every xprt.
Empty every sp_sockets list, under the sp_socks locks.
Then we're left knowing that the sp_sockets lists are all empty and will
stay that way, since any svc_xprt_enqueue() will check XPT_BUSY under
the sp_lock and see it set.
And *then* we can continue deleting the xprt's.
(Thanks to Jeff Layton for being correctly suspicious of this code....)
Cc: Ben Greear <greearb@candelatech.com>
Cc: Jeff Layton <jlayton@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2fefb8a09e upstream.
There's no reason I can see that we need to call sv_shutdown between
closing the two lists of sockets.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 61c8504c42 upstream.
The pool_to and to_pool fields of the global svc_pool_map are freed on
shutdown, but are initialized in nfsd startup only in the
SVC_POOL_PERCPU and SVC_POOL_PERNODE cases.
They *are* initialized to zero on kernel startup. So as long as you use
only SVC_POOL_GLOBAL (the default), this will never be a problem.
You're also OK if you only ever use SVC_POOL_PERCPU or SVC_POOL_PERNODE.
However, the following sequence of events leads to a double-free:
1. set SVC_POOL_PERCPU or SVC_POOL_PERNODE
2. start nfsd: both fields are initialized.
3. shutdown nfsd: both fields are freed.
4. set SVC_POOL_GLOBAL
5. start nfsd: the fields are left untouched.
6. shutdown nfsd: now we try to free them again.
Step 4 is actually unnecessary, since (for some bizarre reason), nfsd
automatically resets the pool mode to SVC_POOL_GLOBAL on shutdown.
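One way to make the teardown safe, sketched under the assumption that the map
keeps its mode and the two arrays named in the text (simplified, not the
exact patch):

/* sketch: free once, then leave the map in its pristine "global" state */
static void svc_pool_map_release(struct svc_pool_map *m)
{
	kfree(m->to_pool);
	m->to_pool = NULL;
	kfree(m->pool_to);
	m->pool_to = NULL;
	m->mode = SVC_POOL_DEFAULT;	/* so a later shutdown won't free again */
}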
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 364212fdda upstream.
Thomas Lange reported that when he did a 'make localmodconfig', his
config was missing the brcmsmac driver, even though he had the module
loaded.
Looking into this, I found the file:
drivers/net/wireless/brcm80211/brcmsmac/Makefile
had the following in the Makefile:
MODULEPFX := brcmsmac
obj-$(CONFIG_BRCMSMAC) += $(MODULEPFX).o
The way streamline-config.pl works, is parsing all the
obj-$(CONFIG_FOO) += foo.o
lines to find that CONFIG_FOO belongs to the module foo.ko.
But in this case, brcmsmac.o was not used directly; a variable was used in its place.
Changing streamline-config.pl to remember variables defined in Makefiles
and to substitute them when they are used in the obj-X lines allows
Thomas (and others) to have their brcmsmac module stay configured
when it is loaded and "make localmodconfig" is run.
Reported-by: Thomas Lange <thomas-lange2@gmx.de>
Tested-by: Thomas Lange <thomas-lange2@gmx.de>
Cc: Arend van Spriel <arend@broadcom.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d060d963e8 upstream.
Simplify the way lines ending with backslashes (continuations) in Makefiles
are parsed. This is needed to implement a necessary fix.
Tested-by: Thomas Lange <thomas-lange2@gmx.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c06108be5 upstream.
If ctrls->count is too high the multiplication could overflow and
array_size would be lower than expected. Mauro and Hans Verkuil
suggested that we cap it at 1024. That comes from the maximum
number of controls with lots of room for expansion.
$ grep V4L2_CID include/linux/videodev2.h | wc -l
211
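A sketch of the cap (the constant name follows the suggestion above and should
be treated as illustrative):

/* sketch: bound the multiplication that sizes the controls array */
#define V4L2_CID_MAX_CTRLS 1024

if (ctrls->count > V4L2_CID_MAX_CTRLS)
	return -EINVAL;
array_size = ctrls->count * sizeof(ctrls->controls[0]);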
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dd8df17fe8 upstream.
This patch fixes a failure to recognize SD cards reported on a Dell
Vostro with O2 Micro SD card reader. Patch 49c468f ("mmc: sd: add
support for uhs bus speed mode selection") caused the problem, by
setting the SDHCI_CTRL_HISPD flag even for legacy timings.
Signed-off-by: Alexander Elbs <alex@segv.de>
Acked-by: Philip Rakity <prakity@marvell.com>
Acked-by: Arindam Nath <arindam.nath@amd.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c6ced0db08 upstream.
When suspending the host, the tuning timer should be deactivated,
and the HOST_NEEDS_TUNING flag should be set after the tuning timer is
deactivated.
Signed-off-by: Philip Rakity <prakity@marvell.com>
Signed-off-by: Aaron Lu <aaron.lu@amd.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 913047e9e5 upstream.
This patch fixes the wrong comparison before setting the interface
voltage in DDR mode.
The assignment to the variable ddr before the comparison is either
ddr = MMC_1_2V_DDR_MODE or ddr = MMC_1_8V_DDR_MODE, but the comparison
is done against the extended CSD value, i.e. ddr == EXT_CSD_CARD_TYPE_DDR_1_2V.
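A sketch of what the corrected comparison looks like, per the description
above (simplified; the signal-voltage call merely illustrates the surrounding
code and is not claimed to be the exact patch):

/* sketch: compare the driver-side mode, not the raw extended-CSD bits */
if (ddr == MMC_1_2V_DDR_MODE) {
	err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120, 0);
	if (err)
		goto err;
}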
Signed-off-by: Girish K S <girish.shivananjappa@linaro.org>
Acked-by: Subhash Jadavani <subhashj@codeaurora.org>
Acked-by: Philip Rakity <prakity@marvell.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7c1f59c9d5 upstream.
When adding checks for ACPI resource conflicts to many bus drivers,
not enough attention was paid to the error paths, and for several
drivers this causes 0 to be returned on error in some cases. Fix this
by properly returning a non-zero value on every error.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d34315da91 upstream.
Patch 56e46742e8 broke UBIFS debugging messages:
before that commit when UBIFS debugging was enabled, users saw few useful
debugging messages after mount. However, that patch turned 'dbg_msg()' into
'pr_debug()', so to enable the debugging messages users have to enable them
first via /sys/kernel/debug/dynamic_debug/control, which is very impractical.
This commit makes 'dbg_msg()' use 'printk()' instead of 'pr_debug()', just
as it was before the breakage.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 72f0d453d8 upstream.
Patch ab50ff6847 broke UBI debugging messages:
before that commit when UBI debugging was enabled, users saw few useful
debugging messages after attaching an MTD device. However, that patch turned
'dbg_msg()' into 'pr_debug()', so to enable the debugging messages users have
to enable them first via /sys/kernel/debug/dynamic_debug/control, which is
very impractical.
This commit makes 'dbg_msg()' use 'printk()' instead of 'pr_debug()', just
as it was before the breakage.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4a59c797a1 upstream.
Currently it's possible to create a volume without a name. E.g:
ubimkvol -n 32 -s 2MiB -t static /dev/ubi0 -N ""
After that vtbl_check() will always fail because it does not permit
empty strings.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ab936cbcd0 upstream.
Commit ef6a3c6311 ("mm: add replace_page_cache_page() function") added a
function replace_page_cache_page(). This function replaces a page in the
radix-tree with a new page. When doing this, memory cgroup needs to fix
up the accounting information: memcg needs to check the PCG_USED bit etc.
In some (many?) cases, 'newpage' is on the LRU before calling
replace_page_cache(). So, memcg's LRU accounting information should be
fixed, too.
This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
In that function, old pages will be unaccounted without touching
res_counter and new page will be accounted to the memcg (of old page).
When overwriting pc->mem_cgroup of newpage, take zone->lru_lock and avoid
races with LRU handling.
Background:
replace_page_cache_page() is called by FUSE code in its splice() handling.
Here, 'newpage' is replacing oldpage but this newpage is not a newly allocated
page and may be on the LRU. LRU mis-accounting will be critical for memory cgroup
because rmdir() checks that the whole LRU is empty and that there is no account leak.
If a page is on a different LRU than it should be, rmdir() will fail.
This bug was added in March 2011, but no bug report yet. I guess there
are not many people who use memcg and FUSE at the same time with upstream
kernels.
The result of this bug is that the admin cannot destroy a memcg because of the
account leak. So, no panic, no deadlock. And, even if an active cgroup
exists, umount can succeed. So there is no problem at shutdown.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Miklos Szeredi <mszeredi@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eb31aae8cb upstream.
Some Dell BIOSes have MCFG tables that don't report the entire
MMCONFIG area claimed by the chipset. If we move PCI devices into
that claimed-but-unreported area, they don't work.
This quirk reads the AMD MMCONFIG MSRs and adds PNP0C01 resources as
needed to cover the entire area.
Example problem scenario:
BIOS-e820: 00000000cfec5400 - 00000000d4000000 (reserved)
Fam 10h mmconf [d0000000, dfffffff]
PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xd0000000-0xd3ffffff] (base 0xd0000000)
pnp 00:0c: [mem 0xd0000000-0xd3ffffff]
pci 0000:00:12.0: reg 10: [mem 0xffb00000-0xffb00fff]
pci 0000:00:12.0: no compatible bridge window for [mem 0xffb00000-0xffb00fff]
pci 0000:00:12.0: BAR 0: assigned [mem 0xd4000000-0xd40000ff]
Reported-by: Lisa Salimbas <lisa.salimbas@canonical.com>
Reported-by: <thuban@singularity.fr>
Tested-by: dann frazier <dann.frazier@canonical.com>
References: https://bugzilla.kernel.org/show_bug.cgi?id=31602
References: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/647043
References: https://bugzilla.redhat.com/show_bug.cgi?id=770308
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 45fae74939 upstream.
Info about new measurements is cached in the iint for performance. When
the inode is flushed from cache, the associated iint is flushed as well.
Subsequent access to the inode will cause the inode to be re-measured and
will attempt to add a duplicate entry to the measurement list.
This patch frees the duplicate measurement memory, fixing a memory leak.
Signed-off-by: Roberto Sassu <roberto.sassu@polito.it>
Signed-off-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9e7860cee1 upstream.
Haogang Chen found out that:
There is a potential integer overflow in process_msg() that could result
in a cross-domain attack.
body = kmalloc(msg->hdr.len + 1, GFP_NOIO | __GFP_HIGH);
When a malicious guest passes 0xffffffff in msg->hdr.len, the subsequent
call to xb_read() would write to a zero-length buffer.
The other end of this connection is always the xenstore backend daemon
so there is no guest (malicious or otherwise) which can do this. The
xenstore daemon is a trusted component in the system.
However this seems like a reasonable robustness improvement so we should
have it.
And Ian, when reading the API docs, found that:
The payload length (len field of the header) is limited to 4096
(XENSTORE_PAYLOAD_MAX) in both directions. If a client exceeds the
limit, its xenstored connection will be immediately killed by
xenstored, which is usually catastrophic from the client's point of
view. Clients (particularly domains, which cannot just reconnect)
should avoid this.
so this patch checks against that instead.
This also avoids a potential integer overflow pointed out by Haogang Chen.
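A sketch of the check, using the limit named above:

/* sketch: refuse anything larger than the protocol allows (4096 bytes) */
if (msg->hdr.len > XENSTORE_PAYLOAD_MAX) {
	kfree(msg);
	return -EINVAL;
}

body = kmalloc(msg->hdr.len + 1, GFP_NOIO | __GFP_HIGH);
if (!body) {
	kfree(msg);
	return -ENOMEM;
}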
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Haogang Chen <haogangchen@gmail.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aff132d95f upstream.
The amount of memory required for tracking chain buffers is rather
large, and when the host credit count is big, memory allocation
failure occurs inside __get_free_pages.
The fix is to limit the number of chains to 100,000. In addition,
the number of host credits is limited to 30,000 IOs. However this
limitation can be overridden using the command line option
max_queue_depth. The algorithm for calculating the
reply_post_queue_depth is changed so that it is equal to
(reply_free_queue_depth + 16), previously it was (reply_free_queue_depth * 2).
Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 30c43282f3 upstream.
Added code to release the spinlock that is used to protect the
raid device list before calling a function that can block. The
blocking was causing a reschedule, and a subsequent attempt to acquire
the same lock resulted in a panic (NMI Watchdog detecting a CPU lockup).
Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5cf9a4e69c upstream.
We only need amd_bus.o for AMD systems with PCI. arch/x86/pci/Makefile
already depends on CONFIG_PCI=y, so this patch just adds the dependency
on CONFIG_AMD_NB.
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 24d25dbfa6 upstream.
This factors out the AMD native MMCONFIG discovery so we can use it
outside amd_bus.c.
amd_bus.c reads AMD MSRs so it can remove the MMCONFIG area from the
PCI resources. We may also need the MMCONFIG information to work
around BIOS defects in the ACPI MCFG table.
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ae5cd86455 upstream.
This assures that a _CRS reserved host bridge window or window region is
not used if it is not addressable by the CPU. The new code either trims
the window to exclude the non-addressable portion or totally ignores the
window if the entire window is non-addressable.
The current code has been shown to be problematic with 32-bit non-PAE
kernels on systems where _CRS reserves resources above 4GB.
Signed-off-by: Gary Hade <garyhade@us.ibm.com>
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Thomas Renninger <trenn@novell.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a776c491ca upstream.
I traced a nasty kexec on panic boot failure to the fact that we had
screaming msi interrupts and we were not disabling the msi messages at
kernel startup. The booting kernel had not enabled those interrupts so
was not prepared to handle them.
I can see no reason why we would ever want to leave the msi interrupts
enabled at boot if something else has enabled those interrupts. The pci
spec specifies that msi interrupts should be off by default. Drivers
are expected to enable the msi interrupts if they want to use them. Our
interrupt handling code reprograms the interrupt handlers at boot and
will not be able to do anything useful with an unexpected interrupt.
This patch applies cleanly all of the way back to 2.6.32 where I noticed
the problem.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e57e0d8e81 upstream.
When we fail to erase a PEB, we free the corresponding erase entry object,
but then re-schedule this object if the error code was something like -EAGAIN.
Obviously, it is a bug to use the object after we have freed it.
Reported-by: Emese Revfy <re.emese@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e801e128b2 upstream.
In some cases, when scrubbing the PEB, if we did not get the lock on
the PEB it fails to scrub. Add that PEB to the scrub list again.
Artem: minor amendments.
Signed-off-by: Bhavesh Parekh <bparekh@nvidia.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8a0d551a59 upstream.
Setting the security context of a NFSv4 mount via the context= mount
option is currently broken. The NFSv4 codepath allocates a parsed
options struct, and then parses the mount options to fill it. It
eventually calls nfs4_remote_mount which calls security_init_mnt_opts.
That clobbers the lsm_opts struct that was populated earlier. This bug
also looks like it causes a small memory leak on each v4 mount where
context= is used.
Fix this by moving the initialization of the lsm_opts into
nfs_alloc_parsed_mount_data. Also, add a destructor for
nfs_parsed_mount_data to make it easier to free all of the allocations
hanging off of it, and to ensure that the security_free_mnt_opts is
called whenever security_init_mnt_opts is.
I believe this regression was introduced quite some time ago, probably
by commit c02d7adf.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 43717c7dae upstream.
Lukas Razik <linux@razik.name> reports that on his SPARC system,
booting with an NFS root file system stopped working after commit
56463e50 "NFS: Use super.c for NFSROOT mount option parsing."
We found that the network switch to which Lukas' client was attached
was delaying access to the LAN after the client's NIC driver reported
that its link was up. The delay was longer than the timeouts used in
the NFS client during mounting.
NFSROOT worked for Lukas before commit 56463e50 because in those
kernels, the client's first operation was an rpcbind request to
determine which port the NFS server was listening on. When that
request failed after a long timeout, the client simply selected the
default NFS port (2049). By that time the switch was allowing access
to the LAN, and the mount succeeded.
Neither of these client behaviors is desirable, so reverting 56463e50
is really not a choice. Instead, introduce a mechanism that retries
the NFSROOT mount request several times. This is the same tactic that
normal user space NFS mounts employ to overcome server and network
delays.
Signed-off-by: Lukas Razik <linux@razik.name>
[ cel: match kernel coding style, add proper patch description ]
[ cel: add exponential back-off ]
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Lukas Razik <linux@razik.name>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3df96909b7 upstream.
It would previously write basically random bits to PCI configuration space...
Not very surprising that the GPU tended to stop responding completely. The
resulting MCE even froze the whole machine sometimes.
Now resetting the GPU after a lockup has at least a fighting chance of
succeeding.
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 28eebb703e upstream.
We often end up missing fences on older asics with
writeback enabled which leads to delays in the userspace
accel code, so just disable it by default on those asics.
Reported-by: Helge Deller <deller@gmx.de>
Reported-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 92db7f6c86 upstream.
This change was verified to fix both issues with no video I've
investigated. I've also checked checksum calculation with fglrx on:
RV620, HD54xx, HD5450, HD6310, HD6320.
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3a90274de3 upstream.
When an invalid NID is given, get_wcaps() returns zero as the error,
but get_wcaps_type() takes it as the normal value and returns a bogus
AC_WID_AUD_OUT value. This confuses the parser.
With this patch, get_wcaps_type() returns -1 when value 0 is given,
i.e. an invalid NID is passed to get_wcaps().
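The helper after the change looks roughly like this (a sketch based on the
description above, not a verbatim copy of the patch):

/* sketch: 0 from get_wcaps() means the NID was invalid, not AC_WID_AUD_OUT */
static inline int get_wcaps_type(unsigned int wcaps)
{
	if (!wcaps)
		return -1;	/* invalid NID */
	return (wcaps & AC_WCAP_TYPE) >> AC_WCAP_TYPE_SHIFT;
}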
Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=740118
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e7848163aa upstream.
Cards with identical PCI ids but no AC97 config in EEPROM do not have
the ac97 field initialized. We must check for this case to avoid a kernel oops.
Signed-off-by: Pavel Hofman <pavel.hofman@ivitera.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d50f2ab6f0 upstream.
Commit 503358ae01 ("ext4: avoid divide by
zero when trying to mount a corrupted file system") fixes CVE-2009-4307
by performing a sanity check on s_log_groups_per_flex, since it can be
set to a bogus value by an attacker.
sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex < 2) { ... }
This patch fixes two potential issues in the previous commit.
1) The sanity check might only work on architectures like PowerPC.
On x86, 5 bits are used for the shifting amount. That means, given a
large s_log_groups_per_flex value like 36, groups_per_flex = 1 << 36
is essentially 1 << 4 = 16, rather than 0. This will bypass the check,
leaving s_log_groups_per_flex and groups_per_flex inconsistent.
2) The sanity check relies on undefined behavior, i.e., oversized shift.
A standard-conforming C compiler could rewrite the check in unexpected
ways. Consider the following equivalent form, assuming groups_per_flex
is unsigned for simplicity.
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex == 0 || groups_per_flex == 1) {
We compile the code snippet using Clang 3.0 and GCC 4.6. Clang will
completely optimize away the check groups_per_flex == 0, leaving the
patched code as vulnerable as the original. GCC keeps the check, but
there is no guarantee that future versions will do the same.
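A sketch of the safer check: validate the exponent itself before shifting, so
neither shift-width truncation nor undefined behaviour can defeat it (the
exact upper bound here is illustrative):

/* sketch: bound s_log_groups_per_flex before it is ever used as a shift */
sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
if (sbi->s_log_groups_per_flex < 1 || sbi->s_log_groups_per_flex > 31) {
	sbi->s_log_groups_per_flex = 0;	/* treat as "no flex groups" */
	return 1;
}
groups_per_flex = 1U << sbi->s_log_groups_per_flex;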
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2f4478ccff upstream.
stresstest needs at least two eraseblocks. Bail out gracefully if that
condition is not met. Fixes the following 'division by zero' OOPS:
[ 619.100000] mtd_stresstest: MTD device size 131072, eraseblock size 131072, page size 2048, count of eraseblocks 1, pages per eraseblock 64, OOB size 64
[ 619.120000] mtd_stresstest: scanning for bad eraseblocks
[ 619.120000] mtd_stresstest: scanned 1 eraseblocks, 0 are bad
[ 619.130000] mtd_stresstest: doing operations
[ 619.130000] mtd_stresstest: 0 operations done
[ 619.140000] Division by zero in kernel.
...
caused by
/* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */
eb %= (ebcnt - 1);
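A sketch of the guard described above, placed before the main loop (message
wording and error code are illustrative):

/* sketch: the 'eb %= (ebcnt - 1)' logic needs at least two eraseblocks */
if (ebcnt < 2) {
	pr_err("mtd_stresstest: need at least 2 eraseblocks, have %i\n", ebcnt);
	err = -ENOSPC;
	goto out;
}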
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 342ff28f5a upstream.
Some error paths in mtd_blkdevs were fixed in the following commit:
commit 94735ec404
mtd: mtd_blkdevs: fix error path in blktrans_open
But on these error paths, the block device's `dev->open' count is
already incremented before we check for errors. This meant that, while
the error path was handled correctly on the first time through
blktrans_open(), the device is erroneously considered already open on
the second time through.
This problem can be seen, for instance, when a UBI volume is
simultaneously mounted as a UBIFS partition and read through its
corresponding gluebi mtdblockX device. This results in blktrans_open()
passing its error checks (with `dev->open > 0') without actually having
a handle on the device. Here's a summarized log of the actions and
results with nandsim:
# modprobe nandsim
# modprobe mtdblock
# modprobe gluebi
# modprobe ubifs
# ubiattach /dev/ubi_ctrl -m 0
...
# ubimkvol /dev/ubi0 -N test -s 16MiB
...
# mount -t ubifs ubi0:test /mnt
# ls /dev/mtdblock*
/dev/mtdblock0 /dev/mtdblock1
# cat /dev/mtdblock1 > /dev/null
cat: can't open '/dev/mtdblock4': Device or resource busy
# cat /dev/mtdblock1 > /dev/null
CPU 0 Unable to handle kernel paging request at virtual address
fffffff0, epc == 8031536c, ra == 8031f280
Oops[#1]:
...
Call Trace:
[<8031536c>] ubi_leb_read+0x14/0x164
[<8031f280>] gluebi_read+0xf0/0x148
[<802edba8>] mtdblock_readsect+0x64/0x198
[<802ecfe4>] mtd_blktrans_thread+0x330/0x3f4
[<8005be98>] kthread+0x88/0x90
[<8000bc04>] kernel_thread_helper+0x10/0x18
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 556f063580 upstream.
The array of unsigned long pointed by oops_page_used is allocated
by vmalloc which requires the size to be in bytes.
BITS_PER_LONG is equal to 32.
If we want to allocate memory for 32 pages with one bit per page, then
32 / BITS_PER_LONG equals 1, which as a byte count covers only 8 bits.
To fix it we need to multiply the result by sizeof(unsigned long), which equals 4.
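In other words, the allocation needs to be expressed in bytes, roughly like
this (a sketch; the page-count variable name is illustrative):

/* sketch: one bit per page, rounded up to whole longs, then scaled to bytes */
oops_page_used = vmalloc(DIV_ROUND_UP(mtdoops_pages, BITS_PER_LONG) *
			 sizeof(unsigned long));
if (!oops_page_used)
	return -ENOMEM;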
Signed-off-by: Roman Tereshonkov <roman.tereshonkov@nokia.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 093019cf1b upstream.
Commit fa8b18ed didn't prevent the integer overflow and possible
memory corruption. "count" can go negative and bypass the check.
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[Not upstream as it was fixed differently for 3.3 with a much more
"intrusive" rework of the driver - gregkh]
There is a race condition involving acm_tty_hangup() and acm_tty_close()
where hangup() would attempt to access tty->driver_data without proper
locking and NULL checking after close() has potentially already set it
to NULL. One possibility to (sporadically) trigger this behavior is to
perform a suspend/resume cycle with a running WWAN data connection.
This patch addresses the issue by introducing a NULL check for
tty->driver_data in acm_tty_hangup() protected by open_mutex and exiting
gracefully when hangup() is invoked on a device that has already been
closed.
Signed-off-by: Thilo-Alexander Ginkel <thilo@ginkel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9ae89b0296 upstream.
mpt2sas_base_detach() call was removed from _scsih_remove() while
doing some code shuffling. Mainly when we work on adding code for
scsih_shutdown(). I have added back mpt2sas_base_detach(), which will
get called from _scsih_remove().
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
commit 79cfbdfa87 upstream.
The CPU hotplug notifications sent out by the _cpu_up() and _cpu_down()
functions depend on the value of the 'tasks_frozen' argument passed to them
(which indicates whether tasks have been frozen or not).
(Examples for such CPU hotplug notifications: CPU_ONLINE, CPU_ONLINE_FROZEN,
CPU_DEAD, CPU_DEAD_FROZEN).
Thus, it is essential that while the callbacks for those notifications are
running, the state of the system with respect to the tasks being frozen or
not remains unchanged, *throughout that duration*. Hence there is a need for
synchronizing the CPU hotplug code with the freezer subsystem.
Since the freezer is involved only in the Suspend/Hibernate call paths, this
patch hooks the CPU hotplug code to the suspend/hibernate notifiers
PM_[SUSPEND|HIBERNATE]_PREPARE and PM_POST_[SUSPEND|HIBERNATE] to prevent
the race between CPU hotplug and freezer, thus ensuring that CPU hotplug
notifications will always be run with the state of the system really being
what the notifications indicate, _throughout_ their execution time.
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f7d9821a6a upstream.
If slave device already has a receive handler registered, then the
error unwind of bonding device enslave function is broken.
The following will leave a pointer to freed memory in the slave
device list, causing a later kernel panic.
# modprobe dummy
# ip li add dummy0-1 link dummy0 type macvlan
# modprobe bonding
# echo +dummy0 >/sys/class/net/bond0/bonding/slaves
The fix is to detach the slave (which removes it from the list)
in the unwind path.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c15d74def upstream.
At this point, if skb->len happens to be 2, the subsequent skb_pull(skb, 4)
call won't work and the skb->len won't be decreased and won't ever reach 0,
resulting in an infinite loop.
With an ASIX 88772 under heavy load, without this patch, rx_fixup() reaches
an infinite loop in less than a minute. With this patch applied,
no infinite loop even after hours of heavy load.
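A sketch of the loop guard that keeps skb->len shrinking (field and helper
names follow the usual usbnet conventions; the loop body is simplified):

/* sketch: a runt remainder can no longer stall the loop */
while (skb->len > 0) {
	u32 header;

	if (skb->len < sizeof(header))
		return 0;		/* too short to hold a packet header: drop */

	header = get_unaligned_le32(skb->data);
	skb_pull(skb, sizeof(header));	/* always makes forward progress now */
	/* hand the payload described by 'header' to usbnet here */
}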
Signed-off-by: Aurelien Jacobs <aurel@gnuage.org>
Cc: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit a8c1f65c79 upstream.
Commit 5b7c840667 ('ipv4: correct IGMP
behavior on v3 query during v2-compatibility mode') added yet another
case for query parsing, which can result in max_delay = 0. Substitute
a value of 1, as in the usual v3 case.
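The substitution amounts to something like this (a sketch; the macro names are
those commonly used in the IGMP code):

/* sketch: an IGMPv3 query handled in v2-compat mode must not yield zero delay */
max_delay = IGMPV3_MRC(ih3->code) * (HZ / IGMP_TIMER_SCALE);
if (!max_delay)
	max_delay = 1;	/* as in the usual v3 case */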
Reported-by: Simon McVittie <smcv@debian.org>
References: http://bugs.debian.org/654876
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit c618759774 upstream.
Problems with NVIDIA's OHCI host controllers persist. After looking
carefully through the spec, I finally realized that when a controller
is reset it then automatically goes into a SUSPEND state in which it
is completely quiescent (no DMA and no IRQs) and from which it will
not awaken until the system puts it into the OPERATIONAL state.
Therefore there's no need to worry about controllers being in the
RESET state for extended periods, or remaining in the OPERATIONAL
state during system shutdown. The proper action for device
initialization is to put the controller into the RESET state (if it's
not there already) and then to issue a software reset. Similarly, the
proper action for device shutdown is simply to do a software reset.
This patch (as1499) implements such an approach. It simplifies
initialization and shutdown, and allows the NVIDIA shutdown-quirk code
to be removed.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Andre "Osku" Schmidt <andre.osku.schmidt@googlemail.com>
Tested-by: Arno Augustin <Arno.Augustin@web.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 18b7ede5f7 upstream.
[ removed the dwc3 portion of the patch as it didn't apply to
older kernels - gregkh]
According to USB 3.0 Specification Table 9-22, if
bmAttributes [4:0] are set to zero, it means "no
streams supported", but the way this helper was
defined on Linux, we will *always* have one stream
which might cause several problems.
For example on DWC3, we would tell the controller
endpoint has streams enabled and yet start transfers
with Stream ID set to 0, which would goof up the host
side.
While doing that, convert the macro to an inline
function due to the different checks we now need.
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3c8c931671 upstream.
Add support for Chinese Noname HSPA USB modem which is apparently
manufactured by a company called ZD Incorporated (based on texts in the
Windows drivers).
This product is available at least from Dealextreme (SKU 80032) and
possibly in India with name Olive V-MW250. It is based on Qualcomm
MSM6280 chip.
I needed to also add "options usb-storage quirks=0685:7000:i" in modprobe
configuration because udevd or the kernel keeps poking the embedded
fake-cd-rom which fails and causes the device to reset. There might be
a better way to accomplish the same. usb_modeswitch is not needed with
this device.
Signed-off-by: Janne Snabb <snabb@epipe.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b06162335 upstream.
Add VendorID/ProductID for USB 3G dongle Model VT1000 of Viettel.
Signed-off-by: VU Tuan Duc <ducvt@viettel.com.vn>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 71d85724bd upstream.
I encountered a result of COMP_2ND_BW_ERR while improving how the pwc
webcam driver handles not having the full usb1 bandwidth available to
itself.
I created the following test setup, a NEC xhci controller with a
single TT USB 2 hub plugged into it, with a usb keyboard and a pwc webcam
plugged into the usb2 hub. This caused the following to show up in dmesg
when trying to stream from the pwc camera at its highest alt setting:
xhci_hcd 0000:01:00.0: ERROR: unexpected command completion code 0x23.
usb 6-2.1: Not enough bandwidth for altsetting 9
And usb_set_interface returned -EINVAL, which caused my pwc code to not
do the right thing as it expected -ENOSPC.
This patch makes the xhci driver properly handle COMP_2ND_BW_ERR and makes
usb_set_interface return -ENOSPC as expected.
This should be backported to stable kernels as old as 2.6.32.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bc677d5b64 upstream.
Add a new field num_mapped_sgs to struct urb so that we have a place to
store the number of mapped entries and can also retain the original
value of entries in num_sgs. Previously, usb_hcd_map_urb_for_dma()
would overwrite this with the number of mapped entries, which would
break dma_unmap_sg() because it requires the original number of entries.
This fixes warnings like the following when using USB storage devices:
------------[ cut here ]------------
WARNING: at lib/dma-debug.c:902 check_unmap+0x4e4/0x695()
ehci_hcd 0000:00:12.2: DMA-API: device driver frees DMA sg list with different entry count [map count=4] [unmap count=1]
Modules linked in: ohci_hcd ehci_hcd
Pid: 0, comm: kworker/0:1 Not tainted 3.2.0-rc2+ #319
Call Trace:
<IRQ> [<ffffffff81036d3b>] warn_slowpath_common+0x80/0x98
[<ffffffff81036de7>] warn_slowpath_fmt+0x41/0x43
[<ffffffff811fa5ae>] check_unmap+0x4e4/0x695
[<ffffffff8105e92c>] ? trace_hardirqs_off+0xd/0xf
[<ffffffff8147208b>] ? _raw_spin_unlock_irqrestore+0x33/0x50
[<ffffffff811fa84a>] debug_dma_unmap_sg+0xeb/0x117
[<ffffffff8137b02f>] usb_hcd_unmap_urb_for_dma+0x71/0x188
[<ffffffff8137b166>] unmap_urb_for_dma+0x20/0x22
[<ffffffff8137b1c5>] usb_hcd_giveback_urb+0x5d/0xc0
[<ffffffffa0000d02>] ehci_urb_done+0xf7/0x10c [ehci_hcd]
[<ffffffffa0001140>] qh_completions+0x429/0x4bd [ehci_hcd]
[<ffffffffa000340a>] ehci_work+0x95/0x9c0 [ehci_hcd]
...
---[ end trace f29ac88a5a48c580 ]---
Mapped at:
[<ffffffff811faac4>] debug_dma_map_sg+0x45/0x139
[<ffffffff8137bc0b>] usb_hcd_map_urb_for_dma+0x22e/0x478
[<ffffffff8137c494>] usb_hcd_submit_urb+0x63f/0x6fa
[<ffffffff8137d01c>] usb_submit_urb+0x2c7/0x2de
[<ffffffff8137dcd4>] usb_sg_wait+0x55/0x161
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 08e87d0d77 upstream.
Hi, below patch adds the USB-ID of the serial adapters sold by
Multiplex RC (www.multiplex-rc.de).
Signed-off-by: Malte Schröder <maltesch@gmx.de>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 694c6301e5 upstream.
Fix regression introduced by commit 507ca9bc04 ([PATCH] USB: add
ability for usb-serial drivers to determine if their write urb is
currently being used.) which inverted the logic in write_room so that it
returns zero when the write urb is actually free.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 772aed45b6 upstream.
In musb_init_controller() there's a pm_runtime_put(), but there's no
pm_runtime_get(), which creates a mismatch that causes the driver to
sleep when it shouldn't.
This was introduced in 7acc619[1], but it wasn't triggered in my setup
until 18a2689[2] was merged to Linus' branch at point df0914[3]. IOW;
when PM is working as it was supposed to.
However, it seems most of the time this is used in a way that keeps the
counter above 0, so nobody noticed. Also, it seems to depend on the
configuration used in versions before 3.1, but not later (or in it).
I found the problem by loading isp1704_charger before any usb gadgets:
http://article.gmane.org/gmane.linux.kernel/1226122
All versions after 2.6.39 are affected.
[1] usb: musb: Idle path retention and offmode support for OMAP3
[2] OMAP2+: musb: hwmod adaptation for musb registration
[3] Merge branch 'omap-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6
Cc: Hema HK <hemahk@ti.com>
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
commit 35284b3d2f upstream.
The Guillemot Webcam Hercules Dualpix Exchange camera
has been reported with a second ID.
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 59bf5cf94f upstream.
We were sending data on the stack when uploading firmware, which gives
some machines fits and is not allowed. Fix this by using the buffer we
already had around for this very purpose.
Reported-by: Wouter M. Koolen <wmkoolen@cwi.nl>
Tested-by: Wouter M. Koolen <wmkoolen@cwi.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e7c8e8605d upstream.
On some failures, the country_code field of an acm structure is freed
without freeing the acm structure itself. Elsewhere, operations including
memcpy and kfree are performed on the country_code field. The patch sets
the country_code field to NULL when it is freed, and likewise sets the
country_code_size field to 0.
Signed-off-by: Julia Lawall <julia@diku.dk>
Acked-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d2eb8c3593 upstream.
During BKL removal in 2.6.38, conversion of files from in-ICB format to normal
format got broken. We call ->writepage with i_data_sem held but udf_get_block()
also acquires i_data_sem thus creating A-A deadlock.
We fix the problem by dropping i_data_sem before calling ->writepage() which is
safe since i_mutex still protects us against any changes in the file. Also fix
pagelock - i_data_sem lock inversion in udf_expand_file_adinicb() by dropping
i_data_sem before calling find_or_create_page().
Reported-by: Matthias Matiak <netzpython@mail-on.us>
Tested-by: Matthias Matiak <netzpython@mail-on.us>
Reviewed-by: Namjae Jeon <linkinjeon@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0d19ea8665 upstream.
If we mount a hierarchy with a specified name, the name is unique,
and we can use it to mount the hierarchy without specifying its
set of subsystem names. This feature is documented in
Documentation/cgroups/cgroups.txt section 2.3
Here's an example:
# mount -t cgroup -o cpuset,name=myhier xxx /cgroup1
# mount -t cgroup -o name=myhier xxx /cgroup2
But it was broken by commit 32a8cf235e
(cgroup: make the mount options parsing more accurate)
This fixes the regression.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d8cae98cdd upstream.
The documentation for usbmon is out of date; the usbfs "devices" file
now exists in /sys/kernel/debug/usb rather than /proc/bus/usb. This
patch (as1505) updates the documentation accordingly, and also
mentions that the necessary information can be found by running lsusb.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
CC: Pete Zaitcev <zaitcev@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 33c104d415 upstream.
WARN_ON_ONCE(IS_RDONLY(inode)) tends to trip when a filesystem hits an error and is
remounted read-only. This unnecessarily scares users (well, they should be
scared because of the filesystem error, but the stack trace distracts them from the
right source of their fear ;-). We could as well just remove the WARN_ON but
it's not hard to fix it to not trip on filesystems with errors and not use more
cycles in the common case, so that's what we do.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a9e36da655 upstream.
This patch fixes a crash in reiserfs_delete_xattrs during umount.
When shrink_dcache_for_umount clears the dcache from
generic_shutdown_super, delayed evictions are forced to disk. If an
evicted inode has extended attributes associated with it, it will
need to walk the xattr tree to locate and remove them.
But since shrink_dcache_for_umount will BUG if it encounters active
dentries, the xattr tree must be released before it's called or it will
crash during every umount.
This patch forces the evictions to occur before generic_shutdown_super
by calling shrink_dcache_sb first. The additional evictions caused
by the removal of each associated xattr file and dir will be automatically
handled as they're added to the LRU list.
CC: reiserfs-devel@vger.kernel.org
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a06d789b42 upstream.
When jqfmt mount option is not specified on remount, we mistakenly clear
s_jquota_fmt value stored in superblock. Fix the problem.
CC: reiserfs-devel@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 49908a1b25 upstream.
An update was made to the sched:sched_switch event that adds some
logic to the first parameter of the __print_flags() that shows the
state of tasks. This change causes perf to fail to parse the flags.
A simple fix is needed to have the parser be able to process ops
within the argument.
Reported-by: Andrew Vagin <avagin@openvz.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eddfb67525 upstream.
Prevent a receive data corruption by ensuring that the write to update
the rcvhdrheadn register to generate an interrupt is at the very end
of the receive processing.
Signed-off-by: Ramkrishna Vepa <ram.vepa@qlogic.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@qlogic.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e8303a3b21 upstream.
Adds the device id needed for the USB Ethernet Adapter delivered by
ASUS with their Zenbook.
Signed-off-by: Aurelien Jacobs <aurel@gnuage.org>
Acked-by: Grant Grundler <grundler@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e4f387d8db upstream.
Unpaired calling of probe_hcall_entry and probe_hcall_exit might happen
as follows, which could cause an incorrect preempt count.
__trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry =>
get_cpu_var => preempt_disable
__trace_hcall_exit => trace_hcall_exit -> probe_hcall_exit =>
put_cpu_var => preempt_enable
where:
A => B and A -> B means A calls B, but
=> means A will call B through function name, and B will definitely be
called.
-> means A will call B through function pointer, so B might not be
called if the function pointer is not set.
So an error happens when only one of probe_hcall_entry and probe_hcall_exit
gets called during a hcall.
This patch tries to move the preempt count operations from
probe_hcall_entry and probe_hcall_exit to its callers.
Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 37fb9a0231 upstream.
When re-enabling interrupts we have code to handle edge sensitive
decrementers by resetting the decrementer to 1 whenever it is negative.
If interrupts were disabled long enough that the decrementer wrapped to
positive we do nothing. This means interrupts can be delayed for a long
time until it finally goes negative again.
While we hope interrupts are never disabled long enough for the
decrementer to go positive, we have a very good test team that can
drive any kernel into the ground. The softlockup data we get back
from these failures could be seconds in the future, completely missing
the cause of the lockup.
We already keep track of the timebase of the next event so use that
to work out if we should trigger a decrementer exception.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f6efe96edd upstream.
An nvs file with malformed contents could cause the processing of the
calibration data to read beyond the end of the buffer. Prevent this
from happening by adding bounds checking.
Signed-off-by: Pontus Fuchs <pontus.fuchs@gmail.com>
Reviewed-by: Luciano Coelho <coelho@ti.com>
Signed-off-by: Luciano Coelho <coelho@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2131d3c2f9 upstream.
Check for an out-of-bounds FEM index to prevent reading beyond the end
of the ini memory.
Signed-off-by: Pontus Fuchs <pontus.fuchs@gmail.com>
Reviewed-by: Luciano Coelho <coelho@ti.com>
Signed-off-by: Luciano Coelho <coelho@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c055fe0797 upstream.
We used to try to request 8 times more vram than needed, which would
fail if the card's BAR is too small (observed with qemu & kvm).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1bb0b7d215 upstream.
When using a >8bpp framebuffer, offb advertises truecolor, not directcolor,
and doesn't touch the color map even if it has a corresponding access method
for the real hardware.
Thus it needs to set the pseudo-palette with all 3 components of the color,
like other truecolor framebuffers, not with copies of the color index like
a directcolor framebuffer would do.
This went unnoticed for a long time because it's pretty hard to get offb
to kick in with anything but 8bpp (old BootX under MacOS will do that and
qemu does it).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3f81f8f152 upstream.
Testing on the openSUSE wireless forum has shown that a Linksys
WUSB54GC v3 with USB ID 1737:0077 works with rt2800usb when the ID is
written to /sys/.../new_id. This ID can therefore be moved out of UNKNOWN.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Acked-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eea915bb0d upstream.
This oops was reported recently:
firmware_loading_store+0xf9/0x17b
dev_attr_store+0x20/0x22
sysfs_write_file+0x101/0x134
vfs_write+0xac/0xf3
sys_write+0x4a/0x6e
system_call_fastpath+0x16/0x1b
The complete backtrace was unfortunately not captured, but details can be found
here:
https://bugzilla.redhat.com/show_bug.cgi?id=769920
The cause is fairly clear. It's caused by the fact that firmware_loading_store
has a case 0 in its switch statement that reads and writes the fw_priv->fw
pointer without the protection of the fw_lock mutex. Since there is a window
between the time that _request_firmware sets fw_priv->fw to NULL and the time
the corresponding sysfs file is unregistered, it's possible for a user space
application to race in and write a zero to the loading file, causing a NULL
dereference in firmware_loading_store. Fix it by extending the protection of
the fw_lock mutex to cover all of the firmware_loading_store function.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2eb7f204db upstream.
The Japanese/Korean/Chinese versions still need updating.
Also, the stable kernel 2.6.x.y descriptions are out of date
and should be updated as well.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bc7a2f3abc upstream.
The old address hasn't worked since the great intrusion of August 2011.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 50b8d25748 upstream.
Test-case:
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <sys/wait.h>
#include <sys/ptrace.h>

int main(void)
{
	int pid, status;

	pid = fork();
	if (!pid) {
		for (;;) {
			if (!fork())
				return 0;
			if (waitpid(-1, &status, 0) < 0) {
				printf("ERR!! wait: %m\n");
				return 0;
			}
		}
	}

	assert(ptrace(PTRACE_ATTACH, pid, 0, 0) == 0);
	assert(waitpid(-1, NULL, 0) == pid);
	assert(ptrace(PTRACE_SETOPTIONS, pid, 0,
		      PTRACE_O_TRACEFORK) == 0);

	do {
		ptrace(PTRACE_CONT, pid, 0, 0);
		pid = waitpid(-1, NULL, 0);
	} while (pid > 0);

	return 1;
}
It fails because ->real_parent sees its child in EXIT_DEAD state
while the tracer is going to change the state back to EXIT_ZOMBIE
in wait_task_zombie().
The offending commit is 823b018e which moved the EXIT_DEAD check,
but in fact we should not blame it. The original code was not
correct as well because it didn't take ptrace_reparented() into
account and because we can't really trust ->ptrace.
This patch adds the additional check to close this particular
race but it doesn't solve the whole problem. We simply can't
rely on ->ptrace in this case, it can be cleared if the tracer
is multithreaded by the exiting ->parent.
I think we should kill EXIT_DEAD altogether, we should always
remove the soon-to-be-reaped child from ->children or at least
we should never do the DEAD->ZOMBIE transition. But this is too
complex for 3.2.
Reported-and-tested-by: Denys Vlasenko <vda.linux@googlemail.com>
Tested-by: Lukasz Michalik <lmi@ift.uni.wroc.pl>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 157e8bf8b4 upstream.
This reverts commit c0afabd3d5.
It causes failures on Toshiba laptops - instead of disabling the alarm,
it actually seems to enable it on the affected laptops, resulting in
(for example) the laptop powering on automatically five minutes after
shutdown.
There's a patch for it that appears to work for at least some people,
but it's too late to play around with this, so revert for now and try
again in the next merge window.
See for example
http://bugs.debian.org/652869
Reported-and-bisected-by: Andreas Friedrich <afrie@gmx.net> (Toshiba Tecra)
Reported-by: Antonio-M. Corbi Bellot <antonio.corbi@ua.es> (Toshiba Portege R500)
Reported-by: Marco Santos <marco.santos@waynext.com> (Toshiba Portege Z830)
Reported-by: Christophe Vu-Brugier <cvubrugier@yahoo.fr> (Toshiba Portege R830)
Cc: Jonathan Nieder <jrnieder@gmail.com>
Requested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Commit be4f1ac828 upstream.
Since Linux 2.6.36 the writeback code has introduced various measures for
live lock prevention during sync(). Unfortunately some of these are
actively harmful for the XFS model, where the inode gets marked dirty for
metadata from the data I/O handler.
The older_than_this checks are now enforced more strictly since
"writeback: avoid livelocking WB_SYNC_ALL writeback",
which only calls into __writeback_inodes_sb and thus only samples the
current cut-off time once. But on a slow enough device the previous
asynchronous sync pass might not have fully completed yet, and thus XFS
might mark metadata dirty only after that sampling of the cut-off time for
the blocking pass has already happened. I have not reproduced this
myself on a real system, but by introducing artificial delay into the
XFS I/O completion workqueues it can be reproduced easily.
Fix this by iterating over all XFS inodes in ->sync_fs and logging all that
are dirty. This might log inodes that only got redirtied after the
previous pass, but given how cheap delayed logging of inodes is it
isn't a major concern for performance.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Tested-by: Mark Tinguely <tinguely@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Commit 0b8fd3033c upstream.
If the writeback code writes back an inode because it has expired, we currently
use the non-blocking ->write_inode path. This means any inode that is pinned
is skipped. With delayed logging and a workload that has very little log
traffic otherwise, it is very likely that an inode that gets constantly
written to is always pinned, and thus we keep refusing to write it. The VM
writeback code at that point redirties it and doesn't try to write it again
for another 30 seconds. This means under certain scenarios time-based
metadata writeback never happens.
Fix this by calling into xfs_log_inode for kupdate in addition to data
integrity syncs, and thus transfer the inode to the log ASAP.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Tested-by: Mark Tinguely <tinguely@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 63a741757d upstream.
This fixes an odd bug found on a Dell PowerEdge 1850/0RC130
(BIOS A05 01/09/2006) where all of the modules doing pci_set_dma_mask
would fail with:
ata_piix 0000:00:1f.1: enabling device (0005 -> 0007)
ata_piix 0000:00:1f.1: can't derive routing for PCI INT A
ata_piix 0000:00:1f.1: BMDMA: failed to set dma mask, falling back to PIO
The issue was that the Xen-SWIOTLB was allocated such that the end of the
buffer was straddling a page (and also above 4GB). The fix, spotted by
Kalev Leonid, was to piggyback on git commit
e79f86b2ef "swiotlb: Use page alignment
for early buffer allocation", which says:
We could call free_bootmem_late() if swiotlb is not used, and
it will shrink to page alignment.
So allocate them with page alignment at first, to avoid losing two pages.
And doing that fixes the outstanding issue.
Suggested-by: "Kalev, Leonid" <Leonid.Kalev@ca.com>
Reported-and-Tested-by: "Taylor, Neal E" <Neal.Taylor@ca.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d0e84caeb4 upstream.
If the twl4030-madc device wasn't registered, and another device, such
as twl4030-madc-hwmon, calls twl4030_madc_conversion(), a NULL pointer
is dereferenced.
Signed-off-by: Kyle Manna <kyle@kylemanna.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 66cc5b8e50 upstream.
In the worst case this fixes the following error:
[ 72.086212] (NULL device *): conversion timeout!
In the best case it prevents a crash.
Signed-off-by: Kyle Manna <kyle@kylemanna.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
commit e178ccb335 upstream.
A mutex is locked on entry into twl4030_madc_conversion().
An immediate return on some error conditions leaves the
mutex locked.
This patch ensures that the mutex is always unlocked before
leaving the function.
Signed-off-by: Sanjeev Premi <premi@ti.com>
Cc: Keerthy <j-keerthy@ti.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 96f1f05af7 upstream.
Since we configure all the queues as CHAINABLE, we need to update the
byte count for all the queues, not only the AGGREGATABLE ones.
Not doing so can confuse the SCD and make the fw assert.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
[ Upstream commit 9f28a2fc0b ]
Commit 2c8cec5c10 (ipv4: Cache learned PMTU information in inetpeer)
removed the IP route cache garbage collector a bit too soon, as this gc
was responsible for cleaning up expired routes and releasing their
neighbour reference.
As pointed out by Robert Gladewitz, recent kernels can fill and exhaust
their neighbour cache.
Reintroduce the garbage collection, since we'll have to wait until our
neighbour lookups become refcount-less to not depend on this stuff.
Reported-by: Robert Gladewitz <gladewitz@gmx.de>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit d01ff0a049 ]
After resetting ipv4_devconf->data[IPV4_DEVCONF_ACCEPT_LOCAL] to 0,
we should flush the route cache, or it will continue to receive packets with
a local source address, which should be dropped.
Signed-off-by: Weiping Pan <panweiping3@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit a76c0adf60 ]
When checking whether a DATA chunk fits into the estimated rwnd a
full sizeof(struct sk_buff) is added to the needed chunk size. This
quickly exhausts the available rwnd space and leads to packets being
sent which are much below the PMTU limit. This can lead to much worse
performance.
The reason for this behaviour was to avoid putting too much memory
pressure on the receiver. The concept is not completely irrational,
because a Linux receiver does in fact clone an skb for each DATA chunk
delivered. However, Linux also reserves half the available socket
buffer space for data structures; therefore usage of it is already
accounted for.
When proposing to change this the last time it was noted that this
behaviour was introduced to solve a performance issue caused by rwnd
overusage in combination with small DATA chunks.
Trying to reproduce this I found that with the sk_buff overhead removed,
the performance would improve significantly unless socket buffer limits
are increased.
The following numbers have been gathered using a patched iperf
supporting SCTP over a live 1 Gbit ethernet network. The -l option
was used to limit DATA chunk sizes. The numbers listed are based on
the average of 3 test runs each. Default values have been used for
sk_(r|w)mem.
Chunk Size   Unpatched        No Overhead
-------------------------------------------
    4        15.2 Kbit [!]    12.2 Mbit [!]
    8        35.8 Kbit [!]    26.0 Mbit [!]
   16        95.5 Kbit [!]    54.4 Mbit [!]
   32       106.7 Mbit       102.3 Mbit
   64       189.2 Mbit       188.3 Mbit
  128       331.2 Mbit       334.8 Mbit
  256       537.7 Mbit       536.0 Mbit
  512       766.9 Mbit       766.6 Mbit
 1024       810.1 Mbit       808.6 Mbit
Signed-off-by: Thomas Graf <tgraf@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 2692ba61a8 ]
Commit 8ffd3208 voids the previous patches f6778aab and 810c0719 for
limiting the autoclose value. If userspace passes in -1 on a 32-bit
platform, the overflow check didn't work and autoclose would be set
to 0xffffffff.
This patch defines a max_autoclose (in seconds) for limiting the value
and exposes it through sysctl, with the following intentions.
1) Avoid overflowing autoclose * HZ.
2) Keep the default autoclose bound consistent across 32- and 64-bit
platforms (INT_MAX / HZ in this patch).
3) Keep the autoclose value consistent between setsockopt() and
getsockopt() calls.
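As a rough illustration of points 1) and 2), the clamping amounts to the
following user-space sketch; the helper name and the fixed HZ value are
made up for the example, and this is not the actual sysctl plumbing:

#include <limits.h>
#include <stdio.h>

#define HZ 250					/* example tick rate */

/* Upper bound so that autoclose * HZ cannot overflow an int. */
static const unsigned int max_autoclose = INT_MAX / HZ;

/* Clamp a user-supplied autoclose value (in seconds). */
static unsigned int clamp_autoclose(unsigned int requested)
{
	return requested > max_autoclose ? max_autoclose : requested;
}

int main(void)
{
	/* -1 from userspace arrives as 0xffffffff and gets clamped. */
	printf("%u -> %u\n", 0xffffffffu, clamp_autoclose(0xffffffffu));
	return 0;
}

Storing the clamped value is also what keeps a later getsockopt() consistent
with what setsockopt() accepted.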
Suggested-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 3f1e6d3fd3 ]
gred_change_vq() is called under sch_tree_lock(sch).
This means a spinlock is held, and we are not allowed to sleep in this
context.
We might pre-allocate memory using GFP_KERNEL before taking spinlock,
but this is not suitable for stable material.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit cd7816d149 ]
The previous commit 3fb72f1e6e
made IP-Config wait for carrier on at least one network device.
Before waiting (predefined value 120s), check that at least one device
was successfully brought up. Otherwise (e.g. a buggy bootloader
which does not set the MAC address) there is no point in waiting
for carrier.
Cc: Micha Nelissen <micha@neli.hopto.org>
Cc: Holger Brunck <holger.brunck@keymile.com>
Signed-off-by: Gerlando Falauto <gerlando.falauto@keymile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 7838f2ce36 ]
Userspace may not provide TCA_OPTIONS; in fact tc currently does
not do so if no arguments are specified on the command line.
Return EINVAL instead of panicking.
Signed-off-by: Thomas Graf <tgraf@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 9cef310fcd ]
Received non-stream protocol packets were calling llc_cmsg_rcv, which used an
skb after that skb was released by sk_eat_skb. This caused received STP
packets to generate kernel panics.
Signed-off-by: Alexandru Juncu <ajuncu@ixiacom.com>
Signed-off-by: Kunjan Naik <knaik@ixiacom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit a03ffcf873 ]
x86 jump instruction size is 2 or 5 bytes (near/long jump), not 2 or 6
bytes.
In case a conditional jump is followed by a long jump, the conditional jump
target ends up one byte past the start of the target instruction.
Signed-off-by: Markus Kötter <nepenthesdev@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ A combination of upstream commits 1d299bc773 and
e88d246871 ]
Although we provide a proper way for a debugger to control whether
syscall restart occurs, we run into problems because orig_i0 is not
saved and restored properly.
Luckily we can solve this problem without having to make debuggers
aware of the issue. Across system calls, several registers are
considered volatile and can be safely clobbered.
Therefore we use the pt_regs save area of one of those registers, %g6,
as a place to save and restore orig_i0.
Debuggers transparently will do the right thing because they save and
restore this register already.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 21f74d361d ]
This is setting things up so that we can correct the return
value, so that it properly returns the original destination
buffer pointer.
Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Kjetil Oftedal <oftedal@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 3e37fd3153 ]
To handle the large physical addresses, just make a simple wrapper
around remap_pfn_range() like MIPS does.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 0b64120cce ]
Some of the sun4v code patching occurs in inline functions visible
to, and usable by, modules.
Therefore we have to patch them up during module load.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit b1f44e13a5 ]
The "(insn & 0x01800000) != 0x01800000" test matches 'restore'
but that is a legitimate place to see the %lo() part of a 32-bit
symbol relocation, particularly in tail calls.
Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 7cc8583372 ]
This silently was working for many years and stopped working on
Niagara-T3 machines.
We need to set the MSIQ to VALID before we can set its state to IDLE.
On Niagara-T3, setting the state to IDLE first was causing HV_EINVAL
errors. The hypervisor documentation says, rather ambiguously, that
the MSIQ must be "initialized" before one can set the state.
I previously understood this to mean merely that a successful setconf()
operation has been performed on the MSIQ, which we have done at this
point. But it seems to also mean that it has been set VALID too.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Upstream commit: 911ae9434f
There's a bug in the MSIX backup and restore routines that causes a crash on
non-x86 (direct access to PCI space not via read/write). These routines are
unnecessary and were removed by the above commit, so also remove them from
stable to fix the crash.
Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 77e00f2ea9 upstream.
We already do this for cayman; we need to also do it for
BTC parts. The default memory and voltage setup is not
adequate for advanced operation. Continuing will
result in an unusable display.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e67d668e14 upstream.
This patch makes use of the set_memory_x() kernel API in order
to make necessary BIOS calls to source NMIs.
This is needed for SLES11 SP2 and the latest upstream kernel as it appears
the NX Execute Disable has grown in its control.
Signed-off-by: Thomas Mingarelli <thomas.mingarelli@hp.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e6780f7243 upstream.
It was found (by Sasha) that if you use a futex located in the gate
area we get stuck in an uninterruptible infinite loop, much like the
ZERO_PAGE issue.
While looking at this problem, PeterZ realized you'll get into similar
trouble when hitting any install_special_pages() mapping. And are there
still drivers setting up their own special mmaps without page->mapping,
and without special VM or pte flags to make get_user_pages fail?
In most cases, if page->mapping is NULL, we do not need to retry at all:
Linus points out that even /proc/sys/vm/drop_caches poses no problem,
because it ends up using remove_mapping(), which takes care not to
interfere when the page reference count is raised.
But there is still one case which does need a retry: if memory pressure
called shmem_writepage in between get_user_pages_fast dropping page
table lock and our acquiring page lock, then the page gets switched from
filecache to swapcache (and ->mapping set to NULL) whatever the refcount.
Fault it back in to get the page->mapping needed for key->shared.inode.
Reported-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 55205c916e upstream.
This change fixes a linking problem, which happens if oprofile
is selected to be compiled as built-in:
`oprofile_arch_exit' referenced in section `.init.text' of
arch/arm/oprofile/built-in.o: defined in discarded section
`.exit.text' of arch/arm/oprofile/built-in.o
The problem appeared after commit 87121ca504, which
introduced oprofile_arch_exit() calls from an __init function. Note
that the aforementioned commit has been backported to stable
branches, and the problem is known to be reproducible at least
with 3.0.13 and 3.1.5 kernels.
Signed-off-by: Vladimir Zapolskiy <vladimir.zapolskiy@nokia.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: oprofile-list <oprofile-list@lists.sourceforge.net>
Link: http://lkml.kernel.org/r/20111222151540.GB16765@erda.amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5776ac2eb3 upstream.
According to the i.MX PWM reference manual, the real period value is the
PERIOD value in PWMPR plus 2:
PWMO (Hz) = PCLK (Hz) / (period + 2)
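A small sketch of the arithmetic above; the helper name is made up and this
is not the driver code:

/* Given the PWM input clock and the desired output frequency, compute the
 * value to program into PWMPR, since
 * PWMO = PCLK / (period + 2)  =>  period = PCLK / PWMO - 2. */
static unsigned long imx_pwm_period_reg(unsigned long pclk_hz,
					unsigned long pwmo_hz)
{
	unsigned long div = pclk_hz / pwmo_hz;

	return div >= 2 ? div - 2 : 0;
}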
Signed-off-by: Jason Chen <jason.chen@linaro.org>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e30e2fdfe5 upstream.
Currently, the *_global_[un]lock_online() routines are not at all synchronized
with CPU hotplug. Soft-lockups detected as a consequence of this race were
reported earlier at https://lkml.org/lkml/2011/8/24/185. (Thanks to Cong Meng
for finding out that the root-cause of this issue is the race condition
between br_write_[un]lock() and CPU hotplug, which results in the lock states
getting messed up).
Fixing this race by just adding {get,put}_online_cpus() at appropriate places
in *_global_[un]lock_online() is not a good option, because, then suddenly
br_write_[un]lock() would become blocking, whereas they have been kept as
non-blocking all this time, and we would want to keep them that way.
So, overall, we want to ensure 3 things:
1. br_write_lock() and br_write_unlock() must remain as non-blocking.
2. The corresponding lock and unlock of the per-cpu spinlocks must not happen
for different sets of CPUs.
3. Either prevent any new CPU online operation in between this lock-unlock, or
ensure that the newly onlined CPU does not proceed with its corresponding
per-cpu spinlock unlocked.
To achieve all this:
(a) We introduce a new spinlock that is taken by the *_global_lock_online()
routine and released by the *_global_unlock_online() routine.
(b) We register a callback for CPU hotplug notifications, and this callback
takes the same spinlock as above.
(c) We maintain a bitmap which is close to the cpu_online_mask, and once it is
initialized in the lock_init() code, all future updates to it are done in
the callback, under the above spinlock.
(d) The above bitmap is used (instead of cpu_online_mask) while locking and
unlocking the per-cpu locks.
The callback takes the spinlock upon the CPU_UP_PREPARE event. So, if the
br_write_lock-unlock sequence is in progress, the callback keeps spinning,
thus preventing the CPU online operation till the lock-unlock sequence is
complete. This takes care of requirement (3).
The bitmap that we maintain remains unmodified throughout the lock-unlock
sequence, since all updates to it are managed by the callback, which takes
the same spinlock as the one taken by the lock code and released only by the
unlock routine. Combining this with (d) above, satisfies requirement (2).
Overall, since we use a spinlock (mentioned in (a)) to prevent CPU hotplug
operations from racing with br_write_lock-unlock, requirement (1) is also
taken care of.
By the way, it is to be noted that a CPU offline operation can actually run
in parallel with our lock-unlock sequence, because our callback doesn't react
to notifications earlier than CPU_DEAD (in order to maintain our bitmap
properly). And this means, since we use our own bitmap (which is stale, on
purpose) during the lock-unlock sequence, we could end up unlocking the
per-cpu lock of an offline CPU (because we had locked it earlier, when the
CPU was online), in order to satisfy requirement (2). But this is harmless,
though it looks a bit awkward.
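A much-simplified sketch of (a)-(d) follows; all names are illustrative, the
real patch lives in the lglock macros and also contains the hotplug notifier
that updates the bitmap on CPU_UP_PREPARE/CPU_DEAD while holding the same
spinlock:

#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/bitops.h>

/* (a) spinlock held across the whole lock..unlock window; the hotplug
 *     callback takes the same lock on CPU_UP_PREPARE, so a CPU cannot
 *     come online in the middle of the sequence.                       */
static DEFINE_SPINLOCK(online_lock);

/* (c) our own copy of the online mask, updated only by the callback
 *     while holding online_lock.                                       */
static DECLARE_BITMAP(lock_cpus, NR_CPUS);

static void example_global_lock_online(spinlock_t __percpu *locks)
{
	int cpu;

	spin_lock(&online_lock);			/* non-blocking: a spinlock, not a mutex */
	for_each_set_bit(cpu, lock_cpus, NR_CPUS)	/* (d) lock per-cpu locks from our bitmap */
		spin_lock(per_cpu_ptr(locks, cpu));
}

static void example_global_unlock_online(spinlock_t __percpu *locks)
{
	int cpu;

	for_each_set_bit(cpu, lock_cpus, NR_CPUS)	/* same set of CPUs as the lock side */
		spin_unlock(per_cpu_ptr(locks, cpu));
	spin_unlock(&online_lock);			/* now CPU_UP_PREPARE may proceed */
}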
Debugged-by: Cong Meng <mc@linux.vnet.ibm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 78feb35b81 upstream.
My previous patch,
34a5b4b6af ("iwlwifi: do not re-configure
HT40 after associated"),
fixed the case of HT40 after association on a specified AP, but it broke
association for some APs, making it impossible to establish a connection.
We need to address HT40 both before and after association.
Reported-by: Andrej Gelenberg <andrej.gelenberg@udo.edu>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Tested-by: Andrej Gelenberg <andrej.gelenberg@udo.edu>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 123877b80e upstream.
Check the IEEE80211_TX_CTL_ASSIGN_SEQ flag from mac80211, then decide how to
set the TX_CMD_FLG_SEQ_CTL_MSK bit. Setting the wrong bit in a BAR frame will
make the firmware increment the sequence number, which is incorrect and
causes unknown behavior.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 10636bc2d6 upstream.
The station always chooses 1Mbps for all transmitted frames
whenever the AP is configured to lock the supported rates.
As the max phy rate is always set to the 4th from the highest phy rate,
this assumption might be wrong if we have fewer than that. Fix that.
Cc: Paul Stewart <pstew@google.com>
Reported-by: Ajay Gummalla <agummalla@google.com>
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f83f71fda2 upstream.
With 16-bit RGB565 colour format pixels are stored by the device in memory
in the following order:
| b3  | b2  | b1  | b0  |
+-----+-----+-----+-----+
| R5 G6 B5  | R5 G6 B5  |
This corresponds to V4L2_PIX_FMT_RGB565 fourcc, not V4L2_PIX_FMT_RGB565X.
This change is required to avoid trouble when setting up video pipeline
with the s5p-tv devices, so the colour formats at both devices can be
properly matched.
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e6f67b8c05 upstream.
lockdep reports a deadlock in jfs because a special inode's rw semaphore
is taken recursively. The mapping's gfp mask is GFP_NOFS, but is not
used when __read_cache_page() calls add_to_page_cache_lru().
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Acked-by: Hugh Dickins <hughd@google.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e0197aae59 upstream.
There is a BUG when migrating a PF_EXITING proc. Since css_set_prefetch()
is not called for the PF_EXITING case, find_existing_css_set() will return
NULL inside cgroup_task_migrate() causing a BUG.
This bug is easy to reproduce. Create a zombie and echo its pid to
cgroup.procs.
$ cat zombie.c
#include <unistd.h>

int main()
{
	if (fork())
		pause();
	return 0;
}
$
We are hitting this bug pretty regularly on ChromeOS.
This bug is already fixed by Tejun Heo's cgroup patchset, which is
targeted for the next merge window:
https://lkml.org/lkml/2011/11/1/356
I've created a smaller patch here which just fixes this bug so that a
fix can be merged into the current release and stable.
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Downstream-Bug-Report: http://crosbug.com/23953
Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: containers@lists.linux-foundation.org
Cc: cgroups@vger.kernel.org
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul Menage <paul@paulmenage.org>
Cc: Olof Johansson <olofj@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 111d489f0f upstream.
Currently, the code assumes that the SEQUENCE status bits are mutually
exclusive. They are not...
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 913050b91e upstream.
If oprofilefs_ulong_from_user() is called with count equals
zero, *val remains unchanged. Depending on the implementation it
might be uninitialized.
Change oprofilefs_ulong_from_user()'s interface to return count
on success. Thus, we are able to return early if count equals
zero, which avoids using *val uninitialized. Fix all users of
oprofilefs_ulong_from_user().
This follows write syscall implementation when count is zero:
"If count is zero ... [and if] no errors are detected, 0 will be
returned without causing any other effect." (man 2 write)
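A hedged sketch of the new contract; the function name and buffer size are
illustrative, not the exact oprofilefs code:

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

/* Return count on success; return 0 early when count == 0 so that *val
 * is never left uninitialized for the caller to consume. */
static ssize_t example_ulong_from_user(unsigned long *val,
				       const char __user *buf, size_t count)
{
	char tmp[32];

	if (!count)
		return 0;		/* write(2) semantics: no effect */
	if (count >= sizeof(tmp))
		return -EINVAL;
	if (copy_from_user(tmp, buf, count))
		return -EFAULT;
	tmp[count] = '\0';
	if (kstrtoul(tmp, 0, val))
		return -EINVAL;
	return count;			/* callers now only check for < 0 */
}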
Reported-By: Mike Waychison <mikew@google.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: oprofile-list <oprofile-list@lists.sourceforge.net>
Link: http://lkml.kernel.org/r/20111219153830.GH16765@erda.amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ff05b6f7ae upstream.
An integer overflow will happen on 64-bit archs if a task's sum of rss,
swapents and nr_ptes exceeds (2^31)/1000. This was introduced by
commit
f755a04 oom: use pte pages in OOM score
where the oom score computation was divided into several steps and is no
longer computed as one expression in unsigned long (rss, swapents, nr_ptes
are unsigned long), with the result assigned to points (an int) in the
range 1..1000. So there could be an int overflow while computing
176 points *= 1000;
and points may end up negative, meaning the oom score for a mem hog task
will be one.
196 if (points <= 0)
197 return 1;
For example:
[ 3366] 0 3366 35390480 24303939 5 0 0 oom01
Out of memory: Kill process 3366 (oom01) score 1 or sacrifice child
Here the oom01 process consumes more than 24303939(rss)*4096~=92GB of physical
memory, but its oom score is one.
In this situation the mem hog task is skipped and the oom killer kills another,
most probably innocent, task with an oom score greater than one.
The points variable should be of type long instead of int to prevent the
int overflow.
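A small user-space demonstration of the overflow, using the rss value from
the example above (signed overflow is formally undefined, but this is the
typical observed behaviour on a 64-bit system):

#include <stdio.h>

int main(void)
{
	unsigned long rss = 24303939;	/* pages, from the oom01 example */
	int points_int = rss;		/* old type of 'points'          */
	long points_long = rss;		/* proposed type                 */

	points_int *= 1000;		/* wraps in a 32-bit int: negative */
	points_long *= 1000;		/* fine in a 64-bit long           */

	printf("int:  %d\nlong: %ld\n", points_int, points_long);
	return 0;
}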
Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3d3c8f93a2 upstream.
binary_sysctl() calls sysctl_getname(), which allocates from the names_cache
slab using __getname().
The matching function to free the name is __putname(), and not putname(),
which should be used only to match getname() allocations.
This is because when auditing is enabled, putname() calls audit_putname()
*instead of* (not in addition to) __putname(). Then, if a syscall is in
progress, audit_putname does not release the name - instead, it expects
the name to get released when the syscall completes, but that will happen
only if audit_getname() was called previously, i.e. if the name was
allocated with getname() rather than the naked __getname(). So,
__getname() followed by putname() ends up leaking memory.
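A minimal sketch of the pairing rule, assuming a kernel-internal caller
similar to binary_sysctl(); the function name is illustrative:

#include <linux/fs.h>
#include <linux/limits.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

/* A buffer obtained with the naked __getname() must be released with
 * __putname(); putname() may hand it to the audit code instead of
 * freeing it, which leaks when no matching getname() was recorded. */
static int example_copy_name(const char __user *uname)
{
	char *name = __getname();
	long len;

	if (!name)
		return -ENOMEM;

	len = strncpy_from_user(name, uname, PATH_MAX);

	__putname(name);	/* matches __getname(), not putname() */
	return len < 0 ? (int)len : 0;
}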
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Eric Paris <eparis@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9f57bd4d6d upstream.
per_cpu_ptr_to_phys() incorrectly rounds up its result for the non-kmalloc
case to the page boundary, which is bogus for any non-page-aligned
address.
This affects the only in-tree user of this function - sysfs handler
for per-cpu 'crash_notes' physical address. The trouble is that the
crash_notes per-cpu variable is not page-aligned:
crash_notes = 0xc08e8ed4
PER-CPU OFFSET VALUES:
CPU 0: 3711f000
CPU 1: 37129000
CPU 2: 37133000
CPU 3: 3713d000
So, the per-cpu addresses are:
crash_notes on CPU 0: f7a07ed4 => phys 36b57ed4
crash_notes on CPU 1: f7a11ed4 => phys 36b4ded4
crash_notes on CPU 2: f7a1bed4 => phys 36b43ed4
crash_notes on CPU 3: f7a25ed4 => phys 36b39ed4
However, /sys/devices/system/cpu/cpu*/crash_notes says:
/sys/devices/system/cpu/cpu0/crash_notes: 36b57000
/sys/devices/system/cpu/cpu1/crash_notes: 36b4d000
/sys/devices/system/cpu/cpu2/crash_notes: 36b43000
/sys/devices/system/cpu/cpu3/crash_notes: 36b39000
As you can see, all values are rounded down to a page
boundary. Consequently, this is where kexec sets up the NOTE segments,
and thus where the secondary kernel is looking for them. However, when
the first kernel crashes, it saves the notes to the unaligned
addresses, where they are not found.
Fix it by adding offset_in_page() to the translated page address.
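A hedged sketch of the fix idea for the vmalloc-backed case only; the real
per_cpu_ptr_to_phys() also handles kmalloc'ed and embedded first-chunk
addresses, and the helper name here is made up:

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/io.h>

/* Translate a per-cpu address to a physical address without losing the
 * offset inside the page, so an unaligned variable such as crash_notes
 * is no longer reported rounded to the page boundary. */
static phys_addr_t example_percpu_to_phys(void *addr)
{
	struct page *page = vmalloc_to_page(addr);

	return page_to_phys(page) + offset_in_page(addr);
}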
-tj: Combined Eugene's and Petr's commit messages.
Signed-off-by: Eugene Surovegin <ebs@ebshome.net>
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Petr Tesarik <ptesarik@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8521478f67 upstream.
Synaptics touchpads on several Dell laptops, particularly Vostro V13
systems, may not respond properly to PS/2 commands and queries immediately
after resuming from suspend to RAM. This leads to an unresponsive touchpad
after a suspend/resume cycle.
Adding a 1-second delay after resetting the device allows the touchpad to
finish initializing (calibrating?) and start reacting properly.
Reported-by: Daniel Manrique <daniel.manrique@canonical.com>
Tested-by: Daniel Manrique <daniel.manrique@canonical.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 329456d1ff upstream.
This fixes a Data bus error on some SoCs. The first fix for this
problem did not solve it on all devices.
commit 6ae8ec2786
Author: Rafał Miłecki <zajec5@gmail.com>
Date: Tue Jul 5 17:25:32 2011 +0200
ssb: fix init regression of hostmode PCI core
In ssb_pcicore_fix_sprom_core_index() the sprom on the PCI core is
accessed, but the sprom only exists when the ssb bus is connected over
a PCI bus to the rest of the system and not when the SSB Bus is the
main system bus. SoCs sometimes have a PCI host controller, and there
this code will not be executed, but there are some old SoCs with a PCI
controller in client mode around, and ssb_pcicore_fix_sprom_core_index()
should not be called on these devices either. The PCI controller on these
devices is unused, but without this fix it results in a Data bus
error when it gets initialized.
Cc: Michael Buesch <m@bues.ch>
Cc: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5151412dd4 upstream.
struct request_queue is allocated with __GFP_ZERO so its "node" field is
zero before initialization. This causes an oops if node 0 is offline in
the page allocator because its zonelists are not initialized. From Dave
Young's dmesg:
SRAT: Node 1 PXM 2 0-d0000000
SRAT: Node 1 PXM 2 100000000-330000000
SRAT: Node 0 PXM 1 330000000-630000000
Initmem setup node 1 0000000000000000-000000000affb000
...
Built 1 zonelists in Node order, mobility grouping on.
...
BUG: unable to handle kernel paging request at 0000000000001c08
IP: [<ffffffff8111c355>] __alloc_pages_nodemask+0xb5/0x870
and __alloc_pages_nodemask+0xb5 translates to a NULL pointer on
zonelist->_zonerefs.
The fix is to initialize q->node at the time of allocation so the correct
node is passed to the slab allocator later.
Since blk_init_allocated_queue_node() is no longer needed, merge it with
blk_init_allocated_queue().
[rientjes@google.com: changelog, initializing q->node]
Reported-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Tested-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 15062e6a85 upstream.
Emmanuel noticed that when mac80211 stops the queues
for aggregation that can leave a packet pending. This
packet will be given to the driver after the AMPDU
callback, but as a non-aggregated packet which messes
up the sequence number etc.
I also noticed by looking at the code that if packets
are being processed while we clear the WANT_START bit,
they might see it cleared already and queue up on
tid_tx->pending. If the driver then rejects the new
aggregation session we leak the packet.
Fix both of these issues by changing this code to not
stop the queues at all. Instead, let packets queue up
on the tid_tx->pending queue instead of letting them
get to the driver, and add code to recover properly
in case the driver rejects the session.
(The patch looks large because it has to move two
functions to before their new use.)
Reported-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f6a290b419 upstream.
_scsih_smart_predicted_fault is called in an interrupt and therefore
must allocate memory using GFP_ATOMIC.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 44f747fff6 upstream.
zfcp_scsi_slave_destroy erroneously always tried to finish its task
even if the corresponding previous zfcp_scsi_slave_alloc returned
early. This can lead to kernel page faults on accessing uninitialized
fields of struct zfcp_scsi_dev in zfcp_erp_lun_shutdown_wait. Take the
port field of the struct to determine if slave_alloc returned early.
This zfcp bug is exposed by 4e6c82b (in turn fixing f7c9c6b to be
compatible with 21208ae) which can call slave_destroy for a
corresponding previous slave_alloc that did not finish.
This patch is based on James Bottomley's fix suggestion in
http://www.spinics.net/lists/linux-scsi/msg55449.html.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5eb46851de upstream.
cfq_cic_link() has a race condition. When some processes which share an ioc
issue I/O to the same block device simultaneously, cfq_cic_link() sometimes
returns -EEXIST. The race condition might stop I/O via the following steps:
step 1: Process A: Issue an I/O to /dev/sda
step 2: Process A: Get an ioc (iocA here) in get_io_context() which does not
linked with a cic for the device
step 3: Process A: Get a new cic for the device (cicA here) in
cfq_alloc_io_context()
step 4: Process B: Issue an I/O to /dev/sda
step 5: Process B: Get iocA in get_io_context() since process A and B share the
same ioc
step 6: Process B: Get a new cic for the device (cicB here) in
cfq_alloc_io_context() since iocA has not been linked with a
cic for the device yet
step 7: Process A: Link cicA to iocA in cfq_cic_link()
step 8: Process A: Dispatch I/O to driver and finish it
step 9: Process B: Try to link cicB to iocA in cfq_cic_link()
But it fails with showing "cfq: cic link failed!" kernel
message, since iocA has already linked with cicA at step 7.
step 10: Process B: Wait for finishing I/O in get_request_wait()
The function does not wake up, when there is no I/O to the
device.
When cfq_cic_link() returns -EEXIST, it means the ioc has already been linked
with a cic. So when cfq_cic_link() returns -EEXIST, retry cfq_cic_lookup().
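A rough sketch of the retry, using the function names from the description
above; the surrounding cfq_get_io_context() plumbing, the error handling and
the cic-freeing helper are paraphrased, not quoted from the patch:

/* Sketch only: the shape of the retry inside cfq-iosched.c. */
static struct cfq_io_context *example_get_cic(struct cfq_data *cfqd,
					      struct io_context *ioc,
					      gfp_t gfp_mask)
{
	struct cfq_io_context *cic;

retry:
	cic = cfq_cic_lookup(cfqd, ioc);
	if (cic)
		return cic;			/* someone already linked one */

	cic = cfq_alloc_io_context(cfqd, gfp_mask);
	if (!cic)
		return NULL;

	if (cfq_cic_link(cfqd, ioc, cic, gfp_mask) == -EEXIST) {
		cfq_cic_free(cic);		/* lost the race: drop ours   */
		goto retry;			/* and pick up the winner's   */
	}
	return cic;
}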
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2984ff38cc upstream.
If we fail allocating the blkpg stats, we free cfqd and cfqq.
But we need to free the IDA cfqd->cic_index as well.
Signed-off-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4ed0b57745 upstream.
This prevents an in-kernel division by zero which happens when we are
asking for i915_chipset_val too quickly, or within a race condition
between the power monitoring thread and userspace accesses via debugfs.
The issue can be reproduced easily via the following command:
while ``; do cat /sys/kernel/debug/dri/0/i915_emon_status; done
This is particularly dangerous because it can be triggered by
a non-privileged user by just reading the debugfs entry.
This issue was also found independently by Konstantin Belousov
<kostikbel@gmail.com>, who proposed a similar patch.
Reported-by: Konstantin Belousov <kostikbel@gmail.com>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c3b79770e5 upstream.
The m41t80 driver can read and set the alarm, but it doesn't
seem to have a functional alarm irq.
This causes failures when the generic core sees alarm functions,
but then cannot use them properly for things like UIE mode.
Disabling the alarm functions allows proper error reporting,
and possible fallback to emulated modes. Once someone fixes
the alarm irq functionality, this can be restored.
CC: Matt Turner <mattst88@gmail.com>
CC: Nico Macrionitis <acrux@cruxppc.org>
CC: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Reported-by: Matt Turner <mattst88@gmail.com>
Reported-by: Nico Macrionitis <acrux@cruxppc.org>
Tested-by: Nico Macrionitis <acrux@cruxppc.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 72b36015ba upstream.
Same fix as 731abb9cb2 for ipip and sit tunnel.
Commit 1c5cae815d removed an explicit call to dev_alloc_name in
ipip_tunnel_locate and ipip6_tunnel_locate, because register_netdevice
will now create a valid name, however the tunnel keeps a copy of the
name in the private parms structure. Fix this by copying the name back
after register_netdevice has successfully returned.
This shows up if you do a simple tunnel add, followed by a tunnel show:
$ sudo ip tunnel add mode ipip remote 10.2.20.211
$ ip tunnel
tunl0: ip/ip remote any local any ttl inherit nopmtudisc
tunl%d: ip/ip remote 10.2.20.211 local any ttl inherit
$ sudo ip tunnel add mode sit remote 10.2.20.212
$ ip tunnel
sit0: ipv6/ip remote any local any ttl 64 nopmtudisc 6rd-prefix 2002::/16
sit%d: ioctl 89f8 failed: No such device
sit%d: ipv6/ip remote 10.2.20.212 local any ttl inherit
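The shape of the fix, sketched with illustrative variable and label names
(the real change touches both ipip_tunnel_locate() and ipip6_tunnel_locate()):

	/* After register_netdevice() has resolved a "%d" template into the
	 * real interface name, copy it back into the tunnel's private parms
	 * so that the tunnel ioctls ("ip tunnel") report a valid name. */
	err = register_netdevice(dev);
	if (err < 0)
		goto failed_free;

	strcpy(nt->parms.name, dev->name);	/* keep parms.name in sync */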
Signed-off-by: Ted Feng <artisdom@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e5fe29c719 upstream.
Commit 10299e2e4e (ARM: RX-51:
Enable isp1704 power on/off) added power management for isp1704.
However, the transceiver should be powered on by default,
otherwise USB doesn't work at all for networking during
boot.
All kernels after v3.0 are affected.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Reviewed-by: Sebastian Reichel <sre@debian.org>
[tony@atomide.com: updated comments]
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 82e14e8bdd upstream.
For cards that have two or more DAIs, snd_soc_resume's loop over all
DAIs ends up calling schedule_work(deferred_resume_work) once per DAI.
Since this is the same work item each time, the 2nd and subsequent
calls return 0 (work item already queued), and trigger the dev_err
message below stating that a work item may have been lost.
Solve this by adjusting the loop to simply calculate whether to run the
resume work immediately or defer it, and then call schedule work (or not)
one time based on that.
Note: This has not been tested in mainline, but only in chromeos-2.6.38;
mainline doesn't support suspend/resume on Tegra, nor does the mainline
Tegra ASoC driver contain multiple DAIs. It has been compile-checked in
mainline.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 02a551c975 upstream.
Huawei use the product code HUAWEI_PRODUCT_E353 (0x1506) for a
number of different devices, which each can appear with a number
of different descriptor sets. Different types of interfaces
can be identified by looking at the subclass and protocol fields.
Subclass 1 protocol 8 is actually the data interface of a CDC
ECM set, with subclass 1 protocol 9 as the control interface.
Neither supports serial data communication, and they cannot therefore
be supported by this driver.
At the same time, add a few other sets which appear if the
device is configured in "Windows mode" using this modeswitch
message:
55534243000000000000000000000011060000000100000000000000000000
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 414b591fd1 upstream.
This patch adds the controlling interfaces for the Huawei E398.
Thanks to Bjørn Mork <bjorn@mork.no> for extracting the interface
numbers from the windows driver.
Signed-off-by: Alex Hermann <alex@wenlex.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6abff5dc4d upstream.
Add USB IDs for Motorola H24 HSPA USB module.
Signed-off-by: Krzysztof Hałasa <khalasa@piap.pl>
Acked-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 935a9fee51 upstream.
Found one system with UEFI/iBFT where the kernel does not detect the iBFT
during iscsi_ibft module loading.
Root cause: on x86 (UEFI), we are calling find_ibft_region() much earlier
- specifically in setup_arch() before ACPI is enabled.
Try to split the ACPI checking code out and call that later.
At that time the ACPI iBFT has already been permanently mapped with ioremap,
so isa_virt_to_bus() will get the wrong physical address from the right
virtual address. We could just skip that physical address printing.
For the legacy one, print the found address early.
-v2: update comments and description according to Konrad.
-v3: fix problem about module use case that is found by Konrad.
-v4: use acpi_get_table() instead of acpi_table_parse() to handle module use case that is found by Konrad again..
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 48706d0a91 upstream.
Fix two bugs in fuse_retrieve():
- retrieving more than one page would yield repeated instances of the
first page
- if more than FUSE_MAX_PAGES_PER_REQ pages were requested then the
request page array would overflow
fuse_retrieve() was added in 2.6.36 and these bugs had been there since the
beginning.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5a0dc7365c upstream.
We need to zero out the part of a page which is beyond EOF before setting it
uptodate; otherwise, a mapped read or write will see non-zero data beyond EOF.
Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 13a79a4741 upstream.
If there is an unwritten but clean buffer in a page and there is a
dirty buffer after the buffer, then mpage_submit_io does not write the
dirty buffer out. As a result, da_writepages loops forever.
This patch fixes the problem by checking dirty flag.
Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ea51d132db upstream.
If the pte mapping in generic_perform_write() is unmapped between
iov_iter_fault_in_readable() and iov_iter_copy_from_user_atomic(), the
"copied" parameter to ->end_write can be zero. ext4 couldn't cope with
it with delayed allocations enabled. This skips the i_disksize
enlargement logic if copied is zero and no new data was appended to
the inode.
gdb> bt
#0 0xffffffff811afe80 in ext4_da_should_update_i_disksize (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x1\
08000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2467
#1 ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0\
xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
#2 0xffffffff810d97f1 in generic_perform_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value o\
ptimized out>, pos=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2440
#3 generic_file_buffered_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value optimized out>, p\
os=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2482
#4 0xffffffff810db5d1 in __generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, ppos=0\
xffff88001e26be40) at mm/filemap.c:2600
#5 0xffffffff810db853 in generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=<value optimi\
zed out>, pos=<value optimized out>) at mm/filemap.c:2632
#6 0xffffffff811a71aa in ext4_file_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, pos=0x108000) a\
t fs/ext4/file.c:136
#7 0xffffffff811375aa in do_sync_write (filp=0xffff88003f606a80, buf=<value optimized out>, len=<value optimized out>, \
ppos=0xffff88001e26bf48) at fs/read_write.c:406
#8 0xffffffff81137e56 in vfs_write (file=0xffff88003f606a80, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x4\
000, pos=0xffff88001e26bf48) at fs/read_write.c:435
#9 0xffffffff8113816c in sys_write (fd=<value optimized out>, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x\
4000) at fs/read_write.c:487
#10 <signal handler called>
#11 0x00007f120077a390 in __brk_reservation_fn_dmi_alloc__ ()
#12 0x0000000000000000 in ?? ()
gdb> print offset
$22 = 0xffffffffffffffff
gdb> print idx
$23 = 0xffffffff
gdb> print inode->i_blkbits
$24 = 0xc
gdb> up
#1 ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0\
xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
2512 if (ext4_da_should_update_i_disksize(page, end)) {
gdb> print start
$25 = 0x0
gdb> print end
$26 = 0xffffffffffffffff
gdb> print pos
$27 = 0x108000
gdb> print new_i_size
$28 = 0x108000
gdb> print ((struct ext4_inode_info *)((char *)inode-((int)(&((struct ext4_inode_info *)0)->vfs_inode))))->i_disksize
$29 = 0xd9000
gdb> down
2467 for (i = 0; i < idx; i++)
gdb> print i
$30 = 0xd44acbee
This is 100% reproducible with some autonuma development code tuned in
a very aggressive manner (not normal way even for knumad) which does
"exotic" changes to the ptes. It wouldn't normally trigger but I don't
see why it can't happen normally if the page is added to swap cache in
between the two faults leading to "copied" being zero (which then
hangs in ext4). So it should be fixed. Especially possible with lumpy
reclaim (albeit disabled if compaction is enabled) as that would
ignore the young bits in the ptes.
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fc6cb1cda5 upstream.
/proc/mounts was showing the mount option [no]init_inode_table when
the correct mount option that will be accepted by parse_options() is
[no]init_itable.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d3db728125 upstream.
d312ae878b "xen: use maximum reservation to limit amount of usable RAM"
clamped the total amount of RAM to the current maximum reservation. This is
correct for dom0 but is not correct for guest domains. In order to boot a guest
"pre-ballooned" (e.g. with memory=1G but maxmem=2G) in order to allow for
future memory expansion the guest must derive max_pfn from the e820 provided by
the toolstack and not the current maximum reservation (which can reflect only
the current maximum, not the guest lifetime max). The existing algorithm
already behaves correctly if we do not artificially limit the maximum
number of pages for the guest case.
For a guest booted with maxmem=512, memory=128 this results in:
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] Xen: 0000000000000000 - 00000000000a0000 (usable)
[ 0.000000] Xen: 00000000000a0000 - 0000000000100000 (reserved)
-[ 0.000000] Xen: 0000000000100000 - 0000000008100000 (usable)
-[ 0.000000] Xen: 0000000008100000 - 0000000020800000 (unusable)
+[ 0.000000] Xen: 0000000000100000 - 0000000020800000 (usable)
...
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] DMI not present or invalid.
[ 0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
[ 0.000000] e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
-[ 0.000000] last_pfn = 0x8100 max_arch_pfn = 0x1000000
+[ 0.000000] last_pfn = 0x20800 max_arch_pfn = 0x1000000
[ 0.000000] initial memory mapped : 0 - 027ff000
[ 0.000000] Base memory trampoline at [c009f000] 9f000 size 4096
-[ 0.000000] init_memory_mapping: 0000000000000000-0000000008100000
-[ 0.000000] 0000000000 - 0008100000 page 4k
-[ 0.000000] kernel direct mapping tables up to 8100000 @ 27bb000-27ff000
+[ 0.000000] init_memory_mapping: 0000000000000000-0000000020800000
+[ 0.000000] 0000000000 - 0020800000 page 4k
+[ 0.000000] kernel direct mapping tables up to 20800000 @ 26f8000-27ff000
[ 0.000000] xen: setting RW the range 27e8000 - 27ff000
[ 0.000000] 0MB HIGHMEM available.
-[ 0.000000] 129MB LOWMEM available.
-[ 0.000000] mapped low ram: 0 - 08100000
-[ 0.000000] low ram: 0 - 08100000
+[ 0.000000] 520MB LOWMEM available.
+[ 0.000000] mapped low ram: 0 - 20800000
+[ 0.000000] low ram: 0 - 20800000
With this change "xl mem-set <domain> 512M" will successfully increase the
guest RAM (by reducing the balloon).
There is no change for dom0.
Reported-and-Tested-by: George Shuklin <george.shuklin@gmail.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 355840e7a7 upstream.
commit a847627709 in linux-3.0.9
attempted to backport this to 3.0 but only made one change where two
were necessary. This adds the second change.
This bug was introduced in 415e72d034
which was in 2.6.36.
There is a small window of time between when a device fails and when
it is removed from the array. During this time we might still read
from it, but we won't write to it - so it is possible that we could
read stale data.
We didn't need the test of 'Faulty' before because the test on
In_sync is sufficient. Since we started allowing reads from the early
part of non-In_sync devices we need a test on Faulty too.
This is suitable for any kernel from 2.6.36 onwards, though the patch
might need a bit of tweaking in 3.0 and earlier.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 859f57ca00 upstream.
[slightly different from the upstream version because of a previous cleanup]
Currently xfs_attr_inactive causes a synchronous transaction if we are
removing a file that has any extents allocated to the attribute fork, and
thus makes XFS extremely slow at removing files with out of line extended
attributes. The code looks like a relic from the days before the busy
extent list, but with the busy extent list we avoid reusing data and attr
extents that have been freed but not committed yet, so this code is just
as superfluous as the synchronous transactions for data blocks.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c29f7d457a upstream.
The i_ino field in the VFS inode is of type unsigned long and thus can't
hold the full 64-bit inode number on 32-bit kernels. We have the full
inode number in the XFS inode, so use that one for nfs exports. Note
that I've also switched the 32-bit file handle types to it, just to make
the code more consistent and copy & paste errors less likely to happen.
Reported-by: Guoquan Yang <ygq51@hotmail.com>
Reported-by: Hank Peng <pengxihan@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is for stable kernel branch 3.0 only. Previous and later versions
have different code paths and are not affected by this bug.
This is the same fix as "hwmon: (coretemp) Fix oops on driver load"
but for the CPU offlining case. Sorry for missing it at first.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Durgadoss R <durgadoss.r@intel.com>
Acked-by: Guenter Roeck <guenter.roeck@ericsson.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 434a964daa upstream.
Clement Lecigne reports a filesystem which causes a kernel oops in
hfs_find_init() trying to dereference sb->ext_tree which is NULL.
This proves to be because the filesystem has a corrupted MDB extent
record, where the extents file does not fit into the first three extents
in the file record (the first blocks).
In hfs_get_block() when looking up the blocks for the extent file
(HFS_EXT_CNID), it fails the first blocks special case, and falls
through to the extent code (which ultimately calls hfs_find_init())
which is in the process of being initialised.
HFS avoids this scenario by always having the extents B-tree fit
into the first blocks (the extents B-tree can't have overflow extents).
The fix is to check at mount time that the B-tree fits into the first
blocks, i.e. fail if HFS_I(inode)->alloc_blocks >=
HFS_I(inode)->first_blocks.
Note, the existing commit 47f365eb57 ("hfs: fix oops on mount with
corrupted btree extent records") becomes subsumed into this as a special
case, but only for the extents B-tree (HFS_EXT_CNID); it is perfectly
acceptable for the catalog B-tree file to grow beyond three extents,
with the remaining extent descriptors in the extents overflow file.
This fixes CVE-2011-2203
Reported-by: Clement LECIGNE <clement.lecigne@netasq.com>
Signed-off-by: Phillip Lougher <plougher@redhat.com>
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Moritz Mühlenhoff <jmm@inutil.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1a51410abe upstream.
Ok, this isn't optimal, since it means that 'iotop' needs admin
capabilities, and we may have to work on this some more. But at the
same time it is very much not acceptable to let anybody just read
anybody else's IO statistics quite at this level.
Use of the GENL_ADMIN_PERM suggested by Johannes Berg as an alternative
to checking the capabilities by hand.
Reported-by: Vasiliy Kulikov <segoon@openwall.com>
Cc: Johannes Berg <johannes.berg@intel.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Moritz Mühlenhoff <jmm@inutil.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8762202dd0 upstream.
I hit a J_ASSERT(blocknr != 0) failure in cleanup_journal_tail() when
mounting a fsfuzzed ext3 image. It turns out that the corrupted ext3
image has s_first = 0 in the journal superblock, and the 0 is passed to
journal->j_head in journal_reset(), then to blocknr in
cleanup_journal_tail(), and in the end the J_ASSERT failed.
So validate s_first after reading journal superblock from disk in
journal_get_superblock() to ensure s_first is valid.
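A hedged sketch of the kind of check described (field names follow the jbd superblock layout; this is illustrative, not the verbatim patch):
if (be32_to_cpu(sb->s_first) == 0 ||
    be32_to_cpu(sb->s_first) >= journal->j_maxlen) {
	printk(KERN_WARNING "JBD: Invalid start block of journal: %u\n",
	       be32_to_cpu(sb->s_first));
	err = -EINVAL;
	goto out;
}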
The following script could reproduce it:
fstype=ext3
blocksize=1024
img=$fstype.img
offset=0
found=0
magic="c0 3b 39 98"
dd if=/dev/zero of=$img bs=1M count=8
mkfs -t $fstype -b $blocksize -F $img
filesize=`stat -c %s $img`
while [ $offset -lt $filesize ]
do
if od -j $offset -N 4 -t x1 $img | grep -i "$magic";then
echo "Found journal: $offset"
found=1
break
fi
offset=`echo "$offset+$blocksize" | bc`
done
if [ $found -ne 1 ];then
echo "Magic \"$magic\" not found"
exit 1
fi
dd if=/dev/zero of=$img seek=$(($offset+23)) conv=notrunc bs=1 count=1
mkdir -p ./mnt
mount -o loop $img ./mnt
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: Moritz Mühlenhoff <jmm@inutil.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2ded6e6a94 upstream.
When HPET is operating in RTC mode, the TN_ENABLE bit on timer1
controls whether the HPET or the RTC delivers interrupts to irq8. When
the system goes into suspend, the RTC driver sends a signal to the
HPET driver so that the HPET releases control of irq8, allowing the
RTC to wake the system from suspend. The switchover is accomplished by
a write to the HPET configuration registers which currently only
occurs while servicing the HPET interrupt.
On some systems, I have seen the system suspend before an HPET
interrupt occurs, preventing the write to the HPET configuration
register and leaving the HPET in control of the irq8. As the HPET is
not active during suspend, it does not generate a wake signal and RTC
alarms do not work.
This patch forces the HPET driver to immediately transfer control of
the irq8 channel to the RTC instead of waiting until the next
interrupt event.
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Link: http://lkml.kernel.org/r/20111118153306.GB16319@alberich.amd.com
Tested-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e58f516ff4 upstream.
When we can't configure the dma channel we want to fall
back to PIO. We do this by setting host->do_dma to zero.
This does not work as do_dma is used to see whether dma
can be used for the current transfer. Instead, we have
to set host->dma to NULL.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0b57d7602b upstream.
wait_for_completion_interruptible_timeout() may return a negative value.
In this case, checking if (t > 0) will return true if t is unsigned.
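A minimal sketch of the pitfall (the completion and variable names are illustrative):
long t;	/* must be signed so the error case stays visible */
t = wait_for_completion_interruptible_timeout(&data->done, HZ);
if (t > 0) {
	/* completed before the timeout */
} else if (t == 0) {
	/* timed out */
} else {
	/*
	 * interrupted: t is negative, e.g. -ERESTARTSYS; with an unsigned
	 * t this value would wrap to a huge positive number and wrongly
	 * satisfy the t > 0 check above
	 */
}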
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 13c07b0286 upstream.
Exactly like roundup_pow_of_two(1), the rounddown version was buggy for
the case of a compile-time constant '1' argument. Probably because it
originated from the same code, sharing history with the roundup version
from before the bugfix (for that one, see commit 1a06a52ee1: "Fix
roundup_pow_of_two(1)").
However, unlike the roundup version, the fix for rounddown is to just
remove the broken special case entirely. It's simply not needed - the
generic code
1UL << ilog2(n)
does the right thing for the constant '1' argument too. The only reason
roundup needed that special case was because rounding up does so by
subtracting one from the argument (and then adding one to the result)
causing the obvious problems with "ilog2(0)".
But rounddown doesn't do any of that, since ilog2() naturally truncates
(ie "rounds down") to the right rounded down value. And without the
ilog2(0) case, there's no reason for the special case that had the wrong
value.
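Worked out for the constant case: ilog2(1) is 0, so 1UL << ilog2(1) is 1UL << 0, which is 1, exactly the desired result.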
tl;dr: rounddown_pow_of_two(1) should be 1, not 0.
Acked-by: Dmitry Torokhov <dtor@vmware.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Upstream commit d305a6557b.
If addBA responses comes in just after addba_resp_timer has
expired mac80211 will still accept it and try to open the
aggregation session. This causes drivers to be confused and
in some cases even crash.
This patch fixes the race condition and makes sure that if
addba_resp_timer has expired addBA response is not longer
accepted and we do not try to open half-closed session.
Signed-off-by: Nikolay Martynov <mar.kolya@gmail.com>
[some adjustments]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a855b84c3d upstream.
Percpu allocator recorded the cpus which map to the first and last
units in pcpu_first/last_unit_cpu respectively and used them to
determine the address range of a chunk - e.g. it assumed that the
first unit has the lowest address in a chunk while the last unit has
the highest address.
This simply isn't true. Groups in a chunk can have arbitrary positive
or negative offsets from the previous one and there is no guarantee
that the first unit occupies the lowest offset while the last one the
highest.
Fix it by actually comparing unit offsets to determine cpus occupying
the lowest and highest offsets. Also, rename pcpu_first/last_unit_cpu
to pcpu_low/high_unit_cpu to avoid confusion.
The chunk address range is used to flush cache on vmalloc area
map/unmap and decide whether a given address is in the first chunk by
per_cpu_ptr_to_phys() and the bug was discovered by invalid
per_cpu_ptr_to_phys() translation for crash_note.
Kudos to Dave Young for tracking down the problem.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: WANG Cong <xiyou.wangcong@gmail.com>
Reported-by: Dave Young <dyoung@redhat.com>
Tested-by: Dave Young <dyoung@redhat.com>
LKML-Reference: <4EC21F67.10905@redhat.com>
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4399c8bf2b upstream.
If target_level == 0, current code breaks out of the while-loop if
SUPERPAGE bit is set. We should also break out if PTE is not present.
If we don't do this, KVM calls to iommu_iova_to_phys() will cause
pfn_to_dma_pte() to create mapping for 4KiB pages.
Signed-off-by: Allen Kay <allen.m.kay@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8140a95d22 upstream.
Set the dmar->iommu_superpage field to the smallest common denominator
of super page sizes supported by all active VT-d engines. Initialize
this field in intel_iommu_domain_init() API so intel_iommu_map() API
will be able to use iommu_superpage field to determine the appropriate
super page size to use.
Signed-off-by: Allen Kay <allen.m.kay@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 292827cb16 upstream.
iommu_unmap() API expects IOMMU drivers to return the actual page order
of the address being unmapped. The previous code was just returning the
page order passed in from the caller. This patch fixes this problem.
Signed-off-by: Allen Kay <allen.m.kay@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9b5cd7f37e upstream.
SBC-3 says:
A TRANSFER LENGTH field set to zero specifies that 256 logical
blocks shall be written. Any other value specifies the number
of logical blocks that shall be written.
The old code was always just returning the value in the TRANSFER LENGTH
byte. Fix this to return 256 if the byte is 0.
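A minimal sketch of the corrected interpretation (the helper name is illustrative, not the target-core code itself):
static inline u32 sbc6_transfer_length(unsigned char tl_byte)
{
	/* SBC-3: a TRANSFER LENGTH of 0 in a 6-byte CDB means 256 blocks */
	return tl_byte ? tl_byte : 256;
}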
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 02125a8264 upstream.
__d_path() API is asking for trouble and in case of apparmor d_namespace_path()
is getting just that. The root cause is that when __d_path() misses the root
it had been told to look for, it stores the location of the most remote ancestor
in *root. Without grabbing references. Sure, at the moment of call it had
been pinned down by what we have in *path. And if we raced with umount -l, we
could have very well stopped at vfsmount/dentry that got freed as soon as
prepend_path() dropped vfsmount_lock.
It is safe to compare these pointers with pre-existing (and known to be still
alive) vfsmount and dentry, as long as all we are asking is "is it the same
address?". Dereferencing is not safe and apparmor ended up stepping into
that. d_namespace_path() really wants to examine the place where we stopped,
even if it's not connected to our namespace. As a result, it looked
at ->d_sb->s_magic of a dentry that might've been already freed by that point.
All other callers had been careful enough to avoid that, but it's really
a bad interface - it invites that kind of trouble.
The fix is fairly straightforward, even though it's bigger than I'd like:
* prepend_path() root argument becomes const.
* __d_path() is never called with NULL/NULL root. It was a kludge
to start with. Instead, we have an explicit function - d_absolute_root().
Same as __d_path(), except that it doesn't get root passed and stops where
it stops. apparmor and tomoyo are using it.
* __d_path() returns NULL on path outside of root. The main
caller is show_mountinfo() and that's precisely what we pass root for - to
skip those outside chroot jail. Those who don't want that can (and do)
use d_path().
* __d_path() root argument becomes const. Everyone agrees, I hope.
* apparmor does *NOT* try to use __d_path() or any of its variants
when it sees that path->mnt is an internal vfsmount. In that case it's
definitely not mounted anywhere and dentry_path() is exactly what we want
there. Handling of sysctl()-triggered weirdness is moved to that place.
* if apparmor is asked to do pathname relative to chroot jail
and __d_path() tells it it's not in that jail, the sucker just calls
d_absolute_path() instead. That's the other remaining caller of __d_path(),
BTW.
* seq_path_root() does _NOT_ return -ENAMETOOLONG (it's stupid anyway -
the normal seq_file logics will take care of growing the buffer and redoing
the call of ->show() just fine). However, if it gets path not reachable
from root, it returns SEQ_SKIP. The only caller is adjusted accordingly
(i.e. it no longer ignores the return value as it used to do).
Reviewed-by: John Johansen <john.johansen@canonical.com>
ACKed-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1368edf064 upstream.
Commit f5252e00 ("mm: avoid null pointer access in vm_struct via
/proc/vmallocinfo") adds newly allocated vm_structs to the vmlist after
they are fully initialised. Unfortunately, it did not check that
__vmalloc_area_node() successfully populated the area. In the event of
allocation failure, the vmalloc area is freed but the pointer to the freed
memory is inserted into the vmlist, leading to a crash later in
get_vmalloc_info().
This patch adds a check for __vmalloc_area_node() failure within
__vmalloc_node_range(). It does not use "goto fail" as in the previous
error path as a warning was already displayed by __vmalloc_area_node()
before it called vfree() in its failure path.
Credit goes to Luciano Chavez for doing all the real work of identifying
exactly where the problem was.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Luciano Chavez <lnx1138@linux.vnet.ibm.com>
Tested-by: Luciano Chavez <lnx1138@linux.vnet.ibm.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d021563888 upstream.
setup_zone_migrate_reserve() expects that zone->start_pfn starts at
pageblock_nr_pages aligned pfn otherwise we could access beyond an
existing memblock resulting in the following panic if
CONFIG_HOLES_IN_ZONE is not configured and we do not check pfn_valid:
IP: [<c02d331d>] setup_zone_migrate_reserve+0xcd/0x180
*pdpt = 0000000000000000 *pde = f000ff53f000ff53
Oops: 0000 [#1] SMP
Pid: 1, comm: swapper Not tainted 3.0.7-0.7-pae #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
EIP: 0060:[<c02d331d>] EFLAGS: 00010006 CPU: 0
EIP is at setup_zone_migrate_reserve+0xcd/0x180
EAX: 000c0000 EBX: f5801fc0 ECX: 000c0000 EDX: 00000000
ESI: 000c01fe EDI: 000c01fe EBP: 00140000 ESP: f2475f58
DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
Process swapper (pid: 1, ti=f2474000 task=f2472cd0 task.ti=f2474000)
Call Trace:
[<c02d389c>] __setup_per_zone_wmarks+0xec/0x160
[<c02d3a1f>] setup_per_zone_wmarks+0xf/0x20
[<c08a771c>] init_per_zone_wmark_min+0x27/0x86
[<c020111b>] do_one_initcall+0x2b/0x160
[<c086639d>] kernel_init+0xbe/0x157
[<c05cae26>] kernel_thread_helper+0x6/0xd
Code: a5 39 f5 89 f7 0f 46 fd 39 cf 76 40 8b 03 f6 c4 08 74 32 eb 91 90 89 c8 c1 e8 0e 0f be 80 80 2f 86 c0 8b 14 85 60 2f 86 c0 89 c8 <2b> 82 b4 12 00 00 c1 e0 05 03 82 ac 12 00 00 8b 00 f6 c4 08 0f
EIP: [<c02d331d>] setup_zone_migrate_reserve+0xcd/0x180 SS:ESP 0068:f2475f58
CR2: 00000000000012b4
We crashed in pageblock_is_reserved() when accessing pfn 0xc0000 because
highstart_pfn = 0x36ffe.
The issue was introduced in 3.0-rc1 by 6d3163ce ("mm: check if any page
in a pageblock is reserved before marking it MIGRATE_RESERVE").
Make sure that start_pfn is always aligned to pageblock_nr_pages to
ensure that pfn_valid is always called at the start of each pageblock.
Architectures with holes in pageblocks will be correctly handled by
pfn_valid_within in pageblock_is_reserved.
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Tested-by: Dang Bo <bdang@vmware.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 58a84aa927 upstream.
Commit 70b50f94f1 ("mm: thp: tail page refcounting fix") keeps all
page_tail->_count zero at all times. But the current kernel does not
set page_tail->_count to zero if a 1GB page is utilized. So when an
IOMMU 1GB page is used by KVM, it will result in a kernel oops because a
tail page's _count does not equal zero.
kernel BUG at include/linux/mm.h:386!
invalid opcode: 0000 [#1] SMP
Call Trace:
gup_pud_range+0xb8/0x19d
get_user_pages_fast+0xcb/0x192
? trace_hardirqs_off+0xd/0xf
hva_to_pfn+0x119/0x2f2
gfn_to_pfn_memslot+0x2c/0x2e
kvm_iommu_map_pages+0xfd/0x1c1
kvm_iommu_map_memslots+0x7c/0xbd
kvm_iommu_map_guest+0xaa/0xbf
kvm_vm_ioctl_assigned_device+0x2ef/0xa47
kvm_vm_ioctl+0x36c/0x3a2
do_vfs_ioctl+0x49e/0x4e4
sys_ioctl+0x5a/0x7c
system_call_fastpath+0x16/0x1b
RIP gup_huge_pud+0xf2/0x159
Signed-off-by: Youquan Song <youquan.song@intel.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b6999b1912 upstream.
With the 3.2-rc kernel, IOMMU 2M pages in KVM work. But when I tried
to use IOMMU 1GB pages in KVM, I encountered an oops and the 1GB page
failed to be used.
The root cause is that 1GB page allocation calls gup_huge_pud() while 2M
page allocation calls gup_huge_pmd(). If compound pages are used and the
page is a tail page, gup_huge_pmd() increases _mapcount to record that
tail pages are mapped, while gup_huge_pud() does not do that.
So when the mapped page is released, it will result in a kernel oops
because the page is not marked mapped.
This patch adds tail page processing for compound pages in the 1GB huge
page path, keeping the same handling as for 2M pages.
Reproduce like:
1. Add grub boot option: hugepagesz=1G hugepages=8
2. mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages
3. qemu-kvm -m 2048 -hda os-kvm.img -cpu kvm64 -smp 4 -mem-path /dev/hugepages
-net none -device pci-assign,host=07:00.1
kernel BUG at mm/swap.c:114!
invalid opcode: 0000 [#1] SMP
Call Trace:
put_page+0x15/0x37
kvm_release_pfn_clean+0x31/0x36
kvm_iommu_put_pages+0x94/0xb1
kvm_iommu_unmap_memslots+0x80/0xb6
kvm_assign_device+0xba/0x117
kvm_vm_ioctl_assigned_device+0x301/0xa47
kvm_vm_ioctl+0x36c/0x3a2
do_vfs_ioctl+0x49e/0x4e4
sys_ioctl+0x5a/0x7c
system_call_fastpath+0x16/0x1b
RIP put_compound_page+0xd4/0x168
Signed-off-by: Youquan Song <youquan.song@intel.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cefcc03ffc upstream.
Allow userspace applications to do more parameter setting by providing a
more complete stub DMA driver specifying a wildcard set of formats and
channels and essentially random values for the DMA parameters. This is
required for useful runtime operation of the dummy DMA driver until we
are able to figure out how to power up links and do hw_params() from DAPM.
Sending to stable as without this the dummy driver is not terribly
useful.
Reported-by: Kyung-Kwee Ryu <Kyung-Kwee.Ryu@wolfsonmicro.com>
Tested-by: Kyung-Kwee Ryu <Kyung-Kwee.Ryu@wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 83713fc937 upstream.
The function setup_vpif_input_channel_mode() used the VSCLKDIS register
instead of VIDCLKCTL. This meant that when in HD mode videoport channel 0
used a different clock from channel 1.
Clearly a copy-and-paste error.
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Acked-by: Manjunath Hadli <manjunath.hadli@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 11357be924 upstream.
Adding the machine_is_* line was forgotten when converting mach-stmp378x to
mach-mxs.
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f1b21c5256 upstream.
On the OMAP-L138 platform, EDMA event queue 0 should be used for audio
transfers so that they are not starved by video data moving on event queue 1.
Commit 48519f0ae0 (ASoC: davinci: let platform
data define edma queue numbers) had a side-effect of changing this behavior
by making the driver actually honor the platform data passed.
Fix this now by passing event queue 0 as the queue to be used for audio
transfers.
Signed-off-by: Manjunathappa, Prakash <prakash.pm@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c9c024b3f3 upstream.
The expiry function compares the timer against current time and does
not expire the timer when the expiry time is >= now. That's wrong. If
the timer is set for now, then it must expire.
Make the condition expiry > now for breaking out of the loop.
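A hedged sketch of the corrected loop condition (the helper names are illustrative):
while ((timer = next_pending_timer(base)) != NULL) {
	if (timer->expires > now)	/* was ">= now", which skipped timers due exactly now */
		break;
	expire_timer(timer);
}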
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cce4aa378a upstream.
When no imux is available (e.g. a single capture source),
alc_auto_init_input_src() may trigger an Oops due to the access to -1.
Add a proper zero-check to avoid it.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fc084e0b93 upstream.
There are some AC97 codec and board combinations that have been observed
to take a very long time to respond after the cold reset has completed.
In one case, more than 350 ms was required. To allow users to have sound
on those platforms, we'll wait up to 500ms for the codec to become
ready.
As a board may have multiple codecs, with some faster than others to
reset, we add a module parameter to inform the driver which codecs
should be present.
Reported-by: KotCzarny <tjosko@yahoo.com>
Signed-off-by: David Dillow <dave@thedillows.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit de28f25e82 upstream.
If a device is shut down, then there might be a pending interrupt,
which will be processed after we reenable interrupts, which causes the
original handler to be run. If the old handler is the (broadcast)
periodic handler the shutdown state might hang the kernel completely.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b1f919664d upstream.
In order to leave a margin of 12.5% we should >> 3 not >> 5.
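For reference, x >> 3 is x / 8, i.e. a 12.5% margin, whereas x >> 5 is x / 32, only about 3.1%.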
Signed-off-by: Yang Honggang (Joseph) <eagle.rtlinux@gmail.com>
[jstultz: Modified commit subject]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d06c27b22a upstream.
An update is made to the sched:sched_switch event that adds some
logic to the first parameter of the __print_flags() that shows the
state of tasks. This change causes perf to fail to parse the flags.
A simple fix is needed to have the parser be able to process ops
within the argument.
Reported-by: Andrew Vagin <avagin@openvz.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c1be84309c upstream.
When a better-rated broadcast device is installed, the currently
active device is not disabled, which results in two running broadcast
devices.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cb59974742 upstream.
Fix a bug introduced by e9dbfae5, which prevents event_subsystem from
ever being released.
Ref_count was added to keep track of subsystem users, not for counting
events. Subsystem is created with ref_count = 1, so there is no need to
increment it for every event; we have nr_events for that. Fix this by
touching ref_count only when we actually have a new user -
subsystem_open().
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Link: http://lkml.kernel.org/r/1320052062-7846-1-git-send-email-idryomov@gmail.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c0afabd3d5 upstream.
Currently, the RTC code does not disable the alarm in the hardware.
This means that after a sequence such as the one below (the files are in the
RTC sysfs), the box will boot up after 2 minutes even though we've
asked for the alarm to be turned off.
# echo $((`cat since_epoch`)+120) > wakealarm
# echo 0 > wakealarm
# poweroff
Fix this by disabling the alarm when there are no timers to run.
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4c393a6059 upstream.
With Dmitry's fsstress updates I've seen very reproducible crashes in
xfs_attr_shortform_remove because xfs_attr_shortform_bytesfit claims that
the attributes would not fit inline into the inode after removing an
attribute. It turns out that we were operating on an inode with lots
of delalloc extents, and thus an if_bytes value for the data fork that
is larger than the biggest possible on-disk storage for it, which utterly
confuses the code near the end of xfs_attr_shortform_bytesfit.
Fix this by always allowing the current attribute fork, like we already
do for the attr1 format, given that delalloc conversion will take care
of moving either the data or attribute area out of line if it doesn't
fit at that point - or will make the point moot by merging extents at this
point.
Also document the function better, and clean up some loose bits.
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4dd2cb4a28 upstream.
If we are doing synchronous inode reclaim we block the VM from making
progress in memory reclaim. So if we encounter a flush locked inode
promote it in the delwri list and wake up xfsbufd to write it out now.
Without this we can get hangs of up to 30 seconds during workloads hitting
synchronous inode reclaim.
The scheme is copied from what we do for dquot reclaims.
Reported-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Ben Myers <bpm@sgi.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fa8b18edd7 upstream.
This prevents in-memory corruption and possible panics if the on-disk
ACL is badly corrupted.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is a backport of critical parts of
commit 7c24d9489f "NFSv4.1: File layout only supports whole file layouts".
It prevents the file layout driver from (incorrectly) using
partial layouts, but ignores the part of the referenced commit that
relies on additional machinery to change the LAYOUTGET request
based on the layout driver.
Signed-off-by: Fred Isaman <iisaman@netapp.com>
Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 550acb1926 upstream.
In irq_wait_for_interrupt(), the should_stop member is verified before
setting the task's state to TASK_INTERRUPTIBLE and calling schedule().
In case kthread_stop sets should_stop and wakes up the process after
should_stop is checked by the irq thread but before the task's state
is changed, the irq thread might never exit:
kthread_stop irq_wait_for_interrupt
------------ ----------------------
...
... while (!kthread_should_stop()) {
kthread->should_stop = 1;
wake_up_process(k);
wait_for_completion(&kthread->exited);
...
set_current_state(TASK_INTERRUPTIBLE);
...
schedule();
}
Fix this by checking if the thread should stop after modifying the
task's state.
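A simplified sketch of the corrected ordering (not the exact kernel code):
for (;;) {
	set_current_state(TASK_INTERRUPTIBLE);
	if (kthread_should_stop()) {	/* re-check after changing the task state */
		__set_current_state(TASK_RUNNING);
		break;
	}
	/* handle any pending interrupt action here */
	schedule();
}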
[ tglx: Simplified it a bit ]
Signed-off-by: Ido Yariv <ido@wizery.com>
Link: http://lkml.kernel.org/r/1322740508-22640-1-git-send-email-ido@wizery.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0bac71af6e upstream.
Johannes' patch for "cfg80211: fix regulatory NULL dereference"
broke user regulatory hints and it did not address the fact that
last_request was left populated even if the previous regulatory
hint was stale due to the wiphy disappearing.
Fix user regulatory hints by bailing out only for those
regulatory hints where a request_wiphy is expected. The stale last_request
considerations are addressed through the previous fixes on last_request
where we reset the last_request to a static world regdom request upon
reset_regdomains(). In this case though we further enhance the effect
by simply restoring regulatory settings completely.
Cc: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a042994dd3 upstream.
There is a theoretical race that, if hit, will trigger
a crash. The race is between when the first regulatory hint,
issued by regulatory_hint_core(), gets processed
by the workqueue and when the first device
gets registered to the wireless core. This is not easy
to reproduce but it was easy to do so through the
regulatory simulator I have been working on. This
is a port of the fix I implemented there [1].
[1] a246ccf81f
Cc: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b934069c99 upstream.
The last breaking event address is a read-only value; the regset is missing the
.set function. If a PTRACE_SETREGSET is done for NT_S390_LAST_BREAK we
get an oops due to a branch to zero:
Kernel BUG at 0000000000000002 verbose debug info unavailable
illegal operation: 0001 #1 SMP
...
Call Trace:
(<0000000000158294> ptrace_regset+0x184/0x188)
<00000000001595b6> ptrace_request+0x37a/0x4fc
<0000000000109a78> arch_ptrace+0x108/0x1fc
<00000000001590d6> SyS_ptrace+0xaa/0x12c
<00000000005c7a42> sysc_noemu+0x16/0x1c
<000003fffd5ec10c> 0x3fffd5ec10c
Last Breaking-Event-Address:
<0000000000158242> ptrace_regset+0x132/0x188
Add a nop .set function to prevent the branch to zero.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 57d1c0c03c upstream.
Masami spotted that we always try to decode the instruction stream as
64bit instructions when running a 64bit kernel; this doesn't work for
ia32-compat proglets.
Use TIF_IA32 to detect if we need to use the 32bit instruction
decoder.
Reported-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2cd1c8d4dc upstream.
Fix an outstanding issue that has been reported since 2.6.37.
On a heavily loaded machine, processing "fork()" calls could
crash with:
BUG: unable to handle kernel paging request at f573fc8c
IP: [<c01abc54>] swap_count_continued+0x104/0x180
*pdpt = 000000002a3b9027 *pde = 0000000001bed067 *pte = 0000000000000000 Oops: 0000 [#1] SMP
Modules linked in:
Pid: 1638, comm: apache2 Not tainted 3.0.4-linode37 #1
EIP: 0061:[<c01abc54>] EFLAGS: 00210246 CPU: 3
EIP is at swap_count_continued+0x104/0x180
.. snip..
Call Trace:
[<c01ac222>] ? __swap_duplicate+0xc2/0x160
[<c01040f7>] ? pte_mfn_to_pfn+0x87/0xe0
[<c01ac2e4>] ? swap_duplicate+0x14/0x40
[<c01a0a6b>] ? copy_pte_range+0x45b/0x500
[<c01a0ca5>] ? copy_page_range+0x195/0x200
[<c01328c6>] ? dup_mmap+0x1c6/0x2c0
[<c0132cf8>] ? dup_mm+0xa8/0x130
[<c013376a>] ? copy_process+0x98a/0xb30
[<c013395f>] ? do_fork+0x4f/0x280
[<c01573b3>] ? getnstimeofday+0x43/0x100
[<c010f770>] ? sys_clone+0x30/0x40
[<c06c048d>] ? ptregs_clone+0x15/0x48
[<c06bfb71>] ? syscall_call+0x7/0xb
The problem is that in copy_page_range() we turn lazy mode on,
and then in swap_entry_free() we call swap_count_continued()
which ends up in:
map = kmap_atomic(page, KM_USER0) + offset;
and then later we touch *map.
Since we are running in batched mode (lazy) we don't actually
set up the PTE mappings and the kmap_atomic is not done
synchronously and ends up trying to dereference a page that has
not been set.
Looking at kmap_atomic_prot_pfn(), it uses
'arch_flush_lazy_mmu_mode' and doing the same in
kmap_atomic_prot() and __kunmap_atomic() makes the problem go
away.
Interestingly, commit b8bcfe997e ("x86/paravirt: remove lazy
mode in interrupts") removed part of this to fix an interrupt
issue - but it went too far and did not consider this scenario.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9e6866686b upstream.
In commit f8924e770e ("x86: unify mp_bus_info"), the 32-bit
and 64-bit versions of MP_bus_info were rearranged to match each
other better. Unfortunately it introduced a regression: prior
to that change we used to always set the mp_bus_not_pci bit,
then clear it if we found a PCI bus. After it, we set
mp_bus_not_pci for ISA buses, clear it for PCI buses, and leave
it alone otherwise.
In the cases of ISA and PCI, there's not much difference. But
ISA is not the only non-PCI bus, so it's better to always set
mp_bus_not_pci and clear it only for PCI.
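A hedged sketch of the intended default (the names mirror the MP-table parsing code but this is not the verbatim patch):
set_bit(m->busid, mp_bus_not_pci);		/* assume "not PCI" for every bus type */
if (strncmp(bus_type, BUSTYPE_PCI, sizeof(BUSTYPE_PCI) - 1) == 0)
	clear_bit(m->busid, mp_bus_not_pci);	/* clear it only for PCI buses */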
Without this change, Dan's Dell PowerEdge 4200 panics on boot
with a log indicating interrupt routing trouble unless the
"noapic" option is supplied. With this change, the machine
boots reliably without "noapic".
Fixes http://bugs.debian.org/586494
Reported-bisected-and-tested-by: Dan McGrath <troubledaemon@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Dan McGrath <troubledaemon@gmail.com>
Cc: Alexey Starikovskiy <aystarik@gmail.com>
[jrnieder@gmail.com: clarified commit message]
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Link: http://lkml.kernel.org/r/20111122215000.GA9151@elie.hsd1.il.comcast.net
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4cecf6d401 upstream.
(Added the missing signed-off-by line)
In hundreds of days, the __cycles_2_ns calculation in sched_clock
has an overflow. cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits, causing
the final value to become zero. We can solve this without losing
any precision.
We can decompose TSC into quotient and remainder of division by the
scale factor, and then use this to convert TSC into nanoseconds.
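A self-contained, user-space style sketch of the decomposition idea (SCALE and the cyc2ns factor are illustrative; the kernel uses its own per-cpu scale):
#include <stdint.h>
#define SCALE 10	/* illustrative scale shift */
static uint64_t cycles_to_ns(uint64_t cyc, uint64_t cyc2ns)
{
	uint64_t quot = cyc >> SCALE;			/* quotient of cyc / 2^SCALE */
	uint64_t rem  = cyc & ((1ULL << SCALE) - 1);	/* remainder */
	/* equals (cyc * cyc2ns) >> SCALE, but without the 64-bit overflow */
	return quot * cyc2ns + ((rem * cyc2ns) >> SCALE);
}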
Signed-off-by: Salman Qazi <sqazi@google.com>
Acked-by: John Stultz <johnstul@us.ibm.com>
Reviewed-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111115221121.7262.88871.stgit@dungbeetle.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 158886cd2c upstream.
When the system enters suspend, the xHCI driver clears the command ring by
writing zero to all the TRBs. However, this also writes zero to the Link TRB,
and the ring is mangled. This may cause the driver to access a wrong memory
address, with unpredictable results.
When clearing the command ring, keep the last Link TRB intact and only clear
its cycle bit. This should fix the "command ring full" issue reported by
Oliver Neukum.
This should be backported to stable kernels as old as 2.6.37, since the
commit 89821320 "xhci: Fix command ring replay after resume" is merged.
Signed-off-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Reported-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e3420901eb upstream.
Fix a regression that was introduced by commit
811c926c53 (USB: EHCI: fix HUB TT scheduling
issue with iso transfer).
We detect an error if next == start, but this means uframe 0 can't be allocated
anymore for iso transfer...
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Matthieu CASTET <castet.matthieu@free.fr>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 811c926c53 upstream.
The current TT scheduling doesn't allow playing and then recording on a
full-speed device connected to a high speed hub.
The IN iso stream can only start on the first uframe (0-2 for a 165 us)
because of CSPLIT transactions.
For the OUT iso stream there is no such restriction; uframes 0-5 are possible.
The idea of this patch is that the first uframes are precious (for IN TT iso
streams) and we should allocate the last uframes first if possible.
For that we reverse the order of uframe allocation (last uframe first).
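A hedged sketch of the allocation-order change (iso_out_fits() is an illustrative helper, not the driver's exact code):
static int pick_uframe(struct ehci_iso_stream *stream, unsigned frame)
{
	int uframe;
	/* before: candidate uframes were tried from 0 upwards */
	for (uframe = 7; uframe >= 0; uframe--) {	/* now try the last uframes first */
		if (iso_out_fits(stream, frame, uframe))
			return uframe;
	}
	return -ENOSPC;
}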
Here is an example:
hid interrupt stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
----------------------------------------------------------------------
iso OUT stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 125 | 39 | 0 | 0 | 0 | 0 | 0 |
----------------------------------------------------------------------
There is no place for the iso IN stream (uframes 0-2 are used) and we get a
"cannot submit datapipe for urb 0, error -28: not enough bandwidth" error.
With the patch this becomes:
iso OUT stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 0 | 0 | 0 | 125 | 39 | 0 | 0 |
----------------------------------------------------------------------
iso IN stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 0 | 125 | 40 | 125 | 39 | 0 | 0 |
----------------------------------------------------------------------
Signed-off-by: Matthieu Castet <matthieu.castet@parrot.com>
Signed-off-by: Thomas Poussevin <thomas.poussevin@parrot.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cec28a5428 upstream.
Kingston DT 101 G2 replies with a wrong tag while transporting; add an
unusual_devs entry to ignore the tag validation.
Signed-off-by: Qinglin Ye <yestyle@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b1807719f6 upstream.
GeneralTouch told us that 0001 is their single point device
and 0003 is the multitouch one. Apparently, we made the tests
with someone having a prototype, and not the final product.
They said it should be safe to do the switch.
This partially reverts 5572da0 ("HID: hid-mulitouch: add support
for the 'Sensing Win7-TwoFinger'").
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8746c83d53 upstream.
qset->qh.link is an __le64 field and we should be using cpu_to_le64()
to fill it.
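A minimal sketch of the endianness fix (the structure and field names are illustrative):
struct demo_qh {
	__le64 link;
};
static void demo_set_link(struct demo_qh *qh, u64 next_dma)
{
	qh->link = cpu_to_le64(next_dma);	/* not: qh->link = next_dma, which breaks on big-endian */
}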
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6a9ce6b654 upstream.
After sleeping on a wait queue, signal_pending(current) should be
checked (not before sleeping).
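A simplified sketch of the corrected wait loop (the queue and the condition are placeholders):
DEFINE_WAIT(wait);
int ret = 0;
for (;;) {
	prepare_to_wait(&queue, &wait, TASK_INTERRUPTIBLE);
	if (condition)
		break;
	schedule();
	if (signal_pending(current)) {	/* check after waking up, not before sleeping */
		ret = -ERESTARTSYS;
		break;
	}
}
finish_wait(&queue, &wait);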
Acked-by: Alessandro Rubini <rubini@gnudd.com>
Signed-off-by: Federico Vaga <federico.vaga@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit df30b21cb0 upstream.
In comedi_fops, mmap_count is decremented at comedi_vm_ops->close but
it is not incremented at comedi_vm_ops->open. This may result in a negative
counter. The patch introduces the open method to keep the counter
consistent.
The bug was triggered by this sample code:
mmap(0, ...., comedi_fd);
fork();
exit(0);
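A hedged sketch of the fix's idea, introducing an ->open handler so the counter stays balanced (the structure and function names are illustrative, not the driver's exact ones):
struct example_async {
	int mmap_count;
};
static void example_vm_open(struct vm_area_struct *vma)
{
	struct example_async *async = vma->vm_private_data;
	async->mmap_count++;		/* balance the decrement done in ->close */
}
static void example_vm_close(struct vm_area_struct *vma)
{
	struct example_async *async = vma->vm_private_data;
	async->mmap_count--;
}
static const struct vm_operations_struct example_vm_ops = {
	.open	= example_vm_open,
	.close	= example_vm_close,
};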
Acked-by: Alessandro Rubini <rubini@gnudd.com>
Signed-off-by: Federico Vaga <federico.vaga@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3ffab428f4 upstream.
This fixes a kernel oops when a USB DAQ device is unplugged while it's
communicating with the userspace software.
Signed-off-by: Bernd Porr <berndporr@f2s.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 438957f8d4 upstream.
Interrupts must be disabled prior to calling usb_hcd_unlink_urb_from_ep.
If interrupts are not disabled, it can potentially lead to a deadlock.
The deadlock is readily reproducible on a slower (ARM-based) device
such as the TI Pandaboard.
Signed-off-by: Bart Westgeest <bart@elbrys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bda63586bc upstream.
Currently the SigmaDSP firmware loader only works correctly on little-endian
systems. Fix this by using the proper endianness conversion functions.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4f718a29fe upstream.
The SigmaDSP firmware loader currently does not perform enough boundary size
checks when processing the firmware. As a result it is possible that a
malformed firmware can cause an out of bounds memory access.
This patch adds checks which ensure that both the action header and the payload
are completely inside the firmware data boundaries before processing them.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 745718132c upstream.
When we tear down a device we try to flush all outstanding
commands in scsi_free_queue(). However the check in
scsi_request_fn() is imperfect as it only signals that
we _might start_ aborting commands, not that we've actually
aborted some.
So move the printk inside the scsi_kill_request function;
this will also give us a hint about which commands are aborted.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Cc: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is for stable kernel branch 3.0 only. Previous and later versions
have different code paths and are not affected by this bug.
If the CPU microcode is too old, the coretemp driver won't work. But
instead of failing gracefully, it currently oopses. Check for NULL
platform device data to avoid this.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Durgadoss R <durgadoss.r@intel.com>
Acked-by: Guenter Roeck <guenter.roeck@ericsson.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2a1e0fd175 upstream.
When a packet is supposed to be sent as an a-MPDU, mac80211 sets
IEEE80211_TX_CTL_AMPDU to let the driver know. On the other
hand, mac80211 configures the driver for aggregation with the
ampdu_action callback.
There is a race between these two mechanisms since the following
scenario can occur when the BA agreement is torn down:
Tx softIRQ drv configuration
========== =================
check OPERATIONAL bit
Set the TX_CTL_AMPDU bit in the packet
clear OPERATIONAL bit
stop Tx AGG
Pass Tx packet to the driver.
In that case the driver would get a packet with TX_CTL_AMPDU set
although it has already been notified that the BA session has been
torn down.
To fix this, we need to synchronize all the Qdisc activity after we
cleared the OPERATIONAL bit. After that step, all the following
packets will be buffered until the driver reports it is ready to get
new packets for this RA / TID. This buffering ensures we do not run into
another race that would send packets with TX_CTL_AMPDU unset while
the driver hasn't been requested to tear down the BA session yet.
This race occurs in practice and iwlwifi complains with a WARN_ON
when it happens.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 24f50a9d16 upstream.
Nikolay noticed (by code review) that mac80211 can
attempt to stop an aggregation session while it is
already being stopped. So to fix it, check whether
stop is already being done and bail out if so.
Also move setting the STOPPING state into the lock
so things are properly atomic.
Reported-by: Nikolay Martynov <mar.kolya@gmail.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit de3584bd62 upstream.
By the time userspace returns with a response to
the regulatory domain request, the wiphy causing
the request might have gone away. If this is so,
reject the update but mark the request as having
been processed anyway.
Cc: Luis R. Rodriguez <lrodriguez@qca.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e007b857e8 upstream.
MAC addresses have a fixed length. The current
policy allows passing < ETH_ALEN bytes, which
might result in reading beyond the buffer.
Signed-off-by: Eliad Peller <eliad@wizery.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2d1618170e upstream.
priv->work must not be synced while priv->mutex is locked, because
the mutex is taken in the work handler.
Move cancel_work_sync down to after the device shutdown code.
This is safe, because the work handler checks fw_state and bails out
early in case of a race.
Signed-off-by: Michael Buesch <m@bues.ch>
Acked-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 27c9cd7e60 upstream.
__remove_hrtimer() attempts to reprogram the clockevent device when
the timer being removed is the next to expire. However,
__remove_hrtimer() reprograms the clockevent *before* removing the
timer from the timerqueue and thus when hrtimer_force_reprogram()
finds the next timer to expire it finds the timer we're trying to
remove.
This is especially noticeable when the system switches to NOHz mode
and the system tick is removed. The timer tick is removed from the
system but the clockevent is programmed to wakeup in another HZ
anyway.
Silence the extra wakeup by removing the timer from the timerqueue
before calling hrtimer_force_reprogram() so that we actually program
the clockevent for the next timer to expire.
This was broken by 998adc3 "hrtimers: Convert hrtimers to use
timerlist infrastructure".
Signed-off-by: Jeff Ohlstein <johlstei@codeaurora.org>
Link: http://lkml.kernel.org/r/1321660030-8520-1-git-send-email-johlstei@codeaurora.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d004e02405 upstream.
ktime_get and ktime_get_ts were calling timekeeping_get_ns()
but later they were not calling arch_gettimeoffset() so architectures
using this mechanism returned 0 ns when calling these functions.
This happened for example when running Busybox's ping which calls
syscall(__NR_clock_gettime, CLOCK_MONOTONIC, ts) which eventually
calls ktime_get. As a result the returned ping travel time was zero.
Signed-off-by: Hector Palacios <hector.palacios@digi.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 884a45d964 upstream.
2d3cbf8b (cgroup_freezer: update_freezer_state() does incorrect state
transitions) removed is_task_frozen_enough and replaced it with a simple
frozen call. This, however, breaks freezing for a group with stopped tasks
because those cannot be frozen and so the group remains in CGROUP_FREEZING
state (update_if_frozen doesn't count stopped tasks) and never reaches
CGROUP_FROZEN.
Let's add is_task_frozen_enough back and use it at the original locations
(update_if_frozen and try_to_freeze_cgroup). Semantically we consider
stopped tasks as frozen enough so we should consider both cases when
testing frozen tasks.
Testcase:
mkdir /dev/freezer
mount -t cgroup -o freezer none /dev/freezer
mkdir /dev/freezer/foo
sleep 1h &
pid=$!
kill -STOP $pid
echo $pid > /dev/freezer/foo/tasks
echo FROZEN > /dev/freezer/foo/freezer.state
while true
do
cat /dev/freezer/foo/freezer.state
[ "`cat /dev/freezer/foo/freezer.state`" = "FROZEN" ] && break
sleep 1
done
echo OK
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Li Zefan <lizf@cn.fujitsu.com>
Cc: Tomasz Buchert <tomasz.buchert@inria.fr>
Cc: Paul Menage <paul@paulmenage.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 52553ddffa upstream.
Commit fa27271bc8d2 ("genirq: Fixup poll handling") introduced a
regression that broke irqfixup/irqpoll for some hardware configurations.
Amidst reorganizing 'try_one_irq', that patch removed a test that
checked for 'action->handler' returning IRQ_HANDLED before acting on
the interrupt. Restoring this test returns the functionality lost
since 2.6.39. In the current set of tests, after 'action' is set, it
must precede '!action->next' to take effect.
With this and my previous patch to irq/spurious.c, c75d720fca, all
IRQ regressions that I have encountered are fixed.
Signed-off-by: Edward Donovan <edward.donovan@numble.net>
Reported-and-tested-by: Rogério Brito <rbrito@ime.usp.br>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 24ca9a8477 upstream.
By returning '0' instead of 'EAGAIN' when the tests in xs_nospace() fail
to find evidence of socket congestion, we are making the RPC engine believe
that the message was incorrectly sent and so it disconnects the socket
instead of just retrying.
The bug appears to have been introduced by commit
5e3771ce2d (SUNRPC: Ensure that xs_nospace
return values are propagated).
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2391a0e067 upstream.
This patch makes it possible to set DAI mode to its currently applied
value even if the codec is active. This is necessary to allow
aplay -t raw -r 44100 -f S16_LE -c 2 < /dev/urandom &
alsactl store -f backup.state
alsactl restore -f backup.state
to work without returning errors. This patch is based on a patch sent
by Klaus Kurzmann <mok@fluxnetz.de>.
Signed-off-by: Timo Juhani Lindfors <timo.lindfors@iki.fi>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a29878553a upstream.
commit 6175ddf06b optimized the mem*io
functions that have been used to send commands to the device. These
optimizations somehow corrupted the communication with the lx6464es,
which rendered the device unusable with kernels after 2.6.33.
This patch emulates the memcpy_*_io functions via a loop to avoid these
problems.
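A rough sketch of the loop-based emulation, assuming 32-bit register-sized transfers (name and granularity illustrative):

static void lx_memcpy_toio_sketch(u32 __iomem *dst, const u32 *src,
				  unsigned int count_u32)
{
	unsigned int i;

	/* one explicit MMIO write per word instead of memcpy_toio() */
	for (i = 0; i < count_u32; i++)
		iowrite32(src[i], dst + i);
}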
Signed-off-by: Tim Blechmann <tim@klingt.org>
LKML-Reference: <4ECB5257.4040600@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 11ed0ba175 upstream.
This patch implements a workaround for PL310 erratum 769419. On
revisions of the PL310 prior to r3p2, the Store Buffer does not
automatically drain. This can cause normal, non-cacheable writes to be
retained when the memory system is idle, leading to suboptimal I/O
performance for drivers using coherent DMA.
This patch adds an optional wmb() call to the cpu_idle loop. On systems
with an outer cache, this causes an explicit flush of the store buffer.
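A simplified sketch of the idea; in the real workaround the barrier is conditional on the erratum config option and on an outer cache being present:

static void cpu_idle_iteration_sketch(void)
{
	wmb();		/* erratum 769419: explicitly drain the PL310 store buffer */
	cpu_do_idle();	/* then enter the low-power state */
}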
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a8a6565c76 upstream.
This patch selects ARM_AMBA if OMAP3_EMU is defined, because
OC_ETM depends on ARM_AMBA; this fixes the link failure [1].
[1],
arch/arm/kernel/built-in.o: In function `etm_remove':
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:609: undefined
reference to `amba_release_regions'
arch/arm/kernel/built-in.o: In function `etb_remove':
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:409: undefined
reference to `amba_release_regions'
arch/arm/kernel/built-in.o: In function `etm_init':
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:640: undefined
reference to `amba_driver_register'
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:646: undefined
reference to `amba_driver_register'
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:648: undefined
reference to `amba_driver_unregister'
arch/arm/kernel/built-in.o: In function `etm_probe':
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:545: undefined
reference to `amba_request_regions'
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:595: undefined
reference to `amba_release_regions'
arch/arm/kernel/built-in.o: In function `etb_probe':
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:347: undefined
reference to `amba_request_regions'
/home/tom/git/omap/linux-2.6-omap/arch/arm/kernel/etm.c:392: undefined
reference to `amba_release_regions'
arch/arm/mach-omap2/built-in.o: In function `emu_init':
/home/tom/git/omap/linux-2.6-omap/arch/arm/mach-omap2/emu.c:62:
undefined reference to `amba_device_register'
/home/tom/git/omap/linux-2.6-omap/arch/arm/mach-omap2/emu.c:63:
undefined reference to `amba_device_register'
make: *** [.tmp_vmlinux1] Error 1
making modules
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5a4f1844c2 upstream.
Fix a bug which has been in this driver since
it was added by the original commit 984aa6db,
which would never clear the IRQSTATUS bits.
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c0a39151a4 upstream.
CONFIG_USB_GADGET_PXA27X and other macros have been renamed to
CONFIG_USB_PXA27X. Update them in arch/arm/mach-pxa and arch/arm/configs
to keep them consistent.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a32839696a upstream.
While the OLPC display appears to be able to handle either positive
or negative sync, the Display Controller only recognises positive sync.
This brings viafb (for XO-1.5) in line with lxfb (for XO-1) and
fixes a recent regression where the XO-1.5 DCON could no longer be
frozen. Thanks to Florian Tobias Schandinat for helping identify
the fix.
Test case: from a vt,
echo 1 > /sys/devices/platform/dcon/freeze
should cause the current screen contents to freeze, rather than garbage being
displayed.
Signed-off-by: Daniel Drake <dsd@laptop.org>
Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c47e5c23a upstream.
Fixes i2c test failures when i2c_algo_bit.bit_test=1.
The hw doesn't actually require a mask, so just set it
to the default mask bits for r1xx-r4xx radeon ddc.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4cac2eb158 upstream.
Previously we claimed device ID 0x7450, regardless of the vendor, which is
clearly wrong. Now we'll claim that device ID only for AMD.
I suspect this was just a typo in the original code, but it's possible this
change will break shpchp on non-7450 AMD bridges. If so, we'll have to fix
them as we find them.
Reference: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=638863
Reported-by: Ralf Jung <ralfjung-e@gmx.de>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cb0e093162 upstream.
CB tuning is needed to handle potential process variations that might
cause clock jitter for certain PLL settings. However, we were setting
it incorrectly since we were using the wrong M value as a check (M1 when
we needed to use the whole M value). Fix it up, making my HDMI
attached display a little prettier (used to have occasional dots crawl
across the display).
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Timo Aaltonen <timo@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9ca1d10d74 upstream.
Unlike the previous one, I don't have known testcases it fixes. I'd
rather not go through the same debug cycle on whatever testcases those
might be.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 406478dc91 upstream.
Fixes rendering failures in Unigine Tropics and Sanctuary and the mesa
"fire" demo.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d724502a9d upstream.
Fixes i2c test failures when i2c_algo_bit.bit_test=1.
The hw doesn't actually require a mask, so just set it
to the default mask bits for r1xx-r4xx radeon ddc.
I missed this part the first time through.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a5cd335165 upstream.
There is a potential integer overflow in drm_mode_dirtyfb_ioctl()
if userspace passes in a large num_clips. The call to kmalloc would
allocate a small buffer, and the call to fb->funcs->dirty may result
in a memory corruption.
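A hedged sketch of the kind of bound that prevents the overflow; the limit name is illustrative, and kcalloc() additionally guards the size multiplication:

#define FB_DIRTY_MAX_CLIPS_SKETCH 256	/* illustrative upper bound */

static struct drm_clip_rect *alloc_clips_checked(u32 num_clips)
{
	if (num_clips == 0 || num_clips > FB_DIRTY_MAX_CLIPS_SKETCH)
		return NULL;	/* reject absurd counts before allocating */

	return kcalloc(num_clips, sizeof(struct drm_clip_rect), GFP_KERNEL);
}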
Reported-by: Haogang Chen <haogangchen@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 274252862f upstream.
This was broken by commit 7759995c75 (yes,
myself). The basic problem here is since the digest state is only saved
after the last chunk, the state array is only valid when handling the
first chunk of the next buffer. Broken since linux-3.0.
Signed-off-by: Phil Sutter <phil.sutter@viprinet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0f751e641a upstream.
From mhalcrow's original commit message:
Characters with ASCII values greater than the size of
filename_rev_map[] are valid filename characters.
ecryptfs_decode_from_filename() will access kernel memory beyond
that array, and ecryptfs_parse_tag_70_packet() will then decrypt
those characters. The attacker, using the FNEK of the crafted file,
can then re-encrypt the characters to reveal the kernel memory past
the end of the filename_rev_map[] array. I expect low security
impact since this array is statically allocated in the text area,
and the amount of memory past the array that is accessible is
limited by the largest possible ASCII filename character.
This patch solves the issue reported by mhalcrow but with an
implementation suggested by Linus to simply extend the length of
filename_rev_map[] to 256. Characters greater than 0x7A are mapped to
0x00, which is how invalid characters less than 0x7A were previously
being handled.
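A sketch of the idea: with the map covering all 256 byte values, a lookup can never run past the array, and everything above the last valid encoded character decodes to 0 ("invalid"):

/* indices 0x00..0x7A carry the real reverse mapping (omitted here);
 * indices 0x7B..0xFF stay zero-initialised, i.e. invalid */
static const unsigned char filename_rev_map_sketch[256];

static unsigned char reverse_map_byte(unsigned char c)
{
	return filename_rev_map_sketch[c];	/* always in bounds */
}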
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Reported-by: Michael Halcrow <mhalcrow@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 32001d6fe9 upstream.
Dirty pages weren't being written back when an mmap'ed eCryptfs file was
closed before the mapping was unmapped. Since f_ops->flush() is not
called by the munmap() path, the lower file was simply being released.
This patch flushes the eCryptfs file in the vm_ops->close() path.
https://launchpad.net/bugs/870326
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
patch 58d84c4ee0 upstream.
Currently we always redirty an inode that was attempted to be written out
synchronously but has been cleaned by an AIL push internally, which is
rather bogus. Fix that by doing the i_update_core check early on and
returning 0 for it. Also include async calls for it, as doing any work for
those is just as pointless. While we're at it, also fix the sign for the
EIO return in case of a filesystem shutdown, and fix the completely
nonsensical locking around xfs_log_inode.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit db3e74b582 upstream
The doalloc arg in xfs_qm_dqattach_one() is a flag that indicates
whether a new area to handle quota information will be allocated
if needed. Originally, it was passed to xfs_qm_dqget(), but has
been removed by the following commit (probably by mistake):
commit 8e9b6e7fa4
Author: Christoph Hellwig <hch@lst.de>
Date: Sun Feb 8 21:51:42 2009 +0100
xfs: remove the unused XFS_QMOPT_DQLOCK flag
As a result, xfs_qm_dqget() called from xfs_qm_dqattach_one()
never allocates the new area even if it is needed.
This patch gives the doalloc arg to xfs_qm_dqget() in
xfs_qm_dqattach_one() to fix this problem.
Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Cc: Alex Elder <aelder@sgi.com>
Cc: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b52a360b2a upstream.
Fixes a possible memory corruption when the link is larger than
MAXPATHLEN and XFS_DEBUG is not enabled. This also removes the
S_ISLNK assert, since the inode mode is checked previously in
xfs_readlink_by_handle() and via the VFS.
Updated to address concerns raised by Ben Hutchings about the loose
attention paid to 32- vs 64-bit values, and the lack of handling a
potentially negative pathlen value:
- Changed type of "pathlen" to be xfs_fsize_t, to match that of
ip->i_d.di_size
- Added checking for a negative pathlen to the too-long pathlen
test, and generalized the message that gets reported in that case
to reflect the change
As a result, if a negative pathlen were encountered, this function
would return EFSCORRUPTED (and would fail an assertion for a debug
build)--just as would a too-long pathlen.
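Roughly the shape of the tightened check, as a hedged fragment (ip and mp stand for the inode and mount of the surrounding function; message wording illustrative):

	if (pathlen <= 0 || pathlen >= MAXPATHLEN) {
		xfs_alert(mp, "%s: inode (%llu) bad symlink length (%lld)",
			  __func__, (unsigned long long)ip->i_ino,
			  (long long)pathlen);
		ASSERT(0);
		return XFS_ERROR(EFSCORRUPTED);
	}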
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 87c7bec7fc upstream.
The code to flush buffers in the umount code is a bit iffy: we first
flush all delwri buffers out, but then might be able to queue up a
new one when logging the sb counts. On a normal shutdown that one
would get flushed out when doing the synchronous superblock write in
xfs_unmountfs_writesb, but we skip that one if the filesystem has
been shut down.
Fix this by moving the delwri list flushing until just before unmounting
the log, and while we're at it also remove the superfluous delwri list
and buffer LRU flushing for the rt and log devices, which can never have
cached or delwri buffers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Amit Sahrawat <amit.sahrawat83@gmail.com>
Tested-by: Amit Sahrawat <amit.sahrawat83@gmail.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ed32201e65 upstream.
An attribute of an inode can be fetched via xfs_vn_getattr() in XFS.
Currently it returns EIO, not a negative value, when it fails. As a
result, the system call does not return a negative value even though an
error occurred. The stat(2), ls and mv commands cannot handle this
error and do not work correctly.
This patch fixes this bug, and returns -EIO, not EIO, when an error
is detected in xfs_vn_getattr().
Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c58cb165bd upstream.
Currently a buffered reader or writer can add pages to the pagecache
while we are waiting for the iolock in xfs_file_dio_aio_write. Prevent
this by re-checking mapping->nrpages after we got the iolock, and if
necessary upgrade the lock to exclusive mode. To simplify this a bit,
only take the ilock inside of xfs_file_aio_write_checks.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c38a2512d upstream.
There is no need to grab the i_mutex of the IO lock in exclusive
mode if we don't need to invalidate the page cache. Taking these
locks on every direct IO effective serialises them as taking the IO
lock in exclusive mode has to wait for all shared holders to drop
the lock. That only happens when IO is complete, so effective it
prevents dispatch of concurrent direct IO reads to the same inode.
Fix this by taking the IO lock shared to check the page cache state,
and only then drop it and take the IO lock exclusively if there is
work to be done. Hence for the normal direct IO case, no exclusive
locking will occur.
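A hedged sketch of the resulting locking pattern, using the XFS iolock helpers (error handling and the cache flush itself omitted):

	xfs_ilock(ip, XFS_IOLOCK_SHARED);
	if (mapping->nrpages) {
		/* only now pay for exclusion, to flush and invalidate
		 * the page cache before the direct IO is issued */
		xfs_iunlock(ip, XFS_IOLOCK_SHARED);
		xfs_ilock(ip, XFS_IOLOCK_EXCL);
	}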
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Tested-by: Joern Engel <joern@logfs.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 866e4ed774 upstream.
During umount we do not add a dirty inode to the lru and wait for it to
become clean first, but force writeback of data and metadata with
I_WILL_FREE set. Currently there is no way for XFS to detect that the
inode has been redirtied for metadata operations, as we skip the
mark_inode_dirty call during teardown. Fix this by setting i_update_core
manually in that case, so that the inode gets flushed during inode reclaim.
Alternatively we could enable calling mark_inode_dirty for inodes in
I_WILL_FREE state, and let the VFS dirty tracking handle this. I decided
against this as we will get better I/O patterns from reclaim compared to
the synchronous writeout in write_inode_now, and always marking the inode
dirty in some way from xfs_mark_inode_dirty is a better safety net in
either case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If storage is removed while a synchronous buffer write is underway,
"xfslogd" hangs.
Detailed log http://oss.sgi.com/archives/xfs/2011-07/msg00740.html
Related work bfc60177f8
"xfs: fix error handling for synchronous writes"
Given that xfs_bwrite actually does the shutdown already after
waiting for the b_iodone completion, and given that we actually
found that calling xfs_force_shutdown from inside
xfs_buf_iodone_callbacks was a major contributor to the problem,
it is better to drop this call.
Signed-off-by: Ajeet Yadav <ajeet.yadav.77@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 60c71ca972 upstream.
We've had another report of the "chipmunk" sound on a Logitech C600 webcam.
This patch resolves the issue.
Signed-off-by: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 811c926c53 upstream.
The current TT scheduling doesn't allow playing and then recording on a
full-speed device connected to a high speed hub.
The IN iso stream can only start on the first uframes (0-2 for a 165 us)
because of CSPLIT transactions.
For the OUT iso stream there is no such restriction; uframes 0-5 are possible.
The idea of this patch is that the first uframes are precious (for IN TT iso
streams) and we should allocate the last uframes first if possible.
For that we reverse the order of uframe allocation (last uframe first), as
sketched below.
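A rough sketch of the reversed scan; the helper, the per-uframe budget and the fit test are illustrative, not the driver's actual code:

static int pick_tt_uframe_sketch(const unsigned tt_usecs[8], unsigned need)
{
	int uframe;

	/* walk from the last uframe backwards so uframes 0-2 stay free
	 * for full-speed iso IN CSPLIT traffic whenever possible */
	for (uframe = 7; uframe >= 0; uframe--)
		if (tt_usecs[uframe] + need <= 125)
			return uframe;

	return -1;		/* not enough TT bandwidth */
}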
Here is an example:
hid interrupt stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
----------------------------------------------------------------------
iso OUT stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 125 | 39 | 0 | 0 | 0 | 0 | 0 |
----------------------------------------------------------------------
There is no place for the iso IN stream (uframes 0-2 are used) and we get a
"cannot submit datapipe for urb 0, error -28: not enough bandwidth" error.
With the patch this becomes:
iso OUT stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 0 | 0 | 0 | 125 | 39 | 0 | 0 |
----------------------------------------------------------------------
iso IN stream
----------------------------------------------------------------------
uframe | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
----------------------------------------------------------------------
max_tt_usecs | 125 | 125 | 125 | 125 | 125 | 125 | 30 | 0 |
----------------------------------------------------------------------
used usecs on a frame | 13 | 0 | 125 | 40 | 125 | 39 | 0 | 0 |
----------------------------------------------------------------------
Signed-off-by: Matthieu Castet <matthieu.castet@parrot.com>
Signed-off-by: Thomas Poussevin <thomas.poussevin@parrot.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2f640bf4c9 upstream.
The 8020i protocol (also 8070i and QIC-157) uses 12-byte commands;
shorter commands must be padded. Simon Detheridge reports that his
3-TB USB disk drive claims to use the 8020i protocol (which is
normally meant for ATAPI devices like CD drives), and because of its
large size, the disk drive requires the use of 16-byte commands.
However the usb_stor_pad12_command() routine in usb-storage always
sets the command length to 12, making the drive impossible to use.
Since the SFF-8020i specification allows for 16-byte commands in
future extensions, we may as well accept them. This patch (as1490)
changes usb_stor_pad12_command() to leave commands larger than 12
bytes alone rather than truncating them.
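Roughly what the padding routine looks like after the change (sketch): commands of 12 bytes or more are passed through untouched.

static void pad12_command_sketch(struct scsi_cmnd *srb)
{
	/* pad short commands with zeros out to 12 bytes; a 16-byte
	 * command already satisfies the loop condition and is left alone */
	for (; srb->cmd_len < 12; srb->cmd_len++)
		srb->cmnd[srb->cmd_len] = 0;
}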
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Simon Detheridge <simon@widgit.com>
CC: Matthew Dharm <mdharm-usb@one-eyed-alien.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b1ffb4c851 upstream.
Fix for ftdi_set_termios() glitching output
ftdi_set_termios() is constantly setting the baud rate, data bits and parity
unnecessarily on every call. When this happens while characters are being
transmitted, it can cause the FTDI chip to corrupt the serial port bit stream
output by stalling the output half a bit during the output of a character.
The simple fix is to skip this setting if the baud rate/data bits/parity are
unchanged.
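An illustrative helper (not the driver's actual code) showing the kind of comparison used to decide whether the line settings really changed:

static bool line_settings_changed_sketch(const struct ktermios *old,
					 const struct ktermios *new)
{
	/* reprogram baud rate, data bits and parity only when needed */
	return old->c_cflag != new->c_cflag ||
	       old->c_ispeed != new->c_ispeed ||
	       old->c_ospeed != new->c_ospeed;
}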
Signed-off-by: Andrew Worsley <amworsley@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 583182ba5f upstream.
This patch for the usb serial ark3116 driver fixes an initialisation
ordering bug that gets triggered on hotplug when using at least recent
debian/ubuntu userspace. Without it, ark3116 serial cables don't work.
Signed-off-by: Bart Hartgers <bart.hartgers@gmail.com>
Tested-by: law_ence.dev@ntlworld.com
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 97ff22ee3b upstream.
This patch (as1491) works around a bug in GCC-3.4.6, which is still
supposed to be supported. The number of microseconds in the udelay()
call in quirk_usb_disable_ehci() is fixed at 100, but the compiler
doesn't understand this and generates a link-time error. So we
replace the otherwise unused variable "delta" with a simple constant
100. This same pattern is already used in other delay loops in that
source file.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Konrad Rzepecki <krzepecki@dentonet.pl>
Tested-by: Konrad Rzepecki <krzepecki@dentonet.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5dc2470c60 upstream.
There's a race between the USB disconnect handler and the TTY close
handler which may cause the acm object to be freed while it's still
being used. This may lead to things like
http://article.gmane.org/gmane.linux.usb.general/54250
and
https://lkml.org/lkml/2011/5/29/64
This is the simplest fix I could come up with. Holding on to open_mutex
while closing the TTY device prevents acm_disconnect() from freeing the
acm object between the time acm->port.count drops to 0 and the time the
TTY side of the cleanup is finalized.
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
Cc: Oliver Neukum <oliver@neukum.name>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c16595539 upstream.
I got a report from a customer that his usb-serial
converter doesn't work well: it sometimes works,
but sometimes it doesn't.
The usb-serial converter's id:
vendor_id product_id
0x4348 0x5523
I then searched the usb-serial code and found that
two drivers announce support for this device, pl2303
and ch341; commit 026dfaf1 caused this. After many
rounds of testing, ch341 works well with this device,
while pl2303 quite often doesn't (it only works rarely).
Since ch341 works well with this device, we don't
need pl2303 to support it. I tried to revert 026dfaf1 first,
but it failed, so I prepared this patch by hand to revert it.
Signed-off-by: Wang YanQing <Udknight@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 46b5a277ed upstream.
This patch adds new PIDs for ZTE 3G modems, after we confirmed and tested them.
Thanks for Dan's work on the kernel option driver.
Signed-off-by: Alvin.Zheng <zheng.zhijian@zte.com.cn>
Signed-off-by: wsalvin <wsalvin@yahoo.com.cn>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f69e3120df upstream.
This patch (as1494) fixes a problem in xhci-hcd's resume routine.
When the controller is runtime-resumed, this can only mean that one of
the two root hubs has made a wakeup request and therefore needs to be
resumed as well. Rather than try to determine which root hub requires
attention (which might be difficult in the case where a new
non-SuperSpeed device has been plugged in), the patch simply resumes
both root hubs.
Without this change, there is a race: The controller might be put back
to sleep before it can activate its IRQ line, and the wakeup condition
might never get handled.
The patch also simplifies the logic in xhci_resume a little, combining
some repeated flag settings into a single pair of statements.
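A hedged sketch of the resume-side behaviour: wake both root hubs instead of guessing which one raised the wakeup request.

	/* on (runtime) resume, kick both the USB2 and the USB3 root hub */
	usb_hcd_resume_root_hub(hcd);
	usb_hcd_resume_root_hub(xhci->shared_hcd);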
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
CC: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 79c3dd8150 upstream.
I noticed on my Panther Point system that I wasn't getting hotplug events
for my usb3.0 disk on a usb3 port. I tracked it down to the fact that the
system had the warm reset change bit still set. This seemed to block future
events from being received, including a hotplug event.
Clearing this bit during initialization allowed the hotplug event to be
received and the disk to be recognized correctly.
This patch should be backported to kernels as old as 2.6.39.
Signed-off-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 91a13c281d upstream.
Patch to fix the error message "directives may not be used inside a macro
argument" which appears when the kernel is compiled for the cris architecture.
Signed-off-by: Claudio Scordino <claudio@evidence.eu.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1788ea6e3b upstream.
commit d953126 changed how nfs_atomic_lookup handles an -EISDIR return
from an OPEN call. Prior to that patch, that caused the client to fall
back to doing a normal lookup. When that patch went in, the code began
returning that error to userspace. The d_revalidate codepath however
never had the corresponding change, so it was still possible to end up
with a NULL ctx->state pointer after that.
That patch caused a regression. When we attempt to open a directory that
does not have a cached dentry, that open now errors out with EISDIR. If
you attempt the same open with a cached dentry, it will succeed.
Fix this by reverting the change in nfs_atomic_lookup and allowing
attempts to open directories to fall back to a normal lookup.
Also, add a NFSv4-specific f_ops->open routine that just returns
-ENOTDIR. This should never be called if things are working properly,
but if it ever is, then the dprintk may help in debugging.
To facilitate this, a new file_operations field is also added to the
nfs_rpc_ops struct.
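Roughly the shape of that NFSv4-specific open routine (sketch):

static int nfs4_file_open_sketch(struct inode *inode, struct file *filp)
{
	/* NFSv4 opens are handled in lookup/revalidate; getting here
	 * means something is very wrong, so log it and fail cleanly */
	dprintk("NFS: %s called! inode=%p filp=%p\n", __func__, inode, filp);
	return -ENOTDIR;
}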
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c73c08ec7 upstream.
For the /dev/console case, we do not kill all ldisc users. This is due to
the redirected_tty_write test in __tty_hangup. In that case there might
still be a process waiting, e.g. in n_tty_read, for input.
We wait for such processes to disappear. The problem is that we use a
timeout. After this timeout, we continue closing the ldisc and start
freeing tty resources. It obviously leads to crashes when the other
process is woken.
So to fix this, we wait infinitely before reiniting the ldisc. (The
tiocsetd remains untouched -- times out after 5s.)
This is nicely reproducible with this run from shell:
exec 0<>/dev/console 1<>/dev/console 2<>/dev/console
and stopping a getty like:
systemctl stop serial-getty@ttyS0.service
The crash proper may be produced only under load or with constified
timing the same as for 92f6fa09b.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Dave Young <hidave.darkstar@gmail.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Dmitriy Matrosov <sgf.dma@gmail.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 300420722e upstream.
It is the only place where reinit is called from. And we really need
to wait for the old ldisc to be gone. Actually this is the place where
the waiting originally was (before it was removed and re-added later).
This will make the fix in the following patch easier to implement.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Dave Young <hidave.darkstar@gmail.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Dmitriy Matrosov <sgf.dma@gmail.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit df92d0561d upstream.
To fix a nasty bug in ldisc hup vs. reinit we need to wait infinitely
long for ldisc to be gone. So here we add a parameter to
tty_ldisc_wait_idle to allow that.
This is only a preparation for the real fix which is done in the
following patches.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Dave Young <hidave.darkstar@gmail.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Dmitriy Matrosov <sgf.dma@gmail.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c2a3e84f95 upstream.
Reading from the DCC grabs a character from the buffer and
clears the status bit. Since this is a context-changing
operation, instructions following the character read that rely on
the status bit being accurate need to be synchronized with an
ISB.
In this case, the status bit check needs to execute after the
character read otherwise we run the risk of reading the character
and checking the status bit before the read can clear the status
bit in the first place. When this happens, the user will see the
same character they typed twice, instead of once.
Add an ISB after the read and the write, so that the status check
is synchronized with the read/write operations.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 90f04c2926 upstream.
When changing the UART mode PIO->DMA->PIO->DMA as below, the pch_uart driver
can't get a DMA channel resource.
setserial /dev/ttyPCH0 ^low_latency
setserial /dev/ttyPCH0 low_latency
CAUSE:
When changing the mode using the setserial command, the ".startup" function,
which gets the DMA channel, is called before the ".verify_port" function,
which sets the dma-flag (use_dma/use_dma_flag) to 1.
PIO->DMA
.startup: Since dma-flag is 0, DMA channel is not requested.
.verify_port: dma-flag is set as 1.
.shutdown: N/A
DMA->PIO
.startup: Since dma-flag is 1, DMA channel is requested.
.verify_port: dma-flag is set as 0.
.shutdown: Since dma-flag is 0, DMA channel is not released.
This means a DMA channel resource leak occurs.
From then on, this driver can never get a DMA channel resource again.
MODIFICATION:
Currently, when releasing the DMA channel resource, this driver checks the
dma-flag. However, this behaviour causes the above issue.
This driver must instead check whether dma_request_channel was executed or not.
The values are saved in the private data variables "chan_tx/chan_rx".
If the value is NULL, the DMA channel was not requested;
if it is not NULL, the DMA channel was requested.
This patch fixes the issue.
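A hedged sketch of the shutdown-side test: release a channel only if it was actually requested, and mark it as released.

static void pch_uart_dma_shutdown_sketch(struct eg20t_port *priv)
{
	if (priv->chan_tx) {
		dma_release_channel(priv->chan_tx);
		priv->chan_tx = NULL;
	}
	if (priv->chan_rx) {
		dma_release_channel(priv->chan_rx);
		priv->chan_rx = NULL;
	}
}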
Signed-off-by: Tomoya MORINAGA <tomoya.rohm@gmail.com>
Acked-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2a98879194 upstream.
ISSUE:
On ML7831, writing the MAC address doesn't work.
CAUSE:
ML7831 and EG20T have the same register map for MAC address access.
However, this driver processes the write the same way as for ML7223.
This is wrong.
This driver must process the write the same way as for EG20T.
This patch fixes the issue.
Signed-off-by: Tomoya MORINAGA <tomoya.rohm@gmail.com>
Cc: Masayuki Ohtak <masa-korg@dsn.okisemi.com>
Cc: Alexander Stein <alexander.stein@systec-electronic.com>
Cc: Denis Turischev <denis@compulab.co.il>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit af8db1508f upstream.
There may be an issue when the user issues a "reboot/shutdown" command and
the device has already shut down its hardware: the driver of such a
runtime-PM capable device will probably still be scheduled to run its suspend
routine, and that suspend routine may access the hardware, but since the
device has already been shut down physically, the system may hang.
I ran into this issue using auto-suspend capable USB devices, like a
3G modem and a keyboard. The USB runtime suspend routine may be scheduled
after the USB controller has been shut down; it then tries to suspend its
roothub (controller) and accesses registers, and the system hangs because
the controller is already shut down.
Signed-off-by: Peter Chen <peter.chen@freescale.com>
Acked-by: Ming Lei <tom.leiming@gmail.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 731abb9cb2 upstream.
Commit 1c5cae815d removed an explicit call to dev_alloc_name in ip6_tnl_create
because register_netdevice will now create a valid name. This works for the
net_device itself.
However the tunnel keeps a copy of the name in the parms structure for the
ip6_tnl associated with the tunnel. parms.name is set by copying the net_device
name in ip6_tnl_dev_init_gen. That function is called from ip6_tnl_dev_init in
ip6_tnl_create, but it is done before register_netdevice is called so the name
is set to a bogus value in the parms.name structure.
This shows up if you do a simple tunnel add, followed by a tunnel show:
[root@localhost ~]# ip -6 tunnel add remote fec0::100 local fec0::200
[root@localhost ~]# ip -6 tunnel show
ip6tnl0: ipv6/ipv6 remote :: local :: encaplimit 0 hoplimit 0 tclass 0x00 flowlabel 0x00000 (flowinfo 0x00000000)
ip6tnl%d: ipv6/ipv6 remote fec0::100 local fec0::200 encaplimit 4 hoplimit 64 tclass 0x00 flowlabel 0x00000 (flowinfo 0x00000000)
[root@localhost ~]#
Fix this by moving the strcpy out of ip6_tnl_dev_init_gen, and calling it after
register_netdevice has successfully returned.
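A hedged sketch of the reordering in ip6_tnl_create(): the copy into parms happens only once register_netdevice() has produced the final name.

	err = register_netdevice(dev);
	if (err < 0)
		goto failed_free;

	/* dev->name is only guaranteed valid after successful registration */
	strcpy(t->parms.name, dev->name);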
Signed-off-by: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 58ebacc66b upstream.
Commit 4d9d88d1 by Scott James Remnant <keybuk@google.com> added
the .uevent() callback for the regulatory device used during
the platform device registration. The change was done to account
for queuing up udev change requests through udevadm triggers.
The change also meant that upon regulatory core exit we will now
send a uevent() but the uevent() callback, reg_device_uevent(),
also accessed last_request. Right before committing device suicide
we freed last_request but never set it to NULL, so
platform_device_unregister() would lead to a bogus kernel paging
request. Fix this and also simply suppress uevents right before
we commit suicide, as they are pointless.
This fix is required for kernels >= v2.6.39
$ git describe --contains 4d9d88d1
v2.6.39-rc1~468^2~25^2^2~21
The impact of not having this present is that a bogus paging
access may occur (only read) upon cfg80211 unload time. You
may also get this BUG complaint below. Although Johannes
could not reproduce the issue this fix is theoretically correct.
mac80211_hwsim: unregister radios
mac80211_hwsim: closing netlink
BUG: unable to handle kernel paging request at ffff88001a06b5ab
IP: [<ffffffffa030df9a>] reg_device_uevent+0x1a/0x50 [cfg80211]
PGD 1836063 PUD 183a063 PMD 1ffcb067 PTE 1a06b160
Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
CPU 0
Modules linked in: cfg80211(-) [last unloaded: mac80211]
Pid: 2279, comm: rmmod Tainted: G W 3.1.0-wl+ #663 Bochs Bochs
RIP: 0010:[<ffffffffa030df9a>] [<ffffffffa030df9a>] reg_device_uevent+0x1a/0x50 [cfg80211]
RSP: 0000:ffff88001c5f9d58 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff88001d2eda88 RCX: ffff88001c7468fc
RDX: ffff88001a06b5a0 RSI: ffff88001c7467b0 RDI: ffff88001c7467b0
RBP: ffff88001c5f9d58 R08: 000000000000ffff R09: 000000000000ffff
R10: 0000000000000000 R11: 0000000000000001 R12: ffff88001c7467b0
R13: ffff88001d2eda78 R14: ffffffff8164a840 R15: 0000000000000001
FS: 00007f8a91d8a6e0(0000) GS:ffff88001fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffff88001a06b5ab CR3: 000000001c62e000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process rmmod (pid: 2279, threadinfo ffff88001c5f8000, task ffff88000023c780)
Stack:
ffff88001c5f9d98 ffffffff812ff7e5 ffffffff8176ab3d ffff88001c7468c2
000000000000ffff ffff88001d2eda88 ffff88001c7467b0 ffff880000114820
ffff88001c5f9e38 ffffffff81241dc7 ffff88001c5f9db8 ffffffff81040189
Call Trace:
[<ffffffff812ff7e5>] dev_uevent+0xc5/0x170
[<ffffffff81241dc7>] kobject_uevent_env+0x1f7/0x490
[<ffffffff81040189>] ? sub_preempt_count+0x29/0x60
[<ffffffff814cab1a>] ? _raw_spin_unlock_irqrestore+0x4a/0x90
[<ffffffff81305307>] ? devres_release_all+0x27/0x60
[<ffffffff8124206b>] kobject_uevent+0xb/0x10
[<ffffffff812fee27>] device_del+0x157/0x1b0
[<ffffffff8130377d>] platform_device_del+0x1d/0x90
[<ffffffff81303b76>] platform_device_unregister+0x16/0x30
[<ffffffffa030fffd>] regulatory_exit+0x5d/0x180 [cfg80211]
[<ffffffffa032bec3>] cfg80211_exit+0x2b/0x45 [cfg80211]
[<ffffffff8109a84c>] sys_delete_module+0x16c/0x220
[<ffffffff8108a23e>] ? trace_hardirqs_on_caller+0x7e/0x120
[<ffffffff814cba02>] system_call_fastpath+0x16/0x1b
Code: <all your base are belong to me>
RIP [<ffffffffa030df9a>] reg_device_uevent+0x1a/0x50 [cfg80211]
RSP <ffff88001c5f9d58>
CR2: ffff88001a06b5ab
---[ end trace 147c5099a411e8c0 ]---
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Cc: Scott James Remnant <keybuk@google.com>
Signed-off-by: Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c7394197a upstream.
Since the NL80211_ATTR_HT_CAPABILITY attribute is
used as a struct, it needs a minimum, not maximum
length. Enforce that properly. Not doing so could
potentially lead to reading past the end of the buffer.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b2bbf75a2 upstream.
ieee80211_probereq_get() can return NULL in
which case we should clean up & return NULL
in ieee80211_build_probe_req() as well.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f8d1ccf155 upstream.
When receiving failed PLCP frames is enabled, there
won't be a rate pointer when we add the radiotap
header and thus the kernel will crash. Fix this by
not assuming the rate pointer is always valid. It's
still always valid for frames that have good PLCP
though, and that is checked & enforced.
This was broken by my
commit fc88518916
Author: Johannes Berg <johannes.berg@intel.com>
Date: Fri Jul 30 13:23:12 2010 +0200
mac80211: don't check rates on PLCP error frames
where I removed the check in this case but didn't
take into account that the rate info would be used.
Reported-by: Xiaokang Qin <xiaokang.qin@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ed66ba472a upstream.
The generic powersaving code that determines after reception of a frame
whether the device should go back to sleep or whether it could stay
awake was calling rt2x00lib_config directly from RX tasklet context.
On a number of the devices this call can actually sleep, due to having
to confirm that the sleeping commands have been executed successfully.
Fix this by moving the call to rt2x00lib_config to a workqueue call.
This fixes bug https://bugzilla.redhat.com/show_bug.cgi?id=731672
Tested-by: Tomas Trnka <tomastrnka@gmx.com>
Signed-off-by: Gertjan van Wingerde <gwingerde@gmail.com>
Acked-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fe09b32a43 upstream.
If we hit the default case in the switch in if_spi_host_to_card() we'll leak
the memory we allocated for 'packet'. This patch resolves the leak by freeing
the allocated memory in that case.
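A sketch of the fix shape: free the freshly allocated packet in the default branch before bailing out (surrounding code simplified):

	switch (type) {
	case MVMS_CMD:
	case MVMS_DAT:
		/* hand the packet to the corresponding out queue as before */
		break;
	default:
		kfree(packet);		/* don't leak on an unknown type */
		err = -EINVAL;
		break;
	}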
Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Acked-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8428e84d42 upstream.
Recent gcc versions generate unaligned accesses by default on ARMv6 and
later processors. This patch ensures that the SCTLR.A bit is always
cleared on such processors to avoid the kernel trapping before
alignment_init() is called.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: John Linn <John.Linn@xilinx.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cda2bb78c2 upstream.
At least on a Lenovo X220 the HPD bits of this are enabled at boot but
cleared after resume, which means plug interrupts stop working.
This also happens to fix DP displays re-lighting on resume. I'm quite
certain that's an accident: the first DP link train inevitably fails on
that machine, and it's only serendipity that we're getting multiple plug
interrupts and the second train works. But I shall take my victories
where I get them.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Tested-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Cc: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 62dd28d0c6 upstream.
Hauppauge have released a new model rev, sub id 8940, this adds
support.
[stoth@kernellabs.com: I modified Tony's patch slightly in relation to the
card numbering in saa7164.h, appending rather than inserting the new card
- normal practise]
Signed-off-by: Tony Jago <tony@hammertelecom.com.au>
Signed-off-by: Steven Toth <stoth@kernellabs.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cf16123c9c upstream.
The aacraid controller can hang on some nodes if the kernel uses a non-default
(powersave) ASPM policy. The controller hangs shortly after a successful load
and hardware detection. The SCSI error handler detects this hang and tries to
restart the hardware, but it does not help.
Initially it was noticed on RHEL6-based openVZ kernel after backporting
aacraid driver from mainline (RHEL6 kernel with original driver works well)
http://bugzilla.openvz.org/show_bug.cgi?id=2043
This issue happens because the default ASPM policy was changed in Red Hat
kernels. Therefore people at Red Hat noticed this problem a long time ago:
on Fedora 12
https://bugzilla.redhat.com/show_bug.cgi?id=540478
on Fedora 14
https://bugzilla.redhat.com/show_bug.cgi?id=679385
In the RHEL6 kernel this issue was fixed: ASPM was disabled in the aacraid
driver. In the kernel changelog I've found that it seems this was done by
Matthew Garrett: -
[scsi] aacraid: Disable ASPM by default (Matthew Garrett) [599735]
However, it seems this patch was not submitted to mainline. I've reproduced this
issue on a vanilla 3.1.0 kernel booted with the "pcie_aspm.policy=powersave"
option, so I believe it makes sense to do it now.
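A hedged sketch of the probe-time change: ask the PCIe core to keep ASPM off for this adapter regardless of the system-wide policy.

	pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S |
				     PCIE_LINK_STATE_L1 |
				     PCIE_LINK_STATE_CLKPM);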
Signed-off-by: Vasily Averin <vvs@sw.ru>
[mjg: Checking the Windows drivers indicates that they disable ASPM under all
circumstances, so:]
Acked-by: Matthew Garrett <mjg@redhat.com>
Acked-by: Achim Leubner <Achim_Leubner@pmc-sierra.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e5a44df85e upstream.
The Windows driver .inf disables ASPM on hpsa devices. Do the same because the
selection of a non default ASPM policy can cause the device to hang.
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Acked-by: Mike Miller <mike.miller@hp.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c75d720fca upstream.
commit d05c65fff0 ("genirq: spurious: Run only one poller at a time")
introduced a regression, leaving the boot options 'irqfixup' and
'irqpoll' non-functional. The patch placed tests in each function, to
exit if the function is already running. The test in 'misrouted_irq'
exited when it should have proceeded, effectively disabling
'misrouted_irq' and 'poll_spurious_irqs'.
The check for an already running poller needs to be "!= 1" not "== 1"
as "1" is the value when the first poller starts running.
Signed-off-by: Edward Donovan <edward.donovan@numble.net>
Cc: maciej.rutecki@gmail.com
Link: http://lkml.kernel.org/r/1320175784-6745-1-git-send-email-edward.donovan@numble.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6b76106d8e upstream.
Even after commit 5478755616
("block: check for proper length of iov entries earlier ...")
we still won't check for zero-length entries after an unaligned
entry. Remove the break-statement, so all entries are checked.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7a401a972d upstream.
bdi_prune_sb() in bdi_unregister() attempts to remove the bdi links
from all super_blocks and then del_timer_sync() the writeback timer.
However, this can race with __mark_inode_dirty(), leading to
bdi_wakeup_thread_delayed() rearming the writeback timer on the bdi
we're unregistering, after we've called del_timer_sync().
This can end up with the bdi being freed with an active timer inside it,
as in the case of the following dump after the removal of an SD card.
Fix this by redoing the del_timer_sync() in bdi_destroy().
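A hedged sketch of the fix in bdi_destroy(): kill the timer again right before the bdi memory goes away, in case __mark_inode_dirty() re-armed it after bdi_unregister().

void bdi_destroy_sketch(struct backing_dev_info *bdi)
{
	del_timer_sync(&bdi->wb.wakeup_timer);

	/* remaining teardown unchanged */
}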
------------[ cut here ]------------
WARNING: at /home/rabin/kernel/arm/lib/debugobjects.c:262 debug_print_object+0x9c/0xc8()
ODEBUG: free active (active state 0) object type: timer_list hint: wakeup_timer_fn+0x0/0x180
Modules linked in:
Backtrace:
[<c00109dc>] (dump_backtrace+0x0/0x110) from [<c0236e4c>] (dump_stack+0x18/0x1c)
r6:c02bc638 r5:00000106 r4:c79f5d18 r3:00000000
[<c0236e34>] (dump_stack+0x0/0x1c) from [<c0025e6c>] (warn_slowpath_common+0x54/0x6c)
[<c0025e18>] (warn_slowpath_common+0x0/0x6c) from [<c0025f28>] (warn_slowpath_fmt+0x38/0x40)
r8:20000013 r7:c780c6f0 r6:c031613c r5:c780c6f0 r4:c02b1b29
r3:00000009
[<c0025ef0>] (warn_slowpath_fmt+0x0/0x40) from [<c015eb4c>] (debug_print_object+0x9c/0xc8)
r3:c02b1b29 r2:c02bc662
[<c015eab0>] (debug_print_object+0x0/0xc8) from [<c015f574>] (debug_check_no_obj_freed+0xac/0x1dc)
r6:c7964000 r5:00000001 r4:c7964000
[<c015f4c8>] (debug_check_no_obj_freed+0x0/0x1dc) from [<c00a9e38>] (kmem_cache_free+0x88/0x1f8)
[<c00a9db0>] (kmem_cache_free+0x0/0x1f8) from [<c014286c>] (blk_release_queue+0x70/0x78)
[<c01427fc>] (blk_release_queue+0x0/0x78) from [<c015290c>] (kobject_release+0x70/0x84)
r5:c79641f0 r4:c796420c
[<c015289c>] (kobject_release+0x0/0x84) from [<c0153ce4>] (kref_put+0x68/0x80)
r7:00000083 r6:c74083d0 r5:c015289c r4:c796420c
[<c0153c7c>] (kref_put+0x0/0x80) from [<c01527d0>] (kobject_put+0x48/0x5c)
r5:c79643b4 r4:c79641f0
[<c0152788>] (kobject_put+0x0/0x5c) from [<c013ddd8>] (blk_cleanup_queue+0x68/0x74)
r4:c7964000
[<c013dd70>] (blk_cleanup_queue+0x0/0x74) from [<c01a6370>] (mmc_blk_put+0x78/0xe8)
r5:00000000 r4:c794c400
[<c01a62f8>] (mmc_blk_put+0x0/0xe8) from [<c01a64b4>] (mmc_blk_release+0x24/0x38)
r5:c794c400 r4:c0322824
[<c01a6490>] (mmc_blk_release+0x0/0x38) from [<c00de11c>] (__blkdev_put+0xe8/0x170)
r5:c78d5e00 r4:c74083c0
[<c00de034>] (__blkdev_put+0x0/0x170) from [<c00de2c0>] (blkdev_put+0x11c/0x12c)
r8:c79f5f70 r7:00000001 r6:c74083d0 r5:00000083 r4:c74083c0
r3:00000000
[<c00de1a4>] (blkdev_put+0x0/0x12c) from [<c00b0724>] (kill_block_super+0x60/0x6c)
r7:c7942300 r6:c79f4000 r5:00000083 r4:c74083c0
[<c00b06c4>] (kill_block_super+0x0/0x6c) from [<c00b0a94>] (deactivate_locked_super+0x44/0x70)
r6:c79f4000 r5:c031af64 r4:c794dc00 r3:c00b06c4
[<c00b0a50>] (deactivate_locked_super+0x0/0x70) from [<c00b1358>] (deactivate_super+0x6c/0x70)
r5:c794dc00 r4:c794dc00
[<c00b12ec>] (deactivate_super+0x0/0x70) from [<c00c88b0>] (mntput_no_expire+0x188/0x194)
r5:c794dc00 r4:c7942300
[<c00c8728>] (mntput_no_expire+0x0/0x194) from [<c00c95e0>] (sys_umount+0x2e4/0x310)
r6:c7942300 r5:00000000 r4:00000000 r3:00000000
[<c00c92fc>] (sys_umount+0x0/0x310) from [<c000d940>] (ret_fast_syscall+0x0/0x30)
---[ end trace e5c83c92ada51c76 ]---
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d715e433b7 upstream.
kdump fails because we try to execute an HV only instruction. Feature
fixups are being applied after we copy the exception vectors down to 0
so they miss out on any updates.
We have always had this issue but it only became critical in v3.0
when we added CFAR support (breaks POWER5) and v3.1 when we added
POWERNV (breaks everyone).
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 72f3bea075 upstream.
Fixes the PS3 bootup hang introduced in 3.0-rc1 by:
commit 317f394160
sched: Move the second half of ttwu() to the remote cpu
Move the PS3's LV1 EOI call lv1_end_of_interrupt_ext() from ps3_chip_eoi()
to ps3_get_irq() for IPI messages.
If lv1_send_event_locally() is called between a previous call to
lv1_send_event_locally() and the corresponding call to
lv1_end_of_interrupt_ext(), the second event will not be delivered to the
target cpu.
The PS3's SMP IPIs are implemented using lv1_send_event_locally(), so if two
IPI messages of the same type are sent to the same target in a relatively
short period of time the second IPI event can become lost when
lv1_end_of_interrupt_ext() is called from ps3_chip_eoi().
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 99cb2ddcc6 upstream.
gref->gref_id is unsigned so the error handling didn't work.
gnttab_grant_foreign_access() returns an int type, so we can add a
cast here, and it doesn't cause any problems.
gnttab_grant_foreign_access() can return a variety of errors
including -ENOSPC, -ENOSYS and -ENOMEM.
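A hedged sketch of the idea: take the return value in a signed int so negative errors are visible, and only store it into the unsigned gref_id on success.

static int grant_one_ref_sketch(domid_t domid, unsigned long frame,
				uint32_t *gref_id_out)
{
	int rc = gnttab_grant_foreign_access(domid, frame, 0);

	if (rc < 0)		/* -ENOSPC, -ENOSYS, -ENOMEM, ... */
		return rc;

	*gref_id_out = rc;	/* safe: rc is a valid grant reference here */
	return 0;
}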
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 21643e69a4 upstream.
On 32 bit systems a high value of op.count could lead to an integer
overflow in the kzalloc() and gref_ids would be smaller than
expected. If you then triggered another integer overflow in
"if (gref_size + op.count > limit)" then you'd probably get memory
corruption inside add_grefs().
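A hedged sketch of overflow-safe validation of a 32-bit count before any allocation (limit name illustrative; kcalloc() would additionally guard the size multiplication):

static int check_gref_count_sketch(uint32_t count, uint32_t current_size,
				   uint32_t limit)
{
	if (count == 0 || count > limit)
		return -EINVAL;		/* reject absurd requests up front */
	if (current_size + count < current_size)
		return -ENOSPC;		/* 32-bit addition would wrap */
	return 0;
}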
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f09ee0451a upstream.
The codec for the Devkit8000 (TWL4030) was not detected except
when built with CONFIG_SND_SOC_ALL_CODECS.
twl-core.c still uses CONFIG_TWL4030_CODEC for
twl_has_codec().
In commit 57fe7251f5
CONFIG_TWL4030_CODEC was renamed
to CONFIG_MFD_TWL4030_AUDIO, which is why the codec
was not detected.
This patch renames CONFIG_TWL4030_CODEC to
CONFIG_MFD_TWL4030_AUDIO in twl-core.c.
Signed-off-by: Thomas Weber <weber@corscience.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Cc: Jarkko Nikula <jarkko.nikula@bitmer.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9a3f530f39 upstream.
When the number of failed devices exceeds the allowed number
we must abort any active parity operations (checks or updates) as they
are no longer meaningful, and can lead to a BUG_ON in
handle_parity_checks6.
This bug was introduced by commit 6c0069c0ae
in 2.6.29.
Reported-by: Manish Katiyar <mkatiyar@gmail.com>
Tested-by: Manish Katiyar <mkatiyar@gmail.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[This patch is supposed to be applied in 3.1 (and maybe older) branches only.]
New kernels support newer firmware that users may try to incorrectly use
with older kernels. Display an error and explain the problem in such a case.
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 153b19a3b9 upstream.
SFI tables reside in RAM and should not be modified once they are
written. The current code went and set pentry->irq to zero, which causes
subsequent reads to fail with an invalid SFI table checksum. This will
break kexec, as the second kernel fails to validate the SFI tables.
To fix this we use a temporary variable for the irq number.
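A hedged sketch of the change: adjust a local copy of the irq and leave pentry->irq (and hence the table checksum) untouched; the 0xff handling shown is illustrative.

	int irq = pentry->irq;		/* work on a copy, not the SFI table */

	if (irq == 0xff)
		irq = 0;		/* "no interrupt" handled locally only */

	/* pass 'irq' to the board info; pentry->irq is never written */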
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a94cc4e6c0 upstream.
According to the SFI specification, irq number 0xFF means the device has no
interrupt or has its interrupt attached via GPIO.
Currently, we don't handle this special case and set the irq field in the
*_board_info structs to 255. This leads to confusion in some drivers.
The accelerometer driver tries to register interrupt 255, fails and prints
"Cannot get IRQ" to dmesg.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dcaaf9f2c1 upstream.
In the recent usb-audio driver, the initialization of volume ranges
may be delayed when the device doesn't respond well at the probing time.
But the volume quirks for certain devices are applied only in
mixer_ctl_feature_info() thus only at the very first probe and will be
missing when the volume range is initialized later.
This patch moves the volume quirk code to be always called from the
volume-range extraction (get_min_max()), so that the quirks are properly
applied in the later init time.
Reported-and-tested-by: Alexey Fisher <bug-track@fisher-privat.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9fcd0ab130 upstream.
When the initial check of dB-range failed due to the read error, try to
check again at the later read, too. When an invalid dB range is found,
remove TLV flags and notify the mixer info change.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 14660ccd59 upstream.
I've been seeing memory leaks on my system in the form of large
(300-400MB) GEM objects created by now-dead processes laying around
clogging up memory. I usually notice when it gets to about 1.2GB of
them. Hopefully this clears up the issue, but I just found this bug
by inspection.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 72103bd128 upstream.
Commit 31a3ddda16 introduced
a use after free in virtio-pci. The main issue is
that the release method signals removal of the virtio device,
while remove signals removal of the pci device.
For example, on driver removal or hot-unplug,
virtio_pci_release_dev is called before virtio_pci_remove.
We then might get a crash as virtio_pci_remove tries to use the
device freed by virtio_pci_release_dev.
We allocate/free all resources together with the
pci device, so we can leave the release method empty.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aeb4b88ec0 upstream.
When a virtual master control is created, the driver looks for slave
elements from the assigned card instance. But this may include the
elements of other codecs when multiple codecs are on the same HD-audio
bus. This works the first time, but it'll give an Oops once it is
freed and re-created via the reconfig sysfs.
This patch changes the element-look-up strategy to limit only to the
mixer elements of the same codec.
Reported-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 21404b772a upstream.
This removes the use of the special "macbookair_fn_keys" keyboard
translation table for the MacBookAir4,x models (ie the 2011 refresh).
They use the standard apple_fn_keys[] translation. Apparently only the
old MacBook Airs need a different translation table.
This mirrors the change that commit da617c7cb9 ("HID: consolidate
MacbookAir 4,1 mappings") did for the WELLSPRING6A ones, but does it for
the WELLSPRING6 model used on the MacBookAir4,2.
Reported-and-tested-by: Dirk Hohndel <hohndel@infradead.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Joshua V Dillon <jvdillon@gmail.com>
Cc: Chase Douglas <chase.douglas@canonical.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit da617c7cb9 upstream.
MacbookAir 4,1 doesn't require an extra mapping table, as the mappings
are identical to apple_fn_keys[].
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ad734bc156 upstream.
I've recently bought an Apple wireless aluminum keyboard (model 2011) which is
not yet supported by the kernel - it seems they just changed the device id.
After applying the attached patch, the device is fully functional.
Signed-off-by: Andreas Krist <andreas.krist@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 213f9da805 upstream.
This patch adds keyboard support for MacBook Pro 8 models, which have the
WELLSPRING5A model name and the 0x0252, 0x0253 and 0x0254 USB IDs. Trackpad
support for those models was added to bcm5974 in
c331eb580a ("Input: bcm5974 - Add
support for newer MacBookPro8,2").
Signed-off-by: Gökçen Eraslan <gokcen@pardus.org.tr>
Acked-by: Henrik Rydberg <rydberg@euromail.se>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: Chase Douglas <chase.douglas@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f6f554f09c upstream.
Otherwise the generic driver wouldn't unbind from it and wouldn't
let hid-apple automatically take over.
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5d922baa63 upstream.
Added USB device IDs for MacBookAir4,2 keyboard. Device constants were
copied from the MacBookAir3,2 constants. The 4,2 device specification is
reportedly unchanged from the 3,2 predecessor and seems to work well.
Signed-off-by: Joshua V Dillon <jvdillon@gmail.com>
Signed-off-by: Chase Douglas <chase.douglas@canonical.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4a4c879904 upstream.
Add USB device ids for the new revision (MB110LL/B) of Apple's wired aluminum
keyboard. I have only confirmed that the ANSI version is correct - it is
assumed that the ISO and JIS versions follow the standard numbering convention.
Signed-off-by: Dan Bastone <dan@pwienterprises.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f722013ee9 upstream.
In nand_do_write_ops() code it is possible for a caller to provide
ops.oobbuf populated and ops.mode == MTD_OOB_AUTO, which currently
means that the chip->oob_poi buffer isn't initialised to all 0xFF.
The nand_fill_oob() method then carries out the task of copying
the provided OOB data to oob_poi, but with MTD_OOB_AUTO it skips
areas marked as unavailable by the layout struct, including the
bad block marker bytes.
An example of this causing issues is when the last OOB data read
was from the start of a bad block where the markers are not 0xFF,
and the caller wishes to write new OOB data at the beginning of
another block. In this scenario the caller would provide OOB data,
but nand_fill_oob() would skip the bad block marker bytes in
oob_poi before copying the OOB data provided by the caller.
This means that when the OOB data is written back to NAND,
the block is inadvertently marked as bad without the caller knowing.
This has been witnessed when using YAFFS2 where tags are stored
in the OOB.
To avoid this, oob_poi is always initialised to 0xFF to make sure
no left-over data is inadvertently written back to the OOB area.
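The fix boils down to clearing the scratch OOB buffer before
nand_fill_oob() runs, roughly along these lines (simplified from
nand_do_write_ops()):

	/* make sure stale data, such as a previously read bad-block
	 * marker, is never written back for OOB bytes the caller's
	 * layout does not cover */
	memset(chip->oob_poi, 0xff, mtd->oobsize);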
Credits to Brian Norris <computersforpeace@gmail.com> for fixing this
patch.
Signed-off-by: Adam Thomson <adam.thomson@alcatel-lucent.com>
Signed-off-by: Artem Bityutskiy <dedekind1@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 52d6d4ef5e upstream.
My recent commits (3782c69d, 324c74a) introduced a regression
in register offset selection based on the macversion.
Not using parentheses properly around a ternary operator
leads to the wrong register offset being selected.
This issue was observed with the AR9462 chip, which disconnected immediately
after association with the following message:
ieee80211 phy3: wlan0: Failed to send nullfunc to AP 00:23:69:12:ea:47
after 500ms, disconnecting.
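A generic illustration of the precedence pitfall (these macros are made up,
not the actual ath9k register map): the conditional operator binds more
loosely than +, so an unparenthesized ternary absorbs trailing arithmetic
into its last operand:

	#define REG_BASE(v)	((v) >= 2 ? 0x4000 : 0x3000)	/* correct */
	#define REG_BASE_BAD(v)	(v) >= 2 ? 0x4000 : 0x3000	/* buggy */

	/* REG_BASE_BAD(v) + 0x10 expands to
	 *	(v) >= 2 ? 0x4000 : 0x3000 + 0x10
	 * so the +0x10 is only applied on the 0x3000 branch,
	 * giving the wrong offset whenever v >= 2. */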
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f5ff7cd1a8 upstream.
The previous commit enforces a new rule for handling the cloned packets
for transmit time stamping. These packets must not be freed using any other
function than skb_complete_tx_timestamp. This commit fixes the one and only
driver using this API.
The driver first appeared in v3.0.
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b2bac6acf8 upstream.
As cryptd is depended on by other algorithms such as aesni-intel,
it needs to be registered before them. When everything is built
as modules, this occurs naturally. However, for this to work when
they are built-in, we need to use subsys_initcall in cryptd.
Tested-by: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Kerin Millar <kerframil@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 528f7ce6e4 upstream.
In enter_state() we use "state" as an offset for the pm_states[]
array. The pm_states[] array only has PM_SUSPEND_MAX elements so
this test is off by one.
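The bounds check presumably needs to use >= rather than >, since an array of
PM_SUSPEND_MAX entries only has valid indexes 0 .. PM_SUSPEND_MAX - 1
(a sketch, not the verbatim fix):

	/* pm_states[] has PM_SUSPEND_MAX entries, so anything at or
	 * beyond PM_SUSPEND_MAX is out of range */
	if (state >= PM_SUSPEND_MAX)
		return -EINVAL;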
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aa1c366e4f upstream.
With the conversion of struct flowi to a union of AF-specific structs, some
operations on the flow cache need to account for the exact size of the key.
Signed-off-by: David Ward <david.ward@ll.mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8c0bec2151 upstream.
The i_mutex lock and flush_completed_IO() added by commit 2581fdc810
in ext4_evict_inode() causes lockdep complaining about potential
deadlock in several places. In most/all of these LOCKDEP complaints
it looks like it's a false positive, since many of the potential
circular locking cases can't take place by the time the
ext4_evict_inode() is called; but since at the very least it may mask
real problems, we need to address this.
This change removes the flush_completed_IO() and i_mutex lock in
ext4_evict_inode(). Instead, we take a different approach to resolve
the software lockup that commit 2581fdc810 intends to fix. Rather
than having the ext4-dio-unwritten thread wait to grab the i_mutex
lock of an inode, we use mutex_trylock() instead, and simply requeue
the work item if we fail to grab the inode's i_mutex lock.
This should speed up work queue processing in general and also
prevents the following deadlock scenario: During page fault,
shrink_icache_memory is called that in turn evicts another inode B.
Inode B has some pending io_end work so it calls ext4_ioend_wait()
that waits for inode B's i_ioend_count to become zero. However, inode
B's ioend work was queued behind some of inode A's ioend work on the
same cpu's ext4-dio-unwritten workqueue. As the ext4-dio-unwritten
thread on that cpu is processing inode A's ioend work, it tries to
grab inode A's i_mutex lock. Since the i_mutex lock of inode A was
still held from before the page fault happened, we enter a deadlock.
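A simplified sketch of the trylock-and-requeue pattern described above (not
the exact ext4 code; the work item and workqueue names are placeholders):

	if (!mutex_trylock(&inode->i_mutex)) {
		/* i_mutex is contended: requeue instead of blocking the
		 * shared ext4-dio-unwritten worker thread */
		queue_work(wq, &io_end->work);	/* placeholder names */
		return;
	}
	/* ... convert the pending unwritten extents ... */
	mutex_unlock(&inode->i_mutex);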
Signed-off-by: Jiaying Zhang <jiayingz@google.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 543e32d5ff upstream.
This bug was introduced in f8155a40 ("mtd: pxa3xx_nand: rework irq
logic") and causes the PXA3xx NAND controller fail to operate with NAND
flash that has empty pages. According to the comment in this block, the
hardware controller will report a double-bit error for empty pages,
which can and must be ignored.
This patch restores the original behaviour of the driver.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Lei Wen <leiwen@marvell.com>
Cc: Haojian Zhuang <haojian.zhuang@marvell.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0fab028b77 upstream.
When keep_config is set, detection goes through a different routine:
the driver reads out the settings previously programmed by the
bootloader. Since most bootloaders leave the irqs masked off, while the
current driver needs all irqs enabled by default, the keep_config
behavior would lead to no irqs at all.
Signed-off-by: Lei Wen <leiwen@marvell.com>
Tested-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d5de1907d0 upstream.
parse_mtd_partitions takes a list of partition types; if the driver
isn't loaded, it attempts to load it, and then it grabs the partition
parser. For redboot, the module name is "redboot.ko", while the parser
name is "RedBoot". Since modprobe is case-sensitive, attempting to
modprobe "RedBoot" will never work. I suspect the embedded systems that
make use of redboot just always manually loaded redboot prior to loading
their specific nand chip drivers (or statically compiled it in).
Signed-off-by: Andres Salomon <dilinger@queued.net>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bf5140817b upstream.
On writes in MODE_RAW the mtd_oob_ops struct is not sufficiently
initialized which may cause nandwrite to fail. With this patch
it is possible to write raw nand/oob data without additional ECC
(either for testing or when some sectors need different oob layout
e.g. bootloader) like
nandwrite -n -r -o /dev/mtd0 <myfile>
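Underlying this is a simple rule (sketch only, not the exact mtdchar
change): start from a zeroed mtd_oob_ops so no field carries stack garbage,
then fill in only what the raw write needs:

	struct mtd_oob_ops ops = { };	/* zero everything first */

	ops.mode   = MTD_OOB_RAW;
	ops.len    = len;
	ops.datbuf = kbuf;
	ops.oobbuf = NULL;		/* or a caller-supplied OOB buffer */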
Signed-off-by: Peter Wippich <pewi@gw-instruments.de>
Tested-by: Ricard Wanderlof <ricardw@axis.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 05cb910857 upstream.
Only AID values 1-2007 are valid, but some APs have been
found to send random bogus values; in the reported case an
AP was sending the AID field value 0xffff, i.e. an AID of
0x3fff (16383).
There isn't much we can do but disable powersave since
there's no way it can work properly in this case.
Reported-by: Bill C Riemers <briemers@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6911bf0453 upstream.
When going back on-channel, we should reconfigure
the hw iff the hardware is not already configured
to the operational channel.
Signed-off-by: Eliad Peller <eliad@wizery.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eaa7af2ae5 upstream.
The offchannel code is currently broken - we should
remain_off_channel if the work was started, and
the work's channel and channel_type are the same
as local->tmp_channel and local->tmp_channel_type.
However, if wk->chan_type and local->tmp_channel_type
coexist (e.g. have the same channel type), we won't
remain_off_channel.
This behavior was introduced by commit da2fd1f
("mac80211: Allow work items to use existing
channel type.")
Tested-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: Eliad Peller <eliad@wizery.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 98fb2cc115 upstream.
This patch fixes a system hang when resuming from the S3 state
and a lower-rate sensitivity failure issue.
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c30bc94758 upstream.
L2TP for example uses NLA_MSECS like this:
policy:
	[L2TP_ATTR_RECV_TIMEOUT] = { .type = NLA_MSECS, },
code:
	if (info->attrs[L2TP_ATTR_RECV_TIMEOUT])
		cfg.reorder_timeout = nla_get_msecs(info->attrs[L2TP_ATTR_RECV_TIMEOUT]);
As nla_get_msecs() is essentially nla_get_u64() plus the
conversion to a HZ-based value, this will not properly
reject attributes from userspace that aren't long enough
and might overrun the message.
Add NLA_MSECS to the attribute minlen array to check the
size properly.
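The fix amounts to giving NLA_MSECS a minimum length in netlink's per-type
validation table, roughly:

	/* in nla_attr_minlen[]: reject anything shorter than the u64
	 * that nla_get_msecs() will read */
	[NLA_MSECS]	= sizeof(u64),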
Cc: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3bf3f8b19d upstream.
Callers to __acpi_ioremap_fast() pass the bit_width that they found in the
acpi_generic_address structure. Convert from bits to bytes when passing to
__acpi_find_iomap() - as it wants to see bytes, not bits.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8bdafa39a4 upstream.
The icswx code introduced an A-B B-A deadlock:
       CPU0                           CPU1
       ----                           ----
  lock(&anon_vma->mutex);
                                      lock(&mm->mmap_sem);
                                      lock(&anon_vma->mutex);
  lock(&mm->mmap_sem);
Instead of using the mmap_sem to keep mm_users constant, take the
page table spinlock.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8feaa43494 upstream.
Since commit 188917e183, /proc/ppc64 is a
symlink to /proc/powerpc/. That means that creating /proc/ppc64/eeh will
end up with an inaccessible file that is not listed under /proc/powerpc/
and, therefore, not listed under /proc/ppc64/ either.
Creating /proc/powerpc/eeh fixes that problem and maintains the
compatibility intended with the ppc64 symlink.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9c740025c5 upstream.
During hotplug CPU add we get the following error:
Unexpected Error (0) returned from configure-connector
ibm,configure-connector returns 0 for configuration complete, so
catch this and avoid the error.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a11940978b upstream.
If we echo an address the hypervisor doesn't like to
/sys/devices/system/memory/probe we oops the box:
# echo 0x10000000000 > /sys/devices/system/memory/probe
kernel BUG at arch/powerpc/mm/hash_utils_64.c:541!
The backtrace is:
create_section_mapping
arch_add_memory
add_memory
memory_probe_store
sysdev_class_store
sysfs_write_file
vfs_write
SyS_write
In create_section_mapping we BUG if htab_bolt_mapping returned
an error. A better approach is to return an error which will
propagate back to userspace.
Rerunning the test with this patch applied:
# echo 0x10000000000 > /sys/devices/system/memory/probe
-bash: echo: write error: Invalid argument
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6083184269 upstream.
During memory hotplug testing, I got the following warning:
ERROR: Bad of_node_put() on /memory@0
of_node_release
kref_put
of_node_put
of_find_node_by_type
hot_add_node_scn_to_nid
hot_add_scn_to_nid
memory_add_physaddr_to_nid
...
The of_find_node_by_type() loop does the of_node_put for us, so we only
need to handle the case where we terminate the loop early.
As suggested by Stephen Rothwell we can do the of_node_put
unconditionally outside of the loop since of_node_put handles a
NULL argument fine.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a3fbbde70a upstream.
Mountpoint crossing is similar to following procfs symlinks - we do
not get ->d_revalidate() called for dentry we have arrived at, with
unpleasant consequences for NFS4.
Simple way to reproduce the problem in mainline:
cat >/tmp/a.c <<'EOF'
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
main()
{
	struct flock fl = {.l_type = F_RDLCK, .l_whence = SEEK_SET, .l_len = 1};
	if (fcntl(0, F_SETLK, &fl))
		perror("setlk");
}
EOF
cc /tmp/a.c -o /tmp/test
then on nfs4:
mount --bind file1 file2
/tmp/test < file1 # ok
/tmp/test < file2 # spews "setlk: No locks available"...
What happens is the missing call of ->d_revalidate() after mountpoint
crossing and that's where NFS4 would issue OPEN request to server.
The fix is simple - treat mountpoint crossing the same way we deal with
following procfs-style symlinks. I.e. set LOOKUP_JUMPED...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c4853efec6 upstream.
The P600 requires a small delay when changing states. Otherwise we may think
the board did not reset and we bail. This for kdump only and is particular
to the P600.
Signed-off-by: Mike Miller <mike.miller@hp.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5c8a0fbba5 upstream.
No one in their right mind would expect statfs() to not work on a
automounter managed mount point. Fix it.
[ I'm not sure about the "no one in their right mind" part. It's not
mounted, and you didn't ask for it to be mounted. But nobody will
really care, and this probably makes it match previous semantics, so..
- Linus ]
This mirrors the fix made to the quota code in 815d405cef.
Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5c62cb4860 upstream.
We did not increment the number of sectors written to disk
because we tested for equality with WRITE, which is incorrect as the
operations can be WRITE_FLUSH or WRITE_ODIRECT. This patch
fixes it by doing an '& WRITE' check instead.
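Illustration of the difference (the stats field name is a placeholder):
WRITE_FLUSH and WRITE_ODIRECT include the WRITE bit but do not compare
equal to WRITE, so only the bit test counts them correctly:

	if (operation & WRITE)			/* was: operation == WRITE */
		blkif->st_wr_sect += nsect;	/* placeholder counter name */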
Reported-by: Andy Burns <xen.lists@burns.me.uk>
Suggested-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f992ae801a upstream.
The following command sequence triggers an oops.
# mount /dev/sdb1 /mnt
# echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/delete
# umount /mnt
general protection fault: 0000 [#1] PREEMPT SMP
CPU 2
Modules linked in:
Pid: 791, comm: umount Not tainted 3.1.0-rc3-work+ #8 Bochs Bochs
RIP: 0010:[<ffffffff810d0879>] [<ffffffff810d0879>] __lock_acquire+0x389/0x1d60
...
Call Trace:
[<ffffffff810d2845>] lock_acquire+0x95/0x140
[<ffffffff81aed87b>] _raw_spin_lock+0x3b/0x50
[<ffffffff811573bc>] bdi_lock_two+0x5c/0x70
[<ffffffff811c2f6c>] bdev_inode_switch_bdi+0x4c/0xf0
[<ffffffff811c3fcb>] __blkdev_put+0x11b/0x1d0
[<ffffffff811c4010>] __blkdev_put+0x160/0x1d0
[<ffffffff811c40df>] blkdev_put+0x5f/0x190
[<ffffffff8118f18d>] kill_block_super+0x4d/0x80
[<ffffffff8118f4a5>] deactivate_locked_super+0x45/0x70
[<ffffffff8119003a>] deactivate_super+0x4a/0x70
[<ffffffff811ac4ad>] mntput_no_expire+0xed/0x130
[<ffffffff811acf2e>] sys_umount+0x7e/0x3a0
[<ffffffff81aeeeab>] system_call_fastpath+0x16/0x1b
This is because bdev holds on to disk but disk doesn't pin the
associated queue. If a SCSI device is removed while the device is
still open, the sdev puts the base reference to the queue on release.
When the bdev is finally released, the associated queue is already
gone along with the bdi and bdev_inode_switch_bdi() ends up
dereferencing already freed bdi.
Even if it were not for this bug, disk not holding onto the associated
queue is very unusual and error-prone.
Fix it by making add_disk() take an extra reference to its queue and
put it on disk_release() and ensuring that disk and its fops owner are
put in that order after all accesses to the disk and queue are
complete.
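In outline, the reference pattern described above looks roughly like this
(simplified, error handling omitted):

	/* in add_disk(): pin the queue for the disk's lifetime */
	blk_get_queue(disk->queue);

	/* in disk_release(): drop that reference last, when the final
	 * reference to the gendisk goes away */
	if (disk->queue)
		blk_put_queue(disk->queue);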
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dc6f55e9f8 upstream.
The sunrpc layer keeps a cache of recently used credentials and
'unx_match' is used to find the credential which matches the current
process.
However unx_match allows a match when the cached credential has extra
groups at the end of uc_gids list which are not in the process group list.
So if a process with a list of (say) 4 groups accesses a file and gains
access because of the last group in the list, and then another process
with the same uid and gid, but a gid list consisting of only the first
three of the gids of the original process, tries to access the file, it
will be granted access even though it shouldn't be, as the wrong rpc
credential will be used.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2af8de8c39 upstream.
Since there is no current software control for these they would otherwise
be left enabled, consuming power.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4f4488abc9 upstream.
The WM8962 has a separate software reset for the PLL registers. Ensure that
these are reset also on startup.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d558cfc300 upstream.
The current implementation in wm8711_set_dai_fmt always clears BIT[3:2]
(the Input Audio Data Bit Length Select) of WM8711_IFACE(07h) register.
Input Audio Data Bit Length Select bits are set by wm8711_hw_params,
we should leave BIT[3:2] untouched in wm8711_set_dai_fmt.
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 04c57163c8 upstream.
The Input Audio Data Bit Length Select is controlled by BIT[3:2] of
WM8711_IFACE(07h) register.
Current code incorrectly masks BIT[1:0] which is for Audio Data Format Select.
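For illustration (simplified, not the verbatim driver code), the hw_params
path needs to clear BIT[3:2] before inserting the new word length, leaving
the format bits in BIT[1:0] untouched:

	u16 iface = snd_soc_read(codec, WM8711_IFACE) & ~0x000c;
	/* ... OR in the word-length bits and write the register back ... */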
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0167ac67ff upstream.
Fix for an issue where, while discovery is in progress, hot unplug and hot plug
of an enclosure connected to the controller card causes the system to hang.
When a device that is in the process of being detected at driver load time
is removed, the device that is no longer present will not be added
to the list. So the code in _scsih_probe_sas() is rearranged so that
the devices that failed to be detected are not added to the list.
Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f7c9c6bb14 upstream.
When looking at memory consumption issues I noticed quite a
lot of memory in the kmalloc-2048 bucket:
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
6561 6471 98% 2.30K 243 27 15552K kmalloc-2048
Over 15MB. slub debug shows that cfq is responsible for almost
all of it:
# sort -nr /sys/kernel/slab/kmalloc-2048/alloc_calls
6402 .cfq_init_queue+0xec/0x460 age=43423/43564/43655 pid=1 cpus=4,11,13
In scsi_alloc_sdev we do scsi_alloc_queue but if slave_alloc
fails we don't free it with scsi_free_queue.
The patch below fixes the issue:
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
135 72 53% 2.30K 5 27 320K kmalloc-2048
# cat /sys/kernel/slab/kmalloc-2048/alloc_calls
3 .cfq_init_queue+0xec/0x460 age=3811/3876/3925 pid=1 cpus=4,11,13
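A simplified sketch of the error path described above (the label name is a
placeholder):

	if (shost->hostt->slave_alloc) {
		ret = shost->hostt->slave_alloc(sdev);
		if (ret) {
			/* free the queue set up by scsi_alloc_queue(),
			 * otherwise its allocation is leaked */
			scsi_free_queue(sdev->request_queue);
			goto out_device_destroy;	/* placeholder */
		}
	}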
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c68bf8eeaa upstream.
The call to complete() in st_scsi_execute_end() wakes up sleeping thread
in write_behind_check(), which frees the st_request, thus invalidating
the pointer to the associated bio structure, which is then passed to the
blk_rq_unmap_user(). Fix by storing pointer to bio structure into
temporary local variable.
This bug is present since at least linux-2.6.32.
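In outline the fix looks like this (simplified, field names approximate):

	struct bio *tmp = SRpnt->bio;	/* save before waking the waiter */

	if (SRpnt->waiting)
		complete(SRpnt->waiting);	/* may free SRpnt */
	blk_rq_unmap_user(tmp);			/* use the saved pointer */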
Signed-off-by: Petr Uzel <petr.uzel@suse.cz>
Reported-by: Juergen Groß <juergen.gross@ts.fujitsu.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Kai Mäkisara <kai.makisara@kolumbus.fi>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8cd79f2435 upstream.
This patch addresses an issue with buggy userspace code sending I/O
via scsi-generic that does not explicitly clear its associated read
buffers. It adds an explicit memset of the first SGL entry within
tcm_loop_new_cmd_map() for SCF_SCSI_CONTROL_SG_IO_CDB payloads that
are currently guaranteed to be a single SGL by target-core code.
This issue is a side effect of the v3.1-rc1 merge to remove the
extra memcpy between certain control CDB types using a contiguous
+ cleared buffer in target-core, and performing a memcpy into the
SGL list within tcm_loop.
It was originally manifesting itself by udev + scsi_id + scsi-generic
not properly setting up the expected /dev/disk/by-id/ symlinks because
the INQUIRY payload was containing extra bogus data preventing the
proper NAA IEEE WWN from being parsed by userspace.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bfa02b0da6 upstream.
Commit 2265cef2 (hwmon: (w83627ehf) Properly report PECI and AMD-SI
sensor types) results in a kernel panic if data->temp_label was not
initialized.
The problem was found with the W83627DHG-P chip.
Add a check that data->temp_label was set before use.
Based on incomplete patch by Alexander Beregalov.
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Tested-by: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2265cef275 upstream.
When temperature sources are PECI or AMD-SI agents, it makes no sense
to report their type as diode or thermistor. Instead we must report
their digital nature.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2aba6cac2a upstream.
The definition of TO_ATTR_NO in the non-SMP case is wrong. As the SMP
definition resolves to the correct value, just use this for both
cases.
Without this fix the temperature attributes are named temp0_* instead
of temp2_*, so libsensors won't pick them up. Broken since kernel 3.0.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Tested-by: Phil Sutter <phil@nwl.cc>
Acked-by: Durgadoss R <Durgadoss.r@intel.com>
Acked-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ab5dbebe33 upstream.
The P600 requires a small delay when changing states. Otherwise we may think
the board did not reset and we bail. This is for kdump only and is particular
to the P600.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b2c0a863e1 upstream.
Originally, the runtime PM core would send an idle notification
whenever a suspend attempt failed. The idle callback routine could
then schedule a delayed suspend for some time later.
However this behavior was changed by commit
f71648d73c (PM / Runtime: Remove idle
notification after failing suspend). No notifications were sent, and
there was no clear mechanism to retry failed suspends.
This caused problems for the usbhid driver, because it fails
autosuspend attempts as long as a key is being held down. A companion
patch changes the PM core's behavior, but we also need to change the
USB core. In particular, this patch (as1493) updates the device's
last_busy time when an autosuspend fails, so that the PM core will
retry the autosuspend in the future when the delay time expires
again.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Henrik Rydberg <rydberg@euromail.se>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 886486b792 upstream.
Originally, the runtime PM core would send an idle notification
whenever a suspend attempt failed. The idle callback routine could
then schedule a delayed suspend for some time later.
However this behavior was changed by commit
f71648d73c (PM / Runtime: Remove idle
notification after failing suspend). No notifications were sent, and
there was no clear mechanism to retry failed suspends.
This caused problems for the usbhid driver, because it fails
autosuspend attempts as long as a key is being held down. Therefore
this patch (as1492) adds a mechanism for retrying failed
autosuspends. If the callback routine updates the last_busy field so
that the next autosuspend expiration time is in the future, the
autosuspend will automatically be rescheduled.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Henrik Rydberg <rydberg@euromail.se>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3f198dfee4 upstream.
Help text under a choice menu is never displayed because it does not have
a symbol name associated with it; however, many Kconfigs have help text
under choice, assuming that it will be displayed when the user selects help.
for example in Kconfig if we have:
choice
	prompt "Choice"
	---help---
	  HELP TEXT ...

config A
	bool "A"

config B
	bool "B"

endchoice
Without this patch "HELP TEXT" is not displayed when user selects help
option when "Choice" is highlighted from menuconfig or xconfig or
gconfig.
This patch changes the logic in menu_get_ext_help to display help for
cases which don't have symbol names, like choice.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Reviewed-by: Stuart Menefy <stuart.menefy@st.com>
Reviewed-by: Arnaud Lacombe <lacombar@gmail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 64912e997f upstream.
Polarity needs to be set according to the connector status (connected
or disconnected). Set it up in hpd_init() so the first hotplug works
reliably no matter what the initial connector state is. hpd_init()
also covers resume, so HPD will work correctly after resume as well.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a18cee15ed upstream.
Allow the user to override whether MSIs are enabled
or not on supported ASICs. MSIs are disabled by default
on IGP chips as they tend not to work. However certain
IGP chips only seem to work with MSIs enabled.
I suspect this is a chipset or bios issue, but I'm not sure
what the proper fix is. This will at least make diagnosing
and working around the problem much easier.
See:
https://bugs.freedesktop.org/show_bug.cgi?id=37679
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8ab250d448 upstream.
Polarity needs to be set according to the connector status (connected
or disconnected). Set it up at module init so the first hotplug works
reliably no matter what the initial connector state is.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 340764465a upstream.
Since the force handling rework of d0d0a225e6
we could end up bouncing the connector status between disconnected and unknown.
When the connector status changes, a call to output_poll_changed happens, which
in turn asks for detect again, but with force set.
So set the load detect flags whenever we report the connector as
connected or unknown; this avoids bouncing between disconnected and unknown.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Cc: Stefan Lippers-Hollmann <s.L-H@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 51e4152a96 upstream.
Some BIOS report invalid pins as digital output pins. The driver checks
the connection but it doesn't do it fully correctly, and it leaves some
undefined value as the audio-out widget, which makes the driver spewing
warnings. This patch fixes the issue.
Reference: https://bugzilla.novell.com/show_bug.cgi?id=727348
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ad5d875511 upstream.
These codecs have SPDIF-in, which is new to the 92HD83xxx compatible
families, so a bit of logic is added to support them.
Signed-off-by: Charles Chin <Charles.Chin@idt.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 35c11777b9 upstream.
The power-widget control in patch_stac92hd83xxx() never worked properly,
thus it's safer to turn it off as default for now.
Signed-off-by: Charles Chin <Charles.Chin@idt.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 862a6244eb upstream.
If the device is unplugged while running, it is possible for a PCM
device to be closed after the disconnect callback has returned. This
means that kill_stream_urb() and disable_iso_interface() would try to
access already-invalid or freed USB data structures.
The function free_usb_related_resources() was intended to prevent this,
but forgot to clear the affected variables.
Reported-and-tested-by: Olivier Courtay <olivier@courtay.org>
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit b64b73d7d0 ]
This resolves a regression seen by some users of bridging.
Some users use the bridge like a dummy device.
They expect to be able to put an IPv6 address on the device
with no ports attached. Although there are better ways of doing
this, there is no reason to not allow it.
Note: the bridge still will reflect the state of ports in the
bridge if there are any added.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 405e44f2e3 upstream.
"page" may have changed to point to the next hugepage after the loop
completed, The references have been taken on the head page, so the
put_page must happen there too.
This is a longstanding issue pre-thp inclusion.
It's totally unclear how these page_cache_add_speculative and
pte_val(pte) != pte_val(*ptep) checks are necessary across all the
powerpc gup_fast code, when x86 doesn't need any of that: there's no way
the page can be freed with irq disabled so we're guaranteed the
atomic_inc will happen on a page with page_count > 0 (so not needing the
speculative check).
The pte check is also meaningless on x86: no need to rollback on x86 if
the pte changed, because the pte can still change a CPU tick after the
check succeeded and it won't be rolled back in that case. The important
thing is we got a reference on a valid page that was mapped there a CPU
tick ago. So not knowing the soft tlb refill code of ppc64 in great
detail I'm not removing the "speculative" page_count increase and the
pte checks across all the code, but unless there's a strong reason for
it they should be later cleaned up too.
If a pte can change from huge to non-huge (like it could happen with
THP) passing a pte_t *ptep to gup_hugepte() would also require to repeat
the is_hugepd in gup_hugepte(), but that shouldn't happen with hugetlbfs
only so I'm not altering that.
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 12d0d0d3a7 upstream.
The commit aabdcb0b55 ("can bcm: fix tx_setup
off-by-one errors") fixed only a part of the original problem reported by
Andre Naujoks. It turned out that the original code needed to be re-ordered
to reduce complexity and to finally fix the reported frame counting issues.
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6fd4562178 upstream.
When the link state changes, xHC will report a port status change event
and set the PORT_PLC bit, for both USB3 and USB2 root hub ports.
The PLC will be cleared by usbcore for USB3 root hub ports, but not for
USB2 ports, because they do not report USB_PORT_STAT_C_LINK_STATE in
wPortChange.
Clear it for USB2 root hub ports in handle_port_status().
Signed-off-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2dc3753997 upstream.
Some alternate interface settings have no endpoints associated with them.
This shows up in some USB webcams, particularly the Logitech HD 1080p,
which uses the uvcvideo driver. If a driver switches between two alt
settings with no endpoints, there is no need to issue a configure endpoint
command, because there is no endpoint information to update.
The only time a configure endpoint command with just the add slot flag set
makes sense is when the driver is updating hub characteristics in the slot
context. However, that code never calls xhci_check_bandwidth, so we
should be safe not issuing a command if only the slot context add flag is
set.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f02fe890ec upstream.
Scanning cannot be run during suspend or hibernation, but if
usb-stor-scan freezes another thread waiting on scanning to
complete may fail to freeze.
However, if usb-stor-scan is left freezable without ever actually
freezing then the freezer will wait on it to exit, and threads
waiting for scanning to finish will no longer be blocked. One
problem with this approach is that usb-stor-scan has a delay to
wait for devices to settle (which is currently the only point where
it can freeze). To work around this we can request that the freezer
send a fake signal when freezing, then use interruptible sleep to
wake the thread early when freezing happens.
To make this happen, the following changes are made to
usb-stor-scan:
* Use set_freezable_with_signal() instead of set_freezable() to
request a fake signal when freezing
* Use wait_event_interruptible_timeout() instead of
wait_event_freezable_timeout() to avoid freezing
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c510eae377 upstream.
This device declares itself to be vendor specific.
It therefore needs to be added to the device table
to make btusb bind.
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f78b68261e upstream.
Today I noticed that the usb bluetooth adapter (BCM2046B1) on my 2011
mac mini was not working. I've created a patch to get it going.
Signed-off-by: Jurgen Kramer <gtmkramer@xs4all.nl>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2d25f8b462 upstream.
The new Ath3k needs to download a patch and a radio table,
and it keeps the same PID/VID even after downloading the patch and radio
table. This patch uses the bcdDevice (Device Release Number) to
judge whether the chip has been patched or not. The initial bcdDevice
value of the chip is 0x0001; this value increases after the patch and
radio table have been downloaded.
Signed-off-by: Steven.Li <yongli@qca.qualcomm.com>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a63b723d02 upstream.
This patch against current git adds the hardware ID for the Apple
MacBookAir4,1, released in July 2011. The device features a BCM2046
USB chip. The patch was inspired by the previous modifications adding
support for the MacBookAir3,x.
Signed-off-by: Pieter-Augustijn Van Malleghem <p-a@scarlet.be>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bca0beb936 upstream.
The AX88772B uses only 11 bits of the header for the actual size; the other bits
are used for something else. This fills dmesg with messages:
asix_rx_fixup() Bad Header Length
This patch trims the check to only 11 bits. I believe that on older chips the
remaining 5 top bits are unused.
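Roughly, the size extraction becomes a mask of the low 11 bits (a sketch,
not the verbatim asix_rx_fixup() change):

	size = (u16)(header & 0x7ff);	/* only bits 10:0 carry the length */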
Signed-off-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c2d7b49f42 upstream.
When an xHC host is unable to handle isochronous transfers in the
interval, it reports a Missed Service Error event and skips some tds.
Currently the xhci driver handles an MSE event in the following ways:
1. When it encounters an MSE event, it sets the ep->skip flag, updates the
event ring dequeue pointer and returns.
2. When it encounters the next event on this ep, the driver runs the
do-while loop, fetching tds from the ep's td_list to find the td
corresponding to this event. All missed tds are marked as short
transfers (-EXDEV).
The do-while loop will end in two ways:
1. If the td pointed by the event trb is found;
2. If the ep ring's td_list is empty.
However, if buggy HW reports some unexpected event (for example, an
overrun event following an MSE event while the ep ring is actually not
empty), the driver will never find the td, and it will loop until the
td_list is empty.
Unfortunately, the spinlock is dropped when giving back a urb in the
do-while loop. While the spinlock is released, the class driver
may still submit urbs and add tds to the td_list. This may cause
disaster, since the td_list will never be empty and the loop never ends,
and the system hangs.
To fix this, count the number of TDs on the ep ring before skipping TDs,
and quit the loop when skipped that number of tds. This guarantees the
do-while loop will end after certain number of cycles, and driver will
not be trapped in an infinite loop.
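A simplified sketch of that guard (not the full event handler):

	struct xhci_td *tmp;
	unsigned int td_num = 0;

	/* count what is on the ring before we start skipping */
	list_for_each_entry(tmp, &ep_ring->td_list, td_list)
		td_num++;

	/* inside the skip loop: once that many TDs have been handled,
	 * stop, even if the class driver queued new ones meanwhile */
	if (ep->skip && td_num == 0)
		goto cleanup;		/* placeholder exit label */
	if (ep->skip)
		td_num--;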
Signed-off-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 8a9af4fdf6 upstream.
usb_ifnum_to_if() can return NULL if the USB device does not have a
configuration installed (usb_device->actconfig == NULL), or if we can't
find the interface number in the installed configuration. Return an
error instead of crashing.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 75bc8ef528 upstream.
The cdc_ncm driver still has a few places where stack variables are
passed to the cdc_ncm_do_request function. This triggers a stack trace in
lib/dma-debug.c if the CONFIG_DEBUG_DMA_API option is set.
Adjust these calls to pass parameters that have been allocated with
kzalloc.
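The general rule being applied (a sketch with placeholder names): buffers
handed to USB control transfers must come from the heap, not the caller's
stack, or DMA-API debugging will rightly complain:

	__le16 *max_size = kzalloc(sizeof(*max_size), GFP_KERNEL);

	if (!max_size)
		return -ENOMEM;
	/* ... pass max_size to the control request instead of a
	 * pointer to a local variable ... */
	kfree(max_size);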
Signed-off-by: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ce7e906595 upstream.
Here is a patch for a new PID (zeitcontrol-device mifare-reader FT232BL (like FT232BM but lead-free)).
Signed-off-by: Artur Zimmer <artur128@3dzimmer.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2f1def2695 upstream.
A new device ID pair is added for Sierra Wireless MC8305.
Signed-off-by: Florian Echtler <floe@butterbrot.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 77636c86a6 upstream.
The sequence to put a port in test mode is not complete.
According to the EHCI specification, all enabled ports must be
put in suspend.
Signed-off-by: Boris Todorov <boris.st.todorov@gmail.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c2e2a313ff upstream.
Executing 'rmmod rtl8150' does not return (if your device is connected
to a host); the root cause is that tasklet_disable() causes tasklet_kill() to
block, so remove it from rtl8150_disconnect().
Signed-off-by: Huajun Li <huajun.li.lee@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6d6a435190 upstream.
Ceph users reported that when using Ceph on ext4, the filesystem
would often become corrupted, containing inodes with incorrect
i_blocks counters.
I managed to reproduce this with a very hacked-up "streamtest"
binary from the Ceph tree.
Ceph is doing a lot of xattr writes, to out-of-inode blocks.
There is also another thread which does sync_file_range and close,
of the same files. The problem appears to happen due to this race:
sync/flush thread                    xattr-set thread
-----------------                    ----------------
do_writepages                        ext4_xattr_set
 ext4_da_writepages                   ext4_xattr_set_handle
  mpage_da_map_blocks                  ext4_xattr_block_set
  set DELALLOC_RESERVE
                                        ext4_new_meta_blocks
                                         ext4_mb_new_blocks
                                          if (!i_delalloc_reserved_flag)
                                            vfs_dq_alloc_block
   ext4_get_blocks
    down_write(i_data_sem)
    set i_delalloc_reserved_flag
    ...
    up_write(i_data_sem)
                                          if (i_delalloc_reserved_flag)
                                            vfs_dq_alloc_block_nofail
In other words, the sync/flush thread pops in and sets
i_delalloc_reserved_flag on the inode, which makes the xattr thread
think that it's in a delalloc path in ext4_new_meta_blocks(),
and add the block for a second time, after already having added
it once in the !i_delalloc_reserved_flag case in ext4_mb_new_blocks.
The real problem is that we shouldn't be using the DELALLOC_RESERVED
state flag, and instead we should be passing
EXT4_GET_BLOCKS_DELALLOC_RESERVE down to ext4_map_blocks() instead of
using an inode state flag. We'll fix this for now with using
i_data_sem to prevent this race, but this is really not the right way
to fix things.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5930ea6438 upstream.
ext4_dx_add_entry manipulates bh2 and frames[0].bh, which are two buffer_heads
that point to directory blocks assigned to the directory inode. However, the
function calls ext4_handle_dirty_metadata with the inode of the file that's
being added to the directory, not the directory inode itself. Therefore,
correct the code to dirty the directory buffers with the directory inode, not
the file inode.
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f9287c1f2d upstream.
ext4_mkdir calls ext4_handle_dirty_metadata with dir_block and the inode "dir".
Unfortunately, dir_block belongs to the newly created directory (which is
"inode"), not the parent directory (which is "dir"). Fix the incorrect
association.
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bcaa992975 upstream.
When ext4_rename performs a directory rename (move), dir_bh is a
buffer that is modified to update the '..' link in the directory being
moved (old_inode). However, ext4_handle_dirty_metadata is called with
the old parent directory inode (old_dir) and dir_bh, which is
incorrect because dir_bh does not belong to the parent inode. Fix
this error.
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1cd9f0976a upstream.
This doesn't make much sense, and it exposes a bug in the kernel where
attempts to create a new file in an append-only directory using
O_CREAT will fail (but still leave a zero-length file). This was
discovered when xfstests #79 was generalized so it could run on all
file systems.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 93b465c2e1 upstream.
Since we're using non-atomic radix tree allocations, we
should be protecting the tree using a mutex and not a
spinlock.
Non-atomic allocations and process context locking is good enough,
as the tree is manipulated only when locks are registered/
unregistered/requested/freed.
The locks themselves are still protected by spinlocks of course,
and mutexes are not involved in the locking/unlocking paths.
Signed-off-by: Juan Gutierrez <jgutierrez@ti.com>
[ohad@wizery.com: rewrite the commit log, #include mutex.h, add minor
commentary]
[ohad@wizery.com: update register/unregister parts in hwspinlock.txt]
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Upstream commit effd4d9aec.
Since the dawn of its time, iwlwifi has used
interruptible waits to wait for synchronous
commands and firmware loading.
This leads to "interesting" bugs, because it
can't actually handle the interruptions; for
example when a command sending is interrupted
it will assume the command completed fully,
and then leave it pending, which leads to all
kinds of trouble when the command finishes
later.
Since there's no easy way to gracefully deal
with interruptions, fix the driver to not use
interruptible waits.
This at least fixes the error
iwlagn 0000:02:00.0: Error: Response NULL in 'REPLY_SCAN_ABORT_CMD'
I have seen in P2P testing, but it is likely
that there are other errors caused by this.
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1117f72ea0 upstream.
The CLOEXEC bit is magical, and for performance (and semantic) reasons we
don't actually maintain it in the file descriptor itself, but in a
separate bit array. Which means that when we show f_flags, the CLOEXEC
status is shown incorrectly: we show the status not as it is now, but as
it was when the file was opened.
Fix that by looking up the bit properly in the 'fdt->close_on_exec' bit
array.
Uli needs this in order to re-implement the pfiles program:
"For normal file descriptors (not sockets) this was the last piece of
information which wasn't available. This is all part of my 'give
Solaris users no reason to not switch' effort. I intend to offer the
code to the util-linux-ng maintainers."
Requested-by: Ulrich Drepper <drepper@akkadia.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a3defbe5c3 upstream.
The case of address space randomization being disabled in runtime through
randomize_va_space sysctl is not treated properly in load_elf_binary(),
resulting in SIGKILL coming at exec() time for certain PIE-linked binaries
in case the randomization has been disabled at runtime prior to calling
exec().
Handle the randomize_va_space == 0 case the same way as if we were not
supporting .text randomization at all.
Based on original patch by H.J. Lu and Josh Boyer.
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: H.J. Lu <hongjiu.lu@intel.com>
Cc: <stable@kernel.org>
Tested-by: Josh Boyer <jwboyer@redhat.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 70b50f94f1 upstream.
Michel while working on the working set estimation code, noticed that
calling get_page_unless_zero() on a random pfn_to_page(random_pfn)
wasn't safe, if the pfn ended up being a tail page of a transparent
hugepage under splitting by __split_huge_page_refcount().
He then found the problem could also theoretically materialize with
page_cache_get_speculative() during the speculative radix tree lookups
that uses get_page_unless_zero() in SMP if the radix tree page is freed
and reallocated and get_user_pages is called on it before
page_cache_get_speculative has a chance to call get_page_unless_zero().
So the best way to fix the problem is to keep page_tail->_count zero at
all times. This will guarantee that get_page_unless_zero() can never
succeed on any tail page. page_tail->_mapcount is guaranteed zero and
is unused for all tail pages of a compound page, so we can simply
account the tail page references there and transfer them to
tail_page->_count in __split_huge_page_refcount() (in addition to the
head_page->_mapcount).
While debugging this s/_count/_mapcount/ change I also noticed get_page is
called by direct-io.c on pages returned by get_user_pages. That wasn't
entirely safe because the two atomic_inc in get_page weren't atomic. As
opposed to other get_user_page users like secondary-MMU page fault to
establish the shadow pagetables would never call any superfluous get_page
after get_user_page returns. It's safer to make get_page universally safe
for tail pages and to use get_page_foll() within follow_page (inside
get_user_pages()). get_page_foll() is safe to do the refcounting for tail
pages without taking any locks because it is run within PT lock protected
critical sections (PT lock for pte and page_table_lock for
pmd_trans_huge).
The standard get_page() as invoked by direct-io instead will now take
the compound_lock but still only for tail pages. The direct-io paths
are usually I/O bound and the compound_lock is per THP so very
fine-grained, so there's no risk of scalability issues with it. A simple
direct-io benchmark with all the lockdep prove-locking and spinlock
debugging infrastructure enabled shows identical performance and no
overhead. So it's worth it. Ideally direct-io should stop calling
get_page() on pages returned by get_user_pages(). The spinlock in
get_page() is already optimized away for no-THP builds but doing
get_page() on tail pages returned by GUP is generally a rare operation
and usually only run in I/O paths.
This new refcounting on page_tail->_mapcount in addition to avoiding new
RCU critical sections will also allow the working set estimation code to
work without any further complexity associated to the tail page
refcounting with THP.
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit d0e5d83284 ]
If a VM is saved and restored (or migrated) the netback driver will no
longer process any Tx packets from the frontend. xenvif_up() does not
schedule the processing of any pending Tx requests from the front end
because the carrier is off. Without this initial kick the frontend
just adds Tx requests to the ring without raising an event (until the
ring is full).
This was caused by 47103041e9 (net:
xen-netback: convert to hw_features) which reordered the calls to
xenvif_up() and netif_carrier_on() in xenvif_connect().
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 7091fbd82c ]
This is a minor change.
Up until kernel 2.6.32, getsockopt(fd, SOL_PACKET, PACKET_STATISTICS,
...) would return total and dropped packets since its last invocation. The
introduction of socket queue overflow reporting [1] changed drop
rate calculation in the normal packet socket path, but not when using a
packet ring. As a result, the getsockopt now returns different statistics
depending on the reception method used. With a ring, it still returns the
count since the last call, as counts are incremented in tpacket_rcv and
reset in getsockopt. Without a ring, it returns 0 if no drops occurred
since the last getsockopt and the total drops over the lifespan of
the socket otherwise. The culprit is this line in packet_rcv, executed
on a drop:
drop_n_acct:
po->stats.tp_drops = atomic_inc_return(&sk->sk_drops);
As it shows, the new drop number is taken from the socket drop counter,
which is not reset at getsockopt. I put together a small example
that demonstrates the issue [2]. It runs for 10 seconds and overflows
the queue/ring on every odd second. The reported drop rates are:
ring: 16, 0, 16, 0, 16, ...
non-ring: 0, 15, 0, 30, 0, 46, 0, 60, 0 , 74.
Note how the even non-ring counts monotonically increase. Because the
getsockopt adds tp_drops to tp_packets, total counts are similarly
reported cumulatively. Long story short, reinstating the original code, as
the patch below does, fixes the issue at the cost of additional per-packet
cycles. Another solution that does not introduce per-packet overhead
is to keep the current data path, record the value of sk_drops at
getsockopt() call N in a new field in struct packet_sock, and subtract
that when reporting at call N+1. I'd be happy to code that instead;
it's just messier.
[1] http://patchwork.ozlabs.org/patch/35665/
[2] http://kernel.googlecode.com/files/test-packetsock-getstatistics.c
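As a minimal sketch (not the verbatim upstream diff), the reinstated accounting looks roughly like:
drop_n_acct:
	spin_lock(&sk->sk_receive_queue.lock);
	po->stats.tp_drops++;		/* reset by getsockopt(PACKET_STATISTICS) */
	atomic_inc(&sk->sk_drops);	/* keep feeding the socket drop counter */
	spin_unlock(&sk->sk_receive_queue.lock);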
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 676a1184e8 ]
ipv6_ac_list and ipv6_fl_list from listening socket are inadvertently
shared with new socket created for connection.
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit e730c82347 ]
The USE_PHYLIB flag in tg3_remove_one() is being checked incorrectly. As a
result, tg3_phy_fini->phy_disconnect is never called when the tg3 module
is removed.
In my case this resulted in panics in phy_state_machine calling function
phydev->adjust_link.
So correct this check.
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Acked-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 1e5289e121 ]
lost_skb_hint is used by tcp_mark_head_lost() to mark the first unhandled skb.
lost_cnt_hint is the number of packets or sacked packets before the lost_skb_hint;
When shifting a skb that is before the lost_skb_hint, if tcp_is_fack() is true,
the skb has already been counted in the lost_cnt_hint; if tcp_is_fack() is false,
tcp_sacktag_one() will increase the lost_cnt_hint. So tcp_shifted_skb() does not
need to adjust the lost_cnt_hint by itself. When shifting a skb that is equal to
lost_skb_hint, the shifted packets will not be counted by tcp_mark_head_lost().
So tcp_shifted_skb() should adjust the lost_cnt_hint even if tcp_is_fack(tp) is true.
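A minimal sketch of that rule in tcp_shifted_skb() (not the verbatim upstream diff):
/* account shifted packets whenever the skb being shifted is the
 * lost_skb_hint itself, whether or not FACK is in use */
if (skb == tp->lost_skb_hint)
	tp->lost_cnt_hint += pcount;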
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 260fcbeb1a ]
tcp_v4_clear_md5_list() assumes that multiple tcp md5sig peers
only hold one reference to md5sig_pool, but tcp_v4_md5_do_add()
increases the use count of md5sig_pool for each peer. This patch
makes tcp_v4_md5_do_add() only increase the use count for the first
tcp md5sig peer.
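A minimal sketch of the idea, assuming that era's md5sig_info field names (not the verbatim upstream diff):
/* in tcp_v4_md5_do_add(): take the md5sig_pool reference only when
 * the first peer key is added to this socket */
if (!tp->md5sig_info->entries4 && !tcp_alloc_md5sig_pool(sk))
	return -ENOMEM;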
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit d5123480b1 ]
There is currently no check whether netconsole is already enabled, so each
'echo 1 > enabled' will always increment the net_device reference count.
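A minimal sketch of the guard (identifier names assumed, not the verbatim driver code):
/* in the 'enabled' store handler: ignore writes that do not change the
 * state, so the net_device reference is taken only once */
if (enabled == nt->enabled) {
	printk(KERN_INFO "netconsole: network logging already %s\n",
	       enabled ? "started" : "stopped");
	return -EINVAL;
}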
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
Acked-by: Flavio Leitner <fbl@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit cb2d0f3e96 ]
Packets should always be forwarded to the lowerdev using dev_forward_skb.
vlan->forward is for packets being forwarded directly to another macvlan/
macvtap device (used for multicast in bridge mode).
Reported-and-tested-by: Shlomo Pongratz <shlomop@mellanox.com>
Signed-off-by: David Ward <david.ward@ll.mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit aabdcb0b55 ]
This patch fixes two off-by-one errors that canceled each other out.
Checking for the same condition twice in bcm_tx_timeout_tsklet() reduced
the count of frames to be sent by one. This did not show up the first time
tx_setup is invoked, as an additional frame is sent due to TX_ANNOUNCE.
Invoking a second tx_setup on the same item led to a number of sent frames
reduced by one.
Reported-by: Andre Naujoks <nautsch@gmail.com>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 1ce5cce895 ]
Need to clean up bridge device timers and ports when the bridge
device is being removed via netlink.
This fixes the problem of observed when doing:
ip link add br0 type bridge
ip link set dev eth1 master br0
ip link set br0 up
ip link del br0
which would cause br0 to hang in unregister_netdev because
of leftover reference count.
Reported-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 4d97480b18 ]
The bond->recv_probe is called in bond_handle_frame() when
a packet is received, but bond_close() sets it to NULL. So,
a panic occurs when both functions work in parallel.
Why this happens:
After the null pointer check of bond->recv_probe, an sk_buff is
duplicated and bond->recv_probe is called in bond_handle_frame.
So a panic occurs when bond_close() is called between the
check and the call of bond->recv_probe.
Patch:
This patch uses a local function pointer of bond->recv_probe
in bond_handle_frame(). So, it can avoid the null pointer
dereference.
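A minimal sketch of that approach (simplified; clone error handling omitted; assumed close to, but not necessarily identical with, the upstream diff):
/* in bond_handle_frame(): read recv_probe once, so a concurrent
 * bond_close() clearing it cannot be dereferenced after the check */
recv_probe = ACCESS_ONCE(bond->recv_probe);
if (recv_probe) {
	struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
	if (likely(nskb)) {
		recv_probe(nskb, bond, slave);
		dev_kfree_skb(nskb);
	}
}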
Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Cc: Jay Vosburgh <fubar@us.ibm.com>
Cc: Andy Gospodarek <andy@greyhouse.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9d898966c4 upstream.
jsm uses a write queue that copies from the uart_core circular buffer. This
copying, however, has some bugs, like not wrapping the head counter. Since
this write queue is also a circular buffer, the consumer function is
ready to use the uart_core circular buffer directly.
This buggy copying function was causing some bytes to be dropped when
transmitting to a raw tty, as in the following example.
[root@hostname ~]$ cat /dev/ttyn1 > cascardo/dump &
[1] 2658
[root@hostname ~]$ cat /proc/tty/drivers > /dev/ttyn0
[root@hostname ~]$ cat /proc/tty/drivers
/dev/tty /dev/tty 5 0 system:/dev/tty
/dev/console /dev/console 5 1 system:console
/dev/ptmx /dev/ptmx 5 2 system
/dev/vc/0 /dev/vc/0 4 0 system:vtmaster
jsm /dev/ttyn 250 0-31 serial
serial /dev/ttyS 4 64-95 serial
hvc /dev/hvc 229 0-7 system
pty_slave /dev/pts 136 0-1048575 pty:slave
pty_master /dev/ptm 128 0-1048575 pty:master
unknown /dev/tty 4 1-63 console
[root@hostname ~]$ cat cascardo/dump
/dev/tty /dev/tty 5 0 system:/dev/tty
/dev/console /dev/console 5 1 system:console
/dev/ptmx /dev/ptmx 5 2 system
/dev/vc/0 /dev/vc/0 4 0 system:vtmaste[root@hostname ~]$
This patch drops the driver write queue entirely, using the circular
buffer from uart_core only.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[This does not correspond to any specific patch in the upstream tree as it was
fixed accidentally by rewriting the code in the 3.1 release]
https://bugzilla.redhat.com/show_bug.cgi?id=740121
1. Luke Macken triggered WARN_ON(!(group_stop & GROUP_STOP_SIGMASK))
in do_signal_stop().
This is because do_signal_stop() clears the GROUP_STOP_SIGMASK part
unconditionally but doesn't update it if task_is_stopped().
2. Looking at this problem I noticed that WARN_ON_ONCE(!ptrace) is
not right: a stopped-but-resumed tracee can clone the untraced
thread in the SIGNAL_STOP_STOPPED group, and the new thread can start
another group-stop.
Remove this warning; we need more fixes to make it true.
Reported-by: Luke Macken <lmacken@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Since we've now turned around and made LOOKUP_FOLLOW *not* force an
automount, we want to add the ability to force an automount event on
lookup even if we don't happen to have one of the other flags that force
it implicitly (LOOKUP_OPEN, LOOKUP_DIRECTORY, LOOKUP_PARENT..)
Most cases will never want to use this, since you'd normally want to
delay automounting as long as possible, which usually implies
LOOKUP_OPEN (when we open a file or directory, we really cannot avoid
the automount any more).
But Trond argued sufficiently forcefully that at a minimum bind mounting
a file and quotactl will want to force the automount lookup. Some other
cases (like nfs_follow_remote_path()) could use it too, although
LOOKUP_DIRECTORY would work there as well.
This commit just adds the flag and logic, no users yet, though. It also
doesn't actually touch the LOOKUP_NO_AUTOMOUNT flag that is related, and
was made irrelevant by the same change that made us not follow on
LOOKUP_FOLLOW.
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Ian Kent <raven@themaw.net>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: David Howells <dhowells@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 815d405cef upstream.
The consensus seems to be that system calls such as stat() etc should
not trigger an automount. Neither should the l* versions.
This patch therefore adds a LOOKUP_AUTOMOUNT flag to tag those lookups
that _should_ trigger an automount on the last path element.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
[ Edited to leave out the cases that are already covered by LOOKUP_OPEN,
LOOKUP_DIRECTORY and LOOKUP_CREATE - all of which also fundamentally
force automounting for their own reasons - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0ec26fd069 upstream.
Prior to 2.6.38 automount would not trigger on either stat(2) or
lstat(2) on the automount point.
After 2.6.38, with the introduction of the ->d_automount()
infrastructure, stat(2) and others would start triggering automount
while lstat(2), etc. still would not. This is a regression and a
userspace ABI change.
Problem originally reported here:
http://thread.gmane.org/gmane.linux.kernel.autofs/6098
It appears that there was an attempt at fixing various userspace tools
to not trigger the automount. But since the stat system call is
rather common it is impossible to "fix" all userspace.
This patch reverts the original behavior, which is to not trigger on
stat(2) and other symlink following syscalls.
[ It's not really clear what the right behavior is. Apparently Solaris
does the "automount on stat, leave alone on lstat". And some programs
can get unhappy when "stat+open+fstat" ends up giving a different
result from the fstat than from the initial stat.
But the change in 2.6.38 resulted in problems for some people, so
we're going back to old behavior. Maybe we can re-visit this
discussion at some future date - Linus ]
Reported-by: Leonardo Chiquitto <leonardo.lists@gmail.com>
Acked-by: Ian Kent <raven@themaw.net>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5a30d8a2b8 upstream.
[ backport for 3.0.x: LOOKUP_PARENT => LOOKUP_CONTINUE by Chuck Ebbert
<cebbert@redhat.com> ]
Autofs may set the DCACHE_NEED_AUTOMOUNT flag on negative dentries. These
need attention from the automounter daemon regardless of the LOOKUP_FOLLOW flag.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Ian Kent <raven@themaw.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1fa1e7f615 upstream.
Since the commit below which added O_PATH support to the *at() calls, the
error return for readlink/readlinkat for the empty pathname has switched
from ENOENT to EINVAL:
commit 65cfc67223
Author: Al Viro <viro@zeniv.linux.org.uk>
Date: Sun Mar 13 15:56:26 2011 -0400
readlinkat(), fchownat() and fstatat() with empty relative pathnames
This is both unexpected for userspace and makes readlink/readlinkat
inconsistent with all other interfaces, and inconsistent with our stated
return for these pathnames.
As the readlinkat call does not have a flags parameter, we cannot use the
AT_EMPTY_PATH approach used in the other calls. Therefore expose whether
the original path was in fact empty via a new user_path_at_empty() path
lookup function. Use this to determine whether to default to EINVAL or
ENOENT for failures.
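A minimal sketch of how a caller such as readlinkat() might use the new helper (calling convention assumed, not the verbatim upstream code):
int empty = 0;
error = user_path_at_empty(dfd, pathname, 0, &path, &empty);
if (error)
	return error;
/* ... and on the failure paths that used to return -EINVAL: */
error = empty ? -ENOENT : -EINVAL;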
Addresses http://bugs.launchpad.net/bugs/817187
[akpm@linux-foundation.org: remove unused getname_flags()]
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8535639810 upstream.
ubd_file_size() cannot use ubd_dev->cow.file because at this time
ubd_dev->cow.file is not initialized.
Therefore, ubd_file_size() will always report a wrong disk size when
COW files are used.
Reading from /dev/ubd* would crash the kernel.
We have to read the correct disk size from the COW file's backing
file.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6b45214277 upstream.
It seems that the Conexant CX20549 chip handles only a single input-amp even
though the audio-input widget has multiple sources. This has never been
clear, and I implemented it in the current way based on the debug information
I got at the early time -- the device reacts to individual input-amp values
for different sources. This is true for another Conexant codec, but it
doesn't actually apply to CX20549.
This patch changes the auto-parser code to handle a single input-amp
per audio-in widget for CX20549. After applying this, you'll see only a
single "Capture" volume control instead of separate "Mic" or "Line"
captures when the device is set up to use a single ADC.
We haven't tested the 20551 and 20561 codecs yet. If these show similar
behavior to the 20549, they need to set spec->single_adc_amp=1, too.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f5252e009d upstream.
/proc/vmallocinfo shows information about vmalloc allocations in
vmlist, which is a linked list of vm_struct. It may, however, access the
pages field of a vm_struct for which no page was allocated. This results in
a null pointer access and leads to a kernel panic.
Why this happens: in __vmalloc_node_range(), called from vmalloc(), the newly
allocated vm_struct is added to vmlist at __get_vm_area_node() and then
some fields of vm_struct, such as nr_pages and pages, are set at
__vmalloc_area_node(). In other words, it is added to vmlist before it is
fully initialized. At the same time, when /proc/vmallocinfo is read,
it accesses the pages field of vm_struct according to the nr_pages field
at show_numa_info(). Thus, a null pointer access happens.
The patch adds the newly allocated vm_struct to the vmlist *after* it is
fully initialized. So, it can avoid accessing the pages field with
unallocated page when show_numa_info() is called.
Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1bf6d2c1bb upstream.
Apparently U8500 U-Boot versions may leave the l2x0 locked down
before executing the kernel. Make sure we unlock it before we
initialize the l2x0. This fixes a performance problem reported
by Jan Rinze.
The l2x0 core has been modified to unlock the l2x0 by default,
but it will not touch the locking registers if the l2x0 was
already enabled, as on the ux500, so we need this quirk to
make sure it is properly turned off.
Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Cc: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Adrian Bunk <adrian.bunk@movial.com>
Reported-by: Jan Rinze <janrinze@gmail.com>
Tested-by: Robert Marklund <robert.marklund@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6571534b60 upstream.
To configure pads during the initialisation a set of special constants
is used, e.g.
#define MX25_PAD_FEC_MDIO__FEC_MDIO IOMUX_PAD(0x3c4, 0x1cc, 0x10, 0, 0, PAD_CTL_HYS | PAD_CTL_PUS_22K_UP)
The problem is that no pull-up/down gets activated unless both
PAD_CTL_PUE (pull-up enable) and PAD_CTL_PKE (pull/keeper module
enable) are set. This is clearly stated in the i.MX25 datasheet and is
confirmed by measurements on hardware. This leads to some rather
hard-to-understand bugs such as misdetecting an absent ethernet PHY (a
real bug I had), unstable data transfers, etc. This might affect mx25,
mx35, mx50, mx51 and mx53 SoCs.
It's reasonable to expect that if the pullup value is specified, the
intention was to have it actually active, so we implicitly add the
needed bits.
Signed-off-by: Paul Fertser <fercerpav@gmail.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9bed77ee2f upstream.
This device is not using the proper demod IF. Instead of using the
IF macro, it is specifying an IF frequency. This doesn't work, as the xc3028
needs to load a specific SCODE for the tuner. In this case, there's
no IF table for 5 MHz.
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bff469f416 upstream.
This patch protects the common buffer access inside the dib0700 in order
to manage concurrent access. This protection is done using a mutex.
Cc: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Florian Mickler <florian@mickler.org>
Signed-off-by: Javier Marcet <javier@marcet.info>
Signed-off-by: Olivier Grenie <olivier.grenie@dibcom.fr>
Signed-off-by: Patrick Boettcher <patrick.boettcher@dibcom.fr>
[mchehab@redhat.com: dprint requires 3 arguments. Replaced by dib_info]
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 79fcce3230 upstream.
This patch protects the I2C buffer access in order to manage concurrent
access. This protection is done using a mutex.
Furthermore, for the dib9000, if a pid filtering command is
received during the tuning, this pid filtering command is delayed to
avoid any concurrent access issue.
Cc: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Florian Mickler <florian@mickler.org>
Signed-off-by: Olivier Grenie <olivier.grenie@dibcom.fr>
Signed-off-by: Patrick Boettcher <Patrick.Boettcher@dibcom.fr>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d59a7b1dbc upstream.
If the bus has been reset on resume, set the alternate setting to 0.
This should be the default value, but some devices crash or otherwise
misbehave if they don't receive a SET_INTERFACE request before any other
video control request.
Microdia's 0c45:6437 camera has been found to require this change or it
will stop sending video data after resume.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 936a3f770b upstream.
This patch adds checks for minimum and maximum pitch size to prevent
invalid settings which could otherwise crash the machine. Also the
alignment is done in a slightly more readable way.
Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d933990c57 upstream.
As Laurent pointed out we must not use any information in the passed
var besides xoffset, yoffset and vmode as otherwise applications
might abuse it. Also use the aligned fix.line_length and not the
(possible) unaligned xres_virtual.
Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
Reported-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4a47a0e09c upstream.
Following on Herton's patch "fb: avoid possible deadlock caused by
fb_set_suspend", which moves lock_fb_info() out of fb_set_suspend()
to its callers, correct sh-mobile's locking around the call to
fb_set_suspend() and the same sort of deadlocks with console_lock()
due to the order of taking the locks.
console_lock() must be taken while fb_info is already locked, and fb_info
must be locked while calling fb_set_suspend().
Signed-off-by: Bruno Prémont <bonbons@linux-vserver.org>
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9e769ff3f5 upstream.
A lock ordering issue can cause deadlocks: in framebuffer/console code,
all needed struct fb_info locks are taken before acquire_console_sem(),
in places which need to take console semaphore.
But fb_set_suspend is always called with console semaphore held, and
inside it we call lock_fb_info which gets the fb_info lock, inverse
locking order of what the rest of the code does. This causes a real
deadlock issue when we write to the fb 'state' sysfs attribute (which calls
fb_set_suspend) while a framebuffer is being unregistered by
remove_conflicting_framebuffers, as shown by the following blocked-state
traces from a test program which loads i915 and runs forked processes
writing to the state attribute:
Test process with semaphore held and trying to get fb_info lock:
..
fb-test2 D 0000000000000000 0 237 228 0x00000000
ffff8800774f3d68 0000000000000082 00000000000135c0 00000000000135c0
ffff880000000000 ffff8800774f3fd8 ffff8800774f3fd8 ffff880076ee4530
00000000000135c0 ffff8800774f3fd8 ffff8800774f2000 00000000000135c0
Call Trace:
[<ffffffff8141287a>] __mutex_lock_slowpath+0x11a/0x1e0
[<ffffffff814142f2>] ? _raw_spin_lock_irq+0x22/0x40
[<ffffffff814123d3>] mutex_lock+0x23/0x50
[<ffffffff8125dfc5>] lock_fb_info+0x25/0x60
[<ffffffff8125e3f0>] fb_set_suspend+0x20/0x80
[<ffffffff81263e2f>] store_fbstate+0x4f/0x70
[<ffffffff812e7f70>] dev_attr_store+0x20/0x30
[<ffffffff811c46b4>] sysfs_write_file+0xd4/0x160
[<ffffffff81155a26>] vfs_write+0xc6/0x190
[<ffffffff81155d51>] sys_write+0x51/0x90
[<ffffffff8100c012>] system_call_fastpath+0x16/0x1b
..
modprobe process stalled because it has the fb_info lock (taken inside
unregister_framebuffer) but is waiting for the semaphore held by the
test process, which is waiting to get the fb_info lock:
..
modprobe D 0000000000000000 0 230 218 0x00000000
ffff880077a4d618 0000000000000082 0000000000000001 0000000000000001
ffff880000000000 ffff880077a4dfd8 ffff880077a4dfd8 ffff8800775a2e20
00000000000135c0 ffff880077a4dfd8 ffff880077a4c000 00000000000135c0
Call Trace:
[<ffffffff81411fe5>] schedule_timeout+0x215/0x310
[<ffffffff81058051>] ? get_parent_ip+0x11/0x50
[<ffffffff814130dd>] __down+0x6d/0xb0
[<ffffffff81089f71>] down+0x41/0x50
[<ffffffff810629ac>] acquire_console_sem+0x2c/0x50
[<ffffffff812ca53d>] unbind_con_driver+0xad/0x2d0
[<ffffffff8126f5f7>] fbcon_event_notify+0x457/0x890
[<ffffffff814144ff>] ? _raw_spin_unlock_irqrestore+0x1f/0x50
[<ffffffff81058051>] ? get_parent_ip+0x11/0x50
[<ffffffff8141836d>] notifier_call_chain+0x4d/0x70
[<ffffffff8108a3b8>] __blocking_notifier_call_chain+0x58/0x80
[<ffffffff8108a3f6>] blocking_notifier_call_chain+0x16/0x20
[<ffffffff8125dabb>] fb_notifier_call_chain+0x1b/0x20
[<ffffffff8125e6ac>] unregister_framebuffer+0x7c/0x130
[<ffffffff8125e8b3>] remove_conflicting_framebuffers+0x153/0x180
[<ffffffff8125eef3>] register_framebuffer+0x93/0x2c0
[<ffffffffa0331112>] drm_fb_helper_single_fb_probe+0x252/0x2f0 [drm_kms_helper]
[<ffffffffa03314a3>] drm_fb_helper_initial_config+0x2f3/0x6d0 [drm_kms_helper]
[<ffffffffa03318dd>] ? drm_fb_helper_single_add_all_connectors+0x5d/0x1c0 [drm_kms_helper]
[<ffffffffa037b588>] intel_fbdev_init+0xa8/0x160 [i915]
[<ffffffffa0343d74>] i915_driver_load+0x854/0x12b0 [i915]
[<ffffffffa02f0e7e>] drm_get_pci_dev+0x19e/0x360 [drm]
[<ffffffff8141821d>] ? sub_preempt_count+0x9d/0xd0
[<ffffffffa0386f91>] i915_pci_probe+0x15/0x17 [i915]
[<ffffffff8124481f>] local_pci_probe+0x5f/0xd0
[<ffffffff81244f89>] pci_device_probe+0x119/0x120
[<ffffffff812eccaa>] ? driver_sysfs_add+0x7a/0xb0
[<ffffffff812ed003>] driver_probe_device+0xa3/0x290
[<ffffffff812ed1f0>] ? __driver_attach+0x0/0xb0
[<ffffffff812ed29b>] __driver_attach+0xab/0xb0
[<ffffffff812ed1f0>] ? __driver_attach+0x0/0xb0
[<ffffffff812ebd3e>] bus_for_each_dev+0x5e/0x90
[<ffffffff812ecc2e>] driver_attach+0x1e/0x20
[<ffffffff812ec6f2>] bus_add_driver+0xe2/0x320
[<ffffffffa03aa000>] ? i915_init+0x0/0x96 [i915]
[<ffffffff812ed536>] driver_register+0x76/0x140
[<ffffffffa03aa000>] ? i915_init+0x0/0x96 [i915]
[<ffffffff81245216>] __pci_register_driver+0x56/0xd0
[<ffffffffa02f1264>] drm_pci_init+0xe4/0xf0 [drm]
[<ffffffffa03aa000>] ? i915_init+0x0/0x96 [i915]
[<ffffffffa02e84a8>] drm_init+0x58/0x70 [drm]
[<ffffffffa03aa094>] i915_init+0x94/0x96 [i915]
[<ffffffff81002194>] do_one_initcall+0x44/0x190
[<ffffffff810a066b>] sys_init_module+0xcb/0x210
[<ffffffff8100c012>] system_call_fastpath+0x16/0x1b
..
fb-test2, which reproduces the above, is available on kernel.org bug #26232.
To solve this issue, avoid calling lock_fb_info inside fb_set_suspend
and move it out to where needed (callers of fb_set_suspend must call
lock_fb_info beforehand if needed). So far, the only place which needs to
call lock_fb_info is store_fbstate; all other places which call
fb_set_suspend are suspend/resume hooks that should not need the lock, as
they should only be run when processes are already frozen in
suspend/resume.
References: https://bugzilla.kernel.org/show_bug.cgi?id=26232
Signed-off-by: Herton Ronaldo Krzesinski <herton@mandriva.com.br>
Signed-off-by: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fcd0861db1 upstream.
The shift direction was wrong because the function takes a
page number and i is the address in the loop.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dbdf1afcaa upstream.
Put sysfs attributes of ccwgroup devices in an attribute group to
ensure that these attributes are actually present when userspace
is notified via uevents.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e73b7fffe4 upstream.
The rcu page table free code uses a couple of bits in the page table
pointer passed to tlb_remove_table to discern the different page table
types. __tlb_remove_table extracts the type with an incorrect mask which
leads to memory leaks. The correct mask is ((FRAG_MASK << 4) | FRAG_MASK).
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a45aff5285 upstream.
git commit 5e9a2692 "[S390] ptrace cleanup" introduced a regression
for the case when both a user PER set (e.g. a storage alteration trace) and
PTRACE_SINGLESTEP are active. The new code will overrule the user PER set
with a instruction-fetch PER set over the whole address space for ptrace
single stepping. The inferior process will be stopped after each instruction
with an instruction fetch event. Any other events that may have occurred
concurrently are not reported (e.g. storage alteration event) because the
control bits for them are not set. The solution is to merge the PER control
bits of the user PER set with the PER_EVENT_IFETCH control bit for
PTRACE_SINGLESTEP.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4d47555a80 upstream.
We use the cpu id provided by userspace as array index here. Thus we
clearly need to check it first. Ooops.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3a340104fa upstream.
According to the datasheet:
Format Control (05h)
BITS[3:2]
FMT[1:0] Audio data format selection
00 = right justified mode
01 = left justified mode
10 = I2S mode
11 = DSP mode
BIT[4] LRP Polarity select for LRCLK/DSP mode select
0 = normal LRCLK polarity/DSP mode A
1 = inverted LRCLK polarity/DSP mode B
For SND_SOC_DAIFMT_DSP_A, we should set 0x000C instead of 0x0003.
For SND_SOC_DAIFMT_DSP_B, we should set 0x001C instead of 0x0013.
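A minimal sketch of the register values implied by the datasheet excerpt above (the iface variable name is assumed):
switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
case SND_SOC_DAIFMT_DSP_A:
	iface |= 0x000C;	/* FMT = 11, LRP = 0 */
	break;
case SND_SOC_DAIFMT_DSP_B:
	iface |= 0x001C;	/* FMT = 11, LRP = 1 */
	break;
}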
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 24dd85ff72 upstream.
For the !HAVE_ATOMIC_IOMAP case the stub functions did not call
pagefault_disable/_enable. The i915 driver relies on the map
actually being atomic, otherwise it can deadlock with its own
pagefault handler in the gtt pwrite fastpath.
This is exercised by gem_mmap_gtt from the intel-gpu-tools gem
testsuite.
v2: Chris Wilson noted the lack of an include.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38115
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a877ee03ac upstream.
nfsiostat was failing to find mounted filesystems on kernels after
2.6.38 because of changes to show_vfsstat() by commit
c7f404b40a. This patch adds back the
"device" tag before the nfs server entry so scripts can parse the
mountstats file correctly.
Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c30e92df30 upstream.
We don't use WANT bits yet--and sending them can probably trigger a
BUG() further down.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3d02fa29de upstream.
Yet another open-management regression:
- nfs4_file_downgrade() doesn't remove the BOTH access bit on
downgrade, so the server's idea of the stateid's access gets
out of sync with the client's. If we want to keep an O_RDWR
open in this case, we should do that in the file_put_access
logic rather than here.
- We forgot to convert v4 access to an open mode here.
This logic has proven too hard to get right. In the future we may
consider:
- reexamining the lock/openowner relationship (locks probably
don't really need to take their own references here).
- adding open upgrade/downgrade support to the vfs.
- removing the atomic operations. They're redundant as long as
this is all under some other lock.
Also, maybe some kind of additional static checking would help catch
O_/NFS4_SHARE_ACCESS confusion.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a043226bc1 upstream.
A client that wants to execute a file must be able to read it. Read
opens over nfs are therefore implicitly allowed for executable files
even when those files are not readable.
NFSv2/v3 get this right by using a passed-in NFSD_MAY_OWNER_OVERRIDE on
read requests, but NFSv4 has gotten this wrong ever since
dc730e1737 "nfsd4: fix owner-override on
open", when we realized that the file owner shouldn't override
permissions on non-reclaim NFSv4 opens.
So we can't use NFSD_MAY_OWNER_OVERRIDE to tell nfsd_permission to allow
reads of executable files.
So, do the same thing we do whenever we encounter another weird NFS
permission nit: define yet another NFSD_MAY_* flag.
The industry's future standardization on 128-bit processors will be
motivated primarily by the need for integers with enough bits for all
the NFSD_MAY_* flags.
Reported-by: Leonardo Borda <leonardoborda@gmail.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 576163005d upstream.
The set of errors here does *not* agree with the set of errors specified
in the rfc!
While we're there, turn this macro into a function, for the usual
reasons, and move it to the one place where it's actually used.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3e77246393 upstream.
The server is returning nfserr_resource for both permanent errors and
for errors (like allocation failures) that might be resolved by retrying
later. Save nfserr_resource for the former and use delay/jukebox for
the latter.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 832023bffb upstream.
Fan Yong <yong.fan@whamcloud.com> noticed setting
FMODE_32bithash wouldn't work with nfsd v4, as
nfsd4_readdir() checks for 32 bit cookies. However, according to RFC 3530
cookies have a 64 bit type and cookies are also defined as u64 in
'struct nfsd4_readdir'. So remove the test for >32-bit values.
Signed-off-by: Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2da9565235 upstream.
nfs_find_and_lock_request will take a reference to the nfs_page and
will then put it if the req is already locked. It's possible though
that the reference will be the last one. That put then can kick off
a whole series of reference puts:
nfs_page
nfs_open_context
dentry
inode
If the inode ends up being deleted, then the VFS will call
truncate_inode_pages. That function will try to take the page lock, but
it was already locked when migrate_page was called. The code
deadlocks.
Fix this by simply refusing the migration request if PagePrivate is
already set, indicating that the page is already associated with an
active read or write request.
We've had a customer test a backported version of this patch and
the preliminary results seem good.
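A minimal sketch of the refusal described above (not the verbatim upstream diff):
/* in the NFS migrate_page handler: a page with PagePrivate set is tied
 * to an active nfs_page (read or write request), so refuse to migrate */
if (PagePrivate(page))
	return -EBUSY;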
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Harshula Jayasuriya <harshula@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 436fc28026 upstream.
The trace_pipe_raw handler holds a cached page from the time the file
is opened to the time it is closed. The cached page is used to handle
the case of the user space buffer being smaller than what was read from
the ring buffer. The left over buffer is held in the cache so that the
next read will continue where the data left off.
After EOF is returned (no more data in the buffer), the index of
the cached page is set to zero. If a user app reads the page again
after EOF, the check in the buffer will see that the cached page
is less than page size and will return the cached page again. This
will cause reading the trace_pipe_raw again after EOF to return
duplicate data, making the output look like time went backwards,
when in fact the data is just repeated.
The fix is to not reset the index right after all data is read
from the cache, but to reset it after all data is read and more
data exists in the ring buffer.
Reported-by: Jeremy Eder <jeder@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 355840e7a7 upstream.
This bug was introduced in 415e72d034
which was in 2.6.36.
There is a small window of time between when a device fails and when
it is removed from the array. During this time we might still read
from it, but we won't write to it - so it is possible that we could
read stale data.
We didn't need the test of 'Faulty' before because the test on
In_sync is sufficient. Since we started allowing reads from the early
part of non-In_sync devices we need a test on Faulty too.
This is suitable for any kernel from 2.6.36 onwards, though the patch
might need a bit of tweaking in 3.0 and earlier.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 838312be46 upstream.
These warnings (generally one per CPU) are a result of
initializing x86_cpu_to_logical_apicid while apic_default is
still in use, but the check in setup_local_APIC() being done
when apic_bigsmp was already used as an override in
default_setup_apic_routing():
Overriding APIC driver with bigsmp
Enabling APIC mode: Physflat. Using 5 I/O APICs
------------[ cut here ]------------
WARNING: at .../arch/x86/kernel/apic/apic.c:1239
...
CPU 1 irqstacks, hard=f1c9a000 soft=f1c9c000
Booting Node 0, Processors #1
smpboot cpu 1: start_ip = 9e000
Initializing CPU#1
------------[ cut here ]------------
WARNING: at .../arch/x86/kernel/apic/apic.c:1239
setup_local_APIC+0x137/0x46b() Hardware name: ...
CPU1 logical APIC ID: 2 != 8
...
Fix this (for the time being, i.e. until
x86_32_early_logical_apicid() will get removed again, as Tejun
says ought to be possible) by overriding the previously stored
values at the point where the APIC driver gets overridden.
v2: Move this and the pre-existing override logic into
arch/x86/kernel/apic/bigsmp_32.c.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: http://lkml.kernel.org/r/4E835D16020000780005844C@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cbbc719fcc upstream.
The parameter's original type is long. On an i386 architecture, it can
easily be larger than 0x80000000, causing this function to convert it
to a sign-extended u64 type.
Change the type to unsigned long so we get the correct result.
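A minimal illustration of the sign-extension problem on a 32-bit machine, where long is 32 bits wide:
long x = 0x80000001;			/* negative as a signed long */
u64 wrong = (u64)x;			/* 0xffffffff80000001: sign-extended */
u64 right = (u64)(unsigned long)x;	/* 0x0000000080000001: as intended */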
Signed-off-by: hank <pyu@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
[ build fix ]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6cd9d21a0c upstream.
We were using incorrect max and min dwell times during forced passive
scans because we were still using the active scan states to scan
(passively) the channels that were not marked as passive.
Instead of doing passive scans in active states, we now skip active
states and scan for all channels in passive states.
Signed-off-by: Luciano Coelho <coelho@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit da92b194cc upstream.
The pair of functions,
* skb_clone_tx_timestamp()
* skb_complete_tx_timestamp()
were designed to allow timestamping in PHY devices. The first
function, called during the MAC driver's hard_xmit method, identifies
PTP protocol packets, clones them, and gives them to the PHY device
driver. The PHY driver may hold onto the packet and deliver it at a
later time using the second function, which adds the packet to the
socket's error queue.
As pointed out by Johannes, nothing prevents the socket from
disappearing while the cloned packet is sitting in the PHY driver
awaiting a timestamp. This patch fixes the issue by taking a reference
on the socket for each such packet. In addition, the comments
regarding the usage of these function are expanded to highlight the
rule that PHY drivers must use skb_complete_tx_timestamp() to release
the packet, in order to release the socket reference, too.
These functions first appeared in v2.6.36.
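A minimal sketch of the reference handling (simplified, not the verbatim upstream code):
/* skb_clone_tx_timestamp(), roughly: pin the socket for the clone
 * handed to the PHY driver */
sock_hold(sk);
clone->sk = sk;
/* skb_complete_tx_timestamp(), roughly: deliver to the socket's error
 * queue and drop the reference again */
err = sock_queue_err_skb(sk, skb);
sock_put(sk);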
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 28a1bcdb57 upstream.
When I introduced in-kernel off-channel TX I
introduced a bug -- the work can't be canceled
again because the code clears the skb pointer.
Fix this by keeping track separately of whether
TX status has already been reported.
Reported-by: Jouni Malinen <j@w1.fi>
Tested-by: Jouni Malinen <j@w1.fi>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8b3408f8ee upstream.
If the PHY should disappear (for example, on a USB Ethernet MAC), then
the driver would leak any undelivered time stamp packets. This commit
fixes the issue by calling the appropriate functions to free any packets
left in the transmit and receive queues.
The driver first appeared in v3.0.
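A minimal sketch of the cleanup (struct and queue names assumed):
/* in the PHY driver's remove path: free packets still waiting for a
 * tx or rx timestamp */
skb_queue_purge(&priv->tx_queue);
skb_queue_purge(&priv->rx_queue);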
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d2237d3574 upstream.
Renato Westphal noticed that since commit a2835763e1
"rtnetlink: handle rtnl_link netlink notifications manually" was merged
we no longer send a netlink message when a networking device is moved
from one network namespace to another.
Fix this by adding the missing manual notification in dev_change_net_namespace.
Since all network devices that are processed by dev_change_net_namespace are
in the initialized state, the complicated tests that guard the manual
rtmsg_ifinfo calls in rollback_registered and register_netdevice are
unnecessary and we can just perform a plain notification.
Tested-by: Renato Westphal <renatowestphal@gmail.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3236c3e1ad upstream.
commit 420e3646 allowed the kernel to reduce the number of unnecessary
commit calls by skipping the commit when there are a large number of
outstanding pages.
However, the current test in nfs_commit_unstable_pages does not handle
the edge condition properly. When ncommit == 0, the kernel doesn't need to
do anything more for the inode. The current test in the WB_SYNC_NONE case,
though, will return true, and the inode will end
up being marked dirty. Once that happens the inode will never be clean
until there's a WB_SYNC_ALL flush.
Fix this by immediately returning from nfs_commit_unstable_pages when
ncommit == 0.
Mike noticed this problem initially in RHEL5 (2.6.18-based kernel) which
has a backported version of 420e3646. The inode cache there was growing
very large. The inode cache was unable to be shrunk since the inodes
were all marked dirty. Calling sync() would essentially "fix" the
problem -- the WB_SYNC_ALL flush would result in the inodes all being
marked clean.
What I'm not clear on is how big a problem this is in mainline kernels
as the writeback code there is very different. Either way, it seems
incorrect to re-mark the inode dirty in this case.
Reported-by: Mike McLean <mikem@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 59b7c05fff upstream.
This reverts commit b80c3cb628.
The reverted commit was rendered obsolete by a VFS fix: commit
5547e8aac6 (writeback: Update dirty flags in
two steps). We now no longer need to worry about writeback_single_inode()
missing our marking of the inode for COMMIT in the 'do_writepages()' call.
Reverting this patch fixes a performance regression in which the inode
would continuously get queued to the dirty list, causing the writeback
code to unnecessarily try to send a COMMIT.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Tested-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 37252db6aa upstream.
Due to the post-increment in the condition on kmod_loop_msg in
__request_module(), the system log can be spammed by many more than 5
instances of the 'runaway loop' message if the number of events triggering
it makes kmod_loop_msg overflow.
Fix that by making sure we never increment it past the threshold.
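A minimal sketch of the guard (close to the upstream change, shown here only as an illustration):
/* only bump the counter when the message is actually printed, so it can
 * never run past the threshold and wrap around */
if (kmod_loop_msg < 5) {
	printk(KERN_ERR "request_module: runaway loop modprobe %s\n",
	       module_name);
	kmod_loop_msg++;
}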
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a7ea19926f upstream.
samsung_init() should not return success if not all devices are initialized.
Otherwise, samsung_exit() will dereference NULL pointers such as sdev.
Signed-off-by: David Herrmann <dh.herrmann@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
commit bee460be8c upstream.
The min_brightness value of the sabi_config is incorrectly used in brightness
calculations. For the config where min_brightness = 1 and max_brightness = 8,
the user visible range should be 0 to 7 with hardware being set in the range
of 1 to 8. What is actually happening is that the user visible range is 0 to
8 with hardware being set in the range of -1 to 7.
This patch fixes the above issue as well as a miscalculation that would occur
in the case of min_brightness > 1.
Signed-off-by: Jason Stubbs <jasonbstubbs@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
commit 093ed56164 upstream.
Patch works for me, but I need to add "acpi_backlight=vendor" to the kernel
params.
Signed-off-by: Smelov Andrey <xor29a@bk.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
commit 7500eeb08a upstream.
My Samsung laptop would be very happy if you add
these lines to the file drivers/platform/x86/samsung-laptop.c.
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
commit f87d02996f upstream.
My DMI model is this:
>dmesg |grep DMI
[ 0.000000] DMI present.
[ 0.000000] DMI: SAMSUNG ELECTRONICS CO., LTD. SR700/SR700, BIOS
04SR 02/20/2008
adding dmi information of Samsung R700 laptops
This adds the DMI information of Samsung's R700 laptops.
Signed-off-by: Stefan Beller <stefanbeller@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
commit 08613e4626 upstream.
The caif code will register its own pernet_operations, and then register
a netdevice_notifier. Each time the netdevice_notifier is triggered,
it'll do some stuff... including a lookup of its own pernet stuff with
net_generic().
If the net_generic() call ever returns NULL, the caif code will BUG().
That doesn't seem *so* unreasonable, I suppose — it does seem like it
should never happen.
However, it *does* happen. When we clone a network namespace,
setup_net() runs through all the pernet_operations one at a time. It
gets to loopback before it gets to caif. And loopback_net_init()
registers a netdevice... while caif hasn't been initialised. So the caif
netdevice notifier triggers, and immediately goes BUG().
We could imagine a complex and overengineered solution to this generic
class of problems, but this patch takes the simple approach. It just
makes caif_device_notify() *not* go looking for its pernet data
structures if the device it's being notified about isn't a caif device
in the first place.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ab2a47bd24 upstream.
Propagate the baremetal git commit "swiotlb: fix wrong panic"
(fba99fa38b) into the Xen-SWIOTLB version, wherein swiotlb's map_page
wrongly calls panic() when it can't find a buffer fit for the device's
dma mask. It should return an error instead.
Devices with an odd dma mask (i.e. under 4G), like the b44 network card, hit
this bug (the system crashes):
http://marc.info/?l=linux-kernel&m=129648943830106&w=2
If xen-swiotlb returns an error, the b44 driver can use its own bouncing
mechanism.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 917e3e65c3 upstream.
With Xen changeset 23428 "libxl: Add 'e820_host' option to config file"
the E820 as seen from the host can now be passed into the guest.
This means that a PV guest can now:
- Use the correct PCI I/O gap. Before these patches, a Linux guest would
boot up and report:
[ 0.000000] Allocating PCI resources starting at 40000000 (gap: 40000000:c0000000)
while in actuality the PCI I/O gap should have been:
[ 0.000000] Allocating PCI resources starting at b0000000 (gap: b0000000:4c000000)
- The PV domain with PCI devices was limited to 3GB. It can now be booted
with 4GB, 8GB, or whatever amount you want. The PCI devices will now _not_ conflict
with System RAM, meaning the drivers can load.
CC: Jesse Barnes <jbarnes@virtuousgeek.org>
CC: linux-pci@vger.kernel.org
[v2: Made the string less broken up. Suggested by Joe Perches]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 273d23574f upstream.
For a USB CONTROL transaction, when the data length is zero,
an IN packet is needed to finish the transaction in the status stage.
Signed-off-by: Jerry Huang <r66093@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 364b936fc3 upstream.
The config option needs to be a 'bool' and not a tristate, otherwise
force feedback support never makes it into the module.
Signed-off-by: Sergei Kolzun <x0r@dv-life.ru>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ac06697c79 upstream.
PHY errors relevant for ANI are always tracked by hardware counters; the
bits that allow them to pass through the rx filter are independent of that.
Enabling PHY errors in the rx filter often creates lots of useless DMA traffic
and might be responsible for some of the rx dma stop failure warnings.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6321eb0977 upstream.
This patch fixes the assumption about the maximum number of GPIO pins
present in AR9287/AR9300. The fix is essential, as we might otherwise
encounter functionality issues when accessing the status of GPIO pins that
are incorrectly assumed to be outside the max_num_gpio range
of the AR9300/AR9287 chipsets.
Signed-off-by: Mohammed Shafi Shajakhan <mohammed@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e9c10469cf upstream.
Do the magnitude/phase coefficient correction only if an outlier
is detected. Updating a wrong magnitude/phase coefficient
not only impacts the tx gain setting but also leads to poor
performance in congested networks. In a clear environment
the impact is very minimal because an outlier happens
very rarely according to past experiments; it occurred
less than once every 1000 calibrations.
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c58a76cdd7 upstream.
IDs found in the Windows driver's ZTEusbnet.inf file from the
ZTE MF100 drivers (O2 UK). Also fixes the ZTE MF626 device
since it really is distinct from the 4G Systems stick and
apparently needs the net interface blacklisted too, while
there's no indication (yet) that the 4G Systems stick does.
Signed-off-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0d905fd5ec upstream.
That's what the blacklist is for...
Signed-off-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b4626c1092 upstream.
It's cleaner than the array stuff, and we're about to add a bunch
more blacklist entries. Second, there are devices that need both
the sendsetup and the reserved interface blacklists, which the
current code can't accommodate.
Signed-off-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3687f64130 upstream.
Some Stellaris evaluation kits have the JTAG/SWD FTDI chip onboard,
and some, like EK-LM3S9B90, come with a separate In-Circuit Debugger
Interface Board. The ICDI board can also be used stand-alone, for
other boards and chips than the kit it came with. The ICDI has both
old style 20-pin JTAG connector and new style JTAG/SWD 10-pin 1.27mm
pitch connector.
Tested with EK-LM3S9B90, where the BD-ICDI board is included.
Signed-off-by: Peter Stuge <peter@stuge.se>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 598f0b7035 upstream.
Add vendor and product ID for the SMART USB to serial adapter. These
were meant to be used with their SMART Board whiteboards, but can be
re-purposed for other tasks. Tested and working (at at least 9600 bps).
Signed-off-by: Eric Benoit <eric@ecks.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b253d88cc upstream.
My webcam is a Logitech C300 and I get "chipmunk"ed squeaky sound.
The following trivial patch fixes it.
Signed-off-by: Jon Levell <linuxusb@coralbark.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2394d67e44 upstream.
The new runtime PM code has shown that many webcams suffer
from a race condition that may crash them upon resume.
Runtime PM is especially prone to show the problem because
it retains power to the cameras at all times. However,
system suspend may also crash the devices, as power to the
devices may be retained.
The only way to solve this problem without races is in
usbcore with the RESET_RESUME quirk.
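For reference, such a camera ends up with a quirks-table entry of the following form (the VID:PID here is only a placeholder, not taken from this patch):
{ USB_DEVICE(0x1234, 0x5678), .driver_info = USB_QUIRK_RESET_RESUME },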
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aec01c5895 upstream.
Alan Stern points out that after spin_unlock(&ps->lock) there is no
guarantee that ps->pid won't be freed. Since kill_pid_info_as_uid() is
called after the spin_unlock(), the pid passed to it must be pinned.
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Serge Hallyn <serge.hallyn@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 393cbb5151 upstream.
In the usb printer class specific request get_device_id the value of
wIndex is (interface << 8 | altsetting) instead of just interface.
This enables the detection of some printers with libusb.
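For illustration, a minimal sketch of the class-specific GET_DEVICE_ID
request with the corrected wIndex encoding (request constants per the USB
printer class spec; buffer handling omitted, so treat the details as an
assumption rather than the usblp code itself):
  err = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
                        0 /* GET_DEVICE_ID */,
                        USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
                        0,                             /* wValue: config index */
                        (interface << 8) | altsetting, /* wIndex, was just interface */
                        buf, sizeof(buf), 5000);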
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Matthias Dellweg <2500@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8582d86143 upstream.
The allocated character device region range is only 1 device but on
unregister it currently tries to deregister 2.
Found this while doing an insmod/rmmod/insmod/rm... cycle of the module,
which seemed to eat major numbers.
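A minimal sketch of the pairing that has to match (module and region names
are made up for the example):
  #include <linux/fs.h>
  #include <linux/module.h>

  static dev_t devno;

  static int __init demo_init(void)
  {
          /* allocate a region of exactly one minor */
          return alloc_chrdev_region(&devno, 0, 1, "demo");
  }

  static void __exit demo_exit(void)
  {
          /* must release the same count; passing 2 here eats major numbers */
          unregister_chrdev_region(devno, 1);
  }

  module_init(demo_init);
  module_exit(demo_exit);
  MODULE_LICENSE("GPL");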
Signed-off-by: Fabian Godehardt <fg@emlix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a8b43c00ef upstream.
At least some OHCI hardware (such as the MCP89) fails to flag any change
in the host status register or the port status registers when receiving
a remote wakeup while in D3 state. This results in the controller being
resumed but no device state change being noticed, at which point the
controller is put back to sleep again. Since there doesn't seem to be any
reliable way to identify the state change, just unconditionally resume the
hub. It'll be put back to sleep in the near future anyway if there are no
active devices attached to it.
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e16da02fcd upstream.
This patch solves two things:
1) Enables autosense emulation code to correctly
interpret descriptor format sense data, and
2) Fixes a bug whereby the autosense emulation
code would overwrite descriptor format sense data
with SENSE KEY HARDWARE ERROR in fixed format, to
incorrectly look like this:
Oct 21 14:11:07 localhost kernel: sd 7:0:0:0: [sdc] Sense Key : Recovered Error [current] [descriptor]
Oct 21 14:11:07 localhost kernel: Descriptor sense data with sense descriptors (in hex):
Oct 21 14:11:07 localhost kernel: 72 01 04 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
Oct 21 14:11:07 localhost kernel: 00 4f 00 c2 00 50
Oct 21 14:11:07 localhost kernel: sd 7:0:0:0: [sdc] ASC=0x4 ASCQ=0x1d
Signed-off-by: Luben Tuikov <ltuikov@yahoo.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Matthew Dharm <mdharm-usb@one-eyed-alien.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 236c448cb6 upstream.
Report the number of dropped packets instead of zero
when using the binary usbmon interface with tcpdump.
# tcpdump -i usbmon1 -w dump
tcpdump: listening on usbmon1, link-type USB_LINUX_MMAPPED (USB with padded Linux header), capture size 65535 bytes
^C2155 packets captured
2155 packets received by filter
1019 packets dropped by kernel
Signed-off-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 488bc35bf4 upstream.
Depending on the implementation of the hardware blinking function in
blink_set(), the led can support hardware blinking for some values of
delay_on and delay_off and fall-back to software blinking for some other
values.
Turning off the blink_timer unconditionally before starting to blink
makes sure that a sequence like:
OFF
hardware blinking
software blinking
hardware blinking
does not leave the software blinking timer active.
Signed-off-by: Antonio Ospite <ospite@studenti.unina.it>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Cc: Richard Purdie <rpurdie@rpsys.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d8805e633e upstream.
epoll can recursively acquire ep->mtx on multiple "struct
eventpoll"s at once in the case where one epoll fd is monitoring another
epoll fd. This is perfectly OK, since we're careful about the lock
ordering, but it causes spurious lockdep warnings. Annotate the recursion
using mutex_lock_nested, and add a comment explaining the nesting rules
for good measure.
Recent versions of systemd are triggering this, and it can also be
demonstrated with the following trivial test program:
--------------------8<--------------------
int main(void) {
int e1, e2;
struct epoll_event evt = {
.events = EPOLLIN
};
e1 = epoll_create1(0);
e2 = epoll_create1(0);
epoll_ctl(e1, EPOLL_CTL_ADD, e2, &evt);
return 0;
}
--------------------8<--------------------
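The annotation itself boils down to telling lockdep that the inner ep->mtx
acquisition uses a different subclass; roughly (a sketch of the idea, not
the exact fs/eventpoll.c hunk):
  /* outer epoll instance */
  mutex_lock(&ep_outer->mtx);
  /* inner instance scanned on behalf of the outer one; the non-zero
   * subclass tells lockdep this nesting is intentional */
  mutex_lock_nested(&ep_inner->mtx, SINGLE_DEPTH_NESTING);
  /* ... scan the inner ready list ... */
  mutex_unlock(&ep_inner->mtx);
  mutex_unlock(&ep_outer->mtx);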
Reported-by: Paul Bolle <pebolle@tiscali.nl>
Tested-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Nelson Elhage <nelhage@nelhage.com>
Acked-by: Jason Baron <jbaron@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Davide Libenzi <davidel@xmailserver.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 315eb8a2a1 upstream.
When compiling an i386_defconfig kernel with gcc-4.6.1-9.fc15.i686, I
noticed a warning about the asm operand for test_bit in kprobes'
can_boost. I discovered that this caused only the first long of
twobyte_is_boostable[] to be output.
Jakub filed and fixed gcc PR50571 to correct the warning and this output
issue. But to solve it for older gcc versions, we can make kprobes'
twobyte_is_boostable[] non-const, and it won't be optimized out.
Before:
CC arch/x86/kernel/kprobes.o
In file included from include/linux/bitops.h:22:0,
from include/linux/kernel.h:17,
from [...]/arch/x86/include/asm/percpu.h:44,
from [...]/arch/x86/include/asm/current.h:5,
from [...]/arch/x86/include/asm/processor.h:15,
from [...]/arch/x86/include/asm/atomic.h:6,
from include/linux/atomic.h:4,
from include/linux/mutex.h:18,
from include/linux/notifier.h:13,
from include/linux/kprobes.h:34,
from arch/x86/kernel/kprobes.c:43:
[...]/arch/x86/include/asm/bitops.h: In function ‘can_boost.part.1’:
[...]/arch/x86/include/asm/bitops.h:319:2: warning: use of memory input
without lvalue in asm operand 1 is deprecated [enabled by default]
$ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
551: 0f a3 05 00 00 00 00 bt %eax,0x0
554: R_386_32 .rodata.cst4
$ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
arch/x86/kernel/kprobes.o: file format elf32-i386
Contents of section .data:
0000 48000000 00000000 00000000 00000000 H...............
Contents of section .rodata.cst4:
0000 4c030000 L...
Only a single long of twobyte_is_boostable[] is in the object file.
After, without the const on twobyte_is_boostable:
$ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
551: 0f a3 05 20 00 00 00 bt %eax,0x20
554: R_386_32 .data
$ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
arch/x86/kernel/kprobes.o: file format elf32-i386
Contents of section .data:
0000 48000000 00000000 00000000 00000000 H...............
0010 00000000 00000000 00000000 00000000 ................
0020 4c030000 0f000200 ffff0000 ffcff0c0 L...............
0030 0000ffff 3bbbfff8 03ff2ebb 26bb2e77 ....;.......&..w
Now all 32 bytes are output into .data instead.
Signed-off-by: Josh Stone <jistone@redhat.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Jakub Jelinek <jakub@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6a469e4665 upstream.
This is a workaround for a UV2 hub bug that affects the format of system
global addresses.
The GRU API for UV2 was inadvertently broken by a hardware change. The
format of the physical address used for TLB dropins and for addresses used
with instructions running in unmapped mode has changed. This change was
not documented and became apparent only when diags failed running on
system simulators.
For UV1, TLB and GRU instruction physical addresses are identical to
socket physical addresses (although high NASID bits must be OR'ed into the
address).
For UV2, socket physical addresses need to be converted. The NODE portion
of the physical address needs to be shifted so that the low bit is in bit
39 or bit 40, depending on an MMR value.
It is not yet clear if this bug will be fixed in a silicon respin. If it
is fixed, the hub revision will be incremented & the workaround disabled.
Signed-off-by: Jack Steiner <steiner@sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6b20fa9aaf upstream.
This patch fixes a bug with the handling of REPORT TARGET PORT GROUPS
containing a smaller allocation length than the payload requires causing
memory writes beyond the end of the buffer. This patch checks for the
minimum 4 byte length for the response payload, and also checks
upon each loop of T10_ALUA(su_dev)->tg_pt_gps_list to ensure the Target
port group and Target port descriptor list are able to fit into the
remaining allocation length.
If the response payload exceeds the allocation length, then rd_len
is still incremented to indicate to the initiator that the payload has
been truncated.
Reported-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@risingtidesystems.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c5c04e509 upstream.
The purpose of this patch is to remove a section of "bad" code that
assigns the last DAC to ports E or F in order to support notebooks
with docking in earlier days, around ALSA 1.0.19 - 21. This is not
necessary now and actually breaks some configurations that use these
ports as other devices. This has been tested on several different
configurations to make sure that it is working for different combinations.
Signed-off-by: Charles Chin <Charles.Chin@idt.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 54b5e3a4bf upstream.
Kill the local smp response buffer.
Besides being unnecessary, it is too small (currently truncates
responses to 60 bytes). The mid-layer will have already allocated a
sufficiently sized buffer, just kmap and copy into it directly.
Reported-by: Derick Marks <derick.w.marks@intel.com>
Tested-by: Derick Marks <derick.w.marks@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb041a0e9c upstream.
Libsas forgot to set the sas_address and device type of the rphy, which led
to files under /sys/class/sas_x showing wrong values; fix that.
Signed-off-by: Jack Wang <jack_wang@usish.com>
Tested-by: Crystal Yu <crystal_yu@usish.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5d7c20b7fa upstream.
During kdump testing I noticed timeouts when initialising each IPR
adapter. While the driver has logic to detect an adapter in an
indeterminate state, it wasn't triggering and each adapter went
through a 5 minute timeout before finally going operational.
Some analysis showed the needs_hard_reset flag wasn't getting set.
We can check the reset_devices kernel parameter which is set by
kdump and force a full reset. This fixes the problem.
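The change amounts to something like the following in the adapter probe
path (field name per the description above; reset_devices is the existing
command-line flag that kdump kernels set):
  if (reset_devices)
          ioa_cfg->needs_hard_reset = 1;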
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f575c5d3eb upstream.
The following patch for megaraid_sas will fix a potential bad pointer access
in megasas_reset_timer(), when a MegaRAID 9265/9285 or 9360/9380 gets a
timeout. megasas_build_io_fusion() sets SCp.ptr to be a struct
megasas_cmd_fusion *, but then megasas_reset_timer() was casting SCp.ptr to be
a struct megasas_cmd *, then trying to access cmd->instance, which is invalid.
Just loading instance from scmd->device->host->hostdata in
megasas_reset_timer() fixes the issue.
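In other words, the timeout handler derives the instance from the host
instead of trusting SCp.ptr; a sketch:
  /* SCp.ptr holds a struct megasas_cmd_fusion * on these controllers,
   * so don't cast it; get the instance from the Scsi_Host instead */
  struct megasas_instance *instance =
          (struct megasas_instance *)scmd->device->host->hostdata;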
Signed-off-by: Adam Radford <aradford@gmail.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3e309cdf07 upstream.
Commit 15bed0f2f added a quirk for the e823 Ricoh card reader to lower the
base frequency. However, the quirk first checks to see if the proprietary
MMC controller is disabled, and returns if so. On some devices, such as the
Lenovo X220, the MMC controller is already disabled by firmware it seems,
but the frequency change is still needed so sdhci-pci can talk to the cards.
Since the MMC controller is disabled, the frequency fixup was never being run
on these machines.
This moves the e823 check above the MMC controller check so that it always
gets run.
This fixes https://bugzilla.redhat.com/show_bug.cgi?id=722509
Signed-off-by: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5238acbe36 upstream.
f39b2dd9d ("mmc: core: Bus width testing needs to handle suspend/resume")
added code to only compare read-only ext_csd fields in bus width testing
code, yet it's comparing some fields that are never set.
The affected fields are ext_csd.raw_erased_mem_count and
ext_csd.raw_partition_support.
Signed-off-by: Andrei Warkentin <andrey.warkentin@gmail.com>
Acked-by: Philip Rakity <prakity@marvell.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7f7e4129c2 upstream.
During a rescan operation mmc_attach(sd|mmc|sdio) functions are
called. The error handling in these functions can trigger a detach
of the bus, which also means a power off. This is not noticed by
the rescan operation, which then continues to the next attach function.
If a power off has been done, the framework must never send any
new commands to the host driver, without first doing a new power up.
This will most likely trigger any host driver to hang.
Moving power off out of detach and instead handle power off
separately when it is actually needed, solves the issue.
Signed-off-by: Ulf Hansson <ulf.hansson@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 286e0c94f9 upstream.
Commit 9b9fe724 accidentally used RADEON_GPIO_EN_* where
RADEON_GPIO_MASK_* was intended. This caused improper initialization
of I2C buses, mostly visible when setting i2c_algo_bit.bit_test=1.
Using the right constants fixes the problem.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d0d0a225e6 upstream.
When force == false, we don't do load detection in the connector
detect functions. Unfortunately, we also return the previous
connector state so we never get disconnect events for DVI-I, DVI-A,
or VGA. Save whether we detected the monitor via load detection
previously and use that to determine whether we return the previous
state or not.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=41561
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5f0a26128d upstream.
DVI-D and HDMI-A are digital only, so there's no need to
attempt analog load detect. Also, bail out before the
!force check, or we fail to get disconnect events.
The next patches in the series attempt to fix disconnect
events for connectors with analog support (DVI-I, HDMI-B,
DVI-A).
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=41561
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f52c619a59 upstream.
The commit 47356eb672 introduced a
mechanism to record the backlight level only at disabling time, but it
also introduced a regression. Since intel_lvds_enable() may be called
without disabling (e.g. intel_lvds_commit() calls it unconditionally),
the backlight gets back to the last recorded value. For example, this
happens when you dim the backlight, close the lid and open the lid,
then the backlight suddenly goes to the brightest.
This patch fixes the bug by recording the backlight level always
when changed via intel_panel_set_backlight(). And,
intel_panel_{enable|disable}_backlight() call the internal function so as
not to wrongly update the recorded level.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8c241fef3e upstream.
Talking to the eDP DDC channel requires that the panel be powered
up. Wrap both the EDID and modes fetch code with calls to turn the vdd
power on and back off.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7e393a834b upstream.
Setting the chain (CH) bit in the link TRB of isochronous transfer rings
is required by the AMD 0.96 xHCI host controller to successfully traverse
multi-TRB TDs that span different memory segments.
When a Missed Service Error event occurs, if the chain bit is not set in
the link TRB and the host skips TDs which just cross a link TRB, the
host may falsely recognize the link TRB as a normal TRB. You can see
this may cause big trouble - the host does not jump to the right address
pointed to by the link TRB, but continues fetching the memory after
the link TRB address, which may not even belong to the host,
and the result cannot be predicted.
This causes some big problems. Without the former patch I sent: "xHCI:
prevent infinite loop when processing MSE event", the system may hang.
With that patch applied, the system does not hang, but the host still accesses
the wrong memory address and isoc transfers will fail. With this patch,
isochronous transfer works as expected.
This patch should be applied to kernels as old as 2.6.36, which was when
the first isochronous support was added for the xHCI host controller.
Signed-off-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0e6c7f746e upstream.
There are 2 situations wherein the xhci_ring* might not get freed:
- When xhci_ring_alloc() -> xhci_segment_alloc() returns NULL and
we goto the fail: label in xhci_ring_alloc. In this case, the ring
will not get kfreed.
- When the num_segs argument to xhci_ring_alloc is passed as 0 and
we try to free the ring after that.
( This doesn't really happen as of now in the code but we seem to
be entertaining num_segs=0 in xhci_ring_alloc )
This should be backported to kernels as old as 2.6.31.
Signed-off-by: Kautuk Consul <consul.kautuk@gmail.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 68aa95d5d4 upstream.
This patch (as1489) works around a hardware bug in MosChip EHCI
controllers. Evidently when one of these controllers increments the
frame-index register, it changes the three low-order bits (the
microframe counter) before changing the higher order bits (the frame
counter). If the register is read at just the wrong time, the value
obtained is too low by 8.
When the appropriate quirk flag is set, we work around this problem by
reading the frame-index register a second time if the first value's
three low-order bits are all 0. This gives the hardware a chance to
finish updating the register, yielding the correct value.
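The workaround reads roughly like this (a sketch; the real code only takes
the second read when the quirk flag for the affected hardware is set):
  static unsigned ehci_read_frame_index(struct ehci_hcd *ehci)
  {
          unsigned uf = ehci_readl(ehci, &ehci->regs->frame_index);

          /* microframe bits just rolled over to 0: the frame counter may
           * not have settled yet, so read the register a second time */
          if (unlikely((uf & 7) == 0))
                  uf = ehci_readl(ehci, &ehci->regs->frame_index);
          return uf;
  }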
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Jason N Pitt <jpitt@fhcrc.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 276532ba96 upstream.
The Kirkwood gave an unaligned memory access error on
line 742 of drivers/usb/host/ehci-hcd.c:
"ehci->last_periodic_enable = ktime_get_real();"
Signed-off-by: Harro Haan <hrhaan@gmail.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94abc56f4d upstream.
The following patch removed uart_change_pm() in uart_resume_port():
commit 5933a161ab
Author: Yin Kangkai <kangkai.yin@linux.intel.com>
serial-core: reset the console speed on resume
It will break the pxa serial driver when the system resumes from suspend mode,
as it will try to set the baud rate divider register in set_termios but with
the clock off. The register value cannot be set correctly on some platforms if
the clock is disabled. The pxa driver will check the value and report the
following warning:
------------[ cut here ]------------
WARNING: at drivers/tty/serial/pxa.c:545 serial_pxa_set_termios+0x1dc/0x250()
Modules linked in:
[<c0281f30>] (unwind_backtrace+0x0/0xf0) from [<c029341c>] (warn_slowpath_common+0x4c/0x64)
[<c029341c>] (warn_slowpath_common+0x4c/0x64) from [<c029344c>] (warn_slowpath_null+0x18/0x1c)
[<c029344c>] (warn_slowpath_null+0x18/0x1c) from [<c044b1e4>] (serial_pxa_set_termios+0x1dc/0x250)
[<c044b1e4>] (serial_pxa_set_termios+0x1dc/0x250) from [<c044a840>] (uart_resume_port+0x128/0x2dc)
[<c044a840>] (uart_resume_port+0x128/0x2dc) from [<c044bbe0>] (serial_pxa_resume+0x18/0x24)
[<c044bbe0>] (serial_pxa_resume+0x18/0x24) from [<c0454d34>] (platform_pm_resume+0x40/0x4c)
[<c0454d34>] (platform_pm_resume+0x40/0x4c) from [<c0457ebc>] (pm_op+0x68/0xb4)
[<c0457ebc>] (pm_op+0x68/0xb4) from [<c0458368>] (device_resume+0xb0/0xec)
[<c0458368>] (device_resume+0xb0/0xec) from [<c04584c8>] (dpm_resume+0xe0/0x194)
[<c04584c8>] (dpm_resume+0xe0/0x194) from [<c0458588>] (dpm_resume_end+0xc/0x18)
[<c0458588>] (dpm_resume_end+0xc/0x18) from [<c02c518c>] (suspend_devices_and_enter+0x16c/0x1ac)
[<c02c518c>] (suspend_devices_and_enter+0x16c/0x1ac) from [<c02c5278>] (enter_state+0xac/0xdc)
[<c02c5278>] (enter_state+0xac/0xdc) from [<c02c48ec>] (state_store+0xa0/0xbc)
[<c02c48ec>] (state_store+0xa0/0xbc) from [<c0408f7c>] (kobj_attr_store+0x18/0x1c)
[<c0408f7c>] (kobj_attr_store+0x18/0x1c) from [<c034a6a4>] (sysfs_write_file+0x108/0x140)
[<c034a6a4>] (sysfs_write_file+0x108/0x140) from [<c02fb798>] (vfs_write+0xac/0x134)
[<c02fb798>] (vfs_write+0xac/0x134) from [<c02fb8cc>] (sys_write+0x3c/0x68)
[<c02fb8cc>] (sys_write+0x3c/0x68) from [<c027c700>] (ret_fast_syscall+0x0/0x2c)
---[ end trace 88289eceb4675b04 ]---
This patch fixes the problem by adding the power-on operation back for the uart
console when console_suspend_enabled is true.
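The idea in uart_resume_port() is roughly (helper names as used in
drivers/tty/serial/serial_core.c; treat the exact call site as an
assumption):
  /* power the port back on before set_termios() touches the divider */
  if (console_suspend_enabled && uart_console(uport))
          uart_change_pm(state, 0);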
Signed-off-by: Ning Jiang <ning.jiang@marvell.com>
Tested-by: Mayank Rana <mrana@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e44aabd649 upstream.
Errata E20: UART: Character Timeout interrupt remains set under certain
software conditions.
Implication: The software servicing the UART can be trapped in an infinite loop.
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 68c79e5756 upstream.
Simple patch to make qcserial recognize the USB id of the Sierra
Wireless MC8355 which is based on the Gobi 3000 chip.
Both UMTS and GPS work fine.
Signed-off-by: Richard Hartmann <richih.mailinglist@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e8df1674d3 upstream.
If the usermode app does an ioctl over this serial device by
using TIOCMIWAIT, then the code will wait by setting the current
task state to TASK_INTERRUPTIBLE and then calling schedule().
This will be woken up by the qt2_process_modem_status on URB
completion when the port_extra->shadowMSR is set to the new
modem status.
However, this could result in a lost-wakeup scenario due to a race
between the logic in the qt2_ioctl(TIOCMIWAIT) loop and the URB completion
for new modem status in qt2_process_modem_status.
Due to this, the usermode app's task will continue to sleep despite a
change in the modem status.
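The usual fix for this class of race is to snapshot the modem status before
sleeping and let the wait condition re-check it, so a wakeup between the
check and the sleep cannot be lost; a sketch with illustrative field names
(not necessarily the driver's own):
  prev_msr = priv->shadow_msr;
  ret = wait_event_interruptible(port->delta_msr_wait,
                                 priv->shadow_msr != prev_msr);
  if (ret)
          return ret;     /* interrupted by a signal */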
Signed-off-by: Kautuk Consul <consul.kautuk@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7cbf3c7cd5 upstream.
The serqt_usb2 driver will not work properly with the ssu100 device
even though it claims to support it. The ssu100 is supported by the
ssu100 driver in mainline so there is no need to have it claimed by
serqt_usb2.
Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c5a48592d8 upstream.
A return value of -EINPROGRESS from pm_runtime_get indicates that
the device is already resuming due to a previous call. Internally,
usb_autopm_get_interface_async doesn't treat this as an error and
increments the usage count, but passes the error status along
to the caller. The logical assumption of the caller is that
any negative return value reflects the device not resuming
and the pm_usage_cnt not being incremented. Since the usage count
is being incremented and the device is resuming, return success (0)
instead.
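The change inside usb_autopm_get_interface_async() boils down to something
like this (a sketch):
  status = pm_runtime_get(&intf->dev);
  /* a resume is already under way and the usage count was taken,
   * so report success rather than an error */
  if (status == -EINPROGRESS)
          status = 0;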
Signed-off-by: James Wylder <james.wylder@motorola.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1177c0efc0 upstream.
Mistakenly, commit 64ba3dc314 (tty: never hold BTM while getting
tty_mutex) switched one fail path in ptmx_open to not free the newly
allocated tty.
Fix that by jumping to the appropriate place. And rename the labels so
that it's clear what is going on there.
Introduced-in: v2.6.36-rc2
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fa90e1c935 upstream.
If tty_add_file fails at the point it is now, we have to revert all
the changes we did to the tty. That means either decreasing all refcounts
if this was a tty reopen, or deleting the tty if it was newly allocated.
There was an attempt to fix this in v3.0-rc2 using tty_release in 0259894c7
(TTY: fix fail path in tty_open). But it instead introduced a NULL
dereference. It's because tty_release dereferences
filp->private_data, but that one is set even in our tty_add_file. And
when tty_add_file fails, it's still NULL/garbage. Hence tty_release
cannot be called there.
To circumvent the original leak (and the current NULL deref) we split
tty_add_file into two functions, making the latter non-failing. In
that case we may do the former early in open, where handling failures
is easy. The latter stays as it is now. So there is no change in
functionality.
The original bug (leak) was introduced by f573bd176 (tty: Remove
__GFP_NOFAIL from tty_add_file()). Thanks Dan for reporting this.
Later, we may split tty_release into more functions and call only some
of them in this fail path instead. (If at all possible.)
Introduced-in: v2.6.37-rc2
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c290f8358a upstream.
When tty_driver_lookup_tty fails in tty_open, we forget to drop a
reference to the tty driver. This was added by commit 4a2b5fddd5 (Move
tty lookup/reopen to caller).
Fix that by adding tty_driver_kref_put to the fail path.
I will refactor the code later. This is for the ease of backporting to
stable.
Introduced-in: v2.6.28-rc2
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Acked-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2f7861de11 upstream.
This patch fixes the following build error:
drivers/tty/serial/crisv10.c:4453: error: 'if_ser0' undeclared (first use in this function): 2 errors in 2 logs
v3.1-rc4/cris/cris-allmodconfig v3.1-rc4/cris/cris-allyesconfig
drivers/tty/serial/crisv10.c:4453: error: (Each undeclared identifier is reported only once: 2 errors in 2 logs
v3.1-rc4/cris/cris-allmodconfig v3.1-rc4/cris/cris-allyesconfig
drivers/tty/serial/crisv10.c:4453: error: for each function it appears in.): 2 errors in 2 logs
v3.1-rc4/cris/cris-allmodconfig v3.1-rc4/cris/cris-allyesconfig
"if_ser0" is a typo, it should be "if_serial_0".
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f588c960fc upstream.
Commit 6596528e39 ("hfsplus: ensure bio requests are not smaller than
the hardware sectors") changed the pointers used for volume header
allocations but failed to free the correct pointers in the error
paths of hfsplus_fill_super() and hfsplus_read_wrapper().
The second hunk came from a separate patch by Pavel Ivanov.
Reported-by: Pavel Ivanov <paivanof@gmail.com>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Christoph Hellwig <hch@tuxera.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 051a8cb655 upstream.
The previous fix for the position-buffer check gives yet another
regression on a Dell laptop. The safest fix right now is to add a
static quirk for this device (and better to apply it for stable
kernels too).
Reported-by: Éric Piel <Eric.Piel@tremplin-utc.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ca201c0962 upstream.
This is a patch for the Conexant codec of the Intel HDA driver, adding a new
quirk for the Lenovo Thinkpad T520 and W520. Conexant autodetection works fine
for the T520 (a similar subsystem ID is also used in the W520 model) and
detects more mixer features compared to the generic (fallback) Lenovo quirk
with hardcoded options in the Conexant codec.
The patch was actively tested with Linux 3.0.4, 3.0.6 and 3.0.7 without any
problems.
Signed-off-by: Daniel Suchy <danny@danysek.cz>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 486cf46f3f upstream.
I don't usually pay much attention to the stale "? " addresses in
stack backtraces, but this lucky report from Pawel Sikora hints that
mremap's move_ptes() has inadequate locking against page migration.
3.0 BUG_ON(!PageLocked(p)) in migration_entry_to_page():
kernel BUG at include/linux/swapops.h:105!
RIP: 0010:[<ffffffff81127b76>] [<ffffffff81127b76>]
migration_entry_wait+0x156/0x160
[<ffffffff811016a1>] handle_pte_fault+0xae1/0xaf0
[<ffffffff810feee2>] ? __pte_alloc+0x42/0x120
[<ffffffff8112c26b>] ? do_huge_pmd_anonymous_page+0xab/0x310
[<ffffffff81102a31>] handle_mm_fault+0x181/0x310
[<ffffffff81106097>] ? vma_adjust+0x537/0x570
[<ffffffff81424bed>] do_page_fault+0x11d/0x4e0
[<ffffffff81109a05>] ? do_mremap+0x2d5/0x570
[<ffffffff81421d5f>] page_fault+0x1f/0x30
mremap's down_write of mmap_sem, together with i_mmap_mutex or lock,
and pagetable locks, were good enough before page migration (with its
requirement that every migration entry be found) came in, and enough
while migration always held mmap_sem; but not enough nowadays, when
there's memory hotremove and compaction.
The danger is that move_ptes() lets a migration entry dodge around
behind remove_migration_pte()'s back, so it's in the old location when
looking at the new, then in the new location when looking at the old.
Either mremap's move_ptes() must additionally take anon_vma lock(), or
migration's remove_migration_pte() must stop peeking for is_swap_entry()
before it takes pagetable lock.
Consensus chooses the latter: we prefer to add overhead to migration
than to mremapping, which gets used by JVMs and by exec stack setup.
Reported-and-tested-by: Paweł Sikora <pluto@agmk.net>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 133d324d82 upstream.
Since 8-bit temperature values are now handled in 16-bit struct
members, values have to be cast to s8 for negative temperatures to be
properly handled. This is broken since kernel version 2.6.39
(commit bce26c58df86599c9570cee83eac58bdaae760e4.)
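A small sketch of the fix pattern (field names hypothetical):
  s16 stored = data->temp8[nr];        /* 8-bit register value in 16-bit storage */
  int millideg = ((s8)stored) * 1000;  /* 0xF6 -> -10000, not 246000 */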
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8548c84da2 upstream.
Commit 4b239f458 ("x86-64, mm: Put early page table high") causes an S4
regression since 2.6.39, namely the machine occasionally reboots at S4
resume. It doesn't always happen; the overall rate is about 1/20. But,
like other bugs, once it happens, it continues to happen.
This patch fixes the problem by essentially reverting the memory
assignment in the older way.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Yinghai Lu <yinghai.lu@oracle.com>
[ We'll hopefully find the real fix, but that's too late for 3.1 now ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0278ccd9d5 upstream.
If firewire-sbp2 starts a login to a target that doesn't complete ORBs
in a timely manner (and has to retry the login), and the module is
removed before the operation times out, you end up with a null-pointer
dereference and a kernel panic.
[SR: This happens because sbp2_target_get/put() do not maintain
module references. scsi_device_get/put() do, but at occasions like
Chris describes one, nobody holds a reference to an SBP-2 sdev.]
This patch cancels pending work for each unit in sbp2_remove(), which
hopefully means there are no extra references around that prevent us
from unloading. This fixes my crash.
Signed-off-by: Chris Boot <bootc@bootc.net>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0030807c66 upstream
Currently we have a few issues with the way the workqueue code is used to
implement AIL pushing:
- it accidentally uses the same workqueue as the syncer action, and thus
can be prevented from running if there are enough sync actions active
in the system.
- it doesn't use the HIGHPRI flag to queue at the head of the queue of
work items
At this point I'm not confident enough in getting all the workqueue flags and
tweaks right to provide a perfectly reliable execution context for AIL
pushing, which is the most important piece in XFS to make forward progress
when the log fills.
Revert back to use a kthread per filesystem which fixes all the above issues
at the cost of having a task struct and stack around for each mounted
filesystem. In addition this also gives us much better ways to diagnose
any issues involving hung AIL pushing and removes a small amount of code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Tested-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 17b38471c3 upstream
We need to check for pinned buffers even in .iop_pushbuf given that inode
items flush into the same buffers that may be pinned directly due to operations
on the unlinked inode list operating directly on buffers. To do this add a
return value to .iop_pushbuf that tells the AIL push about this and use
the existing log force mechanisms to unpin it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Tested-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bc6e588a89 upstream
If an item was locked we should not update xa_last_pushed_lsn and thus skip
it when restarting the AIL scan as we need to be able to lock and write it
out as soon as possible. Otherwise heavy lock contention might starve AIL
pushing too easily, especially given the larger backoff once we moved
xa_last_pushed_lsn all the way to the target lsn.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Tested-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1d8c95a363 upstream
xfs: use a cursor for bulk AIL insertion
Delayed logging can insert tens of thousands of log items into the
AIL at the same LSN. When the committing of log commit records
occur, we can get insertions occurring at an LSN that is not at the
end of the AIL. If there are thousands of items in the AIL on the
tail LSN, each insertion has to walk the AIL to find the correct
place to insert the new item into the AIL. This can consume large
amounts of CPU time and block other operations from occurring while
the traversals are in progress.
To avoid this repeated walk, use an AIL cursor to record
where we should be inserting the new items into the AIL without
having to repeat the walk. The cursor infrastructure already
provides this functionality for push walks, so is a simple extension
of existing code. While this will not avoid the initial walk, it
will avoid repeating it tens of thousands of times during a single
checkpoint commit.
This version includes logic improvements from Christoph Hellwig.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2bcf6e970f upstream
Start the periodic sync workers only after we have finished xfs_mountfs
and thus fully set up the filesystem structures. Without this we can
call into xfs_qm_sync before the quotainfo structure is set up if the
mount takes unusually long, and probably hit other incomplete states
as well.
Also clean up the xfs_fs_fill_super error path by using consistent
label names, and removing an impossible to reach case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Arkadiusz Miskiewicz <arekm@maven.pl>
Reviewed-by: Alex Elder <aelder@sgi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eac2095398 upstream.
Nouveau makes the assumption that if a TTM is bound there will be a mm_node
around for it and the backwards ordering here resulted in a use-after-free
on some eviction paths.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Cc: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8d3bb23609 upstream.
This was true for new TTM_PL_SYSTEM and new TTM_PL_TT cases, but wasn't
the case on TTM_PL_SYSTEM<->TTM_PL_TT moves, which causes trouble on some
paths as nouveau's move_notify() hook requires that the dma addresses be
valid at this point.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Cc: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4d9b2ebd33 upstream.
The uvc_mc_register_entity() function wrongfully selects the
media_entity associated with a UVC entity when creating links. This
results in access to uninitialized media_entity structures and can hit a
BUG_ON statement in media_entity_create_link(). Fix it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 35d851df23 upstream.
This is basically a more generic respin of 23746a6 ("HID: magicmouse: ignore
'ivalid report id' while switching modes") which got reverted later by
c3a492.
It turns out that on some configurations this is actually still the case,
and we are not able to detect it at runtime.
The device responds with 'invalid report id' when the feature report switching
it into multitouch mode is sent to it.
This has been silently ignored before 0825411ade ("HID: bt: Wait for ACK
on Sent Reports"), but since this commit, it propagates -EIO from the _raw
callback.
So let the driver ignore -EIO as response to 0xd7,0x01 report, as that's
how the device reacts in normal mode.
Sad, but following reality.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=35022
Reported-by: Chase Douglas <chase.douglas@canonical.com>
Reported-by: Jaikumar Ganesh <jaikumarg@android.com>
Tested-by: Chase Douglas <chase.douglas@canonical.com>
Tested-by: Jaikumar Ganesh <jaikumarg@android.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 78a7539b88 upstream.
Some Samsung laptops of the N150/N2{10,20,30} series are badly detected by the samsung-laptop platform driver, see bug #36082.
It appears that the N230 identifies itself as N150/N210/N220/N230 whereas the others identify themselves as N150/N210/220.
This patch attempts to fix #36082, allowing correct identification of all the said netbook models.
Reported-by: Daniel Eklöf <daniel@ekloef.se>
Signed-off-by: Thomas Courbon <thcourbon@gmail.com>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Cc: Josh Boyer <jwboyer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bcd5cff721 upstream.
There's a lock inversion between the cputimer->lock and rq->lock;
notably the two callchains involved are:
update_rlimit_cpu()
sighand->siglock
set_process_cpu_timer()
cpu_timer_sample_group()
thread_group_cputimer()
cputimer->lock
thread_group_cputime()
task_sched_runtime()
->pi_lock
rq->lock
scheduler_tick()
rq->lock
task_tick_fair()
update_curr()
account_group_exec()
cputimer->lock
Where the first one is enabling a CLOCK_PROCESS_CPUTIME_ID timer, and
the second one is keeping up-to-date.
This problem was introduced by e8abccb719 ("posix-cpu-timers: Cure
SMP accounting oddities").
Cure the problem by removing the cputimer->lock and rq->lock nesting,
this leaves concurrent enablers doing duplicate work, but the time
wasted should be of the same order as that otherwise wasted spinning on
the lock, and the greater-than assignment filter should ensure we preserve
monotonicity.
Reported-by: Dave Jones <davej@redhat.com>
Reported-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Link: http://lkml.kernel.org/r/1318928713.21167.4.camel@twins
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5a6e8482a1 upstream.
FB scratch indices are dword indices, but we were treating
them as byte indices. As such, we were getting the wrong
FB scratch data for non-0 indices. Fix the indices and
guard the indexing against indices larger than the scratch
allocation.
Fixes memory corruption on some boards if data was written
past the end of the FB scratch array.
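A sketch of the corrected access (field names approximate):
  /* idx is a dword index into the FB scratch area */
  if (idx < ctx->scratch_size_bytes / 4)
          val = ctx->scratch[idx];
  else
          val = 0;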
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reported-by: Dave Airlie <airlied@redhat.com>
Tested-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a84a79e4d3 upstream.
The size is always valid, but variable-length arrays generate worse code
for no good reason (unless the function happens to be inlined and the
compiler sees the length for the simple constant it is).
Also, there seems to be some code generation problem on POWER, where
Henrik Bakken reports that register r28 can get corrupted under some
subtle circumstances (interrupt happening at the wrong time?). That all
indicates some seriously broken compiler issues, but since variable
length arrays are bad regardless, there's little point in trying to
chase it down.
"Just don't do that, then".
Reported-by: Henrik Grindal Bakken <henribak@cisco.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bf164c58e5 upstream.
The w83627ehf driver is improperly reporting thermal diode sensors as
type 2, instead of 3. This caused "sensors" and possibly other
monitoring tools to report these sensors as "transistor" instead of
"thermal diode".
Furthermore, diode subtype selection (CPU vs. external) is only
supported by the original W83627EHF/EHG. All later models only support
CPU diode type, and some (NCT6776F) don't even have the register in
question so we should avoid reading from it.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f5e4282586 upstream.
Patch to add SiGma Micro-based keyboards (1c4f:0002) to hid-quirks.
These keyboards don't seem to allow the records to be initialized, and hence a
timeout occurs when the usbhid driver attempts to initialize them. The patch
just adds the signature for these keyboards to the hid-quirks list with the
setting HID_QUIRK_NO_INIT_REPORTS. This removes the 5-10 second wait for the
timeout to occur.
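The added entry is of this shape (macro names approximate):
  { USB_VENDOR_ID_SIGMA_MICRO, USB_DEVICE_ID_SIGMA_MICRO_KEYBOARD,
          HID_QUIRK_NO_INIT_REPORTS },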
Signed-off-by: Jeremiah Matthey <sprg86@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0ed013e28f upstream.
The MAC can drop short packets when the PHY detects noise on the line at
100Mbps due to a timing issue. Work around the issue by increasing the PLL
counter so the PHY properly recognizes the synchronization pattern from the
MAC.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Leann Ogasawara <leann.ogasawara@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 04da85b861 upstream.
The struct ftrace_hash was declared within CONFIG_FUNCTION_TRACER
but was referenced outside of it.
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
commit f7bc8b61f6 upstream.
Enabling the function tracer to trace all functions, then loading a module and
then disabling function tracing will cause ftrace to fail.
This can also happen by enabling function tracing on the command line:
ftrace=function
and letting modules load during boot up; if you then disable function tracing
with 'echo nop > current_tracer' you will trigger a bug in ftrace that
will shut itself down.
The reason is, the new ftrace code keeps ref counts of all ftrace_ops that
are registered for tracing. When one or more ftrace_ops are registered,
all the records that represent the functions that the ftrace_ops will
trace have a ref count incremented. If this ref count is not zero,
when the code modification runs, that function will be enabled for tracing.
If the ref count is zero, that function will be disabled from tracing.
To make sure the accounting was working, FTRACE_WARN_ON()s were added
to updating of the ref counts.
If the ref count hits its max (> 2^30 ftrace_ops added), or if
the ref count goes below zero, a FTRACE_WARN_ON() is triggered which
disables all modification of code.
Since it is common for ftrace_ops to trace all functions in the kernel,
instead of creating > 20,000 hash items for the ftrace_ops, the hash
count is just set to zero, and it represents that the ftrace_ops is
to trace all functions. This is where the issues arise.
If you enable function tracing to trace all functions, and then add
a module, the module's function records do not get the ref count updated.
When the function tracer is disabled, all function records ref counts
are subtracted. Since the modules never had their ref counts incremented,
they go below zero and the FTRACE_WARN_ON() is triggered.
The solution to this is rather simple. When modules are loaded, and
their functions are added to the ftrace pool, look to see if any
ftrace_ops are registered that trace all functions. And for those,
update the ref count for the module function records.
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
commit bd7100099a upstream.
Convert some MIPS architecture's code to using struct syscore_ops
objects for power management instead of sysdev classes and sysdevs.
This simplifies the code and reduces the kernel's memory footprint.
It also is necessary for removing sysdevs from the kernel entirely in
the future.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-and-tested-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: linux-kernel@vger.kernel.org
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Patchwork: http://patchwork.linux-mips.org/patch/2431/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3c4aa91f21 upstream.
Like e65cc194f7 this patch enables 64bit DMA
for the AHCI SATA controller of a board that has the SB600 southbridge. In
this case though we're enabling 64bit DMA for the Asus M3A motherboard. It
is a new enough board that all of the BIOS releases since the initial
release (0301 from 2007-10-22) work correctly with 64bit DMA enabled.
Signed-off-by: Mark Nelson <mdnelson8@gmail.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch fixes the issue caused by ef81bb40bf
which is a backport of upstream 87c48fa3b4. The
problem does not exist in upstream.
We do not check whether a route is attached before trying to assign the ip
identification through the route dest, which leads to a NULL pointer
dereference. This happens when the host bridge transmits a packet from the
guest.
This patch changes ipv6_select_ident() to accept an in6_addr as its parameter
and fixes the issue by using the destination address in the ipv6 header when
no route is attached.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4f332844cc upstream.
If there are error flags in the aux status, retry the transaction.
This makes aux much more reliable, especially on llano systems.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 912193521b upstream.
Currently, search_binary_handler() tries to load the binary loader module
using request_module() if a loader for the requested program is not yet
loaded. But the second attempt of request_module() does not affect the result
of search_binary_handler().
If request_module() triggered recursion, calling request_module() twice
causes 2 to the power of MAX_KMOD_CONCURRENT (= 50) repetitions. It is
not an infinite loop but is long enough for users to consider it a hang.
Therefore, this patch changes the code not to call request_module() twice,
making 1 to the power of MAX_KMOD_CONCURRENT repetitions in case of recursion.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: Richard Weinberger <richard@nod.at>
Tested-by: Richard Weinberger <richard@nod.at>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Maxim Uvarov <muvarov@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit d982dcdc4e upstream.
Fix clock rate setting in the mxs-mmc driver. Previously, if div2 was 0
then the value for TIMING_CLOCK_RATE would have been 255 instead of 0.
The limits for div1 (TIMING_CLOCK_DIVIDE) and div2 (TIMING_CLOCK_RATE+1)
were also not correctly defined.
Can easily be reproduced on mx23evk: the default clock for high speed sdio
cards is 50 MHz. With an SSP_CLK of 28.8 MHz (the default), this resulted in
an actual clock rate of about 56 kHz. Tested on mx23evk.
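The numbers line up with the divider formula (as commonly documented for the
i.MX SSP block; treat the exact formula as an assumption):
  /*
   * actual_rate = ssp_clk / (CLOCK_DIVIDE * (CLOCK_RATE + 1))
   *
   * old code: a computed CLOCK_RATE of 0 wrapped to 255, so with the
   * 28.8 MHz default SSP_CLK and CLOCK_DIVIDE = 2:
   *     28800000 / (2 * (255 + 1)) = 56250 Hz   (the ~56 kHz observed)
   *
   * fixed code keeps CLOCK_RATE = 0:
   *     28800000 / (2 * (0 + 1))   = 14400000 Hz
   */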
Signed-off-by: Koen Beel <koen.beel@barco.com>
Reviewed-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 876fbba1db upstream.
Commit a63a5cf (dm: improve block integrity support) introduced a
two-phase initialization of a DM device's integrity profile. This
patch avoids dereferencing a NULL 'template_disk' pointer in
blk_integrity_register() if there is an integrity profile mismatch in
dm_table_set_integrity().
This can occur if the integrity profiles for stacked devices in a DM
table are changed between the call to dm_table_prealloc_integrity() and
dm_table_set_integrity().
Reported-by: Zdenek Kabelac <zkabelac@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 01f96c0a99 upstream.
Two related problems:
1/ some error paths call "md_unregister_thread(mddev->thread)"
without subsequently clearing ->thread. A subsequent call
to mddev_unlock will try to wake the thread, and crash.
2/ Most calls to md_wakeup_thread are protected against the thread
disappearing either by:
- holding the ->mutex
- having an active request, so something else must be keeping
the array active.
However mddev_unlock calls md_wakeup_thread after dropping the
mutex and without any certainty of an active request, so the
->thread could theoretically disappear.
So we need a spinlock to provide some protections.
So change md_unregister_thread to take a pointer to the thread
pointer, and ensure that it always does the required locking, and
clears the pointer properly.
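A sketch of the reworked helper (lock name illustrative; the real code reuses
an existing md lock for this):
  void md_unregister_thread(struct md_thread **threadp)
  {
          struct md_thread *thread = *threadp;

          if (!thread)
                  return;

          /* clear the pointer under the lock so md_wakeup_thread() can
           * never see a thread that is about to be freed */
          spin_lock(&md_thread_lock);
          *threadp = NULL;
          spin_unlock(&md_thread_lock);

          kthread_stop(thread->tsk);
          kfree(thread);
  }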
Reported-by: "Moshe Melnikov" <moshe@zadarastorage.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9bfacd01dc upstream.
I hit a crash in qla2x00_abort_all_cmds() if the qla2xxx module is
unloaded right after it is loaded. I debugged this down to the abort
handling improperly treating a command of type SRB_ADISC_CMD as if it
had a bsg_job to complete when that command actually uses the iocb_cmd
part of the union. (I guess to hit this one has to unload the module
while the async FC initialization is still in progress)
It seems we should only look for a bsg_job if type is SRB_ELS_CMD_RPT,
SRB_ELS_CMD_HST or SRB_CT_CMD, so switch the test to make that explicit.
Signed-off-by: Roland Dreier <roland@purestorage.com>
Acked-by: Chad Dupuis <chad.dupuis@qlogic.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 29cf7a30f8 upstream.
In summary, this DMI quirk uses the _CRS info by default for the ASUS
M2V-MX SE by turning on `pci=use_crs` and is similar to the quirk
added by commit 2491762cfb ("x86/PCI: use host bridge _CRS info on
ASRock ALiveSATA2-GLAN") whose commit message should be read for further
information.
Since commit 3e3da00c01 ("x86/pci: AMD one chain system to use pci
read out res") Linux gives the following oops:
parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
HDA Intel 0000:20:01.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
HDA Intel 0000:20:01.0: setting latency timer to 64
BUG: unable to handle kernel paging request at ffffc90011c08000
IP: [<ffffffffa0578402>] azx_probe+0x3ad/0x86b [snd_hda_intel]
PGD 13781a067 PUD 13781b067 PMD 1300ba067 PTE 800000fd00000173
Oops: 0009 [#1] SMP
last sysfs file: /sys/module/snd_pcm/initstate
CPU 0
Modules linked in: snd_hda_intel(+) snd_hda_codec snd_hwdep snd_pcm_oss snd_mixer_oss snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event tpm_tis tpm snd_seq tpm_bios psmouse parport_pc snd_timer snd_seq_device parport processor evdev snd i2c_viapro thermal_sys amd64_edac_mod k8temp i2c_core soundcore shpchp pcspkr serio_raw asus_atk0110 pci_hotplug edac_core button snd_page_alloc edac_mce_amd ext3 jbd mbcache sha256_generic cryptd aes_x86_64 aes_generic cbc dm_crypt dm_mod raid1 md_mod usbhid hid sg sd_mod crc_t10dif sr_mod cdrom ata_generic uhci_hcd sata_via pata_via libata ehci_hcd usbcore scsi_mod via_rhine mii nls_base [last unloaded: scsi_wait_scan]
Pid: 1153, comm: work_for_cpu Not tainted 2.6.37-1-amd64 #1 M2V-MX SE/System Product Name
RIP: 0010:[<ffffffffa0578402>] [<ffffffffa0578402>] azx_probe+0x3ad/0x86b [snd_hda_intel]
RSP: 0018:ffff88013153fe50 EFLAGS: 00010286
RAX: ffffc90011c08000 RBX: ffff88013029ec00 RCX: 0000000000000006
RDX: 0000000000000000 RSI: 0000000000000246 RDI: 0000000000000246
RBP: ffff88013341d000 R08: 0000000000000000 R09: 0000000000000040
R10: 0000000000000286 R11: 0000000000003731 R12: ffff88013029c400
R13: 0000000000000000 R14: 0000000000000000 R15: ffff88013341d090
FS: 0000000000000000(0000) GS:ffff8800bfc00000(0000) knlGS:00000000f7610ab0
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffffc90011c08000 CR3: 0000000132f57000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process work_for_cpu (pid: 1153, threadinfo ffff88013153e000, task ffff8801303c86c0)
Stack:
0000000000000005 ffffffff8123ad65 00000000000136c0 ffff88013029c400
ffff8801303c8998 ffff88013341d000 ffff88013341d090 ffff8801322d9dc8
ffff88013341d208 0000000000000000 0000000000000000 ffffffff811ad232
Call Trace:
[<ffffffff8123ad65>] ? __pm_runtime_set_status+0x162/0x186
[<ffffffff811ad232>] ? local_pci_probe+0x49/0x92
[<ffffffff8105afc5>] ? do_work_for_cpu+0x0/0x1b
[<ffffffff8105afc5>] ? do_work_for_cpu+0x0/0x1b
[<ffffffff8105afd0>] ? do_work_for_cpu+0xb/0x1b
[<ffffffff8105fd3f>] ? kthread+0x7a/0x82
[<ffffffff8100a824>] ? kernel_thread_helper+0x4/0x10
[<ffffffff8105fcc5>] ? kthread+0x0/0x82
[<ffffffff8100a820>] ? kernel_thread_helper+0x0/0x10
Code: f4 01 00 00 ef 31 f6 48 89 df e8 29 dd ff ff 85 c0 0f 88 2b 03 00 00 48 89 ef e8 b4 39 c3 e0 8b 7b 40 e8 fc 9d b1 e0 48 8b 43 38 <66> 8b 10 66 89 14 24 8b 43 14 83 e8 03 83 f8 01 77 32 31 d2 be
RIP [<ffffffffa0578402>] azx_probe+0x3ad/0x86b [snd_hda_intel]
RSP <ffff88013153fe50>
CR2: ffffc90011c08000
---[ end trace 8d1f3ebc136437fd ]---
Trusting the ACPI _CRS information (`pci=use_crs`) fixes this problem.
$ dmesg | grep -i crs # with the quirk
PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
The match has to be against the DMI board entries though since the vendor entries are not populated.
DMI: System manufacturer System Product Name/M2V-MX SE, BIOS 0304 10/30/2007
This quirk should be removed when `pci=use_crs` is enabled for machines
from 2006 or earlier or some other solution is implemented.
With coreboot [1] on this board the problem does not exist, and this
quirk does not affect it either. To be safe though, the check is
tightened to only take effect when the BIOS from American Megatrends is
used.
15:13 < ruik> but coreboot does not need that
15:13 < ruik> because i have there only one root bus
15:13 < ruik> the audio is behind a bridge
$ sudo dmidecode
BIOS Information
Vendor: American Megatrends Inc.
Version: 0304
Release Date: 10/30/2007
[1] http://www.coreboot.org/
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=30552
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: x86@kernel.org
Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 77a861c405 upstream.
The rt2x00 driver gets frequent occurrences of the following error message
when operating under load:
phy0 -> rt2x00queue_write_tx_frame: Error - Arrived at non-free entry in the
non-full queue 2.
This is caused by simultaneous attempts from mac80211 to send a frame via
rt2x00, which are not properly serialized inside rt2x00queue_write_tx_frame,
causing the second frame to fail sending with the above mentioned error
message.
Fix this by introducing a per-queue spinlock to serialize the TX operations
on that queue.
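As an illustration of the serialization being introduced, a minimal sketch of
the idea follows; the lock member and helper names are hypothetical here, not
necessarily the actual rt2x00 symbols:
static int queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb)
{
	unsigned long irqflags;
	int ret = 0;

	/* One lock per queue: concurrent callers from mac80211 take turns
	 * instead of racing for the same queue entry. */
	spin_lock_irqsave(&queue->tx_lock, irqflags);
	if (queue_is_full(queue)) {
		ret = -ENOBUFS;		/* fail cleanly instead of hitting a
					 * non-free entry */
	} else {
		/* claim the next entry and build the descriptor while the
		 * queue index cannot move underneath us */
		claim_next_entry(queue);
	}
	spin_unlock_irqrestore(&queue->tx_lock, irqflags);
	return ret;
}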
Reported-by: Andreas Hartmann <andihartmann@01019freenet.de>
Signed-off-by: Gertjan van Wingerde <gwingerde@gmail.com>
Acked-by: Helmut Schaa <helmut.schaa@googlemail.com>
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Cc: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f75159e993 upstream.
The IEEE 1588 standard defines two kinds of messages, event and general
messages. Event messages require time stamping, and general do not. When
using UDP transport, two separate ports are used for the two message
types.
The BPF designed to recognize event messages incorrectly classifies L2
general messages as event messages. This commit fixes the issue by
extending the filter to check the message type field for L2 PTP packets.
Event messages can be distinguished from general messages by testing
the "general" bit.
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 12d5180bd7 upstream.
Most asics just use the hw default value which requires
no explicit programming. For those that need a different
value, the vbios will program it properly. As such,
there's no need to program these registers explicitly
in the driver. Changing MC_SHARED_CHREMAP requires a reload
of all data in vram, otherwise its contents will be scrambled.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=40103
v2: drop now unused channel_remap functions.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 21d17dd2a3 upstream.
Current code set update bits for WM8753_LDAC and WM8753_RDAC twice,
but missed setting update bits for WM8753_LADC and WM8753_RADC.
I think it is a copy-paste bug in commit 776065
"ASoC: codecs: wm8753: Fix register cache incoherency".
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eff919ac0f upstream.
A recent conversion has introduced references to &pdev->dev, which does
not actually exist in all the contexts it's used in.
Replace this with card->dev where necessary, in order to let
the driver build again.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 05faadcf59 upstream.
Commit 2a7fade7e0 ("hwmon: lis3: Power on corrections") caused a
regression on HP laptops with an 8-bit chip. Writing the CTRL2_BOOT_8B bit
seems to clear the BIOS setup, and no proper interrupt for DriveGuard will be
triggered any more.
Since the init code there is basically only for embedded devices, add a
pdata check so that the problematic initialization will be skipped for the
hp_accel case.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Cc: Eric Piel <eric.piel@tremplin-utc.net>
Cc: Samu Onkalo <samu.p.onkalo@nokia.com>
Signed-off-by: Andrew Morton <akpm@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d670ec1317 upstream.
David reported:
Attached below is a watered-down version of rt/tst-cpuclock2.c from
GLIBC. Just build it with "gcc -o test test.c -lpthread -lrt" or
similar.
Run it several times, and you will see cases where the main thread
will measure a process clock difference before and after the nanosleep
which is smaller than the cpu-burner thread's individual thread clock
difference. This doesn't make any sense since the cpu-burner thread
is part of the top-level process's thread group.
I've reproduced this on both x86-64 and sparc64 (using both 32-bit and
64-bit binaries).
For example:
[davem@boricha build-x86_64-linux]$ ./test
process: before(0.001221967) after(0.498624371) diff(497402404)
thread: before(0.000081692) after(0.498316431) diff(498234739)
self: before(0.001223521) after(0.001240219) diff(16698)
[davem@boricha build-x86_64-linux]$
The diff of 'process' should always be >= the diff of 'thread'.
I make sure to wrap the 'thread' clock measurements the most tightly
around the nanosleep() call, and that the 'process' clock measurements
are the outer-most ones.
---
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <pthread.h>
static pthread_barrier_t barrier;
static void *chew_cpu(void *arg)
{
pthread_barrier_wait(&barrier);
while (1)
__asm__ __volatile__("" : : : "memory");
return NULL;
}
int main(void)
{
clockid_t process_clock, my_thread_clock, th_clock;
struct timespec process_before, process_after;
struct timespec me_before, me_after;
struct timespec th_before, th_after;
struct timespec sleeptime;
unsigned long diff;
pthread_t th;
int err;
err = clock_getcpuclockid(0, &process_clock);
if (err)
return 1;
err = pthread_getcpuclockid(pthread_self(), &my_thread_clock);
if (err)
return 1;
pthread_barrier_init(&barrier, NULL, 2);
err = pthread_create(&th, NULL, chew_cpu, NULL);
if (err)
return 1;
err = pthread_getcpuclockid(th, &th_clock);
if (err)
return 1;
pthread_barrier_wait(&barrier);
err = clock_gettime(process_clock, &process_before);
if (err)
return 1;
err = clock_gettime(my_thread_clock, &me_before);
if (err)
return 1;
err = clock_gettime(th_clock, &th_before);
if (err)
return 1;
sleeptime.tv_sec = 0;
sleeptime.tv_nsec = 500000000;
nanosleep(&sleeptime, NULL);
err = clock_gettime(th_clock, &th_after);
if (err)
return 1;
err = clock_gettime(my_thread_clock, &me_after);
if (err)
return 1;
err = clock_gettime(process_clock, &process_after);
if (err)
return 1;
diff = process_after.tv_nsec - process_before.tv_nsec;
printf("process: before(%lu.%.9lu) after(%lu.%.9lu) diff(%lu)\n",
process_before.tv_sec, process_before.tv_nsec,
process_after.tv_sec, process_after.tv_nsec, diff);
diff = th_after.tv_nsec - th_before.tv_nsec;
printf("thread: before(%lu.%.9lu) after(%lu.%.9lu) diff(%lu)\n",
th_before.tv_sec, th_before.tv_nsec,
th_after.tv_sec, th_after.tv_nsec, diff);
diff = me_after.tv_nsec - me_before.tv_nsec;
printf("self: before(%lu.%.9lu) after(%lu.%.9lu) diff(%lu)\n",
me_before.tv_sec, me_before.tv_nsec,
me_after.tv_sec, me_after.tv_nsec, diff);
return 0;
}
This is due to us using p->se.sum_exec_runtime in
thread_group_cputime() where we iterate the thread group and sum all
data. This does not take time since the last schedule operation (tick
or otherwise) into account. We can cure this by using
task_sched_runtime() at the cost of having to take locks.
This also means we can (and must) do away with
thread_group_sched_runtime() since the modified thread_group_cputime()
is now more accurate and would deadlock when called from
thread_group_sched_runtime().
Aside from that, it makes the function safe on 32-bit systems. The old
code added t->se.sum_exec_runtime unprotected; sum_exec_runtime is a
64-bit value and could be changed on another CPU at the same time.
Reported-by: David Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1314874459.7945.22.camel@twins
Tested-by: David Miller <davem@davemloft.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2c8fc86760 upstream.
Simon Kirby reported that on his RAID setup with idedisk underneath
the box OOMs after a couple of days of runtime. Running with
CONFIG_DEBUG_KMEMLEAK pointed to idedisk_prep_fn() which unconditionally
allocates an ide_cmd struct. However, ide_requeue_and_plug() can be
called more than once per request, either from the request issue or the
IRQ handler path, and each call to blk_peek_request() then ends up in
idedisk_prep_fn() again, allocating a struct ide_cmd every time and
"forgetting" the previous pointer.
Make sure the code reuses the old allocated chunk.
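A minimal sketch of the reuse pattern, assuming the prep function stashes its
ide_cmd in the request's private pointer (rq->special in that era's block
layer); this is a sketch of the idea, not the exact hunk:
static int idedisk_prep_fn(struct request_queue *q, struct request *rq)
{
	struct ide_cmd *cmd;

	if (rq->special)		/* already prepped on an earlier pass */
		return BLKPREP_OK;	/* reuse the old allocation */

	cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
	if (!cmd)
		return BLKPREP_DEFER;

	/* ... fill in the command ... */
	rq->special = cmd;
	return BLKPREP_OK;
}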
Reported-and-tested-by: Simon Kirby <sim@hostway.ca>
Link: http://marc.info/?l=linux-kernel&m=131667641517919
Link: http://lkml.kernel.org/r/20110922072643.GA27232@hostway.ca
Signed-off-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3be209a8e2 upstream.
Commit 43fa5460fe ("sched: Try not to
migrate higher priority RT tasks") also introduced a change in behavior
which keeps RT tasks on the same CPU if there is an equal priority RT
task currently running even if there are empty CPUs available.
This can cause unnecessary wakeup latencies, and can prevent the
scheduler from balancing all RT tasks across available CPUs.
This change causes an RT task to search for a new CPU if an equal
priority RT task is already running on wakeup. Lower priority tasks
will still have to wait on higher priority tasks, but the system should
still balance out because, when both a high and a low priority RT task sit
on a given CPU, the high priority task can wake up while the low priority
task is running and force it to search for a better runqueue.
Signed-off-by: Shawn Bohrer <sbohrer@rgmadvisors.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315837684-18733-1-git-send-email-sbohrer@rgmadvisors.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In the OF 'translations' property, the template TTEs in the mappings
never specify the executable bit. This is the case even though some
of these mappings are for OF's code segment.
Therefore, we need to force the execute bit on in every mapping.
This problem can only really trigger on Niagara/sun4v machines and the
history behind this is a little complicated.
Previous to sun4v, the sun4u TTE entries lacked a hardware execute
permission bit. So OF didn't have to ever worry about setting
anything to handle executable pages. Any valid TTE loaded into the
I-TLB would be respected by the chip.
But sun4v Niagara chips have a real hardware enforced executable bit
in their TTEs. So it has to be set or else the I-TLB throws an
instruction access exception with type code 6 (protection violation).
We've been extremely fortunate to not get bitten by this in the past.
As best I can tell, the OF's mappings for its executable code
were mapped using permanent locked mappings on sun4v in the past.
Therefore, the fact that we didn't have the exec bit set in the OF
translations we would use did not matter in practice.
Thanks to Greg Onufer for helping me track this down.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d9ad77ebfd upstream.
This reverts commit 18b4fada27.
This code was correct, apologies to anyone who noticed things broke.
revert contents are different due to another commit in between.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 777eb1bf15 upstream.
A kernel crash is observed when a mounted ext3/ext4 filesystem is
physically removed. The problem is that blk_cleanup_queue() frees up
some resources, e.g. by calling elevator_exit(), which are not checked for
in normal operation. So we should rather move these calls to the
destructor function blk_release_queue() as at that point all remaining
references are gone. However, in doing so we have to ensure that any
externally supplied queue_lock is disconnected as the driver might free
up the lock after the call to blk_cleanup_queue().
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c80c39d9a upstream.
If iwl_scan_initiate() fails for any reason,
priv->scan_request and priv->scan_vif are left
dangling. This can lead to a crash later when
iwl_bg_scan_completed() tries to run a pending
scan request.
In practice, this seems to be very rare due to
the STATUS_SCANNING check earlier. That check,
however, is wrong -- it should allow a scan to
be queued when a reset/roc scan is going on.
When a normal scan is already going on, a new
one can't be issued by mac80211, so that code
can be removed completely. I introduced this
bug when adding off-channel support in commit
266af4c745.
Reported-by: Peng Yan <peng.yan@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 65d0f19e58 upstream.
iwlegacy version of fix:
commit effd4d9aec
Author: Johannes Berg <johannes.berg@intel.com>
Date: Thu Sep 15 11:46:52 2011 -0700
iwlagn: do not use interruptible waits
Since the dawn of its time, iwlwifi has used
interruptible waits to wait for synchronous
commands and firmware loading.
This leads to "interesting" bugs, because it
can't actually handle the interruptions; for
example when a command sending is interrupted
it will assume the command completed fully,
and then leave it pending, which leads to all
kinds of trouble when the command finishes
later.
Since there's no easy way to gracefully deal
with interruptions, fix the driver to not use
interruptible waits.
This at least fixes the error
iwlagn 0000:02:00.0: Error: Response NULL in 'REPLY_SCAN_ABORT_CMD'
I have seen in P2P testing, but it is likely
that there are other errors caused by this.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2e2a41d6ca upstream.
iwlegacy version of fix:
commit 282cdb325a
Author: Johannes Berg <johannes.berg@intel.com>
Date: Mon Sep 12 12:09:10 2011 -0700
iwlagn: fix command queue timeout
If the command queue is constantly busy,
which can happen in P2P, the hangcheck
timer will frequently find a command in
it and will eventually reset the device
because nothing sets the timestamp for
this queue when commands are processed.
Fix this by setting the timestamp when
a command completes.
iwlegacy does not support P2P, but this patch fixes possible unneeded
hardware resets, hence it is needed.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e9f9530bb6 upstream.
During endurance testing, rx frames were not getting DMAd from the MAC
whereas the PCU rx frame counters were getting updated properly.
Based on input from the systems team, update the initval to fix the
rx DMA stuck issue.
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1b9ca0272f upstream.
An incorrect variable was used in validating the akm_suites array from
NL80211_ATTR_AKM_SUITES. In addition, there was no explicit
validation of the array length (we only have room for
NL80211_MAX_NR_AKM_SUITES).
This can result in a buffer write overflow for stack variables with
arbitrary data from user space. The nl80211 commands using the affected
functionality require GENL_ADMIN_PERM, so this is only exposed to admin
users.
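A small illustrative sketch of the missing length validation (names and the
suite limit below are placeholders, not the nl80211 code): reject the
attribute if it holds more entries than the fixed-size destination array
before copying anything:
#include <errno.h>
#include <stdint.h>
#include <string.h>

#define MAX_NR_AKM_SUITES 2	/* room available in the destination array */

static int copy_akm_suites(uint32_t *dst, const void *attr, size_t attr_len)
{
	size_t n = attr_len / sizeof(uint32_t);

	if (n > MAX_NR_AKM_SUITES)
		return -EINVAL;		/* would overflow the stack buffer */

	memcpy(dst, attr, n * sizeof(uint32_t));
	return (int)n;
}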
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 24926dadc4 upstream.
In an enclosure model where there are chaining expanders to a large body
of storage, it was discovered that libsas, responding to a broadcast
event change, would only revalidate the domain of first child expander
in the list.
The issue is that the pointer value to the discovered source device was
used to break out of the loop, rather than the content of the pointer.
This still remains non-compliant as the revalidate domain code is
supposed to loop through all child expanders, and not stop at the first
one it finds that reports a change count. However, the design of this
routine does not allow multiple device discoveries and that would be a
more complicated set of patches reserved for another day. We are fixing
the glaring bug rather than refactoring the code.
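The bug class is easiest to see in a small sketch (generic names, not the
libsas code): the address of the out-parameter is always non-NULL, so the
loop used to exit after the first child; the fix tests the pointer's content
instead:
struct device *src_dev = NULL;
int i;

for (i = 0; i < num_children; i++) {
	revalidate_child(children[i], &src_dev);

	/* buggy form: "if (&src_dev)" is always true, so the loop stopped
	 * after the first child expander */
	if (src_dev)	/* fixed: stop only once a device was actually found */
		break;
}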
Signed-off-by: Mark Salyzyn <msalyzyn@us.xyratex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 96067723e4 upstream.
Following reports on the list, it looks like the 3w-9xxx driver will leak DMA
mappings every time we get a transient queueing error back from the card.
This is because it maps the sg list in the routine that sends the command, but
doesn't unmap again in the transient failure path (even though the command is
sent back to the block layer). Fix by unmapping before returning the status.
Reported-by: Chris Boot <bootc@bootc.net>
Tested-by: Chris Boot <bootc@bootc.net>
Acked-by: Adam Radford <aradford@gmail.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4508378b95 upstream.
Commit 246e87a939 ("memcg: fix get_scan_count() for small targets")
fixes the memcg/kswapd behavior against small targets and prevents the
vmscan priority from getting too high.
But the implementation is too naive and adds another problem for small
memcgs. It always forces a scan of 32 pages of file/anon and doesn't handle
swappiness and other rotate info. It makes vmscan scan the anon LRU
regardless of swappiness and makes reclaim behave badly. This patch fixes it
by adjusting the scan count with regard to swappiness and the like.
In a test "cat a 1G file under a 300M limit" (swappiness=20):
before patch
scanned_pages_by_limit 360919
scanned_anon_pages_by_limit 180469
scanned_file_pages_by_limit 180450
rotated_pages_by_limit 31
rotated_anon_pages_by_limit 25
rotated_file_pages_by_limit 6
freed_pages_by_limit 180458
freed_anon_pages_by_limit 19
freed_file_pages_by_limit 180439
elapsed_ns_by_limit 429758872
after patch
scanned_pages_by_limit 180674
scanned_anon_pages_by_limit 24
scanned_file_pages_by_limit 180650
rotated_pages_by_limit 35
rotated_anon_pages_by_limit 24
rotated_file_pages_by_limit 11
freed_pages_by_limit 180634
freed_anon_pages_by_limit 0
freed_file_pages_by_limit 180634
elapsed_ns_by_limit 367119089
scanned_pages_by_system 0
the number of anon pages scanned decreases (as expected), and the elapsed
time is reduced. With this patch, small memcgs will work better.
(*) Because the amount of file cache is much bigger than anon,
reclaim_stat's rotate-scan counter causes files to be scanned more.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 61a6a108d1 upstream.
Before clearing the probing flag in the error exit path, check that the
chip pointer is not NULL.
Signed-off-by: Thomas Pfaff <tpfaff@gmx.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5fe6e0151d upstream.
When the headphone pin is assigned as primary output to line_out_pins[],
the automatic HP-pin assignment by ASSID must be suppressed. Otherwise
a wrong pin might be assigned to the headphone, breaking the auto-mute.
Reference: https://bugzilla.novell.com/show_bug.cgi?id=716104
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9058020cd9 upstream.
Currently the internal oscillator is powered down when entering BIAS_OFF
state, but not re-enabled when going back to BIAS_STANDBY. As a result the
CODEC will stop working after suspend if the internal oscillator is used to
generate the sysclock signal. This patch fixes it by clearing the appropriate
bit in the power down register when the CODEC is re-enabled.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 34c869855a upstream.
Attempting to change the McBSP CLKS source while another stream is active is
not safe after commit d135865 ("OMAP: McBSP: implement functional clock
switching via clock framework") in 2.6.37.
CLKS parent clock switching using the clock framework has to idle the McBSP
before switching and then activate it again. This short break can cause a
DMA transaction error in an already running stream, which halts and recovers
only by closing and restarting the stream.
This becomes fatal after commit e2fa61d ("OMAP3: l3: Introduce
l3-interconnect error handling driver") in 2.6.39, where the l3 driver
detects a severe timeout error and does BUG_ON().
Fix this by not changing any configuration in omap_mcbsp_dai_set_dai_sysclk
if the McBSP is already active. This test should have been here just from
the beginning anyway.
Signed-off-by: Jarkko Nikula <jarkko.nikula@bitmer.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit caca9510ff upstream.
In commit a144c6a6c9 ("PM: Print a warning if firmware is requested
when tasks are frozen") we not only printed a warning if somebody tried
to load the firmware when tasks are frozen - we also failed the load.
But that check was done before the check for built-in firmware, and then
when we disallowed usermode helpers during bootup (commit 288d5abec8:
"Boot up with usermodehelper disabled"), that actually means that
built-in modules can no longer load their firmware even if the firmware
is built in too. Which used to work, and some people depended on it for
the R100 driver.
So move the test for usermodehelper_is_disabled() down, to after
checking the built-in firmware.
This should fix:
https://bugzilla.kernel.org/show_bug.cgi?id=40952
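A minimal sketch of the new ordering; only usermodehelper_is_disabled() is
the real predicate mentioned above, the other helper names here are
hypothetical:
static int _request_firmware(const struct firmware **fw, const char *name)
{
	if (firmware_lookup_builtin(fw, name))	/* built-in blobs need no helper */
		return 0;

	if (usermodehelper_is_disabled()) {	/* moved below the built-in check */
		WARN(1, "firmware request while host is not available\n");
		return -EBUSY;
	}

	return firmware_load_via_helper(fw, name);	/* hypothetical helper */
}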
Reported-by: James Cloos <cloos@hjcloos.com>
Bisected-by: Elimar Riesebieter <riesebie@lxtec.de>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Rafael Wysocki <rjw@sisk.pl>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Lucas Villa Real <lucasvr@gobolinux.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit df77abcafc upstream.
The SMP implementation of __futex_atomic_op clobbers oldval with the
status flag from the exclusive store. This causes it to always read as
zero when performing the FUTEX_OP_CMP_* operation.
This patch updates the ARM __futex_atomic_op implementations to take a
tmp argument, allowing us to store the strex status flag without
overwriting the register containing oldval.
Reported-by: Minho Ban <mhban@samsung.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f630c1bdfb upstream.
This patch implements a workaround for erratum 764369 affecting
Cortex-A9 MPCore with two or more processors (all current revisions).
Under certain timing circumstances, a data cache line maintenance
operation by MVA targeting an Inner Shareable memory region may fail to
proceed up to either the Point of Coherency or to the Point of
Unification of the system. This workaround adds a DSB instruction before
the relevant cache maintenance functions and sets a specific bit in the
diagnostic control register of the SCU.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d8e89b47e0 upstream.
If the attempt to map a page for DMA fails (eg, because we're out of
mapping space) then we must not hold on to the page we allocated for
DMA - doing so will result in a memory leak.
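The fix follows the usual pattern sketched below (generic names, not the
exact driver code): on a mapping failure the freshly allocated page has to be
released before bailing out:
struct page *page;
dma_addr_t dma;

page = alloc_page(GFP_KERNEL);
if (!page)
	return -ENOMEM;

dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, dma)) {
	__free_page(page);	/* previously the page was leaked here */
	return -ENOMEM;
}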
Reported-by: Bryan Phillippe <bp@darkforest.org>
Tested-by: Bryan Phillippe <bp@darkforest.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b5a95fe7ef upstream.
Do not set io_req->sc_cmd to NULL until bnx2fc_unmap_sg_list() is called to
enable it to unmap the DMA mappings.
Signed-off-by: Bhanu Prakash Gollapudi <bprakash@broadcom.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
commit d36b3279e1 upstream.
Deleting NPIV port causes a kernel panic when the NPIV port is in the same zone
as the physical port and shares the same LUN. This happens due to the fact that
vport destroy and unsolicited ELS are scheduled to run on the same workqueue,
and vport destroy destroys the lport and the unsolicited ELS tries to access
the invalid lport. This patch fixes this issue by maintaining a list of valid
lports and verifying if the lport is valid or not before accessing it.
Signed-off-by: Bhanu Prakash Gollapudi <bprakash@broadcom.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7625eb2f2f upstream.
Based on earlier patch from Neil Horman <nhorman@tuxdriver.com>
If iSCSI is not supported on a bnx2 device, bnx2_cnic_probe() will
return NULL and the cnic device will not be visible to bnx2i. This
will prevent bnx2i from registering and then unregistering during
cnic_start() and cause the warning message:
bnx2 0003:01:00.1: eth1: Failed waiting for ULP up call to complete
Signed-off-by: Michael Chan <mchan@broadcom.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit db1d350fcb upstream.
During NETDEV_UP, we use symbol_get() to get the net driver's cnic
probe function. This sometimes doesn't work if NETDEV_UP happens
right after NETDEV_REGISTER and the net driver is still running module
init code. As a result, the cnic device may not be discovered. We
fix this by probing on all NETDEV events if the device's netif_running
state is up.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 101c40c8cb upstream.
During iSCSI connection terminations, if the target is also terminating
at about the same time, the firmware may not complete the driver's
request to close or reset the connection. This is fixed by handling
other events (instead of the expected completion event) as an indication
that the driver's request has been rejected.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9373665613 upstream.
We need to keep looping until cnic_get_kcqes() returns 0. cnic_get_kcqes()
returns a maximum of 64 entries. If there are more entries in the queue
and we don't loop back, the remaining entries may not be serviced for a
long time.
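Shaped as a sketch, the service loop described above keeps draining until a
fetch returns no entries; the call signature and surrounding names here are
schematic, not the exact driver code:
u32 n;

do {
	n = cnic_get_kcqes(dev, kcqes);	/* returns at most 64 entries */
	if (n)
		service_kcqes(dev, kcqes, n);
} while (n);	/* loop back until the queue is really empty */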
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c37279b92a upstream.
Commit 9676001559
("ALSA: fm801: add error handling if auto-detect fails") seems to
break systems that were previously working without a tuner.
As a bonus, this should fix init and cleanup for the case where the
tuner is explicitly disabled.
Reported-and-tested-by: Hor Jiun Shyong <jiunshyong@gmail.com>
References: http://bugs.debian.org/641946
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2ba34e43ba upstream.
Commit 9676001559
("ALSA: fm801: add error handling if auto-detect fails") added
incorrect error handling.
Once we have successfully called snd_device_new(), the cleanup
function fm801_free() will automatically be called by snd_card_free()
and we must *not* also call fm801_free() directly.
Reported-by: Hor Jiun Shyong <jiunshyong@gmail.com>
References: http://bugs.debian.org/641946
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fdfc61594e upstream.
DVOOutputControl checks the value of BIOS scratch reg 3
on some tables and assumes the encoder is already enabled
if the DFP2_ACTIVE bit is set. Clear that bit so the table
sets the DDIA enable bit properly.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 362e4e49ab upstream.
Support for the Terratec Aureon 5.1 USB sound card has been broken since
kernel 2.6.39.
2.6.39 introduced power management support for USB sound cards that added
a probing flag in struct snd_usb_audio.
During the probe of the card it gives the following error message:
usb 7-2: new full speed USB device number 2 using uhci_hcd
cannot find UAC_HEADER
snd-usb-audio: probe of 7-2:1.3 failed with error -5
input: USB Audio as
/devices/pci0000:00/0000:00:1d.1/usb7/7-2/7-2:1.3/input/input6
generic-usb 0003:0CCD:0028.0001: input: USB HID v1.00 Device [USB Audio]
on usb-0000:00:1d.1-2/input3
I cannot comment on that "cannot find UAC_HEADER" error, but until
2.6.38 the card worked anyway.
With 2.6.39 chip->probing remains 1 on error exit, and any later ioctl
stops in snd_usb_autoresume with -ENODEV.
Signed-off-by: Thomas Pfaff <tpfaff@gmx.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a7e6401e19 upstream.
ehci_bios_handoff() is marked __devinit, `ehci_dmi_nohandoff_table' should be
marked __devinitconst, not __initconst. This fixes the following section
mismatch:
WARNING: vmlinux.o(.devinit.text+0x4f08): Section mismatch in reference from the function ehci_bios_handoff() to the variable .init.rodata:ehci_dmi_nohandoff_table
The function __devinit ehci_bios_handoff() references a variable __initconst ehci_dmi_nohandoff_table.
If ehci_dmi_nohandoff_table is only used by ehci_bios_handoff then annotate ehci_dmi_nohandoff_table with a matching annotation.
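A sketch of the matching annotation (the table contents are elided):
/* referenced only from the __devinit ehci_bios_handoff(), so the table
 * must live in .devinit.rodata rather than .init.rodata */
static const struct dmi_system_id ehci_dmi_nohandoff_table[] __devinitconst = {
	{
		/* ... DMI board/vendor matches ... */
	},
	{ }	/* terminating entry */
};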
Cc: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Arnaud Lacombe <lacombar@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 74dcd0ec73 upstream.
Have libiscsi_tcp have upper layers allocate the LLD data
along with the iscsi_cls_conn struct, so it is refcounted.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2d20a26a92 upstream.
The checks for HCI_INQUIRY and HCI_MGMT were in the wrong order,
so that second scans always failed.
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2cab7a4c5c upstream.
This patch adds an additional SATA RAID controller DeviceID for the Intel Panther Point PCH.
Signed-off-by: Seth Heasley <seth.heasley@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 985af6f70d upstream.
Need the following workaround in the driver for interoperability with
the older Intel SSD drives and any other SATA drive that may exhibit the
same behavior. This is a corner case where SCU speed is limited to
either 3G or 1.5G and the drive has a period of DC idle when it switches
speed during SATA speed negotiation. Workaround: change PHYTOV[31:24]
from 0x36 to 0x3B.
Signed-off-by: Marcin Tomczak <marcin.tomczak@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0a96e9754d upstream.
PCI and SR-IOV Fixes
- Call pci_save_state after the pci_restore_state completes.
- After calling pci_enable_pcie_error_reporting() and checking the return
value for logging messages from rc, reset rc to 0 so it will not later be
interpreted as an error.
- Read PCI config space SR-IOV capability to get the number of VFs supported.
- Check the PF's supported number of VFs before invoking the PCI enable
SR-IOV API call, and log an error message that the user-requested number of
VFs is beyond the PF capability if such a request is passed in.
- Added check for Physical function with Virtual Functions attached. If so,
first disable all the VFs before proceeding to device reset.
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5248a7498e upstream.
Fabric and Target Discovery Fixes
- Clear FC_VPORT_NEEDS_INIT_VPI flag during completion of REG_VFI mailbox
command.
- Prevent SLI3 Code from unregistering the physical VPI.
- Add an else clause to the code that checks and sets
sp->cmn.request_multiple_Nport to clear the bit.
- Remove a redundant mbox free.
- Modified lpfc_sli4_async_fip_evt to pass the physical VPI to the
lpfc_find_vport_by_vpid function.
- Modified lpfc_find_vport_by_vpid to translate physical VPI to logical VPI
before comparing with vport VPI.
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7851fe2c7f upstream.
Adapter Interface fixes and changes
- Modify the macro field from lpfc_init_vpi_vpi to lpfc_init_vfi_vpi
- Add the new CQE_CODE_RECEIVE_V1 CQE Code, add code in the driver to handle
the new Code the same as the CQE_CODE_RECEIVE code except that there are
two new checks for this code that will cause the driver to use the new V1
macros for rq_id and fcf_id.
- Fix a bug in lpfc_prep_seq() where the size out of the first CQE was
ONLY being used, even though multiple dmabufs make up the sequence,
each have their own CQE with potentially different sizes.
- Fix bug in lpfc_bsg_ct_unsol_event() where the ulpContext and ulpWord[3]
fields of the XMIT_SEQUENCE64_CX IOCB were being calculated incorrectly.
- Do physical to logical translation before indexing into the active
XRI array.
- Populate physical vpi in the iocb data structure.
- Put the current accumulated total in each IOCB in the chain as we are
walking through them. The last IOCB in the chain should have the total
length of the sequence.
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 88a2cfbb8b upstream.
Miscellaneous Bug fixes and code cleanup
- Fix 16G link speed reporting by adding a check for 16G.
- Change the check and enforcement of MAILBOX_EXT_SIZE (2048B)
to the check and enforcement of BSG_MBOX_SIZE - sizeof(MAILBOX_t) (3840B).
- Instead of waiting for a fixed amount of time after performing firmware
reset, the driver shall wait for the Lancer SLIPORT_STATUS register for the
readiness of the firmware for bring up.
- Add logging to indicate when dynamic parameters are changed.
- Add revision and date to the firmware image format.
- Use revision instead of rev_name to check firmware image version.
- Update temporary offset after memcopy is complete for firmware update.
- Consolidated the use of the macros to get rid of duplicated register
offset definitions.
- Removed the unused second parameter in routine lpfc_bsg_diag_mode_enter()
- Enable debugfs when debugfs is enabled.
- Update function comments for lpfc_sli4_alloc_xri and lpfc_sli4_init_rpi_hdrs.
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7c56b9fd3b upstream.
T10 DIF Fixes
- Fix the case where the SCSI Host supplies the CRC and driver to controller
protection is on.
- Only support T10 DIF type 1. LBA always goes in ref tag and app tag is not
checked.
- Change the format of the sense data passed up to the SCSI layer to match the
Descriptor Format Sense Data found in SPC-4 sections 4.5.2.1 and 4.5.2.2.
- Fix Slip PDE implementation.
- Remove BUG() in the else case in lpfc_sc_to_bg_opcodes.
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3321c07ae5 upstream.
Since the buffer might contain security related data it might be a good idea to
zero the buffer after we have copied it to userspace.
This got assigned CVE-2011-1162.
Signed-off-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6b07d30aca upstream.
This patch changes the call of tpm_transmit by supplying the size of the
userspace buffer instead of TPM_BUFSIZE.
This got assigned CVE-2011-1161.
[The first hunk didn't make sense given one could expect
way less data than TPM_BUFSIZE, so added tpm_transmit boundary
check over bufsiz instead
The last parameter of tpm_transmit() reflects the amount
of data expected from the device, and not the buffer size
being supplied to it. It isn't ideal to parse it directly,
so we just set it to the maximum the input buffer can handle
and let the userspace API do that job.]
Signed-off-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a7f4d00a82 upstream.
As the Amiga Zorro II address space is limited to 8.5 MiB and Zorro
devices can contain only one BAR, several Amiga Zorro II expansion
boards (mainly graphics cards) contain multiple Zorro devices: a small
one for the control registers and one (or more) for the graphics memory.
The conversion of cirrusfb to the new driver framework introduced a
regression: the driver contains a zorro_driver for the first Zorro
device, and uses the (old) zorro_find_device() call to find the second
Zorro device.
However, as the Zorro core calls device_register() as soon as a Zorro
device is identified, it may not have identified the second Zorro device
belonging to the same physical Zorro expansion card. Hence cirrusfb
could no longer find the second part of the Picasso II graphics card,
causing a NULL pointer dereference.
Defer the registration of Zorro devices with the driver framework until
all Zorro devices have been identified to fix this.
Note that the alternative solution (modifying cirrusfb to register a
zorro_driver for all Zorro devices belonging to a graphics card, instead
of only for the first one, and adding a synchronization mechanism to
defer initialization until all have been found), is not an option, as on
some cards one device may be optional (e.g. the second bank of 2 MiB of
graphics memory on the Picasso IV in Zorro II mode).
Reported-by: Ingo Jürgensmann <ij@2011.bluespice.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 22df13319d ]
br_multicast_ipv6_rcv() can call pskb_trim_rcsum() and therefore skb
head can be reallocated.
Cache icmp6_type field instead of dereferencing twice the struct
icmp6hdr pointer.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 4b275d7efa ]
The checksum of ICMPv6 is not properly computed because the pseudo header is not used.
Thus, the MLD packet gets dropped by the bridge.
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Reported-by: Ang Way Chuang <wcang@sfc.wide.ad.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit bcf66bf54a ]
When asynchronous crypto algorithms are used, there might be many
packets that passed the xfrm replay check, but the replay advance
function is not called yet for these packets. So the replay check
function would accept a replay of all of these packets. Also the
system might crash if there are more packets in async processing
than the size of the anti replay window, because the replay advance
function would try to update the replay window beyond the bounds.
This patch adds a second replay check after resuming from the async
processing to fix these issues.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit c5114cd59d ]
It's after all necessary to reset the headers here. The reason is we
cannot depend on them getting reset in __netif_receive_skb once the skb is
reinjected. For incoming vlan ids without a vlan_dev, vlan_do_receive()
returns false with skb != NULL and __netif_receive_skb continues, so the skb
is not reinjected.
This might be good material for 3.0-stable as well
Reported-by: Mike Auty <mike.auty@gmail.com>
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit f0e3d0689d ]
Using gcc 4.4.3, warnings are emitted for a possibly uninitialized use
of ecn_ok.
This can happen if cookie_check_timestamp() returns due to not having
seen a timestamp. Defaulting to ecn off seems like a reasonable thing
to do in this case, so initialize ecn_ok to false.
Signed-off-by: Mike Waychison <mikew@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit f779b2d60a ]
D-SACK is allowed to reside below snd_una. But the corresponding check
in tcp_is_sackblock_valid() is the exact opposite. It looks like a typo.
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 86c432ca5d ]
This reverts commits 65f0b417de,
d88d6b05fe,
fcfa060468,
747df2258b and
867955f568.
Depending on the processor model, write-combining may result in
reordering that the NIC will not tolerate. This typically results
in a DMA error event and reset by the driver, logged as:
sfc 0000:0e:00.0: eth2: TX DMA Q reports TX_EV_PKT_ERR.
sfc 0000:0e:00.0: eth2: resetting (ALL)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 3557619f0f ]
commit 07bd8df5df
(sch_sfq: fix peek() implementation) changed sfq to use generic
peek helper.
This makes HFSC complain about a non-work-conserving child qdisc, if
prio with sfq child is used within hfsc:
hfsc peeks into prio qdisc, which will then peek into sfq.
returned skb is stashed in sch->gso_skb.
Next, hfsc tries to dequeue from prio, but prio will call sfq dequeue
directly, which may return NULL instead of previously peeked-at skb.
Have prio call qdisc_dequeue_peeked, so sfq->dequeue() is
not called in this case.
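A sketch of the resulting dequeue shape in prio (schematic, not the exact
function): using the peek-aware helper returns an skb that an earlier
->peek() stashed in the child instead of skipping it:
static struct sk_buff *prio_dequeue(struct Qdisc *sch)
{
	struct prio_sched_data *q = qdisc_priv(sch);
	int prio;

	for (prio = 0; prio < q->bands; prio++) {
		struct Qdisc *qdisc = q->queues[prio];
		/* was: qdisc->dequeue(qdisc), which bypassed a peeked skb */
		struct sk_buff *skb = qdisc_dequeue_peeked(qdisc);
		if (skb) {
			qdisc_bstats_update(sch, skb);
			sch->q.qlen--;
			return skb;
		}
	}
	return NULL;
}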
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 797fd3913a ]
TCP in some cases uses different global (raw) socket
to send RST and ACK. The transparent flag is not set there.
Currently, it is a problem for rerouting after the previous
change.
Fix it by simplifying the checks in ip_route_me_harder
and use FLOWI_FLAG_ANYSRC even for sockets. It looks safe
because the initial routing allowed this source address to
be used and now we just have to make sure the packet is rerouted.
As a side effect this also allows rerouting for normal
raw sockets that use spoofed source addresses which was not possible
even before we eliminated the ip_route_input call.
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit e05c4ad3ed ]
Should check use count of include mode filter instead of total number
of include mode filters.
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 98e77438ae ]
IPV6_2292PKTOPTIONS is broken for 32-bit applications running
in COMPAT mode on 64-bit kernels.
The same problem was fixed for IPv4 with the patch:
ipv4: Fix ip_getsockopt for IP_PKTOPTIONS,
commit dd23198e58
Signed-off-by: Sorin Dumitru <sdumitru@ixiacom.com>
Signed-off-by: Daniel Baluta <dbaluta@ixiacom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 97a8041020 ]
As rt_iif represents input device even for packets
coming from loopback with output route, it is not an unique
key specific to input routes. Now rt_route_iif has such role,
it was fl.iif in 2.6.38, so better to change the checks at
some places to save CPU cycles and to restore 2.6.38 semantics.
compare_keys:
- input routes: only rt_route_iif matters, rt_iif is same
- output routes: only rt_oif matters, rt_iif is not
used for matching in __ip_route_output_key
- now we are back to 2.6.38 state
ip_route_input_common:
- matching rt_route_iif implies input route
- compared to 2.6.38 we eliminated one rth->fl.oif check
because it was not needed even for 2.6.38
compare_hash_inputs:
Only the change here is not an optimization, it has
effect only for output routes. I assume I'm restoring
the original intention to ignore oif, it was using fl.iif
- now we are back to 2.6.38 state
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 561dac2d41 ]
Adding a new fib rule can cause a BUG_ON to trigger.
The shell commands to reproduce it are:
ip rule add pref 38
ip rule add pref 38
ip rule add to 192.168.3.0/24 goto 38
ip rule del pref 38
ip rule add to 192.168.3.0/24 goto 38
ip rule add pref 38
then the BUG_ON will trigger.
Delete the BUG_ON and use (ctarget == NULL) to identify whether the rule is
unresolved.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7821578caa upstream.
The driver should not make the shutdown call from _scsih_remove; otherwise
the SCSI midlayer can be deadlocked when devices are removed from the
driver's pci_driver->shutdown handler.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1ff9918b62 upstream.
Problem: When the initiator sends a write command to the target, the target
tries to assign a new sequence. It always allocates the new exchange ID
(RX_ID) from the non-offloaded pool (non-offload EMA).
Fix: Enhanced the fcoe_oem_match routine to look at the F_CTL flags; if it
is the exchange responder and the command type is WRITEDATA, the function
returns TRUE instead of FALSE. This function is used to determine which pool
to use (the offload pool of exchanges is used only if this function returns
TRUE).
Technical Notes: N/A
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 480584818a upstream.
Problem: The existing RPORT state machine continues with the FLOGI/PLOGI
process only after it receives a beacon from the other end. Once the claiming
stage is over (either claim notify or claim response), a beacon is sent and
the state machine enters operational mode, where it initiates the rlogin
process (FLOGI/PLOGI) to the peer. But before this rlogin is initiated, the
existing implementation checks whether it has received a beacon from the
other end; if the beacon has not been received yet, the rlogin process is not
initiated. The other end initiates FLOGI but the peer end keeps rejecting
FLOGI, hence after 3 retries the other end deletes the associated rport and
then sends a beacon. Once the beacon is received, the peer end now initiates
rlogin to the other end, but since the associated rport is deleted, FLOGI is
neither accepted nor is a reject response sent out. Hence it is unable to
proceed with the FLOGI/PLOGI process and fails to establish the VN2VN
connection.
Fix: The VN2VN spec is not standard yet, but based on existing collateral on
T11 it appears that both ends shall send a beacon and enter 'operational
mode' without explicitly waiting for a beacon from the other end. The fix is
to allow the RPORT login process as long as the respective RPORT is created
(as part of claim notification / claim response) even though the state of the
RPORT is INIT. That is, don't wait for a beacon from the peer end if the peer
end initiates FLOGI (meaning the peer end exists and is responding).
Notes: This patch prepares the FCoE stack for targets with respect to
offload. It is a generic patch and is harmless even if applied on a storage
initiator, because the 'else if' condition of the function 'fcoe_oem_found'
shall evaluate to TRUE only for targets.
Dependencies: None
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 36c35416a9 upstream.
Fix a misusage of the struct usb_cdc_notification to pass arguments to the
usb_control_msg function. The usb_control_msg function expects host endian
arguments but usb_cdc_notification stores these values as little endian.
Now usb_control_msg is directly invoked with host endian values.
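For illustration, this is the general shape of such a call with host-endian
wValue/wIndex arguments (the request constants and variables below are
placeholders, not necessarily what this driver uses); usb_control_msg()
itself performs the little-endian conversion when it builds the setup packet:
ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
		      USB_CDC_SET_ETHERNET_PACKET_FILTER,	/* bRequest */
		      USB_TYPE_CLASS | USB_RECIP_INTERFACE,	/* bRequestType */
		      filter,		/* wValue, passed in host endian */
		      intf_num,		/* wIndex, passed in host endian */
		      NULL, 0, USB_CTRL_SET_TIMEOUT);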
Signed-off-by: Giuseppe Scrivano <giuseppe@southpole.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c42a4e845 upstream.
Add another variant of the Pegatron tablet used by Ordissimo, and
apparently RM Slate 100, to the list of models that should skip the
negotiation for the handoff of the EHCI controller.
Signed-off-by: Anisse Astier <anisse@astier.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 03c7536218 upstream.
In commit 3610ea5397 (ehci: workaround for pci
quirk timeout on ExoPC), a workaround was added to skip the negotiation for
the handoff of the EHCI controller.
Refactor the DMI detection code to use standard dmi_check_system function.
Signed-off-by: Anisse Astier <anisse@astier.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3aa1cdf87c upstream.
This patch fixes interrupt selftest failures for recent devices (57765,
5717, 5718, 5719, 5720) by disabling MSI one-shot mode and applying the
status tag workaround to the selftest code.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b02f0c2ea2 upstream.
The following race can occur with qdio devices that use the shared device
state change indicator (DSCI), with the thinint handler and the inbound
tasklet running on different CPUs:
1. Device: DSCI 0 => 1, INT pending
2. Thinint handler:
   * si_used = 1
   * Inbound tasklet_schedule
   * DSCI 1 => 0
3. Device: DSCI 0 => 1, INT pending
4. Thinint handler:
   * si_used = 1
   * Inbound tasklet_schedule => NOP
5. Inbound tasklet run
6. Device: DSCI = 1, INT suppressed
7. DSCI 1 => 0
The race would lead to a stall where new data in the input queue is
not recognized so the device stops working in case of no further traffic.
Fix the race by resetting the DSCI before scheduling the inbound tasklet
so the device generates an interrupt if new data arrives in the above
scenario in step 6.
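A sketch of the reordering in the thinint handler; the helper and member
names are generic, not the exact qdio symbols:
static void thinint_handler(struct qdio_irq *irq_ptr)
{
	/* reset the shared indicator first ... */
	clear_shared_dsci(irq_ptr);

	/* ... so that data arriving from here on raises a fresh interrupt,
	 * even if the tasklet below is already running on another CPU */
	tasklet_schedule(&irq_ptr->inbound_tasklet);
}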
Reviewed-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94c3dcbb0b upstream.
Explicitly update .dirtied_when on synced inodes, so that they are no
longer considered for writeback in the next round.
It can prevent both of the following livelock schemes:
- while true; do echo data >> f; done
- while true; do touch f; done (in theory)
The exact livelock condition is, during sync(1):
(1) no new inodes are dirtied
(2) an inode being actively dirtied
On (2), the inode will be tagged and synced with .nr_to_write=LONG_MAX.
When finished, it will be redirty_tail()ed because it's still dirty
and (.nr_to_write > 0). redirty_tail() won't update its ->dirtied_when
on condition (1). The sync work will then revisit it on the next
queue_io() and find it eligible again because its old ->dirtied_when
predates the sync work start time.
We'll do more aggressive "keep writeback as long as we wrote something"
logic in wb_writeback(). The "use LONG_MAX .nr_to_write" trick in commit
b9543dac5b ("writeback: avoid livelocking WB_SYNC_ALL writeback") will
no longer be enough to stop sync livelock.
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6e6938b6d3 upstream.
sync(2) is performed in two stages: the WB_SYNC_NONE sync and the
WB_SYNC_ALL sync. Identify the first stage with .tagged_writepages and
do livelock prevention for it, too.
Jan's commit f446daaea9 ("mm: implement writeback livelock avoidance
using page tagging") is a partial fix in that it only fixed the
WB_SYNC_ALL phase livelock.
Although ext4 is tested to no longer livelock with commit f446daaea9,
it may be due to some "redirty_tail() after pages_skipped" effect, which
is by no means a guarantee for _all_ file systems.
Note that writeback_inodes_sb() is not called only by sync(); all callers are
treated the same because the others also need livelock prevention.
Impact: It changes the order in which pages/inodes are synced to disk.
Now in the WB_SYNC_NONE stage, it won't proceed to write the next inode
until finished with the current inode.
Acked-by: Jan Kara <jack@suse.cz>
CC: Dave Chinner <david@fromorbit.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d40dcdb017 upstream.
We return ENOMEM from mqueue_get_inode even when we have enough memory.
Namely in case the system rlimit of mqueue was reached. This error
propagates to mq_open() and the user sees the error unexpectedly. So fix
this up to properly return EMFILE as described in the manpage:
EMFILE The process already has the maximum number of files and
message queues open.
instead of:
ENOMEM Insufficient memory.
With the previous patch we just switch to ERR_PTR/PTR_ERR/IS_ERR error
handling here.
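A schematic sketch of the resulting error reporting; the over_limit predicate
below stands in for the real mqueue accounting and is not the patch's code:
#include <linux/fs.h>
#include <linux/err.h>

static struct inode *mq_inode_sketch(struct super_block *sb, bool over_limit)
{
	struct inode *inode = new_inode(sb);

	if (!inode)
		return ERR_PTR(-ENOMEM);	/* genuine allocation failure */
	if (over_limit) {
		iput(inode);
		return ERR_PTR(-EMFILE);	/* limit reached, per the manpage */
	}
	return inode;
}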
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 04715206c0 upstream.
If new_inode fails to allocate an inode we only need to return
NULL. But now we test the opposite and have all the work in a nested
block. So do the opposite to save one indentation level (and remove
unnecessary line breaks).
This is only a preparation/cleanup for the next patch where we fix up
return values from mqueue_get_inode.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9e975cc291 upstream.
Commit f2096f94b5, entitled
"tg3: Add 5720 H2BMC support", needed to add code to preserve some bits
set by firmware. Unfortunately the new code causes throughput to stop
after a chip reset because it enables state machines before they are
ready. This patch undoes the problematic code. The bits will be
restored later in the init sequence.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 03adb5f912 upstream.
iscsi_sw_tcp_conn_restore_callbacks could have set
the sk_user_data field to NULL then iscsi_sw_tcp_data_ready
could read that and try to access the NULL pointer. This
adds some checks for NULL sk_user_data in the sk
callback functions and it uses the sk_callback_lock to
set/get that sk_user_data field.
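A simplified sketch of the guarded-callback pattern described above; the
connection type is reduced to a void pointer and this is not the driver's
actual code:
#include <net/sock.h>

static void my_data_ready(struct sock *sk, int bytes)
{
	void *conn;

	read_lock(&sk->sk_callback_lock);
	conn = sk->sk_user_data;
	if (!conn) {				/* connection already torn down */
		read_unlock(&sk->sk_callback_lock);
		return;
	}
	/* process incoming data for 'conn' */
	read_unlock(&sk->sk_callback_lock);
}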
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d11bb4462c upstream.
The bug is we're not able to remove the device from blkio cgroup's
per-device control files if it gets unplugged.
To reproduce the bug:
# mount -t cgroup -o blkio xxx /cgroup
# cd /cgroup
# echo "8:0 1000" > blkio.throttle.read_bps_device
# unplug the device
# cat blkio.throttle.read_bps_device
8:0 1000
# echo "8:0 0" > blkio.throttle.read_bps_device
-bash: echo: write error: No such device
After patching, the device removal will succeed.
Thanks for the comments of Paul, Zefan, and Vivek.
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Paul Menage <paul@paulmenage.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2249b01143 upstream.
This patch reverts commit 9b76883284 which
was introduced in 2.6.38-rc1. It works around a problem where the iwlagn
driver stimulates a bug crashing (requiring power cycle to recover) some
APs under heavy traffic.
Signed-off-by: Don Fry <donald.h.fry@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit daabead1c3 upstream.
The eeprom data is stored in little-endian order in the rt2x00 library.
As it was converted to cpu order in the read routines, the data needs to
be converted to LE on a big-endian platform.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aa3d7eef39 upstream.
During the association, the regulatory settings are updated by the country IE,
which reaps the previously found beacons. The impact is that
after a STA disconnects, *or* when a regulatory domain change happens
for any reason, the beacon hint flag is not cleared,
therefore preventing future beacon hints from being learned.
This is important as a regulatory domain change or a restore
of regulatory settings would set back the passive scan and no-ibss
flags on the channel. This is the right place to do this given that
it covers any regulatory domain change.
Reviewed-by: Luis R. Rodriguez <mcgrof@gmail.com>
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Acked-by: Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e3b73c4a25 upstream.
The patch "xen: use maximum reservation to limit amount of usable RAM"
(d312ae878b) breaks machines that
do not use 'dom0_mem=' argument with:
reserve RAM buffer: 000000133f2e2000 - 000000133fffffff
(XEN) mm.c:4976:d0 Global bit is set to kernel page fffff8117e
(XEN) domain_crash_sync called from entry.S
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
...
The reason being that the last E820 entry is created using the
'extra_pages' (which is based on how many pages have been freed).
The mentioned git commit sets the initial value of 'extra_pages'
using a hypercall which returns the number of pages (if dom0_mem
has been used) or -1 otherwise. If the latter, we return with
MAX_DOMAIN_PAGES as the basis for calculation:
return min(max_pages, MAX_DOMAIN_PAGES);
and use it:
extra_limit = xen_get_max_pages();
if (extra_limit >= max_pfn)
extra_pages = extra_limit - max_pfn;
else
extra_pages = 0;
which means we end up with extra_pages = 128GB in PFNs (33554432)
- 8GB in PFNs (2097152, on this specific box, can be larger or smaller),
and then we add that value to the E820 making it:
Xen: 00000000ff000000 - 0000000100000000 (reserved)
Xen: 0000000100000000 - 000000133f2e2000 (usable)
which is clearly wrong. It should look as so:
Xen: 00000000ff000000 - 0000000100000000 (reserved)
Xen: 0000000100000000 - 000000027fbda000 (usable)
Naturally this problem does not present itself if dom0_mem=max:X
is used.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d312ae878b upstream.
Use the domain's maximum reservation to limit the amount of extra RAM
for the memory balloon. This reduces the size of the pages tables and
the amount of reserved low memory (which defaults to about 1/32 of the
total RAM).
On a system with 8 GiB of RAM with the domain limited to 1 GiB the
kernel reports:
Before:
Memory: 627792k/4472000k available
After:
Memory: 549740k/11132224k available
An increase of about 76 MiB (~1.5% of the unused 7 GiB). The reserved
low memory is also reduced from 253 MiB to 32 MiB. The total
additional usable RAM is 329 MiB.
For dom0, this requires a patch to Xen ('x86: use 'dom0_mem' to limit
the number of pages for dom0') (c/s 23790)
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 32ef43848f upstream.
This is modeled after the smaps code.
It detects transparent hugepages and then does a single gather_stats()
for the page as a whole. This has two benefits:
1. It is more efficient since it does many pages in a single shot.
2. It does not have to break down the huge page.
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Hugh Dickins <hughd@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3200a8aaab upstream.
gather_pte_stats() does a number of checks on a target page
to see whether it should even be considered for statistics.
This breaks that code out in to a separate function so that
we can use it in the transparent hugepage case in the next
patch.
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Christoph Lameter <cl@gentwo.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eb4866d006 upstream.
We need to teach the numa_maps code about transparent huge pages. The
first step is to teach gather_stats() that the pte it is dealing with
might represent more than one page.
Note that we will use this in a moment for transparent huge pages since
they use a single pmd_t which _acts_ as a "surrogate" for a bunch
of smaller pte_t's.
I'm a _bit_ unhappy that this interface counts in hugetlbfs page sizes
for hugetlbfs pages and PAGE_SIZE for normal pages. That means that to
figure out how many _bytes_ "dirty=1" means, you must first know the
hugetlbfs page size. That's easier said than done especially if you
don't have visibility in to the mount.
But, that's probably a discussion for another day especially since it
would change behavior to fix it. But, just in case anyone wonders why
this patch only passes a '1' in the hugetlb case...
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Hugh Dickins <hughd@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d331eb51e4 upstream.
Using gcc 4.4.5 on a Powerbook G4 with a PPC cpu, a complicated
if statement results in incorrect flow, whereas the equivalent switch
statement works correctly.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9c1f8594df upstream.
xz_dec_run() could incorrectly return XZ_BUF_ERROR if all of the
following was true:
- The caller knows how many bytes of output to expect and only provides
that much output space.
- When the last output bytes are decoded, the caller-provided input
buffer ends right before the LZMA2 end of payload marker. So LZMA2
won't provide more output anymore, but it won't know it yet and thus
won't return XZ_STREAM_END yet.
- A BCJ filter is in use and it hasn't left any unfiltered bytes in the
temp buffer. This can happen with any BCJ filter, but in practice
it's more likely with filters other than the x86 BCJ.
This fixes <https://bugzilla.redhat.com/show_bug.cgi?id=735408> where
Squashfs thinks that a valid file system is corrupt.
This also fixes a similar bug in single-call mode where the uncompressed
size of a block using BCJ + LZMA2 was 0 bytes and caller provided no
output space. Many empty .xz files don't contain any blocks and thus
don't trigger this bug.
This also tweaks a closely related detail: xz_dec_bcj_run() could call
xz_dec_lzma2_run() to decode into temp buffer when it was known to be
useless. This was harmless although it wasted a minuscule number of CPU
cycles.
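To make the scenario concrete, a minimal single-call decode sketch against the
kernel's xz_dec API (buffer handling and error mapping are illustrative, not
taken from any caller):
#include <linux/xz.h>
#include <linux/errno.h>

static int decode_exact(const u8 *in, size_t in_size, u8 *out, size_t out_size)
{
	struct xz_dec *s = xz_dec_init(XZ_SINGLE, 0);
	struct xz_buf b = {
		.in = in,   .in_pos = 0,  .in_size = in_size,
		.out = out, .out_pos = 0, .out_size = out_size,
	};
	enum xz_ret ret;

	if (!s)
		return -ENOMEM;
	ret = xz_dec_run(s, &b);
	xz_dec_end(s);

	/* before the fix, a stream ending exactly at the output limit could
	 * come back as XZ_BUF_ERROR instead of XZ_STREAM_END */
	return ret == XZ_STREAM_END ? 0 : -EINVAL;
}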
Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c4867f646 upstream.
When no floppy is found the module code can be released while a timer
function is pending or about to be executed.
CPU0 CPU1
floppy_init()
timer_softirq()
spin_lock_irq(&base->lock);
detach_timer();
spin_unlock_irq(&base->lock);
-> Interrupt
del_timer();
return -ENODEV;
module_cleanup();
<- EOI
call_timer_fn();
OOPS
Use del_timer_sync() to prevent this.
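The difference, as a sketch: del_timer_sync() also waits for a handler that is
already running on another CPU, so the module text can safely go away
afterwards.
#include <linux/timer.h>

static void teardown(struct timer_list *t)
{
	del_timer_sync(t);	/* waits for a running callback to finish */
	/* only now is it safe to free module code/data */
}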
Signed-off-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c9c7fa0064 upstream.
Both these options are started with "rw" - that's why the first one
isn't switched on even if it is specified. Fix this by adding a length
check to the "rw" option check.
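Illustrative only (not the cifs parser itself): match "rw" exactly instead of
any token that merely begins with "rw":
#include <linux/string.h>
#include <linux/types.h>

static bool is_rw_option(const char *data)
{
	return strncmp(data, "rw", 2) == 0 && data[2] == '\0';
}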
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9438fabb73 upstream.
The name_len variable in CIFSFindNext is a signed int that gets set to
the resume_name_len in the cifs_search_info. The resume_name_len however
is unsigned and for some infolevels is populated directly from a 32 bit
value sent by the server.
If the server sends a very large value for this, then that value could
look negative when converted to a signed int. That would make that
value pass the PATH_MAX check later in CIFSFindNext. The name_len would
then be used as a length value for a memcpy. It would then be treated
as unsigned again, and the memcpy scribbles over a ton of memory.
Fix this by making the name_len an unsigned value in CIFSFindNext.
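A demonstration of the hazard (not the CIFS code itself): a huge unsigned
length stored in a signed int looks negative, slips past the PATH_MAX check,
then acts huge again in memcpy():
#include <linux/string.h>
#include <linux/limits.h>
#include <linux/errno.h>

static int copy_name(char *dst, const char *src, unsigned int from_server)
{
	int name_len = from_server;	/* e.g. 0xffff0000 becomes negative */

	if (name_len > PATH_MAX)	/* a negative value passes this check */
		return -EINVAL;
	memcpy(dst, src, name_len);	/* implicit size_t cast: huge copy */
	return 0;
}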
Reported-by: Darren Lavender <dcl@hppine99.gbr.hp.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8974bd51a7 upstream.
When the system has only the headphone and the line-out jacks without
speakers, the current auto-mute code doesn't work. It's because the
spec->automute_lines flag is wrongly referred to in update_speakers().
This flag must be meaningless when spec->automute_hp_lo isn't set, thus
they should be always coupled.
The patch fixes the problem and adds a comment to briefly indicate the
relationship.
BugLink: http://bugs.launchpad.net/bugs/851697
Reported-by: David Henningsson <david.henningsson@canonical.com>
Tested-By: Jayne Han <jayne.han@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 282cdb325a upstream.
If the command queue is constantly busy,
which can happen in P2P, the hangcheck
timer will frequently find a command in
it and will eventually reset the device
because nothing sets the timestamp for
this queue when commands are processed.
Fix this by setting the timestamp when
a command completes.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 44f4c3ed60 upstream.
Sometimes, when a USB 3.0 device is disconnected, the Intel Panther
Point xHCI host controller will report a link state change with the
state set to "SS.Inactive". This causes the xHCI host controller to
issue a warm port reset, which doesn't finish before the USB core times
out while waiting for it to complete.
When the warm port reset does complete, and the xHC gives back a port
status change event, the xHCI driver kicks khubd. However, it fails to
set the bit indicating there is a change event for that port because the
logic in xhci-hub.c doesn't check for the warm port reset bit.
After that, the warm port status change bit is never cleared by the USB
core, and the xHC stops reporting port status change bits. (The xHCI
spec says it shouldn't report more port events until all change bits are
cleared.) This means any port changes when a new device is connected
will never be reported, and the port will seem "dead" until the xHCI
driver is unloaded and reloaded, or the computer is rebooted. Fix this
by making the xHCI driver set the port change bit when a warm port reset
change bit is set.
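Sketch of the GetPortStatus change-bit logic after the fix. PORT_RC and
PORT_WRC mirror the xHCI PORTSC reset-change and warm-reset-change bits (the
defines below are for illustration only; the driver has its own):
#include <linux/types.h>
#include <linux/usb/ch11.h>

#define PORT_RC		(1 << 21)	/* port reset change */
#define PORT_WRC	(1 << 19)	/* warm port reset change */

static u32 port_change_bits(u32 portsc)
{
	u32 status = 0;

	if (portsc & PORT_RC)
		status |= USB_PORT_STAT_C_RESET << 16;
	if (portsc & PORT_WRC)		/* honoured after this fix */
		status |= USB_PORT_STAT_C_BH_RESET << 16;
	return status;
}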
A better solution would be to make the USB core handle warm port reset
differently, merging the current code with the standard port reset
code that does an incremental backoff on the timeout, and tries to
complete the port reset two more times before giving up. That more
complicated fix will be merged next window, and this fix will be
backported to stable.
This should be backported to kernels as old as 3.0, since that was the
first kernel with commit a11496ebf3 ("xHCI: warm reset support").
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 003cefe0c2 upstream.
The BO blit code inconsistently handled the page size. This wasn't
an issue on system with 4k pages since the GPU's page size is 4k as
well. Switch the driver blit callbacks to take num pages in GPU
page units.
Fixes lemote mipsel systems using AMD rs780/rs880 chipsets.
v2: incorporate suggestions from Michel.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f39aa30d77 upstream.
This fixes https://bugs.launchpad.net/ubuntu/+source/linux/+bug/801719 .
An O2Micro PCI Express FireWire controller,
"FireWire (IEEE 1394) [0c00]: O2 Micro, Inc. Device [1217:11f7] (rev 05)"
which is a combination device together with an SDHCI controller and some
sort of storage controller, misses SBP-2 status writes from an attached
FireWire HDD. This problem goes away if MSI is disabled for this
FireWire controller.
The device reportedly does not require QUIRK_CYCLE_TIMER.
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 91aae1e5c4 upstream.
Commit b9367bf3ee (net: ibmveth: convert to hw_features) reversed
a check in ibmveth_set_csum_offload that results in checksum offload
never being enabled.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b93da27f52 upstream.
descs[].fields.address is 32 bits, which truncates any dma mapping
error value, so dma_mapping_error() fails to catch it.
Use a dma_addr_t to do the comparison. With this patch I was able
to transfer many gigabytes of data with IOMMU fault injection set
at 10% probability.
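Sketch of the fix's idea: keep the full dma_addr_t for the error check; only
afterwards do the (possibly truncated) low bits go into the 32-bit descriptor
field:
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int map_buf(struct device *dev, void *buf, size_t len, u32 *desc_addr)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))	/* checked before truncation */
		return -EIO;
	*desc_addr = (u32)addr;
	return 0;
}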
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 33a48ab105 upstream.
Commit 6e8ab30ec6 (ibmveth: Add scatter-gather support) introduced a
DMA mapping API inconsistency resulting in dma_unmap_page getting
called on memory mapped via dma_map_single. This was seen when
CONFIG_DMA_API_DEBUG was enabled. Fix up this API usage inconsistency.
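Sketch of the API rule that CONFIG_DMA_API_DEBUG enforces: memory mapped with
dma_map_single() must be released with dma_unmap_single(), not
dma_unmap_page():
#include <linux/dma-mapping.h>

static void tx_map_unmap(struct device *dev, void *data, size_t len)
{
	dma_addr_t addr = dma_map_single(dev, data, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return;
	/* ... hand the buffer to the hardware, wait for completion ... */
	dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
}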
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 763437a9e7 upstream.
wait_for_avail() in pcm_lib.c has a race in it (observed in practice by an
Intel validation group).
The function is supposed to return once space in the buffer has become
available, or if some timeout happens. The entity that creates space (irq
handler of sound driver and some such) will do a wake up on a waitqueue
that this function registers for.
However there are two races in the existing code
1) If space became available between the caller noticing there was no
space and this function actually sleeping, the wakeup is missed and the
timeout condition will happen instead
2) If a wakeup happened but not sufficient space became available, the
code will loop again and wait for more space. However, if the second
wake comes in prior to hitting the schedule_timeout_interruptible(), it
will be missed, and potentially you'll wait out until the timeout
happens.
The fix consists of using more careful setting of the current state (so
that if a wakeup happens in the main loop window, the schedule_timeout()
falls through) and by checking for available space prior to going into the
schedule_timeout() loop, but after being on the waitqueue and having the
state set to interruptible.
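A generic sketch of the race-free wait pattern described above (not the
pcm_lib.c code): queue the task and set its state before the final
availability check, so a wakeup in that window makes schedule_timeout() fall
through:
#include <linux/wait.h>
#include <linux/sched.h>

static long wait_for_space(wait_queue_head_t *wq, wait_queue_t *wait,
			   bool (*space_available)(void), long timeout)
{
	add_wait_queue(wq, wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (space_available())		/* re-checked after queuing */
			break;
		timeout = schedule_timeout(timeout);
		if (!timeout || signal_pending(current))
			break;
	}
	set_current_state(TASK_RUNNING);
	remove_wait_queue(wq, wait);
	return timeout;
}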
[tiwai: the following changes have been added to Arjan's original patch:
- merged akpm's fix for waitqueue adding order into a single patch
- reduction of duplicated code of avail check
]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fa2563e41c upstream.
Take cwq->gcwq->lock to avoid racing between drain_workqueue checking to
make sure the workqueues are empty and cwq_dec_nr_in_flight decrementing
and then incrementing nr_active when it activates a delayed work.
We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue in which the remaining work would
always requeue itself again in the same workqueue. We would hit this
race condition and trip the BUG_ON on workqueue.c:3080.
Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e71f5cc402 upstream.
per_cpu(processors, n) can be NULL, resulting in:
Loading CPUFreq modules[ 437.661360] BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffffa0434314>] pcc_cpufreq_cpu_init+0x74/0x220 [pcc_cpufreq]
It's better to avoid the oops by failing the driver, and allowing the
system to boot.
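Sketch of the added guard (simplified): bail out of the cpufreq init instead
of dereferencing a missing per-CPU ACPI processor object:
#include <acpi/processor.h>
#include <linux/errno.h>

static int cpu_init_check(unsigned int cpu)
{
	struct acpi_processor *pr = per_cpu(processors, cpu);

	if (!pr)		/* no ACPI object for this CPU */
		return -ENODEV;	/* fail the driver, keep booting */
	return 0;
}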
Signed-off-by: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7a5caabd09 upstream.
Fix regression introduced by commit 5ada28bf76 ("led-class: always
implement blinking") which broke sysfs delay handling by not storing the
updated value. Consequently it was only possible to set one of the delays
through the sysfs interface as the other delay was automatically restored
to its default value. Reading the parameters always gave the defaults.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Acked-by: Florian Fainelli <florian@openwrt.org>
Acked-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 461ae488ec upstream.
Xen backend drivers (e.g., blkback and netback) would sometimes fail to
map grant pages into the vmalloc address space allocated with
alloc_vm_area(). The GNTTABOP_map_grant_ref would fail because Xen could
not find the page (in the L2 table) containing the PTEs it needed to
update.
(XEN) mm.c:3846:d0 Could not find L1 PTE for address fbb42000
netback and blkback were making the hypercall from a kernel thread where
task->active_mm != &init_mm and alloc_vm_area() was only updating the page
tables for init_mm. The usual method of deferring the update to the page
tables of other processes (i.e., after taking a fault) doesn't work as a
fault cannot occur during the hypercall.
This would work on some systems depending on what else was using vmalloc.
Fix this by reverting ef691947d8 ("vmalloc: remove vmalloc_sync_all()
from alloc_vm_area()") and add a comment to explain why it's needed.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Keir Fraser <keir.xen@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1d2ef59014 upstream.
We used to get the victim pinned by dentry_unhash() prior to commit
64252c75a2 ("vfs: remove dget() from dentry_unhash()") and ->rmdir()
and ->rename() instances relied on that; most of them don't care, but
ones that used d_delete() themselves do. As the result, we are getting
rmdir() oopses on NFS now.
Just grab the reference before locking the victim and drop it explicitly
after unlocking, same as vfs_rename_other() does.
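The shape of the fix, as a simplified sketch (not the full vfs_rmdir()):
#include <linux/fs.h>
#include <linux/dcache.h>

static int rmdir_victim(struct inode *dir, struct dentry *victim)
{
	int err;

	dget(victim);				/* pin across the locked region */
	mutex_lock(&victim->d_inode->i_mutex);
	err = dir->i_op->rmdir(dir, victim);	/* may call d_delete() */
	mutex_unlock(&victim->d_inode->i_mutex);
	dput(victim);				/* drop it after unlocking */
	return err;
}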
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2e1210bc3d upstream.
This patch fixes "Surround Speaker Playback Volume" being cut off.
(Commit b4dabfc452 was probably meant to fix this, but it fixed
only the "Switch" name, not the "Volume" name.)
Signed-off-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 477694e711 upstream.
Mark this lowlevel IRQ handler as non-threaded. This prevents a boot
crash when "threadirqs" is on the kernel commandline. Also the
interrupt handler is handling hardware critical events which should
not be delayed into a thread.
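For reference, a sketch of how such a handler is registered (names are
placeholders):
#include <linux/interrupt.h>

static int setup_lowlevel_irq(unsigned int irq, irq_handler_t handler, void *dev)
{
	/* IRQF_NO_THREAD keeps the handler out of the forced-threading
	 * path selected by the "threadirqs" command line option */
	return request_irq(irq, handler, IRQF_NO_THREAD, "lowlevel-irq", dev);
}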
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4bae7d9769 upstream.
Since my commit 34e895075e
("mac80211: allow station add/remove to sleep") there is
a race in mac80211 when it clears the TIM bit because a
sleeping station disconnected, the spinlock isn't held
around the relevant code any more. Use the right API to
acquire the spinlock correctly.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ed585a6516 upstream.
If an irq_chip provides .irq_shutdown(), but neither of .irq_disable() or
.irq_mask(), free_irq() crashes when jumping to NULL.
Fix this by only trying .irq_disable() and .irq_mask() if there's no
.irq_shutdown() provided.
This revives the symmetry with irq_startup(), which tries .irq_startup(),
.irq_enable(), and .irq_unmask(), and makes it consistent with the comment for
irq_chip.irq_shutdown() in <linux/irq.h>, which says:
* @irq_shutdown: shut down the interrupt (defaults to ->disable if NULL)
This is also how __free_irq() behaved before the big overhaul, cfr. e.g.
3b56f0585f ("genirq: Remove bogus conditional"),
where the core interrupt code always overrode .irq_shutdown() to
.irq_disable() if .irq_shutdown() was NULL.
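A sketch of the resulting fallback order (simplified from the description
above, not quoted from the patch):
#include <linux/irq.h>

static void shutdown_sketch(struct irq_desc *desc)
{
	struct irq_chip *chip = desc->irq_data.chip;

	if (chip->irq_shutdown)
		chip->irq_shutdown(&desc->irq_data);
	else if (chip->irq_disable)
		chip->irq_disable(&desc->irq_data);
	else
		chip->irq_mask(&desc->irq_data);
}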
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: linux-m68k@lists.linux-m68k.org
Link: http://lkml.kernel.org/r/1315742394-16036-2-git-send-email-geert@linux-m68k.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e600cffe61 upstream.
This code section seems to have been accidentally copy pasted.
It causes incorrect bits to be set up in the TLL_CHANNEL_CONF
register and prevents the TLL mode from working correctly.
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Cc: Keshava Munegowda <keshava_mgowda@ti.com>
Acked-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fa948761e6 upstream.
Fix regression introduced by commit
a2974732ca (TPS65911: Add new irq
definitions) which caused irq_num to be incorrectly set for tps65910.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Acked-by: Graeme Gregory <gg@slimlogic.co.uk>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8efcc57ded upstream.
This needs to be an out-of-band value for the register, and on this device
registers are 16 bits, so we must shift left by one to the 17th bit.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c5d2e650bd upstream.
Fix the codec_name field of the dai_link to match the actual device name
of the codec. Otherwise the card won't be instantiated.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 747da0f80e upstream.
We need to report the entire jack state to the core jack code, not just
the bits that were being updated by the caller, otherwise the status
reported by other detection methods will be omitted from the state seen
by userspace.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e2faeec2de upstream.
The checksum field in the EEPROM on HPPA is really not a
checksum but a signature (0x16d6). So allow 0x16d6 as the
matching checksum on HPPA systems.
This issue is present on longterm/stable kernels, I have
verified that this patch is applicable back to at least
2.6.32.y kernels.
v2- changed ifdef to use CONFIG_PARISC instead of __hppa__
CC: Guy Martin <gmsoft@tuxicoman.be>
CC: Rolf Eike Beer <eike-kernel@sf-tec.de>
CC: Matt Turner <mattst88@gmail.com>
Reported-by: Mikulas Patocka <mikulas@artax.kerlin.mff.cuni.cz>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0e4660cbe5 upstream.
ADC calibrations cannot run on 5 GHz with fast clock enabled. They
need to be disabled, otherwise they'll hang and IQ mismatch calibration
will not be run either.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Reported-by: Adrian Chadd <adrian@freebsd.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7c2510120e upstream.
When trying to connect on 5GHz we can provide a negative index to
mac80211, which triggers a BUG_ON. The reason for the iwl-3945-rs malfunction
on 5GHz is unknown and needs further investigation. For now, to
avoid triggering the bug, correct the value and just print a WARNING.
Address bug:
https://bugzilla.redhat.com/show_bug.cgi?id=730653
Reported-and-tested-by: Jan Teichmann <jan.teichmann@gmail.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 58b4857696 upstream.
Transitioning to a LOOP_UPDATE loop-state could cause the driver
to miss normal link/target processing. LOOP_UPDATE is a crufty
artifact left over from a time when the driver performed its own
internal command-queuing. Safely remove this state.
Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com>
Signed-off-by: Chad Dupuis <chad.dupuis@qlogic.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 01350d0553 upstream.
If a physical device exposed to the OS by hpsa
is replaced (e.g. one hot plug tape drive is replaced
by another, or a tape drive is placed into "OBDR" mode
in which it acts like a CD-ROM device) and a rescan is
initiated, the replaced device will be added to the
SCSI midlayer with target and lun numbers set to -1.
After that, a panic is likely to ensue. When a physical
device is replaced, the lun and target number should be
preserved.
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0b0e1d6cbc upstream.
The test to detect OBDR ("One Button Disaster Recovery")
cd-rom devices was comparing against uninitialized data.
Fixed by moving the test for the device to where the
inquiry data is collected, and removing the uninitialized variable
altogether as it wasn't really being used.
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3bdf28feaf upstream.
'struct of_device' no longer exists, and its functionality has been merged
into platform_device. Update the MPC5200 audio DMA driver (mpc5200_dma)
accordingly. This fixes a build break.
Signed-off-by: Timur Tabi <timur@freescale.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ee33e2b771 upstream.
The unsolicited frame control infrastructure requires a table of dma
addresses for the hardware to lookup the frame buffer location by an
index. The hardware expects the elements of this table to be 64-bit
quantities, so we cannot reference these elements as dma_addr_t. All
unsolicited frame protocols are affected, particularly SATA-PIO and SMP,
which prevented direct-attached SATA drives and expander-attached drives
from being discovered.
Reported-by: Jacek Danecki <jacek.danecki@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1a87828447 upstream.
A bug (likely copy/paste) that has been carried from the original
implementation. The unsolicited frame handling structure returns the
d2h fis in the isci_request.stp.rsp buffer.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 73f507171c upstream.
This makes sure we don't end up reusing the unlinked inode object.
The ideal way is to use the inode's i_generation, but i_generation is
not always available in userspace.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b49d8b5d70 upstream.
With msize equal to 512K (PAGE_SIZE * VIRTQUEUE_NUM), we hit multiple
crashes. This patch fixes those.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f88657ce3f upstream.
Some of the flags are OS/arch dependent, so we add 9p
protocol values which map to the asm-generic/fcntl.h values in Linux.
Based on the original patch from Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com>
[extra comments from author as to why this needs to go to stable:
Earlier, for different operations such as open, we used the values of the open
flags as defined by the OS. But some of these flags such as O_DIRECT are
arch dependent. So if we have the 9p client and server running on
different architectures, we end up with the client sending its own
architecture's value of these open flags, and the server will try to map these
values to what its architecture states. For example, O_DIRECT on an x86 client
maps to
#define O_DIRECT 00040000
whereas on a sparc server it maps to
#define O_DIRECT 0x100000
Hence we need to map these open flags to OS/arch independent flag
values. Getting these changes to an early version of kernel ensures us
that we work with different combination of client and server. We should
ideally backport this patch to all possible kernel version.]
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 45089142b1 upstream.
We should only update attributes that we can change on stat2inode.
Also do file type initialization in v9fs_init_inode.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5441ae5eb3 upstream.
d_instantiate marks the dentry positive, so a parallel lookup and mkdir of
the directory can find a dentry that doesn't have a fid attached. This can result
in both code paths doing v9fs_fid_add, which results in a v9fs_dentry leak.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8f9c91273e upstream.
We can only sort the _TSS return package if there is no _PSS
in the same scope. This is because if _PSS is present, the ACPI
specification dictates that the _TSS Power Dissipation field is
to be ignored, and therefore some BIOSs leave garbage values in
the _TSS Power field(s). In this case, it is best to just return
the _TSS package as-is.
Reported-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f1ca1512e7 upstream.
The value is only set to true but never set back to false,
which causes too many completion-wait commands to be sent to
hardware. Fix it with this patch.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e33acde911 upstream.
The domain_flush_devices() function takes the domain->lock.
But this function is only called from update_domain() which
itself is already called under the domain->lock. This causes
a deadlock situation when the dma-address-space of a domain
grows larger than 1GB.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f470e5ae34 upstream.
Fix section mismatch warning:
WARNING: drivers/net/irda/smsc-ircc2.o(.devinit.text+0x1a7): Section mismatch in reference from the function smsc_ircc_pnp_probe() to the function .init.text:smsc_ircc_open()
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ed80fcfac2 upstream.
This makes sure we don't end up reusing the unlinked inode object.
The ideal way is to use the inode's i_generation, but i_generation is
not always available in userspace.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a2dd43bb0d upstream.
Without this fix, if any invalid mount options/args are passed while mounting
the 9p fs, no error (-EINVAL) is returned and the default arg value is assigned.
This fix returns -EINVAL when an invalid argument is found while parsing
mount options.
Signed-off-by: Prem Karat <prem.karat@linux.vnet.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fd2421f544 upstream.
This makes sure we don't use the wrong inode from the inode hash. The inode number
of a deleted file is reused by the next file system object created,
and if we only use the inode number for the inode hash lookup we could end up
with the wrong struct inode.
Also compare the inode generation number. Not all Linux file systems provide
st_gen in userspace, so it could be 0.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b85f7d92d7 upstream.
There was a BUG_ON to protect against a bad id which could be dealt with
more gracefully.
Reported-by: Natalie Orlin <norlin@us.ibm.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fc61ccd35f upstream.
dvb_usb_device_init calls the frontend_attach method of this driver, which
uses vp7045_usb_op. In order to have a buffer ready in vp7045_usb_op, it has to
be allocated before that happens.
Luckily we can use the whole private data as the buffer as it gets separately
allocated on the heap via kzalloc in dvb_usb_device_init and is thus apt for
use via usb_control_msg.
This fixes a
BUG: unable to handle kernel paging request at 0000000000001e78
reported by Tino Keitel and diagnosed by Dan Carpenter.
Tested-by: Tino Keitel <tino.keitel@tikei.de>
Signed-off-by: Florian Mickler <florian@mickler.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit de4ed0c111 upstream.
The nuvoton-cir driver was storing up consecutive pulse-pulse and
space-space samples internally, for no good reason, since
ir_raw_event_store_with_filter() already merges back-to-back like
sample types for us. This should also fix a regression introduced late
in 3.0 that related to a timeout change, which actually becomes correct
when coupled with this change. Tested with RC6 and RC5 on my own
nuvoton-cir hardware atop vanilla 3.0.0, after verifying quirky
behavior in 3.0 due to the timeout change.
Reported-by: Stephan Raue <sraue@openelec.tv>
CC: Stephan Raue <sraue@openelec.tv>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 27a7b260f7 upstream.
0.90 metadata uses an unsigned 32bit number to count the number of
kilobytes used from each device.
This should allow up to 4TB per device.
However we multiply this by 2 (to get sectors) before casting to a
larger type, so sizes above 2TB get truncated.
Also we allow rdev->sectors to be larger than 4TB, so it is possible
for the array to be resized larger than the metadata can handle.
So make sure rdev->sectors never exceeds 4TB when 0.90 metadata is in
use.
Also the sanity check at the end of super_90_load should include level
1 as it uses ->size too. (RAID0 and Linear don't use ->size at all).
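The truncation, demonstrated in isolation (sketch, not the md.c diff): the KiB
count is a u32 and must be widened before doubling into sectors, not after:
#include <linux/types.h>

static sector_t size_to_sectors(u32 size_kib)
{
	/* the buggy form was effectively "size_kib * 2", a 32-bit multiply
	 * that wraps for sizes above 2TB */
	return (sector_t)size_kib * 2;
}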
Reported-by: Pim Zandbergen <P.Zandbergen@macroscoop.nl>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94007751bb upstream.
On the last close of an 'md' device which has been stopped, the device
is destroyed and in particular the request_queue is freed. The free
is done in a separate thread so it might happen a short time later.
__blkdev_put calls bdev_inode_switch_bdi *after* ->release has been
called.
Since commit f758eeabeb
bdev_inode_switch_bdi will dereference the 'old' bdi, which lives
inside a request_queue, to get a spin lock. This causes the last
close on an md device to sometime take a spin_lock which lives in
freed memory - which results in an oops.
So move the call to bdev_inode_switch_bdi before the call to
->release.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 17c8b96093 upstream.
Not cleaning after alloc failure would result in crash on destroy,
because nouveau_sgdma_clear assumes "ttm_alloced" to be not null when
"pages" is not null.
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 897a6a1a14 upstream.
The TNET variant of DaVinci compiles some code that it shares
with other DaVinci variants, however it has a V6 CPU rather than
an ARM926T, thus the hardcoded call to arm926_flush_kern_cache_all()
in sleep.S will obviously fail, and we need to build with the
v6_flush_kern_cache_all() call instead. This was triggered by
manually altering the DaVinci config to build the TNET version.
Cc: Dave Martin <dave.martin@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 810198bc9c upstream.
DA850/OMAP-L138 EMAC driver uses random mac address instead of
a fixed one because the mac address is not stuffed into EMAC
platform data.
This patch provides a function which reads the mac address
stored in SPI flash (registered as MTD device) and populates the
EMAC platform data. The function which reads the mac address is
registered as a callback which gets called upon addition of MTD
device.
NOTE: In case the MAC address stored in SPI flash is erased, follow
the instructions at [1] to restore it.
[1] http://processors.wiki.ti.com/index.php/GSG:_OMAP-L138_DVEVM_Additional_Procedures#Restoring_MAC_address_on_SPI_Flash
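Sketch of the MTD-notifier hookup this uses; the callback and variable names
here are illustrative, not the board file's actual symbols:
#include <linux/mtd/mtd.h>
#include <linux/init.h>

static void mac_addr_notify_add(struct mtd_info *mtd)
{
	/* read the stored MAC from the SPI flash partition and copy it
	 * into the EMAC platform data */
}

static void mac_addr_notify_remove(struct mtd_info *mtd)
{
	/* nothing to undo */
}

static struct mtd_notifier mac_addr_notifier = {
	.add	= mac_addr_notify_add,
	.remove	= mac_addr_notify_remove,
};

static void __init setup_mac_addr(void)
{
	register_mtd_user(&mac_addr_notifier);
}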
Modifications in v2:
Guarded registering the mtd_notifier only when MTD is enabled.
Earlier this was handled using mtd_has_partitions() call, but
this has been removed in Linux v3.0.
Modifications in v3:
a. Guarded da850_evm_m25p80_notify_add() function and
da850evm_spi_notifier structure with CONFIG_MTD macros.
b. Renamed da850_evm_register_mtd_user() function to
da850_evm_setup_mac_addr() and removed the struct mtd_notifier
argument to this function.
c. Passed the da850evm_spi_notifier structure to register_mtd_user()
function.
Modifications in v4:
Moved the da850_evm_setup_mac_addr() function within the first
CONFIG_MTD ifdef construct.
Signed-off-by: Sudhakar Rajashekhara <sudhakar.raj@ti.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb9ea77846 upstream.
I was intrigued by the fact that the clock stood still on
the Integrator, but it wasn't strange at all, because the
timer was set up all wrong and probably has been for a
while. With this patch the clock starts ticking again:
make the timer periodic (reload), |= on the divisor bit
and load the timer before starting it.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ed467e69f1 upstream.
We have hit a couple of customer bugs where they would like to
use those parameters to run a UP kernel - but both of those
options turn off important sources of interrupt information, so
we end up not being able to boot. The correct way is to
pass in 'dom0_max_vcpus=1' on the Xen hypervisor line and
the kernel will patch itself to be a UP kernel.
Fixes bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=637308
Acked-by: Ian Campbell <Ian.Campbell@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d198d49914 upstream.
If vmalloc page_fault happens inside of interrupt handler with interrupts
disabled then on exit path from exception handler when there is no pending
interrupts, the following code (arch/x86/xen/xen-asm_32.S:112):
cmpw $0x0001, XEN_vcpu_info_pending(%eax)
sete XEN_vcpu_info_mask(%eax)
will enable interrupts even if they have been previously disabled according to
eflags from the bounce frame (arch/x86/xen/xen-asm_32.S:99)
testb $X86_EFLAGS_IF>>8, 8+1+ESP_OFFSET(%esp)
setz XEN_vcpu_info_mask(%eax)
Solution is in setting XEN_vcpu_info_mask only when it should be set
according to
cmpw $0x0001, XEN_vcpu_info_pending(%eax)
but not clearing it if there isn't any pending events.
Reproducer for bug is attached to RHBZ 707552
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 49bb1e6195 upstream.
This patch fixes the problem in sdhci-s3c host driver for Samsung Soc's.
During the card identification stage the mmc core driver enumerates for
the best bus width in combination with the highest available data rate.
It starts enumerating from the highest bus width (8) to lowest width (1).
In the case of a few MMC cards the 4-bit bus enumeration fails and it tries
the 1-bit bus enumeration. When switched to 1-bit bus mode the host driver
has to clear the previous bus width setting and apply the new setting.
The current patch will clear the previous bus mode and apply the new
mode setting.
Signed-off-by: Girish K S <girish.shivananjappa@linaro.org>
Acked-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 50a50f9248 upstream.
The default multithread workqueue can cause the same work to be executed
concurrently on different CPUs. This isn't really suitable for clock
gating, as one worker might have already gated the clock and gating it twice
results in both host->clk_old and host->ios.clock being set to 0.
To prevent this from happening we use system_nrt_wq instead.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 778e277cb8 upstream.
We have seen at least two different races when clock gating kicks in in the
middle of an ios structure update.
First one happens when ios->clock is changed outside of aggressive clock
gating framework, for example via mmc_set_clock(). The race might happen
when we run following code:
mmc_set_ios():
...
if (ios->clock > 0)
mmc_set_ungated(host);
Now if gating kicks in right after the condition check we end up setting
host->clk_gated to false even though we have just gated the clock. Next
time a request is started we try to ungate and restore the clock in
mmc_host_clk_hold(). However since we have host->clk_gated set to false the
original clock is not restored.
This eventually will cause the host controller to hang since its clock is
disabled while we are trying to issue a request. For example on Intel
Medfield platform we see:
[ 13.818610] mmc2: Timeout waiting for hardware interrupt.
[ 13.818698] sdhci: =========== REGISTER DUMP (mmc2)===========
[ 13.818753] sdhci: Sys addr: 0x00000000 | Version: 0x00008901
[ 13.818804] sdhci: Blk size: 0x00000000 | Blk cnt: 0x00000000
[ 13.818853] sdhci: Argument: 0x00000000 | Trn mode: 0x00000000
[ 13.818903] sdhci: Present: 0x1fff0000 | Host ctl: 0x00000001
[ 13.818951] sdhci: Power: 0x0000000d | Blk gap: 0x00000000
[ 13.819000] sdhci: Wake-up: 0x00000000 | Clock: 0x00000000
[ 13.819049] sdhci: Timeout: 0x00000000 | Int stat: 0x00000000
[ 13.819098] sdhci: Int enab: 0x00ff00c3 | Sig enab: 0x00ff00c3
[ 13.819147] sdhci: AC12 err: 0x00000000 | Slot int: 0x00000000
[ 13.819196] sdhci: Caps: 0x6bee32b2 | Caps_1: 0x00000000
[ 13.819245] sdhci: Cmd: 0x00000000 | Max curr: 0x00000000
[ 13.819292] sdhci: Host ctl2: 0x00000000
[ 13.819331] sdhci: ADMA Err: 0x00000000 | ADMA Ptr: 0x00000000
[ 13.819377] sdhci: ===========================================
[ 13.919605] mmc2: Reset 0x2 never completed.
and it never recovers.
Second race might happen while running mmc_power_off():
static void mmc_power_off(struct mmc_host *host)
{
	host->ios.clock = 0;
	host->ios.vdd = 0;
	[ clock gating kicks in here ]
	/*
	 * Reset ocr mask to be the highest possible voltage supported for
	 * this mmc host. This value will be used at next power up.
	 */
	host->ocr = 1 << (fls(host->ocr_avail) - 1);
	if (!mmc_host_is_spi(host)) {
		host->ios.bus_mode = MMC_BUSMODE_OPENDRAIN;
		host->ios.chip_select = MMC_CS_DONTCARE;
	}
	host->ios.power_mode = MMC_POWER_OFF;
	host->ios.bus_width = MMC_BUS_WIDTH_1;
	host->ios.timing = MMC_TIMING_LEGACY;
	mmc_set_ios(host);
}
If the clock gating worker kicks in while we have only partially updated the
ios structure, the host controller gets an incomplete ios and might not work as
expected. Again on the Intel Medfield platform we get:
[ 4.185349] kernel BUG at drivers/mmc/host/sdhci.c:1155!
[ 4.185422] invalid opcode: 0000 [#1] PREEMPT SMP
[ 4.185509] Modules linked in:
[ 4.185565]
[ 4.185608] Pid: 4, comm: kworker/0:0 Not tainted 3.0.0+ #240 Intel Corporation Medfield/iCDKA
[ 4.185742] EIP: 0060:[<c136364e>] EFLAGS: 00010083 CPU: 0
[ 4.185827] EIP is at sdhci_set_power+0x3e/0xd0
[ 4.185891] EAX: f5ff98e0 EBX: f5ff98e0 ECX: 00000000 EDX: 00000001
[ 4.185970] ESI: f5ff977c EDI: f5ff9904 EBP: f644fe98 ESP: f644fe94
[ 4.186049] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
[ 4.186125] Process kworker/0:0 (pid: 4, ti=f644e000 task=f644c0e0 task.ti=f644e000)
[ 4.186219] Stack:
[ 4.186257] f5ff98e0 f644feb0 c1365173 00000282 f5ff9460 f5ff96e0 f5ff96e0 f644feec
[ 4.186418] c1355bd8 f644c0e0 c1499c3d f5ff96e0 f644fed4 00000006 f5ff96e0 00000286
[ 4.186579] f644fedc c107922b f644feec 00000286 f5ff9460 f5ff9700 f644ff10 c135839e
[ 4.186739] Call Trace:
[ 4.186802] [<c1365173>] sdhci_set_ios+0x1c3/0x340
[ 4.186883] [<c1355bd8>] mmc_gate_clock+0x68/0x120
[ 4.186963] [<c1499c3d>] ? _raw_spin_unlock_irqrestore+0x4d/0x60
[ 4.187052] [<c107922b>] ? trace_hardirqs_on+0xb/0x10
[ 4.187134] [<c135839e>] mmc_host_clk_gate_delayed+0xbe/0x130
[ 4.187219] [<c105ec09>] ? process_one_work+0xf9/0x5b0
[ 4.187300] [<c135841d>] mmc_host_clk_gate_work+0xd/0x10
[ 4.187379] [<c105ec82>] process_one_work+0x172/0x5b0
[ 4.187457] [<c105ec09>] ? process_one_work+0xf9/0x5b0
[ 4.187538] [<c1358410>] ? mmc_host_clk_gate_delayed+0x130/0x130
[ 4.187625] [<c105f3c8>] worker_thread+0x118/0x330
[ 4.187700] [<c1496cee>] ? preempt_schedule+0x2e/0x50
[ 4.187779] [<c105f2b0>] ? rescuer_thread+0x1f0/0x1f0
[ 4.187857] [<c1062cf4>] kthread+0x74/0x80
[ 4.187931] [<c1062c80>] ? __init_kthread_worker+0x60/0x60
[ 4.188015] [<c149acfa>] kernel_thread_helper+0x6/0xd
[ 4.188079] Code: 81 fa 00 00 04 00 0f 84 a7 00 00 00 7f 21 81 fa 80 00 00 00 0f 84 92 00 00 00 81 fa 00 00 0
[ 4.188780] EIP: [<c136364e>] sdhci_set_power+0x3e/0xd0 SS:ESP 0068:f644fe94
[ 4.188898] ---[ end trace a7b23eecc71777e4 ]---
This BUG() comes from the fact that ios.power_mode was still in previous
value (MMC_POWER_ON) and ios.vdd was set to zero.
We prevent these by inhibiting the clock gating while we update the ios
structure.
Both problems can be reproduced by simply running the device in a reboot
loop.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 08c14071fd upstream.
As per suggestion by Linus Walleij:
> If you think the names of the functions are confusing then
> you may rename them, say like this:
>
> mmc_host_clk_ungate() -> mmc_host_clk_hold()
> mmc_host_clk_gate() -> mmc_host_clk_release()
>
> Which would make the usecases more clear
(This is CC'd to stable@ because the next two patches, which fix
observable races, depend on it.)
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9c40cef2b7 upstream.
There is no real reason to run blk_schedule_flush_plug() with
interrupts and preemption disabled.
Move it into schedule() and call it when the task is going voluntarily
to sleep. There might be false positives when the task is woken
between that call and actually scheduling, but that's not really
different from being woken immediately after switching away.
This fixes a deadlock in the scheduler where the
blk_schedule_flush_plug() callchain enables interrupts and thereby
allows a wakeup to happen of the task that's going to sleep.
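Sketch of the idea (simplified, not the scheduler diff): flush the task's
block plug only when it is actually about to sleep, before the scheduler core
is entered:
#include <linux/blkdev.h>
#include <linux/sched.h>

static inline void submit_pending_io(struct task_struct *tsk)
{
	if (!tsk->state)		/* still runnable: nothing to do */
		return;
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);	/* safe to enable interrupts here */
}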
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-dwfxtra7yg1b5r65m32ywtct@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c259e01a1e upstream.
Block-IO and workqueues call into notifier functions from the
scheduler core code with interrupts and preemption disabled. These
calls should be made before entering the scheduler core.
To simplify this, separate the scheduler core code into
__schedule(). __schedule() is directly called from the places which
set PREEMPT_ACTIVE and from schedule(). This allows us to add the work
checks into schedule(), so they are only called when a task voluntarily
goes to sleep.
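For illustration, the resulting shape after this split and the previous change is roughly the following (a sketch following the kernel/sched.c naming of that era, not the exact diff):
static inline void sched_submit_work(struct task_struct *tsk)
{
	if (!tsk->state)
		return;
	/* If we are going to sleep and have plugged IO queued, flush it now
	 * to avoid deadlocks; this runs with interrupts and preemption enabled. */
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);
}
asmlinkage void __sched schedule(void)
{
	struct task_struct *tsk = current;
	sched_submit_work(tsk);	/* flushes plugged IO only when actually going to sleep */
	__schedule();		/* the scheduler core proper */
}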
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20110622174918.813258321@linutronix.de
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 938f97bcf1 upstream.
Thomas earlier submitted a fix to limit the RTC PIE freq, but
picked 5000Hz out of the air. Willy noticed that we should
instead use the 8192Hz max from the rtc man documentation.
Cc: Willy Tarreau <w@1wt.eu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6af7e471e5 upstream.
It's possible to jam up the alarm timers by setting very small interval
timers, which will cause the alarmtimer subsystem to spend all of its time
firing and restarting timers. This can effectively lock up a box.
A deeper fix is needed, closely mimicking the hrtimer code, but for now
just cap the interval to 100us to avoid userland hanging the system.
CC: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 425933b30b upstream.
iomux-v3.c uses NO_PAD_CTRL as a 32-bit value,
so it should not be shifted left by MUX_PAD_CTRL_SHIFT (41).
Previously, anything requesting NO_PAD_CTRL would get
its pad control register set to 0.
Since it is a pad control mask, place it with the other mask values.
Signed-off-by: Troy Kisky <troy.kisky@boundarydevices.com>
Acked-by: Lothar Waßmann <LW@KARO-electronics.de>
Tested-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Cc: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 76d3fbf8fb upstream.
With zone_reclaim_mode enabled, it's possible for zones to be considered
full in the zonelist_cache so they are skipped in the future. If the
process enters direct reclaim, the ZLC may still consider zones to be full
even after reclaiming pages. Reconsider all zones for allocation if
direct reclaim returns successfully.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cd38b115d5 upstream.
There have been a small number of complaints about significant stalls
while copying large amounts of data on NUMA machines reported on a
distribution bugzilla. In these cases, zone_reclaim was enabled by
default due to large NUMA distances. In general, the complaints have not
been about the workload itself unless it was a file server (in which case
the recommendation was to disable zone_reclaim).
The stalls are mostly due to significant amounts of time spent scanning
the preferred zone for pages to free. After a failure, it might fallback
to another node (as zonelists are often node-ordered rather than
zone-ordered) but stall quickly again when the next allocation attempt
occurs. In bad cases, each page allocated results in a full scan of the
preferred zone.
Patch 1 checks the preferred zone for recent allocation failure
which is particularly important if zone_reclaim has failed
recently. This avoids rescanning the zone in the near future and
instead falling back to another node. This may hurt node locality
in some cases but a failure to zone_reclaim is more expensive than
a remote access.
Patch 2 clears the zlc information after direct reclaim.
Otherwise, zone_reclaim can mark zones full, direct reclaim can
reclaim enough pages but the zone is still not considered for
allocation.
This was tested on a 24-thread 2-node x86_64 machine. The tests were
focused on large amounts of IO. All tests were bound to the CPUs on
node-0 to avoid disturbances due to processes being scheduled on different
nodes. The kernels tested are
3.0-rc6-vanilla Vanilla 3.0-rc6
zlcfirst Patch 1 applied
zlcreconsider Patches 1+2 applied
FS-Mark
./fs_mark -d /tmp/fsmark-10813 -D 100 -N 5000 -n 208 -L 35 -t 24 -S0 -s 524288
fsmark-3.0-rc6 3.0-rc6 3.0-rc6
vanilla zlcfirs zlcreconsider
Files/s min 54.90 ( 0.00%) 49.80 (-10.24%) 49.10 (-11.81%)
Files/s mean 100.11 ( 0.00%) 135.17 (25.94%) 146.93 (31.87%)
Files/s stddev 57.51 ( 0.00%) 138.97 (58.62%) 158.69 (63.76%)
Files/s max 361.10 ( 0.00%) 834.40 (56.72%) 802.40 (55.00%)
Overhead min 76704.00 ( 0.00%) 76501.00 ( 0.27%) 77784.00 (-1.39%)
Overhead mean 1485356.51 ( 0.00%) 1035797.83 (43.40%) 1594680.26 (-6.86%)
Overhead stddev 1848122.53 ( 0.00%) 881489.88 (109.66%) 1772354.90 ( 4.27%)
Overhead max 7989060.00 ( 0.00%) 3369118.00 (137.13%) 10135324.00 (-21.18%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds) 501.49 493.91 499.93
Total Elapsed Time (seconds) 2451.57 2257.48 2215.92
MMTests Statistics: vmstat
Page Ins 46268 63840 66008
Page Outs 90821596 90671128 88043732
Swap Ins 0 0 0
Swap Outs 0 0 0
Direct pages scanned 13091697 8966863 8971790
Kswapd pages scanned 0 1830011 1831116
Kswapd pages reclaimed 0 1829068 1829930
Direct pages reclaimed 13037777 8956828 8648314
Kswapd efficiency 100% 99% 99%
Kswapd velocity 0.000 810.643 826.346
Direct efficiency 99% 99% 96%
Direct velocity 5340.128 3972.068 4048.788
Percentage direct scans 100% 83% 83%
Page writes by reclaim 0 3 0
Slabs scanned 796672 720640 720256
Direct inode steals 7422667 7160012 7088638
Kswapd inode steals 0 1736840 2021238
Test completes far faster with a large increase in the number of files
created per second. Standard deviation is high as a small number of
iterations were much higher than the mean. The number of pages scanned by
zone_reclaim is reduced and kswapd is used for more work.
LARGE DD
3.0-rc6 3.0-rc6 3.0-rc6
vanilla zlcfirst zlcreconsider
download tar 59 ( 0.00%) 59 ( 0.00%) 55 ( 7.27%)
dd source files 527 ( 0.00%) 296 (78.04%) 320 (64.69%)
delete source 36 ( 0.00%) 19 (89.47%) 20 (80.00%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds) 125.03 118.98 122.01
Total Elapsed Time (seconds) 624.56 375.02 398.06
MMTests Statistics: vmstat
Page Ins 3594216 439368 407032
Page Outs 23380832 23380488 23377444
Swap Ins 0 0 0
Swap Outs 0 436 287
Direct pages scanned 17482342 69315973 82864918
Kswapd pages scanned 0 519123 575425
Kswapd pages reclaimed 0 466501 522487
Direct pages reclaimed 5858054 2732949 2712547
Kswapd efficiency 100% 89% 90%
Kswapd velocity 0.000 1384.254 1445.574
Direct efficiency 33% 3% 3%
Direct velocity 27991.453 184832.737 208171.929
Percentage direct scans 100% 99% 99%
Page writes by reclaim 0 5082 13917
Slabs scanned 17280 29952 35328
Direct inode steals 115257 1431122 332201
Kswapd inode steals 0 0 979532
This test downloads a large tarfile and copies it with dd a number of
times - similar to the most recent bug report I've dealt with. Time to
completion is reduced. The number of pages scanned directly is still
disturbingly high with a low efficiency but this is likely due to the
number of dirty pages encountered. The figures could probably be improved
with more work around how kswapd is used and how dirty pages are handled
but that is separate work and this result is significant on its own.
Streaming Mapped Writer
MMTests Statistics: duration
User/Sys Time Running Test (seconds) 124.47 111.67 112.64
Total Elapsed Time (seconds) 2138.14 1816.30 1867.56
MMTests Statistics: vmstat
Page Ins 90760 89124 89516
Page Outs 121028340 120199524 120736696
Swap Ins 0 86 55
Swap Outs 0 0 0
Direct pages scanned 114989363 96461439 96330619
Kswapd pages scanned 56430948 56965763 57075875
Kswapd pages reclaimed 27743219 27752044 27766606
Direct pages reclaimed 49777 46884 36655
Kswapd efficiency 49% 48% 48%
Kswapd velocity 26392.541 31363.631 30561.736
Direct efficiency 0% 0% 0%
Direct velocity 53780.091 53108.759 51581.004
Percentage direct scans 67% 62% 62%
Page writes by reclaim 385 122 1513
Slabs scanned 43008 39040 42112
Direct inode steals 0 10 8
Kswapd inode steals 733 534 477
This test just creates a large file mapping and writes to it linearly.
Time to completion is again reduced.
The gains are mostly down to two things. In many cases, there is less
scanning as zone_reclaim simply gives up faster due to recent failures.
The second reason is that memory is used more efficiently. Instead of
scanning the preferred zone every time, the allocator falls back to
another zone and uses it instead improving overall memory utilisation.
This patch: initialise ZLC for first zone eligible for zone_reclaim.
The zonelist cache (ZLC) is used among other things to record if
zone_reclaim() failed for a particular zone recently. The intention is to
avoid a high cost scanning extremely long zonelists or scanning within the
zone uselessly.
Currently the zonelist cache is setup only after the first zone has been
considered and zone_reclaim() has been called. The objective was to avoid
a costly setup but zone_reclaim is itself quite expensive. If it is
failing regularly such as the first eligible zone having mostly mapped
pages, the cost in scanning and allocation stalls is far higher than the
ZLC initialisation step.
This patch initialises ZLC before the first eligible zone calls
zone_reclaim(). Once initialised, it is checked whether the zone failed
zone_reclaim recently. If it has, the zone is skipped. As the first zone
is now being checked, additional care has to be taken about zones marked
full. A zone can be marked "full" because it should not have enough
unmapped pages for zone_reclaim but this is excessive as direct reclaim or
kswapd may succeed where zone_reclaim fails. Only mark zones "full" after
zone_reclaim fails if it failed to reclaim enough pages after scanning.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9adceaa5b3 upstream.
On some Power rv100 cards, we have no ATY OF table, but we have
no combios table either, and hence we refuse all modes on VGA-0
since we end up with a 0 max pixel clock.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1b6afa1758 upstream.
I don't know what I was thinking putting 'rcu' after a dynamically
sized array! The array could still be in use when we call rcu_free()
(That is the point) so we mustn't corrupt it.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 43c734be55 upstream.
This patch fixes L2 Cache size calculations for L2C-210, L2C-310 and
PL310, by changing the L2X0_AUX_CTRL_WAY_SIZE_MASK from 2 bits to 3
bits.
The Auxiliary Control Register for L2C-210, L2C-310 and PL310 has 3 bits
[19:17] for the way size; however, the existing code only uses 2 bits to
get this value. This results in incorrect cache size calculations.
It also results in performing operations on the whole cache when we
erroneously decide that the range is big enough (due to l2x0_size being
too small), and it prints an incorrect cache size.
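For illustration only, the corrected extraction looks roughly like this (the register field names follow cache-l2x0.h; the helper name is made up for this sketch):
#define L2X0_AUX_CTRL_WAY_SIZE_SHIFT	17
#define L2X0_AUX_CTRL_WAY_SIZE_MASK	(0x7 << 17)	/* 3 bits [19:17], previously 0x3 << 17 */
static unsigned int l2x0_way_size_kb(unsigned int aux)
{
	unsigned int ws = (aux & L2X0_AUX_CTRL_WAY_SIZE_MASK)
				>> L2X0_AUX_CTRL_WAY_SIZE_SHIFT;
	return 1 << (ws + 3);	/* way size in KB per the controller's encoding */
}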
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a49a50dad4 upstream.
For some reason the SPI block is in a broken state after module
unloading. This leads to broken rendering after reloading the
module. Fix this by resetting the SPI block in the CP resume function.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 38f7f8f05e upstream.
On Sun4d systems running in SMP mode, IRQ 14 is used for timer interrupts
and has a specialized interrupt handler. IPI is currently set to use IRQ 14
as well, which causes it to trigger the timer interrupt handler, and not the
IPI interrupt handler.
The IPI interrupt is therefore changed to IRQ 13, which is the highest
normally handled interrupt. This IRQ is also used for SBUS interrupts,
however there is nothing in the IPI/SBUS interrupt handlers that indicate
that they will not handle sharing the interrupt.
(IRQ 13 is indicated as audio interrupt, which is unlikely to be found in a
sun4d system)
Signed-off-by: Kjetil Oftedal <oftedal@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4a0342ca8e upstream.
CC arch/sparc/kernel/pcic.o
arch/sparc/kernel/pcic.c: In function 'pcic_probe':
arch/sparc/kernel/pcic.c:359:33: error: array subscript is above array bounds [-Werror=array-bounds]
arch/sparc/kernel/pcic.c:359:8: error: array subscript is above array bounds [-Werror=array-bounds]
arch/sparc/kernel/pcic.c:360:33: error: array subscript is above array bounds [-Werror=array-bounds]
arch/sparc/kernel/pcic.c:360:8: error: array subscript is above array bounds [-Werror=array-bounds]
arch/sparc/kernel/pcic.c:361:33: error: array subscript is above array bounds [-Werror=array-bounds]
arch/sparc/kernel/pcic.c:361:8: error: array subscript is above array bounds [-Werror=array-bounds]
cc1: all warnings being treated as errors
I'm not particularly familiar with sparc but t_nmi (defined in head_32.S via
the TRAP_ENTRY macro) and pcic_nmi_trap_patch (defined in entry.S) both appear
to be 4 instructions long and I presume from the usage that instructions are
int sized.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5598473a5b upstream.
If we can't push the pending register windows onto the user's stack,
we disallow signal delivery even if the signal would be delivered on a
valid separate signal stack.
Add a register window save area in the signal frame, and store any
unsavable windows there.
On sigreturn, if any windows are still queued up in the signal frame,
try to push them back onto the stack and if that fails we kill the
process immediately.
This allows the debug/tst-longjmp_chk2 glibc test case to pass.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3f6aa0b113 upstream.
The sparc32 version of arch_write_unlock() is just a plain assignment.
Unfortunately this allows the compiler to schedule side-effects in a
protected region to occur after the HW-level unlock, which is broken.
E.g., the following trivial test case gets miscompiled:
#include <linux/spinlock.h>
rwlock_t lock;
int counter;
void foo(void) { write_lock(&lock); ++counter; write_unlock(&lock); }
Fixed by adding a compiler memory barrier to arch_write_unlock(). The
sparc64 version combines the barrier and assignment into a single asm(),
and implements the operation as a static inline, so that's what I did too.
Compile-tested with sparc32_defconfig + CONFIG_SMP=y.
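For reference, a sketch of the fixed helper (mirroring the sparc64 style described above; the exact upstream formatting may differ):
static inline void arch_write_unlock(arch_rwlock_t *lock)
{
	__asm__ __volatile__(
"	st	%%g0, [%0]"	/* store 0 to the lock word */
	: /* no outputs */
	: "r" (lock)
	: "memory");		/* compiler barrier: nothing may sink past the unlock */
}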
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a0fba3eb05 upstream.
The sparc64 spinlock_64.h contains a number of operations defined
first as static inline functions, and then as macros with the same
names and parameters as the functions. Maybe this was needed at
some point in the past, but now nothing seems to depend on these
macros (checked with a recursive grep looking for ifdefs on these
names). Other archs don't define these identity-macros.
So this patch deletes these unnecessary macros.
Compile-tested with sparc64_defconfig.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fbe5e29ec1 upstream.
This oops was already fixed by commit
27141666b6
atm: [br2684] Fix oops due to skb->dev being NULL
It happens that if a packet arrives in a VC between the call to open it on
the hardware and the call to change the backend to br2684, br2684_regvcc
processes the packet and oopses dereferencing skb->dev because it is
NULL before the call to br2684_push().
but was introduced again by commit
b6211ae7f2
atm: Use SKB queue and list helpers instead of doing it by-hand.
Signed-off-by: Daniel Schwierzeck <daniel.schwierzeck@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6d0e194d2e upstream.
On AVERATEC 3200, pata_via causes memory corruption with ATAPI DMA,
which often leads to random kernel oops. The cause of the problem is
not well understood yet and only small subset of machines using the
controller seem affected. Blacklist ATAPI DMA on the machine.
Signed-off-by: Tejun Heo <tj@kernel.org>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=11426
Reported-and-tested-by: Jim Bray <jimsantelmo@gmail.com>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4b00e4b394 upstream.
Two additional savage4 variants were added, but the S3_SAVAGE4_SERIES
macro was incompletely modified, resulting in a false positive detection
of a savage4 card regardless of which savage card is actually present.
For non-savage4 series cards, such as a Savage/IX-MV card, this results
in garbled video and/or a hard-hang at boot time. Fix this by changing
an '||' to an '&&' in the S3_SAVAGE4_SERIES macro.
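Concretely, the corrected macro amounts to something like the following (the chip-ID bounds shown here are illustrative of the savagefb naming, only to make the '||' vs '&&' point explicit):
#define S3_SAVAGE4_SERIES(chip)	((chip >= S3_SAVAGE4) && (chip <= S3_PROSAVAGEDDR))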
Signed-off-by: John P. Stanley <jpsinthemix@verizon.net>
Reviewed-by: Tormod Volden <debian.tormod@gmail.com>
[ The macros have incomplete parenthesis too, but whatever .. -Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 543cc38c8f upstream.
When hibernating, ->resume may not be called by the usb core; disconnect
and probe may be called instead, so we do not increase the counter after
decreasing it in ->suspend. As a result we free memory early, and get a
crash when unplugging the usb dongle.
BUG: unable to handle kernel paging request at 6b6b6b9f
IP: [<c06909b0>] driver_sysfs_remove+0x10/0x30
*pdpt = 0000000034f21001 *pde = 0000000000000000
Pid: 20, comm: khubd Not tainted 3.1.0-rc1-wl+ #20 LENOVO 6369CTO/6369CTO
EIP: 0060:[<c06909b0>] EFLAGS: 00010202 CPU: 1
EIP is at driver_sysfs_remove+0x10/0x30
EAX: 6b6b6b6b EBX: f52bba34 ECX: 00000000 EDX: 6b6b6b6b
ESI: 6b6b6b6b EDI: c0a0ea20 EBP: f61c9e68 ESP: f61c9e64
DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
Process khubd (pid: 20, ti=f61c8000 task=f6138270 task.ti=f61c8000)
Call Trace:
[<c06909ef>] __device_release_driver+0x1f/0xa0
[<c0690b20>] device_release_driver+0x20/0x40
[<c068fd64>] bus_remove_device+0x84/0xe0
[<c068e12a>] ? device_remove_attrs+0x2a/0x80
[<c068e267>] device_del+0xe7/0x170
[<c06d93d4>] usb_disconnect+0xd4/0x180
[<c06d9d61>] hub_thread+0x691/0x1600
[<c0473260>] ? wake_up_bit+0x30/0x30
[<c0442a39>] ? complete+0x49/0x60
[<c06d96d0>] ? hub_disconnect+0xd0/0xd0
[<c06d96d0>] ? hub_disconnect+0xd0/0xd0
[<c0472eb4>] kthread+0x74/0x80
[<c0472e40>] ? kthread_worker_fn+0x150/0x150
[<c0809b3e>] kernel_thread_helper+0x6/0x10
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b503c7a273 upstream.
Due to some recent optimization done in the way the mac address
bytes are written into the OTP memory, some AR9485 chipsets were
forced to use the first byte from the eeprom template and the
remaining bytes are read from OTP.
The AR9485 happens to use the generic eeprom template, which has 0x1 as
the first byte; this causes issues when bringing up the card.
So the eeprom template is fixed accordingly to address the issue.
Cc: Paul Stewart <pstew@google.com>
Signed-off-by: Senthil Balasubramanian <senthilb@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 66cb54bd24 upstream.
If is_main_vif(ar, vif) reports that we have to fall back
to software encryption, we goto err_softw before locking ar->mutex.
As a result, we have an unprotected call to carl9170_set_operating_mode
and an unmatched mutex_unlock.
The patch fixes the issue by adding a mutex_lock before the goto.
Found by Linux Driver Verification project (linuxtesting.org).
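The change boils down to taking the mutex before jumping to the error label, roughly (a fragment sketching the pattern, not the full carl9170_op_set_key() body):
	if (!is_main_vif(ar, vif)) {
		mutex_lock(&ar->mutex);	/* now held, so the unlock at err_softw is balanced */
		err = -EOPNOTSUPP;
		goto err_softw;
	}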
Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
Acked-By: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c6f59d13e2 upstream.
If h_add_logical_lan_buffer returns an error we need to free
the skb.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8b2a3827bb upstream.
This callback is called during suspend/resume and also via the iw command.
It configures parameters like sifs, slottime and acktimeout in
ath9k_hw_init_global_settings, where a few REG_READ and REG_RMW operations
are also done, hence the need for PS wrappers.
Signed-off-by: Mohammed Shafi Shajakhan <mohammed@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bc909d9ddb upstream.
Dereferencing a user pointer directly from kernel-space without going
through the copy_from_user family of functions is a bad idea. Two
such usages can be found in the sendmsg code path called from sendmmsg,
added by
commit c71d8ebe7a upstream.
commit 5b47b8038f in the 3.0-stable tree.
Usages are performed through memcmp() and memcpy() directly. Fix those
by using the already copied msg_sys structure instead of the __user *msg
structure. Note that msg_sys can be set to NULL by verify_compat_iovec()
or verify_iovec(), which requires additional NULL pointer checks.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: David Goulet <dgoulet@ev0ke.net>
CC: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
CC: Anton Blanchard <anton@samba.org>
CC: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 48df4a6fd8 upstream.
For a long time, the xHCI driver has had this note:
/* FIXME: Ignoring zero-length packets, can those happen? */
It turns out that, yes, there are drivers that need to queue zero-length
transfers for isochronous OUT transfers. Without this patch, users will
see kernel hang messages when a driver attempts to enqueue an isochronous
URB with a zero length transfer (because count_isoc_trbs_needed will return
zero for that TD, xhci_td->last_trb will never be set, and updating the
dequeue pointer will cause an infinite loop).
Matěj ran into this issue when using an NI Audio4DJ USB soundcard
with the snd-usb-caiaq driver. See
https://bugzilla.kernel.org/show_bug.cgi?id=40702
Fix count_isoc_trbs_needed() to return 1 for zero-length transfers (thanks
Alan on the math help). Update the various TRB field calculations to deal
with zero-length transfers. We're still transferring one packet with a
zero-length data payload, so the total_packet_count should be 1. The
Transfer Burst Count (TBC) and Transfer Last Burst Packet Count (TLBPC)
fields should be set to zero.
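A simplified sketch of the counting change (the real helper also accounts for the buffer's DMA offset within a TRB segment, which is elided here):
static unsigned int count_isoc_trbs_needed(struct urb *urb, int i)
{
	unsigned int len = urb->iso_frame_desc[i].length;
	unsigned int num_trbs = DIV_ROUND_UP(len, TRB_MAX_BUFF_SIZE);
	/* a zero-length TD still occupies one TRB */
	return num_trbs ? num_trbs : 1;
}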
This patch should be backported to kernels as old as 2.6.36.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Tested-by: Matěj Laitl <matej@laitl.cz>
Cc: Daniel Mack <zonque@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 585df1d90c upstream.
When a driver tries to cancel an URB, and the host controller is dying,
xhci_urb_dequeue will giveback the URB without removing the xhci_tds
that comprise that URB from the td_list or the cancelled_td_list. This
can cause a race condition between the driver calling URB dequeue and
the stop endpoint command watchdog timer.
If the timer fires on a dying host, and a driver attempts to resubmit
while the watchdog timer has dropped the xhci->lock to giveback a
cancelled URB, URBs may be given back by the xhci_urb_dequeue() function.
At that point, the URB's priv pointer will be freed and set to NULL, but
the TDs will remain on the td_list. This will cause an oops in
xhci_giveback_urb_in_irq() when the watchdog timer attempts to loop
through the endpoints' td_lists, giving back killed URBs.
Make sure that xhci_urb_dequeue() removes TDs from the TD lists and
canceled TD lists before it gives back the URB.
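The cleanup amounts to unlinking every remaining TD of the URB before the giveback, along these lines (a sketch; field names follow the driver's urb_priv and xhci_td structures):
	for (i = urb_priv->td_cnt; i < urb_priv->length; i++) {
		struct xhci_td *td = urb_priv->td[i];
		if (!list_empty(&td->td_list))
			list_del_init(&td->td_list);
		if (!list_empty(&td->cancelled_td_list))
			list_del_init(&td->cancelled_td_list);
	}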
This patch should be backported to kernels as old as 2.6.36.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Cc: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 522989a27c upstream.
When an isochronous transfer is enqueued, xhci_queue_isoc_tx_prepare()
will ensure that there is enough room on the transfer rings for all of the
isochronous TDs for that URB. However, when xhci_queue_isoc_tx() is
enqueueing individual isoc TDs, the prepare_transfer() function can fail
if the endpoint state has changed to disabled, error, or some other
unknown state.
With the current code, if the Nth TD (not the first TD) fails, the ring is
left in a sorry state. The partially enqueued TDs are left on the ring,
and the first TRB of the TD is not given back to the hardware. The
enqueue pointer is left on the TRB after the last successfully enqueued
TD. This means the ring is basically useless. Any new transfers will be
enqueued after the failed TDs, which the hardware will never read because
the cycle bit indicates it does not own them. The ring will fill up with
untransferred TDs, and the endpoint will be basically unusable.
The untransferred TDs will also remain on the TD list. Since the td_list
is a FIFO, this basically means the ring handler will be waiting on TDs
that will never be completed (or worse, dereference memory that doesn't
exist any more).
Change the code to clean up the isochronous ring after a failed transfer.
If the first TD failed, simply return and allow the xhci_urb_enqueue
function to free the urb_priv. If the Nth TD failed, first remove the TDs
from the td_list. Then convert the TRBs that were enqueued into No-op
TRBs. Make sure to flip the cycle bit on all enqueued TRBs (including any
link TRBs in the middle or between TDs), but leave the cycle bit of the
first TRB (which will show software-owned) intact. Then move the ring
enqueue pointer back to the first TRB and make sure to change the
xhci_ring's cycle state to what is appropriate for that ring segment.
This ensures that the No-op TRBs will be overwritten by subsequent TDs,
and the hardware will not start executing random TRBs because the cycle
bit was left as hardware-owned.
This bug is unlikely to be hit, but it was something I noticed while
tracking down the watchdog timer issue. I verified that the fix works by
injecting some errors on the 250th isochronous URB queued, although I
could not verify that the ring is in the correct state because uvcvideo
refused to talk to the device after the first usb_submit_urb() failed.
Ring debugging shows that the ring looks correct, however.
This patch should be backported to kernels as old as 2.6.36.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Cc: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d13565c128 upstream.
When the isochronous transfer support was introduced, and the xHCI driver
switched to using urb->hcpriv to store an "urb_priv" pointer, a couple of
memory leaks were introduced into the URB enqueue function in its error
handling paths.
xhci_urb_enqueue allocates urb_priv, but it doesn't free it if changing
the control endpoint's max packet size fails or the bulk endpoint is in
the middle of allocating or deallocating streams.
xhci_urb_enqueue also doesn't free urb_priv if any of the four endpoint
types' enqueue functions fail. Instead, it expects those functions to
free urb_priv if an error occurs. However, the bulk, control, and
interrupt enqueue functions do not free urb_priv if the endpoint ring is
NULL. It will, however, get freed if prepare_transfer() fails in those
enqueue functions.
Several of the error paths in the isochronous endpoint enqueue function
also fail to free it. xhci_queue_isoc_tx_prepare() doesn't free urb_priv
if prepare_ring() indicates there is not enough room for all the
isochronous TDs in this URB. If individual isochronous TDs fail to be
queued (perhaps due to an endpoint state change), urb_priv is also leaked.
This argues that the freeing of urb_priv should be done in the function
that allocated it, xhci_urb_enqueue.
This patch looks rather ugly, but refactoring the code will have to wait
because this patch needs to be backported to stable kernels.
This patch should be backported to kernels as old as 2.6.36.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Cc: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8a8ff2f939 upstream.
When a USB2 port initiates a remote wakeup, software shall ensure that
resume is signaled for at least 20ms, and then write '0' to the PLS field.
According to this, the xHCI driver does the following things:
1. When receive a remote wakeup event in irq_handler, set the resume_done
value as jiffies + 20ms, and modify rh_timer to poll root hub status at
that time;
2. When receive a GetPortStatus request, if the jiffies is after the
resume_done value, clear the resume signal and resume_done.
However, if usb_port_resume() is called before the rh_timer triggered, it
will indicate the port as Suspend Cleared and skip the clear resume signal
part. The device will fail the usb_get_status request in finish_port_resume(),
and usbcore will try a reset-resume instead. Device will work OK after
reset-resume, but resume_done value is not cleared in this case, and
xhci_bus_suspend() will fail because when it finds a non-zero resume_done
value, it will regard the port as resuming and return -EBUSY.
This causes an issue on some platforms where the system fails to suspend
after a remote wakeup from suspend by USB2 devices connected to an xHCI port.
To fix this issue, report the port status as suspended if resume has been
signaled for less than 20ms; usb_port_resume() will then wait 25ms and check
the port status again, so the xHCI driver can clear the resume signaling and
the resume_done value.
This should be backported to kernels as old as 2.6.37.
Signed-off-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5ac04bf190 upstream.
Fix the port U3 status check when Clear PORT_SUSPEND Feature.
The port status should be masked with PORT_PLS_MASK to check if it's in
U3 state.
This should be backported to kernels as old as 2.6.37.
Signed-off-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d0f2fb2500 upstream.
Per the EHCI spec (p. 28), the HC should clear PORT_SUSPEND when SW clears
PORT_RESUME. On the Intel Oaktrail platform, the MPH (Multi-Port Host
Controller) core clears PORT_SUSPEND directly when SW sets the PORT_RESUME
bit. If we rely on the PORT_SUSPEND bit to stop USB resume, we will miss
the action of clearing PORT_RESUME. This causes an unexpectedly long
resume signal on the USB bus.
Signed-off-by: Wang Zhi <zhi.wang@windriver.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f847a79ab3 upstream.
Replace DBG with dev_dbg and fix invalid access of musb->controller.
With this patch cppi_dma builds successfully.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6118514e87 upstream.
Currently the Option driver avoids binding interface 1 on Huawei K3765
and K4505 broadband modems, as it should be handled by the cdc_ether
driver instead. This patch ensures we don't bind interface 2
on those devices either, as that is CDC_DATA.
Signed-off-by: Andrew Bird <ajb@spheresystems.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7e1805844d upstream.
This patch adds the product ID of Huawei's Vodafone K4605 mobile broadband
modem to option.c. This is necessary so that the driver gets loaded on
demand without the intervention of usb_modeswitch. This has the benefit of
it becoming available faster and also ensures that the option driver is not
bound to a network interface that should be claimed by a suitable network
driver.
Signed-off-by: Andrew Bird <ajb@spheresystems.co.uk>
Signed-off-by: Alex Chiang <achiang@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0e69d75ccb upstream.
This patch adds the product ID of Huawei's Vodafone K3806 mobile broadband
modem to option.c. This is necessary so that the driver gets loaded on
demand without the intervention of usb_modeswitch. This has the benefit of
it becoming available faster and also ensures that the option driver is not
bound to a network interface that should be claimed by cdc_ether.
Signed-off-by: Andrew Bird <ajb@spheresystems.co.uk>
Signed-off-by: Alex Chiang <achiang@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e5d3d4463f upstream.
This patch fixes a NULL pointer dereference. The dereference
happens because the s5p_ehci->hcd field is not yet initialized
in the probe function.
[jg1.han@samsung.com: edit commit message]
Signed-off-by: Yulgon Kim <yulgon.kim@samsung.com>
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c96fbdd0ab upstream.
Calao uses an FT2232 on their dev kits, where port 0 is used for JTAG and
port 1 for the UART.
They use the same VID and PID as FTDI Chip, but they program the manufacturer
name in the eeprom,
so use this information to detect it.
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Cc: Gregory Hermant <gregory.hermant@calao-systems.com>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 24d406a6bf upstream.
tty_operations->remove is normally called like:
queue_release_one_tty
->tty_shutdown
->tty_driver_remove_tty
->tty_operations->remove
However tty_shutdown() is called from queue_release_one_tty() only if
tty_operations->shutdown is NULL. But for pty, it is not.
pty_unix98_shutdown() is used there as ->shutdown.
So tty_operations->remove of pty (i.e. pty_unix98_remove()) is never
called. This results in an invalid pty_count, i.e. the value that can be
seen in /proc/sys/kernel/pty/nr.
I see this was already reported at:
https://lkml.org/lkml/2009/11/5/370
But it was not fixed since then.
This patch is kind of a hackish way. The problem lies in ->install. We
allocate another tty there (the so-called tty->link). So ->install is
called once, but ->remove twice, for both tty and tty->link. The fix
here is to count both tty and tty->link and divide the count by 2 for
the user.
And to have ->remove called, let's make tty_driver_remove_tty() global
and call that from pty_unix98_shutdown() (tty_operations->shutdown).
While at it, let's document that when ->shutdown is defined,
tty_shutdown() is not called.
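A toy model of the counting scheme (illustrative only, not the driver code): every end of a pair is counted, and the figure exported to userspace is halved:
static int pty_count;					/* counts individual ends: tty and tty->link */
static void model_install(void) { pty_count += 2; }	/* ->install sets up one pair = two ends */
static void model_remove(void)  { pty_count -= 1; }	/* ->remove now runs once per end */
static int  model_nr_ptys(void) { return pty_count / 2; }	/* value shown in pty/nr */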
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Alan Cox <alan@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8c4074cd22 upstream.
Since commit e0626e38 (spi: prefix modalias with "spi:"),
the spi modalias is prefixed with "spi:".
This patch adds "spi:" prefix and removes "-spi" suffix in the modalias.
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dbb3b1ca56 upstream.
This is to fix an issue where output will suddenly become very slow.
The problem occurs on 8250 UARTS with the hardware bug UART_BUG_THRE.
BACKGROUND
For normal UARTs (without UART_BUG_THRE): When the serial core layer
gets new transmit data and the transmitter is idle, it buffers the
data and calls the 8250s' serial8250_start_tx() routine which will
simply enable the TX interrupt in the IER register and return. This
should immediately fire a THRE interrupt and begin transmitting the
data.
For buggy UARTs (with UART_BUG_THRE): merely enabling the TX interrupt
in IER does not necessarily generate a new THRE interrupt.
Therefore, a background timer periodically checks to see if there is
pending data, and starts transmission if that is the case.
The bug happens on SMP systems when the system has nothing to transmit,
the transmit interrupt is disabled and the following sequence occurs:
- CPU0: The background timer routine serial8250_backup_timeout()
starts and saves the state of the interrupt enable register (IER)
and then disables all interrupts in IER. NOTE: The transmit interrupt
(TI) bit is saved as disabled.
- CPU1: The serial core gets data to transmit, grabs the port lock and
calls serial8250_start_tx() which enables the TI in IER.
- CPU0: serial8250_backup_timeout() waits for the port lock.
- CPU1: finishes (with TI enabled) and releases the port lock.
- CPU0: serial8250_backup_timeout() calls the interrupt routine which
will transmit the next fifo's worth of data and then restores the
IER from the previously saved value (TI disabled).
At this point, as long as the serial core has more transmit data
buffered, it will not call serial8250_start_tx() again and the
background timer routine will slowly transmit the data.
The fix is to have serial8250_backup_timeout() take the port lock before
it saves the IER state and release it after restoring IER. This
prevents serial8250_start_tx() from running in parallel.
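In outline, the timer routine ends up holding the port lock around the IER save, poll and restore (a condensed sketch; helper names follow the 3.0-era drivers/tty/serial/8250.c and the THRE-workaround details are elided):
static void serial8250_backup_timeout(unsigned long data)
{
	struct uart_8250_port *up = (struct uart_8250_port *)data;
	unsigned int iir, ier;
	unsigned long flags;
	spin_lock_irqsave(&up->port.lock, flags);	/* added: serialize with start_tx */
	ier = serial_in(up, UART_IER);			/* save IER ... */
	serial_out(up, UART_IER, 0);			/* ... and mask interrupts */
	iir = serial_in(up, UART_IIR);
	if (!(iir & UART_IIR_NO_INT))
		transmit_chars(up);			/* runs with the lock held */
	serial_out(up, UART_IER, ier);			/* restore IER */
	spin_unlock_irqrestore(&up->port.lock, flags);	/* added */
	mod_timer(&up->timer,
		  jiffies + uart_poll_timeout(&up->port) + HZ / 5);
}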
Signed-off-by: Al Cooper <alcooperx@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 44178176ec upstream.
This patch adds support for the Rosewill RC-305 four-port PCI serial
card, and probably any other four-port serial cards based on the
Moschip MCS9865 chip, assuming that the EEPROM on the card was
programmed in accordance with Table 6 of the MCS9865 EEPROM
Application Note version 0.3 dated 16-May-2008, available from the
Moschip web site (registration required).
This patch is based on an earlier patch [1] for the SYBA 6x serial
port card by Ira W. Snyder.
[1]: http://www.gossamer-threads.com/lists/linux/kernel/1162435
Signed-off-by: Eric Smith <eric@brouhaha.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b280a97d1c upstream.
Fixes a logic bug where software flow control cannot be disabled, because
serial_omap_configure_xonxoff() is not called if both the IXON and IXOFF bits
are cleared.
Signed-off-by: Nick Pelly <npelly@google.com>
Acked-by: Govindraj.R <govindraj.raja@ti.com>
Tested-by: Govindraj.R <govindraj.raja@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d2b4c7bd7e upstream.
request_any_context_irq() returns a negative value on failure.
On success, it returns either IRQC_IS_HARDIRQ or IRQC_IS_NESTED.
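For callers, the practical upshot is that only negative return values mean failure; roughly (example_probe/example_handler are hypothetical names used only for illustration):
static irqreturn_t example_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}
static int example_probe(struct platform_device *pdev)
{
	int irq = platform_get_irq(pdev, 0);
	int ret;
	if (irq < 0)
		return irq;
	ret = request_any_context_irq(irq, example_handler, 0, "example", pdev);
	if (ret < 0)		/* only negative values are errors */
		return ret;
	/* here ret is IRQC_IS_HARDIRQ or IRQC_IS_NESTED, both success */
	return 0;
}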
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 671ee7f0ce upstream.
This bug causes clearing of the IECSR register to fail. In this case, the RETE
(retry error threshold exceeded) interrupt will be generated and cannot be
cleared, so the related ISR may be called persistently.
The RETE bit in IECSR is cleared by writing a 1 to it.
Signed-off-by: Liu Gang <Gang.Liu@freescale.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 284fb68d00 upstream.
Replace/remove use of RIO v.1.2 registers/bits that are not
forward-compatible with newer versions of RapidIO specification.
RapidIO specification v.1.3 removed Write Port CSR, Doorbell CSR,
Mailbox CSR and Mailbox and Doorbell bits of the PEF CAR.
Use of removed (since RIO v.1.3) register bits affects users of
currently available 1.3 and 2.x compliant devices who may use not so
recent kernel versions.
Removing checks for unsupported bits makes corresponding routines
compatible with all versions of RapidIO specification. Therefore,
backporting makes stable kernel versions compliant with RIO v.1.3 and
later as well.
Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Matt Porter <mporter@kernel.crashing.org>
Cc: Li Yang <leoli@freescale.com>
Cc: Thomas Moll <thomas.moll@sysgo.com>
Cc: Chul Kim <chul.kim@idt.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4c30c6f566 upstream.
It seems that 7bf693951a ("console: allow to retain boot console via
boot option keep_bootcon") doesn't always achieve what it aims, as when
printk_late_init() runs it unconditionally turns off all boot consoles.
With this patch, I am able to see more messages on the boot console in
KVM guests than I can without, when keep_bootcon is specified.
I think it is appropriate for the relevant -stable trees. However, it's
more of an annoyance than a serious bug (ideally you don't need to keep
the boot console around as console handover should be working -- I was
encountering a situation where the console handover wasn't working and
not having the boot console available meant I couldn't see why).
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Greg KH <gregkh@suse.de>
Acked-by: Fabio M. Di Nitto <fdinitto@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit be27425dcc upstream.
I ran into a couple of programs which broke with the new Linux 3.0
version. Some of those were binary only. I tried to use LD_PRELOAD to
work around it, but it was quite difficult and in one case impossible
because of a mix of 32bit and 64bit executables.
For example, all kinds of management software from HP don't work unless
we pretend to run a 2.6 kernel.
$ uname -a
Linux svivoipvnx001 3.0.0-08107-g97cd98f #1062 SMP Fri Aug 12 18:11:45 CEST 2011 i686 i686 i386 GNU/Linux
$ hpacucli ctrl all show
Error: No controllers detected.
$ rpm -qf /usr/sbin/hpacucli
hpacucli-8.75-12.0
Another notable case is that Python now reports "linux3" from
sys.platform(); which in turn can break things that were checking
sys.platform() == "linux2":
https://bugzilla.mozilla.org/show_bug.cgi?id=664564
It seems pretty clear to me that it's a bug in the apps that are using
'==' instead of .startswith(), but this allows us to unbreak broken
programs.
This patch adds a UNAME26 personality that makes the kernel report a
2.6.40+x version number instead. The x is the x in 3.x.
I know this is somewhat ugly, but I didn't find a better workaround, and
compatibility to existing programs is important.
Some programs also read /proc/sys/kernel/osrelease. This can be worked
around in user space with mount --bind (and a mount namespace)
To use:
wget ftp://ftp.kernel.org/pub/linux/kernel/people/ak/uname26/uname26.c
gcc -o uname26 uname26.c
./uname26 program
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 78869618a8 upstream.
Currently, the retuning timer for retuning mode 1 is deleted in
sdhci_tasklet_finish after an mmc request is done, which means the
retuning timer never triggers again. This patch fixes the problem.
Signed-off-by: Aaron Lu <Aaron.Lu@amd.com>
Reviewed-by: Philip Rakity <prakity@marvell.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit df71c9cfce upstream.
In rt2800usb_work_txdone we check flags in order:
- ENTRY_OWNER_DEVICE_DATA
- ENTRY_DATA_STATUS_PENDING
- ENTRY_DATA_IO_FAILED
Modify the flags in the reverse order in rt2x00usb_interrupt_txdone, to avoid
processing entries in _txdone with wrong flags or skipping the processing of
ready entries.
Reported-by: Justin Piszcz <jpiszcz@lucidpixels.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c2183d1e9b upstream.
FUSE_NOTIFY_INVAL_ENTRY didn't check the length of the write so the
message processing could overrun and result in a "kernel BUG at
fs/fuse/dev.c:629!"
Reported-by: Han-Wen Nienhuys <hanwenn@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f2b60717e6 upstream.
The Toshiba Satellite L300D with ATI Mobility Radeon X1100 sends data
on the i2c bus for an HDMI connector that is not implemented/existent
on the notebook's board.
Fix by applying extended DDC probing for this connector.
Requires [PATCH] drm/radeon: Extended DDC Probing for Connectors
with Improperly Wired DDC Lines
Tested for kernel 2.6.38 on Toshiba Satellite L300D notebook
BugLink: http://bugs.launchpad.net/bugs/826677
Signed-off-by: Thomas Reim <reimth@gmail.com>
Acked-by: Chris Routh <routhy@gmail.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 05e33fc20e upstream.
Delete the 10 msec delay between the INIT and SIPI when starting
slave cpus. I can find no requirement for this delay. BIOS also
has similar code sequences without the delay.
Removing the delay reduces boot time by 40 sec. Every bit helps.
Signed-off-by: Jack Steiner <steiner@sgi.com>
Link: http://lkml.kernel.org/r/20110805140900.GA6774@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7ca0758cdb upstream.
When we enter a 32-bit system call via SYSENTER or SYSCALL, we shuffle
the arguments to match the int $0x80 calling convention. This was
probably a design mistake, but it's what it is now. This causes
errors if the system call has to be restarted.
For SYSENTER, we have to invoke the instruction from the vdso as the
return address is hardcoded. Accordingly, we can simply replace the
jump in the vdso with an int $0x80 instruction and use the slower
entry point for a post-restart.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/CA%2B55aFztZ=r5wa0x26KJQxvZOaQq8s2v3u50wCyJcA-Sc4g8gQ@mail.gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a3ea14df0e upstream.
When executing EC commands, only waiting when there are still
more bytes to write is usually fine. However, if the system
suspends very quickly after a call to olpc_ec_cmd(), the last
data byte may not yet be transferred to the EC, and the command
will not complete.
This solves a bug where the SCI wakeup mask was not correctly
written when going into suspend.
The bug means that sometimes, on XO-1.5 (but not XO-1), the
devices that were marked as wakeup sources can't wake up
the system. E.g. you ask for wifi wakeups, suspend, but then
incoming wifi frames don't wake up the system as they should.
Signed-off-by: Paul Fox <pgf@laptop.org>
Signed-off-by: Daniel Drake <dsd@laptop.org>
Acked-by: Andres Salomon <dilinger@queued.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3c05c4bed4 upstream.
Fix regression for HVM case on older (<4.1.1) hypervisors caused by
commit 99bbb3a84a
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu Dec 2 17:55:10 2010 +0000
xen: PV on HVM: support PV spinlocks and IPIs
This change replaced the SMP operations with event based handlers without
taking into account that this only works when the hypervisor supports
callback vectors. This causes unexplainable hangs early on boot for
HVM guests with more than one CPU.
BugLink: http://bugs.launchpad.net/bugs/791850
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-and-Reported-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ccbcdf7cf1 upstream.
The order-based approach is not only less efficient (requiring a shift
and a compare, typical generated code looking like this
mov eax, [machine_to_phys_order]
mov ecx, eax
shr ebx, cl
test ebx, ebx
jnz ...
whereas a direct check requires just a compare, like in
cmp ebx, [machine_to_phys_nr]
jae ...
), but also slightly dangerous in the 32-on-64 case - the element
address calculation can wrap if the next power of two boundary is
sufficiently far away from the actual upper limit of the table, and
hence can result in user space addresses being accessed (with it being
unknown what may actually be mapped there).
Additionally, the elimination of the mistaken use of fls() here (should
have been __fls()) fixes a latent issue on x86-64 that would trigger
if the code was run on a system with memory extending beyond the 44-bit
boundary.
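Sketched at the C level, the check changes shape roughly as follows (machine_to_phys_nr is the element count introduced alongside the existing order value; this is illustrative, not the exact diff):
	/* before: only catches MFNs beyond the next power-of-two boundary, so
	 * values between the real table size and that boundary index past the table */
	if (unlikely((mfn >> machine_to_phys_order) != 0))
		return ~0;
	/* after: a direct compare against the actual number of table entries */
	if (unlikely(mfn >= machine_to_phys_nr))
		return ~0;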
Signed-off-by: Jan Beulich <jbeulich@novell.com>
[v1: Based on Jeremy's feedback]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 196cfe2ae8 upstream.
These were intended to avoid the namespace clash when representing
emulated IDE and SCSI devices. However that seems to confuse users
more than expected (a disk defined as sda becomes xvde).
So for now go back to the scheme which does no adjustments. This
will break when mixing IDE and SCSI names in the configuration of
guests, but that should by now be expected.
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9dd75f1f1a upstream.
Bug discovered by Jan Kara:
Finally, commit 1449032be1 brought back
the old IO submission code but apparently forgot to bring back the old
handling of uninitialized buffers, so we unconditionally call
block_write_full_page() without specifying an end_io function. So AFAICS
we never convert unwritten extents to written in some cases. For
example when I mount the fs as: mount -t ext4 -o
nomblk_io_submit,dioread_nolock /dev/ubdb /mnt and do
int fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0600);
char buf[1024];
memset(buf, 'a', sizeof(buf));
fallocate(fd, 0, 0, 16384);
write(fd, buf, sizeof(buf));
I get a file full of zeros (after remounting the filesystem so that
pagecache is dropped) instead of seeing the first KB contain 'a's.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 32c80b32c0 upstream.
Setting the EXT4_IO_END_UNWRITTEN flag and increasing i_aiodio_unwritten
should be done simultaneously, since ext4_end_io_nolock always clears
the flag and decreases the counter at the same time.
We don't increase i_aiodio_unwritten when setting
EXT4_IO_END_UNWRITTEN, so it will go negative and cause some processes
to wait forever.
Part of the patch came from Eric in his e-mail, but it doesn't actually fix
the problem Michael encountered.
http://marc.info/?l=linux-ext4&m=131316851417460&w=2
Reported-and-Tested-by: Michael Tokarev<mjt@tls.msk.ru>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Tao Ma <boyu.mt@taobao.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2581fdc810 upstream.
Flush inode's i_completed_io_list before calling ext4_io_wait to
prevent the following deadlock scenario: A page fault happens while
some process is writing inode A. During page fault,
shrink_icache_memory is called that in turn evicts another inode
B. Inode B has some pending io_end work so it calls ext4_ioend_wait()
that waits for inode B's i_ioend_count to become zero. However, inode
B's ioend work was queued behind some of inode A's ioend work on the
same cpu's ext4-dio-unwritten workqueue. As the ext4-dio-unwritten
thread on that cpu is processing inode A's ioend work, it tries to
grab inode A's i_mutex lock. Since the i_mutex lock of inode A has been
held since before the page fault happened, we enter a deadlock.
This also moves ext4_flush_completed_IO and ext4_ioend_wait from
ext4_destroy_inode() to ext4_evict_inode(). During inode deletion,
ext4_evict_inode() is called before ext4_destroy_inode() and in
ext4_evict_inode(), we may call ext4_truncate() without holding
i_mutex lock. As a result, there is a race between flush_completed_IO
that is called from ext4_ext_truncate() and ext4_end_io_work, which
may cause corruption on an io_end structure. This change moves
ext4_flush_completed_IO and ext4_ioend_wait from ext4_destroy_inode()
to ext4_evict_inode() to resolve the race between ext4_truncate() and
ext4_end_io_work during inode deletion.
Signed-off-by: Jiaying Zhang <jiayingz@google.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 441c850857 upstream.
ext4_should_writeback_data() had an incorrect sequence of
tests to determine if it should return 0 or 1: in
particular, even in no-journal mode, 0 was being returned
for a non-regular-file inode.
This meant that, in non-journal mode, we would use
ext4_journalled_aops for directories, symlinks, and other
non-regular files. However, calling journalled aop
callbacks when there is no valid handle, can cause problems.
This would cause a kernel crash with Jan Kara's commit
2d859db3e4 ("ext4: fix data corruption in inodes with
journalled data"), because we now dereference 'handle' in
ext4_journalled_write_end().
I also added BUG_ONs to check for a valid handle in the
obviously journal-only aops callbacks.
I tested this running xfstests with a scratch device in
these modes:
- no-journal
- data=ordered
- data=writeback
- data=journal
All work fine; the data=journal run has many failures and a
crash in xfstests 074, but this is no different from a
vanilla kernel.
Signed-off-by: Curt Wohlgemuth <curtw@google.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eade7b281c upstream.
BugLink: https://bugs.launchpad.net/bugs/826081
The original reporter needs 'Headphone Jack Sense' enabled to have
audible audio, so add his PCI SSID to the whitelist.
Reported-and-tested-by: Muhammad Khurram Khan
Signed-off-by: Daniel T Chen <crimsun@ubuntu.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit da6094ea7d upstream.
The snd_usb_caiaq driver currently assumes that output urbs are serviced
in time and doesn't track when and whether they are given back by the
USB core. That usually works fine, but due to temporary limitations of
the XHCI stack, we faced that urbs were submitted more than once with
this approach.
As it's no good practice to fire and forget urbs anyway, this patch
introduces a proper bit mask to track which requests have been submitted
and given back.
That alone however doesn't make the driver work in case the host
controller is broken and doesn't give back urbs at all, and the output
stream will stop once all pre-allocated output urbs are consumed. But
it does prevent crashes of the controller stack in such cases.
See http://bugzilla.kernel.org/show_bug.cgi?id=40702 for more details.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Reported-and-tested-by: Matej Laitl <matej@laitl.cz>
Cc: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 38b65190c6 upstream.
The recent fix for testing dB range at the mixer creation time seems
to cause regressions in some devices. In such devices, reading the dB
info at probing time gives an error, thus both dBmin and dBmax are still
zero, and the TLV flag isn't set even though a later read of the dB info succeeds.
This patch adds a workaround for such a case by assuming that the later
read will succeed. In future, a similar test should be performed in a
case where a wrong dB range is seen even in the later read.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 34f3e4f23c upstream.
When btrfs recovers from a crash, it may hit the oops below:
------------[ cut here ]------------
kernel BUG at fs/btrfs/inode.c:4580!
[...]
RIP: 0010:[<ffffffffa03df251>] [<ffffffffa03df251>] btrfs_add_link+0x161/0x1c0 [btrfs]
[...]
Call Trace:
[<ffffffffa03e7b31>] ? btrfs_inode_ref_index+0x31/0x80 [btrfs]
[<ffffffffa04054e9>] add_inode_ref+0x319/0x3f0 [btrfs]
[<ffffffffa0407087>] replay_one_buffer+0x2c7/0x390 [btrfs]
[<ffffffffa040444a>] walk_down_log_tree+0x32a/0x480 [btrfs]
[<ffffffffa0404695>] walk_log_tree+0xf5/0x240 [btrfs]
[<ffffffffa0406cc0>] btrfs_recover_log_trees+0x250/0x350 [btrfs]
[<ffffffffa0406dc0>] ? btrfs_recover_log_trees+0x350/0x350 [btrfs]
[<ffffffffa03d18b2>] open_ctree+0x1442/0x17d0 [btrfs]
[...]
This happens because, while replaying an inode ref item, we forget to
check for old conflicting DIR_ITEM and DIR_INDEX items in the fs/file tree,
so we run into conflicting corner cases that lead to the BUG_ON().
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
Tested-by: Andy Lutomirski <luto@mit.edu>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 05eb0f252b upstream.
LOOP_CLR_FD takes lo->lo_ctl_mutex and tries to remove the loop sysfs
files. Sysfs calls show() and waits for lo->lo_ctl_mutex. LOOP_CLR_FD
waits for show() to finish to remove the sysfs file.
cat /sys/class/block/loop0/loop/backing_file
mutex_lock_nested+0x176/0x350
? loop_attr_do_show_backing_file+0x2f/0xd0 [loop]
? loop_attr_do_show_backing_file+0x2f/0xd0 [loop]
loop_attr_do_show_backing_file+0x2f/0xd0 [loop]
dev_attr_show+0x1b/0x60
? sysfs_read_file+0x86/0x1a0
? __get_free_pages+0x12/0x50
sysfs_read_file+0xaf/0x1a0
ioctl(LOOP_CLR_FD):
wait_for_common+0x12c/0x180
? try_to_wake_up+0x2a0/0x2a0
wait_for_completion+0x18/0x20
sysfs_deactivate+0x178/0x180
? sysfs_addrm_finish+0x43/0x70
? sysfs_addrm_start+0x1d/0x20
sysfs_addrm_finish+0x43/0x70
sysfs_hash_and_remove+0x85/0xa0
sysfs_remove_group+0x59/0x100
loop_clr_fd+0x1dc/0x3f0 [loop]
lo_ioctl+0x223/0x7a0 [loop]
Instead of taking the lo_ctl_mutex from sysfs code, take the inner
lo->lo_lock, to protect the access to the backing_file data.
Thanks to Tejun for help debugging and finding a solution.
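Roughly, the sysfs show path then looks like this (simplified sketch, not
the exact loop.c code):

	spin_lock_irq(&lo->lo_lock);
	/* ... format lo->lo_backing_file's path into the sysfs buffer ... */
	spin_unlock_irq(&lo->lo_lock);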
Cc: Milan Broz <mbroz@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d5e2003c2b upstream.
We have a problem where, if a user specifies discard but the device doesn't
actually support it, we will return EOPNOTSUPP from btrfs_discard_extent. This is a problem
because this gets called (in a fashion) from the tree log recovery code, which
has a nice little BUG_ON(ret) after it, which causes us to fail the tree log
replay. So instead detect whether our devices support discard when we're adding
them and then don't issue discards if we know that the device doesn't support
it. And just for good measure set ret = 0 in btrfs_issue_discard just in case
we still get EOPNOTSUPP so we don't screw anybody up like this again. Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6d3321e8e2 upstream.
MTRR rendezvous sequence using stop_one_cpu_nowait() can potentially
happen in parallel with another system wide rendezvous using
stop_machine(). This can lead to deadlock (the order in which
work items are queued can be different on different CPUs; some CPUs
will be running the first rendezvous handler and others will be running
the second rendezvous handler, each set waiting for the other set to join
the system-wide rendezvous, leading to a deadlock).
The MTRR rendezvous sequence is not implemented using stop_machine() as this
gets called both from process context as well as the cpu online paths
(where the cpu has not come online yet and interrupts are disabled, etc.).
stop_machine() works with only online cpus.
For now, take the stop_machine mutex in the MTRR rendezvous sequence that
gets called from an online cpu (here we are in the process context
and can potentially sleep while taking the mutex). And the MTRR rendezvous
that gets triggered during cpu online doesn't need to take this stop_machine
lock (as the stop_machine() already ensures that there is no cpu hotplug
going on in parallel by doing get_online_cpus())
TBD: Pursue a cleaner solution of extending the stop_machine()
infrastructure to handle the case where the calling cpu is
still not online and use this for MTRR rendezvous sequence.
fixes: https://bugzilla.novell.com/show_bug.cgi?id=672008
Reported-by: Vadim Kotelnikov <vadimuzzz@inbox.ru>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/20110623182056.807230326@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 910ac68a2b upstream.
If the client is in the process of resetting the session when it receives
a callback, then returning NFS4ERR_DELAY may cause a deadlock with the
DESTROY_SESSION call.
Basically, if the client returns NFS4ERR_DELAY in response to the
CB_SEQUENCE call, then the server is entitled to believe that the
client is busy because it is already processing that call. In that
case, the server is perfectly entitled to respond with a
NFS4ERR_BACK_CHAN_BUSY to any DESTROY_SESSION call.
Fix this by having the client reply with a NFS4ERR_BADSESSION in
response to the callback if it is resetting the session.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 55a673990e upstream.
Currently, there is no guarantee that we will call nfs4_cb_take_slot() even
though nfs4_callback_compound() will consistently call
nfs4_cb_free_slot() provided the cb_process_state has set the 'clp' field.
The result is that we can trigger the BUG_ON() upon the next call to
nfs4_cb_take_slot().
This patch fixes the above problem by using the slot id that was taken in
the CB_SEQUENCE operation as a flag for whether or not we need to call
nfs4_cb_free_slot().
It also fixes an atomicity problem: we need to set tbl->highest_used_slotid
atomically with the check for NFS4_SESSION_DRAINING, otherwise we end up
racing with the various tests in nfs4_begin_drain_session().
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 20618b21da upstream.
When the number of pages we want to encode is bigger than the
size of the bio (which can currently happen only when all IO is
going to a single device, e.g. group_width==1), the IO is submitted
short and we report back only the amount of bytes we actually
wrote/read, and all is fine. BUT ...
There was a bug where the current length counter was advanced
before the failed attempt to add the extra page, so we ended up in a
situation where the CDB length was one page longer than the actual
bio size, which is of course rejected by the osd-target.
While here, also fix the bio size calculation for the case
where we received more than one group of devices.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9af7db3228 upstream.
There were bugs in the case of a partial layout where olo_comp_index
is not zero. This used to work and was tested, but one of the later
cleanup SQUASHMEs broke it, and it has not been tested since.
Also add a dprint that specify those received layout parameters.
Everything else was already printed.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 13589c437d upstream.
CIFS cleanup_volume_info_contents() appears to have a memory
corruption problem.
When UNCip is set to "&vol->UNC[2]" in cifs_parse_mount_options(), it
should not be kfree()-ed in cleanup_volume_info_contents().
Introduced in commit b946845a9d
Signed-off-by: J.R. Okajima <hooanon05@yahoo.co.jp>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fa71f44706 upstream.
Running the cthon tests on a recent kernel caused this message to pop
occasionally:
CIFS VFS: did not end path lookup where expected namelen is 0
Some added debugging showed that namelen and dfsplen were both 0 when
this occurred. That means that the read_seqretry returned true.
Assuming that the comment inside the if statement is true, this should
be harmless and just means that we raced with a rename. If that is the
case, then there's no need for alarm and we can demote this to cFYI.
While we're at it, print the dfsplen too so that we can see what
happened here if the message pops during debugging.
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aba8d05607 upstream.
In addition to /etc/perfconfig and $HOME/.perfconfig, perf looks for
configuration in the file ./config, imitating git which looks at
$GIT_DIR/config. If ./config is not a perf configuration file, it
fails, or worse, treats it as a configuration file and changes behavior
in some unexpected way.
"config" is not an unusual name for a file to be lying around and perf
does not have a private directory dedicated for its own use, so let's
just stop looking for configuration in the cwd. Callers needing
context-sensitive configuration can use the PERF_CONFIG environment
variable.
Requested-by: Christian Ohm <chr.ohm@gmx.net>
Cc: 632923@bugs.debian.org
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Christian Ohm <chr.ohm@gmx.net>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110805165838.GA7237@elie.gateway.2wire.net
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d5811e8731 upstream.
Attempting to turn off disconnected display hw in the
hotplug handler led to more problems than it helped. For
now just register an event and only attempt to do something
interesting with DP. Other connectors are just too problematic:
- Some systems have an HPD pin assigned to LVDS, but it's rarely
if ever connected properly and we don't really care about hpd
events on LVDS anyway since it's always connected.
- The HPD pin is wired up correctly for eDP, but we don't really
have to do anything with the events since it's always connected.
- Some HPD pins fire more than once when you connect/disconnect
- etc.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=39882
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e22a539824 upstream.
The CONFIG_RELOCATABLE code tries to align the unpack destination to
the value of 'kernel_alignment' in the setup_hdr. If that's 0, it
tries to unpack to address 0, which in fact causes the gunzip code
to call 'error("Out of memory while allocating output buffer")'.
The bootloader (i.e. the lguest Launcher in this case) should be
setting this field; the normal bzImage uses 16M, so we can use the same value.
Reported-by: Stefanos Geraggelos <sgerag@cslab.ece.ntua.gr>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f982f91516 upstream.
Commit db64fe0225 ("mm: rewrite vmap layer") introduced code that does
address calculations under the assumption that VMAP_BLOCK_SIZE is a
power of two. However, this might not be true if CONFIG_NR_CPUS is not
set to a power of two.
Wrong vmap_block index/offset values could lead to memory corruption.
However, this has never been observed in practice (or never been
diagnosed correctly); what caught this was the BUG_ON in vb_alloc() that
checks for inconsistent vmap_block indices.
To fix this, ensure that VMAP_BLOCK_SIZE always is a power of two.
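For illustration, the index math in mm/vmalloc.c is of the following form
and only partitions the address space cleanly when VMAP_BLOCK_SIZE is a
power of two (simplified sketch):

	static unsigned long addr_to_vb_idx(unsigned long addr)
	{
		addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE - 1);
		addr /= VMAP_BLOCK_SIZE;
		return addr;
	}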
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=31572
Reported-by: Pavel Kysilka <goldenfish@linuxsoft.cz>
Reported-by: Matias A. Fonzo <selk@dragora.org>
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bdc71bc592 upstream.
This cleans up error handling for the beacon in case of dma mapping
failure. We need to free the skb when dma mapping fails instead of
nulling and leaking the pointer, and we should bail out to avoid
giving the hardware the bad descriptor.
Finally, we need to perform the null check after trying to update
the beacon, or else beacons will never be sent after a single
mapping failure.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 29591ed4ac upstream.
Two issues were preventing module snd-soc-tegra-wm8903.ko from being
removed and re-inserted:
a) The speaker-enable GPIO is hosted by the WM8903 chip. This GPIO must
be freed before snd_soc_unregister_card() is called, because that
triggers wm8903.c:wm8903_remove(), which calls gpiochip_remove(), which
then fails if any of the GPIOs are in use. To solve this, free all GPIOs
first, so the code doesn't care where they come from.
b) We need to call snd_soc_jack_free_gpios() to match the call to
snd_soc_jack_add_gpios() during initialization. Without this, the
call to snd_soc_jack_add_gpios() fails during any subsequent modprobe
and initialization, since the GPIO and IRQ are already registered. In
turn, this causes the headphone state not to be monitored, so the
headphone is assumed not to be plugged in, and the audio path to it is
never enabled.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a96edd59b2 upstream.
Not all PCM devices have all sub-streams. Specifically, the SPDIF driver
only supports playback and hence has no capture substream. Check whether
a substream exists before dereferencing it, when de-allocating DMA
buffers in tegra_pcm_deallocate_dma_buffer.
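The guard described above amounts to something like this (simplified
sketch):

	struct snd_pcm_substream *substream = pcm->streams[stream].substream;

	if (!substream)
		return;
	/* ... free the DMA buffer for this substream ... */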
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 15439bde3a upstream.
This fixes faulty outbound packets in case the inbound packets
received from the hardware are fragmented and contain bogus input
iso frames. The bug has been there for ages, but for some strange
reasons, it was only triggered by newer machines in 64bit mode.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Reported-and-tested-by: William Light <wrl@illest.net>
Reported-by: Pedro Ribeiro <pedrib@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 66a89b2164 upstream.
rs_resp is dynamically allocated in aem_read_sensor(), so it should be freed
before exiting in every case. This collects the kfree and the return at
the end of the function.
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 35e9e21fb3 upstream.
This patch adds the product ID of Huawei's Vodafone K4511 mobile broadband
modem to option.c. This is necessary so that the driver gets loaded on demand
without the intervention of usb_modeswitch. This has the benefit of it becoming
available faster and also ensures that the option driver is not bound to a
network interface that should be claimed by cdc_ether.
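For illustration, such a change boils down to one new entry in the
driver's usb_device_id table; the product ID macro below is a made-up
stand-in for the real value defined in option.c:

	{ USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4511) },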
Signed-off-by: Andrew Bird <ajb@spheresystems.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0930bb46bb upstream.
This patch adds the product ID of Huawei's Vodafone K4510 mobile broadband
modem to option.c. This is necessary so that the driver gets loaded on demand
without the intervention of usb_modeswitch. This has the benefit of it becoming
available faster and also ensures that the option driver is not bound to a
network interface that should be claimed by cdc_ether.
Signed-off-by: Andrew Bird <ajb@spheresystems.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e294908079 upstream.
This patch adds the product ID of Huawei's Vodafone K3771 mobile broadband
modem to option.c. This is necessary so that the driver gets loaded on demand
without the intervention of usb_modeswitch. This has the benefit of it becoming
available faster and also ensures that the option driver is not bound to a
network interface that should be claimed by cdc_ether.
Signed-off-by: Andrew Bird <ajb@spheresystems.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 07b21fd836 upstream.
This patch adds the product ID of Huawei's Vodafone K3770 mobile broadband
modem to option.c. This is necessary so that the driver gets loaded on demand
without the intervention of usb_modeswitch. This has the benefit of it becoming
available faster and also ensures that the option driver is not bound to a
network interface that should be claimed by cdc_ether.
Signed-off-by: Andrew Bird <ajb@spheresystems.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e468561739 upstream.
A new device ID pair is added for the Qualcomm modem present in Sagemcom's HiLo3G module.
Signed-off-by: Vijay Chavan <VijayChavan007@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a871e4f551 upstream.
Connecting the V2M to a Linux host results in a constant stream of
errors spammed to the console, all of the form
sd 1:0:0:0: ioctl_internal_command return code = 8070000
: Sense Key : 0x4 [current]
: ASC=0x0 ASCQ=0x0
The errors appear to be otherwise harmless. Add an unusual_devs entry
which eliminates all of the error messages.
Signed-off-by: Nick Bowler <nbowler@elliptictech.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1862cdd542 upstream.
Even though it's unlikely to cause an error,
there is a typo in the code that uses the bitwise-AND
operator instead of the logical one.
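As an illustration of the class of bug (the variable names are
hypothetical, not the driver's):

	if (status & ready)	/* buggy: bitwise AND of the raw values */
		handle_event();

	if (status && ready)	/* intended: both conditions must hold */
		handle_event();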
Signed-off-by: Ionut Nicu <ionut.nicu@cloudbit.ro>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4f1a7a3e78 upstream.
Assign operator instead of equality test in the usbtmc_ioctl_abort_bulk_in() function.
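Illustration of the class of bug being fixed (the variable name is
hypothetical):

	actual == 0;	/* buggy: compares and discards the result */
	actual = 0;	/* intended: reset the counter */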
Signed-off-by: Maxim A. Nikulin <M.A.Nikulin@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 72c487dfb9 upstream.
An 'unhandled fault' is caused when a gadget driver calls
usb_gadget_connect() while the USB cable isn't plugged into
the OTG port.
The fault is caused by an access to MUSB's memory space
while its clock is turned off due to pm_runtime kicking
in.
In order to fix the fault, we enclose musb_gadget_pullup()
with pm_runtime_get_sync() ... pm_runtime_put() calls to
be sure we will always reach that path with the clock turned on.
[ balbi@ti.com : simplified commit log; removed few things
which didn't belong there ]
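The resulting pattern is roughly the following (simplified sketch of
musb_gadget_pullup(), not the exact code):

	pm_runtime_get_sync(musb->controller);	/* clock guaranteed on */
	/* ... access MUSB registers to apply the pullup change ... */
	pm_runtime_put(musb->controller);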
Reported-by: Zach Pfeffer <zach.pfeffer@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7de7c7d2cb upstream.
wMaxPacketSize is __le16 and should be accessed as such. Also fix the
wBytesPerInterval assignment while here.
v2: also fix the wBytesPerInterval assignment, noticed by Matt Evans
This patch should be backported to the 3.0 kernel.
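For illustration, a little-endian descriptor field is read through the
byte-order helper rather than directly (sketch, where ep is a
struct usb_host_endpoint *):

	/* wMaxPacketSize is __le16 in struct usb_endpoint_descriptor */
	u16 max_packet = le16_to_cpu(ep->desc.wMaxPacketSize);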
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Acked-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7bd89b4017 upstream.
Commit fccf4e8620
"USB: Free bandwidth when usb_disable_device is called" caused a bit of an
issue when the xHCI host controller driver is unloaded. It changed the
USB core to remove all endpoints when a USB device is disabled. When the
driver is unloaded, it will remove the SuperSpeed split root hub, which
will disable all devices under that roothub and then halt the host
controller. When the second High Speed split roothub is removed, the USB
core will attempt to disable the endpoints, which will submit a Configure
Endpoint command to a halted host controller.
The command will eventually time out, but it makes the xHCI driver unload
take *minutes* if there are a couple of USB 1.1/2.0 devices attached. We
must halt the host controller when the SuperSpeed roothub is removed,
because we can't allow any interrupts from things like port status
changes.
Make several different functions not submit commands or URBs to the host
controller when the host is halted, by adding a check in
xhci_check_args(). xhci_check_args() is used by these functions:
xhci.c-int xhci_urb_enqueue()
xhci.c-int xhci_drop_endpoint()
xhci.c-int xhci_add_endpoint()
xhci.c-int xhci_check_bandwidth()
xhci.c-void xhci_reset_bandwidth()
xhci.c-static int xhci_check_streams_endpoint()
xhci.c-int xhci_discover_or_reset_device()
It's also used by xhci_free_dev(). However, we have to take special
care in that case, because we want the device memory to be freed if the
host controller is halted.
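Conceptually, the added guard in xhci_check_args() is of this shape (a
simplified sketch; the state flag name here is assumed):

	/* refuse to touch a halted host controller */
	if (xhci->xhc_state & XHCI_STATE_HALTED)
		return -ENODEV;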
This patch should be backported to the 2.6.39 and 3.0 kernel.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6768458b17 upstream.
Software should set XHCI_HC_OS_OWNED bit to request ownership of xHC.
This patch should be backported to kernels as far back as 2.6.31.
Signed-off-by: JiSheng Zhang <jszhang3@gmail.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 118c9db51e upstream.
This patch addresses an issue with an incorrect definition of the HW register
AR_PHY_TX_IQCAL_CORR_COEFF_B1, which leads to incorrect calibration.
Signed-off-by: Alex Hacker <hacker@epn.ru>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 15052f81d2 upstream.
The CTL power data in the ctlPowerData_2G field of ar9300_eeprom is incorrect.
Setting incorrect CTL power during calibration causes lower tx power.
Tx power was reported as 3dBm while operating on channel 6 HT40+ or
channel 11 HT40- because the CTL powers in the calibration data were set to
zero.
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8028837d71 upstream.
The dp83640 buffers receive time stamps from special PHY status frames,
matching them to received PTP packets in a work queue. Because the timeout
for orphaned time stamps is so long and the buffer is so small, the driver
can drop time stamps under moderate PTP traffic.
This commit fixes the issue by decreasing the timeout to (at least) one
timer tick and increasing the buffer size.
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cbc056602c upstream.
After resetting the time, the PPS signals on the FIPER output channels
are incorrectly offset from the clock time, as can be readily verified
by looping back the FIPER to the external time stamp input.
Despite its name, setting the "Fiper Realignment Disable" bit seems to
fix the problem, at least on the P2020.
Also, following the example code from the Freescale BSP, it is not really
necessary to disable and re-enable the timer in order to reprogram the
FIPER. (The documentation is rather unclear on this point. It seems that
writing to the alarm register also disables the FIPER.)
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8c20871998 upstream.
Commit df5e622340 ("ext4: fix deadlock in ext4_symlink() in ENOSPC
conditions") recalculated the number of credits needed for a long
symlink, in the process of splitting it into two transactions. However,
the first credit calculation under-counted because if selinux is
enabled, credits are needed to create the selinux xattr as well.
Overrunning the reservation will result in an OOPS in
jbd2_journal_dirty_metadata() due to this assert:
J_ASSERT_JH(jh, handle->h_buffer_credits > 0);
Fix this by increasing the reservation size.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d2db60df1e upstream.
Commit ae54870a1d ("ext3: Fix lock inversion in ext3_symlink()")
recalculated the number of credits needed for a long symlink, in the
process of splitting it into two transactions. However, the first
credit calculation under-counted because if selinux is enabled, credits
are needed to create the selinux xattr as well.
Overrunning the reservation will result in an OOPS in
journal_dirty_metadata() due to this assert:
J_ASSERT_JH(jh, handle->h_buffer_credits > 0);
Fix this by increasing the reservation size.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bed9a31527 upstream.
On a box with 8TB of RAM the MMU hashtable is 64GB in size. That
means we have 4G PTEs. pSeries_lpar_hptab_clear was using a signed
int to store the index which will overflow at 2G.
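Illustration of the overflow (the loop shape is schematic, and
hpte_count/clear_hpte are hypothetical names, not the exact
pSeries_lpar_hptab_clear() body):

	/* with 4G hash PTEs, a signed 32-bit index wraps at 2^31 */
	long i;					/* was: int i */

	for (i = 0; i < hpte_count; i++)	/* hpte_count is ~4G */
		clear_hpte(i);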
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 966728dd88 upstream.
I have a box that fails in OF during boot with:
DEFAULT CATCH!, exception-handler=fff00400
at %SRR0: 49424d2c4c6f6768 %SRR1: 800000004000b002
ie "IBM,Logh". OF got corrupted with a device tree string.
Looking at make_room and alloc_up, we claim the first chunk (1 MB)
but we never claim any more. mem_end is always set to alloc_top
which is the top of our available address space, guaranteeing we will
never call alloc_up and claim more memory.
Also alloc_up wasn't setting alloc_bottom to the bottom of the
available address space.
This doesn't help the box to boot, but we at least fail with
an obvious error. We could relocate the device tree in a future
patch.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b1301797f3 upstream.
Recent versions of firmware will fail to unmap the virtual processor
area if we have a dispatch trace log registered. This causes kexec
to fail.
If a trace log is registered this patch unregisters it before the
SLB shadow and virtual processor areas, fixing the problem.
The address argument is ignored by firmware on unregister so we
may as well remove it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 764355487e upstream.
Close a TOCTOU race for mounts done via ecryptfs-mount-private. The mount
source (device) can be raced when the ownership test is done in userspace.
Provide Ecryptfs a means to force the uid check at mount time.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 961f65fc41 ]
There is currently no upper limit on the mondo queue sizes we'll use,
which guarantees that we'll eventually hit page allocation limits, and
thus allocation failures, due to MAX_ORDER.
Cap the sizes sanely; the current limits are:
CPU MONDO 2 * max_possible_cpus
DEV MONDO 256 (basically NR_IRQS)
RES MONDO 128
NRES MONDO 4
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 9076d0e7e0 ]
On sun4v this is basically required since we point the hypervisor and
the TSB walking hardware at these tables using physical addressing
too.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit ef7c4d4675 ]
Just like powerpc, we code patch at boot time.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit e95ade0839 ]
Don't use floating point on Niagara2, use the traditional
plain Niagara code instead.
Unroll Niagara loops to 128 bytes for copy, and 256 bytes
for clear.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit ac85fe8b21 ]
Instead of evaluating the cpu features for ELF_HWCAP on every exec,
calculate it once at boot time.
Add AV_SPARC_* capability flag bits, compatible with what Solaris
reports to applications.
Report these capabilities once in the kernel log, and also via
/proc/cpuinfo in a new "cpucaps" entry.
If available, fetch the cpu features from the machine description
'hwcap-list' property of the 'cpu' node.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 4ba991d3eb ]
The cpu compatible string we look for is "SPARC-T3".
As far as memset/memcpy optimizations go, we treat this chip the same
as Niagara-T2/T2+. Use cache initializing stores for memset, and use
prefetch, FPU block loads, cache initializing stores, and block stores
for copies.
We use the Niagara-T2 perf support, since T3 is a close relative in
this regard. Later we'll add support for the new events T3 can
report, plus enable T3's new "sample" mode.
For now I haven't added any new ELF hwcap flags. We probably need
to add a couple, for example:
T2 and T3 both support the population count instruction in hardware.
T3 supports VIS3 instructions, including support (finally) for
partitioned shift. One can also now move directly between float
and integer registers.
T3 supports instructions meant to help with Galois Field and other HPC
calculations, such as XOR multiply. Also there are "OP and negate"
instructions, for example "fnmul" which is multiply-and-negate.
T3 recognizes the transactional memory opcodes; however, since
transactional memory isn't supported: 1) 'commit' behaves as a NOP,
2) 'chkpt' always branches, 3) 'rdcps' returns all zeros, and 4) 'wrcps'
behaves as a NOP.
So we'll need about 3 new elf capability flags in the end to represent
all of these things.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 314ff52727 ]
The hypervisor call is only necessary if hypervisor events are
being requested.
So if we're not tracking hypervisor events, simply do a direct
register write.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit facfddef2c ]
Otherwise we'll crash in the sparc perf init code.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 559fafb94a ]
Fix the improper protocol err_handler; the current implementation is completely
unusable and may cause a kernel crash due to a double kfree_skb.
Signed-off-by: Dmitry Kozlov <xeb@mail.ru>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit b0fe4a3184 ]
rt_tos was changed to iph->tos but it must be filtered by RT_TOS
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 1821f7cd65 ]
As reported by Ben Greer and Francois Romieu, the code path in
the netif_carrier code leads it to try to disable
a delayed work item only to re-enable it immediately:
netif_carrier_on
-> linkwatch_fire_event
-> linkwatch_schedule_work
-> cancel_delayed_work
-> del_timer_sync
If __cancel_delayed_work is used instead, then there is no
problem of waiting for a running linkwatch_event.
There is a race between linkwatch_event running and re-scheduling,
but it is harmless to schedule an extra scan of the linkwatch queue.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 93a3aa2593 ]
The D-Link DGE-530T rev C1 is a re-badged Realtek 8169 named DLG10028C,
unlike the previous revisions which were skge based. It is probably
the same as the discontinued DGE-528T (0x4300) other than the PCI ID.
The PCI ID is 0x1186:0x4302.
Adding it to r8169.c, where 0x1186:0x4300 is already found, allows the card
to be detected and work.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=38862
Signed-off-by: Len Sorensen <lsorense@csclub.uwaterloo.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 4203223a1a ]
Fix the min and max bit lengths for AES-CTR (RFC3686) keys.
The number of bits in key spec is the key length (128/256)
plus 32 bits of nonce.
This change takes care of the "Invalid key length" errors
reported by setkey when specifying 288 bit keys for aes-ctr.
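In numbers, the key blob handed to setkey for rfc3686(ctr(aes)) is the
AES key followed by the 4-byte nonce (illustrative defines, not taken
from the source):

	#define AES_CTR_MIN_KEYBITS	(128 + 32)	/* 160 */
	#define AES_CTR_MAX_KEYBITS	(256 + 32)	/* 288 */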
Signed-off-by: Tushar Gohad <tgohad@mvista.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit a0295a3b67 ]
cdc-phonet does not presently build on linux-3.0 because there is no entry for it in
drivers/net/Makefile. This patch adds that entry.
Signed-off-by: Chris Clayton <chris2553@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit f4bb2e9c4f ]
When a bond contains a device where one name is the subset of another
(eth1 and eth10, for example), one cannot properly set the primary
device or the currently active device.
This was reported and based on work by Takuma Umeya. I also verified
the problem and tested that this fix resolves it.
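A sketch of why a prefix-length compare is ambiguous here (illustrative
only; 'want' and 'found' stand in for the string written to sysfs and the
selected slave):

	const char *want = "eth1";

	if (!strncmp(slave->dev->name, want, strlen(want)))
		found = slave;	/* matches both "eth1" and "eth10" */

	if (!strncmp(slave->dev->name, want, IFNAMSIZ))
		found = slave;	/* unambiguous full-name match */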
V2: A few did not like the current code or my changes, so I
refactored bonding_store_primary and bonding_store_active_slave to be a
bit cleaner, dropped the use of strnicmp since we did not really need
the comparison to be case insensitive, and formatted the input string
from sysfs so a comparison to IFNAMSIZ could be used.
I also discovered an error in bonding_store_active_slave that would
modify bond->primary_slave rather than bond->curr_active_slave before
forcing the bonding driver to choose a new active slave.
V3: Actually sending the proper patch....
Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
Reported-by: Takuma Umeya <tumeya@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 550fd08c2c ]
After the last patch, we are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs). There are a handful of drivers that violate this assumption, of
course, and need to be fixed up. This patch identifies those drivers, and marks
them as not being able to support the safe transmission of skbs by clearing the
IFF_TX_SKB_SHARING flag in priv_flags.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Karsten Keil <isdn@linux-pingi.de>
CC: "David S. Miller" <davem@davemloft.net>
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: Patrick McHardy <kaber@trash.net>
CC: Krzysztof Halasa <khc@pm.waw.pl>
CC: "John W. Linville" <linville@tuxdriver.com>
CC: Greg Kroah-Hartman <gregkh@suse.de>
CC: Marcel Holtmann <marcel@holtmann.org>
CC: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit d887331506 ]
Pktgen attempts to transmit shared skbs to net devices, which can't be used by
some drivers as they keep state information in skbs. This patch adds a flag
marking drivers as being able to handle shared skbs in their tx path. Drivers
are defaulted to being unable to do so, but calling ether_setup enables this
flag, as 90% of the drivers calling ether_setup touch real hardware and can
handle shared skbs. A subsequent patch will audit drivers to ensure that the
flag is set properly.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reported-by: Jiri Pirko <jpirko@redhat.com>
CC: Robert Olsson <robert.olsson@its.uu.se>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Alexey Dobriyan <adobriyan@gmail.com>
CC: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit b76d0789c9 ]
If a device event generates gratuitous ARP messages, only the primary
address is used for sending. This patch iterates through the whole
list. Tested with a two-IP-address configuration on a bonding interface.
Signed-off-by: Zoltan Kiss <schaman@sch.bme.hu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 956837f7c9 ]
Convert array index from the loop bound to the loop index.
A simplified version of the semantic patch that fixes this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression e1,e2,ar;
@@
for(e1 = 0; e1 < e2; e1++) { <...
ar[
- e2
+ e1
]
...> }
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit a1889c0d20 ]
Convert array index from the loop bound to the loop index.
A simplified version of the semantic patch that fixes this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression e1,e2,ar;
@@
for(e1 = 0; e1 < e2; e1++) { <...
ar[
- e2
+ e1
]
...> }
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit d547f727df ]
compare_keys and ip_route_input_common rely on
rt_oif to distinguish input routes from output routes
with the same key values. But sometimes an input route
(keyed by iif != 0) shares a hash chain with output
routes (keyed by orig_oif=0). The problem is visible when running
with a small number of rhash_entries.
Fix them to use rt_route_iif instead. This way an
input route cannot be returned to users that request an
output route.
The patch fixes the ip_rt_bug errors that were
reported in ip_local_out context, mostly for 255.255.255.255
destinations.
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit d9be4f7a6f ]
Because the ip fragment offset field counts 8-byte chunks, ip
fragments other than the last must contain a multiple of 8 bytes of
payload. ip_ufo_append_data wasn't respecting this constraint and,
depending on the MTU and ip option sizes, could create malformed
non-final fragments.
Google-Bug-Id: 5009328
Signed-off-by: Bill Sommerfeld <wsommerfeld@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 415b3334a2 ]
icmp_route_lookup() uses the wrong flow parameters if the reverse
session route lookup isn't used.
So do not commit to the re-decoded flow until we actually make a
final decision to use a real route saved in 'rt2'.
Reported-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Backport of upstream commit 87c48fa3b4 ]
Fernando Gont reported that the current IPv6 fragment identification
generation is not secure, because it uses a very predictable system-wide
generator, allowing various attacks.
IPv4 uses inetpeer cache to address this problem and to get good
performance. We'll use this mechanism when IPv6 inetpeer is stable
enough in linux-3.1
For the time being, we use jhash on destination address to provide less
predictable identifications. Also remove a spinlock and use cmpxchg() to
get better SMP performance.
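One way to combine the two ingredients named above (jhash on the
destination and a lock-free cmpxchg update) looks roughly like this; it is
a simplified sketch, not the exact net/ipv6 change:

	static u32 ident;		/* shared state, illustration only */
	u32 hash, old, new;

	hash = jhash2((const u32 *)&daddr, 4, hashrnd);	/* 4 x 32-bit words */
	do {
		old = ident;
		new = old + hash;	/* per-destination, harder to predict */
	} while (cmpxchg(&ident, old, new) != old);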
Reported-by: Fernando Gont <fernando@gont.com.ar>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 824818b148 upstream.
The Focusrite Scarlett 18i6 USB has them that way, which is probably a
bug. Anyway, the driver should simply ignore this fact.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Reported-by: Nicolai Krakowiak <nicolai.krakowiak@gmail.com>
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1faa5d07a9 upstream.
When creating the mixers for a USB audio device, the current code looks
at the host interface stored in mixer->chip->ctrl_if. Change this and
rather keep a local pointer to the interface that was given when
snd_usb_create_mixer() was called.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Reported-by: Nicolai Krakowiak <nicolai.krakowiak@gmail.com>
Reported-by: Lean-Yves LENHOF <jean-yves@lenhof.eu.org>
Acked-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 151798f872 upstream.
Cache handling in this driver is broken. The chip has 16-bit registers, yet the
register numbers also increase by 2 per register, i.e. there are only
even-numbered registers. The cache in this driver, though, simply increments
register numbers, so it does need some mapping as seen in
sgtl5000_restore_regs(), note the '>> 1':
snd_soc_write(codec, SGTL5000_CHIP_LINREG_CTRL,
cache[SGTL5000_CHIP_LINREG_CTRL >> 1]);
That, of course, won't work with snd_soc_update_bits(). (Thus, we won't even
notice the missing register 0x1c in the default regs, which shifted all following
registers to wrong values.) Noticed on the MX28EVK where enabling the regulators
simply locked up the chip.
Refactor the routines and use a properly sized default_regs array which matches
the register layout of the underlying chip, i.e. create a truly flat cache.
This also saves some code which should make up for the bigger array a little.
When soc-core someday gains another cache type that handles a step size,
this conversion will also ease the transition.
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Tested-by: Dong Aisheng <b29396@freescale.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 49979d091d upstream.
The code was completely broken, and should never have been sent
to the kernel. That's what happens when you write code without
hardware to test it.
Signed-off-by: Corentin Chary <corentin.chary@gmail.com>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Computers have become a lot faster since we compromised on the
partial MD4 hash which we use currently for performance reasons.
MD5 is a much safer choice, and is in line with both RFC1948 and
other ISS generators (OpenBSD, Solaris, etc.).
Furthermore, only having 24-bits of the sequence number be truly
unpredictable is a very serious limitation. So the periodic
regeneration and 8-bit counter have been removed. We compute and
use a full 32-bit sequence number.
For ipv6, DCCP was found to use a 32-bit truncated initial sequence
number (it needs 43-bits) and that is fixed here as well.
Reported-by: Dan Kaminsky <dan@doxpara.com>
Tested-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We are going to use this for TCP/IP sequence number and fragment ID
generation.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 40ee3381dd upstream.
drm_helper_hpd_irq_event queues another work proc to go and deliver
the user-space event, and that function also wants to hold the config
mutex, so we shouldn't hold the mutex across the
drm_helper_hpd_irq_event call.
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a65e34c79c upstream.
Hotplug detection is a mode setting operation and must hold the
struct_mutex or risk colliding with other mode setting operations.
In particular, the display port hotplug function attempts to re-train
the link if the monitor is supposed to be running when plugged back
in. If that happens while mode setting is underway, the link will get
scrambled, leaving it in an inconsistent state.
This is a special case -- usually the driver mode setting entry points
are covered by the upper level DRM code, but in this case the function
is invoked as a work function not under the control of DRM.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f3234706a7 upstream.
Physically-addressed hardware status pages are initialized early in
the driver load process by i915_init_phys_hws. For UMS environments,
the ring structure is not initialized until the X server starts. At
that point, the entire ring structure is re-initialized with all new
values. Any values set in the ring structure (including
ring->status_page.page_addr) will be lost when the ring is
re-initialized.
This patch moves the initialization of the status_page.page_addr value
to intel_render_ring_init_dri.
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 842d452985 upstream.
Because of a typo, calling ioctl with DRM_IOCTL_I915_OVERLAY_PUT_IMAGE
is broken if the macro is used directly. When using libdrm the bug is
not hit, since libdrm handles the ioctl encoding internally.
The typo also leads to the .cmd and .cmd_drv fields of the drm_ioctl
structure for DRM_I915_OVERLAY_PUT_IMAGE having inconsistent content.
Signed-off-by: Ole Henrik Jahren <olehenja@alumni.ntnu.no>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 302983e905 upstream.
Consider a 1600x900 panel, upscaling a 1360x768 mode, full-aspect. The
old math would give you:
scaled_width = 1600 * 768; /* 1228800 */
scaled_height = 1360 * 900; /* 1224000 */
if (scaled_width > scaled_height) { /* pillarbox, and true */
width = 1224000 / 768; /* int(1593.75) = 1593 */
x = (1600 - 1593 + 1) / 2; /* 4 */
y = 0;
height = 768;
} /* ... */
This is broken. The total width of scanout would then be 1593 + 4 + 4,
or 1601, which is wider than the panel itself. The hardware very
dutifully implements this, and you end up with a black 45° diagonal from
the top-left corner to the bottom edge of the screen. It's a cool
effect and all, but not what you wanted. Similar things happen for the
letterbox case.
The problem is that you have an integer number of pixels, which means
it's usually impossible to upscale equally on both axes. 1360/768 is
1.7708, 1600/900 is 1.7777. Since we're constrained on the one axis,
the other one wants to come out as an even number of pixels (the panel
is almost certainly even on both axes, and the x/y offsets will be
applied on both sides). In the math above, if 'width' comes out even,
rounding down is correct; if it's odd, you'd rather round up. So just
increment width/height in those cases.
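With the example above, the adjustment looks roughly like this (variable
names approximate the i915 panel-fitting code; a sketch, not the exact
patch):

	width = scaled_height / mode->vdisplay;		/* 1224000 / 768 = 1593 */
	if (width & 1)
		width++;				/* round up to 1594 */
	x = (adjusted_mode->hdisplay - width + 1) / 2;	/* (1600-1594+1)/2 = 3 */
	/* total: 1594 + 3 + 3 = 1600, exactly the panel width */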
Tested on a Lenovo T500 (Ironlake).
Signed-off-by: Adam Jackson <ajax@redhat.com>
Tested-By: Daniel Manrique <daniel.manrique@canonical.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38851
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d522d9cc5b upstream.
Log PCI subsystem vendor and subsystem device ID in addition to
PCI vendor and device ID during kernel mode initialisation. This helps
to better identify radeon devices of third-party vendors, e. g. for
bug analysis.
Tested for kernel 2.6.35, 2.6.38 and 3.0 on Asus M2A-VM HDMI board
Signed-off-by: Thomas Reim <reimth@gmail.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Acked-by: Stephen Michaels <Stephen.Micheals@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a81b31e9fc upstream.
The ECS A740GM-M with ATI RADEON 2100 sends data to the i2c bus
for a DVI connector that is not implemented/present on the board.
Fix by applying extended DDC probing for this connector.
Requires [PATCH] drm/radeon: Extended DDC Probing for Connectors
with Improperly Wired DDC Lines
Tested for kernel 2.6.38 on Asus ECS A740GM-M board
BugLink: http://bugs.launchpad.net/bugs/810926
Signed-off-by: Thomas Reim <reimth@gmail.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Acked-by: Stephen Michaels <Stephen.Micheals@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e384fab8c6 upstream.
Some integrated ATI Radeon chipset implementations with add-on HDMI card
(e. g. Asus M2A-VM HDMI) indicate the availability of a DDC even
when the add-on card is not plugged in or HDMI is disabled in BIOS setup.
In this case, drm_get_edid() and drm_edid_block_valid() periodically
dump data and kernel errors into system log files and onto terminals.
For these connectors DDC probing is extended by a check for a correct
EDID header. Only if a valid EDID header is also found will the
(HDMI or DVI) connector be used by the Radeon driver. This prevents
the kernel driver from uselessly flooding logs and terminal sessions with
EDID dumps and error messages.
This patch adds a flag 'requires_extended_probe' to the radeon_connector
structure. In function radeon_connector_needs_extended_probe() this flag
can be set on a chipset family/vendor/connector type specific basis.
In addition, function radeon_ddc_probe() has been adapted to perform
extended DDC probing if required by the connector's flag.
Requires function drm_edid_header_is_valid() in DRM module provided by
[PATCH] drm: Separate EDID Header Check from EDID Block Check.
Tested for kernel 2.6.35, 2.6.38 and 3.0 on Asus M2A-VM HDMI board
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=668196
BugLink: http://bugs.launchpad.net/bugs/7228066
Signed-off-by: Thomas Reim <reimth@gmail.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Acked-by: Stephen Michaels <Stephen.Micheals@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 051963d483 upstream.
Provides function drm_edid_header_is_valid() for EDID header check
and replaces EDID header check part of function drm_edid_block_valid()
by a call of drm_edid_header_is_valid().
This is a prerequisite to extend DDC probing, e. g. in function
radeon_ddc_probe() for Radeon devices, by a central EDID header check.
Tested for kernel 2.6.35, 2.6.38 and 3.0
Signed-off-by: Thomas Reim <reimth@gmail.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Acked-by: Stephen Michaels <Stephen.Micheals@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c2419b4a47 upstream.
Get the information about the VGA console hardware from Xen, and put
it into the form the bootloader normally generates, so that the rest
of the kernel can deal with VGA as usual.
[ Impact: make VGA console work in dom0 ]
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
[v1: Rebased on 2.6.39]
[v2: Removed incorrect comments and fixed compile warnings]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c71d8ebe7a upstream.
The sendmmsg() introduced by commit 228e548e "net: Add sendmmsg socket system
call" is capable of sending to multiple different destination addresses.
SMACK uses the destination's address for checking sendmsg() permission.
However, security_socket_sendmsg() is called for only once even if multiple
different destination addresses are passed to sendmmsg().
Therefore, we need to call security_socket_sendmsg() for each destination
address rather than only the first destination address.
Since calling security_socket_sendmsg() every time when only a single destination
address is passed to sendmmsg() is a waste of time, omit calling
security_socket_sendmsg() unless the destination address of the current datagram
differs from that of the previous one.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 98382f419f upstream.
To limit the amount of time we can spend in sendmmsg, cap the
number of elements to UIO_MAXIOV (currently 1024).
For error handling an application using sendmmsg needs to retry at
the first unsent message, so capping is simpler and requires less
application logic than returning EINVAL.
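The cap itself is a one-line clamp before the send loop (sketch):

	if (vlen > UIO_MAXIOV)
		vlen = UIO_MAXIOV;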
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 728ffb86f1 upstream.
sendmmsg uses a similar error return strategy as recvmmsg but it
turns out to be a confusing way to communicate errors.
The current code stores the error code away and returns it on the next
sendmmsg call. This means a call with completely valid arguments could
get an error from a previous call.
Change things so we only return an error if no datagrams could be sent.
If less than the requested number of messages were sent, the application
must retry starting at the first failed one and if the problem is
persistent the error will be returned.
This matches the behaviour of other syscalls like read/write - it
is not an error if less than the requested number of elements are sent.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d4930086bd upstream.
We receive many bug reports about system hangs during suspend/resume
when the ath9k driver is in use. Adrian Chadd remarked that this problem
happens on systems that have ASPM disabled.
To avoid hitting the bug, skip the ->config_pci_powersave magic if the PCIe
downstream port device, which the ath9k device is connected to, has ASPM
disabled.
Bug was introduced by:
commit 53bc7aa08b
Author: Vivek Natarajan <vnatarajan@atheros.com>
Date: Mon Apr 5 14:48:04 2010 +0530
ath9k: Add support for newer AR9285 chipsets.
Patch should address:
https://bugzilla.kernel.org/show_bug.cgi?id=37462
https://bugzilla.kernel.org/show_bug.cgi?id=37082
https://bugzilla.redhat.com/show_bug.cgi?id=697157
however I did not receive confirmation about that, except from Camilo
Mesias, whose system stopped hanging regularly with this patch (it still
hangs from time to time, but that is probably some other bug).
Tested-by: Camilo Mesias <camilo@mesias.co.uk>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c1227340ca upstream.
With an uninitialized chainmask, the per-channel power will only contain
the power limits for a single chain instead of the combined tx power.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 17e859a899 upstream.
If the tx power settings were deferred during a scan or a channel change, we
have to apply them during commit rxon. Fix this problem on 3945 (4965 already
has this fix).
Optimize the code to apply the tx settings only when the tx power actually
changed.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b6b67df3f2 upstream.
This driver uses information from the self member of the pci_bus struct to
get information regarding the bridge to which the PCIe device is attached.
Unfortunately, this member is not established on all architectures, which
leads to a kernel oops.
Skipping the entire block that uses the self member to determine the bridge
vendor will only affect RTL8192DE devices as that driver sets the ASPM support
flag differently when the bridge vendor is Intel. If the self member is
available, there is no functional change.
This patch fixes Bugzilla No. 40212.
Reported-by: Hubert Liao <liao.hubertt@gmail.com>
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 548c210fbf upstream.
The return type of __atomic64_add_return should be s64 or long, not
int. This fixes the atomic64 test failure that I previously reported.
Signed-off-by: John David Anglin <dave.anglin@nrc-cnrc.gc.ca>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d9ba5fe76d upstream.
Implements futex op support and makes futex cmpxchg atomic.
Tested on 64-bit SMP kernel running on 2 x PA8700s.
[jejb: checkpatch fixes]
Signed-off-by: Carlos O'Donell <carlos@systemhalted.org>
Tested-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9ea71503a8 upstream.
commit 7485d0d375 (futexes: Remove rw
parameter from get_futex_key()) in 2.6.33 fixed two problems: First, it
prevented a loop when encountering a ZERO_PAGE. Second, it fixed RW
MAP_PRIVATE futex operations by forcing the COW to occur through an
unconditional write access in get_user_pages_fast() to get
the page. The commit also introduced a user-mode regression in that it
broke futex operations on read-only memory maps. For example, this
breaks workloads that have one or more reader processes doing a
FUTEX_WAIT on a futex within a read-only shared file mapping, and a
writer process that has a writable mapping issuing the FUTEX_WAKE.
This fixes the regression for valid futex operations on RO mappings by
trying a RO get_user_pages_fast() when the RW get_user_pages_fast()
fails. This change makes it necessary to also check for invalid use
cases, such as anonymous RO mappings (which can never change) and the
ZERO_PAGE which the commit referenced above was written to address.
This patch does restore the original behavior with RO MAP_PRIVATE
mappings, which have inherent user-mode usage problems and don't really
make sense. With this patch performing a FUTEX_WAIT within a RO
MAP_PRIVATE mapping will be successfully woken provided another process
updates the region of the underlying mapped file. However, the mmap()
man page states that for a MAP_PRIVATE mapping:
It is unspecified whether changes made to the file after
the mmap() call are visible in the mapped region.
So user-mode users attempting to use futex operations on RO MAP_PRIVATE
mappings are depending on unspecified behavior. Additionally a
RO MAP_PRIVATE mapping could fail to wake up in the following case.
Thread-A: call futex(FUTEX_WAIT, memory-region-A).
get_futex_key() return inode based key.
sleep on the key
Thread-B: call mprotect(PROT_READ|PROT_WRITE, memory-region-A)
Thread-B: write memory-region-A.
COW happens. This process's memory-region-A becomes backed by a
newly COWed private (ie PageAnon=1) page.
Thread-B: call futex(FUTEX_WAKE, memory-region-A).
get_futex_key() return mm based key.
IOW, we fail to wake up Thread-A.
Once again doing something like this is just silly and users who do
something like this get what they deserve.
While RO MAP_PRIVATE mappings are nonsensical, checking for a private
mapping requires walking the vmas and was deemed too costly to avoid a
userspace hang.
This patch is based on Peter Zijlstra's initial patch, with modifications to
only allow RO mappings for futex operations that need VERIFY_READ access.
Reported-by: David Oliver <david@rgmadvisors.com>
Signed-off-by: Shawn Bohrer <sbohrer@rgmadvisors.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: peterz@infradead.org
Cc: eric.dumazet@gmail.com
Cc: zvonler@rgmadvisors.com
Cc: hughd@google.com
Link: http://lkml.kernel.org/r/1309450892-30676-1-git-send-email-sbohrer@rgmadvisors.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d4969213f9 upstream.
Fix this error:
kernel/fork.c:267: error: implicit declaration of function 'alloc_thread_info_node'
This is due to renaming alloc_thread_info() to alloc_thread_info_node().
[akpm@linux-foundation.org: coding-style fixes]
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 286f367dad upstream.
Avoid dereferencing a NULL pointer if the number of feature arguments
supplied is fewer than indicated.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 762a80d9fc upstream.
This patch makes dm-snapshot flush disk cache when writing metadata for
merging snapshot.
Without cache flushing the disk may reorder metadata write and other
data writes and there is a possibility of data corruption in case of
power fault.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb91bc7bac upstream.
For normal kernel pages, CPU cache is synchronized by the dma layer.
However, this is not done for pages allocated with vmalloc. If we do I/O
to/from vmallocated pages, we must synchronize CPU cache explicitly.
Prior to doing I/O on vmallocated page we must call
flush_kernel_vmap_range to flush dirty cache on the virtual address.
After finished read we must call invalidate_kernel_vmap_range to
invalidate cache on the virtual address, so that accesses to the virtual
address return newly read data and not stale data from CPU cache.
This patch fixes metadata corruption on dm-snapshots on PA-RISC and
possibly other architectures with caches indexed by virtual address.
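As an illustration of the calls involved (a minimal sketch, not the actual
dm-specific hunk; buf and len are placeholders):
	void *buf = vmalloc(len);
	flush_kernel_vmap_range(buf, len);      /* flush dirty CPU cache before the I/O */
	/* ... submit the read and wait for completion ... */
	invalidate_kernel_vmap_range(buf, len); /* drop stale cache lines after the read */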
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ca9380fd68 upstream.
Convert array index from the loop bound to the loop index.
A simplified version of the semantic patch that fixes this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression e1,e2,ar;
@@
for(e1 = 0; e1 < e2; e1++) { <...
ar[
- e2
+ e1
]
...> }
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bea1906620 upstream.
Fix the usage of mod_timer() and make the driver usable. mod_timer() must
be called with an absolute timeout in jiffies. The old implementation
used a relative timeout thus the hardware watchdog was never triggered.
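For reference, mod_timer() expects an absolute expiry, e.g. (names illustrative):
	mod_timer(&wdt_timer, jiffies + msecs_to_jiffies(timeout_ms));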
Signed-off-by: David Engraf <david.engraf@sysgo.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Wim Van sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1923703991 upstream.
Depending upon the order of userspace/kernel during the
mount process, this can result in a hang without the
_all version of the completion.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c027a474a6 upstream.
exit_mm() sets ->mm == NULL then it does mmput()->exit_mmap() which
frees the memory.
However select_bad_process() checks ->mm != NULL before TIF_MEMDIE,
so it continues to kill other tasks even if we have the oom-killed
task freeing its memory.
Change select_bad_process() to check ->mm after TIF_MEMDIE, but skip
the tasks which have already passed exit_notify() to ensure a zombie
with TIF_MEMDIE set can't block oom-killer. Alternatively we could
probably clear TIF_MEMDIE after exit_mmap().
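Roughly, the checks inside the task scan become (a simplified sketch; the
abort convention shown is illustrative):
	if (p->exit_state)
		continue;               /* already past exit_notify(), skip it */
	if (test_tsk_thread_flag(p, TIF_MEMDIE))
		return ERR_PTR(-1UL);   /* a kill is already in flight, abort the scan */
	if (!p->mm)
		continue;               /* only now is a NULL ->mm a reason to skip */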
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 25e75dff51 upstream.
AppArmor is masking the capabilities returned by capget against the
capabilities mask in the profile. This is wrong, in complain mode the
profile has effectively all capabilities, as the profile restrictions are
not being enforced, merely tested against to determine if an access is
known by the profile.
This can result in wrong behavior in security-conscious applications
like sshd, which examine their capability set and change their behavior
accordingly. In this case, because of the masked capability set being
returned, sshd fails due to DAC checks even when the profile is in complain
mode.
Kernels affected: 2.6.36 - 3.0.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 04fdc099f9 upstream.
The pointer returned from tracehook_tracer_task() is only valid inside
the rcu_read_lock. However the tracer pointer obtained is being passed
to aa_may_ptrace outside of the rcu_read_lock critical section.
Move the aa_may_ptrace test into the rcu_read_lock critical section to
fix this.
Kernels affected: 2.6.36 - 3.0
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d694ad62bf upstream.
If a semaphore array is removed and in parallel a sleeping task is woken
up (signal or timeout, does not matter), then the woken up task does not
wait until wake_up_sem_queue_do() is completed. This will cause crashes,
because wake_up_sem_queue_do() will read from a stale pointer.
The fix is simple: Regardless of anything, always call get_queue_result().
This function waits until wake_up_sem_queue_do() has finished its task.
Addresses https://bugzilla.kernel.org/show_bug.cgi?id=27142
Reported-by: Yuriy Yevtukhov <yuriy@ucoz.com>
Reported-by: Harald Laabs <kernel@dasr.de>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8c2381af0d upstream.
Currently, the hvc_console_print() function drops console output if the
hvc backend's put_chars() returns 0. This patch changes this behavior
to allow a retry through returning -EAGAIN.
This change also affects the hvc_push() function. Both functions are
changed to handle -EAGAIN and to retry the put_chars() operation.
If a hvc backend returns -EAGAIN, the retry handling differs:
- hvc_console_print() spins to write the complete console output.
- hvc_push() behaves the same way as for returning 0.
Now hvc backends can indirectly control how console output is
handled by the hvc console layer.
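Schematically (simplified, not the exact hvc_console.c code), the console path
now keeps retrying while the backend reports -EAGAIN:
	do {
		n = cons_ops[index]->put_chars(vtermnos[index], c, i);
	} while (n == -EAGAIN);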
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 51d3302142 upstream.
Return -EAGAIN when we get H_BUSY back from the hypervisor. This
makes the hvc console driver retry, avoiding dropped printks.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f2eb3cdf14 upstream.
Kconfig allows enabling console support for the SC26xx driver even when
it's configured as a module, resulting in a
ERROR: "uart_console_device" [drivers/tty/serial/sc26xx.ko] undefined!
modpost error since the driver was merged in
eea63e0e8a [SC26XX: New serial driver for
SC2681 uarts] in 2.6.25. Fixed by only allowing console support to be
enabled if the driver is built in.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-serial@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Acked-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5568181f18 upstream.
Commit 4539c24fe4 "tty/serial: Add
explicit PORT_TEGRA type" introduced separate flags describing the need
for IER bits UUE and RTOIE. Both bits are required for the XSCALE port
type. While that patch updated uart_config[] as required, the auto-probing
code wasn't updated to set the RTOIE flag when an XSCALE port type was
detected. This caused such ports to stop working. This patch rectifies
that.
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 108b6a7846 upstream.
Commit 22a668d7c3 ("memcg: fix behavior under memory.limit equals to
memsw.limit") introduced "memsw_is_minimum" flag, which becomes true
when mem_limit == memsw_limit. The flag is checked at the beginning of
reclaim, and "noswap" is set if the flag is true, because using swap is
meaningless in this case.
This works well in most cases, but when we try to shrink mem_limit,
which is the same as memsw_limit now, we might fail to shrink mem_limit
because swap isn't used.
This patch fixes this behavior by:
- checking MEM_CGROUP_RECLAIM_SHRINK at the beginning of reclaim
- if it is set, not setting the "noswap" flag even if memsw_is_minimum is true.
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <bsingharora@gmail.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a203c2aa4c upstream.
At the beginning of wiphy_update_regulatory() a check is performed
whether the request is to be ignored. Then the request is sent to
the driver nevertheless. This happens even if last_request points
to NULL, leading to a crash in the driver:
[<bf01d864>] (lbs_set_11d_domain_info+0x28/0x1e4 [libertas]) from [<c03b714c>] (wiphy_update_regulatory+0x4d0/0x4f4)
[<c03b714c>] (wiphy_update_regulatory+0x4d0/0x4f4) from [<c03b4008>] (wiphy_register+0x354/0x420)
[<c03b4008>] (wiphy_register+0x354/0x420) from [<bf01b17c>] (lbs_cfg_register+0x80/0x164 [libertas])
[<bf01b17c>] (lbs_cfg_register+0x80/0x164 [libertas]) from [<bf020e64>] (lbs_start_card+0x20/0x88 [libertas])
[<bf020e64>] (lbs_start_card+0x20/0x88 [libertas]) from [<bf02cbd8>] (if_sdio_probe+0x898/0x9c0 [libertas_sdio])
Fix this by returning early. Also remove the out: label as it is
no longer needed.
Signed-off-by: Sven Neumann <s.neumann@raumfeld.com>
Cc: linux-wireless@vger.kernel.org
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Daniel Mack <daniel@zonque.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e04f5f7e42 upstream.
This patch (as1480) fixes a rather obscure bug in ehci-hcd. The
qh_update() routine needs to know the number and direction of the
endpoint corresponding to its QH argument. The number can be taken
directly from the QH data structure, but the direction isn't stored
there. The direction is taken instead from the first qTD linked to
the QH.
However, it turns out that for interrupt transfers, qh_update() gets
called before the qTDs are linked to the QH. As a result, qh_update()
computes a bogus direction value, which messes up the endpoint toggle
handling. Under the right combination of circumstances this causes
usb_reset_endpoint() not to work correctly, which causes packets to be
dropped and communications to fail.
Now, it's silly for the QH structure not to have direct access to all
the descriptor information for the corresponding endpoint. Ultimately
it may get a pointer to the usb_host_endpoint structure; for now,
adding a copy of the direction flag solves the immediate problem.
This allows the Spyder2 color-calibration system (a low-speed USB
device that sends all its interrupt data packets with the toggle set
to 0 and hence requires constant use of usb_reset_endpoint) to work
when connected through a high-speed hub. Thanks to Graeme Gill for
supplying the hardware that allowed me to track down this bug.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Graeme Gill <graeme@argyllcms.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 81463c1d70 upstream.
The MAX4967 USB power supply chip we use on our boards signals over-current when
power is not enabled; once it's enabled, the over-current signal returns to normal.
That unfortunately caused an endless stream of "over-current change on port"
messages. The EHCI root hub code reacts to every over-current signal change
by powering off the port -- such a change event is generated the moment the
port power is enabled, so once enabled the power is immediately cut off.
I think we should only cut off power when we're seeing the active over-current
signal, so I'm adding such a check to that code. I also think that the fact that
we've cut off the port power should be reflected in the result of a GetPortStatus
request immediately, hence I'm adding a PORTSCn register readback after write...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f086ced171 upstream.
The FCS could be GSM0_SOF, which will break the state machine...
[This byte isn't quoted in any way so a SOF here doesn't imply an error
occurred.]
Signed-off-by: Alek Du <alek.du@intel.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
[Trivial but best backported once its in 3.1rc I think]
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 293eb1e777 upstream.
If an inode's mode permits opening /proc/PID/io and the resulting file
descriptor is kept across execve() of a setuid or similar binary, the
ptrace_may_access() check tries to prevent using this fd against the
task with escalated privileges.
Unfortunately, there is a race in the check against execve(). If
execve() is processed after the ptrace check, but before the actual io
information gathering, io statistics will be gathered from the
privileged process. At least in theory this might lead to gathering
sensitive information (like ssh/ftp password length) that wouldn't be
available otherwise.
Holding task->signal->cred_guard_mutex while gathering the io
information should protect against the race.
The order of locking is similar to the one inside of ptrace_attach():
first goes cred_guard_mutex, then lock_task_sighand().
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c0308066c upstream.
If the directory contents change, then we have to accept that the
file->f_pos value may shrink if we do a 'search-by-cookie'. In that
case, we should turn off the loop detection and let the NFS client
try to recover.
The patch also fixes a second loop detection bug by ensuring
that after turning on the ctx->duped flag, we read at least one new
cookie into ctx->dir_cookie before attempting to match with
ctx->dup_cookie.
Reported-by: Petr Vandrovec <petr@vandrovec.name>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ed1e6211a0 upstream.
nfs_mark_return_delegation() is usually called without any locking, and
so it is not safe to dereference delegation->inode. Since the inode is
only used to discover the nfs_client anyway, it makes more sense to
have the callers pass a valid pointer to the nfs_server as a parameter.
Reported-by: Ian Kent <raven@themaw.net>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ebc63e531c upstream.
After commit 3262c816a3 "[PATCH] knfsd:
split svc_serv into pools", svc_delete_xprt (then svc_delete_socket) no
longer removed its xpt_ready (then sk_ready) field from whatever list it
was on, noting that there was no point since the whole list was about to
be destroyed anyway.
That was mostly true, but it forgot that a few svc_xprt_enqueue()'s might
still be hanging around playing with the about-to-be-destroyed list, and
could get themselves into trouble writing to freed memory if we left
this xprt on the list after freeing it.
(This is actually functionally identical to a patch made first by Ben
Greear, but with more comments.)
Cc: gnb@fmeh.org
Reported-by: Ben Greear <greearb@candelatech.com>
Tested-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f197c27196 upstream.
Stateids hold a read reference for a read open, a write reference for a
write open, and an additional one of each for each read+write open. The
latter wasn't getting put on a downgrade, so something like:
open RW
open R
downgrade to R
was resulting in a file leak.
Also fix an imbalance in an error path.
Regression from 7d94784293 "nfsd4: fix
downgrade/lock logic".
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 499f3edc23 upstream.
Without this, for example,
open read
open read+write
close
will result in a struct file leak.
Regression from 7d94784293 "nfsd4: fix
downgrade/lock logic".
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c12eaffdf upstream.
CLAIM_DELEGATE_CUR is used in response to a broken lease; allowing it
to break the lease and return EAGAIN leaves the client unable to make
progress in returning the delegation.
nfs4_get_vfs_file() now takes struct nfsd4_open for access to the
claim type, and calls nfsd_open() with NFSD_MAY_NOT_BREAK_LEASE when the
claim type is CLAIM_DELEGATE_CUR.
Signed-off-by: Casey Bodley <cbodley@citi.umich.edu>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b2987a5e05 upstream.
Fixes a regression caused by b5695d0463
Kernel keyring keys containing eCryptfs authentication tokens should not
be write locked when calling out to ecryptfsd to wrap and unwrap file
encryption keys. The eCryptfs kernel code can not hold the key's write
lock because ecryptfsd needs to request the key after receiving such a
request from the kernel.
Without this fix, all file opens and creates will timeout and fail when
using the eCryptfs PKI infrastructure. This is not an issue when using
passphrase-based mount keys, which is the most widely deployed eCryptfs
configuration.
Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
Acked-by: Roberto Sassu <roberto.sassu@polito.it>
Tested-by: Roberto Sassu <roberto.sassu@polito.it>
Tested-by: Alexis Hafner1 <haf@zurich.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 985ca0e626 upstream.
Make the inode mapping bdi consistent with the superblock bdi so that
dirty pages are flushed properly.
Signed-off-by: Thieu Le <thieule@chromium.org>
Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ad95c5e9bc upstream.
Block allocation is called from two places: ext3_get_blocks_handle() and
ext3_xattr_block_set(). These two callers are not necessarily synchronized
because xattr code holds only xattr_sem and i_mutex, and
ext3_get_blocks_handle() may hold only truncate_mutex when called from
writepage() path. Block reservation code does not expect two concurrent
allocations to happen to the same inode and thus assertions can be triggered
or reservation structure corruption can occur.
Fix the problem by taking truncate_mutex in xattr code to serialize
allocations.
CC: Sage Weil <sage@newdream.net>
Reported-by: Fyodor Ustinov <ufm@ufm.su>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 575a1d4bdf upstream.
Upon corrupted inode or disk failures, we may fail after we already
allocate some blocks from the inode or take some blocks from the
inode's preallocation list, but before we successfully insert the
corresponding extent to the extent tree. In this case, we should free
any allocated blocks and discard the inode's preallocated blocks
because the entries in the inode's preallocation list may be in an
inconsistent state.
Signed-off-by: Jiaying Zhang <jiayingz@google.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7132de744b upstream.
The current implementation of ext4_free_blocks() always calls
dquot_free_block(). This looks quite sensible in most cases: the blocks
to be freed are associated with the inode and were accounted in quota and
i_blocks some time ago.
However, there is a case where the blocks to be freed had not yet been
accounted for by the time ext4_free_blocks() is called:
1. delalloc is on, write_begin pre-allocated some space in quota
2. write-back happens, ext4 allocates some blocks in ext4_ext_map_blocks()
3. then ext4_ext_map_blocks() gets an error (e.g. ENOSPC) from
ext4_ext_insert_extent() and calls ext4_free_blocks().
In this scenario, ext4_free_blocks() calls dquot_free_block() which, in
turn, decrements i_blocks for blocks which were not accounted yet (due
to delalloc). After a clean umount, e2fsck reports something like:
> Inode 21, i_blocks is 5080, should be 5128. Fix<y>?
because i_blocks was erroneously decremented as explained above.
The patch fixes the problem by passing the new flag
EXT4_FREE_BLOCKS_NO_QUOT_UPDATE to ext4_free_blocks(), to request
that the dquot_free_block() call be skipped.
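The error path would then call something like (a sketch; the flag name is from
the text above, the other arguments are illustrative):
	ext4_free_blocks(handle, inode, NULL, newblock, allocated,
			 EXT4_FREE_BLOCKS_NO_QUOT_UPDATE);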
Signed-off-by: Maxim Patlasov <maxim.patlasov@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0d0138ebe2 upstream.
Prevent an arbitrary kernel read. Check the user pointer with access_ok()
before copying data in.
[akpm@linux-foundation.org: s/EIO/EFAULT/]
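A sketch of the kind of check described (pointer and size are illustrative):
	if (!access_ok(VERIFY_READ, uregs, sizeof(*uregs)))
		return -EFAULT;   /* reject arbitrary addresses before copying in */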
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com>
Cc: Christian Zankel <chris@zankel.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ccb6108f5b upstream.
Vito said:
: The system has many usb disks coming and going day to day, with their
: respective bdi's having min_ratio set to 1 when inserted. It works for
: some time until eventually min_ratio can no longer be set, even when the
: active set of bdi's seen in /sys/class/bdi/*/min_ratio doesn't add up to
: anywhere near 100.
:
: This then leads to an unrelated starvation problem caused by write-heavy
: fuse mounts being used atop the usb disks, a problem the min_ratio setting
: at the underlying devices bdi effectively prevents.
Fix this leakage by resetting the bdi min_ratio when unregistering the
BDI.
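Conceptually (a sketch; the exact placement in bdi_unregister() may differ):
	bdi_set_min_ratio(bdi, 0);   /* give the reserved share back on unregister */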
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Vito Caputo <lkml@pengaru.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2efaca927f upstream.
I haven't reproduced it myself but the fail scenario is that on such
machines (notably ARM and some embedded powerpc), if you manage to hit
that futex path on a writable page whose dirty bit has gone from the PTE,
you'll livelock inside the kernel from what I can tell.
It will go in a loop of trying the atomic access, failing, trying gup to
"fix it up", getting success from gup, going back to the atomic access,
failing again because dirty wasn't fixed etc...
So I think you essentially hang in the kernel.
The scenario is probably rare'ish because the affected architectures are
embedded and tend to not swap much (if at all), so we probably rarely hit
the case where dirty is missing or young is missing, but I think Shan has
a piece of SW that can reliably reproduce it using a shared writable
mapping & fork or something like that.
On archs which use SW tracking of dirty & young, a page without dirty is
effectively mapped read-only and a page without young is inaccessible via
the PTE.
Additionally, some architectures might lazily flush the TLB when relaxing
write protection (by doing only a local flush), and expect a fault to
invalidate the stale entry if it's still present on another processor.
The futex code assumes that if the "in_atomic()" access -EFAULT's, it can
"fix it up" by causing get_user_pages() which would then be equivalent to
taking the fault.
However that isn't the case. get_user_pages() will not call
handle_mm_fault() in the case where the PTE seems to have the right
permissions, regardless of the dirty and young state. It will eventually
update those bits ... in the struct page, but not in the PTE.
Additionally, it will not handle the lazy TLB flushing that can be
required by some architectures in the fault case.
Basically, gup is the wrong interface for the job. The patch provides a
more appropriate one which boils down to just calling handle_mm_fault()
since what we are trying to do is simulate a real page fault.
The futex code currently attempts to write to user memory within a
pagefault disabled section, and if that fails, tries to fix it up using
get_user_pages().
This doesn't work on archs where the dirty and young bits are maintained
by software, since they will gate access permission in the TLB, and will
not be updated by gup().
In addition, there's an expectation on some archs that a spurious write
fault triggers a local TLB flush, and that is missing from the picture as
well.
I decided that adding those "features" to gup() would be too much for this
already too complex function, and instead added a new simpler
fixup_user_fault() which is essentially a wrapper around handle_mm_fault()
which the futex code can call.
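The new helper's shape, and roughly how the futex fault path can use it (a
sketch based on the description above, not a verbatim copy of the patch):
	int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
			     unsigned long address, unsigned int fault_flags);
	static int fault_in_user_writeable(u32 __user *uaddr)
	{
		struct mm_struct *mm = current->mm;
		int ret;
		down_read(&mm->mmap_sem);
		ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
				       FAULT_FLAG_WRITE);
		up_read(&mm->mmap_sem);
		return ret < 0 ? ret : 0;
	}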
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix some nits Darren saw, fiddle comment layout]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reported-by: Shan Hai <haishan.bai@gmail.com>
Tested-by: Shan Hai <haishan.bai@gmail.com>
Cc: David Laight <David.Laight@ACULAB.COM>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Darren Hart <darren.hart@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 703f03c896 upstream.
As stated in drivers/mfd/cs5535-mfd.c, the mfd driver exposes the BARs
which then make the GPIO, MFGPT, ACPI, etc. all visible to the system.
So the dependencies of the MFGPT stuff have changed, and most people
expect Kconfig to bring in the necessary dependencies. Without them, the
module fails to load, and most people don't understand why because the
details of the rewrite aren't captured anywhere most people would know to
look.
This dependency needs to be reflected in Kconfig.
Signed-off-by: Philip A. Prindeville <philipp@redfish-solutions.com>
Acked-by: Alexandros C. Couloumbis <alex@ozo.com>
Acked-by: Andres Salomon <dilinger@queued.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 864d296cf9 upstream.
The function pci_enable_ari() may mistakenly set the downstream port
of a v1 PCIe switch in ARI Forwarding mode. This is a PCIe v2 feature,
and with an SR-IOV device on that switch port believing the switch above
is ARI capable it may attempt to use functions 8-255, translating into
invalid (non-zero) device numbers for that bus. This has been seen
to cause Completion Timeouts and general misbehaviour including hangs
and panics.
Acked-by: Don Dutile <ddutile@redhat.com>
Tested-by: Don Dutile <ddutile@redhat.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 81d6743985 upstream.
<linux/kernel.h> is needed for min_t. The old version
happened to work on x86 because <asm/unaligned.h>
indirectly includes <linux/kernel.h>, but it didn't
work on ARM.
<linux/kernel.h> includes <asm/byteorder.h> so it's
not necessary to include it explicitly anymore.
Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 40ee4dffff upstream.
The "enable" file for the event system can be removed when a module
is unloaded and the event system only has events from that module.
As the event system nr_events count goes to zero, it may be freed
if its ref_count is also set to zero.
Like the "filter" file, the "enable" file may be opened by a task and
referenced later, after a module has been unloaded and the events for
that event system have been removed.
Although the "filter" file referenced the event system structure,
the "enable" file only references a pointer to the event system
name. Since the name is freed when the event system is removed,
it is possible that an access to the "enable" file may reference
a freed pointer.
Update the "enable" file to use the subsystem_open() routine that
the "filter" file uses, to keep a reference to the event system
structure while the "enable" file is opened.
Reported-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e9dbfae53e upstream.
The event system is freed when its nr_events is set to zero. This happens
when a module created an event system and then later the module is
removed. Modules may share systems, so the system is allocated when
it is created and freed when the modules are unloaded and all the
events under the system are removed (nr_events set to zero).
The problem arises when a task opened the "filter" file for the
system. If the module is unloaded and it removed the last event for
that system, the system structure is freed. If the task that opened
the filter file accesses the "filter" file after the system has
been freed, the system will access an invalid pointer.
By adding a ref_count, and using it to keep track of what
is using the event system, we can free it after all users
are finished with the event system.
Reported-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 63f21a56f1 upstream.
The existing code is pretty ugly. How about we clean it up even more
like this?
From: Anton Blanchard <anton@samba.org>
We check for timeout expiry in the outer loop, but we also need to
check it in the inner loop or we can lock up forever waiting for a
CPU to hit real mode.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 050438ed5a upstream.
In kexec jump support, the jump back address is passed to the kexec'd
kernel via the function calling ABI, that is, the function call
return address is the jump back entry.
Furthermore, jump back entry == 0 should be used to signal that
the jump back or preserve context is not enabled in the original
kernel.
But in the current implementation the stack position used for the
function call return address is not cleared when context
preservation is disabled. The patch fixes this bug.
Reported-and-tested-by: Yin Kangkai <kangkai.yin@intel.com>
Signed-off-by: Huang Ying <ying.huang@intel.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1310607277-25029-1-git-send-email-ying.huang@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c48a8fb0d3 upstream.
Copying hp_pins and speaker_pins from line_out_pins may confuse the
parser, and it can lead to duplicated initializations for the same pin
with a wrong DAC assignment. The problem appears in 3.0 kernel code.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c81c6b356b upstream.
Commit dd203fa97b (ALSA: virtuoso: remove non-working controls on
Essence ST Deluxe) made it impossible to adjust the volume after the
driver initialized it to muted.
Ensure that those DACs that can be accessed with I2C are initialized
to the same volume that is the reset default of the DAC without I2C.
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5a96a899bb upstream.
DPEncoderService newer than 1.1 can't properly program the DP (display port)
link training. When facing such a version, use the DIGxEncoderControl method
instead. Fix DP link training on some R7XX.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fec62c368b upstream.
Most smartarrays tolerate it, but a few new ones don't.
Without this change some newer Smart Arrays will lock up
and i/o will grind to a halt.
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b5b515445f upstream.
There's a code path in pmcraid that can be reached via device ioctl that
causes all sorts of ugliness, including heap corruption or triggering the
OOM killer due to consecutive allocation of large numbers of pages.
First, the user can call pmcraid_chr_ioctl(), with a type
PMCRAID_PASSTHROUGH_IOCTL. This calls through to
pmcraid_ioctl_passthrough(). Next, a pmcraid_passthrough_ioctl_buffer
is copied in, and the request_size variable is set to
buffer->ioarcb.data_transfer_length, which is an arbitrary 32-bit
signed value provided by the user. If a negative value is provided
here, bad things can happen. For example,
pmcraid_build_passthrough_ioadls() is called with this request_size,
which immediately calls pmcraid_alloc_sglist() with a negative size.
The resulting math on allocating a scatter list can result in an
overflow in the kzalloc() call (if num_elem is 0, the sglist will be
smaller than expected), or, if num_elem is unexpectedly large, the
subsequent loop will call alloc_pages() repeatedly; a high number of
pages will be allocated and the OOM killer might be invoked.
It looks like preventing this value from being negative in
pmcraid_ioctl_passthrough() would be sufficient.
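A sketch of the suggested sanity check (using the field named above):
	request_size = buffer->ioarcb.data_transfer_length;
	if (request_size < 0)
		return -EINVAL;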
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bfe159a512 upstream.
USB surprise removal of sr is triggering an oops in
scsi_dispatch_command(). What seems to be happening is that USB is
hanging on to a queue reference until the last close of the upper
device, so the crash is caused by surprise removal of a mounted CD
followed by an attempted unmount.
The problem is that USB doesn't issue its final commands as part of
the SCSI teardown path, but on last close when the block queue is long
gone. The long term fix is probably to make sr do the teardown in the
same way as sd (so remove all the lower bits on ejection, but keep the
upper disk alive until last close of user space). However, the
current oops can be simply fixed by not allowing any commands to be
sent to a dead queue.
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2a350cab9d upstream.
Noticed that when the sysfs interface of the SCSI SES
driver was used to request a fault indication the LED
flashed but the buzzer didn't sound. So it was doing
what REQUEST IDENT (locate) should do.
Changelog:
- fix the setting of REQUEST FAULT for the device slot
and array device slot elements in the enclosure control
diagnostic page
- note the potentially defective code that reads the
FAULT SENSED and FAULT REQUESTED bits from the enclosure
status diagnostic page
The attached patch is against git/scsi-misc-2.6
Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 79b9677d88 upstream.
Some broken devices indicates that media has changed on every
GET_EVENT_STATUS_NOTIFICATION. This translates into MEDIA_CHANGE
uevent on every open() which lets udev run into a loop.
Verify GET_EVENT result against TUR and if it generates spurious
events for several times in a row, ignore the GET_EVENT events, and
trust only the TUR status.
This is the log of a USB stick with a (broken) fake CDROM drive:
scsi 5:0:0:0: Direct-Access SanDisk U3 Cruzer Micro 8.02 PQ: 0 ANSI: 0 CCS
sd 5:0:0:0: Attached scsi generic sg3 type 0
scsi 5:0:0:1: CD-ROM SanDisk U3 Cruzer Micro 8.02 PQ: 0 ANSI: 0
sd 5:0:0:0: [sdb] Attached SCSI removable disk
sr2: scsi3-mmc drive: 48x/48x tray
sr 5:0:0:1: Attached scsi CD-ROM sr2
sr 5:0:0:1: Attached scsi generic sg4 type 5
sr2: GET_EVENT and TUR disagree continuously, suppress GET_EVENT events
sd 5:0:0:0: [sdb] 31777279 512-byte logical blocks: (16.2 GB/15.1 GiB)
sd 5:0:0:0: [sdb] No Caching mode page present
sd 5:0:0:0: [sdb] Assuming drive cache: write through
sd 5:0:0:0: [sdb] No Caching mode page present
sd 5:0:0:0: [sdb] Assuming drive cache: write through
sdb: sdb1
-tj: Updated to consider only spurious GET_EVENT events among
different types of disagreement and allow using TUR for kernel
event polling after GET_EVENT is ignored.
Reported-By: Markus Rathgeb maggu2810@googlemail.com
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The below patch is for -stable only, upstream has a much larger patch
that contains the below hunk in commit a8b0ca17b8
Vince found that under certain circumstances software event overflows
go wrong and deadlock. Avoid trying to delete a timer from the timer
callback.
Reported-by: Vince Weaver <vweaver1@eecs.utk.edu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit abe48b1082 upstream.
Since 2.6.36 (23016bf0d2), Linux prints the existence of "epb" in /proc/cpuinfo.
Since 2.6.38 (d5532ee7b4), the x86_energy_perf_policy(8) utility has
been available in-tree to update MSR_IA32_ENERGY_PERF_BIAS.
However, the typical BIOS fails to initialize the MSR, presumably
because this is handled by high-volume shrink-wrap operating systems...
Linux distros, on the other hand, do not yet invoke x86_energy_perf_policy(8).
As a result, WSM-EP, SNB, and later hardware from Intel will run in its
default hardware power-on state (performance), which assumes that users
care for performance at all costs and not for energy efficiency.
While that is fine for performance benchmarks, the hardware's intended default
operating point is "normal" mode...
Initialize the MSR to the "normal" setting by default during kernel boot.
x86_energy_perf_policy(8) is available to change the default after boot,
should the user have a different preference.
Signed-off-by: Len Brown <len.brown@intel.com>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1107140051020.18606@x980
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eda3913bb7 upstream.
The perf_event_attr struct has two __u32's at the top and
they need to be swapped individually.
With this change I was able to analyze a perf.data collected in a
32-bit PPC VM on an x86 system. I tested both 32-bit and 64-bit
binaries for the Intel analysis side; both read the PPC perf.data
file correctly.
-v2:
- changed the existing perf_event__attr_swap() to swap only elements
of perf_event_attr and exported it for use in swapping the
attributes in the file header
- updated swap_ops used for processing events
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: acme@ghostprotocols.net
Cc: peterz@infradead.org
Cc: paulus@samba.org
Link: http://lkml.kernel.org/r/1310754849-12474-1-git-send-email-dsahern@gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 08a4a43fc4 upstream.
Builds for 32-bit perf binaries on a 64-bit host currently fail
with this error:
[...]
bench/../../../arch/x86/lib/memcpy_64.S: Assembler messages:
bench/../../../arch/x86/lib/memcpy_64.S:29: Error: bad register name `%rdi'
bench/../../../arch/x86/lib/memcpy_64.S:34: Error: invalid instruction suffix for `movs'
bench/../../../arch/x86/lib/memcpy_64.S:50: Error: bad register name `%rdi'
bench/../../../arch/x86/lib/memcpy_64.S:61: Error: bad register name `%rdi'
...
The problem is the detection of the host arch without considering passed in
flags. This change fixes 32-bit builds via:
make EXTRA_CFLAGS=-m32
and 64-bit builds still reference the memcpy_64.S.
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1310420304-21452-1-git-send-email-dsahern@gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 676b58c274 upstream.
A panic was observed when the device failed to resume properly
and there are no running interfaces. ieee80211_reconfig tries
to restart STA timers in unassociated state.
Signed-off-by: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1288aa4e80 upstream.
A typo causes routine rtl92cu_phy_rf6052_set_cck_txpower() to test the
same condition twice. The problem was found using cppcheck-1.49, and the
proper fix was verified against the pre-mac80211 version of the code.
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5911e963d3 upstream.
If expander discovery fails (sas_discover_expander()), remove the
expander from the port device list (sas_ex_discover_expander()),
before freeing it. Else the list is corrupted and, e.g., when we
attempt to send SMP commands to other devices, the kernel oopses.
Signed-off-by: Luben Tuikov <ltuikov@yahoo.com>
Reviewed-by: Jack Wang <jack_wang@usish.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fd1b6c4a69 upstream.
SCSI scanning of a channel:id:lun triplet in Linux works as follows
(function scsi_scan_target() in drivers/scsi/scsi_scan.c):
- If lun == SCAN_WILD_CARD, send a REPORT LUNS command to the target
and process the result.
- If lun != SCAN_WILD_CARD, send an INQUIRY command to the LUN
corresponding to the specified channel:id:lun triplet to verify
whether the LUN exists.
So a SCSI driver must either take the channel and target id values into
account in its queuecommand() function or it should declare that it only
supports one channel and one target id.
Currently the ib_srp driver does neither. As a result scanning the
SCSI bus via e.g. rescan-scsi-bus.sh causes many duplicate SCSI
devices to be created. For each 0:0:L device, several duplicates are
created with the same LUN number and with (C:I) != (0:0). Fix this by
declaring that the ib_srp driver only supports one channel and one
target id.
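A sketch of what that declaration looks like on the SCSI host (placement
illustrative):
	target_host->max_channel = 0;   /* scan channel 0 only */
	target_host->max_id      = 1;   /* scan target id 0 only */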
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 93b37905f7 upstream.
Between open(2) of a /dev/fw* and the first FW_CDEV_IOC_GET_INFO
ioctl(2) on it, the kernel already queues FW_CDEV_EVENT_BUS_RESET events
to be read(2) by the client. The get_info ioctl is practically always
issued right away after open, hence this condition only occurs if the
client opens during a bus reset, especially during a rapid series of bus
resets.
The problem with this condition is twofold:
- These bus reset events carry the (as yet undocumented) @closure
value of 0. But it is not the kernel's place to choose closures;
they are private to the client. E.g., this 0 value forced from the
kernel makes it unsafe for clients to dereference it as a pointer to
a closure object without NULL pointer check.
- It is impossible for clients to determine the relative order of bus
reset events from get_info ioctl(2) versus those from read(2),
except in one way: By comparison of closure values. Again, such a
procedure imposes complexity on clients and reduces freedom in use
of the bus reset closure.
So, change the ABI to suppress queuing of bus reset events before the
first FW_CDEV_IOC_GET_INFO ioctl was issued by the client.
Note, this ABI change cannot be version-controlled. The kernel cannot
distinguish old from new clients before the first FW_CDEV_IOC_GET_INFO
ioctl.
We will try to back-merge this change into currently maintained stable/
longterm series, and we only document the new behaviour. The old
behavior is now considered a kernel bug, which it basically is.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Cc: <stable@kernel.org>
commit d873d79423 upstream.
On Jun 27 Linus Torvalds wrote:
> The correct error code for "I don't understand this ioctl" is ENOTTY.
> The naming may be odd, but you should think of that error value as a
> "unrecognized ioctl number, you're feeding me random numbers that I
> don't understand and I assume for historical reasons that you tried to
> do some tty operation on me".
[...]
> The EINVAL thing goes way back, and is a disaster. It predates Linux
> itself, as far as I can tell. You'll find lots of man-pages that have
> this line in it:
>
> EINVAL Request or argp is not valid.
>
> and it shows up in POSIX etc. And sadly, it generally shows up
> _before_ the line that says
>
> ENOTTY The specified request does not apply to the kind of object
> that the descriptor d references.
>
> so a lot of people get to the EINVAL, and never even notice the ENOTTY.
[...]
> At least glibc (and hopefully other C libraries) use a _string_ that
> makes much more sense: strerror(ENOTTY) is "Inappropriate ioctl for
> device"
So let's correct this in the <linux/firewire-cdev.h> ABI while it is
still young, relative to distributor adoption.
Side note: We return -ENOTTY not only on _IOC_TYPE or _IOC_NR mismatch,
but also on _IOC_SIZE mismatch. An ioctl with an unsupported size of
argument structure can be seen as an unsupported version of that ioctl.
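Illustratively (the identifier names here are hypothetical; the point is the
error code, not the exact checks):
	if (_IOC_TYPE(cmd) != FW_CDEV_IOC_TYPE ||
	    _IOC_NR(cmd) >= ARRAY_SIZE(ioctl_handlers) ||
	    _IOC_SIZE(cmd) > sizeof(buffer))
		return -ENOTTY;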
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 67ae7cf1ee upstream.
Some drivers (ab)use the ethtool_ops::get_regs operation to expose
only a hardware revision ID. Commit
a77f5db361 ('ethtool: Allocate register
dump buffer with vmalloc()') had the side-effect of breaking these, as
vmalloc() returns a null pointer for size=0 whereas kmalloc() did not.
For backward-compatibility, allow zero-length dumps again.
Reported-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94c5b41b32 upstream.
This patch adds the missing dma_unmap(),
which solves the critical issue of system freezes under heavy load.
Michal Miroslaw's rejected patch:
[PATCH v2 10/46] net: jme: convert to generic DMA API
also pointed out the issue, thank you Michal.
But that fix was incorrect: it would unmap a still-needed address
under low memory conditions.
Got lots of feedback from end users and Gentoo Bugzilla.
https://bugs.gentoo.org/show_bug.cgi?id=373109
Thank you all. :)
Signed-off-by: Guo-Fu Tseng <cooldavid@cooldavid.org>
Acked-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f5bc1e755d upstream.
commit fec11dd9a0 caused
a regression when we have already mounted //server/share/a
and want to mount //server/share/a/b.
The problem is that lookup_one_len calls __lookup_hash
with the nd pointer as NULL. Then __lookup_hash calls
do_revalidate in the case when the dentry exists, and we end
up with a NULL pointer dereference in cifs_d_revalidate:
if (nd->flags & LOOKUP_RCU)
return -ECHILD;
Fix this by checking nd for NULL.
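With the fix, the check becomes, roughly:
	if (nd && (nd->flags & LOOKUP_RCU))
		return -ECHILD;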
Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com>
Reviewed-by: Shirish Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0472ade031 upstream.
Decrypting frames on key_miss handling shouldn't be done for Michael
MIC failed frames, as the h/w would have already decrypted such frames
successfully anyway.
Also, leaving aside CRC and PHY errors (where the frame is going to be dropped
anyway), we are left to process the Decrypt error, for which s/w decryption is
selected anyway, so having key_miss as a separate check doesn't serve
anything. Making key_miss handling mutually exclusive with other RX
status handling therefore makes much more sense.
This patch addresses an issue with STA not reporting MIC failure events
resulting in STA being disconnected immediately.
Signed-off-by: Senthil Balasubramanian <senthilb@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 98ab5c7755 upstream.
When ath6kl module was resumed while a scan was ongoing, for example during
suspend, the driver would crash in ar6k_cfg80211_scanComplete_event():
[26581.586440] Call Trace:
[26581.586440] [<f99ffeda>] ? ar6k_cfg80211_scanComplete_event+0xaa/0xaa [ath6kl]
[26581.586440] [<f9a0a020>] wmi_iterate_nodes+0xb/0xd [ath6kl]
[26581.586440] [<f99ffe78>] ar6k_cfg80211_scanComplete_event+0x48/0xaa [ath6kl]
[26581.586440] [<f9a038ae>] ar6000_close+0x77/0x7e [ath6kl]
[26581.586440] [<c139c25d>] __dev_close_many+0x87/0xab
[26581.586440] [<c139c30a>] dev_close_many+0x54/0xab
[26581.586440] [<c139c437>] rollback_registered_many+0xa5/0x19e
[26581.586440] [<c139c595>] rollback_registered+0x23/0x2f
[26581.586440] [<c139c5ed>] unregister_netdevice_queue+0x4c/0x69
[26581.586440] [<c139c6b2>] unregister_netdev+0x18/0x1f
[26581.586440] [<f9a00d4c>] ar6000_destroy+0xf8/0x115 [ath6kl]
[26581.586440] [<f9a0c765>] ar6k_cleanup_module+0x20/0x29 [ath6kl]
[26581.586440] [<c1062843>] sys_delete_module+0x181/0x1d9
[26581.586440] [<c105876b>] ? lock_release_holdtime+0x2b/0xcd
[26581.586440] [<c10b55dc>] ? sys_munmap+0x3b/0x42
[26581.586440] [<c14a99dc>] ? restore_all+0xf/0xf
[26581.586440] [<c14aeb6c>] sysenter_do_call+0x12/0x32
[26581.586440] Code: 89 53 6c 75 07 89 d8 e8 c0 ff ff ff 89 f0 e8 2c f2 a9 c7 5b 5e 5d c3 55 89 e5 57 56 53 89 c3 83 ec 08 89 55 f0 8d 78 04 89 4d ec <8b> b0 b8 00 00 00 46 89 b0 b8 00 00 00 89 f8 e8 ae ed a9 c7 8b
Fix the function not to iterate nodes when the scan is aborted. The nodes
are already freed when the module is being unloaded. Patch "ath6kl: Fix a
kernel panic during suspend/resume" tried to fix this already, but it wasn't
enough, as a pointer was still used even after the NULL check. This patch
removes the NULL check entirely, as the wmi structure is not accessed anymore
during module unload.
Also fix a bug where the status was checked as a bitfield with the '&'
operator, even though it's not a bitfield, just a regular (enum-like) value.
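For the second fix, the change amounts to something like the following,
with the status constant A_ECANCELED and the surrounding code assumed
rather than copied from the staging driver:
    /* status is a plain (enum-like) value, so compare, don't mask */
    if (status == A_ECANCELED)      /* was: if (status & A_ECANCELED) */
        return;                     /* scan aborted: nodes already freed */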
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b42a7b1bc7 upstream.
Drivers should not request firmware during resume. Fix ath6kl to
cache the firmware instead.
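A minimal sketch of the caching idea, using a hypothetical helper and cache
variable (not the actual ath6kl code); request_firmware() is the standard
kernel API:
    static const struct firmware *ar6k_fw;  /* hypothetical cache */

    static int ar6k_fetch_fw(struct device *dev, const char *name)
    {
        if (ar6k_fw)
            return 0;       /* already cached; safe to reuse on resume */
        return request_firmware(&ar6k_fw, name, dev);
    }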
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7be4ba24a3 upstream.
Since quite a few drivers are not managing to flag the cache as needing
to be resynced after suspend, and it's a reasonable thing to do, flag the
cache as needing a sync automatically when suspending.
The expectation is that systems will mainly only keep the CODEC powered
when doing audio through the CODEC, so we won't actually suspend the
device anyway; drivers which want to can override this behaviour when
they resume.
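A sketch of the behaviour, assuming the core suspend path simply marks the
codec's register cache dirty (the field and function names here are
illustrative, not the exact soc-core diff):
    static int soc_suspend_mark_cache(struct snd_soc_codec *codec)
    {
        /* assume the register cache will be stale after suspend */
        codec->cache_sync = 1;
        return 0;
    }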
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3012f43eaf upstream.
According to the DM365 voice codec data sheet at [1], before starting
recording or playback, the ADC/DAC modules should follow a reset-and-enable
cycle. Writing a 1 to the ADC/DAC bit in the register resets
the module, and clearing the bit to 0 enables the module. But the
driver seems to be doing the reverse of this.
[1] http://focus.ti.com/lit/ug/sprufi9b/sprufi9b.pdf
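For illustration, the corrected sequence looks roughly like the following,
assuming the control bit behaves as the data sheet describes (the register
and bit names here are hypothetical):
    u32 val;

    /* data sheet: writing 1 resets the module, clearing to 0 enables it */
    val = readl(vc_base + VC_CTRL);                     /* names illustrative */
    writel(val | VC_CTRL_ADC_RESET, vc_base + VC_CTRL); /* hold in reset  */
    writel(val & ~VC_CTRL_ADC_RESET, vc_base + VC_CTRL);/* release/enable */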
Signed-off-by: Rajashekhara, Sudhakar <sudhakar.raj@ti.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 82d1d52103 upstream.
In the davinci_vcif_trigger() function, a break statement was missing,
causing davinci_vcif_stop() to also be called, via switch fall-through,
right after davinci_vcif_start().
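Illustrative shape of the fixed switch; the SNDRV_PCM_TRIGGER_* commands
are the standard ALSA trigger codes, but the exact function body is assumed:
    switch (cmd) {
    case SNDRV_PCM_TRIGGER_START:
    case SNDRV_PCM_TRIGGER_RESUME:
    case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
        davinci_vcif_start(substream);
        break;  /* previously missing, so control fell through to stop */
    case SNDRV_PCM_TRIGGER_STOP:
    case SNDRV_PCM_TRIGGER_SUSPEND:
    case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
        davinci_vcif_stop(substream);
        break;
    default:
        ret = -EINVAL;
    }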
Signed-off-by: Rajashekhara, Sudhakar <sudhakar.raj@ti.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c7b3ea52e upstream.
While in sleep mode the CS# and other V3020 RTC GPIOs must be driven
high, otherwise V3020 RTC fails to keep the right time in sleep mode.
Signed-off-by: Igor Grinberg <grinberg@compulab.co.il>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b830ac1d9a upstream.
Ben reported a lockup related to rtc. The lockup happens due to:
CPU0                                      CPU1
rtc_irq_set_state()                       __run_hrtimer()
  spin_lock_irqsave(&rtc->irq_task_lock)
                                            rtc_handle_legacy_irq();
                                              spin_lock(&rtc->irq_task_lock);
  hrtimer_cancel()
    while (callback_running);
So the running callback never finishes as it's blocked on
rtc->irq_task_lock.
Use hrtimer_try_to_cancel() instead and drop rtc->irq_task_lock while
waiting for the callback. Fix this for both rtc_irq_set_state() and
rtc_irq_set_freq().
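A sketch of the retry pattern being described, with the surrounding context
assumed rather than copied from the rtc core:
    retry:
        spin_lock_irqsave(&rtc->irq_task_lock, flags);
        /* ... validate task/enabled state ... */
        if (hrtimer_try_to_cancel(&rtc->pie_timer) < 0) {
            /* callback is running: drop the lock so it can finish, retry */
            spin_unlock_irqrestore(&rtc->irq_task_lock, flags);
            cpu_relax();
            goto retry;
        }
        /* ... program or stop the timer ... */
        spin_unlock_irqrestore(&rtc->irq_task_lock, flags);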
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Ben Greear <greearb@candelatech.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3c5fec75e1 upstream.
Restore the missing INDEX register value in musb_restore_context().
Without this, suspend/resume functionality is broken with offmode
enabled.
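For illustration, the restore amounts to a single register write along the
following lines; musb_writeb() and MUSB_INDEX are real musb symbols, but the
saved-context field name is assumed:
    musb_writeb(musb_base, MUSB_INDEX, musb->context.index);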
Acked-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Ajay Kumar Gupta <ajay.gupta@ti.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 004c196828 upstream.
This patch (as1477) fixes a problem affecting a few types of EHCI
controller. Contrary to what one might expect, these controllers
automatically stop their internal frame counter when no ports are
enabled. Since ehci-hcd currently relies on the frame counter for
determining when it should unlink QHs from the async schedule, those
controllers run into trouble: The frame counter stops and the QHs
never get unlinked.
Some systems have also experienced other problems traced back to
commit b963801164 (USB: ehci-hcd unlink
speedups), which made the original switch from using the system clock
to using the frame counter. It never became clear what the reason was
for these problems, but evidently it is related to use of the frame
counter.
To fix all these problems, this patch more or less reverts that commit
and goes back to using the system clock. But this can't be done
cleanly because other changes have since been made to the scan_async()
subroutine. One of these changes involved the tricky logic that tries
to avoid rescanning QHs that have already been seen when the scanning
loop is restarted, which happens whenever an URB is given back.
Switching back to clock-based unlinks would make this logic even more
complicated.
Therefore the new code doesn't rescan the entire async list whenever a
giveback occurs. Instead it rescans only the current QH and continues
on from there. This requires the use of a separate pointer to keep
track of the next QH to scan, since the current QH may be unlinked
while the scanning is in progress. That new pointer must be global,
so that it can be adjusted forward whenever the _next_ QH gets
unlinked. (uhci-hcd uses this same trick.)
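A conceptual sketch only of the "next QH to scan" idea; the structure and
field names are illustrative, not the actual ehci-hcd code:
    /* the pointer is globally visible so the unlink path can advance it
     * if the QH it points to gets unlinked mid-scan */
    ehci->async_scan_next = ehci->async->qh_next.qh;
    while (ehci->async_scan_next) {
        struct ehci_qh *qh = ehci->async_scan_next;

        ehci->async_scan_next = qh->qh_next.qh;
        qh_completions(ehci, qh);   /* may give back URBs and unlink qh */
    }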
Simplification of the scanning loop removes a level of indentation,
which accounts for the size of the patch. The amount of code changed
is relatively small, and it isn't exactly a reversion of the
b963801164 commit.
This fixes Bugzilla #32432.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Matej Kenda <matejken@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6ea12a04d2 upstream.
The NVIDIA series of OHCI controllers continues to be troublesome. A
few people using the MCP67 chipset have reported that even with the
most recent kernels, the OHCI controller fails to handle new
connections and spams the system log with "unable to enumerate USB
port" messages. This is different from the other problems previously
reported for NVIDIA OHCI controllers, although it is probably related.
It turns out that the MCP67 controller does not like to be kept in the
RESET state very long. After only a few seconds, it decides not to
work any more. This patch (as1479) changes the PCI initialization
quirk code so that NVIDIA controllers are switched into the SUSPEND
state after 50 ms of RESET. With no interrupts enabled and all the
downstream devices reset, and thus unable to send wakeup requests,
this should be perfectly safe (even for non-NVIDIA hardware).
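Conceptually the quirk does something like the following, expressed here
with raw register offsets and HCFS values from the OHCI specification rather
than the actual pci-quirks.c code:
    #define OHCI_HCCONTROL      0x04        /* HcControl */
    #define OHCI_HCFS_MASK      (3 << 6)    /* HostControllerFunctionalState */
    #define OHCI_HCFS_SUSPEND   (3 << 6)

    u32 ctrl;

    msleep(50);                             /* 50 ms in RESET is enough */
    ctrl = readl(base + OHCI_HCCONTROL);
    ctrl = (ctrl & ~OHCI_HCFS_MASK) | OHCI_HCFS_SUSPEND;
    writel(ctrl, base + OHCI_HCCONTROL);    /* NVIDIA parts: SUSPEND, not RESET */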
The removal code in ohci-hcd hasn't been changed; it will still leave
the controller in the RESET state. As a result, if someone unloads
ohci-hcd and then reloads it, the controller won't work again until
the system is rebooted. If anybody complains about this, the removal
code can be updated similarly.
This fixes Bugzilla #22052.
Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5c5781b3f8 upstream.
On some loaded Windows hosts, we have discovered that the host may not
respond to guest requests within the specified time (one second)
as evidenced by the guest timing out. Fix this problem by increasing
the timeout to 5 seconds.
It may be useful to apply this patch to the 3.0 kernel as well.
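The change itself is essentially a longer completion timeout; a hedged
illustration (the waitevent field name is assumed, wait_for_completion_timeout()
is the standard API):
    unsigned long t;

    t = wait_for_completion_timeout(&open_info->waitevent, 5 * HZ); /* was HZ */
    if (t == 0)
        return -ETIMEDOUT;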
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2dfde9644f upstream.
On some loaded Windows hosts, we have discovered that the host may not
respond to guest requests within the specified time (one second)
as evidenced by the guest timing out. Fix this problem by increasing
the timeout to 5 seconds.
It may be useful to apply this patch to the 3.0 kernel as well.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 46d2eb6d82 upstream.
On some loaded Windows hosts, we have discovered that the host may not
respond to guest requests within the specified time (one second)
as evidenced by the guest timing out. Fix this problem by increasing
the timeout to 5 seconds.
It may be useful to apply this patch to the 3.0 kernel as well.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 819cbb120e upstream.
driver_name and board_name are pointers to strings, not buffers of size
COMEDI_NAMELEN. Copying COMEDI_NAMELEN bytes of a string containing
less than COMEDI_NAMELEN-1 bytes would leak some unrelated bytes.
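A sketch of the bounded copy that avoids the leak; the destination field
names are assumed, strlcpy() and COMEDI_NAMELEN are as referenced above:
    strlcpy(info.driver_name, dev->driver->driver_name, COMEDI_NAMELEN);
    strlcpy(info.board_name, dev->board_name, COMEDI_NAMELEN);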
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1c50bf7e41 upstream.
There are two devices with PCI ID 0x10ec:0x8192, namely RTL8192E and
RTL8192SE. The method of distinguishing them is by the revision ID
at offset 0x8 of the PCI configuration space. If the value is 0x10,
then the device uses rtl8192se for a driver.
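Illustratively, the probe-time check looks something like this; offset 0x08
is the standard PCI revision ID register, the surrounding probe function is
assumed:
    u8 revision;

    pci_read_config_byte(pdev, 0x08, &revision);
    if (revision == 0x10)
        return -ENODEV;     /* RTL8192SE: handled by the rtl8192se driver */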
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8547d4cc2b upstream.
When unbinding a device on the host which was still attached on the
client, I got a NULL pointer dereference on the client. This turned out
to be due to kthread_stop() being called on an already dead kthread.
Here is how I was able to reproduce the problem:
server:# usbip bind -b 1-2
client:# usbip attach -h server -b 1-2
server:# usbip unbind -b 1-2
This patch fixes the problem by checking the kthread before attempting
to kill it, as it is done on the opposite side in
stub_shutdown_connection().
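A sketch of the added guard, mirroring the stub-side check; the field names
here are assumed:
    if (ud->tcp_rx && !task_is_dead(ud->tcp_rx))
        kthread_stop(ud->tcp_rx);
    if (ud->tcp_tx && !task_is_dead(ud->tcp_tx))
        kthread_stop(ud->tcp_tx);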
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 17dd759c67 upstream.
Currently skb_gro_header_slow unconditionally resets frag0 and
frag0_len. However, when we can't pull on the skb this leaves
the GRO fields in an inconsistent state.
This patch fixes this by only resetting those fields after the
pskb_may_pull test.
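The reordering amounts to the following shape, based on the
skb_gro_header_slow() helper in linux/netdevice.h (exact body assumed):
    static inline void *skb_gro_header_slow(struct sk_buff *skb,
                                            unsigned int hlen,
                                            unsigned int offset)
    {
        if (unlikely(!pskb_may_pull(skb, hlen)))
            return NULL;

        /* only invalidate the frag0 fast path once the pull succeeded */
        NAPI_GRO_CB(skb)->frag0 = NULL;
        NAPI_GRO_CB(skb)->frag0_len = 0;
        return skb->data + offset;
    }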
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c03150e7e upstream.
A bridge topology with three systems:
    +------+        +------+
    | A(2) |--------| B(1) |
    +------+        +------+
         \            /
          +------+
          | C(3) |
          +------+
What is supposed to happen:
* bridge with the lowest ID is elected root (for example: B)
* C detects that A->C is the higher-cost path and puts it in the blocking state
What happens: the bridge with the lowest id (B) is elected correctly as
root and things start out fine initially. But then the config BPDU
doesn't get transmitted from A -> C. Because of that,
the A-C link is transitioned to the forwarding state.
The root cause of this is that the configuration message
is generated with a bogus message age, and dropped before
sending.
In the standard, message_age is supposed to be:
    the time since the generation of the Configuration BPDU by
    the Root that instigated the generation of this Configuration BPDU.
Reimplement this by recording the timestamp (age + jiffies) when
recording config information. The old code used the time
elapsed on the ageing timer, which was incorrect.
See also:
https://bugzilla.vyatta.com/show_bug.cgi?id=7164
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 803862a6f7 upstream.
The function esdhc_readl_le intends to clear the SDHCI_CARD_PRESENT bit
when the card-detect gpio reports that there is no card, but it does not
actually clear the bit. This patch fixes that.
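For illustration, the intent is along these lines; SDHCI_PRESENT_STATE and
SDHCI_CARD_PRESENT are real sdhci constants, while the cd_gpio handling
around them is assumed:
    /* card-detect gpio says "no card" */
    if (reg == SDHCI_PRESENT_STATE && gpio_get_value(boarddata->cd_gpio))
        val &= ~SDHCI_CARD_PRESENT;     /* actually clear the bit this time */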
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 15bed0f2fa upstream.
Ricoh 1180:e823 does not recognize certain types of SD/MMC cards,
as reported at http://launchpad.net/bugs/773524. Lowering the SD
base clock frequency from 200 MHz to 50 MHz fixes this issue. This
solution was suggested by Koji Matsumuro, Ricoh Company, Ltd.
This change has no negative performance effect on standard SD
cards, though it's quite possible that there will be one on
UHS-1 cards.
Signed-off-by: Manoj Iyer <manoj.iyer@canonical.com>
Tested-by: Daniel Manrique <daniel.manrique@canonical.com>
Cc: Koji Matsumuro <matsumur@nts.ricoh.co.jp>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 026dfaf189 upstream.
Add ID 4348:5523 for a WinChipHead USB->RS232 adapter with the
Prolific PL2303 chipset.
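The addition is a single entry in the driver's device table, roughly as
follows; the raw values are the ones quoted above, the macro usage is
illustrative:
    { USB_DEVICE(0x4348, 0x5523) },     /* WinChipHead USB->RS232 adapter */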
Signed-off-by: Wolfgang Denk <wd@denx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>