commit 1431559200 upstream.
Distros generally (I looked at Debian, RHEL5 and SLES11) seem to
enable CONFIG_HIGHPTE for any x86 configuration which has highmem
enabled. This means that the overhead applies even to machines which
have a fairly modest amount of high memory and which therefore do not
really benefit from allocating PTEs in high memory but still pay the
price of the additional mapping operations.
Running kernbench on a 4G box I found that with CONFIG_HIGHPTE=y but
no actual highptes being allocated there was a reduction in system
time used from 59.737s to 55.9s.
With CONFIG_HIGHPTE=y and highmem PTEs being allocated:
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 175.396 (0.238914)
User Time 515.983 (5.85019)
System Time 59.737 (1.26727)
Percent CPU 263.8 (71.6796)
Context Switches 39989.7 (4672.64)
Sleeps 42617.7 (246.307)
With CONFIG_HIGHPTE=y but with no highmem PTEs being allocated:
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 174.278 (0.831968)
User Time 515.659 (6.07012)
System Time 55.9 (1.07799)
Percent CPU 263.8 (71.266)
Context Switches 39929.6 (4485.13)
Sleeps 42583.7 (373.039)
This patch allows the user to control the allocation of PTEs in
highmem from the command line ("userpte=nohigh") but retains the
status-quo as the default.
It is possible that some simple heuristic could be developed which
allows auto-tuning of this option however I don't have a sufficiently
large machine available to me to perform any particularly meaningful
experiments. We could probably handwave up an argument for a threshold
at 16G of total RAM.
Assuming 768M of lowmem we have 196608 potential lowmem PTE
pages. Each page can map 2M of RAM in a PAE-enabled configuration,
meaning a maximum of 384G of RAM could potentially be mapped using
lowmem PTEs.
Even allowing a generous factor of 10 to account for other required
lowmem allocations, and generous slop to account for page sharing (which
reduces the total amount of RAM mappable by a given number of PT
pages) and other inaccuracies in the estimation, it would seem that
even a 32G machine would not have a particularly pressing need for
highmem PTEs. I think 32G could be considered to be at the upper bound
of what might be sensible on a 32 bit machine (although I think in
practice 64G is still supported).
It seems questionable whether HIGHPTE is even a win for any amount of RAM
you would sensibly run a 32 bit kernel on rather than going 64 bit.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
LKML-Reference: <1266403090-20162-1-git-send-email-ian.campbell@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 83ab0aa0d5 upstream.
setscheduler() saves task->sched_class outside of the rq->lock held
region for a check after the setscheduler changes have become
effective. That might result in checking a stale value.
rtmutex_setprio() has the same problem, though it is protected by
p->pi_lock against setscheduler(), but for correctness sake (and to
avoid bad examples) it needs to be fixed as well.
Retrieve task->sched_class inside of the rq->lock held region.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9000f05c6d upstream.
Fix a SMT scheduler performance regression that is leading to a scenario
where SMT threads in one core are completely idle while both the SMT threads
in another core (on the same socket) are busy.
This is caused by this commit (with the problematic code highlighted)
commit bdb94aa5db
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Tue Sep 1 10:34:38 2009 +0200
sched: Try to deal with low capacity
@@ -4203,15 +4223,18 @@ find_busiest_queue()
...
for_each_cpu(i, sched_group_cpus(group)) {
+ unsigned long power = power_of(i);
...
- wl = weighted_cpuload(i);
+ wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
+ wl /= power;
- if (rq->nr_running == 1 && wl > imbalance)
+ if (capacity && rq->nr_running == 1 && wl > imbalance)
continue;
On a SMT system, the power of a HT logical cpu will be 589 and
the scheduler load imbalance (for scenarios like the one mentioned above)
can be approximately 1024 (SCHED_LOAD_SCALE). The above change of scaling
the weighted load with the power results in "wl > imbalance", which
ultimately makes find_busiest_queue() return NULL, causing
load_balance() to think that the load is well balanced. But in fact
one of the tasks could be moved to the idle core for optimal performance.
We don't need to use the weighted load (wl) scaled by the cpu power to
compare with the imbalance. In that condition we already know there is only a
single task ("rq->nr_running == 1") and the comparison between imbalance and
wl is there to make sure that we select the correct priority thread which
matches imbalance. So we really need to compare the imbalance with the
original weighted load of the cpu and not the scaled load.
But in other conditions where we want the most hammered (busiest) cpu, we can
use the scaled load to ensure that we consider the cpu power in addition to the
actual load on that cpu, so that we can move the load away from the
cpu that is getting most hammered with respect to the actual capacity,
as compared with the rest of the cpus in that busiest group.
Fix it.
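Sketched from the description above (not the literal upstream diff), the fixed
loop body in find_busiest_queue() ends up doing roughly:
    wl = weighted_cpuload(i);
    /* compare the unscaled load with the imbalance when the queue
     * holds a single task, so the right-priority thread is picked */
    if (capacity && rq->nr_running == 1 && wl > imbalance)
            continue;
    /* for the "most hammered cpu" comparison below, scale the load
     * by the cpu power so actual capacity is taken into account */
    wl = (wl * SCHED_LOAD_SCALE) / power;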
Reported-by: Ma Ling <ling.ma@intel.com>
Initial-Analysis-by: Zhang, Yanmin <yanmin_zhang@linux.intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1266023662.2808.118.camel@sbs-t61.sc.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 28f5318167 upstream.
Fix sched_mc_power_savings for pre-Nehalem platforms.
A child sched domain should clear SD_PREFER_SIBLING if the parent will have
SD_POWERSAVINGS_BALANCE, because the two flags are contradictory.
Set the flags correctly based on sched_mc_power_savings.
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100208100555.GD2931@dirshya.in.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e92805ac12 upstream.
Add CPL checking in case the emulator is tricked into emulating
a privileged instruction from userspace.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8b9f44140b upstream.
Inject #UD if the guest attempts to do so. This is in accordance with the
Intel SDM.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2db2c2eb62 upstream.
Use groups mechanism to decode 0F BA instructions.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a97f925a32 upstream.
Free the dm_io structure before calling bio_endio() instead of after it,
to ensure that the io_pool containing it is not referenced after it is
freed.
This partially fixes a problem described here
https://www.redhat.com/archives/dm-devel/2010-February/msg00109.html
thread 1:
bio_endio(bio, io_error);
/* scheduling happens */
thread 2:
close the device
remove the device
thread 1:
free_io(md, io);
Thread 2, when removing the device, sees non-empty md->io_pool (because the
io hasn't been freed by thread 1 yet) and may crash with BUG in mempool_free.
Thread 1 may also crash, when freeing into a nonexisting mempool.
To fix this we must make sure that bio_endio() is the last call and
the md structure is not accessed afterwards.
There is another bio_endio in process_barrier, but it is called from the thread
and the thread is destroyed prior to freeing the mempools, so this call is
not affected by the bug.
A similar bug exists with module unloads - the module may be unloaded
immediately after bio_endio - but that is more difficult to fix.
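As a rough sketch of the intended ordering in the completion path (simplified
and hedged, not the literal driver code):
    struct mapped_device *md = io->md;
    struct bio *bio = io->bio;
    int io_error = io->error;

    free_io(md, io);            /* return the dm_io to md->io_pool first */
    /* neither io nor md may be touched past this point */
    bio_endio(bio, io_error);   /* must be the very last call */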
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ebed9203b6 upstream.
sunrpc_cache_update() will always call detail->update() from inside the
detail->hash_lock, so it cannot allocate memory.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9fcfe0c83c upstream.
This can, for instance, happen if the user specifies a link local IPv6
address.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ab1b18f70a upstream.
The 'struct svc_deferred_req's on the xpt_deferred queue do not
own a reference to the owning xprt. This is seen in svc_revisit
which is where things are added to this queue. dr->xprt is set to
NULL and the reference to the xprt is put.
So when this list is cleaned up in svc_delete_xprt, we mustn't
put the reference.
Also, replace the 'for' with a 'while' which is arguably
simpler and more likely to compile efficiently.
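The cleanup described above boils down to something like this sketch (hedged,
not necessarily the exact upstream hunk):
    struct svc_deferred_req *dr;

    while ((dr = svc_deferred_dequeue(xprt)) != NULL)
            kfree(dr);  /* no svc_xprt_put(xprt): no reference was held */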
Cc: Tom Tucker <tom@opengridcomputing.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 46216e4fbe upstream.
Enable the SD-Card interface on multiple Option 3G sticks.
The unusual_devs.h entry is necessary because the device descriptor is
vendor-specific. That prevents usb-storage from binding to it as an interface
driver.
Signed-off-by: Jan Dumon <j.dumon@option.com>
Signed-off-by: Phil Dibowitz <phil@ipom.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 46b72d78cb upstream.
This is a patch to ftdi_sio_ids.h and ftdi_sio.c that adds
identifiers for the CONTEC USB serial converter. I tested it
with the COM-1(USB)H device.
Signed-off-by: Daniel Sangorrin <daniel.sangorrin@gmail.com>
Cc: Andreas Mohr <andi@lisas.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 65e1ec6751 upstream.
- add FTDI device IDs for several ELV devices and NXTCam of Lego Mindstorms NXT
- add hopefully helpful new_id comment
- remove less helpful "Due to many user requests for multiple ELV devices we enable
them by default." comment (we simply add _all_ known devices - an
end user shouldn't have to fiddle with obscure module parameters...).
- add myself to DRIVER_AUTHOR
The missing NXTCam ID has been found at
http://www.unixboard.de/vb3/showthread.php?t=44155
; the ELV device IDs were taken from the ELV Windows .inf file.
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a7787e508a upstream.
Added a new device PID (PAPOUCH_AD4USB_PID) to ftdi_sio.h and ftdi_sio.c.
The AD4USB measuring converter is a 4-input A/D converter which enables the
user to measure up to four current inputs ranging from 0(4) to 20 mA or
voltages between 0 and 10 V. The measured values are then transferred to
a host system in digital form. The AD4USB communicates via USB and is
also powered via USB. The datasheet in English is here:
http://www.papouch.com/shop/scripts/pdf/ad4usb_en.pdf
Signed-off-by: Radek Liboska <liboska@uochb.cas.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4e092d110f upstream.
This is an (almost) sort-only patch to sort the FTDI device
product ID definitions in the new ftdi_sio_ids.h header.
The advantage is that new device ID submissions will now have a specific
(sorted) position, meaning fewer future merge conflicts.
Compile-tested, based on _current_ mainline git.
Minor checkpatch.pl warnings were eliminated wherever it made sense,
plus very minor text changes.
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 31844d5580 upstream.
This is a strictly move-only patch to relocate all FTDI device
product ID definitions to their own ftdi_sio_ids.h header
(following the usual *_ids.h kernel tree convention, too),
thus correcting the slightly too messy appearance
(crucial driver defines were stuck somewhere in the decaying middle swamp
of the huge existing header).
Compile-tested, based on latest mainline git.
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f7410ced7f upstream.
USB: Move hcd free_dev call into usb_disconnect
I found a way to oops the kernel:
1. Open a USB device through devio.
2. Remove the hcd module in the host kernel.
3. Close the devio file descriptor.
The problem is that closing the file descriptor does usb_release_dev
as it is the last reference. usb_release_dev then tries to invoke
the hcd free_dev function (or rather dereferencing the hcd driver
struct). This causes an oops as the hcd driver has already been
unloaded so the struct is gone.
This patch tries to fix this by bringing the free_dev call earlier
and into usb_disconnect. I have verified that repeating the
above steps no longer crashes with this patch applied.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cceffe9348 upstream.
This patch (as1332) removes an unneeded and annoying debugging message
announcing all USB uevent constructions.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d23356da71 upstream.
When hardware is removed on a Stratus, the system may crash like this:
ACPI: PCI interrupt for device 0000:7c:00.1 disabled
Trying to free nonexistent resource <00000000a8000000-00000000afffffff>
Trying to free nonexistent resource <00000000a4800000-00000000a480ffff>
uhci_hcd 0000:7e:1d.0: remove, state 1
usb usb2: USB disconnect, address 1
usb 2-1: USB disconnect, address 2
Unable to handle kernel paging request at 0000000000100100 RIP:
[<ffffffff88021950>] :uhci_hcd:uhci_scan_schedule+0xa2/0x89c
#4 [ffff81011de17e50] uhci_scan_schedule at ffffffff88021918
#5 [ffff81011de17ed0] uhci_irq at ffffffff88023cb8
#6 [ffff81011de17f10] usb_hcd_irq at ffffffff801f1c1f
#7 [ffff81011de17f20] handle_IRQ_event at ffffffff8001123b
#8 [ffff81011de17f50] __do_IRQ at ffffffff800ba749
This occurs because an interrupt scans uhci->skelqh, which is
being freed. We do the right thing: disable the interrupts in the
device, and do not do any processing if the interrupt is shared
with other sources, but it's possible that another CPU gets
delayed somewhere (e.g. in a loop) until we have started freeing.
The agreed-upon solution is to wait for interrupts to play out
before proceeding. No other barriers are necessary.
A backport of this patch was tested on a 2.6.18 based kernel.
Testing of 2.6.32-based kernels is under way, but it takes us
forever (months) to turn this around. So I think it's a good
patch and we should keep it.
Tracked in RH bz#516851
Signed-Off-By: Pete Zaitcev <zaitcev@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cd78069492 upstream.
This patch (as1346) changes the idProduct value for USB-3.0 root hubs
from 0x0002 (which we already use for USB-2.0 root hubs) to 0x0003.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 05197921ff upstream.
According "5.3.6 Capability Parameters (HCCPARAMS)" of xHCI rev0.96 spec,
value of xECP register indicates a relative offset, in 32-bit words,
from Base to the beginning of the first extended capability.
The wrong calculation will cause BIOS handoff fail (not handoff from BIOS)
in some platform with BIOS USB legacy sup support.
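Illustrative sketch of the calculation (the register offset and field
extraction here are assumptions for the example, not quoted from the driver):
xECP is a count of 32-bit words, so it must be shifted left by 2 to form a
byte offset from the capability register base.
    u32 hcc_params = readl(base + 0x10);        /* HCCPARAMS (assumed offset) */
    u32 xecp = (hcc_params >> 16) & 0xffff;     /* offset in 32-bit words */
    void __iomem *ext_cap = base + (xecp << 2); /* words -> bytes */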
Signed-off-by: Edward Shao <laface.tw@gmail.com>
Cc: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 18dce6ba5c upstream.
Thomas Renninger <trenn@suse.de> reported that on an IBM x3330,
booting a recent kernel results in:
PCI: PCI BIOS revision 2.10 entry at 0xfd61c, last bus=1
PCI: Using configuration type 1 for base access
bio: create slab <bio-0> at 0
ACPI: SCI (IRQ30) allocation failed
ACPI Exception: AE_NOT_ACQUIRED, Unable to install System Control Interrupt handler (20090903/evevent-161)
ACPI: Unable to start the ACPI Interpreter
Later all kinds of devices fail...
and it was bisected down to this commit:
commit b9c61b7007
x86/pci: update pirq_enable_irq() to setup io apic routing
It turns out we need to set up IRQ routing for the SCI on ioapic1 early.
-v2: make it work without sparseirq too.
-v3: fix checkpatch.pl warning, and cc to stable
Reported-by: Thomas Renninger <trenn@suse.de>
Bisected-by: Thomas Renninger <trenn@suse.de>
Tested-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <1265793639-15071-2-git-send-email-yinghai@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ced5b697a7 upstream.
Keep chip_data in create_irq_nr and destroy_irq.
When two drivers are setting up MSI-X at the same time via
pci_enable_msix() there is a race. See this dmesg excerpt:
[ 85.170610] ixgbe 0000:02:00.1: irq 97 for MSI/MSI-X
[ 85.170611] alloc irq_desc for 99 on node -1
[ 85.170613] igb 0000:08:00.1: irq 98 for MSI/MSI-X
[ 85.170614] alloc kstat_irqs on node -1
[ 85.170616] alloc irq_2_iommu on node -1
[ 85.170617] alloc irq_desc for 100 on node -1
[ 85.170619] alloc kstat_irqs on node -1
[ 85.170621] alloc irq_2_iommu on node -1
[ 85.170625] ixgbe 0000:02:00.1: irq 99 for MSI/MSI-X
[ 85.170626] alloc irq_desc for 101 on node -1
[ 85.170628] igb 0000:08:00.1: irq 100 for MSI/MSI-X
[ 85.170630] alloc kstat_irqs on node -1
[ 85.170631] alloc irq_2_iommu on node -1
[ 85.170635] alloc irq_desc for 102 on node -1
[ 85.170636] alloc kstat_irqs on node -1
[ 85.170639] alloc irq_2_iommu on node -1
[ 85.170646] BUG: unable to handle kernel NULL pointer dereference
at 0000000000000088
As you can see igb and ixgbe are both alternating on create_irq_nr()
via pci_enable_msix() in their probe function.
ixgbe: While looping through irq_desc_ptrs[] via create_irq_nr() ixgbe
chooses irq_desc_ptrs[102] and exits the loop, drops vector_lock and
calls dynamic_irq_init. Then it sets irq_desc_ptrs[102]->chip_data =
NULL via dynamic_irq_init().
igb: Grabs the vector_lock now and starts looping over irq_desc_ptrs[]
via create_irq_nr(). It gets to irq_desc_ptrs[102] and does this:
cfg_new = irq_desc_ptrs[102]->chip_data;
if (cfg_new->vector != 0)
continue;
This hits the NULL deref.
Another possible race exists via pci_disable_msix() in a driver, or in
any of the error paths that call free_msi_irqs():
destroy_irq()
dynamic_irq_cleanup() which sets desc->chip_data = NULL
...race window...
desc->chip_data = cfg;
Remove the save and restore code for cfg in create_irq_nr() and
destroy_irq() and take the desc->lock when checking the irq_cfg.
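A hedged sketch of the locking change in the create_irq_nr() scan (not the
exact upstream diff): only look at chip_data while holding desc->lock, so a
concurrent dynamic_irq_init()/dynamic_irq_cleanup() cannot yank it away.
    spin_lock_irqsave(&desc->lock, flags);
    cfg_new = desc->chip_data;
    if (!cfg_new || cfg_new->vector != 0) {
            /* descriptor gone or vector already in use: try the next irq */
            spin_unlock_irqrestore(&desc->lock, flags);
            continue;
    }
    irq = new;          /* claim this irq while still holding the lock */
    spin_unlock_irqrestore(&desc->lock, flags);
    break;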
Reported-and-analyzed-by: Brandon Philips <bphilips@suse.de>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <1265793639-15071-3-git-send-email-yinghai@kernel.org>
Signed-off-by: Brandon Phililps <bphilips@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 817a824b75 upstream.
There's a path in the pagefault code where the kernel deliberately
breaks its own locking rules by kmapping a high pte page without
holding the pagetable lock (in at least page_check_address). This
breaks Xen's ability to track the pinned/unpinned state of the
page. There does not appear to be a viable workaround for this
behaviour so simply disable HIGHPTE for all Xen guests.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
LKML-Reference: <1267204562-11844-1-git-send-email-ian.campbell@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Pasi Kärkkäinen <pasik@iki.fi>
Cc: <xen-devel@lists.xensource.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 318f6b228b upstream.
Do not set current->mm->mmap to NULL in the 32-bit a.out emulation on 64-bit
(load_aout_binary) after flush_old_exec, as it would destroy the already
set up bprm mapping with the arguments.
Introduced by b6a2fea393
mm: variable length argument support
where the argument mapping in bprm was added.
[ hpa: this is a regression from 2.6.22... time to kill a.out? ]
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
LKML-Reference: <1265831716-7668-1-git-send-email-jslaby@suse.cz>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ollie Wild <aaw@google.com>
Cc: x86@kernel.org
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cbaee472f2 upstream.
In ocfs2_direct_IO_get_blocks, we only need to bug out
in case we are going to write a refcounted extent record.
What a silly bug introduced by me!
Signed-off-by: Tao Ma <tao.ma@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 08fedfc903 upstream.
Studying the DSDTs of various thinkpads, it looks like bit 3 of the
argument to SBDC and SWAN is not "set radio to last state on resume".
Rather, it seems to be "if this bit is set, enable radio on resume,
otherwise disable it on resume".
So, the proper way to prepare the radios for S3 suspend is: disable
radio and clear bit 3 on the SBDC/SWAN call to resume with the radio
disabled, and enable radio and set bit 3 on the SBDC/SWAN call to
resume with the radio enabled.
Also, for persistent devices, the rfkill core does not restore state,
so we really need to get the firmware to do the right thing.
We don't sync the radio state on suspend, instead we trust the BIOS to
not do anything weird if we never touched the radio state since boot.
Time will tell if that's a wise way of doing things...
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7f0cf712a7 upstream.
Thadeu Lima de Souza Cascardo reports this:
Brightness notification does not work until the user writes to
hotkey_mask attribute. That's because the polling thread will only run
if hotkey_user_mask is set and someone is reading the input device or
if hotkey_driver_mask is set. In this second case, this condition is
not tested after the mask is changed, because the brightness and
volume drivers are started after the hotkey drivers.
Fix tpacpi_hotkey_driver_mask_set() to call hotkey_poll_setup(), so
that the poller kthread will be started when needed.
Reported-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Tested-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bf8b29c8f7 upstream.
Event 0x3006 is used to help power management of the ODD in the
UltraBay. The EC generates this event when the ODD eject button is
pressed (even if the bay is powered down).
Normally, Linux doesn't need this as we keep the SATA link powered
up (which wastes power). The EC powers up the bay by itself when the
ODD eject button is pressed, and the SATA PHY reports the hotplug.
However, we could also power that SATA link down (and for that matter,
also power down the Ultrabay) if the ODD is left idle for a while with
no disk inside, and use event 0x3006 to know when we need that SATA link
powered back up.
For now, just stop asking for more information when event 0x3006 is
seen, there is no point in pestering users about it anymore.
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7d1894d8d1 upstream.
We can stop pestering users for confirmation of the brightness_mode
default for firmware TP-76.
While at it, add a few missing comments in that quirk table.
Reported-by: Whoopie <whoopie79@gmx.net>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2c08522e5d upstream.
e->index overflows e->stamps[] every ip_pkt_list_tot packets.
Consider the case when ip_pkt_list_tot==1; the first packet received is stored
in e->stamps[0] and e->index is initialized to 1. The next received packet
timestamp is then stored at e->stamps[1] in recent_entry_update(),
a buffer overflow because the maximum e->stamps[] index is 0.
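In other words, the per-entry index has to wrap before it is used to index the
stamps array; a minimal sketch of the corrected update (illustrative, not the
literal upstream hunk):
    e->index %= ip_pkt_list_tot;    /* wrap before indexing, array size is ip_pkt_list_tot */
    e->stamps[e->index++] = jiffies;
    if (e->index > e->nstamps)
            e->nstamps = e->index;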
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0866b03c7d upstream.
If b43 or b43legacy are deauthenticated or disconnected, there is a
possibility that a reconnection is tried with the queues stopped in
mac80211. To prevent this, start the queues before setting
STAT_INITIALIZED.
In b43, a similar change has been in place (twice) in the
wireless_core_init() routine. Remove the duplicate and add similar
code to b43legacy.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2ac2927a95 upstream.
The hardware needs to know what type of frames are being
sent in order to fill in various fields, for example the
timestamp in probe responses (before this patch, it was
always 0). Set it correctly when initializing the TX
descriptor.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7bfbae10dc upstream.
While ath9k does not support RIFS yet, the ability to receive RIFS
frames is currently enabled for most chipsets in the initvals.
This is causing baseband related issues on AR9160 and AR9130 based
chipsets, which can lock up under certain conditions.
This patch fixes these issues by overriding the initvals, effectively
disabling RIFS for all affected chipsets.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Acked-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5c0ba62fd4 upstream.
When selecting the tx fallback rate, rc.c used a separate variable
'nrix' for storing the next rate index, however it did not use that as
reference for further rate index lowering. Because of that, it ended up
reusing the same rate for multiple multi-rate retry stages, thus
decreasing delivery probability under changing link conditions.
This patch removes the separate (unnecessary) variable and fixes
fallback the way it was intended to work.
This should result in increased throughput and better link stability.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d8728ee919 upstream.
In AP mode, ath_beacon_config_ap only restarts the timer if a TSF
restart is requested. Apparently this was added, because this function
unconditionally sets the flag for TSF reset.
The problem with this is, that ath9k_hw_reset() clobbers the timer
registers (specified in the initvals), thus effectively disabling the
SWBA interrupt whenever a card reset without TSF reset is issued
(happens in a few places in the code).
This patch fixes ath_beacon_config_ap to only issue the TSF reset flag
when necessary, but reinitialize the timer unconditionally. Tests show
that this is enough to keep the SWBA interrupt going after a call to
ath_reset().
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 76dadd76c2 upstream.
We use scm_send and scm_recv on both unix domain and
netlink sockets, but only unix domain sockets support
everything required for file descriptor passing, so return an error
if someone attempts to pass file descriptors over netlink sockets.
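A minimal sketch of such a check (placement and exact test are assumptions
here; the real patch wires this into the scm file-descriptor copy path):
    /* SCM_RIGHTS only makes sense on unix domain sockets */
    if (sock->sk->sk_family != AF_UNIX)
            return -EINVAL;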
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6066193399 upstream.
The UltraDMA Tss timing must be stretched with an ATA clock of 66 MHz, but the
driver only does this when the PCI clock is 66 MHz, whereas it always programs
the DPLL clock (which is used as the ATA clock) to 66 MHz.
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8d59582a86 upstream.
An off-by-one error caused some inputs to not be created by the driver
when they should. TMP421 gets only one input instead of two, TMP422
gets two instead of three, etc. Fix the bug by listing explicitly the
number of inputs each device has.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Tested-by: Andre Prendel <andre.prendel@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a44908d742 upstream.
The low bits of the temperature registers are status bits; they must be
masked out before converting the register values to temperatures.
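Illustrative only (the register width and which low bits are status are
assumptions for this sketch, not quoted from the datasheet): strip the status
bits before the raw value is scaled.
    static int tmp421_reg_to_temp(u16 reg)
    {
            reg &= ~0x000f;                 /* low bits are status, not data */
            return ((s16)reg * 1000) / 256; /* assumed 1/256 degree C per LSB */
    }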
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Tested-by: Andre Prendel <andre.prendel@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8740cc7d0c upstream.
i2c_board_info doesn't contain a member called name. i2c_register_client
call does not exist.
Signed-off-by: Luotao Fu <l.fu@pengutronix.de>
Acked-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bbcb8bbad5 upstream.
This patch adds the USB product ID of KAIREN's USB VGA Adaptor,
USB20SVGA-MB-PLUS, so that sisusbvga works with it.
Signed-off-by: Tanaka Akira <akr@fsij.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b87c6e86da upstream.
A crash has been reported with sierra driver on disconnect with
Ubuntu/Lucid distribution based on kernel-2.6.32.
The cause of the crash was determined as "NULL tty pointer was being
referenced" and the NULL pointer was passed by sierra_indat_callback().
This patch modifies sierra_indat_callback() function to check for NULL
tty structure pointer. This modification prevents a crash from happening
when the device is disconnected.
This patch fixes the bug reported in Launchpad:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/511157
Signed-off-by: Elina Pasheva <epasheva@sierrawireless.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bbcd18d1b3 upstream.
The platform code doesn't have to provide platform data to get sensible
default behaviour from the imx serial driver.
This patch does not handle NULL dereference in the IrDA case, which still
requires a valid platform data pointer (in imx_startup()/imx_shutdown()),
since I don't know whether there is a sensible default behaviour or
whether the operation should just fail cleanly.
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Cc: Baruch Siach <baruch@tkos.co.il>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Oskar Schirmer <os@emlix.com>
Cc: Fabian Godehardt <fg@emlix.com>
Cc: Daniel Glöckner <dg@emlix.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 638b9648ab upstream.
This was noticed by Matthias Urlichs and he proposed a fix. This patch
does the fixing a different way to avoid introducing several new race
conditions into the code.
The problem case is TTY_DRIVER_RESET_TERMIOS = 0. In that case while we
abort the ldisc change, the hangup processing has not cleaned up and restarted
the ldisc either.
We can't restart the ldisc stuff in the set_ldisc as we don't know what
the hangup did and may touch stuff we shouldn't as we are no longer
supposed to influence the tty at that point in case it has been re-opened
before we get rescheduled.
Instead do it the simple way. Always re-init the ldisc on the hangup, but
use TTY_DRIVER_RESET_TERMIOS to indicate that we should force N_TTY.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5e31d76f28 upstream.
Before unlinking the inode, reset the current permissions of possible
references like hardlinks, so granted permissions can not be retained
across the device lifetime by creating hardlinks, in the unusual case
that there is a user-writable directory on the same filesystem.
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 77d3d7c1d5 upstream.
sysfs is creating several devices in cuse class concurrently and with
CONFIG_SYSFS_DEPRECATED turned off, it triggers the following oops.
BUG: unable to handle kernel NULL pointer dereference at 0000000000000038
IP: [<ffffffff81158b0a>] sysfs_addrm_start+0x4a/0xf0
PGD 75bb067 PUD 75be067 PMD 0
Oops: 0000 [#1] PREEMPT SMP
last sysfs file: /sys/devices/system/cpu/cpu7/topology/core_siblings
CPU 1
Modules linked in: cuse fuse
Pid: 4737, comm: osspd Not tainted 2.6.31-work #77
RIP: 0010:[<ffffffff81158b0a>] [<ffffffff81158b0a>] sysfs_addrm_start+0x4a/0xf0
RSP: 0018:ffff88000042f8f8 EFLAGS: 00010296
RAX: ffff88000042ffd8 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff880007eef660 RDI: 0000000000000001
RBP: ffff88000042f918 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: ffffffff81158b0a R12: ffff88000042f928
R13: 00000000fffffff4 R14: 0000000000000000 R15: ffff88000042f9a0
FS: 00007fe93905a950(0000) GS:ffff880008600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000038 CR3: 00000000077c9000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process osspd (pid: 4737, threadinfo ffff88000042e000, task ffff880007eef040)
Stack:
ffff880005da10e8 0000000011cc8d6e ffff88000042f928 ffff880003d28a28
<0> ffff88000042f988 ffffffff811592d7 0000000000000000 0000000000000000
<0> 0000000000000000 0000000000000000 ffff88000042f958 0000000011cc8d6e
Call Trace:
[<ffffffff811592d7>] create_dir+0x67/0xe0
[<ffffffff811593a8>] sysfs_create_dir+0x58/0xb0
[<ffffffff8128ca7c>] ? kobject_add_internal+0xcc/0x220
[<ffffffff812942e1>] ? vsnprintf+0x3c1/0xb90
[<ffffffff8128cab7>] kobject_add_internal+0x107/0x220
[<ffffffff8128cd37>] kobject_add_varg+0x47/0x80
[<ffffffff8128ce53>] kobject_add+0x53/0x90
[<ffffffff81357d84>] device_add+0xd4/0x690
[<ffffffff81356c2b>] ? dev_set_name+0x4b/0x70
[<ffffffffa001a884>] cuse_process_init_reply+0x2b4/0x420 [cuse]
...
The problem is that kobject_add_internal() first adds a kobject to the
kset and then tries to create the sysfs directory for it. If the creation
fails, it removes the kobject from the kset. get_device_parent()
accesses the class_dirs kset while only holding class_dirs.list_lock to
see whether the cuse class dir exists. But when it exists, it may not
have finished initialization yet, or it may fail and get removed soon. In
the above case the former happened, so the second caller ends up trying
to create a subdirectory under a NULL sysfs_dirent.
Fix it by grabbing a mutex in get_device_parent().
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Colin Guthrie <cguthrie@mandriva.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e555317c08 upstream.
Don't touch the variable 'reg' to construct the value for the actual SPI
transport. This variable is again used to access the driver's register
cache, and so random memory is overwritten.
Compute the value in-place instead.
Signed-off-by: Daniel Mack <daniel@caiaq.de>
Acked-by: Liam Girdwood <lrg@slimlogic.co.uk>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0708cc582f upstream.
With PulseAudio and an application accessing an input device like `gnome-volume-manager`, both have high CPU load, as reported in [1].
Loading `snd-hda-intel` with `position_fix=1` fixes this issue. Therefore add a quirk for ASUS M2V-MX SE.
The only downside is that now, when exiting for example MPlayer while it is playing an audio file, a high-pitched sound is output by the speaker.
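For reference, the manual workaround mentioned above corresponds to a module
option along these lines (e.g. in a modprobe configuration file):
    options snd-hda-intel position_fix=1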
$ lspci -vvnn | grep -A10 Audio
20:01.0 Audio device [0403]: VIA Technologies, Inc. VT1708/A [Azalia HDAC] (VIA High Definition Audio Controller) [1106:3288] (rev 10)
Subsystem: ASUSTeK Computer Inc. Device [1043:8290]
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 17
Region 0: Memory at fbffc000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: HDA Intel
[1] http://sourceforge.net/mailarchive/forum.php?thread_name=1265550675.4642.24.camel%40mattotaupa&forum_name=alsa-user
Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d39e82db73 upstream.
Here's a patch that adds MIDI support through USB for one of the Access
Music synths, the VirusTI.
The synth uses standard USBMIDI protocol on its USB interface 3, although
it does signal "vendor specific" class. A magic string has to be sent on
interface 3 to enable the sending of MIDI from the synth (this string was
found by sniffing usb communication of the Windows driver). This is all
my patch does, and it works on my computer.
Please note that the synth can also do standard usb audio I/O on its
interfaces 2&3, which already works with the current snd-usb-audio driver,
except for the audio input from the synth. I'm going to work on it when I
have some time.
Signed-off-by: Sebastien Alaiwan <sebastien.alaiwan@gmail.com>
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ba579eb7b3 upstream.
BugLink: https://bugs.launchpad.net/bugs/524948
The original reporter has verified that the existing model=laptop-eapd quirk does not
function correctly but instead needs model=3stack. Make this change
so that manual corrections to module-init-tools file(s) are not
required.
Reported-by: Lasse Havelund <lasse@havelund.org>
Signed-off-by: Daniel T Chen <crimsun@ubuntu.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cfc9c0b450 upstream.
While switching virtual counters, the perfctr MSRs are accessed. If
the counter is not available this fails due to an invalid
address. This patch fixes this.
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 89baaaa98a upstream.
Standard AMD systems have the same number of nodes as there are
northbridge devices. However, there may be kernel configurations
(especially for 32 bit) or system setups where the node number
is different or cannot be detected properly. Thus the check is not
reliable and may fail even though the IBS setup was fine. For this reason it is
better to remove the check.
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 18b4a4d59e upstream.
The commit
1155de4 ring-buffer: Make it generally available
already made ring-buffer available without the TRACING option
enabled. This patch removes the TRACING dependency from oprofile.
This also fixes the oprofile configuration on ia64.
The patch also applies to the 2.6.32-stable kernel.
Reported-by: Tony Jones <tonyj@suse.de>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 68dc819ce8 upstream.
Multiple virtual counters share one physical counter. The reservation
of virtual counters fails due to duplicate allocation of the same
counter. The counters are already reserved, so the virtual counter
reservation can be removed entirely. This also makes the code simpler.
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 98ceb75c7c upstream.
Some code that is in ams_exit() (the module exit code) should instead
be called when the device (not module) is removed. It probably doesn't
make much of a difference in the PMU case, but in the I2C case it does
matter.
I make no guarantee that my fix isn't racy, I'm not familiar enough
with the ams driver code to tell for sure.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Tested-by: Christian Kujau <lists@nerdbynature.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Stelian Pop <stelian@popies.net>
Cc: Michael Hanselmann <linux-kernel@hansmi.ch>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 33a470f6d5 upstream.
Looking at drivers/macintosh/therm_adt746x.c, the sysfs files are
created in thermostat_init() and removed in thermostat_exit(), which
are the driver's init and exit functions. These files are backed by
a per-device structure, so this looks like the wrong thing to do: the
sysfs files have a lifetime longer than the data structure that is
backing them up.
I think that sysfs files creation should be moved to the end of
probe_thermostat() and sysfs files removal should be moved to the
beginning of remove_thermostat().
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Tested-by: Christian Kujau <lists@nerdbynature.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Colin Leroy <colin@colino.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a9c9b4429d upstream.
The hibernate memory preallocation code allocates memory to push some
user space data out of physical RAM, so that the hibernation image is
not too large. It allocates more memory than necessary for creating
the image, so it has to release some pages to make room for
allocations made while suspending devices and disabling nonboot CPUs,
or the system will hang due to the lack of free pages to allocate
from. Unfortunately, the function used for freeing these pages,
free_unnecessary_pages(), contains a bug that prevents it from doing
the job on all systems without highmem.
Fix this problem, which is a regression from the 2.6.30 kernel, by
using the right condition for the termination of the loop in
free_unnecessary_pages().
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reported-and-tested-by: Alan Jenkins <sourcejedi.lkml@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 29e1fa3565 upstream.
ULE (Unidirectional Lightweight Encapsulation RFC 4326) decapsulation
has a bug that causes endless loop when Payload Pointer of MPEG2-TS
frame is 182 or 183. Anyone who sends malicious MPEG2-TS frame will
cause the receiver of ULE SNDU to go into endless loop.
This patch was generated and tested against linux-2.6.32.9 and should
apply cleanly to linux-2.6.33 as well because there was only one typo
fix to dvb_net.c since v2.6.32.
This bug was brought to you by modern day Santa Claus who decided to
shower the satellite dish at Keio University with heavy snow causing
huge burst of errors. We, receiver end, received Santa Claus's gift in
the form of kernel bug.
Care has been taken not to introduce more bugs by fixing this bug, but
please scrutinize the code, for I always produce buggy code.
Signed-off-by: Ang Way Chuang <wcang79@gmail.com>
Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e37bcc0de0 upstream.
It turns out that Mimio has a userspace solution for this product using
libusb, and the in-kernel driver is just getting in the way now and
causing problems. So they have asked that the in-kernel driver be
removed. As the staging driver wasn't quite working anyway, and Mimio
supports their libusb solution for all distros, I am removing the
in-kernel driver.
The libusb solution can be downloaded from:
http://www.mimio.com/downloads/mimio_studio_software/linux.asp
Cc: <mwilder@cs.nmsu.edu>
Cc: Phil Hannent <phil@hannent.co.uk>
Cc: Marc Rousseau <Marc.Rousseau@mimio.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c22090facd upstream.
The HV core mucks around with specific irqs and other low-level stuff
and takes forever to determine that it really shouldn't be running on a
machine. So instead, trigger off of the DMI system information and
error out much sooner. This also allows the module loading tools to
recognize that this code should be loaded on this type of system.
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9a775dbd4e upstream.
This allows the HV core to be properly found and autoloaded
by the system tools.
It uses the Microsoft virtual VGA device to trigger this.
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2cec802980 upstream.
request_firmware() may sleep and it appears to be safe to release the
spinlock here.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit da64c2a8de upstream.
All of the SH clocksource drivers follow the scheme that the IRQ is setup
prior to registering the clockevent. The interrupt handler in the
clockevent cases looks to the event handler function pointer being filled
in by the registration code, permitting us to get in to situations where
asserted IRQs step in to the handler before registration has had a chance
to complete and hitting a NULL pointer deref.
In practice this is not an issue for most platforms, but some of them
with fairly special loaders (or that are chain-loading from another
kernel) may enter in to this situation. This fixes up the oops reported
by Rafael on hp6xx.
Reported-and-tested-by: Rafael Ignacio Zurita <rafaelignacio.zurita@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c0d7a0212b upstream.
There are BUGs "scheduling while atomic" triggered by the timer
rhine_tx_timeout(). They are caused by calling napi_disable() (with
msleep()). This patch fixes it by moving most of the timer content to
the workqueue function (similarly to other drivers, like tg3), with
spin_lock() changed to BH version.
Additionally, there is spin_lock_irq() moved in rhine_close() to
exclude napi_disable() etc., also tg3's way.
Reported-by: Andrey Rahmatullin <wrar@altlinux.org>
Tested-by: Andrey Rahmatullin <wrar@altlinux.org>
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 77593ae28c upstream.
The stall workaround doesn't work with bcm4320a devices the way it does with
bcm4320b. The workaround actually causes more stalls/device freezes on
bcm4320a. Therefore disable the stall workaround by default.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c1f8ca1d83 upstream.
rndis_query_oid overwrites *len, which stores the buffer size, to return the
full size of the received command, and then uses *len with memcpy to fill the
buffer with the command.
Of course the memcpy should be done before replacing the buffer size.
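A minimal sketch of the corrected ordering (the local names here are
illustrative, not the driver's):
    buflen = *len;                      /* caller's buffer size, still intact */
    memcpy(buf, resp, min(buflen, resp_len));
    *len = resp_len;                    /* now report the full command size */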
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 634a555ce3 upstream.
rndis_wlan didn't know about NL80211_AUTHTYPE_AUTOMATIC and simple
setup with 'iwconfig wlan essid no-encrypt' would fail (ENOSUPP).
v2: use NDIS_80211_AUTH_AUTO_SWITCH instead of _OPEN.
This will make device try shared key auth first, then open.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3507d61236 upstream.
Some newer Lenovo models are shipped with a TPM that doesn't seem to set the TPM_STS_DATA_EXPECT status bit
when sending it a burst of data, so the code understands it as a failure and doesn't proceed sending the chip
the intended data. In this patch we bypass this bit check in case the itpm module parameter was set.
This patch is based on Andy Isaacson's one:
http://marc.info/?l=linux-kernel&m=124650185023495&w=2
It was heavily discussed how we should deal with identifying the chip in kernel space, but the required
patch to do so was NACK'd:
http://marc.info/?l=linux-kernel&m=124650186423711&w=2
This way we let the user choose whether or not to use this workaround, based on his
observations of this code's behavior when trying to use the TPM.
Fixed a checkpatch issue present on the previous patch, thanks to Daniel Walker.
Signed-off-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Acked-by: Eric Paris <eparis@redhat.com>
Tested-by: Seiji Munetoh <seiji.munetoh@gmail.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ceae8cbe94 upstream.
This allows offb to be used for initial framebuffer,
and a kms driver to take over later in the boot sequence.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 43bcd61fae upstream.
Somehow the case for G33 got dropped while porting from ums code.
This made a 400MHz chip into a 133MHz one which resulted in the
unnecessary enabling of double wide pipe mode which in turn
screwed up the overlay code.
Nothing else (than the overlay code) seems to be affected.
This fixes fdo.org bug #24835
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a67093d46e upstream.
Original code incorrectly assumed only status-type-0
IOCBs would be queued to the response-queue, and thus all
entries would safely reference a VHA from the IOCB
'handle.'
Signed-off-by: Giridhar Malavali <giridhar.malavali@qlogic.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c8c15ff1e9 upstream
This patch works around a possible security issue which can allow
a user to abuse drm on r6xx/r7xx hw to access any system ram memory.
This patch doesn't break userspace: it detects "valid" old use of the
CB_COLOR[0-7]_FRAG & CB_COLOR[0-7]_TILE registers and overwrites
the address these registers are pointing to with that of the
last color buffer. This workaround will work for old mesa &
xf86-video-ati and any old user which used a similar register
programming pattern (we expect that there are no other
users of those ioctls except possibly a malicious one). This patch
adds a warning if it detects such usage; the warning encourages people
to update their mesa & xf86-video-ati. New userspace will submit
proper relocations.
Fixes for xf86-video-ati / mesa (this kernel patch is enough to
prevent abuse; the userspace fixes are to set a proper cs stream and
avoid the kernel warning):
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/commit/?id=95d63e408cc88b6934bec84a0b1ef94dfe8bee7b
http://cgit.freedesktop.org/mesa/mesa/commit/?id=46dc6fd3ed5ef96cda53641a97bc68c3bc104a9f
Abusing these registers to access system ram memory is not easy;
here is an outline of how it could be achieved. First, the attacker must
have access to the drm device and be able to submit a command stream
through the cs ioctl. Then the attacker must build a proper command stream
for r6xx/r7xx hw which abuses the FRAG or TILE buffer to
overwrite the GPU GART, which is in VRAM. To do so the attacker
has to set up CB_COLOR[0-7]_FRAG or CB_COLOR[0-7]_TILE to point
to the GPU GART, then find a way to write predictable
values into those buffers (with a little cleverness I believe this
can be done, but it is a hard task). Once the attacker has such a
program, it can overwrite the GPU GART to make the GART point
anywhere in system memory. It can then reuse the same method it
used to reprogram the GART to overwrite system ram through the
GART mapping. In the process the attacker has to be careful not
to overwrite any sensitive area of the GART table, like the ring
or IB GART entries, as that will more than likely lead to a GPU lockup.
The bottom line is that I think it's very hard to use this flaw
to get system ram access, but in theory one could achieve it.
Side note: I am not aware of anyone ever using the GPU as an
attack vector; nevertheless we take great care in the opensource
driver to try to detect and forbid malicious use of the GPU. I don't
think the closed source drivers are as cautious as we are.
[bwh: Adjusted context for 2.6.32]
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit db96380ea2 upstream
If ib initialization failed, don't try to test the ib, as it would result
in an oops (accessing a NULL ib buffer ptr).
[bwh: Adjusted context for 2.6.32]
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7e71c9e2e7 upstream.
This avoids an oops if the fb is used at a later point. Trying to create
a framebuffer with no valid GEM object is bogus and should be forbidden,
as this patch does.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit f6815077e7 ]
The bookkeeping structure for transmit always had the flags value
cleared, so transmit DMA maps were never released correctly.
Based on patch by Jarek Poplawski, problem observed by Michael Breuer.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit aeedba8bd2 ]
Hello David Miller,
I fixed a bug in the ks8851_mll driver, which has existed since 2.6.32-rc6.
From: David J. Choi <david.choi@micrel.com>
Fix a bug where the data pointers in the interrupt handler are set wrong, which is related to the 5th parameter of request_irq().
Signed-off-by: David J. Choi <david.choi@micrel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit c92b544bd5 ]
Commit 0b5ccb2 ("ipv6: reassembly: use separate reassembly queues for
conntrack and local delivery") broke the saddr and daddr members of
nf_ct_frag6_queue when creating a new queue. As a result, the hash value
generated by nf_hashfn() was not equal to the one generated by fq_find(),
so a newly received fragment could not be inserted into the right queue.
The patch fixes the bug by adding a user member to the nf_ct_frag6_queue structure.
Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit c6b471e645 ]
Currently we treat IGMPv3 reports as if they were IGMPv2/v1 reports.
This is broken, as IGMPv3 reports are formatted differently. So we
end up suppressing a bogus multicast group (which should be harmless
as long as the leading reserved field is zero).
In fact, IGMPv3 does not allow membership report suppression so
we should simply ignore IGMPv3 membership reports as a host.
This patch does exactly that. I kept the case statement for it
so people won't accidentally add it back thinking that we overlooked
this case.
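Roughly, the host-side receive path keeps a visible case for v3 reports but
treats them as a no-op; a hedged sketch of such a switch (not the exact
upstream hunk):
    case IGMP_HOST_MEMBERSHIP_REPORT:
    case IGMPV2_HOST_MEMBERSHIP_REPORT:
            /* v1/v2 report suppression */
            igmp_heard_report(in_dev, ih->group);
            break;
    case IGMPV3_HOST_MEMBERSHIP_REPORT:
            /* IGMPv3 has no report suppression and a different format: ignore */
            break;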
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit e76b69cc01 ]
Traffic (tcp) does not start on a vlan interface when gro is enabled.
Even the tcp handshake does not take place.
This is because the eth_type_trans call before the netif_receive_skb
in napi_gro_finish() resets skb->dev to napi->dev from the previously
set vlan netdev interface. This causes ip_route_input to drop the
incoming packet, considering it a packet coming from a martian source.
I could reproduce this on 2.6.32.7 (stable) and 2.6.33-rc7.
With this fix, the traffic starts and the test runs fine on both vlan
and non-vlan interfaces.
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: Patrick McHardy <kaber@trash.net>
Signed-off-by: Ajit Khaparde <ajitk@serverengines.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit b8afe64161 ]
The wireless sysfs methods like the rest of the networking sysfs
methods are removed with the rtnl_lock held and block until
the existing methods stop executing. So use rtnl_trylock
and restart_syscall so that the code continues to work.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 88af182e38 ]
Yuck. It turns out that when we restart sysctls we were restarting
with the values already changed. Which unfortunately meant that
the second time through we thought there was no change and skipped
all kinds of work, despite the fact that there was indeed a change.
I have fixed this the simplest way possible by restoring the changed
values when we restart the sysctl write.
One of my coworkers spotted this bug when after disabling forwarding
on an interface pings were still forwarded.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 1f474646fd ]
Thanks to testcase and report from Brad Spengler:
--------------------
#include <stdio.h>

typedef int (*_wee)(void);

int main(void)
{
        char buf[8] = { '\x81', '\xc7', '\xe0', '\x08', '\x81', '\xe8',
                        '\x00', '\x00' };
        _wee wee;

        printf("%p\n", &buf);
        wee = (_wee)&buf;
        wee();
        return 0;
}
--------------------
TSB I-tlb load code tries to use andcc to check the _PAGE_EXEC_4U bit,
but that's bit 12 so it gets sign extended all the way up to bit 63
and the test nearly always passes as a result.
Use sethi to fix the bug.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 2531be413b ]
Commit 085219f79c
("sparc32: use proper types in struct stat")
accidentally changed the struct stat uid/gid members
to uid_t and gid_t, but those get set to
__kernel_uid32_t and __kernel_gid32_t respectively.
Those are of type 'int' but the structure is meant
to have 'short'. So use uid16_t and gid16_t to
correct this.
Reported-by: Rob Landley <rob@landley.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 8654164f54 ]
It doesn't account for phys_base like it should, fix by using
page_to_pfn().
While we're here, make virt_to_page() use pfn_to_page() as well, so we
consistently use the asm/memory-model.h abstractions instead of
open-coding memory model assumptions.
Tested-by: Kristoffer Glembo <kristoffer@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commits f036d9f398
and 440ab7ac2d ]
This is mandatory for 64-bit processes, and doing it also for 32-bit
processes saves a conditional in the compat case.
This fixes the glibc/nptl/tst-stdio1 test case, as well
as many others, on 64-bit.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d2e7276b6b upstream.
This is retry of reverted 859ddf0974
("idr: fix a critical misallocation bug") which contained two bugs.
* pa[idp->layers] should be cleared even if it's not used by
sub_alloc() because it's used by idr_mark_full().
* The original condition check also assigned pa[l] to p, which the new
code didn't do, thus leaving p pointing at the wrong layer.
Both problems have been fixed, and the idr code has received a good amount of
testing using a userland testing setup where a simple bitmap allocator is
run in parallel to verify the result of idr allocation.
The bug this patch fixes is caused by sub_alloc() optimization path
bypassing out-of-room condition check and restarting allocation loop
with starting value higher than maximum allowed value. For detailed
description, please read commit message of 859ddf09.
Signed-off-by: Tejun Heo <tj@kernel.org>
Based-on-patch-from: Eric Paris <eparis@redhat.com>
Reported-by: Eric Paris <eparis@redhat.com>
Tested-by: Stefan Lippers-Hollmann <s.l-h@gmx.de>
Tested-by: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f09c256375 upstream.
Patch prevents calling set_wep_key() with a zero key length. That fixes a
long-standing regression since commit c038069352
"airo: clean up WEP key operations". Additionally, print a call trace when
someone tries to use improper parameters, and remove the key.len = 0
assignment, because it is in an impossible code path.
Reported-by: Chris Siebenmann <cks-rhbugzilla@cs.toronto.edu>
Bisected-by: Chris Siebenmann <cks-rhbugzilla@cs.toronto.edu>
Tested-by: Chris Siebenmann <cks@cs.toronto.edu>
Cc: Dan Williams <dcbw@redhat.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 858155fbcc upstream.
Some devices do not react to a control request (seen on APC UPSes), resulting
in a slow stream of messages, "generic-usb ... control queue full". Therefore
the request needs a timeout.
Signed-off-by: Oliver Neukum <oliver@neukum.org>
Signed-off-by: David Fries <david@fries.net>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9db630b48a upstream.
These touchscreens are mounted onto HP TouchSmart and the Dell Studio One
19. Without a quirk they report a wrong button set and the x/y coordinates
through ABS_Z/ABS_RX, confusing the higher levels (most notably X.Org's
evdev driver).
Device id 0x003 covers models 1900, 2150, and 2700 [1] though testing could
only be performed on a model 1900.
[1] http://www.nextwindow.com/nextwindow_support/latest_tech_info.html
Signed-off-by: Peter Hutterer <peter.hutterer@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4bb9508bbb upstream.
There were multiple reports which indicate that the vendor messed up horribly
and the same VID/PID combination is used for completely different devices,
some of them requiring the blacklist entry and others not.
Remove the blacklist entry for this combination of VID/PID completely, and let
the user decide and unbind the driver via sysfs eventually, if needed. The
proper fix would be fixing the vendor.
References:
http://lkml.org/lkml/2009/2/10/434
http://bugzilla.kernel.org/show_bug.cgi?id=13411
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0141450f66 upstream.
This fixes inefficient page-by-page reads on POSIX_FADV_RANDOM.
POSIX_FADV_RANDOM used to set ra_pages=0, which leads to poor performance:
a 16K read will be carried out in 4 _sync_ 1-page reads.
In other places, ra_pages==0 means
- it's ramfs/tmpfs/hugetlbfs/sysfs/configfs
- some IO error happened
where multi-page read IO won't help or should be avoided.
POSIX_FADV_RANDOM actually wants different semantics: to disable the
*heuristic* readahead algorithm, and to use a dumb one which faithfully
submits read IO for whatever the application requests.
So introduce a flag FMODE_RANDOM for POSIX_FADV_RANDOM.
Note that the random hint is not likely to help random reads performance
noticeably. And it may be too permissive on huge request size (its IO
size is not limited by read_ahead_kb).
In Quentin's report (http://lkml.org/lkml/2009/12/24/145), the overall
(NFS read) performance of the application increased by 313%!
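For reference, the hint is requested from userspace via posix_fadvise(); a
small self-contained example (the file path is made up):
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        char buf[16384];
        int fd = open("/tmp/testfile", O_RDONLY);   /* example path only */

        if (fd < 0)
                return 1;
        /* Declare a random access pattern; with FMODE_RANDOM this disables
         * the heuristic readahead instead of forcing ra_pages to zero. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
        if (read(fd, buf, sizeof(buf)) < 0)  /* one 16K read, not 4 sync 1-page reads */
                return 1;
        close(fd);
        return 0;
}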
Tested-by: Quentin Barnes <qbarnes+nfs@yahoo-inc.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: <qbarnes+nfs@yahoo-inc.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 70136081fc upstream.
If you read the mail to Oliver Neukum on the linux-usb list, then you know
that I found a cure for the mysterious problem that the MR97310a CIF "type
1" cameras have been freezing up and refusing to stream if hooked up to a
machine with a UHCI controller.
Namely, the cure is that if the camera is an mr97310a CIF type 1 camera, you
have to send it 0xa0, 0x00. Somehow, this is a timing reset command, or
such. It un-blocks whatever was previously stopping the CIF type 1 cameras
from working on the UHCI-based machines.
Signed-off-by: Theodore Kilgore <kilgota@auburn.edu>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3dc1de0bf2 upstream.
Make addba_resp_timer aware of the HT_AGG_STATE_REQ_STOP_BA_MSK mask
so that when ___ieee80211_stop_tx_ba_session() is issued the timer
will quit. Otherwise, when suspend happens before the timer expires,
the timer handler will be called immediately after resume and
messes up the driver status.
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d7384b28af upstream.
The driver hangs when doing `rmmod mpt2sas` if there are any
IR volumes present. The hang is due to the scsi midlayer trying to access the
IR volumes after the driver releases controller resources. Perhaps when
scsi_remove_host is called, the scsi midlayer is sending some request.
This doesn't occur for bare drives because the driver is already reporting
those drives deleted prior to calling mpt2sas_base_detach.
To solve this issue, we need to delete the volumes as well.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <eric.moore@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d306ebc286 upstream.
ACPI deep C-state entry had a long-standing bug/missing feature, wherein we were sending
resched IPIs when an idle CPU is in an mwait based deep C-state. Only mwait based C1 was using
the write to the monitored address to wake up the mwait'ing CPU.
This patch changes the code to retain TS_POLLING bit if we are entering an mwait based
deep C-state.
The patch has been verified to reduce the number of resched IPIs in general and also
improves the performance/power on workloads with low system utilization (i.e., when mwait based
deep C-states are being used).
Fixes "netperf ~50% regression with 2.6.33-rc1, bisect to 1b9508f"
http://marc.info/?l=linux-kernel&m=126441481427331&w=4
Reported-by: Lin Ming <ming.m.lin@intel.com>
Tested-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1379d2fef0 upstream.
Wrong Lid state reported.
Need to blacklist this machine for LVDS detection.
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b0fc889c43 upstream.
ibmphp driver currently maps only 1KB of ebda memory area into kernel address
space during driver initialization. This causes a kernel oops when the driver is
modprobe'd and it accesses memory beyond 1KB within the ebda segment. The first
byte of ebda segment actually stores the length of the ebda region in
Kilobytes. Hence make use of the length parameter and map the entire ebda
region.
Signed-off-by: Chandru Siddalingappa <chandru@linux.vnet.ibm.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 453d3131ec upstream.
Mike Cui reported that his system with an NVIDIA MCP79 (aka MCP7A)
chipset stopped working with 2.6.32. The problem appears to be that
2.6.32 now enables the FPDMA auto-activate optimization in the ahci
driver. The drive works fine with this enabled on an Intel AHCI so
this appears to be a chipset bug. Since MCP79 is a fairly recent
NVIDIA chipset and we don't have any info on whether any other NVIDIA
chipsets have this issue, disable FPDMA AA optimization on all NVIDIA
AHCI controllers for now.
Should address http://bugzilla.kernel.org/show_bug.cgi?id=14922
Signed-off-by: Robert Hancock <hancockrwd@gmail.com>
While-we-investigate-issue-this-patch-looks-good-to-me-by:
Prajakta Gudadhe <pgudadhe@nvidia.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c36f74e67f upstream.
This fixes corrupted CIPSO packets when SELinux categories greater than 127
are used. The bug occurred on the second (and later) loops through the
while; the inner for loop through the ebitmap->maps array used the same
index as the NetLabel catmap->bitmap array, even though the NetLabel bitmap
is twice as long as the SELinux bitmap.
Signed-off-by: Joshua Roys <joshua.roys@gtri.gatech.edu>
Acked-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a120e912eb upstream.
Check that ieee80211_is_data_qos() is true for the frame control before
counting the number of TFDs that can be freed; tfds_in_queue is only
incremented when ieee80211_is_data_qos() is true before transmit, so it
should only be decremented if the type matches.
Remove the ieee80211_is_data_qos() check on frame_ctrl in tx_resp to avoid
invalid information passed from the uCode.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5e2f75b899 upstream.
The HT extension channel settings require priv->staging_rxon.channel to be
accurate. However, iwl_set_rxon_ht was being called before iwl_set_rxon_channel
and thus HT40 could be broken unless another call to iwl_mac_config came in.
This problem was recently introduced by "iwlwifi: Fix to set correct ht
configuration"
The particular setting in which I noticed this was monitor mode:
iwconfig wlan0 mode monitor
ifconfig wlan0 up
./iw wlan0 set channel 64 HT40-
#./iw wlan0 set channel 64 HT40-
tcpdump -i wlan0 -y IEEE802_11_RADIO
would only catch HT40 packets if I issued the IW command twice.
From visual inspection, iwl_set_rxon_channel does not depend on
iwl_set_rxon_ht, so simply swapping them should be safe and fixes this problem.
Signed-off-by: Daniel Halperin <dhalperi@cs.washington.edu>
Acked-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a239a8b47c upstream.
When receiving reply_tx and ready to decrement the count of the number of
TFDs in the queue, do error checking to prevent an error condition where
tfds_in_queue becomes a negative number.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fc4a7f9308 upstream.
cxusb uses the atbm8830 and lgs8gxx (not lgs8gl5) frontends and the
max2165 tuner, so it needs to select them.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2434466432 upstream.
Move I2C IR initialization from just after I2C bus setup to right
before non-I2C IR initialization. This avoids the case where an I2C IR
device is blocking audio support (at least the PV951 suffers from
this). It is also more logical to group IR support together,
regardless of the connectivity.
This fixes bug #15184:
http://bugzilla.kernel.org/show_bug.cgi?id=15184
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 53f68607ca upstream.
The regression was caused by my commit 6b35ca0d3d
which determined message size using sizeof rather than hardcoded constants.
Unfortunately pwc_set_shutter_speed reuses a 2-byte buffer for a one-byte
message too, so the sizeof was bogus in this case.
All other uses of sizeof were checked and are ok.
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Martin Fuzzey <mfuzzey@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3dae93ec3e upstream.
Relying on overflow/wrap around isn't exact because if you wrap far
enough, you get back to "valid" values.
Reported-by: Thorsten Pohlmann <pohlmann@tetronik.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c1db53b366 upstream.
I'm trying to fix it on the GCC side (PR43007), but the module is
quite stupid in using ULL constants to operate on u32 values:
static int apply_frontend_param(struct dvb_frontend *fe,
                                struct dvb_frontend_parameters *param)
{
        ...
        static const u32 ppm = 8000;
        u32 spi_bias;
        ...
        spi_bias *= 1000ULL;
        spi_bias /= 1000ULL + ppm/1000;
which causes current GCC 4.5 to emit calls to __udivdi3 for i?86 again.
This patch fixes this issue.
Signed-off-by: Richard Guenther <rguenther@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
commit ac278a9c50 upstream.
Make sure that automount "symlinks" are followed regardless of LOOKUP_FOLLOW;
it should have no effect on them.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ebfd32bba9 upstream.
This patch fixes two bugs that revolve around the miscalculation and
misuse of the variable 'overhead_size'. 'overhead_size' is the size of
the various header structures used during communication.
The first bug is the use of 'sizeof' with the pointer of a structure
instead of the structure itself - resulting in the wrong size being
computed. This is then used in a check to see if the payload
(data_size) would be to large for the preallocated structure. Since the
bug produces a smaller value for the overhead, it was possible for the
structure to be breached. (Although the current users of the code do
not currently send enough data to trigger this bug.)
The second bug is that the 'overhead_size' value is used to compute how
much of the preallocated space should be cleared before populating it
with fresh data. This should have simply been 'sizeof(struct cn_msg)'
not overhead_size. The fact that 'overhead_size' was computed
incorrectly made this problem "less bad" - leaving only a pointer's
worth of space at the end uncleared. Thus, this bug was never producing
a bad result, but still needs to be fixed - especially now that the
value is computed correctly.
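The first bug is the classic sizeof-of-a-pointer mistake; a tiny stand-alone
illustration (the structure below is a stand-in, not the real cn_msg layout):
#include <stdio.h>

struct cn_msg { unsigned int id[2]; unsigned short len; };  /* stand-in only */

int main(void)
{
        struct cn_msg *msg = 0;

        printf("sizeof(msg)  = %zu\n", sizeof(msg));   /* pointer size: the buggy overhead */
        printf("sizeof(*msg) = %zu\n", sizeof(*msg));  /* structure size: the intended overhead */
        return 0;
}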
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 781248c1b5 upstream.
If a table containing zero as stripe count is passed into stripe_ctr
the code attempts to divide by zero.
This patch changes DM_TABLE_LOAD to return -EINVAL if the stripe count
is zero.
We now get the following error messages:
device-mapper: table: 253:0: striped: Invalid stripe count
device-mapper: ioctl: error adding target to table
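A minimal sketch of such a guard, assuming the constructor has already parsed
the stripe count (simplified, not the literal dm-stripe code):
/* Refuse a zero stripe count up front so the later division of the
 * target length by 'stripes' can never divide by zero. */
if (stripes == 0) {
        ti->error = "Invalid stripe count";
        return -EINVAL;
}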
Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0da780c269 upstream.
We only reply to a probe request if either the requested SSID is the
broadcast SSID or if the requested SSID matches our own SSID. This
latter case was not properly handled since we were replying to a different
SSID with the same length as our own SSID.
Signed-off-by: Benoit Papillault <benoit.papillault@free.fr>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c8afef551 upstream.
Currently, PAE frames are not assigned proper sequence numbers.
Since sending PAE frames as part of aggregates breaks
crypto with several APs, they are sent as normal MPDUs.
Fix the sequence number issue by updating the frame with the
internal sequence number.
Tested-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Sujith <Sujith.Manoharan@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b6c3f5be7c upstream.
Commit c7ab5ef9bc entitled "b43: implement
short slot and basic rate handling" reduced the transmit throughput for
my BCM4311 device from 18 Mb/s to 0.7 Mb/s. The basic rate handling
portion is OK, the problem is in the short slot handling.
Prior to this change, the short slot enable/disable routines were never
called. Experimentation showed that the critical part was changing the
value at offset 0x0010 in the shared memory. This is supposed to contain
the 802.11 Slot Time in usec, but if it is changed from its initial value
of zero, performance is destroyed. On the other hand, changing the value
in the MMIO register corresponding to the Interframe Slot Time increased
performance from 18 to 22 Mb/s. A BCM4306/3 also shows dramatic
improvement of the transmit rate from 5.3 to 19.0 Mb/s.
Other changes in the patch include removal of the magic number for the
MMIO register, and allowing the slot time to be set for any PHY operating
in the 2.4 GHz band. Previously, the routine was executed only for G PHYs.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f8f484d1b6 upstream.
The i_blocks field of an eCryptfs inode cannot be trusted, but
generic_fillattr() uses it to instantiate the blocks field of a stat()
syscall when a filesystem doesn't implement its own getattr(). Users
have noticed that the output of du is incorrect on newly created files.
This patch creates ecryptfs_getattr() which calls into the lower
filesystem's getattr() so that eCryptfs can use its kstat.blocks value
after calling generic_fillattr(). It is important to note that the
block count includes the eCryptfs metadata stored in the beginning of
the lower file plus any padding used to fill an extent before
encryption.
https://bugs.launchpad.net/ecryptfs/+bug/390833
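A sketch of the approach, assuming the usual eCryptfs lower-dentry helpers and
this kernel series' vfs_getattr() signature; error handling is abbreviated and
this is not the literal patch:
static int ecryptfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
                            struct kstat *stat)
{
        struct kstat lower_stat;
        int rc;

        /* Ask the lower filesystem, whose i_blocks can be trusted. */
        rc = vfs_getattr(ecryptfs_dentry_to_lower_mnt(dentry),
                         ecryptfs_dentry_to_lower(dentry), &lower_stat);
        if (!rc) {
                generic_fillattr(dentry->d_inode, stat);
                /* Override the untrustworthy eCryptfs block count. */
                stat->blocks = lower_stat.blocks;
        }
        return rc;
}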
Reported-by: Dominic Sacré <dominic.sacre@gmx.de>
Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
Cc: Tim Gardner <timg@tpi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 65d269538a upstream.
The cached read and write paths initialize fattr->time_start in their
setup procedures. The value of fattr->time_start is propagated to
read_cache_jiffies by nfs_update_inode(). Subsequent calls to
nfs_attribute_timeout() will then use a good time stamp when
computing the attribute cache timeout, and squelch unneeded GETATTR
calls.
Since the direct I/O paths erroneously leave the inode's
fattr->time_start field set to zero, read_cache_jiffies for that inode
is set to zero after any direct read or write operation. This
triggers an otw GETATTR or ACCESS call to update the file's attribute
and access caches properly, even when the NFS READ or WRITE replies
have usable post-op attributes.
Make sure the direct read and write setup code performs the same fattr
initialization as the cached I/O paths to prevent unnecessary GETATTR
calls.
This was likely introduced by commit 0e574af1 in 2.6.15, which appears
to add new nfs_fattr_init() call sites in the cached read and write
paths, but not in the equivalent places in fs/nfs/direct.c. A
subsequent commit in the same series, 33801147, introduces the
fattr->time_start field.
Interestingly, the direct write reschedule path already has a call to
nfs_fattr_init() in the right place.
Reported-by: Quentin Barnes <qbarnes@yahoo-inc.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 01d4503968 upstream.
For usec delays use udelay instead of scheduling; this should
allow reclocking to happen faster. This also was the cause
of reported 33s delays at bootup on certain systems.
fixes: freedesktop.org bug 25506
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 370d5cd885 upstream.
Since the rewrite of the CPU idle governor in 2.6.32, two laptops have
surfaced where the BIOS advertises a C2 power state, but for some reason
this state is not functioning (as verified in both cases by powertop
before the patch in .32).
The old governor had the accidental behavior that if a non-working state
was chosen too many times, it would end up falling back to C1. The new
governor works differently and this accidental behavior is no longer
there; the result is a high temperature on these two machines.
This patch adds these 2 machines to the DMI table for C state anomalies;
by just not using C2 both these machines are better off (the TSC can be
used instead of the pm timer, giving a performance boost for example).
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=14742
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Reported-by: <akwatts@ymail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d2f6650a95 upstream.
If acpi_bus_add does not return a device and it's passed
to acpi_bus_start, bad things will happen:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
IP: [<ffffffff8128402d>] acpi_bus_start+0x14/0x24
...
[<ffffffffa008977a>] acpiphp_bus_add+0xba/0x130 [acpiphp]
[<ffffffffa008aa72>] enable_device+0x132/0x2ff [acpiphp]
[<ffffffffa0089b68>] acpiphp_enable_slot+0xb8/0x130 [acpiphp]
[<ffffffffa0089df7>] handle_hotplug_event_func+0x87/0x190 [acpiphp]
The next patch will make this NULL pointer check obsolete, but it's
better to have one too many than one missing...
Signed-off-by: Thomas Renninger <trenn@suse.de>
Acked-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ddeee0b2ee upstream.
I notice that the processcompl_compat() function seems to be leaking the
'struct async *as' in the error paths.
I think that the calling convention is fundamentally buggered. The
caller is the one that did the "reap_as()" to get the as thing, the
caller should be the one to free it too.
Freeing it in the caller also means that it very clearly always gets
freed, and avoids the need for any "free in the error case too".
From: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Marcus Meissner <meissner@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d4a4683ca0 upstream.
We need to only copy the data received by the device to userspace, not
the whole kernel buffer, which can contain "stale" data.
Thanks to Marcus Meissner for pointing this out and testing the fix.
Reported-by: Marcus Meissner <meissner@suse.de>
Tested-by: Marcus Meissner <meissner@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7c0ff870d1 upstream.
There is currently a bug in sysfs_sd_setattr inherited from
sysfs_setattr in 2.6.32 where the first time we set the attributes
on a sysfs file we allocate backing store but do not set the
backing store attributes, resulting in overly restrictive
permissions on sysfs files.
The fix is to simply modify the code so that it always executes
when we update the sysfs attributes, as we did in 2.6.31 and earlier.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Tested-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bca476139d upstream.
When controlling an industrial radio modem it can be necessary to
manipulate the handshake lines in order to control the radio modem's
transmitter, from userspace.
The transmitter should not be turned off before all characters have been
transmitted. serial8250_tx_empty() was reporting that all characters were
transmitted before they actually were.
===
Discovered in parallel with more testing and analysis by Kees Schoenmakers
as follows:
I ran into a NetMos 9835 serial PCI board which behaves a little
differently than the standard. This type of expansion board is very common.
"Standard" 8250 compatible devices clear the "UART_LSR_TEMT" bit together
with the "UART_LSR_THRE" bit when writing data to the device.
The NetMos device does it slightly differently.
I believe that the TEMT bit is coupled to the shift register. The problem
is that if one calls serial8250_tx_empty very quickly after writing data
to the device, it returns the wrong information.
My patch makes the test more robust (and solves the problem) and it does
not affect the already correct devices.
Alan:
We may yet need to quirk this but now we know which chips we have a
way to do that should we find this breaks some other 8250 clone with
dodgy THRE.
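A hedged sketch of the more robust emptiness test, using the standard LSR bit
definitions; locking and the rest of the driver are omitted:
/* Report the transmitter as empty only when both the holding register
 * (THRE) and the shift register (TEMT) are empty. */
#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)

static unsigned int tx_empty_sketch(struct uart_8250_port *up)
{
        unsigned int lsr = serial_in(up, UART_LSR);

        return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
}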
Signed-off-by: Dick Hollenbeck <dick@softplc.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Cc: Kees Schoenmakers <k.schoenmakers@sigmae.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 78b8d5d2ee upstream.
As the release of substreams may be done asynchronously from the
disconnection, close callback needs to check the shutdown flag before
actually accessing the usb interface.
Reference: Novell bnc#505027
http://bugzilla.novell.com/show_bug.cgi?id=565027
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit df574b8ecf upstream.
This patch fixes compilation problems that were caused by function
naming conflicts between the rtl8187se driver and the mac80211 stack.
Signed-off-by: George Kadianakis <desnacked@gmail.com>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d3ad9373b7 upstream.
Deassigning a device from the passthrough domain does not
work and breaks device assignment to kvm guests. This patch
fixes the issue.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f532509437 upstream
This patch moves the initialization of the iommu-api out of
the dma-ops initialization code. This ensures that the
iommu-api is initialized even with iommu=pt.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cedc9bf906 upstream.
Acer G725 shares the same suspend problem with the HP laptops which
lose ATA devices on resume. New firmware which fixes the problem is
already available. Add G725 with old firmwares to the broken suspend
list.
This problem has been reported in bko#15104.
http://bugzilla.kernel.org/show_bug.cgi?id=15104
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jani-Matti Hätinen <jani-matti.hatinen@iki.fi>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2d68b7fe55 upstream.
flush_dcache_page() must be called after (!ATA_TFLAG_WRITE) the
data copying to avoid D-cache aliasing with user space or I-D cache
coherency issues (when reading data from an ATA device using PIO,
the kernel dirties the D-cache but there is no flush_dcache_page()
required on Harvard architectures).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a76221d47e upstream.
This patch adds support for automatically muting the speakers when headphones
are inserted, as well as relabelling the headphone widgets from the
non-standard "HP" to the standard "Headphone" for the mb5 model.
Signed-off-by: Alex Murray <murray.alex@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2fc1b5dd99 upstream.
Kernel bugzilla #15239
On some workloads, it is quite possible to get a huge dst list to
process in dst_gc_task(), and trigger soft lockup detection.
Fix is to call cond_resched(), as we run in process context.
Reported-by: Pawel Staszewski <pstaszewski@itcare.pl>
Tested-by: Pawel Staszewski <pstaszewski@itcare.pl>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d6d8bf5493 upstream.
Replace the zero-division warning message with WARN_ON_ONCE() per the
advice by Linus. This shouldn't happen, but if it happens, it's
possible that the bug happens often due to buggy IRQs.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fcb4ebd678 upstream.
pte_write() should check whether the permissions include either the user
or kernel write permission bits. Likewise, pte_wrprotect() needs to
remove both the kernel and user write bits.
Without this patch handle_tlbmiss() doesn't handle faulting in pages
from the P3 area (our vmalloc space) because of a write. Mappings of the
P3 space have the _PAGE_EXT_KERN_WRITE bit but not _PAGE_EXT_USER_WRITE.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9858ae3801 upstream.
retval should be SUCCESS/FAILED, which are defined in scsi.h.
retval = 0 gives the wrong return value; it must be retval = SUCCESS.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 325fda71d0
[ cebbert@redhat.com : backport to 2.6.32 ]
devmem: check vmalloc address on kmem read/write
Otherwise vmalloc_to_page() will BUG().
This also makes the kmem read/write implementation aligned with mem(4):
"References to nonexistent locations cause errors to be returned." Here we
return -ENXIO (inspired by Hugh) if no bytes have been transferred to/from
user space, otherwise return partial read/write results.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fda11e61ff upstream
[ backport to 2.6.32 ]
When acpi_evaluate_object() is passed ACPI_ALLOCATE_BUFFER,
the caller must kfree the returned buffer if AE_OK is returned.
The callers of wmi_get_event_data() pass ACPI_ALLOCATE_BUFFER,
and thus must check its return value before accessing
the buffer or calling kfree() on it.
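A usage sketch of the pattern the callers need (the handler name is made up;
wmi_get_event_data() and the ACPI types are the real ones):
static void example_wmi_notify(u32 value, void *context)
{
        struct acpi_buffer response = { ACPI_ALLOCATE_BUFFER, NULL };
        union acpi_object *obj;
        acpi_status status;

        status = wmi_get_event_data(value, &response);
        if (status != AE_OK)            /* nothing was allocated on failure */
                return;

        obj = (union acpi_object *)response.pointer;
        /* ... inspect obj->type, obj->integer.value, etc. ... */

        kfree(response.pointer);        /* caller owns the ACPI_ALLOCATE_BUFFER memory */
}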
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3e9b988e4e upstream
[ backported to 2.6.32 ]
These functions allocate an ACPI object by calling wmi_get_event_data, which
in turn calls acpi_evaluate_object, and the object is not freed afterwards.
The kernel doc for the parameters of wmi_get_event_data is also fixed.
Signed-off-by: Anisse Astier <anisse@astier.eu>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Acked-by: Carlos Corbacho <carlos@strangeworlds.co.uk>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a8d7ac2797 upstream.
As the padlock driver for SHA uses a software fallback to perform
partial hashing, it must implement custom import/export functions.
Otherwise hmac which depends on import/export for prehashing will
not work with padlock-sha.
Reported-by: Wolfgang Walter <wolfgang.walter@stwm.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b8ed5dd548 upstream.
Remove strings from s390 debugfeature entries that could lead to a
crash when the data is read from dbf because the strings do not exist
any more.
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b9241ea31f upstream.
When we set up the buffer for the display plane, we check for any pending
required GPU flush and possibly make an interruptible wait for the flush to
complete. But that wait is most likely to fail if signals are received by the
X process, which will then fail the modeset process and put the display
engine in an inconsistent state. The result could be a blank screen or a CPU
hang, and the DDX driver would always turn on output DPMS afterwards, whether
the modeset fails or not.
So this one creates a new helper for setting up the display plane buffer and,
when a flush is needed, uses an uninterruptible wait for it.
This one should fix bug like https://bugs.freedesktop.org/show_bug.cgi?id=24009.
Also fixing mode switch stress test on Ironlake.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 48764bf43f upstream.
This just waits until the hw passed the current ring position with
cmd execution. This slightly changes the existing i915_wait_request
function to make uninterruptible waiting possible - no point in
returning to userspace while mucking around with the overlay, that
piece of hw is just too fragile.
Also replace a magic 0 with the symbolic constant (and kill the then
superfluous comment) while I was looking at the code.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d696c7bdaa upstream.
As noticed by Jon Masters <jonathan@jonmasters.org>, the conntrack hash
size is global and not per namespace, but modifiable at runtime through
/sys/module/nf_conntrack/hashsize. Changing the hash size will only
resize the hash in the current namespace however, so other namespaces
will use an invalid hash size. This can cause crashes when enlarging
the hashsize, or false negative lookups when shrinking it.
Move the hash size into the per-namespace data and only use the global
hash size to initialize the per-namespace value when instantiating a
new namespace. Additionally restrict hash resizing to init_net for
now as other namespaces are not handled currently.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 14c7dbe043 upstream.
As per C99 6.2.4(2), when temporary table data goes out of scope,
the behaviour is undefined:
if (compat) {
        struct foo tmp;
        ...
        private = &tmp;
}
[dereference private]
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 13ccdfc2af upstream.
Expectation hashtable size was simply glued to a variable with no code
to rehash expectations, so it was a bug to allow writing to it.
Make "expect_hashsize" readonly.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b3501faa8 upstream.
nf_conntrack_cachep is currently shared by all netns instances, but
because of SLAB_DESTROY_BY_RCU special semantics, this is wrong.
If we use a shared slab cache, one object can instantly fly from
one hash table (netns ONE) to another one (netns TWO), and a concurrent
reader (doing a lookup in netns ONE, 'finding' an object of netns TWO)
can be fooled without notice, because no RCU grace period has to be
observed between object freeing and its reuse.
We don't have this problem with UDP/TCP slab caches because TCP/UDP
hashtables are global to the machine (and each object has a pointer to
its netns).
If we use per netns conntrack hash tables, we also *must* use per netns
conntrack slab caches, to guarantee an object can not escape from one
namespace to another one.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
[Patrick: added unique slab name allocation]
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9edd7ca0a3 upstream.
As discovered by Jon Masters <jonathan@jonmasters.org>, the "untracked"
conntrack, which is located in the data section, might be accidentally
freed when a new namespace is instantiated while the untracked conntrack
is attached to an skb, because the reference count is re-initialized.
The best fix would be to use a separate untracked conntrack per
namespace since it includes a namespace pointer. Unfortunately this is
not possible without larger changes since the namespace is not easily
available everywhere we need it. For now move the untracked conntrack
initialization to the init_net setup function to make sure the reference
count is not re-initialized and handle cleanup in the init_net cleanup
function to make sure namespaces can exit properly while the untracked
conntrack is in use in other namespaces.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 923de3cf5b upstream.
The current kvm wallclock does not consider total_sleep_time, which could cause
a wrong wallclock in the guest after host suspend/resume. This patch solves
this issue by counting in total_sleep_time to get the correct host boot time.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c93d89f3db upstream.
Export getboottime and monotonic_to_bootbased so that they
can be used by the following patch.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 691c9ae099 upstream.
A DVB demultiplexer device can be used to set up either a PES filter or
a section filter. In the former case, the ts field of the feed union of
struct dmxdev_filter is used, in the latter case the sec field of the
same union is used.
The ts field is a struct list_head, and is currently initialized in the
open() method of the demux device. When for a given demuxer a section
filter is set up, the sec field is played with, thus if a PES filter
needs to be set up after that the ts field will be corrupted, causing a
kernel oops.
This fix moves the list head initialization to
dvb_dmxdev_pes_filter_set(), so that the ts field is properly
initialized every time a PES filter is set up.
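The corruption comes from the two filter types sharing one union; a simplified
illustration (types reduced to the essentials, not the real struct dmxdev_filter):
struct dmxdev_filter_sketch {
        union {
                struct list_head ts;    /* used when a PES filter is set up     */
                struct {
                        void *filter;   /* section-filter state                 */
                } sec;                  /* used when a section filter is set up */
        } feed;
};

/* Setting up a section filter writes feed.sec and clobbers feed.ts, so
 * dvb_dmxdev_pes_filter_set() must run INIT_LIST_HEAD(&f->feed.ts) again
 * instead of relying on the initialization done once at open() time. */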
Signed-off-by: Francesco Lavra <francescolavra@interfree.it>
Reviewed-by: Andy Walls <awalls@radix.net>
Tested-by: hermann pitton <hermann-pitton@arcor.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9eb07c2592 upstream.
This code was written long ago when it was not possible to
reshape a degraded array. Now that it is, the current level of
degraded-ness needs to be taken into account. Also, newly added
devices should only reduce degradedness if they are deemed to be
in-sync.
In particular, if you convert a RAID5 to a RAID6, and increase the
number of devices at the same time, then the 5->6 conversion will
make the array degraded so the current code will produce a wrong
value for 'degraded' - "-1" to be precise.
If the reshape runs to completion end_reshape will calculate a correct
new value for 'degraded', but if a device fails during the reshape an
incorrect decision might be made based on the incorrect value of
"degraded".
This patch is suitable for 2.6.32-stable and if they are still open,
2.6.31-stable and 2.6.30-stable as well.
Reported-by: Michael Evans <mjevans1983@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fdcb45777a upstream.
It was recently pointed out that the NFSERR_SERVERFAULT error, which is
designed to inform the user of a serious internal error on the server, was
being mapped to an error value that is internal to the kernel.
This patch maps it to the error EREMOTEIO, which is exported to userland
through errno.h.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 387c149b54 upstream.
Ensure that we unregister the bdi before kill_anon_super() calls
ida_remove() on our device name.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9f557cd807 upstream.
The VM/VFS does not allow mapping->a_ops->invalidatepage() to fail.
Unfortunately, nfs_wb_page_cancel() may fail if a fatal signal occurs.
Since the NFS code assumes that the page stays mapped for as long as the
writeback is active, we can end up Oopsing (among other things).
The only safe fix here is to convert nfs_wait_on_request(), so as to make
it uninterruptible (as is already the case with wait_on_page_writeback()).
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 82be934a59 upstream.
If someone calls nfs_release_page(), we presumably already know that the
page is clean, however it may be holding an unstable write.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f12f98dba6 upstream.
cifs_from_ucs2 returns the length of the converted name, including the
length of the NULL terminator. We don't want to include the NULL
terminator in the dentry name length however since that'll throw off the
hash calculation for the dentry cache.
I believe that this is the root cause of several problems that have
cropped up recently that seem to be papered over with the "noserverino"
mount option. More confirmation of that would be good, but this is
clearly a bug and it fixes at least one reproducible problem that
was reported.
This patch fixes at least this reproducer in this kernel.org bug:
http://bugzilla.kernel.org/show_bug.cgi?id=15088#c12
Reported-by: Bjorn Tore Sund <bjorn.sund@it.uib.no>
Acked-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 803bf5ec25 upstream.
When reserving stack space for a new process, make sure we're not
attempting to expand the stack by more than rlimit allows.
This fixes a bug caused by b6a2fea393 ("mm:
variable length argument support") and unmasked by
fc63cf2370 ("exec: setup_arg_pages() fails
to return errors").
This bug means that when limiting the stack to less than 20*PAGE_SIZE (eg.
80K on 4K pages or 'ulimit -s 79') all processes will be killed before
they start. This is particularly bad with 64K pages, where a ulimit below
1280K will kill every process.
To test, do:
'ulimit -s 15; ls'
before and after the patch is applied. Before it's applied, 'ls' should
be killed. After the patch is applied, 'ls' should no longer be killed.
A stack limit of 15KB is used since it's small enough to trigger 20*PAGE_SIZE.
Also, 15KB is not a multiple of PAGE_SIZE, which is a trickier case to handle
correctly with this code.
4K pages should be fine to test with.
[kosaki.motohiro@jp.fujitsu.com: cleanup]
[akpm@linux-foundation.org: cleanup cleanup]
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Americo Wang <xiyou.wangcong@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Serge Hallyn <serue@us.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3e10e716ab upstream.
We want to be sure that compiler fetches the limit variable only
once, so add helpers for fetching current and maximal resource
limits which do that.
Add them to sched.h (instead of resource.h) due to circular dependency
sched.h->resource.h->task_struct
Alternative would be to create a separate res_access.h or similar.
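A sketch of such helpers, assuming the signal_struct rlim[] layout of this
kernel series and ACCESS_ONCE(); not necessarily the exact final code:
/* Fetch the limit exactly once, so the compiler cannot re-read a value
 * that another thread may be changing concurrently. */
static inline unsigned long task_rlimit(const struct task_struct *tsk,
                                        unsigned int limit)
{
        return ACCESS_ONCE(tsk->signal->rlim[limit].rlim_cur);
}

static inline unsigned long rlimit(unsigned int limit)
{
        return task_rlimit(current, limit);
}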
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: James Morris <jmorris@namei.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7e55a70c5b upstream.
Fix a typo in ioat2_quiesce: check that 'tmo' is zero, not 'end'. Also applies
to 2.6.32.3.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 531c2dc70d upstream.
It is possible (and expected) for there to be holes in the h->drv[]
array, that is, some elements may be NULL pointers. cciss_seq_show
needs to be made aware of this possibility to avoid an Oops.
To reproduce the Oops which this fixes:
1) Create two "arrays" in the Array Configuration Utility and
several logical drives on each array.
2) cat /proc/driver/cciss/cciss* in an infinite loop
3) delete some of the logical drives in the first "array."
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4b06e5b9ad upstream.
Thanks Thomas and Christoph for testing and review.
I removed 'smp_wmb()' before up_write from the previous patch,
since up_write() should have necessary ordering constraints.
(I.e. the change of s_frozen is visible to others after up_write)
I'm quite sure the change is harmless but if you are uncomfortable
with Tested-by/Reviewed-by on the modified patch, please remove them.
If MS_RDONLY, freeze_bdev should just up_write(s_umount) instead of
deactivate_locked_super().
Also, keep sb->s_frozen consistent so that remount can check the frozen state.
Otherwise a crash reported here can happen:
http://lkml.org/lkml/2010/1/16/37
http://lkml.org/lkml/2010/1/28/53
This patch should be applied for 2.6.32 stable series, too.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Thomas Backlund <tmb@mandriva.org>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 557a701c16 upstream.
Easy fix for a regression introduced in 2.6.31.
On managed CPUs the cpufreq.c core will call driver->exit(cpu) on the
managed cpus and powernow_k8 will free the core's data.
Later the driver->get(cpu) function might get called, trying to read out the
current freq of a managed cpu, and the NULL pointer check does not work on
the freed object -> better set it to NULL.
->get() is unsigned and must return 0 as invalid frequency.
Reference:
http://bugzilla.kernel.org/show_bug.cgi?id=14391
Signed-off-by: Thomas Renninger <trenn@suse.de>
Tested-by: Michal Schmidt <mschmidt@redhat.com>
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fed08d036f upstream.
On my AMD780V chipset, hda_intel.c can crash the kernel with a divide by
zero for as-yet unknown reasons. A simple check for zero prevents it,
though the problem that causes it remains. Since the workaround is harmless
and won't affect anyone except victims of this bug, it should be safe;
moreover, because this crash can be triggered by a user-mode application,
there are denial of service implications on the systems affected by the bug
without the patch.
Signed-off-by: Jody Bruchon <jody@nctritech.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 973e9a2795 upstream.
If the regulator constraints are empty and there is no voltage
reported then nothing will be added to the text displayed for the
constraints, leading to random stack data being printed. This is
unlikely to happen for practical regulators since most will at
least report a voltage but should still be fixed.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Liam Girdwood <lrg@slimlogic.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 99fcb766a3 upstream.
Before changing the status of a buffer with a pending write we will await
upon a new flush for that buffer. So we can take advantage of any flushes
posted whilst the buffer is active and pending processing by the GPU, by
clearing its write_domain and updating its last_rendering_seqno -- thus
saving a potential flush in deep queues and improves flushing behaviour
upon eviction for both GTT space and fences.
In order to reduce the time spent searching the active list for matching
write_domains, we move those to a separate list whose elements are
the buffers belonging to the active/flushing list with pending writes.
Original patch by Chris Wilson <chris@chris-wilson.co.uk>, forward-ported
by me.
In addition to better performance, this also fixes a real bug. Before
this changes, i915_gem_evict_everything didn't work as advertised. When
the gpu was actually busy and processing request, the flush and subsequent
wait would not move active and dirty buffers to the inactive list, but
just to the flushing list. Which triggered the BUG_ON at the end of this
function. With the more tight dirty buffer tracking, all currently busy and
dirty buffers get moved to the inactive list by one i915_gem_flush operation.
I've left the BUG_ON I've used to prove this in there.
References:
Bug 25911 - 2.10.0 causes kernel oops and system hangs
http://bugs.freedesktop.org/show_bug.cgi?id=25911
Bug 26101 - [i915] xf86-video-intel 2.10.0 (and git) triggers kernel oops
within seconds after login
http://bugs.freedesktop.org/show_bug.cgi?id=26101
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Adam Lantos <hege@playma.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fd2e8ea597 upstream.
An untiled framebuffer must be aligned to 64k. This is normally handled
by intel_pin_and_fence_fb_obj(), but the intelfb_create() likes to be
different and do the pinning itself. However, it aligns the buffer
object incorrectly for pre-i965 chipsets causing a PGTBL_ERR when it is
installed onto the output.
Fixes:
KMS error message while initializing modesetting -
render error detected: EIR: 0x10 [i915]
http://bugs.freedesktop.org/show_bug.cgi?id=22936
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2717568e7c upstream.
This implements the same D-cache flushing logic for r8a66597-hcd as
Catalin's isp1760 (http://patchwork.kernel.org/patch/76391/) change,
with the same note applying here as well:
When the HDC driver writes the data to the transfer buffers it
pollutes the D-cache (unlike DMA drivers where the device writes
the data). If the corresponding pages get mapped into user space,
there are no additional cache flushing operations performed and
this causes random user space faults on architectures with
separate I and D caches (Harvard) or those with aliasing D-cache.
This fixes up crashes during USB boot on SH7724 and others:
http://marc.info/?l=linux-sh&m=126439837308912&w=2
Reported-by: Goda Yusuke <goda.yusuke@renesas.com>
Tested-by: Goda Yusuke <goda.yusuke@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f0217c42c9 upstream.
This is a sync of a fix I made in the old UMS code. If the BIOS uses
the GMBUS and doesn't clear that setup, then our bit-banging I2C can
fail, leading to monitors not being detected.
Signed-off-by: Eric Anholt <eric@anholt.net>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 33c5fd121e upstream.
Self Refresh should be disabled on dual plane configs. Otherwise, as
the SR watermark is not calculated for such configs, switching to non
VGA mode causes FIFO underrun and display flicker.
This fixes Korg Bug #14897.
Signed-off-by: David John <davidjon@xenontk.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eceb784cec upstream.
This tries to fix the CRT detect loop hang seen on some Ironlake form
factors, by clearing up the hotplug detect state before CRT detection
to make sure the next hotplug detect cycle is consistent.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 21956b61f5 upstream.
After hours of debugging, I finally found the reason why some source
and runtime combination does not work. The PTP (page table pages)
address must be aligned. I am not sure how much, but alignment to
PAGE_SIZE is sufficient. Also, use ALSA's page allocation routines
to ensure proper virtual -> physical address translation.
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 85f8d3e5fa upstream.
The #define ADT7462_VOLT_COUNT is wrong; it should be 13, not 12. All the
for loops that use this as a limit count are of the typical form, "for
(n = 0; n < ADT7462_VOLT_COUNT; n++)", so to loop through all voltages
without missing the last one the count needs to be one greater than it
currently is. (Specifically, you will miss the +1.5V 3GPIO input with
count = 12 vs. 13.)
Signed-off-by: Ray Copeland <ray.copeland@aprius.com>
Acked-by: "Darrick J. Wong" <djwong@us.ibm.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 197027e6ef upstream.
Different motherboards have different PNP declarations for LM78/LM79
chips. Some declare the whole range of I/O ports (8 ports), some
declare only the useful ports (2 ports at offset 5) and some declare
fancy ranges, for example 4 ports at offset 4. To properly handle all
cases, request all ports individually for probing. After we have
determined that we really have an LM78 or LM79 chip, the useful port
range will be requested again, as a single block.
This fixes the driver on the Olivetti M3000 DT 540, at least.
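A rough sketch of the probing change (constant names and error handling
illustrative, not the exact driver code):

  for (port = address; port < address + LM78_EXTENT; port++) {
          if (!request_region(port, 1, "lm78")) {
                  /* give back what we already grabbed, then bail out */
                  while (--port >= address)
                          release_region(port, 1);
                  return -EBUSY;
          }
  }
  /* ... probe the chip, drop the single-port regions, and finally
     request the useful 2-port block at offset 5 as one region ... */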
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b0bcdd3cd0 upstream.
Different motherboards have different PNP declarations for
W83781D/W83782D chips. Some declare the whole range of I/O ports (8
ports), some declare only the useful ports (2 ports at offset 5) and
some declare fancy ranges, for example 4 ports at offset 4. To
properly handle all cases, request all ports individually for probing.
After we have determined that we really have a W83781D or W83782D
chip, the useful port range will be requested again, as a single
block.
I did not see a board which needs this yet, but I know of one for lm78
driver and I'd like to keep the logic of these two drivers in sync.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 80e1e82398 upstream.
This reverts commit 7036251180 ("tty: fix race in tty_fasync") and
commit b04da8bfdf ("fnctl: f_modown should call write_lock_irqsave/
restore") that tried to fix up some of the fallout but was incomplete.
It turns out that we really cannot hold 'tty->ctrl_lock' over calling
__f_setown, because not only did that cause problems with interrupt
disables (which the second commit fixed), it also causes a potential
ABBA deadlock due to lock ordering.
Thanks to Tetsuo Handa for following up on the issue, and running
lockdep to show the problem. It goes roughly like this:
- f_getown gets filp->f_owner.lock for reading without interrupts
disabled, so an interrupt that happens while that lock is held can
cause a lockdep chain from f_owner.lock -> sighand->siglock.
- at the same time, the tty->ctrl_lock -> f_owner.lock chain that
commit 7036251180 introduced, together with the pre-existing
sighand->siglock -> tty->ctrl_lock chain means that we have a lock
dependency the other way too.
So instead of extending tty->ctrl_lock over the whole __f_setown() call,
we now just take a reference to the 'pid' structure while holding the
lock, and then release it after having done the __f_setown. That still
guarantees that 'struct pid' won't go away from under us, which is all
we really ever needed.
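The resulting pattern is roughly the following sketch (simplified; the
pid refcounting helpers are the standard get_pid()/put_pid() API):

  spin_lock_irqsave(&tty->ctrl_lock, flags);
  /* pick the session or process group as before ... */
  pid = get_pid(pid);                /* pin the struct pid while locked */
  spin_unlock_irqrestore(&tty->ctrl_lock, flags);

  __f_setown(filp, pid, type, 0);    /* no tty lock held across this */
  put_pid(pid);                      /* drop our temporary reference */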
Reported-and-tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Américo Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 59647b6ac3 upstream.
The WARN_ON in lookup_pi_state which complains about a mismatch
between pi_state->owner->pid and the pid which we retrieved from the
user space futex is completely bogus.
The code just emits the warning and then continues despite the fact
that it detected an inconsistent state of the futex. A convenient way
for user space to spam the syslog.
Replace the WARN_ON by a consistency check. If the values do not match
return -EINVAL and let user space deal with the mess it created.
This also fixes the missing task_pid_vnr() when we compare the
pi_state->owner pid with the futex value.
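In sketch form, the consistency check that replaces the WARN_ON looks
roughly like this (uval being the value read from the user space futex
word):

  pid_t pid = uval & FUTEX_TID_MASK;

  /* user space might have messed up the futex; bail instead of warning */
  if (!pi_state->owner || pid != task_pid_vnr(pi_state->owner))
          return -EINVAL;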
Reported-by: Jermome Marchand <jmarchan@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Darren Hart <dvhltc@us.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 51246bfd18 upstream.
If the owner of a PI futex dies we fix up the pi_state and set
pi_state->owner to NULL. When a malicious or just sloppily programmed
user space application sets the futex value to 0, e.g. by calling
pthread_mutex_init(), then the futex can be acquired again. A new
waiter manages to enqueue itself on the pi_state w/o damage, but on
unlock the kernel dereferences pi_state->owner and oopses.
Prevent this by checking pi_state->owner in the unlock path. If
pi_state->owner is not current we know that user space manipulated the
futex value. Ignore the mess and return -EINVAL.
This catches the above case and also the case where a task hijacks the
futex by setting the tid value and then tries to unlock it.
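A minimal sketch of the added check in the unlock path:

  /*
   * pi_state->owner is NULL if the owner died and the value was reset,
   * or some other task if the futex word was hijacked; in both cases
   * refuse to unlock instead of dereferencing a bogus owner.
   */
  if (pi_state->owner != current)
          return -EINVAL;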
Reported-by: Jermome Marchand <jmarchan@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Darren Hart <dvhltc@us.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5ecb01cfdf upstream.
This fixes a futex key reference count bug in futex_lock_pi(),
where a key's reference count is incremented twice but decremented
only once, causing the backing object to not be released.
If the futex is created in a temporary file in an ext3 file system,
this bug causes the file's inode to become an "undead" orphan,
which causes an oops from a BUG_ON() in ext3_put_super() when the
file system is unmounted. glibc's test suite is known to trigger this,
see <http://bugzilla.kernel.org/show_bug.cgi?id=14256>.
The bug is a regression from 2.6.28-git3, namely Peter Zijlstra's
38d47c1b70 "[PATCH] futex: rely on
get_user_pages() for shared futexes". That commit made get_futex_key()
also increment the reference count of the futex key, and updated its
callers to decrement the key's reference count before returning.
Unfortunately the normal exit path in futex_lock_pi() wasn't corrected:
the reference count is incremented by get_futex_key() and queue_lock(),
but the normal exit path only decrements once, via unqueue_me_pi().
The fix is to call put_futex_key() after unqueue_me_pi(); since 2.6.31
this is easily done by 'goto out_put_key' rather than 'goto out'.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Darren Hart <dvhltc@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6f5a55f1a6 upstream.
We incorrectly depended on the 'node_state/node_isset()' functions
testing the node range, rather than checking it explicitly. That's not
reliable, even if it might often happen to work. So do the proper
explicit test.
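A sketch of the explicit test (error handling illustrative):

  if (node < 0 || node >= MAX_NUMNODES)
          goto out;                     /* reject out-of-range nodes */
  if (!node_state(node, N_HIGH_MEMORY))
          goto out;                     /* node exists but is not usable */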
Reported-by: Marcus Meissner <meissner@suse.de>
Acked-and-tested-by: Brice Goglin <Brice.Goglin@inria.fr>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This fixes the boot time oops on the 2.6.32-stable tree. It is needed
only in this tree due to the divergence from upstream.
From: jamal <hadi@cyberus.ca>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94f28da840 upstream.
Here are the powerpc bits to remove TIF_ABI_PENDING now that
set_personality() is called at the appropriate place in exec.
Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 74401773f8 upstream.
When cleaning up beacon buffers and slots, ath9k currently checks if
sc->ah->opmode is set to a beacon related mode before cleaning up
buffers.
An unfortunate ordering of interface up/down commands can lead to
sc->ah->opmode being set to monitor mode, while there are AP interfaces
present on the same wiphy.
Always cleaning up beacon buffers if present fixes this issue.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aa8bc9ef18 upstream.
Among other changes, this commit:
commit 06d0f0663e
Author: Sujith <Sujith.Manoharan@atheros.com>
Date: Thu Feb 12 10:06:45 2009 +0530
ath9k: Enable Fractional N mode
changed the hw attach code to fix up initialization values only for
dual band devices; however, the commit message did not give a reason
why this would be useful or necessary.
According to tests by Jorge Boncompte, this breaks at least some
2GHz-only cards, so the code should be changed back to the
unconditional INI fixup.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Reported-by: Jorge Boncompte <jorge@dti2.net>
Tested-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ca0bf64d99 upstream.
This is the counterpart to cba767175b
("pktcdvd: remove broken dev_t export of class devices"). Device is not
registered using dev_t, so it should not be destroyed using device_destroy
which looks up the device by dev_t. This will fail and adding the device
again will fail with the "duplicate name" error. This is fixed using
device_unregister instead of device_destroy.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Cc: Peter Osterlund <petero2@telia.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b8a1d37c5f upstream.
Free memory allocated using kmem_cache_zalloc using kmem_cache_free rather
than kfree.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@@
expression x,E,c;
@@
x = \(kmem_cache_alloc\|kmem_cache_zalloc\|kmem_cache_alloc_node\)(c,...)
... when != x = E
when != &x
?-kfree(x)
+kmem_cache_free(c,x)
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Acked-by: David Howells <dhowells@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Steve Dickson <steved@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b3cb537218 upstream.
Fix the kernel oops when dev_dbg is called with mx3_fbi->txd == NULL
Fix the late initialisation of mx3fb->backlight_level. Without it, the
chain of functions started by init_fb_chan() ends up calling
sdc_set_brightness(mx3fb, mx3fb->backlight_level) from __blank(), which
shuts down the CONTRAST PWM output.
Signed-off-by: Alberto Panizzo <maramaopercheseimorto@gmail.com>
Acked-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1ec562035b upstream.
The probe function passes a pointer to a struct fb_info to
platform_set_drvdata(), so don't interpret the return value of
platform_get_drvdata() as a pointer to struct imxfb_info.
The original imxfb_info *fbi backlight_power was NULL, but in
imxfb_suspend it was 4, resulting in an oops because imxfb_suspend
calls imxfb_disable_controller(fbi), which in turn has
if (fbi->backlight_power)
fbi->backlight_power(0);
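In other words, the suspend/resume paths should recover the driver data
along these lines (sketch):

  struct fb_info *info = platform_get_drvdata(pdev);
  struct imxfb_info *fbi = info->par;   /* not the drvdata pointer itself */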
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Sascha Hauer <kernel@pengutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3092ad0544 upstream.
I got the kernel oops below when trying to bring down the network
interface with ftrace enabled. The root cause is that drv_ampdu_action()
is passed a NULL ssn pointer in the BA session tear down case. We need
to check for this and avoid dereferencing it in the trace entry
assignment.
BUG: unable to handle kernel NULL pointer dereference
Modules linked in: at (null)
IP: [<f98fe02a>] ftrace_raw_event_drv_ampdu_action+0x10a/0x160 [mac80211]
*pde = 00000000
Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[...]
Call Trace:
[<f98fdf20>] ? ftrace_raw_event_drv_ampdu_action+0x0/0x160 [mac80211]
[<f98dac4c>] ? __ieee80211_stop_rx_ba_session+0xfc/0x220 [mac80211]
[<f98d97fb>] ? ieee80211_sta_tear_down_BA_sessions+0x3b/0x50 [mac80211]
[<f98dc6f6>] ? ieee80211_set_disassoc+0xe6/0x230 [mac80211]
[<f98dc6ac>] ? ieee80211_set_disassoc+0x9c/0x230 [mac80211]
[<f98dcbb8>] ? ieee80211_mgd_deauth+0x158/0x170 [mac80211]
[<f98e4bdb>] ? ieee80211_deauth+0x1b/0x20 [mac80211]
[<f8987f49>] ? __cfg80211_mlme_deauth+0xe9/0x120 [cfg80211]
[<f898b870>] ? __cfg80211_disconnect+0x170/0x1d0 [cfg80211]
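The fix guards the trace entry assignment, roughly:

  /* in the drv_ampdu_action trace event assignment */
  __entry->ssn = ssn ? *ssn : 0;        /* ssn is NULL on BA teardown */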
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 931e80e4b3 upstream.
The cache alias problem happens if the changes to a user shared mapping
are not flushed before copying: the user and kernel mappings may end up
in two different cache lines, and it is impossible to guarantee
coherence after iov_iter_copy_from_user_atomic. So the right steps
should be:
flush_dcache_page(page);
kmap_atomic(page);
write to page;
kunmap_atomic(page);
flush_dcache_page(page);
More precisely, we might create two new APIs flush_dcache_user_page and
flush_dcache_kern_page to replace the two flush_dcache_page accordingly.
Here is a snippet tested on omap2430 with VIPT cache, and I think it is
not ARM-specific:
int val = 0x11111111;
fd = open("abc", O_RDWR);
addr = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
*(addr+0) = 0x44444444;
tmp = *(addr+0);
*(addr+1) = 0x77777777;
write(fd, &val, sizeof(int));
close(fd);
The results are not always 0x11111111 0x77777777 at the beginning as
expected. Sometimes we see 0x44444444 0x77777777.
Signed-off-by: Anfei <anfei.zhou@gmail.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f98bfbd78c upstream.
On Tue, Feb 02, 2010 at 02:57:14PM -0800, Greg KH (gregkh@suse.de) wrote:
> > There are at least two ways to fix it: using a big cannon and a small
> > one. The former way is to disable notification registration, since it is
> > not used by anyone at all. Second way is to check whether calling
> > process is root and its destination group is -1 (kind of privileged
> > one) before command is dispatched to workqueue.
>
> Well if no one is using it, removing it makes the most sense, right?
>
> No objection from me, care to make up a patch either way for this?
Given it is not used, let's drop support for notifications about
(un)registered events from connector.
Another option was to check credentials on receive, but we can always
restore this later without bugs if needed; genetlink has a wider code
base and nobody has complained that userspace cannot get a notification
when some other client was (un)registered.
Kudos to Sebastian Krahmer <krahmer@suse.de>, who found a bug in the
code.
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e5ff15bec9 upstream.
This patch improves disable_controller() in the r8a66597-hcd
driver to disable all interrupts and clear status flags. It
also makes sure that disable_controller() is called during
probe(). This fixes the relatively rare case of unexpected
pending interrupts after kexec reboot.
Signed-off-by: Magnus Damm <damm@opensource.se>
Acked-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9e9432c267 upstream.
Fix two bugs in the bio integrity code:
use_bip_pool() always returns 0 because it checks against the wrong limit,
causing the mempool to be used only when regular allocation fails.
When the mempool is used as a fallback we don't free the data properly.
Signed-Off-By: Chuck Ebbert <cebbert@redhat.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a996996dd7 upstream.
No other driver does anything remotely like this that I know of except
for the tty drivers, and I can't see any reason for random/urandom to do
it. In fact, it's a (trivial, harmless) timing information leak. And
obviously, it generates power- and flash-cycle wasting I/O, especially
if combined with something like hwrngd. Also, it breaks ubifs's
expectations.
Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7ab02af428 upstream.
Commit 221af7f87b ("Split 'flush_old_exec' into two functions") split
the function at the point of no return - ie right where there were no
more error cases to check. That made sense from a technical standpoint,
but when we then also combined it with the actual personality setting
going in between flush_old_exec() and setup_new_exec(), it needs to be a
bit more careful.
In particular, we need to make sure that we really flush the old
personality bits in the 'flush' stage, rather than later in the 'setup'
stage, since otherwise we might be flushing the _new_ personality state
that we're just setting up.
So this moves the flags and personality flushing (and 'flush_thread()',
which is the arch-specific function that generally resets lazy FP state
etc) of the old process into flush_old_exec(), so that it doesn't affect
any state that execve() is setting up for the new process environment.
This was reported by Michal Simek as breaking his Microblaze qemu
environment.
Reported-and-tested-by: Michal Simek <michal.simek@petalogix.com>
Cc: Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1d6165851c upstream.
We have to properly decrease bi_size in order for merge_bvec_fn to
return the right result. Otherwise this results in false merge rejects
for two absolutely valid bio_vecs. This may cause a significant
performance penalty, for example when fs_block_size == 1k and the block
device is raid0 with a small chunk_size = 8k. Then it is impossible to
merge the 7th fs-block into a bio which already has 6 fs-blocks.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 02b709df81 upstream.
Improve handling of fragmented per-CPU vmaps. Previously we did not
free up per-CPU maps until all of their addresses had been used and
freed. So fragmented blocks could fill up vmalloc space even if they
actually had no active vmap regions within them.
Add some logic to allow all CPUs to have these blocks purged in the case
of failure to allocate a new vm area, and also put some logic to trim
such blocks of a current CPU if we hit them in the allocation path (so
as to avoid a large build up of them).
Christoph reported some vmap allocation failures when using the per CPU
vmap APIs in XFS, which cannot be reproduced after this patch and the
previous bug fix.
Cc: linux-mm@kvack.org
Tested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit de5604231c upstream.
RCU list walking of the per-cpu vmap cache was broken. It did not use
RCU primitives, and also the union of free_list and rcu_head is
obviously wrong (because free_list is indeed the list we are RCU
walking).
While we are there, remove a couple of unused fields from an earlier
iteration.
These APIs aren't actually used anywhere, because of problems with the
XFS conversion. Christoph has now verified that the problems are solved
with these patches. Also it is an exported interface, so I think it
will be good to be merged now (and Christoph wants to get the XFS
changes into their local tree).
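For reference, a correct RCU walk of such a list looks roughly like this
(structure and field names illustrative):

  rcu_read_lock();
  list_for_each_entry_rcu(vb, &vbq->free, free_list) {
          /* inspect the block; take vb->lock before modifying it */
  }
  rcu_read_unlock();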
Cc: linux-mm@kvack.org
Tested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5040ab67a2 upstream.
Interestingly, when SIDPR is used in ata_piix, writes to DET in
SControl sometimes get ignored leading to detection failure. Update
sata_link_resume() such that it reads back SControl after clearing DET
and retry if it's not clear.
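The shape of the change is roughly the following (retry count
illustrative):

  int tries = 5;

  do {
          scontrol &= ~0xf;                       /* clear DET */
          rc = sata_scr_write(link, SCR_CONTROL, scontrol);
          if (rc)
                  break;
          rc = sata_scr_read(link, SCR_CONTROL, &scontrol);
  } while (!rc && (scontrol & 0xf) && --tries);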
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: fengxiangjun <fengxiangjun@neusoft.com>
Reported-by: Jim Faulkner <jfaulkne@ccs.neu.edu>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d8cc108f4f upstream.
With multiplexing enabled, oprofile crashes when profiling more than 28
events. This patch fixes this.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e83e452b06 upstream.
Add Xeon 7500 series support to oprofile.
Straight forward: it's the same as Core i7, so just detect
the model number. No user space changes needed.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from afbcf7ab8d)
When we migrate a kvm guest that uses pvclock between two hosts, we may
suffer a large skew. This is because there can be significant differences
between the monotonic clock of the hosts involved. When a new host with
a much larger monotonic time starts running the guest, the view of time
will be significantly impacted.
Situation is much worse when we do the opposite, and migrate to a host with
a smaller monotonic clock.
This proposed ioctl will allow userspace to inform us what the
monotonic clock value in the source host is, so we can keep the time
skew short and, more importantly, make sure time never goes backwards.
Userspace may also need to trigger
the current data, since from the first migration onwards, it won't be
reflected by a simple call to clock_gettime() anymore.
[marcelo: future-proof abi with a flags field]
[jan: fix KVM_GET_CLOCK by clearing flags field instead of checking it]
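The resulting ABI is roughly the following (check the kvm uapi header
for the authoritative layout):

  struct kvm_clock_data {
          __u64 clock;    /* kvmclock value, in nanoseconds */
          __u32 flags;    /* currently must be zero */
          __u32 pad[9];
  };
  /* consumed by the KVM_GET_CLOCK / KVM_SET_CLOCK vm ioctls */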
Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 28f6aeea3f ]
When using policy routing and the skb mark, there are cases where
reverse path validation requires us to use a different routing table
for src ip validation than the one used for mapping the ingress dst ip.
One such case is transparent proxying, where we pretend to be the
destination system and therefore the local table is used for incoming
packets, but possibly a main table would be used on outbound.
Make the default behavior allow the above; users who need the symmetry
can turn it on via the src_valid_mark sysctl.
Signed-off-by: Jamal Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 9db2f1bec3 ]
During TX timeout procedure dev could be awoken too early, e.g. by
sky2_complete_tx() called from sky2_down(). Then sky2_xmit_frame()
can run while buffers are freed causing an oops. This patch fixes it
by adding netif_device_present() test in sky2_tx_complete().
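One plausible shape of the added test (condition simplified, not the
exact driver code):

  /* only wake the tx queue if the device hasn't been detached */
  if (tx_queue_has_room && netif_device_present(dev))
          netif_wake_queue(dev);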
Fixes: http://bugzilla.kernel.org/show_bug.cgi?id=14925
With debugging by: Mike McCormack <mikem@ring3k.org>
Reported-by: Berck E. Nash <flyboy@gmail.com>
Tested-by: Berck E. Nash <flyboy@gmail.com>
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a362c638bd upstream
Commit a9238ce3bb broke compilation on
platforms that do not implement GENERIC_TIME (e.g. iop32x):
kernel/time/clocksource.c: In function 'clocksource_register':
kernel/time/clocksource.c:556: error: implicit declaration of function 'clocksource_max_deferment'
Provide the implementation of clocksource_max_deferment() also for
such platforms.
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d91afd15b0 upstream.
The variable i in this function could be increased to over
2**32 which would result in an integer overflow when using
int. Fix it by changing i to unsigned long.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a2fad9bf26 upstream.
The WM8350 LED driver needs to be able to enable and disable the
regulators it is using. Previously the core wasn't properly enforcing
status change constraints, so the driver was able to function, but this
has always been intended to be required.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Liam Girdwood <lrg@slimlogic.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 17740d8978 upstream.
Don't pass current RLIMIT_RTTIME to update_rlimit_cpu() in
selinux_bprm_committing_creds, since update_rlimit_cpu expects
RLIMIT_CPU limit.
Use proper rlim[RLIMIT_CPU].rlim_cur instead to fix that.
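That is, the call in selinux_bprm_committing_creds becomes something
like this (rlim being the task's rlimit array, as in the description
above):

  /* pass the CPU time limit, not the RT time limit */
  update_rlimit_cpu(rlim[RLIMIT_CPU].rlim_cur);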
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Acked-by: James Morris <jmorris@namei.org>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Eric Paris <eparis@parisplace.org>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Backport of commit e300839da4 upstream.
Presently, firewire-core only checks whether descriptors that are to be
added by userspace drivers to the local node's config ROM do not exceed
a size of 256 quadlets. However, the sum of the bare minimum ROM plus
all descriptors (from firewire-core, from firewire-net, from userspace)
must not exceed 256 quadlets.
Otherwise, the bounds of a statically allocated buffer will be
overwritten. If the kernel survives that, firewire-core will
subsequently be unable to parse the local node's config ROM.
(Note, userspace drivers can add descriptors only through device files
of local nodes. These are usually only accessible by root, unlike
device files of remote nodes which may be accessible to lesser
privileged users.)
Therefore add a test which takes the actual present and required ROM
size into account for all descriptors of kernelspace and userspace
drivers.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b01f2c3a4a upstream.
This patch changes around our hotplug enable code a bit to only enable
it for ports we actually detect and initialize. This prevents problems
with stuck or spurious interrupts on outputs that aren't actually wired
up, and is generally more correct.
Fixes FDO bug #23183.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4d80d7210b upstream.
Multiple MPDUs can be aggregated, transmitted, and finally acknowledged
together using a single BA frame. A Block ACK (BA) contains a bitmap
of 64*16 bits, so the maximum frame count is 64.
The default value of the aggregation frame count suggested by uCode is
31, to achieve the best performance.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 93fb84b50f upstream.
I missed converting one dev_info call to deb_dbg before submitting the driver.
Without this change, a message will be printed to dmesg for each button
press if an RC6 remote is used.
Signed-off-by: David Härdeman <david@hardeman.nu>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 05d43ed8a8 upstream.
Now that the previous commit made it possible to do the personality
setting at the point of no return, we do just that for ELF binaries.
And suddenly all the reasons for that insane TIF_ABI_PENDING bit go
away, and we can just make SET_PERSONALITY() just do the obvious thing
for a 32-bit compat process.
Everything becomes much more straightforward this way.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94673e968c upstream.
Here are the sparc bits to remove TIF_ABI_PENDING now that
set_personality() is called at the appropriate place in exec.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 221af7f87b upstream.
'flush_old_exec()' is the point of no return when doing an execve(), and
it is pretty badly misnamed. It doesn't just flush the old executable
environment, it also starts up the new one.
Which is very inconvenient for things like setting up the new
personality, because we want the new personality to affect the starting
of the new environment, but at the same time we do _not_ want the new
personality to take effect if flushing the old one fails.
As a result, the x86-64 '32-bit' personality is actually done using this
insane "I'm going to change the ABI, but I haven't done it yet" bit
(TIF_ABI_PENDING), with SET_PERSONALITY() not actually setting the
personality, but just the "pending" bit, so that "flush_thread()" can do
the actual personality magic.
This patch in no way changes any of that insanity, but it does split the
'flush_old_exec()' function up into a preparatory part that can fail
(still called flush_old_exec()), and a new part that will actually set
up the new exec environment (setup_new_exec()). All callers are changed
to trivially comply with the new world order.
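After this change a binfmt loader's setup sequence looks roughly like
this (label name illustrative):

  retval = flush_old_exec(bprm);     /* preparatory part, may still fail */
  if (retval)
          goto out;

  /* personality / arch specific setup can safely go here,
     e.g. SET_PERSONALITY(...) */

  setup_new_exec(bprm);              /* point of no return */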
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 04e4f2b18c upstream.
The current code will load the stack size and protection markings, but
then only use the markings in the MMU code path. The NOMMU code path
always passes PROT_EXEC to the mmap() call. While this doesn't matter
to most people whilst the code is running, it will cause a pointless
icache flush when starting every FDPIC application. Typically this
icache flush will be of a region on the order of 128KB in size, or may
be the entire icache, depending on the facilities available on the CPU.
In the case where the arch default behaviour seems to be desired
(EXSTACK_DEFAULT), we probe VM_STACK_FLAGS for VM_EXEC to determine
whether we should be setting PROT_EXEC or not.
For arches that support an MPU (Memory Protection Unit - an MMU without
the virtual mapping capability), setting PROT_EXEC or not will make an
important difference.
It should be noted that this change also affects the executability of
the brk region, since ELF-FDPIC has that share with the stack. However,
this is probably irrelevant as NOMMU programs aren't likely to use the
brk region, preferring instead allocation via mmap().
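The protection selection then looks roughly like this sketch:

  int stack_prot = PROT_READ | PROT_WRITE;

  if (executable_stack == EXSTACK_ENABLE_X ||
      (executable_stack == EXSTACK_DEFAULT && (VM_STACK_FLAGS & VM_EXEC)))
          stack_prot |= PROT_EXEC;

  /* stack_prot is then used for the stack mapping instead of an
     unconditional PROT_EXEC */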
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a7016235a6 upstream.
After memory pressure has forced it to dip into the reserves, 2.6.32's
5f8dcc2121 "page-allocator: split per-cpu
list into one-list-per-migrate-type" has been returning MIGRATE_RESERVE
pages to the MIGRATE_MOVABLE free_list: in some sense depleting reserves.
Fix that in the most straightforward way (which, considering the overheads
of alternative approaches, is Mel's preference): the right migratetype is
already in page_private(page), but free_pcppages_bulk() wasn't using it.
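The fix is essentially a one-liner in free_pcppages_bulk() (sketch):

  /* use the migratetype stashed in page_private() when the page was
     put on the pcp list, not the type of the list being drained */
  __free_one_page(page, zone, 0, page_private(page));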
How did this bug show up? As a 20% slowdown in my tmpfs loop kbuild
swapping tests, on PowerMac G5 with SLUB allocator. Bisecting to that
commit was easy, but explaining the magnitude of the slowdown was not.
The same effect appears, but much less markedly, with SLAB, and even
less markedly on other machines (the PowerMac divides into fewer zones
than x86, I think that may be a factor). We guess that lumpy reclaim
of short-lived high-order pages is implicated in some way, and probably
this bug has been tickling a poor decision somewhere in page reclaim.
But instrumentation hasn't told me much, I've run out of time and
imagination to determine exactly what's going on, and shouldn't hold up
the fix any longer: it's valid, and might even fix other misbehaviours.
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 12e9a45609 upstream.
deactivate_locked_super() will be done by caller of fill_super, doing
it there as well is b0rken.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 217686e983 upstream.
Error handling in that sucker got broken back in 2003. If a function
returns 0 on failure, it's not nice to add a return -EINVAL into it.
Adding return 1 on other failure exits is also not a good thing (and
yes, the original success exits with 1 and some of the failure exits
with 0 are still there; so is the original logic in the callers).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 29333920a5 upstream.
A couple of fields in affs_sb_info are used in follow_link() and
symlink() for handling AFFS "absolute" symlinks. They need locking
against affs_remount() updates.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 083c73c253 upstream.
if 9P ->get_sb() fails late (at root inode or root dentry
allocation), we'll hit its ->kill_sb() with NULL ->s_root
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9926146b15 upstream.
When testing the "e1000: enhance frame fragment detection" (and e1000e)
patches we found some bugs with reducing the MTU size. The 1024 byte
descriptor used with the 1000 mtu test also (re) introduced the
(originally) reported bug, and causes us to need the e1000_clean_tx_irq
"enhance frame fragment detection" fix.
So what has occurred here is that 2.6.32 is only vulnerable for mtu <
1500 due to the jumbo specific routines in both e1000 and e1000e.
So, 2.6.32 needs the 2kB buffer len fix for those smaller MTUs, but
is not vulnerable to the original issue reported. It has been pointed
out that this vulnerability needs to be patched in older kernels that
don't have the e1000 jumbo routine. Without the jumbo routines, we
need the "enhance frame fragment detection" fix for e1000; old
e1000e is only vulnerable for < 1500 mtu and needs a similar
fix. We split the patches up to provide easy backport paths.
There is only a slight bit of extra code when this fix and the
original "enhance frame fragment detection" fixes are applied, so
please apply both, even though it is a bit of overkill.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b94b502896 upstream.
Originally patched by Neil Horman <nhorman@tuxdriver.com>
With a jumbo frame enabled interface and packet split disabled, e1000e
could receive a packet that would overflow a single rx buffer. While in
practice it is very hard to craft a packet that could abuse this, it is
possible.
This is related to CVE-2009-4538.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
CC: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 40a14deaf4 upstream.
Originally From: Neil Horman <nhorman@tuxdriver.com>
Modified by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Hey all-
A security discussion was recently given:
http://events.ccc.de/congress/2009/Fahrplan//events/3596.en.html
And a patch that I submitted a while back was brought up. Apparently some of
their testing revealed that they were able to force a buffer fragment in e1000
in which the trailing fragment was greater than 4 bytes. As a result the
fragment check I introduced failed to detect the fragment and a partial
invalid frame was passed up into the network stack. I've written this patch
to correct it. I'm in the process of testing it now, but it makes good
logical sense to me. Effectively it maintains a per-adapter state variable
which detects a non-EOP frame, and discards it and subsequent non-EOP frames
leading up to _and_ _including_ the next positive-EOP frame (as it is by
definition the last fragment). This should prevent any and all partial frames
from entering the network stack from e1000.
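The per-adapter state machine looks roughly like this (flag name
illustrative, simplified from the description above):

  if (adapter->discarding) {
          /* part of a previously seen partial frame: drop it, and stop
             discarding once the closing EOP descriptor is consumed */
          buffer_info->skb = skb;               /* recycle the buffer */
          if (status & E1000_RXD_STAT_EOP)
                  adapter->discarding = false;
          goto next_desc;
  }

  if (!(status & E1000_RXD_STAT_EOP)) {
          /* start of a fragmented frame: discard it and what follows */
          adapter->discarding = true;
          buffer_info->skb = skb;
          goto next_desc;
  }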
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6a4e2b7503 upstream.
If the BIOS pokes the system-wide OSC bits to see if Linux
supports evaluating _OST after a _PPC change notification,
answer yes.
Also, fix an oversight where we neglected to set the OSC
bit advertising processor aggregator device support
when acpi-pad is compiled as a module.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9dc130fccb upstream.
Executing _OSC returns a buffer, which has an acpi object in it.
Don't return the buffer directly; instead, return the acpi object's
buffer. This fixes a regression, since the caller of acpi_run_osc
expects an acpi object's buffer to be returned.
Tested-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 70023de88c upstream.
v2->v1:
.improve debug info as suggested by Bjorn, Kenji
.API is using uuid string as suggested by Alexey
Add an API to execute _OSC. A lot of devices can have this method, so add a
generic API.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 19b123ebac upstream.
In a case where the amount of input data is bigger than the modulus of
the key, the coprocessor adapters will report an 8/72 error. This case
is not caught yet, thus the adapter will be taken offline. To prevent
this, we return -EINVAL instead.
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 534ead7092 upstream.
libata currently doesn't retry if a command fails with AC_ERR_INVALID,
assuming that retrying won't get it any further.
However, a failure may be classified as invalid through a hardware
glitch (incorrect reading of the error register or a firmware bug) and
there isn't a whole lot to gain by not retrying, as actually invalid
commands will be failed immediately anyway. Also, commands serving FS
IOs are extremely unlikely to be invalid. Retry FS IOs even if they are
marked invalid.
Transient and incorrect invalid failure was seen while debugging
firmware related issue on Samsung n130 on bko#14314.
http://bugzilla.kernel.org/show_bug.cgi?id=14314
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b160091802 upstream.
CONFIG_X86_CPU_DEBUG, which provides some parsed versions of the x86
CPU configuration via debugfs, has caused boot failures on real
hardware. The value of this feature has been marginal at best, as all
this information is already available to userspace via generic
interfaces.
Causes crashes that have not been fixed + minimal utility -> remove.
See the referenced LKML thread for more information.
Reported-by: Ozan Çağlayan <ozan@pardus.org.tr>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <alpine.LFD.2.00.1001221755320.13231@localhost.localdomain>
Cc: Jaswinder Singh Rajput <jaswinder@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3a5fc0e40c upstream.
nodes_possible_map does not currently include nodes that have SRAT
entries that are all ACPI_SRAT_MEM_HOT_PLUGGABLE since the bit is
cleared in nodes_parsed if it does not have an online address range.
Unequivocally setting the bit in nodes_parsed is insufficient since
existing code, such as acpi_get_nodes(), assumes all nodes in the map
have online address ranges. In fact, all code using nodes_parsed
assumes such nodes represent an address range of online memory.
nodes_possible_map is created by unioning nodes_parsed and
cpu_nodes_parsed; the former represents nodes with online memory and
the latter represents memoryless nodes. We now set the bit for
hotpluggable nodes in cpu_nodes_parsed so that it also gets set in
nodes_possible_map.
[ hpa: Haicheng Li points out that this makes the naming of the
variable cpu_nodes_parsed somewhat counterintuitive. However, leave
it as is in the interest of keeping the pure bug fix patch small. ]
Signed-off-by: David Rientjes <rientjes@google.com>
Tested-by: Haicheng Li <haicheng.li@linux.intel.com>
LKML-Reference: <alpine.DEB.2.00.1001201152040.30528@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 21ec7f6dbf upstream.
If irq flags tracing is enabled the TRACE_IRQS_ON macros expands to
a function call which clobbers registers %r0-%r5. The macro is used
in the code path for single stepped system calls. The argument
registers %r2-%r6 need to be restored from the stack before the system
call function is called.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7a48143678 upstream.
Unsurprisingly, Texas Instruments TSB43AB23 exhibits the same behaviour
as TSB43AB22/A in dual buffer IR DMA mode: If descriptors are located
at physical addresses above the 31 bit address range (2 GB), the
controller will overwrite random memory. With luck, this merely
prevents video reception. With only a little less luck, the machine
crashes.
We use the same workaround here as with TSB43AB22/A: Switch off the
dual buffer capability flag and use packet-per-buffer IR DMA instead.
Another possible workaround would be to limit the coherent DMA mask to
31 bits.
In Linux 2.6.33, this change serves effectively only as documentation
since dual buffer mode is not used for any controller anymore. But
somebody might want to re-enable it in the future to make use of
features of dual buffer DMA that are not available in packet-per-buffer
mode.
In Linux 2.6.32 and older, this update is vital for anyone with this
controller, more than 2 GB RAM, a 64 bit kernel, and FireWire video or
audio applications.
We have at least four reports:
http://bugzilla.kernel.org/show_bug.cgi?id=13808
http://marc.info/?l=linux1394-user&m=126154279004083
https://bugzilla.redhat.com/show_bug.cgi?id=552142
http://marc.info/?l=linux1394-user&m=126432246128386
Reported-by: Paul Johnson
Reported-by: Ronneil Camara
Reported-by: G Zornetzer
Reported-by: Mark Thompson
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4bdadb9785 upstream.
Having missed the ENOMEM return via i915_gem_fault(), there are probably
other paths that I also missed. By not enabling NORETRY by default these
paths can run the shrinker and take memory from the system (but not from
our own inactive lists because our shrinker can not run whilst we hold
the struct mutex) and this may allow the system to survive a little longer
whilst our drivers consume all available memory.
References:
OOM killer unexpectedly called with kernel 2.6.32
http://bugzilla.kernel.org/show_bug.cgi?id=14933
v2: Pass gfp into page mapping.
v3: Use new read_cache_page_gfp() instead of open-coding.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0531b2aac5 upstream.
It's a simplified 'read_cache_page()' which takes a page allocation
flag, so that different paths can control how aggressive the memory
allocations are that populate a address space.
In particular, the intel GPU object mapping code wants to be able to do
a certain amount of own internal memory management by automatically
shrinking the address space when memory starts getting tight. This
allows it to dynamically use different memory allocation policies on a
per-allocation basis, rather than depend on the (static) address space
gfp policy.
The actual new function is a one-liner, but re-organizing the helper
functions to the point where you can do this with a single line of code
is what most of the patch is all about.
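For reference, the new helper's signature is:

  struct page *read_cache_page_gfp(struct address_space *mapping,
                                   pgoff_t index, gfp_t gfp);

Callers can then combine the mapping's gfp mask with flags such as
__GFP_NORETRY on a per-call basis.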
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f1053a7ca9 upstream.
Since commit 9d2e9d66a3
the mptsas driver fails to allocate memory for the MPT chain buffers
for second LSI adapter on PPC440SPe Katmai platform:
...
ioc1: LSISAS1068E B3: Capabilities={Initiator}
mptbase: ioc1: ERROR - Unable to allocate Reply, Request, Chain Buffers!
mptbase: ioc1: ERROR - didn't initialize properly! (-3)
mptsas: probe of 0002:31:00.0 failed with error -3
That commit increased the MPT_FC_CAN_QUEUE value, but initChainBuffers()
doesn't differentiate between SAS and FC, causing increased allocation
for the SAS case, too. Later, pci_alloc_consistent() fails to allocate
the increased chain buffer pool size for the SAS case.
Provide a fix by looking at the bus type and using appropriate
MPT_SAS_CAN_QUEUE value while calculation of the number of chain
buffers.
Signed-off-by: Anatolij Gustschin <agust@denx.de>
Acked-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 63c43b0ec1 upstream.
Because of the terrible structuring of scsi-bidi-commands,
some of the lifetime rules of a scsi-command are broken.
It is now not allowed to free up the block-request before
cleanup and partial deallocation of the scsi-command. (Which
is not so for non-bidi commands.)
The right fix to this problem would be to make bidi command
a first citizen by allocating a scsi_sdb pointer at scsi command
just like cmd->prot_sdb. The bidi sdb should be allocated/deallocated
as part of the get/put_command (Again like the prot_sdb) and the
current decoupling of scsi_cmnd and blk-request should be kept.
For now make sure scsi_release_buffers() is called before the
call to blk_end_request_all(), which might cause the suicide of
the block requests. At best we leak the bidi buffers, at worst we
crash, as there is a race between the existence of the bidi_request
and the free of the associated bidi_sdb.
The reason this was never hit before is because only OSD has the potential
of doing asynchronous bidi commands. (So does bsg but it is never used)
And OSD clients just happen to do all their bidi commands synchronously, up
until recently.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1152dcc28c upstream
Similar to the 6000 and 1000 series, RTS/CTS is the recommended
protection mechanism for the 5000 series in HT mode based on the HW
design.
Using RTS/CTS will better protect the inner exchange from interference,
especially in a highly congested environment; it also prevents the uCode
from encountering TX FIFO underruns and other HT mode related
performance issues.
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
upstream in 2.6.33-rc: 5d76b6f6c1
Refreshed here for 2.6.32.y, applies w/ offset back to 2.6.29.y.
Linux has always ignored an ACPI BIOS C2 with exit latency > 100 usec,
and the ACPI spec is clear that this is correct for FADT-supplied C2.
However, the ACPI spec explicitly states that _CST-supplied C-states
have no latency limits.
So move the 100usec C2 test out of the code shared
by FADT and _CST code-paths, and into the FADT-specific path.
This bug has not been visible until Nehalem, which advertises
a CPU-C2 worst case exit latency on servers of 205usec.
That (incorrect) figure is being used by BIOS writers
on mobile Nehalem systems for the AC configuration.
Thus, Linux ignores C2, leaving just C1, which
saves less power and also impacts performance
by preventing the use of turbo mode.
http://bugzilla.kernel.org/show_bug.cgi?id=15064
Tested-by: Alex Chiang <achiang@hp.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c56ccecf0 upstream.
Commit 83ce4009 did the following change:
If the TSC is constant and non-stop, also set it reliable.
But there seem to be a few systems that will end up with TSC warp across
sockets, depending on how the cpus come out of reset. Skipping the TSC
sync test on such systems may result in time inconsistency later.
So, reenable TSC sync test even on constant and non-stop TSC systems.
Set, sched_clock_stable to 1 by default and reset it in
mark_tsc_unstable, if TSC sync fails.
This change still gives perf benefit mentioned in 83ce4009 for systems
where TSC is reliable.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20091217202702.GA18015@linux-os.sc.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0cd4d0fd9b upstream.
IPoIB can miss a change in destination GID under some conditions. The
problem is caused when ipoib_neigh->dgid contains a stale address.
The fix is to set ipoib_neigh->dgid to zero in ipoib_neigh_alloc().
This can happen when a system using bonding on its IPoIB interfaces
has switched its active interface from interface A to B and back to A.
The system that fails over will not correctly process the 2nd
address change, as described below.
When an address has changed neighbor->ha is updated with the new
address. Each neighbor has an associated ipoib_neigh.
ipoib_neigh->dgid also holds a copy of the remote node's hardware
address. When an address changes neighbor->ha is updated by the
network layer (arp code) with the new address. IPoIB detects this
change in ipoib_start_xmit() by comparing neighbor->ha with
ipoib_neigh->dgid. The bug is that ipoib_neigh->dgid may already
contain the new address (A) thus the change from B to A is missed by
ipoib. Here is the sequence of events:
ipoib_neigh->dgid = A and neighbor->ha = A
The address is switched to B (the first switch)
neighbor->ha = B
The change is seen in ipoib_start_xmit() -- neighbor->ha !=
ipoib_neigh->dgid so ipoib_neigh is released, and a new one is
allocated.
The allocator may return the same chunk of memory that was just
released, therefore ipoib_neigh->dgid still contains A at this point.
ipoib_neigh->dgid should be updated in neigh_add_path(), but if the
following conditions are true dgid is not updated:
1) __path_find() returns a path
2) path->ah is NULL
The remote system now switches from address B to A, neighbor->ha is
updated to A.
Now we have again : ipoib_neigh->dgid = A and neighbor->ha = A
Since the addresses are the same ipoib won't process the change in
address. Fix this by zeroing out the dgid field when allocating a new
struct ipoib_neigh.
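The fix amounts to roughly this in ipoib_neigh_alloc() (sketch):

  /* never leave a stale GID from a previous user of this memory */
  memset(neigh->dgid.raw, 0, sizeof(neigh->dgid.raw));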
Signed-off-by: David Wilder <dwilder@us.ibm.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0c6ddcebd8 upstream.
Stanse found 2 lock imbalances in kvm_request_irq_source_id and
kvm_free_irq_source_id. They omit to unlock kvm->irq_lock on fail paths.
Fix that by adding unlock labels at the end of the functions and jump
there from the fail paths.
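That is, the error paths now follow the usual pattern (condition
illustrative):

  mutex_lock(&kvm->irq_lock);
  if (some_failure_condition)
          goto unlock;        /* previously: returned with the lock held */
  /* ... */
  unlock:
          mutex_unlock(&kvm->irq_lock);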
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 443c39bc9e upstream.
In kvm_arch_vcpu_init(), if the memory allocation for
vcpu->arch.mce_banks fails, the memory for the lapic data is not
freed. This patch fixes that.
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 36cb93fd6b upstream.
vcpu->arch.mce_banks is allocated in kvm_arch_vcpu_init() but never
freed anywhere, which may cause a memory leak. This patch frees it in
kvm_arch_vcpu_uninit().
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 82b7005f0e upstream.
When an error hva is found, we should not return PAGE_SIZE but the
level. Also clean up the coding style of the following loop.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a6085fbaf6 upstream.
Exit the guest pagetable walk loop if reading gpte failed. Otherwise its
possible to enter an endless loop processing the previous present pte.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a5d36f82c4 upstream.
When we queue an interrupt to the local apic, we set the IRR before the TMR.
The vcpu can pick up the IRR and inject the interrupt before setting the TMR,
and perhaps even EOI it, causing incorrect behaviour.
The race is really insignificant since it can only occur on the first
interrupt (usually following interrupts will not change TMR), but it's better
closed than open.
Fixed by reordering setting the TMR vs IRR.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f1d1c309f3 upstream.
Looks like repeatedly binding same fd to multiple gsi's with irqfd can
use up a ton of kernel memory for irqfd structures.
A simple fix is to allow each fd to only trigger one gsi: triggering a
storm of interrupts in guest is likely useless anyway, and we can do it
by binding a single gsi to many interrupts if we really want to.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 062d5e9b0d upstream.
kvm_handle_sie_intercept uses a jump table to get the intercept handler
for a SIE intercept. Static code analysis revealed a potential problem:
the intercept_funcs jump table was defined to contain (0x48 >> 2) entries,
but we only checked for code > 0x48 which would cause an off-by-one
array overflow if code == 0x48.
Use the compiler and ARRAY_SIZE to automatically set the limits.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
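A compact illustration of the bounds check (the handler type is made up; only the
ARRAY_SIZE idea mirrors the fix): with (0x48 >> 2) entries the valid indices are
0..0x11, so code == 0x48 must be rejected rather than looked up.

    #include <stddef.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    typedef int (*intercept_handler_t)(unsigned int code);

    static intercept_handler_t intercept_funcs[0x48 >> 2];

    /* Deriving the limit from the table itself keeps the check and the
     * array size from drifting apart again. */
    intercept_handler_t lookup_handler(unsigned int code)
    {
        unsigned int idx = code >> 2;

        if (idx >= ARRAY_SIZE(intercept_funcs))
            return NULL;
        return intercept_funcs[idx];
    }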
commit 5f6120335c upstream.
Patch fixes the bug at
http://bugzilla.intellinuxwireless.org/show_bug.cgi?id=2139
Currently we cannot set the channel using the wext extension
if we have already associated and then disconnected, because
cfg80211_mgd_wext_siwfreq will not switch the channel if an ssid is set.
This fixes it by clearing the ssid.
Following is the sequence which it tries to fix.
modprobe iwlagn
iwconfig wlan0 essid ""
ifconfig wlan0 down
iwconfig wlan0 chan X
wext is marked as deprecated. If we use nl80211 we can easily play with
setting the channel.
Signed-off-by: Abhijeet Kolekar <abhijeet.kolekar@intel.com>
Acked-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e5de30c9bf upstream.
ieee80211_set_power_mgmt is meant for STA interfaces only. Moreover,
since sdata->u.mgd.mtx is only initialized for STA interfaces, using
this code for any other type of interface (like creating a monitor
interface) will result in an oops.
Signed-off-by: Benoit Papillault <benoit.papillault@free.fr>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ff99879328 upstream.
The in-kernel copy of a volume's update marker is not initialised from the
volume table. This means that volumes where an update was unfinished will
not be treated as "forbidden to use". This basically means the update
functionality was broken.
Signed-off-by: Peter Horton <zero@colonel-panic.org>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ebddd63b74 upstream.
When truncating an UBI volume, UBI allocates a PEB-sized
buffer but does not release it, which leads to memory leaks.
This patch fixes the issue.
Reported-by: Marek Skuczynski <mareksk7@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Tested-by: Marek Skuczynski <mareksk7@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c453615f77 upstream.
When /dev/watchdog gets opened a second time we return -EBUSY, but
we already have got a kref then, so we end up leaking our data struct.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dc99be4766 upstream.
This patch fixes the auto-mute setup on the HP T5735 with the ALC262 codec.
Instead of the wrong amp, pin control toggling is now used for muting the speaker.
Tested-by: Lee Trager <lee.trager@hp.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7d6feeb287 upstream.
We have apparently had a memory leak since
7ca7e564e0 "ipc: store ipcs into IDRs" in
2007. The IDRs, of which three exist for each ipc namespace, are never freed.
This patch simply frees them when the ipcns is freed. I don't believe any
idr_remove() are done from rcu (and could therefore be delayed until after
this idr_destroy()), so the patch should be safe. Some quick testing
showed no harm, and the memory leak fixed.
Caught by kmemleak.
Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 48e4c385c5 upstream.
io_subchannel_probe() frees memory for sch->private which is later
freed again when io_subchannel_remove() is called. Fix this problem
by removing the cleanup in io_subchannel_probe().
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 385097e08b upstream.
The control blacklisting code erroneously used usb_match_id() by passing
a pointer to a usb_device_id structure instead of an array of such
structures.
Replace the usb_match_id() call by usb_match_id_one().
Thanks to Paulo Assis for diagnosing the bug and providing an initial
fix.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0f9552b5dc upstream.
The start_ro modules parameter can be used to force arrays to be
started in 'auto-readonly' in which they are read-only until the first
write. This ensures that no resync/recovery happens until something
else writes to the device. This is important for resume-from-disk
off an md array.
However if an array is started 'readonly' (by writing 'readonly' to
the 'array_state' sysfs attribute) we want it to be really 'readonly',
not 'auto-readonly'.
So strengthen the condition to only set auto-readonly if the
array is not already read-only.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6938594374 upstream.
Fix erroneous check for ap->udma_mask in do_pata_set_dmamode()
resulting in controller not being programmed properly for MWDMA.
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1b677afda4 upstream.
I observed there is a sata_async_notification() for every ahci
interrupt, but the async notification does nothing (this is a hard
disk drive and there is no pmp). This causes the cpu to waste some time
on sntf register access.
It appears ICH AHCI doesn't support the SNotification register, but the
controller reports that it does. After quirking it, the async notification
disappears.
PS: judging from the ICH manuals, it appears no ICH supports the
SNotification register; I don't know if we need to quirk all ICH variants.
I don't have machines with all kinds of ICH.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4946f8353d upstream.
add PCI ID for the Intel EP80579 (Tolapai) SoC
Signed-off-by: Imre Kaloz <kaloz@openwrt.org>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cb711a1931 upstream.
Cleanup the documentation about the supported chipsets.
[needed for further device ids to add to this driver - gkh]
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 23033b2bce upstream.
Realtek codecs may have "PCM" and "Line-Out" playback switches, and
they can be slaves for vmaster.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 739b47f1e5 upstream.
An earlier patch merely adds an id for 0x80862804.
It has 2/3 cvt/pin nodes and shall be tied to the IbexPeak handler.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a61cd03827 upstream.
Gigabyte netbook model M1022M requires i8042.noloop, otherwise the AUX port
will not be detected and the touchpad will not work. Unfortunately the chassis
type in DMI is set to "Other" and thus the generic laptop entry does not fire
on it.
Reported-by: Darryl Bond <dbond@nrggos.com.au>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f909b1df0a upstream.
The driver does not reference identification strings in DMI tables and
since these strings are no longer required by DMI core we can safely
remove them and save some memory.
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 75757507e0 upstream.
The purpose of dmi->ident is twofold - it may be used by DMI callback
functions when composing log messages; it is also used to determine
end of DMI table in dmi_check_system() and dmi_first_match(). However,
in cases where callbacks are not interested in using ident at all it just
wastes memory. Let's make entries with empty first match slot serve as
end-of-table markers instead.
[needed for DMI table changes that need to be done by later patches - gkh]
Acked-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
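A simplified sketch of the table-walk convention described above (the structures
are heavily reduced; the real dmi_system_id layout differs): the terminator test
keys on an empty first match slot instead of on ident, so ident-less entries no
longer end the walk early.

    #include <stddef.h>

    struct dmi_match_sketch {
        const char *ident;      /* may legitimately be NULL now */
        const char *match0;     /* first match slot */
    };

    /* An entry whose first match slot is empty marks the end of the
     * table; ident is purely informational. */
    int dmi_table_end(const struct dmi_match_sketch *e)
    {
        return e->match0 == NULL || e->match0[0] == '\0';
    }

    int dmi_walk_sketch(const struct dmi_match_sketch *table)
    {
        int n = 0;

        for (; !dmi_table_end(table); table++)
            n++;                /* the match callback would run here */
        return n;
    }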
commit 91ced682f9 upstream.
The driver has nothing to do, but this marker prevents the event from
showing up as 'not handled'.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Brandon Philips <bphilips@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 14caf44c69 upstream.
The cmd_per_lun value is used by scsi-ml as the fallback lowest
queue_depth value, but in the case of libfc cmd_per_lun is set to the
same value as the max queue_depth = 32.
So this patch reduces the cmd_per_lun value to 3 and configures
each lun with the default max queue_depth of 32 in fc_slave_alloc.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Acked-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5543c72e2b upstream.
We ran into a scenario where a remote port goes into RESTART state, but
never gets added to scsi transport. The running vmcore showed the following:
a) Port was in RESTART state
b) rdata->event was STOP
c) no work gets scheduled for the remote work to fc_rport_work
After this point, shut/no-shut of the remote port did not cause the port
to get re-discovered. The port would move between DELETE and RESTART states,
but the event would always be STOP, no work would get scheduled to
fc_rport_work and the port would not get added to scsi_transport.
The problem is that rdata->event is not set to NONE after a port is
restarted. After this point, no more work gets scheduled for the remote port
since new work is scheduled only if rdata->event is non-NONE. So, the event
and state keep changing, but fc_rport_work does not get scheduled to actually
handle the event.
Here's a transition of states that explains the above observation:
1) Port is first in READY state, event is NONE
2) RSCN on shut, port goes to DELETED, event is stop
3) Before fc_rport_work runs, RSCN on no-shut, port goes to RESTART, event is
still STOP
4) fc_rport_work gets scheduled, removes the port from transport, sees state
as RESTART, begins the PLOGI state machine, event remains as STOP (event NOT
changed to NONE, this is the bug)
5) Plogi state machine completes, port state goes to READY, event goes to
READY, but no work is scheduled since event was STOP (non-NONE) before.
Fc_rport_work is not scheduled, port remains in READY state, but is not added
to transport.
Things are broken at this point. Libfc rport is ready, but no transport rport
created.
6) now a shut causes port state to change to DELETE, event to change to STOP,
no work gets scheduled
7) no-shut causes port state to change to RESTART, event remains at STOP,
no work gets scheduled
(6) and (7) now get repeated every time we do shut/no-shut. There is no way
to get out of this state. Fcc reset does not help either.
The only way to get out is to load/unload the module.
Fix is to set rdata->event to NONE while processing the STOP/LOGO/FAILED
events, inside the discovery and rport locks.
Signed-off-by: Abhijeet Joglekar <abjoglek@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
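A reduced, lock-free sketch of the scheduling rule that breaks here (enum values
and fields are invented): work is only queued when the previous event was NONE,
so the handler must put the event back to NONE once it has consumed it, which is
exactly the step the fix adds.

    enum ev_sketch { EV_NONE, EV_READY, EV_STOP };

    struct rport_sketch {
        enum ev_sketch event;
        int work_queued;
    };

    void post_event(struct rport_sketch *r, enum ev_sketch ev)
    {
        if (r->event == EV_NONE)
            r->work_queued = 1;     /* schedule the work handler here */
        r->event = ev;              /* otherwise just overwrite it */
    }

    void work_handler(struct rport_sketch *r)
    {
        /* ... act on r->event ... */
        r->event = EV_NONE;         /* the missing reset the fix adds */
        r->work_queued = 0;
    }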
commit b4a9c7ede9 upstream.
Timer crashes were caused by freeing a struct fc_rport_priv
with a timer pending, causing the timer facility list to be
corrupted. This was during FC uplink flap tests with a lot
of targets.
After discovery, we were doing a PLOGI on an rdata that was
in DELETE state but not yet removed from the lookup list.
This moved the rdata from DELETE state to PLOGI state.
If the PLOGI exchange allocation failed and needed to be
retried, the timer scheduling could race with the free
being done by fc_rport_work().
When fc_rport_login() is called on a rport in DELETE state,
move it to a new state RESTART. In fc_rport_work, when
handling a LOGO, STOPPED or FAILED event, look for restart
state. In the RESTART case, don't take the rdata off the
list and after the transport remote port is deleted and
exchanges are reset, re-login to the remote port.
Note that the new RESTART state also corrects a problem we
had when re-discovering a port that had moved to DELETE state.
In that case, a new rdata was created, but the old rdata
would do an exchange manager reset affecting the FC_ID
for both the new rdata and old rdata. With the new state,
the new port isn't logged into until after any old exchanges
are reset.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8f550f937e upstream.
I was running into several different panics under stress, which I traced down
to a few different possible slab corruption issues in error handling paths.
I have not yet looked into why these exchange sends fail, but with these
fixes my test system is much more stable under stress than before.
fc_elsct_send() could fail and either leave the passed in frame intact
(failure in fc_ct/els_fill) or the frame could have been freed if the
failure was in fc_exch_seq_send(). The caller had no way of knowing, and
there was a potential double free in the error handling in fc_fcp_rec().
Make fc_elsct_send() always free the frame before returning, and remove the
fc_frame_free() call in fc_fcp_rec().
While fc_exch_seq_send() did always consume the frame, there were double free
bugs in the error handling of fc_fcp_cmd_send() and fc_fcp_srr() as well.
Numerous calls to error handling routines (fc_disc_error(),
fc_lport_error(), fc_rport_error_retry() ) were passing in a frame pointer that
had already been freed in the case of an error. I have changed the call
sites to pass in a NULL pointer, but there may be more appropriate error
codes to use.
Question: Why do these error routines take a frame pointer anyway? I
understand passing in a pointer encoded error to the response handlers, but
the error routines take no action on a valid pointer and should never be
called that way.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d37322a43e upstream.
In case of sequence offload, in fc_fcp_send_data(), the skb_fill_page_info()
called may end up adding more frags to the skb_shinfo(fp_skb(fp))->frags[],
exceeding SKB_MAX_FRAGS, which eventually corrupts the memory. I am adding the
FR_FRAME_SG_LEN back, but as SKB_MAX_FRAGS - 1, leaving 1 for our fcoe_eof_crc
page. And a send will be broken into multiple large sends if the frame already
contains more frags than the skb can handle.
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8eca355fa8 upstream.
When doing echo ethX > /sys..../destroy I am getting
errors when the tear down succeeds. It looks like the
reason for this is that the rc var is not getting set
when the destruction works. This just sets it to zero.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 22655ac222 upstream.
It's possible and harmless to get FLOGI timeouts
while in RESET state. Don't do a WARN_ON in that case.
Also, split out the other WARN_ONs in fc_lport_timeout, so
we can tell which one is hit by its line number.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1b69bc062c upstream.
Fix minor errors.
A debug message said an RLIR was received instead of ECHO.
"Expected" was misspelled in several places.
Fix a type cast from u32 to __be32.
Rob, some of these may have also been taken care of in your
other doc cleanup patch. Feel free to fold them in.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4347fa6687 upstream.
This bug is exposed when there is a link flap in the LLD. In particular, when it
happens right after a SCSI write command is sent out, no FCP_DATA is sent,
causing fsp->status_code to be set as FC_DATA_UNDRUN in fc_fcp_complete_locked
even though no SCSI status is received. Consequently, fc_io_compl treats this as DID_OK.
This results in SCSI returning success for the initial I/O request even though
no DATA was actually sent. In particular, if you run an I/O tool with data
verification on, the read back for verification will fail.
This is fixed here by checking, when FC_DATA_UNDRUN happens, that SCSI status was
received with FC_SRB_RCV_STATUS set in fsp->state.
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5e472d077f upstream.
xid 0 was used as an indication of an invalid xid before, but now xid 0
can be used as a valid exchange id. This patch fixes the ddp completion
in the fcp layer, i.e., in the fc_fcp.c:fc_fcp_ddp_done() function, to make sure it
does not use xid 0 as the indication of an invalid xid; instead, it now
uses FC_XID_UNKNOWN for such indication.
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
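The idea in miniature (the sentinel value and struct are illustrative): once 0
becomes a legal exchange id, "xid == 0" can no longer mean "no DDP context", so an
out-of-range sentinel has to carry that meaning.

    #include <stdint.h>

    #define XID_UNKNOWN 0xffffU     /* illustrative sentinel, not 0 */

    struct ddp_sketch {
        uint16_t xid;
    };

    void ddp_done_sketch(struct ddp_sketch *d)
    {
        if (d->xid == XID_UNKNOWN)
            return;                 /* nothing to tear down */
        /* ... release the DDP context for d->xid ... */
        d->xid = XID_UNKNOWN;
    }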
commit 85b5893ca9 upstream.
A received Fibre Channel ELS PRLI request contains a bit that
indicates whether the remote port supports certain retry processing
sequences. The test for this bit was somehow coded to use multiply
instead of AND!
This case would apply only for target mode operation, and it is
unlikely to be noticed as an initiator.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
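For clarity, the difference between the buggy and the fixed test (bit value
invented): a multiplication is non-zero whenever any bit is set, while an AND
tests only the intended bit.

    #define RETRY_SUPPORTED 0x04u

    int retry_supported_buggy(unsigned int service_params)
    {
        return service_params * RETRY_SUPPORTED;  /* true for ANY set bit */
    }

    int retry_supported_fixed(unsigned int service_params)
    {
        return service_params & RETRY_SUPPORTED;  /* tests only this bit */
    }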
commit 8e68597d08 upstream.
In testing 2.6.31 on one of our ia64 platforms I've encountered a hang
due to the driver using hardware ATEs which are a limited resource.
This is because the driver does not set the dma consistent mask to
64 bits.
Signed-off-by: Michael Reed <mdr@sgi.com>
Acked-by: James Smart <James.Smart@Emulex.Com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8798a694da upstream.
I was doing some large lun count testing with 2.6.31 and hit
a BUG_ON() in fc_timeout_deleted_rport(), and it seems like it
should have been just a matter of time before someone did.
It seems invalid to set port_state under lock, then expect it to
remain set after releasing the lock. Another thread called
fc_remote_port_add() when the lock was released, changing the
port_state.
This patch removes the BUG_ON and moves the test of the
port_state to inside the host_lock. It's been running for
several weeks now with no ill effect.
Signed-off-by: Michael Reed <mdr@sgi.com>
Acked-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5917290ce9 upstream.
Create the sysfs file, dh_state even if the new SCSI device is not
in any of the device handler's internal lists.
Signed-Off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Acked-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 627511e3e6 upstream.
Four models, OPEN-/DF400/DF500/DISK-SUBSYSTEM, can handle REPORT_LUN,
and the BLIST_REPORTLUN2 flag needs to be set. And DF600 doesn't require
any flags because it returns ANSI 03h (SPC).
Signed-off-by: Takahiro Yasui <tyasui@redhat.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Acked-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b915d9e6d upstream.
NCR devices are terminally broken by design -- they claim themselves to contain
proper input applications in their HID report descriptor, but behave very badly
if treated in standard way.
According to NCR developers, the devices get confused when queried for reports
in a standard way, rendering them unusable.
NCR is shipping application called "RPSL" that can be used to drive these
devices through hiddev, under the assumption that in-kernel driver doesn't
perform initial report query.
If it does, neither in-kernel nor hiddev-based driver can operate with these
devices any more.
Introduce a quirk that skips the report query for all NCR devices. The previous
NOGET quirk was wrong and had been introduced because I misunderstood the nature
of brokenness of these devices.
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dd47f96c07 upstream.
When the "rsize=" or "wsize=" mount options are not specified,
text-based mounts have slightly different behavior than legacy binary
mounts. Text-based mounts use the smaller of the server's maximum
and the client's maximum, but binary mounts use the smaller of the
server's _preferred_ size and the client's maximum.
This difference is actually pretty subtle. Most servers advertise
the same value as their maximum and their preferred transfer size, so
the end result is the same in most cases.
The reason for this difference is that for text-based mounts, if
r/wsize are not specified, they are set to the largest value supported
by the client. For legacy mounts, the values are set to zero if these
options are not specified.
nfs_server_set_fsinfo() can negotiate the transfer size defaults
correctly in any case. There's no need to specify any particular
value as default in the text-based option parsing logic.
Note that nfs4 doesn't use nfs_server_set_fsinfo(), but the mount.nfs4
command does set rsize and wsize to 0 if the user didn't specify these
options. So, make the same change for text-based NFSv4 mounts.
Thanks to James Pearson <james-p@moving-picture.com> for reporting and
diagnosing the problem.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fdd46dcbe4 upstream.
This patch modifies the replacement/recovery_timeout so it works
more like the fc fast io fail tmo.
If userspace tries to set the replacement/recovery_timeout to less than
zero, we will turn off the forced recovery cleanup.
If userspace sets the value to 0 then we will force the recovery
cleanup immediately.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 59353ea30e upstream.
Prior to 1f82de10 we always initialized the upper 32bits of the
prefetchable memory window, regardless of the address range used.
Now we only touch it for a >32bit address, which means the upper32
registers remain whatever the BIOS initialized them to.
It's valid for the BIOS to set the upper32 base/limit to
0xffffffff/0x00000000, which makes us program prefetchable ranges
like 0xffffffffabc00000 - 0x00000000abc00000
Revert the chunk of 1f82de10 that made this conditional so we always
write the upper32 registers and remove now unused pref_mem64 variable.
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Rafael J. Wysocki <rjw@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aba24d7158 upstream.
We have been doing some extensive testing of Linux support for ACLs on
NFS v4. We have noticed that the server rejects ACLs where the groups
are out of order, for example, the following ACL is rejected:
A::OWNER@:rwaxtTcCy
A::user101@domain:rwaxtcy
A::GROUP@:rwaxtcy
A:g:group102@domain:rwaxtcy
A:g:group101@domain:rwaxtcy
A::EVERYONE@:rwaxtcy
Examining the server code, I found that after converting an NFS v4 ACL
to POSIX, sort_pacl is called to sort the user ACEs and group ACEs.
Unfortunately, a minor bug causes the group sort to be skipped.
Signed-off-by: Frank Filz <ffilzlnx@us.ibm.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 98962465ed upstream.
The dynamic tick allows the kernel to sleep for periods longer than a
single tick, but it does not limit the sleep time currently. In the
worst case the kernel could sleep longer than the wrap around time of
the time keeping clock source which would result in losing track of
time.
Prevent this by limiting it to the safe maximum sleep time of the
current time keeping clock source. The value is calculated when the
clock source is registered.
[ tglx: simplified the code a bit and massaged the commit msg ]
Signed-off-by: Jon Hunter <jon-hunter@ti.com>
Cc: John Stultz <johnstul@us.ibm.com>
LKML-Reference: <1250617512-23567-2-git-send-email-jon-hunter@ti.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
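The arithmetic behind the limit, as a rough user-space sketch (the halving margin
and names are illustrative, not the kernel's exact formula): a counter of a given
width running at a given frequency wraps after (2^bits - 1) / freq seconds, so
idle sleeps must stay safely below that.

    #include <stdint.h>

    uint64_t max_idle_seconds(unsigned int counter_bits, uint64_t freq_hz)
    {
        uint64_t max_cycles = (counter_bits >= 64)
                ? UINT64_MAX
                : ((1ULL << counter_bits) - 1);

        if (freq_hz == 0)
            return 0;

        /* Halve the result as a safety margin so a slightly late wakeup
         * still cannot straddle a counter wrap. */
        return max_cycles / freq_hz / 2;
    }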
commit bdddd2963c upstream.
Anton Blanchard wrote:
> We allocate and zero cpu_isolated_map after the isolcpus
> __setup option has run. This means cpu_isolated_map always
> ends up empty and if CPUMASK_OFFSTACK is enabled we write to a
> cpumask that hasn't been allocated.
I introduced this regression in 49557e6203 (sched: Fix
boot crash by zalloc()ing most of the cpu masks).
Use the bootmem allocator if they set isolcpus=, otherwise
allocate and zero like normal.
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: peterz@infradead.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <stable@kernel.org>
LKML-Reference: <200912021409.17013.rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Tested-by: Anton Blanchard <anton@samba.org>
commit 87038c2d5b upstream.
The size of the EFI GPT header is not static, but a whole sector is
allocated for the header. The HeaderSize field must be greater
than 92 (= sizeof(struct gpt_header)) and must be less than or
equal to the logical block size.
It means we have to read the whole sector with the header, because the
header crc32 checksum is calculated according to HeaderSize.
For more details see UEFI standard (version 2.3, May 2009):
- 5.3.1 GUID Format overview, page 93
- Table 13. GUID Partition Table Header, page 96
Signed-off-by: Karel Zak <kzak@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
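A sketch of the sizing rule (only the header prefix is shown; treat the code as
illustrative rather than the actual partition parser): HeaderSize is validated
against the 92-byte defined header and the logical block size, and the CRC32 must
then cover exactly HeaderSize bytes of the sector that was read.

    #include <stdint.h>

    struct gpt_header_prefix {
        uint64_t signature;      /* "EFI PART" */
        uint32_t revision;
        uint32_t header_size;    /* HeaderSize */
        uint32_t header_crc32;   /* zeroed while computing the CRC */
    } __attribute__((packed));

    int gpt_header_size_valid(uint32_t header_size,
                              uint32_t logical_block_size)
    {
        return header_size >= 92 && header_size <= logical_block_size;
    }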
commit 3a0429292d upstream.
commit d6d3f08b0f
(netfilter: xtables: conntrack match revision 2) does break the
v1 conntrack match iptables-save output in a subtle way.
Problem is as follows:
up = kmalloc(sizeof(*up), GFP_KERNEL);
[..]
/*
* The strategy here is to minimize the overhead of v1 matching,
* by prebuilding a v2 struct and putting the pointer into the
* v1 dataspace.
*/
memcpy(up, info, offsetof(typeof(*info), state_mask));
[..]
*(void **)info = up;
As the v2 struct pointer is saved in the match data space,
it clobbers the first structure member (->origsrc_addr).
Because the _v1 match function grabs this pointer and does not actually
look at the v1 origsrc, run time functionality does not break.
But iptables -nvL (or iptables-save) cannot know that v1 origsrc_addr
has been overloaded in this way:
$ iptables -p tcp -A OUTPUT -m conntrack --ctorigsrc 10.0.0.1 -j ACCEPT
$ iptables-save
-A OUTPUT -p tcp -m conntrack --ctorigsrc 128.173.134.206 -j ACCEPT
(128.173... is the address of the v2 match structure).
To fix this, we take advantage of the fact that the v1 and v2 structures
are identical with exception of the last two structure members (u8 in v1,
u16 in v2).
We extract them as early as possible and prevent the v2 matching function
from looking at those two members directly.
Previously reported by Michel Messerschmidt via Ben Hutchings, also
see Debian Bug tracker #556587.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5bf5834738 upstream.
If docs are being built in a separate directory, xmlto and xsltproc
can't find included sources. Make links back to the source directory.
I would much prefer to have xmlto and xsltproc look in the source
directory for included entities but couldn't see how to do that. This
needs to be solved in some way for 2.6.32, even if this patch isn't the
right way to do it.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 49b14650ba upstream.
The rule for %.html removes the output directory, so there is no point
in copying images before building HTML.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cb19054697 upstream.
Use common_task instead of reset_task and link_chg_task, so it fixes "call
cancel_work_sync from the work itself".
Signed-off-by: Jie Yang <jie.yang@atheros.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e3c6e1aaa5 upstream.
Adds the device IDs and driver linking to allow the Asus Europa DVB-T
card to operate with these drivers.
The device has a SAA7134 chipset with a TD1316 Hybrid Tuner.
All inputs work on the card including switching between DVB-T and
Analogue TV; there is also no IR with this card.
[mchehab@redhat.com: CodingStyle fixes]
Signed-off-by: Danny Wood <danwood76@gmail.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 20d15a200d upstream.
Add support for five new Hauppauge Device USB IDs:
2040:b980
2040:b990
2040:c010
2040:c080
2040:c090
Signed-off-by: Michael Krufky <mkrufky@kernellabs.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9329d1beae upstream.
Filesystem code usually destroys the option buffer while
parsing it. This leads to errors when the same buffer is
passed twice. In case we fill a new superblock do not call
remount.
This is needed to quiet a warning that the debugfs code
causes every boot.
Cc: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f776c5ec46 upstream.
On Mon, Jan 18, 2010 at 05:26:20PM +0530, Sachin Sant wrote:
> Hello Heiko,
>
> Today while trying to boot next-20100118 i came across
> the following Oops :
>
> Brought up 4 CPUs
> Unable to handle kernel pointer dereference at virtual kernel address 0000000000
> 543000
> Oops: 0004 #1 SMP
> Modules linked in:
> CPU: 0 Not tainted 2.6.33-rc4-autotest-next-20100118-5-default #1
> Process swapper (pid: 1, task: 00000000fd792038, ksp: 00000000fd797a30)
> Krnl PSW : 0704200180000000 00000000001eb0b8 (shmem_parse_options+0xc0/0x328)
> R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:0 CC:2 PM:0 EA:3
> Krnl GPRS: 000000000054388a 000000000000003d 0000000000543836 000000000000003d
> 0000000000000000 0000000000483f28 0000000000536112 00000000fd797d00
> 00000000fd4ba100 0000000000000100 0000000000483978 0000000000543832
> 0000000000000000 0000000000465958 00000000001eb0b0 00000000fd797c58
> Krnl Code: 00000000001eb0aa: c0e5000994f1 brasl %r14,31da8c
> 00000000001eb0b0: b9020022 ltgr %r2,%r2
> 00000000001eb0b4: a784010b brc 8,1eb2ca
> >00000000001eb0b8: 92002000 mvi 0(%r2),0
> 00000000001eb0bc: a7080000 lhi %r0,0
> 00000000001eb0c0: 41902001 la %r9,1(%r2)
> 00000000001eb0c4: b9040016 lgr %r1,%r6
> 00000000001eb0c8: b904002b lgr %r2,%r11
> Call Trace:
> (<00000000fd797c50> 0xfd797c50)
> <00000000001eb5da> shmem_fill_super+0x13a/0x25c
> <0000000000228cfa> get_sb_single+0xbe/0xdc
> <000000000034ffc0> dev_get_sb+0x2c/0x38
> <000000000066c602> devtmpfs_init+0x46/0xc0
> <000000000066c53e> driver_init+0x22/0x60
> <000000000064d40a> kernel_init+0x24e/0x3d0
> <000000000010a7ea> kernel_thread_starter+0x6/0xc
> <000000000010a7e4> kernel_thread_starter+0x0/0xc
>
> I never tried to boot a kernel with DEVTMPFS enabled on a s390 box.
> So am wondering if this is supported or not ? If you think this
> is supported i will send a mail to community on this.
There is nothing arch specific to devtmpfs. This part crashes because the
kernel tries to modify the data read-only section which is write protected
on s390.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1d9f26262a upstream
Properly handle version of the protocol where standard PS/2 packets
from trackpoint are stuffed into middle (byte 3-6) of the standard
ALPS packets when both the touchpad and trackpoint are used together.
The patch is based on work done by Matthew Chapman and additional
research done by David Kubicek and Erik Osterholm:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/296610
Many thanks to David Kubicek for his efforts in researching fine points
of this new version of the protocol, especially interaction between pad
and stick in these models.
Cc: Andy Isaacson <adi@hexapodia.org>
Signed-off-by: Sebastian Kapfer <sebastian_kapfer@gmx.net>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f63dd12da2 upstream.
DM6467 silicon revisions 3.x have the variant field in the JTAGID register set to '1'.
This patch adds an entry for the same in dm646x_ids to be able to boot on boards
with 3.x revision chips.
It also modifies the name for 'variant=0' (revisions 1.0, 1.1).
Signed-off-by: Hemant Pedanekar <hemantp@ti.com>
Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 15295380f4 upstream.
At least two revisions of the D-Link DWA 160 exist, called A1 and A2. A1
(USB-ID 07d1:3c10) is already listed in usb.c as D-Link DWA 160A. A2
(USB-ID 07d1:3a09) works if added to ar9170_usb_ids. I didn't do much
testing until now, but I was able to connect to APs using WPA or WEP and
transmit data.
Summary:
* Add model revision number to the comment for D-Link DWA 160 A1 (07d1:3c10)
* Add support for D-Link DWA 160 A2 (07d1:3a09)
Signed-off-by: Thomas Klute <thomas2.klute@uni-dortmund.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a7ebd27a13 upstream.
We need buffer->len to remain valid to work out the correct address to
be unmapped. We therefore need to clear buffer->len after the unmap
operation.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ea9d8e3f45 upstream.
Marc reported that the BUG_ON in clockevents_notify() triggers on his
system. This happens because the kernel tries to remove an active
clock event device (used for broadcasting) from the device list.
The handling of devices which can be used as per cpu device and as a
global broadcast device is suboptimal.
The simplest solution for now (and for stable) is to check whether the
device is used as global broadcast device, but this needs to be
revisited.
[ tglx: restored the cpuweight check and massaged the changelog ]
Reported-by: Marc Dionne <marc.c.dionne@gmail.com>
Tested-by: Marc Dionne <marc.c.dionne@gmail.com>
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
LKML-Reference: <1262834564-13033-1-git-send-email-dfeng@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 22e190851f upstream.
Anton reported that perf record kept receiving events even after calling
ioctl(PERF_EVENT_IOC_DISABLE). It turns out that FORK,COMM and MMAP
events didn't respect the disabled state and kept flowing in.
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Anton Blanchard <anton@samba.org>
LKML-Reference: <1263459187.4244.265.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 88f5004430 upstream.
In free_unmap_area_noflush(), va->flags is marked as VM_LAZY_FREE first, and
then vmap_lazy_nr is increased atomically.
But, in __purge_vmap_area_lazy(), while traversing vmap_area_list, nr
is counted by checking whether VM_LAZY_FREE is set in va->flags. After counting
the variable nr, kernel reads vmap_lazy_nr atomically and checks a
BUG_ON condition whether nr is greater than vmap_lazy_nr to prevent
vmap_lazy_nr from being negative.
The problem is that, if interrupted right after marking VM_LAZY_FREE,
increment of vmap_lazy_nr can be delayed. Consequently, BUG_ON
condition can be met because nr is counted more than vmap_lazy_nr.
It is highly probable when vmalloc/vfree are called frequently. This
scenario has been verified by adding a delay between marking VM_LAZY_FREE
and increasing vmap_lazy_nr in free_unmap_area_noflush().
Even though vmap_lazy_nr is for checking a high watermark, it is never a
strict watermark. Although the BUG_ON condition is meant to prevent
vmap_lazy_nr from being negative, vmap_lazy_nr is a signed variable, so
it could go down to a negative value temporarily.
Consequently, removing the BUG_ON condition is proper.
A possible BUG_ON message is like the below.
kernel BUG at mm/vmalloc.c:517!
invalid opcode: 0000 [#1] SMP
EIP: 0060:[<c04824a4>] EFLAGS: 00010297 CPU: 3
EIP is at __purge_vmap_area_lazy+0x144/0x150
EAX: ee8a8818 EBX: c08e77d4 ECX: e7c7ae40 EDX: c08e77ec
ESI: 000081fe EDI: e7c7ae60 EBP: e7c7ae64 ESP: e7c7ae3c
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Call Trace:
[<c0482ad9>] free_unmap_vmap_area_noflush+0x69/0x70
[<c0482b02>] remove_vm_area+0x22/0x70
[<c0482c15>] __vunmap+0x45/0xe0
[<c04831ec>] vmalloc+0x2c/0x30
Code: 8d 59 e0 eb 04 66 90 89 cb 89 d0 e8 87 fe ff ff 8b 43 20 89 da 8d 48 e0 8d 43 20 3b 04 24 75 e7 fe 05 a8 a5 a3 c0 e9 78 ff ff ff <0f> 0b eb fe 90 8d b4 26 00 00 00 00 56 89 c6 b8 ac a5 a3 c0 31
EIP: [<c04824a4>] __purge_vmap_area_lazy+0x144/0x150 SS:ESP 0068:e7c7ae3c
[ See also http://marc.info/?l=linux-kernel&m=126335856228090&w=2 ]
Signed-off-by: Yongseok Koh <yongseok.koh@samsung.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 10d2cdb610 upstream.
Resolves kernel.org bug 14914.
Remove entry for 2770:915d (usb digital camera with mass storage
support) from unusual_devs.h. The fix triggered by the entry causes
the file system on the camera to be completely inaccessible (no
partition table, the device is not mountable).
The patch works, but let me clarify a few things about it. All the
patch does is remove the entry for this device from the
drivers/usb/storage/unusual_devs.h, which is supposed to help with a
problem with the device's reported size (I think). I'm pretty sure it
was originally added for a reason, so I'm not sure removing it won't
cause other problems to reappear. Also, I should note that this
unusual_devs.h entry was present (and activating workarounds) in
2.6.29, but in that version everything works fine. Starting with
2.6.30, things no longer work.
Signed-off-by: Ryan May <rmay31@gmail.com>
Cc: Rohan Hart <rohan.hart17@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2992e545ea upstream.
Thomas Schlichter reported:
> X.org uses libpciaccess which tries to mmap with write combining enabled via
> /sys/bus/pci/devices/*/resource0_wc. Currently, when PAT is not enabled, the
> kernel does fall back to uncached mmap. Then libpciaccess thinks it succeeded
> mapping with write combining enabled and does not set up suited MTRR entries.
> ;-(
Instead of silently mapping pci mmap region as UC minus in the case
of !pat_enabled and wc request, we can return error. Eric Anholt mentioned
that caller (like X) typically follows up with UC minus pci mmap request and
if there is a free mtrr slot, caller will manage adding WC mtrr.
Jesse Barnes says:
> Older versions of libpciaccess will behave better if we do it that way
> (iirc it only allocates an MTRR if the resource_wc file doesn't exist or
> fails to get mapped).
Reported-by: Thomas Schlichter <thomas.schlichter@web.de>
Signed-off-by: Thomas Schlichter <thomas.schlichter@web.de>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Eric Anholt <eric@anholt.net>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b27d7f16d3 upstream.
Make DM use bdev_stack_limits() function so that partition offsets get
taken into account when calculating alignment. Clarify stacking
warnings.
Also remove obsolete clearing of final alignment_offset and misalignment
flag.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: Alasdair G. Kergon <agk@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cc9b2e9f66 upstream.
Based on patch originally by Jeff Mahoney <jeffm@suse.com>
enclosure_status is expected to be a NULL terminated array of strings
but isn't actually NULL terminated. When writing an invalid value to
/sys/class/enclosure/.../.../status, it goes off the end of the array
and Oopses.
Fix by making the assumption true and adding NULL at the end.
Reported-by: Artur Wojcik <artur.wojcik@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
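The lookup pattern the fix makes safe, in a standalone sketch (the string values
are placeholders, not the real enclosure states): the loop assumes a NULL
sentinel, so the table must actually end with one.

    #include <string.h>

    static const char *status_strings[] = {
        "unsupported", "ok", "critical", "failed",
        NULL                            /* sentinel the fix adds */
    };

    int status_from_string(const char *buf)
    {
        int i;

        for (i = 0; status_strings[i]; i++)
            if (strcmp(buf, status_strings[i]) == 0)
                return i;
        return -1;      /* invalid input no longer walks off the array */
    }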
commit 49d0f078f4 upstream.
This patch (as1330) fixes a bug in khubd's handling of remote
wakeups. When a device sends a remote-wakeup request, the parent hub
(or the host controller driver, for directly attached devices) begins
the resume sequence and notifies khubd when the sequence finishes. At
this point the port's SUSPEND feature is automatically turned off.
However the device needs an additional 10-ms resume-recovery time
(TRSMRCY in the USB spec). Khubd does not wait for this delay if the
SUSPEND feature is off, and as a result some devices fail to behave
properly following a remote wakeup. This patch adds the missing
delay to the remote-wakeup path.
It also extends the resume-signalling delay used by ehci-hcd and
uhci-hcd from 20 ms (the value in the spec) to 25 ms (the value we use
for non-remote-wakeup resumes). The extra time appears to help some
devices.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Rickard Bellini <rickard.bellini@ericsson.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cec3a53c7f upstream.
This patch (as1321) fixes a problem with EHCI and UHCI root-hub
suspends: If the suspend occurs while a port is trying to resume, the
resume doesn't finish and simply gets lost. When remote wakeup is
enabled, this is undesirable behavior.
The patch checks first to see if any port resumes are in progress, and
if they are then it fails the root-hub suspend with -EBUSY.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1b9a38bfa6 upstream.
This patch (as1320) fixes two problems related to interrupt-URB
scheduling in ehci-hcd.
URBs with an interval of 2 or 4 microframes aren't handled.
For the time being, the patch reduces the interval to 1 uframe.
URBs are constrained to have an interval no larger than 1024
frames by usb_submit_urb(). But some EHCI controllers allow
use of a schedule as short as 256 frames; for these
controllers we may have to decrease the interval to the
actual schedule length.
The second problem isn't very significant since few devices expose
interrupt endpoints with an interval larger than 256 frames. But the
first problem is critical; it will prevent the kernel from working
with devices having interrupt intervals of 2 or 4 uframes.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Glynn Farrow <farrowg@sg.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit acbe2febe7 upstream.
Memory allocations with GFP_KERNEL can cause IO to a storage
device which can fail resulting in a need to reset the device.
Therefore GFP_KERNEL cannot be safely used between usb_lock_device()
and usb_unlock_device(). Replace by GFP_NOIO.
Signed-off-by: Oliver Neukum <oliver@neukum.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
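A kernel-style fragment illustrating the rule (the function and buffer size are
invented; only the kmalloc/GFP_NOIO call and the usb_lock_device pairing are real
interfaces): while the device lock is held, allocations must not be able to
recurse into I/O.

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/usb.h>

    int alloc_while_device_locked(struct usb_device *udev, void **buf)
    {
        int ret = 0;

        usb_lock_device(udev);
        /* GFP_KERNEL could trigger reclaim I/O that ends up needing
         * this very device; GFP_NOIO cannot. */
        *buf = kmalloc(256, GFP_NOIO);
        if (!*buf)
            ret = -ENOMEM;
        usb_unlock_device(udev);
        return ret;
    }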
commit a91b593edd upstream.
This patch adds a mask bit which was mistakenly omitted from the
as1311 patch (usb-storage: add BAD_SENSE flag).
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2591530204 upstream.
Fix a regression introduced by commit
715b1dc01f ("USB: usb_debug,
usb_generic_serial: implement multi urb write").
URB transfer buffer was never freed when using multi-urb writes.
Currently the only driver enabling multi-urb writes is usb_debug.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Cc: Greg KH <greg@kroah.com>
Acked-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6d34855d9a upstream.
Wacom claims that the WACF namespace will always be devoted to serial
Wacom tablets. Remove the existing entries and add a wildcard to avoid
having to update the kernel every time they add a new device.
Signed-off-by: Ping Cheng <pingc@wacom.com>
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Tested-by: Ping Cheng <pingc@wacom.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eeec32a731 upstream.
Nozomi goes wrong if you get the sequence
open
open
close
[stuff]
close
which turns out to occur on some ppp type setups.
This is a quick patch up for the problem. It's not really fixing Nozomi
which completely fails to implement tty open/close semantics and all the
other needed stuff. Doing it right is a rather more invasive patch set and
not one that will backport.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e27759d7a3 upstream.
Ecryptfs_open dereferences a pointer to the private lower file (the one
stored in the ecryptfs inode), without checking if the pointer is NULL.
Right afterward, it initializes that pointer if it is NULL. Swap order of
statements to first initialize. Bug discovered by Duckjin Kang.
Signed-off-by: Duckjin Kang <fromdj2k@gmail.com>
Signed-off-by: Erez Zadok <ezk@cs.sunysb.edu>
Cc: Dustin Kirkland <kirkland@canonical.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ece550f51b upstream.
The "full_alg_name" variable is used on a couple error paths, so we
shouldn't free it until the end.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7692fd4d44 upstream.
This fixes a number of SMP problems that were in the hyperv core code.
Patch originally written by K. Y. Srinivasan <ksrinivasan@novell.com>
but forward ported to the latest in-kernel code and tweaked slightly by
me.
Novell, Inc. hereby disclaims all copyright in any derivative work
copyright associated with this patch.
Signed-off-by: K. Y. Srinivasan <ksrinivasan@novell.com>
Cc: Hank Janssen <hjanssen@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 20633bf014 upstream.
After updating to 2.6.32 kernel, I started experiencing Oopses caused by
the asus_oled module. After quick investigation, I wrapped this simple
patch which fixes an Oops in by asus_oled module on 2.6.32.2 kernel,
caused by incorrect usage of strict_strtoul function call within
set_enabled and set_disabled functions. This can be triggered by simply
running the userspace client for asus_oled (e.g., 'asusoled -e' or
'asusoled -d').
Signed-off-by: Eugeni Dodonov <eugeni@mandriva.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0b962d473a upstream.
register_chrdev() hardcodes registering 256 minors, presumably to
avoid breaking old drivers. However, we need to register enough
minors so that we have all possible CPUs.
checkpatch warns on this patch, but the patch is correct: NR_CPUS here
is a static *upper bound* on the *maximum CPU index* (not *number of
CPUs!*) and that is what we want.
Reported-and-tested-by: Russ Anderson <rja@sgi.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <tip-*@git.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cedabed49b upstream.
If __block_prepare_write() failed in block_write_begin(), the
allocated blocks can be outside of ->i_size.
But the new truncate_pagecache() in vmtruncate() does nothing if new < old.
It means the above usage is not working anymore.
So, this patch fixes it by removing the "new < old" check. It would need
more cleanup/change, but since the -rc and truncate work is in progress,
this tries to fix it with a minimal change.
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 57785df5ac upstream.
83f9ac removed a call to effective_prio() in wake_up_new_task(), which
leads to tasks running at MAX_PRIO.
This is caused by the idle thread being set to MAX_PRIO before forking
off init. O(1) used that to make sure idle was always preempted, CFS
uses check_preempt_curr_idle() for that so we can safely remove this bit
of legacy code.
Reported-by: Mike Galbraith <efault@gmx.de>
Tested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1259754383.4003.610.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 22f8b2695e upstream.
Unexpected signals can disturb the bus-handling and lock it up. Don't use
interruptible in 'wait_event_*' and 'wake_*' as in commits
dc1972d027 (for cpm),
1ab082d7cb (for mpc),
b7af349b17 (for omap).
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c556752109 upstream.
dev_dbg outputs dev_name, which is released with device_unregister. This bug
resulted in output like this:
i2c Xy2�0: adapter [SMBus I801 adapter at 1880] unregistered
The right output would be:
i2c i2c-0: adapter [SMBus I801 adapter at 1880] unregistered
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit e04ed38d4e ]
For chips like Niagara2 that have true overflow indications
in the %pcr (which we don't actually need and don't use)
the interrupt signal persists until the overflow bits are
cleared by an explicit %pcr write.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 8183e2b384 ]
If perf events are active, we should not reset the %pcr to
PCR_PIC_PRIV. That perf events code does the management.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b9f8fcd55b upstream.
Relax stable-sched-clock architectures to not save/disable/restore
hardirqs in cpu_clock().
The background is that I was trying to resolve a sparc64 perf
issue when I discovered this problem.
On sparc64 I implement pseudo NMIs by simply running the kernel
at IRQ level 14 when local_irq_disable() is called, this allows
performance counter events to still come in at IRQ level 15.
This doesn't work if any code in an NMI handler does
local_irq_save() or local_irq_disable() since the "disable" will
kick us back to cpu IRQ level 14 thus letting NMIs back in and
we recurse.
The only path which that does that in the perf event IRQ
handling path is the code supporting frequency based events. It
uses cpu_clock().
cpu_clock() simply invokes sched_clock() with IRQs disabled.
And that's a fundamental bug all on its own, particularly for
the HAVE_UNSTABLE_SCHED_CLOCK case. NMIs can thus get into the
sched_clock() code interrupting the local IRQ disable code
sections of it.
Furthermore, for the not-HAVE_UNSTABLE_SCHED_CLOCK case, the IRQ
disabling done by cpu_clock() is just pure overhead and
completely unnecessary.
So the core problem is that sched_clock() is not NMI safe, but
we are invoking it from NMI contexts in the perf events code
(via cpu_clock()).
A less important issue is the overhead of IRQ disabling when it
isn't necessary in cpu_clock().
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK architectures are not
affected by this patch.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091213.182502.215092085.davem@davemloft.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 14f8af311e upstream.
Lenovo SL series laptops have a very similar DSDT to Asus laptops. We can
easily have the extra ACPI function support with little modification in
asus-laptop.c
Here is the hotkey enablement for Lenovo SL series laptop.
This patch will enable the following hotkey:
- Volume Up
- Volume Down
- Mute
- Screen Lock (Fn+F2)
- Battery Status (Fn+F3)
- WLAN switch (Fn+F5)
- Video output switch (Fn+F7)
- Touchpad switch (Fn+F8)
- Screen Magnifier (Fn+Space)
The following functions of the Lenovo SL laptops still need to be enabled:
- Hotkey: KEY_SUSPEND (Fn+F4), KEY_SLEEP (Fn+F12), Dock Eject (Fn+F9)
- Rfkill for bluetooth and wlan
- LenovoCare LED
- Hwmon for fan speed
- Fingerprint scanner
- Active Protection System
Signed-off-by: Ike Panhc <ike.pan@canonical.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4a18b3ab6e upstream.
Sentelic probes confuse IBM trackpoints so they stop responding to
TP_READ_ID command. See:
http://bugzilla.kernel.org/show_bug.cgi?id=14970
Let's move FSP detection lower so it is probed after trackpoint and
others, just before we start probing for Intellimouse Explorer.
Signed-off-by: Tai-hwa Liang <avatar@sentelic.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb7d3f24c7 upstream.
/sys/bus/pci/drivers/megaraid_sas/poll_mode_io defaults to being
world-writable, which seems bad (letting any user affect kernel driver
behavior).
This turns off group and user write permissions, so that on typical
production systems only root can write to it.
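A hedged sketch of the attribute definition after the change (the show/store
handler names here are illustrative, not checked against the driver): S_IWUGO
is replaced with S_IWUSR so only root keeps write access.

    /* 0644 instead of 0666: group and other lose write permission */
    static DRIVER_ATTR(poll_mode_io, S_IRUGO | S_IWUSR,
                    megasas_sysfs_show_poll_mode_io,
                    megasas_sysfs_set_poll_mode_io);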
Signed-off-by: Bryn M. Reeves <bmr@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2d1c861871 upstream
The cardbus code creates PCI devices without ever going through the
necessary fixup bits and pieces that normal PCI devices go through.
There's in fact a commented out call to pcibios_fixup_bus() in there,
it's commented because ... it doesn't work.
I could make pcibios_fixup_bus() do the right thing on powerpc easily
but I felt it cleaner instead to provide a specific hook pci_fixup_cardbus
for which a weak empty implementation is provided by the PCI core.
This fixes cardbus on powerbooks and probably all other PowerPC
platforms which was broken completely for ever on some platforms and
since 2.6.31 on others such as PowerBooks when we made the DMA ops
mandatory (since those are setup by the fixups).
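The hook boils down to a weak no-op default in the PCI core that an
architecture such as powerpc can override; roughly (a sketch, not the verified
upstream diff):

    /* PCI core: default fixup does nothing */
    void __weak pci_fixup_cardbus(struct pci_bus *bus)
    {
    }
    EXPORT_SYMBOL(pci_fixup_cardbus);

The cardbus code then calls pci_fixup_cardbus() on the newly created bus, and
powerpc supplies a real implementation that runs its usual device fixups
(including DMA ops setup).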
Acked-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 23aeb61e7e upstream.
Added device IDs for the new model of the Apple Wireless Keyboard
(November 2009).
Signed-off-by: Christian Schuerer-Waldheim <csw@xray.at>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ec8e2f7466 upstream.
It can happen that write does not use all the blocks allocated in
write_begin either because of some filesystem error (like ENOSPC) or
because page with data to write has been removed from memory. We truncate
these blocks so that we don't have dangling blocks beyond i_size.
Cc: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9dffe2a32b upstream.
The constants used to specify ISINK ramp times for WM835x had the
wrong shifts so that the on times applied to the off ramp and vice
versa. The masks for the bitfields are correct.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fcfbb2b5fa upstream.
This fixes the problem of the initialization code not correctly
mapping the entire MMIO space on a UV system. A side effect is
the map_high() interface needed to be changed to accommodate
different address and size shifts.
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Mike Habeck <habeck@sgi.com>
Cc: Jack Steiner <steiner@sgi.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <4B479202.7080705@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 118f3e1afd upstream.
EDAC MC0: INTERNAL ERROR: channel-b out of range (4 >= 4)
Kernel panic - not syncing: EDAC MC0: Uncorrected Error (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
This happens because FERR_NF_FBD bit 28 is not updated on i5000. Due to
that, both bits 28 and 29 may be equal to one, returning channel = 3. As
this value is invalid, EDAC core generates the panic.
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=14568
Signed-off-by: Tamas Vincze <tom@vincze.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Doug Thompson <dougthompson@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dfea91d5a7 upstream.
Chris McDermott from IBM confirmed that hurricane chipset in IBM summit
platforms doesn't support logical flat mode. Irrespective of other
things like APIC IDs and the total number of logical CPUs, the Linux
kernel should default to physical mode for this system.
The 32-bit kernel does so using the OEM checks for the IBM Summit
platform. Add a similar OEM platform check for the 64-bit kernel too.
Otherwise the Linux kernel boot can hang on this platform under certain
BIOS/platform settings.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Tested-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Chris McDermott <lcm@linux.vnet.ibm.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7485d0d375 upstream.
Currently, futexes have two problems:
A) The current futex code doesn't handle private file mappings properly.
get_futex_key() uses PageAnon() to distinguish file and
anon, which can cause the following bad scenario:
1) thread-A calls futex(private-mapping, FUTEX_WAIT) and
sleeps on the file mapping object.
2) thread-B writes to the variable, which triggers copy-on-write.
3) thread-B calls futex(private-mapping, FUTEX_WAKE); the wakeup
targets the anonymous page, so the blocked thread is never woken.
B) The current futex code doesn't handle the zero page properly.
Read-mode get_user_pages() can return the zero page, but the current
futex code doesn't handle it at all, so the zero page leads to an
infinite loop internally.
The solution is to always use write-mode get_user_pages() for the
page lookup. This prevents looking up both the file page of a private
mapping and the zero page.
Performance concerns:
Probably very little, because glibc always initializes futex variables
before calling futex(). That means glibc users never see
the overhead of this patch.
Compatibility concerns:
This patch has one minor compatibility issue: after it,
FUTEX_WAIT requires writable access to futex variables (read-only
mappings cause EFAULT). In practice this is not a problem, because
glibc always initializes futex variables explicitly - nobody
uses read-only mappings.
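A minimal sketch of the lookup change in get_futex_key(), assuming the 2.6.32
get_user_pages_fast() signature (start, nr_pages, write, pages); retry and
error handling are trimmed.

    again:
            /* write = 1: force COW resolution for private mappings and avoid
             * the zero page, so waiter and waker agree on the backing page */
            err = get_user_pages_fast(address, 1, 1, &page);
            if (err < 0)
                    return err;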
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Darren Hart <dvhltc@us.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Ulrich Drepper <drepper@gmail.com>
LKML-Reference: <20100105162633.45A2.A69D9226@jp.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 485a2e1973 upstream.
Add a check that the APIC is not disabled, since thermal
monitoring depends on it. If the APIC is disabled
we should not try to install the "thermal monitor" vector,
print out that thermal monitoring is enabled, and so on.
Note that "Intel Corrected Machine Check Interrupts" already
has such a check.
Also, I decided not to add a cpu_has_apic check to
mcheck_intel_therm_init: even if it calls apic_read on a
disabled APIC it is safe here, and omitting the check saves
a few code bytes.
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
LKML-Reference: <4B25FDC2.3020401@jp.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 81744ee44a upstream
queue_sector_alignment_offset returned the wrong value, which caused
partitions to report an incorrect alignment_offset. Since the offset
calculation is needed in several places, it has been split into a separate
helper function.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Tested-by: Mike Snitzer <snitzer@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c7c85101af upstream.
On Ironlake, there is an interrupt master control bit. With the bit
disabled before clearing IIR, we do not need to handle extra interrupts
in a loop. This patch removes the loop in the Ironlake interrupt handler.
It fixes a lost-IRQ issue on some Ironlake platforms.
Signed-off-by: Zou Nan hai <Nanhai.zou@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fce6647757 upstream.
Current mem_cgroup_force_empty() only ensures mem->res.usage == 0 on
success. But this doesn't guarantee the memcg's LRU is really empty, because
there are some cases in which !PageCgroupUsed pages exist on the memcg's LRU.
For example:
- Pages can be uncharged by its owner process while they are on LRU.
- race between mem_cgroup_add_lru_list() and __mem_cgroup_uncharge_common().
So there can be a case in which the usage is zero but some of the LRUs are not empty.
OTOH, mem_cgroup_del_lru_list(), which can be called asynchronously with
rmdir, accesses the mem_cgroup, so this access can cause a problem if it
races with rmdir because the mem_cgroup might have been freed by rmdir.
Actually, I saw a bug which seems to be caused by this race.
[1530745.949906] BUG: unable to handle kernel NULL pointer dereference at 0000000000000230
[1530745.950651] IP: [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
[1530745.950651] PGD 3863de067 PUD 3862c7067 PMD 0
[1530745.950651] Oops: 0002 [#1] SMP
[1530745.950651] last sysfs file: /sys/devices/system/cpu/cpu7/cache/index1/shared_cpu_map
[1530745.950651] CPU 3
[1530745.950651] Modules linked in: configs ipt_REJECT xt_tcpudp iptable_filter ip_tables x_tables bridge stp nfsd nfs_acl auth_rpcgss exportfs autofs4 hidp rfcomm l2cap crc16 bluetooth lockd sunrpc ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp bnx2i cnic uio ipv6 cxgb3i cxgb3 mdio libiscsi_tcp libiscsi scsi_transport_iscsi dm_mirror dm_multipath scsi_dh video output sbs sbshc battery ac lp kvm_intel kvm sg ide_cd_mod cdrom serio_raw tpm_tis tpm tpm_bios acpi_memhotplug button parport_pc parport rtc_cmos rtc_core rtc_lib e1000 i2c_i801 i2c_core pcspkr dm_region_hash dm_log dm_mod ata_piix libata shpchp megaraid_mbox sd_mod scsi_mod megaraid_mm ext3 jbd uhci_hcd ohci_hcd ehci_hcd [last unloaded: freq_table]
[1530745.950651] Pid: 19653, comm: shmem_test_02 Tainted: G M 2.6.32-mm1-00701-g2b04386 #3 Express5800/140Rd-4 [N8100-1065]
[1530745.950651] RIP: 0010:[<ffffffff810fbc11>] [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
[1530745.950651] RSP: 0018:ffff8803863ddcb8 EFLAGS: 00010002
[1530745.950651] RAX: 00000000000001e0 RBX: ffff8803abc02238 RCX: 00000000000001e0
[1530745.950651] RDX: 0000000000000000 RSI: ffff88038611a000 RDI: ffff8803abc02238
[1530745.950651] RBP: ffff8803863ddcc8 R08: 0000000000000002 R09: ffff8803a04c8643
[1530745.950651] R10: 0000000000000000 R11: ffffffff810c7333 R12: 0000000000000000
[1530745.950651] R13: ffff880000017f00 R14: 0000000000000092 R15: ffff8800179d0310
[1530745.950651] FS: 0000000000000000(0000) GS:ffff880017800000(0000) knlGS:0000000000000000
[1530745.950651] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[1530745.950651] CR2: 0000000000000230 CR3: 0000000379d87000 CR4: 00000000000006e0
[1530745.950651] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[1530745.950651] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[1530745.950651] Process shmem_test_02 (pid: 19653, threadinfo ffff8803863dc000, task ffff88038612a8a0)
[1530745.950651] Stack:
[1530745.950651] ffffea00040c2fe8 0000000000000000 ffff8803863ddd98 ffffffff810c739a
[1530745.950651] <0> 00000000863ddd18 000000000000000c 0000000000000000 0000000000000000
[1530745.950651] <0> 0000000000000002 0000000000000000 ffff8803863ddd68 0000000000000046
[1530745.950651] Call Trace:
[1530745.950651] [<ffffffff810c739a>] release_pages+0x142/0x1e7
[1530745.950651] [<ffffffff810c778f>] ? pagevec_move_tail+0x6e/0x112
[1530745.950651] [<ffffffff810c781e>] pagevec_move_tail+0xfd/0x112
[1530745.950651] [<ffffffff810c78a9>] lru_add_drain+0x76/0x94
[1530745.950651] [<ffffffff810dba0c>] exit_mmap+0x6e/0x145
[1530745.950651] [<ffffffff8103f52d>] mmput+0x5e/0xcf
[1530745.950651] [<ffffffff81043ea8>] exit_mm+0x11c/0x129
[1530745.950651] [<ffffffff8108fb29>] ? audit_free+0x196/0x1c9
[1530745.950651] [<ffffffff81045353>] do_exit+0x1f5/0x6b7
[1530745.950651] [<ffffffff8106133f>] ? up_read+0x2b/0x2f
[1530745.950651] [<ffffffff8137d187>] ? lockdep_sys_exit_thunk+0x35/0x67
[1530745.950651] [<ffffffff81045898>] do_group_exit+0x83/0xb0
[1530745.950651] [<ffffffff810458dc>] sys_exit_group+0x17/0x1b
[1530745.950651] [<ffffffff81002c1b>] system_call_fastpath+0x16/0x1b
[1530745.950651] Code: 54 53 0f 1f 44 00 00 83 3d cc 29 7c 00 00 41 89 f4 75 63 eb 4e 48 83 7b 08 00 75 04 0f 0b eb fe 48 89 df e8 18 f3 ff ff 44 89 e2 <48> ff 4c d0 50 48 8b 05 2b 2d 7c 00 48 39 43 08 74 39 48 8b 4b
[1530745.950651] RIP [<ffffffff810fbc11>] mem_cgroup_del_lru_list+0x30/0x80
[1530745.950651] RSP <ffff8803863ddcb8>
[1530745.950651] CR2: 0000000000000230
[1530745.950651] ---[ end trace c3419c1bb8acc34f ]---
[1530745.950651] Fixing recursive fault but reboot is needed!
The problem here is that pages on the LRU may contain a pointer to a stale
memcg. To make res->usage become 0, all pages on the memcg must be uncharged
or moved to another (parent) memcg. A moved page_cgroup has already been
removed from the original LRU, but an uncharged page_cgroup still contains a
pointer to the memcg without the PCG_USED bit. (This asynchronous LRU work is
for improving performance.)
If the PCG_USED bit is not set, a page_cgroup will never be added to the
memcg's LRU, so pages not on the LRU never access the stale pointer. What we
have to take care of, then, are page_cgroups _on_ the LRU list. This patch
fixes the problem by making mem_cgroup_force_empty() visit all LRUs before
exiting its loop, guaranteeing there are no pages left on its LRU.
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eb29a5cc0b upstream.
Fix divide by zero and broken output. Commit 600ce1a0fa ("fix clock
setting for Samsung SoC Framebuffer") introduced a mandatory refresh
parameter to the platform data for the S3C framebuffer but did not
introduce any validation code, causing existing platforms (none of which
have refresh set) to divide by zero whenever the framebuffer is
configured, generating warnings and unusable output.
Ben Dooks noted several problems with the patch:
- The platform data supplies the pixclk directly and should already
have taken care of the refresh rate.
- The addition of a window ID parameter doesn't help since only the
root framebuffer can control the pixclk.
- pixclk is specified in picoseconds (rather than Hz) as the patch
assumed.
and suggests reverting the commit so do that. Without fixing this no
mainline user of the driver will produce output.
[akpm@linux-foundation.org: don't revert the correct bit]
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: InKi Dae <inki.dae@samsung.com>
Cc: Kyungmin Park <kmpark@infradead.org>
Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 976ae32be4 upstream.
inotify will WARN() if it finds that the idr and the fsnotify internals
somehow got out of sync. It was only supposed to do this once but due
to this stupid bug it would warn every single time a problem was
detected.
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9e572cc987 upstream.
Since commit 7e790dd5fc ("inotify: fix
error paths in inotify_update_watch") inotify changed the manner in which
it gave watch descriptors back to userspace. Prior to this commit
inotify acted like the following:
inotify_add_watch(X, Y, Z) = 1
inotify_rm_watch(X, 1);
inotify_add_watch(X, Y, Z) = 2
but after this patch inotify would return watch descriptors like so:
inotify_add_watch(X, Y, Z) = 1
inotify_rm_watch(X, 1);
inotify_add_watch(X, Y, Z) = 1
which I saw as equivalent to opening an fd where
open(file) = 1;
close(1);
open(file) = 1;
seemed perfectly reasonable. The issue is that quite a bit of userspace
apparently relies on the behavior in which watch descriptors will not be
quickly reused. KDE relies on it, I know some selinux packages rely on
it, and I have heard complaints from other random sources such as debian
bug 558981.
Although the man page implies what we do is ok, we broke userspace so
this patch almost reverts us to the old behavior. It is still slightly
racy and I have patches that would fix that, but they are rather large
and this will fix it for all real-world cases. The race is as follows:
- task1 creates a watch and blocks in idr_new_watch() before it updates
the hint.
- task2 creates a watch and updates the hint.
- task1 updates the hint with its older wd
- a task removes the watch created by task2
- a task adds a new watch and will reuse the wd originally given to task2
It requires moving some locking around the hint (last_wd), but this should
solve it for the real world and be -stable safe.
As a side effect this patch papers over a bug in the lib/idr code which
is causing a large number of WARNs to pop up on people's systems and many
reports on kerneloops.org. I'm working on the root cause of that idr
bug separately, but this should make inotify immune to that issue.
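A hedged sketch of the wd allocation with a moving hint (struct field and lock
names are approximate, error paths trimmed); the point is that new watches are
allocated above last_wd, so descriptors are not immediately reused.

    if (unlikely(!idr_pre_get(idr, GFP_KERNEL)))
            return -ENOMEM;

    spin_lock(idr_lock);
    ret = idr_get_new_above(idr, entry, group->inotify_data.last_wd + 1,
                            &entry->wd);
    if (!ret)
            group->inotify_data.last_wd = entry->wd;    /* advance the hint */
    spin_unlock(idr_lock);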
Signed-off-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fc61901373 upstream.
Some BIOSes fail to initialise the GTT, which will cause DMA faults when
the IOMMU is enabled. We need to clear the whole thing to point at the
scratch page, not just the part that Linux is going to use.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
[anholt: Note that this may also help with stability in the presence of
driver bugs, by not drawing to memory we don't own]
Signed-off-by: Eric Anholt <eric@anholt.net>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2570a4f542 upstream.
This fixes CERT-FI FICORA #341748
Discovered by Olli Jarva and Tuomo Untinen from the CROSS
project at Codenomicon Ltd.
Just like in CVE-2007-4567, we can't rely upon skb_dst() being
non-NULL at this point. We fixed that in commit
e76b2b2567 ("[IPV6]: Do no rely on
skb->dst before it is assigned.")
However commit 483a47d2fe ("ipv6: added
net argument to IP6_INC_STATS_BH") put a new version of the same bug
into this function.
Complicating analysis further, this bug can only trigger when network
namespaces are enabled in the build. When namespaces are turned off,
dev_net() does not evaluate its argument, so the dereference
would not occur.
So, for a long time, namespaces couldn't be turned on unless SYSFS was
disabled. Therefore, this code has largely been disabled except by
people turning it on explicitly for namespace development.
With help from Eugene Teo <eugene@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b4c30aad39 upstream.
Several leaks in audit_tree didn't get caught by commit
318b6d3d7d, including the leak on normal
exit in case of multiple rules referring to the same chunk.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6f5d511489 upstream.
... aka "Al had badly fscked up when writing that thing and nobody
noticed until Eric had fixed leaks that used to mask the breakage".
The function essentially creates a copy of old array sans one element
and replaces the references to elements of original (they are on cyclic
lists) with those to corresponding elements of new one. After that the
old one is fair game for freeing.
First of all, there's a dumb braino: when we get to list_replace_init we
use indices for the wrong arrays - the position in the new one with the old
array and vice versa.
Another bug is more subtle - the termination condition is wrong if the
element to be excluded happens to be the last one. We shouldn't go
until we fill the new array, we should go until we've finished the old
one. Otherwise the element we are trying to kill will remain on the
cyclic lists...
That crap used to be masked by several leaks, so it was not quite
trivial to hit. Eric had fixed some of those leaks a while ago and the
shit had hit the fan...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is a backport of the mainline patches
cf0277e714045cfb71a3b49bb574e4
Here is the description of the first of
those patches (the other two just fixed
bugs added by that patch):
Since I removed the master netdev, we've been
keeping internal queues only, and even before
that we never told the networking stack above
the virtual interfaces about congestion. This
means that packets are queued in mac80211 and
the upper layers never know, possibly leading
to memory exhaustion and other problems.
This patch makes all interfaces multiqueue and
uses ndo_select_queue to put the packets into
queues per AC. Additionally, when the driver
stops a queue, we now stop all corresponding
queues for the virtual interfaces as well.
The injection case will use VO by default for
non-data frames, and BE for data frames, but
downgrade any data frames according to ACM. It
needs to be fleshed out in the future to allow
choosing the queue/AC in radiotap.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Cc: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Stable commit 0399123f3d didn't match the
original upstream commit. The CONFIG_MMU check was added much too early
in the list, disabling a lot of proc entries in the process.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cda9d05c49 upstream.
This code generally fails to adjust the render clock, and when it does,
it conflicts with some other register settings and can cause problems.
So remove this code altogether. I'm reworking it now to do the right
thing, but the only bit it will share is the VBT check for whether
reclocking is supported, so I'm leaving that bit.
Reverts most of 652c393a33 ("add dynamic
clock frequency control"), though for many the regressions showed up
in the later 181a5336d6 ("Fix render
reclock availability detection").
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d790744880 upstream.
Various missing sanity checks caused rejected action frames to be
interpreted as channel switch announcements, which can cause a client
mode interface to switch away from its operating channel, thereby losing
connectivity. This patch ensures that only spectrum management action
frames are processed by the CSA handling function and prevents rejected
action frames from getting processed by the MLME code.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8a9ac160e8 upstream.
tid is used as an array offset.
agg = &priv->stations[sta_id].tid[tid].agg;
iwl4965_tx_status_reply_tx(priv, agg, tx_resp, txq_id, index);
It should be limited to MAX_TID_COUNT - 1:
struct iwl_tid_data tid[MAX_TID_COUNT];
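A minimal sketch of the kind of bounds check being asked for (whether the real
fix warns, drops the frame, or clamps the value is not shown here):

    if (WARN_ON(tid >= MAX_TID_COUNT))
            return;         /* never index past the tid[] array */

    agg = &priv->stations[sta_id].tid[tid].agg;
    iwl4965_tx_status_reply_tx(priv, agg, tx_resp, txq_id, index);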
regards,
dan carpenter
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e12822e1d3 upstream.
This fixes a syntax error when setting up the user regulatory
hint. This change yields the same exact binary object though
so it ends up just being a syntax typo fix, fortunately.
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 359207c687 upstream.
Commit 8bf3d79bc4 enabled EEPROM
checksum checks to avoid bogus bug reports but failed to update
the code to consider devices with custom EEPROM sizes.
Devices with custom sized EEPROMs have the upper limit size stuffed
in the EEPROM. Use this as the upper limit instead of the static
default size. In case of a checksum error also provide back the
max size and whether or not this was the default size or a custom
one. If the EEPROM is busted we add a failsafe check to ensure
we don't loop forever or try to read bogus areas of hardware.
This closes bug 14874
http://bugzilla.kernel.org/show_bug.cgi?id=14874
Cc: David Quan <david.quan@atheros.com>
Cc: Stephen Beahm <stephenbeahm@comcast.net>
Reported-by: Joshua Covington <joshuacov@googlemail.com>
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c5cae661d6 upstream.
In 65f63384 "xen: improve error handling in do_suspend" I said:
- xs_suspend()/xs_resume() and dpm_suspend_noirq()/dpm_resume_noirq() were not
nested in the obvious way.
and changed the ordering of the calls as so:
    BEFORE                  AFTER
    xs_suspend              dpm_suspend_noirq
    dpm_suspend_noirq       xs_suspend
    *SUSPEND*               *SUSPEND*
    dpm_resume_noirq        dpm_resume_noirq
    xs_resume               xs_resume
Clearly this is not an improvement and I was talking rubbish.
In particular the new ordering is susceptible to a hang if a xenstore write is
in progress at the point at which the suspend kicks in. When the suspend
process calls xs_suspend it tries to take the request_mutex but if a write is
in progress it could be looping in xenbus_xs.c:read_reply() waiting for
something to arrive on &xs_state.reply_list while holding the request_mutex
(taken in the caller of read_reply).
However if we have done dpm_suspend_noirq before xs_suspend then we won't get
any more xenstore interrupts and process_msg() will never be woken up to add
anything to the reply_list.
Fix this by calling xs_suspend before dpm_suspend_noirq. If dpm_suspend_noirq
fails then make sure we go through the xs_suspend_cancel() code path.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 05b5d89823 upstream.
Commit fd8fbfc1 modified the way we find the amount of reserved space
belonging to an inode. The amount of reserved space is checked
from dquot_transfer(), and thus inode_reserved_space() gets called
even for filesystems that don't provide the get_reserved_space callback,
which results in a BUG.
Fix the problem by checking get_reserved_space callback and return 0 if
the filesystem does not provide it.
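A hedged sketch of the guard, following the fs/quota/dquot.c helpers from
memory (names may differ slightly from the actual patch):

    static qsize_t inode_get_rsv_space(struct inode *inode)
    {
            qsize_t ret;

            /* filesystems without reserved-space accounting report zero */
            if (!inode->i_sb->dq_op->get_reserved_space)
                    return 0;

            spin_lock(&inode->i_lock);
            ret = *inode_reserved_space(inode);
            spin_unlock(&inode->i_lock);
            return ret;
    }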
CC: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb595c923b upstream.
The ADT7462_PIN28_VOLT value is a 4-bit field, so the corresponding
shift must be 4.
Signed-off-by: Roger Blofeld <blofeldus@yahoo.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1fe63ab47a upstream.
The max junction temperature of Atom N450/D410/D510 CPUs is 100 degrees
Celsius. Since these CPUs are always coupled with the Intel NM10 chipset in
one package, the best way to verify whether an Atom CPU is an N450/D410/D510
is to check the host bridge device.
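A heavily hedged sketch of such a host-bridge check; the PCI device IDs below
are assumptions for illustration and would need to be verified against the
actual coretemp change.

    struct pci_dev *host_bridge;

    /* bus 0, device 0, function 0 is the host bridge */
    host_bridge = pci_get_bus_and_slot(0, PCI_DEVFN(0, 0));
    if (host_bridge && host_bridge->vendor == PCI_VENDOR_ID_INTEL &&
        (host_bridge->device == 0xa000 ||   /* assumed Pineview bridge IDs */
         host_bridge->device == 0xa010))
            tjmax = 100000;                 /* 100 degrees C in millidegrees */
    pci_dev_put(host_bridge);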
Signed-off-by: Yong Wang <yong.y.wang@intel.com>
Acked-by: Huaxu Wan <huaxu.wan@intel.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aaff23a95a upstream.
As noticed by Dan Carpenter <error27@gmail.com>, update_nl_seq()
currently contains an out of bounds read of the seq_aft_nl array
when looking for the oldest sequence number position.
Fix it to only compare valid positions.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dce766af54 upstream.
Normal users are currently allowed to set/modify ebtables rules.
Restrict this to processes with CAP_NET_ADMIN.
Note that this cannot be reproduced with an unmodified ebtables binary
because it uses SOCK_RAW.
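The shape of the fix is a capability gate at the top of the set (and get)
sockopt handlers; a minimal sketch:

    /* rule manipulation is an administrative operation */
    if (!capable(CAP_NET_ADMIN))
            return -EPERM;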
Signed-off-by: Florian Westphal <fwestphal@astaro.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit af9a75dd1a upstream.
This model needs both 'Headphone Jack Sense' and 'Line Jack Sense' muted
for audible playback, so just add it to the ad1981 jack sense blacklist.
Tested-by: Pete <x41215201@gmail.com>
Signed-off-by: Daniel T Chen <crimsun@ubuntu.com>
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9c0afc861a upstream.
The capture source or input source mixer element wasn't created properly
for the ALC861-VD codec due to the wrong NID being passed to
alc_auto_create_input_ctls().
References: Novell bnc#568305
http://bugzilla.novell.com/show_bug.cgi?id=568305
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5fa83ce284 upstream.
The main bug was that 'blk_cleanup_queue()' was called while the block
device could still be in use, for example, because the card was removed
while files were still open.
In addition, to be sure that 'mmc_request()' will get called for all new
requests (so it can error them out), the queue is emptied during cleanup.
This is done after the worker thread is stopped to avoid racing with it.
Finally, it is not a device error for this to be happening, so quiet the
(sometimes very many) error messages.
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7d92df6929 upstream.
When a card is removed before mmc_blk_probe() has called add_disk(), then
the minor field is uninitialized and has value 0. This caused
mmc_blk_put() to always release devidx 0 even if 0 was still in use. Then
the next mmc_blk_probe() used the first free idx of 0, which oopses in
sysfs, since it is used by another card.
Signed-off-by: Anna Lemehova <EXT-Anna.Lemehova@nokia.com>
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b45c6e76bc upstream.
When print-fatal-signals is enabled it's possible to dump any memory
reachable by the kernel to the log by simply jumping to that address from
user space.
Or crash the system if there's some hardware with read side effects.
The fatal signals handler will dump 16 bytes at the execution address,
which is fully controlled by ring 3.
In addition, when something jumps to an unmapped address there will be up to
16 additional useless page faults, which might be potentially slow (and at
least are not very efficient).
Fortunately this option is off by default and only there on i386.
But fix it by checking for kernel addresses and also stopping when there's
a page fault.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bd4f490a07 upstream.
The LTP cgroup test suite generates a "kernel BUG at kernel/cgroup.c:790!"
here in cgroup_diput():
/*
* if we're getting rid of the cgroup, refcount should ensure
* that there are no pidlists left.
*/
BUG_ON(!list_empty(&cgrp->pidlists));
The cgroup pidlist rework in 2.6.32 generates the BUG_ON, which is caused
when pidlist_array_load() calls cgroup_pidlist_find():
(1) if a matching cgroup_pidlist is found, it down_write's the mutex of the
pre-existing cgroup_pidlist, and increments its use_count.
(2) if no matching cgroup_pidlist is found, then a new one is allocated, it
down_write's its mutex, and the use_count is set to 0.
(3) the matching, or new, cgroup_pidlist gets returned back to pidlist_array_load(),
which increments its use_count -- regardless of whether it is new or pre-existing --
and up_write's the mutex.
So if a matching list is ever encountered by cgroup_pidlist_find() during
the life of a cgroup directory, it results in an inflated use_count value,
preventing it from ever getting released by cgroup_release_pid_array().
Then if the directory is subsequently removed, cgroup_diput() hits the
BUG_ON() when it finds that the directory's cgroup is still populated with
a pidlist.
The patch simply removes the use_count increment when a matching pidlist
is found by cgroup_pidlist_find(), because it gets bumped by the calling
pidlist_array_load() function while still protected by the list's mutex.
Signed-off-by: Dave Anderson <anderson@redhat.com>
Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Ben Blum <bblum@andrew.cmu.edu>
Cc: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 29bd0ae25f upstream.
drivers/gpu/drm/i915/i915_dma.c: In function 'i915_driver_load':
drivers/gpu/drm/i915/i915_dma.c:1114: warning: 'll_base' may be used uninitialized in this function
Partly this is because gcc isn't smart enough. But `ll_base' does get used
uninitialised in the DRM_DEBUG() call.
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Eric Anholt <eric@anholt.net>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e5a95eb778 upstream.
Select the correct BPC for LVDS on Ironlake. If it is an 18-bit LVDS panel,
the BPC will be 6. When it is a 24-bit LVDS panel, the BPC will be 8.
At the same time the BPC will be 8 when the output device is CRT/HDMI/DP.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8faf3b3174 upstream.
Make the BPC in FDI rx/transcoder be consistent with that in pipeconf on Ironlake.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 898822ce95 upstream.
Enable/disable the dithering for LVDS based on the VBT setting. On the 965/G4X
platforms the dithering flag is defined in the LVDS register, while on Ironlake
the dithering flag is defined in the pipeconf register.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e6be8d9d17 upstream.
drm_pci_alloc() takes an address-mask argument for setting the PCI DMA
mask on the device, which should instead be set up properly by the DRM driver.
Leaving it as a parameter of drm_pci_alloc() causes confusion,
and a mistake can corrupt the correct DMA mask setting, as seen on
Intel hardware where the wrong DMA mask was set for the hardware status page.
So remove it from drm_pci_alloc().
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e3d8affb0d upstream.
As pinning (allocating and binding GTT memory) does not actually invoke
GPU commands, it is safe, and indeed is attempted, during resumption
from suspension:
[drm:intel_init_clock_gating] *ERROR* failed to pin power context: -16
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 96b47b6559 upstream.
i915_gem_object_unbind had the ordering wrong. The other user,
i915_gem_object_put_fence_reg already has the correct ordering.
The result was usually corrupted pixmaps, especially garbled font glyphs
after a suspend/resume (because this evicts everything).
I'm still waiting for the feedback from the bug-reporters, but
because this obviously fixes a bug (at least for me) I'm already
submitting it.
Bugzilla: http://bugs.freedesktop.org/show_bug.cgi?id=25406
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a2565377a5 upstream.
Dirk reports that nothing is displayed on LVDS when using ubuntu 9.1 after
closing and reopening the lid. I also reproduced this issue on another laptop.
After some tests and debugging, it seems to be related to the LVDS status not
being updated in time in the course of suspend/resume.
Now the LID state is used to check whether the LVDS is connected or
disconnected. And when the LID is closed, it means that the LVDS is
disconnected. When it is reopened, it means that the LVDS is connected.
At the same time on some distributions the LID event is also used to put
the system into suspend state. When the LID is closed, the system will enter
the suspend state. When the LID is reopened, the system will be resumed.
In that case, when the LID is closed the user-space script will receive the
LID notification event and detect the LVDS as disconnected, and the system
will then enter the suspended state. When the LID is reopened, the system is
resumed. As the LVDS status is not updated in the course of resume, the LVDS
connector is marked as unused and disabled. After the resume is finished, the
user-space script will try to configure the display mode for LVDS. But
unfortunately, as the LVDS status is not updated in time and it is still
marked as disconnected, the LVDS and its corresponding CRTC will be disabled
again in drm_helper_disable_unused_functions after changing the mode for
LVDS.
So we had better check and update the status of the LVDS connector after
receiving the LID notification event. Then, after the system is resumed from
the suspended state, we can set the display mode for LVDS correctly.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Reported-by: Dirk Hohndel <hohndel@infradead.org>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 486bad2e40 upstream.
When handling the gssd downcall, the kernel should distinguish between a
successful downcall that contains an error code and a failed downcall
(i.e. where the parsing failed or some other sort of problem occurred).
In the former case, gss_pipe_downcall should be returning the number of
bytes written to the pipe instead of an error. In the event of other
errors, we generally want the initiating task to retry the upcall so
we set msg.errno to -EAGAIN. An unexpected error code here is a bug
however, so BUG() in that case.
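A hedged sketch of that error handling at the tail of gss_pipe_downcall()
(variable names follow the surrounding function as I recall it; mlen is the
length written to the pipe):

    switch (err) {
    case -EACCES:
            gss_msg->msg.errno = err;       /* hand the status to the waiter... */
            err = mlen;                     /* ...but the downcall itself worked */
            break;
    case -EFAULT:
    case -ENOMEM:
    case -EINVAL:
    case -ENOSYS:
            gss_msg->msg.errno = -EAGAIN;   /* make the task retry the upcall */
            break;
    default:
            printk(KERN_CRIT "%s: unexpected error from gss downcall: %zd\n",
                   __func__, err);
            BUG();
    }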
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b891e4a05e upstream.
If the context allocation fails, it will return GSS_S_FAILURE, which is
neither a valid error code, nor is it even negative.
Return ENOMEM instead...
Reported-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 14ace024b1 upstream.
If the context allocation fails, the function currently returns a random
error code, since the variable 'p' still points to a valid memory location.
Ensure that it returns ENOMEM...
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b292cf9ce7 upstream.
There're some warnings of "nfsd: peername failed (err 107)!"
socket error -107 means Transport endpoint is not connected.
This warning message was outputed by svc_tcp_accept() [net/sunrpc/svcsock.c],
when kernel_getpeername returns -107. This means socket might be CLOSED.
And svc_tcp_accept was called by svc_recv() [net/sunrpc/svc_xprt.c]
if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
<snip>
newxpt = xprt->xpt_ops->xpo_accept(xprt);
<snip>
So this might happen when xprt->xpt_flags has both XPT_LISTENER and XPT_CLOSE set.
Let's take a look at commit b0401d72: it moved the close
processing after the do-recvfrom method, but it also introduced these
warnings. If xpt_flags has both XPT_LISTENER and XPT_CLOSE set, we should
close the transport, not accept and then close.
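A minimal sketch of the reordering (based on the 2.6.32 svc_recv() structure,
with the enqueue details elided): a transport marked for close is closed before
it is ever treated as a listener.

    if (test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
            dprintk("svc_recv: found XPT_CLOSE\n");
            svc_delete_xprt(xprt);          /* close it instead of accepting */
    } else if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
            newxpt = xprt->xpt_ops->xpo_accept(xprt);
            /* ... register and enqueue the newly accepted transport ... */
    }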
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Cc: J. Bruce Fields <bfields@fieldses.org>
Cc: Neil Brown <neilb@suse.de>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Nikola Ciprich <extmaillist@linuxbox.cz>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7211a4e859 upstream.
nfsd is not using vfs_fsync, so I missed it when changing the calling
convention during the 2.6.32 window. This patch fixes it to not only
start the data writeout, but also wait for it to complete before calling
into ->fsync.
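A hedged sketch of the described behaviour (not the literal upstream diff; the
->fsync prototype shown is the 2.6.32 one):

    static int nfsd_dosync(struct file *filp, struct dentry *dentry,
                           const struct file_operations *fop)
    {
            struct inode *inode = dentry->d_inode;
            int err;

            /* was filemap_fdatawrite(): start the writeout but don't wait */
            err = filemap_write_and_wait(inode->i_mapping);
            if (err == 0 && fop && fop->fsync)
                    err = fop->fsync(filp, dentry, 0);

            return err;
    }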
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit efd124b999 upstream.
exofs uses simple_write_end() for its .write_end handler. But
that is not enough, because simple_write_end() does not call
mark_inode_dirty() when it extends i_size. So even if we do
call mark_inode_dirty at the beginning of writeout, with a very
long IO and a saturated system we might get .write_inode()
called while still extend-writing to the file and miss out on the last
i_size updates.
So override .write_end, call simple_write_end(), and afterwards, if
i_size was changed, call mark_inode_dirty().
It stands to logic that since simple_write_end() was the one extending
i_size it should also call mark_inode_dirty(). But it looks like all
users of simple_write_end() are in-memory pseudo filesystems, which
couldn't care less about mark_inode_dirty(). I might submit a
warning-comment patch to simple_write_end() in the future.
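A minimal sketch of such an override, using the 2.6.32 .write_end prototype
(assumed to be close to, but not necessarily identical with, the actual exofs
change):

    static int exofs_write_end(struct file *file, struct address_space *mapping,
                               loff_t pos, unsigned len, unsigned copied,
                               struct page *page, void *fsdata)
    {
            struct inode *inode = mapping->host;
            loff_t i_size = inode->i_size;
            int ret;

            ret = simple_write_end(file, mapping, pos, len, copied, page, fsdata);
            if (i_size != inode->i_size)    /* simple_write_end() grew the file */
                    mark_inode_dirty(inode);
            return ret;
    }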
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 10b465aaf9 upstream.
Commit 35dead4 "modules: don't export section names of empty sections
via sysfs" changed the set of sections that have attributes, but did
not change the iteration over these attributes in add_notes_attrs().
This can lead to add_notes_attrs() creating attributes with the wrong
names or with null name pointers.
Introduce a sect_empty() function and use it in both add_sect_attrs()
and add_notes_attrs().
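The helper is small; a sketch of what sect_empty() amounts to:

    static inline int sect_empty(const Elf_Shdr *sect)
    {
            /* unallocated or zero-sized sections get no sysfs attribute */
            return !(sect->sh_flags & SHF_ALLOC) || sect->sh_size == 0;
    }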
Reported-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Tested-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b3172f222a upstream.
Several ASoC codec drivers wrongly assume that the params_rate() macro
returns one of the SNDRV_PCM_RATE_* defines instead of the actual numerical
sampling rate. Fix them.
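An illustration of the bug class (the hw_params fragment and the clk_div
values are hypothetical): params_rate() yields the numeric rate in Hz, so the
switch must be over numbers, not over SNDRV_PCM_RATE_* bits.

    switch (params_rate(params)) {          /* numeric rate in Hz */
    case 8000:                              /* not SNDRV_PCM_RATE_8000 */
            clk_div = 12;
            break;
    case 44100:                             /* not SNDRV_PCM_RATE_44100 */
            clk_div = 2;
            break;
    case 48000:
            clk_div = 2;
            break;
    default:
            return -EINVAL;
    }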
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Acked-by: Liam Girdwood <lrg@slimlogic.co.uk>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 53281b6d34 upstream.
Yes, the add and remove cases do share the same basic loop and the
locking, but the compiler can inline and then CSE some of the end result
anyway. And splitting it up makes the code way easier to follow,
and makes it clearer exactly what the semantics are.
In particular, we must make sure that the FASYNC flag in file->f_flags
exactly matches the state of "is this file on any fasync list", since
not only is that flag visible to user space (F_GETFL), but we also use
that flag to check whether we need to remove any fasync entries on file
close.
We got that wrong for the case of a mixed use of file locking (which
tries to remove any fasync entries for file leases) and fasync.
Splitting the function up also makes it possible to do some future
optimizations without making the function even messier. In particular,
since the FASYNC flag has to match the state of "is this on a list", we
can do the following future optimizations:
- on remove, we don't even need to get the locks and traverse the list
if FASYNC isn't set, since we can know a priori that there is no
point (this is effectively the same optimization that we already do
in __fput() wrt removing fasync on file close)
- on add, we can use the FASYNC flag to decide whether we are changing
an existing entry or need to allocate a new one.
but this is just the cleanup + fix for the FASYNC flag.
Acked-by: Al Viro <viro@ZenIV.linux.org.uk>
Tested-by: Tavis Ormandy <taviso@google.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7ea6600148 upstream.
generic_permission was preventing CAP_DAC_READ_SEARCH-enabled
processes from opening DAC-protected files read-only, because
do_filp_open adds MAY_OPEN to the open mask.
Ignore MAY_OPEN. After this patch, CAP_DAC_READ_SEARCH is
again sufficient to open(fname, O_RDONLY) on a file to which
DAC otherwise refuses us read permission.
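A hedged sketch of the capability fallback after the change (condensed from
fs/namei.c as I recall it): MAY_OPEN is masked away before deciding whether
CAP_DAC_READ_SEARCH is sufficient.

    /* only the rwx bits matter for the capability checks */
    mask &= MAY_READ | MAY_WRITE | MAY_EXEC;

    /* read-only access, or search of a directory, is covered by
     * CAP_DAC_READ_SEARCH */
    if (mask == MAY_READ ||
        (S_ISDIR(inode->i_mode) && !(mask & MAY_WRITE)))
            if (capable(CAP_DAC_READ_SEARCH))
                    return 0;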
Reported-by: Mike Kazantsev <mk.fraggod@gmail.com>
Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
Tested-by: Mike Kazantsev <mk.fraggod@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 93b6bd26b7 upstream.
We've had many reports of rt61pci failures with powersaving enabled.
Therefore, as a stop-gap measure, disable powersaving of the rt61pci
until we have found a proper solution.
Also disable powersaving on rt2800pci as it most probably will show
the same problem.
Signed-off-by: Gertjan van Wingerde <gwingerde@gmail.com>
Acked-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2.6.33-rc1 commit 73848b4684, adjusted
to include 31e855ea7173bdb0520f9684580423a9560f66e0's movement of
the unlock_page(oldpage), but omitting the other intervening cleanups.
When KSM merges an mlocked page, it has been forgetting to munlock it:
that's been left to free_page_mlock(), which reports it in /proc/vmstat
as unevictable_pgs_mlockfreed instead of unevictable_pgs_munlocked,
which indicates that such pages _might_ be left unevictable for long
after they should be evictable. Call munlock_vma_page() to fix that.
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b39415b273 upstream.
In AIM7 runs, recent kernels start swapping out anonymous pages well
before they should. This is due to shrink_list falling through to
shrink_inactive_list if !inactive_anon_is_low(zone, sc), when all we
really wanted to do is pre-age some anonymous pages to give them extra
time to be referenced while on the inactive list.
The obvious fix is to make sure that shrink_list does not fall through to
scanning/reclaiming inactive pages when we called it to scan one of the
active lists.
This change should be safe because the loop in shrink_zone ensures that we
will still shrink the anon and file inactive lists whenever we should.
[kosaki.motohiro@jp.fujitsu.com: inactive_file_is_low() should be inactive_anon_is_low()]
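A sketch of the intended control flow, with helper names as I recall them from
2.6.32's mm/vmscan.c (treat them as approximate):

    static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
                                     struct zone *zone, struct scan_control *sc,
                                     int priority)
    {
            int file = is_file_lru(lru);

            if (is_active_lru(lru)) {
                    if (inactive_list_is_low(zone, sc, file))
                            shrink_active_list(nr_to_scan, zone, sc, priority, file);
                    return 0;       /* never fall through to the inactive scan */
            }

            return shrink_inactive_list(nr_to_scan, zone, sc, priority, file);
    }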
Reported-by: Larry Woodman <lwoodman@redhat.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tomasz Chmielewski <mangoo@wpkg.org>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rik Theys <rik.theys@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6d3b82f2d3 upstream.
Per commit 240799cd, the option name for readahead should be
inode_readahead_blks, not inode_readahead.
Signed-off-by: Fang Wenqi <antonf@turbolinux.com.cn>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 047106adcc upstream.
Heiko reported a case where a timer interrupt managed to
reference a root_domain structure that was already freed by a
concurrent hot-unplug operation.
Solve this the same way the regular sched_domain stuff is
synchronized: add a synchronize_sched() call to the free
path. This ensures that a root_domain stays present for any
atomic section that could have observed it.
Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Gregory Haskins <ghaskins@novell.com>
Cc: Siddha Suresh B <suresh.b.siddha@intel.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
LKML-Reference: <1258363873.26714.83.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 43f5e68733 upstream.
Clear the override flag after force-loading the module.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 56b34b91e2 upstream.
Currently, the module does not initialize fully when the DIMMs aren't
ECC but it still remains loaded. Propagate the error when no instance of
the driver is properly initialized and prevent further loading.
Reorganize and polish error handling in amd64_edac_init() while at it.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8f68ed9728 upstream.
Fix use-after-free errors by pushing all memory-freeing calls to the end
of amd64_remove_one_instance().
Reported-by: Darren Jenkins <darrenrjenkins@gmail.com>
LKML-Reference: <1261370306.11354.52.camel@ICE-BOX>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6ede31e030 upstream.
Randy Dunlap reported the following build error:
"When CONFIG_SMP=n, CONFIG_X86_MSR=m:
ERROR: "msrs_free" [drivers/edac/amd64_edac_mod.ko] undefined!
ERROR: "msrs_alloc" [drivers/edac/amd64_edac_mod.ko] undefined!"
This is due to the fact that <arch/x86/lib/msr.c> is conditioned on
CONFIG_SMP and in the UP case we have only the stubs in the header.
Fork off SMP functionality into a new file (msr-smp.c) and build
msrs_{alloc,free} unconditionally.
Reported-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
LKML-Reference: <20091216231625.GD27228@liondog.tnic>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 505422517d upstream.
The current rd/wrmsr_on_cpus helpers assume that the supplied
cpumasks are contiguous. However, there are machines out there
like some K8 multinode Opterons which have a non-contiguous core
enumeration on each node (e.g. cores 0,2 on node 0 instead of 0,1), see
http://www.gossamer-threads.com/lists/linux/kernel/1160268.
This patch fixes out-of-bounds writes (see URL above) by adding per-CPU
msr structs which are used on the respective cores.
Additionally, two helpers, msrs_{alloc,free}, are provided for use by
the callers of the MSR accessors.
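The two helpers are thin wrappers around the per-CPU allocator; a sketch,
assumed to be close to the actual arch/x86/lib code:

    struct msr *msrs_alloc(void)
    {
            struct msr *msrs;

            msrs = alloc_percpu(struct msr);        /* one struct msr per CPU */
            if (!msrs)
                    pr_warning("%s: error allocating msrs\n", __func__);

            return msrs;
    }
    EXPORT_SYMBOL(msrs_alloc);

    void msrs_free(struct msr *msrs)
    {
            free_percpu(msrs);
    }
    EXPORT_SYMBOL(msrs_free);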
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Aristeu Rozanski <aris@redhat.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Doug Thompson <dougthompson@xmission.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20091211171440.GD31998@aftab>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f6d6ae9657 upstream.
Unify almost identical code into one function and remove NUMA-specific
usage (specifically cpumask_of_node()) in favor of generic topology
methods.
Remove unused defines, while at it.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ba578cb34a upstream.
cpumask_t -> struct cpumask, and don't put one on the stack. (Note: this
is actually on the stack unless CONFIG_CPUMASK_OFFSTACK=y).
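For reference, the offstack-friendly pattern this conversion moves toward looks
like the following generic sketch (not the amd64_edac diff itself):

    cpumask_var_t mask;

    /* heap-allocated when CONFIG_CPUMASK_OFFSTACK=y, otherwise on the stack */
    if (!alloc_cpumask_var(&mask, GFP_KERNEL))
            return -ENOMEM;

    cpumask_clear(mask);
    cpumask_set_cpu(0, mask);       /* example use */
    /* ... pass 'mask' (a struct cpumask *) to the rd/wrmsr_on_cpus helpers ... */
    free_cpumask_var(mask);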
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b8a4754147 upstream.
Since rdmsr_on_cpus and wrmsr_on_cpus are almost identical, unify them
into a common __rwmsr_on_cpus helper thus avoiding code duplication.
While at it, convert cpumask_t's to const struct cpumask *.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fd8fbfc170 upstream.
Currently the inode reservation is managed by the filesystem itself and this
reservation is transferred on dquot_transfer(). This means that the
inode reservation must always be in sync with
dquot->dq_dqb.dqb_rsvspace. Otherwise dquot_transfer() will result
in an incorrect quota (the WARN_ON in dquot_claim_reserved_space() will be
triggered).
This is not easy because of complex locking order issues
for example http://bugzilla.kernel.org/show_bug.cgi?id=14739
The patch introduces a quota reservation field for each fs inode
(an fs-specific inode is used in order to prevent bloating the generic
VFS inode). This reservation is managed by the quota code internally,
similar to i_blocks/i_bytes, and may not always be in sync with the
internal fs reservation.
Also perform some code rearrangement:
- Unify dquot_reserve_space() and dquot_reserve_space()
- Unify dquot_release_reserved_space() and dquot_free_space()
- Also this patch adds a missing warning update to release_rsv():
dquot_release_reserved_space() must call flush_warnings() as
dquot_free_space() does.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b462707e7c upstream.
The quota code requires an unlocked version of this function. Of course
we could just copy-paste the code, but copy-pasting is always an evil.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e971b0b9e0 upstream.
Some disks do not contain the VAT inode in the last recorded block as required
by the standard, but a few blocks earlier (or the number of recorded blocks
is wrong). So look for the VAT inode a bit before the end of the media.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ae78880129 upstream.
Increases the device timeout from 10s to 5 minutes, giving the user a
visual indication during that time in case there are problems. The patch
is a backport of changesets 144 and 150 in the Xenbits tree.
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f8dc33088f upstream.
When printing a warning about a timed-out device, print the
current state of both ends of the device connection (i.e., backend as
well as frontend). This backports half of changeset 146 from the
Xenbits tree.
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c6e1971139 upstream.
The logic of is_disconnected_device/exists_disconnected_device is wrong
in that they are used to test whether a device is trying to connect (i.e.
connecting). For this reason the patch fixes them to not consider a
Closing or Closed device to be connecting. At the same time the patch
also renames the functions according to what they really do; you could
say a closed device is "disconnected" (the old name), but not "connecting"
(the new name).
This patch is a backport of changeset 909 from the Xenbits tree.
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 22825ab769 upstream.
When a DASD device is used with the DIAG discipline, the DIAG
initialization will indicate success or error with a respective
return code. So far we have interpreted a return code of 4 as error,
but it actually means that the initialization was successful, but
the device is read-only. To allow read-only devices to be used with
DIAG we need to accept a return code of 4 as success.
Re-initialization of the DIAG access is also part of the DIAG error
recovery. If we find that the access mode of a device has been
changed from writable to read-only while the device was in use,
we print an error message.
Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Stephen Powell <zlinuxman@wowway.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b16d9acbdb upstream.
Sometimes we will use a crtc for the integrated LVDS which is different from
the one assigned by the BIOS. If we want flicker-free transitions,
we could read out its current state and set our own state
accordingly.
But it is true that if we aren't reading the current state out, we do need
to turn everything off before modesetting. Otherwise the clocks can get very
angry and we get something worse than a flicker at boot.
In fact we do a similar thing in UMS mode: we disable all the
possible outputs/crtcs for the first modesetting.
So we disable all the possible outputs/crtcs before entering KMS mode.
Before we configure connector/encoder/crtc,
drm_helper_disable_unused_functions() can disable all the possible outputs/crtcs.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Rafal Milecki <zajec5@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
In 2.6.32.2 r600 had no IRQ support; however, the patch in
500b758725 to fix vblanks on avivo
cards needs irqs.
So check for an R600 card and avoid this path if so.
This is a stable only patch for 2.6.32.2 as 2.6.33 has IRQs for r600.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6ad4c18884 upstream.
Since (e761b77: cpu hotplug, sched: Introduce cpu_active_map and redo
sched domain managment) we have cpu_active_mask which is supposed to rule
scheduler migration and load-balancing, except it never (fully) did.
The particular problem being solved here is a crash in try_to_wake_up()
where select_task_rq() ends up selecting an offline cpu because
select_task_rq_fair() trusts the sched_domain tree to reflect the
current state of affairs, similarly select_task_rq_rt() trusts the
root_domain.
However, the sched_domains are updated from CPU_DEAD, which is after the
cpu is taken offline and after stop_machine is done. Therefore it can
race perfectly well with code assuming the domains are right.
Cure this by building the domains from cpu_active_mask on
CPU_DOWN_PREPARE.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Holger Hoffstätte <holger.hoffstaette@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a00ae4d21b upstream.
As of commit ee18d64c1f ("KEYS: Add a keyctl to
install a process's session keyring on its parent [try #6]"), CONFIG_KEYS=y
fails to build on architectures that haven't implemented TIF_NOTIFY_RESUME yet:
security/keys/keyctl.c: In function 'keyctl_session_to_parent':
security/keys/keyctl.c:1312: error: 'TIF_NOTIFY_RESUME' undeclared (first use in this function)
security/keys/keyctl.c:1312: error: (Each undeclared identifier is reported only once
security/keys/keyctl.c:1312: error: for each function it appears in.)
Make KEYCTL_SESSION_TO_PARENT depend on TIF_NOTIFY_RESUME until
m68k and xtensa have implemented it.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: James Morris <jmorris@namei.org>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c2ff581aca upstream.
The routine b43_is_hw_radio_enabled() has long been a problem.
For PPC architecture with PHY Revision < 3, a read of the register
B43_MMIO_HWENABLED_LO will cause a CPU fault unless b43_status()
returns a value of 2 (B43_STAT_STARTED) (BUG 14181). Fixing that
results in Bug 14538 in which the driver is unable to reassociate
after resuming from hibernation because b43_status() returns 0.
The correct fix would be to determine why the status is 0; however,
I have not yet found why that happens. The correct value is found for
my device, which has PHY revision >= 3.
Returning TRUE when the PHY revision < 3 and b43_status() returns 0 fixes
the regression for 2.6.32.
This patch fixes the problem in Red Hat Bugzilla #538523.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Tested-by: Christian Casteyde <casteyde.christian@free.fr>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8fa9ff6849 upstream.
When fragments from bridge netfilter are passed to IPv4 or IPv6 conntrack
and a reassembly queue with the same fragment key already exists from
reassembling a similar packet received on a different device (f.i. with
multicasted fragments), the reassembled packet might continue on a different
codepath than where the head fragment originated. This can cause crashes
in bridge netfilter when a fragment received on a non-bridge device (and
thus with skb->nf_bridge == NULL) continues through the bridge netfilter
code.
Add a new reassembly identifier for packets originating from bridge
netfilter and use it to put those packets in isolated queues.
Fixes http://bugzilla.kernel.org/show_bug.cgi?id=14805
Reported-and-Tested-by: Chong Qiao <qiaochong@loongson.cn>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0b5ccb2ee2 upstream.
Currently the same reassembly queue might be used for packets reassembled
by conntrack in different positions in the stack (PREROUTING/LOCAL_OUT),
as well as local delivery. This can cause "packet jumps" when the fragment
completing a reassembled packet is queued from a different position in the
stack than the previous ones.
Add a "user" identifier to the reassembly queue key to seperate the queues
of each caller, similar to what we do for IPv4.
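To make the idea concrete, here is a rough, self-contained sketch (the struct
and enum names are illustrative, not necessarily the ones used in the patch):
the reassembly queue lookup key carries a "user" value, so fragments queued by
conntrack at PREROUTING, conntrack at LOCAL_OUT and local delivery can never
land in the same queue even when addresses and fragment id match.
#include <stdint.h>
enum frag6_user {			/* who queued the fragment */
	FRAG6_LOCAL_DELIVER,
	FRAG6_CONNTRACK_IN,
	FRAG6_CONNTRACK_OUT,
};
struct frag6_key {
	uint8_t  saddr[16];		/* IPv6 source address */
	uint8_t  daddr[16];		/* IPv6 destination address */
	uint32_t id;			/* fragment identification */
	uint32_t user;			/* caller identity, part of the hash key */
};
Two fragments that agree on saddr/daddr/id but were queued by different
callers now hash to different reassembly queues.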
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 70abc8cb90 upstream.
Alan Stern noticed that e100 caused slab corruption.
commit 98468efddb changed
the allocation of cbs to use dma pools, which don't return zeroed memory;
in particular the cb->status field used to track which cb to clean was
left uninitialized, causing the visible double freeing of skbs and a wrong
free cbs count.
Now the cbs are explicitly zeroed at allocation time.
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Roger Oksanen <roger.oksanen@cs.helsinki.fi>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d31f56dbf8 upstream.
task_in_mem_cgroup(), which is called by select_bad_process() to check
whether a task can be a candidate for being oom-killed from memcg's limit,
checks "curr->use_hierarchy"("curr" is the mem_cgroup the task belongs
to).
But this check returns true (a false positive) when:
    <some path>/aa      use_hierarchy == 0  <- hitting limit
    <some path>/aa/00   use_hierarchy == 1  <- the task belongs to
This leads to killing an innocent task in aa/00. This patch fixes
this bug. It also fixes the argument passed to
mem_cgroup_print_oom_info(): we should print information about the
mem_cgroup the task being killed belongs to, not the one current belongs to.
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 04a1e62c2c upstream.
The loop condition is fragile: we compare an unsigned value to zero, and
then decrement it by something larger than one in the loop. All the
callers should be passing in appropriately aligned buffer lengths, but
it's better to just not rely on it, and have some appropriate defensive
loop limits.
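As a standalone illustration of the pitfall (not the patched function itself),
compare the fragile and defensive loop shapes; the variable and the step size
here are made up for the example:
static void consume_fragile(unsigned int len)
{
	while (len != 0)	/* unsigned compare against zero */
		len -= 16;	/* wraps to a huge value if len is not a multiple of 16 */
}
static void consume_defensive(unsigned int len)
{
	while (len >= 16)	/* stop before the subtraction could wrap */
		len -= 16;
}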
Acked-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 50e9d31183 upstream.
This was found with a static checker and has not been tested, but it seems
pretty clear that the mutex_lock() was supposed to be mutex_unlock()
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Douglas Schilling Landgraf <dougsland@redhat.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Brandon Philips <brandon@ifup.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 70da2340fb upstream.
Jan Engelhardt reported we have this problem:
setting max_map_count to a value large enough results in programs dying at
first try. This is on 2.6.31.6:
15:59 borg:/proc/sys/vm # echo $[1<<31-1] >max_map_count
15:59 borg:/proc/sys/vm # cat max_map_count
1073741824
15:59 borg:/proc/sys/vm # echo $[1<<31] >max_map_count
15:59 borg:/proc/sys/vm # cat max_map_count
Killed
This is because we have a chance to make 'max_map_count' negative, but
a negative value is meaningless. Make it accept only non-negative values.
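The shape of the fix, sketched from the description above (the ctl_table
layout is abbreviated and field values are illustrative): switch the handler
to the range-checked variant and give it a lower bound of zero so negative
writes are rejected.
static int zero;
static struct ctl_table vm_table_snippet[] = {
	{
		.procname	= "max_map_count",
		.data		= &sysctl_max_map_count,
		.maxlen		= sizeof(sysctl_max_map_count),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,	/* was proc_dointvec */
		.extra1		= &zero,		/* reject values < 0 */
	},
	{ }
};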
Reported-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: James Morris <jmorris@namei.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6e14154676 upstream.
In NOMMU mode clamp dac_mmap_min_addr to zero to cause the tests on it to be
skipped by the compiler. We do this as the minimum mmap address doesn't make
any sense in NOMMU mode.
mmap_min_addr and round_hint_to_min() can be discarded entirely in NOMMU mode.
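The gist of the change, sketched (not the complete diff): in NOMMU builds the
tunable collapses to a compile-time constant zero, so checks of the form
"addr < dac_mmap_min_addr" become constant-false and the compiler drops them.
#ifdef CONFIG_MMU
extern unsigned long dac_mmap_min_addr;
#else
#define dac_mmap_min_addr	0UL	/* no minimum mmap address in NOMMU mode */
#endif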
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Eric Paris <eparis@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b98c06b6de upstream.
When mac80211 suspends it calls a driver's suspend callback
as a last step and after that the driver assumes no calls will
be made to it until we resume and its start callback is kicked.
If such calls are made, however, suspend can end up throwing
hardware in an unexpected state and making the device unusable
upon resume.
Fix this by preventing mac80211 from scheduling dynamic_ps_disable_work,
by checking for when mac80211 starts to suspend and starts
quiescing. Frames should still be allowed to go through, though, as
that is part of the quiescing steps, and we do not flush the
mac80211 workqueue since that was already done towards the
beginning of the suspend cycle.
The other mac80211 issue will be handled in the next patch.
For further details refer to the thread:
http://marc.info/?t=126144866100001&r=1&w=2
Cc: stable@kernel.org
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b7bb1756cb upstream.
I've also for a long time had a problem with the
temperature calculation code, which I had fixed
by byte-swapping the values, and now it turns out
that was the correct fix after all.
Also, any use of iwl_eeprom_query_addr() that is
for more than a u8 must be cast to little endian,
and some structs as well.
Fix all this. Again, no real impact on platforms
that already are little endian.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit af6b8ee388 upstream.
The construct "le16_to_cpu((__force __le16)(r >> 16))" has
always bothered me when looking through the iwlwifi code,
it shouldn't be necessary to __force anything, and before
this code, "r" was obtained with an ioread32, which swaps
each of the two u16 values in it properly when swapping the
entire u32 value. I've had arguments about this code with
people before, but always conceded they were right because
removing it only made things not work at all on big endian
platforms.
However, analysing a failure of the OTP reading code, I now
finally figured out what is going on, and why my intuition
about that code being wrong was right all along.
It turns out that the 'priv->eeprom' u8 array really wants
to have the data in it in little endian. So the force code
above and all really converts *to* little endian, not from
it. Cf., for instance, the function iwl_eeprom_query16() --
it reads two u8 values and combines them into a u16, in a
little-endian way. And considering it more, it makes sense
to have the eeprom array as on the device, after all not
all values really are 16-bit values, the MAC address for
instance is not.
Now, what this really means is that all the annotations are
completely wrong. The eeprom reading code should fill the
priv->eeprom array as a __le16 array, with __le16 values.
This also means that iwl_read_otp_word() should really have
a __le16 pointer as the data argument, since it should be
filling that in a format suitable for priv->eeprom.
Propagating these changes throughout, iwl_find_otp_image()
is found to be, now obviously visible, defective -- it uses
the data returned by iwl_read_otp_word() directly as if it
was CPU endianness. Fixing that, which is this hunk of the
patch:
- next_link_addr = link_value * sizeof(u16);
+ next_link_addr = le16_to_cpu(link_value) * sizeof(u16);
is the only real change of this patch. Everything else is
just fixing the sparse annotations.
Also, the bug only shows up on big endian platforms with a
1000 series card. 5000 and previous series do not use OTP,
and 6000 series has shadow RAM support which means we don't
ever use the defective code on any cards but 1000.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bc45a67079 upstream.
We see from http://bugzilla.intellinuxwireless.org/show_bug.cgi?id=2125
that power saving does not work well on 3945. Since then, power saving has
also been connected with association problems where an AP deauthenticates a
3945 after it is unable to transmit data to it - this happens when the 3945
enters power saving mode.
Disable power save support until issues are resolved.
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c37919bfe0 upstream.
The bit value of AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_BB is wrong, it should
be 0x400 and the number of bits to be right shifted is 10. Having this
wrong value in register 0x4054 sometimes affects bt quality in a btcoex environment.
Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c90017dd43 upstream.
debruijn32 (0x077CB531) is used to index gen_timer_index[]
which is an array of 32 u32. Having debruijn32 as unsigned
long on a 64-bit platform will result in an index beyond the 32
entries of gen_timer_index[], thereby causing a crash. Make it
a 32-bit unsigned type to fix this issue.
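A standalone illustration of why the constant's type matters (gen_timer_index[]
itself is not reproduced here; the 32-entry table it stands for is only hinted
at in the comments):
#include <stdint.h>
#include <stdio.h>
static const uint32_t debruijn32 = 0x077CB531u;	/* must stay 32-bit */
/* Hash the lowest set bit of "mask" into a 5-bit index (0..31) used to look
 * up a 32-entry table such as gen_timer_index[]. With a 64-bit "unsigned
 * long" constant the product below would not wrap at 32 bits and the shifted
 * result could far exceed 31. */
static unsigned int lowest_bit_hash(uint32_t mask)
{
	return ((mask & -mask) * debruijn32) >> 27;
}
int main(void)
{
	printf("%u\n", lowest_bit_hash(0x80));	/* always within 0..31 */
	return 0;
}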
Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3867cf6a8c upstream.
Ensure the device is awake prior to trying to tell the hardware
to stop it. If we do not, we can leave
the device in an undefined state, likely causing issues with
suspend and resume. This patch ensures the hardware is where it
should be prior to suspend.
Signed-off-by: Sujith <Sujith.Manoharan@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8b685ba9de upstream.
AMPDU TX actions poke the hardware, as such
we want to turn the hardware on for these actions. AMPDU RX operations
do not require the hardware to be on, as nothing is done in hardware for
those actions. Without this we cannot guarantee the hardware has
been programmed correctly for each AMPDU TX action.
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b479a076d upstream.
My previous change added in:
commit 815833e7ec
ath9k: fix tx status reporting
was not checking all possible tx error conditions. This could possibly
lead to throughput issues due to slow rate control adaption or missed
retransmissions of failed A-MPDU frames.
This patch adds a mask for all possible error conditions and uses it
in the xmit ok check.
Reported-by: Björn Smedman <bjorn.smedman@venatech.se>
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e8009e9850 upstream.
When TX DMA termination has failed, the HW has to be reset
completely. Doing a fast channel change in this case is insufficient.
Also, change the debug level of a couple of messages to FATAL.
Signed-off-by: Sujith <Sujith.Manoharan@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5f70a88f63 upstream.
When we remove a IBSS/AP/Mesh interface we stop DMA
but to do this we should ensure hardware is on. Awaken
the device prior to these calls. This should ensure
DMA is stopped upon suspend and plain device removal.
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 242ab7ad68 upstream.
The calibration period is now invoked by triggering a software
interrupt from within the ISR by ath5k_hw_calibration_poll()
instead of via a timer.
However, the calibration interval isn't initialized before
interrupts are enabled, so we can have a situation where an
interrupt occurs before the interval is assigned, so the
interval is actually negative. As a result, the ISR will
arm a software interrupt to schedule the tasklet, and then
rearm it when the SWI is processed, and so on, leading to a
softlockup at modprobe time.
Move the initialization order around so the calibration interval
is set before interrupts are active. Another possible fix
is to schedule the tasklet directly from the poll routine,
but I think there are additional plans for the SWI.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3bdb2d48c5 upstream.
Joseph Nahmias reported, in http://bugs.debian.org/562016,
that he was getting the following warning (with some log
around the issue):
ath0: direct probe to AP 00:11:95:77:e0:b0 (try 1)
ath0: direct probe responded
ath0: authenticate with AP 00:11:95:77:e0:b0 (try 1)
ath0: authenticated
ath0: associate with AP 00:11:95:77:e0:b0 (try 1)
ath0: deauthenticating from 00:11:95:77:e0:b0 by local choice (reason=3)
ath0: direct probe to AP 00:11:95:77:e0:b0 (try 1)
ath0: RX AssocResp from 00:11:95:77:e0:b0 (capab=0x421 status=0 aid=2)
ath0: associated
------------[ cut here ]------------
WARNING: at net/wireless/mlme.c:97 cfg80211_send_rx_assoc+0x14d/0x152 [cfg80211]()
Hardware name: 7658CTO
...
Pid: 761, comm: phy0 Not tainted 2.6.32-trunk-686 #1
Call Trace:
[<c1030a5d>] ? warn_slowpath_common+0x5e/0x8a
[<c1030a93>] ? warn_slowpath_null+0xa/0xc
[<f86cafc7>] ? cfg80211_send_rx_assoc+0x14d/0x152
...
ath0: link becomes ready
ath0: deauthenticating from 00:11:95:77:e0:b0 by local choice (reason=3)
ath0: no IPv6 routers present
ath0: link is not ready
ath0: direct probe to AP 00:11:95:77:e0:b0 (try 1)
ath0: direct probe responded
ath0: authenticate with AP 00:11:95:77:e0:b0 (try 1)
ath0: authenticated
ath0: associate with AP 00:11:95:77:e0:b0 (try 1)
ath0: RX ReassocResp from 00:11:95:77:e0:b0 (capab=0x421 status=0 aid=2)
ath0: associated
It is not clear to me how the first "direct probe" here
happens, but this seems to be a race condition, if the
user requests to deauth after requesting assoc, but before
the assoc response is received. In that case, it may
happen that mac80211 tries to report the assoc success to
cfg80211, but gets blocked on the wdev lock that is held
because the user is requesting the deauth.
The result is that we run into a warning. This is mostly
harmless, but may cause an unexpected event to be sent
to userspace; we'd send an assoc success event although
userspace was no longer expecting one.
To fix this, remove the warning and check whether the
race happened and in that case abort processing.
Reported-by: Joseph Nahmias <joe@nahmias.net>
Cc: 562016-quiet@bugs.debian.org
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 450aae3d7b upstream.
Currently, in IBSS mode, a single creator would go into
a loop trying to merge/scan. This happens because the IBSS timer is
rearmed on finishing a scan and the subsequent
timer invocation requests another scan immediately.
This patch fixes this issue by checking if we have just completed
a scan run trying to merge with other IBSS networks.
Signed-off-by: Sujith <Sujith.Manoharan@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Cc: Luis Rodriguez <lrodriguez@atheros.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0183826b58 upstream.
My
commit 77fdaa12ce
Author: Johannes Berg <johannes@sipsolutions.net>
Date: Tue Jul 7 03:45:17 2009 +0200
mac80211: rework MLME for multiple authentications
inadvertently broke WMM because it removed, along with
a bunch of other now useless initialisations, the line
initialising sdata->u.mgd.wmm_last_param_set to -1
which would make it adopt any WMM parameter set. If,
as is usually the case, the AP uses WMM parameter set
sequence number zero, we'd never update it until the
AP changes the sequence number.
Add the missing initialisation back to get the WMM
settings from the AP applied locally.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 24feda0084 upstream.
mac80211 does not propagate failed hardware reconfiguration
requests. For suspend and resume this is important due to all
the possible issues that can come out of the suspend <-> resume
cycle. Not propagating the error means cfg80211 will assume
the resume for the device went through fine and mac80211 will
continue on trying to poke at the hardware, enable timers,
queue work, and so on for a device which is completely
non-functional.
The least we can do is to propagate device start issues and
warn when this occurs upon resume. A side effect of this patch
is that we also now propagate the start errors upon hardware
reconfigurations (non-suspend), but this should also be desirable
anyway; there is no point in continuing to reconfigure a
device if mac80211 was unable to start the device.
For further details refer to the thread:
http://marc.info/?t=126151038700001&r=1&w=2
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c853da3f3 upstream.
Allocate priv->rx_packets[IWM_RX_ID_HASH + 1] because the max array
index is IWM_RX_ID_HASH according to IWM_RX_ID_GET_HASH().
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6e24a6eff4 upstream.
The vcpus are initialized with irr_pending set to false, but
loading the LAPIC registers with pending IRR fails to reset
the irr_pending variable.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fb341f572d upstream.
The invlpg prefault optimization breaks Windows 2008 R2 occasionally.
The visible effect is that the invlpg handler instantiates a pte which
is, microseconds later, written with a different gfn by another vcpu.
The OS could have other mechanisms to prevent a present translation from
being used, which the hypervisor is unaware of.
While the documentation states that the cpu is at liberty to prefetch tlb
entries, it looks like this is not heeded, so remove tlb prefetch from
invlpg.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a6d52d7067 upstream.
Put the ioat2 and ioat3 state machines in the halted state with all
errors cleared.
The ioat1 init path is not disturbed for stability; there are no
reported ioat1 initialization issues.
Reported-by: Roland Dreier <rdreier@cisco.com>
Tested-by: Roland Dreier <rdreier@cisco.com>
Acked-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cd78809f61 upstream.
When continuing a pq calculation the driver needs 3 extra sources. The
driver can perform a 3 source calculation with a single descriptor, but
needs an extended descriptor to process up to 8 sources in one
operation. However, in the p-disabled case only one extra source is
needed. When continuing a p-disabled operation there are occasions
(i.e. 0 < src_cnt % 8 < 3) where the tail operation does not need an
extended descriptor. Properly account for this fact otherwise invalid
'dmacount' values will be written to hardware usually causing the
channel to halt with 'invalid descriptor' errors.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0f76480643 upstream.
The assumption that acpi_table_parse passes the return value
of the handler function to the caller proved wrong
recently. The return value of the handler function is
totally ignored. This makes the initialization code for AMD
IOMMU buggy in a way that could cause a kernel panic on
initialization. This patch fixes the issue in the AMD IOMMU
driver.
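A generic sketch of the workaround pattern (the variable and helper names here
are illustrative, not the driver's exact ones; validate() in particular is
hypothetical): because acpi_table_parse() discards the handler's return value,
the handler records failure in a variable that the init code checks afterwards.
static int init_err;
static int __init parse_handler(struct acpi_table_header *table)
{
	if (validate(table) < 0) {	/* validate() is a stand-in check */
		init_err = -ENODEV;	/* remember the failure here ... */
		return -ENODEV;		/* ... because this value is ignored */
	}
	return 0;
}
static int __init driver_init(void)
{
	acpi_table_parse("IVRS", parse_handler);
	if (init_err)			/* the only reliable error channel */
		return init_err;
	/* continue with initialization */
	return 0;
}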
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a2934c7b36 upstream.
The scenario is this:
The kernel gets EREMOTE and starts chasing a DFS referral at mount time.
The tcon reference is put, which puts the session reference too, but
neither pointer is zeroed out.
The mount gets retried (goto try_mount_again) with new mount info.
Session setup fails and rc ends up being non-zero. The code then
falls through to the end and tries to put the previously freed tcon
pointer again. Oops at: cifs_put_smb_ses+0x14/0xd0
Fix this by moving the initialization of the rc variable and the tcon,
pSesInfo and srvTcp pointers below the try_mount_again label. Also, add
a FreeXid() before the goto to prevent xid "leaks".
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Reported-by: Gustavo Carvalho Homem <gustavo@angulosolido.pt>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a8fe9ea200 upstream.
Stephen Rothwell reported the following build warning:
lib/dma-debug.c: In function 'dma_debug_device_change':
lib/dma-debug.c:680: warning: 'return' with no value, in function returning non-void
Introduced by commit f797d9881b
("dma-debug: Do not add notifier when dma debugging is disabled").
Return 0 [notify-done] when disabled (this is standard bus notifier behavior).
Signed-off-by: Shaun Ruffell <sruffell@digium.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <20091231125624.GA14666@liondog.tnic>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f797d9881b upstream.
If CONFIG_HAVE_DMA_API_DEBUG is defined and "dma_debug=off" is
specified on the kernel command line, when you detach a driver from a
device you can cause the following NULL pointer dereference:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<c0580d35>] dma_debug_device_change+0x5d/0x117
The problem is that the dma_debug_device_change notifier function is
added to the bus notifier chain even though the dma_entry_hash array
was never initialized. If dma debugging is disabled, this patch both
prevents dma_debug_device_change notifiers from being added to the
chain, and additionally ensures that the dma_debug_device_change
notifier function is a no-op.
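Sketched, the two halves of the change look roughly like this (the real code
lives in lib/dma-debug.c; details are abbreviated):
static int dma_debug_device_change(struct notifier_block *nb,
				   unsigned long action, void *data)
{
	if (global_disable)		/* hash table was never set up */
		return 0;
	/* ... normal handling of the device going away ... */
	return 0;
}
void dma_debug_add_bus(struct bus_type *bus)
{
	if (global_disable)		/* don't register the notifier at all */
		return;
	/* ... allocate a notifier_block and bus_register_notifier() ... */
}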
Signed-off-by: Shaun Ruffell <sruffell@digium.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cbd1998377 upstream.
evms configures md arrays by:
    open device
    send ioctl
    close device
for each different ioctl needed.
Since 2.6.29, the device can disappear after the 'close'
unless a significant configuration has happened to the device.
The change made by "SET_ARRAY_INFO" can too minor to stop the device
from disappearing, but important enough that losing the change is bad.
So: make sure SET_ARRAY_INFO sets mddev->ctime, and keep the device
active as long as ctime is non-zero (it gets zeroed with lots of other
things when the array is stopped).
This is suitable for -stable kernels since 2.6.29.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 39d3077099 upstream.
The wrong address was being used to write the SCIR led regs on
remote hubs. Also, there was an inconsistency between how BIOS
and the kernel indexed these regs. Standardize on using the
lower 6 bits of the APIC ID as the index.
This patch fixes the problem of writing to an errant address for
a cpu # >= 64.
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Jack Steiner <steiner@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <4B3922F9.3060905@sgi.com>
[ v2: fix a number of annoying checkpatch artifacts and whitespace noise ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6057912d7b upstream.
sizeof(dev->dev_addr) is the size of a pointer. A few lines above, the
size of this field is obtained using netdev->addr_len for a call to memcpy,
so do the same here.
A simplified version of the semantic patch that finds this problem is as
follows: (http://coccinelle.lip6.fr/)
// <smpl>
@@
expression *x;
expression f;
type T;
@@
*f(...,(T)x,...)
// </smpl>
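For readers outside the driver, a small self-contained C illustration of the
bug class (the struct here is a stand-in, not the real net_device):
#include <stdint.h>
#include <string.h>
struct fake_netdev {
	uint8_t		*dev_addr;	/* points at a 6-byte MAC address */
	unsigned char	addr_len;	/* real length of that address (6) */
};
static void copy_mac(const struct fake_netdev *netdev, uint8_t *dst)
{
	/* Wrong: sizeof a pointer is 4 or 8, never the address length:
	 *   memcpy(dst, netdev->dev_addr, sizeof(netdev->dev_addr));
	 * Right: use the recorded length, as the earlier memcpy does. */
	memcpy(dst, netdev->dev_addr, netdev->addr_len);
}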
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit da307123c6 upstream.
This patch (as1315) fixes some bugs in the USB core authorization
code:
usb_deauthorize_device() should deallocate the device strings
instead of leaking them, and it should invoke
usb_destroy_configuration() (which does proper reference
counting) instead of freeing the config information directly.
usb_authorize_device() shouldn't change the device strings
until it knows that the authorization will succeed, and it should
autosuspend the device at the end (having autoresumed the
device at the start).
Because the device strings can be changed, the sysfs routines
to display the strings must protect the string pointers by
locking the device.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
CC: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Acked-by: David Vrabel <david.vrabel@csr.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8d8558d108 upstream.
This patch (as1314) renames usb_configure_device() and
usb_configure_device_otg() in the hub driver. Neither name is
appropriate because these routines enumerate devices, they don't
configure them. That's handled by usb_choose_configuration() and
usb_set_configuration().
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 17be5c5f5e upstream.
Gadget stalling a zero-length SETUP request results in this error message:
SetupEnd came in a wrong ep0stage idle
In order to avoid it, always set the CSR0.DataEnd bit after detecting a zero-
length request. Add the missing '\n' to the error message itself as well...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Acked-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Felipe Balbi <felipe.balbi@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 37e9066b2f upstream.
brightness status is reported by the Apple Cinema Displays as an
'unsigned char' (u8) value, but the code used 'char' instead.
Note that the driver was developed on the PowerPC architecture,
where the two types are synonymous, which is not always the case.
Fixed that. Otherwise the driver will interpret brightness
levels > 127 as negative, and fail to load.
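A tiny standalone demonstration of the failure mode (values picked arbitrarily):
#include <stdio.h>
int main(void)
{
	unsigned char raw = 200;	/* brightness as reported by the display */
	char wrong = (char)raw;		/* signed char on x86: becomes -56 */
	unsigned char right = raw;	/* always 0..255 */
	printf("wrong=%d right=%u\n", wrong, right);
	return 0;
}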
Signed-off-by: pancho horrillo <pancho@pancho.name>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ac06c06770 upstream.
While converting emi62 to use request_firmware(), the driver was also
changed to use the ihex helper functions. However, this broke the loading
of the FPGA firmware because the code tries to access the addr field of
the EOF record which works with a plain array that has an empty last
record but not with the ihex helper functions where the end of the data is
signaled with a NULL record pointer, resulting in:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<f80d248c>] emi62_load_firmware+0x33c/0x740 [emi62]
This can be fixed by changing the loop condition to test the return value
of ihex_next_binrec() directly (like in emi26.c).
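Sketch of the corrected loop shape (simplified; the real function also
programs each record into the device and handles errors):
const struct ihex_binrec *rec = (const struct ihex_binrec *)fw->data;
while (rec) {
	/* ... write be32_to_cpu(rec->addr) / rec->data to the device ... */
	rec = ihex_next_binrec(rec);	/* returns NULL once the records run out */
}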
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Reported-and-tested-by: Der Mickster <retroeffective@gmail.com>
Acked-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 794f3141a1 upstream.
drivers/gpu/drm/radeon/radeon_test.c:45: undefined reference to `__udivdi3'
Reported-by: Mr. James W. Laferriere <babydr@baby-dragons.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 48e3cbb3f6 upstream.
This patch fixes a bug where "virtual" registers were being written to the ac97
bus. This was causing unrelated registers to become corrupted (headphone 0x04,
touchscreen 0x78, etc).
This patch duplicates protection that was included in the wm9713 driver.
Signed-off-by: Eric Millbrandt <emillbrandt@dekaresearch.com>
Acked-by: Liam Girdwood <lrg@slimlogic.co.uk>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb7f20b1c6 upstream.
This patch fixes the handling of VSX alignment faults in little-endian
mode (the current code assumes the processor is in big-endian mode).
The patch also makes the handlers clear the top 8 bytes of the register
when handling an 8 byte VSX load.
This is based on 2.6.32.
Signed-off-by: Neil Campbell <neilc@linux.vnet.ibm.com>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 13c199c0d0 upstream.
On some laptops calling the ACPI LID notifier will return NOTIFY_OK (non-zero).
This value is then used as the result of the ACPI LID resume function, which
causes the following warning message in the course of suspend/resume:
>PM: Device PNP0C0D:00 failed to resume: error 1
This patch eliminates the above warning message.
http://bugzilla.kernel.org/show_bug.cgi?id=14782
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bdc731bc5f upstream.
BugLink: https://bugs.launchpad.net/ubuntu/+bug/435958
The module alias currently matches any Acer computer but when loaded the
BIOS checks will only succeed on Aspire One models. This causes an invalid
BIOS warning for all other models (seen on an Aspire 4810T). This is not
fatal but worries users that see this message. Limit the module alias
to models starting with AOA, or DOA for Packard Bell.
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Borislav Petkov <petkovbb@gmail.com>
Acked-by: Peter Feuerer <peter@piie.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 035eb0cff0 upstream.
Some model quirks missed the corresponding capsrc_nids. This resulted in
non-working capture source selection.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3e85fd614c upstream.
When allocating the PCM buffer, use vmalloc_user() instead of vmalloc().
Otherwise, it would be possible for applications to play the previous
contents of the kernel memory to the speakers, or to read it directly if
the buffer is exported to userspace.
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 509426bd46 upstream.
adev->dma_mode stores the transfer mode value not UDMA mode number
so the condition in cmd64x_set_dmamode() is always true and the higher
UDMA clock is always selected. This can potentially result in data
corruption when UDMA33 device is used, when 40-wire cable is used or
when the error recovery code decides to lower the device speed down.
The issue was introduced in the commit 6a40da0 ("libata cmd64x: whack
into a shape that looks like the documentation") which goes back to
kernel 2.6.20.
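To illustrate the mix-up (a hedged sketch, not the exact hunk of the patch):
libata transfer-mode values such as XFER_UDMA_2 are 0x4x constants, so
comparing ->dma_mode against a bare UDMA mode number is always true.
static int needs_high_udma_clock(u8 dma_mode)
{
	/* Wrong: dma_mode holds 0x40..0x46 here, so this is always true:
	 *   return dma_mode > 2;
	 * Right: compare against the transfer-mode constant instead. */
	return dma_mode > XFER_UDMA_2;
}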
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 256ace9bbd upstream.
The clock turnaround code still doesn't work for several reasons:
- 'USE_DPLL' flag in 'ap->host->private_data' is never initialized
or updated, so the driver can only set the chip to the DPLL clock
mode, not the PCI mode;
- the driver doesn't serialize access to the channels depending on
the current clock mode like the vendor drivers, so the clock
turnaround is only executed "optionally", not always as it should be;
- the wrong ports are written to when hpt3x2n_set_clock() is called
for the secondary channel;
- hpt3x2n_set_clock() can inadvertently enable the disabled channels
when resetting the channel state machines.
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bb6eddf767 upstream.
Xiaotian Feng triggered a list corruption in the clock events list on
CPU hotplug and debugged the root cause.
If a CPU registers more than one per cpu clock event device, then only
the active clock event device is removed on CPU_DEAD. The unused
devices are kept in the clock events device list.
On CPU up the clock event devices are registered again, which means
that we list_add an already enqueued list_head. That results in list
corruption.
Resolve this by removing all devices which are associated to the dead
CPU on CPU_DEAD.
Reported-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 45a94d7cd4 upstream.
xsave_cntxt_init() does something like:
cpuid(0xd, ..); // find out what features FP/SSE/.. etc are supported
xsetbv(); // enable the features known to OS
cpuid(0xd, ..); // find out the size of the context for features enabled
Depending on what features get enabled in xsetbv(), value of the
cpuid.eax=0xd.ecx=0.ebx changes correspondingly (representing the
size of the context that is enabled).
As we don't have volatile keyword for native_cpuid(), gcc 4.1.2
optimizes away the second cpuid and the kernel continues to use
the cpuid information obtained before xsetbv(), ultimately leading to kernel
crash on processors supporting more state than the legacy FP/SSE.
Add "volatile" for native_cpuid().
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1261009542.2745.55.camel@sbs-t61.sc.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 48de68a40a upstream.
If transport_class_register fails we should unregister any
registered classes, or we will leak memory or other
resources.
I did a quick modprobe of scsi_transport_fc to test the
patch.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c982c368bb upstream.
A dio transfer always resets mdata->page_order to zero. This breaks
high-order pages previously allocated for non-dio transfers.
This patch adds reserved_page_order to the st_buffer structure to save
the page order for non-dio transfers.
http://bugzilla.kernel.org/show_bug.cgi?id=14563
When enlarge_buffer() allocates 524288 from 0, st uses six-order page
allocation. So mdata->page_order is 6 and frp_seg is 2.
After that, if st uses dio, sgl_map_user_pages() sets
mdata->page_order to 0 for st_do_scsi(). After that, when we call
normalize_buffer(), it frees only frp_seg * PAGE_SIZE (2 * 4096)
though we should free frp_seg * PAGE_SIZE << 6 (2 * 4096 << 6). So we
see buffer_size is set to 516096 (524288 - 8192).
Reported-by: Joachim Breuer <linux-kernel@jmbreuer.net>
Tested-by: Joachim Breuer <linux-kernel@jmbreuer.net>
Acked-by: Kai Makisara <kai.makisara@kolumbus.fi>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1486400f7e upstream.
Fix crash in qla2x00_fdmi_register() due to the dpc
thread executing before the scsi host has been fully
added.
Unable to handle kernel NULL pointer dereference (address 00000000000001d0)
qla2xxx_7_dpc[4140]: Oops 8813272891392 [1]
Call Trace:
[<a000000100016910>] show_stack+0x50/0xa0
sp=e00000b07c59f930 bsp=e00000b07c591400
[<a000000100017180>] show_regs+0x820/0x860
sp=e00000b07c59fb00 bsp=e00000b07c5913a0
[<a00000010003bd60>] die+0x1a0/0x2e0
sp=e00000b07c59fb00 bsp=e00000b07c591360
[<a0000001000681a0>] ia64_do_page_fault+0x8c0/0x9e0
sp=e00000b07c59fb00 bsp=e00000b07c591310
[<a00000010000c8e0>] ia64_native_leave_kernel+0x0/0x270
sp=e00000b07c59fb90 bsp=e00000b07c591310
[<a000000207197350>] qla2x00_fdmi_register+0x850/0xbe0 [qla2xxx]
sp=e00000b07c59fd60 bsp=e00000b07c591290
[<a000000207171570>] qla2x00_configure_loop+0x1930/0x34c0 [qla2xxx]
sp=e00000b07c59fd60 bsp=e00000b07c591128
[<a0000002071732b0>] qla2x00_loop_resync+0x1b0/0x2e0 [qla2xxx]
sp=e00000b07c59fdf0 bsp=e00000b07c5910c0
[<a000000207166d40>] qla2x00_do_dpc+0x9a0/0xce0 [qla2xxx]
sp=e00000b07c59fdf0 bsp=e00000b07c590fa0
[<a0000001000d5bb0>] kthread+0x110/0x140
sp=e00000b07c59fe00 bsp=e00000b07c590f68
[<a000000100014a30>] kernel_thread_helper+0xd0/0x100
sp=e00000b07c59fe30 bsp=e00000b07c590f40
[<a00000010000a4c0>] start_kernel_thread+0x20/0x40
sp=e00000b07c59fe30 bsp=e00000b07c590f40
crash> dis a000000207197350
0xa000000207197350 <qla2x00_fdmi_register+2128>: [MMI] ld1 r45=[r14];;
crash> scsi_qla_host.host 0xe00000b058c73ff8
host = 0xe00000b058c73be0,
crash> Scsi_Host.shost_data 0xe00000b058c73be0
shost_data = 0x0, <<<<<<<<<<<
The fc_transport fc_* workqueue threads have yet to be created.
crash> ps | grep _7
3891 2 2 e00000b075c80000 IN 0.0 0 0 [scsi_eh_7]
4140 2 3 e00000b07c590000 RU 0.0 0 0 [qla2xxx_7_dpc]
The thread adding the Scsi_Host is blocked due to other
activity in sysfs.
crash> bt 3762
PID: 3762 TASK: e00000b071e70000 CPU: 3 COMMAND: "modprobe"
#0 [BSP:e00000b071e71548] schedule at a000000100727e00
#1 [BSP:e00000b071e714c8] __mutex_lock_slowpath at a0000001007295a0
#2 [BSP:e00000b071e714a8] mutex_lock at a000000100729830
#3 [BSP:e00000b071e71478] sysfs_addrm_start at a0000001002584f0
#4 [BSP:e00000b071e71440] create_dir at a000000100259350
#5 [BSP:e00000b071e71410] sysfs_create_subdir at a000000100259510
#6 [BSP:e00000b071e713b0] internal_create_group at a00000010025c880
#7 [BSP:e00000b071e71388] sysfs_create_group at a00000010025cc50
#8 [BSP:e00000b071e71368] dpm_sysfs_add at a000000100425050
#9 [BSP:e00000b071e71310] device_add at a000000100417d90
#10 [BSP:e00000b071e712d8] scsi_add_host at a00000010045a380
#11 [BSP:e00000b071e71268] qla2x00_probe_one at a0000002071be950
#12 [BSP:e00000b071e71248] local_pci_probe at a00000010032e490
#13 [BSP:e00000b071e71218] pci_device_probe at a00000010032ecd0
#14 [BSP:e00000b071e711d8] driver_probe_device at a00000010041d480
#15 [BSP:e00000b071e711a8] __driver_attach at a00000010041d6e0
#16 [BSP:e00000b071e71170] bus_for_each_dev at a00000010041c240
#17 [BSP:e00000b071e71150] driver_attach at a00000010041d0a0
#18 [BSP:e00000b071e71108] bus_add_driver at a00000010041b080
#19 [BSP:e00000b071e710c0] driver_register at a00000010041dea0
#20 [BSP:e00000b071e71088] __pci_register_driver at a00000010032f610
#21 [BSP:e00000b071e71058] (unknown) at a000000207200270
#22 [BSP:e00000b071e71018] do_one_initcall at a00000010000a9c0
#23 [BSP:e00000b071e70f98] sys_init_module at a0000001000fef00
#24 [BSP:e00000b071e70f98] ia64_ret_from_syscall at a00000010000c740
So, it appears that the qla2xxx dpc thread is moving forward before the
scsi host has been completely added.
This patch moves the setting of the init_done (and online) flag to
after the call to scsi_add_host() to hold off the dpc thread.
Found via large lun count testing using 2.6.31.
Signed-off-by: Michael Reed <mdr@sgi.com>
Acked-by: Giridhar Malavali <giridhar.malavali@qlogic.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 99c965dd9e upstream.
After commits c82f63e411 (PCI: check saved
state before restore) and 4b77b0a2ba (PCI:
Clear saved_state after the state has been restored) PCI drivers are
prevented from restoring the device standard configuration registers
twice in a row. These changes introduced a regression on ipr EEH
recovery.
The ipr device driver saves the PCI state only during the device probe
and restores it on ipr_reset_restore_cfg_space() during IOA resets. This
behavior is causing the EEH recovery to fail after the second error
detected, since the registers are not being restored.
One possible solution would be saving the registers after restoring
them. The problem with this approach is that while recovering from an
EEH error if pci_save_state() results in an EEH error, the adapter/slot
will be reset, and end up back in ipr_reset_restore_cfg_space(), but it
won't have a valid saved state to restore, so pci_restore_state() will
fail.
The following patch introduces a workaround for this problem, hacking
around the PCI API by setting pdev->state_saved = true before we do the
restore. It fixes the EEH regression and prevents that we hit another
EEH error during EEH recovery.
[jejb: fix is a hack ... Jesse and Rafael will fix properly]
Signed-off-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
Acked-by: Brian King <brking@linux.vnet.ibm.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0f624e7e56 upstream.
It is quite legitimate for CPUs to be numbered sparsely, meaning
that it is possible for an online CPU to have a number which is
greater than the total count of possible CPUs.
Currently find_get_context() has a sanity check on the cpu
number where it checks it against num_possible_cpus(). This
test can fail for a legitimate cpu number if the
cpu_possible_mask is sparsely populated.
This fixes the problem by checking the CPU number against
nr_cpumask_bits instead, since that is the appropriate check to
ensure that the cpu number is safe to pass to cpu_isset()
subsequently.
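Sketched, the corrected sanity check looks like this (the error value is
illustrative). For example, a machine whose possible CPUs are {0, 2, 5} has
num_possible_cpus() == 3, yet CPU 5 is a perfectly valid number:
if (cpu < 0 || cpu >= nr_cpumask_bits)	/* bound by the mask width, not the count */
	return ERR_PTR(-EINVAL);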
Reported-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Tested-by: Michael Neuling <mikey@neuling.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20091215084032.GA18661@brick.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1672af1164 upstream.
We are seeing a bug when booting w/ iommu=pt with current upstream
(bisect blames 19943b0e30 "intel-iommu:
Unify hardware and software passthrough support).
The issue is specific to this loop during identity map initialization
of each device:
domain_context_mapping_one(si_domain, ..., CONTEXT_TT_PASS_THROUGH)
...
/* Skip top levels of page tables for
* iommu which has less agaw than default.
*/
for (agaw = domain->agaw; agaw != iommu->agaw; agaw--) {
pgd = phys_to_virt(dma_pte_addr(pgd));
if (!dma_pte_present(pgd)) { <------ failing here
spin_unlock_irqrestore(&iommu->lock, flags);
return -ENOMEM;
}
This box has 2 iommu's in it. The catchall iommu has MGAW == 48, and
SAGAW == 4. The other iommu has MGAW == 39, SAGAW == 2.
The device that's failing the above pgd test is the only device connected
to the non-catchall iommu, which has a smaller address width than the
domain default. This test is not necessary since the context is in PT
mode and the ASR is ignored.
Thanks to Don Dutile for discovering and debugging this one.
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 44cd613c0e upstream.
The hotplug notifier will call find_domain() to see if the device in
question has been assigned an IOMMU domain. However, this should never
be called for devices with a "dummy" domain, such as graphics devices
when intel_iommu=igfx_off is set and the corresponding IOMMU isn't even
initialised. If you do that, it'll oops as it dereferences the (-1)
pointer.
The notifier function should check iommu_no_mapping() for the
device before doing anything else.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5595b528b4 upstream.
Some HP BIOSes report an RMRR region (a region which needs a 1:1 mapping
in the IOMMU for a given device) which has an end address lower than its
start address. Detect that and warn, rather than triggering the
BUG() in dma_pte_clear_range().
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6ecbf01c7c upstream.
The BIOS errors where an IOMMU is reported either at zero or a bogus
address are causing problems even when the IOMMU is disabled -- because
interrupt remapping uses the same hardware. Ensure that the checks get
applied for the interrupt remapping initialisation too.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2c99220810 upstream.
Many BIOSes will lie to us about the existence of an IOMMU, and claim
that there is one at an address which actually returns all 0xFF.
We need to detect this early, so that we know we don't have a viable
IOMMU and can set up swiotlb before it's too late.
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2e16cfca6e upstream.
Ever since jffs2_garbage_collect_metadata() was first half-written in
February 2001, it's been broken on architectures where 'char' is signed.
When garbage collecting a symlink with target length above 127, the payload
length would end up negative, causing interesting and bad things to happen.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 258c889362 upstream.
Make sure that any otherwise uninitialised fields of usvc are zero.
This has been observed to cause a problem whereby the port of
fwmark services may end up as a non-zero value, which causes
scheduling of a destination server to fail for persistent services.
As observed by Deon van der Merwe <dvdm@truteq.co.za>.
This fix suggested by Julian Anastasov <ja@ssi.bg>.
For good measure also zero udest.
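The core of the fix, sketched (the exact variable types differ between the
netlink and sockopt paths): clear both structures before filling in only the
attributes that were actually supplied.
struct ip_vs_service_user_kern usvc;
struct ip_vs_dest_user_kern udest;
memset(&usvc, 0, sizeof(usvc));		/* port of fwmark services stays 0 */
memset(&udest, 0, sizeof(udest));	/* zero udest for good measure */
/* ... then parse the supplied attributes into usvc/udest ... */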
Cc: Deon van der Merwe <dvdm@truteq.co.za>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d99be1a8ec upstream.
When do_nonlinear_fault() realizes that the page table must have been
corrupted for it to have been called, it does print_bad_pte() and returns
... VM_FAULT_OOM, which is hard to understand.
It made some sense when I did it for 2.6.15, when do_page_fault() just
killed the current process; but nowadays it lets the OOM killer decide who
to kill - so page table corruption in one process would be liable to kill
another.
Change it to return VM_FAULT_SIGBUS instead: that doesn't guarantee that
the process will be killed, but is good enough for such a rare
abnormality, accompanied as it is by the "BUG: Bad page map" message.
And recent HWPOISON work has copied that code into do_swap_page(), when it
finds an impossible swap entry: fix that to VM_FAULT_SIGBUS too.
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Izik Eidus <ieidus@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Nick Piggin <npiggin@suse.de>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Cc: Andi Kleen <andi@firstfloor.org>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1b3c7a47f9 upstream.
In the disable sequence, all output ports on the PCH have to be disabled
before the PCH transcoder, but the LVDS port was always left enabled. This
patch fixes that by disabling the LVDS port properly during the pipe disable
process, which resolves a stability issue seen on Ironlake. Also move the
panel fitting disable to just after pipe disable to align with
the spec.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Upstream commit a2202aa292.
On platforms where bios handles the thermal monitor interrupt,
APIC_LVTTHMR on each logical CPU is programmed to generate an SMI and the OS
can't touch it.
Unfortunately the AP bringup sequence using INIT-SIPI-SIPI clears all
the LVT entries except the mask bit. Essentially this results in
all LVT entries, including the thermal monitoring interrupt, being set to masked
(clearing the bios-programmed value for APIC_LVTTHMR).
And this leads to the kernel taking over the thermal monitoring interrupt
on the APs but not on the BSP (leaving the bios-programmed value only on the BSP).
As a result of this, we have seen system hangs when the thermal
monitoring interrupt is generated.
Fix this by reading the initial value of the thermal LVT entry on the BSP
and, if the bios has taken over control, programming the same value
on all APs, leaving the thermal monitoring interrupt control
on all the logical cpus to the bios.
Signed-off-by: Yong Wang <yong.y.wang@intel.com>
Reviewed-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Arjan van de Ven <arjan@infradead.org>
LKML-Reference: <20091110013824.GA24940@ywang-moblin2.bj.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a3f92eea04 upstream.
This patch converts bcm63xx_enet to use get_sset_count
like the other drivers do.
Signed-off-by: Florian Fainelli <ffainelli@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 68eb3db083 upstream.
When ext3_write_begin fails after allocating some blocks or
generic_perform_write fails to copy data to write, we truncate blocks already
instantiated beyond i_size. Although these blocks were never inside i_size, we
have to truncate pagecache of these blocks so that corresponding buffers get
unmapped. Otherwise subsequent __block_prepare_write (called because we are
retrying the write) will find the buffers mapped, not call ->get_block, and
thus the page will be backed by already freed blocks leading to filesystem and
data corruption.
Reported-by: James Y Knight <foom@fuhm.net>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d90a909e1f upstream.
I received some bug reports about userspace programs having problems
because after RTM_NEWLINK was received they could not immediately
access files under /proc/sys/net/ because the files had not been
registered yet.
The problem was trivially fixed by moving the userspace
notification from rtnetlink_event to the end of register_netdevice.
Signed-off-by: Eric W. Biederman <ebiederm@aristanetworks.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 03a05ed115 upstream.
Currently, ARB_DISABLE is a NOP on all of the recent Intel platforms.
For such platforms, reduce contention on c3_lock by skipping the fake
ARB_DISABLE.
The cpu model id on one affected laptop is 14. If we skip ARB_DISABLE on this
box, it can't be booted correctly, but if we keep ARB_DISABLE enabled, it
boots fine.
So we still use ARB_DISABLE for CPUs whose model id is less than 0x0f.
http://bugzilla.kernel.org/show_bug.cgi?id=14700
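A minimal sketch of the resulting model check, assuming the usual shape of the ACPI bus-master handling setup; the function name is made up and the exact condition is inferred from the text above, not copied from the patch:
#include <asm/processor.h>
#include <acpi/processor.h>

/* Skip the fake ARB_DISABLE only on Intel CPUs that are new enough:
 * family > 0xf, or family 6 with model id >= 0x0f. A model id of 14
 * (0x0e) therefore keeps ARB_DISABLE, matching the laptop above. */
void maybe_skip_arb_disable(struct cpuinfo_x86 *c,
			    struct acpi_processor_flags *flags)
{
	if (c->x86_vendor == X86_VENDOR_INTEL &&
	    (c->x86 > 0xf || (c->x86 == 6 && c->x86_model >= 0x0f)))
		flags->bm_control = 0;	/* ARB_DISABLE is a NOP here */
}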
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
No matching upstream commit as it was resolved differently there.
pcpu_get_vm_areas() is used only when dynamic percpu allocator is used
by the architecture. In 2.6.32, ia64 doesn't use dynamic percpu
allocator and has a macro which makes pcpu_get_vm_areas() buggy via
local/global variable aliasing and triggers compile warning.
The problem is fixed upstream and ia64 now uses the dynamic percpu
allocator, so the only remaining issue is the inclusion of unnecessary code
and a compile warning on ia64 on 2.6.32.
Don't build pcpu_get_vm_areas() if legacy percpu allocator is in use.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jan Beulich <JBeulich@novell.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3606574636 upstream.
Added new BIOS versions for following netbooks: Acer 1410, Gateway LT31,
Packard Bell DOA150. As the Gateway LT31 machines have different register
values for setting and checking the off-state, the "cmd_off" variable has
been split up into "cmd_off" and "chk_off".
Signed-off-by: Peter Feuerer <peter@piie.net>
Cc: Borislav Petkov <petkovbb@gmail.com>
Cc: Andreas Mohr <andi@lisas.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 208b996b6c upstream.
Since the rfkill rework in 2.6.31, the driver is always resuming with
the radios disabled.
Change thinkpad-acpi to ask the firmware to resume with the radios in
the last state. This fixes the Bluetooth and WWAN rfkill switches.
Note that it means we respect the firmware's oddities. Should the
user toggle the hardware rfkill switch on and off, it might cause the
radios to resume enabled.
UWB is an unknown quantity since it has nowhere near the same level of
firmware support (no control over state storage in NVRAM, for
example), and might need further fixing. Testers welcome.
This change fixes a regression from 2.6.30.
Reported-by: Jerone Young <jerone.young@canonical.com>
Reported-by: Ian Molton <ian.molton@collabora.co.uk>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Tested-by: Ian Molton <ian.molton@collabora.co.uk>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a9f8eacca4 upstream.
According to a report, the R50e wants EC-based brightness control,
even if it uses an Intel GPU. The current driver default was reported
to not work at all.
This bug can be worked around by the "brightness_mode=3" module
parameter.
Change the default of the R50e and R51 2xxx models (which use the same
EC firmware, 1V) to TPACPI_BRGHT_Q_EC, but keep TPACPI_BRGHT_Q_ASK set
for now, as I'd like to get more reports.
This fixes a regression caused by commit
59fe4fe34d,
"thinkpad-acpi: fix incorrect use of TPACPI_BRGHT_MODE_ECNVRAM"
Kernel 2.6.31 also needs this fix.
Reported-by: Ferenc Wagner <wferi@niif.hu>
Tested-by: Ferenc Wagner <wferi@niif.hu>
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: stable@kernel.org
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cd9b45b78a upstream.
A memory cgroup has a memory.memsw.usage_in_bytes file. It shows the sum
of the usage of pages and swapents in the cgroup. Presently the root
cgroup's memsw.usage_in_bytes shows the wrong value - the number of
swapents is not added.
So take MEM_CGROUP_STAT_SWAPOUT into account.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 778c902640 upstream
In current vblank-wait implementation, if we turn off VGA output,
drm_wait_vblank will still wait on the disabled pipe until timeout,
because vblank on the pipe is assumed to be enabled. This would cause
slow system response on some systems such as moblin.
This patch resolves the issue by adding a drm helper function
drm_vblank_off which explicitly clears vblank_enabled[crtc], wakes up
any waiting queue and saves the last vblank counter before turning off the
crtc. It also slightly changes drm_vblank_get to ensure that we
return immediately if trying to wait on a disabled pipe.
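A minimal sketch of what such a helper looks like, reconstructed from the description above; the lock, queue and counter field names follow drm_irq.c conventions of that era and are assumptions, not necessarily the exact upstream code:
#include "drmP.h"

/* Mark the crtc's vblank as off: wake anyone sleeping in drm_wait_vblank,
 * clear the enabled flag so drm_vblank_get can bail out early, and save
 * the last hardware counter value for when the crtc comes back. */
void drm_vblank_off(struct drm_device *dev, int crtc)
{
	unsigned long irqflags;

	spin_lock_irqsave(&dev->vbl_lock, irqflags);
	DRM_WAKEUP(&dev->vbl_queue[crtc]);
	dev->vblank_enabled[crtc] = 0;
	dev->last_vblank[crtc] = dev->driver->get_vblank_counter(dev, crtc);
	spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
}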
Signed-off-by: Li Peng <peng.li@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[anholt: hand-applied for conflicts with overlay changes]
Signed-off-by: Eric Anholt <eric@anholt.net>
Cc: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Upstream commit: 7c3f4bbedc
Not only ps_sdata but also IEEE80211_CONF_PS is to be considered
before restoring PS in scan_ps_disable(). For instance, when ps_sdata
is set but CONF_PS is not set just because the dynamic timer is still
running, a sw scan leads to setting of CONF_PS in scan_ps_disable
instead of restarting the dynamic PS timer.
Also for the above case, a null data frame is to be sent after
returning to the operating channel, which was not happening with the
current implementation. This patch fixes this too.
Signed-off-by: Vivek Natarajan <vnatarajan@atheros.com>
Reviewed-by: Kalle Valo <kalle.valo@nokia.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is a backport of upstream commit: e8c6342d989e241513baeba4b05a04b6b1f3bc8b
This patch fixes a bug in ath9k's tx status check, which
caused mac80211 to consider regularly transmitted unicast frames
as un-acked.
When checking the ts_status field for errors, it needs to be masked
with ATH9K_TXERR_FILT, because this field also contains other fields
like ATH9K_TX_ACKED.
Without this patch, AP mode is pretty much unusable, as hostapd
checks the ACK status for the frames that it injects.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is a backport of upstream commit: f4709fdf68
Atheros single stream AR9285 and AR9271 have half the PCU TX FIFO
buffer size of that of dual stream devices. Dual stream devices
have a max PCU TX FIFO size of 8 KB while single stream devices
have 4 KB. Single stream devices have an issue, though, and require the
hardware to use only half of its capable PCU TX FIFO
size, 2 KB, and this requires a change in software.
Technically a change would not have been required (except for frame
burst considerations of 128 bytes) if these devices had been
able to use the full 4 KB of the PCU TX FIFO size, but our systems
engineers recommend using only 2 KB. We enforce this through
software by reducing the max frame trigger level to 2 KB.
Fixing the max frame trigger level should then have a few benefits:
* The PER will now be adjusted as designed for underruns when the
max trigger level is reached. This should help alleviate the
bus as the rate control algorithm chooses a slower rate which
should ensure frames are transmitted properly under high system
bus load.
* The poll we use on our TX queues should now trigger and work
as designed for single stream devices. The hardware passes
data from each TX queue on the PCU TX FIFO queue respecting each
queue's priority. The new trigger level ensures this seeding of
the PCU TX FIFO queue occurs as designed which could mean avoiding
false resets and actually resetting hw correctly when a TX queue
is indeed stuck.
* Some undocumented / unsupported behaviour could have been triggered
when the max trigger level was being set to 4 KB on single
stream devices. It's not yet clear to me what this issue was.
Cc: Kyungwan Nam <kyungwan.nam@atheros.com>
Cc: Bennyam Malavazi <bennyam.malavazi@atheros.com>
Cc: Stephen Chen <stephen.chen@atheros.com>
Cc: Shan Palanisamy <shan.palanisamy@atheros.com>
Cc: Paul Shaw <paul.shaw@atheros.com>
Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com>
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is a backport of upstream commit: e7824a5066
When mac80211 was telling us to go into Powersave we listened
and immediately turned RX off. This meant hardware would not
see the ACKs from the AP we're associated with, and
we'd end up retransmitting the null data frame in a loop
helplessly.
Fix this by keeping track of the transmitted nullfunc frames
and only when we are sure the AP has sent back an ACK do we
go ahead and shut RX off.
Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com>
Signed-off-by: Vivek Natarajan <Vivek.Natarajan@atheros.com>
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is a backport of upstream commit: 332c556633
When TX is hung, the chip is reset. Ensure that
the chip is awake by using the PS wrappers.
Signed-off-by: Sujith <Sujith.Manoharan@atheros.com>
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 811cb50baf upstream.
For some reason the export of the event print format to userspace
uses '#fmt' which breaks if the format string is anything but a plain
string, for example if it is built with macros then the macro names
are exported instead of their contents.
Use
"\"%s\"", fmt
instead of
"%s", #fmt
to export the string and not the way it is built.
For example, in net/mac80211/driver-trace.h for the trace event drv_start
there is:
TP_printk(
LOCAL_PR_FMT, LOCAL_PR_ARG
)
Which used to produce:
print fmt: LOCAL_PR_FMT, REC->wiphy_name
Now produces:
print fmt: "%s", REC->wiphy_name
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
LKML-Reference: <20091113224009.GB23942@elte.hu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 316a4d966c upstream.
For PPC architecture with PHY Revision < 3, a read of the register
B43_MMIO_HWENABLED_LO will cause a CPU fault unless b43legacy_status()
returns a value of 2 (B43legacy_STAT_STARTED); however, one finds that
the driver is unable to associate after resuming from hibernation unless
this routine returns 1. To satisfy both conditions, the routine is rewritten
to return TRUE whenever b43legacy_status() returns a value < 2.
This patch fixes the second problem listed in the postings for Red Hat
Bugzilla #538523.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 7f5620a5fc ]
"ARCH" can be just about anything, so we shouldn't end up
with UTS_MACHINE of "sparc" in a 64-bit kernel build just
because someone set the personality using 'sparc32' or
similar. CONFIG_SPARC64 drives the compilation and
therefore provides the definitive value, not "ARCH".
This mirrors commit 8c6531f7a9
(x86: correctly set UTS_MACHINE for "make ARCH=x86")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 166e553a57 ]
Commit 4f70f7a91b
(sparc64: Implement IRQ stacks.) has two bugs.
First, the softirq range check forgets to subtract STACK_BIAS
before comparing with %sp. Next, on failure the wrong label
is jumped to, resulting in a bogus stack being loaded.
Reported-by: Igor Kovalenko <igor.v.kovalenko@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 4230fa3b89 ]
When we are trying to see if a range property entry applies
to a given address, we are overly strict about the type.
We should only allow I/O ranges for I/O addresses, and only allow
CONFIG space ranges for CONFIG space address.
However for MEM ranges, they come in 32-bit and 64-bit flavors.
And a lack of an exact match is OK if the range is 32-bit and
the address is 64-bit. We can assign a 64-bit address properly
into a 32-bit parent range just fine.
So allow it.
Reported-by: Patrick Finnegan <pat@computer-refuge.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 08a036d583 ]
IRQF_SHARED and IRQF_DISABLED don't mix.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit: e0188829cb ]
About 50% of shutdowns of the b44 Ethernet adapter end with a kernel panic
on kernels compiled with stack-protector.
Checking b44_magic_pattern() return values, one call of
b44_magic_pattern() returns 127. It means, that set_bit(128, pmask)
was called on line 1509. It means that bit 0 of 17th byte of pmask was
overwritten. But pmask has only 16 bytes. Stack corruption happens.
It seems that set_bit() on line 1509 always writes one bit off.
The fix not only solves the stack corruption, but also makes Wake
On LAN work on my onboard B44 on an Asus A7V-333X mainboard.
It seems that this problem affects all kernel versions since commit
725ad800 ([PATCH] b44: add wol for old nic) on 2006-06-20.
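A tiny stand-alone illustration of the arithmetic (not the b44 code): a 16-byte mask holds bits 0 through 127, so bit index 128 lands in a 17th byte that does not exist.
#include <stdio.h>

int main(void)
{
	const int mask_bytes = 16;	/* size of the WOL pattern mask */
	int bit = 128;			/* index the buggy code passed */

	/* set_bit(bit, pmask) touches byte bit / 8 of the mask, so bit 128
	 * needs byte 16 while only bytes 0..15 exist: one bit past the end,
	 * which corrupts whatever follows the mask on the stack. */
	printf("mask has %d bytes, bit %d needs byte %d\n",
	       mask_bytes, bit, bit / 8);
	return 0;
}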
Signed-off-by: Stanislav Brabec <sbrabec@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit b2722b1c3a ]
When a large packet gets reassembled by ip_defrag(), the head skb
accounts for all the fragments in skb->truesize. If this packet is
refragmented again, skb->truesize is not re-adjusted to reflect only
the head size since it's not owned by a socket. If the head fragment
then gets recycled and reused for another received fragment, it might
exceed the defragmentation limits due to its large truesize value.
skb_recycle_check() explicitly checks for linear skbs, so any recycled
skb should reflect its true size in skb->truesize. Change ip_fragment()
to also adjust the truesize value of skbs not owned by a socket.
Reported-and-tested-by: Ben Menchaca <ben@bigfootnetworks.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[ Upstream commit 07f29bc5bb ]
This patch fixes a problem in the TCP connection timeout calculation.
Currently, timeout decisions are made on the basis of the current
tcp_time_stamp and retrans_stamp, which is usually set at the first
retransmission.
However, if the retransmission fails in tcp_retransmit_skb(),
retrans_stamp is not updated and remains zero. This leads to wrong
decisions in retransmits_timed_out() if tcp_time_stamp is larger than
the specified timeout, which is very likely.
In this case, the TCP connection dies after the first attempted
(and unsuccessful) retransmission.
With this patch, tcp_skb_cb->when is used instead, when retrans_stamp
is not available.
This bug has been introduced together with retransmits_timed_out() in
2.6.32, as the number of retransmissions has been used for timeout
decisions before. The corresponding commit was
6fa12c8503 (Revert Backoff [v3]:
Calculate TCP's connection close threshold as a time value.).
Thanks to Ilpo Järvinen for code suggestions and Frederic Leroy for
testing.
Reported-by: Frederic Leroy <fredo@starox.org>
Signed-off-by: Damian Lukowski <damian@tvk.rwth-aachen.de>
Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 542da31766 upstream.
The "wipe key" message is used to wipe the volume key from memory
temporarily, for example when suspending to RAM.
But the initialisation vector in ESSIV mode is calculated from the
hashed volume key, so the wipe message should wipe this IV key too and
reinitialise it when the volume key is reinstated.
This patch adds an IV wipe method called from a wipe message callback.
ESSIV is then reinitialised using the init function added by the
last patch.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b95bf2d3d5 upstream.
This patch separates the construction of IV from its initialisation.
(For ESSIV it is a hash calculation based on volume key.)
Constructor code now preallocates hash tfm and salt array
and saves it in a private IV structure.
The next patch requires this to reinitialise the wiped IV
without reallocating memory when resuming a suspended device.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8e87b9b81b upstream.
Under some special conditions the snapshot hash_size is calculated as zero.
This patch instead sets a minimum value of 64, the same as for the
pending exception table.
rounddown_pow_of_two(0) is an undefined operation (it expands to shift
by -1). init_exception_table with an argument of 0 would fail with -ENOMEM.
The way to trigger the problem is to create a snapshot with a chunk size
that is larger than the origin device.
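A small stand-alone illustration of the clamp described above; the helper below only mimics rounding down to a power of two, and the floor of 64 is the one detail taken from the description, so treat the rest as illustrative:
#include <stdio.h>

/* Round down to a power of two by clearing low set bits; like the real
 * macro, the result is meaningless for n == 0. */
static unsigned long rounddown_pow_of_two_demo(unsigned long n)
{
	while (n & (n - 1))
		n &= n - 1;
	return n;
}

int main(void)
{
	unsigned long hash_size = 0;	/* chunk size larger than the origin */

	if (hash_size < 64)
		hash_size = 64;		/* the minimum the fix imposes */
	printf("hash_size = %lu\n", rounddown_pow_of_two_demo(hash_size));
	return 0;
}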
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6076905b5e upstream.
Fix a reported deadlock if there are still unprocessed multipath events
on a device that is being removed.
_hash_lock is held during dev_remove while trying to send the
outstanding events. Sending the events requests the _hash_lock
again in dm_copy_name_and_uuid.
This patch introduces a separate lock around regions that modify the
link to the hash table (dm_set_mdptr) or the name or uuid so that
dm_copy_name_and_uuid no longer needs _hash_lock.
Additionally, dm_copy_name_and_uuid can only be called if md exists
so we can drop the dm_get() and dm_put() which can lead to a BUG()
while md is being freed.
The deadlock:
#0 [ffff8106298dfb48] schedule at ffffffff80063035
#1 [ffff8106298dfc20] __down_read at ffffffff8006475d
#2 [ffff8106298dfc60] dm_copy_name_and_uuid at ffffffff8824f740
#3 [ffff8106298dfc90] dm_send_uevents at ffffffff88252685
#4 [ffff8106298dfcd0] event_callback at ffffffff8824c678
#5 [ffff8106298dfd00] dm_table_event at ffffffff8824dd01
#6 [ffff8106298dfd10] __hash_remove at ffffffff882507ad
#7 [ffff8106298dfd30] dev_remove at ffffffff88250865
#8 [ffff8106298dfd60] ctl_ioctl at ffffffff88250d80
#9 [ffff8106298dfee0] do_ioctl at ffffffff800418c4
#10 [ffff8106298dff00] vfs_ioctl at ffffffff8002fab9
#11 [ffff8106298dff40] sys_ioctl at ffffffff8004bdaf
#12 [ffff8106298dff80] tracesys at ffffffff8005d28d (via system_call)
Reported-by: guy keren <choo@actcom.co.il>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5861f1be00 upstream.
Use kzfree for salt deallocation because it is derived from the volume
key. Use a common error path in ESSIV constructor.
Required by a later patch which fixes the way key material is wiped
from memory.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6047359277 upstream.
Define private structures for IV so it's easy to add further attributes
in a following patch which fixes the way key material is wiped from
memory. Also move ESSIV destructor and remove unnecessary 'status'
operation.
There are no functional changes in this patch.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94e76572b5 upstream.
Take snapshot lock only for STATUSTYPE_INFO, not STATUSTYPE_TABLE.
Commit 4c6fff445d
(dm-snapshot-lock-snapshot-while-supplying-status.patch)
introduced this use of the lock, but userspace applications using
libdevmapper have been found to request STATUSTYPE_TABLE while the device
is suspended and the lock is already held, leading to deadlock. Since
the lock is not necessary in this case, don't try to take it.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 613978f871 upstream.
Error handling code following a kmalloc should free the allocated data.
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bc2c030322 upstream.
Currently if the balloon driver is unable to increase the guest's
reservation it assumes the failure was due to reaching its full
allocation, gives up on the ballooning operation and records the limit
it reached as the "hard limit". The driver will not try again until
the target is set again (even to the same value).
However it is possible that ballooning has in fact failed due to
memory pressure in the host and therefore it is desirable to keep
attempting to reach the target in case memory becomes available. The
most likely scenario is that some guests are ballooning down while
others are ballooning up and therefore there is temporary memory
pressure while things stabilise. You would not expect a well behaved
toolstack to ask a domain to balloon to more than its allocation nor
would you expect it to deliberately over-commit memory by setting
balloon targets which exceed the total host memory.
This patch drops the concept of a hard limit and causes the balloon
driver to retry increasing the reservation on a timer in the same
manner as when decreasing the reservation.
Also if we partially succeed in increasing the reservation
(i.e. receive fewer pages than we asked for) then we may as well keep
those pages rather than returning them to Xen.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3d65c9488c upstream.
Change totalram_pages when a single page is added/removed to the
ballooned list. This avoids totalram_pages being set erroneously to
max_pfn at boot.
Signed-off-by: Gianluca Guida <gianluca.guida@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b4606f2165 upstream.
I have observed cases where the implicit stop_machine_destroy() done by
stop_machine() hangs while destroying the workqueues, specifically in
kthread_stop(). This seems to be because timer ticks are not restarted
until after stop_machine() returns.
Fortunately stop_machine provides a facility to pre-create/post-destroy
the workqueues so use this to ensure that workqueues are only destroyed
after everything is really up and running again.
I only actually observed this failure with 2.6.30. It seems that newer
kernels are somehow more robust against doing kthread_stop() without timer
interrupts (I tried some backports of some likely looking candidates but
did not track down the commit which added this robustness). However this
change seems like a reasonable belt&braces thing to do.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6aaf5d633b upstream.
If Xen wants to return to a 32b usermode with sysret it must use the
right form. When using VGCF_in_syscall to trigger this, it looks at
the code segment and does a 32b sysret if it is FLAT_USER_CS32.
However, this is different from __USER32_CS, so it fails to return
properly if we use the normal Linux segment.
So avoid the whole mess by dropping VGCF_in_syscall and simply use
plain iret to return to usermode.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fed5ea87e0 upstream.
On resume irq_info[*].evtchn is reset to 0 since event channel mappings
are not preserved over suspend/resume. The other contents of irq_info
are preserved to allow rebind_evtchn_irq() to function.
However when a device resumes it will try to unbind from the
previous IRQ (e.g. blkfront goes blkfront_resume() -> blkif_free() ->
unbind_from_irqhandler() -> unbind_from_irq()). This will fail due to the
check for VALID_EVTCHN in unbind_from_irq() and the IRQ is leaked. The
device will then continue to resume and allocate a new IRQ, eventually
leading to find_unbound_irq() panic()ing.
Fix this by changing unbind_from_irq() to handle teardown of interrupts
which have type!=IRQT_UNBOUND but are not currently bound to a specific
event channel.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 65f63384b3 upstream.
The existing error handling has a few issues:
- If freeze_processes() fails it exits with shutting_down = SHUTDOWN_SUSPEND.
- If dpm_suspend_noirq() fails it exits without resuming xenbus.
- If stop_machine() fails it exits without resuming xenbus or calling
dpm_resume_end().
- xs_suspend()/xs_resume() and dpm_suspend_noirq()/dpm_resume_noirq() were not
nested in the obvious way.
Fix by ensuring each failure case goto's the correct label. Treat a failure of
stop_machine() as a cancelled suspend in order to follow the correct resume
path.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f6eafe3665 upstream.
tick_resume() is never called on secondary processors. Presumably this
is because they are offlined for suspend on native and so this is
normally taken care of in the CPU onlining path. Under Xen we keep all
CPUs online over a suspend.
This patch papers over the issue for me but I will investigate a more
generic, less hacky, way of doing the same.
tick_suspend is also only called on the boot CPU which I presume should
be fixed too.
Signed-off-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 499d19b82b upstream.
printk timestamping uses sched_clock, which in turn relies on runstate
info under Xen. So make sure we set it up before any printks can
be called.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 922cc38ab7 upstream.
dpm_resume_noirq() takes a mutex, so it can't be called from a no-interrupt
context. Don't call it from within the stop-machine function, but just
afterwards, since we're resuming anyway, regardless of what happened.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 028896721a upstream.
The commit "xen: re-register runstate area earlier on resume" caused us
to never try and setup the runstate area for secondary CPUs. Ensure that
we do this...
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f350c7922f upstream.
Otherwise the timer is disabled by dpm_suspend_noirq() which in turn prevents
correct operation of stop_machine on multi-processor systems and breaks
suspend.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fa24ba62ea upstream.
pvops kernels >= 2.6.30 can currently only be saved and restored once. The
second attempt to save results in:
ERROR Internal error: Frame# in pfn-to-mfn frame list is not in pseudophys
ERROR Internal error: entry 0: p2m_frame_list[0] is 0xf2c2c2c2, max 0x120000
ERROR Internal error: Failed to map/save the p2m frame list
I finally narrowed it down to:
commit cdaead6b4e
Author: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Date: Fri Feb 27 15:34:59 2009 -0800
xen: split construction of p2m mfn tables from registration
Build the p2m_mfn_list_list early with the rest of the p2m table, but
register it later when the real shared_info structure is in place.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
The unforeseen side-effect of this change was to cause the mfn list list to not
be rebuilt on resume. Prior to this change it would have been rebuilt via
xen_post_suspend() -> xen_setup_shared_info() -> xen_setup_mfn_list_list().
Fix by explicitly calling xen_build_mfn_list_list() from xen_post_suspend().
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3905bb2aa7 upstream.
Even if have_vcpu_info_placement is not set, we still need to set up
the runstate area on each resumed vcpu.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit be012920ec upstream.
This is necessary to ensure the runstate area is available to
xen_sched_clock before any calls to printk which will require it in
order to provide a timestamp.
I chose to pull the xen_setup_runstate_info out of xen_time_init into
the caller in order to maintain parity with calling
xen_setup_runstate_info separately from calling xen_time_resume.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c3a73ba13b upstream.
drm/ttm fails to build on MIPS because "struct page" is not known:
| In file included from drivers/gpu/drm/ttm/ttm_memory.c:28:
| include/drm/ttm/ttm_memory.h:154: warning: 'struct page' declared inside parameter list
| include/drm/ttm/ttm_memory.h:154: warning: its scope is only this definition or declaration, which is probably not what you want
| include/drm/ttm/ttm_memory.h:156: warning: 'struct page' declared inside parameter list
| drivers/gpu/drm/ttm/ttm_memory.c:540: error: conflicting types for 'ttm_mem_global_alloc_page'
| include/drm/ttm/ttm_memory.h:154: error: previous declaration of 'ttm_mem_global_alloc_page' was here
| drivers/gpu/drm/ttm/ttm_memory.c:561: error: conflicting types for 'ttm_mem_global_free_page'
| include/drm/ttm/ttm_memory.h:156: error: previous declaration of 'ttm_mem_global_free_page' was here
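A generic illustration of the usual cure for this class of warning (a hypothetical header, not the actual ttm_memory.h; whether the real fix used a forward declaration or an #include is not stated above):
#ifndef EXAMPLE_MEMORY_H
#define EXAMPLE_MEMORY_H

/* A forward declaration is enough when prototypes only take pointers to
 * the type; without it, 'struct page' first appears inside a parameter
 * list and gets a scope limited to that declaration, which later clashes
 * with the definition in the .c file. */
struct page;
struct example_mem_global;

int  example_mem_global_alloc_page(struct example_mem_global *glob,
				   struct page *page);
void example_mem_global_free_page(struct example_mem_global *glob,
				  struct page *page);

#endif /* EXAMPLE_MEMORY_H */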
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e090aa8032 upstream.
e821ea70f3 introduced a bug by copying
some 64-bit originated code as-is to be used by both 32 and 64-bit
but this code contains a 64-bit only "cmpdi" instruction.
This changes it to cmpwi, which is fine since VRSAVE can only contain
a 32-bit value anyway.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1496e89ae2 upstream.
In commit 0512a9a8e2, we unilaterally zero the
"pwm invert" bit in the fan behavior configuration register. On my PowerBook
G4, this results in the fans going to full speed at low temperature and
shutting off at high temperature because the pwm invert bit is supposed to be
set.
Therefore, record the pwm invert bit at driver load time, and write the bit
into the fan behavior control register. This restores correct behavior on my
PBG4 and should work around the bit being set to the wrong value after
suspend/resume (which is what the original patch was trying to fix). It also
fixes a minor omission where the pwm invert bit correction is NOT performed
when switching into automatic mode.
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 529586dc39 upstream.
Windfarm SMU control is explicitly missing support for a second CPU pump in G5 PowerMacs. Such machines actually exist (specifically Quads with a second pump), so this patch adds detection for it.
Signed-off-by: Bolko Maass <bmaass@math.uni-bremen.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d33b9f45bd upstream.
Most callers of pmd_none_or_clear_bad() check whether the target page is
in a hugepage or not, but walk_page_range() does not check it. So if we
read /proc/pid/pagemap for a hugepage on an x86 machine, the hugepage
memory is leaked as shown below. This patch fixes it.
Details
=======
My test program (leak_pagemap) works as follows:
- creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages),
- read()/write() something on it,
- call page-types with option -p (walk around the page tables),
- munmap() and unlink() the file on hugetlbfs
Without my patches
------------------
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ./leak_pagemap
[snip output]
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 900
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ls /hugetlbfs/
$
100 hugepages are accounted as used while there is no file on hugetlbfs.
With my patches
---------------
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ./leak_pagemap
[snip output]
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ls /hugetlbfs
$
No memory leaks.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4f16fc107d upstream.
Most callers of pmd_none_or_clear_bad() check whether the target page is
in a hugepage or not, but mincore() and walk_page_range() do not check it.
So if we use mincore() on a hugepage on an x86 machine, the hugepage memory
is leaked as shown below. This patch fixes it by extending mincore()
system call to support hugepages.
Details
=======
My test program (leak_mincore) works as follows:
- creat() and mmap() a file on hugetlbfs (file size is 200MB == 100 hugepages),
- read()/write() something on it,
- call mincore() for first ten pages and printf() the values of *vec
- munmap() and unlink() the file on hugetlbfs
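A hedged reconstruction of such a test program, pieced together from the steps listed above; the mount path, hugepage size and the absence of error handling are assumptions, and this is not the original leak_mincore:
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)
#define FILE_SIZE	(100 * HPAGE_SIZE)	/* 200MB == 100 hugepages */
#define PATH		"/hugetlbfs/leak_mincore"

int main(void)
{
	unsigned char vec[10];
	int i, fd = open(PATH, O_CREAT | O_RDWR, 0600);
	char *p = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);

	memset(p, 1, FILE_SIZE);		/* touch every hugepage */
	mincore(p, 10 * getpagesize(), vec);	/* query the first ten pages */
	for (i = 0; i < 10; i++)
		printf("vec[%d] %d\n", i, vec[i] & 1);

	munmap(p, FILE_SIZE);
	close(fd);
	unlink(PATH);
	return 0;
}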
Without my patch
----------------
$ cat /proc/meminfo| grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ./leak_mincore
vec[0] 0
vec[1] 0
vec[2] 0
vec[3] 0
vec[4] 0
vec[5] 0
vec[6] 0
vec[7] 0
vec[8] 0
vec[9] 0
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 999
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ls /hugetlbfs/
$
Return values in *vec from mincore() are set to 0, while the hugepage
should be in memory, and 1 hugepage is still accounted as used while
there is no file on hugetlbfs.
With my patch
-------------
$ cat /proc/meminfo| grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ./leak_mincore
vec[0] 1
vec[1] 1
vec[2] 1
vec[3] 1
vec[4] 1
vec[5] 1
vec[6] 1
vec[7] 1
vec[8] 1
vec[9] 1
$ cat /proc/meminfo |grep "HugePage"
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0
$ ls /hugetlbfs/
$
Return value in *vec set to 1 and no memory leaks.
[akpm@linux-foundation.org: cleanup]
[akpm@linux-foundation.org: build fix]
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a946d8f11f upstream.
apic_noop is used to provide dummy apic functions. It's installed
when the CPU has no APIC or when the APIC is disabled on the kernel
command line.
The apic_noop implementation of apic_write() warns when the CPU has
an APIC or when the APIC is not disabled.
That's bogus. The warning should only happen when the CPU has an
APIC _AND_ the APIC is not disabled. apic_noop.apic_read() has the
correct check.
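A minimal sketch of the corrected condition, mirroring what the text says the read side already does; the surrounding noop write handler is abridged and its exact shape here is an assumption:
#include <linux/kernel.h>
#include <asm/apic.h>

static void noop_apic_write(u32 reg, u32 v)
{
	/* Only complain if an APIC is present AND it was not disabled on
	 * the kernel command line; either condition alone is expected. */
	WARN_ON_ONCE(cpu_has_apic && !disable_apic);
}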
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
LKML-Reference: <alpine.LFD.2.00.0912071255420.3089@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 70d57139f9 upstream.
There are different bits used to convey the setting of the rfkill
switch to the driver. The current driver only supports one of these
possibilities. These changes were derived from the latest version
of the vendor driver.
This patch fixes the regression noted in kernel Bugzilla #14743.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Reported-and-tested-by: Antti Kaijanmäki <antti@kaijanmaki.net>
Tested-by: Hin-Tak Leung <hintak.leung@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 19deffbeba upstream.
This part was missed in "cfg80211: implement get_wireless_stats",
probably because sta_set_sinfo already existed and was only handling
dBm signals.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6d3560d4fc upstream.
Since sometimes mac80211 queues up a scan request
to only act on it later, it must be allowed to
(internally) cancel a not-yet-running scan, e.g.
when the interface is taken down. This condition
was missing since we always checked only the
local->scanning variable which isn't yet set in
that situation.
Reported-by: Luis R. Rodriguez <mcgrof@gmail.com>
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7b324d28a9 upstream.
The patch ("mac80211: Use correct sign for mesh active path
refresh.") was actually a bug. Reverted it and improved the
explanation of how mesh path refresh works.
Signed-off-by: Javier Cardona <javier@cozybit.com>
Signed-off-by: Andrey Yurovsky <andrey@cozybit.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5d618cb81a upstream.
Paths to mesh portals were being timed out immediately after each use in
intermediate forwarding nodes. mppath->exp_time is set to the expiration time
so assigning it to jiffies was marking the path as expired.
Signed-off-by: Javier Cardona <javier@cozybit.com>
Signed-off-by: Andrey Yurovsky <andrey@cozybit.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1814077fd1 upstream.
On a 32-bit machine, the BIT() macro does not give the required
bit value if the bit is more than 31. In ieee802_11_parse_elems_crc(),
BIT() is supposed to get the bit value for bits above 31 (42 (id of ERP_INFO_IE),
37 (CHANNEL_SWITCH_IE), 32 (POWER_CONSTRAINT_IE), 45 (HT_CAP_IE),
61 (HT_INFO_IE)). As we do not get the required bit value for the above
IEs, the crc over these IEs is never calculated, so any dynamic change in these
IEs after the association is not really handled on 32-bit platforms.
This patch fixes this issue.
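A small stand-alone illustration in plain C (not the mac80211 code): the narrow macro mimics how a 32-bit-wide constant behaves for element IDs above 31, while a 64-bit constant gives the intended bit.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Mimics what a 32-bit BIT() degrades to: the shift count wraps modulo 32,
 * so IDs such as 42 or 61 collapse onto low bits. */
#define BIT_NARROW(n)	((uint32_t)1 << ((n) & 31))
/* What the fix needs: a 64-bit constant wide enough for IDs up to 63. */
#define BIT_WIDE(n)	((uint64_t)1 << (n))

int main(void)
{
	printf("narrow BIT(42) = %#" PRIx32 "\n", BIT_NARROW(42)); /* 0x400 */
	printf("wide   BIT(42) = %#" PRIx64 "\n", BIT_WIDE(42));   /* 0x40000000000 */
	return 0;
}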
Signed-off-by: Vasanthakumar Thiagarajan <vasanth@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 68cb4f8e24 upstream.
Do not read IIR in serial8250_start_tx when UART_BUG_TXEN
Reading the IIR clears some outstanding interrupts so it is not safe.
Instead, simply transmit immediately if the buffer is empty without
regard to IIR.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Jiri Kosina <jkosina@suse.cz>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3589972e51 upstream.
This patch (as1310) works around a race in dev_driver_string(). If
the device is unbound while the function is running, dev->driver might
become NULL after we test it and before we dereference it.
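A minimal sketch of the pattern described above, assuming the workaround takes the usual form of snapshotting the pointer once before using it (not necessarily the exact upstream diff):
#include <linux/device.h>
#include <linux/compiler.h>

const char *dev_driver_string_sketch(const struct device *dev)
{
	struct device_driver *drv;

	/* Read dev->driver exactly once so a concurrent unbind cannot
	 * clear it between the NULL test and the dereference. */
	drv = ACCESS_ONCE(dev->driver);

	return drv ? drv->name :
	       (dev->bus ? dev->bus->name :
	       (dev->class ? dev->class->name : ""));
}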
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Oliver Neukum <oliver@neukum.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d3a3b0adad upstream.
Setting fops and private data outside of the mutex at debugfs file
creation introduces a race where the files can be opened with the wrong
file operations and private data. It is easy to trigger with a process
waiting on file creation notification.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit edfacdd6f8 upstream.
devpts_get_tty() assumes that the inode passed in is associated with a valid
pty. But if the only reference to the pty is via a bind-mount, the inode
passed to devpts_get_tty(), while valid, would refer to a pty that no longer
exists.
With a lot of debug effort, Grzegorz Nosek developed a small program (see
below) to reproduce a crash on recent kernels. This crash is a regression
introduced by the commit:
commit 527b3e4773
Author: Sukadev Bhattiprolu <sukadev@us.ibm.com>
Date: Mon Oct 13 10:43:08 2008 +0100
To fix, ensure that the dentry associated with the inode has not yet been
deleted/unhashed by devpts_pty_kill().
See also:
https://lists.linux-foundation.org/pipermail/containers/2009-July/019273.html
tty-bug.c:
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <sys/signal.h>
#include <unistd.h>
#include <stdio.h>
#include <linux/fs.h>
void dummy(int sig)
{
}
static int child(void *unused)
{
	int fd;

	signal(SIGINT, dummy); signal(SIGHUP, dummy);
	pause(); /* cheesy synchronisation to wait for /dev/pts/0 to appear */
	mount("/dev/pts/0", "/dev/console", NULL, MS_BIND, NULL);
	sleep(2);
	fd = open("/dev/console", O_RDWR);
	dup(0); dup(0);
	write(1, "Hello world!\n", sizeof("Hello world!\n")-1);
	return 0;
}
int main(void)
{
	pid_t pid;
	char *stack;
	int fd;

	stack = malloc(16384);
	pid = clone(child, stack+16384, CLONE_NEWNS|SIGCHLD, NULL);
	fd = open("/dev/ptmx", O_RDWR|O_NOCTTY|O_NONBLOCK);
	unlockpt(fd); grantpt(fd);
	sleep(2);
	kill(pid, SIGHUP);
	sleep(1);
	return 0; /* exit before child opens /dev/console */
}
Reported-by: Grzegorz Nosek <root@localdomain.pl>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Tested-by: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit aa5cbd1038 upstream.
A write intent bitmap can be removed from an array while the
array is active.
When this happens, all IO is suspended and flushed before the
bitmap is removed.
However it is possible that bitmap_daemon_work is still running to
clear old bits from the bitmap. If it is, it can dereference the
bitmap after it has been freed.
So introduce a new mutex to protect bitmap_daemon_work and get it
before destroying a bitmap.
This is suitable for any current -stable kernel.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 190f38e5ce upstream.
The call to migrate_page() will cause the page->private field to be
cleared.
Also fix up the locking around the page->private transfer, so that we ensure
that calls to nfs_page_find_request() don't end up racing.
Finally, fix up a double free bug: nfs_unlock_request() already calls
nfs_release_request() for us...
Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Tested-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ec81aecb29 upstream.
A specially-crafted Hierarchical File System (HFS) filesystem could cause
a buffer overflow to occur in a process's kernel stack during a memcpy()
call within the hfs_bnode_read() function (at fs/hfs/bnode.c:24). The
attacker can provide the source buffer and length, and the destination
buffer is a local variable of a fixed length. This local variable (passed
as "&entry" from fs/hfs/dir.c:112 and allocated on line 60) is stored in
the stack frame of hfs_bnode_read()'s caller, which is hfs_readdir().
Because the hfs_readdir() function executes upon any attempt to read a
directory on the filesystem, it gets called whenever a user attempts to
inspect any filesystem contents.
[amwang@redhat.com: modify this patch and fix coding style problems]
Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Eugene Teo <eteo@redhat.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Dave Anderson <anderson@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c2d284ee04 upstream.
USB drivers that create character devices call usb_register_dev in their
probe function. This associates the usb_interface device with that minor
number and creates the character device and announces it to the world.
However, the driver's probe function is called before the new
usb_interface is added to the driver's klist_devices.
This is a problem because userspace will respond to the character device
creation announcement by opening the character device. The driver's open
function will then call usb_find_interface to find the usb_interface
associated with that minor number. usb_find_interface will walk the
driver's list of devices and find the usb_interface with the matching
minor number.
Because the announcement happens before the usb_interface is added to the
driver's klist_devices, a race condition exists. A straightforward fix
is to walk the list of devices on usb_bus_type instead since the device
is added to that list before the announcement occurs.
bus_find_device calls get_device to bump the reference count on the found
device. It is arguable that the reference count should be dropped by the
caller of usb_find_interface instead of usb_find_interface, however,
the current users of usb_find_interface do not expect this.
The original version of this patch only matched against minor number
instead of driver and minor number. This version matches against both.
Signed-off-by: Russ Dill <Russ.Dill@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a0bb108112 upstream.
This patch (as1311) fixes a problem in usb-storage: Some devices are
pretty broken when it comes to reporting sense data. The information
they send back indicates that they have more than 18 bytes of sense
data available, but when the system asks for more than 18 they fail or
hang. The symptom is that probing fails with multiple resets.
The patch adds a new BAD_SENSE flag to indicate that usb-storage
should never ask for more than 18 bytes of sense data. The flag can
be set in an unusual_devs entry or via the "quirks=" module parameter,
and it is set automatically whenever a REQUEST SENSE command for more
than 18 bytes fails or times out.
An unusual_devs entry is added for the Agfa photo frame, which uses a
Prolific chip having this bug.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Daniel Kukula <daniel.kuku@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ec412b92db upstream.
usb_bulk_msg() transfers only bytes up to the maximum packet size.
It must be repeated by the usbtmc driver until all bytes of a TMC message
are transfered.
Without this patch, ETIMEDOUT is reported when writing TMC messages
larger than the maximum USB bulk size and the transfer remains incomplete.
The user will notice that the device hangs and must be reset by either closing
the application or pulling the plug.
Signed-off-by: Andre Herms <andre.herms@tec-venture.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 54a8e144ac upstream.
Add D-Link DWM-162-U5 device id 1e0e:ce16 into option driver. The device
has 4 interfaces, of which 1 is handled by storage and the other 3 by
option driver.
The device appears first as CD-only 05c6:2100 device and must be switched
to 1e0e:ce16 mode either by using "eject CD" or usb_modeswitch.
The MessageContent for usb_modeswitch.conf is:
"55534243e0c26a85000000000000061b000000020000000000000000000000"
Signed-off-by: Zhang Le <r0bertz@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 196f1b7a38 upstream.
Commit a5073b5283 (musb_gadget: fix
unhandled endpoint 0 IRQs) somehow missed its key change:
"The gadget EP0 code routinely ignores an interrupt at end of
the data phase because of musb_g_ep0_giveback() resetting the
state machine to "idle, waiting for SETUP" phase prematurely."
So, the majority of the cases of unhandled IRQs is still unfixed...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 36d0344c25 upstream.
Add the xHCI driver files to its MAINTAINERS entry so that I'm Cc'd on
cleanup patches. Update the email address to one I actually use for
sending patches and responding to Linux mailing list emails.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e6a47428de upstream.
If there is a failed journal checksum, don't reset the journal. This
allows for userspace programs to decide how to recover from this
situation. It may be that ignoring the journal checksum failure might
be a better way of recovering the file system. Once we add per-block
checksums, we can definitely do better. Until then, a system
administrator can try backing up the file system image (or taking a
snapshot) and trying to determine experimentally whether ignoring
the checksum failure or aborting the journal replay results in less
data loss.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6afaf8a484 upstream.
ubiupdatevol -t does the following:
- ubi_start_update()
- set_update_marker()
- for all LEBs ubi_eba_unmap_leb()
- clear_update_marker()
- ubi_wl_flush()
ubi_wl_flush() physically erases all PEBs; once it returns, all PEBs are
empty. clear_update_marker() has the update marker removed from flash by the
time it returns.
If there is a power cut between the last two functions then the UBI
volume no longer has the "update" marker set and may have some valid
LEBs while some of them may be gone.
If that volume in question happens to be a UBIFS volume, then mount
will fail with
|UBIFS error (pid 1361): ubifs_read_node: bad node type (255 but expected 6)
|UBIFS error (pid 1361): ubifs_read_node: bad node at LEB 0:0
|Not a node, first 24 bytes:
|00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
if there is at least one valid LEB and the wear-leveling worker managed
to clear LEB 0.
The patch waits for the wl worker to finish prior to clearing the "update"
marker on flash. The two new LEBs which are scheduled for erasing after
clear_update_marker() should not matter because they are only visible to
UBI.
Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit cf87b7439e upstream.
When the kernel is IPLed without the CLEAR option and switches
to 64-bit, the high-order half of the registers might contain
random values. This can cause addressing exceptions and the
kernel enters an interrupt loop.
Initialize the high-order half of the general purpose registers
with zeros after switching to 64-bit mode.
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5600c70e57 upstream.
These drivers inherited from the older 'hpt366' IDE driver the buggy timing
register masks in their set_piomode() methods. As a result, too low a command
cycle active time is programmed for slow PIO modes. Quite fortunately, it's
later "fixed up" by the set_dmamode() methods which also "helpfully" reprogram
the command timings, usually to PIO mode 4; unfortunately, setting an UltraDMA
mode #N also reprograms already set PIO data timings, usually to MWDMA mode #
max(N, 2) timings...
However, the drivers added some breakage of their own too: the bit that they
set/clear to control the FIFO is sometimes wrong -- it's actually the MSB of
the command cycle setup time; also, setting it in DMA mode is wrong as this
bit is only for PIO actually and clearing it for PIO modes is not needed as
no mode in any timing table has it set...
Fix all this, inverting the masks while at it, like in the 'hpt366' and
'pata_hpt366' drivers; bump the drivers' versions, accounting for recent
patches that forgot to do it...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 1d865fb728 upstream.
Interrupt vector 0xec has been doubly defined in irq_vectors.h
It seems arbitrary whether LOCAL_PENDING_VECTOR or
UV_BAU_MESSAGE is the higher number, as long as they are
unique. If they are not unique we'll hit a BUG in
alloc_system_vector().
Signed-off-by: Cliff Wickman <cpw@sgi.com>
LKML-Reference: <E1NJ9Pe-0004P7-0Q@eag09.americas.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e38e2af1c5 upstream.
A memory mapped register that affects the SGI UV Broadcast
Assist Unit's interrupt handling may sometimes be uninitialized.
Remove the condition on its initialization, as that condition
can be randomly satisfied by a hardware reset.
Signed-off-by: Cliff Wickman <cpw@sgi.com>
LKML-Reference: <E1NBGB9-0005nU-Dp@eag09.americas.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bc09effabf upstream.
mce_timer must be passed to setup_timer() in all cases, no
matter whether it is going to be actually used. Otherwise, when
the CPU gets brought down, its call to del_timer_sync() will
never return, as the timer won't have a base associated, and
hence lock_timer_base() will loop infinitely.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
LKML-Reference: <4B1DB831.2030801@jp.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b8b7d791a8 upstream.
commit 746357d (x86: Prevent GCC 4.4.x (pentium-mmx et al) function
prologue wreckage) uses -mtune=generic to work around the function
prologue problem with mcount on -march=pentium-mmx and others.
Jakub pointed out that we can use -maccumulate-outgoing-args instead
which is selected by -mtune=generic and prevents the problem without
losing the -march specific optimizations.
Pointed-out-by: Jakub Jelinek <jakub@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 746357d6a5 upstream.
When the kernel is compiled with -pg for tracing, GCC 4.4.x inserts
stack alignment of a function _before_ the mcount prologue if
-march=pentium-mmx is set and -mtune=generic is not set. This breaks
the assumption of the function graph tracer which expects that the
mcount prologue
push %ebp
mov %esp, %ebp
is the first stack operation in a function because it needs to modify
the function return address on the stack to trap into the tracer
before returning to the real caller.
The generated code is:
push %edi
lea 0x8(%esp),%edi
and $0xfffffff0,%esp
pushl -0x4(%edi)
push %ebp
mov %esp,%ebp
so the tracer modifies the copy of the return address which is stored
after the stack alignment and therefore does not trap the return, which
in turn breaks the call chain logic of the tracer and leads to a
kernel panic.
Aside from the fact that the generated code is horrible for no good
reason, other -march/-mtune options generate the expected:
push %ebp
mov %esp,%ebp
and $0xfffffff0,%esp
which does the same and keeps everything intact.
After some experimenting we found out that this problem is restricted
to gcc4.4.x and to the following -march settings:
i586, pentium, pentium-mmx, k6, k6-2, k6-3, winchip-c6, winchip2, c3,
geode
By adding -mtune=generic the code generator always produces the
expected code.
So forcing -mtune=generic when CONFIG_FUNCTION_GRAPH_TRACER=y is not
pretty, but at the moment it is the only way to prevent the kernel from
tripping over gcc-shrooms-induced code madness.
Most distro kernels have CONFIG_X86_GENERIC=y anyway which forces
-mtune=generic as well so it will not impact those.
References: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42109
References: http://lkml.org/lkml/2009/11/19/17
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <alpine.LFD.2.00.0911200206570.24119@localhost.localdomain>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jeff Law <law@redhat.com>
Cc: gcc@gcc.gnu.org
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Andrew Haley <aph@redhat.com>
Cc: Richard Guenther <richard.guenther@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e3267cbbbf upstream.
For a while now, we are issuing a rdmsr instruction to find out which
msrs in our save list are really supported by the underlying machine.
However, it fails to account for kvm-specific msrs, such as the pvclock
ones.
This patch moves them to the beginning of the list and skips testing them.
Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d7b0b5eb30 upstream.
This patch moves s390 processor status word into the base kvm_run
struct and keeps it up-to date on all userspace exits.
The userspace ABI is broken by this, however there are no applications
in the wild using this. A capability check is provided so users can
verify the updated API exists.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f50146bd7b upstream.
This patch corrects the checking of the new address for the prefix register.
On s390, the prefix register is used to address the cpu's lowcore (address
0...8k). This check is supposed to verify that the memory is readable and
present.
copy_from_guest is a helper function, that can be used to read from guest
memory. It applies prefixing, adds the start address of the guest memory in
user, and then calls copy_from_user. Previous code was obviously broken for
two reasons:
- prefixing should not be applied here. The current prefix register is
going to be updated soon, and the address we're looking for will be
0..8k after we've updated the register
- we're adding the guest origin (gmsor) twice: once in subject code
and once in copy_from_guest
With kuli, we did not hit this problem because (a) we were lucky with
previous prefix register content, and (b) our guest memory was mmaped
very low into user address space.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Reported-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eb3c79e64a upstream.
While we are never normally passed an instruction that exceeds 15 bytes,
smp games can cause us to attempt to interpret one, which will cause
large latencies in non-preempt hosts.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit fcfdebe707 upstream.
The timer stop callback can be called from snd_timer_interrupt(), which
is called from the hrtimer callback. Since hrtimer_cancel() waits for
the callback completion, this eventually results in a lock-up.
This patch fixes the problem by just toggling a flag in the stop callback
and calling hrtimer_cancel() later.
Reported-and-tested-by: Wojtek Zabolotny <W.Zabolotny@elka.pw.edu.pl>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
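For illustration, a minimal userspace sketch of the "set a flag, cancel later" pattern described above; the names are invented and this is not the ALSA hrtimer code:
#include <stdatomic.h>
#include <stdio.h>

static atomic_int stop_requested;

/* runs in timer-callback context: only note the request */
static void timer_stop_from_callback(void)
{
    atomic_store(&stop_requested, 1);
}

/* runs later in normal context: the safe place for the blocking cancel */
static void timer_sync_stop(void)
{
    if (atomic_exchange(&stop_requested, 0))
        printf("hrtimer_cancel()-equivalent would run here, outside the callback\n");
}

int main(void)
{
    timer_stop_from_callback();
    timer_sync_stop();
    return 0;
}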
commit 8629ea2eab upstream.
commit 507e1231 (timer stats: Optimize by adding quick check to avoid
function calls) introduced a regression in /proc/timer_list.
/proc/timer_list shows now
#0: <c27d46b0>, tick_sched_timer, S:01, <(null)>, /-1
instead of
#0: <c27d46b0>, tick_sched_timer, S:01, hrtimer_start, swapper/0
Revert the hrtimer quick check for now. The optimization needs more
thought, but this is neither 2.6.32-rc7 nor stable material.
[ tglx: - Removed unrelated changes from the original patch
- Prevent unnecessary call to timer_stats_update_stats
- massaged the changelog ]
Signed-off-by: Feng Tang <feng.tang@intel.com>
LKML-Reference: <alpine.LFD.2.00.0911181933540.24119@localhost.localdomain>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 512414b0be upstream.
Without this we have no guarantee of the integrity of the
EEPROM and are likely to encounter a lot of bogus bug reports
due to actual issues on the EEPROM. With the EEPROM checksum
check in place we can easily rule those issues out.
If you have to revert this patch to get your card working, *you* have
a card with a busted EEPROM and only older kernels will support that
concoction. This patch is a trade-off between not accepting bogus
EEPROMs and avoiding bogus bug reports, allowing developers to focus
instead on real, concrete issues.
If stable keeps bogus bug reports because of a possibly busted EEPROM
feel free to apply this there too.
Tested on an AR5414
Cc: jirislaby@gmail.com
Cc: akpm@linux-foundation.org
Cc: rjw@sisk.pl
Cc: me@bobcopeland.com
Cc: david.quan@atheros.com
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
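For illustration, a toy XOR-over-words checksum of the general kind such a check enforces; the 0xffff convention and layout here are assumptions for the example and are not claimed to match the actual AR5414 EEPROM format:
#include <stdint.h>
#include <stdio.h>

static int eeprom_checksum_ok(const uint16_t *words, size_t nwords)
{
    uint16_t cksum = 0;
    size_t i;

    for (i = 0; i < nwords; i++)
        cksum ^= words[i];
    return cksum == 0xffff;            /* hypothetical "all bits set" convention */
}

int main(void)
{
    uint16_t img[4] = { 0x1234, 0xabcd, 0x0001, 0 };

    img[3] = 0xffff ^ img[0] ^ img[1] ^ img[2];          /* stored checksum word */
    printf("good image: %d\n", eeprom_checksum_ok(img, 4));   /* 1 */
    img[1] ^= 0x0100;                                    /* simulate corruption */
    printf("bad image:  %d\n", eeprom_checksum_ok(img, 4));   /* 0 */
    return 0;
}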
commit e33761e6f2 upstream.
The range check in the sprom image parser hex2sprom() is broken.
One sprom word is 4 hex characters.
This fixes the check and also adds much better sanity checks to the code.
We had better make sure the image is OK by doing some sanity checks, to
avoid bricking the device by accident.
Signed-off-by: Michael Buesch <mb@bu3sch.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
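For illustration, a minimal sketch of the length check implied by "one sprom word is 4 hex characters"; the helper name and the newline handling are assumptions, not the ssb code:
#include <stdio.h>
#include <string.h>

static int sprom_image_len_ok(const char *buf, size_t len, size_t words)
{
    if (len && buf[len - 1] == '\n')   /* tolerate a trailing newline */
        len--;
    return len == words * 4;           /* 4 hex characters per 16-bit word */
}

int main(void)
{
    const char *img = "5BAD00F1\n";    /* 2 words -> 8 hex characters */

    printf("ok=%d\n", sprom_image_len_ok(img, strlen(img), 2));   /* ok=1 */
    printf("ok=%d\n", sprom_image_len_ok(img, strlen(img), 4));   /* ok=0 */
    return 0;
}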
commit 7d1849aff6 upstream.
The x86 lapic nmi watchdog does not recognize AMD Family 11h,
resulting in:
NMI watchdog: CPU not supported
As far as I can see from available documentation (the BKDG),
family 11h looks identical to family 10h as far as the PMU
is concerned.
Extending the check to accept family 11h results in:
Testing NMI watchdog ... OK.
I've been running with this change on a Turion X2 Ultra ZM-82
laptop for a couple of weeks now without problems.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>
LKML-Reference: <19223.53436.931768.278021@pilspetsen.it.uu.se>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4832ddda2e upstream.
Bug reporter noted their system with an ASUS P4S800 motherboard would
hang when rebooting unless reboot=b was specified. Their dmidecode
didn't contain descriptive System Information for Manufacturer or
Product Name, so I used their Base Board Information to create a
reboot quirk patch. The bug reporter confirmed this patch resolves
the reboot hang.
Handle 0x0001, DMI type 1, 25 bytes
System Information
Manufacturer: System Manufacturer
Product Name: System Name
Version: System Version
Serial Number: SYS-1234567890
UUID: E0BFCD8B-7948-D911-A953-E486B4EEB67F
Wake-up Type: Power Switch
Handle 0x0002, DMI type 2, 8 bytes
Base Board Information
Manufacturer: ASUSTeK Computer INC.
Product Name: P4S800
Version: REV 1.xx
Serial Number: xxxxxxxxxxx
BugLink: http://bugs.launchpad.net/bugs/366682
ASUS P4S800 will hang when rebooting unless reboot=b is specified.
Add a quirk to reboot through the bios.
Signed-off-by: Leann Ogasawara <leann.ogasawara@canonical.com>
LKML-Reference: <1259972107.4629.275.camel@emiko>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 4528752f49 upstream.
On a multi-node x3950M2 system, there's a slight oddity in the
PCI device tree for all secondary nodes:
30:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
\-33:00.0 PCI bridge: IBM CalIOC2 PCI-E Root Port (rev 01)
\-34:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)
...as compared to the primary node:
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
\-01:00.0 VGA compatible controller: ATI Technologies Inc ES1000 (rev 02)
03:00.0 PCI bridge: IBM CalIOC2 PCI-E Root Port (rev 01)
\-04:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)
In both nodes, the LSI RAID controller hangs off a CalIOC2
device, but on the secondary nodes, the BIOS hides the VGA
device and substitutes the device tree ending with the disk
controller.
It would seem that Calgary devices don't necessarily appear at
the top of the PCI tree, which means that the current code to
find the Calgary IOMMU that goes with a particular device is
buggy.
Rather than walk all the way to the top of the PCI
device tree and try to match bus number with Calgary descriptor,
the code needs to examine each parent of the particular device;
if it encounters a Calgary with a matching bus number, simply
use that.
Otherwise, we BUG() when the bus number of the Calgary doesn't
match the bus number of whatever's at the top of the device tree.
Extra note: This patch appears to work correctly for the x3950
that came before the x3950 M2.
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Jon D. Mason <jdmason@kudzu.us>
Cc: Corinna Schultz <coschult@us.ibm.com>
LKML-Reference: <20091202230556.GG10295@tux1.beaverton.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
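For illustration, a simplified userspace sketch of the parent walk described above, with invented structures standing in for the real PCI and Calgary data:
#include <stdio.h>
#include <stddef.h>

struct toy_bus {
    int             number;
    struct toy_bus *parent;
};

/* hypothetical lookup: does this bus number have a Calgary table? */
static int bus_has_iommu_table(int busno)
{
    return busno == 0x33;   /* pretend the CalIOC2 sits on bus 0x33 */
}

/* examine each ancestor instead of jumping straight to the tree top */
static int find_iommu_bus(struct toy_bus *bus)
{
    for (; bus; bus = bus->parent)
        if (bus_has_iommu_table(bus->number))
            return bus->number;
    return -1;              /* no Calgary above this device */
}

int main(void)
{
    struct toy_bus root  = { 0x30, NULL   };
    struct toy_bus calgy = { 0x33, &root  };
    struct toy_bus leaf  = { 0x34, &calgy };

    printf("IOMMU bus: %#x\n", find_iommu_bus(&leaf));   /* 0x33 */
    return 0;
}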
commit 9f800de38b upstream.
This function may be called on the resume path and can not
be dropped after booting.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit be83129771 upstream.
For some devices the ACPI table may define unity map
requirements which must be met when the IOMMU is enabled. So
we need to attach devices to their domains as early as
possible so that these mappings are in place when needed.
This patch assigns the domains right after they are
allocated. Otherwise this can result in I/O page faults
before a driver binds to a device while the BIOS is still using
it.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit eae0c9dfb5 upstream.
Commit 1b9508f ("Rate-limit newidle") has been confirmed to fix
the netperf UDP loopback regression reported by Alex Shi.
This is a cleanup and a fix:
- moved to a more out of the way spot
- fix to ensure that balancing doesn't try to balance
runqueues which haven't gone online yet, which can
mess up CPU enumeration during boot.
Reported-by: Alex Shi <alex.shi@intel.com>
Reported-by: Zhang, Yanmin <yanmin_zhang@linux.intel.com>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1257821402.5648.17.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a1f84a3ab8 upstream.
When waking affine, check for an idle shared cache, and if
found, wake to that CPU/sibling instead of the waker's CPU.
This improves pgsql+oltp ramp up by roughly 8%. Possibly more
for other loads, depending on overlap. The trade-off is a
roughly 1% peak downturn if tasks are truly synchronous.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1256654138.17752.7.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit bab636b921 upstream.
Lockdep complains about taking the parent lock in
__pm_runtime_set_status(), so mark it as nested.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 9160306e6f upstream.
Impose a clear locking design on the note_new_gpnum()
function's use of the ->gpnum counter. This is done by updating
rdp->gpnum only from the corresponding leaf rcu_node structure's
rnp->gpnum field, and even then only under the protection of
that same rcu_node structure's ->lock field. Performance and
scalability are maintained using a form of double-checked
locking, and excessive spinning is avoided by use of the
spin_trylock() function. The use of spin_trylock() is safe due
to the fact that CPUs that fail to acquire this lock will try
again later. The hierarchical nature of the rcu_node data
structure limits contention (which could be limited further if
need be using the RCU_FANOUT kernel parameter).
Without this patch, obscure but quite possible races could
result in a quiescent state that occurred during one grace
period being accounted to the following grace period, causing
that following grace period to end prematurely. Not good!
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <12571987492350-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
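For illustration, a generic userspace sketch of the double-checked locking pattern described above; a pthread mutex stands in for the rcu_node ->lock and the names are invented, so this is not the rcutree code:
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t node_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long node_gpnum  = 42;   /* authoritative value, lock-protected */

static void note_new_gpnum(unsigned long *local_gpnum)
{
    if (*local_gpnum == node_gpnum)              /* cheap unlocked check */
        return;
    if (pthread_mutex_trylock(&node_lock) != 0)  /* contended: retry later */
        return;
    if (*local_gpnum != node_gpnum)              /* re-check under the lock */
        *local_gpnum = node_gpnum;
    pthread_mutex_unlock(&node_lock);
}

int main(void)
{
    unsigned long my_gpnum = 0;

    note_new_gpnum(&my_gpnum);
    printf("my_gpnum=%lu\n", my_gpnum);          /* 42 */
    return 0;
}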
commit d09b62dfa3 upstream.
Impose a clear locking design on the rcu_process_gp_end()
function's use of the ->completed counter. This is done by
creating a ->completed field in the rcu_node structure, which
can safely be accessed under the protection of that structure's
lock. Performance and scalability are maintained by using a
form of double-checked locking, so that rcu_process_gp_end()
only acquires the leaf rcu_node structure's ->lock if a grace
period has recently ended.
This fix reduces rcutorture failure rate by at least two orders
of magnitude under heavy stress with force_quiescent_state()
being invoked artificially often. Without this fix,
unsynchronized access to the ->completed field can cause
rcu_process_gp_end() to advance callbacks whose grace period has
not yet expired. (Bad idea!)
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: mathieu.desnoyers@polymtl.ca
Cc: josh@joshtriplett.org
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
Cc: Valdis.Kletnieks@vt.edu
Cc: dhowells@redhat.com
LKML-Reference: <12571987494069-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8c0c0cc2d9 upstream.
Queueing to receive an ISO packet with a payload length of zero
silently does nothing in dualbuffer mode, and crashes the kernel in
packet-per-buffer mode. Return an error in dualbuffer mode, because
the DMA controller won't let us do what we want, and work correctly in
packet-per-buffer mode.
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f3f6faa9ed upstream.
This patch (as1312) fixes a minor bug in usb-storage. The
fill_inquiry() routine neglects to pre-load the inquiry data buffer
with spaces. As a result, if the vendor name is shorter than 8
characters or the product name is shorter than 16, the remainder will
be filled with garbage.
The patch also removes some unnecessary calls to strlen().
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
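For illustration, a minimal userspace sketch of the space-padding idea; the offsets follow the standard INQUIRY layout (8-byte vendor, 16-byte product, 4-byte revision), but the helpers themselves are invented and are not the usb-storage fill_inquiry() code:
#include <stdio.h>
#include <string.h>

static void copy_padded(unsigned char *dst, const char *src, size_t width)
{
    size_t len = strlen(src);

    if (len > width)
        len = width;
    memcpy(dst, src, len);
}

static void fill_inquiry_strings(unsigned char *inq,
                                 const char *vendor, const char *product)
{
    /* bytes 8..35: 8-byte vendor, 16-byte product, 4-byte revision */
    memset(inq + 8, ' ', 28);          /* pre-load with spaces, not garbage */
    copy_padded(inq + 8,  vendor,  8);
    copy_padded(inq + 16, product, 16);
}

int main(void)
{
    unsigned char inq[36] = { 0 };

    fill_inquiry_strings(inq, "ACME", "USB Disk");
    printf("[%.8s][%.16s]\n", (char *)(inq + 8), (char *)(inq + 16));
    return 0;
}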
(cherry picked from commit 4a58579b9e)
This patch fixes three problems in the handling of the
EXT4_IOC_MOVE_EXT ioctl:
1. In the current EXT4_IOC_MOVE_EXT, there are read access mode checks for
the original and donor files, but they allow illegal write access to the
donor file, since the donor file is overwritten by the original file's data.
To fix this problem, change the access mode checks of the original (r->r/w)
and donor (r->w) files.
2. Disallow the use of donor files that have a setuid or setgid bits.
3. Call mnt_want_write() and mnt_drop_write() before and after
ext4_move_extents() calling to get write access to a mount.
Signed-off-by: Akira Fujita <a-fujita@rs.jp.nec.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit b436b9bef8)
We cannot rely on buffer dirty bits during fsync because pdflush can come
before fsync is called and clear dirty bits without forcing a transaction
commit. What we do is that we track which transaction has last changed
the inode and which transaction last changed allocation and force it to
disk on fsync.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 194074acac)
Inside a ->setattr() call both ATTR_UID and ATTR_GID may be valid.
This means that we may end up transferring all quotas, and
we have to reserve QUOTA_DEL_BLOCKS for all quotas, as we do in
the case of QUOTA_INIT_BLOCKS.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Reviewed-by: Mingming Cao <cmm@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 5aca07eb7d)
Currently all quota block reservation macros contain a hard-coded "2",
aka the MAXQUOTAS value. This is no good because in some places it is
not obvious what this digit represents. Let's introduce a
new macro with a self-descriptive name.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Acked-by: Mingming Cao <cmm@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit b844167edc)
This fixes a leak of blocks in an inode prealloc list if device failures
cause ext4_mb_mark_diskspace_used() to fail.
Signed-off-by: Curt Wohlgemuth <curtw@google.com>
Acked-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit d4edac314e)
There is a potential race when a transaction is committing right when
the file system is being unmounted. This could result in a race
because EXT4_SB(sb)->s_group_info could be freed in ext4_put_super
before the commit code calls a callback so the mballoc code can
release freed blocks in the transaction, resulting in a panic trying
to access the freed s_group_info.
The fix is to wait for the transaction to finish committing before we
shutdown the multiblock allocator.
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit b9a4207d5e)
When ext4_write_begin fails after allocating some blocks or
generic_perform_write fails to copy data to write, we truncate blocks
already instantiated beyond i_size. Although these blocks were never
inside i_size, we have to truncate the pagecache of these blocks so
that corresponding buffers get unmapped. Otherwise subsequent
__block_prepare_write (called because we are retrying the write) will
find the buffers mapped, not call ->get_block, and thus the page will
be backed by already freed blocks leading to filesystem and data
corruption.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 446aaa6e7e)
The move_extent.moved_len is used to pass back the number of exchanged
blocks to user space. Currently the caller must clear this
field; but we spend more code space checking for this requirement than
simply zeroing the field ourselves, so let's just make life easier for
everyone all around.
Signed-off-by: Kazuya Mio <k-mio@sx.jp.nec.com>
Signed-off-by: Akira Fujita <a-fujita@rs.jp.nec.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 94d7c16cbb)
At the beginning of ext4_move_extents(), we call
ext4_discard_preallocations() to discard inode PAs of orig and donor
inodes. But in the following case, blocks can be double freed, so
move ext4_discard_preallocations() to the end of ext4_move_extents().
1. Discard inode PAs of orig and donor inodes with
ext4_discard_preallocations() in ext4_move_extents().
orig : [ DATA1 ]
donor: [ DATA2 ]
2. While data blocks are being exchanged between the orig and donor inodes,
new inode PAs are created for orig by another process's block allocation.
(Since there are semaphore gaps in ext4_move_extents().) And the new
inode PAs are used partially (2-1).
2-1 Create new inode PAs to orig inode
orig : [ DATA1 | used PA1 | free PA1 ]
donor: [ DATA2 ]
3. The donor inode, which has the old orig inode's blocks, is deleted after
EXT4_IOC_MOVE_EXT finishes (3-1, 3-2). So the block bitmap bits
corresponding to the old orig inode's blocks are freed.
3-1 After EXT4_IOC_MOVE_EXT finished
orig : [ DATA2 | free PA1 ]
donor: [ DATA1 | used PA1 ]
3-2 Delete donor inode
orig : [ DATA2 | free PA1 ]
donor: [ FREE SPACE(DATA1) | FREE SPACE(used PA1) ]
4. A double free of blocks occurs when close() is called on the orig
inode, because ext4_discard_preallocations() for the orig inode
frees used PA1 and free PA1, though used PA1 was already freed in 3.
4-1 Double free of blocks occurs
orig : [ DATA2 | FREE SPACE(free PA1) ]
donor: [ FREE SPACE(DATA1) | DOUBLE FREE(used PA1) ]
Signed-off-by: Akira Fujita <a-fujita@rs.jp.nec.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit e3bb52ae2b)
Users on the linux-ext4 list recently complained about differences
across filesystems w.r.t. how to mount without a journal replay.
In the discussion it was noted that xfs's "norecovery" option is
perhaps more descriptively accurate than "noload," so let's make
that an alias for ext4.
Also show this status in /proc/mounts
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 5328e63531)
It is anticipated that when sb_issue_discard starts doing
real work on trim-capable devices, we may see issues. Make
this a mount-time option, and default it to off until we know
that things are working out OK.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 2bba702d4f)
When an error happened in ext4_splice_branch we failed to notice that
in ext4_ind_get_blocks and mapped the buffer anyway. Fix the problem
by checking for error properly.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 6b17d902fd)
We don't need to issue an I/O barrier on an error or if we force a commit
because we are doing data journaling.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 1032988c71)
The block validity checks used by ext4_data_block_valid() weren't
correctly written to check file systems with the meta_bg feature. Fix
this.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 8dadb198cb)
The number of old-style block group descriptor blocks is
s_meta_first_bg when the meta_bg feature flag is set.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 3f8fb9490e)
commit a71ce8c6c9 updated ext4_statfs()
to update the on-disk superblock counters, but modified this buffer
directly without any journaling of the change. This is one of the
accesses that was causing the crc errors in journal replay as seen in
kernel.org bugzilla #14354.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 86ebfd08a1)
ext4_xattr_set_handle() was zeroing out an inode outside
of journaling constraints; this is one of the accesses that
was causing the crc errors in journal replay as seen in
kernel.org bugzilla #14354.
Reviewed-by: Andreas Dilger <adilger@sun.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 30c6e07a92)
We need to be testing the i_flags field in the ext4 specific portion
of the inode, instead of the (confusingly aliased) i_flags field in
the generic struct inode.
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 5068969686)
When an inode gets unlinked, the functions ext4_clear_blocks() and
ext4_remove_blocks() call ext4_forget() for all the buffer heads
corresponding to the deleted inode's data blocks. If the inode is a
directory or a symlink, the is_metadata parameter must be non-zero so
ext4_forget() will revoke them via jbd2_journal_revoke(). Otherwise,
if these blocks are reused for a data file, and the system crashes
before a journal checkpoint, the journal replay could end up
corrupting these data blocks.
Thanks to Curt Wohlgemuth for pointing out potential problems in this
area.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 567f3e9a70)
One of the invalid error paths in ext4_iget() forgot to brelse() the
inode buffer head. Fix it by adding a brelse() in the common error
return path, which also simplifies the function.
Thanks to Andi Kleen <ak@linux.intel.com> for reporting the problem.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 49bd22bc4d)
If CONFIG_PROVE_LOCKING is enabled, the double_down_write_data_sem()
will trigger a false-positive warning of a recursive lock. Since we
take i_data_sem for the two inodes ordered by their inode numbers,
this isn't a problem. Use of down_write_nested() will notify the lock
dependency checker machinery that there is no problem here.
This problem was reported by Brian Rogers:
http://marc.info/?l=linux-ext4&m=125115356928011&w=1
Reported-by: Brian Rogers <brian@xyzw.org>
Signed-off-by: Akira Fujita <a-fujita@rs.jp.nec.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
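For illustration, a userspace sketch of the lock-ordering rule described above, with toy types; in the kernel the second acquisition additionally uses down_write_nested() purely to tell lockdep the ordering is intentional:
#include <pthread.h>
#include <stdio.h>

struct toy_inode {
    unsigned long   ino;
    pthread_mutex_t data_sem;
};

/* always lock in ascending inode-number order so no deadlock is possible */
static void double_lock_data_sem(struct toy_inode *a, struct toy_inode *b)
{
    struct toy_inode *first  = (a->ino < b->ino) ? a : b;
    struct toy_inode *second = (a->ino < b->ino) ? b : a;

    pthread_mutex_lock(&first->data_sem);
    pthread_mutex_lock(&second->data_sem);   /* kernel: down_write_nested() */
}

static void double_unlock_data_sem(struct toy_inode *a, struct toy_inode *b)
{
    pthread_mutex_unlock(&a->data_sem);
    pthread_mutex_unlock(&b->data_sem);
}

int main(void)
{
    struct toy_inode orig  = { 12, PTHREAD_MUTEX_INITIALIZER };
    struct toy_inode donor = {  7, PTHREAD_MUTEX_INITIALIZER };

    double_lock_data_sem(&orig, &donor);
    printf("both i_data_sem-style locks held, taken in ino order\n");
    double_unlock_data_sem(&orig, &donor);
    return 0;
}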
(cherry picked from commit fc04cb49a8)
ext4_move_extents() checks the logical block contiguity
of the original file with ext4_ext_find_extent() and mext_next_extent().
Therefore the extent which the ext4_ext_path structure indicates
must not be changed between those functions.
But in the current implementation there is no i_data_sem protection
between ext4_ext_find_extent() and mext_next_extent(), so the extent
which the ext4_ext_path structure indicates may be overwritten by
delalloc. As a result, ext4_move_extents() will exchange the wrong blocks
between the original and donor files. Change the place where
i_data_sem is acquired/released to solve this problem.
Moreover, I changed move_extent_per_page() to start transaction first,
and then acquire i_data_sem. Without this change, there is a
possibility of the deadlock between mmap() and ext4_move_extents():
* NOTE: "A", "B" and "C" mean different processes
A-1: ext4_ext_move_extents() acquires i_data_sem of two inodes.
B: do_page_fault() starts the transaction (T),
and then tries to acquire i_data_sem.
But process "A" is already holding it, so it is kept waiting.
C: While "A" and "B" running, kjournald2 tries to commit transaction (T)
but it is under updating, so kjournald2 waits for it.
A-2: Call ext4_journal_start with holding i_data_sem,
but transaction (T) is locked.
Signed-off-by: Akira Fujita <a-fujita@rs.jp.nec.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit f868a48d06)
If the EXT4_IOC_MOVE_EXT ioctl fails, the number of blocks that were
exchanged before the failure should be returned to the userspace
caller. Unfortunately, currently if the block size is not the same as
the page size, the block count that is returned is the
page-aligned block count instead of the actual block count. This
commit addresses this bug.
Signed-off-by: Akira Fujita <a-fujita@rs.jp.nec.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 503358ae01)
If s_log_groups_per_flex is greater than 31, then groups_per_flex
will overflow and cause a divide by zero error. This can cause a kernel
BUG if such a file system is mounted.
Thanks to Nageswara R Sastry for analyzing the failure and providing
an initial patch.
http://bugzilla.kernel.org/show_bug.cgi?id=14287
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
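For illustration, a minimal userspace sketch of the overflow and the kind of validation the fix performs; the names and the exact bounds used here are assumptions, not the ext4 code:
#include <stdio.h>

static unsigned int groups_per_flex(unsigned int log_groups_per_flex)
{
    /* reject on-disk values that would overflow the 32-bit shift below */
    if (log_groups_per_flex > 31)
        return 0;                      /* caller treats 0 as "flex_bg unusable" */
    return 1U << log_groups_per_flex;
}

int main(void)
{
    unsigned int ngroups = 1024;
    unsigned int gpf = groups_per_flex(36);   /* hostile superblock value */

    if (gpf == 0) {
        printf("invalid s_log_groups_per_flex, refusing to divide\n");
        return 1;
    }
    printf("flex groups: %u\n", ngroups / gpf);
    return 0;
}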
(cherry picked from commit 2de770a406)
Previously add_dirent_to_buf() did not free its passed-in buffer head
in the case of ENOSPC, since in some cases the caller still needed it.
However, this led to potential buffer head leaks since not all callers
dealt with this correctly. Fix this by simplifying the freeing
convention; now add_dirent_to_buf() *never* frees the passed-in buffer
head, and leaves that to the responsibility of its caller. This makes
things cleaner and easier to prove that the code is neither leaking
buffer heads nor calling brelse() one time too many.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: Curt Wohlgemuth <curtw@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7b2519afa1 upstream.
The current sense pointer is cast to a u32, which can truncate it
on 64-bit systems. Fix by using unsigned long instead.
Signed-off-by: Bo Yang <bo.yang@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
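For illustration, a minimal userspace demonstration of why storing a pointer in a 32-bit integer truncates on 64-bit systems while unsigned long does not (this is not the megaraid_sas code):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    char sense_buf[96];
    void *sense = sense_buf;

    uint32_t truncated = (uint32_t)(uintptr_t)sense;  /* may lose the high bits */
    unsigned long full = (unsigned long)sense;        /* pointer-sized on Linux */

    printf("full:      %#lx\n", full);
    printf("truncated: %#lx\n", (unsigned long)truncated);
    return 0;
}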
commit 0899638688 upstream.
include/scsi/osd_protocol.h uses ALIGN() without an #include
<linux/kernel.h>, leading to:
| include/scsi/osd_protocol.h:362: error: implicit declaration of function 'ALIGN'
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
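For reference, a rough userspace equivalent of the rounding that the kernel's ALIGN() macro performs (the kernel's actual definition lives behind <linux/kernel.h>; the power-of-two requirement is assumed here):
#include <stdio.h>

/* round x up to the next multiple of a, where a is a power of two */
#define ALIGN_UP(x, a)  (((x) + ((a) - 1)) & ~((a) - 1))

int main(void)
{
    printf("%lu\n", ALIGN_UP(13UL, 8UL));   /* 16 */
    printf("%lu\n", ALIGN_UP(16UL, 8UL));   /* 16 */
    return 0;
}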
commit d139b9bd0e upstream.
Some of our virtual SCSI hosts don't have a proper bus parent at the
top, which can be a problem for doing DMA on them
This patch makes the host device cache a pointer to the physical bus
device and provides an extra API for setting it (the normal API picks
it up from the parent). This patch also modifies the qla2xxx and lpfc
vport logic to use the new DMA host setting API.
Acked-By: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2a855dd01b upstream.
All architectures in the kernel increment/decrement the stack pointer
before storing values on the stack.
On architectures which have the stack grow down sas_ss_sp == sp is not
on the alternate signal stack while sas_ss_sp + sas_ss_size == sp is
on the alternate signal stack.
On architectures which have the stack grow up sas_ss_sp == sp is on
the alternate signal stack while sas_ss_sp + sas_ss_size == sp is not
on the alternate signal stack.
The current implementation fails for architectures which have the
stack grow down in the corner case where sas_ss_sp == sp. This was
reported as Debian bug #544905 on AMD64.
Simplified test case: http://download.breakpoint.cc/tc-sig-stack.c
The test case creates the following stack scenario:
0xn0300 stack top
0xn0200 alt stack pointer top (when switching to alt stack)
0xn01ff alt stack end
0xn0100 alt stack start == stack pointer
If the signal is sent the stack pointer is pointing to the base
address of the alt stack and the kernel erroneously decides that it
has already switched to the alternate stack because of the current
check for "sp - sas_ss_sp < sas_ss_size"
On parisc (stack grows up) the scenario would be:
0xn0200 stack pointer
0xn01ff alt stack end
0xn0100 alt stack start = alt stack pointer base
(when switching to alt stack)
0xn0000 stack base
This is handled correctly by the current implementation.
[ tglx: Modified for archs which have the stack grow up (parisc) which
would fail with the correct implementation for stack grows
down. Added a check for sp >= current->sas_ss_sp which is
strictly not necessary but makes the code symmetric for both
variants ]
Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
LKML-Reference: <20091025143758.GA6653@Chamillionaire.breakpoint.cc>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
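For illustration, a minimal userspace sketch of the stack-grows-down corner case described above; the constants are invented and this is not the kernel's on_sig_stack() implementation:
#include <stdio.h>

#define SS_SP    0x100UL   /* hypothetical alt stack start */
#define SS_SIZE  0x100UL   /* hypothetical alt stack size  */

/* old check: wrongly treats sp == sas_ss_sp as "already on the alt stack" */
static int on_sig_stack_old(unsigned long sp)
{
    return sp - SS_SP < SS_SIZE;
}

/* corrected check for stack-grows-down: sp must be strictly above the base */
static int on_sig_stack_new(unsigned long sp)
{
    return sp > SS_SP && sp - SS_SP <= SS_SIZE;
}

int main(void)
{
    unsigned long sp = SS_SP;  /* the corner case from the changelog */

    printf("sp == sas_ss_sp: old says %d, new says %d\n",
           on_sig_stack_old(sp), on_sig_stack_new(sp));
    /* old says 1 (no switch to the alt stack), new correctly says 0 */
    return 0;
}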