Patches contributed by Eötvös Loránd University
commit 251e8e3c7235f5944805a64f24c79fc4696793f1
Author: Ingo Molnar <mingo@elte.hu>
Date: Thu May 14 05:16:59 2009 +0200
perf_counter: Remove ACPI quirk
We had a disable/enable around acpi_idle_do_entry() due to an erratum
in an early prototype CPU I had access to. That erratum has been fixed
in the BIOS, so remove the quirk.
The quirk also kept us from profiling interrupts that hit the ACPI idle
instruction - so this is an improvement as well, beyond a cleanup and
a micro-optimization.
[ Impact: improve profiling scope, cleanup, micro-optimization ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index 9645758c0472..f7ca8c55956b 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -765,7 +765,6 @@ static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx)
{
/* Don't trace irqs off for idle */
stop_critical_timings();
- perf_disable();
if (cx->entry_method == ACPI_CSTATE_FFH) {
/* Call into architectural FFH based C-state */
acpi_processor_ffh_cstate_enter(cx);
@@ -780,7 +779,6 @@ static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx)
gets asserted in time to freeze execution properly. */
unused = inl(acpi_gbl_FADT.xpm_timer_block.address);
}
- perf_enable();
start_critical_timings();
}
commit 9029a5e3801f1cc7cdaab80169d82427acf928d8
Author: Ingo Molnar <mingo@elte.hu>
Date: Fri May 15 08:26:20 2009 +0200
perf_counter: x86: Protect against infinite loops in intel_pmu_handle_irq()
intel_pmu_handle_irq() can lock up in an infinite loop if the hardware
does not allow the acking of irqs. Alas, this happened in testing, so
make this robust and emit a warning if it happens in the future.
Also, clean up the IRQ handlers a bit.
[ Impact: improve perfcounter irq/nmi handling robustness ]
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --git a/arch/x86/kernel/cpu/perf_counter.c b/arch/x86/kernel/cpu/perf_counter.c
index 46a82d1e4cbe..5a7f718eb1e1 100644
--- a/arch/x86/kernel/cpu/perf_counter.c
+++ b/arch/x86/kernel/cpu/perf_counter.c
@@ -722,9 +722,13 @@ static void intel_pmu_save_and_restart(struct perf_counter *counter)
*/
static int intel_pmu_handle_irq(struct pt_regs *regs, int nmi)
{
- int bit, cpu = smp_processor_id();
+ struct cpu_hw_counters *cpuc;
+ struct cpu_hw_counters;
+ int bit, cpu, loops;
u64 ack, status;
- struct cpu_hw_counters *cpuc = &per_cpu(cpu_hw_counters, cpu);
+
+ cpu = smp_processor_id();
+ cpuc = &per_cpu(cpu_hw_counters, cpu);
perf_disable();
status = intel_pmu_get_status();
@@ -733,7 +737,13 @@ static int intel_pmu_handle_irq(struct pt_regs *regs, int nmi)
return 0;
}
+ loops = 0;
again:
+ if (++loops > 100) {
+ WARN_ONCE(1, "perfcounters: irq loop stuck!\n");
+ return 1;
+ }
+
inc_irq_stat(apic_perf_irqs);
ack = status;
for_each_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
@@ -765,13 +775,14 @@ static int intel_pmu_handle_irq(struct pt_regs *regs, int nmi)
static int amd_pmu_handle_irq(struct pt_regs *regs, int nmi)
{
- int cpu = smp_processor_id();
- struct cpu_hw_counters *cpuc = &per_cpu(cpu_hw_counters, cpu);
- u64 val;
- int handled = 0;
+ int cpu, idx, throttle = 0, handled = 0;
+ struct cpu_hw_counters *cpuc;
struct perf_counter *counter;
struct hw_perf_counter *hwc;
- int idx, throttle = 0;
+ u64 val;
+
+ cpu = smp_processor_id();
+ cpuc = &per_cpu(cpu_hw_counters, cpu);
if (++cpuc->interrupts == PERFMON_MAX_INTERRUPTS) {
throttle = 1;
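The guard added above is a general hardening pattern: bound a hardware
ack/re-read cycle and warn once instead of hanging in NMI context. A
minimal user-space sketch of the same pattern, where
read_overflow_status()/ack_overflow_status() are illustrative stand-ins
for the kernel's MSR accessors and 100 is the same arbitrary bail-out
threshold as in the patch:

#include <stdio.h>

/* Stand-ins for the PMU status MSR accesses (illustrative only). */
static unsigned long long read_overflow_status(void) { return 0; }
static void ack_overflow_status(unsigned long long s) { (void)s; }

static int handle_irq(void)
{
	unsigned long long status;
	int loops = 0;

	status = read_overflow_status();
	if (!status)
		return 0;

again:
	/*
	 * If the hardware never lets us ack the irq, bail out with a
	 * one-time warning rather than looping forever.
	 */
	if (++loops > 100) {
		fprintf(stderr, "perfcounters: irq loop stuck!\n");
		return 1;
	}

	ack_overflow_status(status);

	status = read_overflow_status();
	if (status)
		goto again;

	return 1;
}

int main(void)
{
	return handle_irq();
}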
commit 1c80f4b598d9b075a2a0be694e28be93a6702bcc
Author: Ingo Molnar <mingo@elte.hu>
Date: Fri May 15 08:25:22 2009 +0200
perf_counter: x86: Disallow interval of 1
On certain CPUs I have observed a stuck PMU if the interval was set to
1 and NMIs were used. The PMU had PMC0 set in MSR_CORE_PERF_GLOBAL_STATUS,
but it was not possible to ack it via MSR_CORE_PERF_GLOBAL_OVF_CTRL,
and the NMI loop got stuck infinitely.
[ Impact: fix rare hangs during high perfcounter load ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --git a/arch/x86/kernel/cpu/perf_counter.c b/arch/x86/kernel/cpu/perf_counter.c
index 1dcf67057f16..46a82d1e4cbe 100644
--- a/arch/x86/kernel/cpu/perf_counter.c
+++ b/arch/x86/kernel/cpu/perf_counter.c
@@ -473,6 +473,11 @@ x86_perf_counter_set_period(struct perf_counter *counter,
left += period;
atomic64_set(&hwc->period_left, left);
}
+ /*
+ * Quirk: certain CPUs dont like it if just 1 event is left:
+ */
+ if (unlikely(left < 2))
+ left = 2;
per_cpu(prev_left[idx], smp_processor_id()) = left;
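Reduced to its essence, the fix is a final sanitization of the
events-left value just before it is programmed into the counter. A
sketch of that step (the helper name is illustrative; the period
bookkeeping follows the hunk above):

/*
 * Events left before the counter overflows. Per the quirk above,
 * some CPUs wedge if the counter is armed with only 1 event to go.
 */
long long sanitize_events_left(long long left, long long period)
{
	if (left <= 0)		/* already past zero: start a new period */
		left += period;
	if (left < 2)		/* quirk: never arm with fewer than 2 */
		left = 2;
	return left;
}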
commit f5a5a2f6e69e88647ae12da39f0ff3a510bcf0a6
Author: Ingo Molnar <mingo@elte.hu>
Date: Wed May 13 12:54:01 2009 +0200
perf_counter: x86: Fix throttling
If counters are disabled globally when a perfcounter IRQ/NMI hits,
and if we throttle in that case, the saved '0' control value is
carried over to the next lapic IRQ, where restoring it disables
all perfcounters at that point, permanently ...
Fix it.
[ Impact: fix hung perfcounters under load ]
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --git a/arch/x86/kernel/cpu/perf_counter.c b/arch/x86/kernel/cpu/perf_counter.c
index 3a92a2b2a80f..88ae8cebf3c1 100644
--- a/arch/x86/kernel/cpu/perf_counter.c
+++ b/arch/x86/kernel/cpu/perf_counter.c
@@ -765,8 +765,13 @@ static int intel_pmu_handle_irq(struct pt_regs *regs, int nmi)
/*
* Restore - do not reenable when global enable is off or throttled:
*/
- if (++cpuc->interrupts < PERFMON_MAX_INTERRUPTS)
- intel_pmu_restore_all(cpuc->throttle_ctrl);
+ if (cpuc->throttle_ctrl) {
+ if (++cpuc->interrupts < PERFMON_MAX_INTERRUPTS) {
+ intel_pmu_restore_all(cpuc->throttle_ctrl);
+ } else {
+ pr_info("CPU#%d: perfcounters: max interrupt rate exceeded! Throttle on.\n", smp_processor_id());
+ }
+ }
return ret;
}
@@ -817,11 +822,16 @@ void perf_counter_unthrottle(void)
cpuc = &__get_cpu_var(cpu_hw_counters);
if (cpuc->interrupts >= PERFMON_MAX_INTERRUPTS) {
- if (printk_ratelimit())
- printk(KERN_WARNING "perfcounters: max interrupts exceeded!\n");
+ pr_info("CPU#%d: perfcounters: throttle off.\n", smp_processor_id());
+
+ /*
+ * Clear them before re-enabling irqs/NMIs again:
+ */
+ cpuc->interrupts = 0;
hw_perf_restore(cpuc->throttle_ctrl);
+ } else {
+ cpuc->interrupts = 0;
}
- cpuc->interrupts = 0;
}
void smp_perf_counter_interrupt(struct pt_regs *regs)
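The two hunks read most clearly as one small state machine: the irq
tail re-enables counters only when they were globally enabled to begin
with (throttle_ctrl != 0), and the timer-tick unthrottle path clears
the interrupt count before re-enabling, so neither a stale count nor a
saved '0' control value can keep counters off forever. A condensed
model in plain C, with restore_all() and the PERFMON_MAX_INTERRUPTS
value as illustrative stand-ins:

#define PERFMON_MAX_INTERRUPTS 1000	/* illustrative limit */

struct cpu_hw_state {
	unsigned long long throttle_ctrl;	/* saved global-enable bits */
	int interrupts;				/* PMIs seen since last tick */
};

static void restore_all(unsigned long long ctrl)
{
	(void)ctrl;	/* stub: would rewrite the global control MSR */
}

/* Tail of the PMI handler: re-enable only if counters were on. */
static void irq_tail(struct cpu_hw_state *c)
{
	if (c->throttle_ctrl) {
		if (++c->interrupts < PERFMON_MAX_INTERRUPTS)
			restore_all(c->throttle_ctrl);
		/* else: stay throttled until the next timer tick */
	}
}

/*
 * Timer tick: clear the count before re-enabling irqs/NMIs, so a
 * burst arriving right after the restore is counted from zero.
 */
static void unthrottle(struct cpu_hw_state *c)
{
	if (c->interrupts >= PERFMON_MAX_INTERRUPTS) {
		c->interrupts = 0;
		restore_all(c->throttle_ctrl);
	} else {
		c->interrupts = 0;
	}
}

int main(void)
{
	struct cpu_hw_state c = { .throttle_ctrl = 1, .interrupts = 0 };

	irq_tail(&c);
	unthrottle(&c);
	return 0;
}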
commit d80c19df5fcceb8c741e96f09f275c2da719efef
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue May 12 16:29:13 2009 +0200
lockdep: increase MAX_LOCKDEP_ENTRIES and MAX_LOCKDEP_CHAINS
Now that lockdep coverage has increased, it has become easier to
run out of entries:
[ 21.401387] BUG: MAX_LOCKDEP_ENTRIES too low!
[ 21.402007] turning off the locking correctness validator.
[ 21.402007] Pid: 1555, comm: S99local Not tainted 2.6.30-rc5-tip #2
[ 21.402007] Call Trace:
[ 21.402007] [<ffffffff81069789>] add_lock_to_list+0x53/0xba
[ 21.402007] [<ffffffff810eb615>] ? lookup_mnt+0x19/0x53
[ 21.402007] [<ffffffff8106be14>] check_prev_add+0x14b/0x1c7
[ 21.402007] [<ffffffff8106c304>] validate_chain+0x474/0x52a
[ 21.402007] [<ffffffff8106c6fc>] __lock_acquire+0x342/0x3c7
[ 21.402007] [<ffffffff8106c842>] lock_acquire+0xc1/0xe5
[ 21.402007] [<ffffffff810eb615>] ? lookup_mnt+0x19/0x53
[ 21.402007] [<ffffffff8153aedc>] _spin_lock+0x31/0x66
Double the size - as we've done in the past.
[ Impact: allow lockdep to cover more locks ]
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --git a/kernel/lockdep_internals.h b/kernel/lockdep_internals.h
index a2cc7e9a6e84..699a2ac3a0d7 100644
--- a/kernel/lockdep_internals.h
+++ b/kernel/lockdep_internals.h
@@ -54,9 +54,9 @@ enum {
* table (if it's not there yet), and we check it for lock order
* conflicts and deadlocks.
*/
-#define MAX_LOCKDEP_ENTRIES 8192UL
+#define MAX_LOCKDEP_ENTRIES 16384UL
-#define MAX_LOCKDEP_CHAINS_BITS 14
+#define MAX_LOCKDEP_CHAINS_BITS 15
#define MAX_LOCKDEP_CHAINS (1UL << MAX_LOCKDEP_CHAINS_BITS)
#define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS*5)
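In concrete terms: MAX_LOCKDEP_CHAINS_BITS going from 14 to 15 grows
the chain table from 2^14 = 16384 to 2^15 = 32768 chains, and
MAX_LOCKDEP_CHAIN_HLOCKS (five held locks per chain) from 81920 to
163840, matching the doubling of the dependency-entry table from
8192UL to 16384UL.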
commit 6cda3eb62ef42aa5acd649bf99c8db544e0f4051
Merge: b9c61b70075c cec6be6d1069
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue May 12 12:17:30 2009 +0200
Merge branch 'x86/apic' into irq/numa
Merge reason: both topics modify the APIC code but have been able to
do so in parallel so far. An upcoming patch generates a
conflict, so merge them to avoid it.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
commit 41fb454ebe6024f5c1e3b3cbc0abc0da762e7b51
Merge: 19c1a6f5764d 091bf7624d1c
Author: Ingo Molnar <mingo@elte.hu>
Date: Mon May 11 14:44:27 2009 +0200
Merge commit 'v2.6.30-rc5' into core/iommu
Merge reason: core/iommu was on a .30-rc1 base;
update it to .30-rc5 to refresh.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
commit 7961386fe9596e6bf03d09948a73c5df9653325b
Merge: aa47b7e0f89b 091bf7624d1c
Author: Ingo Molnar <mingo@elte.hu>
Date: Mon May 11 12:59:32 2009 +0200
Merge commit 'v2.6.30-rc5' into sched/core
Merge reason: sched/core was on .30-rc1 before; update to the latest fixes.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
commit 7a309490da98981558a07183786201f02a6341e2
Merge: 9a8709d44139 091bf7624d1c
Author: Ingo Molnar <mingo@elte.hu>
Date: Mon May 11 09:33:06 2009 +0200
Merge commit 'v2.6.30-rc5' into x86/apic
Merge reason: this branch was on a .30-rc2 base - sync it up with
all the latest fixes.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
commit 134cbf35c739bf89c51fd975a33a6b87507482c4
Merge: 2feceeff1e77 091bf7624d1c
Author: Ingo Molnar <mingo@elte.hu>
Date: Mon May 11 09:33:06 2009 +0200
Merge commit 'v2.6.30-rc5' into x86/mm
Merge reason: this branch was on a .30-rc2 base - sync it up with
all the latest fixes.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
diff --cc arch/x86/mm/init.c
index 4d67c33a2e16,ae4f7b5d7104..95f5ecf2be50
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@@ -7,11 -7,9 +7,12 @@@
#include <asm/page.h>
#include <asm/page_types.h>
#include <asm/sections.h>
+ #include <asm/setup.h>
#include <asm/system.h>
#include <asm/tlbflush.h>
+#include <asm/tlb.h>
+
+DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
unsigned long __initdata e820_table_start;
unsigned long __meminitdata e820_table_end;