Patches contributed by Eötvös Loránd University
commit d1bef4ed5faf7d9872337b33c4269e45ae1bf960
Author: Ingo Molnar <mingo@elte.hu>
Date: Thu Jun 29 02:24:36 2006 -0700
[PATCH] genirq: rename desc->handler to desc->chip
This patch-queue improves the generic IRQ layer to be truly generic, by adding
various abstractions and features to it, without impacting existing
functionality.
While the queue can be best described as "fix and improve everything in the
generic IRQ layer that we could think of", and thus it consists of many
smaller features and lots of cleanups, the one feature that stands out most is
the new 'irq chip' abstraction.
The irq-chip abstraction is about describing and coding an IRQ controller
driver by mapping its raw hardware capabilities [and quirks, if needed] in a
straightforward way, without having to think about "IRQ flow"
(level/edge/etc.) details.
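As an illustrative sketch (not part of this particular rename patch; the
chip and handler names below are hypothetical, and the setter/flow-handler
names follow the genirq API added by later patches in this queue), an
irq-chip driver only describes the raw hardware operations and lets the
core supply the flow handling:

	static struct irq_chip my_pic_chip = {
		.name   = "MY-PIC",
		.ack    = my_pic_ack,    /* raw hardware operation */
		.mask   = my_pic_mask,   /* raw hardware operation */
		.unmask = my_pic_unmask, /* raw hardware operation */
	};

	/* the core's handle_level_irq/handle_edge_irq implement the flow */
	set_irq_chip_and_handler(irq, &my_pic_chip, handle_level_irq);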
This stands in contrast with the current 'irq-type' model of genirq
architectures, which 'mixes' raw hardware capabilities with 'flow' details.
The patchset supports both types of irq controller designs at once, and
converts i386 and x86_64 to the new irq-chip design.
As a bonus side-effect of the irq-chip approach, chained interrupt controllers
(master/slave PIC constructs, etc.) are now supported by design as well.
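A hedged sketch of the chained case, using the chaining hook added later in
this queue (CASCADE_IRQ and the demux handler are hypothetical):

	/* the slave PIC feeds one input of the master PIC; the demux
	 * handler reads the slave and invokes the matching slave irq */
	set_irq_chained_handler(CASCADE_IRQ, my_cascade_demux);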
The intended end result of this patchset is simpler architecture-level code
and more consolidation between architectures.
We reused many bits of code and many concepts from Russell King's ARM IRQ
layer, the merging of which was one of the motivations for this patchset.
This patch:
rename desc->handler to desc->chip.
Originally I did not want to do this, because it's a big patch. But having
both "desc->handler", "desc->handle_irq" and "action->handler" caused a
large degree of confusion and made the code appear a lot less clean than it
truly is.
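For reference, the three similarly-named fields this refers to (layout
heavily simplified; exact types differ in the tree):

	struct irq_desc {
		struct hw_interrupt_type *handler; /* the irq controller itself - renamed to ->chip */
		void (*handle_irq)(unsigned int irq, struct irq_desc *desc,
				   struct pt_regs *regs); /* the flow handler (level/edge) */
		struct irqaction *action;          /* action->handler is the device's ISR */
		/* ... */
	};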
I also attempted a dual approach by introducing a desc->chip alias -
but that just wasn't robust enough and broke
frequently.
So let's get this over with quickly. The conversion was done automatically
via scripts and converts all the code in the kernel.
This renaming patch is the first one amongst the patches, so that the
remaining patches can stay flexible and can be merged and split up
without having some big monolithic patch act as a merge barrier.
[akpm@osdl.org: build fix]
[akpm@osdl.org: another build fix]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/arch/alpha/kernel/irq.c b/arch/alpha/kernel/irq.c
index da677f829f76..ec9d243d42c9 100644
--- a/arch/alpha/kernel/irq.c
+++ b/arch/alpha/kernel/irq.c
@@ -49,7 +49,7 @@ select_smp_affinity(unsigned int irq)
static int last_cpu;
int cpu = last_cpu + 1;
- if (!irq_desc[irq].handler->set_affinity || irq_user_affinity[irq])
+ if (!irq_desc[irq].chip->set_affinity || irq_user_affinity[irq])
return 1;
while (!cpu_possible(cpu))
@@ -57,7 +57,7 @@ select_smp_affinity(unsigned int irq)
last_cpu = cpu;
irq_affinity[irq] = cpumask_of_cpu(cpu);
- irq_desc[irq].handler->set_affinity(irq, cpumask_of_cpu(cpu));
+ irq_desc[irq].chip->set_affinity(irq, cpumask_of_cpu(cpu));
return 0;
}
#endif /* CONFIG_SMP */
@@ -93,7 +93,7 @@ show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[irq]);
#endif
- seq_printf(p, " %14s", irq_desc[irq].handler->typename);
+ seq_printf(p, " %14s", irq_desc[irq].chip->typename);
seq_printf(p, " %c%s",
(action->flags & SA_INTERRUPT)?'+':' ',
action->name);
diff --git a/arch/alpha/kernel/irq_alpha.c b/arch/alpha/kernel/irq_alpha.c
index 9d34ce26e5ef..f20f2dff9c43 100644
--- a/arch/alpha/kernel/irq_alpha.c
+++ b/arch/alpha/kernel/irq_alpha.c
@@ -233,7 +233,7 @@ void __init
init_rtc_irq(void)
{
irq_desc[RTC_IRQ].status = IRQ_DISABLED;
- irq_desc[RTC_IRQ].handler = &rtc_irq_type;
+ irq_desc[RTC_IRQ].chip = &rtc_irq_type;
setup_irq(RTC_IRQ, &timer_irqaction);
}
diff --git a/arch/alpha/kernel/irq_i8259.c b/arch/alpha/kernel/irq_i8259.c
index b188683b83fd..ac893bd48036 100644
--- a/arch/alpha/kernel/irq_i8259.c
+++ b/arch/alpha/kernel/irq_i8259.c
@@ -109,7 +109,7 @@ init_i8259a_irqs(void)
for (i = 0; i < 16; i++) {
irq_desc[i].status = IRQ_DISABLED;
- irq_desc[i].handler = &i8259a_irq_type;
+ irq_desc[i].chip = &i8259a_irq_type;
}
setup_irq(2, &cascade);
diff --git a/arch/alpha/kernel/irq_pyxis.c b/arch/alpha/kernel/irq_pyxis.c
index 146a20b9e3d5..3b581415bab0 100644
--- a/arch/alpha/kernel/irq_pyxis.c
+++ b/arch/alpha/kernel/irq_pyxis.c
@@ -120,7 +120,7 @@ init_pyxis_irqs(unsigned long ignore_mask)
if ((ignore_mask >> i) & 1)
continue;
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &pyxis_irq_type;
+ irq_desc[i].chip = &pyxis_irq_type;
}
setup_irq(16+7, &isa_cascade_irqaction);
diff --git a/arch/alpha/kernel/irq_srm.c b/arch/alpha/kernel/irq_srm.c
index 0a87e466918c..8e4d121f84cc 100644
--- a/arch/alpha/kernel/irq_srm.c
+++ b/arch/alpha/kernel/irq_srm.c
@@ -67,7 +67,7 @@ init_srm_irqs(long max, unsigned long ignore_mask)
if (i < 64 && ((ignore_mask >> i) & 1))
continue;
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &srm_irq_type;
+ irq_desc[i].chip = &srm_irq_type;
}
}
diff --git a/arch/alpha/kernel/sys_alcor.c b/arch/alpha/kernel/sys_alcor.c
index d7f0e97fe56f..1a1a2c7a3d94 100644
--- a/arch/alpha/kernel/sys_alcor.c
+++ b/arch/alpha/kernel/sys_alcor.c
@@ -144,7 +144,7 @@ alcor_init_irq(void)
if (i >= 16+20 && i <= 16+30)
continue;
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &alcor_irq_type;
+ irq_desc[i].chip = &alcor_irq_type;
}
i8259a_irq_type.ack = alcor_isa_mask_and_ack_irq;
diff --git a/arch/alpha/kernel/sys_cabriolet.c b/arch/alpha/kernel/sys_cabriolet.c
index 8e3374d34c95..8c9e443d93ad 100644
--- a/arch/alpha/kernel/sys_cabriolet.c
+++ b/arch/alpha/kernel/sys_cabriolet.c
@@ -124,7 +124,7 @@ common_init_irq(void (*srm_dev_int)(unsigned long v, struct pt_regs *r))
for (i = 16; i < 35; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &cabriolet_irq_type;
+ irq_desc[i].chip = &cabriolet_irq_type;
}
}
diff --git a/arch/alpha/kernel/sys_dp264.c b/arch/alpha/kernel/sys_dp264.c
index d5da6b1b28ee..b28c8f1c6e10 100644
--- a/arch/alpha/kernel/sys_dp264.c
+++ b/arch/alpha/kernel/sys_dp264.c
@@ -300,7 +300,7 @@ init_tsunami_irqs(struct hw_interrupt_type * ops, int imin, int imax)
long i;
for (i = imin; i <= imax; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = ops;
+ irq_desc[i].chip = ops;
}
}
diff --git a/arch/alpha/kernel/sys_eb64p.c b/arch/alpha/kernel/sys_eb64p.c
index 61a79c354f0b..aeb8e0277905 100644
--- a/arch/alpha/kernel/sys_eb64p.c
+++ b/arch/alpha/kernel/sys_eb64p.c
@@ -137,7 +137,7 @@ eb64p_init_irq(void)
for (i = 16; i < 32; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &eb64p_irq_type;
+ irq_desc[i].chip = &eb64p_irq_type;
}
common_init_isa_dma();
diff --git a/arch/alpha/kernel/sys_eiger.c b/arch/alpha/kernel/sys_eiger.c
index bd6e5f0e43c7..64a785baf53a 100644
--- a/arch/alpha/kernel/sys_eiger.c
+++ b/arch/alpha/kernel/sys_eiger.c
@@ -154,7 +154,7 @@ eiger_init_irq(void)
for (i = 16; i < 128; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &eiger_irq_type;
+ irq_desc[i].chip = &eiger_irq_type;
}
}
diff --git a/arch/alpha/kernel/sys_jensen.c b/arch/alpha/kernel/sys_jensen.c
index fcabb7c96a16..0148e095638f 100644
--- a/arch/alpha/kernel/sys_jensen.c
+++ b/arch/alpha/kernel/sys_jensen.c
@@ -206,11 +206,11 @@ jensen_init_irq(void)
{
init_i8259a_irqs();
- irq_desc[1].handler = &jensen_local_irq_type;
- irq_desc[4].handler = &jensen_local_irq_type;
- irq_desc[3].handler = &jensen_local_irq_type;
- irq_desc[7].handler = &jensen_local_irq_type;
- irq_desc[9].handler = &jensen_local_irq_type;
+ irq_desc[1].chip = &jensen_local_irq_type;
+ irq_desc[4].chip = &jensen_local_irq_type;
+ irq_desc[3].chip = &jensen_local_irq_type;
+ irq_desc[7].chip = &jensen_local_irq_type;
+ irq_desc[9].chip = &jensen_local_irq_type;
common_init_isa_dma();
}
diff --git a/arch/alpha/kernel/sys_marvel.c b/arch/alpha/kernel/sys_marvel.c
index e32fee505220..36d215954376 100644
--- a/arch/alpha/kernel/sys_marvel.c
+++ b/arch/alpha/kernel/sys_marvel.c
@@ -303,7 +303,7 @@ init_io7_irqs(struct io7 *io7,
/* Set up the lsi irqs. */
for (i = 0; i < 128; ++i) {
irq_desc[base + i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[base + i].handler = lsi_ops;
+ irq_desc[base + i].chip = lsi_ops;
}
/* Disable the implemented irqs in hardware. */
@@ -317,7 +317,7 @@ init_io7_irqs(struct io7 *io7,
/* Set up the msi irqs. */
for (i = 128; i < (128 + 512); ++i) {
irq_desc[base + i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[base + i].handler = msi_ops;
+ irq_desc[base + i].chip = msi_ops;
}
for (i = 0; i < 16; ++i)
@@ -335,7 +335,7 @@ marvel_init_irq(void)
/* Reserve the legacy irqs. */
for (i = 0; i < 16; ++i) {
irq_desc[i].status = IRQ_DISABLED;
- irq_desc[i].handler = &marvel_legacy_irq_type;
+ irq_desc[i].chip = &marvel_legacy_irq_type;
}
/* Init the io7 irqs. */
diff --git a/arch/alpha/kernel/sys_mikasa.c b/arch/alpha/kernel/sys_mikasa.c
index d78a0daa6168..b741600e3761 100644
--- a/arch/alpha/kernel/sys_mikasa.c
+++ b/arch/alpha/kernel/sys_mikasa.c
@@ -117,7 +117,7 @@ mikasa_init_irq(void)
for (i = 16; i < 32; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &mikasa_irq_type;
+ irq_desc[i].chip = &mikasa_irq_type;
}
init_i8259a_irqs();
diff --git a/arch/alpha/kernel/sys_noritake.c b/arch/alpha/kernel/sys_noritake.c
index 65061f5d7410..55db02d318d7 100644
--- a/arch/alpha/kernel/sys_noritake.c
+++ b/arch/alpha/kernel/sys_noritake.c
@@ -139,7 +139,7 @@ noritake_init_irq(void)
for (i = 16; i < 48; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &noritake_irq_type;
+ irq_desc[i].chip = &noritake_irq_type;
}
init_i8259a_irqs();
diff --git a/arch/alpha/kernel/sys_rawhide.c b/arch/alpha/kernel/sys_rawhide.c
index 05888a02a604..949607e3d6fb 100644
--- a/arch/alpha/kernel/sys_rawhide.c
+++ b/arch/alpha/kernel/sys_rawhide.c
@@ -180,7 +180,7 @@ rawhide_init_irq(void)
for (i = 16; i < 128; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &rawhide_irq_type;
+ irq_desc[i].chip = &rawhide_irq_type;
}
init_i8259a_irqs();
diff --git a/arch/alpha/kernel/sys_rx164.c b/arch/alpha/kernel/sys_rx164.c
index 58404243057b..6ae506052635 100644
--- a/arch/alpha/kernel/sys_rx164.c
+++ b/arch/alpha/kernel/sys_rx164.c
@@ -117,7 +117,7 @@ rx164_init_irq(void)
rx164_update_irq_hw(0);
for (i = 16; i < 40; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &rx164_irq_type;
+ irq_desc[i].chip = &rx164_irq_type;
}
init_i8259a_irqs();
diff --git a/arch/alpha/kernel/sys_sable.c b/arch/alpha/kernel/sys_sable.c
index a7ff84474ace..24dea40c9bfe 100644
--- a/arch/alpha/kernel/sys_sable.c
+++ b/arch/alpha/kernel/sys_sable.c
@@ -537,7 +537,7 @@ sable_lynx_init_irq(int nr_irqs)
for (i = 0; i < nr_irqs; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &sable_lynx_irq_type;
+ irq_desc[i].chip = &sable_lynx_irq_type;
}
common_init_isa_dma();
diff --git a/arch/alpha/kernel/sys_takara.c b/arch/alpha/kernel/sys_takara.c
index 7955bdfc2db0..2c75cd1fd81a 100644
--- a/arch/alpha/kernel/sys_takara.c
+++ b/arch/alpha/kernel/sys_takara.c
@@ -154,7 +154,7 @@ takara_init_irq(void)
for (i = 16; i < 128; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = &takara_irq_type;
+ irq_desc[i].chip = &takara_irq_type;
}
common_init_isa_dma();
diff --git a/arch/alpha/kernel/sys_titan.c b/arch/alpha/kernel/sys_titan.c
index 2551fb49ae09..13f3ed8ed7ac 100644
--- a/arch/alpha/kernel/sys_titan.c
+++ b/arch/alpha/kernel/sys_titan.c
@@ -189,7 +189,7 @@ init_titan_irqs(struct hw_interrupt_type * ops, int imin, int imax)
long i;
for (i = imin; i <= imax; ++i) {
irq_desc[i].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i].handler = ops;
+ irq_desc[i].chip = ops;
}
}
diff --git a/arch/alpha/kernel/sys_wildfire.c b/arch/alpha/kernel/sys_wildfire.c
index 1553f470246e..22c5798fe083 100644
--- a/arch/alpha/kernel/sys_wildfire.c
+++ b/arch/alpha/kernel/sys_wildfire.c
@@ -199,14 +199,14 @@ wildfire_init_irq_per_pca(int qbbno, int pcano)
if (i == 2)
continue;
irq_desc[i+irq_bias].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i+irq_bias].handler = &wildfire_irq_type;
+ irq_desc[i+irq_bias].chip = &wildfire_irq_type;
}
irq_desc[36+irq_bias].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[36+irq_bias].handler = &wildfire_irq_type;
+ irq_desc[36+irq_bias].chip = &wildfire_irq_type;
for (i = 40; i < 64; ++i) {
irq_desc[i+irq_bias].status = IRQ_DISABLED | IRQ_LEVEL;
- irq_desc[i+irq_bias].handler = &wildfire_irq_type;
+ irq_desc[i+irq_bias].chip = &wildfire_irq_type;
}
setup_irq(32+irq_bias, &isa_enable);
diff --git a/arch/cris/arch-v10/kernel/irq.c b/arch/cris/arch-v10/kernel/irq.c
index 4b368a122015..2d5be93b5197 100644
--- a/arch/cris/arch-v10/kernel/irq.c
+++ b/arch/cris/arch-v10/kernel/irq.c
@@ -172,7 +172,7 @@ init_IRQ(void)
/* Initialize IRQ handler descriptiors. */
for(i = 2; i < NR_IRQS; i++) {
- irq_desc[i].handler = &crisv10_irq_type;
+ irq_desc[i].chip = &crisv10_irq_type;
set_int_vector(i, interrupt[i]);
}
diff --git a/arch/cris/arch-v32/kernel/irq.c b/arch/cris/arch-v32/kernel/irq.c
index c78cc2685133..06260874f018 100644
--- a/arch/cris/arch-v32/kernel/irq.c
+++ b/arch/cris/arch-v32/kernel/irq.c
@@ -369,7 +369,7 @@ init_IRQ(void)
/* Point all IRQ's to bad handlers. */
for (i = FIRST_IRQ, j = 0; j < NR_IRQS; i++, j++) {
- irq_desc[j].handler = &crisv32_irq_type;
+ irq_desc[j].chip = &crisv32_irq_type;
set_exception_vector(i, interrupt[j]);
}
diff --git a/arch/cris/kernel/irq.c b/arch/cris/kernel/irq.c
index b504def3e346..6547bb646364 100644
--- a/arch/cris/kernel/irq.c
+++ b/arch/cris/kernel/irq.c
@@ -69,7 +69,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
diff --git a/arch/i386/kernel/i8259.c b/arch/i386/kernel/i8259.c
index c1a42feba286..3c6063671a9f 100644
--- a/arch/i386/kernel/i8259.c
+++ b/arch/i386/kernel/i8259.c
@@ -132,7 +132,7 @@ void make_8259A_irq(unsigned int irq)
{
disable_irq_nosync(irq);
io_apic_irqs &= ~(1<<irq);
- irq_desc[irq].handler = &i8259A_irq_type;
+ irq_desc[irq].chip = &i8259A_irq_type;
enable_irq(irq);
}
@@ -386,12 +386,12 @@ void __init init_ISA_irqs (void)
/*
* 16 old-style INTA-cycle interrupts:
*/
- irq_desc[i].handler = &i8259A_irq_type;
+ irq_desc[i].chip = &i8259A_irq_type;
} else {
/*
* 'high' PCI IRQs filled in on demand
*/
- irq_desc[i].handler = &no_irq_type;
+ irq_desc[i].chip = &no_irq_type;
}
}
}
diff --git a/arch/i386/kernel/io_apic.c b/arch/i386/kernel/io_apic.c
index 72ae414e4d49..4a74b696c6a3 100644
--- a/arch/i386/kernel/io_apic.c
+++ b/arch/i386/kernel/io_apic.c
@@ -1205,15 +1205,17 @@ static struct hw_interrupt_type ioapic_edge_type;
#define IOAPIC_EDGE 0
#define IOAPIC_LEVEL 1
-static inline void ioapic_register_intr(int irq, int vector, unsigned long trigger)
+static void ioapic_register_intr(int irq, int vector, unsigned long trigger)
{
- unsigned idx = use_pci_vector() && !platform_legacy_irq(irq) ? vector : irq;
+ unsigned idx;
+
+ idx = use_pci_vector() && !platform_legacy_irq(irq) ? vector : irq;
if ((trigger == IOAPIC_AUTO && IO_APIC_irq_trigger(irq)) ||
trigger == IOAPIC_LEVEL)
- irq_desc[idx].handler = &ioapic_level_type;
+ irq_desc[idx].chip = &ioapic_level_type;
else
- irq_desc[idx].handler = &ioapic_edge_type;
+ irq_desc[idx].chip = &ioapic_edge_type;
set_intr_gate(vector, interrupt[idx]);
}
@@ -1325,7 +1327,7 @@ static void __init setup_ExtINT_IRQ0_pin(unsigned int apic, unsigned int pin, in
* The timer IRQ doesn't have to know that behind the
* scene we have a 8259A-master in AEOI mode ...
*/
- irq_desc[0].handler = &ioapic_edge_type;
+ irq_desc[0].chip = &ioapic_edge_type;
/*
* Add it to the IO-APIC irq-routing table:
@@ -2135,7 +2137,7 @@ static inline void init_IO_APIC_traps(void)
make_8259A_irq(irq);
else
/* Strange. Oh, well.. */
- irq_desc[irq].handler = &no_irq_type;
+ irq_desc[irq].chip = &no_irq_type;
}
}
}
@@ -2351,7 +2353,7 @@ static inline void check_timer(void)
printk(KERN_INFO "...trying to set up timer as Virtual Wire IRQ...");
disable_8259A_irq(0);
- irq_desc[0].handler = &lapic_irq_type;
+ irq_desc[0].chip = &lapic_irq_type;
apic_write_around(APIC_LVT0, APIC_DM_FIXED | vector); /* Fixed mode */
enable_8259A_irq(0);
diff --git a/arch/i386/kernel/irq.c b/arch/i386/kernel/irq.c
index 9eec9435318e..b942a5918dab 100644
--- a/arch/i386/kernel/irq.c
+++ b/arch/i386/kernel/irq.c
@@ -249,7 +249,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -296,8 +296,8 @@ void fixup_irqs(cpumask_t map)
printk("Breaking affinity for irq %i\n", irq);
mask = map;
}
- if (irq_desc[irq].handler->set_affinity)
- irq_desc[irq].handler->set_affinity(irq, mask);
+ if (irq_desc[irq].chip->set_affinity)
+ irq_desc[irq].chip->set_affinity(irq, mask);
else if (irq_desc[irq].action && !(warned++))
printk("Cannot set affinity for irq %i\n", irq);
}
diff --git a/arch/i386/mach-visws/visws_apic.c b/arch/i386/mach-visws/visws_apic.c
index 3e64fb721291..c418521dd554 100644
--- a/arch/i386/mach-visws/visws_apic.c
+++ b/arch/i386/mach-visws/visws_apic.c
@@ -278,22 +278,22 @@ void init_VISWS_APIC_irqs(void)
irq_desc[i].depth = 1;
if (i == 0) {
- irq_desc[i].handler = &cobalt_irq_type;
+ irq_desc[i].chip = &cobalt_irq_type;
}
else if (i == CO_IRQ_IDE0) {
- irq_desc[i].handler = &cobalt_irq_type;
+ irq_desc[i].chip = &cobalt_irq_type;
}
else if (i == CO_IRQ_IDE1) {
- irq_desc[i].handler = &cobalt_irq_type;
+ irq_desc[i].chip = &cobalt_irq_type;
}
else if (i == CO_IRQ_8259) {
- irq_desc[i].handler = &piix4_master_irq_type;
+ irq_desc[i].chip = &piix4_master_irq_type;
}
else if (i < CO_IRQ_APIC0) {
- irq_desc[i].handler = &piix4_virtual_irq_type;
+ irq_desc[i].chip = &piix4_virtual_irq_type;
}
else if (IS_CO_APIC(i)) {
- irq_desc[i].handler = &cobalt_irq_type;
+ irq_desc[i].chip = &cobalt_irq_type;
}
}
diff --git a/arch/i386/mach-voyager/voyager_smp.c b/arch/i386/mach-voyager/voyager_smp.c
index 8242af9ebc6f..5b8b579a079f 100644
--- a/arch/i386/mach-voyager/voyager_smp.c
+++ b/arch/i386/mach-voyager/voyager_smp.c
@@ -1419,7 +1419,7 @@ smp_intr_init(void)
* This is for later: first 16 correspond to PC IRQs; next 16
* are Primary MC IRQs and final 16 are Secondary MC IRQs */
for(i = 0; i < 48; i++)
- irq_desc[i].handler = &vic_irq_type;
+ irq_desc[i].chip = &vic_irq_type;
}
/* send a CPI at level cpi to a set of cpus in cpuset (set 1 bit per
diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
index d58c1c5c903a..abb4cb1c831e 100644
--- a/arch/ia64/kernel/iosapic.c
+++ b/arch/ia64/kernel/iosapic.c
@@ -660,13 +660,13 @@ register_intr (unsigned int gsi, int vector, unsigned char delivery,
irq_type = &irq_type_iosapic_level;
idesc = irq_descp(vector);
- if (idesc->handler != irq_type) {
- if (idesc->handler != &no_irq_type)
+ if (idesc->chip != irq_type) {
+ if (idesc->chip != &no_irq_type)
printk(KERN_WARNING
"%s: changing vector %d from %s to %s\n",
__FUNCTION__, vector,
- idesc->handler->typename, irq_type->typename);
- idesc->handler = irq_type;
+ idesc->chip->typename, irq_type->typename);
+ idesc->chip = irq_type;
}
return 0;
}
@@ -903,7 +903,7 @@ iosapic_unregister_intr (unsigned int gsi)
BUG_ON(iosapic_intr_info[vector].count);
/* Clear the interrupt controller descriptor */
- idesc->handler = &no_irq_type;
+ idesc->chip = &no_irq_type;
/* Clear the interrupt information */
memset(&iosapic_intr_info[vector], 0,
diff --git a/arch/ia64/kernel/irq.c b/arch/ia64/kernel/irq.c
index 9c72ea3f6432..2645153dba8a 100644
--- a/arch/ia64/kernel/irq.c
+++ b/arch/ia64/kernel/irq.c
@@ -76,7 +76,7 @@ int show_interrupts(struct seq_file *p, void *v)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
}
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -144,15 +144,15 @@ static void migrate_irqs(void)
/*
* Al three are essential, currently WARN_ON.. maybe panic?
*/
- if (desc->handler && desc->handler->disable &&
- desc->handler->enable && desc->handler->set_affinity) {
- desc->handler->disable(irq);
- desc->handler->set_affinity(irq, mask);
- desc->handler->enable(irq);
+ if (desc->chip && desc->chip->disable &&
+ desc->chip->enable && desc->chip->set_affinity) {
+ desc->chip->disable(irq);
+ desc->chip->set_affinity(irq, mask);
+ desc->chip->enable(irq);
} else {
- WARN_ON((!(desc->handler) || !(desc->handler->disable) ||
- !(desc->handler->enable) ||
- !(desc->handler->set_affinity)));
+ WARN_ON((!(desc->chip) || !(desc->chip->disable) ||
+ !(desc->chip->enable) ||
+ !(desc->chip->set_affinity)));
}
}
}
diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index ef9a2b49307a..6d8fc9498ed9 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -251,7 +251,7 @@ register_percpu_irq (ia64_vector vec, struct irqaction *action)
if (irq_to_vector(irq) == vec) {
desc = irq_descp(irq);
desc->status |= IRQ_PER_CPU;
- desc->handler = &irq_type_ia64_lsapic;
+ desc->chip = &irq_type_ia64_lsapic;
if (action)
setup_irq(irq, action);
}
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 44e9547878ac..d69288055599 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -684,9 +684,9 @@ int migrate_platform_irqs(unsigned int cpu)
* polling before making changes.
*/
if (desc) {
- desc->handler->disable(ia64_cpe_irq);
- desc->handler->set_affinity(ia64_cpe_irq, mask);
- desc->handler->enable(ia64_cpe_irq);
+ desc->chip->disable(ia64_cpe_irq);
+ desc->chip->set_affinity(ia64_cpe_irq, mask);
+ desc->chip->enable(ia64_cpe_irq);
printk ("Re-targetting CPEI to cpu %d\n", new_cpei_cpu);
}
}
diff --git a/arch/ia64/sn/kernel/irq.c b/arch/ia64/sn/kernel/irq.c
index 677c6c0fd661..7bb6ad188ba3 100644
--- a/arch/ia64/sn/kernel/irq.c
+++ b/arch/ia64/sn/kernel/irq.c
@@ -225,8 +225,8 @@ void sn_irq_init(void)
ia64_last_device_vector = IA64_SN2_LAST_DEVICE_VECTOR;
for (i = 0; i < NR_IRQS; i++) {
- if (base_desc[i].handler == &no_irq_type) {
- base_desc[i].handler = &irq_type_sn;
+ if (base_desc[i].chip == &no_irq_type) {
+ base_desc[i].chip = &irq_type_sn;
}
}
}
diff --git a/arch/m32r/kernel/irq.c b/arch/m32r/kernel/irq.c
index a4634b06f675..3841861df6a2 100644
--- a/arch/m32r/kernel/irq.c
+++ b/arch/m32r/kernel/irq.c
@@ -54,7 +54,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
diff --git a/arch/m32r/kernel/setup_m32104ut.c b/arch/m32r/kernel/setup_m32104ut.c
index 6328e1357a80..f9f56c270195 100644
--- a/arch/m32r/kernel/setup_m32104ut.c
+++ b/arch/m32r/kernel/setup_m32104ut.c
@@ -87,7 +87,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_SMC91X)
/* INT#0: LAN controller on M32104UT-LAN (SMC91C111)*/
irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].handler = &m32104ut_irq_type;
+ irq_desc[M32R_IRQ_INT0].chip = &m32104ut_irq_type;
irq_desc[M32R_IRQ_INT0].action = 0;
irq_desc[M32R_IRQ_INT0].depth = 1;
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD11; /* "H" level sense */
@@ -96,7 +96,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &m32104ut_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &m32104ut_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -105,7 +105,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &m32104ut_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &m32104ut_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = M32R_ICUCR_IEN;
@@ -113,7 +113,7 @@ void __init init_IRQ(void)
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &m32104ut_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &m32104ut_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = M32R_ICUCR_IEN;
diff --git a/arch/m32r/kernel/setup_m32700ut.c b/arch/m32r/kernel/setup_m32700ut.c
index fad1fc99bb27..b6ab00eff580 100644
--- a/arch/m32r/kernel/setup_m32700ut.c
+++ b/arch/m32r/kernel/setup_m32700ut.c
@@ -301,7 +301,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_SMC91X)
/* INT#0: LAN controller on M32700UT-LAN (SMC91C111)*/
irq_desc[M32700UT_LAN_IRQ_LAN].status = IRQ_DISABLED;
- irq_desc[M32700UT_LAN_IRQ_LAN].handler = &m32700ut_lanpld_irq_type;
+ irq_desc[M32700UT_LAN_IRQ_LAN].chip = &m32700ut_lanpld_irq_type;
irq_desc[M32700UT_LAN_IRQ_LAN].action = 0;
irq_desc[M32700UT_LAN_IRQ_LAN].depth = 1; /* disable nested irq */
lanpld_icu_data[irq2lanpldirq(M32700UT_LAN_IRQ_LAN)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* "H" edge sense */
@@ -310,7 +310,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &m32700ut_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &m32700ut_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -318,7 +318,7 @@ void __init init_IRQ(void)
/* SIO0 : receive */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &m32700ut_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &m32700ut_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
@@ -326,7 +326,7 @@ void __init init_IRQ(void)
/* SIO0 : send */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &m32700ut_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &m32700ut_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
@@ -334,7 +334,7 @@ void __init init_IRQ(void)
/* SIO1 : receive */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].handler = &m32700ut_irq_type;
+ irq_desc[M32R_IRQ_SIO1_R].chip = &m32700ut_irq_type;
irq_desc[M32R_IRQ_SIO1_R].action = 0;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
@@ -342,7 +342,7 @@ void __init init_IRQ(void)
/* SIO1 : send */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].handler = &m32700ut_irq_type;
+ irq_desc[M32R_IRQ_SIO1_S].chip = &m32700ut_irq_type;
irq_desc[M32R_IRQ_SIO1_S].action = 0;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
@@ -350,7 +350,7 @@ void __init init_IRQ(void)
/* DMA1 : */
irq_desc[M32R_IRQ_DMA1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_DMA1].handler = &m32700ut_irq_type;
+ irq_desc[M32R_IRQ_DMA1].chip = &m32700ut_irq_type;
irq_desc[M32R_IRQ_DMA1].action = 0;
irq_desc[M32R_IRQ_DMA1].depth = 1;
icu_data[M32R_IRQ_DMA1].icucr = 0;
@@ -359,7 +359,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_SERIAL_M32R_PLDSIO
/* INT#1: SIO0 Receive on PLD */
irq_desc[PLD_IRQ_SIO0_RCV].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_RCV].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_SIO0_RCV].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_SIO0_RCV].action = 0;
irq_desc[PLD_IRQ_SIO0_RCV].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_RCV)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
@@ -367,7 +367,7 @@ void __init init_IRQ(void)
/* INT#1: SIO0 Send on PLD */
irq_desc[PLD_IRQ_SIO0_SND].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_SND].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_SIO0_SND].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_SIO0_SND].action = 0;
irq_desc[PLD_IRQ_SIO0_SND].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_SND)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
@@ -376,7 +376,7 @@ void __init init_IRQ(void)
/* INT#1: CFC IREQ on PLD */
irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_CFIREQ].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_CFIREQ].action = 0;
irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_CFIREQ)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* 'L' level sense */
@@ -384,7 +384,7 @@ void __init init_IRQ(void)
/* INT#1: CFC Insert on PLD */
irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_CFC_INSERT].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_INSERT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD00; /* 'L' edge sense */
@@ -392,7 +392,7 @@ void __init init_IRQ(void)
/* INT#1: CFC Eject on PLD */
irq_desc[PLD_IRQ_CFC_EJECT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_EJECT].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_CFC_EJECT].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_CFC_EJECT].action = 0;
irq_desc[PLD_IRQ_CFC_EJECT].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_EJECT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* 'H' edge sense */
@@ -416,7 +416,7 @@ void __init init_IRQ(void)
outw(USBCR_OTGS, USBCR); /* USBCR: non-OTG */
irq_desc[M32700UT_LCD_IRQ_USB_INT1].status = IRQ_DISABLED;
- irq_desc[M32700UT_LCD_IRQ_USB_INT1].handler = &m32700ut_lcdpld_irq_type;
+ irq_desc[M32700UT_LCD_IRQ_USB_INT1].chip = &m32700ut_lcdpld_irq_type;
irq_desc[M32700UT_LCD_IRQ_USB_INT1].action = 0;
irq_desc[M32700UT_LCD_IRQ_USB_INT1].depth = 1;
lcdpld_icu_data[irq2lcdpldirq(M32700UT_LCD_IRQ_USB_INT1)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* "L" level sense */
@@ -434,7 +434,7 @@ void __init init_IRQ(void)
* INT3# is used for AR
*/
irq_desc[M32R_IRQ_INT3].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT3].handler = &m32700ut_irq_type;
+ irq_desc[M32R_IRQ_INT3].chip = &m32700ut_irq_type;
irq_desc[M32R_IRQ_INT3].action = 0;
irq_desc[M32R_IRQ_INT3].depth = 1;
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
diff --git a/arch/m32r/kernel/setup_mappi.c b/arch/m32r/kernel/setup_mappi.c
index 00f253209cb3..c268044185f5 100644
--- a/arch/m32r/kernel/setup_mappi.c
+++ b/arch/m32r/kernel/setup_mappi.c
@@ -86,7 +86,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_NE2000
/* INT0 : LAN controller (RTL8019AS) */
irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_INT0].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_INT0].action = 0;
irq_desc[M32R_IRQ_INT0].depth = 1;
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
@@ -95,7 +95,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -104,7 +104,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
@@ -112,7 +112,7 @@ void __init init_IRQ(void)
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
@@ -120,7 +120,7 @@ void __init init_IRQ(void)
/* SIO1_R : uart receive data */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO1_R].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO1_R].action = 0;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
@@ -128,7 +128,7 @@ void __init init_IRQ(void)
/* SIO1_S : uart send data */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO1_S].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO1_S].action = 0;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
@@ -138,7 +138,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_M32R_PCC)
/* INT1 : pccard0 interrupt */
irq_desc[M32R_IRQ_INT1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT1].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_INT1].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_INT1].action = 0;
irq_desc[M32R_IRQ_INT1].depth = 1;
icu_data[M32R_IRQ_INT1].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD00;
@@ -146,7 +146,7 @@ void __init init_IRQ(void)
/* INT2 : pccard1 interrupt */
irq_desc[M32R_IRQ_INT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT2].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_INT2].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_INT2].action = 0;
irq_desc[M32R_IRQ_INT2].depth = 1;
icu_data[M32R_IRQ_INT2].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD00;
diff --git a/arch/m32r/kernel/setup_mappi2.c b/arch/m32r/kernel/setup_mappi2.c
index eebc9d8b4e72..bd2327d5cca2 100644
--- a/arch/m32r/kernel/setup_mappi2.c
+++ b/arch/m32r/kernel/setup_mappi2.c
@@ -87,7 +87,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_SMC91X)
/* INT0 : LAN controller (SMC91111) */
irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].handler = &mappi2_irq_type;
+ irq_desc[M32R_IRQ_INT0].chip = &mappi2_irq_type;
irq_desc[M32R_IRQ_INT0].action = 0;
irq_desc[M32R_IRQ_INT0].depth = 1;
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
@@ -96,7 +96,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &mappi2_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &mappi2_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -105,7 +105,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &mappi2_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &mappi2_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
@@ -113,14 +113,14 @@ void __init init_IRQ(void)
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &mappi2_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &mappi2_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_mappi2_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].handler = &mappi2_irq_type;
+ irq_desc[M32R_IRQ_SIO1_R].chip = &mappi2_irq_type;
irq_desc[M32R_IRQ_SIO1_R].action = 0;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
@@ -128,7 +128,7 @@ void __init init_IRQ(void)
/* SIO1_S : uart send data */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].handler = &mappi2_irq_type;
+ irq_desc[M32R_IRQ_SIO1_S].chip = &mappi2_irq_type;
irq_desc[M32R_IRQ_SIO1_S].action = 0;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
@@ -138,7 +138,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_USB)
/* INT1 : USB Host controller interrupt */
irq_desc[M32R_IRQ_INT1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT1].handler = &mappi2_irq_type;
+ irq_desc[M32R_IRQ_INT1].chip = &mappi2_irq_type;
irq_desc[M32R_IRQ_INT1].action = 0;
irq_desc[M32R_IRQ_INT1].depth = 1;
icu_data[M32R_IRQ_INT1].icucr = M32R_ICUCR_ISMOD01;
@@ -147,7 +147,7 @@ void __init init_IRQ(void)
/* ICUCR40: CFC IREQ */
irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].handler = &mappi2_irq_type;
+ irq_desc[PLD_IRQ_CFIREQ].chip = &mappi2_irq_type;
irq_desc[PLD_IRQ_CFIREQ].action = 0;
irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
icu_data[PLD_IRQ_CFIREQ].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD01;
@@ -156,7 +156,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_M32R_CFC)
/* ICUCR41: CFC Insert */
irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].handler = &mappi2_irq_type;
+ irq_desc[PLD_IRQ_CFC_INSERT].chip = &mappi2_irq_type;
irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
icu_data[PLD_IRQ_CFC_INSERT].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD00;
@@ -164,7 +164,7 @@ void __init init_IRQ(void)
/* ICUCR42: CFC Eject */
irq_desc[PLD_IRQ_CFC_EJECT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_EJECT].handler = &mappi2_irq_type;
+ irq_desc[PLD_IRQ_CFC_EJECT].chip = &mappi2_irq_type;
irq_desc[PLD_IRQ_CFC_EJECT].action = 0;
irq_desc[PLD_IRQ_CFC_EJECT].depth = 1; /* disable nested irq */
icu_data[PLD_IRQ_CFC_EJECT].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
diff --git a/arch/m32r/kernel/setup_mappi3.c b/arch/m32r/kernel/setup_mappi3.c
index d2ff021e2d3d..014b51d17505 100644
--- a/arch/m32r/kernel/setup_mappi3.c
+++ b/arch/m32r/kernel/setup_mappi3.c
@@ -87,7 +87,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_SMC91X)
/* INT0 : LAN controller (SMC91111) */
irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT0].handler = &mappi3_irq_type;
+ irq_desc[M32R_IRQ_INT0].chip = &mappi3_irq_type;
irq_desc[M32R_IRQ_INT0].action = 0;
irq_desc[M32R_IRQ_INT0].depth = 1;
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
@@ -96,7 +96,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &mappi3_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &mappi3_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -105,7 +105,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &mappi3_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &mappi3_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
@@ -113,14 +113,14 @@ void __init init_IRQ(void)
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &mappi3_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &mappi3_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_mappi3_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].handler = &mappi3_irq_type;
+ irq_desc[M32R_IRQ_SIO1_R].chip = &mappi3_irq_type;
irq_desc[M32R_IRQ_SIO1_R].action = 0;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
@@ -128,7 +128,7 @@ void __init init_IRQ(void)
/* SIO1_S : uart send data */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].handler = &mappi3_irq_type;
+ irq_desc[M32R_IRQ_SIO1_S].chip = &mappi3_irq_type;
irq_desc[M32R_IRQ_SIO1_S].action = 0;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
@@ -138,7 +138,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_USB)
/* INT1 : USB Host controller interrupt */
irq_desc[M32R_IRQ_INT1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT1].handler = &mappi3_irq_type;
+ irq_desc[M32R_IRQ_INT1].chip = &mappi3_irq_type;
irq_desc[M32R_IRQ_INT1].action = 0;
irq_desc[M32R_IRQ_INT1].depth = 1;
icu_data[M32R_IRQ_INT1].icucr = M32R_ICUCR_ISMOD01;
@@ -147,7 +147,7 @@ void __init init_IRQ(void)
/* CFC IREQ */
irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].handler = &mappi3_irq_type;
+ irq_desc[PLD_IRQ_CFIREQ].chip = &mappi3_irq_type;
irq_desc[PLD_IRQ_CFIREQ].action = 0;
irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
icu_data[PLD_IRQ_CFIREQ].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD01;
@@ -156,7 +156,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_M32R_CFC)
/* ICUCR41: CFC Insert & eject */
irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].handler = &mappi3_irq_type;
+ irq_desc[PLD_IRQ_CFC_INSERT].chip = &mappi3_irq_type;
irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
icu_data[PLD_IRQ_CFC_INSERT].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD00;
@@ -166,7 +166,7 @@ void __init init_IRQ(void)
/* IDE IREQ */
irq_desc[PLD_IRQ_IDEIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_IDEIREQ].handler = &mappi3_irq_type;
+ irq_desc[PLD_IRQ_IDEIREQ].chip = &mappi3_irq_type;
irq_desc[PLD_IRQ_IDEIREQ].action = 0;
irq_desc[PLD_IRQ_IDEIREQ].depth = 1; /* disable nested irq */
icu_data[PLD_IRQ_IDEIREQ].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
diff --git a/arch/m32r/kernel/setup_oaks32r.c b/arch/m32r/kernel/setup_oaks32r.c
index 0e9e63538c0f..ea64831aef7a 100644
--- a/arch/m32r/kernel/setup_oaks32r.c
+++ b/arch/m32r/kernel/setup_oaks32r.c
@@ -85,7 +85,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_NE2000
/* INT3 : LAN controller (RTL8019AS) */
irq_desc[M32R_IRQ_INT3].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT3].handler = &oaks32r_irq_type;
+ irq_desc[M32R_IRQ_INT3].chip = &oaks32r_irq_type;
irq_desc[M32R_IRQ_INT3].action = 0;
irq_desc[M32R_IRQ_INT3].depth = 1;
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
@@ -94,7 +94,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &oaks32r_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &oaks32r_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -103,7 +103,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &oaks32r_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &oaks32r_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
@@ -111,7 +111,7 @@ void __init init_IRQ(void)
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &oaks32r_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &oaks32r_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
@@ -119,7 +119,7 @@ void __init init_IRQ(void)
/* SIO1_R : uart receive data */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].handler = &oaks32r_irq_type;
+ irq_desc[M32R_IRQ_SIO1_R].chip = &oaks32r_irq_type;
irq_desc[M32R_IRQ_SIO1_R].action = 0;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
@@ -127,7 +127,7 @@ void __init init_IRQ(void)
/* SIO1_S : uart send data */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].handler = &oaks32r_irq_type;
+ irq_desc[M32R_IRQ_SIO1_S].chip = &oaks32r_irq_type;
irq_desc[M32R_IRQ_SIO1_S].action = 0;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
diff --git a/arch/m32r/kernel/setup_opsput.c b/arch/m32r/kernel/setup_opsput.c
index 548e8fc7949b..55e8972d455a 100644
--- a/arch/m32r/kernel/setup_opsput.c
+++ b/arch/m32r/kernel/setup_opsput.c
@@ -302,7 +302,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_SMC91X)
/* INT#0: LAN controller on OPSPUT-LAN (SMC91C111)*/
irq_desc[OPSPUT_LAN_IRQ_LAN].status = IRQ_DISABLED;
- irq_desc[OPSPUT_LAN_IRQ_LAN].handler = &opsput_lanpld_irq_type;
+ irq_desc[OPSPUT_LAN_IRQ_LAN].chip = &opsput_lanpld_irq_type;
irq_desc[OPSPUT_LAN_IRQ_LAN].action = 0;
irq_desc[OPSPUT_LAN_IRQ_LAN].depth = 1; /* disable nested irq */
lanpld_icu_data[irq2lanpldirq(OPSPUT_LAN_IRQ_LAN)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* "H" edge sense */
@@ -311,7 +311,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &opsput_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &opsput_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -319,7 +319,7 @@ void __init init_IRQ(void)
/* SIO0 : receive */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &opsput_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &opsput_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
@@ -327,7 +327,7 @@ void __init init_IRQ(void)
/* SIO0 : send */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &opsput_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &opsput_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
@@ -335,7 +335,7 @@ void __init init_IRQ(void)
/* SIO1 : receive */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].handler = &opsput_irq_type;
+ irq_desc[M32R_IRQ_SIO1_R].chip = &opsput_irq_type;
irq_desc[M32R_IRQ_SIO1_R].action = 0;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
@@ -343,7 +343,7 @@ void __init init_IRQ(void)
/* SIO1 : send */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].handler = &opsput_irq_type;
+ irq_desc[M32R_IRQ_SIO1_S].chip = &opsput_irq_type;
irq_desc[M32R_IRQ_SIO1_S].action = 0;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
@@ -351,7 +351,7 @@ void __init init_IRQ(void)
/* DMA1 : */
irq_desc[M32R_IRQ_DMA1].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_DMA1].handler = &opsput_irq_type;
+ irq_desc[M32R_IRQ_DMA1].chip = &opsput_irq_type;
irq_desc[M32R_IRQ_DMA1].action = 0;
irq_desc[M32R_IRQ_DMA1].depth = 1;
icu_data[M32R_IRQ_DMA1].icucr = 0;
@@ -360,7 +360,7 @@ void __init init_IRQ(void)
#ifdef CONFIG_SERIAL_M32R_PLDSIO
/* INT#1: SIO0 Receive on PLD */
irq_desc[PLD_IRQ_SIO0_RCV].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_RCV].handler = &opsput_pld_irq_type;
+ irq_desc[PLD_IRQ_SIO0_RCV].chip = &opsput_pld_irq_type;
irq_desc[PLD_IRQ_SIO0_RCV].action = 0;
irq_desc[PLD_IRQ_SIO0_RCV].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_RCV)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
@@ -368,7 +368,7 @@ void __init init_IRQ(void)
/* INT#1: SIO0 Send on PLD */
irq_desc[PLD_IRQ_SIO0_SND].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SIO0_SND].handler = &opsput_pld_irq_type;
+ irq_desc[PLD_IRQ_SIO0_SND].chip = &opsput_pld_irq_type;
irq_desc[PLD_IRQ_SIO0_SND].action = 0;
irq_desc[PLD_IRQ_SIO0_SND].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_SIO0_SND)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD03;
@@ -378,7 +378,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_M32R_CFC)
/* INT#1: CFC IREQ on PLD */
irq_desc[PLD_IRQ_CFIREQ].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFIREQ].handler = &opsput_pld_irq_type;
+ irq_desc[PLD_IRQ_CFIREQ].chip = &opsput_pld_irq_type;
irq_desc[PLD_IRQ_CFIREQ].action = 0;
irq_desc[PLD_IRQ_CFIREQ].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_CFIREQ)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* 'L' level sense */
@@ -386,7 +386,7 @@ void __init init_IRQ(void)
/* INT#1: CFC Insert on PLD */
irq_desc[PLD_IRQ_CFC_INSERT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_INSERT].handler = &opsput_pld_irq_type;
+ irq_desc[PLD_IRQ_CFC_INSERT].chip = &opsput_pld_irq_type;
irq_desc[PLD_IRQ_CFC_INSERT].action = 0;
irq_desc[PLD_IRQ_CFC_INSERT].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_INSERT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD00; /* 'L' edge sense */
@@ -394,7 +394,7 @@ void __init init_IRQ(void)
/* INT#1: CFC Eject on PLD */
irq_desc[PLD_IRQ_CFC_EJECT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CFC_EJECT].handler = &opsput_pld_irq_type;
+ irq_desc[PLD_IRQ_CFC_EJECT].chip = &opsput_pld_irq_type;
irq_desc[PLD_IRQ_CFC_EJECT].action = 0;
irq_desc[PLD_IRQ_CFC_EJECT].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_CFC_EJECT)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD02; /* 'H' edge sense */
@@ -420,7 +420,7 @@ void __init init_IRQ(void)
outw(USBCR_OTGS, USBCR); /* USBCR: non-OTG */
irq_desc[OPSPUT_LCD_IRQ_USB_INT1].status = IRQ_DISABLED;
- irq_desc[OPSPUT_LCD_IRQ_USB_INT1].handler = &opsput_lcdpld_irq_type;
+ irq_desc[OPSPUT_LCD_IRQ_USB_INT1].chip = &opsput_lcdpld_irq_type;
irq_desc[OPSPUT_LCD_IRQ_USB_INT1].action = 0;
irq_desc[OPSPUT_LCD_IRQ_USB_INT1].depth = 1;
lcdpld_icu_data[irq2lcdpldirq(OPSPUT_LCD_IRQ_USB_INT1)].icucr = PLD_ICUCR_IEN|PLD_ICUCR_ISMOD01; /* "L" level sense */
@@ -438,7 +438,7 @@ void __init init_IRQ(void)
* INT3# is used for AR
*/
irq_desc[M32R_IRQ_INT3].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_INT3].handler = &opsput_irq_type;
+ irq_desc[M32R_IRQ_INT3].chip = &opsput_irq_type;
irq_desc[M32R_IRQ_INT3].action = 0;
irq_desc[M32R_IRQ_INT3].depth = 1;
icu_data[M32R_IRQ_INT3].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
diff --git a/arch/m32r/kernel/setup_usrv.c b/arch/m32r/kernel/setup_usrv.c
index 64be659a23e7..7fa12d8f66b4 100644
--- a/arch/m32r/kernel/setup_usrv.c
+++ b/arch/m32r/kernel/setup_usrv.c
@@ -158,7 +158,7 @@ void __init init_IRQ(void)
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_MFT2].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_MFT2].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
@@ -167,7 +167,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_SERIAL_M32R_SIO)
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_R].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO0_R].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
@@ -175,7 +175,7 @@ void __init init_IRQ(void)
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO0_S].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO0_S].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
@@ -183,7 +183,7 @@ void __init init_IRQ(void)
/* SIO1_R : uart receive data */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_R].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO1_R].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO1_R].action = 0;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
@@ -191,7 +191,7 @@ void __init init_IRQ(void)
/* SIO1_S : uart send data */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
- irq_desc[M32R_IRQ_SIO1_S].handler = &mappi_irq_type;
+ irq_desc[M32R_IRQ_SIO1_S].chip = &mappi_irq_type;
irq_desc[M32R_IRQ_SIO1_S].action = 0;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
@@ -201,7 +201,7 @@ void __init init_IRQ(void)
/* INT#67-#71: CFC#0 IREQ on PLD */
for (i = 0 ; i < CONFIG_CFC_NUM ; i++ ) {
irq_desc[PLD_IRQ_CF0 + i].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_CF0 + i].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_CF0 + i].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_CF0 + i].action = 0;
irq_desc[PLD_IRQ_CF0 + i].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_CF0 + i)].icucr
@@ -212,7 +212,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_SERIAL_8250) || defined(CONFIG_SERIAL_8250_MODULE)
/* INT#76: 16552D#0 IREQ on PLD */
irq_desc[PLD_IRQ_UART0].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_UART0].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_UART0].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_UART0].action = 0;
irq_desc[PLD_IRQ_UART0].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_UART0)].icucr
@@ -221,7 +221,7 @@ void __init init_IRQ(void)
/* INT#77: 16552D#1 IREQ on PLD */
irq_desc[PLD_IRQ_UART1].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_UART1].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_UART1].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_UART1].action = 0;
irq_desc[PLD_IRQ_UART1].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_UART1)].icucr
@@ -232,7 +232,7 @@ void __init init_IRQ(void)
#if defined(CONFIG_IDC_AK4524) || defined(CONFIG_IDC_AK4524_MODULE)
/* INT#80: AK4524 IREQ on PLD */
irq_desc[PLD_IRQ_SNDINT].status = IRQ_DISABLED;
- irq_desc[PLD_IRQ_SNDINT].handler = &m32700ut_pld_irq_type;
+ irq_desc[PLD_IRQ_SNDINT].chip = &m32700ut_pld_irq_type;
irq_desc[PLD_IRQ_SNDINT].action = 0;
irq_desc[PLD_IRQ_SNDINT].depth = 1; /* disable nested irq */
pld_icu_data[irq2pldirq(PLD_IRQ_SNDINT)].icucr
diff --git a/arch/mips/au1000/common/irq.c b/arch/mips/au1000/common/irq.c
index afe05ec12c27..da74ac21954b 100644
--- a/arch/mips/au1000/common/irq.c
+++ b/arch/mips/au1000/common/irq.c
@@ -333,31 +333,31 @@ static void setup_local_irq(unsigned int irq_nr, int type, int int_req)
au_writel(1<<(irq_nr-32), IC1_CFG2CLR);
au_writel(1<<(irq_nr-32), IC1_CFG1CLR);
au_writel(1<<(irq_nr-32), IC1_CFG0SET);
- irq_desc[irq_nr].handler = &rise_edge_irq_type;
+ irq_desc[irq_nr].chip = &rise_edge_irq_type;
break;
case INTC_INT_FALL_EDGE: /* 0:1:0 */
au_writel(1<<(irq_nr-32), IC1_CFG2CLR);
au_writel(1<<(irq_nr-32), IC1_CFG1SET);
au_writel(1<<(irq_nr-32), IC1_CFG0CLR);
- irq_desc[irq_nr].handler = &fall_edge_irq_type;
+ irq_desc[irq_nr].chip = &fall_edge_irq_type;
break;
case INTC_INT_RISE_AND_FALL_EDGE: /* 0:1:1 */
au_writel(1<<(irq_nr-32), IC1_CFG2CLR);
au_writel(1<<(irq_nr-32), IC1_CFG1SET);
au_writel(1<<(irq_nr-32), IC1_CFG0SET);
- irq_desc[irq_nr].handler = &either_edge_irq_type;
+ irq_desc[irq_nr].chip = &either_edge_irq_type;
break;
case INTC_INT_HIGH_LEVEL: /* 1:0:1 */
au_writel(1<<(irq_nr-32), IC1_CFG2SET);
au_writel(1<<(irq_nr-32), IC1_CFG1CLR);
au_writel(1<<(irq_nr-32), IC1_CFG0SET);
- irq_desc[irq_nr].handler = &level_irq_type;
+ irq_desc[irq_nr].chip = &level_irq_type;
break;
case INTC_INT_LOW_LEVEL: /* 1:1:0 */
au_writel(1<<(irq_nr-32), IC1_CFG2SET);
au_writel(1<<(irq_nr-32), IC1_CFG1SET);
au_writel(1<<(irq_nr-32), IC1_CFG0CLR);
- irq_desc[irq_nr].handler = &level_irq_type;
+ irq_desc[irq_nr].chip = &level_irq_type;
break;
case INTC_INT_DISABLED: /* 0:0:0 */
au_writel(1<<(irq_nr-32), IC1_CFG0CLR);
@@ -385,31 +385,31 @@ static void setup_local_irq(unsigned int irq_nr, int type, int int_req)
au_writel(1<<irq_nr, IC0_CFG2CLR);
au_writel(1<<irq_nr, IC0_CFG1CLR);
au_writel(1<<irq_nr, IC0_CFG0SET);
- irq_desc[irq_nr].handler = &rise_edge_irq_type;
+ irq_desc[irq_nr].chip = &rise_edge_irq_type;
break;
case INTC_INT_FALL_EDGE: /* 0:1:0 */
au_writel(1<<irq_nr, IC0_CFG2CLR);
au_writel(1<<irq_nr, IC0_CFG1SET);
au_writel(1<<irq_nr, IC0_CFG0CLR);
- irq_desc[irq_nr].handler = &fall_edge_irq_type;
+ irq_desc[irq_nr].chip = &fall_edge_irq_type;
break;
case INTC_INT_RISE_AND_FALL_EDGE: /* 0:1:1 */
au_writel(1<<irq_nr, IC0_CFG2CLR);
au_writel(1<<irq_nr, IC0_CFG1SET);
au_writel(1<<irq_nr, IC0_CFG0SET);
- irq_desc[irq_nr].handler = &either_edge_irq_type;
+ irq_desc[irq_nr].chip = &either_edge_irq_type;
break;
case INTC_INT_HIGH_LEVEL: /* 1:0:1 */
au_writel(1<<irq_nr, IC0_CFG2SET);
au_writel(1<<irq_nr, IC0_CFG1CLR);
au_writel(1<<irq_nr, IC0_CFG0SET);
- irq_desc[irq_nr].handler = &level_irq_type;
+ irq_desc[irq_nr].chip = &level_irq_type;
break;
case INTC_INT_LOW_LEVEL: /* 1:1:0 */
au_writel(1<<irq_nr, IC0_CFG2SET);
au_writel(1<<irq_nr, IC0_CFG1SET);
au_writel(1<<irq_nr, IC0_CFG0CLR);
- irq_desc[irq_nr].handler = &level_irq_type;
+ irq_desc[irq_nr].chip = &level_irq_type;
break;
case INTC_INT_DISABLED: /* 0:0:0 */
au_writel(1<<irq_nr, IC0_CFG0CLR);
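[Editor's note: the Au1000 hunks above are typical of what this rename touches: a switch on the requested trigger type decides which hw_interrupt_type the descriptor's chip pointer receives. A reduced, self-contained sketch of that selection idiom follows; the struct layouts and names are minimal stand-ins, not the kernel's real definitions.]

/* Stand-in for the kernel's struct hw_interrupt_type. */
struct hw_interrupt_type { const char *typename; };

static struct hw_interrupt_type rise_edge_irq_type = { "rise edge" };
static struct hw_interrupt_type fall_edge_irq_type = { "fall edge" };
static struct hw_interrupt_type level_irq_type     = { "level" };

struct irq_desc { struct hw_interrupt_type *chip; };    /* was ->handler */
static struct irq_desc irq_desc[64];

enum { INTC_INT_RISE_EDGE, INTC_INT_FALL_EDGE, INTC_INT_HIGH_LEVEL };

static void setup_local_irq_sketch(unsigned int irq_nr, int type)
{
        switch (type) {
        case INTC_INT_RISE_EDGE:
                irq_desc[irq_nr].chip = &rise_edge_irq_type;
                break;
        case INTC_INT_FALL_EDGE:
                irq_desc[irq_nr].chip = &fall_edge_irq_type;
                break;
        default:        /* both level cases share one chip above */
                irq_desc[irq_nr].chip = &level_irq_type;
                break;
        }
}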
diff --git a/arch/mips/au1000/pb1200/irqmap.c b/arch/mips/au1000/pb1200/irqmap.c
index bacc0c6bfe67..5dd164fc1889 100644
--- a/arch/mips/au1000/pb1200/irqmap.c
+++ b/arch/mips/au1000/pb1200/irqmap.c
@@ -172,7 +172,7 @@ void _board_init_irq(void)
for (irq_nr = PB1200_INT_BEGIN; irq_nr <= PB1200_INT_END; irq_nr++)
{
- irq_desc[irq_nr].handler = &external_irq_type;
+ irq_desc[irq_nr].chip = &external_irq_type;
pb1200_disable_irq(irq_nr);
}
diff --git a/arch/mips/ddb5xxx/ddb5477/irq_5477.c b/arch/mips/ddb5xxx/ddb5477/irq_5477.c
index 5fcd5f070cdc..63c3d6534b3a 100644
--- a/arch/mips/ddb5xxx/ddb5477/irq_5477.c
+++ b/arch/mips/ddb5xxx/ddb5477/irq_5477.c
@@ -107,7 +107,7 @@ void __init vrc5477_irq_init(u32 irq_base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &vrc5477_irq_controller;
+ irq_desc[i].chip = &vrc5477_irq_controller;
}
vrc5477_irq_base = irq_base;
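[Editor's note: nearly every board in this patch repeats the same four-line descriptor init (status, action, depth, chip). A hypothetical helper, not something this patch introduces, makes the recurring idiom explicit under the same stand-in types.]

#include <stddef.h>

#define IRQ_DISABLED 0x0001     /* stand-in for the kernel flag */

struct hw_interrupt_type;
struct irq_action;

struct irq_desc {
        unsigned int status;
        struct irq_action *action;
        unsigned int depth;
        struct hw_interrupt_type *chip;         /* was ->handler */
};

static struct irq_desc irq_desc[256];

static void init_one_desc(unsigned int irq, struct hw_interrupt_type *chip,
                          unsigned int depth)
{
        irq_desc[irq].status = IRQ_DISABLED;
        irq_desc[irq].action = NULL;    /* some hunks above spell this 0 */
        irq_desc[irq].depth  = depth;   /* 1 = line starts masked once */
        irq_desc[irq].chip   = chip;
}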
diff --git a/arch/mips/dec/ioasic-irq.c b/arch/mips/dec/ioasic-irq.c
index d5bca5d233b6..da2dbb42f913 100644
--- a/arch/mips/dec/ioasic-irq.c
+++ b/arch/mips/dec/ioasic-irq.c
@@ -144,13 +144,13 @@ void __init init_ioasic_irqs(int base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &ioasic_irq_type;
+ irq_desc[i].chip = &ioasic_irq_type;
}
for (; i < base + IO_IRQ_LINES; i++) {
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &ioasic_dma_irq_type;
+ irq_desc[i].chip = &ioasic_dma_irq_type;
}
ioasic_irq_base = base;
diff --git a/arch/mips/dec/kn02-irq.c b/arch/mips/dec/kn02-irq.c
index 898bed502a34..d44c00d9e80f 100644
--- a/arch/mips/dec/kn02-irq.c
+++ b/arch/mips/dec/kn02-irq.c
@@ -123,7 +123,7 @@ void __init init_kn02_irqs(int base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &kn02_irq_type;
+ irq_desc[i].chip = &kn02_irq_type;
}
kn02_irq_base = base;
diff --git a/arch/mips/gt64120/ev64120/irq.c b/arch/mips/gt64120/ev64120/irq.c
index 46c468b26b30..f489a8067a93 100644
--- a/arch/mips/gt64120/ev64120/irq.c
+++ b/arch/mips/gt64120/ev64120/irq.c
@@ -138,7 +138,7 @@ void __init arch_init_irq(void)
/* Let's initialize our IRQ descriptors */
for (i = 0; i < NR_IRQS; i++) {
irq_desc[i].status = 0;
- irq_desc[i].handler = &no_irq_type;
+ irq_desc[i].chip = &no_irq_type;
irq_desc[i].action = NULL;
irq_desc[i].depth = 0;
spin_lock_init(&irq_desc[i].lock);
diff --git a/arch/mips/ite-boards/generic/irq.c b/arch/mips/ite-boards/generic/irq.c
index 77be7216bdd0..a6749c56fe38 100644
--- a/arch/mips/ite-boards/generic/irq.c
+++ b/arch/mips/ite-boards/generic/irq.c
@@ -208,10 +208,10 @@ void __init arch_init_irq(void)
#endif
for (i = 0; i <= IT8172_LAST_IRQ; i++) {
- irq_desc[i].handler = &it8172_irq_type;
+ irq_desc[i].chip = &it8172_irq_type;
spin_lock_init(&irq_desc[i].lock);
}
- irq_desc[MIPS_CPU_TIMER_IRQ].handler = &cp0_irq_type;
+ irq_desc[MIPS_CPU_TIMER_IRQ].chip = &cp0_irq_type;
set_c0_status(ALLINTS_NOTIMER);
}
diff --git a/arch/mips/jazz/irq.c b/arch/mips/jazz/irq.c
index becc9accd495..478be9858a1e 100644
--- a/arch/mips/jazz/irq.c
+++ b/arch/mips/jazz/irq.c
@@ -73,7 +73,7 @@ void __init init_r4030_ints(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &r4030_irq_type;
+ irq_desc[i].chip = &r4030_irq_type;
}
r4030_write_reg16(JAZZ_IO_IRQ_ENABLE, 0);
diff --git a/arch/mips/jmr3927/rbhma3100/irq.c b/arch/mips/jmr3927/rbhma3100/irq.c
index 11304d1354f4..380046ea1db5 100644
--- a/arch/mips/jmr3927/rbhma3100/irq.c
+++ b/arch/mips/jmr3927/rbhma3100/irq.c
@@ -435,7 +435,7 @@ void jmr3927_irq_init(u32 irq_base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &jmr3927_irq_controller;
+ irq_desc[i].chip = &jmr3927_irq_controller;
}
jmr3927_irq_base = irq_base;
diff --git a/arch/mips/kernel/i8259.c b/arch/mips/kernel/i8259.c
index 0cb8ed5662f3..91ffb1233cad 100644
--- a/arch/mips/kernel/i8259.c
+++ b/arch/mips/kernel/i8259.c
@@ -120,7 +120,7 @@ int i8259A_irq_pending(unsigned int irq)
void make_8259A_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &i8259A_irq_type;
+ irq_desc[irq].chip = &i8259A_irq_type;
enable_irq(irq);
}
@@ -327,7 +327,7 @@ void __init init_i8259_irqs (void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &i8259A_irq_type;
+ irq_desc[i].chip = &i8259A_irq_type;
}
setup_irq(2, &irq2);
diff --git a/arch/mips/kernel/irq-msc01.c b/arch/mips/kernel/irq-msc01.c
index 97ebdc754b9e..f8cd1ac64d88 100644
--- a/arch/mips/kernel/irq-msc01.c
+++ b/arch/mips/kernel/irq-msc01.c
@@ -174,14 +174,14 @@ void __init init_msc_irqs(unsigned int base, msc_irqmap_t *imp, int nirq)
switch (imp->im_type) {
case MSC01_IRQ_EDGE:
- irq_desc[base+n].handler = &msc_edgeirq_type;
+ irq_desc[base+n].chip = &msc_edgeirq_type;
if (cpu_has_veic)
MSCIC_WRITE(MSC01_IC_SUP+n*8, MSC01_IC_SUP_EDGE_BIT);
else
MSCIC_WRITE(MSC01_IC_SUP+n*8, MSC01_IC_SUP_EDGE_BIT | imp->im_lvl);
break;
case MSC01_IRQ_LEVEL:
- irq_desc[base+n].handler = &msc_levelirq_type;
+ irq_desc[base+n].chip = &msc_levelirq_type;
if (cpu_has_veic)
MSCIC_WRITE(MSC01_IC_SUP+n*8, 0);
else
diff --git a/arch/mips/kernel/irq-mv6434x.c b/arch/mips/kernel/irq-mv6434x.c
index 0613f1f36b1b..f9c763a65547 100644
--- a/arch/mips/kernel/irq-mv6434x.c
+++ b/arch/mips/kernel/irq-mv6434x.c
@@ -155,7 +155,7 @@ void __init mv64340_irq_init(unsigned int base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 2;
- irq_desc[i].handler = &mv64340_irq_type;
+ irq_desc[i].chip = &mv64340_irq_type;
}
irq_base = base;
diff --git a/arch/mips/kernel/irq-rm7000.c b/arch/mips/kernel/irq-rm7000.c
index 0b130c5ac5d9..121da385a94d 100644
--- a/arch/mips/kernel/irq-rm7000.c
+++ b/arch/mips/kernel/irq-rm7000.c
@@ -91,7 +91,7 @@ void __init rm7k_cpu_irq_init(int base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &rm7k_irq_controller;
+ irq_desc[i].chip = &rm7k_irq_controller;
}
irq_base = base;
diff --git a/arch/mips/kernel/irq-rm9000.c b/arch/mips/kernel/irq-rm9000.c
index 9b5f20c32acb..25109c103e44 100644
--- a/arch/mips/kernel/irq-rm9000.c
+++ b/arch/mips/kernel/irq-rm9000.c
@@ -139,11 +139,11 @@ void __init rm9k_cpu_irq_init(int base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &rm9k_irq_controller;
+ irq_desc[i].chip = &rm9k_irq_controller;
}
rm9000_perfcount_irq = base + 1;
- irq_desc[rm9000_perfcount_irq].handler = &rm9k_perfcounter_irq;
+ irq_desc[rm9000_perfcount_irq].chip = &rm9k_perfcounter_irq;
irq_base = base;
}
diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
index 3dce742e716f..5c9dcd5eed59 100644
--- a/arch/mips/kernel/irq.c
+++ b/arch/mips/kernel/irq.c
@@ -95,7 +95,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -137,7 +137,7 @@ void __init init_IRQ(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &no_irq_type;
+ irq_desc[i].chip = &no_irq_type;
spin_lock_init(&irq_desc[i].lock);
#ifdef CONFIG_MIPS_MT_SMTC
irq_hwmask[i] = 0;
diff --git a/arch/mips/kernel/irq_cpu.c b/arch/mips/kernel/irq_cpu.c
index 5db67e31ec1a..0e455a8ad860 100644
--- a/arch/mips/kernel/irq_cpu.c
+++ b/arch/mips/kernel/irq_cpu.c
@@ -167,14 +167,14 @@ void __init mips_cpu_irq_init(int irq_base)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &mips_mt_cpu_irq_controller;
+ irq_desc[i].chip = &mips_mt_cpu_irq_controller;
}
for (i = irq_base + 2; i < irq_base + 8; i++) {
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &mips_cpu_irq_controller;
+ irq_desc[i].chip = &mips_cpu_irq_controller;
}
mips_cpu_irq_base = irq_base;
diff --git a/arch/mips/lasat/interrupt.c b/arch/mips/lasat/interrupt.c
index 2d3472b21ebb..9316a024a818 100644
--- a/arch/mips/lasat/interrupt.c
+++ b/arch/mips/lasat/interrupt.c
@@ -156,6 +156,6 @@ void __init arch_init_irq(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &lasat_irq_type;
+ irq_desc[i].chip = &lasat_irq_type;
}
}
diff --git a/arch/mips/mips-boards/atlas/atlas_int.c b/arch/mips/mips-boards/atlas/atlas_int.c
index db53950b7cfb..9dd6b8925581 100644
--- a/arch/mips/mips-boards/atlas/atlas_int.c
+++ b/arch/mips/mips-boards/atlas/atlas_int.c
@@ -215,7 +215,7 @@ void __init arch_init_irq(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &atlas_irq_type;
+ irq_desc[i].chip = &atlas_irq_type;
spin_lock_init(&irq_desc[i].lock);
}
}
diff --git a/arch/mips/momentum/ocelot_c/cpci-irq.c b/arch/mips/momentum/ocelot_c/cpci-irq.c
index bd885785e2f9..31d179c4673f 100644
--- a/arch/mips/momentum/ocelot_c/cpci-irq.c
+++ b/arch/mips/momentum/ocelot_c/cpci-irq.c
@@ -147,6 +147,6 @@ void cpci_irq_init(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 2;
- irq_desc[i].handler = &cpci_irq_type;
+ irq_desc[i].chip = &cpci_irq_type;
}
}
diff --git a/arch/mips/momentum/ocelot_c/uart-irq.c b/arch/mips/momentum/ocelot_c/uart-irq.c
index 755bde5146be..852265026fd1 100644
--- a/arch/mips/momentum/ocelot_c/uart-irq.c
+++ b/arch/mips/momentum/ocelot_c/uart-irq.c
@@ -137,10 +137,10 @@ void uart_irq_init(void)
irq_desc[80].status = IRQ_DISABLED;
irq_desc[80].action = 0;
irq_desc[80].depth = 2;
- irq_desc[80].handler = &uart_irq_type;
+ irq_desc[80].chip = &uart_irq_type;
irq_desc[81].status = IRQ_DISABLED;
irq_desc[81].action = 0;
irq_desc[81].depth = 2;
- irq_desc[81].handler = &uart_irq_type;
+ irq_desc[81].chip = &uart_irq_type;
}
diff --git a/arch/mips/philips/pnx8550/common/int.c b/arch/mips/philips/pnx8550/common/int.c
index 39ee6314f627..8f18764a2359 100644
--- a/arch/mips/philips/pnx8550/common/int.c
+++ b/arch/mips/philips/pnx8550/common/int.c
@@ -236,7 +236,7 @@ void __init arch_init_irq(void)
int configPR;
for (i = 0; i < PNX8550_INT_CP0_TOTINT; i++) {
- irq_desc[i].handler = &level_irq_type;
+ irq_desc[i].chip = &level_irq_type;
pnx8550_ack(i); /* mask the irq just in case */
}
@@ -273,7 +273,7 @@ void __init arch_init_irq(void)
/* mask/priority is still 0 so we will not get any
* interrupts until it is unmasked */
- irq_desc[i].handler = &level_irq_type;
+ irq_desc[i].chip = &level_irq_type;
}
/* Priority level 0 */
@@ -282,12 +282,12 @@ void __init arch_init_irq(void)
/* Set int vector table address */
PNX8550_GIC_VECTOR_0 = PNX8550_GIC_VECTOR_1 = 0;
- irq_desc[MIPS_CPU_GIC_IRQ].handler = &level_irq_type;
+ irq_desc[MIPS_CPU_GIC_IRQ].chip = &level_irq_type;
setup_irq(MIPS_CPU_GIC_IRQ, &gic_action);
/* init of Timer interrupts */
for (i = PNX8550_INT_TIMER_MIN; i <= PNX8550_INT_TIMER_MAX; i++) {
- irq_desc[i].handler = &level_irq_type;
+ irq_desc[i].chip = &level_irq_type;
}
/* Stop Timer 1-3 */
@@ -295,7 +295,7 @@ void __init arch_init_irq(void)
configPR |= 0x00000038;
write_c0_config7(configPR);
- irq_desc[MIPS_CPU_TIMER_IRQ].handler = &level_irq_type;
+ irq_desc[MIPS_CPU_TIMER_IRQ].chip = &level_irq_type;
setup_irq(MIPS_CPU_TIMER_IRQ, &timer_action);
}
diff --git a/arch/mips/sgi-ip22/ip22-eisa.c b/arch/mips/sgi-ip22/ip22-eisa.c
index b19820110aa3..989167b49ce9 100644
--- a/arch/mips/sgi-ip22/ip22-eisa.c
+++ b/arch/mips/sgi-ip22/ip22-eisa.c
@@ -279,9 +279,9 @@ int __init ip22_eisa_init(void)
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
if (i < (SGINT_EISA + 8))
- irq_desc[i].handler = &ip22_eisa1_irq_type;
+ irq_desc[i].chip = &ip22_eisa1_irq_type;
else
- irq_desc[i].handler = &ip22_eisa2_irq_type;
+ irq_desc[i].chip = &ip22_eisa2_irq_type;
}
/* Cannot use request_irq because of kmalloc not being ready at such
diff --git a/arch/mips/sgi-ip22/ip22-int.c b/arch/mips/sgi-ip22/ip22-int.c
index fc6a7e2b189c..18906af69691 100644
--- a/arch/mips/sgi-ip22/ip22-int.c
+++ b/arch/mips/sgi-ip22/ip22-int.c
@@ -436,7 +436,7 @@ void __init arch_init_irq(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = handler;
+ irq_desc[i].chip = handler;
}
/* vector handler. this register the IRQ as non-sharable */
diff --git a/arch/mips/sgi-ip27/ip27-irq.c b/arch/mips/sgi-ip27/ip27-irq.c
index 0b61a39ce2bb..869566c360ae 100644
--- a/arch/mips/sgi-ip27/ip27-irq.c
+++ b/arch/mips/sgi-ip27/ip27-irq.c
@@ -386,7 +386,7 @@ void __devinit register_bridge_irq(unsigned int irq)
irq_desc[irq].status = IRQ_DISABLED;
irq_desc[irq].action = 0;
irq_desc[irq].depth = 1;
- irq_desc[irq].handler = &bridge_irq_type;
+ irq_desc[irq].chip = &bridge_irq_type;
}
int __devinit request_bridge_irq(struct bridge_controller *bc)
diff --git a/arch/mips/sgi-ip32/ip32-irq.c b/arch/mips/sgi-ip32/ip32-irq.c
index 8ba08047d164..00b94aaf6371 100644
--- a/arch/mips/sgi-ip32/ip32-irq.c
+++ b/arch/mips/sgi-ip32/ip32-irq.c
@@ -591,7 +591,7 @@ void __init arch_init_irq(void)
irq_desc[irq].status = IRQ_DISABLED;
irq_desc[irq].action = 0;
irq_desc[irq].depth = 0;
- irq_desc[irq].handler = controller;
+ irq_desc[irq].chip = controller;
}
setup_irq(CRIME_MEMERR_IRQ, &memerr_irq);
setup_irq(CRIME_CPUERR_IRQ, &cpuerr_irq);
diff --git a/arch/mips/sibyte/bcm1480/irq.c b/arch/mips/sibyte/bcm1480/irq.c
index e61760b14d99..610df40cb820 100644
--- a/arch/mips/sibyte/bcm1480/irq.c
+++ b/arch/mips/sibyte/bcm1480/irq.c
@@ -276,10 +276,10 @@ void __init init_bcm1480_irqs(void)
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
if (i < BCM1480_NR_IRQS) {
- irq_desc[i].handler = &bcm1480_irq_type;
+ irq_desc[i].chip = &bcm1480_irq_type;
bcm1480_irq_owner[i] = 0;
} else {
- irq_desc[i].handler = &no_irq_type;
+ irq_desc[i].chip = &no_irq_type;
}
}
}
diff --git a/arch/mips/sibyte/sb1250/irq.c b/arch/mips/sibyte/sb1250/irq.c
index f853c32f60a0..fcc61940f1ff 100644
--- a/arch/mips/sibyte/sb1250/irq.c
+++ b/arch/mips/sibyte/sb1250/irq.c
@@ -246,10 +246,10 @@ void __init init_sb1250_irqs(void)
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
if (i < SB1250_NR_IRQS) {
- irq_desc[i].handler = &sb1250_irq_type;
+ irq_desc[i].chip = &sb1250_irq_type;
sb1250_irq_owner[i] = 0;
} else {
- irq_desc[i].handler = &no_irq_type;
+ irq_desc[i].chip = &no_irq_type;
}
}
}
diff --git a/arch/mips/sni/irq.c b/arch/mips/sni/irq.c
index 7365b4853ddb..c19e158ec402 100644
--- a/arch/mips/sni/irq.c
+++ b/arch/mips/sni/irq.c
@@ -203,7 +203,7 @@ void __init arch_init_irq(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &pciasic_irq_type;
+ irq_desc[i].chip = &pciasic_irq_type;
}
change_c0_status(ST0_IM, IE_IRQ1|IE_IRQ2|IE_IRQ3|IE_IRQ4);
diff --git a/arch/mips/tx4927/common/tx4927_irq.c b/arch/mips/tx4927/common/tx4927_irq.c
index 8ca68015cf40..a42be00483e6 100644
--- a/arch/mips/tx4927/common/tx4927_irq.c
+++ b/arch/mips/tx4927/common/tx4927_irq.c
@@ -227,7 +227,7 @@ static void __init tx4927_irq_cp0_init(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &tx4927_irq_cp0_type;
+ irq_desc[i].chip = &tx4927_irq_cp0_type;
}
return;
@@ -435,7 +435,7 @@ static void __init tx4927_irq_pic_init(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 2;
- irq_desc[i].handler = &tx4927_irq_pic_type;
+ irq_desc[i].chip = &tx4927_irq_pic_type;
}
setup_irq(TX4927_IRQ_NEST_PIC_ON_CP0, &tx4927_irq_pic_action);
diff --git a/arch/mips/tx4927/toshiba_rbtx4927/toshiba_rbtx4927_irq.c b/arch/mips/tx4927/toshiba_rbtx4927/toshiba_rbtx4927_irq.c
index aee07ff2212a..c67978b6dae4 100644
--- a/arch/mips/tx4927/toshiba_rbtx4927/toshiba_rbtx4927_irq.c
+++ b/arch/mips/tx4927/toshiba_rbtx4927/toshiba_rbtx4927_irq.c
@@ -368,7 +368,7 @@ static void __init toshiba_rbtx4927_irq_ioc_init(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 3;
- irq_desc[i].handler = &toshiba_rbtx4927_irq_ioc_type;
+ irq_desc[i].chip = &toshiba_rbtx4927_irq_ioc_type;
}
setup_irq(TOSHIBA_RBTX4927_IRQ_NEST_IOC_ON_PIC,
@@ -526,7 +526,7 @@ static void __init toshiba_rbtx4927_irq_isa_init(void)
irq_desc[i].action = 0;
irq_desc[i].depth =
((i < TOSHIBA_RBTX4927_IRQ_ISA_MID) ? (4) : (5));
- irq_desc[i].handler = &toshiba_rbtx4927_irq_isa_type;
+ irq_desc[i].chip = &toshiba_rbtx4927_irq_isa_type;
}
setup_irq(TOSHIBA_RBTX4927_IRQ_NEST_ISA_ON_IOC,
@@ -692,13 +692,13 @@ void toshiba_rbtx4927_irq_dump(char *key)
{
u32 i, j = 0;
for (i = 0; i < NR_IRQS; i++) {
- if (strcmp(irq_desc[i].handler->typename, "none")
+ if (strcmp(irq_desc[i].chip->typename, "none")
== 0)
continue;
if ((i >= 1)
- && (irq_desc[i - 1].handler->typename ==
- irq_desc[i].handler->typename)) {
+ && (irq_desc[i - 1].chip->typename ==
+ irq_desc[i].chip->typename)) {
j++;
} else {
j = 0;
@@ -707,12 +707,12 @@ void toshiba_rbtx4927_irq_dump(char *key)
(TOSHIBA_RBTX4927_IRQ_INFO,
"%s irq=0x%02x/%3d s=0x%08x h=0x%08x a=0x%08x ah=0x%08x d=%1d n=%s/%02d\n",
key, i, i, irq_desc[i].status,
- (u32) irq_desc[i].handler,
+ (u32) irq_desc[i].chip,
(u32) irq_desc[i].action,
(u32) (irq_desc[i].action ? irq_desc[i].
action->handler : 0),
irq_desc[i].depth,
- irq_desc[i].handler->typename, j);
+ irq_desc[i].chip->typename, j);
}
}
#endif
diff --git a/arch/mips/tx4938/common/irq.c b/arch/mips/tx4938/common/irq.c
index 873805178d8e..0b2f8c849218 100644
--- a/arch/mips/tx4938/common/irq.c
+++ b/arch/mips/tx4938/common/irq.c
@@ -102,7 +102,7 @@ tx4938_irq_cp0_init(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &tx4938_irq_cp0_type;
+ irq_desc[i].chip = &tx4938_irq_cp0_type;
}
return;
@@ -306,7 +306,7 @@ tx4938_irq_pic_init(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 2;
- irq_desc[i].handler = &tx4938_irq_pic_type;
+ irq_desc[i].chip = &tx4938_irq_pic_type;
}
setup_irq(TX4938_IRQ_NEST_PIC_ON_CP0, &tx4938_irq_pic_action);
diff --git a/arch/mips/tx4938/toshiba_rbtx4938/irq.c b/arch/mips/tx4938/toshiba_rbtx4938/irq.c
index 9cd9c0fe2265..3b8245dc5bd3 100644
--- a/arch/mips/tx4938/toshiba_rbtx4938/irq.c
+++ b/arch/mips/tx4938/toshiba_rbtx4938/irq.c
@@ -146,7 +146,7 @@ toshiba_rbtx4938_irq_ioc_init(void)
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = 0;
irq_desc[i].depth = 3;
- irq_desc[i].handler = &toshiba_rbtx4938_irq_ioc_type;
+ irq_desc[i].chip = &toshiba_rbtx4938_irq_ioc_type;
}
setup_irq(RBTX4938_IRQ_IOCINT,
diff --git a/arch/mips/vr41xx/common/icu.c b/arch/mips/vr41xx/common/icu.c
index 07ae19cf0c29..b9323302cc4e 100644
--- a/arch/mips/vr41xx/common/icu.c
+++ b/arch/mips/vr41xx/common/icu.c
@@ -722,10 +722,10 @@ static int __init vr41xx_icu_init(void)
icu2_write(MGIUINTHREG, 0xffff);
for (i = SYSINT1_IRQ_BASE; i <= SYSINT1_IRQ_LAST; i++)
- irq_desc[i].handler = &sysint1_irq_type;
+ irq_desc[i].chip = &sysint1_irq_type;
for (i = SYSINT2_IRQ_BASE; i <= SYSINT2_IRQ_LAST; i++)
- irq_desc[i].handler = &sysint2_irq_type;
+ irq_desc[i].chip = &sysint2_irq_type;
cascade_irq(INT0_IRQ, icu_get_irq);
cascade_irq(INT1_IRQ, icu_get_irq);
diff --git a/arch/mips/vr41xx/common/irq.c b/arch/mips/vr41xx/common/irq.c
index 86796bb63c3c..66aa50802deb 100644
--- a/arch/mips/vr41xx/common/irq.c
+++ b/arch/mips/vr41xx/common/irq.c
@@ -73,13 +73,13 @@ static void irq_dispatch(unsigned int irq, struct pt_regs *regs)
if (cascade->get_irq != NULL) {
unsigned int source_irq = irq;
desc = irq_desc + source_irq;
- desc->handler->ack(source_irq);
+ desc->chip->ack(source_irq);
irq = cascade->get_irq(irq, regs);
if (irq < 0)
atomic_inc(&irq_err_count);
else
irq_dispatch(irq, regs);
- desc->handler->end(source_irq);
+ desc->chip->end(source_irq);
} else
do_IRQ(irq, regs);
}
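[Editor's note: the vr41xx dispatch above shows why the ops sit behind a per-descriptor pointer: a cascaded interrupt is acked and ended on the parent's chip while the decoded child irq recurses through the same dispatcher. A self-contained sketch with stand-in types:]

struct chip_ops {
        void (*ack)(unsigned int irq);
        void (*end)(unsigned int irq);
};

struct desc { struct chip_ops *chip; };

static void noop(unsigned int irq) { (void)irq; }
static struct chip_ops parent_chip = { noop, noop };
static struct desc descs[4] = {
        { &parent_chip }, { &parent_chip }, { &parent_chip }, { &parent_chip },
};

/* In the real code cascade->get_irq() decodes the child source. */
static int get_child_irq(unsigned int parent) { (void)parent; return -1; }

static void dispatch(unsigned int irq)
{
        struct desc *d = &descs[irq];
        int child;

        d->chip->ack(irq);              /* silence the parent line */
        child = get_child_irq(irq);
        if (child >= 0)
                dispatch((unsigned int)child); /* walk down the cascade */
        d->chip->end(irq);              /* re-enable / EOI the parent */
}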
diff --git a/arch/mips/vr41xx/common/vrc4173.c b/arch/mips/vr41xx/common/vrc4173.c
index 3e31f8193d21..2d287b8893d9 100644
--- a/arch/mips/vr41xx/common/vrc4173.c
+++ b/arch/mips/vr41xx/common/vrc4173.c
@@ -483,7 +483,7 @@ static inline int vrc4173_icu_init(int cascade_irq)
vr41xx_set_irq_level(GIU_IRQ_TO_PIN(cascade_irq), LEVEL_LOW);
for (i = VRC4173_IRQ_BASE; i <= VRC4173_IRQ_LAST; i++)
- irq_desc[i].handler = &vrc4173_irq_type;
+ irq_desc[i].chip = &vrc4173_irq_type;
return 0;
}
diff --git a/arch/mips/vr41xx/nec-cmbvr4133/irq.c b/arch/mips/vr41xx/nec-cmbvr4133/irq.c
index 31db6b61a39e..7b2511ca0a61 100644
--- a/arch/mips/vr41xx/nec-cmbvr4133/irq.c
+++ b/arch/mips/vr41xx/nec-cmbvr4133/irq.c
@@ -104,7 +104,7 @@ void __init rockhopper_init_irq(void)
}
for (i = I8259_IRQ_BASE; i <= I8259_IRQ_LAST; i++)
- irq_desc[i].handler = &i8259_irq_type;
+ irq_desc[i].chip = &i8259_irq_type;
setup_irq(I8259_SLAVE_IRQ, &i8259_slave_cascade);
diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
index 197936d9359a..26e69f73fdb0 100644
--- a/arch/parisc/kernel/irq.c
+++ b/arch/parisc/kernel/irq.c
@@ -158,7 +158,7 @@ int show_interrupts(struct seq_file *p, void *v)
seq_printf(p, "%10u ", kstat_irqs(i));
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
#ifndef PARISC_IRQ_CR16_COUNTS
seq_printf(p, " %s", action->name);
@@ -210,12 +210,12 @@ int cpu_claim_irq(unsigned int irq, struct hw_interrupt_type *type, void *data)
{
if (irq_desc[irq].action)
return -EBUSY;
- if (irq_desc[irq].handler != &cpu_interrupt_type)
+ if (irq_desc[irq].chip != &cpu_interrupt_type)
return -EBUSY;
if (type) {
- irq_desc[irq].handler = type;
- irq_desc[irq].handler_data = data;
+ irq_desc[irq].chip = type;
+ irq_desc[irq].chip_data = data;
cpu_interrupt_type.enable(irq);
}
return 0;
@@ -378,7 +378,7 @@ static void claim_cpu_irqs(void)
{
int i;
for (i = CPU_IRQ_BASE; i <= CPU_IRQ_MAX; i++) {
- irq_desc[i].handler = &cpu_interrupt_type;
+ irq_desc[i].chip = &cpu_interrupt_type;
}
irq_desc[TIMER_IRQ].action = &timer_action;
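[Editor's note: parisc is one of the few spots where the companion field changes too: handler_data becomes chip_data, so the per-irq driver cookie stays next to the chip that consumes it. A sketch of the claim pattern under stand-in types, with error codes simplified:]

#include <stddef.h>

struct hw_interrupt_type { void (*enable)(unsigned int irq); };
struct irq_action;

struct irq_desc {
        struct irq_action *action;
        struct hw_interrupt_type *chip; /* was ->handler */
        void *chip_data;                /* was ->handler_data */
};

static struct irq_desc irq_desc[64];
static struct hw_interrupt_type cpu_interrupt_type;

static int cpu_claim_irq_sketch(unsigned int irq,
                                struct hw_interrupt_type *type, void *data)
{
        if (irq_desc[irq].action)
                return -1;      /* -EBUSY in the real code */
        if (irq_desc[irq].chip != &cpu_interrupt_type)
                return -1;
        if (type) {
                irq_desc[irq].chip = type;
                irq_desc[irq].chip_data = data; /* driver cookie */
                if (cpu_interrupt_type.enable)
                        cpu_interrupt_type.enable(irq);
        }
        return 0;
}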
diff --git a/arch/powerpc/kernel/crash.c b/arch/powerpc/kernel/crash.c
index e253a45dcf10..1bfad7c2f8c8 100644
--- a/arch/powerpc/kernel/crash.c
+++ b/arch/powerpc/kernel/crash.c
@@ -193,10 +193,10 @@ void default_machine_crash_shutdown(struct pt_regs *regs)
struct irq_desc *desc = irq_descp(irq);
if (desc->status & IRQ_INPROGRESS)
- desc->handler->end(irq);
+ desc->chip->end(irq);
if (!(desc->status & IRQ_DISABLED))
- desc->handler->disable(irq);
+ desc->chip->disable(irq);
}
if (ppc_md.kexec_cpu_down)
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 40d4c14fde8f..8cfc779d882d 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -120,8 +120,8 @@ int show_interrupts(struct seq_file *p, void *v)
#else
seq_printf(p, "%10u ", kstat_irqs(i));
#endif /* CONFIG_SMP */
- if (desc->handler)
- seq_printf(p, " %s ", desc->handler->typename);
+ if (desc->chip)
+ seq_printf(p, " %s ", desc->chip->typename);
else
seq_puts(p, " None ");
seq_printf(p, "%s", (desc->status & IRQ_LEVEL) ? "Level " : "Edge ");
@@ -169,8 +169,8 @@ void fixup_irqs(cpumask_t map)
printk("Breaking affinity for irq %i\n", irq);
mask = map;
}
- if (irq_desc[irq].handler->set_affinity)
- irq_desc[irq].handler->set_affinity(irq, mask);
+ if (irq_desc[irq].chip->set_affinity)
+ irq_desc[irq].chip->set_affinity(irq, mask);
else if (irq_desc[irq].action && !(warned++))
printk("Cannot set affinity for irq %i\n", irq);
}
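[Editor's note: fixup_irqs also illustrates the calling convention for chip methods: every op is optional, so callers test the pointer before calling through it. Minimal sketch with stand-in types:]

typedef unsigned long cpumask_sketch_t; /* stand-in for cpumask_t */

struct chip_ops {
        void (*set_affinity)(unsigned int irq, cpumask_sketch_t mask);
};

struct desc { struct chip_ops *chip; };
static struct desc irq_desc[256];

static void fixup_one(unsigned int irq, cpumask_sketch_t mask)
{
        /* A chip need not implement affinity; never call through a
         * NULL method pointer. */
        if (irq_desc[irq].chip && irq_desc[irq].chip->set_affinity)
                irq_desc[irq].chip->set_affinity(irq, mask);
}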
diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
index 1bbf822b4efc..7bff3cbc5723 100644
--- a/arch/powerpc/platforms/cell/interrupt.c
+++ b/arch/powerpc/platforms/cell/interrupt.c
@@ -307,7 +307,7 @@ static void iic_request_ipi(int ipi, const char *name)
irq = iic_ipi_to_irq(ipi);
/* IPIs are marked SA_INTERRUPT as they must run with irqs
* disabled */
- get_irq_desc(irq)->handler = &iic_pic;
+ get_irq_desc(irq)->chip = &iic_pic;
get_irq_desc(irq)->status |= IRQ_PER_CPU;
request_irq(irq, iic_ipi_action, SA_INTERRUPT, name, NULL);
}
@@ -330,7 +330,7 @@ static void iic_setup_spe_handlers(void)
for (be=0; be < num_present_cpus() / 2; be++) {
for (isrc = 0; isrc < IIC_CLASS_STRIDE * 3; isrc++) {
int irq = IIC_NODE_STRIDE * be + IIC_SPE_OFFSET + isrc;
- get_irq_desc(irq)->handler = &iic_pic;
+ get_irq_desc(irq)->chip = &iic_pic;
}
}
}
diff --git a/arch/powerpc/platforms/cell/spider-pic.c b/arch/powerpc/platforms/cell/spider-pic.c
index 55cbdd77a62d..7c3a0b6d34fd 100644
--- a/arch/powerpc/platforms/cell/spider-pic.c
+++ b/arch/powerpc/platforms/cell/spider-pic.c
@@ -162,7 +162,7 @@ void spider_init_IRQ_hardcoded(void)
spider_pics[node] = ioremap(spiderpic, 0x800);
for (n = 0; n < IIC_NUM_EXT; n++) {
int irq = n + IIC_EXT_OFFSET + node * IIC_NODE_STRIDE;
- get_irq_desc(irq)->handler = &spider_pic;
+ get_irq_desc(irq)->chip = &spider_pic;
}
/* do not mask any interrupts because of level */
@@ -217,7 +217,7 @@ void spider_init_IRQ(void)
for (n = 0; n < IIC_NUM_EXT; n++) {
int irq = n + IIC_EXT_OFFSET + node * IIC_NODE_STRIDE;
- get_irq_desc(irq)->handler = &spider_pic;
+ get_irq_desc(irq)->chip = &spider_pic;
}
/* do not mask any interrupts because of level */
diff --git a/arch/powerpc/platforms/iseries/irq.c b/arch/powerpc/platforms/iseries/irq.c
index 62bbbcf5ded3..33bb4aa0e1e8 100644
--- a/arch/powerpc/platforms/iseries/irq.c
+++ b/arch/powerpc/platforms/iseries/irq.c
@@ -242,9 +242,9 @@ void __init iSeries_activate_IRQs()
for_each_irq (irq) {
irq_desc_t *desc = get_irq_desc(irq);
- if (desc && desc->handler && desc->handler->startup) {
+ if (desc && desc->chip && desc->chip->startup) {
spin_lock_irqsave(&desc->lock, flags);
- desc->handler->startup(irq);
+ desc->chip->startup(irq);
spin_unlock_irqrestore(&desc->lock, flags);
}
}
@@ -324,7 +324,7 @@ int __init iSeries_allocate_IRQ(HvBusNumber bus,
+ function;
virtirq = virt_irq_create_mapping(realirq);
- irq_desc[virtirq].handler = &iSeries_IRQ_handler;
+ irq_desc[virtirq].chip = &iSeries_IRQ_handler;
return virtirq;
}
diff --git a/arch/powerpc/platforms/powermac/pic.c b/arch/powerpc/platforms/powermac/pic.c
index 18bf3011d1e3..9f6189af6dd6 100644
--- a/arch/powerpc/platforms/powermac/pic.c
+++ b/arch/powerpc/platforms/powermac/pic.c
@@ -446,7 +446,7 @@ static void __init pmac_pic_probe_oldstyle(void)
/* Set the handler for the main PIC */
for ( i = 0; i < max_real_irqs ; i++ )
- irq_desc[i].handler = &pmac_pic;
+ irq_desc[i].chip = &pmac_pic;
/* Get addresses of first controller if we have a node for it */
BUG_ON(of_address_to_resource(master, 0, &r));
@@ -493,7 +493,7 @@ static void __init pmac_pic_probe_oldstyle(void)
/* Setup handlers for secondary controller and hook cascade irq*/
if (slave) {
for ( i = max_real_irqs ; i < max_irqs ; i++ )
- irq_desc[i].handler = &gatwick_pic;
+ irq_desc[i].chip = &gatwick_pic;
setup_irq(irq_cascade, &gatwick_cascade_action);
}
printk(KERN_INFO "irq: System has %d possible interrupts\n", max_irqs);
diff --git a/arch/powerpc/platforms/pseries/xics.c b/arch/powerpc/platforms/pseries/xics.c
index b14f9b5c114e..bdc9e26a93cf 100644
--- a/arch/powerpc/platforms/pseries/xics.c
+++ b/arch/powerpc/platforms/pseries/xics.c
@@ -558,7 +558,7 @@ void xics_init_IRQ(void)
}
for (i = irq_offset_value(); i < NR_IRQS; ++i)
- get_irq_desc(i)->handler = &xics_pic;
+ get_irq_desc(i)->chip = &xics_pic;
xics_setup_cpu();
@@ -701,9 +701,9 @@ void xics_migrate_irqs_away(void)
continue;
/* We only need to migrate enabled IRQS */
- if (desc == NULL || desc->handler == NULL
+ if (desc == NULL || desc->chip == NULL
|| desc->action == NULL
- || desc->handler->set_affinity == NULL)
+ || desc->chip->set_affinity == NULL)
continue;
spin_lock_irqsave(&desc->lock, flags);
@@ -728,7 +728,7 @@ void xics_migrate_irqs_away(void)
virq, cpu);
/* Reset affinity to all cpus */
- desc->handler->set_affinity(virq, CPU_MASK_ALL);
+ desc->chip->set_affinity(virq, CPU_MASK_ALL);
irq_affinity[virq] = CPU_MASK_ALL;
unlock:
spin_unlock_irqrestore(&desc->lock, flags);
diff --git a/arch/powerpc/sysdev/i8259.c b/arch/powerpc/sysdev/i8259.c
index b7ac32fdd776..2bff30f6d635 100644
--- a/arch/powerpc/sysdev/i8259.c
+++ b/arch/powerpc/sysdev/i8259.c
@@ -208,7 +208,7 @@ void __init i8259_init(unsigned long intack_addr, int offset)
spin_unlock_irqrestore(&i8259_lock, flags);
for (i = 0; i < NUM_ISA_INTERRUPTS; ++i)
- irq_desc[offset + i].handler = &i8259_pic;
+ irq_desc[offset + i].chip = &i8259_pic;
/* reserve our resources */
setup_irq(offset + 2, &i8259_irqaction);
diff --git a/arch/powerpc/sysdev/ipic.c b/arch/powerpc/sysdev/ipic.c
index 8f01e0f1d847..46801f5ec03f 100644
--- a/arch/powerpc/sysdev/ipic.c
+++ b/arch/powerpc/sysdev/ipic.c
@@ -472,7 +472,7 @@ void __init ipic_init(phys_addr_t phys_addr,
ipic_write(primary_ipic->regs, IPIC_SEMSR, temp);
for (i = 0 ; i < NR_IPIC_INTS ; i++) {
- irq_desc[i+irq_offset].handler = &ipic;
+ irq_desc[i+irq_offset].chip = &ipic;
irq_desc[i+irq_offset].status = IRQ_LEVEL;
}
diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c
index bffe50d02c99..f4613ee6b7a2 100644
--- a/arch/powerpc/sysdev/mpic.c
+++ b/arch/powerpc/sysdev/mpic.c
@@ -379,14 +379,14 @@ static inline u32 mpic_physmask(u32 cpumask)
/* Get the mpic structure from the IPI number */
static inline struct mpic * mpic_from_ipi(unsigned int ipi)
{
- return container_of(irq_desc[ipi].handler, struct mpic, hc_ipi);
+ return container_of(irq_desc[ipi].chip, struct mpic, hc_ipi);
}
#endif
/* Get the mpic structure from the irq number */
static inline struct mpic * mpic_from_irq(unsigned int irq)
{
- return container_of(irq_desc[irq].handler, struct mpic, hc_irq);
+ return container_of(irq_desc[irq].chip, struct mpic, hc_irq);
}
/* Send an EOI */
@@ -752,7 +752,7 @@ void __init mpic_init(struct mpic *mpic)
if (!(mpic->flags & MPIC_PRIMARY))
continue;
irq_desc[mpic->ipi_offset+i].status |= IRQ_PER_CPU;
- irq_desc[mpic->ipi_offset+i].handler = &mpic->hc_ipi;
+ irq_desc[mpic->ipi_offset+i].chip = &mpic->hc_ipi;
#endif /* CONFIG_SMP */
}
@@ -813,7 +813,7 @@ void __init mpic_init(struct mpic *mpic)
/* init linux descriptors */
if (i < mpic->irq_count) {
irq_desc[mpic->irq_offset+i].status = level ? IRQ_LEVEL : 0;
- irq_desc[mpic->irq_offset+i].handler = &mpic->hc_irq;
+ irq_desc[mpic->irq_offset+i].chip = &mpic->hc_irq;
}
}
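[Editor's note: mpic leans on the chip pointer for more than dispatch: the hw_interrupt_type is embedded inside struct mpic, so container_of recovers the whole controller from irq_desc[].chip. A self-contained sketch of the idiom, expanded to the offsetof arithmetic container_of performs; types here are stand-ins:]

#include <stddef.h>

struct hw_chip { const char *typename; };

struct mpic_like {
        unsigned long reg_base;
        struct hw_chip hc_irq;  /* embedded chip, as mpic's hc_irq */
};

#define container_of_sketch(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct desc { struct hw_chip *chip; };
static struct desc irq_desc[16];

static struct mpic_like *mpic_from_irq(unsigned int irq)
{
        /* Valid only if .chip really points at some mpic_like's hc_irq. */
        return container_of_sketch(irq_desc[irq].chip,
                                   struct mpic_like, hc_irq);
}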
diff --git a/arch/ppc/8xx_io/commproc.c b/arch/ppc/8xx_io/commproc.c
index 12b84ca51327..9b3ace26280c 100644
--- a/arch/ppc/8xx_io/commproc.c
+++ b/arch/ppc/8xx_io/commproc.c
@@ -187,7 +187,7 @@ cpm_interrupt_init(void)
* interrupt vectors
*/
for ( i = CPM_IRQ_OFFSET ; i < CPM_IRQ_OFFSET + NR_CPM_INTS ; i++ )
- irq_desc[i].handler = &cpm_pic;
+ irq_desc[i].chip = &cpm_pic;
/* Set our interrupt handler with the core CPU. */
if (setup_irq(CPM_INTERRUPT, &cpm_interrupt_irqaction))
diff --git a/arch/ppc/platforms/apus_setup.c b/arch/ppc/platforms/apus_setup.c
index fe0cdc04d436..5c4118a459f3 100644
--- a/arch/ppc/platforms/apus_setup.c
+++ b/arch/ppc/platforms/apus_setup.c
@@ -734,9 +734,9 @@ void apus_init_IRQ(void)
for ( i = 0 ; i < AMI_IRQS; i++ ) {
irq_desc[i].status = IRQ_LEVEL;
if (i < IRQ_AMIGA_AUTO) {
- irq_desc[i].handler = &amiga_irqctrl;
+ irq_desc[i].chip = &amiga_irqctrl;
} else {
- irq_desc[i].handler = &amiga_sys_irqctrl;
+ irq_desc[i].chip = &amiga_sys_irqctrl;
action = &amiga_sys_irqaction[i-IRQ_AMIGA_AUTO];
if (action->name)
setup_irq(i, action);
diff --git a/arch/ppc/platforms/sbc82xx.c b/arch/ppc/platforms/sbc82xx.c
index 866807b4ad0b..41006d2b4b38 100644
--- a/arch/ppc/platforms/sbc82xx.c
+++ b/arch/ppc/platforms/sbc82xx.c
@@ -172,7 +172,7 @@ void __init sbc82xx_init_IRQ(void)
/* Set up the interrupt handlers for the i8259 IRQs */
for (i = NR_SIU_INTS; i < NR_SIU_INTS + 8; i++) {
- irq_desc[i].handler = &sbc82xx_i8259_ic;
+ irq_desc[i].chip = &sbc82xx_i8259_ic;
irq_desc[i].status |= IRQ_LEVEL;
}
diff --git a/arch/ppc/syslib/cpc700_pic.c b/arch/ppc/syslib/cpc700_pic.c
index 5add0a919ef6..172aa215fdb0 100644
--- a/arch/ppc/syslib/cpc700_pic.c
+++ b/arch/ppc/syslib/cpc700_pic.c
@@ -140,12 +140,12 @@ cpc700_init_IRQ(void)
/* IRQ 0 is highest */
for (i = 0; i < 17; i++) {
- irq_desc[i].handler = &cpc700_pic;
+ irq_desc[i].chip = &cpc700_pic;
cpc700_pic_init_irq(i);
}
for (i = 20; i < 32; i++) {
- irq_desc[i].handler = &cpc700_pic;
+ irq_desc[i].chip = &cpc700_pic;
cpc700_pic_init_irq(i);
}
diff --git a/arch/ppc/syslib/cpm2_pic.c b/arch/ppc/syslib/cpm2_pic.c
index 29d95d415ceb..c0fee0beb815 100644
--- a/arch/ppc/syslib/cpm2_pic.c
+++ b/arch/ppc/syslib/cpm2_pic.c
@@ -171,7 +171,7 @@ void cpm2_init_IRQ(void)
/* Enable chaining to OpenPIC, and make everything level
*/
for (i = 0; i < NR_CPM_INTS; i++) {
- irq_desc[i+CPM_IRQ_OFFSET].handler = &cpm2_pic;
+ irq_desc[i+CPM_IRQ_OFFSET].chip = &cpm2_pic;
irq_desc[i+CPM_IRQ_OFFSET].status |= IRQ_LEVEL;
}

}
diff --git a/arch/ppc/syslib/gt64260_pic.c b/arch/ppc/syslib/gt64260_pic.c
index dc3bd9ecbbf6..91096b38ae70 100644
--- a/arch/ppc/syslib/gt64260_pic.c
+++ b/arch/ppc/syslib/gt64260_pic.c
@@ -98,7 +98,7 @@ gt64260_init_irq(void)
/* use the gt64260 for all (possible) interrupt sources */
for (i = gt64260_irq_base; i < (gt64260_irq_base + 96); i++)
- irq_desc[i].handler = &gt64260_pic;
+ irq_desc[i].chip = &gt64260_pic;
if (ppc_md.progress)
ppc_md.progress("gt64260_init_irq: exit", 0x0);
diff --git a/arch/ppc/syslib/m82xx_pci.c b/arch/ppc/syslib/m82xx_pci.c
index 1941a8c7ca9a..63fa5b313396 100644
--- a/arch/ppc/syslib/m82xx_pci.c
+++ b/arch/ppc/syslib/m82xx_pci.c
@@ -159,7 +159,7 @@ pq2pci_init_irq(void)
immap->im_memctl.memc_or8 = 0xffff8010;
#endif
for (irq = NR_CPM_INTS; irq < NR_CPM_INTS + 4; irq++)
- irq_desc[irq].handler = &pq2pci_ic;
+ irq_desc[irq].chip = &pq2pci_ic;
/* make PCI IRQ level sensitive */
immap->im_intctl.ic_siexr &=
diff --git a/arch/ppc/syslib/m8xx_setup.c b/arch/ppc/syslib/m8xx_setup.c
index dae9af78bde1..0c4c0de7c59f 100644
--- a/arch/ppc/syslib/m8xx_setup.c
+++ b/arch/ppc/syslib/m8xx_setup.c
@@ -347,13 +347,13 @@ m8xx_init_IRQ(void)
int i;
for (i = SIU_IRQ_OFFSET ; i < SIU_IRQ_OFFSET + NR_SIU_INTS ; i++)
- irq_desc[i].handler = &ppc8xx_pic;
+ irq_desc[i].chip = &ppc8xx_pic;
cpm_interrupt_init();
#if defined(CONFIG_PCI)
for (i = I8259_IRQ_OFFSET ; i < I8259_IRQ_OFFSET + NR_8259_INTS ; i++)
- irq_desc[i].handler = &i8259_pic;
+ irq_desc[i].chip = &i8259_pic;
i8259_pic_irq_offset = I8259_IRQ_OFFSET;
i8259_init(0);
diff --git a/arch/ppc/syslib/mpc52xx_pic.c b/arch/ppc/syslib/mpc52xx_pic.c
index c4406f9dc6a3..6425b5cee7db 100644
--- a/arch/ppc/syslib/mpc52xx_pic.c
+++ b/arch/ppc/syslib/mpc52xx_pic.c
@@ -204,9 +204,9 @@ mpc52xx_init_irq(void)
out_be32(&intr->main_pri1, 0);
out_be32(&intr->main_pri2, 0);
- /* Initialize irq_desc[i].handler's with mpc52xx_ic. */
+ /* Initialize irq_desc[i].chip's with mpc52xx_ic. */
for (i = 0; i < NR_IRQS; i++) {
- irq_desc[i].handler = &mpc52xx_ic;
+ irq_desc[i].chip = &mpc52xx_ic;
irq_desc[i].status = IRQ_LEVEL;
}
diff --git a/arch/ppc/syslib/mv64360_pic.c b/arch/ppc/syslib/mv64360_pic.c
index 5a19697060f0..a4244d468381 100644
--- a/arch/ppc/syslib/mv64360_pic.c
+++ b/arch/ppc/syslib/mv64360_pic.c
@@ -119,7 +119,7 @@ mv64360_init_irq(void)
/* All interrupts are level interrupts */
for (i = mv64360_irq_base; i < (mv64360_irq_base + 96); i++) {
irq_desc[i].status |= IRQ_LEVEL;
- irq_desc[i].handler = &mv64360_pic;
+ irq_desc[i].chip = &mv64360_pic;
}
if (ppc_md.progress)
diff --git a/arch/ppc/syslib/open_pic.c b/arch/ppc/syslib/open_pic.c
index 70456c8f998c..822058c2983e 100644
--- a/arch/ppc/syslib/open_pic.c
+++ b/arch/ppc/syslib/open_pic.c
@@ -373,7 +373,7 @@ void __init openpic_init(int offset)
OPENPIC_VEC_IPI+i+offset);
/* IPIs are per-CPU */
irq_desc[OPENPIC_VEC_IPI+i+offset].status |= IRQ_PER_CPU;
- irq_desc[OPENPIC_VEC_IPI+i+offset].handler = &open_pic_ipi;
+ irq_desc[OPENPIC_VEC_IPI+i+offset].chip = &open_pic_ipi;
}
#endif
@@ -408,7 +408,7 @@ void __init openpic_init(int offset)
/* Init descriptors */
for (i = offset; i < NumSources + offset; i++)
- irq_desc[i].handler = &open_pic;
+ irq_desc[i].chip = &open_pic;
/* Initialize the spurious interrupt */
if (ppc_md.progress) ppc_md.progress("openpic: spurious",0x3bd);
diff --git a/arch/ppc/syslib/open_pic2.c b/arch/ppc/syslib/open_pic2.c
index bcbe40de26fe..b8154efff6ed 100644
--- a/arch/ppc/syslib/open_pic2.c
+++ b/arch/ppc/syslib/open_pic2.c
@@ -290,7 +290,7 @@ void __init openpic2_init(int offset)
/* Init descriptors */
for (i = offset; i < NumSources + offset; i++)
- irq_desc[i].handler = &open_pic2;
+ irq_desc[i].chip = &open_pic2;
/* Initialize the spurious interrupt */
if (ppc_md.progress) ppc_md.progress("openpic2: spurious",0x3bd);
diff --git a/arch/ppc/syslib/ppc403_pic.c b/arch/ppc/syslib/ppc403_pic.c
index c46043c47225..1584c8b1229f 100644
--- a/arch/ppc/syslib/ppc403_pic.c
+++ b/arch/ppc/syslib/ppc403_pic.c
@@ -121,5 +121,5 @@ ppc4xx_pic_init(void)
ppc_md.get_irq = ppc403_pic_get_irq;
for (i = 0; i < NR_IRQS; i++)
- irq_desc[i].handler = &ppc403_aic;
+ irq_desc[i].chip = &ppc403_aic;
}
diff --git a/arch/ppc/syslib/ppc4xx_pic.c b/arch/ppc/syslib/ppc4xx_pic.c
index fd9af0fc0e9f..e669c1335d47 100644
--- a/arch/ppc/syslib/ppc4xx_pic.c
+++ b/arch/ppc/syslib/ppc4xx_pic.c
@@ -276,7 +276,7 @@ void __init ppc4xx_pic_init(void)
/* Attach low-level handlers */
for (i = 0; i < (NR_UICS << 5); ++i) {
- irq_desc[i].handler = &__uic[i >> 5].decl;
+ irq_desc[i].chip = &__uic[i >> 5].decl;
if (is_level_sensitive(i))
irq_desc[i].status |= IRQ_LEVEL;
}
diff --git a/arch/ppc/syslib/xilinx_pic.c b/arch/ppc/syslib/xilinx_pic.c
index e672b600f315..39a93dc6375b 100644
--- a/arch/ppc/syslib/xilinx_pic.c
+++ b/arch/ppc/syslib/xilinx_pic.c
@@ -143,7 +143,7 @@ ppc4xx_pic_init(void)
ppc_md.get_irq = xilinx_pic_get_irq;
for (i = 0; i < NR_IRQS; ++i) {
- irq_desc[i].handler = &xilinx_intc;
+ irq_desc[i].chip = &xilinx_intc;
if (XPAR_INTC_0_KIND_OF_INTR & (0x00000001 << i))
irq_desc[i].status &= ~IRQ_LEVEL;
diff --git a/arch/sh/boards/adx/irq_maskreg.c b/arch/sh/boards/adx/irq_maskreg.c
index c0973f8d57ba..357fab1bac2b 100644
--- a/arch/sh/boards/adx/irq_maskreg.c
+++ b/arch/sh/boards/adx/irq_maskreg.c
@@ -102,6 +102,6 @@ static void end_maskreg_irq(unsigned int irq)
void make_maskreg_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &maskreg_irq_type;
+ irq_desc[irq].chip = &maskreg_irq_type;
disable_maskreg_irq(irq);
}
diff --git a/arch/sh/boards/bigsur/irq.c b/arch/sh/boards/bigsur/irq.c
index 6ddbcc77244d..1d32425782c0 100644
--- a/arch/sh/boards/bigsur/irq.c
+++ b/arch/sh/boards/bigsur/irq.c
@@ -253,7 +253,7 @@ static void make_bigsur_l1isr(unsigned int irq) {
/* sanity check first */
if(irq >= BIGSUR_IRQ_LOW && irq < BIGSUR_IRQ_HIGH) {
/* save the handler in the main description table */
- irq_desc[irq].handler = &bigsur_l1irq_type;
+ irq_desc[irq].chip = &bigsur_l1irq_type;
irq_desc[irq].status = IRQ_DISABLED;
irq_desc[irq].action = 0;
irq_desc[irq].depth = 1;
@@ -270,7 +270,7 @@ static void make_bigsur_l2isr(unsigned int irq) {
/* sanity check first */
if(irq >= BIGSUR_2NDLVL_IRQ_LOW && irq < BIGSUR_2NDLVL_IRQ_HIGH) {
/* save the handler in the main description table */
- irq_desc[irq].handler = &bigsur_l2irq_type;
+ irq_desc[irq].chip = &bigsur_l2irq_type;
irq_desc[irq].status = IRQ_DISABLED;
irq_desc[irq].action = 0;
irq_desc[irq].depth = 1;
diff --git a/arch/sh/boards/cqreek/irq.c b/arch/sh/boards/cqreek/irq.c
index d1da0d844567..2955adc52310 100644
--- a/arch/sh/boards/cqreek/irq.c
+++ b/arch/sh/boards/cqreek/irq.c
@@ -103,7 +103,7 @@ void __init init_cqreek_IRQ(void)
cqreek_irq_data[14].stat_port = BRIDGE_IDE_INTR_STAT;
cqreek_irq_data[14].bit = 1;
- irq_desc[14].handler = &cqreek_irq_type;
+ irq_desc[14].chip = &cqreek_irq_type;
irq_desc[14].status = IRQ_DISABLED;
irq_desc[14].action = 0;
irq_desc[14].depth = 1;
@@ -117,7 +117,7 @@ void __init init_cqreek_IRQ(void)
cqreek_irq_data[10].bit = (1 << 10);
/* XXX: Err... we may need demultiplexer for ISA irq... */
- irq_desc[10].handler = &cqreek_irq_type;
+ irq_desc[10].chip = &cqreek_irq_type;
irq_desc[10].status = IRQ_DISABLED;
irq_desc[10].action = 0;
irq_desc[10].depth = 1;
diff --git a/arch/sh/boards/dreamcast/setup.c b/arch/sh/boards/dreamcast/setup.c
index 55dece35cde5..0027b80a2343 100644
--- a/arch/sh/boards/dreamcast/setup.c
+++ b/arch/sh/boards/dreamcast/setup.c
@@ -70,7 +70,7 @@ int __init platform_setup(void)
/* Assign all virtual IRQs to the System ASIC int. handler */
for (i = HW_EVENT_IRQ_BASE; i < HW_EVENT_IRQ_MAX; i++)
- irq_desc[i].handler = &systemasic_int;
+ irq_desc[i].chip = &systemasic_int;
board_time_init = aica_time_init;
diff --git a/arch/sh/boards/ec3104/setup.c b/arch/sh/boards/ec3104/setup.c
index 5130ba2b6ff1..4b3ef16a0e96 100644
--- a/arch/sh/boards/ec3104/setup.c
+++ b/arch/sh/boards/ec3104/setup.c
@@ -63,7 +63,7 @@ int __init platform_setup(void)
str[i] = ctrl_readb(EC3104_BASE + i);
for (i = EC3104_IRQBASE; i < EC3104_IRQBASE + 32; i++)
- irq_desc[i].handler = &ec3104_int;
+ irq_desc[i].chip = &ec3104_int;
printk("initializing EC3104 \"%.8s\" at %08x, IRQ %d, IRQ base %d\n",
str, EC3104_BASE, EC3104_IRQ, EC3104_IRQBASE);
diff --git a/arch/sh/boards/harp/irq.c b/arch/sh/boards/harp/irq.c
index 52d0ba39031b..701fa55d5297 100644
--- a/arch/sh/boards/harp/irq.c
+++ b/arch/sh/boards/harp/irq.c
@@ -114,7 +114,7 @@ static void enable_harp_irq(unsigned int irq)
static void __init make_harp_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &harp_irq_type;
+ irq_desc[irq].chip = &harp_irq_type;
disable_harp_irq(irq);
}
diff --git a/arch/sh/boards/mpc1211/setup.c b/arch/sh/boards/mpc1211/setup.c
index 2bb581b91683..b72f009c52c2 100644
--- a/arch/sh/boards/mpc1211/setup.c
+++ b/arch/sh/boards/mpc1211/setup.c
@@ -194,7 +194,7 @@ static struct hw_interrupt_type mpc1211_irq_type = {
static void make_mpc1211_irq(unsigned int irq)
{
- irq_desc[irq].handler = &mpc1211_irq_type;
+ irq_desc[irq].chip = &mpc1211_irq_type;
irq_desc[irq].status = IRQ_DISABLED;
irq_desc[irq].action = 0;
irq_desc[irq].depth = 1;
diff --git a/arch/sh/boards/overdrive/irq.c b/arch/sh/boards/overdrive/irq.c
index 715e8feb3a68..2c13a7de6b22 100644
--- a/arch/sh/boards/overdrive/irq.c
+++ b/arch/sh/boards/overdrive/irq.c
@@ -150,7 +150,7 @@ static void enable_od_irq(unsigned int irq)
static void __init make_od_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &od_irq_type;
+ irq_desc[irq].chip = &od_irq_type;
disable_od_irq(irq);
}
diff --git a/arch/sh/boards/renesas/hs7751rvoip/irq.c b/arch/sh/boards/renesas/hs7751rvoip/irq.c
index ed4c5b50ea45..52a98b524e1f 100644
--- a/arch/sh/boards/renesas/hs7751rvoip/irq.c
+++ b/arch/sh/boards/renesas/hs7751rvoip/irq.c
@@ -86,7 +86,7 @@ static struct hw_interrupt_type hs7751rvoip_irq_type = {
static void make_hs7751rvoip_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &hs7751rvoip_irq_type;
+ irq_desc[irq].chip = &hs7751rvoip_irq_type;
disable_hs7751rvoip_irq(irq);
}
diff --git a/arch/sh/boards/renesas/rts7751r2d/irq.c b/arch/sh/boards/renesas/rts7751r2d/irq.c
index d36c9374aed1..e16915d9cda4 100644
--- a/arch/sh/boards/renesas/rts7751r2d/irq.c
+++ b/arch/sh/boards/renesas/rts7751r2d/irq.c
@@ -100,7 +100,7 @@ static struct hw_interrupt_type rts7751r2d_irq_type = {
static void make_rts7751r2d_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &rts7751r2d_irq_type;
+ irq_desc[irq].chip = &rts7751r2d_irq_type;
disable_rts7751r2d_irq(irq);
}
diff --git a/arch/sh/boards/renesas/systemh/irq.c b/arch/sh/boards/renesas/systemh/irq.c
index 7a2eb10edb56..845979181059 100644
--- a/arch/sh/boards/renesas/systemh/irq.c
+++ b/arch/sh/boards/renesas/systemh/irq.c
@@ -105,7 +105,7 @@ static void end_systemh_irq(unsigned int irq)
void make_systemh_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &systemh_irq_type;
+ irq_desc[irq].chip = &systemh_irq_type;
disable_systemh_irq(irq);
}
diff --git a/arch/sh/boards/se/73180/irq.c b/arch/sh/boards/se/73180/irq.c
index 70f04caad9a4..402735c7c898 100644
--- a/arch/sh/boards/se/73180/irq.c
+++ b/arch/sh/boards/se/73180/irq.c
@@ -85,7 +85,7 @@ void
make_intreq_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &intreq_irq_type;
+ irq_desc[irq].chip = &intreq_irq_type;
disable_intreq_irq(irq);
}
diff --git a/arch/sh/boards/superh/microdev/irq.c b/arch/sh/boards/superh/microdev/irq.c
index efcbd86b7cd2..cb5999425d16 100644
--- a/arch/sh/boards/superh/microdev/irq.c
+++ b/arch/sh/boards/superh/microdev/irq.c
@@ -147,7 +147,7 @@ static void enable_microdev_irq(unsigned int irq)
static void __init make_microdev_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &microdev_irq_type;
+ irq_desc[irq].chip = &microdev_irq_type;
disable_microdev_irq(irq);
}
diff --git a/arch/sh/cchips/hd6446x/hd64461/setup.c b/arch/sh/cchips/hd6446x/hd64461/setup.c
index f014b9bf6922..724db04cb392 100644
--- a/arch/sh/cchips/hd6446x/hd64461/setup.c
+++ b/arch/sh/cchips/hd6446x/hd64461/setup.c
@@ -154,7 +154,7 @@ int __init setup_hd64461(void)
outw(0xffff, HD64461_NIMR);
for (i = HD64461_IRQBASE; i < HD64461_IRQBASE + 16; i++) {
- irq_desc[i].handler = &hd64461_irq_type;
+ irq_desc[i].chip = &hd64461_irq_type;
}
setup_irq(CONFIG_HD64461_IRQ, &irq0);
diff --git a/arch/sh/cchips/hd6446x/hd64465/setup.c b/arch/sh/cchips/hd6446x/hd64465/setup.c
index 68e4c4e4283d..cf9142c620b7 100644
--- a/arch/sh/cchips/hd6446x/hd64465/setup.c
+++ b/arch/sh/cchips/hd6446x/hd64465/setup.c
@@ -182,7 +182,7 @@ static int __init setup_hd64465(void)
outw(0xffff, HD64465_REG_NIMR); /* mask all interrupts */
for (i = 0; i < HD64465_IRQ_NUM ; i++) {
- irq_desc[HD64465_IRQ_BASE + i].handler = &hd64465_irq_type;
+ irq_desc[HD64465_IRQ_BASE + i].chip = &hd64465_irq_type;
}
setup_irq(CONFIG_HD64465_IRQ, &irq0);
diff --git a/arch/sh/cchips/voyagergx/irq.c b/arch/sh/cchips/voyagergx/irq.c
index 2ee330b3c38f..892214bade19 100644
--- a/arch/sh/cchips/voyagergx/irq.c
+++ b/arch/sh/cchips/voyagergx/irq.c
@@ -191,7 +191,7 @@ void __init setup_voyagergx_irq(void)
flag = 1;
}
if (flag == 1)
- irq_desc[VOYAGER_IRQ_BASE + i].handler = &voyagergx_irq_type;
+ irq_desc[VOYAGER_IRQ_BASE + i].chip = &voyagergx_irq_type;
}
setup_irq(IRQ_VOYAGER, &irq0);
diff --git a/arch/sh/kernel/cpu/irq/imask.c b/arch/sh/kernel/cpu/irq/imask.c
index baed9a550d39..a33ae3e0a5a5 100644
--- a/arch/sh/kernel/cpu/irq/imask.c
+++ b/arch/sh/kernel/cpu/irq/imask.c
@@ -105,6 +105,6 @@ static void shutdown_imask_irq(unsigned int irq)
void make_imask_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &imask_irq_type;
+ irq_desc[irq].chip = &imask_irq_type;
enable_irq(irq);
}
diff --git a/arch/sh/kernel/cpu/irq/intc2.c b/arch/sh/kernel/cpu/irq/intc2.c
index 06e8afab32e4..30064bf6e154 100644
--- a/arch/sh/kernel/cpu/irq/intc2.c
+++ b/arch/sh/kernel/cpu/irq/intc2.c
@@ -137,7 +137,7 @@ void make_intc2_irq(unsigned int irq,
local_irq_restore(flags);
- irq_desc[irq].handler = &intc2_irq_type;
+ irq_desc[irq].chip = &intc2_irq_type;
disable_intc2_irq(irq);
}
diff --git a/arch/sh/kernel/cpu/irq/ipr.c b/arch/sh/kernel/cpu/irq/ipr.c
index e55150ed0856..0373b65c77f9 100644
--- a/arch/sh/kernel/cpu/irq/ipr.c
+++ b/arch/sh/kernel/cpu/irq/ipr.c
@@ -115,7 +115,7 @@ void make_ipr_irq(unsigned int irq, unsigned int addr, int pos, int priority)
ipr_data[irq].shift = pos*4; /* POSition (0-3) x 4 means shift */
ipr_data[irq].priority = priority;
- irq_desc[irq].handler = &ipr_irq_type;
+ irq_desc[irq].chip = &ipr_irq_type;
disable_ipr_irq(irq);
}
diff --git a/arch/sh/kernel/cpu/irq/pint.c b/arch/sh/kernel/cpu/irq/pint.c
index 95d6024fe1ae..714963a25bba 100644
--- a/arch/sh/kernel/cpu/irq/pint.c
+++ b/arch/sh/kernel/cpu/irq/pint.c
@@ -85,7 +85,7 @@ static void end_pint_irq(unsigned int irq)
void make_pint_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &pint_irq_type;
+ irq_desc[irq].chip = &pint_irq_type;
disable_pint_irq(irq);
}
diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
index b56e79632f24..c2e07f7f3496 100644
--- a/arch/sh/kernel/irq.c
+++ b/arch/sh/kernel/irq.c
@@ -47,7 +47,7 @@ int show_interrupts(struct seq_file *p, void *v)
goto unlock;
seq_printf(p, "%3d: ",i);
seq_printf(p, "%10u ", kstat_irqs(i));
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
diff --git a/arch/sh64/kernel/irq.c b/arch/sh64/kernel/irq.c
index d69879c0e063..675776a5477e 100644
--- a/arch/sh64/kernel/irq.c
+++ b/arch/sh64/kernel/irq.c
@@ -65,7 +65,7 @@ int show_interrupts(struct seq_file *p, void *v)
goto unlock;
seq_printf(p, "%3d: ",i);
seq_printf(p, "%10u ", kstat_irqs(i));
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
diff --git a/arch/sh64/kernel/irq_intc.c b/arch/sh64/kernel/irq_intc.c
index fc99bf4e362c..fa730f5fe2e6 100644
--- a/arch/sh64/kernel/irq_intc.c
+++ b/arch/sh64/kernel/irq_intc.c
@@ -178,7 +178,7 @@ static void end_intc_irq(unsigned int irq)
void make_intc_irq(unsigned int irq)
{
disable_irq_nosync(irq);
- irq_desc[irq].handler = &intc_irq_type;
+ irq_desc[irq].chip = &intc_irq_type;
disable_intc_irq(irq);
}
@@ -208,7 +208,7 @@ void __init init_IRQ(void)
/* Set default: per-line enable/disable, priority driven ack/eoi */
for (i = 0; i < NR_INTC_IRQS; i++) {
if (platform_int_priority[i] != NO_PRIORITY) {
- irq_desc[i].handler = &intc_irq_type;
+ irq_desc[i].chip = &intc_irq_type;
}
}
diff --git a/arch/sh64/mach-cayman/irq.c b/arch/sh64/mach-cayman/irq.c
index f797c84bfdd1..05eb7cdc26f0 100644
--- a/arch/sh64/mach-cayman/irq.c
+++ b/arch/sh64/mach-cayman/irq.c
@@ -187,7 +187,7 @@ void init_cayman_irq(void)
}
for (i=0; i<NR_EXT_IRQS; i++) {
- irq_desc[START_EXT_IRQS + i].handler = &cayman_irq_type;
+ irq_desc[START_EXT_IRQS + i].chip = &cayman_irq_type;
}
/* Setup the SMSC interrupt */
diff --git a/arch/sparc64/kernel/irq.c b/arch/sparc64/kernel/irq.c
index cc89b06d0178..5d7c821ab7f6 100644
--- a/arch/sparc64/kernel/irq.c
+++ b/arch/sparc64/kernel/irq.c
@@ -151,7 +151,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %9s", irq_desc[i].handler->typename);
+ seq_printf(p, " %9s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -414,8 +414,8 @@ void irq_install_pre_handler(int virt_irq,
data->pre_handler_arg1 = arg1;
data->pre_handler_arg2 = arg2;
- desc->handler = (desc->handler == &sun4u_irq ?
- &sun4u_irq_ack : &sun4v_irq_ack);
+ desc->chip = (desc->chip == &sun4u_irq ?
+ &sun4u_irq_ack : &sun4v_irq_ack);
}
unsigned int build_irq(int inofixup, unsigned long iclr, unsigned long imap)
@@ -431,7 +431,7 @@ unsigned int build_irq(int inofixup, unsigned long iclr, unsigned long imap)
bucket = &ivector_table[ino];
if (!bucket->virt_irq) {
bucket->virt_irq = virt_irq_alloc(__irq(bucket));
- irq_desc[bucket->virt_irq].handler = &sun4u_irq;
+ irq_desc[bucket->virt_irq].chip = &sun4u_irq;
}
desc = irq_desc + bucket->virt_irq;
@@ -465,7 +465,7 @@ unsigned int sun4v_build_irq(u32 devhandle, unsigned int devino)
bucket = &ivector_table[sysino];
if (!bucket->virt_irq) {
bucket->virt_irq = virt_irq_alloc(__irq(bucket));
- irq_desc[bucket->virt_irq].handler = &sun4v_irq;
+ irq_desc[bucket->virt_irq].chip = &sun4v_irq;
}
desc = irq_desc + bucket->virt_irq;
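[Editor's note: sparc64's pre-handler install treats the chip pointer as a type tag: the descriptor is flipped to the ack-flavored variant of whichever chip it already carries. Sketch with stand-in types:]

struct hw_chip { const char *typename; };

static struct hw_chip sun4u_irq     = { "sun4u" };
static struct hw_chip sun4u_irq_ack = { "sun4u+ack" };
static struct hw_chip sun4v_irq_ack = { "sun4v+ack" };

struct desc { struct hw_chip *chip; };

static void install_pre_handler_sketch(struct desc *desc)
{
        /* Anything that is not the plain sun4u chip is assumed sun4v,
         * exactly as the two-way test in the hunk above does. */
        desc->chip = (desc->chip == &sun4u_irq ?
                      &sun4u_irq_ack : &sun4v_irq_ack);
}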
diff --git a/arch/um/kernel/irq.c b/arch/um/kernel/irq.c
index 2ffda012385e..fae43a3054a0 100644
--- a/arch/um/kernel/irq.c
+++ b/arch/um/kernel/irq.c
@@ -63,7 +63,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -451,13 +451,13 @@ void __init init_IRQ(void)
irq_desc[TIMER_IRQ].status = IRQ_DISABLED;
irq_desc[TIMER_IRQ].action = NULL;
irq_desc[TIMER_IRQ].depth = 1;
- irq_desc[TIMER_IRQ].handler = &SIGVTALRM_irq_type;
+ irq_desc[TIMER_IRQ].chip = &SIGVTALRM_irq_type;
enable_irq(TIMER_IRQ);
for (i = 1; i < NR_IRQS; i++) {
irq_desc[i].status = IRQ_DISABLED;
irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
- irq_desc[i].handler = &normal_irq_type;
+ irq_desc[i].chip = &normal_irq_type;
enable_irq(i);
}
}
diff --git a/arch/v850/kernel/irq.c b/arch/v850/kernel/irq.c
index 7a151c26f82e..858c45819aab 100644
--- a/arch/v850/kernel/irq.c
+++ b/arch/v850/kernel/irq.c
@@ -65,10 +65,10 @@ int show_interrupts(struct seq_file *p, void *v)
int j;
int count = 0;
int num = -1;
- const char *type_name = irq_desc[irq].handler->typename;
+ const char *type_name = irq_desc[irq].chip->typename;
for (j = 0; j < NR_IRQS; j++)
- if (irq_desc[j].handler->typename == type_name){
+ if (irq_desc[j].chip->typename == type_name){
if (irq == j)
num = count;
count++;
@@ -117,7 +117,7 @@ init_irq_handlers (int base_irq, int num, int interval,
irq_desc[base_irq].status = IRQ_DISABLED;
irq_desc[base_irq].action = NULL;
irq_desc[base_irq].depth = 1;
- irq_desc[base_irq].handler = irq_type;
+ irq_desc[base_irq].chip = irq_type;
base_irq += interval;
}
}
diff --git a/arch/x86_64/kernel/i8259.c b/arch/x86_64/kernel/i8259.c
index 86b2c1e197aa..3dd1659427dc 100644
--- a/arch/x86_64/kernel/i8259.c
+++ b/arch/x86_64/kernel/i8259.c
@@ -235,7 +235,7 @@ void make_8259A_irq(unsigned int irq)
{
disable_irq_nosync(irq);
io_apic_irqs &= ~(1<<irq);
- irq_desc[irq].handler = &i8259A_irq_type;
+ irq_desc[irq].chip = &i8259A_irq_type;
enable_irq(irq);
}
@@ -468,12 +468,12 @@ void __init init_ISA_irqs (void)
/*
* 16 old-style INTA-cycle interrupts:
*/
- irq_desc[i].handler = &i8259A_irq_type;
+ irq_desc[i].chip = &i8259A_irq_type;
} else {
/*
* 'high' PCI IRQs filled in on demand
*/
- irq_desc[i].handler = &no_irq_type;
+ irq_desc[i].chip = &no_irq_type;
}
}
}
diff --git a/arch/x86_64/kernel/io_apic.c b/arch/x86_64/kernel/io_apic.c
index c768d8a036d0..88a8736eb8ce 100644
--- a/arch/x86_64/kernel/io_apic.c
+++ b/arch/x86_64/kernel/io_apic.c
@@ -876,15 +876,17 @@ static struct hw_interrupt_type ioapic_edge_type;
#define IOAPIC_EDGE 0
#define IOAPIC_LEVEL 1
-static inline void ioapic_register_intr(int irq, int vector, unsigned long trigger)
+static void ioapic_register_intr(int irq, int vector, unsigned long trigger)
{
- unsigned idx = use_pci_vector() && !platform_legacy_irq(irq) ? vector : irq;
+ unsigned idx;
+
+ idx = use_pci_vector() && !platform_legacy_irq(irq) ? vector : irq;
if ((trigger == IOAPIC_AUTO && IO_APIC_irq_trigger(irq)) ||
trigger == IOAPIC_LEVEL)
- irq_desc[idx].handler = &ioapic_level_type;
+ irq_desc[idx].chip = &ioapic_level_type;
else
- irq_desc[idx].handler = &ioapic_edge_type;
+ irq_desc[idx].chip = &ioapic_edge_type;
set_intr_gate(vector, interrupt[idx]);
}
@@ -986,7 +988,7 @@ static void __init setup_ExtINT_IRQ0_pin(unsigned int apic, unsigned int pin, in
* The timer IRQ doesn't have to know that behind the
* scene we have a 8259A-master in AEOI mode ...
*/
- irq_desc[0].handler = &ioapic_edge_type;
+ irq_desc[0].chip = &ioapic_edge_type;
/*
* Add it to the IO-APIC irq-routing table:
@@ -1683,7 +1685,7 @@ static inline void init_IO_APIC_traps(void)
make_8259A_irq(irq);
else
/* Strange. Oh, well.. */
- irq_desc[irq].handler = &no_irq_type;
+ irq_desc[irq].chip = &no_irq_type;
}
}
}
@@ -1900,7 +1902,7 @@ static inline void check_timer(void)
apic_printk(APIC_VERBOSE, KERN_INFO "...trying to set up timer as Virtual Wire IRQ...");
disable_8259A_irq(0);
- irq_desc[0].handler = &lapic_irq_type;
+ irq_desc[0].chip = &lapic_irq_type;
apic_write(APIC_LVT0, APIC_DM_FIXED | vector); /* Fixed mode */
enable_8259A_irq(0);
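[Editor's note: besides the rename, the io_apic.c hunks above carry a small structural change: ioapic_register_intr is no longer inline and the idx computation is split from its declaration. The trigger-based chip choice it performs reduces to the following sketch; the trigger test is a placeholder for IO_APIC_irq_trigger(), which reads the routing table in the real code.]

struct hw_chip { const char *typename; };

static struct hw_chip ioapic_level_type = { "IO-APIC-level" };
static struct hw_chip ioapic_edge_type  = { "IO-APIC-edge" };

struct desc { struct hw_chip *chip; };
static struct desc irq_desc[224];

enum { IOAPIC_EDGE = 0, IOAPIC_LEVEL = 1 };
#define IOAPIC_AUTO (-1)

static int irq_trigger_is_level(int irq) { return irq > 15; } /* placeholder */

static void register_intr_sketch(int idx, int irq, long trigger)
{
        if ((trigger == IOAPIC_AUTO && irq_trigger_is_level(irq)) ||
            trigger == IOAPIC_LEVEL)
                irq_desc[idx].chip = &ioapic_level_type;
        else
                irq_desc[idx].chip = &ioapic_edge_type;
}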
diff --git a/arch/x86_64/kernel/irq.c b/arch/x86_64/kernel/irq.c
index bfa82f52a5cc..0c2fb2c77bd0 100644
--- a/arch/x86_64/kernel/irq.c
+++ b/arch/x86_64/kernel/irq.c
@@ -79,7 +79,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -151,8 +151,8 @@ void fixup_irqs(cpumask_t map)
printk("Breaking affinity for irq %i\n", irq);
mask = map;
}
- if (irq_desc[irq].handler->set_affinity)
- irq_desc[irq].handler->set_affinity(irq, mask);
+ if (irq_desc[irq].chip->set_affinity)
+ irq_desc[irq].chip->set_affinity(irq, mask);
else if (irq_desc[irq].action && !(warned++))
printk("Cannot set affinity for irq %i\n", irq);
}
diff --git a/arch/xtensa/kernel/irq.c b/arch/xtensa/kernel/irq.c
index 51f9bed455fa..1cf744ee0959 100644
--- a/arch/xtensa/kernel/irq.c
+++ b/arch/xtensa/kernel/irq.c
@@ -100,7 +100,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
- seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %14s", irq_desc[i].chip->typename);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -181,7 +181,7 @@ void __init init_IRQ(void)
int i;
for (i=0; i < XTENSA_NR_IRQS; i++)
- irq_desc[i].handler = &xtensa_irq_type;
+ irq_desc[i].chip = &xtensa_irq_type;
cached_irq_mask = 0;
diff --git a/drivers/char/vr41xx_giu.c b/drivers/char/vr41xx_giu.c
index 05e6e814d86f..073da48c092e 100644
--- a/drivers/char/vr41xx_giu.c
+++ b/drivers/char/vr41xx_giu.c
@@ -689,9 +689,9 @@ static int __devinit giu_probe(struct platform_device *dev)
for (i = GIU_IRQ_BASE; i <= GIU_IRQ_LAST; i++) {
if (i < GIU_IRQ(GIUINT_HIGH_OFFSET))
- irq_desc[i].handler = &giuint_low_irq_type;
+ irq_desc[i].chip = &giuint_low_irq_type;
else
- irq_desc[i].handler = &giuint_high_irq_type;
+ irq_desc[i].chip = &giuint_high_irq_type;
}
return cascade_irq(GIUINT_IRQ, giu_get_irq);
diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
index 6e8ed0c81a6c..ce0a6ebcff15 100644
--- a/drivers/parisc/dino.c
+++ b/drivers/parisc/dino.c
@@ -299,7 +299,7 @@ struct pci_port_ops dino_port_ops = {
static void dino_disable_irq(unsigned int irq)
{
- struct dino_device *dino_dev = irq_desc[irq].handler_data;
+ struct dino_device *dino_dev = irq_desc[irq].chip_data;
int local_irq = gsc_find_local_irq(irq, dino_dev->global_irq, DINO_LOCAL_IRQS);
DBG(KERN_WARNING "%s(0x%p, %d)\n", __FUNCTION__, dino_dev, irq);
@@ -311,7 +311,7 @@ static void dino_disable_irq(unsigned int irq)
static void dino_enable_irq(unsigned int irq)
{
- struct dino_device *dino_dev = irq_desc[irq].handler_data;
+ struct dino_device *dino_dev = irq_desc[irq].chip_data;
int local_irq = gsc_find_local_irq(irq, dino_dev->global_irq, DINO_LOCAL_IRQS);
u32 tmp;
diff --git a/drivers/parisc/eisa.c b/drivers/parisc/eisa.c
index 9d3bd15bf53b..58f0ce8d78e0 100644
--- a/drivers/parisc/eisa.c
+++ b/drivers/parisc/eisa.c
@@ -350,7 +350,7 @@ static int __devinit eisa_probe(struct parisc_device *dev)
irq_desc[2].action = &irq2_action;
for (i = 0; i < 16; i++) {
- irq_desc[i].handler = &eisa_interrupt_type;
+ irq_desc[i].chip = &eisa_interrupt_type;
}
EISA_bus = 1;
diff --git a/drivers/parisc/gsc.c b/drivers/parisc/gsc.c
index 16d40f95978d..5476ba7709b3 100644
--- a/drivers/parisc/gsc.c
+++ b/drivers/parisc/gsc.c
@@ -109,7 +109,7 @@ int gsc_find_local_irq(unsigned int irq, int *global_irqs, int limit)
static void gsc_asic_disable_irq(unsigned int irq)
{
- struct gsc_asic *irq_dev = irq_desc[irq].handler_data;
+ struct gsc_asic *irq_dev = irq_desc[irq].chip_data;
int local_irq = gsc_find_local_irq(irq, irq_dev->global_irq, 32);
u32 imr;
@@ -124,7 +124,7 @@ static void gsc_asic_disable_irq(unsigned int irq)
static void gsc_asic_enable_irq(unsigned int irq)
{
- struct gsc_asic *irq_dev = irq_desc[irq].handler_data;
+ struct gsc_asic *irq_dev = irq_desc[irq].chip_data;
int local_irq = gsc_find_local_irq(irq, irq_dev->global_irq, 32);
u32 imr;
@@ -164,8 +164,8 @@ int gsc_assign_irq(struct hw_interrupt_type *type, void *data)
if (irq > GSC_IRQ_MAX)
return NO_IRQ;
- irq_desc[irq].handler = type;
- irq_desc[irq].handler_data = data;
+ irq_desc[irq].chip = type;
+ irq_desc[irq].chip_data = data;
return irq++;
}
diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c
index 7a458d5bc751..1fbda77cefc2 100644
--- a/drivers/parisc/iosapic.c
+++ b/drivers/parisc/iosapic.c
@@ -619,7 +619,7 @@ iosapic_set_irt_data( struct vector_info *vi, u32 *dp0, u32 *dp1)
static struct vector_info *iosapic_get_vector(unsigned int irq)
{
- return irq_desc[irq].handler_data;
+ return irq_desc[irq].chip_data;
}
static void iosapic_disable_irq(unsigned int irq)
diff --git a/drivers/parisc/superio.c b/drivers/parisc/superio.c
index 828eb45062de..a988dc7a9abd 100644
--- a/drivers/parisc/superio.c
+++ b/drivers/parisc/superio.c
@@ -360,7 +360,7 @@ int superio_fixup_irq(struct pci_dev *pcidev)
#endif
for (i = 0; i < 16; i++) {
- irq_desc[i].handler = &superio_interrupt_type;
+ irq_desc[i].chip = &superio_interrupt_type;
}
/*
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 7f8429284fab..76d023d8a33b 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -429,12 +429,12 @@ static void irq_handler_init(int cap_id, int pos, int mask)
spin_lock_irqsave(&irq_desc[pos].lock, flags);
if (cap_id == PCI_CAP_ID_MSIX)
- irq_desc[pos].handler = &msix_irq_type;
+ irq_desc[pos].chip = &msix_irq_type;
else {
if (!mask)
- irq_desc[pos].handler = &msi_irq_wo_maskbit_type;
+ irq_desc[pos].chip = &msi_irq_wo_maskbit_type;
else
- irq_desc[pos].handler = &msi_irq_w_maskbit_type;
+ irq_desc[pos].chip = &msi_irq_w_maskbit_type;
}
spin_unlock_irqrestore(&irq_desc[pos].lock, flags);
}
diff --git a/drivers/pcmcia/hd64465_ss.c b/drivers/pcmcia/hd64465_ss.c
index b39435bbfaeb..c662e4f89d46 100644
--- a/drivers/pcmcia/hd64465_ss.c
+++ b/drivers/pcmcia/hd64465_ss.c
@@ -244,8 +244,8 @@ static void hs_map_irq(hs_socket_t *sp, unsigned int irq)
hs_mapped_irq[irq].sock = sp;
/* insert ourselves as the irq controller */
- hs_mapped_irq[irq].old_handler = irq_desc[irq].handler;
- irq_desc[irq].handler = &hd64465_ss_irq_type;
+ hs_mapped_irq[irq].old_handler = irq_desc[irq].chip;
+ irq_desc[irq].chip = &hd64465_ss_irq_type;
}
@@ -260,7 +260,7 @@ static void hs_unmap_irq(hs_socket_t *sp, unsigned int irq)
return;
/* restore the original irq controller */
- irq_desc[irq].handler = hs_mapped_irq[irq].old_handler;
+ irq_desc[irq].chip = hs_mapped_irq[irq].old_handler;
}
/*============================================================*/
diff --git a/include/asm-powerpc/hw_irq.h b/include/asm-powerpc/hw_irq.h
index ce0f7db63c16..a27ed3feb315 100644
--- a/include/asm-powerpc/hw_irq.h
+++ b/include/asm-powerpc/hw_irq.h
@@ -86,20 +86,20 @@ static inline void local_irq_save_ptr(unsigned long *flags)
#define mask_irq(irq) \
({ \
irq_desc_t *desc = get_irq_desc(irq); \
- if (desc->handler && desc->handler->disable) \
- desc->handler->disable(irq); \
+ if (desc->chip && desc->chip->disable) \
+ desc->chip->disable(irq); \
})
#define unmask_irq(irq) \
({ \
irq_desc_t *desc = get_irq_desc(irq); \
- if (desc->handler && desc->handler->enable) \
- desc->handler->enable(irq); \
+ if (desc->chip && desc->chip->enable) \
+ desc->chip->enable(irq); \
})
#define ack_irq(irq) \
({ \
irq_desc_t *desc = get_irq_desc(irq); \
- if (desc->handler && desc->handler->ack) \
- desc->handler->ack(irq); \
+ if (desc->chip && desc->chip->ack) \
+ desc->chip->ack(irq); \
})
/* Should we handle this via lost interrupts and IPIs or should we don't care like
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 676e00dfb21a..9597a6904239 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -69,8 +69,8 @@ typedef struct hw_interrupt_type hw_irq_controller;
* Pad this out to 32 bytes for cache and indexing reasons.
*/
typedef struct irq_desc {
- hw_irq_controller *handler;
- void *handler_data;
+ hw_irq_controller *chip;
+ void *chip_data;
struct irqaction *action; /* IRQ action list */
unsigned int status; /* IRQ status */
unsigned int depth; /* nested irq disables */
diff --git a/kernel/irq/autoprobe.c b/kernel/irq/autoprobe.c
index 3467097ca61a..6f1e68a46cbc 100644
--- a/kernel/irq/autoprobe.c
+++ b/kernel/irq/autoprobe.c
@@ -41,7 +41,7 @@ unsigned long probe_irq_on(void)
spin_lock_irq(&desc->lock);
if (!irq_desc[i].action)
- irq_desc[i].handler->startup(i);
+ irq_desc[i].chip->startup(i);
spin_unlock_irq(&desc->lock);
}
@@ -59,7 +59,7 @@ unsigned long probe_irq_on(void)
spin_lock_irq(&desc->lock);
if (!desc->action) {
desc->status |= IRQ_AUTODETECT | IRQ_WAITING;
- if (desc->handler->startup(i))
+ if (desc->chip->startup(i))
desc->status |= IRQ_PENDING;
}
spin_unlock_irq(&desc->lock);
@@ -85,7 +85,7 @@ unsigned long probe_irq_on(void)
/* It triggered already - consider it spurious. */
if (!(status & IRQ_WAITING)) {
desc->status = status & ~IRQ_AUTODETECT;
- desc->handler->shutdown(i);
+ desc->chip->shutdown(i);
} else
if (i < 32)
val |= 1 << i;
@@ -128,7 +128,7 @@ unsigned int probe_irq_mask(unsigned long val)
mask |= 1 << i;
desc->status = status & ~IRQ_AUTODETECT;
- desc->handler->shutdown(i);
+ desc->chip->shutdown(i);
}
spin_unlock_irq(&desc->lock);
}
@@ -173,7 +173,7 @@ int probe_irq_off(unsigned long val)
nr_irqs++;
}
desc->status = status & ~IRQ_AUTODETECT;
- desc->handler->shutdown(i);
+ desc->chip->shutdown(i);
}
spin_unlock_irq(&desc->lock);
}
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 0f6530117105..f9d95705a4ac 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -31,7 +31,7 @@
irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned = {
[0 ... NR_IRQS-1] = {
.status = IRQ_DISABLED,
- .handler = &no_irq_type,
+ .chip = &no_irq_type,
.lock = SPIN_LOCK_UNLOCKED
}
};
@@ -118,16 +118,16 @@ fastcall unsigned int __do_IRQ(unsigned int irq, struct pt_regs *regs)
/*
* No locking required for CPU-local interrupts:
*/
- if (desc->handler->ack)
- desc->handler->ack(irq);
+ if (desc->chip->ack)
+ desc->chip->ack(irq);
action_ret = handle_IRQ_event(irq, regs, desc->action);
- desc->handler->end(irq);
+ desc->chip->end(irq);
return 1;
}
spin_lock(&desc->lock);
- if (desc->handler->ack)
- desc->handler->ack(irq);
+ if (desc->chip->ack)
+ desc->chip->ack(irq);
/*
* REPLAY is when Linux resends an IRQ that was dropped earlier
* WAITING is used by probe to mark irqs that are being tested
@@ -187,7 +187,7 @@ fastcall unsigned int __do_IRQ(unsigned int irq, struct pt_regs *regs)
* The ->end() handler has to deal with interrupts which got
* disabled while the handler was running.
*/
- desc->handler->end(irq);
+ desc->chip->end(irq);
spin_unlock(&desc->lock);
return 1;
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1279e3499534..31ee1f3bfcf9 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -69,7 +69,7 @@ void disable_irq_nosync(unsigned int irq)
spin_lock_irqsave(&desc->lock, flags);
if (!desc->depth++) {
desc->status |= IRQ_DISABLED;
- desc->handler->disable(irq);
+ desc->chip->disable(irq);
}
spin_unlock_irqrestore(&desc->lock, flags);
}
@@ -131,9 +131,9 @@ void enable_irq(unsigned int irq)
desc->status = status;
if ((status & (IRQ_PENDING | IRQ_REPLAY)) == IRQ_PENDING) {
desc->status = status | IRQ_REPLAY;
- hw_resend_irq(desc->handler,irq);
+ hw_resend_irq(desc->chip,irq);
}
- desc->handler->enable(irq);
+ desc->chip->enable(irq);
/* fall-through */
}
default:
@@ -178,7 +178,7 @@ int setup_irq(unsigned int irq, struct irqaction * new)
if (irq >= NR_IRQS)
return -EINVAL;
- if (desc->handler == &no_irq_type)
+ if (desc->chip == &no_irq_type)
return -ENOSYS;
/*
* Some drivers like serial.c use request_irq() heavily,
@@ -230,10 +230,10 @@ int setup_irq(unsigned int irq, struct irqaction * new)
desc->depth = 0;
desc->status &= ~(IRQ_DISABLED | IRQ_AUTODETECT |
IRQ_WAITING | IRQ_INPROGRESS);
- if (desc->handler->startup)
- desc->handler->startup(irq);
+ if (desc->chip->startup)
+ desc->chip->startup(irq);
else
- desc->handler->enable(irq);
+ desc->chip->enable(irq);
}
spin_unlock_irqrestore(&desc->lock,flags);
@@ -295,16 +295,16 @@ void free_irq(unsigned int irq, void *dev_id)
/* Currently used only by UML, might disappear one day.*/
#ifdef CONFIG_IRQ_RELEASE_METHOD
- if (desc->handler->release)
- desc->handler->release(irq, dev_id);
+ if (desc->chip->release)
+ desc->chip->release(irq, dev_id);
#endif
if (!desc->action) {
desc->status |= IRQ_DISABLED;
- if (desc->handler->shutdown)
- desc->handler->shutdown(irq);
+ if (desc->chip->shutdown)
+ desc->chip->shutdown(irq);
else
- desc->handler->disable(irq);
+ desc->chip->disable(irq);
}
spin_unlock_irqrestore(&desc->lock,flags);
unregister_handler_proc(irq, action);
diff --git a/kernel/irq/migration.c b/kernel/irq/migration.c
index a12d00eb5e7c..d978c87bca93 100644
--- a/kernel/irq/migration.c
+++ b/kernel/irq/migration.c
@@ -33,7 +33,7 @@ void move_native_irq(int irq)
if (unlikely(cpus_empty(pending_irq_cpumask[irq])))
return;
- if (!desc->handler->set_affinity)
+ if (!desc->chip->set_affinity)
return;
assert_spin_locked(&desc->lock);
@@ -51,12 +51,12 @@ void move_native_irq(int irq)
*/
if (likely(!cpus_empty(tmp))) {
if (likely(!(desc->status & IRQ_DISABLED)))
- desc->handler->disable(irq);
+ desc->chip->disable(irq);
- desc->handler->set_affinity(irq,tmp);
+ desc->chip->set_affinity(irq,tmp);
if (likely(!(desc->status & IRQ_DISABLED)))
- desc->handler->enable(irq);
+ desc->chip->enable(irq);
}
cpus_clear(pending_irq_cpumask[irq]);
}
diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
index afacd6f585fa..90fe05f23e69 100644
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -37,7 +37,7 @@ void proc_set_irq_affinity(unsigned int irq, cpumask_t mask_val)
{
set_balance_irq_affinity(irq, mask_val);
irq_affinity[irq] = mask_val;
- irq_desc[irq].handler->set_affinity(irq, mask_val);
+ irq_desc[irq].chip->set_affinity(irq, mask_val);
}
#endif
@@ -59,7 +59,7 @@ static int irq_affinity_write_proc(struct file *file, const char __user *buffer,
unsigned int irq = (int)(long)data, full_count = count, err;
cpumask_t new_value, tmp;
- if (!irq_desc[irq].handler->set_affinity || no_irq_affinity)
+ if (!irq_desc[irq].chip->set_affinity || no_irq_affinity)
return -EIO;
err = cpumask_parse(buffer, count, new_value);
@@ -122,7 +122,7 @@ void register_irq_proc(unsigned int irq)
char name [MAX_NAMELEN];
if (!root_irq_dir ||
- (irq_desc[irq].handler == &no_irq_type) ||
+ (irq_desc[irq].chip == &no_irq_type) ||
irq_dir[irq])
return;
diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
index b2fb3c18d06b..ea3ceed362da 100644
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -81,7 +81,7 @@ static int misrouted_irq(int irq, struct pt_regs *regs)
* IRQ controller clean up too
*/
if(work)
- desc->handler->end(i);
+ desc->chip->end(i);
spin_unlock(&desc->lock);
}
/* So the caller can adjust the irq error counts */
@@ -166,7 +166,7 @@ void note_interrupt(unsigned int irq, irq_desc_t *desc, irqreturn_t action_ret,
*/
printk(KERN_EMERG "Disabling IRQ #%d\n", irq);
desc->status |= IRQ_DISABLED;
- desc->handler->disable(irq);
+ desc->chip->disable(irq);
}
desc->irqs_unhandled = 0;
}
commit c87e2837be82df479a6bae9f155c43516d2feebc
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:58 2006 -0700
[PATCH] pi-futex: futex_lock_pi/futex_unlock_pi support
This adds the actual pi-futex implementation, based on rt-mutexes.
[dino@in.ibm.com: fix an oops-causing race]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Dinakar Guniguntala <dino@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
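[Illustration, not part of the patch: a hedged sketch of the userspace half of the protocol the kernel side below assumes. The uncontended paths are single atomic transitions on the futex word (0 -> TID to lock, TID -> 0 to unlock); only contention enters the kernel via the new opcodes. The wrapper names, the GCC __sync builtins and the raw syscall(2) usage are illustrative assumptions.]

	#include <linux/futex.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Illustrative only: the lock word holds 0 (free) or the owner's TID. */
	static int pi_lock(int *uaddr, int tid)
	{
		if (__sync_bool_compare_and_swap(uaddr, 0, tid))
			return 0;		/* fast path, no syscall */
		/* Contended: the kernel queues us and boosts the owner
		 * via the rt-mutex embedded in the pi_state. */
		return syscall(SYS_futex, uaddr, FUTEX_LOCK_PI, 0, NULL, NULL, 0);
	}

	static int pi_unlock(int *uaddr, int tid)
	{
		if (__sync_bool_compare_and_swap(uaddr, tid, 0))
			return 0;		/* no waiters were queued */
		/* TID -> 0 failed: FUTEX_WAITERS is set, let the kernel
		 * hand the lock to the top waiter. */
		return syscall(SYS_futex, uaddr, FUTEX_UNLOCK_PI, 0, NULL, NULL, 0);
	}

FUTEX_TRYLOCK_PI follows the same shape but takes the trylock path, returning -EWOULDBLOCK instead of blocking (see the "Fixup the trylock return value" hunk below).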
diff --git a/include/linux/futex.h b/include/linux/futex.h
index f05a3f469322..34c3a215f2cd 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -12,6 +12,9 @@
#define FUTEX_REQUEUE 3
#define FUTEX_CMP_REQUEUE 4
#define FUTEX_WAKE_OP 5
+#define FUTEX_LOCK_PI 6
+#define FUTEX_UNLOCK_PI 7
+#define FUTEX_TRYLOCK_PI 8
/*
* Support for robust futexes: the kernel cleans up held futexes at
@@ -97,10 +100,14 @@ extern int handle_futex_death(u32 __user *uaddr, struct task_struct *curr);
#ifdef CONFIG_FUTEX
extern void exit_robust_list(struct task_struct *curr);
+extern void exit_pi_state_list(struct task_struct *curr);
#else
static inline void exit_robust_list(struct task_struct *curr)
{
}
+static inline void exit_pi_state_list(struct task_struct *curr)
+{
+}
#endif
#define FUTEX_OP_SET 0 /* *(int *)UADDR2 = OPARG; */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index edadd13cf53f..b4e6be7de5ad 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -84,6 +84,7 @@ struct sched_param {
#include <asm/processor.h>
struct exec_domain;
+struct futex_pi_state;
/*
* List of flags we want to share for kernel threads,
@@ -915,6 +916,8 @@ struct task_struct {
#ifdef CONFIG_COMPAT
struct compat_robust_list_head __user *compat_robust_list;
#endif
+ struct list_head pi_state_list;
+ struct futex_pi_state *pi_state_cache;
atomic_t fs_excl; /* holding fs exclusive resources */
struct rcu_head rcu;
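[Illustration, not part of the patch: the two new task_struct fields implement a preallocation pattern — pi_state_cache is refilled with GFP_KERNEL before any hash-bucket lock is taken, so alloc_pi_state() can later hand out the object from atomic context without failing. A hedged sketch of the calling discipline, condensed from the futex.c hunks below:]

	/* May sleep: runs before any spinlock is held. */
	if (refill_pi_state_cache())
		return -ENOMEM;
	/* ... */
	spin_lock(&hb->lock);		/* atomic from here on */
	pi_state = alloc_pi_state();	/* consumes the cache, cannot fail */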
diff --git a/kernel/exit.c b/kernel/exit.c
index 3e8a0282e9a5..ab06b9f88f64 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -925,6 +925,14 @@ fastcall NORET_TYPE void do_exit(long code)
mpol_free(tsk->mempolicy);
tsk->mempolicy = NULL;
#endif
+ /*
+ * This must happen late, after the PID is not
+ * hashed anymore:
+ */
+ if (unlikely(!list_empty(&tsk->pi_state_list)))
+ exit_pi_state_list(tsk);
+ if (unlikely(current->pi_state_cache))
+ kfree(current->pi_state_cache);
/*
* If DEBUG_MUTEXES is on, make sure we are holding no locks:
*/
diff --git a/kernel/fork.c b/kernel/fork.c
index b664a081fffa..628198a4f28a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1092,6 +1092,9 @@ static task_t *copy_process(unsigned long clone_flags,
#ifdef CONFIG_COMPAT
p->compat_robust_list = NULL;
#endif
+ INIT_LIST_HEAD(&p->pi_state_list);
+ p->pi_state_cache = NULL;
+
/*
* sigaltstack should be cleared when sharing the same VM
*/
diff --git a/kernel/futex.c b/kernel/futex.c
index 50356fb5d726..b305b7f8dad5 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -12,6 +12,10 @@
* (C) Copyright 2006 Red Hat Inc, All Rights Reserved
* Thanks to Thomas Gleixner for suggestions, analysis and fixes.
*
+ * PI-futex support started by Ingo Molnar and Thomas Gleixner
+ * Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
* Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly
* enough at me, Linus for the original (flawed) idea, Matthew
* Kirkwood for proof-of-concept implementation.
@@ -46,6 +50,8 @@
#include <linux/signal.h>
#include <asm/futex.h>
+#include "rtmutex_common.h"
+
#define FUTEX_HASHBITS (CONFIG_BASE_SMALL ? 4 : 8)
/*
@@ -74,6 +80,27 @@ union futex_key {
} both;
};
+/*
+ * Priority Inheritance state:
+ */
+struct futex_pi_state {
+ /*
+ * list of 'owned' pi_state instances - these have to be
+ * cleaned up in do_exit() if the task exits prematurely:
+ */
+ struct list_head list;
+
+ /*
+ * The PI object:
+ */
+ struct rt_mutex pi_mutex;
+
+ struct task_struct *owner;
+ atomic_t refcount;
+
+ union futex_key key;
+};
+
/*
* We use this hashed waitqueue instead of a normal wait_queue_t, so
* we can wake only the relevant ones (hashed queues may be shared).
@@ -96,6 +123,10 @@ struct futex_q {
/* For fd, sigio sent using these: */
int fd;
struct file *filp;
+
+ /* Optional priority inheritance state: */
+ struct futex_pi_state *pi_state;
+ struct task_struct *task;
};
/*
@@ -258,6 +289,232 @@ static inline int get_futex_value_locked(u32 *dest, u32 __user *from)
return ret ? -EFAULT : 0;
}
+/*
+ * Fault handling. Called with current->mm->mmap_sem held.
+ */
+static int futex_handle_fault(unsigned long address, int attempt)
+{
+ struct vm_area_struct * vma;
+ struct mm_struct *mm = current->mm;
+
+ if (attempt >= 2 || !(vma = find_vma(mm, address)) ||
+ vma->vm_start > address || !(vma->vm_flags & VM_WRITE))
+ return -EFAULT;
+
+ switch (handle_mm_fault(mm, vma, address, 1)) {
+ case VM_FAULT_MINOR:
+ current->min_flt++;
+ break;
+ case VM_FAULT_MAJOR:
+ current->maj_flt++;
+ break;
+ default:
+ return -EFAULT;
+ }
+ return 0;
+}
+
+/*
+ * PI code:
+ */
+static int refill_pi_state_cache(void)
+{
+ struct futex_pi_state *pi_state;
+
+ if (likely(current->pi_state_cache))
+ return 0;
+
+ pi_state = kmalloc(sizeof(*pi_state), GFP_KERNEL);
+
+ if (!pi_state)
+ return -ENOMEM;
+
+ memset(pi_state, 0, sizeof(*pi_state));
+ INIT_LIST_HEAD(&pi_state->list);
+ /* pi_mutex gets initialized later */
+ pi_state->owner = NULL;
+ atomic_set(&pi_state->refcount, 1);
+
+ current->pi_state_cache = pi_state;
+
+ return 0;
+}
+
+static struct futex_pi_state * alloc_pi_state(void)
+{
+ struct futex_pi_state *pi_state = current->pi_state_cache;
+
+ WARN_ON(!pi_state);
+ current->pi_state_cache = NULL;
+
+ return pi_state;
+}
+
+static void free_pi_state(struct futex_pi_state *pi_state)
+{
+ if (!atomic_dec_and_test(&pi_state->refcount))
+ return;
+
+ /*
+ * If pi_state->owner is NULL, the owner is most probably dying
+ * and has cleaned up the pi_state already
+ */
+ if (pi_state->owner) {
+ spin_lock_irq(&pi_state->owner->pi_lock);
+ list_del_init(&pi_state->list);
+ spin_unlock_irq(&pi_state->owner->pi_lock);
+
+ rt_mutex_proxy_unlock(&pi_state->pi_mutex, pi_state->owner);
+ }
+
+ if (current->pi_state_cache)
+ kfree(pi_state);
+ else {
+ /*
+ * pi_state->list is already empty.
+ * clear pi_state->owner.
+ * refcount is at 0 - put it back to 1.
+ */
+ pi_state->owner = NULL;
+ atomic_set(&pi_state->refcount, 1);
+ current->pi_state_cache = pi_state;
+ }
+}
+
+/*
+ * Look up the task based on what TID userspace gave us.
+ * We dont trust it.
+ */
+static struct task_struct * futex_find_get_task(pid_t pid)
+{
+ struct task_struct *p;
+
+ read_lock(&tasklist_lock);
+ p = find_task_by_pid(pid);
+ if (!p)
+ goto out_unlock;
+ if ((current->euid != p->euid) && (current->euid != p->uid)) {
+ p = NULL;
+ goto out_unlock;
+ }
+ if (p->state == EXIT_ZOMBIE || p->exit_state == EXIT_ZOMBIE) {
+ p = NULL;
+ goto out_unlock;
+ }
+ get_task_struct(p);
+out_unlock:
+ read_unlock(&tasklist_lock);
+
+ return p;
+}
+
+/*
+ * This task is holding PI mutexes at exit time => bad.
+ * Kernel cleans up PI-state, but userspace is likely hosed.
+ * (Robust-futex cleanup is separate and might save the day for userspace.)
+ */
+void exit_pi_state_list(struct task_struct *curr)
+{
+ struct futex_hash_bucket *hb;
+ struct list_head *next, *head = &curr->pi_state_list;
+ struct futex_pi_state *pi_state;
+ union futex_key key;
+
+ /*
+ * We are a ZOMBIE and nobody can enqueue itself on
+ * pi_state_list anymore, but we have to be careful
+ * versus waiters unqueueing themselfs
+ */
+ spin_lock_irq(&curr->pi_lock);
+ while (!list_empty(head)) {
+
+ next = head->next;
+ pi_state = list_entry(next, struct futex_pi_state, list);
+ key = pi_state->key;
+ spin_unlock_irq(&curr->pi_lock);
+
+ hb = hash_futex(&key);
+ spin_lock(&hb->lock);
+
+ spin_lock_irq(&curr->pi_lock);
+ if (head->next != next) {
+ spin_unlock(&hb->lock);
+ continue;
+ }
+
+ list_del_init(&pi_state->list);
+
+ WARN_ON(pi_state->owner != curr);
+
+ pi_state->owner = NULL;
+ spin_unlock_irq(&curr->pi_lock);
+
+ rt_mutex_unlock(&pi_state->pi_mutex);
+
+ spin_unlock(&hb->lock);
+
+ spin_lock_irq(&curr->pi_lock);
+ }
+ spin_unlock_irq(&curr->pi_lock);
+}
+
+static int
+lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, struct futex_q *me)
+{
+ struct futex_pi_state *pi_state = NULL;
+ struct futex_q *this, *next;
+ struct list_head *head;
+ struct task_struct *p;
+ pid_t pid;
+
+ head = &hb->chain;
+
+ list_for_each_entry_safe(this, next, head, list) {
+ if (match_futex (&this->key, &me->key)) {
+ /*
+ * Another waiter already exists - bump up
+ * the refcount and return its pi_state:
+ */
+ pi_state = this->pi_state;
+ atomic_inc(&pi_state->refcount);
+ me->pi_state = pi_state;
+
+ return 0;
+ }
+ }
+
+ /*
+ * We are the first waiter - try to look up the real owner and
+ * attach the new pi_state to it:
+ */
+ pid = uval & FUTEX_TID_MASK;
+ p = futex_find_get_task(pid);
+ if (!p)
+ return -ESRCH;
+
+ pi_state = alloc_pi_state();
+
+ /*
+ * Initialize the pi_mutex in locked state and make 'p'
+ * the owner of it:
+ */
+ rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p);
+
+ /* Store the key for possible exit cleanups: */
+ pi_state->key = me->key;
+
+ spin_lock_irq(&p->pi_lock);
+ list_add(&pi_state->list, &p->pi_state_list);
+ pi_state->owner = p;
+ spin_unlock_irq(&p->pi_lock);
+
+ put_task_struct(p);
+
+ me->pi_state = pi_state;
+
+ return 0;
+}
+
/*
* The hash bucket lock must be held when this is called.
* Afterwards, the futex_q must not be accessed.
@@ -285,6 +542,70 @@ static void wake_futex(struct futex_q *q)
q->lock_ptr = NULL;
}
+static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this)
+{
+ struct task_struct *new_owner;
+ struct futex_pi_state *pi_state = this->pi_state;
+ u32 curval, newval;
+
+ if (!pi_state)
+ return -EINVAL;
+
+ new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
+
+ /*
+ * This happens when we have stolen the lock and the original
+ * pending owner did not enqueue itself back on the rt_mutex.
+ * Thats not a tragedy. We know that way, that a lock waiter
+ * is on the fly. We make the futex_q waiter the pending owner.
+ */
+ if (!new_owner)
+ new_owner = this->task;
+
+ /*
+ * We pass it to the next owner. (The WAITERS bit is always
+ * kept enabled while there is PI state around. We must also
+ * preserve the owner died bit.)
+ */
+ newval = (uval & FUTEX_OWNER_DIED) | FUTEX_WAITERS | new_owner->pid;
+
+ inc_preempt_count();
+ curval = futex_atomic_cmpxchg_inatomic(uaddr, uval, newval);
+ dec_preempt_count();
+
+ if (curval == -EFAULT)
+ return -EFAULT;
+ if (curval != uval)
+ return -EINVAL;
+
+ list_del_init(&pi_state->owner->pi_state_list);
+ list_add(&pi_state->list, &new_owner->pi_state_list);
+ pi_state->owner = new_owner;
+ rt_mutex_unlock(&pi_state->pi_mutex);
+
+ return 0;
+}
+
+static int unlock_futex_pi(u32 __user *uaddr, u32 uval)
+{
+ u32 oldval;
+
+ /*
+ * There is no waiter, so we unlock the futex. The owner died
+ * bit has not to be preserved here. We are the owner:
+ */
+ inc_preempt_count();
+ oldval = futex_atomic_cmpxchg_inatomic(uaddr, uval, 0);
+ dec_preempt_count();
+
+ if (oldval == -EFAULT)
+ return oldval;
+ if (oldval != uval)
+ return -EAGAIN;
+
+ return 0;
+}
+
/*
* Wake up all waiters hashed on the physical page that is mapped
* to this virtual address:
@@ -309,6 +630,8 @@ static int futex_wake(u32 __user *uaddr, int nr_wake)
list_for_each_entry_safe(this, next, head, list) {
if (match_futex (&this->key, &key)) {
+ if (this->pi_state)
+ return -EINVAL;
wake_futex(this);
if (++ret >= nr_wake)
break;
@@ -385,27 +708,9 @@ futex_wake_op(u32 __user *uaddr1, u32 __user *uaddr2,
* still holding the mmap_sem.
*/
if (attempt++) {
- struct vm_area_struct * vma;
- struct mm_struct *mm = current->mm;
- unsigned long address = (unsigned long)uaddr2;
-
- ret = -EFAULT;
- if (attempt >= 2 ||
- !(vma = find_vma(mm, address)) ||
- vma->vm_start > address ||
- !(vma->vm_flags & VM_WRITE))
+ if (futex_handle_fault((unsigned long)uaddr2,
+ attempt))
goto out;
-
- switch (handle_mm_fault(mm, vma, address, 1)) {
- case VM_FAULT_MINOR:
- current->min_flt++;
- break;
- case VM_FAULT_MAJOR:
- current->maj_flt++;
- break;
- default:
- goto out;
- }
goto retry;
}
@@ -572,6 +877,7 @@ queue_lock(struct futex_q *q, int fd, struct file *filp)
static inline void __queue_me(struct futex_q *q, struct futex_hash_bucket *hb)
{
list_add_tail(&q->list, &hb->chain);
+ q->task = current;
spin_unlock(&hb->lock);
}
@@ -626,6 +932,9 @@ static int unqueue_me(struct futex_q *q)
}
WARN_ON(list_empty(&q->list));
list_del(&q->list);
+
+ BUG_ON(q->pi_state);
+
spin_unlock(lock_ptr);
ret = 1;
}
@@ -634,16 +943,36 @@ static int unqueue_me(struct futex_q *q)
return ret;
}
+/*
+ * PI futexes can not be requeued and must remove themself from the
+ * hash bucket. The hash bucket lock is held on entry and dropped here.
+ */
+static void unqueue_me_pi(struct futex_q *q, struct futex_hash_bucket *hb)
+{
+ WARN_ON(list_empty(&q->list));
+ list_del(&q->list);
+
+ BUG_ON(!q->pi_state);
+ free_pi_state(q->pi_state);
+ q->pi_state = NULL;
+
+ spin_unlock(&hb->lock);
+
+ drop_key_refs(&q->key);
+}
+
static int futex_wait(u32 __user *uaddr, u32 val, unsigned long time)
{
- DECLARE_WAITQUEUE(wait, current);
+ struct task_struct *curr = current;
+ DECLARE_WAITQUEUE(wait, curr);
struct futex_hash_bucket *hb;
struct futex_q q;
u32 uval;
int ret;
+ q.pi_state = NULL;
retry:
- down_read(&current->mm->mmap_sem);
+ down_read(&curr->mm->mmap_sem);
ret = get_futex_key(uaddr, &q.key);
if (unlikely(ret != 0))
@@ -680,7 +1009,7 @@ static int futex_wait(u32 __user *uaddr, u32 val, unsigned long time)
* If we would have faulted, release mmap_sem, fault it in and
* start all over again.
*/
- up_read(&current->mm->mmap_sem);
+ up_read(&curr->mm->mmap_sem);
ret = get_user(uval, uaddr);
@@ -688,11 +1017,9 @@ static int futex_wait(u32 __user *uaddr, u32 val, unsigned long time)
goto retry;
return ret;
}
- if (uval != val) {
- ret = -EWOULDBLOCK;
- queue_unlock(&q, hb);
- goto out_release_sem;
- }
+ ret = -EWOULDBLOCK;
+ if (uval != val)
+ goto out_unlock_release_sem;
/* Only actually queue if *uaddr contained val. */
__queue_me(&q, hb);
@@ -700,8 +1027,8 @@ static int futex_wait(u32 __user *uaddr, u32 val, unsigned long time)
/*
* Now the futex is queued and we have checked the data, we
* don't want to hold mmap_sem while we sleep.
- */
- up_read(&current->mm->mmap_sem);
+ */
+ up_read(&curr->mm->mmap_sem);
/*
* There might have been scheduling since the queue_me(), as we
@@ -739,8 +1066,415 @@ static int futex_wait(u32 __user *uaddr, u32 val, unsigned long time)
*/
return -EINTR;
+ out_unlock_release_sem:
+ queue_unlock(&q, hb);
+
out_release_sem:
+ up_read(&curr->mm->mmap_sem);
+ return ret;
+}
+
+/*
+ * Userspace tried a 0 -> TID atomic transition of the futex value
+ * and failed. The kernel side here does the whole locking operation:
+ * if there are waiters then it will block, it does PI, etc. (Due to
+ * races the kernel might see a 0 value of the futex too.)
+ */
+static int do_futex_lock_pi(u32 __user *uaddr, int detect, int trylock,
+ struct hrtimer_sleeper *to)
+{
+ struct task_struct *curr = current;
+ struct futex_hash_bucket *hb;
+ u32 uval, newval, curval;
+ struct futex_q q;
+ int ret, attempt = 0;
+
+ if (refill_pi_state_cache())
+ return -ENOMEM;
+
+ q.pi_state = NULL;
+ retry:
+ down_read(&curr->mm->mmap_sem);
+
+ ret = get_futex_key(uaddr, &q.key);
+ if (unlikely(ret != 0))
+ goto out_release_sem;
+
+ hb = queue_lock(&q, -1, NULL);
+
+ retry_locked:
+ /*
+ * To avoid races, we attempt to take the lock here again
+ * (by doing a 0 -> TID atomic cmpxchg), while holding all
+ * the locks. It will most likely not succeed.
+ */
+ newval = current->pid;
+
+ inc_preempt_count();
+ curval = futex_atomic_cmpxchg_inatomic(uaddr, 0, newval);
+ dec_preempt_count();
+
+ if (unlikely(curval == -EFAULT))
+ goto uaddr_faulted;
+
+ /* We own the lock already */
+ if (unlikely((curval & FUTEX_TID_MASK) == current->pid)) {
+ if (!detect && 0)
+ force_sig(SIGKILL, current);
+ ret = -EDEADLK;
+ goto out_unlock_release_sem;
+ }
+
+ /*
+ * Surprise - we got the lock. Just return
+ * to userspace:
+ */
+ if (unlikely(!curval))
+ goto out_unlock_release_sem;
+
+ uval = curval;
+ newval = uval | FUTEX_WAITERS;
+
+ inc_preempt_count();
+ curval = futex_atomic_cmpxchg_inatomic(uaddr, uval, newval);
+ dec_preempt_count();
+
+ if (unlikely(curval == -EFAULT))
+ goto uaddr_faulted;
+ if (unlikely(curval != uval))
+ goto retry_locked;
+
+ /*
+ * We dont have the lock. Look up the PI state (or create it if
+ * we are the first waiter):
+ */
+ ret = lookup_pi_state(uval, hb, &q);
+
+ if (unlikely(ret)) {
+ /*
+ * There were no waiters and the owner task lookup
+ * failed. When the OWNER_DIED bit is set, then we
+ * know that this is a robust futex and we actually
+ * take the lock. This is safe as we are protected by
+ * the hash bucket lock. We also set the waiters bit
+ * unconditionally here, to simplify glibc handling of
+ * multiple tasks racing to acquire the lock and
+ * cleanup the problems which were left by the dead
+ * owner.
+ */
+ if (curval & FUTEX_OWNER_DIED) {
+ uval = newval;
+ newval = current->pid |
+ FUTEX_OWNER_DIED | FUTEX_WAITERS;
+
+ inc_preempt_count();
+ curval = futex_atomic_cmpxchg_inatomic(uaddr,
+ uval, newval);
+ dec_preempt_count();
+
+ if (unlikely(curval == -EFAULT))
+ goto uaddr_faulted;
+ if (unlikely(curval != uval))
+ goto retry_locked;
+ ret = 0;
+ }
+ goto out_unlock_release_sem;
+ }
+
+ /*
+ * Only actually queue now that the atomic ops are done:
+ */
+ __queue_me(&q, hb);
+
+ /*
+ * Now the futex is queued and we have checked the data, we
+ * don't want to hold mmap_sem while we sleep.
+ */
+ up_read(&curr->mm->mmap_sem);
+
+ WARN_ON(!q.pi_state);
+ /*
+ * Block on the PI mutex:
+ */
+ if (!trylock)
+ ret = rt_mutex_timed_lock(&q.pi_state->pi_mutex, to, 1);
+ else {
+ ret = rt_mutex_trylock(&q.pi_state->pi_mutex);
+ /* Fixup the trylock return value: */
+ ret = ret ? 0 : -EWOULDBLOCK;
+ }
+
+ down_read(&curr->mm->mmap_sem);
+ hb = queue_lock(&q, -1, NULL);
+
+ /*
+ * Got the lock. We might not be the anticipated owner if we
+ * did a lock-steal - fix up the PI-state in that case.
+ */
+ if (!ret && q.pi_state->owner != curr) {
+ u32 newtid = current->pid | FUTEX_WAITERS;
+
+ /* Owner died? */
+ if (q.pi_state->owner != NULL) {
+ spin_lock_irq(&q.pi_state->owner->pi_lock);
+ list_del_init(&q.pi_state->list);
+ spin_unlock_irq(&q.pi_state->owner->pi_lock);
+ } else
+ newtid |= FUTEX_OWNER_DIED;
+
+ q.pi_state->owner = current;
+
+ spin_lock_irq(&current->pi_lock);
+ list_add(&q.pi_state->list, &current->pi_state_list);
+ spin_unlock_irq(&current->pi_lock);
+
+ /* Unqueue and drop the lock */
+ unqueue_me_pi(&q, hb);
+ up_read(&curr->mm->mmap_sem);
+ /*
+ * We own it, so we have to replace the pending owner
+ * TID. This must be atomic as we have preserve the
+ * owner died bit here.
+ */
+ ret = get_user(uval, uaddr);
+ while (!ret) {
+ newval = (uval & FUTEX_OWNER_DIED) | newtid;
+ curval = futex_atomic_cmpxchg_inatomic(uaddr,
+ uval, newval);
+ if (curval == -EFAULT)
+ ret = -EFAULT;
+ if (curval == uval)
+ break;
+ uval = curval;
+ }
+ } else {
+ /*
+ * Catch the rare case, where the lock was released
+ * when we were on the way back before we locked
+ * the hash bucket.
+ */
+ if (ret && q.pi_state->owner == curr) {
+ if (rt_mutex_trylock(&q.pi_state->pi_mutex))
+ ret = 0;
+ }
+ /* Unqueue and drop the lock */
+ unqueue_me_pi(&q, hb);
+ up_read(&curr->mm->mmap_sem);
+ }
+
+ if (!detect && ret == -EDEADLK && 0)
+ force_sig(SIGKILL, current);
+
+ return ret;
+
+ out_unlock_release_sem:
+ queue_unlock(&q, hb);
+
+ out_release_sem:
+ up_read(&curr->mm->mmap_sem);
+ return ret;
+
+ uaddr_faulted:
+ /*
+ * We have to r/w *(int __user *)uaddr, but we can't modify it
+ * non-atomically. Therefore, if get_user below is not
+ * enough, we need to handle the fault ourselves, while
+ * still holding the mmap_sem.
+ */
+ if (attempt++) {
+ if (futex_handle_fault((unsigned long)uaddr, attempt))
+ goto out_unlock_release_sem;
+
+ goto retry_locked;
+ }
+
+ queue_unlock(&q, hb);
+ up_read(&curr->mm->mmap_sem);
+
+ ret = get_user(uval, uaddr);
+ if (!ret && (uval != -EFAULT))
+ goto retry;
+
+ return ret;
+}
+
+/*
+ * Restart handler
+ */
+static long futex_lock_pi_restart(struct restart_block *restart)
+{
+ struct hrtimer_sleeper timeout, *to = NULL;
+ int ret;
+
+ restart->fn = do_no_restart_syscall;
+
+ if (restart->arg2 || restart->arg3) {
+ to = &timeout;
+ hrtimer_init(&to->timer, CLOCK_REALTIME, HRTIMER_ABS);
+ hrtimer_init_sleeper(to, current);
+ to->timer.expires.tv64 = ((u64)restart->arg1 << 32) |
+ (u64) restart->arg0;
+ }
+
+ pr_debug("lock_pi restart: %p, %d (%d)\n",
+ (u32 __user *)restart->arg0, current->pid);
+
+ ret = do_futex_lock_pi((u32 __user *)restart->arg0, restart->arg1,
+ 0, to);
+
+ if (ret != -EINTR)
+ return ret;
+
+ restart->fn = futex_lock_pi_restart;
+
+ /* The other values are filled in */
+ return -ERESTART_RESTARTBLOCK;
+}
+
+/*
+ * Called from the syscall entry below.
+ */
+static int futex_lock_pi(u32 __user *uaddr, int detect, unsigned long sec,
+ long nsec, int trylock)
+{
+ struct hrtimer_sleeper timeout, *to = NULL;
+ struct restart_block *restart;
+ int ret;
+
+ if (sec != MAX_SCHEDULE_TIMEOUT) {
+ to = &timeout;
+ hrtimer_init(&to->timer, CLOCK_REALTIME, HRTIMER_ABS);
+ hrtimer_init_sleeper(to, current);
+ to->timer.expires = ktime_set(sec, nsec);
+ }
+
+ ret = do_futex_lock_pi(uaddr, detect, trylock, to);
+
+ if (ret != -EINTR)
+ return ret;
+
+ pr_debug("lock_pi interrupted: %p, %d (%d)\n", uaddr, current->pid);
+
+ restart = &current_thread_info()->restart_block;
+ restart->fn = futex_lock_pi_restart;
+ restart->arg0 = (unsigned long) uaddr;
+ restart->arg1 = detect;
+ if (to) {
+ restart->arg2 = to->timer.expires.tv64 & 0xFFFFFFFF;
+ restart->arg3 = to->timer.expires.tv64 >> 32;
+ } else
+ restart->arg2 = restart->arg3 = 0;
+
+ return -ERESTART_RESTARTBLOCK;
+}
+
+/*
+ * Userspace attempted a TID -> 0 atomic transition, and failed.
+ * This is the in-kernel slowpath: we look up the PI state (if any),
+ * and do the rt-mutex unlock.
+ */
+static int futex_unlock_pi(u32 __user *uaddr)
+{
+ struct futex_hash_bucket *hb;
+ struct futex_q *this, *next;
+ u32 uval;
+ struct list_head *head;
+ union futex_key key;
+ int ret, attempt = 0;
+
+retry:
+ if (get_user(uval, uaddr))
+ return -EFAULT;
+ /*
+ * We release only a lock we actually own:
+ */
+ if ((uval & FUTEX_TID_MASK) != current->pid)
+ return -EPERM;
+ /*
+ * First take all the futex related locks:
+ */
+ down_read(&current->mm->mmap_sem);
+
+ ret = get_futex_key(uaddr, &key);
+ if (unlikely(ret != 0))
+ goto out;
+
+ hb = hash_futex(&key);
+ spin_lock(&hb->lock);
+
+retry_locked:
+ /*
+ * To avoid races, try to do the TID -> 0 atomic transition
+ * again. If it succeeds then we can return without waking
+ * anyone else up:
+ */
+ inc_preempt_count();
+ uval = futex_atomic_cmpxchg_inatomic(uaddr, current->pid, 0);
+ dec_preempt_count();
+
+ if (unlikely(uval == -EFAULT))
+ goto pi_faulted;
+ /*
+ * Rare case: we managed to release the lock atomically,
+ * no need to wake anyone else up:
+ */
+ if (unlikely(uval == current->pid))
+ goto out_unlock;
+
+ /*
+ * Ok, other tasks may need to be woken up - check waiters
+ * and do the wakeup if necessary:
+ */
+ head = &hb->chain;
+
+ list_for_each_entry_safe(this, next, head, list) {
+ if (!match_futex (&this->key, &key))
+ continue;
+ ret = wake_futex_pi(uaddr, uval, this);
+ /*
+ * The atomic access to the futex value
+ * generated a pagefault, so retry the
+ * user-access and the wakeup:
+ */
+ if (ret == -EFAULT)
+ goto pi_faulted;
+ goto out_unlock;
+ }
+ /*
+ * No waiters - kernel unlocks the futex:
+ */
+ ret = unlock_futex_pi(uaddr, uval);
+ if (ret == -EFAULT)
+ goto pi_faulted;
+
+out_unlock:
+ spin_unlock(&hb->lock);
+out:
+ up_read(&current->mm->mmap_sem);
+
+ return ret;
+
+pi_faulted:
+ /*
+ * We have to r/w *(int __user *)uaddr, but we can't modify it
+ * non-atomically. Therefore, if get_user below is not
+ * enough, we need to handle the fault ourselves, while
+ * still holding the mmap_sem.
+ */
+ if (attempt++) {
+ if (futex_handle_fault((unsigned long)uaddr, attempt))
+ goto out_unlock;
+
+ goto retry_locked;
+ }
+
+ spin_unlock(&hb->lock);
up_read(&current->mm->mmap_sem);
+
+ ret = get_user(uval, uaddr);
+ if (!ret && (uval != -EFAULT))
+ goto retry;
+
return ret;
}
@@ -819,6 +1553,7 @@ static int futex_fd(u32 __user *uaddr, int signal)
err = -ENOMEM;
goto error;
}
+ q->pi_state = NULL;
down_read(&current->mm->mmap_sem);
err = get_futex_key(uaddr, &q->key);
@@ -856,7 +1591,7 @@ static int futex_fd(u32 __user *uaddr, int signal)
* Implementation: user-space maintains a per-thread list of locks it
* is holding. Upon do_exit(), the kernel carefully walks this list,
* and marks all locks that are owned by this thread with the
- * FUTEX_OWNER_DEAD bit, and wakes up a waiter (if any). The list is
+ * FUTEX_OWNER_DIED bit, and wakes up a waiter (if any). The list is
* always manipulated with the lock held, so the list is private and
* per-thread. Userspace also maintains a per-thread 'list_op_pending'
* field, to allow the kernel to clean up if the thread dies after
@@ -931,7 +1666,7 @@ sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
*/
int handle_futex_death(u32 __user *uaddr, struct task_struct *curr)
{
- u32 uval;
+ u32 uval, nval;
retry:
if (get_user(uval, uaddr))
@@ -948,8 +1683,12 @@ int handle_futex_death(u32 __user *uaddr, struct task_struct *curr)
* thread-death.) The rest of the cleanup is done in
* userspace.
*/
- if (futex_atomic_cmpxchg_inatomic(uaddr, uval,
- uval | FUTEX_OWNER_DIED) != uval)
+ nval = futex_atomic_cmpxchg_inatomic(uaddr, uval,
+ uval | FUTEX_OWNER_DIED);
+ if (nval == -EFAULT)
+ return -1;
+
+ if (nval != uval)
goto retry;
if (uval & FUTEX_WAITERS)
@@ -994,7 +1733,7 @@ void exit_robust_list(struct task_struct *curr)
while (entry != &head->list) {
/*
* A pending lock might already be on the list, so
- * dont process it twice:
+ * don't process it twice:
*/
if (entry != pending)
if (handle_futex_death((void *)entry + futex_offset,
@@ -1040,6 +1779,15 @@ long do_futex(u32 __user *uaddr, int op, u32 val, unsigned long timeout,
case FUTEX_WAKE_OP:
ret = futex_wake_op(uaddr, uaddr2, val, val2, val3);
break;
+ case FUTEX_LOCK_PI:
+ ret = futex_lock_pi(uaddr, val, timeout, val2, 0);
+ break;
+ case FUTEX_UNLOCK_PI:
+ ret = futex_unlock_pi(uaddr);
+ break;
+ case FUTEX_TRYLOCK_PI:
+ ret = futex_lock_pi(uaddr, 0, timeout, val2, 1);
+ break;
default:
ret = -ENOSYS;
}
@@ -1055,17 +1803,22 @@ asmlinkage long sys_futex(u32 __user *uaddr, int op, u32 val,
unsigned long timeout = MAX_SCHEDULE_TIMEOUT;
u32 val2 = 0;
- if (utime && (op == FUTEX_WAIT)) {
+ if (utime && (op == FUTEX_WAIT || op == FUTEX_LOCK_PI)) {
if (copy_from_user(&t, utime, sizeof(t)) != 0)
return -EFAULT;
if (!timespec_valid(&t))
return -EINVAL;
- timeout = timespec_to_jiffies(&t) + 1;
+ if (op == FUTEX_WAIT)
+ timeout = timespec_to_jiffies(&t) + 1;
+ else {
+ timeout = t.tv_sec;
+ val2 = t.tv_nsec;
+ }
}
/*
* requeue parameter in 'utime' if op == FUTEX_REQUEUE.
*/
- if (op >= FUTEX_REQUEUE)
+ if (op == FUTEX_REQUEUE || op == FUTEX_CMP_REQUEUE)
val2 = (u32) (unsigned long) utime;
return do_futex(uaddr, op, val, timeout, uaddr2, val2, val3);
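[Illustration, not part of the patch: to make the sys_futex() hunk above concrete — for FUTEX_WAIT the timespec still becomes a relative jiffies timeout, while for FUTEX_LOCK_PI the seconds ride in the 'timeout' argument and the nanoseconds in 'val2', from which futex_lock_pi() rebuilds an absolute CLOCK_REALTIME expiry (ktime_set() + HRTIMER_ABS). A hedged userspace fragment, continuing the sketch given earlier; the deadline computation is an assumption:]

	struct timespec deadline;

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 1;	/* give up after one second, absolute time */

	/* Kernel side: timeout = tv_sec, val2 = tv_nsec (see hunk above). */
	syscall(SYS_futex, uaddr, FUTEX_LOCK_PI, 0, &deadline, NULL, 0);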
diff --git a/kernel/futex_compat.c b/kernel/futex_compat.c
index 7e57c31670a3..d1d92b441fb7 100644
--- a/kernel/futex_compat.c
+++ b/kernel/futex_compat.c
@@ -129,14 +129,19 @@ asmlinkage long compat_sys_futex(u32 __user *uaddr, int op, u32 val,
unsigned long timeout = MAX_SCHEDULE_TIMEOUT;
int val2 = 0;
- if (utime && (op == FUTEX_WAIT)) {
+ if (utime && (op == FUTEX_WAIT || op == FUTEX_LOCK_PI)) {
if (get_compat_timespec(&t, utime))
return -EFAULT;
if (!timespec_valid(&t))
return -EINVAL;
- timeout = timespec_to_jiffies(&t) + 1;
+ if (op == FUTEX_WAIT)
+ timeout = timespec_to_jiffies(&t) + 1;
+ else {
+ timeout = t.tv_sec;
+ val2 = t.tv_nsec;
+ }
}
- if (op >= FUTEX_REQUEUE)
+ if (op == FUTEX_REQUEUE || op == FUTEX_CMP_REQUEUE)
val2 = (int) (unsigned long) utime;
return do_futex(uaddr, op, val, timeout, uaddr2, val2, val3);
diff --git a/kernel/rtmutex_common.h b/kernel/rtmutex_common.h
index e068024eeffc..9c75856e791e 100644
--- a/kernel/rtmutex_common.h
+++ b/kernel/rtmutex_common.h
@@ -112,4 +112,12 @@ static inline unsigned long rt_mutex_owner_pending(struct rt_mutex *lock)
return (unsigned long)lock->owner & RT_MUTEX_OWNER_PENDING;
}
+/*
+ * PI-futex support (proxy locking functions, etc.):
+ */
+extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock);
+extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+ struct task_struct *proxy_owner);
+extern void rt_mutex_proxy_unlock(struct rt_mutex *lock,
+ struct task_struct *proxy_owner);
#endif
commit 0cdbee9920fb37eb2dc49b860c2b28862d647adc
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:57 2006 -0700
[PATCH] pi-futex: rt mutex futex api
Add proxy-locking rt-mutex functionality needed by pi-futexes.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
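[Illustration, not part of the patch: a hedged sketch of how the pi-futex code from the previous commit drives this proxy API over a pi_state's embedded rt-mutex. 'owner' and 'to' are illustrative; the calls themselves appear in the futex.c hunks above.]

	/* First waiter: create the mutex already owned by the lock holder. */
	rt_mutex_init_proxy_locked(&pi_state->pi_mutex, owner);

	/* Waiter side: block (with PI boosting) on the proxy-owned mutex. */
	ret = rt_mutex_timed_lock(&pi_state->pi_mutex, to, 1);

	/* Cleanup side: release on behalf of an owner that never ran lock(). */
	rt_mutex_proxy_unlock(&pi_state->pi_mutex, pi_state->owner);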
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 39c8ca0cf526..3fc0f0680ca2 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -903,3 +903,58 @@ void __rt_mutex_init(struct rt_mutex *lock, const char *name)
debug_rt_mutex_init(lock, name);
}
EXPORT_SYMBOL_GPL(__rt_mutex_init);
+
+/**
+ * rt_mutex_init_proxy_locked - initialize and lock a rt_mutex on behalf of a
+ * proxy owner
+ *
+ * @lock: the rt_mutex to be locked
+ * @proxy_owner:the task to set as owner
+ *
+ * No locking. Caller has to do serializing itself
+ * Special API call for PI-futex support
+ */
+void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+ struct task_struct *proxy_owner)
+{
+ __rt_mutex_init(lock, NULL);
+ debug_rt_mutex_proxy_lock(lock, proxy_owner __RET_IP__);
+ rt_mutex_set_owner(lock, proxy_owner, 0);
+ rt_mutex_deadlock_account_lock(lock, proxy_owner);
+}
+
+/**
+ * rt_mutex_proxy_unlock - release a lock on behalf of owner
+ *
+ * @lock: the rt_mutex to be locked
+ *
+ * No locking. Caller has to do serializing itself
+ * Special API call for PI-futex support
+ */
+void rt_mutex_proxy_unlock(struct rt_mutex *lock,
+ struct task_struct *proxy_owner)
+{
+ debug_rt_mutex_proxy_unlock(lock);
+ rt_mutex_set_owner(lock, NULL, 0);
+ rt_mutex_deadlock_account_unlock(proxy_owner);
+}
+
+/**
+ * rt_mutex_next_owner - return the next owner of the lock
+ *
+ * @lock: the rt lock query
+ *
+ * Returns the next owner of the lock or NULL
+ *
+ * Caller has to serialize against other accessors to the lock
+ * itself.
+ *
+ * Special API call for PI-futex support
+ */
+struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock)
+{
+ if (!rt_mutex_has_waiters(lock))
+ return NULL;
+
+ return rt_mutex_top_waiter(lock)->task;
+}
commit e7eebaf6a81b956c989f184ee4b27277c88f8afe
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:55 2006 -0700
[PATCH] pi-futex: rt mutex debug
Runtime debugging functionality for rt-mutexes.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
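[Illustration, not part of the patch: a hedged sketch of the failure mode the new checks report — the function name is invented. A task exiting while still owning an rt-mutex now trips rt_mutex_debug_check_no_locks_held() in do_exit(), which prints the held lock and its acquisition site.]

	static void buggy_exit_path(struct rt_mutex *lock)
	{
		rt_mutex_lock(lock);
		/* bug: exits without rt_mutex_unlock(lock) */
		do_exit(0);	/* -> "BUG: ..., lock held at task exit time!" */
	}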
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0ac255720f34..c41a1299b8cf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1034,6 +1034,7 @@ static inline void
debug_check_no_locks_freed(const void *from, unsigned long len)
{
mutex_debug_check_no_locks_freed(from, len);
+ rt_mutex_debug_check_no_locks_freed(from, len);
}
#ifndef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 12309c916c68..fa4a3b82ba70 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -40,6 +40,19 @@ struct rt_mutex {
struct rt_mutex_waiter;
struct hrtimer_sleeper;
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+ extern int rt_mutex_debug_check_no_locks_freed(const void *from,
+ unsigned long len);
+ extern void rt_mutex_debug_check_no_locks_held(struct task_struct *task);
+#else
+ static inline int rt_mutex_debug_check_no_locks_freed(const void *from,
+ unsigned long len)
+ {
+ return 0;
+ }
+# define rt_mutex_debug_check_no_locks_held(task) do { } while (0)
+#endif
+
#ifdef CONFIG_DEBUG_RT_MUTEXES
# define __DEBUG_RT_MUTEX_INITIALIZER(mutexname) \
, .name = #mutexname, .file = __FILE__, .line = __LINE__
@@ -48,7 +61,7 @@ struct hrtimer_sleeper;
#else
# define __DEBUG_RT_MUTEX_INITIALIZER(mutexname)
# define rt_mutex_init(mutex) __rt_mutex_init(mutex, NULL)
-# define rt_mutex_debug_task_free(t) do { } while (0)
+# define rt_mutex_debug_task_free(t) do { } while (0)
#endif
#define __RT_MUTEX_INITIALIZER(mutexname) \
diff --git a/kernel/Makefile b/kernel/Makefile
index 21df9a338ff0..f9c92d34cde5 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -17,6 +17,7 @@ ifeq ($(CONFIG_COMPAT),y)
obj-$(CONFIG_FUTEX) += futex_compat.o
endif
obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
+obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
obj-$(CONFIG_SMP) += cpu.o spinlock.o
obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
diff --git a/kernel/exit.c b/kernel/exit.c
index 304ef637be6c..3e8a0282e9a5 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -929,6 +929,7 @@ fastcall NORET_TYPE void do_exit(long code)
* If DEBUG_MUTEXES is on, make sure we are holding no locks:
*/
mutex_debug_check_no_locks_held(tsk);
+ rt_mutex_debug_check_no_locks_held(tsk);
if (tsk->io_context)
exit_io_context();
diff --git a/kernel/rtmutex-debug.c b/kernel/rtmutex-debug.c
new file mode 100644
index 000000000000..4aa8a2c9f453
--- /dev/null
+++ b/kernel/rtmutex-debug.c
@@ -0,0 +1,513 @@
+/*
+ * RT-Mutexes: blocking mutual exclusion locks with PI support
+ *
+ * started by Ingo Molnar and Thomas Gleixner:
+ *
+ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
+ * This code is based on the rt.c implementation in the preempt-rt tree.
+ * Portions of said code are
+ *
+ * Copyright (C) 2004 LynuxWorks, Inc., Igor Manyilov, Bill Huey
+ * Copyright (C) 2006 Esben Nielsen
+ * Copyright (C) 2006 Kihon Technologies Inc.,
+ * Steven Rostedt <rostedt@goodmis.org>
+ *
+ * See rt.c in preempt-rt for proper credits and further information
+ */
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/kallsyms.h>
+#include <linux/syscalls.h>
+#include <linux/interrupt.h>
+#include <linux/plist.h>
+#include <linux/fs.h>
+
+#include "rtmutex_common.h"
+
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+# include "rtmutex-debug.h"
+#else
+# include "rtmutex.h"
+#endif
+
+# define TRACE_WARN_ON(x) WARN_ON(x)
+# define TRACE_BUG_ON(x) BUG_ON(x)
+
+# define TRACE_OFF() \
+do { \
+ if (rt_trace_on) { \
+ rt_trace_on = 0; \
+ console_verbose(); \
+ if (spin_is_locked(&current->pi_lock)) \
+ spin_unlock(&current->pi_lock); \
+ if (spin_is_locked(&current->held_list_lock)) \
+ spin_unlock(&current->held_list_lock); \
+ } \
+} while (0)
+
+# define TRACE_OFF_NOLOCK() \
+do { \
+ if (rt_trace_on) { \
+ rt_trace_on = 0; \
+ console_verbose(); \
+ } \
+} while (0)
+
+# define TRACE_BUG_LOCKED() \
+do { \
+ TRACE_OFF(); \
+ BUG(); \
+} while (0)
+
+# define TRACE_WARN_ON_LOCKED(c) \
+do { \
+ if (unlikely(c)) { \
+ TRACE_OFF(); \
+ WARN_ON(1); \
+ } \
+} while (0)
+
+# define TRACE_BUG_ON_LOCKED(c) \
+do { \
+ if (unlikely(c)) \
+ TRACE_BUG_LOCKED(); \
+} while (0)
+
+#ifdef CONFIG_SMP
+# define SMP_TRACE_BUG_ON_LOCKED(c) TRACE_BUG_ON_LOCKED(c)
+#else
+# define SMP_TRACE_BUG_ON_LOCKED(c) do { } while (0)
+#endif
+
+/*
+ * deadlock detection flag. We turn it off when we detect
+ * the first problem because we dont want to recurse back
+ * into the tracing code when doing error printk or
+ * executing a BUG():
+ */
+int rt_trace_on = 1;
+
+void deadlock_trace_off(void)
+{
+ rt_trace_on = 0;
+}
+
+static void printk_task(task_t *p)
+{
+ if (p)
+ printk("%16s:%5d [%p, %3d]", p->comm, p->pid, p, p->prio);
+ else
+ printk("<none>");
+}
+
+static void printk_task_short(task_t *p)
+{
+ if (p)
+ printk("%s/%d [%p, %3d]", p->comm, p->pid, p, p->prio);
+ else
+ printk("<none>");
+}
+
+static void printk_lock(struct rt_mutex *lock, int print_owner)
+{
+ if (lock->name)
+ printk(" [%p] {%s}\n",
+ lock, lock->name);
+ else
+ printk(" [%p] {%s:%d}\n",
+ lock, lock->file, lock->line);
+
+ if (print_owner && rt_mutex_owner(lock)) {
+ printk(".. ->owner: %p\n", lock->owner);
+ printk(".. held by: ");
+ printk_task(rt_mutex_owner(lock));
+ printk("\n");
+ }
+ if (rt_mutex_owner(lock)) {
+ printk("... acquired at: ");
+ print_symbol("%s\n", lock->acquire_ip);
+ }
+}
+
+static void printk_waiter(struct rt_mutex_waiter *w)
+{
+ printk("-------------------------\n");
+ printk("| waiter struct %p:\n", w);
+ printk("| w->list_entry: [DP:%p/%p|SP:%p/%p|PRI:%d]\n",
+ w->list_entry.plist.prio_list.prev, w->list_entry.plist.prio_list.next,
+ w->list_entry.plist.node_list.prev, w->list_entry.plist.node_list.next,
+ w->list_entry.prio);
+ printk("| w->pi_list_entry: [DP:%p/%p|SP:%p/%p|PRI:%d]\n",
+ w->pi_list_entry.plist.prio_list.prev, w->pi_list_entry.plist.prio_list.next,
+ w->pi_list_entry.plist.node_list.prev, w->pi_list_entry.plist.node_list.next,
+ w->pi_list_entry.prio);
+ printk("\n| lock:\n");
+ printk_lock(w->lock, 1);
+ printk("| w->ti->task:\n");
+ printk_task(w->task);
+ printk("| blocked at: ");
+ print_symbol("%s\n", w->ip);
+ printk("-------------------------\n");
+}
+
+static void show_task_locks(task_t *p)
+{
+ switch (p->state) {
+ case TASK_RUNNING: printk("R"); break;
+ case TASK_INTERRUPTIBLE: printk("S"); break;
+ case TASK_UNINTERRUPTIBLE: printk("D"); break;
+ case TASK_STOPPED: printk("T"); break;
+ case EXIT_ZOMBIE: printk("Z"); break;
+ case EXIT_DEAD: printk("X"); break;
+ default: printk("?"); break;
+ }
+ printk_task(p);
+ if (p->pi_blocked_on) {
+ struct rt_mutex *lock = p->pi_blocked_on->lock;
+
+ printk(" blocked on:");
+ printk_lock(lock, 1);
+ } else
+ printk(" (not blocked)\n");
+}
+
+void rt_mutex_show_held_locks(task_t *task, int verbose)
+{
+ struct list_head *curr, *cursor = NULL;
+ struct rt_mutex *lock;
+ task_t *t;
+ unsigned long flags;
+ int count = 0;
+
+ if (!rt_trace_on)
+ return;
+
+ if (verbose) {
+ printk("------------------------------\n");
+ printk("| showing all locks held by: | (");
+ printk_task_short(task);
+ printk("):\n");
+ printk("------------------------------\n");
+ }
+
+next:
+ spin_lock_irqsave(&task->held_list_lock, flags);
+ list_for_each(curr, &task->held_list_head) {
+ if (cursor && curr != cursor)
+ continue;
+ lock = list_entry(curr, struct rt_mutex, held_list_entry);
+ t = rt_mutex_owner(lock);
+ WARN_ON(t != task);
+ count++;
+ cursor = curr->next;
+ spin_unlock_irqrestore(&task->held_list_lock, flags);
+
+ printk("\n#%03d: ", count);
+ printk_lock(lock, 0);
+ goto next;
+ }
+ spin_unlock_irqrestore(&task->held_list_lock, flags);
+
+ printk("\n");
+}
+
+void rt_mutex_show_all_locks(void)
+{
+ task_t *g, *p;
+ int count = 10;
+ int unlock = 1;
+
+ printk("\n");
+ printk("----------------------\n");
+ printk("| showing all tasks: |\n");
+ printk("----------------------\n");
+
+ /*
+ * Here we try to get the tasklist_lock as hard as possible,
+ * if not successful after 2 seconds we ignore it (but keep
+ * trying). This is to enable a debug printout even if a
+ * tasklist_lock-holding task deadlocks or crashes.
+ */
+retry:
+ if (!read_trylock(&tasklist_lock)) {
+ if (count == 10)
+ printk("hm, tasklist_lock locked, retrying... ");
+ if (count) {
+ count--;
+ printk(" #%d", 10-count);
+ mdelay(200);
+ goto retry;
+ }
+ printk(" ignoring it.\n");
+ unlock = 0;
+ }
+ if (count != 10)
+ printk(" locked it.\n");
+
+ do_each_thread(g, p) {
+ show_task_locks(p);
+ if (!unlock)
+ if (read_trylock(&tasklist_lock))
+ unlock = 1;
+ } while_each_thread(g, p);
+
+ printk("\n");
+
+ printk("-----------------------------------------\n");
+ printk("| showing all locks held in the system: |\n");
+ printk("-----------------------------------------\n");
+
+ do_each_thread(g, p) {
+ rt_mutex_show_held_locks(p, 0);
+ if (!unlock)
+ if (read_trylock(&tasklist_lock))
+ unlock = 1;
+ } while_each_thread(g, p);
+
+
+ printk("=============================================\n\n");
+
+ if (unlock)
+ read_unlock(&tasklist_lock);
+}
+
+void rt_mutex_debug_check_no_locks_held(task_t *task)
+{
+ struct rt_mutex_waiter *w;
+ struct list_head *curr;
+ struct rt_mutex *lock;
+
+ if (!rt_trace_on)
+ return;
+ if (!rt_prio(task->normal_prio) && rt_prio(task->prio)) {
+ printk("BUG: PI priority boost leaked!\n");
+ printk_task(task);
+ printk("\n");
+ }
+ if (list_empty(&task->held_list_head))
+ return;
+
+ spin_lock(&task->pi_lock);
+ plist_for_each_entry(w, &task->pi_waiters, pi_list_entry) {
+ TRACE_OFF();
+
+ printk("hm, PI interest held at exit time? Task:\n");
+ printk_task(task);
+ printk_waiter(w);
+ return;
+ }
+ spin_unlock(&task->pi_lock);
+
+ list_for_each(curr, &task->held_list_head) {
+ lock = list_entry(curr, struct rt_mutex, held_list_entry);
+
+ printk("BUG: %s/%d, lock held at task exit time!\n",
+ task->comm, task->pid);
+ printk_lock(lock, 1);
+ if (rt_mutex_owner(lock) != task)
+ printk("exiting task is not even the owner??\n");
+ }
+}
+
+int rt_mutex_debug_check_no_locks_freed(const void *from, unsigned long len)
+{
+ const void *to = from + len;
+ struct list_head *curr;
+ struct rt_mutex *lock;
+ unsigned long flags;
+ void *lock_addr;
+
+ if (!rt_trace_on)
+ return 0;
+
+ spin_lock_irqsave(&current->held_list_lock, flags);
+ list_for_each(curr, &current->held_list_head) {
+ lock = list_entry(curr, struct rt_mutex, held_list_entry);
+ lock_addr = lock;
+ if (lock_addr < from || lock_addr >= to)
+ continue;
+ TRACE_OFF();
+
+ printk("BUG: %s/%d, active lock [%p(%p-%p)] freed!\n",
+ current->comm, current->pid, lock, from, to);
+ dump_stack();
+ printk_lock(lock, 1);
+ if (rt_mutex_owner(lock) != current)
+ printk("freeing task is not even the owner??\n");
+ return 1;
+ }
+ spin_unlock_irqrestore(&current->held_list_lock, flags);
+
+ return 0;
+}
+
+void rt_mutex_debug_task_free(struct task_struct *task)
+{
+ WARN_ON(!plist_head_empty(&task->pi_waiters));
+ WARN_ON(task->pi_blocked_on);
+}
+
+/*
+ * We fill out the fields in the waiter to store the information about
+ * the deadlock. We print when we return. act_waiter can be NULL in
+ * case of a remove waiter operation.
+ */
+void debug_rt_mutex_deadlock(int detect, struct rt_mutex_waiter *act_waiter,
+ struct rt_mutex *lock)
+{
+ struct task_struct *task;
+
+ if (!rt_trace_on || detect || !act_waiter)
+ return;
+
+ task = rt_mutex_owner(act_waiter->lock);
+ if (task && task != current) {
+ act_waiter->deadlock_task_pid = task->pid;
+ act_waiter->deadlock_lock = lock;
+ }
+}
+
+void debug_rt_mutex_print_deadlock(struct rt_mutex_waiter *waiter)
+{
+ struct task_struct *task;
+
+ if (!waiter->deadlock_lock || !rt_trace_on)
+ return;
+
+ task = find_task_by_pid(waiter->deadlock_task_pid);
+ if (!task)
+ return;
+
+ TRACE_OFF_NOLOCK();
+
+ printk("\n============================================\n");
+ printk( "[ BUG: circular locking deadlock detected! ]\n");
+ printk( "--------------------------------------------\n");
+ printk("%s/%d is deadlocking current task %s/%d\n\n",
+ task->comm, task->pid, current->comm, current->pid);
+
+ printk("\n1) %s/%d is trying to acquire this lock:\n",
+ current->comm, current->pid);
+ printk_lock(waiter->lock, 1);
+
+ printk("... trying at: ");
+ print_symbol("%s\n", waiter->ip);
+
+ printk("\n2) %s/%d is blocked on this lock:\n", task->comm, task->pid);
+ printk_lock(waiter->deadlock_lock, 1);
+
+ rt_mutex_show_held_locks(current, 1);
+ rt_mutex_show_held_locks(task, 1);
+
+ printk("\n%s/%d's [blocked] stackdump:\n\n", task->comm, task->pid);
+ show_stack(task, NULL);
+ printk("\n%s/%d's [current] stackdump:\n\n",
+ current->comm, current->pid);
+ dump_stack();
+ rt_mutex_show_all_locks();
+ printk("[ turning off deadlock detection."
+ "Please report this trace. ]\n\n");
+ local_irq_disable();
+}
+
+void debug_rt_mutex_lock(struct rt_mutex *lock __IP_DECL__)
+{
+ unsigned long flags;
+
+ if (rt_trace_on) {
+ TRACE_WARN_ON_LOCKED(!list_empty(&lock->held_list_entry));
+
+ spin_lock_irqsave(&current->held_list_lock, flags);
+ list_add_tail(&lock->held_list_entry, &current->held_list_head);
+ spin_unlock_irqrestore(&current->held_list_lock, flags);
+
+ lock->acquire_ip = ip;
+ }
+}
+
+void debug_rt_mutex_unlock(struct rt_mutex *lock)
+{
+ unsigned long flags;
+
+ if (rt_trace_on) {
+ TRACE_WARN_ON_LOCKED(rt_mutex_owner(lock) != current);
+ TRACE_WARN_ON_LOCKED(list_empty(&lock->held_list_entry));
+
+ spin_lock_irqsave(&current->held_list_lock, flags);
+ list_del_init(&lock->held_list_entry);
+ spin_unlock_irqrestore(&current->held_list_lock, flags);
+ }
+}
+
+void debug_rt_mutex_proxy_lock(struct rt_mutex *lock,
+ struct task_struct *powner __IP_DECL__)
+{
+ unsigned long flags;
+
+ if (rt_trace_on) {
+ TRACE_WARN_ON_LOCKED(!list_empty(&lock->held_list_entry));
+
+ spin_lock_irqsave(&powner->held_list_lock, flags);
+ list_add_tail(&lock->held_list_entry, &powner->held_list_head);
+ spin_unlock_irqrestore(&powner->held_list_lock, flags);
+
+ lock->acquire_ip = ip;
+ }
+}
+
+void debug_rt_mutex_proxy_unlock(struct rt_mutex *lock)
+{
+ unsigned long flags;
+
+ if (rt_trace_on) {
+ struct task_struct *owner = rt_mutex_owner(lock);
+
+ TRACE_WARN_ON_LOCKED(!owner);
+ TRACE_WARN_ON_LOCKED(list_empty(&lock->held_list_entry));
+
+ spin_lock_irqsave(&owner->held_list_lock, flags);
+ list_del_init(&lock->held_list_entry);
+ spin_unlock_irqrestore(&owner->held_list_lock, flags);
+ }
+}
+
+void debug_rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)
+{
+ memset(waiter, 0x11, sizeof(*waiter));
+ plist_node_init(&waiter->list_entry, MAX_PRIO);
+ plist_node_init(&waiter->pi_list_entry, MAX_PRIO);
+}
+
+void debug_rt_mutex_free_waiter(struct rt_mutex_waiter *waiter)
+{
+ TRACE_WARN_ON(!plist_node_empty(&waiter->list_entry));
+ TRACE_WARN_ON(!plist_node_empty(&waiter->pi_list_entry));
+ TRACE_WARN_ON(waiter->task);
+ memset(waiter, 0x22, sizeof(*waiter));
+}
+
+void debug_rt_mutex_init(struct rt_mutex *lock, const char *name)
+{
+ void *addr = lock;
+
+ if (rt_trace_on) {
+ rt_mutex_debug_check_no_locks_freed(addr,
+ sizeof(struct rt_mutex));
+ INIT_LIST_HEAD(&lock->held_list_entry);
+ lock->name = name;
+ }
+}
+
+void rt_mutex_deadlock_account_lock(struct rt_mutex *lock, task_t *task)
+{
+}
+
+void rt_mutex_deadlock_account_unlock(struct task_struct *task)
+{
+}
+
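Before the header diff below, note what the debug build just gained: rt_mutex_debug_check_no_locks_freed() lets an allocator verify, on every free, that the range being released does not still contain a held rt-mutex - this is why the mm/slab.c hunk further down pulls in <linux/rtmutex.h>. A minimal sketch of such a call site (the caller name is hypothetical; the check prints a report and returns 1 when it finds a live lock):

/* Sketch: an allocator-side sanity check before releasing an object. */
static void example_free_checked(void *obj, unsigned long size)
{
	/* Warns, dumps a stack trace and returns 1 if a held
	 * rt_mutex lives inside [obj, obj + size). */
	rt_mutex_debug_check_no_locks_freed(obj, size);
	kfree(obj);
}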
diff --git a/kernel/rtmutex-debug.h b/kernel/rtmutex-debug.h
new file mode 100644
index 000000000000..7612fbc62d70
--- /dev/null
+++ b/kernel/rtmutex-debug.h
@@ -0,0 +1,37 @@
+/*
+ * RT-Mutexes: blocking mutual exclusion locks with PI support
+ *
+ * started by Ingo Molnar and Thomas Gleixner:
+ *
+ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
+ * This file contains macros used solely by rtmutex.c. Debug version.
+ */
+
+#define __IP_DECL__ , unsigned long ip
+#define __IP__ , ip
+#define __RET_IP__ , (unsigned long)__builtin_return_address(0)
+
+extern void
+rt_mutex_deadlock_account_lock(struct rt_mutex *lock, struct task_struct *task);
+extern void rt_mutex_deadlock_account_unlock(struct task_struct *task);
+extern void debug_rt_mutex_init_waiter(struct rt_mutex_waiter *waiter);
+extern void debug_rt_mutex_free_waiter(struct rt_mutex_waiter *waiter);
+extern void debug_rt_mutex_init(struct rt_mutex *lock, const char *name);
+extern void debug_rt_mutex_lock(struct rt_mutex *lock __IP_DECL__);
+extern void debug_rt_mutex_unlock(struct rt_mutex *lock);
+extern void debug_rt_mutex_proxy_lock(struct rt_mutex *lock,
+ struct task_struct *powner __IP_DECL__);
+extern void debug_rt_mutex_proxy_unlock(struct rt_mutex *lock);
+extern void debug_rt_mutex_deadlock(int detect, struct rt_mutex_waiter *waiter,
+ struct rt_mutex *lock);
+extern void debug_rt_mutex_print_deadlock(struct rt_mutex_waiter *waiter);
+# define debug_rt_mutex_reset_waiter(w) \
+ do { (w)->deadlock_lock = NULL; } while (0)
+
+static inline int debug_rt_mutex_detect_deadlock(struct rt_mutex_waiter *waiter,
+ int detect)
+{
+ return (waiter != NULL);
+}
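The __IP_DECL__/__IP__/__RET_IP__ trio above is how the debug build threads the caller's instruction pointer through the locking paths at zero cost to the non-debug build (kernel/rtmutex.h, further down, defines all three as empty). A sketch of the expansion, using an illustrative function name that is not part of the patch:

/* As written in rtmutex.c: */
static int example_take(struct rt_mutex *lock __IP_DECL__)
{
	return 0;	/* body elided; only the signature matters here */
}
/* debug build:     static int example_take(struct rt_mutex *lock, unsigned long ip) */
/* non-debug build: static int example_take(struct rt_mutex *lock) */

static void example_caller(struct rt_mutex *lock)
{
	/*
	 * __RET_IP__ appends ", (unsigned long)__builtin_return_address(0)"
	 * in the debug build and nothing otherwise:
	 */
	example_take(lock __RET_IP__);
}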
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 8bab0102ac73..06d3ea1133c9 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -107,6 +107,19 @@ config DEBUG_MUTEXES
This allows mutex semantics violations and mutex related deadlocks
(lockups) to be detected and reported automatically.
+config DEBUG_RT_MUTEXES
+ bool "RT Mutex debugging, deadlock detection"
+ default y
+ depends on DEBUG_KERNEL && RT_MUTEXES
+ help
+ This allows rt mutex semantics violations and rt mutex related
+ deadlocks (lockups) to be detected and reported automatically.
+
+config DEBUG_PI_LIST
+ bool
+ default y
+ depends on DEBUG_RT_MUTEXES
+
config DEBUG_SPINLOCK
bool "Spinlock debugging"
depends on DEBUG_KERNEL
diff --git a/mm/slab.c b/mm/slab.c
index f378d027c684..233e39d14caf 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -107,6 +107,7 @@
#include <linux/nodemask.h>
#include <linux/mempolicy.h>
#include <linux/mutex.h>
+#include <linux/rtmutex.h>
#include <asm/uaccess.h>
#include <asm/cacheflush.h>
commit 23f78d4a03c53cbd75d87a795378ea540aa08c86
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:53 2006 -0700
[PATCH] pi-futex: rt mutex core
Core functions for the rt-mutex subsystem.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 678c1a90380d..3a256957fb56 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -124,6 +124,7 @@ extern struct group_info init_groups;
.cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \
.fs_excl = ATOMIC_INIT(0), \
.pi_lock = SPIN_LOCK_UNLOCKED, \
+ INIT_RT_MUTEXES(tsk) \
}
diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
new file mode 100644
index 000000000000..12309c916c68
--- /dev/null
+++ b/include/linux/rtmutex.h
@@ -0,0 +1,104 @@
+/*
+ * RT Mutexes: blocking mutual exclusion locks with PI support
+ *
+ * started by Ingo Molnar and Thomas Gleixner:
+ *
+ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
+ * This file contains the public data structure and API definitions.
+ */
+
+#ifndef __LINUX_RT_MUTEX_H
+#define __LINUX_RT_MUTEX_H
+
+#include <linux/linkage.h>
+#include <linux/plist.h>
+#include <linux/spinlock_types.h>
+
+/*
+ * The rt_mutex structure
+ *
+ * @wait_lock: spinlock to protect the structure
+ * @wait_list: pilist head to enqueue waiters in priority order
+ * @owner: the mutex owner
+ */
+struct rt_mutex {
+ spinlock_t wait_lock;
+ struct plist_head wait_list;
+ struct task_struct *owner;
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+ int save_state;
+ struct list_head held_list_entry;
+ unsigned long acquire_ip;
+ const char *name, *file;
+ int line;
+ void *magic;
+#endif
+};
+
+struct rt_mutex_waiter;
+struct hrtimer_sleeper;
+
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+# define __DEBUG_RT_MUTEX_INITIALIZER(mutexname) \
+ , .name = #mutexname, .file = __FILE__, .line = __LINE__
+# define rt_mutex_init(mutex) __rt_mutex_init(mutex, __FUNCTION__)
+ extern void rt_mutex_debug_task_free(struct task_struct *tsk);
+#else
+# define __DEBUG_RT_MUTEX_INITIALIZER(mutexname)
+# define rt_mutex_init(mutex) __rt_mutex_init(mutex, NULL)
+# define rt_mutex_debug_task_free(t) do { } while (0)
+#endif
+
+#define __RT_MUTEX_INITIALIZER(mutexname) \
+ { .wait_lock = SPIN_LOCK_UNLOCKED \
+ , .wait_list = PLIST_HEAD_INIT(mutexname.wait_list, mutexname.wait_lock) \
+ , .owner = NULL \
+ __DEBUG_RT_MUTEX_INITIALIZER(mutexname)}
+
+#define DEFINE_RT_MUTEX(mutexname) \
+ struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
+
+/***
+ * rt_mutex_is_locked - is the mutex locked
+ * @lock: the mutex to be queried
+ *
+ * Returns 1 if the mutex is locked, 0 if unlocked.
+ */
+static inline int rt_mutex_is_locked(struct rt_mutex *lock)
+{
+ return lock->owner != NULL;
+}
+
+extern void __rt_mutex_init(struct rt_mutex *lock, const char *name);
+extern void rt_mutex_destroy(struct rt_mutex *lock);
+
+extern void rt_mutex_lock(struct rt_mutex *lock);
+extern int rt_mutex_lock_interruptible(struct rt_mutex *lock,
+ int detect_deadlock);
+extern int rt_mutex_timed_lock(struct rt_mutex *lock,
+ struct hrtimer_sleeper *timeout,
+ int detect_deadlock);
+
+extern int rt_mutex_trylock(struct rt_mutex *lock);
+
+extern void rt_mutex_unlock(struct rt_mutex *lock);
+
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+# define INIT_RT_MUTEX_DEBUG(tsk) \
+ .held_list_head = LIST_HEAD_INIT(tsk.held_list_head), \
+ .held_list_lock = SPIN_LOCK_UNLOCKED
+#else
+# define INIT_RT_MUTEX_DEBUG(tsk)
+#endif
+
+#ifdef CONFIG_RT_MUTEXES
+# define INIT_RT_MUTEXES(tsk) \
+ .pi_waiters = PLIST_HEAD_INIT(tsk.pi_waiters, tsk.pi_lock), \
+ INIT_RT_MUTEX_DEBUG(tsk)
+#else
+# define INIT_RT_MUTEXES(tsk)
+#endif
+
+#endif
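Everything a client of the new lock type needs is in this header: static definition via DEFINE_RT_MUTEX(), runtime setup via rt_mutex_init(), and the lock/trylock/unlock entry points declared above. A minimal usage sketch (the identifiers are illustrative, not part of the patch):

#include <linux/errno.h>
#include <linux/rtmutex.h>

static DEFINE_RT_MUTEX(example_lock);	/* statically initialized, unlocked */

static void example_critical_section(void)
{
	rt_mutex_lock(&example_lock);	/* may sleep; boosts the owner if contended */
	/* ... data protected by example_lock ... */
	rt_mutex_unlock(&example_lock);	/* undoes any PI boost on release */
}

static int example_try(void)
{
	if (!rt_mutex_trylock(&example_lock))	/* 1 on success, 0 on contention */
		return -EBUSY;
	/* ... */
	rt_mutex_unlock(&example_lock);
	return 0;
}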
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6f167645e7e2..6ea23c9af413 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -73,6 +73,7 @@ struct sched_param {
#include <linux/seccomp.h>
#include <linux/rcupdate.h>
#include <linux/futex.h>
+#include <linux/rtmutex.h>
#include <linux/time.h>
#include <linux/param.h>
@@ -858,6 +859,17 @@ struct task_struct {
/* Protection of the PI data structures: */
spinlock_t pi_lock;
+#ifdef CONFIG_RT_MUTEXES
+ /* PI waiters blocked on a rt_mutex held by this task */
+ struct plist_head pi_waiters;
+ /* Deadlock detection and priority inheritance handling */
+ struct rt_mutex_waiter *pi_blocked_on;
+# ifdef CONFIG_DEBUG_RT_MUTEXES
+ spinlock_t held_list_lock;
+ struct list_head held_list_head;
+# endif
+#endif
+
#ifdef CONFIG_DEBUG_MUTEXES
/* mutex deadlock detection */
struct mutex_waiter *blocked_on;
diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
index bee12a7a0576..46e4d8f2771f 100644
--- a/include/linux/sysctl.h
+++ b/include/linux/sysctl.h
@@ -149,6 +149,7 @@ enum
KERN_ACPI_VIDEO_FLAGS=71, /* int: flags for setting up video after ACPI sleep */
KERN_IA64_UNALIGNED=72, /* int: ia64 unaligned userland trap enable */
KERN_COMPAT_LOG=73, /* int: print compat layer messages */
+ KERN_MAX_LOCK_DEPTH=74,
};
diff --git a/init/Kconfig b/init/Kconfig
index df55b3665601..f70f2fd273c2 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -339,9 +339,14 @@ config BASE_FULL
kernel data structures. This saves memory on small machines,
but may reduce performance.
+config RT_MUTEXES
+ boolean
+ select PLIST
+
config FUTEX
bool "Enable futex support" if EMBEDDED
default y
+ select RT_MUTEXES
help
Disabling this option will cause the kernel to be built without
support for "fast userspace mutexes". The resulting kernel may not
diff --git a/kernel/Makefile b/kernel/Makefile
index 752bd7d383af..21df9a338ff0 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -16,6 +16,7 @@ obj-$(CONFIG_FUTEX) += futex.o
ifeq ($(CONFIG_COMPAT),y)
obj-$(CONFIG_FUTEX) += futex_compat.o
endif
+obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
obj-$(CONFIG_SMP) += cpu.o spinlock.o
obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
diff --git a/kernel/fork.c b/kernel/fork.c
index 9b4e54ef0225..b664a081fffa 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -104,6 +104,7 @@ static kmem_cache_t *mm_cachep;
void free_task(struct task_struct *tsk)
{
free_thread_info(tsk->thread_info);
+ rt_mutex_debug_task_free(tsk);
free_task_struct(tsk);
}
EXPORT_SYMBOL(free_task);
@@ -913,6 +914,19 @@ asmlinkage long sys_set_tid_address(int __user *tidptr)
return current->pid;
}
+static inline void rt_mutex_init_task(struct task_struct *p)
+{
+#ifdef CONFIG_RT_MUTEXES
+ spin_lock_init(&p->pi_lock);
+ plist_head_init(&p->pi_waiters, &p->pi_lock);
+ p->pi_blocked_on = NULL;
+# ifdef CONFIG_DEBUG_RT_MUTEXES
+ spin_lock_init(&p->held_list_lock);
+ INIT_LIST_HEAD(&p->held_list_head);
+# endif
+#endif
+}
+
/*
* This creates a new process as a copy of the old one,
* but does not actually start it yet.
@@ -1034,6 +1048,8 @@ static task_t *copy_process(unsigned long clone_flags,
mpol_fix_fork_child_flag(p);
#endif
+ rt_mutex_init_task(p);
+
#ifdef CONFIG_DEBUG_MUTEXES
p->blocked_on = NULL; /* not blocked yet */
#endif
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
new file mode 100644
index 000000000000..937a474fae94
--- /dev/null
+++ b/kernel/rtmutex.c
@@ -0,0 +1,904 @@
+/*
+ * RT-Mutexes: simple blocking mutual exclusion locks with PI support
+ *
+ * started by Ingo Molnar and Thomas Gleixner.
+ *
+ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2005-2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ * Copyright (C) 2005 Kihon Technologies Inc., Steven Rostedt
+ * Copyright (C) 2006 Esben Nielsen
+ */
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/timer.h>
+
+#include "rtmutex_common.h"
+
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+# include "rtmutex-debug.h"
+#else
+# include "rtmutex.h"
+#endif
+
+/*
+ * lock->owner state tracking:
+ *
+ * lock->owner holds the task_struct pointer of the owner. Bit 0 and 1
+ * are used to keep track of the "owner is pending" and "lock has
+ * waiters" state.
+ *
+ * owner bit1 bit0
+ * NULL 0 0 lock is free (fast acquire possible)
+ * NULL 0 1 invalid state
+ * NULL 1 0 Transitional State*
+ * NULL 1 1 invalid state
+ * taskpointer 0 0 lock is held (fast release possible)
+ * taskpointer 0 1 task is pending owner
+ * taskpointer 1 0 lock is held and has waiters
+ * taskpointer 1 1 task is pending owner and lock has more waiters
+ *
+ * Pending ownership is assigned to the top (highest priority)
+ * waiter of the lock, when the lock is released. The thread is woken
+ * up and can now take the lock. Until the lock is taken (bit 0
+ * cleared) a competing higher priority thread can steal the lock
+ * which puts the woken up thread back on the waiters list.
+ *
+ * The fast atomic compare exchange based acquire and release is only
+ * possible when bit 0 and 1 of lock->owner are 0.
+ *
+ * (*) There is a small window where the owner can be NULL and the
+ * "lock has waiters" bit is set. This can happen when grabbing the lock.
+ * To prevent a cmpxchg of the owner releasing the lock, we need to set this
+ * bit before looking at the lock, hence the reason this is a transitional
+ * state.
+ */
+
+static void
+rt_mutex_set_owner(struct rt_mutex *lock, struct task_struct *owner,
+ unsigned long mask)
+{
+ unsigned long val = (unsigned long)owner | mask;
+
+ if (rt_mutex_has_waiters(lock))
+ val |= RT_MUTEX_HAS_WAITERS;
+
+ lock->owner = (struct task_struct *)val;
+}
+
+static inline void clear_rt_mutex_waiters(struct rt_mutex *lock)
+{
+ lock->owner = (struct task_struct *)
+ ((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS);
+}
+
+static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
+{
+ if (!rt_mutex_has_waiters(lock))
+ clear_rt_mutex_waiters(lock);
+}
+
+/*
+ * We can speed up the acquire/release, if the architecture
+ * supports cmpxchg and if there's no debugging state to be set up
+ */
+#if defined(__HAVE_ARCH_CMPXCHG) && !defined(CONFIG_DEBUG_RT_MUTEXES)
+# define rt_mutex_cmpxchg(l,c,n) (cmpxchg(&l->owner, c, n) == c)
+static inline void mark_rt_mutex_waiters(struct rt_mutex *lock)
+{
+ unsigned long owner, *p = (unsigned long *) &lock->owner;
+
+ do {
+ owner = *p;
+ } while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner);
+}
+#else
+# define rt_mutex_cmpxchg(l,c,n) (0)
+static inline void mark_rt_mutex_waiters(struct rt_mutex *lock)
+{
+ lock->owner = (struct task_struct *)
+ ((unsigned long)lock->owner | RT_MUTEX_HAS_WAITERS);
+}
+#endif
+
+/*
+ * Calculate task priority from the waiter list priority
+ *
+ * Return task->normal_prio when the waiter list is empty or when
+ * the waiter is not allowed to do priority boosting
+ */
+int rt_mutex_getprio(struct task_struct *task)
+{
+ if (likely(!task_has_pi_waiters(task)))
+ return task->normal_prio;
+
+ return min(task_top_pi_waiter(task)->pi_list_entry.prio,
+ task->normal_prio);
+}
+
+/*
+ * Adjust the priority of a task, after its pi_waiters got modified.
+ *
+ * This can be both boosting and unboosting. task->pi_lock must be held.
+ */
+static void __rt_mutex_adjust_prio(struct task_struct *task)
+{
+ int prio = rt_mutex_getprio(task);
+
+ if (task->prio != prio)
+ rt_mutex_setprio(task, prio);
+}
+
+/*
+ * Adjust task priority (undo boosting). Called from the exit path of
+ * rt_mutex_slowunlock() and rt_mutex_slowlock().
+ *
+ * (Note: We do this outside of the protection of lock->wait_lock to
+ * allow the lock to be taken while or before we readjust the priority
+ * of task. We do not use the spin_xx_mutex() variants here as we are
+ * outside of the debug path.)
+ */
+static void rt_mutex_adjust_prio(struct task_struct *task)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&task->pi_lock, flags);
+ __rt_mutex_adjust_prio(task);
+ spin_unlock_irqrestore(&task->pi_lock, flags);
+}
+
+/*
+ * Max number of times we'll walk the boosting chain:
+ */
+int max_lock_depth = 1024;
+
+/*
+ * Adjust the priority chain. Also used for deadlock detection.
+ * Decreases task's usage by one - may thus free the task.
+ * Returns 0 or -EDEADLK.
+ */
+static int rt_mutex_adjust_prio_chain(task_t *task,
+ int deadlock_detect,
+ struct rt_mutex *orig_lock,
+ struct rt_mutex_waiter *orig_waiter
+ __IP_DECL__)
+{
+ struct rt_mutex *lock;
+ struct rt_mutex_waiter *waiter, *top_waiter = orig_waiter;
+ int detect_deadlock, ret = 0, depth = 0;
+ unsigned long flags;
+
+ detect_deadlock = debug_rt_mutex_detect_deadlock(orig_waiter,
+ deadlock_detect);
+
+ /*
+ * The (de)boosting is a step by step approach with a lot of
+ * pitfalls. We want this to be preemptible and we want to hold a
+ * maximum of two locks per step. So we have to check
+ * carefully whether things change under us.
+ */
+ again:
+ if (++depth > max_lock_depth) {
+ static int prev_max;
+
+ /*
+ * Print this only once. If the admin changes the limit,
+ * print a new message when reaching the limit again.
+ */
+ if (prev_max != max_lock_depth) {
+ prev_max = max_lock_depth;
+ printk(KERN_WARNING "Maximum lock depth %d reached "
+ "task: %s (%d)\n", max_lock_depth,
+ current->comm, current->pid);
+ }
+ put_task_struct(task);
+
+ return deadlock_detect ? -EDEADLK : 0;
+ }
+ retry:
+ /*
+ * The task cannot go away, as we did a get_task_struct() before!
+ */
+ spin_lock_irqsave(&task->pi_lock, flags);
+
+ waiter = task->pi_blocked_on;
+ /*
+ * Check whether the end of the boosting chain has been
+ * reached or the state of the chain has changed while we
+ * dropped the locks.
+ */
+ if (!waiter || !waiter->task)
+ goto out_unlock_pi;
+
+ if (top_waiter && (!task_has_pi_waiters(task) ||
+ top_waiter != task_top_pi_waiter(task)))
+ goto out_unlock_pi;
+
+ /*
+ * When deadlock detection is off, we check whether further
+ * priority adjustment is necessary.
+ */
+ if (!detect_deadlock && waiter->list_entry.prio == task->prio)
+ goto out_unlock_pi;
+
+ lock = waiter->lock;
+ if (!spin_trylock(&lock->wait_lock)) {
+ spin_unlock_irqrestore(&task->pi_lock, flags);
+ cpu_relax();
+ goto retry;
+ }
+
+ /* Deadlock detection */
+ if (lock == orig_lock || rt_mutex_owner(lock) == current) {
+ debug_rt_mutex_deadlock(deadlock_detect, orig_waiter, lock);
+ spin_unlock(&lock->wait_lock);
+ ret = deadlock_detect ? -EDEADLK : 0;
+ goto out_unlock_pi;
+ }
+
+ top_waiter = rt_mutex_top_waiter(lock);
+
+ /* Requeue the waiter */
+ plist_del(&waiter->list_entry, &lock->wait_list);
+ waiter->list_entry.prio = task->prio;
+ plist_add(&waiter->list_entry, &lock->wait_list);
+
+ /* Release the task */
+ spin_unlock_irqrestore(&task->pi_lock, flags);
+ put_task_struct(task);
+
+ /* Grab the next task */
+ task = rt_mutex_owner(lock);
+ spin_lock_irqsave(&task->pi_lock, flags);
+
+ if (waiter == rt_mutex_top_waiter(lock)) {
+ /* Boost the owner */
+ plist_del(&top_waiter->pi_list_entry, &task->pi_waiters);
+ waiter->pi_list_entry.prio = waiter->list_entry.prio;
+ plist_add(&waiter->pi_list_entry, &task->pi_waiters);
+ __rt_mutex_adjust_prio(task);
+
+ } else if (top_waiter == waiter) {
+ /* Deboost the owner */
+ plist_del(&waiter->pi_list_entry, &task->pi_waiters);
+ waiter = rt_mutex_top_waiter(lock);
+ waiter->pi_list_entry.prio = waiter->list_entry.prio;
+ plist_add(&waiter->pi_list_entry, &task->pi_waiters);
+ __rt_mutex_adjust_prio(task);
+ }
+
+ get_task_struct(task);
+ spin_unlock_irqrestore(&task->pi_lock, flags);
+
+ top_waiter = rt_mutex_top_waiter(lock);
+ spin_unlock(&lock->wait_lock);
+
+ if (!detect_deadlock && waiter != top_waiter)
+ goto out_put_task;
+
+ goto again;
+
+ out_unlock_pi:
+ spin_unlock_irqrestore(&task->pi_lock, flags);
+ out_put_task:
+ put_task_struct(task);
+ return ret;
+}
+
+/*
+ * Optimization: check if we can steal the lock from the
+ * assigned pending owner [which might not have taken the
+ * lock yet]:
+ */
+static inline int try_to_steal_lock(struct rt_mutex *lock)
+{
+ struct task_struct *pendowner = rt_mutex_owner(lock);
+ struct rt_mutex_waiter *next;
+ unsigned long flags;
+
+ if (!rt_mutex_owner_pending(lock))
+ return 0;
+
+ if (pendowner == current)
+ return 1;
+
+ spin_lock_irqsave(&pendowner->pi_lock, flags);
+ if (current->prio >= pendowner->prio) {
+ spin_unlock_irqrestore(&pendowner->pi_lock, flags);
+ return 0;
+ }
+
+ /*
+ * Check if a waiter is enqueued on the pending owner's
+ * pi_waiters list. Remove it and readjust the pending owner's
+ * priority.
+ */
+ if (likely(!rt_mutex_has_waiters(lock))) {
+ spin_unlock_irqrestore(&pendowner->pi_lock, flags);
+ return 1;
+ }
+
+ /* No chain handling, pending owner is not blocked on anything: */
+ next = rt_mutex_top_waiter(lock);
+ plist_del(&next->pi_list_entry, &pendowner->pi_waiters);
+ __rt_mutex_adjust_prio(pendowner);
+ spin_unlock_irqrestore(&pendowner->pi_lock, flags);
+
+ /*
+ * We are going to steal the lock and a waiter was
+ * enqueued on the pending owner's pi_waiters queue. So
+ * we have to enqueue this waiter into the
+ * current->pi_waiters list. This covers the case where
+ * current is boosted because it holds another
+ * lock and gets unboosted because the booster is
+ * interrupted, so we would otherwise delay a waiter with
+ * higher priority than current->normal_prio.
+ *
+ * Note: in the rare case of a SCHED_OTHER task changing
+ * its priority and thus stealing the lock, next->task
+ * might be current:
+ */
+ if (likely(next->task != current)) {
+ spin_lock_irqsave(&current->pi_lock, flags);
+ plist_add(&next->pi_list_entry, &current->pi_waiters);
+ __rt_mutex_adjust_prio(current);
+ spin_unlock_irqrestore(&current->pi_lock, flags);
+ }
+ return 1;
+}
+
+/*
+ * Try to take an rt-mutex
+ *
+ * This fails
+ * - when the lock has a real owner
+ * - when a different pending owner exists and has higher priority than current
+ *
+ * Must be called with lock->wait_lock held.
+ */
+static int try_to_take_rt_mutex(struct rt_mutex *lock __IP_DECL__)
+{
+ /*
+ * We have to be careful here if the atomic speedups are
+ * enabled, such that, when
+ * - no other waiter is on the lock
+ * - the lock has been released since we did the cmpxchg
+ * the lock can be released or taken while we are doing the
+ * checks and marking the lock with RT_MUTEX_HAS_WAITERS.
+ *
+ * The atomic acquire/release aware variant of
+ * mark_rt_mutex_waiters uses a cmpxchg loop. After setting
+ * the WAITERS bit, the atomic release / acquire can not
+ * happen anymore and lock->wait_lock protects us from the
+ * non-atomic case.
+ *
+ * Note, that this might set lock->owner =
+ * RT_MUTEX_HAS_WAITERS in the case the lock is not contended
+ * any more. This is fixed up when we take the ownership.
+ * This is the transitional state explained at the top of this file.
+ */
+ mark_rt_mutex_waiters(lock);
+
+ if (rt_mutex_owner(lock) && !try_to_steal_lock(lock))
+ return 0;
+
+ /* We got the lock. */
+ debug_rt_mutex_lock(lock __IP__);
+
+ rt_mutex_set_owner(lock, current, 0);
+
+ rt_mutex_deadlock_account_lock(lock, current);
+
+ return 1;
+}
+
+/*
+ * Task blocks on lock.
+ *
+ * Prepare waiter and propagate pi chain
+ *
+ * This must be called with lock->wait_lock held.
+ */
+static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
+ struct rt_mutex_waiter *waiter,
+ int detect_deadlock
+ __IP_DECL__)
+{
+ struct rt_mutex_waiter *top_waiter = waiter;
+ task_t *owner = rt_mutex_owner(lock);
+ int boost = 0, res;
+ unsigned long flags;
+
+ spin_lock_irqsave(&current->pi_lock, flags);
+ __rt_mutex_adjust_prio(current);
+ waiter->task = current;
+ waiter->lock = lock;
+ plist_node_init(&waiter->list_entry, current->prio);
+ plist_node_init(&waiter->pi_list_entry, current->prio);
+
+ /* Get the top priority waiter on the lock */
+ if (rt_mutex_has_waiters(lock))
+ top_waiter = rt_mutex_top_waiter(lock);
+ plist_add(&waiter->list_entry, &lock->wait_list);
+
+ current->pi_blocked_on = waiter;
+
+ spin_unlock_irqrestore(&current->pi_lock, flags);
+
+ if (waiter == rt_mutex_top_waiter(lock)) {
+ spin_lock_irqsave(&owner->pi_lock, flags);
+ plist_del(&top_waiter->pi_list_entry, &owner->pi_waiters);
+ plist_add(&waiter->pi_list_entry, &owner->pi_waiters);
+
+ __rt_mutex_adjust_prio(owner);
+ if (owner->pi_blocked_on) {
+ boost = 1;
+ get_task_struct(owner);
+ }
+ spin_unlock_irqrestore(&owner->pi_lock, flags);
+ }
+ else if (debug_rt_mutex_detect_deadlock(waiter, detect_deadlock)) {
+ spin_lock_irqsave(&owner->pi_lock, flags);
+ if (owner->pi_blocked_on) {
+ boost = 1;
+ get_task_struct(owner);
+ }
+ spin_unlock_irqrestore(&owner->pi_lock, flags);
+ }
+ if (!boost)
+ return 0;
+
+ spin_unlock(&lock->wait_lock);
+
+ res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock,
+ waiter __IP__);
+
+ spin_lock(&lock->wait_lock);
+
+ return res;
+}
+
+/*
+ * Wake up the next waiter on the lock.
+ *
+ * Remove the top waiter from the current task's waiter list and from
+ * the lock waiter list. Set it as pending owner. Then wake it up.
+ *
+ * Called with lock->wait_lock held.
+ */
+static void wakeup_next_waiter(struct rt_mutex *lock)
+{
+ struct rt_mutex_waiter *waiter;
+ struct task_struct *pendowner;
+ unsigned long flags;
+
+ spin_lock_irqsave(&current->pi_lock, flags);
+
+ waiter = rt_mutex_top_waiter(lock);
+ plist_del(&waiter->list_entry, &lock->wait_list);
+
+ /*
+ * Remove it from current->pi_waiters. We do not adjust a
+ * possible priority boost right now. We execute wakeup in the
+ * boosted mode and go back to normal after releasing
+ * lock->wait_lock.
+ */
+ plist_del(&waiter->pi_list_entry, &current->pi_waiters);
+ pendowner = waiter->task;
+ waiter->task = NULL;
+
+ rt_mutex_set_owner(lock, pendowner, RT_MUTEX_OWNER_PENDING);
+
+ spin_unlock_irqrestore(&current->pi_lock, flags);
+
+ /*
+ * Clear the pi_blocked_on variable and enqueue a possible
+ * waiter into the pi_waiters list of the pending owner. This
+ * prevents the case where the pending owner gets unboosted
+ * while a waiter with higher priority than
+ * pending-owner->normal_prio is still blocked on it.
+ */
+ spin_lock_irqsave(&pendowner->pi_lock, flags);
+
+ WARN_ON(!pendowner->pi_blocked_on);
+ WARN_ON(pendowner->pi_blocked_on != waiter);
+ WARN_ON(pendowner->pi_blocked_on->lock != lock);
+
+ pendowner->pi_blocked_on = NULL;
+
+ if (rt_mutex_has_waiters(lock)) {
+ struct rt_mutex_waiter *next;
+
+ next = rt_mutex_top_waiter(lock);
+ plist_add(&next->pi_list_entry, &pendowner->pi_waiters);
+ }
+ spin_unlock_irqrestore(&pendowner->pi_lock, flags);
+
+ wake_up_process(pendowner);
+}
+
+/*
+ * Remove a waiter from a lock
+ *
+ * Must be called with lock->wait_lock held
+ */
+static void remove_waiter(struct rt_mutex *lock,
+ struct rt_mutex_waiter *waiter __IP_DECL__)
+{
+ int first = (waiter == rt_mutex_top_waiter(lock));
+ int boost = 0;
+ task_t *owner = rt_mutex_owner(lock);
+ unsigned long flags;
+
+ spin_lock_irqsave(&current->pi_lock, flags);
+ plist_del(&waiter->list_entry, &lock->wait_list);
+ waiter->task = NULL;
+ current->pi_blocked_on = NULL;
+ spin_unlock_irqrestore(&current->pi_lock, flags);
+
+ if (first && owner != current) {
+
+ spin_lock_irqsave(&owner->pi_lock, flags);
+
+ plist_del(&waiter->pi_list_entry, &owner->pi_waiters);
+
+ if (rt_mutex_has_waiters(lock)) {
+ struct rt_mutex_waiter *next;
+
+ next = rt_mutex_top_waiter(lock);
+ plist_add(&next->pi_list_entry, &owner->pi_waiters);
+ }
+ __rt_mutex_adjust_prio(owner);
+
+ if (owner->pi_blocked_on) {
+ boost = 1;
+ get_task_struct(owner);
+ }
+ spin_unlock_irqrestore(&owner->pi_lock, flags);
+ }
+
+ WARN_ON(!plist_node_empty(&waiter->pi_list_entry));
+
+ if (!boost)
+ return;
+
+ spin_unlock(&lock->wait_lock);
+
+ rt_mutex_adjust_prio_chain(owner, 0, lock, NULL __IP__);
+
+ spin_lock(&lock->wait_lock);
+}
+
+/*
+ * Slow path lock function:
+ */
+static int __sched
+rt_mutex_slowlock(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+ int detect_deadlock __IP_DECL__)
+{
+ struct rt_mutex_waiter waiter;
+ int ret = 0;
+
+ debug_rt_mutex_init_waiter(&waiter);
+ waiter.task = NULL;
+
+ spin_lock(&lock->wait_lock);
+
+ /* Try to acquire the lock again: */
+ if (try_to_take_rt_mutex(lock __IP__)) {
+ spin_unlock(&lock->wait_lock);
+ return 0;
+ }
+
+ set_current_state(state);
+
+ /* Setup the timer, when timeout != NULL */
+ if (unlikely(timeout))
+ hrtimer_start(&timeout->timer, timeout->timer.expires,
+ HRTIMER_ABS);
+
+ for (;;) {
+ /* Try to acquire the lock: */
+ if (try_to_take_rt_mutex(lock __IP__))
+ break;
+
+ /*
+ * TASK_INTERRUPTIBLE checks for signals and
+ * timeout. Ignored otherwise.
+ */
+ if (unlikely(state == TASK_INTERRUPTIBLE)) {
+ /* Signal pending? */
+ if (signal_pending(current))
+ ret = -EINTR;
+ if (timeout && !timeout->task)
+ ret = -ETIMEDOUT;
+ if (ret)
+ break;
+ }
+
+ /*
+ * waiter.task is NULL the first time we come here and
+ * when we have been woken up by the previous owner
+ * but the lock got stolen by a higher prio task.
+ */
+ if (!waiter.task) {
+ ret = task_blocks_on_rt_mutex(lock, &waiter,
+ detect_deadlock __IP__);
+ /*
+ * If we got woken up by the owner then start loop
+ * all over without going into schedule to try
+ * to get the lock now:
+ */
+ if (unlikely(!waiter.task))
+ continue;
+
+ if (unlikely(ret))
+ break;
+ }
+ spin_unlock(&lock->wait_lock);
+
+ debug_rt_mutex_print_deadlock(&waiter);
+
+ schedule();
+
+ spin_lock(&lock->wait_lock);
+ set_current_state(state);
+ }
+
+ set_current_state(TASK_RUNNING);
+
+ if (unlikely(waiter.task))
+ remove_waiter(lock, &waiter __IP__);
+
+ /*
+ * try_to_take_rt_mutex() sets the waiter bit
+ * unconditionally. We might have to fix that up.
+ */
+ fixup_rt_mutex_waiters(lock);
+
+ spin_unlock(&lock->wait_lock);
+
+ /* Remove pending timer: */
+ if (unlikely(timeout))
+ hrtimer_cancel(&timeout->timer);
+
+ /*
+ * Readjust priority, when we did not get the lock. We might
+ * have been the pending owner and boosted. Since we did not
+ * take the lock, the PI boost has to go.
+ */
+ if (unlikely(ret))
+ rt_mutex_adjust_prio(current);
+
+ debug_rt_mutex_free_waiter(&waiter);
+
+ return ret;
+}
+
+/*
+ * Slow path try-lock function:
+ */
+static inline int
+rt_mutex_slowtrylock(struct rt_mutex *lock __IP_DECL__)
+{
+ int ret = 0;
+
+ spin_lock(&lock->wait_lock);
+
+ if (likely(rt_mutex_owner(lock) != current)) {
+
+ ret = try_to_take_rt_mutex(lock __IP__);
+ /*
+ * try_to_take_rt_mutex() sets the lock waiters
+ * bit unconditionally. Clean this up.
+ */
+ fixup_rt_mutex_waiters(lock);
+ }
+
+ spin_unlock(&lock->wait_lock);
+
+ return ret;
+}
+
+/*
+ * Slow path to release a rt-mutex:
+ */
+static void __sched
+rt_mutex_slowunlock(struct rt_mutex *lock)
+{
+ spin_lock(&lock->wait_lock);
+
+ debug_rt_mutex_unlock(lock);
+
+ rt_mutex_deadlock_account_unlock(current);
+
+ if (!rt_mutex_has_waiters(lock)) {
+ lock->owner = NULL;
+ spin_unlock(&lock->wait_lock);
+ return;
+ }
+
+ wakeup_next_waiter(lock);
+
+ spin_unlock(&lock->wait_lock);
+
+ /* Undo pi boosting if necessary: */
+ rt_mutex_adjust_prio(current);
+}
+
+/*
+ * Debug-aware fast / slow path lock, trylock and unlock functions.
+ *
+ * The atomic acquire/release ops are compiled away when either the
+ * architecture does not support cmpxchg or when debugging is enabled.
+ */
+static inline int
+rt_mutex_fastlock(struct rt_mutex *lock, int state,
+ int detect_deadlock,
+ int (*slowfn)(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+ int detect_deadlock __IP_DECL__))
+{
+ if (!detect_deadlock && likely(rt_mutex_cmpxchg(lock, NULL, current))) {
+ rt_mutex_deadlock_account_lock(lock, current);
+ return 0;
+ } else
+ return slowfn(lock, state, NULL, detect_deadlock __RET_IP__);
+}
+
+static inline int
+rt_mutex_timed_fastlock(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout, int detect_deadlock,
+ int (*slowfn)(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+ int detect_deadlock __IP_DECL__))
+{
+ if (!detect_deadlock && likely(rt_mutex_cmpxchg(lock, NULL, current))) {
+ rt_mutex_deadlock_account_lock(lock, current);
+ return 0;
+ } else
+ return slowfn(lock, state, timeout, detect_deadlock __RET_IP__);
+}
+
+static inline int
+rt_mutex_fasttrylock(struct rt_mutex *lock,
+ int (*slowfn)(struct rt_mutex *lock __IP_DECL__))
+{
+ if (likely(rt_mutex_cmpxchg(lock, NULL, current))) {
+ rt_mutex_deadlock_account_lock(lock, current);
+ return 1;
+ }
+ return slowfn(lock __RET_IP__);
+}
+
+static inline void
+rt_mutex_fastunlock(struct rt_mutex *lock,
+ void (*slowfn)(struct rt_mutex *lock))
+{
+ if (likely(rt_mutex_cmpxchg(lock, current, NULL)))
+ rt_mutex_deadlock_account_unlock(current);
+ else
+ slowfn(lock);
+}
+
+/**
+ * rt_mutex_lock - lock a rt_mutex
+ *
+ * @lock: the rt_mutex to be locked
+ */
+void __sched rt_mutex_lock(struct rt_mutex *lock)
+{
+ might_sleep();
+
+ rt_mutex_fastlock(lock, TASK_UNINTERRUPTIBLE, 0, rt_mutex_slowlock);
+}
+EXPORT_SYMBOL_GPL(rt_mutex_lock);
+
+/**
+ * rt_mutex_lock_interruptible - lock a rt_mutex interruptible
+ *
+ * @lock: the rt_mutex to be locked
+ * @detect_deadlock: deadlock detection on/off
+ *
+ * Returns:
+ * 0 on success
+ * -EINTR when interrupted by a signal
+ * -EDEADLK when the lock would deadlock (when deadlock detection is on)
+ */
+int __sched rt_mutex_lock_interruptible(struct rt_mutex *lock,
+ int detect_deadlock)
+{
+ might_sleep();
+
+ return rt_mutex_fastlock(lock, TASK_INTERRUPTIBLE,
+ detect_deadlock, rt_mutex_slowlock);
+}
+EXPORT_SYMBOL_GPL(rt_mutex_lock_interruptible);
+
+/**
+ * rt_mutex_timed_lock - lock a rt_mutex interruptibly, with the
+ * timeout structure provided
+ * by the caller
+ *
+ * @lock: the rt_mutex to be locked
+ * @timeout: timeout structure or NULL (no timeout)
+ * @detect_deadlock: deadlock detection on/off
+ *
+ * Returns:
+ * 0 on success
+ * -EINTR when interrupted by a signal
+ * -ETIMEDOUT when the timeout expired
+ * -EDEADLK when the lock would deadlock (when deadlock detection is on)
+ */
+int
+rt_mutex_timed_lock(struct rt_mutex *lock, struct hrtimer_sleeper *timeout,
+ int detect_deadlock)
+{
+ might_sleep();
+
+ return rt_mutex_timed_fastlock(lock, TASK_INTERRUPTIBLE, timeout,
+ detect_deadlock, rt_mutex_slowlock);
+}
+EXPORT_SYMBOL_GPL(rt_mutex_timed_lock);
+
+/**
+ * rt_mutex_trylock - try to lock a rt_mutex
+ *
+ * @lock: the rt_mutex to be locked
+ *
+ * Returns 1 on success and 0 on contention
+ */
+int __sched rt_mutex_trylock(struct rt_mutex *lock)
+{
+ return rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
+}
+EXPORT_SYMBOL_GPL(rt_mutex_trylock);
+
+/**
+ * rt_mutex_unlock - unlock a rt_mutex
+ *
+ * @lock: the rt_mutex to be unlocked
+ */
+void __sched rt_mutex_unlock(struct rt_mutex *lock)
+{
+ rt_mutex_fastunlock(lock, rt_mutex_slowunlock);
+}
+EXPORT_SYMBOL_GPL(rt_mutex_unlock);
+
+/***
+ * rt_mutex_destroy - mark a mutex unusable
+ * @lock: the mutex to be destroyed
+ *
+ * This function marks the mutex uninitialized, and any subsequent
+ * use of the mutex is forbidden. The mutex must not be locked when
+ * this function is called.
+ */
+void rt_mutex_destroy(struct rt_mutex *lock)
+{
+ WARN_ON(rt_mutex_is_locked(lock));
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+ lock->magic = NULL;
+#endif
+}
+
+EXPORT_SYMBOL_GPL(rt_mutex_destroy);
+
+/**
+ * __rt_mutex_init - initialize the rt lock
+ *
+ * @lock: the rt lock to be initialized
+ *
+ * Initialize the rt lock to unlocked state.
+ *
+ * Initializing a locked rt lock is not allowed
+ */
+void __rt_mutex_init(struct rt_mutex *lock, const char *name)
+{
+ lock->owner = NULL;
+ spin_lock_init(&lock->wait_lock);
+ plist_head_init(&lock->wait_list, &lock->wait_lock);
+
+ debug_rt_mutex_init(lock, name);
+}
+EXPORT_SYMBOL_GPL(__rt_mutex_init);
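The interruptible and timed variants surface the slow-path outcomes as error codes, so a caller that opts into deadlock detection has to handle -EDEADLK next to -EINTR (and -ETIMEDOUT for the timed variant). A hedged sketch, with a hypothetical wrapper name:

/* Sketch: acquiring a lock with deadlock detection enabled. */
static int example_lock_checked(struct rt_mutex *lock)
{
	int ret = rt_mutex_lock_interruptible(lock, 1 /* detect_deadlock */);

	if (ret == -EINTR)	/* a signal arrived; the lock was not taken */
		return ret;
	if (ret == -EDEADLK)	/* acquiring would close a wait cycle */
		return ret;

	/* ... critical section ... */
	rt_mutex_unlock(lock);
	return 0;
}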
diff --git a/kernel/rtmutex.h b/kernel/rtmutex.h
new file mode 100644
index 000000000000..1e0fca13ff72
--- /dev/null
+++ b/kernel/rtmutex.h
@@ -0,0 +1,29 @@
+/*
+ * RT-Mutexes: blocking mutual exclusion locks with PI support
+ *
+ * started by Ingo Molnar and Thomas Gleixner:
+ *
+ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
+ * This file contains macros used solely by rtmutex.c.
+ * Non-debug version.
+ */
+
+#define __IP_DECL__
+#define __IP__
+#define __RET_IP__
+#define rt_mutex_deadlock_check(l) (0)
+#define rt_mutex_deadlock_account_lock(m, t) do { } while (0)
+#define rt_mutex_deadlock_account_unlock(l) do { } while (0)
+#define debug_rt_mutex_init_waiter(w) do { } while (0)
+#define debug_rt_mutex_free_waiter(w) do { } while (0)
+#define debug_rt_mutex_lock(l) do { } while (0)
+#define debug_rt_mutex_proxy_lock(l,p) do { } while (0)
+#define debug_rt_mutex_proxy_unlock(l) do { } while (0)
+#define debug_rt_mutex_unlock(l) do { } while (0)
+#define debug_rt_mutex_init(m, n) do { } while (0)
+#define debug_rt_mutex_deadlock(d, a ,l) do { } while (0)
+#define debug_rt_mutex_print_deadlock(w) do { } while (0)
+#define debug_rt_mutex_detect_deadlock(w,d) (d)
+#define debug_rt_mutex_reset_waiter(w) do { } while (0)
diff --git a/kernel/rtmutex_common.h b/kernel/rtmutex_common.h
new file mode 100644
index 000000000000..50eed60eb085
--- /dev/null
+++ b/kernel/rtmutex_common.h
@@ -0,0 +1,93 @@
+/*
+ * RT Mutexes: blocking mutual exclusion locks with PI support
+ *
+ * started by Ingo Molnar and Thomas Gleixner:
+ *
+ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
+ * This file contains the private data structure and API definitions.
+ */
+
+#ifndef __KERNEL_RTMUTEX_COMMON_H
+#define __KERNEL_RTMUTEX_COMMON_H
+
+#include <linux/rtmutex.h>
+
+/*
+ * This is the control structure for tasks blocked on a rt_mutex,
+ * which is allocated on the kernel stack of the blocked task.
+ *
+ * @list_entry: pi node to enqueue into the mutex waiters list
+ * @pi_list_entry: pi node to enqueue into the mutex owner waiters list
+ * @task: task reference to the blocked task
+ * @lock: the lock this waiter is blocked on
+ */
+struct rt_mutex_waiter {
+ struct plist_node list_entry;
+ struct plist_node pi_list_entry;
+ struct task_struct *task;
+ struct rt_mutex *lock;
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+ unsigned long ip;
+ pid_t deadlock_task_pid;
+ struct rt_mutex *deadlock_lock;
+#endif
+};
+
+/*
+ * Various helpers to access the waiters-plist:
+ */
+static inline int rt_mutex_has_waiters(struct rt_mutex *lock)
+{
+ return !plist_head_empty(&lock->wait_list);
+}
+
+static inline struct rt_mutex_waiter *
+rt_mutex_top_waiter(struct rt_mutex *lock)
+{
+ struct rt_mutex_waiter *w;
+
+ w = plist_first_entry(&lock->wait_list, struct rt_mutex_waiter,
+ list_entry);
+ BUG_ON(w->lock != lock);
+
+ return w;
+}
+
+static inline int task_has_pi_waiters(struct task_struct *p)
+{
+ return !plist_head_empty(&p->pi_waiters);
+}
+
+static inline struct rt_mutex_waiter *
+task_top_pi_waiter(struct task_struct *p)
+{
+ return plist_first_entry(&p->pi_waiters, struct rt_mutex_waiter,
+ pi_list_entry);
+}
+
+/*
+ * lock->owner state tracking:
+ */
+#define RT_MUTEX_OWNER_PENDING 1UL
+#define RT_MUTEX_HAS_WAITERS 2UL
+#define RT_MUTEX_OWNER_MASKALL 3UL
+
+static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
+{
+ return (struct task_struct *)
+ ((unsigned long)lock->owner & ~RT_MUTEX_OWNER_MASKALL);
+}
+
+static inline struct task_struct *rt_mutex_real_owner(struct rt_mutex *lock)
+{
+ return (struct task_struct *)
+ ((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS);
+}
+
+static inline unsigned long rt_mutex_owner_pending(struct rt_mutex *lock)
+{
+ return (unsigned long)lock->owner & RT_MUTEX_OWNER_PENDING;
+}
+
+#endif
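The three mask values above implement the owner-state table documented at the top of kernel/rtmutex.c: task_struct pointers are at least 4-byte aligned, so bits 0 and 1 of lock->owner are free to carry the "pending owner" and "has waiters" flags. A sketch decoding the owner word by hand, mirroring the inline helpers above (function name illustrative):

/* Sketch: manual decode of lock->owner; the inline helpers do the same. */
static void example_decode(struct rt_mutex *lock)
{
	unsigned long raw = (unsigned long)lock->owner;
	struct task_struct *owner =
		(struct task_struct *)(raw & ~RT_MUTEX_OWNER_MASKALL);
	int pending = !!(raw & RT_MUTEX_OWNER_PENDING);	/* bit 0 */
	int waiters = !!(raw & RT_MUTEX_HAS_WAITERS);	/* bit 1 */

	/* !owner && !pending && !waiters: lock free, cmpxchg fast path OK  */
	/* owner && pending:  top waiter was woken, lock is still stealable */
	/* owner && waiters:  release has to take the slow path             */
	(void)owner; (void)pending; (void)waiters;
}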
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index f54afed8426f..93a2c5398648 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -133,6 +133,10 @@ extern int acct_parm[];
extern int no_unaligned_warning;
#endif
+#ifdef CONFIG_RT_MUTEXES
+extern int max_lock_depth;
+#endif
+
static int parse_table(int __user *, int, void __user *, size_t __user *, void __user *, size_t,
ctl_table *, void **);
static int proc_doutsstring(ctl_table *table, int write, struct file *filp,
@@ -688,6 +692,17 @@ static ctl_table kern_table[] = {
.proc_handler = &proc_dointvec,
},
#endif
+#ifdef CONFIG_RT_MUTEXES
+ {
+ .ctl_name = KERN_MAX_LOCK_DEPTH,
+ .procname = "max_lock_depth",
+ .data = &max_lock_depth,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+#endif
+
{ .ctl_name = 0 }
};
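With this hunk the chain-walk bound used by rt_mutex_adjust_prio_chain() (max_lock_depth, default 1024) becomes tunable at runtime, either through /proc/sys/kernel/max_lock_depth or through the binary sysctl(2) interface that the new KERN_MAX_LOCK_DEPTH constant enables. A userspace sketch of the latter (error handling trimmed; assumes a libc that exposes sysctl(2)):

/* Userspace sketch: raise the PI chain-walk limit via sysctl(2). */
#include <stdio.h>
#include <sys/sysctl.h>

int main(void)
{
	int name[] = { CTL_KERN, KERN_MAX_LOCK_DEPTH };
	int depth = 2048;

	if (sysctl(name, 2, NULL, NULL, &depth, sizeof(depth)) < 0)
		perror("sysctl");
	return 0;
}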
commit b29739f902ee76a05493fb7d2303490fc75364f4
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:51 2006 -0700
[PATCH] pi-futex: scheduler support for pi
Add framework to boost/unboost the priority of RT tasks.
This consists of:
- caching the 'normal' priority in ->normal_prio
- providing functions to set/get the priority of the task
- make sched_setscheduler() aware of boosting
The effective_prio() cleanups also fix a priority-calculation bug pointed out
by Andrey Gelman, in set_user_nice().
has_rt_policy() fix: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Andrey Gelman <agelman@012.net.il>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index e127ef7e8da8..678c1a90380d 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -87,6 +87,7 @@ extern struct group_info init_groups;
.lock_depth = -1, \
.prio = MAX_PRIO-20, \
.static_prio = MAX_PRIO-20, \
+ .normal_prio = MAX_PRIO-20, \
.policy = SCHED_NORMAL, \
.cpus_allowed = CPU_MASK_ALL, \
.mm = NULL, \
@@ -122,6 +123,7 @@ extern struct group_info init_groups;
.journal_info = NULL, \
.cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \
.fs_excl = ATOMIC_INIT(0), \
+ .pi_lock = SPIN_LOCK_UNLOCKED, \
}
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0bc81a151e50..6f167645e7e2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -495,8 +495,11 @@ struct signal_struct {
#define MAX_PRIO (MAX_RT_PRIO + 40)
-#define rt_task(p) (unlikely((p)->prio < MAX_RT_PRIO))
+#define rt_prio(prio) unlikely((prio) < MAX_RT_PRIO)
+#define rt_task(p) rt_prio((p)->prio)
#define batch_task(p) (unlikely((p)->policy == SCHED_BATCH))
+#define has_rt_policy(p) \
+ unlikely((p)->policy != SCHED_NORMAL && (p)->policy != SCHED_BATCH)
/*
* Some day this will be a full-fledged user tracking system..
@@ -725,7 +728,7 @@ struct task_struct {
#endif
#endif
int load_weight; /* for niceness load balancing purposes */
- int prio, static_prio;
+ int prio, static_prio, normal_prio;
struct list_head run_list;
prio_array_t *array;
@@ -852,6 +855,9 @@ struct task_struct {
/* Protection of (de-)allocation: mm, files, fs, tty, keyrings */
spinlock_t alloc_lock;
+ /* Protection of the PI data structures: */
+ spinlock_t pi_lock;
+
#ifdef CONFIG_DEBUG_MUTEXES
/* mutex deadlock detection */
struct mutex_waiter *blocked_on;
@@ -1018,6 +1024,17 @@ static inline void idle_task_exit(void) {}
#endif
extern void sched_idle_next(void);
+
+#ifdef CONFIG_RT_MUTEXES
+extern int rt_mutex_getprio(task_t *p);
+extern void rt_mutex_setprio(task_t *p, int prio);
+#else
+static inline int rt_mutex_getprio(task_t *p)
+{
+ return p->normal_prio;
+}
+#endif
+
extern void set_user_nice(task_t *p, long nice);
extern int task_prio(const task_t *p);
extern int task_nice(const task_t *p);
diff --git a/kernel/sched.c b/kernel/sched.c
index 15abf0833245..08431f07a999 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -354,6 +354,25 @@ static inline void finish_lock_switch(runqueue_t *rq, task_t *prev)
}
#endif /* __ARCH_WANT_UNLOCKED_CTXSW */
+/*
+ * __task_rq_lock - lock the runqueue a given task resides on.
+ * Must be called with interrupts disabled.
+ */
+static inline runqueue_t *__task_rq_lock(task_t *p)
+ __acquires(rq->lock)
+{
+ struct runqueue *rq;
+
+repeat_lock_task:
+ rq = task_rq(p);
+ spin_lock(&rq->lock);
+ if (unlikely(rq != task_rq(p))) {
+ spin_unlock(&rq->lock);
+ goto repeat_lock_task;
+ }
+ return rq;
+}
+
/*
* task_rq_lock - lock the runqueue a given task resides on and disable
* interrupts. Note the ordering: we can safely lookup the task_rq without
@@ -375,6 +394,12 @@ static runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
return rq;
}
+static inline void __task_rq_unlock(runqueue_t *rq)
+ __releases(rq->lock)
+{
+ spin_unlock(&rq->lock);
+}
+
static inline void task_rq_unlock(runqueue_t *rq, unsigned long *flags)
__releases(rq->lock)
{
@@ -638,7 +663,7 @@ static inline void enqueue_task_head(struct task_struct *p, prio_array_t *array)
}
/*
- * effective_prio - return the priority that is based on the static
+ * __normal_prio - return the priority that is based on the static
* priority but is modified by bonuses/penalties.
*
* We scale the actual sleep average [0 .... MAX_SLEEP_AVG]
@@ -651,13 +676,11 @@ static inline void enqueue_task_head(struct task_struct *p, prio_array_t *array)
*
* Both properties are important to certain workloads.
*/
-static int effective_prio(task_t *p)
+
+static inline int __normal_prio(task_t *p)
{
int bonus, prio;
- if (rt_task(p))
- return p->prio;
-
bonus = CURRENT_BONUS(p) - MAX_BONUS / 2;
prio = p->static_prio - bonus;
@@ -692,7 +715,7 @@ static int effective_prio(task_t *p)
static void set_load_weight(task_t *p)
{
- if (rt_task(p)) {
+ if (has_rt_policy(p)) {
#ifdef CONFIG_SMP
if (p == task_rq(p)->migration_thread)
/*
@@ -730,6 +753,44 @@ static inline void dec_nr_running(task_t *p, runqueue_t *rq)
dec_raw_weighted_load(rq, p);
}
+/*
+ * Calculate the expected normal priority: i.e. priority
+ * without taking RT-inheritance into account. Might be
+ * boosted by interactivity modifiers. Changes upon fork,
+ * setprio syscalls, and whenever the interactivity
+ * estimator recalculates.
+ */
+static inline int normal_prio(task_t *p)
+{
+ int prio;
+
+ if (has_rt_policy(p))
+ prio = MAX_RT_PRIO-1 - p->rt_priority;
+ else
+ prio = __normal_prio(p);
+ return prio;
+}
+
+/*
+ * Calculate the current priority, i.e. the priority
+ * taken into account by the scheduler. This value might
+ * be boosted by RT tasks, or might be boosted by
+ * interactivity modifiers. Will be RT if the task got
+ * RT-boosted. If not then it returns p->normal_prio.
+ */
+static int effective_prio(task_t *p)
+{
+ p->normal_prio = normal_prio(p);
+ /*
+ * If we are RT tasks or we were boosted to RT priority,
+ * keep the priority unchanged. Otherwise, update priority
+ * to the normal priority:
+ */
+ if (!rt_prio(p->prio))
+ return p->normal_prio;
+ return p->prio;
+}
+
/*
* __activate_task - move a task to the runqueue.
*/
@@ -752,6 +813,10 @@ static inline void __activate_idle_task(task_t *p, runqueue_t *rq)
inc_nr_running(p, rq);
}
+/*
+ * Recalculate p->normal_prio and p->prio after having slept,
+ * updating the sleep-average too:
+ */
static int recalc_task_prio(task_t *p, unsigned long long now)
{
/* Caller must always ensure 'now >= p->timestamp' */
@@ -1448,6 +1513,12 @@ void fastcall sched_fork(task_t *p, int clone_flags)
* event cannot wake it up and insert it on the runqueue either.
*/
p->state = TASK_RUNNING;
+
+ /*
+ * Make sure we do not leak PI boosting priority to the child:
+ */
+ p->prio = current->normal_prio;
+
INIT_LIST_HEAD(&p->run_list);
p->array = NULL;
#ifdef CONFIG_SCHEDSTATS
@@ -1527,6 +1598,7 @@ void fastcall wake_up_new_task(task_t *p, unsigned long clone_flags)
__activate_task(p, rq);
else {
p->prio = current->prio;
+ p->normal_prio = current->normal_prio;
list_add_tail(&p->run_list, &current->run_list);
p->array = current->array;
p->array->nr_active++;
@@ -3668,12 +3740,65 @@ long fastcall __sched sleep_on_timeout(wait_queue_head_t *q, long timeout)
EXPORT_SYMBOL(sleep_on_timeout);
+#ifdef CONFIG_RT_MUTEXES
+
+/*
+ * rt_mutex_setprio - set the current priority of a task
+ * @p: task
+ * @prio: prio value (kernel-internal form)
+ *
+ * This function changes the 'effective' priority of a task. It does
+ * not touch ->normal_prio like __setscheduler().
+ *
+ * Used by the rt_mutex code to implement priority inheritance logic.
+ */
+void rt_mutex_setprio(task_t *p, int prio)
+{
+ unsigned long flags;
+ prio_array_t *array;
+ runqueue_t *rq;
+ int oldprio;
+
+ BUG_ON(prio < 0 || prio > MAX_PRIO);
+
+ rq = task_rq_lock(p, &flags);
+
+ oldprio = p->prio;
+ array = p->array;
+ if (array)
+ dequeue_task(p, array);
+ p->prio = prio;
+
+ if (array) {
+ /*
+ * If changing to an RT priority then queue it
+ * in the active array!
+ */
+ if (rt_task(p))
+ array = rq->active;
+ enqueue_task(p, array);
+ /*
+ * Reschedule if we are currently running on this runqueue and
+ * our priority decreased, or if we are not currently running on
+ * this runqueue and our priority is higher than the current's
+ */
+ if (task_running(rq, p)) {
+ if (p->prio > oldprio)
+ resched_task(rq->curr);
+ } else if (TASK_PREEMPTS_CURR(p, rq))
+ resched_task(rq->curr);
+ }
+ task_rq_unlock(rq, &flags);
+}
+
+#endif
+
void set_user_nice(task_t *p, long nice)
{
unsigned long flags;
prio_array_t *array;
runqueue_t *rq;
- int old_prio, new_prio, delta;
+ int old_prio, delta;
if (TASK_NICE(p) == nice || nice < -20 || nice > 19)
return;
@@ -3688,7 +3813,7 @@ void set_user_nice(task_t *p, long nice)
* it wont have any effect on scheduling until the task is
* not SCHED_NORMAL/SCHED_BATCH:
*/
- if (rt_task(p)) {
+ if (has_rt_policy(p)) {
p->static_prio = NICE_TO_PRIO(nice);
goto out_unlock;
}
@@ -3698,12 +3823,11 @@ void set_user_nice(task_t *p, long nice)
dec_raw_weighted_load(rq, p);
}
- old_prio = p->prio;
- new_prio = NICE_TO_PRIO(nice);
- delta = new_prio - old_prio;
p->static_prio = NICE_TO_PRIO(nice);
set_load_weight(p);
- p->prio += delta;
+ old_prio = p->prio;
+ p->prio = effective_prio(p);
+ delta = p->prio - old_prio;
if (array) {
enqueue_task(p, array);
@@ -3718,7 +3842,6 @@ void set_user_nice(task_t *p, long nice)
out_unlock:
task_rq_unlock(rq, &flags);
}
-
EXPORT_SYMBOL(set_user_nice);
/*
@@ -3833,16 +3956,14 @@ static void __setscheduler(struct task_struct *p, int policy, int prio)
BUG_ON(p->array);
p->policy = policy;
p->rt_priority = prio;
- if (policy != SCHED_NORMAL && policy != SCHED_BATCH) {
- p->prio = MAX_RT_PRIO-1 - p->rt_priority;
- } else {
- p->prio = p->static_prio;
- /*
- * SCHED_BATCH tasks are treated as perpetual CPU hogs:
- */
- if (policy == SCHED_BATCH)
- p->sleep_avg = 0;
- }
+ p->normal_prio = normal_prio(p);
+ /* we are holding p->pi_lock already */
+ p->prio = rt_mutex_getprio(p);
+ /*
+ * SCHED_BATCH tasks are treated as perpetual CPU hogs:
+ */
+ if (policy == SCHED_BATCH)
+ p->sleep_avg = 0;
set_load_weight(p);
}
@@ -3911,15 +4032,21 @@ int sched_setscheduler(struct task_struct *p, int policy,
retval = security_task_setscheduler(p, policy, param);
if (retval)
return retval;
+ /*
+ * make sure no PI-waiters arrive (or leave) while we are
+ * changing the priority of the task:
+ */
+ spin_lock_irqsave(&p->pi_lock, flags);
/*
* To be able to change p->policy safely, the appropriate
* runqueue lock must be held.
*/
- rq = task_rq_lock(p, &flags);
+ rq = __task_rq_lock(p);
/* recheck policy now with rq lock held */
if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
policy = oldpolicy = -1;
- task_rq_unlock(rq, &flags);
+ __task_rq_unlock(rq);
+ spin_unlock_irqrestore(&p->pi_lock, flags);
goto recheck;
}
array = p->array;
@@ -3940,7 +4067,9 @@ int sched_setscheduler(struct task_struct *p, int policy,
} else if (TASK_PREEMPTS_CURR(p, rq))
resched_task(rq->curr);
}
- task_rq_unlock(rq, &flags);
+ __task_rq_unlock(rq);
+ spin_unlock_irqrestore(&p->pi_lock, flags);
+
return 0;
}
EXPORT_SYMBOL_GPL(sched_setscheduler);
@@ -4575,7 +4704,7 @@ void __devinit init_idle(task_t *idle, int cpu)
idle->timestamp = sched_clock();
idle->sleep_avg = 0;
idle->array = NULL;
- idle->prio = MAX_PRIO;
+ idle->prio = idle->normal_prio = MAX_PRIO;
idle->state = TASK_RUNNING;
idle->cpus_allowed = cpumask_of_cpu(cpu);
set_task_cpu(idle, cpu);
@@ -6582,7 +6711,8 @@ void normalize_rt_tasks(void)
if (!rt_task(p))
continue;
- rq = task_rq_lock(p, &flags);
+ spin_lock_irqsave(&p->pi_lock, flags);
+ rq = __task_rq_lock(p);
array = p->array;
if (array)
@@ -6593,7 +6723,8 @@ void normalize_rt_tasks(void)
resched_task(rq->curr);
}
- task_rq_unlock(rq, &flags);
+ __task_rq_unlock(rq);
+ spin_unlock_irqrestore(&p->pi_lock, flags);
}
read_unlock_irq(&tasklist_lock);
}
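The division of labour above is worth spelling out: __setscheduler() recomputes both ->normal_prio and ->prio, while rt_mutex_setprio() moves only ->prio, so undoing a boost is a matter of re-reading rt_mutex_getprio(), which collapses back to ->normal_prio once the last PI waiter is gone. A sketch mirroring what the rt-mutex core's __rt_mutex_adjust_prio() does (illustrative name; as in the real helper, the caller holds task->pi_lock):

/* Sketch: (un)boost an owner to the priority that PI demands. */
static void example_adjust_owner(task_t *owner)
{
	/*
	 * rt_mutex_getprio() returns the min (numerically lower means
	 * more urgent) of the top pi-waiter's priority and
	 * owner->normal_prio; with no waiters left it is simply
	 * normal_prio, so one call covers boost and unboost alike.
	 */
	int prio = rt_mutex_getprio(owner);

	if (owner->prio != prio)
		rt_mutex_setprio(owner, prio);	/* requeue on the right priority list */
}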
commit 77ba89c5cf28d5d98a3cae17f67a3e42b102cc25
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:51 2006 -0700
[PATCH] pi-futex: add plist implementation
Add the priority-sorted list (plist) implementation.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
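The rt-mutex patches above lean on this structure everywhere: a lock's wait_list and a task's pi_waiters are both plists, and "top waiter" always means plist_first_entry(). A minimal sketch of the API as those call sites use it (identifiers illustrative; the spinlock handed to plist_head_init() is only consulted under CONFIG_DEBUG_PI_LIST):

/* Sketch: a priority-sorted wait queue built on plist. */
static spinlock_t example_lock = SPIN_LOCK_UNLOCKED;
static struct plist_head example_waiters;

static void example_init(void)
{
	plist_head_init(&example_waiters, &example_lock);
}

static void example_enqueue(struct rt_mutex_waiter *w, int prio)
{
	plist_node_init(&w->list_entry, prio);	/* lower value = higher priority */
	plist_add(&w->list_entry, &example_waiters);
}

static struct rt_mutex_waiter *example_top(void)
{
	if (plist_head_empty(&example_waiters))
		return NULL;
	return plist_first_entry(&example_waiters,
				 struct rt_mutex_waiter, list_entry);
}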
diff --git a/include/linux/plist.h b/include/linux/plist.h
new file mode 100644
index 000000000000..3404faef542c
--- /dev/null
+++ b/include/linux/plist.h
@@ -0,0 +1,247 @@
+/*
+ * Descending-priority-sorted double-linked list
+ *
+ * (C) 2002-2003 Intel Corp
+ * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>.
+ *
+ * 2001-2005 (c) MontaVista Software, Inc.
+ * Daniel Walker <dwalker@mvista.com>
+ *
+ * (C) 2005 Thomas Gleixner <tglx@linutronix.de>
+ *
+ * Simplifications of the original code by
+ * Oleg Nesterov <oleg@tv-sign.ru>
+ *
+ * Licensed under the FSF's GNU Public License v2 or later.
+ *
+ * Based on simple lists (include/linux/list.h).
+ *
+ * This is a priority-sorted list of nodes; each node has a
+ * priority from INT_MIN (highest) to INT_MAX (lowest).
+ *
+ * Addition is O(K), removal is O(1) and changing the priority of a
+ * node is O(K), where K is the number of RT priority levels used in
+ * the system (1 <= K <= 99).
+ *
+ * This list is really a list of lists:
+ *
+ * - The tier 1 list is the prio_list, which holds one node per distinct priority.
+ *
+ * - The tier 2 list is the node_list, which holds all nodes in priority order.
+ *
+ * Simple ASCII art explanation:
+ *
+ * |HEAD |
+ * | |
+ * |prio_list.prev|<------------------------------------|
+ * |prio_list.next|<->|pl|<->|pl|<--------------->|pl|<-|
+ * |10 | |10| |21| |21| |21| |40| (prio)
+ * | | | | | | | | | | | |
+ * | | | | | | | | | | | |
+ * |node_list.next|<->|nl|<->|nl|<->|nl|<->|nl|<->|nl|<-|
+ * |node_list.prev|<------------------------------------|
+ *
+ * The nodes on the prio_list list are sorted by priority to simplify
+ * the insertion of new nodes. There are no nodes with duplicate
+ * priorities on the list.
+ *
+ * The nodes on the node_list are ordered by priority and can contain
+ * entries which have the same priority. Those entries are ordered
+ * FIFO.
+ *
+ * Addition means: look for the prio_list entry matching the priority
+ * of the new node and insert the node before the node_list entry of
+ * the next prio_list entry. If it is the first node of its priority,
+ * add it to the prio_list in the right position and insert it into
+ * the serialized node_list.
+ *
+ * Removal means removing the node from the node_list and, if it is
+ * on the prio_list, from the prio_list as well. On removal from the
+ * prio_list it must be checked whether other entries of the same
+ * priority are on the list. If there is another entry of the same
+ * priority then that entry has to replace the removed entry on the
+ * prio_list. If the removed entry is the only entry of its priority
+ * then a simple removal from both lists is sufficient.
+ *
+ * INT_MIN is the highest priority and INT_MAX the lowest; 0 sits in
+ * the middle.
+ *
+ * No locking is done; that is up to the caller.
+ *
+ */
+#ifndef _LINUX_PLIST_H_
+#define _LINUX_PLIST_H_
+
+#include <linux/list.h>
+#include <linux/spinlock_types.h>
+
+struct plist_head {
+ struct list_head prio_list;
+ struct list_head node_list;
+#ifdef CONFIG_DEBUG_PI_LIST
+ spinlock_t *lock;
+#endif
+};
+
+struct plist_node {
+ int prio;
+ struct plist_head plist;
+};
+
+#ifdef CONFIG_DEBUG_PI_LIST
+# define PLIST_HEAD_LOCK_INIT(_lock) .lock = _lock
+#else
+# define PLIST_HEAD_LOCK_INIT(_lock)
+#endif
+
+/**
+ * PLIST_HEAD_INIT - static struct plist_head initializer
+ *
+ * @head: struct plist_head variable name
+ * @_lock: spinlock protecting @head, remembered for debug checks
+ */
+#define PLIST_HEAD_INIT(head, _lock) \
+{ \
+ .prio_list = LIST_HEAD_INIT((head).prio_list), \
+ .node_list = LIST_HEAD_INIT((head).node_list), \
+ PLIST_HEAD_LOCK_INIT(&(_lock)) \
+}
+
+/**
+ * PLIST_NODE_INIT - static struct plist_node initializer
+ *
+ * @node: struct plist_node variable name
+ * @__prio: initial node priority
+ */
+#define PLIST_NODE_INIT(node, __prio) \
+{ \
+ .prio = (__prio), \
+ .plist = PLIST_HEAD_INIT((node).plist, NULL), \
+}
+
+/**
+ * plist_head_init - dynamic struct plist_head initializer
+ *
+ * @head: &struct plist_head pointer
+ * @lock: list spinlock, remembered for debug checks
+ */
+static inline void
+plist_head_init(struct plist_head *head, spinlock_t *lock)
+{
+ INIT_LIST_HEAD(&head->prio_list);
+ INIT_LIST_HEAD(&head->node_list);
+#ifdef CONFIG_DEBUG_PI_LIST
+ head->lock = lock;
+#endif
+}
+
+/**
+ * plist_node_init - dynamic struct plist_node initializer
+ *
+ * @node: &struct plist_node pointer
+ * @prio: initial node priority
+ */
+static inline void plist_node_init(struct plist_node *node, int prio)
+{
+ node->prio = prio;
+ plist_head_init(&node->plist, NULL);
+}
+
+extern void plist_add(struct plist_node *node, struct plist_head *head);
+extern void plist_del(struct plist_node *node, struct plist_head *head);
+
+/**
+ * plist_for_each - iterate over the plist
+ *
+ * @pos: the type * to use as a loop counter.
+ * @head: the head for your list.
+ */
+#define plist_for_each(pos, head) \
+ list_for_each_entry(pos, &(head)->node_list, plist.node_list)
+
+/**
+ * plist_for_each_safe - iterate over the plist, safe against removal
+ * of list entries
+ *
+ * @pos: the type * to use as a loop counter.
+ * @n: another type * to use as temporary storage
+ * @head: the head for your list.
+ */
+#define plist_for_each_safe(pos, n, head) \
+ list_for_each_entry_safe(pos, n, &(head)->node_list, plist.node_list)
+
+/**
+ * plist_for_each_entry - iterate over list of given type
+ *
+ * @pos: the type * to use as a loop counter.
+ * @head: the head for your list.
+ * @mem: the name of the struct plist_node within the struct.
+ */
+#define plist_for_each_entry(pos, head, mem) \
+ list_for_each_entry(pos, &(head)->node_list, mem.plist.node_list)
+
+/**
+ * plist_for_each_entry_safe - iterate over list of given type safe against
+ * removal of list entry
+ *
+ * @pos: the type * to use as a loop counter.
+ * @n: another type * to use as temporary storage
+ * @head: the head for your list.
+ * @m: the name of the struct plist_node within the struct.
+ */
+#define plist_for_each_entry_safe(pos, n, head, m) \
+ list_for_each_entry_safe(pos, n, &(head)->node_list, m.plist.node_list)
+
+/**
+ * plist_head_empty - return !0 if a plist_head is empty
+ *
+ * @head: &struct plist_head pointer
+ */
+static inline int plist_head_empty(const struct plist_head *head)
+{
+ return list_empty(&head->node_list);
+}
+
+/**
+ * plist_node_empty - return !0 if plist_node is not on a list
+ *
+ * @node: &struct plist_node pointer
+ */
+static inline int plist_node_empty(const struct plist_node *node)
+{
+ return plist_head_empty(&node->plist);
+}
+
+/* All functions below assume the plist_head is not empty. */
+
+/**
+ * plist_first_entry - get the struct for the first entry
+ *
+ * @head: the &struct plist_head pointer.
+ * @type: the type of the struct this is embedded in.
+ * @member: the name of the struct plist_node within the struct.
+ */
+#ifdef CONFIG_DEBUG_PI_LIST
+# define plist_first_entry(head, type, member) \
+({ \
+ WARN_ON(plist_head_empty(head)); \
+ container_of(plist_first(head), type, member); \
+})
+#else
+# define plist_first_entry(head, type, member) \
+ container_of(plist_first(head), type, member)
+#endif
+
+/**
+ * plist_first - return the first node (and thus the highest-priority entry)
+ *
+ * @head: the &struct plist_head pointer
+ *
+ * Assumes the plist is _not_ empty.
+ */
+static inline struct plist_node* plist_first(const struct plist_head *head)
+{
+ return list_entry(head->node_list.next,
+ struct plist_node, plist.node_list);
+}
+
+#endif
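To make the API above concrete, here is a minimal usage sketch; it is not part of the patch, and struct waiter, wq_lock and the priority value are invented for illustration:

#include <linux/plist.h>
#include <linux/spinlock.h>

struct waiter {
        struct plist_node node;
        int data;
};

static DEFINE_SPINLOCK(wq_lock);
static struct plist_head wq = PLIST_HEAD_INIT(wq, wq_lock);

static void waiter_demo(void)
{
        struct waiter w, *pos;

        plist_node_init(&w.node, 10);   /* lower value = higher priority */

        spin_lock(&wq_lock);
        plist_add(&w.node, &wq);        /* O(K) ordered insert */

        plist_for_each_entry(pos, &wq, node)
                ;                       /* visits highest priority first */

        plist_del(&w.node, &wq);        /* O(1) removal */
        spin_unlock(&wq_lock);
}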
diff --git a/lib/Kconfig b/lib/Kconfig
index 3de93357f5ab..f6299342b882 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -86,4 +86,10 @@ config TEXTSEARCH_BM
config TEXTSEARCH_FSM
tristate
+#
+# plist support is selected if needed
+#
+config PLIST
+ boolean
+
endmenu
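CONFIG_PLIST deliberately has no prompt, so it cannot be enabled from the configuration menus; a subsystem that needs the library is expected to pull it in with a select statement. A hypothetical consumer's entry would look like this (MY_FEATURE is made up; this hunk is not part of the patch):

config MY_FEATURE
	bool "Queue things by priority"
	select PLIST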
diff --git a/lib/Makefile b/lib/Makefile
index 79358ad1f113..10c13c9d7824 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -25,6 +25,7 @@ lib-$(CONFIG_SEMAPHORE_SLEEPERS) += semaphore-sleepers.o
lib-$(CONFIG_GENERIC_FIND_NEXT_BIT) += find_next_bit.o
lib-$(CONFIG_GENERIC_HWEIGHT) += hweight.o
obj-$(CONFIG_LOCK_KERNEL) += kernel_lock.o
+obj-$(CONFIG_PLIST) += plist.o
obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o
ifneq ($(CONFIG_HAVE_DEC_LOCK),y)
diff --git a/lib/plist.c b/lib/plist.c
new file mode 100644
index 000000000000..3074a02272f3
--- /dev/null
+++ b/lib/plist.c
@@ -0,0 +1,118 @@
+/*
+ * lib/plist.c
+ *
+ * Descending-priority-sorted doubly-linked list
+ *
+ * (C) 2002-2003 Intel Corp
+ * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>.
+ *
+ * 2001-2005 (c) MontaVista Software, Inc.
+ * Daniel Walker <dwalker@mvista.com>
+ *
+ * (C) 2005 Thomas Gleixner <tglx@linutronix.de>
+ *
+ * Simplifications of the original code by
+ * Oleg Nesterov <oleg@tv-sign.ru>
+ *
+ * Licensed under the FSF's GNU Public License v2 or later.
+ *
+ * Based on simple lists (include/linux/list.h).
+ *
+ * This file contains the add / del functions which are considered to
+ * be too large to inline. See include/linux/plist.h for further
+ * information.
+ */
+
+#include <linux/plist.h>
+#include <linux/spinlock.h>
+
+#ifdef CONFIG_DEBUG_PI_LIST
+
+static void plist_check_prev_next(struct list_head *t, struct list_head *p,
+ struct list_head *n)
+{
+ if (n->prev != p || p->next != n) {
+ printk("top: %p, n: %p, p: %p\n", t, t->next, t->prev);
+ printk("prev: %p, n: %p, p: %p\n", p, p->next, p->prev);
+ printk("next: %p, n: %p, p: %p\n", n, n->next, n->prev);
+ WARN_ON(1);
+ }
+}
+
+static void plist_check_list(struct list_head *top)
+{
+ struct list_head *prev = top, *next = top->next;
+
+ plist_check_prev_next(top, prev, next);
+ while (next != top) {
+ prev = next;
+ next = prev->next;
+ plist_check_prev_next(top, prev, next);
+ }
+}
+
+static void plist_check_head(struct plist_head *head)
+{
+ WARN_ON(!head->lock);
+ if (head->lock)
+ WARN_ON_SMP(!spin_is_locked(head->lock));
+ plist_check_list(&head->prio_list);
+ plist_check_list(&head->node_list);
+}
+
+#else
+# define plist_check_head(h) do { } while (0)
+#endif
+
+/**
+ * plist_add - add @node to @head
+ *
+ * @node: &struct plist_node pointer
+ * @head: &struct plist_head pointer
+ */
+void plist_add(struct plist_node *node, struct plist_head *head)
+{
+ struct plist_node *iter;
+
+ plist_check_head(head);
+ WARN_ON(!plist_node_empty(node));
+
+ list_for_each_entry(iter, &head->prio_list, plist.prio_list) {
+ if (node->prio < iter->prio)
+ goto lt_prio;
+ else if (node->prio == iter->prio) {
+ iter = list_entry(iter->plist.prio_list.next,
+ struct plist_node, plist.prio_list);
+ goto eq_prio;
+ }
+ }
+
+lt_prio:
+ list_add_tail(&node->plist.prio_list, &iter->plist.prio_list);
+eq_prio:
+ list_add_tail(&node->plist.node_list, &iter->plist.node_list);
+
+ plist_check_head(head);
+}
+
+/**
+ * plist_del - remove @node from the plist
+ *
+ * @node: &struct plist_node pointer - entry to be removed
+ * @head: &struct plist_head pointer - list head
+ */
+void plist_del(struct plist_node *node, struct plist_head *head)
+{
+ plist_check_head(head);
+
+ if (!list_empty(&node->plist.prio_list)) {
+ struct plist_node *next = plist_first(&node->plist);
+
+ list_move_tail(&next->plist.prio_list, &node->plist.prio_list);
+ list_del_init(&node->plist.prio_list);
+ }
+
+ list_del_init(&node->plist.node_list);
+
+ plist_check_head(head);
+}
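Tracing the two functions against the diagram in the header comment (tier-1 priorities 10, 21, 40) shows how the two-tier structure is maintained; the walkthrough below is an illustration, not part of the patch:

/*
 * plist_add() of a node with prio 15:
 *   the prio_list walk stops at the 21-entry (15 < 21), so lt_prio
 *   links the node into the prio_list before 21 and into the
 *   node_list before the first 21-node: a new tier-1 entry appears.
 *
 * plist_add() of a node with prio 21:
 *   the walk matches the existing 21-entry, iter is advanced to the
 *   next tier-1 entry (40) and eq_prio links the node into the
 *   node_list just before the first 40-node, i.e. FIFO behind the
 *   existing 21-nodes; the prio_list is left untouched.
 *
 * plist_del() of the first 21-node:
 *   its prio_list link is non-empty, so the following node (the
 *   second 21-node) is moved onto the prio_list in its place before
 *   the deleted node leaves the node_list, preserving the invariant
 *   of one tier-1 entry per priority.
 */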
commit 8eb94f80dd2da5977c35cd094f0802c1501a12cd
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:50 2006 -0700
[PATCH] pi-futex: introduce WARN_ON_SMP
Introduce a new WARN_ON variant: WARN_ON_SMP(cond).
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
index 845cb67ad8ea..8ceab7bcd8b4 100644
--- a/include/asm-generic/bug.h
+++ b/include/asm-generic/bug.h
@@ -51,4 +51,10 @@
__ret; \
})
+#ifdef CONFIG_SMP
+# define WARN_ON_SMP(x) WARN_ON(x)
+#else
+# define WARN_ON_SMP(x) do { } while (0)
+#endif
+
#endif
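The reason for the variant is visible in the plist debug code above: spin_is_locked() is typically hardwired to 0 on uniprocessor builds, so an unconditional WARN_ON() around it would fire on every call. A minimal sketch (assert_held is a hypothetical helper, not from the patch):

static void assert_held(spinlock_t *lock)
{
        /* On UP kernels spin_is_locked() always returns 0, so this
         * check only makes sense on SMP; WARN_ON_SMP() compiles to
         * nothing on UP instead of warning spuriously. */
        WARN_ON_SMP(!spin_is_locked(lock));
}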
commit f9b8404cf8f8456dfa83459510762b700dc00385
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:49 2006 -0700
[PATCH] pi-futex: introduce debug_check_no_locks_freed()
Add debug_check_no_locks_freed(), a central inline to which
bad-lock-on-free debugging functionality can be attached.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/arch/i386/mm/pageattr.c b/arch/i386/mm/pageattr.c
index 0887b34bc59b..353a836ed63c 100644
--- a/arch/i386/mm/pageattr.c
+++ b/arch/i386/mm/pageattr.c
@@ -229,8 +229,8 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
if (PageHighMem(page))
return;
if (!enable)
- mutex_debug_check_no_locks_freed(page_address(page),
- numpages * PAGE_SIZE);
+ debug_check_no_locks_freed(page_address(page),
+ numpages * PAGE_SIZE);
/* the return value is ignored - the calls cannot fail,
* large pages are disabled at boot time.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ff1fa87df8d0..0ac255720f34 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1030,13 +1030,19 @@ static inline void vm_stat_account(struct mm_struct *mm,
}
#endif /* CONFIG_PROC_FS */
+static inline void
+debug_check_no_locks_freed(const void *from, unsigned long len)
+{
+ mutex_debug_check_no_locks_freed(from, len);
+}
+
#ifndef CONFIG_DEBUG_PAGEALLOC
static inline void
kernel_map_pages(struct page *page, int numpages, int enable)
{
if (!PageHighMem(page) && !enable)
- mutex_debug_check_no_locks_freed(page_address(page),
- numpages * PAGE_SIZE);
+ debug_check_no_locks_freed(page_address(page),
+ numpages * PAGE_SIZE);
}
#endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dafd12ec7a0c..084a2de7e52a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -446,8 +446,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
arch_free_page(page, order);
if (!PageHighMem(page))
- mutex_debug_check_no_locks_freed(page_address(page),
- PAGE_SIZE<<order);
+ debug_check_no_locks_freed(page_address(page),
+ PAGE_SIZE<<order);
for (i = 0 ; i < (1 << order) ; ++i)
reserved += free_pages_check(page + i);
diff --git a/mm/slab.c b/mm/slab.c
index d1d55279202e..f378d027c684 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3397,7 +3397,7 @@ void kfree(const void *objp)
local_irq_save(flags);
kfree_debugcheck(objp);
c = virt_to_cache(objp);
- mutex_debug_check_no_locks_freed(objp, obj_size(c));
+ debug_check_no_locks_freed(objp, obj_size(c));
__cache_free(c, (void *)objp);
local_irq_restore(flags);
}
commit 6abdce7680e3e8436b3292b345d77b67d5ec9ea8
Author: Ingo Molnar <mingo@elte.hu>
Date: Tue Jun 27 02:54:48 2006 -0700
[PATCH] pi-futex: robust futex docs fix
Fix typo in Documentation/robust-futexes.txt.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/Documentation/robust-futexes.txt b/Documentation/robust-futexes.txt
index df82d75245a0..76e8064b8c3a 100644
--- a/Documentation/robust-futexes.txt
+++ b/Documentation/robust-futexes.txt
@@ -95,7 +95,7 @@ comparison. If the thread has registered a list, then normally the list
is empty. If the thread/process crashed or terminated in some incorrect
way then the list might be non-empty: in this case the kernel carefully
walks the list [not trusting it], and marks all locks that are owned by
-this thread with the FUTEX_OWNER_DEAD bit, and wakes up one waiter (if
+this thread with the FUTEX_OWNER_DIED bit, and wakes up one waiter (if
any).
The list is guaranteed to be private and per-thread at do_exit() time,