We were simply concatenating the devhandle and devino and using that
as the cookie, which defeats the entire purpose of the VIRQ hypervisor
interfaces.
Now that we use physical addresses for the INO buckets, we can
allocate them dynamically for VIRQs and encode the cookies as
~__pa(bucket). This allows us to test for and decode the cookie with
a simple:

	brlz	$reg1, 1f
	 xnor	$reg1, %g0, $reg2

sequence.
This works because the uppermost bit (bit 63) is never set
in traditional INO vectors, and it is also never set in a
physical address.  So xnor'ing the physical address of the
bucket with %g0 (taking its one's complement) always gives
us a negative number, and thus a unique condition we can
test cheaply.
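In C terms the scheme looks roughly like this (a minimal
sketch; the helper names are illustrative, not the actual
kernel code):

	#include <stdint.h>

	/* Encode: complement the bucket's physical address.  Physical
	 * addresses never have bit 63 set, so the cookie is always
	 * negative when viewed as a signed 64-bit value.
	 */
	static inline uint64_t bucket_pa_to_cookie(uint64_t bucket_pa)
	{
		return ~bucket_pa;
	}

	/* Test: traditional INO vectors are small positive numbers,
	 * so a sign test (the "brlz" above) distinguishes the cases.
	 */
	static inline int cookie_is_virq(uint64_t cookie)
	{
		return (int64_t)cookie < 0;
	}

	/* Decode: complementing again (the "xnor with %g0" above)
	 * recovers the bucket's physical address.
	 */
	static inline uint64_t cookie_to_bucket_pa(uint64_t cookie)
	{
		return ~cookie;
	}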
Inspired by ideas from Greg Onufer.
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently we chain IVEC entries using 32-bit "pointers"
because we know that the ivector_table is in the main
kernel image, thus below 4GB.
This change uses proper 64-bit pointers instead.
Whilst it bloats up the kernel image size, it puts in place
the infrastructure needed to significantly shrink the kernel
later, by using physical addresses and dynamically allocating
the ivector table.
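The shape of the change is roughly the following (a sketch;
the field names are illustrative rather than the exact irq.c
definitions):

	/* Before: chain links fit in 32 bits only because
	 * ivector_table sits inside the kernel image, below 4GB.
	 */
	struct ino_bucket_before {
		unsigned int irq_chain;		/* truncated pointer */
		/* ... */
	};

	/* After: a real pointer, so the table may live anywhere in
	 * the 64-bit address space (and later be dynamically
	 * allocated).
	 */
	struct ino_bucket_after {
		struct ino_bucket_after *irq_chain;
		/* ... */
	};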
Signed-off-by: David S. Miller <davem@davemloft.net>
This is the long overdue conversion of sparc64 over to
the generic IRQ layer.
The kernel image is slightly larger, but the BSS is ~60K
smaller due to the reduced size of struct ino_bucket.
A lot of IRQ implementation details, including ino_bucket,
were moved out of asm-sparc64/irq.h and are now private to
arch/sparc64/kernel/irq.c, and most of the code in irq.c
totally disappeared.
One thing that's different at the moment is IRQ distribution:
we do it at enable_irq() time.  If the cpu mask is ALL then
we round-robin using a global rotating cpu counter, else
we pick the first cpu in the mask to support single-cpu
targeting.  This is similar to what powerpc's XICS IRQ
support code does.
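A minimal sketch of that policy in plain C with a toy bitmask
(the real logic uses the kernel's cpumask helpers; names here
are illustrative, and nr_cpus <= 64 is assumed):

	static int irq_rover;	/* global rotating cpu counter */

	static int choose_irq_cpu(unsigned long online,
				  unsigned long affinity,
				  int nr_cpus, int mask_is_all)
	{
		int cpu;

		if (mask_is_all) {
			/* Mask is ALL: round-robin over online cpus. */
			do {
				irq_rover = (irq_rover + 1) % nr_cpus;
			} while (!((online >> irq_rover) & 1UL));
			cpu = irq_rover;
		} else {
			/* Otherwise: first online cpu in the mask,
			 * for single-cpu targeting.
			 */
			unsigned long m = online & affinity;
			for (cpu = 0; cpu < nr_cpus; cpu++)
				if ((m >> cpu) & 1UL)
					break;
		}
		return cpu;
	}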
This works fine on my UP SB1000, and the SMP build compiles
cleanly and runs on that machine as well, but lots of testing
on different setups is needed.
Signed-off-by: David S. Miller <davem@davemloft.net>
This is the first in a series of cleanups that will hopefully
allow a seamless attempt at using the generic IRQ handling
infrastructure in the Linux kernel.
Define PIL_DEVICE_IRQ and vector all device interrupts through
there.
Get rid of the ugly pil0_dummy_{bucket,desc}; instead, vector
the timer interrupt directly to a specific handler, since the
timer interrupt is the only event that will be signaled on
PIL 14.
The irq_worklist is now in the per-cpu trap_block[].
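Sketched out, the new shape is roughly the following (the PIL
value matches asm-sparc64/pil.h as I recall it, and the
trap_block field placement is illustrative):

	/* All device interrupts now arrive at a single PIL... */
	#define PIL_DEVICE_IRQ		5

	/* ...while PIL 14 stays dedicated to the timer tick, which
	 * is vectored straight to its handler with no dummy bucket.
	 */

	/* The pending-IRQ worklist moves into the per-cpu trap
	 * block.
	 */
	struct trap_per_cpu_sketch {
		/* ... existing per-cpu trap state ... */
		unsigned int irq_worklist;
	};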
Signed-off-by: David S. Miller <davem@davemloft.net>
This is where the virtual address of the fault status
area belongs.
To set it up we don't make a hypervisor call; instead we call
OBP's SUNW,set-trap-table with the real address of the fault
status area as the second argument, and right before that
call we write the virtual address into ASI_SCRATCHPAD vaddr
0x0.
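A hedged sketch of that sequence in C
(prom_set_trap_table_sun4v() and ASI_SCRATCHPAD follow the
kernel's names, but treat the exact signature and value here
as assumptions):

	#define ASI_SCRATCHPAD	0x20	/* sun4v scratchpad registers */

	/* OBP wrapper for SUNW,set-trap-table; signature assumed. */
	extern void prom_set_trap_table_sun4v(unsigned long tba,
					      unsigned long mmfsa_pa);

	static void register_fault_status_area(unsigned long fsa_vaddr,
					       unsigned long fsa_paddr,
					       unsigned long trap_table_pa)
	{
		/* Stash the VA in scratchpad register 0 so the MMU
		 * trap handlers can find the fault status area
		 * quickly.
		 */
		__asm__ __volatile__("stxa	%0, [%1] %2"
				     : /* no outputs */
				     : "r" (fsa_vaddr), "r" (0x0UL),
				       "i" (ASI_SCRATCHPAD)
				     : "memory");

		/* Then hand OBP the *real* address of the area as
		 * the second argument of SUNW,set-trap-table.
		 */
		prom_set_trap_table_sun4v(trap_table_pa, fsa_paddr);
	}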
Signed-off-by: David S. Miller <davem@davemloft.net>
Sun4v has 4 interrupt queues: cpu, device, resumable errors,
and non-resumable errors.  A set of head/tail offset pointers
helps maintain a work queue in physical memory.  The entries
are 64 bytes in size.
Each queue is allocated and then registered with the
hypervisor as we bring each cpu up.
The two error queues each get a kernel-side buffer that we
use to quickly empty the main interrupt queue before we call
up to C code to log the event and possibly take evasive
action.
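A sketch of the per-cpu registration step described above
(sun4v_cpu_qconf() is the kernel's hypervisor wrapper; the
HV_CPU_QUEUE_* values are quoted from memory and should be
checked against asm/hypervisor.h):

	/* Hypervisor fast-trap wrapper: configure one queue for the
	 * current cpu.  Returns 0 on success.
	 */
	extern unsigned long sun4v_cpu_qconf(unsigned long type,
					     unsigned long queue_pa,
					     unsigned long num_entries);

	#define HV_CPU_QUEUE_CPU_MONDO		0x3c
	#define HV_CPU_QUEUE_DEVICE_MONDO	0x3d
	#define HV_CPU_QUEUE_RES_ERROR		0x3e
	#define HV_CPU_QUEUE_NONRES_ERROR	0x3f

	/* Each entry is 64 bytes, so a queue of N entries occupies
	 * N * 64 bytes of physically contiguous memory.  Called once
	 * per queue as each cpu is brought up, e.g.:
	 *
	 *	register_one_queue(HV_CPU_QUEUE_DEVICE_MONDO, pa, 512);
	 */
	static int register_one_queue(unsigned long type,
				      unsigned long queue_pa,
				      unsigned long num_entries)
	{
		return sun4v_cpu_qconf(type, queue_pa,
				       num_entries) ? -1 : 0;
	}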
Signed-off-by: David S. Miller <davem@davemloft.net>