This patch adds support for the 256k L2 cache found on some IBM/AMCC
4xx PPCs. It introduces a common 4xx SoC file (sysdev/ppc4xx_soc.c)
which currently "only" adds the L2 cache init code. Other common 4xx
code can be added here later.
The L2 cache handling code is a copy of Eugene's code in arch/ppc
with small modifications.
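For illustration only, a rough sketch of the shape such a common-SoC
init can take (the compatible string, DCR layout and enable bit are
assumptions for this example, not the actual ppc4xx_soc.c code):

#include <linux/init.h>
#include <linux/of.h>
#include <asm/dcr.h>

static int __init ppc4xx_l2c_probe(void)
{
	struct device_node *np;
	dcr_host_t dcr;

	np = of_find_compatible_node(NULL, NULL, "ibm,l2-cache");  /* assumed */
	if (!np)
		return 0;			/* no 256k L2 on this SoC */

	dcr = dcr_map(np, 0, 2);		/* base/len normally from "dcr-reg" */
	if (!DCR_MAP_OK(dcr)) {
		of_node_put(np);
		return -ENODEV;
	}

	/* enable the cache; a full implementation would also invalidate it first */
	dcr_write(dcr, 0, dcr_read(dcr, 0) | 0x80000000);  /* assumed enable bit */

	of_node_put(np);
	return 0;
}
arch_initcall(ppc4xx_l2c_probe);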
Tested on AMCC Taishan 440GX.
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
The AMCC 440EP Yosemite board is very similar to the original AMCC Bamboo
board. This adds a YOSEMITE option to Kconfig, and reuses the existing
bamboo board support in the kernel.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Canyonlands is the AMCC 460EX eval board, featuring nearly all of the 460EX
interfaces:
- 1 * PCI (max 66MHz), 2 * PCIe (one 4-lane, one 1-lane)
- 2 * GBit Ethernet with TCP/IP acceleration
- USB 2.0 Host/Device OTG and Host interface
- SATA port
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
scanlog_init() could use some love.
* properly return -ENODEV if this system doesn't support scan-log-dump
* don't printk if scan-log-dump not present; only older systems have it
* convert from create_proc_entry() to preferred proc_create()
* allocate zeroed data buffer
* fix potential memory leak of ent->data on failed create_proc_entry()
* simplify control flow
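For illustration, a sketch of the reworked init path with the points
above applied (the scanlog_fops and ibm_scan_log_dump names and the
buffer size are assumed for the example, not quoted from the patch):

static int __init scanlog_init(void)
{
	struct proc_dir_entry *ent;
	void *data;
	int err = -ENOMEM;

	/* only older systems have scan-log-dump; stay quiet if it's absent */
	ibm_scan_log_dump = rtas_token("ibm,scan-log-dump");
	if (ibm_scan_log_dump == RTAS_UNKNOWN_SERVICE)
		return -ENODEV;

	/* zeroed buffer, freed on any failure below */
	data = kzalloc(RTAS_DATA_BUF_SIZE, GFP_KERNEL);
	if (!data)
		goto err;

	ent = proc_create("powerpc/rtas/scan-log-dump", S_IRUSR, NULL,
			  &scanlog_fops);
	if (!ent)
		goto err;
	ent->data = data;
	return 0;
err:
	kfree(data);
	return err;
}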
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds /sys/kernel/phyp_dump_active so that kdump init scripts may
look for it and take appropriate action if this file is found. This
file is only created when phyp_dump has been registered.
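A minimal sketch of how such a flag file can be exposed (the show()
body and helper name are illustrative):

static ssize_t phyp_dump_active_show(struct kobject *kobj,
				     struct kobj_attribute *attr, char *buf)
{
	return sprintf(buf, "1\n");	/* file exists only when registered */
}

static struct kobj_attribute pdl = __ATTR(phyp_dump_active, 0400,
					  phyp_dump_active_show, NULL);

/* called only after phyp_dump has successfully registered */
static void __init phyp_dump_create_sysfs_file(void)
{
	if (sysfs_create_file(kernel_kobj, &pdl.attr))
		printk(KERN_WARNING "phyp_dump: cannot create sysfs file\n");
}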
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds a kernel command line option "phyp_dump", which takes a 0/1
value for disabling/enabling phyp_dump at boot time. Kdump can use
this on the command line (phyp_dump=0) to disable phyp_dump during boot when
enabling itself. This will ensure only one dumping mechanism is active
at any given time.
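A minimal sketch of the command line hook (the flag variable and its
use are illustrative):

static bool phyp_dump_on_boot = true;		/* default: phyp_dump enabled */

static int __init early_phyp_dump(char *p)
{
	if (p && *p == '0')
		phyp_dump_on_boot = false;	/* e.g. kdump passes phyp_dump=0 */
	return 0;
}
early_param("phyp_dump", early_phyp_dump);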
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This tracks the size freed. For now it does a rudimentary calculation
of the ranges freed. The idea is to keep things simple at the
external shell script level and send in large chunks.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds routines to
a. invalidate the dump
b. calculate the region that is reserved and needs to be freed; this
is exported through a sysfs interface.
Unregister has been removed for now as it wasn't being used.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Set up the actual dump header and register it with the hypervisor.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Check to see if there actually is data from a previously
crashed kernel waiting. If so, allow user-space tools to
grab the data (by reading /proc/kcore). When user-space
finishes dumping a section, it must release that memory
by writing to sysfs. For example,
echo "0x40000000 0x10000000" > /sys/kernel/release_region
will release 256MB starting at the 1GB mark. The released memory
becomes free for general use.
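A sketch of the idea behind the release path, using the page-freeing
idiom of this era (handler and helper names are illustrative):

static void release_memory_range(unsigned long start_pfn,
				 unsigned long nr_pages)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
		struct page *page = pfn_to_page(pfn);

		ClearPageReserved(page);
		init_page_count(page);
		__free_page(page);
		totalram_pages++;
	}
}

static ssize_t release_region_store(struct kobject *kobj,
				    struct kobj_attribute *attr,
				    const char *buf, size_t count)
{
	unsigned long long start, size;

	if (sscanf(buf, "%llx %llx", &start, &size) != 2)
		return -EINVAL;

	release_memory_range(start >> PAGE_SHIFT, size >> PAGE_SHIFT);
	return count;
}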
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Initial patch for reserving memory in early boot, and freeing it
later. If the previous boot had ended with a crash, the reserved
memory would contain a copy of the crashed kernel data.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The functions time_before, time_before_eq, time_after, and
time_after_eq are more robust for comparing jiffies against other
values.
This converts the code to use the time_after() macro, defined in
linux/jiffies.h, which handles jiffies wrapping correctly.
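For example, an expiry test written with time_after() stays correct
across a jiffies wrap (handle_timeout() stands in for the timeout
action):

unsigned long timeout = jiffies + HZ;	/* one second from now */

...
if (time_after(jiffies, timeout))
	handle_timeout();	/* correct even if jiffies wrapped meanwhile */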
Signed-off-by: S.Çağlar Onur <caglar@pardus.org.tr>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The hypervisor can look at the value in the wait_state_cycles field of
the VPA for an estimate of how busy dedicated processors are.
Currently, as the kernel never touches this field, we appear to be
100% busy. This records the duration the kernel is in powersave and
passes that to the HV to provide a reasonable indication of
utilisation.
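A minimal sketch of the idea (the idle-loop context is simplified; the
field name follows the description above):

static void dedicated_idle_sleep(void)		/* illustrative name */
{
	u64 start = get_tb();

	/* ... cede/snooze while there is nothing to run ... */

	/* tell the hypervisor how long we sat in powersave */
	get_lppaca()->wait_state_cycles += get_tb() - start;
}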
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This function has been a no-op for about 18 months; it's there in
the history should anyone need to resurrect it.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Prevailing practice for define_machine() in powerpc is to use the
platform name when the platform has only one define_machine()
statement, but maple uses "maple_md". This caused me some
head-scratching when writing some new code that uses
machine_is(maple).
Use "maple" instead of "maple_md". There should not be any behavioral
change -- fixup_maple_ide() calls machine_is(maple) but the body of
the function is ifdef'd out.
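For reference, machine_is(maple) matches the name given to
define_machine(), so the platform definition now reads roughly
(fields abbreviated):

define_machine(maple) {
	.name		= "Maple",
	.probe		= maple_probe,
	.setup_arch	= maple_setup_arch,
	/* ... */
};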
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The PCI bridge representing the PCIe root complex on Axon contains
device BARs for a memory range and ROM that define inbound accesses.
This confuses the kernel resource management code -- the resources
need to be hidden when Axon is a host bridge.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The cell IOMMU code to parse the dma-ranges properties, used for the fixed
mapping, was broken in two ways for some devices.
Firstly it didn't cope with empty dma-ranges properties. An empty property
implies no translation so can be safely skipped.
The code also wrongly assumed it would be looking at PCI devices, and
hard-coded the number of address and size cells.
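A sketch of the more tolerant parsing (using today's OF helper names;
error handling trimmed):

static int parse_dma_ranges(struct device_node *np)
{
	const __be32 *ranges;
	int len, naddr, nsize;

	ranges = of_get_property(np, "dma-ranges", &len);
	if (ranges && len == 0)
		return 0;	/* empty property: 1:1 translation, nothing to do */

	/* don't assume PCI's address/size cell counts; ask the tree */
	naddr = of_n_addr_cells(np);
	nsize = of_n_size_cells(np);

	/* ... walk the dma-ranges entries using naddr/nsize here ... */
	return 0;
}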
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When building arch/powerpc/platforms/powermac/pic.c with !CONFIG_ADB_PMU
we get the following warning:
arch/powerpc/platforms/powermac/pic.c: In function 'pmacpic_find_viaint':
arch/powerpc/platforms/powermac/pic.c:623: warning: label 'not_found' defined but not used
This fixes it.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
At present, we can hit the BUG_ON in __spu_update_sched_info by reading
the regs file of a context between two calls to spu_run. The
spu_release_saved() called by spufs_regs_read() results in the (now
non-runnable) context being placed back on the run queue, so the next
call to spu_run ends up in the bug condition.
This change uses the SPU_SCHED_SPU_RUN flag to only reschedule a context
if it's still in spu_run().
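A simplified sketch of the resulting check in spu_release_saved():

if (test_bit(SPU_SCHED_WAS_ACTIVE, &ctx->sched_flags) &&
    test_bit(SPU_SCHED_SPU_RUN, &ctx->sched_flags))
	spu_activate(ctx, 0);	/* requeue only if still inside spu_run() */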
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Commit 4ef11014 introduced a usage of SCHED_IDLE to detect when
a context is within spu_run.
Instead of SCHED_IDLE (which has another meaning), add a flag to
sched_flags to tell if a context should be running.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Not all e300 cores support the performance monitors, and the ones
that don't will be confused by the mf/mtpmr instructions. This
allows the support to be optional, so the 8349 can turn it off
while the 8379 can turn it on. Sadly, those aren't config options,
so it will be left to the defconfigs and the users to make that
determination.
Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Add functions to dma_lib to manage the channel synchronization flags.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
Also stop both rx and tx sections before changing the configuration of
the dma device during init.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
The only tricky part is we need to adjust the PTE insertion loop to
cater for holes in the page table. The PTEs for each segment start on
a 4K boundary, so with 16M pages we have 16 PTEs per segment and then
a gap to the next 4K page boundary.
It might be possible to allocate the PTEs for each segment separately,
saving the memory currently filling the gaps. However we'd need to
check that's OK with the hardware, and that it actually saves memory.
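Purely as an illustration of the stride (not the actual cell code),
with 16M pages and 256M segments:

#include <linux/types.h>

#define SEGMENT_SHIFT		28			/* 256M segments */
#define PTE_SLOTS_PER_4K	(4096 / sizeof(u64))	/* 512 slots per table page */

static void fill_ptab(u64 *ptab, unsigned long nsegs,
		      unsigned long pgshift, u64 base_pte)
{
	unsigned long seg, i, ptes_per_seg = 1ul << (SEGMENT_SHIFT - pgshift);
	u64 addr = 0;

	for (seg = 0; seg < nsegs; seg++) {
		/* each segment's PTEs start on the next 4K boundary */
		u64 *pte = ptab + seg * PTE_SLOTS_PER_4K;

		for (i = 0; i < ptes_per_seg; i++, addr += 1ul << pgshift)
			pte[i] = base_pte | addr;	/* 16 PTEs, then a gap */
	}
}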
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Make some preliminary changes to cell_iommu_alloc_ptab() to allow it to
take the page size as a parameter rather than assuming IOMMU_PAGE_SIZE.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
We use n_pte_pages to calculate the stride through the page tables, but
we also use it to set the NPPT value in the segment table entry. That is
defined as the number of 4K pages per segment, so we should calculate
it as such regardless of the IOMMU page size.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Currently the cell IOMMU code allocates the entire IOMMU page table in a
contiguous chunk. This is nice and tidy, but for machines with larger
amounts of RAM the page table allocation can fail due to it simply being
too large.
So split the segment table and page table setup routine, and arrange to
have the dynamic and fixed page tables allocated separately.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
There's no need to allocate the pad page unless we're going to actually
use it - so move the allocation to where we know we're going to use it.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The cell IOMMU code no longer needs to save the pte_offset variable
separately, it is incorporated into tbl->it_offset.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The cell IOMMU tce build and free routines use pte_offset to convert
the index passed from the generic IOMMU code into a page table offset.
This takes into account the SPIDER_DMA_OFFSET which sets the top bit
of every DMA address.
However it doesn't cater for the IOMMU window starting at a non-zero
address, as the base of the window is not incorporated into pte_offset
at all.
As it turns out, tbl->it_offset already contains the value we need: it
takes into account the base of the window and also pte_offset. So use
it instead!
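A simplified sketch of the lookup in the tce build/free paths
(pte_value stands in for the PTE being written):

unsigned long *io_pte = (unsigned long *)tbl->it_base;

/* it_offset already covers both the window base and the old pte_offset */
io_pte[index - tbl->it_offset] = pte_value;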
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
It's called the fixed mapping, not the static mapping.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Ulrich Weigand found back in November that the hardware watchpoints on
cell were not working:
http://ozlabs.org/pipermail/linuxppc-dev/2007-November/046135.html
This patch sets them during initialization.
Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
This moves the private DABRX definitions for celleb from beat.h to
reg.h to make them usable by all platforms.
Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The spu_runcntl_RW register is restored within the spu_restore function.
So, at the end of spu_bind_context, the SPU context is not just loaded,
but running.
This change corrects the state switch to account that time as USER.
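A sketch of the accounting change at the end of spu_bind_context():

/* the restored context is already running, so charge the time to USER */
spuctx_switch_state(ctx, SPU_UTIL_USER);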
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
There is a potential race between flushes of the entire SLB in the MFC
and the point where new entries are being established. The problem is
that we might put an ESID entry into the MFC SLB when the VSID entry has
just been cleared by the global flush.
This can be circumvented by holding the register_lock throughout both
the flushing and the creation of SLB entries.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
When we replace an SLB entry in the MFC after using up all the available
entries, there is a short window in which an incorrect entry is marked
as valid.
The problem is that the 'valid' bit is stored in the ESID, which is
always written after the VSID. Overwriting the VSID first will make the
original ESID entry point to the new VSID, which means that any
concurrent DMA accessing the old ESID ends up being redirected to the
new virtual address. A few cycles later, we write the new ESID and
everything is fine again.
That race can be closed by writing a zero entry to the ESID first, which
makes sure that the VSID is not accessed until we write the new ESID.
Note that we don't actually need to invalidate the SLB entry using the
invalidation register, which would also flush any ERAT entries for that
segment, because the segment translation does not become invalid but is
only removed from the SLB cache.
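A sketch of the safe update order (register names follow the MFC priv2
area; simplified):

out_be64(&priv2->slb_index_W, slot);
eieio();
out_be64(&priv2->slb_esid_RW, 0);		/* hide the slot first */
eieio();
out_be64(&priv2->slb_vsid_RW, slb->vsid);	/* safe: ESID is invalid */
eieio();
out_be64(&priv2->slb_esid_RW, slb->esid);	/* make the new entry live */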
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
There is a small race between the context save procedure
and the SPU interrupt handling, where we expect all interrupt
processing to have finished after disabling them, while
an interrupt is still being processed on another CPU.
The obvious fix is to call synchronize_irq() after disabling
the interrupts at the start of the context save procedure
to make sure we never access the SPU any more during an
ongoing save or even after that.
Thanks to Benjamin Herrenschmidt for pointing this out.
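A simplified sketch of the quiesce step in the save path (the helper
name is illustrative):

static void quiesce_spu_interrupts(struct spu *spu)
{
	spu_int_mask_set(spu, 0, 0ul);
	spu_int_mask_set(spu, 1, 0ul);
	spu_int_mask_set(spu, 2, 0ul);
	eieio();

	/* wait for any handler already running on another CPU to finish */
	synchronize_irq(spu->irqs[0]);
	synchronize_irq(spu->irqs[1]);
	synchronize_irq(spu->irqs[2]);
}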
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Currently, we get the following output from sputrace:
[5.097935954] 1606: spufs_ps_nopfn__enter (thread = 1605, spu = -1)
[5.097958164] 1606: spufs_ps_nopfn__insert (thread = 1605, spu = 15)
[5.097973529] 1607: spufs_ps_nopfn__enter (thread = 1605, spu = -1)
[5.097989174] 1607: spufs_ps_nopfn__insert (thread = 1605, spu = 14)
Which leads me to believe that 160[67] is the current thread ID, and
1605 is the context backing the psmap.
However, the 'current' and 'owner' tids are reversed - the 'current'
tid is on the right. This change puts the current thread ID in the
left-hand column instead, and renames the right to 'ctxthread'.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
At present, we have a situation where a context with no owner is
re-scheduled by spu_forget:
  Thread 1: reading regs file        Thread 2: context owner

                                     spu_forget()
                                      - ctx->owner = NULL
                                      - set SPU_SCHED_WAS_ACTIVE

  spu_acquire_saved()
   - context is in saved state

  spu_release_saved()
   - SPU_SCHED_WAS_ACTIVE is set,
     so spu_activate() the context,
     which now has no owner
In spu_forget(), we shouldn't be requesting a re-schedule by setting
SPU_SCHED_WAS_ACTIVE. This change removes the set_bit in spu_forget(),
so that spu_release_saved() doesn't reinsert this destroyed context on
to the run queue.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
We have a small window where a spu context may be destroyed while
we're servicing a page fault (from another thread) to the context's
problem state mapping.
After we up_read() the mmap_sem, it's possible that the context is
destroyed by its owning thread, and so the later references to ctx
are invalid. This can manifest as a deadlock on the (now free()-ed)
context state mutex.
This change adds a reference to the context before we release the
mmap_sem, so that the context cannot be destroyed.
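A simplified sketch of the new ordering in the fault handler (using
the mmap_sem name of this era):

get_spu_context(ctx);		/* pin ctx before dropping mmap_sem */
up_read(&current->mm->mmap_sem);

/* ... wait for the context, map the problem state area ... */

put_spu_context(ctx);		/* ctx could not have been freed meanwhile */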
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
m8xx_setup.c says:
/* Force all 8xx processors to use divide by 16 processor clock. */
And at the same time it is using bus-frequency for calculating the
timebase. This is okay for most setups because bus-frequency is
equal to clock-frequency.
The problem emerges when cpu frequency is > 66MHz, quoting
u-boot/cpu/mpc8xx/speed.c:
	if (gd->cpu_clk <= 66000000) {
		sccr_reg |= SCCR_EBDF00;	/* bus division factor = 1 */
		gd->bus_clk = gd->cpu_clk;
	} else {
		sccr_reg |= SCCR_EBDF01;	/* bus division factor = 2 */
		gd->bus_clk = gd->cpu_clk / 2;
	}
So in the case of cpu clock > 66MHz, bus_clk = cpu_clk / 2. And then, from
Linux, we calculate the timebase frequency as tb_freq = bus_clk / 16,
that is cpu_clk / 2 / 16, which is half the correct value of cpu_clk / 16.
This fixes the system time drifting problem on the EP885C board
running at 133MHz.
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
For memory remove, we need to clean up htab mappings for the
section of the memory we are removing.
This implements support for removing htab bolted mappings for pSeries
logical partitions. Other sub-archs may need to implement similar
functionality for hotplug memory remove to work on them.
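A simplified sketch of the walk over the removed section, assuming a
per-platform hpte_removebolted() hook that pSeries LPAR implements:

static void remove_section_mapping(unsigned long start, unsigned long end)
{
	unsigned long step = 1ul << mmu_psize_defs[mmu_linear_psize].shift;
	unsigned long ea;

	for (ea = start; ea < end; ea += step)
		ppc_md.hpte_removebolted(ea, mmu_linear_psize, mmu_kernel_ssize);
}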
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>