Commit graph

17 commits

Paul Mundt
089b43f973 sh: Fix up NUMA build for 29-bit.
pmb_bolt_mapping() is undefined on 29-bit builds, so provide a stub.
This fixes up the NUMA build on platforms lacking PMB support.
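
Concretely, the CONFIG_PMB=n half of asm/mmu.h presumably grows
something along these lines (the prototype is assumed from the
surrounding API, not quoted from the patch):

#ifdef CONFIG_PMB
int pmb_bolt_mapping(unsigned long virt, phys_addr_t phys,
                     unsigned long size, pgprot_t prot);
#else
static inline int pmb_bolt_mapping(unsigned long virt, phys_addr_t phys,
                                   unsigned long size, pgprot_t prot)
{
        /* no PMB on 29-bit parts; fail the request gracefully */
        return -EINVAL;
}
#endif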

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-10 16:29:48 +09:00
Paul Mundt
90e7d649d8 sh: reworked dynamic PMB mapping.
This implements a fairly significant overhaul of the dynamic PMB mapping
code. The primary change here is that the PMB gets its own VMA that
follows the uncached mapping and we attempt to be a bit more intelligent
with dynamic sizing, multi-entry mapping, and so forth.
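
As a hypothetical illustration of the multi-entry sizing, a greedy
splitter covers the request with the largest hardware entry size
(16MB/64MB/128MB/512MB on SH-4A) that still fits and aligns; the names
below are illustrative, not the patch's code:

static const unsigned long pmb_sizes[] = {
        512 << 20, 128 << 20, 64 << 20, 16 << 20,       /* largest first */
};

static int pmb_map_range(unsigned long virt, unsigned long phys,
                         unsigned long size)
{
        while (size) {
                int i;

                /* pick the largest entry that fits and aligns */
                for (i = 0; i < ARRAY_SIZE(pmb_sizes); i++)
                        if (size >= pmb_sizes[i] &&
                            !(virt & (pmb_sizes[i] - 1)) &&
                            !(phys & (pmb_sizes[i] - 1)))
                                break;
                if (i == ARRAY_SIZE(pmb_sizes))
                        return -EINVAL;         /* unmappable remainder */

                /* ... program one PMB entry of pmb_sizes[i] here ... */
                virt += pmb_sizes[i];
                phys += pmb_sizes[i];
                size -= pmb_sizes[i];
        }

        return 0;
}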

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02 16:40:06 +09:00
Paul Mundt
d01447b319 sh: Merge legacy and dynamic PMB modes.
This implements a bit of rework for the PMB code, which permits us to
kill off the legacy PMB mode completely. Rather than trusting the boot
loader to do the right thing, we do a quick verification of the PMB
contents to determine whether to have the kernel set up the initial
mappings or whether it needs to mangle them later on instead.
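
The verification scan might look roughly like the sketch below; the
register addresses and valid bit follow the SH-4A PMB layout, but both
they and the helper name should be read as assumptions rather than the
patch's literal code:

#define PMB_ADDR        0xf6100000      /* address-entry array */
#define PMB_DATA        0xf7100000      /* data-entry array */
#define PMB_V           (1 << 8)        /* entry-valid bit */
#define NR_PMB_ENTRIES  16

static int pmb_boot_mappings_usable(void)
{
        int i, valid = 0;

        for (i = 0; i < NR_PMB_ENTRIES; i++) {
                unsigned long addr, data;

                addr = __raw_readl((void __iomem *)(PMB_ADDR + (i << 8)));
                data = __raw_readl((void __iomem *)(PMB_DATA + (i << 8)));

                /* both halves of a slot must agree on validity */
                if ((addr & PMB_V) && (data & PMB_V))
                        valid++;
        }

        /* nothing valid: the kernel crafts its own initial mappings */
        return valid != 0;
}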

If we're booting from legacy mappings, the kernel will now take control
of them and make them match the kernel's initial mapping configuration.
This is accomplished by breaking the initialization phase out into
multiple steps: synchronization, merging, and resizing. With the recent
rework, the synchronization code already establishes page links for
compound mappings, so we build on top of this for promoting mappings and
reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also
permit us to dynamically resize the uncached mapping without any
particular headaches. The smallest page size is more than sufficient for
mapping all of the kernel text, and as we're careful not to jump to any
far-off locations in the setup code, the mapping can safely be resized
regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 18:13:51 +09:00
Paul Mundt
d53a0d33bc sh: PMB locking overhaul.
This implements some locking for the PMB code. A high-level rwlock is
added for dealing with read/write accesses on the entry map, while a
per-entry spinlock in the data structure deals with a PMB entry changing
out from underneath us.
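
As a rough sketch of the two-level scheme (structure and names assumed
for illustration):

static DEFINE_RWLOCK(pmb_rwlock);       /* guards the entry map itself */

struct pmb_entry {
        unsigned long   vpn;            /* virtual start address */
        unsigned long   ppn;            /* physical start address */
        unsigned long   flags;
        spinlock_t      lock;           /* guards this entry's fields */
};

/* typical reader: take the map lock, then pin the individual entry */
static void pmb_entry_inspect(struct pmb_entry *pmbe)
{
        read_lock(&pmb_rwlock);
        spin_lock(&pmbe->lock);
        /* ... the entry can't change out from underneath us here ... */
        spin_unlock(&pmbe->lock);
        read_unlock(&pmb_rwlock);
}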

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 21:17:02 +09:00
Paul Mundt
d7813bc9e8 sh: Build PMB entry links for existing contiguous multi-page mappings.
This plugs in entry sizing support for existing mappings and then builds
on top of that for linking together entries that are mapping contiguous
areas. This will ultimately permit us to coalesce mappings and promote
head pages while reclaiming PMB slots for dynamic remapping.
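
The contiguity test at the heart of the linking pass reduces to
something like this sketch (fields are illustrative):

struct pmb_entry {
        unsigned long    vpn, ppn;      /* virtual/physical start addresses */
        unsigned long    size;          /* size of this entry's mapping */
        struct pmb_entry *link;         /* next entry of a compound mapping */
};

/* b extends a iff it continues a's range in both address spaces */
static int pmb_can_link(struct pmb_entry *a, struct pmb_entry *b)
{
        return a->vpn + a->size == b->vpn &&
               a->ppn + a->size == b->ppn;
}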

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 17:56:38 +09:00
Paul Mundt
51becfd962 sh: PMB tidying.
Some overdue cleanup of the PMB code, killing off unused functionality
and duplication sprinkled about the tree.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 15:33:30 +09:00
Paul Mundt
7bdda6209f sh: Fix up more 64-bit pgprot truncation on SH-X2 TLB.
Both the store queue API and the PMB remapping take unsigned long for
their pgprot flags, which cuts off the extended protection bits. In the
case of the PMB this isn't really a problem since the cache attribute
bits that we care about are all in the lower 32-bits, but we do it just
to be safe. The store queue remapping on the other hand depends on the
extended prot bits for enabling userspace access to the mappings.
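
The shape of the change, sketched with a hypothetical example_remap()
standing in for the store queue and PMB entry points:

/* before: the pgprot is squeezed through an unsigned long, silently
 * dropping any extended protection bits above bit 31 */
/* void __iomem *example_remap(unsigned long phys, unsigned long size,
 *                             unsigned long pgprot_flags); */

/* after: the pgprot travels intact; pgprot_val() is 64-bit on X2 */
void __iomem *example_remap(unsigned long phys, unsigned long size,
                            pgprot_t prot);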

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 13:23:00 +09:00
Paul Mundt
efd54ea315 sh: Merge the legacy PMB mapping and entry synchronization code.
This merges the code for iterating over the legacy PMB mappings and the
code for synchronizing software state with the hardware mappings. There's
really no reason to do the same iteration twice, and this also buys us
the legacy entry logging facility for the dynamic PMB case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16 18:39:30 +09:00
Paul Mundt
2efa53b269 sh: Make 29/32-bit mode check helper generally available.
Presently __in_29bit_mode() is only defined for the PMB case, but
it's also easily derived from the CONFIG_29BIT and CONFIG_32BIT &&
CONFIG_PMB=n cases.
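
The derivation fits in a few lines of preprocessor logic; a sketch, with
the existing PMB runtime probe elided:

#ifdef CONFIG_PMB
/* PMB parts keep probing the hardware at runtime, as before */
#elif defined(CONFIG_29BIT)
# define __in_29bit_mode()      (1)
#else /* CONFIG_32BIT && !CONFIG_PMB */
# define __in_29bit_mode()      (0)
#endif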

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20 16:40:48 +09:00
Matt Fleming
46c4e5daea sh: Fix CONFIG_PMB=n build.
The last commit introduced the following breakage

arch/sh/include/asm/mmu.h: In function 'pmb_remap':
arch/sh/include/asm/mmu.h:79: error: expected ';' before '}' token

and...

arch/sh/include/asm/mmu.h:78: error: 'EINVAL' undeclared (first use in this function)
arch/sh/include/asm/mmu.h:78: error: (Each undeclared identifier is reported only once
arch/sh/include/asm/mmu.h:78: error: for each function it appears in.)
arch/sh/include/asm/mmu.h: In function 'pmb_init':
arch/sh/include/asm/mmu.h:87: error: 'ENODEV' undeclared (first use in this function)
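
Inferred from the error text alone, the fix amounts to terminating the
return statement in the pmb_remap() stub and pulling in the errno
definitions; roughly (signatures approximate):

#include <linux/errno.h>        /* EINVAL, ENODEV */

/* CONFIG_PMB=n stubs in asm/mmu.h, sketched */
static inline long pmb_remap(unsigned long virt, unsigned long phys,
                             unsigned long size, unsigned long flags)
{
        return -EINVAL;         /* the ';' was missing here */
}

static inline int pmb_init(void)
{
        return -ENODEV;
}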

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-15 08:00:45 +09:00
Paul Mundt
a0ab36689a sh: fixed PMB mode refactoring.
This introduces some much overdue chainsawing of the fixed PMB support.
Fixed PMB was introduced initially to work around the fact that dynamic
PMB mode was relatively broken, though they were never intended to
converge. The main areas where there are differences are whether the
system is booted in 29-bit mode or 32-bit mode, and whether legacy
mappings are to be preserved. Any system booting in true 32-bit mode will
not care about legacy mappings, so these are roughly decoupled.

Regardless of the entry point, PMB and 32BIT are directly related as far
as the kernel is concerned, so we also switch back to having one select
the other.

With legacy mappings iterated through and applied in the initialization
path it's now possible to finally merge the two implementations and
permit dynamic remapping on top of the remaining entries regardless of
whether boot mappings are crafted by hand or inherited from the boot
loader.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-13 18:31:48 +09:00
Matt Fleming
20b5014b3e sh: Fold fixed-PMB support into dynamic PMB support
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:34 +09:00
Matt Fleming
8386aebb9e sh: Make most PMB functions static
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.

Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:37 +09:00
Matt Fleming
1f69b6af91 sh: Prepare for dynamic PMB support
To allow the MMU to be switched between 29-bit and 32-bit mode at
runtime, some constants need to be swapped for functions that return a
runtime value.
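
A sketch of the pattern with hypothetical names: a build-time constant
becomes a helper whose answer depends on the mode the MMU is actually
running in.

/* before: baked in at build time */
/* #define UNCACHED_OFFSET      0x20000000UL */

/* after: resolved at runtime; uncached_offset() and
 * pmb_uncached_offset() are hypothetical names */
static inline unsigned long uncached_offset(void)
{
        if (__in_29bit_mode())
                return 0x20000000UL;    /* fixed P2 window */

        return pmb_uncached_offset();   /* depends on PMB setup */
}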

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:12 +09:00
Francesco VIRLINZI
3b4df71b36 sh: Sanitize asm/mmu.h for assembly use.
This patch adds an #ifndef __ASSEMBLY__ preprocessor guard so that the
defines in the file can also be used from assembly code.
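
The pattern, sketched with illustrative contents: bare #defines stay
visible to .S files while C-only declarations are fenced off from the
assembler.

#ifndef __MMU_H
#define __MMU_H

/* plain constants: usable from both C and assembly */
#define PMB_ADDR        0xf6100000
#define PMB_DATA        0xf7100000

#ifndef __ASSEMBLY__
/* C-only material the assembler would choke on */
struct pmb_entry;
long pmb_remap(unsigned long virt, unsigned long phys,
               unsigned long size, unsigned long flags);
#endif /* __ASSEMBLY__ */

#endif /* __MMU_H */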

Signed-off-by: Francesco Virlinzi <francesco.virlinzi@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-03-31 07:42:37 +09:00
David Howells
8feae13110 NOMMU: Make VMAs per MM as for MMU-mode linux
Make VMAs per mm_struct as for MMU-mode linux.  This solves two problems:

 (1) In SYSV SHM where nattch for a segment does not reflect the number of
     shmat's (and forks) done.

 (2) In mmap() where the VMA's vm_mm is set to point to the parent mm by an
     exec'ing process when VM_EXECUTABLE is specified, regardless of the fact
     that a VMA might be shared and already have its vm_mm assigned to another
     process or a dead process.

A new struct (vm_region) is introduced to track a mapped region and to remember
the circumstances under which it may be shared and the vm_list_struct structure
is discarded as it's no longer required.
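
A sketch of what such a region record plausibly carries, inferred from
this description (the real field layout may differ):

struct vm_region {
        struct rb_node  vm_rb;          /* link in the global region tree */
        unsigned long   vm_flags;       /* VMA flags copied from the mapping */
        unsigned long   vm_start;       /* start address of the region */
        unsigned long   vm_end;         /* end of the mapped part */
        unsigned long   vm_top;         /* end of the allocated part */
        unsigned long   vm_pgoff;       /* offset of vm_start within vm_file */
        struct file     *vm_file;       /* backing file, or NULL if anonymous */
        int             vm_usage;       /* count of VMAs sharing the region */
};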

This patch makes the following additional changes:

 (1) Regions are now allocated with alloc_pages() rather than kmalloc() and
     with no recourse to __GFP_COMP, so the pages are not compound.  Instead,
     each page has a reference on it held by the region.  Anything else that is
     interested in such a page will have to get a reference on it to retain it.
     When the pages are released due to unmapping, each page is passed to
     put_page() and will be freed when the page usage count reaches zero.

 (2) Excess pages are trimmed after an allocation as the allocation must be
     made as a power-of-2 quantity of pages.

 (3) VMAs are added to the parent MM's R/B tree and mmap lists.  As an MM may
     end up with overlapping VMAs within the tree, the VMA struct address is
     appended to the sort key.

 (4) Non-anonymous VMAs are now added to the backing inode's prio list.

 (5) Holes may be punched in anonymous VMAs with munmap(), releasing parts of
     the backing region.  The VMA and region structs will be split if
     necessary.

 (6) sys_shmdt() only releases one attachment to a SYSV IPC shared memory
     segment instead of all the attachments at that address.  Multiple
     shmat()'s return the same address under NOMMU-mode instead of different
     virtual addresses as under MMU-mode.

 (7) Core dumping for ELF-FDPIC requires fewer exceptions for NOMMU-mode.

 (8) /proc/maps is now the global list of mapped regions, and may list bits
     that aren't actually mapped anywhere.

 (9) /proc/meminfo gains a line (tagged "MmapCopy") that indicates the amount
     of RAM currently allocated by mmap to hold mappable regions that can't be
     mapped directly.  These are copies of the backing device or file if not
     anonymous.

These changes make NOMMU mode more similar to MMU mode.  The downside is that
NOMMU mode now requires some extra memory to track things compared to NOMMU
without this patch (VMAs are no longer shared, and there are now region
structs).

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Mike Frysinger <vapier.adi@gmail.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
2009-01-08 12:04:47 +00:00
Paul Mundt
f15cbe6f1a sh: migrate to arch/sh/include/
This follows the sparc changes a439fe51a1.

Most of the moving about was done with Sam's directions at:

http://marc.info/?l=linux-sh&m=121724823706062&w=2

with subsequent hacking and fixups entirely my fault.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-07-29 08:09:44 +09:00
Renamed from include/asm-sh/mmu.h