Here is a set of ppc64-specific patches that at least allow
compilation/booting with the following configurations:
FLATMEM
SPARSEMEM
SPARSEMEM + MEMORY_HOTPLUG
Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
pgdat->node_size_lock is basically only needed in one place in the normal
code: show_mem(), which is the arch-specific sysrq-m printing function.
Strictly speaking, the architectures not doing memory hotplug do not need this
locking in show_mem(). However, they are all included for completeness. This
should also make any future consolidation of all of the implementations a
little more straightforward.
This lock is also held in the sparsemem code during a memory removal, as
sections are invalidated. This is the place where pfn_valid() is made false
for a memory area that's being removed. The lock is only required when doing
pfn_valid() operations on memory for which the caller does not already hold a
reference on the page, such as in show_mem().
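To illustrate the intended pattern, here is a hedged sketch of a
show_mem()-style walk (only the node_size_lock name comes from the
description above; the pgdat fields and loop body are just a generic
example):

/* Hold pgdat->node_size_lock across the pfn walk so that a concurrent
 * memory removal cannot flip pfn_valid() under us.  Illustrative only. */
unsigned long flags, i;

spin_lock_irqsave(&pgdat->node_size_lock, flags);
for (i = 0; i < pgdat->node_spanned_pages; i++) {
        unsigned long pfn = pgdat->node_start_pfn + i;

        if (!pfn_valid(pfn))
                continue;
        /* ... inspect pfn_to_page(pfn): free, shared, reserved counts ... */
}
spin_unlock_irqrestore(&pgdat->node_size_lock, flags);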
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
First step in pushing down the page_table_lock. init_mm.page_table_lock has
been used throughout the architectures (usually for ioremap): not to serialize
kernel address space allocation (that's usually vmlist_lock), but because
pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
architectures; instead rely on pud_alloc, pmd_alloc and pte_alloc_kernel to
take and drop it when allocating a new one, checking under the lock whether a
racing task got there first. Similarly, no page_table_lock in vmalloc's
map_vm_area.
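Roughly, the new arrangement looks like this (a condensed sketch of
pte_alloc_kernel(), not the exact final code):

pte_t *pte_alloc_kernel(pmd_t *pmd, unsigned long address)
{
        if (pmd_none(*pmd)) {
                pte_t *new = pte_alloc_one_kernel(&init_mm, address);

                if (!new)
                        return NULL;

                /* the allocator, not the caller, takes the lock */
                spin_lock(&init_mm.page_table_lock);
                if (pmd_present(*pmd))          /* a racing task got there first */
                        pte_free_kernel(new);
                else
                        pmd_populate_kernel(&init_mm, pmd, new);
                spin_unlock(&init_mm.page_table_lock);
        }
        return pte_offset_kernel(pmd, address);
}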
Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
user mms, which are converted only by a later patch, for now they have to lock
differently according to whether or not it's init_mm.
If sources get muddled, there's a danger that an arch source taking
init_mm.page_table_lock will be mixed with common source also taking it (or
neither take it). So break the rules and make another change, which should
break the build for such a mismatch: remove the redundant mm arg from
pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
took page_table_lock for no good reason.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove PageReserved() calls from core code by tightening VM_RESERVED
handling in mm/ to cover PageReserved functionality.
PageReserved special casing is removed from get_page and put_page.
All setting and clearing of PageReserved is retained, and it is now flagged
in the page_alloc checks to help ensure we don't introduce any refcount
based freeing of Reserved pages.
MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
deprecated. We never handled it completely correctly anyway, and it will be
reintroduced in the future if required (Hugh has a proof of concept).
Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
be trivially removed.
Last real user of PageReserved is swsusp, which uses PageReserved to
determine whether a struct page points to valid memory or not. This still
needs to be addressed (a generic page_is_ram() should work).
A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
thus mapcounted and counted towards shared rss). These writes to the struct
page could cause excessive cacheline bouncing on big systems. There are a
number of ways this could be addressed if it is an issue.
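For context, a hedged sketch of the driver-side convention this relies on
(all names invented; the exact flags a driver needs depend on the mapping):
the vma is marked VM_RESERVED, which is what mm/ now keys off instead of the
per-page PageReserved checks formerly in get_page()/put_page():

static int example_mmap(struct file *file, struct vm_area_struct *vma)
{
        unsigned long size = vma->vm_end - vma->vm_start;

        vma->vm_flags |= VM_RESERVED;   /* core VM: leave these pages alone */
        return remap_pfn_range(vma, vma->vm_start,
                               example_buffer_pfn,     /* assumed: pfn of the buffer */
                               size, vma->vm_page_prot);
}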
Signed-off-by: Nick Piggin <npiggin@suse.de>
Refcount bug fix for filemap_xip.c
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The mpic interrupt controller driver (used on G5 and early pSeries among
others) has a bug where it doesn't get the right virtual address for the
timer registers. It causes the driver to poke at the MMIO space of
whatever has been mapped just next to it (ouch !) when initializing and
causes boot failures on some IBM machines.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This fixes a stupid typo bug in the iSeries hash table code.
When we place a hash PTE in the secondary bucket, instead of setting the
SECONDARY flag bit, as we should, we (redundantly) set the VALID flag.
This was introduced with the patch abolishing bitfields from the hash
table code. Mea culpa, oops. It wasn't noticed until now because in practice
we don't hit the secondary bucket terribly often.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
While working on 64K pages, I found this little buglet in our
update_mmu_cache() implementation.
The code calls __hash_page() passing it an "access" parameter (the type
of access that triggers the hash) containing the bits _PAGE_RW and
_PAGE_USER of the linux PTE. The latter is useless in this case and the
former is wrong. In fact, if we have a writeable PTE and we pass
_PAGE_RW to hash_page(), it will set _PAGE_DIRTY (since we track dirty
that way, by hash faulting !dirty) which is not what we want.
The correct fix is to always pass 0. That means that only read-only or
already-dirty read-write PTEs will be preloaded. The (hopefully rare) case
of a not-yet-dirty read-write PTE can't be preloaded this way; it will have
to fault in hash_page on the actual access.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This fixes a typo in the div128_by_32 function used in the timekeeping
calculations on ppc64. If you look at the code it's quite obvious
that we need (rb + c) rather than (rb + b). The "b" is clearly just a
typo.
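For reference, a self-contained sketch of the same 128-by-32 long division in
32-bit limbs (variable names chosen to line up with the (rb + c) expression;
this is an illustration, not the kernel routine verbatim):

#include <stdint.h>

struct div128 { uint64_t hi, lo; };

/* Divide the 128-bit value (dh:dl) by a 32-bit divisor, limb by limb.
 * Each step divides the previous remainder, shifted up 32 bits, plus the
 * NEXT limb of the dividend - hence (rb + c), never (rb + b). */
static struct div128 div128_by_32(uint64_t dh, uint64_t dl, uint32_t divisor)
{
        uint32_t a = dh >> 32, b = (uint32_t)dh;        /* high limbs */
        uint32_t c = dl >> 32, d = (uint32_t)dl;        /* low limbs */
        uint64_t ra, rb, rc;
        uint32_t w, x, y, z;
        struct div128 q;

        w  = a / divisor;
        ra = ((uint64_t)(a % divisor) << 32) + b;

        x  = (uint32_t)(ra / divisor);
        rb = ((ra % divisor) << 32) + c;        /* remainder joins limb c */

        y  = (uint32_t)(rb / divisor);
        rc = ((rb % divisor) << 32) + d;        /* remainder joins limb d */

        z  = (uint32_t)(rc / divisor);

        q.hi = ((uint64_t)w << 32) + x;
        q.lo = ((uint64_t)y << 32) + z;
        return q;
}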
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The implementation of __kernel_gettimeofday() in the 32 bits vDSO has a
small bug (a typo actually) that will cause it to lose 1 bit of precision.
Not terribly bad but worth fixing.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Andreas Schwab spotted that recent kernels broke reporting of the PowerMac
machine model in /proc/cpuinfo. This fixes it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Newer gcc's are generating this relocation, so the module loader needs to
handle it.
Signed-off-by: Peter Bergner <bergner@vnet.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
pSeries_irq_bus_setup is marked __devinit but references s7a_workaround
which is marked __initdata.
Depending on who got the memory for s7a_workaround (and if the value was
now positive), it was possible for PCI hotplugged devices to have 3
subtracted from their interrupt number. This would happen randomly and
caused me much confusion :)
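A hedged sketch of the mismatch (the function and variable names and the
"- 3" come from the description above; the body is condensed and the real
guard condition is omitted):

/* s7a_workaround lives in .init.data and is freed after boot, but the
 * __devinit function can run much later, at PCI hotplug time, and then
 * reads whatever reused that memory. */
static int s7a_workaround __initdata;

static void __devinit pSeries_irq_bus_setup(struct pci_bus *bus)
{
        struct pci_dev *dev;

        list_for_each_entry(dev, &bus->devices, bus_list)
                if (s7a_workaround)             /* stale after init */
                        dev->irq -= 3;
}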
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- added typedef unsigned int __nocast gfp_t;
- replaced __nocast uses for gfp flags with gfp_t - it gives exactly
the same warnings as far as sparse is concerned, doesn't change
generated code (from gcc's point of view we replaced unsigned int with a
typedef), and documents what's going on far better.
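For illustration (the function name is invented), the typedef and its effect
on a prototype:

typedef unsigned int __nocast gfp_t;

/* before: void *example_alloc(size_t size, unsigned int __nocast flags); */
void *example_alloc(size_t size, gfp_t flags);  /* same sparse checking, clearer intent */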
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The incorrect kprobe_mutex usage on x86_64 had percolated to ppc64 too.
First noticed by Yanmin Zhang.
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
BUILD_BUG_ON(1) is asking for trouble (and getting it) when used in that
manner - dead code elimination happens after we parse it, and an invalid
type is an invalid type, dead code or not.
It might be version-dependent, but at least gcc 4.0.1 refuses to accept it.
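A hedged sketch of why dead code does not save you (macro body as commonly
defined at the time):

/* BUILD_BUG_ON() expands to a negative-sized array type when the condition
 * is true.  The type is checked while the code is parsed, long before dead
 * code elimination runs, so even an unreachable BUILD_BUG_ON(1) is a hard
 * compile error. */
#define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2 * !!(condition)]))

void example(void)
{
        if (0)
                BUILD_BUG_ON(1);        /* still rejected: char[-1] is invalid */
}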
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
My previous patch fixing invalidation of huge PTEs wasn't good enough, we
still had an issue if a PTE invalidation batch contained both small and
large pages. This patch fixes this by making sure the batch is flushed if
the page size fed to it changes.
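A hedged, self-contained sketch of the rule (all names invented; the real
batch structure and flush path differ):

#define TLB_BATCH_MAX 192

struct tlb_batch {
        int index;                      /* number of queued invalidations */
        int psize;                      /* page size of the queued entries */
        unsigned long vaddr[TLB_BATCH_MAX];
};

static void flush_tlb_batch(struct tlb_batch *b)
{
        /* ... invalidate b->vaddr[0 .. b->index - 1] ... */
        b->index = 0;
}

static void batch_add(struct tlb_batch *b, unsigned long vaddr, int psize)
{
        if (b->index != 0 && b->psize != psize)
                flush_tlb_batch(b);     /* never mix small and huge pages */
        b->psize = psize;
        b->vaddr[b->index++] = vaddr;
        if (b->index == TLB_BATCH_MAX)
                flush_tlb_batch(b);
}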
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Mikey and I were testing kexec and hit a lockup. It turns out gcc 4.0
optimises the kexec_prepare_cpus loop so we avoid reloading paca.hw_cpu_id.
A gcc barrier() fixes the problem.
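A hedged sketch of the failure mode (the structure here is a stand-in for
the paca, not the real thing):

#define barrier() asm volatile("" ::: "memory")

struct paca_like { int hw_cpu_id; };    /* field updated by another CPU */

/* Without barrier(), gcc may read hw_cpu_id once, keep it in a register and
 * spin forever on the stale value; the empty asm with a "memory" clobber
 * forces a reload on every iteration. */
static void wait_until_gone(struct paca_like *p)
{
        while (p->hw_cpu_id != -1)
                barrier();
}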
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Current kernel has a couple of sneaky bugs in the ppc64 hugetlb code that
cause huge pages to be potentially left stale in the hash table and TLBs
(improperly invalidated), with all the nasty consequences that can have.
One is that we forgot to set the "secondary" bit in the hash PTEs when
hashing a huge page in the secondary bucket (fortunately very rare).
The other one is on non-LPAR machines (like Apple G5s), flush_hash_range()
which is used to flush a batch of PTEs simply did not work for huge pages.
Historically, our huge page code didn't batch, but this was changed without
fixing this routine. This patch fixes both.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The SMU is the "system controller" chip used by Apple's recent G5 machines,
including the iMac G5. It drives things like fans, i2c busses, the real time
clock, etc...
The current kernel contains a very crude driver that doesn't do much more
than reading the real time clock synchronously. This is a completely
rewritten driver that provides interrupt based command queuing, a userland
interface, and an i2c/smbus driver for accessing the devices hanging off
the SMU i2c busses, like temperature sensors. This driver is a building block
for upcoming work on thermal control for those machines, among others.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Jean Delvare <khali@linux-fr.org>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix my stupid bug in the 64bit version of PTRACE_SET_DEBUGREG.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix build when iommu debug is enabled.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The recent iommu fix broke booting on some POWER4 and POWER5 LPAR boxes.
It looks like we have been calling the non-LPAR iommu_dev_setup on LPAR
machines for a while. The recent iommu fix caused that code path to
fail.
It looks like we just need to hook up the device's iommu_table to its
parent's, so do that instead of calling iommu_dev_setup_pSeries and
crossing the streams.
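A hedged sketch of that hookup (dn is the device's device_node; apart from
the iommu_table field named above, the details are approximate):

struct device_node *pdn;

/* Share the closest parent's iommu_table instead of running the non-LPAR
 * setup path for this device. */
for (pdn = dn->parent; pdn; pdn = pdn->parent)
        if (PCI_DN(pdn) && PCI_DN(pdn)->iommu_table)
                break;
if (pdn)
        PCI_DN(dn)->iommu_table = PCI_DN(pdn)->iommu_table;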
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
zImage.vmode was recently added. It's a version of zImage in which the ELF
note section used by Open Firmware indicates that it requires a virtual
mode instance of OF instead of real mode. This allows it to work with
Apple OF, and thus to be directly bootable (or netbootable) from the OF
command line. (Unfortunately, pSeries OF sort-of requires real mode and
Apple OF
sort-of requires virtual mode, and both tend to be unhappy if no notes
section specifies the mode at all).
However, we forgot to add zImage.vmode to the default G5 build. This
fixes it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The new version of the flattened device tree passes the boot cpuid in the
header instead of via a linux,boot-cpu property.
We need to update the in-kernel OF parsing code to handle this; otherwise
machines with a non-zero boot cpuid fail to come up.
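A hedged sketch of the parsing change (header fields as in the flattened-tree
format of the time; the fallback helper name is invented):

struct boot_param_header {
        u32 magic;
        u32 totalsize;
        u32 off_dt_struct;
        u32 off_dt_strings;
        u32 off_mem_rsvmap;
        u32 version;
        u32 last_comp_version;
        u32 boot_cpuid_phys;            /* new: boot cpu id carried in the header */
};

static u32 get_boot_cpuid(struct boot_param_header *bph)
{
        if (bph->version >= 2)          /* header format that carries the id */
                return bph->boot_cpuid_phys;
        return scan_linux_boot_cpu_property();  /* invented name for the old path */
}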
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Some RS64 systems (such as F80) have non-python host bridges with EADS.
However, they have two EADS with 4 buses each under them, so the old logic
that assumed no more than 7 buses per PHB failed miserably.
Big thanks to Olaf Hering for helping me test this, he's got one of the few
machines that broke from the previous logic.
Also, be a bit smarter at detecting the need for a PHB-level IOMMU table
by checking for the presence of an ISA bus. Only PHBs with ISA bridges
should need the PHB-level table.
Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
My code to set up the PCI tree from the Open Firmware device tree was
setting IORESOURCE_* flags on the resources for the devices, but not
the PCI_BASE_ADDRESS_* flags. This meant that some drivers
misbehaved, and /proc/pci showed the wrong types for the resources.
This fixes it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
I forgot to include siginfo.h when I added data breakpoint support. We
must include it in a roundabout way in mainline.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
As noted by Olof Johansson <olof@lixom.net>:
"A recent patch changed the way the LPAR bit is checked during early
boot. This resulted in a polarity change in a conditional branch
without changing the branch, causing at least some legacy machines to
not boot."
This fixes it.
Signed-off-by: Jimi Xenidis <jimix@watson.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Several implementations were essentially a common piece of C code using
the cmpxchg() macro. Put the implementation in one spot that everyone
can share, and convert sparc64 over to using this.
Alpha is the lone arch-specific implementation, which codes up a
special fast path for the common case in order to avoid GP reloading
which a pure C version would require.
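For illustration, the general shape of such shared C (a generic
read-modify-write retry loop built on cmpxchg(); not necessarily the exact
helper being consolidated):

/* Any architecture that provides cmpxchg() gets this for free: read the old
 * value, compute the new one, retry if another CPU changed it meanwhile. */
static inline int atomic_add_return_via_cmpxchg(atomic_t *v, int delta)
{
        int old, new;

        do {
                old = atomic_read(v);
                new = old + delta;
        } while (cmpxchg(&v->counter, old, new) != old);

        return new;
}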
Signed-off-by: David S. Miller <davem@davemloft.net>
Pavel Emelianov and Kirill Korotaev observe that fs and arch users of
security_vm_enough_memory tend to forget to vm_unacct_memory when a
failure occurs further down (typically in setup_arg_pages variants).
These are all users of insert_vm_struct, and that reservation will only
be unaccounted on exit if the vma is marked VM_ACCOUNT: which in some
cases it is (hidden inside VM_STACK_FLAGS) and in some cases it isn't.
So x86_64 32-bit and ppc64 vDSO ELFs have been leaking memory into
Committed_AS each time they're run. But don't add VM_ACCOUNT to them,
it's inappropriate to reserve against the very unlikely case that gdb
be used to COW a vDSO page - we ought to do something about that in
do_wp_page, but there are yet other inconsistencies to be resolved.
The safe and economical way to fix this is to let insert_vm_struct do
the security_vm_enough_memory check when it finds VM_ACCOUNT is set.
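Roughly, the check inside insert_vm_struct() becomes (a sketch, surrounding
code elided):

/* Only vmas marked VM_ACCOUNT are charged; doing it here means callers no
 * longer call security_vm_enough_memory() themselves and cannot leak the
 * charge when a later step fails. */
if ((vma->vm_flags & VM_ACCOUNT) &&
    security_vm_enough_memory((vma->vm_end - vma->vm_start) >> PAGE_SHIFT))
        return -ENOMEM;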
And the MIPS irix_brk has been calling security_vm_enough_memory before
calling do_brk which repeats it, doubly accounting and so also leaking.
Remove that, and all the fs and arch calls to security_vm_enough_memory:
give it a less misleading name later on.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-Off-By: Kirill Korotaev <dev@sw.ru>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
My patch "Separate pci bits out of struct device_node" (commit
1635317fac) had the unfortunate
side-effect that it stopped eeh_init() from working correctly.
It needs the pointers set up by find_and_init_phbs(), but it was being
called just before find_and_init_phbs(). That meant that we didn't
enable EEH (pSeries PCI error recovery) on any devices, and that meant
that on POWER5 systems, the hypervisor wouldn't let us enable memory or
I/O space access to any devices, and their drivers got somewhat
confused.
This fixes it by moving the eeh_init call after find_and_init_phbs.
Tested on a POWER5 partition.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
ppc64_attention_msg and ppc64_dump_msg are not used so remove them.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
If the rtas start-cpu token doesn't exist then presume the cpu is already
spinning. If it isn't we will catch it later on when the cpu doesn't
respond.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
A few xics cleanups:
- Make some things static.
- Be more consistent with error printing - interrupts are unsigned,
error values are signed.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The ptrace get and set methods for VMX/Altivec registers present in the
ppc tree were missing for ppc64. This patch adds the 32-bit and
64-bit methods. Updated with Anton's suggestions, along the lines of his
code snippet.
Added:
- flush_altivec_to_thread calls as suggested by Anton
- piecewise copy of structure to preserve 32-bit vrsave data as per
Anton
(I consolidated the 32 and 64bit versions with 2 helper macros - Anton)
Signed-off-by: Robert C Jennings <rcjenn@austin.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds code which gives us the option on ppc64 of instantiating the
PCI tree (the tree of pci_bus and pci_dev structs) from the Open
Firmware device tree rather than by probing PCI configuration space.
The OF device tree has a node for each PCI device and bridge in the
system, with properties that tell us what addresses the firmware has
configured for them and other details.
There are a couple of reasons why this is needed. First, on systems
with a hypervisor, there is a PCI-PCI bridge per slot under the PCI
host bridges. These PCI-PCI bridges have special isolation features
for virtualization. We can't write to their config space, and we are
not supposed to be reading their config space either. The firmware
tells us about the address ranges that they pass in the OF device
tree.
Secondly, on powermacs, the interrupt controller is in a PCI device
that may be behind a PCI-PCI bridge. If we happened to take an
interrupt just at the point when the device or a bridge on the path to
it was disabled for probing, we would crash when we try to access the
interrupt controller.
I have implemented a platform-specific function which is called for
each PCI bridge (host or PCI-PCI) to say whether the code should look
in the device tree or use normal PCI probing for the devices under
that bridge. On pSeries machines we use the device tree if we're
running under a hypervisor, otherwise we use normal probing. On
powermacs we use normal probing for the AGP bridge, since the device
for the AGP bridge itself isn't shown in the device tree (at least on
my G5), and the device tree for everything else.
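A hedged sketch of that per-bridge decision (the hook and constant names
here are my best guess at what the code adds; treat them as assumptions):

static void fill_bus(struct pci_bus *bus, struct device_node *node)
{
        int mode = PCI_PROBE_NORMAL;

        /* ask the platform how to populate this bus */
        if (ppc_md.pci_probe_mode)
                mode = ppc_md.pci_probe_mode(bus);

        if (mode == PCI_PROBE_DEVTREE)
                of_scan_bus(node, bus);         /* build pci_devs from OF properties */
        else
                pci_scan_child_bus(bus);        /* classic config-space probing */
}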
This has been tested on a dual G5 powermac, a partition on a POWER5
machine (running under the hypervisor), and a legacy iSeries
partition.
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is less troublesome and makes more sense.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch contains the most trivial from Rusty's trivial patches:
- spelling fixes
- remove duplicate includes
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch (written by me and also containing many suggestions of Arjan van
de Ven) does a major cleanup of the spinlock code. It does the following
things:
- consolidates and enhances the spinlock/rwlock debugging code
- simplifies the asm/spinlock.h files
- encapsulates the raw spinlock type and moves generic spinlock
features (such as ->break_lock) into the generic code.
- cleans up the spinlock code hierarchy to get rid of the spaghetti.
Most notably there's now only a single variant of the debugging code,
located in lib/spinlock_debug.c. (previously we had one SMP debugging
variant per architecture, plus a separate generic one for UP builds)
Also, I've enhanced the rwlock debugging facility: it will now track
write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
All locks have lockup detection now, which will work for both soft and hard
spin/rwlock lockups.
The arch-level include files now only contain the minimally necessary
subset of the spinlock code - all the rest that can be generalized now
lives in the generic headers:
include/asm-i386/spinlock_types.h | 16
include/asm-x86_64/spinlock_types.h | 16
I have also split up the various spinlock variants into separate files,
making it easier to see which does what. The new layout is:
SMP | UP
----------------------------|-----------------------------------
asm/spinlock_types_smp.h | linux/spinlock_types_up.h
linux/spinlock_types.h | linux/spinlock_types.h
asm/spinlock_smp.h | linux/spinlock_up.h
linux/spinlock_api_smp.h | linux/spinlock_api_up.h
linux/spinlock.h | linux/spinlock.h
/*
* here's the role of the various spinlock/rwlock related include files:
*
* on SMP builds:
*
* asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
* initializers
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* asm/spinlock.h: contains the __raw_spin_*()/etc. lowlevel
* implementations, mostly inline assembly code
*
* (also included on UP-debug builds:)
*
* linux/spinlock_api_smp.h:
* contains the prototypes for the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*
* on UP builds:
*
* linux/spinlock_types_up.h:
* contains the generic, simplified UP spinlock type.
* (which is an empty structure on non-debug builds)
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* linux/spinlock_up.h:
* contains the __raw_spin_*()/etc. version of UP
* builds. (which are NOPs on non-debug, non-preempt
* builds)
*
* (included on UP-non-debug builds:)
*
* linux/spinlock_api_up.h:
* builds the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*/
All SMP and UP architectures are converted by this patch.
arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
cross-compilers. m32r, mips, sh and sparc have not been tested yet, but
should be mostly fine.
From: Grant Grundler <grundler@parisc-linux.org>
Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
Builds 32-bit SMP kernel (not booted or tested). I did not try to build
non-SMP kernels. That should be trivial to fix up later if necessary.
I converted the bit-ops atomic_hash lock to raw_spinlock_t. Doing so avoids
some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
are well tested and contained entirely inside arch-specific code. I do NOT
expect any new issues to arise with them.
If someone does ever need to use debug/metrics with them, then they will
need to unravel this hairball between spinlocks, atomic ops, and bit ops
that exist only because parisc has exactly one atomic instruction: LDCW
(load and clear word).
From: "Luck, Tony" <tony.luck@intel.com>
ia64 fix
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
Cc: Matthew Wilcox <willy@debian.org>
Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This converts the final 20 DEFINE_SPINLOCK holdouts. (another 580 places
are already using DEFINE_SPINLOCK). Build tested on x86.
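For reference, the conversion is mechanical (lock name invented):

/* before: static spinlock_t example_lock = SPIN_LOCK_UNLOCKED; */
static DEFINE_SPINLOCK(example_lock);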
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The pciconfig_iobase, pciconfig_read and pciconfig_write system calls
were only implemented for 32-bit processes; for 64-bit processes they
returned an ENOSYS error. This allows them to be used by 64-bit
processes as well. The X server uses pciconfig_iobase at least, and
this change is necessary to allow a 64-bit X server to work on my G5.
Signed-off-by: Paul Mackerras <paulus@samba.org>