This updates the CFQ io scheduler to the new time sliced design (cfq
v3). It provides full process fairness, while giving excellent
aggregate system throughput even for many competing processes. It
supports io priorities, either inherited from the cpu nice value or set
directly with the ioprio_get/set syscalls. The latter closely mimic
set/getpriority.
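For illustration, a minimal userspace sketch of setting an io priority via the raw syscall (glibc provides no wrapper; the constants below mirror include/linux/ioprio.h and SYS_ioprio_set is assumed to be exposed by <sys/syscall.h>):

  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #define IOPRIO_CLASS_BE                2     /* best-effort class */
  #define IOPRIO_CLASS_SHIFT             13
  #define IOPRIO_WHO_PROCESS             1
  #define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

  int main(void)
  {
      /* set the calling process to best-effort, priority level 4 */
      if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                  IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4)) < 0) {
          perror("ioprio_set");
          return 1;
      }
      return 0;
  }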
This import is based on my latest from -mm.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Use the DMA_{64,32}BIT_MASK constants from dma-mapping.h when calling
pci_set_dma_mask() or pci_set_consistent_dma_mask().
These patches include dma-mapping.h explicitly because it caused errors
on some architectures otherwise.
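A minimal sketch of the pattern these patches apply (the helper name is hypothetical; the mask constants and PCI calls are the ones being switched to):

  #include <linux/dma-mapping.h>   /* included explicitly, as noted above */
  #include <linux/pci.h>

  static int example_set_masks(struct pci_dev *pdev)
  {
      if (pci_set_dma_mask(pdev, DMA_32BIT_MASK) ||
          pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK))
          return -EIO;
      return 0;
  }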
See http://marc.theaimsgroup.com/?t=108001993000001&r=1&w=2 for details
Signed-off-by: Tobias Klauser <tklauser@nuerscht.ch>
Signed-off-by: Domen Puncer <domen@coderock.org>
1. Establish a simple API for process freezing defined in linux/include/sched.h:
frozen(process)          Check for a frozen process
freezing(process)        Check if a process is being frozen
freeze(process)          Tell a process to freeze (go to the refrigerator)
thaw_process(process)    Restart a process
frozen_process(process)  Mark the process as frozen now
2. Remove all references to PF_FREEZE and PF_FROZEN from all
kernel sources except sched.h
3. Fix numerous locations where drivers open-coded the try_to_freeze handling themselves (see the sketch after this list)
4. Remove the argument that is no longer necessary from two function calls.
5. Some whitespace cleanup
6. Close a potential race in the refrigerator (there was an open window in which
PF_FREEZE was cleared before PF_FROZEN was set, and recalc_sigpending does not
check PF_FROZEN).
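As a sketch of the driver-side pattern referred to in item 3 (the thread itself is hypothetical; try_to_freeze() is the argument-less form drivers are switched to):

  #include <linux/kthread.h>
  #include <linux/sched.h>

  static int example_thread(void *unused)
  {
      while (!kthread_should_stop()) {
          try_to_freeze();        /* park in the refrigerator when asked */
          /* ... do the thread's real work ... */
          schedule_timeout_interruptible(HZ);
      }
      return 0;
  }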
This patch does not address the problem of freeze_processes() violating the rule
that a task may only modify its own flags: it sets PF_FREEZE on other tasks. This
is not clean in an SMP environment. freeze(process) is therefore not SMP safe!
Signed-off-by: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch contains the following cleanups:
- make needlessly global code static
- remove the following unused global functions:
- blkdev_scsi_issue_flush_fn
- __blk_attempt_remerge
- remove the following unused EXPORT_SYMBOL's:
- blk_phys_contig_segment
- blk_hw_contig_segment
- blkdev_scsi_issue_flush_fn
- __blk_attempt_remerge
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch allows block device drivers to convert their ioctl functions to
unlocked_ioctl() like character devices and other subsystems. All
functions that were called with the BKL held before are still used that
way, but I would not be surprised if it could be removed from the ioctl
functions in drivers/block/ioctl.c themselves.
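For reference, a sketch of the character-device shape this is modelled on (the handler and fops are hypothetical); the point is that an unlocked_ioctl handler runs without the BKL and does its own locking:

  #include <linux/fs.h>
  #include <linux/module.h>

  static long example_unlocked_ioctl(struct file *file, unsigned int cmd,
                                     unsigned long arg)
  {
      switch (cmd) {
      default:
          return -ENOTTY;         /* no commands implemented in this sketch */
      }
  }

  static const struct file_operations example_fops = {
      .owner          = THIS_MODULE,
      .unlocked_ioctl = example_unlocked_ioctl,
  };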
As a side note, I found that compat_blkdev_ioctl() acquires the BKL as
well, which looks like a bug. I have checked that every user of
disk->fops->compat_ioctl() in the current git tree gets the BKL itself, so
it could easily be removed from compat_blkdev_ioctl().
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch improves write performance for the CD/DVD packet writing driver.
The logic for switching between reading and writing has been changed so
that streaming writes are no longer interrupted by read requests.
Signed-off-by: Peter Osterlund <petero2@telia.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch adds a check to get_chrdev_list and get_blkdev_list to prevent reads
of /proc/devices from spilling over the provided page if more than 4096
bytes of string data are generated from all the registered character and
block devices in the system.
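Roughly, the kind of guard being added (names and the per-line bound are illustrative, not the exact kernel code):

  #include <linux/kernel.h>
  #include <linux/list.h>
  #include <linux/mm.h>

  struct example_dev {
      struct list_head list;
      int major;
      char name[64];
  };

  static LIST_HEAD(example_devs);

  /* Emit "major name" lines into a single page, refusing to run past its end. */
  static int example_get_dev_list(char *page)
  {
      struct example_dev *d;
      int len = 0;

      list_for_each_entry(d, &example_devs, list) {
          if (len >= PAGE_SIZE - 80)      /* 80: generous per-line bound */
              break;
          len += sprintf(page + len, "%3d %s\n", d->major, d->name);
      }
      return len;
  }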
Signed-off-by: Neil Horman <nhorman@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Looks like locking can be optimised quite a lot. Increase lock widths
slightly so lo_lock is taken fewer times per request. Also it was quite
trivial to cover lo_pending with that lock, and remove the atomic
requirement. This also makes memory ordering explicitly correct, which is
nice (not that I particularly saw any mem ordering bugs).
The test read four 250MB files in parallel from an ext2-on-tmpfs filesystem (1K
block size, 4K page size) on a 2-socket Xeon system with HT (4 threads).
intel:/home/npiggin# umount /dev/loop0 ; mount /dev/loop0 /mnt/loop ; /usr/bin/time ./mtloop.sh
Before:
0.24user 5.51system 0:02.84elapsed 202%CPU (0avgtext+0avgdata 0maxresident)k
0.19user 5.52system 0:02.88elapsed 198%CPU (0avgtext+0avgdata 0maxresident)k
0.19user 5.57system 0:02.89elapsed 198%CPU (0avgtext+0avgdata 0maxresident)k
0.22user 5.51system 0:02.90elapsed 197%CPU (0avgtext+0avgdata 0maxresident)k
0.19user 5.44system 0:02.91elapsed 193%CPU (0avgtext+0avgdata 0maxresident)k
After:
0.07user 2.34system 0:01.68elapsed 143%CPU (0avgtext+0avgdata 0maxresident)k
0.06user 2.37system 0:01.68elapsed 144%CPU (0avgtext+0avgdata 0maxresident)k
0.06user 2.39system 0:01.68elapsed 145%CPU (0avgtext+0avgdata 0maxresident)k
0.06user 2.36system 0:01.68elapsed 144%CPU (0avgtext+0avgdata 0maxresident)k
0.06user 2.42system 0:01.68elapsed 147%CPU (0avgtext+0avgdata 0maxresident)k
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Sprinkle around a few branch hints in the block layer.
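For readers unfamiliar with the term, the hints in question are the kernel's likely()/unlikely() annotations, which expand to __builtin_expect(); a hypothetical use:

  #include <linux/blkdev.h>
  #include <linux/compiler.h>

  static void example_account(struct request *rq)
  {
      if (unlikely(!rq))          /* error path, expected to be rare */
          return;
      /* hot path continues without a taken branch in the common case */
  }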
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This memory barrier is not needed because the waitqueue will only get waiters
on it in the following situations:
- rq->count has exceeded the threshold - however all manipulations of ->count
are performed under the runqueue lock, and so we will correctly pick up any
waiter.
- Memory allocation for the request fails. In this case, there is no additional
help provided by the memory barrier. We are guaranteed to eventually wake up
waiters because the request allocation mempool guarantees that if the memory
allocation for a request fails, there must be some requests in flight. They
will wake up waiters when they are retired.
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add KERN_ERR and __FUNCTION__ to generic tag error messages, and add a comment
in blk_queue_end_tag() which explains the silent failure path.
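The message style in question, as a sketch (the function and condition are illustrative, not the exact block-layer code):

  #include <linux/kernel.h>

  static void example_end_tag(int tag, int in_use)
  {
      if (!in_use) {
          /* the silent failure path now gets a proper, attributable message */
          printk(KERN_ERR "%s: attempt to clear unused tag %d\n",
                 __FUNCTION__, tag);
          return;
      }
      /* ... actually release the tag ... */
  }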
Signed-off-by: Tejun Heo <htejun@gmail.com>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
blk_queue_tag->real_max_depth was used to optimize out unnecessary
allocations/frees on tag resize. However, the whole thing was very broken:
tag_map was never allocated to real_max_depth, resulting in accesses beyond the
end of the map, and bits in [max_depth..real_max_depth] were set when
initializing a map and copied when resizing, resulting in pre-occupied tags.
As the gain of the optimization is very small, well, almost nil, remove the
whole thing.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
blk_queue_start_tag() hand-coded searching for the first zero bit in the tag
map. Replace it with find_first_zero_bit(). With this patch,
blk_queue_start_tag() doesn't need to fill the remainder of the tag map with
ones, thus allowing it to work properly with the next remove_real_max_depth patch.
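A sketch of the replacement pattern (the wrapper is illustrative; only find_first_zero_bit() and the bitmap semantics matter here):

  #include <linux/bitops.h>

  static int example_find_free_tag(unsigned long *tag_map, int depth)
  {
      int tag = find_first_zero_bit(tag_map, depth);

      if (tag >= depth)           /* map is full */
          return -1;
      __set_bit(tag, tag_map);    /* claim it */
      return tag;
  }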
Signed-off-by: Tejun Heo <htejun@gmail.com>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch to allocate the control structures for ide devices on the node of
the device itself (for NUMA systems). The patch depends on the Slab API
change patch by Manfred and me (in -mm) and the pcidev_to_node patch that I
posted today.
Does some realignment too.
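A sketch of the allocation pattern, assuming the pcidev_to_node() helper from the patch mentioned above (the structure and wrapper are illustrative):

  #include <linux/pci.h>
  #include <linux/slab.h>

  struct example_hwif {
      /* ... per-interface control data ... */
      int dummy;
  };

  static struct example_hwif *example_alloc_on_device_node(struct pci_dev *pdev)
  {
      /* place the control structure on the device's own NUMA node */
      return kmalloc_node(sizeof(struct example_hwif), GFP_KERNEL,
                          pcidev_to_node(pdev));
  }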
Signed-off-by: Justin M. Forbes <jmforbes@linuxtx.org>
Signed-off-by: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Pravin Shelar <pravin@calsoftinc.com>
Signed-off-by: Shobhit Dayal <shobhit@calsoftinc.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
sysfs: fix drivers/block so that if an attribute doesn't implement a
show or store method, read/write will return -EIO
instead of 0 or -EINVAL.
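The check amounts to something like this sketch (the attribute structure is illustrative of the pattern, not the exact sysfs types):

  #include <linux/errno.h>
  #include <linux/types.h>

  struct example_attr {
      ssize_t (*show)(struct example_attr *attr, char *page);
  };

  static ssize_t example_attr_read(struct example_attr *attr, char *page)
  {
      if (!attr->show)
          return -EIO;            /* previously returned 0 or -EINVAL */
      return attr->show(attr, page);
  }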
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The recent mapping changes didn't update the kerneldoc appropriately.
Original from Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@suse.de>
Original From: Mike Christie <michaelc@cs.wisc.edu>
Modified to split out block changes (this patch) and SCSI pieces.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Change the blk_rq_map_user() and blk_rq_map_kern() interface to require
a previously allocated request to be passed in. This is both more efficient
for multiple iterations of mapping data to the same request, and it is also
a much nicer API.
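A sketch of the new calling convention as I read it (the wrapper is hypothetical and request submission is elided):

  #include <linux/blkdev.h>

  static int example_send_buf(struct request_queue *q, void *buf, unsigned int len)
  {
      struct request *rq;
      int err;

      rq = blk_get_request(q, READ, GFP_KERNEL);          /* allocate first ... */
      if (!rq)
          return -ENOMEM;

      err = blk_rq_map_kern(q, rq, buf, len, GFP_KERNEL); /* ... then map into it */
      if (err) {
          blk_put_request(rq);
          return err;
      }

      /* ... submit rq and wait for completion ... */

      blk_put_request(rq);
      return 0;
  }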
Signed-off-by: Jens Axboe <axboe@suse.de>
Add blk_rq_map_kern, which takes a kernel buffer and maps it into
a request and bio. This can be used by the dm hw_handlers, the old
sg_scsi_ioctl, and one day scsi special requests, so that all requests
coming into scsi will have bios. With all requests having bios,
scsi should be able to use scatterlists for all IO and use
block layer functions.
Signed-off-by: Jens Axboe <axboe@suse.de>
__cfq_get_queue() finds the existing queue (struct cfq_queue) of the current
process for the device and returns it. If none is found, __cfq_get_queue()
creates and returns a new one when called with the __GFP_WAIT flag, or returns
NULL (which means that get_request() fails) when called without it.
On the other hand, in __make_request(), get_request() is called without the
__GFP_WAIT flag the first time. Thus, get_request() fails when there is no
existing queue, typically when it is called for the first I/O request of the
process to the device.
Though in the general case it is followed by get_request_wait(),
__make_request() will just end the I/O with an error (EWOULDBLOCK) when the
request was for read-ahead.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
In __elv_add_request(), rq.count[READ] + rq.count[WRITE] can increase by
more than one if another thread has allocated a request after the
current request was allocated, or in_flight could have changed, resulting
in a larger-than-one change of nrq, thus breaking the threshold
mechanism.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Tejun Heo <htejun@gmail.com>
Patch removes our homegrown DMA masks and uses the ones defined in the kernel.
This patch replaces the broken one I sent in earlier. It has been tested and works. Please discard the first submission.
Signed-off-by: Mike Miller <mike.miller@hp.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
This smooths over two imperfections:
- Increase number of LUNs per device from 4 to 9. The best solution
would be to remove this limit altogether, but that has to wait until
the time when more than 26 hosts are allowed.
- Replace mdelay with msleep in a probing routine.
Signed-off-by: Pete Zaitcev <zaitcev@yahoo.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If you tried to open a packet device first in read-only mode and then a
second time in read-write mode, the second open succeeded even though the
device was not correctly set up for writing. If you then tried to write
data to the device, the writes would fail with I/O errors.
This patch prevents that problem by making the second open fail with
-EBUSY.
Signed-off-by: Peter Osterlund <petero2@telia.com>
Cc: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
blk_insert_request() has an unobvious feature of requeueing a
request, setting REQ_SPECIAL|REQ_SOFTBARRIER. The SCSI midlayer
was the only user, and as previous patches removed that usage,
remove the feature from blk_insert_request(). Only special
requests should be queued with blk_insert_request(); all
requeueing should go through blk_requeue_request().
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
This is the reworked version of the patch. It sets REQ_SOFTBARRIER
in two places - in elv_next_request() on BLKPREP_DEFER and in
blk_requeue_request().
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
I found a bug in the packet writing driver that could cause data
corruption. The problem arose if the driver got a write request for a
sector in a "zone" it was already working on. In that case it was supposed
to queue the write request until it was done processing earlier requests
for the same zone, and instead work on some other zone in the meantime.
However, if there was no other zone to work on, the driver would create
two packet_data objects for the same zone, causing unpredictable things to
happen.
Signed-off-by: Peter Osterlund <petero2@telia.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
ioctl_by_bdev may only be used INSIDE the kernel. If the "arg" argument
refers to memory that is accessed by put_user/get_user in the ioctl
function, the memory needs to be in the kernel address space (that is what
the set_fs(KERNEL_DS) in ioctl_by_bdev is for). This works on i386 because
even with set_fs(KERNEL_DS) the user-space memory is still accessible with
put_user/get_user. That is not true for s390. In short, the ioctl
implementation of the pktcdvd device driver is horribly broken.
Signed-off-by: Peter Osterlund <petero2@telia.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
[Patch] Fix raw device ioctl pass-through
Raw character devices are supposed to pass ioctls through to the block
devices they are bound to. Unfortunately, they are using the wrong
function for this: ioctl_by_bdev(), instead of blkdev_ioctl().
ioctl_by_bdev() performs a set_fs(KERNEL_DS) before calling the ioctl,
redirecting the user-space buffer access to the kernel address space.
This is, needless to say, a bad thing.
This was noticed first on s390, where raw IO was non-functioning. The
s390 driver config does not actually allow raw IO to be enabled, which
was the first part of the problem. Secondly, the s390 kernel address
space is distinct from user, causing legal raw ioctls to fail. I've
reproduced this on a kernel built with 4G:4G split on x86, which fails
in the same way (-EFAULT if the address does not exist kernel-side;
returns success without actually populating the user buffer if it does.)
The patch below fixes both the config and address-space problems. It's
based closely on a patch by Jan Glauber <jang@de.ibm.com>, which has
been tested on s390 at IBM. I've tested it on x86 4G:4G (split address
space) and x86_64 (common address space).
Kernel-address-space access has been assigned CAN-2005-1264.
Signed-off-by: Stephen Tweedie <sct@redhat.com>
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I somehow missed that there is external usage of rd_size on some
architectures.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch makes some needlessly global identifiers static.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Arjan van de Ven <arjanv@infradead.org>
Acked-by: Trond Myklebust <trond.myklebust@fys.uio.no>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The only caller that ever sets it can call fsync_bdev itself easily. Also
update some comments.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch adds support for a new class of DAC960 controllers. It's based
on the GPLed idac320 driver from IBM for Linux 2.4.18. That driver is a
fork of the 2.4.18 version of DAC960 that adds support for this new type of
controller (internally called "GEM Series"), which differs from other DAC960
V2 firmware controllers only in the register offsets, and removes support
for all the others.
This patch instead integrates support for these controllers into the DAC960
driver.
Thanks to Anders Norrbring for pointing me to the idac320 driver and
testing this patch.
No Signed-off-by: line because all code is either copied from IBM's
idac320 driver or from the support for other controllers in the 2.6 DAC960 driver.
Note: the really odd formatting matches the rest of the DAC960 driver.
Cc: Dave Olien <dmo@osdl.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Drivers that expect ISA DMA API are marked as such in Kconfig.
Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Allow multiple aoe devices to have the same MAC address.
Signed-off-by: Ed L. Cashin <ecashin@coraid.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch adds the "nbds_max" parameter to the nbd kernel module, which
limits the number of nbds allocated. Previously, all 128 entries
were allocated unconditionally, which wasted resources and
needlessly flooded the hotplug system with events. (The default is now 16.)
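Declaring such a parameter typically looks like this sketch (the default of 16 matches the description above; the rest is illustrative):

  #include <linux/module.h>
  #include <linux/moduleparam.h>

  static int nbds_max = 16;
  module_param(nbds_max, int, 0444);
  MODULE_PARM_DESC(nbds_max, "number of network block devices to initialize (default: 16)");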
Signed-off-by: Lars Marowsky-Bree <lmb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Profiling hit rates on merging shows that the last merge hint works
extremely well for most workloads. So let's kill the linear merge scan in
noop-iosched, so it provides O(1) run time for any operation.
Testing credits go to Ken Chen from Intel.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
(!ARCH_S390 && !M68K && !IA64 && !UML) is obviously always true on ARM.
The intended behaviour for ARM is "absent unless we are on RiscPC or
EBSA285". So what we want is an added && !ARM in the first term; without
it, the last part (|| ARCH_RPC || ARCH_EBSA285, that is) doesn't do
anything.
Signed-off-by: Al Viro <viro@parcelfarce.linux.theplanet.co.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Send outgoing packets in order. I can't use list.h, since sk_buff doesn't
have a list_head but instead has two struct sk_buff pointers, and I want
to avoid any extra memory allocation.
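The resulting queueing amounts to a hand-rolled FIFO over the existing pointers, roughly as in this sketch (the structure and helper are illustrative):

  #include <linux/skbuff.h>

  struct example_skb_fifo {
      struct sk_buff *head;
      struct sk_buff *tail;
  };

  static void example_fifo_push(struct example_skb_fifo *f, struct sk_buff *skb)
  {
      skb->next = NULL;
      if (f->tail)
          f->tail->next = skb;    /* append, preserving submission order */
      else
          f->head = skb;
      f->tail = skb;
  }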
Signed-off-by: Ed L. Cashin <ecashin@coraid.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The current problem seen is that the queue lock is actually in the
SCSI device structure, so when that structure is freed on device
release, we go boom if the queue tries to access the lock again.
The fix here is to move the lock from the scsi_device to the queue.
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Both the RiscPC and (optionally) EBSA285 have floppy disk support. Allow this
option to be selected on these ARM platforms again.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
In the function __generic_unplug_device(), the kernel can use the cheaper
elv_queue_empty() instead of the more expensive elv_next_request() to find
out whether the queue is empty or not. blk_run_queue() can also be made
conditional on the queue's emptiness before calling request_fn().
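Roughly, the cheaper form (illustrative; not the exact __generic_unplug_device() body, and elv_queue_empty()/request_fn reflect the block layer of that era):

  #include <linux/blkdev.h>
  #include <linux/elevator.h>

  static void example_unplug(struct request_queue *q)
  {
      /* queue lock assumed held, as in the real caller */
      if (!elv_queue_empty(q))    /* cheap emptiness test ... */
          q->request_fn(q);       /* ... before the expensive dispatch */
  }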
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
There is a possibility that a bio will be accessed after it has been freed
on SCSI. It happens if you submit a bio marked BIO_SYNC and the
auto-unplugging kicks the request_fn; SCSI re-enables interrupts in between,
so if the request completes between the add_request() in __make_request()
and the bio_sync() call, we could be looking at a dead bio. It's a slim
race, but it has been triggered in the Real World.
So assign the result of bio_sync() to a local variable instead.
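The fix pattern, as a sketch (function and surrounding calls are illustrative; bio_sync() is the helper of that era, and the point is that the flag is sampled before the bio can be completed and freed):

  #include <linux/bio.h>
  #include <linux/blkdev.h>

  static void example_make_request(struct request_queue *q, struct bio *bio)
  {
      int sync = bio_sync(bio);   /* sample now; *bio may be gone after add_request() */

      /* ... build the request and add_request(q, rq) ... */

      if (sync)
          __generic_unplug_device(q);
  }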
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!