The anticipatory (as) IO Scheduler/elevator
The algorithms used to schedule disk IO requests are referred to as elevator algorithms. Since kernel 2.6 you have had the choice of the Deadline elevator, the Anticipatory elevator (as), the No-op (noop) elevator and the Completely Fair Queuing (CFQ) elevator.

The anticipatory elevator is designed for workloads with lots of dependent reads. This is a situation where you read a small chunk of data, process it, read the next chunk, process it, and so on, as opposed to a random read/write IO workload. If you know you will have many dependent reads then it makes sense for the elevator to wait briefly before moving the heads to another location on the disk, minimising head movement. The elevator processes batches of read and write operations; typically the read batches are given longer to run than the write batches. This batching behaviour can offer good performance increases in the right environments, but it can also cause undesirable delays to IO requests in others. For most desktops the CFQ elevator is a better choice than the anticipatory elevator; however, servers carrying out mostly read operations on larger files could benefit from the anticipatory elevator.
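The active elevator for a device can be inspected and switched at runtime through sysfs. A minimal sketch, assuming a device named sda (substitute your own); the `parse_active` helper is a name of our own invention for illustration, not part of the kernel interface:

```shell
# The scheduler file lists the available elevators, with the active
# one in square brackets, e.g.:
#   noop [anticipatory] deadline cfq

# parse_active: extract the bracketed (active) elevator name from the
# contents of a scheduler file. Hypothetical helper, for illustration.
parse_active() {
  printf '%s\n' "$1" | sed 's/.*\[\([^]]*\)\].*/\1/'
}

# On a real system ("sda" is an example device name):
#   parse_active "$(cat /sys/block/sda/queue/scheduler)"
# Switching elevators at runtime (as root):
#   echo anticipatory > /sys/block/sda/queue/scheduler
```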

There are a few things you can tune; they can be found under /sys/block/&lt;device&gt;/queue/iosched/
 * read_expire - the number of milliseconds before each read IO request expires.
 * write_expire - the number of milliseconds before each write IO request expires.
 * read_batch_expire - controls the amount of time given to read requests before checking whether any write operations are pending; higher values favour read throughput at the cost of write latency.
 * write_batch_expire - controls the amount of time given to write requests before checking whether any read operations are pending.
 * antic_expire - controls the amount of time the scheduler will wait for a process to issue another IO operation before moving on to another process's requests. Typically a few milliseconds is fine.
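Each tunable is a plain file holding a millisecond value, so it can be read with cat and set with echo as root. A minimal sketch; the `show_iosched` helper and its directory parameter are conveniences of our own (on a real system the directory would be /sys/block/&lt;device&gt;/queue/iosched):

```shell
# show_iosched: print each anticipatory tunable and its current value
# from the given iosched directory, skipping any file that is absent.
# The directory is a parameter so the sketch can be tried anywhere;
# hypothetical helper name, not part of the kernel interface.
show_iosched() {
  for f in read_expire write_expire read_batch_expire \
           write_batch_expire antic_expire; do
    [ -r "$1/$f" ] && printf '%s=%s\n' "$f" "$(cat "$1/$f")"
  done
}

# Example on a real system ("sda" is an example device name):
#   show_iosched /sys/block/sda/queue/iosched
# Lowering antic_expire to 4 ms (as root):
#   echo 4 > /sys/block/sda/queue/iosched/antic_expire
```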