What is the status of POSIX asynchronous I/O (AIO)?
Doing socket I/O efficiently has been solved with kqueue, epoll, I/O completion ports and the like. Doing asynchronous file I/O is something of a latecomer (apart from Windows' overlapped I/O and Solaris's early support for POSIX AIO).
If you're looking to do socket I/O, you're probably better off using one of the mechanisms above.
The main purpose of AIO is hence to solve the problem of asynchronous disk I/O. This is most likely why Mac OS X only supports AIO for regular files, and not sockets (since kqueue does that so much better anyway).
Write operations are typically cached by the kernel and flushed out at a later time, for instance when the read head of the drive happens to pass by the location where the block is to be written.
For read operations, however, if you want the kernel to prioritize and order your reads, AIO is really the only option. Here's why the kernel can (theoretically) do that better than any user-level application:
- The kernel sees all disk I/O, not just your application's disk jobs, and can order them at a global level
- The kernel (may) know where the disk read head is, and can pick the read jobs you pass on to it in optimal order, to move the head the shortest distance
- The kernel can take advantage of native command queuing to optimize your read operations further
- You may be able to issue more read operations per system call using lio_listio() than with readv(), especially if your reads are not (logically) contiguous, saving a tiny bit of system call overhead.
- Your program might be slightly simpler with AIO since you don't need an extra thread to block in a read or write call.
That said, POSIX AIO has a rather awkward interface. For instance:
- The only efficient and well-supported means of event callbacks is via signals, which makes AIO hard to use in a library, since it means using signal numbers from the process-global signal namespace. If your OS doesn't support realtime signals, it also means you have to loop through all your outstanding requests to figure out which one actually finished (this is the case for Mac OS X, for instance, but not Linux). Catching signals in a multi-threaded environment also makes for some tricky restrictions: you typically cannot react to the event inside the signal handler, but instead have to raise a signal, write to a pipe or use signalfd() (on Linux).
- aio_suspend() has the same issue as select(): it doesn't scale well with the number of jobs.
- lio_listio(), as implemented, has a fairly limited number of jobs you can pass in, and it's not trivial to find this limit portably. You have to call sysconf(_SC_AIO_LISTIO_MAX), which may fail; in that case you can use the AIO_LISTIO_MAX define, which is not necessarily defined, but then you can fall back to 2, which POSIX guarantees to be supported.
As for real-world applications using POSIX AIO, you could take a look at lighttpd (lighty), which also posted a performance measurement when introducing support.
Most POSIX platforms support POSIX AIO by now (Linux, BSD, Solaris, AIX, Tru64). Windows supports it via its overlapped file I/O. My understanding is that only Solaris, Windows and Linux truly support async file I/O all the way down to the driver, whereas the other OSes emulate async I/O with kernel threads. Linux is something of an exception: its POSIX AIO implementation in glibc emulates async operations with user-level threads, whereas its native async I/O interface (io_submit() etc.) is truly asynchronous all the way down to the driver, assuming the driver supports it.
I believe it's fairly common among OSes not to support POSIX AIO for any fd, but to restrict it to regular files.
Network I/O is not a priority for AIO because everyone writing POSIX network servers uses an event-based, non-blocking approach. The old-style Java "billions of blocking threads" approach sucks horribly.
Disk write I/O is already buffered, and disk read I/O can be prefetched into the buffer cache using functions like posix_fadvise(). That leaves direct, unbuffered disk I/O as the only useful purpose for AIO.
Direct, unbuffered I/O is only really useful for transactional databases, and those tend to write their own threads or processes to manage their disk I/O.
So, in the end, that leaves POSIX AIO in the position of not serving any useful purpose. Don't use it.
A libtorrent developer provides a report on this: http://blog.libtorrent.org/2012/10/asynchronous-disk-io/
There is aio_write, implemented in glibc: the first call to the aio_read or aio_write function spawns a number of user-mode threads; aio_write or aio_read posts requests to those threads, a thread does pread()/pwrite(), and when it is finished the answer is posted back to the blocked calling thread.
There is also "real" AIO, supported at the kernel level (you need libaio for that; see the io_submit call, http://linux.die.net/man/2/io_submit). You also need O_DIRECT for that (which may not be supported by all file systems, but the major ones do support it).
see here:
http://lse.sourceforge.net/io/aio.html