
Using Asynchronous IO (libaio) on Linux and Its Performance | 系统技术非业余研究


Original article. When reposting, please credit: reposted from 系统技术非业余研究.

Permanent link to this article: Using Asynchronous IO (libaio) on Linux and Its Performance

Native asynchronous IO on Linux only exists in fairly recent kernels; for the benefits of async IO, see here.
However, the aio_* family of calls covered in that article is provided by glibc, which emulates them with threads plus blocking calls. Their performance is terrible; never, ever use them.

What we are going to talk about today is the real, native async IO interface. Since glibc provides no wrappers for these system calls, libaio comes to the rescue:

The libaio project: http://oss.oracle.com/projects/libaio-oracle/

This is a library for accessing the new AIO system calls (asynchronous i/o) for the Linux kernel. It is a thin, state-keeping wrapper that conforms to the Single Unix Specification for aio_read, aio_write, aio_error, aio_return and aio_suspend functions, and also implements lio_listio and aio_reap for batch processing.

This library requires a kernel with the new AIO code and a recent version of the libaio userspace library.

libaio exposes five functions: io_setup, io_submit, io_getevents, io_cancel, and io_destroy. For details, see man io_*.

雕梁 from Taobao wrote an article on how to make libaio work efficiently together with events: see here.

Using it is straightforward; on RHEL 5U4 you just need to install the development package:

$ sudo yum install libaio-devel
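
To give a feel for the API, here is a minimal sketch using those five calls (io_cancel aside) to read 4 KB asynchronously. It is only an illustration: the file name testfile is a placeholder and error handling is abbreviated. Build with gcc aio-demo.c -o aio-demo -laio:

$ cat aio-demo.c
/* Minimal libaio sketch: one async 4 KB read from "testfile" (placeholder). */
#define _GNU_SOURCE            /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <libaio.h>

int main(void)
{
    io_context_t ctx = 0;                  /* must be zeroed before io_setup */
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev[1];
    void *buf;
    int fd, rc;

    /* O_DIRECT requires sector-aligned buffer, length and file offset */
    if (posix_memalign(&buf, 512, 4096))
        return 1;
    fd = open("testfile", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    rc = io_setup(16, &ctx);               /* allow up to 16 in-flight IOs */
    if (rc < 0) { fprintf(stderr, "io_setup: %d\n", rc); return 1; }

    io_prep_pread(&cb, fd, buf, 4096, 0);  /* describe the read; nothing issued yet */
    rc = io_submit(ctx, 1, cbs);           /* issue it; returns iocbs accepted */
    if (rc != 1) { fprintf(stderr, "io_submit: %d\n", rc); return 1; }

    /* we could do other work here; now block for at least one completion */
    rc = io_getevents(ctx, 1, 1, ev, NULL);
    if (rc == 1)
        printf("read done, res=%ld\n", (long)ev[0].res);

    io_destroy(ctx);
    return 0;
}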

What people really care about, though, is performance. The fio benchmark tool supports both synchronous (pread/pwrite) and asynchronous (libaio) testing, so let's compare the two:

To reduce the influence of caches in the underlying hardware (e.g. the RAID controller), we run fully random reads and writes over roughly 2 GB of files, with a 128 KB block size and direct IO to bypass the OS page cache, for one minute in total.

The async IO run uses a single process keeping the IO queue depth at 16; the sync IO run starts 16 processes, keeping the same overall queue depth so the comparison is fair.

Test machine configuration:

$ summary
# Aspersa System Summary Report ##############################
     Release | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
      Kernel | 2.6.18-164.el5
  Processors | physical = 2, cores = 8, virtual = 16, hyperthreading = yes
      Speeds | 16x2261.053
      Models | 16xIntel(R) Xeon(R) CPU E5520 @ 2.27GHz
      Caches | 16x8192 KB
# Memory #####################################################
       Total | 23.53G
...

First, the async IO test:

$ cat aio-bench
[global]
ioengine=libaio
direct=1
rw=randrw
bs=128k
directory=.
ioscheduler=deadline
time_based
runtime=60
 
[libaio.dat]
size=2g
iodepth=16
 
$ sudo fio aio-bench 
libaio.dat: (g=0): rw=randrw, bs=128K-128K/128K-128K, ioengine=libaio, iodepth=16
fio 1.50.2
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [31462K/34595K /s] [240 /263  iops] [eta 00m:00s]
libaio.dat: (groupid=0, jobs=1): err= 0: pid=25892
  read : io=1795.4MB, bw=30575KB/s, iops=238 , runt= 60129msec
    slat (usec): min=14 , max=130031 , avg=25.13, stdev=1084.85
    clat (usec): min=178 , max=1959.4K, avg=66231.82, stdev=154044.70
     lat (msec): min=1 , max=1959 , avg=66.26, stdev=154.04
    bw (KB/s) : min=22721, max=95552, per=100.66%, avg=30775.78, stdev=7306.36
  write: io=1836.8MB, bw=31280KB/s, iops=244 , runt= 60129msec
    slat (usec): min=12 , max=55571 , avg=18.50, stdev=458.33
    clat (usec): min=252 , max=377818 , avg=658.98, stdev=7612.29
     lat (usec): min=266 , max=377832 , avg=677.73, stdev=7626.25
    bw (KB/s) : min=20223, max=94275, per=100.74%, avg=31511.06, stdev=7334.59
  cpu          : usr=0.13%, sys=0.72%, ctx=28484, majf=0, minf=24
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=14363/14694/0, short=0/0/0
     lat (usec): 250=0.01%, 500=46.44%, 750=1.91%, 1000=0.39%
     lat (msec): 2=1.55%, 4=0.39%, 10=3.07%, 20=5.66%, 50=20.97%
     lat (msec): 100=14.42%, 250=4.46%, 500=0.19%, 750=0.02%, 1000=0.01%
     lat (msec): 2000=0.51%
 
Run status group 0 (all jobs):
   READ: io=1795.4MB, aggrb=30575KB/s, minb=31309KB/s, maxb=31309KB/s, mint=60129msec, maxt=60129msec
  WRITE: io=1836.8MB, aggrb=31279KB/s, minb=32030KB/s, maxb=32030KB/s, mint=60129msec, maxt=60129msec
 
Disk stats (read/write):
  sda: ios=15108/20465, merge=4704/4206, ticks=969251/16763, in_queue=989615, util=99.84%
 
 
# As seen via strace:
...
26112 io_submit(47710156091392, 1, {{(nil), 0, 1, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 1, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 1, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 1, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 1, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 1, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 1, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_submit(47710156091392, 1, {{(nil), 0, 0, 0, 9}}) = 1
26112 io_getevents(47710156091392, 1, 1, {{(nil), 0x18ad9f00, 131072, 0}}, NULL) = 1
This proves that AIO is genuinely at work.
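
The strace output maps onto a simple pattern: keep topping the queue up to 16 with io_submit, then reap completions with io_getevents and refill. A rough sketch of that loop, where next_iocb() and test_done() are hypothetical stand-ins for fio's own random-offset generation and run-time accounting:

#include <libaio.h>

/* Hypothetical helpers standing in for fio internals. */
struct iocb *next_iocb(void);
int test_done(void);

void run_depth16(io_context_t ctx)
{
    struct io_event evs[16];
    int inflight = 0;

    while (!test_done()) {
        while (inflight < 16) {            /* top the queue up to depth 16 */
            struct iocb *cb = next_iocb();
            if (io_submit(ctx, 1, &cb) != 1)
                break;
            inflight++;
        }
        /* block for at least one completion, reap up to 16 at once */
        int n = io_getevents(ctx, 1, 16, evs, NULL);
        if (n > 0)
            inflight -= n;                 /* freed slots get refilled above */
    }
}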

Now the synchronous IO test:

$ cat psync-bench
[global]
ioengine=psync
direct=1
rw=randrw
bs=128k
directory=.
ioscheduler=deadline
time_based
runtime=60
 
[file1]
numjobs=16
 
$ sudo fio psync-bench
file1: (g=0): rw=randrw, bs=128K-128K/128K-128K, ioengine=psync, iodepth=1
...
file1: (g=0): rw=randrw, bs=128K-128K/128K-128K, ioengine=psync, iodepth=1
fio 1.50.2
Starting 16 processes
Jobs: 16 (f=16): [mmmmmmmmmmmmmmmm] [100.0% done] [31854K/31723K /s] [243 /242  iops] [eta 00m:00s]
file1: (groupid=0, jobs=1): err= 0: pid=26145
  read : io=118144KB, bw=1967.1KB/s, iops=15 , runt= 60036msec
    clat (msec): min=3 , max=1531 , avg=64.36, stdev=133.07
     lat (msec): min=3 , max=1531 , avg=64.36, stdev=133.07
    bw (KB/s) : min=   84, max= 6083, per=7.02%, avg=2205.14, stdev=807.87
  write: io=121984KB, bw=2031.9KB/s, iops=15 , runt= 60036msec
    clat (usec): min=264 , max=153817 , avg=660.30, stdev=6100.86
     lat (usec): min=265 , max=153818 , avg=660.53, stdev=6100.88
    bw (KB/s) : min=   84, max= 5765, per=7.32%, avg=2311.53, stdev=1256.97
  cpu          : usr=0.00%, sys=0.05%, ctx=1877, majf=0, minf=32
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=923/953/0, short=0/0/0
     lat (usec): 500=46.75%, 750=2.40%, 1000=0.43%
     lat (msec): 2=1.01%, 4=0.16%, 10=2.99%, 20=4.69%, 50=20.31%
     lat (msec): 100=15.41%, 250=5.28%, 500=0.21%, 2000=0.37%
...
(output for the remaining 15 jobs omitted)
...
Run status group 0 (all jobs):
   READ: io=1844.0MB, aggrb=31411KB/s, minb=1699KB/s, maxb=2186KB/s, mint=60001msec, maxt=60114msec
  WRITE: io=1852.9MB, aggrb=31562KB/s, minb=1620KB/s, maxb=2325KB/s, mint=60001msec, maxt=60114msec
 
Disk stats (read/write):
  sda: ios=15756/20708, merge=4553/4159, ticks=979074/16483, in_queue=999501, util=99.83%
 
# As seen via strace:
26195 pwrite(20, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072, 5242880) = 131072
26195 pwrite(20, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072, 23855104) = 131072
26195 pwrite(20, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072, 7864320) = 131072
26195 pread(20,  <unfinished ...>
This proves the operations really are pread/pwrite.
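
By contrast, each synchronous worker has exactly one IO in flight: the process sits blocked inside pread/pwrite until the disk answers, which is why 16 processes are needed to sustain a queue depth of 16. Schematically, with random_offset() again a hypothetical helper and fd opened with O_DIRECT:

/* One psync worker: a single blocking 128 KB IO at a time. */
for (;;) {
    off_t off = random_offset();                 /* hypothetical helper */
    if (pread(fd, buf, 131072, off) != 131072)   /* blocks until complete */
        break;
}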

From the numbers above we can see that in both tests the device's IO capacity ends up nearly 100% utilized, and the two throughputs are about the same. But measured by throughput submitted per process, one AIO process matches the capacity of 16 synchronous processes. In other words, if only a single process is doing the work, AIO is vastly more capable than synchronous IO.
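
To make that concrete with the figures above: the single libaio process sustains 30575 + 31280 ≈ 61855 KB/s of combined read/write throughput, while the one psync job shown manages 1967 + 2032 ≈ 3999 KB/s; it takes all 16 psync processes (≈ 63984 KB/s aggregate) to reach the same total, so one AIO process is worth roughly 15-16 synchronous ones on this device.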

Have fun, everyone!

