I've been doing some performance testing on modern hardware, comparing the InnoDB plugin 1.0.4 with the stock built-in InnoDB. I'm running a sysbench transactions test (reads and writes) against a table with 200M rows (table size is around 46G, RAM is 16G, and the buffer pool is set to 12G).
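For the curious, the transactions test was sysbench's oltp mode. The invocation was roughly the following (a sketch; the connection options and table/database names are placeholders, not my exact command line):

# prepare the 200M-row test table, then run the mixed read/write workload
sysbench --test=oltp --oltp-table-size=200000000 --mysql-user=USER --mysql-db=DBNAME prepare
sysbench --test=oltp --oltp-table-size=200000000 --oltp-test-mode=complex --num-threads=16 --mysql-user=USER --mysql-db=DBNAME run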
I was puzzled to see the InnoDB plugin perform decently, but not as well as I expected: I was doing ~6100 read/write operations a second (individual statements within transactions). Then I compared it to the stock InnoDB and, shockingly, got ~7K ops. I thought about what I had tuned differently in the plugin and came up with innodb_io_capacity.
Somewhat arbitrarily, I did the following sysbench tests to try to determine a good value to put in innodb_io_capacity.
sysbench --test=fileio prepare
sysbench --test=fileio --num-threads=16 --file-extra-flags=direct --file-test-mode=rndrw run
This gave me a total in the 4K-5K IOPS range... this is a 12-drive (15K SAS) RAID 1+0 setup. Sounds good, so I stuck 4000 in innodb_io_capacity.
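For the record, the relevant pieces of my.cnf looked roughly like this (a sketch, not my exact config; the plugin-load line is the documented way to load the 1.0.4 plugin on 5.1, with the extra information_schema plugin entries omitted for brevity):

[mysqld]
# disable the built-in InnoDB and load the plugin instead
ignore-builtin-innodb
plugin-load=innodb=ha_innodb_plugin.so
innodb_buffer_pool_size=12G
innodb_io_capacity=4000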
What I noticed about my tests was that the InnoDB plugin was excessively vigorous about flushing the buffer pool: I rarely saw the buffer pool get beyond a few percent dirty.
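You can watch this yourself: the dirty percentage is just Innodb_buffer_pool_pages_dirty divided by Innodb_buffer_pool_pages_total, both of which show up with:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';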
When I realized that maybe InnoDB thinking it was OK to do all that I/O was my bottleneck, I dropped innodb_io_capacity from 4000 down to 1000 and reran my tests: I jumped from ~6K to ~12.2K ops.
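Conveniently, innodb_io_capacity is a dynamic variable in the plugin, so you can experiment with different values without a restart:

SET GLOBAL innodb_io_capacity = 1000;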
So what's the lesson? Well, I *think* some more thought needs to go into tuning innodb_io_capacity. Unless my sysbench test was inappropriate for measuring IOPS on my filesystem, it doesn't seem wise to give innodb_io_capacity *all* of your I/O, just a healthy portion.