
fio with multiple Uring workers #1825

Open
HazyMrf opened this issue Oct 9, 2024 · 5 comments
HazyMrf commented Oct 9, 2024

Is it possible to somehow set up the io_uring engine to have multiple "write" workers when using fio? There must be a way, but there is nothing about it in the docs:

$ fio --enghelp=io_uring
hipri                   : Use polled IO completions
cmdprio_percentage      : Send high priority I/O this percentage of the time
cmdprio_class           : Set asynchronous IO priority class
cmdprio                 : Set asynchronous IO priority level
cmdprio_bssplit         : Set priority percentages for different block sizes
fixedbufs               : Pre map IO buffers
registerfiles           : Pre-open/register files
sqthread_poll           : Offload submission/completion to kernel thread
sqthread_poll_cpu       : What CPU to run SQ thread polling on
nonvectored             : Use non-vectored read/write commands
uncached                : Use RWF_UNCACHED for buffered read/writes
nowait                  : Use RWF_NOWAIT for reads/writes
force_async             : Set IOSQE_ASYNC every N requests


$ fio --version
fio-3.28

$ uname -a
Linux 6.8.0-1014-aws

I use this setup: taskset -c 20-23 fio --name test --ioengine=io_uring --rw=randwrite --iodepth=64 --bs=256k --size=900G --filename=/mnt/test/fio.data --runtime=600 --time_based and get only one uring worker :(
[screenshot: process listing showing a single io_uring worker]

@HazyMrf HazyMrf changed the title fio with Uring fio with multiple Uring workers Oct 9, 2024
axboe (Owner) commented Oct 9, 2024

io_uring in the kernel manages those workers and creates more as needed, generally only when one blocks; if nothing blocks, it will not create more. Don't mistake these workers for a way to create parallelism: creating more workers than needed just creates contention.

For your case of buffered writes, generally most IOs will just copy into the page cache and dirty the page(s). Outside of that, some writes will end up having to wait for balancing of dirty pages.

On top of that, since it's a single file, io_uring will also serialize writes to it as buffered writes to the same file are exclusive. This is the primary reason you only see one writer above. O_DIRECT (--direct=1) will not run into that issue.
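To restate that advice as a job file: the parameters below simply mirror the command line from this thread, with direct=1 added to use O_DIRECT and sidestep the buffered-write serialization. This is a sketch, not a configuration verified on the reporter's system:

```ini
; sketch of the job from this thread, switched to O_DIRECT
[global]
ioengine=io_uring
direct=1          ; O_DIRECT: avoids exclusive buffered writes to one file
rw=randwrite
iodepth=64
bs=256k
size=900G
runtime=600
time_based

[test]
filename=/mnt/test/fio.data
```

Run it with `taskset -c 20-23 fio job.fio` to keep the same CPU pinning as the original command.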

HazyMrf (Author) commented Oct 10, 2024

Thank you for such a quick response @axboe. It turns out I made a mistake in my question: I don't want many workers, I want many cores for my workers. Maybe I can set this up in fio + uring; something like https://man7.org/linux/man-pages/man3/io_uring_register_iowq_aff.3.html may help, I don't know.

HazyMrf (Author) commented Oct 10, 2024

I'm trying to reproduce the uneven write-load distribution with fio + ioengine=io_uring from axboe/liburing#1260

axboe (Owner) commented Oct 10, 2024

Just run more jobs? This isn't an io_uring-specific option, it's a generic fio option: --numjobs=X would do that. But then you probably want to drop the --filename option and use --directory=/mnt/test/ instead, so that each job gets its own file.
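Concretely, that suggestion might look like the following. This is a sketch reusing the parameters from the command earlier in the thread; the choice of numjobs=4 to match the four CPUs in the taskset range is an assumption, and with --directory each of the four jobs writes its own file:

```shell
taskset -c 20-23 fio --name=test --ioengine=io_uring --rw=randwrite \
    --iodepth=64 --bs=256k --size=900G --directory=/mnt/test \
    --runtime=600 --time_based --numjobs=4
```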

HazyMrf (Author) commented Oct 14, 2024

Thank you for your response @axboe. I thought about it, but what I actually want is one job with multiple cores for the io_uring workers; in other words, to somehow set the affinity of my io_uring via https://man7.org/linux/man-pages/man3/io_uring_register_iowq_aff.3.html
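For reference, io_uring_register_iowq_aff(3) is a liburing API rather than anything fio exposes (it does not appear in the --enghelp=io_uring output above as of fio-3.28), so using it means calling liburing directly. A minimal standalone sketch is below; the CPU range 20-23 mirrors the taskset range from earlier in the thread and is an assumption:

```c
/* Sketch: restrict a ring's io-wq async workers to CPUs 20-23
 * via io_uring_register_iowq_aff(3). Build with: cc app.c -luring */
#define _GNU_SOURCE
#include <liburing.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    int ret = io_uring_queue_init(64, &ring, 0);
    if (ret < 0) {
        fprintf(stderr, "io_uring_queue_init: %d\n", ret);
        return 1;
    }

    /* Build a CPU mask covering cores 20-23 (assumed, to match taskset). */
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 20; cpu <= 23; cpu++)
        CPU_SET(cpu, &mask);

    /* Tell the kernel to place this ring's io-wq workers on that mask. */
    ret = io_uring_register_iowq_aff(&ring, sizeof(mask), &mask);
    if (ret < 0)
        fprintf(stderr, "io_uring_register_iowq_aff: %d\n", ret);

    io_uring_queue_exit(&ring);
    return ret < 0;
}
```

Note this affects the io-wq workers spawned for that ring, not the submitting task itself, so it complements rather than replaces taskset on the fio process.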
