smp: add a function that barriers memory prefault work #2608
base: master
Conversation
Currently, memory prefault logic is internal and Seastar doesn't provide much control to users. To improve the situation, I suggest providing a barrier for the prefault threads. This allows users to:

* Prefer predictable low latency and high throughput from the start of request serving, at the cost of a startup delay that depends on machine characteristics and application-specific requirements. For example, a fixed-capacity on-prem DB setup, where slower startup can be tolerated. From the users' perspective, they generally cannot tolerate inconsistency (like spikes in latency).
* Similarly, improve scheduling decisions, like running less critical tasks while prefault work is in progress.
* Reliably test the prefault logic, improving reliability and users' trust in Seastar.
* Release memory_prefaulter::_worker_threads early and remove this overhead, rather than only at exit.

I tested locally. If you approve this change, I will submit a prefault test next.
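For concreteness, here is a minimal standalone sketch of the barrier pattern the description proposes, using only the C++ standard library. The names (`prefaulter`, `wait_done`) are illustrative and do not reproduce Seastar's actual internals or the API this PR adds:

```cpp
// Standalone model of a memory prefaulter with a completion barrier.
// Worker threads touch every page of their buffer; wait_done() blocks
// callers until all workers have finished, after which the threads can
// be joined and released early instead of lingering until exit.
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

class prefaulter {
    std::vector<std::thread> _worker_threads;
    std::mutex _mu;
    std::condition_variable _cv;
    std::size_t _pending;
public:
    explicit prefaulter(std::vector<std::vector<char>>& buffers,
                        std::size_t page_size = 4096)
        : _pending(buffers.size()) {
        for (auto& buf : buffers) {
            _worker_threads.emplace_back([this, &buf, page_size] {
                // Touch one byte per page so the kernel faults it in now,
                // not later on the latency-sensitive request path.
                for (std::size_t i = 0; i < buf.size(); i += page_size) {
                    buf[i] = 1;
                }
                std::lock_guard<std::mutex> g(_mu);
                if (--_pending == 0) {
                    _cv.notify_all();
                }
            });
        }
    }

    // The barrier proposed here: block until all prefault work is done.
    void wait_done() {
        std::unique_lock<std::mutex> lk(_mu);
        _cv.wait(lk, [this] { return _pending == 0; });
    }

    ~prefaulter() {
        for (auto& t : _worker_threads) {
            t.join();
        }
    }
};
```

A caller that prefers predictable latency over fast startup would construct the prefaulter and call `wait_done()` before starting to serve requests.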
Did you observe latency impact from the prefault threads? It was written carefully not to have latency impact, but it's of course possible that some workloads suffer.
As you described in #1702, page faults can cause deviation, and, following that example, there can be 25 seconds during which latency is variably higher.
I said nothing about latency being higher there. We typically run large machines with a few vcpus not assigned to any shards, and the prefault threads run with low priority.
There are two aspects:

* In the previous comment I meant page-fault latency: page faults can cause unpredictably high latency until the prefaulter finishes.
* Regarding page-fault measurement, it seems I cannot measure reliably in my environment.
I tried to isolate, non-scientifically, the wall-time overhead of the prefault threads: I have a test app that performs file I/O and processes memory buffers repeatedly. I used an Ubuntu OrbStack VM with 1 NUMA node, 10 cores, and --memory=14G (effectively a small NUMA node), and a small input to make the overhead most visible.
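On the measurement point: rather than inferring fault overhead from wall time alone, one option is to sample the process's page-fault counters with POSIX `getrusage()` around the phase of interest. A minimal sketch, assuming Linux-style `mmap` flags; the 4 KiB page size and the helper name `faults_while_touching` are illustrative:

```cpp
#include <cstddef>
#include <sys/mman.h>
#include <sys/resource.h>

// Minor (soft) page faults incurred by this process so far.
static long minor_faults() {
    struct rusage ru{};
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

// Map `len` bytes of untouched anonymous memory, write one byte per
// page, and return how many minor faults the touching caused. With
// 4 KiB pages this is roughly len / 4096, or fewer if the kernel backs
// the region with transparent huge pages.
long faults_while_touching(std::size_t len) {
    char* p = static_cast<char*>(mmap(nullptr, len, PROT_READ | PROT_WRITE,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (p == MAP_FAILED) {
        return -1;
    }
    long before = minor_faults();
    for (std::size_t i = 0; i < len; i += 4096) {
        p[i] = 1; // first write to each page faults it in
    }
    long delta = minor_faults() - before;
    munmap(p, len);
    return delta;
}
```

Comparing the counter delta with the prefaulter enabled and disabled would show how much fault work is left for the request path, independent of scheduling noise.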
By default Seastar uses all vcpus, which makes sense for resource efficiency. Also, do you free specific vcpus, like one per NUMA node, the granularity of the prefault threads?
1 in 8, with NUMA awareness. They're allocated for kernel network processing. See perftune.py.
Nice. Let me know if this change makes sense to you.
@avikivity ping |
I also tried to simulate perftune with 1 free vcpu: