[chore] [exporterhelper] Add an option for items based queue sizing #9147
Conversation
Force-pushed from 6da46cd to d1365d6
Force-pushed from d1365d6 to 57cf854
Force-pushed from 9c99cd0 to 854f7ae
Codecov Report (Attention)

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #9147      +/-   ##
==========================================
+ Coverage   91.41%   91.42%   +0.01%
==========================================
  Files         320      322       +2
  Lines       17187    17302     +115
==========================================
+ Hits        15711    15818     +107
- Misses       1173     1178       +5
- Partials      303      306       +3

View full report in Codecov by Sentry.
Force-pushed from 8c7b937 to 4dd9e1f
// newUnboundedChannel creates a pair of buffered channels with no cap.
// Caller should close the input channel when done. That will trigger draining of the output channel.
// Closing the output channel by client will panic.
func newUnboundedChannel[T any]() (chan<- T, <-chan T) {
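For context, the two-channel pattern being questioned can be sketched like this: a goroutine shuttles values from the input channel into a growable slice and feeds them to the output channel, so producers never block on a slow consumer. This is an illustrative reconstruction under that assumption, not the PR's exact code:

```go
package main

import "fmt"

// newUnboundedChannel sketches the two-channel pattern: an internal
// goroutine buffers values in a slice, so sends are accepted promptly
// regardless of how fast the reader drains the output channel.
func newUnboundedChannel[T any]() (chan<- T, <-chan T) {
	in := make(chan T)
	out := make(chan T)
	go func() {
		defer close(out)
		var buf []T
		for {
			if len(buf) == 0 {
				// Nothing buffered: block until a value arrives or input closes.
				v, ok := <-in
				if !ok {
					return
				}
				buf = append(buf, v)
			}
			select {
			case v, ok := <-in:
				if !ok {
					// Input closed: drain the remaining buffered values.
					for _, v := range buf {
						out <- v
					}
					return
				}
				buf = append(buf, v)
			case out <- buf[0]:
				buf = buf[1:]
			}
		}
	}()
	return in, out
}

func main() {
	in, out := newUnboundedChannel[int]()
	for i := 0; i < 5; i++ {
		in <- i // accepted by the buffering goroutine, never queue-full
	}
	close(in)
	sum := 0
	for v := range out {
		sum += v
	}
	fmt.Println(sum)
}
```

Note the `buf = buf[1:]` reslicing here is the kind of slice churn mentioned in the benchmarks below.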
Why do you need two channels and a goroutine here? I am confused why it's not just a simple unbounded channel.
What kind of simple unbounded channel are you referring to?
I had something else in mind that doesn't work. What if we create a channel with the same capacity as the "capacity" of the limiter? Every element has a size of at least 1, so we should be good. Does that mean we'd be allocating too much memory? Can you check how much memory it would allocate?
I put a draft with the bounded capacity #9164 with some benchmarks:
Unbounded:
Benchmark_QueueUsage_1M_items_10_250k-10 9 118011278 ns/op 7497951 B/op 250017 allocs/op
Benchmark_QueueUsage_1M_items_10_1M-10 3 478627375 ns/op 29998074 B/op 1000004 allocs/op
Benchmark_QueueUsage_100M_items_10_10M-10 1 4688934042 ns/op 299973192 B/op 9999824 allocs/op
Bounded:
Benchmark_QueueUsage_1M_items_10_25k-10 25 44122055 ns/op 24003696 B/op 23 allocs/op
Benchmark_QueueUsage_1M_items_10_1M-10 7 176613071 ns/op 24003396 B/op 20 allocs/op
Benchmark_QueueUsage_100M_items_10_10M-10 1 1693857875 ns/op 2400016832 B/op 62 allocs/op
B/op seems to be cumulative, and I'm not sure how to check the max RAM usage. But it's still enough to see that the bounded option takes about 2.4 GB of memory for a 100M items capacity configuration value.
The unbounded version has a lot of churn due to slice recreation on overflow. That can potentially be fixed: we could handle the overflow manually and preallocate bigger arrays.
Also, I have an idea of how to integrate the capacity limiter right into the current unbounded channel implementation. It will remove the unnecessary decoupling between capacity limiting and the channel enqueuing. It should also make the code a bit simpler and maybe more performant. I can try that out.
For simplicity I would go with a buffered channel for the moment. It has better performance and also seems to have significantly less churn (allocations). Let's go with that for now.
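The buffered-channel approach can be sketched as follows: the channel is allocated with the limiter's full item capacity up front, and since every request counts as at least one item, a send can never block on a full channel. Type and method names here are illustrative placeholders, not the actual exporterhelper API:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// entry pairs a request with its item count so Poll can release capacity.
type entry[T any] struct {
	req  T
	size int64
}

// boundedQueue sketches the buffered-channel design: capacity is enforced
// by an atomic items counter, while the channel itself has enough slots
// that an admitted request always fits.
type boundedQueue[T any] struct {
	ch    chan entry[T]
	items atomic.Int64 // current total item count
	cap   int64
}

func newBoundedQueue[T any](capacity int64) *boundedQueue[T] {
	return &boundedQueue[T]{ch: make(chan entry[T], capacity), cap: capacity}
}

// Offer rejects the request if admitting it would exceed the item capacity.
func (q *boundedQueue[T]) Offer(req T, size int64) bool {
	if q.items.Add(size) > q.cap {
		q.items.Add(-size) // roll back the reservation
		return false
	}
	q.ch <- entry[T]{req, size} // cannot block: every element counts >= 1 item
	return true
}

// Poll blocks until a request is available and releases its item count.
func (q *boundedQueue[T]) Poll() (T, bool) {
	e, ok := <-q.ch
	if ok {
		q.items.Add(-e.size)
	}
	return e.req, ok
}

func main() {
	q := newBoundedQueue[string](10)
	fmt.Println(q.Offer("a", 6)) // admitted: 6 <= 10
	fmt.Println(q.Offer("b", 6)) // rejected: 6+6 > 10
	req, _ := q.Poll()           // frees 6 items
	fmt.Println(req, q.Offer("b", 6))
}
```

The trade-off discussed above applies: the channel's ring buffer is sized for the worst case (all requests of size 1), which is where the large upfront allocation comes from.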
Ok, I updated #9164 and marked it as ready.
type itemsCapacityLimiter[T itemsCounter] struct {
	baseCapacityLimiter[T]
}
why not

-type itemsCapacityLimiter[T itemsCounter] struct {
-	baseCapacityLimiter[T]
-}
+type itemsCapacityLimiter[T itemsCounter] baseCapacityLimiter[T]
We won't get the exported Size and Capacity methods inherited from baseCapacityLimiter then.
Are you sure?
Probably you know :D, I was just surprised...
That's why we had all the type aliases in pdata initially :)
From https://tip.golang.org/ref/spec#Type_declarations:

> A defined type may have [methods](https://tip.golang.org/ref/spec#Method_declarations) associated with it. It does not inherit any methods bound to the given type.
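That spec point can be shown in a few lines (generic placeholder names, not the PR's actual types): embedding promotes the base type's methods into the outer struct's method set, while a defined type starts with an empty method set even though it shares the same underlying type:

```go
package main

import "fmt"

type base struct{ n int }

func (b base) Size() int { return b.n }

// Embedding: Size is promoted, so the wrapper effectively keeps the method.
type embedded struct{ base }

// Defined type: same underlying type, but per the spec it does NOT
// inherit any methods bound to base.
type defined base

func main() {
	e := embedded{base{3}}
	fmt.Println(e.Size()) // promoted from base

	d := defined{n: 3}
	_ = d
	// d.Size() // compile error: d.Size undefined (type defined has no methods)
}
```

This is why the one-line `type itemsCapacityLimiter[T itemsCounter] baseCapacityLimiter[T]` suggestion would drop the exported methods.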
Force-pushed from 7af07ae to cf1baa0
Force-pushed from cf1baa0 to 38b57eb
Introduce an option to limit the queue size by the number of items instead of the number of requests. This is a preliminary step toward exporter helper v2 with a batcher sender placed after the queue sender. Otherwise, it would be hard for users to estimate the queue size based on the number of requests without a batch processor in front of it. This change doesn't affect the existing functionality, and the items-based queue limiting cannot be utilized yet.
Force-pushed from 38b57eb to 597855c
#9164) Introduce an option to limit the queue size by the number of items instead of the number of requests. This is a preliminary step toward exporter helper v2 with a batcher sender placed after the queue sender. Otherwise, it would be hard for users to estimate the queue size based on the number of requests without a batch processor in front of it. This change doesn't affect the existing functionality, and the items-based queue limiting cannot be utilized yet. Updates #8122. Alternative to #9147.