
[chore] [exporterhelper] Add an option for items based queue sizing #9147

Closed
wants to merge 1 commit into main from itemized-queue-sizing

Conversation

dmitryax (Member)

Introduce an option to limit the queue size by the number of items instead of the number of requests. This is a preliminary step toward the exporter helper v2 with a batcher sender placed after the queue sender. Otherwise, it will be hard for users to estimate the queue size in terms of requests without a batch processor in front of it.

This change doesn't affect the existing functionality, and the items-based queue limiting cannot be utilized yet.

Updates #8122
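To make the request-based vs. items-based distinction concrete, here is a minimal sketch of what a pluggable "sizer" could look like. The names (`itemsCounter`, `sizer`, `requestSizer`, `itemsSizer`, `fakeRequest`) are illustrative assumptions, not the PR's actual API.

```go
package main

import "fmt"

// itemsCounter is implemented by queue entries that know how many
// telemetry items (spans, metric points, log records) they carry.
// Hypothetical interface for illustration only.
type itemsCounter interface {
	ItemsCount() int
}

// sizer abstracts how much queue capacity one entry consumes:
// 1 for request-based sizing, ItemsCount() for items-based sizing.
type sizer[T itemsCounter] interface {
	size(el T) int
}

// requestSizer counts every enqueued request as 1, regardless of payload.
type requestSizer[T itemsCounter] struct{}

func (requestSizer[T]) size(T) int { return 1 }

// itemsSizer counts the telemetry items inside each request.
type itemsSizer[T itemsCounter] struct{}

func (itemsSizer[T]) size(el T) int { return el.ItemsCount() }

type fakeRequest struct{ items int }

func (r fakeRequest) ItemsCount() int { return r.items }

func main() {
	req := fakeRequest{items: 128}
	fmt.Println(requestSizer[fakeRequest]{}.size(req)) // 1
	fmt.Println(itemsSizer[fakeRequest]{}.size(req))   // 128
}
```

With a sizer like this, the same bounded queue can enforce either limit; the user-facing capacity then maps directly to item counts, which is what the description argues is easier to estimate once batching happens after the queue.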

@dmitryax dmitryax requested review from a team and bogdandrutu December 19, 2023 19:30
@dmitryax dmitryax force-pushed the itemized-queue-sizing branch 3 times, most recently from 6da46cd to d1365d6 Compare December 19, 2023 19:35
@dmitryax dmitryax force-pushed the itemized-queue-sizing branch from d1365d6 to 57cf854 Compare December 19, 2023 19:37
@dmitryax dmitryax force-pushed the itemized-queue-sizing branch 2 times, most recently from 9c99cd0 to 854f7ae Compare December 20, 2023 00:35

codecov bot commented Dec 20, 2023

Codecov Report

Attention: 10 lines in your changes are missing coverage. Please review.

Comparison is base (44fbb84) 91.41% compared to head (597855c) 91.42%.

Files Patch % Lines
...porter/exporterhelper/internal/persistent_queue.go 88.37% 7 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #9147      +/-   ##
==========================================
+ Coverage   91.41%   91.42%   +0.01%     
==========================================
  Files         320      322       +2     
  Lines       17187    17302     +115     
==========================================
+ Hits        15711    15818     +107     
- Misses       1173     1178       +5     
- Partials      303      306       +3     


@dmitryax dmitryax force-pushed the itemized-queue-sizing branch 11 times, most recently from 8c7b937 to 4dd9e1f Compare December 20, 2023 05:33
Comment on lines +6 to +9
// newUnboundedChannel creates a pair of buffered channels with no cap.
// Caller should close the input channel when done. That will trigger draining of the output channel.
// Closing the output channel by client will panic.
func newUnboundedChannel[T any]() (chan<- T, <-chan T) {
Member
Why do you need two channels and a goroutine here? I am confused why this isn't just a simple unbounded channel.

Member Author

What kind of simple unbounded channel are you referring to?

Member

I had something else in mind which does not work. What if we create a channel with the same capacity as the "capacity" of the limiter? Every element has a size of at least 1, so we should be good. Does that mean we are allocating too much memory? Can you check how much memory it will allocate?

Member Author

I put up a draft with the bounded capacity, #9164, with some benchmarks:

Unbounded:

Benchmark_QueueUsage_1M_items_10_250k-10     	       9	 118011278 ns/op	 7497951 B/op	  250017 allocs/op
Benchmark_QueueUsage_1M_items_10_1M-10    	       3	 478627375 ns/op	29998074 B/op	 1000004 allocs/op
Benchmark_QueueUsage_100M_items_10_10M-10    	       1	4688934042 ns/op	299973192 B/op	 9999824 allocs/op

Bounded:

Benchmark_QueueUsage_1M_items_10_25k-10     	      25	  44122055 ns/op	24003696 B/op	      23 allocs/op
Benchmark_QueueUsage_1M_items_10_1M-10    	       7	 176613071 ns/op	24003396 B/op	      20 allocs/op
Benchmark_QueueUsage_100M_items_10_10M-10    	       1	1693857875 ns/op	2400016832 B/op	      62 allocs/op

B/op seems to be cumulative. I'm not sure how to check the max RAM usage, but it is still enough to see that the bounded option takes 2 GB of memory for a 100M items capacity configuration value.

The unbounded version has a lot of churn due to slice recreation on overflow. That could potentially be fixed: we could handle the overflow manually and preallocate bigger arrays.

Also, I have an idea for integrating the capacity limiter right into the current unbounded channel implementation. It would remove the unnecessary decoupling between capacity limiting and channel enqueuing. It should also make the code a bit simpler and maybe more performant. I can try that out.

Member

For simplicity I would go with a buffered channel for the moment. It has better performance and also seems to have significantly less churn (allocations). Let's go with that for now.

Member Author

Ok, I updated #9164 and marked it as ready.

exporter/exporterhelper/internal/queue_capacity.go: several outdated review comments (resolved)
Comment on lines +63 to +65
type itemsCapacityLimiter[T itemsCounter] struct {
baseCapacityLimiter[T]
}
Member
Why not:

Suggested change
type itemsCapacityLimiter[T itemsCounter] struct {
baseCapacityLimiter[T]
}
type itemsCapacityLimiter[T itemsCounter] baseCapacityLimiter[T]

Member Author

We won't get the exported Size and Capacity methods inherited from baseCapacityLimiter then.

Member

Are you sure?

Member
Probably you know :D, I was just surprised...

Member Author

That's why we had all the type aliases in pdata initially :)

From https://tip.golang.org/ref/spec#Type_declarations:

A defined type may have methods associated with it. It does not inherit any methods bound to the given type.
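The spec rule quoted above is the whole story here: embedding promotes a struct's methods into the outer type's method set, while a defined type starts with an empty method set. A minimal illustration with toy types (not the PR's actual limiters):

```go
package main

import "fmt"

type base struct{ n int }

// Size is an exported method on base.
func (b base) Size() int { return b.n }

// Embedding: base's methods are promoted into embedded's method set,
// so embedded gets Size for free.
type embedded struct{ base }

// A defined type does NOT inherit base's methods (Go spec,
// "Type declarations"): defined has no Size method.
type defined base

func main() {
	e := embedded{base{n: 7}}
	fmt.Println(e.Size()) // 7, promoted from base

	d := defined{n: 7}
	// d.Size() would not compile: defined has an empty method set.
	fmt.Println(d.n) // 7, fields are still accessible
}
```

This is why `itemsCapacityLimiter` embeds `baseCapacityLimiter` rather than being declared as `type itemsCapacityLimiter[T itemsCounter] baseCapacityLimiter[T]`.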

@dmitryax dmitryax force-pushed the itemized-queue-sizing branch from 38b57eb to 597855c Compare December 20, 2023 22:16
@bogdandrutu (Member)

@dmitryax is this outdated? Can you re-open after #9164 if still needed?

dmitryax added a commit that referenced this pull request Dec 22, 2023 (#9164)

Introduce an option to limit the queue size by the number of items
instead of the number of requests. This is a preliminary step toward
the exporter helper v2 with a batcher sender placed after the queue
sender. Otherwise, it will be hard for users to estimate the queue size
in terms of requests without a batch processor in front of it.

This change doesn't affect the existing functionality, and the
items-based queue limiting cannot be utilized yet.

Updates
#8122

Alternative to
#9147