style: just some small refactors/fixes/added comments #16
Conversation
@@ -311,7 +311,7 @@ pub fn process_chunks<T>(receiver: Receiver<(Vec<u8>, usize, bool)>) -> Vec<(T,
 where
     T: ReadPointFromBytes,
 {
-    #[allow(clippy::unnecessary_filter_map)]
+    // TODO: should we use rayon to process this in parallel?
process_chunks is called from a parallel processor, which splits the entire file into chunks, calls the function on each, and later arranges the results in order, if that's what you were thinking.
No, I meant using par_iter here instead of iter. If you only have a few CPUs, say, then you'll only have a few workers, and each will be processing a lot of points sequentially. Not sure if all the CPUs are saturated already... maybe.
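To make the suggestion concrete, here is a minimal sketch of the iter vs. par_iter swap. The chunk type and the parse_point helper are placeholders I made up for illustration, not the actual kzg code:

```rust
use rayon::prelude::*;

// Placeholder standing in for ReadPointFromBytes on the real point type.
fn parse_point(bytes: &[u8]) -> u64 {
    bytes.iter().map(|&b| b as u64).sum()
}

fn parse_points(chunks: &[Vec<u8>]) -> Vec<u64> {
    // Sequential: a single worker walks the chunks one at a time.
    // chunks.iter().map(|c| parse_point(c)).collect()

    // Parallel: rayon fans the same map out across all available cores.
    chunks.par_iter().map(|c| parse_point(c)).collect()
}
```

The point is that even if the outer processor is parallel over chunks, a par_iter inside each chunk can soak up idle cores when there are fewer chunks than CPUs.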
src/kzg.rs (outdated)
@@ -295,6 +293,8 @@ impl Kzg {
         let mut buffer = vec![0u8; point_size];

         let mut i = 0;
+        // We are making one syscall per field element, which is super inefficient.
+        // FIXME: read the entire file into memory and then split it into field elements.
The entire G1 point file is 8 GB or so, I believe. We might need a middle ground, because sometimes the caller might not have enough resources for this.
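One possible middle ground, sketched under assumptions (the function name, path argument, and 64 MiB capacity are illustrative, not the repo's actual API): read the file through a large BufReader, so we issue a few big syscalls instead of one per point, without ever holding all ~8 GB in memory at once:

```rust
use std::fs::File;
use std::io::{BufReader, ErrorKind, Read};

/// Read fixed-size points through a large buffer: far fewer syscalls
/// than one read per point, bounded memory regardless of file size.
fn read_points_buffered(path: &str, point_size: usize) -> std::io::Result<Vec<Vec<u8>>> {
    let file = File::open(path)?;
    // The capacity is the tunable knob: larger means fewer syscalls,
    // smaller means less RAM. 64 MiB is just an illustrative choice.
    let mut reader = BufReader::with_capacity(64 * 1024 * 1024, file);
    let mut points = Vec::new();
    let mut buf = vec![0u8; point_size];
    loop {
        match reader.read_exact(&mut buf) {
            Ok(()) => points.push(buf.clone()),
            Err(e) if e.kind() == ErrorKind::UnexpectedEof => break,
            Err(e) => return Err(e),
        }
    }
    Ok(points)
}
```

Note this sketch still accumulates the parsed points in memory; the saving is in syscall count, and if the points are processed in a streaming fashion the Vec could be dropped too.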
Ah, good point. OK, amended the comment: bef408a
Wait, thinking about this more... if the file is 8 GB, isn't it also going to take >= 8 GB in RAM? Or is there some form of compression we're doing after loading?
First time reading through this. Some small style refactors, nothing very exciting here.
@anupsv can you change the settings of this repo so that we can only merge with approval now? Also, make sure the workflows are successful.