
style: just some small refactors/fixes/added comments #16

Merged — 3 commits merged into master on Nov 22, 2024

Conversation

samlaf (Contributor) commented Nov 22, 2024

First time reading through this. Some small style refactors, nothing very exciting here.

@anupsv can you change the settings of this repo so that we can only merge with approval? Also make sure the workflows are successful.

@samlaf samlaf requested a review from anupsv November 22, 2024 12:38
@@ -311,7 +311,7 @@ pub fn process_chunks<T>(receiver: Receiver<(Vec<u8>, usize, bool)>) -> Vec<(T,
where
T: ReadPointFromBytes,
{
#[allow(clippy::unnecessary_filter_map)]
// TODO: should we use rayon to process this in parallel?
Collaborator
process_chunks is called from a parallel processor, which splits the entire file into chunks, calls this function on each, and later arranges the results in order, if that's what you were thinking.

Contributor Author

No, I meant to use par_iter here instead of iter. If you only have a few CPUs, say, then you'll only have a few workers, and each will be processing a lot of points sequentially. Not sure if all the CPUs are saturated already... maybe.

src/kzg.rs Outdated
@@ -295,6 +293,8 @@ impl Kzg {
let mut buffer = vec![0u8; point_size];

let mut i = 0;
// We are making one syscall per field element, which is super inefficient.
// FIXME: read the entire file into memory and then split it into field elements.
anupsv (Collaborator) commented Nov 22, 2024

The entire G1 point file is 8 GB or so, I believe. We might need a middle ground, because sometimes the caller might not have enough resources for this.

Contributor Author
Ah, good point. OK, amended the comment: bef408a

Contributor Author

Wait, thinking about this more... if the file is 8 GB, isn't it also going to take >= 8 GB in RAM? Or is there some form of compression we're doing after loading?

@samlaf samlaf requested a review from anupsv November 22, 2024 20:00
@samlaf samlaf merged commit fe0c03c into master Nov 22, 2024
1 check passed
@samlaf samlaf deleted the samlaf/style--fix-typos-and-docs-and-small-refactors branch November 22, 2024 23:29