verify concurrency behaves as intended #10

Open
2bndy5 opened this issue Aug 17, 2024 · 3 comments
Labels: help wanted (Extra attention is needed)

Comments

2bndy5 (Collaborator) commented Aug 17, 2024

Recently, I implemented the tokio runtime for asynchronous execution of

  • REST API calls (the pure python project doesn't implement this)
  • running clang-tidy and/or clang-format on each file

However, I didn't notice any performance boost in the tests, so I'm wondering if I missed something. Additionally, some .await expressions in some of the tests do not show as triggered in the coverage reports (using llvm-cov reports, not Codecov, which only shows line coverage).
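
For reference, here is a minimal sketch of the per-file concurrency pattern described above, using tokio's JoinSet. The analyze_file task is hypothetical and only stands in for running clang-tidy/clang-format on one file; it is not the project's actual code.

```rust
use tokio::task::JoinSet;

// Hypothetical stand-in for running clang-tidy/clang-format on one file
// and returning the number of failed checks.
async fn analyze_file(path: String) -> u32 {
    println!("checking {path}");
    0
}

#[tokio::main]
async fn main() {
    let files = vec!["src/a.cpp".to_string(), "src/b.cpp".to_string()];
    let mut tasks = JoinSet::new();
    for file in files {
        // Each file is checked as its own task on the multi-threaded runtime.
        tasks.spawn(analyze_file(file));
    }
    let mut checks_failed = 0;
    while let Some(result) = tasks.join_next().await {
        checks_failed += result.expect("task panicked");
    }
    println!("checks-failed: {checks_failed}");
}
```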

2bndy5 commented Sep 15, 2024

> some await commands in some of the tests do not show as triggered in the coverage reports

This is a known problem. See rust-lang/rust#98712
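
To make the symptom concrete, here is a tiny illustration (the fetch function is made up): the .await below does execute when the test runs, yet region coverage from llvm-cov may still mark it as untriggered because of the rustc issue linked above.

```rust
// Made-up async fn; only here to produce an `.await` point in a test.
async fn fetch() -> u8 {
    42
}

#[tokio::test]
async fn await_coverage_example() {
    // This line runs, but its `.await` coverage region may be
    // reported as untriggered (see rust-lang/rust#98712).
    let value = fetch().await;
    assert_eq!(value, 42);
}
```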

2bndy5 commented Sep 30, 2024

I noticed that the -j, --jobs argument is not implemented here, although it is in v1.x. In resolving this issue, I should add that argument (see the sketch below).
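
As a starting point, here is a sketch of how the flag could cap tokio's worker threads, assuming clap's derive API. The Cli struct and the "0 means all available cores" convention are hypothetical, not the crate's actual interface.

```rust
use clap::Parser;

#[derive(Parser)]
struct Cli {
    /// Number of worker threads to use (0 = all available cores).
    #[arg(short = 'j', long = "jobs", default_value_t = 0)]
    jobs: usize,
}

fn main() {
    let cli = Cli::parse();
    let mut builder = tokio::runtime::Builder::new_multi_thread();
    if cli.jobs > 0 {
        // Cap the runtime's thread pool; otherwise tokio defaults
        // to the number of available cores.
        builder.worker_threads(cli.jobs);
    }
    let runtime = builder.enable_all().build().expect("failed to build runtime");
    runtime.block_on(async {
        // ... run the linter here ...
    });
}
```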

2bndy5 mentioned this issue Oct 2, 2024
2bndy5 commented Oct 2, 2024

Concurrency is working 👀

I built a release binary and ran some simple benchmarks using

  • the hyperfine tool
  • the libgit2 files (build artifacts) in the target directory (which produces 2569 checks-failed)

pure-python vs native rust

hyperfine --shell powershell --warmup 1 --runs 2 --command-name=pure-python 'cpp-linter -j=30 -l=false -i=!target' --command-name=pure-rust './target/release/cpp-linter.exe -l=0 -i=!target'
Benchmark 1: pure-python
  Time (mean ± σ):     46.621 s ±  0.870 s    [User: 185.804 s, System: 63.301 s]
  Range (min … max):   46.006 s … 47.236 s    2 runs

Benchmark 2: pure-rust
  Time (mean ± σ):     19.769 s ±  0.125 s    [User: 86.546 s, System: 20.152 s]
  Range (min … max):   19.680 s … 19.857 s    2 runs

Summary
  pure-rust ran
    2.36 ± 0.05 times faster than pure-python

2.36x faster is a significant improvement!

py-binding vs native rust

hyperfine --shell powershell --warmup 1 --runs 2 --command-name=py-binding 'cpp-linter -l=0 -i=!target' --command-name=native-rust './target/release/cpp-linter.exe -l=0 -i=!target'
Benchmark 1: py-binding
  Time (mean ± σ):     19.746 s ±  0.356 s    [User: 85.412 s, System: 20.656 s]
  Range (min … max):   19.494 s … 19.998 s    2 runs

Benchmark 2: native-rust
  Time (mean ± σ):     19.470 s ±  0.143 s    [User: 86.358 s, System: 21.062 s]
  Range (min … max):   19.369 s … 19.571 s    2 runs

Summary
  native-rust ran
    1.01 ± 0.02 times faster than py-binding

We'll call this a negligible result.

py-binding vs node-binding

hyperfine --shell powershell --warmup 1 --runs 2 --command-name=py-binding 'cpp-linter -l=false -i=!target' --command-name=node-binding 'node ./node-binding/cli.js -l=0 -i=!target'
Benchmark 1: py-binding
  Time (mean ± σ):     19.657 s ±  0.262 s    [User: 85.682 s, System: 21.050 s]
  Range (min … max):   19.472 s … 19.842 s    2 runs

Benchmark 2: node-binding
  Time (mean ± σ):     19.907 s ±  0.367 s    [User: 85.362 s, System: 20.558 s]
  Range (min … max):   19.647 s … 20.167 s    2 runs

Summary
  py-binding ran
    1.01 ± 0.02 times faster than node-binding

This showed a negligible difference in performance as well.

Still room for improvement 🤔

The benchmarks above do not use HTTP requests (which are also executed asynchronously in rust). I'm also not yet using tokio's async alternative to rust's std::process::Command (similar to python's subprocess.run).
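
For illustration, a minimal sketch of that swap using tokio::process::Command, whose builder mirrors std::process::Command; the clang-format invocation and file name are only examples.

```rust
use tokio::process::Command;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Unlike std::process::Command, awaiting this yields to the runtime
    // instead of blocking a worker thread while clang-format runs.
    let output = Command::new("clang-format")
        .args(["--dry-run", "src/main.cpp"]) // example arguments only
        .output()
        .await?;
    println!("clang-format exited with {:?}", output.status.code());
    Ok(())
}
```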

Add a CI job to run benchmarks

It would be nice to spot performance regressions using a CI workflow. This would require an additional checkout of the parent commit (on a push to the main branch) or of the main branch (on a PR synchronize event).
