spike: convert all scenarios to new perf format #29

Open · wants to merge 2 commits into main
Conversation

@coolsoftwaretyler (Owner) commented Nov 13, 2023

I was having some issues with the prior setup, where we had bundled all of the performance tests into one file. The benchmarks took forever to run and would sometimes hang in Node.js (less often in Bun and the browser, but it still happened). This happened even when I added `--max-old-space-size=8096` to my Node arguments.

Initially, I wanted a very simple surface area for writing scenarios, but I don't think that scales particularly well. We have fewer than 100 scenarios and the benchmarks were already taking too long.

I wanted a way to run specific scenarios, and I explored some options where I ran containers locally to scan the repo for files, build them into bundles, and execute them in individual processes.

But honestly, I really like the way I was bundling up the JS before to be executed and measured. And I still want to inject Benchmark.js into the tests, which is different for Node/Browser environments.
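
To make that difference concrete, the injected prelude differs per target, roughly like this (a sketch of the idea, not the exact code the bundler emits):

```js
// Node prelude (sketch): Benchmark.js can come straight from node_modules.
const Benchmark = require("benchmark");

// Web prelude (sketch): Benchmark.js and its lodash dependency have to be
// loaded into the page first (e.g. via script tags), which leaves a global
// `Benchmark` for the scenario code to pick up instead of a require call.
```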

I toyed around with this all weekend and realized I could do this manually on a per-test basis, so I've decided to experiment with a file format (maybe a DSL? I'm not sure of the correct term for what I'm doing here) that allows us to (a hypothetical example follows the list):

  1. Add metadata to each scenario (right now just titles, but we could extend it)
  2. Specify what imports are required in each scenario (in the future, could even write ESM/CJS imports separately, and let the parser handle that gracefully)
  3. Write the actual scenario (assuming it's basically isomorphic JS, we can bundle it the same either way).
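
For illustration, a scenario file in this format might look something like the following. The section names and layout are my own hypothetical reconstruction, not necessarily what the parser in this PR expects; only the `.perf.md` extension and the scenario title come from this repo. `Benchmark` is assumed to be injected by the build step rather than imported here:

````md
# Add 1 object to an array

## Imports

```js
// Whatever the scenario depends on; this simple scenario needs nothing,
// but the parser could eventually emit ESM or CJS here per target.
```

## Scenario

```js
// Isomorphic JS. `Benchmark` is assumed to be injected by the build step.
const suite = new Benchmark.Suite();

suite
  .add("add 1 object to an array", () => {
    const arr = [];
    arr.push({ id: 1 });
  })
  .on("cycle", (event) => {
    console.log(String(event.target));
  })
  .run();
```
````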

Then there's a build step in `build-test-bundles.js` that parses the directory, finds all `.perf.md` files, handles each of those individual concerns, and bundles them up into web or node bundles (sketched below).
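
A minimal sketch of roughly what that step can do. Everything below except the `build-test-bundles.js` name, the `.perf.md` extension, and the `-node-bundle.js` output naming is an assumption; the real script may parse and bundle quite differently:

````js
// build-test-bundles.js (sketch). Node >= 20 for the recursive readdir.
const fs = require("fs");
const path = require("path");

// Collect every .perf.md file under a scenarios directory (name assumed).
function findScenarios(dir) {
  return fs
    .readdirSync(dir, { recursive: true })
    .filter((f) => f.endsWith(".perf.md"))
    .map((f) => path.join(dir, f));
}

// Naive parse: the H1 is the title, the first ```js block is the imports,
// the second is the scenario body.
function parseScenario(markdown) {
  const title = (markdown.match(/^#\s+(.+)$/m) || [])[1];
  const blocks = [...markdown.matchAll(/```js\n([\s\S]*?)```/g)].map((m) => m[1]);
  return { title, imports: blocks[0] || "", code: blocks[1] || "" };
}

fs.mkdirSync("build", { recursive: true });
for (const file of findScenarios("scenarios")) {
  const { imports, code } = parseScenario(fs.readFileSync(file, "utf8"));
  const slug = path.basename(file).replace(/\.perf\.md$/, "");
  // A real build would run this through a bundler and inject the right
  // Benchmark.js prelude per target; plain concatenation stands in here.
  fs.writeFileSync(path.join("build", `${slug}-node-bundle.js`), `${imports}\n${code}`);
}
````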

I chose `.perf.md` so I could get nice syntax highlighting out of the box, but I'm interested in a custom file format and maybe some kind of tool that does syntax highlighting on its own. That's for much later.

From there, we can run them with puppeteer, or node, or bun, or deoptigate, or 0x, or any other tool that can execute a JS file top to bottom.
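
For the browser path, a runner along the lines of `puppeteer.cjs` can stay very small. This is a sketch of the idea, not the actual file; it assumes the web bundle logs its Benchmark.js results via `console.log` and gives it a fixed window to finish:

```js
// Run a web bundle in headless Chrome and relay its console output.
const puppeteer = require("puppeteer");

(async () => {
  const bundlePath = process.argv[2]; // e.g. build/<scenario>-web-bundle.js
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Forward the page's console.log output (Benchmark.js cycle results).
  page.on("console", (msg) => console.log(msg.text()));

  // Inject the bundle and let it run top to bottom.
  await page.addScriptTag({ path: bundlePath });

  // Crude: wait a fixed amount of time for the async benchmark runs.
  // A real runner would wait for an explicit "done" signal instead.
  await new Promise((resolve) => setTimeout(resolve, 60_000));
  await browser.close();
})();
```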

I think this creates some constituent parts that can be brought back together in some kind of CLI interface (a hypothetical shape is sketched after this list) to allow us to:

  1. Check what scenarios are available
  2. Bundle a set of scenarios into a run
  3. Run and test those scenarios
  4. Combine their output
  5. Save the output
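
None of this exists yet, so purely as a hypothetical shape for how those pieces might compose:

```js
// Hypothetical CLI entry point; every command name here is made up.
const [, , command, ...scenarios] = process.argv;

switch (command) {
  case "list":
    // 1. Check what scenarios are available (scan for .perf.md files).
    break;
  case "bundle":
    // 2. Bundle the named scenarios into a run.
    break;
  case "run":
    // 3-5. Run the bundles, combine their output, and save it.
    break;
  default:
    console.log("usage: perf <list|bundle|run> [scenarios...]");
}
```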

Eventually it could be configured to have some preset scenarios as well.

Right now it relies on node_modules locally, but I also wonder if we can find a way to select dependencies at different versions and dynamically include those. That's neither here nor there.

I've got these running. If you do:

```sh
node js-perf-runner.cjs
# Lots of output
node build/add-1-object-to-an-array-node-bundle.js # Single scenario output
node puppeteer.cjs build/add-1-object-to-an-array-web-bundle.js # Single scenario output
```

It should basically work as expected.

I'm gonna sleep on this and revisit.
