spike: convert all scenarios to new perf format #29
I was having some issues with the prior setup, where we had bundled all of the performance tests into one file. The benchmarks took forever to run and would sometimes hang in Node.js (less so in Bun/Web, but it still happened occasionally). This kept happening even after I added `--max-old-space-size=8096` to my Node arguments.

Initially, I had wanted a very simple surface area to write scenarios in, but I don't think that scales particularly well. We have fewer than 100 scenarios and the benchmarks were already taking too long.
I wanted a way to run specific scenarios, and I explored some options where I ran containers locally to scan the repo for files, build them into bundles, and execute them in individual processes.
But honestly, I really like the way I was bundling up the JS before to be executed and measured. And I still want to inject Benchmark.js into the tests, which works differently in Node and browser environments.
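To make that concrete, here's a rough sketch of the kind of wrapper I have in mind; `wrapScenario` and the preamble strings are hypothetical names for this sketch, not code in this PR. The Benchmark.js Suite API is the same in both environments, so the only real difference is how `Benchmark` gets into scope.

```js
// Hypothetical sketch: wrap a scenario body in a Benchmark.js suite at bundle
// time. In Node the harness is required in; in the browser we assume
// Benchmark.js (and its lodash dependency) are loaded via <script> tags and
// expose a global `Benchmark`.
const nodePreamble = `const Benchmark = require('benchmark');`;
const webPreamble = `/* global Benchmark */`;

function wrapScenario(name, scenarioSource, target) {
  const preamble = target === 'node' ? nodePreamble : webPreamble;
  return `${preamble}
new Benchmark.Suite('${name}')
  .add('${name}', function () {
${scenarioSource}
  })
  .on('cycle', (event) => console.log(String(event.target)))
  .run({ async: true });
`;
}
```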
I toyed around with this all weekend and realized I could work out a manual way to do this on a per-test basis, so I've decided to experiment with a file format (maybe a DSL? I'm not sure of the correct term for what I'm doing here) that allows us to:
Then there's a build step in `build-test-bundles.js` that parses the directory, finds all `.perf.md` files, handles each of those individual concerns, then bundles them up into web or Node bundles.
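For illustration, here's a rough outline of the kind of work that build step does. This is not the actual `build-test-bundles.js`; the `scenarios` directory, the helper names, and the use of esbuild are assumptions made purely to sketch the shape of it.

```js
// Hypothetical outline only (not the real build-test-bundles.js). Directory
// names and the choice of esbuild are assumptions made for this sketch.
const fs = require('fs');
const path = require('path');
const esbuild = require('esbuild');

// Walk a directory tree and collect every *.perf.md file.
function findPerfFiles(dir, found = []) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) findPerfFiles(full, found);
    else if (entry.name.endsWith('.perf.md')) found.push(full);
  }
  return found;
}

// Pull the fenced js blocks out of a .perf.md file and join them into one
// scenario source. FENCE is built at runtime so this sketch doesn't contain
// a literal markdown fence.
const FENCE = '`'.repeat(3);
const blockRe = new RegExp(`${FENCE}js\\n([\\s\\S]*?)${FENCE}`, 'g');
function extractScenarioSource(markdown) {
  return [...markdown.matchAll(blockRe)].map((m) => m[1]).join('\n');
}

for (const file of findPerfFiles('scenarios')) {
  const source = extractScenarioSource(fs.readFileSync(file, 'utf8'));
  const entry = file.replace(/\.perf\.md$/, '.entry.js');
  fs.writeFileSync(entry, source);
  // One bundle per target so the Node and web harnesses stay separate.
  for (const platform of ['node', 'browser']) {
    esbuild.buildSync({
      entryPoints: [entry],
      bundle: true,
      platform,
      outfile: file.replace(/\.perf\.md$/, `.${platform}.bundle.js`),
    });
  }
}
```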
I chose `.perf.md` so I could get nice syntax highlighting out of the box, but I'm interested in a custom file format and maybe some kind of tool that does syntax highlighting on its own. That's for much later.

From there, we can run them with Puppeteer, Node, Bun, deoptigate, 0x, or any other tool that can execute a JS file top to bottom.
I think this creates some constituent parts that can be brought back together in some kind of CLI interface to allow us to:
Eventually it could be configured to have some preset scenarios as well.
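As a strawman for that CLI, something like the sketch below could glue those pieces together; the file layout, runner set, and command names are guesses for illustration, not anything that exists in this PR yet.

```js
#!/usr/bin/env node
// Hypothetical CLI sketch: pick a scenario bundle and a runner, then hand it
// off. A Puppeteer path would load the browser bundle through its own runner
// script instead of spawning a command like these.
const { spawnSync } = require('child_process');

const [scenario, runner = 'node'] = process.argv.slice(2);
const bundle = `scenarios/${scenario}.node.bundle.js`;

const commands = {
  node: ['node', [bundle]],
  bun: ['bun', ['run', bundle]],
  '0x': ['npx', ['0x', bundle]],
  deoptigate: ['npx', ['deoptigate', bundle]],
};

const [cmd, args] = commands[runner] || commands.node;
spawnSync(cmd, args, { stdio: 'inherit' });
```

Usage would look something like `node perf-cli.js <scenario> 0x`, with presets layered on top later.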
Right now it relies on `node_modules` locally, but I also wonder if we can find a way to select dependencies at different versions and dynamically include those. That's neither here nor there.

I've got these running. If you do:
It should basically work as expected.
I'm gonna sleep on this and revisit.