Thank you for contributing!
This crate has some code paths that depend on `no_std`, which can be compiled with Cargo by using `--no-default-features`. Additionally, it's best to use the `wasm32v1-none` target to ensure the standard library isn't included in any dependency.
Example usage:

```sh
cargo +nightly build --target wasm32v1-none --no-default-features
```
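
For orientation, this kind of `std`/`no_std` split is typically driven by a default `std` crate feature that `--no-default-features` turns off. The following is a minimal, hypothetical sketch of such gating (names like `now_millis` are made up for illustration and are not this crate's actual API):

```rust
// Hypothetical sketch: a default `std` feature gates the standard library,
// so `--no-default-features` switches the crate into `no_std` mode.
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(feature = "std")]
pub fn now_millis() -> u128 {
    // With `std` available, read the system clock.
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .expect("system clock is before the Unix epoch")
        .as_millis()
}

#[cfg(not(feature = "std"))]
pub fn now_millis() -> u128 {
    // Without `std`, a platform-specific time source would be used instead;
    // a constant stands in here to keep the sketch self-contained.
    0
}
```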
To get proper diagnostics for `no_std`, it can be helpful to configure Rust Analyzer accordingly.
Here is an example configuration for Visual Studio Code:

```json
"rust-analyzer.cargo.target": "wasm32v1-none",
"rust-analyzer.cargo.noDefaultFeatures": true,
"rust-analyzer.cargo.extraEnv": {
    "RUSTUP_TOOLCHAIN": "nightly",
},
```
This crate has some code paths that depend on Wasm Atomics, which has some prerequisites to compile:

- Rust nightly.
- The `rust-src` component.
- Cargo's `build-std`.
- The `atomics` and `bulk-memory` target features.
Example usage:

```sh
# Installing Rust nightly and necessary components:
rustup toolchain install nightly --target wasm32-unknown-unknown --component rust-src
# Example `cargo build` usage:
CFLAGS_wasm32_unknown_unknown="-matomics -mbulk-memory" RUSTFLAGS=-Ctarget-feature=+atomics,+bulk-memory cargo +nightly build --target wasm32-unknown-unknown -Zbuild-std=panic_abort,std
# Example `no_std` `cargo build` usage:
CFLAGS_wasm32_unknown_unknown="-matomics -mbulk-memory" RUSTFLAGS=-Ctarget-feature=+atomics,+bulk-memory cargo +nightly build --target wasm32v1-none -Zbuild-std=core,alloc --no-default-features
```
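
For orientation, code paths that depend on Wasm Atomics are usually selected at compile time via the `atomics` target feature. The sketch below is hypothetical and not taken from this crate's source:

```rust
// Hypothetical sketch: detecting at compile time whether the `atomics`
// target feature is enabled on a Wasm target.
const fn has_wasm_atomics() -> bool {
    cfg!(all(target_family = "wasm", target_feature = "atomics"))
}

fn main() {
    if has_wasm_atomics() {
        // Shared-memory atomics are available, e.g. for blocking waits.
        println!("built with Wasm Atomics");
    } else {
        // Without atomics, code must fall back to non-blocking strategies.
        println!("built without Wasm Atomics");
    }
}
```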
To get proper diagnostics for Wasm Atomics, it can be helpful to configure Rust Analyzer accordingly.
Here is an example configuration for Visual Studio Code:

```json
"rust-analyzer.cargo.target": "wasm32-unknown-unknown",
"rust-analyzer.cargo.extraArgs": [
    "-Zbuild-std=panic_abort,std",
],
"rust-analyzer.cargo.extraEnv": {
    "RUSTUP_TOOLCHAIN": "nightly",
    "RUSTFLAGS": "-Ctarget-feature=+atomics,+bulk-memory",
    "CFLAGS_wasm32_unknown_unknown": "-matomics -mbulk-memory",
},
```
Or with `no_std`:

```json
"rust-analyzer.cargo.target": "wasm32v1-none",
"rust-analyzer.cargo.noDefaultFeatures": true,
"rust-analyzer.cargo.extraArgs": [
    "-Zbuild-std=core,alloc"
],
"rust-analyzer.cargo.extraEnv": {
    "RUSTUP_TOOLCHAIN": "nightly",
    "RUSTFLAGS": "-Ctarget-feature=+atomics,+bulk-memory"
},
```
Tests are run as usual, but integration tests have a special setup to support `no_std`. To run integration tests, just use `--workspace`:
```sh
# Run tests for native.
cargo test --workspace
# Run tests for Wasm.
cargo test --workspace --target wasm32-unknown-unknown
# Run tests for `no_std`.
cargo +nightly test --workspace --target wasm32v1-none --no-default-features
# Run tests for Wasm atomics.
CFLAGS_wasm32_unknown_unknown="-matomics -mbulk-memory" RUSTFLAGS=-Ctarget-feature=+atomics,+bulk-memory cargo +nightly test --workspace --target wasm32-unknown-unknown -Zbuild-std=panic_abort,std
```
Make sure not to use `--all-features`.
To implement integration tests, you have to understand the setup. `no_std` support requires the test harness to be disabled. However, to keep the test harness enabled for native tests, the same tests are split into two test targets. These are defined in the `tests-native` and `tests-web` crates for each target respectively. The test targets are then enabled, depending on the target, via the `run` crate feature.
So, to add a new integration test, the following test targets have to be added:

`tests-web/Cargo.toml`:

```toml
[[test]]
harness = false
name = "web_new_test"
path = "../tests/new_test.rs"
required-features = ["run"]
```
`tests-native/Cargo.toml`:

```toml
[[test]]
name = "native_new_test"
path = "../tests/new_test.rs"
required-features = ["run"]
```
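
For orientation, a test target with `harness = false` has no libtest runner, so the shared test file provides its own entry point. A minimal, hypothetical `tests/new_test.rs` (ignoring the extra plumbing the real Wasm and `no_std` tests need) could look like this:

```rust
// Hypothetical harness-less test: without libtest, the file defines `main`
// itself and fails the test target by panicking.
fn main() {
    assert_eq!(2 + 2, 4, "basic sanity check failed");
}
```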
Additionally, keep in mind that usage of `#[should_panic]` is known to cause browsers to get stuck because of the lack of unwinding support. The current workaround is to split tests using `await` into separate test targets.
The only benchmark is marked as an example target because benchmark targets lack Wasm support. To run it, you can use the following commands:

```sh
RUSTFLAGS=-Ctarget-feature=+nontrapping-fptoint cargo build --workspace --example benchmark --target wasm32-unknown-unknown --profile bench
wasm-bindgen --out-dir benches --target web --no-typescript target/wasm32-unknown-unknown/release/examples/benchmark.wasm
```
The `benches` folder then needs to be hosted by an HTTP server to run the benchmark in a browser. Optionally, `wasm-opt` can be applied as well:

```sh
wasm-opt benches/benchmark_bg.wasm -o benches/benchmark_bg.wasm -O4
```
The process to generate WebAssembly test coverage is quite involved; see the `wasm-bindgen` Guide on the matter. If you open a PR, the "Coverage & Documentation" CI workflow will generate test coverage data for you and post a summary via a Job Summary, but you can also download the full test coverage data via an artifact called `test-coverage`.
If you want to generate test coverage locally, here is an example shell script that you can use:
```sh
rm -rf coverage-input
rm -rf coverage-output

# Single-threaded test run.
st () {
	CARGO_TARGET_WASM32_UNKNOWN_UNKNOWN_RUSTFLAGS="-Cinstrument-coverage -Zcoverage-options=condition -Zno-profiler-runtime --emit=llvm-ir --cfg web_time_test_coverage" cargo +nightly $1 --workspace --features serde --target wasm32-unknown-unknown --tests ${@:2}
}

# Multi-threaded test run.
mt () {
	CFLAGS_wasm32_unknown_unknown="-matomics -mbulk-memory" CARGO_TARGET_WASM32_UNKNOWN_UNKNOWN_RUSTFLAGS="-Cinstrument-coverage -Zcoverage-options=condition -Zno-profiler-runtime --emit=llvm-ir -Ctarget-feature=+atomics,+bulk-memory --cfg web_time_test_coverage" cargo +nightly $1 --workspace --features serde --target wasm32-unknown-unknown --tests ${@:2}
}

# To collect object files.
objects=()

# Start Chromedriver in the background.
chromedriver --port=9000 &
pid=$!

# Run tests and adjust LLVM IR.
test () {
	local command=$1
	local path=$2

	# Run tests.
	mkdir -p coverage-input/$path
	WASM_BINDGEN_USE_BROWSER=1 CHROMEDRIVER_REMOTE=http://127.0.0.1:9000 LLVM_PROFILE_FILE=$(realpath coverage-input/$path/%m_%p.profraw) $command 'nextest run' ${@:3}

	local crate_name=web_time
	local IFS=$'\n'

	for file in $(
		# Extract path to artifacts.
		$command 'test' ${@:3} --no-run --message-format=json | \
		jq -r "select(.reason == \"compiler-artifact\") | (select(.target.kind == [\"test\"]) // select(.target.name == \"$crate_name\")) | .filenames[0]"
	)
	do
		# Get the path to the LLVM IR files instead of the rlib and Wasm artifacts.
		local base
		if [[ ${file##*.} == "rlib" ]]; then
			base=$(basename $file .rlib)
			file=$(dirname $file)/${base#"lib"}.ll
		else
			file=$(dirname $file)/$(basename $file .wasm).ll
		fi

		# Compile LLVM IR files to object files.
		local output=coverage-input/$path/$(basename $file .ll).o
		clang-19 $file -Wno-override-module -c -o $output
		objects+=(-object $output)
	done
}

test st 'st'
test st 'st-no_std' --no-default-features
test mt 'mt' -Zbuild-std=panic_abort,std
test mt 'mt-no_std' -Zbuild-std=core,alloc --no-default-features

# Shutdown Chromedriver.
kill $pid

# Merge all generated `*.profraw` files.
rust-profdata merge -sparse coverage-input/*/*.profraw -o coverage-input/coverage.profdata

# Finally generate coverage information.
rust-cov show -show-instantiations=false -output-dir coverage-output -format=html -instr-profile=coverage-input/coverage.profdata ${objects[@]} -sources src
```