Overview
This issue summarizes our discussions about our testing needs, our current capabilities, and any gaps between the two.
For testing emirge, we need the following high-level capabilities (please add):
A system for adding, building, and running tests during the development cycle
Automated build and test on remote HPC platforms
Testing dashboard
Test coverage
Performance data collection & analysis
Performance/benchmarking over time
Automated build and test on remote HPC platforms & dashboard with TEESD
What can we currently do and what do we need?
TEESD currently uses CTest and CDash to provide the following capabilities:
Allows developers to (quickly) run selected tests
CI-type tests: detect changes in the repository, pull those changes, then configure, build, and test the project on all remote HPC platforms we care about (see the sketch after this list). Those platforms are currently Quartz, Lassen, and the Illinois-local MachineShop (dunkle, et al.).
Regular, nightly tests on each platform
Testing results are published on a project-public testing dashboard
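For concreteness, here is a minimal sketch of what the CI-type loop amounts to, assuming a git checkout and a CTest/CDash-based build; the path and branch name are placeholders, and TEESD's actual driver scripts are more involved than this:

```python
import subprocess

REPO = "/path/to/emirge"  # placeholder checkout/build location
BRANCH = "origin/main"    # placeholder tracking branch

def remote_has_new_commits(repo):
    """Fetch and compare HEAD against the remote tracking branch."""
    subprocess.run(["git", "fetch", "origin"], cwd=repo, check=True)
    local = subprocess.check_output(["git", "rev-parse", "HEAD"], cwd=repo)
    remote = subprocess.check_output(["git", "rev-parse", BRANCH], cwd=repo)
    return local != remote

def run_ci_cycle(repo):
    """Pull new changes, then configure, build, test, and submit via CTest."""
    if not remote_has_new_commits(repo):
        return
    subprocess.run(["git", "pull", "--ff-only"], cwd=repo, check=True)
    # CTest's "Continuous" dashboard mode configures, builds, tests, and
    # submits the results to the CDash dashboard.
    subprocess.run(["ctest", "-D", "Continuous"], cwd=repo, check=True)

if __name__ == "__main__":
    run_ci_cycle(REPO)
```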
TEESD uses ABaTe constructs to provide the following (a hypothetical configuration sketch follows the list):
Monitor multiple projects/branches
Environment management (works with Conda, pip, modules, Spack)
Test grouping into testing suites
Platform-specific batch system navigation
Inclusion of raw and image data into test results
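As a purely hypothetical illustration of the constructs above (branch monitoring, environment management, suite grouping, batch-system navigation, attached data), a suite description might look like the following; none of these keys or names come from ABaTe itself:

```python
# Hypothetical suite description; ABaTe's actual constructs differ.
suite = {
    "name": "emirge-nightly",
    "branches": ["main", "production"],            # multiple branches monitored
    "environment": {"type": "conda", "file": "environment.yml"},
    "platforms": {
        "quartz": {"scheduler": "slurm", "account": "placeholder"},
        "lassen": {"scheduler": "lsf", "account": "placeholder"},
    },
    "tests": ["smoke", "regression", "performance"],
    "attachments": ["*.dat", "*.png"],             # raw and image data in results
}
```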
Test Coverage
What do we want our coverage tools to do at the top/emirge level? What are the capabilities we currently have and what are our options?
Our current coverage strategy uses gcov: it covers C, C++, and F90, but not Python, and submits coverage reports to the dashboard.
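If we want to close the Python gap, coverage.py is one option; a minimal sketch follows, where the package and test-directory names are placeholders (how best to get the resulting report onto the dashboard is still something to work out):

```python
import coverage

cov = coverage.Coverage(source=["mirgecom"])  # placeholder package name
cov.start()

import pytest
pytest.main(["tests/"])  # run the Python test suite under measurement

cov.stop()
cov.save()
cov.xml_report(outfile="coverage.xml")  # Cobertura-style XML report
```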
Performance data collection & analysis
What do we need and what do we have?
We have an in-house Profiler Python package. It is integrated with TEESD and does the following simple things (a sketch of the timing pattern follows the list):
Times user-defined code constructs (in a naive way)
Generates timing reports in gnuplot-ready text format
Collects MPI-parallel timings, with or without barriers
Plots performance results with gnuplot (through customized tests) and submits them to the dashboard
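For context, the naive timing pattern above is essentially the following; the names here are illustrative, not the Profiler package's actual API:

```python
import time
from contextlib import contextmanager

from mpi4py import MPI

@contextmanager
def timed_region(name, results, comm=None, barrier=False):
    """Naively time a user-defined code construct.

    If *barrier* is set, synchronize the communicator before and after the
    region so the measurement reflects the slowest rank.
    """
    if comm is not None and barrier:
        comm.Barrier()
    t0 = time.perf_counter()
    yield
    if comm is not None and barrier:
        comm.Barrier()
    results.setdefault(name, []).append(time.perf_counter() - t0)

def write_gnuplot_report(results, path):
    """Emit whitespace-separated, gnuplot-ready rows: name min max mean."""
    with open(path, "w") as f:
        f.write("# region min max mean\n")
        for name, ts in results.items():
            f.write(f"{name} {min(ts)} {max(ts)} {sum(ts)/len(ts)}\n")
```

Wrapping a region then looks like `with timed_region("rhs", results, comm=MPI.COMM_WORLD, barrier=True): ...`; the barrier variant measures the slowest rank rather than independent per-rank times.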
What else is needed?
Better timing/profiling for PyOpenCL and other asynchronous execution (see the sketch below)
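One building block here is PyOpenCL's event profiling, which timestamps the work on the device rather than on the host, so asynchronously enqueued kernels are measured correctly; a minimal sketch with a trivial placeholder kernel:

```python
import pyopencl as cl

ctx = cl.create_some_context()
# Profiling must be enabled on the queue for event timestamps to be recorded.
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

n = 1024
a = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=n * 4)  # n float32 values
prg = cl.Program(ctx, """
__kernel void scale(__global float *a) { a[get_global_id(0)] *= 2.0f; }
""").build()

evt = prg.scale(queue, (n,), None, a)
evt.wait()  # device timestamps are valid only after the event completes
print("kernel time: %.3e s" % ((evt.profile.end - evt.profile.start) * 1e-9))
```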
Benchmarking results over time
What do we have, and what do we need?
AirSpeedVelocity (asv) is set up for use in TEESD, but we have not yet added any emirge-specific benchmarks, and it is not integrated with the dashboard. What benchmarks should we have?
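When we do add emirge-specific benchmarks, asv discovers them by naming convention (classes with time_*-prefixed methods); a minimal sketch with a placeholder workload:

```python
# benchmarks/bench_example.py -- asv discovers time_* methods automatically.
import numpy as np

class TimeWaveOperator:
    """Hypothetical benchmark; replace the body with a real emirge kernel."""

    def setup(self):
        # setup() runs before each timing and is excluded from the measurement
        self.u = np.random.rand(100_000)

    def time_rhs(self):
        # placeholder for, e.g., a right-hand-side evaluation
        np.sum(self.u * self.u)
```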
Needed:
Project-public results summary