
Pre-publication validation to visualize data is currently broken #98

Open
chengzhuzhang opened this issue Feb 7, 2022 · 1 comment

@chengzhuzhang
Contributor

This workflow needs to be brought back via updates to the plotting tools. Right now, a manual spot check is relied on before publication.

@TonyB9000
Contributor

TonyB9000 commented Apr 21, 2022

There are twin issues here. One is that, for the DataSM, any mechanical failure in "visual validation" would unnecessarily break the entire "Postprocess" (derivative datasets) workflow. Because these visuals were a subordinate stage of Postprocess, the only (DataSM) way to recover would be to re-run the Postprocess jobs (re-extract and cmorize). The alternative (manual intervention) requires editing each status file to add "Postprocess:Pass" to cover the preceding "Fail".
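For concreteness, that manual intervention might look roughly like the sketch below; the status-file location, the `*.status` extension, and the exact line format are assumptions for illustration, not the actual DataSM conventions.

```python
# Rough sketch of the manual workaround: append a "Postprocess:Pass" entry to
# each affected status file so the earlier "Fail" no longer blocks the workflow.
# STATUS_DIR and the "*.status" glob are placeholders, not the real DataSM layout.
from pathlib import Path

STATUS_DIR = Path("status_files")  # placeholder for the per-dataset status files

for status_file in STATUS_DIR.glob("*.status"):
    text = status_file.read_text()
    if "Postprocess:Fail" in text and "Postprocess:Pass" not in text:
        with status_file.open("a") as fh:
            fh.write("Postprocess:Pass\n")
```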

What we need are two things:

  1. A suite of automated "data validation" measures that can be applied to flag real data problems automatically (a minimal sketch follows below).
  2. A suite of automated visual-generation steps run as an independent, top-level Pre-publication stage.

Both the data-based and visual-based validations could be invoked as part of DataSM operations. Perhaps neither should be in the critical path of Postprocessing, but should instead run as an independent follow-on stage.
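As a rough illustration of item 1, a data-based check could open each file and flag obviously bad fields before anything reaches a visual stage. The sketch below assumes xarray is available; the variable names and plausibility bounds are made up for illustration and are not part of any existing DataSM tooling.

```python
# Minimal sketch of an automated data-validation check: flag variables that are
# entirely NaN or fall outside plausible bounds. The bounds table is illustrative.
import sys

import xarray as xr

PLAUSIBLE_RANGE = {              # hypothetical sanity bounds, per variable
    "TREFHT": (150.0, 350.0),    # near-surface air temperature, K
    "PRECT": (0.0, 1e-2),        # precipitation rate, m/s
}

def validate(path: str) -> list:
    """Return a list of problems found in one file (empty list means it passes)."""
    problems = []
    with xr.open_dataset(path) as ds:
        for name, (lo, hi) in PLAUSIBLE_RANGE.items():
            if name not in ds:
                continue
            da = ds[name]
            if da.isnull().all():
                problems.append(f"{name}: all values are NaN")
            elif float(da.min()) < lo or float(da.max()) > hi:
                problems.append(f"{name}: values outside [{lo}, {hi}]")
    return problems

if __name__ == "__main__":
    bad = {path: validate(path) for path in sys.argv[1:]}
    bad = {path: msgs for path, msgs in bad.items() if msgs}
    for path, msgs in bad.items():
        print(path, msgs)
    sys.exit(1 if bad else 0)
```

A check like this could run per dataset and write its own Pass/Fail status, keeping it out of the Postprocess critical path as suggested above.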
