run examples using shinytest2 #983
Conversation
Hey, using shinytest2 brings less overhead than using cypress. Take this example: we don't have any configuration files, nor directories in tests/ that only cypress developers understand. Everything is written in R, not JavaScript, and it doesn't require any extra software (like … |
I agree with @m7pr. There are many tools and technologies for app testing, but we should keep our codebase homogeneous and use shinytest2. |
True, but JS + Shiny unlocks so much potential for the apps. The fact that we can offload some tasks outside the server increases the scalability and performance of the apps, to name just one advantage. I truly believe we will implement some CSS and JS features in teal apps moving forward. I see these tests as a one-time thing that we set up, like our CI, and don't change drastically until we add new features that need testing; and adding a test case should not be that complicated to learn, in my opinion. |
That depends on who is going to write and read those tests. I personally do not understand tests written in cypress/JavaScript, so I will not be able to help. Does it go any further? Should we allow writing statistical modules in Python or Java for statistical computing, if they can be covered in a teal module? |
Okay, I get your point; it's valid. We think so deeply about just adding an R package dependency to our packages, and adding a whole new programming language increases the complexity of maintaining it. The question is: do the benefits of cypress over shinytest2 justify this? I don't know yet. Let's refine our test specs to understand how much effort it takes to implement and maintain them in both testing frameworks. |
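To ground the comparison, here is what a minimal shinytest2 smoke test looks like — a hedged sketch (not code from this PR), with `app_dir` standing in as a hypothetical placeholder for a directory containing an `app.R`; everything stays in R under tests/testthat/:

```r
library(shinytest2)

testthat::test_that("example app starts and renders", {
  # `app_dir` is a hypothetical placeholder for a directory with an app.R
  app <- AppDriver$new(app_dir, name = "smoke")
  on.exit(app$stop(), add = TRUE)

  app$wait_for_idle()
  # once initialized, the app should expose at least one input/output value
  testthat::expect_gt(length(unlist(app$get_values())), 0)
})
```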
Signed-off-by: Pawel Rucki <[email protected]>
Unit Tests Summary: 1 file, 34 suites, 4s ⏱️ Results for commit f6d93a2. ♻️ This comment has been updated with latest results. |
Unit Test Performance Difference
Additional test case details
Results for commit fcfa119 ♻️ This comment has been updated with latest results. |
The testing bot is reporting +110 tests, and all of them are skipped 🙃 This is because of … |
I think this is a valuable addition to ensure that the module example app can be executed based on the current state of the package. This will be very helpful during module development.
I also think this will complement the ongoing work on shinytest2, as outlined here. The latter has a broader scope, considering that examples in our modules tend to be simple and straightforward.
I like the tests. Since they don't have any interactions after the app is initialized, they run quickly. In the short term, adding them is a quick win that allows us to catch module-related errors. |
I realised that I made a mistake with … So I switched this arg to FALSE and introduced … When testing locally I got a meaningful warning: #1044. That's a sign of increased coverage. Other than that, I improved the way we detect errors: I added the snippet provided by @vedhav as well as validation-error checks. This is needed because teal actually continues on error inside its child module, while shinytest2 seems to capture only hard-crash events. |
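For reference, the detection boils down to querying the app's DOM after it settles — a simplified sketch mirroring the approach in the test file (`app_dir` is a placeholder for the example app):

```r
# shinytest2 only fails on a hard crash of the app process, so we also
# scrape the DOM for Shiny's error elements, where teal's module errors
# and validation errors are rendered.
app_driver <- shinytest2::AppDriver$new(app_dir)
on.exit(app_driver$stop(), add = TRUE)
app_driver$wait_for_idle()

err_el <- app_driver$get_html(".shiny-output-error")
if (!is.null(err_el)) {
  stop(sprintf("Module error is observed:\n%s", err_el))
}
```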
Ah! At least the super-fast test runs have an explanation now. |
I can't confirm @donyunardi's results either, but as @pawelru said, it might be something with the installed packages. I get 2 errors locally when running this with all packages installed (including …

Fails (confirmed by running the examples manually):

Warnings (can't confirm when running the example app manually):

|
I can confirm that I too get the two errors mentioned here. I had all the dev versions of the packages installed before I tested it. |
I installed everything, including tmc, and I got similar results to the rest of you when running R CMD check:

```
══ Results ═════════════════════════════════════════════════════════════════
Duration: 369.7 s

── Failed tests ─────────────────────────────────────────────────────────────
Failure (test-examples.R:107:7): example-tm_t_logistic.Rd
Expected `pkgload::run_example(...)` to run without any errors.
i Actually got a <expectation_failure> with text:
  `app_check_unique_names(self, private)` threw an unexpected warning.
  Message: ! Shiny inputs and outputs should have unique HTML id values.
  i The following HTML id values are not unique:
    * teal-main_ui-root-logistic_regression-module-interaction_var
  i Do you have a Shiny input and output value with the same name?
  Class: rlang_warning/warning/condition
  Backtrace:
      ▆
   1. ├─teal.modules.clinical (local) with_mocked_app_bindings(...) at test-examples.R:107:7
   2. │ ├─testthat::with_mocked_bindings(...) at test-examples.R:75:3
   3. │ └─testthat::with_mocked_bindings(...) at test-examples.R:75:3
   4. ├─teal.modules.clinical (local) suppress_warnings(...) at test-examples.R:75:3
   5. │ └─base::withCallingHandlers(...) at test-examples.R:10:3
   6. └─testthat::expect_no_error(...)

[ FAIL 1 | WARN 2 | SKIP 0 | PASS 428 ]
```

Here's another curve-ball: I got a different behavior when running … (if I run the statement in the VSCode R Console, I get results consistent with R CMD check):

```
Backtrace:
    ▆
 1. ├─teal.modules.clinical (local) with_mocked_app_bindings(...) at test-examples.R:107:7
 2. │ ├─testthat::with_mocked_bindings(...) at test-examples.R:75:3
 3. │ └─testthat::with_mocked_bindings(...) at test-examples.R:75:3
 4. ├─teal.modules.clinical (local) suppress_warnings(...) at test-examples.R:75:3
 5. │ └─base::withCallingHandlers(...) at test-examples.R:10:3
 6. └─testthat::expect_no_error(...)

[ FAIL 36 | WARN 0 | SKIP 0 | PASS 110 ]
══ Terminated early ═════════════════════════════════════════════════════════
```

@pawelru @vedhav @averissimo
This one I can reproduce. I'm investigating this now. |
I witnessed the problem with RStudio as well. @kartikeyakirar also confirmed:

```
#> Error in .Call("rs_shinyviewer", url, getwd(), "window", NULL, PACKAGE = "(embedding)") :
#>   "rs_shinyviewer" not available for .Call() for package "(embedding)"
```

I guess that it tries to detect the environment to launch a browser window, and fails when it doesn't find a binding to … |
It looks like this is an RStudio IDE issue. This piece of code does not belong to any package's source; it's set by RStudio as a … I tested in both IDEs and got the same results. |
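For context, a hedged illustration of the mechanism — the assumption (not confirmed in this thread) being that RStudio injects its viewer through the `shiny.launch.browser` option, whose closure ends up `.Call()`ing the IDE-internal `rs_shinyviewer`, which is absent in a non-RStudio child process. A test process could neutralize it:

```r
# Assumption: RStudio sets shiny.launch.browser to a closure that .Call()s
# the IDE-internal "rs_shinyviewer"; outside the IDE that binding is absent.
if (is.function(getOption("shiny.launch.browser"))) {
  options(shiny.launch.browser = FALSE) # never open a browser/viewer in tests
}
```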
I have pushed one more change: replace … |
It looks ready to be merged 🤩. Is there anything else you want to add or fix in this PR?
I left a minor comment that may allow us not to symlink the man folder.
Co-authored-by: André Veríssimo <[email protected]> Signed-off-by: Pawel Rucki <[email protected]>
No, from my side it's complete. Most probably we will need to implement an allow-list mechanism for some warnings and validation errors that are expected (something you noticed & proposed in tmb), but I'm happy to do this gradually while copy-pasting this functionality to other repos. I guess we can't foresee all the edge cases, and I have done my best to cover all the ones I detected here. |
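For the record, the allow-list mentioned above later took roughly this shape in the follow-up (quoted at the bottom of this thread) — a sketch of the filtering step:

```r
# Drop validation errors matching a per-example regex (set via an option)
# before treating whatever remains as a real module error.
allowed <- getOption("test_examples.discard_error_regex", "")
err_el <- Filter(
  function(x) identical(allowed, "") || !grepl(allowed, x),
  app_driver$get_html(".shiny-output-error")
)
if (!is.null(err_el) && length(err_el) > 0) {
  stop(sprintf("Module error is observed:\n%s", err_el))
}
```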
Actually, there is one thing I'm not super sure about: code comments. I tried to be very verbose about what I'm doing here to help you understand what's going on. If you think anything is too much, please raise it and I'm happy to remove it. |
Let's keep them, as this code is being propagated across packages. The comments give proper context on the (non-obvious) methods used to achieve example testing. The only one I find unnecessary is … |
Thanks @pawelru for investigating the findings that I raised.
Overall, I believe this PR is ready for merging.
I'm aware we had a small discussion regarding whether to use a symlink or not, and I've made some suggestions as well.
Once that's resolved, feel free to merge this.
Great job on this one 🥇
OK, I'm merging this guy as it might unblock some efforts elsewhere. If needed, feel free to modify it! |
# Pull Request

Part of #712

#### Changes description

- Adds tests that iterate over each documentation file and run the example apps by mocking the `interactive` and `shinyApp` functions.
- Checks that there are no errors nor validation errors (with exceptions).
- Implements insightsengineering/teal.modules.clinical#983 on this repository.

#### Changes from insightsengineering/teal.modules.clinical#983

- Adds:
  - Regex rules that define "accepted" validation errors.
- Fixes:
  - Reverts to using `library` instead of `pkgload::load_all` due to problems with a `system.file` call that cannot find package files.

```diff
diff -u teal.modules.clinical/tests/testthat/test-examples.R teal.modules.general/tests/testthat/test-examples.R
--- teal.modules.clinical/tests/testthat/test-examples.R	2024-04-12 10:32:33.100707738 +0200
+++ teal.modules.general/tests/testthat/test-examples.R	2024-04-12 10:26:27.645642183 +0200
@@ -38,12 +38,7 @@ with_mocked_app_bindings <- function(code) {
   # change to `print(shiny__shinyApp(...))` and remove allow warning once fixed
   mocked_shinyApp <- function(ui, server, ...) { # nolint object_name_linter.
     functionBody(server) <- bquote({
-      pkgload::load_all(
-        .(normalizePath(file.path(testthat::test_path(), "..", ".."))),
-        export_all = FALSE,
-        attach_testthat = FALSE,
-        warn_conflicts = FALSE
-      )
+      library(.(testthat::testing_package()), character.only = TRUE)
       .(functionBody(server))
     })
     print(do.call(shiny__shinyApp, append(x = list(ui = ui, server = server), list(...))))
@@ -56,16 +51,34 @@ with_mocked_app_bindings <- function(code) {
     app_driver <- shinytest2::AppDriver$new(
       x,
       shiny_args = args,
+      timeout = 20 * 1000,
+      load_timeout = 30 * 1000,
       check_names = FALSE, # explicit check below
       options = options() # rstudio/shinytest2#377
     )
     on.exit(app_driver$stop(), add = TRUE)
-    app_driver$wait_for_idle(timeout = 20000)
+    app_driver$wait_for_idle()

     # Simple testing
     ## warning in the app does not invoke a warning in the test
     ## rstudio/shinytest2#378
     app_logs <- subset(app_driver$get_logs(), location == "shiny")[["message"]]
+
+    # Check if the teal app has content (indicator of a Shiny App fatal error)
+    if (identical(trimws(app_driver$get_text("#teal-main_ui_container")), "")) {
+      tryCatch(
+        app_driver$wait_for_idle(duration = 2000), # wait 2 seconds for session to disconnect
+        error = function(err) {
+          stop(
+            sprintf(
+              "Teal Application is empty. An Error may have occured:\n%s",
+              paste0(subset(app_driver$get_logs(), location == "shiny")[["message"]], collapse = "\n")
+            )
+          )
+        }
+      )
+    }
+
     # allow `Warning in file(con, "r")` warning coming from pkgload::load_all()
     if (any(grepl("Warning in.*", app_logs) & !grepl("Warning in file\\(con, \"r\"\\)", app_logs))) {
       warning(
@@ -79,9 +92,17 @@ with_mocked_app_bindings <- function(code) {
     ## Throw an error instead of a warning (default `AppDriver$new(..., check_names = TRUE)` throws a warning)
     app_driver$expect_unique_names()

+    err_el <- Filter(
+      function(x) {
+        allowed_errors <- getOption("test_examples.discard_error_regex", "")
+        identical(allowed_errors, "") || !grepl(allowed_errors, x)
+      },
+      app_driver$get_html(".shiny-output-error")
+    )
+
     ## shinytest2 captures app crash but teal continues on error inside the module
     ## we need to use a different way to check if there are errors
-    if (!is.null(err_el <- app_driver$get_html(".shiny-output-error"))) {
+    if (!is.null(err_el) && length(err_el) > 0) {
       stop(sprintf("Module error is observed:\n%s", err_el))
     }

@@ -110,11 +131,14 @@ with_mocked_app_bindings <- function(code) {
 strict_exceptions <- c(
   # r-lib/gtable#94
-  "tm_g_barchart_simple.Rd",
-  "tm_g_ci.Rd",
-  "tm_g_ipp.Rd",
-  "tm_g_pp_adverse_events.Rd",
-  "tm_g_pp_vitals.Rd"
+  "tm_outliers.Rd",
+  "tm_g_response.Rd",
+  "tm_a_pca.Rd"
+)
+
+discard_validation_regex <- list(
+  "tm_file_viewer.Rd" = "Please select a file\\.",
+  "tm_g_distribution.Rd" = "Please select a test"
 )

 for (i in rd_files()) {
@@ -122,11 +146,18 @@ for (i in rd_files()) {
     paste0("example-", basename(i)),
     {
       testthat::skip_on_cran()
+      skip_if_too_deep(5)
       if (basename(i) %in% strict_exceptions) {
         op <- options()
         withr::local_options(opts_partial_match_old)
         withr::defer(options(op))
       }
+      # Allow for specific validation errors for individual examples
+      withr::local_options(
+        list(
+          "test_examples.discard_error_regex" = discard_validation_regex[[basename(i)]]
+        )
+      )
       with_mocked_app_bindings(
         # suppress warnings coming from saving qenv insightsengineering/teal.code#194
         suppress_warnings(
```

---------

Signed-off-by: André Veríssimo <[email protected]>
Co-authored-by: kartikeya kirar <[email protected]>
Co-authored-by: Vedha Viyash <[email protected]>
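One piece the diff does not show is the `rd_files()` helper the loop iterates over. A hypothetical sketch of its shape (the real helper may differ — the `man/` lookup is also why the symlink question came up in review):

```r
# Hypothetical: enumerate the package's Rd files so that each documented
# example becomes its own test case. Assumes man/ is reachable from the
# test directory, e.g. via the symlink discussed above.
rd_files <- function() {
  list.files(
    testthat::test_path("..", "..", "man"),
    pattern = "\\.[Rr]d$",
    full.names = TRUE
  )
}
```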
UPDATE: This is meant to be an example implementation to be copy&pasted into other repos.
This is what I came up with when testing examples against strict argument matching (a separate PR; let's not discuss it here), and then I realized this could be an alternative solution to #917. This is a draft and will remain a draft for the time being. The aim is to compare and discuss alternative approaches.
CC @vedhav
After additionally silencing the logger (a separate PR, so as not to mix it in here), this is the output:
Locally, it took ~5 seconds to get all the examples tested.
Created on 2024-01-26 with reprex v2.1.0