Fix task queueing. #524
Conversation
Part of the problem with missing tasks is indeed figuring out what task source to use. If there are no good reasons to reuse existing ones, the best is to use a dedicated one.
The tool that analyzed the spec and reported #523 is not smart enough to follow algorithms and/or understand which algorithms de facto run in parallel. If it were, it would also have reported the start a presentation connection algorithm. The algorithm also needs fixing.
Ideally, that algorithm would start with a step that says "Assert: this is running in parallel.", as done in a few algorithms, e.g., attempt to populate the history entry's document in HTML, so that the tool could understand that it actually runs in parallel.
Speaking of the analysis tool, I understand why it's convenient to proceed as you did, but a separate prose statement such as "all tasks use the task source 'foo' and the global object of the current realm" is the kind of thing that makes it hard to analyze the spec semi-automatically. I started a thread on spec-prod partly to discuss conventions that could strike a good balance between readability, ease of authoring, and ease of validation.
One possible alternative, to avoid inlining the selection of the global object and repeating the name of the task source as done in a few other specs while still making it possible to analyze the spec, would be to define a "Queue a Presentation API task" algorithm that initializes the variables and calls "Queue a global task", and to call that algorithm from the other algorithms instead. The analysis tool would miss that today, but could at least in theory process it fairly easily afterwards. Just a suggestion, it's ok to keep the definition in prose.
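A rough model of the suggested wrapper, sketched in Python rather than spec prose. All names here (`queue_global_task`, `PRESENTATION_TASK_SOURCE`, `queue_presentation_api_task`) are illustrative stand-ins, not definitions from the spec; the point is only that the wrapper fixes the task source once, so call sites stop repeating it:

```python
from collections import deque

# Stand-in for the event loop's task queues.
task_queue = deque()

# Hypothetical name for the dedicated task source discussed above.
PRESENTATION_TASK_SOURCE = "presentation-api"

def queue_global_task(source, global_obj, steps):
    # Models HTML's "queue a global task": associate the steps with a
    # task source and a global object, then enqueue them for the
    # event loop to run later.
    task_queue.append((source, global_obj, steps))

def queue_presentation_api_task(global_obj, steps):
    # The wrapper: callers no longer spell out the task source, which
    # also gives an analysis tool a single hook to recognize.
    queue_global_task(PRESENTATION_TASK_SOURCE, global_obj, steps)

# Parallel steps would call the wrapper; the event loop later pops
# the task and runs its steps on the associated global.
queue_presentation_api_task("window-global", lambda: None)
source, global_obj, steps = task_queue.popleft()
steps()
```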
I went ahead and made this change. I also added an "in parallel" assertion to steps that require the Presentation API task source but don't explicitly mention running "in parallel."
The asserts make me realize that the introduction of the algorithm to start a presentation from a default presentation request could also say that the user agent must run the algorithm "in parallel" (or another assert could be added).
Also, it's not clear to me that the algorithm that terminates a connection in a controlling browsing context actually runs in parallel, because it is called directly when the application calls terminate(). If it needs to run in parallel, the call site should probably indicate that the user agent must run the algorithm in parallel.
Thanks for your detailed review and clear comments as usual @tidoust 😺
Added an explicit step to run these steps in parallel.
I think the only step that needs to run "in parallel" is notifying the receiving browsing context of the termination. I updated the steps to clarify that.
See inline. One assert to remove, and one additional suggestion to clarify what realm to use in steps that loop through the set of controlled presentations.
SHA: 214830c Reason: push, by markafoltz Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Addresses Issue #523 by specifically queuing a global task when promises are resolved or events are fired from parallel steps.
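An illustrative sketch (not spec text) of the bug class behind #523: steps that run "in parallel" must not resolve promises or fire events directly; they queue a task back onto the event loop, which then does so. The names (`event_loop`, `in_parallel_steps`) are hypothetical:

```python
import queue
import threading

# Model of the event loop's task queue.
event_loop = queue.Queue()
result = {}

def in_parallel_steps():
    # Running off the event loop ("in parallel" in spec terms).
    outcome = "connected"
    # Wrong: resolve the promise from this thread directly.
    # Right: queue a global task that resolves it on the event loop.
    event_loop.put(lambda: result.update(promise=outcome))

t = threading.Thread(target=in_parallel_steps)
t.start()
t.join()

# The event loop later dequeues and runs the task, which is where
# the promise resolution (modeled here as a dict update) happens.
event_loop.get()()
```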