
Fix task queueing. #524

Merged: 7 commits merged into main from fix-tasks on Sep 20, 2024

Conversation

@markafoltz (Contributor) commented Sep 15, 2024

Addresses Issue #523 by specifically queuing a global task when promises are resolved or events are fired from parallel steps.


Preview | Diff
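The fix's underlying pattern can be sketched as a toy model (not the spec's actual prose; all names here are illustrative): steps running "in parallel" must not resolve promises or fire events directly, but instead queue a global task that does the developer-visible work back on the event loop.

```javascript
// Toy model of the HTML event loop's "queue a global task" primitive.
// Real user agents key task queues by (task source, event loop); a
// single FIFO list is enough to show the ordering guarantee.
const taskQueue = [];

function queueGlobalTask(taskSource, globalObject, steps) {
  taskQueue.push({ taskSource, globalObject, steps });
}

function drainTasks() {
  while (taskQueue.length > 0) {
    taskQueue.shift().steps();
  }
}

// "In parallel" steps must not touch promises directly; they queue a
// task that performs the developer-visible work on the event loop.
function runParallelSteps(log) {
  log.push("parallel steps: work done");
  queueGlobalTask("presentation task source", globalThis, () => {
    log.push("queued task: resolve promise / fire event");
  });
  log.push("parallel steps: continue");
}
```

In this sketch, the queued step only runs once control returns to the toy event loop (via `drainTasks`), which is the ordering guarantee the PR is restoring.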

@markafoltz markafoltz requested a review from tidoust September 15, 2024 19:07
@markafoltz markafoltz self-assigned this Sep 15, 2024
@markafoltz markafoltz linked an issue Sep 15, 2024 that may be closed by this pull request
@tidoust (Member) left a comment

Part of the problem with missing tasks is indeed figuring out what task source to use. If there are no good reasons to reuse existing ones, the best is to use a dedicated one.

The tool that analyzed the spec and reported #523 is not smart enough to follow algorithms and/or understand which algorithms de facto run in parallel. If it were, it would also have flagged the start a presentation connection algorithm, which needs fixing as well.

Ideally, that algorithm would start with an "Assert: this is running in parallel." step, as done in a few algorithms, e.g., attempt to populate the history entry's document in HTML, so that the tool could understand that it actually runs in parallel.

Speaking of the analysis tool, I understand why it's convenient to proceed as you did, but a separate prose statement that "all tasks use the task source 'foo' and the global object of the current realm" is the kind of thing that makes it hard to analyze the spec semi-automatically. I started a thread on spec-prod partly to discuss conventions that could strike a good balance between readability, ease of authoring, and ease of validation.

One possible alternative, which would avoid inlining the selection of the global object and repeating the name of the task source (as done in a few other specs) while still keeping the spec analyzable, would be to define a "Queue a Presentation API task" algorithm that initializes the variables and calls "Queue a global task", and then call that algorithm from the other algorithms. The analysis tool will miss that today, but could at least in theory process it somewhat easily afterwards. Just a suggestion; it's OK to keep the definition in prose.
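The suggested wrapper could look roughly like this in implementation terms (a hedged sketch assuming a "queue a global task" primitive; the names, including the `relevantGlobal` property, are illustrative, not the spec's actual text):

```javascript
// Sketch of the proposed "Queue a Presentation API task" algorithm: fix
// the task source once and derive the global object, so each calling
// algorithm only supplies the steps to run.
const PRESENTATION_TASK_SOURCE = "presentation task source";
const tasks = [];

// Stand-in for the HTML spec's "queue a global task".
function queueGlobalTask(taskSource, globalObject, steps) {
  tasks.push({ taskSource, globalObject, steps });
}

// The wrapper: callers pass the object whose relevant global should be
// used (modeled here as a plain property) plus the steps.
function queuePresentationApiTask(relevantObject, steps) {
  queueGlobalTask(PRESENTATION_TASK_SOURCE, relevantObject.relevantGlobal, steps);
}
```

The point of the wrapper is that every call site names the same task source and global-object selection by construction, which is exactly what makes it tractable for a semi-automatic analysis tool.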

(2 review comments on index.html, outdated and resolved)
@markafoltz (Contributor, Author) commented

define a "Queue a Presentation API task" algorithm that initializes the variables and calls "Queue a global task"

I went ahead and made this change. I also added an "in parallel" assertion for steps that require the Presentation API task source but did not explicitly mention running "in parallel."

@tidoust (Member) left a comment

The asserts make me realize that the introduction of the algorithm to start a presentation from a default presentation request could also say that the user agent must run the algorithm "in parallel" (or another assert could be added).

Also, it's not clear to me that the algorithm that terminates a connection in a controlling browsing context actually runs in parallel because it is called directly when the application calls terminate(). If it needs to run in parallel, the call to it should probably indicate that the user agent must run the algorithm in parallel.

(4 review comments on index.html, outdated and resolved)
@markafoltz (Contributor, Author) commented

Thanks for your detailed review and clear comments as usual @tidoust 😺

the algorithm to start a presentation from a default presentation request could say that the user agent must run the algorithm "in parallel" too

Added an explicit step to run these steps in parallel.

Also, it's not clear to me that the algorithm that terminates a connection in a controlling browsing context actually runs in parallel because it is called directly when the application calls terminate(). If it needs to run in parallel, the call to it should probably indicate that the user agent must run the algorithm in parallel.

I think the only step that needs to run "in parallel" is notifying the receiving browsing context of the termination. I updated the steps to clarify that.
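The split described above can be sketched as a toy model (hedged and illustrative, not the spec's text): the termination algorithm runs synchronously on the event loop because `terminate()` calls it directly, and only the step notifying the receiving browsing context is handed off to run "in parallel" (modeled here as a deferred-work list).

```javascript
// Toy model of the terminate() split: the algorithm itself runs
// synchronously on the event loop, and only the step that notifies the
// receiving browsing context runs "in parallel".
const parallelWork = []; // stand-in for work running off the event loop

function inParallel(steps) {
  parallelWork.push(steps);
}

// Illustrative version of "terminate a presentation connection in a
// controlling browsing context".
function terminateConnection(connection, log) {
  if (connection.state !== "connected") return;
  connection.state = "terminated";          // synchronous, on the event loop
  log.push("state set to terminated");
  inParallel(() => {                        // only this step is in parallel
    log.push("receiving context notified");
  });
  log.push("terminate() returns");
}
```

In this model the state change is observable to script before the cross-context notification happens, matching the clarification that only the notification step needs to run in parallel.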

@tidoust (Member) left a comment

See inline. One assert to remove, and one additional suggestion to clarify what realm to use in steps that loop through the set of controlled presentations.

(3 review comments on index.html, outdated and resolved)
@markafoltz markafoltz merged commit 214830c into main Sep 20, 2024
2 checks passed
@markafoltz markafoltz deleted the fix-tasks branch September 20, 2024 18:52
github-actions bot added a commit that referenced this pull request Sep 20, 2024
SHA: 214830c
Reason: push, by markafoltz

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Successfully merging this pull request may close these issues.

Missing tasks in parallel steps in Presentation API