Returning a FetchPromise from fetch() #625
One weird thing is that this means that e.g. … |
From my thoughts on IRC the other day: I personally think Request is an awkward place to put stateful information and mutating methods like this. These only really apply if you've passed the Request to fetch(). It would really be nice to have some kind of object representing the in-progress fetch operation. It seems we don't have a good way to return this from fetch(). The registry approach seems like it might be ok. Or maybe we could have … Just not very promise-like, though. |
If the only difference this introduces is:

```js
var r = new Request('/');
fetch(r);
fetch(r);
```

…making the 2nd fetch fail, then this is fully compatible with how Chrome already behaves.
+1 - I've already seen people expect adding headers to …
What's the API proposal here? I thought it'd be a cancellable promise + cancellable stream? Neither of which seem to need changes to the current model (and would work with …). |
@wanderview the right API fit feels like extendable promises:

```js
var fetchPromise = fetch('/');

// as usual
fetchPromise.then(…);

// but also
fetchPromise.abort();
```

But I understand extendable promises haven't been worked out. |
I think we discussed on IRC that the extendable promise approach doesn't work as a place to put things like … Some other ideas:

Or perhaps some kind of transforming function that takes a fetch-returned Promise and gives you a Promise for the fetchControl.
|
@jakearchibald subclassing promises and having declarative syntax for promises are not really compatible. |
That is a different case from modifying a request that is already in the network layer. It's an interesting case as well, but very different. |
If … (we should probably also fix the definition of …). |
I tried to summarize our options here: https://gist.github.com/annevk/56e97115f2c9c2e90c23 |
I think subclassing promises should be done with care but isn't necessarily incompatible with async/await. That is, you can do:

```js
const inProgressFetch = fetch(req);

// Do other stuff, possibly async...
const otherInfo = await someOtherStuff();

if (otherInfo.shouldCancel) {
  inProgressFetch.cancel();
  return;
}

const response = await inProgressFetch;
```

That said, I think it would require some concerted all-hands-on-deck API design to come up with the correct semantics for cancelable promises in the timeframe we seem to be working on here.

I also wanted to put out there as an option the following syntax:

```js
const inProgressFetch = fetch.controllable(req);
```

Maybe in this example …

Finally, regarding the names "abort" vs. "cancel": in streams I came up with (fairly artificially) the idea that abort is "forceful" and will cause the resulting stream to error, as seen by any other consumers of it, whereas cancel signals "loss of interest" and will just act as if the stream is closed. In promises I think the semantics might be rejecting with a new … |
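(For illustration only: a minimal sketch of what a cancel-capable promise subclass along these lines might look like. The CancelError name, the onCancel hook, and the reject-on-cancel choice are all assumptions made for the sketch, not anything proposed in this thread.)

```js
// Hypothetical error type used to signal cancellation.
class CancelError extends Error {}

class CancelablePromise extends Promise {
  constructor(executor, onCancel) {
    let rejectFn;
    super((resolve, reject) => {
      rejectFn = reject;
      executor(resolve, reject);
    });
    this._reject = rejectFn;   // kept so cancel() can settle the promise
    this._onCancel = onCancel; // user-supplied action, e.g. terminate the network request
  }
  cancel() {
    // "Loss of interest": run the cancel action, then reject.
    // (Leaving the promise forever-pending is the other option discussed below.)
    if (this._onCancel) this._onCancel();
    this._reject(new CancelError('canceled'));
  }
}
```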
Yeah, @jakearchibald convinced me that subclassing is fine even with async/await, especially since you often want to store the promise in a variable anyway to do concurrent processing. Note that I think that if we expose something like … |
Although having methods on the promise that function even after the promise has settled feels a bit odd, I agree it's the practical thing to do here. |
Filed https://www.w3.org/Bugs/Public/show_bug.cgi?id=28057 for IDL support. |
@domenic I'm trying to figure out how this works, but I find the ES6 spec heavy going. Can you clear up a few things? Firstly, is this how it should work?

```js
var p = fetch(url);
p.abort();
// aborts fetch for url

var p = fetch(url).then(r => r.json());
p.abort();
// aborts fetch for url and rejects promise so .json is never called
// or if fetch already resolved:
//   terminate the stream and therefore reject .json()

var p = fetch(url1).then(_ => fetch(url2));
p.abort();
// aborts fetch for url1 and rejects promise so url2 is never fetched
// or if fetch(url1) already resolved:
//   terminate the stream for url1
//   abort the fetch for url2

var p = fetch(url1).then(_ => wait(3000)).then(_ => fetch(url2));
p.abort();
// aborts fetch for url1 and rejects
// or if fetch(url1) already resolved:
//   terminate the stream for url1
//   reject the 'wait' so url2 never fetches
// or if wait(3000) already resolved:
//   terminate the stream for url1
//   abort the fetch for url2

var p = Promise.resolve(fetch(url));
p.abort();
// undefined is not a function (fetchPromise has been cast to a normal promise)
```

If the above is right, we need a generic …

```js
var p = fetchPromise.then(_ => "Hello");
```

For the above to work, I imagine … I'm missing a simpler way. |
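(A rough sketch of one way the chaining could keep the abort capability, purely for illustration; the FetchPromise name, the _abort slot, and the placeholder helpers are assumptions, not a design anyone has agreed on here.)

```js
class FetchPromise extends Promise {
  then(onFulfilled, onRejected) {
    const chained = super.then(onFulfilled, onRejected);
    chained._abort = this._abort; // carry the abort capability down the chain
    return chained;
  }
  abort() {
    if (this._abort) this._abort();
  }
}

// Placeholders standing in for the real network machinery.
function realFetch(url) { return new Promise(() => {}); } // never settles, for the sketch
function terminateFetch(url) { console.log('aborting fetch for', url); }

// fetch() would then construct one of these and attach the real abort action.
function abortableFetch(url) {
  const fp = FetchPromise.resolve(realFetch(url));
  fp._abort = () => terminateFetch(url);
  return fp;
}
```

With something like this, `abortableFetch(url).then(r => r.json()).abort()` would still reach the original fetch, which is roughly the behaviour asked about above.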
From @tabatkins…

```js
var fetchPromise = fetch(url);
var jsonPromise = fetchPromise.then(r => r.clone().json());
var textPromise = fetchPromise.then(r => r.text());

textPromise.abort(); // should this cause jsonPromise to abort?
```
|
From a stream point of view, I think closing (or aborting) one stream should not close a peer stream. I think this should also be true for Response clones. So I would vote for not aborting the jsonPromise here. |
It obviously shouldn't. ^_^ This is an ambient authority bug, caused by the abort capability being carried along by default to all the chained promises. You really do have to be strict about whether an object is multicast or unicast; if you work with a unicast object in a multicast way, it needs to act multicast, and not expose dangerous capabilities. The proper way to handle this is:

```js
var fetchTask = fetch(url);
var jsonTask = fetchTask.pipe(r => r.clone().json());
var textTask = fetchTask.pipe(r => r.text());

textTask.abort(); // aborts the textTask operation, but leaves fetchTask alone
jsonTask.abort(); // now that all consumers are aborted, fetchTask can abort too
```

In other words, you need to ref-count your clients with abort capabilities if you want to be able to propagate aborts up the chain. You don't want to give that out by default; it'll just mean that aborts rarely propagate. You need to be able to both pass out the ability to observe the result (a normal Promise) and the higher-powered ability to manipulate the fetch, and these need to be separate operations. Alternately, you could just use standard … |
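(A rough sketch of that ref-counting behaviour, assuming a hypothetical Task wrapper; none of these names are proposed API, and cancelling a child's own in-flight work is omitted for brevity.)

```js
class Task {
  constructor(promise, onAbort) {
    this._promise = promise;
    this._onAbort = onAbort || null;
    this._parent = null;
    this._children = 0;
  }
  // Hands out an abort-capable child and counts it as an interested consumer.
  pipe(fn) {
    this._children++;
    const child = new Task(this._promise.then(fn), null);
    child._parent = this;
    return child;
  }
  // Plain observation: a normal Promise with no abort capability attached.
  then(onFulfilled, onRejected) {
    return this._promise.then(onFulfilled, onRejected);
  }
  abort() {
    if (this._onAbort) this._onAbort();
    const parent = this._parent;
    if (parent && --parent._children === 0) {
      parent.abort(); // last interested consumer gone: propagate the abort upward
    }
  }
}
```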
We were also contemplating leaving the promise forever-pending instead of rejecting it, if the fetch is in progress.
The latter is not entirely clear. We could design it either way—return values are ignored, or return value abortion is supported. An analogous question would be:

```js
var p = FetchPromise.resolve(otherFetchPromise).abort();
```

Does abortion of …
Well, only if …
This comes down to the same question as above.
Yes, definitely.
That's one approach. If we want that, someone's going to have to do the design for a generic AbortablePromise, which is going to be tricky. See e.g. last paragraph of https://esdiscuss.org/topic/aborting-an-async-function#content-9.
Sure, just define …
This is the biggest issue with cancellable promises, IMO. Our current take is to make .then() only work once. |
Uuuuuuggggghhhh, that's not a Promise anymore, then. Please don't screw with the interface like this; it'll break things in confusing ways, especially as more tooling shows up around Promises. If you want .then() to only be callable once, you don't have a Promise. You have a Task. If that's what we want, great, but then let's make it much clearer that this isn't a promise; call the chaining method something else, like pipe(). |
It's worth mentioning an alternate approach which I think avoids the ambient authority problems. @bterlson brought it up when I talked to him about this. It is the cancellation token idea. A cancellation token basically has three things:

```js
const token = new CancellationToken();
fetch("http://example.com", token);

// later
token.requestCancel();
```

There's a slight modification to this design that re-uses the promise mechanism. E.g. if CancellationToken is actually … |
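(A sketch of what such a token might provide; the exact shape here is an assumption made for illustration, not a specified API.)

```js
class CancellationToken {
  constructor() {
    this.cancelRequested = false; // flag consumers can poll
    this.whenCancelled = new Promise(resolve => { this._signal = resolve; });
  }
  requestCancel() {
    this.cancelRequested = true;
    this._signal(); // settle the promise so consumers can react
  }
}

// A consumer such as fetch() could then hook its internal teardown like:
//   token.whenCancelled.then(() => { /* terminate the network request */ });
```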
Then you can't use it with … |
You don't need to create an alternate universe. I outlined a good option earlier in IRC: Tasks can be Promises too, they just create a plain Promise when you call .then(). (I haven't read your previous post yet; feedback probably incoming.) |
So you still can't … |
Promise combinators aren't going to preserve aborting either; you can mix multiple types of Promises! A specialized CancelablePromise combinator can know how to clone/pipe, so no special work required there.
There's nothing parallel-universe about this, except that if you explicitly treat them as multi-cast, you either lose aborting, or you leak capabilities. This is inescapable, and you already ran into it with streams, so I'm unsure of why you're resisting here. |
My point is that people writing libraries don't want to think "oh, I'll need to write another library for people who are using tasks instead of promises." They'll just use … With the then-only-works-once approach, all those libraries can be used just fine, as long as they don't assume multicast (which very few do). |
That's how this works, yes. Either you leak ambient authority, or naive libraries will lose authority. Then-only-works-once just means that any library which chains off of a promise twice (which is totally fine for every other promise in the entire world) will break when you hand it one of these special not-really-a-promise objects. Or if you pass the promise to two libraries, each of which chains once. It means you can't, for example, use the promise in two combinators. Or do a "totally safe" Promise.resolve() on the value to assimilate arbitrary thenables for your library. Or a bunch of other things. Then-only-works-once means it's not a Promise any more, period, and it's not a good idea. :( There aren't that many shortcuts around GTOR. |
I guess all I can say is that I disagree. |
Edit: NM, that's chaining two levels, which isn't multicast. |
The other way to do it that doesn't break perfectly normal behavior is to have … Then you can cast to a plain Promise with … Really, the reason I strongly disagree here is that observing the value multiple times is not a problem. That's totally fine; the problem is that always returning a … |
Please consider this example:

```js
var a = fetch(...);
var b = fetch(...);
var c = fetch(...);

var req = Promise.all([a, b, c]);

req.then(function() {
  // handle load somehow
});

onEscPressed(function() {
  req.abort();
});
```

Here are 3 requests, for example, made on one user action. Let's say it's a dialog. As a developer, I sometimes want to control such requests as one. So for this case I would want to use …
|
Yes and yes.
That would be a design decision that we have to make when designing FetchPromise.all
It will be converted to a FetchPromise using FetchPromise.resolve, which presumably gives it a no-op cancellation action. |
Agreed on all counts. |
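(To make the FetchPromise.all idea above concrete, here is a minimal sketch reusing the hypothetical FetchPromise from the earlier sketch; the "abort everything" choice is just one plausible design decision, as discussed below.)

```js
// Combines promises like Promise.all, but keeps an abort capability that
// forwards to every input that is itself abortable.
FetchPromise.all = function (promises) {
  const combined = FetchPromise.resolve(Promise.all(promises));
  combined._abort = () => {
    for (const p of promises) {
      if (p && typeof p.abort === 'function') p.abort();
    }
  };
  return combined;
};
```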
I strongly believe that in that situation it should cancel all promises. At least because … |
I'm wondering about this: I think fetch() is a process that fetches a Request and creates a Response. So if we get new features in the fetching process (like HTTP/3, 4, 5…?), my opinion is simple: if we have a fetch instance instead of a function, and if that is a built-in class, developers can also extend it for their own classes.
Isn't that essentially what …? |
I like @tabatkins' ref-count idea, although I'd only count child abortable promises as refs. So:

```js
var rootFetchP = fetch(url).then(r => r.json());

var childFetchP1 = rootFetchP.then(data => fetch(data[0]));
var childFetchP2 = rootFetchP.then(data => fetch(data[1]));
var childP = Promise.resolve(rootFetchP).then(r => r.text());

childFetchP1.abort();
// …aborts fetch(data[0]), or waits until it hits that point in the chain, then aborts.
// fetch(url) continues

childFetchP2.abort();
// …aborts fetch(data[1]), or waits until it hits that point in the chain, then aborts.
// fetch(url) aborts also, if not already complete. Out of refs.
// childP rejects (or hangs) as a result

rootFetchP.then(data => console.log(data));
// …would reject/hang because the fetch has aborted (unless it completed before abortion)
```
|
Aborting is a common & desirable feature of asynchronous processing |
@Jxck if there's some feature of fetching that requires a different signature than …
Yeah, that's the entire point, actually; you can cast it to a standard Promise to strip it of its additional powers, so of course you don't need to ref-count it. |
I have a question:

```js
var p = waitForUserConfirm('press OK to continue').then(() => fetch(url));
```

Since a generic Promise's then() casts the result back to a normal Promise, p has no way to abort the inner fetch. For this, I like the token approach:

```js
var token = new CancellationToken();
var p = waitForUserConfirm('press OK to continue').then(() => fetch(url, token));

app.on('navigate', token.requestCancel); // possible to cancel fetch when navigating to another page
```
|
@otakustay Yeah, I'm not sure how to address that scenario either, with the subclass approach. I guess looking seriously into token approaches is the way to go; that separates the capability from the class, so you don't have to do gymnastics to keep the type correct and the cancellation chain going. We'll need to think about ergonomics, though; particularly, how to offer the same "chain rejections upward" scenarios we talked about previously. The rejection-chaining worked well when every chained … |
I am not sure if you discussed this before, but here is another thing which might work:

```js
var req = fetch('...', function(abort) {
  onESCPressed(abort);
});

req.then(function() { ... }).catch(function(err) {
  // err -> FetchError, type: 'cancelation'
});
```

Something like that. It looks like … |
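(A minimal sketch of how such a callback parameter could be wired up; startFetch here is a made-up stand-in for whatever lower-level primitive actually starts the request and can tear it down, so treat this purely as an illustration.)

```js
// Stand-in for the real network machinery: returns a promise plus an abort hook.
function startFetch(url) {
  let reject;
  const promise = new Promise((res, rej) => { reject = rej; });
  return { promise, abort: () => reject(new Error('aborted: ' + url)) };
}

function fetchWithAbortCallback(url, registerAbort) {
  const { promise, abort } = startFetch(url);
  if (typeof registerAbort === 'function') {
    registerAbort(abort); // hand the abort capability to the caller's callback
  }
  return promise;
}
```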
Ugh, I just realized that, pending some clever insight, the token approach doesn't work with combinators, unless you explicitly pass in the tokens as well, or have duplicate token combinators.

@NekR From what others have said, it appears preferable for a cancelled fetch (and cancelled things in general) to instead just be forever-pending; in real usage, rejecting just means that everybody puts a check at the top of their reject handler that does nothing if it was a cancel-rejection. |
Yes, I know that and I'm 100% with you. For example, jQuery has the same approach for their abort action and it's really annoying. But I am not sure forever-pending is the better approach. The ability to know (from another part of the code) whether a promise/request was canceled is a really useful thing.

```js
var req = new Promise(function(fullfill, reject, cancel) {
  // ...
});

req
  .then(function onFullfill() {}, function onReject() {}, function onCancel() {})
  .catch(function onReject() {})
  .cancelled(function onCancel() {});
```

This version is backwards-compatible, it has no problems with casting, and for old Promises (current, non-cancelable) cancelation will just be a forever-pending state (plus this version is polyfillable).

P.S. … |
This is just pulling the "cancellation" ability into the core Promise type …

@otakustay Reviewing your example from earlier, I see I missed a possibility. Here's the example again:

```js
var p = waitForUserConfirm('press OK to continue').then(() => fetch(url));
```

You correctly pointed out that this'll cast the FetchPromise down to a plain Promise. But you can instead write:

```js
var p = FetchPromise.resolve(waitForUserConfirm('press OK to continue')).then(() => fetch(url));
```

This upgrades your original promise into a FetchPromise, so the chain keeps its cancellation ability. So we can continue looking at type-based solutions rather than token-based ones, which makes me happy because they have much better ergonomics. (The one problem, looking forward, is that they don't compose; you can't really make a promise that is both a … |
@otakustay perhaps this is really out there, but in C#/Unity land when I was using their task/await system (not promise based), I was able to (IMO) elegantly handle situations like this by structuring the code as such (pseudo-code):

```js
while (true)
  page = await Q.race(navigate(), page.actions())
```

Apologies since the explanation is a bit cyclic (assume we have "cancelation" to explain how cancelation works). But basically, the idea is: given high-level primitives that know what to do with cancellation (race, which returns the value of the first promise to finish and cancels all the other running promises; all, which waits for all the promises to finish; etc.), you can describe these relationships in the actual structure of the code rather than passing variables around. So here, navigate would behave as such:

```js
function navigate()
{
  return new Promise(function(resolve, reject)
  {
    app.on("navigate", resolve);
  });
}
```

So in other words, navigate never finishes unless the user navigates. When it does navigate, it returns the respective "page" object for said page, which has its own little world of actions represented by the actions method. The actions method probably simply never returns on its own (it simply waits to get canceled by the navigation). So, no matter what the page is doing, a navigation action necessarily cancels everything, returns a new page, and kicks off the process with the next page's actions. With this kind of structure you kind of design your code flow like a flow diagram, with everything that's necessarily coupled at the top. |
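(For illustration, a sketch of the cancel-aware race() this pattern assumes, reusing the abort() convention from the earlier sketches; this is not an existing library function.)

```js
// Whichever input settles first wins; any other input exposing abort() is cancelled.
function race(tasks) {
  return new Promise((resolve, reject) => {
    for (const task of tasks) {
      Promise.resolve(task).then(value => {
        for (const other of tasks) {
          if (other !== task && typeof other.abort === 'function') {
            other.abort(); // the losers are no longer interesting
          }
        }
        resolve(value);
      }, reject);
    }
  });
}
```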
I'm afraid that in your code:

```js
var p = FetchPromise.resolve(waitForUserConfirm('press OK to continue')).then(() => fetch(url));
```

the … We don't need a … |
I'm imagining that the resolving behavior for FetchPromise … (Sorry, I thought this was obvious; otherwise chaining cancellable things doesn't work at all, in any circumstance.) |
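(A sketch of that assimilation behaviour, again using the hypothetical FetchPromise from the earlier sketches: resolving with another abortable promise forwards abort() to it, so aborting the outer promise still reaches the inner operation. Assumed semantics, not anything specified.)

```js
function resolveAbortable(abortable) {
  const fp = FetchPromise.resolve(abortable);
  if (fp !== abortable && abortable && typeof abortable.abort === 'function') {
    fp._abort = () => abortable.abort(); // adopt the inner cancel behaviour
  }
  return fp;
}
```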
Please note that this discussion is now also taking place in #592 (comment) |
Closing this in favor of whatwg/fetch#27 |
We want to expose these features over time and the best candidate is the Request object: … FetchEvent.

I think this requires that we change Request so that after passing it to fetch() it is no longer usable, and change fetch() to not copy the object, but instead mutate it. #624 is related to this.

Please let me know what you think about this soonish so I can start making the required changes.