From 9ac0235b22936fc00e5334029c5568a8551d9a24 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?=
Date: Mon, 23 Dec 2024 01:02:26 -0800
Subject: [PATCH 1/3] December notes

---
 meetings/2024-12/december-02.md | 1007 +++++++++++++++++++++++++++++++
 meetings/2024-12/december-03.md |  850 ++++++++++++++++++++++++++
 meetings/2024-12/december-04.md |  355 +++++++++++
 meetings/2024-12/december-05.md |  453 ++++++++++++++
 4 files changed, 2665 insertions(+)
 create mode 100644 meetings/2024-12/december-02.md
 create mode 100644 meetings/2024-12/december-03.md
 create mode 100644 meetings/2024-12/december-04.md
 create mode 100644 meetings/2024-12/december-05.md

diff --git a/meetings/2024-12/december-02.md b/meetings/2024-12/december-02.md
new file mode 100644
index 0000000..f9fbb2b
--- /dev/null
+++ b/meetings/2024-12/december-02.md
@@ -0,0 +1,1007 @@

# 105th TC39 Meeting | 2nd December 2024

-----

**Attendees:**

| Name             | Abbreviation | Organization       |
|------------------|--------------|--------------------|
| Waldemar Horwat  | WH           | Invited Expert     |
| Daniel Ehrenberg | DE           | Bloomberg          |
| Istvan Sebestyen | IS           | Ecma               |
| Jordan Harband   | JHD          | HeroDevs           |
| Dmitry Makhnev   | DJM          | JetBrains          |
| Chris de Almeida | CDA          | IBM                |
| Sergey Rubanov   | SRV          | Invited Expert     |
| Michael Saboff   | MLS          | Apple              |
| Jesse Alama      | JMN          | Igalia             |
| Andreu Botella   | ABO          | Igalia             |
| Jirka Marsik     | JMK          | Oracle             |
| Rob Palmer       | RPR          | Bloomberg          |
| Eemeli Aro       | EAO          | Mozilla            |
| Josh Goldberg    | JKG          | Invited Expert     |
| Aki Rose Braun   | AKI          | Ecma International |
| Ron Buckton      | RBN          | Microsoft          |
| Luca Forstner    | LFR          | Sentry             |
| Mikhail Barash   | MBH          | Univ. Bergen       |
| Ujjwal Sharma    | USA          | Igalia             |
| J. S. Choi       | JSC          | Invited Expert     |
| Linus Groh       | LGH          | Bloomberg          |
| Keith Miller     | KM           | Apple              |
| Richard Gibson   | RGN          | Agoric             |
| James M Snell    | JSL          | Cloudflare         |
| Samina Husain    | SHN          | Ecma International |
| Devin Rousso     | DRO          | Invited Expert     |
| Nicolò Ribaudo   | NRO          | Igalia             |
| Jan Olaf Martin  | JOM          | Google             |
| Daniel Minor     | DLM          | Mozilla            |
| Philip Chimento  | PFC          | Igalia             |

## Opening & Welcome

Presenter: Rob Palmer (RPR)

RPR: Welcome everyone to the 105th TC39 meeting. It’s labelled 106th in the meeting notes—that’s my fault for changing the name. I can see the transcription is beginning, which is excellent. Before we start, could I get a couple of volunteers to assist with the note taking, to polish up the notes as we go? I’ll get started with the slides, then. Here we go. So welcome everyone. We are here with our remote meeting today. And so let’s begin. Are these slides working? There we go. So you know who we all are; I’m Rob, one of the three chairs that we have here today. We also have Ujjwal and Chris in the meeting, assisted by the three facilitators. I’m not sure if any are here at the moment, but we have Brian, Justin, and Yulia who help us out with running the meetings. So if you have any requests or any curiosity, please do reach out to us at any time. We try to keep the meeting on time and give everyone a chance to speak using our TCQ tool, which I’ll get to. Before we begin: hopefully the way that you all got here today was through the meeting entry form—the Reflector links to this. If you found your way here through any other means, for example someone sharing the URL directly, please do return to the Reflector and make sure you sign in via the form. This is an Ecma requirement, that we take attendance.

We have a code of conduct.
This can be found on the main TC39.es site. Please do give it a read, and do your best to stick to the spirit of the document, with a best-faith interpretation; if you have any concerns or any issues that come up, you can always reach out to us chairs directly—we’re available on Matrix—or, if you need to, you can reach out to the code of conduct committee, and these reports can be kept confidential. We are having a remote meeting this week, which means we have four days, and that’s broken up into a morning or a.m. session and a p.m. session. Of course, that depends on your TimeZone.

We’re on mountain time this week, so that is UTC-7. For communicating during the meeting we are using our regular tools, primarily TCQ. TCQ—I think we were just getting that linked from the Reflector. Do you know, Chris, is this now available on the Reflector?

CDA: Yes, it’s available on the Reflector and I also posted it in the meeting chat. Still being populated, but it’s up.

RPR: Awesome. And so we use this tool to manage both our agenda and discussions. You can see what’s coming up. Let’s go through some of the controls. If you switch to the view where you see the current item, we have the name of the current item; within that, there will be a topic, when someone has proposed a topic to discuss; and within that will be the current person speaking. When you’re using this tool and you’re actually speaking, you will see an extra button called “I’m done speaking”. When you have finished saying your piece and wish to move on with the conversation, please do click this button, or otherwise the chairs will click it when they see it appropriate. And for the actual buttons you see there, please prefer to use the buttons on the left, the blue ones: “new topic” and “discuss current topic”. Those are preferred. The ones on the right will generally interrupt the conversation, or will be increasingly urgent. You’re allowed to ask clarifying questions at any point; if you really need to stop the discussion urgently, choose “point of order”, for things such as “I can’t hear anything”, “you’re muted”, that kind of thing. For synchronous realtime chat we use Matrix, our better version of IRC—it’s a little bit like Slack and Discord. So hopefully you’re all signed up there. Primarily we use the TC39 Delegates room for talking about work and everything that is on topic. If you have things that are off topic, then please keep them in the Temporal Dead Zone; that is the place for any conversations about Pokémon or joking or puns or that kind of thing. We have an IPR policy. To make sure that everything is clean and so on, everyone here is expected to fit into a particular category. For most people, the standard classification is as a delegate of an Ecma member: everyone here in that status is covered, because their company has already signed the agreements when they joined. Otherwise, we have the concept of invited experts, which is a formal process by which people can be invited to join, and as part of that you will equally have signed the forms. If you are not in either of those categories, then we expect that you are perhaps an observer, normally notified in advance on the TC39 Reflector; you are welcome to observe, but please do not talk if you haven’t yet signed the agreements—that’s the principle of being signed up. We also have transcription running.
So I will just read this out so that everyone is fully aware: a detailed transcript of the meeting is being prepared and will eventually be posted on GitHub. You may edit it at any time during the meeting in Google Docs for accuracy, including deleting comments which you do not wish to appear. You may also request corrections or deletions after the fact, by editing the Google Doc in the first two weeks after the TC39 meeting, or subsequently by making a GitHub PR or contacting the chairs. The next meeting after this one, the 106th, will be in February next year. Some of us will be going to Seattle, kindly hosted by F5. We were there roughly two years ago or so, so some of you may remember it. I don’t know—Michael, is it in the same place as last time?

MF: Yes, it is.

RPR: Okay. So having attended it previously, it was an awesome place to visit. So please do join us for that. The survey for that, the interest survey, is currently open. We have already seen lots of interest, so you can see who else is planning to go. Let’s return to the opportunity to volunteer as a note taker. We will make this request at the start of each session, hoping for volunteers.

RPR: First of all, hopefully everyone has reviewed the previous minutes. Are there any objections to approving the previous minutes? Silence means no objections. They are approved. Next we have our current agenda. Are there any objections against proceeding with the current agenda? None? Okay. We have adopted the agenda. So first up, we have SHN with the secretary’s report.

## Secretary’s Report

Presenter: Samina Husain (SHN)

- (no proposal)
- (no link to slides)

SHN: I also want to thank everybody for all the efforts. It’s been a very busy year. In June, you had your new edition, and there’s been lots of work going on, so all those efforts are very much appreciated. I also want to recognize and thank AKI, who supports me and you in the secretariat, for all the work she has done in the months past. I just want to make those small recognitions.

SHN: Some of the topics I would like to cover today: I would like to go through some of the new projects we are working on; some conversations I’m having at W3C; the source map standard, whose opt-out period I think closes very soon—a lot of work done there; confirmation of the chairs and editors; a comment on IETF; and a short comment on the invited experts. And then, as per usual, there’s the general overview of the invited experts rules and the code of conduct, which I always like to repeat—I think it was mentioned by RPR—and then some documents and some dates.

SHN: So first, for recognitions, I want to bring this up: CDA has been recognized as a Pathfinder for security. I want to congratulate you on this nomination and on winning this prestigious recognition. That’s wonderful, and it’s great because you’re also so involved with Ecma. I am very much pleased to announce that to everybody who didn’t know. And secondly, I want to thank all of you for giving me the opportunity to be recognized, because I understand that much of my recognition is a result of a lot of the work that you all do in TC39. So you play a big role in this nomination; the energy and professionalism actually come from all of you. So thank you for giving me this honour.

SHN: So moving on to a little bit of the new activities. TC55 has been a conversation that has been going on for some months, as many of you are aware.
We had lots of to and fro regarding the scope and whether the work would continue in W3C or move into Ecma. It is moving forward, slowly but surely, to move the entire WinterCG work into Ecma. The committee that will be formed will be TC55. The scope generated a lot of conversation; over the last weeks we had a number of meetings to really fine-tune it and address a lot of the comments that came from the ExeCom and members of other TCs. Thank you for all the work—LCA, OMT, AKI, and others on the call; forgive me if I’ve forgotten or didn’t mention your name. We did a lot of work, and the scope looks quite fine. It will be proposed and discussed at the GA coming up in ten days.

SHN: TC56 is another new proposal. It is the first one covering artificial intelligence. It has been proposed by IBM, with other members involved—Purdue University and Microsoft, to name just the first three—and others that will be interested. I wanted to bring it to your attention because organizations that are involved here may find it of interest and seek to participate. This will be discussed at the GA coming up; we had the initial proposal already at the last ExeCom. It’s good to see new work coming into Ecma.

SHN: I have also mentioned this next one before—I don’t have a TC number for it; it hasn’t yet been officially formalized. The high-level shading language, HLSL, is proposed by Microsoft, and there is interest from other members. Microsoft just needs a little bit more time to work this through with their management, so they will be proposing this, probably in the new year, at the ExeCom, and if we haven’t had any others it will be TC57. Those of you within TC39 may find it of interest within your organizations—just to keep you aware.

SHN: I spoke about TC55, and the work we’re doing to move WinterCG into Ecma has also generated a lot of conversation with W3C at a broader scope. At the last TPAC meeting AKI had an opportunity to attend and meet a lot of people; I believe she gave an update at the last plenary in Tokyo—I wasn’t on the call. I wanted to bring this topic up again. It came up in conversation recently with the W3C folks, and I would like to know if Ecma TC39 would like to participate in the horizontal review that takes place in W3C. I’m going to pause there and ask AKI to add a bit more detail to the conversation.

AKI: I mentioned the horizontal review kind of briefly last plenary. We were on a tight schedule so I tried to breeze through as quickly as possible. The way it works is: W3C has impressive tooling around GitHub where they track cross-cutting concerns within W3C. The tooling will open issues on both repos for follow-up. So say the i18n working group has something come up that will be relevant to the privacy interest group; the tooling will open an issue within the appropriate repo for the privacy interest group as well as the one for the i18n working group, requesting a review that can then be followed up on, tagged, discussed, and—upon satisfactory conclusion of the conversation—closed.

AKI: There is nothing involving formal obligation in terms of horizontal reviews. They are not “we reviewed the thing and therefore you must change it”. It’s an informative move, making sure groups know what each other are up to and making sure that nothing is in conflict. I think it sounds like a great idea on its face.
It is certainly something I would like us to ease our way into if we wanted to pursue it—I don’t think we need to immediately be hooked into a hundred percent of the automation and tooling. I do think that being able to both request reviews and have reviews requested of us would be a good way to solidify a relationship with W3C, to make sure anything that we’re doing is beneficial for the web, and to make sure that nobody is building something that conflicts with what we are up to.

SHN: Okay, thank you AKI. I can certainly field some questions on that; I just have a couple of slides left and then we can go through them. Okay, a few other items. We have our GA coming up on December 11th and 12th. The opt-out period for the TC39 TG4 source map first edition will end the day before. First, congratulations to the team, to the subcommittee working on source maps—great work. I have received the final standard, the first edition. It is uploaded to the GA folder so the GA members can read it. It is also uploaded to the TC39 folder; I believe you have seen the final PDF that was created. There are two minor editorial fixes—two letters that need to be lowercase—very minor, before it is published. My expectation is that the GA will review and approve it. I hope the members at the GA have had enough time; they had the first draft uploaded some time ago, and the final edition to be approved has now been uploaded for them.

SHN: I had some questions regarding the TC39 and IETF liaison. My question to the committee is: are you aware of your status with IETF? And if so, is there a TC39 representative who is a liaison to IETF? It would be good for us to have a short exchange of information, maybe give them a short report of what is going on, just to keep this relationship between IETF and TC39 active. I will pause for some comments on that after I finish my last couple of slides, if that’s okay.

SHN: For invited experts: at the end of every year, around now, I review the invited experts list we have. This is just to confirm that everyone is still active, interested, and relevant to the work going forward. I do like to touch base a little with each of the invited experts to see if they’re still interested, or if there’s a potential membership opportunity for their organization, so some of you may see an email from me regarding that. Otherwise, with the TC39 chairs, I just would like a short confirmation that our current list of invited experts is still relevant and valid for the work going on in TC39.

SHN: I also want to thank everybody for their nominations. There have been a number of nominations, and many of you are in TC39—that’s excellent. It’s great to see the activity. There are only four ExeCom nomination seats, and we had a lot of nominations—seven in total—so we will have a vote. I do understand that we may want to consider doing that differently in the future, but for the upcoming GA we will have a vote. Your activity and your interest are very, very much appreciated as we move on to building our ExeCom.

SHN: Something that we also typically do at the end of the year, or the start of the year—I think I did it at the start of this year—is to confirm that the chairs and the editors that I listed here are the individuals that will continue on in 2025. I will list them also in the Ecma documents and on the Ecma website. If I have made an error or I need a correction, please advise me.
This is the list that I have, based on what we did in 2024.

SHN: In the annex—I will run through it quickly and then stop for questions—are the usual invited experts rules and conditions, and our code of conduct rules and regulations. I want to thank everybody for continuing to provide the summaries and conclusions; that helps hugely with the minutes, and I appreciate that you take the time to do it. The document lists that we have are there for your reference; you may access them through your chairs. I listed there the titles of the TC39 documents that have been published since your last plenary meeting, and also the GA documents that have been listed since the last meeting. So have a look through that; anything specific you would like, you can access through your chairs. I see that the dates are set for the TC39 meetings next year—I hope I got that right—so that’s great. Thank you so much to the hosts that are going to be hosting the three in-person meetings next year; F5 and the others, I look forward to being able to attend all of them.

CDA: Sorry to interrupt. If you go back to the slide, to the dates: I think, if I’m not mistaken, the Igalia dates are incorrect by a month. I think you had June on there. It’s in May.

SHN: Yes. I will correct that. Apologies. It should be May. I knew that; I just didn’t know how to count this morning.

SHN: Then of course there are the dates that are currently set for our General Assembly and ExeCom. Keeping in mind the election and potential new members on the management, these dates could adjust a little based on everybody’s availability; this is what is tentatively set for now, and those are the venues. I think that is my very last slide. Thank you very much. I’m going to stop sharing and open for any questions.

DE: Minor clarification. For the ExeCom, there are the three officers—the president, vice president, and treasurer—as well as eight ordinary member slots, for which there are only three candidates. So all three of those, from IBM, Apple, and Google, will be there. I’m very happy about Apple and Google joining this; I think this will be really great for Ecma management. For non-ordinary members there are four slots—we recently expanded this from two—and there are seven candidates. I wanted to apologize because, you know, I pitched this to a number of people, and I’m really happy that people have signed up as candidates. Historically this wasn’t competitive for a long time, and now it is, I think. I would like to consider, in the middle of next year at the following GA, allowing for additional slots for non-ordinary members when all of the ordinary member slots are taken in the ExeCom—something we can discuss in the future. Apologies for this being, unexpectedly, a vote. Also, Bloomberg is hosting the Ecma GA in just one week, so if you’re planning on attending, please fill out the Doodle for that. This is a hybrid meeting. It’s open to all Ecma members, not only ordinary members, and I encourage you to attend remotely if you would like to, if you’re the designated representative from your member organization. Please get in touch with Samina or me if you’re interested in attending. Thank you.

SHN: Thanks Dan. All of you who have nominated—I sent you the link, and all of you should have received the invitation. And thank you for the update. We will discuss how to better enable engagement from others in the event that the seats are not filled by the ordinary members. Are there any other questions?
CDA: There’s nothing in the queue at the moment.

SHN: Great. Thank you very much. I will update the slide with the correct dates and give it back to you, Rob. Thank you.

## ECMA262 Status Updates

Presenter: Michael Ficarra (MF)

* [slides](https://docs.google.com/presentation/d/1IS6hsFker8TM_mPtK1VQbFCH2TK3LljOxFu6-zMCjkM/edit)

MF: Pretty quick update on 262 editorial stuff. Normative changes: the first one here is a needs-consensus PR that we agreed to at the last meeting. We merged this change to `toSorted` to make it stable, as is already required for `Array.prototype.sort`. This was an oversight in the integration of `toSorted`, as things changed with the Array sort stability specification at the same time. And the rest are Stage 4 proposal integrations: `Promise.try`, iterator helpers, and duplicate named capture groups. There were plenty of editorial changes, but none that need to be called out to plenary. And the list of upcoming and planned editorial work is the same. We should probably review it sometime soon just to make sure this is still our plan going forward, but for now nothing has changed there. And that’s it.

## Test262 Status Updates

Presenter: Philip Chimento (PFC)

- (no slides presented)

PFC: Test262 has landed support for a bunch of new proposals—such as deferred imports, `Promise.try`, and the iterator one whose name escapes me at the moment—thanks to many of the champions. This is what we like to see: champions participating in the writing of tests and in reviewing tests that other people have written. It’s really helpful. We are continuing to look for sustainable funding for the maintenance of Test262. We’ll let you know when we have any updates on that, but if you have tips, please let us know. We’re very interested in avenues for keeping the current level of involvement where it is.

RPR: Just checking, are you meant to be showing any slides?

PFC: No, no slides. I believe that’s it.

RPR: All right. Any questions for PFC? No, okay. Thank you PFC. We are making excellent progress through the agenda. Things are moving quicker than normal, which is a hint to the fellow chairs to bring things forward.

## TG3 (Security) Updates

Presenter: Jordan Harband (JHD)

- (no slides)

JHD: We continue to discuss the security aspects of multiple proposals at various stages. We don’t have anything concrete to talk about this plenary, but we will continue to review and hopefully surface useful feedback.

## TG4 (Source Maps) Updates

Presenter: Nicolò Ribaudo (NRO)

- (no proposal)
- [slides](https://docs.google.com/presentation/d/1uzimn85ojU0TOdiFB1s5VZG_aT7xw8uf646hgnDNQ3w/edit#slide=id.g31b69470253_0_0)

NRO: The very good update is we now have a spec number. Source maps will be ECMA-424, if I get it right, and as mentioned –

AKI: 426.

NRO: It’s 426. Sorry about this. As SHN mentioned before, you can find the latest PDF draft in the Ecma drive at this path here. We’ve already got a lot of feedback and suggested changes, so thanks to everybody who tried to make the spec better. Special thanks to AKI, who put in a lot of work to properly generate the PDF and include all of the feedback.

NRO: The next few steps: as SHN mentioned before, next week during the Ecma GA there will be a vote, and hopefully we will finally get approval for our new standard. There are a few steps on our side that we still need to do.
One is that we now need to rename the URL to match the new spec number—that is also wrong in this slide—and we still need to port a few changes from the PDF back to the web snapshot. But the PDF is the reference yearly snapshot.

NRO: There have been a few spec changes. The only relevant one is the warning, which we discussed at the last plenary, about the different ways of finding the source map comment potentially giving different results. This is included in the final version that we will publish. We are working towards a solution; we don’t have the exact solution yet.

NRO: There has been some progress with proposals. Scopes, our most active proposal, is at Stage 3. We have multiple ongoing experimental implementations, and thanks to those implementations we’re now testing how best to encode this data to minimize the source map size. And there has also been some progress on another of our proposals, which was promoted to Stage 2: Debug IDs. Debug IDs allow giving an identifier to a file, because in many cases a URL is not enough—redeploy the application and the file might change—so this proposal gives every file some identifier. It’s at Stage 2 right now. Before advancing, we likely need to discuss it with somebody that can specify normative APIs. It could be WHATWG; it might be ourselves, building on top of the respective proposal. But we will have this discussion when the time comes.

## TG5 (Experiments in Programming Language Standardization) Updates

Presenter: Mikhail Barash (MBH)

- (no proposal)
- [slides](https://docs.google.com/presentation/d/1DJUuR4Bnoe3VgV-rc2jWIXqvyMv3krwqi9J9yZbxwDw/edit?usp=sharing)

MBH: A short update on TG5. We continue to have regular monthly meetings. One of the recent topics was a tool for previewing how syntactic or API proposals manifest in existing code bases. Essentially it was a project to implement structural search and replace, which enables previewing some of the syntactic proposals and most of the API proposals in existing code. There is also a study being conducted at the University of California, San Diego on MessageFormat. Alongside in-person TC39 meetings we are arranging TG5 workshops, where we have a small workshop with a local university group that works on programming languages. Currently we are planning a TG5 workshop in Seattle on Friday the 21st of February. This is not yet confirmed; it is in discussion with the research group on programming languages and software engineering at the University of Washington. I will come back with more updates on the Reflector when we have a confirmation.

MBH: I would also like to mention that we have a list of open issues for TG5, and you are welcome to say what you would like TG5 to conduct. That’s it. I’m ready for the queue.

RPR: I will say the most recent TG5 workshop in Tokyo was a lot of fun, and very high quality. So I am looking forward to the next one in Seattle.

## Updates from the CoC Committee

Presenter: Chris de Almeida (CDA)

- (no proposal)
- (no slides presented)

RPR: CDA, do we have things to say from the code of conduct committee?

CDA: A little bit. I mean, it’s been pretty quiet for the most part. I think the only thing we got was sort of a report—not really a report, more of an email, and not from anybody within the committee itself. A couple of folks from outside the committee got into a little bit of a tiff and discussion in one of the GitHub repos.
It really wasn’t severe—we have seen much worse in the GitHub repos—but it apparently struck a chord with this individual. It fizzled out, though. We reminded folks to be mindful of the code of conduct and haven’t heard anything since. That’s really the only thing. Other than that, as always, there is a standing invitation to the code of conduct committee; if you are interested, reach out to us.

## Call for reviewers - ESM Phase Imports

Presenter: Guy Bedford (GB)

- [proposal](https://github.com/tc39/proposal-esm-phase-imports)
- [slides]()

GB: This is just a quick update on the ESM phase import proposal. One of the things when we got Stage 2 earlier this year was that we did not identify our Stage 2.7 reviewers at that time. So this is just a call-out for the fact that we are seeking to confirm those reviewers. I’ve reached out to everyone who I think should have been interested in being a reviewer and have put down those who have confirmed interest. But this is a formal shout-out in case we have missed anyone. We are seeking Stage 2.7 in two days’ time, so ideally you would be able to review by then, but of course anyone who would like to review can do so. And, yeah, now is the time to speak if anyone else would be interested.

RPR: Or if there are any concerns that this is insufficient review, please do say.

KKL: I volunteer as tribute.

GB: And would you be able to complete your review in time for the Stage 2.7 request on Wednesday, or are you requesting that we delay our 2.7 request to the next meeting in February?

KKL: I will do what I can.

GB: Okay. Thank you. I will add you to the list of reviewers. Much appreciated.

RPR: I think we can conclude we have agreed the reviewers for ESM phase imports.

## Process document fixes and corrections

Presenter: Chris de Almeida (CDA)

- (no proposal)
- presenting [tc39/process-document#46](https://github.com/tc39/process-document/pull/46)

CDA: My intention wasn’t to painstakingly go through all of the changes, but just to talk briefly about them and to ask consensus on making them, with the caveat of providing, I don’t know, an additional week or something for folks to review offline. Basically there are two PRs. One is a correction. First of all, to be really clear: there is no process change being proposed here. These are just fixes and clarifications, in two PRs. The first one: there were things we forgot to update when we introduced the new Stage 2.7. So this doesn’t change reality; it just makes the process document actually reflect the reality that we already have. That substantive change is isolated in this PR, which I think already has a couple of approvals. So this is just clarifying the text about reviewing for Stage 2 and Stage 2.7.

- presenting [tc39/process-document#48](https://github.com/tc39/process-document/pull/48)

CDA: The second PR: as I was making the change in the first one, I was going through the rest of the document and felt like it could use a little bit of cleanup as well. So there’s a second PR with a little more content change. Again, no substance has been changed. There are grammatical corrections, fixes for awkward phrasing in places, consistency of capitalization, things of that nature. So: removing scare quotes, fixing the Ecma spelling, and there was still a reference to the Ecma CC in here, which is no longer a thing, at least not by that name. So, again, no real significant substantive changes here.
Certainly nothing that changes process, but just really cleaning things up. I think we have a couple of reviews on this as well, or some feedback that we received.

CDA: So this is, I suppose, a call for consensus to make these changes, as well as a call for anybody to get more eyes on it. Maybe we could say: if there are no objections, and we have approvals, by the end of this week—or perhaps the end of next week might be better, since this week is plenary—we merge these changes.

NRO: Your changes look good. Just as a follow-up: the current process document says that when we find the Stage 2 reviewers, we should already know roughly when we’re planning to go for 2.7. In practice what happens is that, well, we don’t know yet, and at some point the champions tell the reviewers, “I plan to go for 2.7 next meeting, please review.” So maybe we should just reword this to better reflect what we actually do.

CDA: To be clear, you’re referring to this line here at 185: “when reviewers are designated, a target meeting for 2.7 should be identified”?

NRO: Yeah.

CDA: Yeah, I think that would be a good idea for a follow-up PR. It does say “should be identified”, not “must have identified”. I agree. If this differs from what we typically do, then I agree that we should update it to match reality as well.

NRO: I can make a pull request and ask you to merge these changes.

CDA: Sure. That’s great feedback, thank you.

RPR: MF is agreeing with you that it’s best done as a follow-up.

CDA: Okay. Concretely requesting consensus to merge these two PRs at the end of next week at the earliest, provided we have approvals and no blocking concerns via the PR review.

DLM: We support that.

RPR: No objections. So I think we have consensus to merge these at the end of next week, subject to no blocking review comments.

## More Currency Display Choices

Presenter: Eemeli Aro (EAO)

- [proposal](https://github.com/eemeli/proposal-intl-currency-display-choices)
- [slides](TODO)

EAO: This is a very small proposal. We had a short discussion in TG2, in fact, about whether this should be a normative PR instead. But we thought, because there’s a little bit of discussion here, that it would be good to have a little bit of space for that, and the staging process is a very fine place for it. The short entirety of this is that we do currency formatting under `Intl.NumberFormat` by using the `style: 'currency'` option, and furthermore, when formatting currency, we have a `currencyDisplay` option—effectively an enum value—that controls how the currency symbol is formatted. If you use the default `'symbol'`, you get “$” or “US$” when formatting USD; `'narrowSymbol'` formats to “$”; `'code'` gives you an ISO currency code like “USD”; and then there is the spelled-out `'name'`. All of these are of course localized; `'name'`, for example, gives something like “U.S. dollars”.

EAO: And specifically, one thing to note here is that for the `'symbol'` choice—not `'narrowSymbol'`, but just `'symbol'`—whether you end up with something like a “$” sign just by itself or with “US$” depends on both the currency and the locale. In US English, you get “$” for USD and “CA$” for CAD. And similarly, in Canadian English, you get “$” for CAD and “US$” for USD.

EAO: And now, the proposal itself is about extending the scope of things, to solve two different use cases.
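
(For reference, a sketch of how the existing `currencyDisplay` options described above behave today; exact output may vary by implementation and CLDR version:)

```js
const n = 1234.5;

// Default 'symbol': result depends on both currency and locale
new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' }).format(n);
// → "$1,234.50"
new Intl.NumberFormat('en-US', { style: 'currency', currency: 'CAD' }).format(n);
// → "CA$1,234.50"

// 'narrowSymbol': always the narrow form
new Intl.NumberFormat('en-US', {
  style: 'currency', currency: 'CAD', currencyDisplay: 'narrowSymbol',
}).format(n);
// → "$1,234.50"

// 'code' and 'name'
new Intl.NumberFormat('en-US', {
  style: 'currency', currency: 'USD', currencyDisplay: 'code',
}).format(n);
// → "USD 1,234.50"
new Intl.NumberFormat('en-US', {
  style: 'currency', currency: 'USD', currencyDisplay: 'name',
}).format(n);
// → "1,234.50 US dollars"
```
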
EAO: First of all, there are times—such as when you are formatting values in multiple different currencies—when you would like to use a relatively narrow symbol view of the currency, and it would be really useful to be able, even in an en-US context, to say “I would like to have ‘US$’ for USD”, similarly to what you would get for effectively all the other currencies in the world. With the options right now, there’s no way of getting “US$” in the en-US locale. And this is not just an en-US problem; it is similar with many locales and currencies across the world, where there is a local way of expressing a currency with the implicit understanding that it’s “our dollars”, so that no “US” or other qualifier is needed.

EAO: Then a separate case is that when we are doing currency formatting, there are aspects of it that need to take the currency into account and, based on the currency, change some parts of the formatting—specifically, and most importantly, the number of fractional digits that is displayed. There it becomes, in some cases, interesting to do currency formatting even if you are not actually displaying any currency symbol at all. To effect that, it’s really useful to be able to format as currency but not show any currency indicator while doing so. And this is currently not possible, effectively. So these are the two issues that we are looking to try and fix here.

EAO: The proposed solution here is to add the following two `currencyDisplay` option values. The first is `'formalSymbol'`, which always chooses the sort of longer form, like “US$”. In the discussions in TG2 on this one, the specific aspect of the whole proposal where I think there’s a little bit of further discussion to be had is whether this thing ought to be called `'formalSymbol'` or possibly `'wideSymbol'`. The second is to introduce an additional possible `'never'` value for the option, which would not display any currency symbol or name. The code here effectively shows how these would work, where the first one is showing the `'formalSymbol'` currency display option, and the second is showing the use of the `'never'` currency display option. The word “never”, by the way—I picked it because we have something near this in the same space: the `signDisplay` option, for whether to display the positive or negative sign, has a “never” possible value.

EAO: Some of the relevant background here is that ICU already has support for something like “formal” and something like “never”, which is where the “formal” name, as opposed to the “wide” name, comes from.

EAO: That’s pretty much the entirety of the thing. I’ve also put together the very, very small spec change that would be required for all of this against 402, and that’s adding `'formalSymbol'` and `'never'` as appropriate to the few places where the currency display values are enumerated, plus the very brief descriptions thereof to include in the spec. And based on this, I am asking for—if it were acceptable, Stage 2, but I would also be happy with Stage 1 in order to discuss this and, effectively, to bikeshed whether the option value should be called `'wideSymbol'`. And that’s effectively all I have got on this. If there’s any queue, I am happy to address any issues or questions.

RPR: At the moment, there is no one on the queue. It’s hard to tell, isn’t it? For something coming in at Stage 1 or Stage 2.
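
(The slide code EAO referenced is not reproduced in the notes; the following sketch shows how the two proposed option values would be used. Both values are proposed additions, not yet part of ECMA-402, and the expected outputs are illustrative:)

```js
// Proposed 'formalSymbol': always the wider form, even in the currency's home locale
new Intl.NumberFormat('en-US', {
  style: 'currency', currency: 'USD', currencyDisplay: 'formalSymbol',
}).format(1234.5);
// → "US$1,234.50" expected; today this option value throws a RangeError

// Proposed 'never': currency-aware formatting with no currency indicator at all
new Intl.NumberFormat('en-US', {
  style: 'currency', currency: 'USD', currencyDisplay: 'never',
}).format(1234.5);
// → "1,234.50" expected, still using USD's two fraction digits
```
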
EAO: If nothing is going to show up in the queue, I would like to ask for Stage 2 for this proposal.

DLM: We discussed this internally and it is definitely a proposal that we support. I am not sure, as a fellow Mozillian, that I should be the only person supporting it for Stage 2, but it definitely has team support for Stage 1.

RPR: Okay. So are you stating personal support for Stage 2?

DLM: I am stating—yeah. I guess I really… I am not sure what I meant by that. But, yes, definitely support for Stage 1, and I will second someone else who says they support it for Stage 2.

JHD: So I apologize if you said this and I missed it: does this mean you believe there is no further design space here? And that’s why it’s ready for Stage 2, because this is basically done?

EAO: Basically, yes. In that we already have this currency display option, and it is already controlling how symbol formatting happens. And I am asking to extend—well, not extend it, because we are already doing formal-symbol-style formatting; we just don’t allow it explicitly in some cases, and `'never'` is kind of an option for no symbol at all. I don’t see any other possible solution for these use cases, other than adding two new currency display option values. Specifically, I think the discussion about whether `'formalSymbol'` or `'wideSymbol'` might be the best name, or whether there is something better than `'never'` as a name for the other one, is possibly something that could be discussed within Stage 2, if the options I am proposing here initially are not to everyone’s satisfaction.

JHD: All right. Yeah. To be clear, I wasn’t suggesting that there is further design work that could be done in Stage 2. More, my sense of the proposal is that there’s nothing further to be designed, and I wanted to confirm that we shared that sense.

EAO: Okay.

JHD: Yeah. I support Stage 2, though I have not fully reviewed the spec.

DE: I support this proposal as well. I also haven’t reviewed the spec. But a small feature like this, that adds on to an existing capability, is exactly the kind of thing that I would hope to come from TG2, and that I look forward to, especially given the concrete motivation. I would be okay with proposals like this going by either the stage process or a PR. And I want to emphasize what JHD just said: Stage 2 still permits a lot of further design work. We often go to Stage 2 with significant open questions—though I guess in this case we don’t have any open questions either.

RPR: The queue is now empty. So I think we’ve heard qualified, caveated support for Stage 2: from JHD, though without having read the spec, and from DLM from a personal point of view. So EAO, I think it’s your choice what you want to ask.

EAO: I think I would like to ask for Stage 2, because I think there is sufficient support for that. If any concerns arise, I believe they would fit well into the work that this proposal will undergo during Stage 2.

RPR: Okay. DLM has upgraded to unqualified support for Stage 2. DLM, did you want to say anything more?

DLM: No. I think the open questions here are resolvable in Stage 2. I didn’t want to be the only voice in support for Stage 2, given that Eemeli and I work for the same organization.

RPR: We also now have DE with +1 for Stage 2. So there is definitive support from multiple orgs. All right. Any objections to Stage 2?

RPR: No objections. We have heard support. Congratulations, Eemeli, you have Stage 2!

EAO: Excellent. Thank you.
Am I supposed to ask for reviewers for Stage 2.7 at this time?

RPR: Now is the time.

EAO: I would like to ask for reviewers for Stage 2.7 for this very, very small change.

JHD: I am happy to review.

RPR: Thank you, JHD. Any chance we could get one more reviewer for this proposal? Okay. We only have one at the moment.

NRO: I can review. I have only very new experience with Intl, but this seems small enough that I can do it. Nicolò, for the notes.

RPR: Thank you, NRO. Should we also be setting a target meeting for the 2.7? You brought it up. I am not trying to coerce. Coercion is bad. Okay. All right. EAO, would you like to perhaps read out a summary for the notes? Or would you like to write a summary?

EAO: I am happy to state that the proposal received support for advancement to Stage 2. I don’t think that there’s more—was there? I mean, other than: the proposal was presented and it was accepted.

AKI: And there are two committed reviewers for Stage 2.7.

### Speaker's Summary of Key Points

The proposal was presented and it was accepted.

### Conclusion

“More Currency Display Choices” was accepted for Stage 2, with JHD and NRO as committed spec reviewers.

## Upsert (formerly Map.emplace) Update and request for Stage 2 reviewers

Presenter: Dan Minor (DLM)

- [proposal](https://github.com/tc39/proposal-upsert)
- [slides](https://docs.google.com/presentation/d/15sWTvdWIo9Jt12LFRNBPJo1N_8xsMSCB3jy73HBFX-M/)

RPR: So we have DLM with upsert, formerly `Map.emplace`, with an update and also a request for Stage 2 reviewers.

DLM: This was the original name five years ago, and we have gone back to it. MF pointed out that we should not name proposals after solutions, but rather after problems, and we finally agreed on “upsert”. I should start with the motivation.

DLM: So this is the thing that we were trying to make easier for JavaScript developers: you have a map, and you want to do something different depending on whether or not the key is present in that map. The proposed solution changed slightly since I presented this in October. There are two methods. One is `getOrInsert`. This one searches for the key in the map; if found, it returns the value associated with that key. Otherwise, it inserts a value into the map—the default value, in this case—and returns that.

DLM: I also have a `getOrInsertComputed`. This is very similar to the above, except in this case you call a callback function that returns a default value, which is then inserted. When I presented this in October there was only `getOrInsert`, but there was feedback from the community at that time that, if it takes a lot of work to calculate a default value, it would be nice to defer that to a callback function rather than doing it up front. Work since the last time I presented this: as discussed, the name changed back to upsert; we support both methods, one taking the value directly and the other a callback; and the specification text has been updated—Michael has done a great job with fixes and suggestions. Students also have prototype versions of the design in SpiderMonkey and V8; at the moment this work exists in their local repositories.

DLM: There are two open issues that I was hoping to get feedback on. The first one dates back to when this proposal first came into committee. Back then it was about locking the map against concurrent access; that’s no longer what we are discussing, but there remains a problem with the callback version.
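
(A sketch of the two proposed methods and of the problematic callback pattern under discussion; the method names and semantics are as proposed, not yet shipping, and `expensiveDefaultFor` is a hypothetical stand-in for a costly computation:)

```js
const cache = new Map();

// getOrInsert: return the existing value, or insert the given default and return it
const list = cache.getOrInsert('a', []); // 'a' absent: inserts [] and returns it
cache.getOrInsert('a', ['other']);       // 'a' present: returns [], default ignored

// getOrInsertComputed: defer an expensive default to a callback
const value = cache.getOrInsertComputed('b', (key) => expensiveDefaultFor(key));

// The problematic pattern (issue #40): mutating the map inside the callback
// instead of just returning the default value
cache.getOrInsertComputed('c', (key) => {
  cache.set(key, 'set inside callback'); // misuse of the API
  return 'returned from callback';
});
// Open question: should this throw, or should the returned value win?
```
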
DLM: The problem is when someone modifies the map in the callback, rather than using the callback to return a default value. MF has helpfully put together two pull requests with two proposed solutions to the problem. One checks to see whether the map has been changed by the callback function—checking for the existence of the key that previously was not there and now does exist—and throws an error in that case. The other proposed solution would be to check for the existence of that key after the callback and return that value. So the problem that we are trying to prevent is people mistakenly using the API to insert values during the callback function, rather than returning the value to be inserted. As I state that, there is a third way to look at the API: accepting the use of the callback to insert values into the map. But basically, the API design is that you should return a default value, so it’s a user or developer mistake if they use that callback to insert the value, and—at least in my opinion—I lean slightly towards throwing, because this is a mistake in using the API. The other option would be to accept the developer’s intention and the insert made during the callback.

KG: Just a clarification on the second option here, the non-throwing option. There are two possible values that you could end up with in the map and returned: there is whatever happened during the callback, and then there’s whatever the callback returned.

DLM: Yes.

KG: I thought that the proposed solution—and certainly my preference for the behavior—is to use the value from the callback, that is, the value that the callback returns. Not whatever happened during the callback. Because the return value is sort of the second thing: there’s a mutation and then there’s the value that is returned. And I would not be excited about using whatever mutation happened during the callback, but I am fine with the approach of using the returned value from the callback. That said, I am also personally leaning towards throw, so maybe this isn’t even relevant.

DLM: Yeah. I possibly misread the PR that MF put together, but I was sure the second option was to return the value from the mutation during the callback, and not the return value from the callback. I can probably quickly bring it up. Did you have your hand up?

KM: Doesn’t this situation basically kind of—I thought the main reason for `getOrInsert`—maybe I am misremembering—was that it was more performant than looking up twice. Doesn’t it defeat the whole optimization? You have to look up where the key goes anyway.

DLM: Yes. Mm-hmm. So in this case, this is the computed version, which calls the callback. We assume people will only use this when there’s a lot of work to be done in that callback function anyway, so the usual optimization from not having to re-look up the value wouldn’t apply in this case. There’s also the API that takes the default value directly; in that case, we wouldn’t have to re-look up anything.

KM: I see. I guess I am just worried that there will be confusion between the two in that sense. People will expect not to have to look up, and would be surprised there’s a huge difference, even if you inline the computed thing. But maybe there wouldn’t be. I don’t know. Okay. All right. That’s fine.

KG: Sorry—KM, when talking about the performance costs, do you mean the performance cost of having the spec check whether someone inserted the value again, or the performance cost of someone doing the insert in the callback?
KM: I mean, just semantically, the fact that your hash table could change underneath you during the callback would require you to do another hash table lookup for the element. I think it’s like that no matter what you do: if you allow any mutation to the thing, you have to do another hash; you can’t assume your state is the same. On the other hand, if we throw, now every map operation has to check “am I under a callback?”, which is also not great, because all of the other normal existing operations get slower—they have to do a check.

KG: Yes, especially if you try to polyfill it.

KM: Right. Yeah. Either way, it’s roughly the same for the code we are going to generate, I think. But there’s—it’s definitely going to hurt the perf.

KM: I guess in terms of perf—these are long, expensive things—just a final comment: the second one is probably better, because the cost is localized to `getOrInsertComputed` rather than every operation. If you have to throw, it means every operation needs to have some, like, check for being under a `getOrInsertComputed`, whereas if you just have to rehash when you return, then the cost is only borne by `getOrInsertComputed` and not every other operation.

DLM: I am not quite following, because I thought we would only throw inside `getOrInsertComputed`. We weren’t talking about throwing from the actual set—

KM: How do you know the map is mutated?

DLM: We would check for the existence of the key. The only case where we are going to throw is where the key—in both of these options, the idea was to check for the existence of the key after the callback completes, and to take the existence of the key as evidence that someone has mutated the map and we need to do something.

KM: I think in that case, I am indifferent to the choice. Yeah. Sorry, I thought you were throwing on the underlying sets inside the callback.

DLM: No. That is what the issue was originally proposing, with double locking, but that’s not the solution we have come up with since that issue was originally filed.

SYG: I think this has been clarified by what MF said in the queue item and what DLM said. So, the semantics of the non-throwing alternative: initially you check for the existence of the key; if it’s not present, you run the callback. After you run the callback and get the value, you then check again for the existence of the key, and even if it now exists, you then set the key with the new computed value returned by the callback, and return that computed value. Is that correct?

DLM: Yes. That’s my understanding, and MF commented that that is correct.

SYG: Okay. Cool. Then I think it is also my preference, along similar lines as what Keith was saying. I want basically all features to be pay-as-you-go, and the non-throwing option is clearly pay-as-you-go for use of this particular method. And because of the way you decided to check whether the map is mutated—by the existence of the key—you could certainly, you know, delete stuff and then re-add the key, and that would result in a pretty different hash map at the end, even though from that method's point of view it is not “mutated”, and I find this misleading.
Instead—unless we build an actual mutation check, which would have the non-pay-as-you-go problem, as Keith pointed out—I would not build a particular notion of mutation that is different from a normal understanding of what it means for a map to be mutated.

DLM: Okay. So that’s support for rechecking in that case, right?

SYG: Yes.

MF: ACE is unable to be here, but asked me to relay Bloomberg’s opinion, that they prefer the non-throwing PR, because the throwing version catches only one specific case of mutation.

DLM: I am convinced; I think we should go with the non-throwing version. Can I move on to the other issue that I wanted feedback on?

RBN: We don’t throw during iteration, so I don’t think it makes sense to throw here. I also am not certain we need such complicated locking behavior as was initially proposed, which has been kind of put aside. I don’t think that locking behavior is necessary for something like Map, because we don’t employ it elsewhere—in the Array types, for instance—for any mutation of that sort. I also wonder—so the second option was returning the existing key’s value, regardless of what happens in the callback. Is that the case?

DLM: I think it’s actually—we return—yeah. I wish I had the PR open. Sorry about that.

KG: The behavior in the PR is to use the value from the callback—the returned value from the callback. It clobbers any mutations that happened to have happened during execution of the callback.

RBN: Then, yeah, that is the behavior I think I would personally prefer.

DLM: Okay.

RBN: I agree with that behavior.

DLM: Thank you, Kevin.

DLM: In that case, great. It sounds like we have agreement there.

DLM: The other issue I wanted to open up—and we talked about this last meeting as well—is the naming; there are a few comments in issue #60. I am still open to other suggestions for names. I don’t think we have come to anything much better than this. And suggestions are very welcome—some of the suggestions from last time around got lost in the notes and weren’t captured.

RPR: SYG is asking a question.

SYG: I am confused. You started this presentation with “this has been renamed to upsert”. Is that the proposal name?

DLM: The proposal has been renamed from “Map.prototype.emplace” to “upsert”, so it doesn’t refer to a method name we are planning to use.

SYG: I see. Okay.

DLM: Okay. I will not waste time on the bikeshedding thing; I will move to my next slide. Two open questions—thanks for the comments on #40 and #60, which we can resolve there. The other thing was: any volunteers for Stage 2 reviewers?

JMN: I am happy to do this. It’s JMN, from Igalia.

DLM: Thank you, JMN.

RPR: I feel like your photo is calling out for MF to be a reviewer?

DLM: That was completely unintentional on my part.

DLM: I am going to use that photo in every presentation.

RPR: MF has volunteered.

DLM: Okay. Great. Thank you.

DLM: I think I need two people, so that’s perfect. Thank you. If anyone else is interested… please let me know.

### Speaker's Summary of Key Points

* Presented an update on work that has occurred since the October 2024 plenary, including the rename to proposal-upsert and support for both `getOrInsert` and `getOrInsertComputed`.
* Asked for feedback on handling of modification to the map during the `getOrInsertComputed` callback, and on method names.
* Asked for Stage 2 reviewers.

### Conclusion

* Committee was in favour of the non-throwing solution to issue #40 (https://github.com/tc39/proposal-upsert/pull/71)
* No further feedback on naming of methods; we’ll resolve this in the issue itself (https://github.com/tc39/proposal-upsert/issues/60)
* JMN and MF volunteered as Stage 2 reviewers

## `Intl.DurationFormat` for Stage 4

Presenter: Ujjwal Sharma (USA)

- [proposal](https://github.com/tc39/ecma402/pull/943)
- [slides](https://docs.google.com/presentation/d/1bAuZ0ZSSYUdJxiDYXz2tUWHZwaOmYkNoLpQBBy_qz1w/edit?usp=sharing)

USA: Hi, everyone. Before I start with the actual presentation, thanks to BAN for doing basically everything. He couldn’t be around, so I am going to be presenting this instead. But as one of the champions of the proposal, I can say that the amount of work that has gone in recently has been amazing, and, yeah, it looks like we are finally at the finish line. So let’s see.

USA: A quick overview of DurationFormat, for the uninitiated. It is a formatter, in the same class of low-level built-in formatters as the other existing Intl formatters: they are specialized, taking one certain kind of input and formatting it according to the locale provided to them and other cultural hints such as calendars and so on. A duration, in this case, is defined as any time duration. It could be expressed in multiple units—a composite duration, in that sense—or it could be expressed in a single unit. As you can see, different locales format them differently. This might not be the best example, since the results seem very similar, but from prior experience you might know that different locales handle certain details of durations differently. So this is one of the driving use cases of this proposal.

USA: One thing to note is that one of the most important ways to customize the result of this formatting, or generally to change how it looks, is through width. Width essentially implies the amount of space—in this case, screen space—you want to dedicate to a duration. As you can see, in en-US, in long style, you would have something very fleshed out, something like “one year, three days, and 30 minutes”. In narrow style, that becomes much shorter—something like “1y, 3d”—with the unit names replaced by these letters that signify which unit it is. And then there’s the digital style. Digital style is interesting: it’s not well-defined for every single unit; however, it has a very special case for things such as hours, minutes, and seconds, and imitates a digital clock. One important thing here is that, while it’s possible to use a single consistent width for the entire duration, there are viable use cases that require you to mix and match different widths for different units in order to get the point across.

USA: So to summarize, this proposal allows for duration formatting based on locales, with the flexibility of using different formatting widths for different units—you can basically have one width per unit. One use case for this is Skyscanner. As you can see, and can probably relate to, all websites that deal with air travel are full of durations; there’s a handful of them all over the place. And anything from a simple timer in an application—maybe a to-do list application—to something like the duration of a trip can be a duration. Here is how it looks on Skyscanner.
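
(A rough sketch of the uniform and per-unit styling being described, using `Intl.DurationFormat`; outputs are approximate and depend on locale data:)

```js
// Uniform width for the whole duration
new Intl.DurationFormat('en', { style: 'long' })
  .format({ years: 1, days: 3, minutes: 30 });
// → "1 year, 3 days, 30 minutes" (approximate)

new Intl.DurationFormat('en', { style: 'narrow' })
  .format({ years: 1, days: 3 });
// → "1y, 3d" (approximate)

// Mixed widths per unit, roughly like the Skyscanner display:
// narrow hours, bare numeric minutes
new Intl.DurationFormat('en', { hours: 'narrow', minutes: 'numeric' })
  .format({ hours: 2, minutes: 15 });
// → "2h 15" (approximate)
```
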
+ +USA: One thing to note, this is already using a different width—or style, however you like to call it—for different units. In this case, seconds, for instance—well, I don’t know if there is seconds data for this stuff. But it’s never displayed, because you don’t want to display that. Minutes, on the other hand, are displayed numerically, so this means without any unit. This is mostly because it’s implied that the lowest unit would be minutes in this duration. And for hours, it's narrow. So it’s using “2h” because that’s the shortest way to signify an hour. + +USA: These are a few usage examples. I won’t go into detail. As you can see, there are many different ways to use the API. We have been over this many times. But, yeah. Feel free to ask any questions about the API. And here we go. So, you know, different styles. And mixing different locales and stuff. And as you can see, you can provide an alternative numbering system and that would just work. + +USA: So, yeah. Going over the history of stage advancement, the proposal advanced to Stage 1 in February 2020 and Stage 2 in June 2020. Relatively quickly. In October ‘21, we got to Stage 3. That was before there was a Stage 2.7. This has, as you might have noticed, been a significantly longer stage because of a lot of implementer feedback, and the fact that we were doing things in a slightly different order this time around: developing our API and then going back to, you know, making sure that it works in different tools. + +USA: Plans for V2. There are a few, but to name the popular ones: maybe a format range, so you could have a range of durations. This could be useful for things where you don’t need an exact duration. For example, recipes might have—maybe not baking, but I know for sure, cooking, you can have a range of a duration. Fractional components of hours and minutes, so that you could do things such as 1.5 hours or 0.1 minutes. But yeah. These should be done in a way that, you know, we can control well and ergonomically. + +USA: The most significant part is the Stage 4 requirements. As you just saw, the proposal has been at Stage 3 for a while. In this time, we have not only polished the proposal significantly, but we have shipped Test262 tests, and we have two compatible implementations that pass these tests. We have a lot of experience from the implementers, as well as all the feedback that has been addressed. I would like to really thank all the implementers, everyone involved in the implementation of DurationFormat, namely YSZ, ABL and FYT; all the feedback has been really important for the development of this proposal. We also have MDN documentation and a pull request made against ECMA-402, which was approved by TG2. So as you can see, the last step that we have to go through is committee approval. So I would like to formally request stage advancement for DurationFormat. + +RPR: You have support from DLM. And incoming support from PFC. + +PFC: Yeah. I mean, I am also from the same organization, but yeah. With my Temporal hat on, I am very excited about this becoming a way to format Temporal duration objects, which we will incorporate after the proposal reaches Stage 4. + +USA: That was indeed an important use case when this started. And, yeah. I am glad that both proposals have matured well. Yeah, looking forward to that. Thank you, PFC. + +USA: Thanks, everyone, for Stage 4. + +### Speaker's Summary of Key Points + +* USA went over some details about the purpose and history of the proposal.
+ +* Stage 4 was requested and there were no objections to stage advancement. + +### Conclusion + +* DurationFormat reached Stage 4 with supporting comments from DLM and PFC. + +## `Error.isError` to Stage 3 + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-is-error/) +- [slides](https://github.com/tc39/proposal-is-error/issues/7) + +JHD: Error.isError. It was not too long ago that we advanced it to Stage 2.7. We have Test262 tests written and merged. And it would be wonderful to see this proposal advance to Stage 3, at which point the HTML integration PR, which has already been directionally approved, would then be able to merge as well, unblocking the further advancement of this proposal. So I would like to request Stage 3. + +DLM: We support, and we actually have an implementation ready to go once it reaches Stage 3. + +JHD: Love it. Thank you. + +NRO: So this is just—like, I didn’t see any update, and I wonder if Mozilla has anything. I opened an issue about `InternalError`, which is an error that Firefox throws in some cases. Given that DLM—I wonder if the internal error has been properly handled. + +JHD: It would be good to get Mozilla confirmation. + +DLM: I am not sure. It was done by an open source contributor. + +JHD: For what it's worth, the internal error is already currently indistinguishable from a true subclass of error. So depending on how that was implemented, it might work by default, but I assume the change for DOMExceptions could be made for internal errors. I will keep an eye on that; NRO, I will watch it as it gets published in any channel of Firefox, and keep an eye open until it’s implemented. + +RPR: So, yeah. We had one note of support. No objections. Last call, any objections to Stage 3? No objections. So congratulations! You have Stage 3. + +JHD: Thank you. + +### Speaker's Summary of Key Points + +* test262 tests merged +* Firefox’s `InternalError` should pass this predicate, and the champion will monitor implementation status + +### Conclusion + +* Consensus for Stage 3 + +## Iterator helpers close receiver on argument validation failure + +Presenter: Kevin Gibbons (KG) + +- [proposal](https://github.com/tc39/ecma262/pull/3467) +- [slides]() + +KG: Hello, all. This is a follow-up to iterator helpers, which just landed in the spec. It is a normative change to something that is already shipping, but I strongly suspect it’s web compatible, especially given how new iterator helpers are, and I would like to make the change if we can. It was an oversight when specifying it. + +KG: So the background here is that iterators are closeable. They have a `.return` method. All generators have this, and user-defined iterators may or may not have this. For generators, it would trigger the `finally` block if you are yielding within a try-finally. + +KG: And because this can do important cleanup work, the general rule is that once you get an iterator, it’s your responsibility to close it, unless it throws an error or violates the iterator protocol, or any of these other things. But if it yields a value you weren’t expecting, or you got some other value that you didn’t know how to handle from somewhere else, then you need to close that iterator. Generally, we are disciplined about that. But we failed to do that specifically for the case of argument validation for the iterator helper methods. They do not close their receiver, the `this` value. They do in other cases.
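To make the contract concrete, a small sketch (not from the presentation); with the proposed change, the invalid argument closes the receiver before the `TypeError` propagates:

```js
// Generators inherit the iterator helpers, and closing a generator
// (calling .return()) runs its pending finally blocks.
const iter = (function* () {
  try {
    yield 1;
  } finally {
    console.log("cleanup"); // runs when the iterator is closed
  }
})();

iter.next(); // start the generator so the try/finally is live
try {
  iter.filter(null); // predicate is not callable: TypeError
} catch (e) {
  // With this change, iter was closed first, so "cleanup" was logged
  // before the TypeError reached us. Today, iter stays open.
}
```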
I have here the specification for `Iterator.prototype.filter`, and you can see down here, if calling the predicate throws, then we close the underlying iterator. But we don’t close the underlying iterator if the predicate is not callable. And I am pretty sure this is just a mistake. So there are a few different places where we do this kind of argument validation. `filter` requires a callable predicate. `map` requires that also. `take` and `drop` require a number argument, not NaN. + +KG: And so what this pull request is doing is going through each of the places where one of the iterator helpers takes an argument which gets validated, and if the argument fails validation, then we close the underlying iterator. So we maintain the contract that once you have been handed an iterator, you are responsible for closing it, where "you" is the prototype methods on the iterator helpers. + +KG: So this is just a "needs consensus" PR, because it’s a small tweak to the existing spec. I haven’t written tests because this was a last-minute thing, but I will do so as soon as this is approved, if we approve it conditional on tests. Yeah. + +MF: I strongly support this. This was totally just an oversight. We didn’t think of the `this` value as a parameter here. Like a regular parameter, we should handle closing it because it is passed in. + +RPR: DLM supports this. + +KG: Okay. Well, hearing no objection, and having two notes of explicit support, I will take that for consensus. I won’t merge this until I get tests up. But take it as having consensus. + +### Speaker's Summary of Key Points + +* An oversight in iterator helpers meant that we did not close the receiver when an argument failed validation. This PR will correct that. It's almost certainly web-compatible given how new iterator helpers are. + +### Conclusion + +Approved. + +## AsyncContext request for Stage 2.7 reviewers + +Presenter: Andreu Botella (ABO) + +- [proposal](https://github.com/tc39/proposal-async-context) +- [slides](https://docs.google.com/presentation/d/14DxgoHhTL7tzJpcu94y70USeXT9jlkF2k6lJDI720Kc/) + +ABO: Yeah. So we have just two points of update from the web integration that we shared in Tokyo. After hearing feedback from multiple parties about how the proposal that we had didn’t really fit many use cases, we changed the context in which event listeners run. So the callbacks now run in the context that triggered the event, the dispatch context. If there is no dispatch context, such as for a user click, or in Node.js something like process signals, then it falls back to the root context. This is usually the empty context, where every `AsyncContext.Variable` is mapped back to its default. We want to make this fallback configurable, and that also covers the other use case that in the initial proposal was covered by having the registration context. + +ABO: So we propose having this web API, `EventTarget.captureFallbackContext`. The name, and being part of `EventTarget`, are still up for bikeshedding. So this creates a scope, and for anything inside that scope, if there is an event with no dispatch context, such as a user click, it will use the context that was active when `captureFallbackContext` was called. This is useful for things like code regions that you want to keep isolated, where an event that has no dispatch context would lose the context for anything that spawns from that callback.
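A rough sketch of how that might look, using the proposal's `AsyncContext.Variable` API; the `captureFallbackContext` name and callback shape are still provisional, and `button` here is an assumed DOM element:

```js
const requestId = new AsyncContext.Variable({ defaultValue: null });

requestId.run("req-42", () => {
  EventTarget.captureFallbackContext(() => {
    button.addEventListener("click", () => {
      // A user-initiated click has no dispatch context. Without the
      // captured fallback this would read the default (null); inside
      // the captured scope it sees the capture-time context instead.
      console.log(requestId.get()); // "req-42"
    });
  });
});
```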
+ +ABO: We have a PR, and the next steps are: we will continue and finish the discussion with the HTML editors about implicit context propagation, and we will finish getting the PR for the web specs ready. The next time we present, we’re expecting to ask for Stage 2.7. So at this time, we’re asking for reviewers. + +RPR: Any volunteers to review `AsyncContext`? + +JSL: I can. + +RPR: Thank you, JSL. + +RPR: Can we get one more reviewer, please. + +???: MM is not here right now, but he requests to volunteer as a reviewer. + +RPR: Request granted. Yes. Thank you. + +### Conclusion + +* JSL & MM will review `AsyncContext` + +## The importance of supporting materials + +Presenter: Dan Minor (DLM) + +- [slides](https://docs.google.com/presentation/d/1teo8pAE4lbFTIlPZxum2MBcNZfGdUM2Y8huEiVdvQiQ/) + +DLM: I just wanted to talk briefly about the importance of supporting materials. So, a gentle reminder: supporting materials are already part of our process. We expect proposals seeking advancement to Stage 2, 2.7, 3 or 4 to be available ahead of the deadline along with their supporting materials, or delegates can withhold consensus solely on the basis of missing the deadline. I want to talk about why this is important. I’m not trying to be pedantic and bureaucratic. The SpiderMonkey team does its best to review every proposal as fairly as we can and provide actionable feedback. As implementers, we can’t look at just what is interesting to us; we have to look at everything. It requires a lot of work on our part. And it is not just us; I’m aware that other groups do this too. Why supporting materials? Without them, ultimately we’re left guessing what is actually going to be presented. That means we can’t give the right feedback in advance of plenary. I’m not an expert in every part of JavaScript and have to reach out to other people on the SpiderMonkey team for feedback depending on the proposal, and often we have to reach out to the DOM team or others as well. And people are busy. We need lead time to get the feedback that we’re looking for. I have mentioned this in the past every once in a while and got feedback like: why don’t we just attend the individual meetings for proposals to keep on top of them, or reach out to the champions ahead of the plenary to ask clarifying questions? These are things we do, but there is not enough time to do this for every proposal out there. What is helpful to us in reviewing proposals: definitely a clearly written motivation, and a clear and concise solution for stages where applicable. Supporting use cases, example code, and prior art. These are often helpful to us as implementers; sometimes things that are obvious improvements to people writing JavaScript every day aren’t as obvious to us as implementers, and any links to issues, PRs, and discussions help inform our opinion. And the other thing is posting the slides as early as possible. That leads back to us having the time to reach out and get the appropriate people involved. + +DLM: So, briefly, on the future: I’m not asking for any process changes now; I wanted to raise the possibilities. But one thing that I think would be quite helpful for us and the other groups doing these types of proposal reviews is to require supporting materials to be available prior to adding the item to the agenda. I don’t think they need to be finalized or anything like that. Anything available to give us an idea of what the topic will be would be much appreciated. I would like to point out this isn’t actually asking people to do any extra work. It’s the same amount of work.
We’re just moving around when it has to be done. I don’t want to say people procrastinate; I do have that tendency myself, and I think this might help people to cut down on that. But it would actually give us more time for review, and I think that would lead to better discussions and feedback during plenary. The other item, and I think this is something that Yulia may have brought up at the last plenary: if we moved the deadline a little bit further back, that would also be helpful. The existing ten-day deadline basically means that there are only five working days in between the deadline and the beginning of plenary. Again, this would just mean doing work a little bit earlier. Not asking anyone to do any extra work. And that is actually it for my presentation. Not sure if anyone has any comments they would like to make. JHD is on the queue. Go ahead. + +JHD: The spirit of what you’re hoping we eventually get is great. But when we originally were talking about this many years ago, one of the things that I think was brought up (or maybe I’m hallucinating it, but it still applies) is that if we require materials to be in advance, then any additional supporting materials that come up within that 10 or 14 days are something that you can’t add to your presentation, because then that part of the supporting materials wasn’t there by the deadline. That often happens with late-breaking realizations before plenary. Additionally, many proposals, like one earlier today, don’t require supporting materials and have nothing to provide. If I feel inspired, I should be able to make slides a day or two in advance even when they’re not required. And such a requirement would ensure that if I do procrastinate, I just wing it with no materials. So I’m not sure it would actually—I think there would be some potential perverse incentives, and it wouldn’t necessarily achieve what you’re hoping for. I agree with the spirit: more materials earlier, so everyone has time to provide feedback. + +DLM: I appreciate your comments. In terms of not allowing changes to supporting materials, that wasn’t the intention of what I was saying. I was just hoping to see even some initial slides early on; I think that would be quite helpful. And the intention is not to be bureaucratic or pedantic about this. We have some topics that are urgent, and I wouldn’t expect this of those; but for topics that aren’t urgent, requiring people to have some form of supporting materials in advance would definitely be helpful. + +CDA: I took myself off the queue. I think you answered it. I was just saying that I thought you had mentioned it’s okay if the thing isn’t complete, and I think you just clarified that as well. And also, like JHD, I don’t really view this shift as being any different from the status quo today. Today, especially for the advanced stages, supporting materials are required, but there’s also nothing that says that if you changed a slide at the last minute or added a slide at the last minute, that’s somehow unacceptable for any reason. That’s not my understanding of the current process. + +JHD: I mean, I think if it’s not a meaningful change, then it wouldn’t achieve what Daniel is hoping for. I think if it’s achieving anything, it is definitely making a shift in some way. And so I’m just suggesting that we need to be careful about the unintentional consequences of various sets of requirements. + +CDA: I think there are only a few minutes left in the topic. I’d like to hear from Nicolo and SYG if possible.
+ +NRO: What if I publish slides and then I have new things to add to them? Something I started doing a few meetings ago is to mark my slides if I add something, with a note saying this was added a couple of days ago. I would appreciate everybody doing something like that. It helps with understanding: did I forget about this slide when I reviewed them, or is this actually something new? It’s fine to have late changes. It would be great to mark them somehow. + +DLM: I agree with that. + +NRO: The last meeting, in Tokyo, was particularly bad, so we tried establishing an internal deadline, I think one extra week before the official deadline, by which, at least internally, we must share the slides with other Igalians. We still didn’t do it perfectly, but I recommend other companies do something similar. + +DLM: Thank you. + +SYG: I agree with the general spirit of this for sure. I support adding as many supporting materials as you can, as early as possible, for the same reasons that Dan Minor said. We need to review everything. At the very least, I don’t want people to over-index on having to make a full slide deck if they don’t want to do that; just give us an idea of what changed since last time and why you’re bringing this back. If I don’t know why you’re bringing something back, and if I don’t know why something is proposed and put on the agenda, I am not predisposed to it. Just as a matter-of-fact thing, the more material there is as early as possible, the better your chances are. Even if it turns out that there really isn’t much, you can add a quick note: “there isn’t much”, or a quick list of bullet points, or “there is only one material question I really want to get to”, or whatever. Some sort of hint, please. + +DLM: I agree. That would be quite helpful for us as well. And I guess to quickly summarize: I have withheld consensus before because I hadn’t had time to reach out to the appropriate experts at Mozilla to say if something is okay or not. There is a downside of potentially wasting committee time, because something has to be brought back again when there wasn’t enough advance notice to get the right feedback on it. + +CDA: Thanks, Daniel. We are at time. Now, I know you only scheduled ten minutes for this. I don’t know if it’s worthwhile to do a continuation later on, if we thought it would be useful to talk about the 14 days specifically or anything like that. + +DLM: I think I would leave that for another plenary. I wanted to give a brief presentation, and I can make a brief summary in the notes. Thank you for your time. + +### Conclusion + +* Not asking for any process changes at this time, just trying to highlight the importance of supporting materials for people who are evaluating proposals, in particular implementers who spend a lot of time on this. + +## re-using IteratorResult objects in iterator helpers + +Presenter: Michael Ficarra (MF) + +- [PR](https://github.com/tc39/ecma262/pull/3489) +- [slides](https://docs.google.com/presentation/d/1HQzC15dFnQClnUWYHSFx95aMuiJjHAjE186flPW7iZE) + +MF: This is needs-consensus PR #3489 on the ecma262 repo, the goal of which is to reduce the number of temporary objects we create. I just want to give some examples of where this would apply in the iterator helpers we have today. If you look at the present behavior of `Iterator.prototype.take`: we have an iterator called nums here that yields 0, 1, 2, 3, 4, and each of the squares is an IteratorResult object. The object with the done and the value property.
And if we do `nums.take(3)`, you can see we yield new IteratorResult objects where we copy the value over and create new objects to do that. And after this pull request, if it were merged, instead of creating new IteratorResult objects through the iterator helper and copying the value over, we reuse the whole IteratorResult object itself. So nums and `nums.take(3)` would each yield the same IteratorResult objects. You can see another example here in `.drop(...)`. So today, what it looks like is, if we call `nums.drop(2)`, we yield these four IteratorResult objects, three of which copy the value over. Instead, we could yield these four IteratorResult objects and three of them can be completely reused. So we don’t create extra objects that provide no value. And lastly, `Iterator.prototype.filter`. This is doing something similar. You can see today it is copying values over into new IteratorResult objects, and even though the reused results are not sequential, we could still reuse the IteratorResult objects as we iterate the result of filtering. + +MF: So, filter I do want to talk about a little bit more. Filter is a bit different than take and drop. Filter does have to observe the value. This value here is observed by the predicate passed to filter, which means that if you have getters on your IteratorResult objects, which is a weird thing to do, but if you have those, you may have some kind of weird behavior. You know, you could yield values for which the predicate returned false, by having the value getter return different values for the predicate versus when you are actually consuming the resulting iterator. And similarly you can have it yield values that were not passed through the predicate. So because of getters, both on value and done, filter can be a bit strange here because it observes the value. But take and drop do not share that problem. They don’t observe the value, and if you have a getter on done, it just kind of changes the behavior, but it doesn’t lead to something unreasonable like with filter, where you wouldn’t expect any of the values coming out of the filtered iterator to have not passed the predicate. So I can understand not wanting to do this optimization for filter for that reason, if we care about that use case. + +MF: I guess as a little bit of context—I kind of maybe should have led with the context. Anyway, `yield*` already does reuse IteratorResult objects in this way. It actually is inconsistently implemented in engines. The spec says that `yield*` should reuse IteratorResult objects rather than reconstruct IteratorResults with the value given. JavaScriptCore and LibJS don’t comply with the spec; they create new objects. So this would be matching `yield*` in that way. + +MF: Other context. How I originally discovered this issue is that ABL opened a pull request for test262 for `Iterator.concat` that asserts this behavior. And `Iterator.concat` is another place where we could possibly reuse IteratorResult objects, if we choose to do this optimization across the iterator helpers. Whatever we choose to do with these, whether we choose to reuse IteratorResult objects or not reuse them, we should follow that precedent with `Iterator.concat`. This is a necessary decision to be made, whichever way it goes, before we could move `Iterator.concat` forward. This is an area not tested in previous iterator helpers, but the `Iterator.concat` tests are thorough and assert on the identity -- or rather lack of identity -- of the IteratorResult objects.
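A sketch of the observable difference at stake, assuming the reuse semantics under discussion (hypothetical; the current spec allocates fresh result objects):

```js
const result0 = { value: 0, done: false };
let first = true;
const underlying = {
  next: () => (first ? ((first = false), result0) : { value: undefined, done: true }),
};

const out = Iterator.from(underlying).take(1).next();
// Current spec: `out` is a freshly allocated { value: 0, done: false },
// so out !== result0.
// Under the reuse PR, take/drop/filter would forward the underlying
// object itself, so out === result0, and any getters or extra
// properties on it would become visible to consumers.
```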
So I will be asking for Stage 3 for that proposal later in this meeting, and I will need to resolve this open issue before then. I have an open PR on that proposal to align it in either direction if needed. That’s all I have for slides. So, happy to answer questions and have a discussion on this. + +MF: As far as my own personal preference on which way we go, I don’t really have a very strong preference. I would generally prefer to reuse objects if implementations find that it would be helpful, and I think generally we can assume that it would be. But if we get negative signals from implementations, I’m fine also going the other way. I just want the question to have been addressed within plenary so that we have set a precedent going forward. That’s it. + +RGN: This is not the strongest position, but Agoric are opposed to reusing the IteratorResult object because of the weird behavior that you alluded to. If I could just run down briefly the list of things that we considered: Number 1 would be that even the `take` and `drop` helpers have to look at `done`, so despite not inspecting `value`, all the weirdness is still possible. Number 2, it results in the inability to accurately shim this behavior using generators, because generators aren’t going to reuse the IteratorResult objects. And Number 3, back on the weirdness, any extra properties of the result object beyond just `done` and `value` would sometimes be visible and sometimes not, depending on which helper was used. So all things considered, it would be most convenient for our use cases if the reuse were limited to `yield*`. But because it already exists in `yield*`, this is not a blocking objection. That’s it. + +MF: Thank you. + +KM: I don’t think it does much, but I’m happy to allocate fewer objects when possible, especially in things that run inside of loops, since those tend to run a lot. + +CDA: That’s it for the queue. + +MF: So I guess I can share—let me share—I have some feedback from ABL on the pull request itself. You can also open this yourself; I’m going to open it in one second. Pretty much, from what I understand, it’s inconclusive at this point whether or not SpiderMonkey would benefit from this change. It’s not written exactly identical to `yield*`: there would be one final access of the value at the end. So if we wanted it to be something implementations could implement in JavaScript using `yield*`, we would have to slightly change that. But I would still be open to it, because one extra value access is probably minor compared to, as KM said, something happening in a loop. All of the IteratorResults yielded by the iterator are able to be reused; it’s 1 to N. But so far, you know, I think without more prototyping work, fully changing up the implementation, we won't have actual numbers on this. I think either way, it’s probably not a huge performance difference. It’s just that we need to make a call in either direction. I was hoping to hear more from implementers if they had opinions there. + +DLM: First off, I have to admit I haven’t read ABL’s comment and I’m not fully up to date on this. When we discussed this last week, we more or less came to the idea that we’re currently using generators, and that’s never going to be really optimal for us, so chances are we’ll eventually rewrite this code; it doesn’t make sense to object to the optimization based on an implementation that we’re considering changing in the future anyway. I don’t think we should be specifying that closely to the particularities of implementations.
But I would be interested in hearing—I expect this doesn’t affect V8, and it sounds like KM is in favor of this. So I think we’re more or less neutral. + +MAH: So, thanks for that information on the implementation status of `yield*`. But if not all engines agree on what `yield*` does, maybe that also provides an opportunity to change the `yield*` implementation and align it to whatever we decide, or not change it if we don’t need to, if we decide to reuse. + +MF: We could. That seems like a regression to me. If anything, I think we should be leaning towards reusing the objects unless we have good reason not to. We heard from Agoric that they think it’s a bit weird with how getters can make things behave, which is, you know, a reason not to, but it’s a balancing act. + +SYG: All things considered, I like allocating less. I don’t know if this is always straightforward, but where it is straightforward, allocate less. But are we going to have inconsistency? Like, are some things going to reuse and some things are going to recreate? And we’re going to have to know? + +MF: Yeah, the only opportunities for doing this are where the iterator helper works on some underlying iterator and passes the exact same value that was yielded through to the result. And the only ones I’m aware of right now are take, drop, filter, and `Iterator.concat` in the iterator sequencing proposal. Other ones like `map` are not going to be able to reuse an IteratorResult object because they don’t yield the same value; they yield a potentially different value. So it’s inconsistent in that way, but it would be consistent in that all helpers that pass a value directly through will pass – + +SYG: Is that true of map? You could mutate the IteratorResult object to update the value. + +MF: It could mutate. That’s right. + +SYG: It’s not clear to me: if the goal is to have fewer allocations, why not also do that? + +MF: I would be happy to explore that possibility. I wouldn’t have thought that that was feasible within this group. But if it is, I can come back with another proposal that tries to actually reduce allocations in that way. + +SYG: I’m very happy with the goal to reduce allocation. I think my only worry is, if we have a somewhat open, ad hoc thing, for good reasons or bad reasons, on an individual helper-by-helper basis, that seems like maybe an interop issue in the future. This might be easy to miss or something. + +MAH: You cannot reliably mutate an object, because you don’t know where the object is coming from. The property might not be configurable or writable; you don’t know what is going on there. + +SYG: That’s fair. + +NRO: Just going to say, it’s a random user-provided object. It’s a bit weird to mutate it. + +CDA: That’s it for the queue. + +MF: Okay. Well, it looks like that more expanded option is not really possible. Thank you for that feedback. So it looks like we’re still considering just the scope that I had originally presented of take, drop, filter, and `Iterator.concat`. I think I hear fairly weak arguments on either side, and given that, I think my preference is to ask for this change. If there’s opposition to that I’d like to hear it. Otherwise, I would like to ask for this change, and then for it to set precedent for `Iterator.concat`. + +NRO: I am very slightly against doing this, for the consistency reason that SYG mentioned, as in: some methods do it and some others do not. And maybe it’s obvious to us that the rule is “does this method need to move the object or not?”
But in general, it might be less obvious to other people why some methods reuse the object. I think it’s fine if `Iterator.concat` diverges from this, mostly because it’s a static method and it’s not one of the methods on the prototype. + +MF: I can see that it’s not absolutely necessary for us to be consistent here. I just thought it was nice. But I would be fine with inconsistency if that’s what is requested. + +KG: I’m also slightly against doing this, mostly because it makes the general shape less consistent, which I’m worried about not just for users but for engines as well. I have slight hopes there’s some room for optimization here in engines to skip allocations in a lot of cases. I think that gets harder the more complex we make this. So my inclination is to keep the machinery as simple as we reasonably can, which means making it implementable with a generator, and making all the methods implementable with a generator. In case we go the other direction, I want to mention that what you have been saying for filter/take/drop you can do for `flatMap` too: if you get the iterator out of the mapper function for flatMap, you can forward those iterator results. + +RGN: We had only limited on-the-record participation. I’m wondering if a temperature check is appropriate. + +USA: That was it for the queue. + +MF: Do the chairs think we should do a temperature check? I’m also okay with asking for the inconsistent proposal from NRO: that we do not do this, but we do the optimization for `Iterator.concat`. That’s also fine for me. I can see that argument. + +USA: As it is, temperature checks are not binding. I don’t see why not. + +MF: As long as we have time remaining in the time box, I would be okay spending five minutes to do a temperature check. + +USA: Okay. Then let’s do a five-minute temperature check. Would you like to define precisely what the question is, and then I can start the temperature check. + +CDA: Quick point of order on this, besides the importance of defining those options: everybody needs to have TCQ open at the time when we start the temperature check, because if you come afterwards, the interface will not pop up. One of the many quirks of TCQ. So if you have any opinion on this, please make sure that you have TCQ pulled up, can see the queue, are logged in, et cetera. That would enable you to see the interface and make a choice. Maybe we’ll give, I don’t know, 30 seconds just in case for that and then – + +MF: I can explain the options while we do that. I see three options: doing the reuse of IteratorResult objects for both existing iterator helpers and for `Iterator.concat`, reusing IteratorResult objects for neither, or reusing IteratorResult objects only for `Iterator.concat`. + +CDA: I pulled up the interface. If you can take a look, Michael, to see what people will be presented with. And then you can define what each of those things mean. Or, like you just said – + +MF: The scale is not the greatest thing. + +NRO: A suggestion: can you do two separate checks? One comparing here to the status quo for the prototype methods, and then a separate one for `Iterator.concat`. People can vote the same in both if they prefer the two to be consistent with each other, vote differently if they prefer my approach, or vote negatively in both if they want to never reuse the objects. I think it’s okay to have two polls rather than one in this case. + +KM: I will copy whatever the question is that is stated into the topic so people can see it also.
+ +MF: I’m fine with NRO's suggestion that we ask about this optimization for existing iterator prototype methods. I don’t mean to call it an optimization: reuse of IteratorResult objects for existing iterator prototype methods. Express your positivity or negativity on the topic using the emoji scale that we have. + +CDA: Do you want to use the meanings that are currently ascribed there or did you want to provide your own – + +MF: That’s the best that we have. + +CDA: Okay. We’ll give this maybe, I don’t know, another minute. I don’t see any more responses trickling in. + +MF: Then it looks like we are very slightly leaning negative on that, the reuse of IteratorResult objects. So we can run the second poll about `Iterator.concat`. We can do that now or later during the `Iterator.concat` section. Either way. Are we directly following this with `Iterator.concat`? + +CDA: Yes. Iterator sequencing for Stage 3 is next. + +MF: Then we may as well do it now, since we can kind of combine the topics. + +CDA: Okay. + +MF: So the question here is: do we want to reuse the IteratorResult objects for `Iterator.concat`, given the prior knowledge that we don’t want to reuse those IteratorResult objects in the `Iterator.prototype` helpers? + +CDA: I’m just going to pull up the last result to see. I don’t recall how many responses we had total. It looks like we had at least 14. And we have about the same number here. So, of course, the numbers can be skewed a little bit because you can vote for multiple things; vote is the wrong word, it’s a multiple-choice selection, shall we say. All right. I think things have stopped trickling in. + +MF: This one looks fairly convincingly positive. So when we come to that discussion, I will assume that we are making that change for iterator sequencing. So I think that’s all; unless anyone is in the queue, I think we’re decided on this. + +CDA: I did not note who was unconvinced on this one. Folks who were unconvinced, did you want to make any remarks that you haven’t already made? + +JHD: I put indifferent on the last one. Do we care that there then won’t be consistency with the other iterator helpers? + +MF: That’s what the vote was for, right, is that – + +JHD: Right. + +MF: Knowing that the existing iterator helpers are not going to reuse IteratorResult objects, do we then want to reuse them for `Iterator.concat`? And the result there was—it was fairly positive. + +JHD: Okay. So nobody, including myself, is hung up on the inconsistency or the inability to use `yield*` to polyfill it or shim it or any of that stuff? We’re just kind of like, sure, let’s take the opportunity while we have it, just verifying? + +NRO: I’m on the queue, and this has been discussed in Matrix. For take and drop, we should not reuse, because a generator-based polyfill would not reuse. `Iterator.concat` should reuse, because it will be polyfilled with `yield*`, which does reuse. The inconsistency between concat and take/filter is what makes it possible to polyfill with generators. + +MF: I’m happy with that conclusion. + +CDA: Before you go to the next topic, would you like to dictate a summary and conclusion for the notes? + +### Speaker's Summary of Key Points + +MF: We have rejected the proposal to reuse IteratorResult objects for existing iterator helpers on `Iterator.prototype`, not setting precedent for `Iterator.concat` but setting precedent for other `Iterator.prototype` methods in the future.
+ +## iterator sequencing for Stage 3 + +Presenter: Michael Ficarra (MF) + +- [proposal](https://github.com/tc39/proposal-iterator-sequencing) +- [slides](https://docs.google.com/presentation/d/1EHMDcnV9zJ1E7BRhKmYtzHchZvOzjWynR3W-VdNxglw) + +MF: Okay. Stage 3 is mostly formality now. So we have tests as a pull request to test262, and they're not merged yet. ABL opened this pull request, I don’t know, about a month ago. I reviewed it. I added a couple of tests that I could think of that were missing, and it’s now all good for me. I know JHD has run it against his polyfill at various points. I’m not sure if his polyfills are fully passing those tests yet or not. But I am happy with the state of available tests for this proposal. + +MF: I have this pull request open for `Iterator.concat` to reuse IteratorResult objects, which is the topic that we just talked about. Based on the result of the last topic, I will merge this and update the one test in the test262 pull request to match. And that is all. So I would like to ask for Stage 3. + +SYG: Can we have them merged before the end of the meeting? I’m uncomfortable agreeing to Stage 3 if they’re going to sit in an open PR for some amount of time. + +MF: I’ve asked test262 reviewers, and they weren’t going to have time to review them between then and the meeting. I’m happy to make it conditional on the tests being merged, if that’s what we want to do. + +SYG: I feel somewhat strongly about that. The point, for me, of having tests for Stage 3: there are multiple points. One is to get the proposal author to think at a deeper, per-step level. And also, if Stage 3 is the throw-it-over-the-fence point for implementers, the point of having the tests is that we don’t have to reinvent the tests; if they’re not yet merged, we have to do that. I want them to be in the repo, to be runnable. It doesn’t have to be in the main trunk; that’s why staging exists. I want them to be runnable tests in the repo at the point of Stage 3. + +MF: Yeah, I’m mostly focused on the former. I think it has caused us to think more deeply about all the minor semantics of it, and we have now done that. But I understand that you would want to actually have them available for you to run in test262, and they’re not currently in the repo. + +SYG: As for conditional: I’m happy to give conditional approval if, basically, they’re merged by the end of day 4. If there are no cycles for that, I would rather wait. + +MF: Okay. + +JHD: So, the tests were great. They helped me catch some bugs with my polyfill, and there may be one or two—I think there’s one test that is still failing, but I’m convinced now that that is solely due to the fact that I’m manually reimplementing generator state machine stuff without using generators, so that’s sort of my—I have dug my own grave for that one. But I’m convinced that the tests are correct. So I think that they’re ready to be merged once they’re rebased and the change we discussed is applied. I’m happy with Stage 3. + +MF: Sounds like we have to hear from the test262 maintainers to see if they would – + +JHD: I’m in that group.
So I will—if no one else wants to look at it, I will merge it once it’s passing or once it’s – + +SYG: To avoid putting other maintainers on the spot, can I make a concrete suggestion of having a two-minute extension at a later date? People would have some time to decide whether they want to defer on the review or press the button, and then you come back and say “okay, now it’s merged”, and then we get Stage 3, instead of putting people on the spot right now. + +MF: It’s fine by me. I don’t know. JHD, is that okay? + +JHD: It wouldn’t be up to me. We have done conditional approvals in the past, and approved once merged. If you’re not comfortable doing that, there’s nothing wrong with waiting until the end of the meeting or something to bring it back up. + +MF: SYG, am I taking this to be general feedback that if I submit tests for a proposal, and it’s thoroughly tested and has had a review from somebody else but not been merged yet, I should hold off asking for Stage 3 advancement in the future until it’s been merged? + +SYG: My preference is: land it in staging and wait for the other review to—like, if you’re convinced that they’re correct, I’m happy to take the champion's assumption that they are correctly written. And as long as they’re in the easy-to-access and executable part, like staging, then when they kind of graduate out of staging, you can work on that in your own time, and then you don’t have to wait for the full maintainer sign-off. + +MF: Okay. + +SYG: Either they are merged and you have the maintainer sign-off, or they are just in staging. That’s my preference. + +MF: I will take that path in the future, then. I’m going to ask for an extension item sometime later in the meeting where we can revisit this, assuming that the tests have been merged – + +CDA: Okay. And the assumption is that that should be later in the meeting, as much as possible? + +MF: Yeah, I guess as late as we can make it. + +### Conclusion + +* MF will wait until the test262 tests have been merged before asking for Stage 3 again. +* This topic was not revisited later in the meeting. + +## ShadowRealm for Stage 3 + +Presenter: Philip Chimento (PFC) + +- [proposal](https://github.com/tc39/proposal-shadowrealm) +- [slides](https://ptomato.name/talks/tc39-2024-12) + +PFC: My name is Philip Chimento. I work at Igalia, and I’m doing this presentation in partnership with Salesforce. This is a short recap and ask for Stage 3 for the ShadowRealm proposal. So, a quick overview of what ShadowRealm is. It’s a mechanism to execute JavaScript code within the context of a new global object and a new set of built-ins. So you create this object, and inside it, it has an entirely fresh JavaScript execution environment. You can evaluate code in it, you can import other modules into it, and they will be unaffected by anything else that you have done to the global object outside of the ShadowRealm. There’s a little code snippet here showing that it’s not affected by a global variable of the same name on the outer global object. + +PFC: People get antsy when you mention the word security in the context of ShadowRealm. It is not security but integrity. You want to have complete control over the execution environment. It’s not a security boundary. I also asked ChatGPT to draw an illustration of ShadowRealm, and it came back with an “eerie otherworldly domain filled with dark energy and mysterious elements. Let me know if you’d like any adjustments or additions!” I think I have nothing to add to this. This is an exact depiction of what it looks like.
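The slide's snippet is not reproduced in the notes; a minimal reconstruction of the behavior described, with illustrative names:

```js
globalThis.answer = 42;

const realm = new ShadowRealm();

// Code evaluated inside the realm sees a fresh global object:
realm.evaluate(`globalThis.answer = "inside"; globalThis.answer;`); // "inside"
console.log(globalThis.answer); // still 42; the outer global is untouched

// Modules can be imported into the realm as well, e.g.:
// const fn = await realm.importValue("./module.js", "someExport");
```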
PFC: The history of the proposal: at this point, everything seems to revolve around the question of which web APIs should be present inside ShadowRealm. Over the history of the proposal, we have had several different answers to this question that we don’t like. One possible answer was: none. We don’t like that, because if you create a ShadowRealm in a browser, there’s no obvious reason why you shouldn’t have something like, I don’t know, atob() and btoa() or TextEncoder or TextDecoder in ShadowRealm. They’re not intrinsically tied to the browser. That confuses developers, because the answer to “can I use this facility inside of ShadowRealm?” comes down to: you have to know which standards body standardized it. That’s not a great answer. We don’t like the answer of having no web APIs present in ShadowRealm. + +PFC: The other answer is a vetted list. We don’t like this answer either, for several reasons, but the main one is: still, how are developers going to know whether they can use something or not? Telling them to go consult a list is not that much better than, you know, telling them to look up which standards body standardized the API. Another possible answer was a criterion based on confidentiality, which got us closer to an answer, but in the end, people found that criterion hard to evaluate without getting into the weeds. That’s something we want to avoid. So in a couple of slides, I will present the answer we have now for this. But this is kind of the history of the various answers we have had to that question. + +PFC: The proposal has been at Stage 3 before. In September of 2023, it was moved back to Stage 2 due to this question, basically (which web APIs should be exposed), and also due to concerns that the test coverage for these web APIs wasn’t sufficient. So in that meeting, we made re-advancement contingent on two implementations explicitly supporting that the testing and the list of APIs exposed to ShadowRealm are sufficient. In February of this year, we advanced the proposal to Stage 2.7 with the understanding that Stage 3 requires sign-off from the HTML folks on the HTML integration, as well as resolution of Mozilla’s concerns about the test coverage. At the time that the proposal was moved back to Stage 2, it was noted that this is not an opportunity to relitigate design decisions. This is a narrow scope for answering these questions or concerns. + +PFC: So what is the state today? Which web APIs should be exposed inside ShadowRealm? We have written a design principle for the W3C TAG that governs whether spec authors should choose for something to be exposed everywhere or not. So, a little bit of background on this. Web specs have an "Exposed" annotation where you can say, for example, that something is exposed in windows and workers. As part of the preparation for the HTML integration of ShadowRealm, it gained an "exposed everywhere" attribute, and so this design principle tells spec authors when to use that exposed-everywhere attribute. The principle is that only purely computational features are exposed everywhere. So that means features that perform I/O are not purely computational; features that affect the state of the user agent are not purely computational. As an additional exception, anything relying on an event loop is not exposed, because one place where things can be exposed is in worklets, which don’t necessarily have an event loop. And then the final part of the principle is to expose conservatively.
So features that are primarily useful for other unexposed features are not exposed. An example of that is Blob, which is a purely computational web API, but mainly used in the context of I/O, so we should default to not exposing it unless there’s a really good reason for using it by itself. + +PFC: We developed this design principle based on a number of conversations with implementers and web platform experts. We tried a few different iterations, and I think that people are mostly happy with this one. There is a clear criterion for spec authors to decide whether something is in or out. And the distinction I mentioned before (which standards body standardized the API) doesn’t exist here. That’s irrelevant with this. If you want the full list of the 1300+ global properties that are available in web environments, you know, with which ones are in and out and why, there’s a spreadsheet to click through there. + +PFC: The current state of the HTML integration: there’s the pull request to click through here. The design is settled and there have been reviews. There are some details still being worked on, in particular some mechanical work needed in specs downstream of HTML to use the new terminology of principal settings objects and principal global objects. + +PFC: We talked earlier about test coverage. So I will show you an overview of things that are now covered with this: APIs that have test coverage in web platform tests run in ShadowRealm. So one thing we did was to not just test in a ShadowRealm created in a regular browser window, but also to test everything in ShadowRealms created in multiple different scopes. So you can create a ShadowRealm and run code inside it from any of these scopes listed here: window, worker, shared worker, service worker, audio worklet, and another ShadowRealm. Sometimes testing an API might succeed in one of those and fail in another, if there are, for example, assumptions that the global is either a window or a worker, which sometimes exist in code. So now, tests run in ShadowRealm scopes in web platform tests will be run in all of these scopes by default. + +PFC: I have got a list here of all of the web APIs that are exposed according to the new criterion, with links to PRs adding web platform tests for testing those in ShadowRealm. Some of these PRs are still pending review. + +PFC: Here they are: Abort, Base64, console, et cetera. There are several slides of this. You can click through to the PRs if you want to see the details. A couple of these, like crypto and URLPattern, are separate specs, and for those we have additional integration PRs to add the exposed attribute in those specs, which is up to the authors of those specs. + +PFC: There are a couple of things that are exposed that don’t have any WPT coverage: TransformStreamDefaultController and WebTransportWriter do not have tests in any realm. But when they do get tests, we will enable them in ShadowRealm as well. + +PFC: So, the requirements for Stage 3. First, the TC39 requirement: the feature has sufficient testing and appropriate pre-implementation experience. We can safely say that this requirement is fulfilled. Then we had the conditions that were imposed when the proposal was moved back to Stage 2: explicit support from two implementations that the testing and list of APIs to be exposed to ShadowRealm are sufficient; sign-off from the HTML folks on the HTML integration; and resolution of Mozilla's concerns about the test coverage. So I think that we can discuss these requirements in the queue. The—yeah.
So I’ve asked various implementations about what they think about the current state of the testing and list of APIs, and I am wondering if we could get explicit support on the record for that requirement during this meeting. + +PFC: The HTML integration: I think we have moved that as far as we can, until we hit a chicken-and-egg situation. There is agreement on the APIs exposed, and it needs two statements of explicit support from implementations, as per the WHATWG process. That has moved as far forward as it can go until we get this positive signal from implementations, which I am hoping we can also discuss in this meeting; and resolution of Mozilla’s concern about test coverage. I talked to MAG a couple of days ago, and it looks good, but he’s going to take a closer look. I am hoping we can also discuss that on the queue. + +PFC: So let’s move to the queue now. This is a fairly short slide deck, but I am expecting a certain amount of discussion. I think the majority of the time will be spent on that. + +RGN: Yeah. I had a question about the TAG guidelines that came out, where you mentioned I/O as being excluded from exposure in ShadowRealm, and I wanted to know: is it actually I/O, is it input *and* output, or just input? Because APIs such as `console.log` do produce output and have proven useful for anyone with access to the debug console. + +PFC: This is a very good question, and I have actually mentioned console as a particular example in the design principle guideline. Technically, the console is I/O. It definitely prints a message in the developer tools in the user agent. It affects the state of the user agent. And it might also write a message to a log file. But, like, this output is unobservable from JavaScript. You can’t use another API to read back the messages that were output to the developer console. And the practicality of having console in all environments weighs strongly in favour of including console. Console is kind of a debatable case, but I think everybody I have talked to feels it needs to be included. And I certainly strongly agree with that. Like, not having a console in an environment would be very weird. + +RGN: Okay. Yeah, I agree as well. I would not want to see a guideline that was worded too broadly used as justification for excluding `console`. Thanks for the clarification. + +WH: I have a question about “purely computational”. Does it mean that, no matter which environment you run in, the result will always be the same? Or can the result depend on aspects of the environment, such as locale, what hardware you have installed, or such? + +PFC: So it should not depend on what hardware you have installed. But it is also not the case that it will be exactly the same no matter what environment you run it in. For example, we have exposed the isSecureContext boolean global property, which will be true if you have created the ShadowRealm inside a realm that is a secure context, and false if you created it inside a realm that is not a secure context. So I would have to look at the W3C PR for the particular definition that we want to use in the design principles. We are leaning on the definition of not performing I/O and not affecting the state of the user agent or the user’s device. + +WH: My question is regarding manipulation of external state. Can you read external state, such as the locale, or various state similar to that? + +PFC: Reading the current locale is a capability that is exposed by ECMAScript itself.
It would be difficult to say that a JavaScript environment couldn’t do that. The same as `Date.now()`. + +WH: Okay. Can this form a one-way communications channel? And do we care? + +PFC: Do we care? Good question. Like I said early in the presentation, the goal of the ShadowRealm proposal is not security, but integrity. So a ShadowRealm is not useful unless you do have some sort of communication with it. The point is that you—right. I am not an expert on what kinds of things can be used as a communications channel. But I think that is pretty much covered by the callable boundary. + +WH: Okay. I just wanted to understand how deep into the prohibition of the “I” out of “I/O” this is going. Thank you. + +KG: Waldemar, I recommend looking at the spreadsheet as well. There are a lot of examples there and that might be a list of—if you are familiar with the web APIs anyway. + +SYG: Are the WPT tests merged? + +PFC: Some are and some are not. You can see which ones are still pending in the slides. I’ve updated it as of Friday, I think. + +SYG: Yeah. In a similar vein to having Test262 tests merged, what is your read on getting these merged ASAP? Stage 3 is the implementer stage—I think it’s more important to get these merged than Test262, because they’re not as easy to discover, because there’s a bunch of different PRs. + +PFC: Yeah. That makes sense. If it were all in one PR, it would probably not be realistic for one person to review the whole thing. But I don’t currently see any obstacles to getting these merged, except for just review capacity. + +SYG: Okay. Thanks. To be clear, I would be more comfortable with Stage 3 once they are merged. I have no other concerns than that. + +PFC: Okay. + +NRO: Yeah, relative to what SYG said. I don’t know how it works, but I guess we can merge them as tentative? WPT uses this tentative marker for tests that are not fully confirmed for some reason. + +PFC: Some of the tests are tentative ones, like the Wasm integration ones. But in general it’s not feasible to do this tentatively, because the approach takes already existing coverage and adds a flag to it that says: run this in ShadowRealm as well. So those tests are already not tentative. We might be able to do that somehow in the test harness, where it marks only the ShadowRealm part as tentative. A number of the PRs have been merged already, and if we can get reviews on the rest, that would certainly be preferable to using the tentative flag. + +DLM: I just wanted to answer the specific question of whether or not Mozilla is happy with the test coverage. I hadn’t remembered that we are the gatekeeper there, but we would like to recognize that a lot of work has gone into the tests, and we no longer have concerns about the test coverage. + +PFC: Okay. Thanks. + +KG: Yeah. I really like the principle of pure computation. I did want to raise some wrinkles, and all of these have come up on the various threads, of which there are several. I don’t necessarily think this needs to hold up the advancement of the proposal, except maybe in one case, which we can talk about. But I do want to try to get more clarity about what exactly pure computation means. In particular, you have the WebCrypto stuff not being included. I don’t understand how that can fail to be computation. It doesn’t need something like a trust store, and it doesn’t even use hardware. Most of the time, you can shim subtle crypto. + +PFC: Can you? My understanding was that it required access to a trust store. If it doesn’t, then we should take another look at that, I guess. + +KG: Most of it doesn’t.
KG: And then there’s some things that could in principle be implemented in WebAssembly, but probably use hardware—video encoding and decoding is the example here. You have that excluded on the basis of being mainly useful for I/O. Which – +

PFC: Yeah. +

KG: I think that is basically fine. But it doesn’t answer the question of, like, you know, assume there is some hardware module that is useful for some operation that we think is reasonable to perform in the ShadowRealm. Does the fact that it is done by dedicated hardware mean that it’s not usable in a ShadowRealm? And WebGPU is maybe the example here. And I forget if that shares state with other WebGPU stuff running on the same page. If it doesn’t, it seems like that is basically pure computation. +

PFC: Yeah. We did discuss this on the thread about the design principle. I don’t really have a strong opinion on that. I feel like, if it could be emulated in WebAssembly, there’s no reason to keep it out. But basically, I don’t know enough about what use cases people would want for WebGPU in ShadowRealm to say, that should be out because it’s non-CPU computation or whatever, or it should be in because you can do this and that with it. I would say, in the absence of anything else, it’s out for reasons of primarily being useful for other things that are not exposed in ShadowRealm, but I– +

KG: Yeah. These days, a lot of that use is LLMs. And that’s not unreasonable to use in a ShadowRealm. +

PFC: You mentioned in your queue item about audio worklets. +

KG: Yes. Maybe this was resolved. But some of the people that work on audio, at Mozilla, had this concern about not wanting to allocate memory in an audio worklet. And I think that’s a major concern for audio worklets, but it shouldn’t carry over to ShadowRealm. It just complicates this `Exposed=*` thing. If this implies, you know, exposing TextEncoder and TextDecoder in audio worklets, and they don’t want that—is there a resolution to that? Was the plan to do it anyway? +

PFC: So I think these are two conflicting viewpoints and both are reasonable. One is that audio worklets must not expose anything that allocates memory, and the other is, well, just don’t do that then in audio worklets. +

KG: Right. +

PFC: Neither of these is unreasonable. I think the latter is the more commonly held position, and so that’s what I have proposed in the TAG design principles issue. I don’t have a strong opinion on this position, but I don’t like the idea of keeping things out of audio worklets that are otherwise exposed everywhere. And, you know, if the TAG decides on the former viewpoint, that you must not expose anything in audio worklets that allocates memory, then I think it’s better to just make the HostInitializeShadowRealm operation throw if the incubating global is an audio worklet. +

KG: That would work for me. I don’t have a strong opinion about the audio one or the WebGPU one. But I guess there is still some edge case that is unsolved. I am fine with going forward with the principles as written and the list that you have, with the change to expose crypto. I just wanted to talk through these. +

NRO: Yeah. Just a clarifying question. What APIs allocate memory? Is `new ArrayBuffer` one? +

KG: ArrayBuffer, yes. Probably not a plain object. The concern was specifically stuff where the allocation is unbounded or based on user input. And yeah. +

PFC: I guess TextDecoder is an example of why I think that fight is kind of already lost. TextEncoder already has the exposed-everywhere attribute, but in an audio worklet you are only supposed to use encodeInto() on an already existing buffer, because that doesn’t allocate a new buffer. So that ship has already sailed: TextEncoder is already exposed everywhere. So, you know, shrug.
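A sketch of the allocation-free pattern PFC describes, using the standard `encodeInto()` API:

```js
const encoder = new TextEncoder();
const buffer = new Uint8Array(64); // allocated ahead of time, e.g. outside the audio callback
const { read, written } = encoder.encodeInto("hello", buffer);
// encode() would allocate a fresh Uint8Array; encodeInto() does not.
```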
KKL: Lots of folks were expecting to hear from us, from the Hardened JavaScript community, about this proposal, and I want to make this explicit: we are unworried about any particular decision, though elated that you’ve come up with a criterion that was enough to make thousands of small decisions and make progress on this change. I wanted to remind folks that the reason we are unworried is that the capabilities are deniable in a ShadowRealm by the code that runs first there, because we got in early: the requirement that properties of the global object of a ShadowRealm are all deletable—thank you for carrying that through. And while we are elated that the implementers and other specification authors find the criterion sufficiently unambiguous that they are able to use it to make a lot of small decisions, we recognize there are ambiguities in it, and we are unworried for the prior reasons. For example, one ambiguity: the criterion of nothing that schedules to the event loop. That obviously does not limit the use of the microtask queue; I assume that by event loop, we mean the I/O scheduler. And that’s all I have to say about this. Thank you, and good work. +

PFC: Yeah. To answer the question about microtasks, queueMicrotask is reasonable in the ShadowRealm because it does not rely any more on an event loop than `Promise.resolve` does. +

CDA: Nothing in the queue. +

PFC: All right. In that case, how do folks feel about moving the proposal to Stage 3? +

SYG: As I said before, I would be more comfortable with Stage 3 once the tests are merged, and let it be reflected in the record that there are no concerns other than the mechanical one of having the tests merged. Before the WPT coverage is there, I am not comfortable signalling that it’s basically ready for implementation, because it’s too easy for things to slip through the cracks. +

PFC: That’s fair enough. Other than that, are there any concerns that I should be aware of when bringing this back once the tests are merged? +

CDA: I am not sure who that question was directed to. +

PFC: Everybody. +

KG: As long as we are okay with continuing to bikeshed smaller things like `crypto.subtle`, and, you know, with the potential open question of the interaction with audio worklets specifically—I am fine with leaving those open. +

PFC: Okay. I will look in more detail into which parts of `crypto.subtle` are able to be exposed. +

NRO: Yeah. So PFC said we’re in a chicken-and-egg situation: you need support from browsers to merge the HTML integration. Are we going to assume that once this proposal is at Stage 3, browsers are implicitly supporting it in the WHATWG integration, or are we looking for something more explicit? +

PFC: That’s a good question. And I would actually like to hear what other folks think about that. +

SYG: That’s my understanding. Sorry, I am not in the queue. Let me check the queue. Is it okay if I jump the queue without typing?
+ CDA: I think there’s nobody else on it. +

SYG: That’s my understanding: because the browser vendors are in the room in TC39, we should not get something to Stage 3 if we do not have browser consensus. So if there are concerns within an individual browser vendor—such as if the HTML/DOM side of your team does not agree to ShadowRealm—you should not give consensus. My understanding is, once we give Stage 3 consensus here, that implies at least two implementations. Like, it implies all three, actually. +

KM: I may have to tentatively give support, but I need to double-check. I think it’s fine, but I need to double-check with our folks. I wasn’t aware of this meeting. +

DLM: We have some concerns around this area. I have not been attending—I am not really involved in the HTML side of things. But what I have heard from people at Mozilla that are involved in that side of things is that there are no real objections. But neither are there real statements of interest on the HTML side, and we would need browser vendors to express interest in implementing this for it to go ahead on the HTML side. From what I heard last week, that’s not the case right now, and I don’t really feel that my statement of support or objection here is in any way speaking for the DOM team at Mozilla. +

DLM: Just to be clear, I don’t feel the same way as Shu. A lack of objection from me here doesn’t mean that our DOM team is going to express an interest in implementing this on the HTML side. +

PFC: It’s good we discussed this. +

SYG: You said a lack of objection from you here does not—sorry, I missed the second part. +

DLM: Sorry. That was not clearly stated on my part. I am not speaking for the DOM team. I feel like that is a separate group that needs to be convinced that this should be implemented. I realize, you know, you and I both work in browser vendors as well, but in my case, I am only speaking here as a TC39 representative, and there’s no kind of internal consensus between us and the DOM team about implementing ShadowRealm. This is something that I have brought up in our internal meetings with them. And I have spoken to the people that are representing us at WHATWG, and my understanding is this was discussed there last week, and it sounds like none of the people in that meeting expressed an interest in implementation at this time. +

SYG: I see. Maybe we all need to sync up as browser vendors, but I would encourage the other browser vendors to not give consensus here in that case. In particular, since so much of the proposed API—so much of the semantics—is around web API integration, if there is not a willingness to implement on the HTML/DOM side of the team, our giving consensus for Stage 3 in TC39 would send the wrong signal, and we shouldn’t do that. Stage 3 means it’s coming for sure, a matter of time, and if you don’t have that internal consensus on the HTML side, I don’t think we as the representatives of Chrome or Firefox or Safari should give consensus. We should block or give consensus depending on the internal agreement. +

SYG: So that’s not to put you on the spot, but since I am blocking on procedural grounds anyway until the tests are merged, I would request the other browser vendors to get internal consensus, or establish the lack thereof, before Philip comes back with the Stage 3 ask. +

DLM: Yes.
I am definitely willing to bring this up internally again. And yes, I agree with you; this is something they also said internally about the risk of sending the wrong signal here. I am also sensitive to the feeling that we’re moving the goalposts a little bit in terms of the work that the ShadowRealm champions have done. But yes, I can’t really disagree with you, Shu. It is sending the wrong signal if we say this is good for Stage 3 and there’s no kind of expression of interest. And to be clear, my understanding of what was communicated to me was that there were no objections to this; it’s just that there was no particular statement of interest in implementing this any time soon. I will work to clarify that before this comes back. +

KM: I guess, in addition to what DLM said, I have basically the same feedback, except that I have probably had fewer conversations. I did not hear any objection, but I also did not hear any strong desire from the HTML folks to do this work. But I will also ask them and come back. +

PFC: Okay. Before bringing this back, I will be in touch with all of you asking about how these conversations went. So I think I will withdraw the request for consensus, and come back at a future meeting, probably February, after the tests are merged. +

MLS: Yeah. I am the same as KM; I haven’t talked to the HTML folks. But isn’t WHATWG the right venue for all browsers? Because they have not just the people that are with our companies; the others there are also the right ones to indicate their interest in moving ShadowRealms forward on the HTML side. +

SYG: It’s a little bit tricky. I guess it’s kind of chicken and egg, but it just feels existential. If we agree to Stage 3 and no browser ships it, that's bad. If we agree to Stage 3 and a subset of browsers ship it, that’s also bad. If the HTML side decides whether we ought to ship a TC39 proposal, I would like the consensus to completely agree between the two. Where the conversations happen—I don’t know where to best facilitate that. But are you suggesting that we all go to the WHATWG meeting to hash it out there? +

MLS: It sounds like we have a homework assignment: TC39 delegates from browser companies need to have this conversation internally. But it seems to me that there have also been these discussions—we are having a discussion right now, and we have had discussions on ShadowRealm in the past, and we’re inclined from the TC39 side to move forward. But is there the same kind of inclination within WHATWG? Because in TC39, obviously, browsers—as you say, SYG—need to agree; we want browser support. But it’s the whole of TC39 that wants the proposal. And I know WHATWG is a little different from us as far as its makeup. But if TC39 wants it as a whole, that helps the conversation between the browsers. +

SYG: I agree. I am not clear if there’s a concrete suggested course of action that is different from my suggestion. +

MLS: I don’t know, because there is a concrete action that we need to take as TC39 delegates, and that is to talk to the HTML folks. Earlier in the slides, we saw all the PRs on the HTML side that haven’t moved forward to completion. Is the desire in each of those sub-venues that they do move to completion? +

SYG: My understanding is yes. Because the goal is that we all agree to something we will all ship.
And if we allow things to move to Stage 3, but then, due to whatever reasons external to TC39, we don’t ship, I think that is a breakdown in the norms of working in TC39 at all. Why would we agree to Stage 3 if we then don’t ship, without good reason, just due to external things that come up after we agree to Stage 3? +

CDA: There is nothing on the queue. +

MLS: So I was muted. Let me respond to you, Shu. The general idea, when we propose something for Stage 3, is that when it gets to Stage 3, everybody in TC39, including browsers, has an intent to implement and ship it. I know that, in practicality, that doesn’t always play out. +

SYG: It sounds like we are agreed that we, as implementers, should not agree to grant Stage 3 unless we have our ducks in a row internally. If we don’t know that we’re going to implement and ship ShadowRealm because the DOM-side folks might not agree, we should figure that out before we advance it to Stage 3, is all I am saying. +

MLS: And I agree. And so, you know, KM and I will do our homework, and you will do yours, and DLM will do his, and so on and so forth. +

PFC: Yeah. That’s my assumption as well. It’s good that we confirmed that. +

DLM: Sorry. I wanted to add a little bit more to this topic. Yeah, I agree: we shouldn’t let things move to Stage 3 if we don’t think things will be implemented in a reasonable amount of time. I will follow up internally. I don’t feel like it’s my job to advocate for a proposal with our DOM team. We can ask for feedback, and in this case I have raised it since this is very timely. But I would encourage proposal champions to also work to make sure that things don’t get lost on the HTML side as well. I mean, I can ask for people’s feedback, but I can’t require it. And I would also like to say, I am very happy that this topic came up now, because I think when the HTML integration comes up there will be a substantial amount of work to be done on the HTML side as well, and I am glad we established as the rule that we won’t let things advance to Stage 3 without our DOM teams on board as well. +

SYG: My intention is definitely to get clarity; I am not asking the individual delegates to champion proposals that you are not championing. +

PFC: So that’s an action for me, which I will definitely take to heart. +

DLM: Yeah. Just to follow up on what SYG said. Yes, I am certainly quite happy to ask for opinions, but I am not going to press for opinions. So whether this is fine for Stage 3 is something that the proposal champions will have to take up with the people involved in the HTML spec. +

CDA: Anything further in the queue? All right. +

PFC: Then I think that brings us to the end. +

### Speaker's Summary of Key Points +

* Since advancing to stage 2.7, the Web APIs available in ShadowRealm have been determined using a new W3C TAG design principle. +
* Each of these available Web APIs is covered in web-platform-tests with tests run in ShadowRealm, including ShadowRealms started from multiple scopes such as Workers and other ShadowRealms. Some web-platform-tests PRs are still awaiting review. +
* The HTML integration is now agreed upon in principle, and needs some mechanical work done in downstream specs. However, it needs two explicitly positive signals from implementors to move forward. +
* The concerns about test coverage have been resolved, assuming all of the open pull requests are merged.
+* We will get the web-platform-tests merged, look into what can be included from crypto.subtle, and talk to the DOM teams of each of the browser implementations to get a commitment to move this forward. When that is finished, we'll bring this back for Stage 3 as soon as possible. +

diff --git a/meetings/2024-12/december-03.md b/meetings/2024-12/december-03.md
new file mode 100644
index 0000000..1fd41d7
--- /dev/null
+++ b/meetings/2024-12/december-03.md
@@ -0,0 +1,850 @@
+# 105th TC39 Meeting | 3rd December 2024 +

----- +

**Attendees:** +

| Name             | Abbreviation | Organization       |
|------------------|--------------|--------------------|
| Michael Saboff   | MLS          | Apple              |
| Dmitry Makhnev   | DJM          | JetBrains          |
| Nicolò Ribaudo   | NRO          | Igalia             |
| Jesse Alama      | JMN          | Igalia             |
| Luca Casonato    | LCA          | Deno               |
| Daniel Minor     | DLM          | Mozilla            |
| Waldemar Horwat  | WH           | Invited Expert     |
| Chengzhong Wu    | CZW          | Bloomberg          |
| Jirka Marsik     | JMK          | Oracle             |
| Jack Works       | JWK          | Sujitech           |
| Chip Morningstar | CM           | Consensys          |
| Ujjwal Sharma    | USA          | Igalia             |
| Andreu Botella   | ABO          | Igalia             |
| J. S. Choi       | JSC          | Invited Expert     |
| Ron Buckton      | RBN          | Microsoft          |
| Keith Miller     | KM           | Apple              |
| Chris de Almeida | CDA          | IBM                |
| Jan Olaf Martin  | JOM          | Google             |
| Jason Williams   | JWS          | Bloomberg          |
| James M Snell    | JSL          | Cloudflare         |
| Jordan Harband   | JHD          | HeroDevs           |
| Philip Chimento  | PFC          | Igalia             |
| Richard Gibson   | RGN          | Agoric             |
| Eemeli Aro       | EAO          | Mozilla            |
| Istvan Sebestyen | IS           | Ecma               |
| Sergey Rubanov   | SRV          | Invited Expert     |
| Devin Rousso     | DRO          | Invited Expert     |
| Samina Husain    | SHN          | Ecma International |

## Briefing on the formation and goals of TC55 (or, All About Moving the WinterCG into Ecma) +

Presenter: James Snell (JSL) +

- [slides](https://docs.google.com/presentation/d/1WnqF7y52QlPRw737ZOTC4rdmJ65-nT9BbOD05jr2sjE/edit?usp=sharing) +

JSL: Hello everyone. It’s been a while since I’ve been to a TC39 meeting. Good to be here. I was previously here as an invited expert; now I’m representing Cloudflare. I’ll talk about WinterCG and about TC55—or, about moving WinterCG into Ecma—and about what WinterCG is. We have 30 minutes scheduled for this. I will try to get through this relatively quickly so we have time for discussion and questions and that kind of thing. If I skip over some key detail, just go ahead and add it to the queue and we’ll address the questions afterwards. So what is WinterCG? It started a couple of years ago. We started getting more and more non-browser ECMAScript runtimes—Deno and Bun and porffor, Cloudflare Workers and others—really starting to emerge in the ecosystem. There was a risk of fragmentation in the ecosystem, where Node might have one set of globals for web platform APIs, Deno a different set, Bun another, and so on—a risk of splitting the ecosystem across the individual runtimes. The original idea of WinterCG was: let’s get all the runtimes together to at least agree on a common set of web platform APIs that we agree to implement interoperably, and call it the minimum common API. This is basically just an informal spec that says things like: if you offer streams, use ReadableStream and WritableStream. It’s a minimum set of APIs that we should expect to exist in all of the runtimes. We should expect them to be there and expect them to be consistent with each other. +

JSL: Now, this was originally set up as a W3C community group.
If you’re not familiar with community groups: you’re not allowed to publish normative specs; you can do notes and informal recommendations, but you can’t have anything that says, normatively, this is what you must do. But almost as soon as we put out this minimum common API draft, we immediately had calls from the ecosystem to say, let’s have a definition of compliance. We had people making claims like “we are a WinterCG-compliant runtime” or “this module is WinterCG compliant”, and we had no definition of what compliance was and nothing we could enforce. +

JSL: We had other discussions about, hey, what do we do with fetch, since fetch on the server works differently than fetch in the browser? What do we do with some of the other APIs that we were being asked to look into? For instance, streaming crypto—adding streaming capabilities to WebCrypto and that kind of thing. We discovered we really didn’t have a good structure for actually talking about normative things. We couldn’t do a normative definition of compliance. We couldn’t really have a clear interaction: how do we relate to WHATWG, and how do we relate to some of the other standards efforts? We took a step back and wanted to formalize this and come up with a better approach to how we deal with all these different questions. That’s where we’re at now with moving WinterCG into ECMA as a technical committee, namely TC55. +

JSL: The charter is pretty straightforward. This is just copied directly from our draft charter right now. “Define and standardize”—that is the key part here—a minimum common API surface, along with a verifiable definition of compliance. What is this going to mean? The minimum common API does not invent new APIs. It is a list of APIs that already exist, all of them web platform APIs—things like ReadableStream and URL and others are in there. The intent of the minimum common API is not to define something new, but a subset and a compliance level: if you are a runtime compliant with the spec, these are the APIs that you will have. They will pass a set of tests defined in either Test262 or the web platform tests (Test262 doesn’t overlap with the web APIs), and this is how those things must be implemented—in order to give the ecosystem a common base to write code on, so we’re not fragmenting everything, so things don’t just work in Deno but not in Node. We don’t want to create a whole new version of the fetch spec. What we might do is cover things that the web platform does not, like CLI APIs or anything else that is needed on a server platform. And of course all these things will be operating under the royalty-free policy. +

JSL: On the program of work: the minimum common API is the primary piece of work for the foreseeable future—defining what it is and what compliance with it is. When I say compliance: what is the subset of the web platform tests that the runtimes must be able to pass? Are there variations in behavior from the web platform that need to be standardized? For instance, fetch on the server is not necessarily going to have all of the CORS requirements in there—a subset of those we need to define as out of scope for these environments, that kind of thing. We will collect requirements of non-web-browser runtimes, with input and feedback. If we have a change to, say, the fetch spec, we go to WHATWG and say, here are the requirements we discussed and here is what we identified, and we work within that process to make the changes if we can. So we’re not trying to change anybody’s process. We’re not trying to go around it. We really want a forum to work within that, but still be able to discuss common requirements, that kind of thing. Should it be necessary, the committee will standardize new API capabilities relevant to server-side runtimes. We have identified a couple of these, but the key focus is the minimum common API. And then we have the notion of standardized and maintained conformance levels; the minimum common API is one level. Another one may be: if your runtime does CLI apps, here is another set of APIs that you need to be able to support. If you’re doing sockets, here is another set that you need to be able to support, that kind of thing. Each one of those is defined as a separate conformance level.
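As an illustration (hypothetical code, not from the draft), the idea is that a program using only minimum-common-API globals such as `URL`, `ReadableStream`, and `TextEncoder` behaves identically across compliant runtimes:

```js
// Assumes a module context (top-level await) and only globals the
// minimum common API draft expects every compliant runtime to provide.
const base = new URL("/data", "https://example.com");
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode(base.href));
    controller.close();
  },
});
const { value } = await stream.getReader().read(); // Uint8Array of URL bytes
```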
JSL: Working with others. We had a lot of questions about how to interface with other groups. And again, we’re not going to fork anything. We are going to work within the process of those other groups, whether it’s TC39 or WHATWG or some other W3C working group. It doesn’t matter. We will use TC55 as a forum for discussing and collecting requirements, and go off and make the contributions to the other specs as they are being discussed. +

JSL: We already talked about conformance levels. We will have a number of these. The first one, and the primary one we will be focusing on initially, is the minimum common API. +

JSL: And again, we are keeping everything royalty free. +

JSL: That’s the presentation. I wanted to go through that quickly to make sure we have plenty of time for discussion and questions if anyone has any concerns. I know there are a few folks here, like LCA and ABO, involved in this process, so I'm happy if they have some comments or anything to add to this as well. +

NRO: I’m happy to hear your plan to have normative references to WHATWG specs, for example for the common API. I tried this in source maps, and we struggled a little bit with saying those things were normative. So I was planning, for that spec, to work through the Ecma rules to be able to actually have normative references to WHATWG, and I'm happy to hear we'll see this in TC55. +

JSL: That is one of the key things that we will need to work through: how do we have those normative references, and what are the requirements there. It’s one of the open questions, and definitely something where it's great to see progress. +

SYG: I’m missing a step in the reasoning here. I heard in the beginning that you are a CG in W3C and you can’t publish stuff normatively. My understanding of the usual substitute is to do it via a WG. An example is WASM, where the CG hands specs off to the WG to stamp. Your reasoning was: the CG can’t do this, so we’re moving to a TC in ECMA. I’m missing some of the middle part there. +

JSL: So we basically put it to a—not necessarily a vote, but a consensus decision within the WinterCG members: hey, do we want to do this as a W3C working group or in ECMA? And the majority of folks came down on the side of preferring the ECMA process, so let’s pursue this. We could have gone either way. The ECMA committee is just the one we landed on that everyone is most comfortable working in. +

LCA: There’s another part to this, which is that when we initially started trying to figure out how to publish standards, one of the options we also looked at was keeping the community group in the W3C but also having a technical committee within ECMA to actually normatively standardize things, which unfortunately, due to various policy reasons from within ECMA and the W3C, was not possible.
But, yeah, we were really trying to get to the point where we could have something that would work similar to the WASM group, where we can have a relatively open discussion with relatively few requirements on people that want to join, and then have a place to standardize. I think we have figured something out with the ECMA secretariat where the invited expert policy is lenient enough to enable us to do that within ECMA. +

AKI: I just want to add here, in case it wasn’t clear: there is not currently a working group in W3C corresponding to this community group. Regardless, in order to publish something, a new group needed to be chartered. +

SYG: Right. Thanks for that. Can I respond to this? Can you say more? I heard a little bit about why the participants prefer the ECMA working mode, which is the invited expert thing. Were there other reasons that you can share? +

LCA: We were initially also unclear about how exactly the WASM process works between the community group and working group, because we got some conflicting information from folks at the W3C about where standardization actually happens. And we got clarity more quickly about things within ECMA, because we also had closer contact with folks within ECMA. Ultimately, it could have gone both ways. It just happened to work out such that we had more contacts with folks at ECMA, and we within the group thought that this was the more convenient place for us to do this. +

WH: Can you say more about how the conformance tests would work? +

JSL: For the minimum common API, the intent really is to just specify a subset of web platform tests. So basically just calling out which ones these runtimes are expected to pass, which ones are expected to fail, and where variances in behavior may exist. So it really will just create a profile of the web platform tests to say, here is the subset you have to be able to pass. That will be the conformance test for the minimum common API. For other things—like if this committee does go off and produce a novel spec—it would define a set of tests in the web platform tests style; whether those would be added to web platform tests or some other project remains to be seen and determined. We would define what those tests are for those particular new specs. +

CM: I heard lots of references to W3C and WHATWG and things that are explicitly server platforms, but I wanted to check (and I suspect I know the answer) that TC53 and the work they’re doing is on the radar. Because I think there will be considerable overlap with a few of the things that are in the APIs that they’re specifying. +

LCA: I think we had a lot of discussion during the chartering process, also with you, on figuring out how to cleanly split between what TC55 does and what TC53 does. For those unaware, TC53 is the technical committee that works on something very similar to TC55, but more focused on embedded devices—devices that may have more constrained resources. The overlap definitely exists, but I think there is a clear case to be made that devices able to run full-fledged web servers and things like that do not necessarily fit into TC53’s scope, whereas devices that have, for example, no asynchronous I/O don’t really fit into the scope of TC55. There is surely going to be overlap, but I think the use cases are sufficiently different. +

CM: All of that seems entirely valid to me.
I just wanted to make sure that this was a coordination point that was consciously part of your process. Sounds like the answer is yes. I’m happy with that. +

JSL: As part of the chartering process, we had the calls reviewing the charter draft, and we went around trying to figure out the right language in the charter to cover this. It’s like: are they resource-constrained? Are the servers well-resourced? We couldn’t figure out good wording. I would love it if folks take a look at the charter draft and come up with better wording. We want to make sure there’s a good clear line between 53 and 55. I also want to make sure there’s a really good open dialogue and collaboration going on between the two technical committees, to make sure that we are at least driving towards consistency. +

MS: I know that Deno is involved, but are Bun and Node involved in the discussion of coming up with APIs? +

JSL: Deno for sure, and there are active Node contributors who are involved. Node as a project is too large and too diverse for any individual to speak on behalf of the project without getting the technical steering committee explicitly on board—it's the whole thing. But we have Node contributors and core contributors who are involved. Bun folks have been involved in conversations, probably not as much as I would have preferred; I’d like them to get more involved and more active in this. But we do have the runtimes. I’m also representing Workers, and porffor developers are there. We have quite a few. +

MS: Thank you. +

PFC: Another question where I suspect I might know the answer. We talked yesterday about the annotation in web specs for exposing something in all environments, `Exposed=*`. I’m wondering if you see the minimum common API as a superset of those things. +

JSL: It can be. I think we need to go back and look at this rule about whether something is purely computational or not. If you look at the stuff in the minimum common API right now, there are things like setTimeout—things that wouldn’t be purely computational. But I definitely think there is some area of overlap there that we need to seriously look at and consider. I do think that TC55 would be a great venue to answer these questions of which APIs exist in the ShadowRealm. It is a venue to discuss that. +

PFC: I’m hoping the minimum common API is a pure superset. Of course setTimeout should be included in every server runtime, even if it can’t be exposed in an audio worklet or ShadowRealm or whatever. It’s the opposite that I would be wary of, where – +

JSL: Lost you for a moment, but definitely, to the point that you’re making there, absolutely. Given the spreadsheet, I want to compare the minimum common API against the rule and the list of purely computational things: is anything on that list missing from the minimum common API, and is there anything that should not be there? +

ABO: I don’t think there are. I looked at this before; with the new update, I haven’t checked. But the intention is definitely that the minimum common API is a superset of `Exposed=*`. So I know there was some question about whether web crypto should be part of the globally exposed set, in terms of whether it should be exposed in AudioWorklets, because it’s supposed to be available only in secure contexts or something like that. And it’s not clear how secure contexts work in server-side environments, but in any case—is web crypto currently in the minimum common API? I’m not sure. I think it should be. +

JSL: We have discussed it.
This is a really good question that I think TC55 should look at first—your exact point. What does secure context mean in a server environment like Node and Deno and that kind of thing? The entire environment is effectively secure; that’s how we operate, we have these APIs available, and we don’t restrict these things. It would be nice to have a formal definition of that, so it’s easier for us to address these questions moving forward. +

MAH: I should have put end of message. I had suggested the common API as a starting point when considering which APIs to include in ShadowRealm. I’m not surprised at all that there’s significant overlap and that they’re consistent. +

CDA: That’s it for the queue. +

JSL: So we finish up a couple of minutes early. Feel free to reach out. Definitely happy to get reviews on the charter as we go here. I don’t remember exactly when the charter will be looked at again—next week or something like that. If you have any comments or feedback, let us know. +

### Speaker's Summary of Key Points +

JSL (summary): Just want to emphasize a desire to work closely with other groups like TC39, TC53, WHATWG, etc., as collaboratively as possible. In particular, I think we likely need to workshop some of the charter language that would differentiate more from TC53's charter. +

## Stabilize to stage 1 +

Presenter: Mark Miller (MM) +

- [proposal](https://github.com/Agoric/proposal-stabilize) +
- [slides](https://docs.google.com/presentation/d/1474EreKln5bErl-pMUUq2PnX5LRo2Z93jxxGBNbZmco/edit?usp=sharing) +

MM: I’m going to present, and I would like to record the slide show and then turn the recording off for the questions. This would be permission to record for public posting. Does anybody object? Recording the presentation itself, with audio, for public posting is fine, then. +

DE: I support this. I want to ask anybody who is good with technical setups whether we could offer this to presenters in general. I think a lot of people put in good work, and I’m glad you’re establishing this path. +

MM: So I’m proposing stabilize and other integrity traits. As background, we have an existing set of integrity levels in JavaScript: frozen, sealed, and non-extensible. The arrows of this diagram represent “implies”: frozen implies sealed, and sealed implies non-extensible, so up the chart are the stronger integrity levels. The levels were introduced to support high-integrity programming, and they have served that function rather well, but there are still some weaknesses we would like to address. On this diagram, by the way, on the left we have the functions that bring about the integrity levels, on the right we have the predicates that test an integrity level, and in the middle are the names of the integrity-level states an object can be in. The bulk of the presentation will focus on the states. +

MM: When considering introducing new features like the integrity traits I’m about to show, this raises the question: when should a new feature be considered an integrity trait? There are several aspects of the existing integrity levels that we’re going to take to be defining of what it means for something to be an integrity level: that it’s a monotonic one-way switch—for example, once an object is frozen, it is always frozen; that it brings about stronger object invariants and better supports high-integrity programming by making things more predictable; and that a proxy has an integrity level if and only if its target has the same integrity level. For example, a proxy is frozen if and only if its target is frozen, and this if-and-only-if upholds the idea that the target is the entirety of the bookkeeping for keeping track of whether the proxy should be considered to have that integrity level.
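For reference, a sketch of the existing machinery the diagram describes—functions on the left, predicates on the right:

```js
const o = { x: 1 };
Object.preventExtensions(o);  Object.isExtensible(o);  // false
Object.seal(o);               Object.isSealed(o);      // true
Object.freeze(o);             Object.isFrozen(o);      // true
// Monotonic one-way switch: once frozen, always frozen.
```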
MM: There’s also a distinction among the existing integrity levels that we will be carrying forward, which is that some integrity levels are explicit and some are emergent. What I mean by that is that non-extensible is an explicit integrity level: whether an object is non-extensible is a fundamental part of the semantic state of the object that has to be represented explicitly, both in the spec and in any implementation, and an object only comes to be non-extensible if explicitly made non-extensible. Sealed and frozen are emergent integrity levels, in that they are defined by a conjunction of other conditions, and if the conjunction holds, then the object is considered sealed or frozen independent of how that conjunction came to be. A sealed object is just a non-extensible object in which all own properties are non-configurable, and a frozen object is a sealed object in which all own data properties are also non-writable. So, for example, if I have an object that is non-extensible but has a single own configurable property, it is not sealed or frozen; but if I delete that property, then the object becomes both sealed and frozen. A particular reason why this distinction is important is that there are only proxy traps for the explicit integrity levels. There are preventExtensions and isExtensible traps, because that’s the fundamental state change that the proxy needs to be able to intervene in. There are no proxy traps that correspond to sealed or frozen. +

MM: The way we got started on this journey is that we are doing hardened JavaScript—both the shim for implementing hardened JavaScript and implementations of hardened JavaScript done directly. Hardened JavaScript is explicitly trying to support high-integrity programming, and it has an operation—implemented as a library in the shim, implementable as a library in general—which is harden, a transitive deep freeze: transitive by an own-property walk and an inheritance walk, walking up the inheritance chain and walking forward along all own properties, and applying the freeze operation to all objects that it encounters. We are not, in this presentation, proposing harden as an integrity level or anything else. It’s just an example of a library that is proving to be useful. And the important point of it is that it tamperproofs an API surface by freezing each object at each step of the transitive walk. Hardened JS in addition hardens all the primordials—all the built-in intrinsic objects that exist before any code starts running—which are all hardened before code starts running. The result is that these objects, which are the objects explicitly shared by all code running in the same realm, are hardened before that code runs, so you’re in a position to isolate the effects of different portions of code from each other. And we’ve been doing that since ECMAScript 5 days, under other names. +

MM: But we found that there are three weaknesses that we would like to address. Our first try was to address all three weaknesses with one additional stronger integrity level, which we’re calling “stable”.
MM: The idea would be that the harden operation I referred to would be changed so that instead of freezing the object at each step of the transitive walk, it instead stabilizes each object at every step of the transitive walk. And by addressing all three of these weaknesses, the stable integrity level would be strong enough. +

MM: However, in talking with SYG in a hallway conversation at the last plenary, we realized that a major motivating use for one of the changes that stable would introduce—one of the stronger invariants—would be extremely useful for the structs and shared structs proposal. I will get into the specifics of that. The key thing is that if the new feature is brought in only by the stable integrity level, and stable implies frozen, then it cannot be applied to shared structs, which cannot be frozen. Unshared structs can be frozen, but they need to benefit from this feature even in their initial non-frozen state. They are generally objects that, for most purposes, you won’t want to freeze, because they have properties that are mutable. But the key thing is that structs are meant to have a fixed-shape implementation, and in current JavaScript there’s no way to do that compatibly with the language. The new feature that would have been introduced by stable would have enabled structs to have fixed shape, but only if the new feature could be applied to non-frozen objects. +

MM: Jim Barksdale of Netscape famously said there are only two ways to—in his case—make money in business: one is to bundle, and the other is to unbundle. So let’s examine a full unbundling of the features of all of our integrity levels into separate, explicit, as-orthogonal-as-possible integrity traits. And now, because these are in a graph, not a fully ordered hierarchy, we’re going to shift away from the term “levels” and just refer in general to integrity traits from now on, rather than integrity levels. With these fully unbundled into separate explicit traits, this gives us a good framework for talking about what each of the separate features would be that address the different weaknesses. +

MM: Fixed is the one that would enable structs to be fixed-shape. Right now in JavaScript, there is this return-override feature, such that if, for example, a superclass constructor ends by explicitly returning some value, then after the super call in the subclass constructor, the `this` in the subclass is bound to the value that was returned by the superclass constructor. It is not bound to the object that was, behind the scenes, freshly made to be an instance of the class. And at the point where the subclass constructor takes control, the private fields—`#value` in this case—are added to whatever object was returned. That’s the case even if the object is frozen. So it’s possible to actually use this to build a WeakMap-like abstraction, which this code example is extracted from; the proposal repo has more complete code for an emulated WeakMap that just uses return override. And the key thing here is that if the subclass constructor is called with a struct object as the key and some value, then the language would obligate the implementation to add this private field to the struct.
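A simplified sketch of the return-override pattern MM describes (names here are illustrative; the proposal repo has the more complete emulated WeakMap):

```js
class ReturnKey {
  constructor(key) {
    return key; // return override: `this` in the subclass becomes `key`
  }
}
class Stamper extends ReturnKey {
  #value;
  constructor(key, value) {
    super(key);          // after super(), `this` is `key`...
    this.#value = value; // ...and #value has been added to it, even if frozen
  }
  static get(key) {
    return key.#value;   // throws TypeError if `key` was never stamped
  }
}

const frozenKey = Object.freeze({});
new Stamper(frozenKey, 42);
Stamper.get(frozenKey); // 42 — a private field was added to a frozen object
```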
MM: Now, the specification accounts for this—for how it is that these things can be added to frozen objects—by saying the private fields have WeakMap-like semantics. But practically, all high-speed implementations we’re aware of, in particular all browser implementations we’re aware of, actually add the private fields by a hidden shape change of the object. In V8, different shapes of objects have different “hidden classes”, as they call the internal bookkeeping for keeping track of the shape, and this would have to change the hidden class behind the struct. And this conflicts with a lot of the high-performance goals that are motivating the structs proposal. +

MM: So the idea is that if an object is fixed, then it cannot be extended using this return override—it cannot be extended to have new private fields. And in fact, there’s a precedent for this already in the language, which is that, by special dispensation, the browser global window proxy object is already exempt from having private fields added to it. This is again motivated by implementation constraints; it enables the implementation to avoid having to do something complex in order to implement a feature that nobody actually cares about for that case anyway. “Retcon”—retroactive continuity—is a fanfiction practice of retroactively rationalizing something that had been a special case. If we introduce fixed, we get to retcon the dispensation of the window proxy and say instead that the window proxy simply carries the fixed integrity trait. And this solves another problem with the special dispensation on the window proxy, which is that it’s impossible for a library to do a fully faithful emulation of the window proxy on a non-browser platform, because of the inability for that emulation to prohibit the addition of private fields. The introduction of the fixed trait would make that same exemption available to an emulated window proxy. +

MM: The next one is the overridable integrity trait, which would be an exemption from the assignment-override mistake. The assignment-override mistake is—I think the example explains it really well. Ignore the first `Object.freeze` line for a moment and look at the second two statements. There’s a tremendous amount of legacy code on the web, particularly from before the introduction of classes, that used this pattern in order to create class-like abstractions: a function `Point` that’s acting like a constructor function, and then an assignment to add a toString method to `Point.prototype`, overriding the toString it inherits from `Object.prototype`. What many projects have found is that, in attempting to freeze the primordials in order to create a more defensible environment—for example, to inhibit prototype poisoning—they immediately break legacy code like this in that environment. The assignment-override mistake is that you cannot override, by assignment, an inherited non-writable property. In particular, the `Object.freeze` makes the toString property on `Object.prototype` a non-writable data property that therefore cannot be overridden on `Point.prototype` with assignment. A strict-mode environment throws; a sloppy-mode environment is worse—it fails silently, and the program proceeds to misbehave in weird ways. The idea here would be that if an object is made overridable—in particular, if the `Object.prototype` object in this case is made overridable—then its non-writable properties can be overridden by assignment in objects that inherit from the overridable object.
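A reconstruction of the legacy pattern MM’s slide refers to (the exact slide code is not in the notes):

```js
Object.freeze(Object.prototype); // e.g. as part of freezing the primordials

function Point(x, y) {
  this.x = x;
  this.y = y;
}
// Assignment-override mistake: `toString` is now a non-writable data
// property inherited from the frozen Object.prototype, so this assignment
// throws in strict mode and silently does nothing in sloppy mode.
Point.prototype.toString = function () {
  return `<${this.x},${this.y}>`;
};
```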
MM: So the parenthetical here: some people on the committee believe we might be able to fix the assignment-override mistake globally, for the language as a whole. I have no opinion one way or the other on this; I’d like to find out more, offline, about the evidence pro and con. We’re just taking the position that if it could be fixed globally for the language as a whole, rather than introducing an integrity trait, we would prefer that. And if that were to happen, we would remove the overridable trait from this proposal and just accept it as a global language fix. But if not, this is how we propose to fix it for objects that opt in to the fix by adopting this integrity trait. +

MM: When writing defensive programs, in particular programs that are defensive against possible misbehavior of their arguments—possibly surprising arguments—it’s very nice to be able to do some up-front validation early in the function, to validate that the arguments are well-behaved in the ways that the body of the function will then proceed to rely on. A particularly pervasive need for this is that many functions that are responsible for maintaining an invariant have to also momentarily suspend the invariant, do something, and then restore the invariant. While the invariant is suspended, they’re in a delicate state. For example, a function that splices a doubly linked list must go through a moment in time where the doubly linked list is ill-formed before it comes to be well-formed again. And while it’s in this delicate state with suspended invariants, it is quite often vulnerable to reentrancy hazards: if code that was brought in by the argument could interleave surprisingly during an operation done while the invariant is suspended, then that interleaved code might re-enter foo. So “recordLike” here is named after, and inspired by, the records and tuples proposal. If, for example, the validated suspect argument is JavaScript primitive data, then within the delicate region we can operate on that data without any worry, because we know primitive data does not observably transfer control to any other code. Records and tuples would create object-like records which are still primitive data, and which still have this guarantee of no interleaving, and therefore no worry about interleaving hazards. +

MM: What we’re proposing addresses the one source of interleaving hazards that we cannot validate away in the language as it is today: interleaving via proxy handler traps. Even if recordLike, to ensure that the object cannot interleave, checks that the object is frozen, inherits only from something record-like, and has no accessor properties, all of that together does not give you safety if the object happens to be a proxy. So the idea is that if recordLike additionally checks that the object is non-trapping, then, if a non-trapping object is used as the target of a proxy, no operation on the proxy traps to the handler; rather, all operations on the proxy go directly to the target. To put it another way, the proxy acts exactly like the target in all ways, except that the proxy and target continue to have separate object identity.
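A sketch of the reentrancy hazard being mitigated—today, even a proxy whose target is frozen can interleave arbitrary code in its traps (`sneakyReentrantCode` is a placeholder):

```js
function sneakyReentrantCode() { /* imagine code that re-enters the caller */ }

const target = Object.freeze({ x: 1 });
const p = new Proxy(target, {
  get(t, key, receiver) {
    sneakyReentrantCode(); // runs in the middle of someone reading p.x
    return Reflect.get(t, key, receiver); // invariants force this answer
  },
});
p.x; // 1 — the result can't change, but the interleaving still happened
```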
MM: This simple way of specifying non-trapping, which is what we favor, is sensible if non-trapping implies frozen—so that the only objects you can make non-trapping are frozen objects—because the object invariants already enforce that if the target is frozen, the only things the handlers can do is interleave other code during the access, or throw; they cannot change the result of any of the proxy traps. +

MM: So, because the handlers are already mostly useless for frozen objects—though it is certainly too late to make all proxies on frozen targets non-trapping—the idea would be that this additional opt-in makes proxies over non-trapping frozen targets themselves non-trapping and, therefore, not able to cause interleaving. And it does this while still not providing an ability in the language to test whether an object is a proxy or not. So it does not break practical membrane transparency, while still turning off the interleaving behavior for non-trapping proxies, and thereby mitigating the proxy reentrancy hazard. +

MM: As long as we are considering a full unbundling of integrity traits, we could additionally consider unbundling non-extensible into its two orthogonal components. And this would serve another retcon purpose. It’s already the case, by special dispensation, that you cannot change what object the window proxy object inherits from; and the `Object.prototype` object is born inheriting from null and, again by special dispensation, you cannot change what it inherits from—even though both objects are extensible, and certainly both born extensible. Nevertheless, they have this restriction. By making this an explicit integrity trait, we can retcon the window proxy and `Object.prototype` to account for this special behavior, and we also enable higher-fidelity emulations of the browser global window proxy object on non-browser platforms, by making this selective prohibition on changing the prototype available on objects that are otherwise extensible. +

MM: And then, finally, if we unbundle non-extensible into these two features, the other one, as a separate integrity trait, would allow one to make an object to which new properties cannot be added, but where you can still change what object it inherits from. +

MM: So this would be the maximally unbundled picture. The solid arrows are the “implies” arrows. The question-mark dotted arrows are possible implications to be explored and discussed—an open design issue. The only really compelling case for a dotted arrow is that non-trapping implies frozen. It is actually possible to specify non-trapping if we relax the requirement that it implies frozen, but it is quite a complicated specification that is probably not worth the extra complexity and probably does not serve any actual purpose. +

MM: There’s a problem with this full unbundling, which is that it has five orthogonal traits. In general, we like orthogonality: it’s more expressive, and it’s more future-proof, in that the picture accommodates future additions more flexibly. But is it really worth ten new proxy traps to support these five traits? In our opinion—the current opinion of the champions of the proposal—it is not. +

MM: One way to solve this would be, instead of creating ten new traps, to create just two new parameterized proxy traps that take an integrity trait name: protect, which brings about the integrity trait, and isProtected, which tests for the presence of the integrity trait.
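A hypothetical sketch of the parameterized-trap shape (the trap names protect/isProtected are from the slides; the signatures, trait-name strings, and behaviors below are placeholders, not settled spec text):

```js
const handler = {
  protect(target, trait) {
    // `trait` would be an integrity trait name, e.g. "fixed" or "nonTrapping"
    return Reflect.preventExtensions(target); // placeholder behavior
  },
  isProtected(target, trait) {
    return !Reflect.isExtensible(target); // placeholder behavior
  },
};
// Today these extra handler properties are simply ignored; engines would
// start invoking them only if such traps were added to the language.
const p = new Proxy({}, handler);
```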
MM: This raises a design question. It’s not necessarily fatal—it’s an open design question for which we have imagined answers, none of which we like, but all of which are coherent: how would the new traps protect and isProtected coexist with preventExtensions and isExtensible? Those are existing traps that existing handlers use, but they would now correspond to what is effectively a retroactively emergent integrity trait. So when would an operation trap to preventExtensions, versus when would it trap to protect with the non-extensible trait? +

MM: The other approach to the cost of having so many different explicit integrity traits is to rebundle, to the minimal picture that still addresses the issues that we find strongly motivating. This would simply not unbundle non-extensible, leaving it as an explicit integrity trait, and forgo the retcon of the permanent-inheritance behavior of `Object.prototype` and the window proxy. It would also fold both overridable and non-trapping back up into stable. So basically, this picture is very much like the picture of our first try, with the only difference being that fixed is broken out as a separate trait. And in this minimal picture, we choose not to have any implication arrow from fixed to any other trait, so that fixed can be applied by itself—retconning that aspect of the window proxy, since the window proxy is extensible, and if fixed implied even non-extensible, you could not apply it to the window proxy. Altogether, speaking just for myself as one of the champions, I will say that I find this minimal picture to be the most attractive, even though it forgoes some of the benefits of the further unbundling. But anything intermediate between the fully bundled and fully unbundled pictures is on the table; this is proposed for Stage 1, and exploring the design space is the appropriate Stage 1 activity, not settling on a particular preference going in. +

MM: And at this point, I will break for questions. But first, I will, as agreed, stop the recording. +

SYG: So thanks, MM. I support Stage 1. I need fixed for structs, obviously, as you have said. I discussed this after our chat with other V8 folks, and in the spirit of simplicity, if possible, I think V8’s preference would be to retcon non-extensible to imply fixed, if that’s web-compatible. To that end, we have added a use counter to check how many times we see, in the wild, people adding private fields to the global `this`. You mentioned being able to explain the window proxy as one reason for unbundling fixed. But I want to raise our preference: for us, at least, the ability to explain the window proxy and to virtualize the window proxy is not a motivating or compelling reason for fixed to be unbundled. This is not a Stage 1 concern, obviously, but I would like to raise it and get your thoughts. How compelling a motivation do you think the explanation of the window proxy is, for keeping fixed unbundled? +

MM: I think it’s not. This is actually the first I’ve heard of this particular suggestion, of having non-extensible imply fixed, but my immediate, off-the-cuff reaction—with ten seconds of thinking about it—is that I like it. The reason I refer to the return-override mistake and the assignment-override mistake is that I consider both of those features of the language to largely be mistakes.
And the assignment override case, very strongly so, because as far as I know, no one has ever seen production non-test code in the wild that purposely made use of the assignment override mistake. The return override mistake, to add private properties to preexisting objects, is certainly also very, very obscure. The use of it to create a WeakMap-like abstraction, which I demonstrate in the proposal repo, is just there as a demonstration of the possibility, not because I expect anybody to make use of it. So I don’t think I’ve seen a use of the return override mistake in production non-test code that was on purpose, where the object being extended was a preexisting object, one not created fresh during the class construction. If anybody does know such a counterexample, I would be very interested.

SYG: That’s also our hunch. And in a few months, whenever this use counter hits stable with the larger population, we will have a better idea of how much in-the-wild use there actually is.

MM: I want to applaud you, the V8 team, and the Chrome team for deploying this use counter. It is generous of you to invest in doing the experiment.

KG: Yeah. I do like this exploration. I think that the object model in JavaScript is a little bit confusing. As you say, things are bundled that don’t necessarily make sense to be bundled. I am happy going to Stage 1 for this proposal to continue exploring this space. I want to raise a concern, which is that I think changes to the object model are very, very conceptually expensive for developers. Having more states that things can be in is at least potentially very expensive in terms of reasoning about the possible behaviors of code. So I am not convinced that all, or possibly any, of this is going to be worth doing, in terms of the benefits it brings versus the additional complexity. Which isn’t to say I don’t see the benefits; I would certainly like to redo the whole language to have more reasonable behavior. But tacking it on is not necessarily an improvement. I am concerned about the complexity, but happy to continue exploring in Stage 1.

MM: Good. Thank you. I share your reluctance. Obviously, I come down on the other side altogether, but that’s due to a difference in weighting of the inputs; I certainly agree that the costs are real. I am curious, from an explanatory point of view: do you prefer this picture, the minimal picture, or the fully unbundled picture?

KG: That’s a good question. I am not sure. I think I would have to sit with both of them for a while to have an opinion.

MM: Okay. And I encourage everybody to ruminate on that; I would be very curious as we continue the exploration. It’s a much more subjective matter, getting people’s sense of how much of an explanatory burden it is. It’s very much something where I just need people’s feedback on what they expect.

KM: I also want to say that I think there’s a good chance this has a lot of implementation complexity in the implementations, just because a lot of the logic around frozen and such has a long tail of security bugs. But I am not sure; we would have to look more at implementation. Obviously not a Stage 1 blocker.

MM: Thank you. And obviously, in doing the exploration, we want as much feedback as we can get from existing high-speed implementations, for which the new degrees of freedom might be painful given some of the existing optimizations.

NRO: Yeah.
Thanks, MM, for already incorporating a lot of the feedback I gave. For context, I was in a discussion with MM where we discussed bundling versus not bundling, and my recommendation was that unbundling, even though the slides have more boxes and arrows and look much more complex, is actually simpler to explain. The reason being, if we bundle everything, you have to learn everything at the same time, and this is a very complex topic. Developers today already struggle to know the difference between sealed and non-extensible, so it is easier to learn the properties one by one, rather than having to understand three of them at the same time. So yeah, I am happy to see both options are on the table. I hope that we can eventually go ahead with the unbundled version.

MM: Great. Thank you.

NRO: And my next point, which is very related to this: all of this work and discussion can be very difficult to understand. While we were reviewing proposals internally at Igalia, one suggestion we had was that even for terms that might seem obvious to those of us who participate in TG3, it would be great to have a glossary, or explanations, or pointers to what they mean, in the proposal itself. Even terms like reentrancy, and things like that, that don’t come up in most proposals.

MM: Good, would you care to contribute some of that glossary writing?

NRO: I guess I could start by giving a list of words that people can find complex and we can work from there.

MM: Okay, that would be wonderful. Thank you.

CDA: That’s it for the queue.

MM: Okay. Any support for—I think I saw support for Stage 1 go past. Does anybody wish to explicitly voice support for Stage 1, and of course, are there any objections?

### Conclusion

MM: Okay. So I see on the TCQ explicit support from SYG. Thank you. Weak support from JWK. Okay. I think I have Stage 1.

CDA: Yeah. You also have support from Jordan.

MM: Okay. Thank you.

### Speaker's Summary of Key Points

MM: There are a number of ways in which existing JavaScript fails to support high-integrity programming well. The existing integrity levels have served us well in supporting high-integrity programming, but there are extensions to the system of integrity levels that might address some of the shortfalls, and I identified three particular motivating shortfalls to be the focus of the investigation: suppressing the return override mistake, to enable fixed-shape implementations of structs; suppressing the assignment override mistake, making it painless to freeze prototypes; and the introduction of non-trapping, to mitigate proxy reentrancy hazards.

## Module Harmony: where we are

Presenter: Nicolò Ribaudo (NRO)

- [slides](https://docs.google.com/presentation/d/1V2-4Hj-HBVQwdphcJUsrbmbitOPBMSf3HhKSvhBk4d0/edit?usp=sharing)

NRO: So hi, everybody. This is a summary/reintroduction/update of where we are with the various module proposals. There is no normative discussion or concrete request for any specific proposal as part of this presentation. It’s more a way to set some common understanding for the next presentations we will have about the specific proposals.

NRO: I gave a module harmony presentation like this one year, one year and a half ago, and there have been some changes since then, both in the individual proposals and in how we generally see the area and how the various proposals interact with each other.

NRO: This was what I presented last time.
We had this kind of dependency tree between concepts, with ModuleSource and ModuleInstance at the root of the tree, and many other concepts depending on them. And we had this division into proposals. We had this blue proposal on the left, introducing ModuleSources and source imports. We had this purple proposal in the middle, introducing the module constructor with the hooks, giving a way to link ModuleSources to create modules. And we had this module instance phase import that would let you import a module, with a phase modifier in the import statement, and get a linked module object out of it, being the phase after import source. And module expressions were interpreted in these terms, giving you some syntax to get these module objects. And then there were various other proposals depending on those. On the bottom left, we had deferred import evaluation, which didn’t have any dependency on the rest.

NRO: So our understanding of this has changed a little since last time. First of all, import attributes is Stage 4, so let’s say we don’t really need to worry about it anymore. Proposals have advanced: we had the source phase imports proposal, and this is now Stage 3; its semantics are finalized and it is implemented in browsers already.

NRO: We now have the ESM phase imports proposal at Stage 2. It’s on the agenda to go to Stage 2.7 at this meeting, and it introduces ModuleSources specifically for JavaScript modules. Also, deferred import evaluation is now at Stage 2.7, and we have an update about that proposal later at this meeting.

NRO: We have a new concept, deferred/optional re-exports. This was originally part of the deferred import proposal. However, roughly one year ago, I think, we decided to unbundle it from that proposal, because it added more semantics than the deferred import proposal, and we wanted to focus on them one by one.

NRO: Also, thanks to the work that GB put into the ESM phase imports proposal, we realized it’s possible for module expressions and declarations to not depend anymore on the concept of ModuleInstances, and instead to just be some syntax for JavaScript ModuleSources. The ESM phase imports proposal is introducing the machinery to let you import ModuleSources, by flowing the necessary metadata in some way, and module expressions and module declarations could just use the same machinery. So they are actually unblocked by the ESM phase imports proposal.

NRO: Also, we used to think of module declarations as building on module expressions, because there were a bunch of shared concepts that were defined as part of the module expressions proposal, and module declarations could be built on top of that. But that’s not necessarily the case anymore, because most of the shared concepts have already been introduced by the various import proposals.

NRO: Also, we discussed last meeting, I believe, static analysis for module sources. This was originally part of the ESM phase imports proposal, so JS module source static analysis and JS module sources were part of the same proposal; as per a request from the last time that proposal was presented, it has now been removed from it. The module source static analysis will probably go together with the proposal that introduces module loader hooks, so I marked them as depending on each other, because we will probably need them at the same time.
NRO: We are not discussing the ModuleInstance phase imports anymore, mostly because the main use case was to get a module object and then create workers from it, and this is now solved by the ESM phase imports proposal. There are still some possible use cases for ModuleInstance imports as part of module loader hooks and compartments; however, it’s not clear whether they are needed, or whether ModuleSources plus some constructor to wrap them is enough.

NRO: And finally, we have a new potential proposal, on the bottom right of the slide, which is about sync dynamic imports. GB will talk more about it later in this meeting.

NRO: So we can divide the area into three main clusters. One is the one where everything is related to module sources. If you want to focus on just these proposals, they are self-contained: they contain all the concepts necessary to understand all the other proposals in the cluster. We have the source phase imports proposal at the root, and ESM phase imports is already building on top of that; it not only defines what ModuleSource objects for JavaScript are, but also the machinery for importing those JavaScript sources, continuing the import process that was paused at the source phase, and, working with WHATWG as part of web integration, for creating workers from these sources. And then module declarations can be built on top of these.

NRO: What exactly are these? Modules as defined today are composed of multiple parts. A module has some source code, if it exists; in the spec, a module doesn’t carry its source text right now, it just has a parse node, which is the spec way of saying it.

NRO: A module also has some metadata, used, for example, to resolve its dependencies. On the web specifically, this metadata includes the URL of the module, which you then resolve from, so you know where to resolve all the imports from. But the metadata can vary depending on the platform embedding JavaScript. After you start using the module, you start loading its dependencies: each module has a list of the resolved and created modules it depends on. It also has some evaluation state: a module could be new, it could be linked with its dependencies, it could be evaluating, or evaluated, either successfully or with some error. And a module also exposes its namespace object, and once the module starts evaluating, it progressively exposes the various exports of the module.

NRO: The various module source proposals cut this list in two by saying: okay, we have some immutable data, and we call this the ModuleSource; and then there is some state, and the state is part of the full module. So the module source is the immutable subsection, a subset of the information needed to create a module.

NRO: The way to get ModuleSources is through the import source syntax introduced by the Stage 3 source phase imports proposal. There are other ways to get sources: for example, the `WebAssembly.Module` object can be explained as being a source. So there can also be APIs to get or create sources of specific module types.

NRO: A source is a module that has not been loaded yet: none of its dependencies have been loaded, and it is paused at one of its earliest phases. With the ESM phase imports proposal, you can complete this process, to actually load its dependencies and evaluate it, getting it to its final state.
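
A small sketch of what this looks like in code, per the source phase and ESM phase imports proposals as described here (the module specifier is made up for illustration):

```js
// Source phase (Stage 3): get a ModuleSource without loading dependencies.
import source counterSource from "./counter.js";

// ESM phase imports: continue the paused import, loading, linking,
// and evaluating the module to get its namespace.
const counter = await import(counterSource);

// The same source could also back a worker, per the web integration work:
// new Worker(counterSource, { type: "module" });
```
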
NRO: The module declarations and expressions proposal would now give us a way to create these ModuleSources other than importing them: we can import them, or declare one inline. So this proposal would not be introducing almost any new concept, other than giving you syntax for an object that the language, through the proposals currently in the pipeline, already provides. We can define a source inline like this; the source would inherit some metadata from its parent, such as the URL used to resolve its dependencies. And then you can import these sources exactly as you can import sources obtained through import source, and the loader would read this metadata and know what to do with it, together with the source, to actually progress through the module lifetime.

NRO: This also means that maybe the module expressions and declarations proposal will change the keyword to say source instead of module. Module, I would still say, looks nicer, but one of the blockers for this proposal was the conflict with TypeScript’s module syntax. TypeScript is in the process of deprecating it, but it’s good to know we have a potential alternative in case it’s needed.

NRO: There is also a proposal that is not part of module harmony, but that we have been talking about in the context of module harmony, which is the structs proposal, specifically its shared structs part. One of the challenges the structs proposal needs to overcome is that if it wants to have prototypes for shared structs, it needs a way to tell whether shared struct definitions in two different threads are actually the same: when a shared struct object gets to a thread, it needs to get the right thread-local prototype. One way the proposal can solve this problem is by saying: okay, we now have the concept of ModuleSources; ModuleSources are immutable, so they are sharable; and we can explain the same module evaluated in two places as being two evaluations of the same ModuleSource. Two shared struct definitions would be the same if they actually come from the same underlying shared ModuleSource.

NRO: And yeah, this is a drawing of how different modules can point to the same struct. We have been discussing this with the structs champions to see if this is actually a viable solution.

NRO: We then have a second cluster; let’s call it the optional/sync evaluation cluster. This is about proposals that do not really affect how loading works or what a module is; they just help us potentially skip some evaluation, or defer it. In this cluster we have the import defer proposal, the deferred/optional re-exports proposal, born as a child of import defer, and the new sync dynamic imports idea.

NRO: To recap, the goal of import defer was to evaluate as little as possible, and only at the point where you need it, so that you don’t need to evaluate everything while it’s being loaded: code you don’t need yet, you can evaluate later, with less friction than what dynamic imports require. Export defer was born as a consequence of this, but we noticed that export defer can make the language support built-in tree-shaking. That is, if I re-export a binding and my importer is not using the binding, I can avoid loading the module that binding is exported from, as in the sketch below. This is something that is very common in tools, and it is one of the reasons why tools are, today, better than just using browsers: other than avoiding loading 100 separate files, they also remove a lot of unused code.
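
A sketch of the deferred re-export idea described above; the syntax is tentative, per the deferred/optional re-exports discussion, and the file names are made up:

```js
// lib.js
export defer { heavyHelper } from "./heavy.js"; // tentative syntax

// app.js
import { lightHelper } from "./lib.js";
// If `heavyHelper` is never imported anywhere, "./heavy.js" need not be
// loaded or evaluated: tree-shaking built into the language.
```
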
Different tools have different implementations of tree-shaking; there is no shared standard for how to do it. So having this tree-shaking in the language might help significantly.

NRO: And yeah, when the import defer proposal advanced to Stage 2.7, this part was left behind at Stage 2.

NRO: Sync dynamic imports are in the same cluster because they are something in between dynamic import and import defer. Again, GB will talk more about this, but the general idea is that sometimes it’s actually possible to do a sync import, in some sense, that skips evaluation much as import defer does, with as little friction as import defer. Unfortunately, it only works in some cases. There are similar concepts in other parts of the ecosystem: in Node.js you can require ESM, synchronously loading and evaluating those files, and Node.js is also exploring exposing this on `import.meta` for convenience. Again, we will hear more from GB about this.

NRO: And lastly, we have the custom loaders and compartments cluster, which includes all the tools to virtualize a module system. So, tools that will let you define how resolution works, without using a Node.js-specific hook or a browser-specific implementation, but with a standard way of doing this work across all platforms. It allows you, for example, to implement hot reloading of modules at the language level; these proposals allow you to define your own types of modules and to create some separation between different module graphs.

NRO: We’ve received some feedback on these proposals since they were first presented at plenary, I think three years ago. At the time, we presented a new module constructor that takes a module source parameter and a series of hooks, the most important of which was the import hook: as you were linking the module, this import hook was called for every dependency, getting the specifier as a parameter and returning the loaded module as the return value. This very closely resembles the existing host hooks in the spec. Some feedback was that this might require too much back and forth between the engine and user code, so there have been some discussions about making it more upfront imperative, as in, with the static analysis features, you would get the list of dependencies and then manually link each module.

NRO: But there has not been much progress on this overall, other than a few discussions. So yeah, if anybody wants to help with this, you are very welcome. I know there are some people who want to help but for whom the current module harmony call time is not working well; we will try to fix this in the future.

NRO: And this is where we are right now. I would be happy to answer any questions. If GB is here, he will also be happy to answer questions, specifically about how the various proposals work together, or about proposals that are not being presented at this meeting. If you don’t have any questions, I hope this presentation will help you follow the next discussions about the specific proposals.

NRO: If there are no questions, I have a question for the committee: I’ve been asked to give this presentation because it’s difficult to follow the whole module space, but I would love to have feedback on the format. Would it have been better if this presentation was done in some other way? Should it have been longer or shorter?
Should it have focussed on different proposals? If anybody has meta-feedback like that, it’s welcome.

KKL: Yeah. I wanted to expand a little on another point that appears to be an intersection of interests between module harmony and shared structs. One of the ideas that NRO has, which satisfies a constraint I think is important for module harmony, addresses this open question of how shared structs, which are primitives, as values, are associated with their corresponding prototype instances. In hardened JavaScript, it’s important for us to be able to ensure that these prototype instances, which are born mutable, can be frozen and isolated to a particular—we call them compartments. I think there’s an emerging concept of a cohort of instances of modules that comes out of these primitives and should be sunk lower into module harmony. NRO is proposing that there be a property that acts as a token for the cohort an instance belongs to, such that if a ModuleSource, which is associated with a ModuleInstance, passes from one cohort to another, it is ensured to get a different instance, and different instances of the implied shared struct prototypes. I think this mechanism is growing in importance, and I wanted to share that with you today so that you can be prepared to hear more about it in the future, especially from those of us with the hardened JavaScript perspective, who probably haven’t talked about it much yet.

NRO: Okay. Yeah. Just to clarify what a cohort is: it’s roughly equivalent to the module cache, in the sense that the same source imported through two different caches gives you two different module instances. The cache is currently defined by the host, and the idea is that we might need to expose its identity in some way. This would be part of the custom loaders cluster. Thanks, KKL.

CDA: Circling back to Nicolò’s request for feedback about the presentation: was this helpful? Would people prefer an update in a modified form in some way? I think he would appreciate any feedback.

NRO: I guess it’s also fine to send me a message on Matrix if you have any feedback.

## ECMA402 Status Updates

Presenter: Ujjwal Sharma (USA)

- [proposal](https://github.com/tc39/ecma402)
- [slides](https://hackmd.io/@ryzokuken/r1qXw2hQkx#/)

USA: So yeah. Okay. Let me know if there are issues, but I will try to be quick with this; these are quickly hacked-together slides. Before I begin, I should credit that all the editorial work I am talking about here is not by me, but by BAN. But BAN isn’t here. So okay.

USA: 402 updates. Not much happened since the last meeting. One of the editorial changes was by ABL: in the Intl.NumberFormat constructor, there was incorrect markup for the notation variable. So this is not a big deal at all, just a formatting issue that was fixed. Thanks, Anba, as always, for being on top of these editorial things.

USA: And after that, there was a change by BAN to clarify CollapseNumberRange. For some context, this is an abstract operation used by NumberFormat for collapsing number ranges. Basically, let’s say that you had a small range, within some degree of error, that is close to a single value; it could be formatted from something like, say (please don’t quote me on this), 1.99 to 2.01, into “approximately 2”, and things like that.

USA: So anyhow, CollapseNumberRange was clarified; more specifically, it can now add spacing characters. This reflects reality, because this is how LDML does things, as well as how ICU implements it.
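
For illustration (not from the slides), this is the kind of collapsed output that `Intl.NumberFormat`'s `formatRange` can produce; the exact output depends on locale and ICU version:

```js
const nf = new Intl.NumberFormat("en", { maximumFractionDigits: 0 });
// Both endpoints round to the same value, so the range collapses:
nf.formatRange(2.9, 3.1); // "~3" (approximately sign plus the collapsed value)
```
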
USA: This is just updating NumberFormat to improve things editorially.

USA: And as you might know from the last meeting, `Intl.DurationFormat` is Stage 4. The editors will be working on making it part of the spec ASAP. And that’s it. Thank you.

USA: Is there something on the queue? I don’t think so. No?

CDA: Okay. Thank you, Ujjwal.

## Immutable ArrayBuffer to stage 2

Presenter: Mark Miller (MM)

- [proposal](https://github.com/tc39/proposal-immutable-arraybuffer)
- [slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage2.pdf)

MM: I will ask the committee again for permission to record the presentation itself, including the audio, with the understanding that when we shift to Q&A at the end, I turn off recording, but any discussion during the presentation would by default be part of the audio recording for public posting. Any objections? Thank you.

MM: Last meeting, we got Immutable ArrayBuffer to Stage 1. To recap: the gray here is the existing ArrayBuffer API, and the proposal would add at least these two features to the existing API: a `transferToImmutable` method that returns an ArrayBuffer that has the immutable flavor, and an `immutable` accessor that is true exactly for those ArrayBuffers that have the immutable flavor. The immutable flavor would sit alongside the detached and resizable flavors. The behavior of the immutable flavor of ArrayBuffer is that its immutable accessor would say true; it’s not detached and not detachable; it’s not resizable; its max byte length is the same as its byte length. As for the methods: the `slice` method, which is a query method, would still work, would still be there, but the other methods that would cause a change, including all the transfer methods, would throw an error rather than do what is normally expected.

MM: Status update: at the last plenary, the public comments were all positive, and I additionally got many private positive comments. I don’t recall receiving any negative comments or objections, so if anybody here did give me some negative feedback, please remind me. As of the last plenary, the spec text was already what I consider to be Stage 2 quality, thanks to RGN for that. And since the last plenary, Moddable has done a full implementation of the proposal.

MM: As of last plenary there were some open questions, which I will now go into, telling you what our preference is on the resolution of each, while in each case asking for feedback today from the committee. The existing `transfer` and `transferToFixedLength` methods both have an optional length parameter. The `transferToImmutable` method as presented at the last plenary had no optional length parameter, and the question is: should it have one? There’s an argument from orthogonality in each direction.

MM: The argument from orthogonality to omit the length parameter is that the composition of `slice` and `transferToImmutable`, or the combination of an existing `transfer` followed by `transferToImmutable`, already composes the orthogonal concerns of changing the length and making something immutable, and because it transfers, it would not interfere with being zero copy. It just keeps separate jobs being done by separate methods.
MM: The argument from orthogonality for including the length parameter is that we would then have three different transfer methods, each of which independently has a length parameter that can be present or absent, and you would just have the orthogonal combination of whatever the method does and whatever you ask for in the parameter. So I think orthogonality is a wash.

MM: I’m changing my mind on this: I’m now advocating that we include the length parameter, because it minimizes the damage from surprise. What I mean by that is that either decision might surprise some programmers. A programmer who expects there to be no optional length parameter, and doesn’t use it, in a language in which there is an optional length parameter, experiences no damaging surprise. A programmer who does expect an optional length parameter, in a language that does not have one, might provide a length argument, and then they don’t even get an error: the language just proceeds to do something that deviates from their expectation, and silent deviation from programmer expectation is very dangerous.

MM: On those grounds, I now favor the length parameter. Next question: should we add a zero-copy slice method? Right now, we have `slice` and `transferToImmutable`, and they can be composed to get an immutable slice, as in the example code down here: if we have an immutable buffer and want an immutable slice of the buffer, we can take a slice and call `transferToImmutable` on the slice. But this technique for getting the effect is very hard to make zero copy.

MM: So the proposal would be to add a new method, `sliceToImmutable`, whose semantics is exactly the same as the line of code that you see down here, but with the implementation expectation that the new ArrayBuffer is a zero-copy window onto the original ArrayBuffer.

MM: NRO, I think it was, raised the issue of whether the accessor property for determining the flavor of an ArrayBuffer should be named `mutable` or `immutable`. In general, there’s a principle that booleans should have positive names, so that the negation of the boolean does not read like a double negative. If we named the accessor `immutable`, then in order to say “if mutable”, you would have to say “if not immutable”, which just seems more complicated than saying “if mutable”. The contrary argument, for `immutable`, is that there’s a general convention of booleans defaulting to `false`, and in particular the really nice thing is that absence is falsy. So if `buffer.immutable` is run on a system from before immutable ArrayBuffers were in the language, without the accessor, it would do the same thing: it would be falsy, indicating, correctly, that the buffer in question is mutable. Both sides have reasonable pros and cons.

MM: Altogether, again speaking for myself rather than the champions, I favor `immutable` as the answer, because I find the compatibility with absence compelling. And then there’s this complex set of open questions, all of which are about what the precise order of operations should be in the specification. On the happy path, when everything just does what it’s supposed to do, this doesn’t matter very much; the consequence at the end of the happy path is pretty much the same. Where the order of operations matters, and where some of those other questions also explicitly matter, is when you’re not on the happy path. The most important issue is: does the failure cause a throw, or does it fail silently, doing nothing?
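
Putting the pieces together, a rough sketch of the API surface as presented (names per the slides; details such as the length parameter and `sliceToImmutable` were still under discussion at this point):

```js
const buf = new ArrayBuffer(4);
new Uint8Array(buf).set([1, 2, 3, 4]);

const immu = buf.transferToImmutable(); // buf is detached afterwards
immu.immutable;     // true
immu.slice(1, 3);   // query methods still work (returns a mutable copy)
// immu.transfer(); // would throw: mutating methods are disabled

// Proposed zero-copy-friendly slicing, semantically equivalent to
// immu.slice(1, 3).transferToImmutable(), but allowing a window:
// const sub = immu.sliceToImmutable(1, 3);
```
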
MM: There’s an unpleasant precedent in the existing ArrayBuffer standard that we need to live with as we resolve this issue, which is that some of the things you would expect to throw already in the language, such as reading a field of a detached ArrayBuffer or setting a field of a detached ArrayBuffer, instead fail silently. There’s a long history of why that is: ArrayBuffers got grandfathered into the language as something that was a de facto standard with which the de jure standard needed to be compatible. However we got there, we’re there, so we can’t change those cases.

MM: So altogether, our position is that, especially for the subtler issues of observable consequences of the order of operations, we want to drive the answer to all of these questions by implementer feedback, because if it’s easy for an implementation to implement something that follows one particular order of operations and not others, that is probably the dominant concern, rather than any semantic issue. However, there is a semantic bias that I certainly want to inject into that exploration, which is: when in doubt, throw. So in the Moddable XS implementation, if you assign to an in-range field, i.e., an indexed property of a TypedArray over an immutable ArrayBuffer, then it throws; if you assign outside of the indexed range, then it does what it does now.

MM: And the XS implementation, which is the only source of implementation feedback so far, does do that; but the Moddable XS implementation is not optimized for speed, it’s optimized for space and runnability, so we still need feedback from the high-speed engines. And that is it for the presentation; as agreed, I will stop recording and throw it open for questions.

JLS: The question is pretty straightforward. Instead of a sliceToImmutable in an attempt to get the zero-copy transfer, could ArrayBuffer just have a subarray, like what we have on TypedArray right now?

MM: Could it have a what?

JLS: Could it just have subarray? TypedArray right now has slice, which copies, and subarray, which does not copy. We could have that also on ArrayBuffer.

MM: I don’t know. It seems, just aesthetically, to be mixing levels, and it seems less orthogonal to me. That’s just five seconds of thinking about it; I don’t have a strong reaction one way or the other.

MAH: I understand James’ question, but it seems that subarray is a different proposal entirely, so I’m unsure how it is related to this proposal about immutable ArrayBuffers.

JLS: Well, the goal is just to get that zero-copy view. Where slice creates a copy, subarray just gets you a truncated view. And if you’re taking a subarray of something immutable, nothing can be mutating it underneath you.

SYG: My gut reaction is, no, we can’t do that, because the way things are architected today is that ArrayBuffers are never windows; they’re never views; TypedArrays are. The consequence is that if you make ArrayBuffers also sometimes be views, existing ArrayBuffer code may work for some buffers and not for others. At the language level, there’s no reason today to indirect the backing store of ArrayBuffers; some implementations may not have that indirection, and it is significant work to also make them indirected.

JLS: That’s fair.
MM: Let me make sure I understand: you’re saying no, not just to subarray, but also to sliceToImmutable itself?

JLS: That was going to be another question, but I’m after Mathieu in the queue.

MAH: So I am in favor of having a length parameter on transferToImmutable, as it would avoid a refactoring hazard: if someone uses transfer with a length today and wants to switch to transferToImmutable, they expect the buffer to be resized during the operation. All of a sudden, they would end up with an ArrayBuffer that hasn’t been resized, if that method is lacking a length parameter. So for that reason, I would prefer the length parameter.

WH: I agree, for the same reason.

MM: As I said, I also prefer the length parameter. Does anybody wish to express a preference for omitting the length parameter? Okay, great. In that case, I will consider that decided in favor of the length parameter. The length parameter, by the way, is already implemented in the Moddable XS engine; it is not yet reflected in the draft spec or in the shim. Both of those will be repaired.

SYG: I was typing, so I will just speak now. I have nothing against the length parameter, but I would like to point out that if you have a length parameter, using it may break the expectation that transferToImmutable itself is zero copy. If you transfer to a longer size, you would have to get that memory somewhere.

MM: Okay. That’s a very good point. So actually, let’s stay with that point for a moment. If the source ArrayBuffer that you’re calling transferToImmutable on is itself a resizable ArrayBuffer, and the length is still within the max length of the resizable one, would that still give you a length expansion and immutability with zero copy all at the same time; is that correct?

SYG: It depends. I would say in, like, 95% of cases, yes. If for whatever reason your language—not your language, your OS—under the hood doesn’t have zero-filled on-demand pages, you might have—so, the max length exists so that the OS can reserve virtual memory pages.

MM: I see.

SYG: Those are not backed by physical pages yet. Most OSs support zero-fill on demand, so when those pages get backed in, they show up as zero. If for whatever reason the OS doesn’t, you might need to incur some cost to make sure that the new pages that get backed in actually show up as zero.

MM: Good. That’s an implementation cost of the length parameter that I was completely unaware of. That’s good to know.

SYG: Specifically, my blind spot is Windows. I really don’t understand the Windows VM subsystem. If someone here does, please speak up.

MM: So are you okay with us proceeding assuming the length parameter, while explicitly stating that, because of these issues, we desire more feedback from implementations?

SYG: To be clear, I have no concern with the length parameter going forward. I’m just pointing out the consequences if you care about the constraint that transferToImmutable always be zero copy as a performance expectation.

MM: I see. I don’t care that it’s always zero copy. Well, I mean, I care, but I don’t care more than I care about the reasons for the length parameter. If you’re okay with it, in that case, let me ask: are there any objections to adding the length parameter? I’m considering that to be part of what I’m asking for as of Stage 2 now. Okay. So I will revise the spec and the shim; as I mentioned, the XS implementation already has the length parameter.

RPR: I don’t think anyone disagrees.
But let’s go to KG.

KG: Yeah, I think it’s fine that it’s not zero copy if you pass a larger length. Presumably if you pass a larger length, it’s because you needed that for some reason. It’s a pretty weird thing to need on an immutable buffer, because the extension is all zeros. But if you do need it, it’s not like you have a better option by composing some other operations, and it might end up being free if there happens to be space to resize into. So I still think it’s the best you can do. It’s fine.

RGN: In a similar vein, it’s also possible that newLength would be supported for truncation but throw for attempted expansion, making the restriction clear.

MM: That would be coherent, and I can see the argument for it. But if there is no objection, I would like to stick with the decision that the length parameter works in both directions, at the possible cost of not being zero copy on expansion.

KG: Very mildly prefer to not throw.

MM: Okay, good.

RPR: Mark, just to let you know, the timebox is running out. You have about four minutes left.

MM: Oh, okay. With one minute left, I would like to ask for Stage 2.

SYG: I may have misunderstood. For sliceToImmutable, I’m trying to understand two things. One, what is the concrete use case? The case I saw is that it’s nice to have this ability; I didn’t see a concrete use case. Two, what happens for sliceToImmutable on a mutable buffer? Does it detach the whole buffer and then give you this one immutable window?

MM: So, no. The piece of code at the bottom—we stick with that equivalence; it just wouldn’t be zero copy in that case. If the source ArrayBuffer on the left here was a mutable ArrayBuffer, then the slice would make a genuine new mutable ArrayBuffer that was a copy of the contents of the original as of that moment, and then transferToImmutable would take that one and make it immutable.

SYG: But that’s a very different semantics, because it detaches the copy. It doesn’t detach the original one. I can see use cases where you want—

MM: sliceToImmutable does not detach the original in any case.

SYG: But how can you make it zero copy if the original is mutable?

MM: Sorry, it’s only zero copy if the original is immutable.

SYG: I see. Okay, I see.

MAH: I think what it means here is that the spec would guarantee that when you do a sliceToImmutable on a source immutable ArrayBuffer, you end up having a zero-copy subset of it.

MM: Exactly. And a use case for that is that right now, you can ask a TypedArray or DataView for its underlying ArrayBuffer, and it gives you the whole thing. Well, maybe I want to create a TypedArray that does not reveal the entire contents of the original ArrayBuffer. This would enable me to let it reveal only a relevant subset, by making it a TypedArray on the slice. And obviously that’s what would happen right now with just a normal `slice`, but the normal `slice` does that at the cost of a copy. The only thing I’m focused on here is: if the original is immutable but reveals too much, and you want one that reveals less without making a copy, this would let you do that.

SYG: I see. Okay. I think I’m on the fence about this inclusion, barring a concrete motivation.

MM: Okay. Noted. Does anybody have a strong opinion either way?

MAH: It may help with performance. We keep hearing that engines cannot optimize and do copy-on-write and things like that for ArrayBuffers.
Here we have a particular opportunity to create a zero-copy slice of an ArrayBuffer that can clearly be zero copy. Without this API, we’re back to hoping that maybe some day engines can actually optimize this by doing copy-on-write.

MM: I have another motivating case for you. We want—you know, it’s not part of the TC39 ECMAScript spec, but in the larger ecosystem—immutable ArrayBuffers to be transferable via structured clone, and if you’re transferring one within the same agent cluster, it’s a zero-copy copy. In other words, the immutable ArrayBuffer exists in both locations without having copied the data. For that use, it’s certainly the case that one agent might want to transfer a subset of the data to another agent and not reveal the entire thing. And, again, it would be nice to be able to do that in a zero-copy manner.

RPR: So to your question, Mark, from earlier on sliceToImmutable: WH is in favor.

SYG: I’m not asking who would like sliceToImmutable—I’m asking for concrete use cases.

RPR: Also a reminder that we are basically at time now.

MM: Okay. Can I have a five-minute extension?

RPR: Five minutes is okay, yeah.

MM: WH, can you answer SYG’s question: do you have a reason why you want sliceToImmutable?

WH: Just to allow an implementation, if it wants, to make this zero copy. It’s too hard to optimize it if it’s rolled out into a combination of slice and transferToImmutable. But I wouldn’t *mandate* sliceToImmutable be zero copy.

MM: Okay, good. And that’s a good point about not mandating that it be zero copy, just allowing it.

WH: If it’s too hard to do the optimization, just expand it to slice and transferToImmutable.

RPR: I’m not sure we have—I think WH was first in the queue with preferring no throwing.

WH: That was before the comment queue got reversed. My comment about not throwing was regarding transferToImmutable.

MM: I’m going to skip over this and go to JHD, then.

JHD: Just wanted to concur: for every API, built into the language and platform or not, an absent boolean should be the same as providing false, and if a name makes that awkward, we should come up with a better name that works with that default. I very much support that.

KG: I was a little bit too slow to get on the queue. As a response to JHD: this wouldn’t be absent. This would always be present. It is only not present in older implementations.

JHD: Is the accessor not an option?

KG: Yeah.

JHD: Then my statement doesn’t apply to the accessor. But in terms of feature detection and things like that, it’s still nice if being absent on the prototype in one release, and present on the prototype in the next release, means false is the same value.

MM: I’ll take this as at least not an objection to naming it immutable.

JHD: Correct.

SYG: This is about the throwing or non-throwing behavior. I can speak for myself but not for the other fast engines here: I have zero interest in working on this part of the code, because it’s old and historical and all that stuff, and there’s a lot of it. The simplest thing for implementations by far would be to align with whatever detached/out-of-bounds does for the particular case, and if it’s not possible to do an operation on an immutable ArrayBuffer, we just pretend it is detached/out-of-bounds.

MM: Okay. So good.
That’s implementer feedback pushing us in the other direction. Let me just verify with the committee that we don’t have to emerge from the decision to go to Stage 2 with a stated preference on the resolution of that: the details of the order of operations, and when it throws, are something that we can investigate during Stage 2.

MM: So I would like to ask for Stage 2. First of all, does anyone support Stage 2?

WH: I do.

MM: Thank you.

NRO: These are reasonable questions to still have during Stage 2.

MM: Great. Also support from JHD, thank you.

RPR: And JLS. And CM.

MM: Any objections? Great. I have Stage 2. Thank you.

RPR: Thank you, MM. And then next up today, we have Nicolò with an update on import defer. Chris, are you ready to chair this one?

## import defer updates

Presenter: Nicolò Ribaudo (NRO)

- [proposal](https://github.com/tc39/proposal-defer-import-eval/)
- [slides](https://docs.google.com/presentation/d/1yFbqn6px5rIwAVjBbXgrYgql1L90tKPTWZq2A5D6f5Q/)

NRO: This is a follow-up to the presentation we had last plenary, where we went through two problems with the proposal but unfortunately did not have a concrete solution at the time. Thanks everybody for the feedback during that plenary; I’m now proposing an actual concrete solution.

NRO: The two problems we had: one is that a significant aspect of the proposal is that we made all gets of string keys on the namespace trigger execution, because that’s what tools can actually implement, or at least can easily implement, across a large group of tools. But that ended up not being enough. The second problem was that `import.defer`, the dynamic import form that the proposal has, was not actually deferring anything: it was always triggering execution, because resolving the promise internally reads the then property from the object, and that read triggers execution.

NRO: The reason we made gets of string properties trigger execution is that for many tools it’s not actually possible, or reasonable, to know what the exports of a module are when they start importing it. Normally I would still care about tools, but maybe not this much. The reason I’m considering tools so important for this proposal is that unfortunately most modules get transpiled or bundled, and the experience that many developers have is not through the implementation in browsers but through the implementation in tools.

NRO: There is still a need in the proposal to actually check the dependencies, because you need to check whether there is top-level await or not. But this is just a binary piece of information that tools can more easily check at build time. For example, the build process could just fail, so that you can assume the deferred module has no top-level await, or generate code in a different way to handle the async case. Either way, at build time you can assume that the delay is already handled in the right way.

NRO: So how tools can implement the proposal, in the general case, is to basically wrap the deferred module’s namespace in a proxy, and then, in the proxy, trigger evaluation of the module when necessary. In many cases they would be able to optimize the proxy away, because the use of the module is not that dynamic, so it’s actually not too difficult to perform this static analysis at build time. But in some cases, when that’s not possible, the way to do it is through a proxy. This code here is inlined in some sort of bundle, but tools use code like this a lot; for example, when transpiling with Babel and targeting older environments, it’s likely compiled to a synchronous import.
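
A minimal sketch of the kind of code a bundler might emit for `import defer * as ns from "./mod.js"` under these semantics (the sync-evaluation helper is hypothetical; real tool output differs):

```js
let cachedNamespace = null;
const ns = new Proxy(Object.create(null), {
  get(target, key) {
    // Symbol-keyed reads (e.g. Symbol.toStringTag) must stay side-effect-free.
    if (typeof key === "symbol") return undefined;
    // Any string-keyed read triggers evaluation, exported or not.
    cachedNamespace ??= evaluateModuleSync("./mod.js"); // hypothetical helper
    return cachedNamespace[key];
  },
});
```
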
NRO: So reading a string key triggers evaluation, whether the module exports it (in this case foo) or not. A string key, specifically, because reading symbol keys does not trigger evaluation, the reason being that you might want to check something like `Symbol.toStringTag` in a somewhat safe way. The key point here is that whether evaluation is triggered or not depends only on the key itself: before triggering evaluation, the proxy in the tool can check whether the key is a string or not.

NRO: So property access always triggers, but there are other ways you can observe the contents of the module: we have `Object.getOwnPropertyDescriptor`, `Object.keys`, the `in` syntax, and so on. The change I’m proposing here, to actually make it possible for tools to more closely implement the semantics, is that querying any info that depends on the contents of the module triggers evaluation. Before, it was only syntax or functions that would internally call the [[Get]] internal method of the object: this includes any get with property access syntax, `Object.keys`-style enumeration of properties, and `Object.getOwnPropertyDescriptor` with a string that is one of the names of the exports of the module. In the slide, `known` is an export the module has, and `unknown` is one the module is not actually exporting.

NRO: The proposed change, to actually make this implementable with tools, is that anything that queries the list of keys exported by the module should also trigger evaluation. So when we use the `in` syntax with a string key, that should trigger evaluation; `Object.getOwnPropertyNames` should trigger evaluation; and `Object.getOwnPropertyDescriptor` with a string key should trigger evaluation even when the export does not exist. Spec-wise, it means the operations that get the list of keys will trigger evaluation if the namespace is deferred. This is different for symbol properties, because we know that the module cannot have an export with a symbol name even without inspecting its contents. So, as I mentioned, this makes the semantics implementable for tools, and easier for platforms, though the binary top-level-await analysis is still required.

NRO: Not just tools: there are other platforms with synchronous module systems for which this simplifies the implementation of loading, as long as the platform has some way to know up front, for example when pushing modules to the server, whether there is a syntax error or top-level await. It’s not impossible otherwise to keep around the list of exports for each module that could potentially be imported; it’s just a little bit simpler not to have to.

NRO: The second problem that we had was that the `import.defer` dynamic syntax always triggers evaluation, and the reason is that `import.defer` gives you a promise resolved with the namespace, and that’s how promises work: resolution reads the then property, so we never get the deferred module.

NRO: We discussed two main options. One was to drop `import.defer` from the proposal: we could remove it for now and discuss how to do it in the future. The other was to hide the then property from deferred namespace objects, so that getting it from a namespace object would return undefined, and the deferred object would never have a then property, regardless of what the module exports. This would be similar to symbol-named properties, where we know that accessing them returns undefined even without knowing the contents of the module.

NRO: We propose going ahead with the second option, because even if we removed `import.defer` now, this problem remains: it is about resolving promises with deferred namespace objects in general, not specific to `import.defer`. And `import.defer` would, I hope, always resolve to one of these namespace objects rather than introduce a third type of object. It would not be possible to make this change later on.
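
A short sketch of the proposed observable behavior (per this presentation; semantics not final):

```js
const ns = await import.defer("./mod.js");

"then" in ns;            // false: deferred namespaces never expose `then`,
                         // so promise resolution does not trigger evaluation
ns[Symbol.toStringTag];  // symbol-keyed read: does not trigger evaluation
Object.keys(ns);         // queries the export list: triggers evaluation
ns.someExport;           // string-keyed get: triggers evaluation
```
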
NRO: There are some use cases for the `import.defer` dynamic form, even though it’s not the main motivation for the proposal. One is that you might want conditional loading in some place where async is allowed, while still deferring execution: you might have, at the top of a module, logic that loads different dependencies depending on the environment, without paying the execution cost up front. And also, I guess, it’s more or less for symmetry with how other imports work: we have import declarations with a dynamic form, we have import source with a dynamic form, and this just continues the pattern.

NRO: So this is why we propose hiding then. What does it mean exactly to hide it? As I said before, these deferred namespace objects never have a then property. According to the principle of when we evaluate—we evaluate when we need to query the contents of the module—reading then, or checking whether the namespace object has that property, would not trigger evaluation. Even though `import.defer` gives a promise that resolves to a deferred namespace object, and the promise resolution steps read the then property from the object, this still does not trigger evaluation, which is exactly how symbol-named properties behave.

NRO: So those were the things discussed last plenary and the approaches we propose going forward with. There have been two other minor changes suggested since that plenary that I would like to share with the committee. One concerns integration with logging utilities, such as the console built into Node.js: when stringifying an object, it’s common to look at its toString tag. And while deferred namespace objects are meant to be a drop-in replacement for namespace objects, they have differences, the important one being that one triggers execution when used and the other doesn’t. So it was suggested to give them a separate toString tag, such as “Deferred Module”. The reason the proposal currently uses “Module” is mostly how it’s written: rather than creating a separate type of object in the spec, I reused the existing namespace objects, adding some conditions in the various object internal methods. If I had created a completely separate object in the spec, I would have gone with a separate toString tag from the beginning. This is a change I would actually like to make, and I will see if there’s consensus for it.

NRO: There’s been another suggestion, coming from people thinking about how to integrate this with various logger implementations: you have to know how much you can log. A good logger gives you as much useful information as the user wants, but in a non-observable way, not triggering any sort of side effect. In platforms like Node.js—I’m thinking of Node.js because browsers have more interactive UIs, and I assume there is more going on there—a logger would probably want to show the exports of the module if it can, all the values it is exporting, and for that you need to know whether the module has been evaluated. The suggestion is to have a `Symbol.evaluated` property that tells you whether it’s safe to inspect the module or not.

NRO: This is not strictly necessary for dev tools that come from the engine itself. It matters in Node.js, because the console there is implemented in JavaScript: it already has to check whether an object is a namespace object or not, and it uses an engine-specific API to do so. We can go on to the next point.

NRO: And we are close to Stage 3, as far as I am concerned.
We already have tests for the major semantics of the proposal, that is, for everything that was not still open for discussion as part of this presentation. I have started working on tests for the changes, but I don’t have anything concrete yet, because I don’t know yet in which direction we will go. We have a work-in-progress implementation to validate that the tests are correct.

NRO: We are missing one thing: Stage 2.7 was conditional on the spec editors reviewing the spec text, and this would be a good time to do it. I will follow up once all the changes resulting from this presentation are merged, but yes, please try to find some time for this. The idea is that I will come to the next meeting proposing Stage 3. To the queue now.

GB: I just wanted to bring this point up again, and thank you for explaining it so clearly in the presentation: the semantics we’re changing here are being changed in order to support polyfillability in today’s bundlers and tools. So my question is, how important is it for polyfills to have perfect semantic alignment with the specification, when what we are doing here is creating trade-offs in the specification itself that are not justified by the use cases of the specification beyond the point when the polyfill is no longer needed? In particular, there are two risks being opened up by these changes. One is that, because early validation of named exports is no longer a requirement on implementations, apart from being a specification note for hosts, there’s no reason hosts couldn’t implement this by no longer making the key list—the list of names—available at all before the namespace is evaluated; there’s no requirement on implementations to have even validated the list of named exports. So how do we know that hosts like Node.js won’t decide to do this fully lazily and not do early validation at all, since the only requirement is in the spec?

GB: And the other point is that we do lose a use case in this—slide 7. With the new evaluation triggers, because you can’t check whether a key is in the namespace anymore without triggering evaluation, we lose the use case where you could defer-import something and still be able to do feature detection on the namespace, checking whether keys are available or not. That’s the context in which I’m asking about the importance of polyfillability as we expand these triggers.

NRO: Okay. Yeah. Thanks, GB. On the main part of the comment, about the spec requirements: the spec still normatively requires that mismatched exports or syntax errors are validated eagerly, and the way it requires that is that the errors are reported either during module loading, when it comes to syntax errors, because the load hook expects the result of parsing the module, which checks for syntax errors, or during linking, when it comes to linking errors. With this proposal, linking still happens eagerly. So I guess there is a potential confusion if somebody doesn’t read the spec carefully, because they might conclude that no info needs to be exposed eagerly and defer everything; but the spec requires that some things happen eagerly.

NRO: On the other point, I guess this is about trade-offs: what trade-offs are we comfortable making as a committee?
+NRO: To the other point, I guess this is more about trade-offs—what trade-offs are we comfortable making as a committee? I personally, while I’m hoping and trying to work towards a world where we need fewer build tools—and especially, where we do have build tools, to make them as light as possible and rely on the underlying engine as much as possible, so that, for example, with this proposal, tools don’t have to emulate the semantics and can just rely on the implementation—however, I don’t see that happening anytime soon. And that’s the reason why I’m pushing for trying to match what today’s tools can do. We’re talking about years here.
+
+NRO: Regarding the use case, yes, this is losing a use case. It’s losing something that you would otherwise be able to do. I don’t know how common it is to do feature detection without actually using the library—if you do feature detection, and may not be able to use the library, it doesn’t matter whether it’s evaluating in the first or the second branch. If you go to the full matter (?) of something. This is true. And I guess it’s about trade-offs.
+
+GB: Just a final point on the trade-off question: the question is, is there an alternative trade-off space in which we accept some degree of polyfill semantic mismatch in order to hold open future use cases, and has there been any thought to that? This question took a lot of time. Maybe we can continue that discussion offline as well.
+
+JHD: So as one of the major polyfill authors in the ecosystem—it’s more convenient for me when proposals are polyfillable, and when they’re more polyfillable. I don’t think that’s a good thing to guide language design. I think that it is perfectly fine if a polyfill can only do best effort in many cases, and I also think it’s perfectly fine if the polyfill has to be slower or bigger or harder to make as a result. It’s just the lot of a polyfill maintainer.
+
+JHD: There is often a correlation between something being more polyfillable and something being more consistent with the language, or something being easier to implement, and so on. And so it’s fine to use polyfillability as a test to surface those other possible issues, but I think it’s important that we use those other things as the motivation, and not polyfillability itself. And then, the second piece was about the host requirements. We have definitely already seen multiple examples of the spec saying something with an intention that is not mandated, and then we see implementations violating the spirit of the spec simply because the spec doesn’t prevent it. So it has been empirically valuable to tighten up wording in the spec to allow for the use cases we like, and to do our best to disallow the use cases we don’t. I am not trying to be paternalistic, but just… You know, we should restrict the things that we aren’t certain we need, because we can loosen things more easily than tighten them. Yeah. That’s all.
+
+NRO: It’s not like ignoring a requirement in some host hook—not implementing it here would mean removing the steps from the algorithms and placing them somewhere else. It’s actually in the algorithms, and not just some words around them. Quickly—first SYG, you were talking about the bundlers and polyfills. But let’s get to the question now, unless GB has something.
+
+JHD: Just to clarify, bundlers and transpilers are what we need to cover the syntax, but the things they transpile into would be a polyfill, and that’s where the polyfillability would come into play.
+ACE: Yes—I can completely see where you’re coming from, GB. I am not—I wouldn’t—if Bloomberg used `import.defer` just as a way to get a set of keys to do feature detection, it feels like—well, while that would work, it feels like the wrong way to go about it. Import defer is loading, like, the whole dependency tree, the top-level await thing. It seems more important that it doesn’t give you the exported keys of a module, and that that use case would be better served at another layer of the module system, rather than people using `import.defer` and reflecting on keys. But I do see where you are coming from.
+
+GB: Thanks. Yeah. I just wanted to state those points. I understand the trade-offs. I am just wondering how much exploration has been done in the trade-off space. But thanks for the responses.
+
+NRO: SYG? Was there a question or did you want to speak?
+
+SYG: You did answer it, but I would like to agree with you against GB here. I think—especially in the ESM space, because of the cost of the network—the dream of using ESMs outside of bundlers is a long ways off, if ever, from my perspective. So if there were any space currently that TC39 is looking at that really warrants favouring what the tools can do today, I think ESMs is it.
+
+GB: That makes a lot of sense. I guess it is a new perspective to me, having, you know, previously heard arguments in the other direction. But also, just to touch on what NRO mentioned: module declarations would provide a path for bundlers in the future to adhere natively to the semantics. So the polyfill semantics we are designing around, if module declarations are successful, would no longer be constraints in the module harmony effort, once module declarations are achieved.
+
+MAH: Yeah. I want to clarify that this means the `then` export is never available—basically when you transform an import into an `import.defer`.
+
+NRO: Yes, that is correct. It is generally already considered a fishy pattern to have a `then` export, due to how it interacts with dynamic import. But, yes, it would never be available on a deferred namespace regardless of how you get to it.
+
+MAH: Yeah. I suspect a `.then` static import is never actually useful, so it’s strange that adding a defer would now mean a missing namespace export.
+
+NRO: I agree. It’s an ugly solution.
+
+ACE: On the missing `then`—I have assumed that tools like TypeScript would also reflect the missing `.then` in the types. I haven’t actually checked that with them, but it seems non-controversial to assume. And if someone wanted to get the `then` for some reason, the workaround is creating another module that does `export *` from everything, to add a layer of indirection, and import-defer that wrapper. So people could still do things in this space, but yeah, it is missing. So I hope the tools will catch on, and if they do need it, they can put a workaround in.
+
+KG: Yeah. This is on a topic that we haven’t talked at all about, which is Symbol.evaluated. I really like the capability and really do not like the proposed solution. I really don’t want a new well-known symbol for this. I would be happier with just a new top-level global function, like isDeferredModuleEvaluated or something. Also, I think that can easily be in a follow-up. Anything in this space can be a follow-up. So I'm happy to see it go forward without this, but want to register support for having this capability at some point.
+
+NRO: Okay. Thank you. MAH and then MM, which I guess will say something similar.
+
+MAH: Same.
I really like the ability to detect if a module has been evaluated, but it’s something where maybe the stabilize proposal—the non-trapping integrity trait—might be able to reflect the fact that it has been evaluated or not, and I will let Mark expand on the integration with that proposal.
+
+MM: Yeah. So I actually need to elaborate on one thing that I forgot to mention in the proposal. It is in the draft spec text: for non-trapping, it would not just be with regard to the interleaving and reentrancy hazards of proxies, but also with regard to exotic objects. An exotic object is certainly allowed to observably interleave user code during access to a data property, but to simply allow that creates the same reentrancy hazards—and this was also raised when import defer first came up: the hazards of data access causing interleaving and possibly reentrancy. The non-trapping integrity trait, in trying to prevent that, would also need to say that if an exotic object does have that behavior, it is not non-trapping. And then if you try to make it non-trapping, either it has to change its behavior, to no longer do interleaving, or it has to refuse to become non-trapping—just like other integrity traits that exotic objects don’t uphold: they have to either come to uphold it or refuse to acquire the integrity trait. For namespaces, the reason we were considering this new symbol, or whatever the API is, really has to do with whether there is still a possibility of evaluation triggered by a data property access, which is exactly the interleaving issue that non-trapping is about. It seems like the same two choices could apply: you could say that the namespace of a deferred module starts off without non-trapping, and if you ask it whether it’s non-trapping, it will say that it is not non-trapping—sorry for the double negative again. But then if you try to make it non-trapping, it could either be refused, or—much more natural for import defer—it could treat that as a trigger for evaluation. And then, once evaluated during that attempt, it would return successfully from the request to make it non-trapping, and it will now be non-trapping because it is evaluated. So I was wondering what your reaction to that whole possible interaction between the proposals is?
+
+NRO: It seems reasonable. Especially given that KG said this capability is good but not in this shape, and here you are offering a different shape for it, we should probably explore working in that direction. I see there is still MAH in the queue. MAH, I would ask you not to go into this because we’re short on minutes. But thank you.
+
+NRO: Okay. So we have had feedback in both directions. I would like to see consensus for some of the changes. This one seems to be the least controversial: changing toStringTag on the deferred namespaces to say "Deferred Module" instead of just "Module". Do we have any concerns with this? If not, I will go ahead and change it in the proposal.
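+
+A small sketch of the change under discussion (specifier hypothetical; the exact string is what is being decided):
+
+```js
+const ns = await import("./mod.js");
+ns[Symbol.toStringTag];         // "Module"
+
+const deferredNs = await import.defer("./mod.js");
+deferredNs[Symbol.toStringTag]; // currently "Module"; proposed: "Deferred Module"
+// Reading the symbol-keyed property does not trigger evaluation either way.
+```
+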
+JHD: Well—I'm not on the queue, but: because they are different, typically when we have a different kind of thing, we provide some way to brand-check it and determine it’s that different kind of thing. toStringTag alone does not achieve that; it helps debuggability. So I have no objection to the change there. Is there a way to determine that a given object is a deferred module namespace object?
+
+NRO: No. In general, there is no way to tell whether an object is a namespace object at all. It’s probably the only thing missing a brand check. This proposal is not introducing that. As part of the proposals in the module harmony space, with the new module constructors, that brand check will come, but it’s not being introduced by this proposal, especially given that normal namespaces are already not brand-checkable.
+
+JHD: Normal namespaces, I think—yeah, you’re right. There were multiple things introduced in ES6 that failed to include a way to brand check: module namespaces, and errors—`Error.isError` is addressing the last one. But the only behavior I can think of for module namespace objects that is different from a frozen object is the live binding behaviour, if you are exporting a `let` or a `var` and then you change it.
+
+NRO: They can also throw TDZ errors when you have some property access.
+
+JHD: Okay. Fair. So I am certainly not asking to introduce that brand check for regular module namespace objects—it may be coming for both in the future—but there is a way that we could handle it right now, by doing the thing I wish all toStringTags had done in the first place: instead of being string data properties, being brand-checking accessors that return a string.
+
+NRO: We have, from some members of the committee, a requirement that all built-ins must be reachable, not just through syntax. So the answer here is no; we would need it exposed in some way as a property of some object in the global.
+
+CDA: We are past time.
+
+NRO: Okay. But I will be happy to work with you on the brand check. It’s unfortunate, so I am assuming we have consensus for this specific slide, even though nobody else voiced concerns? For the proposed change of hiding `.then`: do we have consensus for this?
+
+NRO: Okay. Thank you. I am assuming silence means yes. I don’t see objections in the queue.
+
+WH: Are there any reasonable alternatives?
+
+NRO: The alternative discussed last time was to never have the dynamic syntax. One extra alternative (?) discussed last time was to have `import.defer` resolve not with the namespace object itself, but with another object exposing the namespace through a property… so that the promise will be resolved with another object, which does not trigger this evaluation. But with all the solutions presented being considered ugly, this one seems to be the least ugly.
+
+WH: Yup.
+
+NRO: Okay. So I am not going to ask for consensus on the last slide given the feedback received. We have had very mixed feedback on this one [slide 7]. My preference is to do it, but Chris, could we do a temperature check in a follow-up discussion at the end of the meeting? Because we have 2 and 2.
+
+CDA: We can add a continuation.
+
+NRO: Okay. And probably just like five minutes.
+
+CDA: Sure.
+
+NRO: Thank you.
+
+CDA: All right. Thank you, Nicolo. Did you want to—well, I guess we have a continuation, so we can, I suppose, defer key points and summary to then.
+
+### Speaker's Summary of Key Points
+
+…
+
+## Error Stacks Structure for Stage 2
+
+Presenter: Jordan Harband (JHD)
+
+- [proposal](https://github.com/tc39/proposal-error-stacks)
+- [slides]()
+
+JHD: So a little history on this proposal. Since a long, long time ago, error stacks have been in implementations but not in the language. That’s unfortunate. A lot of folks want them specified in some way. And they’re currently not specified at all.
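+
+For context, a small illustration of the status quo—`stack` is nonstandard, and its format differs across engines (output approximate):
+
+```js
+function f() { return new Error("boom").stack; }
+f();
+// V8 (Chrome, Node.js):
+//   Error: boom
+//       at f (app.js:1:32)
+//       at app.js:2:1
+// SpiderMonkey (Firefox):
+//   f@app.js:1:32
+//   @app.js:2:1
+// JavaScriptCore (Safari) uses yet another format.
+```
+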
+JHD: So—2019, I think, was the last time that I actually brought this to plenary, although I think we discussed it briefly since, I don’t know whether in a previous meeting or not. But in 2019 I came back, and the state of it at that time was that `Error.prototype.stack` would be a normative optional accessor, so that the hardened JavaScript cohort can remove it. And then there would be static methods—which, given that at the time they were `System.getStack` and `System.getStackString`, and given our new position on `Reflect`, no longer match. The `getStackString` function is a static method that provides the same string that the stack accessor does; that’s how you get it in a normative, non-optional way. And there's a `getStack` function that returns what most people who work with stacks, beyond just looking at them, actually want, which is structured metadata that you can traverse and work with.
+
+JHD: This is such a massive problem, so I was attempting only to do the structure and schema of the stack traces, and to not delve into the contents yet—the actual prose. That is larger to document and research, and we don’t specify, you know, prose almost anywhere else, including error messages. It’s not entirely clear how we do it.
+
+JHD: So in the interest of sort of doing things in an iterative way, the proposal basically only provides these methods and the structure of the stack trace, which then—it sort of retcons it into: you build the string from the structure. It’s just that the contents of the pieces of the structure will be, you know, implementation-defined, or whatever the correct term is for that. Everything everybody does is correct. Browsers, for example, would have to add two new functions and move some of the stack code that already exists to the accessor—some already have an accessor. And they could take their string and reverse-engineer it back into the structure if they wanted to start with the string, although in all likelihood they have a structure they use to generate the string, and then that would be clean.
+
+JHD: So I presented this, you know, spec text, which I think I have updated to the modern approaches, but probably not all the way—it was up to date in 2019. And it has all the abstract operations that construct each of the little pieces and create the stack frame objects. And, you know, it doesn’t provide all the contents, but it certainly provides enough of the machinery that stack traces can’t get any worse in terms of structure and format. But it leaves the task of figuring out the wording to a follow-on proposal.
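+
+A rough sketch of the API shape described above (the location and the frame field names are illustrative, not settled):
+
+```js
+const err = new Error("boom");
+
+// The same implementation-defined string the normative-optional
+// Error.prototype.stack accessor would produce:
+System.getStackString(err); // "Error: boom\n    at f (app.js:1:21) ..."
+
+// Structured metadata instead of a string; the proposal specifies the
+// schema, while the contents remain implementation-defined:
+System.getStack(err);
+// e.g. { frames: [ { name: "f", source: "app.js",
+//                    position: { line: 1, column: 21 } }, ... ] }
+```
+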
+JHD: And I got surprise feedback in 2019 during the Stage 2 advancement—my recollection of it was that the work required to do stuff with stack traces was large, and as such, if it wasn’t going to be fully specified, we shouldn’t do it at all. And so I left that meeting discouraged, but trying to see if I would have the time to come up with the text, or if any volunteers would show up to help. And in all the intervening time, no one has done it. This is a boil-the-ocean request; then we don’t have stack traces in the language, probably ever. So I talked to a few delegates, and it was suggested I come back with no change to the proposal. In the agenda, I renamed it to “error stacks structure” to try to change the problem statement so it more accurately reflects the limited scope of the proposal. And hopefully maybe we can continue and get this advanced so that the work required to do the rest of the stack standardization is not so unreasonable.
+
+JHD: Yeah. So that’s essentially where I am at. If there are additional constraints that I missed, or misrepresented, or am unaware of, it would be great to hear about them. Because I have this much spec text, and it matches my understanding of the union of what engines do, I would love to go to Stage 2 with this, work out any additional kinks, and create a smooth path for whoever wants to do a follow-up proposal for the actual prose—the text of the stack trace.
+
+JHD: I think we can go to the queue.
+
+MM: Okay. So as co-champion, I want to first apologize that I didn’t coordinate with you ahead of time. The statement about `Reflect` is a little bit problematic. `Reflect` right now has only safe, non-privileged things—things that are completely safe to share among parties which should have no privileged access, or should not be able to communicate with each other. The reason why we made the accessor, `Error.prototype.stack`, optional is exactly that it could be denied. The `Reflect` namespace object is not something that is in the category of things that we would want to be able to deny; rather, it is in the category of things that we want to ensure we never need to deny.
+
+MM: If it did go into `Reflect`, we could cope, but we would have to cope by giving each compartment its own separate copy of the `Reflect` object that shared the non-dangerous methods—
+
+JHD: That’s already a consequence that I thought we had accepted along with the getIntrinsic proposal, which is still at Stage 1 and was planning to put that on. We discussed that last time. Yes, it adds an extra cost, but that’s, like, tolerable.
+
+MM: Okay.
+
+JHD: Either way, I am happy to continue discussing that within Stage 2. The name of the global is perfectly fine to resolve in Stage 2.
+
+MM: Okay. Good. Thanks for reminding me of that; I had forgotten about that. This is a consequence of my coming to this section unprepared. Yes, it would be the same issue as with getIntrinsics, and it would have the same resolution.
+
+JHD: Right.
+
+MM: And that resolution might very well be that each compartment gets its own, and that that be tolerable. Right now, my shim—a very, very stale shim, but my old stale shim—does produce a getStack by scraping the string. But it’s important to be very clear: you cannot do a conformant implementation by scraping the string, because of essentially the equivalent of an injection attack. Whatever punctuation you are looking for, a function name might have an open paren in it, and if it has it, you are never going to scrape the stack string to produce the structured stack—unless we specify completely reversible escaping rules for the string, and that would still take us away from existing implementations; I think people would be less willing to do that. In any case, it’s certainly fine for shim implementations to ease the transition. Altogether, I am very, very glad you are reviving this. I would like to see it go to Stage 2, especially if these issues are things that we all agree are solvable in Stage 2. But yeah, very eager to see this proceed.
+
+WH: I am trying to understand the discussion about `Reflect`. I don’t see any mention of `Reflect` in the proposal. What did you mean by that?
+
+JHD: We are talking about the location on which the two functions, getStack and getStackString, are made available. The proposal has them on `System`. But `Reflect` is another alternative location: the discussions around the getIntrinsic proposal, and another one or two, result in `Reflect` no longer being restricted to only matching proxy traps, which means it becomes a viable location for it. And so I just offhand mentioned it as another possibility. You can stick them on anything that meets the hardened JavaScript constraints and that would be fine.
+
+MM: I covered that one already.
+
+SYG: First, I would like to clarify that—okay, you did say that this has no change from what was presented in 2019. So you are really coming back and asking if the opinions of the parties who gave concerns have changed?
+
+JHD: Well—basically, yes. The two individuals who gave that feedback, who I believe were representing their own opinions, have not appeared in plenary for four years. So I am hoping that, if their concerns are not repeated by anyone else, given the additional time and the underlying value of an iterative approach to standardization, we can decide it’s still worth advancing.
+
+SYG: I am happy to reiterate Adam's question here: what does this make better? Like—I am reading the notes from back then, and it gives the structure, but the rebuttal to that was that, downstream of the structure, everything else is implementation-defined. You are speccing something that immediately requires more kind of engine branching to do anything useful with. And that structure is not, as far as I understand—I think it was said it’s not even in the top ten of the difficulties of looking at stacks. In terms of what problems this solves, there’s no delta. Procedurally, there are a few concerns going for Stage 2: you are iterating stack frames out of the [[ErrorData]] internal slot. I don’t understand what that does.
+
+JHD: Yeah. It’s certainly hand-wavy. The [[ErrorData]] slot in the spec is not used, and so I am putting stack frames in as a fictional concept. That is something I can—
+
+SYG: I just don’t understand that.
+
+JHD: Okay.
+
+SYG: But, like, I think the higher-order question is: what does this make better? The problem I heard is you want to spec it. That is not a problem, to me.
+
+JHD: I will let Nicolo, who is on the queue with a use case, go for a minute. But in general—
+
+NRO: The use case—okay. I have some libraries currently using the V8 API for basically this. The way I use it is that, for various reasons, my libraries alter the stack traces shown to users—I basically rewrite the stack traces to be nicer in some ways, so that users can see their code on both sides of the library: my library is called by users, and it calls back into their code. And I have some functions with special names that mark the entrance and the exit of my library code, and I remove the frames between those entrance and exit points and replace them with some fake frame. I have not looked enough at the proposal to tell how easy it would make this. But right now it’s annoying: I can do it in Chrome and Chrome-based environments, but in other environments it’s just the string stack, so it’s not worth it. Even if I had to do some, I guess, engine branching—because they might represent a function in some different way; I don’t know exactly how this proposal does it—even if I had to do that branching, it’s better than parsing the string by myself. Yeah. This is like my personal use case for this.
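+
+The V8-only mechanism NRO alludes to is the nonstandard `Error.prepareStackTrace` hook, which hands you structured call sites instead of a string—roughly:
+
+```js
+// Nonstandard; works only in V8-based environments (Chrome, Node.js):
+Error.prepareStackTrace = (error, callSites) =>
+  callSites.map((site) => ({
+    name: site.getFunctionName(),
+    source: site.getFileName(),
+    line: site.getLineNumber(),
+    column: site.getColumnNumber(),
+  }));
+
+new Error("boom").stack; // now an array of structured frames, not a string
+```
+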
+MAH: Yeah. In general, people have been wanting a consistent way to consume stack information that works across engines, so you don’t parse a stack string or implement custom engine approaches like V8’s prepareStackTrace and so on. So I think this provides a basis for building that API for representing stack information that people need. And I think that also means we’re going to need to specify some things that are currently not in the document specifically.
+
+SYG: This doesn’t solve that problem completely. It pushes the string parsing down to be, like, per-frame or something, instead of this entire giant string. It doesn’t actually solve the problem. You can perhaps reasonably think that is the starting point of solving it. But I am not at all convinced it is solvable.
+
+MM: So no—there is no parsing of strings employed by this. An implementation of getStack—well, for example, my shim implementation, because it’s on V8, where you have the internal structured stack objects, I use the structured stack trace object to produce the structure here; I am never parsing the string. Parsing a string, even within a frame, is inherently unreliable, because a function name might have an open paren in it—whatever punctuation you are using to do your parsing, a function name can have that punctuation. So the only hope for getting accurate structured information is to not lose the structure in the first place.
+
+SYG: What does this spec say the function name must be? Like, what does it say about that?
+
+MM: Well—I mean, JHD, you can answer specifically, but—
+
+JHD: Yeah. So this relates to the conceptual, fictional error stack frames in the [[ErrorData]] slot, which I completely agree is wildly unspecified, and that would be the implementation-defined thing. I have modeled the error data as—where is it? Yeah, a list of records, and the records have fields, and I pull the fields out and put them together in a certain way. But the contents of those fields are by and large entirely unspecified. Right? The ones that are numbers, like line and column counts, are specified to be numbers. And name is specified to be a string. But, like, what your name is—
+
+RPR: Can you show it on the screen? This relates to the question of—
+
+JHD: Yeah. Sure.
+
+MM: Let me—this question about function names is a great example of this issue, which is: in the language, functions have names. And perhaps there are two different things to reach for in the language with regard to the name of a function. But either one is an arbitrary string that might have colons in it and might have open parens in it. The expectation certainly is that the function name appearing in this stack structure is the string we consider to be the string name of the function, and the consequence of not having to parse a stack trace string to recover the function name is that you are not going to confuse where the function name stops and where the source location URL begins. And that kind of safe parsing is a big deal—the avoidance of parsing is a big deal. One of the ways in which systems go very, very wrong is when they introduce little embedded text languages that need their own little embedded parsers, especially when they have no agreed escaping rules. And for punctuation in function names, there are no agreed escaping rules, so the parsing is irreversible.
+
+SYG: That is fine, but, like, … does this solve that?
Does this—if I implement this, and Safari implements this, are we going to implement something that will help you?
+
+MM: If you implement this, then there is a getStack—
+
+SYG: "This" specifically meaning this spec draft.
+
+MM: This spec draft provides a getStack operation that provides the structured, you know, stack information, as a big JSON-like structure.
+
+SYG: It doesn’t. It gets a thing that we don’t know the meaning of from [[ErrorData]], which—it doesn’t tell me what to do at all.
+
+MM: Wait. Wait, wait. I don’t understand.
+
+JHD: SYG, let me clarify. You’re correct that I don’t tell you exactly what is in the [[ErrorData]] slot in the spec at the moment. But you do something to come up with the string that `.stack` produces. And I am assuming that comes from a structure that you have inside your implementation.
+
+SYG: That’s correct.
+
+JHD: And the feedback was for the champions to do the legwork of looking at the different implementations and the different structures, to come up with something here. Okay, so—what you are looking for, what you see as the requirement, is to dig into the actual code of the implementations, try to understand the structures they are already using, and relate that to the [[ErrorData]] internal slot, let’s say, which could filter down to the rest of it.
+
+SYG: No. You should have a detailed plan. It’s not my proposal. I am saying: one, have a clear goal beyond "I want structure". Like—the specific problem is that you have stack strings that are hard to parse, and people don’t want to parse them. That’s understandable. And you want structure, to recover the structure without having to do the parsing.
+
+JHD: Yes.
+
+SYG: If that’s what you want, we have interop problems that need to be designed for, so that the thing that everyone implements, once this—Stage 2 and beyond—is agreed to, is something you can ship a library on that works beyond just V8, or just Safari, or just Firefox, because—
+
+JHD: This already works for that. In other words, this already describes—this already is—
+
+SYG: It does not describe that, because it doesn’t say what error data it’s—
+
+RPR: SYG, okay. There is structure here. I think—maybe NRO, do you want to ask your question directly?
+
+NRO: Yeah. So actually, I would have to see the spec, an object, like some example. But I think you’re talking past each other. In the case of, for example, a function called `foo`: I throw an error in this function `foo`, and I catch it, and—as far as I can tell, the name could just be null and not `foo`. It’s still the structure that was specified. But nobody is guaranteeing the function name is going to be correct.
+
+SYG: Is it guaranteeing it must have a frame at all? Like, what—
+
+MM: No, it’s not guaranteed. But I am wondering if I am misunderstanding your objection. Because it sounds like your objection is that the structure itself is unspecified, and the structure itself essentially has a schema to it. Now, you know, there are frames with function names, with line and column spans, with source URLs, source indication strings. And then there’s an array of frames—there’s a schema. It’s certainly unspecified what data is used to populate the schema, but it is specified that the result of populating the schema is structured data that satisfies the schema.
And that schema would give us, for the first time, an interoperable, accurate way to navigate the stack structure that’s produced, in order to, you know, process it, give feedback, and all that. It does not mean that the contents of the schema will be interoperable from one implementation to another; that’s underspecified. But, you know, this whole issue reminds me very much of iteration order for for-in loops: we started with it being completely unspecified, and we were never able to arrive at consensus to fully specify the order in which for-in loops enumerate properties, but what we did is progressively narrow the remaining degrees of freedom over time, over many years of the committee reconvening on this. And each step of reducing the remaining degrees of freedom led to greater interoperability, and less danger of code working on some browsers and breaking on others.
+
+MM: I think one way I would recommend looking at this proposal is: interoperability on an agreed schema is a hell of a lot better than nothing, especially if it can avoid, you know, necessarily unreliable parsing to produce the schema. And then, with that agreed, we can iterate in committee over time as we discover what else we can agree on between implementations. If we require complete interoperability across implementations just to agree on a schema, that’s basically, you know, a formula for paralysis. We just want to move forward.
+
+JHD: Empirically, that’s what happened. Here is an example where the content that this proposal does not specify is this part—actually, this is the stack part. So this `f`, like, obviously it comes from the function name, but that’s not in this proposal. The source here, that’s a URL for the place it comes from, or something; those contents, those letters, are not specified. But that string goes in that highlighted area. Then a colon and a number, and a colon and a number—I have limited them to be positive integers—and another line. The contents specified here don’t completely solve the problem, nor does the majority of the proposals presented in plenary. What it does is solve part of the problem and build towards a better solution, where the amount of work that userland has to do to solve problems is less, or easier, or faster, or harder to mess up, et cetera. Or more secure, even.
+
+JHD: So there is a benefit here, a concrete benefit, if you are doing anything with stack traces beyond looking at them with your eyes—you obviate some of the work by using this proposal.
+
+SYG: So if, as it stands today, I always expose an empty array—is that compliant?
+
+MM: Yes.
+
+SYG: Okay. Then how does this help you?
+
+JHD: Because in practice, you are not going to do that, because you want to help your users.
+
+SYG: They are helped today with a non-standard thing, unfortunately, but they are helped today. In other words, what you are—
+
+JHD: It’s the fact that it’s standardized—that is the value. It is the fact that it’s doing the useful thing that people want, consistently across implementations. Which means any implementation that just ships an empty array where there’s a stack array won’t maintain its credibility; it will maintain its version of the stack string.
+
+RPR: We have ten minutes left. I think we are spinning on the same point here. Could we move on with the queue? There's a queue, a few more to go. All right. Thank you.
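+
+To make the ambiguity concrete: a function name may itself contain the punctuation a naive stack-string parser keys on (output approximate and engine-dependent):
+
+```js
+const obj = {
+  // a perfectly legal method name containing "(file:line:col)" punctuation:
+  "f (evil.js:1:1)"() { throw new Error("boom"); },
+};
+
+try {
+  obj["f (evil.js:1:1)"]();
+} catch (e) {
+  console.log(e.stack);
+  // V8 prints something like:
+  //   Error: boom
+  //       at Object.f (evil.js:1:1) (app.js:8:7)
+  // A parser looking for `name (url:line:col)` cannot tell where the
+  // function name ends and the real source location begins.
+}
+```
+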
+WH: Going by the example on the screen, you say things like `f` and the URL are just implementation-provided strings?
+
+JHD: Yeah.
+
+WH: Can those contain things like closing parentheses? Is there anything that prevents implementations from putting characters in there that make the whole thing unable to be parsed?
+
+JHD: Well, currently, there’s not—which is fine, because… nobody has to do the string parsing. It doesn’t matter what characters are in those things. We certainly can, and should in the future, lock this down to match what people are already doing, or, you know, so that nobody can do crazy things. But currently, you can do whatever you want. And the—
+
+MM: Wait. Sorry. A URL can have a closing—a function name especially can have a closing parenthesis—
+
+JHD: It can.
+
+MM: So, you know, it’s not that we’re going to reduce the punctuation allowed in the string. It’s specifically that we’re providing the structure, so that the structure can be examined without the burden of unreliably parsing things that might contain punctuation.
+
+JHD: We are not trying to make parsing easier, but to eliminate the need to parse.
+
+WH: Thank you.
+
+NRO: Yeah. So we have recently discussed this proposal in relation to some other proposal in TG4. I quickly mentioned it in the TG4 update, but there is a proposal that gives some ID to a file and also stores it in the source map, to actually connect files—like, if you throw an error, you can report it to some logging service, and—there’s currently a polyfill to get the ID of the error—like, post hoc, in the logging service, you can actually connect it to the right source map. And with the champions of the proposal, we were discussing how to expose these IDs for errors. One idea was just to go through the non-standard `Error.captureStackTrace`, or whatever non-standard API V8 has for this, with the significant drawback that it would make it V8-only. In TG4 we got feedback that there should be a standardized way of getting to it. The ideas were to go through some new global API in WHATWG, to get to the ID of a file—these are attached as a comment at the end—or, another idea was that, if this proposal has some movement, we could then expose the file ID in the structured data, which is the best way of exposing it.
+
+NRO: So if this is to move forward, the champions of that source-maps-related proposal would build on top of this.
+
+MAG: Okay. So at the beginning, this was like: okay, this is going to be the union of real things; this should be the minimal compatible subset. But this spec is having (?) at the beginning of a stack frame. That’s not compatible. We don’t have that. Right? And like—
+
+JHD: Did you five years ago?
+
+MAG: As far as I can tell, no—in the notes—
+
+JHD: That’s a bug in the spec. We should make that optional.
+
+MAG: Yeah. That’s fine. So, no. But I think, practically speaking, I see specifying the contents of the string coming out of `stack` as a very challenging thing. Right? To the point that I actually think the better version of this would literally be to say: hey, error stacks exist. There is a string. That string has maybe a property or two. Maybe we can agree that every frame comes after a new line—I'm not sure we can agree on that.
+
+MAG: But, like, this current design is a lot. And I will just bundle in my next point, which is that this is multiple proposals, and I have differing amounts of appeal for different parts of this.
Specifying that stacks exist, I think, is a good thing. We should talk about that. The idea of getting a string representation of the execution context—even if you only ever say that it is an implementation-defined representation of the execution context—good, we can do that. We can say, you know, maybe it’s not NXG (?), you can say that. It’s a regular thing. But, you know, trying to specify the actual format, I think, is bad. I don’t think we can do it, and frankly, I don’t think we will ever ship it. It’s a whole mess of web compat. And then the stack getter thing—I am super interested in it. It’s a really interesting idea. I totally agree with the pain points. I can imagine people, for example Sentry, probably would love it if there were an automatic way to get a programmatic stack. Great. I can totally imagine the use cases. But this proposal conflates all of the different pieces, and as a result, we have got this whole mess of a conversation here. Right? And so—the current proposal, no. Could it be split into proposals worth pursuing? I absolutely think so.
+
+JHD: So just to make sure I am understanding correctly: you see one of the proposals as this stack accessor itself, let’s say, and a different one to get the stack string, and a different one to get the structure?
+
+MAG: So, I mean, I am skeptical about things. I think for the string representation, probably the best you are going to get is a normative note that says it’s implementation-defined. I might be able to see that, say, if you have access to a programmatic way to be like, here is an object representation of a stack frame—I can see tools making use of this. Now, I am also terrified, because it’s a huge interop problem, and I would argue it should be specced such that, you know, implementations are free to drop random frames, so people stop depending on it. But there is a design space I would see exploring there. But yeah. Strings, no.
+
+JHD: What you said about dropping and adding random frames—that’s allowable. If we ship this today, you can do that. This proposal isn’t attempting to close down that design space. And I agree it could have an interop problem. But I am at Stage 1 looking for Stage 2—that doesn’t necessarily have to be settled before Stage 2.
+
+MAH: Right. I just don’t agree that this proposal, as it exists today, is well scoped and motivated; it is instead at least two, maybe three, proposals in a trench coat (?) at the moment.
+
+RPR: We have two minutes on the clock. There’s a bit more in the queue about this—whether to split into other proposals or not. I will point out, JHD, you are entitled to another 20 minutes.
+
+JHD: We can do a continuation tomorrow.
+
+RPR: But let’s try to close on time.
+
+JHD: All right.
+
+WH: I would be reluctant to split this into separate proposals, simply because I want to understand the big picture of what is going on. The issue I have with separate proposals is that each of them would be missing the big picture. I want to know where we are going. I’m perfectly fine with not getting there all the way in one jump. But I want to see where we’re going, rather than considering things incrementally.
+
+NRO: So given how difficult it might be to standardize the stack string—right now, the spec doesn’t say there’s a `stack` property on errors. Would anybody be unhappy if we say: well, there is a stack property;
so now we recognize that web-reality fact, and we just say it returns a string that is completely implementation-defined?
+
+MM: Since you are asking if anybody would be unhappy, I will just say yes. But because of limited time, I will postpone it for when we resume.
+
+RPR: Okay. We are at time. Let’s try and get DLM in, because DLM is the last person on the queue. Maybe we can squeeze this in here.
+
+DLM: I will be brief. MM—and SYG too—more or less raised the points I would have made. Basically, I see this as, you know, a source of potentially a huge amount of work for implementations to try to converge our internal representations of error stacks, and I could see that introducing web compatibility and interoperability problems. Because of that, I am not convinced by this, and, for what it's worth, that is like a blocking concern from SpiderMonkey. So feel free to request a continuation, but we’re not comfortable with advancing during a continuation.
+
+CDA: Are you unconvinced on Stage 2 for the shape of the solution, or is it a broader lack of conviction on the Stage 1 problem statement in general?
+
+DLM: I think we recognize there’s a problem here. We are—and MAH can say this—we are seeing web compatibility problems around errors and stacks. Yes, we agree there’s a problem. We are not convinced by this particular solution.
+
+CDA: But is it still worth spending committee time to explore a different shape of this?
+
+DLM: Yeah. And I think we have heard opposition to it, but MAH’s idea of splitting this up and prioritizing the things that are causing us real web compatibility problems would be of interest at the moment.
+
+RPR: Okay. Thank you, Jordan. I guess we will wrap up now and you will get a continuation.
+
+DLM: Cool. Thank you.
+
+### Speaker's Summary of Key Points
+
+* List
+* of
+* things
+
+### Conclusion
+
+* List
+* of
+* things
+
diff --git a/meetings/2024-12/december-04.md b/meetings/2024-12/december-04.md
new file mode 100644
index 0000000..b26a98c
--- /dev/null
+++ b/meetings/2024-12/december-04.md
@@ -0,0 +1,355 @@
+# 105th TC39 Meeting | 4th December 2024
+
+-----
+
+**Attendees:**
+
+| Name             | Abbreviation | Organization       |
+|------------------|--------------|--------------------|
+| Waldemar Horwat  | WH           | Invited Expert     |
+| Jack Works       | JWK          | Sujitech           |
+| Nicolò Ribaudo   | NRO          | Igalia             |
+| James M Snell    | JLS          | Cloudflare         |
+| Dmitry Makhnev   | DJM          | JetBrains          |
+| Gus Caplan       | GCL          | Deno Land          |
+| Jordan Harband   | JHD          | HeroDevs           |
+| Sergey Rubanov   | SRV          | Invited Expert     |
+| Michael Saboff   | MLS          | Apple              |
+| Samina Husain    | SHN          | Ecma International |
+| Chris de Almeida | CDA          | IBM                |
+| Keith Miller     | KM           | Apple              |
+| Istvan Sebestyen | IS           | Ecma               |
+| Jesse Alama      | JMN          | Igalia             |
+| Eemeli Aro       | EAO          | Mozilla            |
+| Ron Buckton      | RBN          | Microsoft          |
+| Daniel Minor     | DLM          | Mozilla            |
+
+## Import Sync discussion, request for Stage 1?
+
+Presenter: Guy Bedford (GB)
+
+- [proposal](https://github.com/guybedford/proposal-import-sync)
+- [slides](https://docs.google.com/presentation/d/1GW_OCoVjd6OJi9BKSlQzQKqxrB0GUKHKFof4s3rn9yk/edit)
+
+GB: So we are starting with import sync today. To give the background on this proposal: there was recently a PR on the Node.js project, #55730, for an `import.meta.require` function inside of ES modules.
The only runtime today that supports a syntactical `require` inside of ES modules is Bun, and what makes this possible is that we have synchronous require of ES modules inside of Node.js, and this, I guess, seemed like a useful feature for users to have. On further investigation and discussion, we were able to determine that the only reason this was desired—since you can import CommonJS modules and require ES modules—the only feature that was really wanted out of this is to synchronously obtain a module. And so this could maybe be thought of, in some sense, as the last feature of CommonJS that Node.js is struggling with. I wanted to bring it to the committee out of this discussion, to make sure we’re having the discussion in TC39, because there’s a clear demand for some kind of synchronous ability to get access to a module. And there’s also possibly a risk that, if TC39 were to consistently ignore this demand, there could be ways in which platforms continue to work around it and potentially create new importers which are basically going to have different semantics to the ones that exist in TC39.
+
+GB: So what are the use cases here? A very simple one is when you want to get access to the Node.js built-in modules, FS or any of those. Node.js added an interface to the built-in modules to solve this use case. That’s clearly not the use case that’s in particular being tackled here, although maybe a cross-platform version of it is. There's also synchronous conditional loading, and then getting dependencies that have already been loaded: if a dependency has been imported and is available, it’s available synchronously, if you had the ability to check for it—so, kind of like traditional `registry.get` use cases. And then there’s the sort of all-in sync executor use case, where there could be benefit in having a sync executor when we do module virtualization, and also module instances and module expressions and declarations.
+
+GB: And what is different about this conversation today versus in the past? One of the big changes that happened recently in Node, with the require refactoring, is that module resolution is fully synchronous. This is now pretty well set in stone. That is a recent development: up until recently, Node.js had an async hook pipeline and the ability to have asynchronous resolvers running on a separate thread, and various asynchronous hooks. All of that has been made synchronous now, or is in the process of being made fully synchronous. In addition, browsers implemented fully synchronous resolvers, which means that we can do the resolve part of an import sync fully synchronously, and we know that in reality all of the platforms today use synchronous resolution—that was never a given. That’s one of the changes. Another change that I think is worth bringing up is that a sync import was never a possible discussion before, because it went against so many of the fundamental semantics of the module system working between browsers and Node.js, and the difficulty in bridging the module system between these very different environments. But now that the baseline asynchronous behaviors are fully baked, fully reliable, and implemented—and, for example, the Node.js module story has come together—it is possible to consider synchronous additions which don’t sacrifice the core semantics but can layer on top at this point. And `import defer` has actually already done that work.
So the semantics that import defer defines are basically exactly the same semantics that you would want, in many ways, for synchronous execution. And then also, when we think about what we want for virtualization use cases: dynamic import is the thing we always talk about as the executor in virtualization and compartments, and that would always require virtualization being async. Having a synchronous form of import, a synchronous virtualization, could be useful.
+
+GB: The design I’m proposing here—this is a proposal; if someone thinks there is a better design, and I try to discuss a few, we shouldn’t move forward with this design—but this is the design I’m bringing forward for discussion today, which is just an `import.sync`. It would not be a phase; sync is not a phase. And the semantics would be roughly what `import defer` has as of today. You would do the synchronous resolution, throwing any resolution errors; if there’s already a module available in the registry, providing that; if not, doing host loading. There’s a question here about whether host loading should be included in import sync. So should we actually do the creation, compilation, and instantiation of the module inside of import sync? That is obviously something that the browsers won’t be able to do, so there is a divergence of behavior, where Node can do the full pipeline and browsers can’t, and it would be up to bundlers to bridge that. Effectively there would be a "not available synchronously" error: browsers could throw where Node.js could succeed, or TLA could cause a throw, et cetera, on the way to completion. It’s this new error that would be "not sync" or "not available synchronously", or some kind of error like that. Very similar to what Node.js does with require—when you require something that uses TLA, it will start to try to load it and give this error, and maybe leave some stuff partially loaded or something, telling you that you should use the async form. We can have some kind of host error—maybe a host error, maybe fully a TC39 error; we would decide what error it is.
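+
+A minimal sketch of the proposed form (semantics still under discussion; specifiers hypothetical):
+
+```js
+// Conditional, synchronous loading in a context where async is not allowed:
+function getSerializer(supportsNative) {
+  if (supportsNative) {
+    // Throws the "not available synchronously" error if the host
+    // cannot provide the module synchronously:
+    return import.sync("./native-serializer.js");
+  }
+  return import.sync("./js-serializer.js");
+}
+```
+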
+GB: And then, to explain how this could be useful in module expressions and declarations: instead of having to use an async function to get an instance for a module expression, if you have a module expression available synchronously, there is no reason you couldn’t synchronously evaluate the module—you have it. With module expressions you have everything available in the synchronous context. Maybe this justifies having a synchronous executor. For module declarations, with the dependency graph of the module declaration, you could synchronously execute the modules, as long as they don't have top-level await or third-party dependencies. What if they do have external dependencies? Consider an import sync of a module declaration with external dependencies, where in this example an outer import gets the dependency into the registry and gets it to execution. When the module tries to load the same import specifier string, it’s already in the registry and available. That can actually work: if you just bubble up all of the string specifiers to the upper scope and know they are executed, that will be fine. There is a nice interaction with `import defer`: if you have deferred loading of something, it will be ready to import—unless it is in a cycle, NRO reminds me. Import defer readiness is exactly import sync readiness. And the question then is whether it would be worth considering, for the import defer proposal, some kind of namespace-free defer, because in this example we never use the deferred namespace; we want to be able to access it through other means. For example, it could have been in the nested module declaration. So with the namespace-free defer you guarantee the semantics—you guarantee everything has been loaded and all the work done before you get here and import. And here is the example with module declarations: you would execute `name` and `lib` together late, on the synchronous executor, and the defer would have done all the upfront async loading.
+
+GB: That’s the semantics for `import.sync`. And to consider some alternatives of what else could be done: for a registry getter, you could have just a plain `registry.get` with an `import.meta.resolve`. In general, the registry probably belongs contextually, so you probably want it to be `import.registry` or some kind of local thing. And then you probably do want a resolveGet. And so I guess my point here is that the ergonomic API you want for registry lookups is something that does the resolution as well as checking the registry. So, just thinking about different APIs that could be possible—this ends up with semantics very similar to `import.sync`. But overall, the alternative design space seems to include: firstly, do we want this divergence between Node and browsers, where Node could maybe do more loading than browsers can? Or do we try to be stricter and say these are very strict semantics, and you can only import something that has very specific properties with respect to availability in the registry, or do something like registry capabilities? I do generally think that the use cases here get most interesting when we think about the interactions with module expressions, module declarations, and virtualization, and registry APIs might not be suitable there. But registry API exploration could be in the space of alternatives as well.
+
+GB: And then, across the use cases, for other alternatives to sync module executors: maybe you have a `.deferredNS` property on the module source, or something like that on the instance; maybe it’s some kind of function on the module source, for instance. Optional dependencies might have other solutions, like conditionally calling `import.meta.resolve`, or weak imports. We have the built-in getter in Node, and sync conditional execution is kind of solved by import defer already. But it could be worth having discussions around.
+
+GB: And the risk is: would import sync be something that pollutes applications, where people start creating much less analyzable applications, or applications that have different semantics between browsers and Node? Bundlers could still analyze it and make it work—it doesn’t seem like that exists in the ecosystem. If we had had this from day 1, it would have been a more tempting proposal; but today it seems like it would be hard for this to prove itself as more ergonomic than the static import system we have in place.
+
+GB: And then deadlocks: a cycle of import syncs is a deadlock. This is already effectively possible with import defer, I believe. And so, yeah, that’s a risk. And then, I mentioned there’s this browser-server divergence, which kind of comes down to the question: do we want to say that all modules that are import-synced must already be present in the registry in some shape or form, or do we allow it to do the loading and instantiation? There might be ways to define it to more closely match the defer model. Other, weaker import semantics could be possible to explore. I will just end there, and hopefully we can get a few minutes for the next presentation. I will go to the queue.
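+
+A sketch of the module-declaration interplay GB describes (module declarations are a separate proposal; all syntax here is illustrative):
+
+```js
+// The deferred import does all the upfront async loading:
+import defer * as _lib from "lib";
+
+module name {
+  import { greeting } from "lib"; // same specifier: already in the registry
+  export const message = `${greeting}, world`;
+}
+
+function runLater() {
+  // Executed synchronously, late; "lib" is ready because of the defer above:
+  const { message } = import.sync(name);
+}
+```
+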
+NRO: With a cycle of executions, import defer would throw instead of deadlocking.
+
+GB: Thanks. We would probably have the same behavior—that makes sense.
+
+ACE: So, on the relation to import defer: while import defer does define some aspects of this, it’s, as you know, crucially different, in that it splits up the explicitness of when the async work can happen and when the sync work can happen. It puts things in the spec that allow theoretical sync execution, but the developer intention is not that it must be synchronous. It allows browsers to load things asynchronously and allows top-level await to still happen. I see the relation; I don’t think import defer gives us like a free pass for this to just naturally follow on. There’s still such a fundamental difference between the two.
+
+GB: If I can follow up on that briefly—yeah, I think there’s a bunch of loading phases, and maybe it would help to put the phases of module loading down and mark which belong to defer, which belong to the asynchronous forms, and which belong to import sync. I think there’s a lot of crossover, insofar as when you do an import defer, you are doing execution, which is exactly what we want to do for import sync. And insofar as there might be a model of `import.sync` that we want to specify which should not be weaker than the model used by import defer. We could even say that we actually have exactly the same semantics, in a strict version of this, and then there is the question of how much it should be weakened.
+
+ACE: So if `import.sync` is just executing the evaluation phase, the last part of import defer, that’s only going to work when the host can call the load callback synchronously, or it's already in the—
+
+GB: It can’t call the load callback.
+
+ACE: So then we’re saying people need to ensure something else is adding things to the registry for this to work. A big issue we had trying to modernize our module system at Bloomberg, to be in a position to do import defer, is stopping code from making assumptions about what other modules are being loaded. So there will be comments saying "this is safe because we know someone else has already loaded this". We have been trying to make everything static, and we have lookups that are environment-based—load this on the server and this in tests. We’re trying to make all of those things statically analyzable and limit the things that rely on interaction at a distance.
+
+GB: So within this proposal, there’s a kind of gradient from the very strict version that is defer-level strictness—I don’t think we would get stricter than what defer is today—and then there’s maybe some slight weakening, where we could say we are going to permit some host loading or something. But I think the strict definition that matches defer is very much a viable implementation, insofar as you would actually ban any host loading. And that could be supported within the proposal as currently proposed.
+
+SYG: So I am concerned about the complexity of all the ESM stuff—adding sync back in particular concerns me. We spent a lot of effort with TLA to move the infrastructure in the spec and in implementations to everything being async. Adding another sync path that threads through everything makes me very unhappy. Also, from the browser perspective, the divergence problem concerns me. If we diverge, that seems bad. If we don’t diverge, the value of the proposal seems much less motivated to me, from the browser perspective certainly.
If Node is disallowed from synchronously loading, then why would Node want this? That is my question.
+
+GB: That’s a good question. Just to be clear: I personally have no desire to see `import.sync` today. I am not looking to progress this proposal unless others want to progress it. I am presenting it because it is something that people are doing, something for which there is a demonstrated use case, and because there is a risk that if we don’t show exploration of the space to solve use cases for users—if we don’t demonstrate we’re interested in having those discussions and instead shut them down—we take on other risks.
+
+SYG: Okay. Well, then let my crankiness be noted in the notes.
+
+GB: Your crankiness is noted.
+
+CDA: Let the record show that SYG is cranky. Thank you.
+
+MM: Make sure to coordinate with XS `importNow`.
+
+GB: That’s definitely been an input into the design. We’ll follow up with some discussions.
+
+GB: So I’m not asking for Stage 1. What I’m asking is: does anyone think I should ask for Stage 1, or does anyone think I should not ask for Stage 1?
+
+MM: I think you should ask for Stage 1. Stage 1 is weak enough in terms of what it implies with regard to committee commitment and signal, and the issues you’ve raised are perfectly reasonable to explore in a Stage 1 exploration. I would support Stage 1.
+
+JSL: Just on that point: I’m not particularly happy with having a sync option for import either. But I am also very unhappy with the prospect of Node going off and doing this on their own with no one else following suit. If this exists in the ecosystem, I would rather it be part of the standard—from the point of view of someone who has to keep their runtime compatible with Node and other runtimes, I don’t want to be chasing nonstandard, incompatible extensions to stay compatible with the ecosystem. So while I absolutely sympathize with the concern about adding sync back into this picture in the standard, I would rather it be done here than in Node, if that makes sense.
+
+DE: I’m not sure whether this should go to Stage 1. This is very different from the design of ES modules generally, and maybe we should be hesitant before giving this sort of positive signal to this direction. But I’m not blocking it.
+
+JHD: I mean, there are a lot of different desires around this stuff. The desire expressed on the Node PR, as I understand it, is something about being able to statically determine import-like points where new code is brought in. There are some folks who want synchronous imports. Some folks want to be able to import JS without the type boilerplate. There are some people who want the CJS algorithm in Node. A lot of the use cases for sync imports that I see are things that the conditional imports proposal from many years ago—conditional static imports—might have solved, and it’s worth looking into that. Simply being able to put a static import in a non-top-level position—allowing it to appear in blocks or `if`s and things like that—would, I believe, provide the same amount of staticness, and the same amount of seemingly—what’s the word I’m looking for?—apparently synchronous imports (see the sketch below). And it may drastically reduce the desire for a synchronous import. So I do think it is worth going to Stage 1. I think it’s worth exploring all these possibilities.
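+
+[Illustrative note: a sketch of the conditional static import idea JHD describes—hypothetical syntax for illustration only, not text from any current proposal.]
+
+```js
+// A static import form allowed inside a block: still statically
+// analyzable, and apparently synchronous to the surrounding code.
+if (globalThis.process?.env?.NODE_ENV === 'test') {
+  import { fetch } from './mock-fetch.js';
+} else {
+  import { fetch } from './real-fetch.js';
+}
+```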
JHD: But the reality is that there might be some use cases that we can’t solve in ESM because of the decisions made ten years ago, including doing it all asynchronously. If we can solve them, it is definitely better to solve them in TC39 than to have every individual engine or implementation deciding to make their own version. So I do think it’s worth continuing the discussion, if only to avoid that risk.
+
+CDA: Noting we are almost out of time. SYG?
+
+SYG: Just to respond to James’ point earlier: if the thing we standardize is not good enough for the sync use cases on the server side that motivated the server runtimes to come up with their own nonstandard solutions in the first place, we will then just have another standardized thing that people don’t use, and they will continue to use the nonstandard thing. I don’t think it’s a silver bullet to standardize something if it’s actually not good enough. We have to be pretty sure it’s good enough to replace the nonstandard solutions, and I don’t see a path right now to that. While I don’t block Stage 1—because, you know, exploration is what is needed here—I don’t see a path currently to Stage 2. I want to be very clear about that.
+
+JSL: Just quickly responding: yeah, I agree with that. I just think it is something that we need to discuss. I don’t see a path to Stage 2 right this moment either. But let’s at least have it on the agenda for discussion, then.
+
+CDA: Okay. You also had some voices of support for asking for Stage 1 from KKL and JWK, which I assume translates to support for Stage 1 if you’re asking for it.
+
+GB: I’m going to suggest a framing, then, in that case. What if we say that there’s sympathy for the use cases in this space, but there’s certainly not agreement on the shape of the solution, and so this specific proposal for `import.sync` is not the thing that’s being proposed for Stage 1? What if it were instead Stage 1 for something in this space? And maybe I actually update the proposal to remove the exact API shape and say this is still an exploration.
+
+MM: I think with Stage 1, the general thing is that there’s a problem statement, which is really what you’re asking Stage 1 for. The concreteness of having some sketch of some possible API is always appreciated, but the thing that Stage 1 is about is the explicit problem statement. I think this is, you know, a fine problem statement to explore in Stage 1.
+
+GB: So to be clear, what we’re asking for Stage 1 on in that case is not the proposed design, because that is not a Stage 1 decision, but rather exploring the sync import use cases—including optional dependencies, synchronous execution, conditional loading, and built-in modules—as the problem statement. And under that framing, DE, would you be comfortable with Stage 1?
+
+NRO: Dan is no longer in the meeting. But I will note that he did say he would not block Stage 1.
+
+GB: In that case, I would like to ask for Stage 1.
+
+NRO: Stage 1 is about the problem, not the solution, anyway.
+
+CDA: As MM said, it is not the strongest signal for actually landing something in the language. Do we have consensus for Stage 1? I think you had support from JWK, KKL, and MM. Any other voices of support, or does anyone object to advancing to Stage 1? Hearing nothing and seeing nothing, you have Stage 1. Congratulations. We are a little bit past time. Do you want to dictate a key points summary for the notes?
+
+### Speaker's Summary of Key Points
+
+Presented a number of use cases where synchronous access to modules and their execution could be valuable, and would like to explore the problem space of these under a Stage 1 process. There were reservations about the `import.sync` design, but we are going to explore the solution space further.
+
+### Conclusion
+
+Consensus on Stage 1, for the problem space of synchronous module access rather than the specific `import.sync` API design.
+
+## ESM phase imports for Stage 2.7
+
+Presenter: Guy Bedford (GB)
+
+- [proposal]()
+- [slides]()
+
+GB: So in the last meeting, we presented an update on the source phase import proposal. I will just go through a very quick recap of where the proposal is today. This is a follow-on to the source phase import syntax proposal, which defined an abstract module source representation for host-defined source phases, but did not provide a module source for JavaScript modules. So this proposal extends the previous source phase proposal to define in ECMA-262 a representation for a JS module source that represents a JavaScript source text module and also forms the primitive for module expressions and module declarations.
+
+GB: The feature is needed in order to fulfill the primitives required by module declarations and expressions, dynamic import, and host postMessage, as module harmony requirements. We’re motivating this proposal with the new Worker() construction use case. So the motivating use case—the ability that the spec will immediately be able to satisfy—is the ability to instantiate a worker directly from a source phase import. This is something that provides tooling benefits and ergonomic benefits for users, and enables portable worker instantiation across platforms. For the module expressions use case, we’re going to be supporting taking module expressions and messaging them to other environments: these are object values that can be passed to dynamic import and that support serialization and deserialization. The other update that we have from the last meeting is that we formerly had static analysis functions—the import and export analysis functions and the `import.meta` and top-level-await properties—on AbstractModuleSource rather than ModuleSource. These have since been removed, because they were a secondary use case of the proposal and not part of the primary motivation. To be clear, they still remain a future goal, but they are not suitable at this position in the design; instead we just focus on the module source primitive for the specification. These will likely come back in virtualization proposals in the future.
+
+GB: So when we got Stage 2, we identified certain questions we would need to further answer before we could seek stage progression—these four questions. The big one for worker instantiation is: can we actually do this across the different specifications? The source phase has implications for WebAssembly and HTML, and collaboration has to happen between standards bodies. Can we do that? Do these behaviors work across the specifications? We also identified early on that the module source, as specified in source phase imports, sort of implies that you would have an immutable source record backing it—generalizing the concept of a module, which in turn requires generalizing the concept of the key to align with this. I’ve got my numbers out of order here—I previously had number 4 higher up and switched it around. But the concept of a compiled module record is number 4.
And number 2, the concept of generalized keying, works with that: thinking about the problem of keying, and thinking about whether there should be some kind of compiled backing record. So number 2 is keying, number 4 is spec refactoring, and number 3 is how dynamic import behaves for module sources across different contexts—across compartments, across realms, and through serialization. These were all individually big problems for us to investigate. We spent a lot of time in the module harmony meetings working through these requirements, so I will give an update on each of them. Cross-specification behaviors: we presented at the HTML WHATNOT meeting on the 10th of October, explaining that this proposal was at Stage 2 in TC39 and that it specifies this new source object. Because source phase imports have already been merged into HTML, there was awareness of the source phase. We presented the new Worker use case, its semantics, and the transfer semantics it implies. There was genuine interest in the proposal, and no negative concerns were raised. It was not an explicit signal of intent or interest, but it was certainly an unofficial, positive signal, if that makes sense. So, based on that, I put together a very rough draft HTML PR to work through some of the initial semantics and prove out the cross-spec behaviors, and we worked through this. There are still some outstanding questions that we might well defer initially: we might say that SharedWorker and worklets are unsupported, and we’ll probably default to high-security settings for cross-origin instantiation and the COOP and CSP integrations. And then there’s another question on the HTML side about setting import maps for workers, which comes up with resolution and the idea that there is a rough isomorphism for modules in different agents—which only works if you have the same resolution. One of the things we’re looking at there is having good defaults for import maps in worker instantiation, so that this worker instantiation would actually clone the import map of the parent context, to do a best-effort match of resolution across contexts.
+
+GB: So this is the draft HTML PR—there is no HTML PR posted right now. As a Stage 2 specification, we would like to seek Stage 2.7 to be able to put up the HTML PR and move that into a spec and implementation process. In addition, I presented to the WebAssembly CG yesterday a variation of these slides, and gave another update on the implications for the WebAssembly integration. Again, the overall feedback was interest, and no negative concerns were raised. The second investigation was module keying.
+
+GB: So I want to just go through the semantics of how module keying works—it’s kind of the key semantic when you support dynamic import of these module sources. How does this keying model work? This is something that we spent a significant amount of discussion time exploring, and something we gave an update on at the last meeting. These are the semantics that we converged on. Here is an example of the module registry, on the left: there’s the key and the instance, and note that the source is part of the key. So the key consists of the URL and attributes and also the actual module source aspect—the compiled source text—and then the instance is the thing that you look up against those. So what happens when you import a source?
If there is not an existing source in the registry: the source carries both the source text and its underlying key, which is the URL and attributes. So when you import a source, it gets injected into the registry with that key and source, it gets instantiated against that key, and you get back that instance. If I later import the string key with the matching attributes, I will also get back that same instance, corresponding to that same source. What happens if I import a source that has a different compiled source—say you transferred it from another agent, and there was a file change in the meantime, so you had different responses on the network? That source came from the other agent; it has the same URL key and attributes as source C, but it’s a different module source. This is one of the primary requirements that we identified for importing sources: the module keying behavior is that if you import a source, you should always get an instance of the source that you imported. We discussed lots of variations here, and discussed them at the last meeting; this is the semantic that we feel is crucial to maintain for this model to make sense. So, when there is already that URL in the registry with a source, we add another entry into the registry against the new source and create a new instance for it: you get a new instance for the new source. Entries in the registry coalesce insofar as both the URL key and the source match—the two aspects of the source key. And then the other case is: what happens when you have an eval-like module? You could think of evaluating a string containing a module expression; the WebAssembly analogue would be `WebAssembly.compile` compiling some arbitrary bytes. These are not strongly linked to an original source URL key: they have a base URL, but the source that you have was just created by its module constructor. When you construct a module from source like this, it is an eval source—it just has a unique ID. It has a source and the unique ID, and when you import it, the URL-key aspect is not a full URL key, because there is only a base URL; in this case it’s actually this unique eval key combined with the source. So if you structured-clone these things, they do re-instance, because that key gets regenerated.
+
+GB: To summarize, the primary module key consists of the URL key and attributes, or the unique eval ID for unrooted modules. When we extend this model to module declarations and expressions, their key is parent-relative—basically the parent plus an offset, or something like that. In addition, there’s the secondary key, which is the module source: it contains the exact immutable source contents. We need to be able to do comparisons of module sources, so we define a new concrete method for module source equality, which is able to compare module sources between module records. We distinguish sources that are rooted to a key—the source phase records—from those that are not: the eval-ish things with the unique eval ID. And, as I mentioned, we define equality because you could have that case where we loaded two modules that have different sources with the same underlying URL key, and we need to be able to detect that and add another entry to the registry. So if they have the same keys, they coalesce; if they have separate keys, they are separate entries in the registry. So that’s module keying.
+
+GB: The next investigation is what happens when you move module sources between agents?
So if you have different types of module sources and you transfer them between agents and dynamically import them, what kind of behaviors do we get? This directly follows from the keying semantics described in the previous section. Here I have three types of modules: a module source that’s rooted to its absolute URL key, a local module declaration that’s contained in its parent module, and an eval module that was created by eval and could also be created by the module constructor. When I postMessage across, I send two copies of every module, so I have two variations of each module; they are serialized and deserialized twice. Because we do the serialization and deserialization twice, the module source object itself is unique for each structured clone operation—but that’s not the level at which this identity exists. Instead, when you import these objects, that’s where the keying identity comes into play. So for the source module, which is rooted with the URL key and source text, posting it twice means the URL key and source text will match for dynamic imports, so we get the same instance and namespace. Similarly, having done that, this module source is now present in the registry, and just as with the previous keying demonstration, if I import the URL key string, I’m going to get that same instance. We maintain identity for module expressions and module declarations equivalently, based on the parent, provided the parent is itself rooted. The eval-ish modules, on the other hand: every time you transfer them, that eval key effectively gets refreshed—it is regenerated on every structured clone. There’s no concept of a global key; it’s all just serialization. So they aren’t equal. These are the proposed semantics, and this is what is written up in the spec and host invariants. For structured clone within the same agent, you get the same behaviors: the module source and the string imports will give the same instance even though the objects aren’t the same, you have module declaration identity, and the eval-ish modules get a new eval ID created on serialization and deserialization and become unique instances.
+
+GB: The implication for WebAssembly is that it gets the same behaviors. We don’t have module declarations or module expressions for Wasm today—the module proposal that became the component model does support nested modules, so you would have something similar there—but in view of that not existing yet, we can describe this in terms of the source phase for Wasm, with `WebAssembly.compile` being eval-ish. If you post these things, you get the same behavior: the source module for WebAssembly matches the canonical instance, and referring to it with string imports matches that same canonical instance. One of the hard things here is that you can already compile WebAssembly and post messages today; we have to be compatible with that semantic as well, which we are. And if you have two different agents that have different sources—say on one agent a module URL key had the source `foo = bar`, and on the other agent the module happened to get a different source—and you post them both into agent 3: if you import the source from agent 1 first, `foo = bar` becomes your instance under that source, and it will have equality. But the two sources don’t coalesce, and the second source module gets a different instance.
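+
+[Illustrative note: a sketch of these transfer semantics, assuming the HTML-side worker integration described earlier; file names are placeholders.]
+
+```js
+// main.js
+import source workerSource from './worker.js';
+import source libSource from './lib.js'; // rooted: keyed by (URL, attributes, source)
+
+// The motivating use case: instantiate a worker directly from a source phase import.
+const worker = new Worker(workerSource, { type: 'module' });
+
+worker.postMessage(libSource);
+worker.postMessage(libSource); // a second, distinct structured clone
+
+// worker.js
+let firstNS;
+onmessage = async ({ data: source }) => {
+  const ns = await import(source); // dynamic import of a module source
+  if (firstNS) {
+    // Both clones carry the same (URL, attributes, source text) key,
+    // so dynamic import resolves them to the same canonical instance.
+    console.log(ns === firstNS); // true
+  }
+  firstNS = ns;
+  // By contrast, an eval-ish source (e.g. from a module constructor) is
+  // re-keyed on each structured clone and would yield distinct instances.
+};
+```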
GB: That follows from the core semantic: you get an instance of the source you import, not of a different source. That’s the core principle for importing a source—it must provide canonical instances for the source provided. We updated this in the spec, allowing an equality operation between sources via the module source record and a module-source-equality concrete method, and updated import to run through the HostLoadImportedModule machinery to allow it to perform registry injection and, when a record already exists, to coalesce on equality. Importing a source must return an instance of that same source—an extension of the existing invariants, such that the same instance for a given source must be returned every time. If you transfer a module source from an iframe to outside of the iframe, so that you have a module source from a different realm, today this will throw an error. It’s a one-line specification choice, because we weren’t sure if we should support it; this is purely a technical, editorial question, not an architectural one. It’s something where we can remove the lines or keep them—it seemed more conservative to add them initially, and we could always remove them later, making what is an error not an error. So this is something that could be discussed further as well, but it seems better to err on the side of caution without further discussion. And then the last investigation for Stage 2 was the refactoring of the source record. Today, while we talk about sources and instances, everything is just a Module Record. So we talk about this model of importing the source, having the registry keyed by the source, matching to the instance, and creating an instance against that key—but in reality everything is just the Module Record. In the registry you have Module Records against the URL keys, and when you import a module source, it just points to its Module Record. So the question here is: should we do this refactoring and split up the source and instance? What happens when you import a source that points to a Module Record? Well, we don’t inject the instance that you pass in; we inject the source. The instance gets injected, in effect, because it already existed in the registry—the Module Record already represents the registry entry. If you have the Module Record, you already have the registry entry. If you have the source object, you basically have the constraint that you must not rely on the instance data; that’s the constraint on the source data. So, for example—sorry, here, this should already be in the registry—if you import a module record that happens to have another instance on it, you’re going to get the canonical registry instance for that source, not the one that happens to be on the module record. So in the current spec design, we do actually specify these kinds of almost ghost instances that are unused, where you’ll still just get that instance 3. Every time you structured-clone a module source, you create a new Module Record that has this sort of floating instance, but you still converge on the registry instance. This is the question of spec reality versus spec fiction, and an important part of the discussion. The argument is that we maintain equivalence with the spec fiction, because the import of a source always yields the same canonical registry instance at the key and source. The only way to obtain an instance is through canonicalization.
Only the canonical instances are accessible—the ghost instances are fully inaccessible. That’s an invariant that we maintain. This applies to Module Records and Abstract Module Records: if we split them up, we split them all down the middle. But because we don’t have multi-instancing today, and there’s only one canonical instance per source identity, we can maintain the invariants on the current Module Records to specify the necessary behaviors. Only when we get to multi-instancing, or module instance primitives with compartments, do we need to start separating these things.
+
+GB: The key model is always consistent with the source–instance separation. And the argument that we are making is that right now, today, it would be an increase in spec complexity to make this refactoring. So, yeah—those are our Stage 2 updates. For Stage 2.7, we have reviews from Nicolo and Chris, and we also have reviews from the editors. In that review process, some things came up—
+
+GB: KG brought up a good point about a possible refactoring of GetModuleSource. Initially, in the previous proposal for source phase imports, we were only supporting WebAssembly source imports; we weren’t defining WebAssembly Module Records in ECMA-262, so we used a GetModuleSource concrete method to allow hosts to define their ModuleSource. But now, with ESM phase imports, we do this with internal slots. And we could ensure—writing it as a spec invariant the host must maintain—that it’s always the same object. But now that we have the field defined, we could actually go back to the source phase imports spec and upstream this new ModuleSource internal slot as an alternative to the concrete method. That would, on its face, eagerly populate the ModuleSource JavaScript objects for all Module Records, even those whose source is never imported—which we would then expect hosts not to actually do: for performance, they shouldn’t allocate objects that aren’t used, since maybe less than 10% of modules would ever expose their sources; we would expect them to do it lazily. But it might be a spec simplification to just define it as an object field. So that’s something I didn’t want to change upstream in source phase imports in this presentation, but something to continue with, to determine if it’s a suitable refactoring.
+
+GB: So I want to take a break there, and open up to discussion on the design of the proposal and any questions.
+
+JHD: Yeah—you mentioned something about Stage 2.7 allowing the HTML PR. Why is Stage 2.7 a blocker? I put in an HTML PR for `Error.isError` at Stage 2; I marked it as a draft.
+
+GB: That’s great to hear. Yeah, I think it would help a lot. You know, it isn’t just HTML but also WebAssembly as an integration, and spec and implementation work do go naturally together through this process. Also, reviews on HTML are generally reviews to land and implement a feature. I feel like if we want to see this feature shipping early next year, so that we can start to move forward with module declarations and module expressions late next year, then obtaining Stage 2.7 now instead of in February will allow us to see module expressions and declarations by 2026—and, you know, there are still two stages left after this as well. So I think it’s interesting to hear what the requirements for Stage 2.7 are, in terms of what the standards processes are for Stage 2.7 in this context; I think that’s a really interesting discussion.
Maybe we can move more of that discussion to the final discussion item on this topic.
+
+DM: Yeah, I am happy to postpone this to another time as well. But similar to what Jordan was saying, I am wondering—there’s an ambiguity about at what stage we want to resolve cross-specification issues. After the ShadowRealm discussion earlier this week, it sounds like we want this resolved before Stage 3, which means Stage 2.7 is a perfectly fine time to do that. It would be nice, as a committee, to make that part of our process so that we can remove this ambiguity in the future.
+
+SYG: For proposals where a majority of the proposal depends on another spec, like HTML, I prefer that the PRs advance at equivalent stages of maturity, in lock step. I don’t like the pattern where we advance to a more mature stage to convince the other body to, like, look at it for real. I think it’s fine to tell the other body that their interest in it should be independently derived, and that will feed back into 2.7 or 3. I don’t see why any particular standards body needs to move ahead of the other one. If HTML doesn’t have interest, that should directly feed back into stage advancement considerations here.
+
+GB: Just to follow up on that: I have written an HTML spec. Out of respect for the HTML authors, I did not post it up, because—well, HTML does not have a stage process like TC39 does.
+
+SYG: I think it’s fine for you to say that, like, there are no more concerns on the TC39 side aside from the HTML folks being okay with this. And if HTML says there are no concerns from their side aside from TC39 being okay with this, then we are both fine to advance.
+
+GB: That’s not the situation we are in here.
+
+SYG: I see. Okay. That sounds fine then.
+
+GB: So HTML did mention in the WHATNOT meeting that there could be a concept of an explicit signal of intent from HTML, and that there could be some process around this in the WHATWG meta issue. That could be something that TC39 could explore in this space for future proposals. We did not obtain that official intent, because it’s never been done before, but it’s worth mentioning in this context.
+
+SYG: Yeah. For the future, that would be a clear signal.
+
+NRO: Yeah—well, it’s already been said, but the problem is that asking web folks to review integration for a proposal at Stage 2 usually does not work, because proposals can significantly change at Stage 2, so they usually prefer to wait. I was not aware of this official-signal idea in any form, but we should really work out something like that for some of our proposals.
+
+MM: Okay. So I have a minor question on the slides as presented so far. But first, an orientation question: this discussion we are having right now—this is an intermediate step; is that correct? We are still going to have a chance to deal with—
+
+GB: I will follow up with a compartments deep dive and then a process discussion before the advancement.
+
+MM: Great. I will postpone all of my major issues until then.
+
+GB: Great. KKL, could I ask the same of you as well?
+
+MM: Okay. Just the minor issue, which, you know, if you want to postpone that till then as well, that might be appropriate—
+
+GB: If it’s about the design as presented so far, this is the time for that design discussion.
+
+MM: Okay. So—you talked about coalescing, and that was also a phrase that Kris Kowal used in our private discussion last evening. And in both cases, it confuses me.
Maybe it’s just a terminology issue. But coalescing, to me, sounds like there are two things that already exist separately and are then made to be the same. And from everything I understood, both last night and today, we are not talking about coalescing, if that’s what coalescing means; we are talking about using key information to look up an entry in a table and find an existing value, rather than creating a new value. There were never two separate values that are then retroactively made into one.
+
+GB: Okay. So I would update this slide to demonstrate coalescing, but maybe I can actually just make some edits here. In this case, you have got two separate agents that have a ModuleSource with the same URL, but with different underlying contents. So when I transfer the first module, I will get an instance of that source. When I transfer the second one, I will get an instance of that second source, and in this case there is no coalescing. If we instead had the same SourceText—if both contained the SourceText `foo = bar`—then the SourceText equality is, as you say, part of the key, so you would get the same instance, and this is what we mean by coalescing. Even though they are completely separate objects, serialized and deserialized separately, their identity coalesces. Strictly speaking, it is just a key lookup. But I have been using the term coalescing for the source coalescing, because the source is a secondary key, not a primary key. So it’s secondary-key coalescing.
+
+MM: What do you mean when you say secondary key? I think maybe I still don’t understand that. The pair together is the key.
+
+GB: Yes. From a lookup perspective, you would look up the string key, and then you would check whether the canonical source matches your canonical source ID. It’s like a primary and a secondary key.
+
+MM: Is it—
+
+GB: Maybe it’s a terminology thing. You could say the lookup is for that compound key.
+
+MM: Okay. As long as it’s consistent with saying the lookup is for that key, the issue of how you break up the overall key lookup seems like an implementation concern, not a semantic one.
+
+GB: That might be a case of spec fiction versus spec reality. The spec model is that you are effectively looking up the compound key. Because one part of the key is defined in HTML and the other part is defined in ECMA-262, it does end up being a two-part process.
+
+MM: I see. Okay. That was clarifying. I think I can postpone everything else I am concerned about.
+
+GB: Let’s follow up after the compartments discussion. Were there any other questions on the queue?
+
+KKL: I just wanted to throw in that I propose the word coalesce might be the source of the confusion. A way to describe this is that in transferring a ModuleSource from one agent to the other, the identities of the ModuleSource objects diverge, and when you import them, the identity of the corresponding ModuleInstance converges. Is that a good way to describe it?
+
+MM: Not to me. It makes me even more confused.
+
+KKL: Okay. Maybe—better luck next time, then. Pray continue.
+
+GB: I am sure there will be more on that topic in the compartments discussion, so we can have a more in-depth discussion shortly.
+
+GB: All right. So thank you, Chris, for the review, and thank you for getting it in swiftly. I appreciate you having taken the time yesterday.
In that review, what came up was that there are a lot of compartments interactions here that have not been fleshed out by this proposal, and so what I am going to attempt here is a rough working-through of what those compartments interactions might look like, for the sake of the compartments folks, so they can feel comfortable with the proposal.
+
+GB: Please do interject if you want to clarify, or if I am going off track from how compartments work or are intended to work. For folks not actively interested in compartments, this will be far too much detail—so my apologies in advance.
+
+GB: Consider the compartments model today, before the ability to import ModuleSources: compartments moved to a model of module hooks and module instantiation, and that is compatible with source phase imports, insofar as instances can be constructed from sources. Then you have import hooks: you can construct instances that have hooks and, through the hooks model, be able to virtualize the module system. In this example, if you want b.js to resolve to a specific instance, I can implement that hook, and dynamic import is used as the compartment executor to execute the virtualization. And so in this example, we have a static import of the local b.js and a dynamic import of it. Because module resolution is reified per ModuleInstance, the resolve hook only runs once: it runs once for the static import, and for the dynamic import you get the same thing. The idempotence property of modules—that the import of the same specifier should return the same instance—is maintained through the design.
+
+GB: It’s worth noting that we—
+
+MM: I’m sorry, before you advance, can you go back to that slide and just stay on it for a second, so I can observe something.
+
+KKL: Mark, the parent instance argument is irrelevant. It doesn’t exist in the proposal, but it also isn’t germane.
+
+GB: My apologies if my imagination of compartments differs from the actual compartments design. I hope what I have written adapts to the compartment model.
+
+KKL: I think so.
+
+MM: Okay. Okay. I am fine. Go ahead.
+
+GB: So what’s proposed is a local idempotence, not a global idempotence. Because the URL key is not defined in ECMA-262, we have no way of enforcing key equality, and so it’s possible to break global key idempotence easily: you could return a different instance for a “././b.js” versus a “./b.js”, and you would actually get a different instance. So you can violate global key idempotence as far as virtualization is concerned. It’s worth noting that this is an edge case of the model, and one quite similar to the edge case that dynamic import of sources also exposes.
+
+GB: So what happens when we introduce the ability to import a ModuleSource? We talk about looking up in the registry the canonical instance for this source key. But sources can exist across compartments—you can pass sources around—so how do we define the canonical instance? The only way to do this is to introduce a compartment key that is associated with the instance doing the import.
+
+GB: So here is the concept of multi-compartment registries, where you have two separate registries, each with canonical instances. The instance has a home reference on it: C1 on the first compartment registry and C2 on the second compartment registry.
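+
+[Illustrative note: a rough sketch of the per-compartment canonical-instance idea; `Compartment`, its options, and `importHook` are exploratory placeholders from this discussion, not settled API.]
+
+```js
+import source src from './b.js';
+
+// Hypothetical hook used to virtualize resolution within a compartment.
+const importHook = async (specifier) => {
+  /* virtualized resolution */
+};
+
+const c1 = new Compartment({ importHook });
+const c2 = new Compartment({ importHook });
+
+// Each compartment registry holds its own canonical instance per source key.
+const a = await c1.import(src);
+const b = await c1.import(src); // same compartment, same key -> same instance
+console.log(a === b); // true
+
+const c = await c2.import(src); // other compartment -> its own canonical instance
+console.log(a === c); // false
+```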
GB: When you import a source, in the spec fiction here, you put that source inside the registry, and you create a canonical instance in that registry for that source—and you get back the canonical instance of that source in that registry. In the other compartment, you will get the instance that is canonical in that compartment. Sources have canonical instances per compartment.
+
+GB: To illustrate that: what we would have to do is create a ModuleInstance with some kind of compartment key. So in this case, we are going back to our compartment constructor that defines the hooks, and passing that key into the ModuleInstance constructor. When we do that, the instances are associated with the compartments, and we are able to maintain the relation that when you call this importB function, which imports a source—its own source—it gets back the same canonical instance for that compartment. And so we are able to create, by design, the spec relation that the import of a source for a key is the canonical instance for that key, which will be the same as the one that you would import normally through the module system. So we maintain the new invariants introduced by source phase imports through the compartment key. There are some questions about how exactly canonical instances should be defined for compartments, and this is something that could be explored more in the compartment design process. But to widen the field here: canonical instances could be set once per registry. It could be part of the constructor—does the constructor immediately set the canonical instance, with some kind of `canonical: true` option, meaning this is going to be a canonical instance in the registry, versus a non-canonical, separate instance that exists outside the normal canonicalization process? Or is it an operation directly on the compartment, where you can create a source–instance relationship, and if you do it twice for the same source, it throws, because it’s already been done?
+
+MM: I’m sorry. I don’t understand non-canonical. The model is that the things we’re talking about have a key that’s looked up by the compound-key equality that you talked about, and that the per-compartment registry, if you will, has a single value per key—which says that there is a per-compartment canonical instance for that key. First of all, does that correspond to what you call canonical, and second of all, what is the use case for non-canonical? Why is the concept there?
+
+GB: When you construct a ModuleInstance, you can have multiple instances for the same source, which are non-canonical. The canonical one says: when I do an import of this source, this is the one I usually want. But you can create other module instances against the same source that have different resolution behaviors within the same compartment. If we want to allow multi-instancing within the same compartment, we need to distinguish canonical versus non-canonical, and the distinguisher is whether it’s in the registry against the key. There could be a compartments model that doesn’t allow multi-instancing, if we deprecate the ModuleInstance constructor.
+
+MM: I see. I see. It’s the coexistence of the ModuleInstance constructor and compartments that creates the question. Yes. Okay. Good. I understand. Thank you.
+
+GB: Great. Or, a special canonicalization hook: if you bring a source into a compartment that the compartment has never seen before, you could have a canonicalize hook that runs for the source.
Based on the invariant, it should return an instance that is an instance of that source, or otherwise throw. Alternatively, make it automatic—what you would expect—which is that when you do the first load of an instance for a source, it creates the canonical instance automatically. So yeah, there is a little bit of design space there. Another point worth noting: the spec reality, where we have this ModuleRecord that is both the source and an instance, already solves the canonicalization, because the ghost instance can just be adopted if the compartment matches, and if not, we create a new instance. So you need a compartment field on the ModuleRecord that you would carefully check when doing this adoption.
+
+GB: So what does that look like in practice? Say we created a compartment under what I am calling the ghost-record design—the last bullet point, where it happens automatically. When you import the source, that instance is now put into the registry, because it was the first instance seen for the source, and it’s now canonical. Now for dynamic import: if you have this importExternal function, which takes external sources, and we pass in source B—sorry, this is source A, apologies—so if it takes an external source and we pass the source in, it adopts the source and creates a new canonical instance for it. If we later do a string import, the canonical instance for that string will be the B instance. If we pass in source A, it’s the same instance for A. We have single canonical instancing here, so there are effectively no separate instances in this model. Furthermore, the import hook would not necessarily need to be called, because when you import B, it’s potentially already at the key for that ModuleSource. So there are some questions about resolve and import, and that aligns with how much to think about the local versus global invariants—there are still some questions there. But overall, the model seems to support these canonicalization features.
+
+MM: Can I repeat that back in my own words? If you’re importing from a source, then the full specifier plus the SourceText itself is the key. But then if, in that context, you import from the full specifier as a string, there is no SourceText yet to compare, so the two design choices would be: you go out to the network and fetch the source in order to have the SourceText to complete the lookup of the key, which is unpleasant; or the other choice—and this is where your primary-key notion, I think, becomes a relevant part of the definition—is to say, okay, I have already got the primary key, the full specifier, so rather than go to the network, I am going to assume that the SourceText I would get is the one that I have already got, and proceed under that assumption. Is that a correct restatement of what you implied?
+
+GB: That’s the model. The details are the question, and the hook design is the question. The way this is presented here, I don’t think it’s clear that the import hook would never necessarily be called, because you want to normalize the specifier, and you could have alternative normalization. So effectively, I would say that if there was a way to pass a normalized URL as the full URL, and you say that’s the thing, then that’s this model.
Because we don’t have a model for URL resolution—for the resolution of the non-source part of the key—this statement is not necessarily true in this framing. Sorry, I should correct this slide; it was late. But yeah, the model you described is correct, and there is definitely some design space there.
+
+MM: Okay. Good. I think I understand.
+
+GB: Canonical instances map one-to-one with their compartment, so the instance is associated with the compartment and the compartment keys the instance. If you import it, it will load the dependencies and give you the instance fully loaded. If you were to go inside another compartment and import an instance that belonged to a different compartment, you are not going to start populating the importing compartment’s registry. Obviously, in this design you could throw and say this is not allowed—you should only import things in your own compartment. If you do want cross-compartment loading, that could be an option, where it would drive the other compartment’s loading to completion. The point being that only sources are shared between compartments; instances don’t transfer between compartments. They stay in their home compartment—they stay associated with their own compartment.
+
+GB: To try to summarize: we do require a compartment identity for this source-key model, the canonical-instancing model. There are questions of spec fiction versus spec reality that need to be carefully considered, but both worlds are very much intact, and the combined ModuleInstance-and-source records actually help us to write the spec text in most cases, apart from having the ghost instances on cloned sources that are never accessible in the non-instancing, non-multi-instancing world. As long as we have the invariants for the separation, keeping the spec as simple as it can be, until it needs to be more complex, is better, so we don’t try to refactor before we have all the design constraints in place. I have tried my best to explore the interactions as much as we could, but there’s some design work still to go. Overall, there’s clearly only less work for compartments with the ModuleSource defined and the transfer and import semantics worked through.
+
+GB: So yeah, I am happy to have a discussion on compartments at this point, if you would like to. Kris Kowal?
+
+NRO: Yeah. So, a question about the table that you showed, with your compartment key. We don’t have compartments today, but we have realms, and they share some similarities, in that they are separate contexts that don’t share objects—when it comes to the web, frames. Even though we don’t have compartments today, do we need to add the realm as a third entry of the composite key? I guess also for workers—though with compartments it’s a more granular division.
+
+GB: I have also imagined that, since we need a compartment field on the ModuleInstance, it would point to its realm, and so you could maintain that anyway. But I haven’t thought about that.
+
+MM: I agree.
+
+NRO: Okay. I think the answer to my question is yes, because the map is already per realm. But I am not 100% sure about it.
+
+KKL: Yeah. I wanted to—for one, thank you for framing the conversation in terms of compartments; it’s been helpful for those of us invested in them. And for those of you not invested in compartments—apologies, but I want to draw you in anyway. If we go back to the slide that illustrates the ModuleInstance constructor, with the compartment property and its handler: Guy is using compartment as a placeholder word here. But this is more fundamental than compartment.
It is an abstraction that lives beneath the layer of compartments, and one that I think is well motivated for other reasons. Specifically, Nicolo has pointed out that in the shared structs proposal there would be an intersection between shared structs and multi-instantiation of modules and compartments and such—such that if you had multiple instances of the same source that contained shared structs, there is a relationship between the instance and which set of prototypes of those shared structs you get access to. And this exists within multiple realms of the same agent already; all that compartments add is another level of indirection between the execution context and the associated realm, effectively. So this registry would move from being keyed on the realm to being keyed on what I am going to call, for the purposes of this conversation, the cohort. That is to say, within a cohort, you get a single registry of ModuleInstances and also a registry of shared struct prototypes, and these can be the same concept at this level. And again, apologies—there are a bunch of complications that Guy proposes that I do not think will survive to the final design. I think that, in the end, the implication of the proposal that Guy is advancing—ESM source phase imports—landing ahead of module harmony is, for one, that there will likely be a simplification: the module constructor with its hooks, as you see in the ModuleInstance constructor here, will probably need to simplify down to just being an options bag on the ModuleSource constructor in a future proposal. Because the model that this proposal establishes is one where there is only a reified ModuleSource that directly addresses immutable source, and the attachment of hooks—the semantics for the current realm, its association with a particular registry—hangs off that. And yeah, I think this simplifies in time. I wanted to make sure my fellow delegates are aware that that is an implication of this proposal advancing ahead of the fullness of module harmony.
+
+NRO: So, when MM earlier restated to GB that we have the two choices—go to the network to check if the source is the same, or not—when we need to define the canonical instance, we actually don’t have a choice between the two options. It’s already the case that dynamic import of a source will not go to the network; that behavior is already settled.
+
+GB: So if you had a ModuleInstance constructor, I guess the open question there is whether that instance has been injected into the registry, blocking further fetching, or whether the instance exists outside of the registry in a sense.
+
+KKL: To follow up on that, I believe the implication is that the only way to get an entry into the registry is to dynamically import the thing.
+
+GB: That’s very much the model that we’re working toward.
+
+MM: So, several things. First, some further questions about key equality. When you talked about the eval-ed module expression, something surprised me in what you were saying: on transmission of the ModuleSource, it loses its identity—basically, a new identity is regenerated on each deserialization. I can understand why that might arise as a constraint of serializing data, if you don’t want to imagine that you can create unique unforgeable identities in data.
But other than that, it seems to conflict with the goals as I understood them from our private discussion last night: that the ModuleSource has a transmissible identity, where the key-lookup equality is preserved across transmission. So if the same originating eval is transmitted to the same destination multiple times, through multiple paths, then once it has arrived, it’s equal to itself. Is that still desired, and is it the constraints of deserialization that caused you to give up on that?
+
+GB: You could in theory define a cross-agent unique keying and have some kind of relation like that. I think there are a lot of benefits to making sure we don’t introduce new side tables, and so the most straightforward behavior was to just have structuredClone—serialization and deserialization—re-key evaluated sources. I would be interested to hear if there are other use cases for maintaining the key; it’s not something I have heard of as a desirable property. It was more a case of implementing the most reasonable design, as opposed to trying to introduce a new type of side table for a new use case.
+
+MM: I mean, the idea that it is a key locally—such that importing the same evaluated thing gives you back the same instance locally, multiple times—but that if you emit it through multiple paths, receive it from each path, and use each for import, you get different instances: that seems strange. It seems like a weird, incoherent intermediate case. If you want to regenerate the key on every transmission, you should just have it not be canonicalized in the first place, so that every time you import it, even locally, you get a unique instance.
+
+GB: So there are benefits for the local case: when using a ModuleSource constructor in the same agent, or the same compartment, there are benefits to being able to treat that as a key normally. It’s only on transfer that the key is regenerated, because it’s a local key, not a cross-agent key. And early on we ruled out the idea of having key synchronization between agents, to remove a lot of complexity—so putting that back on the cards would have to be motivated carefully. It’s not out of scope; it’s something that could be considered. But it’s not something that has been strongly motivated today, and there are definitely some high bars for introducing a whole new type of synchronization.
+
+MM: To restate in my own words, to see if we are in sync here: in the abstract, it would be desirable to say that a ModuleSource has a transmissible identity that preserves module equality, but doing that cross-agent has complexity costs that are just not worth paying, so as an expedient matter, we are not going to maintain key equality across agents for the evaluated ModuleSources.
+
+GB: Yeah. And to be clear, with evaluated ModuleSources on the web, at least, you would have to have an eval CSP policy enabled; in general the rooted sources, as we call them, are the much more recommended path, and for security, hosts control sources.
+
+MM: Good. I understand that. I want to make an observation about the spec complexity. I am satisfied that, as far as I can tell, you have succeeded at specifying an observable semantics that is consistent with the spec refactoring that we are postponing.
That was an issue that came up hard last night, and I am satisfied that you did that, and very glad that you did that. The statement that the spec would be more complicated on the other side of the refactoring, I don’t believe—and that’s based on a previous exercise that I did, exploring what the refactoring would look like. But I do believe, in support of the same end conclusion, that the effort to do the refactoring is a complicated effort. Not that the landing point on the other side of the refactoring is more complicated, but that the effort to do the refactoring is quite a lot of effort. Postponing it, as long as we are maintaining observable equivalence, is fine; I won’t object to that. I just wanted to register that I don’t believe the resulting refactored spec would actually be more complicated. I think it’s actually simpler.
+
+NRO: I agree with everything that MM said. The refactoring makes everything much easier to read, I think. But it’s a lot of work.
+
+MM: So, since you are asking for 2.7, I will state my position there: I very much want to see this go forward to 2.7. You have successfully dealt with all of the things that were red flags to me yesterday—so congratulations. I am on the edge of approving it for 2.7. But I think I don’t want to do that today, simply because of the size of the surface area of new issues to think about, and my uncertainty and fear with regard to whether I am missing something. If I had had more time to think about the new issues raised by the changes since what I understood last night, my level of fear might be reduced to the point that I would approve today. But I just think we need to postpone, and, as we discussed privately last night, we are going to continue to discuss this in the TG3 meeting, which meets weekly, between now and the next plenary, and I expect to be fine with 2.7 as a result of those discussions.
+
+GB: I am just going to quickly run through the last two slides and then make the formal request for Stage 2.7, as opposed to taking this as an immediate blocker, if you would be okay with that. I will jump to the very end.
+
+MM: Sure.
+
+GB: So, what we’re looking to achieve in the next steps: as soon as the import attributes spec lands, we will land the source phase PR, which this specification is based on. Source phase imports are now shipping in V8, and soon to be implemented in Node.js and Deno, after which they could seek Stage 4. To keep the module harmony train going, the goal is to have this proposal follow closely, so we can unlock module expressions and module declarations next year. If we can achieve 2.7, the downstream HTML and Wasm specification updates can move forward, and we would come back for a Stage 3 request before landing the HTML PR. So the HTML PR would not land before we seek Stage 3, and we would not progress the Wasm integration either without first getting to Stage 3 at TC39 and having everything presented at both groups.
+
+GB: To give a very brief demonstration of the spec: it’s a small amount of spec text on dynamic import, plus a couple of invariants on HostLoadImportedModule. So it’s not a large surface area of change to the spec. But being able to achieve Stage 2.7 would allow us to move forward with further investment in the proposal. I would therefore like to formally request Stage 2.7.
+
+MM: So thank you for those clarifications.
I am still going to object, but if you get consensus for 2.7 right now, could we agree to a process where I reserve my approval but could approve before the next plenary? In that case, if we get conditional approval now, then at the point where I am comfortable approving, you can announce 2.7. Is that a conditional stage advancement that we could agree to?

CDA: That seems a little bit awkward.

MM: Okay.

CDA: Yeah. If you have blocking concerns now...

MM: It's simply my degree of uncertainty, and the fact that 2.7 is a green light to implementers to proceed to implement. Long experience on the committee says that once there are entrenched investments by implementers, if there's a mistake from my point of view that needs to be corrected, especially if the people who have invested in implementations don't particularly care about the consequences of that mistake, the friction in getting the mistake corrected is much higher once they have been given the green light to implement and have invested in implementations. So the time to correct those things is before 2.7.

CDA: Okay. There are a couple of comments on the queue. NRO?

NRO: Yeah. Just that if Mark, thinking more about this, concludes that some tweak is needed, even if it's just some integration detail, it should probably be re-presented rather than advanced through the conditional path. What I am saying is that the condition should be: this is fine only if it ends up with Mark saying, okay, everything is fine. If Mark requests tweaks to the overall picture, it should probably be brought back for clarity, presumably with a quick approval next time, but it should be presented with the tweaks.

MM: That makes sense to me.

DE: I agree with what Nicolo said: if there are any changes, bring it back to committee for review. We have done lots and lots of these conditional advancements in the past, based on someone needing a bit more time for review, including with Mark in particular but also other reviewers, so I think it makes sense to do that here. We definitely need to work out all of the observable semantics before Stage 2.7; if we are not sure, we need to become sure, and this conditional is a way to do that. At the same time, I just want to make a slight correction on whether this is a signal to implement. The reason we separated Stage 2.7 from Stage 3 is that we want tests to be present, to save the implementers' time. It's optional to implement after Stage 3, and implementation sometimes happens before 2.7 for prototypes, but I wouldn't consider Stage 2.7 to be the implementation signal. That's it. So I support conditional consensus on 2.7, conditional on the proposal staying as presented and Mark asynchronously signing off on it.

GB: I would be happy to engage in meetings on a conditional progression, under the understanding described by both Nicolo and Dan.

CDA: Okay. Do we have support for 2.7?

NRO: +1. If I can add, I was in the same boat as Mark; it took me a while to understand that the spec text matches the implementation model.

CDA: Okay. Other voices of support for 2.7? I think Dan was a +1, if I understand correctly.

CDA: Aside from Mark's concerns and the pending review, do we have any voices of objection to advancing this to Stage 2.7 at this time? Any dissenting opinions are welcome as well, even if they are non-blocking. All right.
JHD: I would just ask that there be a specific issue where MM can comment once he has approved, so that we all have a place to follow and be notified when the condition is met.

CDA: GB, would you create an issue in the proposal repo for the conditional 2.7 advancement, as a home for those concerns and the follow-up approval from MM? That would be great.

CDA: Okay. You have conditional 2.7.

### Speaker's Summary of Key Points

(earlier mid-summary):

GB: So to summarize: we provided a ModuleSource intrinsic. The spec text is complete and has all necessary reviews. There is a possibility of editorial refactoring. We have investigated all of the Stage 2 concerns: cross-specification work, defining keying and equality based on the source record concepts, including identifying the necessary refactoring for the compartment specification. The semantics have been presented at both the WHATNOT meeting and the Wasm CG without any concerns raised.

GB: We presented the proposal semantics, including an update on the Stage 2 questions: cross-specification work; ModuleSource keying and equality; the behavior of dynamic import across different agents; and also investigating compartment interactions and the refactoring's implications for future ModuleSource records and compartments.

### Conclusion

We obtained Stage 2.7, conditional on approval from Mark after further interrogation of the semantics through meetings at TG3, which will happen before the next plenary.

diff --git a/meetings/2024-12/december-05.md b/meetings/2024-12/december-05.md
new file mode 100644
index 0000000..c330a72
--- /dev/null
+++ b/meetings/2024-12/december-05.md
@@ -0,0 +1,453 @@
+# 105th TC39 Meeting | 5th December 2024

-----

**Attendees:**

| Name | Abbreviation | Organization |
|------------------|--------------|--------------------|
| Waldemar Horwat | WH | Invited Expert |
| Jesse Alama | JMN | Igalia |
| Istvan Sebestyen | IS | Ecma |
| Gus Caplan | GCL | Deno Land |
| Dmitry Makhnev | DJM | JetBrains |
| Andreu Botella | ABO | Igalia |
| Keith Miller | KM | Apple |
| Eemeli Aro | EAO | Mozilla |
| Richard Gibson | RGN | Agoric |
| Ron Buckton | RBN | Microsoft |
| Jirka Marsik | JMK | Oracle |
| Jack Works | JWK | Sujitech |
| Samina Husain | SHN | Ecma International |
| Daniel Minor | DLM | Mozilla |

## Vision for numeric types in ECMAScript

Presenter: Shane F. Carr (SFC)

- [proposal]()
- [slides](https://docs.google.com/presentation/d/1Uzrf-IwPrljF2BhCbCWuwQxlgGSm_bcd3FRbPO3Yrio/edit#slide=id.p)

SFC: Hello everyone. You can see my slides. A little preface for this presentation: we've been going back and forth for a little while now regarding different number-related proposals, and it concerns me that we haven't taken a holistic view of how numbers work in ECMAScript and how we want them to work moving forward. We've been narrowing in on "let's solve this little problem here and solve this little problem there". So my goal for this presentation is to have a discussion about how we want numbers to work in ECMAScript in general; that can give us a framework so that when we work on the other proposals, we can see how they fit into the big picture.

SFC: So here is what I have on the agenda. First, I want to talk about what we currently have.
Then I want to talk about problems that I've heard delegates wish to solve; in the process of making this presentation, I spoke with a number of other delegates and synthesized what I heard into five distinct problem spaces. The third item is possible ways to solve the problems, and the last one is opinions of delegates: not just Shane's opinions, but the opinions of several delegates.

SFC: Starting with background on what we currently have: we have these two numeric types, Number and BigInt. Number has been around for a long time; it is approximately IEEE 64-bit floating point, and it does funny things with NaN and Infinities. I have this little line saying the domain is real numbers, to distinguish it from BigInt, where the domain is integers. One thing that's different about BigInt is unlimited significant digits, whereas with Number we have only what fits in IEEE 64-bit floating point; on the other hand, BigInt covers only the domain of integers. Let's talk a bit about Numbers.

SFC: So hopefully people are familiar with this. What is 0.1? 0.1 in memory is represented like this, as an IEEE 64-bit floating point number. The bits of the 0.1 are broken down into the sign, exponent, and mantissa, and the exact, full-precision value is 0.1 followed by a bunch of zeros; after you get past about 15 significant digits you get nonzero digits, always ending with a 625 at the end, because it's a base-2 number. So is 0.1 really 0.1? I think this is a question that has confused me, and confused a lot of other people when I talk to them about it.

SFC: So is 0.1 actually 0.1? Really interesting question. IEEE floating point numbers are discrete points on the number line, right? And every particular value of an IEEE floating point number can be represented in one of two ways: as its binary representation in memory on the one hand, and as the shortest round-trip decimal on the other. Every engine ships an algorithm for computing what the shortest round-trip decimal is. There is a unique representation of 0.1 as an IEEE floating point number, so if you have those bits I showed on the previous page, that is 0.1. So, yes, it is. But it's also not. It depends on how you interpret it: if you interpret it as decimal, it's 0.1; if you interpret it as binary, it's that other, full-precision value. That's the important distinction to draw. When you do arithmetic, you always do it in binary space; arithmetic uses the binary representation of the number. This is why you get things like this: 0.1 plus 0.2 is not equal to 0.3, it's equal to the binary floating point number that is one tick above 0.3, which I have here on the screen (0.30000000000000004). That's how binary floats work, and that's why they do what they do, right?

SFC: So with that little bit of background, I'm going to go into problems. But I'm going to open up the queue first to see if there's anything on it. Doesn't look like it, so I will keep going. I will go ahead and talk about the problem space. What I did is synthesize things down to five core problems that I see in terms of things the language doesn't currently do: issues that we would like to be able to solve. So Problem 1 is arithmetic on decimal values.
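The footgun this problem targets is reproducible in any engine today (plain JavaScript, nothing hypothetical):

```js
// Arithmetic happens on the binary values, so the decimal-looking
// literals do not add the way decimal arithmetic says they should:
0.1 + 0.2;         // 0.30000000000000004
0.1 + 0.2 === 0.3; // false

// The exact value of the double nearest to 0.1 ends in ...625,
// because it is a sum of negative powers of two:
(0.1).toFixed(55);
// "0.1000000000000000055511151231257827021181583404541015625"
```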
I synthesized this problem from the README of the decimal proposal, to summarize what I see as the use case of that proposal. When you're doing financial calculations, like calculating sales tax, for example, you want that to be done in decimal space, not in floating point space. There are specific rules that have to apply, and those rules are based on arithmetic as you learned it in second grade, which operates in decimal space and not in floating point space. That's something we don't currently have the ability to do in the language: there is no built-in mechanism to make 0.1 plus 0.2 equal 0.3. That's one missing feature, arithmetic on decimal values. The second missing feature is representing the precision of numbers. There's a thread I wrote on the decimal proposal repo explaining this idea. Depending on how the number is written, it may be spoken differently.

SFC: Therefore it affects how you internationalize it. You say "1 star" because that's singular, but with "1.0 stars", the zero at the end triggers the plural form, even in English. It's interesting that this shows up in English, which, when it comes to grammatical plural cases, has fewer rules than languages such as Polish, Arabic, and Russian. The fact that it shows up even in English means this is a very common, widespread problem. So why do we care about representing precision? First, for Intl: when we format the number, we want to know what we're formatting, and to decouple as much as possible the internationalization step from the representation step. I have a long post on GitHub if you want to dive more into this topic.

SFC: Second, we want to interop with systems that retain precision. IEEE decimal systems mostly retain the zeros. I have done the analysis on GitHub, looking at what languages such as Java do, and they retain the zeros; to fully round-trip, we need the same capability. The third is finance and scientific computing. Other people have posted on the issue noting that trailing zeros are important in exactly the financial calculations that the decimal proposal is aiming to solve. I make a note here that the IEEE reckoning of precision is primarily focused on the financial use case; scientific precision could have different ways of being represented. And the fourth is possibly HTML input elements. So that's problem space two: we want to represent the precision of numbers. There are a lot of use cases for this; it's not something we should leave out.

SFC: The third problem is representing more significant digits. The Number type is limited to 15.95 decimal digits on average, which means that 15 decimal digits is safe to assume. That's enough for a lot of cases, but not enough for every case. For example, large financial transactions, things on the order of Bitcoin amounts, could exceed that limit. Interoperability with decimal128 is also an issue here, because a system like Python or Java that uses decimal128 may have more than 15 significant digits, and we may want to interoperate with it. And the third use case is big data and scientific computation.
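A quick illustration of the significance-loss point (plain JavaScript; the values are made up for the example):

```js
// Each value carries roughly 16 significant decimal digits, but their
// difference retains only a few trustworthy digits: the leading digits
// cancel, and the rounding error of each operand dominates what is left.
const w1 = 1.2345678901234567;
const w2 = 1.2345678901230000;
console.log(w1 - w2); // ≈ 4.567e-13, with only the first few digits meaningful
```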
From time to time when I'm training my machine learning models, I run into exactly this issue: I have two weights that are very close to each other, I take the difference, and all of a sudden I'm down to three significant digits, which is not always helpful. There are definitely use cases in that area. So that's problem space 3.

SFC: Problem space 4 is unergonomic behavior. I could have put a few more examples on this slide, but the point is that we should have a numbers framework that just works. We want to make sure that programmers can avoid footguns like 0.1 + 0.2: something that works for users, without the mistakes that are currently easy to make.

SFC: Problem 5 is associating a dimension with a number. For example, we want to be able to carry not only the point on the number line, but also the unit being represented, for example dollars or meters. Why do we need this? Because in Intl.MessageFormat, `Intl.PluralRules`, and so forth, this is something we want to have as part of the data model; it also feeds into the unit conversion part of the Measure proposal, and it avoids a certain class of programming errors. After my talk, EAO will go into more detail to justify this problem, in case people are not convinced it's one we need to solve with the language. EAO in the next time slot has an excellent slide deck with more of the motivation behind problem 5.

SFC: I see JHD has questions. Before I get to those, I will go ahead with the next section of the slides; I think they might be answered there.

SFC: A non-issue, and I want to emphasize this because I think it has been a point of confusion, is being able to represent decimal values. As I showed earlier in the deck, as long as you take your IEEE binary floating point number and say "I'm going to interpret this as a decimal", you can represent decimals exactly. 0.1 does triple-equal 0.1: if both are created the same way and normalized the correct way, they will equal each other. That is a correct representation of 0.1. So representing decimal values is something we can already do in the language; it's not necessarily type-safe (that goes into problem 4), and maybe not ergonomic, but it can be done. The problems we often see arise when we operate on numbers: we don't have decimal arithmetic, even though we are able to represent decimal values if we interpret the number in the correct way.

SFC: I'm going to go over some solutions now. The solutions are not in any particular order; I put them in this order to most easily explain what the different aspects of these different types of solutions are. When I say solutions, I mean ways that all the different problems we're trying to solve can fit together in one cohesive package for developers.

SFC: Solution 1 is the Measure proposal, which BAN presented at the last plenary and EAO will describe more today. It's a number plus a precision plus a dimension. The number is currently a JavaScript Number, a point on the number line; it could also possibly support current and future numeric types like BigInt. The precision is the number of significant digits, and the dimension is the unit. So this solves the precision problem and the dimension problem. It's possible that decimal math could be included via prototype functions.
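A rough sketch of that shape (the constructor and field names here are assumptions; Measure is a Stage 1 proposal):

```js
// Hypothetical all-in-one Measure: number + precision + dimension.
const m = new Measure(1.5, { precision: 2, unit: "meter" });
// m would carry the point on the number line (1.5), the number of
// significant digits (2), and the dimension ("meter"); the dimension
// could be null for a plain, unitless quantity.
```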
It's also possible that you could support more digits via string decimals. If the number type is abstract, then we could add functionality saying: if the number is a string, go ahead and do decimal math with it. You basically use string as the type in which you encode the arbitrary-precision decimal value, without exposing it directly; it would sit inside this wrapper. Measure could be an all-in-one solution that represents all these things. The dimension could be null if you just want to represent a decimal value without any unit attached to it: set dimension to null, that's fine. Otherwise you can use this one package that has all these features and solves all the problems, except not necessarily ergonomics, because it doesn't give a direct way to do 0.1 + 0.2 as a primitive.

SFC: The next type of solution is Decimal128 with precision. IEEE decimal128 is an encoding over 128 bits that is able to represent numbers with quanta and precision: quanta and cohorts. JMN has talked about this in previous plenary meetings. If we add such a type to ECMAScript, we could add one that is fully conformant with IEEE. Measure would then no longer need precision, because the decimal carries the precision; we solve the precision problem. One concern I heard when discussing this with folks is precision propagation. If you have two numbers and multiply them together, IEEE gives a very specific algorithm for how you calculate the output and how many trailing zeros the output has. That algorithm is sometimes surprising in how it behaves; I'm told it's based on a certain set of rules for how you do financial calculations, but it's not necessarily a generally applicable algorithm. Another concern is the equality operators. If you have a decimal value of 2.5m and another of 2.50m, you want them to be equal because they represent the same point on the number line, but the representation in memory is different because they have different precisions. Do you include the precision as part of the equality operation or not? There's been some debate about that, and it causes concerns especially when we look at what the behavior would be with primitive values, because that is much more constrained: if decimal is an object, we can have two equality functions, equals and totalEquals, which is what Python does. When it's a primitive, we don't have that luxury.

SFC: Solution 3 is Decimal128 without precision. This means that within the decimal128 space, we only include the numbers that don't have trailing zeros; the ones that do have trailing zeros, we just don't expose from JavaScript. If you have a decimal128 value that has trailing zeros, that is not something you're able to represent as a Decimal128 in ECMAScript. The main benefit I've heard for this is that it's potentially better for a future primitive decimal, because it makes the equality operators behave the way certain delegates expect them to behave, which is nice. A concern I have is that the unused bits are wasteful, because IEEE gives us a framework to represent precision in the same bits in which the decimal is represented. Overall, if you take every bit pattern that could possibly represent a decimal128, 10% of them have trailing zeros.
For numbers with fewer than 20 significant digits, a common use case, over 90% of them can be represented with trailing zeros. We lose the ability to represent those values if we adopt this limitation. Storing precision separately is possible, but it doesn't work as well with arithmetic operations and so forth. The other concern is that this is not interoperable with decimal128 systems in which precision is part of the data model, as other languages support; we lose the ability to have interop. This is the concern I raised in Tokyo when this was presented.

SFC: Solution 4 is DecimalMeasure. This is a new one I'm throwing out there, to add it to the field of possible approaches. With DecimalMeasure, we take the idea of a Measure, but instead of wrapping a number with a precision, it wraps a decimal with a precision, and associates that with a dimension. This could have decimal semantics, and a future primitive decimal could still be its own type; I want to emphasize that. It could be composed: the DecimalMeasure type could be composed with a fully normalized primitive decimal. There's no reason that can't be done, because these are two different enough types that they could co-exist in the same universe. There's also an alternative, i18n-focused DecimalMeasure: one way to think about Measure is that it's just an input type for Intl operations; the other is as a generally useful type with other operations on it. DecimalMeasure could take either shape.

SFC: Solution 5 is Number.prototype. I want to talk more about this one; I posted about it in the decimal repository. Number is able to represent a decimal value, but you can't operate on it as a decimal; that's the main footgun. decimalAdd could be a prototype function, defined so that if you have 0.1 and 0.2, you add them up as if they were decimals and get the decimal result back as a Number. I hope that makes sense to people. There are a couple of ways this could be exposed to developers. It could be exposed with a new operator; since these are already primitives, we can spec out an operator. Another way is JSSugar or TypeScript: TypeScript could introduce a type called "decimal number" or something like that, and in TypeScript land, if you use the plus operator on a decimal number, it gets compiled to JavaScript as `a.decimalAdd(b)`. This is a nice way for JS0 and JSSugar to work together: you have the sugar layer and then the built-in layer, and it's a minimal change you have to make on the built-in layer. It really gives TypeScript and JSSugar the ability to do something on the user-facing layer of the API, by exposing this primitive operation called decimalAdd. I'll keep going.

SFC: Solution 6. There we go. This is one I brought up. I haven't gotten a clear signal from any engines yet; I sent some inquiries and haven't gotten an answer on whether it's feasible, but it's an interesting idea. We have the existing BigInt type. What we could do in principle, and again I don't know if this is feasible or not, is add a field to it for a scale, and the scale would let it represent a decimal value. Existing BigInts would work exactly the same way they do today: if you construct them, they're fine; if you compare them, they're fine; everything works as expected. However, you would also be able to construct a BigInt with a scale.
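Purely as an illustration, since no such constructor exists today (the shape below is invented for this sketch):

```js
// Hypothetical scaled BigInt (Solution 6); feasibility is an open question.
const price = BigInt("12345", { scale: 2 }); // would represent 123.45
const count = 12345n;                        // existing BigInts are unchanged
```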
If you do that, what you get is a decimal BigInt. There are some questions here about what you do with the slash operator: if you have two BigInts and divide them, that would have to maintain existing behavior, so we would probably have to add another operator that does a decimal divide, for example. Another concern is that we would have to evaluate the risk of changing BigInt's domain: for example, if there's a program that assumes it has an integer, and maybe uses it to index into an array, and you pass in a BigInt that is no longer an integer, could that be a problem? That risk would have to be evaluated. And of course feasibility. It's a solution I want to throw out there; I haven't seen anyone give a definitive answer that it is not feasible, and I think it's an interesting avenue that could be explored. The benefit is that it gives us a primitive right out of the gate, because the primitive is already there. So that is solution 6.

SFC: Now I will go through some opinion slides, and after these we'll open up the discussion. I'm glad I booked an hour for this, because I think we might need it. First, my opinion. I tried to make the slides as neutral as possible; some of my biases may have slipped in a little bit. My opinion is that we should leverage IEEE to represent precision, because IEEE gives us a way to do it, it's very well defined, and it matches how other languages solve the problem. I also think we should leave the door open for a primitive decimal, but I don't think we should design around a primitive decimal today; a primitive decimal is something we should leave the door open for in the future. I think we should design a good object type for dealing with these numbers, because that's what developers will have today, and what developers will have for probably the next decade or so. Even in a world with a primitive decimal, developers are still going to be using objects, so we should try to design a good object decimal. If we introduce a type that makes it harder to add an object decimal, that's a problem; we should leave the door open. So I think we should focus on building a good object interface for decimals. My third point is that DecimalMeasure seems like it could be a decent solution: it solves most of the problems in one package and leaves the door open, and I wanted to float it as a possible approach. The main pushback I've heard is that it scope-creeps the Measure proposal and merges problems that might be better solved separately.

SFC: I asked NRO for an opinion, and this is what he said. He pointed to Temporal; I'm also a co-champion for Temporal. There, we designed seven different types with different data models: there's PlainTime and ZonedDateTime and Instant, right? Each type is its own little universe, and when you're inside one of those types, no matter what you do with it, it will always be well defined. I think it's cool that we did that with Temporal, and maybe there's an opportunity to do the same with numbers. NRO, I don't know if you wanted to add anything to that.

NRO: I think you represented it well. What I like about Temporal is that we have all the sub-slices of the whole model: you don't have to worry about things you don't need. You can have a PlainDate and not have to worry about the time zone. Also, if you have somewhere where we expect a ZonedDateTime, it's easy to check that you have one and not something else.
You don't risk using the wrong thing, because we have a good runtime type system there. So if we're going to have different types of numbers, many more variations, I would hope we go in some direction like that, where, for example, I don't have to worry about a dimension if I don't care about it, where a number with a dimension is distinct from one without, and where I can't accidentally use the wrong operations, binary versus decimal.

SFC: Then I will move on to JHD's opinion. Again, once we get through the slides, we can go to the discussion; I want to focus on the opinion slides right now. I've done SFC and NRO, and now JHD has his turn. After talking with JHD, we established that a primitive decimal is a really good long-term solution, because it solves the ergonomic problems, and some of what NRO was talking about with the type system: you know what you get in and what you get out. But a solution that solves only a subset of the problems, without a clear path forward, puts us in a worse position with respect to the long-term solution. Imagine we adopt a solution today that does a little bit but not all of it. Then, in a world where we could add a decimal primitive, it becomes harder, because we have this new type we have to interop with. If it weren't there, we could add the primitive cleanly. From where we are now, we could add a clean decimal primitive and everyone would be happy; a partial solution muddies the water. JHD, I don't know if you want to add anything.

JHD: This is a good summary. I also spoke a bit during JMN's presentation in the previous plenary about my wider vision. I have some more specific comments, but they can wait for the queue.

SFC: Cool, thank you. Then I have EAO's opinion: not all of these problems need to be solved in the standard library. The i18n requirements could be solved with a thin Measure protocol, with these precision, dimension, and string-decimal pieces. Do we need a type that solves all the problems? Maybe we just need to solve the one concrete use case we really have today, which is how we interop with, for example, MessageFormat: design the protocol, don't muddy the waters with decimal, and leave that open to solve in the future. We don't have to think about solving all the problems now. We do need to solve the Measure problems now, because interop with native, primordial types needs a protocol for the formatters to read from. Maybe we should focus on that problem space. EAO, I don't know if you had anything to add.

EAO: I have half an hour to continue on this topic later. Nothing more at this time.

SFC: Okay. Thank you. I threw in this extra slide yesterday, thinking a little more about what NRO and others had said: there's a bit of a composition here, three things that could be layered on top of each other. You have the normalized decimal128; the full decimal128, which has the cohorts in it; and then your Measure, which also has the dimension in it. Thinking about how the types compose, this could be one framework we could use. I have no further comment on this other than showing the slide; it's just a brainstorm. Thank you so much for hanging with me through the presentation. I think we have half an hour to continue with the queue, and there's quite a queue to discuss; I'm happy that people are interested in this subject. So with that, CDA, back to you.
JHD: About the second or third slide: I'm not aware of any system where "1 star" and "1.0 stars" would mean different things. Every star-rating system I know of that's not talking about stellar phenomena is in increments of either one star or half a star at a time; anything more granular than that gets hairy as a visual representation. Can you elaborate on when those are different?

SFC: I think you're talking about the Problem 2 slide. I posted a lengthy essay on GitHub, and I think you've read it before. Basically, my evidence that 1 and 1.0 are different things is the fact that they produce different pluralizations. Even if they represent the same point on the number line, they need to be handled differently in software, because one has no fractional precision and the other has some. The fact that they need to be treated differently in software means we need a way to represent the difference.

JHD: Thank you. That was good. Then my next queue item: when you say precision, I feel like that word is used to describe two things. One of them, this one, Problem 3, is supporting enough decimal places to do math: if you have a 20-decimal-digit precision number, you need to be able to do math with it. The second bucket is from science class and the like, where you actually care about the underlying precision of the numbers you're using and how it combines. I don't know how best to differentiate those two, but I think it's important to figure out which of the two, or both, we're talking about when we talk about precision. Personally, I find the first bucket, just supporting very fractional numbers, very important; that is something that needs ergonomics and accuracy, and perhaps deserves primitive support. The second bucket is also important, but could perhaps be satisfied by a userland or API-only solution.

SFC: Cool. Just to clarify what I mean in this presentation: I used the word "precision" for trailing zeros, and "significant digits" to refer to the number of digits a value can carry, which is sometimes called precision in other contexts. I tried to use those words consistently that way throughout the presentation. WH, looks like you're next.

WH: I have some questions about the bit pattern concerns on the slides. Why do you care about bit patterns of numbers?

SFC: Why do I care about bit patterns? I can say why. In the all-in-one Measure type, we have an interesting issue: we have a number, which is 64 bits in one chunk of memory, or 128 bits in a future with a normalized primitive decimal. Then all of a sudden we have these precision and dimension fields. Dimension is probably a pointer or an enum, more likely a pointer to a string value or something like that. And then we have this extra precision value, and what is it? It's a big bucket of things: it could be a number of significant digits, it could be a number of fraction digits, it could be error bars, for example. It could be a number of different things, and on the one hand that's cool, we have the flexibility; on the other hand, it's a big, muddy, murky space.
IEEE, with its bits, gives us a way to represent that compactly. We can eliminate the extra fields from the Measure type and pack it all into the 128 bits of the decimal type. Engines don't have to worry about supporting an extra field, we don't have to worry about figuring out what the extra field does, and we leverage the existing machinery that IEEE has already given us. Does that answer your question?

WH: I don't understand the concerns about wasted bit patterns. Using Decimal128 to just represent points on the number line representable in Decimal128 requires 128 bits, so there are no wasted bits in the representation. If you count the number of possible values, there are 340 undecillion possible 128-bit patterns, out of which there are 221 undecillion possible points on the number line. You can represent those in 128 bits. You cannot represent those in 127 bits if you want a fixed-width number type. As far as wasted bit patterns go, the bigger source of waste is actually the base-1000 representation Decimal128 uses. There are Decimal128 values that have thousands of possible bit patterns all representing the same number; that's due to its using the base-1000 representation, where each base-1000 digit uses 10 bits. So it seems a bit in the weeds to worry about Decimal128 bit pattern efficiency. I'm not sure why that should have any effect on our proposals.

WH: The other thing I'd like to note is that on a later slide you discuss the BigDecimal proposal, calling it BigInt. That has issues which have been well discussed and which are not on the slide. When reviving proposals like that, it would be good to replicate the main concerns about them on the slide.

SFC: For the second point: I did a little bit of looking around, but I didn't find that discussion. If this has been discussed, I would definitely like to read more about it.

WH: We spent many hours on this. The primary concern is runaway precision with multiplication.

SFC: Cool. I would like to read more about that. Regarding your first question about wasted bit patterns: another thing I didn't put in this deck, which is maybe worth mentioning, is that if we're going to have 128 bits and we're not going to represent precision, we could actually get a little more out of them with IEEE binary128. If the plan is to not represent precision, we could use binary128. Decimal128 is not as efficient for general computation: I will not be doing machine learning with decimal128. I might use it for things where I really care about decimal behavior, like financial calculations, but I won't do big data with it. Binary128 is the more efficient option anyway, if raw range and precision are really the thing we care about.

WH: For machine learning you want the least possible width because it's faster.

SFC: You want the least possible width that gives you correct results, and 64-bit is usually enough for that.

WH: We could debate what "correct" means. Anyway, we're going off into the weeds. Let's move on.

SFC: NRO is next, with a comment about this one.

NRO: This is more about JSSugar than numbers. When we talk about JSSugar, we always dream about things tools could do but are not actually able to do. I see RBN on the queue and I won't speak for TypeScript, but for every tool except TypeScript, any kind of type-directed compilation that affects the emitted output is a non-starter, and I would guess the answer is the same for TypeScript.
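To make the Solution 5 idea concrete, here is a sketch; everything in it is hypothetical, including the `~+` operator, the `decimalAdd` name, and the toy implementation:

```js
// A purely syntax-directed desugaring needs no type information:
//   a ~+ b   ==>   a.decimalAdd(b)

// Toy sketch of what decimalAdd could mean: add the shortest round-trip
// decimals rather than the binary values. (Ignores exponential notation
// and other corner cases; a real version would use exact decimal math.)
function decimalAdd(a, b) {
  const places = (n) => (String(n).split(".")[1] ?? "").length;
  const p = Math.max(places(a), places(b));
  return Number((a + b).toFixed(p));
}
decimalAdd(0.1, 0.2); // 0.3
```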
RBN: I concur with NRO on this. TypeScript's position is to not do type-directed emit unless we can statically determine that a syntax can only be used a certain way; otherwise we would not be able to transpile it. Something like `~+`, where every `~+` is always transpiled to "the thing on the left, dot, decimalAdd" or something like that: yes, that's feasible; that's something we can always do regardless of what the input value is. If it's something like transpiling `+`, we could only do that by transpiling `+` for everything, which would slow everything down. We would not be transpiling `+`; that is not something we would be able to do.

SFC: Going on from that point, then: even if you don't transpile plus, is there still the possibility of writing a lint rule for when someone uses `+` on the decimal type and probably meant to use `~+`?

RBN: That's essentially feasible, though it's not going to catch everything. If we know the type is the decimal type, that is something you could be warned about.

SFC: Okay. Thanks for that comment. Looks like EAO is next on the queue.

EAO: Just continuing on this same slide; hopefully a quick question. Given that we have the `Math.sumPrecise` proposal currently at Stage 3, I'm wondering: doesn't that actually provide a solution for the use cases that something like decimalAdd or `~+` would address, so that the concerns here would be about going further from there, ergonomic improvements on top of what `Math.sumPrecise` already does?

SFC: I don't know if KG is on the call and could make a comment about that. I think WH is on the queue.

WH: `Math.sumPrecise` gives you precise binary addition. `Math.sumPrecise` of a set of numbers will always be equal to the mathematical sum of the numbers rounded to the nearest representable IEEE double value. When adding two numbers, this is always the same thing that the built-in `+` operator does. When adding 0.1 and 0.2, `Math.sumPrecise` by definition will likewise produce 0.30000000000000004, because that's the nearest representable IEEE double.

SFC: Just to echo that: I tried out the `Math.sumPrecise` polyfill and it had that behavior. So unfortunately that proposal doesn't solve this problem; it has to be another proposal.

KG: I was on mute. You can't solve that problem as long as you're using Numbers, because the Number 0.2 is not the decimal number 0.2. It's the floating point number, something more complicated than that.

SFC: Looks like MM is on the queue.

MM: Yes. So let me start by asking you a rhetorical question. If I ask you to write down two-thirds to four significant digits of precision, what would you write down?

SFC: Two-thirds to four significant digits of precision? This is a little mental exercise?

MM: Yeah.

SFC: Well, I would have to know what rounding mode we're discussing. If we're assuming half-even rounding, the last digit would round to a seven.

MM: How many sixes would you write down before you wrote down the seven?

SFC: That would be 0.6667. That's my mental model.

MM: Okay. Good. Thank you. So the question was rhetorical. The larger point I'm going to make is that there are many different notions of precision, and I find that the one bundled into IEEE decimal128 is not any of them in a coherent manner.
In particular, the notion of precision that you're emphasizing when you talk about "1.0 stars" is a display notion of precision that is usually static. It is usually not a degree of precision that is data-dependent: it holds for all the data flowing through a given call site, or all the data flowing through a given parameterized system, and it is statically parameterized rather than carried by individual units of data. I will note that in the example I just posed, that is not what IEEE will render for two-thirds, no matter what the non-normalization is, because it's not an issue of trailing zeros; it's a question of the overall total digits of rendering. If you're in a context where what you want to see is numbers rendered to four digits of precision, and there are many such static contexts, then rendering two-thirds as all possible sixes followed by the trailing seven is what you get directly out of IEEE, and not what you want when you're trying to use precision to control a display. The other notion of precision that I think is coherent is something that captures the notion of error bars, and there are many different ways to do this, many different theories of it. There are statistical error bars, where you're trying to propagate one standard deviation of error under some statistical independence assumptions; and there are lower and upper bounds, where you're trying to propagate worst-case error bars. You've agreed that the scientific notion of precision, which is intended to take error bars into account, is certainly not what IEEE is doing. I don't see any theory of what IEEE is doing that actually meets any use case well. So I'll let that be my first question, and then I'll put myself back on the queue.

SFC: I can respond to that a little bit. First of all, as I mentioned, and this also came up in JHD's point, the word precision has multiple meanings in different contexts, which is a little unfortunate. In this presentation, when I say "precision", I'm referring to precision as needed in the context of `Intl.NumberFormat`, in terms of the number of trailing zeros. That's different from "significant digits", which is about how many digits of a number you are able to represent. So: trailing zeros versus the total number of digits that can be represented.

MM: So with Intl display formatting, if you've got two-thirds, and the display format is suggesting four digits of precision, how would the Intl number rendering render the two-thirds value?

SFC: So currently `Intl.NumberFormat` has the ability to encode rounding options in the options bag, and that's a utility –

MM: I'm not that concerned about whether the last digit is six or seven. I'm concerned about how many sixes are displayed before the last digit.

SFC: `Intl.NumberFormat` allows you to configure whether you want to round to a number of fraction digits or a number of significant digits. If you specify that you want four significant digits, you get what I said earlier, which is 0.6667.

MM: So does `Intl.NumberFormat` actually have any need for the display format that comes bundled with the IEEE definition of decimal128?

SFC: Yeah, okay. I can definitely answer that question. I have a bit of a thread about this on GitHub.
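For reference, the rounding discussed just above is already expressible today as a formatting option (shipping behavior):

```js
// Rounding two-thirds to four significant digits at format time:
new Intl.NumberFormat("en", { maximumSignificantDigits: 4 }).format(2 / 3);
// "0.6667"
```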
But this idea of being able to fully decouple the display from the quantity being displayed is a thing that helps us fix bugs in how we interoperate with, for example, PluralRules and NumberFormat; it allows us to more correctly express numbers to `Intl.NumberFormat`, and potentially to interoperate better with HTML input elements. As we've been working on these Intl APIs, the more we keep them focused on how to internationalize the number, taking the data and putting it in a form that can be displayed, and the more we can decouple those two steps, the more problems tend to get solved. That's the idea behind why having precision in the data model, as opposed to only in formatting options, is a desirable outcome. It would obviously remain available as formatting options, because it currently is; but it would be nice to also be able to put it in the data model.

MM: I'm sorry, I didn't understand how you got from the first part of what you just said to the second part.

SFC: Maybe NRO can give an example. He's on the queue.

NRO: I can give an example here. Currently in Intl, when you want to display, say, "1.0 stars", you have two different Intl functions: one that gets the Number 1 and converts it to the string "1.0", and another that gets the Number 1 and gives you back the string "stars". You need to make sure to configure these two functions the same way, telling both that the number will have one digit after the dot, so that they are coherent and don't give you the string "1" with the string "stars", or the string "1.0" with the string "star". Given that these settings are not saved together with the number, you have to make sure to pass coherent settings to all the functions; having this encoded in the number itself would mean you don't risk accidentally getting the various functions out of sync.

MM: So if the actual underlying number was 1.1111 and you're rendering it in a context where you wanted one digit of precision, it would be rendered as "1", and rendered as "1" it would still be singular; and rendering it as "1" is not a rendering that IEEE provides you, because the IEEE degree of freedom is only trailing zeros, not the overall precision of display. So I just don't find dynamically tracking trailing zeros, as the degree of freedom carried dynamically in the data, to be coherent. It doesn't match any use case that I can imagine.

NRO: Yes, I agree with you here. What is important for Intl, as presented, is to have the number together with its number of trailing zeros. But it's not really necessary for Intl to track this number of zeros across operations; you usually would want to just set the precision after you're done with your computation.

MM: But when do you care about the number of trailing zeros, as opposed to just the number of significant digits?

SFC: I think number of significant digits could be one way of representing it, and in many cases that is the thing that Intl would need. But that can include trailing zeros: if you say "I want to render this number 1 with two significant digits", that's something that can be encoded in the data model, and IEEE gives us a mechanism for encoding it.
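NRO's example corresponds to today's shipping APIs, where the caller has to keep the two configurations coherent by hand:

```js
// Precision lives in the options bags, not in the number itself, so the
// same digit settings must be passed to both functions:
const digits = { minimumFractionDigits: 1 };
new Intl.NumberFormat("en", digits).format(1); // "1.0"
new Intl.PluralRules("en", digits).select(1);  // "other", so "stars", not "star"
```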
To finish my point: I think you're touching on this concern, the first concern on this slide, which is that the way IEEE deals with precision across operations is kind of unexpected in certain situations. That's not necessarily the problem Intl has: Intl just needs the precision in the data model; Intl doesn't care how it's propagated.

MM: Intl doesn't need trailing zeros. Intl needs the total number of digits, whether the omitted digits are zeros or not. If I was in a context showing things to three significant digits and the actual number was one, I would expect "1.00" to be displayed. The trailing zeros come from the display format at the point of display; it's static, it's not data-dependent, it's not carried with the data. I still have not heard a use case where what's dynamically carried with the data is only the number of trailing zeros, rather than the number of digits to show.

SFC: Yeah, I understand your point. But we're pretty close to time, so let me jump ahead to NRO for his last comment.

NRO: Yeah. I would also like to hear from JMN, but I was trying to encourage other people to give their opinions here. We have heard from a few people today, and these same people were already discussing all of this a few weeks ago in other meetings. It would be great if the rest of the committee also expressed their feelings.

SFC: And yeah, JMN, you said in the queue that you like solutions 3, 2, and 6, in that order. Is there anything else you wanted to add, or to elaborate on why?

JMN: Yeah. I think 3 is the state of affairs today; 2 is what we had, I think, one or two iterations before that. 6 is interesting because it is a kind of path to having a primitive today, but as WH said, there are some big concerns about that, with values getting extremely big very quickly. Maybe just a general point: why would I prefer these three things? To my mind, it's because they clearly separate the Measure idea from the decimal proposal, which I understand to be something focused on numbers. We can debate whether that means mathematical values, or things with some precision on them, or not. But it's still, at least as far as I understand it, somewhat separable from the Measure idea, which is a nice, independently motivated proposal, I think. So that's why I would list those things in that order. This is fantastic; thank you for organizing the presentation.

### Speaker's Summary of Key Points

SFC: The goal is to take a holistic approach to how we want numbers, precision, measures, and dimensions to interoperate, to give ECMAScript developers a cohesive, well-designed architecture. I went over several of the different problem spaces, as well as some of the different possible solutions. We had some good discussion regarding what should be represented in the type system, and regarding what precision is and the different ways to represent it. The next action items are for the number-related champions to continue to iterate on this and come up with an architecture that solves all of the problems in a clean and future-proof way.

SFC: Does that sound about right, NRO, JMN, et cetera?

NRO: Yeah.

CDA: Okay. Thank you, SFC.
## Measure Stage 1 update

Presenter: Eemeli Aro (EAO)

- [proposal](https://github.com/tc39-transfer/proposal-measure)
- [slides](https://docs.google.com/presentation/d/17ypyikW1q8RFf5AnnYKpe5dsdrHTb0BnSzZGaq0mm-I/edit?usp=sharing)

EAO: This was supposed to be BAN presenting, but as he's on medical leave, I've stepped in. I needed to put the presentation together yesterday, so apologies for rough edges and so on.

EAO: This is something like a continuation of the previous discussion, but looking at a part of it other than how to define a number. Measure as a proposal is providing a way to separate the "what" and the "how" when we are formatting numbers; that statement is carrying a lot of weight. In the "what" here we have, for example, a number and units: meters, kilograms, or any other things that are being measured; US dollars could be one. And then, separately, there is "how" we are formatting these things. I will get to why that's an issue we would want to address on the next slide.

EAO: The Measure proposal also talks about supporting mixed-unit formatting: rather than formatting "3.5 feet", providing a way of formatting that value as "3 feet, 6 inches". And the third basket of problems, shall we say, that we are looking to solve is providing unit conversion capability in ECMA-262.

EAO: To some extent, all of these are coming from desires and needs identified in other discussions and proposals, such as the Smart Units proposal, Decimal to some extent, and Intl.MessageFormat. Measure is one possible way of looking at the space of problems we have here that we would like to solve.

EAO: A lot of what follows is based around the proposed solution of adding Measure as a new primordial object, and specifically one that would be accepted by `Intl.NumberFormat` as a formattable value.

EAO: That part is, in fact, the key to what makes this something that, I think, we ought to be defining in the spec. And that's coming from the way that we do number formatting. As with the other formatting operations in Intl, we have a two-phase process here. First, we have a constructor, and in the constructor we set a bag of options that define how the instance ought to format. Then, later on, once or multiple times, a value to format is given to the format() method on the instance we've created.

EAO: What this means is that, as it's currently set up, if we want to format currencies, for example, we need to create a separate `Intl.NumberFormat` instance for every currency we would like to format, even if the other aspects of how we format currencies, or values with units, or values with precision, would otherwise stay the same. This ends up mixing what we are formatting with the options for how we are looking to format it. And specifically, as alluded to by SFC in the previous presentation, this becomes a problem if we consider, for instance, the `Intl.MessageFormat` proposal, where in the MessageFormat 2 specification we have almost a requirement to support something like a currency or a unit as a concrete thing that can be formatted. The sample code here shows how this could likely look, if `Intl.MessageFormat` advances in the spec.
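A sketch of that sample (the API shapes below are assumptions, since Intl.MessageFormat and Measure are both still proposals):

```js
// Hypothetical: a MessageFormat 2 message with a currency placeholder,
// and a Measure carrying its unit along with the value.
const mf = new Intl.MessageFormat("The total came to {$cost :currency}.", "en");
const cost = new Measure(42.5, { unit: "USD" });
mf.format({ cost }); // "The total came to $42.50."

// The strawman conversion examples discussed below, with the same caveats:
const height = new Measure(180, { unit: "centimeter" });
new Intl.NumberFormat("en-US").format(height.convert("foot-and-inch"));
// "5 ft, 11 in"
```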
We have the pattern of a message, which includes a placeholder cost formatted as a currency, and then we have something like a Measure that we can pass in as the cost. That Measure then carries the currency with it; a unit could work there as well, when doing unit formatting. That gives us a value that can be passed through the message and formatted in a way that ensures a translator does not "translate" the value and localize it, which could entirely change the meaning of what is being formatted. This is largely the problem we are looking to solve.

EAO: The strawman proposal, in a little more detail, allows for operations like this: we can create a new Measure, for example starting from 180 centimetres; then we convert to a unit defined here as foot-and-inch; and then this is what we allow to be passed to a NumberFormat instance, which gives us output that says "5 ft, 11 in" in this case. I am omitting some discussion about how exactly precision works; that is something we can consider separately, and I would rather not spend time on it, because it's a big topic that could swallow up the discussion completely.

EAO: One further example of what we may consider to be in scope for Measure is conversion for a locale, where we could be defining, for example, a usage for the value. Here, we're starting from the same starting point of having a Measure of 180 centimetres, and then converting that for en-US with the American person-height usage, getting my height as a new Measure instance. This then effectively becomes foot-and-inch, which can be formatted as previously, and we end up with "5 ft, 11 in".

EAO: As might be obvious here, a lot of this proposal is to a large extent coming from an internationalization and ECMA-402 interest. Why does this exist? Because, looking forward, we do have an interest in 402, in particular for number formatting, in enabling something like "usage" to be accounted for, because it becomes very convenient to be able to format values and localize them in this way.

EAO: But at the same time, we are very concerned about the same sort of issues that, for example, the Stable Formatting proposal considers: if we were to introduce any capability of taking an input like 180 centimetres and producing output like "5 ft, 11 in", we end up in a situation where JavaScript developers will absolutely figure out a way of getting at the "5 ft, 11 in", even if that is only available through a complicated sequence of formatting to parts and parsing the output from there. So we are looking to ensure, in part, that this sort of capability is provided without needing to do convoluted work and abuse Intl in order to get at the final result.

EAO: At the last meeting, BAN presented some aspects of this as well: how we would allow the `myHeight` instance here, for example, to be able to output the "5 ft, 11 in" values that would also be used for the formatting.

EAO: It's maybe also relevant to note that there's a whole bunch of things this proposal is not about. It's not about unit formatting, because that is already a thing we can do with Intl.NumberFormat; it's already supported for an explicit list of units that we say must be supported, and you can't go beyond that.

EAO: Furthermore, it's not even about localized unit formatting, because that is already a thing.
This is taking the formatting, in Finnish, of the “foot” unit, and note in particular that this is already handling some amount of pluralization, “1 jalka”, “3,5 jalkaa”, where the units are accounting for the value being formatted there. And then this is also not about formatting numbers with an arbitrary count of digits, because we have that too: the input given to NumberFormat gets converted internally to an Intl mathematical value that, if I remember right, has effectively arbitrary precision. Furthermore, even though we talk about currencies, we are not talking about, or even considering, allowing for a currency conversion to happen within Measure. And we’re not talking, within at least the scope of the Measure proposal, of considering Measure as a primitive or otherwise allowing for operator overloading with it.

EAO: But then we do have some open things—this is the part of the proposal where I would be interested in input and comments from TG1. One aspect is that this proposal can be done with a very, very minimal amount of added data payload, because we already have these units, and we don’t necessarily need to go beyond them. But we could. There’s a bunch of units that it might be interesting to have formatting be supported for, or to have conversions be supported for, but these would then carry additional data requirements. Should we or should we not do that? That would be interesting to hear, or, if there is a hard line, that would be very interesting to hear.

EAO: Then, also, there are the conversions that account for the locale and value-specific usage preferences. That’s the second example I showed. It would be very interesting to hear whether this should be considered as a part of the initial proposal or as a possible later addition. These are conversions like I mentioned earlier, about converting a height to a person-height, for other locales. And it’s important to note that the conversion also needs to account for the value of the number that we’re formatting. For example, if I remember right, the CLDR data commonly used for this says that if a person’s age is less than 2½ years, then you end up including months in the output, but over 2½ years, it’s only years that are output. So the conversion depends on the value and the locale.

EAO: And the data for this is very small. Like, compressed, if you look at the CLDR data, we are talking maybe 2, 3, 4 kilobytes for this sort of capability. This is not a lot that is being asked for, potentially.

EAO: Also under consideration is whether Measure should support addition, multiplication, division, and other operators on the value. Given that we already consider and do want to support conversion to some extent, should we allow for operations that potentially would even transform the base unit of what is being worked on?

EAO: So a lot of this is driven by this one big question, on which I would appreciate input: should we really care about anything beyond specifically formatting and conversion? Those are the requirements that this proposal at a minimum needs. But whether we should go beyond them is something that could be done, but doesn’t need to be done. And knowing whether Measure ought to go beyond them is going to drive quite a bit of the considerations for how we structure it, whether we allow for something like a usage parameter or not, and how it interacts with the other parts.
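(The open scope questions, sketched in code; every API shape here is hypothetical and unsettled:)

```js
// Hypothetical shapes for the open questions above.
const m = new Measure(180, { unit: "centimeter" });

// 1. Conversions driven by locale plus value-dependent usage preferences:
const h = m.toLocale("en-US", { usage: "person-height" });
// h would effectively be a foot-and-inch measure, formatting as "5 ft, 11 in".

// 2. Units and conversions beyond the current Intl.NumberFormat list,
//    at the cost of additional locale data in implementations.

// 3. Arithmetic that may even transform the base unit:
//    m.add(other); m.multiply(other); m.divide(other);
```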
So this is where I would be very interested to hear if there’s anything in the queue, or other comments or criticisms to address here.

CDA: WH?

WH: So … the answer to the question you have posed all depends on handling of precision, which you didn’t cover in the presentation. Because I think that’s the long pole in the tent here. Treatment of precision becomes important for doing arithmetic. And treatment of precision also becomes important when doing conversions. So do you want to do the precision-handling work in one place, or do you want to do it in two places and have them potentially get out of sync?

EAO: I would say the precision question depends on this question that’s on the slide currently. Because if we were only caring about formatting and conversion, we can consider precision only from those points of view. However, if we also want to support, for example, operations on the value, explicitly, as a part of Measure, then precision, as you mentioned, needs to be accounted for more widely. This is why I am asking this question: because it needs to be answered first, before we get into the depths of how we handle precision.

WH: You skipped over the precision part of the presentation. I can’t give you the answer until you present that.

EAO: What I mean is that we do not have a ready answer for how exactly the precision ought to work, because we can define it in multiple ways, and I think this in particular is a fundamental question that ought to be answered first, before we figure out: okay, given these are the use cases and needs that we are trying to address, what do we do about precision here?

WH: Well, that’s the opposite of what my point is. We need to understand what’s involved in handling of precision here. And it’s hard to answer this question without a good understanding of the precision aspects of conversion.

EAO: Okay.

WH: What I am asking for is either a presentation or some kind of discussion of what the considerations are in dealing with precision. And that would be helpful to decide whether we should care only about conversion and reinvent the wheel for doing arithmetic, or whether it’s better to consider them both at the same time.

EAO: That does seem like a topic for consideration later.

MM: Yeah. My question is related, I suppose: given that Measure includes some notion of precision, even without pinning down what it is; given that, you know, the current IEEE floating point numbers and the current BigInts don’t carry a distinct notion of precision, they just identify a point on the number line; and given that the number field of a Measure would also be able to carry regular IEEE floating point numbers and BigInts and add some notion of precision in this Measure wrapper—SFC had raised the idea of somehow combining this with the trailing zeros that are being dynamically carried by a decimal number, using that in the Measure context as the precision of the Measure. And that confuses me on two grounds. This question is sort of across both presentations taken together, so I consider it a question for both of you. It confuses me because, on one hand, Measure would already need to carry its own precision in order to deal with floating point numbers and BigInts. So it would seem that whatever theory of precision Measure carries, it would apply to decimals as well.
And for the theory of precision that you might think to carry in Measure: is there any use case for which the theory of precision you would consider would be one that’s only tracking trailing zeros, as opposed to tracking trailing digits?

EAO: So I would say that if we consider precision primarily as a utility for the formatting of a Measure, for instance, and also as directing what might happen during conversion, then it becomes sufficient, for instance, for the precision to be retained within a Measure instance as an integer number of fraction digits of the value that is then being formatted. And we can theoretically, with this sort of approach, even require precision to be included as a parameter when conversion is happening, so that we are completely externalizing what happens to precision when converting, say, from centimeters to inches, or doing other operations like this. Does this possibly answer your question?

MM: I think so. Let me restate and see if you agree with my restatement: that there is no anticipated use case for which the notion of precision that Measure would carry dynamically would be trailing zeros; the closest is trailing digits. Two-thirds rendered with four trailing digits is 0.6667 or something. And, therefore, there is no theory of precision that Measure would want for which, if the number is a decimal, it could just delegate that notion of precision to the dynamic precision information that decimal numbers carry.

EAO: Probably yes. Because we will absolutely need to support numbers, and numbers do not carry their own precision, so the precision will need to be somewhere—or the number will need to be converted into a Decimal, and converting the number into a Decimal only for it later to be converted into an Intl mathematical value seems a bit too convoluted.

MM: There’s two grounds: one is, as you said, that the precision has to be in the Measure because it applies to numbers and BigInts. And the second ground, which the second part of my question is focused on, is that none of the theories of precision that one would think to build into Measure is something that keeps track only of trailing zeros, rather than trailing digits.

EAO: I would agree with that.

MM: Okay. Thank you.

CDA: SFC?

SFC: Yeah. I think I have the next two items on the queue. First, about the precision, trailing zeros versus trailing digits: I don’t necessarily understand why those two concepts are distinct, because, for example, let’s say you have 2.500, which is also 2.5 with 4 significant digits. Those are two different—the only difference is how you represent it in the data model. But the data model is able to represent the same concept either way. Right? The concept of this number, 2.500. Both are able to do it.

MM: Yeah. And so I agree with that. And I agree that you can get there by saying either two trailing zeros or four significant digits. There are several different ways to do it. But none of them—trailing digits versus total significant digits—none of the coherent choices you might lift into Measure, or use as a substitute for the precision carried by Measure, would be number of trailing zeros, rather than trailing digits or total digits.

SFC: I still don’t understand, because number of trailing zeros is also a coherent model. As is the number of fraction digits or number of significant digits.
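(An editorial illustration of the models being debated here, not code from the presentation:)

```js
// The same quantity "2.500" under different precision models:
//   value 2.5 + fraction digits: 3     -> "2.500"
//   value 2.5 + significant digits: 4  -> "2.500"
//   value 2.5 + trailing zeros: 2      -> "2.500"
// All three can express 2.500; but for a value like two-thirds,
// only the digit-counting models can describe a rendering such as "0.6667".
```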
MM: Give me a use case for which number of trailing zeros, as opposed to number of trailing digits, is useful.

SFC: They represent the same thing in the model.

MM: I didn’t understand that.

CDA: I want to interject because we only have a couple of minutes left.

MM: I think we can probably further investigate this offline.

SFC: My initial reaction, MM, as far as I can tell, as I said: the thing we want to represent is 2.500, and at the end of the day, being able to represent the quantity is what we care about.

MM: Consider a context in which the underlying number is not 2.500 but two-thirds: there, you want to represent all the 6s you can.

SFC: I think I see. I mean, we wouldn’t represent two-thirds, because two-thirds is neither a decimal nor a binary floating point value.

MM: I think that misses the point.

CDA: We do need to move on. SFC, do you want to briefly, very briefly, touch on your last topic?

SFC: I think a lot of the questions that EAO is asking have to do with the scope question that was the topic of my discussion. So I feel like we should continue to have these discussions and, you know, decide what the scope is going to be, and that will drive a lot of these decisions and answer a lot of the questions from EAO’s presentation.

CDA: All right. EAO, would you like to dictate key points/summary for the notes?

### Speaker's Summary of Key Points

EAO: The rationalization and use cases for the Measure proposal were presented along with a strawman solution. Some of the extent of the scope of the proposal was also presented, along with some of the other open questions about the extent of said scope. No clear opinions were expressed by the committee on the questions presented, but a further discussion on the representation and handling of precision, in particular, was requested.

## Continuation: Error Stacks Structure for Stage 2

Presenter: Jordan Harband (JHD)

- [proposal](https://github.com/tc39/proposal-error-stacks)

JHD: Okay. All right. So I don’t remember where we were at the end. I think DLM’s comment was the last one.

JHD: So just—my understanding of the push back from Mozilla, in particular MAG and DLM, I believe, is: this seems like too much, too big, not well motivated as a big proposal; maybe we could split it up. I think that, in general, that is a good principle to apply. Like, a good way to interrogate proposals. This proposal contains three separable pieces, I guess. One is the normative optional accessor, which, like, we could ship that and say great. That accessor is great. It produces a host-defined string. Cool. The problem that solves is the one that isn’t actually very convincing anymore—great, we have specified it, but I guess all it really does is prevent someone from having their own property. It’s not no value, but it’s not a lot of value for a whole proposal. That’s almost more like a needs-consensus PR.

JHD: And then the next piece would be the `System.getStackString` method, wherever it lives. And the benefit there is that, with the combination of the first one and that one, the stack string can now be retrieved in a way that is compatible with the desires of hardened JavaScript. There’s a brand check included in the method. That could be done, even in an environment where the stack accessor is not available.
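(A sketch of this second piece; `System.getStackString` is the name the proposal README uses, while the surrounding code is hypothetical:)

```js
// Hypothetical usage of the proposed brand-checked static method.
try {
  somethingRisky();
} catch (err) {
  // Throws if `err` is not a genuine Error object (the brand check),
  // and works even where the `stack` accessor itself is unavailable.
  const stackString = System.getStackString(err);
  console.error(stackString);
}
```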
Then it can be denied in a way that is compatible with the needs of hardened JavaScript, so there is some value to be had there as well. But typically, while the desires of hardened JavaScript have been enough to motivate design changes, I also haven’t seen a lot of enthusiasm from the committee as a whole for building things just for that purpose. I am not trying to say we shouldn’t, but, you know, I am just concerned that perhaps that wouldn’t be seen as enough value to be a proposal. And then the third piece, the bulk of this proposal, is the `System.getStack` static method, which gives you the structured data. This is the one that developers want. Nobody wants to work with a string. And that’s where I think the majority of the value comes, but that isn’t very useful unless it is tied together with the contents of the string, so that you can be confident they represent each other in some way. So I don’t think that the structured data can happen in the absence of at least specifying the contents—the structure of the string—in the way that this proposal does for the accessor. I suppose we could omit `getStackString`, but if you are already building the structured metadata, and you are already ensuring that it complies with the structure and schema, and shipping the accessor, I would be surprised if someone thought it was a lot of extra work to add the static method that’s basically doing the same thing the accessor is doing. I can separate it, but that feels like a bunch of overhead and process that won’t add any value and won’t result in a different outcome, assuming all three eventually make it.

JHD: So I would love to hear some more evaluation about the value of splitting them up, and where the difficulty lies around, like, implementing this and so on. So let’s go to the queue.

DLM: Yeah. My topic is not addressing what you asked about; I don’t know if you want to follow up on that later. Basically, after the conversation the other day, I went back to the meeting notes from the last time this was brought to committee, in 2019, and that helped me clarify my thinking a little bit about my concerns. So basically, at that time, both Adam and Domenic expressed concerns about exposing, like, the structure in order to get access to frames without standardizing the contents of the frames. I believe that would start exposing a bunch of things that are kind of non-interoperable between the different engines. And the other thing that really stuck out was that the SpiderMonkey team, in 2019, had already tried to align our stacks with V8 and found it wasn’t possible: we were breaking internal code and extensions, and breaking code on the web. So, to tie those together: unless we can standardize not just a schema but the actual contents, this is going to introduce more interoperability troubles and cause more trouble than it’s going to solve. The concerns raised the last time this came to committee are still valid. I share them, and I don’t think there has really been any change since then. I am not hearing any evidence that, you know, anything around those concerns has changed in the intervening time.

JHD: I don’t think it’s necessarily clear that it’s a valuable or desirable goal to make the stack trace contents actually be the same across browsers. Like, it seems nice in theory, but I don’t know if it makes much of a difference.
Anything working with stacks is already doing some stuff to work around the differences across browsers. So I am not convinced—one of the concerns you stated, which was stated back then as well, is that it would expose information that would, like, enshrine interoperability differences or create compat problems down the road. The people already doing this stuff, Sentry and so on, have already built that, and they are working with it already. So making their job easier by encoding some of this stuff in the standard doesn’t strike me as something that makes compatibility problems worse. It would prevent engines from deviating further in some ways and not in other ways. Which seems like it reduces compatibility problems.

DLM: So I think, you know, what this will do is actually make it easier for people to start inspecting stack frames. This is actually going to increase the usage of this kind of code, which means we expose these differences to a broader audience. The fact that a few specialized people are doing this and working around it doesn’t convince me it’s a good idea to expose this to everyone on the web.

JHD: Okay. So I understand your position better. Thank you.

DLM: Thank you. And I sympathize; I understand why people would want this. It’s not like I think it’s a bad idea in itself. It’s just that I am completely unconvinced that, without standardizing the contents, exposing this more easily is going to make the world better for anyone.

SYG: I agree with Mozilla’s concerns here. To put it another way, on how we think this does not help the interop story: we have one point of non-interop today, the whole of the stack-getting machinery—you have to, wholesale, do browser sniffing and decide what to do. It’s unlikely we can unship that. It’s beyond unlikely. We can’t just unship that. If we standardize a new thing, what happens is, there are two concerns. One is a footgun concern: it looks like it’s interoperable but it’s not. The contents are not. We got into that last time. You still have to do the browser sniffing and deal with the contents. The net increase now is another point of non-interop. Beside the existing non-standard stack machinery that you will have, now there’s going to be a new thing that we will also have to maintain forever that is not interoperable and unlikely to ever be. A net increase in the non-interop surface—I am not interested in that.

JHD: Just to clarify: so your concerns here, and DLM’s, are those primarily about the structured form? Like, of the three pieces I discussed, the first two don’t deal with the structure; do those same concerns apply to the first two?

SYG: Number 1 was the normative optional accessor, which is what you already have in theory. Number 2 is the static method that gets you the string, and 3 is the structure. The concerns we just talked about, from me and DLM, are about the structure part and not about the other two.

SYG: That is, my concern was about the structure part. But I don’t see the value in the first two.

JHD: Got it. Okay. So those concerns don’t apply; it’s that you don’t see the value. Just clarifying. Thank you.

MM: Yeah. So, given what SYG just said, I am going to combine this with the other thing that I put on the queue, because they both address the degree of interop concerns: the first to be more ambitious, and the second to be less ambitious. The first one, the more ambitious:
A possible compromise that’s still below trying to fully specify the stack, which I don’t think we will ever get the engines to agree on, especially since one of the engines does things like tail call optimization that the others don’t. I can’t imagine that’s going to be surmountable in terms of what stack traces are produced. The ambitious compromise would be that any stack frame might be omitted, but any stack frame that is present reflects reality. So that, once again, an empty stack would still be conformant, but a stack that simply claims that there’s some function on the call stack that has nothing to do with any valid interpretation of the actual call stack could be considered non-conformant. So that would be very ambitious. I am not hopeful we can get agreement on that. I am offering it in response to the idea that the structured stack trace is only something that might be agreeable if we go beyond –

SYG: I’m sorry. Could you repeat the last, like, 45 seconds? There was an earthquake and I zoned out.

MM: Sure… glad you’re still there. There’s been concern that just standardizing the schema without standardizing the content would be not very useful. I think it would still be useful. But I am offering the ambitious compromise, as one of the two compromises I am suggesting today: the ambitious compromise is that we go beyond just the schema to say that any frame might be omitted, but any frame that is represented must be truthful, must be accurate. So, for example, you can’t produce a structured stack trace that claims that there’s a function on the call stack that, by no semantic interpretation of the call stack, is actually on the call stack. So that would, I think, be something more than schema that would be useful, and potentially in the realm of getting engines to agree on. But let me just stipulate that I find it unlikely that we would actually get engines to agree on that, because of lots of internal ways they might be optimizing code or stacks or whatever. And that’s the part that covers everything you might have missed. Now, new material: the less ambitious compromise I am going to suggest is Jordan’s number 1. I agree with Jordan’s statement of the value of each of his three breakdowns, except that I want to say that just number 1, by itself, would be hugely useful to us. Number 1 by itself is just the normative optional accessor, and it doesn’t even need to be normative optional, since it would be conformant for it to return the empty string: if you want to censor it, we provide a substitute accessor that returns the empty string, which is conformant without resting on the normative optionality. The thing about standardizing the accessor as the source of the stack property is that it would address what is currently a very painful situation for us. Mozilla’s SpiderMonkey already conforms to that: it is where the stack property is located, and it’s an inherited accessor. And Moddable’s XS conforms to it as well. Our shim basically tries, as much as possible, to turn JavaScript platforms into ones in which the stack property is that accessor. The two pain points for us: on JSC, Safari, there’s a stack own data property on error instances, produced before we can intervene. We don’t have any hook to intervene. And, therefore, we have no hook to be able to censor information from the call stack.
The revelation of that is, you know, spooky action at a distance—seeing what should be encapsulated information about the call stack from the model above it. We do not have a way to censor that on JSC. And the much more telling mistake that V8 made—a mistake from our point of view; we had a long discussion about this on GitHub threads, public and private, with Shu—is that V8 recently, without realizing the damage it would cause, added an own accessor property to error instances, where all of the own accessors have the same get—sorry, that’s probably the same…

SYG: Yeah. It’s a tsunami from the earthquake.

MM: Sorry about that. As I was saying: V8 put an own accessor property on error instances, where the getters and the setters are the same functions, and therefore the per-error-instance information they must be accessing is hidden internal state. It would have been—and this was agreed to on the thread—and would still be easy for V8 to change that to be an inherited accessor; it’s simply the case that right now there’s no basis for motivating V8 to make the change.

MM: If it was an inherited accessor across all engines, then it would give us a way to censor the visibility of the stack without virtualizing it; and the issue of virtualizing it, in the absence of the other parts of this proposal, would still, perhaps, involve a lot of sniffing and platform-specific stuff. The major need is the censoring. Because right now on V8, they have created not just an unpluggable communications channel for data: the accessor properties allow for the communication of object references through the hidden internal state, because the setter is honoured and it does not require the argument to be a string. So that’s a capability leak that we cannot plug, because of this set of decisions that V8 made. And it would be easy for V8 to change to this common behavior if we could agree to that. So if part 1 of this is something the committee could agree to, I would be very happy to separate it out and try to push that through to consensus, and let the remainder remain in a distinct proposal.

CDA: Noting we have less than 10 minutes for this topic.

DLM: I have two quick replies to what MM said. First of all, I wanted to clarify our position about a schema without specifying the contents: we are not saying it’s not useful; we are saying it’s harmful, because we’re concerned about interop problems. On the other one, we would be happy to see some specification of the accessor, because this is causing web compat problems for us.

SYG: So, to MM—it sounds like you would specifically like V8 to change our existing non-standard API, which we have discussed. I would like to point out that this is not a direct outcome of standardizing a new thing. Like, if you standardize this something, this stack getter, a very likely outcome is we have that *and* our thing. It’s not that you standardize a thing that kind of sort of overlaps with a non-standard thing and we unship ours. These are independent outcomes.
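(An editorial sketch of the engine differences being discussed, paraphrasing the speakers rather than testing current engines:)

```js
const err = new Error("boom");

// Per MM: V8 currently pre-endows each error instance with an own accessor
// pair whose getter and setter read hidden per-instance state:
Object.getOwnPropertyDescriptor(err, "stack");
// on V8: { get: f, set: f, ... } as an own property, so replacing
// Error.prototype.stack cannot censor instance stacks there.

// Per MM: SpiderMonkey and Moddable's XS instead use one inherited accessor:
Object.getOwnPropertyDescriptor(Error.prototype, "stack");
// With that shape, censoring is a single intervention:
Object.defineProperty(Error.prototype, "stack", {
  get() { return ""; },
  set(v) {},
  configurable: true,
});
```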
MM: My understanding from the threads on GitHub that you and I engaged in, both public and private, is that if there was the accessor property on `Error.prototype`, inherited by error instances, then there would be no reason for new Error instances created and thrown by the engine to carry own stack accessor properties that simply have the same getter and setter, because the ones that they would inherit would access the same internal state.

SYG: That’s correct. But the outcome—like, to get to that place, the investigation needed is: what is the risk of doing that? It’s not just standard vs. non-standard.

SYG: It is just independent of whether it is a standard thing.

MM: Certainly, any change to, you know, a browser-specific API, in order to conform with cross-browser agreement, is a danger to that browser and the users of that browser. And, yes, I will acknowledge that; and, yes, for this to make it to Stage 3 would certainly require, you know, buying in to at least do the experiment and see if there’s any interop risk. By the way, the security problem that we’re concerned with—what we need here—only has to do with the pre-endowment of the error own accessor on platform-generated errors. It has nothing to do with whether capturing a stack trace stamps error stack own properties onto errors and non-errors, because we can censor `captureStackTrace`. It’s only the pre-endowed accessor.

JHD: To clarify: in general, correct, standardizing a thing cannot force an engine to unship a non-standard thing. And the rubric is based on many things, breakage among them, and not simply the fact of being standard or not. In this specific case, it’s likely, if we shipped an `Error.prototype` accessor, that V8 would do it, but that’s not a guarantee. Is that accurate?

MM: That’s correct, and that kind of investigation is appropriate to happen, you know, at least during Stage 2, if not later. It’s implementor feedback. It’s one that might involve the same kind of counters that you have done for the fixed versus non-extensible question. You know, it’s an investigation to see what the –

SYG: Let me be frank. We haven’t done this investigation because we don’t think it’s high priority. And you don’t get to force that high priority by making it a proposal.

MM: Okay. I understand that. Would that be an objection to this proposal, sectioned off from the error stacks proposal, proceeding through the early stages of the process, so that we can continue this discussion and possibly cajole V8 into trying the experiment?

SYG: Are you asking if this part is being split off to continue the discussion?

MM: Yes.

SYG: I do not object to it being split off.

JHD: Okay. So, just to summarize what I have heard, so I can update the proposal with the current status: there remain concerns that any form of standardizing the schema that does not account for the contents—whether it standardizes them is not the issue, but it must account for those issues—would, in the view of at least Mozilla and V8, be harmful. Even though a lot of other folks think it would be useful, that’s the constraint there. There is intrinsic value, it seems, in shipping the stack accessor by itself, where the only requirement is that it return a string.
So what I think—and I will talk it over with MM, but what I am suggesting happen is: I rename the current proposal to be about the structure, and then I make a new proposal that is just for the stack accessor and try to advance that, and figure out what to do with the structure separately. Does that seem like a viable plan for now? Or does anyone have a reason why that’s not a viable plan for now?

JHD: Feel free to reach out outside of plenary. I just wanted to take the opportunity to get it in the notes, if anybody has a reaction.

MM: Obviously, I support that plan, and I would volunteer to be a cochampion on both.

JHD: Okay. Well, then, I will plan to come back at a future meeting, request Stage 1 or beyond for the accessor, and I will update the README of the current proposal to indicate what those concerns are, and how we might need to address them, and proceed from there.

## Continuation: import defer updates

Presenter: Nicolò Ribaudo (NRO)

- [proposal](https://github.com/tc39/proposal-defer-import-eval/)
- [slides](https://docs.google.com/presentation/d/1yFbqn6px5rIwAVjBbXgrYgql1L90tKPTWZq2A5D6f5Q/)

NRO: Okay. Yeah. Hello, everybody. We are continuing from the discussion started on Tuesday about import defer. On Tuesday we had different proposed changes, and there is one that we didn’t conclude on. Just to recap what the proposal currently does: there are some evaluation triggers when touching the deferred modules. Whenever you perform a get operation on the module namespace, other than for symbols, it will trigger evaluation. This means that operations like `foo` in the namespace do not trigger evaluation, because they don’t go through the Get internal method of the deferred namespace object. Operations like `Object.keys` trigger evaluation, specifically because `Object.keys` ends up calling get when there is some key. And operations like `Object.getOwnPropertyNames`—well, I guess `Object.getOwnPropertyNames` does not trigger evaluation, because it doesn’t trigger get. There are other ways to inspect objects; there are a bunch of internal object methods.

NRO: The proposed change is to align all of these things and to make all of them always trigger evaluation, so that the rule would become: when you try to get some information about the exports of the module, you are triggering evaluation. There are some arguments in favour of and against the change. The argument in favour is that this change would simplify what tools have to implement, making it possible for tools to implement the semantics of the proposal. And the reason I am stressing this is that, beside native browsers, a lot of the time ESM gets transpiled or bundled before running in browser environments. If one day we have the module declarations proposal, bundlers would emit ESM as implemented by the browser. The argument against this change is that it removes some abilities we are giving to JavaScript users right now with the proposal—that is, to list the exports of a module without triggering evaluation. This change is entirely driven by the needs of tools, and not by any spec constraint or any constraint coming from JavaScript engines.

NRO: And the counterpoint to that argument is that, well, we can still introduce a way to get the list of exports of a module—one that tools would probably have needed to implement in some different way anyway. That was part of ESM phase imports, where we have the static import capabilities.
And it’s now been split out and deferred until we continue with the other virtualization proposals, but it could still come in the future.

NRO: So we ended up with discussions last time, and arguments, and at the time I asked for a temperature check. So—if anybody has further thoughts, other than what the four people expressed, you are welcome to get in the queue. Otherwise, I would ask CDA to prepare the poll with this question: how do you feel about this change? Specifically, about changing the evaluation trigger to be whenever you are querying about the exports of a module—so, just listing the exports, or checking whether an export exists. My personal preference is to do this change, but let’s have the poll.

CDA: All right. Nothing on—MM supports. Nothing else on the queue. So, for the temperature check: in order to participate, you need to have TCQ open before we bring up the interface. Once it’s up, if you join after, you will not see it. So if you don’t have TCQ open, please open it up. I will give you 10 or 15 seconds. Or shout out if you need more time to open it up. Otherwise… All right. We will bring up the temp check.

NRO: Okay. So I think some people are actually missing, because I know at least GB would have voted unconvinced—but considering that, I think these results are giving me a direction. Is GB in the call?

AKI: Point of order: do you have to have the TCQ window active, in addition to having it open? Because I think that my tab was in the background and the temperature check never showed up.

CDA: Yeah, it depends on your browser. If your tab was inactive for long enough and the browser does any form of, like, memory optimization, then that feature would have prevented it from coming up. You wanted to see the results?

CDA: 3 strong positive. 9 positive. 3 following. 1 indifferent. And everything else is zeros.

NRO: Okay. So I would like official consensus for this change. Given that GB is not here, I want to read a message that GB sent to me: “I want to be sure and clear about decisions made, as long as we are clear in making these tradeoffs, the committee can decide to make them, but let’s have a discussion openly.” And the previous slide about the tradeoffs was reviewed by GB. So I am just going to assume that GB would have been fine with the conclusion, given the temperature poll, and ask: does anybody object to making this change?

CDA: Nothing on the queue.

NRO: Okay. Thank you. Then, we have consensus.

### Speaker's Summary of Key Points

NRO: The summary for the notes, including the discussion from Tuesday, is that we presented four changes to the proposal. The first one presented, the same one we concluded today, was about changing when evaluation of the deferred module happens: it happens not only when we read the value of the exports, but also when we read the list of exports of the module. This change got consensus. The second change was in response to a problem with the dynamic form of import defer, where the behavior of promises—reading their `then` property—would trigger execution. The change was to make sure that deferred module namespaces never have a `then` property, regardless of whether the module exports one or not; and this does not read the contents of the module. That change also got consensus.
There was a third change, about changing the value of the toStringTag symbol from “Module” to “Deferred Module” for deferred module namespaces; that change also got consensus. There was a fourth change, adding a symbol-keyed evaluate property to deferred module namespaces, controlling whether reading properties from it would trigger execution or not. Given the feedback—generally it seemed supportive of the idea, but not of this shape—and especially given that the stabilize proposal is in a very similar area, I did not ask for consensus on this change. The first three changes are in and the fourth one is not. And I think this is it.

## Adjournment

CDA: With that, that is the end of this meeting! Thanks to everyone, and a big special thanks to everyone who helped with the notes.

AKI: Don’t forget, if you want a hat for your contributions to note-taking, you need to make sure to contact me, so I know to make it.

MM: I need reviewers for immutable ArrayBuffers, which got to Stage 2. SYG and WH, I think that you had, privately or in a previous structs meeting, expressed interest in being a reviewer?

SYG: I will confirm, I will review.

JHD: I am happy to also review it.

WH: Yes.

MM: Excellent. So I have got three reviewers. Thank you very much.

CDA: Great. We did get reviewers for upsert/map-emplace. DLM?

DLM: That’s correct.

CDA: Okay. I just got paranoid about any other ones we missed. Okay. Great. Thanks, everyone.

From 8cdea93cf6d8abc2e627ec9042c821e861bbec78 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?=
Date: Mon, 23 Dec 2024 01:10:43 -0800
Subject: [PATCH 2/3] Linter's yelling bc of empty links!

---
 meetings/2024-12/december-02.md | 47 ++++++++++++++++-----------------
 meetings/2024-12/december-03.md | 13 +++++----
 meetings/2024-12/december-04.md | 14 +++++-----
 meetings/2024-12/december-05.md |  1 -
 4 files changed, 36 insertions(+), 39 deletions(-)

diff --git a/meetings/2024-12/december-02.md b/meetings/2024-12/december-02.md
index f9fbb2b..2e2577f 100644
--- a/meetings/2024-12/december-02.md
+++ b/meetings/2024-12/december-02.md
@@ -112,7 +112,7 @@ SHN: Great. Thank you very much. I will update the slide with the correct dates

 Presenter: Michael Ficarra (MF)

-* [slides](https://docs.google.com/presentation/d/1IS6hsFker8TM_mPtK1VQbFCH2TK3LljOxFu6-zMCjkM/edit)
+- [slides](https://docs.google.com/presentation/d/1IS6hsFker8TM_mPtK1VQbFCH2TK3LljOxFu6-zMCjkM/edit)

 MF: Pretty quick update on 262 editorial stuff. So normative changes, the first one here is a needs-consensus PR that we agreed to at the last meeting. We merged this change to `toSorted` to make it stable, as is already required by `Array.prototype.sort`. This was an oversight in the integration of `toSorted` as things changed with the Array sort stability specification at the same time. And the rest are Stage 4 proposal integrations: `Promise.try`, iterator helpers, and duplicate named capture groups. There were plenty of editorial changes, but none that need to be called out to plenary. And the list of upcoming and planned editorial work is the same. We should probably review it sometime soon just to make sure this is what our plan is going forward. But for now nothing has changed there. And that’s it.

@@ -429,15 +429,15 @@ DLM: I think I need two people. So that’s perfect. Thank you.
If anyone else i ### Speaker's Summary of Key Points -* Presented update on work that has occurred since October 2024 plenary, including renamed to proposal-upsert, support for both `getOrInsert` and `getOrInsertComputed`. -* Asked for feedback on handling of modification to the map during `getOrInsertComputed` callback, and on method names. -* Asked for Stage 2 reviewers. +- Presented update on work that has occurred since October 2024 plenary, including renamed to proposal-upsert, support for both `getOrInsert` and `getOrInsertComputed`. +- Asked for feedback on handling of modification to the map during `getOrInsertComputed` callback, and on method names. +- Asked for Stage 2 reviewers. ### Conclusion -* Committee was in favour of the non-throwing solution to issue #40 (https://github.com/tc39/proposal-upsert/pull/71) -* No further feedback on naming of methods, we’ll resolve this in the issue itself. (https://github.com/tc39/proposal-upsert/issues/60) -* JMN and MF volunteered as Stage 2 reviewers +- Committee was in favour of the non-throwing solution to issue #40 (https://github.com/tc39/proposal-upsert/pull/71) +- No further feedback on naming of methods, we’ll resolve this in the issue itself. (https://github.com/tc39/proposal-upsert/issues/60) +- JMN and MF volunteered as Stage 2 reviewers ## `Intl.DurationFormat` for Stage 4 @@ -474,12 +474,12 @@ USA: Thanks, everyone, for Stage 4. ### Speaker's Summary of Key Points -* USA went over some details about the purpose and history of the proposal. -* Stage 4 was requested and there were no objections to stage advancement. +- USA went over some details about the purpose and history of the proposal. +- Stage 4 was requested and there were no objections to stage advancement. ### Conclusion -* DurationFormat reached Stage 4 with supporting comments from DLM and PFC. +- DurationFormat reached Stage 4 with supporting comments from DLM and PFC. ## `Error.isError` to stage 3 @@ -508,12 +508,12 @@ JHD: Thank you. ### Speaker's Summary of Key Points -* test262 tests merged -* Firefox’s `InternalError` should pass this predicate, and champion will monitor implementation status +- test262 tests merged +- Firefox’s `InternalError` should pass this predicate, and champion will monitor implementation status ### Conclusion -* Consensus for stage 3 +- Consensus for stage 3 ## Iterator helpers close receiver on argument validation failure @@ -540,7 +540,7 @@ KG: Okay. Well, hearing no objection, and having two notes of explicit support, ### Speaker's Summary of Key Points -* An oversight in iterator helpers meant that we did not close the receiver when an argument failed validation. This PR will correct that. It's almost certainly web-compat given how new iterator helpers are. +- An oversight in iterator helpers meant that we did not close the receiver when an argument failed validation. This PR will correct that. It's almost certainly web-compat given how new iterator helpers are. ### Conclusion @@ -573,7 +573,7 @@ RPR: Request granted. Yes. Thank you. ### Conclusion -* JSL & MM will review `AsyncContext` +- JSL & MM will review `AsyncContext` ## The importance of supporting materials @@ -613,7 +613,7 @@ DLM: I think I would have that for another plenary. I wanted to give a brief pre ### Conclusion -* Not asking for any process changes at the time, just trying to highlight the importance of supporting materials for people who are evaluating proposals, in particular implementers who spend a lot of time on this. 
+- Not asking for any process changes at the time, just trying to highlight the importance of supporting materials for people who are evaluating proposals, in particular implementers who spend a lot of time on this. ## re-using IteratorResult objects in iterator helpers @@ -797,8 +797,8 @@ MF: Yeah, I guess as late as we can make it. ### Conclusion -* MF will wait until the test262 tests have been merged before asking for Stage 3 again. -* This topic was not revisited later in the meeting. +- MF will wait until the test262 tests have been merged before asking for Stage 3 again. +- This topic was not revisited later in the meeting. ## ShadowRealm for Stage 3 @@ -999,9 +999,8 @@ PFC: Then I think that brings us to the end. ### Speaker's Summary of Key Points -* Since advancing to stage 2.7, the Web APIs available in ShadowRealm have been determined using a new W3C TAG design principle. -* Each of these available Web APIs is covered in web-platform-tests with tests run in ShadowRealm, including ShadowRealm started from multiple scopes such as Workers and other ShadowRealms. Some web-platform-tests PRs are still awaiting review. -* The HTML integration is now agreed upon in principle, and needs some mechanical work done in downstream specs. However, it needs two explicitly positive signals from implementors to move forward. -* The concerns about test coverage have been resolved, assuming all of the open pull requests are merged. -* We will get the web-platform-tests merged, look into what can be included from crypto.subtle, and talk to the DOM teams of each of the browser implementations and get a commitment to move this forward. When that is finished, we'll bring this back for Stage 3 as soon as possible. - +- Since advancing to stage 2.7, the Web APIs available in ShadowRealm have been determined using a new W3C TAG design principle. +- Each of these available Web APIs is covered in web-platform-tests with tests run in ShadowRealm, including ShadowRealm started from multiple scopes such as Workers and other ShadowRealms. Some web-platform-tests PRs are still awaiting review. +- The HTML integration is now agreed upon in principle, and needs some mechanical work done in downstream specs. However, it needs two explicitly positive signals from implementors to move forward. +- The concerns about test coverage have been resolved, assuming all of the open pull requests are merged. +- We will get the web-platform-tests merged, look into what can be included from crypto.subtle, and talk to the DOM teams of each of the browser implementations and get a commitment to move this forward. When that is finished, we'll bring this back for Stage 3 as soon as possible. diff --git a/meetings/2024-12/december-03.md b/meetings/2024-12/december-03.md index 1fd41d7..2070daf 100644 --- a/meetings/2024-12/december-03.md +++ b/meetings/2024-12/december-03.md @@ -838,13 +838,12 @@ DLM: Cool. Thank you. 
### Speaker's Summary of Key Points -* List -* of -* things +- List +- of +- things ### Conclusion -* List -* of -* things - +- List +- of +- things diff --git a/meetings/2024-12/december-04.md b/meetings/2024-12/december-04.md index b26a98c..b40fc1f 100644 --- a/meetings/2024-12/december-04.md +++ b/meetings/2024-12/december-04.md @@ -113,19 +113,19 @@ CDA: As MM said, it is not the strongest signal to actually land something in th ### Speaker's Summary of Key Points -* List -* of -* things +- List +- of +- things ### Conclusion -* List -* of -* things +- List +- of +- things Presented a number of use cases where synchronous access to modules and their execution could be valuable and would like to explore the problem space of these under a Stage 1 process. There were reservations about the import sync design, but we are going to explore the solution space further. -## ESM phase imports for Stage 2.7. +## ESM phase imports for Stage 2.7 Presenter: Guy Bedford (GB) diff --git a/meetings/2024-12/december-05.md b/meetings/2024-12/december-05.md index c330a72..f0b6b80 100644 --- a/meetings/2024-12/december-05.md +++ b/meetings/2024-12/december-05.md @@ -450,4 +450,3 @@ CDA: Great. We did get reviewers for upsert/map-emplace. DLM? DLM: That’s correct. CDA: Okay. I just got paranoid about any other ones we missed. Okay. Great. Thanks, everyone. - From 76bd8536e8a40b88130ae064278a4849e1835366 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?= Date: Mon, 23 Dec 2024 02:05:44 -0800 Subject: [PATCH 3/3] Remove TODO slides link --- meetings/2024-12/december-02.md | 1 - 1 file changed, 1 deletion(-) diff --git a/meetings/2024-12/december-02.md b/meetings/2024-12/december-02.md index 2e2577f..d756131 100644 --- a/meetings/2024-12/december-02.md +++ b/meetings/2024-12/december-02.md @@ -242,7 +242,6 @@ RPR: No objections. So I think we have consensus on this review for merge at the Presenter: Eemeli Aro (EAO) - [proposal](https://github.com/eemeli/proposal-intl-currency-display-choices) -- [slides](TODO) EAO: This is a very small proposal. We had a short discussion in TG2 in fact about whether this should be a normative PR instead. But we thought, because there’s a little bit of discussion here that it would be good to have a little bit of space for that and the staging process is a very fine place for that. So the short entirety of this is that we do currency formatting under `Intl.NumberFormat` by using the `style: 'currency'` option and furthermore when formatting currency we have a `currencyDisplay` option that is effectively an enum value that we accept how to format the currency symbol. If you use the default `symbol` you get “$” or “US$” for formatting USD and `narrowSymbol` formats to "$" and `code` that gives you an ISO currency code like USD, or then as a spelled out `name`. All of these are of course localized name such as “U.S. dollars”.