Merge branch 'main' into fledge-currency (8 changed files, 682 additions, 150 deletions).

# Protected Audience (formerly FLEDGE) WICG Calls: Agenda & Notes

Calls take place on some Wednesdays, at 11am US Eastern time.

That's 8am California = 5pm Paris time = 3pm UTC (during summer).

This notes doc will be editable during the meeting — if you can only comment, hit reload.

Notes from past calls are all on GitHub [in this directory](https://github.com/WICG/turtledove/tree/main/meetings).

# Next meeting: Wednesday July 19, 2023

## Attendees: please sign yourself in!

1. Paul Jensen (Google Chrome)
2. Dennis Yurkevich (AirGrid)
3. Youssef Bourouphael (Google Privacy Sandbox)
4. Roni Gordon (Index Exchange)
5. Brian May (dstillery)
6. Sven May (Google Privacy Sandbox)
7. Sid Sahoo (Google Chrome)
8. Nick Llerandi (TripleLift)
9. Andrew Aikens (TripleLift)
10. David Dabbs (Epsilon)
11. Fabian Höring (Criteo)
12. Andrew Pascoe (NextRoll)
13. Gianni Campion (Google Ads)
14. Martin Pal (Google Privacy Sandbox)
15. Isaac Foster (MSFT Ads)
16. Tamara Yaeger (BidSwitch)
17. Marco Lugo (NextRoll)
18. Matt Menke (Google Chrome)
19. Priyanka Chatterjee (Google Privacy Sandbox)
20. Arun Nair (Google Privacy Sandbox)
21. Alex Cone (Google Privacy Sandbox)
22. Danny Rojas (Google Chrome)
23. Stan Belov (Google Ads)
24. Alonso Velasquez (Google Chrome)
25. Russ Hamilton (Google Chrome)
26. Jeff Wieland (Magnite)
27. Manny Isu (Google Chrome)
28. Caleb Raitto (Google Chrome)

## Note taker: Martin Pal

## To join the speaker queue:

Please use the "Raise My Hand" feature in Google Meet.

# Agenda

### Process reminder: Join WICG

If you want to participate in the call, please make sure you join the WICG: https://www.w3.org/community/wicg/

## [Suggest agenda items here — no agenda no meeting!]

* [Isaac Foster] 673 - 675 (IG Filtering via OpenRTB, Capping, Topics)
* [Isaac Foster] 607 (XD Targeting, few minute discussion)
* [Isaac Foster] TEEs in Azure Public, Azure Private, and Xandr DCs (I will find the appropriate tickets and inject here)
* [Gianni Campion] https://github.com/WICG/turtledove/issues/475 (deleteAllIGs API change)
* [David Dabbs] Issue [319](https://github.com/WICG/turtledove/issues/319), _Supporting Negative Interest Group Targeting_

# Notes

Isaac Foster, 673–675: I filed a bunch of issues, didn’t expect all to make it to the agenda. Was there further discussion on on-premise TEEs? Criteo and Xandr brought this up.

Arun Nair: Some folks who thought about on-prem are not present. The current plan of record is that for Privacy Sandbox services we’ll support AWS and GCP. Discussing with Azure. We do not have a plan to extend this to on-prem right now. Our position has not changed since PATCG.

Isaac: Can you delineate what we believe public cloud has that is secure that a private one doesn’t? What are the physical infrastructure requirements? Lots of subtleties. Treat this as a longer-term discussion.

Jeff Wieland: Add to that what is bad about on-prem. Using a public cloud is very costly. The reasons an on-prem solution is not allowed should be written down in a public place.

What are the security requirements needed by a public cloud to be approved? These are not available publicly to my knowledge.

Arun: Fair points. We’re thinking about on-prem on our side as well. To share our position: AWS, GCP, Azure. It is based on the security practices of these cloud providers to secure data, including physical access and policies.

Jeff Wieland: You should be open to the notion that a non-public cloud can qualify.

Isaac: What if someone builds a cloud called AppNexus2 (Isaac Edit: AdNexus2, actually, AdNexus -> AppNexus -> Xandr -> whatever we are now) who offers TEEs (and general private advertising cloud infra)?

Arun: I’m hearing that publishing a clear set of requirements would help. Sounds reasonable. We don’t have the right folks present. Just to clarify: AWS only a year ago, today GCP, Azure down the road. We’re open to adding more providers.

Martin: Considering on-prem solutions requires a deep dive into what else the machines are doing and what the hypervisor looks like. What level of intent can we consider? A deep dive on the stack is required.

Isaac: If we don’t know what the requirements are, we don’t actually know if public cloud implements them. The set of requirements would be pretty stringent, I assume.

Brian May: From the PATCG meeting: how do we ensure a cloud provider doesn’t serve their own advertising business? Folks in the ecosystem would be interested in that. One problem with on-prem is that it’s not only the TEE, but also the environment the TEE is in (software and physical). We should define TEEs as necessary for advertising but not sufficient.

Stuff happening between the device (point of origin) and the TEE needs to be monitored. If that link is not secure, we leak privacy.

Paul Jensen: For the B&A server design, my primary concern in the API design was what we’re exposing. Even sizes of packets may leak info. I used to work on the Chrome networking team, http2/http3. We wanted everything to work with TLS, best security practices.

Brian May: I have a background in netops. Attributes of the traffic can expose info about the traffic.

Have you considered the requirement of Amazon advertising not running on AWS / Google ads on GCP?

David Dabbs: How does this fit in if cloud and adtech are the same company?

Arun: Paraphrasing Michael Kleber: at this point in time Amazon can use AWS and Google can use GCP. The mitigation is that the cloud provider is sufficiently independent and has sufficient reputation on the line.

Isaac Foster: No consensus necessarily, but backing up Arun. Folks from MIT, Martin Thomson (?), Kleber seem (Isaac edit: I was just backing up the notion that there was a discussion on a few different fronts: a) is public cloud the right standard, b) a little bit on what that standard might include, in particular around physical security of machines and premises, and c) whether any ad tech should be able to use their own servers, regardless of whether those servers are public cloud or not. There was not any established consensus around any of those; Kleber said he believes (or is at least taking the position) that Google can use GCP and more generally an ad tech can use their own servers. I would say there was “a lot of folks wanting to discuss it more”.)

Arun Nair: <scribe missed>

J Wieland: I find the position [that Google can use GCP and Amazon can use AWS for its B&A] incredible. That Google and Amazon can use their own cloud but you don’t allow other ad tech companies to use their on-prem. CMA will take note.

Isaac Foster: Publishing on-prem requirements would help resolve this.

If the standard is that you don’t run on your own infrastructure, let’s come up with a set of standards. If AWS/GCP/Azure are the only ones then so be it. (Isaac Edit: Isaac was agreeing with Jeff’s position but saying we need to split this into a number of questions: 1) is “public cloud” the right technical specification; 2) if it’s not, what is the right technical specification, and with that 2a) should that include a requirement that the host of an ad tech’s TEEs must not be the ad tech; and 3) if it really is the case that those can only be accomplished by GCP, AWS, and Azure… well, OK I guess.)

Brian May: Martin Thomson said in PATCG that large cloud providers have strict security protocols. Suggest we identify the attributes being identified as “cloud providers” specifically, so that others can develop compliant services.

Arun: Going back, publishing requirements makes sense. When we’re ready to publish we’ll give a heads up.

Jeffrey Wieland: We’re racing a deadline. We don’t have requirements published, and it is hugely problematic for smaller companies.

Paul Jensen: The on-device version of Protected Audience doesn’t require cloud TEEs.

Isaac: Let me characterize this a bit differently. Conversation with Alex & Kleber at PATCG. Going on a tangent. Agree the API is reasonably well defined. If 1.0 is going to satisfy competition requirements, the implementation may not be where we want to be. We can say the API is defined, but the infrastructure is not fully in place. (Isaac Edit: lots of nuance here so want to amend: I think we’re all having emotional reactions to the “readiness” discussion, which I think has two-to-three dimensions to it: (1) is whether the API as imagined is a reasonable v1.0 replacement for 3rd party cookies, and while I have my own opinion on that (which is, ehhhhhh mayyyybbbee), I’ll say that is a different kind of question than the other two; (2) is the infrastructural one, and my point is that the logical API of a system being at v1.0 but the infrastructure and operations of it being < v1.0 does not add up to the system being at v1.0… in this case I’d guess that the reason B&A came about is that we’re realizing Edge Compute in general isn’t ready to meet ad tech functionality and SLAs, and so now we’re trying to get B&A sorted out and it’s not yet; (3) is that we’re coupling the initial production release of this system preeeettty tightly to the deprecation of the old system (3PC), which means we’re leaning heavily on this new system working out of the box, and (2) makes me (and I think others on the call) quite concerned, even if we ignore differences on (1).)

Andrew Pascoe: For Protected Audience there’s an on-device path forward. Our problem is that the SSP decides on B&A vs on-device. Hence a DSP needs to implement both. If the SSP declares B&A, then the DSP is required to set up a B&A stack in public cloud.

Paul: Right now, on-device is the path forward.

Arun: To Andrew’s point: if an SSP sees a mix of DSPs on-device and B&A, they can split auctions. Can run a component auction – one on-device, one B&A – then merge results. This architecture could accommodate both on-device and server DSPs. Would love to hear feedback from DSPs on this.

Isaac: Getting into deep things. Challenge/reaction: it is one thing if we release the APIs and build towards a competitive solution, but we’re coupling this to 3PCD. Jeff: things are still evolving. The diagram on the page is reasonably fixed. The problem is the hard coupling and sequencing with 3rd party cookie deprecation. Physical infrastructure and operational things. We will discover issues. Fix cycles are pretty long. This may be a disconnect people are feeling.

Brian May: Some of us at smaller adtechs are feeling a severe resource crunch.

Paul: Thanks Isaac & Jeffrey. Arun has an action item to give clarity on requirements. Isaac has posted other issues.

Isaac: Let someone else go. We can discuss my other issues next time.

Gianni, Issue 475: Deleting all IGs.

Gianni: One-minute summary. A user goes to an advertiser many times. The advertiser populates many IGs. Then we want to remove IGs. That requires keeping track of which IGs are on the device. This is doable but not practical. We don’t want to build infra to keep track of all IGs. On-server and on-device state may get out of sync. Or, tagging fails for some reason but the server doesn’t know. Or, we ask Chrome to delete an IG, but the delete fails.

I’d like to request a feature that drops all IGs from a given origin, except a few. Thoughts?

Paul: You want an API that would leave all IGs from the origin that are owned by one owner.

Gianni: Yes, except I want to be able to specify “leave all, except for IG 7”. Can call it leaveAndJoin() if you like.

Paul: From a privacy angle this seems OK to me (could implement this via 1p cookie).

Brian May: Sounds familiar from other domains, such as databases. Given the lack of insight into what’s on the browser, such a primitive might be useful. It would be a swap – replace what’s there with a new set. I think having the option of excluding existing IGs might create a potential for information leakage and we should only allow a complete swap.

Paul: Question for Gianni. The API would remove all IGs from a given site and given owner, except for an optional list to be left alone. What if some IGs on the list are from a different origin/owner?

Gianni: I’m thinking mostly about “group by origin”. Anything from a different origin is to be left alone.

Paul: Anyone else want to comment? Implementation-wise, this seems not difficult to implement. There may be a function that kind of does this.

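To make the shape of Gianni's request concrete, here is a minimal sketch in the style of the existing `joinAdInterestGroup()` / `leaveAdInterestGroup()` calls. The `clearOriginJoinedAdInterestGroups()` name, its arguments, and the helper around it are hypothetical illustrations of the proposal, not something taken from the spec; only `joinAdInterestGroup()` is the existing API.

```js
// Hypothetical sketch of the "leave all IGs from this origin, except a
// keep-list, then (re)join the current set" primitive discussed above.
// clearOriginJoinedAdInterestGroups() is invented here for illustration.
async function resetInterestGroups(owner, namesToKeep, groupsToJoin) {
  // Drop every IG this origin previously joined for `owner`,
  // except the ones named in namesToKeep (e.g. ['ig-7']).
  await navigator.clearOriginJoinedAdInterestGroups(owner, namesToKeep);

  // Re-join the groups the advertiser currently cares about.
  for (const group of groupsToJoin) {
    await navigator.joinAdInterestGroup(
        {owner, ...group},
        /*durationSeconds=*/ 30 * 24 * 60 * 60);
  }
}
```
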
Paul: David, Issue 319, negative targeting.

David Dabbs: I pinged Orr Bernstein and Max Orlovich, the two engineers who proposed this. Seems like something that’s moving forward? I’d like them to talk about it.

Paul: Orr and Max may not be on the call. I’ve helped design the feature. This is about negative targeting. IGs are for positive targeting. Steven brought up negative targeting. Example: don’t advertise to people who already signed up. Put them in a ‘negative’ interest group. Can a bid be submitted at auction time?

The SSP would pass bids into runAdAuction. In Protected Audience, the assumption is that money will change hands. Normally a bid comes from a bidding script from an origin that is expected to pay money. We use TLS to verify the identity of the bidding script.

David: Want to plant the seed to enhance the feature. A wafer-thin interest group: name and bidding URL. The IG will have no ads. What about a header-based interest group? Set a tombstone. Header bidding can add negative targeting.

Paul: What would be the advantage of a header vs the JS API?

David: You can do it with a pixel rather than JavaScript. ARA uses HTTP headers; Shared Storage may too.

Brian May: HTTP headers are more trusted – more under control of endpoints and harder to manipulate.

Concern with negative IGs: potential to leak info about the browser. Leaks info that this user is being left out. Negative targeting may ‘cast a shadow’ visible in logs.

David: The ads being served are not interest targeted. They are based on IG data, but not in the same way. I’d like to be able to engage on this.

Paul: The idea that iframes have significant cost resonates with me. Process creation for a new renderer. I don’t know if this ask is specific to negatively targeted IGs.

David Dabbs: The ability to manage IGs via HTTP headers without the user needing to visit the advertiser site sounds useful.

Paul: Recommend filing an issue for the idea of joining an IG via HTTP headers. There may be issues that I’m not seeing immediately.

David: I’ll file an issue.

Paul: Worth filing. Need to think through how policies will be applied.

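To make David's pixel idea concrete, here is one hypothetical shape a header-based join could take: the buyer's pixel response carries a header that the browser treats like a `joinAdInterestGroup()` call for a "wafer-thin", ads-free group used only for negative targeting. The header name and JSON payload below are invented for illustration; as noted above, this still needs its own issue and design.

```js
// Hypothetical flow, for illustration only; no such header is specified here.
//
// 1. The advertiser page fires an ordinary pixel to the buyer:
fetch('https://dsp.example/pixel?event=signup', {mode: 'no-cors'});

// 2. The buyer's HTTP response could then carry something like
//    (hypothetical header name and payload):
//
//      Join-Ad-Interest-Group: {"owner": "https://dsp.example",
//                               "name": "already-signed-up",
//                               "biddingLogicURL": "https://dsp.example/bid.js"}
//
//    which the browser would treat the same as script on the page calling:
navigator.joinAdInterestGroup(
    {
      owner: 'https://dsp.example',
      name: 'already-signed-up',                     // "wafer-thin" negative-targeting IG
      biddingLogicURL: 'https://dsp.example/bid.js',
      // No `ads` list: the group exists only to be negatively targeted.
    },
    /*durationSeconds=*/ 30 * 24 * 60 * 60);
```
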
# Protected Audience (formerly FLEDGE) WICG Calls: Agenda & Notes

Calls take place on some Wednesdays, at 11am US Eastern time.

That's 8am California = 5pm Paris time = 3pm UTC (during summer).

This notes doc will be editable during the meeting — if you can only comment, hit reload.

Notes from past calls are all on GitHub [in this directory](https://github.com/WICG/turtledove/tree/main/meetings).

# Next meeting: Wednesday Aug 2, 2023

## Attendees: please sign yourself in!

1. Michael Kleber (Google Privacy Sandbox)
2. Nick Llerandi (TripleLift)
3. Andrew Aikens (TripleLift)
4. Roni Gordon (Index Exchange)
5. David Dabbs (Epsilon)
6. Sven May (Google Privacy Sandbox)
7. Tamara Yaeger (BidSwitch)
8. Stan Belov (Google Ads)
9. Russ Hamilton (Google Chrome)
10. Youssef Bourouphael (Google Chrome)
11. Paul Jensen (Google Chrome)
12. Marco Lugo (NextRoll)
13. Steve Luo (PubMatic)
14. Matt Menke (Google Chrome)
15. Orr Bernstein (Google Chrome)
16. Nitin Nimbalkar (PubMatic)
17. Fabian Höring (Criteo)
18. Priyanka Chatterjee (Google Privacy Sandbox)
19. Manny Isu (Google Chrome)
20. Isaac Foster (MSFT Ads)
21. Harshad Mane (PubMatic)
22. Viacheslav Levshukov (MSFT Ads)
23. Risako Hamano (Yahoo Japan)
24. Tianyang Xu (Google Chrome)
25. Sid Sahoo (Google Chrome)
26. Andrew Pascoe (NextRoll)
27. Jeff Wieland (Magnite)
28. Rotem Dar (eyeo)
29. Yanush Piskevich (MSFT Ads)

## Note taker: Manny Isu

## To join the speaker queue:

Please use the "Raise My Hand" feature in Google Meet.

# Agenda

### Process reminder: Join WICG

If you want to participate in the call, please make sure you join the WICG: https://www.w3.org/community/wicg/

## [Suggest agenda items here — no agenda no meeting!]

* Isaac (MSFT) Issues (don’t all have to be mine, so grouping so we can easily filter)
    * B&A Multi Tag Auctions: https://github.com/WICG/turtledove/issues/724
    * https://github.com/privacysandbox/fledge-docs/issues/52
    * Production Support: https://github.com/WICG/turtledove/issues/620
    * More spinoff meetings for uncovered topics?
* Yanush (MSFT):
    * Clarify BA Final Decision: https://github.com/WICG/turtledove/issues/739

# Notes

## B&A Multi Tag Auctions: https://github.com/WICG/turtledove/issues/724

* [Isaac Foster] This is important to the MSFT side of things for the unified ads request.
* There is going to be a generation ID for the ads request for the seller front end that will be used for debugging and preventing attacks. The concern is trying to run multiple auctions in a single request - run one and you cannot run the other one due to fraud. (Isaac Edit: UUID generated for the request and then used to prevent replay attacks; assuming this basically does a used-more-than-once validation, that would prevent running multiple ad auctions with a single request.)
* [David Dabbs] To clarify, are you referring to the recent spec for Ad Auction Nonce? What identifier are you referring to?
* [Isaac] I do not believe this is what I am referring to. This is what I am referring to: https://github.com/privacysandbox/fledge-docs/blob/main/bidding_auction_services_system_design.md#throttling-in-sellerfrontend. Also there is a field called requestID: https://github.com/privacysandbox/bidding-auction-servers/blob/main/api/bidding_auction_servers.proto#L47
* [Priyanka] The concern is spot on - valid that B&A comes with a bunch of threats. We want to incorporate chaffing as a mitigation strategy, but it also comes with some threats, so we want to integrate anti-abuse. The issue is that the generation ID is generated from the browser. For a single-slot auction, it should be okay. But for multi-slot, it is an issue. As of now, we have not placed a replay-attack mitigation in place and have not incorporated chaffing. We want to discuss a viable solution with SSPs, keeping privacy in mind, and not have a broken feature once a mitigation is in place.
* [Isaac] It seems like this is one of the features that will be turned on afterwards. Can you clarify the timeline?
* [Priyanka] While 3PCD is there, it is fine to take on this, but we need to do this ASAP. We believe that this is going to come last. We need to put in the mitigations by mid 2024.
* [Isaac] I am referring to the case where there are multiple ad slots on the same page - basically a single request from a web page to the SSP, and the SSP does a bunch of inline things… I am not referring to single ads that might have components.
* [Priyanka] Yes, we are talking about the same thing.
* [Isaac] On timing, it is good to know that when we are doing B&A testing, we will be able to do the multi-ads-per-page testing. But if that were to be deprecated in line with the deployments, that will be problematic and a massive hit to our SLAs. I will push a bit for some clarity on this one (Isaac edit: both on timing and on us all developing the solution).
* [Priyanka] We will get back to you on exact timelines after further discussions with Product. We will make sure features continue to work. Also, how does the single request going out for all slots work for all SSPs here?
* [Harshad] Most SSPs are on-page using Prebid, and that coalesces all the slots on the page into a single request.
* [Roni] Everything coming from one page's ad slots is sent as a single HTTPS request to the exchange via PBJS.
* [David] The security model has padded payloads - they need to be encrypted and transit through the SSP's relay, which relays that packet into the TEE. Since they need to be encrypted, is today’s model compatible with that?
* [Priyanka] We do send back encrypted and padded responses through the seller to the client. If there are multiple requests, we will treat them as different requests at this point.
* [MK] Priyanka, can you take a sec to talk through what the replay attack that you’re mitigating is?
* [Priyanka] The SSP needs to send the chaff request to a few buyers. This way, the sellers don't figure out which buyers are on the browser. We want to randomize the number of buyers that will be sent. The seller ultimately determines which buyers will take part in the auction. If the browser has an IG for those buyers, only those buyers will get the request. The abuse here is that we need to keep the set of chaffs random and the folks getting the chaffs random. This ensures that there is no kind of abuse happening.
* [MK] It seems like the risk could be addressed in other ways. If this is the only leakage we are worried about, I would like to have a conversation on the design to avoid this need for preventing replays. As long as this is just about random chaff and the response that comes from a TEE to the untrusted server isn't leaking information, some kind of deterministic random number generator based on stuff coming from the browser could let us get around this problem entirely.
* **ACTION:** Priyanka Chatterjee & Michael Kleber will discuss this design offline.

## [Isaac] As we are getting closer to deployment date, I would like to propose a structure for meetings so we can make progress on some of the additional items on the agenda.

[MK] If the questions are around server setup or mechanics like that, those should probably be a different meeting.

[Isaac] There are things that are worthy of public follow-up discussions - some might make sense as WICG. Alternatively, if it's hyper specific, perhaps it can be a one-off meeting and doesn’t have to be a WICG session.

## https://github.com/privacysandbox/fledge-docs/issues/52

* [Isaac] On the server side, it seems like there is a lot of caching that happens on startup. If an IG comes with a new thing that it hasn’t seen before…
* [Priyanka] Here is some documentation in the explainer: https://github.com/privacysandbox/fledge-docs/blob/main/bidding_auction_services_system_design.md#code-blob-fetch-and-code-version
    * We reviewed and wanted to fetch the code modules. Within the bidding and auction server, the code will be prefetched at server startup and periodically. We are also supporting a way to fetch from cloud storage, where the adtech can push code modules as they roll out, push to the buckets, and notify the bidding servers. We will support multiple versions of the code modules. However, the adtech can pass the version of the code modules for requests. Both buyers and sellers can pass any version of the code modules they want to use.
* [Isaac] Regardless of the version being passed, that will still be factored into the k-anon, correct?
* [MK] The bidding script is also the script used for reporting.
* [Priyanka] If we fetch from cloud storage, we will not be able to micro-target the user of the URL.
* [MK] The actual URL specified inside the IG will need to go through a k-anon check. It's OK if the file that gets loaded is different in different servers. The point is that it must not be different for different _people_, in a way that is consistent as they browse across sites, otherwise it's a tracking risk.
* [Isaac] Let’s button this up on the ticket.

## Yanush (MSFT) - Clarify BA Final Decision: https://github.com/WICG/turtledove/issues/739

* [Yanush] How do we choose on the client side between the contextual and private bids returned from the B&A flow?
* [MK] See https://github.com/WICG/turtledove/blob/main/FLEDGE_browser_bidding_and_auction_API.md#step-4-complete-auction-in-browser (a sketch of this step follows this section).
    * When you execute an auction on the B&A service, you end up with a server response blob. This contains the results of the B&A auction that took place server side. It doesn’t contain contextual ads. Feed the server response blob into the on-device auction.
    * Use the contextual response if the on-device auction has no winner. This is the same as for a purely on-device auction with no B&A at all.
* [Yanush] So I can use contextual ads if something goes wrong?
* [MK] Basically the contextual auction is a fallback if the PA auction decides that there is no winner.
* [Yanush] And I can render contextual ads without a fenced frame, correct?
* [MK] Yes.

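As referenced above, a minimal sketch of "step 4" from the linked B&A browser API explainer: the server response blob is fed into the on-device `runAdAuction()` call, and the contextual ad is used only when that auction produces no winner. The helper names (`renderContextualAd`, `contextualAd`) and the exact config fields are illustrative assumptions; check the explainer for the authoritative shape.

```js
// Sketch: completing a B&A auction in the browser.
// `serverResponseBlob` is the encrypted result from the seller's B&A front
// end; `adAuctionRequestId` came from navigator.getInterestGroupAdAuctionData().
const paResult = await navigator.runAdAuction({
  seller: 'https://ssp.example',
  serverResponse: serverResponseBlob,
  requestId: adAuctionRequestId,
  resolveToConfig: true,               // ask for a FencedFrameConfig back
});

if (paResult) {
  // Protected Audience winner: must be rendered in a fenced frame.
  document.querySelector('fencedframe').config = paResult;
} else {
  // No PA winner: fall back to the contextual ad, rendered normally
  // (no fenced frame required). renderContextualAd() is a page-level
  // helper assumed here, not part of any API.
  renderContextualAd(contextualAd);
}
```
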
## [Rotem Dar] Can an SSP and DSP work together without an endpoint integration?

* [MK] For a purely on-device PA auction, it is an optional engagement between SSP and DSP. Of course the SSP and DSP need some business relationship with each other, since they are exchanging money! But they can do it without any real-time server-to-server communication, if they want to.
* [Rotem] This is something that we should highlight a little bit more - it is not trivial.

## [David] So whoever is running the topmost auction is the orchestrator. If I am a seller, is it a binary thing that I do server only or on-device only, or can I mix and match? I may have DSP partners that aren’t as sophisticated. How is this possible? How is it going to work?

* [MK] An SSP that wants to work with some DSPs who want to do server-side B&A stuff and others who want to do on-device stuff can run two different component auctions. The SSP should pretend there are two separate SSPs (see the sketch below).

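A minimal sketch of the "pretend to be two separate SSPs" arrangement: one top-level auction with two component auctions, one scored on-device for DSPs bidding on-device and one carrying the result of the seller's server-side B&A auction. The origins, script URLs, and blob variables are placeholders, and the exact mixed-mode config shape should be checked against the B&A explainer rather than taken from this sketch.

```js
// Sketch: hybrid top-level auction mixing on-device and B&A component auctions.
const result = await navigator.runAdAuction({
  seller: 'https://ssp.example',
  decisionLogicURL: 'https://ssp.example/top-level-score.js',
  componentAuctions: [
    {
      // "SSP #1": ordinary on-device component auction.
      seller: 'https://ssp.example',
      decisionLogicURL: 'https://ssp.example/on-device-score.js',
      interestGroupBuyers: ['https://on-device-dsp.example'],
    },
    {
      // "SSP #2": component auction whose bids were computed server-side;
      // the blob and request id come from the seller's B&A front end and
      // navigator.getInterestGroupAdAuctionData(), respectively.
      seller: 'https://ssp.example',
      serverResponse: baServerResponseBlob,
      requestId: baRequestId,
    },
  ],
});
```
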
# Protected Audience (formerly FLEDGE) WICG Calls: Agenda & Notes

Calls take place on ~~some~~ Wednesdays, at 11am US Eastern time.

That's 8am California = 5pm Paris time = 3pm UTC (during summer).

This notes doc will be editable during the meeting — if you can only comment, hit reload.

Notes from past calls are all on GitHub [in this directory](https://github.com/WICG/turtledove/tree/main/meetings).

# Next meeting: Wednesday Aug 16, 2023

## Attendees: please sign yourself in!

1. Michael Kleber (Google Privacy Sandbox)
2. Nick Llerandi (TripleLift)
3. Andrew Aikens (TripleLift)
4. Bosko Milekic (Optable)
5. Xavier Capaldi (Optable)
6. Antoine Niek (Optable)
7. Brian May (dstillery)
8. Sid Sahoo (Google Privacy Sandbox)
9. Paul Jensen (Google Chrome)
10. Orr Bernstein (Google Chrome)
11. Don Marti (Raptive)
12. Priyanka Chatterjee (Google Privacy Sandbox)
13. Luckey Harpley (Remerge)
14. Alex Cone (Google Privacy Sandbox)
15. Russ Hamilton (Google Chrome)
16. David Dabbs (Epsilon)
17. Harshad Mane (PubMatic)
18. Risako Hamano (Yahoo Japan)
19. Matt Menke (Google Chrome)
20. Isaac Foster (MSFT Ads)
21. Sergey Fedorenko (MSFT Ads)
22. David Tam (Relay42)
23. Tamara Yaeger (BidSwitch)
24. Alexandru Daicu (eyeo)
25. Manny Isu (Google Chrome)
26. Guy Teller (eyeo)
27. Caleb Raitto (Google Chrome)
28. Joel Meyer (OpenX)
29. Stan Belov (Google Ads)
30. Tristram Southey (Google Privacy Sandbox)
31. Martin Thomson (Mozilla)

## Note taker: Manny Isu

## To join the speaker queue:

Please use the "Raise My Hand" feature in Google Meet.

# Agenda

### Process reminder: Join WICG

If you want to participate in the call, please make sure you join the WICG: https://www.w3.org/community/wicg/

## [Suggest agenda items here — no agenda no meeting!]

* Isaac (MSFT) Issues (don’t all have to be mine, so grouping so we can easily filter)
    * B&A: Jan 1st 1% vs Feb Scaled Testing: https://github.com/privacysandbox/fledge-docs/issues/55
    * User IG view/delete interaction w/r/t B&A payload optimization: https://github.com/privacysandbox/fledge-docs/issues/56
    * Multiple Bids Per IG (check-in, was discussed a while back): https://github.com/WICG/turtledove/issues/595
    * PA Deals Phase: https://github.com/WICG/turtledove/issues/686
    * Cross Device: https://github.com/WICG/turtledove/issues/607
    * Production Operations: https://github.com/WICG/turtledove/issues/728 and https://github.com/WICG/turtledove/issues/620
* Nick (TripleLift)
    * desirabilityScore: is there a range? What’s to stop a component seller from scoring ads with a tremendously high desirability score?
    * Clarifying how the attestations file lookup/check works (Seller perspective); is the “Seller” origin checked prior to every auction?
* Bosko (Optable)
    * Clarifying question on suggested IG scope/granularity
    * GAM throttling

# Notes

## desirabilityScore: is there a range? What’s to stop a component seller from scoring ads with a tremendously high desirability score?

* [Nick] How will the PA API distinguish between which one to choose?
* [MK] Every IG produces bids, and the bid comes with an amount of money associated with it - it has to be comparable across different bidders. On the other hand, the desirability score is what the party running the auction assigned to each bid for how good they think it is. The desirability score is purely internal and is never seen by anybody except that seller. The browser has no idea what this means, except that larger is better, and less than or equal to zero means this ad should never win. The winner of the component auction is passed along to the top-level auction for the top-level seller to evaluate and assign their own desirability score; it doesn't matter what the component seller thought of it. The desirability score is put in the reporting that goes back to the seller but does not go to any other parties. (A scoreAd() sketch follows this section.)
* [Brian] So the seller is taking a bunch of different attributes and scoring them… wondering what we can do to understand what goes into each of those scores.
* [MK] The browser has no idea how sellers are coming up with the desirability scores. Google Ads in one of their articles may have talked about their calculation for their desirability scores, see here: https://github.com/google/ads-privacy/tree/master/proposals/fledge-multiple-seller-testing#technical-details
* [Isaac] In my discussions with Sergey, and discussing the ranking function… the SSP has the opportunity to pass in things from the auction context. Can we talk about the canonical way to pass values into the function?
* [MK] Every one of the IGs on the device has an opportunity to make a bid… the seller sees bids from IGs one at a time; the JS code assigns the desirability score for each bid and the highest one is the ad that is chosen. https://github.com/WICG/turtledove/blob/main/FLEDGE.md#23-scoring-bids
    * There is any information coming from the bidder; there is stuff that comes from the publisher page - info that the publisher makes available, like first-party information; stuff that the seller puts into the decision-making process - directFromSellerSignals; trusted scoring signals - info retrieved from the seller's KV server; the renderURL is looked up in the SSP's KV server.
    * Also, the SSP is probably enforcing some kind of rules that they got from the publisher on what kind of ads they may not want on their site.
* [Sergey] Will that particular renderURL be used to render the ad?
* [MK] Yes, that is a guarantee that the browser makes.
* [Sergey] So every creative will have a renderable URL on the SSP side.
* [MK] Yes, that is correct.
* [Sergey] So every DSP will have to register their creatives with every single SSP that they participate in?
* [MK] Not sure if that is required. If the SSP needs to do an analysis of the image, then they need to know about the candidate ads. But they do not need to know about them in advance; it all depends on how they manage their relationship with the DSP, which can provide metadata for the ad in the bid.
* [Sergey] Most SSPs do not blindly propagate ads to the public, which means that they must be registered somehow - it brings up the question of how that is going to happen. Given that there is limited logging in the TEE… it moves away from where DSPs and SSPs are going… we are moving towards 1) Trusted Bidders: allow all buyers to bid as long as they declare the brand, type of creatives, etc. in the bid request so we can apply ad quality. For other buyers, we do allow dynamic registration.
* [MK] I think it could still work in this case… For ads that did not win the auction, you do not get the event-level information on what happened, but you do have a way to find out what happened from an aggregate standpoint. You can learn all the creatives you have bid with but haven't yet registered with the corresponding SSP using the aggregate feature.
* [MK] It seems like the crux of the question is: How can a pull model work? How can one discover a URL that hasn’t been registered yet?
    * One model might be to change to a push model: the DSP can use aggregated reports to learn about URLs that are winning auctions, and can tell SSPs to scan those URLs.
    * Either SSPs or DSPs could use aggregate measurement to learn some hash(renderURL). But DSPs are in a much better position to convert that back to the actual URL itself; this is hard for SSPs, who would need to discover novel URL strings they had never seen before. One option: DSPs could run a hash-to-renderURL lookup service and the SSPs could use that.
* [Harshad] If there are 5 SSPs with an IG… do the renderURLs need to be sent to the SSP KV server all in the same URL?
* [Paul] They are GETs right now, given the privacy model. The browser groups together SSP requests and reduces the number. (A sketch of the request shape also follows this section.)
* [MK] I think that is the essence of the problem here: if the KV server URL contains a bunch of ad render URLs, it could get much too long.
* [Harshad] There are other parameters too that need to be parsed.
* [MK] I think Harshad makes a good point; we should consider whether they should be POST.
* [Martin Thomson] Should be the HTTP QUERY method, if you want it idempotent but with a body.
* Matt Menke (call chat comments):
    * In practice, Chrome will currently coalesce all DSP requests from a single auction, but may issue multiple SSP signals requests, since bids trickle in and we don't want to delay scoring until we've received all bids.
    * PerBuyerSignals are coalesced (if there are two auctions at once with the same buyer, one buyer may get results from two separate requests, though).
    * Also, if multiple SSPs share a buyer in components from the same auction, there may not be a single request, but somewhat weirdly sharded requests.
    * There will never be more buyer signals requests than times the buyer appears in (component) auctionConfigs.
    * So best not to rely on all signals for a single auction coming from a single buyer or seller signals request, since that would not be accurate.
* [Joel] What is the relationship of the renderURL and the top-level ads?
* [MK] In the ad components, the top-level ads have 1 renderURL and…
* [Harshad] _<Participant suggested a different flow, with the browser fetching per-buyer signals directly from their server, and the SSP response just containing a set of URLs instead of the signals themselves>_
* [MK] It seems like it will delay the processing of the bidding operation by a bunch due to the round trips involved in your suggested flow. The tradeoff for performance does not seem optimal.

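As referenced above, here is the seller-side `scoreAd()` entry point described in the FLEDGE explainer (section 2.3). The signature follows the explainer; the body is an invented illustration of "larger is better, less than or equal to zero never wins", not anyone's real scoring policy.

```js
// Seller's decision logic (loaded from decisionLogicURL). Signature per
// FLEDGE.md section 2.3; the scoring policy below is purely illustrative.
function scoreAd(adMetadata, bid, auctionConfig, trustedScoringSignals,
                 browserSignals) {
  // Hypothetical publisher rules delivered through the seller's own signals;
  // the exact plumbing (sellerSignals vs. the seller's KV server) is up to the SSP.
  const blockedCategories = auctionConfig.sellerSignals?.blockedCategories ?? [];
  if (blockedCategories.includes(adMetadata?.category)) {
    return 0;  // <= 0 means this ad can never win
  }
  // Desirability is internal to this seller: the browser only cares that
  // larger is better, and reports the value back only to this seller.
  return bid * (adMetadata?.qualityMultiplier ?? 1);
}
```
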
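And for the GET-length concern: today the browser fetches the seller's trusted scoring signals with a single GET whose query string lists every candidate bid's renderURL, roughly as sketched below. Hostnames are placeholders and the exact parameter names should be checked against the explainer; the point is only that the URL grows with the number of candidate ads, which is what motivates the POST / HTTP QUERY discussion.

```js
// Illustrative construction of a scoring-signals fetch, to show why the
// URL length grows with the number of candidate renderURLs.
const kvRequest = new URL('https://kv.ssp.example/scoring-signals');
kvRequest.searchParams.set('hostname', 'publisher.example');
kvRequest.searchParams.set('renderUrls', [
  'https://cdn.dsp-a.example/ads/123',
  'https://cdn.dsp-b.example/ads/456',
  // ...one entry per candidate bid in the auction
].join(','));
console.log(kvRequest.href);
// => https://kv.ssp.example/scoring-signals?hostname=publisher.example&renderUrls=https%3A%2F%2Fcdn.dsp-a.example%2Fads%2F123%2C...
```
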
## Clarifying question on suggested IG scope/granularity

[Bosko] For IG granularity, an IG per advertiser site… what is the canonical example and recommended way to do this? One IG per interest? One IG for all of a person's interests on one site?

https://github.com/WICG/turtledove/issues/207#issuecomment-1573785298

* Our goal is to make sure that both of the options you mentioned are doable. As we have had more discussions, we have heard folks say that the best way is to minimize the number of IGs that the user is in, down to having one IG for all of a user's activity on a site. This means that even if there are several different things the user did on a site, we could still have one IG for a user that did a collection of things. (A sketch of this single-IG-per-site shape follows.)

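A minimal sketch of the coarse-grained option described above: one interest group per (user, advertiser site) pair, carrying several candidate ads, joined or refreshed on each visit. The origins, ad URLs, and signal fields are placeholders for illustration, not a recommendation taken from the notes.

```js
// Sketch: a single interest group covering all of this user's activity on
// advertiser.example, rather than one IG per product or interest.
navigator.joinAdInterestGroup(
    {
      owner: 'https://dsp.example',
      name: 'advertiser.example-all-activity',
      biddingLogicURL: 'https://dsp.example/bid.js',
      trustedBiddingSignalsURL: 'https://dsp.example/bidding-signals',
      trustedBiddingSignalsKeys: ['advertiser.example'],
      // Everything the user did on the site is folded into one group;
      // generateBid() picks among these ads at auction time.
      ads: [
        {renderURL: 'https://cdn.dsp.example/ads/shoes.html', metadata: {category: 'shoes'}},
        {renderURL: 'https://cdn.dsp.example/ads/boots.html', metadata: {category: 'boots'}},
      ],
      userBiddingSignals: {recentEvents: ['viewed-shoes', 'added-boots-to-cart']},
    },
    /*durationSeconds=*/ 30 * 24 * 60 * 60);
```
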
## [Isaac] Lots left on the agenda. Can we have more meetings?

[MK] Yes, let's increase frequency to every week, for now.

**Next call on Wednesday Aug 23**