Incorporating conclusions of recent privacy discussions #1124
Conversation
Thanks! I had a partial draft but your wording is way better. Should this be within a note? I feel like this should be normative text, perhaps with a note including the techniques of "always report true and defer to requestSession" and "when there is no device connected, introduce a random delay before saying no".
I waffled back and forth on that. My thinking for laying it out this way is that the actual algorithm reflects the intended steps fairly strongly, and the text in the note primarily gives reasoning behind the algorithm and some examples of when it may apply (I consider examples to always be non-normative.) I'm absolutely willing to shuffle that, though, if you feel it makes more sense to have some/all of that text be normative. Also, I didn't include text about the random delay due to Nick's comments from #983 highlighting that developers may wait for this method to return prior to loading content, so random delays would be brutal for user experience. That said, if the method waits for a prompt of any type you'd end up with the same effect. 🤷
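As context for the tradeoff discussed above, here is a minimal page-side sketch assuming the standard WebXR entry points (`isSessionSupported()` / `requestSession()`); the button wiring and UI flow are hypothetical:

```typescript
// Page-side feature detection: many pages block UI setup on this promise,
// which is why an artificial random delay would hurt user experience.
async function setUpVrButton(button: HTMLButtonElement): Promise<void> {
  const xr = (navigator as any).xr; // WebXR types may require @types/webxr
  if (!xr) return; // WebXR not exposed at all

  // With the "always report true" mitigation, this resolves quickly and
  // reveals nothing; the real check is deferred to requestSession().
  const supported: boolean = await xr.isSessionSupported("immersive-vr");
  if (!supported) return;

  button.hidden = false;
  button.onclick = async () => {
    try {
      // Called on a user gesture, so a consent prompt is acceptable here.
      const session = await xr.requestSession("immersive-vr");
      // ... hand the session to the render loop ...
      await session.end();
    } catch {
      // Session genuinely unavailable; nothing was leaked passively.
    }
  };
}
```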
I think the list of "for X devices, do Y" should be normative; however, it does not need to be part of the algorithm, just a section below it.
I think it's worth mentioning it, and warning against the negative effects to devs.
Okay, moved some sections out of the non-normative note. Thanks for pointing out offline that we can wrap text spans in note classes!
index.bs
Outdated
1. Let |device| be the result of [=ensure an immersive XR device is selected|ensuring an immersive XR device is selected=].
1. If |device| is null, [=/resolve=] |promise| with <code>false</code> and abort these steps.
1. If |device|'s [=list of supported modes=] does not [=list/contain=] |mode|, [=queue a task=] to [=/resolve=] |promise| with <code>false</code> and abort these steps.
1. The user agent SHOULD ensure [=user intent=] to allow pages to know XR capabilities are available is well understood.
How about: "The user agent SHOULD ensure user intent to allow fingerprinting in cases where `isSessionSupported()` represents a fingerprinting vector, as detailed below."
That's easier to parse, definitely. The nature of the switch statement implies that you'll only hit this case in situations where isSessionSupported() does "represent a fingerprinting vector", so we could possibly trim that wording, but I don't think it's problematic to leave it in either.
I haven't been happy with the way this sentence (my version or yours) doesn't describe the step to be taken when fingerprinting is blocked, though. What would you think about negating the wording to read like this:
If user intent to avoid advertising their system's XR capabilities is well understood, queue a task to resolve `promise` with `false` and abort these steps.
WDYT?
Seems good!
Great! Updated.
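To make the agreed wording concrete, here is a hedged, implementation-side sketch of the revised steps; every helper name is an illustrative stand-in for the spec's internal concepts, not a real API:

```typescript
type XRSessionMode = "inline" | "immersive-vr" | "immersive-ar";

interface XRDeviceStub {
  supportedModes: XRSessionMode[];
}

// Illustrative stand-ins for the spec's internal steps (not real APIs):
declare function ensureImmersiveXRDeviceIsSelected(): Promise<XRDeviceStub | null>;
declare function userIntendsToHideXRCapabilities(): boolean;

// Mirrors the steps quoted from index.bs above, including the negated
// wording agreed on for the fingerprinting step.
async function isSessionSupportedSteps(mode: XRSessionMode): Promise<boolean> {
  const device = await ensureImmersiveXRDeviceIsSelected();
  if (device === null) return false;
  if (!device.supportedModes.includes(mode)) return false;

  // "If user intent to avoid advertising their system's XR capabilities
  //  is well understood, queue a task to resolve promise with false."
  if (userIntendsToHideXRCapabilities()) return false;

  return true;
}
```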
CC: @mounirlamouri, @pes10k to take a look at the proposed spec change re: fingerprinting.
I appreciate the change here, but I don't think this addresses the concern. If I'm understanding the new text correctly, the spec still seems undefined with regards to how to prevent the new APIs from being used by fingerprinters. It's great that the new text more clearly describes when there is fingerprinting risk, but unless I'm missing something, I still don't understand how fingerprinting is being prevented. Like I mentioned on the call, privacy-in-specs approaches that boil down to "it is the responsibility of implementors to make it privacy-respecting" don't have a good success record on the web (despite the sincere and best intentions of spec authors). Specs need to address and mitigate fingerprinting risk added by new features introduced by the spec; just identifying where the new risks are introduced is only half the task. Several approaches were discussed on the previous call that seemed like solutions to the problem. If the WG thinks it'd be helpful, I'd be happy to try and help come up with solutions (though I want to note in advance that while PING can try and help discover solutions, it's ultimately the responsibility of the WG to figure out how to make their new functionality privacy-respecting by default).
I don't see how it's not? We require user intent: This can be a permissions prompt, a setting, a cached permission, or something else. We intentionally do not mandate a specific kind of user intent because there should be flexibility on this. We also have the sentence:
when talking about the devices where the fingerprinting risk is actually extant.
We are not just identifying the fingerprinting risk here, we are very explicitly bucketing types of user agents based on fingerprinting risk and prescribing a solution (ranging from "do nothing" to "always report true").
Howdy @Manishearth, my concerns are the following:

1. Ensuring that "isSessionSupported" fast-track decisions are only made based on information already available to the page. The new text says "If the user agent and system are known to" (and similar). I think I understand what's intended here, but "system" here is re-introducing the privacy risk. Sites can already access the UA, so any FP information that corresponds one-to-one with the UA is a wash. But sites don't generally have access to other types of information about the system. If you mean other information about the device running the browser beyond the UA, that are 1) otherwise always accessible to sites and 2) will always be 1-to-1 with XR availability, can you list them? Otherwise, it would be good to just change this to be the User-Agent string.
2. Similarly, does the above text mean "user agent" to mean broadly the browser, or specifically the value reported in the User-Agent header?
3. Defining privacy protections. More broadly, this is the same concern / point I mentioned on the call; it's not appropriate to be extremely specific about the behaviors that introduce privacy harm, but then unspecified / vague about how to solve the privacy concern. As you say, the spec "require(s) user intent"; that's great! But that's a goal, not a solution itself. It's just as undefined as if the spec said "the browser will let the website know what types of sessions are supported", but with the details left to the implementor. For the same reason it's important to have clearly defined functionality for the developer, it's important to have clearly defined privacy protections for the implementer and browser user! Put differently, look how specific this text is in defining the ways a browser SHOULD NOT / MUST NOT ensure user intent; the spec should be just as specific in defining how user intent should be achieved.
Sites have access to the operating system via the UA string.
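For instance (a trivial illustration; the exact strings vary by browser and platform):

```typescript
// The operating system is already visible in the UA string, so it adds
// no new fingerprinting entropy on its own:
console.log(navigator.userAgent);
// e.g. "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ..."
```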
Uh, it's not undefined, we have an entire section on how user intent may be achieved: https://immersive-web.github.io/webxr/#user-intention . It deliberately allows for different ways to achieve user intent -- no web spec normatively defines one way to do this (not even the permissions spec!) and that's a good thing. Now, some of this could be clearer. The "transient activation" bit obviously should not apply here: we could potentially edit that section to say that "transient activation MUST NOT serve as an indication of user intent in cases where the user intent is required to mitigate fingerprinting risk". It would be helpful if you could point out what needs to be improved here, but as it stands your characterization of user intent as an undefined concept is not correct and thus not actionable. I do feel that some of this should be left up to the implementor. For example, there's a choice between unilaterally reporting a fixed answer and asking for consent.
I'll follow up in more detail later, but just to make sure I understand the above comment: are you saying that 13.2 is a complete list of ways the implementor can determine "user intention"? If so, it's difficult for me to understand the section in that light, as most of the sections seem intentionally not comprehensive (X MAY be a signal of intent, etc.). It would be good to tighten the language here to make it clear that the implementor should choose from one of X options. If 13.2 isn't intended to be a complete list of ways to determine user intent, then I go back to my original claim: that what "user intent" means in the spec is undefined, since that would mean there are things that qualify as determining "user intent" that are not defined, and so it's not possible to assess the privacy implications of the spec yet. If you could help me understand the WG's position here, that would be very helpful and I can follow up with a fuller reply then. Thanks!
This is the case, it is not intended to be a complete list. Again, no specification I know of goes to the extent of defining precisely the ways in which consent may be achieved. No specification (including the permissions specification itself) mandates permission prompts; permission prompts are just one of many tools here. I'm perfectly okay with tighter language defining the scope of user intent, or tighter language in the relevant sections. Would language specifying the scope of user intent be sufficient here? What sort of points would you envision here? Put a different way: what language do you think can resolve this without exhaustively listing consent flows?
A thing I realized on the call today: a change which we had intended to make, but had not made yet, is to tighten up this section and add more MUSTs/etc.
I still plan to do that, but it would be useful to have direction on the granularity of boundaries required.
@pes10k I've made some improvements (last two commits). Brandon may be tightening it up a bit more. What do you think?
I understand your point here, but there is a significant difference. The permission system has a mostly-well-defined meaning on the Web platform. To the best of my understanding "user intent" doesn't have a well understood meaning, which is where my concern about "undefined-ness" comes in. If a spec is introducing a new concept, and then not concretely defining what's meant, it's not possible to understand the privacy properties or impact of the spec. So, while it would be ideal for your spec to be designed to not enable fingerprinting, I appreciate that the WG has decided that's out of bounds. As an alternative, I think it'd be sufficient for the WG to define (directly or implicitly) what it means by user intent in terms of existing platform features, and then user agents can differentiate by how they implement those platform features. This would prevent your spec from having its own unique, but only partially-defined, privacy concept (user intent). Permissions is a good analogy here; specs say permission X is required for feature Y, but not the UX the UA needs to acquire permission X. If WebXR could define user intent in terms of existing platform features (but not necessarily the UX around those existing platform features) I think that'd be useful and maybe a way for us to cut this knot.
I've tried to add some clarification in the latest changes, please take a look
This is not the case, and I do not understand why you would think such a thing after we have been trying to understand your concerns and fix up the spec to prevent this.
We do. We're trying to do exactly this. What part of this section would you like to see improved, and how? It's not clear how to define user intent in terms of existing platform features in a way that does not tie down the UX: if you have concrete suggestions here that would be much appreciated.
I don't understand: this is literally what I've been saying. The permissions spec does not define permissions in terms of existing platform features either. It can't in a way that does not pin down the UX. I'd love to see what kind of text from the permissions spec you would consider sufficient to shore up our definitions of user intent here. I think the fundamental issue is that we do not actually want to unconditionally throw a permissions prompt. You may want to cache the prompt. If the page is an installed PWA, we may not want to show prompts. There is ambiguity here, and the permissions spec has this same ambiguity, but we cannot utilize that ambiguity by directly referencing it in any way that I can see.
Oh, worth calling out: my latest commits also switch the relevant bits over to using "explicit consent" instead of "user intent", which is more narrowly defined.
To step back a bit and clarify our goals here, as I understand them:
Thinking about this more, are you suggesting that we make the "intent to allow fingerprinting" thing a permission that is automatically granted in some cases? That could work, though the automatic granting could be a bit gnarly.
That's not quite my suggestion; I think I did a bad job of conveying it. The goal is to be able to assess the privacy implications of the spec, and the immediate problem is that the spec attempts to remove the introduced fingerprinting surface by both i) introducing a new concept, "user intent", and ii) only partially defining it. My suggestion is that, instead of the above, we do any of the following:
So (back to the quote up top), if you wanted to go with #2 above, you could define new "supports session" permission types, be silent in the spec about what's allowed by default, and leave it to vendors to decide what permissions were automatically granted. That wouldn't be ideal, since it's still depending on implementors to figure out how to implement your spec in a privacy-preserving way, but it would be an improvement over the existing text, since it would at least allow users, developers and implementors to understand the privacy risks in terms of other, well understood platform features.
But, for my 2c, this is the problem. If the idea is "we need to give a way for sites to query advanced, sometimes-very-identifying hardware capabilities without any user intervention", things are already in a very difficult spot. I really recommend the WG figure out a way of enabling the benign use cases it wants to enable that doesn't require passive device capability detection.
We use "user intent" for things other than fingerprinting prevention, so I'd rather not remove the concept, but I think introducing a "supports session" permission that is sometimes autogranted would be good. We could include MAY or non normative text that suggests when it can be autogranted. |
Opened a PR making these changes to this PR: #1136. User intent is no longer mentioned, we have a new permission type for this. I have kept the text explaining when it should be granted automatically.
@pes10k How do the latest changes look? User intent is no longer referenced, and we use a permission, specifying cases where it should be auto-granted.
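A hypothetical sketch of how such a permission could surface through the existing Permissions API; the descriptor name below is illustrative, the actual name and descriptor shape are defined in #1136:

```typescript
// Querying the (illustrative) session-support permission via the standard
// Permissions API; auto-granting is invisible here, it simply shows up as
// "granted" without a prompt.
async function checkXrSupportPermission(): Promise<void> {
  const status = await navigator.permissions.query({
    // Cast needed: this name is not in TypeScript's built-in union.
    name: "xr-session-supported" as PermissionName,
  });
  console.log(status.state); // "granted" | "denied" | "prompt"
}
```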
Howdy @Manishearth! I think this is getting to a really good place. Thank you for the additional go! My remaining (and smaller) concerns are:
Finally, this is a broader question, just to make sure I understand the WG's thinking; is it correct that on a standard, stock desktop browser (or other browser where XR support will vary), a correct implementation of the spec always requires achieving explicit consent whether or not the user has XR hardware installed? From my read of the PR, I believe this is necessary to avoid leaking the same FP bits through timing differences, and I just want to make sure that my understanding matches the group's. Thanks again for your work on this @Manishearth!
This is. Implementations can choose to always resolve this promise to the same value. I've changed it to explicitly call out that the user agent needs to have identical behavior here across all instances if it picks this option.
It is, it is totally valid for an implementor to never autogrant the permission. I've changed it to lowercase and mentioned "may be autogranted based on criteria below", since the intent of that text is mostly to just call out that there are criteria for autogranting.
Done.
I'm afraid we can't; we use user intent more broadly for non-fingerprinting user activation issues elsewhere. This was a bit of a struggle in this PR and is why I moved away from better specifying user intent: it's used for other things in this spec and I didn't want to shut those cases out. I think there is a lot of space for a privacy infra spec that defines reusable terms for things like this.
No, this is incorrect. Alternate correct implementations can be:
Thanks again @Manishearth. Would you mind rebasing the PR so that I can more easily see how it fits into existing text?
Like before, do you mean the bullets in #1124 (comment) to be the full set of options for an implementor? If yes, it would be good to have those be explicitly mentioned in the spec. If no, then we have the same issue as before, that there are unspecified paths through the spec. I think we've landed on "yes" but wanted to make sure.
No. We do not have the same problems as before: the consent situation is now modelled in terms of an existing web platform concept, permissions. Permissions are already allowed to be autogranted at the user agent's discretion (the simplest example of this is caching permissions between pages when enabled; no spec explicitly talks about this). Our spec text is actually constraining autogranting to say that it must not be done in a way that enables fingerprinting. We list some ways this can be achieved. A standard, stock browser will most likely select one of the options I listed or ask for explicit consent. But we do not wish to hinder UX innovation here.
Force-pushed from b0284f0 to c842810.
Rebased.
Would it be possible for us to have a call, just between PING and anyone in the WG who is interested in this PR / issue? The reason I'm asking is because, while I appreciate the additional text, the approach described still has similar issues as before, as I re-read the text in the context of @Manishearth's most recent reply. Specifically, the concerns I see with the existing text are:
FWIW, I sincerely understand the goal of not hindering UX innovation, but I think the correct solution there is to figure out how to make the spec's functionality private by default in a way that doesn't depend on UX innovation. Some imperfect, spaghetti-on-the-wall suggestions (some in tension with each other), but maybe useful:
TL;DR: I'm excited and grateful to see one main issue addressed (i.e. some uncertainty and trickiness in evaluating the spec is addressed by defining things in terms of permissions). But some of that privacy improvement is undone by the spec in some places being very opinionated about when permissions should be auto-granted, and in other places insufficiently opinionated (about how specifically non-IBUAS browsers should and should not complete the isSessionSupported algorithm where fingerprinting risk is high). Again, I appreciate the WG's work on addressing these issues, and I am happy to be as available as I can to try and help find a solution to these problems.
Happy to have a call.
This confuses me, because this seems like a return to what we had proposed originally, with s/user intent/permissions. A lot of the explicitness was introduced in order to address your concerns. I'm happy to accommodate but it does feel like we're going in circles with these changes. I think it's important to talk about auto granting -- we had language to this effect before we made these changes too. We could simply make the section on auto granting be completely non-normative, as it was in the beginning, and remove any of the SHOULDs and MUSTs.
This is unacceptable since UAs may wish to explore routes that preserve privacy but do not require asking explicit permission. We can change it so that IBUAS browsers never hit the permissions request, and non-IBUAS browsers have a permissions check, and they are as usual allowed to auto grant based on the scenario, but I'm not comfortable requiring them to ask explicit permission. How does that sound?
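A hedged sketch of that proposal (all helper names invented for illustration): IBUAS browsers never reach the permission machinery, while non-IBUAS browsers run a permission check that the UA may auto-grant per the spec's criteria:

```typescript
type XRSessionMode = "inline" | "immersive-vr" | "immersive-ar";

// Illustrative helpers, not real APIs:
declare function isIBUASBrowser(): boolean;
declare function answerFromUAStringAlone(mode: XRSessionMode): boolean;
declare function querySessionSupportPermission(mode: XRSessionMode): Promise<boolean>; // may auto-grant
declare function deviceActuallySupports(mode: XRSessionMode): Promise<boolean>;

async function resolveIsSessionSupported(mode: XRSessionMode): Promise<boolean> {
  if (isIBUASBrowser()) {
    // Answer is a pure function of the UA string; no permission involved.
    return answerFromUAStringAlone(mode);
  }
  // Non-IBUAS: permission check first (auto-grantable per the spec text),
  // then the real hardware answer only if the permission is granted.
  if (!(await querySessionSupportPermission(mode))) return false;
  return deviceActuallySupports(mode);
}
```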
@pes10k wrote:
This is a drive-by comment since I'm not specifically involved in this issue, but I think there may be a misunderstanding here about the "indistinguishable by user-agent string" (IBUAS) situation. It's intended for classes of devices such as smartphones where knowing the UA string basically already tells the site about the potential capabilities, since the needed hardware (i.e. accelerometer for VR/AR and camera for AR) is built into the device and is a common characteristic of all devices with that UA string. The device would generally be expected to be capable of smartphone AR, or Cardboard-style split-screen VR, without requiring any additional hardware to be installed. This isn't intended to be a precise answer; after all, the goal is to let a site check if it seems useful to add an "Enter VR" or "Enter AR" button to a page. It's always possible that actually starting a session won't work despite a "is potentially supported" response, and that's by design. The point of the IBUAS algorithm is that the UA must make a choice for the answer that's strictly determined by the information in the UA string, and not by any other characteristics of the device. The UA can use that to filter out broad classes of devices where it's known that immersive sessions are definitely not going to work. Here are some examples of IBUAS algorithms that I'd expect to be compatible with the proposed spec:
However, the following would not be compatible with the proposed spec since they'd expose additional information beyond what can be inferred from the UA string. If the UA would want to incorporate this information, it cannot use the IBUAS algorithm for that.
As a more fundamental question, what do you mean by "I've disabled XR support" in this context, and what would be the goal of doing so? If users selectively override the UA's "can this device support an immersive-ar session" return value, I think that this would increase the fingerprint information, since it would replace an answer that's purely determined from the UA string with one that in addition reflects a user configuration choice, basically telling sites that you've intentionally disabled this even though your device seems capable. And the only effect of this choice would be to prevent a conditional "Enter AR" button from being displayed; it wouldn't affect actually trying to start sessions. If the user's goal is to prevent immersive XR experiences from ever being started, that could be done separately, i.e. by blocking the corresponding session requests. Does this help?
Just want to confirm that @klausw's comment is exactly in line with my thinking on the matter, with one addition: if you are on a device class that supports a particular session type 100% of the time (an Oculus Quest for example: it by definition always supports VR) and you allow the user to explicitly opt out of advertising that support, that's a significantly more fingerprintable state than if we force it to always report support. And to reiterate the point that Klaus stressed above, reporting that a session mode is supported is only a hint, never a guarantee that actually starting a session will succeed.
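To illustrate the IBUAS property described in the two comments above, here is a hedged sketch in which the answer is a pure function of the UA string, never of attached hardware, user settings, or timing; the regexes and policy are invented for illustration:

```typescript
// An "indistinguishable by user-agent string" (IBUAS) answer: every
// instance of the browser with the same UA string returns the same
// result, so the response adds no entropy beyond the UA string itself.
function ibuasIsSupported(uaString: string, mode: string): boolean {
  if (mode === "immersive-ar") {
    // e.g. a mobile browser might advertise smartphone AR for all of its
    // Android builds, since the needed sensors ship with the device class.
    return /Android/.test(uaString);
  }
  // e.g. a desktop build that never ships immersive support without extra
  // hardware must answer false here unconditionally, not probe devices.
  return false;
}
```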
Do any of these timeslots work for you?
(Monday doesn't work for me)
Manish and I, in talking about where this text is at right now, feel that it's enough of an improvement, and moves us close enough to the desired end state, that we're comfortable merging it now. We still want to have the followup conversation with @pes10k and are more than happy to continue iterating on the language, but there doesn't seem to be much reason to hold this longstanding PR open while we do so.
Sure, sounds good. For a call, of the times mentioned above, these would work for me:
I appreciate all of your time here, and I hope we can wrap up the few remaining issues with one last chat.
All three of these work for me (unsure about Brandon), so I'll let @AdaRoseCannon pick the particular slot.
Hi all, just following up here to find a time so that we can try and resolve the remaining concerns / issues. Is there a time that would work for y'all?
Can you provide some times this week that could work? I'm mostly free, and Brandon and I are both in PST.
Is there a time Friday afternoon PST that would work? My time is pretty flexible then.
Any time after 11AM PST works for me on Friday. @toji?
I apologize for the delayed response. I am available today any time after 11AM PST as well, but I understand if this is too late notice. Otherwise any time next week Wed.-Fri. after 11AM works for me as well.
How's about Wednesday Nov 18, at 1pm PST?
Perfect, works for me. I'll send an invite.
(Hopefully) Fixes #983
Tried to capture some of the conclusions we came to on the most recent call regarding fingerprinting here. I suspect it'll need some iteration, and I'm very happy to take feedback! The high level points I'm trying to communicate: