
Commit c6831e8
Updated the explainer
1 parent 33cf3b3 commit c6831e8

2 files changed: +11 −60 lines

explainer.md

Lines changed: 10 additions & 59 deletions
@@ -365,41 +365,7 @@ function checkARSupport() {
 
 The UA may choose to present the immersive AR session's content via any type of display, including dedicated XR hardware (for devices like HoloLens or Magic Leap) or 2D screens (for APIs like [ARKit](https://developer.apple.com/arkit/) and [ARCore](https://developers.google.com/ar/)). In all cases the session takes exclusive control of the display, hiding the rest of the page if necessary. On a phone screen, for example, this would mean that the session's content should be displayed in a mode that is distinct from standard page viewing, similar to the transition that happens when invoking the `requestFullscreen` API. The UA must also provide a way of exiting that mode and returning to the normal view of the page, at which point the immersive AR session must end.
 
-## Rendering to the Page
-
-There are a couple of scenarios in which developers may want to present content rendered with the WebXR Device API on the page instead of (or in addition to) a headset: mirroring and inline rendering. Both methods display WebXR content on the page via a Canvas element with an `XRPresentationContext`. Like a `WebGLRenderingContext`, developers acquire an `XRPresentationContext` by calling the `HTMLCanvasElement` or `OffscreenCanvas` `getContext()` method with the context id of "xrpresent". The returned `XRPresentationContext` is permanently bound to the canvas.
-
-An `XRPresentationContext` can only be supplied imagery by an `XRSession`, though the exact behavior depends on the scenario in which it's being used. The context is associated with a session by setting the `XRRenderState`'s `outputContext` to the desired `XRPresentationContext` object. An `XRPresentationContext` cannot be used with multiple `XRSession`s simultaneously, so when an `XRPresentationContext` is set as the `outputContext` for a session's `XRRenderState`, any session it was previously associated with will have its `renderState.outputContext` set to `null`.
-
-### Mirroring
-
-On desktop devices, or any device which has an external display connected to it, it's frequently desirable to show what the user in the headset is seeing on the external display. This is usually referred to as mirroring.
-
-In order to mirror WebXR content to the page, the session's `renderState.outputContext` must be set to an `XRPresentationContext`. Once a valid `outputContext` has been set, any content displayed on the headset will be mirrored into the canvas associated with the `outputContext`.
-
-When mirroring, only one eye's content will be shown, and it should be shown without any distortion to correct for headset optics. The UA may choose to crop the image shown, display it at a lower resolution than originally rendered, and the mirror may be multiple frames behind the image shown in the headset. The mirror may include or exclude elements added by the underlying XR system (such as visualizations of room boundaries) at the UA's discretion. Pages should not rely on a particular timing or presentation of mirrored content; it exists primarily for the benefit of bystanders or demo operators.
-
-The UA may also choose to ignore the `outputContext` on systems where mirroring is inappropriate, such as devices without an external display like mobile or all-in-one systems.
-
-```js
-function beginXRSession() {
-  let mirrorCanvas = document.createElement('canvas');
-  let mirrorCtx = mirrorCanvas.getContext('xrpresent');
-  document.body.appendChild(mirrorCanvas);
-
-  navigator.xr.requestSession('immersive-vr')
-      .then((session) => {
-        // A mirror context isn't required to render, so it's not necessary to
-        // wait for the updateRenderState promise to resolve before continuing.
-        // It may mean that a frame is rendered which is not mirrored.
-        session.updateRenderState({ outputContext: mirrorCtx });
-        onSessionStarted(session);
-      })
-      .catch((reason) => { console.log("requestSession failed: " + reason); });
-}
-```
-
-### Inline sessions
+## Inline sessions
 
 There are several scenarios where it's beneficial to render a scene whose view is controlled by device tracking within a 2D page. For example:
 
@@ -411,31 +377,25 @@ These scenarios can make use of inline sessions to render tracked content to the
 
 The [`RelativeOrientationSensor`](https://w3c.github.io/orientation-sensor/#relativeorientationsensor) and [`AbsoluteOrientationSensor`](https://w3c.github.io/orientation-sensor/#absoluteorientationsensor) interfaces (see [Motion Sensors Explainer](https://w3c.github.io/motion-sensors/)) can be used to polyfill the first case.
 
-Similar to mirroring, to make use of this mode the `XRRenderState`'s `outputContext` must be set. At that point content rendered to the `XRRenderState`'s `baseLayer` will be rendered to the canvas associated with the `outputContext`. The UA is also allowed to composite in additional content if desired. (In the future, if multiple `XRLayers` are used their composited result will be what is displayed in the `outputContext`.)
+To make use of this mode an `XRWebGLLayer` must be created with the `useDefaultFramebuffer` option set to `true`. This instructs the layer to not allocate a new WebGL framebuffer but instead set the `framebuffer` attribute to `null`. That way, when `framebuffer` is bound, all WebGL commands will naturally execute against the WebGL context's default framebuffer and display on the page like any other WebGL content. When that layer is set as the `XRRenderState`'s `baseLayer` the inline session is able to render its output to the page.
 
-Immersive and inline sessions can use the same render loop, but there are some differences in behavior to be aware of. Most importantly, inline sessions will not pump their render loop if they do not have a valid `outputContext`. Instead the session acts as though it has been [suspended](#handling-suspended-sessions) until a valid `outputContext` has been assigned.
+Immersive and inline sessions can use the same render loop, but there are some differences in behavior to be aware of. Most importantly, inline sessions will not pump their render loop if they do not have a `baseLayer` with `useDefaultFramebuffer` set. (This restriction may be lifted in the future to enable more advanced effects.) Instead the session acts as though it has been [suspended](#handling-suspended-sessions) until a valid `baseLayer` has been assigned.
 
 Immersive and inline sessions may run their render loops at different rates. During immersive sessions the UA runs the rendering loop at the XR device's native refresh rate. During inline sessions the UA runs the rendering loop at the refresh rate of the page (aligned with `window.requestAnimationFrame`). The method of computing `XRView` projection and view matrices also differs between immersive and inline sessions, with inline sessions taking into account the output canvas dimensions and possibly the position of the user's head in relation to the canvas if that can be determined.
 
-Most instances of inline sessions will only provide a single `XRView` to be rendered, but the UA may request multiple views be rendered if, for example, it's detected that the output medium of the page supports stereo rendering. As a result pages should always draw every `XRView` provided by the `XRFrame` regardless of what type of session has been requested.
-
 UAs may have different restrictions on inline sessions that don't apply to immersive sessions. For instance, the UA does not have to guarantee the availability of tracking data to inline sessions, and even when it does a different set of `XRReferenceSpace` types may be available to inline sessions versus immersive sessions.
 
 ```js
-let inlineCanvas = document.createElement('canvas');
-let inlineCtx = inlineCanvas.getContext('xrpresent');
-document.body.appendChild(inlineCanvas);
-
 function beginInlineXRSession() {
   // Request an inline session in order to render to the page.
   navigator.xr.requestSession('inline')
       .then((session) => {
-        // Inline sessions must have an output context prior to rendering, so
-        // it's a good idea to wait until the outputContext is confirmed to have
-        // taken effect before rendering.
-        session.updateRenderState({ outputContext: inlineCtx }).then(() => {
-          onSessionStarted(session);
-        });
+        // Inline sessions must have an appropriately constructed WebGL layer
+        // set as the baseLayer prior to rendering. (This code assumes the WebGL
+        // context has already been made XR compatible.)
+        let glLayer = new XRWebGLLayer(session, gl, { useDefaultFramebuffer: true });
+        session.updateRenderState({ baseLayer: glLayer });
+        onSessionStarted(session);
       })
       .catch((reason) => { console.log("requestSession failed: " + reason); });
 }
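
For context, the shared render loop the changed paragraphs describe might look roughly like the following sketch (not part of this diff; `gl`, `xrReferenceSpace`, and the per-view `drawView` helper are assumed to exist on the page):

```js
function onSessionStarted(session) {
  // Works for both immersive and inline sessions; the UA determines the
  // callback rate (device refresh rate vs. window.requestAnimationFrame).
  session.requestAnimationFrame(onXRFrame);
}

function onXRFrame(time, frame) {
  let session = frame.session;
  session.requestAnimationFrame(onXRFrame);

  let pose = frame.getViewerPose(xrReferenceSpace);
  if (pose) {
    let glLayer = session.renderState.baseLayer;
    // For an inline layer created with useDefaultFramebuffer this binds
    // null, i.e. the context's default framebuffer shown on the page.
    gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    // Draw every view the frame provides, regardless of session type.
    for (let view of pose.views) {
      let viewport = glLayer.getViewport(view);
      gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
      drawView(view);  // hypothetical per-view scene drawing helper
    }
  }
}
```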
@@ -543,7 +503,7 @@ function drawScene() {
 
 Whenever possible the matrices given by `XRView`'s `projectionMatrix` attribute should make use of physical properties, such as the headset optics or camera lens, to determine the field of view to use. Most inline content, however, won't have any physically based values from which to infer a field of view. In order to provide a unified render pipeline for inline content an arbitrary field of view must be selected.
 
-By default a vertical field of view of 0.5π radians (90 degrees) is used for inline sessions. The horizontal field of view can be computed from the vertical field of view based on the width/height ratio of the `outputContext`'s canvas.
+By default a vertical field of view of 0.5π radians (90 degrees) is used for inline sessions. The horizontal field of view can be computed from the vertical field of view based on the width/height ratio of the `XRWebGLLayer`'s associated canvas.
 
 If a different default field of view is desired, it can be specified by passing a new `inlineVerticalFieldOfView` value, in radians, to the `updateRenderState` method:
 
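
For reference, deriving the horizontal field of view from the vertical one and the canvas aspect ratio follows the standard perspective-projection relationship; a small sketch (not part of this diff):

```js
// Given a vertical field of view in radians and the canvas aspect ratio
// (width / height), the standard perspective relationship gives:
function horizontalFieldOfView(verticalFov, aspectRatio) {
  return 2 * Math.atan(Math.tan(verticalFov / 2) * aspectRatio);
}

// Requesting a narrower 60 degree (π/3 radian) vertical field of view:
session.updateRenderState({ inlineVerticalFieldOfView: Math.PI / 3 });
```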
@@ -647,15 +607,13 @@ dictionary XRRenderStateInit {
   double depthFar;
   double inlineVerticalFieldOfView;
   XRLayer? baseLayer;
-  XRPresentationContext? outputContext;
 };
 
 [SecureContext, Exposed=Window] interface XRRenderState {
   readonly attribute double depthNear;
   readonly attribute double depthFar;
   readonly attribute double? inlineVerticalFieldOfView;
   readonly attribute XRLayer? baseLayer;
-  readonly attribute XRPresentationContext? outputContext;
 };
 
 //
@@ -750,11 +708,4 @@ partial dictionary WebGLContextAttributes {
 partial interface WebGLRenderingContextBase {
   Promise<void> makeXRCompatible();
 };
-
-//
-// RenderingContext
-//
-[SecureContext, Exposed=Window] interface XRPresentationContext {
-  readonly attribute HTMLCanvasElement canvas;
-};
 ```
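
Taken together, the new inline path this diff describes looks roughly like the sketch below (not part of this diff; it assumes the `WebGLContextAttributes` member referenced above is named `xrCompatible`, and `onSessionStarted` is the page's own session setup):

```js
let canvas = document.createElement('canvas');
document.body.appendChild(canvas);

// Creating the context as XR compatible up front avoids a separate
// makeXRCompatible() call later.
let gl = canvas.getContext('webgl', { xrCompatible: true });

navigator.xr.requestSession('inline')
    .then((session) => {
      // Render to the context's default framebuffer, i.e. the page itself.
      let glLayer = new XRWebGLLayer(session, gl, { useDefaultFramebuffer: true });
      session.updateRenderState({ baseLayer: glLayer });
      onSessionStarted(session);
    })
    .catch((reason) => { console.log("requestSession failed: " + reason); });
```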

input-explainer.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ Tracked pointers are input sources able to be tracked separately from the viewer
 #### Screen
 Screen based input is driven by mouse and touch interactions on a 2D screen that are then translated into a 3D targeting ray. The targeting ray originates at the interacted point on the screen as mapped into the input `XRSpace` and extends out into the scene along a line from the screen's viewer pose position through that point. The specific mapped depth of the origin point depends on the user agent. It SHOULD correspond to the actual 3D position of the point on the screen where available, but MAY also be projected onto the closest clipping plane (defined by the smaller of the `depthNear` and `depthFar` attributes of the `XRSession`) if the actual screen placement is not known.
 
-To accomplish this, pointer events over the relevant screen regions are monitored and temporary input sources are generated in response to allow unified input handling. For inline sessions with an `outputContext`, the monitored region is the `outputContext`'s canvas. For immersive sessions (e.g. hand-held AR), the entire screen is monitored.
+To accomplish this, pointer events over the relevant screen regions are monitored and temporary input sources are generated in response to allow unified input handling. For inline sessions the monitored region is the canvas associated with the `baseLayer`. For immersive sessions (e.g. hand-held AR), the entire screen is monitored.
 
 ### Selection styles
 In addition to a targeting ray, all input sources provide a mechanism for the user to perform a "select" action. This user intent is communicated to developers through events which are discussed in detail in the [Input events](#input-events) section. The physical action which triggers this selection will differ based on the input type. For example (though this is hardly conclusive):
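
On the application side these transient screen input sources are handled like any other; a sketch (not part of this diff; assumes `'screen'` is the `targetRayMode` reported for screen input and that `xrReferenceSpace` exists):

```js
session.addEventListener('select', (event) => {
  let inputSource = event.inputSource;
  if (inputSource.targetRayMode === 'screen') {
    // The targeting ray described above, resolved for this frame.
    let rayPose = event.frame.getPose(inputSource.targetRaySpace, xrReferenceSpace);
    if (rayPose) {
      handleScreenSelect(rayPose);  // hypothetical app-level handler
    }
  }
});
```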
