There are XR use cases (e.g., "audio AR") that could build on poses and other capabilities exposed by core WebXR (and future extensions). The current spec language, however, appears to assume visual devices. The superficial issues can probably be addressed with minor rewording, though there may be more complex issues as well.
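For concreteness, here is a rough sketch (not a proposal) of what an audio-only consumer of WebXR poses might look like, assuming the session and reference-space machinery could be used without presenting any imagery. The `startAudioAR` name is a placeholder, the `"immersive-vr"` mode string is used purely for illustration, and WebXR typings (e.g. `@types/webxr`) are assumed:

```ts
// Hypothetical audio-only use of WebXR: the viewer pose drives the Web Audio
// listener so that spatialized sound sources stay anchored in world space.
// No imagery is ever rendered; only pose tracking is consumed.
async function startAudioAR(): Promise<void> {
  // Placeholder mode string; the point is only that a session is acquired.
  const session = await navigator.xr!.requestSession("immersive-vr");
  const refSpace = await session.requestReferenceSpace("local");

  const audioCtx = new AudioContext();
  const listener = audioCtx.listener;

  const onFrame = (time: number, frame: XRFrame): void => {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      const { position } = pose.transform;
      // Position the Web Audio listener at the viewer's tracked position.
      listener.positionX.value = position.x;
      listener.positionY.value = position.y;
      listener.positionZ.value = position.z;
      // Listener orientation would be derived from the pose quaternion here
      // (omitted for brevity).
    }
    session.requestAnimationFrame(onFrame);
  };
  session.requestAnimationFrame(onFrame);
}
```

Nothing in this loop needs a display, yet the surrounding definitions describe the session in terms of presenting imagery.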
Some of the most obvious examples revolve around the word "imagery":
- "An XR device is a physical unit of hardware that can present imagery to the user."
- "Once a session has been successfully acquired it can be used to... present imagery to the user."
- "A state of visible indicates that imagery rendered by the XRSession can be seen by the user…"
More complex issues might include the XR Compositor, assumptions about XRWebGLLayer, and the definition of and/or assumptions about XRView.
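For contrast, here is a minimal sketch of the setup the current definitions assume, where rendering is funneled through an XRWebGLLayer and per-XRView drawing. The `startVisualSession` and `drawScene` names are illustrative only, and the GL context is assumed to already be XR compatible:

```ts
// The coupling points to visual hardware: a baseLayer backed by a WebGL
// context, and a frame loop defined in terms of XRViews to draw.
async function startVisualSession(gl: WebGLRenderingContext): Promise<void> {
  const session = await navigator.xr!.requestSession("immersive-vr");

  // A non-visual device has nothing meaningful to supply here.
  session.updateRenderState({
    baseLayer: new XRWebGLLayer(session, gl),
  });

  const refSpace = await session.requestReferenceSpace("local");

  session.requestAnimationFrame(function onFrame(time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      for (const view of pose.views) {
        // Each XRView corresponds to an eye/display surface, a concept that
        // does not map onto a purely auditory device.
        // drawScene(view); // rendering omitted
      }
    }
    session.requestAnimationFrame(onFrame);
  });
}
```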
While AR is out of scope for the first version of the core spec, it would be nice if the definitions weren’t technically incompatible with such use cases and form factors.