Description
Hello,
I would like to question the requirement for a WebGL layer in a WebXR session. Requiring WebGL assumes the application will have some kind of visual rendering, but there are plenty of examples where XR does not require visual rendering at all. Audio and visuals are currently the most mature XR modalities, but touch, taste, and smell will soon become robust enough to warrant XR sessions of their own.
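For context, here is a minimal sketch of the WebGL-coupled flow being questioned, assuming WebXR type definitions (e.g. @types/webxr) are available; the `canvas` parameter and function name are illustrative, not from any spec text.

```ts
// Sketch of the current WebGL-coupled session setup.
async function startVisualSession(canvas: HTMLCanvasElement) {
  const gl = canvas.getContext('webgl', { xrCompatible: true });
  if (!navigator.xr || !gl) return;

  const session = await navigator.xr.requestSession('immersive-vr');

  // Presenting anything requires a WebGL-backed base layer --
  // this is the coupling being questioned.
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  const refSpace = await session.requestReferenceSpace('local');
  session.requestAnimationFrame(function onFrame(time, frame) {
    // Pose/tracking data is delivered inside this frame loop,
    // which is still wired to the WebGL layer above.
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // ...drive audio, haptics, etc. from the pose here...
    }
    session.requestAnimationFrame(onFrame);
  });
}
```

A purely nonvisual experience still has to create a WebGL context and layer just to get at the pose data.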
This topic and this topic in particular give examples of nonvisual use cases for XR.
I would like to see a third mode added alongside immersive and inline. This mode would only give access to the sensors and would perform world tracking and everything else WebXR does that is not related to WebGL. It would serve as a universal base mode for all nonvisual experiences, and could also be used by applications that want to build their own visual renderer, such as one based on WebGPU, rather than using the rendering tools in the other XR modes. A rough sketch of what requesting such a mode might look like follows.
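This is a hypothetical sketch only: the mode string 'immersive-sensory' and the idea of a session with no base layer are not part of the current spec; they just illustrate the proposed sensor-only mode.

```ts
// Hypothetical: 'immersive-sensory' is NOT a real WebXR session mode;
// it stands in for the proposed third, non-visual mode.
async function startNonVisualSession() {
  if (!navigator.xr) return;

  const mode = 'immersive-sensory' as unknown as XRSessionMode; // hypothetical
  const session = await navigator.xr.requestSession(mode);

  // No WebGL context and no baseLayer: the session exposes sensors
  // and world tracking only.
  const refSpace = await session.requestReferenceSpace('local');
  session.requestAnimationFrame(function onFrame(time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // Feed poses into audio spatialization, haptics, or a custom
      // renderer (e.g. WebGPU) instead of an XRWebGLLayer.
    }
    session.requestAnimationFrame(onFrame);
  });
}
```

The point is that everything above except the mode string already exists in the API; the proposal is simply to allow the frame loop and reference spaces without a WebGL layer attached.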