1. Introduction
2. Initialization
2.1. Feature descriptor
In order for the applications to signal their interest in using mesh detection during a session, the session must be requested with appropriate feature descriptor. The string mesh-detection is introduced by this module as a new valid feature descriptor for mesh detection feature.
A device is capable of supporting the mesh-detection feature if the device’s tracking system exposes a native mesh detection capability. The inline XR device MUST NOT be treated as capable of supporting the mesh-detection feature.
When a session is created with mesh-detection feature enabled, the update meshes algorithm MUST be added to the list of frame updates of that session.
const session = await navigator.xr.requestSession("immersive-ar", {
  requiredFeatures: ["mesh-detection"]
});
3. Meshes
3.1. XRMesh
[Exposed=Window] interface XRMesh {
  [SameObject] readonly attribute XRSpace meshSpace;
  readonly attribute FrozenArray<Float32Array> vertices;
  readonly attribute Uint32Array indices;
  readonly attribute DOMHighResTimeStamp lastChangedTime;
  readonly attribute DOMString? semanticLabel;
};
An XRMesh represents a single instance of 3D geometry detected by the underlying XR system.

The meshSpace is an XRSpace that establishes the coordinate system of the mesh. The native origin of the meshSpace tracks the mesh's center. The underlying XR system defines the exact meaning of the mesh center. The Y axis of the coordinate system defined by meshSpace MUST represent the mesh's normal vector.
Each XRMesh has an associated native entity.

Each XRMesh has an associated frame.
The vertices attribute is an array of vertices that describe the shape of the mesh. They are expressed in the coordinate system defined by meshSpace.

The indices attribute is an array of indices into vertices that describes how the mesh's vertices are connected.

The lastChangedTime is the last time some of the mesh attributes have been changed.
Note: The pose of a mesh is not considered a mesh attribute, and therefore updates to the mesh pose will not cause the lastChangedTime to change. This is because the mesh pose is a property derived from two different entities - the meshSpace and the XRSpace relative to which the pose is computed via the getPose() function.
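As an illustration of how an application might consume lastChangedTime, the sketch below (not part of this specification; plain objects stand in for XRMesh instances, and the helper name is invented for this example) caches the time at which a mesh's geometry was last uploaded and re-uploads only when the mesh reports a newer change:

```javascript
// Illustrative sketch: decide whether a mesh's geometry buffers need to be
// rebuilt, based on lastChangedTime. `cache` maps a mesh to the
// lastChangedTime observed when its buffers were last rebuilt.
function meshNeedsUpdate(mesh, cache) {
  const uploadedAt = cache.get(mesh);
  return uploadedAt === undefined || uploadedAt < mesh.lastChangedTime;
}

// Plain objects stand in for XRMesh instances here.
const cache = new Map();
const mesh = { lastChangedTime: 100 };

console.log(meshNeedsUpdate(mesh, cache)); // never uploaded -> true
cache.set(mesh, mesh.lastChangedTime);
console.log(meshNeedsUpdate(mesh, cache)); // up to date -> false
mesh.lastChangedTime = 250;                // geometry changed
console.log(meshNeedsUpdate(mesh, cache)); // stale again -> true
```

Note that, per the note above, this check only detects attribute changes; pose changes must be read every frame via getPose().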
4. Obtaining detected meshes
4.1. XRMeshSet
[Exposed=Window] interface XRMeshSet {
  readonly setlike<XRMesh>;
};
An XRMeshSet is a collection of XRMeshes. It is the primary mechanism of obtaining the collection of meshes detected in an XRFrame.
partial interface XRFrame {
  readonly attribute XRMeshSet detectedMeshes;
};
XRFrame is extended to contain a detectedMeshes attribute which contains all meshes that are still tracked in the frame. The set is initially empty and will be populated by the update meshes algorithm. If this attribute is accessed when the frame is not active, the user agent MUST throw an InvalidStateError.
XRSession is also extended to contain an associated set of tracked meshes, which is initially empty. The elements of the set will be of XRMesh type.
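Because detectedMeshes is replaced each frame while XRMesh objects keep their identity across frames, an application can discover newly added and removed meshes by diffing consecutive sets. The sketch below is illustrative only (the helper name is invented, and plain objects stand in for XRMesh instances):

```javascript
// Illustrative sketch: compute which meshes appeared and disappeared
// between two frames' detectedMeshes sets. Object identity is the key,
// since XRMesh objects are stable across frames while still tracked.
function diffMeshSets(previous, current) {
  const added = [...current].filter(m => !previous.has(m));
  const removed = [...previous].filter(m => !current.has(m));
  return { added, removed };
}

const meshA = {}, meshB = {}, meshC = {}; // stand-ins for XRMesh objects
const previousFrame = new Set([meshA, meshB]);
const currentFrame = new Set([meshB, meshC]);

const { added, removed } = diffMeshSets(previousFrame, currentFrame);
console.log(added.length, removed.length); // 1 1
```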
In order to update meshes for an XRFrame frame, the user agent MUST run the following steps:

1. Let session be frame's session.
2. Let device be session's XR device.
3. Let trackedMeshes be the result of calling into device's native mesh detection capability to obtain tracked meshes at frame's time.
4. For each native mesh in trackedMeshes, run:
   1. If desired, treat native mesh as if it were not present in trackedMeshes and continue to the next entry. See § 6 Privacy & Security Considerations for criteria that could be used to determine whether an entry should be ignored in this way.
   2. If session's set of tracked meshes contains an object mesh that corresponds to native mesh, invoke the update mesh object algorithm with mesh, native mesh, and frame, and continue to the next entry.
   3. Let mesh be the result of invoking the create mesh object algorithm with native mesh and frame.
   4. Add mesh to session's set of tracked meshes.
5. Remove each object in session's set of tracked meshes that was neither created nor updated during the invocation of this algorithm.
6. Set frame's detectedMeshes to session's set of tracked meshes.
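The steps above can be modeled in plain JavaScript. In this illustrative model (not part of the specification), a Map plays the role of the session's set of tracked meshes and records which XRMesh-like wrapper corresponds to which native mesh; the function name and wrapper shape are assumptions of this sketch:

```javascript
// Illustrative model of the update meshes algorithm. `trackedMeshes` maps
// a native mesh (any object identity) to its XRMesh-like wrapper.
function updateMeshes(trackedMeshes, nativeMeshes, time) {
  const seen = new Set();
  for (const nativeMesh of nativeMeshes) {
    let mesh = trackedMeshes.get(nativeMesh);
    if (mesh === undefined) {
      mesh = { nativeEntity: nativeMesh };   // "create mesh object"
      trackedMeshes.set(nativeMesh, mesh);
    }
    mesh.lastChangedTime = time;             // "update mesh object"
    seen.add(nativeMesh);
  }
  // Remove wrappers that were neither created nor updated in this pass.
  for (const nativeMesh of [...trackedMeshes.keys()]) {
    if (!seen.has(nativeMesh)) trackedMeshes.delete(nativeMesh);
  }
  return new Set(trackedMeshes.values());    // the frame's detectedMeshes
}

const tracked = new Map();                   // session's tracked meshes
const n1 = {}, n2 = {};                      // stand-ins for native meshes
updateMeshes(tracked, [n1, n2], 16.6);       // both meshes created
const detected = updateMeshes(tracked, [n2], 33.3); // n1 no longer tracked
console.log(detected.size, tracked.has(n1)); // 1 false
```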
In order to create a mesh object from a native mesh object native mesh and an XRFrame frame, the user agent MUST run the following steps:

1. Let result be a new instance of XRMesh.
2. Set result's native entity to native mesh.
3. Set result's meshSpace to a new XRSpace object created with session set to frame's session and native origin set to track native mesh's native origin.
4. Invoke the update mesh object algorithm with result, native mesh, and frame.
5. Return result.

A mesh object result created in such a way is said to correspond to the passed-in native mesh object native mesh.
In order to update a mesh object mesh based on a native mesh object native mesh and an XRFrame frame, the user agent MUST run the following steps:

1. Set mesh's frame to frame.
2. Set mesh's vertices to a new array of vertices representing native mesh's vertices, performing all necessary conversions to account for differences in native mesh representation.
3. Set mesh's indices to a new array of indices representing native mesh's indices, performing all necessary conversions to account for differences in native mesh representation.
4. If desired, reduce the level of detail of the mesh's vertices and indices as described in § 6 Privacy & Security Considerations.
5. Set mesh's lastChangedTime to frame's time.
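As an illustration of the conversion steps, the sketch below turns hypothetical native geometry (plain number arrays) into the shapes XRMesh exposes - a sequence of Float32Arrays for vertices and a Uint32Array of indices. The per-vertex grouping and the helper name are assumptions of this example, not spec requirements:

```javascript
// Illustrative sketch: convert native geometry into the typed arrays an
// XRMesh exposes. Vertices are grouped here as one Float32Array per
// vertex (x, y, z); this grouping is an assumption of the example.
function convertNativeGeometry(nativeVertices, nativeIndices) {
  const vertices = [];
  for (let i = 0; i < nativeVertices.length; i += 3) {
    vertices.push(Float32Array.from(nativeVertices.slice(i, i + 3)));
  }
  return { vertices, indices: Uint32Array.from(nativeIndices) };
}

// One triangle: three vertices (x, y, z each), three indices.
const { vertices, indices } = convertNativeGeometry(
  [0, 0, 0,  1, 0, 0,  0, 1, 0],
  [0, 1, 2]
);
console.log(vertices.length, indices.length); // 3 3
```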
5. Native device concepts
5.1. Native mesh detection
The mesh detection API provides information about 3D surfaces detected in the users' environment. It is assumed in this specification that user agents can rely on native mesh detection capabilities provided by the underlying platform for their implementation of mesh-detection features. Specifically, the underlying XR device should provide a way to query all meshes that are tracked at a time that corresponds to the time of a specific XRFrame.
Moreover, it is assumed that the tracked meshes, known as native mesh objects, maintain their identity across frames - that is, given a mesh object P returned by the underlying system at time t0, and a mesh object Q returned by the underlying system at time t1, it is possible for the user agent to query the underlying system about whether P and Q correspond to the same logical mesh object. The underlying system is also expected to provide a native origin that can be used to query the location of a pose at time t, although it is not guaranteed that the mesh pose will always be known (for example, for meshes that are still tracked but not localizable at a given time). In addition, the native mesh object should expose a polygon describing the approximate shape of the detected mesh.
In addition, the underlying system should recognize native meshes as native entities for the purposes of XRAnchor creation. For more information, see WebXR Anchors Module § native-anchor section.
6. Privacy & Security Considerations
The mesh detection API exposes information about the users' physical environment. The exposed mesh information (such as a mesh's polygon) may be limited if the user agent so chooses. Some of the ways in which the user agent can reduce the exposed information are: decreasing the level of detail of the mesh's polygon in the update mesh object algorithm (for example by decreasing the number of vertices, or by rounding / quantizing the coordinates of the vertices), or removing the mesh altogether by behaving as if the mesh object was not present in the trackedMeshes collection in the update meshes algorithm (this could be done, for example, if the detected mesh is deemed too small / too detailed to be surfaced and the mechanisms to reduce details exposed on meshes are not implemented by the user agent). The poses of the meshes (obtainable from meshSpace) could also be quantized.
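As an illustration of the quantization mitigation mentioned above, a user agent could snap vertex coordinates to a fixed grid before exposing them. The helper name and the 5 cm step size below are arbitrary choices for this sketch, not requirements of the specification:

```javascript
// Illustrative sketch: quantize vertex coordinates (in meters) to a fixed
// grid, reducing the precision of environment data exposed to a page.
function quantizeVertices(vertices, step = 0.05) {
  return vertices.map(v => Math.round(v / step) * step);
}

const quantized = quantizeVertices([0.123, 0.449, 1.001]);
console.log(quantized); // each coordinate snapped to the 5 cm grid
```

A similar rounding could be applied to the translation component of poses derived from meshSpace.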
Since concepts from mesh detection API can be used in methods exposed by [webxr-anchors-module] specification, some of the privacy & security considerations that are relevant to WebXR Anchors Module also apply here. For details, see WebXR Anchors Module § privacy-security section.
Due to how mesh detection API extends WebXR Device API, the section WebXR Device API § 13. Security, Privacy, and Comfort Considerations is also applicable to the features exposed by the WebXR Mesh Detection Module.
7. Acknowledgements
The following individuals have contributed to the design of the WebXR Mesh Detection specification: