WebXR/WebGPU Binding Module - Level 1

Editor’s Draft,

This version:
https://immersive-web.github.io/webxr-webgpu-binding/
Latest published version:
https://www.w3.org/TR/webxr-webgpu-binding-1/
Previous Versions:
Feedback:
GitHub
Editor:
(Google)
Unstable API

The API represented in this document is under development and may change at any time.

For additional context on the use of this API please reference the WebXR/WebGPU Binding Module Explainer.


Abstract

This specification describes support for rendering content for a WebXR session with WebGPU.

Status of this document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document was published by the Immersive Web Working Group as an Editors' Draft. This document is intended to become a W3C Recommendation. Feedback and comments on this specification are welcome. Please use GitHub issues. Discussions may also be found in the public-immersive-web-wg@w3.org archives.

Publication as an Editors' Draft does not imply endorsement by W3C and its Members. This is a draft document and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as other than a work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent that the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 18 August 2025 W3C Process Document.

1. Introduction

This specification describes a mechanism for rendering WebXR content using WebGPU, instead of WebGL.

It adds support for creation of XRCompositionLayers, as described in the WebXR Layers API, which are rendered using the WebGPU API.

WebGPU is an API for using the graphics and compute capabilities of a device’s GPU more efficiently than WebGL allows, with an interface that better matches both GPU hardware architecture and the modern native APIs that drive it, such as Vulkan, Direct3D 12, and Metal.

1.1. Terminology

This specification uses the terms XR device, XRSession, XRFrame, XRView, and feature descriptor as defined in the WebXR Device API specification.

It uses the terms XRCompositionLayer, XRProjectionLayer, XRQuadLayer, XRCylinderLayer, XREquirectLayer, XRCubeLayer, XRSubImage, and XRWebGLBinding as defined in the WebXR Layers API specification.

It uses the terms GPUDevice, GPUAdapter, GPUTexture, GPUTextureViewDescriptor, GPUTextureFormat, and GPUTextureUsageFlags as defined in the WebGPU specification.

1.2. Application flow

If an author wants to use WebGPU to render content for a WebXR session, they must perform the following steps:

In no particular order:

  1. Create a GPUDevice from a GPUAdapter which was requested with the xrCompatible option set to true.

  2. Create an XRSession with the webgpu feature.

Then:

  1. Create an XRGPUBinding with both the XR-compatible GPUDevice and WebGPU-compatible session.

  2. Create one or more XRCompositionLayers with the XRGPUBinding.

  3. Add the layers to XRRenderStateInit and call updateRenderState().

  4. During each requestAnimationFrame() callback, for each WebGPU layer:

    1. For each XRGPUSubImage exposed by the layer:

      1. Draw the contents of the subimage using the GPUDevice the XRGPUBinding was created with.
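The steps above can be sketched end-to-end as follows. This is an illustrative outline only: the function name and variable names are examples, and error handling is omitted.

```javascript
// Illustrative setup for rendering a WebXR session with WebGPU.
async function initWebGPUForXR() {
  // 1. Request an XR-compatible adapter and create a device from it.
  const adapter = await navigator.gpu.requestAdapter({ xrCompatible: true });
  const device = await adapter.requestDevice();

  // 2. Create a session with the 'webgpu' feature.
  const session = await navigator.xr.requestSession('immersive-vr', {
    requiredFeatures: ['webgpu'],
  });

  // 3. Bind the XR-compatible device and WebGPU-compatible session together.
  const binding = new XRGPUBinding(session, device);

  // 4. Create a projection layer and make it the session's active layer.
  const layer = binding.createProjectionLayer({
    colorFormat: binding.getPreferredColorFormat(),
  });
  session.updateRenderState({ layers: [layer] });

  return { session, device, binding, layer };
}
```

Per-frame rendering with getViewSubImage() is shown in § 2.3.8.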

2. Initialization

2.1. Feature Descriptor

The string "webgpu" is introduced by this module as a new valid feature descriptor for the WebXR/WebGPU Binding feature.

To use WebGPU for rendering during a session, the session MUST be requested with the webgpu feature descriptor. XRSessions created with this feature are referred to as WebGPU-compatible sessions.

A WebGPU-compatible session MUST have the following behavioral differences from a WebGL-compatible session:

The following code creates a WebGPU-compatible session.
const session = await navigator.xr.requestSession('immersive-vr', {
  requiredFeatures: ['webgpu']
});

NOTE: The webgpu feature may be passed to either requiredFeatures or optionalFeatures. If passed to optionalFeatures, authors should check enabledFeatures after the session is created and render the session’s content with either WebGPU or WebGL depending on whether webgpu is present.
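For example, the fallback check described in the note above could look like this (renderWithWebGPU and renderWithWebGL are hypothetical application functions, not part of the API):

```javascript
// Request 'webgpu' optionally and fall back to WebGL when it is unavailable.
async function startSession() {
  const session = await navigator.xr.requestSession('immersive-vr', {
    optionalFeatures: ['webgpu'],
  });

  if (session.enabledFeatures.includes('webgpu')) {
    renderWithWebGPU(session); // hypothetical: uses an XRGPUBinding
  } else {
    renderWithWebGL(session);  // hypothetical: uses an XRWebGLBinding instead
  }
  return session;
}
```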

2.2. GPUAdapter Integration

To create a GPUDevice that is compatible with an XR device, the GPUAdapter used to create it must have been requested with the xrCompatible option set to true.

partial dictionary GPURequestAdapterOptions {
    boolean xrCompatible = false;
};

The xrCompatible option, when set to true, indicates that the returned GPUAdapter MUST be compatible with the XR device selected by the user agent. If no GPUAdapter can satisfy this constraint, the request MUST return null.

NOTE: There is no WebGPU equivalent to the WebGLRenderingContextBase.makeXRCompatible() method. If an application needs an XR-compatible device, the GPUAdapter must be requested with xrCompatible set to true from the start.

An XR-compatible adapter is a GPUAdapter that was successfully returned from a requestAdapter() call with xrCompatible set to true.

An XR-compatible device is a GPUDevice that was created from an XR-compatible adapter.

2.3. XRGPUBinding

The XRGPUBinding interface is the entry point for using WebGPU with a WebGPU-compatible session. It provides methods for creating WebGPU-backed XRCompositionLayers and obtaining XRGPUSubImages for rendering.

[Exposed=(Window), SecureContext]
interface XRGPUBinding {
  constructor(XRSession session, GPUDevice device);

  readonly attribute double nativeProjectionScaleFactor;

  XRProjectionLayer createProjectionLayer(optional XRGPUProjectionLayerInit init = {});
  XRQuadLayer createQuadLayer(optional XRGPUQuadLayerInit init = {});
  XRCylinderLayer createCylinderLayer(optional XRGPUCylinderLayerInit init = {});
  XREquirectLayer createEquirectLayer(optional XRGPUEquirectLayerInit init = {});
  XRCubeLayer createCubeLayer(optional XRGPUCubeLayerInit init = {});

  XRGPUSubImage getSubImage(XRCompositionLayer layer, XRFrame frame, optional XREye eye = "none");
  XRGPUSubImage getViewSubImage(XRProjectionLayer layer, XRView view);

  GPUTextureFormat getPreferredColorFormat();
};

Each XRGPUBinding has an associated session which is the XRSession it was created with, and an associated device which is the GPUDevice it was created with.

2.3.1. Constructor

The XRGPUBinding(session, device) constructor MUST perform the following steps when invoked:

  1. If session’s ended value is true, throw an InvalidStateError DOMException.

  2. If session is NOT a WebGPU-compatible session, throw an InvalidStateError DOMException.

  3. If device has been destroyed, throw an InvalidStateError DOMException.

  4. If device was NOT created from an XR-compatible adapter, throw an InvalidStateError DOMException.

  5. Let binding be a new XRGPUBinding.

  6. Set binding’s session to session.

  7. Set binding’s device to device.

  8. Return binding.

Creating an XRGPUBinding:
const adapter = await navigator.gpu.requestAdapter({ xrCompatible: true });
const device = await adapter.requestDevice();
const binding = new XRGPUBinding(session, device);

2.3.2. Attributes

The nativeProjectionScaleFactor attribute returns the scale factor that, when applied to the recommended WebGPU texture resolution, would result in a 1:1 texel-to-pixel ratio at the center of the user’s view. This value MAY change over the lifetime of the session.

Each XR device has a recommended WebGPU texture resolution, which represents the per-view dimensions that the user agent considers a good balance between rendering quality and performance for that device. The recommended resolution is determined by taking the maximum width and height across all of the session’s views, scaled by a user agent-defined default scale factor.

NOTE: Unlike the recommended WebGL framebuffer resolution defined in the WebXR spec, which concatenates views side-by-side into a single framebuffer, the recommended WebGPU texture resolution describes the size of a single view. When creating projection layers, the user agent allocates a texture array where each layer corresponds to one view, with each layer having the recommended resolution.

The nativeProjectionScaleFactor attribute can be used to determine the scale factor needed to achieve the native 1:1 resolution. A scaleFactor of 1.0 in createProjectionLayer() uses the recommended resolution directly.
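For instance, an application could compute the texture size a given scale factor would produce, or request a layer at native 1:1 resolution. The scaledTextureSize helper and the sample dimensions below are illustrative, not part of the API:

```javascript
// Compute the texture size that a given scaleFactor would produce from a
// recommended per-view resolution (illustrative helper).
function scaledTextureSize(recommended, scaleFactor) {
  return {
    width: Math.round(recommended.width * scaleFactor),
    height: Math.round(recommended.height * scaleFactor),
  };
}

// A layer at the device's native 1:1 resolution could then be requested with:
// binding.createProjectionLayer({
//   colorFormat: binding.getPreferredColorFormat(),
//   scaleFactor: binding.nativeProjectionScaleFactor,
// });

const recommended = { width: 1600, height: 1600 }; // example per-view size
scaledTextureSize(recommended, 1.25); // → { width: 2000, height: 2000 }
```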

2.3.4. getPreferredColorFormat

The getPreferredColorFormat() method returns the GPUTextureFormat that the user agent recommends for the color attachments of layers created with this binding.

When invoked, the user agent MUST return the preferred GPUTextureFormat for the session’s XR device.

NOTE: The preferred color format is typically "rgba8unorm" or "bgra8unorm", depending on the platform, and may differ from the format reported by navigator.gpu.getPreferredCanvasFormat(). Authors should use this method rather than getPreferredCanvasFormat() to determine the format for their XR projection layers.

2.3.5. createProjectionLayer

The createProjectionLayer(init) method creates a new XRProjectionLayer backed by WebGPU textures.

When invoked, the user agent MUST run the following steps:

  1. If the session has ended, throw an InvalidStateError DOMException.

  2. If the device has been destroyed, throw an InvalidStateError DOMException.

  3. Let colorFormat be init’s colorFormat.

  4. If colorFormat is not a supported color format, throw an InvalidStateError DOMException.

  5. If init’s depthStencilFormat is present and is not a supported depth-stencil format, throw an InvalidStateError DOMException.

  6. Let scaleFactor be init’s scaleFactor, clamped to the range [0.2, max(nativeProjectionScaleFactor, 1.0)].

  7. Let recommendedSize be the recommended WebGPU texture resolution for the session’s XR device.

  8. Let textureSize be recommendedSize scaled by scaleFactor.

  9. Let maxDimension be the device’s maxTextureDimension2D limit.

  10. If either dimension of textureSize exceeds maxDimension, scale textureSize proportionally so the largest dimension equals maxDimension.

  11. Create a new XRProjectionLayer with color textures of size textureSize and format colorFormat.

  12. If init’s depthStencilFormat is present, let depthStencilFormat be init’s depthStencilFormat and allocate depth/stencil textures of size textureSize and format depthStencilFormat.

  13. Return the XRProjectionLayer.

NOTE: The textures allocated for projection layers are typically texture arrays, with one layer per view (e.g., 2 layers for stereoscopic VR). The number of layers is determined by the session’s view count.

Creating a projection layer with color and depth and making it the active layer for the session:
const layer = binding.createProjectionLayer({
  colorFormat: binding.getPreferredColorFormat(),
  depthStencilFormat: 'depth24plus',
});
session.updateRenderState({ layers: [layer] });

2.3.6. createQuadLayer / createCylinderLayer / createEquirectLayer / createCubeLayer

NOTE: Non-projection layer types are still in development and are not yet supported by any user agent. The interfaces described in this section and the associated layer init dictionaries are subject to change.

The createQuadLayer(init), createCylinderLayer(init), createEquirectLayer(init), and createCubeLayer(init) methods each create a new layer of the corresponding type backed by WebGPU textures.

These methods MUST only succeed if the "layers" feature descriptor was requested and enabled for the session. If "layers" is not enabled, the user agent MUST throw a NotSupportedError DOMException.

When any of these methods are invoked, the user agent MUST run the following steps:

  1. If the session has ended, throw an InvalidStateError DOMException.

  2. If the device has been destroyed, throw an InvalidStateError DOMException.

  3. If the "layers" feature descriptor is not enabled for the session, throw a NotSupportedError DOMException.

  4. If init’s color format is not a supported color format, throw an InvalidStateError DOMException.

  5. If init’s depth/stencil format is present and is not a supported depth-stencil format, throw an InvalidStateError DOMException.

  6. Create and return a new layer of the appropriate type using the remaining fields of init.
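As a concrete illustration of these steps from the application's side, a quad layer might be created as follows. This is a sketch assuming the in-development init dictionaries described in § 4.3 and § 4.4, and it requires a session created with the "layers" feature; the function name and dimensions are examples only.

```javascript
// Sketch: create a mono quad layer for 2D UI content.
function createUIQuadLayer(binding, space) {
  return binding.createQuadLayer({
    colorFormat: binding.getPreferredColorFormat(),
    space,                  // XRSpace the quad is positioned in
    viewPixelWidth: 1024,   // texture size, in pixels
    viewPixelHeight: 512,
    layout: 'mono',
    width: 1.0,             // quad size, in meters
    height: 0.5,
  });
}
```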

2.3.7. getSubImage

The getSubImage(layer, frame, eye) method returns an XRGPUSubImage for rendering a non-projection layer.

This method MUST only succeed if the "layers" feature descriptor was requested and enabled for the session. If "layers" is not enabled, the user agent MUST throw a NotSupportedError DOMException.

When invoked, the user agent MUST run the following steps:

  1. If the "layers" feature descriptor is not enabled for the session, throw a NotSupportedError DOMException.

  2. If layer was not created by this XRGPUBinding, throw an InvalidStateError DOMException.

  3. If frame’s session is not the session, throw an InvalidStateError DOMException.

  4. If frame is not an active XR animation frame, throw an InvalidStateError DOMException.

  5. Let subImage be a new XRGPUSubImage.

  6. Set subImage’s colorTexture to the layer’s current color texture.

  7. Set subImage’s depthStencilTexture to the layer’s current depth/stencil texture, or null if no depth/stencil format was specified during layer creation.

  8. Set subImage’s viewport based on the eye parameter and the layer’s layout.

  9. Set subImage’s array layer index based on the eye parameter.

  10. Return subImage.
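Putting these steps together from the application's side, rendering into a mono non-projection layer might look like the following sketch. It assumes quadLayer was created by this binding and that frame is an active XR animation frame; the function name is illustrative.

```javascript
// Sketch: render a mono quad layer's sub image during an animation frame.
function renderQuadLayer(device, binding, quadLayer, frame) {
  const subImage = binding.getSubImage(quadLayer, frame); // eye defaults to "none"
  const encoder = device.createCommandEncoder();

  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: subImage.colorTexture.createView(subImage.getViewDescriptor()),
      loadOp: 'clear',
      storeOp: 'store',
      clearValue: { r: 0, g: 0, b: 0, a: 0 },
    }],
  });

  // Restrict rendering to the portion of the texture this sub image covers.
  const vp = subImage.viewport;
  pass.setViewport(vp.x, vp.y, vp.width, vp.height, 0.0, 1.0);

  // Draw the layer's content here...

  pass.end();
  device.queue.submit([encoder.finish()]);
}
```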

2.3.8. getViewSubImage

The getViewSubImage(layer, view) method returns an XRGPUSubImage for a specific view of a projection layer.

When invoked, the user agent MUST run the following steps:

  1. If layer was not created by this XRGPUBinding, throw an InvalidStateError DOMException.

  2. If view’s session is not the session, throw an InvalidStateError DOMException.

  3. Let subImage be a new XRGPUSubImage.

  4. Set subImage’s colorTexture to the layer’s current color texture.

  5. Set subImage’s depthStencilTexture to the layer’s current depth/stencil texture, or null if no depth/stencil format was specified during layer creation.

  6. Set subImage’s array layer index to the view’s index.

  7. Set subImage’s viewport to the full texture dimensions, adjusted by the current viewport scale.

  8. Return subImage.

Rendering a projection layer during an animation frame:
function onXRFrame(time, frame) {
  session.requestAnimationFrame(onXRFrame);

  const pose = frame.getViewerPose(refSpace);
  if (!pose) return;

  const commandEncoder = device.createCommandEncoder();

  for (const view of pose.views) {
    const subImage = binding.getViewSubImage(layer, view);
    const viewDesc = subImage.getViewDescriptor();

    const passEncoder = commandEncoder.beginRenderPass({
      colorAttachments: [{
        view: subImage.colorTexture.createView(viewDesc),
        loadOp: 'clear',
        storeOp: 'store',
        clearValue: { r: 0, g: 0, b: 0, a: 1 },
      }],
      depthStencilAttachment: {
        view: subImage.depthStencilTexture.createView(viewDesc),
        depthLoadOp: 'clear',
        depthClearValue: 1.0,
        depthStoreOp: 'store',
      },
    });

    const vp = subImage.viewport;
    passEncoder.setViewport(vp.x, vp.y, vp.width, vp.height, 0.0, 1.0);

    // Render scene from the viewpoint of view...

    passEncoder.end();
  }

  device.queue.submit([commandEncoder.finish()]);
}

3. Rendering

3.1. XRGPUSubImage

An XRGPUSubImage represents a view into a WebGPU-backed composition layer’s textures. It provides the GPUTextures to render into and a GPUTextureViewDescriptor that describes which portion of the texture corresponds to the requested view.

Each XRCompositionLayer created with an XRGPUBinding has an associated current color texture and an optional current depth/stencil texture. These are GPUTexture objects allocated and managed by the user agent’s swap chain for the layer. At the beginning of each XR animation frame, the user agent provides new textures for rendering. The new textures MUST be cleared to zero before being provided to the application. The textures from the previous frame are submitted to the compositor and are no longer available for rendering.

[Exposed=(Window), SecureContext]
interface XRGPUSubImage : XRSubImage {
  [SameObject] readonly attribute GPUTexture colorTexture;
  [SameObject] readonly attribute GPUTexture? depthStencilTexture;

  GPUTextureViewDescriptor getViewDescriptor();
};

3.1.1. Attributes

The colorTexture attribute returns the GPUTexture to be used as the color attachment when rendering this sub image. This texture is allocated by the user agent and its lifetime is managed by the layer’s swap chain. The same GPUTexture object is returned for all sub images of the same layer within a single frame — use the result of getViewDescriptor() to determine which array layer of the texture to render to.

The returned texture has the following properties:

The depthStencilTexture attribute returns the GPUTexture to be used as the depth/stencil attachment when rendering this sub image, or null if no depth/stencil format was specified when creating the layer. When provided, the user agent MAY use the depth information to improve composition quality (for example, for reprojection).

When present, the returned texture has the following properties:

NOTE: If a depthStencilFormat was provided during layer creation, it is implied that the application will populate it with an accurate representation of the scene’s depth. If the depth information will not be representative of the rendered scene, the application should allocate its own depth/stencil textures rather than use the layer-provided one.

3.1.2. getViewDescriptor

The getViewDescriptor() method returns a GPUTextureViewDescriptor configured for creating a texture view of this sub image’s portion of the layer’s textures. The returned descriptor can be passed to GPUTexture.createView() on both the colorTexture and depthStencilTexture.

When invoked, the user agent MUST run the following steps:

  1. Let descriptor be a new GPUTextureViewDescriptor.

  2. Set descriptor’s dimension to "2d".

  3. Set descriptor’s mipLevelCount to 1.

  4. Set descriptor’s arrayLayerCount to 1.

  5. Set descriptor’s baseArrayLayer to the array layer index corresponding to this sub image’s view (e.g., 0 for the left eye, 1 for the right eye).

  6. Return descriptor.

NOTE: The returned descriptor selects a single 2D slice from the texture array via baseArrayLayer paired with an arrayLayerCount of 1. The viewport still needs to be applied via setViewport() on the render pass encoder.

Using the view descriptor to create render pass attachments:
const subImage = binding.getViewSubImage(layer, view);
const viewDesc = subImage.getViewDescriptor();

const colorView = subImage.colorTexture.createView(viewDesc);
const depthView = subImage.depthStencilTexture.createView(viewDesc);

4. Layer Creation

4.1. Supported Texture Formats

The supported color formats for XRGPUBinding layer creation are:

The supported depth/stencil formats for XRGPUBinding layer creation are:

NOTE: The formats listed above are the only formats that must be accepted for layer creation. User agents must not accept formats outside of these lists.

4.2. XRGPUProjectionLayerInit

The XRGPUProjectionLayerInit dictionary is used to configure projection layers created with createProjectionLayer().

dictionary XRGPUProjectionLayerInit {
  required GPUTextureFormat colorFormat;
  GPUTextureFormat? depthStencilFormat;
  GPUTextureUsageFlags textureUsage = 0x10; // GPUTextureUsage.RENDER_ATTACHMENT
  double scaleFactor = 1.0;
};
Creating a projection layer with the preferred color format:
const layer = binding.createProjectionLayer({
  colorFormat: binding.getPreferredColorFormat(),
  depthStencilFormat: 'depth24plus-stencil8',
});

The colorFormat member specifies the GPUTextureFormat for the layer’s color textures. This MUST be a supported color format.

The depthStencilFormat member, when present, specifies the GPUTextureFormat for the layer’s depth/stencil textures. This MUST be a supported depth-stencil format. When not present, no depth/stencil texture is allocated.

The textureUsage member specifies the GPUTextureUsageFlags to be set on the allocated textures. The default value is GPUTextureUsage.RENDER_ATTACHMENT. If this value is overridden, RENDER_ATTACHMENT must be explicitly included when the textures will be used as render attachments.

The scaleFactor member specifies a scale factor to apply to the recommended WebGPU texture resolution. A value of 1.0 uses the recommended resolution; values less than 1.0 reduce quality for improved performance; values greater than 1.0 increase quality at the cost of performance. The value is clamped to the range [0.2, max(nativeProjectionScaleFactor, 1.0)].
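The clamping behavior described above (mirroring step 6 of createProjectionLayer()) can be modeled as a standalone function for illustration; clampScaleFactor and the sample values are not part of the API:

```javascript
// Model of how a requested scaleFactor is clamped to
// [0.2, max(nativeProjectionScaleFactor, 1.0)].
function clampScaleFactor(requested, nativeScaleFactor) {
  const upper = Math.max(nativeScaleFactor, 1.0);
  return Math.min(Math.max(requested, 0.2), upper);
}

clampScaleFactor(0.05, 1.4); // → 0.2  (too small, raised to the lower bound)
clampScaleFactor(2.0, 1.4);  // → 1.4  (capped at nativeProjectionScaleFactor)
clampScaleFactor(0.75, 1.4); // → 0.75 (within range, unchanged)
```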

4.3. XRGPULayerInit

The XRGPULayerInit dictionary is the base dictionary for configuring non-projection composition layers. Non-projection layers require the "layers" feature descriptor to be enabled for the session.

dictionary XRGPULayerInit {
  required GPUTextureFormat colorFormat;
  GPUTextureFormat? depthStencilFormat;
  GPUTextureUsageFlags textureUsage = 0x10; // GPUTextureUsage.RENDER_ATTACHMENT
  required XRSpace space;
  unsigned long mipLevels = 1;
  required unsigned long viewPixelWidth;
  required unsigned long viewPixelHeight;
  XRLayerLayout layout = "mono";
  boolean isStatic = false;
};

The colorFormat member specifies the GPUTextureFormat for the layer’s color textures.

The depthStencilFormat member, when present, specifies the GPUTextureFormat for the layer’s depth/stencil textures.

The textureUsage member specifies the GPUTextureUsageFlags to be set on the allocated textures.

The space member specifies the XRSpace in which the layer is positioned.

The mipLevels member specifies the number of mip levels to allocate for the layer’s textures.

The viewPixelWidth member specifies the width, in pixels, of each view’s texture.

The viewPixelHeight member specifies the height, in pixels, of each view’s texture.

The layout member specifies the XRLayerLayout of the layer.

The isStatic member, when set to true, indicates that the layer’s content will rarely change. This allows the user agent to optimize for this scenario.

4.4. XRGPUQuadLayerInit

dictionary XRGPUQuadLayerInit : XRGPULayerInit {
  XRRigidTransform? transform;
  float width = 1.0;
  float height = 1.0;
};

The transform member specifies the initial position and orientation of the quad layer relative to the space.

The width member specifies the width of the quad in meters.

The height member specifies the height of the quad in meters.

4.5. XRGPUCylinderLayerInit

dictionary XRGPUCylinderLayerInit : XRGPULayerInit {
  XRRigidTransform? transform;
  float radius = 2.0;
  float centralAngle = 0.78539;
  float aspectRatio = 2.0;
};

The transform member specifies the initial position and orientation of the cylinder layer.

The radius member specifies the radius of the cylinder in meters.

The centralAngle member specifies the central angle of the cylinder in radians. The default value of 0.78539 corresponds to approximately 45 degrees.

The aspectRatio member specifies the aspect ratio (width / height) of the visible portion of the cylinder.

4.6. XRGPUEquirectLayerInit

dictionary XRGPUEquirectLayerInit : XRGPULayerInit {
  XRRigidTransform? transform;
  float radius = 0;
  float centralHorizontalAngle = 6.28318;
  float upperVerticalAngle = 1.570795;
  float lowerVerticalAngle = -1.570795;
};

The transform member specifies the initial position and orientation of the equirect layer.

The radius member specifies the radius of the sphere in meters. A value of 0 indicates an infinite sphere (the equirect is rendered as a skybox).

The centralHorizontalAngle member specifies the horizontal angular extent of the sphere in radians. The default value of 6.28318 corresponds to a full 360 degrees.

The upperVerticalAngle member specifies the upper vertical angle of the visible portion in radians, measured from the horizon.

The lowerVerticalAngle member specifies the lower vertical angle of the visible portion in radians, measured from the horizon.
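For example, a hemispherical "180 video" layer could restrict the horizontal extent to half the default. This is a sketch assuming the in-development init dictionaries in § 4.3 and § 4.6, and it requires the "layers" feature; the function name and texture dimensions are illustrative.

```javascript
// Sketch: an infinite-radius equirect layer covering 180 degrees horizontally,
// with stereo content stacked top/bottom in the texture.
function create180Layer(binding, space) {
  return binding.createEquirectLayer({
    colorFormat: binding.getPreferredColorFormat(),
    space,
    viewPixelWidth: 4096,
    viewPixelHeight: 2048,
    layout: 'stereo-top-bottom',
    radius: 0,                       // 0 = infinite sphere (skybox)
    centralHorizontalAngle: Math.PI, // 180 degrees instead of the full 360
  });
}
```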

4.7. XRGPUCubeLayerInit

dictionary XRGPUCubeLayerInit : XRGPULayerInit {
  DOMPointReadOnly? orientation;
};

The orientation member specifies the initial orientation of the cube layer as a quaternion.

5. Security and Privacy Considerations

This specification does not introduce any new security or privacy considerations beyond those described in the WebXR Device API, WebXR Layers API, and WebGPU specifications.

The textures provided by XRGPUSubImage are allocated by the user agent and do not expose any additional information about the user’s environment beyond what the underlying XR session already provides. The user agent MUST ensure that textures returned by the binding do not contain data from previous frames or other origins.

The xrCompatible flag does not expose any new fingerprinting surface beyond what is already available through the requestAdapter() API, as the returned adapter capabilities are the same regardless of whether XR compatibility is requested.

6. Conformance

As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 when, and only when, they appear in all capitals, as shown here.


Document conventions


Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[GEOMETRY-1]
Sebastian Zartner; Yehonatan Daniv. Geometry Interfaces Module Level 1. URL: https://drafts.csswg.org/geometry/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[WEBGPU]
Kai Ninomiya; Brandon Jones; Jim Blandy. WebGPU. URL: https://gpuweb.github.io/gpuweb/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[WEBXR]
Brandon Jones; Manish Goregaokar; Rik Cabanier. WebXR Device API. URL: https://immersive-web.github.io/webxr/
[WEBXRLAYERS-1]
Rik Cabanier. WebXR Layers API Level 1. URL: https://immersive-web.github.io/layers/

IDL Index

partial dictionary GPURequestAdapterOptions {
    boolean xrCompatible = false;
};

[Exposed=(Window), SecureContext]
interface XRGPUBinding {
  constructor(XRSession session, GPUDevice device);

  readonly attribute double nativeProjectionScaleFactor;

  XRProjectionLayer createProjectionLayer(optional XRGPUProjectionLayerInit init = {});
  XRQuadLayer createQuadLayer(optional XRGPUQuadLayerInit init = {});
  XRCylinderLayer createCylinderLayer(optional XRGPUCylinderLayerInit init = {});
  XREquirectLayer createEquirectLayer(optional XRGPUEquirectLayerInit init = {});
  XRCubeLayer createCubeLayer(optional XRGPUCubeLayerInit init = {});

  XRGPUSubImage getSubImage(XRCompositionLayer layer, XRFrame frame, optional XREye eye = "none");
  XRGPUSubImage getViewSubImage(XRProjectionLayer layer, XRView view);

  GPUTextureFormat getPreferredColorFormat();
};

[Exposed=(Window), SecureContext]
interface XRGPUSubImage : XRSubImage {
  [SameObject] readonly attribute GPUTexture colorTexture;
  [SameObject] readonly attribute GPUTexture? depthStencilTexture;

  GPUTextureViewDescriptor getViewDescriptor();
};

dictionary XRGPUProjectionLayerInit {
  required GPUTextureFormat colorFormat;
  GPUTextureFormat? depthStencilFormat;
  GPUTextureUsageFlags textureUsage = 0x10; // GPUTextureUsage.RENDER_ATTACHMENT
  double scaleFactor = 1.0;
};

dictionary XRGPULayerInit {
  required GPUTextureFormat colorFormat;
  GPUTextureFormat? depthStencilFormat;
  GPUTextureUsageFlags textureUsage = 0x10; // GPUTextureUsage.RENDER_ATTACHMENT
  required XRSpace space;
  unsigned long mipLevels = 1;
  required unsigned long viewPixelWidth;
  required unsigned long viewPixelHeight;
  XRLayerLayout layout = "mono";
  boolean isStatic = false;
};

dictionary XRGPUQuadLayerInit : XRGPULayerInit {
  XRRigidTransform? transform;
  float width = 1.0;
  float height = 1.0;
};

dictionary XRGPUCylinderLayerInit : XRGPULayerInit {
  XRRigidTransform? transform;
  float radius = 2.0;
  float centralAngle = 0.78539;
  float aspectRatio = 2.0;
};

dictionary XRGPUEquirectLayerInit : XRGPULayerInit {
  XRRigidTransform? transform;
  float radius = 0;
  float centralHorizontalAngle = 6.28318;
  float upperVerticalAngle = 1.570795;
  float lowerVerticalAngle = -1.570795;
};

dictionary XRGPUCubeLayerInit : XRGPULayerInit {
  DOMPointReadOnly? orientation;
};