
Commit cb07e68

Initial commit

10 files changed

Lines changed: 426 additions & 0 deletions

.pr-preview.json

Lines changed: 7 additions & 0 deletions
```json
{
  "src_file": "index.bs",
  "type": "bikeshed",
  "params": {
    "force": 1
  }
}
```

CODE_OF_CONDUCT.md

Lines changed: 3 additions & 0 deletions
# Code of Conduct

All documentation, code and communication under this repository are covered by the [W3C Code of Ethics and Professional Conduct](https://www.w3.org/Consortium/cepc/).

CONTRIBUTING.md

Lines changed: 24 additions & 0 deletions
# Immersive Web Working Group

Contributions to this repository are intended to become part of Recommendation-track documents governed by the [W3C Patent Policy](https://www.w3.org/Consortium/Patent-Policy-20040205/) and [Document License](https://www.w3.org/Consortium/Legal/copyright-documents). To make substantive contributions to specifications, you must either participate in the relevant W3C Working Group or make a non-member patent licensing commitment.

If you are not the sole contributor to a contribution (pull request), please identify all contributors in the pull request comment.

To add a contributor (other than yourself, that's automatic), mark them one per line as follows:

```
+@github_username
```

If you added a contributor by mistake, you can remove them in a comment with:

```
-@github_username
```

If you are making a pull request on behalf of someone else but you had no part in designing the feature, you can remove yourself with the above syntax.

LICENSE.md

Lines changed: 4 additions & 0 deletions
All documents in this Repository are licensed by contributors under the [W3C Document License](https://www.w3.org/Consortium/Legal/copyright-documents).

Makefile

Lines changed: 10 additions & 0 deletions
```make
.PHONY: all index.html

all: index.html

index.html: index.bs
	curl https://api.csswg.org/bikeshed/ -F file=@index.bs -F output=err
	curl https://api.csswg.org/bikeshed/ -F file=@index.bs -F force=1 > index.html

local: index.bs
	bikeshed --die-on=everything spec index.bs
```

explainer.md

Lines changed: 279 additions & 0 deletions
# WebXR/WebGPU binding

WebXR is well understood to be a demanding API in terms of graphics rendering performance, a task that has previously fallen entirely to WebGL. The [WebGL API](https://www.khronos.org/registry/webgl/specs/latest/1.0/), while capable, is based on relatively outdated native APIs that have since been overtaken by more modern equivalents. As a result, it can sometimes be a struggle to implement various recommended XR rendering techniques in a performant way.

The [WebGPU API](https://gpuweb.github.io/gpuweb/) is an upcoming API for utilizing the graphics and compute capabilities of a device's GPU more efficiently than WebGL allows, with an API that better matches both GPU hardware architecture and the modern native APIs that interface with them, such as Vulkan, Direct3D 12, and Metal. As a result, it offers the potential for developers to achieve significantly better performance in their WebXR applications.

This module aims to allow the existing [WebXR Layers module](https://immersive-web.github.io/layers/) to interface with WebGPU by providing WebGPU swap chains for each layer type.
## WebGPU binding

As with the existing WebGL path described in the Layers module, all WebGPU resources required by WebXR would be supplied by an `XRGPUBinding` instance, created with an `XRSession` and [`GPUDevice`](https://gpuweb.github.io/gpuweb/#gpu-device) like so:

```js
const gpuAdapter = await navigator.gpu.requestAdapter({xrCompatible: true});
const gpuDevice = await gpuAdapter.requestDevice();
const xrGpuBinding = new XRGPUBinding(xrSession, gpuDevice);
```

Note that the [`GPUAdapter`](https://gpuweb.github.io/gpuweb/#gpu-adapter) must be requested with the `xrCompatible` option set to `true`. This mirrors the WebGL context creation argument of the same name, and ensures that the returned adapter is one that is compatible with the UA's selected XR device.
Once the `XRGPUBinding` instance has been created, it can be used to create the various `XRCompositionLayer`s, just like `XRWebGLBinding`. One of the primary differences is that when using WebGPU the format of the texture must be specified. The list of supported formats can be queried from the `XRGPUBinding.getSupportedColorFormats()` and `XRGPUBinding.getSupportedDepthStencilFormats()` methods, which return the supported formats in order of preference (so element `0` in the returned list is always the most highly preferred format).

```js
const gpuAdapter = await navigator.gpu.requestAdapter({xrCompatible: true});
const gpuDevice = await gpuAdapter.requestDevice();
const xrGpuBinding = new XRGPUBinding(xrSession, gpuDevice);
const projectionLayer = xrGpuBinding.createProjectionLayer({
  colorFormat: xrGpuBinding.getSupportedColorFormats()[0],
  depthStencilFormat: xrGpuBinding.getSupportedDepthStencilFormats()[0],
});
```
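An application that prefers a specific format (for example, to match pre-rendered assets) can search the supported list rather than always taking element `0`. A minimal sketch; the `pickColorFormat` helper and the specific format names are illustrative, not part of the proposed API:

```js
// Pick the first format from the application's own preference list that the
// binding supports, falling back to the binding's most-preferred format.
function pickColorFormat(xrGpuBinding, preferences) {
  const supported = xrGpuBinding.getSupportedColorFormats();
  for (const format of preferences) {
    if (supported.includes(format)) return format;
  }
  return supported[0];
}

// Usage: pickColorFormat(xrGpuBinding, ['rgba8unorm', 'bgra8unorm'])
```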
This allocates a layer that supplies a [`GPUTexture`](https://gpuweb.github.io/gpuweb/#gputexture) to use for both color attachments and depth/stencil attachments. Note that if a `depthStencilFormat` is provided it is implied that the application will populate it with a reasonable representation of the scene's depth, and that the UA's XR compositor may use that information when rendering. If you cannot guarantee that the depth information output by your application is representative of the scene rendered into the color attachment, your application should allocate its own depth/stencil textures instead.
As with the base XR Layers module, `XRGPUBinding` is only required to support `XRProjectionLayer`s unless the `layers` feature descriptor is supplied at session creation and supported by the UA/device. If the `layers` feature descriptor is requested and supported, however, all other `XRCompositionLayer` types must be supported. Layers are still set via `XRSession`'s `updateRenderState` method, as usual:

```js
const quadLayer = xrGpuBinding.createQuadLayer({
  colorFormat: xrGpuBinding.getSupportedColorFormats()[0],
  space: xrReferenceSpace,
  viewPixelWidth: 1024,
  viewPixelHeight: 768,
  layout: 'stereo'
});

xrSession.updateRenderState({ layers: [projectionLayer, quadLayer] });
```
## Rendering

During `XRFrame` processing each layer can be updated with new imagery. Calling `getViewSubImage()` with a view from the `XRFrame` will return an `XRGPUSubImage` indicating the textures to use as the render target and what portion of the texture will be presented to the `XRView`'s associated physical display.

WebGPU projection layers will provide the same `colorTexture` and `depthStencilTexture` for each `XRGPUSubImage` queried, while the `XRGPUSubImage` queried for each `XRView` will contain a different `GPUTextureViewDescriptor` that should be used when creating the texture views of both the color and depth textures to use as render pass attachments. The `XRGPUSubImage`'s `viewport` must also be set to ensure only the expected portion of the texture is written to.
```js
// Render loop for a projection layer with a WebGPU texture source.
const xrGpuBinding = new XRGPUBinding(xrSession, gpuDevice);
const layer = xrGpuBinding.createProjectionLayer({
  colorFormat: xrGpuBinding.getSupportedColorFormats()[0],
  depthStencilFormat: xrGpuBinding.getSupportedDepthStencilFormats()[0],
});

xrSession.updateRenderState({ layers: [layer] });
xrSession.requestAnimationFrame(onXRFrame);

function onXRFrame(time, xrFrame) {
  xrSession.requestAnimationFrame(onXRFrame);

  const xrViewerPose = xrFrame.getViewerPose(xrReferenceSpace);
  const commandEncoder = gpuDevice.createCommandEncoder({});

  for (const view of xrViewerPose.views) {
    const subImage = xrGpuBinding.getViewSubImage(layer, view);

    // Render to the subImage's color and depth textures.
    const passEncoder = commandEncoder.beginRenderPass({
      colorAttachments: [{
        attachment: subImage.colorTexture.createView(subImage.viewDescriptor),
        loadValue: 'load',
      }],
      depthStencilAttachment: {
        attachment: subImage.depthStencilTexture.createView(subImage.viewDescriptor),
        depthLoadValue: 'load',
        depthStoreOp: 'store',
        stencilLoadValue: 'load',
        stencilStoreOp: 'store',
      }
    });

    const viewport = subImage.viewport;
    passEncoder.setViewport(viewport.x, viewport.y, viewport.width, viewport.height, 0.0, 1.0);

    // Render from the viewpoint of view.

    passEncoder.endPass();
  }

  gpuDevice.defaultQueue.submit([commandEncoder.finish()]);
}
```
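Applications that render several layer types may want to factor the per-sub-image attachment setup into a helper. This is purely a refactoring sketch of the calls already shown in this document; the `renderPassDescriptorFor` name is hypothetical:

```js
// Build a render pass descriptor for a sub image, creating color and
// depth/stencil attachment views with the sub image's shared view descriptor.
function renderPassDescriptorFor(subImage) {
  return {
    colorAttachments: [{
      attachment: subImage.colorTexture.createView(subImage.viewDescriptor),
      loadValue: 'load',
    }],
    depthStencilAttachment: {
      attachment: subImage.depthStencilTexture.createView(subImage.viewDescriptor),
      depthLoadValue: 'load',
      depthStoreOp: 'store',
      stencilLoadValue: 'load',
      stencilStoreOp: 'store',
    }
  };
}

// Usage: commandEncoder.beginRenderPass(renderPassDescriptorFor(subImage))
```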
Non-projection layers, such as `XRQuadLayer`, have only one sub image for `'mono'` layers and two sub images for `'stereo'` layers, which may not align exactly with the number of `XRView`s reported by the device. To avoid rendering the same view multiple times in these scenarios, non-projection layers must use the `XRGPUBinding`'s `getSubImage()` method to get the `XRSubImage` to render to.
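The layout-to-sub-image mapping can be captured in a tiny helper. A sketch only (`eyesForLayout` is a hypothetical name); `'none'` matches the default `XREye` value accepted by `getSubImage()`:

```js
// Determine which eye values to query sub images for, based on a
// non-projection layer's layout: one sub image for mono, two for stereo.
function eyesForLayout(layout) {
  return layout === 'stereo' ? ['left', 'right'] : ['none'];
}

// Usage:
//   for (const eye of eyesForLayout(quadLayer.layout)) {
//     const subImage = xrGpuBinding.getSubImage(quadLayer, xrFrame, eye);
//     // ...render to subImage...
//   }
```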
For mono textures the `XRSubImage` can be queried using just the layer and `XRFrame`:

```js
// Render loop for a mono quad layer with a WebGPU texture source.
const xrGpuBinding = new XRGPUBinding(xrSession, gpuDevice);
const quadLayer = xrGpuBinding.createQuadLayer({
  colorFormat: xrGpuBinding.getSupportedColorFormats()[0],
  space: xrReferenceSpace,
  viewPixelWidth: 512,
  viewPixelHeight: 512,
  layout: 'mono'
});

// Position 2 meters away from the origin with a width and height of 1.5 meters.
quadLayer.transform = new XRRigidTransform({z: -2});
quadLayer.width = 1.5;
quadLayer.height = 1.5;

xrSession.updateRenderState({ layers: [quadLayer] });
xrSession.requestAnimationFrame(onXRFrame);

function onXRFrame(time, xrFrame) {
  xrSession.requestAnimationFrame(onXRFrame);

  const commandEncoder = gpuDevice.createCommandEncoder({});

  const subImage = xrGpuBinding.getSubImage(quadLayer, xrFrame);

  // Render to the subImage's color texture.
  const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [{
      attachment: subImage.colorTexture.createView(subImage.viewDescriptor),
      loadValue: 'load',
    }]
    // Simple quad layers often won't require a depth attachment, as they're
    // frequently just displaying a pre-rendered 2D image.
  });

  const viewport = subImage.viewport;
  passEncoder.setViewport(viewport.x, viewport.y, viewport.width, viewport.height, 0.0, 1.0);

  // Render the mono content.

  passEncoder.endPass();

  gpuDevice.defaultQueue.submit([commandEncoder.finish()]);
}
```
For stereo textures the target `XREye` must be given to `getSubImage()` as well:

```js
// Render loop for a stereo quad layer with a WebGPU texture source.
const xrGpuBinding = new XRGPUBinding(xrSession, gpuDevice);
const quadLayer = xrGpuBinding.createQuadLayer({
  colorFormat: xrGpuBinding.getSupportedColorFormats()[0],
  space: xrReferenceSpace,
  viewPixelWidth: 512,
  viewPixelHeight: 512,
  layout: 'stereo'
});

// Position 2 meters away from the origin with a width and height of 1.5 meters.
quadLayer.transform = new XRRigidTransform({z: -2});
quadLayer.width = 1.5;
quadLayer.height = 1.5;

xrSession.updateRenderState({ layers: [quadLayer] });
xrSession.requestAnimationFrame(onXRFrame);

function onXRFrame(time, xrFrame) {
  xrSession.requestAnimationFrame(onXRFrame);

  const commandEncoder = gpuDevice.createCommandEncoder({});

  for (const eye of ['left', 'right']) {
    const subImage = xrGpuBinding.getSubImage(quadLayer, xrFrame, eye);

    // Render to the subImage's color texture.
    const passEncoder = commandEncoder.beginRenderPass({
      colorAttachments: [{
        attachment: subImage.colorTexture.createView(subImage.viewDescriptor),
        loadValue: 'load',
      }]
      // Simple quad layers often won't require a depth attachment, as they're
      // frequently just displaying a pre-rendered 2D image.
    });

    const viewport = subImage.viewport;
    passEncoder.setViewport(viewport.x, viewport.y, viewport.width, viewport.height, 0.0, 1.0);

    // Render content for the given eye.

    passEncoder.endPass();
  }

  gpuDevice.defaultQueue.submit([commandEncoder.finish()]);
}
```
## Proposed IDL

```webidl
partial dictionary GPURequestAdapterOptions {
    boolean xrCompatible = false;
};

[Exposed=Window] interface XRGPUSubImage : XRSubImage {
    [SameObject] readonly attribute GPUTexture colorTexture;
    [SameObject] readonly attribute GPUTexture? depthStencilTexture;
    readonly attribute GPUTextureViewDescriptor viewDescriptor;
    readonly attribute unsigned long textureWidth;
    readonly attribute unsigned long textureHeight;
    readonly attribute unsigned long textureArrayLayers;
};

dictionary XRGPUProjectionLayerInit {
    required GPUTextureFormat colorFormat;
    GPUTextureFormat? depthStencilFormat;
    GPUTextureUsageFlags textureUsage = 0x10; // GPUTextureUsage.OUTPUT_ATTACHMENT
    double scaleFactor = 1.0;
};

dictionary XRGPULayerInit {
    required GPUTextureFormat colorFormat;
    GPUTextureFormat? depthStencilFormat;
    GPUTextureUsageFlags textureUsage = 0x10; // GPUTextureUsage.OUTPUT_ATTACHMENT
    required XRSpace space;
    required unsigned long viewPixelWidth;
    required unsigned long viewPixelHeight;
    XRLayerLayout layout = "mono";
    boolean isStatic = false;
};

dictionary XRGPUQuadLayerInit : XRGPULayerInit {
    XRRigidTransform? transform;
    float width = 1.0;
    float height = 1.0;
};

dictionary XRGPUCylinderLayerInit : XRGPULayerInit {
    XRRigidTransform? transform;
    float radius = 2.0;
    float centralAngle = 0.78539;
    float aspectRatio = 2.0;
};

dictionary XRGPUEquirectLayerInit : XRGPULayerInit {
    XRRigidTransform? transform;
    float radius = 0;
    float centralHorizontalAngle = 6.28318;
    float upperVerticalAngle = 1.570795;
    float lowerVerticalAngle = -1.570795;
};

dictionary XRGPUCubeLayerInit : XRGPULayerInit {
    DOMPointReadOnly? orientation;
};

[Exposed=Window] interface XRGPUBinding {
    constructor(XRSession session, GPUDevice device);

    readonly attribute double nativeProjectionScaleFactor;

    FrozenArray<GPUTextureFormat> getSupportedColorFormats();
    FrozenArray<GPUTextureFormat> getSupportedDepthStencilFormats();

    XRProjectionLayer createProjectionLayer(optional XRGPUProjectionLayerInit init);
    XRQuadLayer createQuadLayer(optional XRGPUQuadLayerInit init);
    XRCylinderLayer createCylinderLayer(optional XRGPUCylinderLayerInit init);
    XREquirectLayer createEquirectLayer(optional XRGPUEquirectLayerInit init);
    XRCubeLayer createCubeLayer(optional XRGPUCubeLayerInit init);

    XRGPUSubImage getSubImage(XRCompositionLayer layer, XRFrame frame, optional XREye eye = "none");
    XRGPUSubImage getViewSubImage(XRProjectionLayer layer, XRView view);
};
```

favicon-32x32.png

1.05 KB
favicon-96x96.png

3.18 KB