If you've already used the WebXR Device API, you're most of the way there.
The WebXR Device API shipped last fall in Chrome 79. As stated then, the API's implementation in Chrome is a work in progress. Chrome is happy to announce that some of that work is finished. In Chrome 81, two new features have arrived:

- Augmented reality
- Hit testing
This article covers augmented reality. If you've already used the WebXR Device API, you'll be glad to hear there's very little new to learn. Entering a WebXR session is largely the same. Running a frame loop is largely the same. The differences lie in configurations that allow content to be shown appropriately for augmented reality. If you're not familiar with the basic concepts of WebXR, you should read my earlier posts on the WebXR Device API, or at least be familiar with the topics covered there. You should know how to request and enter a session, and you should know how to run a frame loop.
For information on hit testing, see the companion article, Positioning virtual objects in real-world views. The code in this article is based on the Immersive AR Session sample (demo, source) from the Immersive Web Working Group's WebXR Device API Samples.
Before diving into the code, you should try the Immersive AR Session sample at least once. You will need a modern Android phone with Chrome 81 or later.
What is it for?
Augmented reality will be a valuable addition to many existing or new web pages by allowing them to implement AR use cases without leaving the browser. For example, it can help people learn on education sites, and allow potential buyers to visualize objects in their home while shopping.

Consider the second use case. Imagine simulating the placement of a life-size representation of a virtual object in a real scene. Once placed, the image stays on the selected surface, appears at the size it would be if the real item were sitting on that surface, and allows the user to move around it as well as closer to or farther from it. This gives viewers a deeper understanding of the object than is possible with a two-dimensional image.
I'm getting a little ahead of myself. To actually do what I've described, you need AR functionality and some means of detecting surfaces. This article covers the former. The accompanying article on the WebXR Hit Test API (linked above) covers the latter.
Requesting a session
Requesting a session is very similar to what you've seen before. First, find out whether the session type you want is available on the current device by calling xr.isSessionSupported(). Instead of requesting 'immersive-vr' as before, request 'immersive-ar'.
if (navigator.xr) {
  const supported = await navigator.xr.isSessionSupported('immersive-ar');
  if (supported) {
    xrButton.addEventListener('click', onButtonClicked);
    xrButton.textContent = 'Enter AR';
    xrButton.enabled = supported;
  }
}
As before, this enables an 'Enter AR' button. When the user clicks it, call xr.requestSession(), also passing 'immersive-ar'.
let xrSession = null;

function onButtonClicked() {
  if (!xrSession) {
    navigator.xr.requestSession('immersive-ar')
    .then((session) => {
      xrSession = session;
      xrSession.isImmersive = true;
      xrButton.textContent = 'Exit AR';
      onSessionStarted(xrSession);
    });
  } else {
    xrSession.end();
  }
}
A convenience property
You probably noticed that I highlighted two lines in the previous code sample. The XRSession object appears to have a property called isImmersive. This is a convenience property I created myself, and it is not part of the specification. I'll use it later to make decisions about what to show the viewer. Why isn't this property part of the API? Because your app may need to track this value differently, the spec authors decided to keep the API clean.
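To make the pattern concrete, here's a minimal sketch of one way to track such a convenience property. Both isImmersiveMode() and requestTaggedSession() are hypothetical helpers of my own, not part of the WebXR spec or the sample:

```javascript
// Hypothetical helper: distinguishes immersive session modes from 'inline'.
function isImmersiveMode(mode) {
  return mode === 'immersive-ar' || mode === 'immersive-vr';
}

// Hypothetical wrapper that requests a session and tags it with the custom
// property. Requires a browser with WebXR support, hence the guard.
async function requestTaggedSession(mode) {
  if (!navigator.xr) {
    throw new Error('WebXR not available');
  }
  const session = await navigator.xr.requestSession(mode);
  session.isImmersive = isImmersiveMode(mode);  // custom property, not spec
  return session;
}
```

Your app could just as easily keep this flag in its own state object instead of on the session; that's exactly the flexibility the spec authors wanted to preserve.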
Entering a session
Recall what onSessionStarted() looked like in my previous article:
function onSessionStarted(xrSession) {
  xrSession.addEventListener('end', onSessionEnded);
  let canvas = document.createElement('canvas');
  gl = canvas.getContext('webgl', { xrCompatible: true });
  xrSession.updateRenderState({
    baseLayer: new XRWebGLLayer(xrSession, gl)
  });
  xrSession.requestReferenceSpace('local-floor')
  .then((refSpace) => {
    xrRefSpace = refSpace;
    xrSession.requestAnimationFrame(onXRFrame);
  });
}
I need to add a few things to account for rendering augmented reality.

Turn off the background

First, I'm going to determine whether I need the background. This is the first place I'll use my convenience property.
function onSessionStarted(xrSession) {
  xrSession.addEventListener('end', onSessionEnded);
  if (xrSession.isImmersive) {
    removeBackground();
  }
  let canvas = document.createElement('canvas');
  gl = canvas.getContext('webgl', { xrCompatible: true });
  xrSession.updateRenderState({
    baseLayer: new XRWebGLLayer(xrSession, gl)
  });
  refSpaceType = xrSession.isImmersive ? 'local' : 'viewer';
  xrSession.requestReferenceSpace(refSpaceType).then((refSpace) => {
    xrRefSpace = refSpace;
    xrSession.requestAnimationFrame(onXRFrame);
  });
}
Reference spaces
My previous articles skirted the subject of reference spaces. The sample I'm describing uses two of them, so it's time to correct that omission.

A full explanation of reference spaces would be longer than I can provide here. I'm only going to discuss reference spaces as they apply to augmented reality.
A reference space describes the relationship between the virtual world and the user's physical environment. It does this by:
- Specifying the origin of the coordinate system used to express positions in the virtual world.
- Specifying whether the user is expected to move within that coordinate system.
- Specifying whether that coordinate system has preset boundaries. (The examples shown here do not use coordinate systems with preset boundaries.)
For all reference spaces, the X coordinate expresses left and right, Y expresses up and down, and Z expresses forward and backward. Positive values are right, up, and backward, respectively.
The coordinates returned by XRFrame.getViewerPose() depend on the requested reference space type. More on that when we get to the frame loop. Right now, we need to select a reference space type that's appropriate for augmented reality. Again, this uses my convenience property.
let refSpaceType;

function onSessionStarted(xrSession) {
  xrSession.addEventListener('end', onSessionEnded);
  if (xrSession.isImmersive) {
    removeBackground();
  }
  let canvas = document.createElement('canvas');
  gl = canvas.getContext('webgl', { xrCompatible: true });
  xrSession.updateRenderState({
    baseLayer: new XRWebGLLayer(xrSession, gl)
  });
  refSpaceType = xrSession.isImmersive ? 'local' : 'viewer';
  xrSession.requestReferenceSpace(refSpaceType).then((refSpace) => {
    xrRefSpace = refSpace;
    xrSession.requestAnimationFrame(onXRFrame);
  });
}
If you've visited the Immersive Augmented Reality Session Sample, you'll have noticed that initially the scene is static and not augmented reality at all. You can drag and swipe your finger to move around the scene. If you click "START AR", the background disappears and you can move around the scene by moving the device. The two modes use different reference space types. The highlighted line above shows how the type is selected. It uses the following reference space types:
local
- The native origin is at the viewer's position at the time of session creation. This means the experience doesn't necessarily have a well-defined floor, and the exact position of the origin may vary by platform. Though there are no preset boundaries on the space, it's expected that content can be viewed with no movement other than rotation. As our own AR example shows, some movement within the space is possible.
viewer
- Most frequently used for content presented inline in the page, this space follows the viewing device. When passed to getViewerPose() it provides no tracking, and therefore always reports a pose at the origin unless the application modifies it with XRReferenceSpace.getOffsetReferenceSpace(). The sample uses this to implement touch-based panning of the camera.
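As a rough sketch of how such touch panning could work (an illustration under my own assumptions, not the sample's actual code; lookOrientation() is a hypothetical helper), you can convert accumulated drag angles into an orientation quaternion and derive a new offset reference space from it:

```javascript
// Hypothetical helper: build an orientation quaternion from yaw (rotation
// about Y) followed by pitch (rotation about X). Returns {x, y, z, w} in
// the form XRRigidTransform's orientation argument expects.
function lookOrientation(yawRad, pitchRad) {
  const cy = Math.cos(yawRad / 2), sy = Math.sin(yawRad / 2);
  const cp = Math.cos(pitchRad / 2), sp = Math.sin(pitchRad / 2);
  // Quaternion product qYaw * qPitch, expanded by hand.
  return { x: cy * sp, y: sy * cp, z: -sy * sp, w: cy * cp };
}

// Sketch of deriving the panned space in the browser (assumes baseSpace is
// the 'viewer' reference space and yaw/pitch accumulate the user's drags).
function pannedSpace(baseSpace, yawRad, pitchRad) {
  const transform = new XRRigidTransform(
      {x: 0, y: 0, z: 0}, lookOrientation(yawRad, pitchRad));
  return baseSpace.getOffsetReferenceSpace(transform);
}
```

Each frame, you would pass the returned space to getViewerPose() instead of the unmodified viewer space.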
Running a frame loop
Conceptually, nothing changes from what I did in the virtual reality session described in my previous articles. Pass the reference space to XRFrame.getViewerPose(). The returned XRViewerPose will be for the current reference space type. Using 'viewer' as the default allows a page to show content previews before user consent for AR or VR is requested. This illustrates an important point: inline content uses the same frame loop as immersive content, which reduces the amount of code that must be maintained.
function onXRFrame(hrTime, xrFrame) {
  let xrSession = xrFrame.session;
  xrSession.requestAnimationFrame(onXRFrame);
  let xrViewerPose = xrFrame.getViewerPose(xrRefSpace);
  if (xrViewerPose) {
    // Render the scene.
  }
}
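The drawing itself is elided above. As a sketch of what typically goes inside that pose check, assuming the standard WebXR pattern (drawScene() is a hypothetical function standing in for your renderer):

```javascript
// Sketch of per-frame drawing, assuming gl was set up as in the snippets
// above. Bind the session's framebuffer, then draw once per view.
function renderPose(xrSession, xrViewerPose) {
  const glLayer = xrSession.renderState.baseLayer;
  gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  for (const view of xrViewerPose.views) {
    // Each view (two per frame in VR, one in AR) gets its own viewport.
    const vp = glLayer.getViewport(view);
    gl.viewport(vp.x, vp.y, vp.width, vp.height);
    drawScene(view.projectionMatrix, view.transform.inverse.matrix);
  }
}

// Purely for illustration: a side-by-side split of the framebuffer, the
// kind of layout getViewport() typically reports for two stereo views.
function sideBySideViewports(fbWidth, fbHeight, viewCount) {
  const w = Math.floor(fbWidth / viewCount);
  const vps = [];
  for (let i = 0; i < viewCount; i++) {
    vps.push({ x: i * w, y: 0, width: w, height: fbHeight });
  }
  return vps;
}
```

Because the loop iterates over whatever views the pose reports, the same code serves inline, VR, and AR sessions; only the number of views differs.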
Conclusion
This series of articles covers only the basics of implementing immersive content on the web. Many more capabilities and use cases are presented by the Immersive Web Working Group's WebXR Device API Samples. We've also just published a hit test article that explains an API for detecting surfaces and placing virtual items in a real-world camera view. Check them out, and watch the web.dev blog for more articles over the coming year.
Photo by David Grandmougin on Unsplash