An off-main-thread architecture can significantly improve your application's reliability and user experience.

In the last 20 years, the web has evolved dramatically from static documents with a few styles and images into complex, dynamic applications. However, one thing has remained largely unchanged: we have just one thread per browser tab (with some exceptions) to do the work of rendering our sites and running our JavaScript.

As a result, the main thread has become incredibly overworked. And as the complexity of web applications grows, the main thread becomes a significant performance bottleneck. To make matters worse, the amount of time it takes to run code on the main thread for a given user is almost completely unpredictable, because device capabilities have a huge effect on performance. That unpredictability will only grow as users access the web from an increasingly diverse set of devices, from hyper-constrained feature phones to high-powered, high-refresh-rate flagship machines.

If we want sophisticated web applications to reliably meet performance guidelines like the RAIL model, which is based on empirical data about human perception and psychology, we need ways to run our code off the main thread (OMT).

If you want to learn more about the case for an OMT architecture, watch my CDS 2019 talk below.

Threading with web workers

Native platforms typically support parallel work by letting you assign a function to a thread, which runs in parallel with the rest of your program. You can access the same variables from both threads, and access to these shared variables can be synchronized with mutexes and semaphores to prevent race conditions.

In JavaScript, we can get roughly similar functionality from web workers, which have been around since 2007 and are supported in all major browsers since 2012. Web workers run in parallel with the main thread, but unlike native threads, they cannot share variables.

Do not confuse web workers with service workers or worklets. While the names are similar, the functionality and uses are different.

To create a web worker, pass a file to the worker's constructor, which starts executing that file in a separate thread:

const worker = new Worker("./worker.js");

Communicate with the web worker by sending messages through the postMessage API. Pass the message value as a parameter in the postMessage call, and then add a message event listener to the worker:

main.js

const worker = new Worker("./worker.js");
worker.postMessage([40, 2]);

worker.js

addEventListener("message", event => {
  const [a, b] = event.data; // receive the two numbers sent from the main thread
});

To send a message back to the main thread, use the same postMessage API in the web worker and set up an event listener on the main thread:

main.js

const worker = new Worker("./worker.js");
worker.postMessage([40, 2]);
worker.addEventListener("message", event => {
  console.log(event.data);
});

worker.js

addEventListener("message", event => {
  const [a, b] = event.data;

  postMessage(a + b);
});

Admittedly, this approach is somewhat limited. Historically, web workers have been used primarily to move a single piece of heavy work off the main thread. Trying to handle multiple operations with a single web worker quickly becomes unwieldy: you have to encode not just the parameters but also the operation in the message, and you have to do the bookkeeping to match responses with requests. That complexity is probably why web workers haven't been more widely adopted.
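As a rough sketch of that bookkeeping, a hand-rolled protocol might tag each message with an operation name and a request id so responses can be matched to the requests that produced them. The names below, such as call and pending, are hypothetical and not part of any library:

main.js

const worker = new Worker("./worker.js");
const pending = new Map(); // request id -> resolve function
let nextId = 0;

function call(operation, args) {
  return new Promise(resolve => {
    const id = nextId++;
    pending.set(id, resolve);
    worker.postMessage({id, operation, args});
  });
}

worker.addEventListener("message", event => {
  const {id, result} = event.data;
  pending.get(id)(result); // match the response to its request
  pending.delete(id);
});

call("add", [40, 2]).then(sum => console.log(sum)); // logs 42

worker.js

addEventListener("message", event => {
  const {id, operation, args} = event.data;
  // The supported operations are hard-coded into the message protocol.
  if (operation === "add") {
    const [a, b] = args;
    postMessage({id, result: a + b});
  }
});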

But if we could eliminate some of the communication difficulties between the main thread and the web workers, this model could be ideal for many use cases. And thankfully, there is a library that does just that!

Comlink: making web workers less work

Comlink is a library whose goal is to let you use web workers without having to think about the details of postMessage. Comlink lets you share variables between web workers and the main thread almost like programming languages that support threading natively.

You configure Comlink by importing it into a web worker and defining a set of functions to expose to the main thread. Then you import Comlink into the main thread, wrap the worker, and get access to the exposed functions:

worker.js

import {expose} from "comlink";

const api = {
  someMethod() { }
};
expose(api);

main.js

import {wrap} from "comlink";

const worker = new Worker("./worker.js");
const api = wrap(worker);

The api variable on the main thread behaves the same as the one in the web worker, except that each function returns a promise for a value instead of the value itself.
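For example, calling the exposed method from the main thread might look like the following sketch; someMethod is just the placeholder defined in worker.js above, and the resolved value is whatever that method returns:

main.js

// Every call on the wrapped api returns a promise, so await (or .then) is needed.
async function run() {
  const result = await api.someMethod();
  console.log(result);
}
run();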

What code should you move to a web worker?

Web workers have no access to the DOM or to many APIs like WebUSB, WebRTC, or Web Audio, so you cannot put parts of your application that depend on such access in a worker. Still, every small piece of code moved to a worker buys more headroom on the main thread for the things that have to be there, such as updating the user interface.

Restricting UI access to the main thread is common on other platforms. In fact, both iOS and Android call the main thread the UI thread.

One problem for web developers is that most web applications rely on a user interface framework like Vue or React to orchestrate everything in the application; everything is a component of the framework and therefore inherently bound to the DOM. That would seem to make migration to an OMT architecture difficult.

However, if we switch to a model where user interface concerns are separated from other concerns, such as state management, web workers can be quite useful even with framework-based applications. That is exactly the approach taken with PROXX.
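As a rough sketch of that separation, assuming the state logic is exposed through Comlink, a hypothetical state.js worker could own the application state while the main thread only renders whatever snapshot it gets back:

state.js

// Runs in a web worker: owns the application state, no DOM access needed.
import {expose} from "comlink";

const state = {count: 0};

expose({
  increment() {
    state.count++;
    return state; // return a snapshot for the UI to render
  },
});

main.js

// The UI layer stays on the main thread and only renders.
import {wrap} from "comlink";

const worker = new Worker("./state.js");
const store = wrap(worker);

function render(state) {
  document.querySelector("#count").textContent = state.count;
}

document.querySelector("button").addEventListener("click", async () => {
  render(await store.increment()); // state changes happen off the main thread
});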

PROXX: an OMT case study

The Google Chrome team developed PROXX as a Minesweeper clone that meets Progressive Web App requirements, including working offline and having an engaging user experience. Unfortunately, early versions of the game performed poorly on constrained devices like feature phones, which led the team to realize that the main thread was a bottleneck.

The team decided to use web workers to separate the visual state of the game from its logic:

  • The main thread handles rendering of animations and transitions.
  • A web worker handles the logic of the game, which is purely computational.

This approach is similar to the Flux pattern used by Redux, so many Flux applications can migrate fairly easily to an OMT architecture. Take a look at this blog post to read more about how to apply OMT to a Redux application.

OMT had an interesting effect on PROXX's feature phone performance. In the non-OMT version, the user interface freezes for six seconds after the user interacts with it. There is no feedback, and the user has to wait the full six seconds before being able to do anything else.

The UI response time in the non-OMT version of PROXX.

In the OMT version, however, the game takes twelve seconds to complete a UI update. While that seems like a performance loss, it actually leads to more feedback for the user. The slowdown occurs because the application sends more frames than the non-OMT version, which sends no frames at all. The user therefore knows that something is happening and can keep playing as the UI updates, which makes the game feel considerably better.

The UI response time in the OMT version of PROXX.

This is a conscious trade-off: we give users of constrained devices an experience that feels better without penalizing users of high-end devices.

Implications of an OMT architecture

As the PROXX example shows, OMT makes your application run reliably on a wider range of devices, but it does not make your application faster:

  • You are simply moving work off the main thread, not reducing it.
  • The additional communication overhead between the web worker and the main thread can sometimes slow things down a bit.

Considering the tradeoffs

Since the main thread is free to process user interactions, such as scrolling, while the JavaScript runs, there are fewer dropped frames, although the total wait time may be slightly longer. Making the user wait a bit is preferable to dropping a frame because the margin of error is smaller for dropped frames: dropping a frame happens in milliseconds, while you have hundreds of milliseconds before a user even perceives a wait.

Because of the unpredictability of performance across devices, the goal of an OMT architecture is really about reducing risk and making your application more robust in the face of highly variable runtime conditions, not about the performance benefits of parallelization. The increased resilience and the UX improvements are worth the small trade-off in speed.

Developers are sometimes concerned about the cost of copying complex objects between the main thread and web workers. There are more details in the talk, but in general, you shouldn't break your performance budget if the JSON string representation of your object is under 10 KB. If you need to copy larger objects, consider using an ArrayBuffer or WebAssembly. You can read more about this issue in this blog post on postMessage performance.
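As a brief sketch of the ArrayBuffer option, a buffer can be transferred rather than copied by listing it in postMessage's transfer list; after the call, the buffer is detached and no longer usable on the sending side:

main.js

const worker = new Worker("./worker.js");
const buffer = new Float64Array(1_000_000).buffer; // roughly 8 MB of binary data

worker.postMessage(buffer, [buffer]); // the second argument is the transfer list
console.log(buffer.byteLength); // 0: the buffer was transferred, not copied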

A note on tooling

Web workers are not mainstream yet, so most module tools, such as webpack and Rollup, don't support them out of the box. (Parcel does, though!) Fortunately, there are plugins to make web workers, well, work with webpack and Rollup.

Wrapping up

To make sure our applications are as reliable and accessible as possible, especially in an increasingly global market, we need to support constrained devices; they are how most users access the web globally. OMT offers a promising way to improve performance on such devices without negatively affecting users of high-end devices.

In addition, OMT has secondary benefits.

Web workers don't have to be scary. Tools like Comlink take the work out of workers and make them a viable option for a wide range of web applications.