Example: Simple JavaScript Audio Worklet

Specifications:

This first example uses a simple audio worklet processor written in pure JavaScript, built on the Web Audio API. You will see how to write your own processor, how to host it, and how to play music with it.

Prerequisites:

We only need standard JavaScript; no external libraries are required. The only prerequisite is a basic understanding of the official Web Audio API documentation.

Create an index.html:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Simple JavaScript Audio Processor</title>
</head>
<body>

<button id="btn-start" class="ui button">Start</button>

<script src="index.js"></script>
</body>
</html>

A simple index.html file, nothing special about it. You can add whatever you want to dress up your host page.

Create an Operable Audio Buffer:

// operable-audio-buffer.js
class OperableAudioBuffer extends AudioBuffer {
    toArray(shared = false) {
        const supportSAB = typeof SharedArrayBuffer !== "undefined";
        const channelData = [];
        const {numberOfChannels, length} = this;
        for (let i = 0; i < numberOfChannels; i++) {
            if (shared && supportSAB) {
                // Copy the channel into shared memory so the audio thread
                // can read it without cloning.
                channelData[i] = new Float32Array(new SharedArrayBuffer(length * Float32Array.BYTES_PER_ELEMENT));
                channelData[i].set(this.getChannelData(i));
            } else {
                channelData[i] = this.getChannelData(i);
            }
        }
        return channelData;
    }
}

Because the Web Audio API processes sound as Float32Array channel data, we need a custom audio buffer class. We use it to extract the channels of a decoded AudioBuffer so they can be sent to the processor in the audio thread via postMessage. You can read the documentation of the AudioWorkletNode port for details.
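One caveat about the shared option above: SharedArrayBuffer is only exposed on cross-origin-isolated pages, so the shared path is skipped unless the page is served with the COOP/COEP headers. As a hypothetical sketch (not part of the original example, and assuming Express is available), a dev server enabling it could look like this:

// server.js — hypothetical sketch, not part of the original example.
const express = require("express");
const app = express();

app.use((req, res, next) => {
    // These two headers make the page cross-origin isolated,
    // which is what exposes SharedArrayBuffer in the browser.
    res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
    res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
    next();
});

app.use(express.static("."));
app.listen(8080);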

Main script in index.js:

// index.js
const audioUrl = "<url-to-a-song>";

// Initialize the Audio Context
const audioCtx = new AudioContext();
const btnStart = document.getElementById("btn-start");

(async () => {
    // The code of your host goes here, inside a self-invoking asynchronous function.
})();

The API supports loading audio file data in multiple formats, such as WAV, MP3, AAC, OGG and others. Browser support for different audio formats varies.
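If you want to handle an unsupported format gracefully, decodeAudioData rejects with an error you can catch. A small defensive sketch (the loadAudio helper is hypothetical, not part of the original code):

// Sketch: loading and decoding with explicit error handling.
// decodeAudioData rejects when the format is unsupported or the data is corrupt.
async function loadAudio(url, ctx) {
    const response = await fetch(url);
    const arrayBuffer = await response.arrayBuffer();
    try {
        return await ctx.decodeAudioData(arrayBuffer);
    } catch (err) {
        console.error("Could not decode this audio format:", err);
        return null;
    }
}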

Initialize the audio and the page:

// self-invoking function in index.js

// Register our custom JavaScript processor in the current audio worklet.
await audioCtx.audioWorklet.addModule("./audio-player-processor.js");

const response = await fetch(audioUrl);
const audioArrayBuffer = await response.arrayBuffer();
const audioBuffer = await audioCtx.decodeAudioData(audioArrayBuffer);

// Transform the decoded audio buffer into our custom audio buffer, so we can
// add logic to it (needed to manipulate the audio, for example for editing).
const operableAudioBuffer = Object.setPrototypeOf(audioBuffer, OperableAudioBuffer.prototype);
const node = new AudioPlayerNode(audioCtx, 2);

// Connect the host logic of the page.
btnStart.onclick = () => {
    if (audioCtx.state === "suspended") audioCtx.resume();
    const playing = node.parameters.get("playing").value;
    if (playing === 1) {
        node.parameters.get("playing").value = 0;
        btnStart.textContent = "Start";
    } else {
        node.parameters.get("playing").value = 1;
        btnStart.textContent = "Stop";
    }
};

We first need to register the custom processor with the Web Audio API by using the addModule method. We then use our custom operable audio buffer to expose the decoded audio as Float32Arrays and send them to the processor.

Connecting the audio nodes:

// self-invoking function in index.js

// Send the audio to the processor and connect the node to the output destination.
// The processor reads e.data.audio, so the channel data is wrapped in an object.
node.port.postMessage({audio: operableAudioBuffer.toArray()});

node.connect(audioCtx.destination);
node.parameters.get("playing").value = 0;
node.parameters.get("loop").value = 1;

In this example we use our custom audio node, AudioPlayerNode, but creating a custom audio node is not mandatory: a plain AudioWorkletNode would work as well.
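Note that AudioPlayerNode is not defined by the Web Audio API, and its definition is not shown in this excerpt. Here is a minimal sketch of what such a wrapper could look like; the exact constructor options are assumptions based on how the node is used above:

// audio-player-node.js — minimal sketch of the custom node used by the host.
// The options below are assumptions based on how the node is used in index.js.
class AudioPlayerNode extends AudioWorkletNode {
    constructor(context, channelCount) {
        super(context, "audio-player-processor", { // must match the registered name
            numberOfInputs: 0,                     // the node only produces audio
            numberOfOutputs: 1,
            outputChannelCount: [channelCount]     // e.g. 2 for stereo
        });
    }
}

Wrapping the node in a small class like this keeps the processor name and the channel configuration in one place.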

Audio processor class:

// audio-player-processor.js

class AudioPlayerProcessor extends AudioWorkletProcessor {

    static get parameterDescriptors() {
        return [{
            name: "playing",
            minValue: 0,
            maxValue: 1,
            defaultValue: 0
        }, {
            name: "loop",
            minValue: 0,
            maxValue: 1,
            defaultValue: 0
        }];
    }

    constructor(options) {
        super(options);

        this.audio = null;
        this.playhead = 0;

        // Receive the Float32Array channel data sent from the main thread.
        this.port.onmessage = (e) => {
            if (e.data.audio) {
                this.audio = e.data.audio;
            }
        };
    }

    process(inputs, outputs, parameters) {
        if (!this.audio) return true;

        // The render quantum size and the total length of the audio in samples.
        const bufferSize = outputs[0][0].length;
        const audioLength = this.audio[0].length;

        // Only the first output is used, because the audio comes from our own
        // buffer source (see OperableAudioBuffer).
        const output = outputs[0];

        for (let i = 0; i < bufferSize; i++) {
            // A parameter array holds either one value per sample (a-rate)
            // or a single value for the whole quantum (k-rate).
            const playing = !!(i < parameters.playing.length ? parameters.playing[i] : parameters.playing[0]);
            const loop = !!(i < parameters.loop.length ? parameters.loop[i] : parameters.loop[0]);
            if (!playing) continue; // Not playing: leave the output silent.
            if (this.playhead >= audioLength) { // End of the audio reached.
                if (loop) this.playhead = 0; // Restart from the beginning...
                else continue; // ...or keep silence.
            }
            const channelCount = Math.min(this.audio.length, output.length);
            for (let channel = 0; channel < channelCount; channel++) {
                output[channel][i] = this.audio[channel][this.playhead];
            }
            this.playhead++;
        }
        return true;
    }
}

The processor contains the audio processing code that reads the audio buffer to play the song. Note the loop handling: when the playhead reaches the end of the audio, it either wraps back to the start or stays silent, depending on the loop parameter set by the host. To better understand the use and the idea behind processors, read the documentation of the API; I also suggest reading this article by Paul Adenot.

Registering the processor:

// audio-player-processor.js

const {registerProcessor} = globalThis;

try {
    registerProcessor("audio-player-processor", AudioPlayerProcessor);
} catch (error) {
    console.warn(error);
}


To use the processor, we need to register it in the AudioWorkletGlobalScope by using the registerProcessor method. Read more about it here.
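The name passed to registerProcessor is the same string the main thread uses when constructing the node, so the two must match. For example, without the custom wrapper class:

// In index.js (main thread): the name must match the registered processor.
const node = new AudioWorkletNode(audioCtx, "audio-player-processor");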

Conclusion:

We're all set to use our first custom processor and play a song with it. You can try it yourself by following this example; further details are available in the GitHub repository.

If you are comfortable with this example, make sure to check out the next one, where we will use a WebAssembly processor to process the audio data.

Special thanks to Shihong Ren, who wrote this example, and Michel Buffa, who supervised the project.