Here, we will see how to manipulate audio with C++. Before starting, make sure you understand how WebAssembly works (see the WebAssembly documentation). In this example, we will use Emscripten to compile the C++ code to WebAssembly.
Download Emscripten from the official website and use its script to activate the development environment. Emscripten offers many compiler flags; here we use them to compile the C++ into a .wasm file, but you could also compile the C++ into a JavaScript module that you import in your code. To understand the Makefile, refer to the Emscripten compiler settings documentation.
# Makefile
DEPS = ./cpp/processor-perf.cpp
OUTPUT_WASM = ./ProcessWasm.wasm
CC = emcc
# _malloc and _free are exported as well, so the heap audio buffers
# created later can allocate memory inside the wasm heap.
FLAGS = --no-entry \
        -s WASM=1 \
        -s EXPORTED_FUNCTIONS=_processPerf,_malloc,_free \
        -s ALLOW_MEMORY_GROWTH=1

build: $(DEPS)
	@$(CC) $(FLAGS) -o $(OUTPUT_WASM) $(DEPS)
We give our C++ code to the compiler: running the Makefile produces a .wasm file that we will instantiate later in the code.
// processor-perf.cpp
#include <emscripten.h>
#include <cstdint>
#include <cstring>

extern "C" {

int processPerf(uintptr_t input_ptr, uintptr_t output_ptr, int channel_count) {
  const int kRenderQuantumFrames = 128;
  const size_t kBytesPerChannel = kRenderQuantumFrames * sizeof(float);
  float *input_buffer = reinterpret_cast<float *>(input_ptr);
  float *output_buffer = reinterpret_cast<float *>(output_ptr);
  // Copy each channel's 128-frame block from the input heap to the output heap.
  for (int channel = 0; channel < channel_count; ++channel) {
    float *destination = output_buffer + (channel * kRenderQuantumFrames);
    float *source = input_buffer + (channel * kRenderQuantumFrames);
    memcpy(destination, source, kBytesPerChannel);
  }
  return 1;
}

}
This function demonstrates how to use C++ code from JavaScript: it simply copies the content of the input buffer into the output buffer, channel by channel.
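To visualize the memory layout the C++ function assumes, here is a plain JavaScript sketch (not part of the article's code): each channel occupies a contiguous block of 128 float32 frames, and channel c starts at offset c * 128 floats.

```javascript
// Sketch of the planar layout used by processPerf. We emulate the
// wasm heap with a single ArrayBuffer: the input region first,
// the output region right after it.
const RENDER_QUANTUM_FRAMES = 128;
const channelCount = 2;
const floatsPerSide = channelCount * RENDER_QUANTUM_FRAMES;
const bytesPerSide = floatsPerSide * Float32Array.BYTES_PER_ELEMENT;
const heap = new ArrayBuffer(2 * bytesPerSide);

const input = new Float32Array(heap, 0, floatsPerSide);
const output = new Float32Array(heap, bytesPerSide, floatsPerSide);

input.fill(0.25, 0, RENDER_QUANTUM_FRAMES);                         // channel 0
input.fill(-0.5, RENDER_QUANTUM_FRAMES, 2 * RENDER_QUANTUM_FRAMES); // channel 1

// Same loop as the C++ memcpy: one 128-frame channel block at a time.
for (let channel = 0; channel < channelCount; ++channel) {
  const start = channel * RENDER_QUANTUM_FRAMES;
  output.set(input.subarray(start, start + RENDER_QUANTUM_FRAMES), start);
}
```

This is why the C++ side can index channels with simple pointer arithmetic: the buffer is planar, not interleaved.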
// index.js
let moduleWasm;

async function loadWasm() {
  // Await the compilation, otherwise moduleWasm may still be
  // undefined when the caller resumes.
  moduleWasm = await WebAssembly.compileStreaming(fetch("./ProcessWasm.wasm"));
}

(async () => {
  await loadWasm();
  const node = new AudioPlayerNode(audioCtx, 2, moduleWasm);
  /* Code of the host.
  ...
  */
})();
We call loadWasm at the very beginning of the initialization to load the WebAssembly module. Later, in our AudioNode, we pass the module to the processor through its options.
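If WebAssembly.compileStreaming is not available, or the .wasm bytes come from somewhere other than a fetch, WebAssembly.compile works on raw bytes and returns the same kind of Module. A minimal sketch, using the 8-byte header of an empty wasm module as a stand-in for the real ProcessWasm.wasm bytes:

```javascript
// These 8 bytes ("\0asm" + version 1) form a valid, empty wasm module.
// They stand in here for the bytes of ProcessWasm.wasm.
const emptyModuleBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // version: 1
]);

async function loadWasmFromBytes(bytes) {
  // WebAssembly.compile returns a Module, just like compileStreaming.
  return WebAssembly.compile(bytes);
}

const modulePromise = loadWasmFromBytes(emptyModuleBytes);
```

Either way, the result is a WebAssembly.Module, which is what we hand to the processor below.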
// audio-player-node.js
export default class AudioPlayerNode extends AudioWorkletNode {
  constructor(context, channelCount, moduleWasm) {
    super(context, "audio-player-processor", {
      numberOfInputs: 0,
      numberOfOutputs: 2,
      channelCount,
      processorOptions: {
        moduleWasm
      }
    });
  }

  setAudio(audio) {
    this.port.postMessage({ audio });
  }
}
With this custom audio node, we can pass additional parameters to the processor. Here we send the WebAssembly module that we previously fetched in the host. Everything is now ready to set up the processor.
// audio-player-processor.js
class AudioPlayerProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super(options);
    this.setupWasm(options);
  }

  setupWasm(options) {
    // Instantiating an already-compiled Module resolves directly to an Instance.
    WebAssembly.instantiate(options.processorOptions.moduleWasm)
      .then(instance => {
        this.instance = instance.exports;
        this._processPerf = this.instance.processPerf;
        this.loadBuffers();
      })
      .catch(err => console.error(err));
  }

  /* Code processor
  ...
  */
}
In the processor, we have access to the module through the processor options. When the processor is created, we must instantiate the module in order to call the C++ function. There are multiple ways to do this; here, we use WebAssembly.instantiate().
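To see concretely what WebAssembly.instantiate() gives us, here is a self-contained sketch, independent of the article's module: it compiles a tiny hand-encoded wasm module exporting an add function, instantiates it, and calls the export through instance.exports, just as the processor calls processPerf.

```javascript
// A hand-encoded wasm module exporting `add(a, b) -> a + b` on i32.
const addModuleBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // header
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                                     // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);

async function run() {
  // Step 1: compile bytes into a Module (the host does this once).
  const module = await WebAssembly.compile(addModuleBytes);
  // Step 2: instantiate the Module. Given a Module (not bytes),
  // WebAssembly.instantiate resolves directly to an Instance.
  const instance = await WebAssembly.instantiate(module);
  // Step 3: call an export, exactly as the processor calls processPerf.
  return instance.exports.add(2, 3);
}

const resultPromise = run();
```

Note that when WebAssembly.instantiate is given raw bytes instead of a Module, it resolves to a { module, instance } pair rather than an Instance, which is a common source of confusion.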
To share buffers between C++ and JavaScript, we will use a heap audio buffer; see WebAudio SharedArrayBuffer and the HeapAudioBufferInsideProcessor class written by Hongchan Choi.
First, we have to create the input and output heaps:
// audio-player-processor.js
const BYTES_PER_SAMPLE = Float32Array.BYTES_PER_ELEMENT;
// The max audio channel count on Chrome is 32.
const MAX_CHANNEL_COUNT = 32;
// WebAudio's render quantum size.
const RENDER_QUANTUM_FRAMES = 128;

async loadBuffers() {
  this._heapInputBuffer = new HeapAudioBufferInsideProcessor(
    this.instance,
    RENDER_QUANTUM_FRAMES,
    2,
    MAX_CHANNEL_COUNT
  );
  this._heapOutputBuffer = new HeapAudioBufferInsideProcessor(
    this.instance,
    RENDER_QUANTUM_FRAMES,
    2,
    MAX_CHANNEL_COUNT
  );
}
These two buffers let us share the audio data between JavaScript and the WebAssembly heap.
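The idea behind such a heap buffer can be sketched as follows (the class and the stand-in allocator below are hypothetical illustrations, not Choi's actual implementation): allocate a block in the wasm linear memory, remember its address, and expose each channel as a Float32Array view, so JavaScript reads and writes the exact bytes the C++ code sees.

```javascript
const RENDER_QUANTUM_FRAMES = 128;
const BYTES_PER_SAMPLE = Float32Array.BYTES_PER_ELEMENT;

// Stand-in for a real wasm instance: a linear memory plus a bump
// allocator playing the role of the module's exported _malloc.
const memory = new WebAssembly.Memory({ initial: 1 });
let bumpOffset = 0;
const fakeInstance = {
  memory,
  _malloc(bytes) { const ptr = bumpOffset; bumpOffset += bytes; return ptr; },
};

class SketchHeapAudioBuffer {
  constructor(instance, frames, channelCount) {
    this._byteLength = frames * channelCount * BYTES_PER_SAMPLE;
    this._ptr = instance._malloc(this._byteLength);
    // One Float32Array view per channel, all over the same wasm memory,
    // laid out planar: channel c starts at ptr + c * frames * 4 bytes.
    this._channels = [];
    for (let c = 0; c < channelCount; ++c) {
      this._channels.push(new Float32Array(
        instance.memory.buffer,
        this._ptr + c * frames * BYTES_PER_SAMPLE,
        frames));
    }
  }
  getHeapAddress() { return this._ptr; }      // pointer handed to the C++ side
  getChannelData(c) { return this._channels[c]; }
}

const heapBuf = new SketchHeapAudioBuffer(fakeInstance, RENDER_QUANTUM_FRAMES, 2);
heapBuf.getChannelData(0)[0] = 0.5; // a JS write, visible to C++ at the same address
```

One caveat worth knowing: with ALLOW_MEMORY_GROWTH enabled, a growing wasm memory detaches the old ArrayBuffer, so the Float32Array views must be recreated after growth; the real HeapAudioBuffer class accounts for this.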
Then, in the processor's process() method, we call the C++ code:
// audio-player-processor.js
let returnCode = this._processPerf(
  this._heapInputBuffer.getHeapAddress(),
  this._heapOutputBuffer.getHeapAddress(),
  channelCount
);
We have seen how to call and execute C++ code with WebAssembly.
If you are comfortable with this example, continue to the next one. In the next example, we will not use C++ to process the sound, in order to keep the focus on the new features; instead, we will show how to use the API following the Web Audio Modules standard, and we will instantiate audio plugins along the way.
Special thanks to Hongchan Choi, who wrote the SharedArrayBuffer class.