As a music and audio production enthusiast for the past 10 years, I have been watching technology's growing impact on music. It is fascinating how technology has made audio production and processing more accessible. Not long ago, everything had to be done in a fully analog recording studio equipped with pricey, rare equipment.
With the introduction of DAWs, most of the steps can be done anywhere artists have access to a computer and a pair of headphones. You could still go fancy and spend as much as a studio costs, but the idea is to keep the focus of music production on the creative part by making it more accessible and affordable.
Moving from analog to digital not only made things easier for artists and producers, it also changed the presence of audio in browsers. It used to be as simple as using an HTML <audio> tag, which had many limitations, but with tools like the Web Audio API, audio processing in the browser took a huge step forward.
Instead of only playing audio in the browser, the Web Audio API lets us introduce different audio sources or inputs, process the input signal, and choose the destination or output. On top of that, it provides visualization, which is another interesting world by itself; I will take you there too, but in future blogs.
If you are not bored by such a long intro, let’s get to know the Web Audio API a little better and learn some of the basic concepts.
The Web Audio API handles audio operations inside an audio context, and it has been designed around modular routing: basic audio operations are performed with audio nodes, which are linked together to form an audio routing graph. This modular design gives us the flexibility to create complex audio signal flows.
// Create web audio api context
const audioContext = new (window.AudioContext || window.webkitAudioContext)();
Audio nodes are linked into chains by their inputs and outputs. Sources can be oscillators (OscillatorNode), or they can be audio files (MediaElementAudioSourceNode).
Let’s create an oscillator:
// Create Oscillator node
const oscillator = audioContext.createOscillator();
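Before starting it, the oscillator can be configured through its properties. Here is a minimal sketch; the `configureOscillator` helper is my own naming, and the sine waveform and 440 Hz frequency are just illustrative choices:

```javascript
// Create and configure an oscillator from a given AudioContext.
// The waveform and frequency values below are illustrative choices.
function configureOscillator(audioContext) {
  const oscillator = audioContext.createOscillator();
  oscillator.type = "sine";          // "sine", "square", "sawtooth" or "triangle"
  oscillator.frequency.value = 440;  // pitch in Hz (concert A)
  return oscillator;
}
```

In a browser you would then call `oscillator.start()` once the node is connected to the rest of the graph.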
Using audio files is a little different, since we have to grab the audio element from the page and feed it into our audioContext:
let audioContext = new (window.AudioContext || window.webkitAudioContext)();
let myAudio = document.querySelector("audio");
// Create a MediaElementAudioSourceNode
// Feed the HTMLMediaElement into it
let source = audioContext.createMediaElementSource(myAudio);
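To hear the file while it runs through the graph, the source still has to reach the destination. A minimal sketch; the `routeAudioElement` helper name is my own, not part of the API:

```javascript
// Route an HTML <audio> element through the Web Audio graph.
// Without the connect() call, the element is captured into the
// graph but its sound never reaches the speakers.
function routeAudioElement(audioContext, audioElement) {
  const source = audioContext.createMediaElementSource(audioElement);
  source.connect(audioContext.destination);
  return source;
}
```

Later you can insert processing nodes between the source and the destination instead of connecting them directly.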
Now that we have inputs, we can link them to other nodes. That could be as simple as connecting straight to an output, or routing the audio source through other nodes for further processing.
One common modification is multiplying the samples by a value to make them louder or softer, and for that we need to add a GainNode to the chain.
// Create a gain node
let gainNode = audioContext.createGain();
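The amount of gain is set through the node’s gain AudioParam. A minimal sketch; the `createVolumeControl` helper and the level values are illustrative:

```javascript
// Wrap volume control in a small helper.
// A gain of 0 is silence, 1 leaves the signal unchanged,
// and values in between attenuate it.
function createVolumeControl(audioContext, level) {
  const gainNode = audioContext.createGain();
  gainNode.gain.value = level;
  return gainNode;
}
```

For example, `createVolumeControl(audioContext, 0.5)` would halve the amplitude of whatever is routed through it.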
After processing the sound, it is time to send it to a destination node. This step is only needed if we want the user to hear the sound, since the destination’s output sends the sound to the speakers or headphones. In the code below I am connecting the oscillator to the gainNode I created earlier, and then the gainNode to the destination node:
// First connect the oscillator to the gainNode
oscillator.connect(gainNode);
// Now connect the gainNode to the destination node
gainNode.connect(audioContext.destination);
What we have done here is a simple, typical workflow for web audio, which we can describe as a five-step process:
1. Create an audio context
2. Inside the context, create sources such as an oscillator, <audio> element or stream
3. Create effect nodes, such as gain, reverb, etc.
4. Choose a final destination, like our system speakers
5. Link the modules up by connecting the sources to the effects, and the effects to the destination
Putting it all together, the complete chain follows those five steps.
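As a sketch of the whole workflow in one place, assuming an oscillator source (the `buildGraph` helper is my own naming):

```javascript
// Steps 2-5 of the workflow: create a source and an effect,
// then wire source -> effect -> destination.
function buildGraph(audioContext) {
  const oscillator = audioContext.createOscillator(); // 2. create a source
  const gainNode = audioContext.createGain();         // 3. create an effect
  oscillator.connect(gainNode);                       // 5. source -> effect
  gainNode.connect(audioContext.destination);         // 4/5. effect -> output
  return { oscillator, gainNode };
}

// In a browser:
// const audioContext = new (window.AudioContext || window.webkitAudioContext)();
// buildGraph(audioContext).oscillator.start();
```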
I have built the simplest possible process in this blog, but there is a lot more about the Web Audio API to discover, which I will cover in future blogs.
Give it a try and enjoy making noise on the web!