Blunt procedural audio library
Blunt is a procedural and generative audio library, written in C# for Unity3D. It was written specifically for the game Surfing the Wave. Blunt focuses on high-performance synthesis and a modular audio graph that, together with an algorithmic sequencing system, makes it possible to create everything from synced effects to whole generative music compositions.
Much more to come in this article.
- Mixer & effects
- Simple Example
- Making a simple polyphonic synthesizer sound
- Parameter modulations
- Creating a basic sequencing system
- Generative Music
- Development Status
- Fast envelopes with linear or exponential behaviour
- N-operator/modulator FM/AM/additive/subtractive synthesizer
- Additive resynthesizing synthesizer
- Piano component allowing you to play the synthesizer from the keyboard
- Visual mixer with measurements
- White-noise generators
- SVF resonant 12 dB bandpass, highpass and lowpass filters
- Reverb, phaser, limiter and valve overdrive effects (last 3 also as separate components)
- Linear and smooth-step parameter thread-safe modulations
- Resonators and oscillators
- Generic synth, voice and voice buffer interfaces (allowing simple voice design, stealing and allocation)
- Central place for tempo and pitch calculations, allowing global changes
- Optional SIMD codepaths for synthesizers, filters and envelopes
- General model for thread-safe programming
- Sequencing system with beat and distance calculations
- Call-back sequencing system
- Probability-based rhythmic pattern-bank sequencing
- Instrument, melody and rhythm collections combined into pieces/compositions that can be started/stopped/paused
- FFTs and some image signal processing algorithms
I'll begin a technical reference now that explains how the audio system works by example. Any references to Blunt code objects live inside the master Blunt namespace.
Mixer & effects
The basic unit of the system is the Sound.Mixer. The Sound.Mixer interconnects the Unity audio system and the Blunt system. The reason for the difference (and the need for it) is firstly that Unity 4.0 doesn't actually have a mixer, and secondly that any object interacting with Unity's audio needs to inherit MonoBehaviour in order to override OnAudioFilterRead. This prevents any further inheritance in your object system, since C# doesn't support multiple inheritance. This is an awkward limitation, which is also why most of Blunt's functionality is achieved through interfaces.
The Sound.Mixer's main job is dispatching sound to Blunt elements. You can have multiple mixers, but you need at least one. To get started, create an empty Game Object in the Unity Editor. In the GameObject, add an Audio Source - this is a strict requirement for getting sound. After this, add a Blunt.Mixer component. Then, add a new script called AudioSystem; this is where you'll add your audio scripting code. Lastly, add a Limiter Effect at the end of the chain. You'll usually want a limiter, to avoid blowing up your speakers when working with unprotected sound (resonating filters and the like). Your hierarchy should now look like this in the Unity Editor:
Your basic code in a top-level audio script (AudioSystem.cs) should start out like this:
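As a minimal sketch, a starting point could look like the following. The component names come from the setup above; fetching the mixer via GetComponent assumes the Blunt.Mixer sits on the same GameObject, which the setup implies.

```csharp
using UnityEngine;
using Blunt;

// Top-level audio script, attached to the same GameObject as the
// Audio Source, the Blunt.Mixer component and the Limiter Effect.
public class AudioSystem : MonoBehaviour
{
    private Sound.Mixer mixer;

    void Awake()
    {
        // Fetch the Blunt mixer component sitting on this GameObject.
        mixer = GetComponent<Sound.Mixer>();
    }
}
```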
Next, you'll have to indicate that the sound system is fully loaded (this should be done after all components have been initialized) using mixer.enableSoundSystem();. Every audio element in Blunt (synthesizers, effects, sequencers, listeners) implements the Sound.SoundStage interface (which is just a wrapper around a process() call). The Sound.Mixer stores Sound.Mixer.MixerChannels, each of which is a container of Sound.SoundStages. A Sound.Mixer.MixerChannel processes its Sound.SoundStages serially, that is, the first Sound.SoundStage feeds into the next. Sound.Mixer.MixerChannels, on the other hand, are processed in parallel and are associative entries of the Sound.Mixer; their outputs are summed into the output of the mixer.
Note that Sound.Mixer.MixerChannel itself implements Sound.SoundStage, so you can nest mixer channels inside mixer channels. The Sound.Mixer also allows listeners on channels.
Simple Example
Now, we'll create a simple object making some sound. For this purpose we will just generate some white-noise. If you're not confident with DSP code, don't worry, you won't actually be making stuff like this - it's just to create some example sound.
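A white-noise generator might be sketched like this. The exact process() signature and the interleaved stereo float buffer layout are assumptions; the text only states that SoundStage wraps a process() call and that channels are interleaved.

```csharp
using Blunt;

// A simple white-noise source implementing the Sound.SoundStage interface.
public class NoiseGenerator : Sound.SoundStage
{
    private System.Random rng = new System.Random();

    // Fills 'buffer', which holds 'samples' frames of 'channels'
    // interleaved channels (assumed signature).
    public void process(float[] buffer, int samples, int channels)
    {
        for (int frame = 0; frame < samples; ++frame)
        {
            for (int c = 0; c < channels; ++c)
            {
                // Uniform white noise in [-0.25, 0.25], kept low to be
                // gentle on the ears and the limiter.
                buffer[frame * channels + c] =
                    ((float)rng.NextDouble() * 2f - 1f) * 0.25f;
            }
        }
    }
}
```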
Notice that the channels are interleaved in the audio buffer. Now we'll add the audio object to our audio system object. In summary, your audio script should look something like this; steps not explained above have comments next to them.
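Putting it together, the full script might look like the sketch below. The channel-creation and add calls are assumptions based on the mixer/channel model described above; only enableSoundSystem() is named in the text.

```csharp
using UnityEngine;
using Blunt;

public class AudioSystem : MonoBehaviour
{
    private Sound.Mixer mixer;
    // Our Sound.SoundStage noise source from before.
    private NoiseGenerator noise = new NoiseGenerator();

    void Awake()
    {
        // The Blunt.Mixer component on this GameObject.
        mixer = GetComponent<Sound.Mixer>();
        // Create a named channel and append our noise stage to it
        // (assumed API; channels are associative entries of the mixer).
        mixer.createChannel("noise").add(noise);
        // Signal that all components are initialized.
        mixer.enableSoundSystem();
    }
}
```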
You can now proceed to start the project inside the Unity Editor. If everything goes well, it should look something like this:
As the project plays, the mixer and limiter will display diagnostics, like level meters, status, CPU usage etc. Notice that the output of the Mixer is subject to Unity's Audio Source transformation and modulation. This means the sound is spatialized in 3D space according to its position relative to the listener (usually attached to the Main Camera or your Player). If you want to create non-spatialized sound, you can either disable spatialization in the Audio Source, or translate the position of your AudioSystem game object to the Camera - or simply make it a child of the Camera.
Simple polyphonic synthesizer sound
In this example, we'll design a polyphonic sound for the Synthesis.FMSynth synthesizer. All classes from now on live inside the Blunt.Synthesis namespace. We'll start out with the audio system object again. Besides the FMSynth, we are going to need a VoiceBuffer<Voice>. Substituting the Synthesis.FMSynth in (Synthesis.FMSynth of course implements the Sound.SoundStage interface), our code should look like this:
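A sketch of the substituted script, swapping the noise generator for the FMSynth; the channel calls are assumptions carried over from the mixer example, and the VoiceBuffer is parameterized with the synth's own voice type as the text suggests.

```csharp
using UnityEngine;
using Blunt;
using Blunt.Synthesis;

public class AudioSystem : MonoBehaviour
{
    private Sound.Mixer mixer;
    private FMSynth synth = new FMSynth();
    private VoiceBuffer<FMSynth.Voice> voiceBuffer;

    void Awake()
    {
        mixer = GetComponent<Sound.Mixer>();
        // The FMSynth is a Sound.SoundStage, so it slots into a channel
        // exactly like the noise generator did (channel API assumed).
        mixer.createChannel("synth").add(synth);
        mixer.enableSoundSystem();
    }
}
```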
There is a base class called Voice and a generic interface called Synthesizer. The Voice base allows you to create, play, stop, pitch and release sounds, control volume and so on; in general, basic MIDI-style operations. The Synthesizer interface adds support for adding and removing voices, along with playhead/position info. In general, synthesizers are designed such that they render all the properties in the Voice. Each Synthesizer instead defines a more complex voice as a class member. The Voice, then, carries all the state information, which allows the Synthesizer to render any supported voice and, more importantly, any number of them.
In practice this means that you don't do sound design on a specific synthesizer; you design a group of voices that (may) sound exactly alike. This allows you to have only one synth for all your sound designs. This is where VoiceBuffer<T> comes into play. It does resource management for you, and allows you to perform operations on all voices at once. The voice buffer, then, is able to decay to a generic polyphonic playable device / sound source linked to the actual synthesizer. This encapsulation allows generic sequencers to play a sound source, with automatic voice-stealing.
The interface we need to use is VoiceBuffer<T>.initialize. It allocates N (five in the following example) voices and runs a lambda on each of them that configures the voice (and thus the sound). In the end, this code should be part of the Awake() function.
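A sketch of the allocation; the exact initialize signature is an assumption, the text only states that it allocates N voices and runs a configuring lambda on each.

```csharp
// Allocate five identical voices on the synth; the lambda configures
// each voice, and thereby the sound (signature assumed).
voiceBuffer = new VoiceBuffer<FMSynth.Voice>();
voiceBuffer.initialize(synth, 5, (FMSynth.Voice voice) =>
{
    // Sound design for each voice goes here.
});
```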
Now, I don't (yet) document everything like this, so I'd advise you to use an editor with decent IntelliSense, since a lot of the documentation is written in doxygen-style comments (where the naming isn't descriptive enough). That aside, the FMSynth works with operators. An operator is basically a modulatable sound-creating/modifying device, like a filter, oscillator, value/parameter etc. (the full list can be seen in FMSynth.Voice.Operator.Kind). An operator has two destination targets/pipelines: the modulator pipeline and the sound pipeline. Modulators only have access to the modulator pipeline, while sound modifiers have access to both. At the end of both pipelines sits the operator's operation - whether its output is added accumulatively to the sum of the chain, or multiplied with the rest of the sum (along with a mix parameter). This system allows anything to audio-rate modulate any other arbitrary modulator - and by adding parameters, you can change almost anything on the fly. With that in mind, let's design a sound.
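A hedged sketch of what such a voice configuration could look like. Only FMSynth.Voice.Operator.Kind is named in the text; the createAndAddOperator call, the Kind members and the routing comments are illustrative guesses at the API, not documented calls.

```csharp
voiceBuffer.initialize(synth, 5, (FMSynth.Voice voice) =>
{
    // Carrier: an oscillator routed to the sound pipeline
    // (operator creation call and Kind members assumed).
    var carrier = voice.createAndAddOperator(FMSynth.Voice.Operator.Kind.Sine);

    // Modulator: another oscillator, restricted to the modulator
    // pipeline, frequency-modulating the carrier.
    var modulator = voice.createAndAddOperator(FMSynth.Voice.Operator.Kind.Sine);

    // A low-pass filter in the sound pipeline, swept by a slow LFO
    // in the modulator pipeline - the "LFO-filtered FM" sound.
    var filter = voice.createAndAddOperator(FMSynth.Voice.Operator.Kind.Lowpass);
    var lfo = voice.createAndAddOperator(FMSynth.Voice.Operator.Kind.Sine);

    // ... route lfo -> filter cutoff, modulator -> carrier pitch,
    // set ratios, indices and envelopes here ...
});
```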
This should sound like some LFO-filtered FM. It is quite a lot of code, but it demonstrates some of the possible concepts. You can check out some sound designs I did for Surfing the Wave in this file. Now we only need a mechanism to trigger the sounds - maybe a keypress?
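A minimal trigger in Update() might look like this; the play() call on the voice buffer is an assumption, based on the buffer decaying to a generic polyphonic playable device as described earlier.

```csharp
void Update()
{
    // Input polling must happen on the main thread, in Update().
    if (Input.GetKeyDown(KeyCode.A))
    {
        // Trigger the next free voice; note number and velocity
        // semantics are assumptions.
        voiceBuffer.play(60, 1.0f);
    }
}
```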
The synth should now generate the sound polyphonically when you press the 'a' key while the Unity project is running. If you want a more advanced 'piano player', check out this file. Note that the previous file has now been integrated into the examples project here.
Parameter modulations
In this example, we'll see how we can alter the sound of the synthesizer in real time using parameters. Parameters are basically modulators (though they don't need to be) that output linear interpolations from the current to the next value over some time interval. First, we'll design a sound where we add a control to modulate the FM index using a random device, added as a member of the class:
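The 'random device' can simply be a System.Random member, plus a handle to the parameter we are about to create; the parameter type name is an assumption.

```csharp
// Random device used to pick new FM-index values at runtime.
private System.Random random = new System.Random();
// Handle to the parameter controlling the FM index (type name assumed).
private FMSynth.Voice.Parameter fmIndex;
```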
Parameters are created through FMSynth.Voice.createAndAddParameter and function exactly like the other operators - that is, they generate a sound. Through modulations you can control filters, offsets and phases, and using the multiplicative operation you can control the amount of modulation based on external inputs. Without further ado, let's get to the sound:
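A sketch of the extended voice configuration; only createAndAddParameter is named in the text, so the routing step is left as a comment rather than an invented call.

```csharp
voiceBuffer.initialize(synth, 5, (FMSynth.Voice voice) =>
{
    // ... carrier/modulator operators as in the previous example ...

    // Create a parameter operator; like any other operator, it
    // generates a signal that can feed the modulator pipeline.
    fmIndex = voice.createAndAddParameter();

    // Route the parameter to the FM modulator's index here
    // (routing API assumed).
});
```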
Now we just need to alter the parameter somewhere - for example on the 's' key. Thus, we will update the Update function:
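The updated Update() could look like this; the setValue semantics (a thread-safe linear ramp from the current to the next value over a time interval) follow the parameter description above, but the exact signature is an assumption.

```csharp
void Update()
{
    if (Input.GetKeyDown(KeyCode.A))
        voiceBuffer.play(60, 1.0f); // trigger a voice, as before (assumed call)

    if (Input.GetKeyDown(KeyCode.S))
    {
        // Interpolate the FM index to a new random value over half a
        // second, producing an audible sweep (signature assumed).
        fmIndex.setValue((float)random.NextDouble() * 10.0f, 0.5f);
    }
}
```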
Pressing the 's' key (after playing a sound with 'a') should now noticeably alter the FM modulator's index, making a sweeping sound. Now that we've done all the work, I can safely reveal that all this functionality is already available through FMSynth.Voice.Kind.RandomOffset, created through FMSynth.Voice.createAndAddRandomOffset(). The RandomOffset creates a random offset (which, again, can be used as a sound or modulator source) each time the note is played - a slightly different use-case, though.
Creating a basic sequencing system
In this example, we'll look at sequencing sounds in Blunt. At the basic level, each Sound.Synthesizer renders a block of N samples of audio. The block size, N, can be controlled by setting the latency of each synthesizer, and this directly affects timings and precision for sequencing. As always, lower latency means lower performance. The interface Sound.SequencingSynthesizer<Synth> maintains a Synthesis.PlayHead and an interface for getting a callback on each render block. You will rarely use this interface yourself, as we will now investigate the GenerativeMusic.CallbackSequencer<Synth>.
The CallbackSequencer is basically a wrapper around the callback interface that allows sequenced/timed callbacks using CallbackSequencer<Synth>.Tokens. Tokens represent a unique callback relationship between the owner, callback, sequencer and synthesizer. Tokens are created through CallbackSequencer.createCallbackToken. A token has a basic function called Token.callbackIn(PlayHead.TimeOffset t), which does just that - calls you back after a certain time offset - and it is inside this callback that you will sequence sounds.
The PlayHead contains all information about the current timing, "song position" etc., and it is able to calculate distances, that is, time offsets, to quantizable points in time. From any arbitrary point in time, PlayHead.getDistanceToNextBar() or PlayHead.getDistanceToNextBeat() will calculate the distance to the next bar or beat, with subdivisions and a fractional offset as well. This is returned as a PlayHead.TimeOffset, which the CallbackSequencer happily eats. This system ensures that you can of course sequence completely unrelated to any timeline, but it also brings the facilities to sequence on-beat/on-division, emulating a timeline based on settings in the Synthesis.EnvironmentTuning.
With that in mind, let's use this system to sequence our synth dynamically, generating some algorithmic music. The CallbackSequencer lives inside the Blunt.Synthesis.GenerativeMusic namespace. We are also going to modulate the pitch of the voice, so from the start we'll create a harmonic index table that makes randomization a bit easier. The start of our script will now look like this:
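A sketch of the new script header; the sequencer and token field types follow the names in the text, while the particular scale table is illustrative.

```csharp
using UnityEngine;
using Blunt;
using Blunt.Synthesis;
using Blunt.Synthesis.GenerativeMusic;

public class AudioSystem : MonoBehaviour
{
    private Sound.Mixer mixer;
    private FMSynth synth = new FMSynth();
    private VoiceBuffer<FMSynth.Voice> voiceBuffer;
    private CallbackSequencer<FMSynth> sequencer;
    private CallbackSequencer<FMSynth>.Token token;
    private System.Random random = new System.Random();

    // Semitone offsets of a harmonic scale; picking random entries keeps
    // randomized pitches inside the scale (table contents illustrative).
    private static readonly int[] harmonicIndices = { 0, 2, 4, 7, 9, 12 };

    // ... Awake(), callback and the rest of the script follow ...
}
```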
Now, we need to do some more setup in the Awake() function. The sound design / initialization of the voice stays the same as in the previous example.
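The extra setup might look like this; the sequencer constructor, the createCallbackToken signature and the zero-offset value are assumptions, while the immediate callback matches the behaviour described below.

```csharp
void Awake()
{
    mixer = GetComponent<Sound.Mixer>();
    mixer.createChannel("synth").add(synth); // channel API assumed

    // ... voice buffer initialization / sound design, as before ...

    // Wire the sequencer to the synth and register our callback
    // (constructor and token-creation signatures assumed).
    sequencer = new CallbackSequencer<FMSynth>(synth);
    token = sequencer.createCallbackToken(onSequence);

    // Request an instant callback so we get sequenced at 0.0.0.0
    // when the 'timeline' starts (zero-offset value assumed).
    token.callbackIn(PlayHead.TimeOffset.zero);

    mixer.enableSoundSystem();
}
```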
As you can see, we create an instant callback, so that we get sequenced at 0.0.0.0 when the 'timeline' starts. The callback function is where the magic happens:
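A sketch of such a callback; the callback signature and the play() call are assumptions, while the re-arming via getDistanceToNextBeat() follows the PlayHead description above.

```csharp
// Called back by the sequencer; plays a note and re-arms itself
// (callback signature assumed).
void onSequence(FMSynth synth, PlayHead playHead)
{
    // Pick a random semitone offset from the harmonic table and use it
    // to pitch the voice (play/pitch semantics assumed).
    int offset = harmonicIndices[random.Next(harmonicIndices.Length)];
    voiceBuffer.play(60 + offset, 0.8f);

    // Re-arm: request a callback at the next beat, so the sequence stays
    // quantized to the timeline defined by the EnvironmentTuning.
    token.callbackIn(playHead.getDistanceToNextBeat());
}
```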
The script should now play sequenced and modulated music when you start the project, ever-changing. Just to make sure we're on the same page, it should sound something like this:
Development Status
While Blunt is not actively being worked on, it is scheduled for a rework - especially since it was developed for Unity 4. Unity 5 includes a lot of new built-in real-time audio utilities, including a full-blown mixer. A list of the relevant work can be found here. I believe Blunt should utilize these new features and perhaps focus more on synthesis and sequencing.
Blunt is licensed under GPL v3.