Brief Technical Overview

I thought I’d share some of the juicy technical details of Pithesiser as it stands right now.

The Pithesiser program is written in C on Linux, and is constructed as a number of discrete modules (each corresponding to a paired source and header file). These cover low-level audio hardware interfacing, waveform generation, graphics rendering, and the “main” module of the program.
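
For a sense of the shape, the modules might be laid out along these lines (the file names below are purely illustrative, not the actual source tree):

    alsa_audio.c / alsa_audio.h   – low-level ALSA output interfacing
    waveform.c   / waveform.h     – oscillator and waveform generation
    gfx.c        / gfx.h          – OpenVG graphics rendering
    midi.c       / midi.h         – MIDI device input handling
    main.c                        – program entry point and core loop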

When executing, Pithesiser runs as a small number of processing threads, each with different responsibilities.

Main thread – the Core Audio Loop

This is the beating heart of the Pithesiser – effectively the main loop of the program. It performs the actual synthesis for any notes that are playing, mixes them together and passes the result in small chunks to the audio output thread.
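
In outline, that loop looks something like the following minimal sketch (voice_t, generate_samples and audio_output_write are illustrative names, not the actual Pithesiser API):

    /* Minimal sketch of such a core loop. voice_t, generate_samples and
       audio_output_write are hypothetical names, not the real Pithesiser API. */
    #include <stdint.h>
    #include <string.h>

    #define CHUNK_SAMPLES 128

    typedef struct { int active; /* plus oscillator/envelope state */ } voice_t;

    extern void generate_samples(voice_t *voice, int32_t *mix, int count);
    extern void audio_output_write(const int16_t *chunk, int count);

    void core_audio_loop(voice_t *voices, int voice_count)
    {
        int32_t mix[CHUNK_SAMPLES];
        int16_t chunk[CHUNK_SAMPLES];

        for (;;)
        {
            memset(mix, 0, sizeof(mix));

            /* Synthesise each sounding note and accumulate it into the mix. */
            for (int v = 0; v < voice_count; v++)
                if (voices[v].active)
                    generate_samples(&voices[v], mix, CHUNK_SAMPLES);

            /* Clamp to 16-bit range and hand the chunk to the output thread. */
            for (int i = 0; i < CHUNK_SAMPLES; i++)
            {
                int32_t s = mix[i];
                chunk[i] = (int16_t)(s > 32767 ? 32767 : (s < -32768 ? -32768 : s));
            }
            audio_output_write(chunk, CHUNK_SAMPLES);
        }
    }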

The same thread also currently processes received MIDI input, using it to trigger notes and to update synthesiser state such as master volume, selected waveform and envelope parameters. It also sends events to the rendering system to update the display when state changes and when audio chunks are completed.
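
Draining the queued MIDI events within that loop might look roughly like this sketch, with the event structure, queue call and handlers all as illustrative names:

    /* Hypothetical sketch of draining the MIDI event queue once per loop pass.
       The event structure, queue function and handlers are illustrative names. */
    typedef struct { int type; int data[2]; } midi_event_t;
    enum { MIDI_NOTE_ON, MIDI_NOTE_OFF, MIDI_CONTROLLER };

    extern int midi_queue_pop(midi_event_t *event);   /* non-blocking read */
    extern void start_note(int note, int velocity);
    extern void stop_note(int note);
    extern void update_synth_state(int controller, int value);

    void process_midi_events(void)
    {
        midi_event_t event;
        while (midi_queue_pop(&event))        /* returns 0 when queue is empty */
        {
            switch (event.type)
            {
            case MIDI_NOTE_ON:
                start_note(event.data[0], event.data[1]);
                break;
            case MIDI_NOTE_OFF:
                stop_note(event.data[0]);
                break;
            case MIDI_CONTROLLER:             /* e.g. master volume, envelope */
                update_synth_state(event.data[0], event.data[1]);
                break;
            }
        }
    }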

I might consider splitting the core audio processing from the other work done in this thread; however, right now it’s small, simple and can be quickly and directly updated from that thread.

Audio Output Thread

This thread manages the audio output buffers: it provides access for the core audio loop to write mixed audio data, and sends completed buffers on to the audio hardware for playback via ALSA.
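
A minimal sketch of that hand-off, using the standard ALSA API but not the actual Pithesiser output code (which manages its own buffering on top):

    /* Minimal sketch of handing a completed buffer to ALSA. The calls are the
       standard ALSA API, but this is not the actual Pithesiser output code. */
    #include <alsa/asoundlib.h>
    #include <stdint.h>

    static snd_pcm_t *pcm;

    void audio_output_init(void)
    {
        snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 50000);   /* stereo, 44.1 kHz, 50 ms latency */
    }

    void audio_output_play(const int16_t *frames, int frame_count)
    {
        snd_pcm_sframes_t written = snd_pcm_writei(pcm, frames, frame_count);
        if (written == -EPIPE)                /* underrun: recover and retry */
        {
            snd_pcm_prepare(pcm);
            snd_pcm_writei(pcm, frames, frame_count);
        }
    }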

MIDI Processing Thread

This thread wakes up on the arrival of data from any MIDI device, packages that data up into discrete events and posts them onto the MIDI event queue for the main thread to read. It also maintains a map of the current controller values and whether they have changed since last polled.
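
A rough sketch of that wake-on-data structure, assuming each open MIDI device is exposed as a file descriptor (parse_midi_bytes, which would build events and post them to the queue, is an illustrative name):

    /* Rough sketch of the MIDI thread: block on poll(2) across the open MIDI
       device descriptors, then parse the bytes into discrete events.
       parse_midi_bytes and MIDI_DEVICE_COUNT are illustrative. */
    #include <poll.h>
    #include <unistd.h>

    #define MIDI_DEVICE_COUNT 2               /* illustrative */

    extern void parse_midi_bytes(const unsigned char *bytes, ssize_t count);

    void *midi_thread(void *arg)
    {
        struct pollfd *fds = arg;             /* one entry per MIDI device */
        unsigned char buffer[64];

        for (;;)
        {
            poll(fds, MIDI_DEVICE_COUNT, -1); /* sleep until data arrives */
            for (int i = 0; i < MIDI_DEVICE_COUNT; i++)
                if (fds[i].revents & POLLIN)
                {
                    ssize_t count = read(fds[i].fd, buffer, sizeof(buffer));
                    if (count > 0)
                        parse_midi_bytes(buffer, count);
                }
        }
        return NULL;
    }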

Rendering Thread

This thread accepts graphics events from the main thread that are used to update the display (currently just the oscilloscope and a simple graph representing the volume envelope). It receives copies of the mixed audio output, which it uses both to render the oscilloscope display and to drive the timing of physical screen updates (executing OpenVG operations and swapping the display buffers).

As the audio output operates at a much higher frequency than the display (345 Hz vs 60 Hz), the oscilloscope holds on to the waveform data it receives until it has enough to match the equivalent of 1/60th of a second of audio output, then renders it all out. That means the core audio loop completes roughly 5.75 times for each frame rendered to the screen.
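
A minimal sketch of that accumulate-then-render idea, with render_frame standing in for the actual display update:

    /* Sketch of accumulating audio until one frame's worth has arrived:
       44100 / 60 = 735 samples. render_frame is an illustrative name. */
    #include <stdint.h>
    #include <string.h>

    #define SAMPLES_PER_FRAME (44100 / 60)    /* 735 */

    extern void render_frame(const int16_t *samples);

    static int16_t scope_buffer[SAMPLES_PER_FRAME];
    static int scope_fill;

    void on_audio_chunk(const int16_t *chunk, int count)
    {
        while (count > 0)
        {
            int space = SAMPLES_PER_FRAME - scope_fill;
            int copy = count < space ? count : space;
            memcpy(scope_buffer + scope_fill, chunk, copy * sizeof(int16_t));
            scope_fill += copy;
            chunk += copy;
            count -= copy;

            if (scope_fill == SAMPLES_PER_FRAME)   /* a full frame's audio */
            {
                render_frame(scope_buffer);        /* draw scope, swap buffers */
                scope_fill = 0;
            }
        }
    }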

Event Communication

Where threads communicate via events, this is implemented as ring buffers of predefined C structures, using plain arrays as storage for the events. Semaphores and mutexes manage synchronisation and notification between the communicating threads.
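
A minimal sketch of such a queue, assuming a fixed-size array, a mutex guarding the read/write indices and a semaphore for notification (overflow handling omitted for brevity; event_t is an illustrative type):

    /* Minimal sketch of a fixed-size event ring buffer with a mutex guarding
       the indices and a semaphore to wake the consumer. Overflow handling is
       omitted for brevity. */
    #include <pthread.h>
    #include <semaphore.h>

    #define QUEUE_SIZE 256

    typedef struct { int type; int data[2]; } event_t;

    typedef struct
    {
        event_t events[QUEUE_SIZE];
        int head, tail;
        pthread_mutex_t lock;
        sem_t available;                      /* counts queued events */
    } event_queue_t;

    void queue_push(event_queue_t *q, const event_t *e)
    {
        pthread_mutex_lock(&q->lock);
        q->events[q->tail] = *e;
        q->tail = (q->tail + 1) % QUEUE_SIZE;
        pthread_mutex_unlock(&q->lock);
        sem_post(&q->available);              /* notify the reading thread */
    }

    void queue_pop(event_queue_t *q, event_t *e)
    {
        sem_wait(&q->available);              /* block until an event arrives */
        pthread_mutex_lock(&q->lock);
        *e = q->events[q->head];
        q->head = (q->head + 1) % QUEUE_SIZE;
        pthread_mutex_unlock(&q->lock);
    }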

Some Choice Metrics

  • Sample rate used for synthesis, mixing and playback: 44.1 kHz.
  • Size of audio data chunks: 128 samples (equivalent to nearly 3 milliseconds).
  • Target render frame rate: 60 Hz.
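
These figures tie together: 128 / 44100 ≈ 2.9 ms per chunk, giving a chunk rate of 44100 / 128 ≈ 344.5 Hz, which divided by the 60 Hz frame rate is where the roughly 5.75 audio chunks per rendered frame comes from.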

(c) 2013 Nick Tuckett.
