Signed distance field based sound modelling
My bachelor's thesis explores the application of signed distance fields to audio synthesis. The main idea is to exploit two properties of signed distance fields: they have no inherent resolution, and they are (usually) continuous. Feldversuch is an experimental synthesizer that uses these properties to extend the wavetable class of synthesizers by letting the user model the sound domain with simple modelling tools.
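A minimal sketch of the core idea (hypothetical names, not Feldversuch's actual code): because an SDF is just a continuous function, it can be sampled at any audio rate without interpolation artifacts. Here a circle SDF is probed along a moving orbit to produce a waveform:

```rust
/// Signed distance to a circle of radius `r` centered at the origin.
/// Negative inside, positive outside, zero on the surface.
fn circle_sdf(x: f32, y: f32, r: f32) -> f32 {
    (x * x + y * y).sqrt() - r
}

/// Sample the field along an elliptical orbit at `freq` Hz to get one
/// second of audio. The field is continuous, so any sample rate works
/// without introducing quantization artifacts.
fn render_wave(sample_rate: u32, freq: f32) -> Vec<f32> {
    (0..sample_rate)
        .map(|i| {
            let t = i as f32 / sample_rate as f32;
            let phase = 2.0 * std::f32::consts::PI * freq * t;
            // The signed distance oscillates as the probe moves
            // around a circle of radius 0.5.
            circle_sdf(phase.cos(), 0.5 * phase.sin(), 0.5)
        })
        .collect()
}

fn main() {
    let wave = render_wave(48_000, 220.0);
    println!("{} samples, first: {}", wave.len(), wave[0]);
}
```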
The paper Creative Sound Modeling with Signed Distance Fields¹ describes how such a system can be used to extend the interaction between the user and a synthesizer.
While working on Nako and using it in Feldversuch to render more complex signed distance fields, performance became an issue. Usually the pure mathematical function is baked into some kind of acceleration structure (typically voxels). This, however, introduces two limitations:
- Voxels have a fixed resolution.
- Such a structure is (usually) bounded in space.
In my case, at least the first limitation was not acceptable, since it would introduce artifacts into the generated sound.
Nako already uses a custom SDF byte code that is interpreted on the GPU. The interpreter loads each word from memory, which makes the evaluation too slow for big functions. Algae instead injects SPIR-V code into already compiled shaders. Shader execution becomes more uniform, and only variables have to be loaded from memory. Have a look at the blog post for a technical discussion.
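To illustrate why interpretation is slow, here is a CPU-side sketch of such a stack-machine interpreter. The word format is hypothetical, not Nako's actual encoding; the point is that every opcode and operand is a word fetched from memory inside the hot evaluation loop, so a deep SDF composition pays one load per word:

```rust
/// Hypothetical SDF byte-code words. This is NOT Nako's real format,
/// only a sketch of the interpreter pattern it uses on the GPU.
#[derive(Clone, Copy, Debug)]
enum Word {
    PushX,          // push the query coordinate x
    PushY,          // push the query coordinate y
    PushConst(f32), // push a literal
    Length2,        // pop y, x; push sqrt(x*x + y*y)
    Sub,            // pop b, a; push a - b
    Min,            // pop b, a; push min(a, b)  (SDF union)
}

/// Interpret the program for one query point. Each iteration fetches
/// the next word from memory -- the per-word overhead that makes big
/// functions slow when interpreted per-thread on the GPU.
fn eval(program: &[Word], x: f32, y: f32) -> f32 {
    let mut stack: Vec<f32> = Vec::new();
    for word in program {
        match *word {
            Word::PushX => stack.push(x),
            Word::PushY => stack.push(y),
            Word::PushConst(c) => stack.push(c),
            Word::Length2 => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push((a * a + b * b).sqrt());
            }
            Word::Sub => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a - b);
            }
            Word::Min => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a.min(b));
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // "circle of radius 1.0": length(x, y) - 1.0
    let circle = [
        Word::PushX,
        Word::PushY,
        Word::Length2,
        Word::PushConst(1.0),
        Word::Sub,
    ];
    println!("d(2, 0) = {}", eval(&circle, 2.0, 0.0)); // prints 1 (outside)
}
```

Injecting the composed function directly as SPIR-V removes this dispatch loop entirely: the opcodes become straight-line shader code, and only the variables remain as memory loads.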
Signed distance field renderer for 3D and 2D objects using GPU- and CPU-interpretable byte code.
High-resolution sparse-octree voxel renderer. Explores voxel cone tracing and unbiased GI. The renderer has no concept of light objects; instead, voxels have an emission property that lets any voxel emit light.
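A sketch of that design choice (hypothetical field and function names, not the renderer's actual code): every voxel carries an emission term, so the gathering pass treats emissive voxels like any other surface instead of iterating over a separate light list.

```rust
/// Hypothetical voxel payload: there are no dedicated light objects;
/// any voxel with a non-zero `emission` acts as a light source.
#[derive(Clone, Copy, Default, Debug)]
struct Voxel {
    albedo: [f32; 3],
    /// Radiance emitted by the voxel itself (RGB).
    emission: [f32; 3],
}

/// Outgoing radiance of a voxel: its own emission plus whatever
/// incoming light it reflects. In the real renderer the incoming
/// term would come from voxel cone tracing; here it is a parameter.
fn radiance(v: &Voxel, incoming: [f32; 3]) -> [f32; 3] {
    [
        v.emission[0] + v.albedo[0] * incoming[0],
        v.emission[1] + v.albedo[1] * incoming[1],
        v.emission[2] + v.albedo[2] * incoming[2],
    ]
}

fn main() {
    let lamp = Voxel { albedo: [0.0; 3], emission: [5.0, 4.0, 3.0] };
    // An emissive voxel contributes light even with no incoming radiance.
    println!("{:?}", radiance(&lamp, [0.0; 3]));
}
```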
My first more complex renderer that used Vulkano. It went through multiple iterations; the final one uses PBR shading, a physical camera model, and compute-shader-based ray tracing for shadows (the KHR ray-tracing extension was not yet available at the time). It no longer compiles, but there are videos on my YouTube channel.
I created two Vulkan-related helper crates, Marp and MarpII. The first one tried to wrap Vulkan; the second is built as a composable ash helper crate. It comes with an experimental frame-graph implementation and bindless helpers.
Dagn is a node-based synthesizer framework. It uses the node graph both for scheduling and for the visual representation of audio nodes. Contrary to most node-based synthesizers, it uses a typed interface; correct types are enforced at runtime. It is therefore possible to send and receive higher-level information like JSON strings or Rust structs.
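A sketch of how such a runtime-typed port could look (hypothetical names, not Dagn's actual API): each edge carries a tagged payload, and a node rejects values whose tag does not match its declared input type.

```rust
/// Hypothetical message type for a typed node graph: besides raw
/// audio buffers, edges can carry higher-level values like JSON
/// strings or control-rate scalars.
#[derive(Debug, Clone, PartialEq)]
enum Payload {
    Audio(Vec<f32>),
    Json(String),
    Control(f32),
}

/// A gain node that declares its input type: it accepts only
/// `Payload::Audio` and reports a type error at runtime otherwise.
fn gain_node(input: Payload, gain: f32) -> Result<Payload, String> {
    match input {
        Payload::Audio(buf) => Ok(Payload::Audio(
            buf.into_iter().map(|s| s * gain).collect(),
        )),
        other => Err(format!("type error: expected Audio, got {:?}", other)),
    }
}

fn main() {
    // Matching type: the buffer passes through the node.
    let ok = gain_node(Payload::Audio(vec![0.5, -0.5]), 2.0);
    println!("{:?}", ok);

    // Mismatched type: the graph reports an error instead of
    // silently reinterpreting the bytes.
    let err = gain_node(Payload::Json("{\"note\": 60}".into()), 2.0);
    println!("{:?}", err);
}
```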
Sometimes I explore embedded programming. Currently I am building a MIDI wind controller: a little like a saxophone, but outputting a MIDI signal that can be hooked up to a software or hardware synthesizer.
I wrote two driver crates for that task:
1. Mende, T., Engeln, L., McGinity, M. & Groh, R. (2022). Creative Sound Modeling with Signed Distance Fields. In: Marky, K., Grünefeld, U. & Kosch, T. (Eds.), Mensch und Computer 2022 - Workshopband. Bonn: Gesellschaft für Informatik e.V. DOI: 10.18420/muc2022-mci-ws03-339