Combining OpenGL and Real-Time Audio Synth

Started by Charles Pegge, October 19, 2007, 11:38:44 AM


Charles Pegge

The scheme is a very simple one:

To accompany the OpenGL frames, continuous sound can be added by setting up a permanently looping waveAudio buffer and inserting slices of sound just ahead of the reader. When the sound is completed, silent slices overwrite the old sound so it is not repeated. The frame rate is around 32 FPS, so synthesising wave audio in chunks of 1/20 second gives plenty of margin for unexpected delays in the system.
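To make the timing concrete, here is the chunk sizing worked out in C (the original code was presumably BASIC; the constant names are mine). At ~32 FPS a frame lasts about 31 ms, so each 50 ms chunk leaves roughly 19 ms of slack per frame:

```c
#include <assert.h>

/* Format and chunking constants for the scheme described above. */
#define SAMPLE_RATE      44100  /* samples per second per channel */
#define CHANNELS         2      /* stereo                         */
#define BYTES_PER_SAMPLE 2      /* 16-bit PCM                     */
#define CHUNK_DIV        20     /* synthesise 1/20 s at a time    */

/* Bytes of audio produced per 1/20-second chunk:
   (44100 / 20) samples x 2 channels x 2 bytes = 8820 bytes. */
int chunk_bytes(void)
{
    return (SAMPLE_RATE / CHUNK_DIV) * CHANNELS * BYTES_PER_SAMPLE;
}
```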

To accommodate the most cavernous of reverberations I am using a buffer of about 882,000 bytes, which holds 5 seconds of 16-bit stereo sound at the standard 44,100 samples per second.

When a sound is triggered, the pitch, harmonic structure, envelope and modulation are set up; then the first 1/20-second chunk of sound is computed and inserted. As this is done just ahead of the moving read pointer, there is no significant latency. After that, the synth is suspended so that the next frame can be rendered, user input monitored, and so on. At the end of every subsequent frame another chunk of sound is synthesised and added to the buffer until the sound is completed.

The buffer pointers are calculated with a modulus equal to the buffer length. If the synth gets too far ahead and is at risk of overtaking the reader, synth operations can be delayed for a few frames.
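The pointer arithmetic can be sketched like this in C (the maximum-lead threshold is a parameter I have introduced; the original just delays "for a few frames"):

```c
#include <assert.h>

#define BUF_BYTES 882000L  /* 5 s of 16-bit stereo at 44.1 kHz */

/* Advance a buffer offset with wrap-around (modulus = buffer length). */
long ring_advance(long pos, long bytes)
{
    return (pos + bytes) % BUF_BYTES;
}

/* How far the writer is ahead of the reader, in bytes, accounting
   for wrap-around (result is in 0 .. BUF_BYTES-1). */
long ring_lead(long write_pos, long read_pos)
{
    return (write_pos - read_pos + BUF_BYTES) % BUF_BYTES;
}

/* Skip synthesis this frame if the writer risks lapping the reader. */
int should_delay_synth(long write_pos, long read_pos, long max_lead)
{
    return ring_lead(write_pos, read_pos) > max_lead;
}
```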

This scheme, as described so far, is monophonic. Multiple sounds require multiple synths, working more or less independently and mixing their outputs before insertion into the buffer. Ambient sounds and the rumbles, clangs and bangs of colliding objects are treated in the same way.
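The mixing step might look like this in C, assuming each voice has already synthesised its chunk into its own buffer (the hard clamp to 16-bit range is my addition; a real mixer might scale the voices down instead):

```c
#include <assert.h>

/* Sum per-voice chunks sample by sample into one output chunk,
   clamping the total to the 16-bit PCM range. */
void mix_chunks(const short *in[], int nvoices, short *out, int n)
{
    for (int i = 0; i < n; i++) {
        long sum = 0;
        for (int v = 0; v < nvoices; v++)
            sum += in[v][i];
        if (sum >  32767) sum =  32767;   /* clamp, rather than wrap */
        if (sum < -32768) sum = -32768;
        out[i] = (short)sum;
    }
}
```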