    • CommentAuthorthbb
    • CommentTimeMar 26th 2007 edited
     
    The algorithm used in HighC is quite similar to the SAS library, even though it was developed independently.

    A Piece is a collection of sounds. Each sound has a relative level, a waveform and an envelope. The sounds, envelopes and waveforms are all piecewise linear functions. The envelopes and waveforms are normalized to map the domain [0..1] to [0..1], while each sound is defined over a portion of the whole piece's duration. The algorithm looks like this:

    for (int i = 0; i < piece.duration * samplesPerSecond; i++) {
        float curSample = 0;
        float curTime = (float) i / samplesPerSecond;  // cast avoids integer division
        for all sounds s in piece such that
                s.start_time <= curTime and curTime < s.end_time {
            // instantaneous frequency from the sound's piecewise linear curve
            float curFrequency = s.value(curTime - s.start_time);
            // the envelope is evaluated on the normalized [0..1] time axis
            float curLevel = s.envelope.value(
                (curTime - s.start_time) / s.duration());
            float curValue = s.waveform.value(curFrequency, curTime);
            curSample += s.level * curLevel * curValue;
        }
        writeSample(curSample * (dynamicRangeMax - dynamicRangeMin) + dynamicRangeMin);
    }

    For a sine waveform, the value function looks like:

    float Sine::value(float f, float t) {
        return sin(2 * PI * f * t);
    }
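
    The envelope and frequency lookups above (s.value, s.envelope.value) just evaluate piecewise linear functions. As a rough illustration of what such an evaluation can look like (a sketch under assumed names, not the actual HighC code):

    // Hypothetical sketch of a piecewise linear function, as used for
    // envelopes and frequency curves (names and fields are assumptions).
    class PiecewiseLinear {
        float[] xs;  // breakpoint positions, sorted in ascending order
        float[] ys;  // function value at each breakpoint

        float value(float x) {
            if (x <= xs[0]) return ys[0];
            if (x >= xs[xs.length - 1]) return ys[ys.length - 1];
            int i = 1;
            while (xs[i] < x) i++;  // locate the enclosing segment
            float t = (x - xs[i - 1]) / (xs[i] - xs[i - 1]);
            return ys[i - 1] + t * (ys[i] - ys[i - 1]);  // interpolate linearly
        }
    }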

    Of course, there are small tricks here and there. First, the sounds are kept in a queue ordered by increasing start time, so that the complexity is at most proportional to "number of samples" * "maximum number of simultaneous sounds". Second, the current phase of each sound is tracked and passed to the waveform, to ensure smooth transitions when going from one sound segment to the next. Finally, the sound levels are adjusted in a precomputation pass so that the maximum level of the piece is 1.
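
    To illustrate the phase trick, here is a rough sketch (the class and method names are made up for illustration, not the actual HighC code): instead of evaluating sin against absolute time, each sound carries an accumulated phase that advances by 2*PI*f/samplesPerSecond per sample, so a frequency change at a segment boundary never produces a discontinuity.

    // Hypothetical sketch of per-sound phase tracking (illustrative only).
    class PhasedSine {
        double phase = 0.0;  // current phase in radians, carried across segments

        // produce one sample at the sound's current instantaneous frequency
        float nextSample(float frequency, float samplesPerSecond) {
            float value = (float) Math.sin(phase);
            phase += 2.0 * Math.PI * frequency / samplesPerSecond;
            if (phase >= 2.0 * Math.PI) phase -= 2.0 * Math.PI;  // keep phase bounded
            return value;
        }
    }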
    • CommentAuthorllixgrijb
    • CommentTimeApr 8th 2008
     
    What programming language is this? I don't recognize it. Aspects of it look like C, but then again, I have never written anything in C that synthesizes sound.
    • CommentAuthoradmin
    • CommentTimeApr 9th 2008
     
    This is pseudo-code loosely inspired by Java/C++. HighC is written in Java (mostly so that I could release it on both the PC and Mac OS X). If you want more details, I can extend the documentation to provide them, but writing clear explanations takes almost as long as writing the code, so you'll have to wait...
    • CommentAuthorllixgrijb
    • CommentTimeApr 10th 2008
     
    thanks
    • CommentAuthorrob
    • CommentTimeJun 20th 2008
     
    I notice that using noise waveforms causes a significant increase in render time.

    Have you thought about making the engine that generates the sounds more efficient? Don't take this as criticism; I'm just asking where more efficient sound generation falls in the priority scheme of features.

    It may dovetail with the feature set around playing back arbitrary waveforms. Sampler VSTs can generate material from tables full of wav files really efficiently; I wonder whether building a wavetable facility into the audio engine of HighC could ultimately lead to shorter render times and more time spent productively. Along the way, you would get the ability to load other waveforms, sampler-style.
    • CommentAuthoradmin
    • CommentTimeJun 21st 2008 edited
     
    Computing waveforms on the fly is the essential limiting factor.

    Basically, sin() has to be called one or more times, 44,000 times per second, for each sound playing in parallel. If you open the Tools > Messages window at startup and play a sound, a message is printed showing the result of a calibration routine that estimates how many simultaneous sounds HighC thinks it can render in real time. This is unfortunately only an approximation, and on some pieces one can hear "jumps" and "blanks" caused by the play routine catching up with the computation routine.
    This of course does not happen when rendering to an output file.

    Right now, I am writing HighC with extensibility and generality in mind, so that I can implement new features easily. Performance is not the focus: wavetables could be used instead of calling sin, but they would make the software harder to evolve.

    One reason stereo is not implemented yet is that it will drain computing resources even more. I want to rewrite the rendering engine first, so that it uses multiple cores and computes the samples in parallel threads.

    This means that on a machine with multiple cores, more than one core would be put to work. The next generations of processors won't improve much in terms of MIPS, but will include more and more cores, probably 8 to 12 within a year or two. HighC will thus be able to take advantage of the hardware progress to come, and I try to write it in this spirit.
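
    As a rough sketch of what block-parallel rendering could look like (purely illustrative; computeSample and the block split are assumptions, not the actual engine): since each output sample depends only on the piece description and the sample index, the sample range can be split into blocks that render on separate cores.

    // Hypothetical sketch of block-parallel rendering (illustrative only).
    import java.util.*;
    import java.util.concurrent.*;

    float[] renderParallel(int totalSamples, int cores) throws Exception {
        float[] out = new float[totalSamples];
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<?>> blocks = new ArrayList<>();
        int blockSize = (totalSamples + cores - 1) / cores;
        for (int c = 0; c < cores; c++) {
            final int start = c * blockSize;
            final int end = Math.min(start + blockSize, totalSamples);
            blocks.add(pool.submit(() -> {
                for (int i = start; i < end; i++)
                    out[i] = computeSample(i);  // assumed per-sample routine
            }));
        }
        for (Future<?> block : blocks) block.get();  // wait for every block
        pool.shutdown();
        return out;
    }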

    I'll still think about wavetables as caches for heavy sine computation...
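
    For what it's worth, such a cache could be as simple as a precomputed sine table with linear interpolation (an illustrative sketch, not a commitment to any particular design):

    // Hypothetical sketch of a sine wavetable used as a cache (illustrative only).
    class SineTable {
        static final int SIZE = 4096;
        static final float[] TABLE = new float[SIZE];
        static {
            for (int i = 0; i < SIZE; i++)
                TABLE[i] = (float) Math.sin(2.0 * Math.PI * i / SIZE);
        }

        // phase in radians; interpolates linearly between adjacent entries
        static float value(double phase) {
            double pos = phase / (2.0 * Math.PI) * SIZE;
            int i = (int) Math.floor(pos);
            float frac = (float) (pos - i);
            int i0 = ((i % SIZE) + SIZE) % SIZE;  // wrap into [0, SIZE)
            int i1 = (i0 + 1) % SIZE;
            return TABLE[i0] + frac * (TABLE[i1] - TABLE[i0]);
        }
    }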
    • CommentAuthorrob
    • CommentTimeJun 21st 2008
     
    Well, even if one sound could just reuse the computations done for another sound with the same waveform, it would help.
    • CommentAuthorrob
    • CommentTimeJun 21st 2008 edited
     
    Okay, I see the file is just a serialized representation. That's pretty cool. I can definitely envision writing my own front-end scripts (or macros, or whatever) to work around some of the things I find limiting in the UI.