Scoring Central

Homemade experimental impulse response
I saw a picture of a ripple tank and thought, "Hey, what if...?"

Turns out there's an old, really simple algorithm to simulate ripples. I hacked together a quick program in C++ to run the algorithm at a ludicrous resolution. After it seemed to work, I ran the algorithm for 200,000 frames at a resolution of 1920×1200. I ended up with a not-completely-horrible hall reverb impulse response.

The impulse is pseudo-true stereo: the right-source impulse is identical to the left-source impulse, only with its left and right channels swapped. It actually ended up like that by mistake, as I had no randomisation active in the program and the source and pickup positions were symmetrical for left and right in a rectangular room.

While I'm by no means aiming to develop the next Impulse Modeler, I'll probably continue this experiment to see how much it's possible to improve the sound.

I've attached the impulse response I ended up with, if you'd like to try it out.
Artificial Hall - test 2
This is another experimental impulse response, done using a different (and considerably faster) algorithm. This one is in proper true stereo. A short demo is included.
Interesting, thanks! I'll check it out :)
Would you perhaps post an example, maybe with a clarinet or oboe in a slow melody?

 -- Kurt
Sure! Here's the reverb in use. As it happens, I had a not-too-fast oboe melody in one of my WIP songs. :)

The reverb sends have different pre-delays, but get routed into a single instance of the reverb itself. If I manage to get a good sound, I'll probably render the impulse with different front-to-back source positions (à la Altiverb) to make it more usable in an orchestral setting.

EDIT: While I'm at it, here's a demo with an almost full orchestra. There's a bit of staccato there, so you'll be able to hear the tails in action as well.
I played around a bit with it and I have to say this isn't a bad hall reverb at all. Goes to show yet again that extreme realism isn't crucial -- maybe not even desirable -- when it comes to reverb and sampled instruments. Having said that, I'm not into convolution reverb anymore, so for me personally this is of limited use. But really cool stuff anyway!

So... care to explain to a non-programmer how the HELL a graphical (I assume, given the mention of x/y resolution) ripple generating algo can be turned into audio, and an IR at that?
It's easiest to understand if you play around with a ripple tank sim. Here's one. :)

The ripple tank algorithm is essentially a 2D version of sound waves in a 3D space. To create an impulse response, I inserted a 1.0 spike at a predetermined "player" position on the first frame, and I saved one "pixel" value from each frame at a predetermined "microphone" position. Done! The array of saved values works as floating point audio as-is, and because I started with a single spike, the saved data can be used as an IR in a convolution plugin. :)

EDIT: I don't have any kind of a background in actual engineering, so pardon me if I get things wrong. :D While I'm actually studying audio, the curriculum mostly consists of the humanities and has very little in the way of electronics or DSP.
That is VERY creative! Love the way you're thinking :)

From a practical perspective though, how can the values work as audio as is? What kind of output do you get? I guess we're talking coordinates in an X/Y grid. How is the output saved? As an image? As plain text or what? What I fail to understand is how the spike-triggered graphical ripple becomes a wave file.
The values work as audio because floating point audio is just a series of floating point numbers ranging from -1.0 to 1.0, each number representing the amplitude of a single sample. That's exactly what I get when I save the "height" value of one "pixel" at specific coordinates from each frame: a series of floating point numbers. I don't have any libs installed at the moment, so the output is saved as raw data. I simply used Audacity's raw data import feature to convert it to a wave file (and to normalize it).

I'm sorry if I can't explain it clearly, I've only just begun to wrap my head around DSP myself.
(08-11-2016, 10:08 PM)Otto Halmén Wrote: I don't have any libs installed at the moment, so the output is saved as raw data. I simply used Audacity's raw data import feature to convert it to a wave file (and to normalize it).

That was the exact technical tidbit of the procedure that escaped me. So, thanks! :)
I'd be very interested in seeing a screencast of the whole procedure from start to finish sometime, if you've an appetite for it.

I think I get the theory, but it always helps to see "the math in action", so to speak...

  -- Kurt