02-13-2018, 01:09 PM
I've been working on a new project over the last month and a half or so, using a number of different sample libraries that I've had for a while now but haven't really used for anything yet, e.g. Miroslav Philharmonik 2, some SampleTank 3 stuff, the Sonivox Orchestral Companion libs and various freebies that have come out in recent years. Working with these libraries got me thinking, and as I know this forum is frequented by a number of sample developers, both pro and amateur, I figured I should post this here as food for thought.
Sample libraries have evolved by leaps and bounds over the last couple of decades, at least when it comes to sound fidelity, realism and technical features. In 1998, single velocity layers, wide-interval multisampling and looped sustains were still the norm, while now 20 years later almost everything has chromatic multisampling, multiple velocity layers, crossfading dynamics, round robins, true legato, no looping and so on and so forth. It's basically a VO composer's wet dream come true... right?
Well, not quite.
Is it just me, or are modern orchestral libraries in general looser in terms of responsiveness than their more primitive forebears? By responsiveness I mean the feel of playing the various patches on a MIDI controller. You hit a key, and you're greeted with either a molasses-slow note attack or a dozen milliseconds of delay before the sample in question plays back (and yes, my ASIO buffer settings are fine TYVM). Is everybody else just drawing the notes into the piano roll these days? I seem to be the only one concerned about this. And it's not like tightness has become less important either: a single delayed note start in a string of RRs will throw a whole phrase out of whack and make it sound like you were drunk while performing it.
I'm honestly not sure why this is. Is it because in the days of limited RAM and less powerful hardware every kB counted, and leaving long stretches of silence in your samples was a terrible waste of memory? Developers were forced to keep their samples neatly trimmed at both ends, and as a result the patches responded much more tightly. Now that people have SSDs and more RAM than you can shake a stick at, the attitude seems to be "who cares about a bit of silence". Or is it because today, when a big orchestral library might contain tens of thousands of individual samples, trimming them all so that there is no excess silence at note starts is an inhuman amount of work? Maybe it's the price we pay for all those other technical features, I dunno. All I know is that it quickly takes the fun out of composing with a library that has these problems, especially if it's a "black box" one where you are locked out of editing individual notes on a sampler level.
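And it's not like trimming has to be done by hand these days. Here's a rough sketch of the kind of batch trim I mean (assuming Python with the numpy and soundfile packages; the threshold and folder names are made up for illustration, not taken from any actual developer's toolchain):

```python
# Rough sketch: batch-trim leading silence from a folder of WAV samples.
# Threshold and paths are illustrative only.
import numpy as np
import soundfile as sf
from pathlib import Path

THRESHOLD_DB = -60.0  # frames quieter than this count as silence

def trim_leading_silence(in_path: Path, out_path: Path) -> None:
    data, rate = sf.read(in_path, always_2d=True)
    peaks = np.max(np.abs(data), axis=1)       # per-frame peak across channels
    threshold = 10.0 ** (THRESHOLD_DB / 20.0)  # dBFS -> linear amplitude
    above = np.nonzero(peaks >= threshold)[0]
    start = above[0] if above.size else 0      # first audible frame
    sf.write(out_path, data[start:], rate)

out_dir = Path("trimmed")
out_dir.mkdir(exist_ok=True)
for wav in Path("samples").glob("*.wav"):
    trim_leading_silence(wav, out_dir / wav.name)
```

A real toolchain would obviously want a gentler touch (per-mic noise-floor detection, preserving a few ms of pre-attack, and so on), but the point stands: finding leading silence is automatable, even across tens of thousands of files.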
And then there's the matter of velocity and dynamics. Sample developers DO know that not everyone uses the exact same MIDI keyboard they used for developing the library... right? And that MIDI keyboards can be VERY different in terms of feel and responsiveness? Common sense tells me they should know this, but when I'm playing a patch that is barely audible when I play softly and then jumps up and gets CRAZY loud when I press just a tad harder, it makes me wonder. Developers, take note: if you're going to make a black box library or virtual instrument where the end user can't edit the velocities, then for the love of all that's holy, add an option for changing the velocity curve! Expecting the end user to change the velocity response of their keyboard (if that's even possible) for that specific instrument and then change it back for everything else isn't a good idea.
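It's not like this is hard to implement, either. For anyone who wants to work around it in the meantime, here's a rough sketch of a power-curve velocity remap running as a MIDI pass-through (assuming Python with the mido package and your default MIDI ports; the gamma value is just an example to tune by ear, not anybody's official curve):

```python
# Rough sketch: remap note-on velocities with a power curve.
# out = 127 * (in / 127) ** gamma; gamma < 1 lifts soft playing,
# gamma > 1 tames a keyboard that feels too hot.
import mido

GAMMA = 0.6  # example value; adjust for your keyboard's feel

def remap(velocity: int, gamma: float = GAMMA) -> int:
    if velocity == 0:
        return 0  # velocity 0 means note-off; pass it through untouched
    return max(1, min(127, round(127 * (velocity / 127) ** gamma)))

# Forward everything from the default input to the default output,
# rewriting note-on velocities on the way.
with mido.open_input() as inport, mido.open_output() as outport:
    for msg in inport:
        if msg.type == "note_on":
            msg = msg.copy(velocity=remap(msg.velocity))
        outport.send(msg)
```

One exposed knob for that gamma (or better, a drawable curve) inside the instrument would solve this once and for all, on the developer's side where it belongs.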
So, in short: for someone like me who composes almost exclusively at a keyboard, these are extremely important points that I wish more developers would pay attention to. Having all these technical marvels that make instruments sound more realistic at my fingertips is a wonderful thing, but if I have to struggle with every part because of unintuitive responsiveness, all that realism is worth nothing. All the RRs and legatos in the world won't save a phrase that is impossible to nail in terms of feel and timing.
Thoughts?