Scoring Central

Full Version: Responsiveness and playability in modern orchestral sample libraries
I've been working on a new project over the last month and a half or so, using a number of different sample libraries that I've had for a while now but haven't really used for anything yet, e.g. Miroslav Philharmonik 2, some SampleTank 3 stuff, the Sonivox Orchestral Companion libs and various freebies that have come out in recent years. Working with these libraries got me thinking, and as I know this forum is frequented by a number of sample developers, both pro and amateur, I figured I should post this here as food for thought.

Sample libraries have evolved by leaps and bounds over the last couple of decades, at least when it comes to sound fidelity, realism and technical features. In 1998, single velocity layers, wide-interval multisampling and looped sustains were still the norm, while now 20 years later almost everything has chromatic multisampling, multiple velocity layers, crossfading dynamics, round robins, true legato, no looping and so on and so forth. It's basically a VO composer's wet dream come true... right?

Well, not quite.

Is it just me, or are modern orchestral libraries in general looser in terms of responsiveness than their more primitive forebears? And by responsiveness I'm referring to the feel of playing the various patches on a midi controller. You hit a key, and you're greeted with either a molasses-slow note attack or a dozen milliseconds of delay before the sample in question plays back (and yes, my ASIO buffer settings are fine TYVM). Is everybody else drawing the notes in the piano roll these days, since I appear to be the only one who's concerned about this? And it's not like tightness has become less important either. One single delayed note start in a string of RR's will throw a whole phrase out of whack and make it sound like you were drunk while performing it.

I'm honestly not sure why this is. Is it because in the days of limited RAM and less powerful hardware every kB counted, and leaving long bits of silence in your samples was a terrible waste of memory? Developers were forced to keep their samples neatly trimmed at both ends, and as a result the responsiveness of the patches was much tighter, whereas now people have SSD's and more RAM than you can shake a stick at so "who cares about a bit of silence" or something like that. Or is it because today, when a big orchestral library might contain tens of thousands of individual samples, trimming them all so that there is no excess silence at note starts is an inhuman amount of work? Maybe it's the price we pay for having all those other technical features, I dunno. All I know is that it quickly takes the fun out of composing with a library that has these problems, especially if it's a "black box" one where you are locked out of editing individual notes on a sampler level.

And then there's the matter of velocity and dynamics. Sample developers DO know that not everyone uses the exact same midi keyboard as they used for developing the library... right? And that midi keyboards by and large can be VERY different in terms of feel and responsiveness? Common sense tells me they should know this, but when I'm playing a patch that is just barely audible when playing softly and then jumps up and gets CRAZY loud when I press just a tad harder, it makes me wonder. Developers take note: if you're going to make a black box library or virtual instrument where the end user can't edit the velocities, then for the love of all that's holy add an option for changing the velocity curve! Expecting the end user to change the velocity response of their keyboard (if even possible) for playing that specific instrument and then changing it back when playing others isn't a good idea.
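To illustrate the kind of velocity-curve option I'm talking about (purely a sketch, not anything from an actual product): the sfz format already has the amp_velcurve_N opcode, so a developer could generate soft/medium/hard response presets from a simple gamma curve and let the user pick one. Roughly this, in Python:

# Sketch only: emit amp_velcurve_N lines for an adjustable gamma curve.
# gamma < 1 favours light-touch keyboards, gamma > 1 leaves more headroom
# for soft playing; 0.6 here is just an example value.
def velcurve_lines(gamma=0.6):
    lines = []
    for v in range(1, 128):
        gain = (v / 127.0) ** gamma
        lines.append(f"amp_velcurve_{v}={gain:.3f}")
    return lines

print("\n".join(velcurve_lines(0.6)))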

So, in short: for someone like me who composes almost exclusively using a keyboard, these are extremely important points that I wish more developers would pay more attention to. Having all these technical marvels that make instruments sound more realistic at my fingertips is a wonderful thing, but if I have to struggle with every part because of an unintuitive responsiveness, all that realism is worth nothing. All the RR's and legatos in the world won't save a phrase that is impossible to nail in terms of feel and timing.

Thoughts?
Given what you wrote about responsiveness of sample libraries, if you have the time, I'd be interested in your opinion on the responsiveness and playability of the new Performance version of my sample library.

No pressure. If you want to review my library, let me know and I'll send you a link.

(02-13-2018, 01:09 PM)Mattias Westlund Wrote: [ -> ]You hit a key, and you're greeted with either a molasses-slow note attack or a dozen milliseconds of delay before the sample in question plays back
I was concerned about that too. Going back to version 1.0 of Virtual Playing Orchestra, I literally listened to every single sample and sometimes did have to adjust the offset for samples that started too late. Fortunately none of the sample attacks were too long because other than modifying the actual wave file, I can't shorten the attack using .sfz commands alone.


(02-13-2018, 01:09 PM)Mattias Westlund Wrote: [ -> ]And that midi keyboards by and large can be VERY different in terms of feel and responsiveness?
I have an extra challenge with both of my MIDI controllers. They seem to max out at a MIDI velocity of 100 instead of 127. Guaranteed my controllers do not respond like most others. I have to apply a plugin with a scaling factor to reach 127.
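(The scaling itself is nothing fancy, just a linear stretch. Roughly this, sketched in Python rather than the actual plugin I use, and assuming the controller really does top out at 100:)

def rescale_velocity(vel, observed_max=100):
    # Stretch a controller that tops out below 127 back to the full MIDI range.
    return min(127, max(1, round(vel * 127 / observed_max)))

print(rescale_velocity(100))  # 127
print(rescale_velocity(50))   # 64 (rounded)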


(02-13-2018, 01:09 PM)Mattias Westlund Wrote: [ -> ]if you're going to make a black box library or virtual instrument where the end user can't edit the velocities, then for the love of all that's holy add an option for changing the velocity curve!
I wish Sforzando was less of a black box. A text editor is the "user interface" if someone wants to make changes, and an update to the library will then wipe out any user customizations.
Honestly as far as I can tell a lot of it is the time and effort required to edit samples. I'm told there are automated ways to do it and I've been known to use simple Audacity scripts myself though that tends to cut samples short and you have to fake it with reverb. I did that with the Iowa samples but never admitted it. Editing and looping 100 samples is one thing, editing and looping 100,000 samples is quite another. I've hand edited the LDK Violin and a couple pianos and man that gets old fast!
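(For anyone curious what those scripts boil down to: the core of it is just "find the first sample above a threshold and cut there, keeping a little pre-roll so you don't clip the attack". A rough Python equivalent, assuming the numpy and soundfile packages, not my actual Audacity macros:)

import numpy as np
import soundfile as sf

def trim_leading_silence(in_path, out_path, threshold_db=-48.0, preroll_ms=5.0):
    # Cut leading silence but keep a short pre-roll so the attack isn't clipped.
    data, sr = sf.read(in_path)
    mono = data if data.ndim == 1 else data.mean(axis=1)
    thresh = 10 ** (threshold_db / 20.0)
    above = np.flatnonzero(np.abs(mono) > thresh)
    start = 0 if above.size == 0 else max(0, int(above[0]) - int(sr * preroll_ms / 1000.0))
    sf.write(out_path, data[start:], sr)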

Also the ST3 engine is a pain to use, and I have the impression that even for the IKM guys it isn't nearly as easy as Kontakt. Got a bad sample in Kontakt? You can go find it and adjust it on the fly; in ST3, who knows... I tend to make my Kontakt instruments before Maize for this reason and use the samples straight from my Kontakt instrument directory.

If you can open that folder... stuff for ST3 including Sonatina.
https://www.mediafire.com/#ka1z8vmz8lv9c

Some multis...
http://www.mediafire.com/file/l4gafp2edt...ultis_.zip
http://www.mediafire.com/file/g1ffblsneq...VeloS.st3m
http://www.mediafire.com/file/q6m5q2flr5...ac_VS.st3m
(02-13-2018, 02:10 PM)Paul Battersby Wrote: [ -> ]Given what you wrote about responsiveness of sample libraries, if you have the time, I'd be interested in your opinion on the responsiveness and playability of the new Performance version of my sample library.

No pressure. If you want to review my library, let me know and I'll send you a link.

I'd be happy to have a look at it, though I can't promise I will have the time to give you any beta testing level feedback :)

(02-13-2018, 05:36 PM)bigcat1969 Wrote: [ -> ]Honestly as far as I can tell a lot of it is the time and effort required to edit samples. I'm told there are automated ways to do it and I've been known to use simple Audacity scripts myself though that tends to cut samples short and you have to fake it with reverb. I did that with the Iowa samples but never admitted it. Editing and looping 100 samples is one thing, editing and looping 100,000 samples is quite another. I've hand edited the LDK Violin and a couple pianos and man that gets old fast!

Also the ST3 engine is a pain to use, and I have the impression that even for the IKM guys it isn't nearly as easy as Kontakt. Got a bad sample in Kontakt? You can go find it and adjust it on the fly; in ST3, who knows... I tend to make my Kontakt instruments before Maize for this reason and use the samples straight from my Kontakt instrument directory.

To be clear, I wasn't taking a stab at you, Paul, Sam, or any other developer of free samples. I know how tedious and time-consuming it can be to edit tons of samples, and with free libraries issues like that are definitely forgivable. The problem is that there are companies out there charging hundreds of $/€/£ for stuff that plays like there's year-old chewing gum stuck under the keys, and that's more difficult to shrug off. If the insane number of samples involved is what's causing this trend, I can't wait for physical modelling to become mature enough to make this "sampling real instruments" business a thing of the past.
(02-13-2018, 09:03 PM)Mattias Westlund Wrote: [ -> ]I'd be happy to have a look at it, though I can't promise I will have the time to give you any beta testing level feedback :)
I just sent the link to you in a private message. I'm not looking for rigorous testing. Mostly a sanity check. Is it playable? Does the MOD wheel cover an adequate volume range? Are you easily able to get the articulations you want as you play? That sort of thing.



(02-13-2018, 09:03 PM)Mattias Westlund Wrote: [ -> ]To be clear, I wasn't taking a stab at you, Paul, Sam, or any other developer of free samples.
I never thought you were. If I'm purchasing a professional library, I expect consistent sounds, in tune, with no delay before I hear the sound, and I certainly don't want to hear any bad samples. No clicks or pops, nothing like that. I merely pointed out how I weeded through the samples in my library to demonstrate that I completely agree.
Mattias, I actually agree a ton.

Like, what happened to aftertouch? Every single one of my rackmount samplers, even the oldest one, has aftertouch settings on many patches, either to change dynamics or add vibrato. I do not understand why many sample libraries now have a vibrato slider when they could just use aftertouch! It feels so natural...

The best feeling sampler I have is a 20+ year old piece of physical hardware that is trumped technically by even computers of its time... it's just that they took the time to program all of those controls.

I purchased some guitar sample sets for Kontakt, but I found the keyboard response was SO hard, I had to hammer my keyboard (or else go into the physical keyboard's settings and change the velocity curve there) just to get it to play with any sense of dynamics other than 'pp'.

Something Simon Autenrieth and I tried to do with VSCO 2 Pro was make the control as intuitive as possible- sustaining sounds use mod for the dynamics/crossfade and velocity for attack, short sounds use velocity for dynamics. We've continued to try this in our recent products- I do have a really cool one coming out soon that uses aftertouch!

Regarding sample starts and other inconsistencies, the whole philosophy of recording has changed a lot since the 90's. Back then, the philosophy was to record a few good samples and make the most use of them possible. Now the philosophy is to record as many as possible. There are still some companies that do the former, typically supplementing the fewer samples with modelling/synthesis in a hybrid blend reminiscent of late 80's/early 90's boxes that combined synthesis and PCM samples.

It is true that the 'bulk' nature of sample cutting means less attention per note, although the software that can do auto-cutting well (typically Reaper scripts) is very advanced. For instance, we now use a concept on most of our instruments where the start of the sample is set to a constant distance from the transient across all samples, i.e. you will never wait a millisecond longer on one sample than on any other. The downside, of course, is that any natural variation in the onset of the attack (e.g. the fuzzy start on a clarinet) is lost. A careful balance must be maintained between accuracy and response.
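(Not our actual Reaper scripts, just the underlying idea in a rough Python sketch, using a naive threshold detector and assuming the numpy and soundfile packages:)

import numpy as np
import soundfile as sf

def align_to_transient(in_path, out_path, target_offset_ms=50.0, threshold_db=-30.0):
    # Trim or pad so the first sample above the threshold lands exactly
    # target_offset_ms into the file, i.e. a constant distance on every sample.
    data, sr = sf.read(in_path)
    mono = data if data.ndim == 1 else data.mean(axis=1)
    hits = np.flatnonzero(np.abs(mono) > 10 ** (threshold_db / 20.0))
    transient = int(hits[0]) if hits.size else 0
    target = int(sr * target_offset_ms / 1000.0)
    if transient >= target:
        out = data[transient - target:]
    else:
        pad = np.zeros((target - transient,) + data.shape[1:], dtype=data.dtype)
        out = np.concatenate([pad, data], axis=0)
    sf.write(out_path, out, sr)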

There are of course many blunders made from the bulk nature of sampling. I greatly dislike the staccatos in EW Hollywood libraries because they cut them too short so the tail gets chopped- probably because they were recorded too quickly without proper care taken to let the sound fade in the space. In my opinion, that makes them unusable.

The largest issue with timing is the nature of multi-mic itself. In "the olden days" (1980-1995 or so), often only a single mic, or at times a very close array, was used for recording. Believe it or not, this configuration is still used today by some (Eduardo Tarilonte's libraries), but it has been widely rejected in favor of more natural "main" mic positions up to several meters away from the instrument. The signature sound of old samples comes from that close, mono recording, combined with wide mapping. Very tight cutting is also important- if needed, a poor attack can be 'reconstituted' manually by overlaying the sound of an attack in post-processing. If you only have 2-8 samples per instrument, as was common in the early 90's, this is very easy editing. But now with multi-mic, each position has a delay of several ms (remember it is about 2.9 ms per meter of distance), meaning that if a close mic is cut as tight as possible, a main mic might still delay 5-15 ms, and a far mic 15-50+ ms... unless you delay the mics so they line up together, but that is not accurate to real recording (although it's an interesting sound, and an improvement in phase issues).
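(To put some rough numbers on that 2.9 ms per meter figure; the distances below are only illustrative guesses, not measurements from any particular library:)

SPEED_OF_SOUND = 343.0  # metres per second at room temperature

def mic_delay(distance_m, sr=44100):
    # Acoustic delay for a mic at a given distance, in ms and in samples.
    seconds = distance_m / SPEED_OF_SOUND
    return seconds * 1000.0, round(seconds * sr)

for name, d in [("close", 0.5), ("main / tree", 3.0), ("far / hall", 10.0)]:
    ms, samples = mic_delay(d)
    print(f"{name:12s} {d:4.1f} m -> {ms:5.1f} ms ({samples} samples at 44.1 kHz)")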

By the late 90's, you might see up to and beyond 20-40 samples per patch, with libraries like EWQL SO paving the way in the early 2000's for the current generation of libraries. In that respect, I consider a library like SO the "turning point"- combining the processed elements of earlier libraries with the bulk that was to define the next generations of libraries. After that, with the emergence of companies like Spitfire and later Embertone and others, the sound moved generally towards a more "organic", "natural" sound and away from the more processed "clean" sound of previous libraries. When Spitfire first launched, their about page was a virtual manifesto against the tyranny of overly-processed, clean sample libraries, and, combined with their stunning commercial success, it served to inspire many sample library developers to care less about getting "the perfect sample" than about capturing "the spectrum of all samples" and providing that to the user instead. Companies like VSL have never shied away from sampling things ridiculously deep, so no one has bothered to question whether those numbers are really necessary. Like all things, there is a natural curve with a specific point at which the "right" number of samples is evident, but most just go above and beyond, figuring excess can only equate to additional value- even if that additional value is not as potent as adding, say, another instrument. This was the design philosophy difference between VSCO 2 and other current orchestral libraries- less detail, less 'consistency', but more instruments and articulations- closer to the older generation of sample libraries.

(02-13-2018, 02:10 PM)Paul Battersby Wrote: [ -> ]
(02-13-2018, 01:09 PM)Mattias Westlund Wrote: [ -> ]You hit a key, and you're greeted with either a molasses-slow note attack or a dozen milliseconds of delay before the sample in question plays back
I was concerned about that too. Going back to version 1.0 of Virtual Playing Orchestra, I literally listened to every single sample and sometimes did have to adjust the offset for samples that started too late. Fortunately none of the sample attacks were too long because other than modifying the actual wave file, I can't shorten the attack using .sfz commands alone.

Paul- try the following:

offset=1000 // value in samples (at a 44100 Hz sample rate)
ampeg_attack=0.005 // value in seconds; 3-7 ms is enough to clean up any clicks resulting from cutting at a non-zero crossing

If you want to be REALLY REALLY cool, you can instead try:

offset_cc#=1000 // maximum value (with the CC at 127), in samples at 44100 Hz
ampeg_attack=0.005 // value in seconds

That maps the CC (whichever # you enter) to the sample start offset, i.e. how much of the recorded attack gets skipped. That way the user can tweak the responsiveness!

Let's say the samples were 'cut to transient with offset' in the way I described earlier, with say, a 50 ms offset (that's 2205 samples if I'm not mistaken). You could then have the following-

offset_cc#=2150
ampeg_attack=0.005

That would make it so the end user could configure the span in real time... hmm... ;)
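(Side note on the arithmetic, in case anyone's library isn't at 44.1 kHz; just a throwaway Python helper, nothing more:)

def ms_to_offset(ms, sr=44100):
    # Convert a trim window in milliseconds to an sfz offset value in samples.
    return round(ms * sr / 1000.0)

print(ms_to_offset(50))            # 2205 at 44.1 kHz, as above
print(ms_to_offset(50, sr=48000))  # 2400 if the files are 48 kHz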
I didn't think you were taking any shots Mattias. Just sharing a few thoughts that were dwarfed by Sam's impressive post.

Really should learn how Reaper and its scripting works. It was mentioned by another dev on VI-C.

Thanks Sam good stuff.
Welp, after all that writing, I managed to forget the video I meant to share. It's a bit meandering, but I made a video about recreating late 80's style sample libraries in Kontakt the other day-
https://www.youtube.com/watch?v=xTXiCEP5GDg
(02-13-2018, 11:21 PM)Samulis Wrote: [ -> ]Paul- try the following:

offset=1000 // value in samples (at a 44100 Hz sample rate)
ampeg_attack=0.005 // value in seconds; 3-7 ms is enough to clean up any clicks resulting from cutting at a non-zero crossing

Thanks for the suggestion Sam. I've experimented with that sort of thing, but that amounts to cutting off the natural attack and replacing it with an artificial one. It can work, but what I really want is the ability to take the actual attack from the waveform and simply speed it up. As it stands, the sfz attack value is the amount to add to the existing attack rather than an absolute value.
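(Purely as a sketch of that "speed up the real attack" idea, done offline on the wave files rather than inside the sfz player; it assumes the librosa and soundfile Python packages and glosses over stereo handling:)

import librosa
import numpy as np
import soundfile as sf

def shorten_attack(in_path, out_path, attack_ms=80.0, rate=2.0):
    # Time-stretch only the attack portion (rate > 1 shortens it without
    # changing pitch), then splice it back onto the untouched remainder.
    y, sr = librosa.load(in_path, sr=None, mono=True)
    n = int(sr * attack_ms / 1000.0)
    fast_attack = librosa.effects.time_stretch(y[:n], rate=rate)
    sf.write(out_path, np.concatenate([fast_attack, y[n:]]), sr)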

(02-14-2018, 02:08 AM)bigcat1969 Wrote: [ -> ]Really should learn how Reaper and its scripting works. It was mentioned by another dev on VI-C.

This might be a good place to start. Maybe you've already seen it on the Reaper forum?

JSFX tutorial for the total Newbie - guide
(02-14-2018, 12:09 PM)Paul Battersby Wrote: [ -> ]Thanks for the suggestion Sam. I've experimented with that sort of thing, but that amounts to cutting off the natural attack and replacing it with an artificial one. It can work, but what I really want is the ability to take the actual attack from the waveform and simply speed it up. As it stands, the sfz attack value is the amount to add to the existing attack rather than an absolute value.

Cut out the attack and save it as a separate sample with a fade out on it, then map it to the same keys in a new group. You can also do this trick with staccato samples if there are no good attacks present, just attenuate the staccatos with volume= until they don't cause too much of an "accent" sound (unless that is desired).

I did the first trick with a clarinet I was working on a few years back- the attacks took forever to kick in, so we chopped most of the samples down and took some of the "tongued" noises and played them layered on top of the sustain samples.
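(In case it helps anyone automate that trick, here is a rough Python sketch of the chopping step, assuming the numpy and soundfile packages; the 80 ms / 20 ms figures are placeholders, not values from any of my instruments. The two output files then get mapped to the same key range in separate sfz groups and layered as described above.)

import numpy as np
import soundfile as sf

def split_attack(in_path, attack_out, body_out, attack_ms=80.0, fade_ms=20.0):
    # Save the attack as its own sample (with a fade-out), plus the remaining body.
    data, sr = sf.read(in_path)
    n = int(sr * attack_ms / 1000.0)
    attack = np.array(data[:n], copy=True)
    fade_len = min(n, int(sr * fade_ms / 1000.0))
    fade = np.linspace(1.0, 0.0, fade_len)
    if attack.ndim > 1:
        fade = fade[:, None]  # broadcast the fade over stereo channels
    if fade_len > 0:
        attack[-fade_len:] *= fade
    sf.write(attack_out, attack, sr)
    sf.write(body_out, data[n:], sr)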