Scoring Central

The time has come
I've been putting this off for a long time, but now I have finally started putting together a new big master project template. My old template is getting seriously long in the tooth and once I've wrapped up the music for Fighting Fantasy Legends I'm going to retire it in favor of a new and updated one. Only string sections done so far and it will be quite some time before you'll hear any actual compositions made with this one, but at least I've taken the first steps.

[Image: template.png]

Obviously I'm using a lot of the same libraries as in the old template, the main difference being that I'm phasing out the very oldest ones like Miroslav and Roland in favor of more stuff from EWQLSO and SONiVOX. The old one also had a lot of corners cut, like single velocities and no RRs in some places, as well as hardly any chromatic multisampling. This time I'm going all in, using all the full patches from the libs I have.

The other big difference is that this time around I'm using TX16Wx as my main sampler for this project, whereas the old one used sfz+ and soundfonts (!). One of the reasons I've put this off is that I wanted to wait for v3 of TX16Wx to come out (as it has a lot of promised, or at least hinted-at, features that would be nice to have), but there has been literally zero news on this new version for about a year now, so I'm done waiting. When v3 comes out it will hopefully be backwards compatible so that I can transfer the project to it.

Here's the thing though... TX v2 has a tendency to get a little CPU-heavy when there's a lot of stuff going on (at least this was the case with the Horus Heresy soundtrack), so I'm actually not sure whether this is even feasible on my current system. I'm pretty sure I can squeeze everything I need into my 16GB of RAM -- these are older libraries, after all -- but whether I can use more than a handful of instruments without the CPU maxing out remains to be seen. And there's only one way to find out.

So... wish me luck.
I see a lot of similarities to your past template as outlined in http://mattiaswestlund.net/?p=514, along with hints from some of your other articles. You're using four reverb busses as usual, and you're sticking with your preference for one track per articulation, instead of using key switches or some kind of setup in which a track represents an instrument section with a midi channel per articulation. I notice that you group the articulations together, but the solos don't seem to be grouped with the rest? (I'm looking at first violins and seeing no solo). I'm assuming that's because the grouping also includes things like stereo width, which is markedly different between the section and the solo. Is that right, or are there more/other reasons?

I'm also curious as to why you prefer a track per articulation? I've tried a number of different ways of doing this and found that I like doing the same thing, but if anybody were to question me on it, I'm not sure how I would explain it - I guess I like knowing exactly what articulation is being played by a specific region without having to guess about which keyswitch was previously engaged, or which midi channel it is playing on, but admittedly I feel like the explosion of tracks is a little bit crazy.
(05-01-2017, 03:08 PM)Michael Willis Wrote: I'm also curious as to why you prefer a track per articulation? I've tried a number of different ways of doing this and found that I like doing the same thing, [...] I like knowing exactly what articulation is being played by a specific region without having to guess about which keyswitch was previously engaged

I would feel the same about a key switching scheme where the selected articulation wasn't clear, but for me, using Reaper, it is clear: I can label the key switches and have them appear in the piano roll. I imagine this sort of thing is possible in other DAWs as well.
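The "which keyswitch was previously engaged" problem Michael describes boils down to a backwards scan through the event list. A minimal sketch (the note numbers and articulation names here are hypothetical, not taken from any particular library):

```python
# Hypothetical keyswitch map: note numbers below the playable range
# select the articulation for all subsequent notes on the track.
KEYSWITCHES = {24: "sustain", 25: "staccato", 26: "tremolo"}

def articulation_at(events, index, default="sustain"):
    """Return the articulation in effect for events[index] by scanning
    backwards for the most recent keyswitch note."""
    for tick, note in reversed(events[:index]):
        if note in KEYSWITCHES:
            return KEYSWITCHES[note]
    return default

events = [(0, 24), (10, 60), (20, 25), (30, 62)]  # (tick, note)
print(articulation_at(events, 3))  # -> staccato
```

Labeled keyswitches in the piano roll make this lookup visual instead of mental, which is really the whole point.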

[Image: 1418ppg.jpg]

P.S.

(04-30-2017, 07:49 PM)Mattias Westlund Wrote: So... wish me luck.

Good Luck.  Smile
(05-01-2017, 03:08 PM)Michael Willis Wrote: I notice that you group the articulations together, but the solos don't seem to be grouped with the rest? (I'm looking at first violins and seeing no solo). I'm assuming that's because the grouping also includes things like stereo width, which is markedly different between the section and the solo. Is that right, or are there more/other reasons?

Well, the main reason is that I haven't gotten around to setting up the solo strings yet Wink

(05-01-2017, 03:08 PM)Michael Willis Wrote: I'm also curious as to why you prefer a track per articulation? I've tried a number of different ways of doing this and found that I like doing the same thing, but if anybody were to question me on it, I'm not sure how I would explain it - I guess I like knowing exactly what articulation is being played by a specific region without having to guess about which keyswitch was previously engaged, or which midi channel it is playing on, but admittedly I feel like the explosion of tracks is a little bit crazy.

My preference for having articulations on separate tracks stems from the simple fact that I didn't come into contact with keyswitched instruments until fairly late in my VO life, so I've never really used them much. Which in turn means that, for me, it's not a comfortable way of working. Aside from reducing project clutter, I don't see a whole lot of upsides to using keyswitches, at least not with my particular workflow. Here's my thinking:

- Keeping things separate lets me blend articulations more seamlessly, e.g. I can use CC#11 to crossfade between a non-vib articulation and a tremolo/trill articulation, or layer a sustain articulation with staccatos to add a little more bite here and there. You can't do these things with keyswitches, unless you set up separate patches for all conceivable combinations of articulations.

- I'm not a keyboard whiz, so I rarely perform long passages in one go. The longer I play, the more mishaps creep in, which leads to even more editing afterwards. Performing things in short, focused takes lets me avoid having to play dozens of takes of the same part, and minimizes editing time. If I were to bring keyswitches into this it would just add another layer of complexity, i.e. having to memorize where the different articulations are and trigger them on cue.

- I'm using a ton of different libraries together. Just sticking all articulations in a KS patch would sound awful, as you'd be able to hear that it's not the exact same instruments and sections. Keeping them separate so that I can massage the midi where necessary greatly alleviates this problem.

- Being able to see exactly what articulations I have at my disposal helps my creativity.
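The CC#11 crossfade from the first point can be sketched as an equal-power blend between two articulation tracks driven by a single controller (this is just one common crossfade curve, not necessarily what any particular sampler does internally):

```python
import math

def crossfade_gains(cc11: int) -> tuple[float, float]:
    """Equal-power crossfade between two articulation tracks driven by
    one MIDI CC (0-127). Returns (gain_a, gain_b); gain_a is full at
    CC 0 (e.g. non-vib sustain), gain_b at CC 127 (e.g. tremolo)."""
    t = max(0, min(127, cc11)) / 127.0
    gain_a = math.cos(t * math.pi / 2)
    gain_b = math.sin(t * math.pi / 2)
    return gain_a, gain_b

# At any CC value the two gains sum to unity power, so the blend
# stays at a constant perceived level across the sweep:
a, b = crossfade_gains(64)
print(round(a * a + b * b, 6))  # -> 1.0
```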
I find that keyswitches are good for playing live, but bad for arranging. You can't just click in the score and tell it to start playing. Depending on where you start, it might use the wrong articulations. And if you copy and paste a range of notes, the articulation may not get copied with them. And of course the extra notes mess up the display of the score.

Instead, I treat the keyswitches as part of the instrument's API. Logic Pro lets you associate an articulation ID with each note, which is a much more convenient way to work. Then I have a MIDI processing script to convert the articulation IDs to the correct keyswitches for each instrument.
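Logic's Scripter actually runs JavaScript, but the mapping logic is language-agnostic; here is a rough Python sketch of the idea, with entirely made-up articulation IDs and keyswitch note numbers:

```python
# Hypothetical per-instrument map from articulation IDs to the
# keyswitch note each library expects (IDs and notes are invented).
KS_MAP = {
    "SoloViolin": {1: 24, 2: 25, 3: 26},  # 1=sustain, 2=staccato, 3=tremolo
}

def to_keyswitch_events(instrument, notes):
    """Expand (tick, pitch, articulation_id) tuples into a flat event
    list, inserting the required keyswitch note before each played note
    whenever the articulation changes."""
    out = []
    last_ks = None
    for tick, pitch, art_id in notes:
        ks = KS_MAP[instrument][art_id]
        if ks != last_ks:  # only emit a keyswitch on articulation change
            out.append((tick, ks))
            last_ks = ks
        out.append((tick, pitch))
    return out

print(to_keyswitch_events("SoloViolin", [(0, 60, 1), (480, 62, 1), (960, 64, 2)]))
# -> [(0, 24), (0, 60), (480, 62), (960, 25), (960, 64)]
```

The nice property is that the score only ever carries articulation IDs; the per-library quirks live in one table.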
I would love to see a walk-through of this in a future article -- specifically how you configured each instance of TX16Wx within the String section layers. Heck, I'd love an ongoing series that shows your progress as you go through it, encountering and overcoming challenges, etc. I'd find that an incredible learning tool for me...

Thanks!
Everything except choirs, percussion and a few solo instruments is now in place.

Here's what it sounds like right now. A lot of tweaking and mixing still left, but I like the way it's turning out. Let me know if you hear anything odd (aside from the various imbalanced levels).
Is it just me, or are the horns just a tiny bit too wide and/or dry?

Other than that, everything sounds in place. It's a bit more upfront than a classical orchestra, which is probably just the right thing for a soundtrack orchestra. Smile
(07-11-2017, 08:03 PM)Otto Halmén Wrote: Is it just me, or are the horns just a tiny bit too wide and/or dry?

Now that you mention it, maybe. IMO horns are really difficult when it comes to positioning. This is just the basic midi plus panning and reverb levels so far, none of that psychoacoustic magic. The horns are actually very wet (IIRC around -4dB sent to the "mid" reverb) but it might be the unusually wet strings that are throwing things off.

(07-11-2017, 08:03 PM)Otto Halmén Wrote: Other than that, everything sounds in place. It's a bit more upfront than a classical orchestra, which is probably just the right thing for a soundtrack orchestra. Smile

Upfront? Wow. This is probably my wettest orchestral setup to date Smile
(07-12-2017, 02:52 PM)jmcmillan Wrote: For example string sus track is routed to master, and "aux sent" to proper reverb channel, varying the level of the sent signal per amount of reverb needed. Then just rinse repeat for all other instrument/articulation tracks. Do I have the signal flow understood correctly?

Yes, that sounds correct.

(07-12-2017, 02:52 PM)jmcmillan Wrote: The reason I ask is it seems you have some additional busses for trumpets trombones etc. I may be interpreting this wrong from the Reaper screen shot as I am not a Reaper user.

Busses? Hm, no, those are just the output tracks for different instruments. E.g. the various trumpet samples in TX16Wx get sent to a separate stereo out in the plugin, and then this stereo out gets sent to a reverb buss (indicated by the thingy on the channel that says "4:Mid").
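The aux-send arithmetic behind this routing is simple: the dry path goes to the master untouched, and a copy is tapped off at the send level into the reverb buss. A minimal sketch, assuming a post-fader send and using the roughly -4 dB horn send mentioned earlier as the example level:

```python
def db_to_gain(db: float) -> float:
    """Convert a send level in decibels to a linear amplitude factor."""
    return 10 ** (db / 20)

def route(dry_sample: float, send_db: float):
    """Split one sample into its master path and its reverb-send path."""
    to_master = dry_sample                             # unattenuated dry path
    to_reverb_bus = dry_sample * db_to_gain(send_db)   # attenuated send copy
    return to_master, to_reverb_bus

dry, wet = route(1.0, -4.0)
print(round(wet, 3))  # -> 0.631
```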