This article is a detailed overview of Part Five of My Musical Mouth, which is by far the most interesting, revolutionary and complicated part so far. Of course, nothing is really complicated in this project, as you have hopefully seen from the previous parts. If you have not read the previous parts then it is advisable to do so using the links below, as this part is quite dependent on you understanding the previously covered processes and functionality.
Previous articles in this series include:
Part Five is all about creating a pad sound (a long, rich, floating synth akin to a “string” sound) by using my voice to determine the pitch and my mouth to control the brightness. I do this by opening and closing my lips whilst singing, so that my mouth acts as a low pass filter (i.e. it cuts out the high frequencies to leave a dull, muffled sound). This, as you will see, can then be used to control the low pass filter on a synthesiser.
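If you are curious what a low pass filter is actually doing, here is a minimal one-pole sketch in Python. This is purely my own illustration of the general idea (the function and parameter names are made up), and has nothing to do with how Cubase or the synth implements its filters:

```python
def one_pole_lowpass(samples, alpha):
    """Simple one-pole low pass filter. Higher alpha lets more
    high frequencies through (brighter, 'open mouth'); lower alpha
    smooths more heavily (duller, 'closed mouth'). 0 < alpha <= 1."""
    out = []
    y = 0.0
    for x in samples:
        y += alpha * (x - y)  # smooth towards the input
        out.append(y)
    return out

# A step input: the filter rounds off the sharp edge,
# which is exactly what "removing the highs" means.
signal = [0.0] * 4 + [1.0] * 4
print(one_pole_lowpass(signal, 0.5))
```

With `alpha` at 1.0 the signal passes through untouched; as it falls towards 0 the output lags further behind the input, muffling the sound.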
In the first three parts of the project, the reason for using my mouth was to allow me to easily transfer an idea from my mind into actual music without being restricted by a lack of instrumental ability – I lack timing, knowledge of scales and the ability to remember what notes I need to play! In Part Four, I used the method to freely “jam” new instrument parts over the top of my existing tracks; again, without being restricted by ability. This part goes beyond writing music and ventures into sound design, and the resulting midi part would be almost impossible to recreate by playing a midi keyboard. Simply by singing, I trigger the sound, bend the pitch and control two filter cutoffs, a morph control and the volume. You can hear the resulting pad sound right at the very beginning of the finished track, which can be listened to via the Soundcloud clip at the end of this article.
I start by creating a new audio part and clicking on the “Show Lanes” button on the Track Header. The record mode is set to “Keep History” so that my takes stack up on different lanes and can be easily auditioned afterwards. I then start Cubase playing in cycle mode and listen to the other parts while I practise singing my pad sound; when I’m ready, I hit the record button and record several takes, as you can see below:
You can see how I’ve slowly opened my mouth during the first bar, resulting in the volume steadily increasing until the second bar, where I steadily close my mouth making it quieter again. This happens repeatedly throughout the remaining two bars.
I open the Sample Editor by double clicking on my favourite take and click on the Vari-Audio tab followed by the Segments button to edit the segments that have been created, as you can see below:
Here, you can clearly see the changes in volume and the changes in pitch, which are represented by the continuous squiggly black line that runs through the segments. You can see that the pitch changes occur at the end of the second bar and in the middle of the fourth bar; however, you can also see small fluctuations happening constantly as my voice wobbles (I’m not a very good singer).
Vari-Audio thinks that there are four notes in this part (there is a fifth purple segment which belongs at the beginning of the fifth bar). I know there should be just one long note, so I glue all the Segments together by holding down [Alt] and clicking on them. It doesn’t really matter what pitch the one note is on, as this is going to be controlled entirely by the black squiggly line which will be exported as pitchbend control, so all I need to do is export the midi and controller information as you can see below:
In the picture above you can see my single segment which becomes my midi note and the squiggly pitch line which will become continuous pitchbend data. From the waveform, the peaks and troughs are translated into modulation control (CC1) which I eventually use to control the filters, morph and volume control. To do all this, I click the Extract MIDI button and choose “Notes and Continuous Pitchbend Data” for the Pitch Mode with a range of 5 semitones, “Volume Controller Curve” for the Volume Mode and “CC1 (Modulation)” for the MIDI Controller that the Volume Controller Curve will export to.
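To picture what the “Volume Controller Curve” export is doing, you can think of it as scaling the amplitude envelope of the waveform into MIDI’s 7-bit controller range of 0–127. The sketch below is my own rough illustration of that idea (the function name is invented), not Cubase’s actual algorithm:

```python
def amplitude_to_cc(envelope, cc_max=127):
    """Scale a list of amplitude values (peaks read from the waveform,
    0.0-1.0) into 7-bit MIDI controller values (0-127), i.e. a stream
    of CC1 (Modulation) events."""
    peak = max(envelope) or 1.0  # normalise; guard against silence
    return [round(v / peak * cc_max) for v in envelope]

# A swell up and back down, like slowly opening then closing the mouth
env = [0.1, 0.4, 0.8, 1.0, 0.7, 0.3]
print(amplitude_to_cc(env))  # -> [13, 51, 102, 127, 89, 38]
```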
You can see the resulting midi part open in the Key Editor in the picture below with the original audio part open in the Sample Editor, so that you can see how they relate:
In the picture above, you can see the midi part at the bottom has a midi note on E2, then below that there are two “lanes” of continuous controller data. The top lane is the Pitchbend data created from the pitch line in Vari-Audio and plots an identical path in the upper part of the picture. The relationship between the bottom lane and the amplitude of the audio in the upper part of the picture is harder to spot, but if you imagine the CC1 data line inverted, then you should be able to see it is a reflected image of the faded audio wave. CC1 is, by default, named Modulation according to MIDI protocol, and most midi instruments can use it to control a number of parameters (such as volume, filter cut-off, pan, etc.) simultaneously.
I then find a midi instrument to play back my imagined pad sound by going to the Media Bay and filtering on “Synth Pad”, “Analogue” and “Dark” to quickly locate the right sound. I audition all the sounds that fit this description and double click on those that I like to add them to my project. The one I choose is called “Shining” and is from the Spector synth, one of three synths with identical controls, the others being Mystic and Prologue; so if you select any of these three you will be able to continue this tutorial easily. You can see the synth in the picture below:
The first thing I do is set the Pitchbend range. This must equal the range that the continuous pitchbend data was set to when the midi part was exported – which, if you recall, was 5 semitones in my example. The pitchbend control setting is in a circle in the top left corner, and in the picture above you can see that I have clicked on it and the drop-down box has popped up, from which I’ve selected “5”. The synth will now play back at the same pitch as my original singing part.
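The reason the two settings must match is that MIDI pitchbend itself is just a 14-bit number centred on 8192 with no units; the bend range setting tells the synth how many semitones the extremes mean. Here is a hypothetical sketch of that mapping (my own illustrative code, not part of Cubase):

```python
PB_CENTER = 8192   # 14-bit pitchbend midpoint: no bend
PB_MAX = 16383     # full bend up

def semitones_to_pitchbend(offset, bend_range=5):
    """Convert a pitch offset in semitones into a 14-bit MIDI
    pitchbend value, given the synth's bend range setting."""
    value = PB_CENTER + round(offset / bend_range * 8192)
    return max(0, min(PB_MAX, value))

print(semitones_to_pitchbend(0))     # -> 8192, no bend
print(semitones_to_pitchbend(5))     # -> 16383, full bend up (clamped)
print(semitones_to_pitchbend(-2.5))  # -> 4096, halfway down
```

If the synth’s range were set to, say, 2 while the data was exported at 5, every bend would play back at less than half its intended depth and the pitch would be wrong.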
Finally, I need to set the synth to respond to the CC1 data which was extracted from the volume of my original singing. To do this, I go to the EVENT settings via one of the four buttons located in the centre of the instrument screen. Once in the EVENT settings, the bottom third of the screen displays the different “events” that can be used to control parameters within the synth which are:
- Modulation – (MIDI CC1) You will find a Modulation controller on most MIDI keyboards next to the pitch bend controller or as part of a joystick controller.
- Velocity – Most midi keyboards send velocity data which records how hard you play a note.
- Aftertouch – Some keyboards are able to send continuous data about how hard the key is being pressed – so having hit a key, you can keep it depressed but raise it and lower it to control the amplitude of the sustained note or assign it to other functions such as filter cutoff.
- Key Pitch – This responds to how high up the scale the note being played is and you can change the amount a parameter is affected accordingly (particularly useful for controlling the filter cutoff on a low pass as you go up the scale).
You can see these settings in the picture below:
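As a small aside on the last of those events: “Key Pitch” tracking can be pictured as scaling a parameter by how far the note sits above some reference key. This sketch is my own illustration with invented names and numbers, not how Spector computes it:

```python
def key_tracked_cutoff(note, base_cutoff=1000.0, track=0.5):
    """Scale a low pass cutoff (Hz) by how far the MIDI note sits
    above middle C (note 60). 'track' sets how strongly key pitch
    affects the parameter (0 = no tracking, 1 = full tracking)."""
    semitones_above = note - 60
    return base_cutoff * 2 ** (semitones_above / 12 * track)

print(round(key_tracked_cutoff(60)))  # -> 1000, middle C is unchanged
print(round(key_tracked_cutoff(72)))  # an octave up: cutoff raised
```

This is why key tracking is handy on a low pass filter: higher notes get a proportionally higher cutoff, so they stay bright instead of being swallowed by the filter.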
It is the Modulation Wheel setting that relates to my CC1 data, and whilst listening back to the synth playing my pad part I tinkered with the synth controls to determine which ones would affect the sound in the way I imagined, to create the morphing, evolving sound. I decided to add the cutoff for filter 1 (Cut 1), the Morph amount, the cutoff for filter 2 (Cut 2) and the volume, and tinkered with the amounts while listening back to the pad playing to make sure they responded exactly how I wanted them to. The results were a morphing, filtered pad, exactly as I intended. You can hear this right at the beginning of the finished track via the Soundcloud clip at the bottom of the page.
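This is the real power of CC1 here: one stream of controller data drives several parameters at once, each with its own depth. The toy “modulation matrix” below is my own sketch of that idea, with invented names and amounts; the real routing is set on Spector’s EVENT page:

```python
def apply_modulation(cc1, routings):
    """cc1: incoming controller value, 0-127.
    routings: dict of parameter name -> (base value, modulation depth).
    Returns each parameter's modulated value."""
    amount = cc1 / 127.0
    return {p: base + depth * amount for p, (base, depth) in routings.items()}

# Illustrative depths only - the four destinations I used on Spector
routings = {
    "cut1":   (200.0, 4000.0),  # filter 1 cutoff, Hz
    "morph":  (0.0, 1.0),       # morph amount, 0-1
    "cut2":   (500.0, 3000.0),  # filter 2 cutoff, Hz
    "volume": (0.2, 0.8),       # output level, 0-1
}
print(apply_modulation(127, routings))  # everything at full depth
```

As the sung volume swells (CC1 rising), all four destinations open up together, which is what produces the single coherent “morphing” gesture.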
This is a complex process which would normally be done in stages: firstly, create the midi note and attempt to do the pitchbend on-the-fly; then edit the pitchbend data to be accurate; then record automation for all the other controllers (which in this case would be a further four parts). Much of this process would be reactive and experimental as you have no real understanding of how the layers of controllers are going to work together. By singing the part in, I had a reference to listen back to and a map of what should be happening and when; so finding the controllers was purposeful and precise.
With this technique, the sound is already there and it is simply a case of transferring it to another medium. Using normal techniques, you have to build from scratch with nothing to refer to, and it can often result in a sound that isn’t what you originally intended. Once more, My Musical Mouth has made a complicated process precise, easy, enjoyable and instant.
This is quite an extensive article even though it is merely an overview. If you need the more in-depth tutorial, it is available to all our customers or anyone purchasing a full copy of Cubase 6 simply by emailing email@example.com with your home address or customer number.