Not sure if this fits the bill, but… A five-beat rhythm track was created in Audacity, then multi-tracked and mixed with different pitches and effects and assigned to the extreme left or right channels, with one mix being in “true” stereo and one dead centre in mono. A couple of mixes were placed to provide a counter-rhythm. I hope.
A drone track was added in 2 octaves (one left, one right) as well as a Bluecat mix which gives that jittery effect.
For this Junto I only had about 2 hours, so I just built a quick saw-based patch on the Mininova, recorded some C-minor stuff on one track, some dissonances and bass notes on another track, panned them to either side and added some drums and percussive elements (with different samples in extreme panning). The idea was to have something that gives a sense of “otherness” by giving each ear a different spectrum of tones, either consonant or dissonant. A centered patch only plays the root note with some filter and resonance sweeps, adding to the suspenseful feeling and gluing the whole thing together. The most work was sidechaining everything to the left and right kicks and carefully adjusting sample gains to get something coherent …
If I had time for another go, I’d try to:
- make the patches for the left and right recordings a little more distinct, and
- replace the centered patch with some stereo delays fed by the panned tracks.
Prompted to “[use] stereo as a compositional tool”, I took this in the generative sense and built a patch on my modular system centered around a ‘swap sides’ event, where two hard-panned sequences swap over to the other stereo channel. This swap event is triggered by a clock divider once a certain number of note events from either sequence has been counted.
Technical detail … the swap switches a simple -5V/+5V voltage source (run through a slew limiter to give a brief ‘crossover’ period in the middle!), and the 2xVCX takes care of most of the logistics. The swap signal is also intentionally patched to the DSP rate, so you hear a bit of wobbliness on entering and exiting a swap. The slew-limited swap signal also runs through Compare 2, deriving some latched gates that feed into the character of the two voice sequences.
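The counting-and-swap idea translates neatly into software terms. Here is a minimal, hypothetical sketch (the event count and crossfade length are assumptions, not the actual patch settings): two hard-panned voices trade sides every N note events, with a short linear ramp standing in for the slew limiter’s crossover period.

```python
# Hypothetical sketch of the 'swap sides' event: two hard-panned sequences
# exchange stereo channels after a set number of note events. A brief linear
# ramp mimics the slew limiter's 'crossover' period at each swap.
SWAP_AFTER = 8  # assumed number of note events between swaps

def pan_for_event(event_index, crossfade_events=1):
    """Return the pan for voice A in [-1.0 (left), +1.0 (right)];
    voice B would simply get the negated value."""
    block, pos = divmod(event_index, SWAP_AFTER)
    side = -1.0 if block % 2 == 0 else 1.0  # which side voice A sits on
    if pos < crossfade_events:  # slewed crossover from the previous side
        frac = pos / crossfade_events
        return -side + 2 * side * frac
    return side
```

With `crossfade_events` greater than 1, the first few events of each block trace the ramp between sides, which is roughly what the slew limiter does to the ±5V swap signal.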
Wanted to use stereo, and the depth created by reverb and the smaller differences between left and right, as a way to move sounds from background to foreground and to place different emphasis on the instruments by changing their position in the stereo field. Also tried to take the bass, which you usually experience as mono due to the low frequencies, from one place to another by opening up the filter while also changing the stereo width of the signal.
What if … we heard deep frequencies on the left and high frequencies on the right? This was the idea I applied to a track that was already in progress (most of the sound is Auxture with my own samples, plus a Grain Scanner preset with lovely ‘bowed metals’).
Something I remember from the collab projects: I really hate splitting a track into two totally different channels. But while listening to this I asked myself: why is our brain unable to merge the sounds into mono? Are there musicians in the universe who can? We can trick our eyes into seeing stereo images just by looking at two different mono images (see cover)…
I struggled with this riddle! It’s not at all a full piece, just a sound idea that came from thinking about the term “stereo” as a compositional tool.
I chewed on the idea of separate inputs becoming one whole thing in your head, in a nonlinear, everything-all-at-once sort of way. This led me to think of DJs’ beat matching, where you intentionally have a different musical input in each ear, to imagine how the overlap and transition work out.
So, I recorded Laurie Anderson’s “O Superman” and David Bowie’s “Space Oddity” on the turntables and filled out the imagined part with additional sound elements. Not sure what to do with this conceptualization, but it was fun thinking about combinations. I also tinkered with cross sections of Janet Jackson’s “That’s the Way Love Goes” with Stravinsky’s “Rite of Spring” and Beethoven’s piano sonata Op. 31 No. 1 mvt II with Soul II Soul “Back to Life.”
Ok, logos is poor, but you definitely should moderate that pathos
The poietic assumption in all the work I develop for the Disquiet Junto as Ossimuratore is that verbality is anti-musical matter. In order to try to silence our logotic mind, I make use of noisy textures from which some plain, repetitive melodic material emerges and finally disappears.
Instruments, noises and two contrasting master delay effects are all hard-panned left or right, except for the pianoforte, which slowly flows left/right.
This started with field recording of a bouncing table tennis ball. I then added percussion (Klevgrand Borsta), arpeggio (Arturia Pigments and KORG Odyssey) and a very distorted bass (DI into a wall of distortion and wavefolding). At every turn I messed with the stereo field with modulation (both Ableton native LFOs, sometimes modulating themselves, and CableGuys’ ShaperBox). It feels unfinished but, hey, it’s midnight here!
My first idea was to regard the panning of a note as a quality like pitch or loudness. I wanted to try different concepts and see how I could overcome the restrictions of Cubase…
I discovered the Cubase “note to CC” MIDI effect, which transforms velocity into a CC channel. I used that for the bass: I played the sound with very different velocities, transformed those into a CC, and mapped it to the panning. With a following MIDI transformer I compressed the velocity values into a smaller range to restrict the dynamics.
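As a rough sketch of that mapping (in Python, with made-up range values rather than the actual Cubase settings), a note’s velocity becomes a pan CC value squeezed into a narrower band:

```python
# Hypothetical sketch of the velocity-to-pan idea: MIDI velocity (0-127)
# becomes a pan CC value, and the 'MIDI transformer' step compresses it
# into a narrower band so the panning stays restrained.
def velocity_to_pan_cc(velocity, lo=32, hi=96):
    """Map MIDI velocity 0-127 to a pan CC value restricted to [lo, hi]
    (64 = centre). The lo/hi limits are assumed, not from the post."""
    return lo + round(velocity / 127 * (hi - lo))
```

So a very soft note lands at CC 32 (left of centre) and a very hard hit at CC 96 (right of centre), never at the extremes.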
For the lead of the first section I created a sound in Vital with three oscillators. I employed the CC-Stepper to create three different CC sequences (with random addition), mapped these to the panning of the three oscillators, and set the CC-Stepper mode to step with every note. So for every note played, the three oscillators get a new individual panning.
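A per-note stepper of that kind could be sketched as follows; the step values and jitter amount are hypothetical placeholders for the CC-Stepper’s sequence and its “random addition”:

```python
import random

# Hypothetical sketch of the CC-Stepper idea: a stepped CC sequence with a
# random offset advances one step per note. Three of these, one per
# oscillator, give each oscillator a fresh pan position on every note.
def make_stepper(steps, jitter=8, seed=None):
    """Return a function yielding the next CC value (0-127) per note.
    'steps' is a base CC sequence; 'jitter' is the assumed random addition."""
    rng = random.Random(seed)
    state = {"i": 0}
    def next_cc():
        base = steps[state["i"] % len(steps)]
        state["i"] += 1
        return max(0, min(127, base + rng.randint(-jitter, jitter)))
    return next_cc
```

Calling `make_stepper` three times with different sequences, and invoking each returned function once per note, mirrors the three independent pan streams described above.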
In the second section the panning of the lead is automated in saw- and triangle-waves.
In the third section I applied the Cubase imager that sets pan and width in four frequency bands to the lead (a Melodica and a Labs Onde Musical layer). I recorded the automation of these eight factors with the Korg nanoKONTROL.
On the whole I had no grand artistic idea, but it was great to discover what these approaches provoke.
Loved the prompt! Especially the second part: “Now think about it a bit more”.
Because we so rarely do, don’t we?
Or at least, I so rarely do, don’t I?
In the ways I make music, the notion of ‘stereo’ appears mostly as a format; as the inevitable, implicitly assumed, unquestioned end point I ‘mix down’ to. Other formats exist, of course, but I tend to think of them mostly as professional niches, like film scoring, or as super expensive eccentricities, like ambisonics.
Moreover, implicit in the notion of a ‘format’ itself are many assumptions. Such as that stereo necessarily means a symmetrical left-right setup: a pair of headphones, or a pair of loudspeakers positioned for a sweet spot.
So… I thought I’d try an experiment. For this week’s prompt I made a short track that requires an unusual positioning of a stereo speaker set. You, the listener, should be around 30-50 centimeters in front of the speaker normally designated as ‘left’, while the speaker normally designated as ‘right’ should be farther away, ideally at least 3 meters. Somewhere near a corner seems to work well for the ‘right’ speaker, perhaps pointing away from you.
Hope you enjoy
Of course, uploading the track to SoundCloud would sort of defeat the purpose. Which is another aspect I found interesting when thinking a bit more about stereo. On the one hand it seems mad to make music that would require listeners to rearrange their speaker setup: how many people are going to be willing to do that? And how could such a thing possibly work in a playlist with other people’s tracks?
On the other hand: aren’t we missing out on some interesting, creative, fun stuff? We’re surrounded by speakers these days: in our phones, tablets, laptops; we have bluetooth speakers, headphones, etc etc. What a waste to not use them to create rich, quirky, unusual listening environments.
I had a foggy idea of using something simple like sine waves that “swooped” to their pitch, moving in the stereo field as they moved between the frequencies of different pitches. However, the gliding pitches were too prominent and overwhelmed everything else.
So, I ended up using the basic patch of Ableton’s Operator, sampling a long tone, and then using Sampler so that different notes played at different places in the stereo field (the lowest notes went to the leftmost part of the stereo field, the highest notes to the right).
This didn’t make a huge difference, but when I listen to it I can feel the tones “shifting,” and that’s a good start for me!
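The Sampler zone setup described above amounts to a simple linear map from note number to stereo position. A hypothetical sketch (the note-range endpoints are assumptions, not the actual zone settings):

```python
# Hypothetical sketch of pitch-to-pan mapping: the lowest playable note
# sits hard left, the highest hard right, with notes outside the range
# clamped to the edges.
def note_to_pan(note, low=36, high=96):
    """Map a MIDI note number to pan in [-1.0 (left), 1.0 (right)];
    the low/high endpoints are assumed values."""
    note = max(low, min(high, note))
    return -1.0 + 2.0 * (note - low) / (high - low)
```

With these assumed endpoints, middle-range notes land near the centre, and each semitone upward nudges the sound a little further right.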
Ear candy provided by AudioThing Noises and Michael Norris’ Spectral DroneMaker, pitch warble accomplished via RC-20.