Hi folks - It’s been a while, but after Marc’s very kind mention of me in the newsletter and such an intriguing challenge this week, I felt I had to do something.
Here’s my track:
featuring lapsteel processed through my Serge /Random Source Eurorack modules.
PS: Can someone remind me how to embed a SoundCloud track, rather than just a link? I can’t seem to remember how to do it anymore.
EDIT: I fixed it now…
I thought that a robot feeling sad would be a grainy emulation of human emotion. So I fed some human sadness (flutes) into my two favourite robots (morphagene and w/) - and then played their various playback knobs by hand for a few minutes.
The sound of a robot with the blues: apparently it takes “Black Man Blues” by DeFord Bailey, runs it through a vocoder, quantizes him a bit, evens out some of his phrase lengths, and accompanies him with some samples. The beats are from Aphex Twin, Archie Bell and the Drells, and Romulo Caicedo. The synth pad is sampled from another of my Junto tracks (https://soundcloud.com/ethanhein/first-chair-at-the-wavetable-disquiet0315). I played back the clips in a random sequence using Follow Actions. The bass is a patch on the Helm synth.
If a robot is sad, it’s likely the fault of a human being. “To some degree, we all live out our emotional lives through technology,” writes Michael Harris in his book The End of Absence. “Yet every time we use our technologies as a mediator for the chaotic elements of our lives, we change our relationship with those parts of our lives that we seek to control … ultimately, we seek machines that can understand our feelings perfectly.”
Suss Müsik has never been comfortable unloading our deepest emotional traumas in any context—human, animal or machine—solely because we don’t want to subject our whinging upon others. A team of scientists once determined that the root cause of unhappiness is the persistence of painful childhood memories, which fester and accumulate over long periods of time. Now imagine a robot programmed to store entire reams of superficial data, terabytes of squalor dumped into its gloomy computerized brain like some digital landfill for the morbidly wretched. Hey, you’d feel sad too.
For this sedate piece, Suss Müsik aimed for a result somewhere between To Rococo Rot and Tom Waits. We started with a somber sequence on prepared piano and played it through a Boss RV-3 on the 12th dial setting. Two electronic figures were then composed for Moog synthesizer to imagine the sounds a sobbing robot might create. The misery ends with a sad trumpet and maudlin fake strings pecking at the carrion.
The piece is titled 0011101000101000, which is the binary code for an ASCII frowning face. The image is a sad little robot in Suss Müsik studios who feels a lot worse after hearing this piece.
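For anyone curious how the title encodes a frown: read the 16 bits as two 8-bit ASCII bytes. A quick sketch (the variable names are mine, not part of the piece):

```python
# Decode the title's 16 bits as two 8-bit ASCII characters.
bits = "0011101000101000"

# Split into 8-bit chunks, parse each as a base-2 integer, map to a character.
chars = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

print(chars)  # → :(
```

The first byte (00111010 = 58) is a colon and the second (00101000 = 40) is an opening parenthesis: a frowning face.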
Robots can burn out.
Nothing more to say, except that the mastering is strange — but that fits the subject/story.
Used this as an excuse to practice patching a variety of sounds on my MS-20 mini.
I believe that dancing is the best cure for the blues.
So I would program a robot to play their favourite tune and bust a move to improve their mood.
Excellent piece. Quite soothing and comforting to me. If your blue robot doesn’t feel better after this try again tomorrow. Robots are all bipolar aren’t they? (you know, in their heads all goes from 1 to 0 to 1 to 0 all the time). Hard to be a Robot with the blues.
Thank you! Robot feeling better. We played the entire Junto 0331 playlist and that seems to have helped.
I think if a robot were to have the blues, it would first turn to Google to identify the specific emotions that go into feeling sad, and the reasons for sadness. It wouldn’t surprise me if, during Googling, one phrase that consistently showed up was “anger turned inwards”.
So I thought the music it then created would be slow, replete with mechanical and metallic sounds [because I think a robot would take the idea of “inwards” literally and reflect on its own body], with a noisy, smouldering core.
Layer 1: Drums
Layer 2: Field recording of squeaking metal, with reverb
Layer 3: Bowed cymbal + reverb, panned left
Layer 4: Bowed cymbal + reverb, panned right
Layer 5: Distorted guitar
Layer 6: Field recording of an MRI machine, slowed 400%
In my mind, the answer is simple: like a piece of hi-tech soul from the early 1990s. (This turns out to be the answer to a lot of other questions too).
So I constructed something suitable using the Eurorack modular, and jammed it out live into Logic. The synth noises are an Intellijel Shapeshifter (and a loop of itself in Morphagene) and the 4ms SMR, with a bit of filtering and FX in the rack too.
If I could give 10 Likes I would. You’re killing it with that dancing robot video!
Robots and sines prompted some FM synthesis: live tweaking of a three-sine-oscillator FM patch that I coded into Axoloti some time ago. Recorded to cassette.
Here’s mine! Lots of wavy tape-y-ness with ER-301 and W/.
Ready to finish going through the submissions now
I can see I wasn’t the only one who instantly thought of Marvin from HGttG when reading “robot with the blues”.
I have tried to recreate how it must sound inside a depressed robot’s head, using samples of the late Alan Rickman’s awesome performance as Marvin:
I sampled the best lines from the movie, and used a combination of Ableton’s Arpeggiator, Random, and Simpler devices to play bits at random, slightly overlapping. This runs in parallel with a resonator, a ring modulator, a tape echo emulation (Echo), and a resonant membrane simulation (Corpus), which only trigger when a gate fires. All are fed by the random samples.
Great prompt and clever title!
This is actually the second try this week, wasn’t happy with some elements so pulled what I liked and started again. Taking cues from Asimov, Kraftwerk, TMBG, Star Trek and all the other science fiction I’ve read and watched, I came up with a darker side of the equation. A place where grunts will always be grunts, regardless of their origins. Organic or mechanical it will likely be the same. This is an environmental piece, more about the mood than the music.
This piece uses many field recordings mixed in Cubasis with music and beats from Patterning, Figure, and Gadget. Synths are Micrologue and Mood. Effects are mostly EQ, reverb, and overdrive. iVoxel gave me fits with the vocals, but it sounds right for the part.
*Sorry, a little heavy on the socio-philosophical angle :-/
Hey everyone! This is my first Junto submission. I’ve long wanted to give it a try, and this week’s prompt spoke to me.
So here’s the setup:
When you’re using machine learning systems to generate output – whether it’s text, images, or audio – there’s often a “temperature” parameter you can set, which deeply affects that output’s character. A sample generated at high temperature will be “adventurous,” maybe even chaotic; a sample generated at low temperature will be “safe” and usually pretty blah.
When I read “a robot has the blues,” I thought immediately of an ML system operating at low temperature. A bit glum. Depressed.
Generally, you set the temperature for an entire sample, so the resulting output – a whole sentence, a big chunk of audio – is uniformly surprising or boring. But it’s also possible to adjust the temperature as the generation process unfolds, and that’s what I did here. Before I went to sleep last night, I set my computer up to generate a few dozen samples using a model trained on a dataset that leans cinematic and synth-y. Each sample was to be 120 seconds long, declining smoothly from temperature 1 to temperature 0.
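The declining-temperature idea can be sketched in a few lines. This is a minimal illustration, not the actual generation setup described above: `model_step` is a hypothetical stand-in for whatever model produces next-step logits, and the linear schedule from 1 to 0 mirrors the smooth decline over each 120-second sample.

```python
import numpy as np

def sample_with_temperature(logits, temperature):
    """Sample one index from logits scaled by temperature.

    High temperature flattens the distribution (adventurous, chaotic picks);
    low temperature sharpens it toward the likeliest choice. At temperature
    zero we fall back to a plain argmax, since dividing by zero is undefined.
    """
    if temperature <= 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    # Softmax with a max-shift for numerical stability.
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

def generate(model_step, n_steps, start_temp=1.0, end_temp=0.0):
    """Generate n_steps tokens while linearly annealing the temperature.

    model_step(history) is assumed to return logits for the next step;
    it stands in for the trained model being sampled.
    """
    history = []
    for i in range(n_steps):
        t = start_temp + (end_temp - start_temp) * i / max(n_steps - 1, 1)
        history.append(sample_with_temperature(model_step(history), t))
    return history
```

Early steps draw from a near-uniform distribution, while the final step is effectively argmax: exactly the surprising-to-blah slide the track is built on.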
This morning, I sorted through those samples, chose three that I liked, dropped them into Ableton Live, and built a short track around them. I hope you can still detect the temperature decline, modulo my own arrangement. The percussion is from a Moog DFAM.
So glad you could join in!