What would music sound like if humans evolved underwater? Answer this question by making a track using any tools at your disposal. The one rule is that modular synths must be included. Feel free to share your process in the comments. We would love to hear how each of you interprets this idea.
This new Patchable was submitted by @mrwilliam
HOW TO PLAY
THIS PATCHABLE RUNS FROM September 5th – September 16th
- Create an original piece of music following the above challenge.
- Include the suffix PATCHABLE-0002 in the track title. A completed track title would look like this: Spiral Vortex PATCHABLE-0002.
- Upload your track to SoundCloud by end of day September 16th.
- Include a link to your track and patch notes in the comments below.
10 thoughts on “Patchable 0002”
Evolving underwater, how would we communicate and speak across the lonely vastness of the sea? A lonely siren sings her song to you.
So, recording this: I’ve had a Make Noise Mimeophon and a Mutable Instruments Clouds clone for all of 20 minutes, realistically, and I’m not quite sure how either works yet!
My DAW (Cubase) is sending clock to a Korg SQ-1, which is sending clock to a Pittsburgh KB-1 controller. Clock out of that is going to the CV bus on the Make Noise case, providing clock to the Pittsburgh SV-1 and to a clock divider.
For sound, I had a simple four-note sequence coming from the KB-1 sequencing the first oscillator on the SV-1. The first oscillator’s saw wave, along with its blade wave (kind of like a saw wave but with a “pulse width”-style duty cycle, also being clocked), was going through the SV-1’s onboard mixer and its normalled semi-modular connections: mixer > filter > VCA (using the onboard envelope generator).
Oscillator 2 was being played with the voltage pads of the KB-1 (I had a few stackable cables on this one, which I’ll get into later), also going into the SV-1 mixer to follow the same signal path as oscillator 1.
Maths channel 1 was set to cycle for the “siren” type sound and that was being played manually with the log/exp knob.
The output of the SV-1 and the unity output of channel 1 of Maths were sent to channels 1 and 2 (respectively) of a Doepfer A-135-2 quad VCA/mixer, used here as just a simple mixer. That sent the mixed mono signal to the Make Noise Mimeophon, then the stereo out of that into a Momo Modular uBursts (Clouds clone), into a stereo channel of a Ladik M-175 mixer, then out to my audio interface.
For modulation of effects:
Keep in mind I’d never used Clouds or the Mimeophon before I started on this and am still learning them. Reading the manuals is still on my to-do list.
The Mimeophon was receiving master clock. I’m still learning the settings on this thing, so I’m not quite sure what is what. I can tell you, however, that I was live-tweaking the repeats knob, bringing it to and from *almost* the point of self-oscillation, until the end, where I went full ham on it (during the fade-out).
Texture on Clouds was being modulated by the same voltage that was controlling the second oscillator of the SV-1. Whether or not it was actually doing anything, I don’t know!
The 1V/oct on Clouds was also being controlled by the same voltage that was controlling the second oscillator of the SV-1, but I had this going through channel 2 of Maths to invert the signal.
The /3 out of the clock divider is triggering the freeze on Clouds.
For post production, I did some EQ tweaks and compression.
@mrwilliam, I cannot wait to listen to this!
I thought about this question from many different perspectives. What would humans look like? What kind of instruments would they play? How does sound work underwater? Eventually I figured mammals with more complex appendages might gather in a cavern for a performance. The cavern might have several instruments performers would swim at or against to create sound. I took a bit of an echo and Doppler effect approach for the sound.
The patch itself used the DPO and Benjolin as sound sources. The DPO went into the FxDF and out from the RxMx to get a harmonically rich sound that shifted constantly across the frequency spectrum. The output then went into a feedback loop in this order: DPO > FxDF & RxMx > mixer > Mimeophon > E560 frequency shifter > QPAS band-pass, back into the mixer. The Benjolin’s LP out also went into the mixer and was injected into the feedback path. The left and right outputs of the QPAS were my only outs. Wogglebug, Brains, Pressure Points, QPLFO, and Maths handled modulation.
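The feedback path above (mixer > delay > frequency shifter > filter > back into the mixer) is, at its core, a delayed signal being attenuated and re-mixed with the input. Here is a minimal software sketch of that topology, a plain feedback comb filter in pure Python; the frequency shifter and filter stages are left out for brevity, so this is an illustration of the loop, not the full patch:

```python
# Minimal sketch of a feedback delay loop (comb filter): the dry input
# is mixed with a delayed, attenuated copy of the loop's own output,
# the same topology as the mixer -> Mimeophon feedback path above.

def feedback_delay(signal, delay_samples, feedback_gain):
    """Return `signal` processed through a feedback delay line."""
    out = []
    buf = [0.0] * delay_samples  # circular delay buffer
    idx = 0
    for x in signal:
        delayed = buf[idx]
        y = x + feedback_gain * delayed  # mix dry input with loop return
        buf[idx] = y                     # write loop output back into the line
        idx = (idx + 1) % delay_samples
        out.append(y)
    return out

# A single impulse produces a train of geometrically decaying echoes:
echoes = feedback_delay([1.0] + [0.0] * 9, delay_samples=3, feedback_gain=0.5)
# -> [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

With a feedback gain below 1.0 the echoes decay; at or above 1.0 the loop runs away and self-oscillates, which is why hardware feedback patches like this one keep an attenuator (here, the mixer level) somewhere in the path.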
Some light mastering was used to bring up levels a little, but other than that this is all modular, all done in one take, no mixing or editing.
Warning, there is a fair amount of dynamic range, if you turn your volume up be prepared for some pretty loud moments.
For this track, I envisioned how we currently navigate underwater, and imagined whether humans who had evolved to live underwater would hear the pings, bips, and boops as music.
This track is all modular:
Pamela’s, out to a Varigate 8+, controls the rhythm.
The sub-bass is SARA, pitch controlled by the ADM06, which is controlled by the trigger out on the PGH envelope. The envelope also controls the VCA, giving the track its beat.
The clack (snare) sound is a triggered Rings, with both odd and even outputs going to the JOVE, to create the muted, snare-like sound.
The repeated, almost-submarine ping is Spectrum, through Streams as a VCA, controlled by the Varigate 8+, out to a delay.
The main melody is also controlled by SARA, with Arpitecht controlling the tones, going into the ASDRVCA, to control hits.
Like the challenge! I’m a fan of the disquiet junto projects, glad to see more doing this format.
I’ve often thought about how sea creatures hear differently. I’ve been told sound moves faster in water; maybe that’s why whales can recognize each other’s calls from hundreds of miles away. Crazy.
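For what it’s worth, the claim checks out: sound travels roughly 343 m/s in air at 20 °C but about 1480 m/s in seawater, so any given pitch has a wavelength more than four times longer underwater. A quick back-of-the-envelope check in Python (the 1480 m/s figure is a typical textbook value for seawater, not a measurement):

```python
# Rough comparison of sound in air vs. seawater.
SPEED_AIR = 343.0     # m/s, dry air at 20 C
SPEED_WATER = 1480.0  # m/s, typical seawater value

def wavelength(freq_hz, speed_m_s):
    """Wavelength in metres of a tone at `freq_hz` in a medium."""
    return speed_m_s / freq_hz

a440_air = wavelength(440.0, SPEED_AIR)      # ~0.78 m
a440_water = wavelength(440.0, SPEED_WATER)  # ~3.36 m
ratio = SPEED_WATER / SPEED_AIR              # ~4.3x faster underwater
```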
Anyway, I was thinking about layering watery textures with this one. Some drones w/ waves of texture over them. That’s it.
Started recording a couple layers of guitar using an EBow, then filtering w/ Sisters. Some sampled percussion sequenced w/ Pam’s. Sine wave w/ feedback on the ER-301, sequenced in Ableton. That sound was put through Morphagene and then re-recorded to make the high tinkling sounds. Some mouth noises were also fed into Morphagene and looped. Mixed live on the ER-301.
Hi @sack, So happy you found us and joined in. What a cool piece. I love your interpretation. If you are interested, you should consider joining us on Discord for chat. https://discord.gg/pxpyqac
I’ve always had a fear of drowning, so this was a reaction to that: how sounds and ambience would take over, and the uneasiness of whatever may be out there. I was going for an atmosphere of submergence, which plays throughout. The other textures are a loose interpretation of underwater activities/things/living creatures. I attempted to incorporate some subtle melody to give the ocean its own voice, which otherwise seems vast, mysterious, and sometimes menacing.
Second patchable for Repatch!
This is hard to make patch notes for, but the tl;dr is it’s a pitch shifter/spring reverb feedback loop, plus some other stuff.
pitch from md – blade wave is modulated by timbre wave, modulated by clep diaz and eg release
mix out from vca into monsoon, which is in pitch shifter mode
monsoon outputs multed with mutes into both my mixer and into spring reverb in and noise in of grandmother
triangle from werkstatt into monsoon stereo input
pam’s into trig in, feedback in, reverb in, density, freeze
werkstatt doing a quick filter sweep with high resonance for bubbles
reverb out from grandmother sent into audio in of sem and bp and lp out into vca (going back into monsoon)
sine from osc into sem
envelope from pam’s into sem
Grandmother osc 1 triangle at 32’ osc 2 square at 16’
sequence from MD into filter cutoff in
Trigger from Pam’s into envelope trigger in
Sequence from MD into Pitch In of osc 1 and they are synced
Multed pam’s envelope into LFO Rate In
Tiny bit of wiggly tape delay on everything from a delay pedal
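For anyone curious what a pitch-shifter feedback loop does structurally: each trip around the loop re-shifts whatever came back, so echoes climb (or fall) in pitch as they repeat. Here is a toy block-based sketch in Python, using naive nearest-neighbour resampling as a stand-in for the Monsoon’s pitch shifter; the names and block handling are illustrative, not taken from the patch:

```python
# Toy sketch of a pitch-shifter feedback loop: each block's output is
# pitch-shifted and fed back into the next block, so repeats keep
# shifting further each time around the loop.

def pitch_shift(block, ratio):
    """Crudely resample a block; ratio > 1 raises pitch (shorter block)."""
    n = int(len(block) / ratio)
    return [block[int(i * ratio)] for i in range(n)]

def shifting_feedback(dry_blocks, ratio, gain):
    """Process fixed-size blocks through the feedback loop."""
    out = []
    loop = []  # what came back around the feedback path last block
    for block in dry_blocks:
        mixed = [x + gain * (loop[i] if i < len(loop) else 0.0)
                 for i, x in enumerate(block)]
        loop = pitch_shift(mixed, ratio)  # shifted copy feeds the next pass
        out.append(mixed)
    return out
```

With a shift ratio of 1.0 this degenerates into a plain one-block feedback delay; any other ratio gives the climbing/falling “infinite stairway” character this kind of patch is known for.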
this was really cool!
This is a live eurorack modular synth jam for the second challenge of this month’s New York Modular Society Patchables. The first voice stream is two channels of an Instruo CS-L into an Optomix, with the strike controlled by two channels of Make Noise Maths in Cycle mode. The second stream is an Erbeverb feeding back on itself, and later I use an Intellijel Quadrax in burst mode to gate each channel. The third stream is a Serge NTO being controlled by a Stillson Hammer sequencer into a Serge VQVCF and into an Echophon. I’m controlling the tempo of this stream from a Make Noise Tempi. Headphones are highly recommended.
The purpose of this challenge was to create music that answers “What if humans had evolved under water?” This piece is titled “Life of Coral.” I’m using the Erbeverb to evoke waves that can be heard both above and below water. Later the Erbeverb stream becomes the chatter of dolphins. The Instruo stream is the deep bumps of sounds that carry underwater, and the NTO stream is meant to evoke schools of fish flitting through the coral. Enjoy!