Does the script give any indication of the note playing, anywhere? (You mention C4 would mark 1.5 °C of warming, so I’m wondering what note is playing today?)
Is it possible to introduce a second tone? I was thinking that 350 ppm (often cited as the safe concentration of carbon dioxide in the atmosphere) would be a handy base note, or reference point. Probably very discordant, but maybe that would be appropriate!
NB: I’m no coder, so these are just idle thoughts (more than questions) and I’m not expecting any answers that I could do anything with!
Thank you for doing this - I once had a similar idea for a reworking of fiahod which would show the disintegration of the global ecosystem over time, but my coding skillz just weren’t up to it. Oh well, c’est la vie…
Thank you for sharing your thoughts! I appreciate it very much, and I’m glad it spurred some ideas in you. Let me have a go at responding to them as best I can.
I did wonder about this, but the tone isn’t quantised - it maps straight from ppm to frequency - so if it showed the closest note it’d almost always be slightly off that note, and I worry that would confuse people. Maybe I should talk about frequencies instead, but I feel like the idea of a middle C is more accessible to most music folks than frequencies are. Should I quantise the tone?
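(For anyone curious what “maps straight from ppm to frequency” means in practice, here’s a minimal sketch. The actual script is in SuperCollider and its constants aren’t shown here, so the base point of 280 ppm sounding as middle C and the 0.5 Hz-per-ppm slope are purely illustrative assumptions.)

```python
# Hypothetical sketch of a direct ppm -> frequency mapping (NOT the
# actual script's numbers): a linear map where 280 ppm (roughly the
# pre-industrial level) sounds as middle C (~261.63 Hz), and every
# extra ppm raises the pitch by a fixed number of Hz.

def ppm_to_hz(ppm, base_ppm=280.0, base_hz=261.63, hz_per_ppm=0.5):
    """Map a CO2 concentration (ppm) straight to a frequency in Hz."""
    return base_hz + (ppm - base_ppm) * hz_per_ppm

# A present-day reading lands between chromatic notes:
print(ppm_to_hz(420.0))  # 331.63 Hz, just off E4 (~329.63 Hz)
```

Because the map is continuous, almost every reading falls between two notes of the chromatic scale - which is exactly why displaying “the nearest note” would usually be slightly wrong.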
I would need to actually learn me some SuperCollider to do this, but it’s almost certainly technically possible. I think an issue here would be that most folks would respond to the concordance or discordance of the two notes, rather than the distance between them, and in situations where it sounded harmonious, that harmony wouldn’t be “meaningful” in any sense. I do agree that a reference note would be useful though. Maybe it should start at a reference note and rise over a couple of seconds to the final note?
I would love to see more work like this out there! An early draft of this idea used the planetary boundaries framework and allowed the user to select between different metrics. But I felt like it didn’t need to be too complicated to say what I wanted to say with it, so ultimately decided to keep the experience as simple as possible, both to maximise the impact of the message and minimise development time. Finished is better than perfect, right?
Hmm, to some extent, maybe - but even a non-techie like me knows that A is 440 (tuning forks!) - and I’m sure that nice Mr Goggle would be able to answer any questions, anyway! (I’m guessing that’s what you mean by quantising the tone? Please do correct me if I’m wrong, I’m not very good at this stuff)
So, yes, even a display of the Hz would be good to see!
Oh lordy, no! Don’t go that far! It was just a random thought rather than a serious request!
That’s not a bad idea at all, actually! As long as it doesn’t require Deep Coder-y skills! (Is it something us users could change in the script, if we wanted?)
Me too! In my case, though, I think the initial idea was better than the practical reality of making it work!
Oh yes, for sure! And ppm is pretty cool just as it is! Thanks again!
Apologies, I should have been clearer! Right now it plays lots of frequencies that land in between different chromatic notes. By quantising the tone, I mean rounding the frequency to the nearest note in the chromatic scale, so you’ll be able to play along with it with other instruments without it sounding “out of tune”.
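(A minimal sketch of that rounding, assuming 12-tone equal temperament with A4 = 440 Hz - the standard frequency-to-MIDI-note conversion, not anything taken from the script itself:)

```python
import math

A4_HZ = 440.0  # concert pitch reference; A4 is MIDI note 69

def quantise_hz(freq):
    """Snap a frequency to the nearest note of the 12-tone
    equal-tempered chromatic scale."""
    # Convert Hz to a (fractional) MIDI note number, round to the
    # nearest whole note, then convert back to Hz.
    midi = round(69 + 12 * math.log2(freq / A4_HZ))
    return A4_HZ * 2 ** ((midi - 69) / 12)

print(quantise_hz(430.0))  # 440.0 -- snaps up to A4
```

In SuperCollider terms this would be a frequency-to-MIDI conversion, a round, and a MIDI-to-frequency conversion - a small change, but it’s what makes the tone playable alongside conventionally tuned instruments.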
Maybe a parameter in the settings that lets you choose to display ppm, Hz or MIDI note? That wouldn’t be too tricky to add.
What kind of changes would you like to be able to make as a user?
Not taken as criticism at all! I really appreciate the feedback, and you being so generous with your time using the script and sharing your thoughts.
I’ll look into adding an option to swap between ppm, Hz and MIDI note and probably quantise the note that’s played. But it might take a little while - I need to finish the data sonification module I’m working on for VCV Rack first!
This is really cool! Some time ago I had thought about how to do an audio piece or patch that translated and expressed a natural process like weather patterns, in real time, but didn’t know how to go about it. Are you aware of other artists that are doing similar work, feeding live data sets into audio systems?
“Live” sonification of this nature is a little more challenging as a design problem, because you need to predict what the data might do in the future and account for that. For example, my script won’t work well in a situation where atmospheric CO2 levels quadruple (though I feel like my script not working will be the least of our problems if that transpires).
It’s also super interesting to think what this might look like if you take computers out of the equation. Nikola Bašić’s Sea Organ is a great example, but honestly, so is a simple windchime. How could you build an instrument which is played by its surroundings, rather than a person?
Always up for chatting about this! Maybe we need a data sonification thread…
Exactly what I have been curious about… I’m fascinated by this concept of data sonification. A thread on the subject sounds like a great idea. I will be checking out your projects and the others mentioned for sure. Thanks for sharing this info!