Data sonification - turning information into sound and music

could we say, then, that performing music from a score is a sonification of the trace, which is not, in itself, music but which participates in and is related to music?

Is the data not the score in this sense? It could be understood as a trace (i.e., temperature data are not temperatures, they are a trace left by the measurement of temperatures) which can then become instructions to be interpreted by, as @radioedit put it,

In this sense, a score is a trace in the same way temperature data is a trace, but whereas temperature data is a trace left by temperatures, which can then be turned into music, a score is a trace left by music, which can then be turned into music.

Part of what I find interesting about this whole idea of “sonification” is that it changes the kinds of decisions we make as interpreters. In my experience with the script I mentioned above, my attention is initially drawn to how I’m designing the system (i.e., how am I mapping letters in plain text to notes) and then to the timbres of the destination instrument (in this case, a eurorack setup, since I’m using crow).


Some very interesting ideas here glazing my brain at work. I just wanted to write this down while I still remember: having brushed up against @radioedit’s new Loud Numbers VCV module, I think .CSV must have carved out a small alcove in my brain, and I couldn’t help noticing that NYSTHI has a module that lets you create CV-generated .CSV files. It’s called LOGAN.
Seems like there’s some fun to be had here :smiley:


I shared this on Discord already but it belongs here, too: an excerpt from Douglas Adams’ Dirk Gently’s Holistic Detective Agency:


I mean, data can become a score through artistic choices, but sonification in the scientific/communication world is not artistic. The choices made to sonify the data are not done following aesthetics, but intelligibility and clarity. In other words, I would say that if you’re making artistic decisions then you’re composing, not sonifying.

There is definitely some room for an overlapping field in terms of “data-generated composition” but again I would want to give it a different name. Maybe “musical sonification” (though again that’s kind of redundant), like a sort of “sound poetry” hybrid. Or then again, maybe it’s just “composition.”


Googling for “supercollider sonification” proves to be productive.

Reading the above as well



Thank you for starting this thread! Really looking forward to how this unfolds.

I work for an academic library, and a few colleagues and I started an extracurricular project to sonify the metadata describing scholarly literature: citations. It is the most slow-motion (cuz extracurricular) project I’ve ever been involved in, so I don’t have a lot to show at the moment. We also read the Scaletti article/chapter “Data Sonification ≠ Music.” Like @corbetta, I would strongly recommend it, and I would consider myself the resident Scaletti-ist of our project, though that puts me in the minority.

Just a few quick thoughts until I have more time to share:

TL;DR: auditory semantic literacy is not nearly as well established as the visual equivalent and there are some surprising gotchas.

I would just like to +1 @corbetta’s comment that the distinguishing features of sonification are the “mapping” and “intention” aspects. They go hand in hand in terms of understanding that perspective, and it helps me understand the distinction between, for example, temperature data and a score. In some sense the notes in a score might already be equivalent to MIDI note numbers (of course, I realize there are problems with that generalization!) through their intended use, so they don’t get mapped in quite the same way that temperature data does. My example here would simply be that turning temperature data into some musical parameter, say the 0-127 MIDI range, is more of a mapping because the sonifier has to make choices like, “What range of temperatures should be mapped to what range of the MIDI integers?” And once you take into account things like the logarithmic nature of human frequency perception, that temperature mapping might involve a lot more mapping, as it were, than a score does.
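To make those choices concrete, here’s a minimal sketch in Python. The input range (-10 to 40 °C) and output range (MIDI 48 to 84) are illustrative choices of my own, exactly the kind of decision the sonifier has to make:

```python
# A hypothetical temperature-to-MIDI mapping. The ranges below are
# illustrative assumptions, not the "right" answer.

def temp_to_midi(temp_c, t_min=-10.0, t_max=40.0, note_min=48, note_max=84):
    """Linearly map a temperature onto a MIDI note range, clamping outliers."""
    t = max(t_min, min(t_max, temp_c))       # clamp out-of-range readings
    frac = (t - t_min) / (t_max - t_min)     # 0.0 .. 1.0
    return round(note_min + frac * (note_max - note_min))

def midi_to_hz(note):
    """Equal-tempered conversion. MIDI note space is already logarithmic in
    frequency, which is one way to respect the log nature of pitch hearing."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(temp_to_midi(15))   # mid-range temperature gives a mid-range note (66)
print(midi_to_hz(69))     # 440.0 (A4)
```

Note that working in MIDI note numbers rather than raw Hz quietly handles the logarithmic-frequency issue, since equal steps in note number are equal ratios in frequency.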

One of the things that I have constantly struggled with on our project is that the most obvious examples (in terms of what members of the group have suggested most frequently) usually involve mapping some numerical data element to pitch, but this can frequently run opposite to the psychology of human hearing. Or at least it makes for surprising results that don’t convey meaning, which, again, if one is a Scaletti-ist :slight_smile: , is a core purpose of data sonification, analogous to data visualizations like bar charts, line graphs and scatterplots. Large numbers mean, in their simplest sense, a higher quantity of X. However, very high frequencies are often experienced as “tinny” or “thin” sounding and have conveyed small-ness in some of our experiments. Lower numbers mean less of X, but often my animal brain hears a low-frequency rumble and experiences a swell, some large-ness.

Another way in which the psychology of hearing is reversed from simplistic numerical meaning has to do with pitch proximity. We use data visualization, as the more established practice/field, as a kind of metaphor in our project to assess the effectiveness of communicating meaning. The visual field works well with many kinds of intuitive numerical meanings. For example, on a scatterplot, two points whose values are near each other also have X/Y positions that are near each other. In both the visual representation and the numerical representation, we can convey closeness/clustering and therefore convey similarity. But when we try nearness with pitch, the auditory experience is the opposite: there is no consonance that conveys similarity. Two pitches that are close, but not the same, in their simple numerical frequency representation sound dissonant. In our group this tends to produce an interpreted experience of the data that suggests things are dissimilar, which is not what we wanted.


I am now officially fascinated. Could you send me the PDF? After looking at Scaletti’s website, I’m dying to read it.

Great to read your thoughts, @stephenmeyer — the issues you bring up about pitch and perception are close to my interest areas (I’m a tuning nut/advocate). I’d add octave equivalence (or lack thereof) and pitch quantization as crucial elements that are often glossed over and/or give rise to complications on both the “artistic” and “scientific” side of the practice (for lack of better terms). But there’s much more to sound than pitch!

Don’t know if this is an ongoing concern, but perhaps something like Euler’s Gradus Suavitatis or Tenney’s Harmonic Distance function (in a justly-tuned context) could be used to convey “proximity” and “distance”?
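For anyone curious, both measures are simple to compute for a just ratio given as an integer pair. A sketch, assuming the standard formulations (Tenney: log2 of the product of numerator and denominator in lowest terms; Euler: 1 plus the sum of (prime − 1) × exponent over the factorization of that product):

```python
from math import gcd, log2

def tenney_hd(p, q):
    """Tenney's Harmonic Distance for the ratio p/q: log2(p * q) in lowest terms."""
    g = gcd(p, q)
    return log2((p // g) * (q // g))

def prime_factors(n):
    """Factor n into a {prime: exponent} dict by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gradus_suavitatis(p, q):
    """Euler's gradus suavitatis of p/q: 1 + sum of (prime - 1) * exponent
    over the prime factorization of p * q, taken in lowest terms."""
    g = gcd(p, q)
    n = (p // g) * (q // g)
    return 1 + sum((prime - 1) * exp for prime, exp in prime_factors(n).items())

print(tenney_hd(3, 2))           # ~2.585: perfect fifth, harmonically "close"
print(tenney_hd(16, 15))         # ~7.907: minor second, harmonically "distant"
print(gradus_suavitatis(3, 2))   # 4
```

Either function gives a scalar “distance” that could be mapped to a data dimension, with the nice property that harmonically simple intervals score as near even when their frequencies are far apart.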


Thank you for these recommendations. I will look into them and if you have any suggested citations for good intros, I will track them down through my library! I have neither formal music nor math training, so these things do take a while for me to internalize. Often worth the effort, tho!

This. Many times over! I’d be curious if anyone here knows of a sonification vocabulary, as it were. Guidance such as: use timbre as a categorical indicator equivalent to color/shape in data visualization, ___ as magnitude, etc.


What a great discussion this is turning into. Thank you to everyone for participating :green_heart:

Yeah, I think that’s the key difference between our viewpoints. In my world, music does do that, and so I don’t feel any need to separate sonification and music into different categories.

I actually really love the idea of a “trace” that you mention. Information is a “trace” of something that happened in the world, and you can turn that trace back into an experience again in many different ways. If you turn it into sound, then like @WilliamHazard I would call that sonification.

This sounds a little like a sonification version of Edward Tufte’s arguments that anything that isn’t pure information transfer in a visualization is “chartjunk”. Those ideas have their place when your goal is solely to communicate information to a receptive audience as efficiently as possible. They’re less effective when your goal is to grab attention, charm, persuade, or spark emotion.

When I’m creating music with data (in my podcast, for example), I’m usually trying to do one of those latter things. So when I’m making decisions about how to map the data to sound (which is what I use the word sonification to refer to), I’m balancing: “does this effectively tell the story?”, “does this represent the data accurately?” and “does this sound good?”

Yes!! This is something we encountered a lot while working on the podcast. Pitch is often the first mapping that people reach for (sometimes it’s volume), but it’s rarely a good choice, for many of the reasons that you both outline. I find it most useful when it’s limited to a relatively narrow range, mapped inversely to how you might expect (so lower pitches are bigger numbers) and used to convey very small changes, rather than very large ones. Even then, the issues that you bring up around consonance and dissonance still apply (see the discussion in this post).
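As an illustration of that inverse, narrow-range approach, here’s a sketch; the band (a fifth, C4 to G4) and value range are my own illustrative choices, not the ones from the podcast:

```python
def inverse_pitch_map(value, v_min, v_max, note_low=60, note_high=67):
    """Map a data value to a MIDI note in a narrow band (here a fifth,
    C4 to G4), INVERTED so that larger values give lower pitches.
    All ranges here are placeholder assumptions."""
    frac = (value - v_min) / (v_max - v_min)
    frac = max(0.0, min(1.0, frac))          # clamp outliers into the band
    return round(note_high - frac * (note_high - note_low))

print(inverse_pitch_map(0, 0, 100))     # 67: smallest value, highest note
print(inverse_pitch_map(100, 0, 100))   # 60: largest value, lowest note
```

Keeping the band narrow means small changes are still audible while avoiding the “tinny highs read as small” problem described earlier in the thread.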

These days I try to use pitch for category information instead, like in my London Under the Microscope sonification, where different variants of the virus are represented by different notes within a chord, fading in and out as they rise and fall in prominence.

Another little trick that I used in that piece is to use a secondary mapping to handle outliers. Cases and deaths are primarily mapped to volume/amplitude, and when the data is within the “normal” range, it has a strong LPF on it. But when it spikes up out of that range, the filter opens up and you hear the higher frequencies, making it sound more “raw” and emphasising that this is weird and unusual.
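A rough sketch of that primary/secondary mapping idea; the threshold and cutoff values below are placeholders, not the ones used in the piece:

```python
def map_event(value, normal_max, cutoff_normal=800.0, cutoff_open=12000.0):
    """Primary mapping: value to amplitude. Secondary mapping: when the
    value spikes past the 'normal' range, the low-pass cutoff opens so
    outliers sound brighter and rawer. All numbers here are assumptions."""
    amplitude = min(1.0, value / (normal_max * 2))       # primary: loudness
    cutoff = cutoff_open if value > normal_max else cutoff_normal
    return amplitude, cutoff

print(map_event(50, normal_max=100))    # quiet and dark: (0.25, 800.0)
print(map_event(250, normal_max=100))   # loud and bright: (1.0, 12000.0)
```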

I’ve heard this request many times, but sonification is in its infancy compared to visualization and so I haven’t seen one pop up yet. Maybe we should try to collaboratively compile something? I think the objective should probably be more in the vein of “advice/good starting point/this has worked for me”, rather than “this is objectively the best approach”.

One last thing, today I spotted Ableton plugging sonification as a compositional technique in their “One Thing” series :heart_eyes:


This is the low hanging fruit, but perhaps there’s something useful in the Sonification Handbook?
I haven’t read it, but the TOC looks promising as there are entire sections dedicated to aesthetics/design, techniques, and approaches.

Plus, it’s free!


The sonification handbook is ace, but I don’t think it has quite what’s being requested here in it - I remember hunting for it there once before…


I agree on all counts. That Sonification Handbook looks great and I’m interested in:

I could envision a “recipes” model like the Max Cookbook. On some level, what I want to learn from others as a supplement to completed examples is small discussions at the level of mappings. Like if a recipes model let you browse small examples based on the ingredients (hope I’m not bending this metaphor too far) where the “ingredients” are the parameters on both sides of the mapping.

Probably easier on the sonic parameter side (more constrained) than the data parameter side. But I’d love to see small examples of all the cases people have success with pitch or volume or reverb tail or timbre…


Is this something that could inspire an LCRP?


Oh yes! It’d be really fun to have a bunch of folks make different bits of music from the same dataset.


hmmm…if i am walking on a beach, the impression i leave is a “trace,” but i’m not convinced that this trace automatically becomes information at the moment of its creation. rather, the impression of my feet in the sand becomes information when someone or some other kind of living being encounters and reacts to it in some way.

in regards to the more general question of the difference between data sonification and music, it occurs to me that intent and reference are important.

if i am using data sonification techniques to make music, what the sounds refer to (at least in part) is something we generally call “art,” whatever that is.

on the other hand, it sounds like (no pun intended) the practice of data sonification was originally developed to serve a different purpose: to create sounds that refer back to the things that were measured to create the data from which the sounds were later generated.

does that make sense?


This Tomato Quintet at Machine Project was a neat data sonification that used CO2 emission data from ripening tomatoes to compose their own requiem, played during a meal of pasta with tomato sauce.


I’ve been a bit sparse on lines recently - life has been busy - so I was pleasantly surprised to see myself mentioned in a discussion about sonification :wink:

The concept of sonification is, in essence, quite broad. I believe we all share the consensus that when we talk about sonification, we are discussing how to turn something inaudible [such as a data set, or an image, or anything that isn’t a sound source] into sound. Sonification is also about why we are turning this inaudible object[s] into sound: what do we want to communicate?

On one level, you could view sonification in pragmatic terms [the Sonification Handbook referenced earlier in this thread highlights all the ways sonification can be used to convey information via sound]. Essentially, sonification in this regard is about directly conveying information to the listener. Think, for instance, of a Geiger counter: the faster the rate of clicks, the higher the radiation level.
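That Geiger-counter example maps a measured level to the rate of a click process rather than to pitch. A minimal sketch, assuming a Poisson click model and an arbitrary base rate:

```python
import random

def geiger_intervals(level, n_clicks=10, base_rate=2.0, seed=None):
    """Geiger-counter-style sonification: the measured level sets the RATE
    of clicks, not a pitch. Inter-click gaps are drawn from an exponential
    distribution (a Poisson process); base_rate is an assumed scale factor
    in clicks per second per unit level."""
    rng = random.Random(seed)
    rate = base_rate * level                       # clicks per second
    return [rng.expovariate(rate) for _ in range(n_clicks)]

low = geiger_intervals(1.0, seed=1)     # sparse clicks
high = geiger_intervals(10.0, seed=1)   # same randomness, ten times denser
print(sum(low), sum(high))
```

A scheduler would then play a click after each interval; the mapping reads intuitively because more of the measured thing directly means more events per second.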

On another level, it is also about conveying emotionality about a subject, through sound. I often think about sound artist Marty Quinn’s words about sonification, which may resonate with some here as well:

By transforming data into music we utilize the mind’s ability to remember and recognize melody… We can present thousands of data points as chords of multitimbral music to present the ever-changing face of the sun, or hear climate as a symphony. I believe we have only begun to explore the perceptual opportunities afforded by transforming data into music and the limits of such musical perception by our brains.

Also another quote about sonification by Andrea Polli, reflecting on why she engages in sonification as part of her practice:

I believe that artists have the opportunity to create works that have an emotional impact and through touching the emotions of the audience, this work can affect environmental understanding and therefore behavior. This is critically important now as we face the problem of global climate change, a much more difficult and complex problem than even the problem of the ozone hole. In my own work, I have tried to use the sonification of climate and weather data and the visual impact of natural imagery to have a kind of emotional impact and raise awareness of climate issues.

So, at least for these artists [and for myself], sonification can definitely lead to music. Sonification is not just about the methods of sonifying data sets, but also about creating a musical world in which datasets can be explored by both artist and listener. The question then moves onto designing the sonification process. What sort of data should the listener be hearing? What style of sound? How do you best convey the emotionality of a theme?


Although I’m a science/data nerd by vocation, I can’t help but be skeptical of most attempts at data sonification.

Perhaps this is a shallow take on the topic, but a lot of sonification approaches are fundamentally too arbitrary: the choice of how one maps the inputs to meaningful musical outputs (be it MIDI, v/oct, triggers, whatever) is one out of uncountably many. There is nothing intrinsic to the data that ends up being conveyed in the music.

A great example of this is the Instruo Scion module which is a glorified random CV generator that just happens to be hooked up to a succulent.

Don’t get me wrong - I love sources of randomness and they form a huge part of my own practice/process. But it really grinds my gears when there’s assumed/suggested to be a meaningful connection between the inputs being sonified and the end result.

Perhaps a litmus test of ‘good’ sonification could be: if the inputs are replaced with a genuine random source (with similar distributions etc?), is the end result similar (emotionally / musically)? If so, the data source can be disregarded when discussing the piece.
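That litmus test is easy to prototype. A sketch, using a shuffled copy of the data as the “genuine random source with similar distribution” (shuffling preserves the value distribution but destroys temporal structure); the mapping here is a deliberately naive one of my own:

```python
import random

def pitch_contour(data, note_min=48, note_max=84):
    """A deliberately naive pitch mapping, just for the thought experiment."""
    lo, hi = min(data), max(data)
    return [round(note_min + (x - lo) / (hi - lo) * (note_max - note_min))
            for x in data]

def surrogate(data, seed=0):
    """A random source with the same distribution: a shuffled copy of the
    data, keeping the values but destroying their temporal structure."""
    rng = random.Random(seed)
    copy = list(data)
    rng.shuffle(copy)
    return copy

rng = random.Random(42)
trend = [i + rng.gauss(0, 1) for i in range(20)]   # data with a clear trend
real = pitch_contour(trend)
fake = pitch_contour(surrogate(trend))
# The test: if `real` and `fake` sound interchangeable, the mapping conveys
# nothing intrinsic to the data's structure.
print(real)
print(fake)
```

For a trend like this one the two contours should sound quite different (rising line vs. random walk over the same notes), which is what a mapping needs to pass the test.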

Of course, you’re missing out on the clicks that you’d get from saying “This is what Jupiter sounds like” or “This is what Pi sounds like” :person_shrugging:

(Tantacrul made a nice video on this topic - Sonification & The Problem with Making Music from Data)

(And of course there’s value in getting non-experts interested in scientific topics and if “the music of a black hole” YouTube video gets someone into science then that’s surely a good thing … )


I’ll preface this by saying I may not understand what data sonification is! But… in the musical context, shouldn’t sonification approaches be arbitrary? Solely the choice of the artist? The sonification process (in my limited understanding) is a fancy way of opening up another avenue of inspiration. E.g. polymeters from a Fibonacci sequence, or notes derived from NFL weekly rushing stats (maybe that’s a reach on my part haha).
