Velocity sensitivity via camera

Someone had this idea eight years ago on KVR. I still think it’s good.

I think it could be even simpler. Dots on your fingernails, like that post suggests, would enable the camera to track where each finger goes at the same time. But that’s not needed. The camera could instead merely enable software to compute the maximum velocity across all fingers: a single number, which would then be applied to the press. A really good piano player will sometimes play two simultaneous notes at substantially different volumes, but it’s kind of a virtuosic technique; it seems reasonable to sacrifice that kind of perfection for the sake of the good.
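
To make that concrete, here’s a minimal sketch of the collapse-to-one-number idea; the speed units, the scaling, and the 600 mm/s ceiling are all assumptions that would need tuning:

```python
def to_midi_velocity(finger_speeds_mm_per_s, max_speed=600.0):
    """Map the fastest finger's downward speed to a single MIDI velocity (1-127)."""
    fastest = max(finger_speeds_mm_per_s)
    scaled = int(round(127 * min(fastest / max_speed, 1.0)))
    return max(1, scaled)

# e.g. three fingers in view, only one of them actually striking hard
print(to_midi_velocity([120.0, 95.0, 410.0]))  # -> 87
```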

The concept is interesting, but I can see a few complications in making such a setup work robustly and intuitively.

First of all, the camera would need to be well placed to get a good view of your fingers as they play on the screen. No on-device camera could do this, so you’d need external cameras (not very convenient).

An alternative is to use the front-facing camera, but that is designed to show your face when you look at the display. You would need to angle the device to point at your fingers instead, and even then, viewing them from above isn’t very good for estimating velocity (i.e. camera placement matters a lot).

Next, if you are playing off the screen, you need a keyboard (maybe projected, or paper) to play on. Both of these methods require extra space and potentially additional material (a keyboard with QR codes on it?).

Finally, cameras are often not very good in low light (typical evening household illumination). They compensate by taking longer exposures, resulting in motion-blurred images. My guess is that this wouldn’t make for very usable velocity estimates, so you would also need artificial illumination for it to be viable. Visible-light illumination would probably be awkward, so I’d guess you’d need invisible light (such as IR); this would require more power and a camera capable of selectively filtering visible/invisible light, or a dedicated camera.

As for different velocity per finger: if you solve the other problems it’s no more complex, and playing one finger at a higher velocity than the others isn’t that uncommon. You can effectively hold one finger higher or lower than the others as you push down, meaning it comes into contact earlier or later than the other fingers as your hand accelerates, making it louder or quieter. It’s very common to use this method to make the top (or bottom) note in a chord louder than the rest (less common for inner voices).

i think this is where time-of-flight cameras or other industrial camera solutions might be best, as camera resolution isn’t really necessary; you just need the ability to track the tips of the fingers. that said, once you get into live processing of TOF cameras (probably more than one, so that fingers aren’t occluding each other) and calibrating that system to rarely, if ever, deliver a false velocity, it would probably become a lot of trouble (possibly a lot more than it’s worth).
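
a minimal sketch of what the tracking might boil down to, assuming a depth sensor mounted above the playing surface that reports each fingertip’s distance per frame (the frame rate and all the numbers here are made up):

```python
import numpy as np

FRAME_RATE_HZ = 120.0          # assumed sensor rate
DT = 1.0 / FRAME_RATE_HZ

def peak_strike_speed(fingertip_depth_mm):
    """Peak downward speed (mm/s) from a short track of fingertip depths.

    With the sensor above the surface, a finger moving down reads as its
    distance from the sensor increasing frame to frame.
    """
    depths = np.asarray(fingertip_depth_mm, dtype=float)
    downward = np.diff(depths) / DT   # mm/s; positive = moving toward the surface
    return float(downward.max()) if len(downward) else 0.0

# a fingertip accelerating toward the surface over five frames
print(peak_strike_speed([300.0, 302.0, 306.0, 312.0, 320.0]))  # -> 960.0 mm/s
```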

embedding the force/velocity detection within the actual interface, rather than inferring it out of the air above the interface, just seems like a better solution for a bunch of reasons. i’m not up to date on the latest touchscreen stuff (if the solution sought here is expressive touchscreen interfaces), but it would seem like if apple had implemented 3d touch on the ipad, it would have gotten pressure sensitivity to a level sufficient to inform velocity, so maybe there’s something on someone’s roadmap somewhere that will address this capability.

If the end result should be velocity sensitivity added to an existing tablet or other consumer device in a way that is useful and makes sense, I agree that it sounds like a convoluted and weird solution for something that could be solved so easily, e.g. with a separate compact controller.

At the very least it’d require a separate, well-positioned camera setup plus analysis software, and that wouldn’t be cheaper, more portable, or nearly as robust as a small controller. The lack of haptic feedback from tablet screens also makes them kind of dull to play, in my experience. And even if you don’t feel that way, alternative solutions like mapping high-resolution touch area size (“3D Touch”?) or simply the y axis on screen to velocity have already sort of been invented (see e.g. the Sensel Morph; I’m not sure whether Apple has done something that works similarly on newer-generation iPad screens).
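
For what it’s worth, those already-invented mappings mostly reduce to something like the sketch below; the parameter names are placeholders for whatever per-touch measurement the platform actually exposes:

```python
def velocity_from_touch(force=None, contact_area=None, y_in_key=None):
    """Map whichever normalized (0..1) touch measurement is available to MIDI velocity.

    Prefers force if the screen reports it, then contact area, then the
    y position within the on-screen key.
    """
    for measure in (force, contact_area, y_in_key):
        if measure is not None:
            return max(1, min(127, int(round(measure * 127))))
    return 100  # fallback fixed velocity when nothing usable is reported

print(velocity_from_touch(contact_area=0.55))  # -> 70
```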

BUT seeing as this is Lines and not KVR, as sort of an art project / experiment that doesn’t need to work super robustly under different conditions or make sense financially, it could be cool. Maybe someone has already done something similar? You wouldn’t even need, or maybe even want, a tablet to prototype it; e.g. just draw a controller on paper with some kind of colors or visual cues and use a garden-variety computer plus some Python libraries for analysing data from multiple cameras.
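
A rough single-camera sketch of that prototype using OpenCV, assuming bright green stickers on the fingertips; the HSV bounds, the area threshold, and the naive frame-to-frame matching are all guesses, and it ignores occlusion and multiple cameras entirely:

```python
import cv2
import numpy as np

# assumed "bright green sticker" color range in HSV
LOWER_HSV = np.array([45, 120, 120])
UPPER_HSV = np.array([75, 255, 255])

cap = cv2.VideoCapture(0)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
prev_ys = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # one y coordinate per detected marker, sorted so frames match up crudely
    ys = sorted(cv2.boundingRect(c)[1] for c in contours if cv2.contourArea(c) > 30)
    if prev_ys is not None and ys and len(ys) == len(prev_ys):
        speeds = [(y - py) * fps for y, py in zip(ys, prev_ys)]  # px/s, downward positive
        if max(speeds) > 0:
            print(f"fastest downward marker: {max(speeds):.0f} px/s")
    prev_ys = ys
    cv2.imshow("markers", mask)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```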

I think many inexpensive cameras will be too slow for this and will add an annoying latency; for example, a 60 Hz camera only delivers a new frame every 16.7 milliseconds, and then the image still needs to be processed.
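
A back-of-envelope latency budget, where every number after the frame interval is a guess:

```python
frame_interval_ms = 1000 / 60   # 16.7 ms just waiting for the next frame
exposure_ms = 8                 # rolling-shutter exposure, guessed
processing_ms = 5               # tracking + velocity estimation, guessed
midi_and_audio_ms = 5           # driver + synth buffering, guessed

worst_case_ms = frame_interval_ms + exposure_ms + processing_ms + midi_and_audio_ms
print(f"worst-case added latency: {worst_case_ms:.1f} ms")  # ~34.7 ms
```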


An easier solution, I realize now, might be accelerometers on one’s fingertips. From the acceleration data the software could derive which way is up (it’s the direction in which a finger never gets suddenly stopped), and if the sensors were sensitive enough it could determine which finger was lowest at all times, and thereby deduce which finger the latest press had come from.
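
A sketch of how that deduction might look, with made-up data shapes: each fingertip sensor streams vertical acceleration with gravity already removed, a press shows up as a sharp deceleration spike, and the finger with the biggest spike gets the note at the speed it had built up beforehand:

```python
import numpy as np

SAMPLE_RATE_HZ = 1000.0   # assumed sensor rate
DT = 1.0 / SAMPLE_RATE_HZ

def detect_press(per_finger_accel):
    """per_finger_accel: dict of finger name -> 1-D array of vertical accel (m/s^2)."""
    best_finger, best_spike, best_speed = None, 0.0, 0.0
    for finger, accel in per_finger_accel.items():
        accel = np.asarray(accel, dtype=float)
        spike_idx = int(np.argmax(np.abs(accel)))          # the impact sample
        spike = abs(accel[spike_idx])
        # downward speed accumulated before the impact
        speed = float(abs(np.sum(accel[:spike_idx])) * DT)
        if spike > best_spike:
            best_finger, best_spike, best_speed = finger, spike, speed
    return best_finger, best_speed   # which finger pressed, and how fast (m/s)

press = detect_press({
    "index": [2.0] * 450 + [-180.0],   # steady build-up, then a hard stop
    "middle": [0.1] * 451,             # hovering, no impact
})
print(press)  # -> ('index', 0.9)
```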

I should have specified: If I could do this, I would use it to add velocity sensitivity to my monome grids.

Camera latency does seem like it would matter. The way the Axon guitar-to-MIDI converter got around the latency problem (it needed to know what frequency a waveform corresponded to before that waveform had even completed a full cycle, let alone repeated itself) was through AI. I can imagine something similar might work for cameras.
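
In the camera case that could mean predicting the eventual strike velocity from only the first few frames of a finger’s descent. A loose sketch with a simple regressor standing in for the “AI”, trained here on synthetic trajectories rather than real recorded strikes:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_strikes, early_frames = 200, 4
true_speeds = rng.uniform(0.2, 2.0, n_strikes)    # eventual strike speeds, m/s
t = np.arange(early_frames) / 60.0                # timestamps of the first 4 frames at 60 Hz
# observed downward positions over those frames, plus a little tracking noise
X = true_speeds[:, None] * t[None, :] + rng.normal(0, 0.002, (n_strikes, early_frames))

model = Ridge().fit(X, true_speeds)
print(model.predict(X[:3]))   # predicted strike speeds from partial trajectories
```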

A few years ago, for an installation at an art center, I used Leap Motion controllers for something similar to this. At the time, I sent individual fingertip data from the Leap Motions to Ableton Live via TouchDesigner. Each fingertip’s distance from the sensor modulated a different frequency range for filtering, and participants’ palm distance controlled volume. Velocity shouldn’t be too difficult to rig up. It would be nice to cut TouchDesigner and Ableton out of the chain for a leaner implementation.
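
A leaner chain could be as simple as reading fingertip distances from whatever hand tracker is in use and sending MIDI directly, e.g. with mido; the port, the CC numbers, and the ranges below are all assumptions:

```python
import mido

out = mido.open_output()   # default MIDI output port

def send_hand_frame(fingertip_mm, palm_mm, max_mm=400.0):
    """Map each fingertip's distance to its own CC, and palm distance to CC 7 (volume)."""
    for i, dist in enumerate(fingertip_mm):
        value = int(127 * min(dist / max_mm, 1.0))
        out.send(mido.Message('control_change', control=20 + i, value=value))
    out.send(mido.Message('control_change', control=7,
                          value=int(127 * min(palm_mm / max_mm, 1.0))))

# stand-in values for one frame of five fingertips plus the palm
send_hand_frame([120.0, 180.0, 210.0, 260.0, 300.0], palm_mm=220.0)
```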
