Some things that come to mind:
Chris Chafe at CCRMA partnered with people at Stanford Medical to develop a "brain stethoscope" project, which would monitor and sonify someone's brain activity to detect whether they were having a seizure. Chris made a composition around it.
Since it was a collaboration with Stanford Medical, I'm guessing they were using a medical-grade EEG headset. If I had to guess, the sounds were probably done with ChucK, FAUST, or both, since those are the tools Chris tends to use.
https://news.stanford.edu/news/2013/september/seizure-music-research-092413.html
Another CCRMA student (Victoria Grace) wrote a biometrically controlled composition and performance called "Sonic Anxiety". In the performance, she locks herself in a cage and monitors her own anxiety level. All of her biometric interfaces were Arduino-based, probably fed into Max/MSP and/or Live. I'm guessing it was mostly heart-rate driven (EKG).
Back when I was at Berklee, I remember people experimenting with biometrics and music. Lots of EEG and heart-rate stuff. The Apple Watch had just come out, so they were getting heart rate from that. The Muse was used for EEG. There were also some Arduino components involved, but I couldn't tell you which ones.
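None of these projects published their exact mappings as far as I know, but the common pattern in all of them is parameter mapping: take a biometric signal (BPM, EEG band power, etc.) and scale it into a musical range. As a hypothetical illustration, here's a minimal Python sketch that maps heart-rate readings to MIDI note numbers; the ranges and the function name are my own invention, not anything from these pieces.

```python
def bpm_to_midi(bpm, bpm_range=(40, 180), midi_range=(36, 84)):
    """Linearly map a heart-rate reading (BPM) to a MIDI note number.

    Readings outside bpm_range are clamped so the output always stays
    within midi_range (here C2..C6).
    """
    lo, hi = bpm_range
    note_lo, note_hi = midi_range
    # Normalize to 0..1, clamping out-of-range sensor readings
    t = max(0.0, min(1.0, (bpm - lo) / (hi - lo)))
    return round(note_lo + t * (note_hi - note_lo))

# A resting heart rate lands near the middle of the range,
# and a racing one pushes the pitch toward the top.
notes = [bpm_to_midi(bpm) for bpm in (60, 110, 170)]
```

In a real setup the sensor side (Apple Watch, Muse, an Arduino pulse sensor) would stream readings over OSC or serial, and the mapping would feed a synth in Max/MSP, ChucK, or wherever, but the scaling step looks roughly like this regardless of the tools.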
See also: the encephalophone!