Mind Blowing Facts


The most mind-blowing fact I’ve come across so far is the fact that I get to exist and experience the world, as opposed to not existing at all. It feels monumentally absurd and beautiful. It’s a feeling that keeps coming up every so often that I cherish. Many times music will trigger it.

Even better than that: a whole bunch of other people get to exist too, which makes this place much, much more interesting.


every now and then I get a glimpse of the excruciating improbability that the world would be as it is, as opposed to any other way, and it utterly blows my mind.


Apparently, you can teach yourself to echolocate.


There was a good episode of the excellent podcast Invisibilia a few years back about a man teaching other sight-impaired people how to echolocate: https://www.npr.org/programs/invisibilia/378577902/how-to-become-batman. Also, my brain inverted letters and initially read echolocate as "e-chocolate", which seemed intriguing but would be ultimately disappointing.


My brain went on the same journey. :chocolate_bar:



Good Saturday! Here’s a fact that got me started this afternoon.


Integrating a (periodic) function doesn’t change the frequencies present in its Fourier decomposition; it just changes their phase and (complex) amplitude :exploding_head:
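If anyone wants to poke at this at home, here’s a quick numpy sketch (a toy signal of my own choosing, with frequencies 1 and 3): the antiderivative lights up exactly the same FFT bins, just with different amplitudes and phases.

```python
import numpy as np

N = 1024
t = np.linspace(0, 2 * np.pi, N, endpoint=False)

f = np.sin(t) + 0.5 * np.sin(3 * t)          # frequencies 1 and 3
F = -np.cos(t) - (0.5 / 3) * np.cos(3 * t)   # an antiderivative of f

def active_bins(x, tol=1e-8):
    # indices of FFT bins with non-negligible magnitude
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return set(int(i) for i in np.nonzero(spec > tol)[0])

print(active_bins(f))  # {1, 3}
print(active_bins(F))  # {1, 3} -- same frequencies, new amplitude/phase
```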


In case any of you are interested in following along at home, this is somehow exactly the right tone for me to be reading: it explained block diagrams and what a Laplace transform is to me, it threw me some mathematical bones, and it’s pointing out where the assumptions of linearity and time-invariance are necessary, and why they’re limited. In short, I’m having a blast.


Also currently reading after watching the Vadim video posted on CDM.


The three basic insights (for the mathematically oriented; proofs left as an exercise):

  1. All discrete-time linear operators can be expressed as a matrix multiplication. The expression is a little weird because input and output signals are doubly infinite vectors, covering times (indices) from minus infinity to infinity. So the matrix, too, has to be doubly infinite.

The proof is simple: the nth column of the matrix is the response to the nth elementary basis vector, which is the unit impulse at time n. All input signals can be expressed as a linear combination of these basis vectors, with the signal values as the coefficients. The result follows from the definitions of linearity and matrix multiplication.
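A finite-dimensional toy version of this (my own made-up linear operator, standing in for the doubly infinite case):

```python
import numpy as np

N = 8

def L(x):
    # an arbitrary linear (but not time-invariant) operator:
    # a running sum plus a scaled time reversal
    return np.cumsum(x) + 2 * x[::-1]

# the nth column of the matrix is the response to the unit impulse at time n
A = np.column_stack([L(np.eye(N)[:, n]) for n in range(N)])

x = np.random.default_rng(0).standard_normal(N)
print(np.allclose(A @ x, L(x)))  # True: matrix multiplication reproduces L
```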

  2. For time-invariant operators the matrix takes on a Toeplitz structure: each column is the one next to it translated by one row position. The entire matrix can be summarized by a single column, say column zero. That is the response to the impulse at time 0, or the “impulse response”. Matrix multiplication then simplifies to the convolution sum.
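A finite, causal toy version (a short made-up impulse response standing in for the infinite case):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])  # impulse response = column zero
N = 6

# time invariance makes the matrix Toeplitz: H[i, j] = h[i - j]
H = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if 0 <= i - j < len(h):
            H[i, j] = h[i - j]

x = np.arange(1.0, N + 1)
# matrix multiplication equals the convolution sum (truncated to length N)
print(np.allclose(H @ x, np.convolve(h, x)[:N]))  # True
```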

  3. Eigenvectors of the doubly infinite Toeplitz matrix are just the complex exponentials. [Easy proof using the convolution sum. The eigenvalue becomes the Fourier transform, or the Z-transform in the more general case.] All of the Fourier analysis results follow… but keep in mind the collection of eigenvectors is indexed by a continuous frequency variable [from -pi to pi] and is therefore uncountable. That’s where the doubly infinite matrix becomes pathological. Without time invariance you’d have some other basis, not complex sinusoids.
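A quick numerical check of the eigenvector claim (same toy impulse response as above, arbitrary frequency):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])  # toy impulse response
w = 0.7                          # any frequency in [-pi, pi]
n = np.arange(-20, 21)
x = np.exp(1j * w * n)           # complex exponential

# convolution sum: y[n] = sum_k h[k] x[n - k]
y = sum(h[k] * np.exp(1j * w * (n - k)) for k in range(len(h)))

# the eigenvalue is the (discrete-time) Fourier transform of h at w
H_w = np.sum(h * np.exp(-1j * w * np.arange(len(h))))
print(np.allclose(y, H_w * x))   # True: x comes out scaled, not changed
```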

Similar results hold for continuous time Fourier analysis but the “matrix multiplication” becomes integration with respect to a bivariate kernel. The “elementary basis” becomes the Dirac impulse, also pathological because formally, one must take limits of integrals under Dirac sequences, or introduce a custom signed measure, but in practice there’s no need to get formal.

Hope this helps!


ahahaha without breaking out my rusty knowledge of Hilbert spaces, it doesn’t help too much, but I can kind of see the analogy, and I appreciate the additional perspective!


It’s about “what it all means…” or the “why”, specifically “why time invariance”… it won’t be too helpful with the “how”. The eigenvector property is why you can put a sinusoid in and get the same thing out with a different gain/phase. To make the gain/phase thing more clear (why also phase?) you have to decompose a real-valued sinusoid into a pair of complex sinusoids. It’s the complex sinusoid that is the eigenfunction.

It’s more useful in understanding things like wavelet transforms and how they handle certain types of nonstationarity or time variance.
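A quick numpy check of the gain/phase point for a real cosine (toy impulse response of my own choosing): splitting the cosine into a pair of complex sinusoids predicts exactly a gain of |H(w)| and a phase shift of arg H(w).

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])   # toy real-valued impulse response
w = 0.7
n = np.arange(-20, 21)

H_w = np.sum(h * np.exp(-1j * w * np.arange(len(h))))  # frequency response at w

# filter a real cosine via the convolution sum
y = sum(h[k] * np.cos(w * (n - k)) for k in range(len(h)))

# decomposition into two complex sinusoids predicts gain |H| and phase arg(H)
y_pred = np.abs(H_w) * np.cos(w * n + np.angle(H_w))
print(np.allclose(y, y_pred))    # True
```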


well my rather more primitive intuition about this is just that the derivative/integral of a sinusoid is another sinusoid. (or alternatively with euler, d(e^x)/dx = e^x.)

for me, euler’s identity is the mind-blowing part in itself. (e^x being its own derivative is not too shabby either.)


I’m with you here, but I didn’t realize how it connected with filters till somebody walked me through it


This follows because differentiation is just another LTI system. Its impulse response is another pathological object, the Dirac derivative, but it’s very cool and weird because it allows you to express differentiation as integration, as a convolution integral.
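A discrete toy version of this, using a first-difference kernel as a crude stand-in for the Dirac derivative: convolving with it approximates d/dt, and a sinusoid indeed comes out as another sinusoid.

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 2 * np.pi, dt)
x = np.sin(3 * t)

kernel = np.array([1.0, -1.0]) / dt        # crude Dirac-derivative stand-in
dx = np.convolve(x, kernel, mode="valid")  # length len(x) - 1

# compare against the analytic derivative, 3*cos(3t)
print(np.max(np.abs(dx - 3 * np.cos(3 * t[:-1]))) < 0.01)  # True
```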


well one of the neatest things about math is the confluence of different analogies, right?

for whatever reason it’s easy for me to visualize complex sinusoids geometrically, as spirals in the c-plane, with real sin/cos projections. euler’s identity and the derivative relationship of the projections becomes clear.

and of course i’m well used to thinking of filters in terms of response function (complex phasor multiplied by complex polynomial of phasors.) it’s a little less natural to always reach for the matrix / convolution representation of the whole signal and its IR. (but then, i’m not really a mathematician!)

maybe this is because my early experiences with these things were with analog circuits, where in fact you often have nonlinear (saturation) elements in the integrator, and good luck getting the impulse response directly. vadim has to deal with this when he starts considering saturation in the SVF and ladder structures. (iirc he does it by polynomial approximation.)

anyways, in this vein here are two of my favorite mind-blowing books:


as I understand it, the most reasonable definition of e^x is 1 + x + x^2/2! + x^3/3! + …

So in a sense e^x is pretty much designed to be its own derivative! I also maintain that complex numbers are less mind-blowing if you start from the 2D vector space picture, saying ‘let’s design a multiplication rule which adds the angles of its inputs’, rather than ‘hey, wouldn’t it be crazy if there was this imaginary number thingy’.
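Quick sanity checks of both claims (truncated series and arbitrary test values of my own choosing):

```python
import math
import numpy as np

def exp_series(x, terms=20):
    # truncated series 1 + x + x^2/2! + x^3/3! + ...
    return sum(x**k / math.factorial(k) for k in range(terms))

# (1) the series is its own derivative (numerical check at x = 1.3)
x, h = 1.3, 1e-6
deriv = (exp_series(x + h) - exp_series(x - h)) / (2 * h)
print(abs(deriv - exp_series(x)) < 1e-8)        # True

# (2) complex multiplication adds the angles of its inputs
z, w = 2 * np.exp(1j * 0.5), 3 * np.exp(1j * 0.9)
print(np.isclose(np.angle(z * w), 0.5 + 0.9))   # True
```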


Oh, totally, the push-pull of the nonlinear elements and the ideal linear characteristic, in both amplitude and bandwidth, is what’s musically interesting. You can’t even talk about tone until you’ve covered these aspects.

I just think it’s fascinating that all of Fourier analysis and even the intuitive concept of “filter” falls out of basic definitions of linearity and time invariance. You get all this stuff about sinusoids that you didn’t expect.

There’s a lot you also get for free with unitary (orthogonal/magnitude-preserving) operators and system duals that’s behind much of filterbank theory in audio coding. For a unitary operator, the matrix inverse is just the matrix transpose. The transpose system is just the topological dual, where you reverse the order of system blocks (recursively transposing them, of course) and swap fan-outs for summing junctions and vice versa. (No thinking required, thus, to “undo” the operator!) So the linear algebra/time domain view is always a useful thing to have in the back of your mind.
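A toy example of the unitary “free inverse” (a random orthogonal matrix standing in for an orthogonal filterbank): the transpose undoes the analysis step exactly, no matrix inversion required.

```python
import numpy as np

rng = np.random.default_rng(1)
# random orthogonal matrix via QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))

x = rng.standard_normal(8)
coeffs = Q @ x            # analysis
x_rec = Q.T @ coeffs      # synthesis: just the transpose, no inverse needed
print(np.allclose(x_rec, x))  # True: perfect reconstruction for free
```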


The thing that blows my mind (I only read about it recently) is that half this stuff got figured out (the power series stuff) by scholars in India hundreds of years before Newton ‘invented’ calculus (discovered maybe?).

Would love to know more about the history - in what order and how did everything get figured out?

On an unrelated (but on-topic) note some of my new bell ringing acquaintances told me some of the earliest work on group theory is credited to ringers!