Should I do everything at 48 kHz sample rate?

I keep seeing a 48 kHz sampling rate used for final digital masters, i.e. I often download albums from Bandcamp or other sources and see that they use 48 kHz instead of 44.1 kHz.
I’ve searched the web for some info, but most of it has just been confusing, to be honest.

It seems 44.1K was really only used by CDs, everything else has always preferred 48K. In fact the video and film industries run everything at 48K (or multiples of it).
Since CDs are becoming less and less common as a physical medium for music, I guess 44.1 kHz might actually be going away, so maybe it’s time to switch everything to 48K?
Also, I guess some converters might produce better results at 48K, but it’s really hard to know which ones do, if they do at all.


tl;dr: I use 48 kHz for my own work and 44.1 kHz for my day job.

long story

Since the radio station I work at has historically sourced its music from CDs, the entire facility runs at 44.1 kHz. That includes the journalists’ recorders, etc.
For practical reasons my own recorders and soundcards at home were set to 44.1 as well, and I worked that way for a long time. Also, years ago 44.1 was easier on the CPU.

I had to remember to change the recorder’s settings for the occasional picture shoot, which wasn’t too much of a hassle, until I began to do more of those gigs and ended up with DAW sessions at both sample rates, which became a pain to manage.

The arrival of the ER-301 (which runs at 48 kHz) in my set of tools greatly accelerated the transition to 48 kHz for all my field recordings and personal works.
There is still some time lost in resampling here and there but nothing too daunting.

I don’t think one or two conversions at the end of the chain will make a significant audible difference.

That’s my understanding as well, and my situation is the opposite: for my personal stuff I work at 44.1 kHz, mostly out of habit; for work I use 48 kHz, because I work with video, where 48 kHz is the standard. Overall, for what I’m doing, I don’t think anyone could care less what I use, and yeah, no audible difference really.

At 96 kHz there can be a practical use case if you’re mixing lots of tracks of well-recorded audio sources and signal-to-noise ratio becomes a problem once they’re all processed/compressed, stacked on top of each other, and all the background noise adds up. Otherwise, if it’s just the occasional acoustic element in the instrumentation here and there, it’s really no big deal.

Oh and also, background noise is sooo nice.

Edit: also, we live in a world where the realities of today aren’t those of yesterday. A lot of standard software now does 44.1 kHz to 48 kHz conversion seamlessly, and it really works out OK in most cases, so we’re far from the days when a mismatch could ruin a whole project, play the tracks at the wrong speed, and have a big impact. Funnily enough, as a counter-argument, I recently had that very issue with my Norns running MLR, if I remember correctly ^^

With modern hardware and software, there is little practical difference between using 44.1 kHz and 48 kHz. The audio quality will be the same for all practical purposes. There is some CPU saving at 44.1 kHz… but only about 10%. Pretty much everything now tracks and manages sample rate correctly, and most processing paths do a fine job of sample rate conversion where needed. (Gone are the days of non-interpolating conversion!)

As others have pointed out - the choice is usually down to the conventions of the workflow you’re interfacing with… but almost all workflows can now use either rate interchangeably, and correctly.

Funnily enough, just last week I was looking at one odd corner where it still makes a difference: if you are creating single-cycle waveforms for a sampler, then because the waveform must be an integral number of samples, the sample rate influences the tuning accuracy of your base waveform.


  • at 48 kHz, use a 367-sample waveform, which will sound at C2
  • at 44.1 kHz, use either
    • a 401-sample waveform, which will sound at A1
    • or a 225-sample waveform, which will sound at G2

These will all be less than ½ cent off from equal temperament.
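A quick way to sanity-check those figures is to compute the loop frequency (sample rate divided by loop length) and its distance in cents from the nearest equal-tempered pitch. This is my own sketch, not from the tables; it uses the MIDI convention (A4 = 440 Hz = note 69), so the octave labels may be shifted relative to the convention used in the list above.

```python
import math

def cycle_tuning(sample_rate, n_samples, a4=440.0):
    """Frequency of an n-sample single-cycle loop, and its deviation
    (in cents) from the nearest equal-tempered note (A4 = 440 Hz)."""
    freq = sample_rate / n_samples           # the loop repeats every n samples
    midi = 69 + 12 * math.log2(freq / a4)    # fractional MIDI note number
    cents_off = (midi - round(midi)) * 100   # distance to nearest semitone
    return freq, cents_off

# The combinations from the list above:
for sr, n in [(48000, 367), (44100, 401), (44100, 225)]:
    freq, cents = cycle_tuning(sr, n)
    print(f"{sr} Hz / {n} samples -> {freq:.2f} Hz ({cents:+.2f} cents)")
```

All three combinations come out within half a cent of the nearest note, matching the claim above.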

Full tables, with all the details, in the attached PDF:
Cycle Waveform Tuning.numbers.pdf (172.1 KB)


I agree with everything being said. It doesn’t really matter, but it also depends on where it is going in the end.


There’s a great blog post on the origins of the 80×25 character layout for screens. In a similar spirit, I’d be interested in anyone able to shed insight into where the precise values 44.1 kHz and 48 kHz came from. Why 44.1 kHz for Red Book audio? Why was there a move to 48 kHz?

44,100, I learned recently, is the product of the first 4 (not 3) squared primes: 2^2 * 3^2 * 5^2 * 7^2. (edit: first 4 primes, not 3) (edit2: sum, not product. Thanks. I really thought I had this right; I had it worked out on the calculator and everything. Yeesh.) Surely, that can’t be a coincidence.

I also heard somewhere that the duration of the CD was based around the duration of Beethoven’s 9th symphony.

I thought it had to do with the nyquist frequency, related to the span of human hearing:

In applications where the sample-rate is pre-determined, the filter is chosen based on the Nyquist frequency, rather than vice versa. For example, audio CDs have a sampling rate of 44100 samples/sec. The Nyquist frequency is therefore 22050 Hz. The anti-aliasing filter must adequately suppress any higher frequencies but negligibly affect the frequencies within the human hearing range. A filter that preserves 0–20 kHz is more than adequate for that.


That always made sense to me. I don’t see any logical reason for 48K when it comes to audio only. I’ve read some articles calling 44.1K an “outdated thing from the ’80s for CDs”, but it’s not like they pulled it out of a hat.

Bit depth is probably a bigger factor when it comes to reproducing audio.


Huh? Unless I’m misinterpreting something, that sum doesn’t come to 44,100. As mentioned above, 44,100 Hz is related to the Nyquist frequency; look up sampling theory. (Edited to remove some misleading, poorly remembered information. Need more coffee.)

I think you’re confusing the sample rate, the Nyquist frequency, and the limit of human hearing.

“Most” ears can’t hear much past 15 kHz. The upper limit of measured human hearing is just under 20 kHz.

To preserve a 20 kHz frequency without aliasing, you must sample at twice that, plus some allowance for the fact that filters are not perfect. 44.1 kHz was seen as an acceptable compromise: close to the lowest sample rate that preserves the complete spectrum of human-perceptible sound, while leaving the anti-aliasing filter a realizable transition band.

As explained on Wikipedia’s page on 44.1 kHz, the precise frequency was chosen so that digital audio could be stored on both PAL and NTSC video equipment, with an integer number of samples fitting the video line structure of each.
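Concretely, early digital audio was recorded onto video tape via PCM adaptors that packed 3 audio samples into each usable video line. Using the usable-line counts given on that Wikipedia page, the arithmetic works out identically for both TV standards:

```python
# PCM adaptors stored 3 audio samples per usable video line.
# Line counts per field are those cited on Wikipedia's 44,100 Hz page.
samples_per_line = 3

ntsc = 245 * 60 * samples_per_line  # 245 usable lines/field x 60 fields/s
pal = 294 * 50 * samples_per_line   # 294 usable lines/field x 50 fields/s

print(ntsc, pal)  # 44100 44100
```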


First 4 primes, my bad: 2^2 * 3^2 * 5^2 * 7^2.

Edit: product, not sum. Oops.

It’s the product of the squares of the first four prime numbers. This has nothing to do with why it was chosen, just an interesting observation.
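For anyone following along at home, the arithmetic checks out:

```python
# 44,100 is the product of the squares of the first four primes...
product = (2**2) * (3**2) * (5**2) * (7**2)  # 4 * 9 * 25 * 49
print(product)  # 44100

# ...whereas the *sum* of those squares is only 4 + 9 + 25 + 49 = 87.
total = (2**2) + (3**2) + (5**2) + (7**2)
print(total)  # 87
```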


Thanks - yeah, I really shouldn’t have blurted that post out. I need more coffee. Cheers.

By Furtwängler, yep. There’s a lot of romanticism from Sony and co. about that choice, but overall I find it crazy to think that a figure like Karajan had such a marketing impact on the adoption of a physical format back then; it kind of shows how different the whole picture of that era was from now.


Thank you! This is the explanation I was looking for for 44.1 kHz. Do you happen to know where the 48 kHz number came from? My quick searches didn’t turn up much, and there doesn’t seem to be a dedicated wiki page for that number.

My own personal setup is set for 48 kHz, 24 bit.

I do enough video editing that it’s easier if I just leave everything set up that way. 48 kHz is the standard for digital video content.

96 kHz can be useful if you are doing any kind of post-processing that involves pitch-shifting, since it gives the algorithms more samples to work with. (Pitching down, specifically.)

This article gives an explanation for where 48 kHz came from:


This is only true if you’re interested in sampling ultrasonic content down into the audible region. If you’re concerned with re-pitching only audible content (i.e. 20 kHz and lower) from the source, there is no mathematical advantage to using 96 kHz for re-pitching: complete and perfect reconstruction, and thus resampling/re-pitching, is possible for all frequencies under Nyquist at any sample rate.
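To make the perfect-reconstruction point concrete, here’s a minimal numpy sketch (my own illustration, not anyone’s production resampler): a band-limited sine captured at 48 kHz is upsampled to 96 kHz by zero-padding its FFT spectrum, and matches a version sampled directly at 96 kHz down to floating-point rounding error.

```python
import numpy as np

# One second of a 1 kHz sine at 48 kHz: well under Nyquist, and exactly
# periodic over the window, so FFT-based resampling is exact here.
sr_in, sr_out, f0 = 48_000, 96_000, 1_000.0
x = np.sin(2 * np.pi * f0 * np.arange(sr_in) / sr_in)

# Upsample by zero-padding the spectrum (ideal band-limited interpolation).
spectrum = np.fft.rfft(x)
padded = np.zeros(sr_out // 2 + 1, dtype=complex)
padded[: spectrum.size] = spectrum
y = np.fft.irfft(padded, n=sr_out) * (sr_out / sr_in)

# Compare against the same sine sampled directly at 96 kHz.
ref = np.sin(2 * np.pi * f0 * np.arange(sr_out) / sr_out)
print(np.max(np.abs(y - ref)))  # essentially zero (floating-point rounding)
```

The same reasoning applies to re-pitching: any signal whose content lies under Nyquist is already fully determined by its samples, regardless of the rate.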


I believe you are correct, I think I was thinking of time-stretching (while preserving pitch) when I wrote the post.


Ah, time-stretching is FFT-based in most cases, and in that sense oversampling can lead to more accurate FFT windows and thus a cleaner reconstruction in the inverse transform. However, mathematically speaking it doesn’t matter whether you oversample the source or just upsample from the existing data: if the time-stretching algorithm is implemented correctly, it should be able to get an arbitrarily high-resolution stretch from a 48 kHz source just as well as from a 96 kHz source. Again, the mathematics of perfect reconstruction play out here too.

However, some stretching algorithms don’t oversample, in which case you’re effectively doing that work for them by operating at 96 kHz. That’s not a benefit of 96 kHz, though; it’s just a deficiency in the implementation of the algorithm.


I always record at 192 kHz for this purpose (resampling), but produce masters at 48 kHz, out of habit from working in post-production.