few random thoughts
plenty of artifacts from both algorithms and hardware components, at both capture and playback. the design of ADCs/DACs themselves has changed a lot.
since the 80’s most low-cost ADCs have used a sigma-delta architecture. in a very small nutshell, this means starting with a fast 1-bit sampler (a comparator) in an error-correcting feedback loop, then applying lowpass filtering and decimating down to the target rate. this basic architecture has been drastically improved many times in the last 30 years (e.g. much better decimation filters: more gates on a die -> more taps on a FIR.) the Analog Devices website hosts some deep-dive white papers that are hidden but searchable, if you are interested.
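as a toy illustration of that loop: a first-order modulator plus a crude boxcar decimator. (real converters use higher-order modulators and long multistage filters; the function names and the oversampling ratio below are made up for the sketch.)

```python
# toy first-order sigma-delta: a comparator (1-bit quantizer) inside an
# error-integrating feedback loop, run at a high oversampling rate.
def sigma_delta_1bit(x):
    integ = 0.0   # integrator accumulates the running quantization error
    fb = 0.0      # previous 1-bit output, fed back and subtracted
    out = []
    for s in x:
        integ += s - fb
        fb = 1.0 if integ >= 0.0 else -1.0  # the comparator
        out.append(fb)
    return out

# crude decimation: average blocks of 1-bit samples down to the target rate.
# real parts use long multistage FIRs here (more gates -> more taps).
def decimate(bits, osr):
    return [sum(bits[i:i + osr]) / osr
            for i in range(0, len(bits) - osr + 1, osr)]
```

feed it a constant 0.5 at 256x oversampling and the density of +1s in the bitstream settles so the decimated output comes back ~0.5.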
many older samplers had additional analog filtering stages, sometimes with interesting and arbitrary uses, like the linn lm-1 letting through some grit (8-bit samples) in the attack portion of the envelope.
ableton has a variety of algos to apply. resampling for pitch change plus time change can be done without significant degradation using windowed-sinc interpolation, and on modern computers this is fast enough to feel realtime. (but there is a fundamental trade-off: bigger windows give higher quality, and window size implies latency.) “warping” usually means doing something more exotic using frequency-domain representations, time-domain granulation, or a combination. (that’s a large topic in itself but not really related.)
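a minimal brute-force sketch of windowed-sinc interpolation, with a hann window; `half_width` is my made-up knob for the window-size / quality / latency trade-off:

```python
import math

def wsinc(x, half_width):
    # sinc truncated to +/- half_width samples and shaped by a hann window
    if x == 0.0:
        return 1.0
    if abs(x) >= half_width:
        return 0.0
    window = 0.5 + 0.5 * math.cos(math.pi * x / half_width)
    return math.sin(math.pi * x) / (math.pi * x) * window

def resample(buf, ratio, half_width=16):
    # read the buffer at fractional positions spaced `ratio` apart; each
    # output is a weighted sum of 2*half_width neighboring input samples
    out = []
    pos = 0.0
    while pos < len(buf) - 1:
        n0 = int(pos)
        lo = max(0, n0 - half_width + 1)
        hi = min(len(buf), n0 + half_width + 1)
        out.append(sum(buf[n] * wsinc(pos - n, half_width)
                       for n in range(lo, hi)))
        pos += ratio
    return out
```

doubling `half_width` roughly doubles the cost per output sample, and in a streaming context the “future” taps are exactly the latency mentioned above.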
softcut uses 3rd-order resampling by default (though i keep meaning to add runtime switches for lower orders.) this is a common compromise for realtime resampling: it has less distortion than linear interpolation and is nearly as fast to compute.
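for reference, the usual 4-point, 3rd-order (hermite) kernel looks like this — a generic sketch, not anyone’s actual code:

```python
def hermite_interp(frac, xm1, x0, x1, x2):
    # 4-point, 3rd-order hermite interpolation between x0 and x1,
    # with xm1/x2 as the neighboring samples; frac in [0, 1)
    c0 = x0
    c1 = 0.5 * (x1 - xm1)
    c2 = xm1 - 2.5 * x0 + 2.0 * x1 - 0.5 * x2
    c3 = 0.5 * (x2 - xm1) + 1.5 * (x0 - x1)
    return ((c3 * frac + c2) * frac + c1) * frac + c0
```

per output sample that’s a handful of multiply-adds — barely more than the single multiply-add of linear interpolation, with much lower distortion.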
(softcut is very straightforward; if it has any interesting features they are about record + partial-erase behavior during crossfades, which is surprisingly hard to do in a natural-sounding way without expensive input analysis. but its main reason for existence is just to encapsulate some common stuff that is fiddly to get right in high-level DSP environments.)
usually that means doing something with the sample depth, not the sample rate. sometimes it is simple rounding; sometimes an attempt to emulate old hardware by using a nonlinear quantization like µ-law encoding. like old telephones, old samplers may also have applied dynamic-range compression at the capture stage to make the most of limited bit depth with these encodings.
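a tiny sketch of µ-law-style companding at 8 bits (`mu=255` as in telephony; the function names are mine):

```python
import math

def mulaw_encode(x, mu=255.0):
    # compress: the log curve gives small signals finer steps than big ones
    y = math.copysign(math.log(1.0 + mu * abs(x)) / math.log(1.0 + mu), x)
    return int(round(y * 127))   # quantize to 8 bits (sign + 7)

def mulaw_decode(q, mu=255.0):
    # expand: invert the log curve
    y = q / 127.0
    return math.copysign(((1.0 + mu) ** abs(y) - 1.0) / mu, y)
```

the point: a signal at 0.001 still survives the round trip, while plain linear 8-bit rounding would flush it to zero — that low-level headroom is what the capture-stage compression was protecting.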
if you are interested in specific spectral effects of quantization, sampling &c, i would look at a good modern DSP book. zölzer wrote a good one that is very clear and to the point; chapter 3 in particular covers this stuff. if you are interested in the math behind it, hamming’s “numerical methods” is the classic reference.
that sounds like resampling. if the new, intermediate values are simply repeated from existing values, that’s zero-order interpolation. if they are computed from neighboring values in some way, that’s some other kind of interpolation.
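both cases fit in a few lines (names made up):

```python
def zero_order_resample(buf, ratio):
    # zero-order: each intermediate value just repeats the previous sample
    out = []
    pos = 0.0
    while pos < len(buf):
        out.append(buf[int(pos)])
        pos += ratio
    return out

def linear_resample(buf, ratio):
    # first-order: intermediate values sit on a line between neighbors
    out = []
    pos = 0.0
    while pos < len(buf) - 1:
        n = int(pos)
        f = pos - n
        out.append((1.0 - f) * buf[n] + f * buf[n + 1])
        pos += ratio
    return out
```

zero-order at ratio 0.5 literally doubles each sample ([1, 2, 3] -> [1, 1, 2, 2, 3, 3]), which is the stair-step sound of pitching down by sample repetition; linear smooths the steps a little at the cost of dulling highs.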