
sample rates better than 44.1

  • 18-07-2011 1:37pm
    #1
    Closed Accounts Posts: 2,655 ✭✭✭


    Would it be correct to say that working in 48, 88.2, or 96 is of no benefit if the final render will be in 44.1?

    I experimented with different sample rates on a project and could not hear a difference when rendered to 44.1.

    I did, however, hear a better high end, a smoother sheen, higher-quality reverb and a better image in the project at higher sample rates, while playing back in the DAW.

    But this did not appear in the final render to 44.1. It was a surprisingly obvious difference: the mix sounded a lot better, but the final render was the same.

    And it did appear in a final render to 88.2 and 96; they sounded more open and more real.

    Maybe all audio should be on DVDs now, not CDs? It definitely sounds better to me in 96 or 88.
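A quick way to make comparisons like this objective is a null test: subtract one render from the other and measure what is left. A minimal numpy sketch (the file names in the usage comment are hypothetical, and soundfile is just one common way to load WAVs):

```python
import numpy as np

def null_test_db(a, b):
    """Peak level (dBFS) of the difference between two equal-length renders.
    Returns -inf if the renders null completely (bit-identical)."""
    diff = np.abs(np.asarray(a, dtype=np.float64) - np.asarray(b, dtype=np.float64))
    peak = diff.max()
    return float("-inf") if peak == 0.0 else 20.0 * np.log10(peak)

# Hypothetical usage: load two 44.1 kHz renders and compare them sample
# for sample. Anything that only nulls down to, say, -40 dBFS really is
# a different file; below roughly -120 dBFS the difference is inaudible.
# a, _ = soundfile.read("render_from_44k_project.wav")
# b, _ = soundfile.read("render_from_96k_project.wav")
# print(null_test_db(a, b))
```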


Comments

  • Registered Users, Registered Users 2 Posts: 1,759 ✭✭✭Neurojazz


    DaDumTish wrote: »
    Would it be correct to say that working in 48, 88.2, or 96 is of no benefit if the final render will be in 44.1? [...]

    Dithering! - all the 48k+ rates dither down nicely to 44k, no problems... the top end is pretty much retained (even though it would be nice to invent another format to sell ;)
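For the dithering half of this: a sketch of what TPDF dither actually does at the 16-bit stage, in numpy. This is a generic illustration, not any particular DAW's implementation:

```python
import numpy as np

def quantize16(x, dither=True, seed=0):
    """Quantize a float signal (full scale = +/-1.0) to 16-bit steps.
    TPDF dither adds two uniform noise sources (+/-1 LSB peak in total)
    before rounding, which turns signal-correlated truncation distortion
    into benign, signal-independent noise."""
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0
    x = np.asarray(x, dtype=np.float64)
    if dither:
        x = x + (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round(x / lsb) * lsb

# Plain rounding keeps the error within 0.5 LSB but correlated with the
# signal; dithered rounding allows up to 1.5 LSB of error, but it's noise.
```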


  • Registered Users, Registered Users 2 Posts: 3,945 ✭✭✭Anima


    There is a benefit for plugins. Every plugin has numerical errors, because it's digital and that means limitations. Having twice as many samples or more to work with means fewer of these errors during calculations, especially as they accumulate. That's why plugins offer oversampling options, for more precision.

    It should be especially noticeable with reverb, but that might have more to do with dithering and using 64 bits than anything else. The high end should also be less attenuated when rendering with oversampling.

    So oversampling is not so much for our hearing as for the limitations of digital systems, I guess.
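Anima's point about oversampled processing can be demonstrated numerically: run the same distortion at 1x and at 4x oversampling and look at where the folded harmonic lands. A sketch with numpy/scipy, using tanh as a stand-in for a nonlinear "plugin":

```python
import numpy as np
from scipy.signal import resample_poly

fs, n = 44100, 44100                       # one second -> 1 Hz FFT bins
t = np.arange(n) / fs
x = 0.9 * np.sin(2 * np.pi * 6000 * t)     # clean 6 kHz tone

def drive(s):
    return np.tanh(3.0 * s)                # memoryless distortion "plugin"

# At 44.1 kHz the 5th harmonic (30 kHz) cannot be represented and folds
# back to 44100 - 30000 = 14100 Hz, an inharmonic alias.
y_base = drive(x)

# At 4x (176.4 kHz) that harmonic is represented correctly, and the
# decimation filter inside resample_poly removes it before downsampling.
y_os = resample_poly(drive(resample_poly(x, 4, 1)), 1, 4)

def alias_level(y):
    return np.abs(np.fft.rfft(y))[14100]   # energy in the 14.1 kHz bin
```

The aliased component in `y_base` is a clear spectral line; in `y_os` it is attenuated by the decimation filter, which is exactly the benefit plugin oversampling buys.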


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    DaDumTish wrote: »
    Working in 48, 88.2, or 96 is of no benefit.
    Neurojazz wrote: »
    Resampling! - all the 48k+ rates resample down nicely to 44k, no problems
    Fixed. :P

    But seriously, it depends on many factors. The ADC is probably already oversampling, at a lot more than 32x or something like that. The anti-aliasing and decimation filters might be on chip, or maybe the designer felt it was better to make his own, and maybe it works better at some rates than at others. Maybe the plugin already oversamples to realise the processing benefits, so it makes no difference what rate you put in. Maybe your speakers already roll off at 16kHz, or your room response is all over the place, so there's no way you'll hear a difference anyway.

    It is reasonably safe to say that converting the sampling rate (i.e. resampling) is a transparent process these days, in most (but not all) DAWs. However, there are always bugs!
    DaDumTish wrote: »
    higher quality reverb and better image
    in the project at higher sample rates, while playing back in the DAW.
    Differences with reverb and imaging in this context are usually caused by bit depth issues, so it's nothing to do with the sampling rate. I would be examining your chain to see where the bit depth is changed. It's very likely that dither was not applied by whatever process you used to create the 44.1kHz render.
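The "resampling is transparent" claim is easy to sanity-check: convert a band-limited tone from 96k to 44.1k and compare it against the same tone generated natively at 44.1k. A sketch using scipy's polyphase resampler as a stand-in for whatever SRC a DAW uses (dedicated SRC tools null deeper than scipy's modest default filter):

```python
import numpy as np
from scipy.signal import resample_poly

# 1 s of a 1 kHz tone at 96 kHz, converted to 44.1 kHz.
# 44100/96000 reduces to 147/320, so polyphase resampling is exact-ratio.
t96 = np.arange(96000) / 96000.0
y = resample_poly(np.sin(2 * np.pi * 1000 * t96), 147, 320)

# The "ideal" result: the same tone generated natively at 44.1 kHz.
t44 = np.arange(44100) / 44100.0
ref = np.sin(2 * np.pi * 1000 * t44)

# Away from the filter's edge transients the two agree to a small
# fraction of a percent even with this default filter.
err = np.max(np.abs(y[2000:-2000] - ref[2000:-2000]))
```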


  • Closed Accounts Posts: 2,655 ✭✭✭i57dwun4yb1pt8


    madtheory wrote: »
    Maybe your speakers already roll off at 16kHz, or your room response is all over the place so there's no way you'll hear a difference anyway.

    <<< Speakers are top notch, and the room is very good for what it is (measured).
    It also appears on headphones, though.

    I can hear the difference clearly, and I can't hear anything above 11k due to ear problems.

    madtheory wrote: »
    It is reasonably safe to say that converting the sampling rate (i.e. resampling) is a transparent process these days, in most (but not all) DAWs. However, there are always bugs!

    << I did find a higher-quality render setting in REAPER, so I will try that.

    DaDumTish wrote: »
    higher quality reverb and better image
    in the project at higher sample rates, while playing back in the DAW.
    Differences with reverb and imaging in this context are usually caused by bit depth issues. So it's nothing to do with the sampling rate. I would be examining your chain to see where the bit depth is changed. It's very likely that dither was not applied by whatever process you used to create the 44.1kHz render.

    << Again, I will look at REAPER's render settings and see if they can be bumped up; I think they can.
    If I could capture the overall mix sound when set to 88 or 96/24 in a 44.1/16 render, I would be quite happy.
    I just wonder if it's possible, or worth the extra CPU hit and file sizes.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    Are you on a PC? Try doing the conversion with Voxengo r8brain. Give it the highest bit depth you can.


  • Closed Accounts Posts: 2,655 ✭✭✭i57dwun4yb1pt8


    I heard about this package, but also that it reverses file polarity, so you end up with a negative-impulse low end.

    But I haven't confirmed this.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    All you need to do is flip the polarity back afterwards; that process in itself has no effect on the sound. It's the best free one, and it still does really well in SRC tests, AFAIK.

    There are several good ones for the Mac; not free, but inexpensive.
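To see why flipping back is free: polarity inversion is just sample negation, and in floating point negation is exact, so flipping twice restores the original bit for bit. A two-line numpy sketch:

```python
import numpy as np

def flip_polarity(x):
    """Polarity inversion negates every sample. In floating point the
    negation is exact, so flipping twice restores the original signal
    bit for bit -- no generational loss."""
    return -np.asarray(x)

# So an SRC tool that happens to invert polarity costs nothing sonically
# once you flip the result back.
```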


  • Closed Accounts Posts: 2,655 ✭✭✭i57dwun4yb1pt8


    OK, so would this sound like a plan?

    1/ mix in 96/32 in REAPER

    2/ output the mix to a WAV file = 96/32

    3/ master the 96/32 file, using mastering wizardry, to a 96/24 WAV with reversed polarity

    4/ r8brain the 96/24 file to 44.1/16, in turn reversing the polarity back to normal

    Does this make sense?

    Thanks a lot.

    (Note: I'm also experimenting with mastering - yeah, I know ;-) )
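The SRC-with-polarity part of that plan can be sketched in numpy/scipy, with resample_poly standing in for r8brain (the function and array names are hypothetical). Because resampling is linear, the flip going in and the flip coming out cancel exactly:

```python
import numpy as np
from scipy.signal import resample_poly

def master_src_to_44k(mix_96k):
    """Steps 3-4 of the plan: polarity-reversed 96k master -> SRC -> flip back."""
    flipped = -np.asarray(mix_96k, dtype=np.float64)   # polarity reversed by the mastering stage
    src_44k = resample_poly(flipped, 147, 320)         # 96k -> 44.1k (44100/96000 = 147/320)
    return -src_44k                                    # polarity back to normal

# Dither and truncation to 16 bit would then be the final step.
```

Because the resampler is linear, `-resample(-x)` equals `resample(x)` sample for sample, which is why the flip/unflip round trip is harmless.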


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    Yes, that makes sense, except if I were you I would do blind A/B tests between a few 44.1/24 vs 96/24 mixes, and see what the difference is.


  • Closed Accounts Posts: 2,655 ✭✭✭i57dwun4yb1pt8


    Hmmm, to me it seems the REAPER render is not as good as doing it outside REAPER, in the main.

    So, assuming I'm mastering at home:

    project is at 96k

    1/ render a 96k/32-bit-float WAV file of the mix

    (the mix includes any master EQ, but no master limiting; apparently SRC can sometimes cause overs)

    2/ SRC this to a 44.1/24-bit WAV using r8brain

    3/ import the 44.1/24-bit WAV into a single-track REAPER project

    4/ put a mastering limiter followed by a dither plugin on the master track

    5/ adjust the mastering limiter to the required master level, etc.

    6/ adjust the dither for 16 bit with 80% type 1 noise shaping

    7/ render out the 44.1/16 file (making sure the dither option is turned off in REAPER)

    The resulting file does seem to sound better than just rendering with dither and SRC from REAPER directly.

    I did this, as well as a straight REAPER render, and the straight one didn't seem to sound as good as the fiddly one. So it does look like doing it in separate steps matters.

    Now I must try a project at 44.1k and see if it is any different from the 96k one.
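The "dither with noise shaping" stage in step 6 can be sketched as a first-order error-feedback quantizer. This is a simplified illustration of the idea, not REAPER's actual "type 1" algorithm:

```python
import numpy as np

def shaped_dither_to_16bit(x, seed=0):
    """TPDF dither plus first-order error feedback: each sample's rounding
    error is added back into the next sample, which tilts the quantization
    noise toward high frequencies, where the ear is least sensitive."""
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0
    d = (rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))) * lsb
    out = np.empty(len(x))
    err = 0.0
    for i, s in enumerate(x):
        target = s + err                          # feed the previous error back in
        out[i] = np.round((target + d[i]) / lsb) * lsb
        err = target - out[i]
    return out
```

The output sits exactly on the 16-bit grid, and its error spectrum rises with frequency instead of being flat, which is the whole point of shaping.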


  • Registered Users, Registered Users 2 Posts: 801 ✭✭✭PMI


    Well, the latest Katy Perry stuff was all recorded at 44.1, and it sold millions and got millions of plays. Less strain, and easier to manage size-wise!

    The official line was...

    "Where do people listen to their songs? MP3/iPod. Where do they watch their videos? YouTube."


  • Closed Accounts Posts: 2,655 ✭✭✭i57dwun4yb1pt8


    Indeed, if that's true.

    But is that a reason to stop aiming for higher standards, or should we start the dumbing down of everything?

    Maybe go back to wax cylinders?

    MP3 will die when internet bandwidth and storage space on devices become huge, which they will.

    Then people will be listening to 96/24 WAVs as a default.

    Personally, I'd like to continue trying to make it better rather than assist it in sliding back into the sh1t.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    Have you missed the raging debates about this on prosoundweb and gearslutz etc. years ago? Take a look at Dan Lavry's paper.

    For recording audio, any rate higher than 50kHz is a marketing ploy, showing ignorance of the sampling theorem. More does not equal better. Processing audio has other issues, most of which are solved by upsampling, which can be transparent to the user, i.e. don't worry about it.

    If you're really concerned, stick to 48kHz. Did you know that was originally intended to be the professional sampling rate, with 44.1kHz reserved for domestic use? See John Watkinson's excellent book The Art of Digital Audio.
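The sampling-theorem point can be shown in a couple of lines: a tone above Nyquist doesn't add "air", it comes back as a different, lower tone. A numpy sketch:

```python
import numpy as np

fs, n = 44100, 4410
t = np.arange(n) / fs

# Sampled at 44.1 kHz, a 30 kHz tone is sample-for-sample identical to an
# inverted 14.1 kHz tone (44100 - 30000 = 14100). Whatever gets past the
# anti-aliasing filter determines the audible band; a higher rate only
# moves the fold point, it doesn't add fidelity below 20 kHz.
above_nyquist = np.sin(2 * np.pi * 30000 * t)
folded = -np.sin(2 * np.pi * 14100 * t)
```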

    PS PMI: that Katy Perry thing is a red herring, and that's not quite what he said anyway! And the guy was remixing the stuff, not recording it.


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    I was discussing something similar with a friend today. We did some field recordings last week, and on playback there's a very high tone audible, although the recording is basically of silence in a field. Obviously there's wide-band background noise, and we were wondering whether the tone we were hearing was a notch created by the filter or an alias frequency reflected back down into the bandwidth.
    madtheory wrote: »

    For recording audio, any rate higher than 50kHz is a marketing ploy, showing ignorance of the sampling theorem.

    That assumes that the top end of the human hearing range is 20kHz or so. Which is fine; however, there's some research around now which contradicts this, suggesting the bandwidth is actually much larger. Apparently the paper on the subject is "Detection of Threshold for Tones above 22kHz" by Kiryu and Ashihara.

    In this (another) paper: AES Paper

    Subjects who couldn't hear a pure tone of 22kHz could still hear a difference in musical signals with and without a band-pass at the higher frequency.

    Furthermore, the fact that the likes of Rupert Neve are designing systems with bandwidths in the 100kHz region means that a good front end will pick up signals well above 20kHz. And when you consider that intermodulation products from ultrasonic frequencies can be heard, it seems natural to try to capture the higher-frequency components.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    If you search elsewhere you'll see that the Kiryu and Ashihara paper has been discredited; IIRC their methodology was incorrect, and what was heard was speaker distortion. Even if you don't buy that, one experiment does not equal scientific fact. Plus, if you read the paper, it's not conclusive.

    Ear structure limits the bandwidth:
    http://thewombforums.com/showpost.php?p=294233&postcount=226
    Some more involved material from above poster:
    http://www.aes.org/sections/pnw/ppt.htm


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    Discredited is a bit harsh; their work is cited in many other papers. Also, while the basilar membrane may be band-limited, there's also the notion of bone-conducted ultrasound, as has been shown by work on treating tinnitus and by perception of ultrasound via bone conduction in profoundly deaf individuals.

    This work suggests that the biomechanical effect of high frequencies appears as a result of a complex interaction between HF and audible low-frequency components, as opposed to HF on its own.

    And there's still the high-frequency intermodulation issue reflecting back down to below 20k.

    Personally I think a 22kHz cut-off is too low, and since the instruments we record often have harmonics above the accepted range, I still see no reason why we shouldn't attempt to record them, even if they are band-limited by the playback medium.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    Perhaps "discredited" is harsh, but the fact is that the paper is inconclusive. Do you know of any papers on "bone conducted ultrasound"?

    Intermodulation will only occur if the anti-aliasing filter is poor. I believe that is the issue you had with your field recorder.

    I agree with you that 22kHz is probably too low, but only because it makes the anti-aliasing filter a challenge/expensive to implement without distortion. That's why I mentioned 48kHz earlier: that gives a cutoff of 24kHz. Probably still a bit low, but it is an established universal standard. Furthermore, ADC/DAC and filter design has moved on a lot since the CD was introduced!

    The reason for bandwidth limiting is that it is a waste of resources to gather and store more information than is required. I don't know of any conclusive evidence showing that HF can modulate LF audibly.

    Finally, Rupert Neve has come out with some strange statements; I would be sceptical of someone who is trying to sell us something.


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    madtheory wrote: »
    Perhaps "discredited" is harsh, but the fact is that the paper is inconclusive. Do you know of any papers on "bone conducted ultrasound"?

    Intermodulation will only occur if the anti-aliasing filter is poor. I believe that is the issue you had with your field recorder.

    I do; I can't access them right now, but I'll try and find versions with free access and dig them up.

    When I'm talking about intermodulation, I mean the modulation between two high-frequency sources where the difference product is audible but the fundamentals are not...
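That difference-product mechanism is easy to simulate: push two ultrasonic tones through a slightly nonlinear stage, and an audible 1 kHz tone appears that neither input contains. A numpy sketch (the 0.1·x² term is an arbitrary toy nonlinearity, not a model of any real device):

```python
import numpy as np

fs, n = 96000, 9600                        # 10 Hz FFT bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 24000 * t) + np.sin(2 * np.pi * 25000 * t)  # both ultrasonic

bent = x + 0.1 * x ** 2                    # mild even-order nonlinearity

def level_at(y, freq_hz):
    """Normalized spectral amplitude at an exact FFT bin frequency."""
    return np.abs(np.fft.rfft(y))[freq_hz // 10] / (n / 2)

# level_at(x, 1000) is essentially zero: a linear chain creates no new tones.
# level_at(bent, 1000) is about 0.1: the 25k - 24k difference tone, well
# inside the audible band even though both source tones are ultrasonic.
```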


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    OK, but wouldn't the difference product make it through the filter, regardless of how it originated?

    Thanks for digging out those links. :)


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    Will need to email those PDFs, Madser; they can't be accessed online...

    Meanwhile, here are the abstracts:

    Ultrasound activates the auditory cortex of profoundly deaf subjects.
    Using three-dimensional PET, the cortical areas activated by bone-conducted ultrasound were measured from five profoundly deaf subjects and compared with the cortical areas of normal-hearing subjects activated by stimuli through bone-conducted ultrasonic, air-conducted, bone-conducted, and vibro-tactile hearing aids. All of the hearing aids, including the ultrasonic hearing aid, consistently activated the medial portion of the primary auditory cortex of the normal volunteers. The same cortical area was also significantly activated in the profoundly deaf subjects although the percentage increase in regional cerebral blood flow (rCBF) was smaller than in normal subjects. These results suggest that extra-cochlear routes convey information to the primary auditory cortex and can therefore produce detectable sound sensation even in the profoundly deaf subjects, who reported a sensation themselves.

    PET is a kind of brain scan, like CT or MRI.

    Ultrasonic masker clarifies ultrasonic perception in man
    Bone conduction enables ultrasound to be heard. Although several hypotheses about ultrasonic perception have been presented, the perception mechanism of bone-conducted ultrasound has not yet been established. In this study, to investigate ultrasonic perception, the amount of masking produced by 27-, 30- and 33-kHz bone-conducted ultrasonic maskers for air-conducted high-frequency sounds was measured in the frequency range of 8–18 kHz at 1-kHz intervals. The results showed that the air-conducted signals in the frequency range of 10–14 kHz were strongly masked by the ultrasonic maskers. When the masker intensity increased from 5 to 10 dB SL, the growth of masking was more than 10 dB in the frequency range of 9–15 kHz, and the masking spread strongly to lower frequencies. Furthermore, the dynamic range for bone-conducted ultrasound was clearly narrower than that for air-conducted high-frequency sounds. These results suggest that perception of bone-conducted ultrasound depends on inner hair cell activity induced by ultrasound, even without modulation being present, and does not depend on enhancement by the outer hair cells in the basal turn of the cochlea.

