
Replacing lost frequencies

  • 12-05-2011 12:26pm
    #1
    Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    Hi,

    Does anyone know of any technology / software that attempts to replicate lost frequencies from a recording?
    For instance, if I record an acoustic guitar with a cheap dynamic mic a lot of high frequencies would not get recorded. EQ-ing doesn't fix this as it can't address the transients of these frequencies.
    Just wondering as I'm thinking of looking into it for a project in college.

    Thanks....



Comments

  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    omen80 wrote: »
    Hi,

    Does anyone know of any technology / software that attempts to replicate lost frequencies from a recording?
    For instance, if I record an acoustic guitar with a cheap dynamic mic a lot of high frequencies would not get recorded. EQ-ing doesn't fix this as it can't address the transients of these frequencies.
    Just wondering as I'm thinking of looking into it for a project in college.

    Thanks....

    a) Can you explain the bold part of your question?

    b) How do you plan to tell the software which frequencies are missing, since the frequencies haven't been recorded in the first place?


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    studiorat wrote: »
    a) Can you explain the bold part of your question?

    b) How do you plan to tell the software which frequencies are missing, since the frequencies haven't been recorded in the first place?

    a) EQ-ing can't work alone because a sound source is made up of harmonics that start and end at different times, and also have contrasting amplitude envelopes.

    b) I'm not sure yet - I'll have to do some research... maybe some sort of fractal math, like Photoshop uses to enlarge images without losing quality.


  • Registered Users, Registered Users 2 Posts: 352 ✭✭splitrmx


    iZotope (their capitalisation, not mine) have a product called RX that can fill in tiny gaps of audio, used for fixing slight drop outs etc: http://www.izotope.com/products/audio/rx/

    However, reproducing the higher frequency range of an entire track would probably be beyond most current software.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    splitrmx wrote: »
    iZotope (their capitalisation, not mine) have a product called RX that can fill in tiny gaps of audio, used for fixing slight drop outs etc: http://www.izotope.com/products/audio/rx/

    Looks like a great piece of software to have, thanks.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    OMG, this is why plugins are bad. The basic principles get forgotten!

    1. If you close mic, you exaggerate the transients, mainly because we don't normally put our ears that close!

    2. The "high frequencies" DO get recorded, but they may be very close to the noise floor.

    3. If it doesn't sound good it's most likely because of an imbalance in the mid range, because this is where the ear is most sensitive.

    4. It's always best to record it properly in the first place- fresh strings, correct intonation, good acoustic space, microphone properly positioned.

    There are quite a few dynamic mics that you probably wouldn't think are dynamics based on the sound, for example Beyerdynamic M201, EV RE20, Sennheiser MD421. They all sound very good on stringed instruments.

    That said, you could try one of many harmonic distortion plugins out there- so called "exciters" for example. Voxengo overtone eq is pretty good. These things add harmonics above (or below) based on what is there already. You choose which ones via a filter.

    I don't know of anything that would let you alter the envelope of a harmonic after the fact, but I guess iZotope could do something. Or Cedar (which the iZotope guys were "inspired" by). It's not going to make a bad recording better though. Just different.
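    A rough sketch of the exciter idea described above, for anyone curious: high-pass the signal, push it through a gentle nonlinearity to generate harmonics above what is already there, and mix a little of the result back in. Python with numpy/scipy assumed; the cutoff, drive and mix values are arbitrary illustrations, not settings from any particular plugin.

        import numpy as np
        from scipy.signal import butter, sosfilt

        def exciter(x, fs, cutoff=800.0, drive=4.0, mix=0.15):
            """Crude harmonic exciter: distort the band above `cutoff` and
            blend the generated overtones back into the dry signal."""
            sos = butter(4, cutoff, btype="highpass", fs=fs, output="sos")
            hf = sosfilt(sos, x)              # isolate the existing top end
            overtones = np.tanh(drive * hf)   # soft clipping creates new harmonics
            return x + mix * overtones

        # Example: a 1 kHz sine gains energy at 3 kHz, 5 kHz, ... after processing.
        fs = 44100
        t = np.arange(fs) / fs
        tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
        brighter = exciter(tone, fs)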


  • Registered Users, Registered Users 2 Posts: 3,945 ✭✭✭Anima


    It's simply impossible to boost what isn't there. If it's recorded badly and you want to boost quiet parts of the spectrum, that's possible, but you'll be boosting the noise as well and it won't sound great.

    omen80 wrote: »
    EQ-ing can't work alone because a sound source is made up of harmonics that start and end at different times, and also have contrasting amplitude envelopes.

    Not sure what the point is there. An EQ works by boosting a certain region by a constant amount, so over time there will be more energy in that region and as a consequence it will be louder. Any periodic sound will behave as you described; it's not a problem.

    Any way you slice it, the only real high-quality solution is to record it right the first time. In reality, you want to do as little processing as possible.
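    A toy numeric illustration of the point above about boosting the noise along with the signal (Python with numpy/scipy assumed; the levels and the 6 kHz band edge are made up): an EQ boost of the top end raises the wanted signal and the noise floor by exactly the same amount, so the signal-to-noise ratio in that band is unchanged.

        import numpy as np
        from scipy.signal import butter, sosfilt

        rng = np.random.default_rng(0)
        fs = 44100
        t = np.arange(fs) / fs
        signal = 0.02 * np.sin(2 * np.pi * 8000 * t)    # weak high-frequency content
        noise = 0.02 * rng.standard_normal(t.size)      # broadband noise floor

        def band_rms(x, lo=6000.0):
            """RMS level of everything above `lo` Hz."""
            sos = butter(4, lo, btype="highpass", fs=fs, output="sos")
            return np.sqrt(np.mean(sosfilt(sos, x) ** 2))

        # Model a 12 dB EQ boost of the band as a gain on that band: the signal
        # and the noise rise together, so the ratio between them is untouched.
        gain = 10 ** (12 / 20)
        snr_before = band_rms(signal) / band_rms(noise)
        snr_after = band_rms(gain * signal) / band_rms(gain * noise)
        print(snr_before, snr_after)   # identical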


  • Registered Users, Registered Users 2 Posts: 801 ✭✭✭PMI


    Here's what ya need: www.aib.ie :D Buy a great mic and rent a great room :D

    Get it right at source; a turd only gets so shiny :D


  • Registered Users, Registered Users 2 Posts: 4,034 ✭✭✭rcaz


    It sounds like omen80 is talking about a more educated project than just 'how can i put things in that aren't there already'... From the comparison to Photoshop's algorithms anyway.

    Would you give something like Pure Data or SuperCollider a go?


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    El Pr0n wrote: »
    It sounds like omen80 is talking about a more educated project than just 'how can i put things in that aren't there already'... From the comparison to Photoshop's algorithms anyway.

    Would you give something like Pure Data or SuperCollider a go?

    It would need to be a little more "educated" than Pd or SC.

    You are talking about doing an analysis of material that has been recorded with an incomplete picture, and then estimating what it would be like had it been recorded with a different microphone. There's already IR software available that will do something similar; however, there isn't software that will remove the characteristics of the first microphone.

    We also need to take into account that frequency response is only one characteristic that differentiates one microphone from another.


  • Hosted Moderators Posts: 8,344 ✭✭✭fitz


    Not to mention the fact that the direct source is not the only thing that will have been recorded through the mic. How do you get an algorithm to account for room sound, flutter echoes, etc., and, as Rat said, the sound of the original mic? You might be able to do some generic IR emulation to apply a certain sonic character to the audio, but that's still just manipulating what's already there. Broken premise imo.


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    @OP, I did about 3 months' research into a similar problem and bailed out because there was no way I was going to come up with anything near a solution in time. What you are looking at here is PhD material IMO.

    Off the top of my head, I think you can extract the frequency information in two ways: by looking for crossover points and measuring the periodic time, or by FFT. Around this point I lost the will to live; I just didn't have the skills to do the hard coding and get my head around the maths at the same time. I concentrated on the coding instead.

    You may be better off concentrating on measuring the characteristics of one "bad" microphone and finding ways to synthesize improvements on that. But recording something and using a magic plug-in to create something that isn't there, when you have no information on what should actually be happening, is not viable IMO.

    The DSPrelated or DSPGuide sites may be worth a look. Also check out the Marsyas framework (an SDK for feature extraction) and FFTW (the Fastest Fourier Transform in the West); I'm not sure if you could use them for impulse response measurement though. You could also look at DIT's work on feature extraction: http://arrow.dit.ie/argcon/

    I suppose you'd need a really good handle on MATLAB as well, and math-related stuff in general; it just wasn't my bag. Best of luck if you go down that road.
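    For what it's worth, here is a minimal sketch of the two extraction routes mentioned above, reading "crossover points" as zero crossings. Python with numpy assumed; it only handles a clean single tone, which is exactly why the real problem is so much harder.

        import numpy as np

        fs = 44100
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 440.0 * t)   # clean test tone

        # Route 1: zero crossings -> period -> frequency
        crossings = np.where(np.diff(np.signbit(x)))[0]   # sample indices of sign changes
        period = 2 * np.mean(np.diff(crossings)) / fs     # two crossings per cycle
        f_crossings = 1.0 / period

        # Route 2: FFT peak
        spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
        freqs = np.fft.rfftfreq(x.size, d=1 / fs)
        f_fft = freqs[np.argmax(spectrum)]

        print(f_crossings, f_fft)   # both come out close to 440 Hz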


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    Thanks for the info everyone. If I were to go down this road it would be for a PhD, as there is a heck of a lot of work involved, maybe too much work! Obviously it is always better to record everything perfectly in the first place, but that isn't really the point here.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    I think maybe you're approaching it the wrong way. Any mic with poor HF response has a roll-off, which means it doesn't suddenly stop reacting to HF; it just becomes less sensitive with increasing frequency.

    So maybe you could look at noise removal plus eq? There are quite a few excellent noise reduction tools out there, but they do not have an automatic method for restoring attenuated HF. We just do it by ear, which is fine. It would be great if you could dial in known info about the recording- mic, format etc. and apply an algo to "undo" the mic, or change it to a different mic. Maybe it's something you could think about, and you can build on the existing NR research.
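    A sketch of what "dial in the known mic and undo it" could look like in the simplest case, assuming you have a published (or measured) frequency-response curve for the mic. Python with numpy/scipy assumed; the response figures below are invented for illustration, and the correction is capped so the noise floor isn't boosted without limit.

        import numpy as np
        from scipy.signal import firwin2, lfilter

        fs = 44100

        # Hypothetical mic response: flat to 5 kHz, then rolling off towards the top.
        freqs = np.array([0, 5000, 10000, 20000, fs / 2])
        mic_db = np.array([0.0, 0.0, -6.0, -12.0, -12.0])

        # Inverse EQ: undo the roll-off, but cap the correction at +10 dB so the
        # noise floor up there is not lifted indefinitely.
        corr_db = np.clip(-mic_db, None, 10.0)
        inverse_fir = firwin2(513, freqs, 10 ** (corr_db / 20), fs=fs)

        def undo_mic(x):
            """Apply the capped inverse-EQ FIR to a recording made with this mic."""
            return lfilter(inverse_fir, [1.0], x)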


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    madtheory wrote: »
    It would be great if you could dial in known info about the recording- mic, format etc. and apply an algo to "undo" the mic, or change it to a different mic. Maybe it's something you could think about, and you can build on the existing NR research.

    Well aside from noise reduction, this is essentially what I would be looking into - deconvolution of the original microphone. It's a big area of DSP though.

    "Antares Mic Modeler" does essentially what you're looking for; you should check it out. But I would be looking at a new method of removing the original microphone's response using fractal algorithms.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    I'm not looking for it; I'm just suggesting new angles for you. :) The Antares mic modeller seems to be frequency-domain only. It doesn't really work, in other words.


  • Hosted Moderators Posts: 8,344 ✭✭✭fitz


    omen80 wrote: »
    Well aside from noise reduction, this is essentially what I would be looking into - deconvolution of the original microphone.
    But I would be looking at a new method of removing the original microphone's response using fractal algorithms.

    This is where I think you're on a hiding to nothing.
    If you want to identify and remove what the mic does to the audio, you need to understand what the mic's circuit does to the incoming signal, but how do you do that? You can't use the captured audio to do this, because you've no "pure" version of the audio to compare it to...

    Maybe I'm just confused... but it seems like you're still only going to be trying to interpolate a full signal by removing the already subtractive effect the mic has had on the audio. It's still going to just be guesswork. Mathematically complicated guesswork, but guesswork nonetheless.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    A distortion-free, full-range speaker in an anechoic chamber could be used as a source for a sine sweep, and you could measure various microphones. Ideally! :)
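    For the record, the standard way to do that measurement is an exponential (log) sweep, deconvolved by its own time-reversed, amplitude-weighted copy so the result collapses to the speaker-plus-mic impulse response (the Farina method). A minimal sketch, Python with numpy/scipy assumed; `recorded` stands for whatever the mic under test captured in the chamber.

        import numpy as np
        from scipy.signal import chirp, fftconvolve

        fs = 48000
        T = 5.0
        t = np.arange(int(T * fs)) / fs
        f_start, f_stop = 20.0, 20000.0

        # Exponential sine sweep played through the reference speaker
        sweep = chirp(t, f0=f_start, t1=T, f1=f_stop, method="logarithmic")

        # Inverse filter: time-reversed sweep with an exponentially decaying
        # envelope that compensates for the sweep's pink energy distribution.
        envelope = np.exp(t * np.log(f_stop / f_start) / T)
        inverse = sweep[::-1] / envelope

        def measure_ir(recorded):
            """Deconvolve a mic's recording of the sweep into an impulse response."""
            ir = fftconvolve(recorded, inverse, mode="full")
            return ir / np.max(np.abs(ir))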


  • Hosted Moderators Posts: 8,344 ✭✭✭fitz


    So essentially, if you did that, you'd be using a known frequency profile of a mic in ideal conditions to try and generate missing signal, or reduce boosted signal. It's still generating new audio content via maths as opposed to "revealing" original audio content from the source. Still seems like it's an extremely contrived idea that isn't going to be that practically useful...


  • Hosted Moderators Posts: 8,344 ✭✭✭fitz


    I mean, you still need to use the audio to generate the new information, so you're back to being influenced by more than just the mic characteristics (room sound, etc.).


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    fitz wrote: »
    It's still generating new audio content via maths as opposed to "revealing" original audio content from the source. Still seems like it's an extremely contrived idea that isn't going to be that practically useful...

    Well yes, you would have to resynthesise certain frequencies etc... A lot of musicians use virtual instruments, so I don't see why it wouldn't be useful. Obviously I'm not saying that I'm going to create a mad piece of software that will be a "must have" bit of equipment for every studio; I would only be researching new methods of deconvolution.


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    madtheory wrote: »
    A distortion-free, full-range speaker in an anechoic chamber could be used as a source for a sine sweep, and you could measure various microphones. Ideally! :)

    And transient responses to various attacks and releases at those frequencies; how the diaphragm starts and stops vibrating, resonances, etc.

    Don't forget that a lot of the time we choose microphones because they don't have perfect characteristics. They add or remove something in the original sound.

    I'd also agree with Fitz regarding the fractal modeling. From what I know, those techniques are fine for replacing missing samples etc. - examining a signal's trajectory and estimating. But here you're talking about examining the trajectories of groups of partials and estimating where they are going without any ground truth, since at that level of analysis I reckon every instrument is going to be different. Not to mention individual notes on the same instrument.


  • Registered Users, Registered Users 2 Posts: 3,945 ✭✭✭Anima


    Fitz is right here. It doesn't make sense. If it were possible, it would have been done long ago anyway.

    To use an analogy: if I use a camera to take a photo, I can't retrieve any of the image outside the frame after I've taken it; the opportunity is gone. It's the same thing.

    The mic can only pick up the bandwidth it's sensitive to. You can't just remove the effect of the mic on the signal and somehow end up with more information than you had before.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    Anima wrote: »
    Fitz is right here. It doesn't make sense. If it were possible, it would have been done long ago anyway.

    By that logic I guess the human race should never try to advance in anything.
    Anima wrote: »
    To use an analogy: if I use a camera to take a photo, I can't retrieve any of the image outside the frame after I've taken it; the opportunity is gone. It's the same thing.

    Maybe not outside the frame, but there are software programs that allow you to zoom in on a photo without losing picture quality; therefore they are creating new information based on the existing image.
    Anima wrote: »
    The mic can only pick up the bandwidth it's sensitive to. You can't just remove the effect of the mic on the signal and somehow end up with more information than you had before.

    As I stated before, this new information would have to be resynthesised. It's obviously impossible to retrieve frequencies that were never recorded in the first place, but it could be possible to replicate these digitally using new methods of deconvolution??


  • Closed Accounts Posts: 6,408 ✭✭✭studiorat


    omen80 wrote: »

    As I stated before, this new information would have to be resynthesised. It's obviously impossible to retrieve frequencies that were never recorded in the first place, but it could be possible to replicate these digitally using new methods of deconvolution??

    But the question is: de-convolve what? The information isn't there in the first place.


  • Registered Users, Registered Users 2 Posts: 3,945 ✭✭✭Anima


    omen80 wrote: »
    By that logic I guess the human race should never try to advance in anything.

    :rolleyes:

    omen80 wrote: »
    Maybe not outside the frame, but there are software programs that allow you to zoom in on a photo without losing picture quality; therefore they are creating new information based on the existing image.

    Image signal processing is very close to audio signal processing. It's the same maths involved. It's done by re-sampling or interpolating between existing data.

    That's not creating new data though; that's filling in the cracks when you don't have the actual real data. If you have a small image and you resize it to make it much bigger, you see how this works. It's not adding more detail, it's just averaging the data that's already there, so where before there was 1 pixel, there's now 4 pixels with a similar colour. The result is a blurry mess with no sharpness or detail.
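    The same point can be shown directly on audio (Python with numpy/scipy assumed, the figures arbitrary): upsample a recording made at 22.05 kHz to 44.1 kHz and the interpolator fills in the "new pixels", but nothing appears above the original 11 kHz limit, because that information was never captured.

        import numpy as np
        from scipy.signal import resample_poly

        rng = np.random.default_rng(1)
        fs_low = 22050
        x = rng.standard_normal(fs_low)     # one second captured at 22.05 kHz,
                                            # so nothing above ~11 kHz exists in it

        # Upsample 2x - the audio equivalent of enlarging an image
        y = resample_poly(x, up=2, down=1)

        spec = np.abs(np.fft.rfft(y))
        freqs = np.fft.rfftfreq(y.size, d=1 / (2 * fs_low))
        above = spec[freqs > 11025].max()
        below = spec[freqs <= 11025].max()
        print(above / below)   # tiny: interpolation created no content above 11 kHz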


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    studiorat wrote: »
    But the question is: de-convolve what? The information isn't there in the first place.

    The microphone's impact on the recording.
    Are you talking about using a reference model to base the deconvolution on?


  • Registered Users, Registered Users 2 Posts: 4,034 ✭✭✭rcaz


    Anima wrote: »
    That's not creating new data though; that's filling in the cracks when you don't have the actual real data. If you have a small image and you resize it to make it much bigger, you see how this works. It's not adding more detail, it's just averaging the data that's already there, so where before there was 1 pixel, there's now 4 pixels with a similar colour. The result is a blurry mess with no sharpness or detail.

    But you can vectorise a raster image so that zooming isn't just averaging out the gaps in between the already-known pixels, can't you? Maybe there's a similar process for audio?


  • Registered Users, Registered Users 2 Posts: 3,945 ✭✭✭Anima


    That's different really; I can't think of how it would work with audio specifically, but maybe there is such a thing.


  • Registered Users, Registered Users 2 Posts: 4,034 ✭✭✭rcaz


    Anima wrote: »
    That's different really; I can't think of how it would work with audio specifically, but maybe there is such a thing.

    I've been thinking about it since I posted last; maybe that's more akin to time-stretching with audio or something? I don't know nearly as much about audio as the rest of you do, but this is really interesting stuff.


  • Hosted Moderators Posts: 8,344 ✭✭✭fitz


    Vector graphics don't really work like that; it's not an accurate analogy. And zooming in on something may not result in a perceived lack of quality, but zooming in and scaling up are different things. If you zoom in on a section of a hi-res photo, crop the image to that section, then scale it up, and repeat this process, there's no magic software out there that will preserve quality. Assuming, wrongly, that it's possible to do effectively with images is leading to faulty thinking on how it could work with audio, imo.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    fitz wrote: »
    Assuming, wrongly, that it's possible to do effectively with images is leading to faulty thinking on how it could work with audio, imo.

    I'm not saying that it would definitely work, but I would be very open to the possibility that pretty soon we could have something like this.
    Ten years ago people would have laughed if you told them you could edit individual notes in a polyphonic recording, and now we have the Melodyne DNA editor that can do this.


  • Hosted Moderators Posts: 8,344 ✭✭✭fitz


    Melodyne is working with something that's already there though, and DNA doesn't always work. Anything you do to fill in what's not there is always going to require what is there in order to generate output. Unless you can remove bleed, room sounds, etc. from the audio too, in order to have a "cleaner" signal to generate your output from, it's a pointless exercise. And stripping out those elements is more likely to degrade what you have left over, so you're then starting your generation with an even lesser quality signal.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    fitz wrote: »
    Melodyne is working with something that's already there though, and DNA doesn't always work.

    True, but it still has to create added overtones when changing the pitch of a selected note etc., which are not present in the original recording.
    It may not always work perfectly, but it's a big step towards something better in the future.


  • Registered Users, Registered Users 2 Posts: 153 ✭✭TheBigRedDog


    This is a good idea, man. I'm working on something similar myself, but in college. There will always be new mathematical models to improve recorded sound etc. You'll obviously need to pick a section of this to tackle, unless you're part of a well-funded team. :P
    Anima wrote: »
    To use an analogy: if I use a camera to take a photo, I can't retrieve any of the image outside the frame after I've taken it; the opportunity is gone. It's the same thing.

    Yeah, it's true, but in video you can create a virtual frame from the motion captured between the frame before and the frame after, thus creating something that wasn't in either of the frames actually captured.

    The same stands for audio recordings too. If the OP can find a way to IMPROVE the audio, let's have it. Obviously he can't recreate the sound of an orchestra or rock band 2 metres from our head while out for a stroll. (That's my area ;) )

    Good luck, J


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    TheBigRedDog wrote: »
    There will always be new mathematical models to improve recorded sound etc.

    As long as this refers to restoration of archive recordings, and not a crutch for the act of recording...


  • Registered Users, Registered Users 2 Posts: 153 ✭✭TheBigRedDog


    I agree, but when there's a demand for turd polishing, someone would be wise to sell some polish. It's not the ideal situation but it's where the money and thus the research is these days. One of my lecturers just secured a big deal with some big people.

    There will always be a market for making bad audio sound good. Archiving, definitely, but also because people are lazy and would prefer to whack a plugin on a track than get up and move the mic an inch or two. There is space for improvement on these things too, which is what the OP is getting into.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    TheBigRedDog wrote: »
    but also because people are lazy and would prefer to whack a plugin on a track than get up and move the mic an inch or two.

    How is spending hours with a plugin easier than taking one minute to move a mic? This is my point: there are a lot of novice recordists who spend too much time looking at the laptop, and not enough time actually listening. Or they haven't yet figured out or been taught the simple art of mic positioning.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    madtheory wrote: »
    How is spending hours with a plugin easier than taking one minute to move a mic? This is my point: there are a lot of novice recordists who spend too much time looking at the laptop, and not enough time actually listening. Or they haven't yet figured out or been taught the simple art of mic positioning.

    This is getting off topic...


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    Not really. Some ideas are poor because the basic principles are ignored.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    madtheory wrote: »
    Not really. Some ideas are poor because the basic principles are ignored.

    But that's not the point of the thread.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    Yeeeess.... but it's (possibly) what is wrong with "the point of the thread". But let's not argue about arguing, OK?


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    madtheory wrote: »
    Yeeeess.... but it's (possibly) what is wrong with "the point of the thread". But let's not argue about arguing, OK?

    If you want to talk about something else then start a different thread. Simples.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    If you can't accept criticism of an idea on a public forum, don't post it. And if you're being anonymous, don't take criticism personally.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    madtheory wrote: »
    If you can't accept criticism of an idea on a public forum, don't post it. And if you're being anonymous, don't take criticism personally.

    I can accept criticism fine; in fact I opened the thread just looking for answers / advice. You began discussing an issue that was irrelevant to the topic.


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    You're missing my point because you can't see the wood for the trees. The basic idea is fundamentally flawed, and I'm not the only one who has pointed that out. For example, your initial assertion about the frequency and transient response of a dynamic mic is based on music technology textbook learning (or worse, the internet) rather than good engineering knowledge or practice, and shows no knowledge of how sound is perceived by the ear/brain mechanism. A lack of fundamentals, in other words.


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    madtheory wrote: »
    You're missing my point because you can't see the wood for the trees.

    Am I supposed to come up with an old saying now too??
    Ok then.....a rolling stone gathers no moss. There, all done.


    You sound like someone who doesn't trust digital technology??


  • Registered Users, Registered Users 2 Posts: 1,892 ✭✭✭madtheory


    You're flying off on a whole load of baseless assumptions there. Good luck with that.


  • Registered Users, Registered Users 2 Posts: 3,915 ✭✭✭GTE


    Correct me if I am wrong, OP, but I have an interest in DSP and am intrigued by this thread because of the idea and the discussion around it.

    I'm confused, as the title doesn't read to me like something that is at all possible: replacing a frequency that was lost in the recording.

    Are you trying to come up with a way to:

    1) Undo the flaws in a microphone - like your example. Another is an SM57 not capturing high frequencies very well. This would result in the audio sounding more like it was recorded with a perfectly neutral microphone.

    2) Undo the colouring that a mic can put onto a sound - so try to find out what makes a mic that is considered "warm" sound warm and take it out of the source signal. Conversely, once that is found, you could apply it to a signal.

    If that is the case then I can see what you are trying to do.

    A plugin that could either

    A) Undo the negative effects of cheap mics,
    or, conversely,
    B) Give source audio the characteristic of being recorded through a different mic.

    In my opinion (rather limited experience, I have to admit), B is the only real winner here.

    By way of convolution, you could sample microphones in different situations and use the impulse response to shape the source audio, provided the source audio is not hopelessly badly recorded (see the sketch at the end of this post).

    I am just trying to see if this is what you are on about, because I am not too clear.
    If that is it, I don't see it being as doable as a reverb or hardware model based on convolution and impulse responses.

    When you put white noise through a compressor at a certain setting it will only ever work in one way, but there is a lot more going on with mics, I feel. The compressor doesn't have to deal with reverberant sound, and even in a chamber you would have to do a very long and complex set of tone generation to get it right.

    I think a swept sine wave was mentioned. If that was continuous, then the effect of what just hit the mic would affect what is hitting it now, so in my mind you would have to play each frequency separately. But then how big are the steps you have to take? And you'd have to do this with a number of mics.

    It sounds a lot less useful than the compressor or reverb models that we get nowadays.

    Interesting though.

    EDIT:
    Re-reading the first post, it sounds like you want to re-synthesize the frequencies that a microphone cannot record, or cannot record properly.
    That seems a bit impossible, as you cannot predict the sound source well enough.
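    As promised above, a sketch of the convolution half of that idea (Python with numpy/scipy assumed): given an impulse response measured for some microphone, convolving a reasonably clean recording with it imposes that mic's character, which is option B. Going the other way (removing a mic) is the hard, ill-posed part the rest of the thread is arguing about.

        import numpy as np
        from scipy.signal import fftconvolve

        def apply_mic_character(dry, mic_ir):
            """Make `dry` sound as if it had been captured through the mic whose
            measured impulse response is `mic_ir` (option B above)."""
            wet = fftconvolve(dry, mic_ir, mode="full")[: dry.size]
            peak = np.max(np.abs(wet))
            return wet / peak if peak > 0 else wet

        # `mic_ir` would come from a measurement like the swept-sine method
        # discussed earlier in the thread; `dry` is the source recording.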


  • Registered Users, Registered Users 2 Posts: 639 ✭✭✭omen80


    bbk wrote: »
    EDIT:
    Re-reading the first post, it sounds like you want to re-synthesize the frequencies that a microphone cannot record, or cannot record properly.
    That seems a bit impossible, as you cannot predict the sound source well enough.

    It may well be impossible but it could be worth looking into if you have most of the sound source captured. Using a chaos theory algorithm for instance, it could be conceivable to rebuild the missing harmonics using the data already present. There are obvious pitfalls in that each sound source would produce varying frequencies (bells for instance produce inharmonic spectra), but it might still be worth checking out.
    The other side of the coin is flattening the frequencies that were boosted/attenuated by the microphone.
    Room reverb etc. is something that could be dealt with later. Deconvoluting the original microphone is the big issue.
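    To make the harmonic-rebuilding idea above concrete, here is a deliberately naive sketch in that direction (Python with numpy assumed). It picks the strongest partial below the mic's cutoff, treats it as a fundamental, and synthesises a few harmonics above the cutoff whose levels continue the decay of the captured ones. Phases are ignored, inharmonic sources (bells, as mentioned) would break it, and it is nothing like a fractal or chaos-theory method; it is only meant to show the shape of the problem.

        import numpy as np

        def extend_harmonics(x, fs, cutoff, n_new=4):
            """Naive 'band replication': estimate a fundamental from the captured
            band and add synthetic harmonics above `cutoff`."""
            spec = np.fft.rfft(x * np.hanning(x.size))
            freqs = np.fft.rfftfreq(x.size, d=1 / fs)

            usable = (freqs > 0) & (freqs < cutoff)
            f0 = freqs[usable][np.argmax(np.abs(spec[usable]))]

            # amplitudes of the captured harmonics of f0
            idx = [np.argmin(np.abs(freqs - k * f0))
                   for k in range(1, int(cutoff // f0) + 1)]
            amps = 2 * np.abs(spec[idx]) / x.size

            # continue the average decay ratio between successive harmonics
            decay = np.mean(amps[1:] / amps[:-1]) if amps.size > 1 else 0.5

            t = np.arange(x.size) / fs
            y = x.astype(float).copy()
            for i in range(1, n_new + 1):
                f = (len(idx) + i) * f0
                if f >= fs / 2:
                    break
                y += amps[-1] * (decay ** i) * np.sin(2 * np.pi * f * t)
            return y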


  • Registered Users, Registered Users 2 Posts: 3,915 ✭✭✭GTE


    omen80 wrote: »
    It may well be impossible but it could be worth looking into if you have most of the sound source captured. Using a chaos theory algorithm for instance, it could be conceivable to rebuild the missing harmonics using the data already present. There are obvious pitfalls in that each sound source would produce varying frequencies (bells for instance produce inharmonic spectra), but it might still be worth checking out.
    The other side of the coin is flattening the frequencies that were boosted/attenuated by the microphone.
    Room reverb etc. is something that could be dealt with later. Deconvoluting the original microphone is the big issue.

    Interesting, but sadly I don't see it being as useful as the other convolution-based things I mentioned.

