
Real Reviews ....

  • 29-09-2009 9:08pm
    #1
    Registered Users Posts: 7,790 ✭✭✭


    I've just been speaking to one of the magazines about the quality of equipment reviews.
    We both feel they're all too vague, perhaps like the equipment they're reviewing !

    We've come up with the idea of marking units' performance out of 100 against an industry standard (or standards), which stands at 50.

    So each item can be plus or minus in relation to the standard's 50 points.

    For example, an SM57 is clearly a standard, but I've often heard the Audix i5 mentioned around here as being superior, so it might score 65 or whatever.

    Similarly, a cheap vocal condenser might score 20 against a U87.

    To be Topical from another thread, a Behringer DI might score 10 against a BSS.

    So instead of rating stuff as 'Good for the price', you compare it to a quality benchmark.

    'If I spend X I can get to half an agreed quality but if I spend Y I can get to 80% of the standard'

    Would that enable guys to make better and easier decisions regarding gear?

    This is literally a first draft idea so I'd be interested in feedback on it.
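
    To make the arithmetic concrete, here's a rough sketch in code of how the benchmark-relative scale could work. The point offsets are purely illustrative, taken from the examples above, not real review scores:

```python
# Sketch of the proposed scale: the agreed industry-standard unit is
# pinned at 50, and everything else is scored out of 100 relative to it.
# All point offsets below are illustrative, not real review scores.

BENCHMARK = 50  # the industry standard always sits here

def relative_score(offset_vs_benchmark):
    """Score a unit by its offset from the benchmark's 50 points,
    clamped to the 0-100 scale."""
    return max(0, min(100, BENCHMARK + offset_vs_benchmark))

print(relative_score(0))    # SM57 as the benchmark: 50 by definition
print(relative_score(15))   # e.g. Audix i5 judged a bit better: 65
print(relative_score(-30))  # cheap condenser against a U87: 20
print(relative_score(-40))  # Behringer DI against a BSS: 10
```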



Comments

  • Registered Users Posts: 1,214 ✭✭✭ICN


    That's 1/2 of it alright..

    Confidence for me is knowing that the source of the review is kosher.

    It's funny how you hardly ever see a really bad review of anything.. unless it's "Budget" or made by a fledgling company. Easy targets.


    Independent reviews are the best.. I'd always do a few Google searches & find out the word on the forums about all the OS conflicts / bugs etc.. Basically all the stuff that doesn't get printed.

    You can't totally trust a review from some guy who is on the payroll.

    If you could get trusted independent reviews with your system.. then you'd be on to a winner. If it was on the Net, reviews / reviewers could themselves be rated / have comments left in their profile.

    Once you start going down the route of trade-show schmoozathons, press kits & not wanting to offend whoever.. then you are undoubtedly already in that "Bad Place".

    And that's where most reviewers / editors probably are.


  • Closed Accounts Posts: 229 ✭✭bedbugs


    hey paul.

    I always get the fear when I hear the term "industry standard". It really depends on what division of the industry you're talking about; film, theatre, rock bands, TV, classical etc..


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    ICN wrote: »
    That's 1/2 of it alright..

    Confidence for me is knowing that the source of the review is kosher.

    It's funny how you hardly ever see a really bad review of anything.. unless it's "Budget" or made by a fledgling company. Easy targets.


    Independent reviews are the best.. I'd always do a few Google searches & find out the word on the forums about all the OS conflicts / bugs etc.. Basically all the stuff that doesn't get printed.

    You can't totally trust a review from some guy who is on the payroll.

    If you could get trusted independent reviews with your system.. then you'd be on to a winner. If it was on the Net, reviews / reviewers could themselves be rated / have comments left in their profile.

    Once you start going down the route of trade-show schmoozathons, press kits & not wanting to offend whoever.. then you are undoubtedly already in that "Bad Place".

    And that's where most reviewers / editors probably are.

    Yes - but with someone like George Shilling you know you can trust (if not agree with) his opinion because -
    1. He's walked the walk, worked as an equipment user and made hit records.
    2. He has a long history of doing fair reviews.

    But let's separate the reviewer from the review process and talk about the process alone for the moment.


  • Registered Users Posts: 1,180 ✭✭✭Seziertisch


    I think that when it comes to gear there is no magic bullet. Different gear performs differently in different situations. What makes it great in one might make it not so great in another. Variety is the spice of life.

    For example, on vocals certain mics will perform differently with different vocalists, sounding great in one case and not so great in another. If you happen to have one of those vocalists that mic x works well for, you might see "works great on male vocals" or something in the review. Whether this means works great on this particular male vocalist or male vocalists in general is a little difficult to say sometimes.

    This is where the Internet really comes to the fore. Sure, there is a lot of bull**** and bull****ters, gear rising and falling in popularity, and a certain degree of skepticism (many grains of salt) is required, but there is still solid info to be found from real people in real situations using (and sometimes not properly using) the gear.

    Another problem with a standards comparison is that people have different aims and aspirations for recordings. If you are trying to make the next Dark Side of the Moon, a 58 and an Mbox won't bring you too far; if you just want to do some recordings, they are perfect.


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    bedbugs wrote: »
    hey paul.

    I always get the fear when I hear the term "industry standard". It really depends on what division of the industry you're talking about; film, theatre, rock bands, TV, classical etc..

    Why should you 'get the fear' about anything ?

    There are industry standards, full stop !

    You know there are, what's to be afraid of ?

    Obviously if you're talking about a hand held mic boom it does not refer to rock'n'roll. And a 57 isn't going to play a part in a staged period drama.

    In most elements that make up a recording studio there are standards - wouldn't comparisons to those benchmarks give you a better idea of what a review meant ?


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    Seziertisch wrote: »
    ...but there is still solid info to be found from real people in real situations using (and sometimes not properly using) the gear.

    But isn't my suggestion better than the current style ? You compare something to something else people are probably familiar with ?


  • Registered Users Posts: 843 ✭✭✭trackmixstudio


    Ha!
    Good luck with this.
    "Hello, S0und on S0und"
    "Hi. John from @lesis here. We are releasing a new firewire mixer and are thinking of running a full page inside front cover ad for 4 issues. We are sending over one for review but will need a copy of the review prior to committing to the ad."
    "Yeah, sure John. Fire it over. We will give it a good review"

    All industry magazines are driven by advertising and, as such, cannot be trusted when it comes to reviews. That has been discussed here before and it's not going to change.


  • Registered Users Posts: 3,834 ✭✭✭Welease


    PaulBrewer wrote: »
    I've just been speaking to one of the magazines about the quality of equipment reviews.
    We both feel they're all too vague, perhaps like the equipment they're reviewing !

    We've come up with the idea of marking units' performance out of 100 against an industry standard (or standards), which stands at 50.

    So each item can be plus or minus in relation to the standard's 50 points.

    For example, an SM57 is clearly a standard, but I've often heard the Audix i5 mentioned around here as being superior, so it might score 65 or whatever.

    Similarly, a cheap vocal condenser might score 20 against a U87.

    To be Topical from another thread, a Behringer DI might score 10 against a BSS.

    So instead of rating stuff as 'Good for the price', you compare it to a quality benchmark.

    'If I spend X I can get to half an agreed quality but if I spend Y I can get to 80% of the standard'

    Would that enable guys to make better and easier decisions regarding gear?

    This is literally a first draft idea so I'd be interested in feedback on it.

    From a purely process point of view (ignoring that the score will always be subjective and based on application)... I'm not sure your scale (if I am reading it right) would make any sense..

    SM57 is assigned 50 (it would be a nightmare in itself to decide (and agree) what *is* the standard). It costs 100 quid..

    What would a slightly better sounding 1000 quid mic score? 60 cos it sounds better?... 20 cos it's 10 times the cost? 40 cos it's better but costs a load more? Would someone reading that score be able to decipher why it got that score without reading the full article and explanation, in which case why even bother scoring it like that..

    From my experience, people are used to a 1-10 (or similar) scale and understand what it represents; any new scale would require too much explanation to the casual user to be of any real value. Any complication of the scoring mechanism just distracts the reader from the detail they are actually interested in.

    Sorry :(


  • Registered Users Posts: 1,180 ✭✭✭Seziertisch


    PaulBrewer wrote: »
    But isn't my suggestion better than the current style ? You compare something to something else people are probably familiar with ?

    The U47 is an industry standard, but I have never even seen one, let alone used one.

    Take the SM57. "Mic X sounds better than an SM57 on vocals." "With the vocalists we tested, this mic sounded better than a 57." Again, it doesn't tell me much.

    I was recording last weekend. For the guitar sound we were after, a V72 beat the competition (Neve, Daking, API, Tubetech) out the gate. The others were all undesirable for different reasons in that particular situation. I have been in other situations where a V72 was similarly uncomplementary to other sounds, lacking the crispness and fast transients of the API or whatever. I think to be able to offer an accurate overview of a piece of gear, it needs to be used every day in real-life scenarios (which some reviews do involve), but ultimately opinions are still going to vary.


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    trackmixstudio wrote: »
    Ha!
    Good luck with this.
    "Hello, S0und on S0und"
    "Hi. John from @lesis here. We are releasing a new firewire mixer and are thinking of running a full page inside front cover ad for 4 issues. We are sending over one for review but will need a copy of the review prior to committing to the ad."
    "Yeah, sure John. Fire it over. We will give it a good review"

    All industry magazines are driven by advertising and, as such, cannot be trusted when it comes to reviews. That has been discussed here before and it's not going to change.

    Ah ole SOS !

    I agree with what you say (in fact it's what's driven this idea), but I think you'll also agree that Resolution's speaker reviews, for example, are primarily technical, and stuff gets hung out to dry there regularly.

    All industry mags are of course driven by ads, but that doesn't mean it is, or should be, lies.

    Any regular reader will also know who to trust; Mr. Shilling is my example.


    Why did you put the 0 instead of the O, Michael?


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    Welease wrote: »
    From a purely process point of view (ignoring that the score will always be subjective and based on application)... I'm not sure your scale (if I am reading it right) would make any sense..

    SM57 is assigned 50 (it would be a nightmare in itself to decide (and agree) what *is* the standard). It costs 100 quid..

    What would a slightly better sounding 1000 quid mic score? 60 cos it sounds better?... 20 cos it's 10 times the cost? 40 cos it's better but costs a load more? Would someone reading that score be able to decipher why it got that score without reading the full article and explanation, in which case why even bother scoring it like that..

    From my experience, people are used to a 1-10 (or similar) scale and understand what it represents; any new scale would require too much explanation to the casual user to be of any real value. Any complication of the scoring mechanism just distracts the reader from the detail they are actually interested in.

    Sorry :(

    Ah Head - if the scaling system is too difficult to understand there's no hope for any of us !


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    Seziertisch wrote: »
    The U47 is an industry standard, but I have never even seen one, let alone used one.

    Take the SM57. "Mic X sounds better than an SM57 on vocals." "With the vocalists we tested, this mic sounded better than a 57." Again, it doesn't tell me much.

    I was recording last weekend. For the guitar sound we were after, a V72 beat the competition (Neve, Daking, API, Tubetech) out the gate. The others were all undesirable for different reasons in that particular situation. I have been in other situations where a V72 was similarly uncomplementary to other sounds, lacking the crispness and fast transients of the API or whatever. I think to be able to offer an accurate overview of a piece of gear, it needs to be used every day in real-life scenarios (which some reviews do involve), but ultimately opinions are still going to vary.

    I think it would be primarily of use in the lower end of the market where differences can be more marked.

    I agree with your points regarding the upper end - as I've said here before virtually ALL the posh stuff sounds 'good' but different 'good'.

    There would be no value in comparing an SSL to a Neve.
    But there could be in comparing a Mackie to a Neve.

    Especially for those who don't have access to one.


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    Welease wrote: »

    What would a slightly better sounding 1000 quid mic score? 60 cos it sounds better?...

    Exactly !


  • Closed Accounts Posts: 229 ✭✭bedbugs


    PaulBrewer wrote: »
    In most elements that make up a recording studio there are standards - wouldn't comparisons to those benchmarks give you a better idea of what a review meant ?

    No. Because it'd have to delve into what application the mic was being used for. If your review idea is for a specific aspect of audio recording, fair enough, but I like the reviews in mags like SOS because they give me a general idea of different applications for each piece of kit that's reviewed.


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    bedbugs wrote: »
    ...If your review idea is for a specific aspect of audio recording, fair enough, but I like the reviews in mags like SOS because they give me a general idea of different applications for each piece of kit that's reviewed.

    That would be the case - this idea is just to add to a 'regular' review


  • Closed Accounts Posts: 229 ✭✭bedbugs


    PaulBrewer wrote: »
    That would be the case - this idea is just to add to a 'regular' review

    Good luck.


  • Registered Users Posts: 843 ✭✭✭trackmixstudio


    PaulBrewer wrote: »
    Why did you put the 0 instead of the O Michael ?

    50 1t d0esn't sh0w up 1n a g00gle 5e@rch

    Smaller companies sometimes get bad reviews because their advertising isn't as important, but when did you last see a scathing review of Alesis, Mackie or even Behringer in a trade mag?


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    trackmixstudio wrote: »
    50 1t d0esn't sh0w up 1n a g00gle 5e@rch

    Smaller companies sometimes get bad reviews because their advertising isn't as important, but when did you last see a scathing review of Alesis, Mackie or even Behringer in a trade mag?

    I don't read S 0 S so I don't know !

    The goal is to make reviews more of a worthwhile read.

    A lot of the above points I'd certainly like to put to magazine editors - I'd hate to be starting out now, trying to make up my mind, when everything is 'very good for the money'.

    Making the yardstick as close to a yard as possible seems to me like it would improve things.


  • Registered Users Posts: 292 ✭✭shayleon


    Paul, I think your idea is BRILLIANT! More than once I've seen "good value" or "good for the money", and I wonder, well - "will it make my work sound better?". GREAT IDEA! I will be very happy if this happens.

    oh.. and.. get a copy of SOS once - it will not do you harm to check it - I love it to bits.

    All the best.
    S.


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    shayleon wrote: »
    Paul, I think your idea is BRILLIANT! More than once I've seen "good value" or "good for the money", and I wonder, well - "will it make my work sound better?". GREAT IDEA! I will be very happy if this happens.

    oh.. and.. get a copy of SOS once - it will not do you harm to check it - I love it to bits.

    All the best.
    S.

    Jaysus Shay , go easy will ya !

    It's not the done thing to agree with me ;)


  • Registered Users Posts: 1,472 ✭✭✭Rockshamrover


    It's an interesting idea. But who sets the standard? I couldn't see manufacturer A agreeing that his product is 10 points below average.

    I would never buy anything based on a single review and never based on a 3 line mini review. I would actively seek out as many positive and negative in depth reviews as I could find on whatever type of product I am interested in. Knowledge is power as they say.

    Your idea would be a good starting point though.


  • Registered Users Posts: 1,759 ✭✭✭Neurojazz


    It would be nice if you could feed gear some sort of pure test wave, then overlay the output on the input so you could compare and see the loss/gain/change.
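
    A crude version of that idea can be sketched in code: feed a known test tone through the device and measure the residual between output and input. The "gear" here is a made-up stand-in (a little gain plus soft clipping), purely for illustration:

```python
import math

def test_tone(freq=1000.0, sr=48000, n=4800):
    """One tenth of a second of a pure 1 kHz sine wave."""
    return [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

def gear_under_test(samples):
    """Hypothetical device: a touch of gain plus soft clipping."""
    return [math.tanh(1.2 * s) for s in samples]

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

tone = test_tone()
out = gear_under_test(tone)
residual = [o - i for o, i in zip(out, tone)]  # the "overlay" difference
change_db = 20 * math.log10(rms(residual) / rms(tone))
print(f"difference vs input: {change_db:.1f} dB")
```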


  • Registered Users Posts: 2,182 ✭✭✭dav nagle


    The best site I use in the gaming industry for reviews is called Metacritic. Metacritic takes all the reviews from websites throughout the world and gives you an overall average, but you can also read every review. This kind of thing would be SWEET in the music biz. Call it AudioCritic.

    This is an example:

    http://www.metacritic.com/games/platforms/xbox360/guitarhero3legendsofrock
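
    For what it's worth, the aggregation Metacritic does is essentially an average over collected scores, shown alongside the individual reviews. A gear version could be sketched like this (the publication names and numbers are invented):

```python
# Hypothetical per-publication scores for one piece of gear.
reviews = {
    "Magazine A": 72,
    "Magazine B": 85,
    "Forum poll": 64,
}

def metascore(scores):
    """Headline number: the plain average of all collected scores."""
    return round(sum(scores.values()) / len(scores))

print(metascore(reviews))  # 74
for source, score in sorted(reviews.items()):
    print(f"{source}: {score}")  # every underlying review stays visible
```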


  • Closed Accounts Posts: 252 ✭✭kfoltman


    PaulBrewer wrote: »
    We've come up with the idea of marking units performance out of 100 against an Industry standard (or standards) which stands at 50.
    I think assigning a numeric value to something highly subjective and multidimensional is doomed to fail.

    A microphone may suit one person/style and not another. It may suit some use cases only (say, recording guitar amps). The numeric score may also depend on the reviewer's mood, review conditions (while trying to record a band with a singer with bad technique?), the amount of bribe he got from the manufacturer/reseller, etc. Even if there is no statistical noise (good luck with that!), the resulting score is not comparable and aids nothing except boosting sales of the highest-scored devices to people who will possibly end up being unhappy with the equipment they bought.

    What I think could be fair is to replace a review "score" completely with some more specific information in summary form. Like:

    - adds a presence peak that makes vocals and guitars fit in the mix without lots of EQ, but the bandwidth is limited
    - poor bandwidth
    - silent/noisy knobs
    - knobs with an unusable control law (I mean things like "nothing happens in the first 50% of the knob range, and most change occurs in the last 5%")
    - apparent self-noise/hum when used with high gain settings, even with no input (a decibel value would be even better)
    - buzzy transformer in the power supply
    - the line input goes through mic preamp and there's no way to bypass it (*ahem* ADA8000)
    - drivers are crashing the PC
    - good/poor off-axis/feedback rejection (when speaking about handheld vocal mikes)
    - extreme off-axis coloration
    - distorts with normal-range SPL (cheap condensers?)
    - "note off" triggered very early in key travel (speaking about some specific digital piano I won't name)
    - depressing sustain pedal just after a note is released causes an unnatural jump in volume envelope
    - you can hear a local preacher in the speakers (about some poorly shielded Behringer monitors that happened to receive AM stations)
    - lacks phantom power
    - works well with mics A and B, but noisy with mics C and D (preamp or soundcard)

    This is probably more suitable for low-end gear, where poor engineering is more of a problem. Knowing what compromises have been made, I could pick devices that have a set of faults that will affect me in the least harmful/annoying way.

    I can't speak about more expensive equipment, as I'm not using any. But I don't think one-metric-fits-all is valid in that area either (for different reasons).
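
    The fault-list idea above could even be carried in a structured form rather than free text, so summaries stay comparable across reviews. A minimal sketch, where the field names and the sample entry are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewSummary:
    """One unit's review boiled down to named strengths and faults."""
    unit: str
    strengths: list = field(default_factory=list)
    faults: list = field(default_factory=list)

    def report(self):
        lines = [self.unit]
        lines += [f"  + {s}" for s in self.strengths]
        lines += [f"  - {f}" for f in self.faults]
        return "\n".join(lines)

# A hypothetical budget preamp written up this way:
summary = ReviewSummary(
    unit="Budget preamp X",
    strengths=["presence peak helps vocals sit in the mix"],
    faults=["noisy gain pot", "line input can't bypass the mic stage"],
)
print(summary.report())
```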


  • Registered Users Posts: 3,834 ✭✭✭Welease


    PaulBrewer wrote: »
    Ah Head - if the scaling system is too difficult to understand there's no hope for any of us !

    It's not that it's too difficult, it's that it adds a layer of unneeded complexity without any real tangible value. There is a reason why the millions upon millions of magazines, irrespective of genre, stick to a basic format.

    In essence, your system just provides a shootout for mic X against an SM57.. So why not just do that instead? It's clear and easily understood.

    Your system attempts to boil down "quality" to a number. If you want to do this, then you need to explain the rationale to people so they can understand the system. This opens a host of issues, as "quality" has hundreds of potential attributes, and placing an arbitrary number on this is a level of complexity that will encourage readers to disagree with your assertions and, in their minds, devalue the review. As per my example, is a 1000 Euro mic that sounds slightly better than a 57 10 points better? Not in my mind; it's about a 30-40 because of the cost difference..

    Now none of these issues are specific to your system, but I could replicate the intent of your system by doing a) a shootout with a 57 or b) stating that a 57 scored 8/10, without the need to confuse the reader and explain the new sliding-scale system. Your system offers nothing over that.

    Sorry to sound so negative, I don't mean to pull your suggestion apart, but I spend 20% of my days reporting out facts/figures to various organisations, and the one thing I have learnt is that people expect to get information presented in a certain way, and will respond to that. Changing the format confuses and alienates people (sad but true), and imho your proposed change does not deliver anywhere near the benefits needed to counteract the negative reaction it would get. Sorry dude.


  • Registered Users Posts: 1,445 ✭✭✭ZV Yoda


    kfoltman wrote: »
    I think assigning a numeric value to something highly subjective and multidimensional is doomed to fail.

    A microphone may suit one person/style and not another. It may suit some use cases only (say, recording guitar amps). The numeric score may also depend on the reviewer's mood, review conditions (while trying to record a band with a singer with bad technique?), the amount of bribe he got from the manufacturer/reseller, etc. Even if there is no statistical noise (good luck with that!), the resulting score is not comparable and aids nothing except boosting sales of the highest-scored devices to people who will possibly end up being unhappy with the equipment they bought.

    What I think could be fair is to replace a review "score" completely with some more specific information in summary form. Like:

    - adds a presence peak that makes vocals and guitars fit in the mix without lots of EQ, but the bandwidth is limited
    - poor bandwidth
    - silent/noisy knobs
    - knobs with an unusable control law (I mean things like "nothing happens in the first 50% of the knob range, and most change occurs in the last 5%")
    - apparent self-noise/hum when used with high gain settings, even with no input (a decibel value would be even better)
    - buzzy transformer in the power supply
    - the line input goes through mic preamp and there's no way to bypass it (*ahem* ADA8000)
    - drivers are crashing the PC
    - good/poor off-axis/feedback rejection (when speaking about handheld vocal mikes)
    - extreme off-axis coloration
    - distorts with normal-range SPL (cheap condensers?)
    - "note off" triggered very early in key travel (speaking about some specific digital piano I won't name)
    - depressing sustain pedal just after a note is released causes an unnatural jump in volume envelope
    - you can hear a local preacher in the speakers (about some poorly shielded Behringer monitors that happened to receive AM stations)
    - lacks phantom power
    - works well with mics A and B, but noisy with mics C and D (preamp or soundcard)

    This is probably more suitable for low-end gear, where poor engineering is more of a problem. Knowing what compromises have been made, I could pick devices that have a set of faults that will affect me in the least harmful/annoying way.

    I can't speak about more expensive equipment, as I'm not using any. But I don't think one-metric-fits-all is valid in that area either (for different reasons).


    That's a great post... I'd find something like that very useful.

    I get SOS every month… but for the tips & tricks, “mix rescue” etc… I’ve never read a bad review of any product in SOS, so Trackmix’s comments are on the money there.

    Irrespective of a potential reviewer's agenda (or not), most equipment will be suited to some applications & not suited to others. The ol' €100 SM57 is (from what I can gather) a "must have" mic in any studio... so by default it's the industry standard. But the scoring vs. industry std is irrelevant - what will it really tell us?.. that a €4k mic is way "better" than an SM57... well yeah, but I bet the €4k mic would be feck all use on my snare drum.

    What I've been doing a lot of lately is checking real A/B comparisons on the likes of Gearslutz. There are loads of wavs there for mic/preamp comparisons. This helps me gauge whether or not I think there's sufficient improvement in product B vs. product A for me to justify spending my hard-earned yoyos. Of course, the test conditions will be different in my own room, but unless I have a retailer prepared to rent/lend me heaps of gear to test at home, these online comparisons are a good compromise.

    Of course, you have to trust that the samples are what they claim to be, but then you're into a whole other debate about t'internet.


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    Neurojazz wrote: »
    Would be nice if gear had some sort of pure wave input and the output in an overlay you could compare and see the loss/gain/change.

    Huh? :o


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    It's an interesting idea. But who sets the standard? I couldn't see manufacturer A agreeing that his product is 10 points below average.

    I would never buy anything based on a single review and never based on a 3 line mini review. I would actively seek out as many positive and negative in depth reviews as I could find on whatever type of product I am interested in. Knowledge is power as they say.

    Your idea would be a good starting point though.

    Knowledge is indeed power.

    We discussed standards too - how about something like the Music Producers Guild members ?

    As recording is such a mature area at this stage from a hardware point of view, agreement in most areas would be easily had, I think.

    In fact most people here will know what they are already.


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    Welease wrote: »
    Changing the format confuses and alienates people (sad but true), and imho your proposed change does not deliver anywhere near the benefits to counteract the negative reaction it would get. Sorry dude.

    Fair point !


  • Registered Users Posts: 7,790 ✭✭✭PaulBrewer


    kfoltman wrote: »
    I think assigning a numeric value to something highly subjective and multidimensional is doomed to fail.

    A microphone may suit one person/style and not another. It may suit some use cases only (say, recording guitar amps). The numeric score may also depend on the reviewer's mood, review conditions (while trying to record a band with a singer with bad technique?), the amount of bribe he got from the manufacturer/reseller, etc. Even if there is no statistical noise (good luck with that!), the resulting score is not comparable and aids nothing except boosting sales of the highest-scored devices to people who will possibly end up being unhappy with the equipment they bought.

    What I think could be fair is to replace a review "score" completely with some more specific information in summary form. Like:

    - adds a presence peak that makes vocals and guitars fit in the mix without lots of EQ, but the bandwidth is limited
    - poor bandwidth
    - silent/noisy knobs
    - knobs with an unusable control law (I mean things like "nothing happens in the first 50% of the knob range, and most change occurs in the last 5%")
    - apparent self-noise/hum when used with high gain settings, even with no input (a decibel value would be even better)
    - buzzy transformer in the power supply
    - the line input goes through mic preamp and there's no way to bypass it (*ahem* ADA8000)
    - drivers are crashing the PC
    - good/poor off-axis/feedback rejection (when speaking about handheld vocal mikes)
    - extreme off-axis coloration
    - distorts with normal-range SPL (cheap condensers?)
    - "note off" triggered very early in key travel (speaking about some specific digital piano I won't name)
    - depressing sustain pedal just after a note is released causes an unnatural jump in volume envelope
    - you can hear a local preacher in the speakers (about some poorly shielded Behringer monitors that happened to receive AM stations)
    - lacks phantom power
    - works well with mics A and B, but noisy with mics C and D (preamp or soundcard)

    This is probably more suitable for low-end gear, where poor engineering is more of a problem. Knowing what compromises have been made, I could pick devices that have a set of faults that will affect me in the least harmful/annoying way.

    I can't speak about more expensive equipment, as I'm not using any. But I don't think one-metric-fits-all is valid in that area either (for different reasons).

    All of the above is readily available in most reviews.

    I see what you're saying, but your argument has a fatal flaw !

    Figures don't tell you how stuff sounds, ears do ! If they did we'd all be sorted.

    I remember hearing the following from a rep of a well-known US amp manufacturer.

    Their amp's spec'd peak power was based on a sine wave running at minimum impedance for 24 hours.

    Another company's peak power was for 0.2 of a second at maximum impedance ... then the amp blew!!



    Another classic is frequency response of speakers.
    I know of one reputable speaker company who released a model with a phenomenally flat frequency response.

    This 'flatness' was gained by some jiggery-pokery that put a delay between the tweeter and woofer equivalent to something in the order of 8 metres !
    That's to say, if you sent the same signal to both drivers at the same time, you'd have to have the woofer 8 metres behind the tweeter to reproduce how this speaker delivered sound.

    Its impulse response was also destroyed, with something in the order of 100 ms at 100 Hz ....
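
    For context on that "8 metres" figure, the equivalent time offset is just distance over the speed of sound (taking ~343 m/s in air as an assumption):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (approximate)

def offset_to_delay_ms(metres):
    """Time delay equivalent to a physical driver offset, in ms."""
    return metres / SPEED_OF_SOUND * 1000.0

print(round(offset_to_delay_ms(8), 1))  # ~23.3 ms between tweeter and woofer
```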


    One cannot take subjectivity out of the equation, unfortunately, and of course lies, damned lies and statistics give us little information we can use to judge.

